| anchor | positive | source |
|---|---|---|
Thin lens equation calculation does not match experimental value, why? | Question: for an optics experiment I had to design a $2$-lens system to image a grating, the diagram looks like this:
The focal length of both lenses is $15$cm; at the end of the system there's a camera connected to a computer where the image is displayed. The system should have magnification $M=1$. I followed the thin lens equation and placed the first lens $30$cm away from the grating, an aperture $30$cm from the first lens (this is the intermediate image plane), then the second lens $30$cm from the aperture, and finally the camera $30$cm to the right of the second lens. The strange thing is that it didn't work: I had to place the camera $22.3$cm from the second lens in order to get a sharp image. I don't understand why this happens: is the aperture doing something that I'm not understanding, or is the thin lens approximation breaking down? Does anyone have a suggestion? I don't think it's the aperture, because its radius was large compared to the wavelength of the laser, and the aperture was only used to select different areas on the grating that corresponded to different spacings.
Answer: Take out the camera, and replace it with a piece of paper. You should see the sharp image as predicted by the thin lens equation.
If you use a camera, you are adding another lens, and the image forms behind the lens on the sensor. That is fine, but you have to do more calculating to figure out where to put the lens and sensor.
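For instance, a quick numeric check of what the thin lens equation alone predicts (a sketch using the numbers from the question; the camera's internal lens is not modeled here):

```python
# Thin lens equation: 1/f = 1/d_o + 1/d_i  (distances in cm).
def image_distance(f, d_o):
    """Image distance d_i for focal length f and object distance d_o."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

f = 15.0                      # focal length of both lenses (cm)
d1 = image_distance(f, 30.0)  # lens 1: grating at 2f -> image at 2f
d2 = image_distance(f, 30.0)  # lens 2: intermediate image at 2f
print(d1, d2)                 # 30.0 30.0
```

So a bare screen (the piece of paper) placed 30 cm behind the second lens should show a sharp image; the 22.3 cm result suggests the camera's own optics shift the effective image plane.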
Perhaps you want to take a picture of the paper instead of using the camera as part of the system? | {
"domain": "physics.stackexchange",
"id": 76818,
"tags": "experimental-physics, geometric-optics, home-experiment"
} |
How to shuffle only a fraction of a column in a Pandas dataframe? | Question: I would like to shuffle a fraction (for example 40%) of the values of a specific column in a Pandas dataframe.
How would you do it? Is there a simple idiomatic way to do that, maybe using np.random, or sklearn.utils.shuffle?
I have searched and only found answers related to shuffling the whole column, or shuffling complete rows in the df, but none related to shuffling only a fraction of a column.
I have actually managed to do it, apparently, but I get a warning, so I figure even if in this simple example it seems to work, that is probably not the way to do it.
Here's what I've done:
import pandas as pd
import numpy as np
df = pd.DataFrame({'i':range(20),
'L':[chr(97+i) for i in range(20)]
})
df['L2'] = df['L']
df.T
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
i 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
L a b c d e f g h i j k l m n o p q r s t
L2 a b c d e f g h i j k l m n o p q r s t
For now, L2 is simply a copy of column L. I keep L as the original, and I want to shuffle L2, so I can visually compare both. The i column is simply a dummy column. It's there to show that I want to keep all my columns intact, except for a fraction of L2 that I want to shuffle.
n_rows=len(df)
n_shuffle=int(n_rows*0.4)
n_rows, n_shuffle
(20, 8)
pick_rows=np.random.permutation(list(range(n_rows)))[0:n_shuffle]
pick_rows
array([ 3, 0, 11, 16, 14, 4, 8, 12])
shuffled_values=np.random.permutation(df['L2'][pick_rows])
shuffled_values
array(['l', 'e', 'd', 'q', 'o', 'i', 'm', 'a'], dtype=object)
df['L2'][pick_rows]=shuffled_values
I get this warning:
C:\Users\adumont\.conda\envs\fastai-cpu\lib\site-packages\ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""Entry point for launching an IPython kernel.
df.T
I get the following, which is what I expected (40% of the values of L2 are now shuffled):
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
i 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
L a b c d e f g h i j k l m n o p q r s t
L2 e b c l i f g h m j k d a n o p q r s t
You can see the notebook here (it's rendered better on nbviewer than here): https://nbviewer.jupyter.org/gist/adumont/bc2bac1b6cf7ba547e7ba6a19c01adb6
Thanks in advance.
Answer: I don't think there is any idiomatic way of doing this, since it's quite an unusual operation; normally the whole row or column would be shuffled. What you are doing looks like a good approach.
The SettingWithCopyWarning you get is a common warning that you could be operating on a copy of the original data and not a view (the original). For more information I would recommend checking the answers here: https://stackoverflow.com/questions/20625582/how-to-deal-with-settingwithcopywarning-in-pandas.
To avoid the warning and make the code more compact you could do it as follows:
import random
fraction = 0.4
n_rows = len(df)
n_shuffle=int(n_rows*fraction)
pick_rows = random.sample(range(n_rows), n_shuffle)  # sample from all rows, including row 0
df.loc[pick_rows, 'L2'] = np.random.permutation(df.loc[pick_rows, 'L2'])
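For reference, here is the same idea as a self-contained runnable sketch (with fixed seeds for reproducibility; `df.sample(frac=...)` is used to pick the rows, as an alternative to `random.sample`):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'i': range(20),
                   'L': [chr(97 + i) for i in range(20)]})
df['L2'] = df['L']

rng = np.random.default_rng(0)
pick = df.sample(frac=0.4, random_state=0).index   # 40% of the row labels
# Permute the selected values and write them back with .loc (no copy warning).
df.loc[pick, 'L2'] = rng.permutation(df.loc[pick, 'L2'].to_numpy())
```

The multiset of values in `L2` is unchanged; only the positions within the sampled rows move.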
Note that the use of loc here will make sure that no copy is created and everything is done on the original dataframe (i.e. this will not give the SettingWithCopyWarning warning). | {
"domain": "datascience.stackexchange",
"id": 5922,
"tags": "python, pandas, dataframe"
} |
Meaning of reversibility and quasistatic processes | Question: A process in a closed system is reversible if the entropy change is $dS = \frac{dQ}{T}$.
A process is quasistatic if it proceeds infinitely slowly.
Now, if a process is reversible, this means that we are always in equilibrium and this can only be the case, if we do this process very slowly. Thus, any reversible process is quasi-static.
Although I think I understand the basic definition, I don't see why we need the concept of a quasistatic process? Which equations or concepts in thermodynamics actually rely on this kind of process?
Wikipedia lists: isobaric, isochoric, isothermal processes. So is any such process necessarily quasistatic? What about adiabatic processes?
So far I did not get why it is important in thermodynamics to look at such processes and which equations hold only for such processes?
Answer: In classical thermodynamics the equations are valid only in thermodynamic equilibrium. That means that your system must have its variables well defined at all times. For instance, if you have a gas, the temperature must be well defined and the same across the gas. In order for this to hold during a process, the process must be slow enough that different parts of the gas keep the same temperature. That is why any reversible process is necessarily a quasistatic one (but some quasistatic processes are irreversible).
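As an illustrative example (an addition, not from the original answer): the familiar expression for the work done by an ideal gas in a reversible isothermal expansion,
$$W=\int_{V_1}^{V_2} p\,\mathrm{d}V = nRT\ln\frac{V_2}{V_1},$$
only makes sense if the pressure $p$ is well defined at every intermediate volume, i.e. if the process is quasistatic.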
In short, when you assume a reversible process you are assuming a quasistatic one, thus each time you use the thermodynamic equations for whatever reversible process you are implicitly assuming "quasistaticity". In particular the ones you asked, such as isobaric, isochoric, isothermal processes (if they are reversible). | {
"domain": "physics.stackexchange",
"id": 52746,
"tags": "thermodynamics, statistical-mechanics, reversibility"
} |
How does the body repair extracellular damage caused by glucose? | Question: So we know that glucose is an aldehyde that can cause cell damage to the lysine and arginine residues on proteins through the Maillard Reaction (among other damaging reactions that glucose participates in). Since everyone needs glucose to survive, though - I'm wondering - how does the body repair damage to its proteins+blood vessels that are caused by all this glucose? I'm especially interested in the blood vessels, as the damage to blood vessels is common in diabetics.
Answer: The Wikipedia article on "Advanced Glycation Endproducts" (or AGE) is quite nice and in this case especially the section on "Clearance", which also contains a few references.
In short, the AGE which are inside cells are taken up by the lysosome and then broken down until AGE-amino acids are left. These are secreted into the blood stream and excreted with the urine. Bigger extracellular AGE which cannot pass through the cell membrane are first taken up by special receptors and then processed; this involves macrophages and Kupffer cells. An involvement of the liver is also discussed.
The following articles are also interesting:
Advanced glycation end-products: a review
The biology of the receptor for advanced glycation end products and
its ligands
Pathogenic effects of advanced glycosylation: biochemical, biologic,
and clinical implications for diabetes and aging. | {
"domain": "biology.stackexchange",
"id": 1704,
"tags": "senescence"
} |
Comparing asymptotic notations | Question: I have a problem P that is said to be O(n^7) in the worst case.
I'm asked to agree or not if it is solvable in O(n^9) time. And also I'm asked to agree or not if P cannot be solved faster than Ω(n^7) in the worst case.
My answer for the first question is that if P is bounded by O(n^7) it is also solvable in O(n^9).
And for the second question: Ω(n^7) means that the running time cannot be less than n^7, but we already know that P is bounded by O(n^7) in the worst case and it is not proven that it can't be solved faster, so I think this statement is false.
So my questions are: how do I prove the answer for the first question and how do I express my second answer in terms of functions?
I've seen a lot of info about asymptotic notations, but I couldn't find any info that could enlighten me how to answer the questions I am asked.
Thanks.
Answer: If P is solvable in $O(n^7)$, that means that P is solvable in $C \cdot n^7$ time, for some constant $C>0$ and large enough $n$. Since $C \cdot n^7 < C \cdot n^9$ for large enough $n$, that implies that P is solvable in $O(n^9)$.
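Spelled out as a clarifying step: if $T(n) \le C n^7$ for all $n \ge n_0$, then since $n^7 \le n^9$ for all $n \ge 1$ we also have
$$T(n) \le C n^7 \le C n^9 \quad \text{for all } n \ge \max(n_0, 1),$$
which is exactly the definition of $T(n) = O(n^9)$.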
P is bounded by $O(n^7)$ in the worst case, but that alone does not imply that it cannot be solved faster; no matching lower bound has been proven (and, for instance, the best case might even run in constant time). Thus we cannot assert that P requires at least $C \cdot n^7$ time for large enough $n$, so the claim that it is $\Omega(n^7)$ is not established. | {
"domain": "cs.stackexchange",
"id": 5660,
"tags": "asymptotics, landau-notation"
} |
2D lattice with fixed boundary condition | Question: I am writing code for the following equation with fixed boundary condition on a 2 dimensional lattice of $L\times L$ sites:
$$\begin{align}
x_{i+1} =&\ (1-\varepsilon)r\, x_i (1-x_i) + \\
&\ 0.25\varepsilon\left(r\,x_{i-1}(1-x_{i-1}) + r\,x_{i+1}(1-x_{i+1}) + r\,x_{i-L}(1-x_{i-L}) + r\,x_{i+L}(1-x_{i+L})\right)
\end{align}$$
Fixed boundary condition means for end sites there are no neighboring sites beyond boundary.
Is there a simpler or more sophisticated way to write following code for the above equation with fixed boundary condition ?
import numpy as np

def CML2d(x):
eps = 0.3
r = 3.9
xn = np.zeros(N+1, float)
for i in range(1, N+1):
if i>L and i<=(L-1)*L:
if i%L==1:
xl, xr = 0., x[i+1]
xu, xd = x[i-L], x[i+L]
elif i%L==0:
xl, xr = x[i-1], 0.
xu, xd = x[i-L], x[i+L]
else:
xl, xr = x[i-1], x[i+1]
xu, xd = x[i-L], x[i+L]
elif i>1 and i<L:
xl, xr = x[i-1], x[i+1]
xu, xd = 0., x[i+L]
elif i>(L-1)*L+1 and i<L*L:
xl, xr = x[i-1], x[i+1]
xu, xd = x[i-L], 0.
elif i==1:
xl, xr = 0., x[i+1]
xu, xd = 0., x[i+L]
elif i==L:
xl, xr = x[i-1], 0.
xu, xd = 0., x[i+L]
elif i==(L-1)*L+1:
xl, xr = 0., x[i+1]
xu, xd = x[i-L], 0.
elif i==L*L:
xl, xr = x[i-1], 0.
xu, xd = x[i-L], 0.
xn[i] = (1-eps)*r*x[i]*(1-x[i]) + 0.25*eps*( r*xl*(1-xl) + r*xr*(1-xr) + r*xu*(1-xu) + r*xd*(1-xd) )
return xn
L = 10 #side of 2d lattice
N = L*L #number of sites in 2d lattice
x0 = np.random.uniform(0.1, 0.9, N+1) #initial values for x
xf = [] # store iterate x
x = x0
for nt in np.arange(0.005, 50.005, 0.005):
x = CML2d(x)
xf.append(x)
Answer: Testing script from a few days ago:
import numpy as np
original:
def CML2d(x):
eps = 0.3
r = 3.9
xn = np.zeros(N+1, float)
for i in range(1, N+1):
if i>L and i<=(L-1)*L:
if i%L==1:
xl, xr = 0., x[i+1]
xu, xd = x[i-L], x[i+L]
elif i%L==0:
xl, xr = x[i-1], 0.
xu, xd = x[i-L], x[i+L]
else:
xl, xr = x[i-1], x[i+1]
xu, xd = x[i-L], x[i+L]
elif i>1 and i<L:
xl, xr = x[i-1], x[i+1]
xu, xd = 0., x[i+L]
elif i>(L-1)*L+1 and i<L*L:
xl, xr = x[i-1], x[i+1]
xu, xd = x[i-L], 0.
elif i==1:
xl, xr = 0., x[i+1]
xu, xd = 0., x[i+L]
elif i==L:
xl, xr = x[i-1], 0.
xu, xd = 0., x[i+L]
elif i==(L-1)*L+1:
xl, xr = 0., x[i+1]
xu, xd = x[i-L], 0.
elif i==L*L:
xl, xr = x[i-1], 0.
xu, xd = x[i-L], 0.
value = (1-eps)*r*x[i]*(1-x[i]) + 0.25*eps*( r*xl*(1-xl) + r*xr*(1-xr) + r*xu*(1-xu) + r*xd*(1-xd) )
xn[i] = value
return xn
partial attempt to work with 2d x; the intention was to replace all the uses of L with 2d array indexing tests.
def CML2d_1(x):
eps = 0.3
r = 3.9
L,_ = x.shape
N = L*L
# xn = np.zeros((L,L), float)
xn = (1-eps)*r*x*(1-x)
x = x.flat
#for i in range(N):
for j in range(L):
for k in range(L):
i = k+L*j
i1 = i+1
if i1>L and i1<=(L-1)*L:
if i1%L==1:
xl, xr = 0., x[i+1]
xu, xd = x[i-L], x[i+L]
....
#value = (1-eps)*r*x[i]*(1-x[i])
value = 0.25*eps*( r*xl*(1-xl) +
r*xr*(1-xr) +
r*xu*(1-xu) +
r*xd*(1-xd) )
#xn.flat[i] = value
xn[j,k] += value
return xn
But I then realized that I don't need to iterate over all points. Instead I could just sum shifted subarrays (similar to taking a 1d diff, x[1:]-x[:-1]):
# x[i+1] = (1-eps)* r*x[i]*(1-x[i]) +
# 0.25*eps*( r*x[i-1]*(1-x[i-1]) +
# r*x[i+1]*(1-x[i+1]) +
# r*x[i-L]*(1-x[i-L]) +
# r*x[i+L]*(1-x[i+L]) )
def CML2d_2(x):
eps = 0.3
r = 3.9
x2 = r * x * (1-x)
xn = (1-eps) * x2
xn[1:,:] += 0.25 * eps * x2[:-1,:]
xn[:-1,:] += 0.25 * eps * x2[1:,:]
xn[:,1:] += 0.25 * eps * x2[:,:-1]
xn[:,:-1] += 0.25 * eps * x2[:,1:]
return xn
then making use of the convolve2d function in scipy.signal:
from scipy import signal
def CML2d_3(x):
eps = 0.3
r = 3.9
in2 = np.zeros((3,3))
in2[1,:] = 0.25 * eps
in2[:,1] = 0.25 * eps
in2[1,1] = (1-eps)
print(in2)
x2 = r * x * (1-x)
res = signal.convolve2d(x2, in2, mode='same', boundary='fill', fillvalue=0)
return res
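As a self-contained cross-check (a sketch; it assumes SciPy is available and uses a fixed seed), the sliced-sum and convolution formulations give the same result, since the symmetric 3x3 kernel makes convolution identical to correlation here:

```python
import numpy as np
from scipy import signal

eps, r = 0.3, 3.9

def cml_slices(x):
    x2 = r * x * (1 - x)
    xn = (1 - eps) * x2
    xn[1:, :]  += 0.25 * eps * x2[:-1, :]   # neighbor above
    xn[:-1, :] += 0.25 * eps * x2[1:, :]    # neighbor below
    xn[:, 1:]  += 0.25 * eps * x2[:, :-1]   # neighbor to the left
    xn[:, :-1] += 0.25 * eps * x2[:, 1:]    # neighbor to the right
    return xn

def cml_conv(x):
    kernel = 0.25 * eps * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
    kernel[1, 1] = 1 - eps
    x2 = r * x * (1 - x)
    return signal.convolve2d(x2, kernel, mode='same', boundary='fill', fillvalue=0)

x = np.random.default_rng(0).uniform(0.1, 0.9, (10, 10))
assert np.allclose(cml_slices(x), cml_conv(x))
```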
testing:
L = 10 #side of 2d lattice
#L = 4
N = L*L #number of sites in 2d lattice
x0 = np.random.uniform(0.1, 0.9, N+1) #initial values for x
x0[0]=np.nan # test if this is used
res=CML2d(x0.tolist())
print(res)
x2d = x0[1:].reshape(L,L)
res1=CML2d_1(x2d)
print(x0.shape)
print(res1.shape)
print(np.allclose(res1.flatten(), res[1:]))
print(np.allclose(res1, CML2d_2(x2d)))
print(np.allclose(res1, CML2d_3(x2d))) | {
"domain": "codereview.stackexchange",
"id": 23295,
"tags": "python, numpy"
} |
Data structure for a static set of sets | Question: I have a collection $U$ of sets, where each set is of size at most 95 (corresponding to each printable ASCII character). For example, $\{h,r,l,a\}$ is one set, and $U = \{\{h,r,l,a\}, \{l,e,d\}, \ldots\}$. The number of sets in $U$ is nearly a million. Also, a set in $U$ will mostly contain 8-20 elements.
I am looking for a data structure for storing a collection of sets that supports the following operations:
set matching, e.g. check if set $\{h,r,l,a\}$ is present in $U$
subset matching e.g. check if set $\{h,r,l\}$ is subset of any set in $U$
superset matching e.g. check if set $\{h,r,l,a,s\}$ is superset of any set in $U$
union matching e.g. check if set $\{h,r,l,a,e,d\}$ is union of sets in $U$
approximate set matching e.g. check if set $\{h,r,l,e\}$ is present in $U$, should return true
In particular, we can assume that once the data structure is built, no modifications are made but only queries of the above type (the structure is static).
I was thinking of trie data structure. But, it demands storing data in some order. So I have to store every set as a bit vector, but then the trie becomes binary decision tree. Am I in the right direction? Any pointers will be appreciated.
Answer: Generically, these are sometimes called subset/containment dictionaries. The fact that you had partial matching in your question (but deleted it) is actually not a coincidence, because subset/containment queries and partial matching are equivalent problems for sets.
You probably want to use an UBTree (unlimited branching tree) for this; it's basically a modified trie. See Hoffmann and Koehler (1998) for more details.
You could also have a look at the more recent set-trie data structure proposed by Savnik (2013) for the same purpose. (If you miss the color in the graphs in that preprint paper and don't have access to the official Springer publication [in which the colors aren't missing], there's precursor to that paper which has almost the same graphs/info and no missing colors.)
Both papers (Hoffmann & Koehler, and respectively Savnik) have experimental results, so you can have some idea what they can handle in practice. Both seem to handle data sets with a cardinality of U around 1M.
If you somehow have TCAM hardware (or the money for it), there's a way to leverage that for subset/superset queries. You can actually do both subset/superset queries in parallel assuming you have enough TCAM words (2x |U|).
Since TCAM words can be configured to be 144-bit wide, and you have only 95 bits/letters to test, you wouldn't even need to bother with Bloom/hashing; you'd have an exact test using TCAM. This is trivial enough that I'll even say here how: every {0, 1} bit-vector corresponding to every set in your U is simply turned into a {0, *} vector for subset queries and into a {1, *} vector for superset queries.
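As a toy illustration of that bit-vector encoding in software (a sketch only; at the scale of a million sets you would want the UBTree/set-trie structures above, since these linear scans cost $O(|U|)$ per query):

```python
# Each set over the 95 printable ASCII characters becomes a 95-bit mask.
ALPHABET = {chr(32 + i): i for i in range(95)}   # ' ' .. '~'

def mask(s):
    m = 0
    for c in s:
        m |= 1 << ALPHABET[c]
    return m

U = [set('hrla'), set('led')]          # toy collection
masks = [mask(s) for s in U]
exact = set(masks)                     # O(1) exact-set lookup

def contains(q):
    return mask(q) in exact

def is_subset_of_some(q):              # is q a subset of some set in U?
    qm = mask(q)
    return any(qm & m == qm for m in masks)

def is_superset_of_some(q):            # is q a superset of some set in U?
    qm = mask(q)
    return any(qm & m == m for m in masks)
```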
A more general problem tackled in ETI (no free copy, sorry) is finding a set with given similarity measure. For example, the similarity measure can be $J(Q,S)=\frac{|Q\cap S|}{{|Q\cup S|}}$ and the query for a given $Q$ can be to find all $S$ (in the database) with $J(Q,S)\geq 0.3$. Both the constant and the similarity measure are user-definable as a predicate in ETI. (The $J$ given as example here is called the Jaccard index.) | {
"domain": "cs.stackexchange",
"id": 4244,
"tags": "data-structures, sets"
} |
Why my training and testing set are about 99% but my single prediction does wrong prediction? | Question: I have performed fruit classification using a CNN, but I am stuck at a point where everything looks right: the confusion matrix and accuracy score are correct and there seems to be no overfitting, yet the model always classifies the wrong fruit. Why would this happen? A link to the source code is provided below. Thank you!
Github source code link
Answer: It looks like the new data has a different distribution from the training data. It looks like the training data is just a single fruit, with white background, and the new image you've passed is a picture of bananas with blue background. The model has probably learned something like: if blue image, then blueberries, and for this reason it classifies the blue bananas picture as blueberries.
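To make that hypothesised shortcut concrete, here is an entirely hypothetical toy "classifier" that keys only on the dominant colour channel, the kind of spurious background feature a CNN can latch onto (illustrative only, not the model from the linked notebook):

```python
import numpy as np

def dominant_channel_label(img):
    """Toy 'classifier': predict from the per-channel mean of an HxWx3 image."""
    means = img.mean(axis=(0, 1))
    return ['red-ish', 'green-ish', 'blue-ish'][int(np.argmax(means))]

# Fruit on a white background (like the training data) ...
on_white = np.ones((8, 8, 3))
on_white[2:6, 2:6] = [1.0, 0.9, 0.1]   # yellow-ish fruit patch
# ... versus the same fruit on a blue background (like the new picture).
on_blue = np.zeros((8, 8, 3))
on_blue[:] = [0.1, 0.2, 0.9]
on_blue[2:6, 2:6] = [1.0, 0.9, 0.1]

print(dominant_channel_label(on_white))
print(dominant_channel_label(on_blue))   # background wins: 'blue-ish'
```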
Whenever the distribution of new data is different from the data you've trained on, don't expect the model to work very well, as ML models just interpolate. | {
"domain": "datascience.stackexchange",
"id": 7776,
"tags": "deep-learning, cnn, image-classification, multiclass-classification, prediction"
} |
Noise covariance from sensors are 0....? | Question:
Hello, I'm very new in Kalman filter and I'm trying to use EKF packages (i.e., robot_pose_ekf or robot_localization) to fuse odometry and IMU in the HSR from Toyota Research Institute. When I first tried to hook the robot_pose_ekf package with my mobile robot, I encountered this following error:
[ERROR] [1519539033.564366334]: Covariance specified for measurement on topic wheelodom is zero
[ERROR] [1519539036.363531683]: filter time older than odom message buffer
....
So I checked the published data from the odometry (nav_msgs/Odometry) and IMU (sensor_msgs/Imu), and the covariance matrices in those are indeed all 0.
I am confused since I don't know whether it is common to have 0 covariances in those published data, and whether we need to characterize and estimate those sensor noise covariances ourselves by collecting a bunch of data through trial and error (I don't have any sensor specification though)....?
Originally posted by kidpaul on ROS Answers with karma: 38 on 2021-10-04
Post score: 0
Answer:
There is no way for generic components to be aware of the specific characteristics of your hardware / sensors, so yes, if you have a specific platform, and the ROS installation was not provided by the OEM / mfg, it's likely covariances (and similar properties) will be blank/zero/empty (unless whoever wrote the sensor/hw drivers has done work to identify them, and has configured those components to also publish them).
Seeing as the HSR is a platform, I'd be surprised though if some other user / group has not figured those out already. You might want to ask around on a HSR-focused forum or support channel. Those values may not be exactly right for your specific robot, but they would provide a good starting point.
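For anyone patching a driver: the covariance fields in nav_msgs/Odometry and sensor_msgs/Imu are flat 36-element row-major 6x6 arrays. A minimal sketch of filling just the diagonal (the variance values below are purely illustrative placeholders, not tuned for the HSR):

```python
def diagonal_covariance(variances):
    """Build a row-major 6x6 covariance (x, y, z, roll, pitch, yaw) as a flat list."""
    assert len(variances) == 6
    cov = [0.0] * 36
    for i, v in enumerate(variances):
        cov[i * 6 + i] = v              # diagonal entry (i, i)
    return cov

# Very large variances on unmeasured axes tell the EKF to effectively ignore them.
pose_cov = diagonal_covariance([0.01, 0.01, 1e6, 1e6, 1e6, 0.05])
# e.g. odom_msg.pose.covariance = pose_cov   (hypothetical message object)
```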
Originally posted by gvdhoorn with karma: 86574 on 2021-10-05
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by kidpaul on 2021-10-05:
Yeah, that's what I was expecting.... I should probably ask this to the company since they officially provide ROS package for this robot. | {
"domain": "robotics.stackexchange",
"id": 36980,
"tags": "navigation, mobile-base, robot-localization, base-odometry, 2d-pose-estimate"
} |
Does the four-force, in some ways, invalidate the results of the three-force in SRT? | Question: I have recently submitted an article to PNAS, however, the editor, oddly enough, claimed that he has issues with the traditional Lorentz transformation for force, he stated:
The argument of this paper is based entirely on eqn. (3), for which the only references given are refs. (1) and (3). I am unable to access ref. (1) (and I suspect so will be most readers). As to ref. (3), eqn. (3) indeed corresponds to eqn. (12) of that reference, which in turn is derived from eqn. (7). However, no justification of the latter equation is given, and I believe that it is wrong, at least as regards the y-and z-components. It is true that there seems to be some ambiguity in the literature as to the correct definition of force in special relativity, but in the present context it seems to me that a satisfactory definition is (partial) rate of change of 4-momentum with respect to proper time. If that is correct then it transforms as the space component of a 4-vector, and the outcome is that the transverse force constants are unchanged in the moving frame, invalidating the author's eqn. (3) and with it his whole argument.
It is possible that I am suffering from a blind spot here and that eqn. (3) is correct, but if so the author needs to provide explicit justification for it. Until that is done the manuscript has no claim to be considered for PNAS.
Eq. (3) in the article relates the spring constant in the rest frame to that measured by an inertial observer moving perpendicular to the spring alignment, which somehow shows that the transverse force components are measured smaller. Ref. [3] of the article belongs to O. Gron (Covariant formulation of Hooke’s law)
in which my Eq. (3) is proved. The editor, however, tried to invalidate Gron's article, and when I resubmitted my article with more clarification, he said:
... I said that any resubmission which claimed without explicit demonstration that in SR the transverse component of force is subject to Lorentz contraction would be automatically rejected ... As regards ref. [5]- the fact that a particular textbook (which because of Covid-19 conditions I am unable to consult) may use a different definition of force from what I would regard as the proper one in SR (rate of change of relativistic momentum with proper time) is not a matter of sufficient interest as to justify a PNAS publication. No doubt there may be real issues to be sorted out regarding the concept of force in SR, but the present manuscript makes no contribution to the debate.
Ref. [5] was indeed Resnick's book of Introduction to Special Relativity. Do we have different force transformations in SRT or did the editor just try to get rid of the article by this absurd excuse?
I specifically want to know if the use of four-force would, in some ways, invalidate the fact that the transverse component of force is subject to Lorentz contraction as we know from the three-force transformation. It seems that the editor thinks so according to the boldfaced sentence.
Answer: This is a meaningless argument. There are different ways of setting up relativistic dynamics. You can write your equations either in terms of the three-force $d\mathbf{p}/dt$ or the four-force $dp^\mu / d\tau$. One approach doesn’t invalidate the other, because both will give the exact same results if applied consistently.
It is also not true that one definition is manifestly superior to the other. For example, the Lorentz force in terms of electric and magnetic field vectors is naturally a three-force. But the Lorentz force in terms of the electromagnetic field tensor is naturally a four-force. Both can be useful depending on the situation. (On the other hand, the choices $d\mathbf{p}/d\tau$ and $dp^\mu/dt$ are probably unambiguously bad; I've never seen them be good for anything.)
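For concreteness, the standard textbook relation between the two conventions (with $\gamma = 1/\sqrt{1-v^2/c^2}$) is
$$K^\mu \equiv \frac{dp^\mu}{d\tau} = \gamma\left(\frac{1}{c}\frac{dE}{dt},\ \frac{d\mathbf{p}}{dt}\right) = \gamma\left(\frac{\mathbf{f}\cdot\mathbf{v}}{c},\ \mathbf{f}\right),$$
so the spatial part of the four-force is just $\gamma$ times the three-force $\mathbf{f}=d\mathbf{p}/dt$, and either bookkeeping, applied consistently, gives the same physics.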
Of course, if you somehow think you’re written a paper that has proved that special relativity is not Lorentz invariant, that probably is a result of misusing the three-force. The editor’s request for you to switch to four-forces is then a completely reasonable one, as it will quickly expose the error. | {
"domain": "physics.stackexchange",
"id": 76264,
"tags": "special-relativity, forces, reference-frames, momentum, time"
} |
Thoughts on the ice cube from orbit problem | Question: Let's say we have a really exquisite cocktail party somewhere in New Mexico, and we just ran out of ice cubes. To the rescue comes this new service provided by Orbital Glacier Inc.
They provide ice cubes around the world within only 5 minutes (!). How do they do it?
How big must an ice cube be when you drop it from orbit, so that it will have the size of a typical ice cube when it hits the glass of scotch that you're holding in your hand?
Assuming the ice cube in orbit needs to be of an enormous size, would Orbital Glacier Inc. inevitably impact the climate conditions on planet Earth when celebrating several such cocktail parties?
Disclaimer: We are in no way related to our competitor Moon Ice Now, Inc.
Answer: I don't think the question can be answered because you don't say how the orbital energy is to be dissipated. However it's quite interesting to compare the orbital energy with the energy required to boil the ice.
Let's suppose our ice supplier is aboard the International Space Station, so the cubes start at an altitude of $h = 300\ \mathrm{km}$, moving at an orbital velocity of about $v_\mathrm o = 7.7\ \mathrm{km/s}$. At the latitude of New Mexico (34° N) the Earth's surface is moving at about $v_\mathrm e = 370\ \mathrm{m/s}$. So the change in kinetic energy is:
$$\begin{align}
\Delta T &= \tfrac{1}{2}m v_\mathrm o^2 - \tfrac{1}{2}m v_\mathrm e^2 \\
&= 29.6\ \mathrm{MJ/kg}
\end{align}$$
The change in potential energy is:
$$\begin{align}
\Delta U &= \frac{GM}{r_\mathrm e} - \frac{GM}{r_\mathrm e + h} \\
&= 3.1\ \mathrm{MJ/kg}
\end{align}$$
So the total energy change in bringing $1\ \mathrm{kg}$ of ice from the ISS to New Mexico is:
$$ \Delta E = \Delta T + \Delta U = 32.7\ \mathrm{MJ/kg} $$
Could we use this energy to boil off some of $1\ \mathrm{kg}$ of ice and leave the rest available for cooling drinks? Well suppose we start with the ice at absolute zero (it's cold in space) and see how much energy it takes to boil it. The constants we need are:
$$\begin{align}
\text{Specific heat of ice}\ (-10\ \mathrm{^\circ C}) &= 2\,000\ \mathrm{J/(kg\ K)} \\
\text{Latent heat of fusion} &= 334\,000\ \mathrm{J/kg} \\
\text{Specific heat of water} &= 4\,200\ \mathrm{J/(kg\ K)} \\
\text{Latent heat of vap.} &= 2\,257\,000\ \mathrm{J/kg}
\end{align}$$
Assuming these constants don't change with temperature$^1$ the energy required to turn $1\ \mathrm{kg}$ of ice at absolute zero to a $\mathrm{kg}$ of steam at $100\ \mathrm{^\circ C}$ is:
$$\begin{align}
\Delta E &= 2\,000\ \mathrm{J/(kg\ K)}\times273\ \mathrm K + 334\,000\ \mathrm{J/kg} + 4\,200\ \mathrm{J/(kg\ K)}\times100\ \mathrm K + 2\,257\,000\ \mathrm{J/kg} \\
&= 3.56\ \mathrm{MJ/kg}
\end{align}$$
So the energy required to bring $1\ \mathrm{kg}$ of ice to rest in New Mexico is about ten times the amount of energy needed to boil away the ice even starting from absolute zero. You're going to have to find some other way of dissipating the energy.
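The arithmetic above can be checked with a short script (a sketch; the constants are rounded, so the potential-energy term comes out near, rather than exactly at, the quoted $3.1\ \mathrm{MJ/kg}$):

```python
G, M = 6.674e-11, 5.972e24       # gravitational constant, Earth mass (SI)
r_e, h = 6.371e6, 3.0e5          # Earth radius, ISS altitude (m)
v_orb, v_surf = 7.7e3, 370.0     # orbital and surface speeds (m/s)

dT = 0.5 * (v_orb**2 - v_surf**2)         # kinetic energy change, J/kg
dU = G * M / r_e - G * M / (r_e + h)      # potential energy change, J/kg

# Ice at absolute zero -> steam at 100 C, J/kg:
boil = 2000 * 273 + 334_000 + 4200 * 100 + 2_257_000

print(f"orbital: {(dT + dU)/1e6:.1f} MJ/kg, boil: {boil/1e6:.2f} MJ/kg, "
      f"ratio: {(dT + dU)/boil:.1f}")
```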
$^1$ the specific heat of ice decreases with falling temperature so the energy calculated to boil the ice is a slight overestimate. | {
"domain": "physics.stackexchange",
"id": 15140,
"tags": "space, ice"
} |
Forms of energy in a closed circuit with a coil | Question: I am a bit confused. When I move a magnet through a coil that's in a closed circuit, what does my kinetic energy convert to?
I assume I will create a magnetic field, and that magnetic field will create a current through the circuit. Is it true that the kinetic energy will only convert into magnetic potential energy from my magnet to the coil and electrical energy from the movement of electrons in the circuit, or are there more forms of energy that I need to take into account?
Also what would the equations look like for the different forms of energy?
Thanks a lot!
Answer: Well, you are mixing different forms of energy here. If you are moving a magnet through a coil you are generally doing work, not using the kinetic energy of the magnet.
The case where you would convert the kinetic energy of the magnet would be the following:
On an ice surface (no friction) you have a coil, and you slide a magnet through the coil. In that case, the magnet will decelerate due to eddy currents. Due to induction some voltage will be induced, and if you connect the ends of the coil some current will flow. So kinetic energy will be transformed into kinetic energy of the charge carriers, or "energy of the current"; I really don't know a better term. Bear in mind that electrons generally have quite high kinetic energy due to thermal motion, but in this instance we don't have to take that into account, since this energy doesn't transform.
In the case where you are moving the magnet with your hand you are experiencing some force, and therefore you do some work. In that instance the work is transformed into energy of the current and then, as the current flows, into thermal energy of the wires.
The magnetic energy of the coil comes into play only in another case, where we have some current flowing through the coil and we suddenly stop that current. Then the magnetic field would drive some current for a short period of time.
But in the case of the moving magnet we are only converting work into the motion of electrons. If we used a current to move the magnet, we would convert the energy of the moving electrons into work.
"domain": "physics.stackexchange",
"id": 78759,
"tags": "electromagnetism, energy, magnetic-fields, electromagnetic-induction, inductance"
} |
Can an oxidising/reducing agent oxidise/reduce itself? | Question: I am new to the study of this divine science. So Just a query: Can an oxidising/reducing agent oxidise/reduce itself? If it can can anyone give an example and explain it?
Answer: A simple example can be the disproportionation of chlorine when bubbling a current of chlorine gas $\ce{Cl2}$ into a solution of $\ce{NaOH}$ (containing $\ce{OH^-}$ ions):
$$\ce{Cl2 + 2 OH^- -> ClO^- + Cl^- + H2O}$$
Here the chlorine atoms have an oxidation number equal to zero in the reactants, and get the final oxidation number $-1$ in $\ce{Cl^-}$ and $+1$ in $\ce{ClO^-}$. Here one of the $2$ original $\ce{Cl}$ atoms oxidises the other one from $0$ to $+1$. In this operation it is itself reduced from $0$ to $-1$.
The final solution is the famous bleach, used for cleaning dirty surfaces. | {
"domain": "chemistry.stackexchange",
"id": 17215,
"tags": "physical-chemistry, redox"
} |
Deriving the angular frequency in terms of period | Question: In many Pre-Calculus and Trigonometry classes, when first learning about sine and cosine waves, you learn the following equations:
$$y=A\sin B(x+C)+D \tag{1}$$
$$y=A\cos B(x+C)+D \tag{2}$$
From Physics for Scientists and Engineers 3rd Edition by Douglas C. Giancoli, the equations are
$$x=A\cos(\omega t+\phi) \tag{3}$$
$$y=A\sin(\omega t+\phi) \tag{4}$$
which are derived by finding the general solution to the equation of motion $\frac{d^2x}{dt^2}+\frac{k}{m}x=0$. Nonetheless they are still of the form of equations (1) and (2), except that they are functions of the parameter $t$ and appear somewhat simplified.
My question is how to derive the equation $B=\frac{2\pi}{T}$, where $T$ is the period, using equations (1) and (2). As you can already tell, $B$ plays the same role as $\omega$, except that in physics $\omega$ has more physical significance as the angular frequency. Distributing $B$ in (1) and (2) into the arguments yields $(Bx+BC)$. This leads me to wonder whether $\phi$ absorbs $\omega C$, which is why $C$ does not appear when $\omega$ is distributed into the arguments of sine and cosine in equations (3) and (4).
I know it is a little trivial but it seems difficult to derive period without some level of background knowledge on where to start with definitions.
Answer: In$$x=A\cos(\omega t+\phi)$$
$\phi$ is not a left shift of the graph by $\phi$ units of $t$; the left shift is actually $\dfrac{\phi}{\omega}$ units of $t$ (i.e. seconds here). You can see it by factoring out $\omega$:
$$x=A\cos(\omega(t+\dfrac{\phi}{\omega}))$$
$T$ is the time taken for $1$ revolution, so there will be $\dfrac{1}{T}$ revolutions in $1$ second.
Since there are $2\pi$ radians in $1$ revolution, there will be $2\pi \times \dfrac{1}{T}$ radians in $1$ second. This constant quantity with units radians per second is called angular frequency $\omega$.
$f(t)=\cos(t)$ graph has a period of $2\pi$ seconds.
$f(\omega t)$ compresses the graph of $f(t)$ by a factor of $\omega$ when $\omega\gt 1$.
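These two claims, the time shift of $\phi/\omega$ and the period $2\pi/\omega$, can be spot-checked numerically; a quick sketch:

```python
import math

# Numerical spot check: cos(w*t + phi) has period T = 2*pi/w, and the phase
# phi acts as a left shift in time of phi/w.
w, phi = 3.0, 0.7
T = 2 * math.pi / w

for t in (0.0, 0.4, 1.1):
    f = math.cos(w * t + phi)
    assert math.isclose(f, math.cos(w * (t + T) + phi), abs_tol=1e-12)  # period
    assert math.isclose(f, math.cos(w * (t + phi / w)), abs_tol=1e-12)  # shift
```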
Thus the period of $f(\omega t) = \cos(\omega t)$ also gets reduced by the same factor and becomes $\dfrac{2\pi}{\omega}$ seconds. | {
"domain": "physics.stackexchange",
"id": 63227,
"tags": "harmonic-oscillator"
} |
Heisenberg's principle in Quantum Cryptography | Question: In quantum cryptography why do we need the Heisenberg uncertainty principle?
Edit:
I only know the statement of the Heisenberg uncertainty principle.
As I understand it, if Eve tries to measure the polarization angle of a photon with the wrong basis, it will alert the others. So where exactly does the Heisenberg uncertainty principle come in?
I know this is a childish question to physics people, but I really need to know the answer. Please, anybody, answer it.
Answer: Heisenberg's uncertainty principle in its most general form is a statement about the measurement uncertainty (or variance) of so-called non-commuting variables.
In quantum mechanics, everything you can observe (such as the polarization of a photon) is represented by a so-called operator. By performing a measurement, you act with an operator on a quantum state.
Two operators $A$ and $B$ are said to commute if $AB = BA$. In simple terms, for commuting operators it doesn't matter whether you first measure $A$ and then $B$ or first measure $B$ and then $A$.
If, however, the operators do not commute, then the order matters: If you measure first $A$ and then $B$, you get a different outcome than if you measure first $B$ and then $A$.
One example is position and momentum: If you measure the position first, then the momentum will have a very high uncertainty. If you measure momentum first, then position will have a very high uncertainty.
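The non-commutativity can be made concrete with 2x2 projector matrices for polarization measurements in two bases; an illustrative plain-Python sketch:

```python
import math

# Polarization measurements in two bases as 2x2 projector matrices.
# Basis 1 = horizontal/vertical; basis 2 = diagonal (rotated 45 degrees).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P1 = [[1.0, 0.0], [0.0, 0.0]]    # projector onto |H>  (basis 1)
P2 = [[0.5, 0.5], [0.5, 0.5]]    # projector onto |+> = (|H>+|V>)/sqrt(2)

# The two measurement operators do not commute, so order matters:
assert matmul(P1, P2) != matmul(P2, P1)

# A photon prepared as |+> is certain under a basis-2 measurement, but 50/50
# under basis 1: zero uncertainty in one basis, maximal in the other.
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]
def prob(P, s):                  # outcome probability <s| P |s>
    return sum(s[i] * P[i][j] * s[j] for i in range(2) for j in range(2))

assert abs(prob(P2, plus) - 1.0) < 1e-12
assert abs(prob(P1, plus) - 0.5) < 1e-12
```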
The relevance for quantum cryptography now is that measuring the polarization angle in a basis $1$ and measuring it in a basis $2$ corresponds to two non-commuting operators. This means that Eve measuring with basis $2$ before Bob measures with basis $1$ will lead to different outcomes than if Eve would do nothing. Heisenberg basically says that if Eve measures in basis $2$, then the uncertainty of the observable "Polarization in basis 2" is zero and therefore the uncertainty of the observable "Polarization in basis 1" must be at a maximum, i.e., Eve has completely messed up the state of the photon for basis $1$. | {
"domain": "physics.stackexchange",
"id": 2678,
"tags": "quantum-mechanics, heisenberg-uncertainty-principle"
} |
What is the relation between plasmid concentration and mRNA levels? | Question: Suppose a simple synthetic construct, consisting of a constitutive promoter and a single gene:
One of the simplest ways to model GFP transcription is to use an ODE:
$\frac{d [GFP_{mRNA}]}{dt} = a - b{\cdot}[GFP_{mRNA}]$
where $a$ is GFP transcription rate and $b$ is GFP mRNA degradation rate (both constants). Normally, we assume $a>>b$.
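As a side note, this ODE has a closed-form solution that makes the saturation level $a/b$ explicit; a small sketch with hypothetical values for $a$ and $b$:

```python
import math

# d[mRNA]/dt = a - b*[mRNA] with [mRNA](0) = 0 has the closed-form solution
# [mRNA](t) = (a/b) * (1 - exp(-b*t)), which saturates at a/b.
def mrna(t, a, b):
    return (a / b) * (1.0 - math.exp(-b * t))

a, b = 10.0, 0.1        # hypothetical transcription / degradation rates

# After many multiples of the time constant 1/b the level is at saturation:
assert math.isclose(mrna(100.0 / b, a, b), a / b)

# A crude forward-Euler integration agrees with the closed form:
m, dt = 0.0, 1e-3
for _ in range(int(5.0 / dt)):
    m += (a - b * m) * dt
assert abs(m - mrna(5.0, a, b)) < 1e-2
```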
Suppose we wish to account for plasmid concentration with the value of transcription rate $a$ - e.g. with higher plasmid concentration, the ratio $a/b$ should increase as the mRNA saturation levels are expected to be reached faster. In other words, suppose we transfect 2 individual constructs depicted above, 10 ng of one and 30 ng of the other - how should this difference in concentration impact the rate constants values, assuming the constructs are otherwise identical?
Assuming this, approximately how does gene transcription rate $a$ change as a function of the amount of transfected plasmid containing the above construct? Ideally, I'd be interested in knowing this for HEK 293 cells, but any other decent estimation is acceptable. One simple option is to simply assume linear relation, but I think that's too simplistic: e.g. if we say $a=1$ at 10 ng plasmid concentration, then $a=3$ at 30 ng plasmid concentration. Note that I'm not necessarily limiting myself to ODE models, just looking for some kind of relation.
I have so far not been able to find anything useful in papers I've examined, so any help would be appreciated.
Answer: The amount of transfected plasmid does not correlate at all with the protein expression level. After transfection, usually each cell is going to get and keep only one copy of the plasmid. Once the plasmid is in the cell, it will be replicated and the cell will contain X copies of it, depending on the plasmid copy number. In general, plasmids with low copy numbers correlate with lower protein expression compared to plasmids with high copy number. But there are also exceptions:
Expression of genes under control of EF1α promoter appears to correlate with plasmid copy number. In contrast, expression driven by the more traditional CMV immediate early promoter appears less plasmid copy number dose-responsive (ref). | {
"domain": "biology.stackexchange",
"id": 1888,
"tags": "molecular-biology, gene-expression, mathematical-models, transcription, synthetic-biology"
} |
Sending an email every half hour if a certain condition is met | Question: This will be called every minute from the background thread. If my unauthorizedCount or badRequestCount is not equal to zero, then I send out an email.
private void callMetrics() {
// .. some other code
// send an email for unauthorized count
if (Integer.parseInt(unauthorizedCount) != 0) {
// send an email here
}
// send an email for bad request count
if (Integer.parseInt(badRequestCount) != 0) {
// send an email here
}
}
For example, if my unauthorizedCount is non-zero, then it will be non-zero at least for an hour. That means it will keep on sending out an email every minute for an hour, so our email will get flooded up with this. That's what I don't want to have, same thing with badRequestCount.
Now what I want: as soon as unauthorizedCount becomes non-zero, send out the first email. If it stays non-zero continuously, send another email only after half an hour has passed; emails in between are suppressed. If it drops back to zero, reset the throttle so that the next time it becomes non-zero an email is sent out instantly.
I basically want to send out my first email whenever unauthorizedCount is non-zero but if it is non zero again in the next minute, then I don't want to send out another email and will send out another email if unauthorizedCount is non-zero after half an hour.
private static long lastUnauthorizedSent = -1;
private static long lastBadRequestSent = -1;
private void callMetrics() {
// .. some other code
long now = new Date().getTime();
// send an email for unauthorized count
if (Integer.parseInt(unauthorizedCount) != 0 && satisfiesUnauthorizedSinceLast(now)) {
// send an email here
lastUnauthorizedSent = now;
} else {
lastUnauthorizedSent = -1;
}
// send an email for bad request count
if (Integer.parseInt(badRequestCount) != 0 && satisfiesBadRequestSinceLast(now)) {
// send an email here
lastBadRequestSent = now;
} else {
lastBadRequestSent = -1;
}
}
private boolean satisfiesUnauthorizedSinceLast(long now) {
return lastUnauthorizedSent == -1 || now - lastUnauthorizedSent > 30*60*1000;
}
private boolean satisfiesBadRequestSinceLast(long now) {
return lastBadRequestSent == -1 || now - lastBadRequestSent > 30*60*1000;
}
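For what it's worth, the throttle-and-reset behaviour described above can be sketched as a tiny state machine; an illustrative Python sketch (the names `Throttle` and `should_send` are mine, not from the original code, and the same idea ports directly to Java):

```python
import time

HALF_HOUR = 30 * 60   # throttle window in seconds, named instead of a magic number

class Throttle:
    """Send immediately on the first non-zero count, then at most once per
    window while the count stays non-zero; a zero count resets the throttle."""

    def __init__(self, window=HALF_HOUR):
        self.window = window
        self.last_sent = None

    def should_send(self, count, now=None):
        now = time.time() if now is None else now
        if count == 0:
            self.last_sent = None     # reset: next non-zero alerts instantly
            return False
        if self.last_sent is None or now - self.last_sent >= self.window:
            self.last_sent = now
            return True
        return False

t = Throttle()
assert t.should_send(5, now=0)          # first non-zero count: send
assert not t.should_send(5, now=60)     # still non-zero a minute later: suppress
assert t.should_send(5, now=1800)       # half an hour later: send again
assert not t.should_send(0, now=1860)   # dropped to zero: reset, nothing sent
assert t.should_send(7, now=1920)       # non-zero again: send immediately
```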
Answer: We know what 30*60*1000 is, but you should put it into a named variable instead of just letting it lie around out there naked like that.
public long waitTime = 30*60*1000;
and you can put it in whatever scope that you want to and use it as often as you like, just make sure that if you change it you know what it will affect. | {
"domain": "codereview.stackexchange",
"id": 9641,
"tags": "java, performance, datetime, email, timer"
} |
Why does potential energy relate to exothermic reactions in this way? | Question: I am not understanding how potential energy (which is the energy that results from position or configuration) relate to exothermic reactions. We say that exothermic reactions give off heat and so lose energy but I don't see how I can use this definition of potential energy to say that potential energy is lost from the system of an exothermic reaction. It might be my definition for potential energy that is wrong, I don't know.
Answer: Energy is a useful concept for rationalising the capability of doing work or heating; other than that, it is purely a theoretical concept.
As Feynman brilliantly pointed out,
It is important to realise that in physics today, we have no knowledge of what energy is.
We do not have a picture that energy comes in little blobs of a definite amount.
It is not that way.
However, there are formulas for calculating some numerical quantity, and when we add it all together it gives “28”—always the same number.
It is an abstract thing in that it does not tell us the mechanism or the reasons for the various formulas.
But let's get back to something answerable.
Energy is (a) conserved and (b) convertible into different forms.
If you got any system in a state $q_1$ with potential energy $U(q_1)$ and it changed into state $q_2$ with potential energy $U(q_2)$, then if $\Delta U = U(q_2) - U(q_1) < 0 $, energy has been lost to the surroundings.
This loss could be lots of things, most of the time it is just heat (kinetic motion).
This loss, in the big picture, is just a transfer or conversion.
Let's get to a very simple example, hydrogen combustion:
$$\ce{2 H2 + O2 -> 2 H2O}$$
Since energy can't be produced, we say this reaction releases large amounts of heat.
But, where does it come from?
All that the equation above tells us is the atomic rearrangement that has been taken place: in the left hand side we have two $\ce{H-H}$ bonds and a single $\ce{O=O}$ bond, in the right hand side we got four $\ce{O-H}$ bonds.
So quantifying the bond energies in each side gives us:
Bond  | Average bond    | Left hand side | Right hand side
      | energy (kJ/mol) | (kJ/mol)       | (kJ/mol)
------|-----------------|----------------|----------------
H-H   | -432            | 2 x -432       | 0
O=O   | -495            | -495           | 0
O-H   | -467            | 0              | 4 x -467
------|-----------------|----------------|----------------
Total |                 | -1359          | -1868
This means we get $-1868 - (-1359) = -509$ kJ for two moles of $\ce{H2}$, i.e., $-254.5$ kJ/mol of heat is released during the combustion of hydrogen.
Wikipedia says the correct value is $-242$ kJ/mol, so our calculation has an error of around 5%.
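The bookkeeping in the table can be redone in a few lines; a quick sketch:

```python
# Redoing the bond-energy bookkeeping from the table (values in kJ/mol).
bond = {"H-H": -432, "O=O": -495, "O-H": -467}

left = 2 * bond["H-H"] + bond["O=O"]    # bonds broken: 2 H2 + O2
right = 4 * bond["O-H"]                 # bonds formed: 2 H2O
assert (left, right) == (-1359, -1868)

delta = right - left                    # heat released per 2 mol of H2
per_mole = delta / 2
assert delta == -509 and per_mole == -254.5

# Relative error against the quoted literature value of -242 kJ/mol:
error = abs((per_mole - (-242.0)) / 242.0)
assert 0.04 < error < 0.06              # around 5%, as stated above
```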
So heat energy has been released and it came from bond breaking and forming.
The energy stored in a bond could be called latent or potential, if you will.
It all boils down to conservation and convertibility. | {
"domain": "chemistry.stackexchange",
"id": 7356,
"tags": "physical-chemistry, experimental-chemistry, energy"
} |
Transfer pump suitable for air, water and their mixture | Question: I need a pump that can remove water from a certain volume (~300 ml). Water mixed with a washing agent (soap) is pumped into this volume by another pump at ~5-20 L/minute. This volume is not completely sealed: air can get inside (sometimes easily, sometimes in very limited quantities). I need to guarantee that water will not spill out of this volume. As I understand it I need a water transfer pump, but one that can run dry for a while and work as a vacuum pump. I looked at what the market has to offer, but didn't find anything like that.
So the question is - does such pump exist at all? If so, how it is called, so I can search for it online?
Answer: The terms you are looking for are "self priming pump" and/or "run dry pump".
Centrifugal self priming pumps require some water to be in them to start, but can pull a vacuum including air.
Diaphragm pumps pump air or water very well. They can run dry and are self priming. Some are specifically designed to just pump air.
Vane pumps or any other positive displacement pump will be self priming.
Depending on the reliability needs of your application a low cost centrifugal sump pump may work just fine. They are built for some abuse and many have integrated floats that control when they turn on and off. | {
"domain": "engineering.stackexchange",
"id": 1320,
"tags": "pumps, vacuum-pumps"
} |
Birds on a wire (again) - how is it that birds feel no current? They are just making a parallel circuit, no? | Question: I have been thinking about this and I know that other people have answered this on here, but there's one part that still baffles me, and it has to do with parallel circuits.
If I connect a battery to a resistor, and connect another resistor in parallel to it, and measure the current through both, there will be a current! That is, if I connect a $6\ \mathrm V$ battery to a $100\ \mathrm\Omega$ resistor ($R_1$) and connect a $200\ \mathrm\Omega$ resistor in parallel to it ($R_2$), I would still measure $6\ \mathrm V$ across both (voltage is preserved in parallel circuits, correct?) and my current (per Ohm's Law) is
$$I=\frac VR\Rightarrow I=\frac{6\ \mathrm V}{100\ \mathrm\Omega}=0.06\ \mathrm A $$
$$I=\frac VR\Rightarrow I=\frac{6\ \mathrm V}{200\ \mathrm\Omega}=0.03\ \mathrm A $$
So that means one resistor has $60\ \mathrm{mA}$ and the other has $30\ \mathrm{mA}$. Well and good, but why doesn't this apply to a bird?
That is, a bird dropping its feet on a wire isn't completing a circuit between two different potentials but it is making a parallel circuit. This is what confuses a lot of people I think including me. If the usual laws for parallel circuits apply, why doesn't it apply to birds on a wire?
One explanation I hear is that birds aren't connecting two places of differing potential – but if that was the case then why does my parallel circuit work? One resistor should register no current (or very little) – and I know if I make the resistor large enough (the one in parallel, say, $R_2$) the current draw will be smaller. Is that what is happening? The resistance of the bird is large enough that the current drawn is small?
Let's say a bird has $1\ \mathrm{M\Omega}$ of resistance. A $600\ \mathrm V$ wire would still put $0.6\ \mathrm{mA}$ through the animal.
But that doesn't satisfy me because we are dealing with a $\mathrm{kV}$ scale wire a lot of the time. You'd need for the bird, which is effectively a bag of water and such, to have a lot of resistance for that to work, but maybe it does.
I am always reading that in order for the circuit to be complete the bird (or person) must be grounded, but that doesn't make sense to me because then no parallel circuit would work from batteries! Or even the house current, which is basically a lot of circuits in parallel.
I feel like I am missing something here, and if anyone can tell me what it is that would be greatly appreciated.
Answer: A bird's legs are pretty close together. An electrical transmission wire has very little resistance.
This means that the voltage as a function of distance barely changes. So the voltage difference between a bird's two feet is essentially 0, because the potential at each foot is practically the same. The potential difference between the wire and the ground might be large; but the bird isn't offering any pathway between the wire and anything at much lower voltage. It only offers a pathway between its two legs, and so the voltage difference remains small.
To add on to that, the bird has a lot more relative resistance than the wire, since the wire is supposed to minimize voltage drop across it. This means that most of the current will also flow through the wire, and relatively little current would flow through the bird.
The bird isn't really at risk unless it can connect the high voltage line to something of a significantly different potential, which the same line a few inches further down isn't.
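To put rough numbers on this (all input figures below are illustrative assumptions: a heavy-gauge copper line at about 0.0618 ohm per 1000 ft, carrying 300 A, with the bird's feet one inch apart):

```python
# Illustrative estimate of the voltage between a bird's feet on a
# transmission line. All input figures are assumptions, not measured data.
ohm_per_1000ft = 0.0618      # assumed resistance of a heavy-gauge copper line
current_a = 300.0            # assumed line current
inches_per_1000ft = 1000 * 12

r_between_feet = ohm_per_1000ft / inches_per_1000ft   # feet 1 inch apart
v_feet = current_a * r_between_feet                   # V = I * R
print(v_feet * 1000, "mV between the feet")           # on the order of 1.5 mV

# With an assumed bird resistance of ~10 kOhm between the feet, the current
# through the bird is far below anything harmful:
i_bird_ua = v_feet / 10_000 * 1e6                     # in microamps
assert i_bird_ua < 1.0
```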
For an example of the numbers, Solomon Slow estimated in the comments:
Suppose the wire is equivalent to 000-gauge copper, 0.0618 ohms per 1000 ft. Suppose it's carrying close to its rated capacity: 300A. Suppose a bird, maybe the size of a dove, with legs that grip the wire about 1 inch apart. According to my calculation, the potential difference between points 1 inch apart along the length of that wire will be about 1.6 millivolts.
Emphasis mine. It should be pretty easy to check that estimation for yourself, but it really illustrates the problem. | {
"domain": "physics.stackexchange",
"id": 66115,
"tags": "electric-circuits, electric-current, voltage"
} |
Need to understand signals received vs sent | Question: What is meant by the following statement (from the paper titled "Implementing Lightweight Threads"):
As in singlethreaded processes, the number of signals received by the
process is less than or equal to the number sent.
Why is the number received less than or equal to the number sent. I don't have enough background in singlethreaded processes I guess.
Answer: Suppose I sent you 10 letters. Then the number of letters that you have received is at most 10 (some might still be en route).
The reason that this is stressed is that some of the signals could potentially be received by several threads. The operating system guarantees that each signal is received by at most one thread, chosen arbitrarily among threads which have not masked the signal. | {
"domain": "cs.stackexchange",
"id": 10831,
"tags": "operating-systems, threads"
} |
Very basic question about filters in signal reconstruction | Question: I am studying digital signal processing and I would like to ask a question about sampled signal -> original signal reconstruction filters.
Most textbooks use a box filter or a tent filter to show how a discrete-time sampled signal can be reconstructed into (in this case an approximation of) the original signal. However my question is:
Given a discrete signal resulting from sampling a signal with a sampling interval "Ts", the spectrum will consist of infinite replicas spaced at 1/Ts and scaled by 1/Ts. Now I assume 1/Ts is more than double the original signal's maximum frequency, so there is no aliasing.
Our intent with the box (rect) or tent filter is to try to "cut out" the central spectrum replica by convolution in the time domain. What most textbooks do not explain is what size we should choose the rect or tent to be. Most examples I've seen use a box filter which has a base of Ts, or a tent filter with a base of 2*Ts. Why is the base of the tent filter double that of the box filter? Is this an arbitrary choice? Looking at the Fourier transform of a tent function (squared sinc) compared to that of a rect function (sinc), we shouldn't need to double the base to get the cutoff frequency near the (ideal) 1/2*Ts. With a tent of base 2*Ts we actually hit 0 amplitude at 1/4*Ts. Aren't we cutting out more frequency content than we should?
Answer: Neither a filter with a rectangular impulse response, nor one with a triangular impulse response is a very good interpolation / reconstruction filter. If you use a filter with a rectangular impulse response, the resulting output is just a piecewise constant signal. Since you want the constant pieces to have exactly the duration between two samples, the width of the rectangle must equal $T_s$. You could call that a zeroth order interpolation.
A first-order or linear interpolation between the sample values is achieved with a triangular filter kernel. If you look at it in the time domain you can see that the triangle with its apex aligned with a sampling instant $k$ must extend from the previous sample $k-1$ to the next sample $k+1$ in order to provide linear interpolation when all those triangles are summed up. So for linear interpolation, the base of the triangle must have a width of $2T_s$.
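These claims are easy to check numerically; a small plain-Python sketch (the fine-grid size Ts = 8 is arbitrary):

```python
# Convolving two unit-area rectangles of width Ts gives a triangle of base
# 2*Ts: the linear-interpolation kernel.
def conv(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

Ts = 8
rect = [1.0 / Ts] * Ts
tri = conv(rect, rect)                  # base 2*Ts, apex at index Ts - 1
assert len(tri) == 2 * Ts - 1
assert max(tri) == tri[Ts - 1]

# Linear interpolation = a triangle (apex scaled to 1) at each sample, summed:
samples = [0.0, 1.0, 0.5, 2.0]
upsampled = [0.0] * (len(samples) * Ts)
for p, s in enumerate(samples):
    upsampled[p * Ts] = s
interp = conv(upsampled, [v * Ts for v in tri])

# Sample values are reproduced exactly at the sampling instants ...
for p, s in enumerate(samples):
    assert abs(interp[p * Ts + Ts - 1] - s) < 1e-12
# ... and halfway between samples 0 and 1 we get their average:
assert abs(interp[Ts - 1 + Ts // 2] - 0.5) < 1e-12
```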
Of course, that triangular function is the convolution of two rectangles of width $T_s$, so its Fourier transform is the square of the Fourier transform of the rectangular pulse. | {
"domain": "dsp.stackexchange",
"id": 7087,
"tags": "discrete-signals, reconstruction"
} |
How fast would one have to travel in an airplane in order to experience a continuous sunset? | Question: For the sake of simplifying this problem and removing any guess work on how high the plane is flying, we'll say that our airplane is at 30,000 ft. as this is the average altitude for commercial airplanes.
I have a feeling this should also depend on the latitude where the plane is flying. Again, in an effort to simplify the problem, let's say that we're on the equator flying due west into the sunset.
We needn't set a parameter on how close the sun needs to be above the horizon to qualify as a sunset as the main idea here is to keep up with the sun. That is, the sun should appear stationary to a passenger on the plane.
How fast do we need to move in order to keep up with the sun so that it appears stationary.
My first guess at answering this question is not very technical, but I want a more technical answer. Essentially, the question is to have "time" be unchanging. That is, the time of the day stays the same. We achieve this by moving across time zones. The earth is divided into 24 time zones each. if we're traveling at an altitude of 30,000 ft. Then the width of each time zone at our altitude should be approximately (not all time zones are equally spaced) should be $(1/24)*(2*\pi (r_{earth} +30,000ft))\approx 1039$ miles. Additionally, we need to cover each time zone in one hours time. Thus, $v_{aircraft} = 1039$ $mi/hr$. A quick google search turned up that the fastest aircraft (the Lockheed SR-71 Blackbird) has achieved a speed of 2193 mi/hr. So, assuming my approximation is somewhere near the real answer, this seems doable.
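The back-of-the-envelope number above can be reproduced in a couple of lines (assuming an equatorial radius of about 3963 mi, i.e. roughly 6378 km):

```python
import math

# Ring circumference at 30,000 ft above the equator, covered once per 24 h.
r_earth_km = 6378.1                     # assumed equatorial radius
alt_km = 30000 * 0.3048 / 1000          # 30,000 ft, about 9.144 km

v_kmh = 2 * math.pi * (r_earth_km + alt_km) / 24
v_mph = v_kmh / 1.609344
print(round(v_kmh, 1), "km/h =", round(v_mph, 1), "mph")
assert 1035 < v_mph < 1042              # matches the ~1039 mi/h estimate above
```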
So, is my approximation in the right ballpark? Can someone take a more rigorous approach to this problem and find a better approximation than mine?
Edit: There appear to be aircraft that can achieve much faster speeds than the one I noted above.
Answer: good computation ...
let's assume that the plane flies over the equator; the earth's circumference is D = 40075 km (wiki earth), its mean radius R = 6371 km, and the altitude a = 30,000 ft = 9.144 km. The day length is d = 24 h (and not 23.9344 h in this case)
wiki : Earth orbits the Sun at an average distance of about 150 million kilometers every 365.2564 mean solar days, or one sidereal year. This gives an apparent movement of the Sun eastward with respect to the stars at a rate of about 1°/day, which is one apparent Sun or Moon diameter every 12 hours. Due to this motion, on average it takes 24 hours—a solar day—for Earth to complete a full rotation about its axis so that the Sun returns to the meridian.
The circumference at altitude is C = $D * (R+a) / R$, and the speed needed to stay stationary relative to the Sun is $C / d = 40075 * (( 6371+9.144) / 6371 ) / 24 = 1672.2 km/h = 1039.1 miles/h$ | {
"domain": "physics.stackexchange",
"id": 99796,
"tags": "homework-and-exercises, kinematics, geometry"
} |
A variant of Critical SAT in DP | Question: A language $L$ is in the class $DP$ iff there are two languages $L1 \in NP$ and $L2 \in coNP$ such that $L = L1 \cap L2$
A canonical $DP$-complete problem is SAT-UNSAT : given two 3-CNF expressions, $F$ and $G$, is it true that $F$ is satisfiable and $G$ is not?
The Critical SAT problem is also known to be $DP$-complete : Given a 3-CNF expression $F$, is it true that $F$ is unsatisfiable but deleting any clause makes it satisfiable?
I am considering the following variant of the Critical SAT problem : Given a 3-CNF expression $F$, is it true that $F$ is satisfiable but adding any 3-clause (not already in $F$, but using the same variables as $F$) makes it unsatisfiable?
But I don't succeed in finding a reduction from SAT-UNSAT, or even in proving that it is $NP$- or $coNP$-hard.
My question: is this variant DP-complete ?
Thank you for your answers.
Answer: [I made it into a proper answer b/c somebody gave it -1]
If any clause is allowed to be added, then the language is empty -- clearly to any satisfiable formula $F$ you can add a 3-clause $c$ made up of variables that do not appear in $F$: $F \cup \{ c \}$ will be satisfiable.
If the added clauses must use variables of $F$, then the language is in P.
Justification is as follows:
Take any $F \in L$, i.e. $F \in SAT$ and for any 3-clause $c$ on variables of $F$, $F \cup \{c\} \in UNSAT$. Say $c = l_1 \lor l_2 \lor l_3 \notin F$, where $l_i$ is a literal. Since $F \cup \{ c \}$ is UNSAT, all models of $F$ must have $l_i=0$ (for $i=1,2,3$) - because if some model had e.g. $l_1=1$, then it would satisfy $c$ and hence $F \cup \{c\}$. Now, assume that there exists another clause $c'$ that is exactly like $c$ but with one or more literals flipped, and such that $c' \notin F$, say $c' = \neg l_1 \lor l_2 \lor l_3$. Then by the same argument all models of $F$ must have $l_1 = 1$. Thus, a necessary condition for $F \in L$ is that for each clause $c \in F$ there are exactly 6 other clauses in $F$ that use the three variables of $c$ -- let's call these 7-clause subsets of $F$ blocks. Note that each block implies a unique satisfying assignment to its variables. When this necessary condition is satisfied, $F$ is either uniquely satisfiable or unsatisfiable. The two cases can be distinguished by testing whether the assignments implied by the blocks of $F$ clash, which can clearly be done in linear time. | {
"domain": "cstheory.stackexchange",
"id": 1103,
"tags": "cc.complexity-theory, np-hardness, complexity-classes, sat"
} |
Grassmann numbers & supermanifolds | Question: I'm asking this question because I'm currently trying to learn about Super Symmetry but I'm having trouble understanding the concept of super-space and super-manifold.
I read that in super-spaces you have 2 Grassmann numbers for each coordinate.
Could anyone explain to me what these Grassmann numbers are?
And then, what's the difference between a regular manifold and a supermanifold?
Answer:
Supernumbers and their weirdness are e.g. discussed in this Phys.SE post.
The next logical step is to learn the notion of $(n|m)$ super vector spaces, which have $n$ Grassmann-even and $m$ Grassmann-odd dimensions.
Moreover, we will assume that the reader is familiar with the definition of an ordinary $n$-dimensional $C^{\infty}$-manifold, which is covered by an atlas of local coordinate charts $U\subseteq \mathbb{R}^n$.
Finally, let's discuss $(n|m)$ supermanifolds; technically, a supermanifold is a sheaf of $(n|m)$ super vector spaces. Heuristically and oversimplified, a supermanifold is a generalization of the notion of a manifold (3) where the local coordinate charts are now subsets of $(n|m)$ super vector spaces.
References:
planetmath.org/supernumber.
Bryce DeWitt, Supermanifolds, Cambridge Univ. Press, 1992.
Pierre Deligne and John W. Morgan, Notes on Supersymmetry (following Joseph Bernstein). In Quantum Fields and Strings: A Course for Mathematicians, Vol. 1, American Mathematical Society (1999) 41–97.
V.S. Varadarajan, Supersymmetry for Mathematicians: An Introduction, Courant Lecture Notes 11, 2004. | {
"domain": "physics.stackexchange",
"id": 36718,
"tags": "differential-geometry, supersymmetry, grassmann-numbers, category-theory, superspace-formalism"
} |
Euclidean Cluster Extraction | Question:
I'm trying to extract clusters from a point cloud. I'm doing it with the help of this tutorial:
PCL Tutorial
Everything works fine, and the point clouds even show up if I save them in a .pcd file, but when I try to view them in rviz nothing is shown. If I subscribe to the appropriate topic, no points are received. It says: status: error, Transform: For frame []: Frame [] does not exist.
The funny thing is if I do: rostopic echo clusters I see that points are generated on that topic.
Do I have to add any post processing to the extracted clusters to see them in rviz?
I'm using ubuntu, ros electric and perception_pcl_electric_unstable.
Here is some of my code:
sensor_msgs::PointCloud2::Ptr clusters (new sensor_msgs::PointCloud2);
pcl::PointCloud<pcl::PointXYZ>::Ptr input_cloud (new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::PointXYZ>::Ptr clustered_cloud (new pcl::PointCloud<pcl::PointXYZ>);
/* Creating the KdTree from input point cloud*/
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>);
tree->setInputCloud (input_cloud);
/* Here we are creating a vector of PointIndices, which contains the actual index
* information in a vector<int>. The indices of each detected cluster are saved here.
* Cluster_indices is a vector containing one instance of PointIndices for each detected
* cluster. Cluster_indices[0] contain all indices of the first cluster in input point cloud.
*/
std::vector<pcl::PointIndices> cluster_indices;
pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
ec.setClusterTolerance (0.06);
ec.setMinClusterSize (30);
ec.setMaxClusterSize (600);
ec.setSearchMethod (tree);
ec.setInputCloud (input_cloud);
/* Extract the clusters out of pc and save indices in cluster_indices.*/
ec.extract (cluster_indices);
/* To separate each cluster out of the vector<PointIndices> we have to
* iterate through cluster_indices, create a new PointCloud for each
* entry and write all points of the current cluster in the PointCloud.
*/
std::vector<pcl::PointIndices>::const_iterator it;
std::vector<int>::const_iterator pit;
for(it = cluster_indices.begin(); it != cluster_indices.end(); ++it) {
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_cluster (new pcl::PointCloud<pcl::PointXYZ>);
for(pit = it->indices.begin(); pit != it->indices.end(); pit++) {
//push_back: add a point to the end of the existing vector
cloud_cluster->points.push_back(input_cloud->points[*pit]);
}
//Merge current clusters to whole point cloud
*clustered_cloud += *cloud_cluster;
}
pcl::toROSMsg (*clustered_cloud , *clusters);
publish_clusters.publish (*clusters);
Originally posted by Grega Pusnik on ROS Answers with karma: 460 on 2012-04-30
Post score: 2
Original comments
Comment by blackmamba591 on 2016-01-26:
Could you get your code running?
Comment by hichriayoub on 2022-04-08:
Maybe it's too late after 10 years but is it possible to have the full code ? i am really in need of this code for my project
Answer:
I figured out that before you publish a PointCloud2 msg to ROS, you have to edit the point cloud's header:
clusters.header.frame_id = "/camera_depth_frame";
clusters.header.stamp = ros::Time::now();
Originally posted by Grega Pusnik with karma: 460 on 2012-04-30
This answer was ACCEPTED on the original site
Post score: 7
Original comments
Comment by pnambiar on 2014-08-18:
Have you got the euclidean cluster extraction working in the ros ecosystem? I am trying to do the conversions and I'm having some difficulty.
Comment by blackmamba591 on 2016-01-27:
How to edit the point cloud headers | {
"domain": "robotics.stackexchange",
"id": 9192,
"tags": "pcl"
} |
Total functional computable real numbers | Question: Is there any computable real number which can not be computed by a higher order primitive recursive algorithm?
By a computable real number I mean one that can be computed by a Turing machine to any desired precision in finite time. By a higher-order primitive recursive algorithm I mean common primitive recursive function theory extended with first-class functions (as in the Ackermann function).
Turing machines are more powerful than higher order primitive recursive functions so there exists the possibility that some computable reals numbers are not expressible by them.
Answer: The set of higher-order primitive recursive reals is essentially the class of functions $\mathbb{N}\rightarrow\mathbb{N}$ which can be represented by a term $\mathrm{Nat}\rightarrow\mathrm{Nat}$ in Gödel's system T.
Since every such function is total, and every well-typed term in the system can be enumerated effectively, there is a relatively easy proof by diagonalization that there is some computable real which cannot be represented. | {
"domain": "cs.stackexchange",
"id": 9040,
"tags": "computability, primitive-recursion, real-numbers"
} |
Very simple PostgreSQL ORM in C++ using libpq | Question: I'm working on a set of helper classes for working with libpq in C++ and I want to be able to pass in objects to the helper classes and have them internally represented as strings when sent to the db, and convert from string to objects when fetching data. I'm not particularly after a crazy ORM allowing custom objects, only the known PostgreSQL data types will suffice.
So, I'm thinking about a class for each data type, with each class managing the conversion to and from strings via toString() and fromString() methods, because at the moment data sent to the insert methods for inserting into the database is in the form of strings.
The problem I face now - which gets rather messy - is how to convert from string to these objects, when for example, a postgresql box is returned as (x1,y1),(x2,y2) but could be represented by the user as x1,y1,x1,y1, (x1,y1,x2,y2), (x1,y1),(x2,y2) or ((x1,y1),(x2,y2)) but I'm assuming that simple regex will suffice there?
class IDataType
{
public:
virtual std::string toString() = 0;
std::string toEscapedString()
{
std::string s = toString();
spg::convert::ReplaceAll(s, "(", "\"(");
spg::convert::ReplaceAll(s, ")", ")\"");
// as replace all add's a backslash to the start and end of the string if compound type
if (isCompound)
s = s.substr(1, s.length() - 2);
// remove double \"(\"( and )\")\" caused by compound types
spg::convert::ReplaceAll(s, "\"(\"(", "\"(");
spg::convert::ReplaceAll(s, "\")\")", "\")");
return s;
}
protected:
IDataType(bool isCompound) : isCompound(isCompound) {}
bool isCompound = false;
};
// helper class for numeric types
// limit to int,float,double etc
template <typename T>
class Number : public IDataType
{
public:
Number(T val) : IDataType(false), val(val)
{
// some dirty type checking for numbers
if (!std::is_arithmetic<T>::value)
throw "Arithmetic isn't possible, therefore type not numerical";
else if (std::is_class<T>::value)
throw "type is a class";
}
std::string toString()
{
return spg::convert::numberToString(val);
}
T val;
};
class Point : public IDataType
{
public:
Point(double x, double y) : IDataType(false), x(x), y(y) {}
std::string toString()
{
return x.toString() + "," + y.toString();
}
// see http://www.postgresql.org/docs/9.3/static/datatype-geometric.html
// takes format (x,y) or x,y
void fromString(std::string str)
{
// ...
}
Number<double> x, y;
};
class Box : public IDataType
{
public:
Box(double x1, double y1, double x2, double y2) : IDataType(false), corner1(x1, y1), corner2(x2, y2) {}
Box(Point corner1, Point corner2) : IDataType(false), corner1(corner1), corner2(corner2) {}
std::string toString()
{
return corner1.toString() + "," + corner2.toString();
}
Point corner1, corner2;
};
I'm implementing like so:
spg::SimplePg conn;
bool connected = conn.connect("dbname=db_test user=username password=123");
if (!connected) throw "not connected";
spg::datatypes::Point pt1(100, 300);
spg::datatypes::Point pt2(200, 300);
std::vector<std::string> params;
params.push_back(pt1.toString());
params.push_back(pt2.toString());
conn.execParams("insert into test(a_point1, a_point2) values ($1::Point, $2::Point)", params);
For completeness, the conversion methods:
std::stringstream ss;
// for replace string values - http://stackoverflow.com/questions/2896600/how-to-replace-all-occurrences-of-a-character-in-string
void ReplaceAll(std::string& str, const std::string& from, const std::string& to) {
size_t start_pos = 0;
while ((start_pos = str.find(from, start_pos)) != std::string::npos)
{
str.replace(start_pos, from.length(), to);
start_pos += to.length(); // Handles case where 'to' is a substring of 'from'
}
}
// TODO there surely has to be a better way than this to convert to string?
// TODO maybe I can just do this with generics?
std::string numberToString(int i)
{
ss.str("");
ss << i;
return ss.str();
}
std::string numberToString(double d)
{
ss.str("");
ss << d;
return ss.str();
}
std::string numberToString(float f)
{
ss.str("");
ss << f;
return ss.str();
}
Is there a neater C++ way to manage the conversion to and from strings for the data types, maybe using generics?
Conversion of a number to string feels messy. Is there a faster/neater way?
Answer: Firstly, and most importantly, you're missing a virtual destructor for your IDataType definition. This is bad, and should be the first thing you fix:
class IDataType
{
public:
virtual ~IDataType() { }
};
For 1, you could do this using templates. However, you don't have to, assuming you have access to C++11. There is a free function defined in <string> called to_string that has an overload for most of the basic numeric types. Hence any place you're calling one of your numberToString methods, you could simply replace this with std::to_string(...) (see here for the documentation of to_string).
Converting back the other way is probably going to require a bit more effort, however. You could potentially do this with regex (but note that creating a regex that will accept only integers or doubles is somewhat difficult and error prone; cases like 1. or .4 or 1e5 are very easy to overlook and fiddly to get right). In this case, the only real difference seems to be that there are (multiple) parens or no parens. Perhaps a better way would be to reduce it to a format with no parens, and then parse that:
class Point : public IDataType
{
void fromString(std::string s)
{
// Remove parenthesis, leaving (hopefully) only "x,y"
s.erase(std::remove_if(s.begin(), s.end(),
[](char c) { return c == '(' || c == ')'; }),
s.end());
// Split on ","
std::vector<std::string> split;
boost::split(split, s, boost::is_any_of(","));
// Make sure there are only 2 values
// May also want to do some checking around lexical_cast in case it fails
x = Number<double>(boost::lexical_cast<double>(split[0]));
y = Number<double>(boost::lexical_cast<double>(split[1]));
}
....
}
A very similar algorithm can be used for Box (N.B. I haven't tried to compile the above code, so I apologize in advance if there are any errors in it. Hopefully you get the general idea, though).
On a separate note, I'm not a huge fan of the design. The idea seems to be you can create an IDataType object which is mutated by calling fromString. Given the small size of these in general (and unless you really need this to be high performance), you might want to consider making them immutable:
class Point : public IDataType
{
public:
Point(double x, double y)
: IDataType(false),
x(x),
y(y)
{ }
Point(const std::string& s)
{
fromString(s);
}
std::string toString(....) const { } // Note: your toString method should be const
private:
void fromString(std::string s)
{
// Logic
}
};
This has a few benefits:
Easier to reason about; once you've instantiated it with given parameters, you don't need to worry about things changing through a call to fromString.
Easier to deal with exceptions: everything is done through the constructor, so you either get back a fully formed correct object, or the constructor throws. Currently, you need to make sure that your fromString function doesn't leave the object in an inconsistent state if it throws. | {
"domain": "codereview.stackexchange",
"id": 9001,
"tags": "c++, strings, converting, postgresql"
} |
Making an array editable by user | Question: What I want to do:
Let users add data to an array through an input field and add button.
HTML:
<body>
<input type="text" id="input">
<button type="button" id="btnAdd">Add Number</button>
<div id="outputBox"></div>
JS:
var arrayList = [];
$(document).ready(function() {
$('#btnAdd').on('click', mkArray);
$('#outputBox').on('click', 'span', deleteNum);
});
Display the array to the user.
function mkArray(event) {
event.preventDefault();
var numberAdd = $('#input').val();
arrayList.push(numberAdd);
$('#outputBox').append('<span id="' + numberAdd + '"> ' + numberAdd + '</span>');
}
If the user clicks on the data, confirm deletion and then delete the item and refresh the array and display.
function deleteNum() {
var confirmation = confirm('Are you sure you want to delete?');
var numberIndex = $(this).index();
if (numberIndex > -1) {
arrayList.splice(numberIndex, 1);
};
$(this).remove();
};
JS FIDDLE
Question
Any potential pitfalls with this way of modifying an array?
Answer:
Any potential pitfalls with this way of modifying an array?
There certainly are for the user! Your code is not really confirming deletion: The number gets removed even if you press No/Cancel. You need to actually check the return value of the confirm() call:
var confirmation = confirm('Are you sure you want to delete?'); // returns true/false
if(confirmation) {
// user confirmed, so proceed with removing the item
}
I'd also recommend a more descriptive confirmation message; tell me what I'm about to delete at least. Otherwise, I can't really be sure.
And there are further pitfalls.
For one, you're using the user's input as an element ID. Now, firstly, you don't need an ID. It's not used for anything. Second, IDs are really meant to be unique, but I can add the same thing as many times as I want, and I'll just get a bunch of elements with the same ID (it won't break anything, because - as mentioned - the ID isn't used for anything, but it only makes the IDs more pointless). And third, the input isn't being checked, so the user can input whatever they want, and have it appended directly to the page.
For instance, if you input <script>alert("hacked")</script>, you'll get an alert when you click to add it, because you've just added a script element to the page. It'll also add some complete nonsense HTML to the page, because a script tag has suddenly become the ID for a span element, and... well, nonsense. Even if you weren't using the user's input as ID, it'd still behave as described, because you're inserting the input string as-is in the span. If you want to insert as text, use jQuery's .text() to set it. Otherwise it'll be interpreted as HTML.
E.g. you can input the string <script>$("body").empty()</script> and things get very zen.
You can do all this with the JS console of course, but the point remains: Always, always validate and sanitize user input, and insert it in a safe manner.
Also, your arrayList variable is global, meaning anything and everything can modify it in any way (for instance, you can input <script>arrayList = [];</script> and your list will get reset when it should just add something). It can end up not even being an array. In that case, your code will fail, because you can't splice() something that's not an array. Even if it is still an array, it may have been sorted, emptied, or who knows what else, and splice() might remove the wrong thing - or nothing at all.
Really, the trouble here is that the array isn't doing anything useful. You add numbers to the HTML directly, and you remove them from the HTML directly. The array is just a weak, error-prone copy of what's on the page. And you have to manually keep them in sync.
It's much better to have a single source for your data, so there's only one place to keep track of things. In this case, I'd say that that source should be the HTML. If you need to get an array of inputs, you can just use jQuery to fetch all the input so far:
$("#outputBox span").map(function () { return $(this).text() }).toArray()
That line will go through each of the spans, get their text, and return an array with the stuff that's been added so far (provided you remove the extra space in the span, which shouldn't really be there anyway - see below)
Other stuff:
arrayList is not a great name for a variable. Yeah, it's an array, so it's list of some sort... but what's in it?. I'd simply call it numbers or inputs, since that's what it actually represents.
Use jQuery to build the elements to avoid the error-prone string-concatenation
If you're making a list in HTML, make an actual <ul> or <ol> list (I'd pick ul as it's an unordered list of user input), and make the individual numbers list items (<li>).
Doesn't <button type="button" seem redundant to you? Because it is. You only need the type="button" if you're using an <input> element. A <button> element is - by definition - already a button.
You don't need event.preventDefault(), since clicking a button has no default behavior that needs preventing.
Use the $(function () { ... }); shorthand instead of $(document).ready(function () { ... });
In the end, you can do it all like this
HTML
<input id="input" type="text">
<button id="addItem">Add</button>
<ul id="output"></ul>
JS
$(function () {
// keep the input and output elements around for later
var input = $("#input");
var output = $("#output").on("click", "li", function () {
if( confirm("Are you sure you want to delete this item?") ) {
$(this).remove();
}
});
$("#addItem").on("click", function () {
var value = input.val();
if(/^\d+$/.test(value)) { // check that the string contains only digits
output.append($("<li></li>").text(value));
input.val("");
} else {
alert("Please enter only numbers");
}
});
});
Here's a demo | {
"domain": "codereview.stackexchange",
"id": 7799,
"tags": "javascript, jquery, array"
} |
print the number of received messages | Question:
Hi all!
Every time a node receives a message, I want to increment and print the total number of messages that have been received so far, starting from 0 every time the node starts. Any tips?
Originally posted by kaank_1993 on ROS Answers with karma: 1 on 2019-09-17
Post score: 0
Answer:
Something like will do the job.
void callback(const msg::type & msg)
{
static int counter = 0;
counter++;
std::cout << counter << std::endl;
}
int main(int argc, char** argv)
{
ros::init(argc, argv, "node_name");
ros::NodeHandle n;
ros::Subscriber sub = n.subscribe("your_sub_topic", 1, callback);
ros::spin();
return 0;
}
Originally posted by stevemacenski with karma: 8272 on 2019-09-17
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 33780,
"tags": "ros, ros2"
} |
In 4 spatial dimensions, would motion under a central force law be confined to a plane? | Question: In dimension $3$, we have the angular momentum $\omega = q \times v$, and since
$$\frac{d}{dt} \omega = v \times v + q \times (F/m)$$
from the fact the force is central (so $F$ is parallel to $q$) we obtain the conservation of $\omega$, so $q$ always lies in the plane perpendicular to $\omega$.
With 4 spatial dimensions there's no such thing as a cross product so none of this makes sense. Can motion under a central force law not lie within a $2d$ plane in this case?
More specifically: is there some smooth $F : \mathbb{R}^4 - \{0\} \to \mathbb{R}^4$ such that $F(q)$ is always parallel to $q$, and some smooth $q : \mathbb{R} \to \mathbb{R}^4 - \{0\}$ satisfying $F(q(t)) = m q''(t)$ for some $m > 0$ such $q(\mathbb{R})$ does not lie in any affine 2-dimensional subspace of $\mathbb{R}^4$?
Answer: In general, the angular momentum is defined as $\mathbf{L}=\mathbf{r}\wedge\mathbf{p}$. In our problem, switching to index notation, we have, in the CM frame, $$L_{\mu\nu}=r_\mu p_\nu-r_\nu p_\mu=\mu(r_\mu\dot{r}_\nu-r_\nu\dot{r}_\mu)$$
Now, since there is no external torque, angular momentum is constant.
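This conservation (and the resulting planar motion) is easy to check numerically. The sketch below is not part of the original answer; the inverse-square force law, step size, and initial conditions are arbitrary illustrative choices — all that matters is that the force stays parallel to the position vector.

```python
import numpy as np

def accel(r, k=1.0):
    # Central force: always parallel to the position vector r
    return -k * r / np.linalg.norm(r)**3

def ang_mom(r, v):
    # Antisymmetric tensor L_{mu nu} = r_mu v_nu - r_nu v_mu (unit mass)
    return np.outer(r, v) - np.outer(v, r)

r = np.array([1.0, 0.2, -0.3, 0.5])   # 4D initial position
v = np.array([0.1, 0.8, 0.4, -0.2])   # 4D initial velocity
r0, v0 = r.copy(), v.copy()
L0 = ang_mom(r, v)

# Leapfrog (kick-drift-kick) integration of r'' = F(r)/m
dt, a = 1e-4, accel(r)
for _ in range(50_000):
    v += 0.5 * dt * a
    r += dt * v
    a = accel(r)
    v += 0.5 * dt * a

# Every component of L_{mu nu} is conserved ...
drift = np.max(np.abs(ang_mom(r, v) - L0))
# ... and r never leaves the plane spanned by r0 and v0
e1 = r0 / np.linalg.norm(r0)
e2 = v0 - (v0 @ e1) * e1
e2 /= np.linalg.norm(e2)
out_of_plane = np.linalg.norm(r - (r @ e1) * e1 - (r @ e2) * e2)
print(drift, out_of_plane)
```

Both printed numbers sit at floating-point roundoff level, because the kicks are parallel to $r$ and the drifts parallel to $v$, so each leapfrog step preserves $L_{\mu\nu}$ exactly.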
If we define a Cartesian coordinate system $(w,x,y,z)$ such that the initial position and velocity vectors are coplanar with the $wx$-plane, then our angular momentum tensor would look like this: $$L_{\mu\nu}=\begin{pmatrix}0 & \mu(r_{w}(0)\dot{r}_x(0)-r_x(0)\dot{r}_w(0)) & 0 & 0\\
\mu(r_x(0)\dot{r}_w(0)-r_{w}(0)\dot{r}_x(0)) & 0 & 0& 0 \\
0 & 0 & 0 & 0 \\
0& 0 & 0 & 0
\end{pmatrix}$$
because the $y$ and $z$ components of both vectors are zero. However, because angular momentum is conserved, each element of this tensor is conserved. Therefore, the vectors never leave this plane. This argument can be extended to any number of dimensions. | {
"domain": "physics.stackexchange",
"id": 63212,
"tags": "newtonian-mechanics, angular-momentum, conservation-laws, symmetry, spacetime-dimensions"
} |
Quantum teleportation with "noisy" entangled state | Question: This is actually an exercise from Preskill (chapter 4, new version 4.4). So they are asking about the fidelity of teleporting a random pure quantum state from Bob to Alice, who both have one qubit of the following system ("noisy" entangled state):
$$\rho = (1 − \lambda)|\psi ^-\rangle \langle \psi ^- | + \frac{1}{4} \lambda I $$
with $|\psi ^-\rangle$ one of the Bell states. In the notes of Preskill they show you the example of quantum teleportation with a Bell state $|\psi ^-\rangle_{AB}$, by uniting the random qubit with the system (Bell state) and then making Alice do some measurements on the system, from which Bob gets a "copy" (or to be correct: the qubit was teleported to Bob) of the random qubit.
Now in this example, we are presented with the density matrix $\rho$, from which we cannot just get one "state". As I can not follow the example of the book (where they manipulate the state $|\psi ^-\rangle$), but now have to deal with the density matrix, I have no idea where to begin. How can we proceed to calculate the fidelity with which Bob will have the correct teleported state if using the given noisy entangled system $\rho$ to teleport the random qubit?
They also state that a random 'guess' has a 1/2 chance (which refers to the identity part $I$ in the $\rho$ system). I also know the fidelity of a pure Bell state will be 1. But I suppose I can't just say that for the system $\rho$ the fidelity is the sum of the parts?:
$$F = (1 − λ) + \frac{1}{4} \lambda \times \frac{1}{2}$$
And if I can, why is this? How can I explicitly calculate this?
P.S: By the way, if anyone would have solutions to the exercises in the notes of Preskill, a link would be much appreciated.
Answer: I'm not sure what was the expected solution, but this also works.
First of all, note that $$ I = |\phi^+\rangle\langle\phi^+| + |\phi^-\rangle\langle\phi^-| + |\psi^+\rangle\langle\psi^+| + |\psi^-\rangle\langle\psi^-|,$$
where $|\phi^+\rangle, |\phi^-\rangle, |\psi^+\rangle, |\psi^-\rangle$ are Bell states and $I$ is 4-dimensional identity operator.
The second observation is that if you apply the teleportation scheme (the one designed to work correctly with the $|\psi^-\rangle$ entangled state) to the wrong entangled Bell state (for example, Alice and Bob could share $|\phi^+\rangle$ instead of $|\psi^-\rangle$), then you will end up with one of these:
$$
|r_0\rangle = a|0\rangle + b|1\rangle \\
|r_1\rangle = a|0\rangle - b|1\rangle \\
|r_2\rangle = a|1\rangle + b|0\rangle \\
|r_3\rangle = a|1\rangle - b|0\rangle \\
$$
where $|r_0\rangle$ is the teleported qubit. I'm sure it can be checked that for 4 possible
Bell states a fixed teleportation scheme will give exactly 4 different results $|r_0\rangle, |r_1\rangle, |r_2\rangle, |r_3\rangle$.
So, if you use the teleportation scheme (specialized to $|\psi^-\rangle$) with the entangled state
$$ \rho = (1 − \lambda)|\psi ^-\rangle \langle \psi ^- | + \frac{1}{4} \lambda I = \\
= (1 − \lambda)|\psi ^-\rangle \langle \psi ^- | + \frac{1}{4} \lambda (|\phi^+\rangle\langle\phi^+| + |\phi^-\rangle\langle\phi^-| + |\psi^+\rangle\langle\psi^+| + |\psi^-\rangle\langle\psi^-|)
$$
then the result will be
$$
(1-\lambda)|r_0\rangle\langle r_0| + \frac{1}{4} \lambda (|r_0\rangle\langle r_0| + |r_1\rangle\langle r_1| + |r_2\rangle\langle r_2| + |r_3\rangle\langle r_3|)
$$
But you can check that
$$
|r_0\rangle\langle r_0| + |r_1\rangle\langle r_1| + |r_2\rangle\langle r_2| + |r_3\rangle\langle r_3| = 2I
$$
for any random qubit $|r_0\rangle$.
So, the final teleported state will be
$$
f = (1-\lambda)|r_0\rangle\langle r_0| + \frac{1}{4} \lambda \cdot 2I =
(1-\lambda)|r_0\rangle\langle r_0| + \frac{1}{2} \lambda I
$$
The fidelity between $r = |r_0\rangle\langle r_0|$ and $f$ is
$$
\left(\text{Tr}\sqrt{\sqrt{r}f\sqrt{r}}\right)^2 = \left(\text{Tr}\sqrt{rfr}\right)^2 = \left(\text{Tr}\sqrt{r\left((1-\lambda)r + \frac{1}{2} \lambda I\right)r} \right)^2 = \\
= \left(\text{Tr}\sqrt{(1-\lambda)r^3 + \frac{1}{2} \lambda r^2} \right)^2 =
\left(\text{Tr}\sqrt{(1-\lambda)r + \frac{1}{2} \lambda r} \right)^2 = \\
= \left(\text{Tr}\sqrt{\frac{1}{2}(2-\lambda)r} \right)^2 = \frac{1}{2}(2-\lambda)
$$
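Both the $2I$ identity and the final fidelity $\frac{1}{2}(2-\lambda)$ can be cross-checked numerically. This is a sketch, not part of the original answer; the random state and the value of $\lambda$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
norm = np.sqrt(abs(a)**2 + abs(b)**2)
a, b = a / norm, b / norm                 # random qubit a|0> + b|1>

# The four possible outcomes r_0..r_3 of the fixed teleportation scheme
kets = [np.array([a, b]), np.array([a, -b]),
        np.array([b, a]), np.array([-b, a])]
projs = [np.outer(k, k.conj()) for k in kets]

assert np.allclose(sum(projs), 2 * np.eye(2))   # |r0><r0|+...+|r3><r3| = 2I

lam = 0.3
f = (1 - lam) * projs[0] + 0.5 * lam * np.eye(2)
# For a pure target state, the fidelity reduces to <r0| f |r0>
F = np.real(kets[0].conj() @ f @ kets[0])
print(F, (2 - lam) / 2)                         # both equal 0.85
```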
Note that the link you shared contains the answer on page 45 (but there $\lambda$ is actually $1-\lambda$). | {
"domain": "quantumcomputing.stackexchange",
"id": 703,
"tags": "quantum-state, entanglement, density-matrix, teleportation, noise"
} |
Non-zero Euclidean commutator in 2D CFT? | Question: In a Euclidean QFT, commutators of operators vanish for any spacetime separation. This can be argued very simply by using the path integral representation of the correlator, wherein operators become simple functions and hence can be easily moved around inside the integral.
Now, in a 2d CFT the two point correlator of a primary operator $\mathcal{O}$ with conformal weights $h$ and $\bar{h}$ looks like
$$\langle\mathcal{O}(z_1,\bar{z}_1)\mathcal{O}(z_2,\bar{z_2})\rangle=\frac{C}{(z_1-z_2)^{2h}(\bar{z}_1-\bar{z}_2)^{2\bar{h}}}$$
where $C$ is some normalizing constant.
We can exchange $z_1$ and $z_2$ in the above formula by rotating $z_1$ around $z_2$ by $\pi$: $(z_1-z_2)\to (z_1-z_2) e^{i\pi},(\bar{z}_1-\bar{z}_2)\to (\bar{z}_1-\bar{z}_2) e^{-i\pi}$
$$\langle\mathcal{O}(z_2,\bar{z}_2)\mathcal{O}(z_1,\bar{z_1})\rangle=e^{\pm 2\pi i s}\frac{C}{(z_1-z_2)^{2h}(\bar{z}_1-\bar{z}_2)^{2\bar{h}}}$$
where $s=h-\bar{h}$ is the spin of $\mathcal{O}$ and $\pm$ depends on the choice of the branch cut for the power functions.
Thus the commutator is
$$\langle[\mathcal{O}(z_1,\bar{z}_1),\mathcal{O}(z_2,\bar{z_2})]\rangle=\frac{C(1-e^{\pm 2\pi i s})}{(z_1-z_2)^{2h}(\bar{z}_1-\bar{z}_2)^{2\bar{h}}}$$
Clearly, the commutator is non-zero unless $s \in \mathbb{Z}$, which is inconsistent with our general expectation. What am I missing?
Answer: What you wrote is a correct explanation of why fields do not commute if their spins are not integer. A field with half-integer spin is called fermionic, and fermionic CFTs are the subject of a recent article by Runkel and Watts https://arxiv.org/abs/2001.05055. Fields with more general fractional spins are called parafermionic. Parafermionic fields not only do not commute, but also have multivalued correlation functions.
This illustrates the fact that in the axiomatic ("bootstrap") approach to CFT, most axioms can be relaxed, giving rise to generalizations. In this case, relaxing commutativity gives rise to fermionic and parafermionic CFTs. | {
"domain": "physics.stackexchange",
"id": 65155,
"tags": "conformal-field-theory, commutator, correlation-functions"
} |
Is it important for a quantum computer to be shielded by the magnetic field? | Question: I've been browsing The D-Wave 2000Q site when I bumped into this aspect of their quantum computers:
A Unique Processor Environment
Shielded to 50,000× less than Earth’s magnetic field
Why is that relevant? What would happen if it would be much less than 50.000x?
Answer: The DWave machine relies heavily on single-flux-quantum digital control for setting up qubit and coupler operating points, and for carrying out the annealing protocol. Any stray magnetic flux, if present while the chip is cooled through its superconducting transition, will be trapped inside the circuit and can cause it to fail.
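The flux-trapping threshold can be put into numbers with a few lines (a sketch, not part of the original answer; assumed values are the flux quantum ≈ 2.07·10⁻¹⁵ Wb, a guessed 2 cm × 2 cm chip, and a typical geomagnetic field of ~25 μT):

```python
PHI0 = 2.07e-15        # magnetic flux quantum, Wb
AREA = 0.02**2         # assumed 2 cm x 2 cm chip, m^2
B_EARTH = 25e-6        # typical geomagnetic field, T

B_one_quantum = PHI0 / AREA            # field giving one flux quantum over the chip
attenuation = B_EARTH / B_one_quantum  # shielding factor needed for < 1 quantum
quanta_at_50k = (B_EARTH / 50_000) * AREA / PHI0  # trapped quanta at 50,000x shielding

print(B_one_quantum)   # ~5e-12 T, i.e. ~5 pT
print(attenuation)     # ~5e6
print(quanta_at_50k)   # ~100
```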
You can calculate how much shielding you need by requiring the magnetic field inside the shield to be smaller than a flux quantum over the area of the chip. $B = \frac{\Phi_0}{A}$, where $\Phi_0 \sim 2 \cdot 10^{-15} ~ \text{Wb}$ is the flux quantum and $A$ is the area. If the area of the DWave chip is $(2 ~ \text{cm})^2$ (guessing) then $B \sim 5 ~ \text{pT}$. Earth’s field is about $25 ~ \mu \text{T}$ so you really want $\times 5 \cdot 10^6$ attenuation of the field. Shielding of 50,000 means that you will have on average about 100 flux quanta that can trap in the chip. Typically people add trapping sites on the chip to sequester the remaining flux in safe areas. | {
"domain": "quantumcomputing.stackexchange",
"id": 61,
"tags": "experimental-realization, d-wave"
} |
Relationship between CO2 concentration and thermal conductivity of the air in a volume? | Question: How much would the percentage of CO2 in a room need to change in order to make a measurable change (using transient hot wire method) to the thermal conductivity of the air inside room? Are there other changes to the air composition that could drive a significant change in the thermal conductivity?
Thermal conductivity just happens to be the easiest air parameter for me to instrument, I'm looking for ways to modify the air in a sealed space and then sense when the space becomes open by measuring the conductivity (or whatever else will do it) as it equalise with the bulk atmosphere.
Answer: I don’t think you can get enough CO2 into room air to have a signal and still breathe it.
Humidity might be a much stronger signal. Particularly at just above room temperature, the thermal conductivity varies significantly with humidity:
(From here, which has more info) | {
"domain": "physics.stackexchange",
"id": 55147,
"tags": "atmospheric-science, air, thermal-conductivity"
} |
How much energy released by the Theia-Gaia impact? | Question: I read somewhere that the Chicxulub impact released the energy equivalent of anywhere from 100-300 teratons of TNT. That's an impressive amount of energy.
Assuming a head on collision (according to newer research), how would it compare to the impact that created our moon?
Answer: Here's a quick-and-dirty estimate.
The gravitational self-energy of a uniform-density sphere is
$$
U = \frac35 \frac{GM^2}R
$$
Let's assume Theia had the same mass and density as Mars, and that Gaia contained the rest of the mass of the Earth-Moon system.
The binding energies for the four bodies are then
theia/mars 4.82e+30 joules
gaia 1.90e+32 joules
earth 2.24e+32 joules
moon 1.24e+29 joules
You can see that Theia and the Moon contribute to the binding energy starting the third significant figure: moving the 90% of Theia's mass to Gaia, leaving us with the Earth and the Moon, must have released something like $0.3\times10^{32}\rm\,J$ of gravitational binding energy as heat.
Kinetic energies due to Earth's rotation and the Moon's orbit are irrelevant compared to Earth's binding energy --- the biggest contributor there is $0.25\times10^{30}\rm\,J$ associated with Earth's daily rotation.
It's probably safe to assume the same about the progenitors.
Apparently a teraton of TNT is $4\times10^{21}\rm\,J$, if you insist on that comparison.
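The answer's numbers can be reproduced with a few lines (a sketch, not part of the original answer; masses and radii are rounded standard values, and Gaia's radius assumes Earth's mean density, as the answer does):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def binding(M, R):
    # Gravitational self-energy of a uniform-density sphere: U = (3/5) G M^2 / R
    return 0.6 * G * M**2 / R

M_mars, R_mars   = 6.42e23, 3.39e6   # stand-in for Theia
M_earth, R_earth = 5.97e24, 6.37e6
M_moon, R_moon   = 7.34e22, 1.74e6

# Gaia = Earth + Moon - Theia, scaled to Earth's mean density
M_gaia = M_earth + M_moon - M_mars
R_gaia = R_earth * (M_gaia / M_earth) ** (1 / 3)

released = (binding(M_earth, R_earth) + binding(M_moon, R_moon)
            - binding(M_gaia, R_gaia) - binding(M_mars, R_mars))
print(released)                 # ~3e31 J
print(released / 4.2e21)        # ~7e9 teratons of TNT
```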
Note that I've assumed nothing about the geometry of the collision, whether it was head-on or glancing. I'm only making assumptions about the (well-known) final state and the (poorly constrained) initial state. | {
"domain": "earthscience.stackexchange",
"id": 1096,
"tags": "earth-history, moon, earth-system"
} |
death of a red dwarf star / minimum mass needed for a white dwarf? | Question: OK, first, I know there's a variety of sizes and types of red dwarf stars and the universe is too young for any of them to have reached the end of their main sequence phase yet, so it's all theoretical and/or modeling.
http://en.wikipedia.org/wiki/Red_dwarf
But what is the theoretical size needed for a star to undergo the electron degeneracy process which turns a small star from at least Jupiter-sized, usually bigger, into an Earth-sized super-dense object, where, as I understand it, the electrons are squeezed off the nuclei - electron degeneracy.
It would seem to me that a 7.5% solar mass star, which gradually burns hydrogen into helium but doesn't burn helium, might not have the mass to compact into a true white dwarf but might end its life looking more like a brown dwarf / super-Jupiter - well, talking appearance, not really, because super-Jupiters & brown dwarfs are mostly hydrogen while an end-of-life red dwarf should be mostly helium - which, in and of itself, might make the difference.
It's just my curiosity whether all red dwarfs turn into white dwarfs at the end of their burning phase or is there a theoretical mass that's needed for that level of shrinkage to occur?
Thanks.
Answer: Stars that have a mass lower than about $0.5 M_{\odot}$ will not ignite helium in their cores, in an analogous fashion to the way that stars with $M<8M_{\odot}$ have insufficiently massive cores that never reach high enough temperatures to ignite carbon.
The cause in both cases is the onset of electron degeneracy pressure, which is independent of temperature and allows the core to cool at constant pressure and radius. [A normal gas would contract and become hotter as it loses energy!]
The end result for a $0.5M_{\odot}$ star will be a helium white dwarf with a mass (depending on uncertain details of the mass-loss process) of around $0.2M_{\odot}$. Such things do exist in nature now, but only because they have undergone some kind of mass transfer event in a binary system that has accelerated their evolution. The collapse to a degenerate state would be inevitable even for the lowest mass stars (which would of course then be very low-mass white dwarfs). As an inert core contracts it loses heat and cools - a higher density and lower temperate eventually lead to degenerate conditions that allow the core to cool without losing pressure.
The lowest mass stars ($<0.3 M_{\odot}$) do get there via a slightly different route - they are fully convective, so the "core" doesn't exist really, it is always mixed with the envelope. They do not develop into red giants and thus I guess will suffer much less mass loss.
The remnant would be a white dwarf in either case and is fundamentally different from a brown dwarf both in terms of size and structure, because it would be made of helium rather than (mostly) hydrogen. This should have an effect in two ways. For the same mass, the brown dwarf should end up bigger because the number of mass units per electron is smaller (1 vs 2) and also because the effects of a finite temperature are larger in material with fewer mass units per particle - i.e. its outer, non-degenerate layer would be more "puffed up". NB: The brown dwarfs we see today are Jupiter-sized, but are still cooling. They will get a bit smaller and more degenerate.
A simple size calculation could use the approximation of an ideal, cold, degenerate gas. A bit of simple physics using the virial theorem gives you
$$ \left(\frac{R}{R_{\odot}}\right) \simeq 0.013\left(\frac{\mu_e}{2}\right)^{-5/3} \left(\frac{M}{M_{\odot}}\right)^{-1/3},$$
where $\mu_e$ is the number of atomic mass units per electron.
Putting in appropriate numbers I get $0.32\ R_{Jup}$ for a $0.07M_{\odot}$ Helium white dwarf versus $1.01\ R_{Jup}$ for a $0.07M_{\odot}$ completely degenerate Hydrogen brown dwarf (in practice it would be a bit smaller because it isn't all hydrogen).
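These two figures can be reproduced directly from the relation above (a quick sketch; the solar and Jovian radii used here are rounded reference values, so the last digit differs slightly from the numbers quoted):

```python
# Cold degenerate mass-radius relation: R/Rsun = 0.013 (mu_e/2)^(-5/3) (M/Msun)^(-1/3)
R_SUN_KM = 6.957e5    # solar radius, km
R_JUP_KM = 6.9911e4   # Jupiter volumetric mean radius, km

def degenerate_radius_rjup(mass_msun, mu_e):
    """Radius of a cold, fully degenerate object, in Jupiter radii."""
    r_rsun = 0.013 * (mu_e / 2.0) ** (-5.0 / 3.0) * mass_msun ** (-1.0 / 3.0)
    return r_rsun * R_SUN_KM / R_JUP_KM

helium_wd = degenerate_radius_rjup(0.07, mu_e=2)    # ~0.31 R_Jup
hydrogen_bd = degenerate_radius_rjup(0.07, mu_e=1)  # ~1.0 R_Jup
```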
However, it would be interesting to see some realistic calculations of what happens to a $0.07M_{\odot}$ brown dwarf versus a $0.08M_{\odot}$ star in a trillion years or so. I will update the answer if I come across such a study.
EDIT: I knew I'd seen something on this. Check out Laughlin et al. (1997), which studies the long-term evolution of very low-mass stars. Low-mass stars do not pass through a red giant phase, remain fully convective and can thus convert almost all their hydrogen into helium over the course of $10^{13}$ years and end up cooling as degenerate He white dwarfs. | {
"domain": "astronomy.stackexchange",
"id": 841,
"tags": "stellar-evolution, stellar-astrophysics"
} |
Spacetime surgery - why are there unglueable points? | Question: In The time travel paradox by S. Krasnikov (2002), Deutsch-Politzer spacetime is constructed by making two cuts and rejoining the manifold by gluing opposite "banks" of the cuts... omitting the "corner" points.
See figure below - cut along dashed lines; corner points are the circles at the ends of the dashed lines; the identification is above upper line to below lower line.
Krasnikov then says "The corner points cannot be glued back into the spacetime..."
Question: why can't they be glued back? What principles are being implicitly invoked?
(Can anyone point to an introductory text that describes what surgeries are possible?)
Answer: HOW DOES ONE MAKE THE DEUTSCH-POLITZER SPACETIME
A good question because it's not 100% the standard procedure of a cut and paste spacetime (a good reference on cutting and pasting manifold is by the way C.T.C. Wall's Differential Topology. There is indeed not a lot of good references for GR on that topic).
It is best to consider the manifold atlas directly instead of any fancier methods (I think you could probably do it by adding a handle to Minkowski space and some kind of limiting process, but that sounds tricky as it approaches the limit), as it's not that complex.
Take Minkowski spacetime (ie topologically, $\mathbb{R}^n$). Here are the coordinate patches to consider
First, the obvious one, which covers most of it. Remove the two horizontal slits $D_1, D_2$ from it. Then the first coordinate chart is
$$(\mathbb{R}^n \setminus (D_1 \cup D_2), \operatorname{Id})$$
That's the easy part. Now we need some additional patches to represent the junction of the slits.
Simplest case is to pick some open product around the slits. For instance, consider the case of $(1+1)$-dimensional Minkowski space, with the slits $D_1 = \{ -1 \} \times [-1, 1]$ and $D_2 = \{ 1 \} \times [-1, 1]$. The first junction can be for instance composed of $A_1 = (-2, -1) \times (-1, 1)$ and $A_2 = (1,2) \times (-1, 1)$, and the second of $B_1 = (-1, 0) \times (-1, 1)$ and $B_2 = (0,1) \times (-1, 1)$.
To define a manifold, all we need is a set of open sets of $\mathbb{R}^n$ and transition functions [4.1], so let's find that out. Our atlas to $A_1 \cup A_2$ can be made from a set $A = (0,2) \times (-1, 1)$, and our atlas to $B_1 \cup B_2$ can be made from a set $B = (0,2) \times (-1, 1)$. $A$ and $B$ do not overlap, so all we need is the transition functions between the main patch (let's call it $R$) and $A$ and $B$.
Here we go : for $A$ we consider the subsets
\begin{eqnarray}
A_{R1} &=& (0,1) \times (-1, 1)\\
A_{R2} &=& (1,2) \times (-1, 1)
\end{eqnarray}
Each maps to the manifold as $\phi(A_{Ri}) = A_i$, and for $R$ we consider the subsets
\begin{eqnarray}
R_{A1} &=& (-2, -1) \times (-1, 1)\\
R_{A2} &=& (1,2) \times (-1, 1)
\end{eqnarray}
which simply map to $A_i$ via the identity map. Now we have the following transition maps :
\begin{eqnarray}
\phi_{A1R1}(t,x) &=& (t-2, x)\\
\phi_{A2R2}(t,x) &=& (t, x)
\end{eqnarray}
Finding the inverse maps isn't terribly hard, and overall these obey the proper transition map properties. The same can be found for the patch $B$ fairly easily.
That's the Deutsch-Politzer spacetime manifold. Consider now a curve lying entirely outside of $(-2, 2) \times (-1,1)$ (to simplify things) heading to the point $(-1, -1)$. In the patch defined by $R$, it is fairly obviously a singularity, as that point is removed from that patch. This wouldn't be a problem if another coordinate patch could continue that curve, but this is not the case here : there exists no point of that curve lying in any other coordinate patch. Hence we have an inextendible curve with finite affine parameter : that point is a singularity.
Had we left those points in (by, say, removing the open sets $\operatorname{Int}(D_i)$ rather than the closed sets), $R$ would not have been an open set, and hence the resulting space would not have been a manifold.
We are in Minkowski space here, which means that this singularity can only be of two types : either regular or quasi-regular. To make it a "serious" singularity we should probably check that it is not simply a regular point. If you try to extend the manifold to include this point, some open set around it will overlap with regions $A$ and $B$. A bad thing happens here : the tiny region in $R$ that is extended around the singularity contains points from one of the slits $D_i$. Those slit points will be adjacent to points from the junction (specifically points in $\{1\} \times (-1, 1)$); that is, there are pairs of points such that every pair of neighbourhoods of those points overlaps : the manifold isn't Hausdorff anymore, which is a big no-no.
Edit: By the way, this kind of gluing may be more in line with the type discussed by Hajicek than the standard manifold gluing; he discusses the issues, such as the conditions under which a gluing leads to non-Hausdorff spacetimes, in Bifurcate Space‐Times
"domain": "physics.stackexchange",
"id": 56155,
"tags": "spacetime, topology"
} |
Graph theory : Trees | Question: I need to determine all the trees on 25 vertices for which there exists an integer m ≥ 2, such
that the degree of each vertex gives the same remainder when divided by m.
Can somebody help?
Answer: Suppose $G = (V, E)$ is a tree with 25 vertices such that there exist some constants $c \geq 0$ and $m \geq 2$ such that $\operatorname{deg} v \equiv_m c$ for all $v$. Note that we can equivalently write $\operatorname{deg} v = a_v m + c$ for some $a_v \geq 0$.
Since $G$ is connected by definition, we get that it must have 24 edges in total, giving us
$$ \sum_{v \in V} \operatorname{deg} v = 2 \cdot |E| = 48.$$
We can express this sum using the aforementioned decomposition of the degrees in $G$ to get
$$
\begin{align*}
\sum_{v \in V} \operatorname{deg} v
&= \sum_{v \in V} (a_v m + c) \\
&= c |V| + m \sum_{v \in V} a_v \\
&= 48.
\end{align*}
$$
Since both terms in the rewritten sum are non-negative, $c|V| = 25c \leq 48$ forces $c \in \{0, 1\}$; and $c = 0$ is impossible, because a tree on more than one vertex has a leaf of degree 1, which cannot be written as $a_v m$ with $m \geq 2$. Hence $c$ must be 1 if we are to find any solutions.
Hence we get that $m$ must divide $48 - c|V| = 23$, which implies $m = 23$ as we required $m \geq 2$.
At this point, we know that any such $G$ satisfying your conditions does so with $c = 1$ and $m = 23$.
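As a quick sanity check of these values, the degrees of a star on 25 vertices (one centre of degree 24 joined to 24 leaves of degree 1) satisfy both constraints (a sketch, verifying only the arithmetic):

```python
# Star on 25 vertices: one centre of degree 24, 24 leaves of degree 1
degrees = [24] + [1] * 24

assert len(degrees) == 25
assert sum(degrees) == 2 * 24              # handshake lemma: twice the tree's edge count
assert all(d % 23 == 1 for d in degrees)   # common remainder c = 1 with m = 23
```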
By using our new formula again, we find that the sum of our $a_v$ must also equal 1 and therefore, $G$ must be isomorphic to a tree with 1 internal node and 24 leaves, which is also known as the star graph $S_{24}$. | {
"domain": "cs.stackexchange",
"id": 16302,
"tags": "graphs"
} |
What is considered the frequency (and wavelength) of guided waves in a waveguide? | Question: In a rectangular waveguide with sides of length $a$ and $b$, the dispersion relation is
$$\beta^2 =\omega^2\mu\epsilon=\beta_z^2+\beta_x^2+\beta_y^2=\beta_z^2+\beta_s^2.$$
So we have
$$\beta_z = \sqrt{\omega^2\mu\epsilon-\beta_x^2-\beta_y^2}$$
which gives us the cutoff frequency if the wave propagates in $z$.
My confusion arises with using these modes in practice - what are we referring to as the frequency, and analogously the wavelength? When we talk about the wavelength, are we talking about $2\pi/\beta_z$ or about the free-space value $2\pi c/\omega$?
Or are the dimensions of the waveguides usually such that $2\pi c/\omega$ is roughly equal to $2\pi/\beta_z$?
Answer: The frequency is $f = \omega/(2 \pi)$ and specifies the oscillation in time. The wavenumber is $\beta_z = (2\pi)/\lambda_z$ and specifies the oscillation in space. Thus the electric wave is given by $E \cos(\omega t - \beta_z z)$ for propagation in the $+z$ direction.
The dispersion relation describes the relation between $\omega$ and $\beta$. In vacuum, this becomes $\omega/\beta = c$ or $\lambda f = c$. This is a linear dispersion relation. In the rectangular waveguide, you have a changed dispersion because of the confinement (which gives rise to the additional terms $\beta_x$ and $\beta_y$) and the material (which changes the speed of the wave to $c' = 1/\sqrt{\mu_r \mu_0 \epsilon_r \epsilon_0}$).
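As a concrete numerical illustration of this dispersion relation (a sketch assuming an air-filled WR-90 guide with broad wall $a = 22.86$ mm, operated in the TE$_{10}$ mode, where $\beta_y = 0$ and $\beta_x = \pi/a$):

```python
import math

c = 2.998e8      # wave speed in the air-filled guide, m/s
a = 22.86e-3     # WR-90 broad-wall dimension, m
f = 10e9         # operating frequency, Hz

f_cutoff = c / (2 * a)                          # TE10 cutoff, ~6.56 GHz
beta = 2 * math.pi * f / c                      # free-space wavenumber
beta_z = math.sqrt(beta**2 - (math.pi / a)**2)  # from the dispersion relation
lambda_guide = 2 * math.pi / beta_z             # ~39.7 mm
lambda_free = c / f                             # ~30.0 mm
```

At 10 GHz the guided wavelength is noticeably longer than the free-space wavelength; only well above cutoff do the two converge.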
The wavelength in the z-direction in the waveguide is given by $2\pi/\beta_z$. When the frequency ($\omega$) is sufficiently high, the terms $\beta_x$ and $\beta_y$ can be neglected in the dispersion relation. In this case (which is usually the case in practice), we end up with the linear dispersion relation. Thus, in practice, when the frequency is sufficiently high, the real wavelength $\lambda = 2\pi/\beta_z$ can be approximated by $c'/f = 2\pi c'/\omega$.
"domain": "physics.stackexchange",
"id": 62505,
"tags": "optics, waves, electromagnetic-radiation, frequency, wavelength"
} |
How to perform Quantum Process Tomography for three qubit gates? | Question: I am trying to perform Quantum process tomography (QPT) on three qubit quantum gate. But I cannot find any relevant resource to follow and peform the experiment. I have checked Nielsen and Chuang's Quantum Computation and Quantum Information book.
And I found this, the formula to find the Chi matrix for 2-qubit gates. Then in the research paper Measuring Controlled-NOT and two-qubit gate operation there is a lucid explanation of how to perform QPT for two-qubit gates following Nielsen's suggestion in his book.
Following the aforementioned references I am trying to obtain the formula for the Chi matrix in the case of a 3-qubit gate. Experimentally I have found the matrix in the middle of equation 8.181 in Nielsen's book (it's in the attached image), but I am having trouble finding the permutation matrix 'P' given in the same equation for three qubits. Can anyone explain how I can find it?
More importantly, I want to know whether equation 8.181 of Nielsen's book (given in the attached image) should itself be used for the case of 3-qubit gates as well. If not, how should it be modified for a 3-qubit gate?
Answer: I am sure that since you are asking this question you probably already understand this, but for future & other's reference let me give a quick recap of what we are trying to achieve.
Quantum channels
Any process (in an open quantum system) is some map $\Lambda$ from a space of density matrices to a space of density matrices. I write "a" space, because these spaces are not necessarily of the same dimension (for instance, tracing out a subsystem does not preserve dimension). Any unitary transformation is such a map as well.
We generally write $\Lambda(\rho_{\mathrm{in}}) = \rho_{\mathrm{out}}$ when our map transforms $\rho_{\mathrm{in}}$ to $\rho_{\mathrm{out}}$. Furthermore, since we always expect any $\rho_{\mathrm{out}}$ to be an actual physical state (it must be positive semidefinite and must have trace $1$), we impose two constraints on $\Lambda$.
Any map $\Lambda$ should be completely positive. This ensures that $\rho_{\mathrm{out}}$ is always positive semidefinite, even if it a subsystem of a larger whole. This constraint is often abbreviated as "CP".
Any map $\Lambda$ should be trace preserving: $\mathrm{tr}\big[\Lambda(\rho)\big] = \mathrm{tr}\big[\rho\big]$, $\forall\rho$. This ensures that $\rho_{\mathrm{out}}$ always has unit trace. We abbreviate this constraint as "TP".
Any map $\Lambda$ that is both CP & TP = CPTP, we call a quantum channel. Sometimes we relax the TP constraint to include trace decreasing maps (consider for instance a measurement); some authors refer to these maps as the more general quantum operations.
Different representations of quantum channels
A quantum channel can be represented in different ways; I recap three here.
The Kraus representation. Nielsen & Chuang refer to this as the operator-sum representation. In mathematical form:
\begin{equation}
\Lambda(\rho) = \sum_{k} A_{k} \rho A_{k}^{\dagger},
\end{equation}
where $\{A_{k}\}$ are known as the Kraus operators, and the number of operators is always at most the system size $d = 4^{n}$. That is, for any valid map $\Lambda$ there can always be found a Kraus representation with at most $d$ operators. The CP constraint is automatically met here, the trace constraint reads: $\sum_{k} A_{k}^{\dagger} A_{k} \leq I$ (with equality for TP).
The Choi matrix, which is a direct result of the Choi-Jamiolkowski isomorphism. Some intuition on what this is can be found in this previous answer. Consider the maximally entangled state $|\Omega \rangle = \sum_{\mathrm{i}}|\mathrm{i}\rangle \otimes |\mathrm{i}\rangle$, where $\{|\mathrm{i}\rangle\}$ forms a basis for the space on which $\rho$ acts. (Note that we thus have a maximally entangled state of twice as many qubits).
The Choi matrix is the state that we get when on one of these subsystems $\Lambda$ is applied (leaving the other subsystem intact):
\begin{equation}
\rho_{\mathrm{Choi}} = \big(\Lambda \otimes I\big) |\Omega\rangle\langle\Omega|.
\end{equation}
As the Choi matrix is a state, it must be positive semidefinite (corresponding to the CP constraint) and must have unit trace (corresponding to the TP constraint).
The process- or $\chi$-matrix. We write our map as a double sum:
\begin{equation}
\Lambda(\rho) = \sum_{m,n} \chi_{mn}P_{m}\rho P_{n}^{\dagger},
\end{equation}
where $\{P_{m}\}$ & $\{P_{n}\}$ form a basis for the space of density matrices$^{1}$; we use the Pauli basis $\{I,X,Y,Z\}^{\otimes n}$ (thereby omitting the need for the $\dagger$ at $P_{n}$). The matrix $\chi$ now encapsulates all information of $\Lambda$; the CP constraint reads that $\chi$ must be positive semidefinite, and the trace constraint reads that $\sum_{m,n}\chi_{mn}P_{n}P_{m} \leq I$ (with equality for TP).
The goal for quantum process tomography is now to find a representation of an unknown channel $\Lambda$. We focus on the process matrix.
Standard QPT
Our goal is to find $\chi$ for an arbitrary quantum channel. We give ourselves only the power of inputting different input states $\rho_{\mathrm{in}}$, and measuring the output state $\rho_{\mathrm{out}}$ in different bases with measurement observables $\{M\}$.
We always measure in the Pauli basis, and with slightly abusive notation we use the Pauli basis as input states as well$^{2}$. A measurement on $\rho_{\mathrm{out}}$ in a basis denoted by $P_{j}$ with an input state $P_{i}$ then has expectation value $\lambda_{ij}$:
\begin{equation}
\begin{split}
\lambda_{ij} &= \mathrm{tr}\big[P_{j}\Lambda(P_{i})\big] \\
&= \mathrm{tr}\big[P_{j}\sum_{mn}\chi_{mn}P_{m}P_{i}P_{n}\big] \\
&= \sum_{mn}\chi_{mn} \mathrm{tr}\big[P_{j}P_{m}P_{i}P_{n}\big] \\
&= \sum_{mn} A_{(ij,mn)} \chi_{mn}.\\
\end{split}
\end{equation}
where $A_{(ij,mn)} = \mathrm{tr}\big[P_{j}P_{m}P_{i}P_{n}\big]$. If we now view all measurement outcomes $\{\lambda_{ij}\}$ as a vector $\overrightarrow{\lambda}$ and if we vectorize $\chi$ to $|\chi\rangle\rangle = \overrightarrow{\chi}$ we get a giant linear system of equations linking the measurement outcomes to the elements of $\chi$:
\begin{equation}
\overrightarrow{\lambda} = A \overrightarrow{\chi}.
\end{equation}
It is now our goal to solve for $\chi$.
Intermediary: some notes on the sets $\{P_{i}\}$ and $\{P_{j}\}$
The set of states from which we build $\{P_{i}\}$, known as the preparation set, needs to have (at first glance) every Pauli eigenstate for every qubit there is, resulting in $6^{n}$ different states. However, building all the Pauli matrices can be done by using any set of states that forms a basis for the space of density matrices. A straightforward choice is to use $\{|0\rangle, |1\rangle, |+\rangle, |+i\rangle\}^{\otimes n}$ - both eigenstates of the $Z$ operator and the $+1$ eigenstates of the $X$ and $Y$ operators. This results in $4^{n}$ different input states.
The set of measurements from which we build all $\{P_{j}\}$ can be as simple as $\{X,Y,Z\}^{\otimes n}$; the "$I$"-measurements can be inferred from the outcomes of those measurements. I link a previous answer of mine on QST where I explain how to build all the $\{P_{j}\}$'s from only these measurements; there I explain it in detail for $2$-qubit QST but the generalization to a higher number of qubits is very straightforward.
All in all, we thus need $4^{n} \times 3^{n} = 12^{n}$ different pairs of measurement operators and preparation states to perform QPT.
Solving for $\chi$
Solving for $\chi$ in our system of linear equations can be as straightforward as inverting $A$ ($A$ is indeed invertible). Moreover, by using the Pauli basis, $A$ is also unitary and even Hermitian so $\overrightarrow{\chi}$ is readily calculated as:
\begin{equation}
\overrightarrow{\chi} = A \overrightarrow{\lambda}.
\end{equation}
This, however, does not respect the CPTP constraints in any way.
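As a minimal numerical illustration of this inversion (a sketch for a single qubit: the "measured" $\lambda_{ij}$ are generated analytically for the identity channel, whose process matrix should come out as $\chi_{00} = 1$ with all other entries zero):

```python
import numpy as np
from itertools import product

# Single-qubit Pauli basis {I, X, Y, Z}
P = [np.eye(2),
     np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]),
     np.diag([1, -1])]

# A_{(ij,mn)} = tr[P_j P_m P_i P_n], flattened to a 16x16 matrix
A = np.zeros((16, 16), dtype=complex)
for (i, j), (m, n) in product(product(range(4), repeat=2), repeat=2):
    A[4 * i + j, 4 * m + n] = np.trace(P[j] @ P[m] @ P[i] @ P[n])

# "Measured" expectation values for the identity channel: lambda_ij = tr[P_j P_i]
lam = np.array([np.trace(P[j] @ P[i]) for i in range(4) for j in range(4)])

chi = np.linalg.solve(A, lam).reshape(4, 4)  # linear inversion, no CPTP enforced
```

Here a generic linear solver is used on the raw (unnormalized) $A$; the recovered $\chi$ is exact only because the simulated data are noiseless.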
Luckily, it can be shown that as long as all measurements that were performed had an actual outcome, the TP constraint on $\chi$ is automatically met when using this method.
However, the CP constraint is not automatically met; this means that the calculated $\chi$ might very well have negative eigenvalues. This stems from statistical noise on our estimates of $\lambda_{ij}$, which can be reduced by performing more repeated measurements. Note however that statistical noise will pretty much always persist (it scales exponentially with the number of qubits considered in the QPT, so you need to repeat all measurements exponentially often to maintain a constant fidelity). Methods to solve this problem are therefore needed.
A very straightforward but also less-then-ideal method of finding a positive semidefinite version of a non-CP $\chi$ is by taking a convex combination with the Identity process such that every eigenvalue of $\chi$ becomes non-negative. Let $\lambda_{\mathrm{min}} < 0$ be the smallest (largest negative) eigenvalue of $\chi$. Then the process matrix $\chi^{*}$ has only nonnegative eigenvalues:
\begin{equation}
\chi^{*} = \frac{1}{\mathrm{tr}[\chi]+2^{2n}|\lambda_{\mathrm{min}}|}\big(\chi + |\lambda_{\mathrm{min}}|I\big)
\end{equation}
The fraction before the sum is a renormalization constant. Of course there are other methods of bringing the eigenvalues to non-negative values, but those will most likely break the TP constraint (this method does in fact not).
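In code, this shift amounts to one eigenvalue computation (a sketch; the input $\chi$ below is a made-up noisy single-qubit estimate, not real data):

```python
import numpy as np

def shift_to_cp(chi):
    """Mix chi with the identity until its smallest eigenvalue is non-negative,
    then renormalize as in the formula above (chi is 4^n x 4^n)."""
    lam_min = np.min(np.linalg.eigvalsh(chi))
    if lam_min >= 0:
        return chi
    d = chi.shape[0]  # 4^n = 2^(2n)
    return (chi + abs(lam_min) * np.eye(d)) / (np.trace(chi).real + d * abs(lam_min))

# Hypothetical noisy estimate with one (unphysical) negative eigenvalue
chi_noisy = np.diag([1.0, 0.02, -0.02, 0.0])
chi_star = shift_to_cp(chi_noisy)
```

Note that the result keeps unit trace, consistent with the claim that this particular fix does not break the TP constraint.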
(I would expect that about 1-100 million repetitions per prepare-measure pair for $3$-qubit QPT will bring the negative eigenvalues to something small enough for this method to give an okay-ish fidelity. As I mentioned, QPT is hard.)
There exist more elaborate methods that I won't go into in much detail here. If you are familiar with semidefinite programming, solving the equation $\overrightarrow{\lambda} = A \overrightarrow{\chi}$ with $\chi$ subject to our CPTP constraints is exactly a problem that can be handled by this method. It should be noted that a proper optimization method will reduce the needed number of repeated measurements (often called shots) drastically when compared to the simple linear inversion mentioned above - I can therefore highly recommend optimization in one way or another if you have a proper interest in QPT.
Another method can be found in this paper, where the authors make use of repeated projection onto the space of CP and TP maps, respectively. Proper proofs that this always converges to a proper minimum I have not yet seen, but I don't rule out the possibility either.
Another approach is to use the Choi-Jamiolkowski isomorphism, mentioned above regarding the Choi matrix. Here one would not optimize for a quantum channel, but for a quantum state (i.e. Quantum State Tomography). QST is much more popular and therefore many more optimization methods exist - I won't go into them here. It should be noted that this approach should be treated very carefully, as general quantum states do not always correspond to a proper quantum channel - which means that the optimization method used for the QST step may output an estimate that is not a valid quantum channel.
Further reference or reading
My own MSc. thesis (Blatant self-promotion, please excuse me:)) can be found here, where I elaborate more on QPT in chapter 4. Chapter 3 might be a good read as an introduction to the terminology I use in chapter 4.
The text might be a bit convoluted at points but I feel that it introduces most of the basics. I have another text that I like better but I am not sure if I can distribute it; I will check. Furthermore, please feel free to ask me any subsequent questions.
Footnotes
Note that using a different basis does in fact transform the $\chi$ matrix. We almost always use the Pauli basis though. Also, note that the Paulis are, in fact, not density matrices - but they do form a basis for the space of Hermitian matrices, of which the density matrices are a subset.
Of course the Pauli operators are not valid density matrices since they are traceless, but using linearity we can combine the eigenstates of a Pauli operator to make that operator. If $\{|\psi_{+}\rangle\}$ & $\{|\psi_{-}\rangle\}$ are the $+1$- & $-1$ eigenstates of a Pauli operator $P$, we can combine them as such:
\begin{equation}
\Lambda(P) = \sum_{+}\Lambda(|\psi_{+}\rangle\langle \psi_{+}|) - \sum_{-}\Lambda(|\psi_{-}\rangle\langle \psi_{-}|).
\end{equation} | {
"domain": "quantumcomputing.stackexchange",
"id": 1476,
"tags": "quantum-gate, nielsen-and-chuang, density-matrix, state-tomography, quantum-process-tomography"
} |
Symmetry and symmetric vacuum in Quantum Field theory | Question: In the start of section 28.2 of Schwartz's Quantum Field theory and the Standard Model, Schwartz states that for a conserved charge, $\hat{Q}$, which generates the corresponding symmetry transformation, we have that $[\hat{H},\hat{Q}]=0$.
Moreover, he states that for a symmetric vacuum (with respect to the symmetry transformation generated by $\hat{Q}$), $|0\rangle_{sym}$ we have
$$\hat{Q}|0\rangle_{sym}=0$$
Other authors state this in words in the lines of "the symmetry transformations leave the vacuum invariant".
But, if the vacuum is invariant under $\hat{Q}$, couldn't we have the more general $\hat{Q}|0\rangle_{sym}=q|0\rangle_{sym}$ for some constant (charge) $q$? This way, the action of $\hat{Q}$ on the symmetric vacuum would give back the vacuum. This is consistent with $[\hat{H},\hat{Q}]=0$ since by definition the vacuum is the state of lowest energy.
One attempt to answer this question is that if $\hat{Q}|0\rangle_{sym}=q|0\rangle_{sym}$, then we can just redefine $\hat{Q}$ to be $\hat{Q'}=\hat{Q}-q$ for all the $\hat{Q}$s that generate the symmetries of the vacuum. Is this all there is? A trivial redefinition done for convenience?
Answer: Schwartz's statement is correct; I think your statement results from a confusion between finite symmetries and symmetry generators.
To make this concrete, consider a rotation operator $\hat{R}(\theta)$. The state that the vacuum is invariant under rotations is
$$\hat{R}(\theta) |0 \rangle = |0 \rangle.$$
But the rotation operator can also be written as
$$\hat{R}(\theta) = e^{i \theta \hat{J}}$$
where $\hat{J}$ is some angular momentum operator. Expanding to first order in $\theta$, we have
$$(1 + i \theta \hat{J}) |0 \rangle = |0 \rangle$$
which implies that
$$\hat{J} |0 \rangle = 0.$$
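A toy numerical check of this implication, using a diagonal stand-in generator whose zero eigenvector plays the role of the vacuum (a sketch, not tied to any particular field theory):

```python
import numpy as np

theta = 0.7
J = np.diag([0.0, 1.0, -1.0])      # generator; first basis state has J-eigenvalue 0
vac = np.array([1.0, 0.0, 0.0])    # the "vacuum"

# R = exp(i * theta * J), computed entrywise since J is diagonal
R = np.diag(np.exp(1j * theta * np.diag(J)))

assert np.allclose(J @ vac, 0)     # J|0> = 0 ...
assert np.allclose(R @ vac, vac)   # ... is what makes exp(i theta J)|0> = |0>
```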
In Schwartz's statement, $\hat{Q}$ is a symmetry generator like $\hat{J}$. Invariance under $\hat{Q}$ means that the exponential of $\hat{Q}$, not $\hat{Q}$ itself, leaves the vacuum invariant. | {
"domain": "physics.stackexchange",
"id": 57132,
"tags": "quantum-field-theory, operators, symmetry, vacuum, symmetry-breaking"
} |
Effect of water pressure on sinking objects | Question: As I understand, water pressure increases as we go towards the bottom of the ocean. So if an object* is thrown into water and it starts sinking with some speed, does the sinking object's acceleration increase with increasing pressure?
Also does water solidify under very high pressure and if so is that high pressure achieved at the ocean bottom? If so, would a sinking object stop sinking and rest on the "solidified" water, if we have a deep enough ocean?
e.g. black box of a submerged plane
Edit: I have accepted best current answer, if you think you can write a better answer, please do so.
Answer: That is right: the deeper you go, the stronger the pressure. But the pressure is not just in one direction, it acts in every direction. So the velocity will decrease in most cases. You also have to be aware of the density of the object.
You could read the classical description of submerged objects, "Thrust", on Wikipedia. This is a classical effect; in real cases the relation between depth and pressure is not always linear.
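For scale, the linear hydrostatic estimate $P = P_0 + \rho g h$ gives, with rounded seawater figures (a sketch):

```python
rho = 1025.0   # seawater density, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
P0 = 1.013e5   # surface (atmospheric) pressure, Pa

def pressure_at_depth(h_m):
    """Hydrostatic pressure assuming constant density."""
    return P0 + rho * g * h_m

mariana = pressure_at_depth(10_900)   # ~1.1e8 Pa, i.e. ~0.11 GPa
```

That is roughly 0.11 GPa at the deepest ocean point, well below the gigapascal-scale pressures usually quoted for pressure-induced ice phases.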
Here is an example of the pressure acting in every direction.
And answering the other question. That is possible under some specific conditions.
You could have solid water. But I don't know exactly if our planet is capable of reaching that rare condition.
In addition I have found information about an exoplanet that matches those conditions: NatGeo.
Regards. | {
"domain": "physics.stackexchange",
"id": 18495,
"tags": "fluid-dynamics, pressure, water"
} |
Billiard ball with side spin | Question: A cue ball is travelling along a snooker table. Initially, it has only side spin (yaw). As it travels it will develop a rolling spin (pitch).
Can the ball develop any (roll) and thus move off the initial linear trajectory?
My maths knowledge says no, but my physics is rather poor.
Physically, this would be equivalent to playing the perfect side spin shot (perfectly flat cue through the horizontal equator)
Thanks in advance for any help.
Answer: Yes it moves sideways, in the direction of the side spin.
The spin angular momentum interacts with the newly imposed pitch, resulting in a longitudinal spin (rotation along the main direction of motion) that makes the ball deviate. See for example Wikipedia
Another effect is the interaction with air; a rotating ball tends to curve its trajectory toward the rotation side, as in "spin" shots in baseball and tennis. But I would say that effect is weak in billiards.
"domain": "physics.stackexchange",
"id": 57353,
"tags": "newtonian-mechanics, rotational-dynamics, friction, rotational-kinematics"
} |
Problem designing a specific filter | Question: I have the next problem.
$H_{c1}(j\omega )$ is the ideal anti-aliasing filter and $H_{c2}(j\omega )$ is a real one. I'm asked to design $H(e^{j\Omega })$ so that $y[n]$ in the second diagram (the one on the right) is exactly the same as $x[n]$ in the first diagram (the one on the left).
I did something but I'm pretty sure that it isn't right. I found the equation for $H_{c2}(j\omega )$ as a function of $\omega$. Then I normalized the $\omega$-axis due to the A/D converter and divided the magnitude by $T$. If I didn't make any mistakes while doing the algebra, we get that
$$H_{c2}(e^{j\Omega })=\frac{9}{10}\cdot \frac{1}{\pi - \omega _{p}T}\cdot \Omega + \frac{9}{10}\cdot \frac{1}{\pi - \omega _{p}T}\cdot \pi + \frac{1}{10}\hspace{0.5cm} for\hspace{0.5cm} -\pi < \Omega < -\omega _{p}T$$
After that, I just thought of finding the inverse function of $H_{c2}(e^{j\Omega })$ for $-\pi < \Omega < -\omega _{p}T$... And that was it. Let $B(\Omega)=\frac{1}{\frac{9}{10}\cdot \frac{1}{\pi - \omega _{p}T}\cdot \Omega + \frac{9}{10}\cdot \frac{1}{\pi - \omega _{p}T}\cdot \pi + \frac{1}{10}}$. Then,
$$H(e^{j\Omega }) =
\left\{
\begin{array}{ll}
B(\Omega ) & \mbox{if } -\pi <\Omega <-\omega _{p}T \\
1 & \mbox{if } -\omega _{p}T < \Omega<\omega _{p}T \\
B(-\Omega) & \mbox{if } \omega _{p}T < \Omega<\pi
\end{array}
\right.$$
The problem with this is that I'm pretty sure that the filter I "designed" has no inverse transform. I think that there is a smarter way of approaching this exercise but I just can't see it.
Answer: I just found this exact problem in Oppenheim-Schafer's Discrete Time Signal Processing (2nd edition). For those interested, it's exercise 4.56.
I've found in Internet the solutions for those problems. The solution for 4.56 that the book gives us is
So the definition of the filter written in the original question was correct. | {
"domain": "dsp.stackexchange",
"id": 3541,
"tags": "fourier-transform, filter-design"
} |
Binary Search recursive & iterative solution | Question: I have implemented binary search solution by using recursion and iterative approach.
I there any way to improve it?
Tests
package test;

import main.algorithms.BinarySearchDemo;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

public class BinarySearchTest {

    BinarySearchDemo binarySearchDemo;
    int[] array;

    @Before
    public void setUp() {
        binarySearchDemo = new BinarySearchDemo();
        array = new int[]{2, 3, 5, 6, 9, 11, 12, 15, 17, 21};
    }

    @Test
    public void testBinarySearchJavaAPI() {
        Assert.assertEquals(2, binarySearchDemo.binarySearchJavaAPI(5, array));
    }

    @Test
    public void testBinarySearchImpl() {
        Assert.assertEquals(2, binarySearchDemo.binarySearchImpl(5, array));
    }
}
Implementation
package main;

public class BinarySearch {

    private static boolean binarySearchRecursive(int[] array, int i) {
        return binarySearchRecursive(array, 0, array.length - 1, i);
    }

    private static boolean binarySearchRecursive(int[] array, int left, int right, int item) {
        if (left > right) {
            return false;
        }
        int pivot = (right - left) / 2 + left;
        if (item == array[pivot]) {
            return true;
        } else if (item < array[pivot]) {
            return binarySearchRecursive(array, left, pivot - 1, item);
        } else {
            return binarySearchRecursive(array, pivot + 1, right, item);
        }
    }

    private static boolean binarySearchIterative(int[] array, int item) {
        int left = 0;
        int right = array.length - 1;
        while (left <= right) {
            int mid = left + (right - left) / 2;
            if (array[mid] == item) {
                return true;
            } else if (item < array[mid]) {
                right = mid - 1;
            } else {
                left = mid + 1;
            }
        }
        return false;
    }
}
Answer: You haven't indicated the purpose for writing these implementations. The natural alternative would be to use Arrays.binarySearch unless this is purely for learning purposes.
Testing
Your test suite is far from exhaustive, testing that both methods can find the item at position 2 is very minimal (at the moment both implementations could simply always return 2). Consider adding additional tests for edge cases (end of array being searched, beginning of array being searched for example). Consider what behaviour you're expecting if the item isn't in the array being searched and adding test cases for those scenarios.
array isn't a great name for the source data in your tests. sortedArray or dataToSearch would be a bit better however consider declaring the array in each test method. In a larger test file, scrolling up and down to see the data in the array, versus the test that's being performed adds extra overhead, which can be undesirable.
Implementations
All of your methods are declared as private. Some of them are meant to be called from outside the class, some of them (such as the recursive call) probably aren't. This should be reflected in the access declarations.
Naming is an important aspect of development. The better your names are and the more consistent you are with them, the easier your code is to follow. You're consistent with your use of left and right, but mid is called pivot in the alternate implementation. The item to find is either i or item. IDE's will often give parameter name prompts even when there is no Java Doc. If you name your parameters well, it's clear that the parameters are (int[] arrayToSearch, int itemToFind)
Doh...
I should really have noticed this earlier... but both of your methods return boolean, however you're asserting they both return 2, which isn't a boolean... Think about what you want the method to return...implement that...and write the tests accordingly... | {
"domain": "codereview.stackexchange",
"id": 39011,
"tags": "java, unit-testing, binary-search"
} |
Any other Colligative properties? | Question: Are there any other colligative properties other than lowering of vapor pressure, osmotic pressure, boiling point elevation, and freezing point depression? I was wondering whether surface tension of an aqueous solution is colligative or not, but I mostly think it should be. Can anyone help?
Edit: I am pretty aware of what a colligative property is. It's a property of the solution that doesn't depend on the nature of the solute, but only on the concentration of the solute and the nature of the solvent.
Answer: Firstly I would like to expand a bit on what a colligative property is, in order to bring more understanding and also be able to apply this to a named physical property.
Colligative properties depend mainly on the number of particles in a solution.
The values of the colligative properties are approximately the same for equal
concentrations of different constituents in solution regardless of the species or chemical nature of the constituents. Examples are osmotic pressure, vapor pressure lowering, freezing point depression, and boiling point elevation (as you have noted).
Furthermore, physical properties of substances are not just limited to the colligative nature. In fact a number of properties have been defined:
additive (depend on the total contribution of the atoms in the molecule or on the sum of the properties of the constituents in a solution, e.g. molecular weight).
constitutive (depend on the arrangement and partly on the number and kind of atoms within a molecule, e.g. interfacial characteristics).
other thermodynamic physical properties are known (extensive, intensive)
Having gained this background, it is now evident that surface tension (a force that pulls the molecules of the interface together) is dependent on the arrangement and also on the nature of the atoms or molecules, and is therefore a constitutive property rather than a colligative property.
I have included a screenshot to better visualise the phenomenon:
Hope this helps. | {
"domain": "chemistry.stackexchange",
"id": 7930,
"tags": "solutions"
} |
Why is DFT magnitude less than expected? | Question: Code for signal:
import numpy as np

def my_s(t):
    return 2*np.sin(2*np.pi*50*t) + np.sin(2*np.pi*70*t+np.pi/4)
N = int(70*2.5) # sampling rate
T = 1 / N # sampling interval
x = np.arange(0, np.pi/9, T)
y = my_s(x)
Why is the first true bin less than 2? Why are there a lot of false bins?
What is a phase graph? What can be understood from it?
How can I calculate the period?
Answer: Remember that a discrete signal has two directions of discreteness:
limited duration in time: the number of samples you have
limited sampling rate: the interval between each sample
The sampling rate will limit your maximum frequency via Nyquist, and the duration will limit the frequency resolution.
Your signal is composed of 62 samples over a duration of pi/9 ~ 0.349 s, which gives you frequency bins for the FFT of 1/duration ~ 2.87 Hz.
As you can see from your graph, you don't have a frequency bin centred on 50 Hz, but on 50.8 Hz (the 17th bin).
If you change the frequency in your signal definition to 50.8, you will see that the peak of the FFT is much more pronounced (green squares for 50.8 Hz, blue circles for 50 Hz), as the signal fits better into the bins:
Of course, if you're measuring a real signal, you can't just change its frequency to match your bins. You can instead measure the signal for a longer time, so that the frequency resolution becomes greater, and the effects of the edges of the signal become less important. Here with a signal 9 times longer:
Another problem with DFT, is the effect of the start and stop of your signal, which can be reduced by multiplying with a Hamming window:
YH=y*np.hamming(len(x))
You can see that your peaks at 50 and 70 Hz are much more pronounced. Note that because the Hamming window has values between 0 and 1, the amplitude of your signal is reduced, so you need to divide the output of the FFT by the mean of the Hamming window.
You can also combine both techniques, use other types of windows.... See the links in the comment by OverLordGoldDragon above. | {
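To make the bin arithmetic above concrete, here is a small standard-library check (my own illustration; numpy's fft would report the same spacing) of the sample count and of the FFT bin nearest to the 50 Hz component, using the parameters from the question:

```python
import math

fs = int(70 * 2.5)                         # sampling rate from the question: 175 Hz
T = 1 / fs                                 # sampling interval
n_samples = math.ceil((math.pi / 9) / T)   # points produced by np.arange(0, pi/9, T)
df = fs / n_samples                        # FFT bin spacing in Hz (~ 1/duration)
nearest = round(50 / df)                   # index of the bin closest to 50 Hz
print(n_samples, round(df, 3), round(nearest * df, 1))  # 62 2.823 50.8
```

Since 50 Hz falls between bins, its energy leaks into the neighbouring bins, which is exactly the "false bins" effect described above.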
"domain": "dsp.stackexchange",
"id": 11680,
"tags": "discrete-signals, signal-analysis, dft, scipy, magnitude"
} |
Importance of halogens and nitrogen in degree of unsaturation | Question: Why is oxygen (or other elements) isn't present in the formulae for calculation of the value of degree of unsaturation $(\mathrm{DU})$ as it usually more common than nitrogen and halogens.
And whats so special about nitrogen and halogens to be in the formulae
$$\mathrm{DU} = C + 1 - \frac{H + X - N}{2},$$
where $C,$ $H,$ $N,$ $X$ are the numbers of carbon, hydrogen, nitrogen and halogen atoms present in the compound, respectively?
Answer: The degree of unsaturation is defined as the index of hydrogen deficiency (IHD) that determines the total number of rings and π bonds. It means the removal of two hydrogen atoms from a molecule is equal to one added $\mathrm{DU}.$
$$\text{rings} + \pi~\text{bonds} = C - \frac{H}{2} - \frac{X}{2} + \frac{N}{2} + 1$$
If you add a halogen to a molecule, you need to remove a hydrogen atom, so a halogen counts like a hydrogen and lowers the DU count by one half. If you do the same for a nitrogen atom, you remove one hydrogen atom from the molecule but add two others to the nitrogen (for amines as a saturated substituent), so you have one more hydrogen overall, which the $+N/2$ term compensates by raising the DU count by one half. As a result, the numbers of halogen and nitrogen atoms must be considered in the equation; but for other atoms such as oxygen $(\ce{C-OH})$ and sulfur $(\ce{C-SH})$, there is no change in the total number of hydrogen atoms when adding them as saturated substituents to a molecule.
In the case of the unsaturated substituents such as imines and carbonyls, it's clear that the total number of hydrogen atoms decreases, so the $\mathrm{DU}$ increases. | {
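The equation is simple to evaluate programmatically; this small Python helper (my own illustration, not part of the original answer) checks a few textbook cases:

```python
def degree_of_unsaturation(c, h=0, x=0, n=0):
    """DU = C + 1 - (H + X - N)/2, counting rings plus pi bonds."""
    return c + 1 - (h + x - n) / 2

# Benzene C6H6: one ring + three pi bonds
print(degree_of_unsaturation(6, h=6))        # 4.0
# Chloroethene C2H3Cl: one pi bond; the halogen counts like a hydrogen
print(degree_of_unsaturation(2, h=3, x=1))   # 1.0
# Pyridine C5H5N: the nitrogen term restores the "missing" hydrogen
print(degree_of_unsaturation(5, h=5, n=1))   # 4.0
```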
"domain": "chemistry.stackexchange",
"id": 14161,
"tags": "organic-chemistry, bond"
} |
Partition function - q-number or c-number, classical definition, etc | Question: Why is the partition function
$$Z[J]=\int\ \mathcal{D}\phi\ e^{iS[\phi]+i\int\ d^{4}x\ \phi(x)J(x)}$$
also called the generating function?
Is the partition function a q-number or a c-number?
Does it make sense to talk of a partition function in classical field theory, or can we define partition functions only in quantum field theories?
Is the source $J$ a q-number or a c-number?
Answer: It is called a generating function, because one can use it to generate $n$-point functions with the aid of functional derivatives with respect to the source $J$. For instance, one can compute the two point function as follows:
$$ \langle\phi(x_1)\phi(x_2)\rangle = \frac{\delta}{\delta J(x_1)} \frac{\delta}{\delta J(x_2)} Z[J]_{J=0} . $$
The result would be a $c$-number. Hence, the generating function itself is also a $c$-number.
Formally one can treat classical field theories also with the aid of such generating functions, provided that, if you set $J=0$, you recover the original theory.
These generating functions are based on the path integral approach, in which fields can be interpreted as $c$-numbers as opposed to the $q$-numbered operator-valued fields used in the second quantization approach. As a result, the source $J$ is also interpreted as a $c$-number.
"domain": "physics.stackexchange",
"id": 34415,
"tags": "quantum-field-theory, partition-function"
} |
Quick question on Deriving Klein–Gordon equation from Dirac equation | Question: On page 172 of Schwatz’s QFT book, he derives the Klein–Gordon equation from Dirac equation as following:
$$(i \not\partial +m) (i \not\partial -m)\psi=\left(-\frac{1}{2} \partial_\mu \partial_\nu \{\gamma^\mu, \gamma^\nu\}-\frac{1}{2} \partial_\mu \partial_\nu [\gamma^\mu, \gamma^\nu]-m^2\right)\psi=-(\square +m^2)\psi =0$$
How does the second term that containing commutator of gamma matrices vanish?
Answer: $\partial_\mu\partial_\nu$ is symmetric and $[\gamma_\mu,\gamma_\nu]$ is antisymmetric. | {
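Spelled out, relabelling the dummy indices $\mu \leftrightarrow \nu$ and then using the symmetry of the derivatives and the antisymmetry of the commutator:

$$\partial_\mu \partial_\nu [\gamma^\mu, \gamma^\nu] = \partial_\nu \partial_\mu [\gamma^\nu, \gamma^\mu] = -\,\partial_\mu \partial_\nu [\gamma^\mu, \gamma^\nu] \;\Longrightarrow\; \partial_\mu \partial_\nu [\gamma^\mu, \gamma^\nu] = 0.$$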
"domain": "physics.stackexchange",
"id": 65193,
"tags": "quantum-mechanics, homework-and-exercises, dirac-equation, klein-gordon-equation, dirac-matrices"
} |
Makefile to build and debug a C++ console app | Question: I had to create this makefile to build and debug a C++ console app. I just need some hints and tips on how I can organize my makefile.
CC=g++
CFLAGS=-c -Wall
LDFLAGS=
SOURCES=helloWorld.cpp
OBJECTS=$(SOURCES:.cpp=.o)
EXECUTABLE=helloWorld
all: $(SOURCES) $(EXECUTABLE)
debug: CXXFLAGS += -DDEBUG -g
debug: CCFLAGS += -DDEBUG -g
debug: helloWorld
clean:
rm *o helloWorld
$(EXECUTABLE): $(OBJECTS)
$(CC) $(LDFLAGS) $(OBJECTS) -o $@
.cpp.o:
$(CC) $(CFLAGS) $< -o $@
Answer:
Making sources?
The line
all: $(SOURCES) $(EXECUTABLE)
asks make to build $(SOURCES), in this case helloWorld.cpp. Is it possible to build it? Strictly speaking there are situations when you do want to build the source file (e.g. fetch it from git/cvs/sccs) but it is not applicable here: no rule is provided. Generally you don't want to build something which doesn't depend on anything. In any case, let make deduce; this is what it is good for.
all: $(EXECUTABLE)
is what you want.
Be consistent
all depends on $(EXECUTABLE), but debug depends on helloWorld. Once you defined a macro, use it everywhere.
Synonymous targets
Consider the scenario: make; ./helloWorld; something goes wrong and you want to debug; make debug: everything is up to date. To have a debug build you must intervene with make clean. A good practice is to separate debug and release builds into different directories.
Automatic dependencies
In your example the .o file depends only on a corresponding .cpp. In real life the .cpp has some #includes - and the .o must depend on them all. Otherwise you will end up with an inconsistent build. Listing the .h dependencies manually is tedious and error prone. The standard practice is to let the compiler generate them automatically. For example, g++ has -MM, -MT, etc options just for this purpose:
DEPS := $(SOURCES:.cpp=.d)
.cpp.d:
$(CC) $(CXXFLAGS) -MM -MT $(@:.d=.o) -o $@ $<
-include $(DEPS)
-c doesn't belong to CFLAGS
-c is typically not listed in CFLAGS: you may want to generate various outputs (e.g. preprocessed source, assembly source, dependencies, documentation, etc) with the same set of flags. The way to achieve this is to specify -c or -MM or -S or whatever separately from other flags, e.g.
.cpp.o:
$(CC) -c $(CFLAGS) ....
.cpp.s:
$(CC) -S $(CFLAGS) ....
etc.
CC
Traditionally a C++ compiler is referred to as CXX and uses CXXFLAGS. CC and CFLAGS are reserved for plain C.
"domain": "codereview.stackexchange",
"id": 17717,
"tags": "c++, makefile, make"
} |
Get future events from Pinnacle API | Question: My program makes API requests to Pinnacle Sports to retrieve future events and saves them to the database.
Questions:
Do I manage the database resources correctly? It takes about 3 seconds to check whether there are duplicate entries so that apparently it is very inefficient. The previous approach was to have global constants with the database connection and cursor and it worked much faster, though probably not that safe.
Do I handle possible request errors the right way? Some problems that come to my mind: no Internet connection, HTTP error, empty response.
Should I get rid of the raw loop in save_fixtures and to introduce a function for dealing with each league separately?
How can I track the status of a script? At the moment I output everything in the console, though maybe there are more convenient ways for doing that like logging or something.
Code:
auth.py
"Creates signature and headers for interacting with Pinnacle API"
import base64
def create_signature(username, password):
"Given username and password creates base64 encoded signature username:password"
return base64.b64encode(f'{username}:{password}'.encode('utf-8'))
def create_headers(signature):
"Given a signature creates required headers for interacting with Pinnacle API"
return {
'Content-length' : '0',
'Content-type' : 'application/json',
'Authorization' : "Basic " + signature.decode('utf-8')
}
database.py
"Functionality for interacting with the database."
import pymysql
from contextlib import contextmanager
SERVER = 'localhost'
USER = 'root'
PASSWORD = ''
DATABASE = 'bets'
@contextmanager
def get_connection():
"Creates database connection."
connection = pymysql.connect(host=SERVER, user=USER, password=PASSWORD, db=DATABASE)
try:
yield connection
finally:
connection.close()
def record_fixture(league_id, fixture):
"Records given fixture to the database."
with get_connection() as con:
with con.cursor() as cursor:
event_id = fixture['id']
starts = fixture['starts'][0:10] # Example: 2019-08-22
home = fixture['home']
away = fixture['away']
sql = "INSERT INTO fixture (event_id, league_id, match_date, \
home_team, away_team) VALUES (%s, %s, %s, %s, %s)"
cursor.execute(sql, (event_id, league_id, starts, home,
away))
con.commit()
def is_duplicated_entry(event_id):
"Returns True if an entry with given event_id already exists"
with get_connection() as con:
with con.cursor() as cursor:
cursor = con.cursor()
sql = "SELECT * from fixture WHERE event_id = %s"
result = cursor.execute(sql, event_id)
return result != 0
get_fixtures.py
"""Obtains fixture list from Pinnacle API for the given list of leagues
and records them to the database."""
import json
import datetime
import time
import requests
import auth
import database
LEAGUES = ['1980', '5487', '2436', '5488', '2196', '5490', '1842', '5874',
'2627', '2630', '5452', '6263', '5938']
USERNAME = ""
PASSWORD = ""
SIGNATURE = auth.create_signature(USERNAME, PASSWORD)
HEADERS = auth.create_headers(SIGNATURE)
DELAY = 60
def get_fixtures(leagues):
"Gets fixtures list for the given list of leagues."
url = "https://api.pinnacle.com/v1/fixtures?sportId=29&leagueIds=" + ','.join(leagues)
try:
response = requests.get(url, headers=HEADERS)
except requests.ConnectionError:
print(f"{datetime.datetime.now()} No Internet connection")
return None
except requests.HTTPError:
print(f"{datetime.datetime.now()} An HTTP error occurred.")
return None
if response.text == '':
print(f"{datetime.datetime.now()} There are no fixtures available")
return None
fixtures = json.loads(response.text)
return fixtures
def save_fixtures(fixtures):
"Records fixtures to the database and notifies about the new fixtures."
if not fixtures is None:
for league in fixtures['league']:
for fixture in league['events']:
if not database.is_duplicated_entry(fixture['id']):
notify_new_fixture(fixture)
database.record_fixture(league['id'], fixture)
def update_fixtures(leagues, delay=DELAY):
"""
Every DELAY seconds retrieves fixture list for the given leagues
and records them to the database.
"""
while True:
fixtures = get_fixtures(leagues)
save_fixtures(fixtures)
print(f"{datetime.datetime.now()}")
time.sleep(delay)
def notify_new_fixture(fixture):
""" Prints a notification about a new fixture. """
print(f"{datetime.datetime.now()} {fixture['id']} {fixture['home']} - {fixture['away']}")
if __name__ == '__main__':
update_fixtures(LEAGUES, DELAY)
Answer: Type hints
def create_signature(username, password):
can be
def create_signature(username: str, password: str) -> bytes:
Quote style consistency
Pick single or double:
'Authorization' : "Basic "
Connection string security
Please (please) do not store a password as a hard-coded source global:
SERVER = 'localhost'
USER = 'root'
PASSWORD = ''
DATABASE = 'bets'
for so many reasons. At least the password - and often the entire connection string - are externalized and encrypted, one way or another. I had assumed that there are high-quality wallet libraries that can make this safe for you, but have struggled to find one, so I have asked on Security.
A correct context manager!
@contextmanager
def get_connection():
Thank you! Too bad pymysql didn't bundle this...
String continuation
sql = "INSERT INTO fixture (event_id, league_id, match_date, \
home_team, away_team) VALUES (%s, %s, %s, %s, %s)"
I find more legible as
sql = (
"INSERT INTO fixture (event_id, league_id, match_date, "
"home_team, away_team) VALUES (%s, %s, %s, %s, %s)"
)
with a bonus being that there won't be stray whitespace from your indentation.
Requests sugar
This:
url = "https://api.pinnacle.com/v1/fixtures?sportId=29&leagueIds=" + ','.join(leagues)
should not bake in its query params. Pass those as a dict to requests like this:
https://requests.readthedocs.io/en/master/user/quickstart/#passing-parameters-in-urls
Also, do not call json.loads(response.text); just use response.json(). | {
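As a rough sketch of that suggestion (standard library only here; with requests you would simply pass the same dict via the params= keyword and then call response.json()):

```python
from urllib.parse import urlencode

leagues = ['1980', '5487']  # illustrative subset of LEAGUES
base_url = "https://api.pinnacle.com/v1/fixtures"
query = {"sportId": 29, "leagueIds": ",".join(leagues)}
url = f"{base_url}?{urlencode(query)}"
print(url)  # note the comma gets percent-encoded as %2C
```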
"domain": "codereview.stackexchange",
"id": 38714,
"tags": "python, python-3.x, mysql, web-scraping, database"
} |
uvc_camera gets a terrible image and usb_camera doesn't set framerate | Question:
I have a Logitech c920 webcam that I want to use with ROS. I originally installed uvc_camera, but it gets a terrible quality image from the camera. Then I tried usb_cam, which gets a fantastic quality image, but only at 10fps (which just isn't good enough).
uvc_cam image
usb_camera image
So my question is:
How do I get a good image out of uvc OR
How do I set frame rate with usb_camera
It should be noted that guvcview gets a fantastic image using the uvc driver, though ROS does not.
launch file contents:
FOR usb_cam:
<launch>
<node name="camera" pkg="usb_cam" type="usb_cam_node" output="screen" >
<param name="video_device" value="/dev/video1" />
<param name="image_width" value="1280" />
<param name="image_height" value="720" />
<param name="pixel_format" value="yuyv" />
<param name="camera_frame_id" value="webcam" />
</node>
</launch>
FOR uvc_camera:
<launch>
<node ns="camera" pkg="uvc_camera" type="camera_node" name="uvc_camera" output="screen">
<param name="width" type="int" value="1280" />
<param name="height" type="int" value="720" />
<param name="fps" type="int" value="30" />
<param name="frame" type="string" value="webcam" />
<param name="device" type="string" value="/dev/video1" />
</node>
</launch>
Originally posted by Slippery John on ROS Answers with karma: 51 on 2012-10-31
Post score: 3
Original comments
Comment by dinamex on 2012-11-21:
I have exactly the same problem with the weird color settings for the uvc_cam. To the slow frame rate of the usb_cam: do you use an usb2.0 port? $lsusb will show you...
Comment by Avio on 2016-09-12:
I have exactly the same problem with a BW analog cam (with USB dongle on video0): vlc shows the video correctly, cheese shows the video correctly, both usb_cam and uvc_camera show purple video with different shades of green/red. No matter what pixfmt parameter I set, they always fall back to YUYV.
Comment by Avio on 2016-09-12:
After a couple of hours spent fighting with this problem, I solved the issue. I discovered that the correct pixel format for my camera was "uyvy". For future reference, I used this launch file: https://siddhantahuja.wordpress.com/2011/07/20/working-with-ros-and-opencv-draft/
Answer:
Since you have Logitech C920, which has hardware H264 encoding which is an awesome thing.
For usb_cam you can try it by changing the value of pixel_format to H264. Change <param name="pixel_format" value="yuyv" /> to <param name="pixel_format" value="H264" />.
That should work and you'll get high fps with low resource usage.
Originally posted by sarkar with karma: 36 on 2013-10-27
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Artem on 2013-10-30:
If I do as you suggested I get
[FATAL] [1383197110.463584559]: Unknown pixel format.
[ERROR] [1383197110.463895717]: VIDIOC_STREAMOFF error 9, Bad file descriptor
Comment by Artem on 2013-10-30:
Okay uvc_cam accepts it, but the frame rate is still slow. | {
"domain": "robotics.stackexchange",
"id": 11578,
"tags": "ros, usb-cam, camera, uvc-camera"
} |
Sort an array in specific bounds | Question: Given an array of size $n$: except for $\lfloor\sqrt{n}\rfloor$ of its elements, all elements are integers in the range $[\sqrt{n},\ n\sqrt{n}]$.
I need to write an algorithm that sorts this array in $\Theta(n)$.
Furthermore, could the same task be performed if it were instead given that all elements except for $\lfloor n/2\rfloor$ of them are integers in the range $[\sqrt{n},\ n\sqrt{n}]$?
And how about $\lfloor n/\log(n)\rfloor$?
(This time there are $\lfloor n - n/\log(n)\rfloor$ integers in the given range.)
Note: we cannot know where the integers are located in the array.
MY METHOD: For the first question I tried to separate the array into two arrays. For the array with the numbers inside the bounds $[\sqrt{n},\ n\sqrt{n}]$, I converted the elements to base $n$ and sorted them with counting sort. The second array is sorted with any other sorting method, and the two arrays are then merged.
My questions are:
Is my approach to the first question correct (in terms of run time and the algorithm)?
What would be the best solution for the other questions asked?
Answer:
I am not sure counting sort would work, since it does not depend on the base, but on the range of the sorted values. In this case, you would need to create an array of size $(n-1)\sqrt{n}$ to count, so you would be in $\Omega(n^{3/2})$. As suggested in the comment, radix sort may do the trick here.
It is not possible to sort in $O(n)$ for one of the other bounds. Indeed, the problematic part would be sorting values not in the bounds $[\sqrt{n}, n\sqrt{n}]$: sorting $\frac{n}{2}$ values requires $\Omega(n\log n)$ time.
On the other hand, sorting $\frac{n}{\log n}$ values can be done in $O\left(\frac{n}{\log n}\log\left(\frac{n}{\log n}\right)\right) = O(n)$ time. | {
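As a sketch of how the radix-sort suggestion works for the in-range values (my own illustration, assuming non-negative integers below $n^2$, which holds here since $n\sqrt{n} \le n^2$): two stable bucket passes in base $n$, each taking $O(n)$ time, sort them in $O(n)$ total.

```python
def radix_sort_base_n(values, n):
    """Sort integers in [0, n*n) with two stable bucket passes in base n."""
    for digit in (0, 1):                    # least-significant base-n digit first
        buckets = [[] for _ in range(n)]
        for v in values:
            buckets[(v // n**digit) % n].append(v)  # append keeps each pass stable
        values = [v for bucket in buckets for v in bucket]
    return values

print(radix_sort_base_n([15, 3, 8, 1, 12, 0], 4))  # [0, 1, 3, 8, 12, 15]
```

The out-of-range elements would still need to be sorted separately and merged in, as the question's method describes.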
"domain": "cs.stackexchange",
"id": 18397,
"tags": "algorithms, data-structures, runtime-analysis, sorting"
} |
Is every density moment of a quantum harmonic oscillator a classical harmonic oscillator? | Question: Suppose we have the usual harmonic oscillator:
$$ \hat{H}=\frac{\hat{p}^2}{2m}+\frac{1}{2}m\omega^2\hat{x}^2 $$
with an arbitrary initial state. It is well known that the first density moment $\langle\hat{x}\rangle$ behaves like a classical harmonic oscillator as a function of time, and this can be shown with Ehrenfest's Theorem.
I suspect the same applies for higher-order density moments - that is, that $\langle \hat{x}^n\rangle$ for $n\in\mathbb{N}_0$ behaves like a classical harmonic oscillator. Is this true? How can you derive it? I tried using the Heisenberg equation of motion, but there didn't seem to be a nice way to simplify the Heisenberg-picture commutators.
Essentially, I want to show that:
$$\frac{d^2\langle\hat{x}^n\rangle}{dt^2}\propto-\langle\hat{x}^n\rangle$$
which is definitely true for $n=1$, and intuition leads me to suspect is true for higher moments.
Answer: I will show that the higher moments do not behave like classical harmonic oscillators by giving an explicit counterexample.
The counterexample is about a coherent state in the quantum harmonic oscillator.
The coherent state $|\alpha\rangle$ is defined as the eigenstate
$$ \hat a |\alpha\rangle = \alpha |\alpha\rangle $$
of the annihilation operator for some complex $\alpha$.
It is well known that the time evolution keeps a coherent state coherent; in fact,
$$ |\psi(t)\rangle = |\alpha_t\rangle , \quad \alpha_t = \alpha_0\, \mathrm e^{\mathrm i \omega t} $$
is a solution of the Schrödinger equation.
For simplicity, I will assume that $\alpha_0$ is real.
The first moment is
$$ \langle \hat q \rangle = \sqrt{\frac{\hbar}{2m\omega}} \langle \alpha_t | (\hat a + \hat a^\dagger) | \alpha_t \rangle = \sqrt{\frac{2\hbar}{m\omega}} \alpha_0 \cos(\omega t) . $$
This is obviously a harmonic motion, as demanded by the Ehrenfest theorem.
However, already the second moment
\begin{align}
\langle \hat q^2 \rangle &= \frac{\hbar}{2m\omega} \langle \alpha_t | (\hat a^2 + (\hat a^\dagger)^2 + 2\hat a^\dagger \hat a + 1) | \alpha_t \rangle \\
&= \frac{\hbar}{2m\omega} \left( 2\alpha_0^2 \cos(2\omega t) + 2\alpha_0^2 + 1 \right)
\end{align}
is not harmonic. | {
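A quick numerical cross-check of this counterexample (my own illustration with arbitrary constants, standard library only): for genuinely harmonic motion the ratio $-f''/f$ is a constant, but for $\langle \hat q^2\rangle(t) \propto 2\alpha_0^2\cos(2\omega t) + 2\alpha_0^2 + 1$ it is not, because of the constant offset.

```python
import math

alpha, omega, h = 1.0, 2.0, 1e-4

def second_moment(t):
    # shape of <q^2>(t) for the coherent state, overall prefactor dropped
    return 2*alpha**2*math.cos(2*omega*t) + 2*alpha**2 + 1

def ratio(f, t):
    # -f''(t)/f(t), with f'' estimated by a central finite difference
    fpp = (f(t + h) - 2*f(t) + f(t - h)) / h**2
    return -fpp / f(t)

print(ratio(second_moment, 0.3), ratio(second_moment, 0.7))  # clearly different
print(ratio(lambda t: math.cos(omega*t), 0.3))               # ~omega**2 for true SHM
```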
"domain": "physics.stackexchange",
"id": 60626,
"tags": "quantum-mechanics, harmonic-oscillator"
} |
How to find_package of a external ros-wrapped library (g2o) | Question:
Hi,
I have written a ros-wrapper for g2o [0] and wish to do a find_package(G2O REQUIRED) and include_directories(${G2O_INCLUDE_DIRS}) in my packages CMakeLists.txt
The library includes a FindG2O.cmake file which I also copied to my projects CMakeModules folder(not sure where the file should be), but still the find_package does't work. So, how are you supposed to find_package of external libraries you have written a ros-wrapper for? I am using Ubuntu 12.04 and Fuerte.
Another question would be, if the external library you wrapped in ROS, doesn't have a Find.cmake file, do you have to write one yourself in order to use find_package?
Cheers,
Oier
[0] https://github.com/RainerKuemmerle/g2o
Originally posted by Oier on ROS Answers with karma: 145 on 2013-02-25
Post score: 0
Answer:
I'll answer myself. You don't to write a FindFoo.cmake file nor use a find_package in you CMakeLists. The bug was, that my g2o wrapper didn't export the lflags correctly in it Manifest file. For my needs, it had to look like this:
<cpp cflags="-I${prefix}/include" lflags="-L${prefix}/lib -lg2o_stuff -lg2o_core -lg2o_solver_csparse -lg2o_csparse_extension"/>
Originally posted by Oier with karma: 145 on 2013-02-26
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 13062,
"tags": "ros, rosmake, g2o, cmake"
} |
Mathematical equation syntax tree | Question: I'm writing these pieces of code to parse expressions in the context of a dice rolling application for DnD. It's pretty much my first try using TypeScript and I'm not that good in Javascript. It is also the first time I write a syntax tree.
It's been tested using the Jasmine framework.
I'll split the question in two blocks :
Parsing the expression into tokens
a.k.a converting "1+21-314" to ['1','+','21','-','314']). This part of the code is fairly easy but I find there's lot of nesting for such a simple operation, is there maybe a way to take care of this problem using the map/reduce pattern? :
class ExpressionTokenizer {
public parse(expression : string) : Array<string> {
const emptyString : string = "";
let tokens: Array<string> = [];
let buffer : string = emptyString;
for(var i = 0;i < expression.length;i++) {
let character : string = expression[i];
if(isNaN(Number(character))) {
if(buffer !== emptyString) {
tokens.push(buffer);
}
buffer = emptyString;
tokens.push(character);
} else {
buffer += character;
}
}
if(buffer !== emptyString) {
tokens.push(buffer);
}
return tokens;
}
}
Creating a syntax tree from the tokens
The algorithm yields the following results in these test cases :
"1+2+3" => Add(Add(1,2),3)
"1*2+3" => Add(Multiply(1,2),3)
"3*(2+1)" => Multiply(3,Add(2,1))
Again, I don't like nesting. I feel like in such algorithms there's no real choice but maybe I'm wrong.. Is there a way to make the code look better/more explicit?
enum Operator {
None = 0,
Add,
Substract,
Multiply,
Divide,
LeftParenthesis,
RightParenthesis
}
class Node {
constructor(public value:number | Operator, public left?:Node, public right?:Node) {
}
}
class SyntaxParser {
private static map: { [symbol: string]: Operator } = {};
private static populateMap() {
if(Object.keys(SyntaxParser.map).length > 0) return;
SyntaxParser.map["+"] = Operator.Add;
SyntaxParser.map["-"] = Operator.Substract;
SyntaxParser.map["*"] = Operator.Multiply;
SyntaxParser.map["/"] = Operator.Divide;
SyntaxParser.map["("] = Operator.LeftParenthesis;
SyntaxParser.map[")"] = Operator.RightParenthesis;
}
constructor() {
SyntaxParser.populateMap();
}
public parseTokens(expression: string): Node {
if (SyntaxParser.isNumber(expression)) {
return new Node(Number(expression));
}
var tokens = new ExpressionTokenizer().parse(expression);
var stack: Array<Node> = [];
var current: Node = new Node(Operator.None);
for (var index = 0; index < tokens.length; index++) {
var element = tokens[index];
if (SyntaxParser.isNumber(element)) {
SyntaxParser.assignNumber(current, Number(element));
continue;
}
var symbol = SyntaxParser.map[element];
switch (symbol) {
case Operator.LeftParenthesis:
stack.push(current);
current = new Node(Operator.None);
break;
case Operator.RightParenthesis:
var head = stack.pop();
head.right = current;
current = head;
break;
default:
if (current.value == Operator.None) {
current.value = symbol;
} else {
var newNode = new Node(symbol, current, null);
current = newNode;
}
break;
}
}
while (stack.length > 0) {
current = stack.pop();
}
return current;
}
private static isNumber(value: any): boolean {
return !isNaN(Number(value));
}
private static assignNumber(current: Node, value: number) {
if (current.left == null) {
current.left = new Node(value);
} else {
current.right = new Node(value);
}
}
}
Answer:
It's pretty much my first try using TypeScript and I'm not that good in Javascript.
TypeScript is a superset of JavaScript. You're essentially already writing JavaScript if you write TypeScript.
If I were you, I would just skip all of this and just do eval or new Function. Others may say "eval is evil", which is true if used incorrectly. But if used correctly, you can easily bypass all of this effort by using the browser's parser to do all of this for you.
Now most of your code uses classes. However, the methods either are static or aren't doing any instance-related operations. The classes are just collections of related functions. Normally I would use classes/constructors only when I spawn instances and do inheritance. If not, I would just make them regular functions under one module.
// expression-tokenizer.js
export function parse(string){ ... }
// syntax-parser.js
function map(string){ ... }
function populateMap(string){ ... }
export function parseTokens(string){ ... }
export function isNumber(string){ ... }
export function assignNumber(string){ ... }
Additionally, an AST can be simply represented by plain objects and arrays. You don't need to go full OOP-ish with classes, instances, types etc.
// node.js
export function createNode(operator, left, right){
return { type: 'Node', left, right, operator };
}
Vanilla JS has no concept of enums, but can easily be emulated by a map of strings.
// operators.js
export default Object.freeze({
None : 'OPERATOR_NONE',
Add : 'OPERATOR_ADD',
Subtract : 'OPERATOR_SUBTRACT',
Multiply : 'OPERATOR_MULTIPLY',
Divide : 'OPERATOR_DIVIDE',
LeftParenthesis : 'OPERATOR_LEFT_PARENTHESIS',
RightParenthesis: 'OPERATOR_RIGHT_PARENTHESIS',
});
With all that, usage becomes really simple. No instantiations, no special types. Just plain old JavaScript objects being passed around.
import { parse } from './expression-tokenizer.js';
import { createNode } from './node.js';
import Operators from './operators.js';
...
var stack = [];
var tokens = parse(expression);
var current = createNode(Operator.None);
Now for your functions...
public parse(expression : string) : Array<string> {
...
for(var i = 0;i < expression.length;i++) {
let character : string = expression[i];
This could have been simplified if you just split the string into individual characters using string.split and use forEach to loop through them.
expression.split('').forEach(character => {
// do stuff
});
Same goes for your token parser:
parse(expression).split('').forEach(token => {
// do stuff
});
switch (symbol) {
case Operator.LeftParenthesis:
stack.push(current);
current = new Node(Operator.None);
break;
case Operator.RightParenthesis:
var head = stack.pop();
head.right = current;
current = head;
break;
default:
if (current.value == Operator.None) {
current.value = symbol;
} else {
var newNode = new Node(symbol, current, null);
current = newNode;
}
break;
}
I recommend a map of functions instead of a switch statement. The problem with a switch is that it quickly grows and tries to do everything in one piece of code. If you split off each operator into its own function, they become manageable. Here's an example of mapping.
// operations.js
import Operators from './operators.js';
const operations = {};
operations[Operators.LeftParenthesis] = function(){...};
operations[Operators.RightParenthesis] = function(){...};
// add more
export default operations;
Now when you use the operations:
import Operations from './operations.js';
export function parseExpression(expression){
...
    parse(expression).forEach(token => {
...
var symbol = map(token);
// Instead of switching the symbol, we use it as key for Operations
// to call the appropriate operation.
Operations[symbol].call(null, /* pass in necessary args */ );
...
});
} | {
"domain": "codereview.stackexchange",
"id": 20099,
"tags": "parsing, math-expression-eval, typescript"
} |
Debye temperature for the steel? | Question: I'm looking for data on Debye temperature of steel (ideally with a known carbon concentration, structure, and set of phases), but find only data on elements. Do You happen to meet the data in papers, or textbooks?
Answer: The Debye temperatures of some steel alloys can be found in the thesis of Rajevac, Vedran:
"Lattice dynamics in Hydrogenated Austenitic Stainless Steels and in the Superionic Conductor $\mathrm{Cu}_{2-\delta}\mathrm{Se}$" on p. 43.
In general, when searching for material-specific data I recommend starting with a search in Google, Google Scholar and Landolt-Boernstein. If none of these turns up anything, it might still be an open research question. | {
"domain": "physics.stackexchange",
"id": 11090,
"tags": "thermodynamics, material-science, phonons, data"
} |
Why do sweet biscuits melt faster than salty biscuits? | Question: I live in India and here eating a biscuit after dipping it in water is a popular habit. On careful observation my classmate and I found that a sweet biscuit melts faster than a salty one when dipped in water.
I don't know which law or experimental result is at work behind it; it is possible that the result is due to the different compositions of sweet and salty biscuits, or maybe something else. I am stuck on this.
Answer: The disintegration of a solid food when it comes in contact with water is due to several factors, one of which might be dissolution, but "melting" is unlikely to be occurring (unless the water is significantly above room temperature). I cannot fully answer your question; it has too many culture-specific concepts which I (USA) am not familiar with. In the UK (and perhaps India?) a biscuit is what we call a cookie, and the compositions of most cookies (in the USA) are not "biscuit-like". Even in India, I imagine, there are probably broad differences in the composition of a biscuit from cook to cook and from region to region.
There are also probably significant differences in the processing. Wheat flour (the only type I have some familiarity with, sorry) can be kneaded so that semi-crystalline polymers become stretched out and increase the material's toughness and strength. Baking temperatures and times also obviously affect the final consistency (and composition: dehydration is usually a major process during baking; too little and the material has little cohesion (is gooey), too much and the material is more like a brick than a foodstuff). And I am ignoring the effects of yeast and density on the strength and cohesion of the finished biscuit. It should be obvious that a biscuit without large internal voids will be slower to absorb water.
Most sugar cookie batter contains little water, which leaves a large amount of very soluble sugar behind when it is baked off. Sugar is very soluble and structurally weak (unless caramelized). Salt, while also very water soluble and not very strong, need only be present in small quantities to give a cookie a very salty taste. If little sugar and salt is present, then the flour (which is generally not water soluble) will be slow to disintegrate. | {
"domain": "chemistry.stackexchange",
"id": 6420,
"tags": "everyday-chemistry, food-chemistry"
} |
BTC price checker with web scraping, regex and Tkinter GUI | Question: The purpose of this was to practice web scraping, regex and GUI. I use a popular bitcoin website to scrape the names and prices of the top 16 cryptocurrencies. I then create a data frame and make a GUI to display the information based on the user's selection. I am looking for any and all input on how to improve this code.
import bs4
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup
import pandas as pd
my_url = 'https://coinmarketcap.com/'
uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()
page_soup = soup(page_html, "html.parser")
containers = page_soup.find("div", {"class": "container main-section"})
tabes = containers.find("table", {"id": "currencies"})
table_names= tabes.find("tbody")
btc = table_names.find_all("tr")
#Get the names and prices of the first 16 crypto coins
prices = []
names = []
for i in range(16):
x = btc[i]
#find the price
x.find("a", {"class": "price"}).text
current_price = x.find("a", {"class": "price"}).text.strip()
prices.append(current_price)
#find the name of the crypto
name = x.find("td", {"class": "no-wrap currency-name"})
q = name.text.strip()
coin_name = " ".join(q.split("\n"))
names.append(coin_name)
#Make a dataframe with the information we have gathered
df = pd.DataFrame({'Name': names,
'Price': prices,
})
#Use regular expressions to create a column with just the symbol
import re
symbols = []
for i in df['Name']:
symbols.append(re.search(r"[A-Z]{3,5}\s\s", i).group().split()[0])
df['Symbol'] = symbols
#GUI
from tkinter import *
root = Tk()
root.title('BTC Price Check')
root.geometry('{}x{}'.format(950, 550))
frame = Frame(root,relief= GROOVE)
frame.pack(side = BOTTOM)
class CheckBox(Checkbutton):
boxes = [] # Storage for all buttons
def __init__(self, master=None, **options):
Checkbutton.__init__(self, frame, options)
self.boxes.append(self)
self.var = IntVar()
self.configure(variable=self.var)
header = Label(height=1, width=100, text = "Welcome to BTC Price Check")
header.config(font=("Courier", 20))
header.pack(side = TOP, pady = 0)
text = Text(frame)
text.pack(padx = 20, pady = 0, side = RIGHT)
#functions for our buttons
def display_price():
for c, box in enumerate(CheckBox.boxes):
if box.var.get():
text.insert(INSERT, "The price of " + names[c] + " is: " + prices[c])
text.insert(INSERT, "\n")
text.config(state=DISABLED)
def clearBox():
text.config(state=NORMAL)
text.delete("1.0", "end")
#Use the class we created to iterate through the 16 cryptos and create a checkbox for each
a=0
while a<len(df['Name']):
bouton=CheckBox(text = names[a],bg = 'yellow')
a=a+1
bouton.pack(fill = Y, pady = 2, side = TOP)
#Buttons
pricefind = Button(frame, text = 'Search',width= 20, command = display_price)
pricefind.pack()
clearprice = Button(frame, text = 'Clear', width= 20, command = clearBox)
clearprice.pack()
mainloop()
Answer: Group your imports
You should put all of your imports at the top of the file instead of sprinkling them throughout the code. This is both a best practice and a recommendation from PEP8
Don't use wildcard imports
You import Tkinter with from tkinter import *, but the PEP8 guidelines (and best practices) say you should not. For me, the best way to import Tkinter is with import tkinter as tk because it avoids polluting the global namespace and gives you a short prefix for using Tkinter classes and objects. The prefix helps to make your code a bit more self-documenting.
Use more functions and classes
Most of your code exists in the global namespace. While that's fine for short programs, it becomes difficult to manage as the program grows. Its good to get in the habit of always using functions or classes for nearly all of your code.
For example, I recommend putting all of the GUI code into a class. That will make the code a bit easier to read, and gives you the flexibility of moving the code to a separate file.
clearBox should restore the state of the text widget
Your clearBox function resets the state of the text widget to be editable so that it can clear the contents, but it doesn't set it back. That means that after clicking the clear button, the user would be able to type anything they want in that window. It's somewhat harmless, but is a bit sloppy. | {
"domain": "codereview.stackexchange",
"id": 33352,
"tags": "python, python-3.x, web-scraping, pandas, tkinter"
} |
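Following up on the clearBox point above, here is a minimal sketch of the fix (the helper name is my own; `"normal"` and `"disabled"` are the string values behind Tkinter's `NORMAL`/`DISABLED` constants):

```python
def clear_box(text):
    """Clear the results box, then make it read-only again."""
    text.config(state="normal")    # widget must be editable before programmatic edits
    text.delete("1.0", "end")
    text.config(state="disabled")  # restore read-only so the user cannot type into it
```

Wiring this in as `command=lambda: clear_box(text)` keeps the widget read-only between clears.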
Beneficial effects of the fungi of a termite mound on the diseases suffered by the termites | Question: My motivation to join this Biology Stack Exchange is the article by David Pride that I've read from the Spanish edition of Scientific American, that's Investigación y Ciencia. The article is [1], and I think that is a very good work from the author and editors of this popular science magazine.
Question. I would like to know if the fungi of a termite mound have beneficial effects on health (related to diseases caused by viruses or bacteria) for the termites that live in the termite mound.
I add that Wikipedia has the article Fungus. What I evoke with my Question is whether some species of fungi could live with the termites inside the termite mound and have beneficial effects on the diseases suffered by the termites. I don't know if this question is in the literature; feel free to answer my question as a reference request if you know references answering it.
References:
[1] David Pride, Los virus de nuestro cuerpo, Investigación y Ciencia, Febrero 2021, Nº 533, pages. 76-83.
Answer: In at least one case, termite-cultivated fungi produce antibiotics that benefit the termites and protect them from dangerous fungi. Of course, the fungus acting as food is also a huge benefit to the termites.
https://www.nature.com/articles/srep03250 | {
"domain": "biology.stackexchange",
"id": 12067,
"tags": "zoology, bacteriology, virology, mycology, infectious-diseases"
} |
Understanding bound states in quantum mechanics | Question:
Suppose I have this potential:
$$
\
V(x)= \left\{
\begin{array}{ll}
+\infty & x < 0 \\
-V_0 & 0\leq x\leq a\\
0 & x>a \\
\end{array}
\right.
\
$$
for $a>0$ and $V_0>0$. My job is to prove that there are no bound states for some energy, $E<0$, such that $V_0<-E$.
One way to do this would be to look at Schrodinger Time independent equation:
$$
-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2}+V(x)\psi(x)=E\psi(x)
$$
and simply associate the first part with the second derivative with the kinetic energy and as $E<0$ and $V(x)=-V_0$ that would imply a negative kinetic energy but that has no physical meaning, thus there are no bound states.
My problem is with the math. Even though that makes sense, if I try to solve the equation somehow I get to this:
$$
-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2}+V(x)\psi(x)=E\psi(x) \\
\frac{d^2\psi}{dx^2}=\frac{2m}{\hbar^2}(|E|-V_0)\psi(x)
$$
which has indeed a solution:
$$
\psi(x<0)=0 \\
\psi(0<x<a)=Ae^{kx}+Be^{-kx} \\
\psi(x>a)=Ce^{-k_1x}
$$
for some constants $A$,$B$,$C$ and $k^2=\frac{2m}{\hbar^2}(|E|-V_0)$ and $k_1^2=\frac{2m}{\hbar^2}(|E|)$. This is a weird solution but still a solution to the equation that decays exponentially at $x\to \infty$ and is already $0$ at $x<0$. What am I missing and how does this prove that there aren't any bound states?
EDIT:
After imposing continuity and smoothness I get:
$$
\
\left\{
\begin{array}{ll}
A+B=0 \\
Ae^{ka}+Be^{-ka}=Ce^{-k_1a} \\
Ake^{ka}-Bke^{-ka}=-Ck_1e^{-k_1a} \\
\end{array}
\right.
\
$$
From which I can use only the two first equations and get:
$$
\
\left\{
\begin{array}{ll}
A=-B \\
2A\sinh(ka)e^{k_1a}=C \\
\end{array}
\right.
\
$$
Allowing me to write:
$$
\psi(x<0)=0 \\
\psi(0<x<a)=2A\sinh (kx) \\
\psi(x>a)=2A\sinh(ka)e^{-k_1(x-a)}
$$
And finally with the normalization I get:
$$
A=\bigg(\frac{1}{k}(\cosh(ak)-1)+\frac{2}{k_1}\sinh^2(ka)\bigg)^{-1/2}
$$
which is a mere constant. What am I getting wrong?
As for the energy, I can divide the last two equations and get:
$$
\frac{1}{k}\tanh (ka)=-\frac{1}{k_1} \\
\tanh (ka)=-\frac{k}{k_1}\\
\tanh (ka)=-\sqrt{(1-V_0/|E|)}\\
ka=\tanh^{-1}(-\sqrt{1-V_0/|E|}) \\
\frac{2m}{\hbar^2}(|E|-V_0)a^2=(\tanh^{-1}(-\sqrt{1-V_0/|E|}))^2 \\
|E|=V_0+\frac{\hbar^2}{2ma^2}(\tanh^{-1}(-\sqrt{1-V_0/|E|}))^2
$$
Answer: HINT: You have the equation:
$$
\frac{1}{ak}\tanh (ka)=-\frac{1}{a k_1}
$$
where $k_1$ and $k$ are both positive. Try graphing the function $\tanh x/x$ and see where it could have a solution for $\tanh x/x < 0$. | {
"domain": "physics.stackexchange",
"id": 58956,
"tags": "quantum-mechanics, homework-and-exercises, wavefunction, potential, schroedinger-equation"
} |
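Spelling out the hint above (this step is not part of the original answer): for a bound state with $|E| > V_0$, both $k$ and $k_1$ are real and positive, so

$$
\frac{\tanh(ka)}{ka} > 0 \quad \text{for all } ka > 0,
\qquad \text{while} \qquad
-\frac{1}{ak_1} < 0,
$$

and the matching condition can never be satisfied; hence no bound state exists with $|E| > V_0$.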
Reduction of a language to a shorter equivalent | Question: I'm new to Theoretical Computer Science, and my textbook says that it is easy to verify that the following language
\begin{array}{l}
L_{1}=A^{*} \cdot\{b\}-\left(A^{*} \cdot(A-\{a\}) \cdot A^{*} \cdot\{b\}\right) \\
L_{2}=A^{*}-\left(A^{*} \cdot(A-\{b\}) \cdot A^{*}\right) \\
L_{3}=L_{1} \cdot L_{2}
\end{array}
is equivalent to $L_{1}=a^{*} \cdot b$, $L_{2}=b^{*}$, and $L_{3}=a^{*} \cdot b^{+}$.
However, I don't seem to understand the process of reducing the languages. Can someone easily explain it to me?
Answer: Think set theory.
We have for L1
$$ (A - \{a\}) \subset A^* $$
$$ A^*.(A-\{a\}).A^* \subset A^* $$
Then
$$ S1 = A^*.\{b\} $$
$$ S1 = A^*.(A)^*.A^*.\{b\} $$
$$ S2 = (A^*.(A-\{a\})).A^*.\{b\} $$
$$ S1 - S2 = a^*b$$
Basically you removed from $A^*.\{b\}$ every string containing a character other than $a$ before the final $b$, so you end up with $a^*b$.
Same argument goes for L2.
Then:
$$ L_1.L_2 = a^*.b.b^* = a^*.b^+ $$
Note: This does not "reduce" the languages, but help you see how to arrive to the answer.
You may have to brush up on the operations of regular expressions, but the general logic should be a bit similar, if you realize that A* is a prefix closure and can be written in many forms. | {
"domain": "cstheory.stackexchange",
"id": 5197,
"tags": "fl.formal-languages"
} |
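As an illustrative sanity check (not part of the original answer), one can confirm with Python's `re` module that concatenations of strings from $L_1 = a^*b$ and $L_2 = b^*$ all have the form $a^*b^+$:

```python
import re

L3 = re.compile(r"a*b+")  # the claimed form of L1.L2

l1_samples = ["b", "ab", "aab"]   # drawn from a*b
l2_samples = ["", "b", "bb"]      # drawn from b*

for w1 in l1_samples:
    for w2 in l2_samples:
        # every concatenation must match a*b+ in full
        assert L3.fullmatch(w1 + w2) is not None
print("all concatenations match a*b+")
```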
How Light or Water Intensity is equal to square modulus of wave function of Light or Water Waves $I=|\psi|^2 \,$? | Question: I've seen the Wave Function as a psi $\Psi$ $\psi$.
And I've always heard that the wave function is a complex number, with real and imaginary parts, but I've never seen the components of the wave function written out, even though I have seen it everywhere in books on Quantum Mechanics.
I want to know how the wave function could give the intensity in the double slit experiment, sometimes called Young's experiment; that is, the relation:
$$I=I_1+I_2$$
$$I=|\psi_1+\psi_2|^2=|\psi_1|^2+|\psi_2|^2+2\,\mathrm{Re}(\psi_1^* \psi_2)$$
I want to know how the intensity could be the square modulus of the wave function $|\psi|^2$:
$$I=|\psi|^2$$
$$I=I_1+I_2$$
$$I=I_1+I_2+2\sqrt{I_1 I_2}\cos\delta$$
Answer: I can't explain QM here. It takes a lot of reading and working things out for yourself. For this particular question, however, an analogy might help (this may be far below your level, in which case apologies). QM is very often about "simple harmonic oscillators" (SHOs), for which the oldest prototype is the pendulum (approximately, if the amplitude is small). For a pendulum, if we want to know how far it will go from the vertical, we can wait to see how far it goes on each cycle. An alternative way is to measure how fast the pendulum goes when it passes through the vertical. For any given speed, there is a corresponding farthest distance from the vertical. We can equate these two, in a notional sort of way, by choosing units just so, $s_0=d_1$, the speed at its maximum is the same as its farthest distance from vertical. [if you don't want to choose such helpful units, write $s_0=Kd_1$.]
Now, suppose that we measure the speed and the distance at some intermediate point, for which we obtain $s_t,d_t$. For a simple harmonic oscillator, and approximately for a pendulum if its oscillations are small, we obtain $\sqrt{s_t^2+d_t^2}=s_0=d_1$. The square root $\sqrt{s_t^2+d_t^2}$ is an invariant quantity of the coordinates $(s_0,0)$ and $(0,d_1)$, which in general are $(s_t,d_t)$. Anything that is a function of the square root $\sqrt{s_t^2+d_t^2}$ is also a function of $s_t^2+d_t^2$, so we can work with whichever is more convenient.
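As a quick numerical illustration of this invariant (my addition, using unit amplitude and units chosen so that $K=1$): taking $d(t)=\sin t$ for the distance and $s(t)=\cos t$ for the speed, the quantity $s_t^2+d_t^2$ is the same at every instant:

```python
import math

# d(t) = sin(t): distance from equilibrium; s(t) = cos(t): speed (unit amplitude)
for t in (0.0, 0.7, 1.3, 2.9):
    d, s = math.sin(t), math.cos(t)
    print(round(s**2 + d**2, 12))  # prints 1.0 at every sampled instant
```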
The effects of a given SHO on other systems ---or of a system that contains many SHOs on other systems--- are determined both the phases and by the amplitudes, but the amplitude often determines the more obvious properties, with differences of phase causing important but often more subtle effects, which we typically might call interference (but there are many other words, such as "caustics", or even, in a New Age sort of way, "sympathetic vibrations"!).
The effects of a given quantum mechanical system are, at an elementary mathematical level, sui generis with a classical SHO or system of SHOs, but quantum mechanics describes the ways that the probabilities of discrete events evolve over time, instead of describing the evolution of a trajectory. The introduction of probability as an essential property makes QM a discussion of a higher order mathematical object. Especially different is the fact that we can no longer talk about velocities, because individual events do not have velocities (if we are determined to talk in terms of particles we cannot in general be sure which individual events go with which particle), however it's useful to introduce a notional object that we call momentum, which allows us to model patterns that we observe in the evolution of the probabilities as interference effects (whether that is what they are not, we can model the patterns in the probabilities using patterns of varying phases and amplitudes). The mathematical quantity that we call momentum is, however, sufficiently different from the classical momentum that is associated with a particle trajectory that the analogy breaks down in various mathematically significant ways.
I can't see how to address the final aspect of this that occurs to me, for now, at least not well. The much touted linearity of quantum mechanics is a consequence of the fact that QM describes the evolution of probabilities of individual measurement events. The object we call momentum is closely related to the mathematics of Fourier transforms of probability distributions, which is essentially associated with a squared modulus like $s_t^2+d_t^2$. One consequence of that is noncommutativity of the algebra of observables.
This is a quick and very vague writing down of a lot of experience, without much editing, so take it with a pinch of salt and with a lot of other reading of what other people have to say about the hard questions that quantum mechanics poses for us. I hope you find it more useful than confusing, but hey, I can take a few downvotes, and it's been oddly useful to me to write this down in this somewhat wild way. In fact, if you can see the ways in which this answer is related to your question, you understand QM pretty well already. | {
"domain": "physics.stackexchange",
"id": 3462,
"tags": "quantum-mechanics, waves, wavefunction, double-slit-experiment, interference"
} |
"Centrifugal" weight spinning causing vertical weight to lift | Question: So the problem Im dealing with has a corresponding diagram below where a mass is being spun in a circular path (at an increasing angular velocity) causing the hanging weight to be lifted through the rope. There is a demonstration here by Prof Julius Miller at the 8 minutes and 22 second mark.
Could anyone explain using senior high school terminology why the weight is being lifted?
Answer: weight spinning causing vertical weight to lift
In the horizontal plane there is only one force acting on the rotation mass and that force is the horizontal component of the tension on the string acting radially inwards.
The equation of motion of the mass on the end of the string is
$T_{\rm horizontal} = m \dfrac {v^2}{r}$ where $m$ is the mass on the end of the string, $v$ its speed and $r$ the radius of the circle.
Now assume that $T_{\rm horizontal}$ stays approximately constant, $\approx Mg$, where $M$ is the mass of the hanging weight. What will happen if the speed of the rotating mass is increased?
$T_{\rm horizontal}$, and $m$ cannot change so the only way for the equation of motion to be satisfied is for $r$ to increase, $T_{\rm horizontal} = m \dfrac {v^2\uparrow}{r\uparrow}$.
To increase the radius the hanging mass must rise up.
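A quick numerical illustration of this (the masses here are assumed values, not taken from the demonstration): with the horizontal tension pinned near $Mg$, the equilibrium radius $r = mv^2/(Mg)$ grows with the speed of the rotating mass:

```python
m, M, g = 0.05, 0.5, 9.81   # rotating mass (kg), hanging mass (kg), gravity (m/s^2)

for v in (1.0, 2.0, 3.0):   # speed of the rotating mass (m/s)
    r = m * v**2 / (M * g)  # radius required so that T_horizontal = M*g
    print(f"v = {v:.1f} m/s -> r = {r:.3f} m")
```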
The intermediate phase of the radius of motion of the rotating mass increasing and the hanging mass accelerating upwards is complex.
The hanging mass is subjected to a net upward force and the rotating mass as well as rotating faster also has a net radial acceleration with the radius of the motion increasing.
Think of the following as infinitesimal steps.
The rotating mass is made to move faster and to keep rotating mass at the same radius the tension in the string must increase.
An increase in the tension of the string will result on a net force on the hanging mass and the hanging mass will move upwards.
The radius of the motion of the rotating mass increases so it needs a smaller tension to maintain its motion.
The net force on the hanging mass decreases such that the hanging mass is no longer moving upwards.
The rotating mass is made to move even faster . . . . . . . . . | {
"domain": "physics.stackexchange",
"id": 81756,
"tags": "newtonian-mechanics, forces, rotational-kinematics, centripetal-force, centrifugal-force"
} |
Are the Pauli matrices closed under commutation? | Question: I tried to make a group multiplication table for the Pauli matrices, but I keep getting multiples in front of the elements. What am I doing wrong? I thought the Pauli matrices formed a group that was closed under commutation? Here's what I have:
$\begin{array}{c|ccc} [,] & \sigma_1 & \sigma_2 & \sigma_3 \\ \hline \sigma_1 & 0& 2i\sigma_3 & -2i\sigma_2 \\ \sigma_2 & -2i\sigma_3 & 0 & 2i\sigma_1 \\ \sigma_3 & 2i\sigma_2 & -2i\sigma_1 & 0 \end{array}$
Answer: Lie algebras are not a group w.r.t. to the commutator (the Lie bracket).
The first reason is that the commutator is not associative.
Another is that they almost always lack an identity element, since the identity matrix is, for example, not in $\mathfrak{su}(2)$, and Schur's lemma would, in the fundamental representation, guarantee that only multiples of the identity can be commuting with all elements of the algebra. From the lack of the identity, it follows that also the existence of an inverse is ill-defined, so they can't be a group.
They are closed under the Lie bracket operation though, but your table doesn't contradict that.
It is not possible to make the commutator into a group operation on finite-dimensional Lie algebras - just take the trace of both sides of $[A,B] = 1$ to see that this cannot be true as long as the trace is defined for both sides of the equation.
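A quick pure-Python check of this trace argument (my addition): the trace of any commutator of square matrices vanishes, so $[A,B]=\mathbb{1}$ is impossible. With the Pauli matrices as $2\times 2$ nested lists:

```python
def matmul(a, b):
    # 2x2 complex matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

def trace(m):
    return m[0][0] + m[1][1]

s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
identity = [[1, 0], [0, 1]]

print(trace(commutator(s1, s2)) == 0)  # True: traces of commutators vanish
print(trace(identity) == 0)            # False: tr(1) = 2, so [A,B] = 1 is impossible
```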
The closest thing to providing a more "familiar" associative structure with a neutral element on a Lie algebra (aside from its vector addition) is to pass to the universal enveloping algebra. Even there, you still lack inverses for the algebra operation (which is good, since otherwise we'd have a (skewed) field, which is boring). | {
"domain": "physics.stackexchange",
"id": 16984,
"tags": "homework-and-exercises, angular-momentum, commutator, lie-algebra"
} |
How to derive partial gas equation? | Question:
If the following gasses are to mix, what is the partial pressure of neon?
The correct answer is 0.1. I cannot figure out how my professor derived this equation. From the ideal gas law, all I can arrive to is this:
$P_x = \frac{P_tV_t}{V_x}$
This gives me an answer of 4.4, which is obviously incorrect.
I've determined the equation he used is as follows:
$P_x = \frac{P_xV_x}{V_t}$
which gives the correct answer of 0.1. How did he arrive to this answer?
Answer: Prepare for an amazing feat of algebra! I sincerely hope there is a shorter way to the endpoint. (there is! See the end of my answer.)
To calculate partial pressure, $P_{\ce{He}}$, we need the total pressure $P_t$ and the fraction of the gas that is He $X_{\ce{He}}$: $$P_{\ce{He}}=X_{\ce{He}} P_t$$
You have $PV=nRT$. $T$ is not given, so assume it is constant. $n$ is unknown, but the system is closed, so $n_x$ for each gas is constant. Thus we have:
$$PV=\text{ constant}$$
Total Pressure
It is tempting to write variations of $$P_1 V_1 = P_2 V_2$$
However, for each gas, both $P$ and $V$ are changing, so we need to consider their products: $$(PV)_1 = (PV)_2$$
Thus, as trb456 suggests, $$P_t V_t = (PV)_t \implies P_t =\frac{\sum{(PV)_i}}{\sum{V_i}}$$
Fraction of He
The fraction of the mixture that is helium is determined by the ratio $\dfrac{n_{\ce{He}} }{n_t}$. Since $n=PV/RT$, and $R$ and $T$ are constant, we can write: $$X_{\ce{He}}=\frac{n_{\ce{He}}}{\sum{n_i}}=\frac{(PV)_{\ce{He}}}{\sum{(PV)_i}}$$
At last!
$$P_{\ce{He}}=X_{\text{He}} P_t = \left( \frac{(PV)_{\ce{He}}}{\sum{(PV)_i}}\right) \left( \frac{\sum{(PV)_i}}{\sum{V_i}}\right)$$
$$P_{\ce{He}_2}=\frac{(PV)_{\ce{He}_1}}{V_t}$$
Or, as it struck me as I finished:
Helium expands to fill the total volume. Now we can use $P_1 V_1 = P_2 V_2$ and completely ignore the other gasses (there is a lot empty space for the He atoms to fit in).
$$P_{\ce{He}_2}=\frac{P_{\ce{He}_1}V_{\ce{He}_1}}{V_t}$$
This equation is very similar to your second equation, except the subscripts are added to denote initial and final pressure of helium. | {
"domain": "chemistry.stackexchange",
"id": 282,
"tags": "physical-chemistry, gas-laws"
} |
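A worked numerical example of the final relation above (the numbers here are assumed, since the original figure is not included): if helium starts at $0.4\ \mathrm{atm}$ in a $1.0\ \mathrm{L}$ flask and expands into a total volume of $4.0\ \mathrm{L}$:

```python
p1, v1 = 0.4, 1.0        # initial He pressure (atm) and volume (L) -- assumed values
v_total = 4.0            # total volume after mixing (L) -- assumed value

p2 = p1 * v1 / v_total   # Boyle's law applied to He alone, ignoring the other gases
print(p2)                # 0.1 atm
```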
onSaveInstanceState() - ensuring a call to super instance | Question: When onSaveInstanceState() is customized it should always call the same method on the super instance first.
I usually write:
@Override
public void onSaveInstanceState(Bundle outState) {
super.onSaveInstanceState(outState);
saveState(outState);
}
private void saveState(Bundle outState) {
outState.putParcelable("item1", item1);
outState.putParcelable("item2", item2);
// ...
outState.putParcelable("item10", item10);
}
instead of putting all code in the onSaveInstanceState() method, in order not to accidentally skip call of super.onSaveInstanceState() (it is very hard to notice in a large project).
Is it a bad habit?
Answer: I think it is a very good practice.
As a side note, if you want to design a method like those, I'd suggest you to make the API method final and let the custom behaviors to be injected with the strategy design pattern (if you want to use composition, which is what I like the most) or with the implementation of an abstract method (if you want to use inheritance).
In that way it won't be possible to skip, even on purpose, the call to the superclass method. | {
"domain": "codereview.stackexchange",
"id": 9070,
"tags": "java, android"
} |
An one-sentence proof of P ⊆ NP | Question: Recently I am reading a document [1]. In this document, Prof. Cook provides a brief proof of $\mathbf{P} \subseteq \mathbf{NP}$, which is only one sentence:
It is trivial to show that $\mathbf{P} \subseteq \mathbf{NP}$, since for each language $L$ over $\Sigma$, if $L \in \mathbf{P}$ then we can define the polynomial-time checking relation $R \subseteq \Sigma^* \cup \Sigma^*$ by
$$R(w, y) \Longleftrightarrow w \in L$$
for all $w, y \in \Sigma^*$.
I know the definitions of $\mathbf{P}$ and $\mathbf{NP}$, as in [1], but I still can't understand this proof. Could any one explain the proof to me? Even one sentence is good.
By the way, I think $\Sigma^* \cup \Sigma^*$ should be $\Sigma^* \times \Sigma^*$. Am I right?
Reference
[1] S. Cook, The P versus NP problem, [Online] http://www.claymath.org/sites/default/files/pvsnp.pdf.
Answer: Since L is in P, you can answer the word problem in polynomial time. To show that L is in NP as well, we need to provide a polynomial checking relation $R$ such that
$$ w\in L \Leftrightarrow \exists y.(|y|\le |w|^k \text{ and } R(w,y))$$
Now Prof. Cook says to take a very simple $R$. For every $w$ in $L$, no matter what $y\in \Sigma^*$ you take, $R(w,y)$ is true and for every $w$ not in $L$, $R(w,y)$ is false, regardless of the $y$. This is a polynomial time relation, since we can decide whether $w\in L$ or not in polynomial time (since $L \in P$), without looking at $y$ at all. And as any $y$ works, there are also some $y$ that are short enough to satisfy the length restriction in the above definition. | {
"domain": "cs.stackexchange",
"id": 6757,
"tags": "complexity-theory, np"
} |
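A toy illustration of the argument (the language here is my own example, not from the answer): take $L$ to be the strings of even length, decidable in polynomial time; the checking relation simply ignores the certificate $y$ and runs the decider on $w$:

```python
def in_L(w):
    """Polynomial-time decider for the example language L (even-length strings)."""
    return len(w) % 2 == 0

def R(w, y):
    """Checking relation: true iff w is in L, regardless of the certificate y."""
    return in_L(w)

# For w in L every certificate works (in particular a short one);
# for w not in L no certificate works.
print(all(R("abab", y) for y in ["", "x", "yz"]))  # True
print(any(R("aba", y) for y in ["", "x", "yz"]))   # False
```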
Entropy and Gibbs Free Energy | Question: I've been struggling with the notion of entropy and Gibbs free energy for almost three days now. Different sources on and off the internet say different things about entropy.
Gibbs free energy is said to be both a measure of spontaneity and the maximum work extractable from a reaction, and somehow I am unable to reconcile the two ideas. While the laws of probability favour increases in entropy as a system can take on more microstates, what is the cause of the energetics associated with it? How can it be both a "measure of spontaneity" and "maximum work"?
Answer: You may have seen the reasoning to follow in most textbooks already but apparently it is not emphasized enough so I will say it again here.
The crucial starting point is the second law of thermodynamics that claims that the entropy change of the universe $\Delta S_\text{univ}$ is either zero or strictly positive for any physical change that occurs in it. I specify physical to stress that not all possible changes necessarily meet this constraint. In particular, when it comes to chemical reactions, they tend to happen in one way and not the other.
It would be fine to stick to this definition: a chemical reaction is physically favored if it leads to an increase of the entropy of the universe.
However, it is not very convenient because most of the time we care about a particular system and not the universe as a whole.
It is therefore common to partition the universe in two parts: the system of interest and the environment.
We then apply the property of additivity of entropy that says that $\Delta S_\text{univ} = \Delta S_\text{sys} + \Delta S_\text{env} \geq 0$
This is the most general statement we can make although it is not yet very useful.
It is now time to become more specific about the conditions under which the change or the reaction or the transformation in the system will occur.
For chemical reaction, it is often the case that they are carried at constant pressure $P$, temperature $T$ and mass $M$ or amount of matter.
The key is then to express $\Delta S_\text{env}$ in terms of the system thermodynamic properties to end up with a closed condition to be satisfied by the system only to fulfil the second law for the entire universe.
Since the temperature is fixed it means that the environment acts as a thermostat and we can write, by definition, that $\Delta S_\text{env} = \frac{Q_\text{env}}{T}$ where $Q_\text{env}$ is the heat received by the environment during the change in the system.
Then, since the only exchanges of heat occur between the environment and the system, it has to be the case that $Q_\text{env} = - Q_\text{sys}$ where $Q_\text{sys}$ is the heat received by the system during the transformation.
We now apply the first law of thermodynamics that tells us that $\Delta U_\text{sys} = W_\text{sys} + Q_\text{sys}$ from which we deduce that $Q_\text{env} = W_\text{sys}-\Delta U_\text{sys}.$
We now use the fact that the pressure is constant during the process. If $\Delta V_\text{sys}$ is the change of volume of the system during the transformation then we can write $W_\text{sys} = -P\Delta V_\text{sys} = -\Delta (P_\text{sys}V_\text{sys}).$
Putting all this together we get that
\begin{equation}
\Delta S_\text{sys} + \frac{-\Delta (PV_\text{sys})-\Delta U_\text{sys}}{T} \geq 0
\end{equation}
Upon multiplying this last inequality by $-T$ (which reverses the inequality) and using the fact that $T$ is constant during the reaction, we get that the second law of thermodynamics is satisfied (for the whole universe) iff the system satisfies:
\begin{equation}
\Delta (U_\text{sys}+PV_\text{sys}-TS_\text{sys}) = \Delta G_\text{sys}(P,T) \leq 0
\end{equation}
That's for the spontaneity aspect.
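As a numerical illustration of this criterion (all numbers below are invented for the example; SI units throughout):

```python
# Numerical illustration of the spontaneity criterion at constant P and T.
# dU in J, P in Pa, dV in m^3, T in K, dS in J/K -- all values made up.

def delta_G(dU, P, dV, T, dS):
    """Delta G = Delta U + P*Delta V - T*Delta S at constant P and T."""
    return dU + P * dV - T * dS

# A hypothetical exothermic, entropy-increasing reaction:
dG = delta_G(dU=-50_000.0, P=101_325.0, dV=0.001, T=298.0, dS=20.0)
spontaneous = dG <= 0   # the second law for the whole universe, in system terms
```

A negative $\Delta G$ signals a spontaneous process at the given $P$ and $T$.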
For the maximum work extractable aspect, it comes from the assumption above that the work is only due to the imposed pressure and changes in the system volume. You can write more generally $W_\text{sys} = W_\text{other} -P\Delta V_\text{sys}$.
By redoing the same type of calculation as before, you then end up with the following relation for the second law of thermodynamics to hold:
\begin{equation}
\Delta G_\text{sys}(P,T) \leq W_\text{other}
\end{equation}
In particular, this other work is also the work one must provide to reverse a chemical reaction. | {
"domain": "physics.stackexchange",
"id": 23185,
"tags": "thermodynamics, statistical-mechanics, probability"
} |
Why does ChatGPT fail in playing "20 questions"? | Question: IBM Watson's success in playing "Jeopardy!" was a landmark in the history of artificial intelligence. In the seemingly simpler game of "Twenty questions" where player B has to guess a word that player A thinks of by asking questions to be answered by "Yes/No/Hm" ChatGPT fails epically - at least in my personal opinion. I thought first of Chartres cathedral and it took ChatGPT 41 questions to get it (with some additional help), and then of Kant's Critique of Pure Reason where after question #30 I had to explicitly tell ChatGPT that it's a book. Then it took ten further questions. (Chat protocols can be provided. It may be seen that ChatGPT follows no or bad question policies or heuristics humans intuitively would use.)
My questions are:
Is there an intuitive understanding of why ChatGPT plays "20 questions" so badly?
And why do even average humans play it so much better?
Might it be a future emergent ability which may possibly arise in ever larger LLMs?
I found two interesting papers on the topic
LLM self-play on 20 Questions
Chatbots As Problem Solvers: Playing Twenty Questions With Role Reversals
The first one answers some of my questions partially, e.g. that "gpt-3.5-turbo has a score of 68/1823 playing 20 questions with itself" which sounds pretty low.
Answer: Like any other question on why ChatGPT can't do something, the simple/superficial answer is that ChatGPT is just a language model fine-tuned with RL to be verbose and nice (or to answer like the human tuners suggested), so it just predicts the most likely next token. It does not perform logical reasoning like us in general. If it appears to do so in certain cases, it's because that's the most likely thing to predict given the training data.
The more detailed answer may require some months/years/decades of research that attempts to understand neural networks and how we can control them and align them to our needs. Model explainability has been around for quite some time.
ChatGPT is really just an example of how much intelligence or stupidity you can simulate by brute-force training.
Still, it's impressive at summarizing or generating text in many cases that are open-ended, i.e. there aren't (many) constraints. Again, this can be explained by the fact that what it generates is the most likely thing given what you pass to it. Example: If you say "Always look on the bright side of...", it will probably answer with "life". Why? Because the web or the training data is full of data that has the sentence "Always look on the bright side of life".
I don't exclude that it's possible to train a model to perform logical reasoning correctly in general in this way, but so far it hasn't really worked. ChatGPT can really be stupid and informationally harmful. People are assuming that there's only one function that computes "intelligence". Nevertheless, I think the combination of some form of pre-training with some form of continual RL will probably play a crucial role in achieving "true machine intelligence", i.e. reasoning/acting like a human, assuming it's possible to do this.
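For reference, the information-theoretic baseline behind the game is easy to state: each perfectly discriminating yes/no question halves the candidate set, so 20 questions can in principle separate about a million concepts. A stdlib sketch of that bound, with binary search as the textbook instance (all names illustrative):

```python
import math

# k yes/no questions can separate at most 2**k candidates,
# so distinguishing n candidates needs ceil(log2(n)) questions.
def questions_needed(n_candidates):
    return math.ceil(math.log2(n_candidates))

# Binary search over a sorted universe realizes this bound exactly:
def guess(secret, universe):
    lo, hi, asked = 0, len(universe) - 1, 0
    while lo < hi:
        mid = (lo + hi) // 2
        asked += 1                    # "is it after position mid?"
        if universe[mid] < secret:
            lo = mid + 1
        else:
            hi = mid
    return universe[lo], asked

universe = list(range(1_000_000))
value, asked = guess(123_456, universe)
# 2**20 > 1_000_000, so at most 20 questions are ever needed here
```

A question policy that falls well short of halving the candidates at each step, which is what the chat protocols in the question suggest, loses this guarantee very quickly.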
(I've been working with ChatGPT for a few months). | {
"domain": "ai.stackexchange",
"id": 4128,
"tags": "natural-language-processing, chatgpt, benchmarks"
} |
katana teleop using phantom | Question:
Hi,
Is it possible to teleop the Katana arm robot with a Phantom using ROS packages?
thanks
Originally posted by rem870 on ROS Answers with karma: 29 on 2013-07-18
Post score: 1
Answer:
There's no reason why you couldn't do that. The phantom_omni package gives you a 6D pose (the pose that you want the end effector to be in). You'd have to create an arm navigation configuration package for your complete robot (like kurtana_arm_navigation), then you can write a node that sends the pose from the phantom as a goal to the move_arm action. The motion planner would then plan a path to the desired pose and execute it on the robot.
Maybe even better would be to implement an inverse kinematics controller instead of using motion planning; perhaps this would allow you to send goals more frequently, allowing for a smoother motion of the arm. The package katana_ikfast_kinematics_plugin provides a 5D IK for the Katana arm.
You can also have a look at the pr2_omni_teleop package, where they developed something similar for the PR2.
Originally posted by Martin Günther with karma: 11816 on 2013-07-18
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 14962,
"tags": "ros"
} |
Number of pixel comparisons needed to establish correspondence | Question: I am reading the paper titled Variable Baseline/Resolution Stereo by David Gallup, Jan-Michael Frahm, Philippos Mordohai, and Marc Pollefeys.
Section 3 of the paper discusses the time complexity of standard stereo; within it, the subsection for the fixed-baseline case says the following:
In stereo, each pixel must be tested against the pixels along the corresponding epipolar line within the disparity range of the scene. Because the depth range is defined by the scene, the disparity range is some fraction of the image width, and thus increases with resolution. Letting $D$ be the ratio of the disparity range to the image width ($w$), the number of pixel comparisons needed is
$$T_{fixed} = D~w^2~h = \frac{D~w^3}{a}$$
Here, symbol $a$ is the aspect ratio and $h$ is the height of the image ($h = {w}/{a}$).
We know that the search for correspondence is restricted to the epipolar line, and thus the actual number of comparisons should be much lower than the number given above.
OR
I am missing something?
Answer:
I am missing something?
Indeed, the context is standard stereo and (all) pixel comparisons ($w\times h$).
In a rectified image, the epipolar line can typically be about the width of the image. Thus the search range would be $D\times w$.
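Plugging in illustrative numbers (assumed, not taken from the paper) makes the count concrete:

```python
# Illustrative numbers only: a VGA image where the disparity
# range is a quarter of the image width.
w, h = 640, 480
a = w / h              # aspect ratio, so h = w / a
D = 0.25               # disparity range as a fraction of the width

search_range = D * w   # pixels tested along one epipolar line
total = D * w * w * h  # one such search for every pixel of the w x h image

# D*w^2*h and D*w^3/a are the same quantity (up to float rounding)
assert abs(total - D * w ** 3 / a) < 1e-3
```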
Thus total number of comparisons = $D\times w \times wh$. | {
"domain": "robotics.stackexchange",
"id": 2022,
"tags": "stereo-vision"
} |
Failed to create the global_planner/GlobalPlanner planner | Question:
Setup: ROS Kinetic, Ubuntu 16.04
on the TurtleBot platform, when launching the move_base node configured for the global_planner/GlobalPlanner, I receive the following error:
Failed to create the global_planner/GlobalPlanner planner, are you sure it is properly registered and that the containing library is built? Exception: According to the loaded plugin descriptions the class global_planner/GlobalPlanner with base class type nav_core::BaseGlobalPlanner does not exist. Declared types are navfn/NavfnROS
I load move_base_params.yaml within move_base as follows:
<node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen" >
<rosparam file="$(find robust_navigation)/param/move_base_params.yaml" command="load" />
where the base_global_planner param is defined in move_base_params.yaml as follows:
base_global_planner: "global_planner/GlobalPlanner"
I've re-installed the nav stack and re-built my workspace to no avail.
Originally posted by ryanoldja on ROS Answers with karma: 60 on 2017-11-29
Post score: 0
Original comments
Comment by David Lu on 2017-11-30:
What does rospack find global_planner return?
Answer:
Very sorry, the problem seems to have been caused by sourcing a conflicting workspace. This is resolved. Thanks!
Originally posted by ryanoldja with karma: 60 on 2017-11-30
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by malgudi on 2018-05-28:
Hello, Could you explain how did you resolve this issue? I am also facing the same error. I have created my own local planner. Thank you.
Comment by aarontan on 2018-06-13:
@mallikarjun have you figured this out? I am facing the same issue
Comment by AKN on 2020-04-23:
Most likely, you are missing the global_planner package. Use the following command to install it, e.g. for melodic:
sudo apt-get install ros-melodic-global-planner
"domain": "robotics.stackexchange",
"id": 29483,
"tags": "navigation, turtlebot, ros-kinetic, move-base"
} |
In Pygraphviz, How can I assign edge colors according to edge weights? | Question: I am now drawing a weighted directed graph using Pygraphviz.
The adjacency matrix A of the weighted directed graph is
import pandas as pd

A = pd.DataFrame({'X': [0.0,0.1,-0.8], 'Y': [-0.7,0.0,-0.1], 'Z': [0.4,0,-0.1]}, index=["X", "Y", "Z"])
where A_{i,j} indicates the edge weight from the node i to the node j.
I was able to draw the graph without edge color.
import networkx as nx
from networkx.drawing.nx_agraph import to_agraph
from IPython.display import SVG, display
G_nx = nx.from_pandas_adjacency(A,create_using=nx.DiGraph())
G_nx.graph['edge'] = {'arrowsize': '1.0', 'splines': 'curved'}
G_nx.graph['graph'] = {'scale': '3'}
Agraph_eg = to_agraph(G_nx)
Agraph_eg.node_attr["height"] = 0.3
Agraph_eg.node_attr["width"] = 0.3
Agraph_eg.node_attr["shape"] = "circle"
Agraph_eg.node_attr["fixedsize"] = "true"
Agraph_eg.node_attr["fontsize"] = 8
Agraph_eg.layout(prog="neato")
Agraph_eg.draw('graph_eg1.png')
But I have no idea how to assign edge colors and widths according to the following two rules.
#1 If weights are greater than 0, the edge color should be red. If weights are less than 0, the edge color should be blue.
#2 Edge widths should be proportional to the absolute values of weights.
Do you have any idea how to do this??
Answer: This might help you
for edge in G_nx.edges(data=True):
color = "black"
weight = edge[2]["weight"]
if weight > 0:
color = "red"
elif weight < 0:
color = "blue"
edge[2]["color"] = color
edge[2]["penwidth"] = abs(weight)
This gives you the following graph:
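If it helps, the two rules can also be factored into a small standalone helper (plain Python, no graphviz required), which makes the mapping easy to test on its own:

```python
# Map a signed edge weight to the (color, penwidth) pair from the two rules.
def edge_style(weight):
    if weight > 0:
        color = "red"
    elif weight < 0:
        color = "blue"
    else:
        color = "black"   # assumed fallback for a zero weight
    return color, abs(weight)
```

The loop above would then just do `edge[2]["color"], edge[2]["penwidth"] = edge_style(edge[2]["weight"])`.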
Depending on your network size this solution might take some time. "penwidth" is the graphviz information for the edge width. | {
"domain": "bioinformatics.stackexchange",
"id": 1983,
"tags": "python, networks, graphs"
} |
gazebo client no icons | Question:
My second problem for today is that gazebo client doesn't show any icons for play, pause, position object, etc.
In the terminal I see error, which I have seen many times before but always ignored:
irobot@irobot-desktop:~$ gzclient
Gazebo multi-robot simulator, version 1.0.0
Copyright (C) 2011 Nate Koenig, John Hsu, Andrew Howard, and contributors.
Released under the Apache 2 License.
http://gazebosim.org
Msg Waiting for master
Msg Connected to gazebo master @ http://localhost:11345
libpng warning: Application built with libpng-1.2.46 but running with 1.5.4
Is the libpng warning related to the fact that the icons don't appear? If so, should I remove libpng and install an older version? Would that conflict with other ROS packages which are using this library?
// I tried to remove libpng but it looks like it was installed with my OS, because many other things depend on it and I can't remove the libpng12-0 package without messing up the whole system (packages like xorg, gnome, and many others will be removed, with bad results)... Not sure how I should fix this problem then... Icons don't appear when I try to move an object either... Just big red, blue and green boxes instead of icons. Any help will be highly appreciated.
Originally posted by Roman Burdakov on ROS Answers with karma: 131 on 2012-05-16
Post score: 0
Answer:
I am not sure why this happens sometimes; I have recently seen it even when I built Gazebo from scratch.
The reason is that the version of libpng used by either gzclient itself or one of its dependencies is not correctly compiled and linked, so the version linked is not the one used when compiling...
I don't intend to spend more time investigating how this happened, but it could relate to OpenCV shipping with libpng 1.2.46 and referring to it directly in headers, while the default installed in Ubuntu 11 and upwards is libpng 1.5.4.
try:
ldd /usr/...../gzclient
and see if it attempts to use libpng.so or libpng12.so,
you want it to use the older version "libpng.so", and it can be installed by:
sudo apt-get install libpng3
however that will not solve your problem, as the program still attempts to load the library it was linked to.
This can be circumvented after installing the extra libpng in ubuntu by running the client like this:
LD_PRELOAD=/usr/lib/i386-linux-gnu/libpng.so gzclient
that loads libpng from the required version before resolving the dynamically linked libraries. In principle, once the contents of a library are loaded, it is not necessary to load them again, and most programs will then use the pre-loaded library.
This trick will not work for all programs, as a security risk is introduced when allowing the user to manually choose libraries (and perhaps reimplement them), but it will work for gzclient on ubuntu.
Originally posted by zcuba with karma: 31 on 2012-07-08
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9427,
"tags": "gazebo"
} |
Zero modes $a_j\sim e^{-\kappa j}$ in a semi-infinite quantum Ising chain? | Question: As a way of analyzing the performance of quantum annealing, I've been studying quantum diffusion in fermionizable lattice models with zero modes.
In particular, the 1+1D quantum Ising model, semi-infinite in the spatial direction, is the simplest possible example of a fermionizable lattice model with a zero mode just outside of its paramagnetic phase. However, I've been having trouble producing these modes in ab initio calculations. I have that the semi-infinite quantum Ising model is dual to the non-interacting fermion model
$$H=i\sum_{j\geq 1}\left(B\gamma_{2j-1}\gamma_{2j}+J\gamma_{2j}\gamma_{2j+1}\right)\in \mathfrak{spin}_\infty$$
Utilizing the identification $\mathfrak{spin}_\infty\simeq \mathfrak{so}_\infty$ induced by the universal covering map, one gets a representation of the model as an antisymmetric matrix $\tilde{H}$, of which the zero modes should be spatially decaying eigenvectors. Perhaps the reason I can't find these modes is that I'm extracting the eigenvectors of $\tilde H$ via a bulk ansatz, where we note that the infinite Ising chain
$$H'=i\sum_{j\in \mathbb{Z}}\left(B\gamma_{2j-1}\gamma_{2j}+J\gamma_{2j}\gamma_{2j+1}\right)~~~~~~~~~~~$$
can be solved exactly using translational symmetry, and admits a formal translational eigenspace decomposition. The zero modes should then correspond to those formal eigenvectors with imaginary wave vector $k=i\kappa$ that also happen to satisfy the boundary condition of the semi-infinite chain. Furthermore, normalization constraints guarantee a maximum amplitude at the boundary of the lattice and an exponentially decaying amplitude into the interior, i.e. a pure waveform. However, when I try and fit the mode to the semi-infinite chain, the boundary condition reduces to
$$B^2\,\frac{Je^{\kappa}-B}{B-Je^{-\kappa}}=B^2+J^2-2JB\cosh \kappa\,$$
Which reduces to $B=B-Je^{-\kappa}$, an equation which admits only the completely localized mode $\kappa =\infty$. I wonder then, about how to obtain gently decaying solutions, and, furthermore, conditions on the transverse field that guarantee their existence. Some clarity in precisely where my logic/method fails would be great!
Answer: The upshot is that you forgot to allow a complex component into the wave vector. The fact that your dispersion includes a trigonometric function like $\cosh$ or $\cos$, which are only defined on the real and imaginary axes, shows that you were hoping to find a purely real or imaginary wave vector. The true wave vector of the low energy mode lives on neither axis, so it is no surprise that you could not find it.
How do we find its explicit form? Better than a "bulk ansatz" is a rigorous mathematical approach. In fact, the Toeplitz extension from K-theory tells us that this low-energy mode has exactly zero energy, because the topological index of your model is nonzero and equal to one, and because essential spectra are invariant under perturbations by a compact operator. This helps us immensely.
Also, you should have used one more important symmetry of your Hamiltonian: it does not mix odd-index basis vectors or even-index basis vectors among themselves. This implies a block-decomposition
$$H=\begin{pmatrix}0&JS_L+B\\ -JS_L^\dagger-B&0\end{pmatrix}$$
Where $S_L$ is the left-shift operator. Always, always use symmetries in the problem to your advantage. Furthermore, from the formula for the analytical index (because all quadratic fermion Hamiltonians are Dirac-type operators, they share the same index formula):
$$\text{a-ind}(H)=\dim\ker (JS_L+B)-\dim\ker (-JS_L^\dagger-B)$$
The first term is invertible, and so we have the formula
$$\text{a-ind}(H)=\dim\ker (-JS_L^\dagger-B)=\text{t-ind}(H)=1$$
(Recall that your hamiltonian is a smooth deformation of the Kitaev model at zero chemical potential, so it shares its topological index, which is one.) Therefore, our zero mode is the unique solution to this equation guaranteed by the topological index computation:
$$S_L=-B/J$$
This implies that the zero-mode is an eigenvector of left-shift, with negative eigenvalue $-B/J$. This has a solution with wave vector
$$\boxed{\kappa = \log B/J +i\pi/2}$$
Which does not lie on the real or imaginary axes, and is instead in the interior of the complex plane. | {
"domain": "physics.stackexchange",
"id": 35072,
"tags": "ising-model, majorana-fermions, integrable-systems, spin-models, spin-chains"
} |
Robot tilting problem on linear acceleration | Question:
Hello,
I am trying to simulate something close to the Nexus 3-wheeled platform.
When I accelerate with 2 wheels in the direction of the third, the latter wheel gets raised into the air. I used the formulas on Wikipedia to calculate my inertia. For the mass I tried 10, 100, and 200 kg, with the same tilting result.
I managed to set the friction directly in the .sdf file generated from my urdf.
<friction>
<ode>
<mu>0.9</mu>
<mu2>0.001</mu2>
<fdir1>0 1.0 0</fdir1>
</ode>
</friction>
(I set it directly in the .sdf file because i noticed another problem described here.)
I also setup the joint model stiffness parameters kp, kd like so:
<contact>
<ode>
<kp>1e+08</kp>
<kd>1</kd>
</ode>
</contact>
in order to have some stiff wheels when in contact with the ground plane, otherwise they pass through.
Would I need to set up something else? I am using ROS Kinetic and the Gazebo delivered with it, Gazebo 7.
(I use ros_control ( a modified version of diff_drive_controller for the 3-wheeled base) and of course continuous joints)
Tnx and regards!
UPDATE
I also had a small arm on top of my robot, with a really small weight compared to the base but with it's own inertia. I deleted all that and now I just have a base with 3 wheels and it works fine. For now it's ok for what I need but for the future I will need the arm also.
Originally posted by SorinV on Gazebo Answers with karma: 41 on 2017-06-26
Post score: 0
Original comments
Comment by sloretz on 2017-06-27:
It's the third wheel that raises? How high? What's the acceleration? Do you have a video of the behavior?
Comment by SorinV on 2017-06-28:
Yes, the third wheel (with the base link too, not just the wheel alone), as if it had a huge inertial value on the robot. Not too high, because my robot is low, so it hits the ground with the back of the base_link. I made an update in my question.
Answer:
So I had this problem because of a prismatic joint I was using:
I managed to use velocity_controllers/JointPositionController instead of position_controllers/JointPositionController with the prismatic joint. You just need to add a PID for it. Here's an example.
Before:
arm_controller:
type: "position_controllers/JointPositionController"
joint: arm_joint
After:
arm_controller:
type: "velocity_controllers/JointPositionController"
joint: arm_joint
pid: {p: 100.0, i: 0.01, d: 1.0}
And for the transmission:
<hardwareInterface>hardware_interface/VelocityJointInterface </hardwareInterface>
Same answer/problem here.
Originally posted by SorinV with karma: 41 on 2017-07-31
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 4135,
"tags": "friction, gazebo-7"
} |
Validation Accuracy remains constant while training VGG? | Question: I posted this question on Stack Overflow and got downvoted for an unmentioned reason, so I'll repost it here, hoping to get some insights.
This is the plot
This is the code:
with strategy.scope():
model2 = tf.keras.applications.VGG16(
include_top=True,
weights=None,
input_tensor=None,
input_shape=(32, 32, 3),
pooling=None,
classes=10,
classifier_activation="relu",
)
model2.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model2.fit(
train_images, train_labels,epochs=10,
validation_data=(test_images, test_labels)
)
I'm trying to train VGG16 from scratch, hence not importing its weights. I also tried a model which I created myself, with the same hyperparameters, and that worked fine.
Any help is highly appreciated
Here's the full code
Answer: Ok, I solved this problem
The simple thing was that the learning rate was too big.
I changed the code to this
LR = batch_size/((z+1)*100000)
LR=LR/3
instead of
LR = batch_size/((z+1)*1000)
LR=LR/3
and it seems to work well | {
"domain": "ai.stackexchange",
"id": 2736,
"tags": "neural-networks, deep-learning, tensorflow"
} |
Moving frames in 3D (6dof) | Question:
Hi everybody, I have a question about frames, transforms and TF. I would like to extend the simple Turtle tf tutorials (http://www.ros.org/wiki/tf/Tutorials) based on a 2D space, but I'm having a lot of trouble doing it. Unfortunately I haven't found anything (yet) on the mailing list or on ros-answers...
I have this 3 frames:
world_frame
odom_frame (with a fixed transform with respect to world)
cart_frame (I move this frame with the odometry data)
Here is the code:
static tf::TransformBroadcaster br;
static tf::TransformListener tf_;
and this
while (ros::ok()){
transform.setOrigin(tf::Vector3(3,3,0));
transform.setRotation(tf::createQuaternionFromRPY(0.0,0.0,0.0));
br.sendTransform(tf::StampedTransform(transform, ros::Time::now(), "/world", "/odom"));
here, with updateOdom I calculate the x,y,theta position from the odometry (like the tutorial)
updateOdom(1);
try
{
transform.setOrigin(tf::Vector3(x,y,0));
transform.setRotation(tf::createQuaternionFromRPY(0.0,0.0,th));
br.sendTransform(tf::StampedTransform(transform, ros::Time::now(), "/odom", "/cart_frame"));
} catch(tf::TransformException e)
{ ROS_WARN("errore transformpose (%s)", e.what()); }
Now what I'm trying to do is move the cart_frame not only with the x+y+th params but with a full 6-dof transform, as if the cart_frame was moving on a circular garage car ramp, and here starts my question: the simplest way is to add the information to the odometry, so I can call sendTransform() directly, but what if I had a situation where the odometer always sends only 2D transforms?
In the example I tried to send a transform with a small pitch at the beginning of the while (ros::ok()) loop
transform.setOrigin(tf::Vector3(3,3,0));
transform.setRotation(tf::createQuaternionFromRPY(0.0,0.4,0.0));
br.sendTransform(tf::StampedTransform(transform, ros::Time::now(), "/world", "/odom"));
and this is the outcome http://www.youtube.com/watch?v=d4ktwwukCV0, not quite what I want...
Ok, probably I am very tired and confused today but if someone can tell me where I'm wrong I'll be infinitely grateful =)
Thank you
Augusto
Originally posted by Augusto Luis Ballardini on ROS Answers with karma: 430 on 2011-05-11
Post score: 0
Answer:
From the video it looks to me like you are updating the orientation and position in x and y, but you are not increasing the z height. Without doing that, it's going to form circles, not a spiral.
Also, you will need to compute the orientation of each point as a combination of the rotation and the incline.
Originally posted by tfoote with karma: 58457 on 2011-05-11
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 5552,
"tags": "transform"
} |
Possible to use alkali solution to discharge static electricity? | Question: My question may contain wrong assumptions:
I believe static electricity present in a workshop because of friction may be caused by electrons or negative ions
I believe that it is somehow possible to conduct these charges in a copper wire to the ground
If the copper wire is going into a bucket of alkali water (e.g. a pH 12 mixture of water and sodium hydroxide), will it be able to discharge static from the workshop?
Answer: It is not uncommon to use an electrically conductive bracelet, connected to a flexible electrical cable, to conduct electricity between a person and the table he/she is working on, so that sparks can't jump between the person and things touching the table. This is close to what you are suggesting. The bucket of alkali solution, though, is not necessary. Only electrical contact between person and table is required. In cases where stray voltages may cause problems, e.g. in roll-to-roll machinery, either cables with conductive brushes, ion generators, or even radioactive bars are used to provide conductive paths to dissipate static charge.
"domain": "physics.stackexchange",
"id": 76697,
"tags": "electrostatics, electrochemistry"
} |
Combined effects of photons occupying the same state | Question: Will multiple photons occupying the same quantum state have a detectable combined effect? I mean, is there any difference in their combined property? I am assuming yes for this. How can we determine whether there is just one or multiple photons?
Answer: An easy way to know there are multiple photons is if you see light. In this one photon at a time experiment one sees how an interference pattern typical of light is generated by the accumulation of single photons. These photons do not overlap in time as you would want, but the quantum mechanical probability functions make the wave pattern appear.
A demonstration of the overlap of the zillions of photons that make up a laser beam can be seen in this MIT video. It shows an experiment with a laser beam split into two beams, and how the interference pattern comes up on the screen, even though the individual photons are not interacting. Lasing is a quantum mechanical phenomenon, where individual photons can be guided optically through the beam they make up. What is interesting is the total interference of the two beams, showing a black screen, and the explanation of where the energy of the beam goes (back to the lasing source). It proves to me how complicated a classical system is, when viewed quantum mechanically.
"domain": "physics.stackexchange",
"id": 63817,
"tags": "quantum-mechanics, optics, photons"
} |
Derivation of Conservation of Energy from Newton's Second Law | Question: Given Newtons's Second Law: $$ \frac {d}{dt} (m \boldsymbol{\dot r}) = \mathbf F $$
How is it possible to derive the conservation of energy equation with a constant mass?
That is, how can you derive $ \mathbf F = - \nabla V(\mathbf r) $, where $V(\mathbf r)$ is shown to be the potential energy when the force is conservative?
Attempted Proof:
Let $KE = T = \frac {1}{2} m\boldsymbol{\dot r} \cdot \boldsymbol{\dot r} $
or
$$\frac{dT}{dt} = \frac{1}{2}m[\boldsymbol{\dot r} \cdot \frac {d \boldsymbol{\dot r}}{dt} + \boldsymbol{\dot r} \cdot \frac {d \boldsymbol{\dot r}}{dt}] = m \boldsymbol {\dot r} \cdot \frac {d \boldsymbol{\dot r}}{dt}$$
and $\nabla V = \frac {\partial V} {\partial{\mathbf r}} $
Also, a conservative force says $\frac{dE}{dt} = 0$
Newton's Second Law could also be written as: $$m\boldsymbol{\dot r} \cdot \frac{d\boldsymbol{\dot r}}{dt} = \mathbf F \cdot \boldsymbol{\dot r}$$
My question is how is $ \mathbf F = - \nabla V(\mathbf r) $ introduced to Newton's second law properly and then integrated (? maybe) to obtain the energy?
Because I can easily prove that $ \mathbf F = - \nabla V(\mathbf r) $ if $E = T + V(\mathbf r)$ but I am trying to conclude that $V(\mathbf r)$ is the potential energy, not assume it
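Before any formal argument, a numerical sanity check is reassuring (illustrative: unit mass, harmonic potential $V(x)=\frac{1}{2}kx^2$ so that $F=-\nabla V=-kx$, leapfrog integration; all values made up):

```python
# Integrate m*x'' = F(x) = -dV/dx for V(x) = 0.5*k*x**2 (so F = -k*x)
# with a leapfrog scheme, and check that E = T + V stays constant.
m, k = 1.0, 4.0
x, v = 1.0, 0.0
dt, steps = 1e-4, 50_000

def energy(x, v):
    return 0.5 * m * v * v + 0.5 * k * x * x

E0 = energy(x, v)                 # initial energy
v += 0.5 * dt * (-k * x) / m      # half kick to stagger v
for _ in range(steps):
    x += dt * v                   # drift
    v += dt * (-k * x) / m        # kick
v -= 0.5 * dt * (-k * x) / m      # un-stagger v to the same time as x
E1 = energy(x, v)
# E1 agrees with E0 to high accuracy, consistent with dE/dt = 0
```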
Answer: Defining $\vec{v}=\dfrac{d\vec{r}}{dt}$ and $\vec{a}=\dfrac{d\vec{v}}{dt},$ we have:
$$\dfrac{dE}{dt}=\dfrac{d}{dt} \left(\frac{1}{2}m\vert \vec{v} \vert^2+V \right)=m\vec{v} \cdot \vec{a}+\dfrac{dV}{dt}.$$
Next notice that because of the chain rule we have:
$$\dfrac{dV}{dt}=\nabla V \cdot \vec{v},$$ so that we have:
$$\dfrac{dE}{dt}= m\vec{v} \cdot \vec{a}+\nabla V \cdot \vec{v}$$
Next we use Newton's second Law, which states $m\vec{a}=\vec{F}.$ If we assume that the force field is conservative, which means that the force is the gradient of a scalar field (the potential energy), we further have $F=-\nabla V,$ which finally yields:
$$\dfrac{dE}{dt}=-\vec{v}\cdot \nabla V+\nabla V \cdot \vec{v} = 0$$ | {
"domain": "physics.stackexchange",
"id": 61297,
"tags": "newtonian-mechanics, energy-conservation, potential-energy, conservative-field"
} |
1PI effective action and Action generated through Hubbard-Stratonovich transformation | Question: In standard lectures on advanced QFT one learns that performing a Legendre transformation leads to an effective action generating one-particle-irreducible (1PI) diagrams, which is encoded by the Schwinger-Dyson equation. In the book by Altland and Simons, the Hubbard-Stratonovich transformation provides an alternative approach to the diagrammatic perturbation theory exploiting the Schwinger-Dyson or Bethe-Salpeter equation. I feel confused trying to put the two techniques in the same picture and would like to ask how the two are related, if at all. Or is there a criterion for when to apply each of the two techniques? It seems that I don’t really understand the two concepts. (cf. Altland and Simons’ book (2nd edition) p. 250)
Explicitly, in the book, (eq. 6.6) is compared with the result (eq. 5.34) calculated from random phase approximation:
$S_{eff} = \frac{1}{2TL^3}\sum_{q}\phi_q\left(V_{eff.}^{-1}\right)\phi_{-q} + \mathcal{O}\left(\phi^4\right)$ with
$V_{eff.} = \frac{1}{V\left(\mathbf{q}\right)^{-1} - \Pi_q}$ and $\Pi_q$ is the polarization operator.
Basically, the self-energy term emerges from both the calculations done in chapter 5 and those in chapter 6. It feels like one obtains the 1PI result without performing a Legendre transformation.
Answer: To derive (5.34), one has to compare the contributions $F^{(2),1}$ and $F^{(2),2}$. After careful comparison, you can conclude that $F^{(2),2}$ is finite, whereas $F^{(2),1}$ is divergent. As we have been taught by our Fathers, the divergent contributions are important; finite contributions are not so interesting. It means that at this step you have specified the sub-class of diagrams (the so-called ring diagrams) which is important for the problem, and now your goal is to perform the summation over these diagrams. The summation gives the screened interaction (5.34),
$$V_{\text{eff}}(q)=\frac{1}{V({\bf q})^{-1}-\Pi_q},$$
where $\Pi_q$ is the polarization operator,
$$\Pi_q=\frac{2T}{L^3}\sum_p G_pG_{p+q}.$$
In this calculation you have dealt with free energy $F$, which is related to partition function as $F=-T\ln\mathcal{Z}$ (see p. 211).
You can achieve the same result by performing a decoupling of the quartic non-local (Coulomb) interaction in the direct channel. To do it, you introduce a new effective field $\phi$. Performing all the calculations, you reproduce the result obtained by RPA. To be honest, I do not see a Legendre transformation here.
There is some similarity with 1PI, because in RPA you choose the concrete type of diagrams (ring diagrams) and perform summation over them because they are dominant.
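The claim that summing the ring diagrams yields the screened interaction above can be checked as a geometric series. A quick exact-rational sketch in Python (the values of $V$ and $\Pi$ are arbitrary illustrative choices with $|\Pi V|<1$, not tied to any particular $q$):

```python
from fractions import Fraction

# Exact-rational check of the ring-diagram (RPA) geometric series
V, Pi = Fraction(3, 5), Fraction(1, 2)   # arbitrary values with |Pi*V| < 1
r = Pi * V

# Partial sum of V + V*(Pi*V) + V*(Pi*V)^2 + ... (one term per ring diagram)
partial = sum(V * r ** k for k in range(200))

V_eff = 1 / (1 / V - Pi)   # screened interaction quoted above
print(V_eff)               # -> 6/7
assert abs(partial - V_eff) < Fraction(1, 10 ** 20)
```

The partial sums converge geometrically to $1/(V^{-1}-\Pi)$, which is exactly the resummed $V_{\text{eff}}$ of (5.34).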
P.S.: It seems nice to write equations in your questions, because sometimes it is hard to open book and go through pages. For instance, you can write down at which step you suspect Legendre transformation.
Let me also draw your attention to the beautiful book "Green's functions. Theory and practice" by L. Levitov & A. Shytov (unfortunately, I do not know about a translation from Russian to English, but you can also check this course page) | {
"domain": "physics.stackexchange",
"id": 97890,
"tags": "quantum-field-theory, lagrangian-formalism, non-perturbative, 1pi-effective-action"
} |
Why Church-encoded types aren't sufficient to express inductive proofs? | Question: I've heard some claims that the calculus of constructions without inductive types isn't powerful enough to express proofs by induction. Is that correct? If so, why isn't the Church-encoding sufficient for that?
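(For context: the Church encoding represents a natural number by its own iterator. An untyped Python sketch of the encoding — Python happily runs it, but the question is about what is *provable* about such terms inside the pure CoC:)

```python
# Church numerals: n is encoded as \f.\x. f^n x
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by iterating the integer successor."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(add(three)(three)))  # -> 6
```

In the pure CoC this encoding and its arithmetic are definable; the issue raised below is whether the induction principle over it is derivable.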
Answer: How would you prove inside the pure CoC that the induction principle holds of the Church numerals? See Thomas Streicher's, Independence of the induction principle and the axiom of choice in the pure calculus of constructions. | {
"domain": "cs.stackexchange",
"id": 13081,
"tags": "proof-techniques, functional-programming, automated-theorem-proving, curry-howard"
} |
Ignoring $(\sigma_i-M)(\sigma_j-M)$ in mean field theory? | Question: A way to do mean field theory for the Ising model is as follows.
First take the Ising Hamiltonian: $$H=-J \sum_{\left<i,j\right>} \sigma_i\sigma_j$$
Let $\sigma_i=\sigma_i-M+M$ and likewise for $\sigma_j$ to get: $$H=-J \sum_{\left<i,j\right>} (M^2 +(\sigma_i-M)M+(\sigma_j-M)M+\underbrace{(\sigma_i-M)(\sigma_j-M)}_{\bigstar})$$
Ignore the starred ($\star$) term.
Write down the partition function, apply a self-consistency condition etc.
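Note that the decomposition in the second step is an exact algebraic identity before the starred term is dropped; a quick numerical check over $\sigma=\pm1$ and a few arbitrary values of $M$:

```python
# Check that sigma_i*sigma_j = M^2 + (sigma_i - M)*M + (sigma_j - M)*M
#                              + (sigma_i - M)*(sigma_j - M) holds exactly.
def rhs(si, sj, M):
    return M ** 2 + (si - M) * M + (sj - M) * M + (si - M) * (sj - M)

for si in (-1, 1):
    for sj in (-1, 1):
        for M in (-0.75, 0.0, 0.25, 1.0):
            assert abs(si * sj - rhs(si, sj, M)) < 1e-12
print("identity holds exactly; only dropping the starred term is an approximation")
```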
Given that in the Ising model $\sigma_i=\pm1$, for any given $i$ and $j$ the ($\star$) term is not going to be small. What is the standard justification for then ignoring it?
Answer: Even though $\sigma_i-M$ is not small, the expectation value of its square is small, as that corresponds to the variance, hence to fluctuations, which are assumed to be next-order terms in mean-field theory. That's why the summation $\sum (\sigma_i-M)(\sigma_j-M)$, which is basically the autocovariance function along lattice sites, should be small. By the way, here we should have $M=\langle\sigma_i\rangle$. | {
"domain": "physics.stackexchange",
"id": 48337,
"tags": "condensed-matter, ising-model, mean-field-theory"
} |
Swapping adjacent nodes of a linked list | Question: I just have a small question. The code I wrote works well for the problem. But is it the best way, or can we make it better?
struct node * swapAdjacent(struct node * list)
{
struct node * temp,*curr,*nextNode;
temp = list;
curr = temp->next;
if(curr == NULL)
return temp;
nextNode = curr->next;
curr->next = temp;
if(nextNode == NULL)
{
temp->next = nextNode;
return curr;
}
temp->next = swapAdjacent(nextNode);
return curr;
}
Answer:
Consider having swapAdjacent(NULL) return NULL. Once it does, you can get rid of the whole if (nextNode == NULL) statement, and just unconditionally say temp->next = swapAdjacent(nextNode);.
You might want to use a couple of guard clauses to separate the null checks from the other stuff. That can make the steps easier to follow.
With those things done:
struct node * swapAdjacent(struct node * list) {
struct node *temp, *curr, *nextNode;
if (!list) return NULL;
if (!list->next) return list;
temp = list;
curr = list->next;
nextNode = curr->next;
curr->next = temp;
temp->next = swapAdjacent(nextNode);
return curr;
}
The meanings of the names curr, temp, and nextNode are a bit foggy. I'd change the names to something that unambiguously refers to the nodes' positions in the list either before or after the swap.
Frankly, temp and nextNode could probably go away, and the resulting code would be simpler for it.
Watch:
struct node * swapAdjacent(struct node * list) {
if (!list) return NULL;
if (!list->next) return list;
struct node *newHead = list->next;
list->next = swapAdjacent(newHead->next);
newHead->next = list;
return newHead;
}
Note how there's a bit of a rotation of values going on. It was going on in the original code, too...but it was harder to see with the variables in the way. | {
"domain": "codereview.stackexchange",
"id": 7517,
"tags": "c, linked-list"
} |
shared_ptr and FILE for wrapping cstdio (update: also dlfcn.h) | Question: Even in the presence of <fstream>, there may be reason for using the <cstdio> file interface. I was wondering if wrapping a FILE* into a shared_ptr would be a useful construction, or if it has any dangerous pitfalls:
#include <cstdio>
#include <memory>
std::shared_ptr<std::FILE> make_file(const char * filename, const char * flags)
{
std::FILE * const fp = std::fopen(filename, flags);
return fp ? std::shared_ptr<std::FILE>(fp, std::fclose) : std::shared_ptr<std::FILE>();
}
int main()
{
auto fp = make_file("hello.txt", "wb");
fprintf(fp.get(), "Hello world.");
}
Update: I just realized that it is not allowed to fclose a null pointer. I modified make_file accordingly so that in the event of failure there won't be a special deleter.
Second update: I also realized that a unique_ptr might be more suitable than a shared_ptr. Here is a more general approach:
typedef std::unique_ptr<std::FILE, int (*)(std::FILE *)> unique_file_ptr;
typedef std::shared_ptr<std::FILE> shared_file_ptr;
static shared_file_ptr make_shared_file(const char * filename, const char * flags)
{
std::FILE * const fp = std::fopen(filename, flags);
return fp ? shared_file_ptr(fp, std::fclose) : shared_file_ptr();
}
static unique_file_ptr make_file(const char * filename, const char * flags)
{
return unique_file_ptr(std::fopen(filename, flags), std::fclose);
}
Edit. Unlike shared_ptr, unique_ptr only invokes the deleter if the pointer is non-zero, so we can simplify the implementation of make_file.
Third Update: It is possible to construct a shared pointer from a unique pointer:
unique_file_ptr up = make_file("thefile.txt", "r");
shared_file_ptr fp(up ? std::move(up) : nullptr); // don't forget to check
Fourth Update: A similar construction can be used for dlopen()/dlclose():
#include <dlfcn.h>
#include <memory>
typedef std::unique_ptr<void, int (*)(void *)> unique_library_ptr;
static unique_library_ptr make_library(const char * filename, int flags)
{
return unique_library_ptr(dlopen(filename, flags), dlclose);
}
Answer: I should start with the fact that I don't entirely agree with the widespread belief that "explicit is better than implicit". I think in this case, it's probably at least as good to have a class that just implicitly converts to the right type:
class file {
typedef FILE *ptr;
ptr wrapped_file;
public:
file(std::string const &name, std::string const &mode = std::string("r")) :
wrapped_file(fopen(name.c_str(), mode.c_str()))
{ }
operator ptr() const { return wrapped_file; }
~file() { if (wrapped_file) fclose(wrapped_file); }
};
I haven't tried to make this movable, but the same general idea would apply if you did. This has (among other things) the advantage that you work with a file directly as a file, rather than having the ugly (and mostly pointless) .get() wart, so code would be something like:
file f("myfile.txt", "w");
if (!f) {
fprintf(stderr, "Unable to open file\n");
return 0;
}
fprintf(f, "Hello world");
This has a couple of advantages. The aforementioned cleanliness is a fairly important one. Another is the fact that the user now has a fairly normal object type, so if they want to use overloading roughly like they would with an ostream, that's pretty easy as well:
file &operator<<(file &f, my_type const &data) {
return data.write(f);
}
// ...
file f("whatever", "w");
f << someObject;
In short, if the user wants to do C-style I/O, that works fine. If s/he prefers to do I/O more like iostreams use, a lot of that is pretty easy to support as well. Most of it is still just syntactic sugar though, so it generally won't impose any overhead compare to using a FILE * directly. | {
"domain": "codereview.stackexchange",
"id": 30048,
"tags": "c++, c++11"
} |
Planck's constant, Boltzmann constant and Hawking Temperature | Question: The Hawking temperature of a Schwarzschild black hole is given in SI units as
$$T_{H}=\frac{\hbar c^3}{8 \pi G k_{B} M},$$
where $k_{B}$ is the Boltzmann constant. I would like to know how $\hbar$ and $k_{B}$ show up in the temperature. I mean where in the original derivation by Hawking do these constants show up?
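For scale, the formula can be evaluated directly; a sketch in Python with hard-coded CODATA-style constants (the solar-mass value is approximate):

```python
import math

# SI constants, hard-coded
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m/s
G = 6.67430e-11         # m^3 kg^-1 s^-2
kB = 1.380649e-23       # J/K

def hawking_T(M):
    """Hawking temperature (K) of a Schwarzschild black hole of mass M (kg)."""
    return hbar * c ** 3 / (8 * math.pi * G * kB * M)

M_sun = 1.989e30  # kg, approximate solar mass
print(f"{hawking_T(M_sun):.2e} K")  # ~6.2e-8 K for one solar mass
```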
I have looked into the original paper by Hawking, "Particle Creation by Black Holes". There he begins the calculation by writing down the massless scalar wave equation in curved background
$$ \frac{1}{\sqrt{-g}}\partial_{\mu} \left(\sqrt{-g}\,g^{\mu\nu} \partial_{\nu} \phi \right)=0.$$
Now as far as I can understand Hawking temperature shows up in the exponent when the modes $\sim\phi$ are traced from the surface of collapsing body to the future infinity. Alternative calculations without invoking the collapse geometry suggest modes tunnel through the horizon. So at first I thought $\hbar$ naturally shows because from quantum mechanics we have $\phi \sim e^{\frac{i}{\hbar}S}$. But what is bothering me here is that above wave equation does not have any $\hbar$ in it. In fact such wave equation comes from a Lagrangian of the form
$$ I\left[\phi\right] = \int \left[\frac{1}{2}g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi \right]\sqrt{-g} d^4x,$$
where $\hbar$ does not show up. And I am not sure whether such Lagrangian should come with a $\hbar$ based on dimensional analysis.
My second source of confusion is related to the Boltzmann constant. Again, I have no idea how and where $k_{B}$ emerges in the derivation. Without the notion of temperature, $k_{B}$ seems unrelated to such calculations which involve wave equations, Bogoliubov transformations etc...
Answer: You are correct that the action he used does not contain $\hbar$ in it. After all, it is a classical action which describes simply a wave equation in curved spacetime. There is no problem in considering these sorts of phenomena classically and, in principle, there is no reason for $\hbar$ appear anywhere in computations involving the wave equation.
Nevertheless, Hawking uses the wave equation as the classical equation of motion for a quantum field, which means he does not treat the field as a real function, but rather as an operator. This operator must satisfy commutation relations which do involve $\hbar$, even though (as far as I remember) Hawking uses units with $\hbar = 1$. The fact that the calculation uses operators everywhere (for example, in the very notion of a Bogoliubov transformation) already necessitates $\hbar$ to be present. Of course, Hawking's units hide this fact.
As for the Boltzmann constant, you said
Without the notion of temperature, $k_B$ seems unrelated to such calculations [...]
This statement is correct and actually applies to every expression in Physics that involves $k_B$, not only the Hawking temperature. The sole physical meaning of $k_B$ is to serve as a conversion factor between energy scales and more conventional temperature scales. The reason $k_B$ appears in the expression—actually, in any expression—is just so you can read the temperature in Kelvin. Temperature could just as well be expressed, e.g., in Joules or in your favorite unit of energy. The Wikipedia page for the Boltzmann constant seems to mention a bit about this. Notice also that in every physical equation written in the SI in which temperature appears it is accompanied by the Boltzmann constant (sometimes, hidden in another constant, such as the molar gas constant, which is just the Boltzmann constant times Avogadro's number). | {
"domain": "physics.stackexchange",
"id": 89008,
"tags": "thermodynamics, quantum-field-theory, general-relativity, hawking-radiation, qft-in-curved-spacetime"
} |
Orthogonal signal generator using integer arithmetic | Question: I have a problem with implementing an orthogonal signal generator (OSG) algorithm on a microcontroller using integer arithmetic. I use this algorithm for a single-phase phase-locked loop (PLL) algorithm, for which I need an orthogonal component of a grid voltage.
The OSG algorithm is defined as follows:
$$\frac{d}{dt}v_x = \hat{\omega} \cdot \bigl((v_g-v_x)-v_y\bigr)$$
$$\frac{d}{dt}v_y = \hat{\omega} \cdot v_x$$
where $v_g$ is the measured grid voltage, $\hat{\omega}$ is the estimated grid frequency, and $v_x$ and $v_y$ are estimated components, with $v_x$ being equal to $v_g$ for ideal estimation. For this purpose, let us assume that the grid frequency is known.
The numerical integrator is implemented as follows:
$$y_k = \frac{T_s}{12} \bigr( 23u_{k-1} - 16u_{k-2} + 5u_{k-3} \bigl) + y_{k-1}$$
where $T_s=50~\mu\text{s}$ is the sample time.
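The integrator above is the third-order Adams–Bashforth rule; a minimal floating-point reference implementation (independent of the microcontroller code below), sanity-checked by integrating $u(t)=\cos t$:

```python
import math

def ab3_step(Ts, u1, u2, u3, y1):
    """One step: y_k = Ts/12 * (23*u_{k-1} - 16*u_{k-2} + 5*u_{k-3}) + y_{k-1}."""
    return (Ts / 12.0) * (23 * u1 - 16 * u2 + 5 * u3) + y1

Ts = 1e-3
u = [math.cos(k * Ts) for k in range(1001)]
y = 0.0
hist = [u[0], u[0], u[0]]  # crude start-up: repeat the first sample
for k in range(1, 1001):
    y = ab3_step(Ts, hist[0], hist[1], hist[2], y)
    hist = [u[k], hist[0], hist[1]]

print(abs(y - math.sin(1.0)))  # integration error stays tiny
```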
Now, this algorithm works fine in floating point implementation, but is poor in integer arithmetic implementation. Here I give both implementations:
Floating point implementation
Code declaration.
float w = (2*PI)*50;
float Ts = 50e-6;
float i1u1, i1u2, i1u3, i1y1;
float i2u1, i2u2, i2u3, i2y1;
float NumInt3rd(float u1, float u2, float u3, float y1) {
return (Ts/12)*(23*u1-16*u2+5*u3)+y1;
}
Main function.
float vg = floor(Input(0));
float vg_x = NumInt3rd(i1u1,i1u2,i1u3,i1y1);
float vg_y = NumInt3rd(i2u1,i2u2,i2u3,i2y1);
i1u3 = i1u2;
i1u2 = i1u1;
i1u1 = ((vg-vg_x)-vg_y)*w;
i1y1 = vg_x;
i2u3 = i2u2;
i2u2 = i2u1;
i2u1 = vg_x*w;
i2y1 = vg_y;
The Input(0) is a macro to get an input signal (sine wave with an amplitude of $2048$).
Integer arithmetic implementation
Code declaration.
int w = 643398L; // (2*PI)*50*2048
int i1u1, i1u2, i1u3, i1y1;
int i2u1, i2u2, i2u3, i2y1;
int NumInt3rd(int u1, int u2, int u3, int y1) {
int iu = 23*u1-16*u2+5*u3;
int iy = 240000L*y1;
return (iu+iy)/240000L;
}
Main function.
int vg = (int) Input(0);
int vg_x = NumInt3rd(i1u1,i1u2,i1u3,i1y1);
int vg_y = NumInt3rd(i2u1,i2u2,i2u3,i2y1);
i1u3 = i1u2;
i1u2 = i1u1;
i1u1 = ((vg-vg_x)-vg_y)*w/2048;
i1y1 = vg_x;
i2u3 = i2u2;
i2u2 = i2u1;
i2u1 = vg_x*w/2048;
i2y1 = vg_y;
Note that I've checked for possible overflows; they never occur. Also, the interesting thing is that the same algorithm works fine for $T_s=300~\mu\text{s}$.
I'm not that experienced with integer arithmetic implementations. Can you please give me an advice how to possibly fix this? Thanks!
Here is the estimation error for both implementations (on y-axis: percentage of the estimation error). The estimation error in case of integer artihmetic implementation is around $\pm 5\%$.
Answer: I managed to find an answer to my problem.
The problem is with rounding in integer division. For example, -4/3 will be rounded to -1, just as -5/3 will. Because of this, the integration error constantly accumulates. Instead of explaining, I give code below showing how to fix this.
Code declaration.
// Global variables
int w = 643398L; // (2*PI)*50*2048
int i1u1=0, i1u2=0, i1u3=0;
int i2u1=0, i2u2=0, i2u3=0;
short i1y1=0, i2y1=0;
// Numerical integrator implementation
short NumInt3rd(int u1, int u2, int u3, short y1) {
int iu = 23*u1-16*u2+5*u3;
int iy = 240000L*y1;
short y0 = (((iu+iy)>>12)*2237+65536)>>17;
return y0;
}
Main function.
// Get voltage measurements (-2048 to +2048)
short vg = (short) Input(0);
// Numerical integrators, 3rd order
short vg_x = NumInt3rd(i1u1,i1u2,i1u3,i1y1);
short vg_y = NumInt3rd(i2u1,i2u2,i2u3,i2y1);
// Downsample frequency to prevent overflow
short wb = w>>5;
// Update integrator #1 states
i1u3 = i1u2;
i1u2 = i1u1;
i1u1 = ((int)((vg-vg_x)-vg_y)*wb)>>6;
i1y1 = vg_x;
// Update integrator #2 states
i2u3 = i2u2;
i2u2 = i2u1;
i2u1 = ((int)vg_x*wb)>>6;
i2y1 = vg_y;
It should be noted that instead of using integer division, which is very expensive in terms of required number of instruction cycles, I rather use bit shift operation. For example, 1/240000 can be well approximated as 2237/2^29 - the approximation accuracy is 0.00169277%.
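The quoted approximation can be reproduced directly; an illustrative check in Python:

```python
# Replace "x / 240000" with a multiply and a right shift: 1/240000 ~ 2237 / 2**29
exact = 1.0 / 240000
approx = 2237 / 2 ** 29
rel_err_pct = abs(approx - exact) / exact * 100
print(f"relative error: {rel_err_pct:.8f} %")  # -> 0.00169277 %, as quoted

# The integer form (x * 2237) >> 29 tracks x // 240000 closely:
for x in (240000, 123_456_789, 10 ** 9):
    assert abs((x * 2237 >> 29) - x // 240000) <= 1
```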
And here is the sine wave estimation error. As you can see, the results are much better now. | {
"domain": "dsp.stackexchange",
"id": 4683,
"tags": "control-systems, integration, embedded-systems"
} |
How to use the Pumping Lemma to show that a language is not context free? | Question: I have the following alphabet $\Sigma = \{0,\dots,9\}$ and the following language over $\Sigma \cup \{\#\}$:
$$L=\{\#w \ |\ w \in\Sigma^*,\sum_{i\geq1}w_i\ \text{is prime}\}\\\\$$
This language represents all numbers which have a prime digit sum. I now want to show that this language is not context free. I want to show this with a reductio ad absurdum via the pumping lemma, and I am not quite sure if I proved it correctly:
My idea was to just pick a word which is in $L$ and then show that it cannot be pumped with the pumping lemma; since every sufficiently long word of a context-free language can be pumped, this would show that $L$ is not context free. But I am not quite sure if this is enough.
Let's assume that $L$ is context free. Then the pumping lemma states that there is a number $k \in \mathbb{N}$ for which every word $w \in L$ with $|w|\geq k$ can be split up as $w=xuyvz$, where the following constraints hold:
$0<|uv|\leq|uyv|\leq k$
$\forall n \in \mathbb{N}:xu^nyv^nz \in L$
Let $k=5$ and $w=\#11111$ because $|w|=6 \Rightarrow |w|\geq k$. We can split up $w$ like this $w=xuyvz$ where the following holds:
$x=\#$
$u=1$
$y=11$
$v=1$
$z=1$
Because $|uv|=2 \ \land \ |uyv|=4 \Rightarrow 0<|uv|\leq|uyv|\leq k$. Now $\forall n \in \mathbb{N}:xu^nyv^nz \in L$ should also be true. But let $n=3$ then $w'=\#111111111 \notin L$. Thus $L$ is not context free, because the pumping lemma with a number $k \in \mathbb{N}$ is not working for every $w$ with $|w|\geq k$.
I am self learning and have no one who can help me with this, so I really would appreciate if someone could tell me if this proof is working or how I can improve it.
Answer: There are several problems in your proof. The language $L$ indeed is not context-free, and the pumping lemma can be used to prove it.
However:
you cannot choose the value of $k$ yourself;
you cannot choose the values of $x, u, y, v$ and $z$ yourself.
The pumping lemma states that if $L$ is context-free, then THERE EXISTS $k\in \mathbb{N}$ such that FOR ALL $w\in L$ with $|w| \geqslant k$, then THERE EXISTS a decomposition $w=xuyvz$ verifying the three conditions.
However, to prove that $L$ is not context-free, you have to use the contraposition:
If FOR ALL $k\in \mathbb{N}$, THERE EXISTS $w\in L$ with $|w| \geqslant k$ such that FOR ALL decompositions $w=xuyvz$, not all three conditions are verified, then $L$ is not context-free.
The formulation you have seen may be a bit different, but the same ideas are underlying.
Now back to your problem. Let $k\in\mathbb{N}$ be any integer. Consider $p$ any prime number $\geqslant k$. Given the definition of $L$, it is clear that $w = \#1^p\in L$ (here the $^p$ denotes $p$ repetitions, not the mathematical exponentiation).
Suppose $w = xuyvz$ with $|uv| >0$ and $|uyv|\leqslant p$. Let us distinguish:
if $uv$ contains the symbol $\#$, then $xyz$ does not contain $\#$ so $xyz\notin L$;
that means that $uv = 1^q$ with $0<q\leqslant p$. Then, $xu^{p+1}yv^{p+1}z = xuyvz(uv)^p = \#1^p1^{qp} = \#1^{p(q+1)}$. However, $p(q+1)$ is not prime (both $p\geqslant 2$ and $q+1\geqslant 2$), so $xu^{p+1}yv^{p+1}z\notin L$.
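The arithmetic in the second case can be spot-checked by brute force:

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# For w = '#' + '1' * p with p prime, pumping i extra copies of uv (|uv| = q)
# gives p + i*q ones; with i = p this is p*(q + 1), composite for every q >= 1.
for p in (5, 7, 11, 13):
    assert is_prime(p)
    for q in range(1, p + 1):
        assert not is_prime(p + p * q)
print("the pumped word never has a prime number of 1s")
```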
We conclude that $L$ is not context-free. | {
"domain": "cs.stackexchange",
"id": 20829,
"tags": "context-free, pumping-lemma"
} |
Clipboard support in jQuery using revealing module pattern | Question: I have recently been getting into the habit of leveraging the revealing module pattern for all my code. I used this guide for inspiration, but my code doesn't feel as elegant.
var styleGuide = (function styleGuideHandler() {
'use strict';
var publicAPI,
intervalId = null,
clipboard = new Clipboard('.copyButton'),
btns = document.querySelectorAll('.style-guide');
function setTooltip(btn, message) {
$(btn).attr('data-original-title', message);
setTimeout(function() {
$(btn).tooltip('show');
}, 150);
}
function hideTooltip(btn) {
if (intervalId !== null) {
clearTimeout(intervalId);
}
intervalId = setTimeout(function() {
$(btn).tooltip('hide');
intervalId = null;
}, 500);
}
publicAPI = {
init: function() {
clipboard.on('success', function(e) {
setTooltip(e.trigger, 'Copied!');
hideTooltip(e.trigger);
e.clearSelection();
console.log(e);
});
clipboard.on('error', function(e) {
setTooltip(e.trigger, 'Failed!');
hideTooltip(e.trigger);
console.log(e);
});
$('.copyButton').tooltip({
trigger: 'click',
placement: 'bottom'
});
$('pre code').each(function(i, block) {
hljs.highlightBlock(block);
});
/* preventDefault on buttons */
for (var i = 0, l = btns.length; i < l; i++) {
btns[i].addEventListener('click', function(e) {
e.preventDefault();
e.stopPropagation();
});
}
}
};
return publicAPI;
})();
$(document).ready(styleGuide.init);
Also, would executing the ready function like this $(document).ready(function(){styleGuide.init}); encapsulate the module further? Meaning, there would be no chance the styleGuide module could be overwritten?
Answer: Interesting question, your code is very readable.
However, if you only reveal an init function, then really there is not much sense in using a revealing pattern.
I would probably not self execute styleGuideHandler but pass it to the jQuery call:
$(document).ready(styleGuideHandler);
Other than that, just for giggles, I might also pass the few globals it uses:
$(document).ready(styleGuideHandler( document, Clipboard, hljs ));
Then when styleGuideHandler is executed, you run the code in init. | {
"domain": "codereview.stackexchange",
"id": 23319,
"tags": "javascript, jquery, revealing-module-pattern"
} |
How much hydrogen or oxygen will be produced in the electrolysis of water? | Question: Is there a way to exactly calculate the quantity of hydrogen or oxygen in a water electrolysis? I am thinking of a plain and simple electrolysis like an anode and cathode in salted water separating oxygen and hydrogen.
I am looking for the relation between the amounts of hydrogen $n(\ce{H2})$ or oxygen $n(\ce{O2})$ and the amounts of water and salt, current, voltage and time of electrolysis.
Answer: I'm not sure it's appropriate to explain the relationship between all the variables above, here, as there are a lot of concepts that would need to be explored. However, fundamentally (assuming there are no other reactions apart from the electrolysis) the charge passed through the cell is proportional to the amount of products formed, described by Faraday's laws of electrolysis. Here is a useful rearrangement:
$$n = \frac{Q}{Fz}$$
Where $Q$ is the charge passed, $F$ is the Faraday constant and $z$ is the number of electrons transferred per redox reaction. For example, $z = 2$ for hydrogen because it takes two electrons to convert $\ce{H+}$ to $\ce{H2}$ as $\ce{2H+ + 2e- <=> H2}$. Therefore, for every coulomb of charge, we get: $$\begin{align}
n &= \frac{1\ \mathrm{C}}{(96.485\ \mathrm{kC/mol})(2)}\\
&= 5.18\ \mathrm{µmol}
\end{align}$$ of hydrogen gas produced. So if we can measure the charge (or current integrated over time), we can calculate the amount of product produced. This is true no matter what voltage is applied or what the electrolyte content of the solution is, however if more than one reaction is occurring at an electrode, you won't be able to separate the contributions from each reaction with only the charge as information. There will also be a minor contribution from charging the double layer capacitance when the potential is first applied.
Where things become less straightforward is understanding how changing the potential or the solution resistance affects the amount of charge that flows, but the charge itself is directly related to how much product is being produced. | {
"domain": "chemistry.stackexchange",
"id": 4131,
"tags": "physical-chemistry, electrochemistry, water, electrolysis"
} |
ROS Answers SE migration: learning_joy | Question:
I am trying the tutorial with turtlesim and joystick on ROS electric, and Ubuntu 11.4. I keep getting [ERROR] [1324135517.676376697]: Client [/teleop] wants topic /joy to have datatype/md5sum [joy/Joy/e3ef016fcdf22397038b36036c66f7c8], but our version has [sensor_msgs/Joy/5a9ea5f83505693b71e785041e67a8bb]. Dropping connection.
I know that the call back has to be changed to sensor_msgs, but I am not sure how to do this.
Thank you,
Morpheus
Originally posted by Morpheus on ROS Answers with karma: 111 on 2011-12-17
Post score: 1
Answer:
Hi,
Go to the teleop_base.cpp file and search for Joy::joy; it will be in the joy_cb callback function. Change it to sensor_msgs::Joy.
Check if the
#include "sensor_msgs/Joy.h"
is included in the file if not include it. rosmake the teleop_base and it should work.
Hope this helps, Karthik
Originally posted by karthik with karma: 2831 on 2011-12-17
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Morpheus on 2011-12-20:
Thanks karthik, I am new at ROS
Comment by karthik on 2011-12-19:
if you don't know what is call back function then you have to go through the tutorials http://www.ros.org/wiki/ROS/Tutorials/WritingPublisherSubscriber(c%2B%2B)
Comment by karthik on 2011-12-19:
Its just about the type of the object. In your teleop_base.cpp there will be a call back function in expecting a msg of the form Joy::joy you have to change that type to sensor_msgs::joy. Include the above mentioned header file also.
Comment by Morpheus on 2011-12-19:
I am new at this, and I am not sure what and where I need to make these changes.
Comment by karthik on 2011-12-17:
Thanks Eric :)
Comment by Eric Perko on 2011-12-17:
I've ticketed updating all the tutorials to use the new sensor_msgs::Joy message here: https://code.ros.org/trac/ros-pkg/ticket/5306 | {
"domain": "robotics.stackexchange",
"id": 7675,
"tags": "joystick, turtlesim"
} |
How to calculate the detection limit? | Question: I am having trouble figuring out how to calculate the detection limit. Here is the question:
You and your friend working in a different lab have been given an unknown water sample that contains potassium. You are both asked to measure the potassium concentration in the solution (10 replicate measurements). The methods that you and your friend are using are different. The results that were obtained are listed below. Determine:
(a) the concentration, standard deviation and relative standard deviation for the unknown as measured using the two methods (check for outliers!),
(b) calculate the detection limit (3sigma) for each method,
(c) compare the standard deviations and evaluate whether the two averages are significantly different (or not) at the 95% confidence level.
RESULTS:
Standard Method 1 Method 2
Concentration Intensity Intensity
(mg/L) (nA) (mV)
0.000 0.624 1.955
0.000 0.488 2.490
0.000 0.522 2.166
0.000 0.355 1.500
5.000 9.245 15.644
10.000 17.069 31.220
15.000 26.200 44.266
20.000 33.881 62.394
25.000 43.826 75.611
Replicate
1 26.544 46.977
2 25.449 49.120
3 21.053 50.998
4 24.353 46.615
5 23.899 49.326
6 24.010 46.666
7 25.554 45.291
8 23.549 42.995
9 26.008 43.678
10 24.404 49.012
So my problem is with part (b). I know the detection limit is calculated with the formula DL = 3sigma/slope and that I can find the slope by calculating Sxy/Sxx but I'm not sure what points I'm supposed to use? What points do I use for the standard deviation (sigma) and for Sxy/Sxx?
I've tried looking at examples in the textbook but nothing was really similar. I also tried searching online, but I couldn't find anything that helped me.
Answer: Have a look at this: http://www.chem.utoronto.ca/coursenotes/analsci/stats/LimDetect.html
It is possible to calculate the limit of detection from the standard error of the regression:
$$C_{\mathrm{LOD}} = \frac{3s_{y/x}}{b}$$ These values are calculated from the regression of all the data points, not the standard deviation of any subset of data.
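As a worked illustration with the Method 1 standards from the question (plain Python, no outlier screening, so treat the number as illustrative):

```python
import math

# Method 1 standards from the question (concentration mg/L -> intensity nA)
x = [0, 0, 0, 0, 5, 10, 15, 20, 25]
y = [0.624, 0.488, 0.522, 0.355, 9.245, 17.069, 26.200, 33.881, 43.826]
n = len(x)

xbar, ybar = sum(x) / n, sum(y) / n
Sxx = sum((xi - xbar) ** 2 for xi in x)
Sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
b = Sxy / Sxx                 # slope of the calibration line
a = ybar - b * xbar           # intercept

# Standard error of the regression, s_{y/x}, with n - 2 degrees of freedom
s_yx = math.sqrt(sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2))

print(f"slope = {b:.4f} nA per mg/L, LOD = {3 * s_yx / b:.2f} mg/L")
# -> slope = 1.7086 nA per mg/L, LOD = 0.76 mg/L
```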
However, in this case we have replicate blank measurements (four measurements at C=0), so we can also calculate the limit of detection as simply 3 times the standard deviation of the blank measurements:
$$y_{\mathrm{LOD}} = y_{\mathrm{blank}}+3s_{\mathrm{blank}}$$
(in instrument response units—convert to concentration from a known standard or from the regression equation) | {
"domain": "chemistry.stackexchange",
"id": 2059,
"tags": "analytical-chemistry"
} |
New to ROS - Want to get quadrocopter or drones | Question:
Hi -
Hobby robotics/AI builder. I have done a bunch of projects with the NXT and some C-based programming, but I would like to get into ROS and pick up either a quadrocopter or a set of small drone-like copters I could use to program. Is there a good quadcopter that isn't too expensive for me to get started on that would be recommended for a beginner on ROS? Also - I would like to get into swarm robotics and perhaps have 5 or 6 drones behaving in unison. Is this something that is possible on a hobbyist's budget or is it a higher level project?
Thanks in advance
Originally posted by MusicMagi on ROS Answers with karma: 23 on 2012-04-22
Post score: 2
Answer:
That question depends on what kind of budget you have but it's likely a higher level project. The cheapest quadrotor on the market at the moment is the AR Drone. While a lot of fun to fly they are limited and may or may not be applicable to your project. You can build something considerably more capable for about $600. I've done this by combining a Guai 330X chassis (which is no longer for sale) and an ArduPilot Mega. Beware that this route requires a sizable time investment to achieve programmatic control over such a device. If you are not tied to the idea of using real robots you could consider the hector_quadrotor stack. IIRC it is a quadrotor simulator built on Gazebo and should allow you to work with a swarm.
Originally posted by Dustin with karma: 328 on 2012-04-22
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 9081,
"tags": "ros, uav, nxt"
} |
Number of reflecting surfaces in the rotating mirror in the Michelson method of determination of speed of light | Question: The following text is from Concepts of Physics by Dr. H.C.Verma, from the chapter "Speed of Light", page 447, topic "Michelson Method":
For higher image resolution click here.
Michelson and his co-workers made a series of similar experiments. The first determination was made in $1879$ with an octagonal [$8$] rotating mirror. The latest in the series was underway at the time of the death of Michelson and was completed in $1935$ by Pease and Pearson. This experiment used a rotating mirror with $32$ faces.
(Emphasis Mine)
The following text is from the "Air & Space" article -
The Pipeline That Measured the Speed of Light:
On each run, a “sun bright” beam from an arc lamp bouncing off a $16$-sided whirling mirror completed five round trips. To clock elapsed time, Michelson adjusted the mirror’s rotation until the returning beam met the next mirrored face exactly.
(Emphasis Mine)
The first thing I noticed once I read about this experiment from different sources was that the number of reflecting surfaces in the rotating mirror is always a multiple of $8$. Is this a coincidence, or is there a valid reason behind it?
Related question asked by me: Advantage of using a polygonal mirror with larger number of faces in Michelson method of measuring the speed of light and its value
I think the Michelson method of determining the speed of light is different from the Michelson–Morley experiment, so I had to use the query michelson speed of light -morley, as my initial results were populated with the second experiment, which has a similar name.
This method of determination of speed of light is briefly discussed here and here.
Answer: There is no need for any particular number of faces on the polygonal mirror. Any number will do. Perhaps it is easier to fabricate a precise polygon with 4, 16, or 32 faces. | {
"domain": "physics.stackexchange",
"id": 63878,
"tags": "optics, speed-of-light, reflection, geometric-optics"
} |
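The timing condition behind the method makes the face count drop out of the physics: the light's round-trip time $2d/c$ must equal the time for the mirror to advance by exactly one face, $1/(Nf)$. A minimal sketch of the required rotation rate follows; the 35 km one-way path is an assumed round number, roughly the scale of Michelson's Mt. Wilson baseline, not the exact historical distance.

```python
# Rotation rate f needed for an N-faced mirror in the Michelson method.
# The light's round trip, t = 2*d/c, must equal the time for the mirror
# to advance by one face, t = 1/(N*f), which gives f = c/(2*d*N).

C = 299_792_458.0  # speed of light in vacuum, m/s

def rotation_rate(d, n_faces):
    """Rotation frequency (rev/s) so the returning beam meets the next
    mirrored face in the original face's orientation."""
    return C / (2 * d * n_faces)

# Assumed ~35 km one-way path; more faces means a slower required spin.
d = 35_000.0
for n in (8, 16, 32):
    print(n, "faces:", round(rotation_rate(d, n), 1), "rev/s")
```

Doubling the face count halves the required rotation rate, which is one practical reason to prefer more faces — but nothing in the relation singles out multiples of 8.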
High temperature expansion in general | Question: I'm referencing this thesis which should be open-access.
In Appendix D.1 "High temperature expansion in general", the author writes the high temperature expansion in the following way:
$$
\begin{align*}
\langle \hat{O} \rangle
&= \frac{\sum_i \langle i| \hat{O} e^{-\beta \hat{H}} |i\rangle}{\sum_i \langle i| e^{-\beta \hat{H}} |i\rangle} \\
&= \beta^0 \Bigl[ \frac{1}{\Theta} \mathrm{Tr}(\hat{O})\Bigr] -\beta^1 \Bigl[ \frac{1}{\Theta} \mathrm{Tr}(\hat{O}\hat{H}) - \frac{1}{\Theta^2} \mathrm{Tr}(\hat{O})\mathrm{Tr}(\hat{H}) \Bigr] \\
&\qquad + \beta^2 \Bigl[ \frac{1}{2}\frac{1}{\Theta} \mathrm{Tr}(\hat{O}\hat{H}^2) - \frac{1}{\Theta^2} \mathrm{Tr}(\hat{O}\hat{H})\mathrm{Tr}(\hat{H}) \\
&\qquad\quad - \frac{1}{2}\frac{1}{\Theta^2} \mathrm{Tr}(\hat{O})\mathrm{Tr}(\hat{H}^2) + \frac{1}{\Theta^3} \mathrm{Tr}(\hat{O})\mathrm{Tr}(\hat{H})^2 \Bigr] + \mathcal{O}(\beta^3)
\end{align*}
$$
where $\hat{O}$ is some operator and the trace is over multiparticle states $|i\rangle$; $\Theta \equiv \mathrm{Tr}(I)$ is the dimension of the problem.
My question is: How did they do this expansion? (My attempt:)
Clearly there has been an expansion of the exponential in the numerator in terms of $\beta$,
$$
\sum_i \langle i| \hat{O} e^{-\beta \hat{H}} |i\rangle = \sum_m \frac{(-\beta)^m}{m!} \sum_i \langle i| \hat{O} \hat{H}^m |i\rangle \tag{1}
$$
but I'm not sure 1) how or where $\Theta$ comes from, and also 2) why there is a split of the traced terms: $\mathrm{Tr}(\hat{O}\hat{H})$, $\mathrm{Tr}(\hat{O})\mathrm{Tr}(\hat{H})$ in $\beta^1$ for example.
And also 3) how to formally divide out the denominator $\sum_i \langle i| e^{-\beta \hat{H}} |i\rangle$, like after substituting (1) back into the original equation:
$$
\frac{\sum_i \langle i| \hat{O} \hat{H}^m |i\rangle}{\sum_i \langle i| e^{-\beta \hat{H}} |i\rangle} = \text{terms for each $\beta^m$}
$$
Can someone enlighten me on this?
Answer: To first order in $\beta$, the numerator reads
$$\eqalign{
\sum_i\langle i|Oe^{-\beta H}|i\rangle
&=\sum_i \langle i|O|i\rangle-\beta\sum_i\langle i|OH|i\rangle\cr
&={\rm Tr}\ \!O-\beta\ \!{\rm Tr}\ \!OH\cr
}$$
while the denominator is
$$\eqalign{
\sum_i\langle i|e^{-\beta H}|i\rangle
&=\sum_i \langle i|i\rangle-\beta\sum_i\langle i|H|i\rangle\cr
&=\Theta-\beta\ \!{\rm Tr}\ \!H\cr
&=\Theta\Big(1-{\beta\over\Theta}\ \!{\rm Tr}\ \!H\Big)\cr
}$$
Since $\beta$ is small, the inverse is (to first order)
$${1\over\Theta}\Big(1-{\beta\over\Theta}\ \!{\rm Tr}\ \!H\Big)^{-1}={1\over\Theta}\Big(1+{\beta\over\Theta}\ \!{\rm Tr}\ \!H\Big)$$
The average is finally to first order in $\beta$
$$\eqalign{
\langle O\rangle&={1\over\Theta}\Big({\rm Tr}\ \!O-\beta\ \!{\rm Tr}\ \!OH\Big)\Big(1+{\beta\over\Theta}\ \!{\rm Tr}\ \!H\Big)\cr
&={1\over\Theta}{\rm Tr}\ \!O
-{\beta\over\Theta}{\rm Tr}\ \!OH
+{\beta\over\Theta^2}{\rm Tr}\ \!O\ {\rm Tr}\ \!H\cr
}$$
I'll leave it to you to extend the calculation to higher orders. | {
"domain": "physics.stackexchange",
"id": 53282,
"tags": "statistical-mechanics, temperature, approximations"
} |
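The first-order result derived in the answer can be checked numerically on a small random system. This is only a sketch: the $4\times 4$ dimension, the random seed, and the random Hermitian matrices are arbitrary assumptions for illustration.

```python
import numpy as np

# Numerical check of the first-order high-temperature expansion:
#   <O> ≈ Tr(O)/Θ − β [ Tr(OH)/Θ − Tr(O)Tr(H)/Θ² ]    for small β.
rng = np.random.default_rng(0)

def random_hermitian(n):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

n = 4
H = random_hermitian(n)
O = random_hermitian(n)
theta = n  # Θ = Tr(I), the dimension of the problem

def exact_average(beta):
    # e^{-βH} via the spectral decomposition, so only numpy is needed.
    w, v = np.linalg.eigh(H)
    rho = (v * np.exp(-beta * w)) @ v.conj().T
    return (np.trace(O @ rho) / np.trace(rho)).real

def first_order(beta):
    return (np.trace(O).real / theta
            - beta * (np.trace(O @ H).real / theta
                      - np.trace(O).real * np.trace(H).real / theta**2))

# The discrepancy should shrink like β² as β → 0.
for beta in (1e-1, 1e-2, 1e-3):
    print(beta, abs(exact_average(beta) - first_order(beta)))
```

Halving β should roughly quarter the discrepancy, confirming that the neglected terms are $\mathcal{O}(\beta^2)$.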
Water accumulated between towers | Question: I came across this interview question to find water accumulated between towers:
You are given an input array whose each element represents the height of the tower. The width of every tower is 1. You pour water from the top. How much water is collected between the towers?
Any suggestions?
def test(x):
if len(x) <= 2 or x == sorted(x) or x == sorted(x, reverse=True):
return 0
area = 0
count = 1
max_height = x[0]
while count < len(x) - 1:
if x[count] < max_height and x[count:] != sorted(x[count:], reverse=True):
temp_count = 0
new_count = count
temp_height = 0
height = max_height
while count < len(x):
if x[count] >= height or count == len(x) - 1:
temp_count = count
temp_height = x[count]
height = min(max_height, temp_height)
break
count += 1
while new_count < temp_count:
area += height - x[new_count]
new_count += 1
else:
max_height = x[count]
count += 1
return area
try:
a = [1,2,3,4,5]
print a, test(a)
b = [5,4,3,9,1]
print b, test(b)
c = [3,2,1,1,1,2,3]
print c, test(c)
e = [1,2,0,2,1]
print e, test(e)
f = [1, 4, 2, 5, 1, 2, 3]
print f, test(f)
g = [3, 2, 1, 1, 1, 2, 3]
print g, test(g)
a1 = [1001,1000,1002]
print a1, test(a1)
except Exception as e:
print e
Answer: Three quick comments:
There’s no documentation anywhere in your code. Ideally there should be a docstring, and some comments explaining why the code is behaving in a particular way. I wasn’t able to work out your approach to the problem because there are no comments, and I didn’t want to sit down and reverse engineer it.
Note that variable names are also a form of documentation, and some of yours could be better – x is a poor name, and so is count. What’s it counting?
Your tests aren’t very useful, because I have to work out if they're correct by hand. If I start editing the code, and I introduce a bug, there’s a possibility I won’t notice the test changes. Your tests should warn me if there’s been a regression, not simply print out some results.
The most basic form of this is something like:
assert test(a) == 0
assert test(b) == 3
assert test(c) == 8
Now I can read your tests and find out what the results should be, and I can be warned if I introduce a bug.
The better way to do tests is with the unittest module. If you haven’t used that before, I recommend Ned Batchelder’s talk Getting started with testing, which is a good introduction to Python testing in general.
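In practice, the unittest suggestion might look like the sketch below. The `trapped_water` function here is a hypothetical stand-in reference implementation so the example is self-contained — it is not the questioner's `test` function.

```python
import unittest

def trapped_water(heights):
    # Stand-in implementation: the water level at each index is bounded
    # by the tallest tower on either side of it.
    total = 0
    for i in range(1, len(heights) - 1):
        level = min(max(heights[:i + 1]), max(heights[i:]))
        total += level - heights[i]
    return total

class TrappedWaterTests(unittest.TestCase):
    def test_monotonic_heights_hold_no_water(self):
        self.assertEqual(trapped_water([1, 2, 3, 4, 5]), 0)

    def test_single_basin(self):
        self.assertEqual(trapped_water([5, 4, 3, 9, 1]), 3)

    def test_symmetric_valley(self):
        self.assertEqual(trapped_water([3, 2, 1, 1, 1, 2, 3]), 8)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Each test method name documents the scenario it covers, and a failure pinpoints exactly which scenario regressed.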
I don’t know why you have the try…except block. You should really only use try…except if you expect an exception to be raised, so that you can wrap it accordingly. Unexpected exceptions tell you something about your program – there’s a problem – and you shouldn’t be ignoring that information.
And you should try to avoid broad except: statements. Better to catch only the specific exceptions you think might be raised (in this case, there aren’t any), and let anything else bubble to the top. | {
"domain": "codereview.stackexchange",
"id": 15702,
"tags": "python, interview-questions"
} |
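For contrast with the reviewed code, here is the standard two-pass prefix-maximum formulation of the same problem (a sketch, not the original poster's algorithm): each pass records a running maximum from one end, and the water held at index `i` is `min(left_max, right_max) - height`.

```python
def trapped_water(heights):
    """Water collected between towers: at each index the water level is
    bounded by the tallest tower on each side."""
    if len(heights) < 3:
        return 0
    n = len(heights)
    left_max = [0] * n   # tallest tower at or left of i
    right_max = [0] * n  # tallest tower at or right of i
    left_max[0] = heights[0]
    for i in range(1, n):
        left_max[i] = max(left_max[i - 1], heights[i])
    right_max[-1] = heights[-1]
    for i in range(n - 2, -1, -1):
        right_max[i] = max(right_max[i + 1], heights[i])
    return sum(min(left_max[i], right_max[i]) - heights[i] for i in range(n))

# The question's test cases, written as assertions per the answer's advice:
assert trapped_water([1, 2, 3, 4, 5]) == 0
assert trapped_water([5, 4, 3, 9, 1]) == 3
assert trapped_water([3, 2, 1, 1, 1, 2, 3]) == 8
assert trapped_water([1, 2, 0, 2, 1]) == 2
assert trapped_water([1, 4, 2, 5, 1, 2, 3]) == 5
assert trapped_water([1001, 1000, 1002]) == 1
```

This runs in O(n) time with O(n) extra space, versus the nested scanning loops in the questioner's version.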