Doppler effect on Baud Rate
Question: I'm having a hard time wrapping my head around Doppler and how it affects a digital communications signal's baud rate. The examples that I've seen explain that the center frequency of a signal is shifted when the receiver and/or transmitter are moving relative to each other. What I don't understand is why Doppler also affects the baud rate. Can someone point me to some tutorials on this topic or explain? What I'd like to understand clearly is: Why does this happen, intuitively? Why does this happen, mathematically? And how do I calculate when this is something that a communications receiver has to take into account? My guess is that it depends on the Doppler shift that will be experienced by the receiver in combination with the signal's baud rate, but I need some hints or help to get me along with this. This is a follow-up to this question already posted: How to estimate and compensate for doppler shift in wireless signals? Answer: I'm sure that you are familiar with the idea of how Doppler affects a sine wave. When the transmitter and/or receiver move towards the other, the wave arrives more quickly, which makes the wave appear to be a higher frequency. The baud rate is affected in the same way: the symbols also arrive more quickly. If you think about it you will see that this must be so. After all, the data stream cannot be separated from the carrier wave that it modulates! So you have a signal that is exactly identical to the "non-Doppler" signal, except that it arrives faster or slower than the non-Doppler signal. This can be modeled exactly, in the discrete time domain, through resampling. So why is Doppler usually modeled as a simple frequency shift when it should really be modeled as resampling? Two reasons: first, the frequency shift effect is almost always vastly larger than the baud rate change. The reason for this is that the Doppler effect is proportional to the frequency of the original wave ($\Delta f = \frac{\Delta v}{c}f_0$). 
The frequency of the carrier wave is vastly greater than the baud rate, so the effect is much larger on the carrier wave. The second reason most people don't model it as resampling is that it is difficult to do very small arbitrary sample rate changes, like 3013/3014. If either the numerator or denominator is a prime number or has really large factors then it can become impractical to do traditional fractional resampling. In that case polynomial interpolation is probably the best way to go.
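To get a feel for when a receiver must care, here is a back-of-the-envelope sketch. All numbers are assumed example values (roughly a LEO satellite pass); the key point is that the carrier shift and the baud-rate change scale by the same factor $\Delta v/c$:

```python
# Back-of-the-envelope comparison of carrier Doppler vs. baud-rate change.
# All numbers below are assumed example values (roughly a LEO satellite pass).
c = 3.0e8      # speed of light, m/s
dv = 7.5e3     # relative velocity, m/s
f0 = 2.0e9     # carrier frequency, Hz
baud = 9600.0  # symbol rate, symbols/s

scale = dv / c                # fractional rate change; identical for both
carrier_shift = scale * f0    # Hz
baud_shift = scale * baud     # symbols/s

print(f"carrier shift: {carrier_shift:.0f} Hz")         # 50000 Hz
print(f"baud rate change: {baud_shift:.3f} symbols/s")  # 0.240 symbols/s
```

A symbol-rate error of 0.24 out of 9600 means the symbol clock slips by one full symbol only after about 40,000 symbols, so it matters mostly for long bursts or very tight timing loops, whereas a 50 kHz carrier offset is hard to ignore.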
{ "domain": "dsp.stackexchange", "id": 8287, "tags": "digital-communications, demodulation" }
What is the difference between electromagnet and solenoid?
Question: What is the difference between electromagnet and solenoid? Both these terms seem like the same thing to me. The only difference that I can find seems to be that an electromagnet contains a soft iron core. I'm sure there must be some other difference between the two and I hope someone can clear this matter up for me. Answer: An electromagnet is a coil combined with a ferromagnetic core. This way, the strength of the magnet is controlled by the input current. A solenoid is a simple shape used in magnetostatics or magnetics. Like the plane or the sphere in electrostatics, or the 1-turn coil in magnetostatics, its study is interesting because the calculation of the magnetic field inside is doable. Moreover, the solenoid produces a pretty uniform field inside, if you neglect edge effects. So you could say that the solenoid is interesting because of the uniform magnetic flux density inside, and the electromagnet because of the non-uniform magnetic flux density outside (an electromagnet may be made with a solenoid).
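To make the "doable calculation" concrete: the standard ideal (long) solenoid result is $B = \mu_0 n I$ inside, with $n$ the turn density, and essentially zero field outside. A one-liner, with arbitrary example values:

```python
import math

mu0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def solenoid_field(n_per_m, current):
    """Uniform field inside an ideal long solenoid, edge effects neglected."""
    return mu0 * n_per_m * current

B = solenoid_field(1000, 2.0)  # example values: 1000 turns/m carrying 2 A
print(f"B = {B:.3e} T")        # about 2.5 mT
```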
{ "domain": "physics.stackexchange", "id": 69135, "tags": "electromagnetism, terminology, inductance" }
Why is this rearrangement of factors in the Compton Scattering matrix element valid?
Question: I've been trying to follow through this derivation of the total squared matrix element for Compton scattering. We have two first-order diagrams: Using Feynman rules, the matrix elements for each diagram are: $$\mathcal{M}_{1} =\left(\overline{u}^{\left( s^{\prime }\right)}\left( p^{\prime }\right)\left( ie\gamma ^{\nu }\right) \epsilon _{\nu }^{\ast }\right)\left[\frac{{\not}p +{\not}k \ +m}{( p+k)^{2} -m^{2}}\right]\left( \epsilon _{\mu }\left( ie\gamma ^{\mu }\right) u^{( s)}( p)\right)$$ $$\mathcal{M}_{2} =\left(\overline{u}^{\left( s^{\prime }\right)}\left( p^{\prime }\right)\left( ie\gamma ^{\nu }\right) \epsilon _{\nu } \ \right)\left[\frac{{\not}p -{\not}k \ +m}{( p-k)^{2} -m^{2}}\right]\left( \epsilon _{\mu }^{\ast }\left( ie\gamma ^{\mu }\right) u^{( s)}( p)\right)$$ Now when the author solved for $| \mathcal{M}_{1}| ^{2}$, he put all the polarization vectors at the beginning. $$| \mathcal{M}_{1}| ^{2} =\tfrac{e^{4}}{4\left( s-m_{e}^{2}\right)^{2}}\sum\limits _{s,s^{\prime }}\sum\limits _{r,r^{\prime }} \epsilon _{\nu }^{r^{\prime } \ast } \epsilon _{\mu }^{r\ast } \epsilon _{\rho }^{r\ast } \epsilon _{\sigma }^{r^{\prime } \ast }\left[\overline{u}^{\left( s^{\prime }\right)} \gamma ^{\nu }\left({\not}p +{\not}k \ +m_{e}\right) \gamma ^{\mu } u^{( s)}\right]$$ $$\times \left[\overline{u}^{( s)} \gamma ^{\rho }\left({\not}p +{\not}k \ +m_{e}\right) \gamma ^{\sigma } u^{\left( s^{\prime }\right)}\right]$$ Why is it valid to move all polarization vectors to the front? Do they commute with all the other factors in the equations above? Other derivations (this and this) I found on the internet went through this exact same step and I'm confused why this is valid. 
The way I understand it is that the first part of the matrix element $ \left(\overline{u}^{\left( s^{\prime }\right)}\left( p^{\prime }\right)\left( ie\gamma ^{\nu }\right) \epsilon _{\nu }^{\ast }\right) $ is calculated as follows: $ \overline{u}^{\left( s^{\prime }\right)}\left( p^{\prime }\right)$ is a $1\times 4$ row vector, $\left( ie\gamma ^{\nu }\right) $ is a $4\times 4$ matrix, and $\epsilon _{\nu }^{\ast } $ is a $ 4\times 1$ column vector, and multiplying these three gives us a $1 \times 1$ constant, but since we're actually summing this for $\nu = 0$ to $3$, this actually becomes a $1\times 4$ row vector $j^\nu$. Same goes for the outgoing part of the matrix element $\left( \epsilon _{\mu }\left( ie\gamma ^{\mu }\right) u^{( s)}( p)\right)$, which then becomes a $ 4\times 1$ vector $j^\mu$. Then we multiply these two vectors: $j^\nu \left[\frac{{\not}p +{\not}k \ +m}{( p+k)^{2} -m^{2}}\right] j^\mu$, where the middle factor is a $ 4\times 4$ matrix. Is this understanding correct? It bothers me that if we "factor out" the $\epsilon$'s, the dimensions of the remaining factors would not be compatible for matrix multiplication. Answer: Let's look at $\gamma^\nu \epsilon_\nu^\ast$. According to Einstein's summation convention it means: $$\overbrace{\gamma^\nu \epsilon_\nu^\ast}^{4\times 4}\equiv\sum_{\nu=0,\ldots, 3} \gamma^\nu \epsilon_\nu^\ast\equiv \overbrace{\gamma^0 \epsilon^\ast_0}^{4\times 4} +\overbrace{\gamma^1 \epsilon^\ast_1}^{4\times 4} +\overbrace{\gamma^2 \epsilon^\ast_2}^{4\times4} +\overbrace{\gamma^3 \epsilon^\ast_3}^{4\times 4}$$ Each of $\epsilon_0, \ldots, \epsilon_3$ is a number, not even a vector (each is a component of a 4-vector), whereas each of $\gamma_0,\ldots, \gamma_3$ is a $4\times 4$ matrix. Therefore $\gamma^\nu \epsilon_\nu^\ast$ is a $4\times 4$ matrix, no more and no less. The same is true for $\epsilon_\mu \gamma^\mu$. 
In total we have schematically: $$\bar{u} \gamma^\nu \epsilon_\nu^\ast [ \ldots ] \epsilon_\mu \gamma^\mu u \quad\text{so we have}\quad \sum_\nu \sum_\mu\overbrace{(1\times 4)}^{\bar{u}}\overbrace{4\times 4}^{\gamma^\nu \epsilon_\nu^\ast } \overbrace{[ 4\times 4]}^{\text{propagator}}\overbrace{4\times 4}^{\epsilon_\mu \gamma^\mu }\overbrace{(4\times 1)}^{u}= 1\times 1$$ Nothing changes if the $\epsilon$'s are put in front of the matrix product: if only a $\gamma$ is left inside the full matrix product, that factor is still a $4\times 4$ matrix, and the same is true for $\epsilon_\mu \gamma^\mu$. Or write out the full product in components: taking a component, for instance $\epsilon_0$, in front of the matrix product in one of the summands, or keeping it inside, does not change anything, as $\epsilon_0$ is just a number.
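The dimension counting can be checked numerically. The sketch below builds the Dirac-representation $\gamma$ matrices and uses random complex numbers as stand-ins for the spinors, the propagator, and the polarization components (illustrative values only, not physical ones):

```python
import numpy as np

rng = np.random.default_rng(1)

# Dirac-representation gamma matrices (4x4), built from Pauli matrices.
paulis = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]])]
I2, Z2 = np.eye(2), np.zeros((2, 2))
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, sk], [-sk, Z2]]) for sk in paulis]

# Random stand-ins for the physical objects.
eps = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # polarization components
ubar = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # row spinor
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)     # column spinor
P = rng.standard_normal((4, 4))                              # "propagator" matrix

# eps_nu gamma^nu is a 4x4 matrix ...
slash_eps = sum(eps[n] * gam[n] for n in range(4))
inside = ubar @ slash_eps @ P @ slash_eps @ u

# ... and pulling the numbers eps_nu, eps_mu out front changes nothing.
outside = sum(eps[n] * eps[m] * (ubar @ gam[n] @ P @ gam[m] @ u)
              for n in range(4) for m in range(4))

assert slash_eps.shape == (4, 4)
assert np.isclose(inside, outside)
```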
{ "domain": "physics.stackexchange", "id": 92832, "tags": "particle-physics, feynman-diagrams, matrix-elements" }
Polyphase decimation matlab
Question: I am new to DSP, and currently trying to implement a simple polyphase decimation program in matlab. M is decimation factor, x is input (for now it is a square wave 8096 samples), y_polyDec is result from polyphase, output is result from regular filter then decimation. Code:

FilterBank_LPF = fir1(127, 1/M);
y_polyDec = zeros(8096/M, 1);
for i = 1:M
    p(:, i) = FilterBank_LPF(i:M:end);
    x(:, i) = input(M-i+1:M:end);
    y_polyDec = filter(p(:, i), 1, x(:, i)) + y_polyDec;
end
out_LPF = filter(FilterBank_LPF, 1, input);
output = downsample(out_LPF, M);
%===========================================

If I compare "output" and "y_polyDec" (either comparing the decimated sample values or frequency spectrum using fvtool) I notice there are differences in the values. I was wondering if my code is wrong, or am I still missing something. Thank you

Answer: The problem is in the input signals to your polyphase filters. They should be delayed and subsampled versions of the input signal, like in the following code fragment:

M = 8;   % downsampling factor
L = 256; % length of input signal, integer multiple of M
x = randn(L, 1); % input signal
h = fir1(127, 1/M);
yp = zeros(L/M, 1);
for i = 1:M,
    xtmp = [zeros(i-1, 1); x];                % delayed input signal
    tmp = filter(h(i:M:end), 1, xtmp(1:M:L)); % polyphase filtering
    yp = yp + tmp;                            % accumulate outputs of polyphase filters
end
% compare with standard filtering
y = filter(h, 1, x);
yM = downsample(y, M);
n = 1:L/M;
plot(n, (yM-yp)/max(yM))
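The same identity translates to Python/NumPy (a re-derivation of the answer's idea using scipy.signal, not a port of the exact MATLAB code): branch $i$ gets the input delayed by $i$ samples and subsampled by $M$, and uses every $M$-th filter tap starting at tap $i$; the summed branch outputs equal filter-then-downsample exactly.

```python
import numpy as np
from scipy.signal import firwin, lfilter

M = 8                      # downsampling factor
L = 256                    # signal length, integer multiple of M
rng = np.random.default_rng(0)
x = rng.standard_normal(L)
h = firwin(128, 1.0 / M)   # lowpass prototype, as in fir1(127, 1/M)

yp = np.zeros(L // M)
for i in range(M):
    xd = np.concatenate([np.zeros(i), x])[:L]  # delay by i samples
    yp += lfilter(h[i::M], [1.0], xd[::M])     # filter the subsampled branch

y = lfilter(h, [1.0], x)[::M]                  # filter, then downsample
assert np.allclose(yp, y)                      # identical up to rounding
```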
{ "domain": "dsp.stackexchange", "id": 1669, "tags": "matlab, decimation" }
Spinor inner products
Question: The spinor inner product in particle physics is given by $\overline{\psi} \psi = \psi^{\dagger} \gamma_0 \psi $, where I take the convention that the zeroth gamma matrix is hermitian while the rest are anti-hermitian. This is invariant under spin group transformations, $\psi \rightarrow e^{\omega_{ab} S^{ab} }\psi$, with $\omega_{ab}$ real parameters and $S^{ab} = \frac{1}{4}[\gamma^a, \gamma^b]$. However, there is a second invariant inner product given by $\psi^{\dagger} \gamma_0 \gamma_5 \psi$, with $\gamma_5 = i\gamma_0 \gamma_1 \gamma_2 \gamma_3$. My question is: why isn't this other one used? Is it down to the fact that there are observable differences between the two, and one is favoured by experiment? Answer: Well, the $\gamma^5$ term does appear in the standard model. As you duly note, both $\bar{\psi}\psi$ and $\bar{\psi} \gamma^5 \psi$ are Lorentz invariant, so the question to ask is: how are they different? It turns out that the difference lies in their transformations under a Parity transformation. Without proof, I claim that under $P:(x,y,z) \to (-x,-y,-z)$, $\bar{\psi}\psi \to \bar{\psi}\psi$, while $\bar{\psi}\gamma^5\psi \to -\bar{\psi}\gamma^5\psi$. So the first object is a true scalar while the second object is a pseudoscalar (i.e. it picks up a minus sign under $P$). Thus, if we have the second term in the Lagrangian, it breaks Parity symmetry. It turns out (experimentally) that $P$ is indeed broken in the weak interaction. So we do have a weak interaction vertex (for an electron + neutrino $ \to W^-$ interaction, for example) that looks like \begin{align} \sim -\frac{-i g_{\mu\nu}}{2\sqrt{2}}\gamma^\mu(1-\gamma^5). \end{align} This is an axial vector coupling, which is not parity invariant. 
However, the pseudoscalar $\bar{\psi}\gamma^5\psi$ doesn't appear in the SM because the object $(1-\gamma^5)/2 = \mathbb{P}_L$ is the projector onto the left-handed component of a spinor, so it can only appear if you want a theory in which the mass terms of the right-handed and left-handed spinors are different, which we don't observe experimentally.
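Both transformation claims can be checked numerically in a concrete representation. The sketch below uses the Dirac representation and a random spinor as a stand-in, with parity acting on the spinor indices as $\psi \to \gamma^0 \psi$ (ignoring the $\vec{x} \to -\vec{x}$ part, which does not affect the bilinears pointwise):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dirac-representation gamma matrices.
sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]]); I2 = np.eye(2); Z2 = np.zeros((2, 2))
g0 = np.block([[I2, Z2], [Z2, -I2]])
g = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]

psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # random spinor
bar = lambda v: v.conj() @ g0

psiP = g0 @ psi  # parity action on the spinor indices

assert np.isclose(bar(psiP) @ psiP, bar(psi) @ psi)               # scalar
assert np.isclose(bar(psiP) @ g5 @ psiP, -(bar(psi) @ g5 @ psi))  # pseudoscalar
```

The second assertion works because $\gamma^5$ anticommutes with $\gamma^0$, which is exactly the "without proof" claim above.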
{ "domain": "physics.stackexchange", "id": 11451, "tags": "standard-model, spinors, dirac-matrices" }
How fast do molecules move in objects?
Question: I guess it depends on the heat or the type of the material but can you give some examples or formulas to calculate it? The best example would be the average speed of the air molecules (all types in the air) at room temperature or water molecules at human body temperature. Answer: It depends on the mass of the molecule in question. Here's a quick, back-of-the-envelope answer. In a body at thermal equilibrium, every energy mode has the same average amount of energy, $\frac12kT$, where $T$ is temperature and $k$ is Boltzmann's constant. One of the energy modes is the translational kinetic energy of a molecule in some direction $x$, $\frac12mv_x^2$. We can solve $$\frac12kT=\frac12mv_x^2$$ to find $$v_x=\sqrt{\frac{kT}m}$$ and then plug in $k=1.38×10^{-23}\rm{m^2 kg s^{-2} K^{-1}}$, $T=300\rm{K}$, and, for nitrogen, $m_{\rm{N}_2}=2×14\rm{u}=2×14×1.66×10^{−27} \rm{kg}=4.65×10^{−26} \rm{kg}$ to get $$v_x=298\rm{m/s}=667mph.$$ The molecule is also moving in the $y$ and $z$ axes, so the answer depends on what exactly you mean by average speed: mean speed vs. root-mean-square speed. This ignores rotational and vibrational degrees of freedom. Similar calculations may be performed for other substances. Some links: http://en.wikipedia.org/wiki/Root-mean-square_speed
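The same arithmetic as a few lines of code (using the CODATA value of $k$ and the nitrogen mass from above):

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def v_rms_1d(mass_kg, T):
    """RMS speed along one axis: sqrt(kT/m), from 1/2 kT = 1/2 m v_x^2."""
    return math.sqrt(k * T / mass_kg)

m_N2 = 2 * 14 * 1.66054e-27   # N2 molecule mass, kg
vx = v_rms_1d(m_N2, 300.0)
print(f"{vx:.0f} m/s")        # about 298 m/s
```

Swapping in the mass of H2O (about 18 u) and T = 310 K gives the body-temperature water figure the question asks about.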
{ "domain": "physics.stackexchange", "id": 15185, "tags": "thermodynamics, speed, molecules" }
Tutorial on 2nd generation wavelets (with lifting)?
Question: For some denoising and deconvolution experiments, I'd like to apply a 2nd generation wavelet transform (using lifting steps) to images. I know that there are several implementations available, but most of them use Matlab, while I want to work in C++ with OpenCV. Since there is no built-in wavelet transform implementation in OpenCV 2.x, I plan to implement it myself (plus, it will make a good exercise for me). After some research, I've been able to find the original articles about the 2nd generation transform, but I'm still a bit confused about the exact way the algorithm works. Taking for main reference the paper [1] by Sweldens, The lifting scheme: a construction of second generation wavelets, I'm still confused by the definition of the index sets $\mathcal{K}(j)$: what is their size? How are they built? ... Hence my question: does anyone know about some resources about the 2nd generation wavelet transform (papers, tutorials, slides...) that are either in a tutorial-like form, or that provide a more algorithmic view (rather than a mathematical one), which would help me design my own implementation? Thank you in advance. References My main reference is: [1] Sweldens, W. (1998). The lifting scheme: A construction of second generation wavelets. SIAM Journal on Mathematical Analysis, 29(2), 511. And I am also learning from: [2] Daubechies, I., & Sweldens, W. (1998). Factoring wavelet transforms into lifting steps. Journal of Fourier Analysis and Applications, 4(3), 247–269. [3] Kovacevic, J., & Sweldens, W. (2000). Wavelet families of increasing order in arbitrary dimensions. IEEE Transactions on Image Processing, 9(3), 480–496. doi:10.1109/83.826784 Answer: I finally bought a copy of [Ripples in Mathematics: The Discrete Wavelet Transform][1], and I'm very pleased by this book. The authors explain the DWT from alternating points of view (lifting schemes, the filter bank approach, multi-resolution analysis), where each of these viewpoints has its own advantages. 
Furthermore, the book is implementation oriented, with chapters about boundary handling and Matlab/C implementations. I'm still looking for a proper way to handle odd-sized signals, but Ripples gave me a good start. [1]: http://www.control.auc.dk/~alc/ripples.html "Ripples in Mathematics: The Discrete Wavelet Transform", by Arne Jensen and Anders la Cour-Harbo
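Not the construction from the Sweldens papers, but as a minimal illustration of the predict/update structure of lifting (sketched in Python for brevity, though the target here is C++): the Haar transform fits in a few lines, and inversion is just running the steps backwards with flipped signs, which is what makes lifting so attractive for implementation.

```python
def haar_lift(x):
    """One level of the Haar transform as lifting steps (predict/update).
    x must have even length; returns (approx, detail)."""
    even, odd = x[0::2], x[1::2]
    d = [o - e for e, o in zip(even, odd)]      # predict: detail = odd - even
    s = [e + di / 2 for e, di in zip(even, d)]  # update: approx = even + d/2
    return s, d

def haar_unlift(s, d):
    """Invert by undoing the lifting steps in reverse order."""
    even = [si - di / 2 for si, di in zip(s, d)]
    odd = [di + e for di, e in zip(d, even)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

x = [1.0, 3.0, 2.0, 8.0, 5.0, 5.0]
s, d = haar_lift(x)
assert haar_unlift(s, d) == x  # perfect reconstruction, by construction
```

Each lifting step is trivially invertible no matter what the predict/update operators are, so perfect reconstruction holds by construction; fancier wavelets just use longer predict/update filters.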
{ "domain": "dsp.stackexchange", "id": 339, "tags": "image-processing, matlab, wavelet, denoising, deconvolution" }
If nature exhibits symmetry, why don't up and down quarks have equal magnitude of electric charge?
Question: I always hear people saying symmetry is beautiful, nature is symmetric intrinsically, physics and math show the inherent symmetry in nature et cetera, et cetera. Today I learned that half of the quarks have +2/3 electric charge and other half have -1/3 magnitude electric charge. Is there any explanation for this? Why their net charge isn't zero? Answer: When it comes to fundamental charges, the (left-handed) up-type quarks actually have either the same values of the charge as the down-type quarks, or exactly the opposite ones. It just happens that the electric charge isn't a fundamental charge in this sense. Let me be more specific. All the quarks carry a color – red, green, or blue – the charge of the strong nuclear force associated with the $SU(3)$ gauge group. There is a perfect uniformity among all quarks of all types. The quarks also carry the hypercharge, the charge under the $U(1)$ gauge group of the electroweak force. Both the up-quarks and the down-quarks (well, their left-handed components) carry $Y=+1/6$ charge under this group. The right-handed components carry different values of $Y$ but I won't discuss those because that would make the story less pretty. ;-) Finally, there is the $SU(2)$ group of the electroweak force. The left-handed parts of the up-quarks and down-quarks carry $T_3=+1/2$ and $T_3=-1/2$, exactly opposite values, and there is a perfect symmetry between them. (The right-handed components carry $T_3=0$.) It just happens that neither $Y$ nor $T_3$ but only their sum, $$ Q = Y+ T_3$$ known as the electric charge, is conserved. The individual symmetries generated by $Y$ and $T_3$ are "spontaneously broken" due to the Higgs mechanism for which the latest physics Nobel prize was given. The symmetry is broken because a field, the Higgs field $h$, prefers – in order to lower its energy – a nonzero value of the field. More precisely, one component is nonzero, and this component has $Y\neq 0$, $T_3\neq 0$ but $Q=0$. 
So the first two symmetries are broken but the last one, the $U(1)$ of electromagnetism generated by the electric charge, is preserved. It is often the case that every symmetry that is "imaginable" and "pretty" is indeed a symmetry of the fundamental laws but at low energies, due to various dynamical mechanisms, some of these symmetries are broken. The laws of physics may still be seen to respect the symmetry at some level but the vacuum state isn't invariant under it, and the effective low-energy laws therefore break the symmetry, too.
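The bookkeeping that hides the symmetry, $Q = Y + T_3$, reduces to simple arithmetic:

```python
# Left-handed first-generation quark quantum numbers from the answer.
Y = 1.0 / 6.0            # hypercharge, the same for up and down (left-handed)
T3_up, T3_down = +0.5, -0.5  # exactly opposite weak isospin values

Q_up = Y + T3_up         # electric charge Q = Y + T3
Q_down = Y + T3_down

assert abs(Q_up - 2 / 3) < 1e-12     # +2/3
assert abs(Q_down - (-1 / 3)) < 1e-12  # -1/3
```

So the asymmetric-looking charges $+2/3$ and $-1/3$ come from a symmetric $\pm 1/2$ pair shifted by the common hypercharge $1/6$.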
{ "domain": "physics.stackexchange", "id": 11923, "tags": "particle-physics, symmetry, standard-model, charge, quarks" }
Minimum weighted arithmetic mean partition?
Question: Assume I have some positive numbers $a_1,\ldots,a_n$ and a number $k \in \mathbb{N}$. I want to partition these numbers into exactly $k$ sets $A_1,\ldots,A_k$ such that the weighted arithmetic mean $$\text{cost}(A_1,\ldots,A_k)=\sum_{i=1}^{k}\frac{|A_i|}{n}c(A_i)$$ is minimal, where $c(A_i)=\sum_{a \in A_i}a$ is simply the sum of all numbers in $A_i$. Is there a polynomial-time algorithm to do this, or is the problem NP-hard? I tried to reduce it to some NP-hard problems but didn't get anywhere, especially because the numbers are positive and thus in an optimal partition big sets need to have smaller weight, which seems to make this a balancing problem rather than a packing problem (which I am more familiar with). Answer: Suppose first that we fix the sizes of the sets $|A_i| = n_i$ in non-decreasing order $n_1 \leq \cdots \leq n_k$. In that case, if we arrange the numbers $a_i$ in non-decreasing order, then an optimal choice is $$ A_1 = \{a_n,\ldots,a_{n-n_1+1}\}, A_2 = \{a_{n-n_1},\ldots,a_{n-n_1-n_2+1}\}, \ldots, A_k = \{a_{n_k},\ldots,a_1\}. $$ The reason is that given any solution where $a_i \in A_I$ and $a_j \in A_J$, switching $a_i$ and $a_j$ will result in a total change of $$ -n_Ia_i-n_Ja_j + n_Ja_i+n_Ia_j = (n_I-n_J) (a_j-a_i),$$ so the switch is beneficial (or harmless) as long as $n_I \leq n_J$ and $a_j \geq a_i$. In view of this, there is always an optimal solution in which $A_1,\ldots,A_k$ is a partition of $a_1,\ldots,a_n$ into intervals. This suggests a dynamic programming algorithm. For each $\ell \leq k$ and $t \leq m \leq n$, we compute $$ \min_{A_1,\dots,A_{\ell-1} \vdash a_1,\ldots,a_t} \sum_{i=1}^\ell |A_i| c(A_i), \quad \text{where $A_\ell = \{a_{t+1},\ldots,a_m\}$}. $$ Here $A_1,\dots,A_{\ell-1} \vdash a_1,\ldots,a_t$ means that the left-hand side is a partition of the right-hand side into intervals. 
I am assuming that sets could be empty; the algorithm can be modified to ensure that sets are not empty (just make sure $\ell \leq t < m$).
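As a sketch (function names are my own), the DP can be written down directly and cross-checked against brute force on a small instance. It minimizes the unnormalized $\sum_i |A_i|\,c(A_i)$ over non-empty parts; divide by $n$ for the weighted mean. Sorting in non-increasing order and restricting to interval partitions is justified by the exchange argument above.

```python
from itertools import product

def min_weighted_sum(a, k):
    """Min over partitions into k non-empty sets of sum_i |A_i| * c(A_i),
    via DP over interval partitions of a sorted in non-increasing order."""
    a = sorted(a, reverse=True)
    n = len(a)
    pre = [0] * (n + 1)                     # prefix sums for O(1) c(A_i)
    for i, v in enumerate(a):
        pre[i + 1] = pre[i] + v
    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for ell in range(1, k + 1):
        for m in range(ell, n + 1):         # m >= ell keeps every part non-empty
            dp[ell][m] = min(dp[ell - 1][t] + (m - t) * (pre[m] - pre[t])
                             for t in range(ell - 1, m))
    return dp[k][n]

def brute(a, k):
    """Exhaustive check over all k-labelings (exponential; tiny n only)."""
    best = float("inf")
    for labels in product(range(k), repeat=len(a)):
        if len(set(labels)) < k:
            continue
        cost = sum(len(vals) * sum(vals)
                   for j in range(k)
                   for vals in [[x for x, l in zip(a, labels) if l == j]])
        best = min(best, cost)
    return best

a = [3, 1, 4, 1, 5, 9, 2]
assert min_weighted_sum(a, 3) == brute(a, 3)
```

The DP runs in $O(k n^2)$ time (and $O(n)$ per state with prefix sums), confirming the problem is polynomial rather than NP-hard.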
{ "domain": "cs.stackexchange", "id": 2772, "tags": "optimization, np-hard, np, partitions" }
Fidelity concentration bound for random stabilizer states
Question: Let $|\Phi\rangle$ be a normalized vector in $\mathbb{C}^d$ and let $|\psi\rangle$ be a random stabilizer state. I am trying to compute the quantity $$\mathsf{Pr}\big[|\langle \Phi|\psi \rangle|^2 \geq \epsilon \big].$$ Note that if $|\psi\rangle$ is Haar random, then, by equation $2$ of this paper, $$\mathsf{Pr}\big[|\langle \Phi|\psi \rangle|^2 \geq \epsilon \big] \leq \mathsf{exp}(-(2d-1) \epsilon).$$ Does a similar concentration bound hold for random stabilizer states too? Answer: No such bound holds for general $|\Phi\rangle$. The set $\mathcal{S}_n$ of $n$-qubit stabilizer states is finite, so $m=\min_{|\psi\rangle\in\mathcal{S}_n} |\langle\Phi|\psi\rangle|^2$ is well-defined and if $|\Phi\rangle$ is not a stabilizer state then $m>0$. But then for any $\epsilon\in[0,m]$ we have $\mathrm{Pr}\left[|\langle\Phi|\psi\rangle|^2\ge\epsilon\right]=1$ which rules out any general bound $\mathrm{Pr}\left[|\langle\Phi|\psi\rangle|^2\ge\epsilon\right]\le f(\epsilon)$ with $f(\epsilon)<1$ for $\epsilon>0$. In particular, no general bound of the form $\mathrm{Pr}\left[|\langle\Phi|\psi\rangle|^2\ge\epsilon\right]\le\exp(-a\epsilon)$ with $a>0$ is possible.
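The single-qubit case already makes the obstruction concrete: for the non-stabilizer $T$ state, the minimum overlap with the six stabilizer states is $(2-\sqrt{2})/4 \approx 0.146 > 0$, so the probability of exceeding any $\epsilon$ below that is exactly 1 (a quick numerical check):

```python
import numpy as np

# The six single-qubit stabilizer states: eigenstates of X, Y, Z.
s2 = 1 / np.sqrt(2)
stabilizers = [
    np.array([1, 0], dtype=complex),  # |0>
    np.array([0, 1], dtype=complex),  # |1>
    np.array([s2, s2]),               # |+>
    np.array([s2, -s2]),              # |->
    np.array([s2, 1j * s2]),          # |+i>
    np.array([s2, -1j * s2]),         # |-i>
]

# A non-stabilizer |Phi>: the T state (|0> + e^{i pi/4} |1>) / sqrt(2).
phi = np.array([s2, s2 * np.exp(1j * np.pi / 4)])

m = min(abs(np.vdot(phi, psi)) ** 2 for psi in stabilizers)
# m = (2 - sqrt(2))/4, so Pr[|<Phi|psi>|^2 >= eps] = 1 for all eps <= m.
assert m > 0.14
```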
{ "domain": "quantumcomputing.stackexchange", "id": 4690, "tags": "quantum-state, clifford-group, fidelity, stabilizer-state, linear-algebra" }
ROS Fuerte on Ubuntu from source Checkout Error
Question: Hello, Is there a new or updated rosinstall file for installing Fuerte from source? From the wiki: ros.org/wiki/fuerte/Installation/Ubuntu/Source It pulls down the rosinstall from: ros.org/rosinstalls/fuerte-ros-full.rosinstall In this file it pulls down the repos from: github.com/wg-debs The files in that repo no longer exist as they have been moved to this repo I believe: github.com/ros-gbp Now that the files have been moved, I am not sure about the correct versions I am supposed to pull down as they are not the same. Does anyone have an updated rosinstall to install the Full version of ROS source for Fuerte? Thank you Originally posted by Raptor on ROS Answers with karma: 377 on 2013-01-02 Post score: 1 Answer: The file has been fixed as of 01/10/2013. Here is the new rosinstall: http://ros.org/rosinstalls/fuerte-ros-full.rosinstall Originally posted by Raptor with karma: 377 on 2013-01-15 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 12249, "tags": "ros, ros-fuerte, git, rosinstall, build-from-source" }
Robot with Custom Plugin locks up gzclient
Question: I have a custom plugin that is designed to drive a robot via protobuf messages. When I place this robot into an empty world, it begins locking out gzclient - while gzserver keeps running fine. The run of gzclient looks like this: Starting program: /usr/bin/gzclient-2.2.2 [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/i386-linux-gnu/libthread_db.so.1". [New Thread 0xafbb8b40 (LWP 6831)] Gazebo multi-robot simulator, version 2.2.2 Copyright (C) 2012-2014 Open Source Robotics Foundation. Released under the Apache 2 License. http://gazebosim.org [New Thread 0xaefffb40 (LWP 6832)] Msg Waiting for master Msg Connected to gazebo master @ http://127.0.0.1:11345 Msg Publicized address: 10.18.104.83 [New Thread 0xae5ffb40 (LWP 6833)] [New Thread 0xabc96b40 (LWP 6834)] [New Thread 0xab295b40 (LWP 6835)] [New Thread 0xaa894b40 (LWP 6836)] [New Thread 0xa9e93b40 (LWP 6837)] [New Thread 0xa9492b40 (LWP 6838)] [New Thread 0xa8a91b40 (LWP 6839)] [New Thread 0xa8090b40 (LWP 6840)] [New Thread 0xa788fb40 (LWP 6841)] [New Thread 0xa6238b40 (LWP 6842)] [New Thread 0xa58ffb40 (LWP 6843)] [New Thread 0xa4de3b40 (LWP 6844)] [New Thread 0xa41ffb40 (LWP 6845)] [New Thread 0xa3ffeb40 (LWP 6846)] [New Thread 0xa39feb40 (LWP 6848)] [New Thread 0xa3bffb40 (LWP 6847)] [New Thread 0xa35fcb40 (LWP 6849)] [New Thread 0xa33fbb40 (LWP 6851)] [New Thread 0xa37fdb40 (LWP 6850)] [Thread 0xa6238b40 (LWP 6842) exited] ^C Program received signal SIGINT, Interrupt. 0xb7fdd424 in __kernel_vsyscall () The CTRL-C comes after it has locked out for at least a minute. The backtrace: (gdb) bt #0 0xb7fdd424 in __kernel_vsyscall () #1 0xb6a3084b in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/i386-linux-gnu/libpthread.so.0 #2 0xb7edeadc in wait (m=..., this=<optimized out>) at /usr/include/boost/thread/pthread/condition_variable.hpp:56 #3 gazebo::transport::request (_worldName=..., _request=..., _data=...) 
at /tmp/buildd/gazebo-current-2.2.2/gazebo/transport/TransportIface.cc:203 #4 0x0810a376 in gazebo::gui::JointControlWidget::SetModelName (this=0x8b39c70, _modelName=...) at /tmp/buildd/gazebo-current-2.2.2/gazebo/gui/JointControlWidget.cc:256 #5 0x08161ea6 in gazebo::gui::ToolsWidget::OnSetSelectedEntity (this=0x8b397c0, _name=...) at /tmp/buildd/gazebo-current-2.2.2/gazebo/gui/ToolsWidget.cc:76 #6 0x08162500 in operator() (a2=..., a1=..., p=<optimized out>, this=0x8b51024) at /usr/include/boost/bind/mem_fn_template.hpp:280 #7 operator()<boost::_mfi::mf2<void, gazebo::gui::ToolsWidget, const std::basic_string<char>&, const std::basic_string<char>&>, boost::_bi::list2<std::basic_string<char>&, std::basic_string<char>&> > ( a=<synthetic pointer>, f=..., this=0x8b5102c) at /usr/include/boost/bind/bind.hpp:392 #8 operator()<std::basic_string<char>, std::basic_string<char> > (a2=..., a1=..., this=0x8b51024) at /usr/include/boost/bind/bind_template.hpp:61 #9 boost::detail::function::void_function_obj_invoker2<boost::_bi::bind_t<void, boost::_mfi::mf2<void, gazebo::gui::ToolsWidget, std::string const&, std::string const&>, boost::_bi::list3<boost::_bi::value<gazebo::gui::ToolsWidget*>, boost::arg<1>, boost::arg<2> > >, void, std::string, std::string>::invoke (function_obj_ptr=..., a0=..., a1=...) at /usr/include/boost/function/function_template.hpp:153 #10 0x080f450e in operator() (a1=..., a0=..., this=0x8b51020) at /usr/include/boost/function/function_template.hpp:760 #11 gazebo::event::EventT<void (std::string, std::string)>::Signal<std::string, char [7]>(std::string const&, char const (&) [7]) (this=0xb7fc0540 <gazebo::event::Events::setSelectedEntity>, _p1=..., _p2=...) 
at /tmp/buildd/gazebo-current-2.2.2/gazebo/common/Event.hh:276 #12 0x080eef36 in operator()<std::basic_string<char>, char [7]> (_p2=..., _p1=..., this=<optimized out>) at /tmp/buildd/gazebo-current-2.2.2/gazebo/common/Event.hh:141 #13 gazebo::gui::GLWidget::OnMouseReleaseNormal (this=0x8bb2f58) at /tmp/buildd/gazebo-current-2.2.2/gazebo/gui/GLWidget.cc:587 #14 0x080ef0c8 in gazebo::gui::GLWidget::OnMouseRelease (this=0x8bb2f58) at /tmp/buildd/gazebo-current-2.2.2/gazebo/gui/GLWidget.cc:411 #15 0x080f39d8 in operator() (a1=..., p=<optimized out>, this=0x8bca5b8) at /usr/include/boost/bind/mem_fn_template.hpp:165 #16 operator()<bool, boost::_mfi::mf1<bool, gazebo::gui::GLWidget, const gazebo::common::MouseEvent&>, boost::_bi::list1<const gazebo::common::MouseEvent&> > (a=<synthetic pointer>, f=..., this=0x8bca5c0) at /usr/include/boost/bind/bind.hpp:303 #17 operator()<gazebo::common::MouseEvent> (a1=..., this=0x8bca5b8) at /usr/include/boost/bind/bind_template.hpp:47 #18 boost::detail::function::function_obj_invoker1<boost::_bi::bind_t<bool, boost::_mfi::mf1<bool, gazebo::gui::GLWidget, gazebo::common::MouseEvent const&>, boost::_bi::list2<boost::_bi::value<gazebo::gui::GLWidget*>, boost::arg<1> > >, bool, gazebo::common::MouseEvent const&>::invoke ( function_obj_ptr=..., a0=...) at /usr/include/boost/function/function_template.hpp:132 #19 0x0815d6c5 in operator() (a0=..., this=0x8bca5b4) at /usr/include/boost/function/function_template.hpp:760 #20 gazebo::gui::MouseEventHandler::Handle ( this=this@entry=0x833d958 <SingletonT<gazebo::gui::MouseEventHandler>::GetInstance()::t>, _event=..., _list=...) at /tmp/buildd/gazebo-current-2.2.2/gazebo/gui/MouseEventHandler.cc:115 #21 0x0815d7ca in gazebo::gui::MouseEventHandler::HandleRelease ( this=this@entry=0x833d958 <SingletonT<gazebo::gui::MouseEventHandler>::GetInstance()::t>, _event=...) 
at /tmp/buildd/gazebo-current-2.2.2/gazebo/gui/MouseEventHandler.cc:81 #22 0x080ee3f5 in gazebo::gui::GLWidget::mouseReleaseEvent (this=0x8bb2f58, _event=0xbfffe464) at /tmp/buildd/gazebo-current-2.2.2/gazebo/gui/GLWidget.cc:555 #23 0xb6eba710 in QWidget::event(QEvent*) () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #24 0xb6e63c7c in QApplicationPrivate::notify_helper(QObject*, QEvent*) () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #25 0xb6e67587 in QApplication::notify(QObject*, QEvent*) () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #26 0xb6bbb90e in QCoreApplication::notifyInternal(QObject*, QEvent*) () from /usr/lib/i386-linux-gnu/libQtCore.so.4 #27 0xb6e6a823 in QApplicationPrivate::sendMouseEvent(QWidget*, QMouseEvent*, QWidget*, QWidget*, QWidget**, QPointer<QWidget>&, bool) () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #28 0xb6eec735 in ?? () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #29 0xb6eeb525 in QApplication::x11ProcessEvent(_XEvent*) () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #30 0xb6f1a904 in ?? () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #31 0xb44ab3b3 in g_main_context_dispatch () from /lib/i386-linux-gnu/libglib-2.0.so.0 #32 0xb44ab750 in ?? () from /lib/i386-linux-gnu/libglib-2.0.so.0 #33 0xb44ab831 in g_main_context_iteration () from /lib/i386-linux-gnu/libglib-2.0.so.0 #34 0xb6bedc21 in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/i386-linux-gnu/libQtCore.so.4 #35 0xb6f1aa0a in ?? 
() from /usr/lib/i386-linux-gnu/libQtGui.so.4 #36 0xb6bba3ec in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/i386-linux-gnu/libQtCore.so.4 #37 0xb6bba6e1 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/i386-linux-gnu/libQtCore.so.4 #38 0xb6bc03fa in QCoreApplication::exec() () from /usr/lib/i386-linux-gnu/libQtCore.so.4 #39 0xb6e61fc4 in QApplication::exec() () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #40 0x080fa7ad in gazebo::gui::run (_argc=_argc@entry=1, _argv=_argv@entry=0xbfffefc4) at /tmp/buildd/gazebo-current-2.2.2/gazebo/gui/GuiIface.cc:210 #41 0x080d9ec7 in main (_argc=1, _argv=0xbfffefc4) at /tmp/buildd/gazebo-current-2.2.2/gazebo/gui/main.cc:23 Another BT, ended earlier before a total lockout - if I don't click on the robot directly, it doesn't crash completely, just doesn't speak to the gzserver. #0 0xb7df67b9 in atomic_exchange_and_add (dv=-1, pw=0x15) at /usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp:50 #1 release (this=0x11) at /usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp:143 #2 ~shared_count (this=0xa806754, __in_chrg=<optimized out>) at /usr/include/boost/smart_ptr/detail/shared_count.hpp:305 #3 ~shared_ptr (this=0xa806750, __in_chrg=<optimized out>) at /usr/include/boost/smart_ptr/shared_ptr.hpp:164 #4 destroy (__p=0xa806750, this=<optimized out>) at /usr/include/c++/4.7/ext/new_allocator.h:123 #5 destroy (__p=0xa806750, __a=...) at /usr/include/c++/4.7/ext/alloc_traits.h:206 #6 std::vector<boost::shared_ptr<gazebo::rendering::Visual>, std::allocator<boost::shared_ptr<gazebo::rendering::Visual> > >::erase (this=this@entry=0xa8f93a0, __position=__position@entry=...) at /usr/include/c++/4.7/bits/vector.tcc:141 #7 0xb7de5a71 in gazebo::rendering::Visual::DetachVisual (this=0xa8f9330, _name=...) 
at /tmp/buildd/gazebo-current-2.2.2/gazebo/rendering/Visual.cc:600 #8 0xb7de5c2a in gazebo::rendering::Visual::Fini (this=0xa903038) at /tmp/buildd/gazebo-current-2.2.2/gazebo/rendering/Visual.cc:164 #9 0xb7dabdc1 in gazebo::rendering::Scene::RemoveVisual (this=0x8bd9918, _vis=...) at /tmp/buildd/gazebo-current-2.2.2/gazebo/rendering/Scene.cc:2699 #10 0xb7db291d in gazebo::rendering::Scene::Clear (this=0x8bd9918) at /tmp/buildd/gazebo-current-2.2.2/gazebo/rendering/Scene.cc:189 #11 0xb7d94951 in gazebo::rendering::RenderEngine::RemoveScene ( this=this@entry=0x833d980 <SingletonT<gazebo::rendering::RenderEngine>::GetInstance()::t>, _name=...) at /tmp/buildd/gazebo-current-2.2.2/gazebo/rendering/RenderEngine.cc:218 #12 0xb7d94a5d in gazebo::rendering::RenderEngine::Fini ( this=this@entry=0x833d980 <SingletonT<gazebo::rendering::RenderEngine>::GetInstance()::t>) at /tmp/buildd/gazebo-current-2.2.2/gazebo/rendering/RenderEngine.cc:326 #13 0xb7d9ec83 in gazebo::rendering::fini () at /tmp/buildd/gazebo-current-2.2.2/gazebo/rendering/RenderingIface.cc:66 #14 0xb4632e3c in gazebo::sensors::fini () at /tmp/buildd/gazebo-current-2.2.2/gazebo/sensors/SensorsIface.cc:71 #15 0xb783dec8 in gazebo::shutdown () at /tmp/buildd/gazebo-current-2.2.2/gazebo/gazebo.cc:248 #16 0x08111ccc in gazebo::gui::MainWindow::closeEvent (this=0x888fb60) at /tmp/buildd/gazebo-current-2.2.2/gazebo/gui/MainWindow.cc:240 #17 0xb6eb9f9f in QWidget::event(QEvent*) () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #18 0xb72eb46c in QMainWindow::event(QEvent*) () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #19 0xb6e63c7c in QApplicationPrivate::notify_helper(QObject*, QEvent*) () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #20 0xb6e66bfa in QApplication::notify(QObject*, QEvent*) () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #21 0xb6bbb90e in QCoreApplication::notifyInternal(QObject*, QEvent*) () from /usr/lib/i386-linux-gnu/libQtCore.so.4 #22 0xb6eb98d0 in 
QWidgetPrivate::close_helper(QWidgetPrivate::CloseMode) () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #23 0xb6eda276 in ?? () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #24 0xb6edd2ea in QApplication::x11ClientMessage(QWidget*, _XEvent*, bool) () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #25 0xb6eeb293 in QApplication::x11ProcessEvent(_XEvent*) () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #26 0xb6f1a904 in ?? () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #27 0xb44ab3b3 in g_main_context_dispatch () from /lib/i386-linux-gnu/libglib-2.0.so.0 #28 0xb44ab750 in ?? () from /lib/i386-linux-gnu/libglib-2.0.so.0 #29 0xb44ab831 in g_main_context_iteration () from /lib/i386-linux-gnu/libglib-2.0.so.0 #30 0xb6bedc3f in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/i386-linux-gnu/libQtCore.so.4 #31 0xb6f1aa0a in ?? () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #32 0xb6bba3ec in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/i386-linux-gnu/libQtCore.so.4 #33 0xb6bba6e1 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/i386-linux-gnu/libQtCore.so.4 #34 0xb6bc03fa in QCoreApplication::exec() () from /usr/lib/i386-linux-gnu/libQtCore.so.4 #35 0xb6e61fc4 in QApplication::exec() () from /usr/lib/i386-linux-gnu/libQtGui.so.4 #36 0x080fa7ad in gazebo::gui::run (_argc=_argc@entry=1, _argv=_argv@entry=0xbfffefc4) at /tmp/buildd/gazebo-current-2.2.2/gazebo/gui/GuiIface.cc:210 #37 0x080d9ec7 in main (_argc=1, _argv=0xbfffefc4) at /tmp/buildd/gazebo-current-2.2.2/gazebo/gui/main.cc:23 Any help would be greatly appreciated. EDIT: Further data. I've narrowed this problem down to post the transport::run() call in the following code. Post call, the world update stops ticking over/communicating with the gzclient. // Get the world name. 
this->world = _parent->GetWorld(); this->model = _parent; this->world->EnablePhysicsEngine(true); //Start up the transport system if (!transport::init()) { std::cout << "Transport init failed!" << std::endl; return; } transport::run(); std::cout << "Transport is running?!" << std::endl; Originally posted by qworg on Gazebo Answers with karma: 31 on 2014-02-24 Post score: 0 Original comments Comment by nkoenig on 2014-02-24: You shouldn't call transport::init or transport::run inside a plugin. Transport setup is handled for you. Answer: As per nkoenig above - my error was in calling transport inside of the plugin. Thanks! Originally posted by qworg with karma: 31 on 2014-02-25 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 3557, "tags": "ubuntu-13.04" }
Statically allocated buffer using templates
Question: I want to have some classes that would contain arrays of different sizes. I want this to be statically allocated because it is intended for an embedded system. The only good solution I can think of is using templates. However, I want to use the same class type to access those objects. This is my solution so far. class Base { public: Base(char* _data, int _size) : data(_data), size(_size) {} char* getData(void) { return data; } int getSize(void) { return size; } protected: char* const data; const int size; }; template <int sz> class Foo : public Base { public: Foo() : Base(dataBuff, sizeof(dataBuff)) {} private: char dataBuff[sz]; }; // Here I can access every template that inherits Base class void printFoo(Base* base) { std::cout << base->getData() << std::endl; std::cout << "size is " << base->getSize() << std::endl; } int main(void) { Foo<20> foo; char* data = foo.getData(); strcpy(data, "Hello world"); printFoo(&foo); return 0; } Of course the reason for this is to have more methods in the base class for appending, deleting, and searching for data. Is this a valid approach? This is something very basic. Can I have a better solution? EDIT: Fixed bug (typo). Answer: Use std::array if possible You can get a statically allocated buffer using std::array. Its implementation is very similar to your Foo, with the added benefit that it works like any other STL container. Here is how you would use it: #include <array> #include <cstring> #include <iostream> int main() { std::array<char, 20> foo; strcpy(foo.data(), "Hello world"); std::cout << foo.data() << "\nSize is " << foo.size() << '\n'; } About static allocation In your example, foo is not allocated statically; rather, it is allocated on the stack. You could prepend static in front of it and/or declare that variable outside a function to ensure it is really allocated statically. Then the question is whether this will save you any memory. Many embedded systems do allow dynamic allocation of heap memory. 
Sure, there is some overhead associated with it, but an advantage is that you only allocate as much as you need, whereas with an array of a static size, you might have to reserve more memory than you are going to need. What is best depends on the situation. About type erasure However, I want to use the same class type to access those objects. I don't see a reason for that in your example usage. But if you do need to pass something like a pointer to Base along, consider that you don't need inheritance; you can have a separate class that stores a pointer to the array and a size, like std::span for arrays, or std::string_view for strings in particular. Here is an example using std::span that matches your example: #include <array> #include <cstring> #include <iostream> #include <span> void printFoo(std::span<char> base) { std::cout << base.data() << "\nSize is " << base.size() << '\n'; } int main() { std::array<char, 20> foo; strcpy(foo.data(), "Hello world"); printFoo(foo); } If you cannot use std::array and/or std::span on your embedded system, I recommend you try to implement them yourself. Your code already has parts of it, you just need some name changes and to not use the span as a base class for the array anymore.
{ "domain": "codereview.stackexchange", "id": 44170, "tags": "c++, inheritance" }
rosserial doesn't subscribe to a published Topic
Question: I've written a simple program for my Arduino which reads data from an IMU and sends it to the PC. As you can see it runs smoothly and I'm able to see the content of the topic through ROS using rostopic echo /topic wilhem@MIG-31:~/workspace_ros$ rostopic echo /imu9dof roll: 0.00318404124118 pitch: 0.00714263878763 yaw: -2.9238409996 --- roll: 0.00247799116187 pitch: 0.00720235193148 yaw: -2.92382121086 --- roll: 0.00187148409896 pitch: 0.00657133804634 yaw: -2.92379975319 I also wrote a program running ROS to subscribe to that topic, take the values of the 3 axes and put them in variables which are used later. The problem is that the subscriber doesn't listen to the topic and doesn't save the variables transmitted by the message. Here is the bare-bones program: #include <ros/ros.h> #include <string> #include <sensor_msgs/JointState.h> #include <tf/transform_broadcaster.h> #include "asctec_quad/imu9dof.h" float roll_ = {0.0}; float pitch_ = {0.0}; float yaw_ = {0.0}; void updateIMU( const asctec_quad::imu9dofPtr& data ) { roll_ = data->roll; pitch_ = data->pitch; yaw_ = data->yaw; ROS_INFO( "I heard: [%f]", data->yaw ); } int main( int argc, char **argv ) { ros::init( argc, argv, "state_publisher" ); ros::NodeHandle nh; ROS_INFO( "starting ROS..." 
); ros::Publisher joint_pub = nh.advertise<sensor_msgs::JointState>( "joint_states", 1 ); ros::Subscriber sub = nh.subscribe( "imu9dof", 10, updateIMU ); sensor_msgs::JointState joint_state; ros::Rate loop( 50 ); tf::TransformBroadcaster broadcaster; geometry_msgs::TransformStamped odom_trans; odom_trans.header.frame_id = "odom"; odom_trans.child_frame_id = "base_footprint"; while( ros::ok() ) { // update position odom_trans.header.stamp = ros::Time::now(); odom_trans.transform.translation.x = 0; odom_trans.transform.translation.y = 0; odom_trans.transform.translation.z = 0; odom_trans.transform.rotation = tf::createQuaternionMsgFromYaw( 0 ); /* update joint state */ joint_state.header.stamp = ros::Time::now(); joint_state.name.resize(3); joint_state.position.resize(3); joint_state.name[0] = "odom_2_base_footprint"; joint_state.position[0] = 0.0; joint_state.name[1] = "base_footprint_2_base_link"; joint_state.position[1] = 0.0; joint_state.name[2] = "base_link_2_base_frame"; joint_state.position[2] = yaw_; /* send the joint_state position */ joint_pub.publish( joint_state ); /* broadcast the transform */ broadcaster.sendTransform( odom_trans ); loop.sleep(); } ros::spin(); return 0; } As you can see in the function: void updateIMU( const asctec_quad::imu9dofPtr& data ) { roll_ = data->roll; pitch_ = data->pitch; yaw_ = data->yaw; ROS_INFO( "I heard: [%f]", data->yaw ); } I put a ROS_INFO there just for debugging purposes. I start the serial node for the Arduino and I can see its output, but when launching the ROS program that function is never called. It hangs all the time: wilhem@MIG-31:~/workspace_ros$ rosrun asctec_quad my_prog [ INFO] [1409088374.452412227]: starting ROS... I did it checking the tutorials many times and cross-checking in the Q&A. What can I do? Thanks Originally posted by Andromeda on ROS Answers with karma: 893 on 2014-08-26 Post score: 0 Answer: Change ros::spin() to ros::spinOnce() and move it inside the while loop. 
See http://answers.ros.org/question/11887/significance-of-rosspinonce/?answer=17582#post-id-17582 for an excellent explanation of why you need to do this. Originally posted by duffany1 with karma: 173 on 2014-08-26 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Andromeda on 2014-08-27: Ehy man, I love you for the rest of my life. It works!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
{ "domain": "robotics.stackexchange", "id": 19201, "tags": "arduino, rosserial" }
Do “neutrino supernovae” exist?
Question: Core collapse supernovae release most of their energy in the form of neutrinos. About 1% of the neutrinos are absorbed by the thick outer envelope, which powers a spectacular supernova explosion. Core collapse can also happen in white dwarfs. According to this answer, white dwarfs made of light elements (He, C) explode upon reaching the Chandrasekhar mass limit, while white dwarfs consisting of heavy elements (O, Ne, Mg) collapse into neutron stars. The collapse is not much different from the core collapse of massive stars, except that the white dwarfs are not buried under a thick atmosphere. As a result, neutrinos will just escape freely with little absorption. Does that mean such collapses will result in “neutrino supernovae” (in analogy with neutron bombs), with minimal electromagnetic emission? Answer: The answer appears to be yes. Unless the accretion-induced-collapse (AIC) supernova interacts with a close companion (see below), the supernova will be much fainter optically than a Type II core-collapse supernova explosion in a massive star, but will be similar in terms of neutrino luminosity. In type II supernovae the neutrinos are not absorbed by the thick outer envelope. They are absorbed, briefly, in the inner few tens of km of the collapsed core at nuclear densities ($\geq 10^{17}$ kg m$^{-3}$). From that point of view, a supernova instigated by accretion-induced collapse (AIC) of a massive white dwarf will be no different to a core-collapse supernova. A reasonably canonical paper to look at is that by Fryer et al. (1999), who did some of the first basic models of AIC supernovae. Perhaps the main difference in not having a massive envelope is that the explosion develops more rapidly - there should not be such a big delay between a neutrino signature and a visible signature of the supernova. 
Another consequence may be that "de-leptonised" (aka neutron-rich) material should be more successfully ejected and could be a source of neutron-capture (r-process) elements. In type II supernovae, the infalling envelope probably causes a much greater fallback of this neutron-rich material onto the proto neutron star. AIC supernovae are, though, expected to be relatively faint and short-lived in optical terms - the amount of mass ejected is small and contains little of the radioactive nickel that powers type Ia supernovae. Piro & Thompson (2014) estimate that the absolute visual magnitude of such an event would peak at just $M_V \sim -13$, which is about 100-1000 times fainter than most type Ia and type II supernovae, and would fade on timescales of 1-2 days. However, if the AIC is triggered by accretion from a close companion, they show that the blast wave will shock-heat the envelope of the companion, causing the transient to be more luminous by a factor of $\sim 10-100$ and extending the timescale of emission by a few days.
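The magnitude comparison can be sanity-checked directly, since the astronomical magnitude scale is logarithmic with 5 magnitudes corresponding to a factor of 100 in flux. A quick sketch (the comparison peaks of $-18$ and $-20.5$ for ordinary supernovae are illustrative assumptions, not values from the paper):

```python
def flux_ratio(m_bright, m_faint):
    # Each magnitude is a factor of 10**0.4 in flux, so a 5-magnitude
    # difference is exactly a factor of 100.
    return 10 ** (-0.4 * (m_bright - m_faint))

# AIC transient peaking at M_V ~ -13 versus ordinary supernovae:
ia_like = flux_ratio(-18.0, -13.0)   # ~100x brighter than the AIC event
brightest = flux_ratio(-20.5, -13.0) # ~1000x brighter
```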
{ "domain": "astronomy.stackexchange", "id": 6364, "tags": "astrophysics, supernova, neutron-star, white-dwarf" }
Acey Ducey game
Question: I'm a beginner in Java and want to improve and learn. This is a game called "Acey Ducey": The dealer shows you two cards. You decide whether you want the dealer to pick a third card or not. If the third card is between the two cards, then you win, otherwise you lose. GitHub This is the main file: import model.*; import java.util.ArrayList; import java.util.List; import java.util.Scanner; public class Main { public static void main(String[] args) { // load Player player = new Player(); List<Card> deck = new ArrayList<>(); for (CardColor color : CardColor.values()) { for (CardNumber number : CardNumber.values()) { Card card = new Card(number, color); deck.add(card); } } while(true) { System.out.println("---------------------"); System.out.println("remaining cards " + deck.size()); if (deck.size() < 2 ) { System.out.println("Game over. Not enough cards"); break; } if (player.getMoney() <= 0) { System.out.println("Game over. Not enough money"); break; } // pick two Card firstPick = Dealer.pickRandomCardFrom(deck); Card secondPick = Dealer.pickRandomCardFrom(deck); Card thirdPick = new Card(); //show two cards System.out.println("First card "); System.out.println(firstPick.getCardNumber() + " of " + firstPick.getCardColor()); System.out.println("----"); System.out.println("Second card "); System.out.println(secondPick.getCardNumber() + " of " + secondPick.getCardColor()); System.out.println("Do you want to set?"); String userInput = getUserInput(); if (userInput.equals("y")) { System.out.println("How much money do you want to bet?"); System.out.println("You got that amount of money " + player.getMoney()); int playerBet = Integer.parseInt(getUserInput()); try { player.withdrawMoney(playerBet); } catch (Exception e) { System.out.println("You don't have enough money"); e.printStackTrace(); break; } thirdPick = Dealer.pickRandomCardFrom(deck); int compareThis = thirdPick.getCardNumberValue(); int maxValue = Math.max(firstPick.getCardNumberValue(), 
secondPick.getCardNumberValue()); int minValue = Math.min(firstPick.getCardNumberValue(), secondPick.getCardNumberValue()); System.out.println("Result:"); System.out.println(thirdPick.getCardNumber() + " of " + thirdPick.getCardColor()); if (compareThis >= minValue && compareThis <= maxValue) { System.out.println("You won!"); try{ player.depositMoney(2 * playerBet); } catch (Exception e) { System.out.println("Invalid amount of money"); e.printStackTrace(); } } else { System.out.println("though luck!"); } } } } private static String getUserInput() { Scanner scanner = new Scanner(System.in); return scanner.nextLine(); } } In the folder "model" I got these classes: Card package model; public class Card { private CardNumber number; private CardColor color; public Card() {}; public Card(CardNumber number, CardColor color) { this.number = number; this.color = color; } public CardNumber getNumber() { return number; } public String getCardNumber() { return number.getCardNumber(); } public int getCardNumberValue() { return number.getCardValue(); } public void setNumber(CardNumber number) { this.number = number; } public CardColor getColor() { return color; } public String getCardColor() { return color.getCardColor(); } public void setColor(CardColor color) { this.color = color; } } Card Color (=card suite) package model; public enum CardColor { HEART("Heart"), SPADE("Spade"), CLUB("Club"), DIAMOND("Diamond"); private String color; private CardColor(String color) { this.color = color; } public String getCardColor() { return color; } } Card Number package model; public enum CardNumber { ACE("Ace", 1), TWO("Two", 2), THREE("Three", 3), FOUR("Four", 4), FIVE("Five", 5), SIX("Six", 6), SEVEN("Seven", 7), EIGHT("Eight", 8), NINE("Nine", 9), TEN("Ten", 10), JACK("Jack", 11), QUEEN("Queen", 12), KING("King", 13); private String number; private int value; private CardNumber(String number, int value) { this.number = number; this.value = value; } public String getCardNumber(){return 
number;} public int getCardValue() {return value;} } Dealer package model; import java.util.List; public class Dealer { public static Card pickRandomCardFrom(List<Card> deck) { int randomNumber = (int)Math.floor(Math.random() * deck.size()); Card pickedCard = deck.get(randomNumber); deck.remove(randomNumber); return pickedCard; } } Player package model; public class Player { private int money = 100; public int getMoney() { return money; } public void setMoney(int money) throws Exception { if (money > 0) { this.money = money; } else { throw new Exception(); } } public void withdrawMoney(int amount) throws Exception { if (amount <= this.money) { this.money -= amount; return; } throw new Exception(); } public void depositMoney(int amount) throws Exception { if (amount > 0) { this.money += amount; return; } throw new Exception(); } } Answer: First of all, it's very well written and I can easily understand what's going on. I also want to point out to another post which is similar to yours with a lot of good code, answers and comments: Object Oriented Design of Card Deck However ... What 'hurts' the most is the very long main method. A lot of things in the main method can be moved to separate types/methods, for instance, the loading of the Deck: Consider writing something like List<Card> deck = getCardDeck(). It's still quite clear what you should happen, but it's written / summarized in one line. As others have mentioned, it's better to move the 'game itself' to its separate type. Why? Assume you want to provide a GUI for your game - what do you have to change? Yeah, a lot, because a lot of the code is not reusable. I think, with that in mind, you will have a total different approach of writing the game, its classes and methods and the control flows. 
And by how your posted code looks, and how you are already "thinking in objects", I think you can do that without me pointing out every single tiny thing I see - so I won't go into detail about what part of code you should refactor to where and why - which would really take a lot of time. Now, some smaller thingies: // load: I do always mention this in nearly every answer I post here: Try not to write any comments at all. Make the code speak for itself, make it sexy (that's what we call it, when it's really well written). Actually it does speak for itself, it creates a new player, then creates a card deck, no need to explain that any further. The declaration of the three Cards firstPick, secondPick, thirdPick: I would name it something like firstSelectedCard, to make it clear, that this variable is a Card. Why? So it will be obvious, when I read it later in the code. Also, you declare thirdPick up there. Try to put it as close as possible to the point where you need it. In your case, you can declare it directly when you call Dealer.pickRandomCardFrom. Why? When you come across the pickRandomCardFrom-line, you will ask yourself "Why am I assigning a new value? What happened to it before?". Also, that might sound very stupid: You have to scroll up. Even if it's a "small thing to do", the process of scrolling up, "answering your question about a specific line of the code", scrolling down again and searching for the line you were at before can break your concentration. And the more of those "focus breaks" in a piece of code, the harder it is to understand. If you put it closer to where it's actually needed, you will save the scrolling. int compareThis = thirdPick.getCardNumberValue(); Again: Move the declaration down. compareThis: That is really badly named. A variable should be named after what it represents, not what you should have to do with it. Card type: I'd make it immutable, meaning: Remove the default constructor and remove the setters. I don't think it's needed. 
Dealer.pickRandomCardFrom: The remove method of List actually does return the removed object, so you can directly write return deck.remove(randomNumber). Player exception handling. Usually you do the so-called "guards" - a precondition, which must be satisfied, before the method is or can be executed - at the top of the method and then perform the actual method. Why? Well, it's usually more of a problem when you have more than one guard, which leads to quite unsexy nested if else blocks. So, it's a good habit to do it always. I really don't like to use return for anything else than returning a return value. Also: In setMoney you throw the Exception within the else block, in withdrawMoney, you return in the if block, and have no else block, at least be consistent. You throw an exception without a message, I would have at least expected something like "Not enough money". In general it's questionable if you should throw an Exception there, because an Exception shouldn't be used for controlling the flow of a routine. I would have provided more methods, for example for depositMoney, I would have provided something like a canDepositMoney method. And then call the actual depositMoney method, at the same time leave the guards, but throw a RuntimeException (if that happens, something's totally wrong, anyway). Hope that helps ...
{ "domain": "codereview.stackexchange", "id": 25530, "tags": "java, playing-cards" }
One-bit depth audio
Question: I have a music player which is only able to play sound with a bit depth of one. I can produce this by taking a song and simply boosting the signal $+100\textrm{ dB}$. Is there another approach which produces a better-sounding result? Answer: You could try PWM or Delta-Sigma modulation. The result will depend on the sample rate of your music player, and the frequency content of the sound you are trying to play back. You can get good results if you can use a high sample rate and an "aggressive" noise-shaping filter in Delta-Sigma modulation. A Python-based toolbox is available here. The format used on Super Audio CD (SACD) is one bit.
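A first-order Delta-Sigma modulator is only a few lines and already sounds far better than hard clipping, because the quantization noise is pushed toward high frequencies where a later lowpass filter (or the ear) attenuates it. A minimal sketch, independent of any particular toolbox:

```python
import numpy as np

def delta_sigma_1bit(x):
    """First-order Delta-Sigma modulation of x (samples in [-1, 1]).

    The integrator accumulates the error between the input and the
    previous 1-bit output; the sign of the integrator gives the new bit.
    """
    y = np.empty(len(x))
    integrator = 0.0
    prev = 0.0
    for n, sample in enumerate(x):
        integrator += sample - prev
        y[n] = 1.0 if integrator >= 0.0 else -1.0
        prev = y[n]
    return y

# The local average of the bitstream tracks the input: a constant input
# of 0.5 produces a +/-1 stream whose mean is ~0.5.
bits = delta_sigma_1bit(np.full(1000, 0.5))
```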
{ "domain": "dsp.stackexchange", "id": 3949, "tags": "audio, transform, downsampling, music" }
Does the principal quantum number, $n$, have an operator?
Question: The hydrogen energy eigenstates are labelled with well-defined $n$, the principal quantum number. Is there an associated operator that one can apply to these wavefunctions which would result in an eigenvalue of $n$ multiplied by the hydrogen wavefunction? If so, what does the operator look like? Answer: You can always construct an operator that gives you any quantum number you wish, using the eigenstates. Let the bound states of the hydrogen atom be $|n,l,m\rangle$; the representations of these states in position space (that is, the spatial wave functions) are well known. Then you may have an operator $${\cal N}=\sum_{n=1}^{\infty}\sum_{l=0}^{n-1}\sum_{m=-l}^{l}n|n,l,m\rangle\langle n,l,m|,$$ for which the eigenvalues are the principal quantum numbers for the bound states and zero otherwise. In the position space representation of this operator, each term in the sum involves the explicit Schrödinger wave function. For example, when the $n=1$ term acts on a general wave function $\Psi(\vec{r})$, it takes the form $$\psi_{1,0,0}(\vec{r})\int d^{3}r'\,\psi^{*}_{1,0,0}\left(\vec{r}\,'\right)\Psi\left(\vec{r}\,'\right) =\frac{1}{\pi a_{0}^{3}}\exp\left(-r/a_{0}\right)\int d^{3}r'\, \exp\left(-r'/a_{0}\right)\Psi\left(\vec{r}\,'\right).$$ You could also write this operator in a different basis of energy eigenstates (such as that obtained by solving the Coulomb problem in parabolic coordinates). However, if you are asking whether this has a convenient form when expressed in terms of $\vec{r}$ and $\vec{p}$, the answer is no. For one thing, the principal quantum number is not something that is even defined for the complete hydrogen spectrum; the continuum states above zero energy do not have a value for $n$, and if you tried to continue $n$ to these unbound states, its values would be imaginary. If you apply the explicit operator ${\cal N}$ to a continuum energy eigenstate, you will get $0$, since the bound states and continuum states are orthogonal.
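The projector construction can be checked numerically in a truncated basis. Here the bound states $|n,l,m\rangle$ with $n\le 3$ are represented abstractly by standard basis vectors (a toy sketch; no actual hydrogen wavefunctions are needed to exhibit the spectrum of ${\cal N}$):

```python
import numpy as np

# Flatten the labels (n, l, m) with n = 1..3 into a single index.
labels = [(n, l, m) for n in range(1, 4)
                    for l in range(n)
                    for m in range(-l, l + 1)]
dim = len(labels)              # 1 + 4 + 9 = 14 states

# N = sum over states of n |n,l,m><n,l,m| in this orthonormal basis.
N = np.zeros((dim, dim))
for i, (n, l, m) in enumerate(labels):
    e = np.zeros(dim)
    e[i] = 1.0
    N += n * np.outer(e, e)

# The eigenvalues are the principal quantum numbers, each with the
# expected n**2 degeneracy.
eigvals = np.sort(np.linalg.eigvalsh(N))
```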
{ "domain": "physics.stackexchange", "id": 78992, "tags": "quantum-mechanics, hilbert-space, operators, atomic-physics, hydrogen" }
Multiplicity vs Partition function
Question: I'm a little confused between all the different notations for the multiplicity and partition function. They're not the same thing, are they? I know that entropy can be expressed as $ S = k \ln\Omega $ or $ S = k\ln Q + kT \frac{\partial \ln(Q)}{\partial T} $ in terms of multiplicity and partition function respectively. They look like they could be related. What is the relationship? Answer: In the limit that $T\rightarrow\infty$, the partition function and the multiplicity of states are equal. Why? Well, we have that $Q=\sum_{i} e^{-E_i/kT}$, where $i$ indexes all possible microstates. If $T\rightarrow\infty$, these Boltzmann factors all approach one, and we have $Q=\sum_i 1=\Omega$. You might think that in the limit $T\rightarrow\infty$ the two formulas you gave above would badly disagree, since there's an extra term proportional to $T$ in your second formula. But if you do the derivative explicitly, you'll find that $\frac{\partial \ln(Q)}{\partial T}$ is proportional to $\frac{1}{T^2}$, so that the term $kT\frac{\partial \ln(Q)}{\partial T}$ goes to zero as $T\rightarrow\infty$.
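The limit is easy to see numerically for a toy spectrum (units with $k = 1$; the four-level spectrum is an arbitrary illustration):

```python
import numpy as np

energies = np.array([0.0, 1.0, 1.0, 2.0])   # toy spectrum, k = 1
omega = len(energies)                        # multiplicity of states

def Q(T):
    """Canonical partition function, Q = sum_i exp(-E_i / T)."""
    return np.exp(-energies / T).sum()

# High T: every Boltzmann factor -> 1, so Q -> Omega.
high = Q(1e6)    # ~4
# Low T: only the ground state contributes.
low = Q(1e-2)    # ~1
```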
{ "domain": "physics.stackexchange", "id": 67602, "tags": "thermodynamics, statistical-mechanics, entropy, partition-function" }
Can we always simultaneously diagonalize $H_A \otimes \mathbb{1}$ and $\mathbb{1} \otimes H_B$?
Question: Suppose we have systems $A$ and $B$ with respective Hamiltonians $H_A \otimes \mathbb{1}$ and $\mathbb{1} \otimes H_B$. These Hamiltonians commute, so they share the same eigenbasis and hence can be simultaneously diagonalized. If we work in the shared eigenbasis, is it true that we can rewrite the Hamiltonians as $D_A \otimes \mathbb{1}$ and $\mathbb{1}\otimes D_B$ where $D_A$ and $D_B$ are diagonal matrices of $H_A$ and $H_B$ respectively? If this is relevant, let's suppose that the total Hamiltonian is $$H = H_A \otimes \mathbb{1} + \mathbb{1} \otimes H_B.$$ My guess is that for this to work we need $[H_A, H_B] = 0$ and not $[H_A \otimes \mathbb{1}, \mathbb{1} \otimes H_B] = 0$, but I'm struggling to prove that. Edit: $H_A$, $H_B$ have the same dimension. Answer: TL;DR: You can always achieve simultaneous diagonalization of $H_A\otimes\mathbb{1}$ and $\mathbb{1}\otimes H_B$ even if $[H_A, H_B]\ne 0$. And yes, this does follow from the fact that $[H_A\otimes\mathbb{1}, \mathbb{1}\otimes H_B]=0$. Moreover, it is true whether or not the Hilbert spaces of $A$ and $B$ have the same dimension. I will prove it in two different ways. The first one shows that $[H_A\otimes\mathbb{1}, \mathbb{1}\otimes H_B]=0$ is the correct condition. The second also shows that diagonalization may be achieved in a product basis which is sometimes useful to know. Finally, I'll show the irrelevance of $[H_A, H_B]$. Simultaneous diagonalization A general argument is to use the theorem about simultaneous diagonalization: Theorem. If $A$ and $B$ are normal and $[A,B]=0$ then there exists unitary $U$ such that $UAU^\dagger$ and $UBU^\dagger$ are both diagonal. See for example wikipedia. Recall that matrix $M$ is called normal if it commutes with its Hermitian conjugate, i.e. $[M,M^\dagger]=0$. All Hermitian matrices and all unitary matrices are normal. To see how the theorem applies in your case, set $A:=H_A\otimes\mathbb{1}$ and $B:=\mathbb{1}\otimes H_B$. 
Note that both $H_A\otimes\mathbb{1}$ and $\mathbb{1}\otimes H_B$ are Hermitian and thus normal. Moreover $$ \begin{align} [A,B]&=[H_A\otimes\mathbb{1}, \mathbb{1}\otimes H_B]\tag1\\ &=(H_A\otimes\mathbb{1})\circ(\mathbb{1}\otimes H_B) - (\mathbb{1}\otimes H_B)\circ(H_A\otimes\mathbb{1})\tag2\\ &= H_A\otimes H_B - H_A\otimes H_B\tag3\\ &=0.\tag4 \end{align} $$ Thus, all assumptions of the theorem are satisfied and we obtain that there exists a unitary $U_{AB}$ such that $U_{AB}(H_A\otimes\mathbb{1})U_{AB}^\dagger$ and $U_{AB}(\mathbb{1}\otimes H_B)U_{AB}^\dagger$ are diagonal. Independent diagonalization There is a simple way to reach the same conclusion which is less general, but also proves that $U_{AB}$ is a product unitary. It is based on the observation that the basis change on $A$ required for the diagonalization of $H_A$ is independent of the basis change on $B$ required for the diagonalization of $H_B$. Suppose that $V_A$ is a unitary diagonalizing $H_A$ and $W_B$ is a unitary diagonalizing $H_B$ $$ \begin{align} V_A H_A V_A^\dagger = D_A\tag5\\ W_B H_B W_B^\dagger = D_B\tag6 \end{align} $$ where $D_A$ and $D_B$ are diagonal. Define $U_{AB}:=V_A\otimes W_B$. Then $$ U_{AB}(H_A\otimes\mathbb{1})U_{AB}^\dagger = (V_AH_AV_A^\dagger)\otimes(W_B\mathbb{1}W_B^\dagger)=D_A\otimes\mathbb{1}\tag7 $$ and similarly, $$ U_{AB}(\mathbb{1}\otimes H_B)U_{AB}^\dagger = (V_A\mathbb{1}V_A^\dagger)\otimes(W_BH_BW_B^\dagger)=\mathbb{1}\otimes D_B.\tag8 $$ Irrelevance of $[H_A,H_B]$ As a counterexample to the relevance of $[H_A,H_B]$ consider $H_A=X$ and $H_B=Z$ where $X$ and $Z$ are the single-qubit Pauli matrices. Clearly, $[X,Z]=-2iY\ne 0$. However, both $X\otimes\mathbb{1}$ and $\mathbb{1}\otimes Z$ are diagonal in the basis consisting of $|{+}\rangle|0\rangle$, $|{+}\rangle|1\rangle$, $|{-}\rangle|0\rangle$ and $|{-}\rangle|1\rangle$.
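The counterexample at the end is quick to verify with numpy: the Hadamard matrix diagonalizes $X$, so the product unitary of equations (7)-(8) with $V_A = H$ and $W_B = \mathbb{1}$ diagonalizes both $X\otimes\mathbb{1}$ and $\mathbb{1}\otimes Z$, even though $[X, Z]\ne 0$. A sketch:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I = np.eye(2)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # H X H = Z

# X and Z do not commute...
assert not np.allclose(X @ Z, Z @ X)

# ...but a product unitary still diagonalizes both tensor factors.
U = np.kron(H, I)               # V_A (x) W_B with V_A = H, W_B = I
DA = U @ np.kron(X, I) @ U.T    # = Z (x) I, diagonal
DB = U @ np.kron(I, Z) @ U.T    # = I (x) Z, already diagonal

def is_diag(M):
    return np.allclose(M, np.diag(np.diag(M)))
```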
{ "domain": "quantumcomputing.stackexchange", "id": 4437, "tags": "many-body-systems, hamiltonian, linear-algebra" }
What percent of the Earth's core is uranium?
Question: What percent of the Earth's core is uranium? And how much of the heat at the core is from radioactive decay rather than other forces? Answer: Good question! Geochemists and geophysicists agree to disagree, sometimes quite strongly. There are also disagreements within each group as well as between the two groups. It's not just uranium. There are four isotopes whose half-lives are long enough that they can be primordial and whose half-lives are not so long that they don't produce much heat. These four isotopes are Uranium 235, with a half-life of 0.703 billion years, Potassium 40, with a half-life of 1.277 billion years, Uranium 238, with a half-life of 4.468 billion years, and Thorium 232, with a half-life of 14.056 billion years. The consensus view amongst geochemists is that there is very little, if any, of any of these isotopes in the Earth's core. Potassium, thorium, and uranium are chemically active. They readily oxidize. In fact, they readily combine chemically with lots of other elements -- but not iron. They are strongly lithophilic elements. Moreover, all three are "incompatible" elements. In a partial melt, they have a strong affinity to stay in the molten state. This means that relative to solar system abundances, all three of these elements should be strongly enhanced in the Earth's crust, slightly depleted in the Earth's mantle, and strongly depleted in the Earth's core. Geophysicists look at the amount of heat needed to drive the Earth's magnetic field, and at the recent results from neutrino observations. From their perspective, the amount of residual heat from the Earth's formation is not near enough to drive the geodynamo. The growth of the Earth's inner core creates some heat, but not near enough to sustain the geodynamo. Geophysicists want a good amount of heat flux across the core-mantle boundary to sustain the geodynamo, and to them the only viable source is radioactivity. 
Recent geoneutrino experiments appear to rule out uranium or thorium in the Earth's core, but not potassium 40. The neutrinos generated from the decay of potassium 40 are not detectable using current technology.
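To put rough numbers on why exactly these four isotopes matter, here is a back-of-the-envelope Python sketch (illustrative only) using the half-lives quoted above and an Earth age of roughly 4.5 Gyr:

```python
# Fraction of each primordial isotope surviving after ~4.5 Gyr:
# long enough half-life to still be around, short enough to still be "hot".
half_lives_gyr = {
    "U-235": 0.703,
    "K-40": 1.277,
    "U-238": 4.468,
    "Th-232": 14.056,
}
age_of_earth_gyr = 4.5

surviving_fraction = {
    isotope: 0.5 ** (age_of_earth_gyr / t_half)
    for isotope, t_half in half_lives_gyr.items()
}
for isotope, fraction in surviving_fraction.items():
    print(f"{isotope}: {fraction:.1%} remaining")
```

Roughly 1% of the original U-235 survives versus about 80% of the Th-232, which is why all four sit in the sweet spot between "decayed away" and "too slow to heat anything".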
{ "domain": "earthscience.stackexchange", "id": 2601, "tags": "core, uranium" }
Configuring a DataTable using CoffeeScript
Question: I am working on a simple Rails app. For the Expense resource I use a DataTable. So, in my CoffeeScript for this resource I basically do several things: initialize the datatabe; since I use a custom search field (not the one, provided by DataTables), I add a search function in the table api; initialize the Select2 plugin; check to see if the request is made from a mobile device - if yes -> change the type of an input field to date, otherwise initialize Datepicker submit a from when the date from the datepicker is selected Here is my code $.fn.dataTable.ext.type.order['currency-bg-pre'] = parseCurrency changeTextInputToDate = (input) -> input.each(-> $(@).clone().attr('type', 'date').insertBefore(@)).remove() dataTableFooterCallback = (row, data, start, end, display) -> api = @api() total = api.column(1).data().reduce sumCurrency, 0 $(api.column(1).footer()).html("#{total.toFixed(2)} лв") initializeDatepicker = (element) -> element.datepicker autoclose: true clearBtn: true disableTouchKeyboard: true format: 'yyyy-mm-dd' orientation: 'top auto' todayBtn: true todayHighlight: true initializeDateTable = (element) -> element.DataTable columns: [ type: 'date', searchable: false , type: 'currency-bg' searchable: false , type: 'string' , type: null orderable: false searchable: false ] footerCallback: dataTableFooterCallback order: [[0, 'desc']] paging: false dom: 't' initializeSelect2 = (element) -> element.select2 ajax: url: '/tags' data: (query) -> query: query.term processResults: (data) -> results: data cache: true placeholder: 'Enter tag(s)' tags: true theme: 'bootstrap' tokenSeparators: [','] isMobile = -> check = false ((a) -> if /(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|iris|kindle|lge |maemo|midp|mmp|mobile.+firefox|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows ce|xda|xiino/i.test(a) or 
/1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s\-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|\-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw\-(n|u)|c55\/|capi|ccwa|cdm\-|cell|chtm|cldc|cmd\-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc\-s|devi|dica|dmob|do(c|p)o|ds(12|\-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(\-|_)|g1 u|g560|gene|gf\-5|g\-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd\-(m|p|t)|hei\-|hi(pt|ta)|hp( i|ip)|hs\-c|ht(c(\-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i\-(20|go|ma)|i230|iac( |\-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc\-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|\-[a-w])|libw|lynx|m1\-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m\-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(\-| |o|v)|zz)|mt(50|p1|v )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)\-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|\-([1-8]|c))|phil|pire|pl(ay|uc)|pn\-2|po(ck|rt|se)|prox|psio|pt\-g|qa\-a|qc(07|12|21|32|60|\-[2-7]|i\-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h\-|oo|p\-)|sdk\/|se(c(\-|0|1)|47|mc|nd|ri)|sgh\-|shar|sie(\-|m)|sk\-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h\-|v\-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl\-|tdg\-|tel(i|m)|tim\-|t\-mo|to(pl|sh)|ts(70|m\-|m3|m5)|tx\-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|\-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(\-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas\-|your|zeto|zte\-/i.test(a.substr(0, 4)) check = true return ) navigator.userAgent or navigator.vendor or window.opera check parseCurrency = (amount) -> switch typeof amount when 'string' then amount.match(/\d+(?:\.\d+)?/)[0] * 1 when 'number' then amount else 0 sumCurrency = (a, b) -> parseCurrency(a) + parseCurrency(b) $ -> api = initializeDateTable $('.datatable') $('#search').on 'keyup', -> 
api.search(@value).draw() initializeSelect2 $('#expense_tag_list') if isMobile() changeTextInputToDate $('form input:text') else initializeDatepicker $('.input-group.date') $('.datepicker').on 'change', -> $('form').submit() So what I do is basically this: figure out what must happen on the page; write the function that does that; call it on page ready; Although I have provided a particular example, I generally write JavaScript/CoffeeScript this way. Would you provide some basic example of how you would reorganize this code? In particular I am interested in using CoffeeScript's classes. Also, at what point does one decide that the code has become too messy and one needs to start using Angular/React etc.? Answer: I can't tell you when something like Angular/React will make more sense. It's a cost/benefit question, but they're your costs and your benefits. I'd say your code is okay. In some places it seems overly terse, while in others it's spread out too much. And there are a few places where you perhaps abuse CoffeeScript syntax to the point where it becomes harder to read. There's also a number of things that can just be simplified: The isMobile function doesn't need to be a function with a nested IIFE and a closure. Just run it: isMobile = do -> userAgent = navigator.userAgent or navigator.vendor or window.opera or "" /.../.test(userAgent) or /.../.test(userAgent.substr(0,4)) Now, isMobile is just a boolean value. The do keyword means it's evaluated immediately. Note I've also added a "" fallback for userAgent just in case. Since you might call substr on userAgent, it must be a string, or the script will just fail. Your changeTextInputToDate doesn't actually change the inputs, really. It replaces them. But changing them seems much easier: changeTextInputToDate = (inputs) -> inputs.each -> $(this).attr type: 'date' (I'm using this instead of @ just because I always think @ looks weird by itself. 
I like using it as a prefix for things, but if I'm referring to the this object itself, I prefer to use this. Just personal preference.) In dataTableFooterCallback it'd be nicer to fetch the column once, since that's what you need for the expressions. I'd also make the footer callback a nested function in initializeDataTable, too. Especially as that function references @api, which doesn't make sense except in the context of a data table callback. So there's no reason for the function to float around in a wider scope: initializeDateTable = (element) -> footerCallback = -> column = @api().column(1) total = column.data().map(parseCurrency).reduce (sum, value) -> sum + value $(column.footer()).html "#{total.toFixed(2)} лв" element.DataTable columns: [ { type: 'date', searchable: false } { type: 'currency-bg', searchable: false } { type: 'string' } { type: null, orderable: false, searchable: false } ] footerCallback: footerCallback order: [[0, 'desc']] paging: false dom: 't' You'll note a few more changes: I'm using explicit { and } around objects in the columns array, rather than relying solely on whitespace - makes it easier to read, I think. I'm using map + reduce, rather than doing all the work in the reduce callback. This makes for a simpler expression, and eliminates sumCurrency completely. parseCurrency I'd write as parseCurrency = (value) -> switch typeof value when 'number' then value when 'string' then (value.match(/\d+(\.\d+)?/)?[0] or 0) * 1 else 0 The ?[0] or 0 guards against match returning null. Without the null-coalescing ? your script just fails as null[0] doesn't work. Finally, I'm not sure I understand these last lines: if isMobile() changeTextInputToDate $('form input:text') else initializeDatepicker $('.input-group.date') $('.datepicker').on 'change', -> $('form').submit() So, if it's a mobile client you change all text inputs in any form. If it's not, you initialize a datepicker with a completely different - and much more specific - selector. 
It just reads as "either do this, or do something completely different". Why aren't the selectors more similar? I'd imagine it's the same inputs you're dealing with. And finally, the change event listener. That's yet another selector! So that's confusing.
{ "domain": "codereview.stackexchange", "id": 16830, "tags": "jquery, object-oriented, coffeescript" }
How to convert ASCII to binary to facilitate a UART to SPI bridge for an MSP430?
Question: I am currently working on a project where commands from a PC to the MSP430 will be communicated via UART. These commands need to be converted to binary to be able to be written to the SPI bus. For example, 'a' from the PC is translated to 0x61 on the microcontroller, but what is really needed is hex 'a'. Thus ASCII 'a' from PC = 0x61 => need to translate to hex 0x0A. Question: What is the simplest way to convert ASCII to binary? I am using "C". Answer: The input from the PC is a character, i.e. an ASCII-value. To understand how this is saved, have a look at the ASCII-table. The numbers '0' to '9' are represented by the ASCII-codes 0x30 to 0x39. The capital letters 'A' to 'Z' are represented by 0x41 to 0x5A, and the small letters 'a' to 'z' by 0x61 to 0x7A. You could of course use a function like sscanf, as suggested by @JohnO'M. Such functions (printf, scanf, ...) have many features but thus require a lot of power and memory[Citation needed], which we often don't want to spend on a microcontroller. A simple alternative would be to distinguish the 3 possible cases (0-9 or A-F or a-f) and subtract the correct number from the ASCII code to get to the result: if( (s >= 0x30) && (s <= 0x39)) { // 0-9 x = s - 0x30; } else { if( (s >= 0x41) && (s <= 0x46)) { // A-F x = s - 0x37; } else { if( (s >= 0x61) && (s <= 0x66)) { // a-f x = s - 0x57; } else { // wrong input x = 0xFF; } } }
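The same offset logic is easy to cross-check on a PC before burning it into the firmware; here is the identical mapping sketched in Python (`hex_char_to_value` is a made-up name for illustration):

```python
# Cross-check of the offsets used in the C snippet above:
# '0'-'9' -> subtract 0x30, 'A'-'F' -> subtract 0x37, 'a'-'f' -> subtract 0x57.
def hex_char_to_value(c: str) -> int:
    code = ord(c)
    if 0x30 <= code <= 0x39:    # '0'..'9'
        return code - 0x30
    if 0x41 <= code <= 0x46:    # 'A'..'F'
        return code - 0x37
    if 0x61 <= code <= 0x66:    # 'a'..'f'
        return code - 0x57
    return 0xFF                 # wrong input, as in the C version

print(hex(hex_char_to_value('a')))   # 0xa, i.e. ASCII 0x61 -> hex digit 0x0A
```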
{ "domain": "engineering.stackexchange", "id": 382, "tags": "electrical-engineering, embedded-systems" }
Common false beliefs in theoretical computer science
Question: This post is inspired by the one in MO: Examples of common false beliefs in mathematics. Since the site is designed for answering research-level questions, examples like $\mathsf{NP}$ stands for non-polynomial time should not be on the list. Meanwhile, we do want some examples that may not be hard, but which look reasonable without thinking through the details. We want the examples to be educational, and they usually appear when studying the subject for the first time. What are some (non-trivial) examples of common false beliefs in theoretical computer science that appear to people who are studying in this area? To be precise, we want examples different from surprising results and counterintuitive results in TCS; those kinds of results are surprising to many people, but they are TRUE. Here we are asking for surprising examples that people may think are true at first glance, but after deeper thought the fault within is exposed. As an example of proper answers on the list, this one comes from the field of algorithms and graph theory: For an $n$-node graph $G$, a $k$-edge separator $S$ is a subset of edges of size $k$, where the nodes of $G \setminus S$ can be partitioned into two non-adjacent parts, each consisting of at most $3n/4$ nodes. We have the following "lemma": A tree has a 1-edge separator. Right? Answer: This one is common to computational geometry, but endemic elsewhere: Algorithms for the real RAM can be transferred to the integer RAM (for integer restrictions of the problem) with no loss of efficiency. A canonical example is the claim “Gaussian elimination runs in $O(n^3)$ time.” In fact, careless elimination orders can produce integers with exponentially many bits. Even worse, but still unfortunately common: Algorithms for the real RAM with floor function can be transferred to the integer RAM with no loss of efficiency. In fact, a real-RAM+floor can solve any problem in PSPACE or in #P in a polynomial number of steps.
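The Gaussian elimination blow-up is easy to witness. Below is an illustrative Python sketch (a generic random integer matrix, not any particular worst case) of naive fraction-free elimination — row_i <- a_kk * row_i - a_ik * row_k, never dividing out common factors — whose entry bit-lengths roughly double at every elimination step:

```python
import random

random.seed(0)
n = 10
A = [[random.randint(1, 9) for _ in range(n)] for _ in range(n)]

max_bits = 0
for k in range(n - 1):
    if A[k][k] == 0:                       # cheap pivot fallback for the demo
        for r in range(k + 1, n):
            if A[r][k] != 0:
                A[k], A[r] = A[r], A[k]
                break
    for i in range(k + 1, n):
        a_ik, a_kk = A[i][k], A[k][k]      # snapshot before updating the row
        for j in range(n):
            A[i][j] = a_kk * A[i][j] - a_ik * A[k][j]
            max_bits = max(max_bits, abs(A[i][j]).bit_length())

print(max_bits)   # far more bits than the 4-bit input entries
```

On a real RAM each arithmetic step is one unit; on an integer RAM those giant operands have to be paid for, which is exactly the false belief above.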
{ "domain": "cstheory.stackexchange", "id": 5358, "tags": "big-list, examples" }
State and explain whether the exchange particle is a $W^{+}$ , $W^{-}$ or $Z^{0}$
Question: State and explain whether the exchange particle is a $W^{+}$, $W^{-}$ or $Z^{0}$. I think it is $W^{-}$ but I can't explain it in detail; please help me. Answer: Consider charge. It must be conserved from our $Y$ particle to the $\bar{u}d$. Charges: $$\bar{u} = -\frac{2}{3}$$ $$d = -\frac{1}{3}$$ (antiparticles have the opposite-sign charge). So now we have an end state of $-\frac{2}{3}-\frac{1}{3}=-1$. Which of your potential particles ($W^{+},W^{-},Z^{0}$) does this agree with?
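If you want to double-check the bookkeeping, the same sum works out as exact fractions (a trivial sketch using Python's Fraction):

```python
from fractions import Fraction

q_ubar = Fraction(-2, 3)   # anti-up: opposite sign of the up quark's +2/3
q_d = Fraction(-1, 3)      # down quark

total = q_ubar + q_d
print(total)               # -1: the final state carries charge -1
```

Charge conservation then forces the exchange particle to carry that same -1.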
{ "domain": "physics.stackexchange", "id": 38815, "tags": "homework-and-exercises, particle-physics" }
What are the gates used to implement Shor's algorithm?
Question: I understand theoretically how Shor's algorithm works, but I don't know what specific gates are used (or would be used) to implement it. What would the quantum circuit look like? Answer: There are several correct answers to this question because there are several candidate sets available for use as a universal set of quantum operators, and because there isn't an agreed-upon best implementation for the primitives needed by Shor's algorithm. Different circuits will trade off circuit depth for ancillary qubits and vice versa. Ultimately, the exact implementation is likely to depend on the architecture of the quantum computer. With that in mind, you can probably find a version of the details you're looking for in Fast Modular Exponentiation Architecture for Shor's Factoring Algorithm by Pavlidis and Gizopoulos. The exact circuit layout is lengthy and complex and probably too much for an answer here. The paper has lots of diagrams though, and a quick scan should provide you with a rough feel for the circuit construction. Unsurprisingly, the circuit makes heavy use of Hadamard gates and controlled phase-shift operators, both to implement the inverse QFT and the modular exponentiation circuit.
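For orientation on why modular exponentiation is the circuit that matters, here is the classical skeleton around the quantum subroutine, sketched in Python with the textbook N = 15, a = 7 example (the brute-force period search below is a stand-in for what the quantum circuit computes):

```python
from math import gcd

N, a = 15, 7

# The quantum part's job: find the period r of a^x mod N.
# Here we just brute-force it classically for illustration.
r = next(x for x in range(1, N) if pow(a, x, N) == 1)

# Classical post-processing: factors fall out via gcd (r must be even here).
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(r, p, q)   # 4 3 5 -- period 4, factors 3 and 5
```

Everything quantum in Shor's algorithm exists to compute that period r efficiently, which is why the modular exponentiation block dominates the gate count.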
{ "domain": "cs.stackexchange", "id": 8507, "tags": "algorithms, quantum-computing, circuits" }
QGtkStyle could not resolve GTK
Question: I have installed Orange 3 in Ubuntu 18.04 using Anaconda. It runs just fine, but the menus appear blank. I obtain the following error when I execute it: QGtkStyle could not resolve GTK. Make sure you have installed the proper libraries. I have been trying to sort it out for days without success. Any idea of how to fix this? Answer: A quick fix has been published at https://stackoverflow.com/a/50583925/3866828 It works by adding export QT_STYLE_OVERRIDE=gtk2 at the end of the ~/.bashrc file. Then it's a matter of executing source ~/.bashrc to see the content of the menus the next time that Orange 3 is loaded.
{ "domain": "datascience.stackexchange", "id": 3178, "tags": "linux" }
C++ ThreadGroup Implementation
Question: Edit: An improved code based on feedback is available here. As a kind of sequel to the previous question, here is an improved version( with clarified naming ). The Idea is the same: An integer threads_ready is increased to threads.size() until all threads are finished with the payload, and then back to 0 when all threads are ready to execute again. This version has no busy-waiting, and it's more generic because of templates. Is there any possibility to optimize more? I would think that using atomic for the state is overkill, as mostly it is modified under a mutex, but I couldn't get the program to work without using atomic. I have tried to re-implement this using variable template arguments, but I failed. I would think it is mainly because the argument for running is provided through pointers. Would using lambdas be a better solution here? ( Mainly regarding portability, but also in performance ) #include <iostream> #include <functional> #include <vector> #include <thread> #include <mutex> #include <iomanip> #include <cassert> #include <numeric> #include <atomic> #include <condition_variable> #include <chrono> using std::atomic; using std::vector; using std::function; using std::thread; using std::mutex; using std::unique_lock; using std::lock_guard; using std::condition_variable; template<typename T> class ThreadGroup{ public: ThreadGroup(int number_of_threads, function<void(T&, int, int)> function) : target_buffer(nullptr) , worker_function(function) , threads() , threads_ready(0) , state(IDLE_VALUE) , state_mutex() , synchroniser() { for(int i = 0; i < number_of_threads; ++i) threads.push_back(thread(&ThreadGroup::worker, this, i)); } ~ThreadGroup(){ { /* Signal to the worker threads that the show is over */ lock_guard<mutex> my_lock(state_mutex); state.store(END_VALUE); } while(0 < threads.size()){ if(threads.back().joinable()) threads.back().join(); threads.pop_back(); } } void start_and_block(T& buffer){ { /* initialize, start.. 
*/ unique_lock<mutex> my_lock(state_mutex); target_buffer = &buffer; state.store(START_VALUE); } { /* wait until the work is done */ unique_lock<mutex> my_lock(state_mutex); if(threads.size() > threads_ready)synchroniser.wait(my_lock,[=](){ return (threads.size() <= threads_ready); }); } { /* set appropriate state */ unique_lock<mutex> my_lock(state_mutex); state.store(IDLE_VALUE); } synchroniser.notify_all(); /* Notify worker threads that the main thread is finished */ { /* wait until all threads are notified */ unique_lock<mutex> my_lock(state_mutex); if(0 < threads_ready)synchroniser.wait(my_lock,[=](){ return (0 >= threads_ready); /* All threads are notified once the @threads_ready variable is zero again */ }); } } private: static const int IDLE_VALUE = 0; static const int START_VALUE = 1; static const int END_VALUE = 2; T* target_buffer; function<void(T&, int, int)> worker_function; /* buffer, start, length */ vector<thread> threads; int threads_ready; atomic<int> state; mutex state_mutex; condition_variable synchroniser; void worker(int thread_index){ while(END_VALUE != state.load()){ /* Until the pool is stopped */ while(START_VALUE == state.load()){ /* Wait until start signal is provided */ worker_function( (*target_buffer), (thread_index * (target_buffer->size()/threads.size())), (target_buffer->size()/threads.size()) );/* do the work */ { /* signal that work is done! */ unique_lock<mutex> my_lock(state_mutex); ++threads_ready; /* increase "done counter" */ } synchroniser.notify_all(); /* Notify main thread that this thread is finsished */ { /* Wait until main thread is closing the iteration */ unique_lock<mutex> my_lock(state_mutex); if(START_VALUE == state.load())synchroniser.wait(my_lock,[=](){ return (START_VALUE != state.load()); }); } { /* signal that this thread is notified! 
*/ unique_lock<mutex> my_lock(state_mutex); --threads_ready; /* decrease the "done counter" to do so */ } synchroniser.notify_all(); /* Notify main thread that this thread is finsished */ } /*while(START_VALUE == state)*/ } /*while(END_VALUE != state)*/ } }; int main(int argc, char** agrs){ int result = 0; mutex cout_mutex; ThreadGroup<vector<double>> pool(5,[&](vector<double>& buffer, int start, int length){ double sum = 0; for(int i = 0; i < length; ++i){ sum += buffer[i]; } lock_guard<mutex> my_lock(cout_mutex); std::cout << "Partial sum: " << std::setw(4) << sum << " \t\t |" << "\r"; result += sum; //std::this_thread::sleep_for(std::chrono::milliseconds(200)); //to test with some payload }); result = 0; for(int i = 0; i< 10000; ++i){ vector<double> test_buffer(500, rand()%10); result = 0; pool.start_and_block(test_buffer); { lock_guard<mutex> my_lock(cout_mutex); std::cout << "result["<< i << "]: " << std::setw(4) << result << "\t\t " << std::endl; } assert(std::accumulate(test_buffer.begin(),test_buffer.end(), 0) == result); } std::cout << "All assertions passed! "<< std::endl; return 0; } Answer: Fix compiler warnings Enable compiler warnings and try to fix all of them. There are several unused parameters. You can avoid the warnings by not giving the parameters a name. For example, you can omit the name start from the lambda in main(): [&](std::vector<double>& buffer, int, int length){...} As for the parameters of main() itself, you can use the same trick, or use the other allowed form of main() that takes no parameters: int main() { The remaining warnings I get are about comparisons between signed and unsigned integers. Use std::size_t as the type of threads_ready. Unconditionally call wait() There is no need to check the condition before calling wait() on a condition variable if you are passing a predicate. The first thing wait() will do is to execute the predicate to see if it needs to wait at all. 
Prefer default initializers over initializer lists The constructor of ThreadGroup has a large initializer list. Some of them are redundant, and some of them could be replaced by default initializers. The only things that you should have to put in initializer lists normally are things that depend on the parameters passed to the constructor. So: class ThreadGroup { public: ThreadGroup(std::size_t number_of_threads, std::function<void(T&, int, int)> function) : worker_function(function) { for(std::size_t i = 0; i < number_of_threads; ++i) threads.emplace_back(&ThreadGroup::worker, this, i); } ... private: T* target_buffer = nullptr; std::function<void(T&, int, int)> worker_function; std::vector<std::thread> threads; std::size_t threads_ready = 0; std::atomic<int> state = {IDLE_VALUE}; std::mutex state_mutex; std::condition_variable synchroniser; ... }; Note that we have to use aggregate initialization for std::atomic<int> here, otherwise the deleted copy constructor would be selected. Don't busy-loop You start worker threads in the constructor, but state is initialized to IDLE_VALUE. This causes worker() to go into a busy-loop until state changes. Use a condition variable so the worker thread can wait() for work to arrive. Unconditionally join() threads I don't see why you are checking if threads are joinable or not. Once they are added to threads, they are always in a joinable state. 
So I would change the destructor to: ~ThreadGroup() { { std::lock_guard<mutex> my_lock(state_mutex); state.store(END_VALUE); // signal a condition variable to ensure idle threads get woken up } for (auto &thread: threads) thread.join(); } Avoid using namespace std or using std:: in headers While it is usually safe to use using namespace std in a source file, and it's even better to just bring individual elements from std:: into the global namespace as you did, as soon as you would move the definition of class ThreadGroup to a header file, you should not do this anymore, as that will result in unexpected behavior for source files that don't want to use that but do #include your header file. It's not that much extra work to type std::, and as a bonus you avoid possible confusion when you have things like function<...> function, where the variable starts shadowing the type.
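The "don't busy-loop" advice translates to any language with condition variables. Here is the same pattern sketched in Python's threading module (illustrative only, not the C++ code itself): the worker sleeps in wait_for() until the state changes, instead of spinning on an atomic:

```python
import threading

state = "IDLE"
state_changed = threading.Condition()   # owns its own lock
results = []

def worker():
    with state_changed:
        # blocks without consuming CPU until the predicate becomes true
        state_changed.wait_for(lambda: state != "IDLE")
        if state == "START":
            results.append("did work")

t = threading.Thread(target=worker)
t.start()

with state_changed:
    state = "START"                     # change state under the lock...
    state_changed.notify_all()          # ...then wake any sleeping workers
t.join()
print(results)   # ['did work']
```

Whether the worker starts before or after the state change, wait_for() checks the predicate first, so there is no lost-wakeup race and no busy-wait.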
{ "domain": "codereview.stackexchange", "id": 41600, "tags": "c++, multithreading" }
Yet another CLI Hangman game
Question: ...with properly packaged/type-hinted code, automated testing, and no dependencies. Do note that this package requires Python 3.12 or later. Only pyproject.toml and non-test .py files are included here; still, there are about 1400 LOC. For non-code files (and/or a better code-reading experience), view the project on GitHub. Please ignore my non-PEP8/PEP257-compliant code style. Other than that, suggestions regarding design patterns, naming conventions, APIs, etc. are all welcomed. The project structure . ├── LICENSE ├── README.md ├── pyproject.toml ├── src │ ├── _hangman │ │ └── runner.py │ └── hangman │ ├── __init__.py │ ├── _lax_enum.py │ ├── _static_reader.py │ ├── assets │ │ ├── gallows.txt │ │ ├── head.txt │ │ ├── instructions.txt │ │ ├── left_arm.txt │ │ ├── left_leg.txt │ │ ├── right_arm.txt │ │ ├── right_leg.txt │ │ ├── title.txt │ │ ├── trunk.txt │ │ └── you_lost.txt │ ├── canvas.py │ ├── choice_list.py │ ├── conversation.py │ ├── game.py │ ├── py.typed │ ├── word.py │ ├── word_list.py │ └── words │ ├── easy.txt │ ├── hard.txt │ ├── medium.txt │ └── unix.txt ├── tests └── tox.ini pyproject.toml [project] name = "hangman" version = "0.2.1" description = "A CLI hangman game" readme = "README.md" requires-python = ">=3.12" license = { text = "Unlicense" } authors = [ { name = "InSyncWithFoo", email = "insyncwithfoo@gmail.com" } ] classifiers = [ "License :: OSI Approved :: The Unlicense (Unlicense)", "Topic :: Games/Entertainment" ] [project.optional-dependencies] dev = [ "build~=1.0.3", "mypy~=1.6.0", "pytest~=7.4.2", "pytest-cov~=4.1.0", "setuptools~=68.2.2", "tox~=4.11.3" ] [project.urls] Homepage = "https://github.com/InSyncWithFoo/hangman" [project.scripts] hangman = "_hangman.runner:main" [build-system] requires = ["setuptools>=68.0.0", "wheel"] build-backend = "setuptools.build_meta" [tool.setuptools] package-data = { "*" = ["*.txt"] } src/_hangman/runner.py from hangman import Game def main(): Game().start() if __name__ == '__main__': main() 
src/hangman/init.py from .canvas import Canvas, Layer from .choice_list import ChoiceList, Choices from .conversation import Conversation from .game import Game from .word_list import Level, WordList __all__ = [ 'Game', 'Canvas', 'Choices', 'ChoiceList', 'Conversation', 'Layer', 'Level', 'WordList' ] src/hangman/_lax_enum.py import re from collections.abc import Generator from re import Pattern from typing import ClassVar class LaxEnum(type): ''' Despite its name, a LaxEnum is no different from a normal class except for that it yields every item that is not a dunder when being iterated over. ''' _dunder: ClassVar[Pattern[str]] = re.compile(r'__.+__') def __iter__(cls) -> Generator[object, None, None]: for member_name, member in cls.__dict__.items(): if not cls._dunder.fullmatch(member_name): yield member src/hangman/_static_reader.py from functools import partial from os import PathLike from pathlib import Path package_directory = Path(__file__).resolve().parent assets_directory = package_directory / 'assets' word_list_directory = package_directory / 'words' def _read(base_directory: Path, filename: PathLike[str]) -> str: ''' Read a file and return its contents. :param base_directory: The base directory to look up the file :param filename: The name of the file :return: The contents of the file ''' with open(base_directory / filename) as file: return file.read() get_asset = partial(_read, assets_directory) get_word_list = partial(_read, word_list_directory) src/hangman/canvas.py from __future__ import annotations import dataclasses from collections.abc import Collection, Generator, Iterator, Sequence from dataclasses import dataclass from functools import partial from itertools import batched from typing import overload, Self from ._lax_enum import LaxEnum from ._static_reader import get_asset @dataclass(frozen = True) class LayerCell: ''' Represents a layer cell. ``value`` must be a single character. 
''' row: int column: int value: str def __post_init__(self) -> None: if len(self.value) != 1: raise ValueError('"value" must be a single character') @property def is_transparent(self) -> bool: ''' Whether the value contains only whitespaces. ''' return self.value.strip() == '' _GridOfStrings = Sequence[Sequence[str]] class Layer: r''' A rectangle grid of :class:`LayerCell`\ s. ''' __slots__ = ('_cells', '_height', '_width') _cells: list[LayerCell] _height: int _width: int def __new__(cls, argument: _GridOfStrings) -> 'Self': # PY-62301 ''' Construct a :class:`Layer`. :param argument: A grid of strings. Cannot be a string itself. ''' if isinstance(argument, str): raise TypeError('"rows" must not be a string') instance = super().__new__(cls) grid = [list(row) for row in argument] first_row = grid[0] same_width = all(len(row) == len(first_row) for row in grid) if not same_width: raise ValueError('All rows must have the same width') instance._height = len(grid) instance._width = len(first_row) instance._cells = [] for row_index, row in enumerate(grid): for column_index, cell_value in enumerate(row): cell = LayerCell(row_index, column_index, cell_value) instance._cells.append(cell) return instance def __repr__(self) -> str: horizontal_frame = f'+{'-' * self._width}+' joined_rows = ( f'|{''.join(cell.value for cell in row)}|' for row in self.rows() ) return '\n'.join([ horizontal_frame, *joined_rows, horizontal_frame ]) @overload def __getitem__(self, item: int) -> LayerCell: ... @overload def __getitem__(self, item: tuple[int, int]) -> LayerCell: ... 
def __getitem__(self, item: int | tuple[int, int]) -> LayerCell: if isinstance(item, int): return self._cells[item] if isinstance(item, tuple) and len(item) == 2: row_index, column_index = item if row_index >= self._height or column_index >= self._width: raise IndexError('Row or column index is out of bounds') index = self._width * row_index + column_index return self[index] raise TypeError(f'Invalid index') def __iter__(self) -> Iterator[LayerCell]: yield from self._cells def __len__(self) -> int: return len(self._cells) def __eq__(self, other: object) -> bool: if not isinstance(other, Layer): return NotImplemented return self._cells == other._cells def __add__(self, other: Layer) -> Layer: if not isinstance(other, Layer): return NotImplemented copied = self.copy() copied += other return copied def __iadd__(self, other: Layer) -> Self: if not isinstance(other, Layer): return NotImplemented if self.height != other.height or self.width != other.width: raise ValueError( 'To be added, two layers must have the same height and width' ) copy_cell = dataclasses.replace for index, other_cell in enumerate(other): if not other_cell.is_transparent: self._cells[index] = copy_cell(other_cell) return self @property def height(self) -> int: ''' The height of the layer. ''' return self._height @property def width(self) -> int: ''' The width of the layer. ''' return self._width @classmethod def from_text(cls, text: str, width: int | None = None) -> Self: ''' Construct a :class:`Layer` from a piece of text. :param text: Any string, with one or more lines. :param width: \ The width of the layer. :return: A new :class:`Layer`. ''' if width is None: width = -1 lines = text.splitlines() longest_line_length = max(len(line) for line in lines) width = max([longest_line_length, width]) return cls([line.ljust(width) for line in text.splitlines()]) @classmethod def from_sequence(cls, cells: Sequence[str], width: int) -> Self: ''' Construct a :class:`Layer` from a sequence of strings. 
:param cells: A :class:`Sequence` of strings. :param width: \ The number of cells per chunk. \ The last chunk is padded with spaces. :return: A new :class:`Layer`. ''' rows = [] for row in batched(cells, width): if len(row) < width: padder = ' ' * (width - len(row)) rows.append([*row, *padder]) else: rows.append(list(row)) return cls(rows) def rows(self) -> Generator[tuple[LayerCell, ...], None, None]: r''' Yield a tuple of :class:`LayerCell`\ s for each row. ''' yield from batched(self._cells, self._width) def columns(self) -> Generator[tuple[LayerCell, ...], None, None]: r''' Yield a tuple of :class:`LayerCell`\ s for each column. ''' for column in zip(*self.rows()): yield column def cells(self) -> Generator[LayerCell, None, None]: ''' Synonym of :meth:`__iter__`. ''' yield from self def copy(self) -> Self: ''' Construct a new :class:`Layer` from this one. ''' string_cells = [cell.value for cell in self] return self.__class__.from_sequence(string_cells, self._width) class Canvas(Collection[Layer]): r''' A collection of :class:`Layers`. Its string representation is that of all its layers merged. ''' __slots__ = ('_height', '_width', '_layers') _height: int _width: int _layers: list[Layer] def __init__(self, height: int, width: int) -> None: ''' Construct a :class:`Canvas` of given height and width. :param height: The height of the canvas. :param width: The width of the canvas. 
''' self._height = height self._width = width self._layers = [] def __str__(self) -> str: if not self._layers: return '\n'.join([' ' * self._width] * self._height) first, *others = self._layers flattened = sum(others, start = first) joined_rows = [ ''.join(cell.value for cell in row) for row in flattened.rows() ] return '\n'.join(joined_rows) def __contains__(self, layer: object) -> bool: return layer in self._layers def __iter__(self) -> Iterator[Layer]: return iter(self._layers) def __len__(self) -> int: return len(self._layers) @property def height(self) -> int: ''' The height of the canvas. ''' return self._height @property def width(self) -> int: ''' The width of the canvas. ''' return self._width @property def layers(self) -> Generator[Layer, None, None]: ''' Yield every layer the canvas contains. ''' for layer in self._layers: yield layer @classmethod def from_layer(cls, layer: Layer) -> Self: ''' Construct a :class:`Canvas` from a layer using its height and width. :param layer: A :class:`Layer`. :return: A new :class:`Canvas`. ''' canvas = cls(height = layer.height, width = layer.width) canvas.add_layers(layer) return canvas def _fits_layer(self, layer: Layer) -> bool: ''' Whether the layer has same height and width as the canvas. :param layer: A :class:`Layer`. :return: ``True`` if the layer fits, ``False`` otherwise. ''' return self._height == layer.height and self._width == layer.width def add_layers(self, *layers: Layer) -> None: r''' Add one or more :class:`Layer`\ s to the canvas. :param layers: One or more :class:`Layer`\ s. ''' if not all(self._fits_layer(layer) for layer in layers): raise ValueError( 'Layers must have same height and width as canvas' ) self._layers.extend(layers) _make_80_wide_layer = partial(Layer.from_text, width = 80) class Component(metaclass = LaxEnum): r''' Pre-built :class:`Layer`\ s to be used in the game. 
''' GALLOWS = _make_80_wide_layer(get_asset('gallows.txt')) HEAD = _make_80_wide_layer(get_asset('head.txt')) TRUNK = _make_80_wide_layer(get_asset('trunk.txt')) LEFT_ARM = _make_80_wide_layer(get_asset('left_arm.txt')) RIGHT_ARM = _make_80_wide_layer(get_asset('right_arm.txt')) LEFT_LEG = _make_80_wide_layer(get_asset('left_leg.txt')) RIGHT_LEG = _make_80_wide_layer(get_asset('right_leg.txt')) YOU_LOST = _make_80_wide_layer(get_asset('you_lost.txt')) src/hangman/choice_list.py from __future__ import annotations from collections.abc import Generator from dataclasses import dataclass from typing import Mapping, NamedTuple, Self from ._lax_enum import LaxEnum from .word_list import Level @dataclass(frozen = True, slots = True) class Choice: ''' Represents a valid choice of a :class:`ChoiceList`. ''' shortcut: str description: str aliases: frozenset[str] value: str | None = None def __str__(self) -> str: return f'[{self.shortcut}] {self.description}' _ChoiceDescriptor = tuple[str, set[str], str | None] class ChoiceDescriptor(NamedTuple): ''' Syntactic sugar for a bare tuple containing three elements: ``description``, ``aliases``, and ``value``. ''' description: str aliases: set[str] = set() value: str | None = None class _LengthList(list[int]): ''' A list of integers which keeps track of the sum. Meant for internal use only. ''' total: int def __new__(cls) -> 'Self': # PY-62301 instance = super().__new__(cls) instance.total = 0 return instance def append(self, value: int) -> None: ''' Append an integer value to the end of the list and add it the total. :param value: A length. 
''' self.total += value super().append(value) class ChoiceList: __slots__ = ('_shortcut_map', '_alias_map', 'max_width', 'separator') _shortcut_map: dict[str, Choice] _alias_map: dict[str, Choice] separator: str max_width: int def __new__( cls, /, argument: Mapping[str, _ChoiceDescriptor] | None = None, *, separator: str = ' ', max_width: int = 80, **kwargs: _ChoiceDescriptor ) -> 'Self': # PY-62301 r''' Construct a list of valid choices whose string representation looks like the following:: [A] Foobar bazqux [BAR] Lorem ipsum [C] Consectetur adipiscing elit All shorcuts and aliases are case-insensitive and mapped to their corresponding :class:`Choice`. Shortcuts are uppercased in the string representation. A :class:`Choice` can be chosen by referencing either ``shortcut`` or any of the ``aliases``. ``argument`` and ``kwargs`` are shortcut-to-:class:`ChoiceDescriptor` maps. Each ``shortcut`` *should* be a single character, whereas the ``description``\ s need to be human-readable. ``value`` is the value the choice represents, defaults to ``None``. :param argument: A :class:``collections.abc.Mapping``. :param separator: \ A string to be used as the separator in the string representation. :param max_width: \ The maximum width of the string representation, in characters. :param kwargs: \ Other shortcut-to-descriptor arguments. 
''' if isinstance(argument, Mapping): kwargs = {**argument, **kwargs} instance = super().__new__(cls) instance.separator = separator instance.max_width = max_width shortcut_map = instance._shortcut_map = {} alias_map = instance._alias_map = {} for shortcut, (description, aliases, value) in kwargs.items(): shortcut = shortcut.upper() uppercased_aliases = frozenset(alias.upper() for alias in aliases) choice = Choice( shortcut, description, uppercased_aliases, value ) shortcut_map[shortcut] = choice for alias in uppercased_aliases: alias_map[alias] = choice return instance def __contains__(self, item: object) -> bool: if not isinstance(item, str): return False item = item.upper() return item in self._shortcut_map or item in self._alias_map def __getitem__(self, item: str) -> Choice: item = item.upper() if item in self._shortcut_map: return self._shortcut_map[item] return self._alias_map[item] def __str__(self) -> str: output: list[list[str]] = [[]] lengths: list[_LengthList] = [_LengthList()] for choice in self: choice_stringified = str(choice) choice_length = len(choice_stringified) total_choice_length = lengths[-1].total + choice_length total_separator_length = len(self.separator) * len(lengths[-1]) new_last_row_length = total_choice_length + total_separator_length if new_last_row_length > self.max_width: output.append([]) lengths.append(_LengthList()) output[-1].append(choice_stringified) lengths[-1].append(choice_length) return '\n'.join(self.separator.join(row) for row in output) def __repr__(self) -> str: return ( f'{self.__class__.__name__}(' + ', '.join(repr(choice) for choice in self) + f')' ) def __iter__(self) -> Generator[Choice, None, None]: yield from self._shortcut_map.values() class Choices(metaclass = LaxEnum): ''' Pre-built instances of :class:`ChoicesList`. 
''' CONFIRMATION = ChoiceList( Y = ChoiceDescriptor( description = 'Yes', aliases = {'Yes'}, value = 'YES' ), N = ChoiceDescriptor( description = 'No', aliases = {'No'}, value = 'NO' ) ) LEVEL = ChoiceList( E = ChoiceDescriptor( description = 'Easy (22.5k words)', aliases = {'EASY'}, value = Level.EASY ), M = ChoiceDescriptor( description = 'Medium (74.5k words)', aliases = {'MEDIUM'}, value = Level.MEDIUM ), H = ChoiceDescriptor( description = 'Hard (168k words)', aliases = {'HARD'}, value = Level.HARD ), U = ChoiceDescriptor( description = 'Unix (205k words)', aliases = {'UNIX'}, value = Level.UNIX ) ) src/hangman/conversation.py from __future__ import annotations from collections.abc import Callable, Generator from dataclasses import dataclass from typing import ClassVar, Literal from .choice_list import ChoiceList def _response_is_valid_choice( response: str, choices: ChoiceList | None ) -> bool: ''' Checks if the response is a valid choice. ''' assert choices is not None return response in choices def _no_op(_response: str, _choices: ChoiceList | None) -> Literal[True]: ''' A validator that always returns ``True``. ''' return True @dataclass(frozen = True, slots = True, eq = False) class Validator: ''' Callable wrapper for a validator function. The second argument is the warning message to be output when this validator fails. ''' predicate: ResponseValidator warning: str def __call__(self, response: str, choices: ChoiceList | None) -> bool: return self.predicate(response, choices) InputGetter = Callable[[str], str] OutputDisplayer = Callable[[str], None] ResponseValidator = Callable[[str, ChoiceList | None], bool] OneOrManyValidators = Validator | list[Validator] _FailingValidators = Generator[Validator, None, None] class Conversation: ''' Protocol for input-output operations. ''' _INVALID_RESPONSE: ClassVar[str] = \ 'Invalid response. Please try again.' _INVALID_CHOICE: ClassVar[str] = \ 'Invalid choice. Please try again.' 
__slots__ = ('_input', '_output') _input: InputGetter _output: OutputDisplayer def __init__(self, ask: InputGetter, answer: OutputDisplayer) -> None: ''' Construct a :class:`Conversation`. :param ask: A ``input``-like callable to be called for inputs. :param answer: A ``print``-like callable to be called to output. ''' self._input = ask self._output = answer def _get_response(self, prompt: str) -> str: ''' Get a raw response. :param prompt: The prompt to be used. :return: The response, uppercased. ''' return self._input(prompt).upper() def _ask( self, prompt: str, /, choices: ChoiceList | None = None, *, validators: list[Validator] ) -> str: ''' Get a response, then validate it against the validators. Repeat this process until the response passes all validations. ''' failing_validators: Callable[[], _FailingValidators] = lambda: ( validator for validator in validators if not validator(response, choices) ) find_first_failing_validator: Callable[[], Validator | None] = \ lambda: next(failing_validators(), None) response = self._get_response(prompt) while failing_validator := find_first_failing_validator(): self.answer(failing_validator.warning) response = self._get_response(prompt) return response def ask( self, question: str, /, choices: ChoiceList | None = None, *, until: OneOrManyValidators | None = None ) -> str: r''' Thin wrapper around :meth:`_ask`. If ``choices`` is given, it will be included in the prompt text. If ``until`` is ``None``, a default validator will be used to check if the response is a valid choice. If both ``choices`` and ``until`` are ``None``, no validators will be applied. :param question: The question to ask. :param choices: The choices to choose from. Optional. :param until: \ A :class:`Callable`, a :class:`Validator` or a list of :class:`Validator`\ s. Optional. :return: The response of the user. 
''' prompt = f'{question}\n' prompt += f'{choices}\n' if choices is not None else '' if choices is None and until is None: validators = [Validator(_no_op, '')] elif until is None: validators = [ Validator(_response_is_valid_choice, self._INVALID_CHOICE) ] else: validators = [until] if callable(until) else until return self._ask( prompt, choices, validators = validators ) def answer(self, answer: str) -> None: ''' Outputs the caller's message. :param answer: The message to output. ''' self._output(answer) src/hangman/game.py from __future__ import annotations from typing import ClassVar from ._static_reader import get_asset from .canvas import Canvas, Component, Layer from .choice_list import ChoiceList, Choices from .conversation import ( Conversation, InputGetter, OneOrManyValidators, OutputDisplayer, Validator ) from .word import Word from .word_list import Level, WordList def _response_is_ascii_letter(character: str, _: ChoiceList | None) -> bool: return len(character) == 1 and 'A' <= character <= 'Z' class Game: ''' The Hangman game. ''' TITLE: ClassVar[str] = get_asset('title.txt') INSTRUCTIONS: ClassVar[str] = get_asset('instructions.txt') COEFFICENTS: ClassVar[dict[Level, int]] = { Level.EASY: 1, Level.MEDIUM: 2, Level.HARD: 3, Level.UNIX: 4 } _MAX_DISPLAY_WIDTH: ClassVar[int] = 80 __slots__ = ( '_conversation', '_used_words', '_points', '_reward', '_penalty', '_ended' ) _conversation: Conversation _used_words: set[str] _points: int _reward: int _penalty: int _ended: bool def __init__( self, input_getter: InputGetter = input, output_displayer: OutputDisplayer = print, reward: int = 2, penalty: int = -1 ) -> None: ''' Initialize a new game. See :class:`Conversation` for more information on ``input_getter`` and ``output_displayer``. :param input_getter: \ An ``input``-like function. Defaults to ``input``. :param output_displayer: \ A ``print``-like function. Defaults to ``print``. 
:param reward: \ The number of points to be added to the total on each correct guess. :param penalty: \ The number of points to be subtracted from the total on each incorrect guess. ''' self._used_words = set() self._points = 0 self._reward, self._penalty = reward, penalty self._ended = False self._conversation = Conversation( ask = input_getter, answer = output_displayer ) @property def points(self) -> int: ''' The total points earned in this game. ''' return self._points @points.setter def points(self, value: int) -> None: ''' Called on operations such as the following: game.points += 1 The number of points cannot be negative. ''' self._points = max(value, 0) def _start(self) -> None: ''' Output the title, the instructions, then start the first :class:`GameRound`. If that round is won and the user wants to continue, start another. Otherwise, if the game has not ended (user did not lose in the latest round), end the game. ''' self._output_game_title() self._output_game_instructions() self._start_round() while not self._ended and self._prompt_for_continue_confirmation(): self._start_round() if not self._ended: self.end() def _output_game_title(self) -> None: ''' Output the title, which is just some fancy ASCII art. ''' self.output(self.TITLE) def _output_game_instructions(self) -> None: ''' Output the instructions. ''' self.output(self.INSTRUCTIONS) def _prompt_for_continue_confirmation(self) -> bool: ''' Ask for a response until it is a yes/no answer. :return: Whether the user wants to continue. ''' answer = self.input('Continue?', Choices.CONFIRMATION) return answer in ('Y', 'YES') def _start_round(self) -> None: ''' Ask for a level, construct a :class:`WordList` and a coefficient from that level, then get a random word that has not been used. Finally, initialize a :class:`GameRound` by passing the word and the coefficent to it. 
''' level = self._prompt_for_level() word_list = WordList.from_level(level) coefficient = self.COEFFICENTS[level] word = word_list.get_random_word() while word in self._used_words: word = word_list.get_random_word() self._used_words.add(word) game_round = self._initialize_round(word, coefficient) game_round.start() def _initialize_round(self, word: str, coefficient: int) -> GameRound: ''' Pass ``word`` and ``coefficient`` as arguments to :class:`GameRound`. ''' return GameRound(self, word, coefficient) def _prompt_for_level(self) -> Level: ''' Ask for a response until it is a valid level. :return: The corresponding :class:`Level`. ''' choices = Choices.LEVEL response = self._conversation.ask('Choose a level:', choices) value = choices[response].value assert value is not None return Level(value) def start(self) -> None: ''' Start the game. If a :class:`KeyboardInterrupt` is caught, call :meth:`end`. ''' try: self._start() except KeyboardInterrupt: self.end() def end(self) -> None: ''' Switch a boolean flag and call :meth:`output_current_points`. ''' self._ended = True self.output('Game over.'.center(self._MAX_DISPLAY_WIDTH, '-')) self.output_current_points() def input( self, prompt: str, choices: ChoiceList | None = None, validators: OneOrManyValidators | None = None ) -> str: ''' Shorthand for ``self.conversation.ask``. ''' return self._conversation.ask(prompt, choices, until = validators) def output(self, answer: str) -> None: ''' Shorthand for ``self.conversation.answer``. ''' return self._conversation.answer(answer) def output_current_points(self) -> None: ''' Output the total number of points earned. ''' self.output(f'Points: {self._points}') def reward_correct_guess(self, count: int, coefficient: int) -> None: ''' Add ``reward`` multiplied by ``coefficient`` and ``count`` to the number of points. 
''' self.points += self._reward * count * coefficient def penalize_incorrect_guess(self, coefficient: int) -> None: ''' Substract ``penalty`` multiplied by ``coefficient`` from the number of points. ''' self.points += self._penalty * coefficient class GameRound: ''' A game round. The game ends when a game round ends with a loss. ''' _INVALID_GUESS: ClassVar[str] = \ 'Invalid guess. Please input a letter.' _ALREADY_GUESSED: ClassVar[str] = \ 'You have already guessed this letter. Please try again.' __slots__ = ( '_game', '_canvas', '_layer_stack', '_word', '_coefficient', '_guesses' ) _game: Game _canvas: Canvas _layer_stack: list[Layer] _word: Word _coefficient: int _guesses: set[str] def __init__(self, game: Game, word: str, coefficient: int) -> None: ''' Initialize a game round. There are initially 6 layers in the stack. Each incorrect guess pops one from the stack and adds it to the canvas. When the stack reaches 0, the entire game is over. See :class:`Word` for relevant checking logic. :param game: The game this round belongs to. :param word: The word to guess in this round. :param coefficient: \ The coefficient corresponding to the level of this round. ''' self._game = game self._canvas = Canvas.from_layer(Component.GALLOWS) self._layer_stack = [ Component.HEAD, Component.TRUNK, Component.LEFT_ARM, Component.RIGHT_ARM, Component.LEFT_LEG, Component.RIGHT_LEG ] self._word = Word(word) self._coefficient = coefficient self._guesses = set() @property def lives_left(self) -> int: ''' The number of layers left in the stack. ''' return len(self._layer_stack) def _output_canvas(self) -> None: ''' Output the canvas with all components lost via incorrect guesses. ''' self._game.output(str(self._canvas)) def _output_current_word_state(self) -> None: ''' Output the word with unknown characters masked. ''' self._game.output(f'Word: {self._word.current_state}') def _output_word(self) -> None: ''' Output the word. Only called when the game is over. 
''' self._game.output(f'The word was "{self._word}".') def _start_turn(self) -> None: ''' Call :meth:`_output_canvas` and :meth:`_output_current_word_state`. Ask for a new guess, then check it against the word and handle the result accordingly. ''' self._output_canvas() self._output_current_word_state() guess = self._prompt_for_guess() count = self._word.count(guess) self._guesses.add(guess) if count == 0: self._handle_incorrect_guess() else: self._handle_correct_guess(guess, count) def _handle_incorrect_guess(self) -> None: ''' Output a notice, then call :meth:`Game.penalize_incorrect_guess` and :meth:`Game.output_current_points`. Also call :meth:`_minus_1_life`. If the number of lives left is 0, add :attr:`Component.YOU_LOST` to the canvas. ''' self._game.output('Incorrect guess.') self._game.penalize_incorrect_guess(self._coefficient) self._game.output_current_points() self._minus_1_life() if self.lives_left == 0: self._canvas.add_layers(Component.YOU_LOST) def _handle_correct_guess(self, guess: str, count: int) -> None: ''' Output a notice, then call :meth:`Game.reward_correct_guess` and :meth:`Game.output_current_points`. :param guess: The character guessed. :param count: The number of that character's appearances in the word. ''' if count == 1: self._game.output(f'There is {count} "{guess}"!') else: self._game.output(f'There are {count} "{guess}"s!') self._game.reward_correct_guess(count, self._coefficient) self._game.output_current_points() def _prompt_for_guess(self) -> str: ''' Ask for a new guess which must be an ASCII letter. :return: The guess. ''' validators = [ Validator(_response_is_ascii_letter, self._INVALID_GUESS), Validator(self._not_previously_guessed, self._ALREADY_GUESSED) ] return self._game.input('Your guess:', validators = validators) def _not_previously_guessed( self, response: str, _choices: ChoiceList | None ) -> bool: ''' Check if ``response`` is a previous guess. Meant to be called in :meth:`_prompt_for_guess`. 
:param response: The response to check. :return: Whether ``response`` is a previous guess. ''' return response not in self._guesses def _minus_1_life(self) -> None: ''' Pops a layer from the stack and add it to the canvas. ''' self._canvas.add_layers(self._layer_stack.pop(0)) def start(self) -> None: ''' While the word is not completely solved and there are still some lives left, start a turn. If there are no lives left (user lost the game), call :meth:`_output_canvas` and :meth:`_output_word`, then :meth:`Game.end`. Otherwise, :meth:`_output_current_word_state`. ''' while not self._word.all_clear and self.lives_left: self._start_turn() if not self.lives_left: self._output_canvas() self._output_word() self._game.end() else: self._output_current_word_state() src/hangman/word.py class Word: ''' A word being guessed. ''' __slots__ = ('value', '_character_indices', '_masked') value: str _masked: list[str] _character_indices: dict[str, list[int]] def __init__(self, value: str) -> None: self.value = value.upper() self._masked = ['_'] * len(value) self._character_indices = {} for index, character in enumerate(self.value): self._character_indices.setdefault(character, []).append(index) def __str__(self) -> str: return self.value def __contains__(self, item: str) -> bool: return item.upper() in self._character_indices @property def current_state(self) -> str: ''' The letters, space-separated; unguessed ones are replaced with underscores. ''' return ' '.join(self._masked) @property def all_clear(self) -> bool: ''' Whether all letters have been guessed correctly. ''' return all(char != '_' for char in self._masked) def count(self, guess: str) -> int: ''' Count the guess's appearances in the word and replace those with underscores in the current state. :param guess: A letter. :return: The number of its appearances. 
''' if guess not in self: return 0 indices = self._character_indices[guess] for index in indices: self._masked[index] = guess return len(indices) src/hangman/word_list.py import random from enum import StrEnum from os import PathLike from typing import ClassVar, final, Self from ._static_reader import word_list_directory class Level(StrEnum): EASY = 'EASY' MEDIUM = 'MEDIUM' HARD = 'HARD' UNIX = 'UNIX' @final class WordList: _instances: ClassVar[dict[str, Self]] = {} __slots__ = tuple(['_list']) _list: list[str] def __new__(cls, filename: str | PathLike[str]) -> 'Self': # PY-62301 filename = str(filename) if filename not in cls._instances: cls._instances[filename] = instance = super().__new__(cls) with open(filename) as file: instance._list = file.read().splitlines() return cls._instances[filename] def __len__(self) -> int: return len(self._list) def get_random_word(self) -> str: return random.choice(self._list) @classmethod def from_list(cls, words: list[str]) -> Self: instance = super().__new__(cls) instance._list = words return instance @classmethod def from_level(cls, level: str) -> Self: match level.upper(): case Level.EASY: filename = 'easy.txt' case Level.MEDIUM: filename = 'medium.txt' case Level.HARD: filename = 'hard.txt' case Level.UNIX: filename = 'unix.txt' case _: raise ValueError('No such level') return cls(word_list_directory / filename) Answer: Overall a well-organized and carefully-crafted project that I enjoyed reviewing. The main things that jumped out at me came from conversation.py, so with that in mind here are my points of feedback: ask In ask, the branching behavior based on how the optional parameters are set makes it harder for the reader to understand. The optional parameters also allow the client to use the method in a probably not-intended way, e.g. by passing in both a non-empty choices and non-empty until. Some cleaner alternatives: Explicitly create distinct methods, e.g. 
ask_choices(self, choices: ChoiceList, question: str) and ask_until(self, until: OneOrManyValidators, question: str). Use functools.singledispatchmethod to implement method overloading based on whether the client gives you a ChoiceList or OneOrManyValidators. As for the case where neither choices nor until is provided: since it's a case that's currently not used anywhere in the project, I'd just remove it and not handle it (i.e. YAGNI).

ResponseValidator

ResponseValidator can and should be simplified from Callable[[str, ChoiceList | None], bool] to Callable[[str], bool]; doing so keeps the ChoiceList | None parameter from leaking into other validators and method signatures. To achieve this, we can create a closure with the list of choices (choices: ChoiceList) baked in. The resulting closure has the signature we want, i.e. Callable[[str], bool].

    def create_choice_validator(choices: ChoiceList) -> Callable[[str], bool]:
        def fn(response: str) -> bool:
            return response in choices
        return fn

We can then use this closure to create a new Validator:

    response_is_valid_choice = create_choice_validator(choices)
    validators = [Validator(response_is_valid_choice, self._INVALID_CHOICE)]

_ask

While looking over _ask, I had to re-read it several times to pick up on how the updated response within the while loop was getting re-evaluated in the failing_validators lambda. The following has a bit more ceremony with the creation of a closure, but I think it makes the flow of validation checking easier to follow.
    def _ask(self, ..., validators: list[Validator]) -> str:
        def create_validator_checker(
            validators: list[Validator]
        ) -> Callable[[str], Validator | None]:
            def fn(response: str) -> Validator | None:
                for validator in validators:
                    if not validator(response):
                        return validator
                return None
            return fn

        maybe_first_failing_validator = create_validator_checker(validators)
        response = self._get_response(prompt)
        while failing_validator := maybe_first_failing_validator(response):
            self.answer(failing_validator.warning)
            response = self._get_response(prompt)
        return response

Testing the Core Business Logic

I always love to see tests, so kudos for creating them. Noticeably missing from the collection, however, is a test suite that exercises the core business logic of the game, for example:

- When the player makes a correct guess, the corresponding letters in the target word are revealed, and the score is updated accordingly.
- When the player makes an incorrect guess, no letters in the target word are revealed, 1 life is deducted, and the score is updated accordingly.
- When the player correctly guesses all the letters in the target word, they win the game.
- When the player loses all of their lives, they lose the game.

This might require refactoring Game and/or GameRound to expose the relevant parts of internal game state so they can be tested more easily, but it will be well worth the effort.
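To show the proposed closure refactor end to end without depending on the rest of the project, here is a self-contained sketch. FakeChoiceList and SimpleValidator are minimal stand-ins I invented for the project's ChoiceList and Validator, not the actual classes; the point is only that baking the choices into a closure shrinks the validator signature to Callable[[str], bool].

```python
from dataclasses import dataclass
from typing import Callable


class FakeChoiceList:
    """Minimal stand-in for ChoiceList: case-insensitive membership test."""

    def __init__(self, *shortcuts: str) -> None:
        self._shortcuts = {s.upper() for s in shortcuts}

    def __contains__(self, item: object) -> bool:
        return isinstance(item, str) and item.upper() in self._shortcuts


@dataclass(frozen=True)
class SimpleValidator:
    """Stand-in for Validator: the predicate no longer takes a ChoiceList."""

    predicate: Callable[[str], bool]
    warning: str

    def __call__(self, response: str) -> bool:
        return self.predicate(response)


def create_choice_validator(choices: FakeChoiceList) -> Callable[[str], bool]:
    # The choices are captured by the closure, so callers only pass a string.
    def fn(response: str) -> bool:
        return response in choices
    return fn


choices = FakeChoiceList('Y', 'N')
validator = SimpleValidator(create_choice_validator(choices), 'Invalid choice.')
print(validator('y'))  # a valid shortcut passes -> True
print(validator('Q'))  # an unknown shortcut fails -> False
```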
{ "domain": "codereview.stackexchange", "id": 45158, "tags": "python, console, hangman" }
Is it possible to see traveling light beams?
Question: As we all know, the speed of light is what allows us to see. Now let's imagine a beam of gamma rays from a black hole just a few light years away, passing through the solar system, that somehow managed to totally miss Earth and every other observable thing. If the space around the path of this beam were a near-perfect vacuum, i.e. had no particles in it, would we ever be able to know that we had escaped from a lion's den? Answer: Even if the beam did not interact with anything, it would still have energy/momentum. According to GR, any energy/momentum distribution will curve spacetime around it. In principle, it should be possible to see the bending of the light of distant stars passing through the region as a small change in the apparent position of the stars. I say "in principle" since the effect will be impossibly small for any realistic gamma-ray beam.
{ "domain": "physics.stackexchange", "id": 48806, "tags": "waves, electromagnetic-radiation, thought-experiment, gamma-rays" }
Protocols for "almost equality" with one-sided error
Question: In the well-known communication task EQUALITY, Alice has a string $x$ of $n$ bits, Bob has a string $y$ of $n$ bits, and their task is to determine whether $x = y$. In the public coin model, there is a probabilistic protocol which uses 2 bits, is always correct when $x = y$, and is correct with constant probability when $x \neq y$ (in fact, with probability 1/2). The following generalization came up in a recent question. In the task $k$-HAMMING (where $k$ is a constant parameter), Alice and Bob hold strings $x,y$ of length $n$ bits, and their task is to determine whether the Hamming distance between $x$ and $y$ is at most $k$. EQUALITY is the case $k=0$. When $k = 1$, we have the following nice protocol, which uses 5 bits, is always correct in the Yes case, and is wrong in the No case with probability at most 5/8. Alice and Bob agree on two random strings $z,w \in GF(3)^n$. Alice and Bob each compute the inner product of $z,w$ with their input, and compare the values. If both are equal or both are different, they output Yes, and otherwise they output No. If the Hamming distance between $x$ and $y$ is $d$ then the probability $p_d$ that $\langle z,x-y \rangle = 0$ is $1/3 + (2/3)(-1/2)^d$. We have $p_0 = 1$, $p_1 = 0$, and $1/4 \leq p_d \leq 1/2$ for $d \geq 2$. This shows that the error probability when $d \geq 2$ is at most $p_d^2 + (1-p_d)^2 \leq (1/4)^2 + (3/4)^2 = 5/8$. When $k > 1$, a similar protocol works since using enough samples we can estimate $p_d$ to any required accuracy. However, the resulting protocol no longer has one-sided error. One-sidedness can be recovered in many ways at the cost of using $O(\log n)$ bits of communication. Is there a one-sided error protocol for $k$-HAMMING for $k \geq 2$ using $O(1)$ communication? Answer: Here is a protocol with one-sided error for the case $k=2$. Suppose we have $x,y \in \{0,1\}^n$. Sample $N$ strings $z_1,\dots,z_N$ uniformly at random from $\{1,4\}^n$. 
Alice sends $\alpha_j = \sum_i x_i z_{j,i} \bmod 5$ to Bob, and Bob sends $\beta_j = \sum_i y_i z_{j,i} \bmod 5$ to Alice. Both parties compute $\gamma_j = \alpha_j - \beta_j \bmod 5$. Let $\Gamma$ denote the set $\{\gamma_1,\dots,\gamma_N\}$ (suppressing duplicates, since it is a set). If $\Gamma = \{0\}$ or $\Gamma \subseteq \{1,4\}$ or $\Gamma \subseteq \{0,2,3\}$, output "the Hamming distance is probably at most 2" (in fact, we can even say that the Hamming distance is probably 0, 1, or 2, respectively, depending on which case you are in). Otherwise, output "the Hamming distance is definitely at least 3". Why does this work? If the Hamming distance is zero, then the only possible value for each $\gamma_j$ is 0; if the Hamming distance is 1, the only possible values are 1,4; if the Hamming distance is 2, the only possible values are 0,2,3; if the Hamming distance is 3, the only possible values are 1,2,3,4; if the Hamming distance is 4 or greater, all values are possible. Therefore, if the algorithm says "definitely at least 3", it will never be wrong. Moreover, if the Hamming distance is 3 or greater, the probability of each possible value for $\gamma_j$ is at least $1/8$. Consequently, by a union bound, the probability of wrongly outputting "probably at most 2" is at most $2(1 - 2/8)^N$, which can be made exponentially small by making $N$ sufficiently large. In particular, it suffices to take $N$ to be a small constant (e.g., $N=5$) to achieve a one-sided error, with a one-sided error probability less than $1/2$. I haven't thought about whether this can be generalized to arbitrary $k$.
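One way to sanity-check the case analysis above: each coordinate where $x$ and $y$ differ contributes $\pm z_i \bmod 5$, and since $z_i \in \{1,4\}$ and $-1 \equiv 4$, $-4 \equiv 1 \pmod 5$, each such contribution lies in $\{1,4\}$ either way. A few lines of Python (my own sketch, not part of the protocol) then enumerate the reachable values of $\gamma_j$ for each Hamming distance $d$:

```python
from itertools import product

MOD = 5

def reachable_gammas(d: int) -> set[int]:
    """All values of sum(c_j) mod 5 where each of the d differing
    coordinates contributes c_j in {1, 4} (i.e. +-z_j, z_j in {1, 4})."""
    if d == 0:
        return {0}
    return {sum(cs) % MOD for cs in product((1, 4), repeat=d)}

for d in range(6):
    print(d, sorted(reachable_gammas(d)))
```

This reproduces exactly the sets used in the protocol: {0}, {1,4}, {0,2,3}, {1,2,3,4}, and then all of $\mathbb{Z}_5$ for $d = 4$ and $d = 5$.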
{ "domain": "cs.stackexchange", "id": 9857, "tags": "communication-complexity" }
Energy Density is expressed in Liters, Liters of What?
Question: When looking at energy density of batteries it is often expressed as Wh/L. For instance, Wikipedia says Lithium Ion batteries have a density of 250–693 W·h/L. Liters is supposed to express the volume but the volume depends on the density of the substance. So liters of what? Water? Answer: Volume is volume. A (large) battery that is a cube 10 cm on a side has a volume of 1 liter regardless of what it is made of, or what density it has.
{ "domain": "physics.stackexchange", "id": 73757, "tags": "conventions, batteries, si-units, volume" }
Trivial solution to the Continuous Knapsack problem
Question: I am a bit puzzled as to why the continuous knapsack problem is considered non-trivial https://en.m.wikipedia.org/wiki/Continuous_knapsack_problem Using the terminology in the Wikipedia link above, if the knapsack had a capacity of $W$, couldn't we just pick $W/w_i$ of the material $i$ having the maximum value per unit weight $v_i/w_i$? Answer: Briefly, the supply of each material is finite: there is only $w_i$ of material $i$. If $w_i < W$, then you cannot fill the knapsack just with material $i$. In the continuous (or fractional) knapsack problem, we are given a bunch of materials $1,\ldots,n$. Material $i$ has weight $w_i$ and total value $v_i$. The goal is to pick $\alpha_1,\ldots,\alpha_n \in [0,1]$ such that $\sum_i \alpha_i w_i \leq W$ in a way which maximizes $\sum_i \alpha_i v_i$. Your suggestion of taking $\alpha_i = W/w_i$ (for the material $i$ with the best value-to-weight ratio) is only feasible when $W \leq w_i$, in which case it fills the knapsack exactly and is in fact optimal. If $W > w_i$, the choice isn't feasible – there is not enough of material $i$ – and the remaining capacity has to be filled greedily with the next-best materials.
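For contrast, here is a sketch (my own illustration, not from the answer) of the standard greedy that does solve the fractional problem: sort by value per unit weight and take each material until its supply or the capacity runs out:

```python
def fractional_knapsack(items, W):
    """items: list of (value, weight) pairs; returns the maximum total value.

    Take materials in decreasing value-per-weight order, possibly taking
    only a fraction of the last one.
    """
    total, remaining = 0.0, W
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        take = min(w, remaining)   # the supply of each material is finite
        total += v * (take / w)
        remaining -= take
        if remaining == 0:
            break
    return total

# The best material (density 3) has only 2 units of weight, so a capacity-5
# knapsack must also take part of the next-best material: 6 + 4*(3/4) = 9.
assert fractional_knapsack([(6, 2), (4, 4), (1, 10)], 5) == 9.0
```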
{ "domain": "cs.stackexchange", "id": 14839, "tags": "greedy-algorithms" }
Apparent frequency and wavelength in the Doppler effect
Question: For a transverse wave (or for the pressure waves required to produce longitudinal waves), the motion perpendicular to the direction of propagation of the wave is governed by an equation like $y = A\sin(\omega t)$ in the case of harmonic waves (here $\omega$ is the angular frequency of the simple harmonic motion). The time period ($T$) of the (harmonic) wave is then $2\pi/\omega$, and the frequency $(\nu)$ is $1/T$. Also, the velocity of propagation of a wave is $v = \nu\lambda$. Now the Doppler effect shows us that if the source of the wave or the observer is in motion, the frequency $(\nu)$ of the wave changes (here I am talking about longitudinal waves propagating in a stationary medium). My initial doubt was how the frequency and the time period of the wave could change if $\omega$ did not change. However, this animation (scenes 4 and 5 of 8) helped me realise(?) that, in the case of the observer moving and the source remaining stationary, the wave itself does not change, but the apparent frequency of the wave as seen by the observer changes. Also, as there is relative velocity between the source and the wave, I thought that the apparent change in frequency would be due to the apparent change in velocity rather than the wavelength $(v=\nu\lambda)$, which means that $\lambda$ would be constant. However, in the case where the source is moving with respect to the observer, which is stationary (scene 7 of 8), the state of motion of the observer is the same as in the case where both the observer and the source are stationary (scene 3 of 8). This must mean that there is an inherent change in the wavelength and frequency of the wave, visible to any stationary observer. However, according to my initial arguments, how can the frequency of the wave change if there is no change in $\omega$? So my final questions are: 1. In the case where the observer is moving, is it true that the apparent wavelength does not change from the case where both source and observer are stationary?
And in the case where the source is moving, is it true that both wavelength and frequency are changing from the normal case? And if so, why is it that the relative motion is not what matters? 2. In the case where the source is moving, how can the frequency change when $\omega$ remains constant? Edit: Clarified that I am talking about longitudinal waves propagating in a stationary medium. Answer: The animations that you linked refer to sound waves - that is, there is a stationary medium involved in the transmission of the sound. This means that it matters whether the source or the observer is moving relative to the medium. Answering your two specific questions: yes, if the observer is moving, the wavelength of the waves does not change (they are generated by the source and will propagate in the same way regardless of whether somebody is there to observe them). When the source is moving, the wavelength is increased (as the animation makes clear), so while the source frequency is unchanged, the frequency observed at a particular point in space does change. However, an observer traveling in the same direction as the source and at the same speed would observe the original frequency (the distance between them doesn't change). In that scenario, the wavelength is still changed - because the sound waves are traveling towards the observer faster than the speed of sound in the medium.
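Numerically, both cases are captured by the standard acoustic Doppler formula $f' = f\,(v + v_o)/(v - v_s)$, with velocities taken positive when moving toward the other party. A small sketch (values and names are my own choosing):

```python
V_SOUND = 343.0   # speed of sound in air, m/s (assumed value)

def doppler(f, v_observer=0.0, v_source=0.0, v=V_SOUND):
    """Observed frequency; velocities positive when moving toward the other."""
    return f * (v + v_observer) / (v - v_source)

f0 = 440.0
# Moving observer: observed frequency changes, wavelength in the medium doesn't.
assert abs(doppler(f0, v_observer=34.3) - 484.0) < 1e-9
# Moving source: the wavelength in the medium really is different, and the
# observed frequency is just (wave speed) / (new wavelength).
f_src = doppler(f0, v_source=34.3)
wavelength_ahead = (V_SOUND - 34.3) / f0   # compressed in front of the source
assert abs(V_SOUND / wavelength_ahead - f_src) < 1e-9
```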
{ "domain": "physics.stackexchange", "id": 19339, "tags": "acoustics, relative-motion, doppler-effect" }
Colors from a computer vs. colors from visible spectrum of sunlight
Question: Observation: So, I know that all computer screens are able to project many different colors by varying how they display the RGB (Red, Green, Blue) pixels. Question: What's the difference between, say, yellow light (575 nm) from sunlight vs. the same yellow light that's displayed on the computer screen? Aren't they different? Is our brain mixing the RGB lights together and interpreting the result as yellow light, or does the same (575 nm) light that comes from sunlight actually hit our eyes? Relationship between Fourier Series and light? 1) So I know that in Fourier Series, you have basis functions, sines and cosines, that make up any other function depending on the coefficients that are in front of the sine and cosine terms. 2) I know that the RGB colors on a screen are the "basis" for all the other colors that a computer screen makes up, and the intensities of each of the RGB are like the "coefficients". So, am I making the correct relationship here? Answer: Yes, the mixing is happening in the eyes & brain; no, an RGB mix of yellow isn't the same as a pure yellow frequency; but our eyes will see it as the same. The eyes have 3 (or 2, if you're colour-blind) types of colour sensors, each of which responds with a different signal profile - each peaks at a particular frequency, and trails off for frequencies that differ from that. The brain merges the signals from those 3 (or 2) different sensors into a single colour signal, and it can't tell whether that was a balanced combination of red and green, or a pure yellow frequency. See also this answer to a previous, related question. That explains most colours we see. Except for when we see a combination of red and blue, with no signals in between. There isn't a colour in the spectrum for that - the colours in between red and blue all feature higher signals in the middle, around green.
Signals from red and blue but no green don't map to anything on the spectrum. And our brain won't show a combination of two or more colours for a single point; it always maps a single point to a single colour. So our brain creates a new colour, not on the spectrum, for a combination of red and blue. Hence, purple pigments aren't real, in that sense - purple is the brain's interpolation of red + blue + no green. Purple is just a pigment of our imagination.
{ "domain": "physics.stackexchange", "id": 17499, "tags": "visible-light" }
Trying to find constant of motion formula
Question: I have a ball that oscillates around a point. The ball has an origin and a destination (the point where the screen is touched). This destination is the equilibrium. One operator gave me this equation, which is supposed to be the motion of a mass attached to a spring \begin{eqnarray} x(t) &=& A e^{-\gamma t/2} \cos(\omega t - \phi) \\ y(t) &=& B e^{-\gamma t/2} \cos(\omega t - \psi) \\ \end{eqnarray} The only problem is that I don't know how to calculate $A, B, \phi, \psi$. The original post was closed and was this one. I've tried many things without any success; I will write another post for this, but for the moment I would like to know how to calculate these constants. (From what I've understood, $\phi, \psi$ should be the phase shifts, but I'm not sure.) Answer: $A$ and $B$ are the sizes of the oscillations in the $x$ and $y$ directions; you could choose these so the image stays near the point, and they might need to represent the distance from the origin to where the screen was touched. Don't worry too much about $\gamma$; to start with, set $\gamma$, $\psi$ and $\phi$ to zero. $\omega$ sets the speed of oscillation: the bigger it is, the faster the oscillation. Try the above to start with; these conditions should make the ball oscillate with the destination as the equilibrium position. If you want a circular motion, try setting one of $\psi$ or $\phi$ equal to $\frac{\pi}{2}$ (about 1.57). Hope this helps.
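A minimal sketch of these suggestions (all names and numbers are my own illustration): take $A$ and $B$ as the initial offset from the touch point and set the phases to zero; I also use a positive $\gamma$, which makes the ball settle onto the destination instead of oscillating forever:

```python
import math

def damped_oscillation(t, A, B, gamma, omega, phi=0.0, psi=0.0):
    """Position of the ball relative to the equilibrium (touch) point."""
    decay = math.exp(-gamma * t / 2.0)
    x = A * decay * math.cos(omega * t - phi)
    y = B * decay * math.cos(omega * t - psi)
    return x, y

# A, B = initial offset from the touch point; phases zero; gamma > 0 damps.
x0, y0 = 50.0, 30.0
assert damped_oscillation(0.0, A=x0, B=y0, gamma=2.0, omega=2 * math.pi) == (x0, y0)
x, y = damped_oscillation(10.0, A=x0, B=y0, gamma=2.0, omega=2 * math.pi)
assert abs(x) < 0.01 and abs(y) < 0.01   # settled at the destination
```

Setting `psi=math.pi / 2` in the same call gives the circular motion mentioned at the end of the answer.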
{ "domain": "physics.stackexchange", "id": 81353, "tags": "newtonian-mechanics, mass, spring, simulations" }
Using Rosjava Time
Question: I am trying to use the ClockTopicTimeProvider to get time information, but its constructor takes a defaultNode. Most of my nodes were launched via nodeMainExecutor, nodeMainExecutor.execute(sensorView, nodeConfiguration); The result of this has a connectedNode; I tried casting this to a defaultNode but that didn't work. The actual constructor for a defaultNode looks like it's definitely not something I should be using. Has anyone had any success getting ClockTopicTimeProvider working? Or does anyone know how one obtains the "defaultNode"? For reference, I have multiple other subscribers and publishers working, and this is an Android application using rosjava. Originally posted by Jyo on ROS Answers with karma: 100 on 2012-10-03 Post score: 0 Answer: You should ask the Node for the current time instead of using a TimeProvider directly. You can specify the TimeProvider to use in the NodeConfiguration then use Node.getCurrentTime(). A ClockTopicTimeProvider will be used automatically if the /use_sim_time parameter is true when the Node starts. Originally posted by damonkohler with karma: 3838 on 2012-10-10 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 11220, "tags": "rosjava, android" }
Why use the output of the generator to train the discriminator in a GAN?
Question: I've been doing some reading about GANs, and although I've seen several excellent examples of implementations, the descriptions of why those patterns were chosen aren't clear to me in many cases. At a very high level, the purpose of the discriminator in a GAN is to establish a loss function that can be used to train the generator. i.e. Given the random input to the generator, the discriminator should be able to return a probability of the result being a 'real' image. If the discriminator is perfect the probability will always be zero, and the loss function will have no gradient. Therefore you iterate: (1) generate random samples; (2) generate output from the generator; (3) evaluate the output using the discriminator; (4) train the generator; (5) update the discriminator to be more accurate by training it on samples from the real distribution and output from the generator. The problem, and what I don't understand, is point 5 in the above. Why do you use the output of the generator? I absolutely understand that you need to iterate on the accuracy of the discriminator. To start with it needs to respond with a non-zero value for the effectively random output from the generator, and slowly it needs to converge towards correctly classifying images as 'real' or 'fake'. In order to achieve this we iterate, training the discriminator with images from the real distribution, pushing it towards accepting 'real' images. ...and with the images from the generator; but I don't understand why. Effectively, you have a set of real images (e.g. 5000 pictures of faces) that represent a sample from the latent space you want the GAN to converge on (e.g. all pictures of faces). So the argument goes: as the generator is trained iteratively closer and closer to generating images from the latent space, the discriminator is iteratively trained to recognise images from the latent space, as though it had a much larger sample size than the 5000 (or whatever) sample images you started with. ...ok, but that's daft.
The whole point of DNNs is that given a sample you can train one to recognise images from the latent space the samples represent. I've never seen a DNN where the first step was 'augment your samples with extra procedurally generated fake samples'; the only reason to do this would be if you can only recognise samples in the input set, i.e. your network is over-fitted. So, as a specific example, why can't you incrementally train the discriminator on samples of ('real' * epoch/iterations + 'noise' * (1 - epoch/iterations)), where 'noise' is just a random input vector? Your discriminator will then necessarily converge towards recognising real images, as well as offering a meaningful gradient to the generator. What benefit does feeding in the output of the generator offer over this? Answer: It turns out there is actually a practical reason for this. Practically speaking, in GANs the generator tends to converge on a few 'good' outputs that fool the discriminator if you don't do it. In the degenerate case, the generator will actually emit a single fixed output, regardless of the input vector, that fools the discriminator. Which is to say, the generator's loss function is intended not simply as "fool the discriminator"; it is actually: fool the discriminator, and generate novel output. You could write your generator's loss function to explicitly say that the outputs in any training batch should be distinct, but by passing the outputs to the discriminator you create a history of previous predictions from the generator, effectively applying a loss metric for when the generator tends to produce the same outputs over and over again. ...but it is not magic, and it is not about the discriminator learning "good" features; it is about the loss applied to the generator.
This is referred to as "Mode Collapse", to quote the Google ML guide on GAN troubleshooting: If the generator starts producing the same output (or a small set of outputs) over and over again, the discriminator's best strategy is to learn to always reject that output. But if the next generation of discriminator gets stuck in a local minimum and doesn't find the best strategy, then it's too easy for the next generator iteration to find the most plausible output for the current discriminator. Each iteration of generator over-optimizes for a particular discriminator, and the discriminator never manages to learn its way out of the trap. As a result the generators rotate through a small set of output types. This form of GAN failure is called mode collapse. See also, for additional reading "Unrolled GANs" and "Wasserstein loss". see: https://developers.google.com/machine-learning/gan/problems
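To make step 5 concrete, here is a toy 1-D sketch (entirely my own illustration, not from the answer): the discriminator's training set is real samples labelled 1 plus current generator outputs labelled 0, so its decision boundary tracks wherever the generator currently sits, which is exactly the mechanism that penalises a collapsed generator.

```python
import math, random

def train_discriminator(real, fake, steps=2000, lr=0.5):
    """Logistic 'discriminator' D(s) = sigmoid(w*s + b) on scalar samples.

    Trained on real samples labelled 1 and current generator outputs
    labelled 0 (step 5 of the loop in the question).
    """
    w, b = 0.0, 0.0
    data = [(s, 1.0) for s in real] + [(s, 0.0) for s in fake]
    for _ in range(steps):
        s, label = random.choice(data)
        p = 1.0 / (1.0 + math.exp(-(w * s + b)))
        w += lr * (label - p) * s   # stochastic gradient ascent on log-likelihood
        b += lr * (label - p)
    return lambda s: 1.0 / (1.0 + math.exp(-(w * s + b)))

random.seed(1)
real = [random.gauss(2.0, 0.3) for _ in range(50)]    # "real" data distribution
fake = [random.gauss(-2.0, 0.3) for _ in range(50)]   # a collapsed generator's output
D = train_discriminator(real, fake)
assert D(2.0) > 0.9    # accepts where the real data lives
assert D(-2.0) < 0.1   # learns to reject exactly where the generator collapsed
```

A fixed mixture of real and noise samples, as the question proposes, would never produce this second property: the discriminator would only learn to reject noise, not the generator's current favourite outputs.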
{ "domain": "ai.stackexchange", "id": 1669, "tags": "generative-adversarial-networks" }
Time-dependence of ladder operators in quantized EM fields
Question: My Question Are the operators for the $A$, $E$ and $B$ field to be treated as operators in a Heisenberg description or is their time dependence explicit when performing a textbook EM quantization as in Sakurai? The following two sections document the chain of thoughts that brought me to this point and define the quantities I am talking about. Quantization of the EM Field In the standard textbook procedure of quantizing the EM field (like the following from Sakurai) one usually starts with a Coulomb gauge in vacuum, which leads to an expression for the $A$-field which has the form $\mathbf{A}(\mathbf{x};t) = \sum_{\mathbf{k},\lambda} A_{\mathbf{k},\lambda}(\mathbf{x};t) \ \mathbf{e}_{\mathbf{k},\lambda}$, where $\mathbf{e}_{\mathbf{k},\lambda}$ are the unit vectors of circular polarizations and $A_{\mathbf{k},\lambda}(\mathbf{x};t) = A_{\mathbf{k},\lambda} e^{- i(\omega_k t - \mathbf{k} \cdot \mathbf{x})} + c.c.$ The procedure is now to find an expression for the energy density of the field and identify the appropriate choice for annihilation and creation operators. In Gaussian units this would be $A_{\mathbf{k},\lambda} = \sqrt{\frac{4 \pi\hbar c^2}{2 \omega_k V}} \ a_{\lambda}(\mathbf{k})$, where $a_{\lambda}(\mathbf{k})$ are boson operators. So far, so good. Now my conceptual problems start. If I now consider the quantized version of a single mode $E$ field, it has the form $\mathbf{E}(\mathbf{x};t) = i E^{0}_k \left(a_{\lambda}(\mathbf{k}) e^{- i(\omega_k t - \mathbf{k} \cdot \mathbf{x})} - h.c.\right) \mathbf{e}_{\mathbf{k},\lambda}$, where $E^{0}_k = \sqrt{\frac{4 \pi \hbar \omega_k}{2 V}}$. My Problem In the description of a two-level system with ladder operators $\sigma$ and $\sigma^\dagger$ which is interacting with a single-mode cavity one usually considers a coupling of the form $H = \hbar g\ (\sigma+\sigma^\dagger)(a+a^\dagger)$ with coupling parameter $g$.
This is surprising to me, since my naive assumption would be to start from a dipole interaction with the quantized field that I derived above and write $H = - \boldsymbol{\mu} \cdot \mathbf{E}(\mathbf{x};t)$. I understand how to get rid of the space dependency by considering a long-wavelength limit, but I have no idea how to treat the time dependence of the field. My feeling is that the operators which emerged in this quantization procedure should be treated as Heisenberg operators evolving in time, but I am not sure about this. Performing the quantization in this way offers me little insight into whether the result should be understood in the Heisenberg picture or not, especially because usual Schrödinger operators are allowed to have explicit time dependence. Answer: Time-dependence of operators when quantizing a free field When quantizing the EM field, the QM picture used is a matter of choice. All different pictures of QM are unitarily equivalent, but surely some pictures are more convenient than others. Here, since the ladder operator is the basic building block of a quantized theory of the EM field, it seems more appropriate to attach the time dependence to it, i.e. to use the Heisenberg picture.
Let us decompose the vector potential in modes: $$ \textbf{A}(\textbf{x},t)=\sum_\textbf{k}\textbf{a}_\textbf{k}(t)\,e^{\mathrm{i}\textbf{k}\cdot\textbf{x}}+\textbf{a}^*_\textbf{k}(t)\,e^{-\mathrm{i}\textbf{k}\cdot\textbf{x}} $$ which is the solution of the equation of motion: $$ \left[\Delta-\frac{1}{c^2}\partial^2_t\right]\textbf{A}(\textbf{x},t)=0 $$ This equation then provides two things: the dispersion relation of the modes, $\quad\omega^2_\textbf{k}=\textbf{k}^2\,c^2\quad\text{i.e.}\quad\omega_\textbf{k}=\pm kc$, which leads to two possible solutions for the mode amplitudes, $\quad\textbf{a}_\textbf{k}(t)=\textbf{a}_\textbf{k}e^{\pm\mathrm{i}\omega_\textbf{k}t}$. Generally, the $\textbf{a}_\textbf{k}e^{-\mathrm{i}\omega_\textbf{k}t}$ solution is preferred since it allows one to see $\textbf{A}(\textbf{x},t)$ as a sum of "wave-like propagating modes" described with an $e^{\mathrm{i}(\textbf{k}\cdot\textbf{x}-\omega_\textbf{k}t)}$ term. Then, when quantizing the field, the $\textbf{a}_\textbf{k}$ are associated with ladder operators: $$ \textbf{a}_\textbf{k}\rightarrow\hat{a}_\textbf{k} $$ Now, and only now, is it possible to choose the QM picture. And here it's easy to see that the Heisenberg picture goes quite well with the quantization process: $$ \textbf{a}_\textbf{k}(t)=\textbf{a}_\textbf{k}e^{-\mathrm{i}\omega_\textbf{k}t}\rightarrow\hat{a}_\textbf{k}(t)=\hat{a}_\textbf{k}e^{-\mathrm{i}\omega_\textbf{k}t} $$ Time-dependence of operators in the interacting case Since you are now dealing with time-dependent couplings (coupling between an atom and the cavity), it's often neater to express your operators in the interaction picture.
If you do so, you will find that: $$ a(t)=a\,e^{-\mathrm{i}\omega t}\quad\text{and}\quad \sigma_\pm(t)=\sigma_\pm\,e^{\pm\mathrm{i}\omega_0 t} $$ where the Pauli matrices $\sigma_\pm$ are: $$ \sigma_+=|e\rangle\langle g|\quad\text{and}\quad\sigma_-=|g\rangle\langle e| $$ Here, $|g\rangle$ and $|e\rangle$ stand respectively for the ground state and the excited state of your atom, $\omega_0$ is the resonance frequency of your atomic transition, and $\omega$ is the cavity mode frequency. Then, putting all of this into the interaction Hamiltonian: $$ \mathcal{H}_{int}(t)=\hbar g(\sigma_+(t) + \sigma_-(t))(a(t)+a^\dagger(t)) $$ which can be expanded as: $$ \mathcal{H}_{int}(t)=\hbar g(\sigma_+ a\,e^{\mathrm{i}(\omega_0-\omega) t} +\sigma_+ a^\dagger\,e^{\mathrm{i}(\omega_0+\omega) t}+\sigma_- a\,e^{-\mathrm{i}(\omega_0+\omega) t}+\sigma_- a^\dagger\,e^{-\mathrm{i}(\omega_0-\omega) t}) $$ Generally the physics is studied close to resonance, i.e. $|\omega-\omega_0|\ll\omega_0$, so that the rapidly oscillating terms in $\omega+\omega_0\sim 2\omega_0$ can be neglected compared to the slow terms in $\omega-\omega_0$. This approximation is often referred to as the rotating wave approximation. A fortiori, if you are exactly at resonance, $\omega=\omega_0$, the interaction Hamiltonian reads: $$ \mathcal{H}_{int}=\hbar g(\sigma_+ a\ +\sigma_- a^\dagger) $$ which "removes" the time dependence. And as you suggested, $\mathcal{H}_{int}$ is a dipolar coupling between the atom and the field and is derived from a $-\hat{\textbf{d}}\cdot\hat{\textbf{E}}$ term.
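These operator identities are easy to check numerically in a truncated Fock space (a standalone sketch using numpy; the truncation dimension N is my own choice, and it spoils the commutator only in the topmost basis state):

```python
import numpy as np

N = 12                                        # Fock-space truncation (assumed)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
adag = a.conj().T                             # creation operator

# Number operator a†a has eigenvalues 0, 1, ..., N-1.
assert np.allclose(adag @ a, np.diag(np.arange(N)))

# [a, a†] = 1 holds everywhere except in the topmost truncated state.
comm = a @ adag - adag @ a
assert np.allclose(comm[: N - 1, : N - 1], np.eye(N - 1))

# Heisenberg evolution a(t) = a e^{-i w t} leaves the photon number invariant.
w, t = 2.0, 0.37
a_t = a * np.exp(-1j * w * t)
assert np.allclose(a_t.conj().T @ a_t, adag @ a)
```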
{ "domain": "physics.stackexchange", "id": 20199, "tags": "quantum-field-theory, quantum-optics, second-quantization" }
Are there specific features of birds that cats/small predators are attracted to?
Question: I've recently heard a podcast in which a professor describes one of the theories as to why we like abstract art. In his talk, he mentions an experiment with seagull chicks, in which the chicks mistake a stick with a red dot for their mother's beak, and in the case of a stick with 3 stripes actually preferred it to their mother's beak. When a stick like that is waved around a chick, it starts to peck at it, believing it is the mother bird: This experiment suggests that birds are imprinted to recognize specific patterns and interpret them as "mother". I'm trying to create a similar experiment for cats. To do so I'm trying to understand whether cats and other small predators (ferrets, etc.) are imprinted in a similar way - do cats recognize specific features of a bird to identify it as "bird", "prey" or "can hunt and eat"? I'm talking about things like - do they recognize eyes, beaks, wings or tails in a special way? To paraphrase the question: if I'm to create a stick like the one above, but for cats, what features would be painted on the stick? I know that butterflies, caterpillars and other insects have evolved to mimic "eyes" on non-vital organs to confuse birds. I'm interested in whether the same thing exists for small mammalian predators. Answer: Not all cat predatory behavior is innate. Researchers found that cats' predatory behavior for birds vs. mice depends to a significant degree on training by the mother: if the mother taught predatory behavior with birds, the kittens grew up to be better at catching birds than at catching mice, and vice versa. Supporting data show that, aside from monkeys and other primates, cats are among the most adept at learning by observing the successes and failures of other animals attempting to complete tasks to obtain a reward. This might influence, therefore, a cat's response to and attraction for objects in the air or on the floor. Blindfolded cats can hunt mice by vibration.
Paul Leyhausen, a German animal behaviorist, studied domestic cats (and related felines) from the 1950s through the 1970s, and much of what is known about their predatory behavior is thanks to him. Although their vision is not especially keen, domestic cats see well in dim light and readily perceive motion. According to Leyhausen, prey movement and the direction of movement are the only factors which innately release crouching, stalking, and catching behaviors in cats. When presented with quail, cats were far more likely to go after active quail than those stilled by tonic immobility. Cats are also more likely to go after smaller prey (mice > rats), and mice were much more likely to elicit play pawing. Leyhausen also demonstrated that rodents were probably the natural prey of cats. Though there has been much study of cats and predation, little is available on details of the kind you are interested in. My best guess for a good toy for a cat would be a smaller object that moves along the floor. More information can be obtained from Leyhausen's book, Cat Behavior: The Predatory and Social Behavior of Domestic and Wild Cats, and from: Effects of the mother, object play, and adult experience on predation in cats; The effects of experience on the predatory patterns of cats; Tonic immobility in Japanese quail can reduce the probability of sustained attack by cats; The behavioral bases of prolonged suppression of predatory attack in cats.
{ "domain": "biology.stackexchange", "id": 3186, "tags": "evolution, zoology, development, sensation" }
Will a solid mix with a different solid?
Question: I apologize at first if this question is naive, since I am not a physics major student... In my understanding, the basic building block of matter is the atom, and according to my knowledge from basic physics, atoms keep moving around; although the atoms of solid matter cannot move around very much, they are not fixed. Here is my question: assume two solid materials are put together (adjacent to each other). Then, would it be possible for a certain amount (maybe extremely small) of the atoms of those two materials, at the adjacent layer, to mix with each other? If yes, could you provide some reference? If not, can any 'thing' smaller than an atom have such a property? Photons? Thank you! Answer: Here is my question: assume two solid materials are put together (adjacent to each other). Then, would it be possible for a certain amount (maybe extremely small) of the atoms of those two materials, at the adjacent layer, to mix with each other? It is possible for two solids to bond, rather than mix, and before I started this answer, I was confident I could immediately think of two solids that could be bonded together, perhaps needing a certain amount of pressure. However, pressure causes heat, so there is a high probability that you will be joining a liquid to a solid, which is not your question. If you want two solids to mix, as far as I know, you need to apply some process that establishes a liquid interface, even if only on a temporary basis, before it cools back to a solid. Welding two different metals together is an example of this. The atoms in a solid tend to attract each other more than they are attracted by another substance, which I suppose is one definition of a solid. So, strictly speaking, atoms from one solid will not get caught up in the atomic bonding structure of another solid just by placing them together. But the bonding of a solid to another solid is actually the cause of friction.
An example of this is the temporary bonding of my shoes to the pavement, and, although wasteful of energy, friction is also vital to ordinary life, as it allows me to walk. The friction arises from a combination of inter-surface adhesion, surface roughness, surface deformation, and surface contamination. The part that interests us here is inter-surface adhesion. SOURCE: Dipoles And Friction Electromagnetic interactions at the atomic scale are a source of friction, but atoms and molecules are neutral, so this answer sounds odd. However, if two atoms come close together, the repulsion between their electron clouds can cause the clouds to maximise their distance of separation. There is also an attraction each cloud feels from the other atom's nucleus, and so what happens is that the atoms polarise each other: electrons cluster in a small region of the atom to minimise repulsive forces. This results in a region of strong negative charge, while the nucleus provides a region of positive charge; this gives rise to an induced dipole. If you imagine one ball is an electron from the top surface, and the other ball is a proton from the bottom surface, then they will be held together for a very short time by electrostatic forces, and that's where the photons come in, as they establish the attracting force between the two particles. You can think of a dipole as a bound system of two charges of equal magnitude and opposite sign. The axes (if we consider the axis direction as going from the positive charge to the negative charge) of the dipoles induced this way by adjacent atoms are anti-parallel, meaning that their interaction is attractive (as the electron cloud of one atom is closer to the other atom's nucleus than it is to the other electron cloud). Attraction between dipoles that are induced by atoms at the boundaries of the two surfaces moving past each other is the simplest of the interactions that result in friction.
There are also interactions where bonds are created between atoms or molecules in the two surfaces, creating an adhesive friction. There are many kinds of friction, but it is still a subject not fully understood today.
{ "domain": "physics.stackexchange", "id": 34045, "tags": "newtonian-mechanics, particle-physics, states-of-matter" }
Getting Energy consumed by using the Gradient of the Power
Question: My goal is to predict the energy consumption in Wh over a time window by using the current power draw and the current gradient of this power draw. In other words, if my power usage is increasing at this moment, I want to take that into consideration and anticipate the power increasing further over time. However, for this application it also makes no sense to view the gradient as constant over the time window. That would mean that over time my power draw would climb indefinitely, which is not realistic. My idea was to use the following function to describe the curve of the gradient: $$ \Delta P(t) = \Delta P(0)*e^{at} $$ Given a negative "a", this function starts with the currently observed gradient but over time goes to zero. So my power consumption at time T should be: $$ P(t)=P(0) + \int_{0}^{T} \Delta P(0)*e^{at} dt $$ If I integrate this again, it should provide me with the energy consumption at time T: $$ E(t)=E(0)+ \int_{0}^{T}(P(0) + \int_{0}^{T} \Delta P(0)*e^{at} dt) dt $$ which I throw into Wolfram Alpha with the following result for the definite integral: $$ T(\frac{\Delta P(0)*(e^{aT}-1)}{a}+P(0)) $$ So let's test this with the easiest example I can come up with: E(0) := 0, P(0) = 0, a = 0, the gradient of P is 1 [W/s], and my time window is one hour. This results in 3600 Wh. But that cannot be the correct answer. If the power is growing at an (almost) constant rate of one watt per second over the time window, I would expect 3600 W after one hour (3600 s) but an average over that hour of just 1800 W. So the energy consumed should be 1800 Wh. I suspect that I make an error somewhere in the way I use these integrals or the definite integral to solve my problem. It's like I first integrate to get the power drawn at point T, which is 3600 W. This I then use for the second integration to get the energy consumed from the power.
So after the second integration I get the overall result as if these 3600 W were present constantly the whole time. Can anyone point out the mistake I made? Answer: I initially edited your question to "correct" your notation, but I think your error is actually due to that confusion, so I reverted the changes. If you actually perform your first integral, you get $$ P(t)=P(0) + \int_{0}^{t} \Delta P(0)*e^{at'} dt' = P(0) + \Delta P(0) \, \frac{e^{at} - 1}{a} $$ Note here that I've used a "dummy" variable $t'$ to perform the integration, to avoid confusion with the upper bound of integration $t$. If you then integrate this again over the range $t \in [0, T]$, you get $$ E(T) = E(0) + \int_0^T \left[ P(0) + \Delta P(0) \, \frac{e^{at} - 1}{a} \right] \, dt = E(0) + P(0) T + \Delta P(0) \frac{e^{aT} - 1 - aT}{a^2} $$ The fraction at the end of this expression is indeterminate when $a = 0$ but approaches $T^2/2$ as $a \to 0$; so in the $a \to 0$ limit we have $E(T) = P(0) T + \frac{1}{2} \Delta P(0) T^2$. This will, I'm pretty sure, give you a more reasonable result for your test inputs. Note that the use of dummy variables is very helpful in getting the correct answer. With the correct dummy variables in place, your original equation for $E$ as a function of time should be $$ E(\color{red}{T})=E(0)+ \int_{0}^{T}\left[ P(0) + \int_{0}^{\color{red}{t}} \Delta P(0)*e^{a\color{red}{t'}} d\color{red}{t'} \right] dt $$ (changes marked in red.) If you try to do this process without carefully distinguishing between $t'$, $t$, and $T$, it's likely that you'll get the wrong answer—particularly if you use a software tool like Wolfram Alpha that takes your original ambiguous input literally.
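The corrected closed form can also be verified against brute-force numerical integration (a quick sanity check of my own; the function names are illustrative):

```python
import math

def energy_closed_form(P0, dP0, a, T, E0=0.0):
    """E(T) from the corrected derivation; the a -> 0 limit is P0*T + dP0*T^2/2."""
    if abs(a) < 1e-12:
        return E0 + P0 * T + 0.5 * dP0 * T ** 2
    return E0 + P0 * T + dP0 * (math.exp(a * T) - 1 - a * T) / a ** 2

def energy_numeric(P0, dP0, a, T, steps=100000):
    """Midpoint-rule integration of P(t) = P0 + dP0*(e^{a t} - 1)/a."""
    h = T / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        growth = t if abs(a) < 1e-12 else (math.exp(a * t) - 1) / a
        growth_term = P0 + dP0 * growth
        total += growth_term * h
    return total

# The question's test case: P0 = 0 W, gradient 1 W/s, one hour.
# 6,480,000 J = 1800 Wh, as expected.
assert abs(energy_closed_form(0.0, 1.0, 0.0, 3600.0) - 6_480_000.0) < 1e-6
# A decaying gradient (a < 0) agrees with brute-force integration.
assert abs(energy_closed_form(5.0, 1.0, -0.001, 3600.0)
           - energy_numeric(5.0, 1.0, -0.001, 3600.0)) < 1.0
```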
{ "domain": "physics.stackexchange", "id": 91256, "tags": "electricity, integration, power, electronics" }
resistivity and temperature scan rate
Question: 100 amperes pass through a copper bar of $5\times5$ mm cross-section. The resistivity of copper is $1.7\times10^{-8}$ ohm-metres. Its volumetric heat capacity is $3.45$ joules per kelvin per cc. Ignoring heat loss, what is the rate of increase of temperature of the copper in degrees C per second? The question is pretty straightforward. $R = \rho\tfrac{l}{A}$, $Q = I^{2}Rt$. If the change in heat per unit time is $I^{2}R$, then the temperature rise per second corresponding to this heat is the answer. But what is the equation that relates temperature $T$ and heat $Q$? How can we arrive at the answer to this problem? Answer: It is given that the volumetric heat capacity $c$ is 3.45 joules per kelvin per cc, which means that the heat necessary to warm 1 cc of copper by 1 K is $c$; so $\Delta Q = cV \, \Delta T$, where $V = lA$ is the volume.
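Plugging the given numbers in (a sketch; variable names are my own): per unit length the dissipated power is $I^2\rho/A$ and the heat capacity is $cA$, so the length cancels and $dT/dt = I^2\rho/(A^2 c)$.

```python
I = 100.0            # current, A
rho = 1.7e-8         # resistivity of copper, ohm*m
A = 5e-3 * 5e-3      # 5 mm x 5 mm cross-section, m^2
c_vol = 3.45e6       # volumetric heat capacity, J/(K*m^3) == 3.45 J/(K*cc)

# dT/dt = (power per length) / (heat capacity per length) = I^2 rho / (A^2 c)
rate = I**2 * rho / (A**2 * c_vol)
print(rate)  # ~0.079 K/s
```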
{ "domain": "physics.stackexchange", "id": 18158, "tags": "homework-and-exercises, thermodynamics, electricity" }
Can the sealed bottle garden be called a perpetual motion machine?
Question: I was not sure just where to ask this question, but I was wondering if the sealed bottle garden can really be called a perpetual motion machine. http://www.dailymail.co.uk/sciencetech/article-2267504/The-sealed-bottle-garden-thriving-40-years-fresh-air-water.html It has its own isolated ecosystem, and can be self-sufficient for years, only needing to be watered once. Answer: There are no perpetual motion machines. So when you think you've found one, you need to ask a couple of questions, because there will always be an answer to at least one of them. How is the dissipation of energy being concealed? That is to say, some energy store is being depleted - where is it? Where is the additional energy coming from? In this case, additional energy is coming from the light that enters the sealed bottle garden. If there's no light, there's no growth.
{ "domain": "earthscience.stackexchange", "id": 866, "tags": "geophysics, geodynamics, geobiology" }
quantization of this Hamiltonian?
Question: Let the Hamiltonian be $ H=f(xp) $. If we consider canonical quantization, then $ f( -ix \frac{d}{dx}-\frac{i}{2})\phi(x)= E_{n} \phi(x)$. Here $f$ is a real-valued function, so I believe that $ f(xp) $ is a Hermitian operator. However, how could I solve the equation above for a generic $f$? In case $f$ is a polynomial I have no problem solving it, and the same if $f$ is an exponential, but for a generic $f$ how could I use operator theory to get the eigenvalue spectrum? Answer: $xp$ is not Hermitian, but $(xp+px)/2$ is. Thus you need to use the latter as an argument. To solve the time-independent Schroedinger equation, first solve it for $H=(xp+px)/2$, and note that the eigenvectors don't change when taking a function of this operator.
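The answer's point can be checked numerically in a discretized position basis; a sketch (the grid size and the central-difference derivative are my own choices, and numpy is assumed):

```python
import numpy as np

n, L = 100, 10.0
dx = L / (n - 1)
x = np.diag(np.linspace(-L / 2, L / 2, n))
# Central-difference derivative matrix; p = -i d/dx is Hermitian with this stencil.
d = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dx)
p = -1j * d

xp = x @ p
sym = 0.5 * (x @ p + p @ x)
print(np.allclose(xp, xp.conj().T))    # False: xp alone is not Hermitian
print(np.allclose(sym, sym.conj().T))  # True: (xp + px)/2 is Hermitian
```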
{ "domain": "physics.stackexchange", "id": 3471, "tags": "quantization" }
Login code probably not secure
Question: I'm new to PHP security and have made this login file: <?php header('Content-type: application/javascript'); $hostname_localhost = ""; $database_localhost = ""; $username_localhost = ""; $password_localhost = ""; $salt1 = ""; $salt2 = ""; $localhost = mysql_connect($hostname_localhost,$username_localhost,$password_localhost) or trigger_error(mysql_error(),E_USER_ERROR); mysql_select_db($database_localhost, $localhost); $username = clean($_POST['username']); $password = $_POST['password']; $sha1_password = sha1($salt1.$password.$salt2); $query_search = "select * from users where username = '".$username."' AND password = '".$sha1_password. "'"; $query_exec = mysql_query($query_search) or die(mysql_error()); $rows = mysql_num_rows($query_exec); if($rows == 0) { echo "No Such User Found"; } else { $result = mysqli_query($localhost,"select * from users where username = '".$username."'"); while ($row = mysqli_fetch_array($result)) { echo $row['id']; } } function clean($string){ if (get_magic_quotes_gpc()) $string = stripslashes($string); return mysql_real_escape_string($string); } ?> You'll probably all find a million things wrong with it but I need all the help I can get! Answer: Warning This is a code-review (CR) site. Though my answer/review may seem harsh at points, it is my firm belief that CR can, and should, be tough. I've explained my reasoning at length here. It is not my intention to hurt yours or anyone else's feelings, be advised that some criticisms may be phrased badly, or bluntly. It is important, then, to keep in mind that my only goal is to help. Due to the fact that much of CR is criticizing other peoples code, you may get the impression that I see myself as the light of the world, and only source of "good code". Not so. Some critiques are subjective (such as which extension I recommend: PDO and not mysqli_*). TMTOWTDI and to each his own. 
The first thing that is "bad" (as in: not good practice) is that you're not using the right MySQL extension. What is in fact outright wrong is that you're not consistently using a single extension throughout. Most of the time, you're using the mysql_* extension (which is really, really bad, mkay). But then out of the blue you have this, shall we say, moment of clarity, and you use the MySQL Improved extension (or mysqli_*): //all your code using mysql_* $result = mysqli_query($localhost,"select * from users where username = '".$username."'"); while ($row = mysqli_fetch_array($result)) { echo $row['id']; } //reverting back to deprecated extension The help section of this site specifically requests you post actual, working code to review. If you need this code fixed/debugged: set about trying yourself and, if you can't seem to get it to work, post a question on SO. Please clarify (by editing your posted code) which extension you're actually using. Also mention any notices, errors or warnings you get, if any. Regardless of what extension you choose: make sure it's not deprecated, use it as was intended and be consistent. The second thing is the use of magic quotes. Like the mysql_* extension, these have been deprecated for quite some time and have been removed since PHP 5.4.0. Do not use them. Ever. No, seriously. Stop. I'm not joking. Honest. Third item on the agenda: at no point are you bothering to check if there is any POST data to be processed. You just blindly assume $_POST['username'] will be there. That can issue notices (undefined index E_NOTICE). Adding an if (!isset($_POST['username']) || !isset($_POST['password'])) exit(); isn't a lot of work, and prevents all this. I would assume you're include-ing this file as part of a bigger script, and in case that script does check for isset params, then you should be good, but if that's the case: be careful with that header business. If the headers are already sent, you'll get an error. 
Which brings us neatly on to point four: header('Content-type: application/javascript');? Why? What? How? Why are you setting a header that makes no sense? As far as I can tell, you're only echoing text, a more suitable header, then, would be: header('Content-Type: text/plain'); If you are going to add markup to the output, change that header to: header('Content-Type: text/html'); Fifth: mysql_connect(...) or trigger_error(mysql_error(),E_USER_ERROR); vs $query_exec = mysql_query($query_search) or die(mysql_error());. Why are you using that horrible or? and why use trigger_error the first time 'round, only to then switch to that equally horrible die? If the db connection failed, then the app failed. throw new RuntimeException('Could not connect to db'); from a function that returns a db connection is what you could do, or simply exit(1); Whatever you do: be consistent. Lastly, and most importantly: as a result of the things mentioned above, and the fact you're simply concatenating barely validated POST data into a query: You are vulnerable to injection attacks. This site explains what they are. An easy, reliable and safe way to prevent this sort of attack is to use prepared statements. 
An example of how this is done with PDO would be: $dsn = 'mysql:host=127.0.0.1;dbname=default_db'; $user = 'root'; $pass = 'DoNotUseRootUserIfYouDoNotNeedRootAccess'; $pdo = new PDO( $dsn, $user, $pass, array(//more options here PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_OBJ ) ); //no quotes, no data, just the query as-is $stmt = $pdo->prepare( 'SELECT * FROM users WHERE username = :uname AND password = :pass' ); $stmt->execute( array( ':uname' => $_POST['username'], ':pass' => sha1($salt1.$_POST['password'].$salt2) ) ); if (!($row = $stmt->fetch())) { echo 'User not found'; } else { do { echo $row->id, PHP_EOL; }while($row = $stmt->fetch()); } Just browse through the documentation for the PDO class, or work out how to use prepared statements using the mysqli_* extension. It's not very hard, and I'd be happy to review your attempts at it, but I don't really like mysqli all too much, so I'm not posting an example here. To help you choose between the two extensions, there's a special choose-an-API page here. The reason why I prefer PDO, in case you are wondering, is simply because its API is clean, and consistent: it's OO-only, whereas mysqli_* offers both a procedural and OO API. That, to my eye, makes for messy code. mysqli_* uses a bit too much pass-by-reference in its methods: $foo = 123; $mysqliStmt->bindParam('i', $foo);//REFERENCE to $foo $foo = 678; $mysqliStmt->execute();//will use CURRENT value of $foo -> 678 It, once again, has to do with being consistent. Last thoughts: Given that you're still new to PHP, I'll leave it at this. Spend some time on the doc pages, and if it all gets a bit much, perhaps look at the unofficial coding standards at PHP-FIG. All major players (Zend, Symfony, Doctrine, Drupal, Joomla... the list is on the site) subscribe to these standards. Since I've repeated it many times before here, if you want to be consistent, too: try and adopt these coding standards.
{ "domain": "codereview.stackexchange", "id": 6402, "tags": "php, beginner, mysql, security" }
What happens when the receiver of a parabolic antenna is covered by a metallic layer?
Question: Out of curiosity I decided to conduct a small experiment and covered the receiver of the parabolic TV antenna with a metallic cylindrical food container without touching the receiver. It blocked the signals. Now I am confused about explaining what exactly happened. Elaboration: The visible light gets scattered when obstructed by an opaque layer with non-uniformities on the surface. However, talking about EM waves being blocked by a metal is getting me confused. To put my point simply, I am assuming just a time-varying electric field as given in the image below. A: an infinite sheet with time-varying surface charge density that can generate a signal B: a metal block introduced to the electric field that is uniform at any instant C: receiver The electric field will be zero inside the metal block but will not be affected anywhere else. Hence the receiver should not have any problem getting the signal. But that is not what happened. Where am I going wrong? Answer: What actually happens is that electromagnetic waves are being reflected on the surface of the metal. The case that you are showing is only valid for DC fields. For AC fields you have to calculate Maxwell's equations inside a conducting material, which will give you the skin effect, which is characterized by a frequency-dependent quantity called "skin depth". Fields inside a conductor are attenuated exponentially due to this skin depth and thus the waves do not continue "on the other side". Instead the incident energy gets reflected back into space, which means that the receiver is being shielded for wavelengths that are much smaller than the size of the metal object in front of it. For objects roughly the size of the wavelength we have to calculate or measure the reflections and the resulting scattering and diffraction explicitly, which can be quite difficult.
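For a rough sense of scale, the good-conductor skin depth $\delta = \sqrt{2\rho/(\omega\mu)}$ can be evaluated for this experiment. A sketch (the aluminium resistivity and the ~12 GHz Ku-band satellite frequency are my assumptions, not given in the question):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(rho, freq, mu_r=1.0):
    # delta = sqrt(2 * rho / (omega * mu)), the good-conductor approximation
    omega = 2.0 * math.pi * freq
    return math.sqrt(2.0 * rho / (omega * MU0 * mu_r))

delta = skin_depth(2.8e-8, 12e9)  # assumed aluminium can, ~12 GHz downlink
print(delta)  # well under a micrometre
```

Since a food can's wall is hundreds of micrometres thick, the fields decay by a factor of $e^{-t/\delta}$ over hundreds of skin depths, so essentially nothing gets through.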
{ "domain": "physics.stackexchange", "id": 25897, "tags": "electromagnetic-radiation" }
Is it easier to oxidise iron(II) to iron(III) in acidic or alkaline solution?
Question: Is it easier to oxidise $\ce{Fe^2+}$ to $\ce{Fe^3+}$ in acidic or alkaline solution? $$ \begin{align} \ce{Fe &-> Fe^2+(aq) -> Fe^3+(aq)}\\ \ce{Fe &-> Fe(OH)2 -> Fe(OH)3} \end{align} $$ In my textbook it is written that as soon as $\ce{Fe^{2+}}$ changes to $\ce{Fe^{3+}}$, it acquires $\ce{OH-}$ to form $\ce{Fe(OH)3}$. This promotes the oxidation of $\ce{Fe^{2+}}$ to $\ce{Fe^{3+}}$. I don't quite get why precipitate formation speeds up the reaction. Answer: Here is a source: THE MECHANISM OF IRON CATALYSIS IN CERTAIN OXIDATIONS that confirms Maurice's reported observations: It is well known that ferrous iron is comparatively stable in acid solution and that it is rapidly oxidized to the ferric state by the oxygen of the air in alkaline solution. On the question of why, I quote another work: Fe2+ adsorption on iron oxide: the importance of the redox potential of the adsorption system with citing references: Nowadays the dominant hypothesis is that adsorption of Fe(II) by ferric and non-ferric oxides has a different mechanism. Fe(II) adsorbed on ferric oxide (like Fe2O3) can be easily oxidized by the transfer of an electron from adsorbed species to the solid (Gorski and Scherer 2011; Hiemstra and van Riemsdijk 2007; Larese-Casanova et al. 2012). Note my added link to the work of Hiemstra based on surface complexation modeling. I hope this helps.
{ "domain": "chemistry.stackexchange", "id": 14218, "tags": "inorganic-chemistry, physical-chemistry, electrochemistry, oxidation-state" }
Supercurrent phase and gauge change: why a specific choice for the quantum phase?
Question: I am following these lecture notes: http://web.mit.edu/6.763/www/FT03/Lectures/Lecture9.pdf In the last slide, we see how a gauge change for the EM field impacts the phase of the wavefunction. As a reminder: $$ \psi(x,t)=\sqrt{n(x,t)}e^{i \theta(x,t)}$$ $$ \mathbf{J}=qn(x,t) \left( \frac{\hbar}{m} \mathbf{\nabla}(\theta(x,t))-\frac{q}{m} A(x,t) \right)$$ If we change the E.M. potential as follows, the physical description is the same: $$A'= A+\mathbf{\nabla} \chi $$ $$\phi'= \phi - \frac{\partial \chi}{\partial t} $$ Thus in the "primed" gauge, the physical quantities are the same: $n(x,t)=n'(x,t)$ and $\mathbf{J}=\mathbf{J'}$. From those equalities, we find: $$qn(x,t)\left( \frac{\hbar}{m} \mathbf{\nabla}(\theta'(x,t))-\frac{q}{m} A'(x,t) \right)=qn(x,t)\left( \frac{\hbar}{m} \mathbf{\nabla}(\theta(x,t))-\frac{q}{m} A(x,t) \right)$$ It implies: $$ \mathbf{\nabla}(\theta'-\theta-\frac{q}{\hbar} \chi)=0$$ A particular solution for this is $\theta'=\theta+\frac{q}{\hbar} \chi$, but we could expect others. Why is only this particular solution considered in the slides? Answer: Consider first a time-independent gauge transformation. Then the vanishing of the gradient implies only that $$ \theta'-\theta+\frac{q}{\hbar}\chi= {\rm constant}. $$ But we also know that $\theta-\theta'=0$ if $\chi=0$. Therefore the constant is zero. For a time-dependent gauge transformation, one needs to include the gauge covariance of the Josephson acceleration equation to make sure that the "constant" is also time independent.
{ "domain": "physics.stackexchange", "id": 69505, "tags": "superconductivity, gauge-invariance" }
Books for the Mathematical Theory of AI/ML
Question: I am interested in the mathematical foundations of Artificial Intelligence and Machine Learning. Are there any books which describe and present the mathematical foundations in detail? I am not that interested in coding and would prefer a text which is heavy in mathematics and theory. Answer: My favorite is Understanding Machine Learning: From Theory to Algorithms. Its presentation is very probability oriented and introduces concepts in a very concise, yet insightful way. It covers the foundations of a lot of Statistical Learning Theory and, thanks to the rigorous introduction, I found it easy to build on certain directions that interest me.
{ "domain": "cs.stackexchange", "id": 16739, "tags": "machine-learning, reference-request, artificial-intelligence" }
Using two wordlists to search a list of texts
Question: I have a function that takes two separate wordlists and searches a third list, which is a text formatted as a list of wordlists. The function finds the proximity between words in word_list1 and word_list2 by taking the difference between their indexes; (it takes one over the difference, so that larger numbers will indicate closer proximity). I ultimately will write the output to a .csv file and create a network of the word proximities in gephi. This function works for me, but it is very slow when used on a large number of texts. Do you have any suggestions for making it more efficient? (If this is unclear at all, let me know, and I will try to clarify.) text = [ 'This, Reader, is the entertainment of those who let loose their own thoughts, and follow them in writing, which thou oughtest not to envy them, since they afford thee an opportunity of the like diversion if thou wilt make use of thy own thoughts in reading.', 'For the understanding, like the eye, judging of objects only by its own sight, cannot but be pleased with what it discovers, having less regret for what has escaped it, because it is unknown.' 
] word_list1 = ['entertainment', 'follow', 'joke', 'understanding'] word_list2 = ['envy', 'use', 'nada'] text_split = [] for line in text: text_split.append(line.split(' ')) def word_relations(list_a, list_b, text): relations = [] for line in text: for i, item in enumerate(line): for w in list_a: if w in item: first_int = i first_word = w for t, item in enumerate(line): for x in list_b: if x in item: second_int = t second_word = x if first_int: if second_int != first_int: dist = 1.0 / abs(second_int-first_int) if dist in relations: continue else: relations.append((first_word, second_word, dist)) return(relations) print(word_relations(word_list1, word_list2, text_split)) Here is the output: [('entertainment', 'envy', 0.05263157894736842), ('entertainment', 'use', 0.02857142857142857), ('follow', 'envy', 0.1111111111111111), ('follow', 'use', 0.04), ('understanding', 'use', 0.03571428571428571)] Answer: Algorithm You enumerate items within a loop also enumerating items. This means your algorithm is quadratic in the length of each sentence, which is bad. I think you can improve your algorithm to make only a single pass over items by creating a dict which stores unique words as keys, and lists of word indices as the values. Then you can lookup the appropriate indices of the items in your wordlists and perform the distance calculation. Since dict lookups are a constant-time operation, this reduces the complexity to linear in the length of each sentence. Do note that the algorithm is still quadratic in the length of your word lists, so there may be some improvement to be had if your word lists are long. Correctness and Edge Cases It's hard to tell exactly what this code is supposed to do, so I will be making a few assumptions. The handling of edge cases will vary depending on the requirements. 
You likely have at least one bug in your code, which is reflected in your example output: you check if x in item, which will evaluate True for the string 'use' in the word 'because'. If this is not the desired behavior, you may want a stricter check like checking equality x == item, or something based on Levenshtein distance for a less strict evaluation. Another possible bug is that you never include the first word of a sentence in your results. Your check if first_int: will be False for every word whose index is 0. Code Style Holy indentation, Batman! Deeply nested code is hard to read and understand, and usually indicates you can organize your code better. Usually the inner loops can be brought into their own function. You can sometimes reduce nesting by consolidating conditional statements. For example, an if statement immediately followed by another if with no else can be brought onto one line: if first_int: if second_int != first_int: can be written on one line as if first_int and second_int != first_int: Short variable names like w, t, and x aren't very descriptive, and make it hard for others to understand the code. Try to pick more descriptive names. Make sure you don't include unnecessary logic. For example, your check if dist in relations will always be False, since you only insert tuples, and dist is a float. It can be removed, saving you a line of code and a level of indentation.
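A sketch of the single-pass approach described in the review (names are mine; it uses exact word equality rather than substring matching, per the correctness note, and makes one pass per sentence to build the index dict):

```python
def word_relations(list_a, list_b, sentences):
    relations = []
    for words in sentences:
        # One pass over the sentence: map each word to every index where it occurs.
        positions = {}
        for i, word in enumerate(words):
            positions.setdefault(word, []).append(i)
        # Pair words from the two lists via constant-time dict lookups.
        for a in list_a:
            for i in positions.get(a, []):
                for b in list_b:
                    for j in positions.get(b, []):
                        if i != j:
                            relations.append((a, b, 1.0 / abs(i - j)))
    return relations

print(word_relations(["entertainment"], ["envy"],
                     [["entertainment", "of", "envy"]]))
# [('entertainment', 'envy', 0.5)]
```

The quadratic scan of the sentence is gone; the remaining loops are over the (usually short) word lists and the matched positions.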
{ "domain": "codereview.stackexchange", "id": 13173, "tags": "python, performance" }
What is a difference between cross-lingual IR and multi-lingual IR?
Question: In papers, the terms Cross-Lingual Information Retrieval (CLIR) and Multi-Lingual Information Retrieval (MLIR) are used sometimes interchangeably and sometimes distinctly. I want to know: is there any difference between these two terms? Answer: In this paper they differentiate between these two terms as follows: While CLIR is concerned with retrieval for given language pairs (i.e. all the documents are given in a specific language and need to be retrieved to queries in another language), MLIR is concerned with retrieval from a document collection where documents in multiple languages co-exist and need to be retrieved to a query in any language. MLIR is thus inherently more difficult than CLIR.
{ "domain": "cs.stackexchange", "id": 9185, "tags": "information-retrieval" }
What hair color will result in someone inheriting both blond and ginger genes?
Question: Both the genes for blond hair and ginger hair are recessive, so they need both parents to give the same gene for it to take effect. What happens when a person has 1 copy of a recessive gene and another copy of a different recessive gene? They don't have enough of the recessive genes to have either blond or ginger hair, but they don't have a dominant gene to "take charge"? Answer: Hair color is not so simple as that. Most traits, especially those as complex as color, are controlled by many alleles at many loci. That's why there are different kinds of brown, blond, and red hair in the population. There is no "hair color gene." A fascinating paper came out a few years ago, identifying dozens of SNPs playing a role in hair and eye color. It's a deep, deep rabbit hole that we have only begun to plunge into. EDIT: To summarize, it's very complex. This paper performed genome-wide association scans looking at two comparisons each for eyes (blue and green, blue and brown), hair (red or not, blond or brown), and skin (freckles, sun-sensitivity). The researchers looked in a group of Icelandic and Dutch individuals, which means they only looked at a small portion of human variability (i.e., very few African or Asian genomes, for example). Still, just from those simple comparisons, their scans: revealed 104 associations that reached genome-wide significance, accounted for by 60 distinct SNPs, of which 32 showed genome-wide association with only one pigmentation trait, 12 with two traits and 16 with three traits. That is to say, they found 60 DNA bases that explained their data. Roughly half of those only affected one thing (eye, or hair, or skin, but only one), a fifth of those single base changes contributed to two different traits, and a full 25% of those SNPs were associated with three traits. That's still pretty hefty. Basically, they found 60 things that can affect hair, eye, or skin color, and some of those can affect some or all of those at the same time. 
And they only looked at people from two countries. Here's a figure where they summarize the seven strongest SNPs; you can see how tricky it can be to explain colors.
{ "domain": "biology.stackexchange", "id": 1287, "tags": "human-genetics, genetics, pigmentation, hair" }
A Boolean function that is not constant on affine subspaces of large enough dimension
Question: I'm interested in an explicit Boolean function $f \colon \{0,1\}^n \rightarrow \{0,1\}$ with the following property: if $f$ is constant on some affine subspace of $\{0,1\}^n$, then the dimension of this subspace is $o(n)$. It is not difficult to show that a symmetric function does not satisfy this property by considering a subspace $A=\{x \in \{0,1\}^n \mid x_1 \oplus x_2=1, x_3 \oplus x_4=1, \dots, x_{n-1} \oplus x_n=1\}$. Any $x \in A$ has exactly $n/2$ $1$'s and hence $f$ is constant on the subspace $A$ of dimension $n/2$. Cross-post: https://mathoverflow.net/questions/41129/a-boolean-function-that-is-not-constant-on-affine-subspaces-of-large-enough-dimen Answer: The objects you are searching for are called seedless affine dispersers with one output bit. More generally, a seedless disperser with one output bit for a family $\mathcal{F}$ of subsets of $\{0,1\}^n$ is a function $f : \{0,1\}^n \to \{0,1\}$ such that on any subset $S \in \mathcal{F}$, the function $f$ is not constant. Here, you are interested in $\mathcal{F}$ being the family of affine subspaces. Ben-Sasson and Kopparty in "Affine Dispersers from Subspace Polynomials" explicitly construct seedless affine dispersers for subspaces of dimension at least $6n^{4/5}$. The full details of the disperser are a bit too complicated to describe here. A simpler case also discussed in the paper is when we want an affine disperser for subspaces of dimension $2n/5+10$. Then, their construction views ${\mathbb{F}}_2^n$ as ${\mathbb{F}}_{2^n}$ and specifies the disperser to be $f(x) = Tr(x^7)$, where $Tr: {\mathbb{F}}_{2^n} \to {\mathbb{F}}_2$ denotes the trace map: $Tr(x) = \sum_{i=0}^{n-1} x^{2^i}$. A key property of the trace map is that $Tr(x+y) = Tr(x) + Tr(y)$.
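To make the trace map concrete at toy scale, here is a sketch over $\mathbb{F}_{2^4}$ (my choice of field and of the irreducible polynomial $x^4+x+1$; the construction in the paper uses $\mathbb{F}_{2^n}$ for large $n$):

```python
IRRED = 0b10011  # x^4 + x + 1, irreducible over F_2

def gf_mul(a, b):
    # Multiply in F_16: carry-less multiply with reduction mod IRRED.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= IRRED
        b >>= 1
    return r

def gf_pow(a, e):
    # Square-and-multiply exponentiation in F_16.
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def trace(x):
    # Tr(x) = x + x^2 + x^4 + x^8, a linear map F_16 -> F_2 (addition is XOR).
    return x ^ gf_pow(x, 2) ^ gf_pow(x, 4) ^ gf_pow(x, 8)

# The disperser f(x) = Tr(x^7) takes both values, so it is not constant.
print({trace(gf_pow(x, 7)) for x in range(16)})  # {0, 1}
```

One can also check the linearity property $Tr(x+y)=Tr(x)+Tr(y)$ directly, since addition in $\mathbb{F}_{2^n}$ is bitwise XOR.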
{ "domain": "cstheory.stackexchange", "id": 225, "tags": "cc.complexity-theory, circuit-complexity, derandomization, linear-algebra" }
What determines valid and invalid Turing machines?
Question: According to my understanding, a Turing machine that's valid has to take a finite number of steps to complete a certain step. If this is right, what else determines the validity of a Turing machine? Answer: If you are referring to what I think you are referring to, then your understanding seems correct. A Turing machine has a precise definition: It is a tuple ... see for example http://en.wikipedia.org/wiki/Turing_machine#Formal_definition Any system with the same components and conditions as described by the definition is indeed a Turing machine. Anything else (which may at first glance appear to be a Turing machine) is not a Turing machine. This distinction is to weed out some wrong intuitions. Here is an example adapted from: https://stackoverflow.com/questions/2435607/why-is-this-an-invalid-turing-machine For example (given a polynomial P as input): Start counter at 0 Start Zero at False while(not Zero) { eval P(counter) if ^^ is 0 set Zero to True increment counter } Return True Can be computed by a Turing machine (i.e. describes a valid Turing machine) while Start list at [] Start counter at 0 while(true){ add eval P(counter) to list increment counter } if any element of list is 0 return true else return false Describes an invalid Turing machine, i.e. the code does not correspond to a Turing machine. Well ^^ is all I can come up with. Maybe it will help.
{ "domain": "cs.stackexchange", "id": 1269, "tags": "terminology, turing-machines" }
Anti-gravity in an infinite lattice of point masses
Question: Another interesting infinite lattice problem I found while watching a physics documentary. Imagine an infinite square lattice of point masses, subject to gravity. The masses involved are all $m$ and the length of each square of the lattice is $l$. Due to the symmetries of the problem the system should be in (unstable) balance. What happens if a mass is removed from the system? Intuition says that the other masses would be repelled by the hole in a sort of "anti-gravity". Is my intuition correct? Is it possible to derive analytically a formula for this apparent repulsion force? If so, is the "anti-gravity" force expressed by $F=-\frac{Gm^2}{r^2}$, where $r$ is the radial distance of a point mass from the hole? Edit: as of 2017/02 the video is here (start at 13min): https://www.youtube.com/watch?v=mYmANRB7HsI Answer: I think that your initial intuition is right--before the point particle is removed, you had (an infinite set of) two $\frac{G\,m\,m}{r^{2}}$ forces balancing each other, and then you remove one of them in one element of the set. So initially, every point particle will feel a force of $\frac{G\,m\,m}{r^{2}}$ away from the hole, where $r$ is the distance to the hole. An instant after that, however, all of the particles will move, and in fact, will move in such a way that the particles closest to the hole will be closer together than the particles farther from the hole. The consequence is that the particles would start to clump in a complicated way (that I would expect to depend on the initial spacing, since that determines how much initial potential energy density there is in the system).
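The superposition argument can be checked numerically: over any finite window that is symmetric about the test mass, the full-lattice pull sums to zero, so deleting one site leaves exactly the negative of that site's pull. A sketch with $G=m=l=1$ (the finite window is my stand-in for the infinite lattice):

```python
import math

def net_force_on_origin(N, hole, G=1.0, m=1.0, l=1.0):
    # Sum the gravitational pulls on a test mass at the origin from every
    # site of a (2N+1) x (2N+1) window, skipping the origin and the hole.
    fx = fy = 0.0
    for i in range(-N, N + 1):
        for j in range(-N, N + 1):
            if (i, j) == (0, 0) or (i, j) == hole:
                continue
            dx, dy = i * l, j * l
            r2 = dx * dx + dy * dy
            f = G * m * m / r2          # magnitude of the pull from this site
            r = math.sqrt(r2)
            fx += f * dx / r
            fy += f * dy / r
    return fx, fy

fx, fy = net_force_on_origin(40, hole=(1, 0))
print(fx, fy)  # ~(-1.0, 0.0): unit-magnitude force pointing away from the hole
```

Every site pairs off with its mirror image through the origin, so the only uncancelled contribution is the mirror of the hole: the net force is $Gm^2/l^2$ directed away from the hole, as the answer argues.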
{ "domain": "physics.stackexchange", "id": 1612, "tags": "newtonian-mechanics, newtonian-gravity" }
What is the fastest marine mammal?
Question: When I did some research, most of the sources said that the orca is one of the fastest. But I could not find a source that says it is actually the fastest. Some sources say that Dall's porpoises rival orcas in speed. And there are other kinds of porpoises and dolphins that come into question. Is there any scientific research with measurements? Or does anyone have any detailed knowledge about this topic? (Also it would be nice to know the top 10 fastest marine mammals, so the question becomes: what are the fastest marine mammals?) Note: This scientific research might be difficult to conduct, so you can also just explain the situation. Answer: This article by R. Aiden Martin doesn't have citations, but is a great read with a lot of detail on observations and mechanics of animals moving in water. If you trust the numbers the author gives, the top 10 marine mammals in terms of speed (in mph; 1 mph ≈ 1.6 km/h) are: Dall's Porpoise (Phocaenoides dalli), leaping 34.5 mph Killer Whale (Orcinus orca) 34.5 mph Shortfin Pilot Whale (Globicephala macrorhynchus) 30.4 mph Blue Whale (Balaenopterus musculus) 29.76 mph Fin Whale (Balaenoptera physalus) 25.42 mph California Sea Lion (Zalophus californianus) 25 mph Pacific Spotted Dolphin (Stenella attenuata) 24.7 mph Common Dolphin (Delphinus delphis) 23.6 mph Bottlenose Dolphin (Tursiops truncatus) 17 mph Pacific Whitesided Dolphin (Lagenorhynchus obliquidens) 17 mph As noted in the article, higher estimates of speed for, for instance, the common dolphin may in fact be records of them 'surfing' on the bow wave of boats (not propelling themselves through the water).
{ "domain": "biology.stackexchange", "id": 9326, "tags": "zoology, mammals, marine-biology" }
Reaction of the body to heat
Question: In massage school we are being taught hydrotherapy -- applying cold and heat to specific areas. It says that when applying cold, first the body goes through vasoconstriction, and then later vasodilation. That makes sense -- preserve heat through the former, but if the cold is applied for too long the body still has to keep the cells alive, thus the latter. What doesn't make sense is this passage, in relation to applying heat: Law of Action and Reaction: Initially blood will rise to the surface; applied for too long, the blood will recede to the interior, causing inner congestion to be greater than before Is that true? Answer: Local warming of a patch of skin will cause vasodilation. However, after prolonged warming there is a 'die away' phenomenon, in which blood vessels slowly constrict back to their normal diameter. Johnson JM, Kellogg DL Jr. Local thermal control of the human cutaneous circulation. J Appl Physiol (1985). 2010;109(4):1229–1238. doi:10.1152/japplphysiol.00407.2010 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2963328/
{ "domain": "biology.stackexchange", "id": 9695, "tags": "human-physiology, blood-circulation" }
How do we know that $\psi$ is the eigenfunction of an operator $\hat{H}$ with eigenvalue $W$?
Question: I am kind of new to these eigenvalue, eigenfunction and operator things, but I have come across this quote many times: $\psi$ is the eigenfunction of an operator $\hat{H}$ with eigenvalue $W$. First I need some explanation of how we know this. All I know about the operator $\hat{H}$ so far is this equation, where $\langle W \rangle$ is an energy expected value: \begin{align} \langle W \rangle &= \int \limits_{-\infty}^{\infty} \overline{\Psi}\, \left(- \frac{\hbar^2}{2m} \frac{d^2}{d \, x^2} + W_p\right) \Psi \, d x \end{align} From which it follows that $\hat{H} = - \frac{\hbar^2}{2m} \frac{d^2}{d \, x^2} + W_p$. Additional question: I know how to derive the relation $\hat{H}\hat{a}\psi = (W - \hbar \omega)\hat{a} \psi$, for which they state that: $\hat{a} \psi$ is an eigenfunction of the operator $\hat{H}$ with eigenvalue $(W-\hbar \omega)$. I also know how to derive the relation $\hat{H}\hat{a}^\dagger\psi = (W + \hbar \omega)\hat{a}^\dagger \psi$, for which they state that: $\hat{a}^\dagger \psi$ is an eigenfunction of the operator $\hat{H}$ with eigenvalue $(W+\hbar \omega)$. How do we know this? Answer: You're not getting your facts right at all. How do we know from this $\langle W \rangle = \int_{-\infty}^{\infty} \bar{\Psi}\left(-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + W_p \right) \Psi dx$ or this $\hat{H} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + W_p$ that we have an eigenfunction and eigenvalue? Answer: we don't. All I know about operator $\hat{H}$ so far is this equation where $\langle W \rangle$ is an energy expected value: \begin{align} \langle W \rangle = \int_{-\infty}^{\infty} \bar{\Psi}\left(-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + W_p \right) \Psi dx \end{align} No, you don't. Here's the mathematical side of what an eigenfunction and eigenvalue is: Given a linear transformation $T : V \to V$, where $V$ is an infinite dimensional Hilbert or Banach space, a scalar $\lambda$ is an eigenvalue if and only if there is some non-zero vector $v$ such that $T(v) = \lambda v$. 
Here's the physics side (i.e. QM): We postulate that the state of a system is described by some abstract vector (called a ket) $|\Psi\rangle$ that belongs to some abstract Hilbert space $\mathcal{H}$. Next we postulate that this state evolves in time by some Hermitian operator $H$, which we call the Hamiltonian, via the Schrodinger equation. What is $H$? You guess and compare to experimental results (that's what physics is anyway). Next we postulate that for any measurable quantity, there exists some Hermitian operator $O$, and we further postulate that the average of many measurements of $O$ is given by $ \langle O \rangle = \langle \Psi | O | \Psi \rangle$. Connection to wavefunctions: we pick the Hilbert space $L^2(\mathbb{R}^3)$ to work in, so $\Psi(x) = \langle x | \Psi \rangle$, and $\langle O \rangle = \int_{-\infty}^{\infty} \Psi^*(x) O(x) \Psi(x) dx$. Ok, that's the end. The form of $H$ doesn't follow from the energy expected value. Wait! I haven't even talked about eigenvalues and eigenfunctions. This is a useless post! Answer: well you don't have to. But it is useful to find the eigenvalues and eigenfunctions of $H$, because the eigenfunctions of $H$ form a basis of the Hilbert space, and certain expressions become diagonal/more easily manipulated when we do whatever calculations we want to do. So to find the eigenvalues of $H$, we simply solve the eigenvalue equation as stated above: Solve \begin{align} H | \Psi_n \rangle = E_n | \Psi_n \rangle. \end{align} This is in the form $T(v) = \lambda v$. So as Alfred Centauri says, we simply want to find the eigenfunctions of $H$. A more subtle question would be, how do we know they exist? The answer lies in spectral theory and Sturm-Liouville theory but never mind for now, as physicists we assume they always exist. So your additional question: $\hat{a} \psi$ is an eigenfunction of the operator $\hat{H}$ with eigenvalue $(W-\hbar \omega)$. Well.... that just follows straightaway. 
You said you already proved that $H a \psi = (W - \hbar \omega) a \psi$. So here $T = H$, $v = a \psi$, and $\lambda = (W - \hbar \omega)$, which is an eigenvalue equation $T(v) = \lambda v$. Thus, $a \psi$ is an eigenfunction of $H$ with eigenvalue $(W-\hbar \omega)$.
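To make the abstract statement $T(v) = \lambda v$ concrete, here is a small numerical check (my own addition, not part of the original answer; units $\hbar = \omega = 1$, truncated number basis) that the lowering operator maps a harmonic-oscillator eigenstate to another eigenstate with the eigenvalue lowered by $\hbar\omega$:

```python
import numpy as np

# Truncated number basis of dimension N (hbar = omega = 1)
N = 20
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)   # lowering operator: a|n> = sqrt(n)|n-1>
H = np.diag(n + 0.5)               # H|n> = (n + 1/2)|n>

# Take the eigenstate |3> with eigenvalue W = 3.5
psi = np.zeros(N)
psi[3] = 1.0
W = 3.5

phi = a @ psi                      # the vector a|psi>
# Check the eigenvalue equation H (a psi) = (W - hbar*omega) (a psi)
assert np.allclose(H @ phi, (W - 1.0) * phi)
```

The assertion succeeds because $a|3\rangle \propto |2\rangle$, which is an eigenvector of $H$ with eigenvalue $2.5 = W - \hbar\omega$.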
{ "domain": "physics.stackexchange", "id": 7671, "tags": "quantum-mechanics, operators, hamiltonian, eigenvalue" }
Physical Interpretation of the trace distance between Bloch vectors
Question: I came across a problem in which the trace distance is maximum if the Bloch vectors of the two density matrices are perpendicular to each other. What is the physical interpretation of this? Answer: The trace distance, which is a metric on the space of density matrices, i.e., $$D(\rho_1,\rho_2)=\frac{1}{2}\mathrm{Tr}\,|\rho_1-\rho_2|, \qquad |A|=\sqrt{A^\dagger A},$$ for two density operators $(\rho_1,\rho_2)$, tells$^1$ us the degree of distinguishability between two states, and if the states are orthogonal ("perpendicular"), this distance is maximized. So, if this distance is a minimum, the states are completely indistinguishable and if it's a maximum, the states are perfectly distinguishable (in principle), and as stated, this happens when the states are orthogonal. $^1$ To get a more intuitive feel for this, you can compare the Trace distance to its classical analogue, called the Kolmogorov distance applicable to classical probability distributions, and gives a measure of the similarity between two classical probability distributions.
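A small numerical illustration (my own sketch, not part of the original answer): for a single qubit the trace distance equals half the Euclidean distance between the Bloch vectors, $D = |\vec r_1 - \vec r_2|/2$, so *antipodal* Bloch vectors (orthogonal states) give the maximum value 1, while perpendicular Bloch vectors give $1/\sqrt{2}$:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def rho(r):
    """Qubit density matrix from a Bloch vector r."""
    return 0.5 * (I2 + r[0]*sx + r[1]*sy + r[2]*sz)

def trace_distance(r1, r2):
    d = rho(r1) - rho(r2)
    # Tr|A| equals the sum of the singular values of A
    return 0.5 * np.sum(np.linalg.svd(d, compute_uv=False))

# Antipodal Bloch vectors (orthogonal states |0>, |1>): distance 1 (maximum)
print(trace_distance([0, 0, 1], [0, 0, -1]))
# Perpendicular Bloch vectors (|0> and |+>): distance 1/sqrt(2)
print(trace_distance([0, 0, 1], [1, 0, 0]))
```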
{ "domain": "physics.stackexchange", "id": 91718, "tags": "quantum-mechanics, homework-and-exercises, quantum-information, density-operator, bloch-sphere" }
If the earth was really massive, then is it possible that no sunlight will reach earth due to gravitational lensing?
Question: Say the earth were really massive, so that it warps space significantly: is it possible that no sunlight would reach earth due to gravitational lensing? Answer: Gravitational lensing by a dense object: Earth would receive even more light if it were more massive, because an increase in the strength of earth's gravitational field would mean that earth attracts even more light than it does with its normal gravitational field. Gravitational lensing only deflects the light from distant objects, which makes them appear distorted, and it deflects the path of light in such a way that light is attracted towards the dense object rather than being repelled away from it. Even if you were to place a black hole of mass equal to earth's mass in earth's place, it would still continue to receive sunlight from the sun.
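To put a number on how weak the effect is for the real Earth (a back-of-the-envelope sketch, not part of the original answer), the standard weak-field deflection angle for a light ray grazing a mass $M$ at impact parameter $b$ is $\alpha \approx 4GM/(c^2 b)$:

```python
# Light deflection at grazing incidence: alpha ~ 4GM / (c^2 b)  (weak-field GR)
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m (impact parameter: ray grazing the surface)

alpha = 4 * G * M_earth / (c**2 * R_earth)
print(alpha)  # a few nanoradians: utterly negligible bending
```

For comparison, the same formula at the Sun's limb gives the famous 1.75 arcseconds; for Earth the bending is roughly a million times smaller.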
{ "domain": "physics.stackexchange", "id": 28535, "tags": "general-relativity, spacetime, gravitational-lensing" }
convert VLP16 sensor_msgs/PointCloud2 to sensor_msgs/LaserScan
Question: I would like to test a library that requires a sensor_msgs/LaserScan topic. I found this question http://answers.ros.org/question/9450/how-to-setup-pointcloud_to_laserscan/ but I am pretty new to ROS and I have a lot of difficulties in adapting this solution to my case. Could you help me with this? Originally posted by ESeNonFossiIo on ROS Answers with karma: 1 on 2016-12-07 Post score: 0 Original comments Comment by McMurdo on 2016-12-07: What's the problem in your case? What exactly is the error when you run that launch file? Comment by ESeNonFossiIo on 2016-12-07: I do not know how to write a launch file for the VLP16. I got two default launch files but they are supposed to work with Kinect. Answer: There is Velodyne specific software: https://github.com/ros-drivers/velodyne/pull/110 http://wiki.ros.org/but_velodyne_proc#Laserscan_Node Edit: Added velodyne_laserscan pull request Originally posted by kmhallen with karma: 1416 on 2016-12-07 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by ESeNonFossiIo on 2016-12-07: I couldn't compile it on my laptop. I got: but_calibration_camera_velodyne/Calibration.h:53:24: error: ‘isnan’ was not declared in this scope return !isnan(value); Comment by ESeNonFossiIo on 2016-12-09: Using your branch it seems to work. I see /scan and the output is ok.
{ "domain": "robotics.stackexchange", "id": 26421, "tags": "ros, vlp16, laserscan, pointcloud" }
Will glass always break in the same way?
Question: This question has had me thinking for a while. If I have two large panes of glass and a rock or similar item is thrown in exactly the same place on the glass, would the two panes break in the same way. Does the shattering of glass follow any rules or is it always random and subject to other variables? Could you predict the shattering of glass down to the smallest shards or again, is it random? Answer: The answer is sort of yes and no. YES: If you have two perfectly identical panes of glass and two perfectly identical projectiles, and you throw the two projectiles in a perfectly identical way, then the two panes will shatter in a perfectly similar fashion. This is really just by construction, you did the same thing twice. NO: Shattering glass involves breaking bonds between atoms/molecules. This leads to two important conclusions. First, two "identical" panes of glass for this experiment must be identical at least down to the arrangement of the atoms (including the placement of any impurities), and possibly as far as the internal configuration of each atom (as the strength of the bonds can depend on the electron configuration, for instance). In practice this means that it is impossible, given current technological constraints, to construct two macroscopic identical panes of glass. Second, predicting the shattering of a given pane of glass would require both a detailed description of the microscopic structure of the pane (which is impractical because of the large amount of data storage required, and because the structure varies quickly enough in time that any measurement would quickly become obsolete), and solving the relevant dynamical equations. I imagine the equations would be reasonably easy to write down, we're talking about a bunch of particles connected by bonds and reasonably well defined forces, after all. But solving them would be computationally prohibitive, given the size of the system. 
Still, some characteristics of the shattering can be predicted, for instance under suitable conditions the glass will begin to break at the location of the projectile impact, and the smallest shards will form near the impact site, larger shards further away, etc. The coarse properties of the process can be predicted, but we're stuck describing the fine properties as "random".
{ "domain": "physics.stackexchange", "id": 13765, "tags": "determinism, chaos-theory" }
Will new proteins incorporating new amino acids trigger an immune response?
Question: This article reported that scientists have succeeded in adding two new bases to the quartet of A, C, G and T, resulting in non-canonical amino acids. Additionally, the bacteria in which this was done were able to produce new proteins using the newly added bases. The article quotes the scientists as saying that the extra amino acids “might become building blocks for new drugs and novel materials”. My question is that if new proteins are made from amino acids that don’t naturally occur, won’t the body reject them? Answer: The body's mechanism for detecting foreign objects has a built-in failsafe that minimizes the likelihood of misinterpreting a self-derived antigen as foreign. Immunologic tolerance (unresponsiveness) normally prevents reactions against self-antigens; if immunologic tolerance is broken, autoimmune reactions may occur. Much of the development of tolerance occurs in the thymus by the elimination (clonal deletion) or inactivation (clonal anergy) of self-reactive clones of T cells. Other mechanisms of tolerance occur extrathymically and include activation of antigen-specific T suppressor cells and clonal deletion, which results in the elimination of self-reactive B cells or T cells, and clonal anergy. https://www.ncbi.nlm.nih.gov/books/NBK7795/ Basically, before a T cell starts looking for potential antigens, it generally first spends some time in the thymus being tested against antigens which are native to the body. If it responds to such an antigen, the T cell will either be destroyed or deactivated. Having unnatural amino acids as part of this process is unlikely to change its efficacy. T cells which would target peptides with these unnatural amino acids will still be checked against the body's cells before being sent out to do their work; the mechanism would work the same way. I suspect that the rate of autoimmune diseases would be unchanged compared to normal organisms.
{ "domain": "biology.stackexchange", "id": 9454, "tags": "proteins, immunology, synthetic-biology" }
Is it true that "[sand] grains in the Coconino Sandstone come from the Appalachian Mountains"?
Question: The Creation Museum is a terrifying place for scientists of all stripes. One of their exhibits is the "Flood Geology" exhibit, which purports to explain how a flood of (literally) Biblical proportions circa 6000 years ago explains all the features of the Earth we see today. One panel at that exhibit is titled "Evidences of the Flood in the Grand Canyon": (photo taken by John Scalzi; click for larger, more-readable version - if you think you can handle it) This is obviously boneheaded in so, so many ways, but anyway - this panel makes the following claim: Sand grains in the Coconino Sandstone come from the Appalachian Mountains. Three related questions: Is this claim true? (The Creation Museum is not beyond telling outright lies, so I have to ask...) If so, how did it get there? How do we determine that a particular grain of sand originates in the Appalachians? Answer: Jeffrey Rahl of Yale and coauthors published Combined single-grain (U-Th)/He and U/Pb dating of detrital zircons from the Navajo Sandstone, Utah, which concludes that Navajo sandstone originated from the Appalachian Mountains. A review of the article is found here: http://www.geotimes.org/nov03/NN_navajo.html. Rahl et al. proposed that a large river system transported the sediment. However, Rahl's research concerned only Navajo sandstone, not Coconino sandstone as far as I know. U–Pb ages of detrital zircons from Permian and Jurassic eolian sandstones of the Colorado Plateau, USA: paleogeographic implications does investigate Coconino Sandstone and other sandstones from the region and discusses what fraction are of Appalachian origin and transport mechanisms. 34% of Coconino Sandstone zircon grains are reported to be of Appalachian origin in table 4.
{ "domain": "earthscience.stackexchange", "id": 144, "tags": "geology, sedimentology" }
Derivation of Lorentz Transformations
Question: How can I derive the Lorentz transformations? I don't want to use hyperbolic functions or the fact that light waves travel by forming spherical wavefronts. Is there a way to derive the Lorentz transformations applying the conditions I have mentioned? I was unable to understand the method given in Landau and Lifshitz deeply. That's why I want a method other than the one using hyperbolic functions. Answer: Here's a derivation that uses very basic properties of space and time (isotropy, homogeneity, the fact that two Lorentz boosts should compose into another valid Lorentz boost, etc.). The constant maximum speed through space (i.e., the speed of light) is a derived property, not an assumption. One more derivation of the Lorentz transformation - Jean-Marc Levy-Leblond Here's a similar one that uses linear algebra after deriving the fact that the transform is linear, with similar results. Nothing but relativity - Palash B. Pal These kinds of group-theory-based derivations go back to Vladimir Ignatowski in 1910.
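As a sanity check on the group property the cited derivations rely on (my own sketch, not from those papers): in 1+1 dimensions, two boosts compose into a single boost whose velocity obeys the relativistic velocity-addition law.

```python
import numpy as np

def boost(v, c=1.0):
    """1+1D Lorentz boost matrix acting on column vectors (t, x)."""
    g = 1.0 / np.sqrt(1.0 - (v / c)**2)   # Lorentz factor gamma
    return np.array([[g,      -g * v / c**2],
                     [-g * v,  g]])

v1, v2 = 0.5, 0.6                          # in units of c
L = boost(v2) @ boost(v1)                  # apply one boost after the other

# The composite is itself a boost, with the relativistic velocity addition
v12 = (v1 + v2) / (1.0 + v1 * v2)
assert np.allclose(L, boost(v12))
print(v12)  # 1.1 / 1.3 ~ 0.846 < 1: composed speed never exceeds c
```

The closure of boosts under composition, together with isotropy and homogeneity, is exactly what pins the transformation down to the Lorentz form with some invariant speed.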
{ "domain": "physics.stackexchange", "id": 63252, "tags": "homework-and-exercises, special-relativity, inertial-frames, lorentz-symmetry" }
Pretending photon has a small mass in soft bremsstrahlung
Question: In Peskin and Schroeder chapter 6, on page 184 when discussing the infrared divergence problem in perturbative QED, the book says we can make the following equation $$\tag{6.25} \text{Total probability}\approx\frac{\alpha}{\pi}\int_0^{|\boldsymbol{q}|}dk\frac{1}{k}I(\boldsymbol{v},\boldsymbol{v}').$$ well defined by pretending the photon has a small mass $\mu$, giving us $$\tag{6.26} d\sigma(p\rightarrow p'+\gamma(k))=d\sigma(p\rightarrow p')\cdot\frac{\alpha}{2\pi}\log\bigg(\frac{-q^2}{\mu^2}\bigg)I(\boldsymbol{v},\boldsymbol{v}')$$ Here we are considering the bremsstrahlung of an electron with momentum $p$, resulting an electron with momentum $p'$ and a soft photon with momentum $k$, $q=p-p'$. $I(\boldsymbol{v},\boldsymbol{v}')$ is an expression independent of $k$. How did we get equation 6.26 from 6.25? What do we mean when we say "pretending the photon has a small mass"? Answer: We have: \begin{align} \int_0^{E} \frac{1}{k}dk \stackrel{(\ast)}{\leadsto} \int_0^E \frac{1}{\sqrt{k^2+\mu^2}}dk \stackrel{\mu \rightarrow 0}{\simeq} \int_0^E \frac{1}{\sqrt{k^2+\mu^2}}d\left( \sqrt{k^2+\mu^2} \right) \end{align} Which gives something like $\simeq \frac{1}{2}\ln \left( \frac{E^2}{\mu^2} \right)$. $(\ast)$ is precisely what is meant by 'considering the photon to have a small mass'. After some manipulations, you may eventually convert the energy squared $E^2$ into the Lorentz invariant $-q^2$.
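A quick numerical sanity check (my own addition, with illustrative values $E = 1$, $\mu = 10^{-3}$) that the regulated soft-photon integral really produces a logarithm of the fictitious photon mass, as in eq. (6.26):

```python
import numpy as np

E, mu = 1.0, 1e-3   # energy cutoff and fictitious photon mass, mu << E

# IR-divergent integral of 1/k, regulated by the replacement k -> sqrt(k^2 + mu^2)
k, dk = np.linspace(0.0, E, 2_000_001, retstep=True)
f = 1.0 / np.sqrt(k**2 + mu**2)
val = (np.sum(f) - 0.5 * (f[0] + f[-1])) * dk   # trapezoid rule

# Exact antiderivative: arcsinh(E/mu) ~ log(2E/mu) for mu << E
print(val, np.arcsinh(E / mu), np.log(2 * E / mu))
```

As $\mu \to 0$ the result grows only logarithmically, $\sim \log(E/\mu)$, which is the $\log(-q^2/\mu^2)$ structure quoted in the book (up to the kinematic factors collected in $I(\boldsymbol{v},\boldsymbol{v}')$).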
{ "domain": "physics.stackexchange", "id": 90412, "tags": "quantum-electrodynamics, renormalization, scattering-cross-section, regularization" }
Energy consumed by a two-link planar manipulator with feedback control
Question: I was carrying out a problem in which I have a fully actuated two link manipulator, with an independent servo motor for each link that allows rotation in either direction (positive for counterclockwise rotation and negative for clockwise rotation). Let's assume we start from a certain initial condition and arrive at a final condition through a feedback control of the torques applied to each motor. To do this, a counter-clockwise torque is first applied to each servo, then a clockwise torque to "brake" (no dissipation) the system. Assuming a time interval of 10 seconds, at each sampling instant (suppose it is 0.1 seconds) I have a different torque and/or angular velocity value (suppose we always have a positive angular velocity). Consequently, by computing the product $P = \tau \omega$, I obtain, instant by instant, the power generated by each motor. Now, I have two doubts. The first one concerns the power value. Is it correct to obtain a negative power value, by possibly multiplying a negative torque value by the angular velocity? Or should I consider the torque absolute value? And then, is it correct to calculate the total energy value as the integral between 0 and 10 seconds of the power curve? Of course I know that the model is very approximate and I have not even presented the dynamics and kinematics of the manipulator in question, but I would simply like to have a clear idea of the concept of power and energy. Answer: Just some general notes. In general, when one deals with dynamics problems, vectors and their products can be quite a burden and a hassle. My advice is to clear up the notions of cross $\times$ and dot product $\cdot$ for vectors, and in early problems try to use summation of the moments calculated in vector form (this is especially true for 3d problems, although starting from simpler 2d problems like this one is the same). 
In your particular example, $P = \vec{T}\cdot \vec\omega$ is a simple dot product, so the sign simply encodes whether the two vectors point in the same direction. When the torque and the angular velocity are in the same direction the member will rotationally accelerate, while if one is opposite to the other the magnitude of the angular velocity will decrease. So, depending on your sign convention, the sign of power will indicate whether energy is added to the system or not. Alternative approach (work and energy) I can't help noticing that in this particular problem, instead of integrating torque and angular velocity, it might be easier to use the work-energy principle. I.e. you can easily write the Lagrangian of this system and calculate the total energy of the system based on the position, and from that derive the energy at different configurations. Then based on the time interval required to move from one to the other configuration you can estimate the total energy required.
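A minimal numerical sketch of the first point (entirely hypothetical torque/velocity profiles for a single joint modeled as a pure inertia $J$, not the questioner's actual manipulator): keeping the *sign* of $P = \tau\omega$ and integrating gives the net mechanical energy, which is about zero here because the braking phase takes back the kinetic energy put in while accelerating.

```python
import numpy as np

J  = 1.0                              # assumed link inertia about the joint, kg m^2
t  = np.linspace(0.0, 10.0, 101)      # 10 s horizon, 0.1 s sampling
dt = t[1] - t[0]

# Hypothetical motion: spin up, then brake back to rest (omega >= 0 throughout)
omega = np.sin(np.pi * t / 10.0)      # rad/s
tau   = J * np.gradient(omega, dt)    # N m; goes negative while braking

P = tau * omega                       # signed power: P < 0 means the motor absorbs energy
E_net = float(np.sum(0.5 * (P[1:] + P[:-1])) * dt)   # trapezoid integral of P(t)
print(E_net)
```

Taking `np.abs(P)` instead answers a different question: the energy the drives must supply when braking energy cannot be recovered. Both are legitimate; which one you want depends on whether your servos can regenerate.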
{ "domain": "engineering.stackexchange", "id": 4475, "tags": "mechanical-engineering, applied-mechanics, energy" }
Time Evolution of Wigner Function
Question: The Wigner Function is defined as: $$W(x,p,t)=\frac{1}{2\pi\hbar}\int dy \,\rho(x+y/2, x-y/2, t)e^{-ipy/\hbar}\tag{1}$$ Where $\rho(x, y, t)=\langle x|\hat{\rho}|y\rangle$. I am supposed to find the time evolution of the Wigner function for the Harmonic Oscillator starting from the von Neumann evolution equation given by: $$i\hbar\frac{\partial \rho}{\partial t}=\left[H,\rho\right].\tag{2}$$ I am not sure how to start, because the von Neumann evolution equation involves the commutator of the Hamiltonian and the operator of interest. However, the Wigner function is a function; how can I evaluate the commutator? Answer: Starting from the von Neumann equation: $$i\hbar\partial \hat{\rho} / \partial t=[\hat{H}, \hat{\rho}]$$ We now take the Weyl transform of both sides, noting that the partial derivative commutes with the transform and that the commutator gets mapped to the Moyal bracket: $$i\hbar\partial \tilde{\rho} / \partial t=-2i\tilde{H} \sin(\hbar \Lambda/2) \tilde{\rho}$$ where the tilde denotes the Weyl transform of the operator and $\Lambda = \frac{\partial}{\partial p}\frac{\partial}{\partial x}-\frac{\partial}{\partial x}\frac{\partial}{\partial p}$, where the first partial derivative acts to the left and the second to the right. 
Now the Weyl transform of the harmonic oscillator Hamiltonian can be shown to be just $\tilde{H}=\frac{p^2}{2m}+\frac{m\omega^2x^2}{2}$. Expanding the sine function in a Taylor series we get: $$i\hbar\frac{\partial \tilde{\rho}}{\partial t}=-2i\left(\frac{p^2}{2m} + \frac{m\omega^2 x^2}{2}\right)\left(\sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}\left(\frac{\hbar}{2}\right)^{2n+1}\left(\frac{\partial}{\partial p}\frac{\partial}{\partial x}-\frac{\partial}{\partial x}\frac{\partial}{\partial p}\right)^{2n+1}\right)\tilde{\rho}$$ Now we write the first ($n=0$) term of the sum separately: $$i\hbar\frac{\partial \tilde{\rho}}{\partial t}=-2i\left(\frac{p^2}{2m} + \frac{m\omega^2 x^2}{2}\right)\left(\frac{\hbar}{2}\left(\frac{\partial}{\partial p}\frac{\partial}{\partial x}-\frac{\partial}{\partial x}\frac{\partial}{\partial p}\right)+\sum_{n=1}^{\infty} \frac{(-1)^n}{(2n+1)!}\left(\frac{\hbar}{2}\right)^{2n+1}\left(\frac{\partial}{\partial p}\frac{\partial}{\partial x}-\frac{\partial}{\partial x}\frac{\partial}{\partial p}\right)^{2n+1}\right)\tilde{\rho}$$ Evaluating the $n=0$ term (the left-acting derivatives hit $\tilde{H}$, giving $\partial\tilde{H}/\partial p = p/m$ and $\partial\tilde{H}/\partial x = m\omega^2 x$): $$i\hbar\frac{\partial \tilde{\rho}}{\partial t}=-i\hbar\left(\frac{p}{m}\frac{\partial}{\partial x} - m\omega^2 x\frac{\partial}{\partial p}\right)\tilde{\rho}-2i\,\tilde{H}\left(\sum_{n=1}^{\infty} \frac{(-1)^n}{(2n+1)!}\left(\frac{\hbar}{2}\right)^{2n+1}\left(\frac{\partial}{\partial p}\frac{\partial}{\partial x}-\frac{\partial}{\partial x}\frac{\partial}{\partial p}\right)^{2n+1}\right)\tilde{\rho}$$ The terms outside the sum are precisely Liouville's equation. Since the harmonic oscillator Hamiltonian is quadratic in $x$ and $p$, every $n\geq 1$ term contains at least three derivatives acting on $\tilde{H}$ and therefore vanishes, leaving us with: $$\frac{\partial \tilde{\rho}}{\partial t}+\frac{p}{m}\frac{\partial\tilde{\rho}}{\partial x} - m\omega^2 x\frac{\partial\tilde{\rho}}{\partial p}=0$$
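To connect this back to definition (1), here is a small numerical check (my own addition, in units $\hbar = m = \omega = 1$) that the defining integral reproduces the known Gaussian Wigner function of the oscillator ground state, $W(x,p) = e^{-x^2-p^2}/\pi$:

```python
import numpy as np

hbar = 1.0   # working in units hbar = m = omega = 1

def psi0(x):
    """Harmonic-oscillator ground-state wavefunction."""
    return np.pi**-0.25 * np.exp(-x**2 / 2)

def wigner(x, p, ymax=8.0, n=4001):
    """Evaluate eq. (1) numerically for the pure state rho(a,b) = psi(a) psi*(b)."""
    y, dy = np.linspace(-ymax, ymax, n, retstep=True)
    integrand = psi0(x + y/2) * np.conj(psi0(x - y/2)) * np.exp(-1j * p * y / hbar)
    return float(np.sum(integrand).real * dy / (2 * np.pi * hbar))

# Compare with the analytic ground-state result W(x,p) = exp(-x^2 - p^2) / pi
x, p = 0.7, -0.3
print(wigner(x, p), np.exp(-x**2 - p**2) / np.pi)
```

This stationary Gaussian is consistent with the Liouville-type flow derived above: the classical harmonic-oscillator flow just rotates phase space, and the ground-state distribution is rotationally symmetric.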
{ "domain": "physics.stackexchange", "id": 70743, "tags": "quantum-mechanics, time-evolution, wigner-transform, deformation-quantization" }
Why doesn't the uncertainty principle contradict the existence of definite-angular momentum states?
Question: We know that for a position variable $x$ and momentum $p$, the uncertainties of the two quantities are bounded by $$\Delta x \Delta p \gtrsim \hbar$$ Now, this is usually first explained with $x$ being a simple linearly measured position and $p$ being linear momentum. But it should apply to any good coordinate and its conjugate momentum. It should, for instance, apply to angle $\phi$ about the $z$ axis, and angular momentum $L_z$: $$\Delta \phi \Delta L_z \gtrsim \hbar$$ The thing is, $\Delta \phi$ can never be greater than $2\pi$. I mean, you have to have some value of $\phi$ and $\phi$ only runs from 0 to $2\pi$. Therefore $$\Delta L_z \gtrsim \hbar/\Delta \phi \geq \hbar/2\pi$$ But, uh-oh! This means it is impossible for $\Delta L_z$ to be zero, and we should never be able to have angular momentum states with definite $L_z$ values. Of course, it doesn't mean that. But I have never figured out how this is not in contradiction with the Schroedinger eqn. calculations that give us states with definite values of $L_z$. Can anyone help me out? One answer I anticipate is that $\phi$ is sort of "abstract" in that if you chose your origin at some other point you will get completely different values of $\phi$ and $L_z$, and ipso facto, usual considerations don't apply. I don't think this will work, though. Consider a "quantum bead" sliding around on a rigid circular ring and you get the exact same problem with no ambiguity in $\phi$ or $L_z$. (Well, there will be some limited ambiguity in $\phi$, but still, there won't be in $L_z$.) Answer: The problem here is there is at this time still no "legitimate" self-adjoint phase operator. As you phrase the problem, you assume that $\hat \phi$ and $\hat L_z$ would have the same commutation relations as $\hat x$ and $\hat p$, and in particular given that $\hat L_z\mapsto -i\hbar d/d\phi$ the $\hat \phi$ operator would be multiplication of an arbitrary function $f(\phi)$ by $\phi$, i.e. 
$$ \hat L_zf(\phi)=-i\hbar \frac{df}{d\phi}\, ,\qquad \hat \phi f(\phi)= \phi f(\phi) $$ Thus far everything is fine except that, when it comes to boundary conditions, we must have $f(\phi+2\pi)=f(\phi)$. However, the function $\phi f(\phi)$ does not satisfy this. As a result, the action of a putative $\hat \phi$ as defined above takes a "legal" function $f(\phi)$ that satisfies the boundary conditions to an "illegal" one $\phi f(\phi)$, and makes $\hat \phi$ NOT self-adjoint (which means trouble). The uncertainty relation assumes that the operators involved are self-adjoint. Since there is (thus far) no known definition of $\hat \phi$ that makes it self-adjoint, the quantity $\Delta \phi$ cannot be computed in the usual way and indeed is not necessarily well defined for arbitrary states. In other words, there is no mathematical reason to believe that $\Delta \phi\Delta L_z\ge \hbar /2$. Indeed an obvious "problem" with your expression is obtained by taking $f(\phi)$ to be an eigenstate of $\hat L_z$. Then clearly $\Delta L_z=0$ so the putative variance $\Delta \phi$ would have to be arbitrarily large, which is impossible given that $\phi$ physically ranges from $0$ to $2\pi$. The problem of constructing a self-adjoint phase operator is an old one. It has been the subject of several questions on this site, including this one. Finding a good definition of a phase operator remains an open research problem. Edit: added some clarifications after a query.
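A quick numerical illustration of that last point (my own sketch, $\hbar = 1$, using the naive multiplication operator for $\phi$): on an $L_z$ eigenstate $e^{im\phi}/\sqrt{2\pi}$, the spread $\Delta L_z$ is essentially zero while $\Delta\phi = \pi/\sqrt{3}$ is finite, so the product lands far below $\hbar/2$ and the naive relation fails.

```python
import numpy as np

hbar = 1.0
mq = 2                                       # L_z quantum number of the eigenstate
phi = np.linspace(0, 2*np.pi, 200001)
dgrid = phi[1] - phi[0]
w = np.ones_like(phi); w[0] = w[-1] = 0.5    # trapezoid-rule weights

psi = np.exp(1j * mq * phi) / np.sqrt(2*np.pi)

def expval(arr):
    """Trapezoid-rule expectation value <psi| arr |psi> on the grid."""
    return float(np.sum(w * np.conj(psi) * arr * psi).real * dgrid)

# Spread of the (naive) multiplication operator phi: pi/sqrt(3) ~ 1.814
dphi = np.sqrt(expval(phi**2) - expval(phi)**2)

# Spread of L_z = -i hbar d/dphi on the eigenstate: numerically ~ 0
Lz_psi = -1j * hbar * np.gradient(psi, phi)
mean_Lz = float(np.sum(w * np.conj(psi) * Lz_psi).real * dgrid)
dLz = np.sqrt(float(np.sum(w * np.abs(Lz_psi - mean_Lz * psi)**2) * dgrid))

print(dphi, dLz, dphi * dLz)   # product far below hbar/2 = 0.5
```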
{ "domain": "physics.stackexchange", "id": 41027, "tags": "quantum-mechanics, angular-momentum, heisenberg-uncertainty-principle" }
What's the longest transcript known?
Question: What's the longest functional transcript known? I'm wondering about RNA length post splicing, so not including introns. Answer: Top 10 long processed transcripts in humans (with multiple isoforms), from gencode 19 annotations:

Transcript     Length(bases)
------------------------
TTN-018        108861   <-- Titin
TTN-019        103988
TTN-002        101206
KCNQ1OT1-001    91666
TTN-201         82413
TTN-202         82212
TTN-003         81838
MUC16-001       43732
TSIX-001        37026
MCC-009         29616

Ignoring isoforms (only longest isoforms shown):

Transcript     Length(bases)
------------------------
TTN-018        108861
KCNQ1OT1-001    91666
MUC16-001       43732
TSIX-001        37026
MCC-009         29616
TRAPPC9-015     29514
SYNE1-001       27602
GRIN2B-001      27204
OBSCN-011       26811
NEB-204         26020

Titin clearly is the longest transcript in humans. However this is the list of longest genes:

Gene           Length(Kb)
-------------------------
CNTNAP2        2304.64
LSAMP          2186.93
DLG2           2169.35
DMD            2092.29
PTPRD          2084.57
MACROD2        2057.83
CSMD1          2056.87
EYS            1987.24
LRP1B          1900.28
PCDH15         1806.76
CTNNA3         1783.65
ROBO2          1740.82
RBFOX1         1691.87
NRXN3          1619.64
DAB1           1548.83
RP11-420N3.2   1536.21
PDE4D          1513.42
FHIT           1502.09
AGBL4          1491.06
CCSER1         1474.33

Top 5 in Zebrafish (Zv9.75); longest isoforms:

ttna-203             93727   <-- Titin
ttnb-202             82632
si:dkey-16p6.1-001   67263
syne2b-201           31867
si:dkey-30j22.1-001  29269

Top 5 in Drosophila (FlyBase r6.02); longest isoforms:

dp-RQ        71300   <-- Dumpy
sls-RP       56448   <-- Titin
Muc14A-RA    48719
Msp300-RG    43105
Ank2-RU      42107

Top 5 in C.elegans (WormBase WS220); longest isoforms:

W06H8.8g     55623   <-- Titin
K07E12.1a.2  39257   <-- dig-1
ZK973.6      25608
C09D1.1b     24198
C41A3.1      23457

Top 5 in Arabidopsis (TAIR 10.23):

AT1G67120.1  16272   <-- Midasin homolog
AT3G02260.1  15451   <-- Calossin-like protein
AT5G28263.1  15194
AT1G43060.1  14622
AT5G30269.1  14590

Top 5 in yeast (SGD):

YLR106C      14733   <-- Midasin
Q0045        12884   <-- Subunit I of cytochrome c oxidase
YKR054C      12279
YHR099W      11235
YDR457W       9807
{ "domain": "biology.stackexchange", "id": 2785, "tags": "rna, genomics" }
Creating a simple Interpreter for the Quartz language
Question: I've been trying to improve my C++ skills, and decided to try my hand at making an interpreter for a toy language. The language is called Quartz, and so far the only thing you can do is output strings. The following keywords can be used to print out a string: output, which prints all the strings on one line, and nl_output, which prints each string on a different line. The following program is valid in Quartz:

nl_output "Hello World"
nl_output "Goodbye World"
nl_output "This is a test of the Quartz language"

Each file has the .qz extension, and is basically like a text file. The overview of how my interpreter works is: It first opens a .qz file, and then checks if the file was opened successfully. After ensuring that the file has been opened properly, the file contents are read into a string. The string is then fed to a lexer that checks for tokens. The lexer uses a for-loop to iterate over the string, and adds any tokens it finds to a vector. The lexer then returns the vector to be read by the parser. The parser uses a while loop to iterate over the vector, and calls the correct code if a keyword is found. 
main.cpp

#include<iostream>
using std::cout;
using std::cerr;
using std::endl;

#include<fstream>
using std::ifstream;
using std::fstream;

#include<string>
using std::string;
using std::getline;

#include<vector>
using std::vector;

void open_file(const char *filename, ifstream &data)
{
    data.open(filename);
    if(data.fail()) {
        cerr << "FileError: specified file '" << filename << "' could not be found" << endl;
    }
}

vector<string> lexer(string &data_str, ifstream &data)
{
    string tok;
    string string_var;
    string expr;
    vector<string> tokens;

    getline(data, data_str, '\0');

    bool is_string = false;
    data_str += '$';

    for(unsigned int i=0; i < data_str.length(); i++) {
        tok += data_str[i];

        if(tok[tok.size()-1] == '\n' or tok[tok.size()-1] == '$') {
            tok = "";
        }

        if(data_str[i] == ' ') {
            if(is_string == false) {
                tok = "";
            }
            else if(is_string == true) {
                tok = " ";
            }
        }

        if(tok == "nl_output") {
            tokens.push_back("nl_output");
            tok = "";
        }

        if(tok == "output") {
            tokens.push_back("output");
            tok = "";
        }

        if(data_str[i] == '"') {
            if(is_string == false) {
                is_string = true;
            }
            else if (is_string == true) {
                tokens.push_back("string:" + string_var);
                string_var = "";
                is_string = false;
                tok = "";
            }
        }

        if(is_string) {
            string_var += tok;
            tok = "";
        }
    }

    /*for(int i=0; i<tokens.size(); i++) {
        cout << tokens[i] << ' ';
    }*/
    //cout << tokens[0] + " " + tokens[1].substr(0,6) << endl;

    return tokens;
}

void parser(const vector<string> &tokens)
{
    unsigned int i = 0;

    while(i < tokens.size()) {
        if(tokens[i] + " " + tokens[i+1].substr(0,6) == "output string") {
            if(tokens[i+1].substr(0,6) == "string")
                cout << tokens[i+1].substr(8, tokens[i+1].size());
            i += 2;
        }
        else if(tokens[i] + " " + tokens[i+1].substr(0,6) == "nl_output string") {
            if(tokens[i+1].substr(0,6) == "string")
                cout << tokens[i+1].substr(8, tokens[i+1].size()) << endl;
            i += 2;
        }
    }
}

int main(int argc, char *argv[])
{
    ifstream data;
    open_file(argv[1], data);

    string data_str;
    vector<string> tokens = lexer(data_str, data);
    parser(tokens);

    return 0;
}

To test the
interpreter, simply compile the code in your command prompt/terminal window. In my case I did: g++ C:\main.cpp -o quartz.exe Then run the [insert executable name].exe. The .exe takes one command line argument, which is the path to your .qz file. To make the .qz file, make a text file, and choose to rename the extension to .qz. Or if you don't want to go through that hassle, a .txt file works fine too. The three main questions I have are: Is the way I'm reading over my string, and adding my tokens to the tokens vector, inefficient and slow? Is it bad practice to read a file until a NULL character ('\0') is reached? Is it mandatory to close a file after opening it? What might occur if I choose not to? Answer: The three main questions I have are: OK. Is the way I'm reading over my string, and adding my tokens to the tokens vector, inefficient and slow? It's not the worst I have seen. But there does seem to be an awful lot of copying going on. You don't have to actually send back strings as the tokens. The lexer usually reads the string and converts this into a stream (or a vector) of lexemes. The lexemes only need to be a stream of numbers. nl_output => 256 output => 257 <string> => 258 But the worst part is that it is not clear what you are trying to achieve (without really digging into the code). Your code should be self-documenting and currently is not. Is it bad practice to read a file until a NULL character ('\0') is reached? Yes. Because there can potentially be '\0' characters as valid input. Are you assuming that the file is null terminated? It is not. When you reach the end of file, the end-of-file flag will be set on the stream. Is it mandatory to close a file after opening it? What might occur if I choose not to? Not mandatory. In my opinion it is not good practice (unless you plan to do something if it fails), and closing a file that was only read is not going to fail in an exciting way. Other things will have gone wrong first.
Let the destructor of the stream close the file for you.

Code Review

I think your lexer can be much more easily written, assuming all lexemes are "white space separated". The list of lexemes is:

TERMINAL:      nl_output
TERMINAL:      output
Quoted String: "<Any character that is not ">*"

Code:

std::vector<std::string> lexer(std::istream& s)
{
    std::vector<std::string> result;
    std::string word;
    while(s >> word)  // reads a word from the stream,
    {                 // dropping all preceding white space
        if (word == "nl_output") {
            result.push_back(word);
        }
        else if (word == "output") {
            result.push_back(word);
        }
        else if (word[0] == '"') {
            result.push_back(readComment(word, s));
        }
        else {
            // Error
        }
    }
    return result;
}

std::string readComment(std::string const& word, std::istream& s)
{
    // First see if the whole quote is in the first word.
    auto find = std::find(std::begin(word) + 1, std::end(word), '"');
    if (find != std::end(word)) {
        auto extraStart = find + 1;
        auto extraDist  = std::distance(extraStart, std::end(word));
        for(int loop = 0; loop < extraDist; ++loop) {
            s.unget();
        }
        return word.substr(0, std::distance(std::begin(word), extraStart));
    }

    // OK, the quote spans multiple words.
    std::string moreData;
    std::getline(s, moreData, '"');
    return word + moreData + '"';
}

But this logic will get convoluted real quickly. I suggest you use a real lexer (like flex). Writing the rules is much simpler:

Space           [ \r\n\t]
QuotedString    "[^"]*"

%%

nl_output        {return 256;}
output           {return 257;}
{QuotedString}   {return 258;}
{Space}          {/* Ignore */}
.                {error("Unmatched character");}

%%
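The answer's lexeme-numbering idea (nl_output => 256, output => 257, string => 258) can be sketched outside C++ as well. Below is a hypothetical, minimal Python version — my own illustration, not the answer's implementation — that lexes whitespace-separated keywords and quoted strings into (code, text) pairs and interprets the resulting stream:

```python
# Token codes, mirroring the answer's lexeme numbering (assumed values).
NL_OUTPUT, OUTPUT, STRING = 256, 257, 258

def lex(source):
    """Turn a Quartz program into a list of (code, text) lexemes."""
    tokens = []
    i, n = 0, len(source)
    while i < n:
        c = source[i]
        if c.isspace():
            i += 1                              # skip whitespace between lexemes
        elif c == '"':
            end = source.index('"', i + 1)      # ValueError if the quote is unterminated
            tokens.append((STRING, source[i + 1:end]))
            i = end + 1
        else:
            j = i
            while j < n and not source[j].isspace() and source[j] != '"':
                j += 1
            word = source[i:j]
            if word == 'nl_output':
                tokens.append((NL_OUTPUT, word))
            elif word == 'output':
                tokens.append((OUTPUT, word))
            else:
                raise SyntaxError('unknown keyword: %r' % word)
            i = j
    return tokens

def run(tokens):
    """Interpret the lexeme stream: each keyword consumes the following string."""
    out = []
    it = iter(tokens)
    for code, _ in it:
        kind, text = next(it)
        assert kind == STRING, 'keyword must be followed by a string'
        out.append(text + ('\n' if code == NL_OUTPUT else ''))
    return ''.join(out)
```

Because the parser only sees small integer codes plus the string payload, adding a new keyword is a one-line change in the lexer, which is the point the answer makes about lexeme streams.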
{ "domain": "codereview.stackexchange", "id": 21479, "tags": "c++, parsing, interpreter" }
Does each spectral line of an atom/molecule have a unique lineshape?
Question: A spectral line is determined by a particular transition in an atom or molecule. In reality, this line isn't infinitely sharp, but has a small distribution about the resonance frequency as a result of a few things. This distribution will have a lineshape, e.g., Gaussian, Lorentzian, Voigt, etc. My first question is: is each spectral line, corresponding to a unique transition within an atom or molecule, going to have a unique lineshape? My initial reasoning leads me to say yes. Given that lifetime broadening (which gives a spectral line its natural width) concerns the energy uncertainty ΔE of the transition, and this ΔE is different for every transition in a given atom/molecule, then the lifetime broadening should be different and thus the shape as well. The other sources of broadening, like Doppler or collision broadening, should apply uniformly. Is all of this correct? Secondly, I'd like to ask about isotopes: different isotopes will have different, unique spectra compared to each other (even if the difference is subtle). Will the same transition in each isotope have the same lineshape? In other words, will a transition in isotope 1 have the same lineshape as the same transition in isotope 2? (By "the same", I mean that even though the frequency / energy gap is slightly different, it's still the same transition between particular orbitals). Answer: A lot of questions, but since they are related, we go on: A spectral line is determined by a particular transition in an atom or molecule. In reality, this line isn't infinitely sharp, but has a small distribution about the resonance frequency as a result of a few things. This distribution will have a lineshape, e.g., Gaussian, Lorentzian, Voigt, etc. My first question is: is each spectral line, corresponding to a unique transition within an atom or molecule, going to have a unique lineshape? Unique, maybe no. Remember we also have measurement accuracies in play. A few popular shapes, yes.
A lot of spectral lines (e.g. the famous sodium doublet at 589nm) are in fact multiplets, and this was discovered after the equipment and measurement techniques evolved enough precision to distinguish them. Multiplets have their intrinsic intensity ratios between the constituent lines, so yes, this can count as different shapes. My initial reasoning leads me to say yes. Given that lifetime broadening (which gives a spectral line its natural width) concerns the energy uncertainty ΔE of the transition, and this ΔE is different for every transition in a given atom/molecule, then the lifetime broadening should be different and thus the shape as well. The other sources of broadening, like Doppler or collision broadening, should apply uniformly. Is all of this correct? I don't see flaws there. Secondly, I'd like to ask about isotopes: different isotopes will have different, unique spectra compared to each other (even if the difference is subtle). Will the same transition in each isotope have the same lineshape? In other words, will a transition in isotope 1 have the same lineshape as the same transition in isotope 2? (By "the same", I mean that even though the frequency / energy gap is slightly different, it's still the same transition between particular orbitals). The above consideration about the lifetime broadening still holds. Since the frequency differences are this subtle, the lifetime broadening differences will be correspondingly subtle, but in theory they will be there. Then again, see the hyperfine structure - isotopes do have different nuclear spins, different nuclear magnetic moments and, as an effect, different hyperfine splitting. Depending on your wavelength resolution, this may also count as a different "shape" of a line or as completely different lines. Now I see you also talk about molecules in the title of your question.
With molecules in a gas phase, everything becomes a multiplet by adding and subtracting vibrational and rotational transition energies to/from the electron transition energy. A beautiful example e.g. here: https://opentextbc.ca/universityphysicsv3openstax/chapter/molecular-spectra/ a comment from @Landak with some good remarks: This is a great answer, but it might also be an idea to mention the fact that Lorentzian shapes arise as the Fourier transform of an exponential damped sinusoid (i.e. natural lifetime decay) and Gaussian lineshapes arise if diffusion dominates. The situation is more complex if the experimental imperfections (or natural properties of the material studied) are considered – for an obvious but different example think of the homogeneity of the magnetic field in NMR or EPR spectroscopy where the lineshape observed can be completely changed by it.
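To see why the Gaussian (Doppler) part usually dominates the Lorentzian (lifetime) part in a room-temperature gas, here is a rough Python sketch. The sodium numbers (589 nm line, ~16 ns lifetime, mass 23 u) are illustrative values I'm supplying, not taken from the answer:

```python
import math

def natural_fwhm(lifetime_s):
    """Natural (lifetime) broadening: a Lorentzian with FWHM = 1 / (2*pi*tau)."""
    return 1.0 / (2.0 * math.pi * lifetime_s)

def doppler_fwhm(nu0_hz, mass_kg, temp_k):
    """Doppler broadening: a Gaussian with FWHM = nu0 * sqrt(8 kT ln2 / (m c^2))."""
    k_b = 1.380649e-23   # Boltzmann constant, J/K
    c = 2.99792458e8     # speed of light, m/s
    return nu0_hz * math.sqrt(8.0 * k_b * temp_k * math.log(2.0) / (mass_kg * c * c))

# Illustrative numbers for the sodium D line (~589 nm, lifetime ~16 ns, mass ~23 u):
nu0 = 2.99792458e8 / 589e-9    # transition frequency, Hz
m_na = 23 * 1.66053907e-27     # sodium mass, kg
tau = 16e-9                    # excited-state lifetime, s

print('natural (Lorentzian) FWHM: %.2e Hz' % natural_fwhm(tau))
print('Doppler (Gaussian) FWHM at 300 K: %.2e Hz' % doppler_fwhm(nu0, m_na, 300))
```

With these numbers the lifetime width comes out around 10 MHz while the Doppler width is around a GHz, i.e. about two orders of magnitude larger, which is why the observed profile looks essentially Gaussian unless the gas is cooled or the lines are pressure-broadened.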
{ "domain": "physics.stackexchange", "id": 88860, "tags": "spectroscopy, atoms, molecules, photon-emission, isotopes" }
Scraping Instagram - Download posts, photos - videos
Question: Python script that can downloads public and private profiles images and videos, like Gallery with photos or videos. It saves the data in the folder. How it works: Log in in instragram using selenium and navigate to the profile Checking the availability of Instagram profile if it's private or existing Creates a folder with the name of your choice Gathering urls from images and videos Using threads and multiprocessing improving the execution speed My code: from pathlib import Path import requests import time from selenium import webdriver from selenium.common.exceptions import NoSuchElementException, TimeoutException from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from multiprocessing.dummy import Pool from concurrent.futures import ThreadPoolExecutor from typing import * import argparse import shutil class PrivateException(Exception): pass class InstagramPV: MAX_WORKERS: int = 8 N_PROCESSES: int = 8 def __init__(self, username: str, password: str, folder: Path, profile_name: str): """ :param username: Username or E-mail for Log-in in Instagram :param password: Password for Log-in in Instagram :param folder: Folder name that will save the posts :param profile_name: The profile name that will search """ self.username = username self.password = password self.folder = folder self.http_base = requests.Session() self.profile_name = profile_name self.links: List[str] = [] self.pictures: List[str] = [] self.videos: List[str] = [] self.url: str = 'https://www.instagram.com/{name}/' self.posts: int = 0 self.driver = webdriver.Chrome() def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): self.http_base.close() self.driver.close() def check_availability(self) -> None: """ Checking Status code, Taking number of posts, Privacy and followed by viewer Raise Error if the Profile is private and not following by viewer :return: None 
""" search = self.http_base.get(self.url.format(name=self.profile_name), params={'__a': 1}) search.raise_for_status() load_and_check = search.json() self.posts = load_and_check.get('graphql').get('user').get('edge_owner_to_timeline_media').get('count') privacy = load_and_check.get('graphql').get('user').get('is_private') followed_by_viewer = load_and_check.get('graphql').get('user').get('followed_by_viewer') if privacy and not followed_by_viewer: raise PrivateException('[!] Account is private') def create_folder(self) -> None: """Create the folder name""" self.folder.mkdir(exist_ok=True) def login(self) -> None: """Login To Instagram""" self.driver.get('https://www.instagram.com/accounts/login') WebDriverWait(self.driver, 10).until(EC.presence_of_element_located((By.TAG_NAME, 'form'))) self.driver.find_element_by_name('username').send_keys(self.username) self.driver.find_element_by_name('password').send_keys(self.password) submit = self.driver.find_element_by_tag_name('form') submit.submit() """Check For Invalid Credentials""" try: var_error = WebDriverWait(self.driver, 4).until(EC.presence_of_element_located((By.CLASS_NAME, 'eiCW-'))) raise ValueError(var_error.text) except TimeoutException: pass try: """Close Notifications""" notifications = WebDriverWait(self.driver, 20).until( EC.presence_of_element_located((By.XPATH, '//button[text()="Not Now"]'))) notifications.click() except NoSuchElementException: pass """Taking cookies""" cookies = { cookie['name']: cookie['value'] for cookie in self.driver.get_cookies() } self.http_base.cookies.update(cookies) """Check for availability""" self.check_availability() self.driver.get(self.url.format(name=self.profile_name)) self.scroll_down() def posts_urls(self) -> None: """Taking the URLs from posts and appending in self.links""" elements = self.driver.find_elements_by_xpath('//a[@href]') for elem in elements: urls = elem.get_attribute('href') if urls not in self.links and 'p' in urls.split('/'): self.links.append(urls) def 
scroll_down(self) -> None: """Scrolling down the page and taking the URLs""" last_height = self.driver.execute_script('return document.body.scrollHeight') while True: self.driver.execute_script('window.scrollTo(0, document.body.scrollHeight);') time.sleep(1) self.posts_urls() time.sleep(1) new_height = self.driver.execute_script("return document.body.scrollHeight") if new_height == last_height: break last_height = new_height self.submit_links() def submit_links(self) -> None: """Gathering Images and Videos and pass to function <fetch_url> Using ThreadPoolExecutor""" self.create_folder() print('[!] Ready for video - images'.title()) print(f'[*] extracting {len(self.links)} posts , please wait...'.title()) with ThreadPoolExecutor(max_workers=self.MAX_WORKERS) as executor: for link in self.links: executor.submit(self.fetch_url, link) def get_fields(self, nodes: Dict, *keys) -> Any: """ :param nodes: The json data from the link using only the first two keys 'graphql' and 'shortcode_media' :param keys: Keys that will be add to the nodes and will have the results of 'type' or 'URL' :return: The value of the key <fields> """ fields = nodes['graphql']['shortcode_media'] for key in keys: fields = fields[key] return fields def fetch_url(self, url: str) -> None: """ This function extracts images and videos :param url: Taking the url :return None """ logging_page_id = self.http_base.get(url, params={'__a': 1}).json() if self.get_fields(logging_page_id, '__typename') == 'GraphImage': image_url = self.get_fields(logging_page_id, 'display_url') self.pictures.append(image_url) elif self.get_fields(logging_page_id, '__typename') == 'GraphVideo': video_url = self.get_fields(logging_page_id, 'video_url') self.videos.append(video_url) elif self.get_fields(logging_page_id, '__typename') == 'GraphSidecar': for sidecar in self.get_fields(logging_page_id, 'edge_sidecar_to_children', 'edges'): if sidecar['node']['__typename'] == 'GraphImage': image_url = sidecar['node']['display_url'] 
self.pictures.append(image_url) else: video_url = sidecar['node']['video_url'] self.videos.append(video_url) else: print(f'Warning {url}: has unknown type of {self.get_fields(logging_page_id,"__typename")}') def download_video(self, new_videos: Tuple[int, str]) -> None: """ Saving the video content :param new_videos: Tuple[int,str] :return: None """ number, link = new_videos with open(self.folder / f'Video{number}.mp4', 'wb') as f: content_of_video = self.http_base.get(link, stream=True).raw shutil.copyfileobj(content_of_video, f) def images_download(self, new_pictures: Tuple[int, str]) -> None: """ Saving the picture content :param new_pictures: Tuple[int, str] :return: None """ number, link = new_pictures with open(self.folder / f'Image{number}.jpg', 'wb') as f: content_of_picture = self.http_base.get(link, stream=True).raw shutil.copyfileobj(content_of_picture, f) def downloading_video_images(self) -> None: """Using multiprocessing for Saving Images and Videos""" print('[*] ready for saving images and videos!'.title()) picture_data = enumerate(self.pictures) video_data = enumerate(self.videos) pool = Pool(self.N_PROCESSES) pool.map(self.images_download, picture_data) pool.map(self.download_video, video_data) print('[+] Done') def main(): parser = argparse.ArgumentParser() parser.add_argument('-U', '--username', help='Username or your email of your account', action='store', required=True) parser.add_argument('-P', '--password', help='Password of your account', action='store', required=True) parser.add_argument('-F', '--filename', help='Filename for storing data', action='store', required=True) parser.add_argument('-T', '--target', help='Profile name to search', action='store', required=True) args = parser.parse_args() with InstagramPV(args.username, args.password, Path(args.filename), args.target) as pv: pv.login() pv.downloading_video_images() if __name__ == '__main__': main() Usage: myfile.py -U myemail@hotmail.com -P mypassword -F Mynamefile -T stackoverjoke 
My previous comparative review tag:Download pictures (or videos) from Instagram using Selenium Answer: More constants This: self.url: str = 'https://www.instagram.com/{name}/' appears to be a constant, so it can join the others in class scope. While you're doing that, you can also pull the URL from self.driver.get('https://www.instagram.com/accounts/login') into a constant; and also pull the base URL out. In other words: class InstagramPV: MAX_WORKERS: int = 8 N_PROCESSES: int = 8 BASE_URL = 'https://www.instagram.com/' PROFILE_URL_FMT = BASE_URL + '{name}/' LOGIN_URL = BASE_URL + 'accounts/login' Nested get These: load_and_check.get('graphql').get('user').get('edge_owner_to_timeline_media').get('count') won't actually do what you want, which is a fail-safe object traversal. For that you need to provide defaults that are empty dictionaries: self.posts = ( load_and_check.get('graphql', {}) .get('user', {}) .get('edge_owner_to_timeline_media', {}) .get('count') ) Also, the first part should be factored out into its own temporary variable, since it's used three times: user = ( load_and_check.get('graphql', {}) .get('user', {}) ) Methods for reuse self.driver.execute_script("return document.body.scrollHeight") should be factored out into a new method for re-use. Static function This: def get_fields(self, nodes: Dict, *keys) -> Any: """ :param nodes: The json data from the link using only the first two keys 'graphql' and 'shortcode_media' :param keys: Keys that will be add to the nodes and will have the results of 'type' or 'URL' :return: The value of the key <fields> """ fields = nodes['graphql']['shortcode_media'] for key in keys: fields = fields[key] return fields doesn't ever use self, which is a big clue that it doesn't belong as an instance method. You should just make it a @staticmethod. The only reason I don't recommend it moving to global scope is that it still has knowledge of the Instagram data format, with its reference to graphql. 
Dictionary traversal The loop in get_fields can be replaced with a call to functools.reduce(dict.get, keys, media). Also, keys - even though it is a variadic argument - can still receive a type hint, and should be Iterable[str]. nodes itself, if you don't know a lot about the structure of the dictionary, can still be narrowed to nodes: Dict[str, Any]. Context manager for response Now that you're using the streaming interface for Requests (nice!), it's more important that you use the response object as a context manager. For more information read https://github.com/psf/requests/issues/4136 Basically: with open(self.folder / f'Video{number}.mp4', 'wb') as f, \ self.http_base.get(link, stream=True) as response: shutil.copyfileobj(response.raw, f)
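Putting the reduce suggestion together, a self-contained sketch (the payload dictionary below is a made-up stand-in for Instagram's real JSON, shaped like what the code under review expects):

```python
from functools import reduce
from typing import Any, Dict

def get_fields(nodes: Dict[str, Any], *keys: str) -> Any:
    """Walk the nested dictionary under graphql/shortcode_media, one key at a time."""
    media = nodes['graphql']['shortcode_media']
    return reduce(dict.get, keys, media)

# Made-up payload for illustration:
payload = {'graphql': {'shortcode_media': {
    '__typename': 'GraphSidecar',
    'edge_sidecar_to_children': {'edges': [{'node': {'__typename': 'GraphImage'}}]},
}}}

print(get_fields(payload, '__typename'))                         # the post type
print(get_fields(payload, 'edge_sidecar_to_children', 'edges'))  # the list of child nodes
```

Note that `reduce(dict.get, keys, media)` calls `dict.get(media, key)` repeatedly, so it returns None (rather than raising) as soon as a key is missing, matching the fail-safe traversal discussed above.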
{ "domain": "codereview.stackexchange", "id": 37964, "tags": "python, python-3.x, web-scraping, selenium, instagram" }
how to improve maps from octomap?
Question: Hi, I am trying to build quality maps by using octomap on ROS Hydro, using an Xtion Pro Live sensor mounted on the robot. The map generated by octomap does not update well when revisiting the same area, and corrupts the map. I am using gmapping for SLAM with a Hokuyo sensor. The 2D grid map builds up nicely and updates, but the octomap overlays the map and does not update. I am doing this on a P3DX Pioneer robot. My octomap_mapping.launch file looks like this:

<launch>
  <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
    <param name="resolution" value="0.05" />
    <!-- fixed map frame (set to 'map' if SLAM or localization running!) -->
    <param name="frame_id" type="string" value="odom" />
    <!-- maximum range to integrate (speedup!) -->
    <param name="sensor_model/max_range" value="5.0" />
    <!-- data source to integrate (PointCloud2) -->
    <remap from="cloud_in" to="/camera/depth_registered/points" />
  </node>
</launch>

Can someone help me and suggest ideas as to how I can improve the maps generated by octomap? Thanks Alex Originally posted by AlexR on ROS Answers with karma: 654 on 2015-02-20 Post score: 2 Answer: According to your config in the launch file, you are mapping in the odom frame, so you only use odometry. Accumulated error will cause a drift in the position estimate, so errors when re-visiting known areas are to be expected. As mentioned in the comment above that line in the launch file, you need to switch to mapping in your actual map frame when using a localization source or SLAM. Originally posted by AHornung with karma: 5904 on 2015-02-21 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by AlexR on 2015-02-21: I see changing the frame id from odom to map solved the problem. Thanks a lot.
{ "domain": "robotics.stackexchange", "id": 20938, "tags": "ros, octomap, octomap-mapping, octomap-server" }
Do cold neutrons produce radioactive elements?
Question: As far as I have understood neutron nuclear reactions, if a fast neutron gets captured by a nucleus, since the kinetic energy has to go somewhere, the newly formed nucleus is radioactive and must lose this energy (by gamma emission or decay). What happens in the case of cold (meaning kinetic energy below 0.025 eV) neutrons? Does irradiation of a stable element by cold neutrons produce radioactive isotopes? If yes, is it only theoretical, or have there been experiments? Answer: Every neutron capture causes a nuclear transmutation. The “neutron separation energy” of the daughter nucleus is generally released promptly, usually as a cascade of gamma rays. The ground states of the daughter nuclei may be stable or unstable. The new unstable nuclei are referred to as “neutron activation products”; material which has become radioactive following exposure to neutrons is said to have been “activated.” Some practical examples: High-density polyethylene, whose chemical formula is long chains of $\rm CH_2$, mostly captures neutrons on hydrogen, turning it into deuterium, with fewer captures changing carbon-12 to carbon-13. Neither capture product is radioactive, so clean HDPE is not activated by neutron beams. Aluminum has only one stable isotope. Neutron capture on aluminum forms aluminum-28, which beta decays to silicon with a half-life of a couple of minutes. Neutron-activated aluminum has no detectable activity after an hour or so. If you were building an experiment that used cesium iodide detectors to look at the prompt gamma rays from neutron capture, but your shielding design were flawed and neutrons entered your detector crystals, you would also detect radiation from the decays of iodine-128 (25m), the isomer cesium-134m (2.9h), and the ground state cesium-134 (2y). Boy howdy, was that an expensive mistake. Not that I’m bitter. Cold neutrons in the air undergo a nucleon transfer reaction with nitrogen, $$\rm ^{14}N + n \to p + {}^{14}C$$ which has biological consequences.
If you own a smoke detector with an americium ionization source, that americium was produced by repeated neutron capture on uranium in a reactor core. In general, neutrons whose kinetic energy is below the energy of any nuclear resonance have cross section $$ \sigma = \sigma_\text{thermal} \sqrt{\frac{E_\text{thermal}}{E}} $$ where $E_\text{thermal} \approx \rm \frac{1}{40}\,eV$ is the kinetic energy associated with “room temperature.” Nuclear resonances bottom out in the kilo-eV range, so the approximation is good for basically all milli-eV neutrons. The energy dependence is often referred to as a “one over vee” cross section, referring to the neutron velocity $v = \sqrt{2E/m}$. A hand-waving interpretation of a $1/v$ cross section dependence is that the probability of neutron capture is proportional to the neutron’s “dwell time” in the vicinity of the nucleus. The smallest neutron separation energies are mega-eV, so the kinetic energy of a milli-eV neutron is completely negligible in a capture reaction.
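The 1/v scaling above is easy to put into numbers. A small sketch, with an assumed thermal cross section roughly like nitrogen-14's (~1.8 barns is my illustrative figure, not a value given in the answer):

```python
import math

def capture_cross_section(sigma_thermal_barn, energy_ev, e_thermal_ev=0.025):
    """'1/v' law: sigma = sigma_thermal * sqrt(E_thermal / E), below the resonance region."""
    return sigma_thermal_barn * math.sqrt(e_thermal_ev / energy_ev)

sigma_thermal = 1.8  # barns, assumed (roughly the 14N(n,p)14C thermal value)

for e_ev in (0.025, 0.005, 0.001):  # thermal, cold, very cold neutrons
    print('%.3f eV -> %.2f barns' % (e_ev, capture_cross_section(sigma_thermal, e_ev)))
```

The colder (slower) the neutron, the larger the capture cross section: halving the velocity (quartering the energy) doubles the capture probability, in line with the "dwell time" picture.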
{ "domain": "physics.stackexchange", "id": 86326, "tags": "nuclear-physics, neutrons, weak-interaction" }
Choosing a graphics card for Gazebo
Question: I am in the process of obtaining the appropriate hardware for Gazebo. Do you know if the following graphics cards will work well with Gazebo? NVIDIA Quadro 600 NVIDIA Quadro 2000 I would like some confirmation before I make purchases. Thank you for your help! Originally posted by mchapman on Gazebo Answers with karma: 1 on 2013-07-18 Post score: 0 Answer: Found some benchmarks for reference: Quadro 2000, Quadro 600, GTX 650. In addition, from the specs on the Nvidia site, the memory bandwidth of the Quadro 2000 (@41.5 GB/sec) is about half that of the GTX 650 (@80.0 GB/sec), and the Quadro 600 (@25.6 GB/sec) is about half that of the Quadro 2000. Since a lot of the simulation work is getting the images off of these cards, one can probably expect performance to scale accordingly as well. Originally posted by hsu with karma: 1873 on 2013-07-18 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by mchapman on 2013-07-19: Thank you very much for your help!
{ "domain": "robotics.stackexchange", "id": 3380, "tags": "gazebo" }
Plasma cells and memory cells
Question: What decides whether an activated B cell will get converted to a plasma cell or a memory B cell? Is it necessary that, out of a mitotic division, one will convert to a memory B cell and the other to a plasma cell? Answer: A summary of what we know so far includes that for B cells to become plasma cells they need BLIMP-1. For memory cells, CD40 on B cells engaging with CD40L is required. CD40L is expressed on Th cells, and thus they are also necessary for memory cell development. Problem is, though, it's really damn confusing after that. It really isn't known, but if you're interested in this stuff consider reading this review, which talks about memory cell differentiation, and this, which discusses plasma cell differentiation. The image below helps (taken from this review). PC = Plasma Cell, MBC = Memory B cell
{ "domain": "biology.stackexchange", "id": 1076, "tags": "immunology" }
Transforming a list of two-dimensional coordinates into a flat list of relative changes
Question: I have a little function that parses a nested list of coordinates into a flat list of compressed coordinates. By compressed coordinates, I mean that only the deltas (distances) between consecutive coordinates are stored in the list, and the float coordinates are transformed into integers.

input = [[-8081441,5685214], [-8081446,5685216], [-8081442,5685219], [-8081440,5685211], [-8081441,5685214]]
output = [-8081441, 5685214, 5, -2, -4, -3, -2, 8, 1, -3]

def parseCoords(coords):
    # keep the first x,y coordinates
    parsed = [int(coords[0][0]), int(coords[0][1])]
    for i in xrange(1, len(coords)):
        parsed.extend([int(coords[i-1][0]) - int(coords[i][0]),
                       int(coords[i-1][1]) - int(coords[i][1])])
    return parsed

parsedCoords = parseCoords(input)

As the input list is really big, is there a way to improve the function, maybe by using generators or a list comprehension? Answer: for i in xrange(1, len(coords)) is a red flag, since it is preferred in Python to iterate directly over the elements rather than the indices. If you truly need the indices, you can still use enumerate. Here it would look like:

for i, coord in enumerate(coords[1:]):
    previous = coords[i]
    parsed.extend([int(previous[0] - coord[0]), int(previous[1] - coord[1])])

which seems worse, as it creates a copy of coords when slicing it and still uses an index to retrieve the previous element. Instead, it seems better to convert the list into an iterator and manually handle the current/previous coordinates. Something like:

def parse_coordinates(coords):
    iterator = iter(coords)
    previous_x, previous_y = iterator.next()
    parsed = [int(previous_x), int(previous_y)]
    for current_x, current_y in iterator:
        parsed.append(int(previous_x - current_x))
        parsed.append(int(previous_y - current_y))
        previous_x, previous_y = current_x, current_y
    return parsed

You can note the use of append instead of extend, which will avoid building a temporary list.
append tries to be smart when resizing the underlying array, so two consecutive appends should not incur a bigger performance hit than a single extend. But all in all, using append or extend in a for loop is often better written using a list comprehension or a generator. You can easily turn this function into a generator by turning these appends into yields:

def parse_coordinates(coords):
    iterator = iter(coords)
    previous_x, previous_y = iterator.next()
    yield int(previous_x)
    yield int(previous_y)
    for current_x, current_y in iterator:
        yield int(previous_x - current_x)
        yield int(previous_y - current_y)
        previous_x, previous_y = current_x, current_y

There is another advantage to this approach: if the coords parameter is empty, your approach using coords[0] and the first one building a list using iterator.next() will crash, raising either an IndexError or a StopIteration. This generator can easily be fed to the list constructor or a for loop and won't crash, producing either an empty list or not entering the loop. Lastly, you could drop manually managing the previous/current thing by using itertools.tee, which is the key feature of the pairwise recipe:

from itertools import tee, izip

def parse_coordinates(coords):
    prev, cur = tee(coords)
    x, y = cur.next()
    yield int(x)
    yield int(y)
    for (previous_x, previous_y), (current_x, current_y) in izip(prev, cur):
        yield int(previous_x - current_x)
        yield int(previous_y - current_y)
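To sanity-check the compression against the question's example, here is a Python 3 round-trip sketch (the answer's code targets Python 2, so iterator.next() becomes next(iterator)); restore_coordinates is a hypothetical decoder I'm adding to invert the delta encoding:

```python
def parse_coordinates(coords):
    """Python 3 version of the answer's generator."""
    it = iter(coords)
    px, py = next(it)
    yield int(px)
    yield int(py)
    for cx, cy in it:
        yield int(px - cx)
        yield int(py - cy)
        px, py = cx, cy

def restore_coordinates(flat):
    """Hypothetical inverse: first pair is absolute, then current = previous - delta."""
    it = iter(flat)
    x, y = next(it), next(it)
    coords = [[x, y]]
    for dx, dy in zip(it, it):  # zip(it, it) pairs up consecutive elements
        x, y = x - dx, y - dy
        coords.append([x, y])
    return coords

points = [[-8081441, 5685214], [-8081446, 5685216], [-8081442, 5685219],
          [-8081440, 5685211], [-8081441, 5685214]]
flat = list(parse_coordinates(points))
assert flat == [-8081441, 5685214, 5, -2, -4, -3, -2, 8, 1, -3]
assert restore_coordinates(flat) == points
```

Since the deltas are computed as previous minus current, the decoder must subtract each delta from the running position to walk back to absolute coordinates.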
{ "domain": "codereview.stackexchange", "id": 23095, "tags": "python, python-2.x, coordinate-system, generator" }
Why does earth look blue from outer space?
Question: I know it's more than 70% water. But what has it got to do with earth's colour? Answer: Quite a simple answer: Scattering of light (Rayleigh scattering would be more precise here...) An observer on the ground sees the sky as blue due to scattering of light by air molecules present in the atmosphere. For an observer in space, the water bodies reflect the color of the sky... The water bodies (oceans, lakes, rivers) appear blue ('cause water is quite colorless) because of the way sunlight is selectively scattered as it goes through our atmosphere. Taking the Raman effect into account, water absorbs more of the red light in sunlight. In this way, water also enhances the scattering of blue light in the surroundings. By the Rayleigh scattering law (more important here): The amount of scattering is inversely proportional to the fourth power of the wavelength. Due to the large amount of $N_2$ and $O_2$ molecules (78% and 21%) in the atmosphere, blue light, which has a shorter wavelength, is scattered to a greater extent. Thus, the earth wouldn't be blue if it didn't have enough $O_2$ and $N_2$ molecules in its atmosphere. The scattering depends on the characteristics of the gaseous molecules in the atmosphere... This is applicable to other planets also (e.g. Mars appears red, Venus appears yellow, etc.).
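The inverse-fourth-power law is easy to illustrate in a couple of lines of Python; the 450 nm (blue) and 700 nm (red) wavelengths are illustrative choices of mine, not values from the answer:

```python
def rayleigh_ratio(lambda_short_nm, lambda_long_nm):
    """Relative scattered intensity under I ∝ 1/λ^4:
    how many times more strongly the shorter wavelength is scattered."""
    return (lambda_long_nm / lambda_short_nm) ** 4

# Blue (~450 nm) vs red (~700 nm):
print(rayleigh_ratio(450, 700))  # roughly 6: blue is scattered about six times more strongly
```

That factor of roughly six is why both the sky seen from the ground and the atmosphere seen from orbit are dominated by scattered blue light.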
{ "domain": "physics.stackexchange", "id": 4543, "tags": "water, visible-light" }
If an object that's bigger than the other collides with one that's more massive, who wins?
Question: So, let's say you've got two objects (A and B). A has a mass of 20 suns, while B has a mass of 10 suns, but A is only 10 suns in radius, while B is 30 suns in radius. If both objects collided, who would survive and "win"? Does it depend on density? Or on mass? Shouldn't the smaller but denser object go into the core of B and extract all the mass, or what? EDIT: I do know that if two objects aren't going fast enough, they can bounce on collision, but in this example they're going at high speeds. Answer: If they are stars then they may well combine. If you'd imagine them like bowling balls, then size has nothing to do with it! It's all down to momentum, p = mv, so the only factors affecting momentum are mass and velocity. Assuming they're both going the same speed, then the more massive one will "win", so to speak. An air-filled balloon vs. a bowling ball at the same velocity is also a good way to think about it! If you want to get more complicated, then the effects of air resistance on larger objects can also come into play with your question. Hope this helps!
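Putting the question's numbers into p = mv (assuming, for illustration, that both objects move at the same speed in arbitrary units):

```python
def momentum(mass, velocity):
    """Linear momentum p = m * v; radius never enters."""
    return mass * velocity

v = 1.0                 # same speed for both, arbitrary units
p_a = momentum(20, v)   # A: 20 solar masses, radius 10 suns
p_b = momentum(10, v)   # B: 10 solar masses, radius 30 suns

print(p_a > p_b)  # the more massive object carries more momentum, despite being smaller
```

At equal speeds, A carries exactly twice B's momentum, which is the answer's point: the size difference is irrelevant.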
{ "domain": "physics.stackexchange", "id": 53547, "tags": "collision" }
rosrun: unable to locate the node robot_state_publisher
Question: The exact error I am having is ERROR: cannot launch node of type [robot_state_publisher/state_publisher]: can't locate node [state_publisher] in package [robot_state_publisher] I have cloned robot_state_publisher from the ROS wiki and have already done rosmake. I have also sourced the terminal to devel before using rosrun, as well as exported all the required core packages and the Python path to the terminal before trying to run robot_state_publisher. Here is my CMakeLists.txt file:
cmake_minimum_required(VERSION 2.8)
project(robot_state_publisher)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -Wall -Wextra")
find_package(orocos_kdl REQUIRED)
find_package(catkin REQUIRED COMPONENTS roscpp rosconsole rostime tf tf2_ros tf2_kdl kdl_parser)
find_package(Eigen3 REQUIRED)
find_package(urdfdom_headers REQUIRED)
catkin_package(
  LIBRARIES ${PROJECT_NAME}_solver
  INCLUDE_DIRS include
  DEPENDS roscpp rosconsole rostime tf2_ros tf2_kdl kdl_parser orocos_kdl urdfdom_headers
)
include_directories(SYSTEM ${EIGEN3_INCLUDE_DIRS})
include_directories(include ${catkin_INCLUDE_DIRS} ${orocos_kdl_INCLUDE_DIRS} ${urdfdom_headers_INCLUDE_DIRS})
link_directories(${orocos_kdl_LIBRARY_DIRS})
add_library(${PROJECT_NAME}_solver
  src/robot_state_publisher.cpp
  src/treefksolverposfull_recursive.cpp
)
target_link_libraries(${PROJECT_NAME}_solver ${catkin_LIBRARIES} ${orocos_kdl_LIBRARIES})
add_library(joint_state_listener src/joint_state_listener.cpp)
target_link_libraries(joint_state_listener ${PROJECT_NAME}_solver ${orocos_kdl_LIBRARIES})
add_executable(${PROJECT_NAME} src/joint_state_listener.cpp)
target_link_libraries(${PROJECT_NAME} ${PROJECT_NAME}_solver ${orocos_kdl_LIBRARIES})
# compile the same executable using the old name as well
add_executable(state_publisher src/joint_state_listener.cpp)
target_link_libraries(state_publisher ${PROJECT_NAME}_solver ${orocos_kdl_LIBRARIES})
Can you please help me solve this problem? I am trying to run a robot I have designed using xacro
on gazebo and control it with ros. I have already installed the ros_control and ros_gazebo packages. Thank you in advance. Originally posted by microbot on ROS Answers with karma: 96 on 2019-01-04 Post score: 0 Original comments Comment by gvdhoorn on 2019-01-04: I have cloned the robot_state_publisher from ros wiki why? and already had done rosmake. why? Did sudo apt-get install ros-kinetic-robot-state-publisher not work for you? Comment by gvdhoorn on 2019-01-04: Please remove robot_state_publisher from your workspace (remove your build and devel folders as well), install the binary version, rebuild your workspace and try again. Comment by microbot on 2019-01-04: After cloning it from the wiki, I used rosmake to compile the cloned package. I am not sure whether it's mandatory or not, but I followed the same process I follow to create any new packages in a catkin workspace. Comment by gvdhoorn on 2019-01-04: I followed the same process I follow to create any new packages in a catkin workspace. rosmake is not used for Catkin packages. That would be catkin_make (or catkin build). Comment by gvdhoorn on 2019-01-04: In general: always try to use binary packages and install them using apt (or apt-get). There are few cases where you'd want to / need to build packages from source. git clone should not be your default for installing ROS packages. Comment by microbot on 2019-01-04: The build and devel folders contain other ROS packages that I have been using. Doesn't removing the entire folders affect the entire project in a way that I'd have to create them all again? And also, may I know how it is going to help? Comment by gvdhoorn on 2019-01-04: build and devel folders contain other ROS packages that I have been using. Doesn't removing the entire folders affect the entire project in a way that I'd have to create them all again? build and devel contain only derived resources -- if you're using a normal Catkin workspace setup. .. Comment by gvdhoorn on 2019-01-04: ..
That means that after removing build and devel, a catkin_make should result in all the pkgs in src being rebuilt. In the end devel should contain the same packages as before. and also may I know how it is going to help ? you've added robot_state_publisher to your workspace, .. Comment by gvdhoorn on 2019-01-04: .. and then built your workspace. If you now remove src/robot_state_publisher, but do not rebuild your workspace, your ROS environment will still point to the robot_state_publisher in your workspace, even though you've removed it. That's why a rebuild is a good idea. Comment by microbot on 2019-01-04: Thank you very much, I have deleted the devel and build directories from the catkin workspace, and when I used the catkin_make command as you said, they were created again. Thanks for explaining that part. So you suggest me to remove the cloned robot_state_publisher and then rebuild my workspace... Comment by microbot on 2019-01-04: ..and then install the robot-state-publisher using the apt-get command? I am kind of a beginner to the ROS and catkin environment, so when you suggested me to rebuild, can you please tell me what exactly you meant? Comment by gvdhoorn on 2019-01-04: To get this clear first: always first try to use apt (or apt-get) to install ROS pkgs. Don't go git cloning things. That's not how this works and it should not be your go-to command when installing ROS pkgs. There are situations where it's needed, but this is not one of them. Answer: As to your question: yes. Remove src/robot_state_publisher, remove the devel and build folders, run sudo apt-get install ros-kinetic-robot-state-publisher and then rebuild your workspace (run catkin_make). Originally posted by gvdhoorn with karma: 86574 on 2019-01-04 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by microbot on 2019-01-06: Thank you very much, now I have got it about how to install. Thank you once again.
Comment by microbot on 2019-01-07: The question I asked has already been answered, although I have a beginner's confusion: could you please point me to where the packages I installed using the apt-get command are located? I have checked in the ros folder and it contains only the ROS core packages. Can you please help me? Comment by sciduck on 2020-08-19: try roscd robot_state_publisher/ then pwd
{ "domain": "robotics.stackexchange", "id": 32233, "tags": "ros, gazebo, ros-kinetic, gazebo-ros-control, robot-state-publisher" }
Why can a person in a noisy room not hear someone outside the room, yet the outsider can hear the person in the room?
Question: If I'm in a noisy room (radio on, babies splashing in the bath, etc.) I struggle to hear someone shouting to me from outside the room. Yet to them, my voice comes through quite clearly. What causes this difference? Is my voice simply a lower frequency than the background noise, and thus more adept at passing through walls/doors etc., or is something else going on here? Answer: If we ignore reverberation, sound intensity follows the inverse-square law and falls off with the square of distance. Since distance does not depend on direction, if both of you are talking at roughly the same level, then the speech "signal" at your ears will be roughly the same. The noise, however, is closer to the person in the noisy room, and hence more intense. This means the signal-to-noise ratio (SNR) is lower for the person closer to the noise source. Our ability both to hear sounds and to understand speech depends critically on the SNR. Consider the following worked example: If both talkers are speaking such that the sound level 1 m away is 80 dB SPL, then the sound level 10 m away will be 60 dB SPL. If the sound level of the noise in the room is 80 dB SPL at 1 m from the center of the room, then the sound level 10 m away will be 60 dB SPL. Assuming the two talkers are 10 m apart, and one is 1 m from the center of the room while the other is 10 m from the center of the room, then at each listener's ears the signal level will be 60 dB SPL. The noise level at the ears of the listener near the center will be 80 dB SPL, giving an SNR of -20 dB, which will mean the speech is inaudible and unintelligible. The noise level at the ears of the listener far from the center of the room will be 60 dB SPL and the SNR will be 0 dB, which will make the speech both clearly audible and nearly perfectly intelligible.
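The worked example can be reproduced numerically. This is my own sketch using the free-field inverse-square level formula $L(d) = L_{1\,\mathrm{m}} - 20\log_{10}(d)$:

```python
import math

def level_db(L_at_1m, d):
    # free-field sound level at distance d (metres), inverse-square law
    return L_at_1m - 20.0 * math.log10(d)

speech_at_1m = 80.0  # dB SPL, both talkers
noise_at_1m = 80.0   # dB SPL at 1 m from the room's centre

# listener in the room: speech from 10 m away, noise from 1 m away
snr_inside = level_db(speech_at_1m, 10.0) - level_db(noise_at_1m, 1.0)
# listener outside: speech from 10 m away, noise from 10 m away
snr_outside = level_db(speech_at_1m, 10.0) - level_db(noise_at_1m, 10.0)
print(snr_inside, snr_outside)  # -20.0 and 0.0 dB, matching the example
```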
{ "domain": "physics.stackexchange", "id": 19046, "tags": "everyday-life, acoustics" }
GPS data from smartphone
Question: Hello, I want to add GPS data to my project in order to improve the robot localization estimate. I've read that some people did this successfully using an external GPS device. However, I would like to do this by pulling the GPS data from a smartphone. A few questions: Has someone already done this? I did some research but I couldn't find anything. How could I pull the GPS data from the smartphone? Does the smartphone output the GPS data in the same format as a regular GPS device? Thanks! Originally posted by gerhenz on ROS Answers with karma: 53 on 2014-08-12 Post score: 0 Answer: If you are using an Android device, you might try the android_sensors_driver (Google play link). This has worked relatively well for me in the past. Originally posted by gvdhoorn with karma: 86574 on 2014-08-12 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gerhenz on 2014-08-12: My intention was to use a Windows phone, but in case I don't manage to do it I will end up using an Android device. This package you mentioned seems really good, I will take a look. Thanks! Comment by gvdhoorn on 2014-08-12: ROS support for Windows phone is non-existent (afaik). You might be able to get one of the C# implementations to work on it, but it is probably going to take some effort.
{ "domain": "robotics.stackexchange", "id": 19011, "tags": "localization, navigation, gps, robot-localization" }
How a mobile platform can be used with ROS Navigation
Question: We have a mobile platform. It can move and read odometry information using a C program. How can we package it so that it can be used with ROS Navigation and SLAM? Thank you very much. Originally posted by Nelson Wu on ROS Answers with karma: 11 on 2016-11-10 Post score: 0 Answer: Why don't you read the wiki page? It is clear and well done. Start from the Navigation Tutorial. Originally posted by rastaxe with karma: 620 on 2016-11-10 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 26197, "tags": "ros, navigation, package" }
Would oscillations of an electrified spring vary from those of a normal spring?
Question: When a current is run through a coil, such as a spring, a magnetic field is produced. Thus, in a spring, the magnetic forces of attraction cause each ring to attract the others, causing a compressing force. I understand that when a spring oscillates, it is moving upwards due to the tension in the spring, which compresses it, and downwards due to the weight of whatever mass is being hung on it, which extends it. My question is whether the attraction between each ring would add to the compressing force, causing the oscillation pattern to change, and if so, how? Answer: Very interesting question. Let's consider an ideal case of a long spring with a high density of turns and no friction or air resistance on the mass (so no damping from air resistance or transfer of energy to internal modes of energy). We will use cylindrical geometry $(r,\phi,z)$. Assume the free spring constant is $k_{0}$. So let us consider a vertical spring with a mass on it. Also, we are running a current $I$ through the spring. The spring has a high density of turns $n$, so let's treat each turn as a circular current loop. The spring is long too, so the magnetic field $\boldsymbol B = B \hat{z}$ generated by the current in the spring is perpendicular to the loops. Then the total Lorentz force between adjacent current loops $$F=\int I (d\boldsymbol l \times \boldsymbol B)$$ is zero: with $\boldsymbol B$ purely axial, $d\boldsymbol l \times \boldsymbol B$ is purely radial, so there is no axial force pulling the loops together, and the radial contributions on opposite sides of each loop cancel. In this idealization, there is actually no magnetic force between adjacent spring loops. In this case, the spring will oscillate up and down forever if the mass is displaced from its equilibrium point. OK, great, but in real life the magnetic field is not so uniform, i.e. $\boldsymbol B = B_{z} \hat{z} + B_{r} \hat {r}$ near the ends of the spring. In this case, yes, there is a Lorentz force on the ends of the spring, compressing it.
You can imagine this spring as a stiffer, smaller spring. In lieu of an in-depth analysis, my guess is that we can approximate this spring as a new spring with a different equilibrium point and a higher spring constant $k_{1}$ that, under small oscillations, will still behave like a normal spring. I would suggest starting with a simple force balance $m \frac{dv}{dt} = -kx + F_{mag}$ and seeing what equations of motion you can get. A more interesting endeavor might be to formulate a Lagrangian for this Newtonian/electrodynamic system.
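To make the suggested force balance concrete, here is a rough numerical sketch (my own, with made-up parameter values, treating the magnetic compression as a constant force $F_{mag}$): the motion is still simple harmonic, just shifted to a new equilibrium $x_{eq} = F_{mag}/k$.

```python
# crude Euler-Cromer integration of m dv/dt = -k x + F_mag (arbitrary units)
m, k, F_mag = 1.0, 4.0, 2.0
x, v, dt = 0.0, 0.0, 1e-4

xs = []
for _ in range(200000):  # ~6 oscillation periods
    v += (-k * x + F_mag) / m * dt
    x += v * dt
    xs.append(x)

x_eq = F_mag / k
# released from x=0, the mass oscillates symmetrically about x_eq
print(round(min(xs), 2), round(max(xs), 2), x_eq)  # ~0.0, ~1.0, 0.5
```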
{ "domain": "physics.stackexchange", "id": 32751, "tags": "newtonian-mechanics, electromagnetism, electricity, spring" }
What evolutionary advantage do viruses have in host specificity?
Question: Warning: I have almost no knowledge of biology past the high school level. Viruses generally have three components: the DNA, the virus protein coat, and an outer membrane "decorated" with surface marker glycoproteins. I am thinking that a virus would want to infect as many hosts as possible, so that it would reproduce as much as possible, so why would a virus infect just one group of organisms? What evolutionary advantage do viruses have in host specificity? Answer: It is true for any living creature that it would be great if it could thrive in all environments. Any creature would do better if it had a greater ecological niche while remaining as competitive in each of these niches. However, competition, predation and other biotic and abiotic factors lead species to specialize in specific niches. Of course, some species are more generalist and some are more specialist, but I won't go into these details. When it comes to parasites, such as viruses, the story is the same. A host is an environment. Being less specific would be great, but the immune system is no easy detail to get around. Viruses are often quite specific to a given species, just because they evolved to be efficient for a given host but tend not to be that efficient in other hosts. Note that parasites are not only species specific but also often tissue specific and specific to the particular genetics of the host (e.g. malaria). Somewhat related posts: Why do parasites sometimes kill their hosts? Why aren't all infections immune-system resistant? Thank you @DeNovo for the helpful comment
{ "domain": "biology.stackexchange", "id": 12222, "tags": "evolution, virology" }
What highpass filter is implemented in Audacity?
Question: When I select the highpass filter effect in Audacity (an open-source audio editor), it lets me choose the cutoff frequency and the roll-off, but it doesn't specify which filter type is used (IIR Butterworth, IIR Chebyshev, FIR types, etc.). Do you know which highpass filter is implemented in Audacity's highpass filter? My guess is that if the filter is IIR then it is Butterworth, which needs only the order (affecting roll-off) and cutoff frequency as parameters. See this manual entry: https://manual.audacityteam.org/man/high_pass_filter.html Answer: Almost certainly Butterworth.
1) Very standard filter for audio.
2) It has a slope of 6 dB/octave per order.
3) No passband or stopband ripple.
2) and 3) would be difficult to do with an FIR filter, especially for low frequencies, and Chebyshev and elliptic IIR filters have ripples.
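Point 2) can be checked directly from the analog Butterworth highpass magnitude response $|H(f)|^2 = 1/(1+(f_c/f)^{2N})$. This quick sketch (my own, not Audacity's code) shows the stopband slope is about $6.02N$ dB/octave:

```python
import math

def butter_hp_db(f, fc, order):
    # magnitude (dB) of an ideal analog Butterworth highpass prototype
    return 10.0 * math.log10(1.0 / (1.0 + (fc / f) ** (2 * order)))

fc = 1000.0
for order in (1, 2, 4):
    # deep in the stopband, going up one octave gains ~6 dB per order
    drop = butter_hp_db(10.0, fc, order) - butter_hp_db(5.0, fc, order)
    print(order, round(drop, 1))  # ~6.0, 12.0, 24.1
```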
{ "domain": "dsp.stackexchange", "id": 11764, "tags": "filter-design, filtering, audio-processing, highpass-filter" }
Looper: Duplicate sample name error
Question: I am using Looper. While running looper run project_config.yaml I am getting a duplicate sample name error. I have two fastq files for each sample, so in the annotation.csv file I have:
sample_name,library,file
abeta-24-1,RNA-seq,data/merged/abeta_24h_1_r1.fq.gz
abeta-24-1,RNA-seq,data/merged/abeta_24h_1_r2.fq.gz
abeta-24-2,RNA-seq,data/merged/abeta_24h_2_r1.fq.gz
abeta-24-2,RNA-seq,data/merged/abeta_24h_2_r2.fq.gz
My pipeline_interface.yaml file looks the following way:
protocol_mapping:
  RNA-seq: sandro_rna_seq
pipelines:
  sandro_rna_seq:
    name: sandro_rna_seq
    path: sandro_rna_seq.py  # relative to this pipeline_interface.yaml file
    looper_args: True
    arguments:
      "-L": "IU"
      "-T": "gencode_mouse_m13"
      "-E": "/scratch/nv4e/binf-pipelines/bulk_rna_seq/salmon/pypiper/jyvo_experiment-metadata.yaml"
      "-C": "/scratch/nv4e/binf-pipelines/bulk_rna_seq/salmon/pypiper/salmon_rna_seq.yaml"
      "-X": "True"
      "-P": "6"
      "-I": sample_name
    resources:
      default:
        file_size: "0"
        cores: "20"
        mem: "120000"
Why am I getting this error? How could I fix it? Answer: As explained in the PEP docs, the sample_name column should be considered a unique identifier for a sample. The PEP requires that one row corresponds to one sample. You have multiple rows with the same name; this is not allowed. You simply need to recast your annotation sheet such that it has file1 and file2 columns, but only 2 rows.
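For example, the recast sheet could look like the following (the file1/file2 column names follow the answer's suggestion; check the PEP/looper docs for the exact attribute names your pipeline expects):

```csv
sample_name,library,file1,file2
abeta-24-1,RNA-seq,data/merged/abeta_24h_1_r1.fq.gz,data/merged/abeta_24h_1_r2.fq.gz
abeta-24-2,RNA-seq,data/merged/abeta_24h_2_r1.fq.gz,data/merged/abeta_24h_2_r2.fq.gz
```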
{ "domain": "bioinformatics.stackexchange", "id": 384, "tags": "scrnaseq, looper, yaml" }
Vector Potential for Magnetic field when the field is not in simply-connected region
Question: According to Poincare's Lemma, if $U\subset \mathbb{R}^n$ is a star-shaped set and if $\omega$ is a $k$-form defined in $U$ that is closed, then $\omega$ is exact, meaning that there's some $(k-1)$-form, say $\eta$, with $\omega = d\eta$. Now, translating to vector fields, if we consider $U$ a star-shaped set in $\mathbb{R}^3$ and if $B$ is a vector field inside $U$ such that $\nabla\cdot B= 0$, then there's some vector field $A$ defined in $U$ such that $B = \nabla \times A$. I've heard that Poincare's Lemma turns out to be true even if $U$ is not star-shaped, but rather just contractible. Now, under the hypotheses of Poincare's Lemma, the fact that the magnetic field satisfies $\nabla\cdot B = 0$ implies the existence of the vector potential $A$, with $B = \nabla \times A$. But now, what happens if the magnetic field is defined in some region of space that is not simply connected? In that case, the vector potential might not exist according to Poincare's Lemma (it doesn't say that it doesn't exist, but it doesn't guarantee the existence). So, if the region where the field is defined has holes in it, what happens? Is there really a chance that the vector potential doesn't exist? In that case, what are the physical consequences? Since I was always told that the vector potential was just a mathematical tool introduced to make life easier, I think there wouldn't be a great impact from the point of view of the conceptual explanation of the situation; however, I'm not sure. Answer: You ask if the region where the field is defined has holes in it, what happens? Well, in that case you can define the vector potential on simply-connected sub-regions $R_i$ whose intersection is the whole non-simply-connected region $R$ and such that they differ only by a gauge transformation on the regions of overlap. This is a physically well-motivated thing to do, because it means that up to gauge transformation, the vector potential can be defined on $R$.
Here's a simple example. Let $\ell=\{(x,y,z)\,|\, x=0, y=0\}$ denote the $z$-axis; then the region $R=\mathbb R^3\setminus\ell$ is not simply connected. To see this, simply consider a closed loop enclosing the axis; there is no way to continuously shrink it down to a point while staying in $R$. Because of this, there is no $\mathbf A$ defined on all of $R$. However, let $\ell_+$ denote the positive $z$-axis, and let $\ell_-$ denote the negative $z$-axis; then the regions $R_- = \mathbb R^3\setminus \ell_+$ and $R_+ = \mathbb R^3\setminus \ell_-$ have the property that they are each simply connected and $R = R_+\cap R_-$. Moreover, we can define a vector potential $\mathbf A_+$ on $R_+$ and $\mathbf A_-$ on $R_-$ such that there exists a scalar function $\Lambda$ for which \begin{align} \mathbf A_+(\mathbf x) - \mathbf A_-(\mathbf x) = \nabla\Lambda(\mathbf x), \qquad \text{for all $\mathbf x\in R$} \end{align} In fact, here are the explicit expressions in spherical coordinates $(r,\theta,\phi)$: \begin{align} \mathbf A_{\pm} &= -g\frac{\cos\theta\mp 1}{r\sin\theta}\hat{\boldsymbol \phi} \end{align} I'll leave it to you to determine $\Lambda$; it's a fun exercise. what are the physical consequences of that? Well, in the context of quantum mechanics, these sorts of topological issues are physically relevant (I am unsure whether there are examples in which they are relevant at the classical level, but I don't think so). I won't go into the details here (unless perhaps there is some demand), but the very vector potentials I wrote down in the example above come up when discussing magnetic monopoles and the quantization of electric charge (see Dirac quantization). These topological issues also become significant in discussing the famous Aharonov-Bohm effect.
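One can check numerically that both gauge patches above reproduce the same monopole field $B_r = g/r^2$. Here is a quick finite-difference sketch (my own; it only evaluates the $B_r = \frac{1}{r\sin\theta}\partial_\theta(A_\phi\sin\theta)$ component of the spherical curl):

```python
import math

g = 1.0

def A_phi(r, th, sign):
    # sign=+1 gives A_plus, sign=-1 gives A_minus from the answer
    return -g * (math.cos(th) - sign) / (r * math.sin(th))

def B_r(r, th, sign, h=1e-6):
    # B_r = (1/(r sin th)) * d/dth [ A_phi * sin th ], via central difference
    f = lambda t: A_phi(r, t, sign) * math.sin(t)
    return (f(th + h) - f(th - h)) / (2.0 * h) / (r * math.sin(th))

# both gauges give the radial monopole field g / r^2
for sign in (+1, -1):
    print(round(B_r(2.0, 1.0, sign), 6))  # 0.25 = g / 2.0**2
```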
{ "domain": "physics.stackexchange", "id": 9604, "tags": "electromagnetism, mathematical-physics, differential-geometry, gauge-theory, topology" }
Temperature in an isobaric process
Question: I have a certain conceptual issue - I'm solving heat engine problems and I found something difficult to understand. Let's take an isobaric part of a cycle of an engine; let's say that the ideal gas is compressed from $2V$ to $V$. How does the temperature change? From the Clapeyron equation, at the earlier point it is $T_2 = 2Vp/nR$ and at the latter point $T_1 = Vp/nR$ (it's labeled "1" because the cycle closes there). This would mean that compressed gas has a lower temperature than the uncompressed one? This is false, isn't it? Could someone please explain this to me? Answer: If you rearrange the ideal gas law to be expressed in terms of pressure $$ P=\frac{NRT}{V}\qquad\Rightarrow\qquad \frac{T_1}{V_1}=\frac{T_2}{V_2} $$ where the right-hand equation assumes it is an isobaric process with no mass exchange. So, in an isobaric process temperature and volume vary in direct proportion. If the volume decreases then the temperature must go down too. Your confusion probably arises because we don't cool gases by compressing them at a fixed pressure. The pressure, in fact, rises dramatically as we compress them. If you allow pressure to vary, then the above equality can be written as $$ \frac{T_1}{P_1V_1}=\frac{T_2}{P_2V_2} $$ so as long as the pressure rise is more dramatic than the volume reduction, the temperature goes up.
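A quick numeric check of the answer (my own illustrative numbers, assuming an ideal gas): halving the volume at constant pressure halves the absolute temperature.

```python
R = 8.314  # J/(mol K)
n = 1.0    # mol
p = 1.0e5  # Pa, held fixed (isobaric)
V2, V1 = 2.0e-3, 1.0e-3  # compressed from 2 L to 1 L

T2 = p * V2 / (n * R)  # temperature before compression
T1 = p * V1 / (n * R)  # temperature after compression
print(T1 / T2)  # 0.5 -- the compressed gas is colder in an isobaric process
```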
{ "domain": "physics.stackexchange", "id": 13149, "tags": "thermodynamics, ideal-gas" }
Bell's Original Paper - Local hidden variable theories correlations smaller than entanglement
Question: I'm having trouble following Bell's derivation of equation 22 in his original paper. Particularly, how to go from $$ | \overline{P} (\vec{a}, \vec{b}) - \overline{P}(\vec{a}, \vec{c}) | \leq 1 + \overline{P}(\vec{b}, \vec{c}) + \varepsilon + \delta $$ to $$ | \vec{a} \cdot \vec{c} - \vec{a} \cdot \vec{b} | - 2 (\varepsilon + \delta) \leq 1 - \vec{b} \cdot \vec{c} + 2(\varepsilon + \delta) $$ using \begin{equation} | \overline{P}(\vec{a}, \vec{b}) + \vec{a} \cdot \vec{b} | \leq \varepsilon + \delta. \qquad(1) \end{equation} My attempt was $$ \begin{equation} \begin{split} &\overline{P}(\vec{b}, \vec{c}) + \vec{b} \cdot \vec{c} \leq | \overline{P}(\vec{b}, \vec{c}) + \vec{b} \cdot \vec{c} | \leq \varepsilon + \delta\\ \implies & \overline{P}(\vec{b}, \vec{c}) \leq (\varepsilon + \delta) - \vec{b} \cdot \vec{c}\\ \implies & 1 + (\varepsilon + \delta) + \overline{P}(\vec{b}, \vec{c}) \leq 1 - \vec{b} \cdot \vec{c} + 2(\varepsilon + \delta)\\ \implies& | \overline{P} (\vec{a}, \vec{b}) - \overline{P}(\vec{a}, \vec{c})| \leq 1 - \vec{b} \cdot \vec{c} + 2(\varepsilon + \delta) \end{split} \end{equation} $$ How do I deal with the left hand side, i.e., the absolute value of the difference of correlations, using $(1)$? Edited to add more info We are dealing with a pair of spin one-half particles in a singlet state moving freely in opposite directions. $$ P (\vec{a}, \vec{b}) \equiv \int d \lambda \rho(\lambda)A(\vec{a}, \lambda) B(\vec{b}, \lambda) $$ where $\lambda$ is a continuous parameter, $\rho(\lambda)$ is its probability distribution, and $A(\vec{a}, \lambda) = \pm 1$, $B(\vec{b}, \lambda) = \pm 1$ are the possible results of measuring the spin components $\vec{\sigma}_{1}, \vec{\sigma}_{2}$ along $\vec{a}, \vec{b}$ respectively. The distribution $\rho(\lambda)$ is nonnegative, the vectors $\vec{a}, \vec{b}, \vec{c}$ are unit vectors, and I'm assuming $\varepsilon, \delta > 0$.
Answer: What Equation (1) says is that if you replace $P(\vec x,\vec y)$ by $-\vec x\cdot \vec y$, you incur an error of at most $\pm (\epsilon+\delta)$. Now do this both on the rhs and the lhs of your first equation: On the lhs of the "$\le$", in the worst case the error will make the value larger, which means you need to put a minus sign to still satisfy the inequality. You do two replacements, thus $-2(\epsilon+\delta)$. On the rhs, in the worst case the error will make the value smaller. Thus, you get an extra $+(\epsilon+\delta)$. If you want to formalize this, this can be done by the triangle inequality, plus the following variant thereof: If $|a-b|\le \eta$, then $b-\eta \le a \le b+\eta$.
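To see why these bounds matter physically, here is a small numeric illustration (my own, not from Bell's paper): if the quantum prediction $\overline P(\vec x,\vec y) = -\vec x\cdot\vec y$ held exactly (i.e. $\varepsilon = \delta = 0$), the resulting inequality $|\vec a\cdot\vec c - \vec a\cdot\vec b| \leq 1 - \vec b\cdot\vec c$ fails for coplanar unit vectors at 0°, 45° and 90°:

```python
import math

def unit(deg):
    # unit vector in the plane at the given angle
    t = math.radians(deg)
    return (math.cos(t), math.sin(t))

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

a, b, c = unit(0.0), unit(45.0), unit(90.0)
lhs = abs(dot(a, c) - dot(a, b))  # ~0.707
rhs = 1.0 - dot(b, c)             # ~0.293
print(lhs > rhs)  # True: the bound is violated, so no local hidden
                  # variable model can reproduce -a.b exactly
```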
{ "domain": "physics.stackexchange", "id": 93190, "tags": "homework-and-exercises, quantum-information, bells-inequality" }
PDA accepting all words not of the form $b^na^n$
Question: I am studying automata theory. DFAs and NFAs seem pretty straightforward to me, but I don't quite understand how to design push-down automata for context-free languages. If I have the context-free language over the input alphabet $\{a,b\}$ $L = \{w \text{ where }w\text{ is NOT of the form }b^n a^n\}$ how would I design a push-down automaton for it? What are the steps I would need to take, and what would it look like (formal description)? Answer: In a related question you obtained a context-free grammar for this language. $S\to bSa \mid A\mid B$ $A\to aA \mid a$ $B\to bB \mid b$ Although I agree with Yuval that directly constructing a push-down automaton shows you understood the concepts of a PDA, I'd like to recall that there are direct constructions between PDAs and CFGs. As Wikipedia mentions, the construction from CFG to PDA is straightforward. The other direction is more tedious. The construction in Wikipedia is called expand-match. One obtains a PDA that accepts by empty stack that way, which is slightly nonstandard. I will add the details for a PDA with final-state acceptance. The new PDA has three states: initial state $q_0$, working state $q_1$ and accepting state $q_2$. The initial push-down symbol is $Z$. First step: push the axiom on the stack. $(q_0,\varepsilon,Z, q_1, SZ)$ Now perform the CFG derivation on the stack (expand), checking the derived terminals against the tape symbols (match): $(q_1,\varepsilon ,A,q_1,\alpha )$ for each rule $A\to \alpha$ $(q_1,a,a,q_1,\varepsilon )$ for each terminal symbol $a$ Finally, move to accept when the stack reaches bottom. $(q_1,\varepsilon,Z,q_2,Z)$ As an example, the CFG production $S\to bSa$ is directly translated into the PDA instruction $(q_1,\varepsilon ,S,q_1,bSa )$.
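The expand-match translation is mechanical enough to write down in a few lines. Here is a sketch of my own (each transition is written as (state, input, stack top, new state, push string), with "" standing for epsilon), applied to the grammar above:

```python
def cfg_to_pda(rules, terminals, axiom="S"):
    """Build final-state-acceptance PDA transitions from a CFG.

    Each transition is (state, input, stack_top, new_state, push_string);
    "" means epsilon, and push_string is pushed left-end-on-top.
    """
    trans = [("q0", "", "Z", "q1", axiom + "Z")]     # push the axiom
    for lhs, alts in rules.items():                  # expand steps
        for alt in alts:
            trans.append(("q1", "", lhs, "q1", alt))
    for t in terminals:                              # match steps
        trans.append(("q1", t, t, "q1", ""))
    trans.append(("q1", "", "Z", "q2", "Z"))         # accept at stack bottom
    return trans

grammar = {"S": ["bSa", "A", "B"], "A": ["aA", "a"], "B": ["bB", "b"]}
pda = cfg_to_pda(grammar, "ab")
print(("q1", "", "S", "q1", "bSa") in pda)  # True: the example from the answer
```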
{ "domain": "cs.stackexchange", "id": 18081, "tags": "automata, context-free, pushdown-automata" }
Why does Q-learning use an actor model and critic model?
Question: I'm currently reading Hands-On Machine Learning with Scikit-Learn & TensorFlow, and I'm wondering why Q-learning requires an actor model and a critic model to learn. On page 465, it states: As we will see, the training algorithm we will use requires two DQNs with the same architecture (but different parameters): one will be used to drive Ms. Pac-Man during training (the actor), and the other will watch the actor and learn from its trials and errors (the critic). Is this a typical Q-learning implementation? If not, what is? Answer: The book you are reading is being somewhat lax with terms. It uses the terms "actor" and "critic", but there is another algorithm called actor-critic which is very popular recently and is quite different from Q-learning. Actor-critic does have two function estimators with the roles suggested in the quote. Q-learning has one such estimator*. I have looked at the chapter in more detail, and where it says: one will be used to drive Ms. Pac-Man during training (the actor), and the other will watch the actor and learn from its trials and errors (the critic). I would substitute: one will be used to learn from current actions, and the other will remember results from some time steps ago in order to estimate the values for the next actions. This is not something that's inherently part of Q-learning, but it is part of DQN's adjustments when combining Q-learning with neural networks. Both experience replay and having two copies of the learning network (one a temporarily "frozen" version of the other) are important for stabilising the learning algorithm. Without them it can become numerically unstable. Is this a typical Q-learning implementation? It's a typical implementation of basic DQN, which is how many people nowadays would implement Q-learning with neural networks. You can ignore the references to "actor" and "critic".
Instead it is easier to consider that there is just one "action value" network, and you keep an old copy of it around to help with stability. * Generally in RL, the term "model" is reserved for a model of the environment - which neither Q-learning nor actor-critic provide. So you will also read that Q-learning is a "model free" algorithm. For the rest of the book, you will have seen "model" to refer to any statistical learning algorithm (or the architecture and learned parameters) . . . what you will see in RL texts is the careful use of "function estimator" or other terms for networks which learn something else other than how the environment behaves.
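Stripped of the neural networks, the "two copies" idea can be sketched in a few lines of tabular Q-learning (an entirely made-up toy example, not the book's code): the TD target bootstraps from a frozen copy that is only synced periodically.

```python
import random

random.seed(0)
n_states, n_actions = 4, 2
alpha, gamma, sync_every = 0.5, 0.9, 10

online = [[0.0] * n_actions for _ in range(n_states)]  # the copy being trained
target = [row[:] for row in online]                    # the frozen copy

def step(s, a):
    # hypothetical toy environment: a ring, with reward for landing on state 0
    s2 = (s + 1) % n_states if a == 1 else s
    return s2, (1.0 if s2 == 0 else 0.0)

s = 0
for t in range(500):
    a = random.randrange(n_actions)          # random exploration
    s2, r = step(s, a)
    td_target = r + gamma * max(target[s2])  # bootstrap from the FROZEN copy
    online[s][a] += alpha * (td_target - online[s][a])
    if t % sync_every == 0:
        target = [row[:] for row in online]  # periodic sync, as in DQN
    s = s2

print(max(online[3]) > 0)  # the state next to the reward has learned value
```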
{ "domain": "datascience.stackexchange", "id": 3095, "tags": "deep-learning, q-learning" }
Algorithm to find k elements following the median in sorted order
Question: I have the following problem: Given an unsorted array A of size n, print the first k elements in A larger than its median. Here's my approach to the problem:
1. Create a minHeap and a maxHeap
2. Iterate over elements in A // O(n)
   - if maxHeap.count < minHeap.count: insert current element into maxHeap // O(log(n))
   - else: insert current element into minHeap // O(log(n))
3. if maxHeap.count < minHeap.count: median = minHeap.extractMin()
4. output k elements from minHeap // O(k log(n))
This maintains a maxHeap of elements less than the median and a minHeap of elements greater than or equal to the median. But from my analysis, this seems to take O(n log(n) + k log(n)), which is no better than sorting A first and grabbing A[n/2:n/2+k] in just O(n log(n) + k). Now I have 2 questions: Is my analysis tight? I am doubtful, since the heaps have at most i elements in the ith iteration, not n. Is there a better algorithm? Maybe something like O(n + k log(k))? Answer: Answering your second question: Find the median $m$, and partition the set into $L_1 = \{x | x \in S \wedge x < m\}$ and $L_2 = \{x | x \in S \wedge x > m\}$. Find the $k$th smallest element $x_k$ in $L_2$, and partition the set into $L_2' = \{x | x \in L_2 \wedge x \leq x_k\}$ and $L_3 = \{x | x \in L_2 \wedge x > x_k\}$. Sort $L_2'$ and return it. Steps 1 and 2 take $O(n)$. Step 3 takes $O(k \log k)$. Total time $O(n + k \log k)$. Note this assumes distinct values. Repeated values involve some edge-case handling.
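Here is a runnable sketch of the answer's algorithm (my own implementation; random-pivot quickselect gives expected-linear selection, and only the final $k$ survivors are sorted). As in the answer, it assumes distinct values, and it also assumes $k \geq 1$:

```python
import random

def quickselect(a, i):
    # i-th smallest element of a (0-indexed), expected linear time
    pivot = random.choice(a)
    lo = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    if i < len(lo):
        return quickselect(lo, i)
    if i < len(lo) + len(eq):
        return pivot
    return quickselect([x for x in a if x > pivot], i - len(lo) - len(eq))

def k_following_median(a, k):
    m = quickselect(a, (len(a) - 1) // 2)        # step 1: the median
    bigger = [x for x in a if x > m]             # step 1: L2
    k = min(k, len(bigger))
    xk = quickselect(bigger, k - 1)              # step 2: k-th smallest of L2
    return sorted(x for x in bigger if x <= xk)  # step 3: sort the survivors

print(k_following_median([5, 1, 9, 3, 7, 2, 8, 6, 4, 10], 3))  # [6, 7, 8]
```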
{ "domain": "cs.stackexchange", "id": 9268, "tags": "algorithms, sorting, arrays, heaps" }
Why does a fuse blow when connecting to opposite terminals
Question: A fuse "blows" if a current greater than the fuse's rating flows through it. But recently I connected the battery terminals the wrong way round on a motorbike, and this kept blowing the fuse. When I connected the terminals correctly, the new fuse did not blow. Shouldn't the current that passes through be the same regardless of the connection? Not sure if this is the right place to post. Any info greatly appreciated. Answer: I found it online: http://www.sto-p.com/pfp/pfp-reversepolarity.htm Essentially, a reverse-biased diode to ground sits in parallel in the same circuit. When the polarity is reversed, the diode conducts and shorts the power supply, thereby blowing the fuse.
{ "domain": "physics.stackexchange", "id": 43600, "tags": "electricity, electrons, electric-current, voltage" }
Can the second law of thermodynamics be written as $\delta Q \leq T \mathrm{d} S$?
Question: Can the second law of thermodynamics for a system be written as follows? $$\delta Q \leq T \mathrm{d} S$$ where $T$ is the temperature of the system. I'm not sure about this because we are considering a generic (hence possibly irreversible) process, so it is not necessarily true that the temperatures of the system and the environment are the same. But I saw that this form of the second law is used to get to $$\delta W \leq - (\mathrm{d}U-T \mathrm{d}S)$$ and to define the thermodynamic potentials ($F, G, H, \ldots$). So it should be correct, but are there limitations on writing the second law like that? Answer: The second law of thermodynamics cannot be written in this way because the inequality in question is not even wrong (the entropy term is not defined). The entropy is only definable for irreversible processes if they start and end in equilibrium, and gas expansion is the only such process. See this: http://philsci-archive.pitt.edu/archive/00000313/ Jos Uffink, Bluff your Way in the Second Law of Thermodynamics, p.39: "A more important objection, it seems to me, is that Clausius bases his conclusion that the entropy increases in a nicht umkehrbar [irreversible] process on the assumption that such a process can be closed by an umkehrbar [reversible] process to become a cycle. This is essential for the definition of the entropy difference between the initial and final states. But the assumption is far from obvious for a system more complex than an ideal gas, or for states far from equilibrium, or for processes other than the simple exchange of heat and work. Thus, the generalisation to all transformations occurring in Nature is somewhat rash."
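For reference, the formal manipulation the question alludes to is a one-liner, taking the sign convention that $\delta W$ is the work done *by* the system (so the first law reads $\mathrm{d}U = \delta Q - \delta W$):

```latex
% First law (work done by the system): dU = \delta Q - \delta W
% Assuming \delta Q \le T\,\mathrm{d}S, it follows that
\delta W = \delta Q - \mathrm{d}U
        \le T\,\mathrm{d}S - \mathrm{d}U
         = -(\mathrm{d}U - T\,\mathrm{d}S).
```

Whether the premise $\delta Q \le T\,\mathrm{d}S$ is itself well-defined for arbitrary irreversible processes is exactly what the answer disputes.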
{ "domain": "physics.stackexchange", "id": 33871, "tags": "homework-and-exercises, thermodynamics, entropy, work" }
Singleton Provider
Question: I've taken over a code base that is still very much in its infancy, and stumbled across this class. I'm just wondering whether it's actually worth having a SingletonProvider, and making use of Lazy<T>, if it's just going to return the value straight away? public static class SingletonProvider { private static readonly ConcurrentDictionary<Type, Lazy<object>> Instances = new ConcurrentDictionary<Type, Lazy<object>>(); public static T GetSingleton<T>() where T : class, new() { return Instances.GetOrAdd(typeof(T), t => new Lazy<object>(() => Activator.CreateInstance(t), isThreadSafe: true)).Value as T; } } Answer: I hope that code base isn't sprinkled with calls to that SingletonProvider! A "singleton provider", conceptually, makes no sense. This is highlighted by this generic type constraint: where T : class, new() By definition, T cannot be a singleton - it must expose a public, parameterless constructor to satisfy the type constraint! I'm not sure what to call this provider, but it definitely needs a rename refactoring, to remove the confusing notion of "Singleton". Consider this hypothetical client code: var foo = SingletonProvider.GetSingleton<Something>(); And then, elsewhere: var bar = SingletonProvider.GetSingleton<Something>(); What have we gained? foo is a reference to the same object as bar. But if Something must be a Singleton, then a much less surprising approach would be something like: var foo = Something.GetInstance(); And then, there's no way to accidentally do this elsewhere: var bar = new Something(); ...and you keep your code coupled with one class instead of two. I see what problem this class is trying to solve: it's not all that rare that an object must only ever exist in one instance. The problem it's creating is that it imposes far too heavy constraints on the type (T), and as a result this Something class must in turn either be tightly coupled with its own dependencies, or have them property-injected... 
which is unnecessary complexity. I'd much rather code against an abstraction, say ISomething, and let the Something implementation decide how it wants to deal with its dependencies: public class Something : ISomething { private readonly IFoo _foo; private readonly IBar _bar; public Something(IFoo foo, IBar bar) { _foo = foo; _bar = bar; } //... } If I have types that must depend on ISomething, I'll have them receive an instance in their constructor, have my favorite IoC container inject all their dependencies, and if I instruct my IoC container that whoever requires an ISomething implementation should always receive the same Something instance, I can do that without even implementing a Singleton - e.g. with Ninject it would look something like this: kernel.Bind<ISomething>().To<Something>().InSingletonScope(); The takeaway here is that creating objects and managing object lifetimes is the responsibility of the IoC container - not of some "SingletonProvider". So why all this talk about IoC and dependency injection? Because good, SOLID object-oriented code strives for low coupling (and high cohesion, too). By coding against abstractions, you effectively reduce coupling to concrete types to a minimum; by using this SingletonProvider, you're not only coupled with the SingletonProvider class, but also with the concrete implementation of whatever the type of T is. That SingletonProvider seems like a good way to make testing very hard, for very little benefit. It forces your client code to depend on concrete types, which increases coupling. As for the implementation itself, I agree with you - it makes no sense to use a Lazy<T> if you're going to retrieve its value in the very same instruction; that leaves you with an already-initialized Lazy<T> instance in the dictionary, so you might as well store the instance itself directly.
{ "domain": "codereview.stackexchange", "id": 22074, "tags": "c#, singleton" }
Quantum hardness of XQUATH conjecture
Question: Consider the XQUATH conjectures, as defined here (https://arxiv.org/abs/1910.12085, Definition 1). (XQUATH, or Linear Cross-Entropy Quantum Threshold Assumption). There is no polynomial-time classical algorithm that takes as input a quantum circuit $C \leftarrow D$ and produces an estimate $p$ of $p_0$ = Pr[C outputs $|0^{n}\rangle$] such that \begin{equation} E[(p_0 − p)^{2}] = E[(p_0 − 2^{−n})^{2}] − Ω(2^{−3n}) \end{equation} where the expectations are taken over circuits $C$ as well as the algorithm’s internal randomness. Here, $D$ is any distribution "over circuits on $n$ qubits that is unaffected by appending NOT gates to any subset of the qubits at the end of the circuit." For the purposes of our discussion, we can assume the circuit $C$ to be a particular Haar random unitary. So, we believe the task mentioned in XQUATH is hard for classical computers. But how hard is the task for quantum computers? If it is easy for quantum computers, what is the algorithm? A trivial algorithm I can think of just runs the quantum circuit many times, samples from the output distribution of the circuit each time, and then computes the frequency of observing $|0^{n}\rangle$. But what is the guarantee that this procedure will give us an additive error estimate robust enough to meet the condition of XQUATH? Answer: Maybe think of it this way - a quantum computer, executing a small enough random circuit $C$ acting on a state initially prepared as $\vert 0^n\rangle$ and sampling therefrom, will get an $n$-bit string as output, say $0110\cdots10$. We know, merely from the fact that this was sampled, that the squared amplitude of $\vert 0110\cdots10\rangle$ is large and likely contributes more than $1/2^n$ to the linear cross-entropy score. 
I think the implications of the XQUATH conjecture are that a classical computer likely cannot even determine the output probability of even the all-zeroes basis vector $\vert 0000\cdots 00\rangle$, much less any particular vector such as $\vert 0110\cdots10\rangle$. The all-zeroes vector was chosen as both the input vector (the initially prepared state) and the output vector (the state with which the squared amplitudes are calculated) only for convenience, and as you note you could always append $\mathsf{NOT}$ gates to the output circuit without changing the analysis. If we execute a circuit $C$ and output the string $0110\cdots10$, we can think of another circuit $C'$ wherein we append a $\mathsf{NOT}$ gate onto the second, third, ..., second-to-last bit, and the conjecture goes through. That is, it appears that Aaronson and Gunn conjecture that it's classically hard to determine an output probability of any particular basis vector, which would be needed to spoof the cross-entropy score. But quantumly all we need to do is execute the circuit and sample therefrom.
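To make the question's "trivial algorithm" concrete, here is a small numerical sketch (my own illustration, not from the paper): draw a Haar-random unitary with the standard Ginibre-QR construction, compute $p_0$ exactly, and compare it with the frequency estimate from sampling. The qubit count and shot count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                      # number of qubits
d = 2 ** n

# Haar-random unitary via QR of a complex Ginibre matrix; the phase
# fix on R's diagonal is needed to get the Haar measure exactly.
z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
q, r = np.linalg.qr(z)
u = q * (np.diag(r) / np.abs(np.diag(r)))

# Exact p0 = Pr[C outputs |0...0>] on input |0...0>: first column of U.
probs = np.abs(u[:, 0]) ** 2
p0 = probs[0]

# The "trivial" quantum estimator: run the circuit many times and
# count how often the all-zeroes string appears.
shots = 100_000
samples = rng.choice(d, size=shots, p=probs)
p_hat = np.mean(samples == 0)
# Standard error ~ sqrt(p0/shots); since typically p0 ~ 2^-n, driving
# the mean-squared error down by the Omega(2^{-3n}) margin in XQUATH
# needs a number of shots growing like 2^n, not polynomial in n.
```

This is only a sanity check of the estimator's variance, not a verification of the conjecture itself.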
{ "domain": "quantumcomputing.stackexchange", "id": 4610, "tags": "quantum-state, quantum-algorithms, complexity-theory, quantum-advantage, haar-distribution" }
Basic terrain generator
Question: I've decided to try and start programming using object-oriented programming, so I built a small terrain generator that could be done with something as simple as this: import random for _ in range(1000): print random.choice('*#~o'), Instead, I decided to have a go at OOP and made it like this: # Terrain generator from random import choice from time import sleep # Class containing world attr. class Attr(object): def __init__(self): self.TILES = '#*~o' self.WORLD_SIZE = 1000 self.Y_LENGTH = 1000 self.X_LENGTH = 1000 # Generator class class Generator(object): def __init__(self): self.world = [] self.attr = Attr() # Create an empty row def create_row(self): for _ in range(self.attr.X_LENGTH): self.world.append(choice(self.attr.TILES)) # Main generator class class MainGen(object): def __init__(self): self.gen = Generator() self.attr = Attr() # Create the world def create_world(self): for _ in range(self.attr.WORLD_SIZE): self.gen.create_row() # Render the world def render_world(self): for tile in self.gen.world: print tile, sleep(0.05) # Main game class class Game(object): def __init__(self): self.main_gen = MainGen() # Run the functions def run(self): self.main_gen.create_world() self.main_gen.render_world() # Start the program if __name__ == "__main__": game = Game() game.run() All I'm really looking for are these things: What OOP mistakes did I make? What general mistakes did I make? How can I improve in general? Answer: Separation of responsibilities Consider the features you need and how you can group them into classes, and try to separate responsibilities as cleanly as possible. 
For example you need: Generate a terrain of given dimensions and set of tiles Render a generated terrain Glue code: configure the available classes/functions and do something with them Here's one possible class design to meet these requirements: WorldGenerator: generates a world from given parameters Constructor parameters: x length, y length, set of tiles Public methods: generate (world) WorldRenderer: renders a given world Constructor parameters: a world generator Public methods: render (world) main: the glue code. This could be just a method; there's no point making it a class. There's no place for an Attr class here. The attributes belong to the world generator; they can be encapsulated in it. It also makes sense to create different world generators with different attributes, instead of using constant attributes for all worlds. In your original MainGen class, you were creating a world and rendering it. Creating a world should be the responsibility of the generator class, and it makes sense to separate it from rendering anyway. Suggested implementation from random import choice class WorldGenerator(object): def __init__(self, tiles='#*~o', x_length=1000, y_length=1000): self.world = [] self.tiles = tiles self.x_length = x_length self.y_length = y_length def create_row(self): for _ in range(self.x_length): self.world.append(choice(self.tiles)) def create(self): self.world = [] for _ in range(self.y_length): self.create_row() class WorldRenderer(object): def __init__(self, generator): self.gen = generator def render(self): for tile in self.gen.world: print tile, # sleep(0.05) def main(): gen = WorldGenerator(x_length=30, y_length=10) gen.create() renderer = WorldRenderer(gen) renderer.render() if __name__ == "__main__": main()
{ "domain": "codereview.stackexchange", "id": 8918, "tags": "python, object-oriented, game, python-2.x" }