"unable to load manifest" error after subscribe via RosSharp/RosBridge
Question: Using Unity and the RosSharp library I'm able to connect but unable to subscribe to any custom .msg types. The types are correctly named in the RosSharp dictionary and build fine on the ROS side. After requesting to subscribe, it throws this error in the rosbridge socket terminal: unable to load the manifest for package ####. Caused by #### ROS path [0]=/opt/ros/kinetic/share/ros ROS path [1]=/opt/ros/kinetic/share Any thoughts? Thanks, Lane Originally posted by Lane on ROS Answers with karma: 3 on 2018-03-23 Post score: 0 Original comments Comment by gvdhoorn on 2018-03-24: Is package ### present in your workspace / installed on your system? Does rospack find #### return the expected output (in the same terminal as where you start the RosBridge nodes)? I don't see any Catkin workspace in the error output, so did you source it? Comment by Lane on 2018-03-26: the package is present, rospack find identifies it, and I did source ./devel/setup.bash after the catkin_make. Comment by Lane on 2018-03-27: This works on a Linux box, but fails on my VirtualBox VM - identical projects. Answer: Just making sure: the terminal where you started rosbridge also had sourced the workspace setup.bash? Edit: Wow, did not realize that was required for each new terminal. Yes, that is needed in every terminal. See #q286466 for a recent question about this. Originally posted by gvdhoorn with karma: 86574 on 2018-03-27 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Lane on 2018-03-27: Wow, did not realize that was required for each new terminal. Seems to be working fine, thanks =) Comment by Lane on 2018-03-27: run source catkin_ws/devel/setup.bash in the bridge terminal window.
{ "domain": "robotics.stackexchange", "id": 30432, "tags": "ros-kinetic, rosbridge" }
Hermiticity of the Hamiltonian operator with probability conservation
Question: I am following MIT lessons on quantum physics (Prof. Zwiebach): Part I, Lecture 6, at https://ocw.mit.edu/courses/physics/8-04-quantum-physics-i-spring-2016/lecture-notes/ Video lecture: https://youtu.be/Ex_fFlwZoM0 I understand that the normalization of the wave function can be written as the integral of the probability density: $ N(t)=\int \rho (x,t)dx$ and that at the initial time $t_0$ we have $N(t_0)=1$. We can prove that probability is conserved at any time if $\frac{dN(t)}{dt}=0$, that is: $$\frac{dN(t_0)}{dt}=\int_{-\infty }^{\infty } \frac{\partial \rho (x,t)}{\partial t}dx =\int_{-\infty }^{\infty }\left(\frac{\partial \psi^*(x,t)}{\partial t} \psi(x,t)+\psi ^*(x,t) \frac{\partial \psi(x,t) }{\partial t} \right)dx$$ By complex conjugating the Schrodinger equation we have: $$ \frac{dN(t_0)}{dt}=\int_{-\infty }^{\infty } \frac{\partial \rho (x,t)}{\partial t}dx =\frac{i}{\hbar}\left(\int_{-\infty }^{\infty }(\psi \hat H)^* \psi dx-\int_{-\infty }^{\infty }\psi ^* (\hat H\psi)dx\right)$$ At this point, to have zero we need $$ \int_{-\infty }^{\infty }(\psi \hat H)^* \psi dx=\int_{-\infty }^{\infty }\psi ^* (\hat H\psi )dx$$ This equation is valid if the general condition of Hermiticity holds: $\int\psi_1^* (T\psi_2 ) dx=\int (T^\dagger \psi_1 )^* \psi_2 dx$, in our case $\int (\hat H \psi _1)^* \psi _2 dx=\int \psi _1^* \hat H \psi _2 dx$. Now I come to my problem. I understand that the above is valid when $\psi _1 = \psi _2$, and that we can prove through the boundary conditions $\lim_{x\to \pm\infty}\psi (x,t)=0 ; \lim_{x\to \pm\infty}\frac{\partial \psi (x,t)}{\partial x} < \infty $ that $\frac{dN(t)}{dt}=0$, and if this is zero the Hamiltonian is Hermitian. In the video lecture, Prof. Zwiebach asks (5:10) to prove the same also for the general case when $\psi_1$ and $\psi_2$ are different functions. Can someone help me understand how to carry out this proof? Answer: So you want to prove that the Hamiltonian $\hat{H}$ is Hermitian?
There are two ways of answering this. By postulate: The energy is an observable. By the postulates of QM, it is described by a linear Hermitian operator on some Hilbert space. Ta-da! OK, this doesn't give you much additional insight into the mathematical workings and will hardly qualify as a proof. So the other option may help you more to understand the concepts, though keep in mind that it is less fundamental/general. By using a specific Hamiltonian: Not every operator is Hermitian, so you will need to supply a specific Hamiltonian to prove the statement. You have not given us the Hamiltonian that is used in Prof. Zwiebach's lecture. In position representation, the Hamiltonian operator that will get you through most introductory lectures is probably $$\hat{H} \psi(x) = \left[-\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + V(x)\right] \psi(x).$$ Just for practice, you can check the linearity of this operator yourself. You can also check that it's Hermitian fairly easily. $V(x)$ stands for a real-valued function $V: \mathbb{R}^d \to \mathbb{R}$ (it seems to be $d=1$ in your case). Therefore, $V(x) = V^*(x)$. So quite obviously $$\int_{-\infty}^\infty \textrm{d} x\ \psi_1^*(x) V(x)\psi_2(x) = \int_{-\infty}^\infty \textrm{d} x\ \psi_1^*(x) V^*(x)\psi_2(x) = \int_{-\infty}^\infty \textrm{d} x\ (V(x)\psi_1(x))^*\psi_2(x).$$ The derivative term is almost as simple: you just need to use integration by parts and recall that in the Hilbert space you have, all functions and their derivatives fall off to zero at infinity, $\psi(x) \to 0$ and $\partial_x \psi(x) \to 0$ as $|x| \to \infty$. Therefore, you can integrate by parts twice to get the desired result. I think you did most of the work already, judging from what you say in your question. So just have at it once more, you'll get there.
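As a numerical sanity check (a sketch with assumed units $\hbar = m = 1$ and an assumed harmonic potential), a finite-difference discretization of this Hamiltonian is a real symmetric matrix, so the Hermiticity condition $\int \psi_1^* (\hat H \psi_2)\, dx = \int (\hat H \psi_1)^* \psi_2\, dx$ holds even for two different test functions:

```python
import numpy as np

# Discretize H = -(1/2) d^2/dx^2 + V(x) on a grid with zero boundary
# conditions; central differences give a real symmetric (Hermitian) matrix.
N, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

D2 = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
      + np.diag(np.ones(N - 1), -1)) / dx**2
H = -0.5 * D2 + np.diag(0.5 * x**2)       # harmonic oscillator, V real

# Two *different* normalizable test functions, psi1 != psi2.
psi1 = np.exp(-x**2) * (1 + 0.3j * x)
psi2 = np.exp(-x**2 / 2) * x

lhs = np.sum(np.conj(psi1) * (H @ psi2)) * dx   # <psi1 | H psi2>
rhs = np.sum(np.conj(H @ psi1) * psi2) * dx     # <H psi1 | psi2>
print(np.allclose(lhs, rhs))                    # True
```

The two inner products agree to floating-point precision, which is the discrete analogue of the boundary terms vanishing in the double integration by parts.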
{ "domain": "physics.stackexchange", "id": 74974, "tags": "quantum-mechanics, hilbert-space, operators, wavefunction" }
How is this sugar classified as an aldose?
Question: I have made the following image in Paint, so forgive me if it isn't the most appealing sugar. I have a hard time visualizing how the following sugar is an aldose. My speculation for the Fischer projection of the alleged carbohydrate is: My speculation is a result of the fact that the lack of a primary alcohol at the end of traditional carbohydrates like D-glucose means that there is no carbon group alpha or beta to carbon 4 in the furanose ring. Would the fact that there is still a CHO group on carbon 1 make this sugar an aldose? Is this logic correct, and is that the Fischer projection for the sugar shown above in the Haworth projection? Answer: Yes, it is an aldose; it is classified as an aldose because of the aldehyde group (CHO). The compound is actually an aldotetrose called L-threose. Try using the ACD/ChemSketch program for the projection; the freeware version is a free download...
{ "domain": "chemistry.stackexchange", "id": 4288, "tags": "organic-chemistry, carbohydrates" }
Why can't we use a meter bridge to measure high and low resistances?
Question: Our physics master taught us that we can't use a meter bridge to measure low resistance, but he didn't give a valid reason for it. Answer: A Wheatstone bridge works best (smallest error) when the resistors are all about equal. Having two small resistors in series on one side of the bridge will result in a lot of current (maybe more than the supply can handle), heating, and errors in the measurement. And because you're measuring the voltage across the same contact that carries the current, you will be more sensitive to contact resistance (compare this with the 4-point Kelvin probe arrangement, which circumvents the problem and is better suited for low-impedance measurements). Furthermore, the slightest imbalance in the bridge will send a large current through the galvanometer - which might well break it. On the other hand, when resistance gets very high, the currents that would flow if the bridge were unbalanced become very small, and this may make them hard to detect (it depends a bit on the make and model of the device you use to detect the imbalance). Here, for example, is the calculation of the current that flows when you have two resistors of resistance R = 1 MOhm in series, with 10 V across the bridge, and you offset the jockey by 5%. I will assume that the resistance of the wire is lower (maybe 10 kOhm) so it doesn't really come into play. We see then that there are unequal voltages across the two resistors; with the jockey 5% off center, the voltages will be 5.5 V and 4.5 V respectively. There is a net current of (5.5 - 4.5) V / (1 MOhm) = 1 uA that flows through the galvanometer. That's not a ridiculously small current - but it is already near the limit of what ordinary mechanical (coil and needle) devices can measure. So as resistance increases, your ability to measure it accurately with a bridge decreases.
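The worked estimate above can be reproduced in a few lines (R = 1 MOhm is assumed for the two series resistors, and the wire resistance is neglected as in the answer):

```python
# Back-of-envelope arithmetic from the answer: 10 V across the bridge,
# two equal high-value resistors, jockey offset 5% from center.
V = 10.0          # volts across the bridge
R = 1e6           # each series resistor, ohms (assumed value)
offset = 0.05     # jockey 5% off center

v1 = (0.5 + offset) * V     # 5.5 V across one resistor
v2 = (0.5 - offset) * V     # 4.5 V across the other
i_galvo = (v1 - v2) / R     # net current through the galvanometer
print(i_galvo)              # 1e-06 A, i.e. 1 uA
```

Scaling R up or down by a factor of ten scales the detection current by the same factor, which is exactly the sensitivity problem described above.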
{ "domain": "physics.stackexchange", "id": 45222, "tags": "electric-circuits, electric-current, electrical-resistance" }
Is it a mandatory condition that complex poles and zeros should always exist as conjugate pairs?
Question: Do complex poles and zeros always exist in conjugate pairs? If not always, in which contexts does this apply? https://www.informit.com/articles/article.aspx?p=32090&seqNum=9 The above link mentions a related idea with eq. 3.50. The above link is related to control systems, and since control systems and signal processing are somehow linked, please try to answer/comment for the general case and also specifically for these two subjects (control systems and signal processing) in context. Answer: For any polynomial with real coefficients, the roots, if complex, will always occur in conjugate pairs, since if we do have a complex root, the factored polynomial can only have real coefficients if that factor is multiplied by its complex conjugate. To see this simply, consider that angles add in the product of complex numbers. A generalized complex number with real magnitude $K$ and real angle $\theta$ can be written in exponential form as $Ke^{j\theta}$. The complex conjugate of this is $Ke^{-j\theta}$ and the product would always be real: $$Ke^{j\theta}Ke^{-j\theta} = K^2e^{j0} = K^2$$ Similarly, the addition of two complex conjugate values would be real, which is clear when the complex number is written in real-plus-imaginary form, with $I$ the real part and $jQ$ the imaginary part: $$Ke^{j\theta} = I + jQ$$ $$Ke^{-j\theta} = I - jQ$$ $$Ke^{j\theta} + Ke^{-j\theta} = (I + jQ) + (I - jQ) = 2I$$ For poles and zeros, as the roots of associated polynomials which can be factored into a product of first-order forms: $$P_z(s) = (s-z_1)(s-z_2)(s-z_3)\ldots$$ Where $z_n$ represents the zeros, and similarly for the poles $$P_p(s) = (s-p_1)(s-p_2)(s-p_3)\ldots$$ Where $p_n$ represents the poles; For every complex zero or pole that exists, in order for the entire product to be real, a complex conjugate pair must also exist, as demonstrated by the following product: $$(s-p_1)(s-p^*_1) = s^2 - p_1 s - p^*_1 s + p_1 p^*_1 = s^2 - s(p_1 +p^*_1) +p_1 p^*_1$$ Where we see the complex conjugate
sum and product occur, which results in real coefficients. Conversely, for a polynomial with complex coefficients, the poles and zeros need not occur in complex conjugate pairs. One example from radio communications where complex poles will not exist in conjugate pairs is a baseband IIR filter to correct for spectral asymmetry: this can occur when a modulated signal passes through a channel where the signal content above the carrier frequency has higher loss than the signal content below the carrier frequency. In a direct-conversion receiver, this waveform is frequency-translated directly to baseband and would be represented as a complex signal. Any filter with real coefficients would have the same magnitude response (and conjugate phase response) for the upper (positive-frequency) and lower (negative-frequency) sidebands. We require a filter with complex coefficients in order to filter the positive and negative frequencies differently.
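Both cases are easy to verify numerically; this small sketch checks the conjugate-pair property for a real-coefficient polynomial and its absence for a complex-coefficient one:

```python
import numpy as np

# Real coefficients -> complex roots come in conjugate pairs.
r = np.roots([1.0, -2.0, 5.0])             # s^2 - 2s + 5
print(sorted(r, key=lambda z: z.imag))     # roots 1 - 2j and 1 + 2j

# Multiplying a factor by its conjugate factor restores real coefficients:
prod = np.convolve([1, -(1 + 2j)], [1, -(1 - 2j)])
print(prod.real)                           # [ 1. -2.  5.]

# Complex coefficients -> no conjugate-pair constraint:
print(np.roots([1.0, -(1 + 2j)]))          # single root 1 + 2j, unpaired
```

`np.convolve` of the coefficient vectors is polynomial multiplication, so the second print is exactly the expansion $(s-p_1)(s-p_1^*)$ from the answer.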
{ "domain": "dsp.stackexchange", "id": 11042, "tags": "poles-zeros, control-systems" }
What causes $A^{\mu\nu}_{\pm}=F^{\mu\nu}\pm i \tilde{F}^{\mu\nu}$ to have three independent components rather than six?
Question: Both the electromagnetic field strength tensor $F^{\mu\nu}$ and its dual $\tilde{F}^{\mu\nu}$ defined as $\tilde{F}^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\lambda\rho}F_{\lambda\rho}$ are examples of antisymmetric tensors of rank two with six independent components in (3+1)-dimensional Minkowski spacetime. Both are 6-dimensional irreducible representations of the proper Lorentz group ${\rm SO(3,1)}$. Let us define the objects $$ A^{\mu\nu}_{\pm}=F^{\mu\nu}\pm i \tilde{F}^{\mu\nu} $$ Questions What causes $A^{\mu\nu}_{\pm}$ to have three independent components rather than six? Do $A^{\mu\nu}_{+}$ and $A^{\mu\nu}_{-}$ satisfy additional constraints which further cut down their number of independent components? Can we attach physical meanings to the components of $A^{\mu\nu}_{\pm}$? Answer: The existing answers are good, but they kind of beat around the point and/or overcomplicate things, I think. In short: Do $A^{\mu\nu}_{+}$ and $A^{\mu\nu}_{-}$ satisfy additional constraints which further cut down their number of independent components? Yes. Your $A^{\mu\nu}_{\pm}$ tensors have an additional symmetry that is not present in either $F^{\mu\nu}$ or its dual, and that additional symmetry means that you need fewer components to reconstruct the full tensor. So, what is this symmetry? It's called electric-magnetic duality, which is the fancy name for the idea that the replacements \begin{align} \mathbf E & \mapsto \mathbf B \\ \mathbf B & \mapsto -\mathbf E \end{align} will leave Maxwell's equations untouched, so it's a full symmetry of the theory. From the field-tensor expressions in md2perpe's answer it's easy to see that this symmetry can be rephrased as \begin{align} F^{\mu\nu} & \mapsto \tilde F^{\mu\nu} = +\frac{1}{2}\epsilon^{\mu\nu\lambda\rho} F_{\lambda\rho} \\ \tilde F^{\mu\nu} & \mapsto -F^{\mu\nu} = +\frac{1}{2}\epsilon^{\mu\nu\lambda\rho} \tilde F_{\lambda\rho} . 
\end{align} Since you've defined $$ A^{\mu\nu}_{\pm}=F^{\mu\nu}\pm i \tilde{F}^{\mu\nu}, $$ this means that under the duality transformation, your tensor will change to $$ A^{\mu\nu}_{\pm} = F^{\mu\nu}\pm i \tilde{F}^{\mu\nu} \mapsto \tilde{F}^{\mu\nu}\mp i F^{\mu\nu} =\mp i \left( F^{\mu\nu}\pm i \tilde{F}^{\mu\nu} \right) = \mp i A^{\mu\nu}_{\pm} , $$ i.e. it is an eigenvector of the transformation. Since the pure component-based aspect of the transformation requires that we send $$ A^{\mu\nu}_{\pm} \mapsto \frac{1}{2}\epsilon^{\mu\nu\lambda\rho} A_{\pm \ \lambda\rho} , $$ adding the demand that $A^{\mu\nu}_{\pm}$ be an eigenvector with eigenvalue $\mp i$ then implies that there needs to be a set of linear relationships between the tensor's components, $$ \frac{1}{2}\epsilon^{\mu\nu\lambda\rho} A_{\pm \ \lambda\rho} = \mp i A^{\mu\nu}_{\pm} , $$ and it is these linear relationships that are ultimately responsible for knocking out independent components from your tensor.
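The eigenvector property can be checked numerically. This is a sketch with assumed conventions (metric $\eta = \mathrm{diag}(+,-,-,-)$ and $\epsilon^{0123}=+1$, under which the double dual is $-1$ on two-forms):

```python
import numpy as np
from itertools import permutations

eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Levi-Civita symbol with upper indices, eps^{0123} = +1.
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    inv = sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))
    eps[p] = (-1.0) ** inv

def dual(F):
    """F~^{mu nu} = (1/2) eps^{mu nu lam rho} F_{lam rho}."""
    F_low = eta @ F @ eta                        # lower both indices
    return 0.5 * np.einsum('mnlr,lr->mn', eps, F_low)

rng = np.random.default_rng(0)
Mx = rng.normal(size=(4, 4))
F = Mx - Mx.T                                    # generic antisymmetric F

print(np.allclose(dual(dual(F)), -F))            # True: double dual = -1

A_plus = F + 1j * dual(F)
A_minus = F - 1j * dual(F)
print(np.allclose(dual(A_plus), -1j * A_plus))   # True: eigenvalue -i
print(np.allclose(dual(A_minus), 1j * A_minus))  # True: eigenvalue +i
```

The last two checks are exactly the linear constraints $\frac{1}{2}\epsilon^{\mu\nu\lambda\rho} A_{\pm\,\lambda\rho} = \mp i A^{\mu\nu}_{\pm}$, which tie half of the six components to the other half.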
{ "domain": "physics.stackexchange", "id": 53667, "tags": "electromagnetism, special-relativity, field-theory, group-theory, lorentz-symmetry" }
Why is the center of mass at the focus for elliptical orbits?
Question: I know there are similar questions with answers, like this one on Physics Stack Exchange, but before I read those I had a completely different argument with a different conclusion, which I thought was correct, but now I'm suspecting that there is something wrong with the argument. I just want to know what exactly is wrong with it. In a two-body problem with, say, particles of masses $m_1$ and $m_2$ interacting under a central force $f(r)\hat{r}$ where $\mathbf{r} = \mathbf r_2 - \mathbf r_1$, the equations of motion of the two particles are $$m_1\ddot{\mathbf r}_1 = f(r)\hat{r}$$ $$m_2 \ddot{\mathbf r}_2= -f(r)\hat{r}$$ From these two equations I have $$\ddot{\mathbf{r}}_2 - \ddot{\mathbf{r}}_1 = -f(r) \hat{r} \left(\frac{1}{m_1} + \frac{1}{m_2} \right)$$ or $$\mu \ddot{\mathbf r} = - f(r)\hat{r}$$ where $\mu = \frac{m_1m_2}{m_1 + m_2}$ Now by just looking at the last equation it seems that there is a single particle under a central force and $\vec{r}$ is just the position vector of that particle. But I also know that $\vec{r}$ is the separation vector between the particles of masses $m_1$ and $m_2$. So it seems to me that when I find the trajectory of $\mu$ (when I find $\vec{r}(t)$) it would also be the trajectory of $m_2$ with $m_1$ at the origin. But it seems my reasoning was wrong, because it turns out that the center of mass is at the origin, which is further reinforced by the fact that the angular momentum and energy of $\mu$ are the angular momentum and energy of the two-body system in the center-of-mass frame. So there must be something I was missing, but I can't figure out what it is. Answer: I figured out what was wrong with my argument a while back. First of all, J. Thomas was right: I do implicitly put the center of mass at the origin. I guess the way the one-body-problem equation was derived in my book confused me.
If I put my origin at the center of mass, then the position vectors of $m_1$ and $m_2$ relative to the center of mass are $-\frac{m_2}{m_1 + m_2} \mathbf{r}$ and $\frac{m_1}{m_1 + m_2} \mathbf{r}$ respectively. So the equations of motion for $m_1$ and $m_2$ in the cm frame are $$m_1 \frac{d^2}{dt^2}\left( -\frac{m_2}{m_1 + m_2} \mathbf{r} \right) = f(r)\hat{r}$$ or $$\mu \ddot{\mathbf{r}} = - f(r)\hat{r}$$ and $$m_2 \frac{d^2}{dt^2}\left( \frac{m_1}{m_1 + m_2} \mathbf{r} \right) = - f(r)\hat{r}$$ or $$\mu \ddot{\mathbf{r}} = - f(r)\hat{r}$$ But I'm still solving for the separation vector $\mathbf{r}$ and not the actual positions of $m_1$ or $m_2$ with respect to the center of mass. My mistake was forgetting that the tail of $\mathbf{r}$ is an accelerating point. I thought that because I solved for $\mathbf{r}$ with its tail fixed at my origin, its motion must be the same $\mathbf{r}$ as seen with $m_1$ as the origin. But the rate of change of a vector viewed in a rotating frame is different from its rate of change viewed in an inertial frame, so the observed trajectory would be different. If $\mathbf{\Omega}$ is the angular velocity of $m_1$ at an instant with respect to the center of mass, then the rate of change of the separation vector as viewed with $m_1$ at the center is $$\left( \frac{d\mathbf{r}}{dt} \right)_{m_1} = \left( \frac{d\mathbf{r}}{dt} \right)_{cm} - \mathbf{\Omega} \times \mathbf{r} $$ So the trajectories are different. The actual trajectories of $m_1$ and $m_2$ are of course given by $-\frac{m_2}{m_1 + m_2} \mathbf{r}$ and $\frac{m_1}{m_1 + m_2} \mathbf{r}$ respectively, with the origin (center of mass) at the focus. But if $m_2 \gg m_1$, as in the Earth-Sun case, then the trajectory of $m_1$ with respect to the cm is approximately $\mathbf{r}(t)$.
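The decomposition about the center of mass can be checked with a short numerical integration. This is a toy sketch (G = 1, arbitrary masses and initial conditions chosen so the total momentum is zero):

```python
import numpy as np

m1, m2, G = 3.0, 1.0, 1.0
M = m1 + m2
r1, r2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
v1, v2 = np.array([0.0, -0.25]), np.array([0.0, 0.75])  # total momentum = 0

def accels(r1, r2):
    d = r2 - r1
    f = G * d / np.linalg.norm(d) ** 3
    return m2 * f, -m1 * f          # equal and opposite, divided by masses

dt = 1e-3
for _ in range(3000):               # semi-implicit (symplectic) Euler
    a1, a2 = accels(r1, r2)
    v1 = v1 + a1 * dt; v2 = v2 + a2 * dt
    r1 = r1 + v1 * dt; r2 = r2 + v2 * dt

com = (m1 * r1 + m2 * r2) / M
r = r2 - r1
print(np.allclose(com, [0.25, 0.0]))        # True: the COM never moved
# r1,2 - com = -/+ (m2,1 / M) r is an identity; with a fixed COM these
# fractions of r(t) ARE the trajectories about the focus:
print(np.allclose(r1 - com, -m2 / M * r))   # True
print(np.allclose(r2 - com, m1 / M * r))    # True
```

Since momentum conservation keeps the center of mass fixed, each body really does trace a scaled copy of $\mathbf{r}(t)$ about that fixed point, which is the resolution described above.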
{ "domain": "physics.stackexchange", "id": 70129, "tags": "newtonian-mechanics, orbital-motion, vectors, inertial-frames, celestial-mechanics" }
Feed forward neural network using numpy for IRIS dataset
Question: I tried to build a neural network for working on the IRIS dataset using only numpy after reading an article (link: https://iamtrask.github.io/2015/07/12/basic-python-network/). I tried to search the internet, but everyone was using ML libraries and I found no solution using just numpy. I tried adding different hidden layers to my feed-forward neural network, but it still wasn't converging. I tried to use backpropagation. I used sigmoid and also ReLU, neither of which was successful. Can someone please give me code that will work on the IRIS dataset, built using only feed-forward neural networks with numpy as the only library, or, if it is not possible to build such a thing under these constraints, please let me know what goes wrong with them. Also tell me: will it be possible to create a neural network to predict the values of a matrix multiplication, i.e. if we have A * B = C with matrix A as input and C as output, can we achieve a substantial amount of accuracy with feed-forward neural networks here? Answer: You cannot use ready-made code directly without any modification, because every piece of code is written for a specific dataset. In the article that you mentioned, the writer created a small dataset and then an ANN architecture for it. If you want to build an ANN based on the Iris dataset, you should think about and sketch the architecture on paper, maybe before coding. You should understand the Iris dataset first; I mean, you should understand what your input shapes are going to be, and also that the shape of the weights is related to the inputs' shape. After understanding these, you should decide the number of layers of your neural network. The tutorial that you followed explains all of this clearly. Do not skip anything; understand everything step by step. If you want to build an ANN architecture by yourself, you must understand all of the mathematics behind it
(forward and backward propagation, loss and cost, gradient descent). If you do not want to deal with all of this math, you can use a library such as Keras; then you can create an ANN easily. See this as a source: https://ml-cheatsheet.readthedocs.io/en/latest/ You can also see my Kaggle kernel, where I create this network in detail using only numpy. I also used Keras in another section. You can check: https://www.kaggle.com/erdemuysal/gender-recognition-with-lr-and-ann
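To show that a numpy-only feed-forward network with backpropagation does converge, here is a minimal sketch. Synthetic Gaussian blobs stand in for Iris (loading the real dataset would need another library or a file), but the shapes match Iris: 4 features, 3 classes. All sizes and hyperparameters are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_per, n_feat, n_class, n_hid = 50, 4, 3, 10

# Three well-separated blobs -> an easy stand-in dataset, shape (150, 4).
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per, n_feat))
               for c in (0.0, 2.0, 4.0)])
y = np.repeat(np.arange(n_class), n_per)
Y = np.eye(n_class)[y]                       # one-hot targets

W1 = rng.normal(0, 0.1, (n_feat, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_class)); b2 = np.zeros(n_class)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for _ in range(500):                         # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)                 # forward pass
    p = softmax(h @ W2 + b2)
    g2 = (p - Y) / len(X)                    # softmax + cross-entropy grad
    gW2 = h.T @ g2; gb2 = g2.sum(0)
    g1 = (g2 @ W2.T) * (1 - h**2)            # backprop through tanh
    gW1 = X.T @ g1; gb1 = g1.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

acc = (softmax(np.tanh(X @ W1 + b1) @ W2 + b2).argmax(1) == y).mean()
print(acc)                                   # training accuracy
```

The same loop works on Iris once `X`/`y` are replaced with the real data (ideally standardized); if training stalls there, the usual suspects are the learning rate, missing input scaling, or a gradient term that doesn't match the chosen activation.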
{ "domain": "ai.stackexchange", "id": 870, "tags": "neural-networks, deep-learning, backpropagation, gradient-descent, feedforward-neural-networks" }
How do Anti-Aliasing-Filters in audio signal processing work?
Question: I am a little confused about the use of anti-aliasing filters. As to how I am confused, please consider the following task: Consider a simple sampling rate conversion system with a conversion rate of 4/3. The system consists of two upsampling blocks, each by 2, and one downsampling block of 3. The frequency spectrum has periodic repetitions at integer multiples of the sampling frequency. Therefore, upsampling creates additional - unwanted - spectral images. These can be cancelled by an anti-imaging filter: The spectrum within the blue dotted rectangles should be my output Y_1. This output will be upsampled a second time, which I think results in just having 4 spectral repetitions up to $2\pi$. But how does downsampling work? I am given the following figure from my lecture notes: So, apparently before downsampling we apply the anti-aliasing filter, which will result in the part of the spectrum within the blue-dotted shapes. Now, we increase Omega by a factor of 2, such that... I don't know. What exactly happens here? Answer: I think you make this needlessly complicated. Let's do a specific example: let's say we want to go from 48 kHz up by 4/3 to 64 kHz and your signal bandwidth is 20 kHz. In the first step, you would upsample by a factor of 4 by inserting 3 zeros between each pair of samples. Your new sample rate is 192 kHz. Your original spectrum is preserved and you get mirror spectra centered around 48 kHz & 96 kHz. To eliminate these you need a lowpass filter with a cutoff of 20 kHz at a 192 kHz sample rate and sufficient attenuation at 24 kHz. Finally, you just downsample the result by a factor of three by simply throwing away every second and third sample. This works without aliasing, since the lowpass filter has already eliminated all content that would alias. Regarding your graphs: in general I would avoid drawing anything past the Nyquist frequency. It's just periodic repetition that doesn't add any useful information, and it can get confusing.
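The 48 kHz to 64 kHz example can be sketched in numpy. The filter here is a simple windowed-sinc design (an assumed design choice; the cutoff is placed at 24 kHz, which is fine for this 1 kHz test tone, whereas a real design would roll off between 20 and 24 kHz):

```python
import numpy as np

fs_in, up, down = 48_000, 4, 3
fs_hi = fs_in * up                     # 192 kHz intermediate rate

n_in = 480                             # 10 ms of signal
t = np.arange(n_in) / fs_in
x = np.sin(2 * np.pi * 1000 * t)       # 1 kHz test tone

# 1) upsample: insert up-1 zeros between samples
xu = np.zeros(len(x) * up)
xu[::up] = x

# 2) anti-imaging lowpass at fs_hi; gain `up` restores the amplitude
#    reduced by zero-stuffing
n = np.arange(-128, 129)
fc = 0.5 * fs_in / fs_hi               # normalized cutoff = 1/8
h = up * 2 * fc * np.sinc(2 * fc * n) * np.hamming(len(n))
xf = np.convolve(xu, h, mode='same')

# 3) downsample: keep every `down`-th sample
y = xf[::down]
print(len(x), len(y))                  # 480 640: exactly 4/3 the length
```

Away from the edges, `y` is the same 1 kHz tone at unit amplitude, now sampled at 64 kHz; and because the lowpass ran before the decimation, dropping two of every three samples introduced no aliasing.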
{ "domain": "dsp.stackexchange", "id": 7194, "tags": "audio, anti-aliasing-filter" }
Autopilots which are compatible with ROS
Question: Where can I find a list of them? Please help. Originally posted by gadamer on ROS Answers with karma: 11 on 2018-03-18 Post score: -1 Answer: There's no central authority. Ardupilot and PX4 are both known to work. Other autopilots may have support; you'll likely get more accurate responses by looking at the autopilot project(s) of your interest. Originally posted by tfoote with karma: 58457 on 2018-03-19 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 30358, "tags": "ros, ros2, drone" }
Were Carboniferous plants more efficient sequestering $\ce{CO2}$ than present plants?
Question: The Carboniferous was a period when CO2 levels fell drastically. Source: Geologic history of seawater: A MAGic approach to carbon chemistry and ocean ventilation I think the main reasons are tectonic settings and the formation of coal deposits with the arrival of lignin-rich plants that bacteria didn't yet know how to decompose. My question here is: Setting aside that Carboniferous plants were efficient at sequestering CO2 by forming coal deposits after dying, were living Carboniferous plants more efficient at sequestering CO2 than present plants? Meaning, would a Carboniferous forest be more efficient than present forests at sequestering CO2 nowadays? Answer: Probably not. The biological pathway for making lignin is the same in all plants, so there is no reason to believe they would be better. If anything, modern trees as a whole may be better, since they survive in a larger range of environments and very dense-wooded trees exist. In addition, without defenses against modern wood decomposers, Carboniferous plants might have a hard time surviving in the modern world.
{ "domain": "earthscience.stackexchange", "id": 2249, "tags": "climate, paleoclimatology, plant" }
Why does foam in a rotating liquid accumulate near the centre?
Question: I first noticed this while having a coffee. When the coffee was rotating in the cup, most of its foam accumulated near the centre. I recreated the effect with some soap and water. The accumulated foam formed a beautiful dome. You can see the dome formation in detail in this video. Side view Top view I wonder what causes this pattern formation. I would expect the foam to move away from the centre due to centrifugal forces, but that's not what I see. I believe there's something else to it. So here's the question in short: Why does foam accumulate near the centre of rotating fluids? Also, why is there a characteristic dome shape for the accumulated foam? Answer: The water experiences a greater centrifugal force than the bubbles Both the bubbles and the water experience a centrifugal force. However, since the centrifugal force is given by: $$ F_c = m R \omega ^2 $$ You can see that a more massive object (a parcel of water, compared with a bubble of the same volume) will experience a greater centrifugal force. From the perspective of the rotating frame, those forces would look like the pink arrows below: Thus the water at the same $R$ as the bubble will flow around the bubble, shoving it closer to the center of rotation. Of course, the bubbles still rise to the surface, so they rise in a pile and cause the bubble bulge in your picture: This is the same principle by which a centrifuge operates, but instead of throwing the heavier material to the outside of the rotating water, it throws the lighter objects towards the center of the rotating water.
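Plugging rough numbers into $F_c = m R \omega^2$ shows how lopsided the competition is (the parcel size, radius, and spin rate are illustrative guesses for a coffee cup):

```python
import math

# Same-volume water parcel vs air bubble, same R and omega.
rho_water, rho_air = 1000.0, 1.2      # kg/m^3
V = 1e-9                              # 1 mm^3 in m^3
R = 0.03                              # 3 cm from the rotation axis
omega = 2 * 2 * math.pi               # 2 rev/s in rad/s

F_water = rho_water * V * R * omega**2
F_air = rho_air * V * R * omega**2
print(F_water / F_air)                # ~833: water is flung outward far harder
```

Only the density ratio survives in the comparison, so the water always wins by roughly a factor of 800, and the displaced bubbles are pushed toward the axis.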
{ "domain": "physics.stackexchange", "id": 62818, "tags": "newtonian-mechanics, fluid-dynamics, bubbles" }
Understanding the frequency scale of a spectrogram
Question: The graph below was derived from a raw seismogram recorded during an earthquake over a timespan from t=0 to t=1400 seconds (not shown on the x-axis). The original seismogram $s(t)$ is not shown, but it is known that the original signal is the convolution between the source wave $w(t)$, the effects of the medium $g(t)$, and the effects of the instrument itself $i(t)$; that is, $$s(t)=w(t)*g(t)*i(t)$$ The graph above is what remains after removing the instrument response, as shown below: I think this instrument response is telling me that the amplitudes of the original seismogram $s(t)$ at frequencies higher than $0.02\ \text{Hz}$ were amplified at a constant gain of approximately $10^9$, while amplitudes at frequencies lower than $0.01\ \text{Hz}$ are effectively attenuated (by gains significantly smaller than at frequencies higher than $0.02\ \text{Hz}$). But what exactly can I say about the shape of the original seismogram from this information? If these frequencies are 'dampened' in the broadband representation above, what, if anything, can be said about the amplitudes and shapes observed in the original graph? Can I simply say that longer waves should be present in the original signal but not say anything about when they occur? Could I somehow convolve these graphs to generate the original signal (or at least closely approximate it)? Answer: I think what you're saying is that you have a linear, time-invariant (LTI) system model for both the propagation medium and the response of the measurement instrument itself. Given that you know the (at least approximate) magnitude response of the instrument, you would like to best estimate what the actual signal at the instrument's input looked like. This is similar to the process of equalization used in communications systems.
Since you have some knowledge of the system (in this case characterized by the impulse response $i(t)$ or the frequency response plot that you showed), you can attempt to undo the effects on the signal that are imposed by the measurement instrument. More generally, this type of operation is known as deconvolution: given an output of a convolution and optionally knowledge of one of the inputs to the convolution, one would like to calculate the other input. In this case, one of the convolution inputs is $i(t)$ and the output is the measurements reported by your instrument. It appears that you don't know the exact impulse response of the measurement device, only its magnitude response. This isn't enough information in itself to execute the deconvolution, so you'll need to make some assumptions about the phase response of the instrument in order to move forward with that approach. This is a very powerful technique, however; as the Wikipedia article notes, the approach can be used to compensate for the effects of the seismic propagation medium also (since it's also modeled as a convolution with an impulse response), potentially allowing you to gain more insight as to the original seismic event. Edit: And, as to your questions about how you're interpreting the plot, your understanding seems correct. The system has a highpass characteristic with a flat passband starting at approximately $0.02\ \text{Hz}$, with the magnitude response decreasing by a factor of 10 per decade of frequency below $0.02\ \text{Hz}$. This could be modeled well as a single-pole highpass filter.
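A minimal sketch of frequency-domain deconvolution: convolve a "true" signal with a known (here invented) instrument response, then recover it by regularized spectral division. The impulse response, spike positions, and regularization constant are all illustrative assumptions, standing in for the water-level or Wiener deconvolution used on real seismograms:

```python
import numpy as np

n = 256
true = np.zeros(n)
true[[40, 90, 150]] = [1.0, -0.7, 0.4]          # spiky "true" input

# Assumed instrument response: a decaying oscillation.
t = np.arange(64)
i_resp = np.exp(-t / 10.0) * np.cos(2 * np.pi * t / 16.0)

measured = np.convolve(true, i_resp)[:n]        # what the instrument records

eps = 1e-3                                      # regularization ("water level")
I = np.fft.rfft(i_resp, n)
S = np.fft.rfft(measured, n)
est = np.fft.irfft(S * np.conj(I) / (np.abs(I) ** 2 + eps), n)

print(np.abs(est - true).max())                 # small residual from eps
```

The `eps` term keeps the division stable where the instrument's response is weak; with a pure division (`S / I`), bins where `|I|` is tiny would blow up measurement noise, which is exactly why real deconvolution codes need such a regularizer and why an assumed phase response matters.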
{ "domain": "dsp.stackexchange", "id": 502, "tags": "convolution, impulse-response" }
Book reference for electronic occupation of TM
Question: Is there a book I can reference that has a table like this? I can't reference an online source in my thesis, and all the pictures I have found online like this don't make their reference explicit. Thanks! Answer: Element Name and Symbol + Atomic Number + Electron Configuration NIST Standard Reference Database 111 provides an up-to-date reference for the electron configuration of neutral atoms in the ground state [1] in both tabulated and periodic-table formats. Common Oxidation States An overview of elements' oxidation numbers is provided in practically every inorganic chemistry textbook. Arguably, the most extensive one is given throughout Greenwood and Earnshaw's Chemistry of the Elements [2]. More compressed information is provided in the CRC Handbook of Chemistry and Physics (Section 4, Properties of the Elements and Inorganic Compounds) [3, pp. 4-1 to 4-92]. There is also a periodic table at the end which includes the oxidation numbers of the elements in the upper right corner of each cell: References Kramida, A., Ralchenko, Yu., Reader, J., and NIST ASD Team (2018). NIST Atomic Spectra Database (ver. 5.6.1), [Online]. Available: https://physics.nist.gov/asd. National Institute of Standards and Technology, Gaithersburg, MD. DOI: https://doi.org/10.18434/T4W30F Greenwood, N. N.; Earnshaw, A. Chemistry of the Elements, 2nd ed.; Butterworth-Heinemann: Oxford; Boston, 1997. ISBN 978-0-7506-3365-9 Haynes, W. M.; Lide, D. R.; Bruno, T. J. CRC Handbook of Chemistry and Physics: A Ready-Reference Book of Chemical and Physical Data; CRC Press, 2017; Vol. 97. ISBN 978-1-4987-5429-3
{ "domain": "chemistry.stackexchange", "id": 12039, "tags": "reference-request" }
Need help! How do you test for a correlation between two data sets?
Question: I'm doing a research project and want to test for correlation between different data sets. For example, I want to test if there is a correlation between median house prices and homeless population in the US by year. Here is some made-up data for the problem: Year 2000 - house price $260,000 - homeless pop 330,000; 2005 - 270,000 - 315,000; 2010 - 285,000 - 320,000; 2015 - 330,000 - 340,000; 2020 - 400,000 - 370,000. I then want to get (r) to measure the correlation between these two data sets and compare that strength of correlation to other data sets (for example, median house price and rates of domestic violence in the US). Thank you for the help! Answer: It would depend on your specific problem statement. If you do not want to treat this as time series data, i.e. do not want to take the year into account, you would simply compare the correlation value between home price and homeless population with the one between home price and domestic violence; whichever value is higher in magnitude (positively or negatively correlated) indicates the more strongly correlated pair: data_df['Home Price'].corr(data_df['Homeless pop']) data_df['Home Price'].corr(data_df['Domestic Violence rate']) If you want to take the time factor into account, you would have to convert the date column into a datetime column and then consider three different time series: Year and home price; Year and homeless pop; Year and domestic violence rate. Then you can use the Granger causality test for causality, or cross-correlation to see the correlation between the time series. You can refer to this post as well - https://towardsdatascience.com/computing-cross-correlation-between-geophysical-time-series-488642be7bf0#:~:text=Cross%2Dcorrelation%20is%20an%20established,inference%20on%20the%20seismic%20data.
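The snippets in the answer assume a DataFrame named `data_df` already exists; for concreteness, here is a self-contained version using the made-up numbers from the question (the column names are just illustrative):

```python
import pandas as pd

# The question's made-up data; column names are illustrative.
data_df = pd.DataFrame({
    "Year": [2000, 2005, 2010, 2015, 2020],
    "Home Price": [260_000, 270_000, 285_000, 330_000, 400_000],
    "Homeless pop": [330_000, 315_000, 320_000, 340_000, 370_000],
})

# Pearson r between the two series (.corr defaults to Pearson)
r = data_df["Home Price"].corr(data_df["Homeless pop"])  # ~0.93 for this data
```

You would compute the same quantity for each candidate pair (e.g. home price vs. domestic violence rate) and compare the |r| values, as the answer describes.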
{ "domain": "datascience.stackexchange", "id": 11758, "tags": "statistics" }
Can water be liquefied or solidified just by adjusting the temperature, regardless of the pressure?
Question: Technically the line between the liquid and solid phases would go a long way before hitting the y-axis, but the point is it will eventually. So the question remains: can any liquid, including water (with its unique phase diagram), be liquefied regardless of the pressure? Answer: If I understood your question correctly: yes, there are regions above which there will be only the solid state (rather than no solid state). This is limited to the molecular region. The Roman numerals indicate various ice phases. From the above phase diagram (which can be found here), the solid-liquid equilibrium line doesn't touch the y-axis as you suggest, but there will be only the solid phase above the $10\ \mathrm{GPa}$ region (approximately). In the phase diagram you drew, if (a big if) such a phase diagram exists for some element/molecule, then there will be no solid above the point where the solid-liquid line meets the y-axis. As pointed out in the comments, when we reach high temperatures and pressures we won't be seeing molecular water; it will dissociate into atoms/ions as shown in the figure.
{ "domain": "chemistry.stackexchange", "id": 3326, "tags": "water, phase" }
Physics of implosions - Lost submarine in South Atlantic waters
Question: Days ago, an Argentine submarine disappeared in the waters of the South Atlantic Ocean: the "ARA SAN JUAN". There is an article by Mr. Bruce Rule which can be seen here: https://thenewstalkers.com/community/discussion/36424/death-of-a-submarine In it, we can read the following: "The frequency of the collapse event signal (bubble-pulse) was about 4.4 Hz." Can somebody from the community explain what this bubble-pulse is, how it works, and how this value of 4.4 Hz is obtained? And in general, how do implosions work? Regards. Answer: When a submarine with an intact hull sinks, it eventually reaches a depth where the surrounding water pressure is sufficient to collapse or "implode" its hull. When this happens, the surrounding water rushes inwards as the hull crumples in upon itself. When the hull is completely crushed, the surrounding water keeps rushing inwards because of its inertia, and it then bounces back and reverses its direction briefly. In so doing it leaves behind a zone of negative pressure, in which a cloud of cavitation bubbles bursts into existence. The water flow then reverses itself and flows inwards again to neutralize the negative pressure, and the cavitation bubbles vanish. This process generates an acoustic signal called a bubble pulse. If a submarine has its hull torn open while at the surface, it fills with water and sinks - and does not undergo catastrophic collapse at its so-called "crush depth", and generates no bubble pulse signal. Analysis of the acoustic signature of a sinking submarine hence allows investigators to determine the probable cause.
{ "domain": "physics.stackexchange", "id": 46241, "tags": "acoustics" }
Conditioning Probability on a Language With Measure 0
Question: Let $\Sigma = \{ 1, 2, \ldots, n\}$ be some alphabet. Assume that you have an $n$-sided coin (each side corresponds to a letter in $\Sigma$), and we get each letter with equal probability. Now you can think of an infinite word $w$ over $\Sigma$ as a result of tossing the $n$-sided coin infinitely many times. With this view, I can think of $\omega$-regular languages over $\Sigma$ as events. There are interesting languages that have measure $0$. For example, consider the language $L = \bigcup\limits_{i\in \Sigma } \Sigma^* \cdot (\Sigma \setminus \{i\})^\omega$. That is, $L$ consists of all infinite words over $\Sigma$ that have finitely many $i$'s, for some $i$. Now I am interested in computing $P(L_2|L)$ for some language $L_2 \subseteq L$. Sometimes, I can guess the answer by "symmetry" considerations in case $L$ and $L_2$ are "simple" enough, but my question is: how can I condition on a language $L$ which is of measure $0$ in the general case? Example: assume again that $L = \bigcup\limits_{i\in \Sigma } \Sigma^* \cdot (\Sigma \setminus \{i\})^\omega$. Consider the following event (or language): $$ L_2 = L\setminus \{ w \in \Sigma^\omega: \text{$w$ has finitely many 1's, and every $j\neq 1$ appears infinitely often in $w$}\}$$ What is $P(L_2|L)$? In other words, how can I compute the fraction of words that I did not throw out from $L$? I believe there is a way to condition on a language of measure $0$, but I'm not sure how. A similar simple situation appears in the following: one can sample a random $(x, y)$ point in a rectangle $[0, 1]\times [0, 1] \subseteq\mathbb{R}^2$ and then ask what is the probability $P(y \geq \frac{1}{2} | x = \frac{1}{2})$. Clearly, this probability is $\frac{1}{2}$, although we condition on the line $x = \frac{1}{2}$, which has measure $0$ in $[0, 1]\times [0, 1]$. The way I think about the line-rectangle example is as follows. I imagine the line $x = \frac{1}{2}$ as a rectangle of width $\epsilon \to 0$.
Is there a similar approach that can be adapted to $\omega$-regular languages? Answer: I don't know of a general approach to handle this, but in the case of $\omega$-regular languages, this has been done. One approach, which I think was first introduced in the paper Computing Conditional Probabilities in Markovian Models Efficiently, is the following: Given the language $L$, let $D$ be a DPW for it. Now, start by constructing a Markov chain $M$ that captures your distribution on $\Sigma^\omega$ (uniform, in your case). Then take the product of $M$ and $D$. Now, in the product MC, you have the property that the probability that the run is accepting is exactly the probability of $L$ (0, in your case, but it works for any $L$). We now modify the product by "redistributing" the probability: whenever the MC reaches an end component (ergodic component) in which the probability of acceptance is 0, the run restarts in the initial state. This ensures that all $0$-probability-of-acceptance runs get "another chance". You can use this to essentially restrict yourself only to words in $L$, and now you can ask what the probability of another language is in this MC, and this provides a reasonable definition of conditional probability. I'm slightly sketchy on the details, but I think this should work. I've used this technique in a paper here: https://arxiv.org/abs/1608.06567.
{ "domain": "cstheory.stackexchange", "id": 5246, "tags": "fl.formal-languages, automata-theory, pr.probability" }
Why does radium have a higher first ionisation energy than barium?
Question: I'm wondering why radium appears to buck the general trend that first ionisation energies decrease as you move down a group in the periodic table: barium (the group 2 element preceding it) has a first ionisation energy of $\pu{502.9 kJ/mol}$, whereas radium has a slightly higher first I.E. of $\pu{509.3 kJ/mol}$ (from the Wikipedia, although my textbook agrees). Is there any explanation for this at present? (I imagine that quantum mechanics may be involved in some way, but I'm not entirely sure how) There are most likely other examples of the trend being broken, but this is the only one I've come across so far and I'm curious to know why this is the case. Answer: I think it's also important to mention relativistic effects here. They already start becoming quite visible after $Z=70$, and $\ce{Ra}$ lies a good bit after that. In very heavy atoms, the electrons of the $\ce{1s}$ orbital (actually, all orbitals with some electron density close to the nucleus, but the $\ce{1s}$ orbital happens to be the closest and therefore most affected) are subjected to very high effective nuclear charges, compressing the orbitals into a very small region of space. This in turn forces the innermost electrons' momenta to be very high, via the uncertainty principle (or in a classical picture, the electrons need to orbit the nucleus very quickly in order to avoid falling in). The momenta are so high, in fact, that special relativity corrections become appreciable, so that the actual, relativistically corrected momenta, ($p_{\text{relativistic}}=\gamma p_{\text{classical}}$) are somewhat higher than the approximate classical momenta. Again via the uncertainty principle, this causes a relativistic contraction of the $\ce{1s}$ orbital (and other orbitals with electron density close to the nucleus, especially $\ce{ns}$ and $\ce{np}$ orbitals). The relativistic contraction of the innermost orbitals creates a cascade of electron shielding changes among the rest of the orbitals. 
The final result is that all $\ce{ns}$ orbitals are contracted, getting closer to the nucleus and becoming shifted down in energy. This is relevant to the question because the $\ce{7s}$ valence electrons in $\ce{Ra}$ are more attracted to the nucleus than one would expect from a simple trend analysis, since such analyses rarely take into account the increase of relativistic effects as one goes down the periodic table. Thus, the first (and second) ionization energy of $\ce{Ra}$ becomes higher than expected, to the point that there's actually an upward blip in the downward trend. Eka-radium ($Z=120$) would have far stronger relativistic effects, and can be expected to have a significantly higher ionization energy compared to $\ce{Ra}$. In fact, relativistic effects will conspire to make the group 2 metals slightly more noble! Though the periodic table becomes such a mess near the superheavy elements that it's hard to say whether it'll be a clearly visible trend, or just one effect to be combined with several others.
{ "domain": "chemistry.stackexchange", "id": 9857, "tags": "physical-chemistry, periodic-table" }
Introduction To Algorithms 3rd Edition MIT Press: Red Black Tree insertion error in pseudo-code?
Question: I'm implementing the algorithm on page 316 of the book Introduction to Algorithms. When I look at the pseudo-code, I feel that there is something wrong between line 10 to 14. I feel it's missing a check. There is a YouTube video explaining this whole function (and it includes the pseudo-code plus line numbers): https://youtu.be/5IBxA-bZZH8?t=323 The thing is, I think that //case 2 needs its own check. The else if z == z.p.right is both meant for //case 2 and //case 3. However, the code from //case 2 shouldn't always fire. It should only fire when there is a triangle formation according to the YouTube video. In my implementation it always fires, even when it's a line. So I feel the pseudo-code is wrong, it's also weird that it has an indentation, but I see no extra check. Am I missing something? Maybe superfluous, but I also typed the pseudo code given from the book here: RB-INSERT-FIXUP(T, z) while z.p.color == RED if z.p == z.p.p.left y = z.p.p.right if y.color == RED z.p.color = BLACK // case 1 y.color = BLACK // case 1 z.p.p.color = RED // case 1 z = z.p.p // case 1 else if z == z.p.right z = z.p // case 2 LEFT-ROTATE(T, z) // case 2 z.p.color = BLACK // case 3 z.p.p.color = RED // case 3 RIGHT-ROTATE(T, z.p.p) // case 3 else (same as then clause with "right and "left" exchanged) T.root.color = BLACK Answer: In an email with one of the authors it became clear that the following pseudo-code rule applies. (this is not an actual quote, but the format seems better for this) Whenever there is an else clause. The first statement of that else clause will be on the same line as the else clause. This can be a variable assignment, an if-statement or other things. 
If that was too abstract, the code from question should be interpreted as follows: RB-INSERT-FIXUP(T, z) while z.p.color == RED if z.p == z.p.p.left y = z.p.p.right if y.color == RED z.p.color = BLACK // case 1 y.color = BLACK // case 1 z.p.p.color = RED // case 1 z = z.p.p // case 1 else // this is the difference compared to code in the question if z == z.p.right z = z.p // case 2 LEFT-ROTATE(T, z) // case 2 z.p.color = BLACK // case 3 z.p.p.color = RED // case 3 RIGHT-ROTATE(T, z.p.p) // case 3 else (same as then clause with "right and "left" exchanged) T.root.color = BLACK
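To check this corrected reading, here is a minimal runnable translation into Python (my own sketch following the CLRS sentinel-based pseudocode, not code from the book). Note how case 2 sits inside the `else` and fires only for the "triangle" configuration, while case 3 runs either way; the two symmetric halves of the book's fixup are folded together with attribute names.

```python
class Node:
    def __init__(self, key, color="R"):
        self.key, self.color = key, color
        self.left = self.right = self.p = None

class RBTree:
    def __init__(self):
        self.nil = Node(None, "B")   # sentinel used for all leaves
        self.root = self.nil

    def rotate(self, x, d):          # d="left" means LEFT-ROTATE(T, x)
        o = "right" if d == "left" else "left"
        y = getattr(x, o)
        setattr(x, o, getattr(y, d))
        if getattr(y, d) is not self.nil:
            getattr(y, d).p = x
        y.p = x.p
        if x.p is self.nil:
            self.root = y
        elif x is x.p.left:
            x.p.left = y
        else:
            x.p.right = y
        setattr(y, d, x)
        x.p = y

    def insert(self, key):
        z = Node(key)
        z.left = z.right = self.nil
        y, x = self.nil, self.root
        while x is not self.nil:
            y, x = x, (x.left if key < x.key else x.right)
        z.p = y
        if y is self.nil:
            self.root = z
        elif key < y.key:
            y.left = z
        else:
            y.right = z
        self.fixup(z)

    def fixup(self, z):
        while z.p.color == "R":
            side = "left" if z.p is z.p.p.left else "right"
            other = "right" if side == "left" else "left"
            y = getattr(z.p.p, other)          # the uncle
            if y.color == "R":                 # case 1
                z.p.color = y.color = "B"
                z.p.p.color = "R"
                z = z.p.p
            else:                              # the book's 'else'
                if z is getattr(z.p, other):   # case 2: triangle only
                    z = z.p
                    self.rotate(z, side)
                z.p.color = "B"                # case 3
                z.p.p.color = "R"
                self.rotate(z.p.p, other)
        self.root.color = "B"
```

Inserting keys in sorted order produces only "line" configurations, so case 2 never fires while case 3 still does - exactly the behavior the book's indentation is meant to convey.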
{ "domain": "cs.stackexchange", "id": 13939, "tags": "data-structures, red-black-trees" }
Sensor to measure distance with high precision
Question: I am making a machine which, from the curve of the top of a violin, cuts the feet of a bridge. I thought that a sensor attached to a motorized rail could take precise measurements of the distance from the top of the rail to the top of the violin, which would be translated to a curve (from the distance measurements in the X and Y directions) used to cut the feet of a bridge on a CNC machine. This sensor wouldn't need to have a high range (the rail could be as low as needed for the sensor), but high precision would be required. Since the tolerance of my CNC machine is +/- 0.005 in, I would like a sensor which could measure somewhere in the range of that precision. However, I cannot seem to find a sensor that would do the job. I want to use a Raspberry Pi to control the motorized rail system which the sensor would be attached to, which means that the sensor would have to interface in some way with the Raspberry Pi. After some research, it seems that a linear displacement sensor could work, as would a triangulation laser distance sensor. However, I searched for products and could not find any. I took a look at laser parts like the Sharp GP2Y0E03 sensor, but the only information about the accuracy I could see was "High Precision Measurement", which isn't exactly helpful. I would think that a sensor similar to a digital caliper, but for distance, would exist? Here is (an admittedly bad) drawing of what I am trying to achieve. Answer: Non-contact sensors are not going to have the precision or spatial resolution that you want. Basically you want a specific-purpose CMM. I am going to point you towards the indicators used in machining, called dial indicators, AGD indicators, test indicators, or various other terms. There are two types: those with a plunger and those with a lever. Most are mechanical, some are digital, and among those some probably have a serial readout.
The plunger types give a more direct reading of distance, but the lever types are more suited for scanning (i.e. dragging) across a workpiece, and the contact force is also lighter. The disadvantage of the lever type here is the cosine error, since you are using a rotating lever to measure a linear distance. For either type, for a violin, you will want to get a probe with a plastic ball such as Delrin or Teflon, and not the carbide ball that usually comes with either type of indicator. A plunger indicator also gives you the option of rigging up a digital measurement device to the knob at the top of the indicator, which is at the opposite end of the stem from where the probe is mounted. That way you could more easily adapt a computerized digital measurement even if you can't get an appropriate digital indicator. Something like a drag-wire sensor, for example. It is also possible to rig up a lever to drive the plunger. This will introduce the potential for cosine error again, but gives you more flexibility. Re-reading your post, you have a CNC machine. You should be aware of these indicators already.
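To put a number on the cosine error mentioned for lever-type indicators (my own illustration, not from the answer; the usual rule of thumb is that a misaligned stylus indicates the true displacement times cos θ):

```python
import math

def cosine_error(true_disp_in, theta_deg):
    """Under-read (same units as input) when the stylus axis is tilted
    theta degrees away from the direction of measurement."""
    indicated = true_disp_in * math.cos(math.radians(theta_deg))
    return true_disp_in - indicated

# At 10 degrees of tilt, a 0.010 in movement under-reads by ~0.00015 in.
err = cosine_error(0.010, 10.0)
```

The error is proportional to the displacement being measured, so while it is tiny here, over the full height of a violin arch, or at larger stylus angles, it can eat a meaningful share of a +/- 0.005 in tolerance budget.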
{ "domain": "engineering.stackexchange", "id": 5197, "tags": "sensors, distance-measurement" }
What type(s) of gloves are effective against DCM and acetone?
Question: I know that standard latex and nitrile gloves don't stop common organic solvents such as DCM and acetone. However, I was reading a Reddit thread and there were comments that if you are coming into contact with solvents, you are just wearing the wrong type of gloves. In a non-manufacturer-specific fashion, what types of gloves will stop DCM and acetone? What will slow them down long enough for me to have a chance to finish what I'm doing and change my gloves in a safe manner? I am looking for a guide on what gloves to use. I just had DCM on my mind since nitrile doesn't slow it down, and it burns. And acetone, since it is hard to stop. Answer: You can use tips from the site link - the table at the bottom - or just google which glove can suit your needs. As you can read from the link: nitrile gloves are low cost with excellent physical properties and dexterity, but they are poor vs. benzene, methylene chloride, trichloroethylene, and many ketones. They are recommended for oils, greases, aliphatic chemicals, xylene, perchloroethylene, trichloroethane; fair vs. toluene. According to Argonne, for dealing with acetone you should use natural latex/rubber or butyl gloves. For DCM there are no good gloves - you could try using neoprene.
{ "domain": "chemistry.stackexchange", "id": 126, "tags": "solvents, safety, gloves" }
What does all chromatography have in common?
Question: We're learning about chromatography in class, and I'm confused about all the different types (e.g. those that separate liquid-liquid solutions vs. those that separate liquid-solid solutions vs. those that separate solid-solid solutions, etc.). Could you provide a summary of the similarities between all forms of chromatography? (Just an overview; no specifics required.) Answer: Here are some similarities: There is always a mobile phase (e.g. gas - GC; liquid - HPLC, GPC) and a stationary phase (liquid or gel - GC; solid - LCs, GPC). Compounds in the sample interact differently with the two phases and are therefore held back more or less strongly. This results in the accumulation of compounds that interact similarly at some point in the system - with some chromatographies, especially affinity chromatography, one might even get pure substances. The mobile phase carries the sample and is commonly less polar than the stationary phase, except for RPCs. The structure of a chromatograph is basically like this: transport system: reproducible, adjustable transport of the mobile phase (e.g. piston pump - LCs, compressed gas cylinder - GC) sample application: introduction of the sample into the flowing mobile phase (e.g. injector, valve) column: stationary phase and its mounting (e.g. capillary column in the column oven) detector: transformation of the chemical or physical properties, which change with substance concentration, into electrical signals (e.g. UV/VIS detector, thermal conductivity detector) plotting unit: visualisation of the electrical signals (e.g. integrator, computer)
{ "domain": "chemistry.stackexchange", "id": 4195, "tags": "analytical-chemistry, solutions, chromatography, mixtures, separation-techniques" }
Cartesian/polar coordinates program
Question: Can I get my program checked for efficiency? As in, see if there are better ways of writing my code, for example, making more efficient use of memory and/or security options like where to use constants? main.cpp // MAIN.CPP // #include "Force.h" #include "require.h" #include <iostream> #include <fstream> #include <vector> using namespace std; int main() { //Read in values from "ForceList.txt" char ctpe; double real1, real2; vector<Force> forceCol; ifstream in("ForceList.txt"); assure(in, "Forcelist.txt"); //verify its open //Store forces in vector 'forceCol' in cartesian as default while (in >> ctpe >> real1 >> real2) { Force cart(ctpe, real1, real2); if (ctpe == 'c' || ctpe == 'C') { forceCol.push_back(cart); } else if (ctpe == 'p' || ctpe == 'P') { cart.converter(); forceCol.push_back(cart); } } in.close(); // Print the forces in vector 'forceCol' in cartesian form. cout << "Cartesian:\n" << endl; cout << "No." << " x." << " y. \n"; for (unsigned int i = 0; i < forceCol.size(); ++i) { cout << "Force " << i + 1 << ":"; forceCol[i].print(); } //Convert 'forceCol' into polar form. for (unsigned int i = 0; i < forceCol.size(); ++i) { forceCol[i].converter(); } // Print the forces in vector 'forceCol' in polar form. cout << endl << "---------------" << endl; cout << "\nForces in polar:\n" << endl; cout << "No." << " Radius." << " Angle. \n"; for (unsigned int i = 0; i < forceCol.size(); ++i) { cout << "Force " << i + 1 << ":"; forceCol[i].print(); } //Convert back to cartesian for calculations for (unsigned int i = 0; i < forceCol.size(); ++i) { forceCol[i].converter(); } //Summate the forces in 'forceCol' and print the result in cartesian form. Force summedForce; for (unsigned int i = 0; i < forceCol.size(); ++i) { summedForce = summedForce + forceCol[i]; } cout << endl << "---------------" << endl; cout << "\nThe summed force in cartesian form is: " << endl; summedForce.print(); //Convert 'summedForce' to polar form and print. 
summedForce.converter(); cout << "\nThe summed force in polar form is: " << endl; summedForce.print(); //Print out the resultant force of the first 3 forces in cartesian form Force resultantForce; resultantForce = -(forceCol[0] - forceCol[1] + forceCol[2]); cout << "\nThe resultant force of the first 3 forces in cartesian form is: " << endl; resultantForce.print(); //Convert 'resultantForce' to polar form and print. resultantForce.converter(); cout << "\nThe resultant force of the first 3 forces in polar form is: " << endl; resultantForce.print(); cout << endl; } Force.h #ifndef Force_H #define Force_H #include <iostream> #include <iomanip> #include "require.h" //Question 1 - Class 'Force' that reads in cartesian or polar coordinates, //can discern which is which, and allows conversion and arthithmetical manipulation class Force { char coType; // Type of coordinate - cartesian or polar double r1; // 1st real number double r2; // 2nd real number public: //Constructor that reads in the files and checks that the first value is 'c, C, p or P' //and assign defeauts Force(char coType1 = 'c', double re1 = 0.0, double re2 = 0.0) : coType(coType1), r1(re1), r2(re2) { require(coType1 == 'p' || coType == 'P' || coType == 'c' || coType == 'C'); }; ~Force() {}; //Binary operators for addition and subtraction of 'Force' objects values. Force operator+(Force& fo){ return Force(coType = fo.coType, r1 + fo.r1, r2 + fo.r2); }; Force operator-(Force& fo){ return Force(coType = fo.coType, r1 - fo.r1, r2 - fo.r2); }; //Unary operator to change the sign of the objects values. 
Force operator-() { return Force(coType, r1 = -r1, r2 = -r2); } //Accessors char get_coType() const { return coType; } double get_r1() const { return r1; } double get_r2() const { return r2; } //mutators void set_coType(char c) { coType = c; } void set_r1(double a) { r1 = a; } void set_r2(double b) { r2 = b; } //printer int print() { using namespace std; cout << setprecision(2) << fixed; cout << coType << setw(10) << right << r1 << setw(10) << right << r2 << endl; return 0; } //Converter void converter(); }; #endif Force.cpp #include "Force.h" #include <cmath> using namespace std; //Converter function void Force::converter() { //Pass by reference to change original values double& x = r1; double& y = r2; double& rad = r1; double& theta = r2; //'Intermeditary' variables double x1 = x; double y1 = y; double rad1 = rad; double theta1 = theta; //Check which type of coordinates are being converted //intermeditary variables used to stop math errors in the conversion if (coType == 'c' || coType == 'C') { coType = 'p'; rad = hypot(x1, y1); theta = atan2(y1, x1); } else if (coType == 'p' || coType == 'P') { coType = 'c'; y = rad1 * sin(theta1); x = rad1 * cos(theta1); } }; ForceList.txt p 10 0.5 c 12 14 p 25 1 p 100. 0.80 c 50. 50. p 20 3.14 c -100. 25 p 12 1.14 The first letter is what type of force cartesian or polar, the following 2 numbers are the coordinates for cartesian and the magnitude and angle for polar Answer: Here are some comments that may help you improve your code. Understand the difference between <cmath> and <math.h> The difference between the two forms is that the former defines things within the std:: namespace versus into the global namespace. Language lawyers have lots of fun with this, but for daily use I'd recommend using <cmath> and then to use functions defined there, explicitly use the namespace. That is, write std::atan2 instead of plain atan2. See this SO question for details. 
Don't abuse using namespace std Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid. If you use it at all, use it only within functions. Reconsider the class design The class requires that the user keep track of which form the data is in and use the converter() function to change from polar to cartesian and back. That's not a good class design. Better would be to keep the internal representation in whatever format is convenient to you and only apply conversion on input or output. Also, C++ is not Java. Don't make "accessors" and "mutators" for every member item. If that's what you really need (and usually it isn't), just use a plain struct instead of a class. If that's not clear yet, the following points should make it clear. Use the std::complex class Much of what you're implementing already exists within the std::complex class template. I'd recommend using that until and unless you need to go to three or more dimensions. Things will be much simpler if you declare the Force class like this: class Force : public std::complex<double> { /*...*/ } Use const where practical I would not expect the print routine to alter the underlying Force on which it operates, and indeed it does not. You should make this expectation explicit by using the const keyword: int print() const; This declares that print will not modify the Force, making it clear to both the compiler and the human reader of your code. Don't use std::endl if you don't really need it The difference between std::endl and '\n' is that '\n' just emits a newline character, while std::endl actually flushes the stream. This can be time-consuming in a program with a lot of I/O and is rarely actually needed. It's best to only use std::endl when you have some good reason to flush the stream, and it's not very often needed for simple programs such as this one.
Avoiding the habit of using std::endl when '\n' will do will pay dividends in the future as you write more complex programs with more I/O and where performance needs to be maximized. Use string concatenation The main function includes these lines: cout << endl << "---------------" << endl; cout << "\nForces in polar:\n" << endl; cout << "No." << " Radius." << " Angle. \n"; That's multiple calls to operator<< where only one is really required. I'd write it like this instead: std::cout << "\n---------------\n" "\nForces in polar:\n\n" "No." " Radius." " Angle. \n"; This reduces the entire thing to a single call to operator<< because consecutive strings in C++ (and in C, for that matter) are automatically concatenated into a single string by the compiler. Have you run a spell check on comments? If you run a spell check on your comments, you'll find a number of things such as "arthithmetical" instead of "arithmetical" and "intermeditary" instead of "intermediary". Since your code is fairly well commented, it's worth the extra step to eliminate spelling errors. Don't hardcode file names The input file might be something that a user of this program wants to place elsewhere. It would be nice to allow the user to specify the input file name as a command line argument instead of hardcoding it. Prefer a stream extractor to manual input Right now, the input routine is in main. If this is a format you're expecting to use again, I'd recommend instead to implement a stream extractor like this: friend std::istream &operator>>(std::istream &in, Force &f); Then within main, you could use it like this: std::vector<Force> forceCol; { std::ifstream in(argv[1]); Force f; while (in >> f) { forceCol.push_back(f); } } Note that most of the lines in this snippet are enclosed within braces. This is deliberate so that in and f will go out of scope and automatically be destroyed when the code reaches the closing brace. This means there's no need for an explicit file close. 
Use standard algorithms Instead of writing the loop like this: Force summedForce; for (unsigned int i = 0; i < forceCol.size(); ++i) { summedForce = summedForce + forceCol[i]; } You could instead write it like this: Force summedForce{std::accumulate(forceCol.begin(), forceCol.end(), Force{}) }; Write two different print routines As mentioned earlier, rather than change the internal structure of the Force, simply create two different print routines like this: std::string asCart() const; std::string asPolar() const; In this case, I've elected to create and return a std::string, but one could just as easily pass a std::ostream & in (and out). Now the user doesn't need to care about the internal representation, but just asks explicitly for the desired form.
{ "domain": "codereview.stackexchange", "id": 30209, "tags": "c++, performance, object-oriented, coordinate-system, overloading" }
Proving that a language is not in P using diagonalization
Question: Pardon me if I'm missing something very obvious here, but I can't seem to figure it out. $E=\{ \langle M, w \rangle \mid \text{ Turing Machine encoded by $M$ accepts input $w$ after at most $ 2^{|w|}$ steps}\}$ We have to prove $E\notin P$. The book (Papadimitriou, Elements of the Theory of Computation) assumes $E\in P$ and constructs another language (a diagonal one) $E_1=\{\langle M\rangle \mid \text{ Turing Machine encoded by $M$ accepts input $M$ after at most $ 2^{|M|}$ steps}\}$ and takes its complement language $E_1'$, and it follows that, with the assumption $E\in P$, it is true that $E_1' \in P$. The question it then asks is the following: Say the polynomially bounded Turing machine that decides $E_1'$ is $M^*$; then what happens when $M^*$ is presented with $M^*$ as an input? Now I understand it can't answer yes, because that results in a contradiction. My doubt is: where is the contradiction if the answer is no? Answer: The idea (which doesn't quite work) is that given that $E'_1 \in P$, $M^*$ runs in time at most $2^n$ on an input of length $n$ (that's not quite true). Given that, if $M^*$ rejects $M^*$ then by the definition of $E'_1$, $M^*$ accepts input $M^*$ after at most $2^{|M^*|}$ steps, which is a contradiction. The other direction, which you were happy with, is problematic: if $M^*$ accepts $M^*$ then by the definition of $E'_1$, $M^*$ does not accept input $M^*$ after at most $2^{|M^*|}$ steps. That's not a contradiction, since our assumption that the running time of $M^*$ is $2^n$ was wrong - perhaps it's $2^{|M^*|} n$. So you need to be more subtle - presumably Papadimitriou addresses this.
{ "domain": "cs.stackexchange", "id": 1548, "tags": "complexity-theory, polynomial-time, check-my-proof" }
Phase of Elements
Question: There are 11 gaseous elements and two liquid elements at standard temperature and pressure. The rest are solid. Can phase be predicted from quantum mechanical principles? Answer: It's not easy. However there are attempts to calculate a phase diagram of an element from first principles. For example, in this paper http://prl.aps.org/abstract/PRL/v95/i18/e185701 the solid-liquid transition of diamond is calculated. The calculation of the free energies is done with ab initio molecular dynamics. This means that the carbon nuclei are treated as classical particles, but the electrons are treated quantum mechanically. There are also some other approximations involved in the treatment of the electrons and the electron-nuclei interactions. Helium is another element for which a phase transition has been studied using quantum mechanics. In that case a different method - path integral Monte Carlo - is used, i.e. the free energy estimation is by Monte Carlo integration. See for example http://prl.aps.org/abstract/PRL/v72/i12/p1854_1. I think that calculations like these are a step on the way to construct a phase diagram from quantum mechanics, even if we're not at room temperature and pressure yet. Also, the heavier elements are much more challenging.
{ "domain": "physics.stackexchange", "id": 2064, "tags": "quantum-mechanics, elements" }
Interdependence of $P,V$ and $T$
Question: Can we prove that the thermodynamic state of a system is completely determined by any two out of the three factors $P,V$ and $T$? (Without using statistical mechanics. Only using Thermodynamics) NB: I have not learnt the axiomatic formulation of Thermodynamics. Answer: There exist systems with degrees of freedom other than pressure, temperature and volume; for example, in magnetic systems you can also talk about the degree of magnetization and the applied field. So the completely general answer is no, because it is not always true. Once you have limited yourself to systems that only have these degrees of freedom, we can point to the equation of state $$ p = -\left(\frac{\partial F}{\partial V}\right)_{T} $$ where $F(T,V)$ is the Helmholtz free energy. This links the three quantities, so at most 2 of them can be independent. In terms of why at least 2 of them must be independent, this is again really a matter of definition. If we have a system in contact with a heat bath at a fixed temperature or in a thermally insulated container then we do have an extra equation linking the variables (the fixed temperature or the adiabatic equation respectively) and so only have 1 degree of freedom. We tend to think of these relations, however, as constraints on a more general $pV$ system, rather than fundamentally different systems.
{ "domain": "physics.stackexchange", "id": 76653, "tags": "thermodynamics, pressure, temperature, volume" }
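A quick numerical illustration of the equation of state above (my addition, not part of the original answer): taking the V-dependent part of the ideal-gas Helmholtz free energy as F(T,V) = -nRT ln V (an assumed toy form, with V-independent terms dropped since they do not affect the derivative), the finite-difference estimate of -(∂F/∂V)_T reproduces p = nRT/V.

```python
import math

R = 8.314  # J/(mol K)

def F(T, V, n=1.0):
    # V-dependent part of the ideal-gas Helmholtz free energy;
    # additive functions of T alone drop out of the V-derivative.
    return -n * R * T * math.log(V)

def pressure(T, V, h=1e-6):
    # p = -(dF/dV) at fixed T, estimated with a central difference
    return -(F(T, V + h) - F(T, V - h)) / (2 * h)

T, V = 300.0, 0.024  # K, m^3 (roughly 1 mol near ambient conditions)
print(pressure(T, V))  # close to R*T/V, about 1.04e5 Pa
```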
Extending java.util.Random.nextInt(int) into nextLong(long)
Question: In java.util.Random the Oracle implementation of nextInt(int) is as follows:

    public int nextInt(int n) {
        if (n <= 0)
            throw new IllegalArgumentException("n must be positive");

        if ((n & -n) == n)  // i.e., n is a power of 2
            return (int)((n * (long)next(31)) >> 31);

        int bits, val;
        do {
            bits = next(31);
            val = bits % n;
        } while (bits - val + (n-1) < 0);
        return val;
    }

I have a need to do the same thing for longs, but this is not included as part of the class signature. So I extended the class to add this behavior. Here's my solution, and even though I'm pretty sure I have it right, bit-twiddling can subtly fluster even the best of devs!

    import java.util.Random;

    public class LongRandom extends Random {
        public long nextLong(long n) {
            if (n <= 0)
                throw new IllegalArgumentException("n must be positive");

            if ((n & -n) == n)  // i.e., n is a power of 2
                return nextLong() & (n - 1);  // only take the bottom bits

            long bits, val;
            do {
                bits = nextLong() & 0x7FFFFFFFL;  // make nextLong non-negative
                val = bits % n;
            } while (bits - val + (n-1) < 0);
            return val;
        }
    }

Have I introduced a subtle bug? Are there improvements to make? What might I need to watch out for?

Answer: Try it with n > 2^32 and your code will fail. 0x7FFFFFFFL reduces your code to int, breaking your extension to long. You need to use the maximal value of long, not int, in your mask, i.e. 0x7FFFFFFFFFFFFFFFL.
{ "domain": "codereview.stackexchange", "id": 2715, "tags": "java, random" }
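Outside the original thread: the structure of the rejection loop (and the corrected full-width mask) is easier to see without Java's overflow trick. A Python sketch, illustrative only since Python integers are unbounded; the explicit `limit` test is equivalent to Java's `bits - val + (n-1) < 0` overflow check:

```python
import random

def next_long(n, bits=63, rng=random.getrandbits):
    """Uniform draw from [0, n) using a `bits`-bit source, by rejection.

    Mirrors Random.nextInt(int): a draw r is rejected when it falls in
    the incomplete top interval [limit, 2**bits), which would otherwise
    bias the small residues.
    """
    if not 0 < n <= 2 ** bits:
        raise ValueError("n must be positive and fit in the source width")
    if n & (n - 1) == 0:              # power of two: mask the low bits
        return rng(bits) & (n - 1)
    limit = (2 ** bits // n) * n      # largest acceptable multiple of n
    while True:
        r = rng(bits)
        if r < limit:
            return r % n

assert all(0 <= next_long(10) < 10 for _ in range(1000))
```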
Is there a danger in electrolyzing NaCl?
Question: To make $\ce{NaOH}$ one could electrolyze a solution of $\ce{NaCl}$ in water ($\ce{H2O}$). It would go like this: $$\ce{2H2O_{(l)} + 2Cl-_{(aq)} + 2Na+_{(aq)} -> H2_{(g)} + Cl2_{(g)} + 2Na+_{(aq)} + 2OH-_{(aq)}}$$ After calculating a bit, I found that even just a few minutes of electrolyzing this at $10\ \mathrm{A}$ could be deadly. Can anybody confirm this? Answer: If the cell is mixed, you would make bleach, sodium hypochlorite and some free chlorine (and probably chew up your anode). Since the solution is not very basic, hypochlorite would probably disproportionate to chlorite, and that possibly to chlorate. One mole of electrons is one faraday, about 96 500 coulombs. One ampere is one coulomb per second. Five minutes passes $(5\ \mathrm{min})(60\ \mathrm{s/min})(10\ \mathrm{C/s}) = 3000\ \mathrm{C}$ or $0.031\ \mathrm{F}$ in a reaction that requires two moles of electrons for each mole of chlorine. $0.01554\ \mathrm{mol}$ chlorine or $350\ \mathrm{ml}$ at STP, tops. It wouldn’t be good for you.
{ "domain": "chemistry.stackexchange", "id": 895, "tags": "home-experiment, safety, electrolysis" }
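The answer's arithmetic, written out as a script (my addition; the Faraday constant and the 22.4 L/mol STP molar volume are the usual rounded figures, as in the answer):

```python
FARADAY = 96500.0          # C per mole of electrons (answer's rounded value)

current = 10.0             # A
time_s = 5 * 60            # five minutes, in seconds
charge = current * time_s  # 3000 C

moles_e = charge / FARADAY    # ~0.031 mol of electrons (0.031 F)
moles_cl2 = moles_e / 2       # 2 electrons per Cl2 molecule
volume_l = moles_cl2 * 22.4   # ideal-gas molar volume at STP, litres

print(round(moles_cl2, 5), round(volume_l * 1000))  # 0.01554 mol, 348 mL
```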
Nondeterministic ball bounce [SOLVED]
Question: Hello, I am getting different behaviors with exactly the same execution and world settings. As you can see in this video, I reset the world every time and launch the ball with exactly the same velocity. This happens when launching the ball against any kind of surface (not only this board). Update: more videos: vid2, vid3. Ball model.sdf file:

    <?xml version="1.0" ?>
    <sdf version="1.5">
      <model name="basketball">
        <static>false</static>
        <self_collide>true</self_collide>
        <link name="ball">
          <inertial>
            <mass>0.25</mass>
            <!-- inertia based on solid sphere 2/5 mr^2 -->
            <inertia>
              <ixx>0.00169</ixx>
              <iyy>0.00169</iyy>
              <izz>0.00169</izz>
              <ixy>0</ixy>
              <ixz>0</ixz>
              <iyz>0</iyz>
            </inertia>
          </inertial>
          <visual name="visual">
            <geometry>
              <sphere>
                <radius>0.13</radius>
              </sphere>
            </geometry>
          </visual>
          <collision name="collision">
            <geometry>
              <sphere>
                <radius>0.13</radius>
              </sphere>
            </geometry>
            <surface>
              <bounce>
                <restitution_coefficient>0.5</restitution_coefficient>
                <threshold>0.1</threshold>
              </bounce>
              <contact>
                <ode>
                  <max_vel>5</max_vel>
                  <min_depth>0.0001</min_depth>
                </ode>
              </contact>
            </surface>
          </collision>
        </link>
      </model>
    </sdf>

Originally posted by nzlz on Gazebo Answers with karma: 90 on 2016-09-22 Post score: 1

Answer: After testing different parameters, changing the following seems to be a decent fix: Gazebo->Physics->solver->iterations = 100 (default is 50) Originally posted by nzlz with karma: 90 on 2016-09-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Alice_ly on 2021-08-15: excuse me, but where to set the iteration? And does a .world file support that?
{ "domain": "robotics.stackexchange", "id": 3993, "tags": "gazebo-7" }
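Following up on the last comment in the thread: in a .world file the same setting lives under the world's physics element. A sketch, assuming the standard SDF ODE solver elements (verify against your Gazebo/SDF version):

```xml
<physics type="ode">
  <ode>
    <solver>
      <iters>100</iters>  <!-- corresponds to Gazebo->Physics->solver->iterations; default 50 -->
    </solver>
  </ode>
</physics>
```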
Getting price for one firm by week
Question: I have a lot of data with prices of a product from different firms, by week. I have a class named WeekPrice which contains a list of firms. Here is my class structure. Here is a small part of the data in JSON format:

    [
      {
        "ID": 1,
        "NumberOfWeek": 1,
        "Year": 2016,
        "Date": "01/04/2016",
        "Firms": [
          { "ID": 1, "Name": "Firm ABC", "Price": 0.73 },
          { "ID": 2, "Name": "DEF Solutions", "Price": 0.97 }
        ]
      },
      {
        "ID": 2,
        "NumberOfWeek": 2,
        "Year": 2016,
        "Date": "08/04/2016",
        "Firms": [
          { "ID": 3, "Name": "Firm ABC", "Price": 0.83 },
          { "ID": 4, "Name": "DEF Solutions", "Price": 0.94 }
        ]
      }
    ]

I want to make a graph that shows the evolution of the price for one firm by week. To do this I need to get the price and the week in an object. In the C# code below, DoublePoint is used to do this. The property Data is the week number and Value is the price.

    List<DoublePoint> list = (from week in MyListWitData
                              where week.Year == DateTime.Now.Year
                              group week by week.NumberOfWeek into grp
                              select new DoublePoint()
                              {
                                  Data = grp.Key,
                                  Value = (double)(from price in grp.ToList()
                                                   where price.Year == DateTime.Now.Year
                                                   select (from firm in prijs.Bedrijf
                                                           where firm.Name == "Firm ABC"
                                                           select firm.Price
                                                          ).FirstOrDefault<decimal>()
                                                  ).FirstOrDefault()
                              }).ToList<DoublePoint>();

It works perfectly, but the only problem is that the code is a little bit messy. Can you refactor my code so it performs better and is easier to read? Thanks in advance

Answer: The model you chose for your data is quite far from ideal. I guess that's one of the reasons your code became a little messy. However, even with this model, the end result can be reached in a more readable, and probably more efficient way:

    var pricesPerWeek = weekPrices
        .Where(weekPrice => weekPrice.Year == DateTime.Now.Year)
        .SelectMany(
            (weekPrice) => weekPrice.Firms.Where(firm => firm.Name == "ABC"),
            (week, firm) => new {week.Week, firm.Price})
        .ToList();

In this code, weekPrices is a List<WeekPrice>.
This code first selects week prices from the current year. It then uses the overload SelectMany to flatten the list of week prices to a collection of anonymous objects with properties Week and Price. A complete and working sample of this code can be found on dotnetfiddle.net. However, as said before, your model is not ideal. I would refactor your model to three classes: Firm, Product and ProductPrice:

    // Represents a firm you track.
    public class Firm
    {
        public Firm()
        {
            Prices = new List<ProductPrice>();
        }
        public int Id {get;set;}
        public String Name {get;set;}
        public List<ProductPrice> Prices {get; private set;}
    }

    // Keeps record of the firm's price of a product at a certain moment
    public class ProductPrice
    {
        public int FirmId {get;set;}
        public int ProductId {get;set;}
        public int Year {get;set;}
        public int Week {get;set;}
        public decimal Price {get;set;}
    }

    // Represents a product you track.
    public class Product
    {
        public int Id {get;set;}
        public String Description {get;set;}
    }

This model has several advantages over your model. For instance, firms do not get a new Id every week. Instead, the firm's Id and Name are stored once. The same goes for products (although that isn't part of your question). All you have to do every week, is add an entry of type ProductPrice for all product-firm combinations. And now, with this model, back to your question. To select the weeks and prices for a certain firm (and product) you can do:

    var pricesPerWeek = firms
        .Where(firm => firm.Id == 1)
        .SelectMany(firm => firm.Prices)
        .Where(
            productPrice => productPrice.Year == DateTime.Now.Year
                && productPrice.ProductId == 1)
        .Select(productPrice => new {productPrice.Week, productPrice.Price})
        .ToList();

In this code, firms is a List<Firm>. This code first selects the correct firm, it then selects the correct weeks (and product) and then returns a collection of anonymous objects with properties Week and Price. A complete and working sample of this code can be found on dotnetfiddle.net.
{ "domain": "codereview.stackexchange", "id": 19430, "tags": "c#, performance, linq" }
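Not part of the original answer: the Where(...).SelectMany(...) pipeline has a direct analogue in other languages. A Python sketch over data shaped like the question's JSON (field names shortened for brevity; "flatten then project" is the same idea as the SelectMany overload):

```python
week_prices = [
    {"Week": 1, "Year": 2016,
     "Firms": [{"Name": "Firm ABC", "Price": 0.73},
               {"Name": "DEF Solutions", "Price": 0.97}]},
    {"Week": 2, "Year": 2016,
     "Firms": [{"Name": "Firm ABC", "Price": 0.83},
               {"Name": "DEF Solutions", "Price": 0.94}]},
]

def prices_per_week(data, firm, year):
    # Where(...).SelectMany(...) == a nested comprehension: filter weeks,
    # then flatten the matching firm entries into (week, price) pairs.
    return [(w["Week"], f["Price"])
            for w in data if w["Year"] == year
            for f in w["Firms"] if f["Name"] == firm]

print(prices_per_week(week_prices, "Firm ABC", 2016))
# [(1, 0.73), (2, 0.83)]
```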
How do we know that gravity is spacetime and not a field on spacetime?
Question: How do we know that gravity is the curvature of spacetime as opposed to a field, which couples equally to all objects, on spacetime? Answer: Practically speaking, what's the difference? There exists a rank-two tensor field on spacetime called the "metric" $g_{\mu \nu}$ which couples to all mass-energy, and things that we intuitively call "gravity" happen when that field deviates from the Minkowski metric $\eta_{\mu \nu}$. Whether you want to call the metric "spacetime itself" or "a field on spacetime" is basically just terminology. Steve Weinberg, very unusually, likes to think of general relativity as being just another field theory, with the metric as just another field, and dislikes the "geometric interpretation" that is usually taught, where we describe the metric as being spacetime. Note, however, that certain stress-energy tensors are only compatible with certain spacetime topologies, so you can't just think of every spacetime as just being $\mathbb{R}^4$ with a funky metric on it.
{ "domain": "physics.stackexchange", "id": 38105, "tags": "general-relativity, gravity, metric-tensor, field-theory" }
How to convert input numpy data to tensorflow tf.data to train model in tensorflow?
Question: I am working on an image classification problem using TensorFlow. I have converted my input image dataset and label into NumPy data but it takes more time and more ram to load all the data into memory because I have 90K images. I would like to use TensorFlow data API using tf.keras.preprocessing.image_dataset_from_directory. Here's my current code to train NumPy data. I want to convert this code into tf.data (tf.keras.preprocessing.image_dataset_from_directory) to train my huge dataset.

    INIT_LR = 1e-4
    EPOCHS = 20
    BS = 32

    # grab the list of images in our dataset directory, then initialize
    # the list of data (i.e., images) and class images
    print("[INFO] loading images...")
    imagePaths = list(paths.list_images(args["dataset"]))
    data = []
    labels = []

    # loop over the image paths
    for imagePath in imagePaths:
        # extract the class label from the filename
        label = imagePath.split(os.path.sep)[-2]

        # load the input image (224x224) and preprocess it
        image = load_img(imagePath, target_size=(224, 224))
        image = img_to_array(image)
        image = preprocess_input(image)

        # update the data and labels lists, respectively
        data.append(image)
        labels.append(label)

    # convert the data and labels to NumPy arrays
    data = np.array(data, dtype="float32")
    labels = np.array(labels)

    # perform one-hot encoding on the labels
    lb = LabelBinarizer()
    labels = lb.fit_transform(labels)
    labels = to_categorical(labels)

    # partition the data into training and testing splits using 75% of
    # the data for training and the remaining 25% for testing
    (trainX, testX, trainY, testY) = train_test_split(data, labels,
        test_size=0.20, stratify=labels, random_state=42)

    # construct the training image generator for data augmentation
    aug = ImageDataGenerator(
        rotation_range=20,
        zoom_range=0.15,
        width_shift_range=0.2,
        height_shift_range=0.2,
        shear_range=0.15,
        horizontal_flip=True,
        fill_mode="nearest")

    # load the MobileNetV2 network, ensuring the head FC layer sets are
    # left off
    baseModel = MobileNetV2(weights="imagenet",
        include_top=False, input_tensor=Input(shape=(224, 224, 3)))

    # construct the head of the model that will be placed on top of the
    # the base model
    headModel = baseModel.output
    headModel = AveragePooling2D(pool_size=(7, 7))(headModel)
    headModel = Flatten(name="flatten")(headModel)
    headModel = Dense(128, activation="relu")(headModel)
    headModel = Dropout(0.5)(headModel)
    headModel = Dense(2, activation="softmax")(headModel)

    # place the head FC model on top of the base model (this will become
    # the actual model we will train)
    model = Model(inputs=baseModel.input, outputs=headModel)

    # loop over all layers in the base model and freeze them so they will
    # *not* be updated during the first training process
    for layer in baseModel.layers:
        layer.trainable = False

    # compile our model
    print("[INFO] compiling model...")
    opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
    model.compile(loss="binary_crossentropy", optimizer=opt,
        metrics=["accuracy"])

    # train the head of the network
    print("[INFO] training head...")
    H = model.fit(
        aug.flow(trainX, trainY, batch_size=BS),
        steps_per_epoch=len(trainX) // BS,
        validation_data=(testX, testY),
        validation_steps=len(testX) // BS,
        epochs=EPOCHS)

    # make predictions on the testing set
    print("[INFO] evaluating network...")
    predIdxs = model.predict(testX, batch_size=BS)

    # for each image in the testing set we need to find the index of the
    # label with corresponding largest predicted probability
    predIdxs = np.argmax(predIdxs, axis=1)

    # show a nicely formatted classification report
    print(classification_report(testY.argmax(axis=1), predIdxs,
        target_names=lb.classes_))

    # serialize the model to disk
    print("[INFO] saving mask detector model...")
    model.save(args["model"], save_format="h5")

Answer: You could try using the flow_from_directory() method on your ImageDataGenerator class, which does the augmentation - only a small change is necessary:

    H = model.fit(
        aug.flow_from_directory(trainX, trainY, batch_size=BS),
        ...)

If you start using a tf.data.Dataset directly, you will get more control over how the data is read from disk (caching, number of threads etc.), but you will lose the easy image augmentation you get with the ImageDataGenerator.
{ "domain": "datascience.stackexchange", "id": 8917, "tags": "deep-learning, keras, tensorflow, image-classification, data-science-model" }
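A framework-free illustration of the memory point in this question (my addition, not TensorFlow API): streaming fixed-size batches lazily means only one batch's worth of decoded images ever needs to live in memory, which is also the idea behind flow_from_directory and tf.data pipelines.

```python
def batches(paths, labels, batch_size):
    """Yield (paths, labels) chunks lazily; decoding happens per batch,
    so the full 90K-image array never has to be materialized."""
    for i in range(0, len(paths), batch_size):
        yield paths[i:i + batch_size], labels[i:i + batch_size]

# Stand-in file list; in real code each path would be decoded with
# load_img/preprocess_input inside the training loop.
paths = [f"img_{i}.jpg" for i in range(10)]
labels = [i % 2 for i in range(10)]

sizes = [len(p) for p, _ in batches(paths, labels, 4)]
print(sizes)  # [4, 4, 2]
```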
Gravitational potential energy of any spherical distribution
Question: The general formula to get the potential energy of any spherical distribution is this: \begin{equation}\tag{1} U = - \int_0^R \frac{GM(r)}{r} \, \rho(r) \, 4 \pi r^2 \, dr, \end{equation} where $M(r)$ is the mass enclosed within radius $r < R$. It is easy to get the gravitational energy of a uniform sphere of mass $M$ and radius $R$ : \begin{equation}\tag{2} U = -\, \frac{3 G M^2}{5 R}. \end{equation} In general, for any spherical distribution of total mass $M$ and exterior radius $R$, we can write this : \begin{equation}\tag{3} U = -\, \frac{k \, G M^2}{R}, \end{equation} where $k > 0$ is a constant that depends on the internal distribution. $k = \frac{3}{5}$ for the uniform distribution. For a thin spherical shell of radius $R$ (all mass concentrated on its surface), we can get $k = \frac{1}{2}$. Now, I suspect that for all cases : \begin{equation}\tag{4} \frac{1}{2} \le k < \infty. \end{equation} Physically, this makes sense. But how to prove this from the general integral (1)? To simplify things a bit, we may introduce the dimensionless variable $x = r/R \le 1$, and define the relative mass $\bar{M}(x) \equiv M(r)/M \le 1$ and relative density $\bar{\rho}(x) = \rho(r) / \rho_{\text{average}}$, where $\rho_{\text{average}} = 3 M/4 \pi R^3$. Thus, integral (1) takes the following form : \begin{equation}\tag{5} U = -\, \frac{3 G M^2}{R} \int_0^1 \bar{M}(x) \, \bar{\rho}(x) \, x \, dx. \end{equation} The last integral is $\frac{k}{3}$. I'm not sure whether this may help to prove (4). 
Answer: Start by writing that: $$\tag1 M=\int^R_0 \rho(r)4\pi r^2 dr = \int^R_0 \frac{dM}{dr}dr.$$ From (1), we can identify that: $$\tag2 \frac{dM}{dr}dr = 4\pi r^2 \rho(r)dr.$$ Now let's return to your equation (1): $$U = - \int_0^R \frac{GM(r)}{r} \, \rho(r) \, 4 \pi r^2 dr,$$ using equation (2), this becomes: $$U=-G\int_0^R \frac{M(r)}{r} \frac{dM}{dr}dr.$$ Let's concentrate on just the integral part, and call it $I$ for concreteness, $$I=\int_0^R \tag{3}\frac{M(r)}{r} \frac{dM}{dr}dr.$$ Aha! This looks like something we can integrate by parts. Let's set $u=\frac{M(r)}{r}$, and $v'=\frac{dM}{dr}$. Then $$u'=-\frac{M(r)}{r^2} + \frac{1}{r}\frac{dM}{dr}.$$ Continuing with IBP, we get: $$I=\Bigg[\frac{M(r)}{r}M(r)\Bigg]^R_0-\int^R_0 \Big(-\frac{M(r)}{r^2} + \frac{1}{r}\frac{dM}{dr}\Big)M(r) dr.$$ Let's look at solely the integral part, calling it $J$: $$J=-\int^R_0 \frac{M^2(r)}{r^2} dr + \underbrace{\int^R_0 \frac{M(r)}{r}\frac{dM}{dr} dr}_{\textrm{we know this integral, it's }I}.$$ So now we have that: $$I=\Bigg[\frac{M^2(r)}{r}\Bigg]^R_0 + \int^R_0 \frac{M^2(r)}{r^2} dr - I,$$ and so: $$2I= \Bigg[\frac{M^2(r)}{r}\Bigg]^R_0 + \int^R_0 \Bigg(\frac{M(r)}{r}\Bigg)^2 dr.$$ For the case of non-pathological $M$, that is $M(0)=0$, we get: $$I=\frac{M^2}{2R} + \frac{1}{2}\int^R_0 \Bigg(\frac{M(r)}{r}\Bigg)^2 dr.$$ Hence we can identify from this that $U$ has the limit of $-\frac{GM^2}{2R}$, and from that we subtract the integral of something positive definite. I believe this proves (to physicist standards) your conjecture about $k$. 
Request of simplifying the process: Let's just consider with no dimensions the integral: $$K=\int^X_0 \frac{f(x) f'(x)}{x} dx.$$ Take: $$g(x)=\frac{f^2(x)}{2x},$$ then taking the derivative: $$g'(x) = \frac{f(x) f'(x)}{x}-\frac{f^2(x)}{2 x^2}.$$ We can recognise our original integral in this mix, and so integrating $g'(x)$ will give us our integral, less $\int^X_0 \frac{f^2(x)}{2 x^2} dx$, and so we can write: $$K=\big[g(x)\big]^X_0 + \frac{1}{2}\int^X_0 \frac{f^2(x)}{x^2}dx= \Bigg[\frac{f^2(x)}{2 x}\Bigg]^X_0 + \frac{1}{2}\int^X_0 \bigg( \frac{f(x)}{x} \bigg)^2 dx.$$ Giving us the result we wanted. As IBP is always the inverse of the product rule, with a little bit of thought you can generally figure out the original function. However if I'd just posted this last 4-step proof, I think you'd have just thought I was a wizard...
{ "domain": "physics.stackexchange", "id": 41429, "tags": "newtonian-mechanics, gravity, newtonian-gravity, potential-energy, integration" }
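The dimensionless form (5) invites a numerical check (my addition, not part of the thread). With the assumed family $\bar{M}(x) = x^p$, the consistent density is $\bar{\rho}(x) = \frac{p}{3}x^{p-3}$, which gives $k = p/(2p-1)$: this is $3/5$ for the uniform sphere ($p=3$) and tends to the shell value $1/2$ as the mass concentrates at the surface ($p \to \infty$), always staying at or above $1/2$.

```python
def k_value(M_bar, rho_bar, n=100_000):
    # k = 3 * integral_0^1 Mbar(x) * rhobar(x) * x dx, by the trapezoid rule
    h = 1.0 / n
    ys = [M_bar(i * h) * rho_bar(i * h) * (i * h) for i in range(n + 1)]
    return 3.0 * h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# p = 3: uniform sphere, expect k = 3/5
k_uniform = k_value(lambda x: x**3, lambda x: 1.0)
# p = 6: mass concentrated toward the surface, expect k = 6/11
k_p6 = k_value(lambda x: x**6, lambda x: 2.0 * x**3)
print(k_uniform, k_p6)  # ~0.6 and ~0.5455, both >= 1/2
```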
Why does the colour of a varnish differ between how it looks before and after application as well as by-eye and through an iPhone?
Question: I opened a tin of woodstain varnish and was surprised to find that the varnish was a dark blue in colour, not the mahogany stated on the tin. The photograph does not do justice to what I saw with my eyes. The greyish, slightly mauve colour in the photograph was a darkish blue to the eye, as shown by the added coloured circle. Once used, the colour (wood to the right of the tin) is as per the label on the tin. Why the blue colour when viewed by eye? Is it a scattering-by-emulsion effect? And why the change of colour when a photograph is taken with an iPhone? Answer: For water-based stains and paints, the color "in the can" does not match the color "on the board" after the stain dries. This is because the product is a water-and-resin emulsion that has a milky appearance when in the can, that disappears as the water evaporates and the resins cure. As that emulsion disappears, the true color of the pigment mix becomes stronger than the emulsion effect and eventually the color on the board is due just to the pigments and the board matches the color sample on the outside of the can. Paint shops now use a spectrophotometer linked to a computer which is linked to the electric pigment pumps in the shop's prep area, so a customer can come in with a board painted with the desired color and the employee can then scan the board and derive a perfect match to it with a fresh batch of paint. The software that controls the pigment dispense takes into account the color shifts that occur during the drying process- so the freshly-opened paint can does not match the board, but it does after the paint is fully-dried.
{ "domain": "physics.stackexchange", "id": 73447, "tags": "visible-light, everyday-life" }
teb_local_planner question
Question: I'm trying teb_local_planner on an omni_dir robot. https://youtu.be/AKF6wPZgCa8?si=Y_17OSZsSo7nzhfV However, the robot moves like a car as shown in this video. I want it to move holonomically, like in this picture. My parameter settings are these:

    TebLocalPlannerROS:

      # Trajectory
      teb_autosize: True
      dt_ref: 0.3
      dt_hysteresis: 0.1
      max_samples: 500
      global_plan_overwrite_orientation: True
      allow_init_with_backwards_motion: True
      max_global_plan_lookahead_dist: 0.5
      global_plan_viapoint_sep: -1
      global_plan_prune_distance: 1
      exact_arc_length: False
      feasibility_check_no_poses: 5
      publish_feedback: False

      # Robot
      max_vel_x: 0.4
      max_vel_x_backwards: 0.2
      max_vel_y: 0.1
      max_vel_theta: 0.3
      acc_lim_x: 0.5
      acc_lim_y: 0.2
      acc_lim_theta: 0.5
      min_turning_radius: 0.0  # omni-drive robot (can turn on place!)
      footprint_model:
        type: "circular"
        radius: 0.188

      # GoalTolerance
      xy_goal_tolerance: 0.05
      yaw_goal_tolerance: 0.15
      free_goal_vel: False
      complete_global_plan: True

      # Obstacles
      min_obstacle_dist: 0.25  # This value must also include our robot radius, since footprint_model is set to "point".
      inflation_dist: 0.6
      include_costmap_obstacles: False
      costmap_obstacles_behind_robot_dist: 1.0
      obstacle_poses_affected: 30
      costmap_converter_plugin: ""
      costmap_converter_spin_thread: True
      costmap_converter_rate: 5

      # Optimization
      no_inner_iterations: 5
      no_outer_iterations: 4
      optimization_activate: True
      optimization_verbose: False
      penalty_epsilon: 0.05
      obstacle_cost_exponent: 4
      weight_max_vel_x: 2
      weight_max_vel_y: 2
      weight_max_vel_theta: 1
      weight_acc_lim_x: 1
      weight_acc_lim_y: 1
      weight_acc_lim_theta: 1
      weight_kinematics_nh: 1  # WE HAVE A HOLONOMIC ROBOT, JUST ADD A SMALL PENALTY
      weight_kinematics_forward_drive: 1
      weight_kinematics_turning_radius: 1
      weight_optimaltime: 1  # must be > 0
      weight_shortest_path: 0
      weight_obstacle: 50
      weight_inflation: 0.2
      weight_dynamic_obstacle: 10
      weight_dynamic_obstacle_inflation: 0.2
      weight_viapoint: 1
      weight_adapt_factor: 2

      # Homotopy Class Planner
      enable_homotopy_class_planning: True
      enable_multithreading: True
      max_number_classes: 4
      selection_cost_hysteresis: 1.0
      selection_prefer_initial_plan: 0.9
      selection_obst_cost_scale: 1.0
      selection_alternative_time_cost: False
      roadmap_graph_no_samples: 15
      roadmap_graph_area_width: 5
      roadmap_graph_area_length_scale: 1.0
      h_signature_prescaler: 0.5
      h_signature_threshold: 0.1
      obstacle_heading_threshold: 0.45
      switching_blocking_period: 0.0
      viapoints_all_candidates: True
      delete_detours_backwards: True
      max_ratio_detours_duration_best_duration: 3.0
      visualize_hc_graph: False
      visualize_with_time_as_z_axis_scale: False

      # Recovery
      shrink_horizon_backup: True
      shrink_horizon_min_duration: 10
      oscillation_recovery: True
      oscillation_v_eps: 0.1
      oscillation_omega_eps: 0.1
      oscillation_recovery_min_duration: 10
      oscillation_filter_duration: 10

What settings should I adjust?

Answer: I would start by increasing the y-axis-related values. First check the velocity and acceleration parameters. Is there any change in the planned path? Then I would increase the weight parameters related to the y axis and observe the changes. 
If changing only y-axis-related values does not work, I would decrease x-axis-related values and observe the changes again. It is possible that you have selected correct values to generate commands that can steer your robot, but they are not selected as the optimum output. If none of this works, I would gradually decrease the value of the weight_kinematics_nh parameter down to zero. Also, reducing the weight on the time optimization could help, because x-axis motion could be faster compared to y-axis motion in your case (weight_optimaltime).
{ "domain": "robotics.stackexchange", "id": 38709, "tags": "ros-noetic, teb-local-planner" }
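A concrete version of the advice above (my addition; the values are starting points to experiment with, not tuned for any particular robot):

```yaml
max_vel_y: 0.4          # was 0.1; allow real sideways motion
acc_lim_y: 0.5          # was 0.2; match the x-axis limits
weight_max_vel_y: 2
weight_acc_lim_y: 1
weight_kinematics_nh: 1 # if strafing still doesn't appear, lower gradually toward 0
```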
Can the change in the entropy of the surroundings always be obtained by dividing heat transferred by the temperature at which the transfer occurs?
Question: Consider $\pu{1 mol}$ of an ideal monoatomic gas going through reversible isochoric heating from $\pu{100 K}$ to $\pu{1000 K}$. Calculate $\Delta S_\pu{sys}, \Delta S_\pu{surr}.$ $$\Delta S_\pu{sys} = nC_v \int_{\pu{100 K}}^{\pu{1000 K}} \frac{dT}{T} = \frac{3}{2}R\ln10$$ Now, since $q_\pu{sys} = 1350R$, I can put $\displaystyle \Delta S_\pu{surr} = \frac{-q_\pu{sys}}{T_\pu{surr}}$ to get $$\Delta S_\pu{surr} = -\frac{27}{20}R$$ However, in a reversible process $\Delta S_\pu{total} = 0$, which gives $$\Delta S_\pu{surr} = -\frac{3}{2}R\ln10$$ Where am I wrong? Image source: Physical Chemistry, 10th ed. by Atkins and de Paula, p. 116. Here, $\displaystyle q_\pu{sur} =$ heat supplied to surroundings. Answer: The surroundings that are assumed in the question are not the same surroundings as in the image. The surroundings in the image are essentially at a constant temperature, while it is assumed in the question that the surroundings are heating the gas reversibly. The gas is heated by some heat source which is in thermal equilibrium with the gas. Let us assume that the surroundings, which are not at a constant temperature, are heating the gas. We assume that our universe consists of only the gas plus the surroundings. The surroundings have some internal source of energy distributed uniformly so that no local hot spots occur. Now, $$\Delta S_\pu{gas} = n C_V \int_{\pu{100 K}}^{\pu{1000 K}} \frac{\operatorname{d}T}{T} = \frac{3}{2} R\ln10 \tag1\label{(1)}$$ And for a reversible change, $\Delta S_\pu{total} = 0$ and thus $\Delta S_\pu{surr} = - \Delta S_\pu{gas}$. 
We can also verify this in the following way: $\pu{d}q_\pu{gas} = n C_{V} \cdot \operatorname{d}T$ and from the First law of thermodynamics, $$\pu{d}q_\pu{surr} = - n C_{V} \cdot \operatorname{d}T = - \pu{d}q_\pu{gas} \tag{2}\label{(2)}$$ Thus, $$\Delta S_\pu{surr} = \int^{\pu{1000 K}}_{\pu{100 K}} \frac{\pu{d} q_\pu{surr}}{T} = - n C_V \int_{\pu{100 K}}^{\pu{1000 K}} \frac{\operatorname{d}T}{T} = - \frac{3}{2} R\ln10 \tag{3}\label{(3)}$$
{ "domain": "chemistry.stackexchange", "id": 9697, "tags": "physical-chemistry, thermodynamics, heat, temperature, entropy" }
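The numbers in this exchange can be reproduced in a few lines (my addition; R in J/(mol K)):

```python
import math

R = 8.314
n, Cv = 1.0, 1.5 * R          # monoatomic ideal gas: Cv = (3/2) R
T1, T2 = 100.0, 1000.0

dS_sys = n * Cv * math.log(T2 / T1)   # (3/2) R ln 10
q_sys = n * Cv * (T2 - T1)            # 1350 R

print(round(dS_sys, 2))               # 28.72 J/K
print(round(q_sys / R))               # 1350
# Dividing q by a single surroundings temperature gives a different
# number, e.g. -q/T2:
print(round(-q_sys / T2 / R, 2))      # -1.35, i.e. -(27/20) R
# whereas reversibility requires dS_surr = -dS_sys = -(3/2) R ln 10.
```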
Unbounded operators defined only on dense subdomain of Hilbert space in QM?
Question: I am relatively new to quantum mechanics. In a set of notes I am using, the following is a description of an aspect of some operators corresponding to observables. The notes state the following: "Observables corresponding to unbounded operators are not defined on the whole of $\mathcal{H}$ but only on dense subdomains of $\mathcal{H}$ that are not invariant under the action of the observables. Such non-invariance makes expectation values, uncertainties and commutation relations not well defined on the whole of $\mathcal{H}$." There are a few things I don't follow. Why would it be a property of 'unbounded operators' that they are not defined on the whole of $\mathcal{H}$? Also, how does invariance come into this? And how does non-invariance influence expectation values, uncertainties and commutation relations as stated? Answer: Some relevant self-adjoint operators in QM, like orthogonal projectors, are bounded actually, but these are very few in QM. Boundedness is equivalent to the fact that the range of values of the observable, i.e., the spectrum $\sigma(A)$ of the associated operator $A$, is bounded in view of the spectral radius identity, $$||A||= \sup\{|\lambda| \:|\: \lambda \in \sigma(A)\}\:.$$ However, most observables attain arbitrarily large values (think of position or momentum observables). In turn, the definition of adjoint operator and the closed graph theorem prove that boundedness of a self-adjoint operator $A :D(A) \to \cal H$ is equivalent to $D(A)=\cal H$. This explains why most observables in QM are represented by self-adjoint operators whose domain -- always dense, otherwise the adjoint is not defined -- does not coincide with the whole Hilbert space. Regarding invariance of the domain, i.e., the property $$A(D(A))\subset D(A)$$ the text is a bit wrong, since the expectation value $<A>_\psi$ is not affected by non-invariance of the domain. 
It satisfies, for a pure state defined by the unit vector $\psi$, $$<A>_\psi = \langle \psi|A \psi \rangle\:.\tag{0}$$ You see that $\psi \in D(A)$ is sufficient to guarantee the validity of that identity. Concerning uncertainties $\Delta A_\psi$, the quoted text may be right since they satisfy $$\Delta A_\psi^2 = \langle \psi|A^2 \psi \rangle - \langle \psi|A \psi \rangle^2\tag{1}$$ and you see that the first term on the right-hand side needs $A(A\psi)$ to be well-defined, that is $A\psi \in D(A)$ for $\psi \in D(A)$. Here invariance of the domain matters instead. Finally, regarding commutation relations, as they involve the compositions of operators $AB$ and $BA$, corresponding crossed invariance properties should hold: $$B(D(B)) \subset D(A)\quad \mbox{and}\quad A(D(A)) \subset D(B)\:.$$ Some final comments are in order. Strictly speaking, (0) is not the definition of the expectation value of $A$ and (1) is not the definition of the uncertainty of $A$, in the pure state defined by the unit vector $\psi$, even though they are important properties. The true definitions respectively are $$<A>_\psi := \int_{\sigma(A)} \lambda d\langle\psi|P^{(A)}(\lambda)\psi\rangle\tag{2}$$ and $$\Delta A_\psi^2 := \int_{\sigma(A)} (\lambda- <A>_\psi)^2 d\langle\psi|P^{(A)}(\lambda)\psi\rangle\tag{3}$$ where I have introduced the spectral measure of $A$, $P^{(A)}$. The right-hand side of (3) is well-defined provided $$\int_{\sigma(A)} \lambda^2 d\langle\psi|P^{(A)}(\lambda)\psi\rangle < +\infty\tag{4}$$ and this is another way to write $\psi \in D(A)$. So, even in this case, invariance of $D(A)$ is not necessary. Obviously (1) is valid from (3) when $\psi \in D(A^2)$ and is false in general, though it holds in the weaker form $$\Delta A_\psi^2 = \langle A\psi|A \psi \rangle - \langle \psi|A \psi \rangle^2\tag{1'}\:.$$ Similarly $<A>_\psi$ is well defined if $\psi \in D(\sqrt{|A|})$, which is a weaker condition than $\psi \in D(A)$.
{ "domain": "physics.stackexchange", "id": 34218, "tags": "quantum-mechanics, mathematical-physics, operators, hilbert-space, observables" }
Multi-center Taub-NUT geometry and homologically nontrivial cycles
Question: In the string theory book by Ibanez and Uranga (click here for the Google books excerpt), the four-dimensional multi-center Taub-NUT metric is written as $$ds^2 = \frac{V(\textbf{x})}{4}d\textbf{x}^2 + \frac{V(\textbf{x})^{-1}}{4}(dx^{10} + \omega\cdot d\textbf{x})^2$$ $$V(\textbf{x}) = 1 + \sum_{a = 1}^{N}\frac{1}{|\textbf{x}-\textbf{x}_a|}, \qquad \nabla\times\omega = \nabla V(\textbf{x})$$ where $\omega$ is the 3D vector potential for $N$ Dirac magnetic monopoles (equivalently $D6$-branes) in $\mathbb{R}^3$ and $\textbf{x} \in \mathbb{R}^3$ parametrizes the 3D space transverse to the D6 branes. The authors say Around each degenerate fiber over a center $\textbf{x}_a$, the geometry supports a normalizable harmonic $2$-form $\omega_a$. The component of the M-theory 3-form along it, $$ C_3 = \omega_a \wedge A_1^a$$ produces a $U(1)$ gauge boson, interpreted in the type-IIA picture as the $D6$-brane worldvolume $U(1)$. Question 1: Simply from the solution written above, is it obvious that there is a harmonic 2-form at each $\textbf{x}_a$? I know that $V(\textbf{x})$ is harmonic in $\mathbb{R}^3$. And at first, it seemed tempting to connect $\omega$ the vector to $\omega_a$ the normalizable 2-form. But that does not seem right because one should have $N$ 2-forms according to the quoted paragraph. So I dug a bit deeper and found Gubser's TASI lecture notes where, on page 24, he says [for $n+1$ parallel $D6$-branes] ..there are $n$ homologically non-trivial cycles for the $(n+1)$-center Taub-NUT geometry: topologically this is identical to a resolved $A_n$ singularity. Thus there exist $n$ harmonic, normalizable two-forms, call them $\omega^i$. These forms are localized near the centers of the Taub-NUT space, and they are the cohomological forms dual to the $n$ non-trivial homology cycles.
Furthermore, there is one additional normalizable $2$-form on the Taub-NUT geometry, which can be constructed explicitly for $n = 0$, but which owes its existence to no particular topological property. Let us call this form $\omega^0$. ..we expand the Ramond-Ramond three-form of type-IIA as $$ C_{(3)} = \sum_{i=0}^{n}\omega^i \wedge A_i + \cdots$$ In contrast to the book, where the notion of 2-cycles is described later (and is not seemingly used to assert the existence of normalizable harmonic 2-forms), Gubser's argument seems to suggest that the normalizable harmonic 2-forms exist due to the presence of non-trivial homology cycles. Why? Question 2: Why does $\omega^0$ not "owe its existence to any particular topological property"? What is the significance of this "additional" normalizable 2-form on the Taub-NUT geometry? I would have thought that all D6-brane centers would be treated the same way -- if we have $(n+1)$ centers, then why do we seem to make a distinction between $n$ and the $0^{th}$ one? Answer: The existence of normalizable harmonic 2-forms in a multi-center Taub-NUT geometry is indeed not obvious at all. The specific form of these forms may be found in Sen's "Dynamics of Multiple Kaluza-Klein Monopoles in M- and String Theory", who gives them (adapted to the notation in the question) as \begin{align} \omega_i & = \mathrm{d}\xi_i \tag{1a}\\ \xi_i & = V^{-1} V_i (\mathrm{d}x^{10} + \omega\cdot\mathrm{d}x) - \omega_i \cdot \mathrm{d}x,\tag{1b}\end{align} where $V_i = \frac{m}{|\textbf{x}-\textbf{x}_i|}$. Sen, in turn, cites Ruback's "The motion of Kaluza-Klein monopoles" as the origin of these formulae, where it turns out that the exterior derivative in eq. (1a) is only meant to work as the derivative on the coordinates of Taub-NUT space that are not $x^{10}$, if I am reading Ruback's notation correctly. Ruback, in turn, helpfully cites "Page, D.N.: Private communication; Yuille, A.L., PhD.
Thesis, University of Cambridge (1980) unpublished" as the origin of these formulae, so here the trail ends. $\omega^0$ in the quote from Gubser does not "owe its existence to any particular property" because it's not associated with the basis of 2-cycles that arises from the lines between the centers of the Taub-NUT monopoles - you'll note that applying Poincaré duality to these $n-1$ cycles gives us only $n-1$ compactly supported (and hence normalizable) modes. Note also that these modes are not the $\omega_i$ from eq. (1a), since although normalizable, they are not compactly supported. So what Gubser means is that $\omega^0$ is an additional normalizable mode that does not show up through Poincaré duality, while the rest of the normalizable modes have "compactly supported versions" we can see through the homological argument. Conveniently, this explains a discrepancy between the Taub-NUT version of gauge enhancement and the type IIa $D_6$-brane gauge enhancement: If you look at the type IIa approach, you'll note that the total gauge group there is $\mathrm{U}(N) = \mathrm{SU}(N)\times\mathrm{U}(1)$, but that the Taub-NUT approach looking at the cycles only delivers $\mathrm{SU}(N)$. The additional normalizable mode not arising from the cycles is precisely the explanation for the $\mathrm{U}(1)$.
{ "domain": "physics.stackexchange", "id": 41893, "tags": "string-theory, topology, branes, instantons" }
Is time unidirectional because of 4th spatial dimension?
Question: We have heard about an expanding universe. Consider an expanding sphere, and consider the surface of the sphere as our 3-dimensional universe. Can the time dimension be the radius $R$ of this sphere? And because of its expansion, is it one-directional? EDIT: For a moment forget about time. Our universe is expanding. How? It can't expand only within a 3-dimensional universe. You can only imagine an expanding 3D universe as the surface of a 4D universe, like what I explained before. Maybe I was wrong to call this 4th dimension time. But we must consider a 4th spatial dimension to show that the expansion of our universe occurs through it. And I think that because of this expansion in the 4th dimension, time is unidirectional. Expansion through the 4th dimension can't go back, so if you put all the particles in the universe back into some previous state, it will not mean going back through time, because there are differences in the 4th dimension. I apologize for my poor English. EDIT: I changed the title: "Is time unidirectional because of 4th spatial dimension?" Answer: Yes, it can be. But that's not why time is unidirectional. There are models of the universe that are consistent with all observations and that have the symmetries we expect, where spacetime is $$\mathbb R^4=\{(a,b,c,d):a,b,c,d\in\mathbb R\}$$ and time is equal to $t=\sqrt{a^2+b^2+c^2+d^2}$. However, time is unidirectional for a totally different reason: the signature of the metric. The metric is what clocks and rulers measure. For instance, the thing I called time, $t,$ isn't what clocks measure. It can't be, because two clocks that move differently measure differently. So they aren't measuring something about the universe (like where in 4d spacetime you are); they measure something about the particular path they were following in the universe. Rulers do that too. When they move differently they measure differently. So what they measure is something about their path. What they measure is the metric along their path.
So the metric could allow things to move backwards in time by having two independent time directions (this would give a signature of (2,2) for two independent timelike directions and two independent spacelike directions), so you would have room to do a rotation in time. But the metric in our space doesn't allow you to rotate around all the way; you can only wiggle a bit while you keep moving radially outwards in 4d (in this model; there are other models that are also consistent with observations so far in which time isn't a radial direction in 4d at all). So the metric is important. For instance, it might look like space is getting bigger at a certain rate in this model; after all, the 3d set of points with a fixed $t$ seems larger for a larger $t$. But since measurements are based on the metric, the real question is whether the metric gets larger or smaller on those surfaces of larger $t$, and depending on whether it does, the size of the universe (as measured by rulers in the universe that are measuring the metric, like all rulers do) could be getting larger, or smaller. And you could even make the universe collapse in a finite amount of time by making the metric have clocks tick slower and slower for larger $t$, so even though it looks in the picture like you have an infinite amount of time for the universe to live, you might not. So the details of the metric are super important. And in general relativity the work goes into finding out how that metric changes from point to point and time to time.
{ "domain": "physics.stackexchange", "id": 29346, "tags": "general-relativity, cosmology, spacetime, time" }
Find $X(j\omega)$ after sampling of $2\cos(2000\pi t)+\sin(5000\pi t)$ at 5 kHz sampling rate
Question: The Fourier transform of the first term has two spikes at -2000pi and 2000pi of magnitudes 2pi for both. The Fourier transform of the second term has two spikes at 5000pi and -5000pi having magnitudes $-j\pi$ and $j\pi$ respectively. Since sampling is at 5000 Hz, the -2000pi and 2000pi spikes repeat at 3000pi and -3000pi respectively, while the -5000pi and 5000pi spikes repeat at 0. Now we've got the spectrum in the big-Omega domain. We divide this by 5000 Hz to get the spectrum in the -pi to pi domain. So the answer I've derived has seven spikes, at -pi, pi, -2/5pi, 2/5pi, -3/5pi, 3/5pi, and 0. However, none of the options have seven spikes. What am I doing wrong? EDIT - I forgot to scale by $\frac{1}{T}$. But my problem still remains. Only options 1 and 3 have spikes at -pi, pi, -0.4pi and 0.4pi. But neither of these options has spikes at 3/5pi, -3/5pi, and zero. Answer: Look at your signal and figure out what sampling rate you need to avoid aliasing. In this case, you are sampling at twice the maximum frequency component so you should not expect aliasing, but your work leads to aliasing components showing up at plus/minus $\frac{3\pi}{5}$. Remember that the DTFT is periodic with period $2\pi$, and try re-working the problem.
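As a numerical sanity check (my addition, assuming NumPy; it complements rather than replaces re-deriving the DTFT): the sine term sits exactly at the Nyquist frequency, where its samples $\sin(\pi n)$ vanish for integer $n$, so the sampled signal's spectrum shows energy only at $\omega = \pm 0.4\pi$, i.e. 1000 Hz:

```python
import numpy as np

fs = 5000                      # sampling rate from the question (Hz)
n = np.arange(fs)              # one second of samples
t = n / fs
x = 2 * np.cos(2000 * np.pi * t) + np.sin(5000 * np.pi * t)

# The sine term lies exactly at Nyquist: sin(pi * n) = 0 for integer n,
# so its samples vanish up to floating-point rounding.
print(np.max(np.abs(np.sin(5000 * np.pi * t))))   # tiny (rounding only)

# The remaining content is the cosine at 1000 Hz, i.e. omega = 0.4*pi.
X = np.fft.rfft(x)
peak_hz = np.argmax(np.abs(X)) * fs / len(x)
print(peak_hz)
```

The peak lands on the 1000 Hz bin exactly, since the cosine completes an integer number of cycles in the one-second window.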
{ "domain": "dsp.stackexchange", "id": 7952, "tags": "discrete-signals, sampling" }
Mechanical joint that breaks at a specific force
Question: I am designing a joint which I want to break at a specific force. I want to design 2 parts connected by a joint that breaks when a certain amount of force is applied, thus preventing any damage to the individual components themselves. I also want the joint to be easily replaceable without replacing the individual parts themselves. I'm hoping to 3D print the joint. I have seen some petrol stations that have pipes attached such that they will break and cut off the fuel supply if they are pulled harshly (i.e., by someone who forgot to take the nozzle off their car and tried to drive away). I don't know the name of the mechanical linkage they use, but I'm hoping to achieve a similar effect. I would appreciate it if you could point me to relevant information sources and designs of such types of linkages. Answer: Another option is the shear pin. It was the first thing that came to mind (Pete W already suggested it also). A joint involving a shear pin (in tension and compression) would look something like the following (here it's called a clevis pin, and sometimes you find it as a hinge pin). Force calculation: The force required for the shear pin to fail is basically determined by the smallest cross-section subjected to shear. Although theoretically you can have any cross-section, usually (to allow for rotation) the shear pin is cylindrical. So the force for a cylindrical shear pin will be calculated by: $$F_{max} = \sigma_s \cdot A$$ where: $\sigma_s$ is the shear stress at failure, and $A$ is the smallest cross-section under stress. For a filled cylinder $A=\pi r^2$; for a hollow cylinder, $A=\pi (r_o^2 - r_i^2)$. 3D printing relevance: The really good thing about this and 3D printing is that you can fine-tune the exact force that you want. With 3D printing you can design a hollow shear pin and adjust: the thickness of the walls, the printing orientation, and the cross-section geometry.
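As a worked example of the $F_{max} = \sigma_s \cdot A$ formula (a sketch: the 40 MPa shear strength is a made-up placeholder, not a recommendation for any real material):

```python
import math

def shear_pin_force(shear_strength_pa, outer_radius_m, inner_radius_m=0.0):
    """F_max = sigma_s * A for a (possibly hollow) cylindrical shear pin."""
    area = math.pi * (outer_radius_m**2 - inner_radius_m**2)
    return shear_strength_pa * area

# Hypothetical numbers: sigma_s = 40 MPa, solid pin of radius 2 mm.
f_solid = shear_pin_force(40e6, 2e-3)

# Hollowing the pin (1 mm bore) lowers the breaking force without
# changing the outer geometry -- handy when tuning a 3D print.
f_hollow = shear_pin_force(40e6, 2e-3, 1e-3)

print(f_solid, f_hollow)   # roughly 503 N and 377 N
```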
{ "domain": "engineering.stackexchange", "id": 4027, "tags": "mechanical-engineering, structural-engineering, design, 3d-printing" }
A scaling difference between MATLAB's pwelch and Python's SciPy welch
Question: I'm having trouble in python with the scipy.signal method called welch (https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.welch.html), which estimates the frequency spectrum of a time signal, because it does not (at all) provide the same output as MATLAB's method called pwelch (https://www.mathworks.com/help/signal/ref/pwelch.html), given the same parameters (window type (Hamming), window size, overlap, etc.). Beneath is my code in each language (the input is a real signal): MATLAB: pxx_matlab = pwelch(w_in,4096,0.5*4096,4096) (according to the documentation the default window in MATLAB's pwelch is hamming) Python: from scipy import signal f, pxx = signal.welch(w_in, fs=16000, window='hamming', nperseg=4096, noverlap=2048, nfft=4096, detrend=False) This is the input data: np.random.seed(0) w_in = np.random.randn(16000*10) (I don't know how to attach CSV files) Here are the results: As you can see, the two signals differ by a scaling constant (well, almost a constant), which is about 34 dB (2544). I tried to link the constant to the sampling frequency fs = 16000, Nyquist frequency (8 kHz) or the window's length 4096, overlap 2048 and their 10*np.log10 dB counterparts but to no avail. Do you know why is there a constant scaling difference and to what does the constant equal (as a function of the parameters)? Edit: the constant 34 dB seems to be independent of the data used, therefore it is very likely it is a function of the signal processing parameters (fs, window length, overlap etc). Note that 10*log10(2049) and 10*log10(2048) are approximately 33.11 dB I posted the same question on StackOverflow: https://stackoverflow.com/questions/68242817/a-scaling-difference-between-matlabs-pwelch-and-pythons-scipy-welch Answer: MATLAB's function pwelch scales the PSD under the assumption that the DFT is executed across the range of $0:2 \pi$ in the event the sample frequency is not passed to the function. 
Thus, you have two options: Scale your answer by: $p_{xx} = p_{xx} \frac{2 \pi}{f_{s}}$. Pass the sample frequency to the function as the 5th argument, e.g.: pxx = pwelch(x, window, noverlap, nfft, fs); In the context of your example: $10 \log_{10} \left( \frac{2 \pi}{16000} \right) = -34.06 ~ dB$, which equals your offset error. This information may be found directly in the MATLAB script for welch.
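The -34.06 dB figure can be reproduced in a couple of lines (a sketch using only the Python standard library; fs = 16000 as in the question):

```python
import math

fs = 16000                  # sample rate from the question (Hz)

# pwelch without fs normalizes over 0..2*pi (rad/sample), while
# scipy.signal.welch with fs normalizes per Hz. The conversion factor:
scale = 2 * math.pi / fs    # multiply the MATLAB result by this
offset_db = 10 * math.log10(scale)
print(round(offset_db, 2))  # -34.06, matching the observed gap

# Equivalently, pass fs as pwelch's 5th argument and no rescaling is needed.
```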
{ "domain": "dsp.stackexchange", "id": 11128, "tags": "matlab, python, power-spectral-density" }
Electricity in clouds and induced magnetism
Question: It is well known that clouds carry electric charges and that cumulonimbus clouds contain huge amounts of electricity. It is also known from Maxwell's equations that moving charged particles induce a magnetic field around them. I am sorry if I am missing any concept here, but I have a question: should there also be a magnetic field around cumulonimbus clouds due to the presence and motion of charged particles in them? Edit: I want to know about field studies reporting the intensity of the magnetic field in different cloud systems. I am interested in knowing about cold clouds, as the interaction of solid hydrometeors could be prominent in them. I also want to know if the intensity is sufficient to have an interaction with the magnetic field of the Earth. I see recent modeling studies are being performed on electric fields, but the mention of magnetic fields is still missing in them. Answer: The short answer to your question is yes: all charged particles (positive or negative), whether independent (like electrons or protons) or attached to other materials like cloud droplets, molecules, atoms, etc., produce a magnetic field when in motion. If two charged particles of the same charge are moving in opposite directions, then they cancel out each other's magnetic field. Hence, the random motion of droplets in a cloud causes a net-zero magnetic field away from the cloud. One can understand this by analogy to a pair of electric wires used in powering homes, which does not produce an appreciable magnetic field at a distance when the two wires of the pair are close to each other, because electrons are moving in opposite directions in the two wires. In organized motion of charged particles, that is, when there is net movement of charge in a preferential direction, for example a cloud with charged droplets moving under the influence of wind, one can have a net magnetic field.
However, here too, if the cloud as a whole contains both positive and negative charges in equal numbers, it will not produce an appreciable magnetic field at a long distance, because the magnetic field produced by negative charges will be opposite in direction to the magnetic field produced by positive charges moving in the same direction. If there is a net positive or negative charge on the cloud, then the magnitude of the magnetic field will depend on the wind speed, which for the atmospheric range of wind speeds will produce a very small magnetic field. An appreciable magnetic field is produced during lightning, since a large amount of charge moves along a particular path with high velocity, but such movements are momentary, and as soon as the lightning strike is over, the magnetic field produced starts dying. If a compass needle has significant inertia, then before the needle can move under the influence of the newly produced magnetic field, the field will vanish; hence ordinary compasses are unlikely to show the effect of lightning strikes. Further Reading: Fundamentals of Lightning by Vladimir A. Rakov, Cambridge University Press, ISBN 978-1-107-07223-7, 2016
{ "domain": "earthscience.stackexchange", "id": 1670, "tags": "atmosphere, clouds, electromagnetism" }
Purpose of Hermitian adjoints?
Question: During a QM lecture, we went over Hermitian adjoints. While I understand that it is the Hermitian conjugate of an operator, I do not understand what this represents, besides its definition. Also, I do not understand the motivation behind taking the adjoint of an operator. What exactly does an adjoint of an operator describe and how is the adjoint of an operator useful in quantum mechanics? Answer: From the spectral theorem, a self-adjoint operator has only real eigenvalues. Now, when you make a measurement on your system, by the postulate of wave-function collapse you can only measure the eigenvalues of your operator. Since a measurement yields a real number, you need the self-adjointness property to ensure that the eigenvalues are real. Obviously there are operators with real eigenvalues which are non-Hermitian; rather, the requirement that your observables must be Hermitian operators is a postulate of quantum mechanics.
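The central claim here (a Hermitian operator has only real eigenvalues) is easy to check numerically in finite dimensions. A minimal sketch, assuming NumPy is available; the matrix is random, not tied to any physical system:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

h = a + a.conj().T          # A + A^dagger is Hermitian by construction
assert np.allclose(h, h.conj().T)

# Despite the complex entries, every eigenvalue is real (imaginary
# parts are only floating-point noise), as the spectral theorem says.
eigvals = np.linalg.eigvals(h)
print(np.max(np.abs(eigvals.imag)))   # tiny (rounding only)
```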
{ "domain": "physics.stackexchange", "id": 91510, "tags": "quantum-mechanics, hilbert-space, operators, complex-numbers, observables" }
Slider module and extending it further
Question: I created this module, and I'm wondering if I did a good job. How would an experienced JavaScript developer improve this simple slider module further? Like, let's say that you want to use this module on 3 sliders on the same page. Here is a jsFiddle. Module var sliderTest = (function(){ /*::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: ✚ private variables :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::*/ var expoElement = document.querySelector(".test") var expoCanvas = expoElement.querySelector(".canvas") var expoSlides = expoCanvas.querySelectorAll(".slide") var controlPrev = expoElement.querySelector(".control.prev") var controlNext = expoElement.querySelector(".control.next") /*::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: ✚ private functions :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::*/ function whereIsActive(){ var y = {} for(var i = 0; i < expoSlides.length; i++){ if(expoSlides[i].classList.contains("active")){ y.current = i; if(expoSlides[i+1]){ y.next = i+1; } else { y.next = 0; } if(expoSlides[i-1]){ y.prev = i-1; } else { y.prev = expoSlides.length-1; } } } return y } function removeActiveClasses(){ for(var i = 0; i < expoSlides.length; i++){ expoSlides[i].classList.remove("active") } } /*::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: ✚ public methodes :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::*/ var publicMethod = { nextSlide:function(ev){ ev.preventDefault(); var nextIndex = whereIsActive().next; removeActiveClasses(); expoSlides[nextIndex].classList.add("active"); }, prevSlide:function(ev){ ev.preventDefault(); var prevIndex = whereIsActive().prev; removeActiveClasses(); expoSlides[prevIndex].classList.add("active") }, sendInTheClone:function(){alert(whereIsActive().current)} } 
/*::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: ✚ event listners :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::*/ controlNext.addEventListener("click", publicMethod.nextSlide) controlPrev.addEventListener("click", publicMethod.prevSlide) /*::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: ✚ return :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::*/ return publicMethod; })(); HTML <div class="test"> <div class="canvas"> <div class="slide ">1</div> <div class="slide ">2</div> <div class="slide active">3</div> <div class="slide ">4</div> </div> <a class="control prev" href="" data-control="prev">prev</a> <a class="control next" href="" data-control="next">next</a> </div> Answer: document.querySelector While it is a great function to use, you are using it in the wrong situations. Every time you use it, you are getting an array of elements with a class that you passed in to the function. Why not just use document.getElementsByClassName? Sure, it takes a little bit longer to type but it's faster. Think about it: the querySelector function has to read the argument passed in and, based on its syntax ('.' for classes, '#' for ids, and a few more), scan the document in a way unique to each type of identification (id, class, etc). Wouldn't it be easier for the function to know that it was going to look for one class and for one class only? Semicolons They aren't entirely necessary, and the interpreter probably adds them in for you, but your code looks like JavaScript when you put them in (at least in my opinion). HTML indentation Your divs are perfectly indented, but when you go down to your as, you start to try and clump them on to one line. Originally, when I was looking at the a tag(s), I thought it was one really long a tag. JavaScript indentation For the most part, it's really good.
But there is one section that is bothering me: your public methods section. I actually had to post this into JSFiddle and click "Tidy up" to understand what was going on there. I believe the problem is the closing bracket of nextSlide; it is too far back, and it looks like it is closing publicMethod. Built in output In the sendInTheClone method of publicMethod, it uses the function alert. Not everyone using this module may want an alert popping up on their screen; maybe they want it written to the DOM, or maybe they use custom pop-up boxes. (Excuse this section ("Built in output") if I misunderstood your sendInTheClone function) Good things: Contrary to what people have been commenting, I think that the big comments going across your code to split it up are a good idea. In my opinion, they help with legibility. The use of publicMethod. It's a pretty good idea; I like it.
{ "domain": "codereview.stackexchange", "id": 12655, "tags": "javascript, modules" }
Can a man with a height of 1 light year see his whole body?
Question: Here is a thought experiment of mine and a few questions about it: a man is standing somewhere in the universe. His height is 1 light year. Can he see his whole body at the exact age of his eyes? What would he see if he turns his eyes down to look at his legs and sees his organs one by one from legs to chest? If he moves his hands from bottom to top, will the hands' movement be visible to him? Consider that the man's current age is 20 years. Answer: He cannot see his whole body at the same 'age' as his eyes, if you are asking about an instantaneous age across his body. He would see other parts of his body as younger, because it takes time for light to travel from his body to his eyes. He would see his feet at somewhere around 19 years old, and the rest of his body at some version of 20 - (distance away in light years), assuming he was all created at the same time (which you seem to assume). However, this is not the best-posed question, because there is no instantaneous 'same time' in the universe. To put it simply, relativity tells us that simultaneity is relative. If he chooses to move his hands from bottom to top, it will take time for the signal from his brain to travel to the muscles in his arms. His hands cannot respond as ours normally do, because the signals are limited by the speed of light, and it would take time for him to see his hands move, as per the above.
{ "domain": "physics.stackexchange", "id": 43891, "tags": "special-relativity, visible-light" }
How reliant is the Solar System on being exactly the way it is?
Question: We know that all objects with mass exert forces on all other objects of mass such that $$ F = \frac{GMm}{R^2}.$$ And as others have discussed, the planets do interfere with each other gravitationally to a small degree. My question is how reliant the solar system is on its exact structure. If a planet were to change its alignment or orbit or gravitational effect on other planets (through gain of mass in an asteroidal collision, for example), would a deviation in the structure of the solar system as it is cause it to collapse? E.g., planets changing orbits significantly enough to drift away from the sun or drift into it? Answer: This is something I played with while testing an n-body code I wrote during college. Unfortunately I don't have any animations, or even the original code anymore -- but I can report qualitative results. Removing Jupiter and Saturn does indeed have a significant destabilizing effect -- and a chaotic one at that (i.e., depending on precise initial conditions and varying with numerical accuracy) -- leading to the dynamical instability of numerous planets. Removing the other planets had no effect on dynamical stability, but there were some small changes to periods, etc. This result should be expected, as the gravitational effects of planets other than Jupiter (and Saturn to a lesser degree) are almost entirely negligible on the dynamics of other planets.
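For readers who want to try this kind of experiment, here is a minimal sketch of a symplectic orbit integrator (my own illustration, assuming NumPy; units of AU and years with GM_sun = 4*pi^2, and a single test planet rather than the full solar system the answer describes):

```python
import numpy as np

GM_SUN = 4 * np.pi**2  # heliocentric gravitational parameter, AU^3/yr^2

def accel(r):
    """Acceleration of a test body due to the Sun fixed at the origin."""
    return -GM_SUN * r / np.linalg.norm(r)**3

def leapfrog(r, v, dt, steps):
    """Kick-drift-kick leapfrog: symplectic, so orbits stay stable."""
    for _ in range(steps):
        v = v + 0.5 * dt * accel(r)
        r = r + dt * v
        v = v + 0.5 * dt * accel(r)
    return r, v

# Earth-like circular orbit: 1 AU radius, speed 2*pi AU/yr, period 1 yr.
r0 = np.array([1.0, 0.0])
v0 = np.array([0.0, 2 * np.pi])
r1, v1 = leapfrog(r0, v0, dt=1e-3, steps=1000)   # integrate one year

# After one period the body should be back near its starting point;
# adding Jupiter-mass perturbers is a straightforward extension.
print(np.linalg.norm(r1 - r0))   # small residual from truncation error
```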
{ "domain": "physics.stackexchange", "id": 7744, "tags": "newtonian-mechanics, newtonian-gravity, solar-system, stability" }
What elements don't observe the octet rule?
Question: Apart from hydrogen, which forms a duet, which elements don't observe the octet rule? Answer: The octet rule is more advisory than a strict rule; it's usually obeyed by the main-group elements. Exceptions are paramagnetic compounds (for an obvious reason); the first few elements in the periodic system (He configuration); 3-valent boron compounds; depending on the counting system, boron clusters (e.g. dodecaborane with 6-coordinated boron); and transition-metal complexes (these usually follow the 18-electron rule, though there are plenty of examples of 14-, 16-, or even 20-electron complexes).
{ "domain": "chemistry.stackexchange", "id": 505, "tags": "inorganic-chemistry" }
Find duplicate files using Python
Question: Most Python "duplicate file finder" scripts I found do a brute-force of calculating the hashes of all files under a directory. So, I wrote my own -- hopefully faster -- script to kind of do things more intelligently. Basically, it first searches for files of exact same size, then it compares only N bytes at the head and tail of the files, and finally it compares the files' full hashes. My #1 concern of course would be correctness, followed closely by maintainability. __author__ = 'pepoluan' import os import hashlib from glob import glob from itertools import chain # Global variables class G: OutFormat = 'list' # Possible values: 'list', 'csv' OutFile = None StartPaths = [ 'D:\\DATA_2', 'D:\\DATA_1', 'D:\\' ] PartialCheckSize = 8192 FullFileHash = True MinSize = 16 * 1024 * 1024 ProgPeriod = 1000 FullBlockSize = 1024 * 1024 Quiet = False HashFunc = hashlib.md5 def get_walker_generator(at_path): return ( chain.from_iterable( glob( os.path.join( x[0].replace('[', '[[]').replace(']', '[]]'), '*.*' ) ) for x in os.walk(at_path) ) ) def dict_filter_by_len(rawdict, minlen=2): assert isinstance(rawdict, dict) return {k: v for k, v in rawdict.items() if len(v) >= minlen} def qprint(*args, **kwargs): if not G.Quiet: print(*args, **kwargs) def get_dupes_by_size(path_list): qprint('===== Recursively stat-ing {0}'.format(path_list)) processed = set() size_dict = {} for statpath in path_list: c = 0 uniq_in_path = 0 qprint('{0}...'.format(statpath), end='') for fname in get_walker_generator(statpath): try: if c >= G.ProgPeriod: print('.', end='', flush=True) c = 0 if fname not in processed: c += 1 uniq_in_path += 1 fstat = os.stat(fname) fsize = fstat.st_size flist = size_dict.get(fsize, set()) flist.add(fname) size_dict[fsize] = flist processed.add(fname) except: print('\nException on ', fname) raise qprint(uniq_in_path) qprint('\nTotal files: ', len(processed)) dupe_sizes = {(None, sz): list(fset) for sz, fset in size_dict.items() if sz >= G.MinSize and len(fset) > 1} 
qprint('Dupes: ', len(dupe_sizes)) return dupe_sizes def refine_dupes_by_partial_hash(dupes_dict, partial_check_size=G.PartialCheckSize, hashfunc=G.HashFunc): assert isinstance(dupes_dict, dict) qprint('===== Checking hash of first and last {0} bytes ====='.format(partial_check_size)) qprint('Processing...', end='', flush=True) size_and_hashes = {} for selector, flist in dupes_dict.items(): fsize = selector[-1] for fname in flist: with open(fname, 'rb') as fin: hash_front = hashfunc(fin.read(partial_check_size)).hexdigest() seek_targ = fsize - G.PartialCheckSize - 1 if seek_targ > 0: fin.seek(seek_targ) hash_rear = hashfunc(fin.read(partial_check_size)).hexdigest() else: hash_rear = hash_front # "size" at rear, so a simple print will still result in a nicely-aligned table selector = (hash_front, hash_rear, fsize) flist = size_and_hashes.get(selector, []) flist.append(fname) size_and_hashes[selector] = flist qprint('.', end='', flush=True) dupe_exact = dict_filter_by_len(size_and_hashes) qprint('\nDupes: ', len(dupe_exact)) return dupe_exact def refine_dupes_by_full_hash(dupes_dict, block_size=G.FullBlockSize, hashfunc=G.HashFunc): assert isinstance(dupes_dict, dict) qprint('===== Checking full hashes of Dupes') qprint('Processing...', end='', flush=True) fullhashes = {} for selector, flist in dupes_dict.items(): sz = selector[-1] # Save size so we can still inform the user of the size for fname in flist: hasher = hashfunc() with open(fname, 'rb') as fin: while True: buf = fin.read(block_size) if not buf: break hasher.update(buf) # "size" at rear, so a simple print will still result in a nicely-aligned table slct = (hasher.hexdigest(), sz) flist = fullhashes.get(slct, []) flist.append(fname) fullhashes[slct] = flist qprint('.', end='', flush=True) dupe_exact = dict_filter_by_len(fullhashes) qprint('\nDupes: ', len(dupe_exact)) return dupe_exact def output_results(dupes_dict, out_format=G.OutFormat, out_file=G.OutFile): assert isinstance(dupes_dict, dict) kiys = [k 
            for k in dupes_dict]
    kiys.sort(key=lambda x: x[-1])
    if out_file is not None:
        qprint('Writing result in "{0}" format to file: {1} ...'.format(out_format, out_file), end='')
    else:
        qprint()
    if out_format == 'list':
        for kiy in kiys:
            flist = dupes_dict[kiy]
            print('-- {0}:'.format(kiy), file=out_file)
            flist.sort()
            for fname in flist:
                print(' {0}'.format(fname), file=out_file)
    elif out_format == 'csv':
        print('"Ord","Selector","FullPath"', file=out_file)
        order = 1
        for kiy in kiys:
            flist = dupes_dict[kiy]
            flist.sort()
            for fname in flist:
                print('"{0}","{1}","{2}"'.format(order, kiy, fname), file=out_file)
                order += 1
    if out_file is not None:
        qprint('done.')

def _main():
    dupes = get_dupes_by_size(G.StartPaths)
    dupes = refine_dupes_by_partial_hash(dupes)
    if G.FullFileHash:
        dupes = refine_dupes_by_full_hash(dupes)
    output_results(dupes, out_format=G.OutFormat, out_file=G.OutFile)

if __name__ == '__main__':
    _main()

Answer: Don't Repeat Yourself

The refine_dupes_by_partial_hash is almost identical to refine_dupes_by_full_hash; the only difference is what is done with each file. Factor the difference out into a hashing_strategy callable:

def refine_dupes(dupes_dict, hashing_strategy, block_size, hashfunc):
    hashes = {}
    for selector, flist in dupes_dict.items():
        fsize = selector[-1]
        for fname in flist:
            hasher = hashfunc()
            with open(fname, 'rb') as fin:
                result = hashing_strategy(fin, fsize, block_size, hasher)
            hashes.setdefault(result, []).append(fname)
    dupe_exact = dict_filter_by_len(hashes)
    return dupe_exact

In fact, I'd go one step further and make hashing_strategy a class to encapsulate block_size and hashfunc.

Use builtins where possible: for order, fname in enumerate(flist, 1): instead of manually incrementing order

Use sys.argv instead of hardcoding paths.
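To make the callable-strategy suggestion concrete, here is a minimal runnable sketch. The strategy signature (fin, fsize, block_size, hasher) is one plausible choice, not taken from the original code, and the filtering step is inlined instead of calling the original's dict_filter_by_len:

```python
import hashlib
import os
import tempfile

def full_hash_strategy(fin, fsize, block_size, hasher):
    """Hash the whole file in block_size chunks."""
    while True:
        buf = fin.read(block_size)
        if not buf:
            break
        hasher.update(buf)
    return (hasher.hexdigest(), fsize)

def partial_hash_strategy(fin, fsize, block_size, hasher):
    """Hash only the first block_size bytes (front-only variant, for brevity)."""
    hasher.update(fin.read(block_size))
    return (hasher.hexdigest(), fsize)

def refine_dupes(dupes_dict, hashing_strategy, block_size=1024 * 1024,
                 hashfunc=hashlib.md5):
    """Regroup candidate duplicates by the result of hashing_strategy."""
    hashes = {}
    for selector, flist in dupes_dict.items():
        fsize = selector[-1]
        for fname in flist:
            hasher = hashfunc()
            with open(fname, 'rb') as fin:
                result = hashing_strategy(fin, fsize, block_size, hasher)
            hashes.setdefault(result, []).append(fname)
    # Keep only groups that still contain at least two files.
    return {k: v for k, v in hashes.items() if len(v) >= 2}

# Tiny demo: two identical files and one different file of the same size.
tmpdir = tempfile.mkdtemp()
paths = []
for name, data in [('a', b'same'), ('b', b'same'), ('c', b'diff')]:
    path = os.path.join(tmpdir, name)
    with open(path, 'wb') as f:
        f.write(data)
    paths.append(path)

groups = refine_dupes({(None, 4): paths}, full_hash_strategy)
```

A partial-hash strategy has the same signature but would seek within the file; wrapping block_size and hashfunc in a strategy class, as suggested above, removes the extra arguments entirely.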
{ "domain": "codereview.stackexchange", "id": 15683, "tags": "python, algorithm, python-3.x" }
Why does the differential wave equation of EM in dielectrics use the permittivity and permeability of vacuum?
Question: I have been reading some basic book to understand how to derive the wave equation of light from the Maxwell equations, but those equations use the permittivity and permeability of vacuum. The books usually tell you that those constants should be replaced by the specific ones of the medium. But when I start reading how to derive the behavior of an EM wave in a dielectric (isotropic) they start with the differential wave equation deduced from Maxwell, but with the vacuum constants.
$$\nabla^2\mathbf{E}-\epsilon_0\mu_0\frac{\partial^2\mathbf{E}}{\partial t^2}=\mu_0\frac{\partial^2\mathbf{P}}{\partial t^2}$$
Is there a reason for that? In the end, after doing some steps with the previous equations in order to get the index of refraction of the dielectric, you end up with the vacuum ε appearing in one of its terms (is that correct?).
$$\tilde{n}=\sqrt{1+\frac{Ne^2}{m_e\epsilon_0(w_0^2-w^2+i\gamma w)}}$$
And also the wave number (k) will be in terms of the speed of light in vacuum (c) as k=nw/c, but if from the first moment (in the differential wave equation) we use the permittivity and permeability of the material, we will end up with k=nw/v, where v is the velocity of the wave in the material (isn't that correct?) I know that I'm not taking into account something (in terms of theory), but I can't figure out what it is.

Answer: In principle you can always use the vacuum (or microscopic) Maxwell equations. There is no theoretical necessity for the macroscopic Maxwell equations (those with the material dielectric constants). The consequence is however that you need to explicitly include all the charges within your system in your calculation, e.g. if you want to describe the propagation of light through glass you need to include all the positive and negative charges (nuclei and electrons) that make up the glass. This is pretty cumbersome and ineffective and therefore the macroscopic Maxwell equations were introduced.
The macroscopic Maxwell equations are based on the Polarization $\mathbf{P}$ and Magnetization $\mathbf{M}$, which describe the effective behaviour of an overall neutral collection of charges. Those charges can (and must) be discarded from the explicit calculation and only enter through their Polarization and Magnetization. The equation
$$\nabla^2\mathbf{E}-\epsilon_0\mu_0\frac{\partial^2\mathbf{E}}{\partial t^2}=\mu_0\frac{\partial^2\mathbf{P}}{\partial t^2}$$
you stated is therefore already a macroscopic equation and implicitly includes the relative permittivity and permeability. The permittivity explicitly enters if we use the typical assumption that the Polarization of the material depends linearly on the external electric field. Then
$$ \mathbf{P} = \chi_e \varepsilon_0 \mathbf{E} $$
$$\nabla^2\mathbf{E}-\epsilon_0\mu_0\frac{\partial^2\mathbf{E}}{\partial t^2}=\mu_0\varepsilon_0 \chi_e\frac{\partial^2\mathbf{E}}{\partial t^2}$$
$$\nabla^2\mathbf{E}-\epsilon_0\mu_0\underbrace{(1+\chi_e)}_{\varepsilon_r} \frac{\partial^2\mathbf{E}}{\partial t^2}=0$$
To wrap this up: The macroscopic Maxwell equations with the relative permittivity and permeability allow one to exclude part of the system from the explicit calculation by encapsulating its effective behaviour within the polarization and magnetization. In the example of light within glass this means that we can use the Maxwell equations as if there were no charges present, i.e. set $\rho=0,\;\mathbf{j}=0$. Only the permittivity and permeability are changed.
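As a quick numerical sanity check of the Lorentz-oscillator formula quoted in the question, one can evaluate it directly. The parameter values below are arbitrary illustrative numbers, not from any specific material; in the N → 0 (vacuum) limit the index must reduce to exactly 1:

```python
import cmath

# Physical constants (SI), rounded.
EPS0 = 8.854e-12      # vacuum permittivity [F/m]
E_CHARGE = 1.602e-19  # elementary charge [C]
M_E = 9.109e-31       # electron mass [kg]

def refractive_index(N, w0, w, gamma):
    """Complex refractive index from the single-resonance Lorentz model:
    n~ = sqrt(1 + N e^2 / (m_e eps0 (w0^2 - w^2 + i gamma w)))."""
    chi = N * E_CHARGE**2 / (M_E * EPS0 * (w0**2 - w**2 + 1j * gamma * w))
    return cmath.sqrt(1 + chi)

def wavenumber(n_complex, w):
    """k = n w / c, with c the vacuum speed of light."""
    c = 2.998e8
    return n_complex.real * w / c
```

Below the resonance (w < w0) the real part comes out slightly above 1, as expected for a transparent dielectric; with N = 0 the vacuum result n = 1 and k = w/c is recovered, which is the consistency the question is asking about.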
{ "domain": "physics.stackexchange", "id": 51162, "tags": "visible-light, electromagnetic-radiation, maxwell-equations" }
Best Car Recognition Algorithm
Question: Could anyone suggest the best algorithm for a real-time Car recognition (say in a parking space)? I am planning to implement the same on FPGA as well. Kindly suggest. Thank you. Answer: Do you mean car detection or recognition? Detection: where are all the cars? Recognition: which car is this? See: Object detection versus object recognition In any case, the short form of the answer is to prototype the heck out of this, likely using OpenCV or libccv, your favorite neural network package, and as much horsepower as possible (several Nvidia GTX 980). Don't worry about realtime-ness or implementation details, just get it working first. The state of the art for object recognition and detection both is probably convolutional neural networks, check out stuff like AlexNet; there are simpler algorithms (eg Histogram of Oriented Gradient or HoG) which could do okay at detection, but not at recognition (there is a decent person detector using HoG, I think you can probably make a decent car detector in the same way). Running this on a FPGA is... unlikely: high-end desktop GPUs are very well suited to the problem, and orders of magnitude faster at it. People are working on neural network ASICs now, should be ready in a couple of years.
{ "domain": "dsp.stackexchange", "id": 2469, "tags": "image-processing, computer-vision" }
NotBF - A Brainfuck-ish like "language"
Question: I've made an interpreted language that's like Brainfuck except it has keywords instead of characters. Here's an explanation of the commands, and how to run it.

add_ostream; - Add the ASCII value of the current cell to the output stream.
chg_pos & [position]; - Change the current position on the stack.
reset_stack; - Reset the stack and all its cells to the defaults.
chg_size & [size]; - Change the size of the stack. This resets all cell values.
reset_pos; - Reset the current position on the stack.
chg_cell & [value]; - Change the value of the current cell.
out_stream; - Output the output stream.
reset_ostream; - Reset the output stream.

To run the program simply type this into your command prompt:

python NotBF.py /path/to/notbffile.txt

"""
NotBF v0.1.0
---------------------------------------
NotBF is an interpreted Brainfuck-like language.
NotBF has many similarities to regular Brainfuck,
except without all the confusing characters.
---------------------------------------
"""

from sys import exit, argv


class NotBFError(object):
    """
    This is the base NotBF error class from which all
    NotBF errors are derived.
    """
    def __init__(self, message, name, code):
        self.message = message
        self.name = name
        self.code = code

    def raise_error(self):
        """
        Raise an error if something goes wrong.
        """
        print "{0}::{1} -> {2}".format(self.code, self.name, self.message)
        exit(0)


class Environment(object):
    """
    This class provides the data and functions for managing
    a NotBF environment during runtime.
    """
    def __init__(self, stack_size, output_stream,):
        self.stack_size = stack_size
        self.output_stream = output_stream
        self.stack = [0 for _ in range(self.stack_size)]
        self.stack_position = 0
        self.max_stack_pos = len(self.stack)-1
        self.min_stack_pos = 0
        self.max_cell_value = 255
        self.min_cell_value = 0

    def add_output_stream(self, character):
        """
        Add a character to the output stream.
""" self.output_stream += character def reset_stack(self): """ Reset the stack to it's default length, self.stack_size, and reset all cells. """ self.stack = [0 for _ in range(self.stack_size)] def change_stack_size(self, new_size): """ Change the size of the stack. WARNING, this operation resets all cell values. """ self.stack = [0 for _ in range(new_size)] def reset_stack_position(self): """ Reset the stack_position to zero. """ self.stack_position = 0 def change_stack_position(self, new_position): """ Move the stack_position to a new position. """ self.stack_position = new_position def change_cell_value(self, new_value): """ Change the value of a cell. """ self.stack[self.stack_position] = new_value def output_output_stream(self): """ Output the output stream. """ print self.output_stream def reset_output_stream(self): """ Reset the output stream. """ self.output_stream = "" class NotBFCommand(object): """ A base command class where tokenized input is inputted into and then run. All NotBF command classes are derived from this base class. """ def __init__(self, tokenized_string): self.tokenized_string = tokenized_string def debug_input(self): print self.tokenized_string """ Initalize various variables and other items to make sure that command classes work the way they should. 
""" runtime_env = Environment(256, "") integer_error = NotBFError("Invalid integer.", "int_error", "e01") no_cell_error = NotBFError("Cell doesn't exist.", "no_cell_error", "e02") bad_value_error = NotBFError("Invalid ASCII code.", "bad_value_error", "e03") command_error = NotBFError("Invalid command.", "cmd_error", "e04") class AddOutputStream(NotBFCommand): def execute(self): character = chr(runtime_env.stack[runtime_env.stack_position]) runtime_env.add_output_stream(character) class ResetStack(NotBFCommand): def execute(self): runtime_env.reset_stack() class ChangeStackSize(NotBFCommand): def execute(self): try: new_size = int(self.tokenized_string[1]) runtime_env.change_stack_size(new_size) except ValueError: integer_error.raise_error() class ResetStackPosition(NotBFCommand): def execute(self): runtime_env.reset_stack_position() class ChangeStackPosition(NotBFCommand): def execute(self): try: new_position = int(self.tokenized_string[1]) try: runtime_env.change_stack_position(new_position) except IndexError: no_cell_error.raise_error() except ValueError: integer_error.raise_error() class ChangeCellValue(NotBFCommand): def execute(self): try: new_value = int(self.tokenized_string[1]) if new_value <= runtime_env.max_cell_value and new_value >= runtime_env.min_cell_value: runtime_env.change_cell_value(new_value) else: bad_value_error.raise_error() except ValueError: integer_error.raise_error() class OutputOutputStream(NotBFCommand): def execute(self): runtime_env.output_output_stream() class ResetOutputStream(NotBFCommand): def execute(self): runtime_env.reset_output_stream() class GetCodeInput(object): """ Get code file input from a path. 
""" def __init__(self, code_file_path): self.code_file_path = code_file_path def return_file(self): """""" with open(self.code_file_path, "r") as code_file: return code_file.read().replace("\n", "").replace(" ", "").replace("\t", "") class Tokenizer(object): """ Tokenize given input into a format that is readable by the interpreter class. Here's the format. tokenized_string = [ [ keyword, arg, ... ], [ ... ], ... ] """ def __init__(self, input_string, line_split=";", arg_split="&"): self.input_string = input_string self.line_split = line_split self.arg_split = arg_split def tokenize(self): """ Tokenize the string input into the correct format. """ tokenized_string = self.input_string.split(self.line_split) tokenized_string = [string.split(self.arg_split) for string in tokenized_string] tokenized_string.remove([""]) print tokenized_string return tokenized_string class Interpreter(object): def __init__(self, tokenized_input): self.tokenized_input = tokenized_input self.COMMAND_KEYS = { "add_ostream": AddOutputStream, "chg_pos": ChangeStackPosition, "reset_stack": ResetStack, "chg_size": ChangeStackSize, "reset_pos": ResetStackPosition, "chg_cell": ChangeCellValue, "out_stream": OutputOutputStream, "reset_ostream": ResetOutputStream, } def execute_input(self): for line in self.tokenized_input: token = line[0] if token in self.COMMAND_KEYS: command_to_execute = self.COMMAND_KEYS[token](line) command_to_execute.execute() else: command_error.raise_error() if __name__ == "__main__": code_input = GetCodeInput(argv[1]).return_file() tokenized_code = Tokenizer(code_input).tokenize() interpreter = Interpreter(tokenized_code).execute_input() Here's some example code and its output: chg_cell & 48; add_ostream; chg_pos & 1; chg_cell & 49; add_ostream; chg_pos & 1; reset_stack; chg_size & 512; chg_cell & 65; add_ostream; chg_pos & 1; chg_cell & 66; add_ostream; chg_pos & 1; out_stream; reset_stack; reset_ostream; Here's the output of this program: [['chg_cell', '48'], 
['add_ostream'], ['chg_pos', '1'], ['chg_cell', '49'], ['add_ostream'], ['chg_pos', '1'], ['reset_stack'], ['chg_size', '512'], ['chg_cell', '65'], ['add_ostream'], ['chg_pos', '1'], ['chg_cell', '66'], ['add_ostream'], ['chg_pos', '1'], ['out_stream'], ['reset_stack'],['reset_ostream']] 01AB Answer: As the classic "Stop Writing Classes" puts it: the signature of "this shouldn't be a class" is that it has two methods, one of which is __init__ Virtually all of your classes fall foul of this; just because you can use OOP, doesn't mean you always should. Looking at the use of the classes in the code, this was a big red flag: code_input = GetCodeInput(argv[1]).return_file() tokenized_code = Tokenizer(code_input).tokenize() interpreter = Interpreter(tokenized_code).execute_input() You're creating the instance and immediately calling a method on it; you don't actually need the instance (or keep it around), you've just split the logic (pretty much arbitrarily) over two methods. The last line assigns interpreter = None, which doesn't even make any sense! Much neater would be simple functions: code_input = get_code_input(argv[1]) tokenized_code = tokenize(code_input) interpret(tokenized_code) Only the Environment is using state in a meaningful way. All of your NotBFCommand subclasses rely on the existence of some global runtime_env; every single one has a single method that just aliases a method on the Environment, so why not just use those methods? A neater implementation would spot that basically all of your code is about performing operations on the stack. You can therefore have a single class, which encapsulates the Environment (the state; stack and output stream), the NotBFCommands (methods) and NotBFErrors (a method plus some data). 
For example: class Interpreter(object): """Interpreter for a BrainFuck-ish language.""" # Cell sizes for the stack CELL_MAX = 255 CELL_MIN = 0 # Parsing rules LINE_SPLIT = ";" ARG_SPLIT = "&" # Errors ERRORS = { "integer_error": ("Invalid integer.", "e01"), "no_cell_error": ("Cell doesn't exist.", "e02"), "bad_value_error": ("Invalid ASCII code.", "e03"), "command_error": ("Invalid command.", "e04"), } def __init__(self, stack_size=256): self.stack_size = stack_size self.reset_output() self.reset_position() self.reset_stack() def add_output_stream(self): """Add the current stack cell to the output.""" self.output_stream += chr(self.stack[self.pos]) def change_cell(self, new_val): """Change the value of the current cell.""" new_val = self.convert_int(new_val) if self.CELL_MIN <= new_val <= self.CELL_MAX: self.stack[self.pos] = new_val else: self.raise_error("bad_value_error") def change_position(self, new_pos): """Change stack pointer position, if new position is valid.""" new_pos = self.convert_int(new_pos) if new_pos not in range(len(self.stack)): self.raise_error("no_cell_error") self.pos = new_pos @classmethod def convert_int(cls, val): """Convert the value to integer or raise an error.""" try: return int(val) except ValueError: cls.raise_error("integer_error") def execute_command(self, command, *data): """Execute the command or raise an error.""" if command not in self.COMMANDS: self.raise_error("command_error") self.COMMANDS[command](self, *data) def print_output(self): """Print the current output stream.""" print self.output_stream @classmethod def raise_error(cls, name): """Report the error and exit.""" msg, code = cls.ERRORS[name] print "{code}::{name} -> {msg}".format(code=code, name=name, msg=msg) exit(0) @staticmethod def remove_whitespace(string): """Remove unwanted whitespace from the string.""" for whitespace in '\t\n ': string = string.replace(whitespace, '') return string def reset_output(self): """Reset the output stream.""" self.output_stream = "" 
    def reset_position(self):
        """Reset the stack pointer position to zero."""
        self.pos = 0

    def reset_stack(self):
        """Reset the stack to empty."""
        self.stack = [self.CELL_MIN for _ in range(self.stack_size)]

    def resize_stack(self, new_size):
        """Resize the stack."""
        self.stack_size = self.convert_int(new_size)
        self.reset_stack()

    def run_commands(self, commands):
        """Execute a series of commands."""
        for command in commands:
            self.execute_command(*command)

    def run_file(self, filename, verbose=False):
        """Execute commands from a code file."""
        with open(filename, "r") as code_file:
            data = self.remove_whitespace(code_file.read())
        self.run_commands(self.tokenize(data, verbose))

    @classmethod
    def tokenize(cls, command_string, verbose=False):
        """Split the command string into tokens."""
        tokens = []
        for line in command_string.split(cls.LINE_SPLIT):
            if line:
                tokens.append(line.split(cls.ARG_SPLIT))
        if verbose:
            print tokens
        return tokens

    COMMANDS = {
        "add_ostream": add_output_stream,
        "chg_pos": change_position,
        "reset_stack": reset_stack,
        "chg_size": resize_stack,
        "reset_pos": reset_position,
        "chg_cell": change_cell,
        "out_stream": print_output,
        "reset_ostream": reset_output,
    }

Note that I've added docstrings and followed the style guide. Running the program is now:

if __name__ == "__main__":
    interpreter = Interpreter()
    interpreter.run_file(argv[1], True)

I have also made the stack resizing permanent - if that's not appropriate, you may need to refactor slightly.
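The core of the refactor above is the single-class, dispatch-table design: state, methods, and a class-level COMMANDS mapping, with no module-level globals. A stripped-down, self-contained toy version of just that pattern (not the full interpreter, and written to run on Python 3 as well):

```python
class MiniInterpreter(object):
    """Toy illustration of the dispatch-table design suggested in the
    answer: state + methods + a class-level COMMANDS mapping."""

    def __init__(self, stack_size=8):
        self.stack = [0] * stack_size
        self.pos = 0
        self.output = ""

    def chg_cell(self, value):
        self.stack[self.pos] = int(value)

    def chg_pos(self, value):
        self.pos = int(value)

    def add_ostream(self):
        self.output += chr(self.stack[self.pos])

    def run(self, source, line_split=";", arg_split="&"):
        """Tokenize and dispatch in one pass; unknown names raise KeyError."""
        for line in source.replace(" ", "").split(line_split):
            if not line:
                continue
            parts = line.split(arg_split)
            self.COMMANDS[parts[0]](self, *parts[1:])
        return self.output

    # Class-level dispatch table, as in the refactored Interpreter.
    COMMANDS = {
        "chg_cell": chg_cell,
        "chg_pos": chg_pos,
        "add_ostream": add_ostream,
    }
```

Because COMMANDS holds plain functions defined in the class body, dispatching as `self.COMMANDS[name](self, *args)` works without any global runtime environment, which is exactly what the review objected to.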
{ "domain": "codereview.stackexchange", "id": 13032, "tags": "python, interpreter, state-machine, language-design" }
Interaction between liquid mercury and magnetic fields
Question: I came across the following experimental setup: According to the answers I’ve been given, when the current flows through the circuit, the wire will jump up and down in the mercury. How does the current through the mercury cause the wire to jump up and down? I’ve searched online but all I can find is mercury beginning to spin when a current is applied, so please enlighten me. Answer: There is nothing special about using mercury. It is a conductor. You could use a copper sheet instead. The apparatus starts with the wire in the mercury. The mercury conducts, so the circuit is complete. When current flows in the electromagnets above the mercury, they are magnetized and attract each other. This pulls the bottom electromagnet up. The wire comes out of the mercury, breaking the circuit. This turns off the electromagnets. The bottom elecromagnet falls. The wire touches the mercury again.
{ "domain": "physics.stackexchange", "id": 66631, "tags": "electromagnetism, magnetic-fields, electric-circuits" }
Partitioning a set of 2d polygons into intersection-connected subsets
Question: My question is given a set of 2d polygons how can I find the connected components of polygons according to a criteria based on intersection or proximity of them. In other words I have a set of polygons. I want to group them into subsets where each polygon has at least one intersection with (one or more) other polygon(s) of the subset and no intersections with any other polygon in a different subset. So if I have S = {A, B, C, D} and A intersects C and C intersects D the resulting partitions are: P1 = {A, C, D} and P2 = {B}. I don't have any problems with an good approximate solution if its fast and doesn't generate false negatives (i.e. no (potential) intersection is lost). I don't need to actually calculate the intersections just determine if they intersect. In my case the polygons are complex but if a known answer only works for concave or simple polygons I would still like to know about it! I am actually not sure how to phrase this question which may account for my lack of findings of relevant papers. Answer: What you want is to find the connected components of the intersection graph of the polygons, right? It would help to be more clear about what counts as an intersection: do the boundaries have to cross or does one polygon entirely inside another count as an intersection? And can the polygons have holes? Regardless, a natural lower bound for running times for the problem is $\Omega(n^{4/3})$ where $n$ is the total number of segments in the input. Any faster and we could also improve the known time bounds for Hopcroft's problem of detecting an intersection between $n$ given points and $n$ given lines: expand each point into a small polygon (not so large that it intersects any line it didn't before) and test whether any of these expanded points belongs to a nontrivial connected component. 
This lower bound is mentioned briefly in my paper "Testing Bipartiteness of Geometric Intersection Graphs" which uses it to argue that bipartiteness of intersection graphs is strictly easier than connectivity; it in turn cites Jeff Erickson's paper "New lower bounds for Hopcroft’s problem", which proves an $\Omega(n^{4/3})$ lower bound in a reasonable but limited model of computation.
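On the practical side, once a pairwise intersection test is chosen, grouping the polygons is a union-find over the intersection graph. The sketch below is illustrative: axis-aligned bounding-box overlap stands in for a real polygon-intersection predicate, which is a conservative test (it can report false positives but never false negatives, matching the question's requirement), and the O(n²) pair loop would be replaced by a spatial index in practice:

```python
def boxes_overlap(a, b):
    """Conservative test on axis-aligned boxes (xmin, ymin, xmax, ymax).
    Never misses a true polygon intersection; may report false positives."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def find(parent, i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def connected_groups(boxes):
    """Partition indices into connected components of the overlap graph."""
    parent = list(range(len(boxes)))
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if boxes_overlap(boxes[i], boxes[j]):
                parent[find(parent, i)] = find(parent, j)
    groups = {}
    for i in range(len(boxes)):
        groups.setdefault(find(parent, i), []).append(i)
    return sorted(sorted(g) for g in groups.values())
```

With the question's example S = {A, B, C, D}, where A intersects C and C intersects D, this yields the partitions {A, C, D} and {B} even though A and D do not intersect each other directly.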
{ "domain": "cstheory.stackexchange", "id": 2460, "tags": "polygon, computational-geometry" }
Quantum field theory: field operators in terms of creation/annihilation operators
Question: I am learning Quantum Field Theory and there is a step in my notes that I do not really understand. It starts with the classical definitions of position $q$ and momentum $p$: $$ q = \frac{1}{\sqrt{2\omega}}(a+a^{\dagger}) $$ and $$ p = -i\sqrt{\frac{\omega}{2}}(a-a^{\dagger}). $$ $a$ and $a^{\dagger}$ being the annihilation and creation operators. Then, it defines the field operators $\phi(\mathbf{x})$ and $\pi(\mathbf{x})$, equivalent to $q$ and $p$, in the following way: $$ \phi(\mathbf{x}) = \int \frac{d^3p}{(2\pi)^3}\frac{1}{\sqrt{2\omega_{\mathbf{p}}}}[a_{\mathbf{p}}e^{i\mathbf{p}\cdot{\mathbf{x}}} + a_{\mathbf{p}}^{\dagger}e^{-i\mathbf{p}\cdot{\mathbf{x}}}]$$ and $$ \pi(\mathbf{x}) = \int \frac{d^3p}{(2\pi)^3}(-i)\sqrt{\frac{\omega_{\mathbf{p}}}{2}}[a_{\mathbf{p}}e^{i\mathbf{p}\cdot{\mathbf{x}}} - a_{\mathbf{p}}^{\dagger}e^{-i\mathbf{p}\cdot{\mathbf{x}}}].$$ Is there an obvious relation between the two expressions? What mathematical and physical assumptions have been made? Answer: First note that the minus sign in front of $\hat{a}^{\dagger}$ in the $\pi(\mathbf{x})$ expression is essential, mirroring the classical expression for $p$. The relation between the classical and the field expressions is clear, since the Lagrangian (Hamiltonian) of the free field $\varphi$ (it is not hard to see that $\varphi$ satisfies the Klein-Gordon equation) may be rewritten as the Lagrangian of a free oscillator in terms of $\hat{\varphi}, \hat{\pi} = \dot{\hat{\varphi}}$, which were introduced in your question. The difference between the "classical" and the field expressions for coordinate and momentum arises because the field at every point can be represented as a set of oscillators.
This follows from the Hamiltonian in terms of $\hat{\varphi}, \hat{\pi}$:
$$ L =\frac{1}{2}\left( (\partial_{\mu}\varphi )^{2} - m^{2}\varphi^{2}\right) \Rightarrow \hat{H} = \int T^{00}d^{3}\mathbf r = \int \left(\frac{\partial L}{\partial (\partial_{0}\varphi)}\partial_{0}\varphi - L \right)d^{3}\mathbf r = $$
$$ = \frac{1}{2}\int d^{3}\mathbf r \left( m^{2}\varphi^{2} + \pi^{2} + (\nabla \varphi )^{2}\right). $$
By introducing the canonical relations
$$ [\hat{a}(\mathbf p), \hat{a}^{\dagger}(\mathbf p')] = \delta (\mathbf p - \mathbf p '), \quad [\hat{a}(\mathbf p), \hat{a}(\mathbf p')] = [\hat{a}^{\dagger}(\mathbf p), \hat{a}^{\dagger}(\mathbf p')] = 0 $$
you get a full correspondence between the "classical" operators $\hat{x}, \hat{p}$ and the field operators $\hat{\varphi}, \hat{\pi}$ (up to the delta-function $\delta (\mathbf x - \mathbf x ')$):
$$ [\hat{x}_{i}, \hat{p}_{j}] = i\delta_{ij}, \quad [\hat{\varphi} (\mathbf x , t), \hat{\pi}(\mathbf y, t)] = i\delta (\mathbf x - \mathbf y). $$
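The single-mode version of this correspondence can be checked numerically in a truncated Fock space. The sketch below builds the lowering operator, forms q and p exactly as in the question, and verifies [q, p] = i; ω is arbitrary (it cancels in the commutator), and the truncation spoils the identity only in the last basis state:

```python
import numpy as np

def lowering_operator(dim):
    """Truncated harmonic-oscillator annihilation operator: a|n> = sqrt(n)|n-1>."""
    a = np.zeros((dim, dim))
    for n in range(1, dim):
        a[n - 1, n] = np.sqrt(n)
    return a

def q_and_p(dim, omega=1.0):
    """q = (a + a^dag)/sqrt(2w),  p = -i sqrt(w/2) (a - a^dag)."""
    a = lowering_operator(dim)
    adag = a.T.conj()
    q = (a + adag) / np.sqrt(2 * omega)
    p = -1j * np.sqrt(omega / 2) * (a - adag)
    return q, p
```

Since [q, p] = i[a, a†] independently of ω, the check works for any ω; the last diagonal entry of the truncated [a, a†] is 1 − dim instead of 1, a well-known truncation artifact.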
{ "domain": "physics.stackexchange", "id": 15939, "tags": "quantum-mechanics, quantum-field-theory, harmonic-oscillator, fourier-transform, klein-gordon-equation" }
Force on a wall due to a static fluid
Question: Suppose a tank has a wall of width $8m$ and is filled with water to a depth of $2m$ and we want to find the force applied by the water to the wall. Do we need Calculus to find this? It seems to me we do, although this is related to a problem from a book where calculus is generally not necessary. I would think the pressure varies with depth so that we need to integrate the force per area (or just pressure) at every depth point. [Edit: In case anyone's curious, here is the exact statement of the problem. A large aquarium of height 5.00 m is filled with fresh water to a depth of 2.00 m. One wall of the aquarium consists of thick plastic 8.00 m wide. By how much does the total force on that wall increase if the aquarium is next filled to a depth of 4.00 m? I'm guessing there's a trick to it so that I don't actually have to find the forces and subtract them, but rather can do something else. I'm still thinking about what the other thing is that I can do and am not really asking about that--but it just got me to wondering, if I did want to directly find the respective forces, wouldn't I need to use integration?] Answer: Since the pressure is varying linearly with depth, all you need to do is use the average pressure (half way down) to calculate the force. This gives you the same result as doing the integration. But, be careful when they are asking for the change as a result of increasing the depth. In that case, it's easier to calculate the initial and the final forces, based on the different depths.
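For the book problem quoted above, the average-pressure shortcut gives the numbers directly (fresh water, g taken as 9.8 m/s²):

```python
RHO = 1000.0  # density of fresh water, kg/m^3
G = 9.8       # gravitational acceleration, m/s^2

def wall_force(depth, width):
    """Force on a vertical wall from hydrostatic pressure.
    The average pressure rho*g*h/2 acts over the wetted area h*w,
    so F = rho*g*h^2*w/2 (the same result integration would give)."""
    return RHO * G * depth**2 * width / 2.0

f2 = wall_force(2.0, 8.0)   # filled to 2 m
f4 = wall_force(4.0, 8.0)   # filled to 4 m
increase = f4 - f2
```

Since F scales as h², doubling the depth quadruples the force, so the increase is exactly three times the original force, which is the "trick" that avoids computing the two forces independently.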
{ "domain": "physics.stackexchange", "id": 27391, "tags": "homework-and-exercises, fluid-statics" }
Mean Absolute Error in Random Forest Regression
Question: I am new to the whole ML scene and am trying to resolve the Allstate Kaggle challenge to get a better feeling for the Random Forest Regression technique. The challenge is evaluated based on the MAE for each row. I've run the sklearn RandomForestRegressor on my validation set, using the criterion=mae attribute. To my understanding this will run the forest algorithm calculating the MAE instead of the MSE for each node. After that I've used this: metrics.mean_absolute_error(Y_valid, m.predict(X_valid)) in order to calculate the MAE for each row of data. What I would like to know is if the logic I'm following is sound. Am I making a fundamental mistake or missing something here? Should I have used the default MSE-based regressor and then calculated the MAE of each row using the mean_absolute_error function?

Answer: Let me clarify a few fundamental things:

In sklearn, the RandomForestRegressor criterion is: The function to measure the quality of a split

It's a performance measure (by default, MSE) which helps the algorithm decide on a rule for an optimum split on a node in a tree.

Kaggle is giving you a metric, i.e. MAE (again a performance/quality measure), but to evaluate the performance of your ML model, once finalized.

To come back to your question: while both MAE and MSE are performance measures, they are used at two different stages of the modeling process and might not be related. So, while it makes sense to evaluate your final model on MAE as you would be judged on it, you can choose either MAE or MSE for the criterion (i.e. for the random forest), depending on performance at the validation stage.

That being said, keep in mind that you might want to evaluate the validation errors (i.e. for finalizing a model) on the same metric (i.e. MAE in this case), to keep the error measure consistent with the test set evaluation.
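The difference between the two roles can be made concrete with a toy example (pure Python, not sklearn internals): the split criterion shapes what a leaf predicts during training. An MSE criterion leads to mean-like leaf values, an MAE criterion to median-like ones, which is why training with criterion=mae can genuinely matter when you are judged on MAE:

```python
def mae(y_true, y_pred):
    """Mean absolute error, the quantity sklearn.metrics.mean_absolute_error computes."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean squared error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def best_constant(y, criterion):
    """Constant prediction for a leaf: the mean minimizes MSE,
    a median minimizes MAE (upper median used here for simplicity)."""
    if criterion == "mse":
        return sum(y) / len(y)
    ys = sorted(y)
    return ys[len(ys) // 2]
```

On a skewed sample like [1, 2, 100], the MAE-optimal leaf value (the median, 2) and the MSE-optimal one (the mean, about 34.3) differ a lot, and each is better under its own metric.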
{ "domain": "datascience.stackexchange", "id": 4363, "tags": "machine-learning, regression, random-forest, linear-regression" }
Organizing a csv file of multiple datasets into a list of Pandas dataframes
Question: I have a csv file, containing results from a Computational Fluid Dynamics (CFD) simulation (a sample of my csv file is attached as a google drive link; file size: 1,392KB). In particular, the csv file has information about multiple streamlines (the number of streamlines may reach 1000 depending on the case). All the data for all the streamlines are saved back to back in the csv file (so there is no empty row or anything else to mark the end of one streamline and the start of the next one). The only way I can distinguish streamlines from each other is that when the value in the "IntegrationTime" column is zero, it indicates the start of a new streamline, until we hit another zero in the "IntegrationTime" column, which is the start of the next streamline. I need to read this csv file, and organize its data into a list of Pandas dataframes, like:

streamlineList = [df_for_streamline_1, df_for_streamline_2, ...., df_for_streamline_N]

Note (extra question here): This is not crucial but would be nice to have: if you look at the end of my csv file, you see multiple rows where IntegrationTime is zero (100 rows to be exact). Preferably, I don't want these lines to be included in my final list of data frames.

Can somebody suggest a way to do this?

https://drive.google.com/file/d/1lJhOJadGrno1C-KZOUqxV-KNkaOHIJNk/view?usp=sharing

Answer: It is possible to solve this problem procedurally by thinking line-by-line or streamline-by-streamline. For each streamline, identified by a row where IntegrationTime == 0.0, extract the slice from the data frame, and append the slice to an output list (if it has more than 1 data point).
A code like the following should address this problem:

import pandas as pd

# read the dataset
df = pd.read_csv("vel.csv")

# get row indexes where IntegrationTime is zero (start of each streamline)
start_index_list = df.loc[df['IntegrationTime']==0.0].index.values

stream_line_list = []  # the output list

for i in range(len(start_index_list)):
    # for each streamline, obtain the slice of the original dataframe that corresponds to it
    start_index = start_index_list[i]
    end_index = None
    if i+1 < len(start_index_list):
        end_index = start_index_list[i+1]-1
    stream_line_df = df.loc[start_index:end_index]
    # only append streamlines with more than 1 data point
    if len(stream_line_df) > 1:
        stream_line_list.append(stream_line_df)

print(f"number of complete streamlines found: {len(stream_line_list)}")
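The splitting logic can also be expressed independently of pandas, which makes the boundary handling easy to check: each zero in IntegrationTime starts a new streamline, and single-row streamlines (such as the trailing all-zero rows mentioned in the note) are dropped. A plain-Python sketch over just the time column, returning row indices:

```python
def split_streamlines(times):
    """Split a sequence of IntegrationTime values into streamlines.
    A value of 0.0 starts a new streamline; streamlines with fewer
    than two samples (e.g. trailing all-zero rows) are discarded.
    Returns lists of row indices into the original sequence."""
    streamlines = []
    current = []
    for i, t in enumerate(times):
        if t == 0.0 and current:
            if len(current) > 1:
                streamlines.append(current)
            current = [i]
        else:
            current.append(i)
    if len(current) > 1:
        streamlines.append(current)
    return streamlines
```

The index lists produced this way could then be turned into dataframes with df.iloc[idx] if desired; this version is mainly useful for verifying the boundary behaviour on small inputs.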
{ "domain": "datascience.stackexchange", "id": 9317, "tags": "python, pandas" }
Does a magnetic field arise from a moving charge or from its spin, or both?
Question: I learned that a moving charge creates a magnetic field perpendicular to its direction of motion. I also learned that charged particles like electrons have spin and they also create a magnetic field because of their magnetic dipole moment. I don't understand what magnetic dipole moment is, and I can't find a decent explanation for it by using Google. First, Can someone please explain magnetic dipole moment arising from spin in an effective way? Second, is my main question; Does a magnetic field arise from a moving charge or from its spin, or both? Third, I am very confused. I think I am mixing together the classical electrodynamics description of magnetic field, (moving charge), with quantum mechanics description of magnetic field, (magnetic dipole moment due to spin?) Answer: The way you've asked your question (especially part 3), it sounds like you're trying to understand how magnetic dipole moments and moving charges are related to each other. But they're not. Moving charges and magnetic dipole moments don't describe a magnetic field, they produce a magnetic field. Off the top of my head, I can actually think of three ways that magnetic fields are produced: Moving electric charge: any time you have an electrically charged object in motion, it will produce a magnetic field. Electric currents fall into this category. This is usually the first way that physics students learn to generate a magnetic field; it's described by Ampere's law (one of Maxwell's equations). Changing electric field: any time an electric field changes in time, it will produce a magnetic field, even if there is no current around. This is usually the second way that physics students learn that a magnetic field can be generated. It's also described by Ampere's law (technically the "Maxwell correction" term). Static magnetic multipoles: this one is a little more complicated because it's not described by any of Maxwell's equations, at least not directly. Let me start with an analogy. 
Hopefully you know that a charged object produces an electric field. But you don't have to have a net charge to produce a field. If you take a positive charge and a negative charge of equal magnitude and put them very close to each other, you'll still get an electric field, because the field from the positive charge and the field from the negative charge don't exactly cancel each other out. This is an example of an electric dipole. You can think of this as a "secondary source" of the field, which depends not on the total amount of charge, but on how the charge is distributed within the object. Normally, when the total amount of charge is nonzero, the distribution of the charge has a small enough effect that we don't have to care about it, but when there is no net charge, the way the charge is distributed becomes important. Obviously, in order to do physics we need to have a physical quantity that describes the distribution. This is the electric dipole moment. In fact, we can measure the electric dipole moment of an object and use it to do useful calculations even if we don't know anything about the charge distribution - or even if there may not be a charge distribution at all. In other words, one could imagine that there might be some unknown physical mechanism, completely separate from electric charge, that causes some object to have an electric dipole moment. So it makes sense to define an "electric dipole" as "something that has a nonzero electric dipole moment," whether or not that thing has a charge distribution. The same thing applies to magnetic dipoles and the magnetic dipole moment. It works just like the electric dipole moment, except with the magnetic field and "magnetic charge" instead of electric field and electric charge. The thing is, as far as we know, there are no magnetically charged objects (the so-called "magnetic monopoles"). So the magnetic dipole moment never gets masked by magnetic charge, the way the electric dipole moment usually does. 
As with the electric dipole, a magnetic dipole of any sort will generate a magnetic field. One kind of magnetic dipole is a small loop of current. If the current is made of physical charges moving around in a circle, then it will have some angular momentum. So once it was discovered that the electron has intrinsic angular momentum (spin), physicists naturally wondered whether that angular momentum was due to constituent particles moving in circles inside the electron. One way to test this theory would be to measure the magnetic dipole moment of the electron and calculate whether it corresponds to the prediction of the current-loop model. As it turns out, it doesn't. So evidently something else is going on; the magnetic dipole moment of the electron is not just produced by classical charges moving in a circle. It's something intrinsic to the electron. (Quantum electrodynamics correctly predicts the exact value of the electron's magnetic dipole moment, but it doesn't offer a simple physical picture.)
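A small numerical addendum (mine, not part of the original answer): the field of a point magnetic dipole is given by the standard formula $\mathbf{B} = \frac{\mu_0}{4\pi}\,\frac{3(\mathbf{m}\cdot\hat{r})\hat{r} - \mathbf{m}}{r^3}$, and a few lines of Python are enough to evaluate it; the moment and distance values below are illustrative only.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability in T*m/A

def dipole_field(m, r):
    """Field of a point magnetic dipole with moment m (A*m^2) at
    displacement r (m): B = mu0/(4*pi) * (3*(m.rhat)*rhat - m) / |r|^3."""
    rnorm = math.sqrt(sum(c * c for c in r))
    rhat = [c / rnorm for c in r]
    mdotr = sum(mc * rc for mc, rc in zip(m, rhat))
    pref = MU0 / (4 * math.pi) / rnorm**3
    return [pref * (3 * mdotr * rh - mc) for rh, mc in zip(rhat, m)]

# Electron spin moment is about one Bohr magneton (illustrative value)
MU_BOHR = 9.274e-24  # A*m^2
B = dipole_field([0.0, 0.0, MU_BOHR], [0.0, 0.0, 1e-9])  # 1 nm on-axis
print(B)
```

On the dipole axis the field is exactly twice as strong as at the same distance in the equatorial plane (where it points anti-parallel to the moment), which is easy to verify with this sketch.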
{ "domain": "physics.stackexchange", "id": 19298, "tags": "quantum-mechanics, electromagnetism, quantum-spin" }
Isothermal bulk modulus of an ideal gas
Question: To calculate isothermal bulk modulus, I have two methods. Method 1: $P_1V_1=P_2V_2=K$ $B=-\dfrac{(P_2-P_1)V_1}{V_2-V_1}$ $B=-\dfrac{(\dfrac{K}{V_2}-\dfrac{K}{V_1})V_1}{V_2-V_1}$ $B=-\dfrac{\dfrac{K(V_1-V_2)}{V_2V_1}V_1}{V_2-V_1}$ $B=\dfrac{KV_1}{V_2V_1}$ $B=\dfrac{K}{V_2}$ $B=P_2$ Method 2: I love calculus $PV=K$ On differentiating, $V\,dP+P\,dV=0$ $\dfrac{dP}{dV}=-\dfrac{P}{V}$ .......(1) $B=-\dfrac{dP}{dV}V$ Using equation (1) $B=-(-\dfrac{P}{V})V$ $B=P$ Using method (1) I got $P_2$ as the answer whereas in method (2) I got $P_1$ as the answer. I know $P_1\neq P_2$. Which one is correct then? Answer: Isothermal bulk modulus is defined as volume times the negative partial derivative of pressure with respect to volume at constant temperature: $K = -V \dfrac{\partial P}{\partial V}$ An ideal gas satisfies the following equation of state: $PV = nRT$ So the pressure for an ideal gas is given by: $P = \dfrac{nRT}{V} $ The partial derivative of pressure w.r.t. volume at constant temperature is: $\dfrac{\partial P}{\partial V} = \dfrac{\partial}{\partial V}\left(\dfrac{nRT}{V}\right)$ $= nRT \dfrac{d\dfrac{1}{V}}{dV} $ $= -\dfrac{nRT}{V^2} $ $= -\dfrac{\dfrac{nRT}{V}}{V}$ $= -\dfrac{P}{V} $ Hence, $K = P$
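A quick numerical sanity check of Method 2 (my addition; the values of $nRT$ and $V$ are arbitrary): a central finite difference for $\mathrm{d}P/\mathrm{d}V$ on the ideal-gas isotherm confirms $B = -V\,\dfrac{\mathrm{d}P}{\mathrm{d}V} = P$ at the chosen state.

```python
nRT = 100.0  # arbitrary fixed value; isothermal, so nRT is constant

def pressure(V):
    return nRT / V  # ideal-gas isotherm P = nRT / V

V = 2.0
dV = 1e-6
# central finite difference for dP/dV at volume V
dPdV = (pressure(V + dV) - pressure(V - dV)) / (2 * dV)
K = -V * dPdV
print(K, pressure(V))  # bulk modulus equals the pressure at that state
```

Note that the differential definition yields the pressure at a single state; Method 1 evaluates a finite difference between two states, and its result $P_2$ approaches $P_1$ as $V_2 \to V_1$.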
{ "domain": "physics.stackexchange", "id": 63935, "tags": "pressure, ideal-gas" }
Convert decimal to base K
Question: Problem statement: Write a program that takes two command-line arguments number and k. The program should convert number to base k. Assume the base is between 2 and 16 inclusive. For bases greater than 10, use the letters A through F to represent the digits 10 through 15, respectively. This is one of my self-imposed challenges in Rust to become better at it. The problem was taken from Sedgewick Exercise 1.3.21. Here is my code: use std::env; fn main() { let arguments: Vec<String> = env::args().collect(); let mut number: u32 = arguments[1].parse().unwrap(); let k: u32 = arguments[2].parse().unwrap(); let number_original: u32 = number; let mut largest_power: u32 = 1; let mut kary: String = String::new(); while largest_power <= number/k { largest_power *= k; } while largest_power > 0 { if largest_power > number { kary.push_str(&format!("{}", "0")); } else { let dividend: u32 = number/largest_power; if number/largest_power < 10 { kary.push_str(&format!("{}", dividend.to_string())); number -= dividend*largest_power; } else if number/largest_power == 10 {kary.push_str(&format!("{}", "A")); number -= dividend*largest_power;} else if number/largest_power == 11 {kary.push_str(&format!("{}", "B")); number -= dividend*largest_power;} else if number/largest_power == 12 {kary.push_str(&format!("{}", "C")); number -= dividend*largest_power;} else if number/largest_power == 13 {kary.push_str(&format!("{}", "D")); number -= dividend*largest_power;} else if number/largest_power == 14 {kary.push_str(&format!("{}", "E")); number -= dividend*largest_power;} else if number/largest_power == 15 {kary.push_str(&format!("{}", "F")); number -= dividend*largest_power;} } largest_power /= k; } match k { 2..=16 => println!("The number {} at base {} is represented with {}.", number_original, k, kary), _ => panic!("Base must be between 2 and 16."), } } Is there any way that I can improve my code? Answer: Don't unwrap Panicking is not a good way to handle errors. 
Rather provide an error message in case of an Err and exit the program gracefully. Use libraries If you'd use e.g. clap you can let it parse the CLI arguments for you as the desired data type (u32 in your case) and gracefully handle the respective errors. There's also radix_fmt available to do the conversion to any base. Use expressive variable names The name kary is misleading. It sounds like a person's name. If you're referring to k-ary numbers, maybe rename it to k_ary. Though a less technical term such as digits should also suffice and might be more widely understood. Use rustfmt or cargo fmt ... to format your code with a standardized style. Validate your input. Running the program e.g. with a base 1 results in an infinite loop. Invalid input such as base 1 should be caught early. Similarly base-0 inputs panic the program. Don't panic!() If you encounter an error use e.g. eprintln!() to print an error message and use std::process::exit() to exit the program with a possibly nonzero return code. Suggested: src/main.rs use clap::Parser; use radix_fmt::radix; use std::ops::RangeInclusive; use std::process::exit; const VALID_BASES: RangeInclusive<u8> = 2..=36; #[derive(Debug, Parser)] struct Args { #[arg(index = 1)] number: u32, #[arg(index = 2)] base: u8, } fn main() { let args = Args::parse(); if !VALID_BASES.contains(&args.base) { eprintln!( "Base must be in {}..={}", VALID_BASES.start(), VALID_BASES.end() ); exit(1); } println!( "Number {} in base {} is: {}", args.number, args.base, radix(args.number, args.base) ); } Cargo.toml [package] name = "bases" version = "0.1.0" edition = "2021" # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html [dependencies] clap = { version = "4.3.11", features = ["derive"] } radix_fmt = "1.0.0"
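Whatever library you use, the underlying algorithm is just repeated division with remainder. For reference, a dependency-free sketch of that loop (in Python here, purely to illustrate the logic; the answer's Rust code delegates this to radix_fmt):

```python
DIGITS = "0123456789ABCDEF"

def to_base(n, k):
    """Convert a non-negative integer n to base k (2 <= k <= 16)
    by repeated division, collecting the least-significant digit first."""
    if not 2 <= k <= 16:
        raise ValueError("base must be between 2 and 16")
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, k)   # quotient and remainder in one step
        out.append(DIGITS[r])
    return "".join(reversed(out))

print(to_base(255, 16))  # FF
print(to_base(255, 2))   # 11111111
```

The same loop ports directly to Rust, with `divmod` replaced by integer `%` and `/`.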
{ "domain": "codereview.stackexchange", "id": 44891, "tags": "beginner, rust" }
Validating list of values to be sent to API
Question: I'm currently writing a client library to use with a courier's SOAP API to create and manage shipments. I'm using the service classes generated from the WSDL provided by the courier; their implementation is on the verbose side (multiple objects nested inside each other to specify a country code for an address as an example), so I'm trying to write a client library I can reuse that uses a cleaner set of classes. I'm trying to have the library perform some basic validation on data so it can catch errors before it gets sent to the API. When creating a shipment there are several options that need to be supplied to specify the service type/format and other enhancements, the API requires a string code (1 to 3 characters) and they supply a list of codes and what they each mean (for example, P = Package, L = Letter for shipment format). To implement these options in my library I decided to use the type-safe-enum pattern to provide any consumers of my library an easy way of specifying values, and allowing my library a way of checking the options without having to use magic values when checking for validation, for example the Email Notification enhancement would require an email be provided for the API request to be accepted by the courier's system, so instead of checking the enhancements list for "14" I would check for ServiceEnhancement.EmailUpdates making the code easier to understand. A consumer might also want to get one of these options by code and I want my library to throw an exception if an invalid code is given. This all seems to work but I now have 6 classes all with almost exactly the same code (only the class name and the list of enum variables change) which says to me I should be using inheritance, but I'm fairly sure I can't, due to the fact the static GetByCode function's return type needs to match the class it's in. I did look into using generics but I'm not sure they are right for this use case. 
public sealed class ServiceEnhancement { // This dictionary must come before the public static fields, otherwise it won't be declared in time. // See: https://stackoverflow.com/a/424414/2920332 private static readonly Dictionary<string, ServiceEnhancement> Instance = new Dictionary<string, ServiceEnhancement>(); public static ServiceEnhancement Loss1000 = new ServiceEnhancement("1", "Consequential Loss £1000"); public static ServiceEnhancement Loss2500 = new ServiceEnhancement("2", "Consequential Loss £2500"); public static ServiceEnhancement Loss5000 = new ServiceEnhancement("3", "Consequential Loss £5000"); public static ServiceEnhancement Loss7500 = new ServiceEnhancement("4", "Consequential Loss £7500"); public static ServiceEnhancement Loss10000 = new ServiceEnhancement("5", "Consequential Loss £10000"); public static ServiceEnhancement Recorded = new ServiceEnhancement("6", "Recorded"); public static ServiceEnhancement Loss750 = new ServiceEnhancement("11", "Consequential Loss £750"); public static ServiceEnhancement TrackedSignature = new ServiceEnhancement("12", "Tracked Signature"); public static ServiceEnhancement SmsUpdates = new ServiceEnhancement("13", "SMS Notification"); public static ServiceEnhancement EmailUpdates = new ServiceEnhancement("14", "E-Mail Notification"); public static ServiceEnhancement SmsAndEmailUpdates = new ServiceEnhancement("16", "SMS + E-Mail Notification"); public static ServiceEnhancement LocalCollect = new ServiceEnhancement("22", "Local Collect"); public static ServiceEnhancement SaturdayGuaranteed = new ServiceEnhancement("24", "Saturday Guaranteed"); private readonly string _code; private readonly string _name; private ServiceEnhancement(string code, string name) { _code = code; _name = name; Instance.Add(code, this); } public static ServiceEnhancement GetByCode(string code) { if (!Instance.ContainsKey(code)) { throw new ArgumentException($"Code '{code}' not a valid ServiceEnhancement.", "code"); } return Instance[code]; 
} public string GetName() { return _name; } public override string ToString() { return _code; } } I'm thinking I might be better off dropping the whole enum aspect and just going with a List/Dictionary to handle the validation and just comment the usage of magic values inside my library, for example: private bool NeedsMobileNumber => ServiceEnhancements.Any(se => se.Equals(ServiceEnhancement.SmsUpdates) || se.Equals(ServiceEnhancement.SmsAndEmailUpdates)); private bool NeedsEmailAddress => ServiceEnhancements.Any(se => se.Equals(ServiceEnhancement.EmailUpdates) || se.Equals(ServiceEnhancement.SmsAndEmailUpdates)); Would become: // Check for SMS Notification Enhancement (13) and SMS + E-Mail Notification Enhancement (16) private bool NeedsMobileNumber => ServiceEnhancements.Any(se => se.Equals("13") || se.Equals("16")); // Check for Email Notification Enhancement (14) and SMS + E-Mail Notification Enhancement (16) private bool NeedsEmailAddress => ServiceEnhancements.Any(se => se.Equals("14") || se.Equals("16")); Answer: I'm thinking I might be better off dropping the whole enum aspect No, no, no, don't you dare! I guess you are the first person on Code Review asking for advice about how to better worsen the code ;-) private bool NeedsMobileNumber => ServiceEnhancements.Any(se => se.Equals(ServiceEnhancement.SmsUpdates) || se.Equals(ServiceEnhancement.SmsAndEmailUpdates)); This might look like overkill now but you'll be glad having this in a few weeks. You never want to maintain comments. It's easier to maintain the code. If you format it properly then it's pretty easy to read. This means that in a case like this with lengthy code I'd rather use full-property syntax, add a few line-breaks and indents and here it is. Doesn't it look much better? It documents itself, leave it as it is. Work with formatting now. 
private bool NeedsMobileNumber { get { return ServiceEnhancements.Any(se => se.Equals(ServiceEnhancement.SmsUpdates) || se.Equals(ServiceEnhancement.SmsAndEmailUpdates) ); } } A couple of other things... public static ServiceEnhancement Loss1000 = These should be readonly. You don't want to overwrite them by accident. // This dictionary must come before the public static fields, otherwise it won't be declared in time. Or you can use the static constructor to initialize all fields too. public string GetName() { return _name; } Why isn't this a property? public static ServiceEnhancement GetByCode(string code) { if (!Instance.ContainsKey(code)) { throw new ArgumentException($"Code '{code}' not a valid ServiceEnhancement.", "code"); } return Instance[code]; } You can rewrite it with the new dictionary out variable which I find is easier to follow: public static ServiceEnhancement GetByCode(string code) { if (Instance.TryGetValue(code, out var serviceEnhancement)) { return serviceEnhancement; } throw new ArgumentException($"Code '{code}' not a valid ServiceEnhancement.", "code"); }
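The register-and-look-up pattern behind GetByCode is language-agnostic; here is a minimal sketch of the same idea in Python (my illustration, mirroring only two of the question's codes), where an unknown code raises instead of silently returning nothing:

```python
class ServiceEnhancement:
    # shared registry: each instance registers itself under its code
    _by_code = {}

    def __init__(self, code, name):
        self.code, self.name = code, name
        ServiceEnhancement._by_code[code] = self

    @classmethod
    def get_by_code(cls, code):
        # TryGetValue-style lookup: fail loudly on an invalid code
        try:
            return cls._by_code[code]
        except KeyError:
            raise ValueError(f"Code {code!r} is not a valid ServiceEnhancement")

SMS_UPDATES = ServiceEnhancement("13", "SMS Notification")
EMAIL_UPDATES = ServiceEnhancement("14", "E-Mail Notification")

print(ServiceEnhancement.get_by_code("13").name)  # SMS Notification
```

Since the instances are interned in the registry, identity comparison (`is`) works just like reference equality on the C# static fields.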
{ "domain": "codereview.stackexchange", "id": 28170, "tags": "c#, validation, api, soap" }
What does "stationary" mean in the context of reinforcement learning?
Question: I think I've seen the expressions "stationary data", "stationary dynamics" and "stationary policy", among others, in the context of reinforcement learning. What does it mean? I think stationary policy means that the policy does not depend on time, and only on state. But isn't that a unnecessary distinction? If the policy depends on time and not only on the state, then strictly speaking time should also be part of the state. Answer: A stationary policy is a policy that does not change. Although strictly that is a time-dependent issue, that is not what the distinction refers to in reinforcement learning. It generally means that the policy is not being updated by a learning algorithm. If you are working with a stationary policy in reinforcement learning (RL), typically that is because you are trying to learn its value function. Many RL techniques - including Monte Carlo, Temporal Difference, Dynamic Programming - can be used to evaluate a given policy, as well as used to search for a better or optimal policy. Stationary dynamics refers to the environment, and is an assumption that the rules of the environment do not change over time. The rules of the environment are often represented as an MDP model, which consists of all the state transition probabilities and reward distributions. Reinforcement learning algorithms that work online can usually cope and adjust policies to match non-stationary environments, provided the changes do not happen too often, or enough learning/exploring time is allowed between more radical changes. Most RL algorithms have at least some online component, it is also important to keep exploring non-optimal actions in environments with this trait (in order to spot when they may become optimal). Stationary data is not a RL-specific term, but also relates to the need for an online algorithm, or at least plans for discarding older data and re-training existing models over time. 
You might have non-stationary data in any ML, including supervised learning - prediction problems that work with data about people and their behaviour often have this issue as population norms change over timescales of months and years.
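To make the terms concrete, here is a toy sketch (entirely illustrative, not from the original answer): a two-state MDP with stationary dynamics and a fixed, stationary policy, evaluated by Monte Carlo rollouts. Because the toy environment is deterministic, the estimate equals the true value exactly.

```python
# Toy 2-state MDP. States: 0, 1; state 1 is terminal. The dynamics are
# stationary: the same transition rule applies at every time step.
policy = {0: "go"}  # stationary policy: depends only on the state

def step(state, action):
    if state == 0 and action == "go":
        return 1, 1.0          # next state, reward
    return state, 0.0

def mc_value(state, episodes=1000, gamma=0.9):
    """Monte Carlo estimate of V(state) under the fixed policy."""
    total = 0.0
    for _ in range(episodes):
        s, g, discount = state, 0.0, 1.0
        while s != 1:                      # roll out until terminal
            s, r = step(s, policy[s])
            g += discount * r
            discount *= gamma
        total += g
    return total / episodes

print(mc_value(0))  # deterministic toy MDP, so V(0) = 1.0 exactly
```

If the policy were being updated between episodes, or if step changed its rule over time, the "stationary" label would no longer apply.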
{ "domain": "ai.stackexchange", "id": 1877, "tags": "reinforcement-learning, terminology, policies, stationary-policy" }
In instructions pipelining, why does register read/write take up only half clock cycle?
Question: While studying instruction pipelining in a MIPS processor, we make an assumption that register read/write stages take only a half clock cycle, as this picture shows (half clock cycles are dotted in register operations): So what is the purpose of this assumption? And why is the fourth instruction in this graph considered a hazard if we actually wrote the value of $2 in the first half clock cycle and then read it in the latter one? Answer: By using different phases of a clock cycle for register write and register read, the number of register file ports can be reduced. (This can reduce area and latency.) A two-read and one-write instruction would otherwise require three register file ports (or introduce a structural hazard); by using different phases of the cycle one port can be used for both read and write. This can also facilitate reading from the register file in the same cycle as the value is written, which reduces the number of results that must be managed through the forwarding network. The fourth instruction (add $14, $2, $2) does not have a data hazard even without forwarding.
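The timing argument can be captured in a toy model (my sketch, assuming the classic 5-stage pipeline with reads in ID, writes in WB, and no forwarding): with the split-cycle convention, a consumer issued three slots after the producer reads in the very cycle the value is written and needs no stall.

```python
# Toy model of RAW-hazard distance in a classic 5-stage pipeline.
# Producer issued at cycle 0: IF=0, ID=1, EX=2, MEM=3, WB=4.
# A consumer reads registers in ID; the producer writes them in WB.

def needs_stall(dist, split_phase):
    """dist: how many slots after the producer the consumer issues.
    split_phase: True if the register file writes in the first half of
    the cycle and reads in the second half (no forwarding assumed)."""
    write_cycle = 4            # producer's WB stage
    read_cycle = dist + 1      # consumer's ID stage
    if split_phase:
        # a same-cycle read already sees the freshly written value
        return read_cycle < write_cycle
    # without the split, a same-cycle read still misses the new value
    return read_cycle <= write_cycle

# An instruction 3 slots later (like the fourth instruction in the
# figure) reads in the same cycle the value is written:
print(needs_stall(3, split_phase=True))   # False: no hazard
print(needs_stall(3, split_phase=False))  # True: would need a stall
```

This toy model only asks whether a given issue distance is safe; it does not simulate the stalls themselves.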
{ "domain": "cs.stackexchange", "id": 13972, "tags": "cpu-pipelines, mips" }
Clarity of encryption class
Question: I have recently gotten back into development and I am wondering if the script I have just created is clearly documented and easily understandable throughout each step. Is it easy to understand? class Encryption { public function Encrypt ($String){ /* * Create an Encrypted String. * @Var string * */ $Size = mcrypt_get_iv_size(MCRYPT_CAST_256,MCRYPT_MODE_CBC); $iv = mcrypt_create_iv($Size,MCRYPT_RAND); $Hash = strlen($String); return array( "EncryptedString" => openssl_encrypt($String, "AES-256-CBC",$Hash,0,$iv), "IV" => $iv, "Hash" => $Hash ); } public function Merge($Array){ /* * Merge The Array Returned From the String Encryption into a string * * Information: * 1) Convert Each Element from the array into their personal variables * 2) Create an Array from the first element. Will be used at the later stages * 3) Get the length of the current IV passed * 4) Create an empty string to be manipulated using the attaching loop * */ # $Array = $this->Encrypt($String); $Encrypted_String = $Array['EncryptedString']; $IV = $Array['IV']; $Hash = $Array['Hash']; $Hash_Count = count(str_split($Hash)); $EncStr_Arr = str_split($Encrypted_String); $Count = strlen($IV); $Increment = 0; $String = ""; // NEW: Appending the Hash Count, to be used in later functions. So The correct Hash can be obtained. $String .= $Hash_Count.$Count.$Hash; // Append the Count and Hash (as EncryptedString and IV are different lengths. 
Used to compensate While ($Increment < $Count){ /* * This loop appends to the string created in the order: * Even: IV Character * Odd: EncryptedString Character * * Unset Elements of the Array for each iteration */ $String .= $IV[$Increment]; $String .= $Encrypted_String[$Increment]; unset($EncStr_Arr[$Increment]); $Increment++; } /* * After The loop has broken (Increment reaches the count of the IV), implode the Remainder elements (encrypted string) * and return string + imploded array */ $Encrypted_String = implode("",$EncStr_Arr); return $String.$Encrypted_String; } public function UnMerge($String){ /* * function splits a string into 3 elements of an array to a usable format for the decryption * 1) Split the String into an Array, 1 character = 1 element * 2) The hash count is obtained (pushed to first character in the merge) * 3) Unset the Hash Number * 4) IV length is stored and created into the 2 digit number to be used later * 5) Unset the IV Containers in elements 1 & 2 * 6) Reset the Array Index to 0 * 7) See While loop Comments * */ $String_Array = str_split($String); $Hash_Count = $String_Array[0]; unset($String_Array[0]); $IVLength = $String_Array[1].$String_Array[2]; $Hash = NULL; unset($String_Array[1]); unset($String_Array[2]); $Hash_Incrementer = 0; $String_Array = array_values($String_Array); while ($Hash_Incrementer < $Hash_Count){ /* * Use a while loop to pull correct Hash number from the string & unsetting as we go */ $Hash .= $String_Array[$Hash_Incrementer]; unset($String_Array[$Hash_Incrementer]); $Hash_Incrementer++; } /* * Increment the array values to start from 1 instead of 0 */ array_unshift($String_Array, null); unset($String_Array[0]); $IV = null; $Encryption = null; foreach ($String_Array AS $Key => $Value){ /* * Through each iteration of the array (using the keys) decides if the key is odd or even. 
* If odd: Appending to the empty $IV var * If Even: Appending to the encrypted string */ if($Key&1) { $IV .= $Value; } else { $Encryption .= $Value; } unset($String_Array[$Key]); // Unset as we go if ($IVLength*2 == $Key){ /* * * If Correct IV length is equal to the Key then end the foreach loop. * * * Length after manipulation in earlier functions = 41 * Pushing the three extra elements into the array = 44 * IV Length Counted = 16. * * Maths: * 41 + 3 = 44 * 16*2 = 32 + 1 = 33 * 41 - 33 = 8 (Remainding Characters from the EncryptedString */ break; } } return array( /* * Return an Array with correct information to be used for Decryption. * "EncryptedString" = What was pushed in earlire foreach with an imploded array of the remainding Chars * "IV" = The Correct IV * "Hash" = A Type juggled integer containing the Hash as created in the Encryption */ "EncryptedString" => $Encryption.implode("",$String_Array), "IV" => $IV, "Hash" => (int)$Hash ); } function Decrypt($Array){ /* * Decrypt The Encrypted String with passing correct information as managed by UnMergeString */ return openssl_decrypt($Array['EncryptedString'],"AES-256-CBC",$Array['Hash'],0,$Array['IV']); } Answer: In my opinion, no your class is not clearly documented and easily understandable. Here are the main reasons I find this code... dirty: The naming is misleading. The naming is unusual. There are too many comments. Inconsistencies. Is that it? Well, let me explain: Misleading Names The first thing I did was look at the class name, because that alone should tell you exactly what the class will do. No ifs-ands-or-buts. Sweet, the class is about encryption. Now, by definition, "Encryption" is: In cryptography, encryption is the process of encoding messages or information in such a way that only authorized parties can read it. This was simply from Googling "define encryption". How is this relevant? 
Well, if you check out the last bit of your code, it's decrypting (decoding if you want a polar of the quote). This seems like a stupid thing to point out, but in all honesty, I was expecting two classes, an encryption and a decryption class! This is what I mean by misleading names. The next greatest glaring thing is the Merge and UnMerge functions (both of which I need to criticize later). When I saw Merge, I got lost. What in the world could an encrypting class need to merge? And why is it so prominent in the code? So I take a look at the contents, and it's A) defining variables to equal other variables, and B) obfuscating the encryption in a strange manner. In a completely educational attitude, I find this function a waste of space and time. I think it takes up unnecessary space by appending a mish-mash of characters to an already encoded string, and I feel like the variable tide grew and washed up a bunch of unnecessary variables! What's the point in the whole while loop? If you have a reasonable explanation, I'd be more than welcome to be enlightened by the way. As of now though, I see absolutely no reason for this to be its own method. Would I say the same about the UnMerge method? Most definitely. Without the Merge method, this un-merger wouldn't exist. Naming is one of the most important things a programmer can get right. Here's a couple pointers for naming classes: Class names are always nouns, not verbs. (Kudos to you) Class names should be descriptive in nature without implying implementation. Since implementation can change, to imply implementation in the name forces the class name and all references to it to change or else the code can become misleading. Remember: "describing but not specifying". Unusual Naming Now, here's the fun part. Hehe I've got a lot to say! But before I begin, I want you to know that it's important to choose a code formatting standard that the readers will understand. 
There are many, many, many standards to choose from for PHP, and the choice is up to you. The three I listed are quite similar in style. Having a style that people recognize makes reading almost 200% easier. Since I wasn't used to your style, it took me longer to read your code than it normally would have. Here's a couple notes (taken from PHP.net): Method names follow the 'studlyCaps' (also referred to as 'bumpy case' or 'camel caps') naming convention, with care taken to minimize the letter count. The initial letter of the name is lowercase, and each letter that starts a new 'word' is capitalized. Variable names must be meaningful. ... Variable names should be in lowercase. Use underscores to separate between words. I focus on white space in the Inconsistencies section. Too Many Comments This one could be debated in certain situations, but in your case, I find your comments detrimental to the readability and conciseness of your code. The comments are non-standard (PHPdoc), very spread out, in inappropriate places, and I think far too long. PHPdocs are almost identical to JavaDocs, if you're familiar with that. If not, I suggest this standard to get you started. I think you have too much white space in between comments and inside comments. It forces the eyes to jump around. If the variable names, method names, and class names are all excellent, comments are only needed in essential places. I think that because the names of these objects are vague, you're having to do a lot more commenting than necessary! I think you're giving too much information. With the PHPdocs, it's all in that one place, compact, uniform, and concise. There are however, times when inline comments are actually necessary, and it's perfectly acceptable to use them then. Comments aren't code, so of course it really doesn't matter what you do. I'm just thinking about us readers ;)
Compare Encrypt ($String) and Merge($Array). Do you want one space or none? Choose. Compare $iv, $Size, and $IV. You need to decide what capitalization pattern should be followed. Is it While or while? Don't fix it if it ain't broke. Nowhere in the PHP documentation are these keywords capitalized. These are just a couple examples. I think you get the point! Well, that is why I think it's unclear code! As far as your actual code logic, I could write another post on the security and alternate implementations and efficiency, but that would make this far too long! :)
{ "domain": "codereview.stackexchange", "id": 8611, "tags": "php, security, cryptography" }
sequence of problems that take $\Theta(n^k)$ for increasing $k$?
Question: Do we know an infinite sequence of decision problems where the most efficient algorithm for each problem takes $\Theta(n^k)$ time, where $k$ increases unboundedly? Suppose for example that we knew that finding a k-clique takes $\Theta(n^k)$; then this sequence could be {1-clique, 2-clique, 3-clique, ...}. However, perhaps there are more efficient algorithms for k-clique, so this answer is incorrect. EDIT: By the Time Hierarchy Theorem (which includes "P does not collapse to DTIME($n^k$) for any fixed $k$"), such a sequence should exist. After some discussions in the comments, common NP problems are not good candidates. If it takes only $O(n)$ time to check a certificate (or $O(n^c)$, where $c$ is a constant independent of $k$), it would imply that $P\neq NP$. Answer: You leave the computational model open. I will assume that the question refers to random access machines (RAMs), as that is customary when one asks for the actual exponent in polynomial time. Let $P_k$ be the set of (encodings of) RAMs $M$, such that $M$ halts in at most $|M|^k$ steps using at most the initial $|M|^k$ memory cells. A universal RAM can solve $P_k$ in $O(n^k)$. On the other hand, $P_k$ is the universal problem for $DTIME(n^k)$ (in the RAM model and under linear time reductions). As a special case of the RAM version of the time hierarchy theorem, a simple diagonalization shows that all algorithms for $P_k$ have running time $\Omega(n^k)$. Thus the complexity of $P_k$ is $\Theta(n^k)$ as asked. The tightness heavily relies on the computational model. The Turing machine version of the time hierarchy theorem has a $\log n$ gap. Let $P'_k$ be the set of (encodings of) Turing machines $M$, such that $M$ halts in at most $\frac{|M|^k}{\log n}$ steps. Then $P'_k$ can be solved in $n^k$ time by a universal Turing machine and all algorithms have running time $\Omega(\frac{n^k}{\log n})$. I suspect this is the best one can do. 
[EDIT] In a comment, you ask for more conventional problems, like graph problems or logic. First, let me point out how $k$-Clique (as suggested in the answer) does not seem like a good candidate. If we could prove that $k$-Clique requires time $\Omega(n^k)$ (or $\Omega(n^{f(k)})$ for some unbounded $f$), we would have implied that Clique is not in P, and thus that P is different from NP. That is not likely to be easy. The same holds for slices of any other problem that we know to be in NP, or in some other classes like PSPACE that are unknown to be different from P. Every problem can be rephrased as a graph problem, by encoding the input as a graph. I don't know if you would call a graph version of $P_k$ conventional. I wouldn't call it natural. As for logic, I can provide an example. It is not boolean logic, however, and there is a gap between the lower and upper bound. The example is based on the Immerman-Vardi theorem. Let $\mathcal L$ be first-order logic extended by least-fixed-point operators. Let ${\mathcal L}^k$ denote the fragment where only $k$ first-order variables are allowed. It is permitted to re-use variables, so the restriction is that each subformula has at most $k$ free variables. The problem $M_k$ is the model-checking problem for ${\mathcal L}^k$, that is, on input of a formula $\varphi\in{\mathcal L}^k$ and a structure $\mathfrak A$ of matching vocabulary, the task is to decide whether $\mathfrak A\models\varphi$, that is whether $\varphi$ is true in $\mathfrak A$. $M_k$ can be solved in time $O(n^{2k+1})$. On the other hand, for some constant $c$, $M_{3k+c}$ is hard for $DTIME(n^k)$ (where we also require an $n^k$ bound on the contents of memory cells). I believe $c=2$ suffices. From diagonalization we obtain that $M_{3k+c}$ requires time $\Omega(n^k)$, so $M_k$ requires time $\Omega(n^{\frac{k-c}3})$. The bound is not tight in the sense of $\Theta(n^{k'})$ for some $k'$, but at least we have $n^{\Theta(k)}$.
{ "domain": "cs.stackexchange", "id": 11289, "tags": "complexity-theory, decision-problem, polynomial-time" }
Power of Double - Logarithmic Space
Question: I am trying to solve the exercise "on the power of double-logarithmic space" from the great textbook Computational Complexity by Oded Goldreich. The goal is to show that the given set $S=\left \{ w_k \mid k \in \mathbb{N} \right \}$, where $w_k$ is the concatenation of all $k$-bit long strings separated by *'s, is not regular and yet is decidable in double-logarithmic space. The exercise contains guidelines, and I would like to shed light on a few sentences from the guidelines in order to solve the exercise. In the guidelines it is mentioned that we can take advantage of the *'s (in $w_i$); the $i$th iteration can be implemented in space $O(\log i)$. The $i$th iteration is verifying whether $x = w_i$, which really can be decided in $O(\log i)$ space, where $\log i$ bits can be used to note the position within an $i$-bit long string — but can the position in $x$ also be included in $O(\log i)$, or not? Furthermore, on input $x \notin S$, we halt and reject after at most $\log |x|$ iterations. That means only $\log |x|$ $w$'s from the set $S$ will be compared to $x$. Why is that actually so? Actually, it is slightly simpler to handle the related set $\left \{w_1**w_2**..**w_k \right \}$. Why is that actually so? And I would hardly call it a set; it is rather a concatenated string. Answer: Well, the set $\left \{w_1**w_2**..**w_k \right \}$ contains only one single element, but is nevertheless a set. And a TM can accept only languages, which are sets of words. Why is it slightly easier? Hmm, I would say it is easier since you can simplify the algorithm slightly. You have one counter for $k$, and then you check whether, in the string $\cdots *x*y*\cdots$, you have $\text{bin}(y)=\text{bin}(x)+1$ at every position. You do not have to care about detecting the right $k$. Other than this, the super-string is longer, which gives you more space. But this effect vanishes in the big-O.
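To make the objects concrete, here is a small Python sketch (the helper names `w` and `in_S` are mine, not Goldreich's) that generates $w_k$ and decides membership in $S$ by trying $k = 1, 2, \dots$; since $|w_k| \ge 2^k$, a rejecting run stops after roughly $\log |x|$ iterations, which is the fact the guideline uses. The sketch only illustrates the language and the iteration bound, not the space-efficient implementation:

```python
def w(k):
    """Concatenation of all k-bit strings, in order, separated by '*'."""
    return "*".join(format(i, "0" + str(k) + "b") for i in range(2 ** k))

def in_S(x):
    """Decide x in S = {w_k : k in N} by trying k = 1, 2, ...
    Since |w_k| >= 2^k, only about log2|x| values of k can fit in x,
    which is why a rejecting run halts after ~log|x| iterations."""
    k = 1
    while len(w(k)) <= len(x):
        if w(k) == x:
            return True
        k += 1
    return False

print(w(2))           # 00*01*10*11
print(in_S(w(3)))     # True
print(in_S("0*1*0"))  # False
```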
{ "domain": "cs.stackexchange", "id": 825, "tags": "complexity-theory, space-complexity" }
Remove duplicate chars from String
Question: I have retackled this problem using the help I received from here: Remove duplications from a Java String, but this time using a LinkedHashSet, since previously I was using a HashSet but the answer was out of order. From this implementation my run time should be \$O(n)\$, correct? Does anyone see any room where I can improve my code, or some mistakes?

public static void main(String[] args) {
    String test = "Banana";
    LinkedHashSet knownChars = new LinkedHashSet();
    StringBuilder noDups = new StringBuilder();
    for (Character c : test.toCharArray()) {
        if (!knownChars.contains(c)) {
            knownChars.add(c);
            noDups.append(c);
        }
    }
    System.out.println("No duplicate string is: " + noDups);
}

Answer: The below points are in no particular order:

Extraction into Methods
The utility and the test should be broken up into methods, i.e., you should have a function String removeDuplicates(String) containing the logic for removing duplicates.

Manual Boxing
Unnecessary manual boxing to Character — just use char and let javac take care of it on its own (JDK 1.5+ supports autoboxing of primitives).

Why LinkedHashSet?
A HashSet can be used just fine here - the order of the resulting deduplicated String is determined by the order of insertion of characters into the StringBuilder, not the HashSet. (Believe me, I checked. Here you can also: http://ideone.com/K9Ku2p) Note that the add method of Sets returns a boolean: true if the set did not already contain the element and it has been added successfully, or false if the element was already present in the Set. Exploiting this makes the call to Set.contains(...) redundant; see the example code.

Generics
Use generics. Don't use raw collections - they can violate type safety. In your case, you might not realise the immediate benefit of doing so, but it is a good practice when scaling to larger programs.
Here, using generics is as simple as changing LinkedHashSet knownChars = new LinkedHashSet(); to LinkedHashSet<Character> knownChars = new LinkedHashSet<>(); (JDK 1.7+ to get the diamond type inference; otherwise it has to be LinkedHashSet<Character> knownChars = new LinkedHashSet<Character>();, JDK 1.5+)

Space-time tradeoffs
To minimize the number of reallocations of the underlying buffers of StringBuilder or HashSet, initialize them with a capacity of the largest possible size they could have, which is the length of the input String. Use the constructors which have an int capacity parameter. See the example code for details. To avoid a gotcha involving the load factor (a parameter which decides how full a HashSet should be before it is resized) when initializing the HashSet with a capacity as above (the HashSet may be prematurely resized), also set the load factor to 1.0f, using the new HashSet(int capacity, float loadFactor) constructor overload.

Miscellaneous
Type to interfaces, e.g., use Set<Character> knownChars = new LinkedHashSet<>(); instead of LinkedHashSet<Character> knownChars = new LinkedHashSet<>();. This makes your code in general more resilient to refactoring; you can use a different Set implementation at any time by changing one word instead of two. Qualify your method parameters with final if you are not going to reassign them in any way - granted, String being immutable makes this redundant, in the sense that any reassignments done to input in removeDuplicates will not affect test in main, but it's a good practice anyway. Better output messages - see the example code for an example. Better variable naming - it's already quite good, but try to use full words. See the example code.
Example Code (Ideone):

import java.util.Set;
import java.util.HashSet;

// Store in a file `StringUtilities.java`
public class StringUtilities {

    public static void main(String[] args) {
        String test = "Banana";
        System.out.println("Test string \"" + test + "\" with duplicates removed is: \"" + removeDuplicates(test) + "\"");
    }

    public static String removeDuplicates(final String input) {
        Set<Character> knownCharacters = new HashSet<>(input.length(), 1.0f);
        StringBuilder noDuplicates = new StringBuilder(input.length());
        for (char character : input.toCharArray()) {
            if (knownCharacters.add(character)) {
                noDuplicates.append(character);
            }
        }
        return noDuplicates.toString();
    }
}
{ "domain": "codereview.stackexchange", "id": 26304, "tags": "java, strings" }
What Are Some Examples of Physics Problems With Many Different Approaches That Give the Same Answer?
Question: I was watching a clip of a lecture by Richard Feynman (see here), in which he states, "Every theoretical physicist who's any good knows six or seven different theoretical representations for exactly the same physics." So, I'm wondering, what are some examples of physics problems with six or seven different approaches resulting in the same answer? Answer: This is just an example: here is a review by Bickers which discusses a dozen different $1/N$ techniques as applied to the Kondo problem. Note that the $1/N$ expansion is just one of many approaches to this problem, alongside simple perturbation theory, several Green's function formulations, the renormalization group, the Bethe ansatz and others. Another example is the variety of methods used to solve problems in quantum transport:

- Simple Drude-like approaches (equations for the electron group velocity and the wave vector)
- Perturbation theory
- Kinetic equation
- Landauer-Büttiker formalism and other approaches based on the scattering matrix
- Non-equilibrium Green's function approaches
- Quantum Langevin equations
- Kubo formula
- Density-matrix equations (master equation, etc.)

And this list is by no means exhaustive.
{ "domain": "physics.stackexchange", "id": 78513, "tags": "quantum-mechanics, electromagnetism, thermodynamics, special-relativity, classical-mechanics" }
What does it mean for a parameterised path to be spherically symmetric?
Question: This question seems embarrassingly simple, but I was wondering what it means for an object to be spherically symmetric? I've been working through some questions where I'm told to use this fact to help simplify working with paths in space. I understand what this means intuitively (maybe), but am having difficulty expressing this condition mathematically. Say we have a parameterised path given by: $\sigma(s)=(r(s), \theta(s), \phi(s))$ What does it mean for this to be spherically symmetric? Some people have suggested this means $\frac{d\theta}{ds}=\frac{d\phi}{ds}=0$ Thinking about this, I initially thought about a circle, which (I think?) is a spherically symmetric path, i.e. the path in spherical coordinates $(r, \theta, \phi)$ given by: $x(s)=(R, 2\pi s, 0)$, where $R$ is fixed and $s\in[0, 1]$ This doesn't have $\frac{d\theta}{ds}=0$, which makes me think the condition must be something else? Or is the circle not a spherically symmetric path because it isn't the same in ALL directions (e.g. $\phi\ne 0$)? In this case, any idea what my books could be referring to when they talk about using spherical symmetry to simplify working with paths? Answer: My answer might not be truly correct by "definition", but I'm going to attack it from a different viewpoint. I am inspired by the action $J$ of a functional $$ J = \int_{x_1}^{x_2} f(x,y,\dot y)dx $$ as in Lagrangian mechanics. You can solve this problem to find the geodesics of physical geometries. For example, in Euclidean space the functional is $ds = \sqrt{dx^2 + dy^2} = \sqrt{1+{\dot y}^2}dx$, which minimizes to a linear solution $y-y_0 = m(x-x_0)$ upon applying Lagrange's equations. Here $ds$ is a function of $x$ and $y$, which are all of the independent coordinates belonging to that 2D space.
Now let's go to your example of the circle: in polar coordinates, we write a general displacement as $$ ds = \sqrt{dr^2 + (r\,d\theta)^2} $$ But in the case of a circle, the radius does not change over the movement, so $$ ds = r\,d\theta.$$ Here, I would say the circle has spherical symmetry because the displacement - or, from my inspiration, the functional minimized to uncover its geodesic - maintains a constant radial distance $r$ with respect to the other variables. Properly, I believe you should write $$d\vec{x} = dr\,{\hat r} + r\,d\theta\, \mathbf{\hat{\theta}},$$ which collapses to a circular displacement in the limit $dr \rightarrow 0$. The real condition, I think, for spherical symmetry resulting from this is $$ \frac{dr}{d\theta} = \frac{dr}{d\phi} = 0 $$ which is to say that $r$ remains constant under changes of $\theta$ and $\phi$. Now for your position vector $\vec{x}(s) = (R, 2\pi s,0)$. Differentiate with respect to your parameter $s$: $$ \frac{d\vec x}{ds} = (0, 2\pi, 0),$$ which is a circular path of $2\pi$ radians. Here we needed to satisfy $\frac{dR}{ds} = 0$ for the circle, which it certainly does. Here $ds = \frac{1}{2\pi}d\theta$, so we have the condition that $\frac{dr}{d\theta} = 0$ hidden there.
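As a quick numerical sanity check of the conclusion — along the circle, $r$ stays constant (so $dr/d\theta = 0$) even though $\theta$ varies — here is a small numpy sketch (the sample path and numbers are mine):

```python
import numpy as np

R = 2.0
s = np.linspace(0.0, 1.0, 1001)
theta = 2 * np.pi * s                  # the path sigma(s) = (R, 2*pi*s, 0)

# Embed the path in the plane and recover r(s) from the Cartesian points:
x, y = R * np.cos(theta), R * np.sin(theta)
r = np.hypot(x, y)

dr_ds = np.gradient(r, s)              # numerically ~0 everywhere
dtheta_ds = np.gradient(theta, s)      # ~2*pi everywhere: theta does vary

print(np.max(np.abs(dr_ds)), dtheta_ds[0])
```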
{ "domain": "physics.stackexchange", "id": 52088, "tags": "symmetry, coordinate-systems" }
Neural Network - exercise
Question: I am currently learning the concept of neural networks by myself. I am working with a very good pdf from http://neuralnetworksanddeeplearning.com/chap1.html I also did a few exercises, but there is one exercise I really don't understand. Task: There is a way of determining the bitwise representation of a digit by adding an extra layer to the three-layer network above. The extra layer converts the output from the previous layer into a binary representation, as illustrated in the figure below. Find a set of weights and biases for the new output layer. Assume that the first 3 layers of neurons are such that the correct output in the third layer (i.e., the old output layer) has activation at least 0.99, and incorrect outputs have activation less than 0.01. I also found the solution, as can be seen in the second image. I understand why the matrix has to have this shape, but I really struggle to understand the step where the user calculates the quantities 0.99 + 3*0.01 and 4*0.01. PS: I know that the question was answered in Extra output layer in a neural network (Decimal to binary). However, the specific step I am looking for was not answered. Answer: "But what happens if the input has 0.99 and 0.01?" indicates the point: if the third layer is perfect, then we're already done; but we need to account for small variations in the third layer. We need to understand how far off we can get after multiplying by $W$. Now, it appears that the answer gets this a little wrong. If $\vec{x}$ consists of one entry at least 0.99 and nine entries of at most 0.01, then $W\vec{x}$ is off by some number depending on the ten columns of $W$, not the four rows. And we need lower bounds for the correct slots, and upper bounds for the incorrect slots (rather than upper bounds for both).
For example, if $\vec{x}=\langle 0.05, 0.997, 0.1, 0.01, \dotsc\rangle$, we'd get $$W\vec{x} = \begin{pmatrix} 0.997 +0.01 + \dotsb \\ 0.1 +0.01 + \dotsb \\ \vdots \end{pmatrix}.$$ We need to use bias/activation/cutoffs to "push" this to the desired $\langle 1,0,0,0\rangle$ (and similarly for other $\vec{x}$). Well, the entries where we desire a 1 are at least 0.99 in the relevant position and at least 0 elsewhere (seemingly the activations are supposed to make everything positive?), so we get at least 0.99. The entries where we desire a 0 are at most 0.01 in every non-zero position, so we get at most $5*0.01$ (the first row of $W$ has the most nonzero entries, 5). So setting any cutoff (with a relevant bias and/or activation function) between 0.05 and 0.99 will suffice. [Another example, since that might have been too wordy. Say the output in the third layer is $$\vec{x}=\langle 0, 0.01, 0.99, 0.01, 0, 0.01, 0.01, 0.01, 0, 0.01\rangle.$$ Then $$W\vec{x} = \langle 0.05, 1.02, 0.03, 0.01\rangle.$$ You can see how I've cooked up $\vec{x}$ to get the largest possible first entry in $W\vec{x}$ given the constraints on $\vec{x}$.]
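The bounds above can be checked numerically. A sketch, assuming the usual convention that row $j$ of $W$ has a 1 in column $d$ exactly when bit $j$ (least significant first) of digit $d$ is set — which matches the first row having the most nonzero entries, 5 (the odd digits):

```python
import numpy as np

# W[j, d] = 1 iff bit j of digit d is set (4x10 binary-conversion matrix).
W = np.array([[(d >> j) & 1 for d in range(10)] for j in range(4)], dtype=float)

def fourth_layer(x, cutoff=0.5):
    """Threshold W @ x; any cutoff strictly between 0.05 and 0.99 works."""
    return (W @ x > cutoff).astype(int)

# Worst case allowed by the assumptions: correct slot barely 0.99,
# every incorrect slot as large as allowed, 0.01.
for d in range(10):
    x = np.full(10, 0.01)
    x[d] = 0.99
    y = W @ x
    # desired-1 entries come out >= 0.99, desired-0 entries <= 5 * 0.01:
    print(d, np.round(y, 2), fourth_layer(x))
```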
{ "domain": "datascience.stackexchange", "id": 5513, "tags": "neural-network" }
Acceleration without force in rotational motion?
Question: This has really been bugging me. I hope someone can point out the flaw in my logic.

1. Force is required to change velocity.
2. A revolving object in space is continually changing its velocity by virtue of this revolution.
3. Therefore this revolving object is forever experiencing a force.
4. A force requires the expenditure of some energy.
5. Therefore a revolving object requires a constant input of energy to keep rotating.

Point 5 is obviously wrong by experience, but why is it wrong? Answer: Point 4 sounds perfectly reasonable, but it turns out to be wrong upon closer examination! Force does not require an expenditure of energy. Only force directed along the path of a moving object requires expenditure of energy. To phrase that more mathematically: Energy Expenditure = $\int \vec{F} \cdot d\vec{x} $, where $\vec{x}$ is the coordinate along the path of motion. For a rotating or circularly revolving object, force and motion are perpendicular and therefore energy expenditure is 0.
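A quick numerical illustration of that last statement — for uniform circular motion the power $\vec F \cdot \vec v$ vanishes at every instant, so the work over a full orbit is zero (a sketch with made-up numbers):

```python
import numpy as np

# Uniform circular motion of a unit mass: position r(t) = (cos wt, sin wt).
w = 3.0
t = np.linspace(0.0, 2 * np.pi / w, 10001)
pos = np.stack([np.cos(w * t), np.sin(w * t)], axis=1)
vel = np.stack([-w * np.sin(w * t), w * np.cos(w * t)], axis=1)
force = -w**2 * pos                      # centripetal force for m = 1

power = np.sum(force * vel, axis=1)      # P = F . v, zero when F is perp. to v
dt = np.diff(t)
work = np.sum(0.5 * (power[1:] + power[:-1]) * dt)   # trapezoidal int of P dt

print(np.max(np.abs(power)), work)       # both ~0 over the whole orbit
```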
{ "domain": "physics.stackexchange", "id": 94179, "tags": "newtonian-mechanics, rotational-dynamics, energy-conservation, work" }
Quick Solidworks question: How to cut a 3D part along a line in a sketch?
Question: I'm not having luck with the Trim Entities feature, which is just for sketches. I just want to shave the top part off completely at the angle that I've drawn. Answer: There's a couple of ways to do this - I've illustrated both @StainlessSteelRat's Plane Cut method, and @NMech's Cut/Extrude method. Both are valid, and as always with these questions - there's not enough context to recommend which of these two, or the numerous other potential methods is best for your situation.
{ "domain": "engineering.stackexchange", "id": 3809, "tags": "solidworks" }
Substitution or elimination when a chloroalkene reacts with NaOH in ethanol?
Question: NaOH + EtOH will eliminate the Cl atom forming a double bond. At least, that's what I think. 3 could also be a viable answer since the OH can also attack the said double bond (this is probably not correct) and cause an OH to be added forming the compound in 3. Answer: This is an example of the subtle difference between different bases that are strong in water but may not be equally strong in other, less polar solvents. Alcoholic potassium hydroxide certainly does eliminate a proton from the substrate leading to product (4), but sodium hydroxide in ethanol is just a little weaker and uses nucleophilic substitution to form (1) instead.
{ "domain": "chemistry.stackexchange", "id": 17695, "tags": "organic-chemistry, halides, elimination, halogens" }
DFT and perfect reconstruction of a square wave on a digital computer
Question: I know that in theory, when reconstructing a square wave from its Fourier coefficients, unless we have an infinite amount of them, the resulting reconstruction will have Gibbs ringing artifacts due to the lack of enough harmonics. On a computer, we can take the Fourier transform X = fft(x) of a square wave x, and reconstruct it without artifact with x_rec = ifft(X), maybe with some rounding error of the order of 1e-17 or so, but no visible ringing. I don't have a satisfying answer for that. I guess it has to be something to do with the fact that "the square wave" x is a digitized version of a continuous wave, and my Fourier basis vectors (complex exponentials) are of course also discretized, since we are in a computer... but still: how would you justify the absence of Gibbs ringing artefacts from the Fourier reconstruction of the Fourier transform of a digital square wave?

%%%%%%%%%%%%%%%%%%%% Thought experiment proposed by Dan Szabo

fs = 10; % sampling frequency
t = 0:(1/fs):1-(1/fs);
s = [1 1 1 1 1 0 0 0 0 0];
sTr = imtranslate(s, [0.5 0])

sTr =
    0.5000    1.0000    1.0000    1.0000    1.0000    0.5000         0         0         0         0

Answer: First of all, it's a misunderstanding that the Gibbs phenomenon disappears if you use infinitely many Fourier series coefficients to reconstruct a discontinuous periodic function, such as a square wave. It doesn't. The reason is that generally the Fourier series doesn't converge point-wise, but it converges in the mean, i.e., $$\lim_{N\to\infty}\int_{0}^{T}\left|x(t)-\sum_{k=-N}^{N}c_ke^{j2\pi kt/T}\right|^2\mathrm{d}t= 0\tag{1}$$ if $x(t)$ is a $T$-periodic function, and $c_k$ are its Fourier coefficients.
Taking the discrete Fourier transform (DFT) of a finite sequence of numbers just corresponds to a matrix multiplication: $$\mathbf{y}=\mathbf{Ax}\tag{2}$$ and as long as the matrix $\mathbf{A}$ is invertible you can compute $\mathbf{x}$ from $\mathbf{y}$: $$\mathbf{x}=\mathbf{A}^{-1}\mathbf{y}\tag{3}$$ This has nothing to do with the Fourier series of a continuous periodic function, and it has nothing to do with the Gibbs phenomenon.
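A small numpy sketch of both points: the forward/inverse DFT pair is exactly invertible (no ringing), and over/undershoot only appears once coefficients are actually thrown away (the particular bin choices here are my own illustration):

```python
import numpy as np

x = np.array([1., 1., 1., 1., 1., 0., 0., 0., 0., 0.])  # digital "square wave"

# The length-N DFT is multiplication by an invertible N x N matrix,
# so ifft(fft(x)) recovers x exactly, up to floating-point rounding:
x_rec = np.fft.ifft(np.fft.fft(x)).real
print(np.max(np.abs(x - x_rec)))        # ~1e-16: no visible ringing

# Over/undershoot appears only when coefficients are *discarded*:
X = np.fft.fft(x)
X[2:-1] = 0                             # keep only DC and the fundamental
x_trunc = np.fft.ifft(X).real
print(x_trunc.max(), x_trunc.min())     # overshoots above 1 and below 0
```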
{ "domain": "dsp.stackexchange", "id": 9103, "tags": "fourier-transform, square" }
What are $U(n)$ or $\mathbb{Z}_2$ quantum spin liquids?
Question: Quantum spin liquid is a state of matter in which spins are correlated and fluctuate even at zero temperature. My question is about these terms in general. When we say that a state or a quasi-particle is $U(n)$, $SU(n)$ or $\mathbb{Z}_2$, what do we physically mean? Answer: They specify which gauge symmetry the quantum spin liquid is subject to, when treated via a lattice gauge theory. The gauge field (some sort of interaction) is defined over a discretised space(-time). For example, imagine atoms situated on the vertices of this square lattice: where the lines between vertices are links, and the area defined by $4$ links is a plaquette.

$U(1)$

Gauge variables are unphysical and not directly observable. Let's start with a $U(1)$ gauge theory, corresponding to usual electromagnetism. The gauge fields live on links like $nm$. Physical (observable) variables are electric fluxes on links $E_{nm}$ and magnetic fluxes through plaquettes $\Phi_{mnpq}$: $$ \mathbf{E} = -\frac{\partial \mathbf{A}}{\partial t} \quad\Rightarrow \quad E_{mn} = -\dot{A}_{mn}, $$ $$ \Phi = \int \mathbf{B}\cdot \mathrm{d}^2\mathbf{r} = \oint_{\mathrm{plaquette}} \mathbf{A}\cdot \mathrm{d}\mathbf{r} \quad \Rightarrow \quad \Phi_{mnpq} = A_{mn}+A_{np}+A_{pq}+A_{qm}.$$ $E$ is quantised, because it is related to electric charges. In some units, then, $E = 0, \pm 1, \pm 2, \dots$. Normally $A$ can take any value from $-\infty$ to $\infty$. Usually, however, one considers a compact U(1) gauge theory by requiring $$ 0 \leq A \leq 2\pi, \quad V(A + 2\pi) = V(A) \Rightarrow V(A) = f(e^{\mathrm{i}A}), $$ where $V$ would be the potential energy in the Hamiltonian $H$: $$ H = \sum_{\mathrm{links}} \frac{E^2_{mn}}{2I} - \sum_{\mathrm{plaquette}} \lambda \cos \Phi_{mnpq}.$$ The $E^2$ term makes sense as it is the energy density of the electric field. The $\cos$ is introduced to preserve the compactness and periodicity of the magnetic potential $A$.
For small fluxes $\Phi$, this reduces to $\propto 1 + B^2$ which, again, makes sense as it is the energy density of the magnetic field.

$\mathbb{Z}_2$

For $\mathbb{Z}_2$, you switch from integer arithmetic $(\mathbb{Z})$ to binary arithmetic $(\mathbb{Z}_2)$ for the electric flux along the links. The links can be either "up" or "down", and not take any value $\in \mathbb{R}$ between $0$ and $2\pi$. The simplest way to try and do this is to exponentiate all your old variables $E$, $A$ and $\Phi$: $$ \begin{array}{|c|c|} \hline \mathrm{(compact)}\, U(1) & \mathbb{Z}_2 \\ \hline E = 0, \pm 1, \pm 2, \dots & (-1)^E \leftrightarrow \sigma^x = \pm 1 \\ 0 \leq A \leq 2\pi & e^{\pm \mathrm{i}A} \leftrightarrow \sigma^z = \pm 1 \\ \Phi = \sum_{\mathrm{plaquette}}A & e^{\mathrm{i}\Phi} \leftrightarrow \Pi_{\mathrm{plaquettes}} \sigma^z = \pm 1 \\ H = \sum_{\mathrm{links}} \frac{E^2_{mn}}{2I} - \lambda\sum_{\mathrm{plaquette}} \cos \Phi_{mnpq} & H = -\Gamma\sum_{\mathrm{links}}\sigma^x_{mn} - \lambda \sum_{\mathrm{plaquettes}} \sigma^z_{mn}\dots \sigma^z_{qm}\\ \hline \end{array}$$ An example of a $\mathbb{Z}_2$ theory, with $\sigma^x$ and $\sigma^z$, is the Heisenberg model. This can be employed, for instance, to model the Kagome lattice -- which is one of those expected to display quantum spin liquids, and which hopefully someone will soon investigate experimentally! EDIT: reference for figures: here and here.

$U(N)$, $SU(N)$ etc.

If you want to have $SU(2)$, instead of $U(1)$, you need a state which has $2$ internal degrees of freedom (internal states). So you don't just have a scalar wavefunction $\psi \rightarrow \underbrace{e^{\mathrm{i}\phi}}_{\in U(1)}\psi$, but a multi-component one $\psi = \left ( \begin{array}{c} \psi_1\\ \psi_2 \end{array} \right ) \rightarrow \underbrace{\left ( \begin{array}{cc} a & b \\ c & d \end{array} \right )}_{\in SU(2)} \left ( \begin{array}{c} \psi_1\\ \psi_2 \end{array} \right ) $, and so on for $ N \geq 2$.
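For concreteness, here is a small numpy sketch (my own toy sizes and couplings, not from the answer) that builds the $\mathbb{Z}_2$ Hamiltonian in the table above for a single plaquette of 4 links, and checks that it is Hermitian:

```python
import numpy as np
from functools import reduce

# Pauli matrices and identity for a single Z2 link variable.
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def on_link(op, link, n_links):
    """Embed a single-link operator into the n-link Hilbert space."""
    factors = [I2] * n_links
    factors[link] = op
    return reduce(np.kron, factors)

# One plaquette with 4 links: H = -Gamma * sum_i sigma^x_i - lam * prod_i sigma^z_i.
n, Gamma, lam = 4, 1.0, 0.5
H = -Gamma * sum(on_link(sx, i, n) for i in range(n))
H -= lam * reduce(np.matmul, [on_link(sz, i, n) for i in range(n)])

assert np.allclose(H, H.T)          # Hermitian, as a Hamiltonian must be
print(np.linalg.eigvalsh(H)[0])     # ground-state energy of this toy model
```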
{ "domain": "physics.stackexchange", "id": 61249, "tags": "condensed-matter, terminology, topological-order, topological-phase, spin-models" }
Potential energy in an electric circuit
Question: I am studying electrical circuits and how they work. I know that electrons go from the negative pole to the positive pole (in the direction of increasing potential); this means they go from a lower electric potential to a higher electric potential. I know that in a part of the circuit where there is no resistance the potential difference is equal to $0V$, which means that where there is a resistance there is a potential difference. I don't understand how a resistor's potential difference changes if I add or remove other resistances in the electrical circuit; what happens inside the circuit? How does the potential energy at the resistor's terminals change? When the charges go through the resistor, how could they know their flow intensity? How could they know if there is a second or third resistor? Answer: Think of electric potential as water pressure, such as in a hose or plumbing system. Think of the resistor as a filter or a constriction in the hose or pipe which "resists" the flow of water. There is a certain pressure on one side of the filter because a lot of water particles are pushing forward. This pressure squeezes the water through the filter. As soon as a water particle is through, it can continue flowing with no large pressure from behind. On this side of the filter, the pressure is lower. If you now add another filter on this side, then the water will not have any tendency to pass through it. There is such a low pressure here after the first filter that nothing is pushing the water through the next filter. The second filter is like a wall, and so the water will stop and stay here. But soon, more water molecules arrive through the first filter - so the accumulating amount of water here behind the first filter starts building up a pressure on this side, since there is soon not enough space for all the water. Then the pressure grows. The pressure difference across the first filter is now smaller (the pressure on one side is the same, but that on the other is larger).
There is now also a pressure to force water through the second filter. And so, water is squeezed through this second filter as well. On the other side, there is again no pressure. With one filter, the entire pressure-drop happened over that filter. With two filters, the total pressure-drop is shared between them. There is a smaller pressure-drop across each filter, and the drops sum up to the original pressure-drop. With a smaller pressure-drop, the water flow is also smaller. And this is how the flow and the pressure around a filter are influenced by other filters being nearby. In your case, this water-system analogy fits very well, with charges (water particles) flowing in a current (water flow, e.g. litres/second) due to a potential (pressure) difference across resistors (filters) along the wires (hose, pipe).
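The sharing of the pressure-drop translates directly into Ohm's-law arithmetic for two resistors in series; a minimal sketch with made-up component values:

```python
# Two "filters" (resistors) in series sharing one "pressure" (voltage) drop.
V = 9.0                 # total potential difference across the chain (volts)
R1, R2 = 100.0, 200.0   # the two resistances (ohms)

I_one = V / R1                  # current with only the first resistor
I_two = V / (R1 + R2)           # adding a resistor reduces the flow
V1 = I_two * R1                 # drop across the first resistor
V2 = I_two * R2                 # drop across the second resistor

print(I_one, I_two)   # 0.09 A vs 0.03 A: more resistance, less current
print(V1, V2)         # 3.0 V and 6.0 V -- they sum back to the full 9.0 V
```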
{ "domain": "physics.stackexchange", "id": 66837, "tags": "electrostatics, electric-circuits, electrons, electrical-resistance, potential-energy" }
Dichromate ion: Tetrahedral or bent?
Question: I am currently studying the VSEPR theory and I'm stuck on an ion, the dichromate(VI) ion $\ce{Cr2O7^2-}$. So this is what I am thinking about: 1) If I make the oxygen atom at the middle the centre atom, it is bent. 2) If I make the chromium atom the centre atom, it is tetrahedral. So, my problem is: is the dichromate(VI) ion bent or tetrahedral? Answer: Both: bent about oxygen and tetrahedral about chromium: $\hskip2.5in$
{ "domain": "chemistry.stackexchange", "id": 3271, "tags": "molecular-structure, vsepr-theory" }
How many calories in a block of wood?
Question: I was recently thinking about the human body and the energy we get from food. I understand we don't have the ability to properly digest certain things like stones and wood and grass, but if we did, I imagine that trees would be a great source of food. I was wondering: how many calories are in a piece of wood, let's say 1 cubic inch, for edibility's sake? I'm also not sure if the type of wood matters, but if it does, let's say something simple like oak or hemlock, and as dry as the piece would get at room temperature in a normal home. Answer: The principal constituent of wood is cellulose. Cellulose is a complex carbohydrate, made up of the same simple sugars as starch. The problem is the linkage between the simple sugars in cellulose. Digestion of complex carbohydrates involves the use of specific digestive enzymes to break specific links. For example, lactase breaks the disaccharide lactose into two simple sugars. Unfortunately, the enzyme cellulase is produced by only a few fortunate fungi, bacteria and snails. https://en.wikipedia.org/wiki/Cellulase So just treat wood like an equivalent amount of dried mashed potatoes: 4 Calories per gram (answer is case sensitive).
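So a back-of-the-envelope estimate for the 1 cubic inch in the question, under the loud assumptions that wood were fully digestible carbohydrate at 4 Cal/g and that air-dry oak has a density of about 0.7 g/cm³ (a typical handbook-style value, not from the answer):

```python
# Hypothetical estimate: wood treated as fully digestible carbohydrate.
CAL_PER_GRAM = 4.0            # kcal per gram of carbohydrate
OAK_DENSITY = 0.7             # g/cm^3, assumed air-dry oak
CM3_PER_CUBIC_INCH = 2.54 ** 3

mass_g = OAK_DENSITY * CM3_PER_CUBIC_INCH     # ~11.5 g per cubic inch
calories = CAL_PER_GRAM * mass_g              # ~46 Calories

print(round(mass_g, 1), round(calories))
```

Which comes out to roughly 46 Calories (kcal) per cubic inch of oak — if only we could digest it.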
{ "domain": "physics.stackexchange", "id": 51127, "tags": "thermodynamics, energy, physical-chemistry, estimation, biology" }
Why does ammonium chloride form white crystals?
Question: Why does ammonium chloride ($\ce{NH4Cl}$) form white crystals at the top of a test tube when heated? Answer: I suppose you are not asking why crystalline $\ce{NH4Cl}$ is white. Ammonium chloride decomposes upon heating: $$\ce{NH4Cl(s) ->[\Delta] NH3(g) + HCl(g)}$$ The top of the test tube is cold, and there the gases recombine to regenerate the original salt: $$\ce{NH3(g) + HCl(g) -> NH4Cl(s)}$$ This is the white crystal you see.
{ "domain": "chemistry.stackexchange", "id": 9569, "tags": "acid-base, experimental-chemistry" }
ROS Answers SE migration: what is tf
Question: Hey guys, I am all in all pretty new to computer science in general and learning ROS now. came across some tutorials on tf after completing the beginner tutorials. I want to use rososc but that's another story since it is not updated for kinetic and catkin and I'm still trying to work out how to make that work for me... anyway,

- explain what tf is used for as if I am a 5 year old.
- why would I use it in developing robots?
- what type of applications is it useful for?

Originally posted by moonspacedancer on ROS Answers with karma: 123 on 2017-09-07 Post score: 5 Original comments Comment by moonspacedancer on 2017-09-07: wow someone downvoted my question? am i supposed to be ashamed to be a beginner or something? sounds like you need to take a break if you can't even come up w a creative response and instead downvote the question. Comment by gvdhoorn on 2017-09-07: I don't think the downvote was because you are asking a beginners question. We all started like that, and it is definitely nothing to be ashamed of. I didn't downvote, but I can imagine some people don't appreciate the "like I'm 5" bit: ROS Answers is a beginners site, we already know that, so .. Comment by gvdhoorn on 2017-09-07: .. adding that is unnecessary and makes you seem unnecessarily insecure. If something in an answer is unclear, you can always just ask for additional clarification or explanation. Comment by moonspacedancer on 2017-09-07: oh ok, thanks it's based on a reddit group where people explain things like they are talking to extreme beginners. guess it's not some people's cup of tea. thank you. I just feel like if I don't say I am an absolute beginner it won't be concise enough for me to understand at this point. Answer: The question "what is TF" seems to be rather succinctly answered by its wiki page: tf2 is the second generation of the transform library, which lets the user keep track of multiple coordinate frames over time.
tf2 maintains the relationship between coordinate frames in a tree structure buffered in time, and lets the user transform points, vectors, etc between any two coordinate frames at any desired point in time. And the first section -- What does tf2 do? Why should I use tf2? -- has this to say: A robotic system typically has many 3D coordinate frames that change over time, such as a world frame, base frame, gripper frame, head frame, etc. tf2 keeps track of all these frames over time, and allows you to ask questions like: Where was the head frame relative to the world frame, 5 seconds ago? What is the pose of the object in my gripper relative to my base? What is the current pose of the base frame in the map frame? tf2 can operate in a distributed system. This means all the information about the coordinate frames of a robot is available to all ROS components on any computer in the system. Tf2 can operate with a central server that contains all transform information, or you can have every component in your distributed system build its own transform information database. I'm assuming you already read that, but it wasn't clear enough. If that is the case, it would help if you could clarify what wasn't clear and perhaps we can explain that. Edit: @gvdhoorn I did read all of this. I am asking even more basically: what do all of these 'frames' have to do w my actual robot in space and why do I need to transform points, vectors or anything? Right. I'm not going to explain this in too much detail, as that would be off-topic for this site (we're ROS Answers, not robotics.stackexchange.com). So almost everything in robotics is concerned with where things are - either relative to the robot itself or relative to other things. Whenever a robot wants to interact with the real world, it will need to know where the things it wants to interact with are. 
As robots are basically computers with sensors and actuators, they store references to objects as coordinates with some attached semantics. Those coordinates will need to be updated whenever either the robot moves, or other things move. And keeping track of all of those coordinates and updating them becomes a really involved task if you move beyond a simple robot or robot application: I've worked on applications that tracked locations of hundreds of objects, updating multiple times per second over several hours and in a large volume of space. We don't want to -- nor can we -- do that manually. And I also don't want to -- nor should we -- write software that does such things for each and every new robot that we happen to create. So this is where TF comes in: it's essentially a reusable library that implements a piece of functionality that allows any ROS node to create, retrieve and update locations (ie: frames) in an efficient and (relatively) easy way. The library takes care of integrating all the CRUD operations that happen distributed across your entire ROS node graph (ie: all the connected nodes that you started, which may run on many different machines connected via a network) to guarantee that you have an as consistent as possible view of the state of all those frames (ie: positions). It also keeps a buffer of all the changes, so you can walk backwards through time and see what the state (ie: position) of all frames was at a particular point in time. This is very helpful, as it allows you to corroborate sensor data with positional data, which makes things like mapping possible (as that needs answers to questions like: "where was my camera when it took this picture 5 seconds ago?"). is this for actual movement through space or what??? If you mean: does TF do anything with making my robot move, then: no. But it is certainly true that movement (as the first derivative of position wrt time) is related to TF, as it stores positions with an associated timestamp. 
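If it helps to make "transform" concrete: using TF itself requires a running ROS system, but the arithmetic it automates is just the composition of rigid-body transforms, which can be sketched with plain 4×4 homogeneous matrices (the names translation and compose below are illustrative, not TF API):

```cpp
#include <array>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Homogeneous transform for a pure translation (identity rotation).
Mat4 translation(double x, double y, double z) {
    Mat4 t{};
    for (int i = 0; i < 4; ++i) t[i][i] = 1.0;
    t[0][3] = x; t[1][3] = y; t[2][3] = z;
    return t;
}

// Compose two transforms: a point is mapped through b first, then a.
Mat4 compose(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}
```

Given world_T_base (where the base is in the world) and base_T_camera (where the camera is mounted on the robot), compose(world_T_base, base_T_camera) answers "where is the camera in the world frame?". TF does exactly this chaining automatically across a whole tree of frames, and additionally indexes it by time.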
Originally posted by gvdhoorn with karma: 86574 on 2017-09-07 This answer was ACCEPTED on the original site Post score: 6 Original comments Comment by moonspacedancer on 2017-09-07: @gvdhoorn I did read all of this. I am asking even more basically: what do all of these 'frames' have to do w my actual robot in space and why do I need to transform points, vectors or anything? is this for actual movement through space or what??? Comment by gvdhoorn on 2017-09-07: If you don't have any prior exposure to things like coordinate frames, transformations, linear algebra or robot kinematics (and dynamics), I can imagine that all of this seems rather strange to want to deal with. It would probably help if you can try to get a grasp of these concepts, as it will .. Comment by gvdhoorn on 2017-09-07: .. make working with libraries such as TF a lot more understandable. Comment by gvdhoorn on 2017-09-07: Note that for really simple robotics, all of this is probably not needed, but for anything a bit more complicated, dealing with spatial data with a temporal dimension is going to be a lot more feasible with something like TF. Comment by moonspacedancer on 2017-09-08: thank you @gvdhoorn this answer plus comments has me in a new place understanding that I need to learn some requisite robotics in general to really get into it. I do study math but am only in precalculus so looking forward to linear and will definitely look into kinematics. thank you :) Comment by moonspacedancer on 2017-09-08: like would TF be useful with a robot who uses machine vision and stuff like that? Comment by gvdhoorn on 2017-09-09:\ like would TF be useful with a robot who uses machine vision I think you should be able to answer that yourself: does machine vision at any point depend on / use the position of anything? If yes: TF can help. If no, probably not needed. Comment by SuleKayiran on 2021-08-13: Hello, your comment was quite explanatory for me. Thank you for myself. 
Also, I would like to ask: If we are using lidar as a sensor and we see that the movement in the lidar is quite late compared to the robot when we perform imaging in rviz, can the tf transformation be the reason for these delays? @gvdhoorn Comment by gvdhoorn on 2021-08-13: Don't post follow-up questions as comments under questions with already accepted answers. No one will see your question. Post a new question, after having made sure yours isn't already discussed (use Google and append site:answers.ros.org to your query).
{ "domain": "robotics.stackexchange", "id": 28792, "tags": "ros, beginner, transform" }
Identify dependencies when group_depend is involved
Question: I'm trying to navigate the tree of dependencies of some packages in ROS2 but I'm hitting a possible bug. For normal build_depend and buildtool_depend I can use ament list_dependencies or inspect the package.xml. However, some packages use group_depend. After reading about the format 3 spec, I see this is kind of a reverse dependency specification. Now I would have to find all packages that state they belong to the group. That seems too impractical to be done by hand. And ament crashes when asking it for dependencies (ROS2 built from sources as of today): $ ament list_dependencies rosidl_default_generators Traceback (most recent call last): File "/home/jano/opt/ros2/install_isolated/ament_tools/bin/ament", line 11, in <module> load_entry_point('ament-tools==0.4.0', 'console_scripts', 'ament')() File "/home/jano/opt/ros2/install_isolated/ament_tools/lib/python3.6/site-packages/ament_tools/commands/ament.py", line 88, in main rc = args.main(args) File "/home/jano/opt/ros2/install_isolated/ament_tools/lib/python3.6/site-packages/ament_tools/verbs/list_dependencies.py", line 97, in main g.extract_group_members(packages.values()) File "/home/jano/opt/ros2/install_isolated/ament_package/lib/python3.6/site-packages/ament_package/group_dependency.py", line 60, in extract_group_members assert g.evaluated_condition is not None AssertionError Any ideas on how to proceed besides grepping my way forward? Should I report this as a bug or am I doing something wrong? Thanks. Originally posted by amosteo on ROS Answers with karma: 43 on 2018-05-11 Post score: 0 Answer: In the meantime we have switched from using ament_tools for ROS 2 to use colcon (see design article). For the upcoming ROS 2 release Bouncy we recommend using the new tool - even though ament_tools will still be available in that release. Unfortunately that means that we will not spend any effort on fixing issues in the deprecated tool.
If you want to look into the problem and try to fix it we would certainly be happy to review / merge fixes. I think the problem is very similar to what was fixed in catkin_pkg recently. Maybe this diff helps. Using the new build tool you can get a list of all build, run, and test dependencies: colcon info <pkgname> The result will consider group dependencies. Originally posted by Dirk Thomas with karma: 16276 on 2018-06-22 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 30807, "tags": "ros, ros2, ament" }
struct sockaddr_storage initialization by network format string
Question: I am writing a wrapper library for network functionality in C and wanted to let the user initialize a struct sockaddr_storage with a format string. Format is like: "proto:host:port/servicename" Example: "tcp:localhost:8080", "udp:8.8.8.8:dns" Possible protocol strings: tcp, tcp4, tcp6, udp, udp4, udp6 Some questions I've got now. Am I using too many comments? Am I using too much register variables? Is this code readable? Do you see parts of the code that I can reduce? I get the following timing: - see my main function 2:2001::124:2144:153:80 - 0.000163 ms 3:172.217.16.195:80 - 0.000135 ms 0:127.0.0.1:8080 - 0.000013 ms 0:127.129.41.24:463 - 0.000005 ms Why is the time going rapidly down? Code: #include <stdio.h> #include <stdlib.h> #include <stdbool.h> #include <string.h> #include <errno.h> #include <assert.h> #include <unistd.h> #include <time.h> #include <netinet/ip.h> #include <arpa/inet.h> #include <netdb.h> typedef enum {tcp, tcp4, tcp6, udp, udp4, udp6} network_id; typedef struct Addr { network_id proto; /* protocol (can be): tcp, tcp4, tcp6, udp, udp4, udp6, _unix (unix), unixgram, unixpacket */ char addr[40]; /* string containng either ipv4 or ipv6 address */ int port; /* port number */ } Addr; #ifndef strncpy_s int strncpy_s(char *dest, size_t destsz, const char *src, size_t n) { if(dest == NULL) return -1; while(destsz-- && n-- && *src != '\0') *dest++ = *src++; *dest = '\0'; return 0; } #endif /* compare function that compares until a delimiter is found either in str1 or in str2 */ int compare_until_delim(const char *str1, const char *str2, char delim) { /* *str1 != delim xor *str2 == 0 and *str2 != delim xor *str2 == 0 */ while((!(*str1 != delim) != !(*str1 == '\0')) && (!(*str2 != delim) != !(*str2 == '\0'))) { if(*str1 != *str2) return false; str1++; str2++; } if(*str1 != *str2) return false; return true; } /* protocol table containing supported protocols for socket configuration */ const char *protocol_table[] = { "tcp:", "tcp4:",
"tcp6:", "udp:", "udp4:", "upd6:", }; /* simple compare until dilimiter check for the protocol table above */ int check_protocol(const char *str) { int i, table_size; table_size = sizeof(protocol_table) / sizeof(*protocol_table); for(i = 0; i < table_size ; i++) { if(compare_until_delim(protocol_table[i], str, ':')) return i; } return -1; /* not found */ } /** ipstr_to_addr - initializes struct Addr and struct sockaddr_in by string containing specific ip address format * return@int: (0 on success, -2 = inet_pton error, -1 = parsing error at start index 0, 1...n = error at index) * param@ipstr: format string like format "proto:host:service" * param@addr: struct Addr (internal use) (contains printable addr infos, except protocol) if NULL only the _sockaddr_storage * param@_sockaddr_in: struct sockaddr_in, will be setup regarding to the format string */ int ipstr_to_addr(const char *ipstr, Addr *addr, struct sockaddr_storage *_sockaddr_storage, int *socktype) { assert(_sockaddr_storage != NULL); register const char *start_ip, *end_ipv6; register const char *service_start; register char *domain_p; struct Addr local; char domain_buffer[256]; /* will temporary hold ipv4 and domain format to convert later from */ bool ipv6; int proto, port, err; if(addr == NULL) addr = &local; domain_p = domain_buffer; /* set service start to beginning of ipstr and use it as a seperator to service/port */ service_start = ipstr; ipv6 = false; err = 0; /* get internal protocol id */ proto = check_protocol(ipstr); if(proto == -1) { err = -1; goto cleanup_error; } service_start = service_start + strlen(protocol_table[proto]); /* if first char after protocol is [ check for ] to say it is an ipv6 address */ if(*service_start == '[') { /* go latest until null byte */ start_ip = service_start + 1; while(*service_start != '\0') { /* if enclosure bracket ] was found set ipv6 true and increment service_start */ if(*service_start == ']') { end_ipv6 = service_start; ipv6 = true; service_start++; 
break; } service_start++; } /* service_start should point to ':' in format string right before service/port */ } /* ip not ipv6 assume ipv4 or domain name */ if(ipv6 == false) { /* go latest until null byte */ while(*service_start != '\0') { /* service_start points to seperator ':', increment domain_p first and set to null byte */ if(*service_start == ':') { *domain_p = '\0'; break; } /* write a copy to domain_buffer */ *domain_p++ = *service_start++; } /* service_start should point to ':' in format string right before service/port too */ } /* if not ':' we should be at the end of the string which leads to incomplete format, return position of error */ if(*service_start != ':') { err = service_start - ipstr; goto cleanup_error; } service_start++; /* increase by one to get the real service start position */ port = atoi(service_start); /* try to get port number after ':' */ /* if port port was set to 0, try get port by name */ if(port == 0) { /* NOTE: maybe use getservent_r for thread safety */ struct servent *service = getservbyname(service_start, NULL); if(service == NULL) { err = service_start - ipstr; goto cleanup_error; } port = htons(service->s_port); /* convert to host byte order */ } /*********************/ /* setup struct Addr */ addr->proto = proto; addr->port = port; if(ipv6) { ipstr++; strncpy_s(addr->addr, sizeof(addr->addr), start_ip, end_ipv6 - start_ip); } else { ///* add check for ipv6 protocol later struct hostent *entry = gethostbyname(domain_buffer); strncpy(addr->addr, inet_ntoa(*(struct in_addr*)entry->h_addr_list[0]), sizeof(addr->addr)); } if(ipv6 && addr->proto == tcp) addr->proto = tcp6; else if(ipv6 && addr->proto == udp) addr->proto = udp6; /****************************/ /* setup struct sockaddr_in */ switch(addr->proto) { case tcp6: case udp6: ((struct sockaddr_in6*)_sockaddr_storage)->sin6_family = AF_INET6; break; case tcp: /* NOTE: add later should only be usable for listener if calls like 'tcp::port' */ case udp: /* NOTE: same as tcp 
*/ case tcp4: case udp4: ((struct sockaddr_in*)_sockaddr_storage)->sin_family = AF_INET; break; default: err = -3; /* if this happens and internal error occured because this function should check strict and returns if something not parsed correctly*/ goto cleanup_error; break; } /* set port */ ((struct sockaddr_in*)_sockaddr_storage)->sin_port = htons(addr->port); if(ipv6) { if(inet_pton(((struct sockaddr_in6*)_sockaddr_storage)->sin6_family, addr->addr, &((struct sockaddr_in6*)_sockaddr_storage)->sin6_addr) != 1) { err = -2; goto cleanup_error; } } else { /* ipv4 */ if(inet_pton(((struct sockaddr_in*)_sockaddr_storage)->sin_family, addr->addr, &((struct sockaddr_in*)_sockaddr_storage)->sin_addr) != 1) { err = -2; goto cleanup_error; } } /* set correct socket type */ if(addr->proto >= tcp && addr->proto <= tcp6) *socktype = SOCK_STREAM; /* socket is tcp socket */ else if(addr->proto >= udp && addr->proto <= udp6) *socktype = SOCK_DGRAM; /* socket is udp socket */ return err; /* error is 0 (success) */ /* if some error occure you land here handle the struct Addr properly */ cleanup_error: { *addr->addr = '\0'; addr->port = -1; addr->proto = -1; return err; } } int main(void) { Addr test; struct sockaddr_storage addr; int socktype; clock_t clock_start, clock_end; clock_start = clock(); printf("%d\n", ipstr_to_addr("tcp:[2001::124:2144:153]:http", &test, &addr, &socktype)); clock_end = clock(); printf("%d:%s:%d - ", test.proto, test.addr, test.port); printf("%f ms\n", (double) (clock_end - clock_start) / CLOCKS_PER_SEC); clock_start = clock(); ipstr_to_addr("udp:www.google.de:http", &test, &addr, &socktype); clock_end = clock(); printf("%d:%s:%d - ", test.proto, test.addr, test.port); printf("%f ms\n", (double) (clock_end - clock_start) / CLOCKS_PER_SEC); clock_start = clock(); ipstr_to_addr("tcp:localhost:8080", &test, &addr, &socktype); clock_end = clock(); printf("%d:%s:%d - ", test.proto, test.addr, test.port); printf("%f ms\n", (double) (clock_end - clock_start) 
/ CLOCKS_PER_SEC); clock_start = clock(); ipstr_to_addr("tcp:127.129.41.24:463", &test, &addr, &socktype); clock_end = clock(); printf("%d:%s:%d - ", test.proto, test.addr, test.port); printf("%f ms\n", (double) (clock_end - clock_start) / CLOCKS_PER_SEC); /* open listener with netcat and run this */ int sock = socket(addr.ss_family, SOCK_STREAM, 0); if(sock == -1) { perror("socket error"); exit(1); } if(connect(sock, (struct sockaddr*) &addr, sizeof addr)) { perror("connection failed"); exit(1); } send(sock, "Hallo Meister\n", 15, 0); close(sock); } Answer: Am I using too many comments? No. Am I using too much register variables? Yes. The register keyword is completely obsolete these days. It was intended as an optimisation hint to the compiler, which the compiler can shamelessly and freely ignore. A declaration of an identifier for an object with storage-class specifier register suggests that access to the object be as fast as possible. The extent to which such suggestions are effective is implementation-defined. Most compiler optimizers can do a better job at optimizing code, and you should leave the keyword where it belongs — in the past. Eliminate magic numbers: typedef struct Addr { network_id proto; /* protocol (can be): tcp, tcp4, tcp6, udp, udp4, udp6, _unix (unix), unixgram, unixpacket */ char addr[40]; /* string containng either ipv4 or ipv6 address */ int port; /* port number */ } Addr; Instead of that magic number 40, code could use INET_ADDRSTRLEN and INET6_ADDRSTRLEN defined in <netinet/in.h> INET_ADDRSTRLEN -- storage for an IPv4 address INET6_ADDRSTRLEN -- storage for an IPv6 address Similarly, in: char domain_buffer[256] 256 should be a named constant. #define DOMAIN_BUFSIZ 256 char domain_buffer[DOMAIN_BUFSIZ]; It eases maintainability; requires just one change to the constant instead of all occurrences of the number. Is this code readable?
Short-term memory and the field of vision are small: ipstr_to_addr() is too long, complex, and hurts readability. Consider breaking it down into 3-4 (or more) smaller functions. For instance, this: /* set correct socket type */ if(addr->proto >= tcp && addr->proto <= tcp6) *socktype = SOCK_STREAM; /* socket is tcp socket */ else if(addr->proto >= udp && addr->proto <= udp6) *socktype = SOCK_DGRAM; /* socket is udp socket */ deserves to be a separate set_socktype() function. So does this: if(ipv6) { if(inet_pton(((struct sockaddr_in6*)_sockaddr_storage)->sin6_family, addr->addr, &((struct sockaddr_in6*)_sockaddr_storage)->sin6_addr) != 1) { err = -2; goto cleanup_error; } } else { /* ipv4 */ if(inet_pton(((struct sockaddr_in*)_sockaddr_storage)->sin_family, addr->addr, &((struct sockaddr_in*)_sockaddr_storage)->sin_addr) != 1) { err = -2; goto cleanup_error; } } Do you see parts of the code that I can reduce? int compare_until_delim(const char *str1, const char *str2, char delim) { /* *str1 != delim xor *str2 == 0 and *str2 != delim xor *str2 == 0 */ while((!(*str1 != delim) != !(*str1 == '\0')) && (!(*str2 != delim) != !(*str2 == '\0'))) { if(*str1 != *str2) return false; str1++; str2++; } if(*str1 != *str2) return false; return true; } can be shortened to just: bool compare_until_delim(const char *str1, const char *str2, char delim) { /* *str1 != delim xor *str2 == 0 and *str2 != delim xor *str2 == 0 */ while((!(*str1 != delim) != !(*str1 == '\0')) && (!(*str2 != delim) != !(*str2 == '\0'))) { if(*str1++ != *str2++) { return false; } } return *str1 == *str2; } with no loss of readability. NB that I changed the return type from int to bool because the function returns true / false. Use the correct types regardless of any implicit conversions. 
And: int check_protocol(const char *str) { int i, table_size; table_size = sizeof(protocol_table) / sizeof(*protocol_table); for(i = 0; i < table_size ; i++) { if(compare_until_delim(protocol_table[i], str, ':')) return i; } return -1; /* not found */ } can be shortened to: int check_protocol(const char *str) { for (size_t i = 0; i < sizeof(protocol_table) / sizeof (*protocol_table); i++) { if(compare_until_delim(protocol_table[i], str, ':')) return i; } return -1; /* not found */ } Use a consistent style: While I do prefer the shorter one, use a consistent style throughout the code: while((!(*str1 != delim) // if(dest == NULL) if (!dest) // if(ipv6 == false) if (!ipv6) Simplify: // while((!(*str1 != delim) ... while ((*str1 == delim) ... /* -- @Toby */ atoi() lacks error checking: port = atoi(service_start); errno is not set on error so there is no way to distinguish between 0 as an error and as the converted value. No checks for overflow or underflow are done. Only base-10 input can be converted. It is recommended to instead use the strtol() and strtoul() family of functions in new programs. Limit the scope of variables and only declare where needed: //int i, table_size; // table_size = sizeof(protocol_table) / sizeof(*protocol_table); // for(i = 0; i < table_size ; i++) { // ... // } size_t table_size = sizeof(protocol_table) / sizeof(*protocol_table); // Use unsigned types for array and loop indexing. // sizeof() also returns size_t. for (size_t i = 0; i < table_size; i++) { ... } Don't define more variables than you need: We can forego table_size, as it's not used anywhere else in the function. for (size_t i = 0; i < sizeof(protocol_table) / sizeof(*protocol_table); i++) { ... } Or if that is too long for your liking, here's a handy function-like macro: #define ARRAY_CARDINALITY(x) (sizeof(x) / sizeof((x)[0])) for (size_t i = 0; ARRAY_CARDINALITY (protocol_table); i++) { ... } Specify internal linkage: Functions have external linkage by default in C. 
But you are not exporting anything from this single translation unit, so specify internal linkage for all functions except main(): // int check_protocol(const char *str) static int check_protocol (const char *str) Braces not required: //cleanup_error: { // *addr->addr = '\0'; // addr->port = -1; // addr->proto = -1; // return err; // } /* Following the convention that macros and goto labels are in all caps. * This makes it easier to spot. */ CLEANUP_ERROR: *addr->addr = '\0'; /* Note that `->` binds more tightly than `*. */ addr->port = -1; addr->proto = -1; return -1; Implementation doesn't match documentation: typedef struct Addr { network_id proto; /* protocol (can be): tcp, tcp4, tcp6, udp, udp4, udp6, _unix (unix), unixgram, unixpacket */ char addr[40]; /* string containng either ipv4 or ipv6 address */ int port; /* port number */ } Addr; I see 9 protocols in the above comment. But, there are only 6 in protocol_table. Prefer returning from main(): Use named constants EXIT_SUCCESS and EXIT_FAILURE as return values for main(): //if(sock == -1) { // perror("socket error"); // exit(1); //} if (sock == -1) { perror ("socket:"); return EXIT_FAILURE; } TCP is a byte stream: send() and recv() doesn't necessarily send and receive all bytes with a single call. send() returns the number of bytes actually sent out—this might be less than the number you told it to send! See, sometimes you tell it to send a whole gob of data and it just can't handle it. It'll fire off as much of the data as it can, and trust you to send the rest later. Remember, if the value returned by send() doesn't match the value in len, it's up to you to send the rest of the string. The good news is this: if the packet is small (less than 1K or so) it will probably manage to send the whole thing all in one go. Again, -1 is returned on error, and errno is set to the error number. 
— Beej's guide to Network Programming The convention is to call it in a loop until all bytes are sent, and check its return value. Upon a successful return, send() shall return the number of bytes sent. Otherwise, it returns -1 to indicate a failure, or 0 to indicate a closed connection. The return value of strncpy_s is discarded: // strncpy_s(addr->addr, sizeof(addr->addr), start_ip, end_ipv6 - start_ip); /* Add */ if (strncpy_s (addr->addr, sizeof addr->addr, start_ip, end_ipv6 - start_ip) == -1) { complain(); } Why return a value when you're just going to ignore it? Similarly, ipstr_to_addr is documented to return an error code, but its return value is discarded. /** ipstr_to_addr - initializes struct Addr and struct sockaddr_in by string containing specific ip address format * return@int: (0 on success, -2 = inet_pton error, -1 = parsing error at start index 0, 1...n = error at index) Use braces around if/else/for..: It's generally considered a good practice to use braces even if it's just a single statement. //if(ipv6 && addr->proto == tcp) // addr->proto = tcp6; // else if(ipv6 && addr->proto == udp) // addr->proto = udp6; if(ipv6 && addr->proto == tcp) { addr->proto = tcp6; } else if(ipv6 && addr->proto == udp) { addr->proto = udp6; } Consider what happens when you add another statement before the else and forgot to add braces? The first one would be rightly bounded to the if statement, but the second? It wouldn't remain conditional at all. This could lead to some nasty bugs. See, for instance, Apple's SSL/TSL bug. gethostbyname is an obsolete function: The gethostbyname*(), gethostbyaddr*(), herror(), and hstrerror() functions are obsolete. Applications should use getaddrinfo(3), getnameinfo(3), and gai_strerror(3) instead. strncpy_s risks invoking undefined behavior: while(destsz-- && n-- && *src != '\0') *dest++ = *src++; You did not check whether src was a valid pointer. Your code would invoke undefined behavior if src was a NULL pointer.
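To make that last suggestion concrete, here is a rough sketch of what the resolution core of ipstr_to_addr() could delegate to getaddrinfo(); resolve() is an illustrative name, only the first result is used, and error handling is reduced to the return code:

```cpp
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cstring>

// Resolve host + service into a sockaddr, IPv4 or IPv6 transparently.
// Returns 0 on success, or a getaddrinfo() error code (see gai_strerror()).
int resolve(const char *host, const char *service,
            struct sockaddr_storage *out, int *socktype)
{
    struct addrinfo hints;
    std::memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* accept either IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;  /* the caller could pass this in */

    struct addrinfo *res = nullptr;
    int rc = getaddrinfo(host, service, &hints, &res);
    if (rc != 0)
        return rc;

    /* take the first candidate; a real client would try them in order */
    std::memcpy(out, res->ai_addr, res->ai_addrlen);
    *socktype = res->ai_socktype;
    freeaddrinfo(res);
    return 0;
}
```

This one call replaces gethostbyname(), the manual inet_pton() branches, and the family/port switch: getaddrinfo() understands numeric addresses, hostnames, numeric ports, and service names like "http" alike.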
{ "domain": "codereview.stackexchange", "id": 44501, "tags": "c, networking, socket" }
Calculating the determinant of a matrix
Question: I wanted to do some exercise and came up with the idea of a good challenge (for my level of course). I tried to implement Laplace's algorithm for computing the determinant, recursively. #include <math.h> #include <stdlib.h> #include <vector> #include <iostream> using namespace std; int getMinimoCount = 0; //ignore these, just to keep track of the recursion. int calcDetCount = 0; void printMatrix ( vector< vector<double> > M) { //just does what it means int size = M.size(); for( int i = 0; i < size; i++ ) { cout << "\t"; for( int j = 0; j < size; j++ ) { cout << M[i][j] << "\t"; } cout << endl << endl << endl; } cout << endl; } vector< vector<double> > getMinimo( vector< vector<double> > src, int I, int J, int ordSrc ) { // Compute and return the minimum of the element I J // If the element is not in the Ith row or Jth column it will get copied to the minimum matrix getMinimoCount++; vector< vector<double> > minimo( ordSrc-1, vector<double> (ordSrc-1,0)); int rowCont = 0; for( int i=0; i < ordSrc; i++) { int colCont = 0; if ( i != I ) { for ( int j=0; j < ordSrc; j++) { if ( j != J ) { minimo[rowCont][colCont] = src[i][j]; colCont++; } }; rowCont++; } }; return minimo; } double calcDet( vector< vector<double> > src, int ord) { // Here be recursion. calcDetCount++; if ( ord == 2 ) { double mainDiag = src[0][0] * src[1][1]; double negDiag = src[1][0] * src[0][1]; return mainDiag - negDiag; } else { double det = 0; for( int J = 0; J < ord; J++) { vector< vector<double> > min = getMinimo( src, 0, J, ord); if ( (J % 2) == 0 ) { det += src[0][J] * calcDet( min, ord-1); } else { det -= src[0][J] * calcDet( min, ord-1); } }; return det; } } int main() { // Just some UI to gather the matrix. not really convinced of this. int ord; cout << "############## MATRIX DET ##############" << endl << endl; cout << " Matrix order: "; cin >> ord; cout << endl; vector <vector<double> > mainMatrix( ord, vector<double> (ord, 0)); cout << """ insert values one row at time. 
Top to bottom:\n\n"""; for ( int countY = 0; countY < ord; countY++) { for ( int countX = 0; countX < ord; countX++) { cin >> mainMatrix[countY][countX];}; }; system("CLS"); cout << "############## MATRIX DET ##############" << endl << endl; cout << endl << endl << " This is the input matrix:" << endl << endl << endl; printMatrix( mainMatrix ); system("PAUSE"); system("CLS"); cout << "############## MATRIX DET ##############" << endl << endl; cout << " Working...!" << endl; double det = calcDet( mainMatrix, ord ); system("CLS"); cout << endl << endl << "############## MATRIX DET ##############" << endl << endl; cout << " Det =\t" << det << endl << endl; cout << " getMinimo() chiamata: " << getMinimoCount << " volte" << endl; cout << " calcDet() chiamata: " << calcDetCount << " volte" << endl << endl; return 0; } The concept is simple: you have a matrix of order n. While doing this by hand you'd prefer choosing a row that's particularly math friendly; since it's a computer doing the dirty work it really doesn't care about what numbers it's multiplying. Every element a_IJ of a matrix has a minor. A minor is the determinant of the matrix without the I-th row and the J-th column. With this we can define the det of a matrix like so: Sum (-1)^(i+j) * a_ij * M_ij (where M_ij is the minor of the element a_ij) Once a matrix reaches order == 2 it just computes the determinant directly, since that is just a simple multiplication between 4 elements. At first I had problems finding something that could be used as a matrix object and be passed around from one function to another. I tried bi-dimensional arrays, but they aren't dynamic and I couldn't understand how I could pass an array object to a function. I was looking for some hidden matrix type or class to use but I had no luck whatsoever. I came up with "a vector of vectors" which worked but I'm not completely sure it's a really good idea. Not even counting that vector <vector<double> > looks awful.
Secondly, running some recursion tracking I found out that the time it takes ramps up so damn quickly: Order 3 = 4 calls Order 4 = 17 calls Order 5 = 86 calls ... Order 10 = 2.606.501 calls Although, even in the latter case, it doesn't take too much: 3 to 4 seconds. Is there a way to reduce the steepness of the curve? This way it gets out of hand way too soon. I don't have much programming experience, so I know almost nothing about optimization or good practice either. Since I have some fresh code to work with I'd like to know any errors I might be making and ways to optimize this algorithm. How costly is it to cast a type? The (type) x kind of cast, to be clear. What could be a better way to input a matrix from the user? Answer: Includes #include <math.h> #include <stdlib.h> You're using C++. You really ought to #include the C++ versions of the headers instead of the C ones: #include <cmath> #include <cstdlib> Namespace using namespace std; This is a very bad idea. It pollutes the global namespace with everything from std. If std gets updated and includes a symbol that conflicts with something in your project, you're in big trouble. It's not much work to prepend std:: to your objects, but if you're really that lazy, just import what you need: using std::vector; using std::cin; using std::cout; using std::endl; System calls system("CLS"); Please don't do this. It makes your code completely non-portable. If you really must use an OS-dependent feature like this, at the very least isolate it in its own function, so you only have to change it in one place instead of all through your code. Algorithm The way that's taught in high school for calculating the determinant of the matrix is rather inefficient (though simple to apply). The time is proportional to $n!$ -- that's right, factorial. Especially on a computer, it's much better to get the determinant by one of the decomposition methods (basically, extended Gaussian elimination). This is an $O(n^3)$ process instead.
Check out Wikipedia for some ideas.
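To make the suggestion concrete, here is a sketch of the O(n^3) route using Gaussian elimination with partial pivoting, reusing the vector-of-vectors representation from the question (det is an illustrative name, and the 1e-12 singularity threshold is an arbitrary choice):

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Determinant in O(n^3): reduce to upper-triangular form, tracking
// sign flips from row swaps; the result is the product of the diagonal.
double det(std::vector<std::vector<double>> m) {
    const std::size_t n = m.size();
    double d = 1.0;
    for (std::size_t col = 0; col < n; ++col) {
        // partial pivoting: bring the largest entry in this column up,
        // which also keeps the arithmetic numerically stable
        std::size_t pivot = col;
        for (std::size_t r = col + 1; r < n; ++r)
            if (std::fabs(m[r][col]) > std::fabs(m[pivot][col])) pivot = r;
        if (std::fabs(m[pivot][col]) < 1e-12) return 0.0;  // singular
        if (pivot != col) { std::swap(m[pivot], m[col]); d = -d; }
        d *= m[col][col];
        for (std::size_t r = col + 1; r < n; ++r) {
            double f = m[r][col] / m[col][col];
            for (std::size_t c = col; c < n; ++c) m[r][c] -= f * m[col][c];
        }
    }
    return d;
}
```

No recursion and no minors: the order-10 case that took millions of calls above becomes a few hundred multiply-add steps.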
{ "domain": "codereview.stackexchange", "id": 13340, "tags": "c++, matrix, vectors" }
What is the space complexity of breadth-first search?
Question: When using the breadth-first search algorithm, is the space complexity $O(b^d)$, where $b$ is the branching factor and $d$ the length of the optimal path (assuming that there is indeed one)? Answer: The space complexity of the breadth-first search algorithm is $O(b^d)$ in the worst case, and it corresponds to the largest possible number of nodes that may be stored in the frontier at once, where the frontier is the set of nodes (or states) that you are currently considering for expansion. You can take a look at section 3.5 (page 74) of the book Artificial Intelligence: A Modern Approach (3rd edition, by Norvig and Russell) for more info about the time and space complexity of BFS.
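This bound is easy to check empirically: run BFS over an implicit complete b-ary tree of depth d and record the largest size the frontier queue ever reaches. A small sketch (max_frontier is an illustrative name; since only depth matters for the count, each node is represented by its depth alone):

```cpp
#include <algorithm>
#include <cstddef>
#include <queue>

// Peak number of nodes held in the BFS frontier for a complete
// b-ary tree of depth d (root at depth 0, leaves at depth d).
std::size_t max_frontier(std::size_t b, std::size_t d) {
    std::queue<std::size_t> frontier;  // each entry is a node's depth
    frontier.push(0);
    std::size_t peak = 1;
    while (!frontier.empty()) {
        std::size_t depth = frontier.front();
        frontier.pop();
        if (depth == d) continue;      // leaves are not expanded
        for (std::size_t i = 0; i < b; ++i) frontier.push(depth + 1);
        peak = std::max(peak, frontier.size());
    }
    return peak;
}
```

The peak is reached just after the last depth-(d-1) node is expanded, when all b^d leaves sit in the queue at once, matching the $O(b^d)$ figure in the answer.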
{ "domain": "ai.stackexchange", "id": 2412, "tags": "search, breadth-first-search, branching-factors, space-complexity" }
Change rosconsole minimum log level
Question: I'm debugging a ROS node and I'm trying to find an easy way to change the output level of rosconsole so I can see the debug statements, but I'm not seeing much. I've seen something about changing the log4j configuration file, but this seems obtuse. There are ROS environment variables for just about everything else - is there nothing for easily changing verbosity? Originally posted by zacwitte on ROS Answers with karma: 170 on 2017-05-01 Post score: 0 Answer: It seems like there has been discussion about this previously, see this issue. Looks like the end decision was "open to implementation", but environment variables in general might not be sufficiently clean. Originally posted by BryceWilley with karma: 711 on 2018-08-27 This answer was ACCEPTED on the original site Post score: 0
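For later readers: the standard route (documented on the rosconsole wiki page) is a log4cxx-style configuration file selected through the ROSCONSOLE_CONFIG_FILE environment variable; the file path and package name below are illustrative:

```
# custom_rosconsole.config
log4j.logger.ros=INFO
log4j.logger.ros.my_package=DEBUG
```

Then run export ROSCONSOLE_CONFIG_FILE=/path/to/custom_rosconsole.config in the terminal before starting the node. This is a configuration fragment rather than runnable code, and applies to C++ nodes that use rosconsole.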
{ "domain": "robotics.stackexchange", "id": 27768, "tags": "ros, logging" }
class Taboo -- sort passed in List according to 'rules'
Question: I know as a matter of fact that there are many areas upon which my code could be improved. I was wondering if anyone can provide me with suggestions on how Taboo class and tests can be improved. Suggestions of any kind and scope are very welcomed. it would be even better if reasons can be provided along with the suggestions. Problem: class Taboo Most of the previous problems have been about single methods, but Taboo is a class. The Taboo class encapsulates a "rules" list such as {"a", "c", "a", "b"}. The rules define what objects should not follow other objects. In this case "c" should not follow "a", "a" should not follow "c", and "b" should not follow "a". The objects in the rules may be any type, but will not be null. The Taboo noFollow(elem) method returns the set of elements which should not follow the given element according to the rules. So with the rules {"a", "c", "a", "b"} the noFollow("a") returns the Set {"c", "b"}. NoFollow() with an element not constrained in the rules, e.g. noFollow("x") returns the empty set (the utility method Collections.emptySet() returns a read-only empty set for convenience). The reduce(List) operation takes in a list, iterates over the list from start to end, and modifies the list by deleting the second element of any adjacent elements during the iteration that violate the rules. So for example, with the above rules, the collection {"a", "c", "b", "x", "c", "a"} is reduced to {"a", "x", "c"}. The elements in bold -- {"a", "c", "b", "x", "c", "a"} -- are deleted during the iteration since they violate a rule. The Taboo class works on a generic type which can be any type of object, and assume that the object implements equals() and hashCode() correctly (such as String or Integer). In the Taboo constructor, build some data structure to store the rules so that the methods can operate efficiently -- note that the rules data for the Taboo are given in the constructor and then never change. 
A rules list may have nulls in it as spacers, such as {"a", "b", null, "c", "d"} -- "b" cannot follow "a", and "d" cannot follow "c". The null allows the rules to avoid making a claim about "c" and "b". Taboo class: package assign1; import java.util.ArrayList; import java.util.HashMap; import java.util.HashSet; import java.util.List; import java.util.Map; import java.util.Set; public class Taboo<T> { public Map<T, Set<T>> rulesMap = new HashMap<>(); /** * Constructs a new Taboo using the given rules (see handout.) * * @param rules rules for new Taboo */ public Taboo(List<T> rules) { for (int i = 1; i < rules.size(); i++) { T firstItem = rules.get(i - 1); T secondItem = rules.get(i); if (firstItem != null && secondItem != null) { if (rulesMap.containsKey(firstItem)) { rulesMap.get(firstItem).add(secondItem); } else { Set<T> set = new HashSet<>(); set.add(secondItem); rulesMap.put(firstItem, set); } } } } /** * Returns the set of elements which should not follow the given element. * * @param elem the given element (or the first of a pair of element) * @return elements which should not follow the given element */ public Set<T> noFollow(T elem) { return rulesMap.get(elem); } /** * Removes elements from the given list that violate the rules (see * handout). * * @param list collection to reduce * @return the list reduced according to the rules */ public List<T> reduce(List<T> list) { List<T> reducedList = new ArrayList<>(list); for (int secondItemPosition = 1; secondItemPosition < reducedList.size(); secondItemPosition++) { reducedList = removeViolation(secondItemPosition, reducedList); } return reducedList; } /** * this method does the actual comparison between the passed in list and * the rules in the 'rulesMap'. 
This method calls itself recursively * @param secondItemPosition * @param list * @return */ private List<T> removeViolation(int secondItemPosition, List<T> list) { List<T> localCopyOfList = new ArrayList<>(list); T firstItem = localCopyOfList.get(secondItemPosition - 1); T secondItem = localCopyOfList.get(secondItemPosition); if (rulesMap.containsKey(firstItem)) { if (rulesMap.get(firstItem).contains(secondItem)) { localCopyOfList.remove(secondItemPosition); if (secondItemPosition < localCopyOfList.size()) { localCopyOfList = removeViolation(secondItemPosition, localCopyOfList); } } } return localCopyOfList; } } Test class: import java.util.ArrayList; import java.util.Arrays; import java.util.HashMap; import java.util.HashSet; import java.util.List; import java.util.Map; import java.util.Set; import static org.junit.Assert.*; import org.junit.Test; public class TabooTest<T> { @Test public void testConstructorWithInteger() { Map<Integer, Set<Integer>> integerMap = new HashMap<>(); integerMap.put(1, new HashSet<>(Arrays.asList(2, 3))); integerMap.put(2, new HashSet<>(Arrays.asList(4))); List<Integer> integerList = new ArrayList<>(Arrays.asList(1, 2, 4, null, 1, 3)); Taboo<Integer> taboo = new Taboo<>(integerList); assertEquals(integerMap, taboo.rulesMap); } @Test public void testConstructorWithString() { Map<String, Set<String>> stringMap = new HashMap<>(); stringMap.put("hello", new HashSet<>(Arrays.asList("world", "morning"))); stringMap.put("world", new HashSet<>(Arrays.asList("hello", "morning"))); stringMap.put("morning", new HashSet<>(Arrays.asList("world", "hello"))); List<String> stringList = new ArrayList<>(Arrays. 
asList("hello", "world", "morning", "hello", "morning", "world", "hello")); Taboo<String> taboo = new Taboo<>(stringList); assertEquals(stringMap, taboo.rulesMap); } @Test public void testConstructorWithEmptyMap() { Map<Double, Set<Double>> doubleMap = new HashMap<>(); List<Double> doubleList = new ArrayList<>(Arrays.asList(1.1, null, 2.2, null, 3.3)); Taboo<Double> taboo = new Taboo<>(doubleList); assertEquals(doubleMap, taboo.rulesMap); } @Test public void testNoFollowMethodWithString() { Set<String> stringSet = new HashSet<>(Arrays.asList("world", "morning")); List<String> stringList = new ArrayList<>(Arrays. asList("hello", "world", "morning", "hello", "morning", "world", "hello")); Taboo<String> taboo = new Taboo<>(stringList); assertEquals(stringSet, taboo.noFollow("hello")); } @Test public void testNoFollowingMethodWithNull() { List<String> stringList = new ArrayList<>(Arrays. asList("hello", "world", "morning", "hello", "morning", "world", "hello")); Taboo<String> taboo = new Taboo<>(stringList); assertNull(taboo.noFollow("mf")); } @Test public void testReduceMethodWithInteger() { List<Integer> reducedList = new ArrayList<>(Arrays.asList(10, 10)); List<Integer> rulesIntegerList = new ArrayList<>(Arrays.asList(10, 20)); Taboo<Integer> taboo = new Taboo<>(rulesIntegerList); List<Integer> passedInList = new ArrayList<>(Arrays.asList(10, 20, 10, 20)); assertEquals(reducedList, taboo.reduce(passedInList)); } @Test public void testReduceMethodWithString() { List<String> rules = new ArrayList<>(Arrays.asList("he", "she", "i", "she")); Taboo<String> taboo = new Taboo<>(rules); List<String> passedInList = new ArrayList<>(Arrays.asList("he", "she", "i", "she", "i", "she", "i")); List<String> reducedList = new ArrayList<>(Arrays.asList("he", "i", "i", "i")); assertEquals(reducedList, taboo.reduce(passedInList)); } } Answer: Return unmodifiable collections The method noFollow method returns a direct reference to the set contained inside the rulesMap. 
public Set<T> noFollow(T elem) { return rulesMap.get(elem); } One of the issues with this approach is that a caller of that class can then modify this set and add their own rules, or even delete current rules, at will. Consider this: public static void main(String[] args) { Taboo<String> taboo = new Taboo<>(Arrays.asList("1", "2")); System.out.println(taboo.noFollow("1")); // prints [2] taboo.noFollow("1").clear(); System.out.println(taboo.noFollow("1")); // prints [], oops } To tackle this issue, you can return an unmodifiable set instead: public Set<T> noFollow(T elem) { return Collections.unmodifiableSet(rulesMap.get(elem)); } With this change, the above code would throw an UnsupportedOperationException, and not mess up the rules. Make sure to return non-null collections Called with an element that is not present in the rule map, noFollow will return null. This is in contradiction with the problem statement: NoFollow() with an element not constrained in the rules, e.g. noFollow("x") returns the empty set Additionally, it is always preferable to return an empty collection instead of null to avoid potential NullPointerException for the caller. As such, consider modifying your method to: public Set<T> noFollow(T elem) { Set<T> set = rulesMap.get(elem); return set == null ? Collections.emptySet() : Collections.unmodifiableSet(set); } Starting with Java 8, you could condense that a little with getOrDefault: public Set<T> noFollow(T elem) { return Collections.unmodifiableSet(rulesMap.getOrDefault(elem, Collections.emptySet())); } Collections.emptySet() does not create a new instance of an empty set; this set is cached, so no new unnecessary objects are being created. Note that this change will break one of your tests, namely testNoFollowingMethodWithNull. This test expected null to be returned. You'll need to update it to expect an empty set.
Consider iterators Your implementation of reduce works by having a private inner method to do the hard work, that is called recursively. The issue with this approach is that it creates a lot of temporary new lists, and makes the code a bit difficult to read. When traversing a collection to remove elements, an Iterator is generally the right tool for the job: it supports going forward and removing elements, which is exactly what we want here. Consider the following approach: public List<T> reduce(List<T> list) { List<T> reducedList = new ArrayList<>(list); if (reducedList.size() < 1) { return reducedList; } Iterator<T> iterator = reducedList.iterator(); T current = iterator.next(); while (iterator.hasNext()) { T next = iterator.next(); if (current != null && next != null && rulesMap.containsKey(current) && rulesMap.get(current).contains(next)) { iterator.remove(); } else { current = next; } } return reducedList; } What it does is pretty straightforward. If the list is empty, there is nothing to do, so it exits early. Otherwise, it gets the first element, and then loops through the rest. When the next element is contained inside the no-follow set, it removes that element and keeps the same current element. When the element is allowed, it keeps it and updates the current element, moving forward. Tests are testing too much, and not enough at the same time You have three unit tests that assert the content of the map rulesMap. You should not have that. The reason is that rulesMap is an implementation detail of your class that the public API is not concerned about. This map is never exposed to the client of the class, hence, its content should not be asserted with a unit test. A unit test should focus on the public API of the class tested. As such, in this case, you are only interested in testing the two methods noFollow and reduce. Testing anything else is too much. At the same time, you're not testing those public APIs enough.
For example, you have no unit test that covers the requirement: A rules list may have nulls in it as spacers, such as {"a", "b", null, "c", "d"} -- "b" cannot follow "a", and "d" cannot follow "c". The null allows the rules to avoid making a claim about "c" and "b". This should be tested independently for both noFollow and reduce in dedicated tests.
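For readers who want to poke at the single-pass idea outside Java, here is a rough Python transliteration of the same constructor and reduce logic (the names are mine, not from the review):

```python
def build_rules(rules):
    """Map each element to the set of elements that must not follow it,
    skipping None spacers, as in the Taboo constructor."""
    no_follow = {}
    for first, second in zip(rules, rules[1:]):
        if first is not None and second is not None:
            no_follow.setdefault(first, set()).add(second)
    return no_follow

def reduce_list(no_follow, items):
    """Single forward pass: drop any element that illegally follows the
    last element kept (mirrors the Iterator-based reduce)."""
    kept = []
    for item in items:
        if kept and item in no_follow.get(kept[-1], set()):
            continue  # violates a rule -> delete it
        kept.append(item)
    return kept

rules = build_rules(["a", "c", "a", "b"])
print(reduce_list(rules, ["a", "c", "b", "x", "c", "a"]))  # → ['a', 'x', 'c']
```

This reproduces the handout's worked example: with rules {"a", "c", "a", "b"}, the list {"a", "c", "b", "x", "c", "a"} reduces to {"a", "x", "c"}.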
{ "domain": "codereview.stackexchange", "id": 20615, "tags": "java, unit-testing, collections, junit, hash-map" }
Point group symmetries and unit cell
Question: I was wondering if the unit cell (of a given lattice) has to have every point group symmetry of the lattice it defines. I guess there is no unique way to define a unit cell and that it may not have all point group symmetries. However, is it possible to define a unit cell that has all point group symmetries? Answer: You can certainly take a unit cell which has the full point group symmetry and distort it in such a way that the symmetry is reduced. The Wigner-Seitz Cell provides you with a procedure to construct a unit cell which has the full point group symmetry. It is defined as the volume of points which are closer to one lattice point than to any other one. The interesting question would now be whether or not the Wigner-Seitz Cell is uniquely specified given the point group symmetry and the requirement of the unit cell to be a primitive one. I am not aware of a proof of this statement.
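The construction in the answer is easy to play with numerically. A small sketch (my own code, assuming a 2D lattice given by two primitive vectors): a point lies in the Wigner-Seitz cell of the origin iff it is no farther from the origin than from any nearby lattice point.

```python
from itertools import product

def in_wigner_seitz(point, a1=(1.0, 0.0), a2=(0.0, 1.0), shells=2):
    """True if `point` is at least as close to the origin as to every other
    lattice point within `shells` neighbor shells (enough for simple cells)."""
    px, py = point
    d0 = px * px + py * py
    for i, j in product(range(-shells, shells + 1), repeat=2):
        if (i, j) == (0, 0):
            continue
        lx = i * a1[0] + j * a2[0]
        ly = i * a1[1] + j * a2[1]
        if (px - lx) ** 2 + (py - ly) ** 2 < d0:
            return False
    return True

# square lattice: the WS cell is the square |x| <= 1/2, |y| <= 1/2
print(in_wigner_seitz((0.4, 0.4)))  # → True
print(in_wigner_seitz((0.6, 0.0)))  # → False
```

By construction the cell inherits every point group operation of the lattice, since those operations permute the lattice points and preserve distances.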
{ "domain": "physics.stackexchange", "id": 37396, "tags": "condensed-matter, symmetry, group-theory, crystals, unit-cell" }
Is it possible to include numerical values or constants within brackets (time complexity)?
Question: I am currently trying to learn a bit more about the theoretical side of computer science and I have stumbled upon this example I found online: $5n^2 + 5n = O(0.5n^2)$ I do have a basic understanding of how time complexity works, but it is the only example I have found that has a number attached to the $n$ (right side). Does it mean that the person writing that article made a mistake, or is it possible to include values this way (like in the definition: $0 ≤ f(n) ≤ cg(n)$ where $c = 0.5$ in this case)? Sorry if I missed an obvious aspect of this concept, but I try learning this stuff on my own and have nobody apart from the internet to ask for help. :) Answer: What you read is slightly unusual, but absolutely correct. Follow the definition of Big-O literally: it means there is a c > 0 such that for large n, f(n) <= c * 0.5 * n^2. It's exactly the same as O(n^2), except that the constant c is different (twice as big will work). In your example, for n >= 5 we have 5n <= n^2 and therefore 5n^2 + 5n <= 6n^2. So f(n) <= 12 * (0.5 n^2), i.e. c = 12 works for O(0.5n^2), just as f(n) <= 6 * n^2 means c = 6 works for O(n^2).
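A quick numeric sanity check of the bound (my own illustration): with $g(n) = 0.5n^2$, the constant $c = 12$ works for every $n \ge 5$, while a slightly smaller constant already fails at $n = 5$.

```python
def f(n):
    return 5 * n * n + 5 * n

def g(n):
    return 0.5 * n * n

# f(n) <= c * g(n) with c = 12 holds for every n >= 5
print(all(f(n) <= 12 * g(n) for n in range(5, 10000)))  # → True
# ...but c = 11 is not enough (it already fails at n = 5)
print(f(5) <= 11 * g(5))  # → False
```

This is the definition applied literally: only the existence of some constant matters, so the 0.5 inside the O(...) changes c, not the class of functions.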
{ "domain": "cs.stackexchange", "id": 15573, "tags": "time-complexity" }
Chess Knight minimum moves to destination on an infinite board
Question: There are tons of solutions for Knight's tour or shortest path for a Knight's movement from source cell to destination cell. Most of the solutions are using BFS, which seems the best algorithm. Here is my implementation using HashMap: public class Knight_HashMap { static HashMap<String, Position> chessboard = new HashMap<String, Position>(); static Queue<Position> q = new LinkedList<Position>(); static int Nx, Ny, Kx, Ky, Cx, Cy; public static void main(String[] args) { Scanner sc = new Scanner(System.in); System.out.println("insert Board dimentions: Nx, Ny"); Nx = sc.nextInt(); Ny = sc.nextInt(); System.out.println("inset Knight's location: Kx, Ky"); Kx = sc.nextInt(); Ky = sc.nextInt(); System.out.println("insert destination location: Cx, Cy"); Cx = sc.nextInt(); Cy = sc.nextInt(); sc.close(); // Assume the position for simplicity. In real world, accept the values using // Scanner. Position start = new Position(Kx, Ky, 0); // Position 0, 1 on the chessboard Position end = new Position(Cx, Cy, Integer.MAX_VALUE); chessboard.put(Arrays.toString(new int[] { Kx, Ky }), new Position(Kx, Ky, 0)); q.add(start); while (q.size() != 0) // While queue is not empty { Position pos = q.poll(); if (end.equals(pos)) { System.out.println("Minimum jumps required: " + pos.depth); return; } else { // perform BFS on this Position if it is not already visited bfs(pos, ++pos.depth); } } } private static void bfs(Position current, int depth) { // Start from -2 to +2 range and start marking each location on the board for (int i = -2; i <= 2; i++) { for (int j = -2; j <= 2; j++) { Position next = new Position(current.x + i, current.y + j, depth); if (isValid(current, next)) { if (inRange(next.x, next.y)) { // chessboard.put(Arrays.toString(new int[] { next.x, next.y }), next); // Skip if next location is same as the location you came from in previous run if (current.equals(next)) continue; Position position = chessboard.get(Arrays.toString(new int[] { next.x, next.y })); if
(position == null) { position = new Position(Integer.MAX_VALUE, Integer.MAX_VALUE, Integer.MAX_VALUE); } /* * Get the current position object at this location on chessboard. If this * location was reachable with a costlier depth, this iteration has given a * shorter way to reach */ if (position.depth > depth) { chessboard.put(Arrays.toString(new int[] { current.x + i, current.y + j }), new Position(current.x, current.y, depth)); // chessboard.get(current.x + i).set(current.y + j, new Position(current.x, // current.y, depth)); q.add(next); } } } } } private static boolean isValid(Position current, Position next) { // Use Pythagoras theorem to ensure that a move makes a right-angled triangle // with sides of 1 and 2. 1-squared + 2-squared is 5. int deltaR = next.x - current.x; int deltaC = next.y - current.y; return 5 == deltaR * deltaR + deltaC * deltaC; } private static boolean inRange(int x, int y) { return 0 <= x && x < Nx && 0 <= y && y < Ny; } } class Position { public int x; public int y; public int depth; Position(int x, int y, int depth) { this.x = x; this.y = y; this.depth = depth; } public boolean equals(Position that) { return this.x == that.x && this.y == that.y; } public String toString() { return "(" + this.x + " " + this.y + " " + this.depth + ")"; } } This works well with small dimensions, but with 10^9 x 10^9 I face an outOfMemory exception. I also tried a java 2d array, ArrayList, and HashMap alongside a LinkedList as Queue, but for the dimension 10^9 x 10^9, with any data structure, I face an outOfMemory exception. Is there a possibility of optimization to avoid outOfMemory, or any other way/data structure to handle such a huge dimension? Note: I should mention this question is from the BAPC17 contest, named Knight's Marathon Answer: Thanks to matt timmermans for his hint, I realized that for infinite chess boards no search algorithm (BFS, DFS, A*, Dijkstra) should be used: just exploit axis and diagonal symmetry and treat the start point as (0,0).
Just 2 corner cases should be hardcoded; adapted from here: System.out.println((int) distance(ENDx - STARTx, ENDy - STARTy)); public static double distance(int x, int y) { // axes symmetry x = Math.abs(x); y = Math.abs(y); // diagonal symmetry if (x < y) { int t = x; x = y; y = t; } // 2 corner cases if (x == 1 && y == 0) { return 3; } if (x == 2 && y == 2) { return 4; } // main formula int delta = x - y; if (y > delta) { return (delta - 2 * Math.floor((float) (delta - y) / 3)); } else { return (delta - 2 * Math.floor((delta - y) / 4)); } }
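Assuming the same closed form, here is a Python transliteration (my own port: Java's int division in the second branch maps to Python floor division, which agrees there since delta - y >= 0), with spot checks against known knight distances:

```python
def knight_distance(x, y):
    """Minimum knight moves from (0, 0) to (x, y) on an infinite board."""
    x, y = abs(x), abs(y)      # axes symmetry
    if x < y:                  # diagonal symmetry
        x, y = y, x
    if (x, y) == (1, 0):       # 2 corner cases
        return 3
    if (x, y) == (2, 2):
        return 4
    delta = x - y              # main formula
    if y > delta:
        return delta - 2 * ((delta - y) // 3)  # // floors the negative value
    return delta - 2 * ((delta - y) // 4)

print(knight_distance(5, 5), knight_distance(4, 0))  # → 4 2
```

Since the formula is O(1), a 10^9 x 10^9 board is no longer a memory problem at all; for a bounded board one would still need to check that the intermediate squares stay in range.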
{ "domain": "cs.stackexchange", "id": 13154, "tags": "algorithms, shortest-path, searching, board-games, space-analysis" }
recover files after groupTuple
Question: I have a process that emits BAM files with a key containing the sample name: process source{ output: tuple val(sample), path("sample.bam") into ch [...] } 1st question: I have several files per sample, let's assume 2 for now. I want to merge them together with samtools merge. I can regroup that channel with groupTuple(), but how can I recover them in the next process? Considering they all have the same name: grouped_ch = ch.groupTuple(by: 0, size: 2) process merge{ input: grouped_ch shell: ''' samtool merge sample.bam sample.bam > !{sample}.bam ''' } Is there perhaps a way in the input: block to say that I expect a list, and name the list components? 2nd question: In practice, I have a variable number of "sample.bam" files for each sample value, does that change the way to define the input:? It also appears I will need to use the built-in groupKey(), is there documentation available? I have read this Github issue, but it would help if a general description was available. Answer: My preference is to avoid working with files with the same name as much as possible. However, sometimes this might be unavoidable. Fortunately, file name collisions can be avoided when working with multiple input files by using the * and ? wildcards: When a target file name is defined in the input parameter and a collection of files is received by the process, the file name will be appended by a numerical suffix representing its ordinal position in the list. The target input file name can contain the * and ? wildcards, that can be used to control the name of staged files. Note that there is also the stageAs path option (see input of type 'path'), which can also take a name pattern (using the above wildcards) to let you use a variable in your script. 
For example: process samtools_merge { conda 'samtools=1.15.1' debug true stageInMode 'rellink' input: tuple val(sample), path(bam_files, stageAs: 'mysample*.bam') output: tuple val(sample), path("${sample}.bam") """ samtools merge "${sample}.bam" ${bam_files} ls -nl """ } workflow { Channel.fromPath( ['./path/to/files/*.bam', './path/to/more_files/*.bam'] ) \ | map { bam -> tuple( bam.baseName, bam ) } \ | groupTuple(by: 0, size: 2) \ | samtools_merge } Results: N E X T F L O W ~ version 22.04.0 Launching `script.nf` [elated_montalcini] DSL2 - revision: 1427d64a1d executor > local (3) [83/25654c] process > samtools_merge (3) [100%] 3 of 3 ✔ total 184 lrwxrwxrwx 1 1000 985 34 May 25 12:18 mysample1.bam -> ../../../path/to/files/sample1.bam lrwxrwxrwx 1 1000 985 39 May 25 12:18 mysample2.bam -> ../../../path/to/more_files/sample1.bam -rw-r--r-- 1 1000 985 187911 May 25 12:18 sample1.bam total 128 lrwxrwxrwx 1 1000 985 34 May 25 12:18 mysample1.bam -> ../../../path/to/files/sample2.bam lrwxrwxrwx 1 1000 985 39 May 25 12:18 mysample2.bam -> ../../../path/to/more_files/sample2.bam -rw-r--r-- 1 1000 985 129947 May 25 12:18 sample2.bam total 196 lrwxrwxrwx 1 1000 985 34 May 25 12:18 mysample1.bam -> ../../../path/to/files/sample3.bam lrwxrwxrwx 1 1000 985 39 May 25 12:18 mysample2.bam -> ../../../path/to/more_files/sample3.bam -rw-r--r-- 1 1000 985 197225 May 25 12:18 sample3.bam If each group contains a variable number of files, you can keep the same input declaration (as above) but you will want use a special groupKey object so that the collected values can be streamed as soon as possible. It's not well documented, but here's a simple example that sorts the input BAM files prior to merging. 
Once all of the 'samtools_sort' jobs are finished for a particular sample, that sample can proceed to merging without us having to wait for all the other 'samtools_sort' jobs to be finished (which would otherwise be the case if the 'size' parameter had not been specified): process samtools_sort { tag { bam.baseName } conda 'samtools=1.15.1' input: tuple val(sample), path(bam) output: tuple val(sample), path("${bam.baseName}.sorted.bam") """ samtools sort -o "${bam.baseName}.sorted.bam" "${bam}" """ } process samtools_merge { tag { sample } conda 'samtools=1.15.1' input: tuple val(sample), path(bam_files, stageAs: 'mysample*.bam') output: tuple val(sample), path("${sample}.bam") """ samtools merge "${sample}.bam" ${bam_files} """ } workflow { Channel.fromPath( params.bam_files ) \ | map { bam -> tuple( bam.baseName, bam ) } \ | groupTuple() \ | map { sample, files -> tuple( groupKey(sample, files.size()), files ) } \ | transpose() \ | set { bam_files } samtools_sort( bam_files ) \ | groupTuple() \ | samtools_merge \ | view() } Results: $ nextflow run script.nf -ansi-log false --bam_files './path/to/*/*.bam' N E X T F L O W ~ version 22.04.0 Launching `script.nf` [cheesy_swanson] DSL2 - revision: da1a82a059 [04/643129] Submitted process > samtools_sort (sample3) [1a/93a939] Submitted process > samtools_sort (sample1) [93/fde0a0] Submitted process > samtools_sort (sample3) [db/208bac] Submitted process > samtools_sort (sample3) [8f/f2a231] Submitted process > samtools_sort (sample2) [f3/8f0f9f] Submitted process > samtools_sort (sample2) [bb/cca23e] Submitted process > samtools_merge (sample1) [sample1, /home/steve/testing/work/bb/cca23e016a641b201b7e42f9db74ef/sample1.bam] [ff/bd3a2a] Submitted process > samtools_merge (sample2) [sample2, /home/steve/testing/work/ff/bd3a2ac83a0691d2b80b04ef97b755/sample2.bam] [94/a4f8eb] Submitted process > samtools_merge (sample3) [sample3, /home/steve/testing/work/94/a4f8eb483602e9335b28f5d40058c8/sample3.bam] Note that the val 
specified as input to 'samtools_sort' is an object of class nextflow.extension.GroupKey. In the example above, the input to 'samtools_merge' is also a groupKey - but it doesn't need to be. If you need a string value for whatever reason, you can just call '.toString()' to get it back: ... samtools_sort( bam_files ) \ | groupTuple() \ | map { sample_key, files -> tuple( sample_key.toString(), files ) } \ | samtools_merge \ | view()
{ "domain": "bioinformatics.stackexchange", "id": 2167, "tags": "nextflow" }
Applying kinematics issue
Question: I've been having an unusually difficult time solving kinematics problems in comparison to all the other students in the classroom. It appears that I'm one of the unfortunate people with an anti-math mind (although I've been expecting as much for some time now). In short, I have no idea how to solve this problem: A carpenter tosses a shingle off a $9.4m$ high roof, giving it an initial horizontal velocity of $7.2 m/s$. a. What is the final vertical velocity of the shingle? b. How long does it take to reach the ground? c. How far does it move horizontally in this time? I tried to organize the data I had and the data I needed to calculate: Horizontal: 1. time is unknown and the same as vertical time 2. initial position is $0m$ 3. final position is unknown 4. initial velocity is $7.2m/s$ 5. final velocity is $7.2m/s$ 6. acceleration must be $0$ because velocity doesn't change Vertical: 1. time is unknown and the same as horizontal time 2. initial position is $9.4m$ 3. final position is $0m$ 4. initial velocity is gravity, or $9.81m/s$ 5. final velocity is unknown 6. acceleration is $9.81m/s^2$ However, according to my unknowns and my equations, I cannot ultimately solve any of these because I cannot solve for time, which requires knowing distance, velocity, and acceleration, like this: $t = \frac{-v_i \pm \sqrt{v_i^2 + 2a(\pm d)}}{a}$ Is this correct and is there another way to find time with fewer unknowns? And none of my equations can explicitly find the initial or final positions of anything: $\Delta x = v_{ix} \Delta t + 0.5a_x \Delta t^2$ (also works with $y$) $v_{fx}^2 - v_{ix}^2 = 2a_x \Delta x$ (also works with $y$) $v_{fx} = v_{ix} + a_x \Delta t$ (also works with $y$) $\Delta y = 0.5(v_{fy} + v_{iy}) \Delta t$ $y = v_{avg} * t$ What I want to know is how to correctly solve this (with equations listed). Please do not just provide the answer; thanks in advance.
Answer: Let's solve the problem: Step 1: What is going on? We are standing on a roof and we throw an object off horizontally. The object will fly through the air and will eventually hit the ground because of gravity. It would be good to draw a picture at this point but I am too lazy. Step 2: What do we want to know? We want to know the final vertical velocity of the shingle, how long until the shingle hits the ground, and how far the shingle moves horizontally (how far the shingle lands from the base of the building). Step 3: What are we given? We are told the height of the building and how fast we throw the shingle. Step 4: How do we expect what we want to know to depend on what we know? Well if the building gets shorter, I would expect the time to fall to be less, and I would expect the shingle to hit the ground with less vertical velocity. I also expect the shingle to not land as far away from the building because it won't fly for as long. For example consider the extreme case where the height the shingle falls is zero. Then the shingle hits the ground immediately so the fall time is zero and its velocity is zero when it hits the ground. Also the shingle doesn't move horizontally at all. Now what if we throw it harder? I am not exactly sure how this will affect fall time or the velocity at which it hits the ground. Maybe it will stay in the air longer because you are throwing it so hard. Maybe it will hit the ground faster because it is moving faster. Similarly I am not sure how the vertical impact speed will be affected. But I am pretty sure that the harder I throw it, the farther it will land from the base of the building. For example, if I throw it with zero speed, it will just go straight down and there will be no horizontal displacement. Step 5: What physical concepts are at play to determine the relationships described in the previous step? Well we said that the taller the building is the longer the time the shingle will fall.
The concept behind this must have something to do with gravity, because gravity is what is making the shingle fall. Gravity is also responsible for the shingle having a final velocity. Now what about the horizontal motion? This must have something to do with the initial horizontal speed. So there must be some law relating a speed and a distance traveled. Step 6: What are the relevant laws having to do with these concepts? We said the shingle was going to fall because of gravity. We would like a quantitative law explaining how exactly the shingle falls because of gravity. The best we could ask for is having vertical displacement as a function of time. Fortunately this law is known. It goes like this $$\Delta y(\Delta t) = v_{0y}\Delta t - \frac{1}{2} g (\Delta t)^2,$$ where $\Delta y(\Delta t)$ is the $y$ displacement after a time $\Delta t$, $ v_{0y}$ is the initial velocity in the $y$ direction, and $g$ is the acceleration of gravity and we have taken up to be in the positive $y$ direction. Notice that the displacement in the $y$ direction doesn't care about velocity in the $x$ direction. We can now ask what we know and don't know for our problem. We do know $\Delta y$ (the height of the building), $v_{0y}$ (it is zero), and $g$. We don't know $\Delta t$. Fortunately this is the only unknown in the equation. Thus we will be able to solve for $\Delta t$ and we will then know how long it takes the object to fall. Another thing we need to find is the final velocity of the shingle. The velocity was also due to gravity, so we must find a law relating a velocity to gravity. We know that gravity causes constant acceleration, so the change in velocity is proportional to the elapsed time. The constant of proportionality is the acceleration. Thus we get the law $$\Delta v_y(\Delta t) = -g \Delta t.$$ At this point, the only unknown is $\Delta v_y$, so this equation determines that value. The other thing we need to find is the horizontal displacement of the shingle.
We know the initial speed we threw it at, and we need its horizontal displacement. There is nothing causing the shingle to accelerate besides gravity, and we know gravity only affects motion in the vertical direction and not the horizontal direction, so the law we want to use is $$\Delta x (\Delta t) = v_{0x} \Delta t,$$ where $\Delta x$ is the change in the $x$ position and $v_{0x}$ is the initial velocity in the $x$ direction. The only unknown in this equation is $\Delta x$, so this equation determines $\Delta x$. Step 7: Use the laws to find the answers Our first law is $\Delta y(\Delta t) = v_{0y}\Delta t - \frac{1}{2} g (\Delta t)^2$. For us, $\Delta t$ is $t_f$, the time it takes the shingle to fall, and $\Delta y(\Delta t)$ is $-H$, where $H$ is the height of the building. We are told that the shingle is thrown horizontally so $v_{0y}$ is zero. The law then becomes $-H = - \frac{1}{2} g t_f^2.$ We were asked for $t_f$. Solving for this we find $$t_f = \sqrt{\frac{2 H}{g}}.$$ Next we will use our second law to find the final $y$ velocity of the shingle. The second law was $\Delta v_y(\Delta t) = -g \Delta t.$ Again, $\Delta t$ is $t_f$. Our $\Delta v_y(\Delta t)$ is actually just the final $y$ velocity $v_{fy}$ since the initial $y$ velocity is $0$. Thus we find $$v_{fy} = -g t_f = -g\sqrt{\frac{2 H}{g}} = -\sqrt{2 H g}.$$ Last we should use the law for the horizontal displacement. This law was $\Delta x (\Delta t) = v_{0x} \Delta t.$ In our case $\Delta x (\Delta t)$ is the horizontal displacement of the shingle when it hits the ground $x_f$. $v_{0x}$ is the horizontal velocity we threw the shingle at. The law we get is $$x_f = v_{0x} t_f = v_{0x} \sqrt{\frac{2 H}{g}}.$$ Step 8: Check dimensions of our answers We check our answers. We will first check the dimensions. We will use $L$ to denote a dimension of length and $T$ to denote a dimension of time. We will start with $t_f$. We found $t_f = \sqrt{\frac{2 H}{g}}$.
The dimensions we get for $t_f$ are $\sqrt{\frac{L}{L/T^2}} = \sqrt{T^2} = T$, so we get the right dimensions. Now let's check the dimensions of $v_{fy}$. We found $v_{fy} = -\sqrt{2 H g}$. Thus the dimensions of $v_{fy}$ are $\sqrt{L \cdot L/T^2} = \sqrt{(L/T)^2} = L/T$, which are the dimensions of speed. Finally let's check the dimensions of $x_f$. We found $x_f = v_{0x} \sqrt{\frac{2 H}{g}}$. The dimensions of $x_f$ are $L/T \sqrt{\frac{L}{L/T^2}} = L/T \sqrt{T^2} = L/T \cdot T = L$. These dimensions make sense.

Step 9: Check that answers scale as expected

We should check to make sure the things we are being asked for depend on what we are given as we expected in Step 4. Let's start with $t_f = \sqrt{\frac{2 H}{g}}$. We see that $t_f$ gets bigger when $H$ gets bigger, as we expected. We also find that it has no dependence on $v_{0x}$, which is what we realized should be the case. Also we see that if $g$ were to get stronger, $t_f$ would decrease, which makes sense.

Now consider $v_{fy} = -\sqrt{2 H g}$. We see that as $H$ gets bigger, $v_{fy}$ gets bigger in magnitude, as expected. Also we see that if gravity were to get stronger, $v_{fy}$ would also increase in magnitude. This makes sense.

Finally consider $x_f = v_{0x} \sqrt{\frac{2 H}{g}}$. We see that if we were to throw the shingle harder it would go farther, as expected. Also, the higher the building is (bigger $H$), the farther the shingle goes, as expected. If gravity were to get stronger, we predict the shingle wouldn't go as far. This makes sense because the shingle would hit the ground a lot sooner.

Step 10: Plug in numbers

Now is a good time to plug in numbers (note that steps 8 and 9 are impossible if you plug in numbers too soon). We find $t_f = 1.4 \mathrm{\ s}$, $v_{fy} = -14 \mathrm{\ m/s}$, and $x_f = 10. \mathrm{\ m}$.

I think an important thing about solving these problems is doing the first five steps. Usually these tell you a lot about what equations you should think about using.
Another important thing is to not bother plugging in numbers until the very end.
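The numerical results in Step 10 can be reproduced with a short Ruby calculation. Note that the givens here are assumptions: the answer's numbers are consistent with a building height of $H = 10\ \mathrm{m}$ and an initial horizontal speed of $v_{0x} = 7\ \mathrm{m/s}$, but those values are not stated in this excerpt.

```ruby
# Projectile kinematics from the walkthrough above.
# Assumed givens (the original problem statement is not shown here):
H   = 10.0   # building height in m (assumption)
V0X = 7.0    # initial horizontal speed in m/s (assumption)
G   = 9.8    # gravitational acceleration in m/s^2

t_f  = Math.sqrt(2 * H / G)   # time to fall: t_f = sqrt(2H/g)
v_fy = -G * t_f               # final vertical velocity: -g t_f = -sqrt(2Hg)
x_f  = V0X * t_f              # horizontal range: v0x * sqrt(2H/g)

puts format("t_f = %.1f s, v_fy = %.0f m/s, x_f = %.0f m", t_f, v_fy, x_f)
# => t_f = 1.4 s, v_fy = -14 m/s, x_f = 10 m
```

With these assumed givens, $t_f = 10/7\ \mathrm{s}$ exactly, which makes the other two results come out as round numbers.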
{ "domain": "physics.stackexchange", "id": 10258, "tags": "homework-and-exercises, kinematics" }
Close electric field lines in wave guides
Question: In a wave guide, plots of the propagation of Transverse Magnetic (TM) modes show closed field lines for the electric field. For example, for a rectangular guide: $E_x (x,y,z) = \frac {-j\beta m \pi}{a k^2_c} B_{mn}\cos\frac{m\pi x}{a}\sin\frac{n\pi y}{b}e^{-j(\beta z + \omega t)}$ $E_y (x,y,z) = \frac {-j\beta n \pi}{b k^2_c} B_{mn}\sin\frac{m\pi x}{a}\cos\frac{n\pi y}{b}e^{-j(\beta z + \omega t)}$ $E_z (x,y,z) = B_{mn}\sin \frac{m\pi x}{a}\sin\frac{n\pi y}{b}e^{-j(\beta z + \omega t)}$ Is it possible to have closed lines for the electric field?

Answer: Yes, it is possible. Maxwell's equations say $$ \oint_l \vec{E}\; d\vec{l} = -\frac{1}{c}\int_{S(l)}\frac{\partial \vec{B}}{\partial t} d\vec{S}. $$ The circulation of the electric field around this closed line is proportional to the rate of change of the magnetic flux through it. There is no problem with energy conservation. An electron moving along this closed line will be accelerated, but it consumes the energy we expend to keep the field changing. This effect is used in some particle accelerators, while the reverse effect is used in most microwave sources, where the EMW consumes the kinetic energy of moving electrons.
{ "domain": "physics.stackexchange", "id": 2273, "tags": "electromagnetism, waves" }
Is there a DNA analogue to ribozymes?
Question: If not, is it impossible for DNA to have enzymatic activity?

Answer: Not very common, and not found so far in nature, but they exist and are called deoxyribozymes.

Additional information: Deoxyribozymes are the equivalent of ribozymes in the DNA world and can act as catalysts for different biochemical reactions, such as DNA cleavage. While DNAzymes (the short name) have been synthesized in a laboratory context (in vitro) and shown to be active, no DNA molecule has been observed to have enzymatic activity in vivo, which is of course not definitive proof that none exists in nature. The latter point is a major difference compared to ribozymes, which have been shown to be active and functional in living cells.
{ "domain": "biology.stackexchange", "id": 3599, "tags": "dna, rna, enzymes" }
What color would you see if you place 2 mirrors opposite each other when one is a one-way mirror
Question: What would you see if you have 2 huge mirrors (you can't look over them or next to them)? You place them with the reflecting sides facing each other and you look in the mirror from behind one mirror (one-way mirror = one side glass, one side mirror, like in a movie when cops are questioning a suspect). The mirrors would reflect each other endlessly. The experiment could be performed with a see-through mirror (one side mirror, other side glass), but I don't have one; I also don't have huge mirrors. Would it make a difference whether or not you place a light source between them? My guess is that you would always see black because there is nothing to mirror.

Edit, important: There is nothing in the room to reflect, so the mirrors only reflect each other; that's what I meant.

Answer: I understand that you are saying that both mirrors reflect 100 percent, yet somehow you are still able to see through one of them. Assuming this is possible, let's first suppose you can see the frame of the mirror you are looking through reflected in the mirror in front of you. The reflected image of the mirror would keep getting smaller and smaller and finally vanish into a point, because you would no longer be able to resolve it; it would be like a sort of pyramid with its apex far inside the mirror. Now, in your original question you have removed even the frame, so you will see nothing, not even a single dot. Since all the light is reflected, let's assume it is white light: the first reflection would be a big white board, and because of the 100% reflection the light would not diminish, so a smaller white board would be formed inside it. Since there is no difference in brightness, you would not be able to distinguish this image, nor any of the images formed afterwards. It would be like looking into floodlights from up close: a uniformly bright field with nothing to resolve.
{ "domain": "physics.stackexchange", "id": 11288, "tags": "visible-light, reflection" }
Regular expression for a binary string containing even number of 0's
Question: To get the regular expression I made a finite automaton as follows (I'm not sure if you can write the regular expression directly without one): The regular expression for the above, according to me, should be $(1+01^*0)^*$, but elsewhere I have seen it can be $(1^*01^*01^*)^*$. Why is it different?

Answer: There are (infinitely) many regular expressions for every regular language. Your approach gives you one (good job on a structured approach!); others give you others. Consider, for instance, two distinct yet equivalent NFAs translated using Thompson's construction.
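A brute-force check makes this concrete: enumerate every binary string up to some length and test each candidate against the defining property (an even count of 0s). One caveat, which the sketch below also demonstrates: the literal $(1^*01^*01^*)^*$ from the question fails on all-ones strings such as "1", since every repetition of the starred group contains exactly two 0s; the usual textbook variant is $1^*(01^*01^*)^*$. This is a hypothetical Ruby sketch, not from the original post.

```ruby
# Two regexes intended to describe binary strings with an even number
# of 0s, anchored and written in Ruby syntax (formal union '+' -> '|'):
r1 = /\A(1|01*0)*\z/      # the asker's (1 + 01*0)*
r2 = /\A1*(01*01*)*\z/    # common textbook variant 1*(01*01*)*

# Compare both against the defining property for all strings up to length 10.
ok = (0..10).all? do |len|
  (0...(1 << len)).all? do |bits|
    s = len.zero? ? "" : format("%0#{len}b", bits)
    expected = s.count("0").even?
    s.match?(r1) == expected && s.match?(r2) == expected
  end
end
puts ok   # => true
```

Swapping `r2` for the question's literal `(1*01*01*)*` makes the check fail on `"1"`, which is a quick way to see why the leading `1*` matters.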
{ "domain": "cs.stackexchange", "id": 18453, "tags": "formal-languages, finite-automata, regular-expressions" }
Given a divergence-free vector field, how can I find the related vector potential without guesswork?
Question: I am reading through Griffiths' Electrodynamics 4e cover to cover, skipping no sections and doing all of the problems (sans the ones for masochists) as a passion project. I came to problem 1.53, in which I must find a vector potential $A$ such that $$ F=\nabla \times A $$ given that $F=x^2\hat{x}+3xz^2\hat{y}-2xz\hat{z}$, and that $A_x, A_y, A_z$ are the $x$, $y$, and $z$ components of the vector potential $A$, respectively.

My initial attempt was to equate the components of $F$ with the components of $\nabla\times A$ and integrate to obtain $A_x$, $A_y$, and $A_z$ from the partials (essentially, I followed the solution to problem 1.50 in this link https://physicsisbeautiful.com/resources/introduction-to-electrodynamics/problems/1.50/solutions/1.50-1.51.pdf/wVKSeo2RpyFGeobAJcwXh4/). After setting all the terms that show up from integrating to zero (which is allowed, right?), I obtained a vector potential of $$ A=(xz^3-2xyz,-\frac{3}{2}x^2z,\frac{1}{2}x^2y-\frac{3}{2}x^2z^2) $$ which does not satisfy the condition $F = \nabla\times A$, although it does have some similar pieces.

I checked whatever internet solutions I could get my hands on: the textbook solutions, stemjock, physicsisbeautiful, and even this website. Many of the solutions just simplified it with "choose (this term) to be zero" without any sort of explanation. I was not happy with these hand-wavy guess-work answers (unfortunately; I would be much farther along in my cover-to-cover journey had I accepted them). I decided to try the procedure described in https://math.stackexchange.com/questions/81405/anti-curl-operator.
My work is shown here: $$ \nabla \times F = -6xz\hat{x}+2z\hat{y}+3z^2\hat{z} $$ which means that $$F_1(xt,yt,zt)=-6xzt^2$$ $$F_2(xt,yt,zt)=2zt$$ $$F_3(xt,yt,zt)=3z^2t^2$$ Plugging these into the integrals for $G_1, G_2, G_3$, I obtain values of $$G_1=\frac{2}{3}z^2-\frac{3}{4}yz^2$$ $$G_2=\frac{9}{4}xz^2$$ $$G_3=-\frac{3}{2}xyz-\frac{2}{3}xz$$

Edit: In the 2nd link, the question asked is how one can use the components of $F$ to find $A$ (at least, that's what I understood and that's what I'm looking for). However, his example starts with the vector $H$. Then he takes the curl of $H$, uses the components as his $F_1, F_2, F_3$, and then uses the formula to undo the curl of $H$. Doesn't this just give him... $H$ instead of $F$?

So here we are, yet again with another post asking about solving for vector potentials. I have already looked at many Stack Exchange posts that talk about problems like this and found that they did not do it for me and this specific problem. If anyone can provide a resource that goes through a similar problem, that may be all I need to set this problem to rest and come up with my own solution.

Answer: Too long for a comment: $$ \mathbf{F}=\begin{pmatrix}x^2\\3xz^2\\-2xz\end{pmatrix}\,, \quad \mathbf{A}=\int_0^1t\,\mathbf{F}(t\,\mathbf{x})\,dt\times\mathbf{x} $$ solves $\mathbf{F}=\nabla\times\mathbf{A}$ (albeit not uniquely) and gives \begin{align} \boldsymbol{A}&=\int_0^1\begin{pmatrix}t^3x^2\\3t^4xz^2\\-2t^3xz\end{pmatrix}\,dt\times \begin{pmatrix}x\\y\\z\end{pmatrix} =\begin{pmatrix}x^2/4\\3xz^2/5\\-xz/2\end{pmatrix}\times \begin{pmatrix}x\\y\\z\end{pmatrix}= \begin{pmatrix}3xz^3/5+xyz/2\\-x^2z/2-x^2z/4\\x^2y/4-3x^2z^2/5\end{pmatrix}\\ &=\frac{x}{20}\begin{pmatrix}12z^3+10yz\\-15xz\\5xy-12xz^2\end{pmatrix}\,. \end{align}
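The closed form $\mathbf{A} = \frac{x}{20}(12z^3+10yz,\,-15xz,\,5xy-12xz^2)$ can be sanity-checked numerically: approximate $\nabla\times\mathbf{A}$ with central differences at a sample point and compare with $\mathbf{F}$. A hypothetical Ruby sketch (the field definitions are copied from the formulas above; the sample point and step size are arbitrary choices):

```ruby
# F = (x^2, 3xz^2, -2xz) and the candidate potential
# A = (x/20)(12z^3 + 10yz, -15xz, 5xy - 12xz^2) from the answer above.
FIELD_F = ->(x, y, z) { [x**2, 3.0 * x * z**2, -2.0 * x * z] }
FIELD_A = ->(x, y, z) {
  [x / 20.0 * (12 * z**3 + 10 * y * z),
   x / 20.0 * (-15 * x * z),
   x / 20.0 * (5 * x * y - 12 * x * z**2)]
}
STEP = 1e-5

# Central-difference partial derivative of component i with respect to axis j.
def partial(field, i, j, point)
  hi = point.dup; hi[j] += STEP
  lo = point.dup; lo[j] -= STEP
  (field.call(*hi)[i] - field.call(*lo)[i]) / (2 * STEP)
end

def curl(field, point)
  [partial(field, 2, 1, point) - partial(field, 1, 2, point),  # dAz/dy - dAy/dz
   partial(field, 0, 2, point) - partial(field, 2, 0, point),  # dAx/dz - dAz/dx
   partial(field, 1, 0, point) - partial(field, 0, 1, point)]  # dAy/dx - dAx/dy
end

point = [1.3, -0.7, 2.1]
error = curl(FIELD_A, point).zip(FIELD_F.call(*point)).map { |c, f| (c - f).abs }.max
puts "max |curl(A) - F| = #{error}"   # tiny, limited only by the finite differences
```

This kind of check doesn't replace the symbolic derivation, but it catches sign and coefficient slips quickly, which is exactly where the hand-integration approach went wrong.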
{ "domain": "physics.stackexchange", "id": 96427, "tags": "homework-and-exercises, potential, vector-fields" }
Track a moving object
Question: If this has already been answered, by all means please point me to it.

I am in the process of building a quadcopter which I eventually plan to run autonomously by allowing it to track an object and take a video feed of it moving. GPS is one of the options I've considered, basically:

- GPS antenna on the moving object (person, car, bike, surfer)
- GPS antenna on the quadcopter
- Radio to transmit coordinates from the moving object to the quadcopter

Some of the challenges I can foresee are:

- Line of sight for the camera. How does the camera know exactly where to point?
- Angle. How can I pre-program the quad to always record, say, 10 m to the right of the moving object, or even better, program a set of angles to record from whilst keeping up with the object?
- GPS accuracy. What happens if the GPS lock is weak?

What are some of my other options? I saw this TED Talk where the quads are following a ball-shaped sensor? I believe it uses Kinect cameras and lots of them, which is not really an option for this challenge. So I'm open to hearing some ideas before I start research and development of these features.

Answer: A Vicon motion capture system is used in the TED Talk that you referenced. It is similar to a Kinect in that it gives the 3D coordinates of any object being tracked within its field of view. Like the Kinect it uses IR, but it uses IR-reflective balls attached to the quadrotor and the presenter's pointer to identify and track objects. Not to mention it is a considerably more expensive system. Systems like Vicon and the Kinect do not work outdoors because they use IR and the sun blinds them.

The GPS solution you propose is a common approach to this problem for outdoor venues. Another approach might be to use Quuppa, an RF-based motion capture system.

Regardless of which tracking method you use, once you know the position of the robot and the target, the problem of calculating the direction to the target, or ahead of it, is a basic geometric problem.
Subtracting the robot position from the target position will give you the direction to point the camera. To track a few feet ahead of the target, you would first add a vector to the target position representing the direction of travel and the lead distance (10 m in your example).

If you don't actually need to know the location of the robot, only its relative position to the target, then you could use computer vision techniques to track the target and move with it.

Of course you will need to avoid obstacles in the process. Also, the obstacles themselves could occlude your view, in which case you would need some way to predict the target's movement. This could be particularly difficult to come up with if you're tracking humans outdoors.

The reality is that the problem you are addressing is a highly active area of research. I know of several research groups, both in academia and industry, that are trying to address this problem. I have only scratched the surface here.
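The "basic geometric problem" described above reduces to a few vector operations. A hypothetical Ruby sketch using the standard `matrix` library (the function name, positions, and the 10 m lead are illustrative assumptions, not part of any real autopilot API):

```ruby
require 'matrix'

# Direction the camera should point, given the robot and target positions,
# the target's unit direction of travel, and a lead distance.
# All names here are illustrative, not from any real autopilot API.
def camera_direction(robot_pos, target_pos, travel_dir, lead = 10.0)
  aim_point = target_pos + travel_dir.normalize * lead  # lead the target
  (aim_point - robot_pos).normalize                     # unit aiming vector
end

robot  = Vector[0.0, 0.0, 15.0]   # quadcopter hovering 15 m up
target = Vector[20.0, 5.0, 0.0]   # e.g. a surfer at ground level
dir    = Vector[1.0, 0.0, 0.0]    # target moving along +x

puts camera_direction(robot, target, dir)
```

With these numbers the camera aims along `(30, 5, -15)` normalized: slightly down and ahead of the target's current position.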
{ "domain": "robotics.stackexchange", "id": 287, "tags": "arduino, quadcopter, gps" }
Checking whether two strings are isomorphic
Question: I was trying to solve a problem to determine if 2 strings are isomorphic. They are isomorphic when each character in string one can be replaced with another character in string 2. Two different characters may not map to the same value, and the same character must not map to different values. Also, the lengths of the strings must be the same. I've used exceptions for logic flow:

    module Isomorphism
      class NotIsomorphicException < StandardError
        attr_reader :object

        def initialize(object)
          @object = object
        end
      end

      def self.isomorphic? string_a, string_b
        begin
          self.isomorphism string_a, string_b
          true
        rescue NotIsomorphicException => _
          return false
        end
      end

      protected

      def self.isomorphism string_a, string_b
        raise NotIsomorphicException.new("Unequal length") unless string_a.size == string_b.size
        isomorphism_map = {}
        associations = string_a.chars.zip string_b.chars
        associations.each do |char_one, char_two|
          if(isomorphism_map.key? char_one)
            raise NotIsomorphicException.new("Single character needs to map to multiple characters, not isomorphic") unless isomorphism_map[char_one] == char_two
          elsif(isomorphism_map.value? char_two)
            raise NotIsomorphicException.new("Two characters map to same value")
          end
          isomorphism_map[char_one] = char_two
        end
      end
    end

Please let me know if it's okay to code this way and any improvements I might make. Following are the specs, just in case:

    describe Isomorphism do
      it "should return true for single character words" do
        expect(Isomorphism.isomorphic? "t", "d").to be true
      end

      it "should return true for empty strings" do
        expect(Isomorphism.isomorphic? "", "").to be true
      end

      it "should return false if strings are of unequal length" do
        expect(Isomorphism.isomorphic? "dh", "dhruv").to be false
      end

      it "should return false when 2 characters need to map to same character" do
        expect(Isomorphism.isomorphic? "ab", "aa").to be false
      end

      it "should return true for egg and add" do
        expect(Isomorphism.isomorphic? "egg", "add").to be true
      end

      it "should return false for foo and bar" do
        expect(Isomorphism.isomorphic? "foo", "bar").to be false
      end

      it "should return true for paper and title" do
        expect(Isomorphism.isomorphic? "paper", "title").to be true
      end

      it "should return false for aab and aaa" do
        expect(Isomorphism.isomorphic? "aab", "aaa").to be false
      end
    end

Answer: Looking at your isomorphism helper function, the flow of control is kind of nasty. In particular, burying an unless in here

    if …
      raise Exception.new(…) unless …
    elsif …
      raise …
    end

is too tricky, because:

- You already have an if-elsif expression, and it would be better to work it into the same chain.
- It's at the end of a very long line, where it is barely visible.
- unless, being a negation, is automatically less preferable to an affirmative if.

Switching the naming convention from string_a, string_b to char_one, char_two is not ideal. Why not char_a, char_b? Or better yet, just a, b?

Doing Hash lookups in reverse using value? is not efficient. You would be better off making a "forward" map and an "inverse" map. See below for an even better way to decompose the problem.

As for the use of exceptions internally, I think it's a bad idea. It's just a cumbersome way to return false with commentary. You could just write a comment to avoid the overhead. Better yet, if you write the code well, I don't think there is a need for any comment. For example, in the implementation below, b != expected_b is pretty self-explanatory.
    module Isomorphism
      def self.isomorphic?(string_a, string_b)
        surjective?(string_a, string_b) && surjective?(string_b, string_a)
      end

      protected

      def self.surjective?(string_a, string_b)
        return false if string_a.size != string_b.size
        surjection = {}
        string_a.chars.zip(string_b.chars).each do |a, b|
          expected_b = surjection[a]
          if expected_b && (b != expected_b)
            return false
          end
          surjection[a] = b
        end
        true
      end

      # Alternate implementation: shorter, but won't detect failure as quickly
      def self.surjective?(string_a, string_b)
        return false if string_a.size != string_b.size
        pairs = string_a.chars.zip(string_b.chars)
        surjection = Hash[pairs]
        pairs.all? { |a, b| surjection[a] == b }
      end
    end
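To see the suggested design in action without RSpec, here is a self-contained sketch pairing the shorter variant from the review with the asker's original spec cases (the protected declaration is omitted here for brevity):

```ruby
module Isomorphism
  def self.isomorphic?(string_a, string_b)
    surjective?(string_a, string_b) && surjective?(string_b, string_a)
  end

  def self.surjective?(string_a, string_b)
    return false if string_a.size != string_b.size
    pairs = string_a.chars.zip(string_b.chars)
    surjection = Hash[pairs]   # later pairs win, so conflicts show up in all?
    pairs.all? { |a, b| surjection[a] == b }
  end
end

# The same cases the RSpec suite covers:
cases = {
  ["t", "d"]         => true,
  ["", ""]           => true,
  ["dh", "dhruv"]    => false,
  ["ab", "aa"]       => false,
  ["egg", "add"]     => true,
  ["foo", "bar"]     => false,
  ["paper", "title"] => true,
  ["aab", "aaa"]     => false,
}
cases.each do |(a, b), expected|
  got = Isomorphism.isomorphic?(a, b)
  puts "#{a.inspect} vs #{b.inspect}: #{got} (expected #{expected})"
end
```

The "ab"/"aa" case shows why both directions are checked: the forward map {a=>a, b=>a} is consistent, but the reverse check catches the two-characters-to-one collapse.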
{ "domain": "codereview.stackexchange", "id": 13404, "tags": "algorithm, strings, ruby, rspec" }