Conservation of crystal momentum
Question: I am trying to convince myself that crystal momentum is conserved in a periodic lattice modulo a reciprocal lattice vector. Consider a Hamiltonian $H$ which is periodic under translations of a Bravais lattice vector. The canonical momentum operator $\mathbf{P} = (P_x,P_y,P_z)$ is the generator of translations, so I can write my translation operator as $$ T(\mathbf{a}) = e^{i \mathbf{a} \cdot \mathbf{P}}, \quad \mathbf{a} \in \mathbb{R}^3.$$ However, for a periodic Hamiltonian, the full symmetry is broken down to translations within the Bravais lattice only. I would express this symmetry as $[ T(\mathbf{a}) , H] =0$ for any Bravais lattice vector $\mathbf{a}$. Now, substituting my translation operator into the commutator, I find $$ \mathbf{a} \cdot[ \mathbf{P} , H] = 0$$ If my system had the full translation symmetry, I could factor out the $\mathbf{a}$ to conclude that each component of the momentum is conserved: $[P_i, H] = 0$. However, as we are restricted to the Bravais lattice, I can only conclude that $ \mathbf{a} \cdot \mathbf{P}$ is conserved and I would rename $\mathbf{P}$ as the crystal momentum. I am unsure how I arrive at the fact that the crystal momentum is conserved modulo a reciprocal lattice vector. I imagine it has something to do with assuming I can bring down the exponent in the commutator. I can see why the exponential does not define momentum uniquely, however if I had full translational symmetry, I would be able to say the exponent is conserved. What is different here? Answer: There is no need to expand the exponential at all. Let the lattice have basis $\mathbf{a}_i$. The fact that $$[e^{i \mathbf{a}_i \cdot \mathbf{P}}, H] = 0, \quad [e^{i \mathbf{a}_i \cdot \mathbf{P}}, e^{i \mathbf{a}_j \cdot \mathbf{P}}] = 0$$ indicates that we can simultaneously diagonalize the $e^{i \mathbf{a}_i \cdot \mathbf{P}}$ and $H$. 
Since $e^{i \mathbf{a}_i \cdot \mathbf{P}}$ is unitary, its eigenvalues are pure phases, so we may define $$e^{i \mathbf{a}_i \cdot \mathbf{P}} |\psi \rangle = e^{i \phi_i} |\psi \rangle.$$ Now, because the $\mathbf{a}_i$ form a basis of $\mathbb{R}^3$, there exist vectors $\mathbf{k}$ so that $$e^{i \phi_i} = e^{i \mathbf{a}_i \cdot \mathbf{k}}.$$ We can then call $\mathbf{k}$ the "crystal momentum". The reason that $\mathbf{k}$ is only defined up to multiples of reciprocal lattice vectors is because we have not specified $\mathbf{k}$ anywhere in this argument, only its exponential. Indeed, if we add a reciprocal lattice vector $\mathbf{b}_j$, then the phases change by $e^{i \mathbf{a}_i \cdot \mathbf{b}_j} = e^{2 \pi i \delta_{ij}} = 1$ by the definition of the reciprocal lattice. For full translational symmetry, you can take $\mathbf{a}$ infinitesimal and Taylor expand the exponential, giving $[\mathbf{a} \cdot \mathbf{P}, H] = 0$, and then since $\mathbf{a}$ is arbitrary we have $[\mathbf{P}, H] = 0$. But for the lattice translations, expanding the exponential isn't really clean, and it's not necessary either.
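As a quick sanity check of the argument above, a few lines of numpy confirm that shifting $\mathbf{k}$ by an integer combination of reciprocal lattice vectors leaves every eigenphase $e^{i \mathbf{a}_i \cdot \mathbf{k}}$ untouched (the cubic lattice and the particular $\mathbf{k}$ below are arbitrary illustrative choices, not from the original post):

```python
import numpy as np

# Rows of A are the lattice basis vectors a_i; a simple cubic lattice
# with lattice constant 2.0 is an assumed example.
A = np.eye(3) * 2.0

# Reciprocal basis B (rows b_j) defined by a_i . b_j = 2*pi*delta_ij
B = 2 * np.pi * np.linalg.inv(A).T

k = np.array([0.3, -0.7, 1.1])   # some crystal momentum
phases = np.exp(1j * A @ k)      # eigenvalues e^{i a_i . k}

# Shift k by an integer combination of reciprocal lattice vectors
k_shifted = k + B.T @ np.array([1, -2, 3])
phases_shifted = np.exp(1j * A @ k_shifted)

# The phases, and hence the labeling of the eigenstate, are unchanged
assert np.allclose(phases, phases_shifted)
```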
{ "domain": "physics.stackexchange", "id": 67735, "tags": "momentum, solid-state-physics, conservation-laws, symmetry, crystals" }
Probability of position in linear SHM?
Question: The problem that got me thinking goes like this: Find $dp/dx$ where $p$ is the probability of finding a body, at a random instant of time, undergoing linear SHM according to $x=a\sin(\omega t)$. Plot the probability versus displacement graph. ($x$ = displacement from the mean position.) My work: $$v=dx/dt=\omega \sqrt{a^2-x^2}$$ The probability of finding the body between $x$ and $x+dx$ is $dt/T$, where $dt$ is the time it spends there and $T$ is the total period. Therefore $$dp=dt/T=\frac{dx}{\pi \sqrt{a^2-x^2}}$$ because $T=2\pi /\omega$ and a factor of 2 accounts for the fact that the body passes through the interval twice in one oscillation. This matches the given answer and also satisfies the condition that $\int_{-a}^{a} dp =1$. But when I try to find $p$ as a function of $x$ to plot the graph, I get $$p=\frac{1}{\pi}\arcsin(x/a)+C.$$ I then get stuck, as I know of no way to find $C$ (except that for $C=0$ the probability at the mean position would be $0$, so $C$ cannot equal $0$). So how can I get a constraint on $C$ to find its value and hence properly graph it, subject to the condition that the total probability from $-a$ to $a$ is 1? Answer: The differential $dp(x)$ is the probability of finding the body in an interval of length $dx$ centered at $x$. The quantity $p$ you are looking for is the cumulative distribution function, $$P(x)=\int_{-\infty}^x \frac{dp}{dx'}(x')\, dx',$$ which is the probability that the particle will be to the left of the point $x$. Since the particle cannot be to the left of $-a$, you can fix $C$ by requiring that $P(-a)=0$. This then gives $P(0)=1/2$, as expected. It's just a matter of being precise as to exactly what you are calculating.
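The condition $P(-a)=0$ described in the answer fixes $C=1/2$, which can be checked numerically. A minimal sketch (the amplitude $a=1$ is an assumed value, not part of the original problem):

```python
import numpy as np

a = 1.0  # amplitude, an assumed value

def P(x, a=a):
    """CDF of position for x = a*sin(wt): P(x) = (1/pi)*arcsin(x/a) + 1/2."""
    return np.arcsin(x / a) / np.pi + 0.5

# Fixing C = 1/2 via P(-a) = 0 gives the expected endpoint values:
assert abs(P(-a)) < 1e-12         # particle is never left of -a
assert abs(P(0.0) - 0.5) < 1e-12  # half the time left of the mean
assert abs(P(a) - 1.0) < 1e-12    # total probability is 1
```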
{ "domain": "physics.stackexchange", "id": 7545, "tags": "homework-and-exercises, harmonic-oscillator, probability, oscillators" }
Explaining The Unbelievable Pendulum Catch
Question: What would be a theoretical explanation of an "ideal" 14:1 mass ratio in this experiment, also demonstrated in this video? The experiment ties one nut to one end of the string and 14 nuts to the other, then holds the string like this and lets go: The end with the single nut ends up wrapped round your finger and stops the nuts falling to the floor: Why is a 14:1 mass ratio required for this to happen? EDIT: Here is the set of equations I'm trying myself for this problem: $ l(t) = r^2(t) \alpha'(t) \\ T(t) = \text{max}(\mu g - k \alpha(t), 0) \\ l'(t) = g \cos{\alpha(t)} r(t) + T(t) r_0 \\ r''(t) = g \sin{\alpha(t)} - T(t) $ With $l(t)$ - angular momentum divided by smaller mass, $T(t)$ - string tension, $\mu$ - mass proportion, $k$ - friction coefficient, $r_0$ - pivot radius, $r(t)$ - string length, $\alpha(t)$ - string angle. Answer: TL;DR: Mass ratio = 14 is not particularly special, but it is in a special region of mass ratios (about 11 to 14) that has optimal properties to wind the rope around the finger as much as possible. If you want to know why read the answer. If you just want to look at pretty gifs check it out (hat tip to @Ruslan for the animation idea). One can actually learn a lot from these movies! Especially if one considers that friction kicks in after probably about 2 windings to stop the rope from slipping along the finger, one can identify which mass ratios should work in practice. Only the experiment can tell the full result since there are a lot more factors not considered in the model here (air resistance, non-ideal rope, finite finger thickness, finger movement...). 
Code for the animations, if you want to run it yourself or adapt the equations of motion to something fancy (such as including friction):

    import matplotlib.pyplot as plt
    import matplotlib as mpl
    from matplotlib import cm
    import numpy as np
    # integrator for ordinary differential equations
    from scipy.integrate import ode

    def eoms_pendulum(t, y, params):
        """
        Equations of motion for the simple model.
        I was too dumb to do the geometry elegantly, so there are case distinctions...
        """
        # unpack #
        v1_x, v1_y, v2, x1, y1, y2 = y
        m1, m2, g, truncate_at_inversion = params
        if x1 <= 0 and y1 <= 0:
            # calc helpers #
            F1_g = m1*g
            F2_g = m2*g                       # _g for "gravity"
            L_swing = np.sqrt(x1**2 + y1**2)  # distance of mass 1 to the pendulum pivot
            Theta = np.arctan(y1/x1)          # angle
            dt_Theta = (v1_y/x1 - v1_x*y1/x1**2)/(1. + y1**2/x1**2)  # derivative of arctan
            help_term = -F2_g/m2 - F1_g/m1 * np.sin(Theta) - v1_x*np.sin(Theta)*dt_Theta + v1_y*np.cos(Theta)*dt_Theta
            F_r = help_term / (-1./m1 - 1./m2)  # _r for "rope", this formula comes from requiring a constant length rope
            # calc derivatives
            dt_v1_x = (F_r*np.cos(Theta)) / m1
            dt_v1_y = (-F1_g + F_r*np.sin(Theta)) / m1
            dt_v2 = (F_r - F2_g) / m2
            dt_x1 = v1_x
            dt_y1 = v1_y
            dt_y2 = v2
        elif x1 >= 0 and y1 <= 0:
            # calc helpers #
            F1_g = m1*g
            F2_g = m2*g
            L_swing = np.sqrt(x1**2 + y1**2)
            Theta = np.arctan(-x1/y1)
            dt_Theta = -(v1_x/y1 - v1_y*x1/y1**2)/(1. + x1**2/y1**2)
            help_term = -F2_g/m2 - F1_g/m1 * np.cos(Theta) - v1_x*np.cos(Theta)*dt_Theta - v1_y*np.sin(Theta)*dt_Theta
            F_r = help_term / (-1./m1 - 1./m2)
            # calc derivatives
            dt_v1_x = (-F_r*np.sin(Theta)) / m1
            dt_v1_y = (-F1_g + F_r*np.cos(Theta)) / m1
            dt_v2 = (F_r - F2_g) / m2
            dt_x1 = v1_x
            dt_y1 = v1_y
            dt_y2 = v2
        elif x1 >= 0 and y1 >= 0:
            # calc helpers #
            F1_g = m1*g
            F2_g = m2*g
            L_swing = np.sqrt(x1**2 + y1**2)
            Theta = np.arctan(y1/x1)
            dt_Theta = (v1_y/x1 - v1_x*y1/x1**2)/(1. + y1**2/x1**2)
            help_term = -F2_g/m2 + F1_g/m1 * np.sin(Theta) + v1_x*np.sin(Theta)*dt_Theta - v1_y*np.cos(Theta)*dt_Theta
            F_r = help_term / (-1./m1 - 1./m2)
            # calc derivatives
            dt_v1_x = (-F_r*np.cos(Theta)) / m1
            dt_v1_y = (-F1_g - F_r*np.sin(Theta)) / m1
            dt_v2 = (F_r - F2_g) / m2
            dt_x1 = v1_x
            dt_y1 = v1_y
            dt_y2 = v2
        elif x1 <= 0 and y1 >= 0:
            # calc helpers #
            F1_g = m1*g
            F2_g = m2*g
            L_swing = np.sqrt(x1**2 + y1**2)
            Theta = np.arctan(-y1/x1)
            dt_Theta = -(v1_y/x1 - v1_x*y1/x1**2)/(1. + y1**2/x1**2)
            help_term = -F2_g/m2 + F1_g/m1 * np.sin(Theta) - v1_x*np.sin(Theta)*dt_Theta - v1_y*np.cos(Theta)*dt_Theta
            F_r = help_term / (-1./m1 - 1./m2)
            # calc derivatives
            dt_v1_x = (F_r*np.cos(Theta)) / m1
            dt_v1_y = (-F1_g - F_r*np.sin(Theta)) / m1
            dt_v2 = (F_r - F2_g) / m2
            dt_x1 = v1_x
            dt_y1 = v1_y
            dt_y2 = v2
        if truncate_at_inversion:
            if dt_y2 > 0.:
                return np.zeros_like(y)
        return [dt_v1_x, dt_v1_y, dt_v2, dt_x1, dt_y1, dt_y2]

    def total_winding_angle(times, trajectory):
        """
        Calculates the total winding angle for a given trajectory
        """
        dt = times[1] - times[0]
        v1_x, v1_y, v2, x1, y1, y2 = [trajectory[:, i] for i in range(6)]
        dt_theta = (x1*v1_y - y1*v1_x) / (x1**2 + y1**2)  # from cross-product
        theta_tot = np.cumsum(dt_theta) * dt
        return theta_tot

    ################################################################################
    ### setup ###
    ################################################################################
    trajectories = []
    m1 = 1
    m2_list = np.arange(2, 20, 2)[0:9]
    ntimes = 150
    for m2 in m2_list:
        # params #
        params = [
            m1,    # m1
            m2,    # m2
            9.81,  # g
            False  # If true, truncates the motion when m2 moves back upwards
        ]
        # initial conditions #
        Lrope = 1.0  # Length of the rope, initially positioned such that m1 is L from the pivot
        init_cond = [
            0.0,       # v1_x
            0.,        # v1_y
            0.,        # v2
            -Lrope/2,  # x1
            0.0,       # y1
            -Lrope/2,  # y2
        ]
        # integration time range #
        times = np.linspace(0, 1.0, ntimes)
        # trajectory array to store result #
        trajectory = np.empty((len(times), len(init_cond)), dtype=np.float64)
        # helper #
        show_prog = True
        # check eoms at starting position #
        #print(eoms_pendulum(0, init_cond, params))

        ############################################################################
        ### numerical integration ###
        ############################################################################
        r = ode(eoms_pendulum).set_integrator('zvode', method='adams', with_jacobian=False)  # integrator and eoms
        r.set_initial_value(init_cond, times[0]).set_f_params(params)  # setup
        dt = times[1] - times[0]  # time step
        # integration (loop time step)
        for i, t_i in enumerate(times):
            trajectory[i,:] = r.integrate(r.t+dt)  # integration
        trajectories.append(trajectory)

        # ### extract ### #
        # x1 = trajectory[:, 3]
        # y1 = trajectory[:, 4]
        # x2 = np.zeros_like(trajectory[:, 5])
        # y2 = trajectory[:, 5]
        # L = np.sqrt(x1**2 + y1**2)  # rope part connecting m1 and pivot
        # Ltot = -y2 + L              # total rope length

    ################################################################################
    ### Visualize trajectory ###
    ################################################################################
    import numpy as np
    from matplotlib import pyplot as plt
    from matplotlib.animation import FuncAnimation
    plt.style.use('seaborn-pastel')

    n = 3
    m = 3
    axes = []
    m1_ropes = []
    m2_ropes = []
    m1_markers = []
    m2_markers = []
    fig = plt.figure(figsize=(10,10))
    for sp, m2_ in enumerate(m2_list):
        ax = fig.add_subplot(n, m, sp+1, xlim=(-0.75, 0.75), ylim=(-1, 0.5), xticks=[], yticks=[])
        m1_rope, = ax.plot([], [], lw=1, color='k')
        m2_rope, = ax.plot([], [], lw=1, color='k')
        m1_marker, = ax.plot([], [], marker='o', markersize=10, color='r', label=r'$m_1 = {}$'.format(m1))
        m2_marker, = ax.plot([], [], marker='o', markersize=10, color='b', label=r'$m_2 = {}$'.format(m2_))
        axes.append(ax)
        m1_ropes.append(m1_rope)
        m2_ropes.append(m2_rope)
        m1_markers.append(m1_marker)
        m2_markers.append(m2_marker)
        ax.set_aspect('equal', adjustable='box')
        ax.legend(loc='upper left', fontsize=12, ncol=2, handlelength=1, bbox_to_anchor=(0.1, 1.06))
    plt.tight_layout()

    def init():
        for m1_rope, m2_rope, m1_marker, m2_marker in zip(m1_ropes, m2_ropes, m1_markers, m2_markers):
            m1_rope.set_data([], [])
            m2_rope.set_data([], [])
            m1_marker.set_data([], [])
            m2_marker.set_data([], [])
        return (*m1_ropes, *m2_ropes, *m1_markers, *m2_markers)

    def animate(i):
        for sp, (m1_rope, m2_rope, m1_marker, m2_marker) in enumerate(zip(m1_ropes, m2_ropes, m1_markers, m2_markers)):
            x1 = trajectories[sp][:, 3]
            y1 = trajectories[sp][:, 4]
            x2 = np.zeros_like(trajectories[sp][:, 5])
            y2 = trajectories[sp][:, 5]
            m1_rope.set_data([x1[i], 0], [y1[i], 0])
            m2_rope.set_data([x2[i], 0], [y2[i], 0])
            m1_marker.set_data(x1[i], y1[i])
            m2_marker.set_data(x2[i], y2[i])
        return (*m1_ropes, *m2_ropes, *m1_markers, *m2_markers)

    anim = FuncAnimation(fig, animate, init_func=init, frames=len(trajectories[0][:, 0]), interval=500/ntimes, blit=True)
    anim.save('PendulumAnim.gif', writer='imagemagick', dpi=50)
    plt.show()

Main argument

Winding angle behavior in the no-friction, thin-pivot case

My answer is based on a simple model for the system (no friction, infinitely thin pivot, ideal rope; see also the detailed description below), from which one can actually get some very nice insight into why the region around 14 is special. As a quantity of interest, we define a winding angle as a function of time, $\theta(t)$: the total angle the small mass has travelled around the finger. $\theta(t)=2\pi$ corresponds to one full revolution, $\theta(t)=4\pi$ corresponds to two revolutions, and so on. One can then plot the winding angle as a function of time and mass ratio for the simple model: The color axis shows the winding angle. We can clearly see that for mass ratios between roughly 12 and 14, the winding angle goes up continuously in time and reaches a high maximum. The first maximum in time for each mass ratio is indicated by the magenta crosses.
Also note that the weird discontinuities are places where the swinging mass goes through zero/hits the finger, where the winding angle is not well defined. To see the behaviour in a bit more detail, let us look at some slices of the 2D plot ($2\pi$ steps/full revolutions marked as horizontal lines): We see that mass ratios 12, 13, and 14 behave very similarly. 16 has a turning point after 4 revolutions, but I would expect this to still work in practice, since when the rope is wrapped 4 times around the finger, there should be enough friction to clip it. For mass ratio 5, on the other hand, we do not even get 2 revolutions and the rope would probably slip. If you want to reproduce these plots, here is my code. Feel free to make adaptations and post them as an answer. It would be interesting, for example, whether one can include friction in a simple way to quantify the clipping effect at the end. I imagine this will be hard, though, and one would need at least one extra parameter.

    import matplotlib.pyplot as plt
    import matplotlib as mpl
    from matplotlib import cm
    import numpy as np
    from scipy.signal import argrelextrema
    # integrator for ordinary differential equations
    from scipy.integrate import ode

    def eoms_pendulum(t, y, params):
        """
        Equations of motion for the simple model.
        I was too dumb to do the geometry elegantly, so there are case distinctions...
        """
        # unpack #
        v1_x, v1_y, v2, x1, y1, y2 = y
        m1, m2, g, truncate_at_inversion = params
        if x1 <= 0 and y1 <= 0:
            # calc helpers #
            F1_g = m1*g
            F2_g = m2*g                       # _g for "gravity"
            L_swing = np.sqrt(x1**2 + y1**2)  # distance of mass 1 to the pendulum pivot
            Theta = np.arctan(y1/x1)          # angle
            dt_Theta = (v1_y/x1 - v1_x*y1/x1**2)/(1. + y1**2/x1**2)  # derivative of arctan
            help_term = -F2_g/m2 - F1_g/m1 * np.sin(Theta) - v1_x*np.sin(Theta)*dt_Theta + v1_y*np.cos(Theta)*dt_Theta
            F_r = help_term / (-1./m1 - 1./m2)  # _r for "rope", this formula comes from requiring a constant length rope
            # calc derivatives
            dt_v1_x = (F_r*np.cos(Theta)) / m1
            dt_v1_y = (-F1_g + F_r*np.sin(Theta)) / m1
            dt_v2 = (F_r - F2_g) / m2
            dt_x1 = v1_x
            dt_y1 = v1_y
            dt_y2 = v2
        elif x1 >= 0 and y1 <= 0:
            # calc helpers #
            F1_g = m1*g
            F2_g = m2*g
            L_swing = np.sqrt(x1**2 + y1**2)
            Theta = np.arctan(-x1/y1)
            dt_Theta = -(v1_x/y1 - v1_y*x1/y1**2)/(1. + x1**2/y1**2)
            help_term = -F2_g/m2 - F1_g/m1 * np.cos(Theta) - v1_x*np.cos(Theta)*dt_Theta - v1_y*np.sin(Theta)*dt_Theta
            F_r = help_term / (-1./m1 - 1./m2)
            # calc derivatives
            dt_v1_x = (-F_r*np.sin(Theta)) / m1
            dt_v1_y = (-F1_g + F_r*np.cos(Theta)) / m1
            dt_v2 = (F_r - F2_g) / m2
            dt_x1 = v1_x
            dt_y1 = v1_y
            dt_y2 = v2
        elif x1 >= 0 and y1 >= 0:
            # calc helpers #
            F1_g = m1*g
            F2_g = m2*g
            L_swing = np.sqrt(x1**2 + y1**2)
            Theta = np.arctan(y1/x1)
            dt_Theta = (v1_y/x1 - v1_x*y1/x1**2)/(1. + y1**2/x1**2)
            help_term = -F2_g/m2 + F1_g/m1 * np.sin(Theta) + v1_x*np.sin(Theta)*dt_Theta - v1_y*np.cos(Theta)*dt_Theta
            F_r = help_term / (-1./m1 - 1./m2)
            # calc derivatives
            dt_v1_x = (-F_r*np.cos(Theta)) / m1
            dt_v1_y = (-F1_g - F_r*np.sin(Theta)) / m1
            dt_v2 = (F_r - F2_g) / m2
            dt_x1 = v1_x
            dt_y1 = v1_y
            dt_y2 = v2
        elif x1 <= 0 and y1 >= 0:
            # calc helpers #
            F1_g = m1*g
            F2_g = m2*g
            L_swing = np.sqrt(x1**2 + y1**2)
            Theta = np.arctan(-y1/x1)
            dt_Theta = -(v1_y/x1 - v1_x*y1/x1**2)/(1. + y1**2/x1**2)
            help_term = -F2_g/m2 + F1_g/m1 * np.sin(Theta) - v1_x*np.sin(Theta)*dt_Theta - v1_y*np.cos(Theta)*dt_Theta
            F_r = help_term / (-1./m1 - 1./m2)
            # calc derivatives
            dt_v1_x = (F_r*np.cos(Theta)) / m1
            dt_v1_y = (-F1_g - F_r*np.sin(Theta)) / m1
            dt_v2 = (F_r - F2_g) / m2
            dt_x1 = v1_x
            dt_y1 = v1_y
            dt_y2 = v2
        if truncate_at_inversion:
            if dt_y2 > 0.:
                return np.zeros_like(y)
        return [dt_v1_x, dt_v1_y, dt_v2, dt_x1, dt_y1, dt_y2]

    def total_winding_angle(times, trajectory):
        """
        Calculates the total winding angle for a given trajectory
        """
        dt = times[1] - times[0]
        v1_x, v1_y, v2, x1, y1, y2 = [trajectory[:, i] for i in range(6)]
        dt_theta = (x1*v1_y - y1*v1_x) / (x1**2 + y1**2)  # from cross-product
        theta_tot = np.cumsum(dt_theta) * dt
        return theta_tot

    def find_nearest_idx(array, value):
        """
        Find the closest element in an array and return the corresponding index.
        """
        array = np.asarray(array)
        idx = (np.abs(array-value)).argmin()
        return idx

    ################################################################################
    ### setup ###
    ################################################################################
    theta_tot_traj_list = []
    # scan mass ratio
    m2_list = np.linspace(5, 17, 200)
    for m2_ in m2_list:
        # params #
        params = [
            1,     # m1
            m2_,   # m2
            9.81,  # g
            False  # If true, truncates the motion when m2 moves back upwards
        ]
        # initial conditions #
        Lrope = 1.0  # Length of the rope, initially positioned such that m1 is L from the pivot
        init_cond = [
            0.0,       # v1_x
            0.,        # v1_y
            0.,        # v2
            -Lrope/2,  # x1
            0.0,       # y1
            -Lrope/2,  # y2
        ]
        # integration time range #
        times = np.linspace(0, 2.2, 400)
        # trajectory array to store result #
        trajectory = np.empty((len(times), len(init_cond)), dtype=np.float64)
        # helper #
        show_prog = True
        # check eoms at starting position #
        #print(eoms_pendulum(0, init_cond, params))

        ############################################################################
        ### numerical integration ###
        ############################################################################
        r = ode(eoms_pendulum).set_integrator('zvode', method='adams', with_jacobian=False)  # integrator and eoms
        r.set_initial_value(init_cond, times[0]).set_f_params(params)  # setup
        dt = times[1] - times[0]  # time step
        # integration (loop time step)
        for i, t_i in enumerate(times):
            trajectory[i,:] = r.integrate(r.t+dt)  # integration

        ### extract ###
        x1 = trajectory[:, 3]
        y1 = trajectory[:, 4]
        x2 = np.zeros_like(trajectory[:, 5])
        y2 = trajectory[:, 5]
        L = np.sqrt(x1**2 + y1**2)  # rope part connecting m1 and pivot
        Ltot = -y2 + L              # total rope length
        theta_tot_traj = total_winding_angle(times, trajectory)
        theta_tot_traj_list.append(theta_tot_traj)

    theta_tot_traj_list = np.asarray(theta_tot_traj_list)
    #maxima_idxs = np.argmax(theta_tot_traj_list, axis=-1)
    maxima_idxs = []
    for i, m_ in enumerate(m2_list):
        maxima_idx = argrelextrema(theta_tot_traj_list[i,:], np.greater)[0]
        if maxima_idx.size == 0:
            maxima_idxs.append(-1)
        else:
            maxima_idxs.append(maxima_idx[0])
    maxima_idxs = np.asarray(maxima_idxs)

    ### 2D plot ###
    fig = plt.figure()
    plt.axhline(14, color='r', linewidth=2, dashes=[1,1])
    plt.imshow(theta_tot_traj_list, aspect='auto', origin='lower', extent=[times[0], times[-1], m2_list[0], m2_list[-1]])
    plt.plot(times[maxima_idxs], m2_list, 'mx')
    plt.xlabel("Time")
    plt.ylabel("Mass ratio")
    plt.title("Winding angle")
    plt.colorbar()
    fig.savefig('winding_angle.png')
    plt.show()

    fig = plt.figure()
    slice_list = [5, 12, 13, 14, 16]
    for x_ in [0., 1., 2., 3., 4., 5.]:
        plt.axhline(x_*2.*np.pi, color='k', linewidth=1)
    for i, slice_val in enumerate(slice_list):
        slice_idx = find_nearest_idx(m2_list, slice_val)
        plt.plot(times, theta_tot_traj_list[slice_idx, :], label='Mass ratio: {}'.format(slice_val))
    plt.xlabel('Time')
    plt.ylabel('Winding angle')
    plt.legend()
    fig.savefig('winding_angle2.png')
    plt.show()

Details

The simple model

The simple model used above, and (probably) the simplest way to model the system, assumes: an ideal, infinitely thin rope; an infinitely thin pivot that the rope wraps around (the finger in the video); and no friction. The no-friction assumption in particular is clearly flawed, because the effect of stopping completely relies on friction. But as we saw above, one can still get some insight into the initial dynamics anyway and then think about what friction will do to change this. If someone feels motivated, I challenge you to include friction in the model and change my code! Under these assumptions, one can set up a system of coupled differential equations using Newton's laws, which can easily be solved numerically. I won't go into detail on the geometry and derivation; I'll just give some code below for people to check and play with. Disclaimer: I am not sure my equations of motion are completely right. I did some checks and it looks reasonable, but feel free to fill in your own version and post an answer.

Geometry

The geometry assumed is like this: From the picture, we can get the equations of motion as follows: $$ m_1 \dot{v}_{x,1} = F_\mathrm{rope} \cos(\theta) \,, \\ m_1 \dot{v}_{y,1} = -F_{g,1} + F_\mathrm{rope} \sin(\theta) \,, \\ m_2 \dot{v}_{y,2} = -F_{g,2} + F_\mathrm{rope} \,, \\ \dot{x}_1 = v_{x,1} \,, \\ \dot{y}_1 = v_{y,1} \,, \\ \dot{y}_2 = v_{y,2} \,. $$ This is just Newton's laws for the given geometry, written as a set of first-order coupled differential equations, which can easily be solved in scipy (see code). The hard bit is to find the rope force $F_\textrm{rope}$. It is constrained by the ideal-rope condition that the total rope length does not change in time. Following this through, I got $$ F_\textrm{rope} = \frac{\frac{F_{g,2}}{m_2} + \frac{F_{g,1}}{m_1}\sin(\theta) + v_{x,1}\sin(\theta)\dot{\theta} - v_{y,1}\cos(\theta)\dot{\theta}}{\frac{1}{m_1} + \frac{1}{m_2}} \,. $$ Note that my way of writing the solution is not particularly elegant, and as a result some of these formulas only apply in the lower-left quadrant ($x_1<0$, $y_1<0$).
The other quadrants are implemented in the code too. As the initial position, we will consider $x_1 = -L/2$, $y_2 = -L/2$, similarly to the video. $y_2$ does not matter too much; it simply causes an overall displacement of mass 2. We set $L=1$ and $g=9.81$. Someone else can work out the units ;-)

Let's do it in python

I already gave some code snippets above. You need numpy and matplotlib to run it. Maybe python3 would be good. If you want to plot static trajectories you can use:

    ################################################################################
    ### setup ###
    ################################################################################
    # params #
    params = [
        1,     # m1
        14.,   # m2
        9.81,  # g
        False  # If true, truncates the motion when m2 moves back upwards
    ]
    # initial conditions #
    Lrope = 1.0  # Length of the rope, initially positioned such that m1 is L from the pivot
    init_cond = [
        0.0,       # v1_x
        0.,        # v1_y
        0.,        # v2
        -Lrope/2,  # x1
        0.0,       # y1
        -Lrope/2,  # y2
    ]
    # integration time range #
    times = np.linspace(0, 1.0, 400)
    # trajectory array to store result #
    trajectory = np.empty((len(times), len(init_cond)), dtype=np.float64)
    # helper #
    show_prog = True
    # check eoms at starting position #
    print(eoms_pendulum(0, init_cond, params))

    ################################################################################
    ### numerical integration ###
    ################################################################################
    r = ode(eoms_pendulum).set_integrator('zvode', method='adams', with_jacobian=False)  # integrator and eoms
    r.set_initial_value(init_cond, times[0]).set_f_params(params)  # setup
    dt = times[1] - times[0]  # time step
    # integration (loop time step)
    for i, t_i in enumerate(times):
        trajectory[i,:] = r.integrate(r.t+dt)  # integration

    ### extract ###
    x1 = trajectory[:, 3]
    y1 = trajectory[:, 4]
    x2 = np.zeros_like(trajectory[:, 5])
    y2 = trajectory[:, 5]
    L = np.sqrt(x1**2 + y1**2)  # rope part connecting m1 and pivot
    Ltot = -y2 + L              # total rope length

    ################################################################################
    ### Visualize trajectory ###
    ################################################################################
    fig = plt.figure(figsize=(15,7))
    plt.subplot(121)
    titleStr = "m1: {}, m2: {}, g: {}, L: {}".format(params[0], params[1], params[2], Lrope)
    fs = 8
    plt.axvline(0, color='k', linewidth=1, dashes=[1,1])
    plt.axhline(0, color='k', linewidth=1, dashes=[1,1])
    plt.scatter(x1, y1, c=times, label="Mass 1")
    plt.scatter(x2, y2, marker='x', c=times, label='Mass 2')
    #plt.xlim(-1.5, 1.5)
    #plt.ylim(-2, 1.)
    plt.xlabel('x position', fontsize=fs)
    plt.ylabel('y position', fontsize=fs)
    plt.gca().set_aspect('equal', adjustable='box')
    cbar = plt.colorbar()
    cbar.ax.set_ylabel('Time', rotation=270, fontsize=fs)
    plt.title(titleStr, fontsize=fs)
    plt.legend()
    plt.subplot(122)
    plt.axhline(0., color='k', dashes=[1,1])
    plt.plot(times, x1, '-', label="Mass 1, x pos")
    plt.plot(times, y1, '-', label="Mass 1, y pos")
    plt.plot(times, y2, '--', label="Mass 2, y pos")
    plt.xlabel('Time')
    plt.legend()
    plt.tight_layout()
    fig.savefig('{}-{}.pdf'.format(int(params[1]), int(params[0])))
    plt.close()

    # check that total length of the rope is constant #
    plt.figure()
    plt.axhline(0, color='k', linewidth=1, dashes=[1,1])
    plt.axvline(0.4, color='k', linewidth=1, dashes=[1,1])
    plt.plot(times, Ltot, label='total rope length')
    plt.plot(times, L, label='rope from mass 1 to pivot')
    plt.legend()
    plt.tight_layout()
    plt.close()

The dynamics for the 1/14 mass ratio

Here is what the dynamics of the pendulum look like for a mass ratio of 14 ($m_1 = 1$, $m_2 = 14$): The left panel shows the trajectories of the two masses in the x-y plane, with time indicated by the color axis. This is supposed to be a front view of the video performance. We see that mass 1 wraps around the pivot (at x=0, y=0) multiple times (see the winding angle picture above). After a few revolutions, the model is probably not representative anymore. Instead, friction would start kicking in and clip the rope. In our simplified picture, the particle keeps going. What is interesting is that even without friction, the lower particle stops at some point and even comes back up, causing stable oscillation!

What changes with the mass ratio?

We already saw what changes with the mass ratio in the winding angle picture. Just for visual intuition, here is the corresponding picture for a 1/5 mass ratio: For a higher mass ratio (1/20):
{ "domain": "physics.stackexchange", "id": 65865, "tags": "newtonian-mechanics, angular-momentum, string, home-experiment" }
Momentum transferred to the wall of a container by a molecule
Question: I am now studying a thermodynamics course, and I can't understand why, for an ideal gas in an equilibrium state, a molecule colliding with the wall of the container transfers to it a momentum equal to $2mv$, where $m$ is the mass of the molecule and $v$ is its speed. Why is it not just $mv$, but double that? I suspect the answer is connected with the conservation of momentum principle, but it is not obvious to me. Please help me understand. Answer: If the initial momentum is $mv$, then when the particle bounces off the wall, it is going in the opposite direction, so its new momentum is $-mv$. The difference is $2mv$ because $mv - (-mv) = 2mv$. By the way, this is not really how it works; molecules will speed up or slow down when they hit the wall. If the molecule is moving slowly, it's likely to speed up, and if it's going fast, it's likely to slow down. The calculation is just for what happens on average.
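The bookkeeping in the answer can be restated in a few lines (the molecule mass and speed below are illustrative values, not from the question):

```python
# Momentum transfer for an elastic bounce off a wall, in 1D.
m = 4.65e-26  # kg, roughly the mass of an N2 molecule (assumed value)
v = 500.0     # m/s, a typical thermal speed (assumed value)

p_before = m * v
p_after = -m * v                    # direction reversed after the bounce
delta_p_wall = p_before - p_after   # momentum handed to the wall

# The wall receives mv - (-mv) = 2mv, not mv
assert delta_p_wall == 2 * m * v
```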
{ "domain": "physics.stackexchange", "id": 22788, "tags": "thermodynamics, momentum" }
Tree list recursion from Stanford tutorial
Question: I found this apparently famous problem in the Stanford tutorial about trees and decided to solve it. My solution appears to work great:

    #include <iostream>
    using namespace std;

    struct num {
        int n;
        num* left;
        num* right;
        num* next;
        num* pre;
    };

    num* cr (int data) {
        num* node = new num;
        node->n = data;
        node->left = 0;
        node->right = 0;
        return node;
    }

    /* function for tree creation */
    void treecre (num* & node) {
        node = cr (10);
        node->left = cr (8);
        node->left->left = cr (6);
        node->left->left->left = cr (4);
        node->right = cr (12);
        node->right->right = cr (14);
        node->right->right->right = cr (16);
        node->right->left = cr (11);
        node->left->right = cr (9);
    }

    // function for transforming the tree into a linked list with next and previous
    // links, but without linking the first and last element yet
    num* btc (num* node, num*& list) {
        num* temp1 = 0;
        num* max = node;
        if (node == 0)
            return 0;
        else {
            btc (node->right, list);
            num* temp = new num;
            temp->n = node->n;
            temp->next = list;
            if (list != 0)
                list->pre = temp;
            list = temp;
            btc (node->left, list);
        }
        return list;
    }

    void printtree (num* node) {
        if (node != 0) {
            printtree (node->left);
            cout << node->n;
            printtree (node->right);
        }
    }

    // printing the linked list backwards
    void printback (num*& list, int& k) {
        if (k < 15) {
            cout << list->n;
            list = list->pre;
            printback (list, ++k);
        }
        else {
            cout << list->n;
        }
    }

    // and in the forward direction
    void printlistto (num*& list, int& k) {
        if (k < 12) {
            cout << list->n;
            list = list->next;
            printlistto (list, ++k);
        }
        else {
            cout << list->n;
        }
    }

    int main () {
        num* node = 0;
        num* list = 0;
        int k = 0;
        treecre (node);
        printtree (node);
        cout << endl;
        list = btc (node, list);
        // next lines link the first and last elements
        num* temp1 = list;
        if (temp1->next != 0) {
            while (temp1->next != 0) {
                temp1 = temp1->next;
            }
        }
        temp1->next = list;
        list->pre = temp1;
        printback (list, k);
        cout << endl;
        cin.get ();
        cin.ignore ();
    }

But then I took a look at this Stanford solution, which is way longer and uses other tactics. Could you please tell me if I overlooked something? Does my version have some significant drawbacks, or is it fine? Answer: Your solution may work great, but I see some room for improvement, especially in terms of readability, maintainability, and using OO. I quickly went over your recursive function btc; it seems to do the trick, but the code is a bit messy. The Stanford solution tries to embrace the typical list methods: appending elements, joining lists, etc. In fact, that's the goal: porting a binary sorted tree to a list, so one should also provide a good list interface. They also do this in a good object-oriented/modular and readable way. A review of your coding style. To start: you are missing some indentation (in your if/else clauses); adding it alone can make the code more readable! Perhaps also document some steps in the recursive method, since recursion always leads to confusion. Methods: your method names are not that understandable; use camelCase and appropriate, complete names that indicate the purpose of the method. For example, treecre() should be treeCreate(). It would be even better to use a Tree class that has a create() method which builds your test tree. Classes: you also used a struct for your node; may I suggest using classes, since they offer a wide set of features that can serve future purposes of your code: inheritance, polymorphism, etc. This is important, since one might want to extend this implementation to handle nodes that contain ASCII characters instead of integers, or refactor the class to use C++ templates to support generic types. In addition, I would use the standard list/binary-tree names, like root, node, ..., which is easier for communication purposes. I hope this is a bit helpful!
{ "domain": "codereview.stackexchange", "id": 12632, "tags": "c++, tree, linked-list" }
Python Automate the Boring Stuff Collatz exercise
Question: I'm looking for some feedback on my code for the collatz function exercise in the Python book Automate the Boring Stuff. It works, but doesn't feel very elegant to me. Is there a better way to approach it? I'm trying to avoid just looking at someone else's solution so general advice would be preferred. def collatz(number): if (number %2 ==0 ) :#even newNumber=number//2 print(newNumber); if newNumber!=1: collatz(newNumber); else: return (newNumber) elif (number % 2 == 1) : #odd newNumber=3*number+1 print (newNumber) collatz(newNumber) return newNumber; #Input validation loop while True: try: number = int(input("Enter number:")) except ValueError: print("Enter a valid integer") continue else: if number < 1: print("enter a valid positive integer") continue break collatz(number) Answer: Cleaning the user input validation Remove continue because in both cases the loop will continue anyway, only break will stop it, so do not worry. Do not make the user think: number = int(input("Enter number:")): which number to enter? Sure you know, but messages are intended to communicate with the user, so do not make him guess what you want, but be clear and precise with him. Cleaning collatz() Please apply the naming conventions and indentation rules. Leave space between operators and their operands (example: newNumber=3*number+1 should be written newNumber = 3 * number + 1). Do not borrow the ; syntax from other languages into Python. You should remove it. Remove unnecessary comments: when you code (number %2 ==0 ) :#even, I think everybody is smart enough to guess you are processing the case where number is even. The same thing goes for #odd and #Input validation loop. Remove the trailing return newNumber; it does not make sense because return (newNumber) is enough to exit the recursion when newNumber == 1.
In light of what is mentioned above, let us clean collatz():

def collatz(number):
    if number % 2 == 0:
        new_number = number // 2
        print(new_number)
        if new_number != 1:
            collatz(new_number)
        else:
            return new_number
    elif number % 2 == 1:
        new_number = 3 * number + 1
        print(new_number)
        collatz(new_number)

Improving the input validation
What you have done so far is good. I suggest you handle the case where the user presses Ctrl + C to exit your program, because when I tried to do so, I got an ugly KeyboardInterrupt exception. Think of code reuse: the input validation could be restructured as a function, so you can reuse it easily elsewhere for similar purposes. Given these last 2 elements, let us rewrite the input validation:

def validate_user_input():
    while True:
        try:
            number = int(input("Enter a positive integer number ( > 1): "))
        except KeyboardInterrupt:
            print('\nSee you next time!')
            break
        except ValueError:
            print("That was not even a valid integer!")
        else:
            if number < 0:
                print("Positive integer, please!")
            elif number < 2:
                print("Integer should be at least equal to 2 !")
            else:
                return number

Improving collatz()
You said you do not want to see other solutions and only want to see if you can improve your own. So let us see if we can do so. From your solution we can see: You always call collatz() whether new_number is odd or even. You always print new_number whether it is odd or even. The only useful return statement is the one which corresponds to new_number = 1.
Let us translate the 3 phrases above into code:

collatz(new_number)
print(new_number)
return new_number

We should not forget the shared fact: new_number can be either odd or even (so we do not care about that):

new_number = n // 2 if n % 2 == 0 else n * 3 + 1

Now we are ready to gather the 4 instructions listed above into one function, following their coherent flow:

def collatz(n):
    if n == 1:
        return
    else:
        new_number = n // 2 if n % 2 == 0 else n * 3 + 1
        print(new_number)
        collatz(new_number)

Putting it all together
Let us gather the code written so far to create an MCVE:

def validate_user_input():
    while True:
        try:
            number = int(input("Enter a positive integer (minimum 2): "))
        except KeyboardInterrupt:
            print('\nSee you next time!')
            break
        except ValueError:
            print("That was not even a valid integer!")
        else:
            if number < 0:
                print("Positive integer, please!")
            elif number < 2:
                print("Integer should be at least equal to 2 !")
            else:
                return number

def collatz(n):
    if n == 1:
        return
    else:
        new_number = n // 2 if n % 2 == 0 else n * 3 + 1
        print(new_number)
        collatz(new_number)

if __name__ == '__main__':
    print('Collatz Conjecture')
    number = validate_user_input()
    if number is not None:  # None when the user exits with Ctrl + C
        collatz(number)

P.S. Just in case: you can read about if __name__ == "__main__":
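One caveat with the recursive approach (both the OP's version and the cleaned one): CPython limits recursion depth to roughly 1000 frames, and some Collatz trajectories are long. An iterative variant, sketched here as an alternative rather than as part of the review above, avoids that limit and is also easier to test because it returns the sequence instead of printing it:

```python
def collatz_sequence(n):
    """Return the whole Collatz trajectory from n down to 1, iteratively."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq
```

Swapping `return seq` for a `print` loop recovers the original behaviour if needed.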
{ "domain": "codereview.stackexchange", "id": 30705, "tags": "python, python-3.x, collatz-sequence" }
jQuery duplicated selector
Question: $("#colors_added").html(parseFloat($("#colors_added").html()) + 1); This JS line makes my IDE go Duplicated jQuery selector with the following explanation: Checks that jQuery selectors are used in an efficient way. It suggests to split descendant selectors which are prefaced with ID selector and warns about duplicated selectors which could be cached. I've made a screenshot of it: Is my IDE on to something here? Can I write this in a better way? Answer: You should cache your selector in a variable:

var $addedColors = $("#colors_added");
$addedColors.html(parseFloat($addedColors.html()) + 1);

Reason: placing selectors in variables is recommended, since overusing selectors can result in poor performance. Every time you call a function on $("#colors_added"), the browser has to search the whole DOM again for that element. This is not the case when you store the result in a variable. Also, it's a general principle to not repeat yourself in code, also called the DRY principle (as mentioned by Peter Rader in the comments).
{ "domain": "codereview.stackexchange", "id": 10701, "tags": "javascript, jquery" }
Python 3.4 password generator
Question: I'm pretty new to Python and I made a password generator. I would like you to check it out and give me tips on how I could do it better. import random def small(): # Prints a generated string of 6 chars list = ("A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "L", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z") generatepassword = random.choice(list) + random.choice(list) + random.choice(list) + random.choice(list) +\ random.choice(list) + random.choice(list) print(generatepassword) print("The passwords consists of: " + str(len(generatepassword))+" Characters") print("\n") def med(): # Prints a generated string of 10 chars list2 = ("A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "L", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z") generatepassword = random.choice(list2) + random.choice(list2) + random.choice(list2) + random.choice(list2) +\ random.choice(list2) + random.choice(list2) + random.choice(list2) + random.choice(list2) +\ random.choice(list2) + random.choice(list2) print(generatepassword) print("The passwords consists of: " + str(len(generatepassword))+" Characters") print("\n") def big(): # Prints a generated string of 32 chars list3 = ("A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "L", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z") generatepassword = random.choice(list3) + random.choice(list3) + random.choice(list3) + 
random.choice(list3) +\ random.choice(list3) + random.choice(list3) + random.choice(list3) + random.choice(list3) +\ random.choice(list3) + random.choice(list3) + random.choice(list3) + random.choice(list3) +\ random.choice(list3) + random.choice(list3) + random.choice(list3) + random.choice(list3) +\ random.choice(list3) + random.choice(list3) + random.choice(list3) + random.choice(list3) +\ random.choice(list3) + random.choice(list3) + random.choice(list3) + random.choice(list3) +\ random.choice(list3) + random.choice(list3) + random.choice(list3) + random.choice(list3) +\ random.choice(list3) + random.choice(list3) + random.choice(list3) + random.choice(list3) +\ random.choice(list3) + random.choice(list3) + random.choice(list3) + random.choice(list3) print(generatepassword) print("The passwords consists of: " + str(len(generatepassword))+" Characters") print("\n") def generate(): # This askes how long the password should be print("How big do you want your password? choices >> [6], [10], [32]") choice = input("please input the lenght >> ") while choice != '6' and choice != '10' and choice != '32': choice = input("please choose: [6], [10] or [32] >> ") if choice == '6': break elif choice == '10': break elif choice == '32': break if choice == '6': small() if choice == '10': med() if choice == '32': big() again = 'yes' while again == 'yes' or again == 'y': # From here on the user can choose to generate another password # If the person typed yes or y then it will run the def function generate # And it will restart # If the user types anythin else then yes or y then the program quits generate() print("\n") print("Do you want to generate another password? [yes] or [no] >> ") again = input() Answer: Summary: If you have multiple functions that do similar tasks, make them a single function that takes an argument. If you have to repetitively type the same thing over and over again, there's probably a better way to do it. Include some error checking in spots such as input(). Take advantage of the string module. For example: string.ascii_letters is the same as 'abcdef...ABCDEF...'
Note to the OP: Here's a more concise version of your code. I also made it able to take any length. Read the comments because they explain every change I made:

import random
import string

def gen_pass(length):
    # Prints a generated string of any length of chars
    # don't name anything list or another type name; use a name like word_list instead
    word_list = list(string.ascii_letters + string.digits)
    # use the string module to make it shorter, so you don't need to type everything
    generatepassword = "".join([random.choice(word_list) for i in range(length)])
    # typing the same thing over and over again is redundant
    # use a variable for the length instead of making multiple functions
    print(generatepassword)
    # use str.format instead of + to make it look neater
    # isn't length the same as len(generatepassword)?
    # add the \n on to the previous print statement
    print("The passwords consists of: {} Characters\n".format(length))

# don't need the other functions now

def generate():
    # This asks how long the password should be
    print("How long do you want your password? ")  # now it supports any length of password
    while 1:  # do some checking; while 1 is an infinite loop until it is broken out of
        try:  # do some error checking as well
            choice = int(input("please input the length >> "))  # make it an integer
            if choice < 1:  # should not be less than 1
                raise TypeError()
        except TypeError:  # if choice < 1:
            print("Length should not be less than 1")  # show an error message
        except ValueError:  # if choice is not a number:
            print("Please input a valid integer")  # show an error message
        except:
            # show an error message if some other error occurred, though I don't think it's possible
            print("Some other error occurred... :(")
        else:
            break  # if no error occurred
    gen_pass(choice)  # generates a password with any length

# everything else isn't needed
again = 'yes'
while again.lower() in ['yes', 'y']:  # use lower() in case they capitalized it
    generate()
    # put the print statements inside the input()
    again = input("\nDo you want to generate another password? [yes] or [no] >> ")
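One further note, going beyond the review above: the `random` module is fine for an exercise, but it is not cryptographically secure. If the generated strings are ever used as real passwords, the standard library's `secrets` module is the appropriate source of randomness (it was added in Python 3.6, so it postdates the 3.4 in the title):

```python
import string
import secrets

def gen_pass(length):
    """Generate a password from letters and digits using a secure RNG."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Returning the string instead of printing it also makes the function reusable and testable.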
{ "domain": "codereview.stackexchange", "id": 24691, "tags": "python, beginner, strings, python-3.x, random" }
When a plane is accelerating upwards, why are both the upward acceleration and the gravitational acceleration positive?
Question: A plane of mass m is accelerating upwards at a rate a. We also know the value of g. From my understanding, we have two opposite forces that we care about: the force due to the gravitational acceleration, which points in the negative y-direction, and the force of the upwards acceleration, which points in the positive y-direction. The first force must be equal to m times g, while the second would be equal to m times a. However, that is not the case with the upwards acceleration. My physics book gives this reasoning for calculating the upwards force: I don't understand why we would add g, since it points in the opposite direction? Answer: To accelerate the mass $m$ at constant acceleration $a$, you need to apply a force ($F_{body}$) on the body upwards such that: $$F_{net} = ma$$ Now, drawing the forces at play, you get two forces: $F_g = -mg$ (downwards) and $F_{body}$ (upwards). They should sum up to $F_{net}$, therefore: $$F_{net} = F_g + F_{body}$$ $$ma = -mg + F_{body}$$ $$F_{body}=m(a+g) \text{ (upwards)}$$ This is the apparent weight you feel. To make it even more intuitive, consider the example when you step into an elevator and it "just" starts to go up. At that very moment, you suddenly feel that your weight has increased or that the ground is pushing up on you!
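A quick numeric check of the result $F_{body} = m(a+g)$, with assumed example values (m = 70 kg, a = 2 m/s², not taken from the question):

```python
m, a, g = 70.0, 2.0, 9.81       # assumed example values: 70 kg, 2 m/s^2

F_net = m * a                   # Newton's second law fixes only the *net* force
F_gravity = -m * g              # gravity, pointing in the negative y-direction
F_applied = F_net - F_gravity   # rearranged from F_net = F_gravity + F_applied

# the upward force is m*(a + g), about 826.7 N here, larger than the
# weight m*g = 686.7 N: g appears because gravity must be cancelled first
```

The signs do the work: g enters with a plus only after moving the negative gravity term to the other side of the equation.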
{ "domain": "physics.stackexchange", "id": 95223, "tags": "forces, newtonian-gravity, acceleration" }
Simple kinematics exercise, throwing something upwards
Question: I am trying to solve this simple exercise: Question You throw a small coin upwards with $4 \frac{m}{s}$. How much time does it need to reach the height of $0.5 m$? Why do we get two results? Answer (We get two results for time, because the coin passes the 0.5m mark downwards too.) The equation I used (for constant acceleration): $x = x_0 + v_0 t + \frac{1}{2} a t^2$ The values I know: $x = 0.5m$, $x_0 = 0$, $v_0 = 4 \frac{m}{s}$, $a = -9.81 \frac{m}{s^2}$ So, I get a quadratic equation which I can solve for $t$. My two results are: $t_1 = 0.15s$ and $t_2 = 0.66s$; however, the book has the results 0.804s and 0.013s. What am I doing wrong? Thanks for your help. Answer: Nothing, your book did it wrong.
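The numbers can be checked directly by solving the quadratic $\frac{g}{2}t^2 - v_0 t + x = 0$ with the values from the question (a Python sketch, taking g = 9.81 m/s²):

```python
import math

v0, x, g = 4.0, 0.5, 9.81            # initial speed, target height, gravity

# 0.5 = 4 t - (9.81/2) t^2  rearranged to  (g/2) t^2 - v0 t + x = 0
disc = v0**2 - 2 * g * x             # discriminant of the quadratic in t
t_up = (v0 - math.sqrt(disc)) / g    # passing 0.5 m on the way up
t_down = (v0 + math.sqrt(disc)) / g  # passing 0.5 m again on the way down

print(round(t_up, 2), round(t_down, 2))   # 0.15 0.66 -- the asker is right
```

Both roots are positive, which is why the coin crosses the 0.5 m mark twice.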
{ "domain": "physics.stackexchange", "id": 587, "tags": "homework-and-exercises, kinematics" }
Example of Quantum Error Correction
Question: Shor's 9 Qubit code. Imagine that we encode the state $| \psi \rangle = \alpha | 0 \rangle + \beta | 1 \rangle$ using Shor's 9 qubit code, then an X error occurs on the 8th qubit of the encoded state $| E ( \psi ) \rangle$. a) Write down the state following the error. Apparently the answer is $$\frac{1}{2 \sqrt2}( \alpha (| 000 \rangle + | 111 \rangle) ( | 000 \rangle + | 111 \rangle) ( | 010 \rangle + | 101 \rangle) \\ + \beta ( | 000 \rangle - | 111 \rangle)( | 000 \rangle - | 111 \rangle)( | 010 \rangle - | 101 \rangle))$$ How has this been derived? I can't see how you do this with an error. b) We now decode the encoded state, starting by applying the bit flip code decoding algorithm. What are the syndromes returned by the measurements in the algorithm? Apparently the syndromes are $00, 00, 10$. How do I know what measurements to do? c) Now imagine that $| E( \psi ) \rangle$ is affected by two $X$ errors, on the 7th and 8th qubits. What are the syndromes returned this time? What state does the decoding algorithm output? Now the syndromes are $00, 00, 01$. The decoding algorithm thus thinks there has been an X error on the 9th qubit. So it "corrects" this by applying an X operation on this qubit, to give the state $$\frac{1}{2 \sqrt2}( \alpha (| 000 \rangle + | 111 \rangle)( | 000 \rangle + | 111 \rangle)( | 000 \rangle + | 111 \rangle)\\ - \beta ( | 000 \rangle - | 111 \rangle)( | 000 \rangle - | 111 \rangle)( | 000 \rangle - | 111 \rangle))$$ Note that $\beta$ now has a minus sign in front of it. After the bit decoding, we are left with $\alpha | {+++} \rangle - \beta | {---} \rangle$, which is then decoded to $\alpha | 0 \rangle - \beta | 1 \rangle$. Again, how would I know what measurements to take? Also, how could I know a priori that I have errors on the 7th and 8th qubits? Why do we apply an $X$ operation to the 9th qubit?
Answer: Answer to a) Initial encoded state with qubit indexes (I will omit $\frac{1}{2\sqrt{2}}$ for simplicity): \begin{align}|\psi\rangle = &\alpha (|0_1 0_2 0_3\rangle + |1_1 1_2 1_3\rangle)(|0_4 0_5 0_6\rangle + |1_4 1_5 1_6\rangle)(|0_7 0_8 0_9\rangle + |1_7 1_8 1_9\rangle) + \\ &\beta (|0_1 0_2 0_3\rangle - |1_1 1_2 1_3\rangle)(|0_4 0_5 0_6\rangle - |1_4 1_5 1_6\rangle)(|0_7 0_8 0_9\rangle - |1_7 1_8 1_9\rangle) \end{align} After applying the $X$ gate on the $8$th qubit (and after removing indexes): \begin{align}|\psi\rangle = &\alpha (|0 0 0\rangle + |1 1 1\rangle)(|0 0 0\rangle + |1 1 1\rangle)(|0 1 0\rangle + |1 0 1\rangle) + \\ &\beta (|0 0 0\rangle - |1 1 1\rangle)(|0 0 0\rangle - |1 1 1\rangle)(|0 1 0\rangle - |1 0 1\rangle) \end{align} Answer to b) One should always do the same operator measurements no matter what error has occurred. The operators for detecting an $X$ error are $Z_1 Z_2$, $Z_2 Z_3$, $Z_4 Z_5$, $Z_5 Z_6$, $Z_7 Z_8$, $Z_8 Z_9$. After measuring all these $6$ operators one obtains for each of them either $0$ or $1$. The $00,00,10$ syndrome measurement is wrong (I guess there is a typo in the exercise). The true syndrome is $00,00,11$, and that means only the $Z_7 Z_8$ and $Z_8 Z_9$ operator measurements yielded $1$, indicating that the $X$ error has occurred on the $8$th qubit. One can apply an $X$ gate to the same (errored) $8$th qubit in order to correct the error. Here is the circuit for all $6$ mentioned operator measurements (note that there are $6$ measurements). Answer to c) With this error-correcting code, we always assume that we have only a one-qubit error. If there are two qubit errors then this technique with its syndrome may indicate a correction that will not fix the error. In this example, $00, 00, 01$ indicates (wrongly, because our assumption of a one-qubit error is not true for this error example) that the $9$th qubit has an error. I think the main question here is how to do the operator measurement for the syndrome.
If I am correct, then I suggest asking a separate question focused on this matter (maybe with the title "How to do $ZZ$ operator measurement for Shor's 9 qubit code?").
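For pure $X$ errors, the six $Z_iZ_j$ measurements return classical parities, so the syndromes in the exercise can be reproduced with a few XORs on the error pattern. A small Python sketch (the function name and the bit-list representation are mine, for illustration only):

```python
def bitflip_syndromes(error_bits):
    """Z_i Z_j parity checks for Shor's code, given a 9-bit X-error pattern.

    error_bits[i] is 1 if an X error hit qubit i+1.  Returns the outcomes
    (Z1Z2, Z2Z3), (Z4Z5, Z5Z6), (Z7Z8, Z8Z9), each as 0 or 1.
    """
    blocks = (error_bits[0:3], error_bits[3:6], error_bits[6:9])
    return [(b[0] ^ b[1], b[1] ^ b[2]) for b in blocks]

# X on the 8th qubit: syndrome 00, 00, 11 (supporting the typo claim above)
print(bitflip_syndromes([0, 0, 0, 0, 0, 0, 0, 1, 0]))

# X on qubits 7 and 8: syndrome 00, 00, 01, misread as a single error on qubit 9
print(bitflip_syndromes([0, 0, 0, 0, 0, 0, 1, 1, 0]))
```

This classical shortcut works because the codewords of each block have even parity, so the measured parity equals the parity of the flipped bits; it says nothing about phase ($Z$) errors, which need the $X^{\otimes 6}$ checks.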
{ "domain": "quantumcomputing.stackexchange", "id": 1956, "tags": "error-correction, textbook-and-exercises" }
How can Newton's idea of absolute space be reconciled with Galilean relativity?
Question: I wasn't sure if this might be better suited to History of Science and Mathematics SE, but I suppose it is a bit more 'science-y' than historical. Apparently Newton believed in absolute space and absolute time, existing independently from anything else within the universe (these beliefs were supposedly laid down in the 'Scholium' of his Principia; see here). However I was under the impression that Galilean relativity, although supporting the idea of absolute time, did away with absolute space. And yet Newton believed in Galilean relativity, and his first law - a body remaining in a state of rest or uniform motion unless acted on by a net force - is essentially a statement of Galilean relativity. How can it be that Newton believed in absolute space and also believed in Galilean relativity? In fact, the link suggests that Newton believed in absolute motion as well (which would be implied by his belief in the existence of absolute space and absolute time anyway). Perhaps I am getting confused between the notions of 'relativity' and 'absolute space/time'... Answer: Newton's idea of absolute space simply appeared as an answer to the following question: What is an inertial system? Saying that an inertial system is one with constant velocity relative to another inertial system of course does not answer the question. To avoid such logical weakness in Newton's first law one has, at some point, to assume that there is a frame of reference, called absolute space, that - by definition - is inertial. On the other hand, Galilean relativity consists of transformations among inertial frames of reference, and such relations do not forbid an absolute space. In fact, the idea is that we can use Galilean relativity transformations to relate any inertial frame to the absolute space.
Although the concept of absolute space can be removed by Mach's definition of inertial frame (given an isolated particle, there is a reference frame, called inertial, relative to which the particle has constant velocity), there is no contradiction between Galilean relativity and absolute space. Only Special Relativity can definitively rule out the existence of absolute space.
{ "domain": "physics.stackexchange", "id": 42298, "tags": "newtonian-mechanics, reference-frames, inertial-frames, galilean-relativity, machs-principle" }
If the integer representation used is "0 through 4,294,967,295 (2^32 − 1)", so does this mean the register cannot handle negative numbers?
Question: From Wikipedia: A 32-bit register can store 2^32 different values. The range of integer values that can be stored in 32 bits depends on the integer representation used. With the two most common representations, the range is 0 through 4,294,967,295 (2^32 − 1) for representation as an (unsigned) binary number, and −2,147,483,648 (−2^31) through 2,147,483,647 (2^31 − 1) for representation as two's complement. So if the integer representation used is "0 through 4,294,967,295 (2^32 − 1)", does this mean the register cannot handle negative numbers? From a similar standpoint, if the integer representation used is "−2,147,483,648 (−2^31) through 2,147,483,647 (2^31 − 1)", does this mean that the register cannot handle numbers greater than 2,147,483,647? Answer: The contents of a register don't have any inherent semantics. Some instructions might assume certain semantics. For example, the x86 ADD instruction assumes that the registers represent integers, either unsigned or signed using two's complement. There are signed and unsigned versions in the x86 architecture for the multiplication instructions. Another, less obvious, place in which sign makes a difference is the promotion instructions, in which a smaller register is assigned to a larger register – you want either zero extension (in the unsigned case) or sign extension (in the signed case, assuming two's complement). What this all means is that the instruction set favors certain interpretations by having instructions that assume them, but beyond that, the interpretation of the data stored in a register is completely up to the user. Moreover, in x86 at least, the same register can be used for both signed and unsigned operations. The register doesn't "know" which type of data it stores – it is completely up to the programmer. (One could imagine different architectures with different conventions.)
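The point that interpretation is up to the programmer can be made concrete: the very same 32-bit pattern yields different integers depending on the convention. A Python sketch (Python integers are unbounded, so the two's-complement reinterpretation has to be done by hand):

```python
def as_signed32(pattern):
    """Reinterpret a 32-bit pattern as a two's-complement signed integer."""
    # if bit 31 (the sign bit) is set, the pattern represents pattern - 2^32
    return pattern - (1 << 32) if pattern & (1 << 31) else pattern

print(0xFFFFFFFF)               # unsigned view: 4294967295
print(as_signed32(0xFFFFFFFF))  # two's-complement view of the same bits: -1
print(as_signed32(0x7FFFFFFF))  # largest positive signed value: 2147483647
```

Nothing in the bits changes between the two prints of 0xFFFFFFFF; only the convention applied to them does.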
{ "domain": "cs.stackexchange", "id": 17126, "tags": "cpu, bit-manipulation" }
Spectral clustering with heat kernel weight matrix
Question: I am studying normalized graph cuts, and one of the ways to define the weight matrix is using the heat kernel, which is $W_{ij} = e^{\frac{-\|x_i - x_j\|^2}{\sigma^2}}$. I want to ask: what's the meaning of sigma? Does it affect the partition of the data? How do we pick sigmas? And what happens if it's too large or too small? What happens to the Laplacian matrix L as $\sigma \to 0$ and $\sigma \to \infty$? What are the eigenvectors and eigenvalues of this matrix? Answer: $\sigma$ represents a typical distance between points. If all points of one cluster of your graph are separated from all points of another by a distance that is significantly higher than $\sigma$, then the spectral clustering will probably use this as a cut. If you already know the number of clusters (or cuts) that you want to make, $\sigma$ does not need to be tuned very finely. But it becomes really important if you have no idea how many cuts you want to make. A good approach is to make a parametric study over $\sigma$, and make the decision through the eigenvalues. Here's an example from one of my study cases, giving the eigenvalues (in ascending order) of the Laplacian matrix, for 4 different values of $\sigma$: What you are looking for is a break in the increase of eigenvalues: theoretically, a wide gap between two consecutive values ($n$ and $n+1$) should tell you that it is a good idea to make $n$ clusters. As you can see, small sigma values (upper left plot) lead to very low eigenvalues, sometimes even numerically negative. Very small values tend to make very small clusters containing outliers, and a very big cluster with all the other points. High sigma values (lower right plot) will make your matrix look like the identity, with high eigenvalues; this usually won't bring anything interesting. What I usually do is try a wide window for sigma values, and shorten the range progressively, until I find a satisfying result.
This is all graphical, and depends on the prior knowledge about your problem (how many clusters do you approximately expect? 2, 10, 100?). Eigenvalues represent the quality of the cut. Small values correspond to very distant clusters. For instance, in my upper right case, I could make 4 very well-separated clusters (the first 4 eigenvalues are almost 0). The next cut (to reach 5 clusters) would be less effective than the 3 previous ones. 13 clusters could also have been a good try. But I actually selected 9 clusters, because it was closer to what I actually expected.
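The parametric study can be reproduced on toy data. A sketch (assumed synthetic data, NumPy only) builds the heat-kernel weight matrix and the unnormalized Laplacian for two well-separated 1-D clusters; with $\sigma$ much smaller than the gap between clusters, exactly two eigenvalues sit near zero, signalling two clusters:

```python
import numpy as np

rng = np.random.default_rng(0)
# two tight clusters around 0 and 5 (assumed toy data, 10 points each)
x = np.concatenate([rng.normal(0.0, 0.1, 10), rng.normal(5.0, 0.1, 10)])

def laplacian_eigenvalues(points, sigma):
    d2 = (points[:, None] - points[None, :]) ** 2
    W = np.exp(-d2 / sigma**2)        # heat-kernel weight matrix
    L = np.diag(W.sum(axis=1)) - W    # unnormalized graph Laplacian
    return np.linalg.eigvalsh(L)      # ascending order

vals = laplacian_eigenvalues(x, sigma=0.5)
# two near-zero eigenvalues followed by a clear gap -> suggests 2 clusters
```

Re-running with a much larger $\sigma$ (say 50) pushes all off-diagonal weights toward 1, and the eigenvalue gap that signals the cluster count disappears, matching the "matrix looks like the identity" remark above in spirit.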
{ "domain": "datascience.stackexchange", "id": 3996, "tags": "clustering, dimensionality-reduction" }
Why is 8–15 µm considered "thermal infrared" if typical room-temperature kT corresponds to 48 µm?
Question: According to Wikipedia: Long-wavelength infrared (8–15 µm, 20–37 THz, 83–155 meV): The "thermal imaging" region, in which sensors can obtain a completely passive image of objects only slightly higher in temperature than room temperature - for example, the human body - based on thermal emissions only and requiring no illumination such as the sun, moon, or infrared illuminator. This region is also called the "thermal infrared". However, using $\frac{hc}{\lambda}=k_\mathrm B T$, the temperature range 288–308 K (15–35 °C) is equivalent to 50–46.7 µm, while 8–15 µm is equivalent to 1800–960 K (using the same equation). Answer: Your formula is wrong, is the short answer. It works fine as an order-of-magnitude approximation, but it's only an approximation. What you need is Wien's displacement law: $\lambda=\frac{b}{T}$ where Wien's displacement constant $b = 2.8977729(17) \times 10^{-3}\ \mathrm{m\,K}$. Put $T = 288\,\mathrm{K}, 308\,\mathrm{K}$ into that and you get $\lambda = 9.4\text{--}10.0\,\mu\mathrm{m}$, which as you'd expect is at the lower end of the thermal-infrared band. Note that $b=\frac{hc}{xk_B}$ where $x$ can be determined by finding the peak of the black-body spectrum from Planck's law - which has to be done numerically. It turns out that $x=4.96$, as opposed to the value of 1 you in effect used in your original estimate. Hence your values are around a factor of 5 too high.
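The same computation takes only a few lines, using the constant quoted in the answer:

```python
b = 2.8977729e-3            # Wien's displacement constant, m*K

def peak_wavelength_um(T):
    """Black-body emission peak via Wien's law, in micrometres."""
    return b / T * 1e6

print(round(peak_wavelength_um(288.0), 2))   # 10.06 um at 15 C
print(round(peak_wavelength_um(308.0), 2))   # 9.41 um at 35 C
```

Both results land inside the 8–15 µm band, consistent with calling it the thermal-imaging region.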
{ "domain": "physics.stackexchange", "id": 29568, "tags": "thermal-radiation, infrared-radiation" }
Deriving photon energy equation using $E = mc^2$ and de Broglie wavelength
Question: So I was learning de Broglie wavelength today in my physics class, and I started playing around with it. I wondered if it was possible to calculate the energy of a light wave given its wavelength and speed. After rearranging a bit, I plugged it into $E = mc^2$, and realized I had found the equation for the energy of a photon that I learned at the beginning of my quantum mechanics unit, $E = hf$. $\lambda = \frac{h}{p}$ $p = \frac{h}{\lambda}$ $E = mc^2$ $E = \frac{p}{c}\cdot c^2$ $E = \frac{hc}{\lambda} = hf$ I have a few questions about this. Firstly, I do not understand how it can make sense to do $\frac{p}{c}$ in this context, because, as I understand it, light has no mass. How can I come to $E = hf$ using the mass of a massless object? Secondly, as I was doing those steps, I thought I would be calculating the energy of the entire light ray. I now realize I was finding the energy of a single photon. In hindsight, this makes sense, because the energy of the entire light ray must depend on some length value, correct? This led me to two other questions: do light rays have some finite length? how do I calculate the energy of a light ray, not just a single photon? Answer: Firstly, I do not understand how it can make sense to do p/c in this context, because, as I understand it, light has no mass. That's correct. Light has no mass so using $m=\frac{p}{c}$ is technically incorrect. How can I come to E=hf using the mass of a massless object? From the relation $$E^2=p^2c^2+m^2c^4$$ given that the mass of a photon is indeed zero, then $$E=pc\rightarrow E=\frac{hc}{\lambda}\ \ \ \text{since }\ \ \ p=\frac{h}{\lambda}$$ You also know that $c=f\lambda$ so $$E=hf$$ the energy of the entire light ray must depend on some length value, correct? Yes, it depends on the wavelength, $\lambda$. It's not clear what you mean by "entire". The energy of a light ray is characterized by its frequency, and therefore wavelength. 
Note that a single photon is a particle; the classical wave character of a "light ray" emerges from many photons.
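A numeric sanity check of $E = hc/\lambda = hf$, using an assumed example wavelength of 500 nm (green light; the wavelength is not from the question):

```python
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
lam = 500e-9          # assumed example wavelength, m

E_from_wavelength = h * c / lam
E_from_frequency = h * (c / lam)   # f = c / lambda, so E = h f

# both routes agree: the photon energy is about 3.97e-19 J (~2.48 eV)
```

The total energy of a beam is then this single-photon energy times the number of photons, which is why it depends on intensity and duration rather than on any "length" of the ray.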
{ "domain": "physics.stackexchange", "id": 85954, "tags": "quantum-mechanics, visible-light" }
How realistic is it to dry clothes in freezing weather?
Question: There are stories about clothes freezing and drying in cold weather. The typical reason given for that is that the water is undergoing sublimation (ice -> vapor). The phase diagram for water is: In a typical case, one would hang wet clothes outside, at say -5°C (or whatever realistic temperature). This would cause the clothes to freeze and then, in order to move between phases, the pressure would need to drop. Note that this is the scenario I have in mind, not a direct water -> vapor transition (the usual kind, but also the one where the starting point on the graph is in the liquid phase, and the ending point (external temp + current pressure) is in the vapor phase). Aren't the conditions a bit too esoteric for that to happen often enough to have made it into some family legends? Answer: I guess all you need for ice sublimation is low water vapor pressure (low humidity), not low atmospheric pressure. Sublimation proceeds as long as the partial pressure of water vapor in the surrounding air is below the equilibrium vapor pressure of ice at that temperature; the total atmospheric pressure does not need to drop.
{ "domain": "physics.stackexchange", "id": 46647, "tags": "everyday-life, water, phase-transition" }
Why is the western sky yellow after sunset?
Question: I have read this question: Why is the sky blue and the sun yellow? where John Rennie says: The only light we see directly from the Sun is the light that travels in a straight line from the Sun to our eye. If you consider the upper yellow line we can't see this light ray because it misses our eye. However the Rayleigh scattering due to the air scatters in all directions, so some of this scattered light reaches our eye. That means when we look away from the Sun we only see the scattered light and not the direct sunlight. The Rayleigh scattering depends on the wavelength and blue light is scattered most. That means the light we see coming from directions away from the Sun has a spectrum weighted towards the blue. Now I do understand that the sky is blue due to Rayleigh scattering. The Sun appears yellowish close to sunset because the closer it is to the horizon, the longer the path its light has to travel through the atmosphere, and this means more Rayleigh scattering; thus, the direct sunlight reaching our eyes will be weighted more towards the yellow (more blue photons are scattered away). This explains both why the sky is blue and why the direct light from the Sun is more yellowish close to the horizon. But what happens after sunset? There is no direct sunlight. All the sunlight we see comes from scattering. How can it then be yellowish? It should all be blue because of Rayleigh scattering. After sunset, none of the photons reach our eyes directly. All of them are scattered off the atoms in the atmosphere. Thus, Rayleigh scattering should dominate and cause it to be just blue. Just to clarify, my question is about after the sunset, when the Sun is completely below the horizon and no photon can reach our eyes directly from the Sun; all photons are scattered through the atmosphere. Why is the sky yellow then? There are no clouds either. The question is basically whether the afterglow is caused by Mie scattering and how it can dominate Rayleigh scattering.
Answer: This phenomenon is also known as an afterglow (the opposite is called foreglow, which occurs before sunrise), which is a broad arch of whitish or pinkish sunlight in the sky that is scattered by fine particulates, like dust, suspended in the atmosphere. An afterglow may appear above the highest clouds in the hour of fading twilight or be reflected off high snowfields in mountain regions long after sunset. The particles produce a scattering effect upon the component parts of white light. The high-energy and high-frequency light is scattered out the most, while the remaining low-energy and -frequency light reaches the observer on the horizon at twilight. The backscattering of this light further turns it pinkish to reddish.
{ "domain": "physics.stackexchange", "id": 73143, "tags": "electromagnetic-radiation, visible-light, photons, scattering, atmospheric-science" }
Graph (Forest) representation that supports edge deletion and efficient traversal
Question: I am trying to write a data structure that, given a general tree (or forest), will support the following operations: edge deletion, and Connected(u,v) queries. This problem is addressed in section two of the following ACM journal article: "An On-Line Edge-Deletion Problem". The idea given claims to be able to carry out q edge deletions in O(q + |V|log|V|) time, while allowing constant-time Connected(u,v) queries. The idea being: maintain a table mapping each vertex to a connected component. Upon each deletion, each of the two new trees is scanned in parallel. Whichever tree is finished being scanned first becomes a new component. Now my question is, which graph representation can I use to implement their idea? On one hand I need to be able to delete an edge without having to scan an O(|V|) adjacency list; on the other hand, I need to be able to run a traversal (DFS) in O(|E|) = O(|V|) (tree) time, which can't happen using a matrix. Answer: This problem is known as Decremental Connectivity. In general, decremental connectivity is where you need to support the operations: Connected($u$,$v$): check whether vertex $u$ is connected to vertex $v$; Delete($e$): remove an edge $e$. Given $n$ queries of the first kind and $m$ queries of the second, Even and Shiloach [1] gave an $O(n\log{n} + m)$ algorithm and later, Alstrup et al. gave an $O(n+m)$ algorithm in [2]. There is a simple way to do the former using a data structure known as Euler Tour Trees, or the approach taken in [1] could also be used. If you wish to implement [1], the tree is stored as a set of connected components. It is then enough to determine which component each vertex belongs to in order to answer the connectedness query. Deletions are processed with a breadth-first search over both sides; since only the smaller side is relabeled, each vertex is relabeled $O(\log{n})$ times, giving $O(n\log{n} + m)$ overall. If the $\log{n}$ factor must be removed, then a leaf-trimming approach works: you trim off the maximal subtrees of size $O(\log{n})$ (the micro trees), which are handled separately.
Then, you can use Euler Tour Trees (or the approach in [1]) to perform queries in the untrimmed portion. Some cases need to be separately handled. This is the approach given in [2]. [1] S. Even and Y. Shiloach. An on-line edge-deletion problem. Journal of the Association for Computing Machinery, 28:1-4, 1981. [2] Stephen Alstrup, Jens Peter Secher and Maz Spork. Optimal On-line Decremental Connectivity in Trees, IPL, 65:64-8, 1997.
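To make the bookkeeping concrete, here is a minimal Python sketch of the parallel-scan idea (not the optimized word-level version from [2]; all names are my own, not from the papers): an adjacency set per vertex gives O(1) edge deletion, a component label per vertex gives O(1) Connected queries, and after each deletion the two sides are scanned in lockstep so that only the smaller tree gets relabeled.

```python
from collections import deque

class DecrementalForest:
    """Toy sketch of decremental connectivity on a forest: O(1) edge
    deletion via adjacency sets, O(1) connected(u, v) via component
    labels.  After a deletion both new trees are scanned in lockstep and
    whichever finishes first (the smaller one) is relabeled."""

    def __init__(self, n, edges):
        self.adj = [set() for _ in range(n)]
        for u, v in edges:
            self.adj[u].add(v)
            self.adj[v].add(u)
        self.comp = [-1] * n
        self.next_label = 0
        for s in range(n):                      # initial flood fill
            if self.comp[s] == -1:
                self._relabel(s, self.next_label)
                self.next_label += 1

    def _relabel(self, start, label):
        q = deque([start])
        self.comp[start] = label
        while q:
            v = q.popleft()
            for w in self.adj[v]:
                if self.comp[w] != label:
                    self.comp[w] = label
                    q.append(w)

    def connected(self, u, v):
        return self.comp[u] == self.comp[v]

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        # Scan both sides one vertex at a time; stop when one side is done.
        sides = [({u}, deque([u])), ({v}, deque([v]))]
        while all(q for _, q in sides):
            for seen, q in sides:
                x = q.popleft()
                for w in self.adj[x]:
                    if w not in seen:
                        seen.add(w)
                        q.append(w)
        # The side whose queue emptied first is the smaller tree: relabel it.
        for seen, q in sides:
            if not q:
                for x in seen:
                    self.comp[x] = self.next_label
                self.next_label += 1
                break
```

Relabeling only the side that finishes first is what gives the near-linear total bound: a vertex is only relabeled when its component at least halves, so each vertex is relabeled $O(\log|V|)$ times across all deletions.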
{ "domain": "cstheory.stackexchange", "id": 2942, "tags": "graph-algorithms, tree" }
RAD Seq Data Analysis without barcode
Question: I received ddRAD-Seq data from my supervisor without barcodes and restriction enzymes. I asked him for both and he said I don't need them since the data has been cleaned by the company. Now I am kinda confused and don't know where to start. Answer: Wow Biobash, that's kinda harsh. It is cool that sometimes you unexpectedly learn something new. What Michael mentioned is pretty interesting; no one has ever mentioned that to me, or at least I have never heard it. So, my approach would be that you are now doing a Pool-seq study, since you can't identify individuals. The other way to try to catch the paralogues is to map your reads to a sister species. My question is what are you trying to estimate? Genetic differentiation? Genetic diversity? Pop-gen structure? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6116966/ https://onlinelibrary.wiley.com/doi/full/10.1002/ece3.5240 https://www.g3journal.org/content/9/12/4159 One semi-robust approach to assess usefulness is thinking about your system and defining what you expect to find, let's say in terms of genetic differentiation or genetic structure, and estimating that from your data. If that hypothesis is confirmed, your dataset might be useful. You can also test this by first estimating the loci by mapping your reads to a sister reference genome, to try to detect paralogues. Since you don't have barcodes, each "locus" is independent and you lose power, but there is nothing else you can do. Good luck!
{ "domain": "bioinformatics.stackexchange", "id": 1638, "tags": "snp, phylogenetics" }
Truncating text with jQuery but keep the HTML formatting
Question: I repeat here this answer on Stack Overflow. I first posted an answer with unfinalized code, as a simple description of the solution I could think of, without any testing. But later I remained interested, so I worked to make it (hopefully) perfectly functional. To precisely define what it is meant to do, let me cite my previous answer: This is a classic dilemma for any CMS or blog, where the teaser should present the beginning of an article: often the solution is either stripping the text of its tags and cutting at a precise count OR keeping the tags but cutting approximately because the tags are counted too... So here the intent is to take an HTML element with any number of children, and any nesting level, and return: the "same" element (i.e. keeping its tag and attributes), where the resulting text content (i.e. visible as characters in the resulting page) is limited to a given count, where the resulting text is built from successive text nodes in their natural order, and where encountered tags are kept intact and at their natural place. Here is my actual solution:

    function cutKeepingTags(elem, reqCount) {
        var grabText = '', missCount = reqCount;
        $(elem).contents().each(function() {
            switch (this.nodeType) {
                case Node.TEXT_NODE:
                    // Get node text, limited to missCount.
                    grabText += this.data.substr(0, missCount);
                    missCount -= Math.min(this.data.length, missCount);
                    break;
                case Node.ELEMENT_NODE:
                    // Explore current child:
                    var childPart = cutKeepingTags(this, missCount);
                    grabText += childPart.text;
                    missCount -= childPart.count;
                    break;
            }
            if (missCount == 0) {
                // We got text enough, stop looping.
                return false;
            }
        });
        return {
            text: // Wrap text using current elem tag.
                elem.outerHTML.match(/^<[^>]+>/m)[0] + grabText + '</' + elem.localName + '>',
            count: reqCount - missCount
        };
    }

And here is a working example. (I kept the HTML example posted by the previous question's OP) Answer: First of all, I would just like to say that this is a really good and useful function.
From a Code Review standpoint, there are almost no errors in it that I know of. Here are a few things I found from examining it: Keep spacing uniform. Line 8 doesn't have a space between parameters, where your other function calls do. This is most likely due to quick typing and not any major issue other than a cleanliness nitpick. A loose comparison in the if. Lines 19-21, where you have an if like so:

    if (missCount == 0) {
        // We got text enough, stop looping.
        return false;
    }

You should never use == over === because something like "0" would then compare equal to 0: the double equals sign compares after type coercion, whereas the triple equals sign tests for both value and type. So your final code should look like this for the if statement:

    if (missCount === 0) {
        // We got text enough, stop looping.
        return false;
    }

Optional - JSHint. If you use JSHint in JSFiddle, then you'll run into errors when trying to run this:

    elem.outerHTML.match(/^<[^>]+>/m)[0] + grabText + '</' + elem.localName + '>',

If you're worried about that then you just have to write it all on one line. But for being short and concise, that might not be what you want.
{ "domain": "codereview.stackexchange", "id": 16174, "tags": "javascript, jquery, html, regex, dom" }
Electric Potential and Corona Discharge
Question: I saw a physics demonstration where, between two flat metal plates, was a round conducting ball with a large radius. When the two plates had a potential difference, there was an increasing electric field, which increased the electric potential of the conducting ball according to the equation $V = Er$, where $V$ is the electric potential, $E$ is the magnitude of the electric field, and $r$ is the radius of the sphere. The large electric potential caused a spark across the air. Then, a spike was placed between the two plates. Electric potential built up in the spike, but because the radius of the "ball" at the end of the spike is small, the electric potential at the end of the spike is small too. Why does the air not experience dielectric breakdown in the case of the spike? How does current continue to flow through the air, without any sparks? Answer: There are several types of discharge that might occur in a gaseous/plasma medium. The exact type of the discharge depends on several parameters: the type of gaseous medium (what kind of gas or plasma is present), pressure, voltage and current. A spark is what is called an arc discharge. As far as I understand, in this case the type of medium and the pressure are the same in both experiments (ambient air at 1 atmosphere?). So the key difference was in voltage/current. The spike has lower capacitance than the sphere, so it doesn't allow the voltage to build up enough to actually break down the air. In a somewhat simplified way one might look at it as such: the charge easily leaves the spike ("flows off" of it) because of its point-sharp form, whereas the sphere has plenty of room to accumulate the charge. So as we increase voltage the current rises more quickly in the case of the spike (yet it's not big enough to glow brightly), whereas the sphere accumulates a lot of charge with a relatively small current and then the air quickly breaks down with a spark.
So, to sum up, very roughly speaking, in case of a spike we have relatively big current and low voltage, and in case of a sphere we have relatively low current and high voltage - which eventually leads to an arc.
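A back-of-the-envelope sketch of why the sphere can reach a sparking voltage and the spike cannot, using assumed numbers (the roughly 3 MV/m dielectric strength of dry air, and example radii I picked):

```python
# The surface field of a conductor at potential V with radius of
# curvature r scales as E ~ V / r, so the voltage needed to reach the
# breakdown field E_bd scales with r: tiny for a sharp spike, large for
# a big sphere.  Numbers below are illustrative assumptions.

E_bd = 3e6  # V/m, approximate dielectric strength of dry air

for name, r in [("sphere, r = 5 cm", 0.05), ("spike tip, r = 0.1 mm", 1e-4)]:
    v_spark = E_bd * r
    print(f"{name}: breakdown at roughly {v_spark:,.0f} V")
```

With these numbers the sphere needs on the order of a hundred thousand volts to spark, while the spike's tip already ionizes the surrounding air at a few hundred volts, bleeding charge off quietly as a corona instead of an arc.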
{ "domain": "physics.stackexchange", "id": 37678, "tags": "electric-fields, potential, dielectric" }
How to know the group number from the ionization energy?
Question: The successive ionization energies of the first four electrons of a representative element are $\pu{738.1 kJ/mol}$, $\pu{1450 kJ/mol}$, $\pu{7730 kJ/mol}$ and $\pu{10500 kJ/mol}$. Characterize the element according to the periodic group. From this data page, I predicted that the element is $\ce{Mg}$, which belongs to Group 2. But I don't have a systematic way to predict the group number or the electron configuration from the question. Is there any approach for this kind of question? Answer: The particular values are not that important. Their pattern is. Notice the big relative step between the 2nd and the 3rd ionization energies. The first two values are relatively small, as these electrons are well shielded by the more inner electrons. The other two need much more energy to be taken away, being shielded much less, as the "peer electrons" from the same orbital group do not shield the nuclear charge much for each other. The big jump means that the third electron comes from an inner shell with a different principal quantum number $n$: the atom has exactly two easily removed valence electrons, so the element belongs to the second group.
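The "look for the big jump" rule can be made systematic: the main-group number equals the number of electrons removed before the largest relative jump in successive ionization energies. A small sketch (the function name is my own):

```python
def group_from_ionization(ies):
    """Main-group number = number of electrons removed before the
    largest relative jump in successive ionization energies."""
    ratios = [b / a for a, b in zip(ies, ies[1:])]
    jump = max(range(len(ratios)), key=lambda i: ratios[i])
    return jump + 1  # electrons removed before the big jump

ies = [738.1, 1450, 7730, 10500]  # kJ/mol, from the question
print(group_from_ionization(ies))  # -> 2, i.e. Group 2 (consistent with Mg)
```

Here the ratios are roughly 2.0, 5.3 and 1.4, so the standout jump is between the 2nd and 3rd energies: two valence electrons, Group 2.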
{ "domain": "chemistry.stackexchange", "id": 16156, "tags": "periodic-trends, periodic-table, ionization-energy" }
What does the _vendor suffix mean in a package name?
Question: There are many repos in ROS index named something_vendor and also many named something_cmake_module. What is the purpose of a vendor package? What does the "_vendor" suffix mean versus just calling the package, e.g., gmock? When is it appropriate to have a vendor package versus a cmake_module package? Originally posted by DanRose on ROS Answers with karma: 274 on 2019-09-12 Post score: 0 Answer: I can't speak to the early history of *_vendor packages in ROS 2, but my understanding is that vendor packages exist to provide software that ROS 2 needs on platforms where it may not be available, or where a different version than what is available is required. Most vendor packages first check to see if the vendored software is available, and if not, build and install it within the vendor package directory using CMake's ExternalProject features. Prior to the homespun Chocolatey packages, tinyxml_vendor was the best example of a case where a library wasn't available on all platforms. Ubuntu had libtinyxml-dev via apt and macOS had tinyxml via Homebrew, but there was no such package available for Windows. On a correctly set up Ubuntu or macOS system, tinyxml_vendor acts as a thin shim for the system package. But on Windows it takes care of fetching, building, and installing the software and making that installation available to the rest of the ROS 2 workspace. The *_cmake_module packages exist to provide CMake glue for system packages that either don't provide it or provide it in an inconsistent fashion. In standard projects each project would just embed its own FindWhatever.cmake, but since many packages in the ROS ecosystem are built together we want to make those modules available for any packages that need them rather than duplicating effort. Originally posted by nuclearsandwich with karma: 906 on 2019-09-12 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 33757, "tags": "ros, ros2, package.xml" }
Preferable way to express $O(n2^n)$
Question: Is it preferable to write $O(n2^n)$ or $O((2 + \epsilon)^n)$? If neither, what is the best way? Since I see a lot of papers with $O(1.42^n)$ instead of $O(2^{\frac{n}{2}})$ and similar transformations, I was wondering what people prefer. Answer: If you have an exact and simple formula like $O(n2^n)$ I would go with that. It is more precise than $O((2+\epsilon)^n)$ because it tells you what the epsilon really is. For the same reason I would generally prefer $O(2^{n/2})$ (but with a slashed fraction not a vertical one in the exponent) to $O(1.42^n)$, although in some cases you might want to give the numerical value (or both forms) to show more clearly how a bound compares to previous bounds for the same problem. If the formula is more complicated ($c^n$ where $c$ is the largest real root of a higher-degree polynomial, say) it can be helpful to replace $c$ by a numeric value. In this case $c$ should always be rounded up, not down, and lower-order terms like the factor of $n$ in the first formula should be omitted (because they are covered by the rounding of $c$).
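A quick numeric illustration of why $O(n2^n)$ is the tighter statement: for any fixed $\epsilon > 0$, $n2^n$ eventually falls below $(2+\epsilon)^n$, so the first bound implies the second but not conversely.

```python
# Ratio n * 2^n / (2 + eps)^n for a small fixed eps: it grows at first
# but is eventually driven to zero, since n * (2 / (2 + eps))^n -> 0.

eps = 0.1
for n in [10, 50, 100, 200]:
    ratio = (n * 2**n) / (2 + eps) ** n
    print(n, ratio)
```

With eps = 0.1 the ratio is still above 1 at n = 10 but well below 1 by n = 200, confirming that $O(n2^n) \subseteq O((2+\epsilon)^n)$ strictly.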
{ "domain": "cstheory.stackexchange", "id": 3211, "tags": "notation, asymptotics" }
Identify this textbook
Question: I would really like to buy this textbook, but only see a scan of one chapter online. Could anybody help me identify the book? Thanks! Answer: Thermal Environmental Engineering 3rd Edition by Kuehn, Ramsey & Threlkeld (1998), Pearson.
{ "domain": "engineering.stackexchange", "id": 4360, "tags": "mechanical-engineering, heat-transfer, hvac, refrigeration, dehumidification" }
Is the pressure higher in the corners?
Question: I was watching the video The Ingenious Design of the Aluminum Beverage Can. At 20'', the author says [..] a spherical can [..] has no corner, so no weak points because the pressure in the can uniformly stresses the wall. and later, talking about a cuboid, [..] these edges are weak points and require very thick walls. It is intuitive to me that in a sphere the wall is stressed uniformly. It is also quite intuitive that the edges of cuboid cans are weak points, as they can be hit from the exterior from a wider angle. However, the statement that in a sphere there are "no weak points because the pressure in the can uniformly stresses the wall" suggests to me that in a cuboid the pressure is not uniform and presumably higher in the corners. This is not intuitive to me! Is the pressure higher against the corners than against the flat walls of a cuboid? How much does the pressure against the walls vary (if it does) in a cuboid at equilibrium? When not at equilibrium (typically when the cuboid is accelerated), is the pressure higher against the corners? Answer: In thermodynamic equilibrium, the pressure will be the same throughout the whole can. If there were a region with higher pressure, this high pressure would immediately cause a motion in the gas. So the pressure is the same everywhere. The author you cited emphasizes that the pressure can uniformly stress the wall. The property that is not uniformly distributed in the case of a corner is not the pressure, but how it stresses the wall. In a corner, you give the pressure more surface to act on, which increases the force that is exerted near the "corner point". An additional note by user1583209: A corner point is much more stressed by pressure than a point in the middle of a wall, because the force acts normal to a surface. At a corner point, the walls to either side of the corner are pushed in different directions, which results in the corner being torn apart.
An additional note by alephzero: Since there is gravity inside the can, the pressure will be higher at the bottom of the can. This effect is (a) completely negligible for an aluminium can, as far as the force that the wall has to endure is concerned, and (b) independent of whether there is a corner in the wall or the wall is flat.
{ "domain": "physics.stackexchange", "id": 35651, "tags": "pressure, topology, machs-principle" }
Field and Charge densities in two dimensional corners and along edges
Question: In Jackson's book, we can derive the equation $\sigma(\rho)=\epsilon_{0}E_{\phi}(\rho, 0)\approx-\frac{\epsilon_{0}\pi a_{1}}{\beta}\rho^{(\pi/\beta)-1} \quad (2.75)$. My question: for a small opening angle $\beta\approx0$, no charge accumulates. But if $\beta=2\pi$, the density has a singularity that goes as $\rho^{-1/2}$. To me, $\beta\approx0$ and $\beta\approx2\pi$ look like the same geometry!! How can I explain this? Answer: The geometry is not the same; the two cases are complementary. The case where $\beta \approx0$ describes a very deep corner, so the space not occupied by the conductor is very narrow. The case where $\beta \approx 2 \pi$ is the case where the conductor is almost a thin sheet, that is, the space occupied by the conductor is very narrow. As expected, the charge accumulations on the corner are really different.
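A quick numeric look at the surface-charge exponent in (2.75) for a few opening angles (the sample angles are arbitrary illustrations):

```python
import math

# Near a two-dimensional corner of opening angle beta, the surface charge
# behaves as sigma ~ rho**(pi/beta - 1).  The exponent is large and
# positive for a deep corner (beta -> 0, charge vanishes at the apex) and
# tends to -1/2 for a knife edge (beta -> 2*pi, charge diverges).

for beta in [0.1 * math.pi, 0.5 * math.pi, math.pi, 1.5 * math.pi, 2 * math.pi]:
    exponent = math.pi / beta - 1
    print(f"beta = {beta / math.pi:.1f} pi  ->  sigma ~ rho^{exponent:+.2f}")
```

The sign flip of the exponent at $\beta = \pi$ (a flat surface) is exactly the asymmetry between the two "similar-looking" geometries.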
{ "domain": "physics.stackexchange", "id": 16815, "tags": "electromagnetism, electrostatics, classical-electrodynamics" }
Wrong constructor in ROS2
Question: I am trying to create a constructor for my odom node but for some reason I keep getting the strange error:

    error: no matching function for call to 'rclcpp::TimeSource::TimeSource(<brace-enclosed initializer list>)'
    odom_pub{}, _odom_tb{std::make_shared<tf2_ros::StaticTransformBroadcaster>(shared_from_this())}

The thing is that I am not even trying to pass the rclcpp::TimeSource but my compiler thinks so. I think I managed to follow the variable-declaration = variable-initialization order, so for example in my header:

    class OdomNode : public rclcpp::Node
    {
    private:
        int _rate, _ticks_meter, _left_pos, _right_pos, _enc_left, _enc_right, _lmult, _rmult;
        int _prev_rencoder, _prev_lencoder;
        double _d_left, _d_right;
        std::shared_ptr<tf2::Quaternion> q;
        std::string _base_frame_id, _odom_frame_id;
        // Node
        rclcpp::TimeSource _ts;
        rclcpp::Clock::SharedPtr _clock;
        std::shared_ptr<geometry_msgs::msg::TransformStamped> _odom_ts;
        rclcpp::Rate _loop_rate;
        rclcpp::Time _then;
        // Subs and publishers:
        //rclcpp::Subscriber _left_sub;
        std::shared_ptr<rclcpp::Publisher<nav_msgs::msg::Odometry>> _odom_pub;
        std::shared_ptr<tf2_ros::StaticTransformBroadcaster> _odom_tb;
        // Known vars:
        double _x{0};
        double _y{0};
        double _th{0};
        std::vector<double> _q_imu{0,0,0,1};
        std::shared_ptr<std_msgs::msg::Float64> _state_left, _state_right;

And my cpp file:

    OdomNode::OdomNode() :
        Node("odometry_node", "odometry_node_ns", rclcpp::NodeOptions()),
        _rate(30), _ticks_meter(1),
        _left_pos{0}, _right_pos{0}, _enc_left{0}, _enc_right{0},
        _lmult{0}, _rmult{0}, _prev_rencoder{0}, _prev_lencoder{0},
        _d_left{0}, _d_right{0},
        q{std::make_shared<tf2::Quaternion>()},
        _base_frame_id{"base_footprint"}, _odom_frame_id{"odom"},
        _ts{this},
        _clock{std::make_shared<rclcpp::Clock>(RCL_ROS_TIME)},
        _odom_ts{std::make_shared<geometry_msgs::msg::TransformStamped>()},
        _loop_rate{30}, _then{},
        _odom_pub{},
        _odom_tb{std::make_shared<tf2_ros::StaticTransformBroadcaster>(shared_from_this())}
    {}

Originally posted by EdwardNur
on ROS Answers with karma: 115 on 2019-08-08 Post score: 1 Answer: The time source rclcpp::TimeSource _ts; is being initialized with _ts{this}, where this is of type OdomNode *, but the only constructor that takes any arguments expects rclcpp::Node::SharedPtr. However, rclcpp::Node already creates a time source and clock. Instead of creating a time source, use get_clock() or now(), since OdomNode inherits from rclcpp::Node. Originally posted by sloretz with karma: 3061 on 2019-08-08 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by EdwardNur on 2019-08-08: @sloretz Ok, I see now... Also, can you explain why am I getting this error: [odometry_node-1] terminate called after throwing an instance of 'std::bad_weak_ptr' [odometry_node-1] what(): bad_weak_ptr I mean I know that this is because of a dangling pointer but why does it happen? Is it because of shared_from_this()? I also have this line: while(rclcpp::ok()) { this->loop(); this->_loop_rate.sleep(); rclcpp::spin(shared_from_this()); } And I think it is the last line that brings me that error, as the shared_ptr was dangling. I tried to do this just before the while loop: std::shared_ptr<rclcpp::Node> _for_spin = std::make_shared<rclcpp::Node>(this); But it did not work Comment by sloretz on 2019-08-08: Please open a second question for the new error. Questions are easier to find than comments for future people with the same problem. Comment by EdwardNur on 2019-08-08: @sloretz Yes, I did: https://answers.ros.org/question/330261/what-bad_weak_ptr/
{ "domain": "robotics.stackexchange", "id": 33592, "tags": "ros2" }
How does conduction happen?
Question: I'd like to ask how conduction happens. I mean, the more strongly vibrating atoms hit the less vibrating atoms and give them energy. But how is that energy transferred? For atoms to truly collide they would have to be very fast, which would end up in fusion, but that's not the case. I presume that the energy is transferred by a field that accelerates the atoms and causes them to vibrate more, and that this comes from the charge of the atoms, I think. So can anyone cast light on this? Answer: Imagine two spheres of equal mass. One is stationary, the other has velocity $v$. If they get close enough, they "collide" and some of the momentum and energy of the moving sphere is transferred to the stationary sphere. That is the basic mechanism. The assumption is that gas molecules can be treated as spheres that collide elastically. This is true at "ordinary" temperatures (before the gas turns into a plasma - because in a plasma the energy of collision is so high that electrons are knocked off). Electrons are bound fairly tightly to their molecules - on the order of eV. In contrast, the typical energy of a gas atom is on the order of $kT$, which is about 1/40th of an eV at room temperature. Gas atoms have a velocity distribution, which means there is a very small probability of a really energetic atom; but that probability is negligible at room temperature.
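The equal-mass collision in the answer can be checked with the standard one-dimensional elastic-collision formulas (a generic sketch, not tied to any particular gas):

```python
# One-dimensional elastic collision: momentum and kinetic energy are both
# conserved.  For equal masses the moving sphere hands its entire
# velocity (and energy) to the stationary one -- the basic microscopic
# step of conduction in this picture.

def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities from momentum + energy conservation."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

print(elastic_1d(1.0, 5.0, 1.0, 0.0))  # -> (0.0, 5.0): full energy transfer
```

For unequal masses only part of the energy is handed over per collision, which is why energy diffuses through a material over many collisions rather than in one step.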
{ "domain": "physics.stackexchange", "id": 27608, "tags": "thermodynamics, atoms, thermal-conductivity" }
Convert Protobuf Message to ROS Message
Question: Hi, I just ran into the issue that my incoming message (via an external library) is in Protobuf 2.5 format. I have the .proto files and generated code for them with catkin_make. Now I need a gateway function, because already-existing nodes need the data as ROS messages, which are generated via the .msg files. The .proto files have the same internal structure as the .msg files. As an example, the .proto file:

    package Grid;
    message MsgExample {
        optional float f_A = 10 [default = -66666.0];
        optional float f_B = 20 [default = -55555.0];
        optional int32 i_A = 1 [default = 0];
    }

and my ROS message looks like:

    #MsgExample
    float32 f_A
    float32 f_B
    int32 i_A

Is it possible to directly copy one to the other? Not only field by field, because in some cases there is a really big number of different variables inside a message. At the moment I have to do, for every message:

    ros_message::MsgExample myRosMessage;
    Grid::MsgExample myProtoMessage;  // this data I get from a callback (not important for my question where the data comes from)
    myRosMessage.f_A = myProtoMessage.f_a();
    myRosMessage.f_B = myProtoMessage.f_b();
    myRosMessage.i_A = myProtoMessage.i_a();

to publish the ROS message later. I need something to directly publish the protobuf message as a ROS message, or to do it with only one operation like

    myRosMessage = myProtoMessage

Is this possible? Thanks for your help and interest. Best regards, Rene Gaertner Originally posted by ReneGaertner on ROS Answers with karma: 21 on 2015-02-25 Post score: 0 Answer: My recommendation would be to write a function which does the conversion for you for each data type. You can do tricks and overload operators but that usually ends up with less readable code. Originally posted by tfoote with karma: 58457 on 2015-02-25 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by tfoote on 2015-02-27: Unfortunately if you change the message definitions you will need to update the code.
If you want to fully automate that you're starting to look at code generators and ways to define the mappings between the two messages. You will need to maintain a mapping between them, in code is usually easiest Comment by ReneGaertner on 2015-03-02: i m just going that way, i have a file with the mapping options and search for my topic name to find which one is for which other message, than i choose the 2 header files of the message and fill them to each other. At the moment it s not working but seems to be a good way to do it. Comment by jinder1s on 2018-04-20: Hi tfoote, I'm curious if you would change your recommendations now?(Sorry if this is not the right avenue to ask that question) -jinder Comment by tfoote on 2018-04-20: No, why would you expect a change in the recommendation?
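In the spirit of the accepted answer's one-conversion-function-per-message-pair advice, here is a sketch in Python of how a name-map-driven copy could look. The classes below are hypothetical stand-ins for the generated protobuf and ROS types (the real code would be C++ against the generated APIs); the point is only that a single explicit field map replaces a line of code per field:

```python
# One small conversion function per message pair, driven by an explicit
# field-name map so large messages don't need hand-written assignments.
# ProtoMsgExample / RosMsgExample are stand-ins for Grid::MsgExample and
# ros_message::MsgExample -- assumptions, not the generated classes.

class ProtoMsgExample:
    def __init__(self, f_a=0.0, f_b=0.0, i_a=0):
        self.f_a, self.f_b, self.i_a = f_a, f_b, i_a

class RosMsgExample:
    def __init__(self):
        self.f_A, self.f_B, self.i_A = 0.0, 0.0, 0

FIELD_MAP = {"f_A": "f_a", "f_B": "f_b", "i_A": "i_a"}  # ros field -> proto field

def proto_to_ros(proto):
    ros = RosMsgExample()
    for ros_field, proto_field in FIELD_MAP.items():
        setattr(ros, ros_field, getattr(proto, proto_field))
    return ros

msg = proto_to_ros(ProtoMsgExample(f_a=1.5, f_b=2.5, i_a=7))
print(msg.f_A, msg.f_B, msg.i_A)  # -> 1.5 2.5 7
```

The map is exactly the "mapping between the two messages" the comments describe: it must still be maintained by hand (or generated) when definitions change, but the copy loop itself never does.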
{ "domain": "robotics.stackexchange", "id": 20993, "tags": "ros, rosmessage, c++11" }
Bounded Quadratic Congruence Problem Variant (for some specific Residue)
Question: Given: 3 positive integers $a,b,L$. Problem: Is there a positive integer $x<L$ such that $x^2\equiv a \pmod{b}$? The above problem is NP-complete (as mentioned in G&J) even if we have the factorization of $b$ given. My query is the following: Is there some specific value of $a$ (say $a_0$) such that, if we limit the original problem (the residue) to only $a_0$, the problem ceases to be NP-complete (and is much more easily solvable than the generic case)? My guess is that for any/each specific $a_0$ the problem still remains NP-complete, but I am not certain. Anyone please? Answer: Yes. If $a=1$, the problem is no longer NP-complete; the answer to the problem is always "yes, there is such a positive integer" - just take $x = 1$ (whenever $L > 1$).
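A brute-force checker makes the $a=1$ observation easy to verify on small instances (fine for small $L$; the NP-completeness result says nothing efficient is known in general):

```python
# Exhaustive check of the bounded quadratic congruence problem: is there
# a positive x < L with x^2 = a (mod b)?  Exponential-time in the input
# size, but enough to experiment with the a = 1 special case.

def has_small_sqrt(a, b, L):
    return any(x * x % b == a % b for x in range(1, L))

print(has_small_sqrt(1, 35, 2))  # -> True: x = 1 works whenever L > 1
print(has_small_sqrt(4, 35, 3))  # -> True: x = 2
print(has_small_sqrt(2, 5, 4))   # -> False: 2 is not a quadratic residue mod 5
```
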
{ "domain": "cs.stackexchange", "id": 11204, "tags": "complexity-theory" }
Conservation of energy and inclines
Question: I have a question which I'm unsure whether I am thinking about correctly. If an object is sliding down a frictionless incline and it then comes to the "foot" of the incline, where it encounters friction, how far would it travel? The foot of the incline is horizontal. The only information given was the height of the incline and the coefficient of kinetic friction. I used conservation of energy to find the velocity at the foot of the incline. After that, I solved for the horizontal displacement using kinetic energy = displacement x mass x gravitational acceleration x coefficient of kinetic friction. Can anyone offer some insight into whether or not this is a good approach? Thank you. Answer: If the "foot" of the incline is itself also inclined, you need to take into account a further change in energy due to gravity. If the foot is horizontal, then your approach is fine - because you compute the normal force times the coefficient of friction to get the force of friction, and force times displacement is the work done against friction. When the object runs out of kinetic energy, it stops.
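The bookkeeping in the question boils down to $mgh = \mu m g d$, so the mass cancels and $d = h/\mu$. A small sketch with made-up example numbers for the height and friction coefficient:

```python
import math

# Frictionless incline of height h, then a horizontal run with kinetic
# friction coefficient mu.  Energy balance: m*g*h = mu*m*g*d, so the
# stopping distance is simply d = h / mu (mass and g both cancel).

def speed_at_foot(h, g=9.81):
    return math.sqrt(2 * g * h)     # from m*g*h = (1/2)*m*v^2

def stopping_distance(h, mu):
    return h / mu

h, mu = 2.0, 0.25                   # example numbers (assumed)
print(f"speed at foot: {speed_at_foot(h):.2f} m/s")
print(f"stopping distance: {stopping_distance(h, mu):.1f} m")
```

Note the intermediate velocity is not even needed for the distance; computing it (as the question does) is harmless but the energies can be equated directly.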
{ "domain": "physics.stackexchange", "id": 17237, "tags": "newtonian-mechanics, energy-conservation, homework-and-exercises" }
Is a purely Electron or Proton bomb possible?
Question: Since electrons repel other electrons and protons repel other protons, if you had enough of either of these in an enclosed space, and suddenly removed the enclosing barrier, would the repelling force between the particles release enough energy to cause a significant effect? How many particles would need to be present to equate to a regular stick of dynamite? Answer: Proton-Proton Bombs Protons repel each other electromagnetically, but that repulsion is weaker than the strong force attraction between protons at short distances, so the focus is on reducing the total binding energy of a nucleus in the end state relative to the starting state via nuclear fission or nuclear fusion. A proton-proton bomb is basically an A-bomb like the ones used at Hiroshima and Nagasaki (or a later H-bomb). The first A-bomb used in anger, called "Little Boy", weighed 9700 pounds of which 140 pounds was warhead, and had a yield equal to 60,000,000 sticks of TNT (which have half a pound of TNT each). An A-bomb which relies on nuclear fission of atoms like uranium (with 92 protons) and plutonium (with 94 protons) isn't a truly pure example of a proton-proton bomb because these isotopes also have lots of neutrons that contribute to the binding energy. A more pure example would be an H-bomb which fuses hydrogen-1 atoms (basically protons with an electron around them) with an igniting detonator into helium. This is quite doable technologically as an uncontrolled one-time reaction, even though doing so efficiently on a sustained basis in a controlled manner to generate energy for practical energy production is not currently technologically feasible. The yield of an H-bomb is even greater per pound of warhead mass than an A-bomb, although basically similar at a gross order-of-magnitude level. For example: The W47 [thermonuclear warhead] was 18 in (460 mm) in diameter and 47 in (1,200 mm) long, and weighed 720 lb (330 kg) in the Y1 model and 733 lb (332 kg) in the Y2 model.
The Y1 model had a design yield of 600 kilotons and the Y2 model had a doubled design yield of 1.2 megatons. The Y2 was equivalent to 4,800,000,000 sticks of dynamite from a 733 pound bomb warhead. Electron-Electron Bombs The problem with an "electron-electron" bomb is containing the electrons prior to the bomb arriving at its target. You would need some sort of magnetic containment field, which would take heavy materials to generate, and which would leak due to quantum tunneling (the leaking would get worse as the electrons were packed more tightly). Also, electrons packed into each other would tend to collide and produce new particles, much like they would in a particle accelerator. The explosive force upon releasing the electrons would depend both upon how tightly the electrons were packed and upon their kinetic energy within the containment area. So the explosive force would depend very much upon the design of the bomb, and it might not be feasible at all as a weapon. A more fruitful and conventional approach to an electron weapon would be to shoot a stream of very fast moving electrons towards a target, like a ray gun. This is what happens inside an old-school CRT screen, but at a non-weaponized energy scale.
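For the "how many particles?" part of the question, the electrostatic case can be estimated directly, as a rough sketch under stated assumptions: model the confined charge as a uniformly charged sphere of radius R = 5 cm (a number I picked), whose self-energy is $U = \frac{3}{5}\frac{kQ^2}{R}$, and take one half-pound stick of dynamite as roughly 1 MJ.

```python
import math

# Solve (3/5) * k * Q^2 / R = U for the charge Q whose Coulomb
# self-energy equals one stick of dynamite, then count the electrons.
# R and the 1 MJ figure are illustrative assumptions, not from the text.

k = 8.9875e9           # Coulomb constant, N m^2 / C^2
e = 1.602e-19          # elementary charge, C
U = 1.0e6              # target energy, J (~ one half-pound stick)
R = 0.05               # sphere radius, m (assumed)

Q = math.sqrt(5 * U * R / (3 * k))   # charge with self-energy U
N = Q / e                            # number of electrons

print(f"Q ~ {Q:.2e} C  ->  N ~ {N:.1e} electrons")
```

With these assumptions the answer is only a few millicoulombs, around 2e16 electrons (tens of nanomoles): dynamite-scale energy from a vanishingly small amount of matter, which is exactly why confining it is the hard part.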
{ "domain": "physics.stackexchange", "id": 59793, "tags": "electrons, protons" }
Where does a spinning figure skater's energy go when she slows down?
Question: Today in physics class we were talking about angular momentum and rotational kinetic energy. My teacher used the classic example of a figure skater spinning on ice - when she pulls her arms in, her angular momentum is conserved and her angular velocity increases, meaning that her rotational kinetic energy also increases. Of course, this increase in energy must come from somewhere - in this case, it comes from the figure skater doing work on her arms and pulling them in toward her body. Then I started wondering - if the figure skater slows her rotation by extending her arms, she decreases her rotational KE. Where is her energy going? Or to put it another way, what force is doing work on the figure skater in order to decrease her energy? Answer: I think there are 2 main sources of confusion: First, because of gravity, extending your arms feels like work. We're only interested in the radial movement, though, and in this direction, the skater's arms are pulled by the centrifugal force (in the long tradition of spherical cows in vacuum, we could replace the figure skater with two beads on a spinning rod). Second, the idea of rotational energy as kinetic energy. The relevant work variable is (as already mentioned) the radial extension of the skater's arms, and as far as that's concerned, rotational energy plays the part of potential energy. Think of the skater pulling in her arms as compressing a spring, and extending the arms as its release. Going by either the bead or spring model, the rotational energy gets converted into kinetic energy of the arms, accelerated by the centrifugal force in direction of the radial work variable and ultimately dissipating via vibrations when the arms abruptly reach maximal extension. Of course, if the skater doesn't let her arms be accelerated and slowly extends them instead, the energy dissipates right away, which might be the more realistic approach.
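The bookkeeping in the answer can be made concrete with a short numerical sketch (illustrative numbers, not measured skater data): with angular momentum $L$ conserved, the rotational kinetic energy is $KE = L^2/2I$, so it drops as the arms extend and the moment of inertia grows.

```python
# At constant angular momentum L, rotational kinetic energy is
# KE = L**2 / (2*I): pulling the arms in (smaller I) raises it,
# extending them (larger I) lowers it.  Numbers are illustrative.

def rotational_ke(L, I):
    """Rotational kinetic energy for angular momentum L, moment of inertia I."""
    return L**2 / (2 * I)

L = 60.0           # kg m^2/s, conserved while no external torque acts
I_arms_in = 3.0    # kg m^2
I_arms_out = 6.0   # kg m^2

ke_in = rotational_ke(L, I_arms_in)    # 600 J
ke_out = rotational_ke(L, I_arms_out)  # 300 J

# The 300 J difference is the work the arms do against (or recover from)
# the centrifugal force during the radial motion.
print(ke_in, ke_out)
```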
{ "domain": "physics.stackexchange", "id": 11304, "tags": "energy, rotational-dynamics, rotational-kinematics" }
How to construct CFG for language
Question: We have the alphabet $\Sigma = \{ a, b \}$. How can we construct a CFG for the language $\Sigma^{\ast} - \{a^{n}b^{n} \mid n \ge 0 \}$? I suspect it is very easy, but I can't come up with it. I know a PDA for this language, if that helps. Answer: Split into parts that you can handle. All strings not of the form $a^*b^*$. All strings of the form $a^ib^j$ with $i< j$. Then all strings ....
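The case split can be sanity-checked mechanically; below is a small membership test (Python, purely illustrative; the grammar construction itself is the exercise the answer sets up) that mirrors the cases:

```python
# A string over {a, b} is in the complement iff it is not of the form
# a*b*, or it is a^i b^j with i != j (covering both the i < j and the
# i > j cases from the answer's split).
import re

def in_complement(s):
    """True iff s is in Sigma* minus { a^n b^n : n >= 0 }."""
    if not re.fullmatch(r'a*b*', s):
        return True                      # case 1: some 'b' occurs before an 'a'
    return s.count('a') != s.count('b')  # remaining cases: unbalanced a^i b^j

print(in_complement("ba"), in_complement("aab"), in_complement("aabb"))
# True True False
```

Each case is context-free and the union of finitely many context-free languages is context-free, which is why the split strategy works.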
{ "domain": "cs.stackexchange", "id": 13837, "tags": "context-free" }
Solving Euler squares
Question: These couple of functions solve (or attempt to solve) an arbitrary Euler square. In this case I will refer to it as the colored tower puzzle and explain it that way: Given an n by n grid of tower bases of different heights (0 to n-1), with each base height occurring once in each row and column, as well as n sets of n towers of heights 1 to n and of n different colors, the goal is to place the towers in such a way that the result forms a nice cube (i.e. height n towers have to go onto height 0 bases, height n-1 towers on height 1 bases, etc.) such that each color occurs in each row and column exactly once (see here for an example). The code does this by a very simple algorithm:

- fill the first row (by symmetry, which color goes where is irrelevant here)
- recursively try all the options by picking a possible color for a location and moving on to the next until either the puzzle is solved or all possibilities have been tried, at which point we try the next possible color for this location

    """Solve the colored tower puzzle."""
    """
    The colored tower puzzle is given by an n by n grid of heights 0 to n-1.
    The goal is to take n sets of different colors of n towers of height 1 to n
    and place them on the grid such that a cube of height n emerges with no
    color being repeated in any row or column.

    solvepuzzle: Solve the puzzle for a given grid.
    printgrid: Print towergrids in a more readable form.
    """
    import itertools
    import copy

    def _solve_step(grid,prev):
        """Perform one solution step and recurse, return solved grid or None"""
        #Move to next element, grid is solved if index gets out of bounds
        n=len(grid)
        if prev[1]<n-1:
            now=copy.deepcopy(prev)
            now[1]+=1
        elif prev[0]<n-1:
            now=copy.deepcopy(prev)
            now[0]+=1
            now[1]=0
        else:
            return grid
        #Try all colors for current element, eliminate options and recurse to next
        for c in grid[now[0]][now[1]]['colors']:
            newgrid=copy.deepcopy(grid)
            newgrid[now[0]][now[1]]['colors']=c
            for k in range(now[1]+1,n):
                newgrid[now[0]][k]['colors']=[col for col in newgrid[now[0]][k]['colors'] if not col==c]
            for k in range(now[0]+1,n):
                newgrid[k][now[1]]['colors']=[col for col in newgrid[k][now[1]]['colors'] if not col==c]
            for k,l in itertools.product(range(now[0]+1,n),range(n)):
                if newgrid[k][l]['height']==grid[now[0]][now[1]]['height']:
                    newgrid[k][l]['colors']=[col for col in newgrid[k][l]['colors'] if not col==c]
            newgrid=_solve_step(newgrid,now)
            if newgrid is not None:
                return newgrid
        return None

    def grid_to_string(grid,file=None):
        """Return stringform of towergrid and print it to a file."""
        out='\n'.join([' '.join([''.join(str(el['colors']))+str(el['height']) for el in row]) for row in grid])
        if file is not None:
            file.write(out+'\n\n')
        return out

    def solve_puzzle(heightgrid,colors=None):
        """Return solved grid or None """
        """
        heightgrid: n by n list indicating the heights of the base
        colors: names of the n colors, range(1,n+1) by default
        """
        n=len(heightgrid)
        if colors is None:
            colors=range(1,n+1)
        #set grid with first line filled and only valid options in the rest
        grid=[[{'height':h,'colors':colors} for h in row] for row in heightgrid]
        grid[0]=[{'height':grid[0][k]['height'],'colors':colors[k]} for k in range(n)]
        for k,l in itertools.product(range(1,n),range(n)):
            grid[k][l]['colors']=[col for col in grid[k][l]['colors'] if not (col==grid[0][l]['colors'] or col==grid[0][heightgrid[0].index(heightgrid[k][l])]['colors'])]
        #Solve the grid, starting on second line
        return _solve_step(grid,[1,-1])

Example input and output:

Input

    grid=solve_puzzle([[0,1,2],[1,2,0],[2,0,1]],colors=['r','b','g'])
    print(grid_to_string(grid))

Output

    r0 b1 g2
    g1 r2 b0
    b2 g0 r1

Input

    grid=solve_puzzle([[0,1],[1,0]],colors=['r','b'])
    print(grid)

Output

    None

I am specifically looking for feedback on my general coding style for small programs such as this one, and especially on my use of Python. I usually can get the code to work, but feel that there are better methods to accomplish the task, making the code clearer or faster. Examples where I feel uncertain that I picked the right option include the way I delete elements here (I had some trouble with del), my iterating through for-loops, and even the fact that I used a dictionary where a list would do the job just fine.

Answer: I'll go for the styling of your code. Firstly, Python uses docstrings. There are one-line docstrings and multi-line docstrings.

    def solve_puzzle(heightgrid,colors=None):
        """Return solved grid or None """
        """
        heightgrid: n by n list indicating the heights of the base
        colors: names of the n colors, range(1,n+1) by default
        """

would be

    def solve_puzzle(heightgrid,colors=None):
        """
        Return solved grid or None.

        heightgrid: n by n list indicating the heights of the base
        colors: names of the n colors, range(1,n+1) by default.
        """

Using two docstrings is not the convention, and isn't as aesthetically pleasing. Right?

The white spaces. A block comment

    #Solve the grid, starting on second line

starts with a white space

    # Solve the grid, starting on second line

and a comma

    {'height':h,'colors':colors}
    ...
    return _solve_step(grid,[1,-1])

is always followed by a white space

    {'height':h, 'colors':colors}
    ...
    return _solve_step(grid, [1, -1])

Using annotations, which state what the input parameters are and what is returned:

    def solve_puzzle(heightgrid,colors=None):
        """
        Return solved grid or None.

        heightgrid: n by n list indicating the heights of the base
        colors: names of the n colors, range(1,n+1) by default.
        """

becomes something in line with:

    def solve_puzzle(height_grid: [[]], colors: [str]=None) -> [] or None:
        """
        height_grid: n by n list indicating the heights of the base
        colors: names of the n colors, range(1,n+1) by default.
        """

You are allowed 80 characters per line, so the styling of this function would be

    def solve_puzzle(height_grid: [[]], colors: [str]=None) -> [] or None:
        """
        height_grid: n by n list indicating the heights of the base
        colors: names of the n colors, range(1,n+1) by default.
        """
        n = len(height_grid)
        if colors is None:
            colors = range(1, n+1)

        # set grid with first line filled and only valid options in the rest
        grid = [[{'height': h, 'colors': colors} for h in row]
                for row in height_grid]
        grid[0] = [{'height': grid[0][k]['height'], 'colors': colors[k]}
                   for k in range(n)]
        for k, l in itertools.product(range(1, n), range(n)):
            grid[k][l]['colors'] = [col for col in grid[k][l]['colors'] if not (
                col == grid[0][l]['colors'] or
                col == grid[0][height_grid[0].index(height_grid[k][l])]['colors']
            )]

        # Solve the grid, starting on second line
        return _solve_step(grid, [1, -1])
{ "domain": "codereview.stackexchange", "id": 22370, "tags": "python, python-3.x" }
What causes quantum entanglement to end?
Question: It seems strange to me that I could not easily find information on "when does quantum entanglement end?" or "what causes quantum entanglement to end?". A Google search finds explanations of what quantum entanglement is; the wiki article gives a lot of info, including a definition of entanglement: Quantum entanglement is a physical phenomenon that occurs when a pair or group of particles are generated, interact, or share spatial proximity in a way such that the quantum state of each particle of the pair or group cannot be described independently of the state of the others and ways to create one, which I found very informative. I doubt entanglement is forever; in fact I vaguely recall it ends, e.g. with measurement, but strangely I could not find any place where that is stated clearly. The question When does quantum entanglement cease? also states: This implies once the quantum entanglement is measured/set the entanglement ceases irreversibly. And in the answers I did not see arguments against that part. But the absence of objections there does not automatically mean it is correct, IMHO, since a different reason for the OP's misconception was presented. Could somebody list all the ways in which entanglement ends? Answer: The most popular approaches to quantum mechanics would argue that the entanglements never end, they just get arbitrarily weak. In theories built on decoherence, the entanglements are always present, but they become less and less statistically important as they interact with a "heat bath" of particles in a random state. When they do so, the effect of these random interactions starts to overpower the effect of the entanglement, and the entanglement becomes increasingly negligible for predicting future measurements. When the entanglement accounts for 0.00000000000001% of the expected result of a measurement, the effect of this entanglement falls into the noise in our measurement errors and we can hand-wave it away.
What complicates this somewhat is that the major interpretations of quantum mechanics claim that a "measurement" causes this decoherence. This is because the purpose of the interpretations of quantum mechanics is to tie the world of quantum mechanics to a hypothetical world governed by classical mechanics. This connection is done through vague terms like "measurement" and "collapse." What we find is that, in practice, the things called "measurements" can be implemented by thinking about them with the decoherence model. In a sense, we can use this way of thinking to realize the very abstract concepts of "measurement." They are typically built around using a great body of atoms in a known state (put there with energy in the classical sense), which then interact with a quantum system in a way such that the statistical expectation of the results looks like a measurement in the classical sense (like how one might measure a stick to be 12 cm long). These "measurements" do indeed have the effect of breaking entanglements in the way you describe, and the exact process of doing so is well described by multiple interactions with particles that are in a random state. Something I found useful for working through this is the idea of weak measurements. While the exact meaning of a weak measurement is not universally agreed upon, it typically takes the form of doing an interaction with the system that falls short of being a "measurement," in effect creating an entanglement, and then taking the actual measurement later. Seeing how a weak measurement of an entangled system causes entanglements of its own is very helpful for seeing how these entanglements and measurements pile up.
{ "domain": "physics.stackexchange", "id": 73495, "tags": "quantum-mechanics, quantum-entanglement, decoherence, quantum-measurements" }
Retry cancelled tasks
Question: I wrote an extension method for retrying tasks when cancelled. Can speed, versatility, readability, or elegance be improved at all?

    public static async Task<T> Try<T>(this Task<T> task, int retries)
    {
        var i = 0;
        do
        {
            try
            {
                // if cancelled, it can be reused
                var copy = task;
                return await copy;
            }
            catch (TaskCanceledException exception)
            {
                Console.WriteLine(exception.Message);
            }
        } while (i++ < retries);
        return default(T);
    }

Answer: Task is a reference type, so your copy will always be the same initial task, and when it is cancelled it will stay cancelled forever - see Restart a completed task. You can use a Func instead of a Task as the argument and create a new Task out of it in each iteration of the loop.

    public static async Task<T> Try<T>(this Func<T> func, int retries)
    {
        var i = 0;
        do
        {
            try
            {
                return await Task.Run(func);
            }
            catch (TaskCanceledException exception)
            {
                Console.WriteLine(exception.Message);
            }
        } while (i++ < retries);
        return default(T);
    }
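The same point, that retry needs a factory producing a fresh awaitable because a finished or cancelled task cannot be restarted, can be sketched in Python's asyncio (an illustrative analogue, not the C# code itself; all names here are hypothetical):

```python
# Retry by calling a *factory* that builds a fresh awaitable on each
# attempt, since an already finished or cancelled task cannot be re-run.
import asyncio

async def retry(factory, retries):
    """Await factory() up to retries+1 times; return the first result
    that is not cancelled, or None if every attempt is cancelled."""
    for _ in range(retries + 1):
        try:
            return await factory()
        except asyncio.CancelledError:
            continue  # build and await a brand-new coroutine next loop
    return None

async def demo():
    attempts = 0

    async def flaky():
        nonlocal attempts
        attempts += 1
        if attempts < 3:
            raise asyncio.CancelledError()  # simulate a cancelled attempt
        return "ok"

    return await retry(flaky, retries=5)

print(asyncio.run(demo()))  # ok
```

Passing the coroutine function rather than a coroutine object mirrors the answer's switch from `Task<T>` to `Func<T>`.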
{ "domain": "codereview.stackexchange", "id": 28093, "tags": "c#, performance, multithreading, asynchronous, extension-methods" }
Looking for volunteers, resources and/or SIG - RPI image testers
Question: Hello, all. My early experiences with ROS have taken me down some bumpy roads, finding obstacles regarding my platform of choice. (Trying to) develop a robot using a Raspberry Pi is a bit of a challenge; even getting the development environment up and running is hard, let alone the robot itself. Well, long story short, I saw myself forced to create SD images of ROS Hydro running on Wheezy, but this was done using QEMU and other dirty tricks. The very first image is about to be ready and I thought that it might be nice to share. My goal is to have Hydro running on Raspbian on a 4 GB image, ready to boot and go. So, here are the questions: a) Would you be interested in such an image to start using and eventually report back errors? I need more than "it boots" or "it works", as far as feedback goes. It would be really nice to have the details of the robot/application you used this image on. b) Would there be someone interested in hosting this image, in case we have a big demand? c) I will be publishing a wiki-like tutorial of how I did it; in case someone wants to take a look at it and see if it is worth publishing here, I'll be more than glad to offer it. So, just to be on the same page, this is NOT an official endorsement, but it is a nice and practical way of having a working RPI system with ROS with the following: "Vanilla" Raspbian (Debian Wheezy on Raspberry Pi) image, with ROS Hydro installed, on an image to be used on 4 GB SD cards. KR, Carlos Originally posted by ccapriotti on ROS Answers with karma: 255 on 2014-06-22 Post score: 0 Answer: I suggest that you start a discussion on the Embedded SIG mailing list. There has already been quite a bit of work put into getting ROS running on the Raspberry Pi, and most of the people involved are on that mailing list. Originally posted by tfoote with karma: 58457 on 2014-06-22 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by ccapriotti on 2014-06-23: Hi Tully. Thanks for the answer.
Good to see your answer, especially being one of the admins. The list you mentioned has 55 questions since its creation, and the latest entry dates from May. I still think this discussion belongs here. If it is all the same to you, I'll keep it active. Comment by fergs on 2014-06-23: I would suggest at least posting to that mailing list pointing out this question -- there is a good chance that a large portion of that group rarely logs in here, and with so many questions getting posted, they may not even see this one.
{ "domain": "robotics.stackexchange", "id": 18346, "tags": "ros, ros-hydro, raspbian" }
Why consider an accelerating frame instead of a stationary frame for the Special Theory of Relativity?
Question: According to the special theory of relativity, the magnetic field is equivalent to an electric field if we view it in the frame of the electron. But why do we have to use an accelerating frame? Why can't we just use a stationary frame? Answer: The magnetic field is always a zero-divergence vector field (since there are no isolated magnetic monopoles). The electric field has nonzero divergence when there is a charge distribution. So, the magnetic field is not "equal" to the electric field. Instead, the magnetic field and the electric field are components of a more complicated object (the field tensor). (For another distinction, the electric field is an ordinary polar vector, whereas the magnetic field is a pseudovector. Again, they are not equal.)
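The statement that E and B are components of one object can be made concrete with a small numerical sketch (units with c = 1 and one common sign convention, both assumptions of this illustration): boosting the field tensor of a pure magnetic field along x makes an electric field component appear in the new frame.

```python
# Field tensor F^{mu nu} and its Lorentz transformation F' = L F L^T.
import math

def field_tensor(E, B):
    """Contravariant F^{mu nu} built from E = (Ex,Ey,Ez), B = (Bx,By,Bz)."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return [[0.0, -Ex, -Ey, -Ez],
            [ Ex, 0.0, -Bz,  By],
            [ Ey,  Bz, 0.0, -Bx],
            [ Ez, -By,  Bx, 0.0]]

def boost_x(beta):
    """Lorentz boost along x with velocity beta (in units of c)."""
    g = 1.0 / math.sqrt(1.0 - beta**2)
    return [[g, -g * beta, 0.0, 0.0],
            [-g * beta, g, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(F, L):
    """Tensor transformation law: F' = L F L^T."""
    LT = [list(col) for col in zip(*L)]
    return matmul(matmul(L, F), LT)

# Pure magnetic field Bz = 1, viewed from a frame moving at beta = 0.5:
Fp = transform(field_tensor((0, 0, 0), (0, 0, 1.0)), boost_x(0.5))
Ey_prime = -Fp[0][2]   # = -gamma*beta: an electric field has appeared
Bz_prime = Fp[2][1]    # = gamma: the magnetic field is rescaled
print(Ey_prime, Bz_prime)
```

This reproduces the textbook mixing rules Ey' = gamma*(Ey - beta*Bz) and Bz' = gamma*(Bz - beta*Ey), which is the precise sense in which a magnetic field "looks partly electric" in the moving charge's frame.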
{ "domain": "physics.stackexchange", "id": 96980, "tags": "electromagnetism, special-relativity, electric-fields" }
Elemental Potassium as both an oxidizer and a reducer
Question: On this table I realized that elemental potassium could adopt oxidation states of -1, 0, and +1. http://en.wikipedia.org/wiki/List_of_oxidation_states_of_the_elements Does this mean that elemental potassium can function as both an oxidizer and a reducer? I understand that oxidizers are reduced (gain electrons), and that reducers are oxidized (lose electrons). How can the potassium anion be formed? The potassium anion must be extremely unstable due to potassium's extremely low electronegativity, correct? Also, in what compound does potassium actually have a negative oxidation state? Answer: Compounds having potassium anions are referred to as potassides and are rare, usually involving K- held within a cryptand such as: Crystalline Salts of Na- and K- (Alkalides) that Are Stable at Room Temperature, J. Am. Chem. Soc., vol. 121, pages 10666-10667, describes preparation of K+(aza222)K-, where "aza222" is the compound in the above figure, but with the 6 H's replaced with methyl groups. The compound is formed by dissolving potassium metal in a methylamine solution of the methylated aza222.
{ "domain": "chemistry.stackexchange", "id": 2645, "tags": "redox" }
Deriving statistics of band limited Random Noise
Question: The question is: Consider a continuous random number with a Gaussian distribution of mean $\mu$ and variance $\sigma^2$. The RV is measured from time $t=-\infty$ to $t=\infty$. This time-domain signal $x(t)$ is passed through a first-order low-pass filter with cutoff at $f_o$. What would the statistics of the output be? Would it still be Gaussian? If yes, then what is the new mean and variance? I am unable to understand where to start with this. Any help is greatly appreciated. Answer: First, read a book on stochastic processes or stochastic signal processing. It is better to say that we have a random process whose first-order distribution (the PDF of the process at each instant of time) is independent of time and is a Gaussian with known mean and variance. Now we want to obtain the first-order properties of the filtered process. To begin, we write the output of the filter as a convolution. The output process at each instant of time is a linear combination (weighted average) of the input process at different instants of time, which all have the same Gaussian distribution; we also know that a linear combination of Gaussian variables is again Gaussian. So the first-order distribution of the output must be Gaussian. Now, taking the expectation of both sides of the convolution, we see that the expectation of the filtered process at each instant of time becomes the convolution of the filter response with the mean of the input process, which is constant over time; so it becomes the multiplication of the input mean and the DC response of the filter. To obtain the variance of the process around its mean, we have to find the expectation of the squared difference between the process and its mean, i.e. the expectation of the squared output minus the square of its mean. To relate the squared value of the output to the input, we can convolve the filtered signal with its time-reversed version at zero lag. In the end we see that the new variance is the variance of the input times the square of the DC response of the filter.
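A quick numerical sanity check of the Gaussianity and mean claims, as a sketch that assumes a discrete-time white Gaussian input and a first-order IIR low-pass with unit DC gain (both are modelling assumptions of this illustration, not part of the original question):

```python
# White Gaussian noise through y[n] = a*y[n-1] + (1-a)*x[n].
# The filter has unit DC gain, so the mean should pass through unchanged.
import random

random.seed(0)
mu, sigma = 2.0, 3.0
a = 0.9                 # pole location; impulse response h[k] = (1-a)*a**k

acc = mu                # start at the steady-state mean to skip the transient
y = []
for _ in range(100_000):
    acc = a * acc + (1 - a) * random.gauss(mu, sigma)
    y.append(acc)

mean_out = sum(y) / len(y)
var_out = sum((v - mean_out) ** 2 for v in y) / len(y)

# For this *white* input the output variance is sigma^2 * sum(h[k]^2)
# = sigma^2 * (1-a)/(1+a); with a correlated input it would differ.
var_white = sigma**2 * (1 - a) / (1 + a)
print(mean_out, var_out, var_white)
```

The sample mean lands on mu (unit DC gain) and the sample variance matches the white-noise formula, while the output histogram stays Gaussian, as the convolution argument predicts.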
{ "domain": "dsp.stackexchange", "id": 5502, "tags": "filters, noise, continuous-signals, homework, gaussian" }
How to visualize depthimages from rosbag file with RViz?
Question: I have a *.bag file which contains depth images from a Kinect 2 camera. I want to visualize them in RViz and convert them to point clouds. So far, I don't succeed in visualizing the data in RViz. There is the following warning:

    Global Status: Warn
    Fixed Frame: No tf data. Actual error: Fixed Frame [map] does not exist

I'm not really sure where to go from here, and the more detailed the instructions, the more thankful I would be. Originally posted by lisa on ROS Answers with karma: 3 on 2016-08-07 Post score: 0 Answer: RViz requires not only data to display, but also TF transform information that is published on a separate topic. If TF data was not generated when the .bag file was recorded, then it would be missing. Thus you probably need to spoof some dummy TF transform, just so RViz is happy. The other problem is that, to spoof TF data correctly, you need to spoof it with timestamps corresponding to the data from the .bag file. Which means you need to use the simulated clock from the bag file. Try the following:

- Start roscore as normal.
- Set up roscore to use simulation time (I read that rosbag should do it automatically, but it didn't seem to work for me):

    rosparam set use_sim_time true

- Start a dummy TF transform:

    rosrun tf static_transform_publisher 0 0 0 0 0 0 map dummy_frame 100

Where:

- 0 0 0 0 0 0 is offset and rotation for the child frame "dummy_frame" - this does not matter
- map is the source "global" frame - this is the one you are missing (this should really be fixed in RViz; if there are no frames, then assume a global frame exists with a default name?)
- dummy_frame - static_transform_publisher needs some child frame to work with - does not matter
- 100 - publish rate in ms

- Play the bag file along with the clock signal for other ROS nodes:

    rosbag play --clock mydata.bag

- Start RViz.

Hope this helps. Originally posted by Marcin Bogdanski with karma: 231 on 2016-08-08 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by lisa on 2016-08-08: Thank you for your detailed answer.
It was so clear, I succeeded on the first try =) Comment by Marcin Bogdanski on 2016-08-08: I just realized the parameter description got folded into one long unreadable blob. Now fixed. Happy to help.
{ "domain": "robotics.stackexchange", "id": 25468, "tags": "kinect, rosbag, pointcloud" }
Energy loss in nuclear recoil
Question: When a free nucleus undergoes recoil with a kinetic energy of K, it emits a gamma ray with a reduced energy of $h\nu-K$. But for a nucleus bound in a lattice, we do not find the reduced energy of the gamma ray; we see it as $h\nu$. Why does such a reduction in energy occur in the case of a free nucleus and not in a bound nucleus? Answer: It is kinematics. Conservation of momentum induces the smaller energy in the gamma from a free nucleus. A nucleus bound in a lattice conserves momentum with the whole lattice, and very little energy is needed for that, as momentum is mv and the mass of the lattice is orders of magnitude larger than the mass of a nucleus. As said in the comments: In the Mössbauer effect, a narrow resonance for nuclear gamma emission and absorption results from the momentum of recoil being delivered to a surrounding crystal lattice rather than to the emitting or absorbing nucleus alone. When this occurs, no gamma energy is lost to the kinetic energy of recoiling nuclei at either the emitting or absorbing end of a gamma transition: emission and absorption occur at the same energy, resulting in strong, resonant absorption.
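The kinematics can be put in numbers: for a gamma of energy $E_\gamma$ and a recoiling mass $M$, momentum conservation gives $K = E_\gamma^2 / (2Mc^2)$. A minimal sketch for the classic Mössbauer case, the 14.4 keV line of Fe-57 (the isotope and the 10^18-atom "crystal" are illustrative choices, not from the answer above):

```python
# Recoil kinetic energy K = E_gamma^2 / (2 M c^2) for a free nucleus
# versus a nucleus that shares its recoil with a whole (tiny) crystal.

AMU_EV = 931.494e6   # one atomic mass unit in eV/c^2

def recoil_energy_ev(e_gamma_ev, mass_amu):
    """Recoil kinetic energy (eV) taken from a gamma of energy e_gamma_ev
    emitted by a recoiling mass of mass_amu atomic mass units."""
    return e_gamma_ev**2 / (2 * mass_amu * AMU_EV)

e_gamma = 14.4e3                              # eV, Fe-57 Moessbauer line
free = recoil_energy_ev(e_gamma, 57)          # single nucleus: ~2e-3 eV
bound = recoil_energy_ev(e_gamma, 57 * 1e18)  # 10^18-atom crystal: ~2e-21 eV
print(free, bound)
```

The free-nucleus recoil (~2 meV) dwarfs the natural linewidth of the transition, which is why free emission and absorption lines miss each other, while the lattice-recoil energy is utterly negligible.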
{ "domain": "physics.stackexchange", "id": 45873, "tags": "particle-physics" }
What does the Schrodinger Equation really mean?
Question: I understand that the Schrodinger equation is actually a principle that cannot be proven. But can someone give a plausible foundation for it and give it some physical meaning/interpretation? I guess I'm searching for some intuitive solace here. Answer: This is a fairly basic approach suitable for students who have finished at least one semester of introductory Newtonian mechanics, are familiar with waves (including the complex exponential representation) and have heard of the Hamiltonian at a level where $H = T + V$. As far as I understand it has no relationship to Schrödinger's historical approach. Let's take up Debye's challenge to find the wave equation that goes with de Broglie waves (restricting ourselves to one dimension merely for clarity). Because we're looking for a wave equation we will suppose that the solutions have the form $$ \Psi(x,t) = e^{i(kx - \omega t)} \;, \tag{1}$$ and because this is supposed to be for de Broglie waves we shall require that \begin{align} E &= hf = \hbar \omega \tag{2}\\ p &= h/\lambda = \hbar k \;. \tag{3} \end{align} Now it is an interesting observation that we can get the angular frequency $\omega$ from (1) with a time derivative and likewise the wave number $k$ with a spatial derivative. If we simply define the operators1 \begin{align} \hat{E} = -\frac{\hbar}{i} \frac{\partial}{\partial t} \tag{4}\\ \hat{p} = \frac{\hbar}{i} \frac{\partial}{\partial x} \; \tag{5}\\ \end{align} so that $\hat{E} \Psi = E \Psi$ and $\hat{p} \Psi = p \Psi$. Now, the Hamiltonian for a particle of mass $m$ moving in a fixed potential field $V(x)$ is $H = \frac{p^2}{2m} + V(x)$, and because this situation has no explicit dependence on time we can identify the Hamiltonian with the total energy of the system $H = E$.
Expanding that identity in terms of the operators above (and applying it to the wave function, because operators have to act on something) we get \begin{align} \hat{H} \Psi(x,t) &= \hat{E} \Psi(x,t) \\ \left[ \frac{\hat{p}^2}{2m} + V(x) \right] \Psi(x,t) &= \hat{E} \Psi(x,t) \\ \left[ \frac{1}{2m} \left( \frac{\hbar}{i} \frac{\partial}{\partial x}\right)^2+ V(x) \right] \Psi(x,t) &= -\frac{\hbar}{i} \frac{\partial}{\partial t} \Psi(x,t) \\ \left[ -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2}+ V(x) \right] \Psi(x,t) &= i\hbar \frac{\partial}{\partial t} \Psi(x,t) \;. \tag{6}\\ \end{align} You will recognize (6) as the time-dependent Schrödinger equation in one dimension. So the motivation here is: write a wave equation, make the energy and momentum have the de Broglie forms, and require energy conservation. But this is not anything like a proof, because the passage from variables to operators is pulled out of a hat. As an added bonus, if you use the square of the relativistic Hamiltonian for a free particle, $E^2 = (pc)^2 + (mc^2)^2$, this method leads naturally to the Klein-Gordon equation as well. 1 In very rough language an operator is a function-like mathematical object that takes a function as an argument and returns another function. Partial derivatives obviously qualify on this front, but so do simple multiplicative factors, because multiplying a function by some factor returns another function. We follow a common notational convention in denoting objects that need to be understood as operators with a hat, but leaving the hat off of explicit forms.
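A quick numerical check of the free-particle case ($V = 0$): the plane wave (1) satisfies (6) exactly when the dispersion relation $\hbar\omega = (\hbar k)^2/2m$, i.e. $E = p^2/2m$, holds. A sketch with the illustrative choice $\hbar = m = 1$:

```python
# Verify that Psi = exp(i(kx - wt)) solves i*hbar*dPsi/dt = -(hbar^2/2m)*d2Psi/dx2
# once omega obeys the dispersion relation hbar*omega = (hbar*k)**2 / (2*m).
import cmath

hbar = 1.0
m = 1.0
k = 2.0
w = hbar * k**2 / (2 * m)       # dispersion relation from E = p^2/2m

def psi(x, t):
    return cmath.exp(1j * (k * x - w * t))

# Central finite differences stand in for the partial derivatives.
h = 1e-5
x0, t0 = 0.3, 0.7
d_t = (psi(x0, t0 + h) - psi(x0, t0 - h)) / (2 * h)
d_xx = (psi(x0 + h, t0) - 2 * psi(x0, t0) + psi(x0 - h, t0)) / h**2

lhs = 1j * hbar * d_t            # i hbar dPsi/dt
rhs = -hbar**2 / (2 * m) * d_xx  # -(hbar^2/2m) d^2Psi/dx^2
print(abs(lhs - rhs))            # ~0 up to finite-difference error
```

Choosing an omega off the dispersion relation makes the two sides disagree, which is the sense in which (6) picks out the de Broglie relations (2)-(3).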
{ "domain": "physics.stackexchange", "id": 43029, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, wave-particle-duality, complex-numbers" }
Can we tune the exhaust system of a gasoline car to be as quiet as an electric car?
Question: Electric cars are quiet, so quiet that legislation is proposed in many countries, to make them audible to pedestrians, for obvious safety reasons. My question: is it possible, without getting into engineering or cost details more than the physics demands, so sticking to the fluid dynamics and acoustics aspects as much as possible, to design an exhaust system to create a silent internal combustion engine car. This would, I suppose, be a car exhaust system which would be the equivalent of a silencer on a gun, and may follow many of the same principles in relation to expansion of gases. Answer: That's what the muffler does, as you learn the first time you have a car with a muffler that gets damaged. The essential difference between gas and electric cars is that the gasoline power is derived from a carefully timed sequence of small explosions, in the pistons; the electric car does not have this phenomenon and will always be quieter for the same amount of muffling.
{ "domain": "physics.stackexchange", "id": 25076, "tags": "fluid-dynamics, energy, acoustics" }
How far apart are galaxies on average? If galaxies were the size of peas, how many would be in a cubic meter?
Question: The actual number: How far apart are galaxies on average? An attempt to visualize such a thing: If galaxies were the size of peas, how many would be in a cubic meter? Answer: The simple answer is that the average galaxy spacing is around a few megaparsecs, while the biggest galaxies are around 0.1 megaparsecs in size. So the average spacing is somewhere in the range of 10 - 100 times the size of the biggest galaxies. The peas I had for lunch today were (at a guess - I didn't measure them!) 5mm in diameter so the interpea spacing would be 5 - 50cm, or between 8 and 8,000 per cubic metre. But this is a very misleading statistic. Galaxies are not distributed uniformly, but instead are grouped into clusters, which are themselves grouped into superclusters. Also galaxies vary enormously in size, with dwarf galaxies around a thousand times smaller than the biggest galaxies. I would resist the temptation to assign any significance to my figures above. However there is a take home message i.e. galaxies are much, much, much closer relative to their size than stars are. That's why galaxy collisions are quite frequent while stellar collisions are rare to the point of non-existence.
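The pea arithmetic in the answer, spelled out (the 5 mm pea diameter and the 10-100 spacing-to-size ratio are the answer's own rough estimates):

```python
# If the inter-pea spacing is some multiple of the pea diameter, the pea
# density is simply one pea per spacing-sized cube.
def peas_per_cubic_metre(spacing_over_size, pea_diameter_m=0.005):
    """Pea density when the spacing is spacing_over_size pea diameters."""
    spacing = spacing_over_size * pea_diameter_m
    return 1.0 / spacing**3

low = peas_per_cubic_metre(100)   # spacing 50 cm -> 8 per m^3
high = peas_per_cubic_metre(10)   # spacing 5 cm  -> 8000 per m^3
print(low, high)
```

Scaled back up, the same ratio is why galaxy collisions are common: galaxies sit only tens of diameters apart, whereas stars are separated by tens of millions of stellar diameters.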
{ "domain": "physics.stackexchange", "id": 70897, "tags": "homework-and-exercises, cosmology, estimation, galaxies" }
Antibiotic resistance
Question: I have a biological puzzle that's been perplexing me for years. Can you review my logic and tell me where I'm wrong or if I'm on to something? As I understand it, most, if not all, genetic engineering uses a marker gene to identify transformed bacteria. This marker gene was/is usually an antibiotic resistance gene, so the transformed bacteria will be the only ones that can grow on an antibiotic-laced culture medium. So, every (most) GMO will contain a copy of a bacterial antibiotic resistance gene. Also, I believe that bacteria are able to spontaneously acquire DNA from their environment and/or other bacteria. So, is it a fair question to ask if the unnatural proliferation of antibiotic resistance genes in the wild due to the commercial propagation of GMOs might be at least partially responsible for the increase in multiply resistant organisms that we are seeing? Has anyone, for example, mashed up GMO matter, added non antibiotic resistant bacteria, grown the mix up overnight, added the antibiotic used as a marker in the GMO, incubated for a few days and then seen if any bacteria survived? Answer: As I understand it, most, if not all genetic engineering uses a marker gene to identify transformed bacteria. This marker gene was/is usually an antibiotic resistance gene so the transformed bacteria will be the only ones that can grow on an antibiotic laced culture medium Mostly, but color markers (which give the host cell a color that can be used for selection) are also used. More complicated selectable-counter-selectable markers are also used (e.g. the HyTK gene). A counter-selectable marker is a marker that kills a cell when it is present. A selectable-counter-selectable marker is a hybrid marker that allows you to both select for cells with the marker using one selection agent and kill the same cell with a second selection agent. So, every (most) GMO will contain a copy of a bacterial antibiotic resistance gene. Well not quite.
GMO, i.e. genetically modified organism, includes both prokaryotes (bacteria) and eukaryotes (plants, animals, and yeast). The promoters which drive gene activity in prokaryotes and eukaryotes are different, so eukaryotic markers do not work in prokaryotes and vice versa. Furthermore, genetic modification of eukaryotes is difficult. Hence modified viruses are often used. These viruses are made in such a way that their genome has been replaced by the desired DNA payload. And as the payload size is limited, oftentimes any unnecessary DNA is also removed, like prokaryotic markers which have no function in a eukaryotic host. Lastly, due to public pressure, there has been a push towards cleaning up after genetic engineering work in GMOs that are to be marketed to the public (GMO crops). So things such as selection markers are removed (either by use of counter-selectable markers followed by homologous recombination, or by site-specific recombinases). So, is it a fair question to ask if the unnatural proliferation of antibiotic resistance genes in the wild due to the commercial propagation of GMOs might be at least partially responsible for the increase in multiply resistant organisms that we are seeing? Given the widespread use of antibiotics in animal husbandry and by the general public, and the move toward removing selection markers in commercial GMOs, I do not think GMOs are the cause of the increase in MR bacteria. A more likely cause would simply be that we are using so many antibiotics that there is a selection pressure to acquire multiple resistance. Remember, all our antibiotic resistance genes and most of our antibiotics were acquired from nature. Has anyone, for example, mashed up GMO matter, added non-antibiotic-resistant bacteria, grown the mix up overnight, added the antibiotic used as a marker in the GMO, incubated for a few days and then seen if any bacteria survived? Yes, lots of times around the world with varying degrees of success.
I have personally spent time doing this and so have my labmates. It will work if you prepare good quality DNA that has not been degraded by endonucleases, the bacteria are very transformable (i.e. the bacteria have been treated to take up DNA easily), electroporation is used (to force the DNA into the cell) and you have a prokaryotic marker. A eukaryotic marker, i.e. a marker that works in animal cells, will not work in bacteria. So you have to engineer your GMO animal to carry both a eukaryotic and a prokaryotic marker.
{ "domain": "biology.stackexchange", "id": 8723, "tags": "genetics, molecular-genetics" }
What does Weinberg–Witten theorem want to express?
Question: The Weinberg-Witten theorem states that massless particles (either composite or elementary) with spin $j > 1/2$ cannot carry a Lorentz-covariant current, while massless particles with spin $j > 1$ cannot carry a Lorentz-covariant stress-energy. The theorem is usually interpreted to mean that the graviton ( $j = 2$ ) cannot be a composite particle in a relativistic quantum field theory. Before I read its proof, I was not able to understand this result, because I can directly come up with a counterexample: a massless spin-2 field has a Lorentz covariant stress-energy tensor. For example, the Lagrangian of a massless spin-2 field is the massless Fierz-Pauli action: $$S=\int d^4 x (-\frac{1}{2}\partial_a h_{bc}\partial^{a}h^{bc}+\partial_a h_{bc}\partial^b h^{ac}-\partial_a h^{ab}\partial_b h+\frac{1}{2}\partial_a h \partial^a h)$$ We can calculate its stress-energy tensor by $T_{ab}=\frac{-2}{\sqrt{-g}} \frac{\delta S}{\delta g^{ab}}$, so we get $$T_{ab}=-\frac{1}{2}\partial_ah_{cd}\partial_bh^{cd}+\partial_a h_{cd}\partial^ch_b^d-\frac{1}{2}\partial_ah\partial^ch_{bc}-\frac{1}{2}\partial^ch\partial_ah_{bc}+\frac{1}{2}\partial_ah\partial_bh+\eta_{ab}\mathcal{L}$$ which is obviously a non-zero Lorentz covariant stress-energy tensor. And for the massless spin-1 $U(1)$ field, we also have the stress-energy tensor $$T^{ab}=F^{ac} F^{b}_{\ \ \ c}-\frac{1}{4}\eta^{ab}F^{cd}F_{cd}$$ so we can construct $J^a=\int d^3x T^{a 0}$, which is a Lorentz covariant current. Therefore the above two examples seem to be counterexamples to this theorem. I believe this theorem must be correct, and I want to know why my above argument is wrong. Answer: The stress tensor for $h_{ab}$ is not Lorentz covariant, despite the fact that it looks like it is. This is because $h_{ab}$ itself is not a Lorentz tensor. Rather, under Lorentz transformations, $$ h_{ab} \to \Lambda_a{}^c \Lambda_b{}^d h_{cd} + \partial_a \zeta_b + \partial_b \zeta_a ~. 
$$ The extra term is present to make up for the fact that $h_{ab}$ is not a tensor of the Lorentz group. Plug this into the stress tensor and you will find that the stress tensor also transforms with an inhomogeneous piece, thereby making it non-covariant. The photon is not charged under the $U(1)$ gauge symmetry. Thus, its $U(1)$ current is zero. The current you have defined is not the $U(1)$ current. Rather, it is the current corresponding to translations. The Weinberg-Witten theorem has nothing to say about this current.
{ "domain": "physics.stackexchange", "id": 30750, "tags": "quantum-field-theory, stress-energy-momentum-tensor" }
ROS Battery Status
Question: Hi, Last December, my team and I participated in the first Maritime RobotX Challenge, an international team competition uniquely designed to evolve into a multi-platform competition including maritime, aerial and submersible tasks to broaden students' exposure to robotics applications and technologies. Our team utilized ROS for some tasks, such as Simultaneous Localization and Mapping (SLAM), Path Planning, and Color Recognition. I am the power system lead of the team, and I have been trying to program in ROS to check the status of our lead acid (standard car) batteries as well as some Lithium Ion battery packs. Unfortunately, none of the team members knows how to approach this problem. Please help! Originally posted by minshikk on ROS Answers with karma: 1 on 2017-02-16 Post score: 0 Original comments Comment by Humpelstilzchen on 2017-02-17: Duplicate of #254900 How about using the diagnostics packages? You publish a diagnostic message with the current battery value. Comment by Tav_PG on 2019-09-11: Check out https://www.instructables.com/id/1S-6S-Battery-Voltage-Monitor-ROS/ Answer: There's a pretty substantial gap between writing software and checking the state of your batteries, and you'll need support from your hardware to cross that gap. At the most basic level, battery monitoring is generally done by measuring the battery voltage, measuring and integrating the current through the battery, or both. This is an entire field of study involving chemistry and is a bit too broad to cover here, but it's important to remember that the relationship between battery capacity and voltage can be highly non-linear depending on your battery chemistry. I'd suggest that you buy (or possibly build) hardware that can measure the SOC (state of charge) of your batteries, and then connect that hardware to your computer and write a ROS node that translates that data into a sensor_msgs/BatteryState message. 
I believe there are dashboard widgets that can consume and display this message in a reasonable way. P.S. - your question makes reference to a demo, but doesn't link to it. ROS is a very broad community, and most of us aren't familiar with all of the demos, so a link would be helpful. Originally posted by ahendrix with karma: 47576 on 2017-02-17 This answer was ACCEPTED on the original site Post score: 0
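A minimal sketch of the software half of that job might look like the following (my own illustration, not from the original answer): map a measured pack voltage to an approximate state of charge by piecewise-linear interpolation over a lookup table. The (voltage, percent) pairs below are illustrative placeholders, not calibrated data; real curves depend on chemistry, temperature, and load.

```python
# Hypothetical voltage-to-SOC table for a small pack, sorted by voltage.
# These numbers are illustrative only, not calibrated for any real battery.
TABLE = [(9.6, 0.0), (10.8, 20.0), (11.4, 50.0), (12.0, 80.0), (12.6, 100.0)]

def soc_from_voltage(v, table=TABLE):
    """Interpolate SOC percent from a voltage reading; clamp outside the table."""
    if v <= table[0][0]:
        return table[0][1]
    if v >= table[-1][0]:
        return table[-1][1]
    for (v0, s0), (v1, s1) in zip(table, table[1:]):
        if v0 <= v <= v1:
            return s0 + (s1 - s0) * (v - v0) / (v1 - v0)

print(soc_from_voltage(11.4))  # 50.0
print(soc_from_voltage(13.0))  # clamps to 100.0
```

A value like this (divided by 100, since I believe the field expects a 0 to 1 range) is what would end up in the `percentage` field of the `sensor_msgs/BatteryState` message the answer mentions.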
{ "domain": "robotics.stackexchange", "id": 27036, "tags": "ros" }
Naive Kalman filter for 3D position
Question: I looked at posts that discuss a 3D Kalman filter. Kalman Filter to estimate 3D position of a node Help with Kalman Filter implementation for estimating 3D position From my understanding, both persons were attempting the same thing, which is calculating 3D position based on accelerometer readings. Yet their processes for setting up the Kalman filter are completely different. My own method is also different. These are my equations, based on what I learned here; they treat acceleration as the U vector: http://www.cl.cam.ac.uk/~rmf25/papers/Understanding%20the%20Basis%20of%20the%20Kalman%20Filter.pdf Xk = AXk-1 + BUt x_curr = x_prev + x_vel_prev * dt + 0.5*dt^2*x_acc x_vel_curr = x_vel_prev + dt*x_acc The second post seemed to have dropped the U vector completely, similar to this example http://campar.in.tum.de/Chair/KalmanFilter#Literature. The first post has a different process equation, based on the wiki, and it uses Ga instead of Bu https://en.wikipedia.org/wiki/Kalman_filter#Example_application.2C_technical: X = FX_prev + Ga This is what I did and it makes sense I think: X = [x,y,z,dx,dy,dz] U = [ddx, ddy, ddz] A = np.matrix([[1, 0, 0, dt, 0, 0], [0, 1, 0, 0, dt, 0], [0, 0, 1, 0, 0, dt], [0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1]], ) B = np.matrix([[0.5*(dt**2), 0, 0], [0, 0.5*(dt**2), 0], [0, 0, 0.5*(dt**2)], [dt, 0, 0], [0, dt, 0], [0, 0, dt]]) H = np.matrix([[1,0,0,0,0,0], [0,1,0,0,0,0], [0,0,1,0,0,0]]) P = np.matrix([[10,0,0,0,0,0], [0,10,0,0,0,0], [0,0,10,0,0,0], [0,0,0,100,0,0], [0,0,0,0,100,0], [0,0,0,0,0,100]]) Q = np.matrix([[1,0,0,0,0,0], [0,1,0,0,0,0], [0,0,1,0,0,0], [0,0,0,1,0,0], [0,0,0,0,1,0], [0,0,0,0,0,1]]) R = np.matrix([[1,0,0], [0,1,0], [0,0,1]]) What I don't understand is: how come the previous 2 posts have completely different matrices? Is there a reason why? And is my approach correct? 
Any input and suggestions are greatly appreciated. Answer: The thing you should keep in the back of your mind is that R. Kalman published his seminal paper in 1960. He was a Control Engineer, interested in real systems at a time prior to widely available high fidelity numerical multi-physics modeling. The kinematic models you contrast are derived from $$ F=ma =m \frac{dv}{dt}, $$ where $m$ is often a constant point mass. Planes and rockets consume mass for propulsion, so we can see early on that there will be neglected terms. $F$ should also include aerodynamic drag, and for laminar flow, drag is roughly proportional to $v^2$, which is still not even close to the Navier-Stokes equations. So even a simple drag model is nonlinear, and for a real time online algorithm a linear state space model is all he really had. The solution to these oversimplifications is noise, small step sizes and prediction feedback. Noise can also be actual random phenomena. So a lot of non trivial modeling errors are lumped together in $$ \frac{d}{dt} \mathbf{x}(t)= \mathbf{A}\mathbf{x}(t) + \mathbf{B} \mathbf{u}(t) $$ where $\mathbf{u}(t)$ can be control forces, random forces, or pseudo modeling errors that we treat as noise. We also have a measurement set of equations that can also include noise, real or conceptual. You can stick with this form and use a Kalman-Bucy Filter, or you can convert the continuous state space model to a difference equation, which is what is typically used in a Kalman Filter. To do that you need to integrate the equation just given. I'm not going to go into detail because there are books and websites that show this, but one starts by calculating the matrix exponential $$\mathbf{F}= e^{\mathbf{A} \Delta t}.$$ Some of your examples calculate the matrix exponential and some don't. You can often obtain a symbolic solution for the matrix exponential or you can solve it numerically. 
Getting back to your original question: these models are simplifications of simplifications. There may be poor choices taken, but there really isn't a single correct discrete time 3D state space model. Some may have $x_{k+1}=x_{k} + \Delta t v_{k}$ and some might include $\frac{(\Delta t)^2}{2} a_k$. Real objects aren't point masses. Angular orientation matters. A lot of what is missing is treated as noise over small time steps. When the nonlinearity is more severe, we switch to the extended Kalman Filter. In the future, the state models will be real time multi-physics numerical routines. The best model is the one that works best with your data.
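To make the matrix-exponential point concrete, here is a hedged pure-Python sketch (my own addition, not from the answer): for the 1D constant-acceleration state [position, velocity, acceleration], the continuous-time system matrix is nilpotent, so the exponential series terminates after the quadratic term and reproduces exactly the dt and 0.5*dt^2 entries seen in the question's discrete A and B matrices.

```python
# For the kinematic model, A^3 = 0, so F = exp(A*dt) = I + A*dt + (A*dt)^2/2
# holds exactly (the Taylor series terminates), with no truncation error.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def expm_nilpotent(A, dt):
    n = len(A)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    Adt = [[a * dt for a in row] for row in A]
    Adt2 = matmul(Adt, Adt)  # (A*dt)^2; higher powers vanish here
    return [[I[i][j] + Adt[i][j] + 0.5 * Adt2[i][j] for j in range(n)]
            for i in range(n)]

A = [[0.0, 1.0, 0.0],   # d(pos)/dt = vel
     [0.0, 0.0, 1.0],   # d(vel)/dt = acc
     [0.0, 0.0, 0.0]]   # acc modeled as constant between steps

F = expm_nilpotent(A, 0.1)
print(F[0])  # [1.0, 0.1, ~0.005]: pos += vel*dt + 0.5*acc*dt^2
```

The first row shows the familiar dt and half-dt-squared coefficients emerging from the exponential rather than being postulated.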
{ "domain": "dsp.stackexchange", "id": 6371, "tags": "kalman-filters" }
Is the term "Power" meaningful?
Question: As far as my intuition of Force goes, I believe it is a kind of 'field' (that was not the most precise of ways to describe it!) Like say, a force of 5 N acts on a 1 kg block. It produces an acceleration of 5 m/s² on it. Now how long it accelerates depends on how long the field exists. In other words, if I stop the force at any time, the acceleration will stop simultaneously, too. Now let's come to work. Work is done by a force. And the work done by a force translates into the energy gained by the body on which the force acts. The quantitative way of accounting for the work we do is, in simpler terms, multiplying the force we applied with the net output, viz. the displacement. We see that the existence of force is time dependent. That means force may exist for a definite period in time. So thus should work. And thus should energy! Energy, thereby, is much like a 'field' - much like a phenomenon that exists as long as its "sidekick", force, exists. Finally, power. What are we trying to do in case of power? Divide energy by time? Why? Why are we trying to distribute a phenomenon (energy) into parcels of time? Why then are we not distributing force in the same way? How does it make sense? (Edit: I have kind of built a heuristic (?) explanation for myself. I want to see if that is correct. A little indication of your thoughts would do.) Answer: Field has a specific definition in physics. You can have force fields, and energy density can be a field, but I don't think I have ever seen energy as a field. Finally, power. What are we trying to do in case of power? Divide energy by time? Why? Why are we trying to distribute a phenomenon (energy) into parcels of time? We take the time derivative of work to find out how quickly the energy is transferred. If your phone holds a certain amount of energy in its battery, then power tells you if you can charge your phone while you eat or if you have to charge it while you sleep. 
To me, that is certainly meaningful. Why then are we not distributing force in the same way? We certainly can. We can take as many time derivatives of any quantity as we like. The time derivative of acceleration is called jerk.
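The phone-charging point can be made numeric with a toy calculation (my own round numbers, not from the answer): the same stored energy takes very different times to transfer at different powers, which is exactly what a watt rating tells you.

```python
# Toy illustration of power as energy per time. The capacity and charger
# ratings are made-up round numbers chosen for clean arithmetic.

battery_wh = 15.0                  # stored energy, watt-hours (illustrative)

def hours_to_charge(power_w):
    return battery_wh / power_w    # time = energy / power

print(hours_to_charge(5.0))        # 3.0 h on a weak charger
print(hours_to_charge(30.0))       # 0.5 h on a fast charger
```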
{ "domain": "physics.stackexchange", "id": 65452, "tags": "newtonian-mechanics, forces, power" }
What bolts should I specify with Weathering Steel?
Question: When using weathering steel (COR-TEN) on a project, what sort of bolts should I be using? I have yet to see fixings made of weathering steel; presumably because the way it uses a sacrificial layer to form a protective rust coating makes it unsuitable for precision applications (such as bolt threads; maintaining the structural area, etc.). I am also aware that mixing different metals, even various steel alloys, can lead to accelerated corrosion due to anodic/cathodic behaviour. So what type of bolts should be specified with weathering steel? Answer: For structural applications (in the US), the most common bolt for weathering steel is ASTM A325 Type 3. Type 1 is a plain steel bolt that can be galvanized, but in this situation the zinc in the galvanizing will quickly be used up trying to protect the rest of the structure. Update for British bolts Interestingly, the only option for the UK seems to be to get these same bolts in metric (M24) sizes. References: Supplier Weathering steel bridges (page 13 of pdf) Specific discussion about the A325 bolts meeting EN standards
{ "domain": "engineering.stackexchange", "id": 1656, "tags": "steel, metallurgy, bolting" }
What is the difference between base shear and pseudo lateral load in seismic analysis of buildings?
Question: What is the difference between base shear and pseudo lateral load in seismic analysis of buildings, or are they the same? Answer: They are not the same. Base shear is the total lateral force at the base of the structure; it is a result of any type of lateral analysis, including an analysis that applies a pseudo lateral load as well as dynamic seismic analysis. The pseudo lateral load is the statically applied load itself, not the resulting reaction.
{ "domain": "engineering.stackexchange", "id": 3297, "tags": "structural-engineering, structural-analysis" }
Statistical mechanics of vibrating string
Question: I am trying to calculate the average thermal energy of a vibrating string that has mass M, under tension F, its boundaries are fixed a distance L apart $y(0) = y(L)$. The energy of the string for an instantaneous displacement y(x,t) is $$E[y(x,t)]= \frac 12\int_0^Ldx\left(\frac ML\left(\frac {\partial y}{\partial t}\right)^2 + F\left(\frac {\partial y}{\partial x}\right)^2\right)$$ By writing y(x,t) as Fourier series $$y(x,t) = \sum_{n=1}^N A_n(t)sin(\frac{n\pi x}{L})$$ I manage to reduce the energy equation (as asked by the question) using property of orthogonality to $$E[y(x,t)] = \sum_{n=1}^N\left(\frac M4 \dot A_n^2 + \frac{n^2\pi^2F}{4L}A_n^2 \right)$$ Then, to get the average thermal energy I attempted to evaluate the partition function as follows $$Z = \sum e^\frac{-(E[(y,t)]}{k_BT}= \sum_{\dot A_1=0}^\infty\sum_{\dot A_2=0}^\infty ... e^{-\frac M{4k_bT}(\dot A_1 +\dot A_2+...)}\sum_{A_1=0}^\infty \sum_{A_2=0}^\infty...e^{-\frac {\pi^2F}{4Lk_bT}(A_1 + 4A_2+...)}\\=\sum_{\dot A_1=0}^\infty e^{-\frac {M\dot A_1}{4k_b T}} \sum_{\dot A_2=0}^\infty e^{-\frac {M\dot A_2}{4k_b T}}...\sum_{A_1 = 0}^{\infty}e^{-{\frac{\pi^2FA_1}{4Lk_b T}}}\sum_{A_2 = 0}^{\infty}e^{-{\frac{4\pi^2FA_2}{4Lk_b T}}}...\\ =\prod_{l=0}^N {\frac{1}{1-e^{-\frac{M\dot A_l}{4k_bT}}}}\prod_{l=0}^N {\frac{1}{1-e^{-\frac{l^2A_l\pi ^2F}{4Lk_bT}}}}$$ Next, my thought is that I can write the Helmholtz free energy $F = -k_BT lnZ$ and assume ${N\to \infty}$ $$F=-k_BTln\prod_{l=0}^\infty {\frac{1}{1-e^{-\frac{M\dot A_l}{4k_bT}}}}\prod_{l=0}^\infty {\frac{1}{1-e^{-\frac{l^2A_l\pi ^2F}{4Lk_bT}}}}\\= k_bT\left(\sum_{l=1}^\infty ln(1-e^{-\frac{M\dot A_l}{4k_bT}})+ln({1-e^{-\frac{l^2A_l\pi ^2F}{4Lk_bT}}})\right)$$ Up to this point, I find myself somewhat stuck; in fact, all these steps are tailored for computing the average energy of a harmonic oscillator. 
After obtaining the free energy $\left(\frac{\pi^2}{6}\left(\frac{k_bT}{\hbar \omega}\right)^2 \hbar \omega\ \text{for a quantum harmonic oscillator/string}\right)$, we should be able to get the entropy and the average thermal energy. This problem is from Introductory Statistical Mechanics by Roger Bowley, Chapter 8, problem 10. Full problem Answer: You can use the equipartition theorem, since in Fourier space the Hamiltonian is a sum of uncoupled quadratic harmonic oscillators: $$E[y(x,t)] = \sum_{n=1}^N\left(\dfrac{M/2}{2}\dot A_n^2 + \frac{n^2\pi^2F/(2L)}{2}A_n^2 \right)$$ Thus $\frac{M}{2}\langle\dot A_n^2 \rangle= k_bT$, $\frac{n^2\pi^2F}{2L}\langle A_n^2\rangle = k_bT$, and cross terms are 0. You also have: $$y^2 = \sum_{n=1, m = 1}^N A_mA_n\sin(\frac{n\pi x}{L})\sin(\frac{m\pi x}{L})$$ The average is easily done because $\langle A_n A_m\rangle\propto k_bT\delta_{n, m}$: $$\langle y^2\rangle = \sum_{n=1}^N \langle A_n^2\rangle \sin(\frac{n\pi x}{L})^2 = \dfrac{2Lk_bT}{\pi^2F}\sum_{n=1}^N \dfrac{\sin(\frac{n\pi x}{L})^2}{n^2}$$ Specifically, at $x = L/2$: $$\langle y^2\rangle = \dfrac{2Lk_bT}{\pi^2F}\sum_{n=1}^N \dfrac{\sin(n\pi/2 )^2}{n^2}$$ The sine squared takes the values 1, 0, 1, 0, ... with increasing n, so only odd n contribute and the sum can be rewritten as: $$\langle y^2\rangle = \dfrac{2Lk_bT}{\pi^2F}\sum_{n=0}^N \dfrac{1}{(2n + 1)^2} = \dfrac{2Lk_bT}{\pi^2F} \cdot \dfrac{\pi^2}{8} = \dfrac{Lk_bT}{4F}$$ This was in 1D; in 3D you'd have three times as many (uncoupled) modes, so: $$\langle x^2 + y^2 + z^2\rangle = \dfrac{3Lk_bT}{4F}$$
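The series identity used at $x = L/2$ can be cross-checked numerically (a sanity check I am adding, not part of the original solution): the factor $\sin^2(n\pi/2)$ keeps only odd $n$, and the sum over odd $n$ of $1/n^2$ converges to $\pi^2/8$.

```python
# Numeric check that sum_n sin^2(n*pi/2)/n^2 over n = 1..N approaches
# pi^2/8: even n contribute ~0, odd n contribute ~1/n^2.
import math

N = 100001
s = sum(math.sin(n * math.pi / 2) ** 2 / n ** 2 for n in range(1, N + 1))
print(s, math.pi ** 2 / 8)  # both close to 1.2337
```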
{ "domain": "physics.stackexchange", "id": 98569, "tags": "homework-and-exercises, statistical-mechanics" }
refactor python strategy pattern to use abstract base class
Question: I came across this strategy pattern implementation https://github.com/jtortorelli/head-first-design-patterns-python/blob/master/src/python/chapter_1/adventure_game.py class Character: def __init__(self): self.weapon_behavior = None def set_weapon(self, weapon_behavior): self.weapon_behavior = weapon_behavior def fight(self): self.weapon_behavior.use_weapon() class Queen(Character): def __init__(self): super().__init__() self.weapon_behavior = KnifeBehavior() class King(Character): def __init__(self): super().__init__() self.weapon_behavior = BowAndArrowBehavior() class Troll(Character): def __init__(self): super().__init__() self.weapon_behavior = AxeBehavior() class Knight(Character): def __init__(self): super().__init__() self.weapon_behavior = SwordBehavior() class WeaponBehavior: def use_weapon(self): raise NotImplementedError class KnifeBehavior(WeaponBehavior): def use_weapon(self): print("Stabby stab stab") class BowAndArrowBehavior(WeaponBehavior): def use_weapon(self): print("Thwing!") class AxeBehavior(WeaponBehavior): def use_weapon(self): print("Whack!") class SwordBehavior(WeaponBehavior): def use_weapon(self): print("Thrust!") knight = Knight() king = King() queen = Queen() troll = Troll() knight.fight() king.fight() queen.fight() troll.fight() Would it be correct to refactor it the following way, using an ABC? 
from abc import ABC, abstractmethod class Character: def __init__(self): self.weapon_behavior = None def set_weapon(self, weapon_behavior): self.weapon_behavior = weapon_behavior def fight(self): self.weapon_behavior.use_weapon() class Queen(Character): def __init__(self): super().__init__() self.weapon_behavior = KnifeBehavior() class King(Character): def __init__(self): super().__init__() self.weapon_behavior = BowAndArrowBehavior() class Troll(Character): def __init__(self): super().__init__() self.weapon_behavior = AxeBehavior() class Knight(Character): def __init__(self): super().__init__() self.weapon_behavior = SwordBehavior() class WeaponBehavior(ABC): @abstractmethod def use_weapon(self, message): print(message) class KnifeBehavior(WeaponBehavior): def use_weapon(self): super().use_weapon("Stabby stab stab") class BowAndArrowBehavior(WeaponBehavior): def use_weapon(self): super().use_weapon("Thwing!") class AxeBehavior(WeaponBehavior): def use_weapon(self): super().use_weapon("Whack!") class SwordBehavior(WeaponBehavior): def use_weapon(self): super().use_weapon("Thrust!") knight = Knight() king = King() queen = Queen() troll = Troll() knight.fight() king.fight() queen.fight() troll.fight() Answer: Only the Character and WeaponBehavior classes are actually useful, and neither of them needs to inherit anything. The rest can be just factory functions, because only the constructors differ. If a class constructor does anything except assign arguments to properties, it is probably wrong. The strategy pattern is based on composition rather than inheritance. 
class Character: def __init__(self, weapon): self.weapon = weapon def set_weapon(self, weapon): self.weapon = weapon def fight(self): self.weapon.use() def Queen(): return Character(Knife()) def King(): return Character(Bow()) class Weapon: def __init__(self, message): self.message = message def use(self): print(self.message) def Knife(): return Weapon("Stabby stab stab") def Bow(): return Weapon("Thwing!") def Sword(): return Weapon("Thrust!") king = King() queen = Queen() king.fight() queen.fight() queen.set_weapon(Bow()) queen.fight() Notice that I have removed the behavior part of the names as it seemed a bit useless. I have also renamed use_weapon() to just use() because it is called on a weapon variable, and so it seemed redundant. (I have also added a Sword() factory, reusing the "Thrust!" message from your SwordBehavior, so the bonus snippet below actually runs.) Also notice that unless I used the set_weapon() method on a constructed Character instance (i.e. to switch weapons in the middle of battle), the Character class would be useless, because everything could have been done with the weapons alone. Of course I know this is just pattern demonstration code, but I wanted to point it out anyway. As a bonus, here is something (not only) for the king :) Also notice how, again, composition is preferred over inheritance to provide flexibility. class DoubleWeapon: def __init__(self, left, right): self.left = left self.right = right def use(self): self.left.use() self.right.use() king.set_weapon(DoubleWeapon(Knife(), Sword())) king.fight()
{ "domain": "codereview.stackexchange", "id": 39588, "tags": "python, object-oriented, design-patterns, strategy-pattern" }
Would our body adapt to changes that are not in our DNA?
Question: If a person were to undergo a treatment or surgery at a very young age that significantly reduced their final height, would the rest of their body adjust to the sudden change? If not (because our arms' length is very similar to our height), would this give the person arms that weren't proportional to the rest of their body? Answer: It will be hard to discuss the specific example you consider, mainly because it is unclear how your treatment would affect the person's height. That being said, yes, our bodies can change as a function of the environment. This is called phenotypic plasticity. Phenotypic plasticity refers to some of the changes in an organism's behavior, morphology and physiology in response to a unique environment. Here the term environment does not only refer to outside temperature or other obvious environmental factors. For example, one can ask whether the size of a vein can react plastically to the blood flow. Here, blood flow is the environmental factor toward which the plastic response is mediated. Note that some plastic responses are adaptive and some are not. Note also that there are a lot of related terms (developmental flexibility, acclimation, polyphenism, developmental selection, ...) in the field of plasticity, and different authors sometimes use these terms with slightly different definitions, which can make everything a bit confusing.
{ "domain": "biology.stackexchange", "id": 9512, "tags": "human-biology, genetics" }
Confirming an inconsistency of the Balitsky-Kovchegov equation between references
Question: I'm comparing the form of the Balitsky-Kovchegov equation, which describes the splitting of low-momentum gluons, between different references, and I'm finding an inconsistency: most sources (1, 2, 3, 4, 5, etc.) give the relevant part of the equation as $$\frac{\alpha_s N_c}{2\pi^2}\iint\mathrm{d}^2\vec{b}\frac{r^2}{s^2 t^2}\bigl[N(s) + N(t) - N(r) - N(s)N(t)\bigr]\tag{1}$$ where the coordinates are related to each other by $$\begin{align} \vec{r} &= \vec{x} - \vec{y} & \vec{s} &= \vec{x} - \vec{b} & \vec{t} &= \vec{y} - \vec{b} \end{align}$$ (ignore the colors) and $\vec{x}$ and $\vec{y}$ are fixed. But one paper, which happens to be my main reference for the specific issue I'm looking at, gives the equation in the form $$\frac{\alpha_s N_c}{\pi^2}\iint\mathrm{d}^2\vec{t}\biggl[\frac{r^2}{s^2 t^2} N(s) - \frac{r^2}{(s^2 + t^2)t^2}N(r) - \frac{1}{2}\frac{r^2}{s^2 t^2}N(s)N(t)\biggr]\tag{2}$$ Is this equivalent to the other form, specifically the middle term? Changing the integration variable from $\vec{t}$ to $\vec{b}$ presents no problems (the Jacobian is $1$), and the $N(s)N(t)$ terms in both equations are obviously the same. I can also split the $N(s)$ term of equation (2) in half, and transform one half using $\vec{x} \leftrightarrow \vec{y}$ (thus turning $\vec{s}$ into $\vec{t}$), which accounts for the first two terms of equation (1). It looks like the remaining terms, proportional to $N(r)$, are not the same, but I can't quite rule out some coordinate transformation that makes them equivalent. Can anyone either prove that they're not the same, or find a physically valid transformation which shows that they are? Answer: We start by rewriting the second version. 
Switching back to an integral over $\vec b$ and factoring out the same prefactor as in the first equation, we find $$ \frac{\alpha_sN_c}{2\pi^2}\iint \mathrm d^2 \vec b \frac{r^2}{s^2t^2}\big[2N(s)-\frac{2s^2}{s^2+t^2}N(r)-N(s)N(t)\big]$$ Using the fact that we can split the $2N(s)$ into $N(s)+N(t)$, as you already noted, all the terms but one already coincide. The only difference is the term proportional to $N(r)$. The coefficient in the first equation is $-1$, but in the second it is $$-\frac{2s^2}{s^2+t^2}\overset{!}{=}-\frac{s^2+t^2}{s^2+t^2}=-1$$ Here, the crucial step once again uses the $s\leftrightarrow t$ symmetry. Thus, the two formulae are seen to be equivalent.
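The crucial symmetrization step can also be sanity-checked numerically (an informal check I am adding, not part of the original derivation): since the measure and the factor $r^2/(s^2t^2)$ are symmetric under $s \leftrightarrow t$, only the symmetrized coefficient matters, and averaging $-2s^2/(s^2+t^2)$ with its $s \leftrightarrow t$ image gives $-1$ identically.

```python
# Averaging the coefficient -2 s^2 / (s^2 + t^2) with its s <-> t image
# gives -1 for any s, t, matching the -N(r) coefficient of the first form.

def coeff(s, t):
    return -2.0 * s**2 / (s**2 + t**2)

pairs = [(1.0, 2.0), (0.3, 5.7), (4.2, 4.2)]
syms = [0.5 * (coeff(s, t) + coeff(t, s)) for s, t in pairs]
print(syms)  # every entry is -1.0 up to floating-point rounding
```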
{ "domain": "physics.stackexchange", "id": 28799, "tags": "particle-physics, quantum-chromodynamics" }
Example on Referential transparency (wikipedia)
Question: I have a rather foolish question on an example explaining the idea behind referential transparency. Here is the example I do not understand: Consider a function that returns the input from some source. In pseudocode, a call to this function might be GetInput(Source) where Source might identify a particular disk file, the keyboard, etc. Even with identical values of Source, the successive return values will be different. Therefore, function GetInput() is neither deterministic nor referentially transparent. I'm confused about the claim that even with identical values of (variable) Source, the successive return values will be different. E.g. if I tap an 'a' on my keyboard and apply GetInput(Source), it gives me 'a' as output. Then I tap 'a' again, which of course is identical to my previous input. The next application of GetInput(Source) gives me 'a' again. Therefore I do not understand how it is possible that GetInput(Source1) and GetInput(Source2) might give different values if the two inputs Source1 and Source2 are identical. Sorry if it's too dumb, but I do not understand this quoted example. Answer: If Source refers to the keyboard, and the user types a the first time but b the second time, then GetInput(Source) will not return the same value both times. It's impossible to replace GetInput(Source) by its value without changing the meaning of the program, because there are many possible values, depending on the actions of the user. To replace GetInput(Source) with a would be wrong if the user decides to press b. An example of a consequence of the fact that GetInput(Source) is not referentially transparent is that if the program calls this twice, which call returns a and which call returns b depends on the order in which the calls are evaluated (or more precisely, on the order in which the calls get their input from the keyboard: you have to get down to this level of detail to be sure).
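A tiny Python sketch (my own illustration, not part of the original answer) makes the contrast concrete, using an iterator as a stand-in for the keyboard: `square` is referentially transparent, since any call can be replaced by its value, while `get_input` is not, because the same call can yield different values depending on hidden state.

```python
def square(x):
    return x * x                 # same input, same output, always

keystrokes = iter(["a", "b"])    # stand-in for a keyboard Source

def get_input(source):
    return next(source)          # result depends on hidden stream position

print(square(3), square(3))      # 9 9
print(get_input(keystrokes), get_input(keystrokes))  # a b
```

The two calls `get_input(keystrokes)` receive the identical argument yet return different values, which is exactly why no single value can be substituted for the expression.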
{ "domain": "cs.stackexchange", "id": 18269, "tags": "functional-programming, coding-theory, imperative-programming" }
OpenCR Board Can't Publish, Can Subscribe (WSL)
Question: Hello, Lately I've been trying to follow the tutorials at the robotis manual to learn about ROS and Turtlebot3. I have a Turtlebot3 Burger. My setup Using ROS Melodic PC OS: WSL Ubuntu 18.04 Bionic Tb3: Raspberry Pi 3 Ubuntu 18.04 Server Bionic OpenCR 1.0 firmware: v1.2.3 ... Let me know to add any other relevant info. Question While trying to get SLAM to work, I was experiencing a common issue of no transform from odom to base_footprint. Upon more investigation, I found that I couldn't see any odom data through rostopic echo odom and rostopic hz odom showed that there were never any messages. However, I know that I can command the turtlebot using keyboard teleop. I just can't receive any messages that are sent "from" the OpenCR board. Why is this? I have explored the idea of this being a time-sync issue. I have checked to make sure my ROS_MASTER_URI and ROS_HOSTNAME are correct on all machines. Any help is much appreciated. Thank you. --EDIT 0: This doesn't seem like a cross-computer issue to me, but rather something with rosserial that is causing topics to not have messages published.-- EDIT 1: actually, it is a cross-machine issue! EDIT 2: When I use ROS_IP instead of ROS_HOSTNAME and set ROS_MASTER_URI to be http://ip of pc:11311 I don't see any data on the topics from the OpenCR; when I run everything on the RPi, I do see data. EDIT 3: I am able to publish from RPi to PC and vice versa, but nothing comes from the rosserial_python node. Originally posted by ros_noob_corgi on ROS Answers with karma: 16 on 2019-05-30 Post score: 0 Answer: I have figured out my answer: issues were due to my use of WSL. Originally posted by ros_noob_corgi with karma: 16 on 2019-05-31 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 33097, "tags": "navigation, odometry, ros-melodic, turtlebot, opencr" }
Where does an object get kinetic energy?
Question: Where does an object "obtain" kinetic energy? I understand that an object often gets kinetic energy from another object. Where does the first object get the energy? Answer: Where does an object "obtain" kinetic energy? Ruben is correct. An object can convert its potential energy into kinetic energy. For example, see the figure below, where the potential energy of water has been converted into kinetic energy. A moving body can transfer some of its energy to set another body into motion. For example, when a billiard ball collides with another billiard ball, it sets it into motion. I understand that an object often gets kinetic energy from another object This may not always be true; for example, in the case given above, water converts its potential energy into kinetic energy. Where does the first object get the energy? Einstein showed from his theory of relativity that it is necessary to treat mass as another form of energy. So the object you are speaking about is itself nothing but energy (mass).
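A rough numerical sketch of the potential-to-kinetic conversion the answer describes (the dam height and other figures below are assumed for illustration):

```python
import math

g = 9.81   # m/s^2, gravitational acceleration
h = 20.0   # m, assumed height of the falling water
m = 1.0    # kg of water (the mass cancels out of the final speed)

potential = m * g * h                # energy at the top
v = math.sqrt(2 * g * h)             # speed at the bottom, ignoring drag
kinetic = 0.5 * m * v ** 2           # energy at the bottom

# The potential energy lost equals the kinetic energy gained.
assert math.isclose(potential, kinetic)
print(f"v = {v:.2f} m/s")            # about 19.81 m/s
```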
{ "domain": "physics.stackexchange", "id": 11855, "tags": "newtonian-mechanics, energy" }
C++ class to compute similarity coefficients
Question: I'm a complete beginner in C++. I wrote a class to compute similarity coefficients using Euclidean distance and Pearson coefficients on map data structure. I would like to know if it is possible to improve the code to make it more efficient in terms of memory and time it takes to execute the code. Header: #include <map> #include <string> extern "C" class DATAANALYTICS_API Similarity { std::map<std::string, std::map<std::string, double>> data_; public: Similarity(std::map<std::string, std::map<std::string, double>> &data); double EuclideanSimilarity(std::string a, std::string b); double PearsonSimilarity(std::string a, std::string b); }; Implementation: #include <iostream> #include <map> #include <string> #include <cmath> #include <vector> #include "DataAnalytics.h" Similarity::Similarity(std::map<std::string, std::map<std::string, double>> &data) { data_ = data; } double Similarity::EuclideanSimilarity(std::string a, std::string b) { double distance_squared = 0, aVal, bVal; for (auto const &entry : data_[a]) { if (data_[b].count(entry.first) > 0) { aVal = entry.second; bVal = data_[b][entry.first]; distance_squared += (aVal - bVal) * (aVal - bVal); } } return (1 / (1 + distance_squared)); } double Similarity::PearsonSimilarity(std::string a, std::string b) { double aVal, bVal, aExpectedValue = 0, bExpectedValue = 0, aSquaredExpectedValue = 0, bSquaredExpectedValue = 0, abExpectedValue = 0; std::vector<double> aValues, bValues, abValues; int commonItemCounter = 0; for (auto const &entry : data_[a]) { if (data_[b].count(entry.first) > 0) { commonItemCounter++; aVal = entry.second; bVal = data_[b][entry.first]; aValues.push_back(aVal); bValues.push_back(bVal); abValues.push_back(aVal * bVal); } } if (aValues.size() == 0) { return 0; } else { for (int i = 0; i < aValues.size(); i++) { aExpectedValue += aValues[i] / commonItemCounter; bExpectedValue += bValues[i] / commonItemCounter; aSquaredExpectedValue += pow(aValues[i], 2) / commonItemCounter; 
bSquaredExpectedValue += pow(bValues[i], 2) / commonItemCounter; abExpectedValue += abValues[i] / commonItemCounter; } double denominator = sqrt(aSquaredExpectedValue - pow(aExpectedValue, 2)) * sqrt(bSquaredExpectedValue - pow(bExpectedValue, 2)); if (denominator != 0) { return (abExpectedValue - (aExpectedValue * bExpectedValue)) / denominator; } else { return 0; } } } When I run unit tests on the two methods EuclideanSimilarity and PearsonSimilarity, they both take about 130ms to complete which is a bit strange because I'm doing a lot more in PearsonSimilarity than in EuclideanSimilarity. Answer: I have to admit that I don't have any familiarity with this particular problem, so I can't offer any great advice on it. That said, I do see a few things in your code that I think could be improved. Scope One thing I would recommend is moving variables closer to the place where they are used. You're currently defining all variables at the beginning of the function, and that makes it harder to tell what might have been changed when reading the body of the function. In PearsonSimilarity() you're using aVal, bVal, and the vectors in the first for loop. The expected values are not used until the second for loop (inside an if). I would move their declarations into the else clause of the if because of this. Further, I would move each declaration to its own line. Your lines are very long and hard to read because I have to scroll to see everything on them. In fact, it might make sense to break up the function and call 2 smaller functions - one which calculates the commonItemCounter and the arrays, and another which processes the arrays. That's up to you. Performance One big thing I see that could be improved is to remove all calls to pow(). In each case you are squaring the input value. pow() is a very expensive function, and it's a single instruction to just multiply a value by itself. 
I would first pull the value out of the array, and then use it multiple times, like this: for (int i = 0; i < aValues.size(); i++) { double a = aValues [ i ]; double b = bValues [ i ]; double ab = abValues [ i ]; aExpectedValue += a / commonItemCounter; bExpectedValue += b / commonItemCounter; aSquaredExpectedValue += (a * a) / commonItemCounter; bSquaredExpectedValue += (b * b) / commonItemCounter; abExpectedValue += ab / commonItemCounter; } Another thing that I see that is likely to be slow is that commonItemCounter is defined as an int but is used to calculate floating point values. You should declare it as a double. And then, once you have, it might make sense (measure to be sure) to invert it and multiply by it instead of dividing by it. (This might be a non-issue on modern systems, but has been an issue in the past.) Early Returns Early returns are helpful when you have some "early out" conditions such as the input being out-of-range or degenerate in some way. That's not the case here, and it can be confusing to follow. In this case, I'd recommend having a return value and only a single return statement at the end like this: double result = 0.0; if (aValues.size() > 0) { for (int i = 0; i < aValues.size(); i++) { // ... rest of loop here } double denominator = sqrt(...stuff...) if (denominator != 0.0) { result = (abExpectedValue - (aExpectedValue * bExpectedValue)) / denominator; } } return result; It's fewer levels of ifs (less cyclomatic complexity) and easier to understand the flow.
{ "domain": "codereview.stackexchange", "id": 28796, "tags": "c++, performance, memory-optimization" }
entropy of vaporization vs entropy of fusion
Question: Is the entropy of vaporization greater than the entropy of fusion, for elements? And in which case can they be the same, if that ever happens? I have asked this here because I couldn't find any satisfactory answers anywhere on the Internet and most of them were very limited in their description. Answer: Here I'm assuming you are interested in comparing the magnitude of $\Delta S^{\circ}_{fus}$ at $T_{fus}$ with that of $\Delta S^{\circ}_{vap}$ at $T_{vap}$, for the elements in their standard states (so, for instance, hydrogen would be $\ce{H_2}$ rather than $\ce{H}$). Like you, I was unable to find a comparative tabulation of these values on the internet. Fortunately, one can readily generate such a table using Wolfram Mathematica's chemical database. For each element: $$\Delta S^{\circ}_{fus} \text{ at } T_{fus} = \frac{\Delta H^{\circ}_{fus}}{T_{fus}}$$ $$\Delta S^{\circ}_{vap} \text{ at } T_{vap} = \frac{\Delta H^{\circ}_{vap}}{T_{vap}}$$ Wolfram has the above data for all of the elements 1–93 (hydrogen through neptunium), except for helium (which can't be solidified at standard pressure, which is 1 bar), astatine, and francium.* Here I've plotted $|\Delta S^{\circ}_{vap}|$ vs. $|\Delta S^{\circ}_{fus}|$* for these 90 elements, and added a y = x line. From the placement of the points relative to this line, you can see that all except one of the elements have $|\Delta S^{\circ}_{vap}| > |\Delta S^{\circ}_{fus}|$. That single exception is hydrogen, for which: $$|\Delta S^{\circ}_{fus}| \text{ at } T_{fus} = 39.8 \frac{J}{mol K}$$ $$|\Delta S^{\circ}_{vap}| \text{ at } T_{vap} = 22.3 \frac{J}{mol K}$$ *Note, however, this complication: Most, but not all, of these measurements were done at standard pressure (1 bar). For instance: "When heated at standard atmospheric pressure, arsenic changes directly from a solid to a gas, or sublimates, at a temperature of 887 K. In order to form liquid arsenic, the atmospheric pressure must be increased.
At 28 times standard atmospheric pressure, arsenic melts at a temperature of 1090 K. If it were also measured at a pressure of 28 atmospheres, arsenic's boiling point would be higher than its melting point, as you would expect." https://education.jlab.org/itselemental/ele033.html
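To make the $\Delta S = \Delta H / T$ arithmetic concrete, here is the calculation for one element using approximate literature values for argon (these numbers are my own assumption, not taken from the Wolfram data above):

```python
# Approximate literature values for argon (assumed for illustration).
T_fus, dH_fus = 83.8, 1180.0     # K, J/mol at the melting point
T_vap, dH_vap = 87.3, 6430.0     # K, J/mol at the boiling point

dS_fus = dH_fus / T_fus          # about 14 J/(mol K)
dS_vap = dH_vap / T_vap          # about 74 J/(mol K)

# Vaporization produces a much larger entropy change than fusion,
# matching the trend in the plot for nearly every element.
assert dS_vap > dS_fus
print(f"dS_fus = {dS_fus:.1f} J/(mol K), dS_vap = {dS_vap:.1f} J/(mol K)")
```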
{ "domain": "chemistry.stackexchange", "id": 14449, "tags": "thermodynamics, entropy" }
OctoMap for collision avoidance / using point clouds
Question: Firstly, following the discussion below, http://answers.ros.org/question/30426/adding-octomap-data-to-planning-scene-in-ros, I understand that the octomap result is relatively easy to further feed to arm navigation. Then I wonder about the surface case, i.e., is it possible to make use of the octomap data for mobile navigation in an outdoor terrain environment? Second question, after an octomap is initialized from a point cloud, is it possible to retrieve the points from the cloud that drop within a voxel, given the voxel index or its coordinate? I am trying both octomap and pcl/octree, and this question arises from the exercise. More explanation about the differences between octomap and pcl/octree would be greatly helpful. Originally posted by clark on ROS Answers with karma: 393 on 2012-03-29 Post score: 1 Answer: OctoMap by default does not store individual points but rather integrates them into an occupancy map for maximum compression in order to reduce memory usage. Have a look at http://octomap.sourceforge.net/ and the paper there for a detailed description. You can continuously update this map with further sensor readings but the original point clouds are not stored (which you could do yourself instead). If you need individual points, you would need to extend your own octree class. If you are only interested in the points and need the octree just for addressing or neighbor search, then I would suggest the PCL octree instead. The occupancy map can be used for a variety of tasks such as localization, arm navigation and navigation in cluttered environments. The 3D_navigation stack can serve you as a starting point but it's not quite ready yet to be deployed out of the box directly. Originally posted by AHornung with karma: 5904 on 2012-03-29 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 8796, "tags": "pcl, octomap, pointcloud" }
Electric engine and transmission
Question: What kind of automatic transmission would be the best to connect an electric engine with the rear axle of a RWD car? Specifications: 200HP, 3000rpm electric motor top speed 220km/h The automatic transmission should have higher output RPM and be able to bring the car to top speed of 200km/h. Answer: The essential question here is why or why not use a transmission in an electric car. Here are the underlying issues. It is commonly asserted that a DC electric motor is a constant-torque device, developing the same torque at standstill (0 RPM) that it does at its rated RPM (in this case, 3000). Advocates of the "no-transmission" position cite this as a reason why electric motors in cars do not need transmissions, but instead can be used in direct-drive mode. But the flaw in this reasoning is that power (the capacity to perform work at a certain rate) is the product of torque and RPM, and it is this product which accelerates the car. This means that in the case of a motor which is starting from zero RPM to accelerate a car, its power output at near-zero RPM is near zero, and the motor does not develop its rated power output until it is running at its rated RPM. This means that to maximize the power output of the motor and thereby maximize the car's acceleration, we need to get the motor all the way up to 3000RPM as quickly as possible, which means gearing the motor down when starting off from zero speed. Then, when we reach 3000RPM, we upshift the transmission to a "taller" gear ratio, and shift again when the motor comes up to 3000RPM, and repeat until the drag force on the car body is equal to the force applied to the pavement by the rear wheels. This process is designed to keep the motor running at or near its maximum power point for as long as possible. In fact, the fastest acceleration will be had if the transmission is capable of holding the motor right at 3000RPM throughout the car's acceleration from zero wheel speed to whatever its top speed is.
This means that to maximize acceleration, a transmission is necessary, and the best one will be a continuously-variable one that locks the engine speed at 3000RPM at all times.
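The power-versus-RPM argument can be sketched numerically; the 200 HP / 3000 RPM figures come from the question, while treating the motor as perfectly constant-torque is the answer's idealization:

```python
import math

P_rated = 200 * 745.7                      # rated power in watts (200 hp)
omega_rated = 3000 * 2 * math.pi / 60      # rated speed in rad/s
torque = P_rated / omega_rated             # constant torque, about 475 N*m

def power_at(rpm):
    """Power available in direct drive: P = torque * angular speed."""
    return torque * rpm * 2 * math.pi / 60

# In direct drive the motor delivers only a tenth of its rated power at a
# tenth of rated speed; a CVT holding 3000 RPM would deliver all of it.
print(f"power at  300 rpm: {power_at(300) / 1000:.1f} kW")   # 14.9 kW
print(f"power at 3000 rpm: {power_at(3000) / 1000:.1f} kW")  # 149.1 kW
```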
{ "domain": "engineering.stackexchange", "id": 2919, "tags": "gears, electric-vehicles, transmission" }
A simple library using Flask and SQLAlchemy
Question: This is the main.py script that does most of the work of Adding movies (Movie objects) as well as modifying and removing: from flask import Flask, render_template, request, flash, url_for, redirect import sqlite3 from datetime import date from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker from sqlalchemy.ext.declarative import declarative_base from movie import Movie app = Flask(__name__) app.secret_key = "LOL" Base = declarative_base() engine = create_engine('sqlite:///adatabase.db', connect_args={'check_same_thread': False}, echo=True) Base.metadata.create_all(engine) Base.metadata.bind = engine Session = (sessionmaker(bind=engine)) #scoped_session Base.metadata.create_all(engine) session = Session() @app.route('/') @app.route('/home') def home(): return render_template('home.html') @app.route('/form', methods = ['POST','GET']) def form(): try: if request.method == 'POST': title = request.form['title'] release_date = request.form['release_date'] print(release_date) session.add(Movie(title,date(int(release_date.split('-')[0]),int(release_date.split('-')[1]),int(release_date.split('-')[2])))) session.commit() session.close() return redirect(url_for('table')) except: flash("An error occured while writing to database") return redirect(url_for('home')) return render_template('form.html', title = "Form") @app.route('/table') def table(): con = sqlite3.connect('adatabase.db') con.row_factory = sqlite3.Row cur = con.cursor() cur.execute('select * from movies') movies = cur.fetchall() return render_template('table.html',movies = movies, title = "Movies") @app.route('/delete/<int:post_id>') def delete(post_id): query = session.query(Movie).filter(Movie.id == post_id).first() session.delete(query) session.commit() session.close() return redirect(url_for('table')) @app.route('/modify/<int:post_id>', methods = ['POST','GET']) def modify(post_id): query = session.query(Movie).filter(Movie.id == post_id).first() if request.method == 'POST': 
title = request.form['title'] release_date = request.form['release_date'] session.delete(query) session.add(Movie(title,date(int(release_date.split('-')[0]),int(release_date.split('-')[1]),int(release_date.split('-')[2])))) session.commit() session.close() return redirect(url_for('table')) return render_template('edit.html',num = post_id,title = query.title,date = query.release_date) if __name__ == '__main__': app.run(debug = True) I have a Movie class defined in another script movie.py: from sqlalchemy import Column, String, Integer, Date, Table, ForeignKey from sqlalchemy.orm import relationship from base import Base movies_actors_association = Table( 'movies_actors', Base.metadata, Column('movie_id', Integer, ForeignKey('movies.id')), Column('actor_id', Integer, ForeignKey('actors.id')) ) class Movie(Base): __tablename__ = 'movies' id = Column(Integer, primary_key=True) title = Column(String) release_date = Column(Date) actors = relationship('Actor',secondary=movies_actors_association) def __init__(self,title,release_date): self.title = title self.release_date = release_date def __repr__(self): return self.title Finally, I have 5 HTML files that I think would just clutter up the post (if it isn't already) that are just home, form, table, edit, and base (the one that all of them extend, containing hyperlinks to all other pages). The library works great. I was having an SQLAlchemy check_same_thread issue but I've added connect_args={'check_same_thread': False}, echo=True) to the engine=create_engine() and now it runs smoothly. I know that I should probably have several files in which I do databases, classes, page switching and such but I'm not sure about the exact structure. It also feels like I'm missing some things as some example code I've seen uses db = SQLAlchemy(app) but others just use the session and engine. 
Answer: Your database management is kind of a mess: SQLAlchemy recommends keeping your session scope the same as your request scope; Having a single, global, session and closing it after a request means that you can only ever make a single request for the whole lifetime of your application (which is not very useful); (I suspect that because of that) You mix using SQLAlchemy sessions and plain sqlite connections; this is bad as a single tool should perform all these operations; You mix table creation with application operations, these should be separated into two different scripts/tasks: your web server operates on the tables, and an administration task (either by hand or with a dedicated script) is responsible for creating them beforehand. For simplification of these tasks, a library has been developed: Flask-SQLAlchemy You can have the following layout: movie.py from flask_sqlalchemy import SQLAlchemy db = SQLAlchemy() class Movie(db.Model): id = db.Column(db.Integer, primary_key=True) title = db.Column(db.String, nullable=False) release_date = db.Column(db.Date, nullable=False) # I’ll let you manage actors accordingly main.py from flask import Flask, render_template, request, flash, url_for, redirect from movie import Movie, db app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///adatabase.db' db.init_app(app) # regular route definitions if __name__ == '__main__': app.run(debug=True) And then, the table creation can simply be done by launching your Python interpreter and doing: >>> from movie import db >>> from main import app >>> app.app_context().push() >>> db.create_all() No need to embed this logic into your web-server. Or, at the very least, put it into a function in your main.py; you don't have to run this every time you launch your server. Now to the part about your web server. The kind of operations you display here is known as CRUD.
This usually requires two kinds of routes: A general route to list all items of a kind and to create new ones; A specific route to manage a single element (read, update, delete). The general route usually responds to GET and POST, the specific one usually responds to GET, PUT and DELETE. A rough sketch of the application would be: from datetime import datetime from contextlib import suppress from flask import Flask, render_template, request, redirect, url_for from movie import Movie, db app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///adatabase.db' db.init_app(app) @app.route('/', methods=['GET']) @app.route('/movies', methods=['POST', 'GET']) def movies(): if request.method == 'POST': title = request.form['title'] release_date = request.form['release_date'] db.session.add(Movie(title, parse_release_date(release_date))) db.session.commit() movies = Movie.query.all() return render_template('movies.html', movies=movies) @app.route('/movies/<int:post_id>', methods=['GET', 'PUT', 'DELETE']) def movie(post_id): the_movie = Movie.query.filter(Movie.id == post_id).first() if request.method == 'DELETE': db.session.delete(the_movie) db.session.commit() return redirect(url_for('movies')) if request.method == 'PUT': with suppress(KeyError): the_movie.title = request.form['title'] with suppress(KeyError): the_movie.release_date = parse_release_date(request.form['release_date']) db.session.commit() return render_template('single_movie.html', movie=the_movie) def parse_release_date(date): parsed_date = datetime.strptime(date, '%Y-%m-%d') return parsed_date.date() if __name__ == '__main__': app.run(debug=True) Then you just need a simple movies.html displaying the list of movies and providing a form to add a new one; and a single_movie.html presenting the information for a movie and providing a form to update it as well as a delete button.
{ "domain": "codereview.stackexchange", "id": 31424, "tags": "python, python-3.x, flask, sqlalchemy" }
How to use LSTM to generate a paragraph
Question: An LSTM model can be trained to generate text sequences by feeding the first word. After feeding the first word, the model will generate a sequence of words (a sentence). Feed the first word to get the second word, feed the first word + the second word to get the third word, and so on. However, for the next sentence, what should the first word be? The goal is to generate a paragraph of multiple sentences. Answer: Take the sentence that was generated by your LSTM and feed it back into the LSTM as input. Then the LSTM will generate the next sentence. So the LSTM is using its previous output as its input. That's what makes it recursive. The initial word is just your base case. Also you should consider using GPT-2 by OpenAI to do this. It's pretty impressive. https://openai.com/blog/better-language-models/
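The feedback loop described in the answer can be sketched as follows (the generation functions and the toy model are hypothetical stand-ins, not a real trained LSTM):

```python
def generate_sentence(model, seed_words):
    """Sample words one at a time, feeding the growing sequence back into
    the model, until it emits an end-of-sentence token."""
    words = list(seed_words)
    while True:
        next_word = model(words)        # predict from everything so far
        if next_word == "<eos>":
            break
        words.append(next_word)
    return words

def generate_paragraph(model, first_word, n_sentences):
    """Each generated sentence becomes the seed input for the next one."""
    sentences, context = [], [first_word]
    for _ in range(n_sentences):
        sentence = generate_sentence(model, context)
        sentences.append(sentence)
        context = sentence              # previous output is the next input
    return sentences

# Toy "model" that emits a fixed pattern, just to exercise the loop.
def toy_model(words):
    return "<eos>" if len(words) >= 3 else "word"

print(generate_paragraph(toy_model, "The", 2))
```

With a real model, `model` would be the LSTM's sampling step and `<eos>` a token learned from the training corpus.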
{ "domain": "ai.stackexchange", "id": 1624, "tags": "deep-learning, long-short-term-memory, sequence-modeling, text-classification, text-generation" }
Generating a unique reference number for logging purposes in a custom exception
Question: I've come up with a base class for exceptions for this SMS-based service I am working on. The basic idea behind the service is that a user sends a text message to a number, the message requests a particular action to be carried out by an agent that logs into a different website, from which some account information is scraped and spat back out to the user. Small take on QoL. Obviously, things don't always work as intended for different reasons. When something goes wrong and the returned text message doesn't contain the requested information, I want the user to have a reference number to that error, should the issue persist. To accommodate for that I will keep logs, where I will quickly be able to look up what happened using the reference number. The logs will contain full error information. Intended use of the custom exceptions: try: service.login(credentials) except AuthenticationError as e: sms_client.send(phone_number, f'{Something went wrong. If this keeps on happening, please contact us with this error ref: {e.reference}') ... import logging import uuid logging.basicConfig(filename='errors.log', level=logging.ERROR) class ServiceException(Exception): def __init__(self, message, url): self.reference = str(uuid.uuid4()).split('-')[0] error_message = f'{message}: \'{url}\' | error ref: {self.reference}' logging.error(error_message) super(Exception, self).__init__(error_message) class AuthenticationError(ServiceException): """Ambiguous error prevented successful authentication.""" Answer: concerned if cutting the UUID short will keep it unique Well, you're outputting 32 bits. So you can do ballpark 64k transactions (2^16) before running into Big Trouble, according to the birthday paradox. If you realistically expect someone to report issues within a day, then the identifier is effectively (day, reference), which looks more attractive, as you can do some tens of thousands of transactions each day rather than over the lifetime of the service. 
More generally, resolve ties with "last one wins", and then you're comparing "typical distance between collisions" to "typical time for user to report an issue". You're putting 4 bits of entropy into each hex nybble, which likely will be copy-n-pasted (or forwarded) by the user. If you're working within an 8-character constraint, consider using a scheme like base36 or base64 so each character sends more bits. Without pasting there will be usability limitations, such as eyeballing an 'l' and sending the digit '1'. If you're not worried about users making up identifiers, consider using (userid, serial_num), or better: (userid, timestamp).
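The birthday-bound estimate can be checked with the standard approximation $p \approx 1 - e^{-n^2 / 2N}$, where N is the size of the identifier space (8 hex characters gives N = 2^32):

```python
import math

N = 2 ** 32  # 8 hex characters carry 32 bits

def collision_probability(n):
    """Birthday-paradox approximation for n identifiers drawn from N values."""
    return 1 - math.exp(-n * n / (2 * N))

# Collisions stay negligible for a while, but around 2^16 identifiers
# (the answer's "ballpark 64k transactions") the odds are substantial.
print(f"{collision_probability(1_000):.6f}")   # roughly 0.0001
print(f"{collision_probability(2 ** 16):.3f}") # roughly 0.39
```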
{ "domain": "codereview.stackexchange", "id": 30620, "tags": "python, python-3.x, error-handling" }
My online hangman solver
Question: I've created a simple hangman solver and although I'd consider myself proficient in webdev I am by no means an expert. The trick with this is that since HTTP is stateless, I can't really think of a better way to look words up than to start from scratch each time (and lookup is performed with ajax EVERY time the input changes). So any special data structures would need to be created from the input each time, an overhead which isn't really worth it. I'm especially interested in any ways to optimize my backend lookup code: /* lookup.php */ $length = $_POST['length']; $proto = $_POST['proto']; $dead = $_POST['dead']; $live = str_replace("*", "", $proto); // check that $length is number // load appropriate dictionary into $dict $dict = file("dicts/" . $length . "_letters.txt", FILE_IGNORE_NEW_LINES); $words = array(); // loop through each word in dictionary foreach($dict as $word){ $flag = true; // check that word matches proto for($i=0; $i<strlen($proto); $i++){ if($proto[$i] == "*"){ // wildcard for($j=0; $j<strlen($live); $j++){ if($word[$i] == $live[$j]){ // disallow a*** matching abba $flag = false; continue 3; } } continue; } if($proto[$i] == $word[$i]){ // correct, go to next letter //error_log($proto[$i] . " = " . $word[$i]); continue; } else{ // incorrect //error_log($proto[$i] . " != " . $word[$i]); $flag = false; continue 2; } } if($flag){ array_push($words, $word); } }// end foreach loop unset($word); $deadWords = array(); //remove words with dead letters foreach ($words as $word){ for ($i=0; $i < strlen($dead); $i++) { if(strpos($word, $dead[$i]) !== false){ error_log("adding " . $word . 
" to deadWords"); array_push($deadWords, $word); continue 2; } } } unset($word); $words = array_values(array_diff($words, $deadWords)); // probably limit returned words to 500 or so $alphabet = "abcdefghijklmnopqrstuvwxyz"; $letters = array_fill_keys(str_split($alphabet), 0); // count occurrances of each letter by word foreach($words as $word){ // check for each letter in each word for($i=0; $i<26; $i++){ if(strpos($word,$alphabet[$i]) !== false){ $letters[ $alphabet[$i] ]++; } } }// end foreach loop unset($word); arsort($letters, SORT_NUMERIC); // normalize numbers to size of words $num_words = count($words); foreach($letters as $l => $v){ $letters[$l] = $letters[$l] / $num_words; } //remove letters with zero frequency foreach($letters as $l=>$v){ if($v == 0 || $v == 1){ $letters = array_diff($letters, array($l=>$v)); } } unset($l); $ret = array("words"=>$words, "letters"=>$letters); echo json_encode($ret); Here is the JS on my front end var length; var response; var blocking = false; $(document).ready(function(){ setProto(); var height = Math.ceil($(window).height() / 40); var width = Math.ceil($(window).width() / 40); document.getElementById("length").oninput = function(){ if( !isNaN(parseInt( $("#length").val() )) ){ length = parseInt($("#length").val()); } setProto(); evaluate(); } document.getElementById("proto").oninput = evaluate; document.getElementById("dead").oninput = evaluate; $(".headings h2").click(function(e){ dataSelect(e.target); }); $(".reset button").click(reset); }); var evaluate = function(){ if(blocking) return; if( $("#length").val() == "") return; if( !checkProto() ) return; $(".bottom-row").removeClass("hidden"); $("#loading").removeClass("hidden"); for(var i=0; i<4; i++){ $(".column" + i).html(""); } blocking = true; // send ajax request $.ajax({ url: 'lookup.php', type: 'post', data: { 'length':length, 'proto':getProto(), 'dead':$("#dead").val()}, success: function(data, status) { data = JSON.parse(data); response = data; var j=0; var 
norm; for(var key in response.letters){ if(norm == null){ norm = 0.95 / response.letters[key]; } var percent = (response.letters[key] * 100).toFixed(2); var str = key + " : " + percent + "%"; var entry = document.createElement("div"); var label = document.createElement("span"); var bar = document.createElement("div"); $(entry).addClass("letter-entry"); $(label).html(str); $(bar).attr("style", "width:" + percent*norm + "%"); $(entry).append( $(label) ); $(entry).append( $(bar) ); $(".column" + j%2).append( $(entry) ); j++; } for(var i=0; i<data.words.length; i++){ var word = document.createElement("a"); var lbrk = document.createElement("br"); $(word).html(data.words[i]); $(word).attr("href", "http://dictionary.com/browse/" + data.words[i]); $(word).attr("target", "_blank"); $(".column" + (i%2 + 2) ).append( $(word) ); $(".column" + (i%2 + 2) ).append( $(lbrk) ); } $("#possible").html(data.words.length + " Possible Words"); }, error: function(xhr, desc, err) { console.log(xhr); console.log("Details: " + desc + "\nError:" + err); }, complete: function(){ $("#loading").addClass("hidden"); blocking = false; } }); // end ajax call }; function setProto(){ if(length == 0) length = 4; for(var i=0; i<8; i++){ if(i<length){ $($("#proto").children()[i]).css("display", "block"); }else{ $($("#proto").children()[i]).css("display", "none"); } } } function getProto(){ var ret = ""; for(var i=0; i<length; i++){ if($($("#proto").children()[i]).val() == ""){ ret += "*"; } else{ ret += $($("#proto").children()[i]).val(); } } return ret.toLowerCase(); } // return true if proto is not all asterisks function checkProto(){ var proto = getProto(); for(var i=0; i<proto.length; i++){ if(proto.charAt(i) != "*"){ return true; } } return false; } function dataSelect(target){ if($(target).hasClass("selected")) return; $("#likely").toggleClass("selected"); $("#possible").toggleClass("selected"); if(target.id == "likely"){ $("#lettergraph").removeClass("hidden"); $("#wordlist").addClass("hidden"); 
}else{ $("#lettergraph").addClass("hidden"); $("#wordlist").removeClass("hidden"); } } var reset = function(){ length = 4; $("#length").val(4); for(var i=0; i<length; i++){ $($("#proto").children()[i]).val(""); } $(".bottom-row").addClass("hidden"); $("#dead").val(""); setProto(); } Answer: $length = $_POST['length']; $proto = $_POST['proto']; $dead = $_POST['dead']; Consider adding more normalization to this. For example, trim will strip off leading and trailing whitespace. foreach($dict as $word){ $flag = true; // check that word matches proto for($i=0; $i<strlen($proto); $i++){ if($proto[$i] == "*"){ // wildcard for($j=0; $j<strlen($live); $j++){ if($word[$i] == $live[$j]){ // disallow a*** matching abba $flag = false; continue 3; } } continue; } if($proto[$i] == $word[$i]){ // correct, go to next letter //error_log($proto[$i] . " = " . $word[$i]); continue; } else{ // incorrect //error_log($proto[$i] . " != " . $word[$i]); $flag = false; continue 2; } } if($flag){ array_push($words, $word); } }// end foreach loop You don't need $flag here with the way that you are using continue. $protoLength = strlen($proto); $deadLength = strlen($dead); foreach ($dict as $word) { for ($i = 0; $i < $protoLength; $i++) { if ($proto[$i] == "*") { // wildcard // disallow a*** matching abba if (strpos($live, $word[$i]) !== false) { continue 2; } } else if ($proto[$i] != $word[$i]) { continue 2; } } for ($i=0; $i < $deadLength; $i++) { if (strpos($word, $dead[$i]) !== false){ error_log("adding " . $word . " to deadWords"); continue 2; } } array_push($words, $word); } This will do the same thing without all the $flag setting since you skip to the next iteration every time you set $flag to false. Adding an else saves having to do a continue in the first if. And you never needed the second if. You're really just checking the false case. I wouldn't put strlen in a loop check, as I'm not convinced that PHP will optimize it out rather than calling it on each iteration. 
Doing it this way ensures that it will only get checked once. A strpos call is going to be more efficient than a for loop. I moved the dead letter check into the same loop. This way we never add a dead word to the list rather than removing them later.
{ "domain": "codereview.stackexchange", "id": 20263, "tags": "javascript, php, algorithm, jquery, hangman" }
Infinite Increase in Entropy when Energy added to Absolute Zero
Question: My textbook states the following: If a system were at absolute zero, an additional small amount of heat energy would lead to an infinite increase in entropy. Such a state is impossible. Absolute zero can never be achieved. It also provides the equation: $$ \Delta S_\text{surroundings} = \frac{-\Delta H_\text{system}}{T}$$ Where $T$ is given in kelvin. From the statement, will the entropy of the system or surroundings increase; from what I can deduce, I would say the entropy increase would be within the system as the surroundings is losing energy, but I am not sure. The second part of my question is, why would this lead to an infinite increase in entropy? Please provide a comprehensible mathematical explanation and also, preferably, an analogy. Textbook: Pearson Baccalaureate: Higher Level Chemistry, 2nd Edition. By Catrin Brown and Mike Ford Pages: 254-255 Answer: The textbook is referring to the entropy change of the system. While the textbook is correct that absolute zero can never be attained, its statement that the entropy change is infinite is wrong. The authors' rationale for thinking it is infinite likely stems from a misinterpretation of the definition of entropy change: $dS = \frac{\text{đ}q_{rev}}{T}$ where $\text{đ}q_{rev}$ is the reversible heat flow into the system. Let's start with a system at $0 K$, and then warm it to a temperature $T = T'$. Then: $\Delta S =\int_{0}^{T'} dS= \int_{0}^{T'}\frac{\text{đ}q_{rev}}{T}$ Let's consider what happens when we flow the first, infinitesimal amount of heat, $\text{đ}q$, into the system. And to make the calculation easy, let's assume the heat flow is reversible (it doesn't have to be; if it weren't, we'd just need to find a reversible process that gets us to the same final state). As soon as we do this, the temperature is no longer zero! And once the temperature rises above zero, the singularity disappears and we no longer have a concern about infinite entropy change. 
But, you may ask, what about the mathematics right at the very beginning, when the temperature is indeed zero? Here, to properly calculate the entropy change, we need to use a limit, recognizing that both $T$ and $\text{đ}q$ are approaching zero. The easiest way to understand this is to consider the Debye $T^3$ law, which models the constant-volume heat capacity of solids as they approach absolute zero: $C_v = C_v(T) = k T^3$, where $k$ is a constant, and where I've written "$C_v(T)$" as an explicit indicator that $C_v$ is temperature-dependent. Since, for a constant-volume reversible process, $\text{đ}q_{rev} = C_v(T) dT$, we have: $\Delta S= \int_{0}^{T'}\frac{\text{đ}q_{rev}}{T} = \int_{0}^{T'}\frac{C_v(T)}{T} dT = \int_{0}^{T'}\frac{k T^3}{T} dT=\int_{0}^{T'}k T^2 dT$ I.e., in evaluating what happens to $dS = \frac{\text{đ}q_{rev}}{T}$ as $T \rightarrow 0$, it is necessary to consider what is happening to both the numerator and the denominator. The textbook authors' mistake was in not understanding this. Clearly, once we find a reasonable functional form for $\text{đ}q$ in the limit as $T \rightarrow 0$, the singularity disappears. Think of it this way (borrowing from one of my comments): At constant volume, $dS=\frac{C_v}{T} dT$. As you approach absolute zero, it's not just $T$ that's going to $0$; $C_v$ is going to $0$ as well. And since $C_v$ is going to $0$ faster than $T$ is going to $0$, $\frac{C_v}{T}$ goes to $0$ rather than infinity (for the same reason that $\frac{x^3}{x}$ goes to $0$ rather than infinity as $x$ goes to $0$). I.e., the mistake would be in assuming that $C_v$ is a constant; it's not. Here's another way we can understand that the textbook's statement is wrong: Thermodynamics allows us, in principle, to determine absolute entropies for any substance. 
The absolute entropy is given by integrating the entropy change from absolute zero to whatever temperature the substance is at: $\text{Absolute entropy at } T' \equiv S(T') =\Delta S_{0 \rightarrow T'}= \int_{0}^{T'}\frac{\text{đ}q_{rev}}{T}$ If the entropy change between $0 K$ and any higher temperature were infinite, the absolute entropies of all real substances would likewise also be infinite! [Unless the substance were at absolute zero, which is unattainable.] Here's a link showing how standard molar entropies might be calculated (I don't know if the procedure described here is what is used to determine the official CODATA values): https://www2.stetson.edu/~wgrubbs/datadriven/entropyaluminumoxide/entropyal2o3wtg.html. Essentially, it mentions using the Debye extrapolation up to ~$15 K$, and then using a differential scanning calorimeter to determine the reversible heat flow between $15 K$ and $298.15 K$.
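The finiteness argument above is easy to check numerically; a minimal sketch, in which the constant $k$ and the final temperature are arbitrary illustrative values, not data for any real solid:

```python
# Numerical check that Delta-S = integral of C_v(T)/T dT stays finite as the
# lower limit goes to 0, using the Debye model C_v(T) = k*T^3.
k = 2.0          # model constant, arbitrary units
T_final = 10.0   # final temperature T'

def delta_S(T_low, n=100_000):
    """Trapezoidal estimate of the integral of (k*T^3)/T = k*T^2 from T_low to T_final."""
    dT = (T_final - T_low) / n
    total = 0.0
    for i in range(n + 1):
        T = T_low + i * dT
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * k * T * T * dT
    return total

exact = k * T_final**3 / 3      # closed form with lower limit exactly 0
print(delta_S(1e-9), exact)     # both ~666.67: no divergence as T -> 0
```

Pushing `T_low` ever closer to zero leaves the result unchanged, which is the point: the $T^3$ in the numerator kills the $1/T$ singularity.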
{ "domain": "chemistry.stackexchange", "id": 12385, "tags": "enthalpy, temperature, entropy" }
Simple CLI Python Hangman game
Question: Preamble: I am very new to Python, and outside of Googling functions and a former work colleague telling me why I'm wrong, I have no formal training and haven't taken any "Learn Python" courses. I have written a CLI Hangman game which selects a random word from a ~7000-line list of words (in another file called wordlist.txt), and you have to guess the word within a set number of attempts. I would really appreciate some input into how I can follow normal Python coding practices better; my experience is in other languages so I tried my best to follow general coding standards but beyond that I just followed what PyCharm suggested I should improve - I am not looking for improvements in things like functionality; I am more looking for improvements in code style and best practices. Below is my code. There is another file which this is called from (if you select the correct menu item), Menu.py, hence no Hangman() at the end: import os class Hangman(object): input_letter = [] word_to_guess = [] DIR_PATH = os.path.dirname(os.path.realpath(__file__)) DICTIONARY = os.path.join(DIR_PATH, "wordlist.txt") FILE = open(DICTIONARY, "r") list_of_words = [] mutable_hidden_word = [] count = 0 game_complete = False wrong_guesses = [] number_of_guesses = 0 HANGED_MAN = { 0: " \n | \n | \n | \n | \n | \n |\n____|________\n", 1: " _______\n | \n | \n | \n | \n | \n |\n____|________\n", 2: " _______\n |/ \n | \n | \n | \n | \n |\n____|________\n", 3: " _______\n |/ |\n | \n | \n | \n | \n |\n____|________\n", 4: " _______\n |/ |\n | (_)\n | \n | \n | \n |\n____|________\n", 5: " _______\n |/ |\n | (_)\n | |\n | |\n | \n |\n____|________\n", 6: " _______\n |/ |\n | (_)\n | \\|\n | |\n | \n |\n____|________\n", 7: " _______\n |/ |\n | (_)\n | \\|/\n | |\n | \n |\n____|________\n", 8: " _______\n |/ |\n | (_)\n | \\|/\n | |\n | / \n |\n____|________\n", 9: " _______\n |/ |\n | (_)\n | \\|/\n | |\n | / \\\n |\n____|________\n" } def __init__(self): print "Welcome to Hangman!" 
self.reset_all() self.add_words_to_list() self.get_random_word(self.list_of_words, self.read_word_list_length(self.DICTIONARY)) self.FILE.close() self.create_hidden_word(self.mk_string(self.word_to_guess, "")) while not self.game_complete: os.system('clear') self.play_game() play_again = raw_input("Play again? (y/n)\n> ").lower() if play_again == "y": Hangman() # SETUP FUNCTIONS def reset_all(self): self.word_to_guess = [] self.wrong_guesses = [] self.mutable_hidden_word = [] self.game_complete = False self.count = 0 self.number_of_guesses = 0 self.input_letter = [] def add_words_to_list(self): if not self.list_of_words: for line in self.FILE: self.list_of_words.append(line) @staticmethod def read_word_list_length(fileName): with open(fileName) as f: i = -1 for i, l in enumerate(f, 1): pass return i @staticmethod def mk_string(word, inbetween=""): return inbetween.join(word) def get_random_word(self, listOfWords, wordListLength): from random import randint randomWord = randint(0, wordListLength-1) word = listOfWords[randomWord].replace("\n", "") for char in word: self.word_to_guess.extend(char) def create_hidden_word(self, wordToHide): for char in range(0, len(wordToHide)): self.mutable_hidden_word.append("_") # GAMEPLAY FUNCTIONS def play_game(self): if "_" in self.mutable_hidden_word: self.print_hanged_man() self.print_hidden_word() self.print_wrong_guesses() self.take_player_guess() self.check_player_guess(self.input_letter[0]) self.number_of_guesses += 1 if self.count == 9: self.print_hanged_man() self.print_hidden_word() self.print_wrong_guesses() print "You lose!" print "Correct word: " + self.mk_string(self.word_to_guess) self.game_complete = True else: self.print_hanged_man() self.print_hidden_word() print "Congratulations, you win! Your word was \"%s\". 
It took you %d guesses to win."\ % (self.mk_string(self.word_to_guess), self.number_of_guesses) self.game_complete = True def take_player_guess(self): player_input = raw_input("\nInput a letter: >> ") if len(player_input) > 1: self.take_player_guess() else: self.input_letter.insert(0, player_input) def check_player_guess(self, guess): if guess not in self.mk_string(self.word_to_guess): self.count += 1 self.wrong_guesses.append(guess) else: for letter in range(0, len(self.mk_string(self.word_to_guess))): if self.mk_string(self.word_to_guess)[letter] == guess: self.mutable_hidden_word[letter] = guess def print_hidden_word(self): print self.mk_string(self.mutable_hidden_word, " ") def print_wrong_guesses(self): if not self.wrong_guesses == []: print "Wrong guesses: " + self.mk_string(self.wrong_guesses, ", ") def print_hanged_man(self): try: print self.HANGED_MAN[self.count] except KeyError: print "Dictionary key is invalid!" Answer: Two notes: 1) Add more docstrings as documentation. From def check_player_guess(self, guess): if guess not in self.mk_string(self.word_to_guess): ... to def check_player_guess(self, guess): """What does it do? What are the params?""" if guess not in self.mk_string(self.word_to_guess): ... Take it as good practice, though I must congratulate you on your judicious naming. 2) Since all your state lives in class attributes, using @classmethod would reduce some bloat. From def play_game(self): if "_" in self.mutable_hidden_word: self.print_hanged_man() self.print_hidden_word() self.print_wrong_guesses() self.take_player_guess() self.check_player_guess(self.input_letter[0]) self.number_of_guesses += 1 to @classmethod def play_game(cls): if "_" in cls.mutable_hidden_word: cls.print_hanged_man() cls.print_hidden_word() cls.print_wrong_guesses() cls.take_player_guess() cls.check_player_guess(cls.input_letter[0]) cls.number_of_guesses += 1 provided @classmethod is also used on the helpers (i.e. the print_... methods).
{ "domain": "codereview.stackexchange", "id": 31674, "tags": "python, game, hangman, python-2.x" }
Manual transmission: Why downshifting is less smooth than shifting up?
Question: Today while driving I had to make a quick shift to a lower gear that was not as smooth as it should've been. I've been taught that while shifting to a lower gear you have to be more careful when releasing the clutch than while shifting a gear up. Why is this? I understand the basic mechanism of transmission and clutch. When shifting up, the car can "bump" slightly if the rotational speeds of the engine and wheels are far apart. But why does this effect seem more noticeable when shifting down? Answer: That's because when you upshift, you select a lower ratio, so your clutch speed drops. Since you took your foot off the gas, the engine rpm also dropped, and now the clutch and engine speed are close to synced, and hence little force is felt when you engage the clutch. The best way to upshift is to relieve the throttle a little (not fully), so the rpm will drop a bit as soon as you declutch. Meanwhile, shift up and clutch. After practice, no force will be felt. Every gearshift will increase or decrease rpm by about 20-25%. So aim with the throttle at an rpm 20-25% higher or lower when downshifting or upshifting, respectively. When you downshift, you select a higher ratio, so the clutch speed increases. You should apply a little throttle so the engine spins up and matches the increased clutch speed. Again, you'll feel no force when engaging the clutch. If you don't do this, the clutch will have to make up for the difference in speed between the engine and the clutch. That costs time and gives you the annoying feedback. The best way to downshift is to keep your foot on the throttle as it was while cruising. As soon as you disengage the clutch, the engine rpm will increase. Now quickly downshift and re-engage the clutch. While you briefly disengaged the clutch, the rpm rose just enough to match the clutch in the lower gearing, if you did it right. But you have to shift and clutch quickly.
It will require some practice, but you'll have supersmooth shifts and impress your passengers.. :p On top of this, you'll save on some wear on the clutch.
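The 20-25% rule of thumb above can be turned into rough rev-match targets; a hypothetical sketch (the 0.22 step is an illustrative midpoint, not a figure for any real gearbox):

```python
# Rough engine-rpm targets implied by the "20-25% per gear" rule of thumb.
def target_rpm(current_rpm, shift, ratio_step=0.22):
    """Engine rpm to aim for before re-engaging the clutch."""
    if shift == 'up':
        return current_rpm * (1 - ratio_step)   # rpm drops on an upshift
    return current_rpm * (1 + ratio_step)       # rpm rises on a downshift

print(target_rpm(3000, 'up'))     # ~2340
print(target_rpm(2000, 'down'))   # ~2440
```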
{ "domain": "engineering.stackexchange", "id": 1995, "tags": "automotive-engineering, engines, transmission" }
AVL tree worst case height proof
Question: The worst case height of an AVL tree is $1.44 \log n$. How do we prove that? I read somewhere about Fibonacci tricks but did not understand it. Answer: We want to show that the number of nodes $n$ in a height-balanced binary tree with height $h$ grows exponentially with $h$ and at least as fast as the Fibonacci sequence. Let $N_h$ denote the minimum number of nodes in a height-balanced binary tree having height $h$. Recall that in a height-balanced binary tree of height $h$, the subtree rooted at one of the children of the root has height $h-1$, and the subtree rooted at the other child of the root has height $h-1$ or $h-2$. Thus, $N_h > N_{h-1} + N_{h-2}$, so $N_h$ is at least $f_h$, the $h$th term of the Fibonacci sequence, where $f_h \approx \phi^h / \sqrt{5}$ and $\phi$ is the golden ratio $\frac{1+\sqrt{5}}{2}$. So, if $n$ is the number of nodes in an AVL tree of height $h$, we have $n \ge \phi^h / \sqrt{5}$. Taking $\log_2$ of both sides, we get $h \le \frac{\log_2 n}{\log_2 \phi} + c = 1.4404 \log_2 n + c$, for some constant $c$. Thus, an AVL tree has height $h = O(\log n)$. An easier proof, if you don't care about the constants as much, is to observe that $N_h > N_{h-1}+N_{h-2} > 2N_{h-2}$. Hence, $N_h$ grows at least as fast as $\sqrt{2}^h$. So the number of nodes $n$ in a height-balanced binary tree of height $h$ satisfies $n > \sqrt{2}^h$. So $h \log_2 \sqrt{2} < \log_2 n$, which implies $h < 2 \log_2 n$.
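The recurrence and the resulting bound can be checked numerically; a small sketch (the `+ 2` slack stands in for the unspecified constant $c$):

```python
import math

# N_h = minimum number of nodes in an AVL tree of height h:
# N_0 = 1 (single node), N_1 = 2, and N_h = 1 + N_{h-1} + N_{h-2}
# (root plus the two minimal subtrees of heights h-1 and h-2).
N = [1, 2]
for h in range(2, 40):
    N.append(1 + N[h - 1] + N[h - 2])

phi = (1 + math.sqrt(5)) / 2
for h, n in enumerate(N):
    assert n >= phi**h / math.sqrt(5)          # grows at least like phi^h / sqrt(5)
    assert h <= 1.4405 * math.log2(n) + 2      # hence h <= 1.4405 log2(n) + c

print(N[:8])   # [1, 2, 4, 7, 12, 20, 33, 54]
```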
{ "domain": "cs.stackexchange", "id": 15234, "tags": "complexity-theory, binary-trees, balanced-search-trees, avl-trees" }
Strong Password Detection in Python
Question: This is a problem from "Automate the boring stuff". Write a function that uses regular expressions to make sure the password string it is passed is strong. A strong password is defined as one that is at least eight characters long, contains both uppercase and lowercase characters, and has at least one digit. You may need to test the string against multiple regex patterns to validate its strength. How to improve this code? #! /usr/bin/python3 import re def uppercase_check(password): if re.search('[A-Z]', password): #atleast one uppercase character return True return False def lowercase_check(password): if re.search('[a-z]', password): #atleast one lowercase character return True return False def digit_check(password): if re.search('[0-9]', password): #atleast one digit return True return False def user_input_password_check(): password = input("Enter password : ") #atleast 8 character long if len(password) >= 8 and uppercase_check(password) and lowercase_check(password) and digit_check(password): print("Strong Password") else: print("Weak Password") user_input_password_check() Answer: I'm submitting this as a different answer because it goes in a different direction from my previous one, eliminating the bool cast as well as individual functions. You can simply define a tuple of regular expressions and apply all. rexes = ('[A-Z]', '[a-z]', '[0-9]') # ... if len(password) >= 8 and all(re.search(r, password) for r in rexes)): print('Strong password')
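Putting the answer's pieces together, a self-contained sketch of the tuple-of-patterns version might look like this (a sketch, not part of the original exercise):

```python
import re

# One pattern per requirement: an uppercase letter, a lowercase letter, a digit.
RULES = ('[A-Z]', '[a-z]', '[0-9]')

def is_strong(password):
    """At least 8 characters long, with at least one match for every pattern in RULES."""
    return len(password) >= 8 and all(re.search(rule, password) for rule in RULES)

print(is_strong('Abcdef12'))   # True
print(is_strong('abcdef12'))   # False: no uppercase letter
```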
{ "domain": "codereview.stackexchange", "id": 35170, "tags": "python, beginner, python-3.x" }
Scraper for downloading and saving images from web page
Question: I've written some code using Python 3 to scrape movie names, links to the movie posters, and finally save the pictures on the local drive after downloading them from a web page. I have used two functions to accomplish the whole task. I've tried my best to make the process clean. It is working great now. Any suggestion as to the betterment of this script will be very helpful for me. Thanks in advance. Here is the working code: import requests from lxml import html import os url = "https://www.yify-torrent.org/search/1080p/" def ImageScraper(link): response = requests.session().get(link).text tree = html.fromstring(response) for title in tree.xpath('//div[@class="mv"]'): movie_title = title.findtext('.//h3/a') image_url = title.xpath('.//img/@src')[0] image_url = "https:" + image_url image_name = image_url.split('/')[-1] SavingImages(movie_title, image_name, image_url) def SavingImages(movie_name, item_name, item_link): response = requests.session().get(item_link, stream = True) if response.status_code == 200: os.chdir(r"C:\Users\ar\Desktop\mth") with open(item_name, 'wb') as f: for chunk in response.iter_content(1024): f.write(chunk) print(movie_name, item_link) ImageScraper(url) Answer: I would focus on the following things specifically: variable and function naming: use lower_case_with_underscores naming convention what if we rename title to movie and movie_title to title - I think that would be a bit more descriptive response should probably be named page_source since it is not a Response instance but already the text of the response use of spaces and line breaks: according to PEP8 coding style, you should have 2 line breaks between the functions when passing a keyword argument to a function, don't put spaces around the = code organization: I would use a class to share a web-scraping session and have it parameterized with a url and a download directory. I think that would be more modular. 
Improved code: import os import requests from lxml import html class ImageScraper: def __init__(self, url, download_path): self.url = url self.download_path = download_path self.session = requests.Session() def scrape_images(self): response = self.session.get(self.url).text tree = html.fromstring(response) for movie in tree.xpath('//div[@class="mv"]'): title = movie.findtext('.//h3/a') image_url = "https:" + movie.xpath('.//img/@src')[0] image_name = image_url.split('/')[-1] self.save_image(title, image_name, image_url) def save_image(self, movie_name, file_name, item_link): response = self.session.get(item_link, stream=True) if response.status_code == 200: with open(os.path.join(self.download_path, file_name), 'wb') as image_file: for chunk in response.iter_content(1024): image_file.write(chunk) print(movie_name, file_name) if __name__ == '__main__': scraper = ImageScraper(url="https://www.yify-torrent.org/search/1080p/", download_path=r"C:\Users\ar\Desktop\mth") scraper.scrape_images()
{ "domain": "codereview.stackexchange", "id": 26500, "tags": "python, python-3.x, image, web-scraping" }
What are the main causes for eyeball becoming too long or short in myopia and hyperopia?
Question: Myopia and hyperopia occur due to a change in eyeball size or a lack in the ciliary muscle's ability to accommodate. But why, in a LASIK laser operation, is one's cornea shape changed rather than acting on the ciliary muscles or somehow fixing the size of the eyeball? And why does the eyeball become larger or smaller? Does it happen due to external injury or some internal mechanism? What makes the eyeball change its size? Answer: Strictly speaking, you are describing axial myopia, indeed caused by a change in the eyeball shape. In some cases, the cornea itself may be the problem. But your question is still valid for axial myopia and I will try to answer. Regarding the laser operation acting on the cornea, it is the same as wearing glasses or contact lenses: You adjust the (innocent) lens (the refracting and focusing machinery of the eye) to offset the disabling defect of the (guilty) eyeball. To put it in physics terms: You have a lens (the refracting and focusing machinery of the eye, which includes the cornea) positioned perfectly in front of a screen (the retina), so that the focal point of the lens lies on the screen. Perfect vision. Now for some reason the screen is shifted backwards or forwards. (That's the eyeball muscles misbehaving.) You want to shift the screen back to its original position, but you cannot (muscles won't obey, surgical procedure difficult,...) So you operate on the lens (the refractive machinery) and change its focal point so that it lies on the newly positioned screen. More on the underlying mechanisms of myopia: https://en.wikipedia.org/wiki/Near-sightedness#Mechanism
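The lens-and-screen analogy can be made concrete with the thin-lens equation; a toy sketch with illustrative (not anatomical) numbers:

```python
# Toy thin-lens version of the analogy: if the "screen" (retina) moves, you fix
# the image by changing the lens's focal length, which is what glasses or LASIK do.
def focal_length_for(object_dist, image_dist):
    """Thin-lens equation: 1/f = 1/d_o + 1/d_i (all distances in mm)."""
    return 1 / (1 / object_dist + 1 / image_dist)

d_o = 6000.0                              # a distant object, mm
f_good = focal_length_for(d_o, 17.0)      # image lands on a retina 17 mm behind the lens
f_myopic = focal_length_for(d_o, 18.0)    # eyeball grew 1 mm longer
print(f_good < f_myopic)                  # True: a longer eye needs a longer focal length
```

The longer eye needs a weaker (longer-focal-length) system, which is why axial myopia is corrected by effectively weakening the eye's optics.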
{ "domain": "physics.stackexchange", "id": 53465, "tags": "optics, vision, biology" }
Why is the Fourier Transform Not Good for Non-linear Processes?
Question: I was reading through slides about the Hilbert-Huang Transform. In slide 14, which talks about the motivations for a new method instead of the Fourier Transform (FT), the author provides these two reasons, among others: Physical processes are mostly nonstationary. Physical processes are mostly nonlinear. I understand the first reason, where the FT cannot locate the frequencies in time. However, I do not understand why the FT does not work for nonlinear processes. Based on my understanding, the FT is a method to decompose the signal into different and simple components. Why does the condition of non-linearity play a role here? My first thought for reasoning about that assumption was that we cannot represent a non-linearity in the signal by linear combinations of sinusoids. However, I think sinusoids themselves will handle the non-linearity. Answer: Because complex exponentials $e^{\jmath \omega t}$, which result from the Fourier transform, are the eigenfunctions of linear, time-invariant (LTI) systems. See eigenfunction of LTI. Also see this answer on SP.SE. Thus, the Fourier transform is useful for analyzing linear (not non-linear), time-invariant (which can be interpreted as stationary) systems.
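The eigenfunction property is easy to verify numerically; a small sketch with an arbitrary 3-tap FIR filter standing in for an LTI system:

```python
import cmath

# A complex exponential through an LTI system (a 3-tap FIR filter here) comes out
# as the SAME exponential scaled by one complex number H(w): it is an eigenfunction.
# A memoryless nonlinearity (squaring) instead creates a NEW frequency, 2w.
w = 0.7                       # angular frequency (radians/sample)
h = [0.5, 0.3, 0.2]           # FIR impulse response (the LTI system)
H = sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h))   # frequency response

def x(n):
    return cmath.exp(1j * w * n)

def lti(n):
    return sum(hk * x(n - k) for k, hk in enumerate(h))

for n in range(5, 10):
    assert abs(lti(n) - H * x(n)) < 1e-12    # eigenfunction property holds

# Nonlinear system y[n] = x[n]^2 oscillates at 2w, not w:
assert abs(x(3) ** 2 - cmath.exp(2j * w * 3)) < 1e-12
print("eigenfunction property verified")
```

Because a nonlinear system scatters energy into new frequencies like this, a fixed sinusoidal basis no longer diagonalizes it, which is the motivation the slides allude to.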
{ "domain": "dsp.stackexchange", "id": 5181, "tags": "fourier-transform, non-linear" }
What are good strategies for tuning PID loops?
Question: Tuning controller gains can be difficult; what general strategies work well to get a stable system that converges to the right solution? Answer: For small, low-torque motors with little or no gearing, one procedure you can use to get a good baseline tune is to probe its response to a disturbance. To tune a PID use the following steps: Set all gains to zero. Increase the P gain until the response to a disturbance is steady oscillation. Increase the D gain until the oscillations go away (i.e. it's critically damped). Repeat steps 2 and 3 until increasing the D gain does not stop the oscillations. Set P and D to the last stable values. Increase the I gain until it brings you to the setpoint with the number of oscillations desired (normally zero, but a quicker response can be had if you don't mind a couple oscillations of overshoot). What disturbance you use depends on the mechanism the controller is attached to. Normally moving the mechanism by hand away from the setpoint and letting go is enough. If the oscillations grow bigger and bigger then you need to reduce the P gain. If you set the D gain too high the system will begin to chatter (vibrate at a higher frequency than the P gain oscillations). If this happens, reduce the D gain until it stops. I believe this technique has a name. I'll put it here when I find it.
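The end result of the procedure can be illustrated on a toy plant; a minimal sketch in which the plant model and the gain values are illustrative assumptions, not figures for any real motor:

```python
# Minimal discrete PID loop against a toy first-order plant.
def run_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        y += dt * (u - y) / 0.5    # toy plant: first-order lag, tau = 0.5 s
    return y

print(run_pid(kp=2.0, ki=1.0, kd=0.1))   # settles near the setpoint 1.0
```

Raising `kp` alone in this sketch speeds the response but leaves a steady-state error; the integral term is what removes it, mirroring the roles the steps above assign to each gain.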
{ "domain": "robotics.stackexchange", "id": 1603, "tags": "control, pid, tuning" }
Why do I get the error name operator is not defined?
Question: I checked this code multiple times, I am trying to reproduce the same code using Grover's algorithm from qiskit summerschool: import numpy as np from qiskit import IBMQ, QuantumCircuit, Aer, execute from qiskit.quantum_info import Operator from qiskit.providers.ibmq import least_busy from qiskit.visualization import plot_histogram from qiskit.tools.jupyter import * provider = IBMQ.load_account() def phase_oracle(n, indices_to_mark, name='Oracle'): qc = QuantumCircuit(n, name=name) oracle_matrix = np.identity(2**n) for index_to_mark in indices_to_mark: oracle_matrix[index_to_mark, index_to_mark]= -1 qc.unitary(operator(oracle_matrix), range(n)) return qc def diffuser(n): qc=QuantumCircuit(n,name='Diff - "V"') qc.h(range(n)) qc.append(phase_oracle(n,[0]),range(n)) qc.h(range(n)) return qc def Grover(n, marked): qc=QuantumCircuit(n,n) r = int(np.round(np.pi/ (4*np.arcsin(np.sqrt(len(marked)/2**n)))-1/2)) print(f'{n} qubits, basis state {marked} marked, {r} rounds') qc.h(range(n)) for _ in range(r): qc.append(phase_oracle(n,marked),range(n)) qc.append(diffuser(n),range(n)) qc.measure(range(n),range(n)) return qc n = 5 x = np.random.randint(2**n) marked = [x] qc = Grover(n, marked) qc.draw() I get the name operator error which I cannot figure out the reason: NameError Traceback (most recent call last) <ipython-input-22-96635782dc30> in <module> 2 x = np.random.randint(2**n) 3 marked = [x] ----> 4 qc = Grover(n, marked) 5 6 qc.draw() <ipython-input-20-f14e47e0af5d> in Grover(n, marked) 20 qc.h(range(n)) 21 for _ in range(r): ---> 22 qc.append(phase_oracle(n,marked),range(n)) 23 qc.append(diffuser(n),range(n)) 24 qc.measure(range(n),range(n)) <ipython-input-20-f14e47e0af5d> in phase_oracle(n, indices_to_mark, name) 4 for index_to_mark in indices_to_mark: 5 oracle_matrix[index_to_mark, index_to_mark]= -1 ----> 6 qc.unitary(operator(oracle_matrix), range(n)) 7 return qc 8 NameError: name 'operator' is not defined. Can anybody help me woth this? 
Answer: That is because on line 14 of your program, you wrote: qc.unitary(operator(oracle_matrix), range(n)) when it should be: qc.unitary(Operator(oracle_matrix), range(n)) You should have capitalized the O in operator. Changing that, I get the following output when executed your code: 5 qubits, basis state [15] marked, 4 rounds ┌───┐┌─────────┐┌─────────────┐┌─────────┐┌─────────────┐┌─────────┐» q_0: ┤ H ├┤0 ├┤0 ├┤0 ├┤0 ├┤0 ├» ├───┤│ ││ ││ ││ ││ │» q_1: ┤ H ├┤1 ├┤1 ├┤1 ├┤1 ├┤1 ├» ├───┤│ ││ ││ ││ ││ │» q_2: ┤ H ├┤2 Oracle ├┤2 Diff - "V" ├┤2 Oracle ├┤2 Diff - "V" ├┤2 Oracle ├» ├───┤│ ││ ││ ││ ││ │» q_3: ┤ H ├┤3 ├┤3 ├┤3 ├┤3 ├┤3 ├» ├───┤│ ││ ││ ││ ││ │» q_4: ┤ H ├┤4 ├┤4 ├┤4 ├┤4 ├┤4 ├» └───┘└─────────┘└─────────────┘└─────────┘└─────────────┘└─────────┘» c: 5/════════════════════════════════════════════════════════════════════» » « ┌─────────────┐┌─────────┐┌─────────────┐┌─┐ «q_0: ┤0 ├┤0 ├┤0 ├┤M├──────────── « │ ││ ││ │└╥┘┌─┐ «q_1: ┤1 ├┤1 ├┤1 ├─╫─┤M├───────── « │ ││ ││ │ ║ └╥┘┌─┐ «q_2: ┤2 Diff - "V" ├┤2 Oracle ├┤2 Diff - "V" ├─╫──╫─┤M├────── « │ ││ ││ │ ║ ║ └╥┘┌─┐ «q_3: ┤3 ├┤3 ├┤3 ├─╫──╫──╫─┤M├─── « │ ││ ││ │ ║ ║ ║ └╥┘┌─┐ «q_4: ┤4 ├┤4 ├┤4 ├─╫──╫──╫──╫─┤M├ « └─────────────┘└─────────┘└─────────────┘ ║ ║ ║ ║ └╥┘ «c: 5/══════════════════════════════════════════╩══╩══╩══╩══╩═ « 0 1 2 3 4
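The root cause is just that Python names are case-sensitive; a minimal illustration (the `Operator` stand-in below is hypothetical, not the qiskit class):

```python
# The class is Operator (capital O); the lowercase name was never bound, so
# referencing it raises NameError. Stand-in used here instead of the qiskit import.
Operator = object
try:
    operator          # lowercase name: never defined, so NameError is raised
except NameError as err:
    message = str(err)
print("NameError raised:", "not defined" in message)   # NameError raised: True
```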
{ "domain": "quantumcomputing.stackexchange", "id": 2169, "tags": "programming" }
Arduino code and PING))) ultrasonic rangefinder
Question: Context I've started digging into Arduino and different type of sensors which is fun, but I'm worried that that my coding is too explicit. I've lots of experience for coding corporate programs, though it is not in C/C++. I don't consider myself a beginner, but I think something is wrong. Blocks I'd place my code in this section. It is using PING))) sensor to detect how far an object is based on the travel time of an emitted ultra-sonic signal. This sketch is taken from the TinkerCad website which can be found here: tinkercad.com. /* Ping))) Sensor This sketch reads a PING))) ultrasonic rangefinder and returns the distance to the closest object in range. To do this, it sends a pulse to the sensor to initiate a reading, then listens for a pulse to return. The length of the returning pulse is proportional to the distance of the object from the sensor. The circuit: * +V connection of the PING))) attached to +5V * GND connection attached to ground * SIG connection attached to digital pin 7 http://www.arduino.cc/en/Tutorial/Ping This example code is in the public domain. */ const int speedOfSoundInAirInMetersPerSecond = 343; const float speedOfSoundInAirInCentimeterPerMicrosecond = speedOfSoundInAirInMetersPerSecond / 10000.0; const float centimeterToInchRatio = 2.54; int inches = 0; int cm = 0; long readTravelTimeInMicroseconds(int triggerPin, int echoPin) { pinMode(triggerPin, OUTPUT); // Clear the trigger digitalWrite(triggerPin, LOW); delayMicroseconds(2); digitalWrite(triggerPin, HIGH); delayMicroseconds(10); digitalWrite(triggerPin, LOW); pinMode(echoPin, INPUT); // Reads the echo pin, and returns the sound wave travel time in microseconds return pulseIn(echoPin, HIGH); } void setup() { Serial.begin(9600); } void loop() { // I had to divided that speed by 2 because // I'm only interested in the time which took // signal to reach an object. I don't need to know // how much distance we covered by the signal to the object and back. 
cm = (speedOfSoundInAirInCentimeterPerMicrosecond / 2) * readTravelTimeInMicroseconds(7, 7); inches = (cm / centimeterToInchRatio); Serial.print(inches); Serial.print("in, "); Serial.print(cm); Serial.println("cm"); delay(100); } My thoughts Am I not being too explicit? I've learnt over the years that it is always better to be explicit. But, am I not going overboard here? Are there styles and conventions I should follow when working with Arduino or electronics of this type? My question concerns the technical side of the code. Answer: VeryLongIdentifiersInCamelCase do not make your code explicit. They just add noise. The definition const float speedOfSoundInAirInCentimeterPerMicrosecond = speedOfSoundInAirInMetersPerSecond / 10000.0; is very hard to read. 10000 stands out as weird. After all, there are 1000000 microseconds in a second. It took me quite a time to realize that two orders of magnitude are hidden in the meters-to-centimeters conversion. By default, pulseIn times out in 1 second. In reality it means that there is no obstacle within 170 or so meters (about 560 feet). I seriously doubt your hardware is capable of detecting an echo at such distances. It feels safe to shorten the timeout for the sake of better responsiveness. Production-quality code must deal with ambient noise, which will result in false positives. At least, test that the signal you detected is indeed an echo - that is, it is about as long as the ping. You should also special-case pulseIn returning 0. Currently, you are returning 0 distance, whilst in fact it means the distance is practically infinite. Unless I am seeing things, either the code or the documentation is wrong. According to the spec if value is HIGH, pulseIn() waits for the pin to go from LOW to HIGH, starts timing, then waits for the pin to go LOW and stops timing. Returns the length of the pulse in microseconds which means it returns the duration of the echo, not the turnaround time you are after.
Unfortunately, I don't have the hardware to play with. Does your code give reasonable results?
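The distance arithmetic, at least, can be sanity-checked offline; a small sketch assuming the measured time really is the round-trip (out-and-back) travel time in microseconds:

```python
# Sanity check of the sketch's conversion constants.
SPEED_M_PER_S = 343
SPEED_CM_PER_US = SPEED_M_PER_S / 10000.0   # 343 m/s = 34300 cm/s = 0.0343 cm/us

def round_trip_us_to_cm(t_us):
    # divide by 2: t_us covers the path to the object AND back
    return (SPEED_CM_PER_US / 2) * t_us

print(round_trip_us_to_cm(5831))   # ~100 cm for an object 1 m away
```

An object 1 m away giving a ~5831 us reading is a quick plausibility test for the hardware question raised above; it also shows where the "weird" 10000 comes from (m-to-cm times s-to-us, net 10^-4).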
{ "domain": "codereview.stackexchange", "id": 40304, "tags": "c++, c, arduino" }
Velodyne HDL32 FPS issue
Question: Hi Pals, I've been using the latest velodyne package in order to receive data from the HDL-32E. I am using Ubuntu 14.04.5 with a ROS Indigo installation. The driver mostly works fine, getting the desired 15 Hz on a 900 RPM lidar rotation (and, respectively, other rates corresponding to other RPMs). This was checked via "rostopic hz". Using the lidar in its regular configuration generates ~50k points per frame, which is too much for me, especially considering I only need the front +-60 degrees (120 deg in total, which is a third of the whole 360 deg sweep). So I put that information into the corresponding slots inside the lidar's web GUI (you can choose the desired sweep angle), and I do get the desired sweep when I look in Rviz. Now for the problem: I get the same amount of points per message, with the messages coming in at 1/3rd of the rate. The lidar is accumulating points for 3 revolutions and then publishing 1 message. Tests I've made show that this is probably a driver issue and not hardware. I found no parameter to change that might affect this behavior. Here are some screenshots, with the Rviz, launch file, and velodyne web gui open. Note that for a full 360 deg you get 15 fps, on a 120 deg fan you get 5 fps (360/3 -> 15/3), and on a 72 deg fan you get 3 fps. https://drive.google.com/open?id=1m0C... https://drive.google.com/open?id=12B6... https://drive.google.com/open?id=1qy8... If anyone ever tried to limit the sweep before and got it to work, I'd love to hear. Thanks in advance, Steve EDIT: adding pics directly. Firefox gave me trouble for some reason Originally posted by StevenCoral on ROS Answers with karma: 167 on 2018-06-20 Post score: 0 Original comments Comment by gvdhoorn on 2018-06-20: Can you please attach the screenshots to your question directly instead of linking to google drive? Answer: If memory serves, the ros velodyne driver calculates an expected number of packets per scan based on receiving data for a full 360 degrees.
So if you are only sending 1/3 the data packets every rotation, it will take 3 rotations before ros will register enough packets to publish a scan. This lines up with your notes on observed topic publishing frequency. This is a little hacky, but can you try changing the rpm param in your launch file to be 2700 instead of 900 and report back (keep the actual rpm setting in the web interface at 900)? Originally posted by stevejp with karma: 929 on 2018-06-20 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by StevenCoral on 2018-06-20: Actually this makes a lot of sense. Tried it and it works! thanks.
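The arithmetic behind this explanation can be sketched directly. This is a rough model of the driver logic described above, not the driver's actual code, and PACKET_RATE is a placeholder for the sensor's fixed packet rate (its exact value cancels out of the ratios):

```python
# Rough model of the driver logic described above (not the driver's actual
# code). PACKET_RATE stands in for the sensor's fixed UDP packet rate when
# streaming; its exact value cancels out of the ratios below.
PACKET_RATE = 1800.0  # packets/s, placeholder value

def publish_rate_hz(rpm_param, actual_rpm, sweep_deg):
    """Rate at which the driver would publish assembled scans."""
    # The driver expects this many packets per scan, based on its rpm parameter:
    expected_packets = PACKET_RATE * 60.0 / rpm_param
    # But with a restricted azimuth window the sensor only sends this many
    # packets per physical revolution:
    packets_per_rev = PACKET_RATE * (sweep_deg / 360.0) * 60.0 / actual_rpm
    revs_to_fill_a_scan = expected_packets / packets_per_rev
    return (actual_rpm / 60.0) / revs_to_fill_a_scan

print(publish_rate_hz(900, 900, 360))   # full sweep: ~15 Hz
print(publish_rate_hz(900, 900, 120))   # 120-degree fan: only ~5 Hz, as observed
print(publish_rate_hz(2700, 900, 120))  # the rpm=2700 workaround: back to ~15 Hz
```

This reproduces the observed 15/5/3 fps pattern and shows why tripling the rpm parameter restores one published scan per revolution.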
{ "domain": "robotics.stackexchange", "id": 31045, "tags": "lidar, ros-indigo, velodyne" }
Lorentz transformations in Minkowski space including invariants
Question: Let's assume the reference frame of a still observer with the 2 axes: the x-axis and time axis t. Let's say there's another observer within that frame moving with a constant velocity v with respect to the first observer (who's still). The two transformed coordinates for this moving observer are given by x' and t' in a way (by Lorentz transformation): $x' = \frac{x-vt}{\sqrt{1-v^2}}$ ... (1.1) $t' = \frac{t-vx}{\sqrt{1-v^2}}$ ... (1.2) assuming relativistic units. Now, we know that $\tau^2 = t^2-x^2 = t'^2 - x'^2$ is an invariant called proper time. Let's say, for the moving observer, we move along his world line i.e. $x' = 0$. If that's true, we get $\tau^2 = t'^2$ or $\tau = t'$. Similarly, if we move along $x=0$ i.e. the world line of the still observer in the same RF, we get $\tau^2 = t^2$ or $\tau = t$. Since $\tau$ is an invariant, it doesn't change in the same reference frame. Clearly, from the above 2 calculations we get that $\tau = t = t'$ which is obviously not true since $t \neq t'$ (shown in equation 1.2). I feel like I'm missing something rather obvious, something fundamental to the idea of Lorentz transformations, could someone please point that out? Answer: Here is how the set-up looks: we have the real world and three charts (observers viewing it). The observer is denoted by its reference frame (I am using Schutz's definition). In SR, we are interested in the chart transition maps, the ones which relate the coordinates in one frame to another. Now, there is a bit of trickery going on here: conventional SR presentations use a bit of abuse of notation with these variables. They use $(x,t)$ both as the coordinates of the trajectory of a particle in spacetime and as the coordinates of the whole grid. I prefer to be more careful, and if I am to talk about the trajectory of a certain particle, I will suffix my variables with a letter.
For the case here, we study the motion of two particles $A$ and $B$ in the real world (2D spacetime manifold) in these three frames. You have written: Similarly, if we move along $x=0$ i.e. the world line of the still observer in the same RF, we get $\tau^2 = t^2$ or $\tau = t$. To understand what went wrong we begin at the equation $$\tau^2 = t^2 - x^2= t'^2 - x'^2$$ In the above equation, the spacetime interval is invariant if you plug in the points from the $t-x$ plane corresponding to the points in the $t'-x'$ plane by the Lorentz transformation. The points on the line $x=0$ and the line $x'=0$ are not necessarily the same spacetime points, so it doesn't make sense to compare their times. The reason it looks the same is that if you are on a path where the $x$ coordinate doesn't change, then proper time equals coordinate time.
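A quick numerical check of this point, in units where c = 1: pick an event on the moving observer's worldline, transform it back, and see that it is not on the still observer's worldline.

```python
import math

v = 0.6                        # relative velocity in units where c = 1
g = 1.0 / math.sqrt(1 - v**2)  # Lorentz factor (1.25 for v = 0.6)

# Take an event ON the moving observer's worldline x' = 0, say at t' = 2:
tp, xp = 2.0, 0.0
# The inverse Lorentz transform gives its coordinates in the unprimed frame:
t = g * (tp + v * xp)  # approx. 2.5
x = g * (xp + v * tp)  # approx. 1.5, NOT zero

# The interval of this single event is the same in both frames:
assert math.isclose(t**2 - x**2, tp**2 - xp**2)

# But the event does not lie on the still observer's worldline x = 0,
# so tau = t' here and tau = t on x = 0 refer to DIFFERENT spacetime points.
assert x != 0.0
```

The invariance holds event by event; it never equates the coordinate times of two different events.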
{ "domain": "physics.stackexchange", "id": 90160, "tags": "special-relativity, invariants" }
How well (or poorly) does natural leather insulate?
Question: What is the general thermal conductivity of animal skin leather? I understand there are many different animals' skins (cow, sheep, buffalo, deer, pig), but in general, does natural leather insulate well (compared to, for scale, an esky for food insulation), or not? Answer: Interestingly enough, there are papers entirely devoted to the subject of measuring the properties of living skin. The paper linked gives a value of $7 \times 10^{-4}~\mathrm{\frac{cal}{cm\, s\, °C}}$ for human skin, or in everyday units used nowadays: $0.29 ~\mathrm{\frac{W}{m\, K}}$. This is only for the dermis, the part of skin that then gets transformed into leather. Now, this website has a comprehensive list of many different thermal conductivity values, and dry leather clocks in at $0.14 ~\mathrm{\frac{W}{m\, K}}$, which is about half the conductivity of living skin. What a difference the tanning process can make! Given the similarity observed in the review paper between man, beef and porcine tissue, I think it is fair to say that the value for leather will not change greatly across species.
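The unit conversion quoted above can be verified in a couple of lines, using 1 cal = 4.184 J and 1 cm = 0.01 m:

```python
# Convert the paper's CGS-style value for dermis to SI, using
# 1 cal = 4.184 J and 1 cm = 0.01 m:
k_skin_cgs = 7e-4                       # cal / (cm * s * degC)
k_skin_si = k_skin_cgs * 4.184 / 0.01   # -> W / (m * K)
assert round(k_skin_si, 2) == 0.29

# Compare with the listed value for dry leather:
k_leather = 0.14                        # W / (m * K)
assert round(k_skin_si / k_leather, 1) == 2.1   # skin conducts roughly 2x better
```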
{ "domain": "chemistry.stackexchange", "id": 848, "tags": "thermodynamics, heat, conductivity" }
Stability of alkenes tri>di>mono, but how to explain tetra
Question: When we tell students about the formation of alkenes (by elimination for example), we often tell them that reactions will favour the thermodynamically favourable most substituted alkene. Zaitsev's rule describes this empirically: "The alkene formed in greatest amount is the one that corresponds to removal of the hydrogen from the β-carbon having the fewest hydrogen substituents" The stability of the alkene with increasing substitution mono>di>tri is easily explained by hyperconjugation/qualitative MO theory (Phillip wrote a concise answer about that here). As an example to clarify, if there's a choice between eliminating to form a trisubstituted double bond over a monosubstituted one, the tri will predominate in most situations. What I often struggle to explain in any detailed way is why tetra-substituted alkenes don't entirely follow the trend. They are generally quite difficult to form (metathesis aside, but that isn't an elimination in the classical sense), and if formed, are generally quite reactive (on a similar order of magnitude to di-substituted alkenes if some studies on hydrogenation and some calculations are correct). My general answer is 'sterics', based on the fact that 4 substituents around the same alkene is quite crowded, but I'm the first to acknowledge that this 1) doesn't really address all of the deviations from expected properties and 2) isn't really an explanation. Answer: I agree with the OP and have seen numerous examples where tetrasubstituted olefins are indeed quite difficult to make, both in the literature and in my own experience, leading to the formation of anti-Zaitsev (Hofmann) elimination products. Many question writers are reluctant to make a solution where the tetra-substituted option is the correct answer. It is not as simple as the trend in carbenium ion stability where hyperconjugation and conjugation dominate around a trigonal planar (less sterically encumbered) center.
A tetrasubstituted olefin is too sterically encumbered due to 1) two A(1,2) interactions and 2) two exo substituent (gem-dimethyl) interactions. Proof of this failure in the stability trend is included in the original question, as the kinetic stability of such an olefin is unusually and unexpectedly high. As far as the thermodynamic stability, torsional strain makes effective hyperconjugation less possible in the transition state, which varies only marginally with the strength of the base and the lability of the bond to the leaving group. This trend: mono is less stable than exo-di is less stable than cis-di is less stable than trans-di is less stable than tri is....that is about all you can correctly say about the trend. Consider why a cis-disubstituted olefin is less stable than a trans-disubstituted one. Now consider a tetra-substituted alkene. Zaitsev's "rule", like Markovnikov's "rule", is just a fast and dirty guideline for undergraduates, and both have numerous exceptions. It does not follow that tetrasubstituted must be more thermodynamically stable than tri-substituted, and product ratios indicate this repeatedly in the literature. Stereoselectivity of E over Z is a non-problem compared to the difficulty of and abysmal yields when creating a tetrasubstituted olefin. Simply put an ellipsis ("...") after the trisubstituted alkene, as that is all that can be stated to an undergraduate class without presenting a falsehood.
{ "domain": "chemistry.stackexchange", "id": 8295, "tags": "organic-chemistry, reactivity" }
Select Feynman diagrams for 2-loop QED vertex correction
Question: I am trying to calculate the 2-loop correction for a basic vertex in QED So I need to figure out what diagrams I need to consider. My literature says (without much explanation), that there are 7 contributing diagrams (including pictures). But naively it is possible to draw many more 2-loop diagrams with two electron-legs and one photon leg. Most of them are excluded by simple rules such as All legs should be amputated. The diagram should be connected (i.e. no vacuum bubbles). But what about the diagram ? It seems to obey all rules, but yet it is not listed as one of the contributing diagrams. Why is that? What rule am I missing? Answer: The rule you're missing is Furry's theorem. There's another diagram where the internal electron loop has the arrows reversed, related to the original diagram by charge conjugation, and the two should exactly cancel. This generally happens whenever you have a fermion loop with an odd number of photons attached.
{ "domain": "physics.stackexchange", "id": 40145, "tags": "quantum-electrodynamics, feynman-diagrams" }
Why does the force of gravity get weaker as it travels through the dimensions?
Question: Some theories predict that the graviton exists in a dimension that we of course can't see, and that is why the force of gravity is so weak. Because by the time gravity has got from the dimension in which the graviton exists to our dimension it has lost much of its strength. For a better understanding of this theory, I think this might help: Bigger: https://lh4.googleusercontent.com/-M3Z7HBllZys/T5JT-dGobZI/AAAAAAAAFIE/I_XJUA3M0pI/randall_750.jpg?imgmax=1600 Why does the force lose strength as it travels through the other dimensions? Answer: This is nearly a duplicate of large-extra dimensions questions, you can see the answer here. The theories that predict that gravity is weak because it has extra dimensions are the large extra dimensions theories. The reason gravity gets weaker is that gravity obeys a version of Gauss's law, which states that the outward gravitational field integrated over a large surface is equal to the mass inside (up to pressure corrections--- this is the Newtonian gravity, but the Einstein gravity is qualitatively identical in its scaling). If you have a sphere of radius R, its volume goes like $R^d$ by dimensional analysis, where d is the number of dimensions. The surface area goes like $R^{d-1}$ because it's the derivative of the volume with respect to the radius (the surface area is the volume enclosed by two spheres of infinitesimally different radius divided by the infinitesimal radial difference). So the total gravitational field times $R^{d-1}$ is equal to the mass, so that the gravitational field falls off as $1/R^{d-1}$ where d is the number of spatial dimensions. For 3 dimensions, it falls off as $1/R^2$, and this is Newton's universal gravitation. In higher dimensions, gravity falls off faster. So if there are extra dimensions, and only gravity can propagate in these dimensions, gravity gets weaker quickly until you reach the scale where the extra dimensions are curled up.
This means that gravity is weaker than other forces to the extent that the extra dimensions are large. This type of theory must lower the Planck scale tremendously in order for this to be the dominant explanation of gravity's weakness, and in these circumstances, it runs into problems with non-renormalizable corrections, which should have already been observed. This is discussed in the linked answer. Within standard string theory however, a miniature version of this mechanism allows gravity and the other forces to unify at a slightly lower energy scale than the Planck scale, nearer to the GUT scale, because the size of the extra dimensions is seen to be non-zero around there. Lowering the Planck scale by one or two orders of magnitude in this way is not controversial, and was suggested by Witten in the 1980s, long before large extra dimensions were proposed.
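The Gauss's-law scaling argument above can be checked with a dimensional-analysis toy (not a full field computation): in d spatial dimensions the flux through a sphere of radius R equals the enclosed mass, and the sphere's area scales as R^(d-1), so the field scales as M/R^(d-1).

```python
# Dimensional-analysis toy for the Gauss's-law argument: in d spatial
# dimensions, flux through a sphere of radius R equals the enclosed mass M,
# and the sphere's area scales as R^(d-1), so the field scales as M/R^(d-1).
def field_strength(M, R, d):
    return M / R**(d - 1)

# d = 3: doubling the distance quarters the field (Newton's inverse square).
assert field_strength(1.0, 2.0, 3) / field_strength(1.0, 1.0, 3) == 0.25

# With two extra spatial dimensions (d = 5), the same doubling costs 1/16,
# so gravity dilutes much faster below the compactification scale.
assert field_strength(1.0, 2.0, 5) / field_strength(1.0, 1.0, 5) == 0.0625
```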
{ "domain": "physics.stackexchange", "id": 2906, "tags": "gravity, quantum-gravity, spacetime-dimensions" }
Why is the Dirac QFT Spin Operator depending on the state?
Question: I am reading the book Student Friendly Quantum Field Theory. I feel sorry to paste the entire page, but I just want to illustrate this more clearly. This is a very long page. Firstly the author defined the QFT spin operator in (4-110), and then he checked the result of this operator acting on a state to obtain the desired value 1/2 in (4-116). During this process he seemed to use the fact that the operator should depend on the state being measured, at the third term after the second equal sign in (4-116). This, I don't understand. How can a measurement be affected by the thing being observed? Answer: The Dirac operator is $-i\not D$, but it seems the author has called the Dirac Hamiltonian the Dirac operator. In QFT we make the local Hamiltonian out of quantum fields in order to ensure locality. The Hamiltonian operator is then an integral of a local Hamiltonian density over all space. If you take the Dirac equation, ask what Lagrangian gives it, and then construct the Hamiltonian from that, you will get something like what the author calls the Dirac operator.
{ "domain": "physics.stackexchange", "id": 34814, "tags": "quantum-field-theory, dirac-equation" }
Project Euler 18/67: Maximum Sum from Top to Bottom of the Triangle
Question: This is my attempt at Problems 18/67 of Project Euler. You can see the problem information, and obtain the needed txt file from here. I had some help with the split function, and help on how to use a vector to do this. I would love any other feedback on this code. #include <iostream> #include <fstream> #include <vector> #include <string> #include <sstream> //function by jesyspa std::vector<int> split(std::string line){ std::stringstream ss (line); std::vector<int> result; std::string num; while(std::getline(ss, num, ',')) result.push_back(std::stoi(num)); return result; } int main(){ std::vector<std::vector<int>> grid; //inputting triangle file std::ifstream nums; nums.open("triangle.txt"); std::string row; //Calling function, and pushing back into the vector while(std::getline(nums, row, '\n')){ grid.push_back(split(row)); } //term is one shorter than grid int term = grid.back().size() - 1; //This loop adds the larger of the two bottom numbers to the number above the first of the two. for(int a = term; a >= 1; a--){ for(int b = 0; b < a; b++){ if(grid[a][b] > grid[a][b+1]){ grid[a-1][b] += grid[a][b]; } else{ grid[a-1][b] += grid[a][b+1]; } } } //Our answer std::cout << grid[0][0]; } Answer: Generally speaking, I find that your code is clear. I just have a few remarks: You should order your headers in alphabetical order.
It will help you to quickly check whether some header is already included or not if you plan to add more: #include <fstream> #include <iostream> #include <sstream> #include <string> #include <vector> You should open your file directly from the constructor, and check whether there was a problem when you tried to open the file: std::ifstream nums("triangle.txt"); if (not nums.good()) { throw std::runtime_error("Could not open the file."); } You can replace this condition: if(grid[a][b] > grid[a][b+1]){ grid[a-1][b] += grid[a][b]; } else{ grid[a-1][b] += grid[a][b+1]; } with std::max (from <algorithm>): grid[a-1][b] += std::max(grid[a][b], grid[a][b+1]); I don't know why you chose ',' as a separator since the original file uses a space as separator. Had you kept the space as a separator in your file, you could have rewritten the function split with std::istream_iterator (from <iterator>), with an int template parameter: std::vector<int> split(const std::string& line){ std::stringstream ss(line); return { std::istream_iterator<int>{ss}, std::istream_iterator<int>{} }; } The braces after the return correspond to list initialization. The rules for list initialization are pretty complex; the curly braces after return mean that an instance of the announced return type (std::vector<int>) should be created with the arguments between the braces. In our case, the chosen constructor is the one that best matches two istream_iterator<int> instances: template<class InputIt> vector(InputIt first, InputIt last, const Allocator& alloc=Allocator());
{ "domain": "codereview.stackexchange", "id": 7100, "tags": "c++, c++11, programming-challenge" }
Elegant way to plot the L2 regularization path of logistic regression in python?
Question: Trying to plot the L2 regularization path of logistic regression with the following code (an example of regularization path can be found in page 65 of the ML textbook Elements of Statistical Learning https://web.stanford.edu/~hastie/Papers/ESLII.pdf). Have a feeling that I am doing it the dumb way - think there is a simpler and more elegant way to code it - suggestions much appreciated thanks. counter = 0 for c in np.arange(-10, 2, dtype=np.float): lr = LogisticRegression(C = 10**c, fit_intercept=True, solver = 'liblinear', penalty = 'l2', tol = 0.0001, n_jobs = -1, verbose = -1, random_state = 0 ) model=lr.fit(X_train_z, y_train) coeff_list=model.coef_.ravel() if counter == 0: coeff_table = pd.DataFrame(pd.Series(coeff_list,index=X_train.columns),columns=[10**c]) else: temp_table = pd.DataFrame(pd.Series(coeff_list,index=X_train.columns),columns=[10**c]) coeff_table = coeff_table.join(temp_table,how='left') counter += 1 plt.rcParams["figure.figsize"] = (20,10) coeff_table.transpose().iloc[:,:10].plot() plt.ylabel('weight coefficient') plt.xlabel('C') plt.legend(loc='right') plt.xscale('log') plt.show() Answer: sklearn has such a functionality already for regression problems, in enet_path and lasso_path. There's an example notebook here. Those functions have some cython base to them, so are probably substantially faster than your version. One other improvement that you can include in your implementation without adding cython is to use "warm starts": nearby alphas should have similar coefficients. So try # This needs to be instantiated outside the loop so we don't start from scratch each time. lr = LogisticRegression(C = 1, # we'll override this in the loop warm_start=True, fit_intercept=True, solver = 'liblinear', penalty = 'l2', tol = 0.0001, n_jobs = -1, verbose = -1, random_state = 0 ) for c in np.arange(-10, 2, dtype=np.float): lr.set_params(C=10**c) model=lr.fit(X_train_z, y_train) ...
{ "domain": "datascience.stackexchange", "id": 10183, "tags": "python, matplotlib, regularization, lasso" }
Am I interfacing in a secure manner with rijndael?
Question: I have been working to create an easy-to-use set of methods to encrypt configuration objects for my client application. It will contain username and passwords to databases and similar vaults of data, so it's rather important that I've got the fundamentals correct. I'm fairly new to this form of encryption, and I certainly haven't done this in .NET before. So I am curious - is there any given, open caveats in the attached code? I'll greatly appreciate your comments! Also, please don't mind the comments in the code. This is a late-night side-project for me, and I'll look into sprucing it all up once I've got it all examined. public byte[] CalculatePasswordHash(byte[] salt) { var hash = Encoding.UTF8.GetBytes(Password); var hashing = SHA256.Create(); var iterations = 10000; while (iterations > 0) { // Compute the hash hash = hashing.ComputeHash(hash); // Apply the salt for (int position = 0, cursor = 0; position < hash.Length; position += 1) { hash[position] ^= salt[cursor]; cursor += 1; if (cursor >= salt.Length) cursor = 0; } iterations -= 1; } return hash; } public byte[] CreateSalt() { return CreateSalt(SaltLength); } public byte[] CreateSalt(int size) { var random = new RNGCryptoServiceProvider(); var buf = new byte[size]; random.GetBytes(buf); return buf; } public void Lock(string filePath, byte[] source) { // Ensure that the path is rooted filePath = RootPath(filePath); // Generate salt as well as the encryption key var salt = CreateSalt(); var key = CalculatePasswordHash(salt); // Initialize the Rijndael encryption tool var crypto = Rijndael.Create(); var iv = CreateSalt(crypto.BlockSize / 8); using (var mem = new MemoryStream()) { // Encrypt the source array crypto.Key = key; crypto.IV = iv; var desc = crypto.CreateEncryptor(); using (var cryptoStream = new CryptoStream(mem, desc, CryptoStreamMode.Write)) { cryptoStream.Write(source, 0, source.Length); } desc.Dispose(); // Append the salt and iv to the end of the file using (var fs = new 
FileStream(filePath, FileMode.Create)) { var encrypted = mem.ToArray(); fs.Write(encrypted, 0, encrypted.Length); fs.Write(salt, 0, salt.Length); fs.Write(iv, 0, iv.Length); } } } Answer: CalculatePasswordHash is inadvertently reimplementing (and probably not as well) a standard primitive. You should use System.Security.Cryptography.Rfc2898DeriveBytes instead. The code var desc = crypto.CreateEncryptor(); ... desc.Dispose(); raises a red flag. Why isn't it using using? You write to a memory stream and then write the contents of the memory stream unmodified to a file. It seems that you could simplify this by just skipping the middle-man.
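For illustration, Rfc2898DeriveBytes is .NET's implementation of the PBKDF2 standard, and the same primitive exists in Python's standard library. This sketch (with illustrative password and parameter values) shows the kind of salted, iterated derivation the hand-rolled hash loop should be replaced with:

```python
import hashlib
import os

# Rfc2898DeriveBytes is .NET's implementation of PBKDF2; the same standard
# primitive exists in Python's stdlib. Password and parameters here are
# illustrative values only.
password = b"hunter2"
salt = os.urandom(16)  # random per-encryption salt, like CreateSalt()
key = hashlib.pbkdf2_hmac("sha256", password, salt, 10_000, dklen=32)

assert len(key) == 32  # 256-bit key, suitable for AES-256
# Deterministic for the same inputs, different for a different salt:
assert key == hashlib.pbkdf2_hmac("sha256", password, salt, 10_000, dklen=32)
assert key != hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), 10_000, dklen=32)
```

The key point is that the salt is mixed into every HMAC iteration by the standard, rather than XORed in by hand after each hash.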
{ "domain": "codereview.stackexchange", "id": 4256, "tags": "c#, .net, security, cryptography" }
HMSegmentedControl react to tapping on currently selected segment
Question: I'm using HMSegmentedControl, an open-source UISegmentedControl subclass. I'm trying to react to the user tapping on the currently selected segment. HMSegmentedControl only supports reacting to a segment CHANGE, with either UIControlEventValueChanged or a block that executes when the index is changed. I need to react to the currently selected segment in order to present a drop-down menu. Here is what I've done so far: -(void)segmentedControlValueChanged:(HMSegmentedControl *)control { CGFloat midX = self.view.frame.size.width / 2 - _filterView.frame.size.width / 2; _filterView.frame = CGRectMake(midX, -_filterView.frame.size.height+64, _filterView.frame.size.width, _filterView.frame.size.height); _filterView.hidden = YES; _filterBackgroundView.alpha = 0.0f; _filterBackgroundView.hidden = YES; } -(void)segmentedControlTouchUpInside:(UIGestureRecognizer *)gr { if(!_filterView) { _filterView = (HomeFilterView *)[[NSBundle mainBundle] loadNibNamed:@"HomeFilterView" owner:self options:nil].firstObject; CGFloat midX = self.view.frame.size.width / 2 - _filterView.frame.size.width / 2; _filterView.frame = CGRectMake(midX, -_filterView.frame.size.height+64, _filterView.frame.size.width, _filterView.frame.size.height); _filterView.delegate = self; [self.view insertSubview:_filterView belowSubview:self.segmentedControl]; } if(!_filterBackgroundView) { _filterBackgroundView = [[UIView alloc] initWithFrame:self.view.bounds]; _filterBackgroundView.backgroundColor = [UIColor blackColor]; _filterBackgroundView.alpha = 0.0f; [self.view insertSubview:_filterBackgroundView belowSubview:_filterView]; } _filterView.hidden = NO; _filterBackgroundView.hidden = NO; [UIView animateWithDuration:0.5f delay:0.0f usingSpringWithDamping:0.55f initialSpringVelocity:0.0f options:UIViewAnimationOptionCurveLinear animations:^{ CGFloat midX = self.view.frame.size.width / 2 - _filterView.frame.size.width / 2; CGFloat y = self.segmentedControl.frame.size.height + 
self.segmentedControl.frame.origin.y; _filterView.frame = CGRectMake(midX, y-70, _filterView.frame.size.width, _filterView.frame.size.height); } completion:^(BOOL finished) { }]; [UIView animateWithDuration:0.25f animations:^{ _filterBackgroundView.alpha = 0.5f; } completion:^(BOOL finished) { }]; } I added a tap gesture recognizer to the segmented control. So, if a user selects the current segment, only segmentedControlTouchUpInside: will be fired. If a user selected a different segment, segmentedControlTouchUpInside: is fired, followed by segmentedControlValueChanged:. I'm relying on the segmentedControlValueChanged: method to be called after the gesture-recognizer method, and essentially "undo" the adding of the drop-down to the UI. This is the only way I could figure out to handle the tapping of the current segment. It seems to accomplish what I'm going for in the UI, but can anyone think of a better way of doing this? Is this safe to do? Answer: You can fork the control and do a tiny modification to the code to do what you need. In touchesEnded:withEvent:, you need to modify this line so it wouldn't check if the tapped segment is the currently selected segment: if (segment < sectionsCount)
{ "domain": "codereview.stackexchange", "id": 11503, "tags": "objective-c, ios, user-interface" }
Is there any law for pH conservation in chemical reactions?
Question: I would like to know (just curiosity) if there is any law for pH conservation in chemical reactions. (like conservation of momentum in dynamics). EDIT The reaction I would like to take as example is fuel combustion. Too many sources (mostly media) stating that acid products (NOx and COx dissolved in water) are obtained. My idea is that acids cannot be created from nothing. Either the inputs of the reaction were acid or the output is not only acid but also basic outputs. Thus the question whether this reasoning is true. Answer: No, there is no law requiring pH conservation. And such a law would be chemical nonsense. Why? Read on. You need to recall what pH actually is. It is defined as the negative base-10 logarithm of the activity (similar enough to concentration for almost all practical purposes) of positively charged hydrogen ions in solution.† Or in equations: $$\mathrm{pH} = -\lg [\ce{H+}]$$ Note that most conservation laws in physics somehow relate back to the conservation of energy or something related. However, the pH of a solution cannot be easily traced back to any state function or anything else that would warrant its conservation. In fact, the only thing conserved is the atom itself. Hydrogen, if $\ce{H+}$ is somehow present in a reaction or solution, does not disappear, it needs to be put in on one side of the equation and turn up somewhere else. In reactions that liberate protons of any kind, usually bases are added to capture them. In those that require protons, acids are added. What (if anything) is conserved is the number of hydrogen atoms but if they are not active as $\ce{H+}$ in solution they do not count towards pH. Therefore, there is no sense in postulating pH conservation. †: $\ce{H+}$ as is does not actually exist in solution. One can treat it as $\ce{H3O+}$ for all intents and purposes, but actually it is better to think of it as a proton being shuffled back and forth by multiple water molecules. 
I think I recall four being stated as the lowest count in undergrad education.
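Since pH is just the negative logarithm of a concentration, there is nothing that could be conserved; even plain dilution changes it. A minimal numerical sketch, assuming a strong monoprotic acid that dissociates completely and ignoring water's autoionization:

```python
import math

def pH(h_activity):
    """pH from the hydrogen-ion activity (approximated by concentration)."""
    return -math.log10(h_activity)

# For a strong monoprotic acid (complete dissociation assumed, water
# autoionization ignored), a tenfold dilution shifts the pH by one unit;
# nothing is "conserved":
assert math.isclose(pH(0.01), 2.0)
assert math.isclose(pH(0.001), 3.0)
assert math.isclose(pH(1e-7), 7.0)   # neutral water
```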
{ "domain": "chemistry.stackexchange", "id": 4393, "tags": "ph" }
How can I merge weather data from different time zones?
Question: Does anyone know of any simple means for merging (inverse distance weighting method) hourly weather files (precip, temp, etc.) from different time zones? I am creating weather data for HUC8s, and am using the 5 closest NCDC weather stations. But some HUC8s cross over different time zones. Has anyone figured out the easiest method for aligning weather data? My thought is to put everything into GMT - so that 10am is 10am everywhere - and track the offset in a new column in my data. I believe this would work, but am having trouble completely wrapping my head around it: Focusing on a watershed that crosses from CST into EST - if it was raining at 10am EST, and also raining at 9am CST (raining at the timezone division, with weather stations on either side of the timezone division), it's raining at the same time, we are just marking the 'time' differently in each location. However, if I transformed my date/time into GMT, with a new column tracking the offset (maybe to convert it back later), it would be the same GMT time in both locations that the rain was occurring? In effect, it is just shifting all the weather into the same time zone, and then merging based on IDW. Does anyone have any simple solutions I am overlooking? Answer: Many good data sources will come with a UTC offset column. This is helpful for harmonizing time zones since you can simply reverse the UTC offset to make a unified UTC dataset. If your data doesn't have the UTC offset, then you can link to it using a table like this:

TZ   UTC offset (hours)
EST  -5
EDT  -4
CST  -6
CDT  -5
MST  -7
MDT  -6
PST  -8
PDT  -7

So for instance, if your data is in EST, subtract the offset of -5 from the reported time (i.e. add 5 hours) to make it UTC.
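The conversion described above can be sketched with naive timestamps and the offset table (a minimal sketch; real data should ideally carry its own offset column):

```python
from datetime import datetime, timedelta

# Offsets from the table above (US standard/daylight zones).
UTC_OFFSET = {"EST": -5, "EDT": -4, "CST": -6, "CDT": -5,
              "MST": -7, "MDT": -6, "PST": -8, "PDT": -7}

def to_utc(local_time, tz):
    """Reverse the zone's UTC offset on a naive local timestamp."""
    return local_time - timedelta(hours=UTC_OFFSET[tz])

# The rain event from the question: 10am EST and 9am CST are the same instant.
east = to_utc(datetime(2018, 6, 20, 10, 0), "EST")
west = to_utc(datetime(2018, 6, 20, 9, 0), "CST")
assert east == west == datetime(2018, 6, 20, 15, 0)
```

After this shift, stations on either side of the timezone division line up on the same UTC axis and can be merged by IDW directly.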
{ "domain": "earthscience.stackexchange", "id": 711, "tags": "meteorology, data-analysis" }
What does the $R$ in the formula $P=V^2/R$ for power represent?
Question: We know the following formula: $$P=V^2/R.$$ I want to ask what the $R$ in the above formula represents. Is it the normal resistance of the resistor (load) or is it the increased resistance of the resistor (load) due to heat generation and consequently, increase in temperature of the resistor (load). I am asking this because today we solved a question in class which read: The resistance of a 240V and 100W electric lamp heats up from room temperature to operating temperature. As it heats up its resistance is increased by a factor of 16. What is the resistance of the lamp at room temperature? The answer to the above problem was $36\:\Omega$ which obviously assumed that the $R$ in the above formula represents the resistance of the bulb when its temperature increases when it's operating. I think the answer should simply be $576 \:\Omega$. I think the $R$ in the formula simply shows what the resistance of the bulb will be when there is no increase in temperature, since it is derived using Ohm's law (which works under constant physical conditions). Please note that this is not a homework question and the teacher is probably not going to discuss it in class again (unless the real answer is not $36\:\Omega$ and I am correct). Answer: The relation $P=V^2/R$ gives the power $P$ dissipated by a given element that's subjected to an external voltage $V$ and which presents an electrical resistance $R$ at the specified loading conditions. Thus, if the resistance depends on the temperature, and the temperature depends on the dissipated power, then increasing the voltage will cause a change in the resistance that must be used. As a general nomenclature framework, an ohmic conductor is a conductor which obeys Ohm's law to the letter: the current $I$ induced by a voltage $V$ is strictly proportional to the voltage (and therefore the resistance $R=V/I$ is independent of the applied voltage).
As a general rule, whether a conductor is ohmic or not will depend on the material, the load (i.e. some materials will be ohmic at some applied voltages but not others), the conditions (so there might be e.g. temperature dependence) and the precision sought (so a material might be roughly ohmic but not if you care about high-precision measurements, and indeed if the precision you require is high enough then many materials will start to deviate from the ideal). There are plenty of non-ohmic conductors around. Many useful examples come from semiconductor devices such as diodes, which might only conduct electricity in a single direction, or which might have threshold voltages below which they do not conduct at all. But the simplest non-ohmic conductors are lightbulb filaments, which heat up as the load increases, thereby increasing the resistance. As regards your set-piece statement, though, The resistance of a 240V and 100W electric lamp heats up from room temperature to operating temperature. As it heats up its resistance is increased by a factor of 16. What is the resistance of the lamp at room temperature? it is very clear that (i) the expectation is that you are aware that resistance changes with the load, and (ii) the stated power rating of 100W under the applied voltage of 240V should be understood as representing the steady-state conditions in which the filament has warmed up (thus increasing its resistance by the stated factor). As such, those ratings are connected via $P=V^2/R$ at the increased resistance, not the nominal room-temperature value, which is correspondingly smaller.
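The arithmetic for the set-piece problem then goes as follows:

```python
# The nameplate values (240 V, 100 W) describe steady-state operation, so
# P = V**2 / R uses the HOT resistance; the cold value is 16x smaller.
V, P = 240.0, 100.0
R_hot = V**2 / P        # 576.0 ohms at operating temperature
R_cold = R_hot / 16.0   # 36.0 ohms at room temperature
assert R_hot == 576.0
assert R_cold == 36.0
```

The asker's 576 Ω is indeed a resistance of the lamp, just the hot one; dividing by the stated factor of 16 gives the room-temperature answer of 36 Ω.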
{ "domain": "physics.stackexchange", "id": 91342, "tags": "electric-current, electrical-resistance, power, electronics" }
Deep zeros in the spectrum of the input data?
Question: I wanted to know the meaning of "deep zeros" in the spectrum of the input data. I came across this terminology while going through the introduction of chapter 2 (Bussgang techniques for blind deconvolution and equalization) of the book named "Blind Deconvolution" by Simon Haykin. Thanks, JK Answer: "Deep zeros" in the input spectrum are caused by a channel which strongly attenuates one or several frequencies, often due to multipath fading. This causes problems for a (linear) equalizer because it tries to compensate for these "deep zeros" by strong amplification of the corresponding frequency bands, which leads to noise enhancement. Non-linear equalizers are better at dealing with such problematic channel conditions.
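The noise-enhancement problem can be illustrated with a toy numpy sketch (the channel response and null depth below are invented for illustration): a zero-forcing linear equalizer inverts the channel, so at a deep spectral null the inverse gain, and hence the noise power, blows up.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256

# Toy channel frequency response with a "deep zero":
# a few bins are strongly attenuated, as in severe multipath fading
H = np.ones(N)
H[N // 2 - 2 : N // 2 + 3] = 1e-3

# A zero-forcing linear equalizer inverts the channel: G(f) = 1/H(f)
G = 1.0 / H

# White noise added at the channel output passes through the equalizer too
noise_f = np.fft.fft(rng.normal(size=N))
equalized_noise_f = noise_f * G

# Noise power gain at the nulled frequency: |G|^2 = (1/1e-3)^2 = 1e6
gain = np.abs(equalized_noise_f[N // 2])**2 / np.abs(noise_f[N // 2])**2
print(gain)
```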
{ "domain": "dsp.stackexchange", "id": 1842, "tags": "frequency-spectrum, frequency-response, adaptive-filters" }
Why would a suite of related rocks have one anomalous sample?
Question: I have a suite of rocks that have nearly identical major element chemistry, but a single sample has quite different trace element concentrations. All the rocks are trachydacites. I know the analysis was done correctly. What process could cause this? Answer: Under the assumption that we don't know anything about sampling, preparation and analysis, we have to proceed with caution. Any of these steps can introduce effects that produce stronger signals than the natural geochemistry of a rock sample. Just to give you an idea of the errors: Sampling a heterogeneous rock can produce very different whole-rock chemistry. If you take your sample from a mica-rich layer you will get different results than from a feldspar-rich layer. During sample preparation you can inadvertently, through grinding, sieving and pouring, produce aliquots which don't have the same mineral composition. Micas pour differently from one vessel into another than quartz grains. A certain mineral you want to analyze might have a smaller grain size and sit at the bottom of your vessel, or end up in a different sieve fraction you're not analyzing. Each step in the lab can result in chemical fractionation! Analytics is a whole science on its own. Without knowing how the mineral sample was dissolved (did everything dissolve?) and values for detection limit and standard deviation, you will have uncertainties in your data that might make these values meaningless. Now assuming that the data is somewhat trustworthy: The left plot shows you the chondrite-normalized rare earth element pattern. The light rare earth elements are 100 to 1000 times enriched relative to the chondrite (the specific chondrite reference used should actually be cited). The heavy rare earth elements are also enriched, but less so. The sample CCS01 has lower enrichment and a positive Eu anomaly. This could indicate that the sample has a different mineral composition than the other samples. Maybe a rare-earth-element-enriched mineral has a lower concentration in the CCS01 sample.
In the right plot the CCS01 sample sticks out again. It has lower P concentrations. Think again of how your sample might have fewer phosphates. The relative depletion in Ce and La might correlate with P. The sample might just have less monazite (but other minerals could cause this too). Lower Nb and Ta might indicate that the sample has less columbite and tantalite (but these elements occur in other minerals too). Conclusion: Instead of guessing, you should find out everything you can about the sample's history (sampling, preparation, analysis). If possible, suggest more analytics. A day at the microprobe could show you which minerals are contained in your sample (rock slide or strewn grain concentrate). If not available, a polarizing microscope could also help you figure out the differences in the sample (excluding very small grains).
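As a side note, the chondrite normalization and Eu anomaly mentioned above are straightforward to compute. A minimal sketch follows: the sample concentrations are made up, and while the chondrite abundances below are typical CI-chondrite values, the specific compilation used for normalization must be cited in a real study.

```python
import numpy as np

# Hypothetical whole-rock REE concentrations in ppm (invented for illustration)
sample    = {"La": 30.0, "Sm": 5.0, "Eu": 1.8, "Gd": 4.5}
# CI-chondrite reference abundances in ppm (order-of-magnitude typical values;
# cite the actual compilation you normalize against)
chondrite = {"La": 0.237, "Sm": 0.148, "Eu": 0.0563, "Gd": 0.199}

# Chondrite normalization: divide each element by its chondrite abundance
norm = {el: sample[el] / chondrite[el] for el in sample}

# Eu anomaly: compare measured Eu to Eu* interpolated (geometric mean)
# between its neighbours Sm and Gd
eu_star = np.sqrt(norm["Sm"] * norm["Gd"])
eu_anomaly = norm["Eu"] / eu_star   # > 1 means a positive Eu anomaly, as in CCS01
print(round(eu_anomaly, 2))
```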
{ "domain": "earthscience.stackexchange", "id": 132, "tags": "geology, geochemistry, petrology" }
Why is meters/second the same as meters per second?
Question: In quantities such as speed where the derived (SI) unit is m/s, why do we pronounce it and interpret it as meters per second? My guess is that 1 m is associated with 1 second. Similarly, 5 m/s is pronounced and interpreted as 5 meters per second, because 5 meters are associated with 1 second. I am not sure whether this view is naive. Answer: It is an instantaneous value. The speed, in m/s (or any other unit), is: $$\frac{ds}{dt} $$ where $s$ is distance/displacement. If speed is constant, it really doesn't matter: an object moving at 5 m/s will cover 5 m in every second. It will be very different if the speed varies. Imagine a falling stone, subject to a 10 m/s/s acceleration. At 1 s, the instantaneous speed is 10 m/s, but that speed occurs for exactly zero time.
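The distinction can be made concrete numerically (a small sketch using the falling-stone example with constant 10 m/s/s acceleration): the instantaneous speed is the limit of distance covered per unit time, recovered here with a central difference.

```python
# Instantaneous speed as ds/dt for a falling stone with a = 10 m/s/s
a = 10.0   # constant acceleration, (m/s) per second

def s(t):
    """Distance fallen after t seconds, starting from rest."""
    return 0.5 * a * t**2

t, h = 1.0, 1e-6
# Central-difference approximation of ds/dt at t = 1 s
v_numeric = (s(t + h) - s(t - h)) / (2 * h)
v_exact = a * t   # 10 m/s at t = 1 s, held for exactly zero time
print(v_numeric, v_exact)
```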
{ "domain": "physics.stackexchange", "id": 46913, "tags": "velocity, dimensional-analysis" }
TPOT machine learning
Question: I trained a regression TPOT algorithm on Google Colab, where the output of the TPOT process is some boilerplate Python code as shown below. import numpy as np import pandas as pd from sklearn.ensemble import ExtraTreesRegressor from sklearn.model_selection import train_test_split from sklearn.pipeline import make_pipeline, make_union from tpot.builtins import StackingEstimator from tpot.export_utils import set_param_recursive # NOTE: Make sure that the outcome column is labeled 'target' in the data file tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64) features = tpot_data.drop('target', axis=1) training_features, testing_features, training_target, testing_target = \ train_test_split(features, tpot_data['target'], random_state=1) # Average CV score on the training set was: -4.881434802676966 exported_pipeline = make_pipeline( StackingEstimator(estimator=ExtraTreesRegressor(bootstrap=False, max_features=0.9000000000000001, min_samples_leaf=1, min_samples_split=20, n_estimators=100)), ExtraTreesRegressor(bootstrap=True, max_features=0.9000000000000001, min_samples_leaf=6, min_samples_split=13, n_estimators=100) ) # Fix random state for all the steps in exported pipeline set_param_recursive(exported_pipeline.steps, 'random_state', 1) exported_pipeline.fit(training_features, training_target) results = exported_pipeline.predict(testing_features) Would anyone know what the sklearn pipeline process is and how it works? When I fill in the boilerplate code and run it with my data set in IPython I can see this output from the pipeline process; what is this all doing?
Pipeline(steps=[('stackingestimator-1', StackingEstimator(estimator=ExtraTreesRegressor(max_features=0.6500000000000001, min_samples_leaf=19, min_samples_split=14, random_state=1))), ('maxabsscaler', MaxAbsScaler()), ('stackingestimator-2', StackingEstimator(estimator=ExtraTreesRegressor(max_features=0.4, min_samples_leaf=3, min_samples_split=7, random_state=1))), ('adaboostregressor', AdaBoostRegressor(learning_rate=0.001, loss='exponential', n_estimators=100, random_state=1))]) The results look good; I'm just curious about how the pipeline processes work. Any tips or links to tutorials greatly appreciated. I thought this machinelearningmastery tutorial was also somewhat useful for anyone interested in learning more about TPOT. Answer: The documentation for StackingEstimator is surprisingly poor, but it's relatively simple: fit the estimator on the data, and tack its predictions onto the dataset as a new feature. Source, github issue. So, your pipeline fits an ExtraTreesRegressor on the original inputs, and appends its predictions to the dataset going forward. The data (original + 1st predictions) get scaled, then another ExtraTreesRegressor is fit (on scaled original and 1st preds, and with different hyperparameters), its predictions also getting tacked onto the dataset. Finally, an AdaBoostRegressor is fit on scaled-original + scaled-1st-preds + 2nd-preds.
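For intuition, here is a minimal sketch of the behaviour described above. The class and the trivial stand-in estimator below are made up for illustration; TPOT's real StackingEstimator is a proper sklearn transformer, but the core idea is the same: fit an inner estimator, then append its predictions to X as an extra feature column.

```python
import numpy as np

class MeanRegressor:
    """Trivial stand-in estimator: always predicts the training-set mean."""
    def fit(self, X, y):
        self.mean_ = float(np.mean(y))
        return self

    def predict(self, X):
        return np.full(len(X), self.mean_)

class StackingEstimatorSketch:
    """Sketch of what TPOT's StackingEstimator does: fit an inner estimator,
    then tack its predictions onto the dataset as a new feature."""
    def __init__(self, estimator):
        self.estimator = estimator

    def fit(self, X, y):
        self.estimator.fit(X, y)
        return self

    def transform(self, X):
        preds = self.estimator.predict(X).reshape(-1, 1)
        return np.hstack([X, preds])   # original features + one prediction column

X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
step = StackingEstimatorSketch(MeanRegressor()).fit(X, y)
print(step.transform(X))   # original column plus a column of predictions (all 4.0)
```

In the exported pipeline, the downstream estimator therefore sees the original columns plus each stacked estimator's predictions as extra inputs.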
{ "domain": "datascience.stackexchange", "id": 9606, "tags": "machine-learning, python, scikit-learn, regression, pipelines" }
How to determine gradient of load/extension graph?
Question: Here's the exercise: A tensile test is carried out on a mild steel specimen of gauge length 40 mm and cross-sectional area 100 mm². The results obtained for the specimen up to its yield point are given below:

Load (kN)      | 0 | 8     | 19    | 29    | 36
Extension (mm) | 0 | 0.015 | 0.038 | 0.060 | 0.072

I need to determine the gradient of the load/extension graph. When the data is plotted (in the book) on a graph it gives a straight line, therefore the gradient should be any load (of the given values) divided by the corresponding extension: gradient = load/(extension/1000). Except that, e.g., the first one (8 kN) gives 533×10⁶ while the third one (19 kN) gives 500×10⁶. Why, and which one do I use for the calculations? Answer: Theoretically, the elastic range of a material is linear. But as Yogi Berra said, "In theory there is no difference between theory and practice. In practice there is." This data is obtained from a physical experiment, so errors are natural. Perhaps the piece or the strain gauge was incorrectly placed. Or, even more likely (given the data), you were dealing with a specimen which simply wasn't perfect (none ever is). Slight microimperfections mean that even in the elastic regime you might get a result which isn't perfectly linear. So you need to find a best estimate of what the "theoretical" Young's modulus is. Here are the results for each of the points; take your pick:

disp (mm) | F (kN) | F/d (kN/mm)
----------+--------+------------
0         |   0    |      -
0.015     |   8    |   533.3
0.038     |  19    |   500
0.060     |  29    |   483.3
0.072     |  36    |   500

It is visually clear that the result is hovering around 500. I'm unsure of how serious the statistical analysis needs to be. You can simply say 500 looks reasonable. You can take an average, which gives you 504. You can do a least-squares analysis, which gives you 490.6, but with a line crossing the y-intercept at 0.25, as opposed to zero, as would be expected. You can force the least-squares fit to pass through zero, which gives you 495.
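The statistical estimates above can be reproduced in a few lines of numpy (a quick sketch; the origin point is included in the fits, which is consistent with the quoted intercept):

```python
import numpy as np

# Tensile test data, including the trivial origin point
ext = np.array([0.0, 0.015, 0.038, 0.060, 0.072])   # extension, mm
load = np.array([0.0, 8.0, 19.0, 29.0, 36.0])       # load, kN

# Point-by-point gradients (skipping the origin, where F/d is undefined)
ratios = load[1:] / ext[1:]
avg = ratios.mean()                          # average of the F/d column, ~504 kN/mm

# Ordinary least squares with a free intercept
slope, intercept = np.polyfit(ext, load, 1)  # ~490.6 kN/mm, intercept ~0.25 kN

# Least squares forced through the origin: slope = sum(x*y) / sum(x^2)
slope0 = (ext * load).sum() / (ext**2).sum() # ~495 kN/mm

print(round(avg, 1), round(slope, 1), round(intercept, 2), round(slope0, 1))
```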
{ "domain": "engineering.stackexchange", "id": 1402, "tags": "mechanical-engineering" }