"Iron Core" in Inductive Charging
Question: Inductive charging used for wireless charging often faces the hindrance of being too short-ranged for many use cases. There appear to be some workarounds, such as using a capacitor to make both coils resonate at the same resonant frequency. Please excuse the naivety of the question, but looking back at the humble solenoid, a simple iron core can drastically boost its magnetic field strength. So why not just stick an iron core into the middle of the inductive charging coils? Answer: At the high frequencies implied by your question, the eddy-current heating of the iron core and the hysteresis loss due to the rapid oscillation of the magnetic field in the core will result in the Q-value of the circuit being very small. In other words, the energy losses of an iron-cored inductor would be too high. As long as the frequencies are not too high, ferrite cores are used: they have a low electrical conductivity, which means that eddy-current losses are small, low hysteresis loss, and a high magnetic permeability. At higher frequencies no core is required, and the losses of an air-cored coil are very small.
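To see the scale of the problem, here is a rough numerical sketch (all material constants below are assumed order-of-magnitude textbook values, not from the answer) of the eddy-current skin depth in iron versus ferrite at a typical wireless-charging frequency:

```python
import math

# Skin depth: delta = sqrt(2*rho / (omega * mu0 * mur))
mu0 = 4 * math.pi * 1e-7   # H/m, vacuum permeability
f = 100e3                  # Hz, typical Qi-class operating frequency (assumed)
omega = 2 * math.pi * f

def skin_depth(rho, mur):
    """Depth at which an AC field decays to 1/e inside a conductor."""
    return math.sqrt(2 * rho / (omega * mu0 * mur))

iron = skin_depth(rho=1e-7, mur=1000)    # iron: low resistivity, high permeability
ferrite = skin_depth(rho=1.0, mur=2000)  # MnZn ferrite: ~10^7 times more resistive

print(f"iron: {iron*1e6:.0f} um, ferrite: {ferrite*100:.1f} cm")
```

At 100 kHz the field only penetrates tens of microns into solid iron, so nearly the whole core cross-section is wasted while the induced eddy currents dissipate heavily; the far more resistive ferrite stays fully penetrated.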
{ "domain": "physics.stackexchange", "id": 52281, "tags": "electromagnetism, magnetic-fields" }
Calculating a base price with surcharge conditions
Question: The following code has a lot of conditionals. I am trying to write it in a functional programming way. val basePrice = { var b = 0.0 if (runtime > 120) b += 1.5 if ((day == Sat) || (day == Sun)) b += 1.5 if (!isParquet) b += 2 if (is3D) b += 3 b } I think the following code would be a good approach, but maybe I am complicating this too much. val basePrice = { List((runtime > 120, 1.5), (day == Sat || day == Sun, 1.5), (!isParquet, 2.0), (is3D, 3.0)).foldLeft(0.0)((acum, cond) => if (cond._1) acum + cond._2 else acum) } How would you write the first snippet of code using functional programming? Answer: Each condition is a function. It might be that you could write it more concisely, but I think the code is clearer if you do this: def priceFunction(cond: => Boolean)(mod: Double => Double) = (_: Double) match { case x if cond => mod(x) case y => y } val modRuntime = priceFunction(runtime > 120)(_ + 1.5) val modWeekend = priceFunction(day == Sat || day == Sun)(_ + 1.5) val modParquet = priceFunction(!isParquet)(_ + 2.0) val mod3d = priceFunction(is3D)(_ + 3.0) val modifiers = List( modRuntime, modWeekend, modParquet, mod3d ) val modifierFunction = modifiers reduceLeft (_ andThen _) val basePrice = modifierFunction(0.0) The names of the identifiers here suck, and I could have written val modifiers = modRuntime andThen modWeekend andThen modParquet andThen mod3d without trouble. I chose to put them in a List because it shows how well this can scale. One could also make PartialFunctions and chain them with orElse, for the cases where you want only the first matching condition. You see this kind of thing used in web frameworks, such as BlueEyes, Lift or Unfiltered, for example.
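For comparison, the asker's fold of (condition, surcharge) pairs can be sketched in Python as a plain filtered sum (names and sample values here are illustrative, not from the post):

```python
def base_price(runtime, day, is_parquet, is_3d):
    """Sum the surcharges whose condition holds; same idea as the
    Scala foldLeft over (condition, amount) pairs."""
    surcharges = [
        (runtime > 120, 1.5),
        (day in ("Sat", "Sun"), 1.5),
        (not is_parquet, 2.0),
        (is_3d, 3.0),
    ]
    return sum(amount for cond, amount in surcharges if cond)

print(base_price(130, "Sat", False, True))  # -> 8.0 (all four surcharges apply)
```

The list-of-pairs form scales the same way the answer's List of modifier functions does: adding a rule is adding one entry.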
{ "domain": "codereview.stackexchange", "id": 4178, "tags": "functional-programming, comparative-review, scala" }
In mitochondria, what is the mechanism by which electrons are transferred between different cytochromes?
Question: And how is the energy gained from the lowering of the "energy level" of the electron used to generate the chemiosmotic gradient? Answer: Good question. The common picture of the electron transport chain as a sequence of molecular machines that pass along a "high energy electron" is biochemically rather misleading I think. What is really going on is just a series of energy-releasing (exothermic) redox reactions. The respiratory complexes are enzymes that catalyze these reactions and couple the energy released in each reaction to proton pumping against a gradient. As an example, let's consider the reaction carried out by Complex I, where NADH is oxidized: NADH + H$^+$ + CoQ $\iff$ NAD$^+$ + CoQH$_2$ This reaction transfers a hydride ion (H$^-$) carrying two electrons from NADH, which is accepted by CoQ. Since CoQ is a much better electron acceptor than NAD$^+$, this reaction is highly favorable with a $\Delta G$ of about -85 kJ. So quite a bit of energy is released, and part of this energy is captured by Complex I to pump four protons across the inner membrane. Note that electrons are not "traveling" on their own through Complex I somehow. The electrons are bound to molecules that participate in a redox reaction. And it doesn't make much sense to say that a specific electron's "energy level" is decreased. Rather, the compounds on the right hand side have a lower free energy $G$ in total than those on the left hand side, and therefore the reaction overall releases energy (the free energy difference $\Delta G$ is negative). Also, if you look at the chemical structures of CoQ and CoQH$_2$ --- and you absolutely should look at structures in biochemistry, the names alone are not very helpful --- you will find that the electrons involved are actually delocalized in CoQH$_2$, so there's no way to figure out which electron goes where. The same reasoning goes for the other respiratory complexes. 
Complex III oxidizes CoQH$_2$ back to CoQ by coupling it to reduction of cytochrome C, CoQH$_2$ + 2 Ferricytochrome-C $\iff$ CoQ + 2 Ferrocytochrome-C + 2 H$^+$ This reaction is also favorable, and again the energy released is used to pump protons. Finally, Complex IV oxidizes cytochrome C back and transfers electrons to O$_2$, also an energy-releasing redox reaction. The reaction mechanisms are much more complicated of course, involving various chemical groups bound to the enzymes, but this is the net result. So the "respiratory chain" is not some conveyor belt for electrons; it is a sequence of coupled redox reactions. The net result of the Complex I + III + IV reactions is that NADH has lost electrons and oxygen has gained electrons, but it's not necessarily the same electrons, and it doesn't make sense to speak of "energy level" of electrons in this context. To understand the energetics, we must look at all the compounds and reactions involved.
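As a rough check of the quoted energetics, the standard free-energy change can be computed from redox potentials via $\Delta G = -nF\,\Delta E$. The potentials below are commonly quoted standard values, assumed here rather than taken from the answer; the answer's figure of about -85 kJ corresponds to somewhat different assumed conditions, but the sign and magnitude agree:

```python
# Free energy released when NADH reduces CoQ, from standard redox potentials.
F = 96485.0     # Faraday constant, C/mol
n = 2           # electrons transferred per NADH
E_NAD = -0.320  # E°' of the NAD+/NADH couple, volts (textbook value, assumed)
E_CoQ = +0.045  # E°' of the CoQ/CoQH2 couple, volts (textbook value, assumed)

delta_E = E_CoQ - E_NAD              # 0.365 V
delta_G = -n * F * delta_E / 1000.0  # kJ/mol

print(round(delta_G, 1))  # -> -70.4
```

A strongly negative $\Delta G$, as the answer says: the reaction is highly favorable, and part of this released energy drives proton pumping.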
{ "domain": "biology.stackexchange", "id": 6685, "tags": "biochemistry, molecular-biology, cell-biology" }
Difference between classical magnet and quantum spin
Question: A classical spin (a tiny magnet) starting at rest will feel a torque and try to align with a magnetic field (if it starts almost antiparallel it will periodically align and misalign). A spin ½ particle, instead, will precess and the spin vertical component will never change or attempt to align (I am assuming B in the vertical z direction). How does this behavior change as the spin goes from ½ to very large when we should be able to recover the classical behavior? What makes it align instead of precess? Answer: That's a good question. To be concrete, imagine that we have a magnetic dipole $\mathbf m = m \hat x$ in a uniform magnetic field $\mathbf B = B \hat y$. This system will experience a torque $\boldsymbol \tau = \mathbf m \times \mathbf B = mB \hat z$, which implies that its angular momentum will begin to change in the $+\hat z$ direction. If the system consists of a single spin-1/2 particle at rest, then its angular momentum is just its spin. In accordance with the previous (semiclassical) argument, its spin will begin to precess around the $y$-axis. On the other hand, imagine that the system consists of a chain of spin-1/2 particles which lies along the $x$-axis, and whose spins all point in the $+\hat x$ direction. In principle, the spins could all precess in the same manner as before. However, the interactions between the members of the chain are such that the system prefers to be magnetized in the direction of the chain. More specifically, the cumulative interaction energy between members of the chain is minimized if they are aligned along the direction of the chain (the system's so-called easy axis). There is another way for the angular momentum of the chain to change in the $+\hat z$ direction, however, and that is for the chain to start rotating about the $\hat z$-axis. 
There is no energy barrier to overcome here - even the smallest torque will cause the chain to undergo this rotation (in the absence of any opposing torques such as friction) and so this is the behavior we would expect to see, barring fringe cases like enormous magnetic fields. Tl;dr: Interactions between dipoles (or rather, magnetic domains in the case of a macroscopic permanent magnet) resist the precession of individual dipoles (or domains) in favor of the bulk rotation of the magnet itself. Free, individual point dipoles can't rotate in this way and there are no competing interactions, so they precess instead.
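The precession case can be sketched numerically. For a static uniform field, the torque equation $d\mathbf m/dt = \gamma\, \mathbf m \times \mathbf B$ has the closed-form solution of a rotation about the field axis; the sketch below (with an arbitrary illustrative $\gamma = 1$, not a physical value) checks that the component of $\mathbf m$ along $\mathbf B$ never changes, i.e. a free dipole precesses rather than aligns:

```python
import numpy as np

def precess(m, B, dt, gamma=1.0):
    """One exact step of dm/dt = gamma * m x B in a static uniform field:
    rotate m about the field axis by the Larmor angle (Rodrigues' formula).
    The sign of theta is chosen so the step satisfies the torque equation."""
    Bmag = np.linalg.norm(B)
    axis = B / Bmag
    theta = -gamma * Bmag * dt
    return (m * np.cos(theta)
            + np.cross(axis, m) * np.sin(theta)
            + axis * np.dot(axis, m) * (1.0 - np.cos(theta)))

# Dipole along x, field along y, as in the answer's setup.
m = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
for _ in range(1000):
    m = precess(m, B, 0.01)

# The component along B stays zero and |m| is conserved: pure precession.
print(np.dot(m, B), np.linalg.norm(m))
```

The moment traces a circle in the x-z plane forever; nothing in the free-dipole dynamics drives it toward alignment with $\mathbf B$.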
{ "domain": "physics.stackexchange", "id": 94326, "tags": "quantum-mechanics, quantum-spin" }
What area should I use when calculating induced voltage?
Question: I am trying to build a low power output generator with 12 pole pairs and 9 stator coils. The design will be similar to the one described in this link Basic Principles Of The Homebrew Axial Flux Alternator , but at a different scale/size. I am trying to find some way of estimating the number of turns I will need in the coils to produce an output voltage of 6.5V. The generator should produce this voltage at 100rpm. I am getting very confused about how to use Faraday's law to do my calculations. $\varepsilon = N\frac{d(BA)}{dt}$ Faraday's Law is explained in this example, where the area used is that of the magnet, while the Faraday Law of Electromagnetic Induction page states at the bottom that A is the area of the coil. Does this difference have to do with when the magnetic field is stationary while the coil is moving versus when the magnetic field is moving relative to a stationary coil? Also, should I be using $$\varepsilon = N\frac{d(BA\cos(\theta))}{dt}$$ instead? Thanks very much in advance, and apologies if I am missing something very obvious here. Answer: No expert, but the problem is interesting. Assumptions: Generate $6.5V_{RMS}$ at 100 rpm with 9 stator coils. $V_{MAX} = 9.2V$ 100rpm = 1.67rps if radius to center of magnets/coils = 0.1m $C = 2 \pi r = 2 \pi \times 0.1m = 0.628m$ at 1.67 rps, v = 1.05m/s N48 Neodymium Bar Magnets 1 in x 1/2 in x 1/4 in L = 0.0254m, W = 0.0127m, D = 0.00635m. Separation z = 0.00635m. The biggest problem is the separation between magnets and windings; the closer they are the better. From: How do you calculate the magnetic flux density? From: Magnetic Properties of Sintered NdFeB Magnets N48 has a Remanence field of $B_r$ = 1.48T, which gives a flux density B = 0.164T when you work through the above equation. $$V_{Ind} = N B l v$$ $$N = \frac {V_{Ind}} {B l v} = \frac {9.2V} {0.164T \times 0.0254m \times 1.05m/s} = 2108 turns$$ So that assumes you have one coil. Dividing by 3 (coils in series) gives 703 turns/coil (which is a lot). 
Each coil has to fit within the magnet length of 0.0254m; for greatest effect, the whole area of the coil has to fit within the area of the magnet. The math is a bit distorted, but it should give you a ballpark to do your initial calculations. I'd guess, by the second or third prototype, you will have what you want. Not all three coils in series will generate maximum voltage at the same time, so some experimentation will have to come into play (i.e. more turns). Realize that the coil will not experience a constant magnetic field. The further away windings are from the magnet, the less the voltage induced. As already stated, the biggest problem is the air gap between windings and magnets. Air is a poor conductor of flux. I'd use a steel plate to attach your magnets.
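The back-of-envelope turn count above is easy to reproduce (all inputs are the answer's assumed values, not measured data; small rounding differences from the quoted 2108 are expected):

```python
import math

V_ind = 9.2      # peak volts needed for 6.5 V RMS
B = 0.164        # T, estimated flux density at the winding
l = 0.0254       # m, magnet length
r = 0.1          # m, radius to magnet centres (assumed)
rps = 100 / 60   # 100 rpm

v = 2 * math.pi * r * rps  # tangential speed at the magnets, ~1.05 m/s
N = V_ind / (B * l * v)    # total turns, from V = N*B*l*v

print(round(N), "total turns,", round(N / 3), "turns per coil in series")
```

Treat the result as a starting point only: as the answer notes, the real flux at the winding depends strongly on the air gap, so the turn count will shift between prototypes.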
{ "domain": "engineering.stackexchange", "id": 1383, "tags": "generator" }
Octave: Poor performance of convolution by separable 1D kernels
Question: As a newbie, I wrote a custom function in octave to perform a 2D image convolution using separable kernels. The results of this custom function were compared with conv2() and they were consistent. But that was where the joy ended. conv2() works at the speed of light while my custom function is slower than a steam locomotive. How can I speed up the loops below? As you can see, the number of MACs per pixel is 6, compared to 9 that would have been the case in 2D convolution. The function takes as inputs a row vector (1x3) and a column vector (3x1). function OutputImage = Convolve3X3(InputImage, RowCount, ColCount, Kernel_x, Kernel_y) % Create a padded image PaddedImage = uint8(zeros(RowCount + 2, ColCount + 2)); % Create a staging area StagingImage = uint8(zeros(RowCount + 2, ColCount + 2)); % Create a Row Vector RowVector = uint8(zeros(1,3)); % Create a Col Vector ColVector = uint8(zeros(3,1)); % Copy the input image into the padded image PaddedImage(2:RowCount + 1, 2:ColCount + 1) = InputImage; % 1D convolution of necessary rows with Kernel_x for i = 2: RowCount + 1 for j = 1:ColCount RowVector = PaddedImage(i,j:j+2) .* Kernel_x; StagingImage(i,j) = RowVector(1) + RowVector(2) + RowVector(3); end end % 1D convolution of necessary columns with Kernel_y for i = 1: RowCount for j = 1:ColCount ColVector = StagingImage(i:i+2,j) .* Kernel_y; OutputImage(i,j) = ColVector(1) + ColVector(2) + ColVector(3); end end endfunction How is conv2() so radically fast? Answer: Because loops in MATLAB/Octave are slow (mostly because they are interpreted languages, not compiled) and such operations are typically implemented in C/C++ for performance reasons. You could speed these up even further by reverting to better C/C++ implementations and writing your own mex wrappers.
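Short of dropping to C, the same two-pass idea can be vectorised. The sketch below is NumPy rather than Octave, but the Octave fix is analogous: replace the inner loops with whole-array shifted sums, and drop the uint8 intermediates, which silently clip negative and out-of-range values for kernels such as edge detectors:

```python
import numpy as np

def separable_conv3x3(img, kx, ky):
    """Two-pass 3x3 correlation with the separable kernel outer(ky, kx),
    zero-padded by one pixel so the output matches the input size
    (mirroring the Octave function above, which also computes a
    correlation rather than a flipped convolution). Works in float to
    avoid the uint8 clipping of intermediate values."""
    padded = np.pad(img.astype(float), 1)
    # Horizontal pass: one weighted sum of three shifted column slices.
    rows = kx[0] * padded[:, :-2] + kx[1] * padded[:, 1:-1] + kx[2] * padded[:, 2:]
    # Vertical pass on the staged result.
    return ky[0] * rows[:-2, :] + ky[1] * rows[1:-1, :] + ky[2] * rows[2:, :]
```

Each pass touches every pixel once with whole-array operations, so all the per-element work happens in compiled code instead of the interpreter loop.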
{ "domain": "dsp.stackexchange", "id": 4560, "tags": "image-processing, convolution, separability" }
Is winter hotter at the Equator?
Question: I live in the Northern Hemisphere. One of the things I was taught in high school is that winter is colder than summer because in that season we're at an unfavorable angle and nights last longer than days, despite Earth being closer to the Sun during winter months. Does this really mean that on the Equator, where day and night are of the same length all year long, winter is the hottest season*? Does this also mean that Summers in the Southern hemisphere are hotter than our Summers and their Winters are colder than our Winters (or that they would be if not for the Gulf Stream, air currents and other temperature-changing effects)? *I can think of a different reason that would make equinoxes hotter (more sunrays per surface area when the sun hits from above all day long) but I have no idea which effect would be greater. Answer: Does this really mean that on the Equator, where day and night are of the same length all year long, winter is the hottest season? By "winter", I assume you mean December, January, and February. The answer is "No!" Insolation varies on an annual basis outside of the tropics, with one maximum and one minimum every year. This is not the case in tropical regions, where available insolation has two local maxima and two local minima every year. This effect is greatest at the equator. The graph below depicts available insolation at increments of 30 degrees latitude, from the equator to the north pole. Note that at the equator, available insolation achieves local maxima at the two equinoxes and local minima at the two solstices. The reason is that the Sun is directly overhead at local noon on the equinoxes but is 23.5 degrees from vertical at local noon on the solstices. Sunlight has to travel through more air at the solstices than at the equinoxes. The slight variations in insolation at the equator are easily overcome by climate. Equatorial regions tend to have wet seasons and dry seasons. 
When these occur depends much more on wind patterns than it does on aphelion / perihelion. Does this also mean that Summers in the Southern hemisphere are hotter than our Summers and their Winters are colder than our Winters? The answer is once again "No". The slight variation in insolation due to the Earth's eccentric orbit is once again easily overcome by other phenomena. The driving characteristic for this part of the question is that the northern hemisphere has much more land mass than does the southern hemisphere. This means that, except for polar regions, southern hemisphere seasons tend to be more moderate than northern hemisphere seasons.
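The noon geometry at the equator is a one-liner to check (solar constant and obliquity are standard values; this is top-of-atmosphere only, ignoring the extra atmospheric path length the answer also mentions):

```python
import math

S0 = 1361.0                       # W/m^2, solar constant (standard value)
obliquity = math.radians(23.44)   # Earth's axial tilt

noon_equinox = S0                          # Sun directly overhead at noon
noon_solstice = S0 * math.cos(obliquity)   # Sun 23.44 degrees off zenith at noon

print(f"{noon_equinox:.0f} vs {noon_solstice:.0f} W/m^2")
```

The solstice figure is only about 8% below the equinox one, which is why, as the answer says, this small swing is easily overwhelmed by wet/dry-season climate effects.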
{ "domain": "earthscience.stackexchange", "id": 1054, "tags": "temperature, geophysics" }
Instantaneous Coulomb interaction in QED
Question: It seems I am stuck with an (at first sight) trivial problem. It's from the "Quarks and Leptons" (Halzen, Martin) book, page $141$, where one considers the following integral: $$\tag{1} T_{fi} = -i\int \!d^4x \, J_0^A(x)\,J_0^B(x)\frac{1}{|\vec{q}|^2}. $$ In equation $(1)$, $J_0^A$ and $J_0^B$ are the zeroth components of two electron currents: $$J_\mu(x) = j_\mu\,\mathrm{exp}[i(p_f-p_i)\cdot x].$$ Now, according to the authors, one can rewrite $(1)$ by making use of the Fourier transform $$\tag{2} \frac{1}{|\vec{q}|^2} = \int\! d^3x\, e^{i\vec{q}\cdot\vec{x}}\frac{1}{4\pi|\vec{x}|}, $$ to the following $$ \tag{3} T_{fi}^{Coul} = -i\int \!dt_A\int d^3x_A\int d^3x_B \, \frac{J_0^A(t,\vec{x}_A)\,J_0^B(t,\vec{x}_B)}{4\pi|\vec{x}_B-\vec{x}_A|}. $$ Equation $(3)$ is then interpreted as the instantaneous$^1$ Coulomb interaction between the charges of the particles, $J_0^A$ and $J_0^B$. The derivation of this is given in the answer below. $^1$I.e. interaction without retardation, at time $t_A$. Answer: I suspected that one needed to go back to the definition of the currents and indeed, in doing so one can derive the result. Here's a short version. The electron current is defined as [see equation (6.6) in 1] $$\tag{1}J_\mu(x) = -e\bar{u}_f\gamma_\mu u_i \times\mathrm{exp}[i(p_f-p_i)\cdot x],$$ which we write as $$\tag{2}J_\mu(x) = j_\mu\,\mathrm{exp}[i(p_f-p_i)\cdot x]. $$ We will also need to use $$\tag{3}q = p_i^A-p_f^A = p_f^B-p_i^B.$$ Then the integral $(1)$ in the original post can be written $$ \tag{4} T_{fi} = -i\int \!dt_A\,d^3x_A\, d^3x \,\, j^Aj^B e^{i(p_f^{A0}-p_i^{A0})t_A}e^{i(p_f^{B0}-p_i^{B0})t_A}\frac{1}{4\pi|\vec{x}|}e^{i\vec{q}\cdot\vec{x}}. 
$$ Now shifting $\vec{x}=\vec{x}_B-\vec{x}_A$ with $d^3x=d^3x_B$ and using $(3)$ and $$(\vec{p}_f^A-\vec{p}_i^A)\cdot(\vec{x}_B-\vec{x}_A) = -(\vec{p}_f^B-\vec{p}_i^B)\cdot\vec{x}_B-(\vec{p}_f^A-\vec{p}_i^A)\cdot\vec{x}_A, $$ equation $(4)$ becomes $$\tag{5} T_{fi} = -i\int \!dt_A\int d^3x_A\int d^3x_B \, \frac{J_0^A(t_A,\vec{x}_A)\,J_0^B(t_A,\vec{x}_B)}{4\pi|\vec{x}_B-\vec{x}_A|}, $$ which is exactly the OP's equation $(3)$. References: See the appendix of J. H. Field, Classical electromagnetism as a consequence of Coulomb's law, special relativity and Hamilton's principle and its relationship to quantum electrodynamics
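As a side check (a standard regularization, not from the book), the Fourier representation $(2)$ follows by screening the Coulomb potential with a Yukawa factor $e^{-\mu|\vec{x}|}$ and taking $\mu\to 0$ at the end: $$\int d^3x\, e^{i\vec{q}\cdot\vec{x}}\,\frac{e^{-\mu|\vec{x}|}}{4\pi|\vec{x}|} = \frac{1}{|\vec{q}|}\int_0^\infty dr\, e^{-\mu r}\sin(|\vec{q}|r) = \frac{1}{|\vec{q}|^2+\mu^2} \;\longrightarrow\; \frac{1}{|\vec{q}|^2} \quad (\mu\to 0).$$ The angular integration produces the $\sin(|\vec{q}|r)/(|\vec{q}|r)$ kernel, and the remaining radial integral is elementary.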
{ "domain": "physics.stackexchange", "id": 16047, "tags": "electromagnetism, quantum-electrodynamics, classical-electrodynamics, coulombs-law" }
Varying results when calculating scatter matrices for LDA
Question: I'm following a Linear Discriminant Analysis tutorial from here for dimensionality reduction. After working through the tutorial (did the PCA part, too), I shortened the code using sklearn modules where applicable and verified it on the Iris data set (same code, same result), a synthetic data set (with make_classification) and the sklearn-digits dataset. However, then I tried the exact same code on a completely different (unfortunately non-public) data set that contains spectra recordings of two classes. The LDA crashes at the eigenvector verification part, where $\lambda \mathbf{v}$ is supposed to be almost equal to $S_W^{-1} S_B \mathbf{v}$ (with $\lambda$ being the eigenvalue and $\mathbf{v}$ the corresponding eigenvector; $S_W$ and $S_B$ are the in/between-class scatter matrices). The first vector to fail the check appears at a random position, meaning each run a different vector causes this error. I suspect it's related to rounding during calculations, since I get complex eigenvectors. For the PCA I just discarded the complex part (I think I read it somewhere in this forum), but this approach does not seem to work with LDA. Has anybody encountered similar problems or knows what's wrong? Following is my code for the analysis, which is more or less the same as in the tutorial. I'm using the manual approach, since I'm interested in how many linear discriminants are needed to describe my data. (I'm not sure how to do this with sklearn's LDA.) 
import sys import numpy as np def LDAnalysis_manual(X, y): n_features = X.shape[1] n_classes = len(np.unique(y)) print("Mean vectors...") mean_vectors = [] for cl in range(n_classes): mean_vectors.append(np.mean(X[y == cl], axis=0)) # print("Mean vector class {}: {}".format(cl, mean_vectors[cl])) print("In-class scatter matrix...") S_W = np.zeros((n_features, n_features)) for cl, mv in zip(range(n_classes), mean_vectors): class_sc_mat = np.zeros((n_features, n_features)) # each class' scatter matrix for row in X[y == cl]: row, mv = row.reshape(n_features, 1), mv.reshape(n_features, 1) # column vectors class_sc_mat += (row - mv).dot((row - mv).T) S_W += class_sc_mat # sum class scatter matrices overall_mean = np.mean(X, axis=0) print("Between-class scatter matrix...") S_B = np.zeros((n_features, n_features)) for i, mean_vec in enumerate(mean_vectors): n = X[y == i].shape[0] mean_vec = mean_vec.reshape(n_features, 1) # make column vector overall_mean = overall_mean.reshape(n_features, 1) S_B += n * (mean_vec - overall_mean).dot((mean_vec - overall_mean).T) eig_vals, eig_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B)) print("Eigenvector test") for i in range(len(eig_vals)): print("\r{:3}".format(i), end=" ") sys.stdout.flush() eigv = eig_vecs[:, i].reshape(n_features, 1) np.testing.assert_array_almost_equal(np.linalg.inv(S_W).dot(S_B).dot(eigv).real, (eig_vals[i] * eigv).real, decimal=6, err_msg='', verbose=True) __log.debug("\nAll values ok.") eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:, i]) for i in range(len(eig_vals))] # make list of value & vector tuples eig_pairs = sorted(eig_pairs, key=lambda k: k[0], reverse=True) # Sort tuple-list from high to low __log.info("\nEigenvalues (descending):") for i in eig_pairs: __log.info(i[0]) tot = sum(eig_vals) var_exp = [(i / tot) for i in sorted(eig_vals, reverse=True)] cum_var_exp = np.cumsum(var_exp) cum_var_exp = cum_var_exp.real plot(len(var_exp), var_exp, cum_var_exp) idx_98 = next(idx for idx, val in enumerate(cum_var_exp) if val > .98) return idx_98 + 1 Answer: The LDA crashes for the exact reason you suspected. You have complex eigenvalues. If you use np.linalg.eigh, which was designed to decompose Hermitian matrices, you will always get real eigenvalues. np.linalg.eig can decompose nonsymmetric square matrices, but, as you've suspected, it can produce complex eigenvalues. In short, np.linalg.eigh is more stable, and I would suggest using it for both PCA and LDA. Dropping the complex part of the eigenvalues may have been acceptable in your specific example, but in practice it should be avoided. Depending on the size of the complex part of the number, it can significantly change the result. For example, think of the multiplication of two complex conjugates: $(3+.1i)(3-.1i)=9+.01=9.01$ compared to $9$ when dropping the complex part is relatively safe, but $(3-2i)(3+2i)=13$ compared to $9$ is a significant miscalculation. Using the above method for the eigendecomposition will prevent this situation from arising. Remember that one of the assumptions of LDA is that the features are normally distributed and independent of each other. Try running print('Class label distribution: %s' % np.bincount(y_train)[1:]). If the counts are not close to being equal, you've violated the first assumption of LDA, and the within-class scatter matrix must be scaled: in short, divide each class scatter matrix by its number of class samples $N_i$. By doing this it should be obvious that computing the normalized scatter matrix is the same as computing the covariance matrix $\Sigma_i$. $$\Sigma_i=\frac{1}{N_i}S_W=\frac{1}{N_i}(x-m_i)(x-m_i)^T $$ Make sure you're scaling your features before you do your PCA/LDA. If the above doesn't fix your eigenvector verification step, I suspect the problem is that the eigenvectors are scaled differently. Remember from your linear algebra class that a single eigenvalue, $\lambda_i$, has infinitely many eigenvectors, each being a scalar multiple of the others. 
$v_i=[1,2,3]$ and $v_i=[2,4,6]$ can both be eigenvectors of $\lambda_i$. So while you may get different values at any given step after the decomposition, the end result should be the same. Below is a template I use for LDA data compression. It assumes that you've split your data into a training and test set, the feature space has been properly scaled, and there are three classes in your label vector (you can adjust accordingly). It plots the individual and cumulative "discriminability" of each linear discriminant and then relies on the lda package in sklearn to transform the feature space using the number of discriminants you intend on using (here I chose to use the first 2). It also scales the within-class scatter matrices by default. LINEAR DISCRIMINANT ANALYSIS calculate mean vectors mean_vecs = [] for label in range(1, 4): mean_vecs.append(np.mean(X_train_std[y_train==label], axis=0)) print('MV %s: %s\n' %(label, mean_vecs[label-1])) calculate within-class scatter matrix d = X_train_std.shape[1] S_W = np.zeros((d, d)) for label, mv in zip(range(1, 4), mean_vecs): class_scatter = np.cov(X_train_std[y_train==label].T) S_W += class_scatter print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1])) calculate between-class scatter matrix mean_overall = np.mean(X_train_std, axis=0) S_B = np.zeros((d, d)) for i, mean_vec in enumerate(mean_vecs): n = X_train_std[y_train==i+1, :].shape[0] mean_vec = mean_vec.reshape(d, 1) mean_overall = mean_overall.reshape(d, 1) S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T) print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1])) eigen decomposition eigen_vals, eigen_vecs = np.linalg.eigh(np.linalg.inv(S_W).dot(S_B)) eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:,i]) for i in range(len(eigen_vals))] eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True) print('Eigendecomposition: \nEigenvalues in decreasing order:\n') for eigen_val in eigen_pairs: print(eigen_val[0]) plot discriminability and select number of linear discriminants tot = sum(eigen_vals.real) discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)] cum_discr = np.cumsum(discr) plt.bar(range(1, 14), discr, alpha=0.5, align='center', label='individual "discriminability"') plt.step(range(1, 14), cum_discr, where='mid', label='cumulative "discriminability"') plt.ylabel('"discriminability" ratio') plt.xlabel('Linear Discriminants') plt.ylim([-0.1, 1.1]) plt.legend(loc='best') plt.tight_layout() plt.show() from sklearn.lda import LDA lda = LDA(n_components=2) X_train_lda = lda.fit_transform(X_train_std, y_train) X_test_lda = lda.transform(X_test_std) print('Features projected onto %d-dimensional LD subspace' % lda.n_components)
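A minimal demonstration of the eig-versus-eigh point above, with one caveat worth knowing: eigh assumes a symmetric/Hermitian input and silently reads only one triangle of the matrix, while $S_W^{-1}S_B$ is in general not symmetric, so it changes the problem being solved in that case.

```python
import numpy as np

# A nonsymmetric matrix can have complex eigenvalues under np.linalg.eig.
rot = np.array([[0.0, -1.0],
                [1.0,  0.0]])      # 90-degree rotation matrix, nonsymmetric
vals, _ = np.linalg.eig(rot)       # eigenvalues are +i and -i

# np.linalg.eigh, valid for symmetric/Hermitian input, always returns
# real eigenvalues, in ascending order.
sym = np.array([[2.0, 1.0],
                [1.0, 2.0]])
vals_h, _ = np.linalg.eigh(sym)

# Caveat: passing a nonsymmetric matrix (such as inv(S_W) @ S_B in
# general) to eigh makes it use just one triangle of the input.
print(np.iscomplexobj(vals), vals_h)
```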
{ "domain": "datascience.stackexchange", "id": 2721, "tags": "python, scikit-learn, discriminant-analysis" }
How to send tf data from multiple robots in rviz
Question: ROS Kinetic: I am using 2 robots and I am not sure how to display them in rviz with amcl. What I have is each robot has its own tf. I would do <remap from="tf" to="tf1"/>. Is that wrong? I think it is wrong because rviz can only take in one tf. I did some reading and it said instead to do: <node name="robot_state_publisher" pkg="robot_state_publisher" type="state_publisher" > <param name="tf_prefix" value="robot_0"/> </node> and <node name="robot_state_publisher" pkg="robot_state_publisher" type="state_publisher" > <param name="tf_prefix" value="robot_1"/> </node> Is one of these methods right? Or is there another solution? Any help is appreciated! Thank you! Originally posted by Usui on ROS Answers with karma: 21 on 2019-08-03 Post score: 0 Answer: Having multiple robots on the same roscore is a real tricky challenge. I'd recommend looking into how to solve that problem before you look into solving the TF problem. Additionally, AMCL will only localize one robot, so you won't be able to use the same instance of that program with more than one entity you're trying to localize. See this post for a good amount of information: https://answers.ros.org/question/41433/multiple-robots-simulation-and-navigation/?answer=41434#post-id-41434 Hope this helps! Originally posted by luc with karma: 350 on 2019-08-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Usui on 2019-08-06: That post uses 2 robots with amcl also. I can get 2 robots using multi machine. Do you think tf_prefix in the robot_state_publisher would be able to get 2 robots in one rviz? I just need to transform it to the same global frame. Comment by luc on 2019-08-08: They must have two separate AMCL processes running to localize the two robots. You may be able to use a container approach to run two roscores on one machine. Because of the way tf operates, I don't think the prefix alone will help reconcile having two robots on one roscore. 
Look at the API of the tf2 buffer - it does not look as if there is a way to specify the prefix here: http://docs.ros.org/indigo/api/tf2_ros/html/c++/classtf2__ros_1_1Buffer.html Comment by Usui on 2019-08-08: @luc Okay so I started on it and I have a problem. Here's the question: https://answers.ros.org/question/330234/robot-model-not-showing-up-when-using-tf_prefix/ Comment by luc on 2019-08-16: @Usui I'll check out that question. If this answer helped you, please mark it as correct!
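For concreteness, the namespaced setup the question is circling around would look something like the launch-file sketch below. All names and frames here are hypothetical, and per the caveats above a prefix alone may not reconcile everything on one roscore; note also that in Kinetic the robot_state_publisher node type is robot_state_publisher (state_publisher is a deprecated alias).

```xml
<!-- Hypothetical sketch: one robot_state_publisher per robot, each in
     its own namespace with its own tf_prefix. -->
<launch>
  <group ns="robot_0">
    <node name="robot_state_publisher" pkg="robot_state_publisher"
          type="robot_state_publisher">
      <param name="tf_prefix" value="robot_0"/>
    </node>
  </group>
  <group ns="robot_1">
    <node name="robot_state_publisher" pkg="robot_state_publisher"
          type="robot_state_publisher">
      <param name="tf_prefix" value="robot_1"/>
    </node>
  </group>
  <!-- rviz can then show both, provided each robot's frame tree is tied
       into one shared fixed frame, e.g. via static transforms: -->
  <node pkg="tf" type="static_transform_publisher" name="map_to_r0"
        args="0 0 0 0 0 0 map robot_0/odom 100"/>
  <node pkg="tf" type="static_transform_publisher" name="map_to_r1"
        args="1 0 0 0 0 0 map robot_1/odom 100"/>
</launch>
```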
{ "domain": "robotics.stackexchange", "id": 33577, "tags": "navigation, rviz, ros-kinetic, tf-broadcaster, joint-states" }
What's wrong with this "proof" that QFT violates causality?
Question: In An Introduction to Quantum Field Theory, by Peskin and Schroeder, when discussing the quantized real Klein-Gordon field ($\phi=\phi^\dagger$), they show the commutator $[\phi(x),\phi(y)]$ vanishes when $y-x$ is space-like. They then say on p. 28-29 Thus we conclude that no measurement in the Klein-Gordon theory can affect another measurement outside the light-cone. However, when I tried verifying this claim, I ran into problems. I tried using the operators $\phi(x)|0\rangle\langle 0|\phi(x)$ and $\phi(y)|0\rangle\langle 0|\phi(y)$, which I believe correspond to measuring whether there is a particle at space-time position $x$ and $y$ respectively. Then the commutator of these two operators is $$\phi(x)|0\rangle\langle 0|\phi(x)\phi(y)|0\rangle \langle 0|\phi(y)-\phi(y)|0\rangle \langle 0|\phi(y)\phi(x)|0\rangle \langle 0|\phi(x).$$ Now I know $\langle 0|\phi(x)\phi(y)|0\rangle$ doesn't vanish outside the light-cone (P&S equation 2.52). Furthermore, as far as I can tell, $\phi(x)|0\rangle\langle 0|\phi(y)$ is not proportional to $\phi(y)|0\rangle\langle 0|\phi(x)$, so it seems to me that this commutator is non-zero (a measurement at $x$ can affect a measurement made outside the light-cone of $x$). I'm not sure what I did wrong. I suspect it may have something to do with choosing incorrect operators for position measurement. I'd appreciate any help! There are many related questions (specifically, this one was the closest I could find). However, none of them address this point. Answer: The operator $\phi(x)|0\rangle\langle 0|\phi(x)$ doesn't correspond to measuring whether there is a particle at $x$, and in fact this operator is not local at all, because $|0\rangle\langle 0|$ is not local: it projects onto the state of lowest total energy, and "total energy" is non-local. A strict particle-position observable does not exist in relativistic QFT. This is reviewed in my answer here.
{ "domain": "physics.stackexchange", "id": 70403, "tags": "quantum-field-theory, special-relativity, commutator, causality, klein-gordon-equation" }
Which explainable artificial intelligence techniques are there?
Question: Explainable artificial intelligence (XAI) is concerned with the development of techniques that can enhance the interpretability, accountability, and transparency of artificial intelligence and, in particular, machine learning algorithms and models, especially black-box ones, such as artificial neural networks, so that these can also be adopted in areas, like healthcare, where the interpretability and understanding of the results (e.g. classifications) are required. Which XAI techniques are there? If there are many, to avoid making this question too broad, you can just provide a few examples (the most famous or effective ones), and, for people interested in more techniques and details, you can also provide one or more references/surveys/books that go into the details of XAI. The idea of this question is that people could easily find one technique that they could study to understand what XAI really is or how it can be approached. Answer: Explainable AI and model interpretability are hyper-active and hyper-hot areas of current research (think of holy grail, or something), which have been brought forward lately not least due to the (often tremendous) success of deep learning models in various tasks, plus the necessity of algorithmic fairness & accountability. Here are some state of the art algorithms and approaches, together with implementations and frameworks. Model-agnostic approaches LIME: Local Interpretable Model-agnostic Explanations (paper, code, blog post, R port) SHAP: A Unified Approach to Interpreting Model Predictions (paper, Python package, R package). 
GPU implementation for tree models by NVIDIA using RAPIDS - GPUTreeShap (paper, code, blog post) Anchors: High-Precision Model-Agnostic Explanations (paper, authors' Python code, Java implementation) Diverse Counterfactual Explanations (DiCE) by Microsoft (paper, code, blog post) Black Box Auditing and Certifying and Removing Disparate Impact (authors' Python code) FairML: Auditing Black-Box Predictive Models, by Cloudera Fast Forward Labs (blog post, paper, code) SHAP seems to enjoy high popularity among practitioners; the method has firm theoretical foundations in cooperative game theory (Shapley values), and it has to a great degree integrated the LIME approach under a common framework. Although model-agnostic, specific & efficient implementations are available for neural networks (DeepExplainer) and tree ensembles (TreeExplainer, paper). Neural network approaches (mostly, but not exclusively, for computer vision models) The Layer-wise Relevance Propagation (LRP) toolbox for neural networks (2015 paper @ PLoS ONE, 2016 paper @ JMLR, project page, code, TF Slim wrapper) Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization (paper, authors' Torch code, Tensorflow code, PyTorch code, yet another Pytorch implementation, Keras example notebook, Coursera Guided Project) Axiom-based Grad-CAM (XGrad-CAM): Towards Accurate Visualization and Explanation of CNNs, a refinement of the existing Grad-CAM method (paper, code) SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability (paper, code, Google blog post) TCAV: Testing with Concept Activation Vectors (ICML 2018 paper, Tensorflow code) Integrated Gradients (paper, code, Tensorflow tutorial, independent implementations) Network Dissection: Quantifying Interpretability of Deep Visual Representations, by MIT CSAIL (project page, Caffe code, PyTorch port) GAN Dissection: Visualizing and Understanding Generative Adversarial Networks, by MIT CSAIL 
(project page, with links to paper & code) Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions (paper, code) Transparency-by-Design (TbD) networks (paper, code, demo) Distilling a Neural Network Into a Soft Decision Tree, a 2017 paper by Geoff Hinton, with various independent PyTorch implementations Understanding Deep Networks via Extremal Perturbations and Smooth Masks (paper), implemented in TorchRay (see below) Understanding the Role of Individual Units in a Deep Neural Network (preprint, 2020 paper @ PNAS, code, project page) GNNExplainer: Generating Explanations for Graph Neural Networks (paper, code) Benchmarking Deep Learning Interpretability in Time Series Predictions (paper @ NeurIPS 2020, code utilizing Captum) Concept Whitening for Interpretable Image Recognition (paper, preprint, code) Libraries & frameworks As interpretability moves toward the mainstream, there are already frameworks and toolboxes that incorporate more than one of the algorithms and techniques mentioned and linked above; here is a partial list: The ELI5 Python library (code, documentation) DALEX - moDel Agnostic Language for Exploration and eXplanation (homepage, code, JMLR paper), part of the DrWhy.AI project The What-If tool by Google, a feature of the open-source TensorBoard web application, which lets users analyze an ML model without writing code (project page, code, blog post) The Language Interpretability Tool (LIT) by Google, a visual, interactive model-understanding tool for NLP models (project page, code, blog post) Lucid, a collection of infrastructure and tools for research in neural network interpretability by Google (code; papers: Feature Visualization, The Building Blocks of Interpretability) TorchRay by Facebook, a PyTorch package implementing several visualization methods for deep CNNs iNNvestigate Neural Networks (code, JMLR paper) tf-explain - interpretability methods as Tensorflow 2.0 callbacks (code, docs, blog post) InterpretML by 
Microsoft (homepage, code still in alpha, paper) Captum by Facebook AI - model interpretability for Pytorch (homepage, code, intro blog post) Skater, by Oracle (code, docs) Alibi, by SeldonIO (code, docs) AI Explainability 360, commenced by IBM and moved to the Linux Foundation (homepage, code, docs, IBM Bluemix, blog post) Ecco: explaining transformer-based NLP models using interactive visualizations (homepage, code, article). Recipes for Machine Learning Interpretability in H2O Driverless AI (repo) Reviews & general papers A Survey of Methods for Explaining Black Box Models (2018, ACM Computing Surveys) Definitions, methods, and applications in interpretable machine learning (2019, PNAS) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead (2019, Nature Machine Intelligence, preprint) Machine Learning Interpretability: A Survey on Methods and Metrics (2019, Electronics) Principles and Practice of Explainable Machine Learning (2020, preprint) Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges (keynote at 2020 ECML XKDD workshop by Christoph Molnar, video & slides) Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI (2020, Information Fusion) Counterfactual Explanations for Machine Learning: A Review (2020, preprint, critique by Judea Pearl) Interpretability 2020, an applied research report by Cloudera Fast Forward, updated regularly Interpreting Predictions of NLP Models (EMNLP 2020 tutorial) Explainable NLP Datasets (site, preprint, highlights) Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges eBooks (available online) Interpretable Machine Learning, by Christoph Molnar, with R code available Explanatory Model Analysis, by DALEX creators Przemyslaw Biecek and Tomasz Burzykowski, with both R & Python code snippets An Introduction to Machine Learning Interpretability (2nd ed. 
2019), by H2O Online courses & tutorials Machine Learning Explainability, Kaggle tutorial Explainable AI: Scene Classification and GradCam Visualization, Coursera guided project Explainable Machine Learning with LIME and H2O in R, Coursera guided project Interpretability and Explainability in Machine Learning, Harvard COMPSCI 282BR Other resources explained.ai blog A Twitter thread, linking to several interpretation tools available for R A whole bunch of resources in the Awesome Machine Learning Interpretability repo The online comic book (!) The Hitchhiker's Guide to Responsible Machine Learning, by the team behind the textbook Explanatory Model Analysis and the DALEX package mentioned above (blog post and backstage)
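Since SHAP's theoretical core, the Shapley value from cooperative game theory, comes up repeatedly above, here is a minimal brute-force sketch of the exact definition. The toy value function and feature names are invented for illustration; real SHAP implementations approximate this sum, which is exponential in the number of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values: phi_i = sum over coalitions S not containing i
    of |S|!(n-|S|-1)!/n! * (value(S + {i}) - value(S))."""
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(S) | {i}) - value(set(S)))
        phi[i] = total
    return phi

# Toy additive "model": each feature contributes a fixed amount,
# so each Shapley value should equal that feature's own contribution.
contrib = {"age": 2.0, "income": 3.0, "tenure": -1.0}
v = lambda S: sum(contrib[f] for f in S)
print({f: round(p, 6) for f, p in shapley_values(list(contrib), v).items()})
# {'age': 2.0, 'income': 3.0, 'tenure': -1.0}
```

For an additive value function the marginal contribution of a feature is the same in every coalition, so the weighted sum collapses to that contribution; the values also satisfy the efficiency axiom (they sum to `v` of the full set).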
{ "domain": "ai.stackexchange", "id": 2431, "tags": "reference-request, ethics, explainable-ai" }
What could cause a wind from an SMBH
Question: In this paper, the author tests whether the Milky Way's Fermi bubbles could have been caused by an AGN-type explosion, or by a 'wind'. He comes to the conclusion that the wind may be "the same as active galactic nucleus outflows", and that it picks up interstellar gas on the way out of the GC. This paper also talks about SMBH winds. He rules out stellar wind from stars forming near the Galactic Centre. What causes AGN outflows and these 'winds'? Is it purely mass and energy outflows from the accretion disk of the SMBH? Answer: Like outflows from stellar feedback, AGN outflows can be driven by different mechanisms: Momentum-driven winds Momentum-driven winds are (mostly, for AGN, see below) caused by pure radiation pressure, and are hence also called radiation-driven. If the central SMBH accretes mass sufficiently fast, the luminosity $L$ of the released radiation will exceed the so-called Eddington limit $L_\mathrm{Edd}$, pushing gas away from the SMBH. This mechanism can also sometimes drive strong winds in the sub-Eddington regime (Proga & Kallman 2004). For stellar outflows, momentum transfer by cosmic rays may dominate over photons (see e.g. Hopkins et al. (2021) and Huang & Davis (2022) for some recent papers on this). AGN also emit lots of cosmic rays, and are probably the main source of extragalactic cosmic rays (Berezhko 2008), but I don't think that (or I should probably say "don't know if") they can drive outflows from AGN. Thermally-driven winds Outflows can also be caused simply by X-rays heating the gas in the inner region, causing it to expand with the local sound velocity (Begelman et al. 1983; Woods et al. 1996). The buoyancy of the resulting bubbles of hot gas causes them to rise, possibly exceeding the escape velocity. Which of these two processes dominates depends on various factors such as geometry, accretion rate, gas composition, and, in particular, the spectral shape of the AGN, i.e. how "hard" the radiation is. 
Magnetic fields In addition to the two main mechanisms above, there is a third that I don't really know anything about, but will mention anyway: Magnetic fields can be "frozen" in the disk due to the ionized gas. This can drive outflows by the centrifugal force, but I don't think much is known about how strong an effect this is. The mechanism is discussed in Blandford & Payne (1982) and Konigl & Kartje (1994).
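The Eddington limit invoked above is where outward radiation pressure on free electrons balances gravity on the protons, $L_\mathrm{Edd} = 4\pi G M m_p c/\sigma_T$. A quick numerical sketch (CGS constants, values rounded; the Sgr A* mass is the commonly quoted ~4 million solar masses):

```python
import math

G = 6.674e-8         # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10         # speed of light, cm/s
m_p = 1.673e-24      # proton mass, g
sigma_T = 6.652e-25  # Thomson cross-section, cm^2
M_sun = 1.989e33     # solar mass, g

def L_edd(M_solar):
    """Eddington luminosity in erg/s for a mass of M_solar solar masses."""
    return 4 * math.pi * G * (M_solar * M_sun) * m_p * c / sigma_T

print(f"{L_edd(1):.2e} erg/s per solar mass")   # ~1.26e+38
print(f"{L_edd(4e6):.2e} erg/s for Sgr A*")     # ~5.03e+44
```

Accreting above the rate that produces this luminosity makes radiation pressure exceed gravity on ionized gas, which is the momentum-driven wind condition described in the answer.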
{ "domain": "astronomy.stackexchange", "id": 6547, "tags": "milky-way, supermassive-black-hole, accretion-discs" }
SkipLast of an IEnumerable - Linq Extension
Question: As my answer to this question, I came up with this solution: static public IEnumerable<T> SkipLast<T>(this IEnumerable<T> data, int count) { if (data == null || count < 0) yield break; Queue<T> queue = new Queue<T>(data.Take(count)); foreach (T item in data.Skip(count)) { queue.Enqueue(item); yield return queue.Dequeue(); } } It returns all items in the set except the last count items, but without knowing anything about the size of the data set. I think it's funny and that it's working, but a comment claims that it doesn't. Am I overlooking something? A version with a circular queue could be: static public IEnumerable<T> SkipLast<T>(this IEnumerable<T> data, int count) { if (data == null || count < 0) yield break; if (count == 0) { foreach (T item in data) yield return item; } else { T[] queue = data.Take(count).ToArray(); int index = 0; foreach (T item in data.Skip(count)) { index %= count; yield return queue[index]; queue[index] = item; index++; } } } Performance-wise they seem to be even. Compared to other solutions like the most obvious: data.Reverse().Skip(count).Reverse() It seems to be at least as fast and for very large sets about twice as fast. Test case: int count = 20; var data = Enumerable.Range(1, count); for (int i = 0; i < count + 5; i++) { Console.WriteLine($"Skip: {i} => {(string.Join(", ", data.SkipLast1(i)))}"); } Any comments are useful. Answer: if (data == null || count < 0) yield break; This behaviour is somewhat consistent with Take, but not with Skip: Skip treats negative values as zero. As, indeed, does the SkipLast which doesn't appear in .NET Framework. It should throw on a null argument with an ArgumentNullException. My only other real issue with the methods is that neither will work with IEnumerables that can't be enumerated multiple times, and will incur overheads in any that can but have to generate the data lazily. 
I would go for the slightly more painful: if (source == null) throw new ArgumentNullException(nameof(source), "Source Enumeration may not be null"); if (count <= 0) { foreach (T item in source) yield return item; } else { bool yielding = false; T[] buffer = new T[count]; int index = 0; foreach (T item in source) { if (index == count) { index = 0; yielding = true; } if (yielding) yield return buffer[index]; buffer[index] = item; index++; } } If I cared about performance, I might consider the following, which reduces the amount of decision making inside the loop (which might make it faster: I'd better benchmark it). // just the bit inside the else T[] buffer = new T[count]; using (var e = source.GetEnumerator()) { // initial filling of buffer for (int i = 0; i < buffer.Length; i++) { if (!e.MoveNext()) yield break; buffer[i] = e.Current; } int index = 0; while (e.MoveNext()) { yield return buffer[index]; buffer[index] = e.Current; index = (index + 1) % count; } } Performance wise they seems to be even. That's encouraging, since Queue<T> is also implemented as a circular buffer. You'd hope that the array based version would be a bit lighter, but may consume more memory if Count > data.Count(). Having benchmarked your two proposals, my two proposals, and the .NET Core SkipLast (didn't include the Reverse based method), it seems the fastest methods are the one built into .NET Core (hurray) and my last one, but the difference between test instances (with different data lengths and skip counts) is great. Unfortunately, I messed up and didn't run a third of the .NET Core tests, so the saga of incompetence on my part continues. The code and data can be found in a gist. The only real conclusion I would want to draw from this data (aside from 'use the BCL method if you can') is that your first method is consistently the slowest when the input array isn't empty in these tests on my machine with its current workload. 
The difference is jolly significant, with your first method requiring twice as much time as others in some cases. Why the methods have different performance is less than clear.
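The single-pass bounded-buffer idea under discussion is language-agnostic; for comparison, here is a sketch of the same shape in Python (not C#, just to show the algorithm): buffer up to `count` items, and only start yielding once the buffer overflows, so the last `count` items are never emitted.

```python
from collections import deque

def skip_last(iterable, count):
    """Yield all elements except the last `count`, in a single pass,
    buffering at most `count` items (mirrors the circular-buffer version)."""
    if count <= 0:
        yield from iterable
        return
    buf = deque()
    for item in iterable:
        buf.append(item)
        if len(buf) > count:
            yield buf.popleft()

print(list(skip_last(range(10), 3)))  # [0, 1, 2, 3, 4, 5, 6]
```

Like the reviewed C# rewrite, this works on one-shot iterables (it never enumerates the source twice) and yields nothing when the source is shorter than `count`.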
{ "domain": "codereview.stackexchange", "id": 35178, "tags": "c#, comparative-review, linq, iterator, extension-methods" }
XOR game quantum strategy expected payoff?
Question: I am reading Thomas Vidick, Quantum multiplayer games, testing and rigidity. On top of p.4, $$\text{E}[a\cdot b] = \sum_{i,j\in \{0,1\}}(-1)^{i+j}\text{Pr}\big((a,b)=(i,j)\big)$$ I do not understand what the notation $a\cdot b$ means and where does the right hand side expression comes from, particularly the reason for the sign $(-1)^{i+j}$. Could someone please shed light on this question? Answer: In other words, this says that Alice and Bob each perform a measurement of their own observable ($X$ for Alice and $Y$ for Bob) which has $0$ and $1$ as outcomes. If they get a $0$ they select $-1$ and if they get a $1$ they select $+1$. So the product of their selections $a$ and $b$ will be equal to $1$ if either they both got $0$ or they both got $1$ in their measurements while it will be equal to $-1$ if they got different outcomes. So by the definition of the expected value, you get $$\mathbb{E}[a\cdot b] = (+1) \cdot Pr(\text{Alice's outcome}=\text{Bob's outcome}) + (-1)\cdot Pr(\text{Alice's outcome}\neq\text{Bob's outcome})$$ and if you expand it a little more you'll arrive at what the author states. What's confusing here is that the author chooses to denote the selection and the measurement outcomes with the same variable names (in the LHS $a$ and $b$ are $\pm 1$ while in the RHS they are $0$ or $1$).
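The identity can be checked numerically for any joint outcome distribution. A small sketch (the probability tables are illustrative: one perfectly correlated, one using rounded $\cos^2(\pi/8)$ / $\sin^2(\pi/8)$ values of the kind that arise in the optimal CHSH strategy):

```python
def expected_payoff(p):
    """E[a*b] with the selection -1 for outcome 0 and +1 for outcome 1,
    i.e. sum over (i, j) of (-1)^(i+j) * Pr((a, b) = (i, j))."""
    return sum((-1) ** (i + j) * p[(i, j)] for i in (0, 1) for j in (0, 1))

# Perfectly correlated outcomes -> E[a*b] = +1.
p_corr = {(0, 0): 0.5, (1, 1): 0.5, (0, 1): 0.0, (1, 0): 0.0}
# Rounded cos^2(pi/8)/2 and sin^2(pi/8)/2 entries -> E[a*b] close to 1/sqrt(2).
p_q = {(0, 0): 0.4268, (1, 1): 0.4268, (0, 1): 0.0732, (1, 0): 0.0732}

print(expected_payoff(p_corr))           # 1.0
print(round(expected_payoff(p_q), 4))    # 0.7072, i.e. ~ 1/sqrt(2)
```

This makes the answer's point concrete: equal-outcome probabilities enter with weight $+1$ and unequal ones with weight $-1$, which is exactly the $(-1)^{i+j}$ factor in the sum.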
{ "domain": "quantumcomputing.stackexchange", "id": 4171, "tags": "bell-experiment, nonlocal-games" }
Execute Shell commands in Python code
Question: I wrote a little C++ program that lets me transpile a syntax that allows running Shell in Python to legal Python and then execute it. Here is an example input: filename = f'./data/lines.txt' n_lines = int(`wc -l {filename}`.split()[0]) print(n_lines, 'lines in file') This is transpiled to: import subprocess filename = f'./data/lines.txt' _ = subprocess.run(f'wc -l {filename}'.split(), capture_output=True).stdout.decode('utf-8').strip() n_lines = int(_.split()[0]) print(n_lines, 'lines in file') and then executed. My main code is: main.py #include <regex> #include <string> #include <vector> #include <sstream> #include <cctype> #include <memory> #include <fstream> #include <cstdlib> #include <stdlib.h> #include <iostream> #include <filesystem> #include "formatter.h" /** Process a single line. * * @param line - The line to process * @return The processed Python code */ std::string process_line(std::string& line) { std::ostringstream generated; // Parse template args in the string. if (line.find("`") != std::string::npos) { // Find all indices. std::vector<size_t> cmd_idx; size_t cur_tick_idx = 0; size_t next_tick_idx; // Find all backticks while ((next_tick_idx = line.find("`", cur_tick_idx)) != std::string::npos) { // First, check that it is not escaped. if (next_tick_idx <= 0 || line[next_tick_idx - 1] != '\\') cmd_idx.push_back(next_tick_idx); cur_tick_idx = next_tick_idx + 1; } // Ensure we have an even number of indices if (cmd_idx.size() % 2 == 1) throw "Invalid number of template quotes"; // Begin substitution using formatters. 
for (size_t i{}, j{1}; i < cmd_idx.size(); i += 2, j += 2) { std::string substr = line.substr(cmd_idx[i] + 1, cmd_idx[j] - cmd_idx[i] - 1); generated << "_ = subprocess.run(f'" << substr << "'.split(), capture_output=True).stdout.decode('utf-8').strip()\n"; // Check for formatters if (cmd_idx[i] > 0 && (std::isalnum(line[cmd_idx[i] - 1]) || line[cmd_idx[i] - 1] == '_')) { size_t k; for (k = cmd_idx[i] - 2; k >= 0; --k) { if (!std::isalnum(line[k]) && line[k] != '_') break; } std::string format = line.substr(k + 1, cmd_idx[i] - k - 1); // Apply formatter // If "str", do nothing. if (format != "str") { std::unique_ptr<type_formatter> formatter = std::make_unique<type_formatter>(format); generated << formatter->format(); } } // Now, replace the part in quotes with our variable generated << line.replace(cmd_idx[i], cmd_idx[j] - cmd_idx[i] + 1, "_"); } } else { generated << line; } return generated.str(); } int main(int argc, char* argv[]) { // TODO: Change this to a filename input std::ifstream fin(argv[1]); std::ofstream fout("out.py"); fout << "import subprocess\n\n"; std::string line; while (std::getline(fin, line)) { fout << process_line(line) << std::endl; } fout.close(); // Run the code const char* path = std::getenv("PATH"); std::filesystem::path cur_path = std::filesystem::current_path(); std::string new_path = std::string(path) + ":" + cur_path.string(); if (setenv("PATH", new_path.c_str(), 1) != 0) throw "Failed to set PATH"; std::system("python out.py"); return 0; } and my formatter code is pretty simple: formatter.cpp #include "formatter.h" type_formatter::type_formatter(const std::string& fmt) { this->fmt = fmt; } /** * Returns Python code that checks whether the string * can be safely casted to the desired type. 
* * TODO: Check indent level */ std::string type_formatter::get_safe_formatter() { std::string check_cast_code = "try:\n\t" "_ = " + fmt + "(_)\nexcept ValueError:\n\t" "raise"; return check_cast_code; } std::string type_formatter::format() { if (fmt == "int" || fmt == "float") return get_safe_formatter(); if (fmt == "list") return "_ = _.split('\\n')\n"; if (fmt.starts_with("list.")) { std::string list_type = fmt.substr(5); return "_ = [" + list_type + "(x) for x in _.split('\\n')]\n"; } throw "Formatter for type does not exist."; }; formatter.h #ifndef FORMATTER_H #define FORMATTER_H #include <string> /** * The base class for formatters. This is an abstract class * and should be extended to implement specific formatters. * Formatters must implement the `format()` function, which * should return a std::string containing Python code to process * a variable called _, which will contain the output from shell * code in the transpiled program. As a template, the code should * end up assigning _ to the correct type. The Python code should * end in a newline. */ class basic_formatter { public: basic_formatter() = default; virtual std::string format() = 0; }; class type_formatter : public basic_formatter { std::string fmt; std::string get_safe_formatter(); public: type_formatter(const std::string&); type_formatter() = delete; virtual std::string format(); }; #endif I primarily work with JS and Python, so I'm trying to understand how I can write better C++ code, what norms I've broken, what I could do better, etc. I'm using C++20. Answer: Code that's ready for review shouldn't have any outstanding TODO comments. You should resolve those. Instead of including both <cstdlib> and <stdlib.h>, use just the C++ header, and namespace-qualify std::size_t where it's used. Class basic_formatter is intended for use as a base class. 
You should provide a virtual destructor so that subclasses are correctly destroyed when deleted via a base-class pointer: basic_formatter() = default; virtual ~basic_formatter() = default; In type_formatter, I don't think we want implicit conversion from strings, so mark that constructor with explicit. Use the constructor's initializer-list to populate fmt, rather than letting it default-construct and then overwriting in the body. And if we pass by value (non-const, so it can actually be moved from), we can avoid an extra copy in many cases. explicit type_formatter(std::string fmt) : fmt{std::move(fmt)} {} It's not necessary to declare a deleted no-args constructor - just omit this, as the above constructor inhibits generation of a default constructor. The format() method should be declared override instead of virtual: std::string format() override; Consider whether you really want to allow format() to modify the formatter - perhaps it should be const? When we construct a formatter, we make a smart pointer. But we don't need to do that, as we can construct and use it directly: if (format != "str") { type_formatter formatter{format}; generated << formatter.format(); } We have a logic error here: std::size_t k; for (k = cmd_idx[i] - 2; k >= 0; --k) { That's an infinite loop, because k >= 0 can never be false. That's a sign of not enabling sufficient compiler warnings, and perhaps also of insufficient unit-testing. When throwing, we should use appropriate subclasses of std::exception. C++ allows us to throw a string literal, but that is generally considered a poor practice. And we never catch anything we throw - that needs to be fixed. process_line() shouldn't need to write to a string-stream and stringify the result, given that when we call it, we immediately write its result to another stream. Just pass the output stream into the function (as a reference) and write to it directly. The search for matching backquote characters is flawed. 
A preceding backslash doesn't escape a backquote if the backslash itself is quoted - we'll need a smarter parser to give us real robustness here. Lots of ways in which main() can misbehave, none of which give any diagnostic or exit status: input filename isn't readable unable to create or overwrite out.py in current directory (consider using mktemp() or similar to get somewhere safe to write) writing fails later (e.g. disk full) python is not on the path (likely in systems where the interpreters are called python2 and/or python3). We lose the exit status of the invoked Python program. It's probably better to exec() rather than std::system() on POSIX systems, so that the caller gets the full exit status (including signal and core-dump information if appropriate). Where only standard-library functions are available, consider using the return value from std::system() to determine what to return from main().
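On the Python side, the boilerplate that the transpiler emits could also surface failures instead of silently producing empty output. A sketch of a more defensive pattern the generated code might use (the helper name is illustrative, not from the original program): `shlex.split` handles quoted arguments that a bare `.split()` would break, `text=True` replaces the manual UTF-8 decode, and a non-zero exit status is propagated rather than swallowed.

```python
import shlex
import subprocess
import sys

def run_capture(cmd: str) -> str:
    """Run a command, return its stripped stdout, and fail loudly
    (propagating the exit status) instead of silently returning ''."""
    result = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
    if result.returncode != 0:
        sys.stderr.write(result.stderr)
        raise SystemExit(result.returncode)
    return result.stdout.strip()

print(run_capture("echo hello world"))  # hello world
```

This addresses, in the generated code, the same concerns raised about `main()` above: missing programs and failing commands produce a diagnostic and a meaningful exit status.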
{ "domain": "codereview.stackexchange", "id": 44089, "tags": "c++, c++20" }
Multiway Ruby conditional
Question: I'm a little unhappy with the look of this "flexible" way of dealing with errors from an API in Ruby: class ApiError < StandardError def initialize(response) message = response.is_a?(String) ? response : !(response.respond_to? 'body') ? 'Error' : response.body.is_a?(Hash) ? response.body['message'] : response.body.to_s super(message) end end The idea is you can pass to ApiError.new either a string, or an object with an accessor called body in which case we'll take the body if it's a string or its message property if it's a hash. We'll use a generic error message if the argument is anything else. What would be the idiomatic alternative, if any? I'm happy to take suggestions for completely different approaches (multiple constructors, static factories, etc.) but am still interested in a single-initialize-method approach. I know I can do if...elsif...elsif...else...end. Is that better? Answer: I would try a sequence of case statements: class ApiError < StandardError def initialize(response) message = case when response.is_a?(String) response when !(response.respond_to? 'body') 'Error' when response.body.is_a?(Hash) response.body['message'] else response.body.to_s end super(message) end end ###test code class XX attr_accessor :body end p ApiError.new('string') p ApiError.new(:x) xx = XX.new() p ApiError.new(xx) xx.body={'message' => 'my message'} p ApiError.new(xx) I hope I understood your code correctly. Note that there is actually no check whether a given Hash has a 'message' key. You can also omit the message variable: class ApiError < StandardError def initialize(response) super case when response.is_a?(String) response when !(response.respond_to? 'body') 'Error' when response.body.is_a?(Hash) response.body['message'] else response.body.to_s end end end
{ "domain": "codereview.stackexchange", "id": 22431, "tags": "ruby, constructor" }
pointcloud to laserscan with transform?
Question: Has anyone written a nodelet that can be applied between the cloud_throttle nodelet and the cloud_to_scan nodelet in the pointcloud_to_laserscan package in the turtlebot stack? This nodelet would be used to provide a horizontal laserscan relative to the base_footprint frame when the Kinect is tilted downwards to provide a better view of the ground area just in front of the robot. (or is there a better way to accomplish the same objective?) Originally posted by Bart on ROS Answers with karma: 856 on 2011-04-02 Post score: 3 Answer: With the help of others I have developed a nodelet to display a horizonal projection of a laserscan with a pan/tilt Kinect camera. This nodelet is intended to work with the other pointcloud_to_laserscan nodelets, providing a choice of two laserscan projection types. Here is the source code with detailed instructions appended at the end ... SOURCE cloud_to_scanHoriz.cpp: /* * Copyright (c) 2010, Willow Garage, Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * Neither the name of the Willow Garage, Inc. nor the names of its * contributors may be used to endorse or promote products derived from * this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. */ #include "ros/ros.h" #include "pluginlib/class_list_macros.h" #include "nodelet/nodelet.h" #include "sensor_msgs/LaserScan.h" #include "pcl/point_cloud.h" #include "pcl_ros/point_cloud.h" #include "pcl/point_types.h" #include "pcl/ros/conversions.h" #include "pcl_ros/transforms.h" #include "tf/transform_listener.h" #include "tf/message_filter.h" #include "message_filters/subscriber.h" namespace pointcloud_to_laserscan { typedef pcl::PointCloud<pcl::PointXYZ> PointCloud; class CloudToScanHoriz : public nodelet::Nodelet { public: //Constructor CloudToScanHoriz(): min_height_(0.10), max_height_(0.75), baseFrame("/base_footprint"), laserFrame("/camera_tower") { }; private: double min_height_, max_height_; std::string baseFrame; //ground plane referenced by laser std::string laserFrame; //pan frame simulating projected laser bool result; tf::TransformListener tfListener; message_filters::Subscriber<PointCloud> cloudSubscriber; tf::MessageFilter<PointCloud> *tfFilter; ros::Publisher laserPublisher; //Nodelet initialization virtual void onInit() { ros::NodeHandle& nh = getNodeHandle(); ros::NodeHandle& private_nh = getPrivateNodeHandle(); private_nh.getParam("min_height", min_height_); private_nh.getParam("max_height", max_height_); private_nh.getParam("base_frame", baseFrame); private_nh.getParam("laser_frame", laserFrame); NODELET_INFO("CloudToScanHoriz min_height: %f, max_height: %f", min_height_, max_height_); NODELET_INFO("CloudToScanHoriz 
baseFrame: %s, laserFrame: %s", baseFrame.c_str(), laserFrame.c_str()); //Set up to process new pointCloud and tf data together cloudSubscriber.subscribe(nh, "cloud_in", 5); tfFilter = new tf::MessageFilter<PointCloud>(cloudSubscriber, tfListener, laserFrame, 1); tfFilter->registerCallback(boost::bind(&CloudToScanHoriz::callback, this, _1)); tfFilter->setTolerance(ros::Duration(0.01)); laserPublisher = nh.advertise<sensor_msgs::LaserScan>("laserScanHoriz", 10); }; //Pointcloud and tf transform received void callback(const PointCloud::ConstPtr& cloud_in) { PointCloud cloudTransformed; sensor_msgs::LaserScanPtr output(new sensor_msgs::LaserScan()); try { //Transform pointcloud to new reference frame result = pcl_ros::transformPointCloud(baseFrame, *cloud_in, cloudTransformed, tfListener); } catch (tf::TransformException& e) { NODELET_INFO("CloudToScanHoriz failed"); std::cout << e.what(); return; } //NODELET_DEBUG("Got cloud"); //Setup laserscan message output->header = cloud_in->header; output->header.frame_id = laserFrame; output->angle_min = -M_PI/2; output->angle_max = M_PI/2; output->angle_increment = M_PI/180.0/2.0; output->time_increment = 0.0; output->scan_time = 1.0/30.0; output->range_min = 0.45; output->range_max = 10.0; uint32_t ranges_size = std::ceil((output->angle_max - output->angle_min) / output->angle_increment); output->ranges.assign(ranges_size, output->range_max + 1.0); //"Thin" laser height from pointcloud for (PointCloud::const_iterator it = cloudTransformed.begin(); it != cloudTransformed.end(); ++it) { const float &x = it->x; const float &y = it->y; const float &z = it->z; if ( std::isnan(x) || std::isnan(y) || std::isnan(z) ) { continue; } if (z > max_height_ || z < min_height_) { continue; } double angle = atan2(y, x); if (angle < output->angle_min || angle > output->angle_max) { continue; } int index = (angle - output->angle_min) / output->angle_increment; //Calculate hypoteneuse distance to point double range_sq = y*y+x*x; if 
(output->ranges[index] * output->ranges[index] > range_sq) output->ranges[index] = sqrt(range_sq); } //for it laserPublisher.publish(output); } //callback }; PLUGINLIB_DECLARE_CLASS(pointcloud_to_laserscan, CloudToScanHoriz, pointcloud_to_laserscan::CloudToScanHoriz, nodelet::Nodelet); } INSTRUCTIONS: Load turtlebot stack to provide original pointcloud_to_laserscan source ... hg clone https://kforge.ros.org/turtlebot/turtlebot /opt/ros/dturtle/turtlebot Copy attached source code for cloud_to_scanHoriz.cpp to turtlebot/pointcloud_to_laserscan/src/cloud_to_scanHoriz.cpp Modify turtlebot/pointcloud_to_laserscan/nodelets.xml to include: <class name="pointcloud_to_laserscan/CloudToScanHoriz" type="pointcloud_to_laserscan::CloudToScanHoriz" base_class_type="nodelet::Nodelet"> <description> A nodelet to transform a point cloud to a base frame and then thin it to a laser scan. </description> </class> Modify last line of turtlebot/pointcloud_to_laserscan/CMakeList.txt: rosbuild_add_library(cloud_to_scan src/cloud_to_scan.cpp src/cloud_throttle.cpp src/cloud_to_scanHoriz.cpp) rosmake pointcloud_to_laserscan Modify turtlebot/pointcloud_to_laserscan/launch/kinect_laser.launch: <launch> <!-- kinect and frame ids --> <include file="$(find openni_camera)/launch/openni_node.launch"/> <!-- openni manager --> <node pkg="nodelet" type="nodelet" name="openni_manager" output="screen" respawn="true" args="manager"/> <!-- throttling --> <node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load pointcloud_to_laserscan/CloudThrottle openni_manager"> <remap from="cloud_in" to="/camera/depth/points"/> <remap from="cloud_out" to="cloud_throttled"/> <param name="max_rate" value="2"/> </node> <!-- normal laser --> <node pkg="nodelet" type="nodelet" name="kinect_laser" args="load pointcloud_to_laserscan/CloudToScan openni_manager" output="screen"> <remap from="cloud" to="cloud_throttled"/> <param name="output_frame_id" value="/openni_depth_frame"/> </node> <!-- tilting laser --> 
<node pkg="nodelet" type="nodelet" name="tilt_laser" args="load pointcloud_to_laserscan/CloudToScanHoriz openni_manager"> <remap from="cloud_in" to="cloud_throttled"/> <param name="base_frame" value="/camera_tower"/> <param name="laser_frame" value="/camera_tower"/> <param name="min_height" value="-0.2"/> <param name="max_height" value="0.50"/> </node> </launch> Notes: throttling nodelet cloud_in remapped to /camera/depth/points base_frame link is a fixed URDF link at the front of the robot, exact name is not important. The min and max height are set cover most of the robot height, relative to the base_frame link elevation. laser_frame link is a fixed URDF link which will be the origin of the calculated laserscan projection, robot dependent. Robot program must provide a complete tf link/joint path from the /openni optical frames to the base_frame link. Kinect camera can be mounted on a pan/tilt mechanism or simply use the Kinect tilting base as long as the pan and tilt angles are exported to tf by the robot program. (add kinect_aux package to openni_kinect stack to use Kinect tilting base) roslaunch pointcloud_to_laserscan kinect_laser.launch Run Rviz and your robot program, including the tf broadcaster. In Rviz, set Laser Scan display to laserScan, tilting pan/tilt downwards should project a perpidicular line at floor elevation In Rviz, set Laser Scan display to laserScanHoriz, tilting pan/tilt downwards should project laser points for obstacles at the laser_frame height. Adjust actual Kinect pan control for robot, laserscan points should remain at same projected elevation, but follow the panning changes. Originally posted by Bart with karma: 856 on 2011-04-07 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 5263, "tags": "kinect, turtlebot, openni-kinect" }
Retrieve filtered point cloud from Moveit Planning Scene
Question: Hello, I added several sensor plugins into the occupancy_map_monitor of moveit. I have a generated occupancy map which I can visualize in RViz using the /move_group/monitored_planning_scene topic. Is it possible to retrieve the filtered point cloud of the overall scene which is generated from all the sensors? Regards Originally posted by Amine on ROS Answers with karma: 11 on 2015-02-02 Post score: 0 Answer: Is the filtered_topic what you are looking for? From moveit.ros.org/wiki/3D_Sensors: filtered_cloud_topic: If this parameter is specified, the filtered cloud (without robot parts) is also republished. This makes things a little less efficient but can be useful for debugging. Edit: hm, this is only the point cloud after self-filtering, so may not be what you are after. Originally posted by gvdhoorn with karma: 86574 on 2015-02-02 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Amine on 2015-02-03: This is the filtered point cloud from one sensor, and not from all sensors. I would like to have one topic for all filtered point clouds Comment by gvdhoorn on 2015-02-03: You might also want to try and send a message to the moveit-users mailing list. But do please report back here if you get an answer there.
{ "domain": "robotics.stackexchange", "id": 20762, "tags": "ros, moveit, move-group, pointcloud" }
Very low Neural Network Accuracy for Titanic Survival Problem
Question: I am new to neural networks and have done a few projects but have got very low accuracy for all of them. I have included the code for titanic NN code here. Am I missing something or what? Can you help me with this? ''' import numpy as np import pandas as pd train=pd.read_csv(r'E:\Learning Python\Kaggle Competition\TItanic\Dataset\train.csv') test=pd.read_csv(r'E:\Learning Python\Kaggle Competition\TItanic\Dataset\test.csv') train.head() train.iloc[[60]] train.info() train.describe() train.drop(['Name','Ticket','Cabin', 'PassengerId'], axis=1, inplace=True) categorical_cols=[] numeric_cols=[] for col in train.columns: if train[col].dtype=='object': categorical_cols.append(col) else: numeric_cols.append(col) print(categorical_cols) print(numeric_cols) for col in categorical_cols: print(train[col].unique()) train['Survived'].unique() train.isna().sum() train.describe() ### Handling Missing Values train['Age']=train['Age'].fillna(train['Age'].median()) train=train[~train['Embarked'].isna()] train=train[~train['Survived'].isna()] for col in categorical_cols: train[col]=train[col].astype('category') train.dtypes ### Encoding Categorical Values categorical_cols for col in categorical_cols: print(train[col].unique()) print(train[col].isna().sum()) from sklearn.preprocessing import OneHotEncoder ohe=OneHotEncoder(drop='first') #for Sex column encoded_array=ohe.fit_transform(train['Sex'].values.reshape(-1,1)).toarray() encoded_df=pd.DataFrame(encoded_array, columns=ohe.get_feature_names_out(['Sex'])) encoded_df.shape train.shape train = train.reset_index(drop=True) encoded_df = encoded_df.reset_index(drop=True) train=pd.concat([train, encoded_df], axis=1) train=train.drop(['Sex'], axis=1) train.isna().sum() # for Embarked column encoded_array=ohe.fit_transform(train['Embarked'].values.reshape(-1,1)).toarray() encoded_df=pd.DataFrame(encoded_array, columns=ohe.get_feature_names_out(['Embarked'])) train=pd.concat([train, encoded_df], axis=1) 
train=train.drop(['Embarked'], axis=1) train.head() ### Splitting train and test data from sklearn.model_selection import train_test_split X=train.drop(['Survived'], axis=1) y=train['Survived'] X_train, X_test, y_train, y_test=train_test_split(X,y,test_size=0.2, random_state=1, shuffle=True) train.isna().sum() ### Scaling Numeric Values from sklearn.preprocessing import StandardScaler, PowerTransformer pt=PowerTransformer() categorical_cols=[] numeric_cols=[] for col in X_train.columns: if train[col].dtype=='object': categorical_cols.append(col) else: numeric_cols.append(col) X_train[numeric_cols]=pt.fit_transform(X_train[numeric_cols]) X_test[numeric_cols]=pt.transform(X_test[numeric_cols]) #st=StandardScaler() #X_train[numeric_cols]=st.fit_transform(X_train[numeric_cols]) #X_test[numeric_cols]=st.transform(X_test[numeric_cols]) # Data Visualization import seaborn as sns import matplotlib.pyplot as plt plt.figure(figsize=(15,15)) sns.heatmap(train.corr(), cmap='jet', annot=True, linewidth=True) plt.show() plt.figure(figsize=(20,10)) sns.boxplot(train[numeric_cols]) plt.xticks(rotation=30) plt.show() ## Neural Network import tensorflow as tf from tensorflow import keras from tensorflow.keras import Sequential, optimizers from tensorflow.keras import layers from tensorflow.keras.utils import to_categorical from tensorflow.keras.layers import Dense, Dropout, BatchNormalization from tensorflow.keras.optimizers import Adam, SGD from sklearn.metrics import accuracy_score model=Sequential() model.add(Dense(256, activation='relu', input_dim=X_train.shape[1])) model.add(Dense(32, activation='relu')) model.add(Dense(4, activation='relu')) model.add(Dense(1, activation='softmax')) model.compile(optimizer='SGD', loss='binary_crossentropy', metrics='Accuracy') model.fit(X_train, y_train, epochs=100, validation_split=0.2) ''' Answer: Your model will always give output of Class 0 since you are using softmax activation function and 1 output node for binary classification. 
The output of a Softmax is a vector with probabilities of each possible outcome. So, the softmax layer should have the same number of nodes as the number of classes. You need to change the number of nodes in the output layer to 2 and also change y_train accordingly (one-hot encoding), OR you can keep the number of nodes in the output layer as 1 and change the activation function of the output layer to sigmoid. If the output of your sigmoid output node is low, it will assign "Class 0", else "Class 1". Softmax is basically an extension of sigmoid. You can go through this article for a better understanding of the two functions. Note: For binary classification, one output node with sigmoid activation function is preferred as it will update faster due to the smaller number of parameters.
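The failure mode is easy to verify numerically: softmax over a single logit always normalizes to exactly 1.0, so a one-node softmax output can never discriminate between classes, while a one-node sigmoid can. A minimal check in plain Python (no Keras needed):

```python
import math

def softmax(zs):
    """Softmax over a list of logits."""
    exps = [math.exp(z) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A one-node softmax layer outputs exactly 1.0 for any logit,
# so every sample gets predicted as the same class:
for z in (-5.0, 0.0, 3.2):
    assert softmax([z]) == [1.0]

# A one-node sigmoid layer actually varies with the logit:
print(round(sigmoid(-5.0), 4), round(sigmoid(3.2), 4))  # 0.0067 0.9608
```

This is exactly why the model in the question predicts Class 0 for everything regardless of training.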
{ "domain": "datascience.stackexchange", "id": 11844, "tags": "machine-learning, neural-network, keras" }
Are the $\lambda_I$-Calculus and the $\lambda_K$-Calculus equivalent?
Question: I see here and there mention of the $\lambda_I$-Calculus (in which every variable must be used at least once) and the $\lambda_K$-Calculus (in which a variable can also be unused). Are they equivalent? Why has the latter kinda obscured the former? EDIT By equivalent, I mean they have the same expressive power, namely, being universal or Turing complete. Answer: You've basically answered the question yourself. $\lambda K$ is just another name for the standard, untyped lambda calculus. $\lambda I$ is a strict subset of $\lambda K$. $\lambda I$ doesn't allow terms where one abstracts over a variable but doesn't use it. So $$K = \lambda xy.x \in \lambda K$$ but $$ K \not\in \lambda I$$ Thanks to this restriction, $\lambda I$ has some interesting properties, in particular if $M$ has a normal form then so do all its sub-terms. Barendregt, H. P. The Lambda Calculus: Its Syntax and Semantics contains some notes about $\lambda I$, namely: ... the $\lambda I$ calculus is sufficient to define all recursive functions (since $K_1 := \lambda xy . yIIx$ satisfies $K_1xc_n = x$ for each of Church's numerals $c_n := \lambda fz . f^n z$ - it is also the case that for each finite set $n$ of nf's, we can find a "local $K$ for $n$" $K_n$ such that $K_nMN = M$ for each $N$ in $n$). ... The $\lambda I$ calculus corresponds to the combinatory logic with primitive combinators $I$, $B$, $C$, and $S$. ...
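The $K_1$ trick can even be checked mechanically by encoding Church numerals as Python closures — an illustrative sketch that uses Python lambdas as a stand-in for λ-terms:

```python
I = lambda x: x                      # λx.x
K1 = lambda x: lambda y: y(I)(I)(x)  # λxy. y I I x — note it uses BOTH x and y,
                                     # so it is a legal λI-term, unlike K

def church(n):
    """Church numeral c_n = λf.λz. f^n z."""
    def c(f):
        def iterate(z):
            for _ in range(n):
                z = f(z)
            return z
        return iterate
    return c

# c_n(I) behaves as the identity for every n, so K1 x c_n reduces to x:
for n in range(6):
    assert K1("kept")(church(n)) == "kept"
```

This mirrors Barendregt's observation that $K_1 x c_n = x$ for each numeral $c_n$, even though $x$'s partner argument can never simply be discarded in $\lambda I$.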
{ "domain": "cstheory.stackexchange", "id": 1933, "tags": "lambda-calculus" }
Conway's game of life in C++ using SDL2
Question: I finally finished my Conway's game of life in c++ project and i'd love to find out how to improve the code. main.cpp #include "Engine/Engine.h" int main() { Conway::Engine Engine(1280, 1024); Engine.Run(); return 0; } Engine.h #pragma once #include <SDL2/SDL.h> #include <memory> #include "Board.h" // Namespace Conway to // always enclose classes in a namespace. namespace Conway { // The engine class handles // everything about the program: // It handles the window, the renderer, // Drawing, Events, Game logic, // and holds representations of the game's // parts (Cell, Grid) class Engine { public: Engine(int ScreenWidth, int ScreenHeight); ~Engine(); void Run(); private: void HandleEvents(); void Draw(); void DrawLines(); const int m_ScreenWidth; const int m_ScreenHeight; bool m_Update = false; bool m_Running = true; std::unique_ptr<Board> m_Board; SDL_Window* m_Window; SDL_Renderer* m_Renderer; }; } Engine.cpp #include "Engine.h" Conway::Engine::Engine(int ScreenWidth, int ScreenHeight) : m_ScreenWidth{ScreenWidth}, m_ScreenHeight{ScreenHeight} { SDL_assert(SDL_Init(SDL_INIT_VIDEO) >= 0); m_Window = SDL_CreateWindow( "Conway's game of life", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, m_ScreenWidth, m_ScreenHeight, SDL_WINDOW_SHOWN ); SDL_assert(m_Window != NULL); if (!m_Board) { Coord<int, int> ScreenSize = {m_ScreenWidth, m_ScreenHeight}; m_Board = std::make_unique<Board>(ScreenSize); } SDL_assert(m_Board != nullptr); m_Renderer = SDL_CreateRenderer(m_Window, -1, SDL_RENDERER_ACCELERATED|SDL_RENDERER_PRESENTVSYNC); SDL_assert(m_Renderer != NULL); // Show lines SDL_RenderClear(m_Renderer); DrawLines(); } Conway::Engine::~Engine() { SDL_DestroyWindow(m_Window); SDL_DestroyRenderer(m_Renderer); m_Window = NULL; m_Renderer = NULL; SDL_Quit(); } void Conway::Engine::HandleEvents() { SDL_Event Event; while(SDL_PollEvent(&Event)) { switch(Event.type) { case SDL_QUIT: m_Running = false; break; // Toggles the updating with a keypress case SDL_KEYDOWN: if 
(Event.key.keysym.sym == SDLK_SPACE) { m_Update = m_Update ? false : true; DrawLines(); } else if (Event.key.keysym.sym == SDLK_c) { m_Board->Clear(); } break; case SDL_MOUSEBUTTONDOWN: if (!m_Update) { if (Event.button.button == SDL_BUTTON_LEFT) { m_Board->ToggleClickedCell({Event.button.x, Event.button.y}); } } break; } } } // Draws the grid void Conway::Engine::Draw() { SDL_RenderClear(m_Renderer); for (int i = 0; i < Board::GRID_HEIGHT; ++i) { for (int j = 0; j < Board::GRID_WIDTH; ++j) { if (m_Board->ReadCell(j + Board::GRID_WIDTH * i) == Board::Cell::Alive) { SDL_SetRenderDrawColor(m_Renderer, 255, 255, 255, 255); } else { SDL_SetRenderDrawColor(m_Renderer, 0, 0, 0, 255); } SDL_Rect rect; rect.x = m_Board->GetCellSize().first * j; rect.y = m_Board->GetCellSize().second * i; rect.w = m_Board->GetCellSize().first; rect.h = m_Board->GetCellSize().second; SDL_RenderFillRect(m_Renderer, &rect); } } if (!m_Update) { DrawLines(); } SDL_RenderPresent(m_Renderer); } // This function draws // the lines delimiting each cell. // The first loop draws the horizontal // lines, the second one the vertical lines. 
void Conway::Engine::DrawLines() { SDL_SetRenderDrawColor(m_Renderer, 255, 255, 255, 255); for (int i = 0; i < Board::GRID_HEIGHT; ++i) { if (i != 0) { SDL_RenderDrawLine( m_Renderer, 0, m_Board->GetCellSize().second * i, m_Board->GetCellSize().first * Board::GRID_WIDTH, m_Board->GetCellSize().second * i ); } } for (int i = 0; i < Board::GRID_WIDTH; ++i) { if (i != 0) { SDL_RenderDrawLine( m_Renderer, m_Board->GetCellSize().first * i, 0, m_Board->GetCellSize().first * i, m_Board->GetCellSize().second * Board::GRID_HEIGHT ); } } SDL_RenderPresent(m_Renderer); SDL_SetRenderDrawColor(m_Renderer, 0, 0, 0, 255); } // Main game loop void Conway::Engine::Run() { while (m_Running) { HandleEvents(); if (m_Update) { m_Board->Update(); } Draw(); SDL_Delay(100); } } Board.h #pragma once #include <vector> #include "Coord.h" namespace Conway { class Board { public: Board(Coord<int, int> ScreenSize); static constexpr int GRID_WIDTH = 80; static constexpr int GRID_HEIGHT = 60; Coord<int, int> GetCellSize() { return m_CellSize; } void ToggleClickedCell(Coord<int, int> MouseCoords); void Update(); void Clear(); enum class Cell { Dead, Alive }; private: int CountAliveNeighbors(Coord<int, int> GridCell); std::vector<Cell> m_Grid; const Coord<int, int> m_CellSize; public: Cell ReadCell(int Index) { return m_Grid[Index]; } }; } Board.cpp #include "Board.h" #include <cmath> Conway::Board::Board(Coord<int, int> ScreenSize) : m_CellSize{ScreenSize.first / GRID_WIDTH, ScreenSize.second / GRID_HEIGHT} { int GridSize = GRID_WIDTH * GRID_HEIGHT; std::vector<Cell> temp(GridSize, Cell::Dead); m_Grid = temp; } void Conway::Board::Clear() { std::fill(m_Grid.begin(), m_Grid.end(), Cell::Dead); } int Conway::Board::CountAliveNeighbors(Coord<int, int> GridCell) { int count = 0; for (int i = -1; i < 2; ++i) { for (int j = -1; j <2; ++j) { int absoluteX = GridCell.first + i; int absoluteY = GridCell.second + j; if (absoluteX == -1 || absoluteX == GRID_WIDTH || absoluteY == -1 || absoluteY == 
GRID_HEIGHT || (i == 0 && j == 0)) { continue; } if (m_Grid[absoluteX + GRID_WIDTH * absoluteY] == Cell::Alive) { ++count; } } } return count; } // Inverses the cell that was clicked on void Conway::Board::ToggleClickedCell(Coord<int, int> Coords) { int ClickedCell = (floor(Coords.first / m_CellSize.first)) + GRID_WIDTH * (floor(Coords.second / m_CellSize.second)); m_Grid[ClickedCell] = m_Grid[ClickedCell] == Cell::Dead ? Cell::Alive : Cell::Dead; } void Conway::Board::Update() { std::vector<Cell> temp(m_Grid); for (int i = 0; i < Board::GRID_HEIGHT; ++i) { for (int j = 0; j < GRID_WIDTH; ++j) { if (m_Grid[j + GRID_WIDTH * i] == Cell::Alive) { if (CountAliveNeighbors({j, i}) < 2 || CountAliveNeighbors({j, i}) > 3) { temp[j + GRID_WIDTH * i] = Cell::Dead; } } else { if (CountAliveNeighbors({j, i}) == 3) { temp[j + GRID_WIDTH * i] = Cell::Alive; } } } } m_Grid = temp; } Coord.h #pragma once template <typename T1, typename T2> struct Coord { T1 first; T2 second; }; Answer: The code looks well organized, and has a clear coding style. Good separation of responsibility between classes, almost no raw pointers (except for those coming from the C API of SDL of course), and no global variables. Nice! But there are still some areas of improvement: Only use SDL_assert() to check for programming errors Assertions are a tool to help find bugs in your program. However, in release builds, these assertions are typically compiled out. Thus, they should not be used to check for errors that can reasonably happen. For example: SDL_assert(m_Window != NULL); It is very possible that, without any bugs in your program, an SDL window could not be created, for example because of an out of memory condition, or the program being run without a display server running. So instead, you have to use a regular if-statement to check for this condition, and then handle the error appropriately. You could use exceptions for that, like so: #include <stdexcept> ... 
if (!m_Window) { throw std::runtime_error("Failed to create window"); } Use nullptr instead of NULL NULL should be used in C code, in C++ you should use nullptr. However, you can also avoid writing it entirely in most cases. For example, instead of if (foo != nullptr), you can just write if (foo). Also, instead of Foo *foo = nullptr you can write Foo *foo = {}. Whether you want to use nullptr explicitly or use the shorter notations is up to the code style you are using. Avoid unnecessary indirection One of the things you do in the constructor of Engine is to allocate a new instance of Board and store the pointer in m_Board. But why allocate this way, when you can just store a Board directly in Engine, like so: class Engine { ... private: Board m_Board; }; The constructor should then ensure it initializes it like so: Conway::Engine::Engine(int ScreenWidth, int ScreenHeight) : m_ScreenWidth{ScreenWidth}, m_ScreenHeight{ScreenHeight} , m_Board({ScreenWidth, ScreenHeight}) { ... Don't draw in the constructor of Engine It should not be necessary to call Draw() from the constructor, instead this is done in Run(). In general, avoid having functions do more than necessary. Don't reset member variables in the destructor There is no point in setting m_Window and m_Renderer to NULL in the destructor of Engine, since those variables will be gone as soon as the function exits. Add a default statement to the switch in HandleEvents Be explicit and tell the compiler what behaviour you want if Event.type doesn't match any of the case-statements. Otherwise, when enabling warnings, the compiler might warn about unhandled event types. It just has to be: default: break; Improve class Coord Your class Coord is basically the same as std::pair. So, if you really wanted to have coordinate pairs where each coordinate can have its own type, you should just have written std::pair<int, int> instead of Coord<int, int>. However, in your code you always use ints for coordinates.
So there really is no need for a template at all. Furthermore, you clearly want x and y-coordinates, so just make that explicit: struct Coord { int x; int y; }; Be consistent in how you name things. In your code, you use i, first and somethingX as names for variables related to the x coordinate. Make sure it has x in the name everywhere. Also, do use your class Coord wherever you have a pair of coordinate. Here is how it would look: int Conway::Board::CountAliveNeighbors(Coord GridCell) { int count = 0; for (int dx = -1; dx <= 1; ++dx) { for (int dy = -1; dy <= 1; ++dy) { Coord absolute; absolute.x = GridCell.x + dx; absolute.y = GridCell.y + dy; ... Don't use arbitrary delays You are calling SDL_Delay(100), which limits your code to run at less than 10 frames per second. Maybe you want to have the evolution of the board go at a rate of 10 Hz, but it is in general better to decouple rendering from the timesteps of your simulation. You already set the SDL_RENDERER_PRESENTVSYNC flag, so you can drop the call to SDL_Delay() and have your code render at the same framerate as your monitor. If you want to limit how often the board updates, then I suggest you use SDL_GetTicks() to keep track of time, and only call Update() when enough time has passed. Pass coordinate pairs to ReadCell() The fact that class Board stores cells as a one-dimensional std::vector should not have to be exposed to other classes. So it is better if ReadCell() takes x and y-coordinates in the form of a Coord, and converts them to an index itself, so in Engine::Draw() you can write: if (m_Board.ReadCell({x, y}) == Board::Cell::Alive) Rename ToggleClickedCell() to ToggleCell() You have a very good separation of responsibility in your code: class Board implements the logics of the board, while class Engine handles user input and rendering. This makes it easy to change the Engine while keeping the functionality of the Board the same. 
For example, you could make a text-only version of your program by changing Engine such that it would not use SDL but render the board as ASCII art for example. In that case, you would not use a mouse but the keyboard to toggle cells, so it would be strange to have to call ToggleClickedCell() when no clicking is involved. You should also just pass the grid x and y coordinates to ToggleCell(), not the mouse coordinates. Converting mouse coordinates to grid coordinates should be done by Engine. Make member functions const where appropriate Apart from variables, you can also make member functions const. You should do this when the member function doesn't change any of the member variables of its class. That allows the compiler to optimize the code better. You just have to add it right after the declaration in the header files, like so: int CountAliveNeighbors(Coord GridCell) const; Avoid repeatedly using a function to get the same value In Board::Update() there are three calls to CountAliveNeighbors({j, i}). Apart from the code duplication, if the compiler cannot see that each call will produce exactly the same result, it will perform more function calls than necessary. While there are ways to make the compiler optimize this anyway (using function attributes like [[gnu::const]] or link-time optimization), you can easily improve the code yourself by calling the function once and storing the result in a variable: auto aliveNeighbors = CountAliveNeighbors({x, y}); if (ReadCell({x, y}) == Cell::Alive) { if (aliveNeighbors < 2 || aliveNeighbors > 3) { ... Keep two vectors of cells in memory In Board::Update(), you create a temporary std::vector<Cell>, write the new cell state to it, and at the end copy the temporary vector into m_Grid, and then you destroy the temporary. If this was something you would only do sporadically, that could be fine, but this is where your program spends a large part of its time, so you should try to optimize this.
A simple way to do this is to keep two vectors for storage, and a variable to keep track of the "current" vector. For example, in Board.h: class Board { ... private: std::vector<Cell> m_Grids[2]; int m_CurrentGrid = 0; }; Then, in Update(), do something like: auto &m_Grid = m_Grids[m_CurrentGrid]; // Get a reference to the current grid auto &temp = m_Grids[m_CurrentGrid ^ 1]; // Get a reference to the temporary grid for (...) { ... } m_CurrentGrid ^= 1; // Swap the temporary and current grid Of course, everywhere you used m_Grid before, you have to ensure you use the current grid. This makes it even more important to use a member function to get cell at a given coordinate, instead of reading a vector directly, even inside class Board itself, because then you only need one place where you put the logic which of the two vectors to read.
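The two-grid idea is language-independent. Here is a compact sketch of the same double-buffered update in Python — illustrative only, not a drop-in for the C++ class — verified on a blinker oscillator:

```python
class Board:
    def __init__(self, w, h, alive=()):
        self.w, self.h = w, h
        self.grids = [[False] * (w * h) for _ in range(2)]  # two buffers
        self.current = 0
        for x, y in alive:
            self.grids[0][x + w * y] = True

    def cell(self, x, y):
        return self.grids[self.current][x + self.w * y]

    def neighbors(self, x, y):
        count = 0
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (dx, dy) == (0, 0) or not (0 <= nx < self.w and 0 <= ny < self.h):
                    continue
                count += self.cell(nx, ny)
        return count

    def update(self):
        src = self.grids[self.current]
        dst = self.grids[self.current ^ 1]   # write into the spare grid
        for y in range(self.h):
            for x in range(self.w):
                n = self.neighbors(x, y)     # count once, reuse
                dst[x + self.w * y] = n == 3 or (src[x + self.w * y] and n == 2)
        self.current ^= 1                    # swap buffers: no per-step allocation

b = Board(5, 5, alive=[(1, 2), (2, 2), (3, 2)])  # horizontal blinker
b.update()
print([(x, y) for y in range(5) for x in range(5) if b.cell(x, y)])
# vertical blinker: [(2, 1), (2, 2), (2, 3)]
```

The only per-generation cost is flipping `current`; both buffers live for the lifetime of the board, which is exactly the allocation-free behaviour suggested above.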
{ "domain": "codereview.stackexchange", "id": 37855, "tags": "c++, game-of-life, sdl2" }
Why not relax only edges in Q in Dijkstra's algorithm?
Question: Can someone tell me why almost in every book/website/paper authors use the following: foreach vertex v in Adjacent(u) relax(u,v) when relaxing the edges, instead of: foreach vertex v in Adjacent(u) if (v is in Q) relax(u,v) This is extremely confusing for someone when learning the algorithm. Is there any reason why the people are omitting the IF ? Anyway I wrote a semi-Javascript (I changed it here to a readable syntax) implementation of Dijkstra and I wanted to be sure if it is correct because of this IF case. Here is my code excluding the initialising: while (queue.length != 0) min = queue.getMinAndRemoveItFromQ() foreach v in min.adjacentVertices // inspect edge from "min" to "v" if ( queue.contains(v) AND min.priority + weight(min,v) < v.priority ) v.priority = min.priority + weight(min,v) v.pre = min Is this implementation correct or am I missing something ? Answer: The condition min.priority + weight(min,v) < v.priority can only be true if $v$ is in the queue. If a vertex $v$ has been removed from $Q$ the invariant of Dijkstra's algorithm guarantees we've already found the shortest path to $v$. Edit: Proof Sketch Suppose v isn't in Q. Then we must have already found the shortest path to v. Now if we later examine a vertex u connected to v, the condition min.priority + weight(min,v) < v.priority must be false, otherwise we would have found a shorter path to v which is a contradiction.
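To see this concretely, here is a small Python sketch of Dijkstra with a lazy-deletion priority queue (illustrative code, not tied to the asker's JavaScript): there is no explicit "v is in Q" test, yet settled vertices are never re-relaxed, because the distance comparison already fails for them.

```python
import heapq

def dijkstra(adj, source):
    """adj: {u: [(v, w), ...]} with non-negative weights. Every outgoing
    edge is relaxed; the `d + w < dist[v]` test alone rejects settled
    vertices, so no queue-membership check is needed."""
    dist = {v: float('inf') for v in adj}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:          # stale queue entry; u was already settled
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:  # can only hold while v is still unsettled
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

adj = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2), ('d', 6)],
       'c': [('d', 3)], 'd': []}
print(dijkstra(adj, 'a'))  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```

Adding an explicit membership test, as in the question's pseudocode, is not wrong — it is simply redundant.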
{ "domain": "cs.stackexchange", "id": 939, "tags": "algorithms, graphs, shortest-path" }
Getting rid of Canny outliers
Question: I have been using the Canny edge detection function in OpenCV to detect the edge of an elliptic annulus (light object, dark background). In theory there should be two edges. However, it sometimes does not detect all edges on the outer ring. So if I radially plot my detected points it looks like this (x: angle, y: radius from center of mass) There is a wavy line which is the outer border of my object and a few outliers which are the inner border (you can tell because they look like they are systematically offset). I don't know the exact breadth of the annulus. Q: Is there any way to programmatically get rid of these "obvious" outliers so that the exact outer contour is interpolated? Answer: Please see my answer to this question: Detecting and isolating part of an image You can use circular shortest path to do this quite elegantly. You are using OpenCV and not MATLAB so some modification of my code is required.
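Besides the circular-shortest-path approach from the linked answer, a simpler first pass worth noting is a robust-statistics cut: since the stray inner-ring detections are a small, systematically offset minority, a median/MAD filter on the radii removes them without knowing the annulus breadth. A sketch (the data values below are made up for illustration):

```python
import statistics

def reject_outliers(radii, k=3.0):
    """Keep radii within k robust standard deviations of the median.
    MAD-based, so it tolerates a minority of gross outliers — here,
    the systematically offset inner-ring hits."""
    med = statistics.median(radii)
    mad = statistics.median([abs(r - med) for r in radii])
    scale = 1.4826 * mad or 1e-9  # MAD -> sigma for normally distributed data
    return [r for r in radii if abs(r - med) / scale <= k]

outer = [100.0, 101.0, 99.5, 100.5, 100.2, 99.8, 100.1]  # wavy outer border
inner = [60.0, 61.0]                                     # offset inner-ring hits
kept = reject_outliers(outer + inner)
print(kept)  # the seven outer-border radii survive, the two inner hits do not
```

This assumes the outer-border radii dominate the sample; if the inner ring contributed nearly half the points, a two-cluster split (or the circular shortest path from the answer) would be the safer tool.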
{ "domain": "dsp.stackexchange", "id": 2947, "tags": "canny-edge-detector" }
Conflict between Bra-Ket notation and Integration
Question: Suppose, I have a wavefunction given by $\psi(x,t)$. This wavefunction, over time, becomes $\psi(\alpha x,t)$. I've been asked to compute the final kinetic energy of this new wavefunction, in terms of the initial kinetic energy. We know, $$\langle T_i\rangle=\langle \psi(x)|(-\frac{\hbar^2}{2m} \nabla_x^2)|\psi(x)\rangle$$ This is the initial kinetic energy, in Bra-Ket notation. We can write the final kinetic energy as : $$\langle T_f\rangle=\langle \psi(\alpha x)|(-\frac{\hbar^2}{2m} \nabla_x^2)|\psi(\alpha x)\rangle$$ However, changing variable to $u$ such that $u=\alpha x$, we can see : $$\frac{\partial}{\partial u} = \frac{\partial}{\partial x}\frac{\partial x}{\partial u} = \frac{1}{ \alpha}\frac{\partial}{\partial x}$$ Thus, $$\alpha^2\frac{\partial^2}{\partial u^2} =\frac{\partial^2}{\partial x^2}$$ Thus, we can write kinetic energy as : $$\langle T_f\rangle= \alpha^2\langle \psi(u)|(-\frac{\hbar^2}{2m} \nabla_u^2)|\psi(u)\rangle = \alpha^2\langle T_i \rangle$$ However, if I write this same thing through integration, I'm facing a problem. $$\langle T_i \rangle = \int\psi^*(x)(-\frac{\hbar^2}{2m} \nabla_x^2)\psi(x)dx$$ Similarly, we have : $$\langle T_f \rangle = \int\psi^*(u)(-\frac{\hbar^2}{2m} \nabla_x^2)\psi(u)dx$$ As we have seen, $$\nabla_x^2 = \alpha^2\nabla_u^2 \space\space\space\& \space\space\space dx=\frac{du}{\alpha}$$ Plugging these two values in, and noting that $u$ is just a dummy variable, we have : $$\langle T_f \rangle = \int\psi^*(u)(-\alpha^2\frac{\hbar^2}{2m} \nabla_u^2)\psi(u)\frac{du}{\alpha} = \alpha \langle T_i\rangle$$ Even though the two notations are equivalent, they are giving me different answers. Can someone guide me as to where I'm making a mistake, and how should I deal with problems such as these? Answer: Your second derivation is ok. In bra-ket notation, $|\psi(x)\rangle$ is meaningless. The state is $|\Psi\rangle$, an abstract Hilbert-space vector with no explicit dependence on any variables specific to a given basis.
The wavefunction $\psi(x)$ is the state projected into the position basis, $\psi(x) = \langle x | \Psi \rangle$. Furthermore, you shouldn't write the energy operator in terms of $\nabla_x$ in bra-ket notation. The Hamiltonian is $\hat{H}=\hat{p}^2/2m$, where $\hat{H}$ and $\hat{p}$ are abstract operators. You should only express them in terms of a number or function or differential operator in some basis. So the following expressions can be parsed \begin{equation} \langle \Psi | \hat{H} | \Psi \rangle = \langle \Psi | \frac{\hat{p}^2}{2m} | \Psi \rangle \end{equation} while you should avoid writing things like $\langle \psi(x) | \hat{H} | \psi(x) \rangle$ or $\langle \Psi | \nabla_x^2 | \Psi \rangle $ or $\langle \psi(x) | \nabla_x^2 | \psi(x) \rangle$. To express this in terms of a basis, you insert a complete set of states using the resolution of the identity \begin{equation} \hat{\mathbf{1}} = \int d x | x \rangle \langle x | = \int \frac{dp}{2\pi} | p \rangle \langle p | \end{equation} where $\hat{\mathbf{1}}$ is the identity operator, and I've written the identity operator in two different bases (position and momentum) to emphasize that you can choose to do the calculation in any basis. The $2\pi$ is conventional (but needs to appear somewhere to make the Fourier transforms work out). To illustrate this, let's first evaluate the expression in the momentum basis. 
Then \begin{eqnarray} \langle \Psi | \hat{H} | \Psi \rangle &=& \langle \Psi | \frac{\hat{p}^2}{2m}| \Psi \rangle \\ &=& \langle \Psi | \hat{\mathbf{1}} \frac{\hat{p}^2}{2m} \hat{\mathbf{1}} | \Psi \rangle \\ &=& \int \frac{d p}{2\pi} \int \frac{d p'}{2\pi} \langle \Psi | p\rangle \langle p | \frac{\hat{p}^2}{2m} | p' \rangle \langle p' | \Psi \rangle \\ &=& \int \frac{dp}{2\pi} \int \frac{dp'}{2\pi} \tilde{\psi}(p)^\star \left( \frac{p^2}{2m} 2\pi \delta(p-p') \right) \tilde\psi(p') \\ &=& \frac{1}{2m} \int \frac{dp}{2\pi} p^2 |\tilde{\psi}(p)|^2 \end{eqnarray} where $\tilde{\psi}(p) \equiv \langle p | \Psi \rangle$ is the state in the momentum representation, or the momentum-space wavefunction. Note that at no point in this derivation did any differential operator appear. Instead, when it came time to evaluate $\langle p | \frac{\hat{p}^2}{2m} | p' \rangle$, we only had to use $\hat{p}^2 | p \rangle = p^2 | p \rangle$ and $\langle p | p'\rangle = 2\pi \delta(p-p')$. We can follow the exact same logic in the position representation, by replacing $\hat{\mathbf{1}}= \int dx | x \rangle \langle x | $. I'll leave the details for you (feel free to ask follow up questions). The main differences with respect to the momentum-space derivation are that: 1. the real-space wavefunction $\psi(x) = \langle x | \Psi \rangle$ appears instead of the momentum-space wavefunction; 2. a differential operator appears at the step $\langle x | \frac{\hat{p}^2}{2m} | x' \rangle$. The easiest thing is just to use the rule that you can replace this combination with $-\frac{\hbar^2}{2m}\delta(x-x') \nabla_x^2$; then you will end up with your second derivation. As an alternative to 2, you can also proceed by inserting a complete set of momentum states, evaluating $\hat{p}$ in the momentum basis, then using $\langle x | p \rangle = \frac{1}{\sqrt{2\pi}}e^{i p x/\hbar}$ and $p e^{i p x/\hbar} = -i \hbar \nabla_x e^{i p x/\hbar}$ to convert the factors of $p$ into gradients. Here are some more equations to flesh out method 3.
I'm not going to do the whole derivation, but just focus on the tricky expectation value $\langle x | \frac{\hat{p}^2}{2m} | x' \rangle$ that appears when you replace $\hat{\mathbf{1}}$ with $\int dx |x\rangle \langle x |$ \begin{eqnarray} \langle x | \frac{\hat{p}^2}{2m} | x' \rangle &=& \int \frac{dp}{2\pi} \int \frac{dp'}{2\pi} \langle x | p \rangle \langle p | \frac{\hat{p}^2}{2m} | p' \rangle \langle p' | x' \rangle \\ &=& \int \frac{dp}{2\pi} \int \frac{dp'}{2\pi} \frac{e^{ipx/\hbar}}{\sqrt{2\pi}} \langle p | \frac{\hat{p}^2}{2m} | p' \rangle \frac{e^{-ip'x'/\hbar}}{\sqrt{2\pi}} \\ &=& \int \frac{dp}{2\pi} \int \frac{dp'}{2\pi} \frac{e^{i(px - p'x')/\hbar}}{2\pi} \left(2\pi \delta(p-p')\frac{p^2}{2m}\right) \\ &=& \frac{1}{4\pi m} \int \frac{dp}{2\pi} p^2 e^{i p (x-x')/\hbar} \\ &=& \frac{1}{4\pi m} \int \frac{dp}{2\pi} \left(-\hbar^2 \nabla_x^2 e^{i p(x-x')/\hbar}\right) \end{eqnarray} In practice, you'll be integrating this expression over $x$ and $x'$, times wavefunctions $\psi(x)$ and $\psi(x')$, so the next step is to integrate the $\nabla_x^2$ by parts so you put it on $\psi(x)$. Then you can do the integral over $p$, giving you a delta function $\delta(x-x')$, which kills the $x'$ integral. Then you are back to your second derivation. The key pieces of information used are $\langle x | p \rangle = e^{i p x/\hbar}/\sqrt{2\pi}$, $\hat{p}|p\rangle = p | p\rangle$, $p e^{i p x/\hbar} = -i \hbar \nabla_x e^{i p x/\hbar}$ (to see this just take the derivative on the RHS and you'll see it equals the LHS), and $p^2 e^{i p x/\hbar} = - \hbar^2 \nabla_x^2 e^{i p x/\hbar}$ (just the above equation, twice)
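For completeness, here is the position-basis version that the answer leaves as an exercise, filled in with the same steps and the replacement rule quoted above: \begin{eqnarray} \langle \Psi | \hat{H} | \Psi \rangle &=& \int dx \int dx' \langle \Psi | x \rangle \langle x | \frac{\hat{p}^2}{2m} | x' \rangle \langle x' | \Psi \rangle \\ &=& \int dx \int dx' \psi(x)^\star \left(-\frac{\hbar^2}{2m}\delta(x-x') \nabla_{x'}^2\right) \psi(x') \\ &=& -\frac{\hbar^2}{2m}\int dx\, \psi(x)^\star \nabla_x^2 \psi(x) \end{eqnarray} which is the familiar real-space kinetic-energy expectation value. Note again that no differential operator ever acts inside a bra-ket; it only appears after the matrix element has been evaluated in the position basis.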
{ "domain": "physics.stackexchange", "id": 83172, "tags": "quantum-mechanics, energy, operators, wavefunction, schroedinger-equation" }
Program that replicates itself
Question: While misreading the beginning of Stage I of this classic paper by Ken Thompson, I decided to create a program that replicates itself. Let's say this program is called Replicator.exe. Upon running it once, it will create a new executable, Replicator_1.exe. Upon running either of those executables, a new executable, Replicator_2.exe, will be created. Some of the things I am interested in: De-coupling the Windows dependent logic. Naming (I feel like some of my variable and function names are bad). Anything else that seems odd or better ways to do this. Please note that I wrote this with Visual Studio 2012, so I only have access to a limited amount of C++11 features. For your own safety, please do not try to put the contents of main() into an infinite loop. This is the main driver: Driver.cpp #include "Replicator.h" #include <algorithm> #include <fstream> #include <iterator> #include <string> template <class OutIter> OutIter copy_file (const std::string &filepath, OutIter out, std::ios::openmode open_flags = std::ios::in) ; int main () { namespace ff = fun_fs ; const std::string filepath = ff::process_path () ; const std::string filepath_new = ff::unique_filename (filepath) ; std::ofstream file (filepath_new, std::ios::binary) ; copy_file (filepath, std::ostream_iterator <char> (file), std::ios::binary) ; return 0 ; } template <class OutIter> OutIter copy_file (const std::string &filepath, OutIter out, std::ios::openmode open_flags) { std::ifstream file ; file.open (filepath, open_flags) ; if (!file.good ()) { return out ; } file.unsetf (std::ios::skipws) ; auto begin = std::istream_iterator <char> (file) ; auto end = std::istream_iterator <char> () ; auto new_out = std::copy (begin, end, out) ; return new_out ; } These are some helper functions: Replicator.h #pragma once #ifndef REPLICATOR_H #define REPLICATOR_H #include <string> namespace fun_fs { std::string process_path () ; std::string unique_filename (std::string filename) ; } #endif Replicator.cpp
#include "Replicator.h" #include <string> #include <system_error> #include <utility> #include <Windows.h> namespace fun_fs { static std::pair <std::string, std::string> split_file_extension (const std::string &filename) ; static std::string increment_count (const std::string &filename) ; } std::string fun_fs::process_path () { std::string path (500, ' ') ; DWORD dw = ::GetModuleFileName (nullptr, &path[0], path.size ()) ; // Do not mistake hidden files for file extensions. if (dw == 0 || dw == path.size ()) { std::error_code ec (::GetLastError (), std::system_category ()) ; throw std::system_error (ec, "::GetModuleFileName () failed.") ; } path.resize (dw) ; return path ; } std::string fun_fs::unique_filename (std::string filename) { auto file_and_extension = split_file_extension (filename) ; std::string filename_part = std::move (file_and_extension.first) ; const std::string file_extension = std::move (file_and_extension.second) ; do { filename_part = increment_count (filename_part) ; filename = filename_part + file_extension ; } while (::GetFileAttributes (filename.data ()) != INVALID_FILE_ATTRIBUTES) ; return filename ; } // example: "name.txt" -> {"name", ".txt"} // example: "name" -> {"name", ""} static std::pair <std::string, std::string> fun_fs::split_file_extension (const std::string &filename) { std::string file_extension ; auto index = filename.rfind ('.') ; // ignore hidden files if (index != 0 && index != std::string::npos) { return std::make_pair (filename.substr (0, index), filename.substr (index)) ; } return std::make_pair (filename, "") ; } // example: "name" -> "name_1" // example: "name_2" -> "name_3" // example: "name_2cool" -> "name_2cool_1" static std::string fun_fs::increment_count (const std::string &filename) { const std::string start_count = "1" ; auto index = filename.rfind ('_') ; if (index == (filename.size () - 1)) { return filename + start_count ; } else if (index != std::string::npos) { const std::string possible_number = 
filename.substr (index + 1) ; std::size_t end_of_conversion = 0 ; try { int number = std::stoi (possible_number, &end_of_conversion) ; if (end_of_conversion == possible_number.size ()) { return filename.substr (0, index + 1) + std::to_string (number + 1) ; } } catch (std::invalid_argument) { // do nothing... } } return filename + "_" + start_count ; } Answer: De-coupling platform dependent code: You have very few Windows dependent tasks in your code as it stands. If I didn't miss anything, the only functions that perform system calls are process_path() and unique_filename(). I would start of the decoupling by defining a class, instead of loose functions, and adopt a simple inheritance structure to separate the platform specific tasks into a child class. It would be a lot nicer to define a base class for the portable operations, similar to: class Replicator { public: // This being the only method the client calls. void replicateSelf(); // Might be a default, empty... virtual ~Replicator(); protected: // These are the platform dependent services, implemented by the specialized class. virtual std::string processPath() = 0; virtual std::string uniqueFilename(const std::string & filename) = 0; private: // The other helper methods... }; // A factory function. Could also be a static member function of Replicator. std::unique_ptr<Replicator> CreateReplicator() { #if WINDOWS return std::unique_ptr<Replicator>( new WindowsReplicator ); #elif LINUX return std::unique_ptr<Replicator>( new LinuxReplicator ); #else #error "Missing implementation!" #endif } And the platform specific class would have to do very little, just implement the two virtual methods: class WindowsReplicator : public Replicator { std::string processPath() { // calls GetModuleFileName() ... } std::string uniqueFilename(const std::string & filename) { // calls GetFileAttributes() ... 
} }; Client code would use it like this: int main() { std::unique_ptr<Replicator> replicator = CreateReplicator(); replicator->replicateSelf(); } With this setup, you would already achieve a very good separation of the platform specific parts and the portable parts, plus have your code ready for porting to another system. Virtual methods, or dynamic dispatch, however, are normally associated with dynamic runtime behavior. Meaning that virtual methods are very good when you want to switch the underlying logic while the program is running. In this case, the platform does not change during runtime. A Replicator is always a WindowsReplicator or a LinuxReplicator, or whatever. So you could instead of using virtual methods be using a form of static dispatch. CRTP comes to mind in this case. I'll leave it to you to further explore the concept and maybe apply it. Once you create the interface class, changing it to use CRTP is easy. Side note: As Loki Astari suggested in a comment, you might very well avoid the dynamic allocation inside CreateReplicator() by making Replicator an implicit singleton. The program will never require more than one instance of the class. So another option for the factory would be something like this: Replicator & GetReplicatorInstance() { #if WINDOWS static WindowsReplicator rep; return rep; #elif LINUX static LinuxReplicator rep; return rep; #else #error "Missing implementation!" #endif } In C++11, initialization of these function-local statics is also thread safe, by the way.
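The fully portable part of the program, splitting off the extension and incrementing the `_N` suffix, is easy to prototype and test in isolation. A rough Python sketch of the same naming scheme (function names are mine, not from the review):

```python
def split_extension(filename):
    """'name.txt' -> ('name', '.txt'); a leading dot (hidden file) is not an extension."""
    i = filename.rfind('.')
    if i > 0:
        return filename[:i], filename[i:]
    return filename, ''

def increment_count(name):
    """'name' -> 'name_1', 'name_2' -> 'name_3', 'name_2cool' -> 'name_2cool_1'."""
    i = name.rfind('_')
    if i != -1 and i == len(name) - 1:   # trailing underscore: just append the count
        return name + '1'
    if i != -1 and name[i + 1:].isdigit():
        return name[:i + 1] + str(int(name[i + 1:]) + 1)
    return name + '_1'

def next_replica_name(filename):
    """Next free-form replica name, mirroring fun_fs::unique_filename's naming step."""
    stem, ext = split_extension(filename)
    return increment_count(stem) + ext
```

Having the naming logic as pure functions like this is what makes the platform split cheap: only the "does this file already exist" probe stays OS-specific.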
{ "domain": "codereview.stackexchange", "id": 9539, "tags": "c++, c++11, portability" }
Deriving the regular expression for C-style /**/ comments
Question: I'm working on a parser for a C-style language, and for that parser I need the regular expression that matches C-style /**/ comments. Now, I've found this expression on the web: /\*([^\*]*\*+[^\*/])*([^\*]*\*+|[^\*]*\*/ However, as you can see, this is a rather messy expression, and I have no idea whether it actually matches exactly what I want it to match. Is there a different way of (rigorously) defining regular expressions that are easy to check by hand that they are really correct, and are then convertible ('compilable') to the above regular expression? Answer: I can think of four ways: Define an automaton for the language you are interested in. Convert the regular expression to an automaton (using Brzozowski's derivatives). Check that both automata accept the same language (determinize and minimize or use a bisimulation argument). Write loads of test cases and apply your regular expression to them. Convert the automaton defined in point 1 to a regular expression, using standard techniques. A combination of the above.
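The "write loads of test cases" suggestion is quick to set up in practice: pick a candidate pattern and assert its behaviour on hand-checked strings. The pattern below is a commonly quoted regular expression for C comments, chosen for illustration; it is not the one from the question, and the test cases are mine:

```python
import re

# A commonly cited regular expression for C-style /* ... */ comments.
C_COMMENT = re.compile(r'/\*[^*]*\*+(?:[^/*][^*]*\*+)*/')

should_match = ['/**/', '/* plain */', '/* a*b */', '/** doc **/', '/* * / * */']
should_not_match = ['/*/', '/* unterminated', '*/', '/ * not a comment */']

for s in should_match:
    assert C_COMMENT.fullmatch(s), s
for s in should_not_match:
    assert not C_COMMENT.fullmatch(s), s
```

A test suite like this cannot prove equivalence the way the automaton constructions can, but it catches the common failure modes (unterminated comments, `/*/`, embedded stars) cheaply before you invest in a formal argument.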
{ "domain": "cs.stackexchange", "id": 20810, "tags": "compilers, parsers, regular-languages" }
Recreated Snake in Rust
Question: This is my first ever program in Rust. I've made it using only the book, the reference, and any documentation about the crates/functions I was using on the official rust lang website. I have the feeling this could be cleaned up a lot. I'm looking for suggestions. extern crate rand; extern crate device_query; use std::{vec, thread::sleep, time::Duration}; use rand::*; use device_query::{DeviceQuery, DeviceState, Keycode}; fn main(){ let device_state = DeviceState::new(); let mut rows = [['⬛'; 11]; 11]; let mut score: i32 = 0; let mut direction: u8 = 0; // 0 = Up, 1 = Left, 2 = Down, 3 = Right let mut positions:Vec<(usize, usize)> = vec![(5,5)]; //every square taken by our slithery friend let mut close_game: bool = false; rows[5][5] = ''; rows[3][5] = ''; while !close_game { print!("\x1B[2J\x1B[1;1H"); direction = input_direction(direction, &device_state); close_game = game_tick(&mut rows, &mut positions, direction, &mut score); draw_game(&rows, &score); sleep(Duration::from_millis(300)); } println!("Game over!
Final Score: {}", score); } fn draw_game(a: &[[char;11];11], score: &i32){ println!(""); println!(" Score:{:02} teo.snake", &score); for i in 1..10 { print!(" "); for j in 1..10 { print!("{}", a[i][j]); } println!(""); } } fn game_tick(a: &mut[[char;11];11], vec: &mut Vec<(usize, usize)>, d: u8, s: &mut i32)-> bool { let mut game_over: bool = false; let newpos: (usize, usize)= newposition(d, vec[0], &mut game_over); if game_over == true { return game_over } for i in 0..vec.len() { if newpos == vec[i] { game_over = true; return game_over } } vec.insert(0, newpos); let mut next_fruitx:usize; let mut next_fruity:usize; let mut checkpass: bool; 'looptillvalid: loop { checkpass = true; next_fruitx = rand::thread_rng().gen_range(1..10); next_fruity = rand::thread_rng().gen_range(1..10); for i in 0..vec.len() { if (next_fruitx, next_fruity) == vec[i] { checkpass = false; break; } } if checkpass == true { break 'looptillvalid; } } let lastpos: (usize, usize) = vec.pop().unwrap(); a[lastpos.0][lastpos.1] = '⬛'; match a[newpos.0][newpos.1] { '' => { vec.push(lastpos); a[newpos.0][newpos.1] = ''; a[lastpos.0][lastpos.1] = ''; *s += 1; game_over = false; a[next_fruitx][next_fruity] = ''; } _ => { a[newpos.0][newpos.1] = ''; game_over = false} } game_over } fn newposition(d: u8, first: (usize, usize), go: &mut bool) -> (usize, usize){ let mut ret: (usize, usize) = (0,0); match d { 0 => { ret.0 = first.0 - 1; ret.1 = first.1} 1 => { ret.0 = first.0; ret.1 = first.1 - 1} 2 => { ret.0 = first.0 + 1; ret.1 = first.1} 3 => { ret.0 = first.0; ret.1 = first.1 + 1} _ => {println!("Invalid Direction")} } if ret.0 < 1 || ret.0 > 9 || ret.1 < 1 || ret.1 > 9 { *go = true; } ret } fn input_direction(d: u8, d_state: &DeviceState) -> u8{ let mut dir: u8 = d; let keys: Vec<Keycode> = d_state.get_keys(); if keys.len() != 0 { match keys[0] { Keycode::Up => {dir = 0} Keycode::Left => {dir = 1} Keycode::Down => {dir = 2} Keycode::Right => {dir = 3} _ => () } } dir } Answer: Clippy One of the most 
important tools when writing good Rust code is clippy. I recommend configuring your editor to run clippy on the fly. It has very nice messages and includes an explanation for most lints. Let's go through some of its suggestions: warning: empty string literal in `println!` --> src/main.rs:31:5 | 31 | println!(""); | ^^^^^^^^^--^ | | | help: remove the empty string | = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#println_empty_string = note: `#[warn(clippy::println_empty_string)]` on by default warning: equality checks against true are unnecessary --> src/main.rs:45:8 | 45 | if game_over == true { | ^^^^^^^^^^^^^^^^^ help: try simplifying it as shown: `game_over` | = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#bool_comparison = note: `#[warn(clippy::bool_comparison)]` on by default This one actually applies to all popular languages. Let's see one more: warning: length comparison to zero --> src/main.rs:115:8 | 115 | if keys.len() != 0 { | ^^^^^^^^^^^^^^^ help: using `!is_empty` is clearer and more explicit: `!keys.is_empty()` | = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#len_zero = note: `#[warn(clippy::len_zero)]` on by default Then, it has many issues related to iteration. In Rust, it is unidiomatic to use explicit indexing when you can iterate over the actual values instead: for row in a { print!(" "); for elem in row { print!("{}", elem); } println!(); } for pos in &*vec { if newpos == *pos { game_over = true; return game_over } } Here, the assignment to game_over is actually useless, because you can just return true directly. Use Rust 2021 Rust 2021 is the newest edition and should be used for all new code. If you're following a book or tutorial and it tells you to extern crate, it's very outdated.
Control flow logic It may just be me, but I find the control flow of the loop in game_tick very confusing. I think it is more readable if written this way: loop { next_fruitx = thread_rng().gen_range(1..10); next_fruity = thread_rng().gen_range(1..10); if !vec.contains(&(next_fruitx, next_fruity)) { break; } } vec.contains can also be used in other places. Finding these is an exercise for the reader. At the end of game_tick, both branches of the match (which reads very nicely btw) contain the line game_over = false. They can be removed and replaced with a single false at the end of the function. Constants There are multiple occurrences of various constants in the code. This is both not DRY and also an instance of a magic number. Things I would extract into named constants are 11, 9, and the emojis. This reduces the risk of typos that can become logic errors. Data Structures You use a lot of Tuples for Coordinates, where I think a simple Struct with x and y fields could be more readable. Since direction only has 4 valid values, I would make an enum for it. That way, you can remove the "Invalid direction" error, which should be unreachable. I'd also write a 2-valued enum for the return type of game_tick, as the meaning of true and false aren't very obvious. Thanks for reading all of this. Feel free to make a new post with updated code.
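The enum-plus-struct suggestion is language-agnostic; here it is sketched in Python for brevity (the type and function names are illustrative, not from the review). Each direction carries its own row/column delta, so the "Invalid Direction" branch disappears entirely:

```python
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    UP = (-1, 0)
    LEFT = (0, -1)
    DOWN = (1, 0)
    RIGHT = (0, 1)

@dataclass(frozen=True)
class Pos:
    row: int
    col: int

    def step(self, d: Direction) -> "Pos":
        dr, dc = d.value
        return Pos(self.row + dr, self.col + dc)

def in_bounds(p: Pos, lo: int = 1, hi: int = 9) -> bool:
    """Playfield check, mirroring the 1..9 bounds in newposition()."""
    return lo <= p.row <= hi and lo <= p.col <= hi
```

The Rust version would use a `Direction` enum with a `delta()` method and a small `Pos` struct; the shape of the code is the same, and the compiler can then check match exhaustiveness for you.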
{ "domain": "codereview.stackexchange", "id": 45160, "tags": "game, rust, snake-game" }
Google Kick Start Practice Round 2019 - Mural
Question: My code exceeds the time limit on the second test set. A suggestion/hint of a better algorithm would be appreciated. Problem Thanh wants to paint a wonderful mural on a wall that is N sections long. Each section of the wall has a beauty score, which indicates how beautiful it will look if it is painted. Unfortunately, the wall is starting to crumble due to a recent flood, so he will need to work fast! At the beginning of each day, Thanh will paint one of the sections of the wall. On the first day, he is free to paint any section he likes. On each subsequent day, he must paint a new section that is next to a section he has already painted, since he does not want to split up the mural. At the end of each day, one section of the wall will be destroyed. It is always a section of wall that is adjacent to only one other section and is unpainted (Thanh is using a waterproof paint, so painted sections can't be destroyed). The total beauty of Thanh's mural will be equal to the sum of the beauty scores of the sections he has painted. Thanh would like to guarantee that, no matter how the wall is destroyed, he can still achieve a total beauty of at least B. What's the maximum value of B for which he can make this guarantee? Input The first line of the input gives the number of test cases, T. T test cases follow. Each test case starts with a line containing an integer N. Then, another line follows containing a string of N digits from 0 to 9. The i-th digit represents the beauty score of the i-th section of the wall. Output For each test case, output one line containing Case #x: y, where x is the test case number (starting from 1) and y is the maximum beauty score that Thanh can guarantee that he can achieve, as described above. Limits 1 ≤ T ≤ 100. Time limit: 20 seconds per test set. Memory limit: 1 GB. Small dataset (Test set 1 - Visible) 2 ≤ N ≤ 100. Large dataset (Test set 2 - Hidden) For exactly 1 case, N = 5 × 10^6; for the other T - 1 cases, 2 ≤ N ≤ 100. 
Sample Input 4 4 1332 4 9583 3 616 10 1029384756 Output Case #1: 6 Case #2: 14 Case #3: 7 Case #4: 31 In the first sample case, Thanh can get a total beauty of 6, no matter how the wall is destroyed. On the first day, he can paint either section of wall with beauty score 3. At the end of the day, either the 1st section or the 4th section will be destroyed, but it does not matter which one. On the second day, he can paint the other section with beauty score 3. In the second sample case, Thanh can get a total beauty of 14, by painting the leftmost section of wall (with beauty score 9). The only section of wall that can be destroyed is the rightmost one, since the leftmost one is painted. On the second day, he can paint the second leftmost section with beauty score 5. Then the last unpainted section of wall on the right is destroyed. Note that on the second day, Thanh cannot choose to paint the third section of wall (with beauty score 8), since it is not adjacent to any other painted sections. In the third sample case, Thanh can get a total beauty of 7. He begins by painting the section in the middle (with beauty score 1). Whichever section is destroyed at the end of the day, he can paint the remaining wall at the start of the second day. My solution T = int(input()) # number of tries in test set for i in range(1,T+1): N = int(input()) # number of sections of wall score_input = input() # string input of beauty scores beauty_scores = [int(x) for x in score_input] muralLength = (N+1)//2 bestScore = 0 # to obtain best beauty score for k in range((N+2)//2): # the no. of possible murals score = sum(beauty_scores[k:k+muralLength]) if score > bestScore: bestScore = score print("Case #{}: {}".format(i, bestScore)) Further details My code worked fine for the first test set, but the time limit was exceeded for the second. The most likely explanation is that with the test case N = 5 x 10^6, there were far too many mural options for the code to check (2500001 to be exact).
Answer: The time to compute sum(beauty_scores[k:k+muralLength]) is proportional to muralLength, which is N/2, and there are N/2 iterations. The total time to execute the loop is \$O(N^2)\$. TLE. As a hint, once you've computed the sum for a [0..m] slice, the sum for the next slice ([1..m+1]) can be computed much faster. I don't want to say more. range(1, T+1) is unconventional; the idiomatic pattern is for case in range(T): with case + 1 in the output. Also, Pythonic style recommends _ for a dummy loop variable when it is genuinely unused: for _ in range(T):
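For readers who want more than the hint: the observation above amounts to a sliding-window sum, which brings the scoring loop down to \$O(N)\$. A possible implementation (not the only one) of the same scoring rule as the original solution:

```python
def max_guaranteed_beauty(scores):
    """O(N) version: reuse the previous window sum instead of re-summing each slice."""
    n = len(scores)
    m = (n + 1) // 2                 # number of sections Thanh can always paint
    window = sum(scores[:m])         # first window, computed once in O(m)
    best = window
    for k in range(1, n - m + 1):    # slide right: add one score, drop one score
        window += scores[k + m - 1] - scores[k - 1]
        best = max(best, window)
    return best
```

Note that `n - m + 1` equals the original `(N+2)//2` when `m = (N+1)//2`, so exactly the same candidate murals are scored; only the cost per window changes from O(N) to O(1).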
{ "domain": "codereview.stackexchange", "id": 34187, "tags": "python, performance, python-3.x, programming-challenge, time-limit-exceeded" }
What does the concept of "Entropy Flow" mean in detail?
Question: Recently I found a paper on the thermoelectric effect: https://williamsgj.people.cofc.edu/Thermoelectric%20Effect.pdf When I started with Chapter 5 "Irreversible Thermodynamics" I totally struggled with the concept of "Entropy Flow". My question is general and not related to thermoelectricity yet. I just want to understand the concepts the author uses to derive some features. On page 6 a system is split into a set of subsystems, each in local equilibrium. Then the author writes $$T_i \delta S_i = \delta U_i - \mu_i \delta N_i$$ I would understand this as a relation describing equilibrium states which are close together. Next the author presents an equation which I cannot follow in detail: $$T J_s = J_h - \mu J_p$$ where $J_s$, $J_h$ and $J_p$ denote the entropy flux, internal energy flux and particle flux. Unfortunately I cannot follow what is meant by "Entropy Flux". Entropy is a state variable of a system or a sub-system; how can there be a "flux"? A few lines after that, he defines heat flux and relates it to entropy flux: $$J_Q = T J_s$$ Of course I know that for reversible processes $dQ_{rev}= T dS$, but this would imply that the processes involved are all reversible. In general $$\delta Q \le T dS$$ so why can we relate heat flux directly to entropy flux? There are processes possible where $dQ=0$ but the entropy of a system increases anyway - for instance expansion of a gas into a bigger volume after opening a valve. When the chapter is obviously about "irreversible thermodynamics", why do we assume reversible processes from the beginning? Isn't this a discrepancy? Unfortunately I'm completely lost with those concepts. I'm aware that the topic is complex - if there is not an easy answer possible, where can I read more about it? My textbook of thermodynamics doesn't cover such things and just deals with equilibrium thermodynamics.
Answer: There are two ways that the entropy of a closed system can change: By heat flow across the boundary between the system and its surroundings at the boundary temperature $T_B$. This part of the entropy change is given by $\int{\frac{dQ}{T_B}}$, where the integral is carried out along the process path from initial state to final state. This contribution to the entropy change is present in both reversible and irreversible processes; moreover, in a reversible process, there are no temperature variations within the system, so that $T_B=T$ along the process path, where $T$ is the (uniform) system temperature. Entropy generation within the system as a result of irreversibility within the system. This part of the entropy change, denoted $\sigma$, is always positive, unless the process is reversible, in which case it is equal to zero. So, based on this, the total entropy change of a closed system experiencing an irreversible process is $$\Delta S=\int{\frac{dQ}{T_B}}+\sigma$$ And, for a reversible process, $$\Delta S=\int{\frac{dQ}{T}}$$
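A concrete instance of the generation term, offered here as an illustration rather than as part of the original answer: when heat $Q$ leaks irreversibly from a hot reservoir at $T_h$ to a cold one at $T_c$, no entropy crosses the combined system's boundary, yet $\sigma = Q/T_c - Q/T_h > 0$ is produced inside it:

```python
def entropy_generated(Q, T_hot, T_cold):
    """Entropy produced when heat Q flows irreversibly from T_hot to T_cold.

    The hot reservoir loses Q/T_hot of entropy, the cold one gains Q/T_cold;
    the sum is the generation term sigma (zero only when T_hot == T_cold).
    """
    dS_hot = -Q / T_hot
    dS_cold = Q / T_cold
    return dS_hot + dS_cold

# 100 J leaking from 400 K to 300 K generates 100/300 - 100/400 = 1/12 J/K,
# positive as required for an irreversible process.
sigma = entropy_generated(Q=100.0, T_hot=400.0, T_cold=300.0)
```

This is the same bookkeeping as in the formulas above: each reservoir's $\int dQ/T_B$ is evaluated at its own boundary temperature, and the positive remainder is $\sigma$.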
{ "domain": "physics.stackexchange", "id": 84201, "tags": "thermodynamics, entropy, reversibility, heat-conduction" }
Mathematical statement of the Born-Oppenheimer approximation
Question: I have been looking up a formal mathematical definition of the Born-Oppenheimer approximation. I have thus far come across two (my wording): Definition 1 The Born-Oppenheimer approximation is given by: $$\nabla^2\left[\psi_e(\vec r, \vec R)\chi(\vec R)\right] \approx \psi_e(\vec r, \vec R) \nabla^2\chi(\vec R)$$ (from: Linne, M. A. (2002). Spectroscopic measurement: an introduction to the fundamentals. Academic Press. (p224)) Definition 2 The Born-Oppenheimer approximation is given by: $$\psi=\psi_e \chi$$ (from: Das, I. et al. (2005) An Introduction to Physical Chemistry. New Age International. (p105)) In both cases $\psi_e$ is the electron wave function and $\chi$ is the nuclear wave function. My question is: are these definitions equivalent? If so, how can it be proved, and if not, which is taken to be the standard definition? (A source describing both would also be great). Answer: The two expressions are clearly not equivalent: the first one assumes the second one, and extends it. Note that writing $f(x,y)=g(x)h(x,y)$ is not an approximation at all, so the second expression is meaningless on its own. The true approximation is $h'\ll g'$, so that you can write $f'\approx g' h$. Therefore, the second expression is just notation; it is the first expression that really introduces an approximation.
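To see concretely how the two statements relate (this expansion is my own filling-in, following the answer's point): apply the nuclear Laplacian to the product ansatz of Definition 2, \begin{equation} \nabla^2\left[\psi_e(\vec r, \vec R)\,\chi(\vec R)\right] = \chi\, \nabla^2 \psi_e + 2\, \nabla \psi_e \cdot \nabla \chi + \psi_e\, \nabla^2 \chi , \end{equation} where $\nabla$ acts on the nuclear coordinates $\vec R$. Definition 1 is the statement that the first two terms, which involve the variation of the electronic wave function with the nuclear coordinates, may be neglected, leaving $\nabla^2(\psi_e \chi) \approx \psi_e \nabla^2 \chi$. So Definition 2 on its own is only a product ansatz; the approximation proper is the neglect of the $\nabla \psi_e$ terms.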
{ "domain": "physics.stackexchange", "id": 37573, "tags": "quantum-mechanics, molecules, born-oppenheimer-approximation" }
Is net torque not zero about all points on the rod for a linearly accelerating rod?
Question: Consider a thin uniform rod $AB$ of mass $m$ and length $L$ just translating with some acceleration $a$ due to two anti-parallel forces $F_1$ and $F_2$ perpendicular to the rod. Force $F_1$ acts at the end $A$ whereas $F_2$ acts at distance $y$ from end $A$. Because the body is just translating, the net torque on the body about any point on the rod must be zero. About the center of mass: $F_1 \cdot L/2 = F_2 \cdot (L/2-y)$ gives me a ratio $F_1/F_2$. If I proceed with these values of $F_1$ and $F_2$, then the net torque about the end $B$ is not zero. About end $B$ of the rod: $F_1 \cdot L = F_2 \cdot (L-y)$ gives me another value for $F_1/F_2$. If I proceed with these values of $F_1$ and $F_2$, then the net torque about the center of mass is not zero. Where am I going wrong? EDIT: I have generalized the above situation from the following problem: The solution to the above problem says "Since the rod moves translationally only, the torque about $B$ is zero. Hence $N = 0$ and hence $x=2$" Answer: EDIT - I would like to point out that there was a flaw in my last answer. "Because the body is just translating, the net torque on the body about any point on the rod must be zero." My previous answer said that this was wrong. It is indeed correct. As the rod moves in a straight line, the acceleration of every point on the rod will be $2\,m/s^2$. If you want to balance the torque about $B$, you must add the pseudo force on the COM, as point $B$ is accelerating with $2\,m/s^2$ too! If you do that, you will get the same value for $\frac{F_1}{F_2}$. In this photograph I have taken the general case, which proves that the torque about any point is zero and indeed gives the same value of $\frac{F_1}{F_2}$
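The answer's claim, that adding the pseudo force restores zero torque about $B$, is easy to verify numerically. A sketch with arbitrary illustrative values (not the numbers from the original problem):

```python
# Rod of length L; F1 acts at A (position 0), F2 (anti-parallel) at distance y from A.
# Fix F1/F2 from the center-of-mass torque balance, then examine torques about B.
L, y, F2 = 1.0, 0.3, 1.0
F1 = F2 * (L / 2 - y) / (L / 2)      # from F1*(L/2) = F2*(L/2 - y)

ma = F1 - F2                          # net force = m*a (taking F1's direction positive)

# Torque of the applied forces about end B (at position L), lever arms measured from B:
torque_B = F1 * L - F2 * (L - y)      # NOT zero on its own

# In B's accelerating frame, add the pseudo force -m*a acting at the COM (arm L/2):
torque_B_with_pseudo = torque_B - ma * (L / 2)   # this IS zero
```

Algebraically, `torque_B - ma*(L/2) = F1*L/2 - F2*(L/2 - y)`, which vanishes by the center-of-mass condition, so the same $F_1/F_2$ ratio follows from either point once the pseudo force is included.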
{ "domain": "physics.stackexchange", "id": 37957, "tags": "homework-and-exercises, classical-mechanics, rotational-dynamics" }
Why is navfn using Dijkstra?
Question: As an algorithm learner, I'm just curious about why navfn was first equipped with the Dijkstra search algorithm. The answer at this question explains how navfn works using Dijkstra, but what is the reason it was selected instead of other algorithms that are equally good (e.g. A*)? Originally posted by 130s on ROS Answers with karma: 10937 on 2012-02-23 Post score: 17 Answer: If you look at the navfn code, you'll see that there is in fact an optimized A* algorithm in there. It was not used because there was a bug and nobody had the cycles to fix it. IMO there's a lot of premature optimization here, given that global planning does not need to be happening that often. The right thing to do is to use a principled and efficient A* implementation such as sbpl :) Originally posted by bhaskara with karma: 1479 on 2012-05-26 This answer was ACCEPTED on the original site Post score: 14 Original comments Comment by 130s on 2012-05-26: While Dimitri's answer initiated a meaningful discussion, I thought I have to choose this response as a more direct answer to my particular question. Comment by 2ROS0 on 2014-11-12: what do you mean by nobody had the "cycles to fix it" ? Comment by Void on 2018-11-16: I believe it's talking about free time Comment by navid on 2019-10-10: Are you sure it is using an Optimized A* ? ROS wiki says "support for an A* heuristic may also be added in the near future". You mean they have already added A* support but never edited their description in ROS wiki?
{ "domain": "robotics.stackexchange", "id": 8362, "tags": "ros, navigation, navfn" }
Why do anaerobic bacteria require a low redox potential for their culture media?
Question: I'm reading about Clostridium, which is an anaerobic bacterium, in my microbiology text, and it says that of more importance than the absence of oxygen is the provision of a sufficiently low redox potential in the medium. This can be achieved by adding unsaturated fatty acids, ascorbic acid, glutathione, cysteine, etc. Why? Do they reduce oxygen free radicals or something? Well, are these present in Clostridium's natural habitat? Answer: The higher the redox potential is, the higher the oxygen dissolved in the media will be. Lowering this potential ensures that the oxygen concentration is lower and that the anaerobe will be able to grow easily. Several steps can be taken while preparing the media, such as boiling to remove oxygen and adding chemicals which reduce this redox potential (since oxygen tends to dissolve back into the medium). As for the natural habitat, such habitats naturally have no, or really low, amounts of oxygen. Whether habitats containing little oxygen have other molecules able to reduce the redox potential is a good question. Lowering the redox potential is mainly important for the medium since there is oxygen involved (hard to remove 100% while preparing media). I hope my answer helps.
{ "domain": "biology.stackexchange", "id": 6135, "tags": "microbiology" }
Would we need Alternating Current if superconducting wires existed?
Question: The major advantage of Alternating Current is that it can be transmitted over large distances without significant losses, which is not possible with Direct Current. If economical superconducting wires existed, DC could be transmitted over any distance without any loss, and DC is much safer compared to AC. So, I want to know, do we need AC if long distance transmission is no longer a problem because of superconducting wires? Would DC be better in that case, or would we still need AC? Answer: Ohmic power line losses occur in both DC and AC systems and are always proportional to the square of the RMS current, so your assertion that AC transmission has insignificant losses compared to DC is not correct. The major advantage of AC power transmission is that the transmitting voltage can be transformed down as needed locally at any point along the line with a simple transformer. This is not possible with DC, which is why it is rarely used for long-distance power transmission. In addition, three-phase AC power transmission directly enables three-phase electric motor technology, which is overwhelmingly preferred in industrial use. Note that AC power transmission is not inherently more dangerous than DC, and that lossless superconducting lines for long-distance power transmission could be used in either DC or 3-phase AC mode.
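The scaling of ohmic loss with current, and hence the value of stepping voltage up for transmission, can be made quantitative. A rough illustration with made-up line parameters (the numbers are mine, chosen only to show the scaling):

```python
def line_loss(power_w, volts, resistance_ohm):
    """I^2 * R loss for delivering power_w at transmission voltage volts.

    Ignores power factor and reactive effects; current is simply P / V.
    """
    current = power_w / volts
    return current**2 * resistance_ohm

# Same 1 MW delivered over a 10-ohm line at two different voltages:
loss_low = line_loss(1e6, 10_000, 10.0)     # 100 A of line current
loss_high = line_loss(1e6, 100_000, 10.0)   # 10 A of line current
```

Raising the transmission voltage by a factor of 10 cuts the ohmic loss by a factor of 100; that cheap voltage conversion via transformers is the AC advantage the answer describes. A superconducting line would make the resistance effectively zero in either the AC or the DC case.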
{ "domain": "physics.stackexchange", "id": 98900, "tags": "electricity, electric-current, electrical-resistance, superconductivity" }
Algorithm to find bucket and item indices from power of two bins?
Question: So this is building off of Algorithm for dividing a number into largest "power of two" buckets?. Here is a slight modification of the answer from there: let numbers = [1, 2, 3, 4, 5, 6, 7, 30, 31, 32, 33, 20, 25, 36, 50, 100, 201] numbers.forEach(n => console.log(`${n} =`, split(n).join(', '))) function split(number) { const result = [0, 0, 0, 0, 0, 0] let unit = 32 let index = 5 while (unit > 0) { while (number >= unit) { result[index]++ number -= unit } index -= 1 unit >>= 1 } return result } The split function creates "power of two" buckets from an overall array of items of length n. That is, it divides a single large array into multiple small arrays. The result array is 6 items long, accounting for buckets of sizes [1, 2, 4, 8, 16, 32]. Each item in the array is how many buckets of that size need to exist. Given that, the goal is to then take an index i, and return the bucket and bucket index where you will find that corresponding item. So for example, here are some outputs, and here is my attempt at an algorithm: let numbers = [1, 2, 3, 4, 5, 6, 7, 30, 31, 32, 33, 20, 25, 36, 50, 100, 201] numbers.forEach(n => { const [c, i] = getCollectionIndexAndItemIndex(n, 300) console.log(`${n} = ${c}:${i}`) }) function getCollectionIndexAndItemIndex(i, size) { const parts = split(size).reverse() // assume this is memoized or something let j = 0 let last = 0 let map = [1, 2, 4, 8, 16, 32].reverse() let k = 0 let bucket = 0 main: while (k <= i) { let times = parts[j] while (times--) { let value = map[j] last = 0 while (value--) { k++ if (value > 0) { last++ } else { last = 0 bucket++ } if (k == i) { break main } } } j++ } return [ bucket, last ] } function split(number) { const result = [0, 0, 0, 0, 0, 0] let unit = 32 let index = 5 while (unit > 0) { while (number >= unit) { result[index]++ number -= unit } index -= 1 unit >>= 1 } return result } That outputs this: 1 = 0:1 2 = 0:2 3 = 0:3 4 = 0:4 5 = 0:5 6 = 0:6 7 = 0:7 30 = 0:30 31 = 0:31 32 = 1:0 33 = 1:1 20 = 
0:20 25 = 0:25 36 = 1:4 50 = 1:18 100 = 3:4 201 = 6:9 So basically, for index i == 1, we go to the first bucket (bucket 0), second index (i == 1), represented as 1 = 0:1. For the 32nd index i == 32, that is the 33rd item, so we fill up 1 32-item bucket, and spill over 1, so index 0 in the second bucket, represented 32 = 1:0. For index 201, equals 6:9, which you can calculate as ((32 * 6) - 1) + 10 == 192 - 1 + 10 == 201. The problem is, this algorithm is O(n), it counts k++ for every item up to k == i. I think there might be a way to optimize this so it can do larger jumps (32, 16, 8, 4, 2, 1 jumps), and cut out a lot of the iterations, I'm just not sure how. Can you find a way to maximally optimize this to the fewest number of steps, using only primitive operations and values (i.e. not fancy array.map and such, but just low-level while or for loops, and bitwise operations)? Basically, how can you optimize the getCollectionIndexAndItemIndex operation, and also simplify it so it's easier to follow. The size parameter is set to 300, but that is the size of the array. But it could be any size, and we would then want to find the corresponding index within that array, jumping to the appropriate bucket and offset. Answer: I think your code is too complex and can be simplified. Also, you wrote "using only primitive operations and values (i.e. not fancy array.map and such...", yet you use array.forEach. Where is the "line"? I hope this is what you are looking for. Note: up to 32 -> one bucket; 32 up to 32*32 -> another bucket; 32*32 up to 32*32*32 -> another bucket; etc. Only after I was done did I get @superb rain's [i >> 5, i & 31]... Perhaps I'll implement that in the future. However, my code allows for "bucket-of-buckets".
The heart of my code is the recursive function: function recBucket(number, size) { if(number<size) return [number]; else { var whole=Math.floor(number/size); return [recBucket(whole, size), number-whole*size].flat(); } }; Here is a snippet that uses and builds on that: var numbers = [1, 2, 3, 31, 32, 33, 100, 201, 1023, 1024, 5555]; function recBucket(number, size) { if(number<size) return [number]; else { var whole=Math.floor(number/size); return [recBucket(whole, size), number-whole*size].flat(); } }; console.log("as is:"); numbers.forEach(n=>console.log(n+" = "+recBucket(n, 32).join(":"))); function minBuckets(number, size, buckets) { var result=recBucket(number, size); while(result.length<buckets) result.unshift(0); return result; }; console.log("min 2 buckets:"); numbers.forEach(n=>console.log(n+" = "+minBuckets(n, 32,2).join(":"))); console.log("min 4 buckets:"); numbers.forEach(n=>console.log(n+" = "+minBuckets(n, 32,4).join(":"))); .as-console-wrapper { max-height: 100% !important; top: 0; }
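For the uniform case that the question's expected outputs imply (every bucket holding exactly 32 items), the [i >> 5, i & 31] idea mentioned above is already the full O(1) answer. A sketch in Python (the function name is mine, not from the thread):

```python
def bucket_and_offset(i, log2_size=5):
    """Constant-time lookup for uniform 2**log2_size-item buckets:
    i >> log2_size is how many full buckets precede item i,
    and i & (2**log2_size - 1) is the offset inside its bucket."""
    return i >> log2_size, i & ((1 << log2_size) - 1)

# Matches the question's expected outputs: 1 -> 0:1, 32 -> 1:0, 201 -> 6:9
```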
{ "domain": "codereview.stackexchange", "id": 39487, "tags": "javascript, performance, algorithm, array" }
Installing ROS on Nexus 4
Question: I recently changed the OS on my Nexus 4 to Ubuntu Touch and I tried installing ROS the traditional way onto the device. But every time I try to install, I keep getting errors saying that the package was not found or cannot be installed. I haven't found much documentation on this topic. How would I be able to install ROS on the device? I need to install it on my robot to control the bot and carry out all the processing. Please help Originally posted by raghu.s1211 on ROS Answers with karma: 1 on 2014-07-03 Post score: 0 Original comments Comment by TillScout on 2014-07-12: Have you been successful already? I am curious about the performance of ROS on Ubuntu Touch devices. Comment by pendragon on 2019-03-26: Anybody figured out a way to do this yet? I got a project coming up and we need to use a portable device capable of running ROS. Any and all suggestions are welcomed. Thanks Comment by dpills on 2022-08-14: Hi, has anyone succeeded in running ROS on Ubuntu Touch, or on any of the newest OTA22/23 UBports versions, on any mobile device? Thanks for any hints Answer: I am not sure, but I think that packages for Ubuntu Touch are different from the packages for desktop computers running standard Ubuntu. From this page it looks like ROS precompiled packages are indeed only available for desktop Ubuntu http://packages.ros.org/ros/ubuntu/dists/ If this is the case you should install ROS on your own by compiling from the source code. Here is how to do it http://wiki.ros.org/hydro/Installation/Source Originally posted by Mehdi. with karma: 3339 on 2014-07-03 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by raghu.s1211 on 2014-07-03: can i install groovy instead of hydro using the source code ? Comment by Mehdi. on 2014-07-03: check this link http://wiki.ros.org/groovy/Installation/Source Comment by raghu.s1211 on 2014-07-07: Thank you very much. Ill try it and follow up.
Comment by stefan on 2014-08-19: any news from ros on ubuntu touch?
{ "domain": "robotics.stackexchange", "id": 18494, "tags": "android" }
Difference between a TM with an empty language and the one accepting empty string
Question: If a TM (Turing Machine) accepts NO input string (not even the blank one), then its language is empty. If a TM ONLY accepts the blank string (meaning that there is nothing on the tape except for the default blank characters), then its language has only one item, and it is the blank string. Are these definitions correct? Could you describe the TM for each? Also, this might be irrelevant but let me ask: I saw somewhere that there must be at least two states for a TM. Which states must be there all the time in a TM? Answer: The definitions (well, descriptions) look correct, more or less. A TM accepting the empty language may move directly into the halt reject state, regardless of what may or may not be on the input tape. A TM accepting the language consisting of only the empty string may examine the first tape symbol. If the symbol is a blank, it could move to the halt accept state. Otherwise, it would move to the halt reject state. We don't have to worry about the user providing an input that starts on some later position in the tape since the blank symbol is not allowed in the input alphabet. To summarize: A TM for $\{\epsilon\}$ has pseudocode: if tape[1] is blank then accept else reject A TM for $\emptyset$ has pseudocode: reject or, if it helps you make the difference even clearer: if tape[1] is blank then reject else reject As Yuval points out in the comments, there are infinitely many TMs accepting one or the other of these languages; the two suggested here are for illustrative purposes only. [EDIT] Some additional discussion motivated by recent comments. (1) Depending on what your definition of a TM is and/or how explicit you want to be, there are two possibilities. If you want your machine to explicitly enter a halting state for every input tape, then yes, you will have a transition to the halt reject state on every symbol of the input alphabet.
If you allow your Turing machine to reject strings by "crashing" (reaching a configuration for which there is no defined transition), then these transitions aren't necessary; simply transition to halt accept on blank, and let the machine crash in all other cases. (2) Yes, the machine which accepts the empty language always rejects. Notice that it is perfectly acceptable to define automata with accepting states which are never reached (depending upon your definition, of course). In particular, we could take any arbitrary TM and change it to accept the empty language by changing all transitions from the initial state to lead to the halt reject state. (3) Typically, TMs are understood to have two special states in addition to any "user-defined" states: the halt accept and halt reject states, which the TM enters to explicitly accept or reject, respectively, some input. In addition to these two states, an initial state is required from which to begin processing the input. By my reckoning, this means that every TM has at least three states. Notice, though, that not all of these states needs to be used by any given TM. In particular, a valid TM might endlessly loop on the initial state for all inputs; this TM will accept the empty language (although it doesn't decide it; to decide it, it needs to enter one of the halting states for each input).
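The two pseudocode machines above can be mirrored in a few lines of Python (a sketch only: these are plain accept/reject functions, not formal TM tuples, and modelling the blank as an empty tape is my simplification):

```python
BLANK = None  # stand-in for the tape's blank symbol

def tm_accepts_only_empty(tape):
    """TM for {epsilon}: accept iff the first cell is blank, i.e. no input was written."""
    return len(tape) == 0 or tape[0] is BLANK

def tm_accepts_nothing(tape):
    """TM for the empty language: halt-reject on every input."""
    return False
```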
{ "domain": "cs.stackexchange", "id": 1236, "tags": "terminology, turing-machines" }
Haskell's `partition`
Question: Learn You a Haskell mentions the partition function: partition takes a list and a predicate and returns a pair of lists. The first list in the result contains all the elements that satisfy the predicate, the second contains all the ones that don't. How's my implementation? I'd prefer to avoid the ++ function, but I'm not sure how to avoid it here. partition' :: (a -> Bool) -> [a] -> ([a], [a]) partition' f [] = ([], []) partition' f ys = partition'' f ys [] [] where partition'' f [] as bs = (as, bs) partition'' f (x:xs) as bs | f x = partition'' f xs (as ++ [x]) bs | otherwise = partition'' f xs as (bs ++ [x]) Answer: As @aled1027 suggests, partition' is just a fancy form of filter. Therefore, it would be useful to study how filter can be implemented without ++. filter' :: (a -> Bool) -> [a] -> [a] filter' _ [] = [] filter' f (x:xs) | f x = x:rest | otherwise = rest where rest = filter' f xs The key to avoiding ++, I think, is to force yourself to write x: first, then figure out how to fill in everything surrounding it. The next logical step would be to write | f x = x:filter' f xs … and the rest should follow naturally. Here's what I came up with for partition' (hover for spoiler): partition' :: (a -> Bool) -> [a] -> ([a], [a]) partition' _ [] = ([], []) partition' f (x:xs) | f x = ((x:accepted), rejected) | otherwise = (accepted, (x:rejected)) where (accepted, rejected) = partition' f xs
{ "domain": "codereview.stackexchange", "id": 7520, "tags": "haskell, reinventing-the-wheel" }
Method for finding Hyperrectangle that a coordinate is within
Question: I have a problem at work where I have set of hyperrectangles, in no particular order, that do not overlap and when unioned create a hyperrectangle with no gaps. At the moment I am looking for a way to efficiently figure out which hyperrectangle a given coordinate is within. I do not always have too many hyperrectangles, so just iterating through the set of partitioned hyperrectangles is often fine to see which one a given coordinate is in. However, I sometimes have larger quantities of hyperrectangles and I want to write something that scales better than simply iterating through the set. My first thought is to consider some hashing approach analogous to spatial hashing but am not sure the best way to go about formulating this since the hyperrectangles could be any dimension. Any thoughts? Answer: The best algorithm will depend on the proportion of hyperrectangles to dimensions. If I understand correctly, what you want is to do a first-pass computation where you build a data structure in order to get a sublinear algorithm (in terms of the number of hyperrectangles) that will tell you which hyperrectangle a point is in. I think a binary search tree would work well. At each branch of the tree you can split the set of hyperrectangles in half by performing a check to see if coordinate X_i is greater than or less than some value. You want to include in each set every rectangle that is at least partially inside the split region. For example, if you have a rectangle from (0,0) to (2,2) and you split on the first coordinate at 1, you would want to include this rectangle in both sets. Continue until you have only single hyperrectangles - these will be the leaves of the tree. How and when you split is up to you. Each time the number of hyperrectangles on both sides should be roughly equal. An approach where you just cycle through the dimensions would probably work well. For example, first you split halfway along X_1, then halfway along X_2, ... 
eventually you split halfway along X_d, then you start over if necessary. If n is the number of hyperrectangles, then the construction step should take about O(n*log(n)), and the lookup step should take about O(log(n)) each time.
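The split-tree idea can be sketched as follows (Python; hyperrectangles are (lo, hi) tuple pairs with half-open bounds, and the median-of-lower-bounds split rule is my own concrete choice for the answer's "split so both sides are roughly equal" suggestion):

```python
def build(rects, depth=0, leaf_size=2):
    """Binary split tree over non-overlapping hyperrectangles ((lo,...), (hi,...))."""
    dim = len(rects[0][0])
    if len(rects) <= leaf_size:
        return ('leaf', rects)
    d = depth % dim  # cycle through the dimensions
    lows = sorted(r[0][d] for r in rects)
    split = lows[len(lows) // 2]  # median lower bound in dimension d
    left = [r for r in rects if r[0][d] < split]   # touches the region below split
    right = [r for r in rects if r[1][d] > split]  # touches the region above split
    if len(left) == len(rects) or len(right) == len(rects):
        return ('leaf', rects)  # split made no progress; fall back to a linear scan
    return ('node', d, split,
            build(left, depth + 1, leaf_size),
            build(right, depth + 1, leaf_size))

def locate(tree, point):
    """Return the hyperrectangle containing point (half-open bounds), or None."""
    while tree[0] == 'node':
        _, d, split, lo_child, hi_child = tree
        tree = lo_child if point[d] < split else hi_child
    for lo, hi in tree[1]:
        if all(l <= x < h for l, x, h in zip(lo, point, hi)):
            return (lo, hi)
    return None
```

Rectangles straddling a split are stored in both children, so lookup is still correct; the early-out guard keeps the recursion from looping when a split fails to separate anything.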
{ "domain": "cs.stackexchange", "id": 8554, "tags": "algorithms, computational-geometry, hashing" }
ekf_localization_node : Filterd odometry yaw depend on IMU too much and Laser scan drift when robot rotate with filtered odometry
Question: I have a odometry/filtered fuse by wheel odometry odom and IMU imu_data with ekf_localization. I let my robot facing a wall and do some test with the odometry/filtered. I have two problem right now: Problem 1 My odometry/filtered when I move the robot forward and backward, rotate the robot on the spot, the odometry/filtered output seem to be like good in rviz. https://gph.is/g/4wg9DPD The rviz output, green arrow > odometry/filtered, blue arrow > odom, the axis in front the base link axis is my laser axis. But when I lift the robot up to make the wheel loss contact with the ground and move the robot forward and backward, both the odometry/filtered and odom output in rviz will move forward and backward too. When I rotate robot on the spot without wheel and ground contact, only the wheel odom move, the odometry/filtered is not moving, this make sense to me because the robot is not rotating only the wheel itself is running. When I leave the robot not moving and rotate only the IMU, for sure the odom won't move, but the odometry/filtered will be move exactly same with how I rotate my IMU manually. Rviz output: When the robot lift up, first: Rotate the robot , Second: Move robot forward and backward, Third: Rotate only the IMU. https://gph.is/g/EvdnpAY Is my odometry/filtered depend too much on the IMU? And is it related to covariance? How to make a better configuration on this? Problem 2 When I rotate the robot with the odometry/filtered, my laser scan will drift when the robot rotate and the laser will back on right position when the robot stop. This is the rviz output when the robot is facing a wall and not rotating. The green arrow is odometry/filtered and the blue arrow is pure odom from wheel encoder. https://ibb.co/Y33m99M This is the rviz output when I set the laser scan decay time to 20s and rotate the robot. The laser scan will rotate when the robot rotate. 
https://ibb.co/TB3ZpmB Rviz video https://gph.is/g/ZnM9XNJ ODOM topic## header: seq: 42461 stamp: secs: 1560329405 nsecs: 936365909 frame_id: "odom" child_frame_id: "base_link" pose: pose: position: x: -0.210383832846 y: -0.0374875475312 z: 0.0 orientation: x: 0.0 y: 0.0 z: 0.145501269807 w: 0.98935806485 covariance: [0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01] twist: twist: linear: x: 0.0 y: 0.0 z: 0.0 angular: x: 0.0 y: 0.0 z: 0.0 covariance: [0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01] IMU topic header: seq: 86075 stamp: secs: 1560329453 nsecs: 734069150 frame_id: "base_imu" orientation: x: -0.00954644712178 y: 0.0186064635724 z: 0.145939352166 w: 0.989072479826 orientation_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] angular_velocity: x: 0.0 y: 0.0 z: 0.0 angular_velocity_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] linear_acceleration: x: -0.383203125 y: -0.129331054688 z: 9.733359375 linear_acceleration_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] My configuration frequency: 20 sensor_timeout: 1 two_d_mode: true transform_time_offset: 0.01 transform_timeout: 0.01 print_diagnostics: true debug: false debug_out_file: /path/to/debug/file.txt publish_tf: true publish_acceleration: false # map_frame: map odom_frame: odom base_link_frame: base_link world_frame: odom odom0: /odom imu0: /imu_data odom0_queue_size: 2 odom0_nodelay: false odom0_differential: false odom0_relative: false odom0_config: [false, false, false, false, false, false, true, true, false, false, false, true, false, false, false] imu0_config: [false, false, false, false, false, false, false, false, false, false, false, true, false, false, false] imu0_nodelay: false 
imu0_differential: false imu0_relative: false imu0_queue_size: 5 imu0_remove_gravitational_acceleration: true process_noise_covariance: [0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.025, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.025, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.015] initial_estimate_covariance: [1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9] Originally posted by jynhao_low on ROS Answers with karma: 1 on 2019-06-12 Post score: 0 Answer: But when I lift the robot up to make the wheel loss contact with the ground and move the robot forward and backward, both the odometry/filtered and odom output in rviz will move forward and backward too. 
When I rotate robot on the spot without wheel and ground contact, only the wheel odom move, the odometry/filtered is not moving, this make sense to me because the robot is not rotating only the wheel itself is running. Right, this is normal/expected, as are your other bullet points. The reason for this is that your IMU is also measuring yaw velocity, and it has an all-zero covariance matrix (which you should fix), so the filter trusts it much, much more than it trusts the wheel velocity for detecting turning. But the IMU doesn't report any kind of linear velocity, so the filter is only using the wheel encoder data for linear motion. So to answer your question, the filter isn't too dependent on the IMU; you are telling the filter to trust the IMU much more than it trusts the wheel encoders. In general, I'd say this is correct, but your IMU may not report velocities very accurately. In any case, if you want the wheel encoders to have more of an effect, then you need to make sure the IMU and wheel encoder covariance values for yaw velocity are on the same order of magnitude. When I rotate the robot with the odometry/filtered, my laser scan will drift when the robot rotate and the laser will back on right position when the robot stop. This says to me that you need to do more covariance tuning, or that your IMU data may be lagged. There will always be some amount of lag in a filter, and you will experience drift if your IMU is under-reporting its error. If I were you, I'd see how it does with just wheel encoder data as an input. If there's less laser "smearing," then the issue is with the IMU data. I'd also try increasing the initial and process noise covariance for yaw velocity. Originally posted by Tom Moore with karma: 13689 on 2019-07-03 This answer was ACCEPTED on the original site Post score: 2
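As a concrete illustration of "fix the all-zero covariance matrix", the gyro's rate variance could be filled in along these lines (a sketch: the 0.02 rad/s noise figure is a made-up placeholder -- use your IMU's datasheet value -- and the helper name is mine, not part of robot_localization):

```python
def diagonal_cov3(sigma_x, sigma_y, sigma_z):
    """Row-major 3x3 covariance with the given standard deviations on the diagonal,
    the layout used by e.g. the angular_velocity_covariance field of sensor_msgs/Imu."""
    return [sigma_x ** 2, 0.0, 0.0,
            0.0, sigma_y ** 2, 0.0,
            0.0, 0.0, sigma_z ** 2]

# Hypothetical gyro with ~0.02 rad/s (about 1 deg/s) rate noise on each axis:
angular_velocity_covariance = diagonal_cov3(0.02, 0.02, 0.02)
```

Publishing nonzero variances like this lets you then balance the IMU against the wheel-encoder yaw-velocity covariance, per the answer's advice.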
{ "domain": "robotics.stackexchange", "id": 33163, "tags": "navigation, odometry, extended-kalman-filter, ros-kinetic, robot-localization" }
Why object detection algorithms are poor in optical character recognition?
Question: OCR is still a very hard problem. We don't have universal powerful solutions. We use the CTC loss function An Intuitive Explanation of Connectionist Temporal Classification | Towards Data Science Sequence Modeling With CTC | Distill which is very popular, but it's still not enough. The simple solution would be to use object detection algorithms for recognizing every single character and combine them to form words and sentences. We already have really powerful object detection algorithms like Faster-RCNN, YOLO, SSD. They can detect even very complicated objects that are not fully visible. But I read that these object detection algorithms are very poor if you use them for recognizing characters. It's very strange since these are very simple objects, just a few lines and circles. And mainly grayscale images. I know that we use object detection algorithms to detect the regions of text on big images. And then we recognize this text. Why can't we just use object detection algorithms (small versions of popular neural networks) for recognizing single characters? Why do we use CTC or other approaches (besides the fact that it would require much more labeling)? Why not object detection? Answer: Good question! Using Yolo to recognise characters would be a good experiment to try. It may be because of the density of characters on a page -- systems like Yolo are very good at detecting a small number of objects, e.g. 2, 3 or 10, but don't work so well when the number of objects is in the hundreds as you might have with OCR. A better approach might be to try face detection methods that work well with large crowds.
{ "domain": "ai.stackexchange", "id": 2782, "tags": "object-detection, object-recognition, optical-character-recognition, ctc-loss" }
How many null directions are there?
Question: The metric signature of spacetime is usually given as ($3,1$), but spaces can also be ($3,n,1$). Null surfaces include photons and event horizons, which exist, so is $n$ actually $ > 1$ in the signature? In theory, could an experiment establish the number of null dimensions/directions? In another answer, someone said there are an infinite number of null directions. If that's true, is there ever any reason to specify $n$ in the signature? Is there an example in physics where $n$ is explicitly presented as nonzero? Answer: The signature of the metric is determined by the eigenvalues of the metric. Using a convention where spacelike distances are positive and timelike distances are negative, the Minkowski metric (describing spacetime without gravity) has $3$ positive and $1$ negative eigenvalue. In standard coordinates, the metric looks like (in units where $c=1$) $$ \eta_{\mu\nu} = \left( \begin{matrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{matrix} \right) $$ The Minkowski metric does have null directions, but there are no zero eigenvalues in the metric. It is just that there are distances where the negative (timelike) contribution to the distance exactly cancels the positive (spacelike) contribution to the distance. For example, if $\Delta x = (A, A, 0, 0)$, then $$ \eta_{\mu\nu} \Delta x^\mu \Delta x^\nu = -A^2 + A^2 = 0 $$ There are, as you said, an infinite number of null directions, but this isn't that surprising -- light can travel in an infinite number of directions. However, the existence of null directions is not relevant for the signature of the metric. The signature is just (3, 1).
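The cancellation is easy to verify numerically (a quick Python sketch using the diagonal entries of $\eta$ from the answer, in units with $c=1$):

```python
ETA_DIAG = (-1, 1, 1, 1)  # diagonal of the Minkowski metric, signature (3, 1)

def interval(dx):
    """Squared spacetime interval eta_mu_nu dx^mu dx^nu for dx = (t, x, y, z)."""
    return sum(e * c * c for e, c in zip(ETA_DIAG, dx))

null_like = interval((3.0, 3.0, 0.0, 0.0))  # timelike and spacelike parts cancel
timelike = interval((1.0, 0.0, 0.0, 0.0))   # negative
spacelike = interval((0.0, 1.0, 2.0, 2.0))  # positive
```

Note that the null result comes from cancellation between nonzero eigenvalue contributions, not from any zero eigenvalue in the metric itself.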
{ "domain": "physics.stackexchange", "id": 100507, "tags": "spacetime, vectors, event-horizon, spacetime-dimensions" }
split gravitational force into x, y, and z components
Question: I am writing a program for a computer science class in which I am doing an n-body simulation in 3-dimensional space. Currently, I have figured out the gravitational force along the hypotenuse between two bodies. Now I have to split these up into the x, y, and z components, and this is where I am having trouble. Force between two objects (force_t)=G*m1*m2/r^2 Typically, force_x=force_t*cos(angle between x-axis and hypotenuse) force_y=force_t*sin(angle between x-axis and hypotenuse) So, I can figure out my triangle. I know that my two bodies are located at (x1, y1, z1) and (x2, y2, z2), so I have my triangle. And I am fairly certain that the above equations will hold for x and y, even though I'm in three dimensions, but I'm just not sure what the corresponding equation for the z component will be. My thought was that the combination of the three forces has to equal the total force, so perhaps I solve for the z component algebraically using the Pythagorean theorem, but I have no way to check whether or not this is true. Anyway, any help would be appreciated. Thanks! Answer: For this application, I'd suggest using the actual gravitational force equation, for vector quantities: $$\vec{F} = \frac{Gm_1m_2}{r^3}\vec{r}$$ Here $\vec{r}$ is the vector pointing from the point where you are computing the force toward the source object (this direction makes the force attractive, as gravity should be). 
So if you're computing the force on object 2 (caused by object 1), you'll have $$\vec{r} = (x_1 - x_2, y_1 - y_2, z_1 - z_2)$$ or if you're computing the force on object 1 (caused by object 2), it'll be the other way around, $$\vec{r} = (x_2 - x_1, y_2 - y_1, z_2 - z_1)$$ Since the force equation is a vector equation, you effectively get one copy of it for each of the three directions, $$\begin{align}F_x &= \frac{Gm_1m_2}{r^3}r_x \\ F_y &= \frac{Gm_1m_2}{r^3}r_y \\ F_z &= \frac{Gm_1m_2}{r^3}r_z\end{align}$$ As a side note: Typically, force_x=force_t*cos(angle between x-axis and hypotenuse) force_y=force_t*sin(angle between x-axis and hypotenuse) Those equations do not generally hold in 3D. Or rather, the first one does, but not the second one. The general rule is $$F_s = |\vec{F}|\cos\bigl(\theta_{s,F}\bigr)$$ which means that the $s$-component of force ($F_s$), where $s$ can represent any axis, is equal to the magnitude of the force ($|\vec{F}|$) times the cosine of the angle between the force and the axis [$\cos(\theta_{s,F})$]. In the particular case of 2D space, the angle between the force and the x-axis is the complement of the angle between the force and the y-axis: $\theta_{y,F} = 90^\circ - \theta_{x,F}$. And since $\cos(90^\circ - \theta) = \sin\theta$ (in 2D only), you can replace $\cos(\theta_{y,F})$ with $\sin(\theta_{x,F})$.
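A minimal Python sketch of the vector form (the function name and test masses are mine; note the convention used here: $\vec r$ runs from the body feeling the force toward the source, so the returned force is attractive):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_on(m_self, p_self, m_other, p_other):
    """Force vector (Fx, Fy, Fz) on the body at p_self due to the body at p_other.
    Attractive: the returned vector points from p_self toward p_other."""
    rx = p_other[0] - p_self[0]
    ry = p_other[1] - p_self[1]
    rz = p_other[2] - p_self[2]
    r = math.sqrt(rx * rx + ry * ry + rz * rz)
    k = G * m_self * m_other / r ** 3
    return (k * rx, k * ry, k * rz)

# A 1 kg mass at Earth's surface radius, with Earth's mass at the origin:
f = gravity_on(1.0, (6.371e6, 0.0, 0.0), 5.972e24, (0.0, 0.0, 0.0))
# f is about (-9.8, 0, 0) N: each component is force_t * (r_component / r)
```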
{ "domain": "physics.stackexchange", "id": 1935, "tags": "forces, vectors" }
How do lenses magnify images?
Question: Using the ray model of light helps us to model what happens when a ray of light enters the lens, right? But it doesn't tell us how. As you can see in this image, the rays of light are converging to form a smaller or a bigger image, but what really happens? To make my question clear let me give you an example. If you give a computer scientist a 4 by 4 pixel image and ask him to enlarge it by a factor of two, what he would do is make an 8 by 8 pixel image and fill every 2 pixels with the color of the one pixel in the older image. By the same logic, what do lenses do? Do they increase the number of photons that are reflected from the object (which sounds wrong, but is just to give you an idea of what kind of answer I am looking for)? Answer: To make my question clear let me give you an example. If you give a computer scientist a 4 by 4 pixel image and ask him to enlarge it by a factor of two, what he would do is make an 8 by 8 pixel image and fill every 2 pixels with the color of the one pixel in the older image. By the same logic, what do lenses do? Ultimately, the same thing. The back of your eye, the retina, consists of many microscopic sensors. Images look "bigger" when they fall on more of these sensors. The optic nerve and brain are the parts that give things "size". So when you enlarge an image on the computer, you are making it physically larger so it spans more of your retina when you look at it. The optic system then says it's "bigger". When you look through a lens, the image is being spread out. So instead of it falling on a small number of sensors, it falls on more of them. Once again, the brain says "bigger". Consider Case VI in your images. In this we have a small object being viewed through a lens. If the rays from A and B went right into your eye you would get a small number of sensors being hit and you would say "small". But if you follow the rays after they come out of the lens, you can see that they are spread out.
So, in that case, they would hit more sensors, and look larger. Note that these diagrams also show why you need to hold a magnifying glass close to the object you're looking at.
{ "domain": "physics.stackexchange", "id": 57859, "tags": "optics, lenses" }
Breakdown of the Space Hierarchy Theorem
Question: Say that we have two deterministic space complexity classes $SPACE(n^k)$ and $SPACE(f(n))$ where $f(n) = n^{k-1}$ when $n$ is odd and $f(n) = n^{k+1}$ when $n$ is even. Obviously, if $f(n)$ were always $n^{k+1}$, we would be able to say $SPACE(n^k) \subseteq SPACE(f(n))$ by the Space Hierarchy Theorem, and I believe that as $n$ grows we can still generally say $SPACE(n^k) \subseteq SPACE(f(n))$ (correct me if I'm wrong), but given the condition when $n$ is odd, do we just have to say that always $SPACE(n^k) \neq SPACE(f(n))$? Answer: Here is a strengthening of the space hierarchy theorem. Suppose that $f(n) \geq \log n$ is space-constructible, that $g(n) = o(f(n))$, and that $h(n) = g(n)$ infinitely often (that is, for infinitely many $n$). Then $\mathrm{SPACE}(f(n)) \not\subseteq \mathrm{SPACE}(h(n))$. Here $\mathrm{SPACE}(f(n))$ consists of all languages decided by multi-tape Turing machines which use $O(f(n))$ space. Proof. We closely follow the Wikipedia proof of the space hierarchy theorem. Let $$ L = \{ (\langle M \rangle, 1^m) : \text{when $M$ is run on $x = (\langle M \rangle, 1^m)$, it uses $\le f(|x|)$ space and rejects} \} $$ Here $1^m$ is just $1$ repeated $m$ times, and we encode the input such that its size is exactly $C_M + m$, for some constant $C_M$ depending only on $M$. The language $L$ is clearly in $\mathrm{SPACE}(f(n))$. Suppose that it is also in $\mathrm{SPACE}(h(n))$, say accepted by a machine $M$ which uses space $Ch(n)$. Since there are infinitely many $n$ such that $h(n) = g(n)$ and $g(n) = o(f(n))$, we can find $m$ such that $Ch(C_M + m) = Cg(C_M + m) \leq f(C_M + m)$. When running $M$ on $x = (\langle M \rangle, 1^m)$, it uses at most $Ch(|x|) = Cg(|x|) \leq f(|x|)$, and so $(\langle M \rangle, 1^m) \in L$ iff $M$ doesn't accept $(\langle M \rangle, 1^m)$, contradicting the assumption that $M$ computes $L$. 
$\square$ Using this, you can check that $\mathrm{SPACE}(n^k)$ and $\mathrm{SPACE}(f(n))$, where $f(n)$ is the function defined in your post, are incomparable: neither is contained in the other.
{ "domain": "cs.stackexchange", "id": 19735, "tags": "complexity-theory, space-complexity, complexity-classes, memory-hierarchy" }
What is the definition of a "Symbol"
Question: I’m looking for the simplest, precise definition of a Symbol e.g. The symbols found within a signal The signal has N symbols per second I'm following this tutorial. The fellow defines the term symbol informally as "different states" and I do grasp the concept intuitively, however I am seeking a more formal definition. Here are various definitions I have found: My definition 1: "A distinct, observable state of a property of a physical medium ... which persists for a fixed unit of time in a communication channel" My definition 2: "A distinct state of a quantity of a waveform ... which persists for a fixed unit of time in a communication channel" Khan Academy definition: “A symbol can be broadly defined as the current state of some observable signal, which persists for a fixed period of time.” 4m 15s on this video Signal Processing StackExchange Definition: "A symbol is a symbolic representation of a baseband signal in digital communication." Electrical Engineering StackExchange Definition 1: "A symbol is any distinct state of the communication channel." Electrical Engineering StackExchange Definition 2: "A symbol is an information entity" Wikipedia Definition 1 (under Symbol Rate): “A symbol may be described as either a pulse in digital baseband transmission or a tone in passband transmission using modems. A symbol is a waveform, a state or a significant condition of the communication channel that persists, for a fixed period of time” Wikipedia Definition 2 (under Symbol (disambiguation)): “Symbol (data), the smallest amount of data transmitted at a time in digital communications” Out of interest, is there a mathematical/formal definition of a symbol? Perhaps that would help me. I'm not certain if this is the appropriate StackExchange to post the question; if it's not, should I try one of the following?
: Mathematics (under Information Theory) Electrical Engineering Network Engineering Thank you for reading Answer: In the words of that great communications theorist, William Shakespeare, "There are more things in heaven and earth, Horatio, than are dream'd of in your philosophy". The reason it's hard to pin down an exact definition of "symbol" is because it's a really handy concept that can be stretched to aid us in doing a lot of useful math, but an all-inclusive definition gets tediously vague, while the nice concise definition that I might use for my problem over here may not fit with the nice concise definition that you need to use for your problem over there. Worse, the definition may reasonably change even within a problem. For example, MSK modulation can be defined as frequency-shift modulation where a symbol is one bit-period long, the transmitted frequency can take on one of two values where the frequency shift is exactly $\frac{1}{2}$ the bit rate, and (this is crucial) the phase remains continuous from one bit to the next (no phase jumps). After you grind through about a page of math, you can show that "really", MSK isn't frequency shift keying, it's "really" quadrature phase shift keying, with pulses that are two bit periods long, and are half a period of a sinusoid (i.e., one bump). There's other examples. You can think of OFDM as being a chunk of spectrum that's been subdivided into a whole bunch of teeny sub-chunks, with each sub-chunk modulated one way or another (often QPSK). In that case you're sending $N$ channels, with some number of $m < N$ symbols ($m < N$ because OFDM often has "quiet channels"), then putting them together into a word later. Or, you can think of each frame of OFDM as being one giant symbol that's encoding an unreasonably large number of bits. Or, you can think of OFDM completely in the time domain, as being $m$ possible symbols all added together and modulated onto a wave.
If I remember correctly, the CDMA cell phone standards have one layer where they define 64 different possible "symbols", defined as 64 different possible Hadamard codes. All the above was just to soften you up -- now that you're reeling, here's a general-to-the-point-of-uselessness definition of a symbol: A symbol is a known pattern that is modulated onto the transmit signal, that can be usefully distinguished from other symbols at the receiver. Slightly more useful: Usually, symbols are all of the same duration, and are sent in such a way that they are orthogonal or nearly orthogonal to one another (look up "intersymbol interference" (ISI) for counterexamples -- ISI is usually unintentional, but sometimes it's profitable to let some creep in intentionally, as in GMSK). Usually, symbols are emitted on a known schedule (i.e., always starting at the same time, or, in the case of OQPSK, staggered but on an even interval, and the "odd numbered" symbols are guaranteed to be orthogonal to the "even numbered" symbols). The most important definition for a symbol is whatever the heck the author meant when they wrote the text you're reading. If you're lucky, they tell you. If you're kinda lucky, you can infer what they meant by reading on a bit and scratching your head (usually there's examples, either intentional or not). If you're not lucky at all, then you need to find help and ask. Personally, I consider it good style to say what I mean by a term when I'm using it, unless I feel it's super-obvious from the surrounding text. (I.e., if I assume you know what PSK is, then I might talk about "symbols" without defining them in context). But you can't always count on that, particularly in papers where you get a fixed amount of column space and are thus motivated to make your writing very concise.
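One way to make the "known patterns modulated onto the signal" definition concrete is a toy QPSK mapper. This is my own illustration, not something from the answer: the Gray-coded constellation below is one common convention among several, and the names are mine.

```python
import cmath
import math

# A QPSK "alphabet": four known patterns (phases), each standing for 2 bits.
# This particular Gray-coded bit-to-phase mapping is one common convention.
qpsk = {
    (0, 0): cmath.exp(1j * math.pi / 4),
    (0, 1): cmath.exp(3j * math.pi / 4),
    (1, 1): cmath.exp(5j * math.pi / 4),
    (1, 0): cmath.exp(7j * math.pi / 4),
}

def modulate(bits):
    """Group bits in pairs; each pair selects one symbol from the alphabet.
    Assumes len(bits) is even."""
    return [qpsk[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

syms = modulate([0, 0, 1, 1, 1, 0])
# six bits become three symbols, so the symbol rate is half the bit rate
```

The "fixed duration, known schedule" part of the definitions above is what lets a receiver slice the incoming waveform back into these patterns and decide which one was sent.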
{ "domain": "dsp.stackexchange", "id": 10535, "tags": "discrete-signals, digital-communications" }
Are Buckyball-sized black holes possible?
Question: The first item is the basic question; the subsequent items build upon it if it's possible. If these need to be broken into separate questions, I can do that, but they're pretty tightly related. Is a non-rotating negatively-charged singularity small enough to be contained by a C₆₀ Buckyball theoretically possible? Would the charge repulsion of the carbon electrons be stronger than the gravitational attraction, keeping the singularity from consuming them? Is it theoretically possible for a singularity to be too small to absorb hadrons or even elementary particles? Anything this small would doubtless evaporate very quickly, but just how long could they last? Thanks! Answer: A buckyball is about a nanometre ($10^{-9}$ m) across. If you limit the charge on the black hole to something like that of an electron or a few electrons, then this would mean the event horizon(s) of a charged, spinless, Reissner-Nordstrom black hole would be almost indistinguishable from that of a Schwarzschild black hole. The mass of this black hole would therefore be around $r_s c^2/2G \simeq 10^{18}$ kg. Yes, this is theoretically possible and maybe such black holes were produced during the big bang and are still around today. For Coulomb forces to outweigh gravitational forces then you need $$ \frac{Q^2}{4\pi \epsilon_0} > G M_{\rm BH}{m_C}\ .$$ In this case $Q \sim 10^{-18}$ Coulombs, $M_{\rm BH}\sim 10^{18}$ kg, $m_C = 12\times 1.67\times 10^{-27}$ kg. The LHS is $\sim 10^{-26}$ Nm$^2$ and the RHS is $\sim 10^{-18}$ Nm$^2$. So perhaps surprisingly, gravity will win and the buckyball will almost instantly be incorporated into the black hole. We don't have a theory governing the quantum behaviour of black holes. A singularity doesn't really have a size in a non-spinning black hole. A $10^{18}$ kg black hole would evaporate via Hawking radiation in about $10^{30}$ years.
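The force comparison in the answer is easy to check numerically. The sketch below plugs in the answer's quoted values in SI units; the charge $Q \approx 10^{-18}$ C (roughly six electron charges) is an assumed order of magnitude matching the answer, not a derived quantity.

```python
# Order-of-magnitude check: Coulomb repulsion vs. gravitational attraction
# for a carbon atom near a ~1 nm charged black hole (values from the answer).
G = 6.674e-11        # gravitational constant, N m^2 / kg^2
k_e = 8.988e9        # Coulomb constant 1/(4*pi*eps0), N m^2 / C^2
Q = 1e-18            # black-hole charge, C (~6 electron charges; assumed)
M_bh = 1e18          # black-hole mass for a ~1 nm horizon, kg
m_C = 12 * 1.67e-27  # mass of a carbon atom, kg

coulomb_term = k_e * Q**2      # LHS of the inequality (force times r^2)
gravity_term = G * M_bh * m_C  # RHS of the inequality (force times r^2)

ratio = gravity_term / coulomb_term
# gravity wins by roughly 8 orders of magnitude, as the answer concludes
```

Since both forces fall off as $1/r^2$, the distance cancels in the comparison, which is why the terms above can be compared without choosing a separation.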
{ "domain": "astronomy.stackexchange", "id": 6279, "tags": "black-hole, singularity, molecules" }
Computational complexity of Turán-type problems
Question: According to Turán's theorem (with $r=n/2$), any graph $G$ with $n$ vertices and at least $n(n-2)/2$ edges must contain a clique of size $n/2+1$. My question is: how hard is it to find this clique?$^*$ The problem is clearly in $\mathbf{FNP}$, and in fact in $\mathbf{TFNP}$. Moreover, by looking at the original proof of existence (first proof given here), it seems to be solvable by polynomial-time algorithm which recursively computes the clique (i.e. the problem is in $\mathbf{FP}$). Am I missing something here? $^*$The corresponding problem for $r=2$ (i.e., Mantel's theorem) is easy since we can enumerate over all triples of vertices in $G$. Answer: Since $r$ has to be an integer, I assume $n$ is even. I’m pretty sure the discussion below works for $n$ odd as well, if you decide whether you want $r=(n+1)/2$ or $r=(n-1)/2$ and adjust the bound according to the statement of Turán’s theorem. I find it easier to express the problem in terms of the complement of your graph. Then it becomes: given a graph $G$ on $n$ vertices with at most $n/2$ edges, find an independent set of size $n/2+1$. First, there is an off-by-one error: if $G$ consists of $n/2$ disjoint edges, it does not have any independent set of size $n/2+1$. Indeed, the statement of Turán’s theorem as given by your link requires $G$ to have strictly more than $n(n-2)/2$ edges, so in terms of the complement, the correct problem is: given a graph with less than $n/2$ edges, find an independent set of size $n/2+1$. This can be done trivially by the following polynomial-time algorithm: pick one vertex in each connected component of $G$. Indeed, since a component with $m$ vertices has at least $m-1$ edges, the number of edges in a graph with $n$ vertices and $c$ components is at least $n-c$. Thus, a graph with $\le n/2-1$ edges has $\ge n/2+1$ components.
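The one-vertex-per-component algorithm from the last paragraph is only a few lines of code. Here is a sketch in the complement formulation (function and variable names are mine), using union-find to identify the components:

```python
def independent_set_one_per_component(n, edges):
    """Pick one vertex from each connected component of a graph on
    vertices 0..n-1. The picks form an independent set; if the graph
    has fewer than n/2 edges, there are at least n/2 + 1 components,
    hence at least n/2 + 1 picks."""
    # Union-find to identify connected components
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)

    # One representative (the root) per component
    return sorted({find(v) for v in range(n)})

picks = independent_set_one_per_component(6, [(0, 1), (2, 3)])
# 4 components, so 4 picks, which is 6/2 + 1
```

The picks are independent because, by construction, every edge joins two vertices of the same component, and only one vertex per component is selected.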
{ "domain": "cstheory.stackexchange", "id": 5347, "tags": "graph-theory, extremal-combinatorics" }
What color do objects appear through green glasses in a room lit by red light?
Question: Suppose the only light source in a room is red light. What would an observer wearing green eyeglasses see? Answer: Well, the green eyeglasses only let green light pass through them, since they are transparent only to green light. Therefore if the only source of light was red light, then the entire room would be in darkness if viewed through the eyeglasses. The phenomenon displayed by your theoretical glasses is in fact the reason that the old (anaglyph) 3D glasses worked. One plastic lens only let red light pass through, and the other blue. The image you saw through the glasses was a composite of a red and blue image formed when one eye saw only the blue image and one eye the red. If they are appropriately spaced images, then the brain will process these two two-dimensional images and make it appear three dimensional, as it does when we see in everyday life. Additional info after comment by DJohnM: This assumes the 'red' light being emitted and the frequencies of 'green' light being allowed through do not overlap in their frequency spectra. I say this because the cells in our eyes that independently sense red, green and blue light can sometimes overlap in their sensing abilities meaning they can 'see' (be stimulated to send electrical signals to the brain) parts of the frequency spectra of the other colours of light. This is in fact, when pronounced enough, a cause of colour-blindness.
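The idealized case (no spectral overlap, which is exactly the caveat in the answer's addendum) can be modeled by treating the lamp and the filter as per-channel RGB factors and multiplying element-wise. This is a toy model of my own, not real spectra:

```python
def through_filter(light_rgb, filter_rgb):
    """Element-wise product: each channel of the filter scales the
    corresponding channel of the incoming light (0 = blocked, 1 = passed)."""
    return tuple(l * f for l, f in zip(light_rgb, filter_rgb))

red_lamp = (1.0, 0.0, 0.0)       # idealized red source: no green/blue at all
green_glasses = (0.0, 1.0, 0.0)  # idealized green filter: passes only green

seen = through_filter(red_lamp, green_glasses)
# (0.0, 0.0, 0.0): no light reaches the eye, so the room looks black
```

Giving the filter a small red leakage, e.g. `(0.1, 1.0, 0.0)`, lets a dim red image through, which is the overlapping-spectra situation the answer describes.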
{ "domain": "physics.stackexchange", "id": 35582, "tags": "visible-light, vision" }
MSSQL query to look for duplicate record
Question: This query took 6 seconds to complete. How can I optimize it? Total records in table is 166803. SELECT ltrim(rtrim(CAST(cageID as nvarchar(max))))+ltrim(rtrim(CAST(trayNo as nvarchar(max)))) as _unique,* from lf_transit_cage where ltrim(rtrim(CAST(cageID as nvarchar(max))))+ltrim(rtrim(CAST(trayNo as nvarchar(max)))) in ( SELECT dt._unique FROM ( SELECT ltrim(rtrim(CAST(cageID as nvarchar(max))))+ltrim(rtrim(CAST(trayNo as nvarchar(max)))) as _unique from lf_transit_cage ) as dt group by dt._unique HAVING COUNT(dt._unique)>1 ) order by cageID,trayNo Answer: As mentioned in the comments, there are benefits to casting/storing that unique key in the table during the ETL process, especially if it's going to be used in other places than just this query. Most likely, the performance hit is coming from using IN (typically results in a row by row lookup) and from de-duping with the casted key. You could get a performance gain from JOINing the subquery instead of using IN. You could also use ROW_NUMBER which, in my experience, is typically more performant than the GROUP BY with HAVING clause. Here's my example using ROW_NUMBER and CTE's for easier reading: --Calculate Unique NVARCHAR key ;WITH cte_lf_transit_cage AS ( SELECT ltrim(rtrim(CAST(cageID as nvarchar(max))))+ltrim(rtrim(CAST(trayNo as nvarchar(max)))) as _unique, * FROM lf_transit_cage ) --Get the Row Count , cte_rowcount AS ( SELECT _unique, ROW_NUMBER() OVER (PARTITION BY _unique ORDER BY cageID, trayNo) AS rowcnt FROM cte_lf_transit_cage ) --Grab all instances of duplicate rows SELECT ltc.* FROM cte_lf_transit_cage ltc WHERE EXISTS (SELECT _unique FROM cte_rowcount rc WHERE rc._unique = ltc._unique AND rc.rowcnt > 1 ) ORDER BY ltc.cageID, ltc.trayNo Also, as mentioned in the comments, you may not need to generate the _unique key depending on how the data is stored.
Might compare results to confirm: --Get the Row Count ;WITH cte_rowcount AS ( SELECT cageID, trayNo, ROW_NUMBER() OVER (PARTITION BY cageID, trayNo ORDER BY trayNo) AS rowcnt FROM lf_transit_cage ) --Grab all instances of duplicate rows SELECT ltrim(rtrim(CAST(ltc.cageID as nvarchar(max))))+ltrim(rtrim(CAST(ltc.trayNo as nvarchar(max)))) as _unique, ltc.* FROM lf_transit_cage ltc WHERE EXISTS (SELECT * FROM cte_rowcount rc WHERE rc.cageID = ltc.cageID AND rc.trayNo = ltc.trayNo AND rc.rowcnt > 1 ) ORDER BY ltc.cageID, ltc.trayNo
{ "domain": "codereview.stackexchange", "id": 18421, "tags": "sql, sql-server" }
How could one implement a circuit using Grover's algorithm to solve a linear system of equations?
Question: Given the following system: $$\begin{bmatrix}0 & 1 & 0\\1 & 1 & 1\\1 & 0 & 1\end{bmatrix} \begin{bmatrix}s_2\\ s_1\\ s_0\end{bmatrix} = \begin{bmatrix}0\\ 0\\ 0\end{bmatrix}$$ How could one implement a circuit such that the output is $|1\rangle$ when $|s_0\rangle$, $|s_1\rangle$, and $|s_2\rangle$ are solutions? Is this possible using Grover's algorithm and without hardcoding the solutions? Answer: You certainly could use Grover's search. You would create 2 registers. This first, of 3 qubits, would effectively store the $\{s_0,s_1,s_2\}$. This is the standard register for Grovers on which you apply the Grover iterator. Then, you'd have a second register of at least 3 qubits. You construct the search oracle by evaluating the matrix multiplication on the second register (I say that you need at least 3 qubits, because I haven't checked what you need to implement the calculation reversibly). Then you do a multi-controlled-phase gate which introduces a -1 phase only if all the qubits in the second register were in the 0 state (matching your target on the right-hand side). Then you reverse the computation of the matrix multiplication. However, just because you can do this, doesn't mean that you should. Computing the solution to this classically is linear in the number of variables. You should not be solving it by testing all possible examples of the input variables. Grover is usually applied to cases where the classical computation requires exponentially many steps. I think it is extremely unlikely that Grover would give you a computational advantage in this case. the Grover oracle may be implemented as follows: To understand how this works, focus on the first 3 steps. Look at the pattern of the targets along a given ancilla and how it corresponds to a row in your target matrix.
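Classically, the predicate that the oracle marks is just a matrix-vector product. Assuming the arithmetic is mod 2 (which is what a reversible evaluation into a qubit register computes), the whole search space has only eight points, which also underlines the answer's caution about Grover having no advantage here:

```python
from itertools import product

# The system from the question: each row acts on the vector (s2, s1, s0).
M = [[0, 1, 0],
     [1, 1, 1],
     [1, 0, 1]]

def is_solution(s):
    """True iff M @ s == 0 componentwise, mod 2. This is the condition the
    Grover oracle would flag by phase-flipping the matching basis state."""
    return all(sum(r * x for r, x in zip(row, s)) % 2 == 0 for row in M)

solutions = [s for s in product([0, 1], repeat=3) if is_solution(s)]
# -> [(0, 0, 0), (1, 0, 1)]: the basis states Grover would amplify
```

With two marked states out of eight, very few Grover iterations are needed, but as the answer stresses, Gaussian elimination over GF(2) solves such systems in polynomial time anyway.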
{ "domain": "quantumcomputing.stackexchange", "id": 2679, "tags": "grovers-algorithm, quantum-circuit, simons-algorithm" }
Which alkene on heating with alkaline KMnO4 solution gives acetone and CO2?
Question: Which alkene on heating with alkaline $\ce{KMnO4}$ solution gives acetone and a gas that turns lime water milky? 2-methyl-2-butene isobutylene 1-butene 2-butene I know that $\ce{CO2}$ turns lime water milky, but I'm not able to find the alkene in the question. Also, I want the mechanism involved. Answer: The best approach, and good practice, is to draw out the reaction products for each case. You should know that alkaline potassium permanganate replaces a carbon-carbon double bond with two carbon-oxygen bonds, one to each of the original double-bonded carbons. For instance, with 2-butene we have: $\ce{CH3-CH=CH-CH3 -> CH3-CH=O + CH3-CH=O}$ Here the product molecules are the same because the alkene is, of course, symmetric. But wait, there's more. The carbonyl compounds pictured in the above example are stable in the alkaline permanganate medium if they are ketones, with only carbon atoms attached to the carbonyl function. If there are aldehydes, the carbon-hydrogen bond(s) to the carbonyl group will be oxidized further. Thus the "acetaldehyde" identified above for 2-butene is further oxidized to break the carbon-hydrogen bond at the carbonyl group: $\ce{CH3-CH=CH-CH3 -> 2 CH3-CH=O -> 2 CH3-C(O)-OH}$ where in the alkaline medium the acetic acid will appear as acetate ion. As you work through your choices you find that there is one aldehyde where the carbonyl carbon is bonded only to hydrogen atoms allowing that function to be oxidized all the way to carbon dioxide (or, again accounting for the alkali, carbonate or bicarbonate ion). Find it, then see which choice gives that along with the other given product acetone. Bonus question: Would the acetone hold up against further oxidation? How would you tell?
{ "domain": "chemistry.stackexchange", "id": 10278, "tags": "organic-chemistry, reaction-mechanism, hydrocarbons, organic-oxidation" }
roscreate-pkg: command not found
Question: Hi there I'm new in working with ROS. I'm trying to create a package by the following command: roscreate-pkg beginner_tutorials std_msgs rospy roscpp but I get the following error roscreate-pkg: command not found I appreciate if you help me. Originally posted by Reza1984 on ROS Answers with karma: 70 on 2013-11-06 Post score: 0 Original comments Comment by BennyRe on 2013-11-06: Do other ROS bash command work? Like roscd/rosmsg/... Comment by Reza1984 on 2013-11-06: Yes, but rospack and roscreate-pkg doesn't work. Btw I'm working with fuerte in ubuntu 12.04 Comment by BennyRe on 2013-11-06: Did you source the setup.bash? Comment by Reza1984 on 2013-11-06: Thanks aloooot, I forgot to do it! Comment by BennyRe on 2013-11-06: Ok great I'll post it as an answer, so you can accept it. Comment by meriemm on 2015-04-17: hi,i have the same problem i have add the " source /opt/ros/hydro/setup.bash source ~/hydro_workspace/sandbox/setup.bash " but i still have the same problem,can you please help me. Answer: Source your setup.bash You may want to put this in your .bashrc Originally posted by BennyRe with karma: 2949 on 2013-11-06 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by meriemm on 2015-04-17: hi,i have the same problem i have add the " source /opt/ros/hydro/setup.bash source ~/hydro_workspace/sandbox/setup.bash " but i still have the same problem,can you please help me. Comment by BennyRe on 2015-04-20: Sourcing two setup.bash makes no sense. The first source gets overwritten by the second one. Have you set up your workspace correctly?
{ "domain": "robotics.stackexchange", "id": 16069, "tags": "ros, roscreate-pkg" }
Process string vector and change each string to uppercase
Question: The question below came from C++ primer 5th edition. Read a sequence of words from cin and store the values in a vector. After you’ve read all the words, process the vector and change each word to uppercase. Print the transformed elements, eight words to a line. I was just wondering if there is any way to improve my solution. int main(){ vector<string> stringVector; string s; while(cin >> s){ stringVector.push_back(s); } for(string &s : stringVector){ for(char &c : s){ c = toupper(c); } } for (int i = 0; i != stringVector.size(); i++){ if (i != 0 && i % 8 == 0){ cout << endl; } cout << stringVector[i] << " "; } return 0; } Answer: I see some things that may help you improve your code. Don't abuse using namespace std Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid. Know when to use it and when not to (as when writing include headers). Consider using standard algorithms There is nothing wrong with your transformation of the words to uppercase, but it may be useful to consider an alternative using a standard algorithm. In particular std::transform may be useful here: for(string &s : stringVector){ std::transform(s.begin(), s.end(), s.begin(), [](char c){ return std::toupper(c); }); } Use a counter rather than the % operator It's not really necessary to use the % operator here. It may not make any difference for this particular application, but on many processors, the % operator requires more CPU cycles (and therefore time) while counting down is usually very fast. Minimize the number of iterations through the vector The code currently makes three passes through the data. First, it gets the input, next it transforms to upper case, and finally it emits the upper case versions of the input words. These could all three be done in a single pass, although doing so doesn't quite match the given instructions. The advantage, however, is that now a std::vector isn't even needed any longer.
Written that way the program might look like this: #include <iostream> #include <string> #include <algorithm> #include <cctype> int main() { constexpr unsigned WORDS_PER_LINE = 8; std::string s; unsigned wordcount = WORDS_PER_LINE; while(std::cin >> s){ std::transform(s.begin(), s.end(), s.begin(), [](int c){ return std::toupper(c); }); std::cout << s << ' '; if (--wordcount == 0) { std::cout << '\n'; wordcount = WORDS_PER_LINE; } } std::cout << '\n'; } Omit return 0 When a C++ program reaches the end of main the compiler will automatically generate code to return 0, so there is no reason to put return 0; explicitly at the end of main.
{ "domain": "codereview.stackexchange", "id": 18463, "tags": "c++, strings, vectors" }
Why we do not consider trajectories for photons/lightlike curves/radiation? I am having a term confusion
Question: Lately I have asked a question about the trajectory of photons and almost everyone told me that I shouldn't talk about trajectories. Also people talked about photons, lightlike curves, light, radiation. It seems like there is an important distinction between all of these that I am missing. So why shouldn't I talk about photon trajectories? Probably you will say light is a wave, not a particle. But what does this even mean? Cannot light be particles carrying vectorial fields, with the wave composed of many particles whose magnitudes/intensities vary like a wave shape? What is the difference between all light related terms? Why do we have so many? Answer: The photon is an elementary point "particle". Particle is between quotation marks because it is not a classical point particle; it is a quantum mechanical entity. Quantum mechanical entities, depending on the boundary conditions, sometimes display classical point-like elementary particle behavior and sometimes have a probability density for their location that has sinusoidal properties, i.e. wavelike. A large number of physicists who have progressed and assimilated PhD level second quantization physics object to the term "trajectory", because the higher level description of this underlying quantum mechanical level is considered to be excitations on a large number of fields at each point (x,y,z), each particle in the table being represented by a field. This is too esoteric for most laymen or first year physics students, and in any case is mathematically isomorphic with the simple description of "trajectories", as seen for example in this electron in the magnetic field displayed in a bubble chamber photo. The red curve can be fitted very accurately as a classical charged particle turning in the magnetic field with energy loss. So trajectories are in the brain of the thinker. Back to photons. They have 0 charge and 0 mass.
The 0 charge means that they will only with great difficulty leave a track in any medium, and the 0 mass that according to special relativity they will be traveling in straight lines. The LHC experiments depend absolutely on these trajectories of photons to identify them with the vertex from which they came. Candidate Higgs boson event from collisions between protons in the CMS detector on the LHC. From the collision at the centre, the particle decays into two photons (dashed yellow lines and green towers) (Image: CMS/CERN) So much for photons in the lab. Now for radiation and light. Radiation is a term coming from classical electrodynamics, and light is represented as changing electric and magnetic fields radiating outwards from the source as a wave of energy density in space and time. Optics, which has been studied for centuries, uses rays to compute where the wave will impact and how it will be reflected or refracted. Indices of refraction and absorption give accurate results for radiation impinging on matter, etc. So it is not trajectories but rays, classically. Classical electromagnetic waves are built up in synergy by zillions of photons. This can be seen mathematically for those who can follow the mathematics. I hope it is now clear that we have many terms because light and photons are not synonymous, in the way that a brick and a building are not synonymous. When studying individual photons the concept of trajectory is useful in certain boundary conditions. When studying light, optical rays are a good tool in describing and predicting the behavior of light.
{ "domain": "physics.stackexchange", "id": 25188, "tags": "visible-light, electromagnetic-radiation, photons" }
All but Five Three Colorable
Question: An NP Problem Named All But Five Three Colorable (AB53C) is defined as follows: Input: Connected Graph G(V,E) The Connected Graph is AB53C, iff the Given Graph is 3-Colorable by leaving UP TO 5 Vertices Uncolored. Question: The Problem is in NP. Show the reduction from the 3-Colorable Problem. The Proposed Solution is: Find all Subsets where |V'| = |V| - 5. Basically these subsets will have 5 vertices less than the original set. Remove all edges from V' to V. All such subsets are found out and then passed through the 3-Color. If we get YES on any one of these Subgraphs, then we have an AB53C. I want someone to disprove my method OR show that the reduction is non-polynomial. Otherwise, my proposal is correct. Answer: Hint: Add a clique on 8 vertices to the graph.
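To make the problem statement itself concrete (this is not the requested reduction, only a checker for the definition), here is a brute-force sketch over my own small-graph encoding. It is exponential in the graph size, which is exactly why the hardness question is interesting:

```python
from itertools import combinations, product

def three_colorable(vertices, edges):
    """Brute force: try every assignment of 3 colors to the vertices."""
    vs = sorted(vertices)
    for colors in product(range(3), repeat=len(vs)):
        c = dict(zip(vs, colors))
        if all(c[u] != c[v] for u, v in edges):
            return True
    return False

def ab53c(vertices, edges):
    """AB53C: is the graph 3-colorable after leaving up to 5 vertices uncolored?"""
    vs = set(vertices)
    for k in range(6):  # leave out 0, 1, ..., 5 vertices
        for left_out in combinations(sorted(vs), k):
            keep = vs - set(left_out)
            kept_edges = [(u, v) for u, v in edges if u in keep and v in keep]
            if three_colorable(keep, kept_edges):
                return True
    return False

def clique(n):
    """Complete graph K_n on vertices 0..n-1."""
    return set(range(n)), list(combinations(range(n), 2))

# Sanity checks: K8 is AB53C (drop 5 vertices, 3-color the remaining K3),
# but K9 is not (dropping any 5 vertices still leaves a K4).
```

Only instances with a handful of vertices are feasible this way; the tiny-instance behavior of cliques around size 8 and 9 is worth pondering alongside the hint.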
{ "domain": "cs.stackexchange", "id": 3729, "tags": "complexity-theory, graphs, np-complete, reductions, colorings" }
Why does one use different materials for cathode and anode in the photoelectric effect experiment
Question: All photoelectric lab experiments I have seen so far have a setup where you have different materials for cathode and anode. However this raises some experimental difficulties since you have to take the contact EMF into account. From a pedagogic point of view it seems to be better to use just the same material for anode and cathode. What are the problems in using the same materials? Answer: If one used the same metal, then both electrodes would be equally photosensitive, which is not a good idea. Ideally one does not want the anode to produce any electrons at all, so one would choose a metal with a high work function (in the UV region). As far as teaching is concerned, there is nothing wrong with teaching students that science is (almost) always about "playing with handicap". Nature rarely ever presents us with kitchen clean effects and we have to use our minds to separate what we are interested in from the interference. I would use this as an opportunity.
{ "domain": "physics.stackexchange", "id": 29462, "tags": "experimental-physics, education, photoelectric-effect, experimental-technique" }
Shor's Algorithm Results - Qiskit
Question: I have been trying to build a Shor's Algorithm simulation for N = 15 on Qiskit's framework. Having referenced the Qiskit textbook, I built a circuit that largely resembles what they have done, with a few minor caveats. I am getting some strange and unexpected measurements, could anybody find where my problem is? Below is my code. N = 15 a = np.random.randint(2, 15) if math.gcd(a, N) != 1: raise ValueError("Non-trivial factor.") print(a) def a_mod15(a, x): if a not in [2,7,8,11,13]: raise ValueError("'a' must be 2,7,8,11 or 13") U = QuantumCircuit(4) for iteration in range(x): if a in [2, 13]: U.swap(0, 1) U.swap(1, 2) U.swap(2, 3) if a in [7, 8]: U.swap(2, 3) U.swap(1, 2) U.swap(0, 1) if a == 11: U.swap(1, 3) U.swap(0, 2) if a in [7, 11, 13]: for q in range(4): U.x(q) U = U.to_gate() U.name = "%i^%i mod 15" % (a, x) c_U = U.control() return c_U def mod_exp(qc, n, m, a): for x in range(n): qc.append(a_mod15(a, 2**x), [x] + list(range(n, n + m))) def iqft(qc, n): qc.append(QFT(len(n), do_swaps = False).inverse(), n) def circ(n, m, a): # Let n = 'X register' # Let m = 'W register' qc = QuantumCircuit(n + m, n) qc.h(range(n)) qc.x(n + m - 1) mod_exp(qc, n, m, a) iqft(qc, range(n)) qc.measure(range(n), range(n)) return qc n = 4 m = 4 qc = circ(n, m, a) qc.draw(fold=-1) simulator = Aer.get_backend('qasm_simulator') counts = execute(qc, backend=simulator).result().get_counts(qc) plot_histogram(counts) These are the expected Qiskit results (note they used 8 counting qubits and I used 4): Answer: Your error lies in the use of the built-in QFT inside your iqft function. It seems the issue gets resolved with either of the following two tweaks: Setting do_swaps=True, i.e. def iqft(qc, n): qc.append(QFT(len(n), do_swaps=True).inverse(), n) Reverting back to using Qiskit's hard-coded inverse QFT method, i.e. def qft_dagger(n): """n-qubit QFTdagger the first n qubits in circ""" qc = QuantumCircuit(n) # Don't forget the Swaps! 
for qubit in range(n//2): qc.swap(qubit, n-qubit-1) for j in range(n): for m in range(j): qc.cp(-np.pi/float(2**(j-m)), m, j) qc.h(j) qc.name = "QFT†" return qc def iqft(qc, n): qc.append(qft_dagger(len(n)), n)
{ "domain": "quantumcomputing.stackexchange", "id": 3493, "tags": "qiskit, shors-algorithm" }
Locale.getISOCountries() Wrapper
Question: I have the following code. I want to get a list of country CODE and NAME from Locale.getISOCountries() and set it into my custom-made class. I managed to do it, but I somehow feel that my code is not clean enough, especially the getCountryList() part, and I'm not sure how to simplify it. Is there any suggestion? I don't mind using lambdas if possible. private static List<CountryPair> getCountryList() { String[] locales = Locale.getISOCountries(); List<String> countryCodes = Arrays.asList(locales); List<CountryPair> pairList = new ArrayList<>(); countryCodes.forEach(code -> { CountryPair pair = new CountryPair(); Locale locale = new Locale("", code); pair.setValue(locale.getCountry()); pair.setDisplayName(locale.getDisplayCountry()); pairList.add(pair); }); return pairList; } Country pair class: public class CountryPair { private String value; private String displayName; public String getValue() { return value; } public String getDisplayName() { return displayName; } public void setValue(String value) { this.value = value; } public void setDisplayName(String displayName) { this.displayName = displayName; } } Answer: My recommendation is: create a constructor for CountryPair, which takes the parameter values: public CountryPair(String value, String displayName) { this.value = value; this.displayName = displayName; } Then, you can use an elegant map-chain: List<CountryPair> pairList = countryCodes.stream() .map(code -> new Locale("", code)) .map(locale -> new CountryPair(locale.getCountry(), locale.getDisplayCountry())) .collect(Collectors.toList()); Alternatively, you could also use a constructor which uses the Locale object as a parameter.
{ "domain": "codereview.stackexchange", "id": 25506, "tags": "java" }
Addition reaction of hydroiodic acid to 2-bromo-3-chloro-2-butene
Question: How can the major product of the addition reaction of 2-bromo-3-chloro-2-butene with hydroiodic acid be predicted, since Markovnikov's rule fails to distinguish it? $$\ce{H3C-C(Cl)=C(Br)-CH3~+~HI~->~?}$$

Answer: TL;DR - Sometimes you need to do the experiment.

Like all questions involving Markovnikov's rule, you should compare the structures of the two carbocations:

At C2 (bromo)
$$\ce{H3C-C(Cl)=C(Br)-CH3 + H+ -> H3C-CH(Cl)-C+(Br)-CH3}$$
The two issues to consider are resonance and induction.

Resonance: You could draw a resonance structure to show that the bromo group is stabilizing the carbocation:
$$\ce{R2C+-Br <-> R2C=Br+}$$
However, since the bromine atom is much larger than the carbon atom, it cannot form $\ce{p}-\pi$ overlap with carbon as effectively.

Induction: The carbocation has two stabilizing alkyl groups. What about that bromo group? Bromine is more electronegative than carbon, but not by much (2.96 vs 2.55 on the Pauling scale). Plus, bromine has a lot of electron density around it to donate, if it could. Bromine is thus a mild electron-withdrawing group by induction.

At C3 (chloro)
$$\ce{H3C-C(Cl)=C(Br)-CH3 + H+ -> H3C-C+(Cl)-CH(Br)-CH3}$$
We once again examine resonance and induction.

Resonance:
$$\ce{R2C+-Cl <-> R2C=Cl+}$$
Chlorine is smaller than bromine and closer in size to carbon. Thus, chlorine is a better resonance stabilizer than bromine.

Induction: The same two alkyl groups are present, so we will discount them. Chlorine is more electronegative than bromine (3.16 on the Pauling scale), and it has less electron density around it to begin with. Chlorine is thus a more powerful electron-withdrawer by induction than bromine.

Compare:

            Chlorine              Bromine
Resonance   Stronger donor        Weaker donor
Induction   Stronger withdrawer   Weaker withdrawer

Resonance usually wins, so I might expect carbocation formation preferentially at C3 with the chloro group, but I would expect the other product to form also. Now... what else might happen.
Regardless of the initial regioselectivity, the carbocation also can be stabilized by forming a halonium ion. Bromine is better at this since it is larger (and less electronegative so it can stabilize the positive charge better). This halonium would be attacked at the less hindered position (C2), resulting in transfer of the bromine to C3. So in effect, you can get three products: $$\ce{CH3-CHCl-CIBr-CH3}$$ $$\ce{CH3-CICl-CHBr-CH3}$$ $$\ce{CH3-CClBr-CHI-CH3}$$ You would then want to actually do the experiment to determine which of these is really the major product.
{ "domain": "chemistry.stackexchange", "id": 2838, "tags": "organic-chemistry, halides, c-c-addition" }
Random Forest Stacking Experiment for Imbalanced Data-set Problem
Question: In order to solve an imbalanced-dataset problem, I experimented with Random Forest in the following manner (somewhat inspired by deep learning): train a Random Forest, then use the predicted label probabilities of the trained model as additional input features to train another Random Forest. Pseudo-code for this:

train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.2)
rf_model = RandomForestClassifier()
rf_model.fit(train_X, train_y)
pred = rf_model.predict(test_X)
print('******************RANDOM FOREST CM*******************************')
print(confusion_matrix(test_y, pred))
print('******************************************************************')
predict_prob = rf_model.predict_proba(X)
X['first_level_0'] = predict_prob[:, :1].reshape(1, -1)[0]
X['first_level_1'] = predict_prob[:, 1:].reshape(1, -1)[0]
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.2)
rf_model = RandomForestClassifier()
rf_model.fit(train_X, train_y)
pred = rf_model.predict(test_X)
print('******************RANDOM FOREST 2 CM*******************************')
print(confusion_matrix(test_y, pred))
print('******************************************************************')

And I was able to see considerable improvement in the recall. Is this approach mathematically sound? I used the second layer of Random Forest so that it could correct the errors of the first layer, essentially combining the principle of boosting with the Random Forest bagging technique. Looking for thoughts.

Answer: The underlying idea is fine, but you've fallen into a common data leakage trap. By recombining the data and then resplitting, your second model's test set includes some of the first model's training set. The first model knows the labels on those datapoints and, especially if overfit, passes along that information in its predictions. So the score you see for the ensemble is probably optimistically biased.
The most common approach to fixing this is to use k-fold cross-validation to produce out-of-fold predictions on the entire training dataset for the second model. Note that sklearn now has such stacked ensembles built in: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.StackingClassifier.html
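To make the out-of-fold idea concrete, here is a minimal dependency-free sketch of how the meta-features for the second model can be generated. The toy "model" just predicts the mean of the training labels, and all names here (kfold_indices, oof_predictions) are illustrative, not from the question's code:

```python
def kfold_indices(n, k):
    # Yield (train_idx, val_idx) pairs for contiguous k-fold splits.
    fold = n // k
    for i in range(k):
        val = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        train = [j for j in range(n) if j not in val]
        yield train, val

def oof_predictions(y, k=5):
    # Each point's meta-feature comes from a model that never saw its label.
    preds = [None] * len(y)
    for train, val in kfold_indices(len(y), k):
        mean = sum(y[j] for j in train) / len(train)  # "fit" the toy model
        for j in val:
            preds[j] = mean  # predict on the held-out fold only
    return preds

y = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1]
meta = oof_predictions(y, k=5)
print(all(p is not None for p in meta))  # True: every point got an OOF value
```

With a real estimator you would fit on each training fold and call predict_proba on the validation fold; sklearn's cross_val_predict and StackingClassifier automate exactly this.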
{ "domain": "datascience.stackexchange", "id": 7301, "tags": "scikit-learn, random-forest, boosting, bagging" }
Having trouble deriving the exact form of the Kinematic Transport Theorem
Question: The kinematic transport theorem is a very basic theorem relating the time derivatives of vectors between a non-rotating frame and another frame that rotates with respect to it at a uniform angular velocity. I was trying to prove it for the special case of $3$ dimensions, and everything seems straightforward except that I run into difficulty obtaining the exact form of the final expression. Following is my attempted proof: Let's assume without loss of generality that we have a frame $\widetilde{O}$ rotating with angular velocity $\mathbf{\Omega} = (0,0,\omega)$ about the $z$ axis of another frame $O$. Let $\bf{f}$ be a vector seen by frame $O$ and $\widetilde{\mathbf{f}}$ the same vector as seen by an observer in the $\widetilde{O}$ frame, and further assume that this observer is located at position $\widetilde{\bf{r}}$ with respect to this rotating frame, at a distance $R$ away from the origin. Note that this distance $R$ is the same in both frames since their origins coincide by assumption.
It is clear that this observer, as seen from the $O$ frame is given by the position vector $\bf{r}$ as follows: \begin{equation} \textbf{r} = R(\cos{\omega t}, \sin{\omega t}, 0) \end{equation} Hence the vector $\widetilde{\bf{f}}$ is given by: \begin{equation} \widetilde{\textbf{f}} = \textbf{f} - \textbf{r} = (f_x - R\cos(\omega t), f_y - R\sin(\omega t), f_z) \end{equation} Differentiating the above with respect to time we get: \begin{align} \dot{\widetilde{\textbf{f}}} &= (\dot{f_x} + \omega R\sin(\omega t), \dot{f_y} - \omega R\cos(\omega t), \dot{f_z}) \\ & = \dot{\textbf{f}} + \omega R (\sin(\omega t), -\cos(\omega t), 0) \\ & = \dot{\textbf{f}} + \omega (r_y, -r_x, 0) \end{align} Isolating $\dot{\mathbf{f}}$ we get: \begin{equation} \dot{\textbf{f}} = \dot{\widetilde{\textbf{f}}} + \omega (-r_y, r_x, 0) \end{equation} Now as you can see, what I'm getting is slightly different than what I'm supposed to get according to the theorem, the above in fact reads: $$\left(\frac{d\mathbf{f}}{dt}\right)_O = \left( \frac{d\mathbf{f}}{dt}\right )_\widetilde{O} + \mathbf{\Omega} \times \mathbf{r} $$ Where I've replaced $\dot{\mathbf{f}}$ and $\dot{\widetilde{\mathbf{f}}}$ with the "abstract" notation that denotes differentiating with respect to each frame $O$ and $\widetilde{O}$, just to make it look more similar to how the theorem is usually stated. However, what I'm supposed to get according to the theorem is in fact: $$\left(\frac{d\mathbf{f}}{dt}\right)_O = \left( \frac{d\mathbf{f}}{dt}\right )_\widetilde{O} + \mathbf{\Omega} \times \mathbf{f} $$ Where is my error? 
I suspect that it may have to do with how I am interpreting the operation of differentiating the vector "in the rotating frame"; for example, I'm not totally sure it's correct to say that: $\dot{\widetilde{\textbf{f}}} = \left( \frac{d\mathbf{f}}{dt}\right )_\widetilde{O}$ Also, it's very strange that the final expression I got depends on the position of the observer in the rotating frame, but I can't find what causes this error either. Answer: As mentioned in a comment, I noticed that my method is inadequate when I saw that setting $R=0$ would lead to no rotation at all of the supposedly rotating frame. The proper way to describe vectors in different frames is, as @nickbros123 mentioned, to express the same vector in different coordinate bases. Once this is done correctly, the derivation is quite simple: Let $\mathbf{b_1},\mathbf{b_2},\mathbf{b_3}$ be any orthonormal basis of the non-rotating frame $O$ and $\widetilde{\mathbf{b_1}},\widetilde{\mathbf{b_2}},\widetilde{\mathbf{b_3}}$ those of the rotating frame $\widetilde{O}$, which rotates with angular velocity $\mathbf{\Omega} = (0,0,\omega)$.
So we have that: \begin{align} \widetilde{\mathbf{b_1}} &= \cos{\omega t}\mathbf{b_1} + \sin{\omega t}\mathbf{b_2} \\ \widetilde{\mathbf{b_2}} &= -\sin{\omega t}\mathbf{b_1} + \cos{\omega t}\mathbf{b_2} \\ \widetilde{\mathbf{b_3}} &= \mathbf{b_3} \end{align} Now the statement that for any vector $\mathbf{f}$ it must hold that $\left(\mathbf{f}\right)_O = \left(\mathbf{f}\right)_{\widetilde{O}}$ is simply that the vector is the same regardless of the basis it is expressed by, that is, if it has components $(f_x,f_y,f_z)$ with respect to $O$ and components $(\tilde{f_x},\tilde{f_y},\tilde{f_z})$ with respect to $\widetilde{O}$ then it must hold that: $$ f_x\mathbf{b_1}+f_y\mathbf{b_2}+f_z\mathbf{b_3} = \tilde{f_x}\widetilde{\mathbf{b_1}}+\tilde{f_y}\widetilde{\mathbf{b_2}}+\tilde{f_z}\widetilde{\mathbf{b_3}} $$ Differentiating the left hand side simply yields $\left(\frac{d\mathbf{f}}{dt}\right)_O = (\dot{f_x},\dot{f_y},\dot{f_z})$ and there are no additional terms in that basis simply because by assumption $\dot{\mathbf{b_i}} = 0$ for $i=1,2,3$. 
Differentiating the right-hand side as well, we get: \begin{align} \left(\frac{d\mathbf{f}}{dt}\right)_O &= \dot{\tilde{f_x}}\widetilde{\mathbf{b_1}}+\dot{\tilde{f_y}}\widetilde{\mathbf{b_2}}+\dot{\tilde{f_z}}\widetilde{\mathbf{b_3}}+\tilde{f_x}\dot{\widetilde{\mathbf{b_1}}}+\tilde{f_y}\dot{\widetilde{\mathbf{b_2}}}+\tilde{f_z}\dot{\widetilde{\mathbf{b_3}}} \\ &= \left(\frac{d\mathbf{f}}{dt}\right)_\widetilde{O} + \tilde{f_x}\dot{\widetilde{\mathbf{b_1}}}+\tilde{f_y}\dot{\widetilde{\mathbf{b_2}}}+\tilde{f_z}\dot{\widetilde{\mathbf{b_3}}} \end{align} Now since we note that: $$\dot{\widetilde{\mathbf{b_1}}} = -\omega\sin{\omega t}\mathbf{b_1}+\omega\cos{\omega t}\mathbf{b_2} = \omega\widetilde{\mathbf{b_2}} \\ \dot{\widetilde{\mathbf{b_2}}} = -\omega\cos{\omega t}\mathbf{b_1}-\omega\sin{\omega t}\mathbf{b_2} = -\omega\widetilde{\mathbf{b_1}} \\ \dot{\widetilde{\mathbf{b_3}}} = 0$$ We obtain: \begin{align} \left(\frac{d\mathbf{f}}{dt}\right)_O &= \left(\frac{d\mathbf{f}}{dt}\right)_\widetilde{O} + \tilde{f_x}\dot{\widetilde{\mathbf{b_1}}}+\tilde{f_y}\dot{\widetilde{\mathbf{b_2}}}+\tilde{f_z}\dot{\widetilde{\mathbf{b_3}}} \\ &= \left(\frac{d\mathbf{f}}{dt}\right)_\widetilde{O} + \omega\tilde{f_x}\widetilde{\mathbf{b_2}} -\omega\tilde{f_y}\widetilde{\mathbf{b_1}} \\ &= \left(\frac{d\mathbf{f}}{dt}\right)_\widetilde{O} + \left(\mathbf{\Omega} \times \mathbf{f}\right)_{\widetilde{O}} \end{align} Which appears to be the correct result. Additional observations and useful resources: Despite the fact that the above argument assumes a very special-looking form for $\mathbf{\Omega}$, it can in fact easily be generalized by considering that our starting orthonormal basis $\mathbf{b_1},\mathbf{b_2},\mathbf{b_3}$ can be related to any other fixed orthonormal basis by a fixed rotation.
With respect to this other basis, then, $\mathbf{\Omega}$ will no longer have this special form, and yet such a fixed rotation clearly leaves the rotated basis vectors time-independent and hence does not affect the rest of the derivation. Since writing this, I have found two very good posts that are closely related and worth reading. The first one is a derivation of the centrifugal and Coriolis force terms that appear in a rotating reference frame, which basically derives the same theorem (without naming it, which is why it took me a long time to find!) in a slightly different way. The second one is a truly beautiful mathematical treatment that much more generally derives the existence of all the known fictitious forces that arise in a non-inertial frame of reference; the kinematic transport theorem can also be derived by applying the same techniques used there.
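For anyone who wants a numerical cross-check of the final identity, here is a small pure-Python sketch (the function names, the choice of $\omega$ and the test vector are all arbitrary). It compares $(d\mathbf{f}/dt)_O$ against $(d\mathbf{f}/dt)_{\widetilde{O}} + \mathbf{\Omega}\times\mathbf{f}$, with both sides expressed in the fixed basis:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

w = 0.7  # angular speed about z, so Omega = (0, 0, w)

def f_fixed(t):
    # An arbitrary time-dependent vector, components in the fixed frame O.
    return (math.cos(3*t), math.sin(2*t), t)

def f_rotating(t):
    # Same vector, components in the rotating basis (rotate by -wt about z).
    fx, fy, fz = f_fixed(t)
    c, s = math.cos(w*t), math.sin(w*t)
    return (c*fx + s*fy, -s*fx + c*fy, fz)

def deriv(g, t, h=1e-6):
    # Central finite difference of a vector-valued function.
    return tuple((a - b) / (2*h) for a, b in zip(g(t + h), g(t - h)))

t = 1.3
lhs = deriv(f_fixed, t)          # (df/dt)_O, components in O
ftil_dot = deriv(f_rotating, t)  # (df/dt)_O-tilde, components in O-tilde
# Re-express (df/dt)_O-tilde in O components (rotate by +wt), add Omega x f:
c, s = math.cos(w*t), math.sin(w*t)
back = (c*ftil_dot[0] - s*ftil_dot[1], s*ftil_dot[0] + c*ftil_dot[1], ftil_dot[2])
rhs = tuple(b + o for b, o in zip(back, cross((0, 0, w), f_fixed(t))))
print(all(abs(a - b) < 1e-5 for a, b in zip(lhs, rhs)))  # True
```

The finite-difference step and tolerance are loose enough that rounding error cannot produce a false mismatch.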
{ "domain": "physics.stackexchange", "id": 94672, "tags": "classical-mechanics, reference-frames, vectors, differentiation, rotational-kinematics" }
Dynamically mapping an API's setting IDs to instance methods using Descriptors
Question: I'm working with an API, where the url can be constructed with setting_ids. To more clearly indicate the functionality of the setting ids I am mapping them to a WriterClass' methods, with relevant names. In the code below these are method_a to method_c, in the true application this is a list of ~250 settings. The functionality I desire would access the API for a desired setting through the structure writer_instance.setting_name(value) - returning a to be awaited coroutine. For this I'm using descriptors, with which I'm still familiarising myself. This is my main motivation for asking this question, to ensure I'm utilising them correctly and as efficiently as possible - and if not, to learn how to do so now and in the future. Below is code that functionally does what I desire, with placeholder prints instead of code accessing the (private) API. The Credentials class is a separate class as it's utilised by other Classes which also access the same API. import asyncio import httpx DOMAIN = 'api.mysite.com' class Credentials: # In true context this fetches an api_token from database using the serial. def __init__(self, device_serial: str, api_token): self.device_serial = device_serial self._api_token = api_token self.headers = self.__build_headers() def __build_headers(self): headers = { 'Authorization': ('Bearer ' + self._api_token), 'Content-Type': 'application/json', 'Accept': 'application/json', } return headers class MethodDescriptor: def __init__(self, id): self._id = id def __get__(self, instance, owner): # Using this so that the final method has access to instance variables. # Not happy with this approach necessarily, but it gets the results I desire. return BoundMethod(self._id, instance) class BoundMethod: def __init__(self, id, instance): self._id = id self._instance = instance async def __call__(self, value): # placeholder logic for actual API call. 
await asyncio.sleep(1) # Simulate work print( f"Method called with id={self._id}, {value=}, at url={self._instance.domain} for serial {self._instance.device_serial}") return f'accessed setting number {self._id}' class WriterClass: method_a = MethodDescriptor(1) method_b = MethodDescriptor(2) method_c = MethodDescriptor(3) def __init__(self, credentials: Credentials, client: httpx.AsyncClient): self.device_serial = credentials.device_serial self.headers = credentials.headers self.domain = DOMAIN self.client = client # Example usage async def main(): creds = Credentials("123AA", "API_TOKEN") # API token obtained elsewhere in real code. async with httpx.AsyncClient() as client: writer = WriterClass(creds, client) task1 = asyncio.create_task(writer.method_c("some_str")) # should use __call__ with self._id = 3, value="some_str" and instance vars. task2 = asyncio.create_task(writer.method_b("12:34")) task3 = asyncio.create_task(writer.method_a("3700")) results = await asyncio.gather(task1, task2, task3) print(results) if __name__ == '__main__': asyncio.run(main()) Which returns: Method called with id=3, value='some_str', at url=api.mysite.com for serial 123AA Method called with id=2, value='12:34', at url=api.mysite.com for serial 123AA Method called with id=1, value='3700', at url=api.mysite.com for serial 123AA ['accessed setting number 3', 'accessed setting number 2', 'accessed setting number 1'] As expected. Answer: Use a More Meaningful Name For Your Descriptor Class? How about renaming your descriptor class from MethodDescriptor to SettingDescriptor? And consider renaming class WriterClass to SettingWriter. I would also use more descriptive names for the descriptor instances themselves. For example: class SettingWriter: set_a = SettingDescriptor(1) set_b = SettingDescriptor(2) set_c = SettingDescriptor(3) ... Then your main function becomes: async def main(): creds = Credentials("123AA", "API_TOKEN") # API token obtained elsewhere in real code. 
async with httpx.AsyncClient() as client:
    setter = SettingWriter(creds, client)
    task1 = asyncio.create_task(setter.set_c("some_str"))  # should use __call__ with self._id = 3, value="some_str" and instance vars.
    task2 = asyncio.create_task(setter.set_b("12:34"))
    task3 = asyncio.create_task(setter.set_a("3700"))
    results = await asyncio.gather(task1, task2, task3)
    print(results)

Calling a method named set_a seems more descriptive than calling method_a. The above renaming suggestions are just that -- suggestions. You might be able to find names that are even more descriptive than the ones I quickly came up with, due to your greater familiarity with the actual application.

A Simplification to Consider

It seems to me that you can do away with class BoundMethod if you modify class SettingDescriptor as follows:

class SettingDescriptor:
    def __init__(self, id):
        self._id = id

    def __get__(self, instance, owner):
        # Using this so that the final method has access to instance variables.
        self._instance = instance
        return self

    async def __call__(self, value):
        # placeholder logic for actual API call.
        await asyncio.sleep(1)  # Simulate work
        print(f"Method called with id={self._id}, {value=}, at url={self._instance.domain} for serial {self._instance.device_serial}")
        return f'accessed setting number {self._id}'

Add Type Hints

Describe what arguments and return values a method/function expects using type hints. For example,

from typing import Type, Callable
...
class SettingDescriptor:
    def __init__(self, id: int):
        self._id = id

    def __get__(self, instance: object, owner: Type) -> Callable:
        ...

Add Docstrings Describing What Your Methods Do

For example,

class SettingDescriptor:
    ...
    def __get__(self, instance: object, owner: Type) -> Callable:
        """Returns self, a callable that invokes the API
        with the appropriate id and value arguments."""
        ...

But Is Using Descriptors for This Overkill?
Ultimately, you are just trying to map a property name, which is a string, to an integer that the API you are using requires. For this you could use a dictionary. So what if you just had this instead: ... class SettingWriter: property_name_mapping = { 'a': 1, 'b': 2, 'c': 3 } def __init__(self, credentials: Credentials, client: httpx.AsyncClient): self.device_serial = credentials.device_serial self.headers = credentials.headers self.domain = DOMAIN self.client = client async def set(self, property_name: str, value: str) -> str: id = self.property_name_mapping[property_name] await asyncio.sleep(1) # Simulate work print( f"Method called with id={id}, {value=}, at url={self.domain} for serial {self.device_serial}") return f'accessed setting number {id}' # Example usage async def main(): creds = Credentials("123AA", "API_TOKEN") # API token obtained elsewhere in real code. async with httpx.AsyncClient() as client: setter = SettingWriter(creds, client) task1 = asyncio.create_task(setter.set("c", "some_str")) task2 = asyncio.create_task(setter.set("b", "12:34")) task3 = asyncio.create_task(setter.set("a", "3700")) results = await asyncio.gather(task1, task2, task3) print(results)
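As a further variation (our own sketch, not part of the answer above): if you like the attribute-style access of the descriptor version but want the simplicity of the dictionary version, __getattr__ can bridge the two. Names such as _setting_ids and the placeholder setter body are illustrative assumptions, not the real API:

```python
import asyncio

class SettingWriter:
    _setting_ids = {"set_a": 1, "set_b": 2, "set_c": 3}

    def __init__(self, domain):
        self.domain = domain

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, so real
        # attributes like self.domain are unaffected.
        try:
            setting_id = self._setting_ids[name]
        except KeyError:
            raise AttributeError(name) from None

        async def setter(value):
            # Placeholder for the real API call.
            return f"accessed setting number {setting_id} with {value!r}"
        return setter

async def main():
    writer = SettingWriter("api.mysite.com")
    return await writer.set_b("12:34")

print(asyncio.run(main()))  # accessed setting number 2 with '12:34'
```

This keeps the `writer.set_b(...)` call style from the question while reducing the machinery to a plain dictionary lookup; the trade-off is that the setting names are no longer visible to static tooling or autocomplete.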
{ "domain": "codereview.stackexchange", "id": 45547, "tags": "python, python-3.x, classes" }
Unit Problem in Designing a Filter for a Given Auto Correlation Function
Question: Given a WSS process with the following Auto Correlation Function: $$ r\left ( \tau \right ) = {\sigma}^{2} {e}^{-\alpha \left | \tau \right |} $$ The Laplace Transform would be: $$ R \left ( s \right ) = \mathfrak{L} \left \{ r \left ( \tau \right ) \right \} = \frac{-2 \alpha {\sigma} ^ {2}}{\left ( s - \alpha \right ) \left ( s + \alpha \right )} $$ Hence the filter would be of the form: $$ H \left( s \right) = \frac{c}{s + \alpha} $$ My question is about units. In the Auto Correlation Function the units of $ \alpha $ are [Hz], while in the filter form, assuming $ s = j \omega $, the units are [Rad / Sec]. How can this conflict be resolved? What am I missing? Thanks. Answer: Radians are considered to be dimensionless. See Are angles dimensionless? and Dimensionless quantity. They are considered to be pure numbers like $\pi$. So $\alpha$ is in Hz, which is a measure of 1/second, and $s$ is also considered to be measured per second.
{ "domain": "dsp.stackexchange", "id": 105, "tags": "filters, filter-design, autocorrelation" }
Can we find actual rest mass of things on Earth
Question: Earth moves around the Sun, the Sun moves around the galaxy, and the galaxy moves with unknown speed and direction. We have speed, so the mass of us all is altered. Can we know the real rest mass? If so, can we deduce our speed in the universe? Answer: Earth moves around the Sun and the Sun moves around the galaxy and the galaxy moves with unknown speed and direction. We have speed so the mass of us all altered. The relativistic mass is altered, but this is a somewhat archaic term these days, and is said to be a measure of energy. Nowadays when we say mass without qualification, we tend to mean rest mass. Like Rc and Jazz said, this doesn't change with speed. Instead it changes with gravitational potential, see mass in general relativity and the mass deficit. Unfortunately rest mass is also called invariant mass, which is rather confusing. Can we know the real rest mass? Yes, because the mass of a body is a measure of its energy content, and energy is conserved. But we have no accepted theories for that at present. For example, the first free parameter of the Standard Model is the electron mass. If so, can we deduce our speed in the universe? We don't need to deduce it. We can measure it, from the CMB dipole anisotropy. We're moving at 627±22 km/s relative to the reference frame of the CMB. Or relative to the universe as a whole.
{ "domain": "physics.stackexchange", "id": 23108, "tags": "special-relativity, mass, reference-frames, relativity, relative-motion" }
PNG steganography tool in C
Question: This is a steganography tool enabling you to conceal any file within a PNG file. In order to compile the program you need libpng to be installed on the system. It is one of my personal projects and I would love to receive expert advice. Running Hide message in PNG file: steg -h <file_in> <png_in> <png_out> Read message from PNG file: steg -r <png_in> <file_out> The arguments <file_in> and <file_out> can be filenames of any (binary) files. Code I'm following Linus Torvalds's coding style. In steg.c I have: #include <stdio.h> #include <stdlib.h> #include <string.h> #include "png_io.h" /* hide one byte in png */ void hide_byte(unsigned char byte, long i) { short bit; for (bit = 0; bit < CHAR_BIT; ++bit) { long abs_bit = i * CHAR_BIT + bit; int y = abs_bit / channels / width; int x = abs_bit % (width * channels); png_byte *value = &row_pointers[y][x]; if (byte & (1 << (CHAR_BIT - bit - 1))) *value = (*value & ~1) + 1; /* 1 */ else *value = *value & ~1; /* 0 */ } } /* hide file contents in png */ void hide_file(char *filename, char *src_png_name, char *out_png_name) { FILE *fp = fopen(filename, "rb"); if (!fp) { fprintf(stderr, "Error: failed to open file \"%s\"\n", filename); exit(EXIT_FAILURE); } fseek(fp, 0, SEEK_END); long fsize = ftell(fp); fseek(fp, 0, SEEK_SET); unsigned char buffer[4 + fsize]; buffer[0] = fsize >> 24; buffer[1] = fsize >> 16; buffer[2] = fsize >> 8; buffer[3] = fsize; fread(&buffer[4], fsize, 1, fp); printf("sizeof(buffer) = %zu\n", sizeof(buffer)); read_png(src_png_name); printf("width: %d\nheight: %d\nchannels: %d\n", width, height, channels); if ((CHAR_BIT * sizeof(buffer)) > (width * height * channels)) { fprintf(stderr, "Error: binary file doesn't fit into png file\n"); exit(EXIT_FAILURE); } long i; for (i = 0; i < sizeof(buffer); ++i) { hide_byte(buffer[i], i); } write_png(out_png_name); fclose(fp); } /* read one byte from png */ void read_byte(unsigned char *byte, long i) { short bit; for (bit = 0; bit < CHAR_BIT; ++bit) { long 
abs_bit = i * CHAR_BIT + bit; int y = abs_bit / channels / width; int x = abs_bit % (width * channels); png_byte *value = &row_pointers[y][x]; if (*value & 1) *byte += 1 << (7 - bit); } } /* read file contents from png */ void read_file(char *filename, char *png_name) { FILE *fp = fopen(filename, "wb"); if (!fp) { fprintf(stderr, "Error: failed to open file \"%s\"", filename); exit(EXIT_FAILURE); } read_png(png_name); printf("width: %d\nheight: %d\nchannels: %d\n", width, height, channels); long fsize = 0; short ib; for (ib = 0; ib < 32; ++ib) { int y = ib / (width * channels); int x = ib % (width * channels); png_byte *value = &row_pointers[y][x]; if (*value % 2 == 1) fsize += 1 << (31 - ib); } unsigned char buffer[4 + fsize]; memset(buffer, 0, sizeof(buffer)); printf("sizeof(buffer) = %zu\n", sizeof(buffer)); long i; for (i = 0; i < sizeof(buffer); ++i) { read_byte(&buffer[i], i); } fwrite(&buffer[4], fsize, 1, fp); fclose(fp); } /* handle command line arguments */ int main(int argc, char *argv[]) { if (argc == 5 && !strcmp(argv[1], "-h")) { hide_file(argv[2], argv[3], argv[4]); } else if (argc == 4 && !strcmp(argv[1], "-r")) { read_file(argv[3], argv[2]); } else { printf("Hide message: %s -h <file_in> <png_in> <png_out>\n" "Read message: %s -r <png_in> <file_out>\n", argv[0], argv[0]); } return EXIT_SUCCESS; } In png_io.c I have: #include <png.h> #include <stdbool.h> #include <stdio.h> #include <stdlib.h> #include "png_io.h" #define PNG_BYTES_TO_CHECK 4 /* global variables */ png_infop info_ptr; png_bytepp row_pointers; png_uint_32 width, height; png_byte channels; /* read png file */ void read_png(char *filename) { FILE *fp = fopen(filename, "rb"); if (!fp) { fprintf(stderr, "Error: failed to open file '%s'\n", filename); exit(EXIT_FAILURE); } /* read signature bytes */ unsigned char sig[PNG_BYTES_TO_CHECK]; if (fread(sig, 1, PNG_BYTES_TO_CHECK, fp) != PNG_BYTES_TO_CHECK) { fprintf(stderr, "Error: failed to read signature bytes" "from '%s'\n", filename); 
exit(EXIT_FAILURE); } /* compare first bytes of signature */ if (png_sig_cmp(sig, 0, PNG_BYTES_TO_CHECK)) { fprintf(stderr, "Error: '%s' is not a PNG file\n", filename); exit(EXIT_FAILURE); } /* initialize png_struct `png_ptr` */ png_structp png_ptr = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL); if (!png_ptr) { fclose(fp); fprintf(stderr, "Error: memory allocation failed\n"); exit(EXIT_FAILURE); } /* allocate memory for image information */ info_ptr = png_create_info_struct(png_ptr); if (!info_ptr) { fclose(fp); png_destroy_read_struct(&png_ptr, NULL, NULL); exit(EXIT_FAILURE); } /* set error handling */ if (setjmp(png_jmpbuf(png_ptr))) { fclose(fp); png_destroy_read_struct(&png_ptr, NULL, NULL); exit(EXIT_FAILURE); } /* set up input control */ png_init_io(png_ptr, fp); /* because we read some of the signature */ png_set_sig_bytes(png_ptr, PNG_BYTES_TO_CHECK); /* read entire image into info structure */ png_read_png(png_ptr, info_ptr, PNG_TRANSFORM_IDENTITY, NULL); /* optain information */ row_pointers = png_get_rows(png_ptr, info_ptr); width = png_get_image_width(png_ptr, info_ptr); height = png_get_image_height(png_ptr, info_ptr); channels = png_get_channels(png_ptr, info_ptr); /* free allocated memory */ png_destroy_read_struct(&png_ptr, NULL, NULL); fclose(fp); } /* write png file */ void write_png(char *filename) { FILE *fp = fopen(filename, "wb"); if (!fp) { fprintf(stderr, "Error: failed to open file '%s'\n", filename); exit(EXIT_FAILURE); } /* initialize png_struct `png_ptr` */ png_structp png_ptr = png_create_write_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL); if (!png_ptr) { fclose(fp); fprintf(stderr, "Error: memory allocation failed\n"); exit(EXIT_FAILURE); } /* set up output control */ png_init_io(png_ptr, fp); /* save new pixel values */ png_set_rows(png_ptr, info_ptr, row_pointers); /* all image data in info structure */ png_write_png(png_ptr, info_ptr, PNG_TRANSFORM_IDENTITY, NULL); /* free allocated memory */ 
png_destroy_write_struct(&png_ptr, &info_ptr); fclose(fp); } In png_io.h I have: #ifndef PNG_IO_H #define PNG_IO_H #include <png.h> /* global variables */ extern png_bytepp row_pointers; extern png_byte channels; extern png_uint_32 width, height; /* read png file */ void read_png(char *filename); /* write png file */ void write_png(char *filename); #endif /* PNG_IO_H */ Answer: Unconditional masks This: if (byte & (1 << (CHAR_BIT - bit - 1))) *value = (*value & ~1) + 1; /* 1 */ else *value = *value & ~1; /* 0 */ is really just *value &= ~1; if (byte & (1 << (CHAR_BIT - bit - 1))) *value |= 1; Const arguments void hide_file(char *filename, char *src_png_name, char *out_png_name) should be void hide_file(const char *filename, const char *src_png_name, const char *out_png_name) C99 It's nice. It will allow this: long i; for (i = 0; i < sizeof(buffer); ++i) { to be for (long i = 0; i < sizeof(buffer); ++i) { Or vs. add I suspect that these: *byte += 1 << (7 - bit); *value = (*value & ~1) + 1; /* 1 */ are more safely expressed as bitwise or | operations, and really that better communicates your intent anyway.
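The equivalence behind the "Unconditional masks" suggestion can be checked exhaustively. Here is a quick sketch in Python (our own, simply mirroring the C expressions over all byte values):

```python
# Verify the reviewer's branchless LSB manipulation against the original
# branchy version from steg.c, for every possible byte value.
for v in range(256):
    # setting the LSB: (v & ~1) + 1 is the same as (v & ~1) | 1
    assert (v & ~1) + 1 == ((v & ~1) | 1)
    for bit in (0, 1):
        branchy = (v & ~1) + 1 if bit else v & ~1  # original if/else logic
        branchless = (v & ~1) | bit                # clear LSB, then OR in bit
        assert branchy == branchless
print("equivalent for all 256 byte values")
```

Since `v & ~1` is always even, adding 1 and OR-ing with 1 touch only the low bit, which is why the two forms never diverge (and why the C version cannot overflow a png_byte).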
{ "domain": "codereview.stackexchange", "id": 41034, "tags": "algorithm, c, file, image, steganography" }
A recursive_count_if Function with Automatic Type Deducing from Lambda for Various Type Arbitrary Nested Iterable Implementation in C++
Question: This is a follow-up question for A recursive_count_if Function For Various Type Arbitrary Nested Iterable Implementation in C++ and A recursive_count_if Function with Specified value_type for Various Type Arbitrary Nested Iterable Implementation in C++. After digging into how to detect the argument type of a function, I found that it is possible to simplify the T1 parameter in the last implementation with the boost::callable_traits::args_t syntax from the Boost.CallableTraits library. As a result, the recursive_count_if template function can be used exactly as in the following code.

std::vector<std::vector<std::string>> v = {{"hello"}, {"world"}};
auto size5 = [](std::string s) { return s.size() == 5; };
auto n = recursive_count_if(v, size5);

The implementation of the recursive_count_if function with automatic type deduction:

#include <boost/callable_traits.hpp>

// recursive_count_if implementation
template<class T1, class T2>
requires (is_iterable<T1> &&
          std::same_as<std::tuple<std::iter_value_t<T1>>, boost::callable_traits::args_t<T2>>)
auto recursive_count_if(const T1& input, const T2 predicate)
{
    return std::count_if(input.begin(), input.end(), predicate);
}

// transform_reduce version
template<class T1, class T2>
requires (is_iterable<T1> &&
          !std::same_as<std::tuple<std::iter_value_t<T1>>, boost::callable_traits::args_t<T2>>)
auto recursive_count_if(const T1& input, const T2 predicate)
{
    return std::transform_reduce(std::begin(input), std::end(input), std::size_t{}, std::plus<std::size_t>(),
        [predicate](auto& element) { return recursive_count_if(element, predicate); });
}

The is_iterable concept used:

template<typename T>
concept is_iterable = requires(T x)
{
    *std::begin(x);
    std::end(x);
};

The constraints of usage: Because the type in the input lambda function plays the role of the termination condition, you cannot use the auto keyword (generic lambdas) here. If a lambda function like [](auto element) { } is passed in, compile errors will pop up.
If you want to use generic lambdas, you could choose the previous version of the recursive_count_if function, because its termination condition is separated. A Godbolt link is here. All suggestions are welcome. The summary information: Which questions is it a follow-up to? A recursive_count_if Function For Various Type Arbitrary Nested Iterable Implementation in C++ and A recursive_count_if Function with Specified value_type for Various Type Arbitrary Nested Iterable Implementation in C++ What changes have been made to the code since the last question? This version of the recursive_count_if template function is Boost-dependent, and the type of the termination condition can be deduced automatically from the input lambda parameter. Why is a new review being asked for? In my opinion, the requires-clause of the recursive_count_if template function is a little complex, and it's Boost-dependent. If there is any simpler way to do this, please let me know. Answer: Don't try to deduce the predicate's parameter type As Quuxplusone mentioned in the comments of the predecessor question, you should not try to determine the type of the predicate function's parameter; this is bound to fail. And indeed the problem immediately starts when you look at your predicate: auto size5 = [](std::string s) { return s.size() == 5; }; And think: hey, that's making a copy of s; I should pass it by const reference instead: auto size5 = [](const std::string &s) { return s.size() == 5; }; Now it fails the requires clauses of your functions. You might be able to add some more tricks to cast away cv-qualifiers and references before comparing the types with std::same_as, but that does not cover all possible situations either. 
For example, I could write a lambda which allows any type of std::basic_string to be checked: auto size5 = []<typename T>(std::basic_string<T> s) { return s.size() == 5; }; As I suggested in the comments, you just want to write a concept that checks if the predicate can be applied to the argument it is fed; you don't need to check the types themselves. Here is a possible implementation: template<typename Pred, typename T> concept is_applicable_to_elements = requires(Pred predicate, const T &container) { predicate(*container.begin()); }; template<class T1, class T2> requires is_applicable_to_elements<T2, T1> auto recursive_count_if(const T1& input, const T2 predicate) { return std::count_if(input.begin(), input.end(), predicate); } template<class T1, class T2> auto recursive_count_if(const T1& input, const T2 predicate) { return std::transform_reduce(std::begin(input), std::end(input), std::size_t{}, std::plus<std::size_t>(), [predicate](auto& element) { return recursive_count_if(element, predicate); }); } Note that since all the templates used is_iterable, you can remove that requirement, although if you keep it you get nicer error messages if you accidentally try to apply recursive_count_if() to something that does not support iteration.
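The dispatch the concept performs, counting directly when the predicate applies to the elements and recursing otherwise, can also be sketched dynamically. Here is a hypothetical Python analogue (not part of the reviewed code) that replaces the compile-time concept check with try/except:

```python
# EAFP stand-in for the C++ concept check: try to apply the predicate to the
# elements; if it is not applicable, recurse one nesting level down.
def recursive_count_if(iterable, predicate):
    try:
        return sum(1 for x in iterable if predicate(x))
    except (TypeError, AttributeError):
        # predicate not applicable at this level: descend into each element
        return sum(recursive_count_if(x, predicate) for x in iterable)

size5 = lambda s: len(s.strip()) == 5  # str-only: lists have no .strip()
v = [["hello"], ["world", "hi"]]
print(recursive_count_if(v, size5))  # 2
```

Like the C++ version, this assumes the nesting is homogeneous; a predicate that happens to be applicable at an outer level would terminate the recursion early.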
{ "domain": "codereview.stackexchange", "id": 39998, "tags": "c++, recursion, c++20" }
Why is the equation for electric potential energy so counter-intuitive?
Question: The equation: $U_E = Vq$ or $(Eq)d$. This works the same way as the equation for gravitational potential energy: $mgh$ But to me, for charges of different signs, the potential energy also varies. Because $d$ is measured from the lowest point, usually the negative plate in the case of parallel plates for example, the direction that a particle moves in affects the distance it travels, and thus the potential energy it has. Clarification: For example, a negatively charged particle, starting from 2 V, would move upward towards the positive plate, which is at higher potential. A positive particle would move downward. Since potential is, in a sense, a distance, the two particles would have different potential energies. But this is not shown in the equation. See here: if the charged particle is placed at exactly the middle, the sign will affect the direction and thus affect the potential energy. Same for a point charge: Consider $E = k\frac{q}{r^2}$. Because "r" is the distance from the center of the particle to where the test point is located, and because in this case the particle is positive, the equation would really apply if the test charge placed near this particle is also positive, meaning that if released, the test charge would move away from it. Then the distance is not "r". Answer: A particle subjected to only a conservative force field, without other restrictions, will move in a direction which reduces the potential energy of the system. The particle itself does not have potential energy; the particle-field system has potential energy (some may say the field-field system, but let's not nit-pick that yet). So what's important is not the value of the potential energy of the system, but how the potential energy changes. Based on your formula we can write $$\Delta U_E = q\Delta V.$$ That means that a negative charge will move toward higher $V$ under the influence of the external $E$ field. A positive charge will move toward a lower $V$. 
The actual values of $V_{initial}$ and $V_{final}$ are irrelevant as long as $$V_{final}-V_{initial}=\Delta V$$ arithmetically has the correct sign. An electron and a proton placed in the same field (separately, so they don't influence each other) will move in opposite directions. They are opposite sign and their unrestricted movements must reduce the system potential energy.
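A quick numeric illustration of $\Delta U_E = q\Delta V$ with made-up values, showing that the same potential rise affects opposite charges oppositely:

```python
e = 1.602e-19  # elementary charge in coulombs

def delta_U(q, delta_V):
    """Change in system potential energy for charge q moved through delta_V."""
    return q * delta_V

dV = 1.0  # the final point sits 1 V higher in potential than the start
for name, q in [("proton", +e), ("electron", -e)]:
    dU = delta_U(q, dV)
    tendency = "spontaneous (dU < 0)" if dU < 0 else "opposed (dU > 0)"
    print(f"{name}: dU = {dU:+.3e} J, moving toward higher V is {tendency}")
```

The actual potential values never enter; only the difference does, which is why the two particles move in opposite directions in the same field.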
{ "domain": "physics.stackexchange", "id": 23693, "tags": "potential, potential-energy" }
Depending upon how I download, I get two different files
Question: I am downloading the data set for the Kaggle competition on the Titanic. If I use the following code: if (!file.exists("data")){ dir.create("data") } fileUrl <- 'https://www.kaggle.com/c/titanic/download/train.csv' download.file(fileUrl, destfile='./data/train.csv') I get a 14kb file. However, if I paste this URL directly into my browser, I download the correct file, about 60kb. Answer: This code works for sites where you don't need to be logged on. The Kaggle link only gives you the file when you are logged on to Kaggle. The file that is created with the code only contains the HTML/JavaScript code of the Kaggle page.
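One quick sanity check on a downloaded file is to look at its first bytes: the logged-out response is an HTML page, while the real train.csv begins with a CSV header row. A small Python sketch (the byte strings are illustrative):

```python
def looks_like_html(first_bytes: bytes) -> bool:
    """Heuristic: a login/landing page is an HTML document, the data set is CSV."""
    head = first_bytes.lstrip().lower()
    return head.startswith(b"<!doctype") or head.startswith(b"<html")

print(looks_like_html(b"<!DOCTYPE html>\n<html>..."))        # True  (login page)
print(looks_like_html(b"PassengerId,Survived,Pclass,Name"))  # False (real CSV)
```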
{ "domain": "datascience.stackexchange", "id": 1007, "tags": "r" }
Using amplification lemma to decrease error probability gives 0 as divisor
Question: I am attempting to work through an example of how to select an error bound, and then determine the number of simulations necessary by the amplification lemma to obtain the desired error bound. I have a probabilistic Turing machine $M$ with an error probability of $.5$. I'd like an error bound of $.25$, so I will bound the error probability by $2^{-t}$, where $t=2$. According to my textbook, I need to choose a $k$ such that $k \ge t/f$, where $f=-\log_2(4e(1-e))$, where $e$ is the error probability. As $M$ has an error probability of $.5$, $e=.5$. Substituting into the equation gives: $-\log_2(4(0.5)(0.5))=-\log_2(1)=0$. The problem is when I go to select a $k$ such that $k \ge t/f$, I get $0$ as a divisor. Are there other restrictions on the calculation? Or, am I just doing something wrong? Answer: If the Turing machine has an error probability of $1/2$, the Turing machine is useless and you cannot amplify to reduce the error bound. Consider: if you just ignore the input and flip a coin and use that to decide whether to accept or reject, you get an error probability of $1/2$ -- yet that is an obviously useless procedure. A Turing machine with an error probability of $1/2$ is no better than that. Here's another way to see why it's hopeless. One standard method of amplification is to run the Turing machine multiple times and take a majority vote. Suppose we run the Turing machine 3 times, and take a majority vote. What is the probability that the outcome of the majority vote is correct? It's not hard to see that this probability is $1/2$ -- no better than running the machine once. (Why? Each time you run the Turing machine, it is either correct or incorrect. Since you run it 3 times, there are $2^3 = 8$ cases: CCC, CCI, CIC, CII, ICC, ICI, IIC, III, where C = correct and I = incorrect. If the Turing machine's error probability is $1/2$, all 8 of those cases are equally likely. 
The majority vote after 3 runs will be correct in the cases CCC, CCI, CIC, and ICC; thus, the majority vote is correct with probability $4/8 = 1/2$.) The same thing happens no matter how many times you run the Turing machine. To get amplification, you need the error probability to be strictly less than $1/2$.
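The binomial counting behind this argument is easy to check directly; a short sketch (majority vote over n independent runs, each wrong with probability e):

```python
from math import comb

def majority_error(e, n):
    """Probability that a majority vote over n independent runs is wrong,
    when each run errs independently with probability e (n odd)."""
    k = n // 2 + 1  # minimum number of wrong runs for the majority to be wrong
    return sum(comb(n, j) * e**j * (1 - e)**(n - j) for j in range(k, n + 1))

# With e = 1/2 the vote never helps: the error stays exactly 1/2.
print(majority_error(0.5, 3))    # 0.5
print(majority_error(0.5, 101))  # still 0.5
# With e strictly below 1/2, amplification works.
print(majority_error(1/3, 3))    # 7/27, already better than 1/3
print(majority_error(1/3, 101))  # vanishingly small
```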
{ "domain": "cs.stackexchange", "id": 8830, "tags": "turing-machines, probability-theory, probabilistic-algorithms" }
How is one amp under 1 volt different from 1 amp under 2 volts?
Question: I have been reading about volts and about the water pressure analogy being inadequate to describe electricity, since pressure makes water speed up, which would suggest electrons should also speed up under a larger potential. So if an electron doesn't increase in speed or gain greater kinetic energy under a larger voltage, and it always carries the same charge, and current is the number of electrons passing through a point: how is one amp under 1 volt even different from 1 amp under 2 volts? Answer: If 2 volts produce 1 amp of current, there must be a higher electrical resistance than in the case of 1 volt producing 1 amp of current. Using your water analogy, you have increased the pressure but also added rocks which slow the water down.
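To make the difference concrete: with the same current, twice the voltage means twice the resistance and also twice the power delivered, since each coulomb carries twice the energy. A tiny sketch with illustrative numbers:

```python
def resistance(V, I):
    return V / I  # Ohm's law: R = V / I

def power(V, I):
    return V * I  # energy delivered per second

for V in (1.0, 2.0):
    I = 1.0  # same 1 A of current in both hypothetical circuits
    print(f"{V:.0f} V, {I:.0f} A -> R = {resistance(V, I):.0f} ohm, "
          f"P = {power(V, I):.0f} W")
```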
{ "domain": "physics.stackexchange", "id": 40055, "tags": "electrostatics, electric-circuits, electric-current, electrical-resistance, voltage" }
How can I programmatically set the rosconsole verbosity of an object?
Question: I have a local planner plugin that instantiates a base_local_planner::TrajectoryPlannerROS object and then calls checkTrajectory on it. Specifically, this is the function that gets called eventually: TrajectoryPlanner::checkTrajectory (link to source code) During execution of my local planner, that function is called a lot in order to evaluate velocity command candidates. As a result, it spams this warning: ROS_WARN("Invalid Trajectory %f, %f, %f, cost: %f", vx_samp, vy_samp, vtheta_samp, cost); I know that, from the command line, I can call the set_loggers service of the corresponding node in order to set the logger level to ERROR and thus prevent the spamming. Is there a way to set the logger level when I instantiate the TrajectoryPlannerROS object in my local planner? I would like to avoid: making a service call (whether programmatic or via the command line) forking base_local_planner and changing the ROS_WARN to ROS_DEBUG The rosconsole wiki page has some relevant info, but it seems overkill for what I want to achieve. (Although I've never used anything but the ROS_* macros, so it may just be lack of familiarity.) References: http://wiki.ros.org/rosconsole http://answers.ros.org/question/211961/how-to-avoid-invalid-trajectory-cost-1000000-in-dwa-local-planner Originally posted by spmaniato on ROS Answers with karma: 1788 on 2016-12-18 Post score: 1 Answer: The rosconsole page that you link to has a section on setting the logger level that includes some example C++ code for changing the logger level. Is this what you're looking for? If you want to change the logger level for the entire DWA local planner, you might want to consider switching all of its logger macros to use the named logger macros. This will allow you to set the logger level for DWA separately from the rest of your code. This is probably a useful enough change that it would be worth submitting it into the public ROS navigation stack. 
Originally posted by ahendrix with karma: 47576 on 2016-12-19 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by spmaniato on 2016-12-19: This excerpt from example.cpp does look promising: log4cxx::Logger::getLogger(ROSCONSOLE_DEFAULT_NAME".test"); I'll start there. Thanks Austin!
{ "domain": "robotics.stackexchange", "id": 26520, "tags": "ros, rosconsole" }
Anomaly detection thresholds issue
Question: I'm working on an anomaly detection project in Python. In more detail, I need to analyse time series in order to check if anomalies are present. An anomalous value is typically a peak, i.e. a value very high or very low compared to the other values. The main idea is to predict time series values and, using thresholds, detect anomalies. Thresholds are calculated using the error, that is, the real values minus the predicted ones. Then, the mean and standard deviation of the error are computed. The upper threshold equals mean + (5 * standard deviation). The lower threshold equals mean - (5 * standard deviation). If the error exceeds the thresholds, it is marked as anomalous. What doesn't work with this approach is that if I have more than one anomalous value in a day, they are not detected. This is because the error mean and standard deviation are too heavily influenced by the anomalous values. How can I fix this problem? Is there another method that I can use to identify thresholds without this issue? Thank you Answer: Instead of mean and standard deviation, you could estimate the median and mean absolute deviation. The median is immune to outliers, and the MAD should be at least more robust than the standard deviation formula. You will probably have to change your critical value to something other than 5 to get the same kind of coverage. According to Wikipedia, you'll want the new critical value to be $5\sqrt{\frac{\pi}{2}}$ if your data are iid Gaussian. An alternative that might be more difficult to implement, but is probably more statistically appropriate, is to use trimmed estimators for the mean and standard deviation. With trimmed estimators, you throw away the most extreme values in your data (the proportion of which is specified beforehand), and estimate your statistics on the remaining data. 
The estimator for the mean would be the truncated mean, and the Wikipedia page for trimmed estimators mentions how to get a decent estimator for the standard deviation from the interquartile range. I hope this helps!
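A minimal sketch of the robust-threshold idea. It uses the median-absolute-deviation variant of the MAD with its usual Gaussian consistency factor (an assumption; the critical value k = 5 is kept from the question):

```python
import statistics

def robust_thresholds(errors, k=5.0):
    """Median/MAD analogue of the mean +/- k*std rule. The factor 1.4826 rescales
    the median absolute deviation so it estimates the standard deviation for
    Gaussian data."""
    med = statistics.median(errors)
    mad = statistics.median(abs(e - med) for e in errors)
    sigma = 1.4826 * mad
    return med - k * sigma, med + k * sigma

# Two large spikes in the same window: mean/std thresholds get dragged outward,
# but the median/MAD thresholds still flag both.
errors = [0.1, -0.2, 0.05, 0.0, -0.1, 9.0, 8.5, 0.15, -0.05, 0.1]
lo, hi = robust_thresholds(errors)
anomalies = [e for e in errors if e < lo or e > hi]
print(anomalies)  # [9.0, 8.5]
```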
{ "domain": "datascience.stackexchange", "id": 6271, "tags": "machine-learning, python, time-series, unsupervised-learning, anomaly-detection" }
Do neutron stars produce sound?
Question: I've watched a video on YouTube about neutron stars (called pulsars) and it claims that pulsars produce this sound. Then I found that they produce radio waves which can be converted to sound. Radio waves are electromagnetic waves and sound is a mechanical wave, which of course can't be transmitted in a vacuum. My question is, do they "really" produce sound that is then somehow converted in space into radio waves, or do they just create radio waves which humans convert to sound? Answer: Stars can produce "sound" when they pulsate. Stellar oscillations produce waves of density and/or pressure that propagate within the star. These sound waves cannot travel to us through space because the near vacuum does not allow them to propagate. Sound waves due to stellar oscillations can be detected by looking for periodic signals in a star's brightness as it pulsates, or by detecting the motion of the stellar surface towards and away from us using the Doppler effect. Because the characteristic frequencies tell us something about the stellar interior, this branch of science is termed asteroseismology. The frequencies detected can be converted back into sounds we can hear in exactly the same way that any electrical signal can in a radio or HiFi. The characteristic frequencies have an order of magnitude given by $\sqrt{G \rho}$, where $\rho$ is the average density of the star. For the Sun, this corresponds to $3\times 10^{-4}$ Hz (or a period of 50 minutes). The dominant frequency turns out to be a bit faster, with a period of 5 minutes. So you would indeed need to bump this frequency up by factors of 42,000 to make an audible signal. The equivalent calculation for pulsars (neutron stars), which have densities of order a few $10^{17}$ kg/m$^3$, suggests characteristic frequencies of 5 kHz. A few pulsars do have a signal frequency of this order (the millisecond pulsars); however, most pulsars are much slower, with frequencies of about 1 Hz. 
It was quickly realised that pulsations are not responsible for the pulsar phenomenon. Pulsars are rapidly rotating neutron stars and it is rotational modulation of their electromagnetic radiation that gives the signal its frequency. The range of pulsar rotation periods is responsible for the range of signal frequencies found. The pulsar signal is often transformed directly into a voltage and then into an audible signal by feeding it to a loudspeaker.
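The $\sqrt{G\rho}$ estimate is easy to reproduce; a quick sketch (the neutron-star density used is a representative assumed value):

```python
from math import sqrt

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def characteristic_frequency(rho):
    """Order-of-magnitude oscillation frequency sqrt(G * rho) for density rho."""
    return sqrt(G * rho)

rho_sun = 1.41e3  # mean solar density, kg/m^3
rho_ns = 5e17     # representative neutron-star density (assumed), kg/m^3

f_sun = characteristic_frequency(rho_sun)  # ~3e-4 Hz, i.e. a period near an hour
f_ns = characteristic_frequency(rho_ns)    # a few kHz
print(f"Sun: {f_sun:.1e} Hz, neutron star: {f_ns:.1e} Hz")
```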
{ "domain": "physics.stackexchange", "id": 22483, "tags": "acoustics, neutron-stars, radio-frequency" }
How to use gmapping to build a map?
Question: Hello everyone. I am new to ROS (fuerte 12.04). I am trying to use gmapping to build a map in rviz with a Hokuyo laser scanner (UTM-30LX) and a mobile robot, which can provide odometry data. I have already followed the tutorial MappingFromLoggedData. I run "rosrun gmapping slam_gmapping scan:=base_scan", and an already existing mapping bag plays back correctly in rviz. But when I try to create my own map (e.g. my room) and run rosrun gmapping slam_gmapping, it shows "/use_sim_time set to true and no clock published. still waiting for valid time." Did I follow the right tutorial? What else should I follow? I also don't know how to set the parameters of gmapping, like the tf tree. Which command should I use to set the parameters and run gmapping successfully? I am really confused. Thank you so much for answering my question! Now I set use_sim_time to false with the command "rosparam set use_sim_time false". In terminal 2 I enter the command from the tutorial, "rosrun gmapping slam_gmapping scan:=base_scan". In terminal 3 I enter "rosbag record -O mylaserdata /base_scan /tf". It shows [ INFO] [1389873864.228511055]: Subscribing to /base_scan [ INFO] [1389873864.256834976]: Subscribing to /tf [ INFO] [1389873864.284224081]: Recording to mylaserdata.bag. [ WARN] [1389873864.284374493]: Less than 5GB of space free on disk with mylaserdata.bag.active. In terminal 4 I try to play my bag with "rosbag play mylaserdata.bag". It shows [ INFO] [1389873813.472244379]: Opening mylaserdata.bag Waiting 0.2 seconds after advertising topics... done. And I open rviz with the command "rosrun rviz rviz" and add a map display set to the topic /map, but nothing appears in rviz. In terminal 2 the errors show up just after I enter "rosbag play mylaserdata.bag" in terminal 4. Here are the errors: TF_OLD_DATA ignoring data from the past for frame /odom at time 1.38987e+09 according to authority /play_1389877419301103194 Possible reasons are listed at I am confused. Thank you for helping! 
Originally posted by Yuliang Sun on ROS Answers with karma: 5 on 2014-01-13 Post score: 0 Original comments Comment by Tirjen on 2014-01-16: It is not very clear to me what you are trying to do... Why don't you first try to use gmapping without rosbag? Moreover, in your description I'm missing the node that's publishing the laser scan and the one publishing tf. Is your tf tree ok ('rosrun tf view_frames')? Comment by Yuliang Sun on 2014-01-16: I just want to run gmapping with real sensor data. And I have no clue about it. Should I run "rosparam set use_sim_time false" first, then "rosrun gmapping slam_gmapping scan:=base_scan"? And then? Which command should I enter? My tf tree is not right; it only has /map to /odom. It should be /map -> /odom -> base_link -> base_laser. But I do not know how to change my tf tree. With which command? Sorry, I am not really good at that. And I really hope you can help me. Thank you so much! Answer: As explained here, what you want to have is a node that publishes the required tf transforms of your robot, in particular the tf from the laser_link to base_link and from base_link to odom, i.e. the odometry of your robot. If you are new to ROS I suggest you look at this tutorial on tf. Moreover, did you check if the laser of your robot works in rviz? EDIT: As I said, you must have a node that is publishing the tf tree of your robot. The tf from /map to /odom is the one gmapping provides. But without a node like robot_state_publisher that publishes the tf of your robot and the odometry, gmapping can't work. I suggest you look at the navigation tutorials and also at the code of the turtlebot_navigation / turtlebot_gazebo packages to see how they use gmapping and all it needs. Originally posted by Tirjen with karma: 808 on 2014-01-13 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Yuliang Sun on 2014-01-16: Thank you for explaining. 
I already used hector_slam (without odometry) to get a mapping process running in rviz, so the laser scanner functions well. Now I am trying to use gmapping, because gmapping can use odometry and the result is better. As I re-edited my question, I have some problems with gmapping.
{ "domain": "robotics.stackexchange", "id": 16640, "tags": "navigation, odometry, mapping, laserscanner, gmapping" }
In a RNA-Seq heatmap should you do Z-score standardisation before clustering the rows/columns or after?
Question: I have made a heatmap using RPKM values from a RNA-Seq dataset using the pheatmap() function in R. I have log2-transformed the data before performing Z-score standardisation of the data. I have also clustered the rows and columns of the heatmap. I have the following code: heatmap_data_log2 %>% pheatmap(color = colorRampPalette(c("blue2","white","red"))(100), scale = "column", cluster_cols = T, clustering_method = "ward.D2", angle_col = 45, fontsize_row = 7.5, fontsize_col = 8, border_color = NA, cutree_rows = 4) I have seen in a video tutorial that the pheatmap() function does the Z-score scaling before doing the clustering of the rows and columns. However, in this online article, it says that "The Z-scores are computed after the clustering, so that it only affects the graphical aesthetics and the color visualization is improved". As these two sources are giving contradictory information, I was wondering which is better. Should you do Z-score scaling of the gene expression values before doing the clustering or after? Any advice is appreciated. Answer: Looking at the source for pheatmap, there is a function called scale_mat that is used to preprocess and normalize the input matrix, depending on the value of scale, which specifies one of either none, row, or column normalization options. Separate and similar functions are used for color value scaling downstream. I have not watched the video or read the linked article, so I'm not sure this is contradictory, but it may be just small confusion about how normalization is applied (and to what). Normalization is usually done before clustering to focus cluster construction on signal rows or columns that are not some multiple or scale factor of another — and in referring back to the source code, this is what pheatmap does, as well.
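The reason the order matters can be seen without pheatmap at all: column z-scoring changes the pairwise distances that hierarchical clustering is built from, so scaling applied only for the colors cannot change the dendrogram. A toy pure-Python illustration (made-up numbers):

```python
import math

# 4 samples (rows) x 3 genes (columns); column 0 is on a much larger scale.
X = [[100.0, 1.0, 2.0],
     [200.0, 1.1, 1.9],
     [105.0, 5.0, 6.0],
     [195.0, 5.2, 5.8]]

def zscore_columns(M):
    out = []
    for col in zip(*M):
        mu = sum(col) / len(col)
        sd = math.sqrt(sum((v - mu) ** 2 for v in col) / len(col))
        out.append([(v - mu) / sd for v in col])
    return [list(row) for row in zip(*out)]

def dist(a, b):
    # Euclidean distance, the quantity hierarchical clustering works from
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

Z = zscore_columns(X)
# Raw data: the big-scale column dominates, so row 0 sits nearest row 2.
print(dist(X[0], X[2]), dist(X[0], X[1]))
# After column z-scoring, row 0 pairs with row 1 by the small-scale genes.
print(dist(Z[0], Z[2]), dist(Z[0], Z[1]))
```

So if you want the scaling to shape the clusters, it has to be applied before (or as part of) the clustering step, as pheatmap's scale argument does.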
{ "domain": "bioinformatics.stackexchange", "id": 2040, "tags": "r, heatmap" }
Critical Buckling Load for a Spring Supported Bar
Question: The above is a past exam question from an introductory structural analysis course, one in which, although we have studied the Euler buckling load equation, we have just been given parameters for the equation based on the standard end-support conditions (fixed/fixed, fixed/pinned, etc.). I don't think the above example fits any of the standard end conditions and am therefore finding it difficult to derive an equation for $P_{cr}$. I have tried to use the bending moment equation by including torque (that opposes bending) from B and the net load (due to spring C) as follows: $$ T_B = \beta_r \theta $$ Taking a small deflection x in the vertical direction and y in the horizontal: $$ \tan\theta = \frac{dy}{dx} \approx \theta $$ $$ \therefore T_B = \beta_r \frac{dy}{dx} $$ Using the bending moment equation: $$ M=-EI\frac{d^2y}{dx^2} $$ $$ \therefore \sum M=-EI\frac{d^2y}{dx^2} + \beta_r \frac{dy}{dx} = Qy $$ The net load $Q$ can be expressed in terms of the force P and the reaction force due to the spring at C as: $$ Q = P - \beta dx $$ Leaving the final second-order equation: $$ EI\frac{d^2y}{dx^2} - \beta_r \frac{dy}{dx} + (P-\beta dx) y = 0 $$ This is beyond the scope of our course, so I have no idea if: a) The above equation is correct or solvable (and if so how one goes about solving it) b) There is a different approach using more basic techniques (which is more than likely the case) Answer: To answer the specific question: (a) No, that equation is irrelevant, and (b) Yes, see below. The question isn't about Euler buckling. It says the bar is rigid. Euler buckling only applies to a flexible bar - the Euler formulas include Young's modulus, and also the moment of inertia of the bar, which is not mentioned in the question. One way to solve the question is to use the principle of virtual work, and compare the strain energy in the springs with the work done by the external force P. 
You might also notice that the condition $\beta_r = 3\beta L^2/2$ is related to the geometry of the triangle ABC.
{ "domain": "engineering.stackexchange", "id": 1240, "tags": "structural-engineering, structures, buckling" }
An equilibrium reaction in the presence of other fast reactions -- is it necessary also to consider the individual rate constants?
Question: When mathematically describing an equilibrium reaction, we generally treat it as though it is enough just to specify the equilibrium constant, as it relates the concentrations of products and reactants. If this equilibrium reaction is occurring in a highly reactive environment, however, where the reactants and the products are constantly changing due to other reactions, it seems to me that the equilibrium reaction can no longer be described by equilibrium constant only. Instead, the forward and/or reverse reaction coefficients are needed (in addition to the equilibrium constant) to describe the relationship between the reactants and the products. Is this correct? If not, why not? Answer: You're correct. If other reactions are occurring that are fast on the time scale of the equilibrium reactions, then you must treat the forward and reverse reactions separately, including by using their separate rate constants. "Equilibrium" is a mathematical model that never actually perfectly matches reality. In order for most systems to actually be at "perfect equilibrium" we would have to let them sit and react for an infinite amount of time. Of course, practically speaking, that's not necessary: equilibrium is reached to within measurement precision in quite reasonable amounts of time for many reactions. What's necessary in order for the concept of 'equilibrium' to be useful in analyzing a reaction system is for the time scale of those reactions to be "very short" (or, equivalently, for the rate of those reactions to be "very fast") when compared to all other reactions of interest in the system. 
In other words, when we write the following system of reactions: $$ \ce{A + B <=>[K_1] C + D} \\ ~ \\ \ce{C + E ->[k_2] P} $$ what we're really writing is: $$ \ce{A + B \underset{k_{-1}}{\overset{k_1}{\rightleftarrows}} C + D} \\ ~ \\ \ce{C + E ->[k_2] P} \\ ~ \\ \begin{align} k_1\ce{[A][B]} &\gg k_2\ce{[C][E]} \\ k_{-1}\ce{[C][D]} &\gg k_2\ce{[C][E]} \\ K_1 &= {k_1 \over k_{-1}} \end{align} $$ The relative values of the forward and reverse rate constants can vary, with their ratio described by $K_1$, but at all points through the reacting process of interest, both the forward and reverse equilibrium reactions are assumed to be much faster than the final, product-forming reaction. If this assumption doesn't hold, then we can't treat the first reaction as "just" an equilibrium reaction.
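A rough forward-Euler sketch (all rate constants and concentrations made up) of this three-reaction system, showing that the ratio [C][D]/([A][B]) only tracks $K_1$ while the product-forming step is slow:

```python
# A + B <=> C + D  (k1 forward, k_1 reverse, K1 = k1/k_1 = 10)
# C + E  ->  P     (k2)
def simulate(k2, k1=10.0, k_1=1.0, t_end=50.0, dt=1e-3):
    A = B = 1.0
    E = 10.0          # large excess of E (assumed), so reaction 2 keeps running
    C = D = P = 0.0
    for _ in range(int(t_end / dt)):
        r_f = k1 * A * B    # A + B -> C + D
        r_b = k_1 * C * D   # C + D -> A + B
        r_2 = k2 * C * E    # C + E -> P
        A += (r_b - r_f) * dt
        B += (r_b - r_f) * dt
        C += (r_f - r_b - r_2) * dt
        D += (r_f - r_b) * dt
        E -= r_2 * dt
        P += r_2 * dt
    return C * D / (A * B)

print(simulate(k2=0.001))  # slow drain: ratio stays close to K1 = 10
print(simulate(k2=5.0))    # drain comparable to the equilibrium rates:
                           # ratio far below K1; rates must be treated separately
```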
{ "domain": "chemistry.stackexchange", "id": 8247, "tags": "reaction-mechanism, equilibrium, kinetics" }
sonar sensor Gazebo-ROS
Question: Hello Team, I am using the plugin hector_gazebo_ros_sonar.so of the package hector_gazebo_plugins, but the performance is very bad. My problem is described here: link:here I am looking for another sensor plugin which works a little better. Does anybody know another sonar sensor plugin? This is for a study of the best placement of these sonar sensors around the robot vs. a Kinect in some cases. Thanks a lot. Originally posted by pmarinplaza on Gazebo Answers with karma: 3 on 2012-10-03 Post score: 0 Original comments Comment by nkoenig on 2012-10-05: Is the problem that the sonar values jump, and then slowly extend back to the ground plane? Comment by pmarinplaza on 2012-11-06: Yes. The people of Hector fix the problem and it works very well now. Answer: Quoting from @pmarinplaza's comment: Yes. The people of Hector fix the problem and it works very well now. Originally posted by gerkey with karma: 1414 on 2013-01-11 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 2757, "tags": "gazebo, gazebo-sensor" }
prediction() returning mistakenly false positives
Question: I do not know how to interpret the result of: prediction(c(1,1,0,0), c(1,1,0,0)) The prediction() function comes from prediction {ROCR}; it has this site: http://rocr.bioinf.mpi-sb.mpg.de/ The above is a working example. As per the documentation the first parameter is 'predictions' and the second 'labels' (they would be the true values). The output is this, which I do not fully understand, especially why there is a '2' in "fp": An object of class "prediction" Slot "predictions": [[1]] [1] 1 1 0 0 Slot "labels": [[1]] [1] 1 1 0 0 Levels: 0 < 1 Slot "cutoffs": [[1]] [1] Inf 1 0 Slot "fp": [[1]] [1] 0 0 2 Answer: The slot "fp" counts how many false positives there are at each choice of classification cutoff (which can be found in the "cutoff" slot). The cutoff represents at what value you set the threshold to binarize the numerical values into classes. Your output already appears to be binary classes, so the concept of a threshold doesn't really make much sense, but the package still tries, setting the potential thresholds at Infinity, 1, and 0. When you set the threshold at 0, everything gets classified positive, including the 2 actual negative samples - when the classification threshold is 0, you get 2 false positives.
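The counting behind the "fp" slot can be reproduced outside R. A small Python sketch of the convention (a sample is called positive when its prediction value is at or above the cutoff):

```python
def fp_counts(predictions, labels):
    """Per-cutoff false-positive counts, mirroring the ROCR output above:
    cutoffs run from Inf down through the distinct prediction values."""
    cutoffs = [float("inf")] + sorted(set(predictions), reverse=True)
    counts = []
    for c in cutoffs:
        fp = sum(1 for p, y in zip(predictions, labels) if p >= c and y == 0)
        counts.append(fp)
    return cutoffs, counts

cutoffs, fp = fp_counts([1, 1, 0, 0], [1, 1, 0, 0])
print(cutoffs)  # [inf, 1, 0]
print(fp)       # [0, 0, 2] -- at cutoff 0 everything is called positive
```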
{ "domain": "datascience.stackexchange", "id": 7157, "tags": "r, prediction" }
6d Massive Gravity
Question: Massive gravity (with a Fierz-Pauli mass) in 4 dimensions is very well-studied, involving exotic phenomena like the van Dam-Veltman-Zakharov (vDVZ) discontinuity and the Vainshtein effect that all have an elegant and physically transparent explanation in terms of an effective field theory of longitudinal modes, as explained by Arkani-Hamed, Georgi, and Schwartz. Is there any analogous work on six-dimensional massive gravity? (The right mass term would still be of Fierz-Pauli form, but the little group is bigger and so I would expect a more complicated set of longitudinal modes to think about.) EDIT: added a bounty to renew interest. Answer: Unfortunately I am not really aware of any literature regarding this; however, a generalization of the Arkani-Hamed et al paper to the case of $d+1$ spacetime dimensions should be rather straightforward. Let us begin by noting a few basic facts about the group theory involved: The little group for a massless representation (this is the analog of helicity) of the Poincaré algebra is given by $SO(d-1)$, while the little group for a massive representation is given by $SO(d)$. The number of degrees of freedom corresponding to massive spin-2 is given by the symmetric traceless tensor of the little group, which has $\frac{d(d+1)}{2} -1$ degrees of freedom. In the massless case a similar argument leads to $\frac{(d-1)d}{2}-1$. Now the point of the analysis by Arkani-Hamed et al is essentially to understand the theory in the UV, i.e. at energy scales much larger than the mass. To do this they try to decompose the massive representation in terms of massless ones; a straightforward counting exercise shows that in this case a massive spin-2 decomposes into a scalar, a helicity-1 vector and a helicity-2, exactly as in the 3+1d case. Using this knowledge it should be very easy to generalize the previous results; it is mostly a matter of carefully keeping track of the $d$'s. 
The vDVZ discontinuity will still be there, although the relative factor in the radiation-radiation and matter-matter interactions will depend on d; this can easily be seen by decomposing the tensor structure of the massive spin-2 propagator in terms of three massless ones corresponding to the helicities. A nice derivation for the $d=3$ case can be found in Zee's QFT book. I hope that helps a bit...
{ "domain": "physics.stackexchange", "id": 3247, "tags": "gravity, field-theory, research-level, spacetime-dimensions, effective-field-theory" }
Two electrons are in a sphere. How does the total spin depend on the size of the sphere?
Question: I was looking at Landau's entrance exam problem set from this compilation http://people.tamu.edu/~abanov/QE/TM-QM.pdf I stumbled across problem 186, which asks: Two electrons are inside a sphere. Find (qualitatively) how the total spin depends on the radius of the sphere $R$. I would think naively that the fact that there are two electrons is important, but I fail to see why the spins are affected. $SO(3)$ should still be preserved under scaling of the sphere, so the spins should be unaffected. Answer: I have to imagine that the question is referring to the total spin of the ground state. The ground state of two non-interacting fermions confined to a spherical well is the state in which they have the same (ground state) spatial wavefunction while occupying the antisymmetric spin singlet, so their total spin is $s=0$. In the first excited state, the spatial wavefunction $\propto \psi_0(x)\psi_1(y) - \psi_0(y)\psi_1(x)$ is antisymmetric and they occupy the symmetric spin triplet with $s=1$. Once we turn electrostatic interactions on, there will be an extra contribution to the total energy which affects the symmetric spatial wavefunctions more than the antisymmetric spatial wavefunctions - heuristically, when the spatial wavefunction is symmetric, the electrons are "closer together" on average. This contribution scales like $1/R$ where $R$ is the radius of the sphere. On the other hand, the difference between the ground state and first excited state of the non-interacting particles scales like $1/R^2$. As a result, there will be a fight between the "kinetic" term and the potential energy term, with the dominant term depending on the size of the sphere. In turn, this affects whether the ground state wavefunction will be symmetric (which requires the antisymmetric $s=0$ spin singlet) or antisymmetric (which requires the symmetric $s=1$ spin triplet).
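The competition between the $1/R^2$ level spacing and the $1/R$ Coulomb term can be made concrete: equating the two scales gives a crossover radius of order the Bohr radius. A rough numerical sketch (a dimensional estimate only, not part of the answer above; the constants are standard CODATA values):

```python
import math

# CODATA constants (assumed here, not from the answer)
hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
q_e = 1.602176634e-19    # C
eps0 = 8.8541878128e-12  # F/m

# level-spacing ("kinetic") scale ~ hbar^2/(m_e R^2) matches the Coulomb
# scale ~ q_e^2/(4 pi eps0 R) at the crossover radius:
R_cross = hbar**2 * 4 * math.pi * eps0 / (m_e * q_e**2)
print(R_cross)  # ~5.3e-11 m, i.e. the Bohr radius, as expected on dimensional grounds
```

For spheres much smaller than this the kinetic term wins (singlet, $s=0$); for much larger spheres the Coulomb term can favor the antisymmetric spatial state (triplet, $s=1$).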
{ "domain": "physics.stackexchange", "id": 85483, "tags": "quantum-mechanics, quantum-spin" }
How to tackle too many outliers in dataset
Question: I box-plotted all of my columns with seaborn's boxplot in order to see how many outliers I have. Surprisingly, there are too many outliers, so I want to remove them, because I'm afraid that with so many outliers it will badly impact the mean, median, and variance, which will in turn impact the performance of my model. Then I found this Q&A about too many outliers; the answer said that in such a case the points are not really outliers. The answer makes sense, but I'm still afraid it will hurt the performance of my model. What is the best thing to do? These are two examples of what I mean; 9 out of 10 columns of my data look exactly like this, and I'm really worried because it's not only that there are too many outliers, it also happens in almost every column (9 of 10 columns in total). Answer: If you have lots and lots of missing datapoints in any feature that you are depending on, your final analysis will be pretty weak. Let's say you want to train your model on heights of basketball players and overall scoring on the court. If you have 100 players, and only have heights for 10 players, and nulls for the other 90 players, how accurate do you think the model will be in predicting scores per player? Well, not very accurate. You can use this small script to find the percentage of nulls, per column/feature, in your entire dataset. import pandas as pd import numpy as np df = pd.read_csv('C:\\your_path\\data.csv') df_missing = df.isna() df_num_missing = df_missing.sum() print(df_num_missing / len(df)) print(df.isna().mean().round(4) * 100) I don't know what the rule of thumb is, but you probably want to have at least 70%-80% coverage. Closer to 100% is better!! For outliers, there are a few things you can do. Consider finding Z-scores for each column/feature in your dataframe.
cols = list(df.columns) cols.remove('ID') df[cols] # now iterate over the remaining columns and create a new zscore column for col in cols: col_zscore = col + '_zscore' df[col_zscore] = (df[col] - df[col].mean())/df[col].std(ddof=0) df Reference: https://stackoverflow.com/questions/24761998/pandas-compute-z-score-for-all-columns Kick out records with a Z-score over a certain threshold. from scipy import stats df[(np.abs(stats.zscore(df)) < 3).all(axis=1)] Reference: https://stackoverflow.com/questions/23199796/detect-and-exclude-outliers-in-pandas-data-frame Also, consider using some kind of scaling or normalization technique to handle those pesky outliers! Which is which? The difference is that, in scaling, you’re changing the range of your data, while in normalization you’re changing the shape of the distribution of your data. Scaling: from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaled_df = scaler.fit_transform(df) #[where df=data] Normalization: from sklearn import preprocessing scaler = preprocessing.Normalizer() scaled_df = scaler.fit_transform(df) That's a lot to take in. Let's wrap it up very soon. Just one final topic to discuss...
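The z-score filtering idea above can be sketched end-to-end on synthetic data (plain NumPy is used here for self-containment; the data and the 3-sigma threshold are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=(1000, 2))  # synthetic, roughly normal columns
data[0, 0] = 50.0                            # plant one extreme outlier

# per-column z-scores, then keep rows where every column is within 3 sigma
z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=0)
mask = (np.abs(z) < 3).all(axis=1)
filtered = data[mask]

print(data.shape[0], filtered.shape[0])  # the planted outlier row is dropped
```

The same mask-based pattern is what the `stats.zscore` one-liner above does for a whole DataFrame.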
Feature Selection: #importing libraries from sklearn.datasets import load_boston import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt import seaborn as sns import statsmodels.api as sm from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.feature_selection import RFE from sklearn.linear_model import RidgeCV, LassoCV, Ridge, Lasso #Loading the dataset x = load_boston() df = pd.DataFrame(x.data, columns = x.feature_names) df["MEDV"] = x.target X = df.drop("MEDV",1) #Feature Matrix y = df["MEDV"] #Target Variable df.head() reg = LassoCV() reg.fit(X, y) print("Best alpha using built-in LassoCV: %f" % reg.alpha_) print("Best score using built-in LassoCV: %f" %reg.score(X,y)) coef = pd.Series(reg.coef_, index = X.columns) print("Lasso picked " + str(sum(coef != 0)) + " variables and eliminated the other " + str(sum(coef == 0)) + " variables") imp_coef = coef.sort_values() import matplotlib matplotlib.rcParams['figure.figsize'] = (8.0, 10.0) imp_coef.plot(kind = "barh") plt.title("Feature importance using Lasso Model") References: https://towardsdatascience.com/feature-selection-with-pandas-e3690ad8504b https://towardsdatascience.com/feature-selection-techniques-in-machine-learning-with-python-f24e7da3f36e https://towardsdatascience.com/a-feature-selection-tool-for-machine-learning-in-python-b64dd23710f0 ENJOY!!
{ "domain": "datascience.stackexchange", "id": 11411, "tags": "machine-learning, python, outlier" }
What is the correct way to connect two tf trees?
Question: I have a setup in which two separate tf trees are published, and I'd like to define one with regard to the other, but not from the base link/root node. For instance two trees one with a tf_prefix, one without such that you might see: /baseLink /link0 /linkA0 /linkB0 and /ghost/baseLink /ghost/link0 /ghost/linkA0 /ghost/linkB0 And I want to define /ghost/linkB0 as having an identity transformation from /linkB0. However, since tf is set up with child and parent relationships as a tree and not as a free network, the correct way to do this is unclear. The only way I can think to do it would be to rebuild the KDL::Tree that the sub-ordinate tree's RobotStatePublisher uses to originate at the desired link, but this would be non-trivial, and not offer a way to easily select a different link at will without doing the same thing for each desired combination. Is there a better way to do this? Is it even possible given the current architecture? Originally posted by Asomerville on ROS Answers with karma: 2743 on 2013-03-19 Post score: 5 Answer: Your instincts are correct: you can simply add a transform from "/baseLink" to "/ghost/baseLink". The graph structure is independent of the names of the frames. Originally posted by tfoote with karma: 58457 on 2013-03-19 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Asomerville on 2013-03-19: Problem is that I want to go from /linkA0 to /ghost/linkA0 which causes /ghost/linkA0 to have two parents and fail. I'm working on a work-around where I publish the transform from /ghost/linkB0 to /ghost/baseLink as the transform from /linkB0 to /ghost/baseLink to get the same effect. Comment by tfoote on 2013-05-27: Just publish the link the other direction so it's a child, not a parent. You cannot bridge in the middle of a tree.
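One concrete way to wire up the bridging transform described in the comments (the ghost tree's root as a child of a frame in the first tree) is a static_transform_publisher node in a launch file. This is a sketch, not from the original thread; the node name is made up and the zero arguments are placeholders for whatever offset you actually want between the two trees:

```xml
<launch>
  <!-- args: x y z yaw pitch roll parent_frame child_frame period_ms -->
  <node pkg="tf" type="static_transform_publisher" name="ghost_bridge"
        args="0 0 0 0 0 0 /linkB0 /ghost/baseLink 100" />
</launch>
```

Because /ghost/baseLink is the root of its tree, giving it a parent in the other tree is legal; attaching in the middle of the ghost tree would create a frame with two parents and fail, as noted in the comments.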
{ "domain": "robotics.stackexchange", "id": 13446, "tags": "ros-fuerte, robot-state-publisher, transform" }
Capacitance calculator voltage
Question: There are a couple of parallel-plate capacitance calculators on the internet, but none of the ones I found had a window where you input voltage; they don't even mention voltage anywhere. For example, this calculator says two charged parallel plates have 0.885 picofarad capacitance if their surface area is 100 mm2 and distance is 1 mm. It never mentions at what voltage this 0.885 pF holds; I assume it's 1 volt but I am not sure. So the question is, what voltage does this calculator use to arrive at its result? https://www.daycounter.com/Calculators/Plate-Capacitor-Calculator.phtml Answer: The calculator is giving you capacitance based on the physical characteristics of the capacitor, namely, plate area, separation, and electrical permittivity of the dielectric medium between the plates. Voltage is not required. On the other hand, capacitance is also electrically defined as the charge on the capacitor per volt across the plates, or $$C=\frac{Q}{V}$$ Hope this helps.
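The calculator's number can be reproduced from the parallel-plate formula $C = \epsilon_0 \epsilon_r A/d$ alone, with no voltage appearing anywhere. A quick check of the values quoted in the question (an air gap with $\epsilon_r \approx 1$ is assumed):

```python
eps0 = 8.854e-12   # F/m, permittivity of free space
A = 100e-6         # plate area in m^2 (100 mm^2)
d = 1e-3           # plate separation in m (1 mm)

C = eps0 * A / d   # parallel-plate formula; relative permittivity ~1 for air
print(C * 1e12)    # ~0.885 (pF), matching the calculator, with no voltage input
```

Changing the voltage would change the stored charge Q = CV, but not C itself.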
{ "domain": "physics.stackexchange", "id": 62276, "tags": "electrostatics, electrons, capacitance" }
Does intrinsic curvature in a higher dimension mean that the lower dimensions also exhibit curvature?
Question: If our universe has intrinsic curvature in a higher dimension, would that mean the 3 dimensions that we live in would be curved? And if so, would the lower dimensions exhibit intrinsic or extrinsic curvature as a result of the curvature in the higher dimension? A follow-up question would be: could the lower dimensions have intrinsic curvature without exhibiting any extrinsic curvature in higher dimensions? Answer: A cylinder is a simple example which shows that curvature in one dimension does not imply curvature in another. We could imagine a universe which is (locally) flat in the 4 space-time dimensions, but is curved in one or more other dimensions. These other dimensions could have a large or small amount of curvature, and in an SF context might be seen as 'like' space-time in some respect. In a Physics context though the difficult question is what they are. Is there anything we can measure or perceive which could be interpreted as a 'dimension' orthogonal to space-time?
{ "domain": "physics.stackexchange", "id": 81957, "tags": "gravity, spacetime, differential-geometry, curvature, spacetime-dimensions" }
Momentum of Wind
Question: Is there an actual method to calculate the momentum of wind? I'm doing an experiment on hurricane shutters, where I have to calculate the minimum momentum to close the hurricane shutter. I need the "actual momentum of wind needed" in order to analyze my data. According to a website, the wind speed should exceed 33.1 m/s in order to be categorized as a hurricane. Answer: It is a force which will close the shutters, and so you will need the rate of change of momentum, not just the momentum. If you consider an area $A$ which is perpendicular to the wind velocity $v$, the rate of transfer of momentum through that area is $\rho A v^2$ where $\rho$ is the density of the air. You then have to make an assumption as to what happens to the velocity of the air once it has hit the shutters so that you can evaluate the rate of change of momentum.
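Plugging numbers into $\rho A v^2$ at the hurricane threshold quoted in the question gives a feel for the scale (a sketch: the 1 m² area and sea-level air density are assumptions, and it further assumes the air is brought fully to rest at the shutter):

```python
rho = 1.225   # kg/m^3, sea-level air density (assumed)
v = 33.1      # m/s, hurricane threshold from the question
A = 1.0       # m^2, assumed shutter area

# rate of momentum transfer through A; equals the force on the shutter
# only under the assumption that the air is stopped completely
F = rho * A * v**2
print(F)  # ~1342 N per square metre of shutter
```

If the air is deflected rather than stopped, the actual force differs by an order-one factor, which is exactly the assumption the answer says you must make.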
{ "domain": "physics.stackexchange", "id": 33059, "tags": "newtonian-mechanics, momentum, velocity" }
What features do reversible work have over the irreversible kind?
Question: At 6:47 of this video lecture, the professor defines enthalpy for a constant pressure process as $$ q_{p}= \Delta U + p \Delta V$$ but I cannot understand why he implicitly starts treating work as an exact differential. Is this because it is the reversible kind of work? And around 36:55 of this lecture, an even stranger thing happens: $$dU \neq dW$$ unless it is a reversible process. But why? What exactly is the distinguishing difference between reversible and non-reversible work, and what are the consequences of these differences? In this stack, a similar question is asked, and while the answer does make sense, the professor says that the process is adiabatic around 36:15 and then writes the first law. Now, by the definition of the first law, isn't $$ dU = dW$$ always? Or is the first law a statement which changes depending on the situation in which you place it? Answer: OK. Here are the focus problems I recommend considering: I have an ideal gas of pressure, volume, and temperature $P_1$, $V_1$, and $T_1$, respectively, in an insulated cylinder with a massless, frictionless piston. Initially, the external pressure is also $P_1$. REVERSIBLE ADIABATIC EXPANSION: I gradually lower the external pressure (reversibly) until the volume has increased to $V_2$. Determine the final pressure $P_2$ and final temperature $T_2$. Determine the amount of work done on the surroundings W and the change in internal energy $\Delta U$. How does the amount of work compare with the change in internal energy? IRREVERSIBLE ADIABATIC EXPANSION: I suddenly lower the external pressure to a new value P and hold it constant at this value until the system re-equilibrates. In terms of P, what is the final volume and final temperature? What value of P would be required for the final volume to be the same as it was in the reversible case, $V_2$, and what would be the final temperature under these circumstances?
What would be the work done on the surroundings W, and what would be the change in internal energy $\Delta U$? How does the irreversible work compare with the irreversible change in internal energy? How does the work done on the surroundings in this irreversible case compare with the work done in the reversible case? SOLUTION TO THE IRREVERSIBLE CASE: The first law tells us that, for an adiabatic process, Q = 0 and $$\Delta U=-W$$ So, for the irreversible expansion described here: $$nC_v(T-T_1)=-P(V-V_1)$$ where n is the number of moles of gas. Substituting the ideal gas law into this equation for the initial and final thermodynamic equilibrium states gives: $$nC_v(T-T_1)=-P\left(\frac{nRT}{P}-\frac{nRT_1}{P_1}\right)$$ This allows us to find the final temperature T in terms of the final pressure P: $$T=\left[\frac{1+(\gamma-1)\frac{P}{P_1}}{\gamma}\right]T_1$$ where $\gamma=\frac{C_p}{C_v}$. From the ideal gas law, $$\frac{PV}{T}=\frac{P_1V_1}{T_1}$$ So, if $V=V_2$ (the final volume that we got in the reversible case), $$P=\left[\frac{V_1}{V_2\gamma-V_1(\gamma-1)}\right]P_1$$
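The final-temperature formula for the irreversible case can be checked numerically against the energy balance it came from (one mole of a monatomic ideal gas and round initial conditions are assumed here purely for illustration):

```python
R = 8.314          # J/(mol K), gas constant
gamma = 5.0 / 3.0  # monatomic ideal gas (assumed)
Cv = R / (gamma - 1)

T1, P1 = 300.0, 1.0e5   # initial state (assumed numbers)
P = 0.5e5               # suddenly imposed external pressure

# final temperature from the derived formula T = [1 + (gamma-1) P/P1] / gamma * T1
T = (1 + (gamma - 1) * P / P1) / gamma * T1

# check the first-law balance n*Cv*(T - T1) = -P*(V - V1) for n = 1
V1, V = R * T1 / P1, R * T / P
lhs = Cv * (T - T1)
rhs = -P * (V - V1)
print(T, lhs, rhs)  # T = 240 K, and lhs and rhs agree
```

The gas cools (here from 300 K to 240 K) even though less work is extracted than in the reversible expansion to the same volume.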
{ "domain": "physics.stackexchange", "id": 71183, "tags": "thermodynamics, entropy, reversibility" }
A context-based simple harmonic question
Question: The question I'm trying to understand is part b) of this: For part a) I calculated the angular velocity to be $\pi$ rad s$^{-1}$ by solving $T=\frac{2\pi}{\omega}$, where $T=2$ s. Then I used $v_{max}=a\omega$ and found $a=\frac{3}{\pi}$. Now, regarding part b), my plan was to find the time taken for the particle to travel from the equilibrium position to the lowest point and the time taken for the particle to travel 5 m below the pier (from the equilibrium position), and then subtract these two times to get the time taken to go from the lowest point to 5 m below the pier. Since the highest point is 3 m from the pier, 5 m below the pier must be 2 m down from the highest point. The equilibrium position is $a$ metres down from the highest position, so it's $\frac{3}{\pi}=0.955$ m down from the highest point. Since $2>0.955$, 2 m from the highest point must be $2-0.955=1.045$ m below the equilibrium position. However, since $1.045>0.955$, this means that 2 m below the highest point is actually past the lowest point, so it wouldn't be possible to calculate a time from the equilibrium point to the point 2 m below the highest point. I've been trying to find the flaw in my reasoning and I've not been successful, so I'm wondering if anyone knows what I'm doing wrong. Answer: HINT: the boat takes 2 seconds to travel from its highest point to its lowest point. How much time does it take to make a full cycle and return to its highest point? Which one of these amounts of time is the period of the oscillation?
{ "domain": "physics.stackexchange", "id": 39892, "tags": "homework-and-exercises, waves, harmonic-oscillator" }
Should we also shuffle the test dataset when training with SGD?
Question: When training machine learning models (e.g. neural networks) with stochastic gradient descent, it is common practice to (uniformly) shuffle the training data into batches/sets of different samples from different classes. Should we also shuffle the test dataset? Answer: Short answer Shuffling affects learning (i.e. the updates of the parameters of the model), but, during testing or validation, you are not learning. So, it should not make any difference whether you shuffle or not the test or validation data (unless you are computing some metric that depends on the order of the samples), given that you will not be computing any gradient, but just the loss or some metric/measure like the accuracy, which is not sensitive to the order or the samples you use to compute it. However, the specific samples that you use affects the computation of the loss and these quality metrics. So, how you split your original data into training, validation and test datasets affects the computation of the loss and metrics during validation and testing. Long answer Let me describe how gradient descent (GD) and stochastic gradient descent (SGD) are used to train machine learning models and, in particular, neural networks. Gradient descent (GD) When training ML models with GD, you have a loss (aka cost) function $L(\theta; D)$ (e.g. the cross-entropy or mean squared error) that you are trying to minimize, where $\theta \in \mathbb{R}^m$ is a vector of parameters of your model and $D$ is your labeled training dataset. To minimize this function using GD, you compute the gradient of your loss function $L(\theta; D)$ with respect to the parameters of your model $\theta$ given the training samples. Let's denote this gradient by $\nabla_\theta L(\theta; D) \in \mathbb{R}^m$. 
Then we perform a step of gradient descent $$ \theta \leftarrow \theta - \alpha \nabla_\theta L(\theta; D) \label{1}\tag{1} $$ Stochastic gradient descent (SGD) You can also minimize $L$ using stochastic gradient descent, i.e. you compute an approximate (or stochastic) version of $ \nabla_\theta L(\theta; D)$, which we can denote as $\tilde{\nabla}_\theta L(\theta; B) \approx \nabla_\theta L(\theta; D)$, which is typically computed with a subset of $B$ of your training dataset $D$, i.e. $B \subset D$ and $|B| < |D|$. The step of SGD is exactly the same as the step of GD, but we use $\tilde{\nabla}_\theta L(\theta; B)$ $$ \theta \leftarrow \theta - \alpha \tilde{\nabla}_\theta L(\theta; B) \label{2}\tag{2} $$ If we split $D$ into $k$ subsets (or batches) $B_i$, for $i=1, \dots, k$ (and these subsets usually have the same size, i.e. $|B_i| = |B_j|, \forall i$, apart from one of them, which may contain fewer samples), then the SGD step needs to be performed $k$ times, in order to go through all training samples. Sampling, shuffling, and convergence Given that $\tilde{\nabla}_\theta L(\theta; B_i) \approx \nabla_\theta L(\theta; D), \forall i$, it should be clear that the way you split the samples into batches can affect learning (i.e. the updates of the parameters). For instance, you could consider your dataset $D$ as an ordered sequence/list, and just split it into $k$ sub-sequences. Without shuffling this ordered sequence before splitting, you will always get the same batches, which means that, if there's some information associated with the specific ordering of this sequence, then it may bias the learning process. That's one of the reasons why you may want to shuffle the data. So, you could uniformly choose samples from $D$ to create your batches $B_i$ (and this is a way of shuffling, in the sense that you will be uniformly building these batches at random), but you can also sample differently and you could also re-use the same samples in different batches (i.e. 
sampling with replacement). Of course, all these approaches can affect how learning proceeds. Typically, when analyzing the convergence properties of SGD, you require that your samples are i.i.d. and that the learning rate $\alpha$ satisfies some conditions (the Robbins–Monro conditions). If that's not the case, then SGD may not converge to the correct answer. That's why sampling or shuffling can play an important role in SGD. Testing and validation During testing or validation, you are just computing the loss or some metric (like the accuracy) and not a stochastic gradient (i.e. you are not updating the parameters, by definition: you just do it during training). The way you compute the loss or accuracy should not be sensitive to the order of the samples, so shuffling should not affect the computation of the loss or accuracy. For instance, if you use the mean squared error, then you will need to compute \begin{align} L(\theta; D_\text{test}) &= \operatorname {MSE} \\ &= {\frac {1}{n}}\sum _{i=1}^{n}(f_\theta(x_i)-{\hat {y_{i}}})^{2}\\ &= {\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-{\hat {y_{i}}})^{2} \end{align} where $f_\theta$ is your ML model $x_i$ is the $i$th input $y_{i}$ is the true label for input $x_i$ $\hat {y_{i}}$ is the output of the model $n$ is the number of samples you use to compute the MSE This is an average, so it doesn't really matter whether you shuffle or not. Of course, it matters which samples you use though! Further reading Here you can find some informal answers to the question "Why do we shuffle the training data while training a neural network?". There are other papers that partially answer this and/or other related questions more formally, such as this or this.
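The claim that test-time metrics are insensitive to sample order is easy to verify directly (a sketch with synthetic data; the MSE computed over a permuted test set matches the unpermuted one, because the mean is a sum over the same terms):

```python
import numpy as np

rng = np.random.default_rng(42)
y_true = rng.normal(size=100)                       # synthetic targets
y_pred = y_true + rng.normal(scale=0.1, size=100)   # synthetic predictions

mse = np.mean((y_true - y_pred) ** 2)

# shuffle the "test set" and recompute the same metric
perm = rng.permutation(100)
mse_shuffled = np.mean((y_true[perm] - y_pred[perm]) ** 2)

print(np.isclose(mse, mse_shuffled))  # True: the mean ignores ordering
```

Contrast this with training: the sequence of stochastic gradient steps in Eq. (2) depends on which samples land in which batch, so there shuffling genuinely changes the trajectory of $\theta$.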
{ "domain": "ai.stackexchange", "id": 2399, "tags": "machine-learning, training, datasets, stochastic-gradient-descent, testing" }
How to partition the rows of a matrix in such a way that every column satisfies a given condition?
Question: The input is: Given a matrix $\mathbf{A}=\left[a_{ij}\right]$ of nonnegative integers for all $i\in\{1,\ldots, m\}$ and $j\in\{1,\ldots, n\}$ (where $n<m$). Nonnegative integers $V_j$ for all $j\in\{1,\ldots,n\}$. The question is: Find $n$ disjoint subsets $S_j$ of $\{1,\ldots,m\}$ such that $$\bigcup\limits_{j=1}^{n} S_j=\{1,\ldots,m\},$$ $$\quad\quad\quad\;\,\sum_{i\in S_j}a_{ij}\geqslant V_j, \forall\,j\in\{1,\ldots,n\}.$$ So for example, given the matrix $$ \begin{pmatrix} 7 & 4 & 3\\ 3 & 2 & 7\\ 2& 3 & 4\\ 1 & 1& 5\\ 6 & 10 & 8 \end{pmatrix}, $$ where $n=3$, $m=5$ and $V_1=12$, $V_2=10$ and $V_3=5$, a solution is $S_1=\{1,2,3\}$, $S_2=\{5\}$ and $S_3=\{4\}$. I think the difficulty of solving this problem comes from the fact that we would like to partition the rows of a given matrix in such a way that every column satisfies a given condition. Even though the problem seems related to the exact cover problem, I cannot find a good way to solve it. Can you suggest a method/algorithm that finds solutions to such a problem? If it is a known problem, do you know any reference? Answer: As suggested by Yuval Filmus, reduce PARTITION to my problem. Given an instance of PARTITION, that is, a set of nonnegative integers $\{b_1, \ldots, b_k\}$, is there a subset $S\subset\{1,\ldots,k\}$ such that $\sum_{i\in S}b_i=\sum_{i\notin S}b_i=\frac{\sum_{i=1}^kb_i}{2}$? Let $n=2$, $m=k$, $a_{ij}=b_i$ for all $(i,j)\in\{1,\ldots,k\}\times\{1,2\}$ and $V_1=V_2=\frac{\sum_{i=1}^kb_i}{2}$. This instance is clearly created in polynomial time. PARTITION is solved if and only if my problem is solved. If PARTITION is solved: there is a set $S\subset\{1,\ldots,k\}$ such that $\sum_{i\in S}b_i=\sum_{i\notin S}b_i=\frac{\sum_{i=1}^kb_i}{2}$. Take $S_1=S$ and $S_2=\{1,\ldots,k\}\backslash S$. Clearly, $S_1\cup S_2=\{1,\ldots,k\}$ and $S_1$ and $S_2$ are disjoint.
Further, we have $$\sum_{i\in S_1}a_{i1}=\sum_{i\in S_1}b_i=V_1\geqslant V_1,\\ \sum_{i\in S_2}a_{i2}=\sum_{i\in S_2}b_i=V_2\geqslant V_2,$$ and my problem is solved. If my problem is solved: there are disjoint $S_1$ and $S_2$ such that $$S_1\cup S_2=\{1,\ldots,k\},\\ \sum_{i\in S_1}a_{i1}=\sum_{i\in S_1}b_i\geqslant V_1,\\ \sum_{i\in S_2}a_{i2}=\sum_{i\in S_2}b_i\geqslant V_2.$$ Since $V_1=V_2=\frac{\sum_{i=1}^kb_i}{2}$ and $\sum_{i\in S_1}b_i+\sum_{i\in S_2}b_i=\sum_{i=1}^kb_i$, we must have $$\sum_{i\in S_1}b_i=\sum_{i\in S_2}b_i=\frac{\sum_{i=1}^kb_i}{2},$$ and PARTITION is solved. Therefore, my problem is NP-hard. To solve the problem, let us write it as an integer programming problem as suggested by Yuval Filmus. To do so, introduce the binary variable $x_{ij}$ that is equal to $1$ if $i$ is in set $S_j$, and $0$ otherwise. \begin{align} & {\underset{\mathbf{ x }}{\text{maximize}}} & & 0\\[6pt] & \text{subject to} & & \sum_{i=1}^ma_{ij}x_{ij}\geqslant V_j,\forall\, j\in\{1,\ldots,n\},\tag{C1}\\[6pt] & & & \sum_{j=1}^nx_{ij}=1, \forall\, i\in\{1,\ldots,m\},\tag{C2}\\[6pt] & & & x_{ ij }\in\{0, 1\}, \forall (i,j)\in\{1,\ldots,m\}\times\{1,\ldots,n\}\tag{C3}. \end{align} Even though this solves my problem, I need to develop a greedy algorithm for it, can I do that?
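A small brute-force checker (a sketch, exponential in m, so only usable for tiny instances) confirms that the example instance from the question is feasible:

```python
from itertools import product

def solve(A, V):
    # brute force: try every assignment of rows to columns (n**m cases)
    m, n = len(A), len(V)
    for assign in product(range(n), repeat=m):
        sums = [0] * n
        for i, j in enumerate(assign):
            sums[j] += A[i][j]
        if all(s >= v for s, v in zip(sums, V)):
            return [{i for i, j in enumerate(assign) if j == col} for col in range(n)]
    return None

# the example instance from the question (0-based row indices)
A = [[7, 4, 3], [3, 2, 7], [2, 3, 4], [1, 1, 5], [6, 10, 8]]
V = [12, 10, 5]
sol = solve(A, V)
print(sol)  # a feasible partition of the rows, one set per column
```

For realistic sizes one would hand the binary-variable formulation above to an ILP solver instead; the brute force is only meant to make the feasibility conditions concrete.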
{ "domain": "cs.stackexchange", "id": 7729, "tags": "algorithms" }
Why are there two vernier scales on a prism spectrometer?
Question: Why are there two vernier scales on a prism spectrometer, and why are they 180 degrees apart? Example image (source): I have some idea that it reduces the error in measurements, but I don't exactly know how it does that. This article reasons as follows: "Record both VERNIER readings (in minutes). Average the two vernier readings (to eliminate any systematic error from misalignment of the circle scale with respect to bearing axis), and add the result to one of the angle scale readings." Can someone elaborate on this reasoning for me? Answer: Construction Construction may vary; this version goes with the image in the question. The vernier scale and the prism table move together on an axis different from that of the main scale, which moves with the telescope. We are assuming that both scales move independently of each other about their axes. The figure presented here is a simplified bird's-eye view of the two scales (circular purple inner scale and main scale). NOTE: The offset is exaggerated to explain the error, so the error may appear large relative to the readings, but since the offset is very small in reality, the error due to it is also small. The purple circle is the circumference of the circular disc of which the vernier scales are a part. Measured and Expected/True value The value that is measured by us is the main scale marking which coincides with the zero of the vernier scales, whereas the true/expected value is the marking that coincides with the zero of the vernier scale if the axes of the main scale and vernier scale were the same (that is, zero offset is the ideal case). The line along which the vernier scales' axis offsets is named the line of offset (l). The error between the measured value and the true value is due to the offset and the position of the vernier scales with respect to the line of offset. Talk about the error due to offset From the figure given below: The outer markings are of the main scale.
The purple lines represent the extended markings on the vernier scales. Here only 0 and 10 of both scales are shown. The measured value at 300 is different from the true value, since the vernier's zero marking points to marking 301. Similarly, vernier 2 measures 142 instead of 120. This error in vernier 1 and vernier 2 is unchanged if the main scale is rotated: since only the markings change, the line of offset stays at the same angle with respect to the vernier scales. When finding any angle, we take the difference between two measured values at different positions of the telescope. This does not affect the vernier scale, and therefore the error in both measured values will be equal, resulting in no error due to offset in the measured angle, which is obtained by taking the difference of the readings. In the case where the vernier table is rotated to measure the angle, the mean of the angles measured by vernier 1 and vernier 2 will minimise the systematic error in measurement, but it will not eradicate it completely. Conclusion The systematic error in measurements from V1 and V2 will be completely erased if the difference of any two measurements is taken, which we always do while measuring angles. In the case where we rotate the vernier table to measure the angle (which we don't need to do), the systematic error can be reduced but not removed by taking the mean of the two measurements of V1 and V2.
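The cancellation can be seen in a one-line model (the error model is my assumption, standard for eccentric circular scales, not part of the answer above): for a small eccentricity e of the vernier axis relative to a scale of radius R, the reading error at scale angle θ is approximately (e/R)·sin(θ − φ) radians, so the errors at two verniers 180° apart are equal and opposite to first order.

```python
import math

def reading_error(theta, e_over_R=1e-3, phi=0.3):
    # first-order reading error (radians) of an eccentric circular scale;
    # e_over_R and phi are arbitrary illustrative values
    return e_over_R * math.sin(theta - phi)

theta = 1.2
err1 = reading_error(theta)               # vernier 1
err2 = reading_error(theta + math.pi)     # vernier 2, 180 degrees away
print(err1 + err2)  # ~0: averaging the two verniers cancels the eccentricity error
```

Taking a difference of two readings from the same vernier cancels the error exactly in this model too, matching the conclusion above.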
{ "domain": "physics.stackexchange", "id": 80401, "tags": "experimental-physics, spectroscopy, error-analysis" }
Is the Chomsky-hierarchy outdated?
Question: The Chomsky(–Schützenberger) hierarchy is used in textbooks of theoretical computer science, but it obviously only covers a very small fraction of formal languages (REG, CFL, CSL, RE) compared to the full Complexity Zoo Diagram. Does the hierarchy play any role in current research anymore? I found only little reference to Chomsky here at cstheory.stackexchange, and in the Complexity Zoo the names Chomsky and Schützenberger are not mentioned at all. Is current research more focused on other means of description than formal grammars? I was looking for practical methods to describe formal languages with different expressiveness, and stumbled upon growing context-sensitive languages (GCSL) and visibly pushdown languages (VPL), which both lie between the classic Chomsky classes. Shouldn't the Chomsky hierarchy be updated to include them? Or is there no use in selecting a specific hierarchy from the full set of complexity classes? I tried to select only those languages that can fit in gaps of the Chomsky hierarchy, as far as I understand: REG (=Chomsky 3) ⊊ VPL ⊊ DCFL ⊊ CFL (=Chomsky 2) ⊊ GCSL ⊊ CSL (=Chomsky 1) ⊊ R ⊊ RE I still don't get where "mildly context-sensitive languages" and "indexed languages" fit in (somewhere between CFL and CSL), although they seem to be of practical relevance for natural language processing (but maybe anything of practical relevance is less interesting in theoretical research ;-). In addition you could mention GCSL ⊊ P ⊂ NP ⊂ PSPACE and CSL ⊊ PSPACE ⊊ R to show the relation to the famous classes P and NP. I found on GCSL and VPL: Robert McNaughton: An Insertion into the Chomsky Hierarchy?. In: Jewels are Forever, Contributions on Theoretical Computer Science in Honor of Arto Salomaa. pp.
204-212, 1999 http://en.wikipedia.org/wiki/Nested_word#References (VPL) I'd also be happy if you know any more recent textbook on formal grammars that also deals with VPL, DCFL, GCSL and indexed grammars, preferably with pointers to practical applications. Answer: In short: yes. More particularly: Chomsky was one of the first to formalize a hierarchy relating languages, grammars, and automata. This insight is still very relevant and is taught in all intro courses on automata theory. However, the specific hierarchy Chomsky came up with and the names for the elements of the hierarchy aren't really significant anymore. We've since invented numerous formalisms which fall between levels of Chomsky's hierarchy, above it, or below it. And the names Chomsky used aren't particularly interesting, i.e. they aren't based on an interesting measure of complexity or anything; they're just numbers. Should mildly context-sensitive languages be Type-1.5 or Type-1.7 or Type-1.3? Who cares. "Mildly context sensitive" is a much more informative name. The Complexity Zoo is a bit different because it's full of all sorts of conditional equivalences and the like. A more modern hierarchy for automata theory wouldn't be linear (e.g., compare CFG vs PEG) but it would still have a well-known topology. To get a perspective on modern automata theory you should look at work on parser combinator libraries and some of the stuff on unification and type theory (though those both branch out far afield).
{ "domain": "cstheory.stackexchange", "id": 353, "tags": "cc.complexity-theory, automata-theory, fl.formal-languages, big-picture, teaching" }
bloom: Generate debian package with pip-only dependency?
Question: Hi, I've made a ROS package that has a pip-only dependency (moviepy). I already included this dependency in rosdistro, so rosdep can resolve it (python-moviepy-pip). Now I'd like to release the package via bloom, but: ==> git-bloom-generate -y rosdebian --prefix release/indigo indigo -i 0 --os-name ubuntu Generating source debs for the packages: ['movie_publisher'] Debian Incremental Version: 0 Debian Distributions: ['trusty'] Releasing for rosdistro: indigo Pre-verifying Debian dependency keys... Running 'rosdep update'... Key 'python-moviepy-pip' resolved to '['moviepy']' with installer 'pip', which does not match the default installer 'apt'. Failed to resolve python-moviepy-pip on ubuntu:trusty with: Error running generator: The Debian generator does not support dependencies which are installed with the 'pip' installer. python-moviepy-pip is depended on by these packages: ['movie_publisher'] <== Failed What are the options? Can I somehow release a deb-packaged version of the pip-only dependency? I've seen https://askubuntu.com/questions/327543/how-can-a-debian-package-install-python-modules-from-pypi, but I don't know how to use it with the ROS build ecosystem. I also saw https://answers.ros.org/question/280855/bloom-with-pip-test-depend/, but that was a test-only dependency, while I need the package as a run dependency. Originally posted by peci1 on ROS Answers with karma: 1366 on 2019-01-25 Post score: 0 Answer: As is stated in the rosdep contributing guide, "Native packages are strongly preferred. (They are required for packaging and have upgrade and conflict tracking.)" As per the answer to the question you linked to on Ask Ubuntu, https://askubuntu.com/a/508608, the right solution is to get that dependency packaged as a Debian package. Part of being in a distribution is coordinating with the existing dependencies and versions available.
It looks like there's been some interest in packaging it already: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=814529 but I don't see any follow-ups. There are ways to embed it inside a ROS package but in general that's discouraged unless you want to take on the full maintenance of the package. In particular there are also other dependencies that appear to be unmet for the Debian packaging, which is where things get complicated. Originally posted by tfoote with karma: 58457 on 2019-01-25 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by peci1 on 2019-01-25: So if I want to proceed, I have basically two ways? 1) become the package maintainer for Debian 2) become the package maintainer for a ROS-released deb package? hmm, none of these sounds really interesting to me, though I think 2) would be easier... Comment by tfoote on 2019-01-25: Yeah, unfortunately somebody has to do the work to generate and maintain the package. Comment by peci1 on 2019-01-25: Okay, so I'll release the package as source-only... Comment by ahendrix on 2019-01-26: As a third option, consider how much of the functionality of your dependency you're using, and consider re-implementing just that functionality within your package, so that you don't need the dependency at all. Comment by tfoote on 2019-01-26: True, and you can also look for functional alternatives that are packaged. Comment by peci1 on 2019-01-26: I did that, and haven't found any packaged alternative that would nicely wrap ffmpeg so that you could get movie frames as numpy arrays. I know I could call ffmpeg myself and do all the stuff about "computing" the right CLI parameters, but that's exactly what I want to avoid, since it's not trivial. Comment by ahendrix on 2019-01-26: OpenCV python can read and write video files, and it's already available in rosdep: https://www.learnopencv.com/read-write-and-display-a-video-using-opencv-cpp-python/ Comment by peci1 on 2019-01-27: Nice, thanks!
It makes sense that OpenCV can also read video files :) However, its interface seems to be much less versatile than moviepy's. I ended up adding OpenCV as a fallback, removing the moviepy dependency, and informing the user at runtime that they should install moviepy for better performance.
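For reference, the rosdistro rule behind the python-moviepy-pip key is roughly of this shape (sketched from the general rosdep rule format, not copied from the actual rosdistro entry):

```yaml
# rosdep python.yaml rule (illustrative sketch)
python-moviepy-pip:
  ubuntu:
    pip:
      packages: [moviepy]
```

It is precisely that pip installer key which the Debian generator refuses, so a binary release needs the dependency to resolve through the default apt installer instead.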
{ "domain": "robotics.stackexchange", "id": 32344, "tags": "ros, rosdep, pip, bloom-release, ros-indigo" }
What is the difference in Dirac Notation for probability and Probability Density in Quantum Mechanics?
Question: The Dirac Notation for a wave function $$\langle\psi|\psi\rangle= \int_{-\infty}^\infty \psi^{*}\psi \,dx $$ $$\text{Probability} = \int_{-\infty}^\infty \psi^{*}\psi \,dx $$ But most often it is quoted in books like $$\text{Probability} = \vert\langle\psi|\psi\rangle\vert^2 $$ What is this conundrum? I am not able to comprehend it. Answer: Understanding this requires understanding the difference between a wave function and a state vector. The difference between these two is essentially the same as the difference between "vector components" and "vectors" in ordinary vector algebra, in the same respective order. Mathematically, they really are the same thing - "vectors" are just elements of general systems called "vector spaces", and the spaces of ordinary vector-algebra vectors and quantum state vectors are both vector spaces. In particular, in a sense, while an ordinary vector in 3-dimensional, say, space has 3 components, i.e. $$\mathbf{r}_\mathrm{example} := \left<r_x, r_y, r_z\right>$$ a vector in a quantum state space can, and typically does, have infinitely many components - in a sense, here it has uncountably infinitely many components; the space has an uncountably infinite dimension: effectively, as many "coordinate axes" as there are real numbers. Thus, while in the above, we label them with labels "x", "y", and "z", here, we have to label the components with a real number or, even, a real-number vector, and hence we have a function which returns the component of the vector with a given real-number/real-vectorial "index": this is the wave function, $$\psi(\mathbf{r})$$ which represents the vectorial component indexed by vector $\mathbf{r}$.
Effectively, whereas before, you had an x-, y-, and z-axis, you now have, say, a $(0.3, -5.0, \pi)$-axis and a $(5000.3, 10^{-9000}, \Omega_U)$-axis, and a $(35.239\cdots, -4669.1, 10^{10^{10^{100}}})$-axis, and so forth for literally uncountably infinitely many different possibilities each requiring infinite amounts of space to write down exactly when in full generality. The corresponding vector itself is denoted $$|\psi\rangle$$. The two are related by the following equation: $$|\psi\rangle = \int_{\mathbb{R}^3} \psi(\mathbf{r})|\mathbf{r}\rangle\ dV$$ where $|\mathbf{r}\rangle$ is a positional basis vector. Mathematically, it's hard to describe this thing, but the semantic denotation is knowledge that a particle is located exactly at the physical spatial position $\mathbf{r}$ (though many quibble over the ontology here). This is analogous to the three vectors $\hat{\mathbf{x}}$, $\hat{\mathbf{y}}$, and $\hat{\mathbf{z}}$ in ordinary vector algebra in 3 real dimensions, which allow you to write the $\mathbf{r}_\mathrm{example}$ given at the beginning as $$\mathbf{r}_\mathrm{example} = r_x \hat{\mathbf{x}} + r_y \hat{\mathbf{y}} + r_z \hat{\mathbf{z}}$$ This is that same formula, only now we have uncountable components and hence (being a bit rough - for lots of really picky reasons the vectors $|\mathbf{r}\rangle$ are actually "bad" and require special treatment to make the above formula actually make sense) need an "uncountable sum", which is provided by an integral. Likewise, just as you can have other basis vector sets in 3-dimensional vector algebra, so too can you in infinite-dimensional algebra, and which basis you're using depends on what kind of argument the wave function takes. If it is a function of a position, $\psi(\mathbf{r})$, then that wave function is the position-basis expansion. But you can also have a momentum-basis expansion, with basis vectors $|\mathbf{p}\rangle$ that are tagged by momentum values, and this will give different components.
The notation $$\left<\phi|\psi\right>$$ represents a vector inner product. It is the same thing as the dot products you may be familiar with from ordinary vector algebra, only here with the infinite dimensional quantum state vectors instead. The integral representation $$\left<\phi|\psi\right> = \int_{\mathbb{R}^3} [\phi(\mathbf{r})]^{*} \psi(\mathbf{r})\ dV$$ is then how this dot product is described in basis form, and this is directly analogous to the usual formula for dot product of 3-space vectors by summing component products. However, you may also wonder why we say $$\mathrm{Probability} = |\left<\phi|\psi\right>|^2$$ This is because the actual probability value - here that to obtain $|\phi\rangle$ when querying state $|\psi\rangle$ - is not the inner-product itself, which is a complex number, as the quantum vectors are complex vectors in the sense they have complex, instead of real, components, but rather is given by the expression above.
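The distinction is easy to play with in a finite-dimensional toy model, where the integral becomes an ordinary sum over components. A small numpy sketch (the two state vectors are arbitrary illustrative choices):

```python
import numpy as np

def braket(phi, psi):
    """<phi|psi>: conjugate the first argument, multiply componentwise,
    and sum -- the discrete analogue of the integral of phi* psi dV."""
    return np.vdot(phi, psi)  # np.vdot conjugates its first argument

def normalize(v):
    """Scale v so that <v|v> = 1."""
    return v / np.sqrt(braket(v, v).real)

psi = normalize(np.array([1.0 + 1.0j, 2.0, 0.5j]))  # illustrative state
phi = normalize(np.array([0.0, 1.0, 1.0j]))         # illustrative state

norm = braket(psi, psi).real        # <psi|psi> = 1 for a normalized state
prob = abs(braket(phi, psi)) ** 2   # Born rule: |<phi|psi>|^2 lies in [0, 1]
```

Here ⟨ψ|ψ⟩ = 1 is just the normalization statement from the question's first formula, while the Born-rule probability |⟨φ|ψ⟩|² involves two different states; conflating those two expressions is exactly the conundrum the question runs into.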
{ "domain": "physics.stackexchange", "id": 59910, "tags": "quantum-mechanics, hilbert-space, wavefunction, probability, born-rule" }
What caused the blue column of ionised air above the exploded Chernobyl reactor?
Question: I read that the blue column of light directly above the exploded reactor was actually the ionisation of air, but I would like to know where the electric field came from to cause such a phenomenon. I imagine a thundercloud, where there is a build-up of electric potential and then lightning occurs; so for the column of blue light there must also be an electric potential, and why a column that extends into the sky? Answer: Air glows when molecules that are brought to an excited state by a collision go back to a less-excited state by emitting a photon. The question becomes: what generated a particle fast enough to generate such a collision? Acceleration of a charged particle by some electric field. That's the case for lightning or neon lights, for example. Emission of an energetic particle by some high-energy process (such as radioactive decay); this doesn't need any electric field. As far as I know, that was the case for Chernobyl: nuclear reactions in the core sent high-energy particles in all directions; those that went down or sideways were stopped by concrete within meters (or less), but those going upwards could travel through air (which is less dense) for a bit, eventually hitting some air molecule and bringing it to an excited state in the process. The fact that the light column was kilometers high indicates that the mean free path of those high-energy particles was kilometers (at least). That strongly suggests γ-ray photons. I have read several times that this glow was due to Cherenkov radiation (light emission by charged particles going through a medium faster than light propagation in it). I have some doubts about it, because light speed in air is so close to that in vacuum (refractive index is very near 1); therefore the energy needed for a particle to be above the speed of light in air is positively huge, higher I think than the energy of most nuclear processes.
On the other hand, the particle energy necessary to bring a molecule to an excited state would be mere eVs, much lower than that of any nuclear process. (Obviously, the blue glow of water in a nuclear reactor is another matter, since the speed of light in water is significantly slower than in vacuum, enabling Cherenkov radiation much more readily.)
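That last claim is easy to quantify: Cherenkov emission requires β > 1/n, i.e. γ > n/√(n² − 1). A back-of-the-envelope sketch in Python, taking the charged particle to be an electron (refractive indices are standard textbook values):

```python
import math

M_E = 0.511  # electron rest energy in MeV

def cherenkov_threshold_mev(n):
    """Minimum electron kinetic energy (MeV) for Cherenkov emission in a
    medium of refractive index n: the condition beta > 1/n translates to
    gamma > n / sqrt(n^2 - 1)."""
    gamma = n / math.sqrt(n * n - 1.0)
    return (gamma - 1.0) * M_E

t_air = cherenkov_threshold_mev(1.000293)  # air at standard conditions
t_water = cherenkov_threshold_mev(1.33)    # water
```

So in air the threshold sits around 20 MeV, far beyond typical MeV-scale decay energies, while in water it is only about a quarter of an MeV, which is why reactor pools glow so readily.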
{ "domain": "physics.stackexchange", "id": 65092, "tags": "electromagnetism, electric-fields, ionization-energy" }
Is there a simple model explaining Faraday effect?
Question: I find magneto-optical effects fascinating, and especially the Faraday effect. But most sources only give a phenomenological description, while I want a deeper explanation of its mechanism. Is there a simple model that can explain the formula $\beta = \mathcal{V}Bd$? No need for a precise calculation of the Verdet constant. Answer: It is indeed a topic that is discussed in many books but only a few give a rigorous mathematical description of the phenomena. For stringency in non-linear optics topics I always trust HARTMANN ROMER: Theoretical Optics, An Introduction. 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. A book which is also mathematically rigorous is BOYD, ROBERT W: Nonlinear Optics, Third Edition. Academic Press, 2008 The simplest way/model to explain the Faraday effect and get the $\beta=\mathcal{V}BL$ result is to solve the dynamic problem of a classical electron moving in a non-conducting substance under the action of an applied magnetic field $\vec B$ in the $z$ direction: $$ m\frac{{{d^2}\vec r}}{{d{t^2}}} + K\vec r = - e\vec E - e\left( {\frac{{d\vec r}}{{dt}} \times \vec B} \right) $$ Solving for the rotational coordinates $R_{\pm}=x\pm iy$ one finds that the introduction of the applied magnetic field will break the linear isotropy of the substance by splitting the refractive index into two distinct parts, $n_{-}$ and $n_{+}$, that spin in opposing directions. This will end up resulting in a Faraday rotation of the plane of polarization of incident waves of the form: $$ \beta = \frac{\pi }{\lambda }L\left( {{n_ - } - {n_ + }} \right) \simeq \left[ {\frac{{{\omega _p}^2}}{{{n_0}c}}\frac{e}{m}\left( {\frac{{{\omega ^2}}}{{{\omega _L}^4 - 4{\omega _0}^2{\omega _L}^2}}} \right)} \right]{B_z}L = \mathcal{V}B_{z}L $$ There are many details to this derivation that you can look up in the books I've mentioned. I hope you find these references useful.
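To see the β = 𝒱B_z L behaviour emerge numerically, one can evaluate the split indices from the bound-electron model directly. In this classical model the circular components see n±² = 1 + ωp²/(ω0² − ω² ∓ ω ωc) with cyclotron frequency ωc = eB/m; the sketch below uses purely illustrative frequencies, not the constants of any real material (and the overall sign depends on the ± convention):

```python
import math

E_OVER_M = 1.7588e11   # electron charge-to-mass ratio e/m, C/kg
C_LIGHT = 2.9979e8     # speed of light, m/s

def faraday_rotation(B, L, wp=1.0e15, w0=2.0e15, w=1.5e15):
    """Rotation angle beta = (pi L / lambda) * (n_minus - n_plus) from the
    classical bound-electron model; wp (plasma), w0 (resonance) and w
    (light) frequencies are illustrative, not real material constants."""
    wc = E_OVER_M * B                                  # cyclotron frequency
    n_plus = math.sqrt(1.0 + wp**2 / (w0**2 - w**2 - w * wc))
    n_minus = math.sqrt(1.0 + wp**2 / (w0**2 - w**2 + w * wc))
    lam = 2.0 * math.pi * C_LIGHT / w                  # vacuum wavelength
    return (math.pi * L / lam) * (n_minus - n_plus)

b1 = faraday_rotation(B=1.0, L=0.01)   # 1 T over a 1 cm path
b2 = faraday_rotation(B=2.0, L=0.01)   # doubling B should double beta
```

Doubling B doubles the rotation angle to within a fraction of a percent, which is the linear-in-B Verdet behaviour the closed-form expression predicts.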
{ "domain": "physics.stackexchange", "id": 10519, "tags": "electromagnetism, optics, visible-light, polarization" }
MySQL database custom session handler using PHP with MySQLi extension
Question: I have made the decision to move the storing of session data to the database. Any new ideas, suggestions are welcome. Please also give security suggestions. Like SQL injection is possible here, etc... <?php /** * MySQL/MariaDB Session Handler - Handle Sessions using Database * Copyright (c) 2021 Baal Krshna * PHP Version 5.4 * * @author Puneet Gopinath (PuneetGopinath) <baalkrshna@gmail.com> * @copyright 2021 Baal Krshna */ namespace BaalKrshna\SessionHandler; /** * MySQL/MariaDB Session Handler - Handle Sessions using Database * * @author Puneet Gopinath (PuneetGopinath) <baalkrshna@gmail.com> */ class MySQLHandler implements \SessionHandlerInterface { /** * MySQL/MariaDB Session Handler Version number * Used for easier checks, like if SessionHandler is up to date or not * * @var string VERSION The Version number */ const VERSION = "0.1.0"; /** * Session Lifetime, default 2 hrs * * @var integer $expiry The expiry time in seconds */ private $expiry = 7200; /** * Session Id * * @var string $sessionId The session id */ private $sessionId = null; /** * User Agent * * @var string $userAgent The User Agent */ private $userAgent = null; /** * DB connection object * * @var object $db_conn The DB connection object */ private $db_conn = null; /** * DB table name for storing session info * * @var string $tablename The Table name */ private $tablename = "sessions"; /** * Session name * * @var string $sessionName The Session name */ private $sessionName = "PHPSESSID"; /** * MySQLHandler class constructor. 
* * @param array $config The config settings * @return MySQLHandler The MySQLHandler class */ public function __construct($config) { /*session_set_save_handler( array($this, "open"), array($this, "close"), array($this, "read"), array($this, "write"), array($this, "destroy"), array($this, "gc") );*/ session_set_save_handler($this, true); $this->setConfig($config); session_start(); } /** * Set config * * @param array $config The config settings * @return bool The return value (usually TRUE on success, FALSE on failure). */ private function setConfig($config) { if (empty($config["db_conn"])) { error_log("DB connection not set in config!!"); return false; } $this->db_conn = $config["db_conn"]; $this->tablename = empty($config["tablename"]) ? $this->tablename : $config["tablename"]; $this->expiry = empty($config["expiry"]) ? $this->expiry : $config["expiry"]; $this->userAgent = $_SERVER["HTTP_USER_AGENT"]; ini_set("session.gc_maxlifetime", $this->expiry); ini_set("session.gc_probability", "0"); if ( $stmt = $this->db_conn->prepare( "CREATE TABLE IF NOT EXISTS $this->tablename (" . "session_id VARCHAR( 255 ) NOT NULL ," . "data TEXT NOT NULL ," . "userAgent VARCHAR( 255 ) NOT NULL ," . "lastModified DATETIME NOT NULL ," . "PRIMARY KEY ( session_id )" . ")" ) ) { if (!$stmt->execute()) { return false; } $stmt->close(); } else { return false; } return true; } /** * Refresh the session * * @return bool The return value (usually TRUE on success, FALSE on failure). */ private function refresh() { $currentId = session_id(); session_regenerate_id(); $this->sessionId = session_id(); if ( $stmt = $this->db_conn->prepare( "UPDATE $this->tablename SET session_id=? WHERE session_id=?" ) ) { $stmt->bind_param( "ss", $currentId, $this->sessionId ); if (!$stmt->execute()) { return false; } $stmt->close(); } else { return false; } return true; } /** * Open/Start session * * @param string $savePath The path where to store/retrieve the session. 
* @param string $sessionName The session name * @return bool The return value (usually TRUE on success, FALSE on failure). */ public function open($savePath, $sessionName) { $this->sessionName = $sessionName; return true; } /** * Close session * * @return bool The return value (usually TRUE on success, FALSE on failure). */ public function close() { if (!$this->gc()) { return false; } return true; } /** * Read session data * * @param string $id The session id * @return string The data read from database */ public function read($id) { if ( $stmt = $this->db_conn->prepare( "SELECT data, session_id FROM $this->tablename WHERE session_id=?" ) ) { $stmt->bind_param("s", $id); $stmt->execute(); $stmt->bind_result($data, $sessionId); $stmt->close(); if (empty($sessionId)) { $this->refresh(); return ""; } return $data; } else { return ""; } } /** * Write data to session * * @param string $id The session id * @param string $data The data to write * @return bool The return value (usually TRUE on success, FALSE on failure). */ public function write($id, $data) { $date = date("Y-m-d H:i:s"); $read = $this->read($id); if (empty($read) || !$read) { if ( $stmt = $this->db_conn->prepare( "INSERT INTO $this->tablename ( session_id, data, lastModified, userAgent ) VALUES (?, ?, ?, ?)" ) ) { $stmt->bind_param( "ssss", $id, $data, $date, $this->userAgent ); if (!$stmt->execute()) { return false; } $stmt->close(); } else { return false; } } else { if ( $stmt = $this->db_conn->prepare( "UPDATE $this->tablename SET data=?, lastModified=?, userAgent=? WHERE session_id=?" ) ) { $stmt->bind_param( "ssss", $data, $date, $this->userAgent, $id ); if (!$stmt->execute()) { return false; } $stmt->close(); } else { return false; } } return true; } /** * Destroy the session * * @param string $id The session id * @return bool The return value (usually TRUE on success, FALSE on failure). 
*/ public function destroy($id) { if ( $stmt = $this->db_conn->prepare( "DELETE FROM $this->tablename WHERE session_id=?" ) ) { $stmt->bind_param( "s", $id ); if (!$stmt->execute()) { return false; } $stmt->close(); } else { return false; } return true; } /** * Do garbage collection * * @param int $max_lifetime Sessions that have not updated for the last maxlifetime seconds will be removed. * @return int|bool The return value (usually TRUE on success, FALSE on failure). */ public function gc($max_lifetime = null) { if (empty($max_lifetime)) { $max_lifetime = $this->expiry; } $sessionLife = time() - $max_lifetime; if ( $stmt = $this->db_conn->prepare( "DELETE FROM $this->tablename WHERE lastModified < ?" ) ) { $stmt->bind_param( "s", $sessionLife ); if (!$stmt->execute()) { return false; } $stmt->close(); } else { return false; } return true; } } The above code follows PSR12. Even though it follows PSR12, the only thing it doesn't follow is about the visibility of constant. I have some questions: Is not closing the connection going to have problems? If question 1 is yes, then how can I get a new mysqli object in the next time I need to use mysqli I want to know whether the write function will not be used to update existing fields in dB? Testing the above code: You have to edit the MySQL credentials in the mysqli_connect function's args. $connection = mysqli_connect( "localhost", //Hostname "root", //Username "password", //Password "test" //DB name ); $handler = new \BaalKrshna\SessionHandler\MySQLHandler( array( "db_conn" => $connection ) ); //Session handler already set in construct method $_SESSION["foo"] = "bar"; echo $_SESSION["foo"]; session_write_close(); session_gc(); session_destroy(); Answer: Use the null coalescing operator for all occurrences where you want to provide fallback values for undeclared/null variables. $this->tablename = $config["tablename"] ?? 
$this->tablename; I agree with @Dharman's comment under the question: don't bother with manually checking for mysqli errors. See the commented links. private function refresh(): int { $currentId = session_id(); session_regenerate_id(); $this->sessionId = session_id(); $stmt = $this->db_conn->prepare("UPDATE $this->tablename SET session_id=? WHERE session_id=?"); $stmt->bind_param("ss", $this->sessionId, $currentId); $stmt->execute(); return $this->db_conn->affected_rows; } I don't personally ever bother manually closing prepared statements. PHP is going to automatically do that for you when it knows it is done with them. Simplify the boolean return by not manually typing the true/false. Your IDE might even be alerting you to this. public function close() { return (bool) $this->gc(); } Set up your database table(s) to have lastModified DEFAULT to the current timestamp; this way you don't have to manually write that in your SQL when INSERTing a new row. Inside of write(), the following is unnecessary/redundant: if (empty($read) || !$read) { Instead, just do a falsey check, because you know that the variable will be unconditionally declared: if (!$read) { As a general rule, when a method is performing an INSERT, I typically return the autogenerated id (whenever possible) -- even if I don't need it right now, it is possible that I may want it in the future. For UPDATE and DELETE queries, I return the affected rows as an integer -- this allows me to verify that a change had actually occurred from the executed query, and knowing how many rows were affected can sometimes help with diagnostics. In all "database-writing" cases, the return value is easily compared as truthy/falsey if your method call doesn't need the excessive specificity in the outcome. Again, in gc(), if (empty($max_lifetime)) { is not necessary -- the variable WILL be declared, so do a falsey / function-less check here.
Or, even better, avoid the single-use variable declarations and use the null coalescing operator again. public function gc($max_lifetime = null): int { $stmt = $this->db_conn->prepare("DELETE FROM $this->tablename WHERE lastModified < ?"); $modifiedTime = time() - ($max_lifetime ?? $this->expiry); $stmt->bind_param("s", $modifiedTime); $stmt->execute(); return $this->db_conn->affected_rows; } Security suggestion: Don't support PHP 5.3; instead support the versions of PHP which still receive security updates. See the supported versions of PHP here. PHP 5.3 has a lot of security issues.
{ "domain": "codereview.stackexchange", "id": 41239, "tags": "php, mysql, database, mysqli, session" }
LinkedList of int nodes in C++
Question: I'm new to C++ programming. I'm experienced in Java and its OOP paradigm. This code works well. I just need to make sure whether it's correct in terms of C++ programming standard. main.cpp #include <iostream> #include "ListNode.h" #include "LinkedList.h" using namespace std; int main(void) { LinkedList l; ListNode a(10); ListNode b(5); ListNode c(3); l.addFirst(a); l.addFirst(b); l.addLast(c); bool empty = l.isEmpty(); cout << l.listSize() << endl; cout << empty << endl; system("PAUSE"); return 0; } LinkedList.h #pragma once #include "ListNode.h" class LinkedList { public: LinkedList(); ~LinkedList(); void addFirst(ListNode &node); void addLast(ListNode &node); bool isEmpty(); int listSize(); private: ListNode *head; ListNode *tail; int size; }; LinkedList.cpp #include "LinkedList.h" LinkedList::LinkedList() { head = new ListNode(); tail = new ListNode(); } LinkedList::~LinkedList() { } void LinkedList::addFirst(ListNode & node) { if (isEmpty()) { *head = node; *tail = node; size = 1; } else { node.setNext(*head); head->setPrev(node); *head = node; size++; } } void LinkedList::addLast(ListNode & node) { if (isEmpty()) { *tail = node; *head = node; size = 1; } else { tail->setNext(node); node.setPrev(*tail); *tail = node; size++; } } bool LinkedList::isEmpty() { return size == 0; } int LinkedList::listSize() { return size; } ListNode.h #pragma once class ListNode { public: ListNode(); ListNode(int val); ~ListNode(); inline void setPrev(ListNode &node) { *prev = node; } inline ListNode *getPrev() { return prev; } inline void setNext(ListNode &node) { *next = node; } inline ListNode *getNext() { return next; } inline void setValue(int val) { value = val; } inline int getValue() { return value; } private: ListNode *prev; ListNode *next; int value; }; ListNode.cpp #include "ListNode.h" ListNode::ListNode() { } ListNode::ListNode(int val) { prev = new ListNode; next = new ListNode; value = val; } ListNode::~ListNode() { } Am I do it correctly as usually done by 
C++ programmer? Or is there something I need to fix? I heavily put my focus on pointer/reference usage. Answer: It's not idiomatic, and definitely not good style. C++ has evolved over the years, and C++11 brought along a swath of new facilities that good C++ style should now use: they help cut down the number of bugs. Let's start from the bottom up, with ListNode: Use std::unique_ptr to manage dynamically allocated memory by default, although here std::shared_ptr/std::weak_ptr is necessary because of the doubly-linked aspect1 Always initialize built-ins with a default value Use explicit for constructors that may be called with a single argument Follow the Rule of Zero (no need to define any special member, or if you have to, define them all) Use const wherever possible inline is unnecessary if you define a method inside the class definition 1 Doubly linked lists are actually very tricky from a memory management point of view; there are risks of cycles, ... Putting this all together: #pragma once #include <memory> class ListNode { public: ListNode() = default; explicit ListNode(int val): value(val) {} int getValue() const { return value; } void setValue(int val) { value = val; } private: std::weak_ptr<ListNode> prev; std::shared_ptr<ListNode> next; int value = 0; }; Note that by default a Node does not allocate memory for the previous and next nodes. That's because the previous and next fields are supposed to refer to existing nodes, not new ones! Moving on: we need to be able to set the previous/next fields! We do so by passing std::shared_ptr<ListNode> around: std::shared_ptr<ListNode> ListNode::getNext() const { return next; } void ListNode::setNext(std::shared_ptr<ListNode> n) { next = n; } std::shared_ptr<ListNode> ListNode::getPrevious() const { return prev.lock(); } void ListNode::setPrevious(std::shared_ptr<ListNode> p) { prev = p; } Moving on to LinkedList.
Good encapsulation is about hiding internal implementation details, and therefore the LinkedList interface should NOT expose the fact that there are ListNode instances behind the scenes. Instead, it should allow the user to manipulate values. On top of the previous remarks: prefer empty and size; those are the names used by the Standard containers For simplicity's sake, I will only demonstrate operations at the head of the list; the tail is symmetric. #pragma once #include "ListNode.h" class LinkedList { public: bool empty() const { return size_ == 0; } int size() const { return size_; } void prepend(int value); private: int size_ = 0; std::shared_ptr<ListNode> head; std::shared_ptr<ListNode> tail; }; (The trailing underscore on size_ avoids a name clash with the size() member function.) And now, how do we prepend? In C++, using new is bad form. C++11 fortunately provides std::make_shared (and C++14 provides std::make_unique). This is a factory method: pass the type as template argument, pass the arguments to be forwarded to the constructor of this type as regular arguments, and lo and behold it returns an instance of this type wrapped in a shared_ptr. void LinkedList::prepend(int value) { auto node = std::make_shared<ListNode>(value); if (empty()) { head = node; tail = node; size_ = 1; return; } node->setNext(head); head->setPrevious(node); head = node; size_ += 1; } It's relatively simple. I'll let you figure out how to unlink a node (when removing it); if done wrong you could leak memory. Also, a final parting remark to get your brain churning: there are two issues with this implementation: A copy of LinkedList is a shallow copy: both original and copy share the nodes, due to the usage of shared_ptr. You may either prevent copying (using LinkedList(LinkedList const&) = delete;) or you need to actually implement the copy constructor... and as per the Rule of Five this means all other special members.
The default generated destructor of ListNode may cause a stack overflow as it recurses; I suggest actually handling this issue at the LinkedList level. An assert in ListNode that next is null in the destructor will help with identifying the places where you did not correctly unlink it.
{ "domain": "codereview.stackexchange", "id": 23393, "tags": "c++, beginner, linked-list" }