What is the grammar for language $L=\{a^nb^m : n \neq m-1\}$?
Question: What is the grammar for language $L = \{ a^nb^m : n\neq m-1\}$? I only know that I have to write a grammar for both $ n<m-1 $ and $ n>m-1 $, so this is what I wrote: For $n<m-1$: \begin{align} &S\to Abb \\ &A\to aAb \mid \lambda \end{align} For $n>m-1$: \begin{align} &S\to aaA \\ &A\to aAb \mid \lambda \end{align} Yet I cannot mix them to get a correct grammar for $L$. Answer: If you split the grammar into two parts for disjoint parts of the language, you had better use different names for the variables, since they have a different meaning in the two parts.
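Following that hint, one possible combined grammar (my own sketch, with fresh variable names per branch; note it also generates all of $n<m-1$ and $n>m-1$, whereas the grammars above only produce $m=n+2$ and $n=m+2$ respectively):
\begin{align}
&S \to T \mid U \\
&T \to aTb \mid Bbb, \quad B \to Bb \mid \lambda &&\text{(generates } n < m-1\text{)} \\
&U \to aUb \mid C, \quad C \to aC \mid \lambda &&\text{(generates } n > m-1\text{, i.e. } n \ge m\text{)}
\end{align}
Here $T$ wraps matched $a\ldots b$ pairs around a core of at least two extra $b$'s, giving $a^n b^{m}$ with $m \ge n+2$; $U$ wraps matched pairs around a core of extra $a$'s, giving $n \ge m$.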
{ "domain": "cs.stackexchange", "id": 18491, "tags": "context-free, formal-grammars" }
Understanding Momentum Conservation of a Fluid in Control Volume
Question: I am taking a course in Fluid Dynamics, and we discussed the conservation of momentum in class: $$ \dot{M}_{out,x} = \rho u (dydz) u|_{x+dx} + \rho v (dxdz) u|_{x+dx} + \rho w (dxdy) u|_{x+dx} $$ I understand this notation as follows: $$ \dot{M}_{out,x} = \rho u(x+dx) (dydz) u(x+dx) + \rho v(y) (dxdz) u(x+dx) + \rho w(z) (dxdy) u(x+dx) $$ The next line in the equation in the notes reads: $$\dot{M}_{out,x} = \rho u (dydz) u + \frac{\partial (\rho u (dydz)u)}{\partial x}dx + \rho v (dxdz) u + \frac{\partial (\rho v (dxdz)u)}{\partial y}dy + \rho w (dxdy) u + \frac{\partial (\rho w (dxdy)u)}{\partial z}dz $$ I can see they did a Taylor expansion, but since the first equation reads $|_{x+dx}$ at every component, I do not understand why the Taylor expansion was carried out for $|_{y+dy}$ and $|_{z+dz}$. I am really struggling to arrive at that last form. Answer: The derivation assumes a tiny rectangular parallelepiped as the control volume. It assumes that momentum enters at $x$, $y$, and $z$, and exits at $x+\Delta x$, $y+\Delta y$, and $z+\Delta z$ (so the evaluations should really be at $|_{y+dy}$ and $|_{z+dz}$ for the $y$- and $z$-faces, not $|_{x+dx}$ everywhere).
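The face-by-face bookkeeping can be sketched as follows (my notation, not from the original notes): the net outflow of $x$-momentum through each pair of opposite faces is obtained by Taylor-expanding the exit-face flux about the entry face,
$$ \rho u u\,(dydz)\big|_{x+dx} - \rho u u\,(dydz)\big|_{x} \approx \frac{\partial(\rho u u)}{\partial x}\,dx\,dy\,dz, $$
and similarly $\frac{\partial(\rho v u)}{\partial y}\,dy\,(dxdz)$ for the $y$-faces and $\frac{\partial(\rho w u)}{\partial z}\,dz\,(dxdy)$ for the $z$-faces. Each face pair is expanded in its own coordinate, which is why $y+dy$ and $z+dz$ expansions appear.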
{ "domain": "physics.stackexchange", "id": 96127, "tags": "fluid-dynamics, momentum, conservation-laws, flow" }
Different validation sets give very different results. What can be the reason?
Question: I have ~78k microscopy images of single cells, where the task is to classify for cancer (binary classifier). The images are labeled according to which patient the data came from. I do the train-val split, making sure no patient has images in both train and validation. I noticed that depending on which patients I put in the validation set (one malignant patient, one benign patient, always preserving a 20% validation size and about the same class distribution) I get wildly different validation accuracies. Below is a plot of a test I did, where I tried all permutations of the validation set for each patient with cancer. The dashed lines mark where a new patient with cancer is swapped into the validation set. It seems that which patient with cancer I put in the validation set influences the validation accuracy heavily. My question is, what does this tell me, and are there any popular methods for dealing with similar situations? My thinking is that I should train the model using the split in dashed group number 3 in the plot, since it has the highest validation accuracy without lowering training accuracy, but then again maybe those results are due to some unknown leak. EDIT: It should be noted that the images are labeled according to whether they came from a patient with cancer or not, not whether the cell itself actually is cancerous. Below is an example of what the pictures look like; as far as I can see with my eyes there is very little difference between the images. Answer: Different validation splits will give different results because the data points vary. How severe the change in results is depends on how different the data points are. One way to reduce this impact is to use cross-validation while training your model. Since you have a binary classification problem, you should go for stratified cross-validation. This helps your model capture the majority of the diversity of the dataset.
Also, since you mention that the majority of the images are similar (as far as you can tell), you should use image augmentation techniques. Keras has helpful augmentation utilities which you can use. This will help your model become more robust to any diversity it might encounter when deployed. These two methods should go a long way toward solving your issue. Cheers!
{ "domain": "datascience.stackexchange", "id": 11751, "tags": "machine-learning, deep-learning, image-classification, image-preprocessing" }
What is the difference between mean free path and intermolecular distance?
Question: Why is the mean free path not equal to the intermolecular distance? A particle moving in a particular direction should strike the object in that direction after traveling the same distance as the distance between them initially. Answer: There are a few things to point out. @lemon already pointed out one of them in the comment: it is possible for a molecule to move and end up going between other molecules, missing them. So even if the molecules start out, say, equally spaced (think of an equilateral triangle), a molecule can pass between the other two rather than heading directly at one of them. The other thing to remember is that all the molecules are moving. Consider a very simplified case where you have two molecules on the x-axis, initially a distance $d$ apart, both moving in the $+x$ direction at the same speed. Their mean free path is infinite -- they will never collide -- but their intermolecular distance is constant. They are just following each other. So, all of this is to say that the molecules are all moving at the same time and in random directions. It is not as if you have a single molecule moving while the rest are frozen. Nor are they guaranteed to be moving directly towards one another. The mean free path is how far a molecule travels before it hits something on average, while the intermolecular distance is the mean spacing between the molecules without consideration for their direction of motion.
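For reference (standard kinetic theory, not stated in the answer): for hard spheres of diameter $d$ at number density $n$,
$$ \lambda = \frac{1}{\sqrt{2}\,\pi d^{2} n}, \qquad \text{mean intermolecular spacing} \sim n^{-1/3}, $$
so the two quantities scale differently with density and need not be equal. In air at standard conditions $\lambda \approx 70\ \mathrm{nm}$, while the mean spacing is only a few nanometres.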
{ "domain": "physics.stackexchange", "id": 38494, "tags": "fluid-dynamics, terminology, kinetic-theory" }
What is "gold number"?
Question: I was reading about the gold number and a book defined it as follows: It is the minimum mass in milligrams of a stabilizing agent which must be added to 10 mL of red gold sol to protect it against coagulation caused by 1 mL of 10% (by mass) NaCl. Is this the definition? Because I am not able to understand the above definition. Is NaCl used as the standard? Answer: A red gold sol is a colloidal suspension of gold nanoparticles with an average particle size of less than 100 nm. The suspension is quite stable: the nanoparticles don't aggregate because of the existence of an electrical double layer on the surface of the particles, which causes electrostatic repulsion between them. If you add a sufficient amount of an electrolyte (e.g., NaCl) to the colloid, the ions will disrupt the electric double layer, which will cause aggregation (coagulation) of the gold particles. A stabilizing agent (e.g., citric acid) is a compound that is able to adsorb on the surface of the gold nanoparticles and protect them from aggregation (it acts like a shield), even at a high electrolyte concentration. So yes, in the definition NaCl serves as the standard coagulating electrolyte.
{ "domain": "chemistry.stackexchange", "id": 5986, "tags": "terminology, reference-request, colloids" }
Why does the GC content deviate from 50% in prokaryotes?
Question: I have read quite a few articles but I can't figure out the main reason for GC-content deviation in prokaryotes. In eukaryotes I can understand it, because the genome isn't composed at random: features like TATA boxes and CpG islands are important for the functioning of protein production. However, in prokaryotes there is a huge variety, with GC% ranging from 20% up to 70%. Most of the time it depends on the environment; for instance, high temperature is said to require a stable genome (= high GC content). I also read the answer to the question How does GC-content evolve?, however it's still not clear to me. I hope someone can explain the deviation a little bit more. Question: Prokaryotes have an AT drift, but what mechanism causes some of these bacteria to raise their GC% instead of lowering it? Answer: I wonder whether it is just some sort of functionless drift. I don't know about bacteria, but I think the example of some human viruses may be instructive. The human herpes simplex 1 virus (which causes cold sores) has a very high GC content, whereas the quite closely related (in terms of gene repertoire and organization) human varicella zoster virus (which causes shingles) has a bias towards AT. Now these viruses depend completely on the host cell's translational machinery, which will be similar (e.g. the distribution of tRNA species) in the cells these two viruses infect (as will temperature, ionic strength, etc.). (The viruses encode their own DNA replication enzymes but not the systems for transcription or translation.) So if there is a functional reason for the differences in these alpha-herpesviruses, it is difficult to discern, and one might assume that the situation in bacteria might well be similar. Clearly, there must be mechanisms for the accentuation of a GC bias once established. One imagines there could be a selective advantage in either replication, transcription or translation, but molecular explanations do not jump to mind (at least not to mine).
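As a small concrete aside (my own illustration, not part of the answer): the GC content under discussion is simply the fraction of G and C bases in a sequence.

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence (case-insensitive)."""
    seq = seq.upper()
    if not seq:
        raise ValueError("empty sequence")
    return (seq.count("G") + seq.count("C")) / len(seq)

print(gc_content("ATGCGC"))  # 4 of 6 bases are G or C
```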
{ "domain": "biology.stackexchange", "id": 5371, "tags": "evolution, genomes, prokaryotes, cpg" }
If an NP-complete problem is shown to have a non-polynomial lower bound, would that prove that P != NP?
Question: I understand that the Cook-Levin theorem proved that any NP problem is reducible to an NP-complete problem, which signifies that if a polynomial-time algorithm for an NP-complete problem is found, it will mean that all problems in NP can be solved in polynomial time. My question is: does this proof also hold the other way? I.e. if an exponential lower bound for an NP-complete problem is proved, would that mean that no problems in NP can be solved in polynomial time? Answer: Not quite. If there's an exponential lower bound for an NP-complete problem, then it follows that no NP-complete problem can be solved in polynomial time. However, there will still be other NP problems (which aren't NP-complete) that can be solved in polynomial time. If there's an exponential lower bound for some NP problem, then it follows that P != NP. In particular, an exponential lower bound for a problem means that the problem is not in P. So, if you have an exponential lower bound for some NP problem, that means the problem is in NP but not in P -- which means NP contains a problem that is not in P -- which means that NP != P. It might help to recall that NP is a set of problems (the set of all decision problems where YES answers have a polynomial-size witness that can be verified in polynomial time). Similarly, P is a set of problems (the set of all decision problems that can be solved in polynomial time). So, if you've found one element of NP that is not an element of P, you can immediately conclude that those two sets are not the same set. Or, to put it another way: your second paragraph is the contrapositive of the statement in the first paragraph. If there is an NP problem with a non-polynomial lower bound, then not every problem in NP can be solved in polynomial time. If not every problem in NP can be solved in polynomial time, then it follows that no polynomial-time algorithm exists for any NP-complete problem.
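The set-membership argument can be written out symbolically (standard notation, my own summary):
$$ \big(\exists L \in \mathrm{NP}:\ L \notin \mathrm{P}\big) \implies \mathrm{P} \neq \mathrm{NP}, \qquad \mathrm{P} \neq \mathrm{NP} \implies \big(\forall L\ \text{NP-complete}:\ L \notin \mathrm{P}\big). $$
The first implication is just set inequality witnessed by one element; the second holds because a polynomial-time algorithm for any single NP-complete problem would pull all of NP into P.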
See What is the definition of P, NP, NP-complete and NP-hard? for more background.
{ "domain": "cs.stackexchange", "id": 5097, "tags": "complexity-theory, np-complete, decision-problem, lower-bounds, p-vs-np" }
Training Fractionally Spaced Channel Estimator and Equalizer
Question: If you are attempting least-squares channel estimation with a fractionally spaced channel estimator, do you want the training sequence to also be fractionally spaced, or symbol spaced? It looks like you could do either. Answer: The training sequence should be at the same spacing as the equalizer, considering its sampling at the input to the equalizer. Adaptive algorithms converge to the least-squares solution based on the error between the received sequence and the transmitted sequence (known when a training sequence is used). Further, the equalizer can only determine a solution where channel energy is present (for example, if you transmitted a single tone, you would only find the single phase/magnitude weight for that frequency). If you use only one sample per symbol to derive the channel, the solution will suffer from "band edge aliasing", as I depict in the figures below, and the channel solution will be corrupted by the folded energy at the band edges. For further details see this post, where I have derived the operation of the LMS equalizer and the solution using the Wiener-Hopf equations (for when you have the luxury of block post-processing; it also helps illuminate the operation, since this is the same solution that the adaptive algorithms would converge to): Compensating Loudspeaker frequency response in an audio signal
{ "domain": "dsp.stackexchange", "id": 8135, "tags": "digital-communications, estimation, digital, equalization" }
An alternative explanation of electron distribution in a Hydrogen atom
Question: So basically I have this idea: we can't see how an electron really moves because of its small size, the wavelength of light being comparatively much larger. What if we took two spheres (macroscopic, i.e. visible to the naked eye), one more massive than the other, with a mass ratio of 1836.15267 (the ratio of the mass of a proton to that of an electron)? Now we give the massive sphere a +e charge (the positive elementary charge) and the less massive sphere a -e charge (the electron's charge). Now we suspend the macroscopic proton model (M.P.M) in a vacuum (suppose outer space) in the absence of any significant gravitational field. Then we shoot the macroscopic electron model (M.E.M) towards the M.P.M with such a velocity that when it finally arrives in the M.P.M's electric field, its electrical PE + kinetic energy is exactly equal to that of an electron in the hydrogen atom. Now, if the electric and gravitational forces are the same, shouldn't the M.E.M move around the M.P.M exactly like an electron does around the nucleus in the hydrogen atom? Answer: We do not have to do your experiment, because it can be described by classical electrodynamics, and classical electrodynamics is so well validated macroscopically that it is not very smart to build models when the mathematics is there. The solution does not describe what is happening with the hydrogen atom, which is one of the reasons quantum mechanics had to be invented in place of classical mechanics. In classical mechanics and electrodynamics there is no stable system of two charges of the kind you describe, whereas the hydrogen atom is stable. The two charges in your classical model will attract each other and fall onto each other, neutralizing the charge, while at the same time radiating away energy continuously. The Bohr model was invented to stop this destructive scenario, since the hydrogen atom was stable.
So, to start with, stability was imposed by postulates so that the series fitting the radiation spectra of the hydrogen atom could be arrived at. Quantum mechanics, with its postulates, fitted these spectra using the Schrödinger equation. In quantum mechanics only the probability of finding the particle at a given $(x,y,z)$ can be predicted and known. An average momentum may be estimated from the energy of the hydrogen energy level, but the electron is not in an orbit; it is in an orbital, a probability locus.
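The radiative collapse described above can be made quantitative with a standard classical estimate (my addition, a textbook result, not from the answer): integrating the Larmor power loss for a circular orbit shrinking from the Bohr radius $a_0 \approx 5.29\times10^{-11}\ \mathrm{m}$ gives an infall time
$$ t = \frac{a_0^{3}}{4\,r_e^{2}\,c} \approx 1.6\times10^{-11}\ \mathrm{s}, \qquad r_e=\frac{e^{2}}{4\pi\varepsilon_0 m_e c^{2}}\approx 2.82\times10^{-15}\ \mathrm{m}, $$
so a classical bound pair of charges, macroscopic model or not, would collapse essentially instantly on atomic scales.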
{ "domain": "physics.stackexchange", "id": 95337, "tags": "electrons, rotational-dynamics, atomic-physics, subatomic" }
PyTorch Vectorized Implementation for Thresholding and Computing Jaccard Index
Question: I have been trying to optimize a code snippet which finds the optimal threshold value in an n_patch * 256 * 256 probability map to get the highest Jaccard index against the ground truth mask. Consider a single probability map (256 * 256) and its ground truth (256 * 256, containing only 1s and 0s). To find the optimal threshold value which yields the highest Jaccard index against the ground truth, we loop over every probability i in the probability map, threshold the probability map using i, and then compute the Jaccard index of the thresholded map against the ground truth. After looping through all probabilities (65536 in total, since 256*256) in the probability map, we will have the threshold value which generates the highest Jaccard index. The attached code computes n_patch probability maps at once instead of a single probability map. However, even though I have made the implementation as vectorized as possible, the code still runs for around 330 seconds on a GPU. Note the attached code is also executable on a CPU; it will use an Nvidia GPU if you have one. A modified version of the code can be found further down. The data are available in here (around 24MB). The file named mask.npy is an n_patch * 256 * 256 binary array (contains only 0 and 1) and the file named pred_mask.npy is an n_patch * 256 * 256 probability map (contains probabilities from 0 to 1). The threshold method is implemented in gen_mask; it takes a 3D pred_mask and thresholds each dimension based on a threshold-value vector. The jaccard function computes the Jaccard index of a 3D thresholded mask against the ground truth and returns an n_patch * 1 shaped array.
import numpy as np
import torch
import time

USE_CUDA = torch.cuda.is_available()

def gen_mask(mask_pred, threshold):
    mask_pred = mask_pred.clone()
    mask_pred[:, :, :][mask_pred[:, :, :] < threshold] = 0
    mask_pred[:, :, :][mask_pred[:, :, :] >= threshold] = 1
    return mask_pred

def jaccard(prediction, ground_truth):
    union = prediction + ground_truth
    union[union == 2] = 1
    intersection = prediction * ground_truth
    union = union.sum(axis=(1, 2))
    intersection = intersection.sum(axis=(1, 2))
    ji_nonezero_union = intersection[union != 0] / union[union != 0]
    ji = torch.zeros(intersection.shape)
    if USE_CUDA:
        ji = ji.cuda()
    ji[union != 0] = ji_nonezero_union
    return ji

groundtruth_masks = np.load('./masks.npy')
pred_mask = np.load('./pred_mask.npy')
n_patch = groundtruth_masks.shape[0]
groundtruth_masks = torch.from_numpy(groundtruth_masks)
groundtruth_masks = groundtruth_masks.type(torch.float)
pred_mask = torch.from_numpy(pred_mask)
vector_pred = pred_mask.view(n_patch, -1)
best_threshold_val = torch.zeros(n_patch)
best_jaccard_idx = torch.zeros(n_patch)
if USE_CUDA:
    groundtruth_masks = groundtruth_masks.cuda()
    pred_mask = pred_mask.cuda()
    vector_pred = vector_pred.cuda()
    best_threshold_val = best_threshold_val.cuda()
    best_jaccard_idx = best_jaccard_idx.cuda()

start = time.time()
# I think this outer for loop is inevitable since
# vector_pred.shape[1] is 65536,
# so we cannot simply create a matrix with n_patch * 65536 * 256 * 256,
# which is too large even for a GPU to handle
for i in range(vector_pred.shape[1]):
    cur_threshold_val = vector_pred[:, i]
    cur_threshold_val = cur_threshold_val.reshape(n_patch, 1, 1)
    thresholded_mask = gen_mask(pred_mask.squeeze(), cur_threshold_val)
    thresholded_mask = thresholded_mask.type(torch.float)
    ji = jaccard(thresholded_mask, groundtruth_masks)
    cur_threshold_val = cur_threshold_val.squeeze()
    best_threshold_val[ji > best_jaccard_idx] = cur_threshold_val[ji > best_jaccard_idx]
    best_jaccard_idx[ji > best_jaccard_idx] = ji[ji > best_jaccard_idx]
    print(i, '/', vector_pred.shape[1], end="\r")
end = time.time()
print(best_threshold_val)
print(best_jaccard_idx)
print(end - start)

Also, the output:

Best Threshold:
tensor([6.8828e-01, 4.7082e-01, 1.2254e-01, 3.4189e-01, 2.8555e-01, 2.4655e-01,
        4.9444e-01, 5.9245e-01, 5.0390e-01, 1.7931e-01, 2.3205e-01, 3.8314e-01,
        4.5103e-01, 3.6109e-01, 3.4614e-01, 3.8766e-01, 3.6444e-01, 2.3667e-01,
        2.0029e-01, 8.0435e-01, 4.9489e-01, 2.8066e-01, 1.4230e-04, 1.8089e-01,
        2.2194e-01, 3.7781e-01, 3.5074e-01, 5.4690e-03, 2.6937e-01, 1.7834e-01,
        2.2150e-01, 1.8330e-01], device='cuda:0')

Best Jaccard Index:
tensor([0.9978, 0.9936, 0.9975, 0.9956, 0.9921, 0.9977, 0.9938, 0.9972, 0.9987,
        0.9983, 0.9974, 0.9972, 0.9955, 0.9851, 0.9979, 0.9938, 0.9960, 0.9936,
        0.9967, 0.9852, 0.9963, 0.9924, 0.9890, 0.9946, 0.9954, 0.9971, 0.9945,
        0.9919, 0.9964, 0.9947, 0.9920, 0.9977], device='cuda:0')

Any suggestions to optimize the code snippet are welcome! Update: I managed to shave about 100 seconds off the script by using PyTorch's logical AND (&) and OR (|) operators. However, these operations are only supported for type torch.uint8, which means I have to do type conversions. Now the runtime is 232 seconds on a GPU.
The following is the modified version:

import numpy as np
import torch
import time

USE_CUDA = torch.cuda.is_available()

def gen_mask(mask_pred, threshold):
    mask_pred = mask_pred.clone()
    mask_pred[:, :, :][mask_pred[:, :, :] < threshold] = 0
    mask_pred[:, :, :][mask_pred[:, :, :] >= threshold] = 1
    return mask_pred.type(torch.uint8)

def jaccard(prediction, ground_truth):
    union = prediction | ground_truth
    intersection = prediction & ground_truth
    union = union.sum(axis=(1, 2))
    intersection = intersection.sum(axis=(1, 2))
    union = union.type(torch.float)
    intersection = intersection.type(torch.float)
    union_nonzero_idx = union != 0
    cur_jaccard_idx = torch.zeros(intersection.shape)
    if USE_CUDA:
        cur_jaccard_idx = cur_jaccard_idx.cuda()
    cur_jaccard_idx[union_nonzero_idx] = intersection[union_nonzero_idx] / union[union_nonzero_idx]
    return cur_jaccard_idx

groundtruth_masks = np.load('./masks.npy')
pred_mask = np.load('./pred_mask.npy')
n_patch = groundtruth_masks.shape[0]
groundtruth_masks = torch.from_numpy(groundtruth_masks)
groundtruth_masks = groundtruth_masks.type(torch.uint8)
pred_mask = torch.from_numpy(pred_mask)
vector_pred = pred_mask.view(n_patch, -1)
best_threshold_val = torch.zeros(n_patch)
best_jaccard_idx = torch.zeros(n_patch)
if USE_CUDA:
    groundtruth_masks = groundtruth_masks.cuda()
    pred_mask = pred_mask.cuda()
    vector_pred = vector_pred.cuda()
    best_threshold_val = best_threshold_val.cuda()
    best_jaccard_idx = best_jaccard_idx.cuda()

start = time.time()
# I think this outer for loop is inevitable since
# vector_pred.shape[1] is 65536,
# so we cannot simply create a matrix with n_patch * 65536 * 256 * 256,
# which is too large even for a GPU to handle
for i in range(vector_pred.shape[1]):
    cur_threshold_val = vector_pred[:, i]
    cur_threshold_val = cur_threshold_val.reshape(n_patch, 1, 1)
    thresholded_mask = gen_mask(pred_mask.squeeze(), cur_threshold_val)
    cur_jaccard_idx = jaccard(thresholded_mask, groundtruth_masks)
    cur_threshold_val = cur_threshold_val.squeeze()
    best_threshold_val[cur_jaccard_idx > best_jaccard_idx] = cur_threshold_val[cur_jaccard_idx > best_jaccard_idx]
    best_jaccard_idx[cur_jaccard_idx > best_jaccard_idx] = cur_jaccard_idx[cur_jaccard_idx > best_jaccard_idx]
    print(i, '/', vector_pred.shape[1], end="\r")
end = time.time()
print(best_threshold_val)
print(best_jaccard_idx)
print(end - start)

Answer: I think a different approach is needed to achieve better performance. The current approach recomputes the Jaccard similarity from scratch for each possible threshold value. However, going from one threshold to the next, only a small fraction of prediction values change, and likewise the intersection and the union. Therefore a lot of unnecessary computation is performed. A better approach first computes, for each patch, a histogram of the prediction probabilities given ground truth = 1, using the thresholds as bin edges. The frequency counts in each bin give the number of predictions affected going from one threshold to the next. Therefore, the Jaccard similarity values for all thresholds can be computed directly from cumulative frequency counts derived from the histogram. In your case, the prediction probabilities are used directly as thresholds, so the histograms coincide with the inputs sorted by the probabilities. Consider the following example input probabilities and true labels:

Label  1    1    0    0    1     0    0    0
Prob   0.9  0.8  0.7  0.6  0.45  0.4  0.2  0.1

The labels themselves are also the counts of true-positive instances within each interval. Given a threshold \$t\$ and its index \$i\$, \$|Label \cap Predicted|\$ is just the sum of labels with indices \$\leq i\$, which is the cumulative sum of labels up to \$i\$. Also note that \$|Predicted|=i+1\$ and \$|Label|\$ is the total count of true-positive instances.
Therefore the Jaccard similarity $$ \begin{align*} Jaccard(Label, Predicted) & = \frac{|Label \cap Predicted|}{|Label \cup Predicted|} \\ & = \frac{|Label \cap Predicted|}{|Label|+|Predicted|-|Label \cap Predicted|} \\ & = \frac{cumsum(Label, i)}{(\text{# of true positive instances}) + i + 1 - cumsum(Label, i)} \end{align*} $$ This computation can be easily vectorized for all possible \$i\$s to get a Jaccard similarity vector for every threshold.
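The sorted/cumulative-sum recipe above can be sketched in NumPy (my own illustration, not the answerer's code; it ignores ties between equal probabilities and takes thresholding as prob >= t):

```python
import numpy as np

def best_threshold_jaccard(probs, labels):
    """Per patch, evaluate every threshold (each sorted probability) in one
    vectorized pass: O(N log N) instead of O(N^2) in the number of pixels."""
    n_patch = probs.shape[0]
    p = probs.reshape(n_patch, -1)
    y = labels.reshape(n_patch, -1)
    order = np.argsort(-p, axis=1)            # probabilities, descending
    p_sorted = np.take_along_axis(p, order, axis=1)
    y_sorted = np.take_along_axis(y, order, axis=1)
    tp = np.cumsum(y_sorted, axis=1)          # |Label ∩ Predicted| at each cut
    total_pos = y.sum(axis=1, keepdims=True)  # |Label|
    n_pred = np.arange(1, p.shape[1] + 1)     # |Predicted| = i + 1
    union = total_pos + n_pred - tp           # |Label ∪ Predicted|
    jac = np.where(union > 0, tp / np.maximum(union, 1), 0.0)
    best = jac.argmax(axis=1)
    rows = np.arange(n_patch)
    return p_sorted[rows, best], jac[rows, best]

# Tiny example: threshold 0.8 recovers the mask exactly (Jaccard 1.0).
probs = np.array([[[0.9, 0.8], [0.2, 0.1]]])
labels = np.array([[[1, 1], [0, 0]]])
thresh, jac = best_threshold_jaccard(probs, labels)
```

The same idea carries over to PyTorch tensors (torch.sort, torch.cumsum, torch.gather) if GPU execution is still desired.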
{ "domain": "codereview.stackexchange", "id": 36062, "tags": "python, numpy, vectorization, pytorch" }
Time Evolution of Coherent State (Gerry and Knight)
Question: I'm stuck on some simple mathematics in finding the time evolved coherent state for a single-mode field from Gerry and Knight, Introductory Quantum Optics page 51. The Hamiltonian is given by $\hat{H} = \hbar \omega \left(\hat{n} + \frac{1}{2}\right)$, and the time evolution is found as follows: $$ \lvert \alpha, t \rangle \equiv \exp(-i\hat{H}t/\hbar) \lvert \alpha\rangle = e^{-i\omega t/2}e^{-i\omega t \hat{n}}\lvert\alpha\rangle = e^{-i\omega t/2}\lvert\alpha e^{-i\omega t}\rangle, $$ however I do not understand this last step; why the term $e^{-i\omega t \hat{n}}$ can be brought into the ket like that as a phase change of the annihilation operator eigenvalue $\alpha$. Here, $\hat{a} \lvert\alpha\rangle = \alpha \lvert\alpha\rangle$, and $\hat{n} = \hat{a}^\dagger\hat{a}$. Thanks for any help! Answer: Expanding the coherent state in the number basis and using $e^{-i\omega t \hat n}\vert n\rangle = e^{-i\omega n t}\vert n\rangle$: \begin{align} e^{-i\omega t/2}e^{-i\omega t \hat n}\vert\alpha\rangle&= e^{-i\omega t/2}\sum_{n=0}^{\infty}e^{-i\omega t \hat n}e^{-\vert\alpha\vert^2/2} \frac{\alpha^n}{\sqrt{n!}}\vert n\rangle\\ &=e^{-i\omega t/2}\sum_{n=0}^{\infty}e^{-\vert\alpha\vert^2/2} \frac{e^{-i\omega nt}\alpha^n}{\sqrt{n!}}\vert n\rangle\\ &=e^{-i\omega t/2}\sum_{n=0}^{\infty}e^{-\vert\alpha\vert^2/2} \frac{(e^{-i\omega t}\alpha)^n}{\sqrt{n!}}\vert n\rangle\\ &=e^{-i\omega t/2}\vert e^{-i\omega t}\alpha\rangle \end{align}
{ "domain": "physics.stackexchange", "id": 39429, "tags": "quantum-mechanics, homework-and-exercises, harmonic-oscillator, quantum-optics, coherent-states" }
Is a moon of a moon possible?
Question: Imagine our solar system: our Sun, with the Earth and Moon orbiting it. Suppose you had "powers" to create any body you want: any size, any density, any mass and any velocity. Would it be possible for you (using all our knowledge of the Earth system) to create a natural satellite of the Moon whose trajectory would be almost circular/elliptical? The question actually boils down to this: can a moon of a moon (our actual Moon) of a planet (the Earth) of the Sun exist? Answer: This is possible. For example, this post provides a good explanation of the math involved. The key point is whether an object lies in the planet's Hill sphere or the moon's Hill sphere: Can the Moon have a moon? Yes, the Moon could have a sub-satellite. If we look at a system of the Earth, Moon, and a sub-satellite, the same idea as above applies. The Moon has its own Hill sphere with a radius of 60,000 km (1/6th of the distance between the Earth and Moon) where a sub-satellite could exist. If an object lies outside the Moon's Hill sphere, it will orbit the Earth instead of the Moon. The only problem is that the sub-satellite cannot stay in orbit around the Moon indefinitely, because of tides. As long as the gravitational attraction of a satellite is consistently stronger than that of the body the satellite orbits, a third body will orbit the satellite. There are obviously limits to how small you can go with this, as gravity is a relatively weak force at small scales. At some level, either the Hill sphere is smaller than the object itself or other forces dwarf gravity's impact. For example, a person's gravitational field is too weak to attract their own satellite; the Earth wins that one every time.
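The Hill-sphere figure quoted in the answer follows from the standard formula (a quick check, not in the original):
$$ r_H \approx a \left(\frac{m}{3M}\right)^{1/3} \approx 384{,}400\ \mathrm{km}\times\left(\frac{1}{3\times 81.3}\right)^{1/3} \approx 6.1\times 10^{4}\ \mathrm{km}, $$
using the Earth-Moon distance $a$ and the Moon-to-Earth mass ratio $1/81.3$, consistent with the 60,000 km quoted above.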
{ "domain": "physics.stackexchange", "id": 11705, "tags": "newtonian-gravity, orbital-motion, moon, celestial-mechanics, satellites" }
Probability of finding particle in left side at time $t$
Question: I have a problem of a particle in a box of width $a$. I am trying to find the probability of finding the particle on the left side of the box at time $t$. The wavefunction at $t=0$ is given by $$|\psi(0)\rangle = \frac{1}{\sqrt{2}} |\phi_1\rangle + \frac{1}{2}|\phi_2\rangle + \frac{1}{2} |\phi_3\rangle,$$ where $\phi_n$ is the $n$-th energy eigenfunction. My Attempt: I have calculated and normalized the time-independent wavefunctions $\phi_n$ and used the time evolution operator to write down the time-dependent wavefunction $$|\psi(t)\rangle=\frac{e^{-i E_{1} t / \hbar}}{\sqrt{2}}\left|\phi_{1}\right\rangle+\frac{e^{-i E_{2} t / \hbar}}{2}\left|\phi_{2}\right\rangle+\frac{e^{-i E_{3} t / \hbar}}{2}\left|\phi_{3}\right\rangle,$$ with the famous result $$ \phi_n(x) = \sqrt{\frac{2}{a}} \sin\left( n \pi \frac{x}{a} \right).$$ To find the probability I have tried to compute $$P(0 < x < a/2) = \left| \int_0^{a/2} \psi^*(t) \psi(t) dx \right|^2 \\ = \frac{\left(20 \sqrt{2} \cos \left( \frac{(E_2-E_1)}{\hbar} t \right) + 12 \cos \left(\frac{(E_3-E_2)}{\hbar} t \right)+15 \pi \right)^2}{900 \pi ^2} $$ To check that my answer makes sense, I first verified that $$P(0<x<a)=1.$$ But now when I look at the left-side and right-side probabilities they do not add up to one: $$P(0<x<a/2) = 0.86, \qquad P(a/2<x<a) = 0.005$$ What am I doing wrong? Answer: The probability density is given by $$\rho(x,t)=|\Psi(x,t)|^2=\Psi^*(x,t)\Psi(x,t)$$ and so $$P(a\leq x\leq b)=\int_a^b\Psi^*(x,t)\Psi(x,t) dx$$ As far as I can see, you have taken a modulus squared, which is wrong, in the $4$th equation from the bottom.
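The fix can be checked numerically (my own sketch, in units where hbar = 1, a = 1, E_n = n^2): integrating $|\psi|^2$ directly, with no outer square, the two halves sum to one at any time.

```python
import numpy as np

a = 1.0
x = np.linspace(0.0, a, 20001)

def phi(n):
    # Particle-in-a-box eigenfunctions: phi_n(x) = sqrt(2/a) sin(n pi x / a)
    return np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)

def E(n):
    # Energies in units where the ground-state scale is 1 (E_n proportional to n^2)
    return n ** 2

def trapezoid(y, xs):
    # Simple trapezoidal rule, avoiding version-specific NumPy helpers
    return float(np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(xs)))

t = 0.37  # arbitrary time
psi = (np.exp(-1j * E(1) * t) / np.sqrt(2) * phi(1)
       + np.exp(-1j * E(2) * t) / 2 * phi(2)
       + np.exp(-1j * E(3) * t) / 2 * phi(3))
rho = np.abs(psi) ** 2                    # probability density, no extra square
left = trapezoid(rho[x <= a / 2], x[x <= a / 2])
right = trapezoid(rho[x >= a / 2], x[x >= a / 2])
assert abs(left + right - 1.0) < 1e-6     # probabilities add to one
```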
{ "domain": "physics.stackexchange", "id": 77405, "tags": "quantum-mechanics, homework-and-exercises, wavefunction, schroedinger-equation, probability" }
C++ Mock Library: Part 1
Question: Parts: C++ Mock Library: Part 1, C++ Mock Library: Part 2, C++ Mock Library: Part 3, C++ Mock Library: Part 4, C++ Mock Library: Part 5, C++ Mock Library: Part 6.

Note: If you see an extra T on a macro, ignore it for now; I will get to this in a subsequent review. There is too much for one review, so I will do it in stages. Example: MOCK_FUNC() and MOCK_TFUNC(); for now pretend these are the same. I just don't want to have to edit my code before pasting here. I will go over these oddities in a later review.

Motivation: So, writing unit tests for my code! I use Google Test as the basic framework. But my unit test files were 3947+ lines long (the code was only about 500 lines). A lot of this was repeated code and looked very ugly in the tests. So I thought: let's try and improve the framework a bit to reduce the repetitive nature of the unit tests. This is the code for my mock framework (it reduced the unit test files to 1713 lines and does more testing).

Interface: OK, so this is the part I like least, but I could not think of a better way. Any function that you want to mock must be wrapped in MOCK_FUNC(). In debug and release mode this simply does nothing and the function is used like normal. In my code-coverage build (which is where the unit tests are run), these functions are swapped out by a functor object.
Example:

Socket::Socket(std::string const& hostname, int port, Blocking blocking)
    : fd(-1)
{
    fd = MOCK_FUNC(socket)(PF_INET, SOCK_STREAM, 0);
    if (fd == -1)
    {
        // STUFF removed
    }
    // STUFF removed
    HostEnt* serv = nullptr;
    while (true)
    {
        serv = MOCK_FUNC(gethostbyname)(hostname.c_str());
        if (serv == nullptr && h_errno == TRY_AGAIN)
        {
            continue;
        }
        // STUFF removed
    }
    // STUFF removed
    if (MOCK_FUNC(connect)(fd, reinterpret_cast<SocketAddr*>(&serverAddr), sizeof(serverAddr)) != 0)
    {
        // STUFF removed
    }
    if (blocking == Blocking::No)
    {
        if (MOCK_TFUNC(fcntl)(fd, F_SETFL, O_NONBLOCK) == -1)
        {
            // STUFF removed
        }
    }
}

You will see several mocked functions: MOCK_FUNC(socket), MOCK_FUNC(gethostbyname), MOCK_FUNC(connect), MOCK_TFUNC(fcntl). My build tools (make) will supply the macro -DMOCK_FUNC\(x\)=::x for debug and release builds, while for coverage builds it will add --include coverage/MockHeaders.h so that all files include this file automatically and get some more complex definitions (coming below). The idea here is that there should be zero cost in release or debug builds, but for coverage builds we get some extra functionality.

Generated files: The build generates two other files, `coverage/MockHeaders.h` and `coverage/MockHeaders.cpp`; these files contain the majority of the code to review. It is a lot, so I will split it over a couple of reviews. These files simply need to be part of the build. The important part is that for every mocked-out function it generates a global functor variable.

namespace ThorsAnvil::BuildTools::Mock
{
    // Functor
    // For each mocked-out function we have a functor.
    // This can hold an alternative version of the function, but by default it is
    // initialized with the function it is supposed to override.
    // So if you want to use the normal function during the unit tests you don't
    // need to do anything and you get the normal function being called.
    // Alternatively there are objects provided to override this function with a lambda.
// MockFunctionHolder<RemoveNoExcept<ThorsAnvil::BuildTools::Mock::FuncType_fcntl>> MOCK_BUILD_MOCK_SNAME(fcntl)("fcntl", ::fcntl); MockFunctionHolder<RemoveNoExcept<ThorsAnvil::BuildTools::Mock::FuncType_open>> MOCK_BUILD_MOCK_SNAME(open)("open", ::open); MockFunctionHolder<RemoveNoExcept<decltype(::close)>> MOCK_BUILD_MOCK_SNAME(close)("close", ::close); MockFunctionHolder<RemoveNoExcept<decltype(::connect)>> MOCK_BUILD_MOCK_SNAME(connect)("connect", ::connect); MockFunctionHolder<RemoveNoExcept<decltype(::gethostbyname)>> MOCK_BUILD_MOCK_SNAME(gethostbyname)("gethostbyname", ::gethostbyname); MockFunctionHolder<RemoveNoExcept<decltype(::pipe)>> MOCK_BUILD_MOCK_SNAME(pipe)("pipe", ::pipe); MockFunctionHolder<RemoveNoExcept<decltype(::read)>> MOCK_BUILD_MOCK_SNAME(read)("read", ::read); MockFunctionHolder<RemoveNoExcept<decltype(::shutdown)>> MOCK_BUILD_MOCK_SNAME(shutdown)("shutdown", ::shutdown); MockFunctionHolder<RemoveNoExcept<decltype(::socket)>> MOCK_BUILD_MOCK_SNAME(socket)("socket", ::socket); MockFunctionHolder<RemoveNoExcept<decltype(::write)>> MOCK_BUILD_MOCK_SNAME(write)("write", ::write); Code Review OK. Finally to the part where you get to look at some code: Helper Macros: // Macros to expand a function name into the name of a global functor object. // MOCK_FUNC used in user code includes the full namespace prefix. // MOCK_BUILD_MOCK_SNAME used to name the object inside a namespace scope. // #define MOCK_BUILD_MOCK_NAME_EXPAND_(name) mock_ ## name #define MOCK_BUILD_MOCK_NAME_EXPAND(name) MOCK_BUILD_MOCK_NAME_EXPAND_(name) #define MOCK_BUILD_MOCK_NAME(name) ThorsAnvil::BuildTools::Mock:: MOCK_BUILD_MOCK_NAME_EXPAND(name) #define MOCK_BUILD_MOCK_SNAME(name) MOCK_BUILD_MOCK_NAME_EXPAND(name) #define MOCK_FUNC(name) MOCK_BUILD_MOCK_NAME(name) Helper Classes namespace ThorsAnvil::BuildTools { template <typename T> struct RemoveNoExceptExtractor { using Type = T; }; template <typename R, typename... Args> struct RemoveNoExceptExtractor<R(Args...) 
noexcept> { using Type = R(Args...); }; template <typename F> using RemoveNoExcept = typename RemoveNoExceptExtractor<F>::Type; } Functor Object namespace ThorsAnvil::BuildTools::Mock { template<typename Func> struct MockFunctionHolder { using Param = ParameterType<Func>; std::string name; std::function<Func> action; template<typename F> MockFunctionHolder(std::string const& name, F&& action); std::string const& getName() const; template<typename... Args> ReturnType<Func> operator()(Args&&... args); }; // ------------------------- // MockFunctionHolder // ------------------------- template<typename Func> template<typename F> MockFunctionHolder<Func>::MockFunctionHolder(std::string const& name, F&& action) : name(name) , action(std::move(action)) {} template<typename Func> std::string const& MockFunctionHolder<Func>::getName() const { return name; } template<typename Func> template<typename... Args> ReturnType<Func> MockFunctionHolder<Func>::operator()(Args&&... args) { return action(std::forward<Args>(args)...); } } Writing a unit test mocking out a function When writing a unit test to simply mock out a function, you use the macro MOCK_SYS(), which replaces a function with a lambda: TEST(UniTestBloc, MyUnitTest) { MOCK_SYS(socket, [](int, int, int) {return 12;}); MOCK_SYS(connect, [](int, sockaddr const*, unsigned int) {return 0;}); MOCK_SYS(gethostbyname, [](char const*) { static char* addrList[] = {""}; static hostent result {.h_length=1, .h_addr_list=addrList}; return &result; }); auto action = [](){ ThorsSocket::Socket socket("www.google.com", 80); // Do tests }; ASSERT_NO_THROW(action()); } The definition of MOCK_SYS is: #define MOCK_SYS_EXPAND_(type, func, mocked, lambda) ThorsAnvil::BuildTools::Mock::MockOutFunction<type> mockOutFunction_ ## func(mocked, lambda) #define MOCK_SYS_EXPAND(type, func, mocked, lambda) MOCK_SYS_EXPAND_(type, func, mocked, lambda) #define MOCK_SYS(func, lambda) MOCK_SYS_EXPAND(ThorsAnvil::BuildTools::RemoveNoExcept<decltype(::func)>, 
func, MOCK_BUILD_MOCK_NAME(func), lambda) And the object it creates is: namespace ThorsAnvil::BuildTools::Mock { template<typename Func> struct MockOutFunction { std::function<Func> old; MockFunctionHolder<Func>& orig; MockOutFunction(MockFunctionHolder<Func>& orig, std::function<Func>&& mock); ~MockOutFunction(); }; // ------------------------- // MockOutFunction // ------------------------- template<typename Func> MockOutFunction<Func>::MockOutFunction(MockFunctionHolder<Func>& orig, std::function<Func>&& mock) : old(std::move(mock)) , orig(orig) { swap(old, orig.action); } template<typename Func> MockOutFunction<Func>::~MockOutFunction() { swap(old, orig.action); } } Part 1 done OK. That's the basics. This part is simply about having a function that can be replaced by a lambda during a unit test. This by itself is not earth-shattering and does not save much code. The next question will expand on this. Answer: Macros are difficult Your code relies on macros, and macros are notoriously difficult to work with if you go beyond the simple #define CONSTANT and #ifdef FLAG. Let's just look at what happens if you don't enable mocking: So my build tools (make) will supply the macro -DMOCK_FUNC\(x\)=::x for debug and release builds, […] I immediately see two problems with this. The first is that this will only work correctly for functions in the global namespace. If you were to write: using std::format; auto text = MOCK_FUNC(format)("Hello, {}!", "world"); Then this would fail in a release build. And the following would work in a release build, but would break a coverage build: auto text = MOCK_FUNC(std::format)("Hello, {}!", "world"); The second issue is that not everything that looks like a function is actually a function. Consider this: int fd = 0; fd_set set; MOCK_FUNC(FD_CLR)(fd, &set); FD_CLR() expands to something hygienic like do {…} while(0), but of course then ::FD_CLR() will not work correctly. 
While this is a rather obvious example, there might be more innocent-looking functions that are actually implemented as macros, which also might depend on the platform you are on. So what to do about this? First, just use -DMOCK_FUNC(x)=x to fix release builds. Second, if you want to handle namespaces, perhaps the only way is to stringify the name of the function, and use it as a lookup into a dictionary: #define MOCK_FUNC(func) MOCK_LOOKUP_FUNC(#func) Or perhaps even MOCK_LOOKUP_FUNC<#func> since C++20. With some effort, most if not all of the hashing can be done at compile time. Since you are autogenerating files anyway, you can use perfect hashing (with a tool like gperf). Alternatives What's great about your code is that you can just write MOCK_SYS(function, replacement) in your tests. It's less great to have to find all instances of function in your code base and wrap them in MOCK_FUNC(). The latter is very bad for maintenance: it's easy to forget to do this when making changes to your actual code. I would recommend finding a way to make it work without having to use MOCK_FUNC() at all. You have a preprocessing step anyway that finds MOCK_FUNC() calls and uses it to generate coverage/MockHeaders.*. Instead you can have that step look for MOCK_SYS() calls in your tests, and then generate code to replace the standard library functions with custom ones, as suggested in this StackOverflow answer. Other options are using the compiler to wrap functions, or simply overriding library functions (but then you need some tricks to be able to call the original); perhaps there are other possibilities as well.
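As an aside on the swap-and-restore idea: the RAII pattern in MockOutFunction (swap the lambda in on construction, swap the original back on destruction) maps directly onto a context manager in other languages. Below is a minimal Python sketch of that same pattern; all names here are illustrative stand-ins, not part of the library above.

```python
import contextlib

# A registry of swappable "functors", playing the role of the global
# MockFunctionHolder objects; by default each slot holds the real function.
holders = {"socket": lambda *args: "real-socket"}

def call(name, *args):
    """Dispatch through the holder, like MOCK_FUNC(name)(args...)."""
    return holders[name](*args)

@contextlib.contextmanager
def mock_sys(name, replacement):
    """Swap in a replacement and restore the original on exit,
    mirroring MockOutFunction's constructor/destructor swaps."""
    original = holders[name]
    holders[name] = replacement
    try:
        yield
    finally:
        holders[name] = original
```

Inside the `with` block every `call("socket", ...)` hits the replacement; afterwards the original is back, just as when a MockOutFunction goes out of scope at the end of a test.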
{ "domain": "codereview.stackexchange", "id": 45083, "tags": "c++, mocks" }
Trying to get the XV-11 Lidar to work
Question: Hey there! I already asked this over at the Trossen Robotics Forum but did not get a reply yet. Maybe someone here can give me a hint at what might be wrong with my XV-11 Lidar setup. I bought the laser scanner from Ebay and am trying to get it to work in ROS. What I have done so far: Connected the Lidar to ROS Indigo running Ubuntu in Virtualbox on a Windows machine. For the power to the motor I am using a 3V regulator that gets its power from USB (5V). The sensor is powered by an Arduino with 3.3V for now. Instead of the Sparkfun FTDI board I am using a standard RS-232 to USB adapter that is getting its data from the Lidar's Tx. The adapter gets accepted by Ubuntu. Dmesg shows: usb 1-2: FTDI USB Serial Device converter now attached to ttyUSB0 Running rostopic echo /scan I get this: header: seq: 34 stamp: secs: 1413106818 nsecs: 470585788 frame_id: neato_laser angle_min: 0.0 angle_max: 6.28318548203 angle_increment: 0.0174532923847 time_increment: 0.00059940997744 scan_time: 0.0 range_min: 0.0599999986589 range_max: 5.0 ranges: [0.25600001215934753, 8.192000389099121, 4.928999900817871, 13.229000091552734, 13.300999641418457, 13.274999618530273, 10.946999549865723, 0.37700000405311584, 14.380999565124512, 15.883000373840332, 15.381999969482422, 0.25600001215934753, 8.960000038146973, 6.848999977111816, 13.114999771118164, 13.0649995803833, 13.126999855041504, 13.109000205993652, 0.36899998784065247, 14.380999565124512, 15.883000373840332, 15.378000259399414, 0.5120000243186951, 1.2799999713897705, 6.880000114440918, 13.14900016784668, 10.869000434875488, 9.96500015258789, 10.836999893188477, 0.37599998712539673, 14.348999977111816, 15.873000144958496, 15.847999572753906, 16.128000259399414, 11.008000373840332, 6.90500020980835, 11.003000259399414, 10.888999938964844, 10.954999923706055, 4.479000091552734, 0.33799999952316284, 15.279000282287598, 14.972999572753906, 7.293000221252441, 0.23399999737739563, 8.019000053405762, 8.29699993133545, 
15.732999801635742, 15.833000183105469, 7.519999980926514, 11.776000022888184, 8.960000038146973, 6.888999938964844, 11.795000076293945, 13.74899959564209, 13.708999633789062, 11.914999961853027, 0.4320000112056732, 11.229000091552734, 16.277000427246094, 16.277000427246094, 9.472000122070312, 8.704000473022461, 3.884999990463257, 16.381999969482422, 16.381999969482422, 16.381999969482422, 16.381999969482422, 8.295999526977539, 15.189000129699707, 16.277000427246094, 13.991999626159668, 9.472000122070312, 0.0, 3.628000020980835, 16.381999969482422, 16.381999969482422, 16.381999969482422, 16.381999969482422, 7.690999984741211, 15.189000129699707, 16.277000427246094, 15.187999725341797, 9.472000122070312, 7.935999870300293, 3.627000093460083, 16.381999969482422, 16.381999969482422, 16.381999969482422, 16.381999969482422, 8.399999618530273, 15.189000129699707, 16.277000427246094, 13.991999626159668, 9.472000122070312, 7.679999828338623, 3.625999927520752, 16.381999969482422, 16.381999969482422, 16.381999969482422, 16.381999969482422, 8.295999526977539, 15.189000129699707, 16.277000427246094, 13.991999626159668, 9.472000122070312, 7.423999786376953, 13.097000122070312, 16.381999969482422, 16.381999969482422, 16.381999969482422, 16.381999969482422, 8.406000137329102, 15.189000129699707, 16.277000427246094, 16.37700080871582, 15.871999740600586, 11.520000457763672, 14.026000022888184, 16.381999969482422, 16.381999969482422, 16.381999969482422, 16.381999969482422, 0.43700000643730164, 16.35700035095215, 16.37700080871582, 16.356000900268555, 15.871999740600586, 4.0960001945495605, 14.001999855041504, 16.381999969482422, 16.381999969482422, 16.381999969482422, 16.381999969482422, 0.43700000643730164, 16.047000885009766, 16.37700080871582, 16.327999114990234, 15.871999740600586, 8.192000389099121, and so on... So there seems to be data coming in. But then here's my problem. In RVIZ the data looks like noise and it seems like the Laser is detecting itself. 
It shows a circular thing right in the middle although the unit is sitting on a shelf with rectangular walls around it. Here is a screenshot of both my settings and the RVIZ point cloud. rviz http://oi58.tinypic.com/2rolv6f.jpg Any ideas what could be the problem? Might this be a problem due to the difference between RS232 and TTL? Is my conversion going wrong? But then why would I get a valid header? In the meantime I switched the power for the sensor itself (not the motor) to 5V after confirming that it can take it. That did not change anything though... Thanks in advance! Originally posted by tinytron on ROS Answers with karma: 45 on 2014-10-14 Post score: 0 Answer: The header is generated on the computer side, not the lidar, so it's quite possible that your data is corrupt even though the header looks correct. TTL and RS-232 use different voltages and different polarities, so that's probably causing problems. I would start by finding a proper USB to TTL serial converter. It doesn't have to be the specific Sparkfun model, but it should definitely be 5V-level and non-inverted (RS-232 is inverted). Originally posted by ahendrix with karma: 47576 on 2014-10-15 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 19733, "tags": "ros, lidar, neato, xv-11" }
Why does adding water to a saturated solution increase the number of ions present in the solution?
Question: In my book, it is stated that when some water is added to a test tube having a saturated $\ce{Ag2SO4}$ solution, with its solid at equilibrium, the number of $\ce{Ag}$ ions in the solution increases. I don't understand it. If the $\ce{Ag2SO4}$ solution is saturated, it means that no more solute can dissolve, and if we add water, the amount of dissolved solute should not change. Also, if the amount of dissolved $\ce{Ag}$ increases, would the $K_\mathrm{sp}$ value also increase? Answer: Consider a saturated solution of $\ce{Ag2SO4}$. The following equilibrium is attained. $$ \ce{Ag2SO4 <=> 2Ag+ + SO4^2-}$$ Adding water amounts to increasing the volume of the solution. Hence more solute can dissolve, since the solubility of a salt is defined as the amount of salt dissolved per unit volume. Therefore, there is an increase in the number of ions produced in the solution. But as Avnish Kabaj mentioned, $[\ce{Ag+}]$ remains the same. Also, as MollyCooL said, $K_\mathrm{sp}$ doesn't change, as it is a constant at a particular temperature.
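The volume argument can also be made quantitative. Using an assumed, purely illustrative $K_\mathrm{sp}$ of about $1.2 \times 10^{-5}$ for $\ce{Ag2SO4}$ (the exact figure doesn't matter for the point), a short sketch shows that doubling the volume doubles the total dissolved ions while leaving $[\ce{Ag+}]$ fixed:

```python
KSP = 1.2e-5  # assumed illustrative Ksp for Ag2SO4 (mol^3 L^-3)

def saturated_silver(volume_l):
    """Return ([Ag+] in mol/L, total mol of Ag+ dissolved) at saturation.

    For Ag2SO4 <=> 2Ag+ + SO4^2-, Ksp = (2s)^2 * s = 4s^3,
    where s is the molar solubility.
    """
    s = (KSP / 4) ** (1 / 3)
    return 2 * s, 2 * s * volume_l

conc_before, mol_before = saturated_silver(1.0)  # 1 L saturated solution
conc_after, mol_after = saturated_silver(2.0)    # water added: 2 L total
```

The concentration returned is the same for both volumes (it is pinned by $K_\mathrm{sp}$), while the total moles of dissolved $\ce{Ag+}$ scale with the volume.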
{ "domain": "chemistry.stackexchange", "id": 9813, "tags": "equilibrium, solubility" }
What is the point of the "check" array in a triple-array trie?
Question: I am currently reading the definition of a triple-array trie as described here. More precisely, I am trying to understand the nature of the three arrays: base. Each element in base corresponds to a node of the trie. For a trie node s, base[s] is the starting index within the next and check pool (to be explained later) for the row of the node s in the transition table. next. This array, in coordination with check, provides a pool for the allocation of the sparse vectors for the rows in the trie transition table. The vector data, that is, the vector of transitions from every node, would be stored in this array. check. This array works in parallel to next. It marks the owner of every cell in next. This allows the cells next to one another to be allocated to different trie nodes. That means the sparse vectors of transitions from more than one node are allowed to be overlapped. I understand how those three arrays are linked together, but I fail to see the point of the "check" array. It seems to me like you could walk the trie without using the "check" array at all. What am I missing? Answer: It is a bit confusing. Suppose the alphabet has size $\Sigma$. Basically the idea is that instead of allocating each state an entire block of $\Sigma$ elements in next[] to hold the target states for each possible input character, most of which would just hold a special "this transition is invalid" sentinel value since there are typically only a few valid transitions from each state, check[] lets us "overlap" these blocks of $\Sigma$ elements, and thereby save space, provided there are no "collisions" -- that is, elements in next[] that correspond to valid transitions from more than one state. 
A crucial piece of information that seems to be missing from the article (I only read down to the beginning of the next section, "Double-array Tree") is that initially all check[] values must have some special "no current owner" sentinel value (which in practice would be something like -1). Think of each element in next[] as a resource slot, which can be either unallocated (as it is initially), or allocated to exactly one source state: check[i] tells you which source state the i-th element is allocated to (or if it is unallocated). The rule is: If a source state s does not own the slot in next[] that corresponds to the transition on some character c, then there is no valid transition from state s on character c. Specifically, check[base[s] + c] != s, which literally means "State s does not own slot base[s] + c in next[]", also implies "State s has no transition on character c". In this way, all the elements of next[] that would have been set to a "this transition is invalid" sentinel value in the original every-state-gets-its-own-block-of-$\Sigma$-elements-in-next[]-to-itself scheme become available to be used by some other source state. If some such element, say at position i, is later needed by some other state s, then check[i] will be set to s at that time. Example First let's initialise the triple-array trie data structure. Suppose the alphabet is the digits 0-9, the states range from 0-49, and we have the following transitions for state 17, to which we allocate the block of 10 elements starting at position 20 in next[]: Transitions from state 17 | Character | Next state | next[] slot | |-----------+------------+-------------| | 3 | 40 | 23 | | 5 | 6 | 25 | | 8 | 13 | 28 | After adding this information to the triple-array trie, base[17] = 20, next[23] = 40, next[25] = 6 and next[28] = 13, and check[23], check[25] and check[28] are all set to 17, with every other element in check[] remaining at its initial value of -1. 
Now suppose that we have the following transitions for state 35, to which we try to allocate the overlapping block of 10 elements starting at position 22 in next[]: Transitions from state 35 | Character | Next state | next[] slot | |-----------+------------+-------------| | 2 | 10 | 24 | | 3 | 33 | 25 | <-- COLLISION There is a problem, since next[25] is needed by both source states. We detect this by noticing that check[25] != -1. In this case, either state 17's or state 35's range of slots in next[] has to be reallocated somehow. Let's reallocate state 35's range. First, undo the allocation already done (set check[24] back to -1; it is not necessary or helpful to change next[24] to anything, since there is no meaning attached to the value of next[i] when check[i] == -1). Now we will try to reallocate state 35's slot range to start at position 24 in next[]. (Why 24? It's just a value that I know will work. In practice, you would probably need to just keep trying different starting points until one worked, and there is no guarantee that any will work.) Now it looks like this: Transitions from state 35 | Character | Next state | next[] slot | |-----------+------------+-------------| | 2 | 10 | 26 | | 3 | 33 | 27 | And there will be no collisions. So after adding the information for states 17 and 35 to the data structure, we will have base[17] = 20, base[35] = 24, and next[] and check[] look like this: | i | next[i] | check[i] | |-----+---------+----------| | . | . | . | | . | . | . | | . | . | . | | 20 |don'tcare| -1 | | 21 |don'tcare| -1 | | 22 |don'tcare| -1 | | 23 | 40 | 17 | | 24 |don'tcare| -1 | | 25 | 6 | 17 | | 26 | 10 | 35 | | 27 | 33 | 35 | | 28 | 13 | 17 | | 29 |don'tcare| -1 | | . | . | . | | . | . | . | | . | . | . | Now let's try some queries. Is there a transition from state 17 on character 5? 
First see if state 17 owns the relevant entry in next[]: check[base[17] + 5] = check[25] = 17, so it does (and the destination state is then given by next[25] = 6). Is there a transition from state 35 on character 1? check[base[35] + 1] = check[25] = 17 != 35, so state 35 does not own this entry in next[] => there is no transition from state 35 on character 1.
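The whole worked example can be captured in a few lines of executable pseudocode (Python used here purely for illustration; a real triple-array trie would of course keep everything in flat arrays in C):

```python
UNUSED = -1       # "no current owner" sentinel for check[]

base = {17: 20, 35: 24}   # per-state starting offsets from the example
next_ = [0] * 40
check = [UNUSED] * 40

# Install the transitions from the worked example above.
for state, transitions in [(17, [(3, 40), (5, 6), (8, 13)]),
                           (35, [(2, 10), (3, 33)])]:
    for char, target in transitions:
        slot = base[state] + char
        assert check[slot] == UNUSED      # no collision at these bases
        next_[slot], check[slot] = target, state

def transition(state, char):
    """Follow a transition; None means 'no transition on this character'."""
    slot = base[state] + char
    return next_[slot] if check[slot] == state else None
```

The ownership test `check[slot] == state` is exactly the query walked through above: state 35 on character 1 lands on slot 25, which is owned by state 17, so the lookup correctly reports "no transition" even though next[25] holds a value.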
{ "domain": "cs.stackexchange", "id": 15625, "tags": "data-structures, information-retrieval" }
Combat game - character factory
Question: I decided to improve my OOP design knowledge and learn Python at the same time, so I started to write a simple program which will be a fight "simulator". For now, there are only 2 character types: Orc and Human. The CharacterFactory class is created so that I'll be able to handle Orc and Human classes in a more abstract way or so. GameState is meant to track the state of the game. Instances will be immediately registered to the GameState as soon as they are created. And for example, if a character dies, I want to remove it from the list as well. (self.characters). #abstract factory or something class CharacterFactory: def __init__(self, character=None, game_state=None): self.character = character game_state.register_character(self) def inspect(self): print(self.character.name) print(self.character.attack_dmg) print(self.character.health) print("\n") def attack(self, target): target.character.health = target.character.health - self.character.attack_dmg #Observable or so class GameState: def __init__(self): self.characters = [] def register_character(self, character): self.characters.append(character) def show_characters(self): list(map(lambda x: x.inspect(), self.characters)) class Orc: def __init__(self,name): self.name = name self.attack_dmg = 50 self.health = 100 class Human: def __init__(self,name): self.name = name self.attack_dmg = 45 self.health = 105 def Main(): game_state = GameState() orc = CharacterFactory(Orc("Karcsi"),game_state) #orc.inspect() #orc.attack() human = CharacterFactory(Human("Nojbejtoo"),game_state) #human.inspect() #human.attack() print("Game state:\n") game_state.show_characters() orc.attack(human) game_state.show_characters() Main() My question is: Is this a good design? Or maybe a complete pain in the ... to work with? Is there something that I could improve? 
Of course this code is really far from being finished, but I try to find the best approach possible to design things like this so that I can deepen my knowledge in design patterns and stuff like that. Answer: The CharacterFactory class is created so that I'll be able to handle Orc and Human classes in a more abstract way or so. You try to stay DRY, which is very good, but this idea would be better represented with inheritance. You could create a class Character() and let both Human and Orc inherit from the superclass like so: class Character(): def __init__(self, health, attack_damage, name): self.attack_damage = attack_damage self.health = health self.name = name def attack(self, target): target.health -= self.attack_damage def __str__(self): return f"Name: {self.name}\nDamage: {self.attack_damage}\nHealth: {self.health}\n" class Human(Character): def __init__(self, name, health=105, attack_damage=45): super().__init__(health, attack_damage, name) class Orc(Character): def __init__(self, name, health=100, attack_damage=50): super().__init__(health, attack_damage, name) def main(): orc = Orc("Karcsi") human = Human("Nojbejtoo") print(orc) print(human) orc.attack(human) print(human) if __name__ == "__main__": main() Things I changed: Instead of an inspect function, I override the magic function __str__ so you can directly print(human) or print(orc) The use of the if __name__ == '__main__' guard The snake_case function name main instead of Main In the attack function body, note that a = a - b is equivalent to a -= b I have omitted the GameState part; maybe someone else will pick that up. Good luck with your game!
{ "domain": "codereview.stackexchange", "id": 31764, "tags": "python, object-oriented, python-3.x, role-playing-game, abstract-factory" }
Numerical renormalization
Question: Is there a numerical algorithm (numerical method) for renormalization? I mean, you have the action $ S[\phi] = \int d^{4}x\, L(\phi, \partial_{\mu} \phi) $ for a given theory and you want to renormalize it without using perturbation theory. Can this be done with numerical methods? Or, on the other hand, suppose we have a divergent integral in perturbative renormalization and we want to evaluate it with numerical methods to extract some finite part of it. Answer: The answer to this question is a resounding YES. Lattice field theorists do their computations entirely numerically. As a result, they must resort to numerical (and hence, nonperturbative) renormalization (by extrapolating down the lattice spacing). They would not deal with counterterms, but rather deal directly with the various $Z$ factors appearing in the renormalized Lagrangian.
{ "domain": "physics.stackexchange", "id": 5211, "tags": "renormalization" }
How to display the output as the input with a fractional delay?
Question: I am trying to introduce a fractional delay in my input vector x[n] so the output is something like y[n]=x[n-M/N]. I am not supposed to use the MATLAB fracdelay function. So, I used sampling and delay. The three steps I followed are: 1) Upsample the input by M. 2) Delay the upsampled version. 3) Downsample the output obtained in step 2 to get y[n]. However I can't get the correct fractionally delayed output. Can someone please help? Here is my MATLAB code: x=[1 2 3 4 1 3 4 5 6 3]; up=upsample(x,2); z=zeros(1,10); for n=1:10; k=n+1; z(1,k)=up(1,n); end y=downsample(z,2,1); subplot(411); stem(x) subplot(412) stem(up) subplot(413); stem(z) subplot(414); stem(y) Here is what my output looks like: Answer: You should use the built-in upsample() and downsample() functions for the obvious purposes. For delaying the signal, you'll have to create a separate index vector, because in MATLAB you can't keep track of the index without this step. Then simply introduce the fractional delay in the index vector and plot your original signal against this delayed version of the index vector.
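For reference, the upsample-shift-downsample idea itself is sound; the sketch below (Python, for illustration) uses linear interpolation as a simple stand-in for the proper band-limited interpolation filter a real design would use. Zero-stuffing alone, as in the question's code, leaves spectral images that corrupt the result.

```python
def fractional_delay(x, m, n):
    """Delay x by m/n samples: interpolate to n times the rate,
    shift by m high-rate samples, then decimate by n.

    Linear interpolation is a crude stand-in for the low-pass
    interpolation filter a real design would use.
    """
    up = []
    for i in range(len(x) - 1):
        for k in range(n):
            t = k / n
            up.append((1 - t) * x[i] + t * x[i + 1])
    up.append(x[-1])
    delayed = [0] * m + up          # integer delay at the high rate
    return delayed[::n][:len(x)]    # back to the original rate
```

On a ramp x = [0, 2, 4, 6], a half-sample delay (m=1, n=2) yields [0, 1, 3, 5]: each output sample is the input evaluated half a sample earlier.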
{ "domain": "dsp.stackexchange", "id": 1535, "tags": "matlab, digital-communications, downsampling, delay" }
Calculate bag of words feature vector
Question: In the visual bag-of-words model, I have been able to construct the visual codebook through k-means clustering of SIFT descriptors. How do I then calculate the feature vector for an image? P.S.: For each image, we can find interesting SIFT points, and for each point we have a SIFT descriptor (which is usually a length-128 vector). im1 ==> SIFT feature f1 (10 by 128) (here 10 is an arbitrary number) im2 ==> SIFT feature f2 (20 by 128) ... If we combine all SIFT features, f=[f1; f2; ..] and perform k-means clustering, we will get the codebook c=[c1; c2; .. c10], which is the BoW codebook. From the codebook, how can we find the feature vector that represents image im1? Answer: The feature vector is just the histogram of how many times a feature from each cluster appeared in the image.
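Concretely (an illustrative sketch, using plain nearest-centroid assignment under squared Euclidean distance): for each of im1's descriptors, find the closest codeword and increment that bin; the resulting histogram, with one bin per codeword, is im1's feature vector.

```python
def bow_vector(descriptors, codebook):
    """Bag-of-words vector: histogram of nearest-codeword assignments."""
    hist = [0] * len(codebook)
    for d in descriptors:
        # squared Euclidean distance to each codeword
        dists = [sum((a - b) ** 2 for a, b in zip(d, c)) for c in codebook]
        hist[dists.index(min(dists))] += 1
    return hist

# Toy example: 2-D "descriptors" and a 2-word codebook.
codebook = [[0, 0], [10, 10]]
features = [[1, 0], [9, 9], [0, 1], [10, 11]]
```

In practice the histogram is often L1- or L2-normalised so that images with different numbers of keypoints remain comparable.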
{ "domain": "dsp.stackexchange", "id": 1343, "tags": "computer-vision" }
How to prove $P^{Halt} = PSPACE^{Halt}$
Question: Halt means the halting set. $PSPACE^{Halt}$ is the class of problems that can be solved with polynomial memory (possibly exponential time), given a halting oracle. Answer: Using $n$ calls to the halting oracle and time $O(n^2)$, you can compute the first $n$ bits of Chaitin's constant. Using the $n$ bits of Chaitin's constant and unbounded time, all queries to the halting oracle of length approximately $≤n$ (depending on the representation) can be eliminated. Finally, the result can be obtained in bounded time by querying the halting oracle about a machine that carries out this unbounded-time computation. Additional Details: In the relativization of $\mathrm{PSPACE}$ here, the oracle tape counts against space usage, so a $\mathrm{PSPACE}^\mathrm{Halt}$ machine can only use queries of length $n^{O(1)}$ (though the number of queries can be exponential). The digits of Chaitin's constant can be used to get the number of machines of length $≤n$ that halt, and by simulating all these machines until the right number halts, we can find out exactly which of the (roughly $2^n$) machines halt. The digits of Chaitin's constant, or just the number of halting machines, can be computed in $n^{O(1)}$ queries by bisection.
{ "domain": "cstheory.stackexchange", "id": 4197, "tags": "oracles, pspace" }
CH3I + NaCl in acetone gives
Question: How will methyl iodide react with $\ce{NaCl}$ in acetone? I think an $S_N2$ reaction will take place, where iodine will be replaced by chlorine and we will get $\ce{CH3Cl}$. Can anyone suggest why I am wrong and what the actual reaction will be? Answer: There will be no reaction. The solubility of NaCl in acetone is 0.42 mg per kilogram of acetone (source: Wikipedia, here). This means that there is insufficient NaCl in solution for any reaction to occur.
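To put a number on "insufficient", here is a back-of-the-envelope sketch using the solubility figure quoted above:

```python
SOLUBILITY_MG_PER_KG = 0.42   # NaCl in acetone, figure quoted above
MOLAR_MASS_NACL = 58.44       # g/mol

def dissolved_chloride_mol(kg_acetone):
    """Upper bound on Cl- (mol) actually in solution and available
    as a nucleophile for substitution."""
    grams = SOLUBILITY_MG_PER_KG * 1e-3 * kg_acetone
    return grams / MOLAR_MASS_NACL
```

That works out to roughly 7 micromoles of chloride per kilogram of solvent, far too little for any observable substitution.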
{ "domain": "chemistry.stackexchange", "id": 16074, "tags": "organic-chemistry, nucleophilic-substitution" }
Why does binary search take $O(\log n)$ time?
Question: My question seems like an elementary question, but it's really not. Suppose I have one million cars in a line sorted in alphabetical order by license plate. I am standing at the top of the line with the car with license plate AAA111. I want to find the car with license plate ZZZ999. I could do binary search to find the car with license plate ZZZ999, but I'm still going to have to walk the distance of one million cars, not a distance on the order of $\log 10^6$. So this is why I'm asking: how or why did computer scientists come up with the idea that binary search takes $O(\log n)$ time? Answer: This is only possible in data structures which allow random access, like arrays and vectors. So if you already know that you have to access the last car, then you can do it in O(1) time. Data structures like linked lists won't allow you to perform a binary search in $O(\log n)$. The example you have mentioned is similar to using binary search on a linked list. So it will need O(n) time, as you will need to traverse all the nodes to reach the last node in the worst case. So your argument that you cannot reach the middle car in no time is valid in your example, but not in data structures which offer random access.
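The answer's point, that with random access each probe costs O(1) and no walking is needed, can be seen directly by counting probes (an illustrative sketch, not tied to the car example's data):

```python
def binary_search(arr, target):
    """Return (index of target or None, number of probes made)."""
    lo, hi, probes = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if arr[mid] == target:
            return mid, probes
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, probes

plates = list(range(1_000_000))           # a sorted "line" of a million items
idx, probes = binary_search(plates, 999_999)
```

Even for the very last element, only about 20 probes are made (log2 of one million is just under 20), versus a million steps of walking the line.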
{ "domain": "cs.stackexchange", "id": 3210, "tags": "complexity-theory, history" }
Error Installing ROS indigo on ubuntu 15.10
Question: Hi, I am trying to install ROS indigo on Ubuntu 15.10. I am following the tutorials given here: http://wiki.ros.org/indigo/Installation/Ubuntu When I enter the command sudo apt-get install ros-indigo-desktop-full I get the error "Unable to locate package ros-indigo-desktop-full" I thought there was something wrong with the sources.list file, but when I opened it in gedit, the restricted, universe, and multiverse lines are already uncommented. I managed to follow all the steps from 1.1 to 1.4 up until I had to enter sudo apt-get install ros-indigo-desktop-full, which gives me the error mentioned above. Please advise! Originally posted by uzair on ROS Answers with karma: 87 on 2015-10-30 Post score: 0 Answer: The installation page of the wiki states: Setup your computer to accept software from packages.ros.org. ROS Indigo ONLY supports Saucy (13.10) and Trusty (14.04) for debian packages. Therefore you may not be able to install debian packages on 15.10. Originally posted by Akif with karma: 3561 on 2015-10-30 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by uzair on 2015-10-30: http://answers.ros.org/question/220064/ros-on-ubuntu-wily/ The answer on this post says "You can always try building ROS from source if you are stuck on 15.10.". Do you know what this means? Comment by Akif on 2015-10-30: As it tells in that answer, instead of installing ROS using a debian package, you can download the source code and build packages on your operating system. Details are here: http://wiki.ros.org/indigo/Installation/Source Comment by peci1 on 2015-12-18: I've succeeded building some part of ROS Indigo on 15.10, That doesn't mean anything will work, but at least the core stuff works. If you'd need to satisfy some dependency that doesn't have a package in 15.10, you may try playing with /etc/ros/rosdep/sources.list.d and create your own dep mapping.
{ "domain": "robotics.stackexchange", "id": 22869, "tags": "ros-indigo" }
Is this decidable language a subset of this undecidable language?
Question: I think I understand the theoretical definition of decidable and undecidable languages but I am struggling with their examples. A(DFA) = {(M, w): M is a deterministic finite automaton that accepts the string w} A(TM) = {(M, w): M is a Turing machine that accepts the string w} I know that A(DFA) is decidable and A(TM) is not. But, is A(DFA) a subset of A(TM)? Answer: No, A(DFA) is not necessarily a subset of A(TM). When you write "(M, w)" you actually refer to some encoding of M in some alphabet (usually {0,1}). These encodings of DFAs and TMs can be very different. For a simple example: Let $A_1, A_2, \dots$ be an enumeration of all DFAs, and let $M_1, M_2, \dots$ be an enumeration of all TMs. You could say that you encode $A_i$ as the word $0 \text{bin}(i)$ and $M_i$ as $1 \text{bin}(i)$, where $\text{bin}(i)$ is the binary representation of the number $i$. Then the two languages you presented are disjoint. As it seems that this is part of your question: being a subset and being computationally easier are not related. Take the set $\Sigma^*$ of all possible words. It is easily decidable (even by a DFA), but every language, even an undecidable one, is a subset of it.
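The toy prefix encoding from the answer can be written out concretely. This is only a sketch of the disjointness argument: a real encoding of an automaton would carry its transition table, not just an index, but the leading tag bit is what keeps the two languages apart.

```python
def encode_dfa(i):
    # Tag the i-th DFA with a leading '0'
    return "0" + bin(i)[2:]

def encode_tm(i):
    # Tag the i-th TM with a leading '1'
    return "1" + bin(i)[2:]

# Every DFA code starts with '0' and every TM code with '1',
# so no string can encode both a DFA and a TM under this scheme.
dfa_codes = {encode_dfa(i) for i in range(1, 100)}
tm_codes = {encode_tm(i) for i in range(1, 100)}
assert dfa_codes.isdisjoint(tm_codes)
```

Under this encoding, a pair (M, w) with M a DFA can never equal a pair (M', w) with M' a TM, which is the point of the answer.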
{ "domain": "cs.stackexchange", "id": 6631, "tags": "undecidability" }
usb_cam low FPS from webcam
Question: Hello, I found out I can use usb_cam to publish images from my webcam. I tested it with my laptop's webcam and it performs poorly, only 7 FPS. OS: Ubuntu 11.10 Oneiric ROS: Electric (installed via apt-get) dmesg output regarding webcam: [ 7209.629430] usb 2-1: USB disconnect, device number 3 [ 7211.188071] usb 2-1: new high speed USB device number 6 using ehci_hcd [ 7211.383400] uvcvideo: Found UVC 1.00 device CNF7231 (04f2:b073) [ 7211.395092] input: CNF7231 as /devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/input/input11 I downloaded http://www.ros.org/wiki/usb_cam and compiled it using rosmake. Compilation log: http://pastebin.com/rJYMR0Bw As you can see it complains about libswscale (although I have it installed) but it managed to compile. Starting usb_cam gives the following output: http://pastebin.com/6JQ2CSpP Any ideas? Thanks in advance. Originally posted by Rosia Nicolae on ROS Answers with karma: 91 on 2011-12-03 Post score: 1 Original comments Comment by Mac on 2012-02-16: Have you tried other webcam software? Webcam support in Linux is...spotty...at best. Do guvcview, or cheese, or similar, give you better behavior? It's not necessarily ROS' fault... Answer: Maybe you can try another pixel format; my launch file for my usb webcam looks like this: <launch> <node name="usb_cam" pkg="usb_cam" type="usb_cam_node"> <param name="video_device" value="/dev/video0" /> <param name="image_width" value="640" /> <param name="image_height" value="480" /> <param name="pixel_format" value="yuyv" /> <param name="camera_frame_id" value="usb_cam" /> <param name="io_method" value="mmap" /> </node> </launch>
{ "domain": "robotics.stackexchange", "id": 7508, "tags": "ros, usb-cam, camera, webcam" }
Silver electroplating
Question: In most of silver electroplating (or electrorefining) methods, it is mentioned that silver anode is oxidized in aqueous solution of silver nitrate, and that silver is deposited at the cathode (regardless of cathode material, let's assume it's also silver for the sake of simplicity). What I don't get is why would silver anode be oxidized before $\ce{OH-}$? The reduction potentials are ~0.8 V for silver and ~0.4 V for $\ce{OH-}$, at standard conditions. Answer: Hydroxide is bad for this process. You will have additional problems if hydroxide anion is present. Silver cation reacts with hydroxide anion to form silver hydroxide, which spontaneously decomposes into silver oxide: $$\begin{aligned}\ce{Ag+ + OH-}&\ce{ -> AgOH}\\ \ce{2AgOH}&\ce{ -> H2O +Ag2O}\end{aligned}$$ In addition to shutting down the oxidation of silver, hydroxide precipitates any silver cation that does dissolve. For this reason, the electroplating process is probably run at a slightly acidic pH, and the competing oxidation reaction is not of hydroxide, but of water (reduction potential 1.229 V): $$\ce{2H2O -> O2 + 4H+ + 4e-}\ E^\circ = +1.229\mathrm{\ V}$$ So long as you don't apply too much of an overpotential to the electroplating operating, silver will oxidize before water.
{ "domain": "chemistry.stackexchange", "id": 6207, "tags": "electrochemistry, electrolysis, silver, electroplating" }
Why does the rate of change of atomic radius in the second period change so drastically?
Question: I was reading in my textbook Chemistry Part I, Textbook for Class XI by NCERT, ed. January 2021 that: The atomic size generally decreases across a period as illustrated in Fig. 3.4 (a) for the elements in the second period. It is because within the period the outer electrons are in the same valence shell and the effective nuclear charge increases as the atomic number increases resulting in the increased attraction of the electrons to the nucleus. I understood what they've tried to say, but when I looked at Fig. 3.4 (a) that they provided, I noticed that the rate of change for the atomic radius becomes less drastic as I move across the period from left to right, albeit there's an anomaly: the slope of the graph between C and N is less steep than the slope of the graph between N and O. What explains this? To reiterate, I understand how the atomic radius changes across a period, and I also understand why it does so. What I don't understand, however, is why the rate of change of atomic radius changes. Why is the graph not linear? Answer: Within a given period, one might expect such a graph to be of roughly the same shape as the one-electron Bohr radius $r=\frac{53}{Z}$ pm, which is nonlinear and follows from the basic force equations of the Bohr model. Of course, the presence of additional electrons will perturb this basic structure to some extent, but there is no reason for it to become linear.
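The nonlinearity of the Bohr estimate $r = 53/Z$ pm can be seen numerically. This is a rough illustration only: real multi-electron atomic radii deviate from the one-electron formula, but the flattening of the curve is the same.

```python
# Bohr-model radius estimate r = 53/Z pm for second-period nuclear charges,
# Li (Z=3) through Ne (Z=10)
radii = {Z: 53 / Z for Z in range(3, 11)}

# Successive decreases shrink as Z grows: the curve flattens rather than
# falling along a straight line.
drops = [radii[Z] - radii[Z + 1] for Z in range(3, 10)]
assert all(a > b for a, b in zip(drops, drops[1:]))
print([round(d, 2) for d in drops])
```

Each drop is smaller than the one before (roughly 4.4 pm from Li to Be, but under 1 pm at the right end of the period), which matches the textbook figure's shape.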
{ "domain": "chemistry.stackexchange", "id": 16859, "tags": "inorganic-chemistry, periodic-trends" }
EM-wave energy of the Gaussian laser beam
Question: I am trying to calculate EM-energy store in a Gaussian beam as $U_{EM} = \epsilon_0 \int_V d^3r \vec{E} \cdot \vec{E}^{\dagger}$ What I get is $U_{EM} \propto (\pi A^2/2)*\infty$ because of integration over $z$. What is the consistent way of defining the EM-energy stored in an optical beam? Thanks in advance for help! Let's consider the Gauss beam as: $\vec{E} = \frac{\vec{A}}{\text{w}(z)} e^{-\frac{\rho^2}{\text{w}^2(z)}}e^{\frac{ik\rho^2}{2R(z)}}$ Explicit evaluation: $\int_V d^3r \vec{E} \cdot \vec{E}^{\dagger} = 2\pi A^2 \int_0^{\infty} dz \frac{1}{\text{w}^2(z)} \int_0^{\infty}d\rho \;\rho \; e^{-\frac{2\rho^2}{\text{w}^2}} = \int_0^{\infty}dz \frac{2\pi A^2\text{w}^2(z)}{4 \text{w}^2(z)}=\frac{\pi A^2}{2}\int_0^{\infty}dz=\infty$ I guess the beam cannot propagate to infinity... Makes sense. How do I define EM energy though in an unambiguous way? Answer: Of course you get an infinite energy when integrating from $z=0$ to $\infty$. This is no surprise. The laser needs to shine for an infinite time to reach $z=\infty$, and therefore has emitted infinite energy. It would make more sense to calculate the energy per length (let's say from $z=0$ to $L$): $$\begin{align} \frac{U_{EM}}{L} &= \frac{1}{2}\epsilon_0\frac{1}{L} \int_V d^3r \vec{E} \cdot \vec{E}^{\dagger} \\ &= \frac{1}{2}\epsilon_0\frac{2\pi A^2}{L} \int_0^L dz \frac{1}{\text{w}^2(z)} \int_0^{\infty}d\rho \;\rho \; e^{-\frac{2\rho^2}{\text{w}^2}} \\ &= \frac{1}{2}\epsilon_0\frac{1}{L} \int_0^Ldz \frac{2\pi A^2\text{w}^2(z)}{4 \text{w}^2(z)} \\ &= \frac{1}{2}\epsilon_0\frac{\pi A^2}{2L}\int_0^Ldz \\ &= \epsilon_0\frac{\pi A^2}{4} \end{align}$$
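As a sanity check (not part of the original answer), the transverse step $\int_0^\infty \rho\, e^{-2\rho^2/\text{w}^2}\,d\rho = \text{w}^2/4$ used in the derivation, which is what makes the energy per unit length independent of $z$, can be verified numerically with a simple trapezoidal sum:

```python
import math

def transverse_integral(w, n=200_000):
    # Trapezoidal estimate of the integral of rho*exp(-2*rho^2/w^2)
    # over [0, 10w]; the integrand is negligible beyond a few beam radii.
    rho_max = 10 * w
    h = rho_max / n
    total = 0.0
    for k in range(n + 1):
        rho = k * h
        f = rho * math.exp(-2 * rho * rho / (w * w))
        total += f / 2 if k in (0, n) else f
    return total * h

# Closed form is w^2/4 for every beam width w
for w in (0.5, 1.0, 2.0):
    assert abs(transverse_integral(w) - w * w / 4) < 1e-6
```

Because this integral equals $\text{w}^2(z)/4$, the $\text{w}^2(z)$ factors cancel against the $1/\text{w}^2(z)$ prefactor, so the energy per unit length is the same at every $z$ and only the total (all-$z$) energy diverges.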
{ "domain": "physics.stackexchange", "id": 68569, "tags": "electromagnetism, optics, waves" }
How can I use test nodes in rostest which are not part of rostest itself?
Question: I want to test a ROS node with rostest but with a test node which is not included in rostest itself. That means, instead of using the hztest test node like in this rostest test... <launch> <node name="talker" pkg="rospy" type="talker.py" /> <param name="hztest1/topic" value="chatter" /> <param name="hztest1/hz" value="10.0" /> <param name="hztest1/hzerror" value="0.5" /> <param name="hztest1/test_duration" value="5.0" /> <param name="hztest1/wait_time" value="21.0" /> <test test-name="hztest_test" pkg="rostest" type="hztest" name="hztest1" /> </launch> ... I would like to use my own test node owntest from my own ROS Python package ownpackage instead: <launch> <node name="talker" pkg="rospy" type="talker.py" /> <param name="owntest1/topic" value="chatter" /> ... <param name="owntest1/test_duration" value="5.0" /> <test test-name="owntest_test" pkg="ownpackage" type="owntest" name="owntest1" /> </launch> I built and installed ownpackage with catkin_make --pkg ownpackage, cd /build/ownpackage and make install. However if I run the test I get an error because rostest does not know about the existence of owntest: FAILURE: Test Fixture Nodes ['owntest'] failed to launch File "/usr/lib/python2.7/unittest/case.py", line 329, in run testMethod() File "/home/florian/ws_catkin/src/ros_comm/tools/rostest/src/rostest/runner.py", line 121, in fn self.assert_(not failed, "Test Fixture Nodes %s failed to launch"%failed) File "/usr/lib/python2.7/unittest/case.py", line 422, in assertTrue raise self.failureException(msg) How can I let rostest know about the existence of the owntest node from ownpackage so that rostest can use it in a test? Or alternatively: What is the best approach/development procedure if I were to place owntest into ros_comm/rostest/nodes instead of into ownpackage?
Edit: If I start the roscore and try to run a rostest test node with rosrun rostest hztest I get the following output: [ROSUNIT] Outputting test results to /home/kromer/.ros/test_results/rostest/rosunit-hztest.xml test_hz ... FAILURE! FAILURE: hztest not initialized properly. Parameter ['~hz'] not set. debug[/hztest] debug[/hztest/hz] File "/usr/lib/python2.7/unittest/case.py", line 331, in run testMethod() File "/opt/ros/indigo/share/rostest/nodes/hztest/hztest", line 89, in test_hz self.fail('hztest not initialized properly. Parameter [%s] not set. debug[%s] debug[%s]'%(str(e), rospy.get_caller_id(), rospy.resolve_name(e.args[0]))) File "/usr/lib/python2.7/unittest/case.py", line 412, in fail raise self.failureException(msg) -------------------------------------------------------------------------------- ------------------------------------------------------------- SUMMARY: * RESULT: FAIL * TESTS: 1 * ERRORS: 0 [] * FAILURES: 1 [test_hz] I would expect the same for my package. I diffed the files of my package and rostest to figure out what I missed... Originally posted by thinwybk on ROS Answers with karma: 468 on 2017-10-07 Post score: 0 Original comments Comment by ahendrix on 2017-10-07: This should work if you can run owntest with rosrun. Try rosrun ownpackage owntest Comment by thinwybk on 2017-10-09: If I run rosrun rosfake faketopicpublisherint32.test I get the error [rospack] Error: package 'rosfake' not found. Seems like I missed or messed up something in the package definition of the package. Comment by gvdhoorn on 2017-10-09: @thinwybk: please only use answers to answer your own question. For everything else, use comments or edit your original question text. I've already merged your update into your OP, but please keep it in mind for next time. Thanks. Comment by thinwybk on 2017-10-09: @gvdhoorn Thanks for the hint. I will consider that in the future. Answer: Sorry for that stupid question. 
I just forgot to make owntest executable with chmod +x owntest. Originally posted by thinwybk with karma: 468 on 2017-10-11 This answer was ACCEPTED on the original site Post score: 0
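The failure mode here (a node file that exists but lacks the executable bit, so the launcher reports it "failed to launch") can be reproduced generically. The snippet below is a plain-Python illustration of the fix, not ROS-specific code; the temp file stands in for a node script such as owntest.

```python
import os
import stat
import tempfile

# Create a script file without the executable bit, like a freshly written node
fd, path = tempfile.mkstemp(suffix=".py")
os.close(fd)
os.chmod(path, 0o644)

assert not os.access(path, os.X_OK)  # a launcher would refuse to run this

# Equivalent of `chmod +x owntest`
mode = os.stat(path).st_mode
os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

assert os.access(path, os.X_OK)  # now launchable
os.remove(path)
```

Checking os.access(path, os.X_OK) (or simply ls -l) is a quick first diagnostic whenever a test fixture node "fails to launch" without any other error.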
{ "domain": "robotics.stackexchange", "id": 29025, "tags": "catkin, rostest" }
Psychoactive ingredients in this soap?
Question: Could any ingredients from this product be psychoactive in some people, if they are sensitive to them? https://www.lush.co.uk/products/honey-i-washed-kids I have found the product to be relaxing as advertised, but more so than I would expect from a nice scent alone. Answer: None of the ingredients listed would be classified as a psychotropic substance. However, many of the compounds, when studied in mice, do have effects on brain activity and behavior characteristics. For example, rapeseed oil (in a mixture with other vegetable oils): "the rapeseed oil group exhibited much higher locomotor activity than that expected from the alpha-linolenate/linoleate ratio. Additionally, the rapeseed oil group exhibited unusual behavior patterns, including higher ambulation and rearing activities, faster acquisition of the water maze task and slower habituation behavior as compared with the control group. Susceptibility to pentobarbital anesthesia tended to be higher in the rapeseed oil group." abstract link bergamot: "produces dose-dependent increases in locomotor and exploratory activity that correlate with a predominant increase in the energy in the faster frequency bands of the EEG spectrum" abstract link linalool: "Inhaled linalool showed anxiolytic properties in the light/dark test, increased social interaction and decreased aggressive behavior" abstract link So I am not surprised to read that when used in combination they have a noticeable effect on you.
{ "domain": "chemistry.stackexchange", "id": 1624, "tags": "everyday-chemistry, biochemistry, drugs" }
LinkedList Swift Implementation
Question: Linked List Implementation in Swift Swift 5.0, Xcode 10.3 I have written an implementation for a doubly linked list in Swift. As well, I decided to make the node class private and thus hidden to the user so they don't ever need to interact with it. I have written all of the algorithms that I needed to give it MutableCollection, BidirectionalCollection, and RandomAccessCollection conformance. What I Could Use Help With I am pretty sure that my LinkedList type properly satisfies all of the time complexity requirements of certain algorithms and operations that come hand in hand with linked lists but am not sure. I was also wondering if there are any ways I can make my Linked List implementation more efficient. In addition, I am not sure if there are and linked list specific methods or computed properties that I have not included that I should implement. I have some testing but if you do find any errors/mistakes in my code that would help a lot. Any other input is also appreciated! Here is my code: public struct LinkedList<Element> { private var headNode: LinkedListNode<Element>? private var tailNode: LinkedListNode<Element>? public private(set) var count: Int = 0 public init() { } } //MARK: - LinkedList Node extension LinkedList { fileprivate typealias Node<T> = LinkedListNode<T> fileprivate class LinkedListNode<T> { public var value: T public var next: LinkedListNode<T>? public weak var previous: LinkedListNode<T>? public init(value: T) { self.value = value } } } //MARK: - Initializers public extension LinkedList { private init(_ nodeChain: NodeChain<Element>?) { guard let chain = nodeChain else { return } headNode = chain.head tailNode = chain.tail count = chain.count } init<S>(_ sequence: S) where S: Sequence, S.Element == Element { if let linkedList = sequence as? 
LinkedList<Element> { self = linkedList } else { self = LinkedList(NodeChain(of: sequence)) } } } //MARK: NodeChain extension LinkedList { private struct NodeChain<Element> { let head: Node<Element>! let tail: Node<Element>! private(set) var count: Int // Creates a chain of nodes from a sequence. Returns `nil` if the sequence is empty. init?<S>(of sequence: S) where S: Sequence, S.Element == Element { var iterator = sequence.makeIterator() guard let firstValue = iterator.next() else { return nil } var currentNode = Node(value: firstValue) head = currentNode var nodeCount = 1 while true { if let nextElement = iterator.next() { let nextNode = Node(value: nextElement) currentNode.next = nextNode nextNode.previous = currentNode currentNode = nextNode nodeCount += 1 } else { tail = currentNode count = nodeCount return } } return nil } } } //MARK: - Copy Nodes extension LinkedList { private mutating func copyNodes() { guard let nodeChain = NodeChain(of: self) else { return } headNode = nodeChain.head tailNode = nodeChain.tail } } //MARK: - Computed Properties public extension LinkedList { var head: Element? { return headNode?.value } var tail: Element? { return tailNode?.value } var first: Element? { return head } var last: Element? { return tail } } //MARK: - Sequence Conformance extension LinkedList: Sequence { public typealias Iterator = LinkedListIterator<Element> public __consuming func makeIterator() -> LinkedList<Element>.Iterator { return LinkedListIterator(node: headNode) } public struct LinkedListIterator<T>: IteratorProtocol { public typealias Element = T private var currentNode: LinkedListNode<T>? fileprivate init(node: LinkedListNode<T>?) { currentNode = node } public mutating func next() -> T? 
{ guard let node = currentNode else { return nil } currentNode = node.next return node.value } } } //MARK: - Collection Conformance extension LinkedList: Collection { public typealias Index = LinkedListIndex<Element> public var startIndex: LinkedList<Element>.Index { return Index(node: headNode, offset: 0) } public var endIndex: LinkedList<Element>.Index { return Index(node: nil, offset: count) } public func index(after i: LinkedList<Element>.Index) -> LinkedList<Element>.LinkedListIndex<Element> { precondition(i.offset != endIndex.offset, "LinkedList index is out of bounds") return Index(node: i.node?.next, offset: i.offset + 1) } public struct LinkedListIndex<T>: Comparable { fileprivate weak var node: LinkedList.Node<T>? fileprivate var offset: Int fileprivate init(node: LinkedList.Node<T>?, offset: Int) { self.node = node self.offset = offset } public static func ==<T>(lhs: LinkedListIndex<T>, rhs: LinkedListIndex<T>) -> Bool { return lhs.offset == rhs.offset } public static func < <T>(lhs: LinkedListIndex<T>, rhs: LinkedListIndex<T>) -> Bool { return lhs.offset < rhs.offset } } } //MARK: - MutableCollection Conformance extension LinkedList: MutableCollection { public subscript(position: LinkedList<Element>.Index) -> Element { get { precondition(position.offset != endIndex.offset, "Index out of range") guard let node = position.node else { fatalError("LinkedList index is invalid") } return node.value } set { precondition(position.offset != endIndex.offset, "Index out of range") // Copy-on-write semantics for nodes if !isKnownUniquelyReferenced(&headNode) { copyNodes() } position.node?.value = newValue } } } //MARK: LinkedList Specific Operations public extension LinkedList { mutating func prepend(_ newElement: Element) { replaceSubrange(startIndex..<startIndex, with: [newElement]) } mutating func prepend<S>(contentsOf newElements: S) where S: Sequence, S.Element == Element { replaceSubrange(startIndex..<startIndex, with: newElements) } mutating func popFirst() 
-> Element? { guard !isEmpty else { return nil } return removeFirst() } mutating func popLast() -> Element? { guard !isEmpty else { return nil } return removeLast() } } //MARK: - BidirectionalCollection Conformance extension LinkedList: BidirectionalCollection { public func index(before i: LinkedList<Element>.LinkedListIndex<Element>) -> LinkedList<Element>.LinkedListIndex<Element> { precondition(i.offset != startIndex.offset, "LinkedList index is out of bounds") if i.offset == count { return Index(node: tailNode, offset: i.offset - 1) } return Index(node: i.node?.previous, offset: i.offset - 1) } } //MARK: - RangeReplaceableCollection Conformance extension LinkedList: RangeReplaceableCollection { public mutating func replaceSubrange<S, R>(_ subrange: R, with newElements: __owned S) where S : Sequence, R : RangeExpression, LinkedList<Element>.Element == S.Element, LinkedList<Element>.Index == R.Bound { let range = subrange.relative(to: indices) precondition(range.lowerBound >= startIndex && range.upperBound <= endIndex, "Subrange bounds are out of range") // If range covers all elements and the new elements are a LinkedList then set references to it if range.lowerBound == startIndex, range.upperBound == endIndex, let linkedList = newElements as? 
LinkedList { self = linkedList return } var newElementsCount = 0 // Update count after replacement defer { count = count - (range.upperBound.offset - range.lowerBound.offset) + newElementsCount } // There are no new elements, so range indicates deletion guard let nodeChain = NodeChain(of: newElements) else { // If there is nothing in the removal range // This also covers the case that the linked list is empty because this is the only possible range guard range.lowerBound != range.upperBound else { return } // Deletion range spans all elements guard !(range.lowerBound == startIndex && range.upperBound == endIndex) else { headNode = nil tailNode = nil return } // Copy-on-write semantics for nodes before mutation if !isKnownUniquelyReferenced(&headNode) { copyNodes() } // Move head up if deletion starts at start index if range.lowerBound == startIndex { // Can force unwrap node since the upperBound is not the end index headNode = range.upperBound.node! headNode!.previous = nil // Move tail back if deletion ends at end index } else if range.upperBound == endIndex { // Can force unwrap since lowerBound index must have an associated element tailNode = range.lowerBound.node!.previous tailNode!.next = nil // Deletion range is in the middle of the linked list } else { // Can force unwrap all bound nodes since they both must have elements range.upperBound.node!.previous = range.lowerBound.node!.previous range.lowerBound.node!.previous!.next = range.upperBound.node! 
} return } // Obtain the count of the new elements from the node chain composed from them newElementsCount = nodeChain.count // Replace entire content of list with new elements guard !(range.lowerBound == startIndex && range.upperBound == endIndex) else { headNode = nodeChain.head tailNode = nodeChain.tail return } // Copy-on-write semantics for nodes before mutation if !isKnownUniquelyReferenced(&headNode) { copyNodes() } // Prepending new elements guard range.upperBound != startIndex else { headNode?.previous = nodeChain.tail nodeChain.tail.next = headNode headNode = nodeChain.head return } // Appending new elements guard range.lowerBound != endIndex else { tailNode?.next = nodeChain.head nodeChain.head.previous = tailNode tailNode = nodeChain.tail return } if range.lowerBound == startIndex { headNode = nodeChain.head } if range.upperBound == endIndex { tailNode = nodeChain.tail } range.lowerBound.node!.previous!.next = nodeChain.head range.upperBound.node!.previous = nodeChain.tail } } //MARK: - ExpressibleByArrayLiteral Conformance extension LinkedList: ExpressibleByArrayLiteral { public typealias ArrayLiteralElement = Element public init(arrayLiteral elements: LinkedList<Element>.ArrayLiteralElement...) { self.init(elements) } } //MARK: - CustomStringConvertible Conformance extension LinkedList: CustomStringConvertible { public var description: String { return "[" + lazy.map { "\($0)" }.joined(separator: ", ") + "]" } } Note: My up-to-date LinkedList implementation can be found here: https://github.com/Wildchild9/LinkedList-Swift. Answer: Nested types All “dependent” types are defined within the scope of LinkedList, which is good. To reference those types from within LinkedList you don't have to prefix the outer type. 
For example, public func index(before i: LinkedList<Element>.LinkedListIndex<Element>) -> LinkedList<Element>.LinkedListIndex<Element> can be shortened to public func index(before i: LinkedListIndex<Element>) -> LinkedListIndex<Element> This applies at several places in your code. Nested generics There are several nested generic types: fileprivate class LinkedListNode<T> private struct NodeChain<Element> public struct LinkedListIterator<T>: IteratorProtocol public struct LinkedListIndex<T>: Comparable All these types are only used with the generic placeholder equal to the Element type of LinkedList, i.e. private var headNode: LinkedListNode<Element>? private var tailNode: LinkedListNode<Element>? public typealias Iterator = LinkedListIterator<Element> public typealias Index = LinkedListIndex<Element> So these nested type do not need to be generic: They can simply use the Element type of LinkedList, i.e. fileprivate class LinkedListNode { public var value: Element public var next: LinkedListNode? public weak var previous: LinkedListNode? public init(value: Element) { self.value = value } } which is then used as private var headNode: LinkedListNode? private var tailNode: LinkedListNode? The same applies to the other nested generic types listed above. This allows to get rid of the distracting <T> placeholders and some type aliases. It becomes obvious that the same element type is used everywhere. Another simplification The while true { ... } loop in NodeChain.init is not nice for (at least) two reasons: A reader of the code has to scan the entire loop body in order to understand that (and when) the loop is eventually terminated. An artificial return nil is needed to make the code compile, but that statement is never reached. Both problems are solved if we use a while let loop instead: init?<S>(of sequence: S) where S: Sequence, S.Element == Element { // ... 
while let nextElement = iterator.next() { let nextNode = LinkedListNode(value: nextElement) currentNode.next = nextNode nextNode.previous = currentNode currentNode = nextNode nodeCount += 1 } tail = currentNode count = nodeCount } It also is not necessary to make the head and tail properties of NodeChain implicitly unwrapped optionals (and it does not make much sense for constant properties anyway). Simple non-optional constant properties will do: let head: Node<Element> let tail: Node<Element> Structure You have nicely structured the code by using separate extensions for the various protocol conformances. In that spirit, var first should be defined with the Collection properties, and var last should be defined with the BidirectionalCollection properties. To guard or not to guard (This paragraph is surely opinion-based.) The guard statement was introduced to get rid of the "if-let pyramid of doom": it allows unwrapping a variable without introducing another scope/indentation level. The guard statement can be useful with other boolean conditions as well, to emphasize that some condition has to be satisfied, or otherwise the computation cannot be continued. But I am not a fan of using guard for every "early return" situation, in particular not if it makes the statement look like a double negation. As an example,
A dedicated public var isEmpty: Bool { return count == 0 } property would be more efficient. A bug There seems to be a problem with the copy-on-write semantics: var l1 = LinkedList([1, 2, 3]) let l2 = l1 l1.removeFirst() l1.removeLast() makes the program abort with a “Fatal error: Unexpectedly found nil while unwrapping an Optional value.”
{ "domain": "codereview.stackexchange", "id": 35649, "tags": "performance, linked-list, swift, complexity, collections" }
How does the temperature of water affect the reactivity of alkali metals acting on it?
Question: There are a few experiments comparing the reactivity of alkali metals with cold water. Is there any purpose to use cold water? Does the temperature of water affect the reactivity of alkali metals? Answer: Increasing temperature does indeed increase the reactivity of alkali metals, or most things for that matter. The rate of reaction increases because the average kinetic energy of the molecules increases, and so more collisions can overcome the activation energy for the reaction to take place. The reason for using cold water rather than hot water for the reactions of alkali metals is that alkali metals are already reactive enough at room temperature. Even fairly small lumps of potassium will burn if you put them in cold water so imagine what it would be like with hot water. This is not something you want to try without very careful precautions.
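The temperature dependence described in the answer is captured by the Arrhenius equation $k = A e^{-E_a/RT}$. The numbers below are purely illustrative, not measured values for any alkali metal, but they show how strongly a modest temperature rise amplifies the rate:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius(A, Ea, T):
    # Rate constant k = A * exp(-Ea / (R*T))
    return A * math.exp(-Ea / (R * T))

# Hypothetical pre-exponential factor and activation energy (J/mol)
A, Ea = 1.0e10, 50_000.0

k_cold = arrhenius(A, Ea, 283.0)  # cold water, ~10 C
k_hot = arrhenius(A, Ea, 353.0)   # hot water, ~80 C

# A 70 K rise gives far more than a 10x rate increase for this Ea
assert k_hot / k_cold > 10
```

With these illustrative numbers the rate grows by roughly a factor of 60-70 between cold and hot water, which is why reactions that are already vigorous at room temperature are demonstrated with cold water.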
{ "domain": "chemistry.stackexchange", "id": 4934, "tags": "reactivity" }
How does an action potential from innervation selectively modify tropomyosin over time?
Question: Within a myofibril, the myofilaments move past one another to produce muscle contraction if and only if the actin binding sites are exposed to the myosin by locally removing the tropomyosin by binding it with calcium ions. What isn't quite clear to me however is: what does a pulse from a motor neuron do to the tropomyosin in order to excite this initial change to move the tropomyosin away and expose the actin? Then, how does the muscle cell and/or motor neuron sustain this state of contraction, and then finally decide to end this state of contraction and return the muscle cell to its relaxed state? Also, where exactly is the tropomyosin going after it binds with calcium ions? Answer: At the neuromuscular junction, motor neurons release acetylcholine which binds to nicotinic acetylcholine receptors on the muscle. These receptors are ligand-gated ion channels that are permeable to small cations. Opening these channels depolarizes the muscle cell membrane, which causes the opening of calcium channels, primarily on the sarcoplasmic reticulum (SR) for skeletal muscle. The key channel is called the ryanodine receptor and is either mechanically coupled to a voltage-gated calcium channel (skeletal muscle) or activated by calcium released from voltage-gated calcium channels (smooth & cardiac muscle), but in both cases the change in voltage is the initial signal. Calcium concentrations are very low in the muscle cell at rest, but high within the sarcoplasmic reticulum, so calcium rushes through and some of it binds with troponin (not tropomyosin itself, but they are associated/attached with one another), which releases the hold of tropomyosin on the actin filament. The calcium-bound tropomyosin doesn't go anywhere, but as calcium is quickly pumped into the SR or buffered up by calcium-binding proteins, troponin is no longer bound to calcium, allowing tropomyosin to block the myosin binding sites. 
The increase in calcium concentration is transient, and travels in a wave down the muscle. If there are no further action potentials, the muscle will relax. Constant contraction requires a constant train of action potentials, and the overall strength of the contraction can be regulated by the firing rate of the motor neuron. The Wikipedia pages for muscle contraction, tropomyosin, voltage-gated calcium channel, and ryanodine receptor can all be helpful. I'd also recommend a basic textbook on neuroscience, which typically covers neuron effects on muscles and some of the mechanics of muscle contraction. I typically recommend Purves, see https://neuroscience5e.sinauer.com/ or a library or anyplace you get books from (any edition is fine).
{ "domain": "biology.stackexchange", "id": 8280, "tags": "human-biology, muscles" }
Why do we use Boltzmann distribution rather than Fermi-Dirac distribution in a transistor?
Question: Considering electrons are fermions I would think it is better to use Fermi-Dirac distribution when discussing the physics of the electrons and holes in a transistor. However, I have been told Boltzmann distribution is what should be used. Can Boltzmann distribution apply to fermions? Answer: Short version: it's an approximation, but it's a good one that's accurate for normal transistors and makes a lot of math easier. In many cases, you're only interested in the "tail" of the distribution, in which case Fermi-Dirac and Maxwell-Boltzmann statistics produce approximately the same results. For Fermi-Dirac $$n_{FD} = \frac{1}{e^{\left(\epsilon - \mu\right)/k_BT} + 1},$$ and for Maxwell-Boltzmann $$n_{MB} = \frac{1}{e^{\left(\epsilon - \mu\right)/k_BT}}.$$ The two converge in the limit where $e^{\left(\epsilon - \mu\right)/k_BT}$ is large (equivalently, if $\left(\epsilon - \mu\right)/k_BT$ is medium sized). Put another way, they converge in the limit where $n$ is small (i.e. you're dealing with the tail of the distribution). This is generally the case in transistors. E.g., if you have a typical n-type semiconductor, that means there are electrons in the conduction band, but the number of electrons in the conduction band is small compared to the number of available states (even close to the band edge). In other words, $n$ is small, and Maxwell-Boltzmann statistics makes for a good approximation. This all falls apart if your conduction band has a lot of electrons in it (e.g. you've degenerately doped it), but that's not the case in common transistors. EDIT: if you want a visual of the "tails", see the figure here. The red curve is the Fermi-Dirac distribution, and it has a low value in the conduction band because $\mu$ falls in the bandgap for normal doping.
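The convergence of the two occupancies in the tail is easy to check numerically. The sketch below (plain Python; the dimensionless energies are chosen for illustration and are not from the original answer) compares the two distributions deep in the tail, $(\epsilon-\mu)/k_BT = 10$, and right at the Fermi level, $(\epsilon-\mu)/k_BT = 0$:

```python
import math

def n_fd(x):
    # Fermi-Dirac occupancy as a function of x = (epsilon - mu) / (k_B * T)
    return 1.0 / (math.exp(x) + 1.0)

def n_mb(x):
    # Maxwell-Boltzmann occupancy for the same reduced energy
    return math.exp(-x)

# Deep in the "tail" (epsilon well above mu, as in a lightly doped
# conduction band), the two agree to much better than a percent.
tail = 10.0
rel_err_tail = abs(n_fd(tail) - n_mb(tail)) / n_fd(tail)

# Right at the Fermi level, the approximation is off by a factor of two.
rel_err_near = abs(n_fd(0.0) - n_mb(0.0)) / n_fd(0.0)

print(rel_err_tail, rel_err_near)
```

Note that the relative error in the tail is itself of order $e^{-x}$, which is why the approximation works so well for non-degenerate doping.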
{ "domain": "physics.stackexchange", "id": 82332, "tags": "thermodynamics, statistical-mechanics, solid-state-physics, semiconductor-physics, electronics" }
Java Temperature Converter
Question: I have made a program that converts Celsius to Fahrenheit and Fahrenheit to Celsius. How is my code? import java.util.Scanner; public class TemperatureConversion { public static void main(String[] args) { double fahrenheit, celsius; Scanner input = new Scanner(System.in); // Celsius to Fahrenheit \u2103 degree Celsius symbol System.out.println("Input temperature (\u2103): "); celsius = input.nextDouble(); System.out.println(celsius * 1.8 + 32 + " \u2109 \n"); // Fahrenheit to Celsius \u2109 degree Fahrenheit symbol System.out.println("Input temperature (\u2109): "); fahrenheit = input.nextDouble(); System.out.print((fahrenheit - 32) / 1.8 + " \u2103"); } } Answer: It solves the problem, and that's good. Period. But you're presenting your code here to get advice on how to improve your programming skills, so here are a few remarks. Take them as a hint where to go from here, as soon as you feel ready for the "next level". Separate user interaction from computation Your main method contains both aspects in one piece of code, even within the same line (e.g. System.out.println(celsius * 1.8 + 32 + " \u2109 \n");). Make it a habit to separate tasks that can be named individually into methods of their own: input a Celsius value (a method double readCelsius()) input a Fahrenheit value (a method double readFahrenheit()) convert from Celsius to Fahrenheit (a method double toFahrenheit(double celsius)) convert from Fahrenheit to Celsius (a method double toCelsius(double fahrenheit)) output a Fahrenheit value (a method void printFahrenheit(double fahrenheit)) output a Celsius value (a method void printCelsius(double celsius)) With separate methods, it will be easier to later change your program to e.g. use a window system, or do bulk conversions of many values at once, etc. More flexible workflow Your current program always forces the user to do exactly one C-to-F and then one F-to-C conversion. This will rarely both be needed at the same time. 
I'd either ask the user at the beginning for the conversion he wants, or make it two different programs. By the way, doing so will be easier if the different tasks have been separated. Minor hints Combine variable declaration and value assignment. You wrote double fahrenheit, celsius; and later e.g. celsius = input.nextDouble();, I'd combine that to read double celsius = input.nextDouble();. This way, when reading your program (maybe half a year later), you immediately see at the same place that celsius is a double number coming from some input. I'd avoid the special characters like \u2109 and instead write two simple characters °F. Not all fonts on your computer will know how to show that \u2109 Fahrenheit symbol, so this might end up as some ugly-looking question mark inside a rectangle or similar. Using two separate characters does the job in a more robust way.
{ "domain": "codereview.stackexchange", "id": 38747, "tags": "java, unit-conversion" }
Production of sulphuric acid
Question: To my knowledge, sulphuric acid is produced by complex industrial processes like the contact process. The following structure shows a sulphate ion. I wonder why sulphuric acid cannot be produced through the protonation of sulphate ions, for example, by mixing HCl and Na2SO4. After mixing the salt and acid, concentrated sulphuric acid could then be produced by heating up the liquid. However, this method is not commonly used, or not even used. Why? Answer: Sulfuric acid is too strongly dissociating to be extracted from water solution in this way. With sulfuric acid you have complete dissociation for the first stage (in strongly acidic solution), forming solvated $\ce{H^+}$ and $\ce{HSO4^-}$. You will have these in combination with the sodium and chloride ions. With this solute composition, the boiling-down stage will likely produce sodium bisulfate ($\ce{NaHSO4}$) plus $\ce{NaCl}$ and fumes containing unreacted (or re-formed) $\ce{HCl}$.
{ "domain": "chemistry.stackexchange", "id": 17800, "tags": "acid-base, synthesis" }
Is the Lagrangian density a functional or a function?
Question: Weinberg at page 300 of The Quantum Theory of Fields - Volume I says: $L$ itself should be a space integral of an ordinary scalar function of $\Psi(x)$ and $\partial \Psi(x)/\partial x^\mu \,$, known as the Lagrangian density $\mathscr{L}$: $$ L[\Psi(t), \dot{\Psi}(t)]= \int d^3x \, \mathscr{L}\bigl(\Psi({\bf x},t), \nabla \Psi({\bf x},t), \dot{\Psi}({\bf x},t)\bigr) $$ So he says that $\mathscr{L} \, $ is a function. But Gelfand and Fomin, at page one of their book Calculus of Variations, say: By a functional we mean a correspondence which assigns a definite (real) number to each function (or curve) belonging to some class. So from that I'd say it is a functional. My professor's quantum field theory notes take this latter side, explicitly calling the Lagrangian density a functional. I'm very confused at the moment. I have always used this latter way of defining functionals (the Gelfand way), so Weinberg saying that $\mathscr{L}$ is a function confuses me. Can someone clarify this? Answer: The Lagrangian density is a function. Consider the following examples: $$ A[f]=\int_0^1\mathrm dx\ f(x) $$ and $$ B(f(x))=f(x) $$ It is clear that $A$ is a functional, because for example $$ A[\sin]=1-\cos 1\approx 0.46\in\mathbb R $$ is a number, while $B$ is a function, because $$ B(\sin)=\sin $$ is not a number, but a function. In your notation, $L$ is a functional, because given a certain field configuration, you get a number. But $\mathscr L$ is a function, because given a certain field configuration, you get another function, not a number. In some cases, such as QED in the Coulomb gauge, you may want to include non-local terms in the Lagrangian density, which makes it into a function of some of its arguments, and a functional of the others.
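The distinction can be made concrete in code. Below is a rough sketch (the midpoint-rule integrator and its step count are my own illustrative choices, not from the answer): $A$ maps a function to a number, while $B$ maps a function to another function.

```python
import math

def A(f, n=100_000):
    # A functional: integrate f over [0, 1] via the midpoint rule -> a number
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

def B(f):
    # A "function of a function" in the trivial sense used in the answer:
    # given a function, you get back a function, not a number
    return f

a_of_sin = A(math.sin)   # a real number, approximately 1 - cos(1)
b_of_sin = B(math.sin)   # still a function

print(a_of_sin, 1 - math.cos(1))
```

Evaluating `A(math.sin)` reproduces the answer's example value $1-\cos 1 \approx 0.46$, while `B(math.sin)` can still be called with an argument, exactly the distinction being drawn.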
{ "domain": "physics.stackexchange", "id": 55136, "tags": "lagrangian-formalism, field-theory, definition, variational-calculus" }
Confusion over Snell's law
Question: Does the angle of refraction always have $n_2$ as its refractive index? As far as I know, $$\frac{\sin\theta_i}{\sin\theta_r} = \frac{n_2}{n_1}.$$ So I got this question here: Calculate the critical angle for a flint glass-air ($n=1.58$) boundary. At the critical angle the angle of refraction is $90^\circ$, i.e. $\theta_r = 90^\circ$, but I am confused about which values I should substitute for $n_2$ and $n_1$. So does the refractive index of the medium of incidence always equal $n_1$? Answer: To avoid confusion, here is a different way of writing Snell's law: $$ n_1\times\sin(i) = n_2\times\sin(r) $$ Here, $n_1$ is the refractive index of the medium in which the incident ray exists, and $n_2$ is the refractive index of the medium in which the refracted ray exists. So for this question, as the light goes from the denser to the rarer medium, $n_1$ would be the refractive index of the flint glass and $n_2$ would be that of air.
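For the specific exercise, a quick numerical check (an illustrative script, not from the original post) with $n_1 = 1.58$ for the glass and $n_2 = 1.00$ for air gives a critical angle of about $39.3^\circ$:

```python
import math

n_glass = 1.58   # incident medium (denser): flint glass
n_air = 1.00     # refracting medium (rarer): air

# At the critical angle the refracted ray grazes the surface (theta_r = 90 deg),
# so n1 * sin(theta_c) = n2 * sin(90 deg) = n2.
theta_c = math.degrees(math.asin(n_air / n_glass))
print(round(theta_c, 1))
```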
{ "domain": "physics.stackexchange", "id": 53631, "tags": "homework-and-exercises, optics, waves, refraction" }
What constitutes one unit of time in runtime analysis?
Question: When calculating runtime dependence on the input, what calculations are considered? For instance, I think I learned that array indexing as well as assignment statements don't get counted; why is that? Answer: When doing complexity calculations we generally use the RAM model. In this model we assume that array indexing is O(1). An assignment statement is just like assigning some value to some variable in an array of variables. This is just for convenience; it simplifies analysis of the algorithm. In the real world, array indexing takes $\approx$ O(log I) time, where I is the number of things indexed. We generally count things that depend on the size of the input, e.g. loops. Even if there is an O(1) operation in the loop and it is executed n times, the algorithm runs in O(n) time. But an O(1) operation outside of the loop takes only constant time, and O(n) + O(1) = O(n). Read about the radix sort algorithm in CLRS.
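A toy sketch of the idea (my own illustration, not from the answer): if every RAM-model primitive inside the loop body is charged one unit of time, the total count grows linearly with the input size, so the constant per-iteration cost disappears into the O(n) bound.

```python
def count_ops(n):
    # Count "unit cost" RAM-model operations for a simple summation loop.
    # Indexing, addition, and assignment each cost one unit of time.
    a = list(range(n))
    ops = 0
    total = 0
    for i in range(n):
        total += a[i]   # one index + one add + one assignment
        ops += 3
    return ops

# The per-iteration cost is constant (3 units), so the count scales with n.
print(count_ops(10), count_ops(100))
```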
{ "domain": "cs.stackexchange", "id": 1592, "tags": "terminology, algorithm-analysis, runtime-analysis" }
How can cooling be achieved in an isolated system?
Question: When we talk about cooling, like from an air conditioner, we talk about heat transfer. Heat is transferred from a warmer body to an already cool body. But can cooling be achieved in an isolated substance without heat transfer? Answer: When we talk about cooling,like in air conditioner, we talk about heat transfer. Heat is transferred from warmer body to already cool body. But can cooling be achieved in an isolated substance without heat transfer? In physics, heat is the process of transfer of energy, it is not the same definition as we use in ordinary life. Once inside the box, we really should say amount of internal energy, not heat. If the substance (or system) has a certain amount of internal energy, and is perfectly insulated (or isolated) from the outside world, it will never lose or gain internal energy. But the key word here is perfectly, which is not achievable in practice because of the laws of thermodynamics. Eventually, internal energy will leave the substance because we cannot make perfect insulators, even in theory. The temperature of an isolated system could increase or decrease. Just the energy is conserved. Thanks to Jan Bos for pointing this out.
{ "domain": "physics.stackexchange", "id": 35116, "tags": "thermodynamics" }
Error while compiling rviz
Question: Previously I was using rviz from sudo apt-get install ros-groovy-rviz, and it encountered some errors when I tried to view laser data. So I decided to git clone it and compile version 1.9.35 of rviz. However, when I catkin_make rviz, it shows me the error below: Linking CXX executable /home/viki/catkin_ws/devel/lib/rviz/rviz [ 92%] Built target default_plugin Linking CXX executable /home/viki/catkin_ws/devel/lib/rviz/image_view /home/viki/catkin_ws/devel/lib/librviz.so: undefined reference to `image_transport::ImageTransport::~ImageTransport()' /home/viki/catkin_ws/devel/lib/librviz.so: undefined reference to `image_transport::ImageTransport::ImageTransport(ros::NodeHandle const&)' /home/viki/catkin_ws/devel/lib/librviz.so: undefined reference to `image_transport::ImageTransport::subscribe(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int, boost::function<void (boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&)> const&, boost::shared_ptr<void> const&, image_transport::TransportHints const&)' /home/viki/catkin_ws/devel/lib/librviz.so: undefined reference to `image_transport::Subscriber::shutdown()' collect2: ld returned 1 exit status make[2]: *** [/home/viki/catkin_ws/devel/lib/rviz/rviz] Error 1 make[1]: *** [rviz/src/rviz/CMakeFiles/executable.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... 
CMakeFiles/rviz_image_view.dir/image_view.cpp.o: In function `ImageView': /home/viki/catkin_ws/src/rviz/src/image_view/image_view.cpp:56: undefined reference to `image_transport::ImageTransport::ImageTransport(ros::NodeHandle const&)' /home/viki/catkin_ws/src/rviz/src/image_view/image_view.cpp:56: undefined reference to `image_transport::ImageTransport::~ImageTransport()' CMakeFiles/rviz_image_view.dir/image_view.cpp.o: In function `~ImageView': /home/viki/catkin_ws/src/rviz/src/image_view/image_view.cpp:62: undefined reference to `image_transport::ImageTransport::~ImageTransport()' /home/viki/catkin_ws/src/rviz/src/image_view/image_view.cpp:62: undefined reference to `image_transport::ImageTransport::~ImageTransport()' CMakeFiles/rviz_image_view.dir/image_view.cpp.o: In function `image_transport::SubscriberFilter::subscribe(image_transport::ImageTransport&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int, image_transport::TransportHints const&)': /opt/ros/groovy/include/image_transport/subscriber_filter.h:111: undefined reference to `image_transport::ImageTransport::subscribe(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int, boost::function<void (boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&)> const&, boost::shared_ptr<void> const&, image_transport::TransportHints const&)' CMakeFiles/rviz_image_view.dir/image_view.cpp.o: In function `image_transport::SubscriberFilter::unsubscribe()': /opt/ros/groovy/include/image_transport/subscriber_filter.h:119: undefined reference to `image_transport::Subscriber::shutdown()' collect2: ld returned 1 exit status make[2]: *** [/home/viki/catkin_ws/devel/lib/rviz/image_view] Error 1 make[1]: *** [rviz/src/image_view/CMakeFiles/rviz_image_view.dir/all] Error 2 make: *** [all] Error 2 Invoking "make" failed At first I thought it was something to do with the image_transport package and so after I do source 
/opt/ros/groovy/setup.bash, I can rosrun image_transport, but the error still remains the same. Does anyone know how to solve it? Thank you. Originally posted by zero1985 on ROS Answers with karma: 100 on 2014-02-26 Post score: 1 Answer: You can get the source code from git and check out branch 1.9.35. It's a patch to resolve this issue. Read more here. cd ~/catkin_ws/src git clone https://github.com/ros-visualization/rviz.git cd rviz git checkout 1.9.35 cd ../.. catkin_make EDIT: If this doesn't work, it's likely that something went wrong with your ROS installation. You might want to try to reinstall ROS! Originally posted by Andrew.A with karma: 324 on 2014-02-27 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 17105, "tags": "rviz, ros-groovy" }
Wrapper functions for terminal capabilities
Question: I've prepared a small C interface so the rest of my program can access relevant system functions. I am not a C programmer, though - just C++ and D, mainly. I want to make sure that everything here is "C-ish" before committing the rest of my code to this interface. This is just a plain old C file, compiled with a C compiler (as opposed to C++). /* * This file provides functions to interface with * low-level terminal properties such as size and * input modes. Functions meant to be accessible * to external "user" code begin with "Terminal." */ #include <sys/ioctl.h> // For interfacing with the terminal device. #include <termios.h> // Ditto. #include <unistd.h> // For the STDIN_FILENO file descriptor macro. // This is just used internally, and should not be called directly from D code. struct winsize Size() { struct winsize Size; ioctl(STDIN_FILENO, TIOCGWINSZ, &Size); return Size; } unsigned short int TerminalColumnCount() { return Size().ws_col; } unsigned short int TerminalRowCount() { return Size().ws_row; } // Used to reset the terminal properties to their original state. // Again, this is only for internal use. struct termios BackupProperties; // Removes terminal input buffering so programs can see typed characters before Enter is pressed. void TerminalMakeRaw() // I wish I could come up with a better name for this. { tcgetattr(STDIN_FILENO, &BackupProperties); struct termios NewProperties; cfmakeraw(&NewProperties); tcsetattr(STDIN_FILENO, TCSANOW, &NewProperties); } void TerminalReset() { tcsetattr(STDIN_FILENO, TCSANOW, &BackupProperties); } Answer: A few quick comments: You should declare all internal functions and variables static to avoid polluting the global namespace. (You'd do this in a C++ program as well, but would probably use an unnamed namespace. I'm not sure what the D equivalent is). 
Since you have state (implicitly, and explicitly via BackupProperties), I'd recommend using an object-oriented approach exposing an opaque pointer to maintain it; even if there is only one Terminal instance, it's (IMO) a better approach. Otherwise you should at least ensure that TerminalMakeRaw isn't called twice without corresponding calls to TerminalReset. You should also consider that at some point you'll probably want to have callbacks of some sort to notify the users of your library when/if the window size changes, and so on.
{ "domain": "codereview.stackexchange", "id": 774, "tags": "c, console, unix" }
Point Pattern Recognition
Question: Having two different-size sets of points (2D for simplicity) dispersed within two different-size squares, the questions are: 1- how to find any occurrence of the small one within the large one? 2- Any idea on how to rank the occurrences as shown in the following figure? Here is a simple demonstration of the question and a desired solution: Update 1: The following figure shows a bit more realistic view of the problem being investigated. Regarding the comments, the following properties apply: exact locations of points are available; exact sizes of points are available; size can be zero (~1) = only a point; all points are black on a white background; there is no gray-scale/anti-aliasing effect. Here is my implementation of the method presented by endolith, with some small changes (I rotated the target instead of the source, since it is smaller and faster to rotate). I accepted endolith's answer because I was thinking about that before. About RANSAC I have no experience so far. Furthermore, an implementation of RANSAC requires lots of code. Answer: This is not the best solution, but it's a solution. I'd like to learn of better techniques: If they were not going to be rotated or scaled, you could use a simple cross-correlation of the images. There will be a bright peak wherever the small image occurs in the large image. Registering an Image Using Normalized Cross-Correlation Template-based matching and convolution You can speed up cross-correlation by using an FFT method, but if you're just matching a small source image with a large target image, the brute-force multiply-and-add method is sometimes (not usually) faster. Source: Target: Cross-correlation: The two bright spots are the locations that match. But you do have a rotation parameter in your example image, so that won't work by itself. 
If only rotation is allowed, and not scaling, then it is still possible to use cross-correlation, but you need to cross-correlate, rotate the source, cross-correlate it with the entire target image, rotate it again, etc. for all rotations. Note that this will not necessarily ever find the image. If the source image is random noise, and the target is random noise, you won't find it unless you search at exactly the right angle. For normal situations, it will probably find it, but it depends on the image properties and the angles you search in. This page shows an example of how it would be done, but doesn't give the algorithm. Any offset where the sum is above some threshold is a match. You can calculate the goodness of the match by correlating the source image with itself and dividing all your sums by this number. A perfect match will be 1.0. This will be very computationally heavy, though, and there are probably better methods for matching patterns of dots (which I would like to know about). Quick Python example using grayscale and FFT method: from __future__ import division from pylab import * import Image import ImageOps source_file = 'dots source.png' target_file = 'dots target.png' # Load file as grayscale with white dots target = asarray(ImageOps.invert(Image.open(target_file).convert('L'))) close('all') figure() imshow(target) gray() show() source_Image = ImageOps.invert(Image.open(source_file).convert('L')) for angle in (0, 180): source = asarray(source_Image.rotate(angle, expand = True)) best_match = max(fftconvolve(source[::-1,::-1], source).flat) # Cross-correlation using FFT d = fftconvolve(source[::-1,::-1], target, mode='same') figure() imshow(source) # This only finds a single peak. 
Use something that finds multiple peaks instead: peak_x, peak_y = unravel_index(argmax(d),shape(d)) figure() plot(peak_y, peak_x,'ro') imshow(d) # Keep track of all these matches: print angle, peak_x, peak_y, d[peak_x,peak_y] / best_match 1-color bitmaps For 1-color bitmaps, this would be much faster, though. Cross-correlation becomes: Place source image over target image bitwise-AND all overlapping pixels (much faster than multiplication) sum all the 1s in the overlapped area Move source image by 1 pixel bitwise-AND all overlapping pixels sum all the 1s ... Thresholding a grayscale image to binary and then doing this might be good enough. Point cloud If the source and target are both patterns of dots, a faster method would be to find the centers of each dot (cross-correlate once with a known dot and then find the peaks) and store them as a set of points, then match source to target by rotating, translating, and finding the least squares error between nearest points in the two sets.
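The point-cloud approach in the last paragraph can be sketched roughly as follows (an illustrative brute-force version of my own, with a coarse grid of candidate rotations and translations; a real implementation would use a smarter search and a k-d tree for the nearest-point lookups):

```python
import math

def transform(points, angle, dx, dy):
    # Rotate a point set about the origin, then translate it
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in points]

def match_error(source, target):
    # Sum of squared distances from each source point to its nearest target point
    return sum(min((sx - tx) ** 2 + (sy - ty) ** 2 for tx, ty in target)
               for sx, sy in source)

def brute_force_match(source, target, angles, shifts):
    # Exhaustively try (angle, dx, dy) candidates; keep the best-scoring pose
    return min(((match_error(transform(source, a, dx, dy), target), a, dx, dy)
                for a in angles for dx in shifts for dy in shifts),
               key=lambda t: t[0])

source = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
# Target contains the source rotated 90 degrees, shifted by (5, 3), plus clutter
target = transform(source, math.pi / 2, 5.0, 3.0) + [(9.0, 9.0)]

angles = [i * math.pi / 2 for i in range(4)]
shifts = [float(v) for v in range(11)]
err, a, dx, dy = brute_force_match(source, target, angles, shifts)
print(err, a, dx, dy)
```

The recovered pose is the one with (near-)zero residual error; the residual itself serves as the "goodness of match" ranking asked about in the question.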
{ "domain": "dsp.stackexchange", "id": 6335, "tags": "image-processing, computer-vision, image-registration" }
Expression for discrete fourier transform of linear ramp
Question: I am trying to compute a single coefficient of the DFT of a linearly ramping sequence, $x[n]=an$, where $a$ is a constant that changes from sequence to sequence. I have looked at loads of DFT property and transform-pair tables, but this is a simple case I have not been able to find an analytical expression for. I don't want to have to compute the entire DFT of the sequence just to extract a single coefficient. My first approach was to notice that multiplication by $n$ in the time domain is like differentiation in the $k$-domain. However, since the DFT of $a$ is an impulse function at $k=0$, I haven't found a way to come up with an analytical expression for the derivative. In my application, the number of samples in the sequence, $N$, and the coefficient of interest, $k$, will frequently change. It's just such a simple case that, given $a$, there must be a way for me to predict the value of the DFT of the sequence at a particular value of $k$ without actually doing the transform. Is the only way to do this to numerically create a look-up table of all possible $N$ and $k$ of interest, and then scale by $a$? Answer: There is indeed a better way; you can derive the analytical expression for the DFT of a ramp. 
Let's start with the discrete-time Fourier transform (DTFT) of a finite length sequence (i.e., the sequence is zero outside the interval $[0,N-1]$): $$X(e^{j\omega})=\sum_{n=0}^{N-1}x[n]e^{-jn\omega}\tag{1}$$ We also need the DTFT correspondence $$nx[n]\Longleftrightarrow j\frac{dX(e^{j\omega})}{d\omega}\tag{2}$$ The DTFT of the constant signal $x[n]=a$, $n\in [0,N-1]$ (and zero otherwise) is $$X(e^{j\omega})=a\sum_{n=0}^{N-1}e^{-jn\omega}=\begin{cases}aN,&\quad\omega=0\\\displaystyle a\frac{1-e^{-jN\omega}}{1-e^{-j\omega}},&\quad\text{otherwise}\end{cases}\tag{3}$$ From $(2)$ and $(3)$ we get $$\begin{align}a\sum_{n=0}^{N-1}ne^{-jn\omega}&=j\frac{dX(e^{j\omega})}{d\omega}\\&=-a\frac{Ne^{-jN\omega} (1-e^{-j\omega})-(1-e^{-jN\omega})e^{-j\omega}}{(1-e^{-j\omega})^2},\quad\omega\neq 0\end{align}\tag{4}$$ which is the DTFT of the given sequence. For $\omega=0$ we simply have $$a\sum_{n=0}^{N-1}n=a\frac{N(N-1)}{2}\tag{5}$$ Now we only need to remember that for finite length sequences, the DFT is a sampled version of the DTFT with $\omega_k=2\pi k/N$. From $(4)$ and $(5)$ we obtain $$\text{DFT}_N\{an\}[k]=\begin{cases}\displaystyle a\frac{N(N-1)}{2},&\quad k=0\\-a\displaystyle\frac{N}{1-e^{-j2\pi k/N}},&\quad k\in[1,N-1]\end{cases}\tag{6}$$
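Formula $(6)$ is easy to sanity-check against a direct DFT. The short script below (my own verification, not part of the original answer) compares the closed form with a brute-force transform for arbitrary illustrative values of $a$ and $N$:

```python
import cmath

def ramp_dft_closed_form(a, N):
    # Closed-form DFT of x[n] = a*n, n = 0..N-1, per equation (6)
    X = [complex(a * N * (N - 1) / 2)]          # k = 0 term
    for k in range(1, N):
        X.append(-a * N / (1 - cmath.exp(-2j * cmath.pi * k / N)))
    return X

def dft(x):
    # Direct O(N^2) DFT, just to check the closed form against
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

a, N = 1.7, 16
reference = dft([a * n for n in range(N)])
closed = ramp_dft_closed_form(a, N)
max_err = max(abs(r - c) for r, c in zip(reference, closed))
print(max_err)
```

This is exactly what the question asked for: any single coefficient $X[k]$ can be evaluated in O(1) from $a$, $N$, and $k$, with no look-up table needed.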
{ "domain": "dsp.stackexchange", "id": 4257, "tags": "dft" }
Is photoelectric effect a surface phenomenon?
Question: I got this question on a test, and the answer key states that the answer is 'Yes'. According to what I understand, electrons are emitted with different kinetic energies based upon their depth from the metal surface, i.e. an electron would come out with less kinetic energy if it were situated deeper, as it would have to go through more collisions. This reasoning contradicts the given answer. I would like to know if my reasoning is correct. Answer: It is somewhat a matter of what precisely one would refer to as the photoelectric effect. As far as the radiation-electron mechanism of energy transfer is concerned, there is no direct role played by the surface. However, referring to Einstein's formula, $$ h f = \Phi + K, $$ where $K$ is the maximum kinetic energy of the photoelectron, $f$ the frequency of the incoming radiation, and $\Phi$ the work function of the metal, it is true that the latter term depends on the surface and its detailed structure, presence of impurities, and so on. In this sense, there is a clear surface effect.
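Einstein's formula is easy to evaluate numerically. The snippet below uses illustrative numbers only (a 250 nm photon and a 4.0 eV work function, neither taken from the question); note that $K$ is the maximum kinetic energy, which is consistent with the questioner's point that electrons from deeper in the metal emerge with less.

```python
# Illustrative values, not from the original question:
H = 6.626e-34        # Planck constant, J s
C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per electronvolt

wavelength = 250e-9              # photon wavelength, m (assumed)
work_function_ev = 4.0           # surface work function Phi, eV (assumed)

photon_energy_ev = H * C / wavelength / EV       # hf in eV
k_max_ev = photon_energy_ev - work_function_ev   # Einstein: K = hf - Phi
print(round(photon_energy_ev, 2), round(k_max_ev, 2))
```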
{ "domain": "physics.stackexchange", "id": 55798, "tags": "photoelectric-effect" }
Taking change - convergent iteration through iterate
Question: For the Code Review community challenge "Implement StackSTV", the algorithm requires a "convergent iterative scheme" to calculate the keep ratios of candidates that have been elected. To simplify the calculation of those keep ratios I have written the following function to truncate an infinite list that calculates the keep ratio scheme's next step. I wonder whether I missed some basic library function and whether the function is idiomatic Haskell: {- Take items from a list, as long as the last item taken is not the same as the next item in the list. Passing an empty list returns an empty list. -} takeModified :: Eq a => [a] -> [a] takeModified (x:xs) = [x] ++ go x xs where go :: Eq a => a -> [a] -> [a] go elem (x':xs') | elem == x' = [] | otherwise = [x'] ++ go x' xs' go elem [] = [] takeModified [] = [] Answer: Typically, where clauses don't include full type annotations, though this is mostly a matter of personal preference. For prepending items to a list, prefer x:xs over [x] ++ xs. In your "base case" of the recursive go function: go elem [] = [], elem is not used. Prefer _ in this case. Your comment exceeds 80 characters in width. Again, a matter of personal preference. The note that passing an empty list returns an empty list is probably not necessary, as this is very clearly the case from the code itself. Here is how I would write this function: takeWhileNotDup :: Eq a => [a] -> [a] takeWhileNotDup [] = [] takeWhileNotDup (x:xs) = x:go x xs where go _ [] = [] go previous (x:xs) | previous /= x = x:go x xs | otherwise = []
{ "domain": "codereview.stackexchange", "id": 24084, "tags": "haskell, community-challenge" }
Caesar cipher/beginnings of a crypto library in Rust
Question: As a hobby project (and to learn the language), I'm working on a crypto library in Rust. The following is a component thereof, which implements encryption with and the cracking of the Caesar cipher. My primary interest is whether the code follows best practices, with a secondary concern regarding its efficiency. I'm not very used to Rust idioms, and there are a few places where I feel that the code is clunky, but can't see a better solution. Note that: this being a hobby project, I aim to write most everything myself and avoid external libraries; as this is part of a larger (as yet unwritten) framework, I have generalised more than necessary for this component alone. If it seems over-engineered, that would be because it has been designed for easy implementation of other [poly]alphabetic ciphers, like a more general substitution cipher or the Vigenère cipher. lib.rs #![feature(const_option)] pub mod alphabetic; pub mod caesar; pub mod frequency; alphabetic.rs ///Provides various types for dealing with alphabetic ciphers, like ///the Caesar and Vigenère ciphers. use std::{ cmp::{self, Ordering}, fmt::{self, Display}, ops::{Add, Sub}, }; pub const ALPHABET_SIZE: usize = 26; #[derive(Copy, Clone, Debug)] pub enum Case { Upper, Lower, } #[derive(Copy, Clone, Debug)] ///Contains a `value` (zero-indexed and guaranteed to be in the range 0-25) and /// a `Case` specifying the case. The case is only used for display purposes, /// and when cases are mixed in arithmetic operations, the output favours ///`Case::Upper` (so if one letter is uppercase, the result will also be /// uppercase); this ensures that addition is at least commutative. pub struct Letter { pub value: u8, pub case: Case, } impl Letter { ///Constructs a new `Letter` from its `u8` and `Case` components. 
pub const fn new(value: u8, case: Case) -> Letter { Letter { value: (value) % ALPHABET_SIZE as u8, case, } } ///Constructs a new `Letter`, accepting a negative alphabetical index ///(which will wrap around). pub fn from_signed(value: i8, case: Case) -> Letter { Letter { //rem_euclid returns nonnegative integers so this cast is safe value: value.rem_euclid(ALPHABET_SIZE as i8) as u8, case, } } ///Returns the `Letter` corresponding to a given `char`, or `None` if ///the `char` is out of the alphabetical ASCII range. pub const fn try_from_char(char_value: char) -> Option<Letter> { let (start, case) = match char_value { 'A'..='Z' => ('A' as u32, Case::Upper), 'a'..='z' => ('a' as u32, Case::Lower), _ => return None, }; Some(Letter::new((char_value as u32 - start) as u8, case)) } ///Returns the `Letter` corresponding to a given byte, or `None` if the ///byte is out of the alphabetical ASCII range. pub fn try_from_utf8(byte_value: u8) -> Option<Letter> { let (start, case) = match byte_value { 0x41..=0x5A => (0x41, Case::Upper), 0x61..=0x7A => (0x61, Case::Lower), _ => return None, }; Some(Letter::new(byte_value - start, case)) } ///Returns the `char` corresponding to the alphabetical character specified ///by `self.value` and `self.case`. Returns `None` if this fails, but that ///could only happen if `self.value` where out of its correct range (0-25). ///Typically we can be sure that is not the case, so we can `unwrap` the /// result, or use the `unsafe` `to_char_unchecked` if speed is /// required. pub fn try_to_char(&self) -> Option<char> { char::from_u32(match self.case { Case::Upper => self.value as u32 + 'A' as u32, Case::Lower => self.value as u32 + 'a' as u32, }) } ///# Safety ///This should be safe in all circumstances, since `from_u32_unchecked` ///could only fail if the value is out of ASCII range. So long as our ///range guarantee on `self.value` (0-25) is met, the corresponding `u32` ///will be in range. 
pub unsafe fn to_char_unchecked(&self) -> char { char::from_u32_unchecked(match self.case { Case::Upper => self.value as u32 + 'A' as u32, Case::Lower => self.value as u32 + 'a' as u32, }) } ///Returns the (ASCII) byte corresponding to the character specified by ///`self.value` and `self.case`. pub fn to_utf8(&self) -> u8 { match self.case { Case::Upper => self.value + 0x41, Case::Lower => self.value + 0x61, } } ///Returns `Case::Lower` iff both inputs are lowercase, and `Case::Upper` ///otherwise. Used internally to decide which case to give the result of an /// arithmetic operation on two `Letter`s. fn combined_case(case1: Case, case2: Case) -> Case { match (case1, case2) { (Case::Lower, Case::Lower) => Case::Lower, _ => Case::Upper, } } } ///`Letters` add their values, wrapping around the alphabet if necessary. impl Add<Letter> for Letter { type Output = Self; fn add(self, rhs: Self) -> Self { Self::new( self.value + rhs.value, Letter::combined_case(self.case, rhs.case), ) } } impl Add<i8> for Letter { type Output = Self; fn add(self, rhs: i8) -> Self { Self::from_signed(self.value as i8 + rhs, self.case) } } impl Sub<Letter> for Letter { type Output = Self; fn sub(self, rhs: Self) -> Self { Self::from_signed( self.value as i8 - rhs.value as i8, Letter::combined_case(self.case, rhs.case), ) } } impl Sub<i8> for Letter { type Output = Self; fn sub(self, rhs: i8) -> Self { Self::from_signed(self.value as i8 - rhs, self.case) } } impl PartialEq for Letter { fn eq(&self, other: &Self) -> bool { self.value == other.value } } impl Eq for Letter {} impl Ord for Letter { fn cmp(&self, other: &Self) -> Ordering { self.value.cmp(&other.value) } } impl PartialOrd for Letter { fn partial_cmp(&self, other: &Self) -> Option<Ordering> { Some(self.cmp(other)) } } impl Display for Letter { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "{}", self.try_to_char().ok_or(fmt::Error)?) } } #[derive(Clone, Debug, PartialEq)] ///Newtype over a `Vec<Letter>`. 
Allows conversion to and from a UTF-8 string, ///and allows elementwise addition and other operations. pub struct Alphabetic(Vec<Letter>); impl Alphabetic { pub fn new(vec: Vec<Letter>) -> Alphabetic { Alphabetic(vec) } ///Returns an `Alphabetic` from the given string, but returns `None` if the /// string contains any non-alphabetic characters. pub fn try_from_str(string: &str) -> Option<Alphabetic> { let letters: Option<Vec<Letter>> = string.bytes().map(Letter::try_from_utf8).collect(); Some(Alphabetic::new(letters?)) } ///Returns an `Alphabetic` from the given string. Unlike `try_from_str`, /// this function will simply ignore any characters which it cannot /// convert, rather than returning `None`. pub fn from_str_filtered(string: &str) -> Alphabetic { Alphabetic::new( string .bytes() .filter(|x| x.is_ascii_alphabetic()) .map(|x| Letter::try_from_utf8(x).unwrap()) .collect(), ) } ///Applies `func` to each pair of values from `self` and `other`, /// collecting the results into a new `Alphabetic`. If it reaches the /// end of one of the strings, it will simply dump the remainder of the /// longer string into the output; therefore, the output is always the /// same length as the longer of the two strings. 
pub fn pairwise_map( &self, other: &Self, func: Box<dyn Fn(Letter, Letter) -> Letter>, ) -> Alphabetic { let mut buf: Vec<Letter> = Vec::with_capacity(cmp::max(self.len(), other.len())); let mut self_iter = self.iter(); let mut other_iter = other.iter(); loop { match (self_iter.next(), other_iter.next()) { (Some(sval), Some(oval)) => buf.push(func(*sval, *oval)), (Some(sval), None) => { buf.push(*sval); buf.extend(self_iter); break; } (None, Some(oval)) => { buf.push(*oval); buf.extend(other_iter); break; } (None, None) => break, } } Alphabetic::new(buf) } //exposed functions from the inner vec pub fn len(&self) -> usize { self.0.len() } pub fn is_empty(&self) -> bool { self.0.is_empty() } pub fn iter(&self) -> std::slice::Iter<Letter> { self.0.iter() } } impl Display for Alphabetic { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "{}", { let chars = String::from_utf8(self.iter().map(Letter::to_utf8).collect()); chars.map_err(|_e| fmt::Error)? }) } } //copy-paste boilerplate land ///Addition and subtraction on `Alphabetic`s works by adding/subtracting each /// pair of `Letter`s, until we reach the end of either string, at which point /// the remainder of the longer string will be appended. 
impl Add<Alphabetic> for Alphabetic { type Output = Self; fn add(self, other: Self) -> Self { self.pairwise_map(&other, Box::new(|x, y| x + y)) } } impl Add<Letter> for Alphabetic { type Output = Self; fn add(self, other: Letter) -> Self { Alphabetic::new(self.iter().map(|x| *x + other).collect()) } } impl Add<i8> for Alphabetic { type Output = Self; fn add(self, other: i8) -> Self { Alphabetic::new(self.iter().map(|x| *x + other).collect()) } } impl Sub<Alphabetic> for Alphabetic { type Output = Self; fn sub(self, other: Self) -> Self { self.pairwise_map(&other, Box::new(|x, y| x - y)) } } impl Sub<Letter> for Alphabetic { type Output = Self; fn sub(self, other: Letter) -> Self { Alphabetic::new(self.iter().map(|x| *x - other).collect()) } } impl Sub<i8> for Alphabetic { type Output = Self; fn sub(self, other: i8) -> Self { Alphabetic::new(self.iter().map(|x| *x - other).collect()) } } #[cfg(test)] mod tests { use super::*; #[test] fn test_char_display() { let mut letter = Letter::try_from_char('U').unwrap(); //display property works properly assert_eq!(format!("{}", letter), "U"); letter = Letter::new(6, Case::Lower); //initialising letters from their values works properly assert_eq!(format!("{}", letter), "g"); } #[test] fn test_char_operators() { let letter1 = Letter::new(8, Case::Lower); let letter2 = Letter::from_signed(-16, Case::Lower); assert_eq!(format!("{}", letter1 + letter2), "s"); assert_eq!(format!("{}", letter1 - letter2), "y"); assert!(letter2 > letter1); assert!(letter2 != letter1); } #[test] fn test_alphabetic_display() { let alpha = Alphabetic::try_from_str("hello").unwrap(); //display property works properly assert_eq!(format!("{}", alpha), "hello"); let beta = Alphabetic::try_from_str("helloり"); //try_from_str correctly does not accept non-alphabetic chars assert_eq!(beta, None); } #[test] fn test_alphabetic_addition() { let alpha = Alphabetic::try_from_str("hello").unwrap(); let beta = Alphabetic::try_from_str("bbbbb").unwrap(); 
assert_eq!((alpha + beta).to_string(), "ifmmp"); let long = Alphabetic::try_from_str("longabcdefgh").unwrap(); let short = Alphabetic::try_from_str("short").unwrap(); //strings of different lengths can be added, commutatively assert_eq!((long.clone() + short.clone()).to_string(), "dvbxtbcdefgh"); assert_eq!((short.clone() + long.clone()).to_string(), "dvbxtbcdefgh"); //letter can be added to alphabetic assert_eq!( (Alphabetic::try_from_str("bbbbbb").unwrap() + Letter::try_from_char('b').unwrap()) .to_string(), "cccccc" ); } #[test] fn test_empty_values() { let a = Alphabetic::try_from_str("").unwrap(); let b = Alphabetic::from_str_filtered(""); let c = Alphabetic::new(vec![]); assert_eq!(a.to_string(), ""); assert_eq!(b.to_string(), ""); assert_eq!(c.to_string(), ""); } } frequency.rs ///Provides utilities relating to frequency analysis, to be used by cipher /// modules. use crate::alphabetic::{self, Alphabetic, Case, Letter}; //https://en.wikipedia.org/wiki/Letter_frequency //relative frequencies of letters in the english language, in alphabet order pub const ENGLISH_FREQUENCIES: [f64; alphabetic::ALPHABET_SIZE] = [ 0.08167, 0.01492, 0.02782, 0.04253, 0.12702, 0.02228, 0.02015, 0.06094, 0.06966, 0.00153, 0.00772, 0.04025, 0.02406, 0.06749, 0.07507, 0.01929, 0.00095, 0.05987, 0.06327, 0.09056, 0.02758, 0.00978, 0.02360, 0.00150, 0.01974, 0.00074, ]; ///Returns a `Vec` of relative frequencies of each `Letter` in the string. pub fn frequencies(string: &Alphabetic) -> Vec<f64> { let mut counts = vec![0usize; alphabetic::ALPHABET_SIZE]; string.iter().for_each(|x| { counts[x.value as usize] += 1; }); let sum = counts.iter().sum::<usize>(); if sum == 0 { return vec![0f64; alphabetic::ALPHABET_SIZE]; } counts.into_iter().map(|x| x as f64 / sum as f64).collect() } ///Returns the `Letter` which occurs most frequently in the given string. ///# Panics ///There are 2 `unwrap`s in this function. 
One of them /// unwraps the result of a `std::slice::Iter::max_by()`, /// which yields `None` only when the iter is empty. Since we /// use the constant iter `(0..26)`, this is not a concern. /// The other unwrap is on the result of an /// `f64::partial_cmp()`, which would fail if one of the /// `f64`s is an unusual value, like `NaN`. We obtain these /// values from division and check for division by zero, so /// there *should* be no way for this panic to occur. pub fn most_frequent_letter(string: &Alphabetic) -> Letter { let freqs = frequencies(string); Letter::new( (0..26) .max_by(|x, y| freqs[*x].partial_cmp(&freqs[*y]).unwrap()) .unwrap() as u8, Case::Lower, ) } ///Returns a `Vec` containing the `Letters` representing `a` through `z`, in /// order from most frequent to least frequent in the given string. pub fn letters_by_frequency(string: &Alphabetic) -> Vec<Letter> { let freqs = frequencies(string); let mut indices: Vec<usize> = (0..26).collect(); indices.sort_by(|x, y| freqs[*y].partial_cmp(&freqs[*x]).unwrap()); indices .into_iter() .map(|x| Letter::new(x as u8, Case::Lower)) .collect() } ///Returns the mean squared difference between two `f64` slices. The mean /// squared difference is only defined on two slices of the same length (if the /// slices are not the same length, this function returns `None`). It is defined /// as the sum of the squares of the differences between each pair of items (one /// from each slice), divided by the length of the slices.
pub fn mean_squared_difference(freqs1: &[f64], freqs2: &[f64]) -> Option<f64> { let length = match (freqs1.len(), freqs2.len()) { (0, _) | (_, 0) => { return None; } (x, y) if x == y => x, _ => return None, }; let mut sum = 0f64; for i in 0..length { sum += (freqs1[i] - freqs2[i]).powi(2) } Some(sum) } #[cfg(test)] mod tests { use super::*; #[test] fn test_frequency() { let string = Alphabetic::try_from_str("hello").unwrap(); assert_eq!( most_frequent_letter(&string), Letter::try_from_char('l').unwrap() ); let freqs: Vec<f64> = vec![ 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.4, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ]; assert_eq!(frequencies(&string), freqs); let letters = letters_by_frequency(&string); assert_eq!(letters[0], Letter::try_from_char('l').unwrap()); assert_eq!(letters[1], Letter::try_from_char('e').unwrap()); assert_eq!(letters[2], Letter::try_from_char('h').unwrap()); assert_eq!(letters[3], Letter::try_from_char('o').unwrap()); } } caesar.rs ///Provides functions for encrypting and cracking Caesar ciphers. use crate::{ alphabetic::{self, Alphabetic, Letter}, frequency::{frequencies, mean_squared_difference, most_frequent_letter, ENGLISH_FREQUENCIES}, }; ///Shift the given string by the given number of places. pub fn shift(string: &Alphabetic, num_places: i8) -> Alphabetic { string.clone() + num_places } ///Iterator over all of the possible shifts of a given string. 
pub struct GenShifts(Alphabetic, i8); impl GenShifts { fn new(string: Alphabetic) -> GenShifts { GenShifts(string, 0) } } impl Iterator for GenShifts { type Item = Alphabetic; fn next(&mut self) -> Option<Self::Item> { if self.1 == alphabetic::ALPHABET_SIZE as i8 { return None; } self.1 += 1; Some(shift(&self.0, self.1 - 1)) } } pub fn gen_shifts(string: Alphabetic) -> GenShifts { GenShifts::new(string) } ///Crack an encrypted string by shifting the string such /// that the most common character therein is mapped to 'e' /// (the most common letter in English) pub fn crack_match_maxima(string: &Alphabetic) -> Alphabetic { const MOST_FREQUENT_ENGLISH: Letter = Letter::try_from_char('e').unwrap(); string.clone() - most_frequent_letter(&string) + MOST_FREQUENT_ENGLISH } ///Crack an encrypted string by choosing the shift which /// minimises the MSE (mean squared error) between the /// observed letter frequencies and the expected letter /// frequencies (based on a large sample of English text) pub fn crack_minimise_mse(string: &Alphabetic) -> Alphabetic { gen_shifts(string.clone()) .min_by(|x, y| { let mse = |x| mean_squared_difference(&frequencies(x), &ENGLISH_FREQUENCIES).unwrap(); mse(x).partial_cmp(&mse(y)).unwrap() }) .unwrap() } #[cfg(test)] mod tests { use super::*; #[test] fn test_shift() { assert_eq!( shift(&Alphabetic::try_from_str("abcdef").unwrap(), 1).to_string(), "bcdefg" ); let mut shifts = gen_shifts(Alphabetic::try_from_str("abcdef").unwrap()); assert_eq!(shifts.next().unwrap().to_string(), "abcdef"); assert_eq!(shifts.next().unwrap().to_string(), "bcdefg"); assert_eq!(shifts.next().unwrap().to_string(), "cdefgh"); assert_eq!(shifts.next().unwrap().to_string(), "defghi"); } #[test] fn test_crack() { let original = Alphabetic::from_str_filtered( "If it were done when 'tis done, then 'twere well It were done quickly: if the assassination Could trammel up the consequence, and catch With his surcease success; that but this blow Might be the be-all and the 
end-all here, But here, upon this bank and shoal of time, We'ld jump the life to come. But in these cases We still have judgment here; that we but teach Bloody instructions, which, being taught, return To plague the inventor: this even-handed justice Commends the ingredients of our poison'd chalice To our own lips. He's here in double trust; First, as I am his kinsman and his subject, Strong both against the deed; then, as his host, Who should against his murderer shut the door, Not bear the knife myself. Besides, this Duncan Hath borne his faculties so meek, hath been So clear in his great office, that his virtues Will plead like angels, trumpet-tongued, against The deep damnation of his taking-off; And pity, like a naked new-born babe, Striding the blast, or heaven's cherubim, horsed Upon the sightless couriers of the air, Shall blow the horrid deed in every eye, That tears shall drown the wind. I have no spur To prick the sides of my intent, but only Vaulting ambition, which o'erleaps itself And falls on the other.", ); let shifted = shift(&original, 6); assert_eq!(crack_match_maxima(&shifted), original); assert_eq!(crack_minimise_mse(&shifted), original); } } Answer: Overall, your implementation looks good! Your code is well documented, it has unit tests and the use of the standard traits is also great. Here are some of my suggestions: The value field of Letter must be in the range 0-25, I would either add an assertion in the new method of Letter that panics if the range is violated, or make it return a Result and handle the error. Implementing the above will allow you to move the unsafe from pub unsafe fn to_char_unchecked(&self) -> char into the method body as it will always be safe to convert. It's a good practice to avoid unsafe in public APIs whenever it's possible by creating a safe abstraction over the unsafe code. value and case should not be public. You can change them to pub(crate). Remove the need for unstable features. 
const MOST_FREQUENT_ENGLISH: Letter = Letter::try_from_char('e').unwrap(); can be replaced with const MOST_FREQUENT_ENGLISH: Letter = Letter { value: 4, case: Case::Lower, }; and #![feature(const_option)] can be removed. The const variable can also be taken out of the method. fn combined_case(case1: Case, case2: Case) -> Case can be implemented using the Add trait as you have done for Letter. Use filter_map() instead of filter() and map(). Alphabetic::new( string .bytes() .filter(|x| x.is_ascii_alphabetic()) .map(|x| Letter::try_from_utf8(x).unwrap()) .collect(), ) can be replaced with Alphabetic::new( string .bytes() .filter_map(Letter::try_from_utf8) .collect(), ) assert!(letter2 != letter1); can be changed to assert_ne!(letter2, letter1); Avoid dynamic dispatch. There is no need to use dynamic dispatch in this method pub fn pairwise_map( &self, other: &Self, func: Box<dyn Fn(Letter, Letter) -> Letter>, ) -> Alphabetic It can be changed using generic type: pub fn pairwise_map<Func>(&self, other: &Self, func: Func) -> Alphabetic where Func: Fn(Letter, Letter) -> Letter, Or an fn pointer: pub fn pairwise_map( &self, other: &Self, func: fn(Letter, Letter) -> Letter, ) -> Alphabetic { This: let length = match (freqs1.len(), freqs2.len()) { (0, _) | (_, 0) => { return None; } (x, y) if x == y => x, _ => return None, }; can be simplified let length = match (freqs1.len(), freqs2.len()) { (x, y) if x == y && x != 0 => x, _ => return None, };
{ "domain": "codereview.stackexchange", "id": 41544, "tags": "beginner, rust, caesar-cipher" }
Realizability Assumption: Why is it that for every ERM hypothesis $L_{S}(h_{S})=0$?
Question: I'm quoting Understanding Machine Learning: From Theory to Algorithms, Cambridge University Press, 2014: Definition 2.1 (The Realizability Assumption). There exists $h^{\star} \in \mathcal{H}$ s.t. $L_{(\mathcal{D},f)}(h^{\star}) = 0$. Note that this assumption implies that with probability 1 over random samples, $S$, where the instances of $S$ are sampled according to $\mathcal{D}$ and are labeled by $f$, we have $L_{S}(h^{\star})=0$. My understanding of the second sentence in this definition is that because $h^{\star}$ satisfies the equation $L_{(\mathcal{D},f)}(h^{\star}) = 0$, every prediction made by $h^{\star}$ on every example $x$ sampled from the domain set $\mathcal{X}$ is correct (otherwise the loss $L_{(\mathcal{D},f)}(h^{\star})$ would not equal 0). Equivalently, every prediction made by $h^{\star}$ is correct. Therefore, for any sample $S$ sampled from $\mathcal{X}$ we have $L_{S}(h^{\star})=0$. However, what I stumble over is when the author further elaborates on this definition: The realizability assumption implies that for every ERM hypothesis we have that $L_{S}(h_{S})=0$. I don't quite get what the author means here, since every ERM hypothesis $h_{S}$ is found by some minimization algorithm, which, in turn, depends on a number of other factors, such as the choice of the loss function, the sample size, and the algorithm's complexity, and thus may not always converge to $h^{\star}$. Answer: What I was missing is the condition in the definition: "$S$ is labeled by a function $f$". Since $L_{(\mathcal{D},f)}(h^{\star})=0$ and $h^{\star}\in\mathcal{H}$, for every ERM hypothesis $h_{S}=\mathrm{ERM}_{\mathcal{H}}(S)\in \underset{h\in\mathcal{H}}{\operatorname{argmin}}\, L_{S}(h)$ learned from $S$, which minimizes the loss defined by $$ \begin{align} L_{S}(h):=\frac{|\{i \in [m]: h(x_{i}) \ne f(x_{i}) = y_{i}\}|}{m}, \end{align} $$ we simply have that $L_{S}(h_{S})=0$; otherwise, we could choose $h^{\star}$ for $h_{S}$.
This is different from, say, the agnostic ERM learner, where this condition is relaxed by replacing the labeling function $f$ with a data-generating distribution instead.
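The argument can be compressed into a single chain of inequalities (a standard restatement, not part of the original answer): since $h^{\star}\in\mathcal{H}$ and realizability gives $L_{S}(h^{\star})=0$ with probability 1,

```latex
0 \;\le\; L_{S}(h_{S}) \;=\; \min_{h\in\mathcal{H}} L_{S}(h) \;\le\; L_{S}(h^{\star}) \;=\; 0
\quad\Longrightarrow\quad L_{S}(h_{S}) = 0 .
```

Note that this says nothing about whether $h_{S}=h^{\star}$; it only pins down the empirical loss of any empirical risk minimizer.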
{ "domain": "ai.stackexchange", "id": 4000, "tags": "machine-learning, ai-basics, hypothesis-class" }
Disable Element If
Question: This is a pretty straightforward library function I've got here. I've had to build it into about 3 different forms, but I've never heard of this kind of function anywhere else. I'm trying to make it as readable and clear as possible, but I also am not too familiar with the best way to capture values in JavaScript. As far as I know, the following disableChecker function allows me to capture the value arguments at declaration time so I can have many of these DisableIf checks going simultaneously. function disableChecker(checkSelector, disableValue, targetSelector) { var optionValue = $(checkSelector).val(); if (optionValue === disableValue) { $(targetSelector).prop("disabled", true); //Disable target } else { $(targetSelector).prop("disabled", false); //Enable target } } function DisableIf(checkSelector, disableValue, targetSelector) { $(checkSelector).on("change,keyup,blur", disableChecker(checkSelector, disableValue, targetSelector)); } Usage: <select id="disableSource"> <option value="0">Disable Next</option> <option value="1">Enable Next</option> </select> <input type="text" id="disableTarget" /> <input type="text" id="disableTarget2" /> <script type="text/javascript"> //Please note that at this time, order is important! DisableIf("#disableSource", "0", "#disableTarget"); DisableIf("#disableTarget", "Disable Next", "#disableTarget2"); DisableIf("#disableSource", "0", "#disableTarget2"); </script> jsFiddle Answer: Readability: Merging code into a single line: Variables - If the function isn't too complex to understand, variable declarations that are only used in a single location are unnecessary. If-statements - If-statements take a boolean and do something different based on the boolean. If you're toggling a value like "disabled" you usually can just pass the boolean straight to the value-setter.
function disableChecker(checkSelector, disableValue, targetSelector) { $(targetSelector).prop("disabled", $(checkSelector).val() === disableValue); } Library: Naming - It should be clearer to anyone working with the code what each function does (whether this person be you, someone else, or you 6 months from now). Condition - Why force it to always be an equality condition? Perhaps in the future you may want it to be greater-than or something even more complex? By passing in a boolean parameter instead you expand your library function's potential. Maintainability - Single Responsibility Principle. Each function does one specific thing so that if any of the parts change they aren't all affected. Initialization - This is the only slight downside that I see. It takes a little bit more code turning the condition into a parameter. But flexibility and readability usually come at some kind of cost (generally in the setup or performance departments). Given that it's a library function though, you're probably wanting that flexibility. Something like: function setDisable(targetSelector, isDisabled) { $(targetSelector).prop("disabled", isDisabled); } function valueEqualsExpected(selector, expectedValue) { return $(selector).val() === expectedValue; } function turnOnAndRunEvent(events, selector, func) { $(selector).on(events, func); //$(document).on(events, selector, func); func(); } turnOnAndRunEvent("change keyup blur", "#disableSource", function () { setDisable("#disableTarget", valueEqualsExpected("#disableSource", "0")); }); turnOnAndRunEvent("change keyup blur", "#disableTarget", function () { setDisable("#disableTarget2", valueEqualsExpected("#disableTarget", "Disable Next")); }); turnOnAndRunEvent("change keyup blur", "#disableSource", function () { setDisable("#disableTarget2", valueEqualsExpected("#disableSource", "0")); }); JSFiddle
{ "domain": "codereview.stackexchange", "id": 15542, "tags": "javascript, jquery" }
What is a universal quantum algorithm?
Question: Nobody seems to define the term anywhere. It's in the abstract of "Factoring integers with sublinear resources on a superconducting quantum processor" by Bao Yan, Ziqi Tan, Shijie Wei et alia, 2022. Shor's algorithm has seriously challenged information security based on public key cryptosystems. However, to break the widely used RSA-2048 scheme, one needs millions of physical qubits, which is far beyond current technical capabilities. Here, we report a universal quantum algorithm for integer factorization by combining the classical lattice reduction with a quantum approximate optimization algorithm (QAOA). Answer: "Nobody seems to define the term anywhere." The term "universal quantum algorithm" is not commonly used. In the case of factoring numbers it could mean one of two things: Some algorithms for factoring numbers do not exploit "universal quantum computers", meaning that they could work on simpler quantum annealers like the D-Wave machine, which cannot do universal quantum computation. These would not be "universal" quantum algorithms, and ones that do require universal quantum computers could (although it wouldn't be a very common way to say it) be called "universal". Some algorithms for factoring numbers cater only to specific types of numbers (e.g. semiprime numbers, or numbers of the form $(p-1)(q-1)$ for primes $p$ and $q$). These include: Trial division Wheel factorization Pollard's rho algorithm Algebraic-group factorization algorithms Fermat's factorization method Euler's factorization method Special number field sieve Since the algorithm in the paper that you mentioned can run on a non-universal D-Wave machine (even though they used QAOA on a circuit-based machine to solve the QUBO part of their algorithm, instead of just running it on a D-Wave machine), they more likely mean the second option above. They mean that their algorithm will work for the factorization of any integer.
I recently wrote a much longer explanation of that paper here, in the answer to Quantum Computing Used to Break RSA by "fixing" Schnorr's Recent Factorization Claim?.
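To make the second reading concrete, here is a minimal Python sketch of one of the special-form methods listed above, Fermat's factorization. It is fast only when the odd composite $n=pq$ has $p$ and $q$ close together, which is exactly why it is not a "universal" factoring algorithm (the test number 5959 is just an illustrative choice):

```python
import math

def fermat_factor(n):
    """Fermat's method: find a with a^2 - n a perfect square b^2,
    then n = (a - b)(a + b). Assumes n is an odd composite.
    Fast only when the two factors are close together."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:          # a^2 - n is a perfect square
            return a - b, a + b
        a += 1

print(fermat_factor(5959))       # (59, 101)
```

Shor's algorithm, by contrast, is universal in this second sense: it factors any integer in polynomial time on a universal quantum computer.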
{ "domain": "quantumcomputing.stackexchange", "id": 4533, "tags": "quantum-algorithms" }
Optics: Derivation of $\vec\nabla{n} = \frac{d(n\hat{u})}{ds}$
Question: I have been given this formula from optics here, with no background: $$\vec\nabla{n} = \frac{d(n\hat{u})}{ds}$$ where $n$ is the refractive index and $\hat{u}$ is a unit vector tangent to the path $s$ that light takes inside a medium. Does anyone know if this formula has a name? I am looking specifically for a derivation of it. I have looked through the optics book by Hecht with no luck - I assume it comes from Fermat's principle of least time in some form. Any help greatly appreciated. Answer: This equation is called the ray equation and it can indeed be derived from Fermat's principle. I guess you can find more about its derivation in, e.g., Born and Wolf's Principles of optics or in Fundamentals of Photonics by Saleh and Teich.
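The ray equation also lends itself to a quick numerical check. Below is a minimal sketch (the linear index profile $n = 1 + 0.1y$ and all step sizes are made-up illustrative choices) that integrates $d(n\hat{u})/ds = \vec\nabla n$ by treating $\vec p = n\hat{u}$ as the state variable:

```python
import math

def n(x, y):
    return 1.0 + 0.1 * y        # hypothetical graded-index medium

def grad_n(x, y):
    return (0.0, 0.1)           # gradient of the profile above

# Launch a ray horizontally from the origin: p = n*u with u = (1, 0).
x, y = 0.0, 0.0
px, py = n(x, y), 0.0
ds = 0.001
for _ in range(5000):           # trace 5 units of path length
    gx, gy = grad_n(x, y)
    px += gx * ds               # dp/ds = grad n
    py += gy * ds
    mag = math.hypot(px, py)
    x += px / mag * ds          # dr/ds = u = p/|p|
    y += py / mag * ds

print(y > 0)                    # True: the ray bends toward higher n
```

The upward curvature toward the optically denser region is the familiar graded-index (mirage/fiber) behavior that the ray equation encodes.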
{ "domain": "physics.stackexchange", "id": 18280, "tags": "optics, refraction, variational-principle" }
Best window before FFT? (for a signal consisting of 2 tones used for phase measurement)
Question: I am doing phase measurements by transmitting and receiving tones (100 kHz and 16 kHz simultaneously). I am transmitting the tones and receiving them, applying a Blackman-Harris window, and doing an FFT for phase measurements. I feel the Blackman-Harris window is not optimal for a signal that has only 2 tones, but I don't know which windowing function is better for phase measurement of such a signal. Any ideas? Answer: Using an FFT to measure phase for just two tones results in a lot more processing than the following alternate approach, which can be either streamed or processed in blocks. No windowing is needed: Apply the received signal as the input to two multipliers. Apply a normalized local copy of one tone as the second input to one of the multipliers and a normalized local copy of the other tone to the other. The low-pass-filtered output of each will be an estimate of the phase given by $A\cos(\theta)$. For full 360-degree phase resolution, use a complex local tone and two complex multipliers with two outputs (I, Q), with the phase determined using atan2(Q, I). (This is essentially the operation in two bins of the DFT.)
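The multiply-and-low-pass approach is easy to sandbox. The sketch below (sample rate, block length, and phases are arbitrary illustrative choices) mixes a two-tone signal with a complex local oscillator and averages over an integer number of periods, which acts as the low-pass filter; atan2 then recovers each phase:

```python
import cmath
import math

def tone_phase(x, f, fs):
    """Mix x with a complex local tone at f and average the product
    (a crude low-pass over an integer number of periods), then take
    atan2(Q, I) of the result to get the tone's phase."""
    n = len(x)
    z = sum(x[k] * cmath.exp(-2j * math.pi * f * k / fs) for k in range(n)) / n
    return math.atan2(z.imag, z.real)

fs = 1_000_000.0            # 1 MHz sampling (illustrative)
n = 1000                    # 1 ms block: whole periods of both tones
phi1, phi2 = 0.7, -1.2      # phases we want to recover
x = [math.cos(2 * math.pi * 100e3 * k / fs + phi1)
     + math.cos(2 * math.pi * 16e3 * k / fs + phi2)
     for k in range(n)]

print(round(tone_phase(x, 100e3, fs), 6))   # 0.7
print(round(tone_phase(x, 16e3, fs), 6))    # -1.2
```

As the answer notes, this is just evaluating two DFT bins directly, so no window is needed when the block holds a whole number of periods of each tone (i.e., both frequencies fall exactly on bin centers).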
{ "domain": "dsp.stackexchange", "id": 10079, "tags": "fft, phase, measurement, tone-generation" }
Stuck following derivation of geodesic equation
Question: In the book "Reflections on Relativity" by Kevin Brown, there is a chapter called "Relatively Straight", in which he derives the geodesic equations using the Euler equation. Online version Just after the second mention of the Euler equation (about 80% down), there is the following text: "Therefore, we can apply Euler's equation to immediately give the equations of geodesic paths on the surface with the specified metric $$ \frac{\partial F}{\partial x^\sigma} - \frac{d}{d\lambda}\frac{\partial F}{\partial \dot{x}^\sigma} = 0$$ For an n-dimensional space this represents n equations, one for each of the coordinates $x_1, x_2, ..., x_n$. Letting $w = (\frac{ds}{d\lambda})^2 = F^2 = g_{\alpha\beta}\dot{x}^\alpha\dot{x}^\beta$ this can be written as $$\frac{\partial w^{1/2}}{\partial x^\sigma} - \frac{d}{d\lambda}\frac{\partial w^{1/2}}{\partial \dot{x}^\sigma} = \frac{d}{d\lambda}\frac{\partial w}{\partial \dot{x}^\sigma} - \frac{\partial w}{\partial x^\sigma} - \frac{1}{2w}\frac{dw}{d\lambda} \frac{\partial w}{\partial \dot{x}^\sigma} = 0 $$ I get the substitution of $\sqrt{w}$ for $F$ on the LHS, but can't see how he obtains the middle expression. I have tried using the product/chain rules as is usual with these things, but just cannot see what he is doing here. I can usually follow Kevin's work, with a bit of effort, but this one seems a little trickier than I am used to. Can anyone help me to understand the trick? Answer: It seems right.
You have $$ \frac{\partial w^{1/2}}{\partial x^\sigma}=\frac1{2\sqrt{w}}\frac{\partial w}{\partial x^\sigma} $$ Change the order of the derivatives in the second term $$ \frac{d}{d\lambda}\frac{\partial w^{1/2}}{\partial \dot{x}^\sigma}=\frac{\partial}{\partial \dot{x}^\sigma}\left(\frac{d}{d\lambda} w^{1/2}\right)=\frac{\partial}{\partial \dot{x}^\sigma}\left( \frac1{2\sqrt{w}}\frac{d w}{d\lambda } \right) $$ Product rule $$ \frac{\partial}{\partial \dot{x}^\sigma}\left( \frac1{2\sqrt{w}}\frac{d w}{d\lambda } \right)=-\frac1{4w\sqrt{w}}\frac{\partial w}{\partial \dot{x}^\sigma}\frac{dw}{d\lambda}+\frac1{2\sqrt{w}}\frac{\partial }{\partial \dot{x}^\sigma}\frac{dw}{d\lambda} $$ Subtract, equate to zero, multiply $2\sqrt{w}$ you get $$ \frac{\partial w}{\partial x^\sigma}+\frac1{2w}\frac{\partial w}{\partial \dot{x}^\sigma}\frac{d w}{d\lambda}-\frac{d}{d\lambda}\frac{\partial w}{\partial \dot{x}^\sigma}=0 $$
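One remark worth appending (standard, not part of the original answer): if $\lambda$ is chosen as an affine parameter, then $w=(ds/d\lambda)^2$ is constant along the path, so $dw/d\lambda=0$ and the awkward middle term drops out:

```latex
\frac{dw}{d\lambda}=0
\quad\Longrightarrow\quad
\frac{d}{d\lambda}\frac{\partial w}{\partial \dot{x}^{\sigma}}
-\frac{\partial w}{\partial x^{\sigma}}=0 ,
```

which, after inserting $w=g_{\alpha\beta}\dot{x}^{\alpha}\dot{x}^{\beta}$, expands into the familiar geodesic equation $\ddot{x}^{\mu}+\Gamma^{\mu}{}_{\alpha\beta}\dot{x}^{\alpha}\dot{x}^{\beta}=0$.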
{ "domain": "physics.stackexchange", "id": 13964, "tags": "general-relativity, differential-geometry, variational-principle, geodesics" }
Signals in the real world, especially natural signals?
Question: Are all the signals in the real world that are of interest in engineering (for measurement or processing) continuous/analog, or are there other alternatives as well? For example, the sounds made by birds and other animals are continuous signals. A few natural discrete signals, in my opinion: the number of teeth and the number of bones in a human; the number of mountains in a range; the height of a mountain/hill. Please kindly give me other examples from the natural world, and please guide me if I am wrong in my understanding of continuous and discrete signals. Answer: Even though most engineering signals are continuous, not all of them are. For example, statistics of many kinds provide discrete data, such as daily stock market indicators, computer network stats, hourly numbers of customers in a queue, etc. Of course, audio signals are (typically) transmitted as acoustic pressure waves through air, and pressure is a physical variable modeled as continuous.
{ "domain": "dsp.stackexchange", "id": 8586, "tags": "continuous-signals, real-time, real-analysis" }
Why does Ensemble Averaging actually improve results?
Question: Why does ensemble averaging work for neural networks? This is the main idea behind things like dropout. Consider an example of a hypersurface defined by the following image (white means lowest cost). We have two networks, yellow and red; each network has 2 weights and adjusts them to end up in the white portion. Clearly, if we average them after they are trained, we will end up in the middle of the space, where the error is very high. Answer: I think there is a misunderstanding in your question. You imply that you take the average of the weights of the networks, and you should not do this. Instead, what you average are the predictions of different networks. For this reason, if you average two correct predictions the result will be a correct prediction, and the problem you were thinking about does not exist.
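Averaging predictions rather than weights can be demonstrated with a toy simulation (the per-model accuracy, ensemble size, and independence assumption are all made up for illustration): if each of 9 independent models is right 70% of the time, the averaged/majority prediction is right far more often:

```python
import random

random.seed(0)
p_single = 0.7       # assumed accuracy of each individual model
n_models = 9         # assumed ensemble size
trials = 20000

correct = 0
for _ in range(trials):
    # Each model independently predicts; averaging the predictions
    # of 0/1 outputs and thresholding is a majority vote.
    votes = sum(random.random() < p_single for _ in range(n_models))
    if votes > n_models // 2:
        correct += 1

print(correct / trials > 0.85)   # True: well above any single model
```

Averaging the weights, by contrast, corresponds to picking a point between the two minima on the cost surface in the question, which for a non-convex loss can indeed land in a high-error region.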
{ "domain": "datascience.stackexchange", "id": 3023, "tags": "gradient-descent" }
Hector slam configuration issues
Question: I am changing my project from using gmapping to hector slam but in rviz my robot is displaying and moving in a less than stable manner, as shown below. I've run out of ideas on things to try; Teleoperating the robot is even worse : I have the following error on my command-line : [ERROR] [1616948938.487207485, 0.535000000]: Transform failed during publishing of map_odom transform: Lookup would require extrapolation into the past. Requested time 0.030000000 but the earliest data is at time 0.205000000, when looking up transform from frame [chasis] to frame [odom] My TF tree hierarchy is fully connected as shown below : I have hokuyo lidar setup in my xacro as follows: <!-- HOKUYO as RPLIDAR --> <xacro:hokuyo_utm30lx name="hokuyo_laser" parent="chasis" ros_topic="hokuyo_scan" update_rate="40" ray_count="1040" min_angle="130" max_angle="-130"> <origin xyz="0 0 0.3" rpy="0 0 0" /> </xacro:hokuyo_utm30lx> And my hector related configuration in my launch file is as follows : <arg name="tf_map_scanmatch_transform_frame_name" default="scanmatcher_frame"/> <arg name="base_frame" default="chasis"/> <arg name="odom_frame" default="odom"/> <arg name="pub_map_odom_transform" default="true"/> <arg name="scan_subscriber_queue_size" default="5"/> <arg name="scan_topic" default="hokuyo_scan"/> <arg name="map_size" default="2048"/> <node pkg="hector_mapping" type="hector_mapping" name="hector_mapping" output="screen"> <!-- Frame names --> <param name="map_frame" value="map" /> <param name="base_frame" value="$(arg base_frame)" /> <param name="odom_frame" value="$(arg odom_frame)" /> <!-- Tf use --> <param name="use_tf_scan_transformation" value="true"/> <param name="use_tf_pose_start_estimate" value="false"/> <param name="pub_map_odom_transform" value="$(arg pub_map_odom_transform)"/> <!-- Map size / start point --> <param name="map_resolution" value="0.050"/> <param name="map_size" value="$(arg map_size)"/> <param name="map_start_x" value="0.5"/> <param name="map_start_y" 
value="0.5" /> <param name="map_multi_res_levels" value="2" /> <!-- Map update parameters --> <param name="update_factor_free" value="0.4"/> <param name="update_factor_occupied" value="0.9" /> <param name="map_update_distance_thresh" value="0.4"/> <param name="map_update_angle_thresh" value="0.06" /> <param name="laser_z_min_value" value = "-1.0" /> <param name="laser_z_max_value" value = "1.0" /> <!-- Advertising config --> <param name="advertise_map_service" value="true"/> <param name="scan_subscriber_queue_size" value="$(arg scan_subscriber_queue_size)"/> <param name="scan_topic" value="$(arg scan_topic)"/> <param name="tf_map_scanmatch_transform_frame_name" value="$(arg tf_map_scanmatch_transform_frame_name)" /> </node> <node pkg="tf" type="static_transform_publisher" name="map_nav_broadcaster" args="0 0 0 0 0 0 odom chasis 100"/> My hector-slam knowledge is a work-in-progress, so I would appreciate some help in understanding what's wrong with my configuration. I'm happy to provide more code if it would help. Originally posted by sisko on ROS Answers with karma: 247 on 2021-03-28 Post score: 0 Answer: In my hours and hours of researching this issue, I came across this, and in the hector-mapping section it has its static-transform-publisher defined as follows: <node pkg="tf" type="static_transform_publisher" name="base_to_laser_broadcaster" args="0 0 0 0 0 0 base_link laser 100" /> This led me to realise my static-transform-publisher was defined incorrectly. I changed mine to publish TF data between chasis -> hokuyo_scan instead of odom -> chasis, i.e.: <node pkg="tf" type="static_transform_publisher" name="map_nav_broadcaster" args="0 0 0 0 0 0 chasis hokuyo_scan 100"/> And that fixed my problem with regard to rviz. The robot model no longer "hops" in place like a low-rider, and the teleoperation is nice and smooth. However, I still get the "Transform failed during publishing of map_odom transform: . . ." errors in my terminal.
Originally posted by sisko with karma: 247 on 2021-03-28 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 36248, "tags": "ros-melodic, hector-slam" }
In which O-class does my Θ-result belong?
Question: In a multiple-choice test, I'm asked to solve the recurrence $T(n)=2T(n/2)+n/2$. I've done this using the master theorem: $f(n)=n/2$, $a=2$, $b=2$, so we're in the second case and $T(n)=\Theta(n\log n)$. But the possible answers are $O(\log n)$, $O(n)$, $O(n \log n)$, $O(n^2)$ and $O(n^2/\log n)$ and more than one may be correct. What do I do now? Answer: You've solved the recurrence more precisely than the test-setter was expecting you to. You know that $T(n)=\Theta(n\log n)$, which means that $T(n) = O(n\log n)$ and $T(n) = \Omega(n\log n)$, and that gives you the answer. (By the way, I didn't check that you solved the recurrence correctly, but you can do that yourself. Answer checking is off-topic here, and you didn't ask for it, so I focused on the rest of your question.)
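As a numerical sanity check (not a proof, and assuming a base case $T(1)=1$, which the question leaves unspecified), one can evaluate the recurrence at powers of two and watch $T(n)/(n\log_2 n)$ approach a constant, as $\Theta(n\log n)$ predicts:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 2 T(n/2) + n/2, evaluated at powers of two with T(1) = 1 (assumed)
    return 1 if n <= 1 else 2 * T(n // 2) + n // 2

for k in (10, 15, 20):
    n = 2 ** k
    print(n, T(n) / (n * math.log2(n)))  # ratio tends to 1/2 as n grows
```

In closed form the assumed recurrence gives $T(2^k) = 2^{k-1}(k+2)$, so the ratio is $(k+2)/2k \to 1/2$, consistent with $\Theta(n \log n)$.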
{ "domain": "cs.stackexchange", "id": 6864, "tags": "asymptotics, landau-notation" }
On solving an RC circuit
Question: Given the following RC circuit: it's required to calculate how the electric current varies and how the other electrical quantities vary. Based on that request, I was wondering if it was correct to consider this other RC circuit: therefore, as usual, distinguish between: charge process: $i(t) = \frac{\Delta V}{R_{eq}}e^{-\frac{t}{R_{eq}\,C}}$ and $q(t) = C\,\Delta V\left(1 - e^{-\frac{t}{R_{eq}\,C}}\right)$; discharge process: $i(t) = -\frac{\Delta V}{R_{eq}}e^{-\frac{t}{R_{eq}\,C}}$ and $q(t) = C\,\Delta V\,e^{-\frac{t}{R_{eq}\,C}}$; or is this simplification not correct and it's necessary to solve the initial circuit through differential equations? In the latter case if I was given some help on how to do it I would be happy, thanks! EDIT: Thanks to the clear answers of @Ritam_Dasgupta and @Señor O, I managed to write the associated Cauchy problem: $$ \begin{cases} \epsilon = R_1\,\dot{q}_1(t) + \frac{q(t)}{C} \\ \epsilon = R_2\,\dot{q}_2(t) + \frac{q(t)}{C} \\ \epsilon = R_3\,\dot{q}_3(t) + \frac{q(t)}{C} \\ \dot{q}(t) = \dot{q}_1(t) + \dot{q}_2(t) + \dot{q}_3(t) \\ q(0) = q_1(0) = q_2(0) = q_3(0) = \overline{q} \end{cases} $$ which can be declined in the two classic cases: charge process: $\epsilon = \Delta V$, $\overline{q} = 0$, then: $$ \begin{aligned} & q(t) = C\,\Delta V\left(1 - e^{-\frac{t}{R_{eq}\,C}}\right), \quad \quad \quad \, i(t) = \frac{\Delta V}{R_{eq}}\,e^{-\frac{t}{R_{eq}\,C}}\,; \\ & q_k(t) = r_k\,q(t), \quad \quad \quad \quad \quad \quad \quad \quad \quad i_k(t) = r_k\,i(t)\,; \\ & U_{vs} = \frac{q^2(t)}{C}\,, \quad \quad U_C = \frac{q^2(t)}{2\,C}\,, \quad \quad U_R = \frac{q^2(t)}{2\,C}\,; \\ \end{aligned} $$ discharge process: $\epsilon = 0$, $\overline{q} = C\,\Delta V$, then: $$ \begin{aligned} & q(t) = C\,\Delta V\,e^{-\frac{t}{R_{eq}\,C}}, \quad \quad \quad \quad \quad \quad \quad \quad i(t) = -\frac{\Delta V}{R_{eq}}\,e^{-\frac{t}{R_{eq}\,C}}\,; \\ & q_k(t) = (1 - r_k)\,C\,\Delta V + r_k\,q(t), \quad \quad \quad \, i_k(t) = 
r_k\,i(t)\,; \\ & U_{vs} = 0\,, \quad \quad \quad \quad U_C = \frac{q^2(t)}{2\,C}\,, \quad \quad \quad \quad U_R = \frac{q^2(t)}{2\,C}\,; \\ \end{aligned} $$ where $r_k = \frac{R_{eq}}{R_k}$ and $R_{eq} = \left(\begin{aligned}\sum_{k = 1}^3\end{aligned} \frac{1}{R_k}\right)^{-1}$. I hope I haven't made any mistakes, in case I will correct. Answer: Yes, what you did is most certainly correct. Now if you have to find out currents in individual branches you just have to notice that since potential difference across each resistor is the same the current passing through a resistor would be inversely proportional to its resistance. You can verify this by writing the differential equation itself. Consider the current in the branches to be $i_1$, $i_2$ and $i_3$ respectively such that $i_{eq}=i_1+i_2+i_3$. Now the differential equations for different paths would be: $$ V-\frac {q_{eq}} {C}- i_1R_1=0$$ $$V-\frac {q_{eq}}{C}-i_2R_2=0$$ $$V-\frac {q_{eq}}{C}-i_3R_3=0$$ To proceed you should divide these $3$ equations with $R_1$, $R_2$, and $R_3$ respectively and add. We have, $\frac {1}{R_{eq}}=\frac {1}{R_1}+\frac {1}{R_2}+\frac {1}{R_3}$ Hence, we get: $$\frac {V}{R_{eq}}-\frac {q}{R_{eq}C}-i_{eq}=0$$ Since we have $\frac {dq}{dt}=i_{eq}$, all that remains is to integrate, and you'll obtain the required result.
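The differential-equation route in the answer is easy to check numerically. Below is a sketch (the component values are illustrative assumptions, not the ones in the original problem) that integrates the charging equation $\dot{q} = (\Delta V - q/C)/R_{eq}$ with a crude forward-Euler step and compares the result with the analytic law $q(t) = C\,\Delta V\left(1 - e^{-t/(R_{eq}C)}\right)$:

```python
import math

# Illustrative values (assumed, not from the original problem)
V, C = 10.0, 1e-6                   # source voltage [V], capacitance [F]
R = [1e3, 2e3, 4e3]                 # the three parallel resistors [ohm]
R_eq = 1.0 / sum(1.0 / r for r in R)

dt, n = 1e-7, 50_000                # forward-Euler step and number of steps
q = 0.0                             # capacitor starts uncharged
for _ in range(n):
    q += dt * (V - q / C) / R_eq    # dq/dt = (V - q/C) / R_eq

t = n * dt
q_exact = C * V * (1 - math.exp(-t / (R_eq * C)))
print(q, q_exact)                   # numerical and analytic charge agree
```

The discharge case works the same way with the source term removed and a charged initial condition.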
{ "domain": "physics.stackexchange", "id": 79283, "tags": "homework-and-exercises, electric-circuits, electrical-resistance, capacitance, batteries" }
How are two species with similar phenotypes identified as different?
Question: How are two species with similar phenotypes identified as different? Have two different individuals that were thought to be different species ever been determined to be from the same species (or one a subspecies of the other's species)? Answer: Species are difficult to define. This is because they form over a gradual continuous spectrum via evolution; one species does not suddenly become two, but lineages diverge (perhaps because there is some kind of barrier - like a mountain range - in the way), eventually reaching a point where we class them as separate species. How we define species is debated, but the most generally accepted definition is the Biological Species Concept (BSC). Roughly, the BSC defines a species as "a group of individuals that do, or could, successfully breed." Other species concepts do exist, some have their merits, but the BSC is most widely accepted and normally taken with a pinch of salt - it's not always possible to perfectly define a species because evolution and speciation are ongoing processes. Therefore, according to the BSC, two groups can be called different species even if the individuals look identical, as long as there is true reproductive isolation. Reproductive isolation can occur because of physical barriers such as rivers, oceans, mountains, or environmental change. This is the cause of allopatric speciation. One species could become split when a subpopulation forms on an island (e.g. if humans take them to an island). If enough time passes without migration they will become reproductively incompatible. Another mechanism is sympatric speciation, when two species form without geographic isolation. This could be by the evolution of new behavioural, morphological or physiological forms which create division within the initial meta-population into distinct sub-populations which over time become more reproductively isolated.
Traits related to reproduction are amongst the fastest evolving and can quickly reinforce speciation (e.g. sperm morphology is extremely varied between species, see Sperm Biology: An evolutionary perspective, chapter 3). The higher up the tree of biological classification you go, the easier it becomes to classify things into distinct groups, e.g. An oak tree is clearly a plant, and a whale is clearly an animal; further towards species definitions a whale is a mammal and a sparrow is a bird and both are animals; and perhaps less clearly, carrion crows and hooded crows are two different bird species in the Corvus genus. (Formerly these two were considered the same species!) Obviously problems begin to occur when we consider species that reproduce through means other than sexual reproduction - e.g. bacteria which clonally reproduce. Traditionally this has been done by phenotypic information. We are now in the genomics era which opens new possibilities. This article discusses modern techniques which utilise genetic similarity & difference between cells to define species, and they suggest it to be a good method. However, this article seems to suggest that genomic methods need some refining first before we start classifying species with them. Currently, it appears more pragmatic and efficient to preserve the current species definition than to replace it, because it is serviceable as a first level of screening and current phylogenetic knowledge is too limited for a universal and sound change in the definition. As for whether two species have been reclassified into one, I know there are examples but Google is not throwing up anything and there's nothing I can remember off the top of my head - if I remember correctly, the male and female mallard were once considered separate species because they look so different, but I can't find anything to back this up. Species formerly described as one have been split into two as well - gorillas are one such example.
{ "domain": "biology.stackexchange", "id": 2166, "tags": "evolution, species-identification" }
How does a thin gas uphold its emission spectrum?
Question: This question has been bothering me for many years and maybe I've been too embarrassed to ask up until now. The reason why a thin atomic gas has an absorption spectrum has been explained to me by noting that the atoms absorb certain frequencies only and reemit the absorbed radiation in all directions. This explains the effect for the usual experimental setup. However, I think the sun's spectral lines cannot be explained in this way. Assuming the direction of reemission is random would mean that most of the solid angle from an atom in the sun's atmosphere would be facing outwards. This would lead one to expect the intensity at the absorption lines to be only slightly weaker, while the lines are in reality quite pronounced. What mechanism explains this behaviour? Also, what would be the emission spectrum of a thin atomic gas in thermal equilibrium? Would it have spectral lines? Answer: A simple way to think about the Sun's absorption spectrum is to think of the solar atmosphere as a cool layer on top of something that is much hotter that is emitting blackbody radiation. (In reality it is more like a smoothly varying continuum of layers, but this doesn't affect the argument). A fraction of the blackbody photons are headed towards our spectrograph, but they have to get through the layer of cooler gas. Some of them are absorbed at particular wavelengths corresponding to atomic transitions and then re-emitted in all directions. Roughly half of the re-emission is directed outwards, but only a vanishingly small fraction of those re-emitted photons are in the original direction, and thus we see an absorption line. However, other photons could be scattered into our line of sight from light that was not travelling in our direction originally. I think your confusion is over whether this compensates?
The answer is no, because the specific intensity of light emitted from the cooler overlying layer is much lower than that of light originating further in (it goes as $T^4$). So another way of looking at a spectral line is to imagine each wavelength originating at a particular depth (a gross simplification). The bottom of a spectral absorption line originates high up in the cooler part of the atmosphere, whilst the continuum originates in hotter interior layers. An optically thin atomic gas would have an observed spectrum consisting of narrow emission lines. Exactly which lines, and at what wavelengths, depends on what the gas is made of and on its temperature and density.
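The intensity argument can be made concrete at a single wavelength with the Planck function. The sketch below compares specific intensities at the H-alpha wavelength for two illustrative temperatures (5800 K for the hot interior, 4500 K for the cooler overlying layer; both numbers are assumptions for illustration, not solar data):

```python
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

def planck(lam, T):
    """Planck specific intensity B_lambda(lam, T)."""
    return (2 * h * c ** 2 / lam ** 5) / math.expm1(h * c / (lam * k * T))

lam = 656e-9                               # H-alpha wavelength [m]
ratio = planck(lam, 5800) / planck(lam, 4500)
print(ratio)   # roughly 3: the hot interior outshines the cool layer severalfold
```

Because the line core forms in the cool layer and the neighbouring continuum in the hot interior, a factor of a few in intensity is enough to produce a pronounced dark line.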
{ "domain": "physics.stackexchange", "id": 34949, "tags": "electromagnetic-radiation, atomic-physics, spectroscopy" }
Using solubility guidelines to predict a reaction of two ionic, water-soluble salts
Question: Will a reaction occur in the following case? If so, write a net ionic equation for it: $\ce{(NH4)2SO4 (aq) + ZnCl2 (aq) -> ?}$ The book says that all possible combinations of positive and negative ions lead to water-soluble compounds, so all of the ions remain in solution. No reactions occur. Now I am a little confused. Why didn't $\ce{(NH4)2}$ join with $\ce{Cl2}$ and why didn't $\ce{SO4}$ join with $\ce{Zn}$? Or did they? Answer: Of course $\ce{NH4+}$ can join $\ce{Cl-}$ and $\ce{Zn^2+}$ can join $\ce{SO4^2-}$, but the formed salts are completely soluble in water, and they completely dissociate in water to give the corresponding ions. On the other hand, the starting salts, $\ce{(NH4)2SO4}$ and $\ce{ZnCl2}$, are completely soluble in water, and they completely dissociate in water to give the corresponding ions. So, all of the ions remain in solution, as if no reaction occurs.
{ "domain": "chemistry.stackexchange", "id": 2071, "tags": "inorganic-chemistry, solubility, ions" }
Restriction and re-labeling on CCS
Question: In a process like $$ R \stackrel{def}=((a.\bar{b}.0)\setminus\{b\})[a\to b]\mid(\bar{b}.b.0)+\bar{b}.c.0 $$ the restriction on $b$ applies to the inner process $(a.\bar{b}.0)\setminus\{b\}$ of the right-hand side, and afterwards $a$ is relabeled to $b$. What confuses me is drawing the LTS of it: does the $a$ action occur in the first process (so that we can have a transition with label $a$ in the LTS), and is it only in the next process that the renaming takes place? Could the following LTS be valid for the first transitions? (The bar is replaced by an underline.) Answer: I am not completely sure about what is causing your confusion, but perhaps this can help: $a.\bar{b}.0$ can only perform $a$. $(a.\bar{b}.0)\setminus\{b\}$ can only perform $a$. $((a.\bar{b}.0)\setminus\{b\})[a\to b]$ can only perform $b$ (i.e. $a$ after substitution). The full process $((a.\bar{b}.0)\setminus\{b\})[a\to b]\mid(\bar{b}.b.0)+\bar{b}.c.0$ can perform $b$, $\bar b$ (two distinct ways), and $\tau$ (two distinct ways). Also note that the unrelated process $((a.\bar{b}.0)[a\to b])\setminus\{b\}$ would instead be stuck.
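The order-of-operators point in the answer can be checked mechanically. Below is a small, hypothetical encoding of CCS terms as nested tuples (with "b~" standing for $\bar b$) that computes the set of initial actions; it confirms that restriction-then-relabelling yields $b$, while relabelling-then-restriction is stuck:

```python
def initials(p):
    """Initial actions of a CCS term encoded as nested tuples (sketch)."""
    kind = p[0]
    if kind == "nil":                      # ("nil",)
        return set()
    if kind == "prefix":                   # ("prefix", action, continuation)
        return {p[1]}
    if kind == "restrict":                 # ("restrict", term, {names}): hides a and a~
        hidden = {v for name in p[2] for v in (name, name + "~")}
        return initials(p[1]) - hidden
    if kind == "relabel":                  # ("relabel", term, {old: new})
        f = p[2]
        def ren(a):
            base, bar = a.rstrip("~"), a.endswith("~")
            new = f.get(base, base)
            return new + "~" if bar else new
        return {ren(a) for a in initials(p[1])}
    raise ValueError(kind)

inner = ("prefix", "a", ("prefix", "b~", ("nil",)))        # a.b-bar.0
p1 = ("relabel", ("restrict", inner, {"b"}), {"a": "b"})   # ((a.b~.0)\{b})[a->b]
p2 = ("restrict", ("relabel", inner, {"a": "b"}), {"b"})   # ((a.b~.0)[a->b])\{b}
print(initials(p1))   # {'b'}
print(initials(p2))   # set() -- stuck
```

So the restriction is resolved on the inner term first; only the action that survives it is then renamed, which is why the order of the two operators matters.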
{ "domain": "cs.stackexchange", "id": 11369, "tags": "process-algebras, transition-systems, ccs" }
probability density distribution: From free diffusion to presence of a barrier
Question: I am a biologist and I am not very comfortable with statistical mechanics. However, I want to learn and I am trying to understand. I just need some clues from people who handle these topics easily. I would like to model the probability density function (PDF) for a particle moving from x0 to x in time t, in the presence of a barrier. We have the solution for "absence of a barrier" = free diffusion (1st case). We have the solution in the presence of a barrier (2nd case). However, I'm interested in knowing how the PDF would change from the "1st case" to the "2nd case" if I model the addition of a barrier at time t1. Statistical mechanics is used in both cases. I'm interested to know what I should study and look at if I want to model this "phase transition" from case 1 to case 2. I'm following this article in which both cases are present: https://www.sciencedirect.com/science/article/pii/S1090780714002134 Answer: Actually, all the work has been done in the paper that you refer to. The solution of diffusion equations is conveniently handled by calculating the propagator or Green's function, $G(x,y;t)$. This is the probability density of finding a particle at position $x$ at time $t$, given that it started at position $y$ at time $0$. You can imagine it describing how a very narrow initial distribution of density (a Dirac delta function $\delta(x-y)$ ) spreads with time, and becomes a Gaussian function of $x$ at later times: $$ G_0(x,y;t) = \frac{1}{\sqrt{4\pi Dt}} \exp[-(x-y)^2/4Dt] $$ The useful feature is that, because the diffusion equation is linear, any initial distribution of density $\rho(x,0)$ can be described as a linear superposition of spreading Gaussians, determined by the initial conditions. So at time $t_1$ $$ \rho(x,t_1) = \int_{-\infty}^\infty \, G_0(x,y;t_1) \, \rho(y,0) \, dy $$ depending on your initial density $\rho$ at $t=0$.
If your particles really are localised at position $y$ at time zero, $\rho(x,0)=\delta(x-y)$, then this simplifies to $\rho(x,t_1)=G_0(x,y;t_1)$. Now, the propagator must be constructed to comply with the imposed boundary conditions. The propagator above, $G_0$, is for no barrier: the boundary conditions are simply that the density should vanish as $x\rightarrow\pm\infty$. When the semi-permeable barrier is present, the propagator is more complicated. However, the cited paper gives the propagator for this case, $G(x,y;t)$, in eqn (5), in terms of complementary error functions. It is a combination of slightly different formulae depending on the signs of $x$ and $y$. Nonetheless, it functions in exactly the same way as $G_0$. Therefore, if you insert the barrier at time $t_1$, start with the density $\rho(x,t_1)$ determined by the no-barrier propagator up to that time, and seek solutions at $t>t_1$, you get $$ \rho(x,t) = \int_{-\infty}^\infty \, G(x,y;t-t_1) \, \rho(y,t_1) \, dy $$ In the case that $\rho(x,0)=\delta(x-y)$ we can write this as $$ \rho(x,t) = \int_{-\infty}^\infty \, G(x,y';t-t_1) \, G_0(y',y;t_1) \, dy' $$ and the quantity on the left can be interpreted as the overall propagator (probability density to be at position $x$ at time $t$, given initial position $y$ at time $0$), incorporating the effects of free diffusion up to time $t_1$, followed by diffusion in the presence of the barrier for later times. NB in the equations above $\rho(x,t)$ represents the probability density of the diffusing particles, as a function of position and time. In the cited paper, $\rho$ is used for something different.
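In the free case the composition formula can be verified numerically: propagating with $G_0$ for $t_1$ and then for $t-t_1$ must reproduce $G_0$ over the whole interval (the Chapman-Kolmogorov property). A sketch, with illustrative values for $D$, the times, and the positions (all assumptions, not taken from the cited paper):

```python
import numpy as np

D = 1.0                 # diffusion coefficient (illustrative)
t1, t = 0.3, 1.0        # "barrier-insertion" time and observation time
y0, x = 0.0, 0.7        # start and observation positions

def G0(x, y, t):
    """Free-diffusion propagator (spreading Gaussian)."""
    return np.exp(-(x - y) ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

yp = np.linspace(-10, 10, 4001)     # grid of intermediate positions y'
dy = yp[1] - yp[0]

# Compose: free propagation 0 -> t1, then t1 -> t, integrated over y'
composed = (G0(x, yp, t - t1) * G0(yp, y0, t1) * dy).sum()
direct = G0(x, y0, t)
print(composed, direct)             # agree to quadrature accuracy
```

With the barrier present one would replace the first factor by the paper's propagator $G$; the structure of the integral is unchanged.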
{ "domain": "physics.stackexchange", "id": 51170, "tags": "statistical-mechanics, diffusion, biology" }
Why do the distress values for Pavement ME Design for alligator cracking and thermal cracking for AC pavements seem to give incorrect results?
Question: When performing a pavement design using AASHTOWare Pavement ME for an AC-on-AC overlay, the pavement distress predictions for bottom-up (alligator) cracking and AC thermal cracking seem to give incorrect results. First, the reliability for both these distresses seems fixed at 50%; the program does not allow me to increase the value. Secondly, if I go ahead and run the program for 10 years, the two distresses almost always remain zero (or close to zero). Any ideas on this?
{ "domain": "engineering.stackexchange", "id": 2569, "tags": "pavement" }
Rotary Phones vs Pushbutton -- why did we ever have rotary?
Question: I remember when we first got pushbutton phones -- I would say the user experience is literally ten times better -- if you used a rotary phone extensively as part of your job, you would rapidly develop a callus on your dialing finger. Of course, you can also push a number (we still use the term dial, at least some people do) maybe ten times faster than you can dial it. My question is, why was a dial ever used? Could not something like a pushbutton phone have been developed much sooner? What is the idea behind the dial "UI"? Note that I know something about computer history, how incredibly hard it used to be, for example, to store a byte of memory using mercury delay lines -- so if I suggest that a musical note be sent (as maybe push button phones did it) instead of N pulses for the integer N, I can guess that this is much easier said than done. Answer: Rotary phones were controlled by sending pulses down the line. These pulses were the equivalent of hanging up the phone quickly - hence what you see in old movies and TV shows when they quickly tap the hang-up mechanism to get the operator. In the final versions of rotary phones, you could "dial" by quickly pressing down the hang-up button the right number of times for each number. All the rotary mechanism did was send the equivalent of a hang-up press quickly in succession the right number of times. So why didn't tones work? Because mechanical relays won't send tones, the phone system's listeners wouldn't accept tones, and because in general technology has to be invented before it works. You might as well ask why Mr. Bell didn't invent cell phones straight off.
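The pulse scheme the answer describes is easy to model. In the common mapping (an assumption here; a few national networks used different mappings), digit N was sent as N rapid loop interruptions, with 0 sent as ten:

```python
def pulses_for(digit):
    """Number of loop interruptions for one digit (common mapping: 0 -> 10)."""
    return 10 if digit == 0 else digit

def dial(number):
    """Pulse-train lengths for each digit of a number string."""
    return [pulses_for(int(d)) for d in number if d.isdigit()]

print(dial("911"))       # [9, 1, 1]
print(dial("867-5309"))  # [8, 6, 7, 5, 3, 10, 9]
```

This also explains the tap-the-hook trick: pressing the hook N times by hand is indistinguishable from dialing digit N.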
{ "domain": "engineering.stackexchange", "id": 4438, "tags": "mechanical-engineering, electrical-engineering, telecommunication" }
Are theta vacua topologically protected?
Question: In discussions of Yang-Mills instantons it is often stated that one should sum in the path integral over all contributions of fluctuations around all the topologically distinct vacua labelled by winding number $n$. Usually there follows a discussion on $\theta$-vacua, which are basically a linear combination of $n$-vacua, in the sense that $\lvert \theta \rangle = \sum e^{in\theta} \lvert n \rangle$ and that the procedure of summing over all fluctuations around instanton contributions in the path integral could be replaced by applying the usual rules to a Lagrangian where the term $ \sim \theta (*F, F)$ is added, where $(.,.)$ denotes the Cartan inner product. I don't understand how this can be. Doing this, one seems to replace the path integral as a sum over contributions from different $n$-vacua by just the path integral expanded in a unique $\theta$-vacuum. Indeed, this seems to come down to summing over a particular linear combination of $n$-vacua. But WHY should you take into account only a particular $\theta$-vacuum, even though the $\theta$-vacua are inequivalent (they have different energy density)? Why should e.g. QCD not just be studied in the true vacuum (the one with the lowest energy density), which is just $\theta=0$? Is this because the $\theta$-vacua, too, are topologically protected? Answer: the procedure of summing over all fluctuations around instanton contributions in the path integral could be replaced by applying the usual rules to a Lagrangian where the term $\sim \theta (*F, F)$ is added, where $(.,.)$ denotes the Cartan inner product. I don't think you are thinking about this correctly. If you do not add the $\theta$ term, that is the same as $\theta=0$. Even if $\theta=0$, you still need to sum over field configurations in the path integral with an arbitrary number of instantons. The $\theta$ term just weights these different field configurations differently, in such a way that you are in a particular non-zero $\theta$ vacuum.
You are certainly free to study QCD with $\theta=0$ if you want to; the point is there is an extra parameter which tells us about different sectors of QCD (built upon different states $|\theta\rangle$), and we can study these sectors just by adding a $\theta$ term to the path integral. Different values of $\theta$ correspond to different sectors in the sense that a state from one sector can't evolve to one from another sector. This is less like a topological protection, and more like an ordinary superselection rule (like between states of different charge). There is a trivial shift symmetry $|n\rangle\rightarrow |n+1\rangle$ of the theory, and this corresponds to $\theta$ being conserved (it is a bit like a conserved momentum in field space)
{ "domain": "physics.stackexchange", "id": 57752, "tags": "quantum-field-theory, quantum-chromodynamics, yang-mills, instantons" }
Which of the following compounds will form significant amounts of meta product during mono-nitration reaction?
Question: Hi, I tried to answer this question based wholly on the concepts of inductive effects, but haven't gotten anywhere with it. I would appreciate any help regarding these questions: What condition makes a compound give the meta product (instead of ortho or para)? I know there is some flaw with my approach, so what is the correct way to do it? Can this condition be applied to other compounds as well, or is it just for aromatic compounds? Answer: The answer to the question is (1) Aniline. This occurs because under standard strongly acidic nitrating conditions (typically conc. H2SO4/HNO3) the aniline NH2 is protonated, removing the ability of the nitrogen lone pair to donate any electrons. Consequently, the electron-withdrawing polar effect makes it m-directing. Some p-nitroaniline is formed because there is a small amount of the aniline freebase in equilibrium with the anilinium, but that is much more reactive than the anilinium. Further reading here. Also discussed in this question: Why does nitration of aromatic amines give meta directed nitro compounds?
{ "domain": "chemistry.stackexchange", "id": 11175, "tags": "aromatic-compounds, electrophilic-substitution" }
Separate business logic from Data Access Logic in the repository
Question: I have a PupilService which is calling the PupilRepository.AttachPupil() method. I have an N to M relation between SchoolclassCode and Pupil. Technical logic: A pupil can be related to many SchoolclassCodes, like "Math 7 a" or "English 8 c". Business logic: A pupil can be related only to SchoolclassCodes like "Math 7 a" or "English 7 c". That means it's not possible for a pupil to belong to 2 different class numbers. A pupil can only be attached to a smaller class number during the year (changes happen in mid-year), thus all higher class relations must be removed. The business logic from 1) and 2) should be put elsewhere. It just does not fit into the PupilService or another Repository method, because splitting the task up into finer-grained methods would cause more queries to do the same job. Furthermore, if I split up the business logic, repo.AttachPupil would only do half the job if the repository were called directly. I do not like that either. public async Task AttachPupil(int pupilId, int targetSchoolclassCodeId) { var pupil = new Pupil { Id = pupilId }; context.Pupils.Attach(pupil); var targetSchoolclassCode = await context.SchoolclassCodes.SingleAsync(s => s.Id == targetSchoolclassCodeId); await context.Entry(pupil).Collection(p => p.SchoolclassCodes).LoadAsync(); var existingLowerSchoolclasses = pupil.SchoolclassCodes.Where( s => s.SchoolclassNumber < targetSchoolclassCode.SchoolclassNumber); if (existingLowerSchoolclasses.Any()) { throw new PupilAdvancedNextSchoolclass("A pupil can only downgrade a class within a schoolyear"); } else { foreach (var schoolclassCodeToDetach in existingLowerSchoolclasses) { // Detach pupil from higher SchoolclassCode classes e.g. class 9 pupil.SchoolclassCodes.Remove(schoolclassCodeToDetach); } // Attach Pupil to a lower class because his grades/marks are too bad e.g.
in the mid-year to class 8 targetSchoolclassCode.Pupils.Add(pupil); await context.SaveChangesAsync(); } } How would you refactor this AttachPupil method so that the business logic is better separated from the data access logic? Answer: First of all I'd say that your foreach loop will never loop, because it is only reached when there are no elements in existingLowerSchoolclasses. In general I would recommend re-thinking your data structure, especially with a view towards next year and the year after, because then you'll wish you had identified classes like "English 8 c 2014" or "Math 7 d 2015". This will make it easier to enforce the constraints you described. If you're concerned about the number of queries, you could use a stored transaction. Even though that feels old-fashioned, in some cases it still makes sense to guard the access to N:M relations with those things. The question is always: Is it really a domain problem, or is it caused by the fact that your data structure doesn't adhere enough to the domain? If you can answer that, you'll know where to put the Data Access Logic. Last of all let me be honest: Whenever I see an N:M relation I get very suspicious about it. In most real-world situations, there are no anonymous N:M relations. A student started a class at a certain time and left it at another; also, different students could have different mentors within one course, and there might be many more attributes you might want to assign to that relationship in the future. Therefore I would advise you to remodel this relation, e.g. by introducing a ClassMembership entity.
{ "domain": "codereview.stackexchange", "id": 14423, "tags": "c#, repository" }
Dependency injection with Factory pattern sample code
Question: I'm trying to understand and learn about the SOLID principle and especially Dependency Injection (DI) and Factory pattern. Some background: I have a production application that uses some "helpers" classes but those are all-in-one classes that I just copy into each new project. One of the (most used) functions is serialization. Our customer has strict demands about history/revision tracing so all important requests are serialized to XML and inserted in Sql server. With SOLID, DI and Factory in mind I decided to rewrite ye old helpers into a fresh C# project and do things properly. Below is sample code for Serializers (I have three, XML, JSON and Binary), shown is only the Json class but they are all pretty much the same. My questions are: Have I correctly implemented the Factory pattern? Is the Dependency Injection the way it is supposed to be? I'm not exactly sure if I even covered both patterns, a mix or perhaps just one? I hope I made things clear, if not let me know, and thanks for helping me understand DI/Factory better. Interface namespace nUtility.Serializers { public interface ISerializer { string InputString { get; set; } object InputObject { get; set; } string SerializeToString<T>(); // Need <T> because of the Xml serializer. 
object DeserializeFromString<T>(); } } Factory namespace nUtility.Serializers { public class SerializersFactory { readonly ISerializer _iserializer; public SerializersFactory(ISerializer iserializer) { _iserializer = iserializer; } public string Serialize<T>() => _iserializer.SerializeToString<T>(); public object Deserialize<T>() => _iserializer.DeserializeFromString<T>(); } } Serializers (Json in this example) using Newtonsoft.Json; using System; namespace nUtility.Serializers { public class nSerializerJson : ISerializer { public string InputString { get; set; } public object InputObject { get; set; } public nSerializerJson(string inputString, object inputObject) { if (inputString == null) throw new ArgumentNullException(nameof(inputString)); if (inputObject == null) throw new ArgumentNullException(nameof(inputObject)); InputObject = inputObject; InputString = inputString; } public string SerializeToString<T>() { return JsonConvert.SerializeObject(InputObject); } public object DeserializeFromString<T>() { return JsonConvert.DeserializeObject<T>(InputString); } } } Program.cs (SampleValues.InputObjectList is a simple test class which populates a List with 3 items) var serializersFactory = new SerializersFactory(new nSerializerJson(expectedString, SampleValues.InputObjectList)); expectedString = serializersFactory.Serialize<List<SerializeTestObject>>(); Two more things: Why not a static class for serializers - I kinda agree with this answer on SO; however, on second thought, changing my class (the parameter creep/constructor point in the SO answer) would violate SOLID (the SRP), so I'm not sure if this is the correct decision. I know I'm breaking the Type and Namespace naming convention by putting "nSerializers" etc., but this "n" prefix has been in the company for 9 years now and we're not prepared to let go yet :).
Answer: public class SerializersFactory { readonly ISerializer _iserializer; public SerializersFactory(ISerializer iserializer) { _iserializer = iserializer; } public string Serialize<T>() => _iserializer.SerializeToString<T>(); public object Deserialize<T>() => _iserializer.DeserializeFromString<T>(); } Sorry to burst your bubble, but... this isn't a factory. In fact, it's rather confusing what purpose it serves, given how that serializer is implemented: public nSerializerJson(string inputString, object inputObject) { if (inputString == null) throw new ArgumentNullException(nameof(inputString)); if (inputObject == null) throw new ArgumentNullException(nameof(inputObject)); InputObject = inputObject; InputString = inputString; } It's very confusing why you need to specify both the serialized JSON string and the deserialized object for the constructor to not throw an ArgumentNullException. Actually, it's very hard to understand why one would even need this constructor. It's the methods that need the parameters; having them at instance-level seems to make the whole thing more or less useless - and confusing. What's the role of the <T> type parameter here? public string SerializeToString<T>() { return JsonConvert.SerializeObject(InputObject); } Oh, I see: string SerializeToString<T>(); // Need <T> because of the Xml serializer. Since when do implementations dictate what abstractions look like? That T is clearly begging for a method parameter there, of type T. Seems you need to stop and ask yourself what you're trying to accomplish, and why. Your system has a dependency on Newtonsoft.Json; your serializer is essentially wrapping the Newtonsoft API... in some awkward way. 
I would have used something like this: public interface ISerializer { string Serialize<T>(T deserialized); T Deserialize<T>(string serialized); // <out T> } And the implementation would be something like this: public class JsonSerializer : ISerializer { public string Serialize<T>(T deserialized) { return JsonConvert.SerializeObject(deserialized); } public T Deserialize<T>(string serialized) { return JsonConvert.DeserializeObject<T>(serialized); } } What else would you need? I think everything else you got here is over-engineered fluff that can be removed. There's nothing to DI here; ISerializer is the dependency. As for the factory, there's no need for one, since any type that needs an ISerializer can just tell the IoC container it needs one, by specifying an ISerializer parameter in its constructor.
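The corrected shape of the abstraction, with methods that take the value to (de)serialize as a parameter, translates directly to other languages. Here is a rough Python equivalent using the standard library's json module (class and method names are illustrative only, not part of the reviewed code):

```python
import json
from abc import ABC, abstractmethod
from typing import Any


class Serializer(ABC):
    """The dependency itself: consumers ask for a Serializer, not a factory."""

    @abstractmethod
    def serialize(self, deserialized: Any) -> str: ...

    @abstractmethod
    def deserialize(self, serialized: str) -> Any: ...


class JsonSerializer(Serializer):
    def serialize(self, deserialized: Any) -> str:
        return json.dumps(deserialized)

    def deserialize(self, serialized: str) -> Any:
        return json.loads(serialized)
```

A consumer simply takes a Serializer in its constructor; no factory is needed because the container (or the caller) supplies the concrete implementation.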
{ "domain": "codereview.stackexchange", "id": 16029, "tags": "c#, design-patterns, dependency-injection" }
What causes the vertical bands of light in an aurora?
Question: I was lucky enough to witness a medium-strength aurora recently in Iceland (KP ~3). We saw the classic rippling arc of green light, which at times seemed to consist of smaller flickering vertical columns of light with sharp boundaries. They were similar to the photo attached below. I believe they're described as 'Rayed Band (RB)' aurora. I understand the basics of aurora formation (charged solar particles exciting atmospheric particles, which emit light). However, I'm struggling to find any information about these vertical bands - I'd like to know specifically what causes them. Is it an optical phenomenon or something astrophysical? Are the atmospheric particles streaking downwards as they emit light? Answer: Individual rays in an aurora do not correspond to individual particles; they correspond to separated streams of particles. Here is a good article that explains aurora borealis. Edit 1: Take note: in the article it's implied that the rays correspond to magnetic field lines. That's a bit misleading. Magnetic "lines" don't really exist. They are just lines that we draw to point "downhill" in a magnetic field. But "downhill" lines can be drawn everywhere; they're not confined to specific places. The imaginary things we call magnetic field lines simply indicate the local direction of the field. An incoming, mostly uniform swarm of particles from the Sun can become separate particle streams via a very complicated interaction between that swarm, the Earth's magnetic field, and the Earth's upper atmosphere. The streams, once formed, follow the local direction of the geomagnetic field. Edit 2: This is an intriguing question. A bit of further searching turned this up, about Birkeland currents, which addresses filamentation (breaking into separate streams) of currents in the aurora.
{ "domain": "physics.stackexchange", "id": 61493, "tags": "optics, atmospheric-science, plasma-physics, geomagnetism, solar-wind" }
Sampling and reconstruction of signal in Matlab
Question: I'm trying to write a program in Matlab that samples (using the Nyquist theorem) and recovers a signal. I got stuck on the recovery part... the recovered signal doesn't match the original one (see photo). I also have to make a graph that shows every sinc separately (before the sum), like in the photo. Also, I have to use the formula from the photo. This is my code: clear; clc; subplot(3,1,1); f = 30e3; fs = 2 * f; Ts = 1/fs; t1 = 0:1e-7:5/f; x1 = cos(2 * pi * f * t1); plot(t1,x1); hold on; t2 = 0:Ts:5/f; x2 = cos(2 * pi * f * t2); stem(t2,x2); xlabel('Time'); ylabel('Amplitude'); xr = zeros(length(t1)); for i=1:length(t1) for j=1:length(x2) xr(i)= xr(i) + x2(j)*sinc(2*fs*t1(i)-j); end end plot(t1,xr); xlabel('Time'); ylabel('Amplitude'); legend('x(t)','x(nT)','x_r(t)'); What do I need to change and add? Thanks in advance! Answer: You are pretty close. Here is a hint: you need to make sure that your sinc pulses are lining up with your samples. You can check this by breaking it down and plotting individually the sinc pulse train that you are getting. Make sure that these line up as shown in your picture and you should be good to go.
%% Sampling and reconstruction demo clear,clc,close all; %% Parameters F = 30; % frequency of signal [Hz] Fs = 2*F; % sampling rate [Hz] Ts = 1/Fs; % sampling period [sec] %% Generate "continuous time" signal and discrete time signal tc = 0:1e-4:5/F; % CT axis xc = cos(2*pi*F*tc); % CT signal td = 0:Ts:5/F; % DT axis xd = cos(2*pi*F*td); % DT signal N = length(td); % number of samples %% Reconstruction by using the formula: % xr(t) = sum over n=0,...,N-1: x(nT)*sin(pi*(t-nT)/T)/(pi*(t-nT)/T) % Note that sin(pi*(t-nT)/T)/(pi*(t-nT)/T) = sinc((t-nT)/T) % sinc(x) = sin(pi*x)/(pi*x) according to MATLAB xr = zeros(size(tc)); sinc_train = zeros(N,length(tc)); for t = 1:length(tc) for n = 0:N-1 sinc_train(n+1,:) = sin(pi*(tc-n*Ts)/Ts)./(pi*(tc-n*Ts)/Ts); xr(t) = xr(t) + xd(n+1)*sin(pi*(tc(t)-n*Ts)/Ts)/(pi*(tc(t)-n*Ts)/Ts); end end %% Plot the results figure hold on grid on plot(tc,xc) stem(td,xd) plot(tc,xr) xlabel('Time [sec]') ylabel('Amplitude') %% Sinc train visualization figure hold on grid on plot(tc,xd.'.*sinc_train) stem(td,xd) xlabel('Time [sec]') ylabel('Amplitude')
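The reconstruction formula used in the answer can be checked in plain Python as well. This sketch, with arbitrarily chosen parameters, oversamples a cosine and reconstructs it both at a sample instant (where the sum is exact) and at an off-grid instant (approximate, because the sinc sum is truncated to a finite record):

```python
import math


def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)


def reconstruct(samples, Ts, t):
    """Whittaker-Shannon: x_r(t) = sum_n x[n] * sinc((t - n*Ts)/Ts)."""
    return sum(x_n * sinc((t - n * Ts) / Ts) for n, x_n in enumerate(samples))


F = 1.0           # signal frequency (arbitrary choice)
Fs = 8.0          # sample rate well above Nyquist, so truncation error is small
Ts = 1.0 / Fs
N = 128           # 16 periods of the cosine
samples = [math.cos(2 * math.pi * F * n * Ts) for n in range(N)]

# Evaluate mid-record, away from the truncation edges
t0 = (N // 2) * Ts + 0.3 * Ts
err = abs(reconstruct(samples, Ts, t0) - math.cos(2 * math.pi * F * t0))
```

Note that at an exact sample instant only one sinc term is nonzero, which is the alignment property the answer tells the asker to verify.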
{ "domain": "dsp.stackexchange", "id": 9462, "tags": "matlab, sampling" }
Implementation of Stochastic Gradient Descent in Python
Question: I am attempting to implement a basic Stochastic Gradient Descent algorithm for a 2-d linear regression in Python. I was given some boilerplate code for vanilla GD, and I have attempted to convert it to work for SGD. Specifically -- I am a little unsure as to whether I correctly implemented the loss function and partial derivatives, since I am new to regressions in general. I do see that the errors tend to "zig zag" as expected. Does the following look like a correct implementation or have I made any mistakes? #sample data data = [(1,1),(2,3),(4,3),(3,2),(5,5)] def compute_error_for_line_given_points(b, m, points): totalError = 0 x = points[0] y = points[1] return float(totalError + (y - (m * x + b)) ** 2) def step_gradient(b_current, m_current, points, learningRate): N = float(1) for i in range(0, 1): x = points[0] y = points[1] b_gradient = -(2/N) * (y - ((m_current * x) + b_current)) #this is the part I am unsure m_gradient = -(2/N) * x * (y - ((m_current * x) + b_current)) #here as well new_b = b_current - (learningRate * b_gradient) new_m = m_current - (learningRate * m_gradient) return [new_b, new_m] err_log = [] coef_log = [] b = 0 #initial intercept m = 0 #initial slope iterations = 4 for i in range(iterations): #epochs for point in data: #one point at a time for SGD err = compute_error_for_line_given_points(b,m, point) err_log.append(err) b,m = step_gradient(b,m,point,.01) coef_log.append((b,m)) Answer: There is only one small difference between gradient descent and stochastic gradient descent. Gradient descent calculates the gradient based on the loss function calculated across all training instances, whereas stochastic gradient descent calculates the gradient based on the loss in batches. Both of these techniques are used to find optimal parameters for a model. Let us try to implement SGD on this 2D dataset.
The algorithm The dataset has 2 features; however, we will want to add a bias term, so we insert a column of ones into the data matrix. shape = x.shape x = np.insert(x, 0, 1, axis=1) Then we initialize our weights; there are many strategies to do this. For simplicity I will set them all to 1, however setting the initial weights randomly is probably better in order to be able to use multiple restarts. w = np.ones((shape[1]+1,)) The first figure shows our initial line. Now we will iteratively update the weights of the model if it mistakenly classifies an example. for ix, i in enumerate(x): pred = np.dot(i,w) if pred > 0: pred = 1 elif pred < 0: pred = -1 if pred != y[ix]: w = w - learning_rate * pred * i This line is the weight update w = w - learning_rate * pred * i. We can see that doing this process continuously will lead to convergence; the remaining figures show the separator after 10, 20, 50, and 100 epochs. The code The dataset for this code can be found here. The function which will train the weights takes in the feature matrix $x$ and the targets $y$. It returns the trained weights $w$ and a list of historical weights encountered throughout the training process.
%matplotlib inline import numpy as np import matplotlib.pyplot as plt def get_weights(x, y, verbose = 0): shape = x.shape x = np.insert(x, 0, 1, axis=1) w = np.ones((shape[1]+1,)) weights = [] learning_rate = 10 iteration = 0 loss = None while iteration <= 1000 and loss != 0: for ix, i in enumerate(x): pred = np.dot(i,w) if pred > 0: pred = 1 elif pred < 0: pred = -1 if pred != y[ix]: w = w - learning_rate * pred * i weights.append(w) if verbose == 1: print('X_i = ', i, ' y = ', y[ix]) print('Pred: ', pred ) print('Weights', w) print('------------------------------------------') loss = np.dot(x, w) loss[loss<0] = -1 loss[loss>0] = 1 loss = np.sum(loss - y ) if verbose == 1: print('------------------------------------------') print(np.sum(loss - y )) print('------------------------------------------') if iteration%10 == 0: learning_rate = learning_rate / 2 iteration += 1 print('Weights: ', w) print('Loss: ', loss) return w, weights We will apply this SGD to our data in perceptron.csv. df = np.loadtxt("perceptron.csv", delimiter = ',') x = df[:,0:-1] y = df[:,-1] print('Dataset') print(df, '\n') w, all_weights = get_weights(x, y) x = np.insert(x, 0, 1, axis=1) pred = np.dot(x, w) pred[pred > 0] = 1 pred[pred < 0] = -1 print('Predictions', pred) Let's plot the decision boundary x1 = np.linspace(np.amin(x[:,1]),np.amax(x[:,2]),2) x2 = np.zeros((2,)) for ix, i in enumerate(x1): x2[ix] = (-w[0] - w[1]*i) / w[2] plt.scatter(x[y>0][:,1], x[y>0][:,2], marker = 'x') plt.scatter(x[y<0][:,1], x[y<0][:,2], marker = 'o') plt.plot(x1,x2) plt.title('Perceptron Seperator', fontsize=20) plt.xlabel('Feature 1 ($x_1$)', fontsize=16) plt.ylabel('Feature 2 ($x_2$)', fontsize=16) plt.show() To see the training process you can print the weights as they changed through the epochs. 
for ix, w in enumerate(all_weights): if ix % 10 == 0: print('Weights:', w) x1 = np.linspace(np.amin(x[:,1]),np.amax(x[:,2]),2) x2 = np.zeros((2,)) for ix, i in enumerate(x1): x2[ix] = (-w[0] - w[1]*i) / w[2] print('$0 = ' + str(-w[0]) + ' - ' + str(w[1]) + 'x_1'+ ' - ' + str(w[2]) + 'x_2$') plt.scatter(x[y>0][:,1], x[y>0][:,2], marker = 'x') plt.scatter(x[y<0][:,1], x[y<0][:,2], marker = 'o') plt.plot(x1,x2) plt.title('Perceptron Seperator', fontsize=20) plt.xlabel('Feature 1 ($x_1$)', fontsize=16) plt.ylabel('Feature 2 ($x_2$)', fontsize=16) plt.show()
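For contrast with the perceptron walkthrough above, here is a minimal pure-Python sketch of what the question originally asked about: per-sample SGD for a 2-D linear regression. The learning rate and epoch count are arbitrary illustrative choices:

```python
def sgd_linear_regression(points, lr=0.01, epochs=5000):
    """Fit y ~ m*x + b, updating after every single point (true SGD)."""
    b, m = 0.0, 0.0
    for _ in range(epochs):
        for x, y in points:
            error = (m * x + b) - y
            # gradients of the single-point squared loss (y_hat - y)**2
            m -= lr * 2 * error * x
            b -= lr * 2 * error
    return b, m


data = [(1, 3), (2, 5), (3, 7), (4, 9)]   # noise-free y = 2x + 1
b, m = sgd_linear_regression(data)
```

Because the data are noise-free, the exact solution (b = 1, m = 2) is a fixed point of every per-sample update, so the iterates settle onto it rather than zig-zagging indefinitely.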
{ "domain": "datascience.stackexchange", "id": 2998, "tags": "linear-regression, gradient-descent" }
Algorithm to Determine if (Union of Cartesian Products of Subsets) equals (Cartesian Product of Full Sets)
Question: I have already asked this question in StackOverflow (open bounty closing on Aug 25th). Let's say I have some finite sets: A, B, ..., K I also have A1, A2, ... An, which are subsets of A; B1, B2, ... Bn, which are subsets of B, etc. Let's say S is the cartesian product A x B x ... x K and Sn is the cartesian product of An x Bn x ... x Kn Is there an algorithm to efficiently determine if the union of all Sn is equivalent to S? Some pointers to literature where I can study this problem are greatly appreciated. Answer: The problem is coNP-complete and so unlikely to have a poly-time algorithm. (I'm sure the observation was made before; I don't know where, but look in Garey-Johnson.) Here is a simple reduction from 3-UNSAT $= \{\psi: \psi$ is an unsatisfiable 3-CNF$\}$. Say $\psi(x_1, \ldots, x_t) = \bigwedge_{j = 1}^m (y_{j, 1}\vee y_{j, 2} \vee y_{j, 3})$, where each $y_{j, a}$ is either a variable or a negated variable. In your notation, let $A = B = \ldots = K = \{0, 1\}$, with $t$ such sets (one for each variable). Think of each element of $S = A^t$ as an assignment to $x_1, \ldots, x_t$. For each $j \in [m]$, create a Cartesian product set $S_j$ which describes the set of assignments that will falsify the $j^{th}$ clause. Note that this can easily be done: we restrict precisely the 3 components corresponding to the variables appearing in this clause. The union of the $S_j$'s equals all of $S = \{0, 1\}^t$ iff every assignment to $\psi$ falsifies some clause, i.e., iff $\psi$ is unsatisfiable.
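For small instances the question can be settled by brute force, which also makes the coNP structure visible: a "no" answer comes with a short witness, namely a tuple not covered by any of the products. A Python sketch (function names are my own):

```python
from itertools import product


def covers(full_sets, product_families):
    """Check whether the union of the Cartesian products of the subset
    families equals the full Cartesian product.
    full_sets: list of sets [A, B, ...]
    product_families: list of tuples (An, Bn, ...), one per family.
    Returns (True, None) on coverage, else (False, uncovered_tuple)."""
    for point in product(*full_sets):
        if not any(all(coord in sub for coord, sub in zip(point, family))
                   for family in product_families):
            return False, point   # short certificate of non-coverage
    return True, None


A, B = {0, 1}, {0, 1}
# {0} x B  union  {1} x {0}  misses the point (1, 1)
ok, witness = covers([A, B], [({0}, {0, 1}), ({1}, {0})])
ok2, _ = covers([A, B], [({0}, {0, 1}), ({1}, {0, 1})])
```

The exhaustive loop is exponential in the number of component sets, which is exactly what the coNP-completeness result says we should expect to be unavoidable in the worst case.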
{ "domain": "cstheory.stackexchange", "id": 2219, "tags": "ds.algorithms" }
Loading either all shops or a specifically named shop from a DataTable
Question: public delegate DataTable loadDataTable(); DataTable shops = (cmbShop.Text == "All Shops") ? new loadDataTable(() => { Program.con.GET_Table_From_DataBase("sh", "select * from shops "); return Program.con.dst.Tables["sh"]; } ).Invoke() : new loadDataTable(() => { Program.con.GET_Table_From_DataBase("sh", "select * from shops where shopname='" + cmbShop.Text + "' "); return Program.con.dst.Tables["sh"]; } ).Invoke(); I am just setting the value of DataTable shops here. I am learning about lambda expressions, so just for learning purposes, I want to know if this code can be shortened while using lambda expressions. Answer: You don't need any lambda expressions at all. You can write if (cmbShop.Text == "All Shops") Program.con.GET_Table_From_DataBase("sh", "select * from shops "); else Program.con.GET_Table_From_DataBase("sh", "select * from shops where shopname='" + cmbShop.Text + "' "); DataTable shops = Program.con.dst.Tables["sh"];
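The same refactoring, choose the query first and then share the single retrieval path, can be sketched in any language. A hypothetical Python rendering (names are made up for illustration; a parameter placeholder is used instead of concatenating the value into the SQL string):

```python
def load_shops_query(shop_filter):
    """Pick the SQL once; the single retrieval call downstream is shared.
    Returns (sql, args) so the value travels as a bound parameter."""
    if shop_filter == "All Shops":
        return "select * from shops", ()
    return "select * from shops where shopname = ?", (shop_filter,)


sql_all, args_all = load_shops_query("All Shops")
sql_one, args_one = load_shops_query("Corner Shop")
```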
{ "domain": "codereview.stackexchange", "id": 426, "tags": "c#, linq" }
Fast way to calculate source response using Green's function
Question: I am a newbie in scientific computing in Python. The function I am trying to calculate is the response of the wave equation for a given source term (for reference, see Equation 11.67 in https://webhome.phy.duke.edu/~rgb/Class/phy319/phy319/node75.html). $$\phi(\pmb{x},t) = \int_V G_+(\pmb{x}, t; \pmb{x'}, t')\rho(\pmb{x'}, t')d^3x'dt'$$ To give context, the term phi in equation 11.67 is the displacement and the term rho can be thought of as a source of disturbance. Now, in my case, the problem is constructed in a spatial dimension of 2 (x-y). Thus, I have to iterate the equation over grid points in x, y and t. This makes the overall calculation extremely time-consuming. Below are my lines of code: import numpy as np # Assigning grid points Nx = 100 Ny = 100 Nt = 100 x = np.linspace(-Nx, Nx, 100) y = np.linspace(-Ny, Ny, 100) t = np.linspace(0, Nt-1, 100) # Equation to solve # phi(x,y,t) = ∫∫∫ G(x,y,t; x',y',t') . Source(x',y',t') dx' dy' dt' # G(x,y,t; x',y',t') = Green's Function # phi = displacement by the wave phi = np.zeros((Nt,Ny,Nx)) # Define Function to realize Green's Function for Wave Equation def gw(xx, yy, tt, mm, nn, ss): numer = (tt-ss) - np.sqrt((xx-mm)**2+(yy-nn)**2) denom = ((tt-ss)**2-(xx-mm)**2-(yy-nn)**2) if denom < 0: denom = 0 else: denom = np.sqrt(denom) kk = np.heaviside(numer,1)/(2*np.pi*denom+1) return (kk) # Define Function to realize Gaussian Disturbance def g_source(xx, yy, tt): spatial_sigma = 5 # spatial width of the Gaussian temporal_sigma = 30 # temporal width of the Gaussian onset_time = 20 # time when Gaussian Disturbance reaches its peak kk = np.exp(-(np.sqrt((xx)**2+(yy)**2)/spatial_sigma)**2)*np.exp(-((tt-onset_time)/temporal_sigma)**2) return (kk) for k in range (Nt): for j in range (Ny): for i in range (Nx): # this is the Green's function evaluated at each grid point green = np.array([gw(x[i],y[j],t[k],i_grid,j_grid,k_grid) for k_grid in t for j_grid in y for i_grid in x]) green = green.reshape(Nt, Ny, Nx) #
this is the source term (rho), here it is a Gaussian Disturbance gauss = np.array([g_source(i_grid,j_grid,k_grid) for k_grid in t for j_grid in y for i_grid in x]) gauss = gauss.reshape(Nt, Ny, Nx) # performing integration phi[k,j,i] = sum(sum(sum(np.multiply(green,gauss)))) Please let me know if there is a faster way to do this integration. Thanks in advance. I deeply appreciate your patience and support. Answer: Performance-wise In general: You use numpy, but you write it almost like Fortran. Python with numpy is good for scientific programming and computing as long as you don't do many loops; read this first: A beginner's guide to using Python for performance computing. If you really need to do tight loops, then use Cython. But most of the time you can avoid that by expressing the loop as some functional operation on the whole array. Most numpy functions can operate on a whole array in one pass; use that. Always prefer y[:]=np.sqrt(x[:]) and avoid `for i in xrange(N): y[i]=np.sqrt(x[i])`. Operations like dot, convolve, einsum, ufunc.reduce are very potent for expressing most scientific algorithms in a more abstract way. Use advanced array slicing: you can express most inner ifs with boolean mask arrays or integer index arrays, e.g. f[ f>15.0 ] = 15.0 clamps all elements in array f which are >15.0 to 15.0; you can also store the boolean mask mask = (f>15.0) and then use it like f[mask]+=g[mask]. This way you can express branching as fully functional programs/expressions with arrays. Do not construct new np.array (np.zeros, np.ones etc.) too often (i.e. inside tight loops). Optimal performance is obtained if you prepare all arrays you need at the beginning. Avoid frequent conversion between list and array. To address your code example in particular: Green's functions are basically convolutions. I'm pretty sure you can express it using e.g. scipy.ndimage.filters.convolve if your convolution kernel is large (i.e.
pixels interact with more than a few neighbors), then it is often much faster to do it in Fourier space (convolution transforms as multiplication) using np.fftn, with O(N log N) cost. Can't def gw(xx, yy, tt, mm, nn, ss): operate on the whole array, rather than on individual numbers? if denom < 0: can be expressed using a boolean mask array like mask = denom > 0; denom[mask] = np.sqrt(denom[mask]) Never do this if you can avoid it: for k in range (Nt): for j in range (Ny): for i in range (Nx): the operations seem possible to rewrite so that they operate on the whole x-y space at once. This is how you do convolution-like operations in numpy (from here) # The actual iteration u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy2 + (u[1:-1,0:-2] + u[1:-1, 2:])*dx2)*dnr_inv This is so horrible: green = np.array([gw(x[i],y[j],t[k],i_grid,j_grid,k_grid) for k_grid in t for j_grid in y for i_grid in x]) List comprehension is relatively fast, but still much slower than numpy array operations (which are implemented in C). Do not create a temporary list and convert it to a temporary array; you lose a lot of time doing that. Why can't you just preallocate Green[Nx,Ny,:] and Gauss[Nx,Ny,:] and use them as a whole?
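The remark that convolution becomes multiplication in Fourier space is easy to verify directly. This pure-Python sketch uses an O(N^2) DFT for clarity; in practice you would of course use np.fft:

```python
import cmath


def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]


def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]


def circular_convolve(x, h):
    """Direct O(N^2) circular convolution."""
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]


x = [1.0, 2.0, 0.0, -1.0, 3.0, 0.5, 0.0, 1.0]
h = [0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.25]

direct = circular_convolve(x, h)
# convolution theorem: pointwise multiply the spectra, then invert
via_fft = [c.real for c in idft([X * H for X, H in zip(dft(x), dft(h))])]
max_err = max(abs(a - b) for a, b in zip(direct, via_fft))
```

Both routes agree to machine precision; the payoff of the Fourier route is that a fast transform turns the O(N^2) sum into O(N log N) work.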
{ "domain": "codereview.stackexchange", "id": 35798, "tags": "python, python-3.x, numpy" }
Generating good synchronization sequences
Question: As part of my work I am putting together a simple bursty BPSK communication system. I want to make the demodulation easy, so I am going to put a synchronization sequence at the beginning of each burst to help the receiver lock onto the frequency, phase, and timing. In order to get good synchronization information and avoid false positives, the autocorrelation of the synchronization sequence should be as close to a Dirac delta as possible. There may also be additional constraints, like starting the sequence with all 1's to help the receiver get frequency lock quickly. How does one go about generating such a sequence? Is there a systematic way of doing it, or do you use common sense and trial and error? Answer: A synchronization sequence generally needs the property that its autocorrelation function resembles an impulse. There are two possible autocorrelation functions that can be considered. For a (real-valued) sequence $x$ of length $N$, the periodic autocorrelation function is $$R_x[n] = \sum_{k=0}^{N-1}x[k]x[k+n]$$ where the sequence is assumed to extend periodically so that $x[k+n] = x[k+n\bmod N]$. In this case, one commonly used solution is to choose $x$ to be a maximal-length binary linear feedback shift-register (LFSR) sequence of period $N = 2^m-1$ whose periodic autocorrelation function is $$R_x[n] = \begin{cases} N, & n \equiv 0 \bmod N,\\ -1, & n \not\equiv 0 \bmod N, \end{cases}$$ which resembles an impulse train of period $N$ except that the "out-of-phase" value of the periodic autocorrelation function is not identically zero, but is small (and constant) compared to the "in-phase" value of $N$. The other form of autocorrelation function is the aperiodic autocorrelation function which is given by $$C_x[n] = \begin{cases}\sum_{k=0}^{N-1-n}x[k]x[k+n], & 0 \leq n < N,\\ \sum_{k=0}^{N-1+n}x[k-n]x[k], & -N < n < 0,\\ 0, & |n| \geq N.
\end{cases}$$ Now, for binary sequences where $x$ takes on values in $\{+1,-1\}$, the value of $C_x[n]$ alternates between even and odd integers as $n$ varies from $-(N-1)$ to $(N-1)$, and so a "flat" out-of-phase aperiodic autocorrelation function is impossible to achieve. The closest we can hope to come is to find sequences for which $|C_x[n]| \leq 1$ for $1\leq n < N$. Sequences with this property are called Barker sequences, and the longest known Barker sequence is of length $13$. It is widely believed, though not proven, that Barker sequences of longer lengths do not exist, and numerical studies show that if such a sequence exists, it must be so long that it will not be usable in any practical scheme as a synchronization preamble.
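The aperiodic autocorrelation property is easy to verify numerically, here for the known length-13 Barker sequence:

```python
def aperiodic_autocorr(x, n):
    """C_x[n] for n >= 0 on a real-valued sequence x."""
    N = len(x)
    return sum(x[k] * x[k + n] for k in range(N - n))


barker13 = [+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1]
C = [aperiodic_autocorr(barker13, n) for n in range(len(barker13))]
```

The peak C[0] equals the sequence length 13, and every off-peak value has magnitude at most 1, which is exactly the Barker property described above.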
{ "domain": "dsp.stackexchange", "id": 495, "tags": "autocorrelation, signal-detection" }
Functional Fibonacci in OCaml
Question: I'm very new to OCaml, and as an exercise I decided to implement the nth Fibonacci algorithm (return a specific Fibonacci number, given an index) in different ways, using what I've learned so far. I also want to start practicing test-driven development in OCaml, so I also included some basic tests using assertions. Here is the code: (** Get the nth fibonacci number using lambda expression and if-then-else. *) let rec fibo_lamb = fun n -> if n < 3 then 1 else fibo_lamb (n - 1) + fibo_lamb (n - 2) (** Get the nth fibonacci number using if-then-else. *) let rec fibo_if n = if n < 3 then 1 else fibo_if (n - 1) + fibo_if (n - 2) (** Get the nth fibonacci number using pattern matching long form. *) let rec fibo_mtch n = match n with | 1 | 2 -> 1 | n -> fibo_mtch (n - 1) + fibo_mtch (n - 2) (** Get the nth fibonacci number using pattern matching short form. *) let rec fibo_ptrn = function | 1 | 2 -> 1 | n -> fibo_ptrn (n - 1) + fibo_ptrn (n - 2) (** Get the nth fibonacci number using tail recursion. *) let fibo_tail n = let rec loop n last now = if n < 3 then now else loop (n - 1) (now) (last + now) in loop n 1 1 (** Get the nth fibonacci number using lists and tail recursion. *) let fibo_list n = let rec loop n xs = if n < 3 then List.nth xs 1 else loop (n - 1) [List.nth xs 1; List.hd xs + List.nth xs 1] in loop n [1; 1] (** Unit test a fibo function. *) let test_fibo_fun f id = assert (f 1 == 1); assert (f 2 == 1); assert (f 3 == 2); assert (f 4 == 3); assert (f 5 == 5); assert (f 6 == 8); id ^ " passed all tests!" |> print_endline (** Perform all unit tests for fibo functions. *) let run_fibo_tests () = test_fibo_fun fibo_lamb "fibo_lamb"; test_fibo_fun fibo_if "fibo_if"; test_fibo_fun fibo_mtch "fibo_mtch"; test_fibo_fun fibo_ptrn "fibo_ptrn"; test_fibo_fun fibo_tail "fibo_tail"; test_fibo_fun fibo_list "fibo_list"; "all fibo unit tests passed!" 
|> print_endline let () = run_fibo_tests () Answer: Clearly as I'm sure you're aware, all of your non-tail-recursive solutions have pretty horrendous performance characteristics. But I want to look at one of your tail-recursive solutions. (** Get the nth fibonacci number using lists and tail recursion. *) let fibo_list n = let rec loop n xs = if n < 3 then List.nth xs 1 else loop (n - 1) [List.nth xs 1; List.hd xs + List.nth xs 1] in loop n [1; 1] A list is a data structure that by its very nature has any number of elements. The lists you use always have two elements. This is a much better place to use a tuple. let fibo_tuple n = let rec loop n (a, b) = if n < 3 then b else loop (n - 1) (b, a + b) in loop n (1, 1) If you're going to use a list, you can still clean it up with pattern-matching, rather than calling List.hd and List.nth. let fibo_list n = let rec loop n [a; b] = if n < 3 then b else loop (n - 1) [b; a + b] in loop n [1; 1] You will be warned about non-exhaustive pattern-matching because again, lists are the wrong data structure to use here, but your code never creates a list with anything other than two elements.
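The tuple-accumulator idea in the suggested fibo_tuple translates almost verbatim to other languages. A Python rendering of the same pattern, keeping the review's 1-indexed convention (fib 1 = fib 2 = 1):

```python
def fib(n):
    """Iterative loop carrying the last two Fibonacci values as a pair,
    the same accumulator shape as the OCaml (a, b) -> (b, a + b) step."""
    a, b = 1, 1            # fib(1), fib(2)
    for _ in range(n - 2): # zero iterations needed for n <= 2
        a, b = b, a + b
    return b
```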
{ "domain": "codereview.stackexchange", "id": 44168, "tags": "algorithm, functional-programming, comparative-review, fibonacci-sequence, ocaml" }
Determining a value in an algorithm on its first run
Question: I was given the following algorithm as a solution to one of my problems. However, I am baffled at understanding how c' is ever initialized. The first if statement can never be reached, as c' is never set the first time. Can someone explain this to me? Am I just missing where c' gets set? This is for Kleinberg and Tardos, Chapter 11, Question 12. Answer: Posting Yuval Filmus's comment as an answer since they didn't: It’s a typo - should be $c’=$ instead of $c’-$ in line 5.
{ "domain": "cs.stackexchange", "id": 13911, "tags": "algorithms, algorithm-analysis, approximation" }
Potential difference of conductor with induced load
Question: If a metallic sphere is grounded and close to a positive charge q, it will be charged with -q. Let's say that the electrons will arrive through the grounding. This charge will cover the surface of the sphere and counter the field outside the sphere. This sphere will have a potential of zero with the ground as reference. Now another scenario. If the sphere is not grounded then induced charge on the surface will counter the external field. Inside the sphere I read that a second field will be generated to nullify the field of the external charge. What will the potential of this sphere be? If it is different from zero, why? Please give a qualitative answer and not a math one. Answer: I'm assuming you mean a solid metallic sphere, not a shell. The analysis is slightly different for a shell but rests on the same principles. It is actually not correct that the total charge induced on the sphere in the case of grounding is $-q$. The basic concepts you need to know about electrostatics with conductors are that the entirety of the conductor must be an equipotential volume and that any net charge distribution must reside on the surface of the conductor. (Do you see why this must be the case? Remember that we are working in the statics case, so charges must not be accelerating and hence charges should experience no net force). Now, if you will forgive me, I will do some simplified math first, and then try to formulate the reasoning "qualitatively" as you ask.
Mathy (only slightly) version This problem can be solved using the method of image charges, which basically works as follows: if we can put some additional "image" charges in the sphere, figure out the potential in all of space, and then restrict our solution to the outside, the solution we get is a solution outside the sphere of Maxwell's equations with just our original external charge $q$ (Maxwell's equations are local differential equations, and so if we have restricted ourselves to the region outside the sphere, we need only consider charge densities outside the sphere.) Thus, if we can choose our image charges so that the appropriate boundary conditions are satisfied, i.e., the surface of the sphere is an equipotential with the appropriate potential, then the solution is correct for our original problem (this can be more rigorously formulated as a uniqueness theorem for solutions to the Poisson equation with Dirichlet boundary conditions). Now suppose that the radius of the sphere is $R$ and the external charge $q$ is at a distance $d$ from the centre of the sphere. You can easily check (I will leave it up to you, if you'd like to, and if you need any help verifying this leave me a comment and I can edit my answer to show how), that if we place an image point charge of charge $-qR/d$ at a point in the sphere a distance $R^2/d$ from the centre of the sphere, on the line connecting the centre of the sphere to the external charge $q$, the potential on the surface of the sphere (i.e. at $r=R$) vanishes. This corresponds, as you have noted, to the case where the sphere is grounded. (Note that this is not to say that physically, there is a point charge that develops in the middle of the body of the metallic sphere - this is impossible in electrostatics. Rather, a surface charge density is created that from the outside behaves as if it were such a point charge. 
We could compute exactly what this surface charge density looks like fairly straightforwardly from the so called matching conditions at the interface between the sphere and the outside; again, if you'd like me to explain this to you, leave me a comment.) The first thing to note is that in the grounded case, the total charge contained in the sphere is not $-q$, but rather $-qR/d$. You might feel that this is jumping the gun a bit: the image charge is an idealisation, not a physical charge density, and you might think that the physical surface charge density may have a different total charge. However, remember that whichever situation we examine, the solution for the potential outside the sphere is exactly the same, and therefore, the electric field outside the sphere is exactly the same. If we apply Gauss' law to a surface containing the metallic sphere, but not containing the external charges, the surface integral of the electric flux in the image charges picture and the physical surface charge distribution picture would have to be the same, since the electric fields are the same - but in both cases this relates directly to the enclosed charges, and so we can state generically that the total charge of the physical surface charge distribution must be the same as the total charge of the image charge in the distribution. In this case specifically, it must be $-qR/d$, not $-q$. Now, coming to the second part of your question - what if we don't ground the sphere? Well, we would need to modify our image charge distribution somehow to account for the new boundary conditions. There are two conditions that we need to meet: first, the total charge in the image charge distribution must be zero, and secondly, the surface of the sphere must remain an equipotential surface (though possibly at a different potential). But by superposition, this is easy to do - just place a charge of $qR/d$ at the centre of the sphere! 
The total charge in the image charge distribution is now $-qR/d+qR/d=0$, and the second image charge at the centre simply changes the potential at the surface of the sphere from $0$ to $k(qR/d)/R=kq/d$. A curious point to note here: as long as $R<d$, i.e., the external charge is outside the sphere, the induced potential on the surface of the sphere doesn't actually depend on the radius of the sphere $R$ - and it is not at all obvious why this should be the case. Qualitative version Now, to come to the "qualitative" part of the answer. Can I give a simple, math-free explanation why the induced charge on the sphere is $-qR/d$ in the case of grounding? No, I can't (perhaps someone smarter could). But we can see that it could not be simply $-q$. If I had the external charge at a distance $d\gg R$ away from the centre of the sphere, i.e., very, very far away, the potential due to the external charge at all points on the surface of the sphere would be approximately zero, and so there would be no need for an induced surface charge density to correct the potential in the sphere. In fact, we can deduce from this argument that our result for the total charge density must go to $0$ as $d$ goes to infinity, which is the case for our answer. Having accepted the answer to the first part, for the second part we are basically asking: ok I know how to distribute surfaces charges on the sphere so that the total potential in the sphere is $0$; but the total charge of these surface charges is $-qR/d\neq 0$. If I want to make the sphere neutral, I will need to distribute an additional charge of $qR/d$ over the surface of the sphere. Now, I am allowed to change the potential of the surface of the sphere from zero, but I must still respect the condition that the potential is constant on the sphere - thus, I must distribute the charge of $qR/d$ uniformly on the surface of the sphere, by symmetry considerations. 
Now, what is the potential due to this uniform surface charge distribution of $qR/d$? From the shell theorem (look it up if you are unfamiliar, it is a neat result, and fairly intuitive), since this charge distribution is spherically symmetric, outside of itself the potential due to this charge shell behaves as if all the charge was concentrated at a point at the centre, and inside the charge shell the potential is constant. That is, outside of the sphere, for $r>R$, the potential due to this additional, uniform surface charge distribution of $qR/d$ is given by $kqR/(dr)$. Evaluating this at (technically, just outside) the surface of the sphere, we get $kq/d$, as before - and since the potential is continuous, and constant inside the sphere, it must be $kq/d$ everywhere in the sphere. Since the potential without this distribution was zero throughout the sphere, by the principle of superposition, the total potential is $kq/d\neq 0$ throughout the sphere. Even more qualitative version Even more qualitatively, without assuming the result in the grounded case, we can make an argument for why the potential cannot be generically zero (though not much more). Assume WLOG that the point charge $q$ is positively charged, and suppose that the potential was generically zero throughout the sphere. It is intuitively obvious that the surface charge density induced on the sphere must be most negative at the point closest to the external charge (the "north pole") and least negative (equivalently most positive) at the diametrically opposite point, furthest from the external charge (the "south pole"), and that travelling along a line of "constant longitude" between the two points, the surface charge density must vary continuously, becoming less and less negative (equivalently, more and more positive). 
Now, in order for the sphere to be neutral overall, at least some part of the surface must have positive surface charge density, and so from the argument above there exists some "latitude" above which the surface charge density is negative and below which it is positive. The total negative charge in the "northern" piece must equal the total positive charge in the "southern" piece for the entire sphere to remain neutral. Now, as we bring the external point charge closer and closer to the surface of the sphere, consider what happens to the potential at the "south pole". There are three contributions to the potential - (1) a positive term from the external point charge, (2) a negative term from the negative "northern" piece and (3) a positive term from the positive "southern" piece. When the external charge is very, very close to the surface of the sphere ($d\approx R$, but still $d>R$) and we are bringing it still closer, the term (1) does not change much - the distance to the south pole remains approximately the same at $\approx 2R$. However, as the charge gets closer and closer, negative charge gets pulled more and more towards the "north pole" and the charge in the "northern" piece gets increasingly concentrated at the north pole. The "southern" piece is not as strongly affected, since all of the charges in it are further away from the external charge, but still, the positive charges are pushed more and more towards the "south pole". What happens to the relative sizes of term (2) and term (3)? Well, as we get closer, while the total charge in the "northern" piece may get more negative, and in the "southern" piece may get more positive, the total charges in both pieces remain equal to each other. But the negative charge of the "northern" piece is getting more and more concentrated towards the north pole, whereas the positive charge in the "southern" piece is getting more and more concentrated towards the south pole. 
Since we are interested in the potential at the south pole, we can see that the negative term (2) is becoming more negative at a slower rate than the rate at which the positive term (3) is becoming more positive. This is obvious since the charges accumulating around the "south pole" are closer to the "south pole" than the charges accumulating around the "north pole", and hence give greater and greater contributions. Thus, it is clear that, at least in the limit where we take the external charge very, very close to the sphere, since (1) remains fairly constant and (3) is increasing faster than (2) is decreasing, the potential at the south pole (and by extension, the potential on the whole sphere) must be increasing and thus cannot be generically zero.
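The image-charge constructions from the "mathy" section are also easy to check numerically. The following sketch is not part of the original answer; the values of $q$, $R$ and $d$ are arbitrary (subject to $d>R$), and units are chosen so that $k=1$. It samples the sphere's surface and confirms that the grounded construction gives zero potential while the neutral construction gives the constant $kq/d$:

```python
import numpy as np

# Numerical check of the image-charge constructions (a sketch, not part of
# the original answer). Units: Coulomb constant k = 1; q, R, d arbitrary.
q, R, d = 1.0, 1.0, 3.0
q_img = -q * R / d          # image charge for the grounded sphere
r_img = R**2 / d            # its distance from the centre

# Sample points on the sphere's surface (a 2D slice is enough by symmetry).
theta = np.linspace(0.0, np.pi, 181)
surface = np.stack([R * np.sin(theta), R * np.cos(theta)], axis=1)

def potential(points, charge, location):
    """Coulomb potential of a point charge, evaluated at each given point."""
    return charge / np.linalg.norm(points - np.asarray(location), axis=1)

# Grounded sphere: external charge q at distance d plus the image charge.
V_grounded = (potential(surface, q, [0.0, d])
              + potential(surface, q_img, [0.0, r_img]))
print(np.max(np.abs(V_grounded)))        # ~0 up to rounding: the sphere is at V = 0

# Neutral sphere: add +qR/d at the centre; the surface sits at V = kq/d.
V_neutral = V_grounded + potential(surface, -q_img, [0.0, 0.0])
print(np.allclose(V_neutral, q / d))     # True
```

Changing $R$ or $d$ leaves the neutral-sphere surface potential at $q/d$, matching the curious observation above that the result does not depend on the sphere's radius.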
{ "domain": "physics.stackexchange", "id": 22035, "tags": "electricity, charge, potential" }
P = NP: Doesn't a search generate more information than a check?
Question: I feel like I am understanding P ≠ NP fairly well, but there is one issue I feel like I am missing. It seems like a search for an answer generates information that a check does not. Is this a correct way of looking at it, and if so, how does that not already imply that P ≠ NP? Example for Clarification Let's take the wedding table example problem. We have a very large table for our wedding and a large number of guests. Each guest has a list of preferences about all the other guests. For any guest, there are some other guests they are willing to sit beside and others they will not. In this case, it is easy to check that a solution works, but very hard to find a solution. Here is where my understanding breaks down. It seems like, unless we can prove that the problem has only one outcome, a successful check of a solution provided to us merely tells us that the proffered outcome succeeds. By contrast, a search tells us that every outcome that was checked and rejected does not succeed. That is, the search gives us extra information. Another example I have seen is a search that finds the first number that violates the Goldbach Conjecture (provided one exists). A solution offered to us could be checked far quicker than said solution could be located via an algorithm, and yet the fact that this number violates the conjecture does not tell us it is the first such number that does so. The search, however, can determine if this is true. In terms of information theory, this also suggests to me that there might be an issue with Kolmogorov Complexity in some respects. If a shorter description of an object tells us everything we need to know to identify it, and yet the identity of that object remains indiscernible to us from many other candidates until we generate a boatload of new information using the search, it seems to me that the shorter description is not equivalent with the object (that it has less information). 
Rather, for them to be equivalent, we'd have to include the information generated by all the checks, which would then make the search algorithm longer than a code that simply types the number out. Or: if you can have a description of an object, but you can still be surprised by said object's identity, then the two (description and identity) are not equivalent. However, I fear I may be missing something essential here, hence the question. Answer: What you describe is one of the reasons some people tend to think that $\mathsf{P}$ indeed is different from $\mathsf{NP}$. However, you suppose here that checking all possibilities is the only way to solve a problem, which is not necessarily true. As an example, consider primality testing. One naive way to find out whether an integer $n$ is prime is to test all numbers less than or equal to $\sqrt{n}$ and see if one of them divides $n$. That results in a $\Theta(\sqrt{n})$ algorithm in the worst case (when $n$ is prime), which is exponential in the memory size of $n$. The AKS algorithm, however, can decide whether a number is prime without testing all divisors, in time $\mathcal{O}((\log n)^6)$. That was the proof that primality testing is in $\mathsf{P}$. I can give other examples: the example you gave looks like a matching problem. It is well-known that 3D matching is $\mathsf{NP}$-complete, yet 2D matching is solvable in polynomial time, without checking all possible matchings; 3SAT is $\mathsf{NP}$-complete, but 2SAT is in $\mathsf{P}$, and can be solved in polynomial time without checking all truth assignments. The important thing here is that, in the general case, we don't know whether checking all possibilities is the only way to deterministically solve a problem. That's why the $\mathsf{P}\neq \mathsf{NP}$ conjecture is so hard to crack.
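To make the "exponential in the memory size" point concrete, here is a sketch of the naive test (illustrative code, not from the original answer): it performs about $\sqrt{n}$ trial divisions, and since $n$ is written with only about $\log_2 n$ bits, $\sqrt{n} = 2^{(\log_2 n)/2}$ steps is exponential in the input length.

```python
import math

# Naive primality test: up to ~sqrt(n) trial divisions, which is
# exponential in the bit-length of n (the actual input size).
def is_prime_naive(n: int) -> bool:
    if n < 2:
        return False
    for divisor in range(2, math.isqrt(n) + 1):
        if n % divisor == 0:
            return False
    return True

n = 104729                            # the 10,000th prime
print(is_prime_naive(n))              # True
print(n.bit_length(), math.isqrt(n))  # 17-bit input, yet ~323 divisions tried
```

AKS (and the practical Miller-Rabin test) avoid this blow-up entirely, which is exactly the point: checking every candidate divisor turned out not to be the only way.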
{ "domain": "cs.stackexchange", "id": 20614, "tags": "p-vs-np" }
Navigation ideas
Question: Hello, I am new to navigation and currently I have a wheeled robot and I want to implement various navigation algorithms with it. For now though, I don't have an odometer for localization and I currently use only a lidar. I have managed to do some obstacle avoidance with it, but I am out of ideas for now. If you could help me with some navigation algorithms that don't require an odometer for localization, I would be more than happy to hear them. Thank you for your answers and for your time in advance, Chris Originally posted by patrchri on ROS Answers with karma: 354 on 2016-08-08 Post score: 0 Original comments Comment by ssj on 2017-08-14: hello i am new to ros and trying to use navigation stack for obstacle avoidance. how did you manage obstacle avoidance? Answer: Hello, I'm not sure if it is what you want, but you can use SLAM for localization. You can use hector_mapping (hector_slam); I know it is possible to use it without odometry. There is also another algorithm for SLAM: slam_gmapping (gmapping), but I don't know if it is possible to use it without odometry. You can find all the information about those 2 methods on the ROS wiki (by typing hector_slam and gmapping). I hope it will help you, lfr Originally posted by lfr with karma: 201 on 2016-08-09 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 25475, "tags": "ros, navigation, lidar, ros-kinetic" }
Publishing on /map topic only once and data output format
Question: Hi, I am trying to run nav2_map_server. I am seeing that map_server is publishing only once on topic /map, and this is published when I launch the node. I can provide the launch file if necessary. And the echo on the topic looks something like this:

header:
  stamp:
    sec: 1647904550
    nanosec: 512909420
  frame_id: map
info:
  map_load_time:
    sec: 1647904550
    nanosec: 512906384
  resolution: 0.019999999552965164
  width: 632
  height: 799
  origin:
    position:
      x: -10.0
      y: -20.24
      z: 0.0
    orientation:
      x: 0.0
      y: 0.0
      z: 0.0
      w: 1.0
data:
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0

When I was working with ROS 1 I used to get data in the form of an array/matrix, but this is kind of new to me. Is this the right format of the output, or is it something that I am missing? Thanks in advance for the help! Originally posted by Flash on ROS Answers with karma: 126 on 2022-03-21 Post score: 0 Original comments Comment by ljaniec on 2022-03-21: As I understand from documentation there: https://github.com/ros-planning/navigation2/tree/main/nav2_map_server and a description there: https://github.com/ros-planning/navigation2/tree/main/nav2_map_server, after map load request you have this map published on /map topic once. If you want to publish it more than 1 time, you have to request it again each time. I could be wrong, I am still learning Nav2... Comment by Flash on 2022-03-21: okay, so do you mean I have to make a service call every time I want the map data. And do you know if the output data is in the correct format? Comment by ljaniec on 2022-03-22: I copied the same link twice, second link should be https://navigation.ros.org/configuration/packages/configuring-map-server.html, I cannot edit it now, sorry. 
I will check the source code and the format of this message in a moment :) Comment by ljaniec on 2022-03-22: Based on this https://github.com/ros-planning/navigation2/blob/main/nav2_map_server/include/nav2_map_server/map_server.hpp it should be this format http://docs.ros.org/en/lunar/api/nav_msgs/html/msg/OccupancyGrid.html - it seems to match what you got there. You can check it easily with ros2 node info /your_node_name too :) I would call this service https://github.com/ros-planning/navigation2/blob/main/nav2_msgs/srv/LoadMap.srv to get the map. Comment by Flash on 2022-03-22: Thanks @ljaniec it helped :D. Data provided on the map topic is in a 1D array with row-major, so I have to convert from 1D to 2D using other methods. And to get the map data I have to run the map_server which only pub 1 time as you mentioned. For my other problem, I need to add map_server to my launch file so that my other nodes get the data as soon as map_server launches. There must be another way I can request without launching but will work with this method until I find another solution. Comment by ljaniec on 2022-03-25: I think it could be a new question overall, let's solve problems step by step, you can link it there too. I will paste these comments as an answer too, so you can upvote and mark question as solved. Comment by Flash on 2022-03-25: I think my problem was in understanding if map_server publishes only once- which is true and the other was data format which is in form of a 1D row-major array. We can mark it as solved. Thanks for help @ljaniec Answer: As I understand from documentation there: https://github.com/ros-planning/navigation2/tree/main/nav2_map_server and a description there: https://navigation.ros.org/configuration/packages/configuring-map-server.html after map load request you have this map published on /map topic once. If you want to publish it more than 1 time, you have to request it again each time. 
Based on this: https://github.com/ros-planning/navigation2/blob/main/nav2_map_server/include/nav2_map_server/map_server.hpp it should be this format: http://docs.ros.org/en/lunar/api/nav_msgs/html/msg/OccupancyGrid.html It seems to match what you got there. You can check it easily with ros2 node info /your_node_name too :) I would call this service: https://github.com/ros-planning/navigation2/blob/main/nav2_msgs/srv/LoadMap.srv (it could be done within the code too normally, not only with launchers) Copied from the discussion under question. Originally posted by ljaniec with karma: 3064 on 2022-03-25 This answer was ACCEPTED on the original site Post score: 1
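The row-major layout mentioned in the comments can be unpacked with a single reshape. This is an illustrative sketch with made-up cell values, not code from the original thread; the -1/0/100 values follow the usual OccupancyGrid convention (unknown/free/occupied):

```python
import numpy as np

# nav_msgs/OccupancyGrid stores cells as a flat, row-major list, so cell
# (row, col) lives at data[row * width + col].
width, height = 4, 3
data = [0, 0, 100, -1,
        0, 100, 100, -1,
        0, 0, 0, -1]

grid = np.array(data, dtype=np.int8).reshape(height, width)
print(grid[1, 2])        # 100, i.e. data[1 * width + 2]
print(grid.shape)        # (3, 4): height rows, width columns
```

In a subscriber callback the same two lines work on `msg.data` with `msg.info.height` and `msg.info.width`.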
{ "domain": "robotics.stackexchange", "id": 37519, "tags": "ros2" }
Serial port class wrapper for serial port object VB.NET
Question: I've been reviewing some code that I maintain recently, and I came across this wrapper object for the serial port class. I'm trying to understand the advantage of this seemingly redundant object: Public Class SerialPort_Class Private WithEvents Serial_Port As IO.Ports.SerialPort Private SerialPort_ReOpen_Flag As Boolean = True Private ReadOnly Common_Methods As New Common_Class Private _BaudRate As Integer = 115200 Private _DataBits As Integer = 8 Private _DiscardNull As Boolean = False Private _DtrEnable As Boolean = False Private _Encoding As Text.Encoding = Text.Encoding.ASCII Private _Handshake As IO.Ports.Handshake = IO.Ports.Handshake.None Private _NewLine As String = vbLf 'Default Private _Parity As IO.Ports.Parity = IO.Ports.Parity.None Private _ParityReplace As Byte = Byte.MinValue 'If the value is set to the null character, parity replacement is disabled Private _PortName As String = "None" 'Default of Nothing, Prevent Occupying Com Port Until Set Private _ReadBufferSize As Integer = 4096 Private _ReadTimeout As Integer = 250 Private _ReceivedBytesThreshold As Integer = 1 Private _RtsEnable As Boolean = False Private _StopBits As IO.Ports.StopBits = IO.Ports.StopBits.One Private _WriteBufferSize As Integer = 2048 Private _WriteTimeout As Integer = 250 Private Shared ReadOnly Search_Com_Lock_Object As New Object Public Property BaudRate As Integer Get Return _BaudRate End Get Set(value As Integer) If value.Equals(_BaudRate) Then Return _BaudRate = value SerialPort_ReOpen_Flag = True End Set End Property Public Property DataBits As Integer Get Return _DataBits End Get Set(value As Integer) If value.Equals(_DataBits) Then Return _DataBits = value SerialPort_ReOpen_Flag = True End Set End Property Public Property DiscardNull As Boolean Get Return _DiscardNull End Get Set(value As Boolean) If value.Equals(_DiscardNull) Then Return _DiscardNull = value SerialPort_ReOpen_Flag = True End Set End Property Public Property DtrEnable As Boolean Get Return 
_DtrEnable End Get Set(value As Boolean) If value.Equals(_DtrEnable) Then Return _DtrEnable = value SerialPort_ReOpen_Flag = True End Set End Property Public Property Encoding As Text.Encoding Get Return _Encoding End Get Set(value As Text.Encoding) If value.Equals(_Encoding) Then Return _Encoding = value SerialPort_ReOpen_Flag = True End Set End Property Public Property Handshake As IO.Ports.Handshake Get Return _Handshake End Get Set(value As IO.Ports.Handshake) If value.Equals(_Handshake) Then Return _Handshake = value SerialPort_ReOpen_Flag = True End Set End Property Public Property NewLine As String Get Return _NewLine End Get Set(value As String) If value.Equals(_NewLine) Then Return _NewLine = value SerialPort_ReOpen_Flag = True End Set End Property Public Property Parity As IO.Ports.Parity Get Return _Parity End Get Set(value As IO.Ports.Parity) If value.Equals(_Parity) Then Return _Parity = value SerialPort_ReOpen_Flag = True End Set End Property Public Property ParityReplace As Byte Get Return _ParityReplace End Get Set(value As Byte) If value.Equals(_ParityReplace) Then Return _ParityReplace = value SerialPort_ReOpen_Flag = True End Set End Property Public Property PortName As String Get Return _PortName End Get Set(value As String) If value.Equals(_PortName) Then Return _PortName = value SerialPort_ReOpen_Flag = True End Set End Property Public Property ReadBufferSize As Integer Get Return _ReadBufferSize End Get Set(value As Integer) If value.Equals(_ReadBufferSize) Then Return _ReadBufferSize = value SerialPort_ReOpen_Flag = True End Set End Property Public Property ReadTimeout As Integer Get Return _ReadTimeout End Get Set(value As Integer) If value.Equals(_ReadTimeout) Then Return _ReadTimeout = value SerialPort_ReOpen_Flag = True End Set End Property Public Property ReceivedBytesThreshold As Integer Get Return _ReceivedBytesThreshold End Get Set(value As Integer) If value.Equals(_ReceivedBytesThreshold) Then Return _ReceivedBytesThreshold = value 
SerialPort_ReOpen_Flag = True End Set End Property Public Property RtsEnable As Boolean Get Return _RtsEnable End Get Set(value As Boolean) If value.Equals(_RtsEnable) Then Return _RtsEnable = value SerialPort_ReOpen_Flag = True End Set End Property Public Property StopBits As IO.Ports.StopBits Get Return _StopBits End Get Set(value As IO.Ports.StopBits) If value.Equals(_StopBits) Then Return _StopBits = value SerialPort_ReOpen_Flag = True End Set End Property Public Property WriteBufferSize As Integer Get Return _WriteBufferSize End Get Set(value As Integer) If value.Equals(_WriteBufferSize) Then Return _WriteBufferSize = value SerialPort_ReOpen_Flag = True End Set End Property Public Property WriteTimeout As Integer Get Return _WriteTimeout End Get Set(value As Integer) If value.Equals(_WriteTimeout) Then Return _WriteTimeout = value SerialPort_ReOpen_Flag = True End Set End Property Answer: Is there more code in this class? it looks like other objects are instantiated, but not used: Private ReadOnly Common_Methods As New Common_Class and Private Shared ReadOnly Search_Com_Lock_Object As New Object To answer your question, this code can be simplified. Is the truth bit that has been added to each statement for determining if the port is open? SerialPort_ReOpen_Flag = True if so it would be better to use the built-in SerialPort.IsOpen Property to check when the port is open. After removing that flag you can remove the getter and setter methods and just use the Auto-implemented syntax for properties like _BaudRate. You can review this type of setup in the Microsoft Docs
{ "domain": "codereview.stackexchange", "id": 43483, "tags": ".net, vb.net, serial-port" }
Advice on what Machine Learning Algorithms to study for a Job to candidate matching algorithm
Question: I have asked in a few places and this seems to get downvoted for some reason. If this is not the place to ask this, then some advice on how and where to ask it would be appreciated. I'm creating a web platform, a 2-sided market that matches available consultants to short-term contracts. We currently have a rules-based algorithm that matches based on a small number of features like industry, job title, rate, and availability. For that list of consultants, it then calculates a percentage match based on other features like skill set, qualifications, certifications, years of service, etc. We have data on candidates that were accepted and candidates that were rejected. I've spent a year revising maths, statistics, and studying basic machine learning with Python. But I am no closer to understanding how machine learning could be used to match consultants to assignments. I assume what I need to look into is feature engineering and some sort of classification algorithm. It would seem that a contract assignment would have an ideal candidate, and so we are matching all other candidates to the features of the ideal. Is it some sort of unsupervised learning to classify the candidates into groups and put forward those candidates that fall in the same group? Does anyone have an idea on how best this would be done? I am looking at what to study to get closer to a solution, so any advice so that I can get to a point of understanding how this could best be done would be appreciated. Answer: One way to frame the problem is as approximate nearest neighbors retrieval. Given a set of features for a posting, what are the nearby candidates? Or, given a set of features for a candidate, what are the nearby postings? In order to find the nearest neighbors, all entities need to be in the same feature space. That feature space can be a learned embedding. One implementation of learned embeddings is StarSpace.
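As a concrete toy illustration of the retrieval framing: assuming postings and candidates have already been embedded into a shared numeric feature space, the matching step is just a nearest-neighbour query. Every feature value below is made up:

```python
import numpy as np

# Toy sketch of the retrieval step; all feature values are invented.
# Columns: [hourly rate, years of experience, skill-overlap score].
candidates = np.array([
    [50.0, 2.0, 0.9],
    [80.0, 10.0, 0.4],
    [55.0, 3.0, 0.8],
    [120.0, 15.0, 0.2],
])
posting = np.array([52.0, 2.5, 0.85])   # the "ideal candidate" for one contract

# Brute-force nearest neighbours; at scale an approximate-NN library
# would replace this distance scan.
dists = np.linalg.norm(candidates - posting, axis=1)
top2 = np.argsort(dists)[:2]
print(top2)      # indices of the two closest candidates
```

Note that without feature scaling the rate column dominates the distance, so normalising each column (or learning the embedding, as the answer suggests) would usually be the next step.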
{ "domain": "datascience.stackexchange", "id": 7665, "tags": "machine-learning, algorithms, multilabel-classification" }
Tic Tac Toe Game Python
Question: I've just started a self-learning "course" on Python practical programming for beginners. I found this book online, and decided to go through the chapters. The book had a half-complete implementation of a Tic-Tac-Toe game, and I was supposed to finish it as an exercise. This is my code so far:

theBoard = {'top-L': ' ', 'top-M': ' ', 'top-R': ' ',
            'mid-L': ' ', 'mid-M': ' ', 'mid-R': ' ',
            'low-L': ' ', 'low-M': ' ', 'low-R': ' '}

def printBoard(board):
    """ This function prints the board after every move """
    print(board['top-L'] + '|' + board['top-M'] + '|' + board['top-R'])
    print('-+-+-')
    print(board['mid-L'] + '|' + board['mid-M'] + '|' + board['mid-R'])
    print('-+-+-')
    print(board['low-L'] + '|' + board['low-M'] + '|' + board['low-R'])

def checkWin(board):
    """ This function checks if the win condition has been reached by a player """
    flag = False
    possibleWins = [['top-L', 'top-M', 'top-R'],
                    ['mid-L', 'mid-M', 'mid-R'],
                    ['low-L', 'low-M', 'low-R'],
                    ['top-L', 'mid-L', 'low-L'],
                    ['top-M', 'mid-M', 'low-M'],
                    ['top-R', 'mid-R', 'low-R'],
                    ['top-L', 'mid-M', 'low-R'],
                    ['top-R', 'mid-M', 'low-L']]
    for row in range(len(possibleWins)):
        temp = board[possibleWins[row][0]]
        if temp != ' ':
            for position in possibleWins[row]:
                if board[position] != temp:
                    flag = False
                    break
                else:
                    flag = True
            if flag:
                return True
    return False

turn = 'X'
for i in range(9):
    printBoard(theBoard)
    print('Turn for ' + turn + '. Move on which space?')
    while True:
        move = input()
        if move in theBoard:
            if theBoard[move] != ' ':
                print('Invalid move. Try again.')
            else:
                break
        else:
            print('Invalid move. Try again.')
    theBoard[move] = turn
    if checkWin(theBoard):
        printBoard(theBoard)
        print('Player ' + turn + ' wins!')
        break
    if turn == 'X':
        turn = 'O'
    else:
        turn = 'X'

My checkWin function is very "stupid", it detects a win based on predetermined scenarios, and may not be very efficient in that regard as well. What if the board was of an arbitrary size nxn? 
Is there an algorithm to determine the victory condition without having to rewrite the entire game? Answer: For a 3x3 tic-tac-toe board it hardly provides any real advantage, but if you were to play on an nxn board, with n consecutive same-player marks determining a winner, you could do the following...

class TicTacToe(object):
    def __init__(self, n=3):
        self.n = n
        self.board = [[0 for j in range(n)] for k in range(n)]
        self.row_sum = [0 for j in range(n)]
        self.col_sum = [0 for j in range(n)]
        self.diag_sum = 0
        self.diag2_sum = 0

    def add_move(self, player, row, col):
        assert player in (0, 1)
        assert 0 <= row < self.n and 0 <= col < self.n
        delta = [-1, 1][player]
        winner = None
        self.board[row][col] = delta
        self.row_sum[row] += delta
        if self.row_sum[row] == delta * self.n:
            winner = player
        self.col_sum[col] += delta
        if self.col_sum[col] == delta * self.n:
            winner = player
        if col == row:
            self.diag_sum += delta
            if self.diag_sum == delta * self.n:
                winner = player
        if col == self.n - row - 1:
            self.diag2_sum += delta
            if self.diag2_sum == delta * self.n:
                winner = player
        return winner
{ "domain": "codereview.stackexchange", "id": 14691, "tags": "python, beginner, tic-tac-toe" }
What affects the surface characteristics of cumulus clouds?
Question: I love looking at clouds. I love trying to describe them and compare what I see day to day. I have dozens of questions about the things I’ve observed and I would really like to understand the physics of the world around me. Today I would like to focus on the surface dynamics of cumulus clouds. Why do some cumulus clouds have distinct textured surface while others are softer and wispy at the edge? I imagine it probably comes down to temperature and pressure. It may also be an effect of direct sunlight, I’m not sure. I notice the soggy cotton clouds are more common in the evening and the billowy ice cream clouds are more common mid-day. Do clouds have surface tension like liquid water? Does the flow of water either into or out of a cloud affect its surface characteristics? Answer: No, clouds don't really have a 'surface' that could have tension like a body of water. The different looks in these two examples (left Cumulonimbus Calvus and right Cumulus Humilis) are greatly dependent on how they have formed and how are they evolving now. The large Cumulonimbus is still growing in a relatively rapid speed. The cloud is reaching higher and higher upwards carrying moist air. The moist air due to turbulent flows and expanding of the rising air gets mixed with the cold air and instantly forms/extends cloud as it reaches saturation (saturation is reached with less water in colder air and air is colder in higher altitudes). If there wasn't the expansion then flows of dry air to the cloud would desaturate the cloud and more small scale variation could be seen as in the Cumulus Humilis. Also the Cumulonimbus being much larger and further away looks different just due to distance. The steady state like situation of the Cumulus Humilis where it isn't growing (perhaps a little on the top) and the fact that the lower atmosphere has stronger turbulent motions equals to the appearance where more cotton candy pieces 'drift' from the cloud. 
Neither of these is an ice cloud (ice clouds look different precisely because they consist of ice rather than water droplets), but all clouds that are at freezing temperatures have some very small percentage of ice in them too. The Cumulus Humilis might be below 0 Celsius and the Cumulonimbus definitely is at some height. A crude estimate is that temperature drops 6 Celsius every kilometer. The base of both clouds is probably somewhere between 1 and 2.5 kilometers and the Cumulonimbus might reach over 10 kilometers.
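The crude lapse-rate estimate in the answer can be sketched in a couple of lines (the 6 °C/km figure comes from the answer; the 15 °C surface temperature is an assumed value):

```python
# Estimate air temperature at cloud altitudes with a crude constant lapse
# rate. Assumptions: 6 C of cooling per km, 15 C at the surface.
LAPSE_RATE = 6.0   # C per km, the crude average quoted in the answer
T_SURFACE = 15.0   # C, assumed surface temperature

def temp_at_altitude(km):
    """Air temperature in C at the given altitude in km."""
    return T_SURFACE - LAPSE_RATE * km

# Cloud base between 1 and 2.5 km; a Cumulonimbus top near 10 km.
for h in (1.0, 2.5, 10.0):
    print(f"{h:4.1f} km: {temp_at_altitude(h):6.1f} C")
```

With these assumptions the cloud base sits around 0 to 9 °C, while a 10 km Cumulonimbus top is near -45 °C, consistent with the answer's remark that the tall cloud certainly contains ice at some height.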
{ "domain": "earthscience.stackexchange", "id": 875, "tags": "atmosphere, clouds, fluid-dynamics" }
Why does bad smell follow people (assuming they are not the source)?
Question: When you are sitting in a room where there is a source of bad smell, such as somebody smoking or some other source of bad smell, it is often a solution to simply move to another spot where bad smell is not present. Assuming you are not actually the source of the smell, this will work for a while until you notice the smell has somehow migrated to exactly the spot where you are now sitting. Frustrating. This got me thinking about the fluid mechanics of this problem. Treat bad smell as a gas that is (perhaps continuously) emitted at a certain fixed source. One explanation could be that a human breathes and perhaps creates a pressure differential that causes the smell to move around. Is there any truth to this? Please provide a reasoned argument with reference to the relevant thermodynamic and/or fluid quantities in answering the question. Theoretical explanation is desired, but extra kudos if you know of an experiment. Answer: From a fluid dynamics standpoint, as a body moves through a fluid, a small region of fluid is dragged along with it. This is what forms the boundary layer. In the near-body region, odor will be dragged along with the body. Likewise, behind a moving person is a turbulent wake and a low pressure region. The low pressure region will "suck" the odor along with the body, and the turbulence will mix the odor into the air which will also help distribute it. Turns out there is an experiment, in this paper, that looks at the effect of a stationary body and a moving body (as in human body) in a room with stratified contaminants. The principles discussed therein are along the lines of your question.
{ "domain": "physics.stackexchange", "id": 5150, "tags": "fluid-dynamics, statistical-mechanics" }
How to write a reward function that optimizes for profit and revenue?
Question: So I want to write a reward function for a reinforcement learning model which picks products to display to a customer. Each product has a profit margin %. Higher price products will have a higher profit margin but lower probability of being purchased. Lower price products have a lower profit margin, but higher probability of being purchased. The goal is to maintain an AVERAGE margin of 5% for ALL products sold, while maximizing the total revenue. What's the best way to write this reward function? Answer: Your goals include two criteria that interact and may conflict. It is not possible to write a single reward function to solve this perfectly. You have to decide first on relative importance of the two goals. As one is effectively a constraint, you need to decide on how hard you want to apply this constraint. As the revenue is easy to measure, and already natural expression of what that part of the optimisation is supposed to achieve, you can start by using an arbitrary scaling for revenue that makes the numbers simple for your approximator - e.g. a neural network. Having numbers in the thousands or millions is not great because the error values could be really large during training, so I would try to scale this part of the reward by some order of magnitude depending on values you are expecting. Following that, you then have to decide how to add in some reward factor for the gross profit margin. There are lots of ways to do this, because the constraint you have been given is not "natural", it is something that a business owner or analyst has determined will result in overall acceptable net profit margin, which is related to but not the same as the gross profit margin goals you have been given (this is not unexpected, net profit margin is the real goal of the company, but much more complicated to figure out than gross profit margin per sale). 
I can think of two additional rewards that you could add in order to represent the goal of meeting the gross profit margin target: As it has been phrased as a constraint, you will want negative rewards for sales that result in gross profit margin below 5% and positive rewards for sales that result in gross profit margin above 5%. You may be able to simplify that down to +1 or -1 per sale depending on what side of the line your margin currently is. As an individual sale may not move this average by much, you may want add a third reward centred on the 5% mark that simply is the amount above or below the 5% mark for an individual sale. So e.g. an object sold at £104 with a cost of £100 would score -1 reward. This option is a form of "reward shaping". There is a chance it could be counter-productive, but bear it in mind in case short term learning does not steer sales in the right direction. There are several other ways that you could construct a reward system. The key thing to bear in mind is that all rewards that you are adding from different sources need to be scaled to work together and express the goal of your agent. This is something you will need to establish through trial and error. You may be able to get a feel for the behaviour your weightings are encouraging by working through some examples from your data. High weights on meeting the 5% constraint may reduce revenue through lack of sales (because all offered items may be more expensive), low weights on the constraint may have the business operating at a loss overall (as it makes sales that cost the company more in overheads than the smaller profit margins can make up for). However, there is no mathematically correct answer to that unless you can somehow model the relationship to net profit margin well enough to use that as the goal instead.
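One way to put the pieces above together is sketched below. Everything here is an assumption to be tuned: the revenue scale, the weight on the margin term, and the margin definition (profit divided by cost); it is a starting point, not a definitive reward design.

```python
# Combined reward: scaled revenue plus a shaping term for the 5% margin
# target. All constants are assumptions to be tuned against real data.
TARGET_MARGIN = 0.05    # the 5% constraint from the question
REVENUE_SCALE = 1e-3    # keep revenue rewards numerically small
MARGIN_WEIGHT = 1.0     # relative importance of the margin constraint

def sale_reward(price, cost):
    revenue_term = REVENUE_SCALE * price
    margin = (price - cost) / cost        # assumed margin definition
    # Percentage points above/below the 5% target (reward shaping).
    margin_term = 100.0 * (margin - TARGET_MARGIN)
    return revenue_term + MARGIN_WEIGHT * margin_term

# The answer's example: a 104 sale at 100 cost is 1 point below target.
print(round(sale_reward(104.0, 100.0), 3))   # prints -0.896
```

Raising MARGIN_WEIGHT pushes the agent toward high-margin items at the cost of sales volume; lowering it favors revenue — exactly the trade-off the answer describes.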
{ "domain": "datascience.stackexchange", "id": 10157, "tags": "machine-learning, reinforcement-learning, reward" }
How do I implement cross-correlation to prove two audio files are similar?
Question: I have to do cross correlation of two audio files to prove they are similar. I have taken the FFT of the two audio files and have their power spectrum values in separate arrays. How should I proceed further to cross-correlate them and prove that they're similar? Is there a better way to do it? Any basic ideas will be helpful for me to learn and apply it. Answer: Cross-correlation and convolution are closely related. In short, to do convolution with FFTs, you zero-pad the input signals a and b (add zeros to the end of each. The zero padding should fill the vectors until they reach a size of at least N = size(a)+size(b)-1) take the FFT of both signals multiply the results together (element-wise multiplication) do the inverse FFT conv(a, b) = ifft(fft(a_and_zeros) * fft(b_and_zeros)) You need to do the zero-padding because the FFT method actually computes circular convolution/correlation, meaning the signal wraps around at the ends. So you add enough zeros to get rid of the overlap, to simulate a signal that is zero out to infinity. To get cross-correlation instead of convolution, you either need to time-reverse one of the signals before doing the FFT, or take the complex conjugate of one of the signals after the FFT: corr(a, b) = ifft(fft(a_and_zeros) * fft(b_and_zeros[reversed])) corr(a, b) = ifft(fft(a_and_zeros) * conj(fft(b_and_zeros))) whichever is easier with your hardware/software. For autocorrelation (cross-correlation of a signal with itself), it's better to do the complex conjugate, because then you only need to calculate the FFT once. If the signals are real, you can use real FFTs (RFFT/IRFFT) and save half your computation time by only calculating half of the spectrum. Also you can save computation time by padding to a larger size that the FFT is optimized for (such as a 5-smooth number for FFTPACK, a ~13-smooth number for FFTW, or a power of 2 for a simple hardware implementation). 
Here's an example in Python of FFT correlation compared with brute-force correlation: https://stackoverflow.com/a/1768140/125507 This will give you the cross-correlation function, which is a measure of similarity vs offset. To get the offset at which the waves are "lined up" with each other, there will be a peak in the correlation function: The x value of the peak is the offset, which could be negative or positive. I've only seen this used to find the offset between two waves. You can get a more precise estimate of the offset (better than the resolution of your samples) by using parabolic/quadratic interpolation on the peak. To get a similarity value between -1 and 1 (a negative value indicating one of the signals decreases as the other increases) you'd need to scale the amplitude according to the length of the inputs, length of the FFT, your particular FFT implementation's scaling, etc. The autocorrelation of a wave with itself will give you the value of the maximum possible match. Note that this will only work on waves that have the same shape. If they've been sampled on different hardware or have some noise added, but otherwise still have the same shape, this comparison will work, but if the wave shape has been changed by filtering or phase shifts, they may sound the same, but won't correlate as well.
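A small numpy sketch of the recipe (the pulse shapes and the 10-sample shift are made up for illustration; this is not the code from the linked answer):

```python
import numpy as np

# FFT cross-correlation as described above: zero-pad to at least
# len(a) + len(b) - 1, multiply one spectrum by the conjugate of the
# other, and inverse-transform. Real FFTs (rfft/irfft) are used since
# the signals are real.
def fft_xcorr(a, b):
    n = len(a) + len(b) - 1          # enough padding to avoid wrap-around
    A = np.fft.rfft(a, n)
    B = np.fft.rfft(b, n)
    return np.fft.irfft(A * np.conj(B), n)

# Two copies of the same pulse, one shifted by 10 samples.
t = np.arange(64)
sig = np.exp(-0.5 * ((t - 20) / 3.0) ** 2)
delayed = np.exp(-0.5 * ((t - 30) / 3.0) ** 2)

c = fft_xcorr(delayed, sig)
print(np.argmax(c))                  # peak index = offset of 10 samples
```

The peak of the correlation function lands at the lag where the waves line up, as described above; here that is 10 samples.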
{ "domain": "dsp.stackexchange", "id": 10647, "tags": "audio, fft, waveform-similarity, cross-correlation" }
To Do List Project - app lets you make projects and inside of this projects you can save to-dos
Question: I have built a To Do App in Javascript, using Webpack and some modules. The app lets you store projects (School, Sport, Grocery, etc.)... And inside these projects, you can store todo items... You can also edit the todo item, and click it do delete/finish it. The app should be built with OOP principles in mind, and that's the main concern with the code, and the reason that I want it to be reviewed. This is the weakness of my code. And I would want you to give me tips on how to improve this... It is my first post here... I hope I followed all the rules. And that is why I posted only the following code here, and shared my github repo if there is a person generous to look into it. Github repo! (Posted it because of webpack...) Index.js: if (myProjects.length == 0) { defaultProject(); } //defaultProject(); // Add new project! newProjectListener.addEventListener("submit", (event) => { event.preventDefault(); const newProjectTitle = document.getElementById("newProjectName").value; if (newProjectTitle == "") { } else { newProjectEvent(event); addProjectUI(newProject); // saveToLocalStorage(myProjects); emptyForm(); } }); // Delete project, adding event listeners to all future trash buttons for projects... window.addEventListener("click", (event) => { let element = event.target.classList.contains("bi-trash") ? event.target.parentElement : event.target.classList.contains("trash-project") ? event.target : false; if (element) { let itemToRemove = element.parentElement.parentElement; deleteProject(itemToRemove); deleteItemUI(itemToRemove); cleanToDoView(); // localStorage.clear(); // saveToLocalStorage(myProjects); } }); // clicked project projectListDiv.addEventListener("click", (event) => { if (event.target.tagName == "P") { resetClickedProject(); clickedProject(event); clickedProjectIndex = idClickedProject(event); cleanToDoView(); render(); } }); // new to do... 
newToDoListener.addEventListener("submit", (event) => { event.preventDefault(); if ( toDoTitle.value == "" || description.value == "" || dueDate.value == "" || priority.value == "" || note.value == "" ) { } else { newToDoEvent(event, clickedProjectIndex); let toDo = newToDo; appendToDo(toDo); // localStorage.clear(); // saveToLocalStorage(myProjects); emptyToDoForm(); } }); tableListener.addEventListener("click", (event) => { let element = event.target.classList.contains("delete") ? event.target.parentElement.parentElement : event.target.classList.contains("fa-check") ? event.target.parentElement.parentElement.parentElement : false; if (element) { let deleteItem = element; deleteToDoFromObject(deleteItem, clickedProjectIndex); deletToDoUI(deleteItem); // localStorage.clear(); // saveToLocalStorage(myProjects); } }); window.addEventListener("click", (event) => { let element = event.target.classList.contains("edit-button") ? event.target.parentElement.parentElement : event.target.classList.contains("fa-pencil-square-o") ? event.target.parentElement.parentElement.parentElement : false; if (element) { toDoIndex = clickedToDoIndex(event, element); editTodo(clickedProjectIndex, toDoIndex); } }); editToDo.addEventListener("submit", (event) => { event.preventDefault(); editFinish(clickedProjectIndex, toDoIndex); cleanToDoView(); render(); // localStorage.clear(); // saveToLocalStorage(myProjects); }); const render = () => { myProjects[clickedProjectIndex].toDos.forEach((todo) => { appendToDo(todo); }); }; const initialLoad = () => { myProjects.forEach((project) => { addProjectUI(project); console.log(project) const projectHeader = document.querySelector(".projectName"); projectHeader.textContent = project.title }); }; initialLoad(); const initalTodoLoad = () => { myProjects[0].toDos.forEach((todo) => { appendToDo(todo); }); }; initalTodoLoad(); projectFactory.js: let myProjects = []; let newProject; // let myProjects = localStorage.getItem("projects") // ? 
JSON.parse(localStorage.getItem("projects")) // : [ // ]; const saveToLocalStorage = () => { localStorage.setItem("projects", JSON.stringify(myProjects)); }; // Project factory, which takes in title and makes toDo array, to which the toDos will be added... const newProjectFactory = (id, title) => { const toDos = []; const add_toDo = (toDo) => { toDos.push(toDo); }; return { id, title, toDos, add_toDo }; }; const newProjectEvent = (event) => { // DOM elements of form ... event.preventDefault(); const newProjectTitle = document.getElementById("newProjectName").value; let ID; if (myProjects.length > 0) { ID = myProjects[myProjects.length - 1].id + 1; } else { ID = 0; } newProject = newProjectFactory(ID, newProjectTitle); myProjects.push(newProject); }; Answer: This review will focus on what object oriented programming [oop] is and how we can use two concepts of oop (encapsulation and abstraction) in your code to make it more object oriented. The self-drawn pictures of this review come from my Github-Repository for a presentation that is supposed to show the basics of oop. How to write Object-Oriented Code Object-Oriented Programming is based on the following 4 concepts: abstraction encapsulation inheritance polymorphism Not all of these concepts have to be implemented per file, class, object, ... in order to be object-oriented. But following the concepts of abstraction and encapsulation especially at the beginning will make the code much more object-oriented. To make these concepts easier to remember, there is the acronym called "a pie". Encapsulation What it is Encapsulation is used to hide the values or state of a structured data object inside a class, preventing unauthorized parties' direct access to them. — wikipedia What is against the concept of encapsulation? The variable myProjects has been defined globally and can be read and modified by anyone and everyone. // in projectFactory.js let myProjects = []; // in index.js if (myProjects.length == 0) { /* ... 
*/ } //... const newProjectEvent = (event) => { //... if (myProjects.length > 0) { /* ... */ } // ... myProjects.push(newProject); }; Breaking the code is easy. All that has to be done is to assign a different value to myProjects: myProjects = ""; Now you will think that nobody would do that, but when you work in a team it goes faster than expected. Even if it is accidental. The same for toDos of your project object: myProjects[clickedProjectIndex].toDos.forEach((todo) => { appendToDo(todo); }); The variable toDos can again be accessed without restriction, and we could modify it: myProjects[clickedProjectIndex].toDos = /* something wrong */ How can we fix the violations? We can hide the variables inside objects with the #-symbol. class ProjectCollection { #projects = []; } const projects = new ProjectCollection(); // not possible anymore projects.projects = /* something wrong */ Abstraction What it is The term encapsulation refers to the hiding of state details, but extending the [...] associate behavior most strongly with the data, and standardizing the way that different data types interact, is the beginning of abstraction. — wikipedia What is against the concept of abstraction? Since the code has no encapsulation, there is no abstraction. For example, we ask whether the length of the array myProjects is 0: if (myProjects.length == 0) { // ... } However, we can abstract this by calling a method: if (myProjects.containsNon()) { // ... } const myProjects = new ProjectCollection(); class ProjectCollection { #projects = []; containsNon() { return this.#projects.length === 0; } }
{ "domain": "codereview.stackexchange", "id": 39540, "tags": "javascript, object-oriented, modules, to-do-list" }
ROS and Gazebo: Documentation Soon?
Question: Cross-post from here Now that ROS and Gazebo are both under the umbrella of OSRF, when will we be getting updates to the ROS.org documentation of gazebo? The Gazebo website says "Only install Gazebo from here, and only follow tutorials from this website. Documentation on ros.org for Gazebo is old and not actively maintained." but doesn't provide any basic tutorials like how to subscribe to a message Gazebo generates in ROS. (There are Tutorials for making plugins, but not for basic usage) It doesn't seem like it helps the community at all to have such a fractured approach to documentation. Originally posted by David Lu on Gazebo Answers with karma: 111 on 2013-05-20 Post score: 2 Answer: The latest documentation for ROS integration is here: http://gazebosim.org/wiki/Tutorials#ROS_Integration (thanks to Dave Coleman for writing it!). Originally posted by scpeters with karma: 2861 on 2013-07-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by davetcoleman on 2013-07-24: Boom! Documentated.
{ "domain": "robotics.stackexchange", "id": 3300, "tags": "gazebo" }
Why is the manufacture of sulfuric acid known as the contact process?
Question: I was after some detail as to why the contact process was named as such. Answer: According to this site: The hot gases ($\ce{SO2}$) evolved from burning the sulfur ore come into contact with the catalyst bed, so the process is called the contact process.
{ "domain": "chemistry.stackexchange", "id": 16073, "tags": "acid-base" }
I am compiling the gazebo_naoqi_control for NAO on Gazebo but getting compilation Error
Question: I am compiling the nao_gazebo (https://github.com/costashatz/nao_gazebo) package for NAO simulation on Gazebo but I am getting a compilation error on catkin_make. Error is : In file included from /home/saleem/catkin_ws/src/nao_gazebo/gazebo_naoqi_control/src/gazebo_naoqi_control_plugin.cpp:24:0: /home/saleem/catkin_ws/src/nao_gazebo/gazebo_naoqi_control/include/gazebo_naoqi_control/gazebo_naoqi_control_plugin.h:45:31: fatal error: alnaosim/alnaosim.h: No such file or directory #include <alnaosim/alnaosim.h> ^ compilation terminated. make[2]: *** [nao_gazebo/gazebo_naoqi_control/CMakeFiles/gazebo_naoqi_control.dir/src/gazebo_naoqi_control_plugin.cpp.o] Error 1 make[1]: *** [nao_gazebo/gazebo_naoqi_control/CMakeFiles/gazebo_naoqi_control.dir/all] Error 2 make: *** [all] Error 2 Invoking "make -j4 -l4" failed Originally posted by saleem on ROS Answers with karma: 1 on 2016-03-13 Post score: 0 Answer: You need to have the Aldebaran NAO Simulator SDK installed and properly set up. There you'll find the missing header file. Also, check out the Readme of the package you try to build, this actually tells you in the Building/Compiling section that you have to configure some environment variables... Originally posted by mgruhler with karma: 12390 on 2016-03-14 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by saleem on 2016-03-14: I followed the steps for Building/Compiling. I am still getting the error. My SDK path is -- ~/naoqi/simulator-sdk-2.1.2.17-linux64 -- so I have added these environment variables AL_SIM_DIR = ~/naoqi/simulator-sdk-2.1.2.17-linux64 AL_DIR = ~/naoqi/simulator-sdk-2.1.2.17-linux64 Any Help? Comment by mgruhler on 2016-03-14: where did you add/export them? added to .bashrc? only in one terminal? Did you compile in the terminal where you set this? Also, does the path contain the include folder directly? Check out [this file] for where the paths should "end". Also, try replacing ~ with /home/saleem. 
Comment by saleem on 2016-03-14: I have added them in .bashrc file. No, the path doesn't contain the include folder directly. My AL_DIR= ~/naoqi/simulator-sdk-2.1.2.17-linux64 -- no include in the environment variables. Do I need to add the include folder in AL_DIR? Comment by mgruhler on 2016-03-14: if you check out this file (now with the link hopefully working), you see it expects to have the include folder directly after your AL_SIM_DIR, that is, ${AL_SIM_DIR}/include should exist. Comment by mgruhler on 2016-03-14: Also (long shot, but just to be safe ;-) ). Did you source the .bashrc after adding the environment variables there? Comment by saleem on 2016-03-14: I have hardcoded the include files in the headers. Now it has been compiled and motion and stiffness commands are working. Hoping that it will not crash for any other commands. Thanks a lot mig for your help. Will get back to you for any other help. :)
{ "domain": "robotics.stackexchange", "id": 24095, "tags": "ros, gazebo, naoqi, nao, ros-indigo" }
If I dropped a golf ball straight down from about 5 meters, would air resistance cause a noticable change in the acceleration of the ball?
Question: I am working on a Grade 11 physics lab report. I dropped a golf ball from a balcony 5m and 13cm off the ground. When I did all the calculations, the acceleration was 7,72 m/s² (calculations below). I am wondering if this discrepancy (7,72 m/s² < 9,8 m/s²) was caused by the human reaction time being slow, by air resistance or by human error (a miscalculation somewhere). Calculations (v = velocity): average ∆t = 1,15s, ∆d = (-)5,13m, v = ?, vi = 0m/s, a = ? average v = ∆d/∆t = 5,13m/1,15s = 4,46m/s final v = 2(average v) - initial v = 2(4,46m/s) - 0m/s = 8,92m/s a = (final v - initial v)/∆t = (8,92m/s - 0m/s)/1,15s = 7,76m/s²
I did some quick calculations with a first-order approximation and you can estimate the measured acceleration $g'$ as a function of earth's $g$ and the distance traveled as $$g' = g \left( \tfrac{5}{3} - \tfrac{2}{3} \exp(\beta \Delta d/m) \right)$$ where $\beta = \tfrac{1}{2}\rho C_D A = 0.000558\;\text{kg/m}$, $\Delta d = 5.13\;\text{m}$ is the distance traveled and $g=9.81\;\text{m/s}^2$ With the above values, if you had a perfect timing system, you would have calculated $g' = (95.8\%) g = 9.389 \;\text{m/s}^2$ instead of 7.72 Doing the math with $g=9.81\;\text{m/s}^2$ I see the ball will need $\Delta t = 1.0333\;\text{s}$ to reach $v_f = 9.728\;\text{m/s}$ and $\Delta d= 5.13\,\text{m}$. This means your timing error is about $1.15 - 1.033 = 0.1167\,\text{s}$ too slow.
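The drag numbers above are easy to reproduce (values copied from the answer; rounding in the last digit differs slightly):

```python
import math

# Drag force and deceleration for the golf ball, using the answer's values.
rho = 1.2        # kg/m^3, density of air
C_D = 0.65       # drag coefficient of a golf ball at low Re
A = 0.001432     # m^2, frontal area of the ball
m = 0.046        # kg, mass of the ball
g = 9.81         # m/s^2
v = 8.92         # m/s, the final speed from the question

F_drag = 0.5 * rho * C_D * A * v**2      # ~0.044 N
a_drag = F_drag / m                      # ~0.97 m/s^2, roughly 10% of g

t_vacuum = math.sqrt(2 * 5.13 / g)       # fall time with no drag, ~1.02 s
print(F_drag, a_drag, t_vacuum)
```

The no-drag fall time of about 1.02 s stretches to the 1.033 s quoted once drag is included, so the measured 1.15 s still points at roughly 0.12 s of timing error.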
{ "domain": "physics.stackexchange", "id": 90928, "tags": "homework-and-exercises, acceleration, drag, error-analysis, free-fall" }
Average length of s-t (simple) paths in a directed graph
Question: Given the fact that $s$-$t$ path enumeration is a #P-complete problem, could there be efficient methods that compute (or at least approximate) the average length of $s$-$t$ paths without enumerating them? What if paths are allowed to revisit vertices? Relevant results on special graphs could also be helpful. Answer: calculating/estimating/approximating the average path length has been studied for some random graph models including the Erdos-Renyi model and the Barabasi-Albert scale free networks, and also the Watts-Strogatz small world graphs which may be suitable as approximations for your graphs. [it would be better if you could narrow down/detail some nature/characteristics of the graphs you're studying.] Computing the average path length and a label-based routing in a small-world graph — Philippe J. Giabbanelli, Dorian Mazauric, and Stephane Perennes Average path length in random networks — Agata Fronczak, Piotr Fronczak, Janusz A. Holyst The average distance in a random graph with given expected degrees — Fan Chung, Linyuan Lu AN ESTIMATION OF THE SHORTEST AND LARGEST AVERAGE PATH LENGTH IN GRAPHS OF GIVEN DENSITY — Laszlo Gulyas, Gabor Horvath, Tamas Cseri and George Kampis
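For intuition, the quantity in question can be computed by brute force on small graphs, enumerating simple paths with a DFS; this is exponential in general, which is exactly why efficient exact methods are not expected and approximations are of interest. A minimal sketch (the example graph is made up):

```python
# Brute-force baseline: enumerate all simple s-t paths by DFS and average
# their lengths. Exponential in general, hence the #P-hardness.
def simple_paths(graph, s, t, path=None):
    path = (path or []) + [s]
    if s == t:
        yield path
        return
    for nxt in graph.get(s, []):
        if nxt not in path:            # simple paths: no revisits
            yield from simple_paths(graph, nxt, t, path)

# Small directed example graph (adjacency lists).
g = {"s": ["a", "b"], "a": ["b", "t"], "b": ["t"]}
paths = list(simple_paths(g, "s", "t"))
lengths = [len(p) - 1 for p in paths]   # length = number of edges
print(sum(lengths) / len(lengths))      # 3 paths of lengths 3, 2, 2 -> 7/3
```

If paths may revisit vertices, dropping the `nxt not in path` check makes the path set infinite on cyclic graphs, so the average is only well-defined with some bound or weighting.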
{ "domain": "cs.stackexchange", "id": 1265, "tags": "algorithms, complexity-theory, graphs, approximation, enumeration" }
How to understand data from the image_raw/compressed topic?
Question: I wanted to know what they represent and how to read data from the image_raw / compressed topic. For example, the laser / scan topic, your "ranges" data represent the distance to an object, so the image_raw / compressed topic represents exactly what? How can I use them? Here is an example of the topic: header: seq: 2068 stamp: secs: 689 nsecs: 786000000 frame_id: camera_link format: rgb8; jpeg compressed bgr8 data: [255, 216, 255, 224, 0, 16, 74, 70, 73, 70, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 255, 219, 0, 67, 0, 6, 4, 5, 6, 5, 4, 6, 6, 5, 6, 7, 7, 6, 8, 10, 16, 10, 10, 9, 9, 10, 20, 14, 15, 12, 16, 23, 20, 24, 24, 23, 20, 22, 22, 26, 29, 37, 31, 26, 27, 35, 28, 22, 22, 32, 44, 32, 35, 38, 39, 41, 42, 41, 25, 31, 45, 48, 45, 40, 48, 37, 40, 41, 40, 255, 219, 0, 67, 1, 7, 7, 7, 10, 8, 10, 19, 10, 10, 19, 40, 26, 22, 26, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 255, 192, 0, 17, 8, 3, 32, 3, 32, 3, 1, 34, 0, 2, 17, 1, 3, 17, 1, 255, 196, 0, 31, 0, 0, 1, 5, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 255, 196, 0, 181, 16, 0, 2, 1, 3, 3, 2, 4, 3, 5, 5, 4, 4, 0, 0, 1, 125, 1, 2, 3, 0, 4, 17, 5, 18, 33, 49, 65, 6, 19, 81, 97, 7, 34, 113, 20, 50, 129, 145, 161, 8, 35, 66, 177, 193, 21, 82, 209, 240, 36, 51, 98, 114, 130, 9, 10, 22, 23, 24, 25, 26, 37, 38, 39, 40, 41, 42, 52, 53, 54, 55, 56, 57, 58, 67, 68, 69, 70, 71, 72, 73, 74, 83, 84, 85, 86, 87, 88, 89, 90, 99, 100, 101, 102, 103, 104, 105, 106, 115, 116, 117, 118, 119, 120, 121, 122, 131, 132, 133, 134, 135, 136, 137, 138, 146, 147, 148, 149, 150, 151, 152, 153, 154, 162, 163, 164, 165, 166, 167, 168, 169, 170, 178, 179, 180, 181, 182, 183, 184, 185... 
Originally posted by jmva on ROS Answers with karma: 1 on 2017-11-20 Post score: 0 Answer: For a compressed image, the byte stream in the data array is the jpeg compressed representation of the image. To use it with OpenCV, this jpeg representation first has to be decompressed to a uncompressed representation. Luckily, this is made transparent by the image_transport mechanism, so this is taken care of for you if you use that. Have a look at compressed_image_transport. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2017-11-21 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by jmva on 2017-11-21: Thank you Stefan, I'll try this. I think it works for me.
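As a concrete check of what the data field contains: the first bytes in the question are exactly the start of a JPEG file, the SOI marker 0xFF 0xD8 (255, 216) followed by an APP0/JFIF segment. A sketch using the bytes copied from the question:

```python
# The "data" array above is just a JPEG file, byte for byte. A quick check:
# JPEG streams start with the SOI marker 0xFF 0xD8, followed here by an
# APP0/JFIF segment (0xFF 0xE0 ... 'J','F','I','F').
data = [255, 216, 255, 224, 0, 16, 74, 70, 73, 70, 0]  # first bytes above

assert data[0:2] == [0xFF, 0xD8]          # SOI: start of image
assert data[2:4] == [0xFF, 0xE0]          # APP0 marker
assert bytes(data[6:10]) == b"JFIF"       # JFIF identifier

# Writing the full array to disk as bytes would give a decodable .jpg;
# in ROS, compressed_image_transport does this decompression for you.
print("looks like a JPEG:", bytes(data[6:10]).decode())
```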
{ "domain": "robotics.stackexchange", "id": 29414, "tags": "ros, gazebo, image-raw, camera" }
Find Magnitude of a DFT signal using a Blackman window
Question: Hi I've been given a signal made of a series of cosines. I have taken the DFT of the signal using a rectangular window (blue), Hamming window (red) and a Blackman window (black). I have identified two of the cosine components of the signal. I was wondering how I can use the dB magnitude to find the actual magnitude of the frequency. This is so that I can recreate the signal by summing all of the cosines found. I was also wondering why these frequencies can be seen on the Blackman window but not the other 2 windows? Any help would be greatly appreciated! Answer: First of all, this is a (y-axis) logarithmic plot, so the plotted magnitudes are given by : $$ Y_{dB} = 20 \cdot \log_{10} ( X ) ~~~,~~~ X > 0 .$$ And the linear amplitude is found by inverting it : $$ X = 10^{Y_{dB}/20} .$$ There will also be an amplitude scale due to DFT length and the window type, which you can determine by a calibration procedure (if not with a formula). You can also find the formula, or just approximate it. Coming to the reason why the Blackman window was able to show the nearby components of lower amplitudes: among the windows there, the Blackman window has the largest main lobe width, but the smallest side-lobe peaks; i.e., fast decaying tails. Large main lobe width means low frequency resolution: if you have two sine waves very close in frequency together, then they will look like a single component instead of two. Hence spectral resolution (discrimination of two close components of similar amplitude) is reduced. Small side lobe peaks (or fast decaying tails) mean reduced leakage: When there's a mixture of large and small amplitude sinewaves (at arbitrary frequencies) in the signal, then the spectrum of the largest amplitude signal will shadow the spectrum of the remaining smaller amplitude components, especially when they are closer to the main component. All those components falling below the (modulated) window's tail magnitude will be invisible. 
Hence a window type with small tail magnitude (fast decaying tails) will have less shadow on the remaining spectrum. The effect is more pronounced as you get closer to the main lobe of the largest components (as the tail amplitude will be larger around the main lobe).
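The inversion $X = 10^{Y_{dB}/20}$ plus a window-dependent scale can be sketched as follows. The coherent-gain formula sum(w)/2 for the peak of an on-bin real cosine is a standard approximation, not something stated in the answer, and a calibration with a known test tone (as the answer suggests) is the safer route:

```python
import math

# Convert a dB peak back to linear units, then undo the DFT/window scaling.
# For a length-N DFT of A*cos(2*pi*f0*n/N) with window w, the magnitude
# peak is roughly A * sum(w) / 2 when f0 falls on a bin.
def db_to_linear(y_db):
    return 10 ** (y_db / 20)

def peak_to_amplitude(y_db, window):
    coherent_gain = sum(window) / 2
    return db_to_linear(y_db) / coherent_gain

N = 1024
rect = [1.0] * N                      # rectangular window: w[n] = 1
# A rectangular-window peak of 20*log10(N/2) dB maps back to amplitude 1:
print(round(peak_to_amplitude(20 * math.log10(N / 2), rect), 6))   # 1.0
```

For a Hamming or Blackman window you would pass the actual window samples instead of `rect`, since their smaller sum gives a smaller coherent gain.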
{ "domain": "dsp.stackexchange", "id": 9616, "tags": "matlab, signal-analysis, dft, window-functions" }
Wavy baseline in carbon NMR
Question: I realize anything NMR-related is essentially physics, but I figured this was a better place to ask this. I have this issue fairly regularly (with one instrument in particular) where my baseline ends up looking like the one in the image below. The (inverted) sine wave-shape is what I'm referring to, if it's not obvious. Is there something specific that causes this baseline shape? Is there a way to fix it? Preferably I'd want to use something that fixed the signal/input, but if I have to resort to a processing method, I'd settle for that. Answer: This baseline roll seen in this spectrum is an example of acoustic ringing, and is quite normal for many low frequency NMR experiments that are recorded over wide sweep widths. It occurs as a consequence of slight breakthrough of the excitation pulse, as it rings down to zero. Ideally, you would like to start collecting signal at your receiver immediately after your excitation pulse, however it does take a small amount of time (microseconds) for the pulse to decay to zero. If your pre-scan delay (typically configured for your spectrometer during the configuration process) is too short, you can get this ringing artifact. It is more pronounced for weaker samples, but usually nothing to worry about. To correct this as an experimental acquisition parameter, you would make your pre-scan delay longer; for Topspin, this is the parameter DE in your acquisition parameter list. It is normally defined during spectrometer configuration, and needs to be as short as practically possible to maximise your signal detection - typically less than 10 microseconds. For some nuclei, much of the signal has already decayed during this pre-scan period, and require some backward linear prediction to help. I'd speak to your local friendly NMR spectroscopist before mucking about with changing parameters like DE. 
It is perfectly normal, and you should be able to get rid of it by applying a simple baseline correction algorithm - for Topspin you should use the commands abs or absd. This should work for your spectrum. As a word of caution when applying baseline corrections to your 1H spectrum, this will also apply an automatic integration for you, overwriting any integration you may have done previously. You can avoid this by using the optional n switch (abs n). For severe cases of baseline roll, apply a partial baseline correction using absf between the limits of absf1 and absf2. Your Topspin manual can provide a fuller description, including other baseline correction modules. In recent versions of TopSpin (4 onwards) there is also the apbk command, which uses a new algorithm to perform simultaneous phase and baseline correction. This can yield better results than the traditional apk + abs combination, where phase and baseline correction is carried out separately.
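Outside Topspin, the same idea can be sketched generically: fit a smooth model to the signal-free regions and subtract it. The snippet below uses a polynomial fit on synthetic data; it illustrates the principle, not Topspin's actual abs algorithm:

```python
import numpy as np

# Model a "wavy" baseline roll with a low-order polynomial fitted to the
# peak-free regions of the spectrum, then subtract it.
x = np.linspace(0, 1, 500)
baseline = 0.5 * np.sin(2 * np.pi * x)          # synthetic baseline roll
peak = np.exp(-0.5 * ((x - 0.5) / 0.01) ** 2)   # one sharp "signal"
spectrum = baseline + peak

mask = np.abs(x - 0.5) > 0.1                    # exclude the peak region
coeffs = np.polyfit(x[mask], spectrum[mask], 7) # fit the roll where empty
corrected = spectrum - np.polyval(coeffs, x)

print(float(np.abs(corrected[mask]).max()))     # residual roll, near zero
```

After the subtraction the roll in the masked region is tiny while the peak survives essentially untouched, which is the behaviour abs/absd aim for.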
{ "domain": "chemistry.stackexchange", "id": 5481, "tags": "experimental-chemistry, nmr-spectroscopy" }