Find cut-off frequency of a low-pass filter for a given output signal
Question: Suppose we have an ideal low-pass filter $H(e^{j\theta})$ with a cut-off frequency $\theta_c$ in the range of $0\leq \theta_c \leq \pi$. I want to know the input signal $x[n]$, as well as the cut-off frequency $\theta_c$, which produces the following output signal $y[n]$. To solve this, I started by transforming $y[n]$ to the frequency domain by calculating the DTFT: $$ Y(e^{j\theta})=\sum_{n=-\infty}^{\infty}y[n]e^{-j\theta n}=1+e^{-j\theta}+2e^{-j2\theta}+e^{-j3\theta}+e^{-j4\theta} $$ Now I can calculate $X(e^{j\theta})=\frac{Y(e^{j\theta})}{H(e^{j\theta})}$. This is only possible for $|\theta| \leq \theta_c$, since outside of this region $H(e^{j\theta}) = 0$ and $X(e^{j\theta})$ would approach infinity. But inside this region $H(e^{j\theta})$ is simply 1, and when transforming $X(e^{j\theta})$ back to $x[n]$ it is just the same as $y[n]$, a behaviour that is expected of a low-pass. The thing I am not sure about is the cut-off frequency. If I simply plot $Y(e^{j\theta})$, I see that there are frequency components up to $\pi$. So I could just say alright, $\theta_c=\pi$. But is it really like that? Because the low-pass is actually just dependent on $\theta$ and not all the multiples of it, which I find in the complex exponentials of $Y(e^{j\theta})$. So how do I calculate $\theta_c$? Answer: Set $\theta_c = \pi$ and $x[n] = y[n]$, and you're done; you're not dividing by zero anywhere.
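A quick numerical check of this conclusion (my own snippet; it assumes $y[n] = \{1, 1, 2, 1, 1\}$, which is what the DTFT above implies):

```python
import numpy as np

# y[n] = {1, 1, 2, 1, 1} for n = 0..4, matching the DTFT in the question.
y = np.array([1, 1, 2, 1, 1], dtype=float)

def dtft(y, theta):
    """Evaluate Y(e^{j*theta}) = sum_n y[n] * e^{-j*theta*n} for each theta."""
    n = np.arange(len(y))
    return np.array([np.sum(y * np.exp(-1j * t * n)) for t in np.atleast_1d(theta)])

# At theta = pi the terms alternate in sign: 1 - 1 + 2 - 1 + 1 = 2, nonzero.
Y_at_pi = abs(dtft(y, np.pi))[0]
Y_at_dc = abs(dtft(y, 0.0))[0]   # sum of samples = 6
```

Since $|Y(e^{j\pi})| = 2 \neq 0$, the spectrum genuinely occupies the whole band, so no cut-off below $\pi$ could have produced $y[n]$.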
{ "domain": "dsp.stackexchange", "id": 4514, "tags": "fourier-transform" }
Does a well-lit mirror weigh more than an unlit mirror?
Question: If you weighed a mirror in a room with no light, and then weighed a mirror in a well-lit room so that the mirror reflects light, would the weight be different? Answer: Yes, if the mirror were on the floor and facing upward. The transfer of momentum as photons recoil from it and are absorbed by the surroundings would make it heavier.
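To see how small the effect is, here is a rough order-of-magnitude sketch (the incident power is my own assumption, chosen very generously):

```python
# Radiation pressure on a perfectly reflecting, upward-facing mirror at normal
# incidence: each reflected photon transfers twice its momentum, so F = 2*P/c.
c = 299_792_458.0        # speed of light, m/s
g = 9.81                 # gravitational acceleration, m/s^2

P = 1000.0               # assumed incident optical power on the mirror, W (a full kilowatt!)
F = 2 * P / c            # downward reaction force from reflected photons, N
equivalent_mass = F / g  # apparent extra "weight" on a scale, kg

# F is about 6.7 micronewtons, i.e. under a milligram of apparent extra weight.
```

Even under a kilowatt of light the apparent extra weight is well below what an ordinary scale resolves, which is why the effect is real but unobservable in practice.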
{ "domain": "physics.stackexchange", "id": 22170, "tags": "reflection" }
Pose estimation using CNNs on Point clouds
Question: Consider the case of single-shot detection on point clouds, i.e. the point cloud of an object is taken from only one camera view, without any registration. Can a convolutional network estimate the 6D pose of objects (initially primitive 3D objects: cylinders, spheres, cuboids)? The dataset will be generated by simulating a depth sensor using a physics engine (e.g. Gazebo), with primitive 3D objects spawned at known 6D poses as ground truth. The resulting training data will be the single-view point cloud of the object with the ground-truth label (6D pose). Answer: Yes, this is possible, and here are papers that do almost exactly the project you are describing, although none of them combine Gazebo, single-shot point clouds, 6D pose, and CNNs in order to use synthetic data to train a model that works on real data. Pose Estimation by Key Points Registration in Point Cloud (2019) by Weiyi Zhang, Chenkun Qi: this paper uses Gazebo to validate the model they made. Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics (2017) by Jeffrey Mahler, Jacky Liang, Sherdil Niyaz, Michael Laskey, Richard Doan, Xinyu Liu, Juan Aparicio Ojea, Ken Goldberg. Real-Time Seamless Single Shot 6D Object Pose Prediction (2018) by Bugra Tekin, Sudipta N. Sinha, Pascal Fua: this paper sounds like exactly what you want to do. SSD-6D: Making RGB-Based 3D Detection and 6D Pose Estimation Great Again (2017) by Wadim Kehl, Fabian Manhardt, Federico Tombari, Slobodan Ilic, Nassir Navab: this paper is also referenced by them. The model can certainly be trained; the challenge will be how effectively a model trained on synthetic data functions on real data.
{ "domain": "ai.stackexchange", "id": 1513, "tags": "deep-learning, convolutional-neural-networks, object-recognition, object-detection, regression" }
Algorithm to test whether a binary tree is a search tree and count complete branches
Question: I need to create a recursive algorithm to see if a binary tree is a binary search tree as well as count how many complete branches there are (a parent node with both left and right children nodes) with an assumed global counting variable. This is an assignment for my data structures class. So far I have void BST(tree T) { if (T == null) return if ( T.left and T.right) { if (T.left.data < T.data or T.right.data > T.data) { count = count + 1 BST(T.left) BST(T.right) } } } But I can't really figure this one out. I know that this algorithm won't solve the problem because the count will be zero if the second if statement isn't true. Could anyone help me out on this one? Answer: As others have already indicated in comments, you really have two unrelated functions here: testing whether the tree is a search tree, and counting the complete branches. Unless the assignment specifically calls for it, I would write two separate functions. Let's see about counting the complete branches first. That means counting the nodes that have both a left child and a right child. Then you need to increment the counter (count = count + 1) when both T.left and T.right are non-null (not T.left.data and T.right.data: the data doesn't matter for this task). if (T.left and T.right) { count = count + 1 Furthermore, you need to explore the left subtree even if the right subtree is empty, and you need to explore the right subtree even if the left subtree is empty. So watch where you put the recursive calls. To test whether the tree is a search tree, you need to inspect the data values. You've already got something close to the right comparison, but it's not quite right. Write a few example trees with various shapes (not very big, 2 to 5 nodes) and run your algorithm on them to see what happens. You still need to find some place to put the result of the validity check. 
Again, watch where you put the recursive calls (if you only do this part, there are several solutions, but at this stage don't worry if you only see one). Finally, once you've managed to write both functions separately, and you've tested them on a few examples, put them together carefully (if required by the assignment).
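If you want something to check your eventual solution against, the two separate functions might look like the following sketch (a hypothetical `Node` class of my own; this is one possible shape for the solution, not the only one):

```python
# Hypothetical node type with .data, .left, .right, mirroring the poster's tree.
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def count_complete_branches(t):
    """Count nodes that have BOTH a left and a right child."""
    if t is None:
        return 0
    here = 1 if (t.left is not None and t.right is not None) else 0
    # Recurse into both subtrees unconditionally, even when one child is missing.
    return here + count_complete_branches(t.left) + count_complete_branches(t.right)

def is_bst(t, lo=float("-inf"), hi=float("inf")):
    """Check the search-tree property by narrowing the valid bounds on the way down."""
    if t is None:
        return True
    if not (lo < t.data < hi):
        return False
    return is_bst(t.left, lo, t.data) and is_bst(t.right, t.data, hi)
```

Note that comparing only a child against its parent is not enough: the bounds approach catches a grandchild that violates the ordering of an ancestor.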
{ "domain": "cs.stackexchange", "id": 3, "tags": "algorithms, recursion, trees" }
Confusions about oxidation and reduction
Question: My teacher told me that elements get oxidised when: 1) There is a loss of electrons 2) There is an addition of oxygen 3) There is a loss of hydrogen And that they get reduced when: 1) There is a gain of electrons 2) There is a loss of oxygen 3) There is an addition of hydrogen Now I don't understand these things: 1) Most elements will lose an electron (and get oxidised) on the addition of oxygen due to its high electronegativity. But what about $\ce{OF2}$? In this case shouldn't oxygen get oxidised and fluorine get reduced due to the higher electronegativity of fluorine? (I know it's a covalent bond, but I am talking in terms of which side the electrons will be attracted to.) 2) How does loss of hydrogen (or addition of hydrogen) oxidise (or reduce) an element? There are elements that have a higher electronegativity than hydrogen, so how will they get oxidised on the loss of hydrogen? They should get reduced because of the gain of an electron from hydrogen, right? Answer: The formal definition of an oxidation/reduction is linked to the loss/gain of electrons. Considering the addition or loss of hydrogen and oxygen is not a global rule, but rather a trick generally used in organic chemistry. If you consider organic molecules, thus mainly composed of carbon, "adding" oxygen atoms suggests an oxidation of the carbon, as oxygen is more electronegative than carbon, while "adding" hydrogen atoms suggests a reduction, as hydrogen is less electronegative than carbon. An example would be the successive oxidation of ethanol to ethanal, then acetic acid: $$\ce{CH3CH2OH ->[-2 e^-][``-2\text{ H}"] CH3CHO ->[-2e^-][``+1\text{ O}"] CH3COOH}$$
{ "domain": "chemistry.stackexchange", "id": 6083, "tags": "redox, electronegativity" }
Python source for built-in message types
Question: Where can I find Python source code for the built-in message types? I need them for a presentation. I've looked for them but I guess that they are not on GitHub but that they are generated during builds. Specifically for the moment I am looking for Twist Originally posted by pitosalas on ROS Answers with karma: 628 on 2018-09-02 Post score: 0 Answer: The message types are generated into the devel directory of your workspace (like other things that your source code wants to include), under devel/lib/python2.7/dist-packages/. The built-in ones are in the install directory, so since you are on kinetic, most likely /opt/ros/kinetic/lib/python2.7/dist-packages/. PS: find . -name std_msgs helped. Originally posted by fvd with karma: 2180 on 2018-09-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by gvdhoorn on 2018-09-03: @pitosalas: this is the answer to your question. If it isn't, then please clarify why you haven't accepted it yet. All msg packages can be found in the directories that @fvd mentions, including geometry_msgs. Comment by pitosalas on 2018-09-03: Yes, I found it. I was mistakenly looking in std_msgs. Thanks all!
{ "domain": "robotics.stackexchange", "id": 31696, "tags": "rospy, ros-kinetic" }
Relation between histogram equalization and 'Auto Levels'
Question: I need to enhance an image using only global linear adjustment of intensity, i.e.: $$I'=aI+b,\qquad a,b\in \mathbb{R}$$ where $I$ and $I'$ are the input and output (enhanced) images. I already know how to determine the best values of $a,b$ through histogram stretching given the minimum and maximum intensity within the image. However, this will not work on the following image. I have added a black and a white pixel (in the red circle): The image already contains the values 0 and MAX, so further histogram stretching is not possible. However, I tried the "Auto Levels" feature in the Paint.NET software with the following result: Looking at the dialog window, there is some extra factor that affects the overall histogram shape (the number in the middle right part): Is this an additional scaling factor? Or is it some parameter of a non-linear transform? How is it determined? Answer: From the paint.NET manual, the parameter in the middle is a gamma correction, which can be used to enhance the contrast in the dark tones or high tones. The gamma curve is the simplest non-linear level transfer curve. A more sophisticated non-linear technique for automatic level adjustment is histogram equalization, which consists of applying a monotonic, non-linear map to the image, so that the CDF of the resulting image is linear.
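The two operations discussed here are easy to sketch (illustrative NumPy code of my own, not Paint.NET's actual implementation):

```python
import numpy as np

def linear_stretch(img, out_max=255.0):
    """Global linear adjustment I' = a*I + b mapping [min, max] onto [0, out_max]."""
    lo, hi = img.min(), img.max()
    a = out_max / (hi - lo)   # degenerates when the image is constant (hi == lo)
    b = -a * lo
    return a * img + b

def gamma_correct(img, gamma, out_max=255.0):
    """Gamma curve: gamma < 1 lifts the dark tones, gamma > 1 darkens them."""
    return out_max * (img / out_max) ** gamma

img = np.array([0.0, 32.0, 64.0, 255.0])  # toy "image" already spanning [0, 255]
stretched = linear_stretch(img)           # unchanged: endpoints already at 0 and 255
brightened = gamma_correct(img, 0.5)      # dark values lifted, endpoints fixed
```

This reproduces the situation in the question: once the histogram already spans the full range, the linear stretch is the identity, while the gamma curve still changes the mid-tones without moving the endpoints.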
{ "domain": "dsp.stackexchange", "id": 450, "tags": "image-processing, histogram" }
Why does the pH of a weak acid not increase by 1 when diluted by a factor of 10?
Question: Strong acid pH increases by one unit when diluted by a factor of 10, but why do weak acids not? Answer: In a weak acid $\ce{HB}$ solution, with a nominal concentration $c$, a tiny amount of its molecules are dissociated into $\ce{H+}$ and $\ce{B-}$. Let's call this concentration $[\ce{H+}] = [\ce{B-}] = x \ll c$, so that the approximation $c - x \approx c$ can be made. The dissociation equilibrium constant $K_a$ of this weak acid $\ce{HB}$ can then be approximated by $$K_a = \frac{[\ce{H+}][\ce{B-}]}{c - x} = \frac{x^2}{c-x} \approx \frac{x^2}{c}$$ $$[\ce{H+}] = x = \sqrt{K_a c}$$ $$\mathrm{pH} = -\tfrac{1}{2}\left(\log K_a + \log c\right) = \tfrac{1}{2}\left(\mathrm{p}K_a - \log c\right)$$ Look at the coefficient $1/2$ before the logarithm. If the concentration $c$ is divided by $10$, $\log c$ decreases by $1$, but the pH increases by only $1/2$.
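A quick numerical confirmation of the half-unit shift (using acetic acid's $K_a$ as an example value):

```python
import math

# Under the weak-acid approximation [H+] = sqrt(Ka * c), a 10x dilution
# changes the pH by exactly half a unit.
Ka = 1.8e-5  # acetic acid, a typical weak acid

def ph_weak(c):
    return -0.5 * (math.log10(Ka) + math.log10(c))

ph_before = ph_weak(0.1)                # ~2.87 for 0.1 M
delta = ph_weak(0.01) - ph_before       # dilute 0.1 M -> 0.01 M
# delta is exactly 0.5 under this approximation
```

(The approximation itself degrades at high dilution, where water autoionization and the $c - x \approx c$ step both break down, so the real shift drifts away from $1/2$ for very dilute solutions.)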
{ "domain": "chemistry.stackexchange", "id": 14983, "tags": "organic-chemistry, physical-chemistry, acid-base, equilibrium, ph" }
Dining Philosopher's problem implementation with Java Locking Framework to avoid deadlock
Question: Dining philosopher problem is one of the classic problems in computer science. I intended to implement it using Java threads. I attempted using the locking framework that came with Java 5 and used the tryLock() method to avoid deadlock. My implementation is fairly simple. I implemented the runnable interface to represent a philosopher and used executor service to run all these runnable. As a lock, I have used ReentrantLock. I know there are several implementations are already discussed here, but I would like to get some review on my implementation. import java.time.LocalDateTime; import java.time.format.DateTimeFormatter; import java.util.Random; import java.util.concurrent.TimeUnit; import java.util.concurrent.locks.Lock; public class Philosopher implements Runnable { private String name; private final Lock leftFork; private final Lock rightFork; public Philosopher(String name, Lock leftFork, Lock rightFork) { this.name = name; this.leftFork = leftFork; this.rightFork = rightFork; } public void think() { log("thinking"); } public void eat() { //assume, eating requires some time. 
//let's put a random number try { log("eating"); int eatingTime = getRandomEatingTime(); TimeUnit.NANOSECONDS.sleep(eatingTime); } catch (InterruptedException e) { Thread.currentThread().interrupt(); } } @Override public void run() { while (true) { keepThinkingAndEating(); } } private void keepThinkingAndEating() { think(); if (leftFork.tryLock()) { try { log("grabbed left fork"); if (rightFork.tryLock()) { try { log("grabbed right fork"); eat(); } finally { log("put down right fork"); rightFork.unlock(); } } } finally { log("put down left fork"); leftFork.unlock(); } } } private void log(String msg) { DateTimeFormatter formatter = DateTimeFormatter.ISO_LOCAL_TIME; String time = formatter.format(LocalDateTime.now()); String thread = Thread.currentThread().getName(); System.out.printf("%12s %s %s: %s%n", time, thread, name, msg); System.out.flush(); } private int getRandomEatingTime() { Random random = new Random(); return random.nextInt(500) + 50; } } And the main method to run this code: import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.locks.Lock; import java.util.concurrent.locks.ReentrantLock; public class PhilosopherMain { public static void main(String[] args) { Lock[] forks = new Lock[5]; for (int i = 0; i < forks.length; i++) { forks[i] = new ReentrantLock(); } Philosopher[] philosophers = new Philosopher[5]; ExecutorService executorService = Executors.newFixedThreadPool(5); for (int i = 0; i < philosophers.length; i++) { Lock leftFork = forks[i]; Lock rightFork = forks[(i + 1) % forks.length]; philosophers[i] = new Philosopher("Philosopher " + (i + 1), leftFork, rightFork); executorService.execute(philosophers[i]); } executorService.shutdown(); } } Answer: Your implementation is minimalistic but it does what it does and it does it good. 
Your code consists basically of three elements: setting up the table, logging the philosophers' actions, and handling philosopher behaviour. The first two are not that interesting, as they may differ slightly from implementation to implementation, and in general you have not reached a code mass that raises any modularization issues. The third is the part I want to address. You chose the egoistic variant of the dining philosophers: a philosopher may face starvation if other philosophers keep taking the forks at unfavorable points in time. Another variant is cooperative, and you have to introduce one more artefact to implement it: philosophers may hand acquired forks directly to their neighbors if those neighbors registered for a fork while it was unavailable. This prevents a philosopher from releasing a fork and immediately reacquiring it, leaving the other philosophers no chance to insert themselves into the process.
{ "domain": "codereview.stackexchange", "id": 28383, "tags": "java, multithreading, thread-safety, locking, dining-philosophers" }
Do magnetic fields generated from ion flow through a membrane channel have any physiological relevance?
Question: Membrane pores and transporters see millions of ions flow through them per second. This creates a current and therefore a magnetic field. Do cells have any use for these fields (like maybe drawing charged receptors together?) or are they physiologically irrelevant? Answer: For currents in biological systems the magnetic field is minuscule, and likely has no physiological effect. That said, magnetoencephalography is based on measuring the tiny magnetic fields created when nerves fire. Detection requires an incredibly sensitive instrument, a superconducting quantum interference device (SQUID), to locate the currents. A SQUID can detect fields of just a few attoteslas (aT)! Perhaps one could investigate how much ion flow in electric eel muscle is constrained by the induced magnetic field... It could lead to shocking revelations.
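A rough estimate of just how minuscule (my own numbers, treating the channel as a thin straight wire and applying Ampère's law):

```python
import math

# B = mu0 * I / (2 * pi * r) for a long thin current filament.
mu0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
q = 1.602e-19          # elementary charge, C

ion_rate = 1e7         # ions/s through a fast channel (order of magnitude)
I = ion_rate * q       # ~1.6 pA of current

r = 10e-9              # 10 nm from the channel axis, i.e. within the membrane
B = mu0 * I / (2 * math.pi * r)
# B is on the order of 3e-11 T, about six orders of magnitude below
# Earth's ~5e-5 T background field.
```

So even right next to the channel, the field is swamped by ambient magnetic noise, which supports the "physiologically irrelevant" conclusion.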
{ "domain": "chemistry.stackexchange", "id": 12732, "tags": "biochemistry, magnetism" }
Print the Twelve Days of Christmas without loops or conditionals
Question: For my AP Computer Science class, I am required to write a program that prints the Twelve Days of Christmas without loops or conditionals using static methods. public class TwelveDaysOfChristmas { public static final String[] lines = new String[] { "A partridge in a pear tree", "Two turtle doves and", "Three French Hens,", "Four calling birds", "Five golden rings.", "Six geese a-laying,", "Seven swans a-swimming,", "Eight maids a-milking,", "Nine ladies dancing,", "Ten lords a-leaping,", "Eleven Pipers piping,", "Twelve drummers drumming," }; public static final String[] days = new String[] { "first", "second", "third", "fourth", "fifth", "sixth", "seventh", "eighth", "ninth", "tenth", "eleventh", "twelfth" }; public static final String verseText = "On the %s day of Christmas,\n" + "My true love sent to me\n" + "%s"; public static void main(String[] args) { //Print the song System.out.println(song()); } public static String song() { //The song consists of all verses up to verse 12, but the index starts from 0 //Print verses 0 to 11 return versesBelow(11); } public static String versesBelow(int verse) { //Evil hack that allows me to check whether the input is zero without using a conditional ;) try { //Divide by verse. If the verse is zero, then an ArithmeticException is thrown. //I am not using a conditional here!!! int i = 1 / verse; } catch(ArithmeticException ex) { //Stop the recursion return verse(verse); } //else return this verse and all the verses below this one. 
return versesBelow(verse - 1) + verse(verse); } public static String verse(int verse) { //Format the string with the correct day and verse and add a newline for a blank space after return String.format(verseText, days[verse], linesBelow(verse)) + '\n'; } public static String linesBelow(int line) { //Same evil hack try { int i = 1 / line; } catch(ArithmeticException ex) { //Stop the recursion if line is zero return line(line); } //Else return this line and all the lines below this line return line(line) + linesBelow(line - 1); } public static String line(int line) { //Return this line and a newline return lines[line] + '\n'; } } Is there a better way to accomplish this than my way? I do kind of cheat a little, but it's not technically a conditional, is it? Answer: Recursion is something that is often "unwrapped" to become loops, but the same is true, in the other direction. Almost all loops can be implemented as recursion. Using your "trick" of throwing an exception to act as a conditional, it would be simple to turn all your code... this code: public static final String verseText = "On the %s day of Christmas,\n" + "My true love sent to me\n" + "%s"; public static void main(String[] args) { //Print the song System.out.println(song()); } public static String song() { //The song consists of all verses up to verse 12, but the index starts from 0 //Print verses 0 to 11 return versesBelow(11); } public static String versesBelow(int verse) { //Evil hack that allows me to check whether the input is zero without using a conditional ;) try { //Divide by verse. If the verse is zero, then an ArithmeticException is thrown. //I am not using a conditional here!!! int i = 1 / verse; } catch(ArithmeticException ex) { //Stop the recursion return verse(verse); } //else return this verse and all the verses below this one. 
return versesBelow(verse - 1) + verse(verse); } public static String verse(int verse) { //Format the string with the correct day and verse and add a newline for a blank space after return String.format(verseText, days[verse], linesBelow(verse)) + '\n'; } public static String linesBelow(int line) { //Same evil hack try { int i = 1 / line; } catch(ArithmeticException ex) { //Stop the recursion if line is zero return line(line); } //Else return this line and all the lines below this line return line(line) + linesBelow(line - 1); } public static String line(int line) { //Return this line and a newline return lines[line] + '\n'; } in to just: public static void main(String[] args) { try { recurseDown(0); } catch (RuntimeException e) { // nothing. } } private static void recurseDown(int i) { System.out.println("\nOn the " + days[i] + " day of Christmas\nMy true love sent to me"); try { recurseUp(i); } catch (RuntimeException e) { // nothing. } recurseDown(i + 1); } private static void recurseUp(int i) { System.out.println(lines[i]); recurseUp(i - 1); } you can see it running here: http://ideone.com/SAte6Z Written properly, it would be: public static void main(String[] args) { recurseDown(0); } private static void recurseDown(int i) { if (i == days.length) { return; } System.out.println("\nOn the " + days[i] + " day of Christmas\nMy true love sent to me"); recurseUp(i); recurseDown(i + 1); } private static void recurseUp(int i) { if (i < 0) { return; } System.out.println(lines[i]); recurseUp(i - 1); }
{ "domain": "codereview.stackexchange", "id": 17256, "tags": "java, recursion" }
What is the typical AI approach for solving blackjack?
Question: I'm currently developing a blackjack program. Now, I want to create an AI that essentially uses the mathematics of blackjack to make decisions. So, what is the typical AI approach for solving blackjack? It doesn't have to be language-specific, but if it will help with an answer, the language I plan to use to do this would be Python. Answer: Blackjack is usually modelled using Monte Carlo (MC) Methods. There is a lot of literature on MC methods, which is interesting in its own right, but here is a paper describing how MC is applied to Blackjack. There is also a good description on page 110 of the Introduction to Reinforcement Learning. Good luck!
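To give a flavour of the Monte Carlo idea (a toy example of mine, not taken from the cited references): instead of deriving probabilities analytically, you sample many random outcomes and average.

```python
import random

random.seed(0)

# Estimate the probability of busting when you hit a hard 16, drawing one
# card from an effectively infinite deck. Ace counts as 1 here, since 11
# would bust a hard 16 anyway.
RANK_VALUES = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10]

def bust_probability(hand_total, trials=100_000):
    busts = sum(hand_total + random.choice(RANK_VALUES) > 21 for _ in range(trials))
    return busts / trials

p = bust_probability(16)
# exact value is 8/13 (about 0.615); the estimate converges to it as trials grows
```

Full MC policy evaluation extends this idea from one decision to whole episodes of play, averaging returns over sampled games for each state-action pair.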
{ "domain": "ai.stackexchange", "id": 316, "tags": "python, game-ai, algorithm-request, monte-carlo-methods" }
IIR filter design in digital domain using the magnitude squared
Question: Does anyone have any good references for deriving parameters of an IIR low-pass/high-pass filter directly in the digital domain using the magnitude squared at the corner frequency? I have been able to derive the parameters of a first-order low/high-pass filter with $3\textrm{ dB}$ attenuation at the corner frequency, i.e. calculating $k$ and $\alpha$ in: $$H(z) = k\frac{\left(1+z^{-1}\right)}{\left(1-\alpha z^{-1}\right)}$$ My issue is that I distinctly remember deriving the parameters using a $6\textrm{ dB}$ attenuation at the corner frequency in a DSP course I have done previously but I have forgotten the trigonometric identities used to finish the derivation. The general procedure is as follows: Let $\omega = 0/\pi$ to calculate the gain term $k$ such that there is a $0\textrm{ dB}$ gain at $0/\pi$ Calculate the magnitude squared at the corner frequency to obtain a value for $\alpha$ in terms of the corner frequency. The problem may be that it should be a second order filter or I am recalling the method for a band pass/stop filter but I'm not sure and it appears this method is not used very often except in the case of band pass/stop filters for parametric EQ. I hope the question is clear and I will try to improve the structure with the responses so it will be useful for others. Any help will be appreciated. Answer: To solve the case that you mentioned... You have 2 variables to determine, so you need two relationships to resolve the two variables. I'm going to use $k$ and $a$ as the variables to make this easy to type up. $$H(z) = k\frac{1 + z^{-1}}{1-az^{-1}}$$ Start by considering the passband gain. Use $f = 0$ for this. Assume you want unity gain at $f_0=0$. Assume: $H(f_0) = 1, f_0 = 0$ Substitute $e^{i2\pi f/f_s}$ for $z$, where $f_s$ is your sampling rate, set $f = 0$ and solve for $k$ to satisfy $H\left(f_0\right) = 1$. From this you get $k = \frac{(1-a)}{2}$ Now work on the gain squared at your desired corner frequency ($f_c$) to determine $a$. 
$H(f_c) = -3\textrm{ dB}$ (magnitude squared will be $-6\textrm{ dB}$ as you've stated) We'll work with the magnitude squared at $f_c$ and set the gain to $1/2$ ($-6\textrm{ dB}$). $\lvert H\left(f_c\right)\rvert^2 = \frac{1}{2}$ This time substitute $e^{i2\pi f_c/f_s}$ for $z$. To simplify the arithmetic you can solve this equation: $$ \left(\frac{\lvert H(f_0)\rvert}{\lvert H(f_c)\rvert}\right)^2 = 2 $$ This eliminates the factor $k$. You will end up with a quadratic relationship in $a$. Solving for $a$ yields: $$ a = \frac {1 - \sqrt{1-\cos^2\left(2\pi\frac{f_c}{f_s}\right)}} {\cos\left(2\pi\frac{f_c}{f_s}\right) }= \frac {1 - \sin\left(2\pi\frac{f_c}{f_s}\right)} {\cos\left(2\pi\frac{f_c}{f_s}\right) } $$
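A quick numerical check of these closed-form results (with an arbitrarily chosen sampling rate and corner frequency of my own):

```python
import math
import cmath

fs, fc = 48000.0, 1000.0
w = 2 * math.pi * fc / fs

a = (1 - math.sin(w)) / math.cos(w)  # pole location from the derivation above
k = (1 - a) / 2                      # unity gain at DC

def H(f):
    """Evaluate the first-order transfer function on the unit circle."""
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    return k * (1 + z**-1) / (1 - a * z**-1)

gain_dc = abs(H(0.0))        # should be exactly 1
gain_fc_sq = abs(H(fc))**2   # should be exactly 1/2 at the corner frequency
```

Evaluating the response confirms unity gain at DC and $\lvert H(f_c)\rvert^2 = 1/2$ at the corner, as the derivation requires.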
{ "domain": "dsp.stackexchange", "id": 739, "tags": "filter-design, infinite-impulse-response" }
Counterexample for LTL - CTL equivalence
Question: I have to find an example of a model where the LTL-formula $F G p \wedge F q$ is valid and the CTL-formula $EF AG p \wedge AF q$ is not valid. I found this example, but I'm not completely sure whether it's correct: Answer: Consider the following model: you have 3 states, $s_0,s_1,s_2$ with the transitions: $s_0\to s_0$, $s_0\to s_1$, $s_1\to s_2$ and $s_2\to s_2$ and the labels are $L(s_0)=\{p,q\}$, $L(s_1)=\emptyset$ and $L(s_2)=\{p\}$. Then, every computation starts with $q$, so $Fq$ holds, and every infinite computation eventually gets stuck in $s_2$, or it is $s_0^\omega$, and both satisfy $FGp$, so the LTL formula holds. However, it never holds that $AFq$, so the CTL formula does not hold.
{ "domain": "cs.stackexchange", "id": 5948, "tags": "linear-temporal-logic, temporal-logic" }
Primer dilution verification
Question: I just received a primer (details are displayed in the picture). Since this is my first time doing this, I want to do it correctly. My goal is to make a 10µM primer. So, if I get it right and based on the paper I received, should I add 665μl sterile distilled water (inside the primer tube) and then aliquot 90μl sterile distilled water with 10μl of the primer tube mix? Answer: Yes, this is correct. Note that it is not important that the water has been sterilized; it needs to be nuclease-free. Also, for long-term storage TE buffer might be a better choice. This will not result in inhibition of your PCR by the EDTA component of the buffer, since the amount of it in the final reaction is negligible.
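The arithmetic behind those volumes, assuming (as the 665 µl figure implies) a tube containing 66.5 nmol of lyophilized primer; always check the actual amount on your own spec sheet:

```python
# Resuspension to a 100 uM stock, then a 10x working dilution to 10 uM.
nmol = 66.5                     # assumed primer amount from the spec sheet

# 100 uM = 0.1 nmol/uL, so the resuspension volume in uL is nmol * 10.
stock_volume_ul = nmol * 10                    # 665 uL of water
stock_uM = nmol / stock_volume_ul * 1000       # = 100 uM

# 10 uL stock + 90 uL water = 100 uL at one tenth the concentration.
working_uM = stock_uM * 10 / (10 + 90)         # = 10 uM
```

This is the standard "add (nmol × 10) µl for a 100 µM stock" rule of thumb, followed by a simple 1:10 dilution.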
{ "domain": "biology.stackexchange", "id": 12416, "tags": "primer" }
Claim that the de Broglie relation doesn't work in a crystal
Question: In this Wikipedia article on Position and Momentum Space, https://en.wikipedia.org/wiki/Position_and_momentum_space there is a claim that "the de Broglie relation is not true in a crystal" in the sentence before the content box. Is this claim valid? If so, why? What are the implications for quasi-particles (e.g. plasmons and polaritons) in materials? Answer: In a crystal, $\vec p$ does not necessarily have the same direction as $\vec k$. So, I suppose that it's indeed true that the de Broglie relation ($\vec p = \hbar \vec k$) does not always hold in a crystal. If we take a perfect crystal, then the wavefunction of an electron can be written as the Bloch electron wavefunction $\Psi = u(\vec r) e^{i\vec k \cdot \vec r}$ where $u(\vec r)$ is a periodic function whose periodicity matches the lattice's. By applying the momentum operator $\hat p = -i\hbar \nabla_{\vec r}$ to that wavefunction, one finds that it's equal to $\hbar \vec k \Psi + \text{something not proportional to } \Psi$ (nor to $\vec k$, for that matter). Here, $\hbar \vec k$ is called the crystal momentum and does not match the electron's momentum. See Ashcroft and Mermin pages 139 and 219 for a detailed discussion about that.
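Writing the calculation out explicitly (a standard product-rule step, reproduced for completeness):

```latex
\hat{p}\,\Psi
  = -i\hbar\,\nabla_{\vec r}\!\left( u(\vec r)\, e^{i\vec k\cdot\vec r} \right)
  = \hbar \vec k\, u(\vec r)\, e^{i\vec k\cdot\vec r}
    - i\hbar\, e^{i\vec k\cdot\vec r}\, \nabla_{\vec r}\, u(\vec r)
  = \hbar \vec k\, \Psi \;-\; i\hbar\, e^{i\vec k\cdot\vec r}\, \nabla_{\vec r}\, u(\vec r)
```

The second term is generally neither proportional to $\Psi$ nor aligned with $\vec k$, so $\Psi$ is not a momentum eigenstate with eigenvalue $\hbar \vec k$ unless $u$ is constant, which is the free-particle case.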
{ "domain": "physics.stackexchange", "id": 58358, "tags": "condensed-matter, momentum, solid-state-physics, wavelength, crystals" }
Accurately measure a bolt hole circle?
Question: I need to accurately measure (+/-0.25mm) the bolt hole circle drilled into a heavy metal plate. There are three holes on one plate and nine holes on a second plate. The plates are approximately 300mm diameter. The holes are approximately 18mm diameter tapped M20 and the bolt hole circle is about 200mm diameter. When known, I will create a mating part on the CNC mill and lathe. The finished part rotates at around 2000 RPM, so the accuracy of the machining and fitment will determine the vibration of the finished system. How can I measure the hole circle? Tools? Technique? Calculations? Answer: The bolt pattern should be a standard; if there are three bolt holes then there should be 120 degrees between each hole. If there are nine bolt holes then there should be (360/9) = 40 degrees between each hole. Let's talk for a second about vibration. Vibration will exist if the centerline of your flange does not coincide with the centerline of the existing flange. While your bolt pattern does matter, it doesn't matter that much. If you are relying on your bolt pattern to perfectly align two pieces of rotating machinery, you are doing it wrong. The machinist who attaches the two flanges together should have a means to shim one piece of equipment, or the other, or both. You need to rely on the shims to bring the two centerlines together, both coincident and parallel. Again, if you're relying on the bolt pattern to do this for you, then that implies that the flange needs to be perfectly attached to the shaft, that the bearings have to be perfectly sized with no room for dimensional tolerance, the body has to be perfect, the mounting feet have to be perfect, and the sub-structure to which both pieces of equipment attach has to be perfect. You need to have room for adjustment where the machinery bolts down. You will never attain the degree of perfection required to count on a bolt pattern to mate two pieces of rotating equipment. 
So, that said, where vibration does matter with regard to your bolt pattern is that your bolt pattern should be EVEN. If you have three bolts that are supposed to be 120 degrees apart, but two are 110 degrees apart and the others are 125 each, then you'll get vibration because the bolt pattern isn't rotationally symmetric. So, my advice to you would be to get the bolt pattern dimensions from the existing flange, then ignore minor variations and make your new pattern as symmetric as you can achieve. If the existing flanges are threaded, then your holes are through holes. There needs to be some tolerance there for the bolt to be able to pass through, and the tolerance should also allow for the minor variations in hole placement in both parts. The clamping force of the bolts holds the flanges together. The shims under the mounting feet of the machinery bring the centerlines together. The location of the holes matters for vibration only if they're not rotationally symmetric. The fit of the flange doesn't have anything to do with vibration provided you have adequately clamped the two flanges together and you aligned the shaft centerlines to be coincident, and again, that's done by moving the equipment, not by adjusting the flange bolt holes. I'm harping on this to try to impress on you that a flange with through holes is not, and should not be, a piece of high-precision equipment. You might need the face to be very flat if you're using a gasket, and it should definitely be perpendicular to the shaft centerline, but the through holes are just through holes. If one flange is able to rotate relative to the other flange then you haven't torqued it well enough and/or you haven't used enough fasteners. If the flanges were able to have relative motion then you're "riding the clutch" so to speak and can expect (very) premature joint failure. So, all that said, here's how to calculate what the parameters should be for your circular bolt pattern. 
Find the angular distance between bolts, $\theta$, by dividing 360 degrees by the number of bolts. A 3-bolt pattern is 120 degrees between fasteners, 4-bolt is 90, 9-bolt is 40, etc. For a threaded hole, thread a bolt into each of two adjacent holes. Not necessary if you can get the measurement tips of a pair of calipers into the holes, but it does make it easier. Measure the largest outside-outside distance between the two bolts. Measure the smallest inside-inside distance between the two bolts. Average those numbers (add together and divide by two) and you get the center-center distance. The bolting pattern's radius is given by: $$ r = \frac{\mbox{average distance}/2}{\sin{(\theta /2)}} \\ $$ Now you can put your blank flange on a lathe, find the center, measure out a radius $r$, mark that circle, and locate your through hole centers at the appropriate angular positions on that circle. If you're doing it by hand you could open a compass to the $\mbox{average distance}$ between bolt holes that you found, put the pointy end anywhere on the circle and sweep it, then move the pointy end to any sweep-circle intersection and sweep again. Here's a graphic if that'll make it any clearer:
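To sanity-check the arithmetic, the radius formula above can be sketched in a few lines of Python (the measured chord value below is illustrative, chosen to match the roughly 200 mm bolt circle in the question):

```python
import math

# Bolt-circle radius from the average measured center-to-center distance
# between two adjacent holes (the chord of the angle between bolts).
def bolt_circle_radius(avg_distance, n_bolts):
    theta = 2 * math.pi / n_bolts          # angle between adjacent bolts
    return (avg_distance / 2) / math.sin(theta / 2)

# Illustrative: 3 bolts on a 200 mm diameter circle (radius 100 mm) have
# adjacent centers about 173.2 mm apart, the chord of a 120-degree arc.
print(bolt_circle_radius(173.2, 3))
```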
{ "domain": "engineering.stackexchange", "id": 1152, "tags": "mechanical-engineering, machining" }
Two wires and conservations Law
Question: I was studying current in a wire, and a doubt came up. Suppose we have, for example, two wires of the same length connected end to end, but one wire has twice the cross-sectional area of the other. What will be conserved here? (Conserved in the sense that the magnitude of the quantity will remain the same.) The current or the current density? $I = \int J \, da$ I was thinking about it, and since J is independent of A, I would predict that J is conserved and so the current's magnitude changes at the discontinuity. In the same way, $V = RI$ implies, with $R = \rho l/A$, that the current is proportional to area, so technically we have two arguments in favor of |J| staying the same. But, mathematically, the antiderivative is always continuous, even if the function being integrated is discontinuous in certain special cases (as here, where it is discontinuous at one point!). So, since I is continuous, I at the point connecting the wires must be the same. But then J has to change! Answer: $E$ (like $V$) is not constant in the circuit. It is 2 times greater where the wire is thinner. $J$ is also 2 times bigger there. The expression $V = RI$ translates as $E = \rho J$, where $\rho$ is the resistivity. Looking at the first expression ($V = RI$), $V$ and $R$ change while $I$ is constant. Looking at the second expression ($E = \rho J$), $E$ and $J$ change while $\rho$ is constant. The current $I$ is conserved, not $J$.
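A quick numerical check of the answer's conclusion, using illustrative values (not from the question):

```python
# The current I is the same in both series-connected segments, so the
# current density J = I / A doubles where the cross-sectional area halves.
I = 2.0            # amperes, conserved along the wire
A_thick = 2.0e-6   # m^2, thicker segment
A_thin = 1.0e-6    # m^2, thinner segment (half the area)

J_thick = I / A_thick
J_thin = I / A_thin
print(J_thin / J_thick)  # the density is twice as large in the thin wire
```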
{ "domain": "physics.stackexchange", "id": 73343, "tags": "electromagnetism" }
Minimum frame size in ethernet
Question: Suppose the round-trip propagation delay for a 10 Mbps Ethernet with a 48-bit jamming signal is 46.4 μs. The minimum frame size is: a) 94 b) 416 c) 464 d) 512 My approach: I have read in the book that the minimum frame transmission time must be at least twice the propagation time, i.e., the round-trip time. So by that logic I can say that the answer will be 464. As soon as the first bit of the jamming signal reaches the sender, it knows about the collision. But I need to know whether the time to transmit the jamming signal over the link should also be included in this case. I mean, is the first bit of the jamming signal sufficient for the sender to detect the collision, or does the complete jamming signal need to be received at the sender side, which would make the answer 512? I don't have the answer, as this question was in some exam in 2005 and there were no answers available for it. Answer: This might not be the correct answer to your question (as I am not sure about the jamming signal), but it might resolve the query. As Ethernet uses CSMA/CD for collision detection, the time taken to transmit the data must be greater than or equal to twice the time taken to propagate the data. Only then will the station come to know if there has been a collision. Tt >= 2*Tp
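The arithmetic behind option (c), using the figures from the question:

```python
# CSMA/CD: the smallest frame must take at least the round-trip time
# (2 * Tp) to transmit, so the sender is still sending when word of a
# collision gets back to it.
bandwidth_bps = 10_000_000   # 10 Mbps Ethernet
round_trip_s = 46.4e-6       # given round-trip propagation delay (2 * Tp)

min_frame_bits = bandwidth_bps * round_trip_s
print(round(min_frame_bits))  # 464 bits, matching option (c)
```

Note the 48-bit jamming signal does not enter this calculation; whether it should is exactly the asker's open question.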
{ "domain": "cs.stackexchange", "id": 15143, "tags": "computer-networks" }
Passing protons through cation-permeable membranes
Question: Protons are attached to a water molecule (making hydronium) in acidic solutions. If a container was split through the middle with a cation-permeable but anion impermeable membrane and, say, HCl was added to one side, what would happen? I'd assume because of the concentration gradient, osmosis will cause hydronium ions to move to the less concentrated side until equilibrium is reached. But that would also make charge very uneven on both sides, where one side is highly negatively charged and the other side is very positively charged. So, I am not sure osmosis alone could even lead to this state. Considering the ions on either side are either electron deficient or have full shells, and have the same charge, I would not expect any gas to come out of solution. If a negative electric current was applied to one side of the container, and a positive on the other, would that produce gas more efficiently than plain electrolysis? If it depends, then under what conditions? Or am I wrong, and what will happen to the ions from the acid? Answer: Without current you would establish an equilibrium situation in which a charge imbalance is created across the membrane interface that counteracts (via migration) diffusion of your protons through the membrane. This is what is described by the Donnan equilibrium. You can pass current in such a system as well, you will find however that there's a potential drop (a resistance) across this membrane. This resistance is exactly (or more) what you would thermodynamically gain by having a higher proton concentration at one side. You sadly cannot cheat thermodynamics. But will it work better? Depends. The rate of the different electrochemical reactions is not only dependent on the thermodynamic driving force but also on other factors such as the kinetics and mass transport. If the rate is highly dependent on the proton concentration you might benefit from this. 
People are working on such systems for water electrolysis, in which at the hydrogen side they want an acidic environment and at the anode an alkaline environment. These membranes are called bipolar membranes.
{ "domain": "chemistry.stackexchange", "id": 17224, "tags": "electrochemistry, aqueous-solution, electrolysis, osmosis" }
Proper way to send and receive buffer in Winsock
Question: I have a piece of code to send and receive buffers. Is this the right way to do it? And am I guaranteed that the full buffer will be sent and received? receiving function: #include <stdio.h> #include <winsock2.h> #include <ws2tcpip.h> #include <direct.h> #include <string.h> #include <stdint.h> static inline uint32_t ntohl_ch(char const* a) { uint32_t x; memcpy(&x, a, sizeof(x)); return ntohl(x); } char* recvStrBuffer(SOCKET s) { int totalReceived = 0; int received = 0; // recv buffer size char b[sizeof(uint32_t)]; int r = recv(s, b, sizeof(uint32_t), 0); if (r == SOCKET_ERROR) { printf("error recv\n"); return NULL; } uint32_t bufferSize = ntohl_ch(&b[0]); //printf("bufferSize: %d\n", bufferSize); char* buff = (char*)malloc(sizeof(char) * bufferSize); while (totalReceived < bufferSize) { received = recv(s, buff + totalReceived, bufferSize - totalReceived, 0); if (received == SOCKET_ERROR) { printf("error receiving buffer %d\n", WSAGetLastError()); return NULL; } totalReceived += received; //printf("received: %d\n", received); //printf("totalReceived: %d\n", totalReceived); } //printf("%s", buff); return buff; } sending function: #include <stdio.h> #include <winsock2.h> #include <ws2tcpip.h> #include <direct.h> #include <string.h> #include <stdint.h> int sendStrBuffer(SOCKET s, char* buffer) { // send buffer size int bufferSize = strlen(buffer); //printf("bufferSize: %d\n", bufferSize); uint32_t num = htonl(bufferSize); char* converted_num = (char*)&num; int res = send(s, converted_num, sizeof(uint32_t), 0); if (res == SOCKET_ERROR) { printf("error send\n"); return SOCKET_ERROR; } int totalSent = 0; int sent = 0; while (totalSent < bufferSize) { sent = send(s, buffer + totalSent, bufferSize - totalSent, 0); if (sent == SOCKET_ERROR) { printf("error sending buffer\n"); return SOCKET_ERROR; } totalSent += sent; //printf("sent: %d\n", sent); //printf("totalSent: %d\n", totalSent); } } And then in main (receiving part): char* buffer; buffer = 
recvStrBuffer(socket); if (buffer == NULL) { printf("error %d\n", WSAGetLastError()); } printf("%s", buffer); free(buffer); Main (sending part): int r = sendStrBuffer(socket, totalBuffer); if (r == SOCKET_ERROR) { printf("error %d\n", WSAGetLastError()); } Answer: General Observations The code is mostly readable and except in one case maintainable. The exception may be a copy-and-paste error. Generally code isn't considered ready for review when it contains commented out debug statements such as //printf("bufferSize: %d\n", bufferSize); //printf("received: %d\n", received); //printf("totalReceived: %d\n", totalReceived); Warning Messages It would be better if you compiled using the -wall switch to catch all possible errors in the code. I compiled this with Visual Studio 2019 and got 2 warning messages, both messages indicate possible bugs: warning C4018: '<': signed/unsigned mismatch warning C4715: 'sendStrBuffer': not all control paths return a value The second warning is a problem that you definitely want to fix. You are not explicitly returning a value from sendStrBuffer when the function is successful. What gets returned is undefined and that definitely isn't a good thing, it may return SOCKET_ERROR or some other value that could cause problems in the calling program. The function should probably return zero if it is successful. The first warning is on this line: while (totalReceived < bufferSize) in recvStrBuffer(). The variable totalReceived is declared as a signed integer, the variable bufferSize is declared as an unsigned integer. It would be better if both variables were defined using the same type. It would be even better if both variables were defined as size_t since they represent a size. The type size_t is what is returned by the sizeof() operator. The size_t type is the largest unsigned integer value your system supports. 
File and Program Organization To make it easier to share values between the sending function and the receiving function, it might be better to put both functions into a common library C source file and share the resulting object file between the sending program and the receiving program. There should be a dedicated header file that provides the function prototypes for both functions, which the sending program and the receiving program include. This file organization would make it easier to maintain the code because all the code for sending and receiving through the socket is in the same file. Test for Possible Memory Allocation Errors In modern high-level languages such as C++, memory allocation errors throw an exception that the programmer can catch. This is not the case in the C programming language. While it is rare in modern computers because there is so much memory, memory allocation can fail, especially if the code is working in a limited-memory application such as embedded control systems. In the C programming language, when memory allocation fails, the functions malloc(), calloc() and realloc() return NULL. Referencing any memory address through a NULL pointer results in undefined behavior (UB). Possible unknown behavior in this case can be a memory page error (in Unix this would be called a Segmentation Violation), corrupted data in the program, and in very old computers it could even cause the computer to reboot (corruption of the stack pointer). To prevent this undefined behavior, a best practice is to always follow the memory allocation statement with a test that the pointer that was returned is not NULL.
Example of Current Code: char* buff = (char*)malloc(sizeof(char) * bufferSize); Example of Current Code with Test: char* buff = (char*)malloc(sizeof(*buff) * bufferSize); if (!buff) { fprintf(stderr, "Malloc of buffer failed in recvStrBuffer\n"); return NULL; } Convention When Using Memory Allocation in C When using malloc(), calloc() or realloc() in C, a common convention is to use sizeof(*PTR) rather than sizeof(PTR_TYPE); this makes the code easier to maintain and less error prone, since less editing is required if the type of the pointer changes. See the example above. Print Error Messages to stderr There are 3 streams provided by stdio.h: stdin is an input stream; stdout and stderr are output streams. Generally it is better to print error messages to stderr rather than stdout. When you redirect output to a file the two streams can be separated, and you can generate two files, one containing errors and the other containing program output. This helps when you are debugging or developing code. One Statement Per Line I don't know if this is a copy-and-paste error or if this is the actual code in the program, but there should always be only one statement per line to make maintenance of the code easier. static inline uint32_t ntohl_ch(char const* a) { uint32_t x; memcpy(&x, a, sizeof(x)); return ntohl(x); } This function should return size_t rather than uint32_t. Use of the inline Keyword The inline keyword is only a recommendation to the compiler; it may not do anything. Rather than use the inline keyword, it is better to compile with the best optimization you can, generally -O3. Optimizing compilers will use inline code when that code fits into the cache memory whether you use the inline keyword or not.
{ "domain": "codereview.stackexchange", "id": 43504, "tags": "c, socket, winapi" }
What is this insect from Brazil?
Question: SE Brazil Atlantic rainforest Oct 2017. I'm assuming this is an early stage of an insect. It was crawling around a tree in the moss and was about 6-7mm. Answer: This is, indeed, a nymph (what you called a young stage), but not from a cicada as you suspected: it's a nymph from a leafhopper, which are hemipterans from the Family Cicadellidae (cicadas are also hemipterans, but they belong to the Family Cicadidae). More specifically, this seems to be a sharpshooter, which are leafhoppers from the Tribe Proconiini. Narrowing down to the Genus is more complicated, but I'd guess it is Oncometopia. Here is an image of Oncometopia orbona for comparison: This other image (also Oncometopia orbona) is even more similar to yours:
{ "domain": "biology.stackexchange", "id": 7992, "tags": "species-identification, entomology" }
How does energy stay conserved if the force is time dependent and doesn't depend on location?
Question: While reading The Theoretical Minimum for Classical Mechanics, the author said that the force equals the negative derivative of the potential energy, and showed this equation describing the potential energy of a single particle in a one-dimensional space: $$F(x)=-\frac{dV(x)}{dx}$$ He said that the force here depends on the location of the particle. He continued: $$E=T+V$$ $$\dot{E}=\dot{T}+\dot{V}=mva+\frac{dV(x)}{dt}=mva+\frac{dV(x)}{dx}\frac{dx}{dt}$$ $$\dot{E}=mva+\frac{dV(x)}{dx}v=v(ma+\frac{dV(x)}{dx})=0$$ And that's how he proved that energy is conserved. But what if the force is time dependent? Then $F(t)=\frac{dV(t)}{dt}$, and continuing along the same path I get this: $$\dot{E}=mva+\frac{dV(t)}{dt}\neq0$$ What am I doing wrong in this case? Answer: You are doing nothing wrong. If the force depends on time explicitly, energy is not conserved in general. There is a deeper connection here: energy is the conserved quantity corresponding to time-translation symmetry (as momentum is the conserved quantity corresponding to spatial translation symmetry). If your force now depends on time explicitly, the motion of your test particle (starting from the same position) will depend on the starting time – so the time-translation symmetry is broken by that explicit dependency. The correspondence between (continuous) symmetries and conserved quantities is known as Noether's theorem and is a cornerstone of theoretical physics. A small note on what I mean by explicit dependency: of course you can always write down a function of force over time $F(t)$; if, however, this can be written as $F\big(x(t), v(t)\big)$, where $x(t)$ and $v(t)$ are the solutions to your equation of motion, then there is no explicit time dependency, because if you move your initial setup to another time, the force curve will just shift in time with your initial conditions.
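A small numerical illustration (not from the book) of the answer's point: take an explicitly time-dependent force $F(t)=F_0\cos(\omega t)$ with no position-dependent potential, so $E = T = \frac{1}{2}mv^2$; all numbers are assumptions for the sketch.

```python
import math

# Particle of mass m driven by the explicitly time-dependent force
# F(t) = F0 * cos(w * t); integrate v with small Euler steps and track
# the kinetic energy over time.
m, F0, w = 1.0, 1.0, 2.0
v, t, dt = 0.0, 0.0, 1e-4

energies = []
for _ in range(20000):                 # simulate 2 seconds
    a = F0 * math.cos(w * t) / m
    v += a * dt
    t += dt
    energies.append(0.5 * m * v * v)

# The exact solution is v(t) = (F0 / (m * w)) * sin(w * t), so the kinetic
# energy oscillates between 0 and F0^2 / (2 m w^2): it is not conserved.
print(min(energies), max(energies))
```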
{ "domain": "physics.stackexchange", "id": 100460, "tags": "classical-mechanics, energy-conservation, potential-energy" }
ConnectDeleteEntity event doesn't work in World Plugin
Question: Hello everybody, I want to make a world plugin with event callbacks; everything works perfectly except the ConnectDeleteEntity event. With the following code, when we create a model, we get an "Add" message (the function "OnAddEntity" is called), but when we remove a model (for example, by selecting a model and pressing the Delete key, or with the menu), the function "OnDeleteEntity" is not called. namespace gazebo{ struct PluginGazebo : public WorldPlugin { //---------------------------------- event::ConnectionPtr DeleteEntityConnection; event::ConnectionPtr AddEntityConnection; //---------------------------------- void OnDeleteEntity(const std::string str){ std::cerr << "Delete " << str << std::endl; } //---------------------------------- void OnAddEntity(const std::string str){ std::cerr << "Add " << str << std::endl; } //---------------------------------- virtual void Load(physics::WorldPtr _world, sdf::ElementPtr _sdf){ //-- this->DeleteEntityConnection = event::Events::ConnectDeleteEntity(boost::bind(&PluginGazebo::OnDeleteEntity, this, _1)); this->AddEntityConnection = event::Events::ConnectAddEntity(boost::bind(&PluginGazebo::OnAddEntity, this, _1)); } }; GZ_REGISTER_WORLD_PLUGIN(PluginGazebo) } When should OnDeleteEntity be called? Thanks. Originally posted by Benoit on Gazebo Answers with karma: 3 on 2017-01-31 Post score: 0 Answer: It looks like the deleteEntity event is never being fired by Gazebo, which is a bug. I ticketed an issue here. A workaround for now would be to subscribe to the ~/request topic and check if _msg->request() == "entity_delete", see an example here. Originally posted by chapulina with karma: 7504 on 2017-01-31 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Benoit on 2017-02-01: Your workaround works perfectly! Thanks a lot.
{ "domain": "robotics.stackexchange", "id": 4048, "tags": "gazebo-model, gazebo-plugin" }
Query String Serializer
Question: I have an ASP.NET Web Forms project, and I want to build calls to .aspx pages in a strongly-typed fashion. I ended up rolling my own serializer that takes simple structs and saves them to/loads them from the query string. What do you think? Is my approach sane? Is there an accepted alternative I don't know about? Any feedback on the code? Here's what building a call to a particular page looks like: var fooParams = new FooPage.Parameters { NodeID = nodeId, FooString = "the foo string" }; string url = MyHelper.BuildCall(FooPage.URL, fooParams); //url: ~/dir/FooPage.aspx?NodeID=5&FooString=the%20foo%20string FooPage: public partial class FooPage : System.Web.UI.Page { public const string URL = "~/Dir/FooPage.aspx"; public struct Parameters { public long? NodeID; public string FooString; public int? OtherParam; } protected Parameters Params; protected void Page_Load(object sender, EventArgs e) { Params = MyHelper.DeserializeFromNameValueCollection<Parameters>(Request.Params); //... //use Params.NodeID, Params.FooString, etc..
} } Serialize/Deserialize to/from NameValueCollection: public static void SerializeToNameValueCollection<T>(NameValueCollection nameValueCollection, T @object) where T : struct { Type type = typeof(T); var fields = type.GetFields(); foreach (var field in fields) { string key = field.Name; var value = field.GetValue(@object); if (value != null) nameValueCollection.Add(key, value.ToString()); } } public static T DeserializeFromNameValueCollection<T>(NameValueCollection nameValueCollection) where T : struct { T result = new T(); Type type = typeof(T); var fields = type.GetFields(); foreach (var field in fields) { string key = field.Name; string stringValue = nameValueCollection[key]; if (stringValue != null) { object value; var baseType = Nullable.GetUnderlyingType(field.FieldType); if (baseType != null) { value = Convert.ChangeType(stringValue, baseType); } else { value = Convert.ChangeType(stringValue, field.FieldType); } field.SetValueDirect(__makeref(result), value); } } return result; } Format NameValueCollection into query string: public static string BuildCall<T>(string url, T queryStringParams) where T : struct { var queryStringBuilder = HttpUtility.ParseQueryString(""); UrlHelper.SerializeToNameValueCollection(queryStringBuilder, queryStringParams); string queryString = queryStringBuilder.ToString(); return url + "?" + queryString; } Answer: One of my favorite patterns for handling URL parameters in WebForms is the WebNavigator - http://polymorphicpodcast.com/shows/webnavigator/ If you're going through these kinds of Strongly-typed interactions for passing parameters between pages, maybe it is time you check out ASP .NET MVC - your solution looks a lot like model-binding.
{ "domain": "codereview.stackexchange", "id": 1544, "tags": "c#, .net, asp.net, parsing, reflection" }
How promising is the possibility of carbon-based qubits to make a qubit that’s stable at room temperature?
Question: Here is the first article I could find on this idea in 2016: https://arxiv.org/abs/1611.07690 And here is a patent in 2017 for a quantum electronic device developed with one of the authors of the paper, Mohammad Choucair, along with Martin Fuechsle (who invented a single-atom transistor): https://patentscope.wipo.int/search/en/detail.jsf?docId=WO2017091870 The two are now working at Archer Materials to commercialize this idea. Fuechsle is known for inventing a single-atom transistor, which has applications to the mentioned quantum device: https://www.researchgate.net/publication/221840938_A_single-atom_transistor This leads me to my questions: How promising is a carbon-based qubit? Any disadvantages to this approach? If topological quantum computing prevails, could a room-temperature qubit based on carbon still be beneficial to topological quantum computing? Is anyone outside of Archer Materials researching this approach? Answer: In my opinion this is not a very promising qubit for quantum computing, though it may hold more promise for quantum sensing or communication. Making a qubit, a two-level quantum system, is not that hard, but making a good qubit is very hard. David DiVincenzo laid out 5 criteria on which you can gauge how good a qubit is for quantum computing: https://en.m.wikipedia.org/wiki/DiVincenzo%27s_criteria Going through those criteria, it becomes obvious where the system demonstrated in the first paper falls short. First, what they did right: they developed and characterized a new spin qubit and demonstrated that they can manipulate it with microwaves in a magnetic field (somewhat fulfilling criteria 1 and 4). They also demonstrated long, for this type of system, coherence times (175 ns). However, if you consider their minimum gate times, about 16 ns, those coherence times really aren't that long. And just as an example, other organic radicals (which could be considered qubits) can exceed 10 μs at room temperature.
https://doi.org/10.1021/acs.jpcb.5b03027 Next, the biggest problem comes from scaling the systems, both down to the single-qubit level (criterion 5) and up to multi-qubit systems (criteria 1 and 3). They were working with ensembles of qubits; if you want to use those qubits in a fashion similar to topological QCs, you ideally need to work with single qubits. Single-spin magnetic resonance is very hard and there are really only two solutions: a superconducting microwave resonator, which commonly requires low temperature; or optical detection, which requires very specific photophysical processes in order to read out the spin state. Nitrogen-vacancy centers are a good example of a spin system with optical detection. That said, there are proposals about how to perform ensemble quantum computing, where you basically get your statistics out in one shot, which would render that point moot. Scaling up to multi-qubit devices also poses a challenge. One way to have qubits communicate is through spin-spin interactions, but those tend to also destroy the coherence times. There might be other clever ways to enable communication between qubits so we can use two-qubit gates, but I'm unfamiliar with them. Lastly, the biggest issue with spin qubits is in criterion 2, initialization. Unfortunately, many of the spin qubit systems rely on thermal Boltzmann population and T1 relaxation to provide polarization. In order to get close to a pure starting state one needs to go to very high fields (>3 T) and very low temperature (<4 K). Optically generated polarization is a thing, but just like with optical readout, you need to satisfy very specific photophysical conditions. Overcoming these challenges is not unique to the paper you cited, but common to the very diverse field of electron spin qubits (which includes solid-state defects and a huge range of molecules of different sizes and compositions).
{ "domain": "quantumcomputing.stackexchange", "id": 1624, "tags": "experimental-realization" }
Drone for mavros
Question: Hi! So I want to build a small drone (one that can be tested in a lab) and deploy ROS on it. Can anyone help me with which motors and flight controller I should use to do so? I want to use mavros_pkg, so I want to load Arducopter firmware on the flight controller. Originally posted by Yug Ajmera on ROS Answers with karma: 19 on 2019-04-01 Post score: 0 Answer: I've been using the Navio2 hat for the Raspberry Pi 3 as the FCU in my lab and it has been quite intuitive and easy. Navio2 software comes pre-installed with ROS and the documentation is great. As for a drone and motors, just check hobbyking.com and pick something within your budget and preferences. Originally posted by Benzy with karma: 16 on 2019-04-12 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 32799, "tags": "ros, drone, ros-kinetic, mavros" }
Building pcl on OSX
Question: Hello, i followed @kevin's post on the Rososx-mailinglist to compile pcl I saved the package.xml from pcl then deleted the pcl package grabbed the latest pcl via svn co http://svn.pointclouds.org/pcl/trunk Added the modified CMakeLists.txt, pcl_find_ros.cmake and the old package.xml to the pcl folder However building fails due to 13%] Building CXX object io/tools/ply/CMakeFiles/pcl_plyheader.dir/plyheader.cpp.o In file included from /Users/tatsch/ros_catkin_ws/src/pcl/io/src/image_grabber.cpp:47: In file included from /usr/local/include/vtk-5.10/vtkImageReader2.h:41: In file included from /usr/local/include/vtk-5.10/vtkImageAlgorithm.h:28: In file included from /usr/local/include/vtk-5.10/vtkAlgorithm.h:32: In file included from /usr/local/include/vtk-5.10/vtkObject.h:41: In file included from /usr/local/include/vtk-5.10/vtkObjectBase.h:43: In file included from /usr/local/include/vtk-5.10/vtkIndent.h:24: In file included from /usr/local/include/vtk-5.10/vtkSystemIncludes.h:40: In file included from /usr/local/include/vtk-5.10/vtkIOStream.h:108: In file included from /usr/include/c++/4.2.1/backward/strstream:51: /usr/include/c++/4.2.1/backward/backward_warning.h:32:2: warning: This file includes at least one deprecated or antiquated header. Please consider using one of the 32 headers found in section 17.4.1.2 of the C++ standard. Examples include substituting the <X> header for the <X.h> header for C++ includes, or <iostream> instead of the deprecated header <iostream.h>. To disable this warning use -Wno-deprecated. [-W#warnings] #warning This file includes at least one deprecated or antiquated header. 
\ ^ Linking CXX executable ../../../bin/pcl_plyheader [ 13%] Built target pcl_plyheader [ 13%] Building CXX object io/CMakeFiles/pcl_io.dir/src/hdl_grabber.cpp.o /Users/tatsch/ros_catkin_ws/src/pcl/io/src/image_grabber.cpp:628:32: error: no viable overloaded '=' cloud_color.header.stamp = timestamp; ~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~ /Users/tatsch/ros_catkin_ws/install_isolated/include/ros/time.h:169:22: note: candidate function (the implicit copy assignment operator) not viable: no known conversion from 'uint64_t' (aka 'unsigned long long') to 'const ros::Time' for 1st argument class ROSTIME_DECL Time : public TimeBase<Time, Duration> ^ /Users/tatsch/ros_catkin_ws/src/pcl/io/src/image_grabber.cpp:664:26: error: no viable overloaded '=' cloud.header.stamp = timestamp; ~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~ /Users/tatsch/ros_catkin_ws/install_isolated/include/ros/time.h:169:22: note: candidate function (the implicit copy assignment operator) not viable: no known conversion from 'uint64_t' (aka 'unsigned long long') to 'const ros::Time' for 1st argument class ROSTIME_DECL Time : public TimeBase<Time, Duration> ^ /Users/tatsch/ros_catkin_ws/src/pcl/io/src/image_grabber.cpp:745:32: error: no viable overloaded '=' cloud_color.header.stamp = timestamp; ~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~ /Users/tatsch/ros_catkin_ws/install_isolated/include/ros/time.h:169:22: note: candidate function (the implicit copy assignment operator) not viable: no known conversion from 'uint64_t' (aka 'unsigned long long') to 'const ros::Time' for 1st argument class ROSTIME_DECL Time : public TimeBase<Time, Duration> ^ /Users/tatsch/ros_catkin_ws/src/pcl/io/src/image_grabber.cpp:787:26: error: no viable overloaded '=' cloud.header.stamp = timestamp; ~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~ /Users/tatsch/ros_catkin_ws/install_isolated/include/ros/time.h:169:22: note: candidate function (the implicit copy assignment operator) not viable: no known conversion from 'uint64_t' (aka 'unsigned long long') to 'const 
ros::Time' for 1st argument class ROSTIME_DECL Time : public TimeBase<Time, Duration> ^ 1 warning and 4 errors generated. make[2]: *** [io/CMakeFiles/pcl_io.dir/src/image_grabber.cpp.o] Error 1 make[2]: *** Waiting for unfinished jobs.... Scanning dependencies of target pcl_ml [ 13%] Building CXX object ml/CMakeFiles/pcl_ml.dir/src/point_xy_32i.cpp.o [ 13%] Building CXX object ml/CMakeFiles/pcl_ml.dir/src/point_xy_32f.cpp.o [ 13%] Building CXX object ml/CMakeFiles/pcl_ml.dir/src/densecrf.cpp.o [ 14%] Building CXX object ml/CMakeFiles/pcl_ml.dir/src/pairwise_potential.cpp.o [ 14%] Building CXX object ml/CMakeFiles/pcl_ml.dir/src/permutohedral.cpp.o make[1]: *** [io/CMakeFiles/pcl_io.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... [ 14%] Building CXX object ml/CMakeFiles/pcl_ml.dir/src/svm_wrapper.cpp.o [ 14%] Building CXX object sample_consensus/CMakeFiles/pcl_sample_consensus.dir/src/sac_model_circle.cpp.o Linking CXX shared library ../lib/libpcl_kdtree.dylib [ 14%] Built target pcl_kdtree [ 14%] Building CXX object ml/CMakeFiles/pcl_ml.dir/src/svm.cpp.o [ 15%] Building CXX object ml/CMakeFiles/pcl_ml.dir/src/kmeans.cpp.o Linking CXX shared library ../lib/libpcl_ml.dylib [ 15%] Built target pcl_ml [ 16%] Building CXX object sample_consensus/CMakeFiles/pcl_sample_consensus.dir/src/sac_model_cone.cpp.o [ 16%] Building CXX object sample_consensus/CMakeFiles/pcl_sample_consensus.dir/src/sac_model_line.cpp.o [ 16%] Building CXX object sample_consensus/CMakeFiles/pcl_sample_consensus.dir/src/sac_model_cylinder.cpp.o [ 16%] Building CXX object sample_consensus/CMakeFiles/pcl_sample_consensus.dir/src/sac_model_circle3d.cpp.o [ 16%] Building CXX object sample_consensus/CMakeFiles/pcl_sample_consensus.dir/src/sac_model_normal_plane.cpp.o [ 17%] Building CXX object sample_consensus/CMakeFiles/pcl_sample_consensus.dir/src/sac_model_normal_sphere.cpp.o [ 17%] Building CXX object 
sample_consensus/CMakeFiles/pcl_sample_consensus.dir/src/sac_model_normal_parallel_plane.cpp.o [ 17%] Building CXX object sample_consensus/CMakeFiles/pcl_sample_consensus.dir/src/sac_model_stick.cpp.o [ 17%] Building CXX object sample_consensus/CMakeFiles/pcl_sample_consensus.dir/src/sac_model_sphere.cpp.o [ 17%] [ 17%] Building CXX object sample_consensus/CMakeFiles/pcl_sample_consensus.dir/src/sac_model_plane.cpp.o Building CXX object sample_consensus/CMakeFiles/pcl_sample_consensus.dir/src/sac_model_registration.cpp.o Linking CXX shared library ../lib/libpcl_sample_consensus.dylib [ 17%] Built target pcl_sample_consensus make: *** [all] Error 2 <== Failed to process package 'pcl': Command '/Users/tatsch/ros_catkin_ws/install_isolated/env.sh make -j4 -l4' returned non-zero exit status 2 Reproduce this error by running: ==> /Users/tatsch/ros_catkin_ws/install_isolated/env.sh make -j4 -l4 Command failed, exiting. Can someone help? Originally posted by J.M.T. on ROS Answers with karma: 266 on 2013-02-01 Post score: 1 Answer: Check the pcl_find_ros.cmake, I'm guessing you will see somewhere : option(USE_ROS "Integrate with ROS rather than using native files" OFF) Set the OFF to ON and it will tell PCL to use ROS. Originally posted by Hansg91 with karma: 1909 on 2013-02-02 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 12687, "tags": "pcl, ros-groovy, osx" }
Black Jack Game in Python (Jupyter)
Question: I am working on Jupyter Notebook and I am new to Python. I have created this BlackJack Game (This is my 2nd Project as I am Learning Python. See my 1st - Tic Tac Toe from IPython.display import clear_output import time import random colors = [] faces = [] values = {} deck = [] ip=[0] id=[0] def shufflecards(): global colors,faces,values,deck colors = ['spades', 'hearts', 'diamonds', 'clubs'] faces = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K'] values = dict(zip(faces, [11, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10])) deck = [face + ' of ' + color for color in colors for face in faces] random.shuffle(deck) class Black_Jack_Bank(object): def __init__(self,bankroll=100): self.bankroll = bankroll def add_money(self,add=20): self.bankroll += add def sub_money(self,sub=10): trymoney = self.bankroll - sub if(trymoney<0): while True : try: clear_output() print"You have Gone Broke. So What? You Were Born to Overcome. You are not Destined to Lose." val = int(raw_input("1. Enter Money to Bank\n2.GO Broke:P\nEnter Your Choice : ")) except: print "Looks like you did not enter a valid choice !\nTry Some Numbers." continue else: if(val == 1): while True : try: print"You have Gone Broke. So What? You Were Born to Overcome. You are not Destined to Lose." self.new_amount = int(raw_input("Empty Your Pockets :) Add Money To Casion Bank ! : ")) except: print "Looks like you did not enter a valid choice !\nTry Some Numbers." continue else: self.bankroll = self.new_amount break if(val == 2): continue break while True: try: self.bet = int(raw_input("\nPlease enter your Bet : ")) except: print "Oops ! Thats not a valid Bet ! \nTry Some Numbers" continue else: trymoney1 = self.bankroll - self.bet if(trymoney1<0): print "Do Not Spend MONEY you don't have." 
print "Your Current Balance is : " print self.bankroll continue self.bankroll -= self.bet break def get_money(self): return self.bankroll def get_bet(self): return self.bet def blackjack_sub(self,sub): self.bet = sub self.bankroll -= self.bet class Black_Jack_Gameplay(object): def __init__(self): self.player_card = [] self.valuep = 0 self.dealer_card = [] self.valued = 0 def dealer_hand_initial(self): for i in range(2): a_card = deck.pop() self.dealer_card.append(a_card) self.valued += values[a_card.split()[0]] print '\nDealers Cards are : ' print"[' * ', '{a}']".format(a=self.dealer_card[1]) def player_hand_initial(self): for i in range(2): a_card = deck.pop() self.player_card.append(a_card) self.valuep += values[a_card.split()[0]] print '\nYour Initial Cards are : ' print self.player_card print self.valuep def player_hand_hit(self): if(ip[0] == 0): if( self.player_card[0] == 'A of spades' or self.player_card[0] == 'A of hearts' or self.player_card[0] == 'A of diamonds' or self.player_card[0] == 'A of clubs' or self.player_card[1] == 'A of spades' or self.player_card[1] == 'A of hearts' or self.player_card[1] == 'A of diamonds' or self.player_card[1] == 'A of clubs'): values['A']=1 ip[0] = 1 self.valuep = self.valuep - 10 a_card = deck.pop() self.player_card.append(a_card) self.valuep += values[a_card.split()[0]] clear_output() print '\nDealers Card are :' print"[' * ', '{a}']".format(a=self.dealer_card[1]) print '\nYour Cards are : ' print self.player_card print self.valuep def dealer_hand_hit(self): if(self.valued>=17): clear_output() print '\nYour Cards are : ' print self.player_card print '\nDealers Cards are :' print self.dealer_card print self.valued while self.valued < 17 : #Once Player Stands The Dealer will play till Soft Hand is Reached if(id[0] == 0): if( self.dealer_card[0] == 'A of spades' or self.dealer_card[0] == 'A of hearts' or self.dealer_card[0] == 'A of diamonds' or self.dealer_card[0] == 'A of clubs' or self.dealer_card[1] == 'A of spades' or 
self.dealer_card[1] == 'A of hearts' or self.dealer_card[1] == 'A of diamonds' or self.dealer_card[1] == 'A of clubs' ): values['A']=1 id[0] =1 self.valued = self.valued - 10 a_card = deck.pop() self.dealer_card.append(a_card) self.valued += values[a_card.split()[0]] clear_output() print '\nYour Cards are :' print self.player_card print '\nDealers Cards are : ' print self.dealer_card print self.valued def player_win(self): if (self.valuep == 21): clear_output() print '\nDealers Cards are :' print self.dealer_card print '\nYour Cards are : ' print self.player_card print self.valuep return 'Win' if(self.valuep > 21): clear_output() print '\nDealers Cards are :' print self.dealer_card print '\nYour Cards are : ' print self.player_card print self.valuep return 'Burst' if(self.valuep < 21): return 'C' def dealer_win(self): if (self.valued == 21): return 'BJ' if(self.valued > 21): return 'B' if (self.valued > self.valuep): return 'W' if(self.valued < self.valuep): return 'L' if(self.valued == self.valuep): return 'D' player_name = raw_input("Please Enter Your Name : ") game_on = True input1 = 0 gamebet1=0 while True: try: gamebet1 = int(raw_input("Empty Your Pockets :) Add Money To Casion Bank ! : ")) except: print "Oops ! Thats not a valid Amount ! \nTry Some Numbers" continue else: player_bank = Black_Jack_Bank(gamebet1) break while game_on: clear_output() print 'Welcome To Black Jack' player_bank = Black_Jack_Bank(gamebet1) gamebet = player_bank.sub_money() gamebet1 = player_bank.get_money() gamebet2 = player_bank.get_bet() print'Shuffling Cards . . . & Counting Money' time.sleep(3) shufflecards() clear_output() print"Dealer's Card are as follows" player_name = Black_Jack_Gameplay() player_name.dealer_hand_initial() player_name.player_hand_initial() z = player_name.player_win() if(z == 'Win'): print "Yipee ! 
Its BlackJack" player_bank.add_money((gamebet2)*2.5) gamebet1 = player_bank.get_money() print "Your Balance is : %s "%(gamebet1) time.sleep(7) continue y = player_name.dealer_win() if(y == 'BJ'): print "Yipee ! Its BlackJack" player_bank.blackjack_sub((gamebet2)*0.5) gamebet1 = player_bank.get_money() print "Your Balance is : %s "%(gamebet1) time.sleep(7) continue while True: try: val = int(raw_input("Please enter \n1. Hit\n2. Stand\n3. Current Balanace\n4. Exit Game\nPlease Enter Your Choice : ")) except: print "Looks like you did not enter a Valid Choice!" continue else: if(val == 1): player_name.player_hand_hit() z = player_name.player_win() if(z == 'Win'): print "Yipee ! You Win !" player_bank.add_money((gamebet2)*2) gamebet1 = player_bank.get_money() print "Your Balance is : %s "%(gamebet1) time.sleep(7) break if(z == 'Burst'): print"Woopsie ! It's a burst" print "Your Balance is : %s "%(gamebet1) time.sleep(7) break if(z == 'C'): continue if(val == 2): player_name.dealer_hand_hit() y = player_name.dealer_win() if(y == 'B'): print'Dealer Bursts ! You Win' player_bank.add_money((gamebet2)*2) gamebet1 = player_bank.get_money() print "Your Balance is : %s "%(gamebet1) time.sleep(7) break if(y == 'BJ'): print"Hmmm ! Dealer Wins" player_bank.add_money(gamebet2) gamebet1 = player_bank.get_money() print "Your Balance is : %s "%(gamebet1) time.sleep(7) break if(y == 'W'): print'Hmmm ! Dealer Wins' print "Your Balance is : %s "%(gamebet1) time.sleep(7) break if(y == 'L'): print'Yipee ! You Win' player_bank.add_money((gamebet2)*2) gamebet1 = player_bank.get_money() print "Your Balance is : %s "%(gamebet1) time.sleep(7) break if(y == 'D'): print"Woah !It's a Draw" player_bank.add_money(gamebet2) gamebet1 = player_bank.get_money() print "Your Balance is : %s "%(gamebet1) time.sleep(7) break if(val == 3): print gamebet1 if(val == 4): player_name.dealer_hand_hit() y = player_name.dealer_win() if(y == 'B'): print'Dealer Bursts ! 
You Win' player_bank.add_money((gamebet2)*2) gamebet1 = player_bank.get_money() print "Your CheckOut Balance is : %s "%(gamebet1) time.sleep(7) break if(y == 'BJ'): print"Hmmm ! Dealer Wins" player_bank.add_money(gamebet2) gamebet1 = player_bank.get_money() print "Your CheckOut Balance is : %s "%(gamebet1) time.sleep(7) break if(y == 'W'): print'Hmmm ! Dealer Wins' print "Your CheckOut Balance is : %s "%(gamebet1) time.sleep(7) break if(y == 'L'): print'Yipee ! You Win' player_bank.add_money((gamebet2)*2) gamebet1 = player_bank.get_money() print "Your CheckOut Balance is : %s "%(gamebet1) time.sleep(7) break if(y == 'D'): print"Woah !It's a Draw" player_bank.add_money(gamebet2) gamebet1 = player_bank.get_money() print "Your CheckOut Balance is : %s "%(gamebet1) time.sleep(7) break I want help to make it better. Any code optimization with explanation is welcome. If you know any major rules that this code does not follow or some bugs in code, please mention. P.S. I have tried using Object Oriented Programming. Due to gettting frustrated at the end due to errors. At the end the name of variables might be confusing. Answer: After a quick glance I see there are a few blocks that are repeated. They can be put into functions instead to reduce repetition and make it more readable. The sections are: 1) if(ip[0] == 0): if( self.player_card[0] == 'A of spades' or self.player_card[0] == 'A of hearts' or self.player_card[0] == 'A of diamonds' or self.player_card[0] == 'A of clubs' or self.player_card[1] == 'A of spades' or self.player_card[1] == 'A of hearts' or self.player_card[1] == 'A of diamonds' or self.player_card[1] == 'A of clubs'): values['A']=1 ip[0] = 1 2) print "Yipee ! Its BlackJack" player_bank.add_money((gamebet2)*2.5) gamebet1 = player_bank.get_money() print "Your Balance is : %s "%(gamebet1) time.sleep(7) continue 3) print "Yipee ! You Win !" 
player_bank.add_money((gamebet2)*2) gamebet1 = player_bank.get_money() print "Your Balance is : %s "%(gamebet1) time.sleep(7) 4) print '\nDealers Cards are :' print self.dealer_card print '\nYour Cards are : ' print self.player_card Also making the dealer and player into their own classes would at least help with readability and logically grouping your methods, since their methods are both currently in the Black_Jack_Gameplay class. With their similarities you could even find a way to use one class as both, or make one a parent class and one a child class (so you can practice some inheritance). As a side note, if you use github and don't mind your practice programs being open source, there are a few free automated code review tools that can pick out things like repeated code and bad practices for you.
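As a concrete illustration of block 1 and the class suggestion, the repeated ace-adjustment and hand-total bookkeeping could live in one small `Hand` class. A sketch in Python 3 (names are my own, not from the original code):

```python
# Sketch (Python 3) of factoring the repeated ace-handling and hand-value
# bookkeeping into one class; names here are my own, not from the original.
class Hand:
    FACE_VALUES = {'A': 11, 'J': 10, 'Q': 10, 'K': 10}

    def __init__(self):
        self.cards = []

    def add(self, card):
        # card is a string like 'A of spades', as in the original deck
        self.cards.append(card)

    @property
    def value(self):
        faces = [card.split()[0] for card in self.cards]
        total = sum(int(f) if f.isdigit() else self.FACE_VALUES[f] for f in faces)
        # Demote aces from 11 to 1 one at a time while the hand would bust,
        # replacing the ip[0]/id[0] flags and the long 'A of ...' comparisons.
        aces = faces.count('A')
        while total > 21 and aces:
            total -= 10
            aces -= 1
        return total
```

Both the player and the dealer can then each hold a `Hand`, which also sidesteps mutating the shared `values` dict mid-round.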
{ "domain": "codereview.stackexchange", "id": 26243, "tags": "python, beginner, python-2.x, playing-cards" }
Difference between these two digital down conversion methods
Question: I can think of two possible methods of digital down conversion. One seems superior to the other for most cases, but I'd like to get some DSP experts' comments on the practical differences between these two. For both methods described below, I start with the assumption that the $x(t)$ input is already constrained to the band of interest, to avoid the consideration of antialiasing. Antialiasing filters may be necessary in practical applications. In method A, the in-phase component is generated by mixing $x(t)$ with $\cos(\omega t)$ and low-pass filtering out the undesired high-frequency images. Since there is excess bandwidth in the signal, you can use a decimating filter instead of a straight lowpass, shown in my figure below as a blue box around a low pass and a decimator. The quadrature component is similarly obtained by mixing with $\sin(\omega t)$ and then filtering. As a formality I show multiplying the quadrature component by $j$ and adding it to the in-phase component. In method B, convolution is used to perform the discrete Hilbert transform on $x(t)$. This is the reconstruction of the missing quadrature component in $x(t)$. Downconversion is accomplished by mixing the resultant complex signal with a complex sinusoid, so no unwanted images are generated! Depending on the bandwidth of the baseband signal, it may be smart to add a decimating filter to reduce the data rate, but the output here is usable without that filter. That is, it is not aliased (provided the input signal is limited to the appropriate band). The primary constraint that I can see in method A is that you can't downconvert by small amounts relative to your bandwidth. Say, for example, you had a 1 kHz bandwidth in your signal of interest. If you want to downconvert by 100 Hz, method A will not allow for that. The band of the images generated in mixing will intersect the band of the signal of interest.
I've seen vector analyzers with analog front ends that can use double down conversion to reject images. A similar method could be used in DSP of course. The problem isn't unsolvable, it just isn't solved by method A. Method B appears to work around the problems with method A. The construction of a complex signal from the real signal essentially eliminates all negative frequency components, and mixing with a complex sinusoid ensures no images are created, so the only signal present in the output is the signal desired. No filtering is required. However, the convolution required to do the discrete Hilbert transform could be expensive. I would generally choose method A. My question is: are there advantages / drawbacks of these methods I haven't thought of? What are some practical considerations I should keep in mind before choosing a method? Answer: As you already know, the two methods are theoretically identical if there's no noise. If there is out-of-band noise then the low pass filters in method A will further suppress the out-of-band noise, whereas there's no such noise suppression with method B. This would be one advantage of method A over method B. Note, however, that the phase splitter in method B (which is the complex-valued filter with a wire in the real part and a Hilbert transformer in the imaginary part) could be replaced by a complex valued band pass filter filtering out the positive in-band frequencies. With such a filter, both methods would be equivalent, even with out-of-band noise. I do not think that the need to implement a Hilbert transformer is a disadvantage of method B, because the Hilbert transform does not need to be approximated over the whole band, just over the signal bandwidth.
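A quick numerical sanity check of that theoretical equivalence (my own sketch, using ideal FFT-based filters on a periodic test tone, so there is no out-of-band noise):

```python
import numpy as np

fs, n = 8000, 8000                         # 1 s of samples
t = np.arange(n) / fs
fc = 1000.0                                # frequency we down-convert by
x = np.cos(2 * np.pi * (fc + 200.0) * t)   # tone 200 Hz above fc

def lowpass(sig, cutoff):
    # ideal brick-wall low-pass via the FFT (fine for this periodic signal)
    spec = np.fft.fft(sig)
    f = np.fft.fftfreq(len(sig), d=1 / fs)
    spec[np.abs(f) > cutoff] = 0.0
    return np.fft.ifft(spec).real

def analytic(sig):
    # analytic signal: drop negative frequencies, double the positive ones
    spec = np.fft.fft(sig)
    f = np.fft.fftfreq(len(sig), d=1 / fs)
    spec[f < 0] = 0.0
    spec[f > 0] *= 2.0
    return np.fft.ifft(spec)

# Method A: mix with cos/-sin, low-pass each arm, combine as I + jQ.
y_a = (lowpass(x * np.cos(2 * np.pi * fc * t), 500.0)
       - 1j * lowpass(x * np.sin(2 * np.pi * fc * t), 500.0))

# Method B: Hilbert-based analytic signal, then a complex mix (no images).
y_b = 0.5 * analytic(x) * np.exp(-2j * np.pi * fc * t)

assert np.allclose(y_a, y_b)   # both are 0.5*exp(j*2*pi*200*t)
```

With a practical FIR/IIR low-pass instead of the FFT mask, the two outputs would differ only by the filters' transition-band behavior.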
{ "domain": "dsp.stackexchange", "id": 3017, "tags": "digital-communications, algorithms, hilbert-transform, quadrature" }
Symmetry of Wigner's 3j symbol
Question: I'm trying to understand the symmetry of Wigner's 3$j$ symbols. $3!=6$ permutations, as well as flipping the sign of all magnetic quantum numbers yields 12 operations, which are sometimes called the "12 classical ones". Wikipedia lists two additional "Regge" symmetries, which would (to my understanding) both double the number of possible operations, yielding 48 symmetry operations. However, the 3$j$ symbols apparently have 72 symmetries. What am I missing? Answer: The two Regge symmetries both have order $2$, but they don't both double the number of operations, because they don't commute with all of the other operations (or with each other). The Wikipedia article linked in the question actually mentions a nice way of expressing the operations that makes the full structure of the group (and the fact that it has $72$ elements) more obvious. I'll explain it in more detail. Use the abbreviation $$ \newcommand{\bfv}{\mathbf{v}} \newcommand{\magic}{{M}} \newcommand{\threej}{\Omega} \bfv \equiv (j_1,\, j_2,\, j_3,\, m_1,\, m_2,\, m_3) $$ for the list of numbers appearing in the $3$-$j$ symbol $$ \threej(\bfv) \equiv \left(\begin{matrix} j_1 & j_2 & j_3 \\ m_1 & m_2 & m_3 \end{matrix}\right). $$ Using the same list of numbers, define the matrix $$ \magic(\bfv) \equiv \left[\begin{matrix} -j_1+j_2+j_3 & j_1-j_2+j_3 & j_1+j_2-j_3 \\ j_1 + m_1 & j_2 + m_2 & j_3 + m_3 \\ j_1 - m_1 & j_2 - m_2 & j_3 - m_3 \end{matrix}\right]. $$ Notice that each row sums to $j_1+j_2+j_3$, as does each column. Now, define these linear operations on $\bfv$: $\bfv\to C(a,b)\bfv$ is the linear transformation whose effect on $\magic$ is to swap columns $a$ and $b$. $\bfv\to R(a,b)\bfv$ is the linear transformation whose effect on $\magic$ is to swap rows $a$ and $b$. $\bfv\to T\bfv$ is the linear transformation whose effect on $\magic$ is to take the transpose. Notice that $j_1+j_2+j_3$ is invariant under all of these operations. 
Using the abbreviation $$ s(\bfv) \equiv (-1)^{j_1+j_2+j_3}, $$ the corresponding symmetries of $\threej(\bfv)$ are: $\threej(\bfv)\to s(\bfv)\threej(C(a,b)\bfv)$. This swaps columns $a$ and $b$ in the $3$-$j$ symbol and multiplies the result by the sign $s(\bfv)$. $\threej(\bfv)\to s(\bfv)\threej(R(2,3)\bfv)$. This replaces $(m_1,m_2,m_3)\to(-m_1,-m_2,-m_3)$ in the bottom row of the $3$-$j$ symbol and multiplies the result by the sign $s(\bfv)$. $\threej(\bfv)\to \threej(T\bfv)$. This is the first of the Regge symmetries shown in the Wikipedia article. Explicitly: $$ \threej(T\bfv) = \left(\begin{matrix} j_1 & \frac{j_2+j_3-m_1}{2} & \frac{j_2+j_3+m_1}{2} \\ j_3-j_2 & \frac{j_2-j_3-m_1}{2}-m_3 & \frac{j_2-j_3+m_1}{2}+m_3 \end{matrix}\right). $$ $\threej(\bfv)\to s(\bfv)\threej(R(1,3)\bfv)$. This is the second of the Regge symmetries shown in the Wikipedia article. Explicitly: $$ s(\bfv)\threej(R(1,3)\bfv) = (-1)^{j_1+j_2+j_3} \left(\begin{matrix} \frac{j_2+j_3+m_1}{2} & \frac{j_3+j_1+m_2}{2} & \frac{j_1+j_2+m_3}{2} \\ j_1 - \frac{j_2+j_3-m_1}{2} & j_2 - \frac{j_3+j_1-m_2}{2} & j_3 - \frac{j_1+j_2-m_3}{2} \end{matrix}\right). $$ This correspondence shows that the group generated by these symmetries of the $3$-$j$ symbol is identical (as an abstract group) to the group generated by the linear transformations $C,R,T$ of $\bfv$. Because of the way those transformations were defined by their effect on the matrix $\magic(\bfv)$, we can count the number of elements in this group relatively easily. The group generated by the $C$s has $3!=6$ elements, as does the group generated by the $R$s, and the group generated by $T$ has $2$ elements. The $C$s and the $R$s commute with each other and cannot undo each other, so the group generated by the $C$s and $R$s has $6\times 6 = 36$ elements. 
The operation $T$ doesn't commute with the $C$s and $R$s, but it does satisfy $TG=GT$ where $G$ is the group generated by the $C$s and $R$s, so the group generated by the $C$s, $R$s, and $T$ has $36\times 2=72$ elements.
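These operations can be checked numerically with SymPy's `wigner_3j` (the sample values below are my own choice): build $\magic(\mathbf v)$, apply a transpose or a row swap, read the transformed $\mathbf v$ back off the matrix, and compare the symbols.

```python
from sympy import Matrix
from sympy.physics.wigner import wigner_3j

def magic(j1, j2, j3, m1, m2, m3):
    # The Regge "magic square": every row and column sums to j1+j2+j3.
    return Matrix([
        [-j1 + j2 + j3, j1 - j2 + j3, j1 + j2 - j3],
        [j1 + m1, j2 + m2, j3 + m3],
        [j1 - m1, j2 - m2, j3 - m3],
    ])

def from_magic(M):
    # Invert the construction: j_i and m_i come from rows 2 and 3.
    js = [(M[1, i] + M[2, i]) / 2 for i in range(3)]
    ms = [(M[1, i] - M[2, i]) / 2 for i in range(3)]
    return js + ms

v = [2, 2, 1, 1, 0, -1]                 # j1, j2, j3, m1, m2, m3 (J = 5)
M = magic(*v)
sign = (-1) ** (v[0] + v[1] + v[2])     # s(v) = (-1)^(j1+j2+j3)

# T: transpose of M, the first Regge symmetry -- no sign factor.
assert wigner_3j(*from_magic(M.T)) == wigner_3j(*v)

# R(2,3): swapping the last two rows flips every m_i, with sign s(v).
Mr = Matrix([list(M.row(0)), list(M.row(2)), list(M.row(1))])
assert wigner_3j(*from_magic(Mr)) == sign * wigner_3j(*v)

assert wigner_3j(*v) != 0               # the check is not vacuous
```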
{ "domain": "physics.stackexchange", "id": 69675, "tags": "angular-momentum, symmetry, representation-theory" }
What is meant by complete outer shell? Why do the noble gases have zero valency?
Question: Does having 8 or 2 electrons in the outermost shell mean that the outermost shell is full and the valency is zero? I know that the 3rd and 4th shells can contain 18 and 32 electrons. Then how can Argon's (2,8,8) outermost shell be full, though it does not contain the highest number of electrons (18) in its 3rd shell? Is it the same in the cases of the other noble gases too? Why? Answer: When we refer to "outer shells" we are talking about the highest n-level, but also limiting ourselves to the s- and p-orbitals. The reason for this has to do with how the effective nuclear charge "felt" by the valence electrons changes as you move through successively higher energy configurations. In summary, $s$- and $p$-electrons are screened less effectively by inner shells, and so for a given n-value, the $s$ and $p$ orbitals of the next n fill before the $d$ orbitals of the current n (they have lower energy). In your example using argon, this means that the 3d shell actually has no electrons - the condensed configuration is: $$ \ce{Ar: \space [Ne]}3s^23p^6 $$ For the next noble gas, krypton, the condensed electron configuration is: $$ \ce{Kr: \space [Ar]}4s^23d^{10}4p^6 $$ Note that it has $3d$ electrons at a higher energy than what argon had, but since the value of n roughly corresponds to the distance of the orbital from the nucleus, the $3d$ orbitals are physically closer to the nucleus than the $4s$ and $4p$ orbitals. In other words, for single atoms, the lowest-energy electron configuration always fills the s- and p-orbitals first, and so when we are looking for the valence or "outermost" electrons, that is where we need to look - we sort of ignore the $d$- and $f$-orbital electrons for these kinds of problems.
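The filling order the answer relies on (4s and 4p filling before 3d counts as "outer") follows the Madelung $n+l$ rule, which is easy to sketch in code. A rough illustration of the trend only, ignoring real-world exceptions such as Cr and Cu:

```python
# Sketch: build ground-state configurations with the Madelung (n + l) rule.
# This reproduces the trend in the answer; it ignores exceptions (Cr, Cu, ...).
def configuration(z):
    letters = 'spdf'
    # order subshells by n + l, ties broken by smaller n
    order = sorted(((n, l) for n in range(1, 8) for l in range(min(n, 4))),
                   key=lambda nl: (nl[0] + nl[1], nl[0]))
    parts = []
    for n, l in order:
        if z <= 0:
            break
        take = min(z, 2 * (2 * l + 1))   # subshell capacity is 2(2l+1)
        parts.append(f"{n}{letters[l]}{take}")
        z -= take
    return ' '.join(parts)

print(configuration(18))  # argon:   1s2 2s2 2p6 3s2 3p6
print(configuration(36))  # krypton: 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6
```

Both end in a filled p subshell, even though argon's n=3 shell never reaches its 18-electron capacity.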
{ "domain": "chemistry.stackexchange", "id": 1623, "tags": "electronic-configuration, noble-gases" }
Why can I apply $HS^\dagger$ and then measure in the computational basis to measure $Y$?
Question: I come from a CS background I was reading Neven and Farhi's paper ("Classification with Quantum Neural Networks on near Term Processors"), and I am trying to implement the subset parity problem using Qiskit, and solve it using a quantum Neural Network. There is one thing that doesn't make sense to me though. In the paper, they measure "the Pauli Y gate on the readout qubit" (perhaps this phrasing is wrong, as I have to admit that whenever one does not measure in the computational basis, the whole thing doesn't make sense to me anymore). In one of the questions I already asked on this site, I was told that measuring in a basis other than the computational basis is simply the same as applying a matrix to the qubit and then measuring it in a computational basis. Through various research, I was able to determine that, for this problem "to measure the Pauli Y gate the readout qubit", I had to apply $HS^{\dagger}$ and then measure in the computational basis in order to obtain the same result. It works, but I don't understand why it has to be this matrix in particular (is there any mathematical proof that shows that this is indeed this matrix ?) Answer: Your normal measurement is a pauli-$Z$ measurement. If you apply a unitary $U$ just before measurement, this transforms the $Z$ measurement into $U^\dagger ZU$. So, any $U$ that transforms $U^\dagger ZU=Y$ will do the job. One convenient way of doing this is $$ \frac{Y+Z}{\sqrt{2}}, $$ but your choice will also work: $$ SHZHS^\dagger=SXS^\dagger=-iS^2X=-iZX=Y $$ If you want to know why it's the transformation $U^\dagger ZU$, well think about a circuit with input $|\psi\rangle$ that has a unitary $U$ enacted upon it, and then it's measured in the standard basis. The probability of getting the 0 answer is $$ |\langle 0|U|\psi\rangle|^2, $$ which is the same as the probability that $|\psi\rangle$ is in the state $U^\dagger|0\rangle$. 
This corresponds to a measurement projector $U^\dagger |0\rangle\langle 0|U$, so you can see that transformation starting to come out.
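Both identities are easy to verify with explicit $2\times 2$ matrices. A quick NumPy check (my own sketch):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
S = np.array([[1, 0], [0, 1j]])                # phase gate
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

# Applying U = H S^dagger before a Z measurement measures U^dagger Z U.
U = H @ S.conj().T
assert np.allclose(U.conj().T @ Z @ U, Y)      # S H Z H S^dagger = Y

# The Hermitian alternative (Y + Z)/sqrt(2) also maps Z to Y.
V = (Y + Z) / np.sqrt(2)
assert np.allclose(V.conj().T @ V, np.eye(2))  # V is unitary
assert np.allclose(V.conj().T @ Z @ V, Y)
```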
{ "domain": "quantumcomputing.stackexchange", "id": 1493, "tags": "measurement, pauli-gates" }
How can a parallel circuit work?
Question: The electrons always take the easiest way in a circuit, right? So in a parallel circuit, why do the electrons flow through all parts of the circuit and not just the one with the least resistance? Answer: Imagine a river flowing towards a fork. The water on the left side of the river does find it easier to go on the left fork and the water on the right does find it easier to go on the right. Now imagine that the right branch looks like more of a side street than a fork. But if there is a slowdown in the forward direction and the side branch goes downhill, then you would still expect some of the water to take the right turn. Specifically, the water on the right finds it easier to go to the right. The same thing happens with a circuit. Charges on the surface of the wires provide a net field that guides the current in different parts (right or left) to tend towards the particular branches of the parallel circuit. So each part of the wire has some current and each part does find it easier to go on a particular branch of the circuit.
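The current divider rule makes the same point quantitatively: every branch carries some current, in inverse proportion to its resistance, and no branch carries all of it. A small numeric illustration (component values are my own):

```python
# Current divider for two resistors in parallel across the same voltage.
V = 12.0                     # volts across both branches
r1, r2 = 4.0, 8.0            # ohms

i1 = V / r1                  # current through the easier branch
i2 = V / r2                  # current still flowing through the harder branch
r_parallel = 1.0 / (1.0 / r1 + 1.0 / r2)
i_total = V / r_parallel

# Neither branch gets all the current; the split is inversely proportional to R.
assert abs((i1 + i2) - i_total) < 1e-12
assert abs(i1 / i2 - r2 / r1) < 1e-12
```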
{ "domain": "physics.stackexchange", "id": 29407, "tags": "electrons, electric-current, electrical-resistance, conductors" }
Why does an MRI machine or other EMP generating machine not damage humans, but it will fry computers?
Question: A sufficiently strong electromagnetic pulse can/will destroy smartphones and computers. I know somebody who went into an MRI machine and forgot a Visa credit card in his pocket. The card was toast and he had to get a new one. A mobile phone in an MRI probably wouldn't fare better. But a big part of the human body itself is based on electric signals. The brain and nervous system, including the heart, work on electric signals. And those signals have to go to very precise places. There is an area of the brain processing vision, another is responsible for speech, etc. Also, heart function depends on precisely timed signals traveling very specific routes. So it would seem that a trip to an MRI scan should totally fry anyone's brain and heart. Except it doesn't. An MRI scan is harmless (if you are not allergic to those injections they give). Why? And then there are those electromagnetic pulse devices they show in Hollywood movies. While totally trashing the bad guys' electronics, fellow humans are always shown unharmed. Again, why should the brain be different? Answer: Y'know that spark that jumps between your finger and a doorknob on a dry day in winter? That spark is enough to break down the "gate oxide" insulator between the gate electrode and the body of a microscopic field effect transistor. The insulator is so thin that it only takes maybe a hundred volts or so to punch through it, ruining the transistor, and thereby ruining the integrated circuit chip of which the transistor is a critical part. There is nothing in your flesh that is so fragile or so critical. (Flesh heals, IC chips don't.) Also, Y'know how there are wires all over the circuit boards in an electronic device? Those wires are like little antennas that can convert an electromagnetic pulse to an electrical pulse with enough voltage to toast one of those crucial transistors.
There aren't any wires like that in our bodies.* The effects of electromagnetic radiation on our flesh is much more diffuse. * Not true for everybody. My dad has an implanted cardiac pacemaker. He isn't allowed anywhere near an MRI scanner, and he might not fare as well as his neighbors if a nuclear weapon is exploded in orbit above the city where he lives.
{ "domain": "physics.stackexchange", "id": 95299, "tags": "electromagnetism, electric-current, electronics, biophysics, electrical-engineering" }
Question on expression in "J.S.Bell : On the Einstein Podolsky Rosen paradox"
Question: I have a question on the article J. S. Bell, On the Einstein Podolsky Rosen paradox, Physics 1, 195, 1964. (link) My question concerns expression (3) of the article, at page 196. I don't understand the reasoning that leads to this expression of the expectation value... I think I am missing something, but I don't know what. This is what I have understood so far: $\vec{\sigma_1}$ and $\vec{\sigma_2}$ are the spins of the two particles that move apart and must be exactly opposite according to quantum mechanics when measured in a direction of the component $\vec{a}$. First, did I understand well, and do we really have $$A(\vec{a},\lambda) = \vec{\sigma_1}.\vec{a} = \pm 1 \\ B(\vec{b},\lambda) = \vec{\sigma_2}.\vec{b} = \pm 1$$ then? If not, what do $A(\vec{a},\lambda)$ and $B(\vec{b},\lambda)$ correspond to? A sort of $sign$ function or something like in the next section? Secondly, why $$ <\vec{\sigma_1}.\vec{a}\; \vec{\sigma_2}.\vec{b}> = -\vec{a}.\vec{b}$$ Is it because $\vec{\sigma_1}$ and $\vec{\sigma_2}$ are opposite? Thanks! Answer: No, all your $\vec \sigma_1, \vec \sigma_2$ are simply the Pauli matrices, but acting on the first or the second particle, so $\vec \sigma_1. \vec a, \vec \sigma_2. \vec b$ are the measurement operators applying respectively to particles $1$ and $2$. The outcome of a measurement can be $1$ or $-1$, but that does not mean that $\vec \sigma_1. \vec a = \pm 1$ or $\vec \sigma_2. \vec b = \pm 1$. This is false; it is not an operator equality. In fact, a better notation for the measurement operator of the two-particle system is $\vec \sigma. \vec a \otimes \vec \sigma. \vec b$. Here $\vec \sigma$ are also the Pauli matrices. This notation means that the operator $\vec \sigma. \vec a$ is applied to the first particle, and that the operator $\vec \sigma. \vec b$ is applied to the second particle.
The mean value is referred to the singlet state : $$\psi = \frac{1}{\sqrt{2}} (|+ \rangle |- \rangle - |- \rangle |+ \rangle) \tag{1} $$ So, you have : $$M=<\vec \sigma. \vec a \otimes \vec \sigma. \vec b>_{Singlet} = \langle\psi|\vec \sigma. \vec a \otimes \vec \sigma. \vec b|\psi\rangle \tag{2}$$ That is : $$M = \frac{1}{2}(\langle +| \langle -| - \langle -| \langle +| )|\vec \sigma. \vec a \otimes \vec \sigma. \vec b(|+\rangle|-\rangle - |-\rangle|+\rangle) \tag{3}$$ So, you have four terms for $M$ ($M= M_1 + M_2 +M_3 + M_4$): $$M_1 = \frac{1}{2} \langle +|\vec \sigma.\vec a|+\rangle \langle -|\vec \sigma.\vec b|-\rangle\tag{4}$$ $$M_2 = -\frac{1}{2} \langle +|\vec \sigma.\vec a|-\rangle \langle -|\vec \sigma.\vec b|+\rangle\tag{5}$$ $$M_3 = -\frac{1}{2} \langle -|\vec \sigma.\vec a|+\rangle \langle +|\vec \sigma.\vec b|-\rangle\tag{6}$$ $$M_4 = \frac{1}{2} \langle -|\vec \sigma.\vec a|-\rangle \langle +|\vec \sigma.\vec b|+\rangle\tag{7}$$ With the help of the expressions : $$\langle +|\vec \sigma.\vec c|+\rangle = c_3, \langle +|\vec \sigma.\vec c|-\rangle = c_1 - ic_2,\langle -|\vec \sigma.\vec c|+\rangle = c_1 + ic_2, \\ \langle -|\vec \sigma.\vec c|-\rangle = -c_3\tag{8}$$ you will find easily that : $$M = -(a_1b_1 + a_2b_2 + a_3b_3) = - \vec a.\vec b \tag{9}$$
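As a numerical cross-check of $(9)$, here is a short NumPy sketch (my own, not part of the answer) verifying $\langle\vec \sigma. \vec a \otimes \vec \sigma. \vec b\rangle_{Singlet} = -\vec a.\vec b$ for random unit vectors:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = np.stack([sx, sy, sz])

# Singlet state (1) in the basis |++>, |+->, |-+>, |-->
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def expectation(a, b):
    A = np.tensordot(a, pauli, axes=1)     # sigma . a  on particle 1
    B = np.tensordot(b, pauli, axes=1)     # sigma . b  on particle 2
    return np.real(psi.conj() @ np.kron(A, B) @ psi)

rng = np.random.default_rng(0)
for _ in range(10):
    a = rng.normal(size=3); a /= np.linalg.norm(a)
    b = rng.normal(size=3); b /= np.linalg.norm(b)
    assert np.isclose(expectation(a, b), -np.dot(a, b))
```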
{ "domain": "physics.stackexchange", "id": 9137, "tags": "bells-inequality" }
Kernel Convolution in Frequency Domain - Cyclic Padding
Question: I don't know whether this is the right place to post this, but I suppose it is. I know that frequency multiplication = circular convolution in the time domain for discrete signals (vectors). I also know that "the convolution theorem yields the desired linear convolution result only if $x(n)$ and $h(n)$ are padded with zeros prior to the DFT such that their respective lengths are $N_x+N_h-1$, essentially zeroing out all circular artifacts." and everything works with vectors, but my goal is circular convolution with matrices as in this paper: Victor Podlozhnyuk (nVidia) - FFT Based 2D Convolution. If you look at the first two figures (Figures 1 and 2), you'll see that the kernel is padded in a weird way I've never seen before. What is this? Answer: Figures 1 and 2 are not showing any padding whatsoever. The larger matrix is the data (probably image) matrix, not a padded kernel matrix. The figures are simply showing how the circular aspect of the convolution works in 2 dimensions.
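The 1-D padding rule quoted in the question is easy to demonstrate, and the same logic extends dimension by dimension to the 2-D case in the paper. A short NumPy sketch:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, -1.0, 2.0])
n = len(x) + len(h) - 1          # pad to N_x + N_h - 1

# Padded DFT multiplication reproduces plain linear convolution.
lin = np.convolve(x, h)
circ = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real
assert np.allclose(circ, lin)

# Without padding, the tail wraps around (circular artifacts).
wrapped = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))).real
assert not np.allclose(wrapped, lin[:len(x)])
```

For 2-D, the same check works with `np.fft.fft2` and per-axis pad sizes.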
{ "domain": "dsp.stackexchange", "id": 7317, "tags": "image-processing, filters, fourier-transform, algorithms, convolution" }
Uniqueness of a stress (only) boundary value problem
Question: A static problem in linear elasticity is typically written as the following boundary value problem: find $\boldsymbol u$ and $\boldsymbol \sigma$ such that: $\text{div} \boldsymbol \sigma + \boldsymbol f = \boldsymbol 0$ in $\Omega$, $\boldsymbol \sigma^T = \boldsymbol \sigma$ in $\Omega$, $\boldsymbol \sigma = 2\mu \boldsymbol \epsilon + \lambda \text{tr}\boldsymbol \epsilon \, \boldsymbol I$ in $\Omega$, $\boldsymbol \epsilon = \frac{1}{2}( \nabla \boldsymbol u + \nabla^T \boldsymbol u )$ in $\Omega$, $\boldsymbol \sigma \cdot \boldsymbol n = \boldsymbol T^d$ on $\partial\Omega_T$, $\boldsymbol u = \boldsymbol u^d$ on $\partial\Omega_u$. And it can be proved that the solution is unique in both the displacement field and the stress field. I wonder if we have uniqueness for the boundary value problem: find $\boldsymbol \sigma$ such that: $\text{div} \boldsymbol \sigma + \boldsymbol f = \boldsymbol 0$ in $\Omega$, $\boldsymbol \sigma^T = \boldsymbol \sigma$ in $\Omega$, $\boldsymbol \sigma \cdot \boldsymbol n = \boldsymbol T^d$ on $\partial\Omega$. We have three equations, three boundary conditions and three independent component fields (in case the coordinate system is chosen so that basis vectors correspond to eigenvectors at each point of $\Omega$). I am aware of the indeterminacy of the displacement field due to a rigid body motion. Do you have knowledge of, or references that treat, this, I would say intermediate, problem? Thank you in advance for sharing it. Answer: The answer is negative. You may, for instance, construct a vector field $v$ whose integral lines are circles centered on the $z$ axis and with $\mbox{div}\: {\bf v}=0$. (Think of an incompressible fluid rotating around the $z$ axis). You may confine this field to a torus $\Omega$. The tensor field $\sigma = {\bf v}\otimes {\bf v}$ satisfies all your requirements with $f=0$, ${\bf T}^d=0$. But so does the choice ${\bf v}=0$ (i.e. $\sigma = 0$). 
This result implies that, on that $\Omega$, the problem with generic $f$ and ${\bf T}^d$ does not admit a unique solution. Indeed, if $\sigma$ is a solution with given $f$ and ${\bf T}^d$, then $\sigma + a{\bf v}\otimes {\bf v}$ satisfies the same problem for every $a \in \mathbb R$ and ${\bf v}$ defined as above.
{ "domain": "physics.stackexchange", "id": 27150, "tags": "continuum-mechanics" }
The significance of bounded parameters in complexity
Question: A lot of complexity results are given with respect to bounded cases where results are more favourable. For example, the graph isomorphism problem — which is GI-complete in general — is known to be tractable in cases where degree is bounded. But every time I see such a result, I have a nagging voice in my head telling me that for any input graph the degree must be bounded. I find it hard to intuitively resolve the resulting cognitive dissonance. I understand that bounding a parameter allows for considering it as a constant rather than a variable in the complexity analysis. But that seems almost like a semantic game since you can still define the bound to be arbitrarily large so long as it's fixed. Perhaps the message to be gleaned from these bounded results is that if you assume the parameter in question to generally be small, the worst case is more favourable. But this seems rather vague to me since one can define "small" arbitrarily. The nagging voice tells me I'm still missing something ... that something hasn't clicked. To distil a question out of this: when one sees such a bounded-parameter result, what should one intuitively conclude about the problem in practice with respect to that parameter? What is the importance of such a result? Answer: When you see that a parameter is bounded, you shouldn't think of the value of the parameter of a single graph, but of a class of all graphs that satisfy the bound on the parameter. So, to say that graph isomorphism is in polynomial time for bounded degree graphs really means that, for every $d$, there is a polynomial time algorithm $A_d$ that decides graph isomorphism on the class of all graphs of degree at most $d$. However, the running time of $A_d$ depends on $d$. And, yes, the take-home is generally that there are decent algorithms as long as the parameter is small. 
What counts as decent will depend on your circumstances: what sorts of inputs you need to deal with and how much computing time you have available.
{ "domain": "cs.stackexchange", "id": 3830, "tags": "complexity-theory" }
respawn_delay does not delay the respawn!
Question: There's a new .launch element (only in Indigo) that delays the respawn of a node. However, it doesn't do what it says. This is a demo part of the .launch file I have: Is there anything I'm missing? <node pkg="res_prj" type="init_tf_broadcaster" name="init_tf_br" respawn="true" respawn_delay="20"/> I expect the node init_tf_br to respawn after 20 secs, but it starts immediately once it dies. Thanks in advance. Reference: http://wiki.ros.org/roslaunch/XML/node Originally posted by emacsd on ROS Answers with karma: 194 on 2014-07-13 Post score: 0 Answer: It looks like that change was introduced three days ago, and hasn't been released yet: https://github.com/ros/ros_comm/pull/446 Until the ros_comm maintainers do the next release, you should be able to test it if you get the latest version of ros_comm from source. Originally posted by ahendrix with karma: 47576 on 2014-07-13 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by emacsd on 2014-07-13: You're right! Thank you.
{ "domain": "robotics.stackexchange", "id": 18597, "tags": "ros, roslaunch, respawn, roslauch" }
How fast could you suck up the atmosphere
Question: So kind of a strange question, but if I had a 1000 foot wide hose with endless storage space, what's the quickest possible time it could be used to suck up the entirety of the Earth's atmosphere? Edit: Taking into consideration Knzhou's comment, assume I can manipulate gravity, bundle up the atmosphere, and push it through that 1000 ft wide hole. Is there a limit to how fast it could go? Edit2: Assume it is being pushed through at half the speed of light. P.S. if anyone is curious why I'm so hung up on this, I've been planning out a science fiction novel where humanity figured out how to control gravity and this deals with the workings of a terraformer. Answer: The first version of the question asked about just sucking up the atmosphere with a hose, without manipulating gravity. I'll answer that first, since I think it's a very good question that many — including my previous self — get wrong. No manipulation of gravity They did this with a giant vacuum cleaner in the movie Spaceballs. But as knzhou comments, it is fundamentally impossible. The reason that the atmosphere stays where it is, is that there is hydrostatic equilibrium: gravity tries to pull the air molecules down, but pressure builds up and prevents it from collapsing altogether. Whether or not you build a 500 km long hose doesn't change that. A vacuum cleaner works by creating a lower pressure $P$ inside than outside. But in this case the gravitational potential $\Phi$ is the same inside and outside. In the case of the atmosphere, $P$ is lower in space, but $\Phi$ is lower closer to Earth. Here is a drawing that may help: at a given height, $P$ and $\Phi$ are the same inside and outside the hose. Fiercely manipulating gravity Your first edit assumes that gravity can be manipulated arbitrarily. In that case, there is no limit to how fast the atmosphere can be sucked out. Just create an Alcubierre drive. 
Essentially this works by constructing a metric of space such that there is a gradient that can, in principle, be arbitrarily large. Although the air molecules don't move through space faster than light locally, as seen from "outside" the speed can be faster than $c$. You'd have to think carefully about how exactly you do this without simultaneously tearing Earth apart. Moderately manipulating gravity Your second edit assumes a maximum speed of $v=c/2$. In that case the answer is simply given by the distance from the antipode of the hose (since air on the other side of the Earth from the hose needs to travel around Earth), plus the distance from ground to space, divided by that speed. Assuming 500 km for the latter, that distance is roughly $d = 20,\!500\,\mathrm{km}$, so the time is $t = d/v = 2d/c \approx 0.14$ seconds.
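A quick numeric check of that last estimate (the 500 km "edge of space" figure is the one assumed above):

```python
import math

R_earth = 6371e3      # m, mean Earth radius
space_height = 500e3  # m, "edge of space" figure used in the answer
c = 3.0e8             # m/s
v = c / 2             # assumed top speed of the air

# Farthest parcel of air: half the circumference (from the antipode) plus the climb to space
d = math.pi * R_earth + space_height
t = d / v
print(d / 1e3, t)  # roughly 20,500 km and about 0.14 s
```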
{ "domain": "physics.stackexchange", "id": 30737, "tags": "earth, atmospheric-science" }
Can anyone tell me formula for lattice $a$, $b$ and $c$ in a hexagonal structure?
Question: Can anyone tell me the formula for the lattice constants $a$, $b$ and $c$ in a hexagonal structure? $a$, $b$ and $c$ are the unit-cell parameters of the structure. As we see, for a cubic structure we have a method to calculate the sides $a$, $b$ and $c$ known as Bragg's law. So what should be the method for calculating them in a hexagonal structure? I want to determine the XRD (X-ray diffraction) structure of my crystal. Answer: I still don't understand what your question is about, but e.g. this paper contains everything about the real and reciprocal lattice of a hexagonal structure (on the first 1.5 pages). "Unit cell of structure" is not a common term; I think you are referring to the lattice constants, of which graphene has only one, not three.
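For reference, the standard hexagonal interplanar-spacing relation is $\frac{1}{d^2} = \frac{4}{3}\,\frac{h^2+hk+k^2}{a^2} + \frac{l^2}{c^2}$ (with $b = a$, so there are only two independent constants), which together with Bragg's law $\lambda = 2d\sin\theta$ lets you extract $a$ and $c$ from indexed reflections. A sketch with invented lattice constants:

```python
import numpy as np

def d_hex(h, k, l, a, c):
    """Interplanar spacing of plane (h k l) in a hexagonal lattice (a = b, gamma = 120 deg)."""
    return 1.0 / np.sqrt((4.0 / 3.0) * (h**2 + h * k + k**2) / a**2 + l**2 / c**2)

a_true, c_true = 3.25, 5.21  # hypothetical lattice constants (angstroms), for illustration only

# "Measured" spacings of an (h k 0) and an (0 0 l) reflection
d100 = d_hex(1, 0, 0, a_true, c_true)
d002 = d_hex(0, 0, 2, a_true, c_true)

# Invert the formula for the two special cases:
a = 2.0 * d100 / np.sqrt(3.0)  # from (100): 1/d^2 = (4/3)/a^2
c = 2.0 * d002                 # from (002): 1/d^2 = 4/c^2
print(a, c)  # recovers 3.25 and 5.21
```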
{ "domain": "physics.stackexchange", "id": 7045, "tags": "solid-state-physics" }
How to use robot_localization correctly?
Question: Hi All, I would like to ask your opinions and best practices for using the robot_localization package for the following use case: The vehicle has two wheels (left and right), more accurately two tracks (a robotic vehicle for agriculture). Eventually I will have to achieve automatic GPS waypoint finding. The wheels have fairly accurate encoders, so I can measure travelled distance and speed of both left and right tracks, and calculate and publish Odometry messages from that at a rate of at least 10 Hz. The vehicle is equipped with a navigational device which provides me GPS latitude, longitude and orientation (roll, pitch, yaw) and speed. I can use these data to calculate and publish another Odometry message. Latitude and longitude are updated at 1 Hz; orientation is updated at 10 Hz. I'm planning to fuse these two odometry sources with one ekf_localization_node and use its output for the waypoint finding. I'm wondering if this is a viable approach, or whether there would be a better way? Many thanks, Tamás EDIT 23/08/2020: I have updated my setup based on the answers from @tom-moore (see below and other posts). It's available on GitHub here, just in case it will be useful for someone. Originally posted by tbondar on ROS Answers with karma: 29 on 2020-02-14 Post score: 0 Answer: Seems viable to me. Are you doing the conversion of the GPS data into your world frame coordinates, or are you using navsat_transform_node? Make sure the IMU adheres to the specifications in the r_l wiki. The package isn't great at handling IMU and GPS only fusion, so the fact that you have encoders will help you immensely. Originally posted by Tom Moore with karma: 13689 on 2020-03-25 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by tbondar on 2020-07-07: Sorry about the delay, there's been a pause in the project, like in everything. 
I don't plan to use the navsat_transform_node because I don't have direct access to the GPS and IMU inside my navigational device. It reports lat, long and orientation (yaw) values directly and I don't have access to the details. I'll have to put together the Odometry message myself from these. Does this change your opinion regarding feasibility? I guess the covariances will be the most challenging. I will probably have to ask further questions when I get there. Comment by Tom Moore on 2020-07-14: If it gives you lat/long/orientation, you should still be able to use navsat_transform_node, but you may need a translation node so it outputs the right message types for consumption by navsat_transform_node. But as long as your GPS conversion outputs usable world-frame coordinates, you should be fine, absolutely. Just fuse pose data from your GPS (or GPS conversion node) and velocity data from the wheel encoders. Comment by tbondar on 2020-08-23: @tom-moore Thanks for your reply, I've updated my setup according to your suggestions here and other posts. I've uploaded it with a simple vehicle emulator to GitHub (link above in my original post). My current node graph is here. Would you mind having a quick look just in case you spot something incorrect? Comment by Tom Moore on 2020-08-31: Is anything misbehaving, or is it working as expected? I don't have the cycles to dig into it right now, but if there are specific issues, feel free to ask new questions.
{ "domain": "robotics.stackexchange", "id": 34441, "tags": "navigation, ros-melodic, robot-localization" }
Derivation of the linear cross entropy
Question: I'm looking at cross-entropy benchmarks and there's much that I'm reading at the moment, but I'm stuck on one detail: how to derive the linear cross-entropy formula from the cross-entropy formula. The cross-entropy of probability densities $p(x)$ and $q(x)$ over $D=2^N$ possible values of $x\in \{0,1\}^N$ is given by $$ -\sum_x^D q(x) \log p(x) $$ I took the linearization of the log function $\log (x) \approx x-1$ in an attempt to get the linear cross entropy (following the derivation of Linear entropy). As the linearization, I obtain $$ -\sum_x^D q(x) (p(x) -1) = 1 -\sum_x^D q(x) p(x) $$ In both "Quantum supremacy using a programmable superconducting processor" and "Limitations of Linear Cross-Entropy as a Measure for Quantum Advantage" [arXiv:2112.01657] the linear cross-entropy is given as $$D\sum_{x}^{D} q(x) p(x) -1$$ I have no idea why my sign is off and where the pre-factor of $D$ comes from. I can recover the linear XEB formula if $\log(p(x))\approx 1-D p(x)$. However, I don't know how I can get the factor of $D$ to appear in any sensible approximation. I tested some numerics and the XE and the linear XE do not appear to follow the same trends. I did an interpolation from $q_{s=0} = p$ to $q_{s=1}=\mathrm{unif}$ in five steps and found that the XE increases as $q$ is further from $p$ while the linear XEB decreases to zero as $q$ approaches the uniform distribution. I think this is correct but I'm lost on the intuition/understanding of how the XE and linear XE are connected. 
import numpy as np

#fix seed
np.random.seed(0)

#qubits
n=10
#from Google notation
D=2**n
#print(D)

#print("Randomly choosen \ket p in basis \e")
#print(p)

#distro p
p = np.random.rand(D)
p = p / sum(p)

#distro q_s = (1-s) \ket p + s \ket Delta
Delta = np.random.rand(D)
Delta = Delta / sum(Delta)

#sharp
peaked = np.zeros(D)
peaked[np.random.randint(D)] = 1.0

#unif
unif = np.ones(D)
unif = unif / sum(unif)

def getq(s, qmax=unif):
    """get q for a given mixing parameter s"""
    if s > 1:
        s = 1
    if s < 0:
        s = 0
    return (1-s) * p + (s) * qmax

def xel(p, q):
    """linear cross entropy of two distributions"""
    #sum
    S = 0
    for k in range(len(p)):
        S = S + (p[k] * q[k])
    return D*S - 1

def xe(p, q):
    """cross entropy"""
    #sum
    S = 0
    for k in range(len(p)):
        if q[k] == 0:
            continue
        S = S - q[k] * np.log(p[k])
    return S

def S(p):
    """Entropy of probability density vector"""
    #entropy
    S = 0
    for k in range(len(p)):
        if p[k] == 0:
            continue
        S = S - p[k] * np.log(p[k])
    return S

def purity(p):
    """linear entropy"""
    #sum
    S = 0
    for k in range(len(p)):
        S = S + p[k] * (p[k] - 1)
    return S

qmax = unif  # distribution to interpolate towards; needed by the calls below

print("Entropy of \ket p", S(p))
print("Purity of \ket p", purity(p))
print(" ")
print("Entropy of \ket q_max", S(getq(1, qmax)))
print("Purity of \ket q_max", purity(getq(1)))
print(" ")
print("purity max", purity(unif))

svals = np.linspace(0, 1, 5)
for s in svals:
    print(" s=", s)
    q = getq(s, qmax)
    print("xel_pq", xel(p, q))
    print("xe_pq", xe(p, q))
    #print("xel_qp", xel(q, p))
    #print("xe_qp", xe(q, p))
    print(" ")

s     xel_pq                  xe_pq
0.0   0.3448222395967324      6.734095320988952
0.25  0.25861667969755        6.860293267333703
0.5   0.17241111979836532     6.9864912136784465
0.75  0.08620555989918333     7.112689160023204
1.0   1.9984014443252818e-15  7.238887106367953

Answer: In case anyone else gets caught up on this detail: I spoke to Soonwon Choi and he explained that the "linear" cross-entropy is not a linearization of the cross-entropy. Rather it is called "linear" since the components of $p$ appear linearly. 
The form is motivated by this benchmark taking the value 1 when the samples are obtained from sufficiently random circuits (Porter-Thomas) and taking the value 0 if the samples are uniformly random.
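That normalization is easy to check numerically: with ideal probabilities drawn from an exponential (Porter-Thomas-like) distribution the benchmark sits near 1, while a uniform $q$ gives exactly 0. A sketch, independent of any particular circuit:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 2**10

def linear_xeb(p, q):
    # D * sum_x q(x) p(x) - 1
    return D * np.dot(p, q) - 1.0

# Porter-Thomas-like ideal distribution: normalized exponential weights
p = rng.exponential(size=D)
p /= p.sum()

uniform = np.full(D, 1.0 / D)

print(linear_xeb(p, uniform))  # 0 up to rounding: D * (1/D) * sum(p) - 1 = 0
print(linear_xeb(p, p))        # near 1, since E[sum p^2] is about 2/D for exponential weights
```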
{ "domain": "quantumcomputing.stackexchange", "id": 3729, "tags": "information-theory, entropy, quantum-advantage" }
Why don't we consider centripetal force in the expression for net force in specific circular motion problems?
Question: I came across a problem in circular motion involving a stone tied to the end of a string. They asked for the net force at the highest point and the lowest point, but in the answer, they never included the centripetal force. Why? (The correct option according to them is a but according to me it should be d.) Answer: There are only two forces acting on the stone - its weight $mg$ acting downwards and the tension in the string, which is $T_1$ acting upwards at the lowest point of the circle and $T_2$ acting downwards at the highest point of the circle. The centripetal force is not a third force - it is just the net force that is required to keep the stone moving in a circle. Since we know that the stone is moving in a circle, the net force on the stone (the sum of its weight and the tension in the string, with appropriate signs) is the required centripetal force.
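The force bookkeeping can be sketched in a few lines. All numbers below are illustrative assumptions, not taken from the original problem:

```python
# Stone on a string in a vertical circle: only two forces act (weight and tension);
# their resultant IS the centripetal force m v^2 / r, so no third term is added.
m, r, g = 0.5, 1.0, 9.8      # kg, m, m/s^2 (assumed values)
v_bottom, v_top = 6.0, 4.0   # m/s at the lowest and highest points (assumed)

# Lowest point (taking "toward the center" = up):  T1 - m g = m v^2 / r
T1 = m * v_bottom**2 / r + m * g
# Highest point (toward the center = down):        T2 + m g = m v^2 / r
T2 = m * v_top**2 / r - m * g

print(T1, T2)  # in both cases weight + tension alone supply m v^2 / r
```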
{ "domain": "physics.stackexchange", "id": 91686, "tags": "homework-and-exercises, newtonian-mechanics, home-experiment" }
Unexpected behavior when trying to set position of PR2 arm in Gazebo
Question: Hi, I am using Fuerte on Ubuntu 11.10. In Gazebo, I have been using the SetModelConfiguration method suggested here to move a PR2 arm to a certain position. It was working fine for the past several months, but recently I have gotten some unexpected behavior. When I make a service call to /gazebo/set_model_configuration, the robot seems to do a reset to an initial position before trying to set the requested position. This reset also seems to make the robot jump around a little. I was wondering if anyone else has used this SetModelConfiguration method and has experienced a similar problem. Thank you Update: When I try to move an arm through a path in small steps, the shifting is more prominent, and the robot eventually just falls through the floor. Also, I should have posted the following code before. It is what I am using to call set_model_configuration for moving the arm.

set_model_config_client = rospy.ServiceProxy("/gazebo/set_model_configuration", SetModelConfiguration)
req = SetModelConfigurationRequest()
req.model_name = "pr2"
req.urdf_param_name = "robot_description"
req.joint_names = rospy.get_param("/%s/joints" % ("r_arm_controller"))
req.joint_positions = angles
rospy.wait_for_service("/gazebo/set_model_configuration")
res = set_model_config_client(req)

angles is the list of joint angles specifying the position of the right arm Originally posted by clee2693 on ROS Answers with karma: 71 on 2012-07-08 Post score: 1 Answer: There were some bugs in 1.6.10- release that should have been patched correctly in simulator_gazebo 1.6.11, which I have just released today. Please let me know if the problem persists. Thanks, John Originally posted by hsu with karma: 5780 on 2012-07-27 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by clee2693 on 2012-08-07: It seems to be working again. Thanks. Comment by clee2693 on 2012-08-07: It seems to be working fine again. Thanks. 
{ "domain": "robotics.stackexchange", "id": 10100, "tags": "ros" }
A Class to Supply an Open Database Connection during the life of a Web Request
Question: For some database requests, I like to use raw ADO.NET. In the context of a web request, I have created a class which provides an open IDbConnection object. I use a Dependency Injection library ("DI") to do this, scoping the object instantiated from this class to a web request. So, disposal of the object is handled by the DI container. The abstraction for this class is simple and looks like this:

public interface IDbConnectionManager : IDisposable
{
    IDbCommand BuildCommand(DbParameter[] parameters, string query);
    IDbConnection GetOpenConnection();
}

And the concrete implementation, written for Sql Server, looks like this:

public class DbConnectionManager : IDbConnectionManager
{
    public SqlConnection DbConnection;

    public string ConnectionString { get; set; }

    public DbConnectionManager(string connectionString)
    {
        ConnectionString = connectionString;
    }

    public IDbConnection GetOpenConnection()
    {
        return GetOpenSqlConnection();
    }

    private SqlConnection GetOpenSqlConnection()
    {
        if (ReferenceEquals(DbConnection, null))
        {
            DbConnection = new SqlConnection(ConnectionString);
        }
        if (DbConnection.State != ConnectionState.Open)
        {
            DbConnection.Open();
        }
        return DbConnection;
    }

    public IDbCommand BuildCommand(DbParameter[] parameters, string query)
    {
        if (parameters == null) throw new ArgumentNullException(nameof(parameters));
        if (string.IsNullOrWhiteSpace(query)) throw new ArgumentException(nameof(query));

        var command = new SqlCommand { Connection = GetOpenSqlConnection() };
        command.Parameters.AddRange(parameters);
        command.CommandText = query;
        return command;
    }

    public void Dispose()
    {
        DbConnection?.Dispose();
    }
}

You can see the BuildCommand method uses GetOpenSqlConnection for its connection. The main reason I have taken this approach is that DbConnections are expensive. But I'm not sure whether that relates to creating a connection or opening one. 
I'm aware that connection pooling is available (if enabled), but I figured for simple web requests (think API), with perhaps just 2 queries to the database, that this would probably be a good approach. And as can be seen, there's not a lot to the code. I just wanted a bit of feedback on this code/approach. And if you can see any potential problems with it, by all means let me know that too. As a last comment, I am aware that a developer could manually call Dispose or use it in a using block (thereby calling Dispose). This would be on me as lead developer to ensure that this doesn't happen and that devs understand that the DI container disposes of the object.

Answer: As far as I know, in ADO.NET connection pooling is enabled by default, and if you don't need to pool connections, you'll have to explicitly disable pooling. So, for your implementation, everything seems okay to me, but DbConnectionManager seems to be specific to handling the connection, so the BuildCommand method is odd in this class. I would prefer to rename the class to something that covers all exposed operations and keep everything simpler.

GetOpenSqlConnection() could be unnecessary, since you can do it at the property level. What I suggest is to make DbConnection & ConnectionString static, so you ensure you only have a single instance of SqlConnection, and create two constructors: one that takes a SqlConnection, and one that takes a connection string. You would have something like this:

public class DbConnectionManager : IDbConnectionManager
{
    // make it static to have a single instance; the backing field keeps the
    // property getter from calling itself recursively
    private static SqlConnection _dbConnection;
    private static SqlConnection DbConnection
    {
        get => _dbConnection is null ? (_dbConnection = new SqlConnection(ConnectionString)) : _dbConnection;
        set => _dbConnection = value;
    }

    // store the connection string of the SqlConnection, as a backup for the connection
    private static string ConnectionString { get; set; }

    public DbConnectionManager(string connectionString)
        : this(new SqlConnection(connectionString))
    {
    }

    public DbConnectionManager(SqlConnection dbConnection)
    {
        DbConnection = dbConnection;
        ConnectionString = dbConnection.ConnectionString;
    }
}

In both constructors, they're initiating a new DbConnection, and the ConnectionString is just a backup: in case your actual DbConnection is lost, you can re-initiate it with the connection string that you've stored. Doing that will eliminate the need for:

private SqlConnection GetOpenSqlConnection()
{
    if (ReferenceEquals(DbConnection, null))
    {
        DbConnection = new SqlConnection(ConnectionString);
    }
    if (DbConnection.State != ConnectionState.Open)
    {
        DbConnection.Open();
    }
    return DbConnection;
}

As for:

public IDbCommand BuildCommand(DbParameter[] parameters, string query)
{
    if (parameters == null) throw new ArgumentNullException(nameof(parameters));
    if (string.IsNullOrWhiteSpace(query)) throw new ArgumentException(nameof(query));

    var command = new SqlCommand { Connection = GetOpenSqlConnection() };
    command.Parameters.AddRange(parameters);
    command.CommandText = query;
    return command;
}

Since no actual execution is going on here, opening a connection here is not needed, because you build the command and return the instance to be executed somewhere else. This may require you to create two public methods for opening and closing the connection before you call the execute methods such as command.ExecuteNonQuery(). What I would do is maybe make this method private, and create public methods for each SQL execution type such as ExecuteNonQuery, ExecuteScalar and ExecuteReader, with the same arguments BuildCommand has, all within the same class. Something like this:

public void ExecuteNonQuery(DbParameter[] parameters, string query)
{
    var command = BuildCommand(parameters, query);
    if (DbConnection.State != ConnectionState.Open)
        DbConnection.Open();
    command.ExecuteNonQuery();
}

The best approach for that would be creating a static property of SqlCommand, then you initiate it, use it across the class, and dispose of it whenever you're done. Another question that came to my mind is this:

if (parameters == null) throw new ArgumentNullException(nameof(parameters));

Why did you make parameters required? Suppose you need to execute a query with no parameters such as SELECT * FROM table; then you would have to adjust the current implementation or create a new method for that. So keeping it optional will come in handy.
{ "domain": "codereview.stackexchange", "id": 36863, "tags": "c#, dependency-injection, asp.net-core, ado.net" }
Reaction of potassium cyanide with 2-(chloromethyl)furan
Question: What would the mechanism for the reaction between 2-(chloromethyl)furan and potassium cyanide be, as detailed below? I know that for 6-membered rings like pyridine, a $\mathrm{S_NAr}$ reaction can occur. However, I'm not sure as to how the mechanism differs between the two in a way that the cyanide group is added at the 5-position instead of the methyl group. Answer: Cyanide ion attacks the 5 position of the furan. The 4,5 double bond migrates to 3,4. The 2,3 double bond migrates to the Me group, kicking out chloride and giving the exomethylene (double bond outside the ring) intermediate. Then the bonds reorganise to re-aromatise, giving the product shown. I hope this makes sense. I have no access to a drawing package.
{ "domain": "chemistry.stackexchange", "id": 8257, "tags": "organic-chemistry, reaction-mechanism, aromatic-compounds, heterocyclic-compounds" }
Finding the ground state $L$ and $S$ quantum numbers of an atom
Question: In an example in class we were asked to determine the ground state total orbital angular momentum and total spin angular momentum quantum numbers $\textbf{L}$ and $\textbf{S}$ of nitrogen with electron configuration $$N:[\mathrm{He}]\,2s^22p^3$$ We are told to use Hund's rules, which were given as the following. Find the maximum $M_S$ consistent with the Pauli Exclusion Principle. Set $S=M_S$. For that $M_S$, find the maximum $M_L$. Set $L=M_L$. It was then presented that the result is as follows. $$\max(M_S)=\frac{3}{2}\implies S = \frac{3}{2}$$ $$\max(M_L)=0\implies L = 0$$ $$\therefore S = \frac{3}{2}\ \text{and}\ L = 0$$ I am trying to do homework problems similar to this and cannot figure out how they reasoned $L = 0$, and I was hoping to gain some clarification if at all possible. When moving to atoms with partially filled $f$ and $d$ orbitals I don't know where to start, and I think it is because I am not sure how they approached this problem. Any help clarifying this process would be much appreciated. Answer: The given spin state requires all the spins to be parallel, which means that that state (and the whole $S=3/2$ manifold by extension) is symmetric under exchange. However, the global state needs to be antisymmetric, which means that the orbital component also needs to be antisymmetric. Within a p shell, this can only be achieved by putting one electron each on the $m_L=1$, $0$ and $-1$ states (since any repetitions would vanish under antisymmetrization), and that then gives you $M_L=0$.
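The two rules as stated can be checked mechanically by enumerating the Pauli-allowed microstates of the $p^3$ configuration. A small sketch, where each orbital slot is an $(m_l, m_s)$ pair that can hold at most one electron:

```python
from itertools import combinations

# Six one-electron states of a p shell: m_l in {-1, 0, 1}, m_s in {-1/2, +1/2}.
orbitals = [(ml, ms) for ml in (-1, 0, 1) for ms in (-0.5, 0.5)]

# Pauli-allowed p^3 microstates: any 3 distinct slots.
microstates = list(combinations(orbitals, 3))

# Hund's rules as given: maximize M_S, then maximize M_L at that M_S.
max_MS = max(sum(ms for _, ms in state) for state in microstates)
max_ML = max(sum(ml for ml, _ in state) for state in microstates
             if sum(ms for _, ms in state) == max_MS)

# M_S = 3/2 forces three parallel spins, which forces m_l = -1, 0, 1, so M_L = 0.
print(max_MS, max_ML)  # 1.5 and 0: S = 3/2, L = 0 for nitrogen's p^3
```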
{ "domain": "physics.stackexchange", "id": 45281, "tags": "quantum-mechanics, homework-and-exercises, angular-momentum, atomic-physics, orbitals" }
Simple Affine cipher
Question: The affine cipher is a simple mathematical substitution cipher. If you're interested in the details behind how it works, this page goes further into detail. After writing a program to encrypt and decrypt text using the affine cipher, I felt that it seemed needlessly cluttered. I have also noticed that there are no questions related to the affine cipher on Code Review, at least not for C++. Here's the code:

#include <iostream>
#include <algorithm>
#include <numeric>
#include <cmath>

int gcd(int a, int b);
int modInverse(int a, int b); // from https://rosettacode.org/wiki/Modular_inverse#C.2B.2B

int main(){
    std::string choice;
    do{
        std::cout << "Encrypt or Decrypt? [e/d] = ";
        std::getline(std::cin, choice);
        std::transform(choice.begin(), choice.end(), choice.begin(), ::tolower);
    } while(choice.length() > 1 || choice != "e" && choice != "d");

    std::cout << "\nInput string: ";
    std::string input;
    std::getline(std::cin, input);

    int a, b;
    do{
        std::cout << "\na and b must be coprime\na = ";
        std::cin >> a;
        std::cout << "b = ";
        std::cin >> b;
    } while(std::cin.fail() || gcd(a,b) != 1);
    std::cout << '\n';

    if(choice == "e"){
        for(int i = 0; i < input.length(); ++i){
            if(input[i] >= 'a' && input[i] <= 'z'){
                std::cout << (char)((a * (input[i] - 'a') + b) % 26 + 'a');
            }
            else if(input[i] >= 'A' && input[i] <= 'Z'){
                std::cout << (char)((a * (input[i] - 'A') + b) % 26 + 'A');
            }
            else{
                std::cout << input[i];
            }
        }
    }
    else{
        for(int i = 0; i < input.length(); ++i){
            if(input[i] >= 'a' && input[i] <= 'z'){
                std::cout << (char)(modInverse(a, 26) * (26 + input[i] - 'a' - b) % 26 + 'a');
            }
            else if(input[i] >= 'A' && input[i] <= 'Z'){
                std::cout << (char)(modInverse(a, 26) * (26 + input[i] - 'A' - b) % 26 + 'A');
            }
            else{
                std::cout << input[i];
            }
        }
    }
    std::cout << '\n';
    return 0;
}

int gcd(int a, int b){
    return b == 0 ? a : gcd(b, a % b);
}

int modInverse(int a, int b){
    int b0 = b, t, q;
    int x0 = 0, x1 = 1;
    if (b == 1) return 1;
    while (a > 1) {
        q = a / b;
        t = b, b = a % b, a = t;
        t = x0, x0 = x1 - q * x0, x1 = t;
    }
    if (x1 < 0) x1 += b0;
    return x1;
}

Here is an example output for the encryption side:

Encrypt or Decrypt? [e/d] = e

Input string: Hello World!

a and b must be coprime
a = 5
b = 9

Sdmmb Pbqmy!

And here is an example output for the decryption side:

Encrypt or Decrypt? [e/d] = d

Input string: Sdmmb Pbqmy!

a and b must be coprime
a = 5
b = 9

Hello World!

I am using the modInverse() function from Rosetta Code. This is quasi-related to my ongoing series of classical ciphers. So far I have done a simple Caesar cipher and an Atbash cipher. How can I improve this code, both for readability and for efficiency?

Answer: It's very easy to understand each step in your program, even without understanding the actual algorithm you're implementing, which is really nice. Here's how I would improve things.

Use Functions

Each part of your main() function looks to me like it should be a separate named function. For example, it's clear that you're asking for whether the user wants to encrypt or decrypt, then getting the input string, then asking for a and b, then doing the encryption or decryption. Each of those things is a separate task and as such should be its own function. Your main() should look more like this:

int main() {
    std::string choice = getTaskFromUser(); // returns either 'e' or 'd'
    std::string input = getInputFromUser(); // return string to encrypt or decrypt
    int a, b;
    getAAndB(a, b); // Gets a and b
    if (choice == TASK_ENCRYPT) {
        displayEncryptedText(input, a, b);
    } else {
        displayDecryptedText(input, a, b);
    }
    std::cout << "\n";
    return 0;
}

Usability

There are a few things in your program that are named confusingly. First, I had to look up what "coprime" meant. It's not a hard concept, but I'd never heard it before. 
It might be worth printing out a one or two-sentence description of what it means when asking the user to input coprime numbers. The variables a and b may also be poorly named. What function do they serve in the algorithm? Looking at the linked page, it looks like the cypher consists of a linear equation of the form y = ax + b where a is the slope and b is the intercept. That might be a good set of names, or it might not depending on how a typical user of such an algorithm thinks about it. a and b aren't terrible given the abstractness of the algorithm, but if you can improve those names, I recommend it. Certainly, as a user, I wouldn't understand what they represent. Avoid Magic Numbers Looking at the code, I see this mysterious number "26" repeated in several places. What does it represent? As an English speaker, I can guess that it's the number of letters in the alphabet, but what if I want to include numbers and symbols? What if I want to implement this algorithm in a different language with more or fewer letters? At the very least, there should be a named constant for the value 26. I recommend something like: const int kAlphabetSize = 26; That way you can change it later. Also, the magic values a, A, z, and Z should be handled better. Do you want to preserve the case of the original text? It seems to me like it makes the cypher weaker if you do. It's additional context that someone trying to decrypt it can use. I recommend allowing at least all the (printable) ASCII characters to be used as one large alphabet. It simplifies the code as you don't end up special-casing as many things.
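To make the refactoring advice concrete, here is a rough sketch of the same cipher with the magic number factored out into a named constant (Python rather than the poster's C++, and all names here are illustrative, not from the original code):

```python
from math import gcd

ALPHABET_SIZE = 26  # named constant instead of the magic number 26

def mod_inverse(a, m):
    """Brute-force modular inverse; fine for a 26-letter alphabet."""
    for x in range(1, m):
        if (a * x) % m == 1:
            return x
    raise ValueError("a and m must be coprime")

def affine(text, a, b, decrypt=False):
    if gcd(a, ALPHABET_SIZE) != 1:
        raise ValueError("a must be coprime with the alphabet size")
    out = []
    for ch in text:
        if ch.isalpha():  # assumes ASCII letters only
            base = ord('a') if ch.islower() else ord('A')
            x = ord(ch) - base
            if decrypt:
                # D(y) = a^-1 (y - b) mod m; Python's % is non-negative
                y = mod_inverse(a, ALPHABET_SIZE) * (x - b) % ALPHABET_SIZE
            else:
                y = (a * x + b) % ALPHABET_SIZE  # E(x) = (a x + b) mod m
            out.append(chr(base + y))
        else:
            out.append(ch)  # pass non-letters through unchanged
    return "".join(out)
```

With a = 5, b = 9 this reproduces the transcript above: "Hello World!" encrypts to "Sdmmb Pbqmy!" and decrypts back.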
{ "domain": "codereview.stackexchange", "id": 23900, "tags": "c++, performance, caesar-cipher" }
General organic chemistry
Question: What is the correct order of boiling points of the following compounds? 1) butanol 2) butanal 3) butanoic acid Answer: Boiling points depend on the strength of intermolecular interactions. In the case of butanol and butanoic acid there is H bonding, which is a stronger intermolecular attractive force. Further, butanoic acid remains in dimeric form, so there is a greater extent of H bonding. So the order of boiling points is butanoic acid > butanol > butanal.
{ "domain": "chemistry.stackexchange", "id": 9542, "tags": "organic-chemistry, boiling-point" }
How do I take a conjugate transpose?
Question: In Klauber's "Student Friendly QFT" second edition page 463, the following expression for $M$ is given in equation 17-98: $$M = \left( \bar{u}_{s'_2}(p_2') \gamma_\alpha v_{s'_1}(p_1')\right)_{(l)} \frac{i e^2}{\left(p_1 + p_2\right)^2} \left(\bar{v}_{s_1}(p_1) \gamma^\alpha u_{s_2} (p_2)\right)_{(e)}$$ where subscript $e$ stands for electron and $l$ for $\mu$ or $\tau$. Then in equation 17-100: $$M^* = \left(\bar{v}_{s'_1}(p_1')\gamma_\beta u_{s'_2}(p_2')\right)_{(l)} \frac{-i e^2}{\left(p_1 + p_2\right)^2} \left(\bar{u}_{s_2} (p_2) \gamma^\beta v_{s_1} (p_1) \right)_{(e)}$$ This seems to be saying $(ABC)^* = A^* B^* C^*$, where: $$A^* = \left(\bar{v}_{s'_1}(p_1')\gamma_\beta u_{s'_2}(p_2')\right)_{(l)}$$ $$B^* = \frac{-i e^2}{\left(p_1 + p_2\right)^2}$$ $$C^* = \left(\bar{u}_{s_2} (p_2) \gamma^\beta v_{s_1} (p_1) \right)_{(e)}$$ Why isn't it the following (because $(ABC)^* = C^* B^* A^*$): $$\left(\bar{u}_{s_2} (p_2) \gamma^\beta v_{s_1} (p_1) \right)_{(e)} \frac{-i e^2}{\left(p_1 + p_2\right)^2} \left(\bar{v}_{s'_1}(p_1') \gamma_\beta u_{s'_2}(p_2')\right)_{(l)} $$ I know these expressions are equal to each other because after equation 17-100 the book notes that the quantities inside the parentheses with subscripts (e) and (l) are scalars and so can be placed anywhere in the expression. But my point is that equation 17-100 appears before the rearrangement of the terms, not after it. Answer: The result from the book seems correct.
$M$ is a scalar composed of three factors (I won't write the $p_i$ dependencies in the spinors explicitly, for ease of reading): $\bar{u}_{s_2^\prime}\gamma_\alpha v_{s_1^\prime} \equiv z_1$ $\frac{ie^2}{\left(p_1+p_2\right)^2} \equiv z_2$ $\bar{v}_{s_1}\gamma^\alpha u_{s_2} \equiv z_3$ Calling each scalar $z_i$, we can say: $$ M = z_1z_2z_3 \implies M^*=z_1^*z_2^*z_3^*$$ The term $z_2$ is pretty easy to conjugate (it is a purely imaginary number): $$z_2^*=-z_2$$ Now, for the term $z_1$, for example, we have to use the property that you mentioned, $\left(ABC\right)^\dagger=C^\dagger B^\dagger A^\dagger$: $$z_1^*= \left(v_{s_1^\prime}\right)^\dagger\left(\gamma_\alpha\right)^\dagger \left( \bar{u}_{s_2^\prime}\right)^\dagger$$ Using the properties of the $\gamma$ matrices: $$\left(\gamma^\mu\right)^\dagger=\gamma^0\gamma^\mu\gamma^0$$ $$\left(\gamma^0\right)^\dagger=\gamma^0$$ We can simplify to: $$z_1^*=\left[\left(v_{s_1^\prime}\right)^\dagger\gamma_0\right]\gamma_\alpha\gamma_0 \left[u_{s_2^\prime}^\dagger \gamma_0\right]^\dagger$$ And finally: $$z_1^*=\left[\bar{v}_{s_1^\prime}\right]\gamma_\alpha\gamma_0 \gamma_0\left[u_{s_2^\prime}\right]$$ Using that $\gamma_0^2=1$ we obtain the expression from the book: $$z_1^*=\left[\bar{v}_{s_1^\prime}\right]\gamma_\alpha\left[u_{s_2^\prime}\right]$$ You can now work out the same strategy for the $z_3$ term and obtain exactly the expression from the book.
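The two gamma-matrix properties used in this derivation are easy to verify numerically in an explicit representation. A quick sanity check (mine, not from the book) in the Dirac representation:

```python
import numpy as np

Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac representation: gamma^0 = diag(I, -I), gamma^i built from Pauli blocks
gamma0 = np.block([[I2, Z], [Z, -I2]])
gammas = [gamma0] + [np.block([[Z, s], [-s, Z]]) for s in (sx, sy, sz)]

for g in gammas:
    # (gamma^mu)^dagger = gamma^0 gamma^mu gamma^0
    assert np.allclose(g.conj().T, gamma0 @ g @ gamma0)

# (gamma^0)^2 = 1, used in the last simplification step
assert np.allclose(gamma0 @ gamma0, np.eye(4))
```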
{ "domain": "physics.stackexchange", "id": 89342, "tags": "quantum-electrodynamics, scattering, fermions, dirac-equation" }
Why does the shadow of the thumb (formed at the bottom surface of a beaker) disappear when it is at the interface of air and water?
Question: Switch on the light bulb in your room and take a beaker filled with water. Now, slowly move your thumb towards the top surface of the water; notice that a sharp shadow of the thumb is formed at the bottom surface of the beaker. When the thumb is just touching the interface separating air and water, the shadow of some part of the thumb disappears. Now dip the thumb in the water and notice that the shadow of the thumb reappears. What happens at the interface which causes the shadow of the thumb to disappear? Answer: Hydrophile + surface tension + refraction
{ "domain": "physics.stackexchange", "id": 61515, "tags": "optics" }
What is the physical significance of Dipole moment?
Question: What does the dipole moment physically signify? I know it is the product of 2 charges and the distance between them. Momentum, for example, is the product of mass and velocity, but physically it represents the quantity of motion: the larger the momentum, the harder it is to stop a moving body. I know this question has been asked before, but no one has answered its physical significance. Answer: Actually, the dipole moment is a vector which is the product of the charge magnitude and the displacement vector pointing from the negative to the positive charge. An electric dipole consists of 2 equal magnitude, opposite-signed charges. The physical significance is it gives a measure of the polarity/polarization of a net neutral system. If the dipole moment is small, either the charges are small or the separation is small. The electric field due to the polarization will be small. If the polarization is large (large charges/large separation), the electric field will be distinctly non-monopole. The dipole moment also measures the tendency of a dipole to align with an external electric field. The moment will be parallel to the local field, and weak dipoles are easily twisted out of alignment by external work such as mechanical vibration or thermal effects.
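As a small numeric illustration of the definition in the answer (all values here are made up): the moment points from the negative to the positive charge, and the torque p x E twists the dipole toward alignment with an external field:

```python
def cross(a, b):
    """3-vector cross product."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

q = 1.6e-19                   # charge magnitude, C (made-up system)
r_plus = (0.0, 1e-10, 0.0)    # position of +q, m
r_minus = (0.0, -1e-10, 0.0)  # position of -q, m

# dipole moment: charge magnitude times the vector from -q to +q
p = tuple(q * (a - b) for a, b in zip(r_plus, r_minus))

E = (1e5, 0.0, 0.0)           # external field, V/m
torque = cross(p, E)          # twists p toward alignment with E
```

Once p is parallel to the field the torque vanishes, which is the alignment tendency described above.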
{ "domain": "physics.stackexchange", "id": 52115, "tags": "electrostatics, electricity, dipole, dipole-moment" }
Repulsion force of a magnetic field and a positive ion
Question: I'm trying to figure this out but my understanding is limited, so I'd appreciate any help. For the sake of simplicity, let's say I've stripped away the electron from an atom of hydrogen, leaving only the proton. The charge of a proton is $1.6 × 10^{-19}$ C. According to Coulomb's Law, $$F= k\frac{qQ}{r^2},$$ so $k = 8.99 × 10^9$. Let's say $r = 0.5$ m. Now, we have a magnetic field with a flux density (B) of $1.5$ T oriented perpendicular to the positively ionized hydrogen atom. How can I figure out the net force, and how much force the ion exerts on the magnet and the magnet exerts on the ion? Answer: Coulomb's law gives you the forces between two interacting charges. But in your question, you have one charge and a magnetic field. So that law won't help. You instead need the other half of the Lorentz Force Law: $\vec{F}_{magnetic} = q\vec{v} \times \vec{B}$, where $v$ is the velocity of the particle, and $B$ is the strength of the magnetic field. Both are vectors, so you need the cross product. You assumed a distance $r$ in your question, but it is unclear what that distance represents.
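A quick numeric sketch of the answer's point (the velocity is an assumed value, since the question doesn't give one; with zero velocity there is no magnetic force at all):

```python
def cross(a, b):
    """3-vector cross product."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

q = 1.6e-19          # proton charge, C
v = (1e5, 0.0, 0.0)  # assumed velocity, m/s (not given in the question)
B = (0.0, 0.0, 1.5)  # flux density from the question, T

# F = q v x B; magnitude q*v*B when v is perpendicular to B
F = tuple(q * c for c in cross(v, B))
```

Note that if v were parallel to B the force would vanish, which is why the geometry matters and a separation r alone tells you nothing.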
{ "domain": "physics.stackexchange", "id": 49338, "tags": "homework-and-exercises, magnetic-fields, charge, ions" }
Explanation of how DeepExplainer works to obtain SHAP values in simple terms
Question: I have been using DeepExplainer (DE) to obtain the approximate SHAP values for my MLP model. I am following the SHAP Python library. Now I'd like to learn more about the logic behind DE. From the relevant paper it is not clear to me how the SHAP values are obtained. I see that a background sample set is given, an expected model output is calculated based on this data, and the difference from the current model's output is calculated. This difference is the sum of the SHAP values. However, I don't understand how each contribution is obtained. Could you give an explanation in simple terms? Answer: From https://en.wikipedia.org/wiki/Shapley_value, it is possible to understand that direct computation of Shapley values is difficult with their general formula: $$ \varphi_i(v) = \frac{1}{\text{number of players}} \sum_{\text{coalitions excluding }i} \frac{\text{marginal contribution of }i\text{ to coalition}}{\text{number of coalitions excluding } i \text{ of this size}}$$ Basically because the number of coalitions excluding $i$ grows in complexity with $\sum_{k=1}^{n-1} k!$, where $n$ is the number of variables. Some progress has been made in the direction of evaluating this sum with Monte-Carlo techniques (as mentioned in https://christophm.github.io/interpretable-ml-book/), but those calculations are still intensive. In their article (http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions) Lundberg and Lee propose two new approaches, relying on SHAP values (these are the Shapley values of a conditional expectation function of the original model): A model-agnostic approach, basically rewriting the problem as a linear regression problem, which is intuitively less expensive to compute. Basically saying that SHAP is a function of the weights of the model and trying to approximate it. A model-specific approach. Assuming input independence (which is rarely true ...) they show how to compute SHAP values directly from model weights.
Starting with linear models, they derive similar relations for neural networks using the usual propagation techniques. As to which method is used for an MLP, I don't know exactly, but the second one seems more appropriate (model-specific, exact method).
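For intuition (a toy illustration, not what DeepExplainer actually runs), the general formula can be evaluated exactly for a tiny game by averaging marginal contributions over all player orderings; the "contributions sum to the total difference" property mentioned in the question falls out automatically:

```python
from itertools import permutations

def shapley(players, value):
    """Exact Shapley values: average marginal contribution over all orderings."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            # marginal contribution of p when joining this coalition
            phi[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in phi}

# Toy "model": the payoff of a coalition is the sum of its members' weights.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
phi = shapley(list(weights), lambda S: sum(weights[p] for p in S))
```

For this additive game each player's Shapley value is just its own weight, and the values sum to the grand-coalition payoff (the efficiency property); the factorial blow-up in the number of orderings is exactly why real implementations approximate.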
{ "domain": "datascience.stackexchange", "id": 6827, "tags": "neural-network, deep-learning, explainable-ai, shap" }
Publish a numpy.array in a UInt8MultiArray in python
Question: Hi all, Pretty new to python and ROS. I have a node that subscribes to some data, in the callback it then does some calculations on that data and creates a new np.array. I then want to publish this new array in a UInt8MultiArray with a fixed size so that I can receive it with another node and send it over UDP. The example below doesn't include the calculations but the output of the code at the moment is [[ 31.41386309 292.95704286 2.44705586]] <type 'numpy.ndarray'> I need to send these values with a fixed length over a Uint8MultiArray so that I can pack it and send it over UDP. I've tried multiple ways but can't seem to get it to work. I was wondering if someone can help with some example code or ideas/best ways. #!/usr/bin/env python import rospy from std_msgs.msg import UInt8MultiArray import struct import math import numpy as np def callback(data): #Calculations before to create enu with data from callback. enu = ** print(enu) print(type(enu)) pub_packet.publish(gps_enu) def lla2enu(): global gps_enu, pub_packet rospy.init_node('lla2enu', anonymous=True) rate = rospy.Rate(250) # 10hz rospy.Subscriber("packet", UInt8MultiArray, callback) pub_packet = rospy.Publisher('gps_enu', UInt8MultiArray, queue_size=10) rospy.spin() if __name__ == '__main__': try: lla2enu() except rospy.ROSInterruptException: pass Any help is much appreciated. Thanks guys. Originally posted by DRos on ROS Answers with karma: 23 on 2018-08-22 Post score: 0 Original comments Comment by gvdhoorn on 2018-08-22:\ in a UInt8MultiArray with a fixed size pedantic, but: UInt8MultiArray by definition cannot have a fixed size. Comment by DRos on 2018-08-22: @gvdhoorn, thanks for the reply. Maybe I can manipulate the data inside enu to be of a specific size to guarantee the size of the UInt8MultiArray? Comment by gvdhoorn on 2018-08-22: My comment was slightly unfair: I just meant to say that UInt8MultiArray uses unbounded lists for its fields, so by definition those do not have a fixed size. 
You can of course specify a certain size to be used, but that would be completely at the application level (and not enforced in/by ROS). Comment by gvdhoorn on 2018-08-22: I would also think that the serialisation to your UDP packet/datagram would be orthogonal to the ROS msg type you're intending on using. If an incoming msg doesn't "fit", you could ignore it / raise an error, etc. Also note btw: *MultiArray is an extremely bad choice for a topic. It has .. Comment by gvdhoorn on 2018-08-22: .. absolutely no semantics associated, other than that it is a nD array with fields of a certain type. That makes interpretation of the data completely dependent on information external to the msg, which goes against best practice in ROS. Comment by DRos on 2018-08-22: Thanks a lot for the replies. What msg type would you recommend for a topic that just wants to publish 3 float values? Comment by gvdhoorn on 2018-08-22: That completely depends on the semantics. What do those values represent? Comment by DRos on 2018-08-22: These values represent a local x,y,z value in metres. Comment by gvdhoorn on 2018-08-22: Then I would suggest to use an appropriate msg from the geometry_msgs pkg, such as PointStamped or perhaps even PoseStamped. I'm recommending the Stamped varieties, as the non-stamped ones don't provide a reference frame. Even if the value is a .. Comment by gvdhoorn on 2018-08-22: .. "local coordinate", that sounds like something that would still be relative to something. The stamped msgs allow you to capture that. The non-stamped versions don't. Comment by DRos on 2018-08-22: Thanks gvdhoorn, I'll take a look at the links you've mentioned below with the numpy support in rospy and the Point/PoseStamped msgs. I'll get back to you if successful. Answer: Haven't used it myself, but perhaps eric-wieser/ros_numpy can help here. And rospy itself also has support for numpy. See rospy.numpy_msg and Using numpy with rospy fi.
Originally posted by gvdhoorn with karma: 86574 on 2018-08-22 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by DRos on 2018-08-24: Thanks for this. Using the talker example and numpy_msg(Floats) as a message type worked perfectly. Thank you! Comment by gvdhoorn on 2018-08-24: Glad it worked for you. Just noticed this btw in your code: rate = rospy.Rate(250) # 10hz 250 != 10 Hz. Comment by DRos on 2018-08-24: Ahh yes, it's carried over from another node. Poor documentation, my bad! Comment by gvdhoorn on 2018-08-24: Also: the rate is not even being used.
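On the fixed-size UDP part of the question (plain Python, nothing ROS-specific): the stdlib struct module gives the three floats a fixed wire format regardless of their values, which is the guarantee the asker was after:

```python
import struct

enu = (31.41386309, 292.95704286, 2.44705586)  # values from the question

# little-endian, three float64s -> always exactly 24 bytes on the wire
packet = struct.pack("<3d", *enu)
x, y, z = struct.unpack("<3d", packet)
```

The packed bytes can then be handed straight to a UDP socket; unpacking on the receiving side recovers the original doubles exactly.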
{ "domain": "robotics.stackexchange", "id": 31609, "tags": "python, ros-kinetic" }
Asymmetric top with time-varying moment of inertia
Question: Studying the stability of a free asymmetric top is a usual exercise when one is learning rigid-body motion. One learns that a body will rotate stably about the axis with the largest or the smallest moment of inertia, but not about the intermediate axis. In such an exercise it is considered that $I_1\neq I_2\neq I_3$ are constants. But what would happen if one of those moments of inertia were varying in time? Say, a top with moments $I_1(t)>I_2>I_3$, where $I_{2}$ and $I_3$ are constants. I tried solving Euler's equations of motion, considering that the top spins about one of the axes, but obtained coupled equations for the angular velocity components and don't know how to solve them. Because of this, I can't tell how the stability of the rigid body would behave in this case. Answer: I think that the question "what would be the effect of a time-varying moment of inertia on stability?" can only be answered definitively by defining the function $I_1(t)$ and examining the equations. It would not be necessary to solve the equations, just examine critical cases. My own intuition is that if $I_1(t)$ is periodic with a frequency much different from the rotation rate of the body, it is likely to remain stable even for large amplitude (provided that $I_1 \gt I_2$). Whereas if the frequency of $I_1$ is close to the rotation rate of the body, then it is likely to become unstable even for small values of amplitude. But this needs to be verified by looking at the equation of motion.
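As a baseline for such an investigation, here is a rough numerical sketch (mine, not from the answer) of the constant-$I$ case that any time-varying treatment must reduce to: integrating the torque-free Euler equations $I_1\dot\omega_1=(I_2-I_3)\omega_2\omega_3$ (and cyclic) reproduces the stable/unstable behaviour described in the question. A time-dependent $I_1(t)$ would add an extra $\dot I_1\omega_1$ term to the first equation. The inertia values and step size below are arbitrary choices:

```python
def rhs(w, I):
    """Torque-free Euler equations, body frame."""
    I1, I2, I3 = I
    w1, w2, w3 = w
    return ((I2 - I3) * w2 * w3 / I1,
            (I3 - I1) * w3 * w1 / I2,
            (I1 - I2) * w1 * w2 / I3)

def max_off_axis(w, axis, I, dt, steps):
    """RK4-integrate and track the largest off-axis spin component seen."""
    worst = 0.0
    for _ in range(steps):
        k1 = rhs(w, I)
        k2 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(w, k1)), I)
        k3 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(w, k2)), I)
        k4 = rhs(tuple(x + dt * k for x, k in zip(w, k3)), I)
        w = tuple(x + dt / 6.0 * (a + 2 * b + 2 * c + d)
                  for x, a, b, c, d in zip(w, k1, k2, k3, k4))
        worst = max(worst, *(abs(w[j]) for j in range(3) if j != axis))
    return worst

I = (3.0, 2.0, 1.0)  # largest, intermediate, smallest
eps = 1e-6           # tiny perturbation of the spin axis

stable = max_off_axis((1.0, eps, eps), 0, I, 2e-3, 15000)    # largest axis
unstable = max_off_axis((eps, 1.0, eps), 1, I, 2e-3, 15000)  # intermediate axis
```

The perturbation stays at the 1e-6 level in the first case but grows by many orders of magnitude in the second: the classic intermediate-axis instability that a given $I_1(t)$ could then be tested against.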
{ "domain": "physics.stackexchange", "id": 36784, "tags": "homework-and-exercises, classical-mechanics, moment-of-inertia, rigid-body-dynamics" }
Difference between pressure and stress tensor
Question: What is the difference between hydrostatic pressure and stress tensor? Answer: The hydrostatic pressure is one-third of the trace of the stress tensor. In other words, the pressure is the average of the normal stresses on a given volume. If you are using equations that include both pressure and a stress tensor, care must be taken to make sure that the diagonal components of the stress tensor don't also include the pressure terms. The off-diagonal terms of the stress tensor are the shear stresses on the element. So the differences between the stress tensor and the pressure are: Pressure is a scalar, the stress tensor is a tensor Pressure is only based on the normal stress on the element, the stress tensor includes both normal and shear terms
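A small numeric illustration (the stress values are made up): taking the mean of the normal stresses gives the pressure, and subtracting it off the diagonal leaves a traceless deviatoric part carrying the shear:

```python
sigma = [[-5.0,  2.0,  0.0],
         [ 2.0, -3.0,  1.0],
         [ 0.0,  1.0, -1.0]]  # a symmetric stress tensor

# mean normal stress = one-third of the trace (sign conventions vary by field)
p = sum(sigma[i][i] for i in range(3)) / 3.0

# deviatoric part: subtract the hydrostatic piece from the diagonal only
deviator = [[sigma[i][j] - (p if i == j else 0.0) for j in range(3)]
            for i in range(3)]
```

The off-diagonal (shear) entries are untouched by the decomposition, and the deviator's trace is zero by construction.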
{ "domain": "physics.stackexchange", "id": 12121, "tags": "thermodynamics, pressure, stress-energy-momentum-tensor, fluid-statics" }
EM Field tensor of a point charge
Question: If I say the Reissner-Nordstrom metric $$ ds^2=-\left(1-\frac{2m}{r}+\frac{e^2}{r^2}\right)\text d t^2 + \left( 1-\frac{2m}{r}+\frac{e^2}{r^2}\right)^{-1}\text d r^2 + r^2 \text d \Omega^2 $$ is the solution of the Einstein equation $G_{\mu\nu}=8\pi T_{\mu\nu}$ of a point charge, where $$ T_{\mu\nu} = \frac{1}{4\pi}\left( F_{\alpha\mu}F^\alpha_{\phantom \alpha \nu} - \frac 1 4 g_{\mu\nu} F^{\alpha\beta}F_{\alpha\beta } \right)\;, $$ what does $F$ look like? Of course I need to write down $F$ for a point charge with mass $m$ and charge $q$. In one source I found that for a point charge we have $$ A = A_\mu \text d x^\mu = \frac e r \text d r\;, $$ where $$ F_{\mu\nu} = \nabla_\mu A_\nu - \nabla_\nu A_\mu\;. $$ But when I calculate it, I find: $$ F_{\mu\nu} = \partial_\mu \left( \delta_{\nu r}\frac{e}{r} \right) - \partial_\nu \left( \delta_{\mu r}\frac{e}{r} \right) = -\frac{e}{r^2}(\delta_{\nu r}\delta_{\mu r} - \delta_{\mu r}\delta_{\nu r}) = 0\;. $$ So $F$ of a point charge is zero? Answer: If four-vector notation is less intuitive, then refer back to three vectors \begin{align*} \vec{E} &= - \vec{\nabla}\phi - \frac{\partial\vec{A}}{\partial t} \\ \vec{B} &= \vec{\nabla}\times\vec{A} \end{align*} For a static point particle \begin{align*} \vec{E} &= \frac{e}{r^2}\hat{r}\\ \vec{B} &= 0 \end{align*} The solution up to gauge transformation is what you already know \begin{align*} A_0 &= \phi = \frac{e}{r}\\ \vec{A} &=0 \end{align*} Or $A = \frac{e}{r}dt$. Therefore, as @Holographer suggested, the appropriate potential should be $A_\mu = (\frac{e}{r}, 0 ,0 , 0)$
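A quick finite-difference check (mine, not from a textbook) that the answer's potential gives a nonzero field tensor with the expected Coulomb fall-off: for a static field with $A_r = 0$, the only nonzero component is $F_{rt} = \partial_r A_t = -e/r^2$:

```python
e = 1.0   # charge (geometrised units, illustrative)
r = 2.0   # radius at which to evaluate
h = 1e-6  # finite-difference step

A_t = lambda rr: e / rr                     # the only nonzero component of A
F_rt = (A_t(r + h) - A_t(r - h)) / (2 * h)  # partial_r A_t; A_r = 0, static
```

So once the e/r multiplies dt instead of dr, the tensor is not zero and reproduces the 1/r^2 Coulomb behaviour.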
{ "domain": "physics.stackexchange", "id": 21000, "tags": "homework-and-exercises, general-relativity, differential-geometry, reissner-nordstrom-metric" }
Language of all sequences of permutations whose product is the identity
Question: Let $\Sigma$ be the set of all permutations in $S_n$. What is the minimum number of states in a DFA accepting the language of all words over $\Sigma$ which multiply to the identity permutation? For example, if $n=2$, then $\Sigma$ consists of two mappings, the identity mapping $\iota$ and the transposition $\tau$. The language in this case consists of all words containing an even number of $\tau$'s. Answer: You haven't stated what happens to the empty word – I'm assuming it's accepted. It is easy to check that the equivalence classes of the Myhill–Nerode relation correspond to all words multiplying to a certain permutation. Therefore the minimal DFA contains $n!$ states. If the empty word is not allowed, there are $n!+1$ equivalence classes, one of them consisting only of $\epsilon$.
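The Myhill-Nerode count can be checked by brute force for small $n$ (my sketch): take the states to be "the product of the letters read so far" and count what is reachable:

```python
from itertools import permutations

def dfa_states(n):
    """Reachable states when each state is the product of the input so far."""
    alphabet = list(permutations(range(n)))  # all of S_n, as tuples
    # composition (p o q)(i) = p[q[i]]; any fixed convention works here
    compose = lambda p, q: tuple(p[q[i]] for i in range(n))
    identity = tuple(range(n))
    states, frontier = {identity}, [identity]
    while frontier:
        s = frontier.pop()
        for a in alphabet:
            t = compose(s, a)
            if t not in states:
                states.add(t)
                frontier.append(t)
    return len(states)
```

Since the alphabet is the whole group, every permutation is reachable and the count comes out to n!; the single accepting state is the identity (which also accepts the empty word).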
{ "domain": "cs.stackexchange", "id": 13303, "tags": "automata, finite-automata" }
How to find the work done by gravity without using calculus?
Question: The problem is as follows: We have a rocket with a mass of $m$, which left the earth and came to a height $h$ with the terminal velocity $v$. I need to calculate the work done by the force of gravity over the entire flight. I must not use calculus. I don't really know how to go about this. The best idea is to state that the change in energy is equal to the work done to the rocket, i.e. $\Delta E=W$. I have enough information to compute $\Delta E$, but I think that $W$ is the total work done on the rocket, both from the force of gravity and the force that pulls the rocket upwards. Answer: Since gravity is a conservative force, the net work done by gravity will be initial P.E. - final P.E. (by P.E. I mean gravitational potential energy). By saying that the rocket reaches a terminal velocity, I believe the question means that the rocket is 'out of' the gravitational field of the earth, or the final P.E. $\approx$ $0$. Thus, your answer should be $-mgr_e$, where $r_e$ is the radius of the earth: the work done by gravity on the ascent is negative, since gravity opposes the motion.
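A numeric check with assumed sample values (my numbers, not from the problem): with $U = -GMm/r$ taken to be zero at infinity, the magnitude of gravity's work over the whole ascent is $GMm/r_e = mgr_e$, and its sign is negative since gravity opposes the motion:

```python
G = 6.674e-11  # gravitational constant, SI units
M = 5.972e24   # mass of the Earth, kg
r_e = 6.371e6  # radius of the Earth, m
m = 1000.0     # assumed rocket mass, kg

U = lambda r: -G * M * m / r  # gravitational P.E., zero at infinity
W_by_gravity = U(r_e) - 0.0   # initial P.E. minus final P.E. (full escape)
g = G * M / r_e**2            # the surface gravity these numbers imply
```

For this 1000 kg rocket the magnitude works out to about 6.3e10 J, and it equals m*g*r_e exactly, since m*g*r_e = m*(GM/r_e^2)*r_e = GMm/r_e.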
{ "domain": "physics.stackexchange", "id": 29451, "tags": "homework-and-exercises" }
Writing a utility class for converting between datetime and timestamp
Question: I'm writing a Python application that frequently uses datetime and Unix timestamps. I know Python is 'batteries included', however, I found that converting between datetime and timestamp in Python 2.6 is not trivial. That's why I wrote this utility class. This code works as expected, for Python 2.6, 2.7, and 3.4. I want the style of my code reviewed. I don't know whether my function names like dttomicrotimestamp and code styles (like @staticmethod) are suitable for a pythonic way of coding. #!usr/bin/python # -*- coding: utf-8 -*- from __future__ import absolute_import, print_function, division import time import datetime import pytz from tzlocal import get_localzone class datetimeutil: @staticmethod def strtodt(text): try: dt = datetime.datetime.strptime(text, "%Y-%m-%d %H:%M:%S.%f") except: dt = datetime.datetime.strptime(text, "%Y-%m-%d %H:%M:%S") return get_localzone().localize(dt) @staticmethod def dttotimestamp(dt): if dt.tzinfo is None: dt = get_localzone().localize(dt) return datetimeutil._totalseconds( (dt - dt.utcoffset()).replace(tzinfo=None) - datetime.datetime(1970, 1, 1)) @staticmethod def dttomicrotimestamp(dt): return int(datetimeutil.dttotimestamp(dt) * 1e6) @staticmethod def timestamptodt(timestamp): return get_localzone().localize(datetime.datetime.fromtimestamp(timestamp)) @staticmethod def microtimestamptodt(microtimestamp): return datetimeutil.timestamptodt(microtimestamp / 1e6) @staticmethod def _totalseconds(timedelta): return ((timedelta.seconds + timedelta.days * 24 * 3600) * 1e6 + timedelta.microseconds) / 1e6 Answer: As you apparently suspect already, having a class full of static methods isn't really Pythonic. The only reason to ever do that is for namespacing, but this should all go into a module anyway, so wrapping all the functions in a class is just unnecessary. Flat is better than nested. You use the abbreviation dt a lot in the function names. Don't.
The only time you should use abbreviations like that is if they are more recognisable than spelling out what they stand for (the canonical example is "do use HTTP"). A lot of them probably don't need to mention datetime at all in the name, given they're in a module called dateutil and their only argument is a datetime object. Similarly, you use the abbreviation "microtimestamp", which is a little opaque. It looks like you mean "Like a Posix timestamp but measured in microseconds", so a more explicit thing to call it is microsecond_timestamp or microsecond_offset. With those considerations in mind, and with a view to emphasising what the functions do, I would rename them thus: strtodt -> parse_datetime dttotimestamp -> timestamp dttomicrotimestamp -> microsecond_timestamp timestamptodt -> datetime_from_timestamp microtimestamptodt -> datetime_from_microsecond_timestamp Do the microsecond ones really need to be functions at all? They seem like they'd be used only rarely (probably at the end points, reading and writing to a database), and since they're one trivial calculation, a small comment at the call site would be sufficient. Some of these really seem like they belong as instance methods on the datetime object. In fact, a lot of them are in newer stdlib, and the entire point of your code is to also support versions of Python from before that was the case. So, it might be a good idea to simply offer a compatible interface. There's a few ways to do that. First, the stdlib datetime module is implemented in Python, so you could include a newer version of it wholesale and have your dtutil module do this: if sys.version_info < (2,7,0): # Use the newer datetime module even # on older versions of Python. import datetime34 as datetime else: import datetime Second, you could inherit from the stdlib one and add the missing functionality. 
If you want to add _totalseconds directly to timedelta this way as well, you would also have to override __sub__ and __rsub__ to turn any timedelta object they are about to return into your extended timedelta. I was going to suggest monkey patching as another option (this being one limited context where it does seem worth it), but it turns out you can't (datetime and timedelta use __slots__). I only recommend adding the strict compatibility things this way. The idea is to set it up so that if you stop supporting 2.6 down the track, you can delete this code without having to adjust anything else, except maybe some imports. parse_datetime in particular ought to stay separate, especially because it fairly specifically enforces/assumes your local policy ("serialised dates will be in one of these two formats"). In this function: def strtodt(text): try: dt = datetime.datetime.strptime(text, "%Y-%m-%d %H:%M:%S.%f") except: dt = datetime.datetime.strptime(text, "%Y-%m-%d %H:%M:%S") return get_localzone().localize(dt) avoid bare except:, it can mask bugs, and even cause some - eg, you might catch and ignore KeyboardInterrupt. except ValueError: instead. Consider adding a docstring to explicitly say that it tries two formats, as well as that it gives you a timezone-aware datetime. def dttotimestamp(dt): if dt.tzinfo is None: dt = get_localzone().localize(dt) return datetimeutil._totalseconds( (dt - dt.utcoffset()).replace(tzinfo=None) - datetime.datetime(1970, 1, 1)) The function before this guarantees that it will give an aware datetime, so I think it would be reasonable for this one to assume that is what it is given. Then if you're using the newer version of the module (or faking a compatible interface), then this is just: dt.timestamp(). You import time and pytz, but never use them. Drop those, they're just noise.
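Pulling the parsing advice together, the module-level version might end up looking something like this (my sketch; the tzlocal localisation step is left out here to keep it dependency-free):

```python
import datetime

# Local policy: serialised dates arrive in one of these two formats.
_FORMATS = ("%Y-%m-%d %H:%M:%S.%f", "%Y-%m-%d %H:%M:%S")

def parse_datetime(text):
    """Parse either supported serialised format; raise ValueError otherwise."""
    for fmt in _FORMATS:
        try:
            return datetime.datetime.strptime(text, fmt)
        except ValueError:  # specific exception; a bare except: would mask bugs
            pass
    raise ValueError("unrecognised datetime: %r" % (text,))
```

Unparseable input now raises a ValueError of its own instead of letting the second strptime's error escape, and the docstring states the two-format policy up front.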
{ "domain": "codereview.stackexchange", "id": 15074, "tags": "python, python-3.x, datetime, unix" }
Slight variation in NMR integrals of CH protons vs. CH2 or CH3
Question: While acquiring some NMRs I find that there is a consistent pattern in that protons attached to sp2 carbons tend to have slightly smaller integrals than protons attached to sp3 carbons: (peaks in aromatic region are from a para-disubstituted benzene ring; peak at 4.92 ppm is from a trisubstituted alkene; 4.50 is a NH; the remainder are aliphatic CH2's or CH3's.) I calibrated the total sum of integrals to 21; one can see from the spectrum that the aromatic and olefinic peaks are slightly smaller than integral values, and the others are slightly larger. Obviously it doesn't hinder the analysis of the spectrum, but it was an interesting trend (the same was observed in multiple spectra). Am I reading too much into this, or is there a reason behind it? I strongly suspect there is something - perhaps related to the marginally slower relaxation of these protons, since there are no geminal protons (only vicinal protons)? If it is of relevance - a 60° pulse is used (pulse program zg60). The number of scans ns is 16, and the relaxation delay d1 is 1 second. I'm happy to provide any other necessary acquisition parameters, or to provide the full structure of the compound if it's necessary. P.S. I found something in Findeisen & Berger's 50 and More Essential NMR Experiments, in which they mention that the integral of an aromatic proton in strychnine is smaller than expected as it has the longest relaxation time. Even though the authors used ns = 1, too short a delay between the receiver gain adjust and the scan was used, which led to incomplete relaxation and hence a smaller integral. Answer: That's to be expected: Fewer protons as spatially close neighbours, less dipolar coupling, slower relaxation. For a relaxation/recycle delay of $\pu{1 s}$ and $T_1$ of perhaps $\pu{3 s}$, the Ernst angle is already only $44^\circ$, so your measurement is far from optimal anyway. Use a shorter pulse or longer recycle delay.
At one second, you are anyway dangerously close to generating echoes which overlay the signal from further scans, distorting the accumulated spectrum.
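The Ernst-angle figure quoted in the answer can be reproduced directly (assuming $T_1 = \pu{3 s}$ and a repetition time of $\pu{1 s}$, as the answer does): the optimum flip angle satisfies $\cos\theta_E = e^{-T_R/T_1}$:

```python
import math

T1 = 3.0  # assumed longitudinal relaxation time, s
TR = 1.0  # repetition (recycle) time, s

# Ernst angle: the flip angle maximising steady-state signal for given TR/T1
theta_E = math.degrees(math.acos(math.exp(-TR / T1)))
```

This gives roughly 44 degrees, so a 60-degree pulse with a 1 s delay over-flips relative to the optimum for the slowest-relaxing protons, shrinking their integrals.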
{ "domain": "chemistry.stackexchange", "id": 8991, "tags": "organic-chemistry, nmr-spectroscopy" }
Why is all of the energy from a battery stored on an inductor but only 50% on a capacitor?
Question: I am learning about inductors and capacitors and we derived the energy stored on a capacitor to be 50% of that delivered by the battery. We did this considering a circuit of a capacitor connected to a battery and resistor in series, to not encounter the problem of an infinite initial current if we assumed there was no other resistor in the circuit. However our lecturer assured us that no matter how small the resistance in the circuit (even if it is just the small resistance of the wires), exactly 50% of energy would be lost. This made sense to me from the mathematics. I assume that in the case of a capacitor it is impossible to consider the theoretical case with no initial circuit resistance as you get infinities popping up in the mathematics? Then we considered an inductor charging in a simple circuit consisting of just a battery and an inductor, and found that all of the energy from the battery is stored on the inductor. I appreciate that this is just a theoretical treatment and that some energy would be lost in the wires/internal resistance of the battery, and I also understand why a similar theoretical treatment of the capacitor case is impossible; however I can't think of the fundamental reason as to why it is completely impossible to charge a capacitor with anything but 50% of the battery energy whereas an inductor could theoretically store 100%. Answer: When you try to force current through a superconducting inductor, the change of current will generate a back emf that will limit how much current can flow. The value of this back e.m.f. is $-L\frac{dI}{dt}$, and the work done by the current is the product of the current and the back emf. If the back emf is exactly equal to the voltage of the battery, current can flow (and can keep increasing - the rate of change of current is $\frac{dI}{dt}=-\frac{V}{L}$ ). 
This shows the current will increase linearly as all the energy of the power source is converted to magnetic energy - there is no need for a "loss" of energy in the transfer of energy from a battery to an inductor. By contrast, when you start charging a capacitor, its initial voltage is zero. Electrons that start off with the full potential of the battery will have to lose most of that energy on their way to the capacitor, where they will only have a very small initial potential (since V=Q/C, and Q starts out at 0). So in the inductor, the energy is actually stored in the B field; in the capacitor, it is stored in the electrons that came from the battery. If you could "ramp" your battery (make its voltage increase as the capacitor is charging) you would be able to get (close to) 100% of the energy of the battery transferred. There are certain switching power supplies that try to mimic this type of thing by rapidly opening and closing a switch between source and load, with an inductor in series to smooth some of the power fluctuations that this would otherwise bring about.
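The R-independence of the 50% result is easy to check numerically. The sketch below (my own, not from the original answer) integrates the charging equation $\frac{dQ}{dt}=\frac{V-Q/C}{R}$ with simple Euler steps and compares the energy stored on the capacitor with the energy delivered by the battery; the fraction comes out at about one half for every resistance tried.

```python
# Euler integration of RC charging: dQ/dt = (V - Q/C) / R.
# The stored fraction of the battery's energy is ~1/2 for any R,
# which is exactly the lecturer's claim discussed above.

def charge_fraction(V, R, C, n_steps=100_000, n_tau=15):
    dt = n_tau * R * C / n_steps
    Q = 0.0
    delivered = 0.0  # energy supplied by the battery, integral of V*I dt
    for _ in range(n_steps):
        I = (V - Q / C) / R
        Q += I * dt
        delivered += V * I * dt
    stored = Q * Q / (2 * C)  # energy held on the capacitor at the end
    return stored / delivered

for R in (1.0, 10.0, 1000.0):
    f = charge_fraction(V=10.0, R=R, C=1e-3)
    print(f"R = {R:7.1f} ohm -> stored/delivered = {f:.4f}")  # ~0.5000 each time
```

Note that the ratio does not depend on R at all; changing the resistance only changes how fast the 50% is dissipated.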
{ "domain": "physics.stackexchange", "id": 35700, "tags": "electromagnetism, electric-circuits, energy-conservation, mathematics, inductance" }
Arrangement of Amino Acids in the Protein alphabet
Question: I am a software engineer with little knowledge of molecular biology. However, I am trying to understand some bioinformatics computer code where the protein alphabet appears to be represented as the following string, with each of the twenty amino acid constituents of protein: ACDEFGHIKLMNPQRSTVWY The code appears to define a second string in which the first is reordered as: DEKRHNQSTPGAVILMCFYW I am not sure of the biological significance of this. Does this reordering represent some specific interaction between these molecules? Answer: As suggested by tyersome's comment, the amino acids are grouped by their physicochemical properties. Let's add some commas: DE,KRH,NQ,ST,PGAVIL,MC,FYW aspartic acid (D) and glutamic acid (E) are acidic; lysine (K), arginine (R), and histidine (H) are basic; asparagine (N) and glutamine (Q) are amidic; serine (S) and threonine (T) are hydroxylic; proline (P), glycine (G), alanine (A), valine (V), isoleucine (I), and leucine (L) are aliphatic; methionine (M) and cysteine (C) are sulfur-containing; phenylalanine (F), tyrosine (Y), and tryptophan (W) are aromatic. My source is this graphic.
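The grouping can be written out explicitly. A small illustrative sketch (the group names follow the answer; the dictionary itself is mine):

```python
# Physicochemical groups of the 20 standard amino acids (one-letter codes),
# in the order used by the reordered alphabet from the question.
groups = {
    "acidic": "DE",
    "basic": "KRH",
    "amidic": "NQ",
    "hydroxylic": "ST",
    "aliphatic": "PGAVIL",
    "sulfur-containing": "MC",
    "aromatic": "FYW",
}

reordered = "".join(groups.values())
print(reordered)  # DEKRHNQSTPGAVILMCFYW

# Same 20 letters as the plain alphabetical protein alphabet:
assert sorted(reordered) == sorted("ACDEFGHIKLMNPQRSTVWY")
```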
{ "domain": "biology.stackexchange", "id": 11716, "tags": "biochemistry, proteins, amino-acids" }
Is it possible for a ball to start slipping midway down an inclined plane if it is in pure rolling from the beginning?
Question: My initial guess is that it is not possible, because when a ball is rolling down an inclined plane the force of gravity makes the ball accelerate constantly and the torque from friction generates a constant angular acceleration. If the values happen to match $ a_{com} = \alpha R $, we have pure rolling. And given that the acceleration values are constant, the pure rolling is maintained. Answer: That is correct: both the net force and the net torque on an object in pure rolling remain constant throughout its journey, leading to constant angular and translational accelerations. It is not possible for the object to begin with pure rolling and then slide, as in this case of an incline.
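As a concrete check (my numbers, and assuming a uniform solid sphere with $I_{cm}=\frac{2}{5}mR^2$), the two accelerations can be computed from Newton's laws and the rolling condition $a_{com} = \alpha R$ verified:

```python
import math

# Uniform solid sphere rolling without slipping down an incline of angle theta.
m, R, g = 2.0, 0.1, 9.81
theta = math.radians(30)
I_cm = 0.4 * m * R**2  # (2/5) m R^2

# Translational: m a = m g sin(theta) - f;  rotational: f R = I_cm * alpha.
# Imposing a = alpha * R gives:
a = g * math.sin(theta) / (1 + I_cm / (m * R**2))  # = (5/7) g sin(theta)
f = m * g * math.sin(theta) - m * a                # friction force required
alpha = f * R / I_cm

print(abs(a - alpha * R) < 1e-12)  # True: rolling condition holds
# Both a and alpha are constants, so the condition keeps holding for the
# whole descent, provided static friction suffices:
# f <= mu * m g cos(theta), i.e. mu >= (2/7) tan(theta) for a solid sphere.
mu_min = f / (m * g * math.cos(theta))
print(round(mu_min, 4))  # ~0.165 for a 30-degree incline
```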
{ "domain": "physics.stackexchange", "id": 79203, "tags": "newtonian-mechanics, classical-mechanics, rotational-dynamics" }
What type of servos are used in industrial robot arms like Universal Robot UR5?
Question: I've noticed that industrial robot arms have very smooth, fast, and strong movement. Does anyone know what type of servos they use? I'm trying to build my own and don't want the jerky movement that is seen in a lot of DIY robot arms. Thanks. Answer: Universal states that they use brushless DC motors with harmonic drives on their FAQ here http://cross-automation.com/blog/universal-robots-top-10-faqs Bigger ones like the KUKA KR5 use AC servo motors. From the conversation here https://support.industry.siemens.com/tf/ww/en/posts/kuka-servo-motor/87265/?page=0&pageSize=10#post344333 it looks like it is a custom version of the Siemens IFK7 synchronous motor series. Many of these high-end motors can have holding brakes, better-quality bearings and drive mechanisms, and very good control systems that prevent them from jerking like a DIY arm. In a DIY robot, joints are typically connected straight to the motor with no drive mechanism, and the basic control system does not provide a lot of options for smooth motion.
{ "domain": "robotics.stackexchange", "id": 1969, "tags": "robotic-arm, otherservos" }
roslaunch and NVidia profiling
Question: Has anyone had any success getting NVidia profiling tools and ROS to play well together? At the moment, the best I can do is profile all processes, but that only reports memory copies to and from host, and some OpenCV (copy to and from cv::Mat and cv::cuda::GpuMat). My custom kernels are never profiled (yes, I have explicit cudaProfilerStart()/Stop() calls) and trying to use launch-prefix="nvprof" or directly profiling roslaunch never gets me anywhere except errors about being unable to load some nodelets. Any suggestions as to what I might be doing wrong? I'm on Ubuntu 16.04. Originally posted by KenYN on ROS Answers with karma: 541 on 2018-09-04 Post score: 0 Original comments Comment by ahendrix on 2018-09-05: Are you running cuda code within nodelets? If your cuda code is running within a nodelet, you may want to try running nvprof on the nodelet manager. Comment by KenYN on 2018-09-05: I've tried that too, but no joy. I even have my cudaProfilerStart() called from every thread within the nodelet. Once or twice I have actually managed to capture calls to my CUDA code, but I've never managed to reproduce that... Comment by KenYN on 2018-09-05: Ah, I've tried again and just noticed an error about being unable to activate Unified Memory Profiling, so using launch-prefix="nvprof --unified-memory-profiling off" gets me further than I've ever got before. Comment by gvdhoorn on 2018-09-05: @KenYN: what was the answer here? Your last comment? If so: please post that as an answer and then accept your own answer. We don't really close questions here on ROS Answers when they have an actual answer. Comment by KenYN on 2018-09-05: @gvdhoorn Oops, I cannot re-open. Can someone else please? I also discovered how to get final output, so I can actually answer the question now. Comment by gvdhoorn on 2018-09-05: I've re-opened it for you. Answer: I finally managed to get output, but not very prettily... 
In my manager node line, I added launch-prefix="nvprof --unified-memory-profiling off --profile-child-process --profile-from-start off". Then in a suitable callback I added the following: static bool startedProfile = false; void MyClass::image_cb(const sensor_msgs::ImageConstPtr image) { if (!startedProfile) { startedProfile = true; cudaProfilerStart(); } else if (startedProfile && image->header.seq > 400) // 400 frames is enough profiling { cudaProfilerStop(); cudaDeviceReset(); exit(0); } // Existing code... } This is a very ugly way to finish profiling, but cudaProfilerStop() on its own didn't produce any output and neither did the addition of exit(0). There are other nodelets running other CUDA code on both the same and different GPUs, so perhaps we needed to force every CUDA process to stop to get the profiling results to output? Originally posted by KenYN with karma: 541 on 2018-09-05 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 31711, "tags": "roslaunch, ros-kinetic, ubuntu, ubuntu-xenial" }
Texturing a urdf object dynamically
Question: I'm trying to dynamically generate a grid in Gazebo: I need to be able to specify the length and width of each cell as well as the number of columns and rows. The grid will be a checkerboard pattern of two colors used for tracking while simulating two quadcopters. So far I have written a C++ program to generate a .urdf file that creates the correct number of links and joints with the appropriate colors. Launching this was fine as long as the grid was 3x3 or smaller. Beyond that, Gazebo was somehow overloaded: the object would be spawned and collisions would work as expected, but you couldn't see anything. So now I'm taking a different approach: generating a single box and texturing it with a dynamically generated .png image. I can generate the image file as expected, but I'm not sure how to apply the image to the box. I've tried applying a tag with the file name of the image, but that doesn't work. I just need the image applied to the top face. Any suggestions for how to do this? Originally posted by ewall on Gazebo Answers with karma: 1 on 2013-02-12 Post score: 0 Answer: I'm not sure this is the best option (it's probably one of the more hacky ones, but the first one that comes to my mind ;) ), nevertheless it should work in principle: If you embed the textures in COLLADA files that you use for your URDF, you can either change the texture name in your COLLADA files (by opening and changing them) or you could put every COLLADA mesh into its own folder with its associated texture and reference those different folders in your URDF. An example of COLLADA with embedded texture is this. If you do not need a texture, but only color, you should be able to use material colors instead of textures, which is simpler. Also, for your application it likely makes sense to model the grid parts as SDF files and give them the static attribute (which means they are not objects that have to be dynamically simulated, increasing the speed and stability of the simulation).
Last time I checked this wasn't possible for URDF models. Originally posted by Stefan Kohlbrecher with karma: 473 on 2013-02-12 This answer was ACCEPTED on the original site Post score: 0
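To illustrate the static-SDF suggestion, here is a rough, untested sketch of mine of a single grid cell (`Gazebo/Grey` is one of the stock material scripts that ship with Gazebo; element names may vary between SDF versions):

```xml
<?xml version="1.0"?>
<sdf version="1.4">
  <model name="grid_cell_0_0">
    <!-- static: not dynamically simulated, which helps speed and stability -->
    <static>true</static>
    <link name="link">
      <visual name="visual">
        <geometry>
          <box><size>0.5 0.5 0.01</size></box>
        </geometry>
        <material>
          <script>
            <uri>file://media/materials/scripts/gazebo.material</uri>
            <name>Gazebo/Grey</name>
          </script>
        </material>
      </visual>
    </link>
  </model>
</sdf>
```

A generator program could then emit one such model per cell, alternating between two material names to produce the checkerboard.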
{ "domain": "robotics.stackexchange", "id": 3032, "tags": "gazebo" }
Explanation for the trends in nucleophilicity—Orbital interactions or electrostatic attraction
Question: The trend of halide nucleophilicity in polar protic solvents is $$\ce{I- > Br- > Cl- > F-}$$ The reasons given by Solomons and Fryhle[1], and by Wade[2] are basically as follows. Smaller anions are more solvated than bigger ones because of their 'charge to size ratio'. Thus, smaller ions are strongly held by the hydrogen bonds of the solvent. Larger atoms/ions are more polarizable. Thus, larger nucleophilic atoms/ions can donate a greater degree of electron density to the substrate than a smaller nucleophile whose electrons are more tightly held. This helps to lower the energy of the transition state. Clayden et al.[3] give the reason that in SN2 reactions, the HOMO–LUMO interaction is more important than the electrostatic attraction. The lone pairs of a bigger anion are higher in energy compared to a smaller anion. So, the lone pairs of a bigger anion interact more effectively with the LUMO σ*. Thus, soft nucleophiles react well with saturated carbon. What I don't understand is the trend of halide nucleophilicity in polar aprotic solvents. $$\ce{F- > Cl- > Br- > I-}$$ In aprotic solvents, only the solvation factor is absent. Bigger anions are still more polarizable and can interact in a better way with the LUMO. Then, what is the reason for the trend reversal in aprotic solvents? Solomons and Fryhle[1] say In these[aprotic] solvents anions are unencumbered by a layer of solvent molecules and they are therefore poorly stabilized by solvation. These “naked” anions are highly reactive both as bases and nucleophiles. In DMSO, for example, the relative order of reactivity of halide ions is opposite to that in protic solvents, and it follows the same trend as their relative basicity. Shouldn't the orbital interactions matter more than the electrostatic attractions even in polar aprotic solvents? Why does nucleophilicity follow the basicity trend in polar aprotic solvents? References: Solomons, T. W. Graham; Fryhle, C. B. 
Organic Chemistry, 10th ed.; Wiley: Hoboken, NJ, 2011; pp. 258–260. Wade, L. G. Organic Chemistry, 8th ed.; Pearson Education: Glenview, IL, 2013; pp. 237–240. Clayden, J.; Greeves, N.; Warren, S. Organic Chemistry, 2nd ed.; Oxford UP: Oxford, U.K., 2012; pp. 355–357. Answer: This is a rather intellectually-stimulating question and one that is also very difficult to answer. You have constructed a very good case for why the nucleophilicity order would not be expected to reverse in polar aprotic solvents, i.e. the order of intrinsic nucleophilicities of the halide ions should be $\ce {I^- > Br^- > Cl^- > F^-}$, based on the concepts of frontier molecular orbital interactions, as well as electronegativity and polarisability of the nucleophilic atom. Affirming your viewpoint Admittedly, the saturated carbon atom is a soft centre and based on electronegativity considerations, the energy of the $\ce {HOMO}$ of the halide ions should increase from $\ce {F^-}$ to $\ce {I^-}$, i.e. the hardness of the the halide ions as nucleophiles decreases from $\ce {F^-}$ to $\ce {I^-}$. Based on the hard-soft acid-base (HSAB) principle, we would expect the the strength of the interaction between the halide ion and the carbon centre to increase from $\ce {F^-}$ to $\ce {I^-}$. We can provide more quantitative justification for this using the Klopman-Salem equation$\ce {^1}$: $$\Delta E=\frac{Q_{\text{Nu}}Q_{\text{El}}}{\varepsilon R}+\frac{2(c_{\text{Nu}}c_{\text{El}}\beta)^2}{E_{\text{HOMO}(\text{Nu})}\pm E_{\text{LUMO}(\text{El})}}$$ The first term is indicative of the strength of the Coulombic interaction while the second term is indicative of the strength of the orbital interactions. In the reaction with the soft saturated carbon, it is actually expected for the reactivity to increase from $\ce {F^-}$ to $\ce {I^-}$ (ref 1, p. 117). At the moment, all the evidence presented points us towards the conclusion that you have posited. 
However, we have overlooked the importance of a thermodynamic factor - the strength of the $\ce {C-X}$ bond. A not-particularly-relevant add-on Actually, if you think about it the considerations made in the Klopman-Salem equation are essentially kinetic considerations. We cannot conclude anything about reaction energetics from such considerations. The strength of the $\ce {HOMO-LUMO}$ interaction or that of the Coulombic attraction tells us nothing about how the electrons would organise themselves subsequently, what would be the strength of the resultant bonds etc. A comment made on this by Olmstead & Brauman (1977) On this topic of explaining the relative order of intrinsic nucleophilicity of the halide ions, Olmstead & Brauman (1977) had this to say$\ce {^2}$: The intrinsic nucleophilicities follow the reverse order of the polarizabilities (e.g., $\ce {CH3O^-}$ > $\ce {CH3S^-}$ and $\ce {F^-}$ > $\ce {Cl^-}$ > $\ce {Br^-}$). This could be due to a stronger interaction between the more concentrated molecular orbitals of the anion with the carbon center. It could also be simply a reflection of the greater thermodynamic methyl cation affinities of the smaller anions. It is important to note that Olmstead & Brauman are merely postulating the reasons. However, I do feel that the point on thermodynamics leads us on the right track to answering this question. On the other point of "more concentrated MOs", I am unclear of how it can be interpreted. Let us focus now on this point. How does thermodynamics even influence kinetics? Well... It certainly does here. What are we concerned with when dealing with nucleophilicity? As it is a kinetic phenomenon, we are essentially concerned with the rate of the substitution reaction. This makes the consideration of the activation energy ($E_\mathrm{a}$) a particularly important one because the higher the $E_\mathrm{a}$, the lower the rate of the substitution reaction. 
Here, the $E_\mathrm{a}$ can be taken to be the difference in energy between the reactants and the transition state. Thus, we can decrease the $E_\mathrm{a}$ by either raising the energy level of reactants or decreasing the energy level of the transition state. The reaction energy profile for an $\ce {S_N2}$ reaction is shown below, taken from ref. 3 (p. 394). There are two ways to see how the strength of the $\ce {C-X}$ bond can affect the $E_\mathrm{a}$. I am not sure if they may be considered equivalent perspectives. Nonetheless, I will present both. The first perspective is well-explained by Francis & Sundberg (2007) with regard to the $\ce {S_N2}$ reaction$\ce {^3}$: Because the $\ce {S_N2}$ process is concerted, the strength of the partially formed new bond is reflected in the TS. A stronger bond between the nucleophilic atom and carbon atom results in a more stable TS and a reduced activation energy. Another perspective (which may be somewhat similar to the first, or even possibly the same) is that the transition state structure bears resemblance to the product structure, based on the Hammond's postulate. However, this would be more applicable for an endothermic $\ce {S_N2}$ reaction where the transition state structure does indeed bear more resemblance to the product. When the product is more stable, i.e. has bonds that form more exothermically, the energy of the transition state would thus be lowered, and the $E_\mathrm{a}$ would also decrease, although not necessarily proportionally. Conclusion The argument you have presented failed to consider the important thermodynamics factor. Thus, it resulted in the predicted reactivity of the halide ions in the reversed order. The takeaway here is that the application of the $\ce {HSAB}$ concept alone does not always produce the correct prediction. Additional information More examples are cited for which the HSAB concept fails. 
These include: The favourable combination of $\ce {H^+}$, a hard acid, and $\ce {H^-}$, a soft base (discussed here as well) (ref. 4). The fact that the rate of reaction between $\ce {Ag^+}$, soft acid, and an alkene, soft base, is slower than that between $\ce {Ag^+}$ and $\ce {OH^-}$, a hard base (ref. 1, p. 110) References Fleming, I. Molecular Orbitals and Organic Chemical Reactions (Student Edition). John Wiley & Sons, Ltd. United Kingdom, 2009. Olmstead, W. N.; Brauman, J. I. Gas-phase nucleophilic displacement reactions. J. Am. Chem. Soc. 1977, 99(13), 4219–4228. Carey, F. A.; Sundberg, R. J. Advanced Organic Chemistry Part A. Structure and Mechanisms (5th ed.). Springer, 2007. Pearson, R. G.; Songstad, J. Application of the Principle of Hard and Soft Acids and Bases to Organic Chemistry. J. Am. Chem. Soc. 1967, 89(8), 1827-1836.
{ "domain": "chemistry.stackexchange", "id": 11133, "tags": "organic-chemistry, halides, nucleophilic-substitution" }
Reducing the line of collision of two rigid bodies while using the coefficient of restitution
Question: I am trying to solve the problem of the collision of any two rigid bodies. So far this is what I have: I am concerned with the part where I reduce the equation with the coefficient of restitution by $\vec{n}$. As far as I know, it can be factored out and the fraction reduced. Yet I am unsure, as it seems to me that it would make the line of collision redundant, while it is critical, as the COR is defined only on it. Can I reduce the fraction by $\vec{n}$? Answer: First off, understand that you cannot divide two vectors. Instead, the COR is used in the following scalar equation describing the law of collisions. $$ \boldsymbol{n} \cdot (\boldsymbol{v}_{1c}^\text{after} - \boldsymbol{v}_{2c}^\text{after} ) = - \epsilon \, \boldsymbol{n} \cdot ( \boldsymbol{v}_{1c}^\text{before} - \boldsymbol{v}_{2c}^\text{before}) \tag{1}$$ where the vector $\boldsymbol{n}$ is the contact normal, and $\cdot$ is the vector dot product. Also $\boldsymbol{v}_{1c}$ denotes body 1's velocity vector at the contact point, etc. This equation is used to find the impulse magnitude $J$ for the collision. Each body has an equal and opposite impulse vector $\boldsymbol{n} J$ applied, creating a step (change) in the velocity and rotation of each body. Here $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$ are the COM positions of each body, and $\boldsymbol{r}_c$ the contact point.
So the impulse vector through the contact point has the following effect on the motion of each body $$ \begin{aligned} \Delta \boldsymbol{v}_1 & = -\tfrac{1}{m_1} \boldsymbol{n} J & \Delta \boldsymbol{v}_2 & = \tfrac{1}{m_2} \boldsymbol{n} J \\ \Delta \boldsymbol{\omega}_1 &= -\mathbf{I}_1^{-1} ( \boldsymbol{r}_c-\boldsymbol{r}_1) \times \boldsymbol{n} J & \Delta \boldsymbol{\omega}_2 &= \mathbf{I}_2^{-1} ( \boldsymbol{r}_c-\boldsymbol{r}_2) \times \boldsymbol{n} J \end{aligned} \tag{2} $$ where $\Delta \boldsymbol{v}_i$ is the vector change of the body's center-of-mass velocity, and $\Delta \boldsymbol{\omega}_i$ the vector change of the body's rotational velocity. And the kinematics of the contact point before impact are $$ \begin{aligned} \boldsymbol{v}_{1c}^\text{before} & = \boldsymbol{v}_1 + \boldsymbol{\omega}_1 \times (\boldsymbol{r}_c - \boldsymbol{r}_1) \\ \boldsymbol{v}_{2c}^\text{before} & = \boldsymbol{v}_2 + \boldsymbol{\omega}_2 \times (\boldsymbol{r}_c - \boldsymbol{r}_2) \\ \end{aligned} \tag{3} $$ and after impact $$ \begin{aligned} \boldsymbol{v}_{1c}^\text{after} & = (\boldsymbol{v}_1+\Delta \boldsymbol{v}_1) + ( \boldsymbol{\omega}_1 +\Delta \boldsymbol{\omega}_1) \times (\boldsymbol{r}_c - \boldsymbol{r}_1) \\ \boldsymbol{v}_{2c}^\text{after} & = (\boldsymbol{v}_2+\Delta \boldsymbol{v}_2) + (\boldsymbol{\omega}_2 + \Delta \boldsymbol{\omega}_2)\times (\boldsymbol{r}_c - \boldsymbol{r}_2) \\ \end{aligned} \tag{4}$$ Now substitute (2) into (4), and then (3) & (4) into (1) to get one equation in terms of $J$. First find the impact speed $v_{\rm impact} = \boldsymbol{n} \cdot ( \boldsymbol{v}_{1c}^\text{before} - \boldsymbol{v}_{2c}^\text{before}) $ which is a known quantity.
Then form the law of collisions $$\boldsymbol{n}\cdot\left(\Delta\boldsymbol{v}_{1}+\Delta\boldsymbol{\omega}_{1}\times(\boldsymbol{r}_{c}-\boldsymbol{r}_{1})-\Delta\boldsymbol{v}_{2}-\Delta\boldsymbol{\omega}_{2}\times(\boldsymbol{r}_{c}-\boldsymbol{r}_{2})\right)=-\left(1+\epsilon\right)\,v_{{\rm impact}} \tag{5}$$ and factor out $J$ in preparation for solving. $$ \begin{split}\Bigl\{\tfrac{1}{m_{1}}+\tfrac{1}{m_{2}}-\boldsymbol{n}\cdot(\boldsymbol{r}_{c}-\boldsymbol{r}_{1})\times\mathbf{I}_{1}^{-1}(\boldsymbol{r}_{c}-\boldsymbol{r}_{1})\times\boldsymbol{n}\\ -\boldsymbol{n}\cdot(\boldsymbol{r}_{c}-\boldsymbol{r}_{2})\times\mathbf{I}_{2}^{-1}(\boldsymbol{r}_{c}-\boldsymbol{r}_{2})\times\boldsymbol{n}\Bigr\} J & =\left(1+\epsilon\right)\,v_{{\rm impact}} \end{split} \tag{6} $$ I like the think of the solution as $ J = (1+\epsilon) m_{\rm reduced} v_{\rm impact} $ where I factor all the mass/inertia terms into a reduced mass for the contact. Once $J$ is known, then use (2) to find the change of motion for both bodies. References: An Introduction to Physically Based Modeling, Part 2
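Equations (1)–(6) translate almost directly into code. Below is a minimal 2-D sketch of mine (in 2-D the inertia tensor reduces to a scalar and cross products to scalar $z$-components), checked on a head-on elastic collision of two identical bodies, where the velocities should simply swap:

```python
def cross2(a, b):
    # scalar z-component of the 2-D cross product
    return a[0] * b[1] - a[1] * b[0]

def contact_velocity(v, w, d):
    # v + w x d in 2-D, with w a scalar angular velocity
    return (v[0] - w * d[1], v[1] + w * d[0])

def collision_impulse(m1, I1, r1, v1, w1, m2, I2, r2, v2, w2, rc, n, e):
    """Impulse magnitude J, the 2-D analogue of eq. (6); n is the contact normal."""
    d1 = (rc[0] - r1[0], rc[1] - r1[1])
    d2 = (rc[0] - r2[0], rc[1] - r2[1])
    v1c = contact_velocity(v1, w1, d1)
    v2c = contact_velocity(v2, w2, d2)
    v_impact = n[0] * (v1c[0] - v2c[0]) + n[1] * (v1c[1] - v2c[1])
    m_reduced_inv = (1 / m1 + 1 / m2
                     + cross2(d1, n) ** 2 / I1
                     + cross2(d2, n) ** 2 / I2)
    return (1 + e) * v_impact / m_reduced_inv

# Head-on elastic collision of two equal bodies -> velocities swap.
J = collision_impulse(1.0, 1.0, (-1, 0), (1, 0), 0.0,
                      1.0, 1.0, (1, 0), (-1, 0), 0.0,
                      (0, 0), (1, 0), e=1.0)
v1_after = (1 - J / 1.0, 0.0)   # eq. (2): dv1 = -n J / m1
v2_after = (-1 + J / 1.0, 0.0)  # eq. (2): dv2 = +n J / m2
print(J, v1_after, v2_after)    # J = 2.0, velocities swapped
```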
{ "domain": "physics.stackexchange", "id": 69239, "tags": "newtonian-mechanics, vectors, collision, rigid-body-dynamics" }
On the method described by Purcell for finding the magnetic field by measuring the force on a test particle
Question: The following text is a method for finding the magnetic field as described in Purcell's Electricity and Magnetism (page $151$, the top part). Measure the force on the particle when its velocity is $\bf{v}$; repeat with $\bf{v}$ in some other direction. Now find a $\bf{B}$ that will make $\textbf{f}=q\textbf{v}\times\textbf{B}$ fit all these results. Why do we need two velocity vectors? For a given velocity $\textbf{v}_{1}=[v_{1} v_{2} v_{3}]^{T}$, there will be a corresponding force $\textbf{f}=[f_{1} f_{2} f_{3}]^{T}$. Therefore, the equation $\textbf{f}=q\textbf{v}\times\textbf{B}$ will be equivalent to $$f_{1}=q(v_{2}B_{3}-v_{3}B_{2})$$ $$f_{2}=q(v_{3}B_{1}-v_{1}B_{3})$$ $$f_{3}=q(v_{1}B_{2}-v_{2}B_{1})$$ Three equations, three unknowns. Am I missing something? Answer: The linear system you've written, $$ \vec f= \begin{pmatrix} f_1 \\ f_2 \\ f_3 \end{pmatrix} =q \begin{pmatrix} 0 & -v_3 & v_2 \\ v_3 & 0 & -v_1 \\ -v_2 & v_1 & 0 \end{pmatrix} \begin{pmatrix} B_1 \\ B_2 \\ B_3 \end{pmatrix} =q(\vec v\times) \vec B $$ is indeterminate, and it does not have a unique solution. You can easily verify this by noticing that the determinant is zero, since the matrix $M=(\vec v\times)$ is antisymmetric, but the transpose can't change the determinant, so $$ \det(M) = \det(M^T) = \det(-M) = (-1)^3\det(M) = -\det(M), $$ which requires $\det(M)=0$. Alternatively, you can just notice that it is unable to distinguish a magnetic field parallel to the velocity from a vanishing field (since both give zero force). This is seen most simply by choosing your coordinate axes so that $\vec v$ lies along the $x$ axis, so that the system of equations reads $$ \begin{pmatrix} f_1 \\ f_2 \\ f_3 \end{pmatrix} = qv_1 \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} B_1 \\ B_2 \\ B_3 \end{pmatrix}, $$ which is clearly sufficient to figure out $B_2$ and $B_3$ from $f_2$ and $f_3$, but cannot say anything about $B_1$.
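A quick numerical illustration of the singularity (my own sketch, plain Python): the determinant of the cross-product matrix vanishes, and a field parallel to $\vec v$ produces no force at all.

```python
def cross_matrix(v):
    """Matrix M such that M applied to B equals v x B."""
    v1, v2, v3 = v
    return [[0.0, -v3,  v2],
            [ v3, 0.0, -v1],
            [-v2,  v1, 0.0]]

def det3(m):
    # cofactor expansion along the first row
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def matvec(m, x):
    return [sum(m[r][k] * x[k] for k in range(3)) for r in range(3)]

v = (2.0, -1.0, 3.0)
M = cross_matrix(v)
print(det3(M))       # 0.0 -- one measurement leaves the system singular
print(matvec(M, v))  # [0.0, 0.0, 0.0]: B parallel to v gives zero force
```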
{ "domain": "physics.stackexchange", "id": 62634, "tags": "magnetic-fields, vectors, vector-fields" }
Springback after a plastic deformation
Question: Before the yield point the deformation is elastic; beyond it, it is plastic. But what happens to the elastic 'part' of the strain after the load is removed? Is it recovered fully or only partially, and how does this relate to the extent of plastic deformation? Answer: When the load is removed, the elastic strain is recovered completely (by definition); the plastic strain is not recovered at all (by definition). If the load is reapplied, the curve picks up where it left off (i.e., the system moves back up the unloading line and resumes plastic deformation). This point is explained here.
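Numerically, with illustrative numbers of mine (steel-like modulus): on unloading, the strain recovers by exactly $\sigma/E$, and the remainder is the permanent (plastic) set.

```python
# Springback: unloading follows a line of slope E, so the recovered strain
# is sigma/E (the elastic part); the plastic part remains.
# Illustrative numbers only.
E = 200e9             # Young's modulus, Pa
sigma = 300e6         # stress at the moment of unloading, Pa
total_strain = 0.020  # total strain at that stress (elastic + plastic)

elastic_recovery = sigma / E                   # fully recovered on unloading
permanent_set = total_strain - elastic_recovery

print(elastic_recovery)  # 0.0015
print(permanent_set)     # 0.0185
```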
{ "domain": "engineering.stackexchange", "id": 2238, "tags": "materials, strength, deformation" }
What is the protein sequence taken as input in the Path-A prediction system
Question: Consider the Path-A based metabolic pathway prediction system (http://nar.oxfordjournals.org/content/34/suppl_2/W714.short). It uses machine learning for pathway prediction. Suppose that the input is a set of protein sequences of a query organism. I do not understand what these protein sequences are. Are these the enzymes for the reactions? The basic algorithm has two inputs: a set of protein sequences from the query organism and a set of model pathways, one for each target pathway. Answer: Not necessarily; they can be enzymes, but they include a lot more (the whole proteome). It takes a FASTA format file containing a set of query protein sequences from a single organism (a partial or complete proteome) and identifies those sequences that are likely to participate in any of its supported metabolic pathways. Path-A predicts the pathways supported by arbitrary sets of proteins, using validated prediction techniques based on sequence alignment and machine learning.
{ "domain": "biology.stackexchange", "id": 3972, "tags": "biochemistry, molecular-biology" }
Why Didn't the Energy-Momentum Relation Work?
Question: I am currently studying for the GRE Physics subject test by working through published past tests. My question is about problem 20 from the test GR8677: A positive kaon ($K^{{}+{}}$) has a rest mass of $494\, {\rm MeV}/c^2$, whereas a proton has a rest mass of $938\, {\rm MeV}/c^2$. If a kaon has a total energy that is equal to the proton rest energy, the speed of the kaon is most nearly (A) $0.25\, c$ (B) $0.40\, c$ (C) $0.55\, c$ (D) $0.70\, c$ (E) $0.85\, c$ The solution (readily available online) is to solve the equation $$\gamma\, m_{\rm kaon}c^2 = m_{\rm proton}c^2$$ for $\gamma$ and then to solve $$\gamma = \frac{1}{\sqrt{1-\beta^2}}$$ for $\beta$. This can be worked out to give the correct answer, (E). This I understand. My first thought when encountering this problem, however, was to instead use the energy-momentum relation $$E^2 = (pc)^2 + (mc^2)^2$$ to solve the problem. We are given the total energy of the kaon and its mass, so I solved for $p$: $$p = \sqrt{E^2 - (mc^2)^2} / c$$ $$= \sqrt{(938)^2 - (494)^2}\, {\rm MeV} / c$$ $$\approx \sqrt{(900)^2 - (500)^2}\, {\rm MeV} / c$$ $$\approx 800\, {\rm MeV} / c$$ The velocity would then be $$v = \frac{p}{m} = \frac{800\, {\rm MeV} / c}{494\, {\rm MeV} / c^2} \approx \frac{8}{5} c = 1.6c$$ which obviously can't be right because the speed of the kaon can't be larger than $c$. However, I have done this problem several times so I don't think I have made an algebraic error. My question is then what, specifically, is wrong with my approach? It's clearly not the most efficient tactic, but I don't want to know the best way to solve a problem like this. Rather, I would like to improve my understanding of SR. Answer: You used the nonrelativistic approximation for the momentum-velocity relation. Use the relativistic one, including the factor of $\gamma$, and you will get the correct values.
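The numbers can be checked both ways; this is a quick sketch of mine using $\beta = pc/E$ (which follows from $p=\gamma m v$ and $E=\gamma mc^2$) and, independently, $\gamma$ from $E = \gamma mc^2$:

```python
import math

E = 938.0  # total energy of the kaon = proton rest energy, MeV
m = 494.0  # kaon rest mass, MeV/c^2

# Route 1: energy-momentum relation, then the *relativistic* velocity.
pc = math.sqrt(E**2 - m**2)  # ~797 MeV
beta = pc / E                # v/c = pc/E, since p = gamma m v and E = gamma m c^2
print(round(beta, 3))        # 0.85 -> answer (E)

# Route 2: gamma from E = gamma m c^2, then beta from gamma.
gamma = E / m
beta2 = math.sqrt(1 - 1 / gamma**2)
print(round(beta2, 3))       # 0.85, same answer

# The nonrelativistic v = p/m overshoots c -- the error in the question:
print(round(pc / m, 2))      # ~1.61, unphysical
```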
{ "domain": "physics.stackexchange", "id": 59967, "tags": "homework-and-exercises, special-relativity, momentum, mass-energy, speed" }
Electric field at surface of a spherical shell
Question: The shell theorem provides a well-known result that for a spherical shell with uniformly distributed charge $Q$ and radius $R$, the electric field at a distance $r$ from the center is: $$E(r) = \begin{cases} \dfrac{Q}{4 \pi \epsilon_0 r^2} & r>R \\ 0 & r<R \end{cases}$$ However, when plotted there appears to be a discontinuity at $r = R$. What would the field be at this distance? In real life, of course, you cannot lie perfectly on the surface, but for a mathematical shell this is of course valid, right? Also interestingly, the potential (being the integral of the electric field) doesn't suffer from the same discontinuity (though it of course lacks differentiability at $r = R$). Is there any physical significance to this? Answer: Those formulas and those graphs are idealizations. In reality, there are no discontinuous fields, just as there are no zero-thickness shells. If you start with an impossible situation, you will calculate impossible and meaningless results. For real situations, situations for which the theory is valid, the field may change quickly, but it does so smoothly. Your question, "what would the field be at that distance", has no answer. Let me ask you this: how fast can a unicorn run?
{ "domain": "physics.stackexchange", "id": 100321, "tags": "electrostatics, electric-fields, gauss-law, conductors, approximations" }
Thermodynamic properties of enantiomers
Question: Pairs of enantiomers have the same chemical and physical properties (except rotation of plane-polarized light) within an achiral environment. However, I am curious about the thermodynamic properties of pairs of enantiomers: say a molecule has a certain $\Delta H_f$, $\Delta S$, and $\Delta G_f$; will its enantiomer have exactly the same values? Answer: All the normal thermodynamic properties of a pair of enantiomers are the same, unless there is some interaction with a chiral environment. The only way any properties will differ is if the environment in which they are measured is chiral. For example, the heat of combustion of D-glucose is the same as for L-glucose (as combustion doesn't care about the stereochemistry). But only D-glucose is usable by higher organisms in metabolism, as all the enzymes involved are chiral (despite the thermodynamics of the overall reaction being exactly the same).
{ "domain": "chemistry.stackexchange", "id": 15375, "tags": "organic-chemistry, physical-chemistry" }
How can the OS schedule disk requests efficiently without knowing the LBN-to-PBN mapping?
Question: I learned from my OS textbook that the OS schedules disk requests with well-known algorithms such as SSTF and SCAN. However, the data layout inside disk drives is now so complex that we may not be able to know it. Disk zoning, disk skew, disk slipping, and many other features of disk drives change the distribution of the data layout. So how can the OS do this work? Thanks! Answer: Modern disk interfaces provide an approximation of position called the logical block address (LBA). This is just a linear integer indexing scheme where each block gets a unique integer assigned to it. While you can't calculate an exact "distance" (in terms of arm movement latency and rotational latency) between two blocks, it is approximately the case that if the difference between address $a$ and address $b$ is smaller than the difference between address $a$ and address $c$, then block $a$ will be closer to block $b$ than it is to block $c$. Disk scheduling algorithms try to improve average latency or average throughput without being significantly unfair or causing starvation. They also don't know about requests that are going to arrive in the future, so whatever they do needs to be an approximation based only on the requests they've already seen. Since the algorithms are approximate anyway, the fact that we don't know the exact distance between two requests is not particularly a problem.
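For concreteness, a shortest-seek-time-first scheduler needs nothing more than these LBA differences. A toy sketch (my own, with made-up request numbers):

```python
def sstf(head, requests):
    """Shortest-seek-time-first ordering on logical block addresses.

    Uses |lba - head| as the (approximate) seek distance, which is exactly
    the approximation described above.
    """
    pending = list(requests)
    order = []
    while pending:
        nxt = min(pending, key=lambda lba: abs(lba - head))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

print(sstf(50, [95, 180, 34, 119, 11, 123, 62, 64]))
# [62, 64, 34, 11, 95, 119, 123, 180]
```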
{ "domain": "cs.stackexchange", "id": 2857, "tags": "operating-systems, scheduling" }
Merging files gives memory allocation error
Question: I have two files like > head(a) chr position 1 1 136962 2 1 562020 3 1 672948 > class(a) [1] "data.frame" > And > dim(b) [1] 1855235 10 > head(b[,1:3]) SNP chr position 1: rs62635286 1 13116 2: rs75454623 1 14930 3: rs806731 1 30923 4: rs200943160 1 49298 5: rs116400033 1 51479 6: rs141149254 1 54490 > class(b) [1] "data.table" "data.frame" > I want to find corresponding SNP in file b matched to position in file a so I did like this that gives memory allocation error. > unique(merge(a,b,x.by=a$position,y.by=b$position)) Error: cannot allocate vector of size 137.2 Gb I also tried rstudio cloud but uploading the bigger file never completes Can you help me with task please? The expected output would be something like rs62635286 1 13116 rs376723915 1 91515 rs147061536 1 92858 rs62642131 1 135982 rs371474651 1 158006 rs201293782 1 665401 rs138476838 1 668374 rs1401137 1 2063094 rs3128293 1 2065339 EDITED > setDT(a) > setkey(a, chr, position) > setkey(b, chr, position) Error in setkeyv(x, cols, verbose = verbose, physical = physical) : some columns are not in the data.table: chr,position > setkey(b, Chromosome, Position) > a[b, nomatch=0] Error in bmerge(i, x, leftcols, rightcols, roll, rollends, nomatch, mult, : Incompatible join types: x.chr (factor) and i.Chromosome (integer). Factor columns must join to factor or character columns. > Answer: You could try something like this : First you should work with the same class of object : data.table for fast merging operation # Turning a to a data.table setDT(a) # b is already a data.table Then you might have to set the keys of both table : # I added chr in case you need to match the same position on a different chromosome. setkey(a, chr, position) setkey(b, chr, position) Finally perform the joining operation : a[b, nomatch=0] Because you didn't provide the expected output, it might need some tuning !
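For readers more comfortable with pandas than data.table, the same keyed inner join looks like this in a Python sketch (the SNP values are a toy subset, not the real files):

```python
import pandas as pd

# toy stand-ins for files a and b
a = pd.DataFrame({"chr": [1, 1, 1],
                  "position": [136962, 13116, 91515]})
b = pd.DataFrame({"SNP": ["rs62635286", "rs75454623", "rs376723915"],
                  "chr": [1, 1, 1],
                  "position": [13116, 14930, 91515]})

# inner join on the shared key columns keeps only matching positions
matched = a.merge(b, on=["chr", "position"], how="inner")
print(matched[["SNP", "chr", "position"]])
```

As with the data.table version, making sure the key columns have the same dtype on both sides avoids the factor/integer mismatch seen in the question's edit.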
{ "domain": "bioinformatics.stackexchange", "id": 1206, "tags": "r" }
Feynman's derivation of Schrödinger equation. Potential spatial dependence
Question: I am working on the book "Quantum Mechanics and Path Integrals" by Feynman and Hibbs. When finding the correspondence with the Schrödinger equation he takes $$\eqalign{&\psi(x,t+\epsilon) = {}\cr &\int_{-\infty}^{\infty} \!\!\exp\left\{\frac{i\,\epsilon}{\hbar} L\left(\frac{x + y}{2},\frac{x - y} {\epsilon}\right) \right\}\, \psi(y,t)\,\frac{\mathrm{d}y}{A(\epsilon)}\cr}$$ Writing the Lagrangian explicitly as $L = m\dot{x}^2/2 - V(x,t)$, and making the substitution $y = x + \eta$, he gives $$\eqalign{ &\psi(x,t+\epsilon) = \int_{-\infty}^{\infty} \exp\left\{ \frac{i m \eta^2}{2\hbar\epsilon} \right\} \cr &\qquad\exp\left\{ -\frac{i\, \epsilon}{\hbar} V\left( x+ \frac{\eta}{2}, t \right) \right\} \psi(x +\eta, t) \, \frac{\mathrm{d}\eta}{A(\epsilon)}\cr}$$ Now the first exponential varies very rapidly, and he says that most of the integral will be contributed by $\eta$ on the order of 0 to $\sqrt{2\hbar \epsilon/m}$. For small $\eta$ he can now expand the second exponential, as well as $\psi$: $$\eqalign{ &\psi(x,t) + \epsilon\, \frac{\partial \psi}{\partial t} = {}\cr &\quad\int_{-\infty}^{\infty} \exp\left\{ \frac{i m \eta^2}{2\hbar\epsilon} \right\} \left[1 -\frac{i\, \epsilon}{\hbar} V \left( x, t \right)\right] \cr &\qquad\left[\psi(x,t) + \eta \frac{\partial \psi}{\partial x} +\frac{\eta^2}{2} \frac{\partial^2 \psi}{\partial x^2} \right] \, \frac{\mathrm{d}\eta}{A(\epsilon)}\cr}$$ Here he replaces $\epsilon V(x +\frac{\eta}{2},t)$ with $\epsilon V(x,t)$, saying that the error is of higher order than $\epsilon$. My problem is that the expansion of $V(x +\frac{\eta}{2},t)$ would have a term of order $\eta$, which when multiplied by $\eta \frac{\partial \psi}{\partial x}$ would give a term of order $\eta^2$, and its integral would be non-zero. The terms of order $\eta^2$ are not neglected, since the one going with the second derivative of $\psi$ is preserved. 
The problematic term is then $$\int_{-\infty}^{\infty} \exp\left\{ \frac{i m \eta^2}{2\hbar\epsilon} \right\} \frac{i\, \epsilon \, \eta^2}{\hbar} \left. \frac{\partial V}{\partial (x + \eta/2)} \right|_{(x,t)} \frac{\partial \psi}{\partial x} \, \frac{\mathrm{d}\eta}{A(\epsilon)}$$ I think the problem might be that I am not handling the Taylor series properly. Thank you for your help. Answer: Okay, the problem actually is not there. Both statements are correct; the mistake I made was in comparing the orders of the expansion. We take just the first order in $\epsilon$ on the left-hand side, $$\psi + \epsilon \partial_t\psi,$$ and the last integral, $$\int_{-\infty}^{\infty} \exp\left\{ \frac{i m \eta^2}{2\hbar\epsilon} \right\} \frac{i\, \epsilon \, \eta^2}{\hbar} \left. \frac{\partial V}{\partial (x + \eta/2)} \right|_{(x,t)} \frac{\partial \psi}{\partial x} \, \frac{\mathrm{d}\eta}{A(\epsilon)},$$ will give something of the order of $\epsilon^2$, since we have $$\int_{-\infty}^{\infty} \exp\left\{ \frac{i m \eta^2}{2\hbar\epsilon} \right\} \eta^2 \, \frac{\mathrm{d}\eta}{A(\epsilon)} = \frac{i\hbar \epsilon}{m},$$ where the condition for $A$ is found through the correspondence of the zeroth-order terms, nothing else depends on $\eta$, and there is one factor of $\epsilon$ already present. The term with the second derivative does have an $\eta^2$, but only its product with the 1 in the expansion of the potential is preserved. The identification of the first-order terms in $\epsilon$ gives the expression of the Schrödinger equation.
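The Gaussian moment that drives this argument can be checked numerically. For $\mathrm{Re}\,c > 0$ one has $\int x^2 e^{-cx^2}\,dx \,/\, \int e^{-cx^2}\,dx = 1/(2c)$; with $c = \delta - im/(2\hbar\epsilon)$ and $\delta \to 0^+$ the ratio tends to $i\hbar\epsilon/m$, which is exactly why every $\eta^2$ in the integrand picks up an extra factor of $\epsilon$. A small NumPy sketch (the damping $\delta$ and grid are arbitrary choices of mine, just enough to make the oscillatory integral tractable on a finite grid):

```python
import numpy as np

# a plays the role of m/(2*hbar*eps); delta is a small damping that
# makes the oscillatory Gaussian integrable on a finite grid
a, delta = 1.0, 0.05
c = delta - 1j * a

x = np.linspace(-60, 60, 400001)
dx = x[1] - x[0]
w = np.exp(-c * x**2)          # envelope exp(-delta*x^2) kills the tails

# ratio of the second moment to the normalization; closed form is 1/(2c)
ratio = np.sum(x**2 * w) * dx / (np.sum(w) * dx)
print(ratio, 1 / (2 * c))
```

Letting $\delta \to 0$ in $1/(2c)$ recovers $i/(2a) = i\hbar\epsilon/m$, the value quoted in the answer.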
{ "domain": "physics.stackexchange", "id": 57881, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, path-integral" }
Roller Screw drive - axial movement instead of friction
Question: I need an equation or some hints to solve the following problem. Imagine a roller screw drive. I apply a torque T to translationally move my load mass M. I assume my screw has an efficiency of 90%. Now an additional axial force acts on my mass in the direction opposite to the motion. Is this force completely transformed into torque (considering the efficiency, of course), or is it possible that my whole roller screw moves, because it is not fixed? I have only found papers/books/articles for movable slides/loads but fixed shafts. In my case, however, the motor and shaft are part of an oscillating system. I'm not a mechanical engineer, so I'm sorry if the answer is trivial. I made a little sketch now. The process force Fp is pushing my mass; most of the force is transformed into a load torque Tp which acts against my drive torque TD. Some of the energy is lost to friction. The question is whether there is also a partial force which acts on the bearing and therefore excites my chassis. Answer: OK. As drawn, ignoring mass and accelerations, the force $F_p$ will appear as a torque on your ball screw. However, the total force on the ball screw, and hence the torque, depends on the mass of the thing you're moving with the ball screw interacting with gravity (if it's being moved in anything other than a horizontal plane), and on whether or not the whole assembly -- frame and load -- is moving at anything other than a steady velocity. On a bad day, your mass-spring-damper system will have an overall resonance that interacts with your control system, making oscillations happen where you never expected them.
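For the quasi-static part of the question (force appearing as torque, ignoring mass and accelerations as the answer does), the textbook lead-screw relation $T = F \cdot l / (2\pi\eta)$ gives a first estimate. This relation and the numbers below are my own illustration, not from the answer:

```python
import math

def drive_torque(axial_force, lead, efficiency):
    """Torque at the screw needed to produce a given axial force.

    Standard lead-screw relation T = F * lead / (2*pi*eta); quasi-static,
    i.e. mass and acceleration of the assembly are ignored.
    """
    return axial_force * lead / (2 * math.pi * efficiency)

def back_driving_torque(axial_force, lead, back_efficiency):
    """Torque an external axial process force exerts on the screw
    (back-driving direction, where efficiency multiplies instead)."""
    return axial_force * lead * back_efficiency / (2 * math.pi)

# hypothetical numbers: 1 kN process force, 5 mm lead, 90% efficiency
print(drive_torque(1000, 0.005, 0.9))   # ~0.88 N*m
```

Once the frame itself can move, these static numbers only set the excitation amplitude; whether the chassis actually oscillates depends on the mass-spring-damper dynamics the answer warns about.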
{ "domain": "robotics.stackexchange", "id": 231, "tags": "movement, torque, differential-drive" }
Caching lookup values in an e-commerce website
Question: I am working on an e-commerce website. I have a CarMake dropdown which contains car make names and their corresponding Id, for example: { 1:Alfa romeo }, { 2:Audi }, { 3:BMW }, etc I want to keep this lookup in cache, so I won't have to make a trip to DB every time I want to get the corresponding Name or Id... So in my infrastructure layer, I read the data from DB and keep it in cache, I am using a static CarMakeLookup class: public static class CarMakeLookup { private static readonly Dictionary<short, string> _carMakeIdLookup; private static readonly Dictionary<string, short> _carMakeNameLookup; static CarMakeLookup() { _carMakeIdLookup = new Dictionary<short, string>(); _carMakeNameLookup = new Dictionary<string, short>(); using (var context = ApplicationDbContext.Create()) { DropdownRepository dropdownRepository = new DropdownRepository(context); var carMakes = dropdownRepository.GetCarMakes(); foreach (var carMake in carMakes) { _carMakeIdLookup[carMake.CarMakeId] = carMake.CarMakeName; _carMakeNameLookup[carMake.CarMakeName.ToLower()] = carMake.CarMakeId; } } } public static string GetCarMakeName(short carMakeId) { if (carMakeId <= 0) { return string.Empty; } if (_carMakeIdLookup.ContainsKey(carMakeId)) { return _carMakeIdLookup[carMakeId]; } return string.Empty; } public static short GetCarMakeId(string carMakeName) { if (!string.IsNullOrEmpty(carMakeName)) { string lowerCaseCarMakeName = carMakeName.ToLower(); if (_carMakeNameLookup.ContainsKey(lowerCaseCarMakeName)) { return _carMakeNameLookup[lowerCaseCarMakeName]; } } return -1; } public static Dictionary<short, string> GetCarMakeLookup() { return _carMakeIdLookup; } } I also need to display car make in a dropdown in my View... so I need a List<SelectListItem> to be displayed as a dropdown. 
So in my Application layer I have another static class which holds this list: public static class CarCache { static CarCache() { var selectCarMake = new SelectListItem { Value = "-1", Text = "Select a car", Disabled = true, Selected = true }; CarMakeItemsIncludingSelect = new List<SelectListItem>(); // Dropdown's first option should be "select a car" CarMakeItemsIncludingSelect.Insert(0, selectCarMake); foreach (KeyValuePair<short, string> carMakeKeyPair in CarMakeLookup.GetCarMakeLookup()) { var selectListItem = new SelectListItem { Value = carMakeKeyPair.Key.ToString(), Text = carMakeKeyPair.Value }; CarMakeItemsIncludingSelect.Add(selectListItem); } } public static List<SelectListItem> CarMakeItemsIncludingSelect { get; private set; } public static string GetCarMakeName(short carMakeId) { return CarMakeLookup.GetCarMakeName(carMakeId); } public static short GetCarMakeId(string carMakeName) { return CarMakeLookup.GetCarMakeId(carMakeName); } } I have a few more dropdowns like this in my application, for example HouseType, JobType, etc Everything works fine, the only problem is that my application startup time is slow... I assume one problem is that when I start debugging the application, the application starts building all these static cache classes, and since these classes are static I cannot inject the DbContext into them... so each dropdown should build its own DdContext instance, get corresponding data from DB and Dispose of DbContext. There is no performance problem with the production server... Any recommendation on how to improve this cache? Answer: Global static state makes your code untestable... 
I would define your car make lookup as: public class CarMakeLookup { public CarMakeLookup(IEnumerable<(short Id, string Make)> data) { Makes = data.ToDictionary(d => d.Id, d => d.Make); Ids = data.ToDictionary(d => d.Make, d => d.Id); } public IReadOnlyDictionary<short, string> Makes { get; } public IReadOnlyDictionary<string, short> Ids { get; } } And read it with: public class CarMakeReader : IReader<CarMakeLookup> { public CarMakeReader(Func<ApplicationDbContext> context) => Context = context; Func<DbContext> Context { get; } public Task<CarMakeLookup> ReadAsync() { using (var context = Context()) return new CarMakeLookup( from cm in context.CarMakes select (cm.CarMakeId, cm.CarMakeName)); } } Where IReader<> is simply: public interface IReader<TSet> { Task<TSet> ReadAsync(); } There should be also an extension method Cache defined nearby: public static class Reader { public static IReader<TSet> Cache<TSet>(this IReader<TSet> source) => new CachingReader<TSet>(source); } Where: class CachingReader<TSet> : IReader<TSet> { public CachingReader(IReader<TSet> source) => Lookup = new Lazy<Task<TSet>>(() => source.ReadAsync()); Lazy<Task<TSet>> Lookup { get; } public Task<TSet> ReadAsync() => Lookup.Value; } Now register you reader in the IoC container as a decorated singleton: containerBuilder.RegisterInstance(ctx => new CarMakeLookup( ctx.Resolve<Func<ApplicationDbContext>>()) .Cache()).AsImplementedInterfaces();
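The lazy-caching decorator the reviewer builds in C# is language-agnostic; here is the same idea in a short Python sketch (the loader function is a hypothetical stand-in for the DB query, not real application code):

```python
class CachingReader:
    """Wraps a zero-argument reader; the source is hit once, lazily."""
    def __init__(self, read):
        self._read = read
        self._cached = None
        self._loaded = False

    def read(self):
        if not self._loaded:          # first call only
            self._cached = self._read()
            self._loaded = True
        return self._cached

calls = 0
def load_car_makes():                 # hypothetical stand-in for the DB read
    global calls
    calls += 1
    return {1: "Alfa Romeo", 2: "Audi", 3: "BMW"}

reader = CachingReader(load_car_makes)
reader.read()
reader.read()
print(calls)                          # the wrapped source was read only once
```

As in the C# version, the caching wrapper composes with any reader, so each dropdown can get its own decorated singleton from the container instead of global static state, and nothing is loaded until first use, which also helps the startup time complained about in the question.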
{ "domain": "codereview.stackexchange", "id": 36251, "tags": "c#, cache, static, asp.net-mvc-5" }
Sizes of different image types?
Question: What is the ordering by size of the following image types: RGB, grayscale, indexed image, binary image? Is RGB the largest and binary the smallest? Answer: Binary images are clearly smallest in terms of size, followed by indexed (paletted) images. RGB and greyscale images can take different sizes. There are also (A)RGB images with sizes up to 48 bits, but they usually include an extra alpha channel (for transparency purposes), so I haven't described them below. RGB (red, green, blue) - 24 bits (each colour channel takes up 8 bits of space) RGB (red, green, blue) - 16 bits (the green channel usually takes up 6 bits) RGB (red, green, blue) - 15 bits (5 bits per channel) RGB (red, green, blue) - 8 bits (usually encoded as rrrgggbb) Greyscale image - 8 - 24 bits (each colour channel in a greyscale image has the same value) Indexed image - usually 8 bits (the pixels don't carry colour information at all--rather they are indices into a colour table (i.e. a palette). An 8-bit palette can hold 256 colours. While larger palettes are possible, they're rarely used because the most important reason to use a palette is to conserve space) Binary image - 1 bit (in a binary image, each pixel is stored as 1 bit. Since 1 bit can encode only 2 values (0 or 1), this kind of image can display only 2 colours (e.g. black and white))
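To make the comparison concrete, here is a small helper of my own (not from the answer) that estimates raw, uncompressed sizes, including the colour table that an indexed image carries alongside its pixel data:

```python
def image_bytes(width, height, bits_per_pixel,
                palette_entries=0, palette_entry_bytes=3):
    """Raw uncompressed size: pixel data plus an optional colour table."""
    pixel_bytes = (width * height * bits_per_pixel + 7) // 8
    return pixel_bytes + palette_entries * palette_entry_bytes

w, h = 100, 100
print(image_bytes(w, h, 24))        # 24-bit RGB
print(image_bytes(w, h, 8))         # 8-bit greyscale
print(image_bytes(w, h, 8, 256))    # 8-bit indexed + 256-entry palette
print(image_bytes(w, h, 1))         # binary
```

For a 100x100 image this reproduces the ordering in the answer: binary < indexed (even with its palette) < 24-bit RGB.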
{ "domain": "dsp.stackexchange", "id": 8718, "tags": "image-processing" }
How to compute score and predict for outcome after N days
Question: Let's say I have a medical/EHR dataset that is retrospective and longitudinal in nature, meaning one person has multiple measurements across multiple (past) time points. I did post this elsewhere but couldn't get any response, so I am posting it here. This dataset contains information about patients' diagnoses, mortality flag, labs, admissions, drugs consumed, etc. Now, if I would like to find predictors that influence mortality, I can use logistic regression (whether the patient will die or not). But my objective is to find the predictors that can help me predict whether a person will die in the next 30 days or the next 240 days. How can I do this using ML/data-analysis techniques? In addition, I would also like to compute a score that indicates the likelihood that a person will die in the next 30 days. How can I compute such scores? Any tutorial links on how this kind of score is derived, please? Can you please let me know the different analytic techniques I can use to address this problem and the different approaches to calculating a score? I would like to read about and try solving problems like this. Answer: This could be seen as a "simple" binary classification problem. I mean the type of problem is "simple", the task itself certainly isn't... And I'm not even going to mention the serious ethical issues about its potential applications! First, obviously you need to have an entry in your data for a patient's death. It's not totally clear to me if you have this information? It's important that whenever a patient has died this is reported in the data, otherwise you cannot distinguish the two classes. So the design could be like this: An instance represents a single patient history at time $t$, and it is labelled as either alive or dead at $t+N$ days. This requires refactoring the data. Assuming data spans a period from 0 to $T$, you can take multiple points in time $t$ with $t<T-N$ (for instance every month from 0 to $T-N$). 
Note that in theory I think that different times $t$ for the same patient can be used in the data, as long as all the instances consistently represent the same duration and their features and labels are calculated accordingly. Designing the features is certainly the tricky part: of course the features must have values for all the instances, so you cannot rely on specific tests which were done only on some of the patients (well you can, but there is a bias for these features). To be honest I doubt this part can be done reliably: either the features are made of standard homogeneous indicators, but then these indicators are probably poor predictors of death in general; or they contain specialized diagnosis tests for some patients but then they are not homogeneous across patients, so the model is going to be biased and likely to overfit. Ideally I would recommend splitting between training and test data before even preparing the data in this way, typically by picking a period of time for training data and another for test data. Once the data is prepared, in theory any binary classification method can be applied. Of course a probabilistic classifier can be used to predict a probability, but this can be misleading so be very careful: the probability itself is a prediction, it cannot be interpreted as the true chances of the patient to die or not. For example Naive Bayes is known to empirically always give extreme probabilities, i.e. close to 0 or close to 1, and quite often it's completely wrong in its prediction. This means that in general the predicted probability is only a guess, it cannot be used to represent confidence. [edit: example] Let's say we have: data for years 2000 to 2005 N=1, i.e. we look at whether a patient dies in the next year. a single indicator, for instance say cholesterol level. Of course in reality you would have many other features. for every time $t$ in the features we represent the "test value" for the past 2 years to the current year $t$. 
This means that we can iterate $t$ from 2002 (2000+2) to 2004 (2005-N) Let's imagine the following data (to simplify I assume the time unit is year): patientId birthYear year indicator 1 1987 2000 26 1 1987 2001 34 1 1987 2002 18 1 1987 2003 43 1 1987 2004 31 1 1987 2005 36 2 1953 2000 47 2 1953 2001 67 2 1953 2002 56 2 1953 2003 69 2 1953 2004 - DEATH 3 1969 2000 37 3 1969 2001 31 3 1969 2002 25 3 1969 2003 27 3 1969 2004 15 3 1969 2005 - DEATH 4 1936 2000 41 4 1936 2001 39 4 1936 2002 43 4 1936 2003 43 4 1936 2004 40 4 1936 2005 38 That would be transformed into this: patientId yearT age indicatorT-2 indicatorT-1 indicatorT-0 label 1 2002 15 26 34 18 0 1 2003 16 34 18 43 0 1 2004 17 18 43 31 0 2 2002 49 47 67 56 0 2 2003 50 67 56 69 1 3 2002 33 37 31 25 0 3 2003 34 31 25 27 0 3 2004 35 25 27 15 1 4 2002 66 41 39 43 0 4 2003 67 39 43 43 0 4 2004 68 43 43 40 0 Note that I wrote the first two columns only to show how the data is calculated, these two are not part of the features.
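The table transformation in the worked example can be sketched in pandas (the death-year dict stands in for a hypothetical event table; only patients 2 and 4 from the toy data are reproduced, with N = 1):

```python
import pandas as pd

# toy longitudinal observations for patients 2 and 4 (from the example)
obs = pd.DataFrame({
    "patientId": [2, 2, 2, 2, 4, 4, 4, 4, 4, 4],
    "year":      [2000, 2001, 2002, 2003,
                  2000, 2001, 2002, 2003, 2004, 2005],
    "indicator": [47, 67, 56, 69, 41, 39, 43, 43, 40, 38],
})
death_year = {2: 2004}            # hypothetical event table; patient 4 survives
N = 1                             # label: dies within the next N years

obs = obs.sort_values(["patientId", "year"])
g = obs.groupby("patientId")["indicator"]
obs["indicatorT-2"] = g.shift(2)  # value two years before t
obs["indicatorT-1"] = g.shift(1)  # value one year before t

# keep rows with a full 2-year history, and only times t <= T - N
rows = obs.dropna(subset=["indicatorT-2"]).copy()
rows = rows[rows["year"] <= obs["year"].max() - N]
rows["label"] = [int(death_year.get(p, 10**9) <= y + N)
                 for p, y in zip(rows["patientId"], rows["year"])]
```

This reproduces the labelled rows of the example table for these two patients, including label 1 for patient 2 at yearT 2003.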
{ "domain": "datascience.stackexchange", "id": 8657, "tags": "machine-learning, deep-learning, classification, regression, survival-analysis" }
Interleave two sorted arrays
Question: To learn Rust, I tried to write a function that takes two sorted arrays of integers and interleaves them into one longer sorted array. Is this approach ok? Have I made any mistakes? fn main() { let a = [1, 3, 4, 4]; let b = [0, 2, 5, 6]; println!("{:?}", merge(&a, &b)); } fn merge(list_a : &[i32; 4], list_b : &[i32; 4]) -> [i32; 8] { let mut merged_list: [i32; 8] = [0; 8]; let mut idx = 0; let mut idx_b = 0; for a in list_a.iter() { while idx_b < list_b.len() && list_b[idx_b] < *a { merged_list[idx] = list_b[idx_b]; idx_b +=1; idx += 1; } merged_list[idx] = *a; idx += 1; } for b in list_b[idx_b..].iter() { merged_list[idx] = *b; idx += 1; } println!("{:?}", merged_list); merged_list } #[cfg(test)] mod tests { use super::*; #[test] fn test_merge() { let a = [1, 3, 3, 7]; let b = [0, 4, 6, 8]; assert_eq!(merge(&a, &b), [0, 1, 3, 3, 4, 6, 7, 8]); } } Answer: Your code is quite reasonable. I'm happy to see that you included both a main function and tests, which make it easy to see that the function works. Stylistically, colons (:) are "attached" to the argument name, they don't have a space on both sides: -fn merge(list_a : &[i32; 4], list_b : &[i32; 4]) -> [i32; 8] { +fn merge(list_a: &[i32; 4], list_b: &[i32; 4]) -> [i32; 8] { You should let type inference do its thing; there's no need to provide the type for merged_list: let mut merged_list = [0; 8]; Instead of calling .iter(), it's customary to just pass a reference to an array / slice / Vec to the for loop: for a in list_a { /* ... */ } for b in &list_b[idx_b..] { /* ... */ } Looking beyond the current implementation, you'll find that arrays, which have a fixed-size, are usually pretty limiting, at least until RFC 2000 is implemented. Until then, it's common to either use a macro to implement a trait for many concrete types, or to take a slice (&[T]) and return a Vec<T>. The macro route is visible in the standard library and is why many array implementations only go up to 32 elements. 
I'd encourage you to write a version that takes two slices, returns a Vec, and uses a match statement inside a loop; I think such a merge sort is a nice showcase of some of Rust's abilities. In such a solution, you should not need to use any indexing operations!
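The slice-based, index-free merge the reviewer is hinting at is easiest to compare in a language-agnostic sketch; here it is in Python (my own version, not the reviewer's code):

```python
def merge(a, b):
    """Merge two already-sorted sequences into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])   # at most one of these two tails is non-empty
    out.extend(b[j:])
    return out

print(merge([1, 3, 3, 7], [0, 4, 6, 8]))  # [0, 1, 3, 3, 4, 6, 7, 8]
```

In the Rust slice version, the loop body could become a match over which side still has its smaller head element, with the leftover tail appended via extend_from_slice.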
{ "domain": "codereview.stackexchange", "id": 27559, "tags": "mergesort, rust" }
Optimise Sieve of Eratosthenes
Question: I have written the following code for finding out the prime numbers less than or equal to n. When I enter the input n = 10^7, my program crashes. So I was wondering whether my code can be further optimised. def sieve(n): array = [] for i in range(1, n): array.append(True) for i in range(0, n-1): for j in range(2*i+4, n+1, i+2): if array[j-2] != False: if j % (i+2) == 0: array[j-2] = False final = [] for i in range(0, n-1): if array[i] == True: final.append(i+2) return final Answer: First, I would clean up the indices in the inner loop. I think the +2 and -2 are a bit confusing at first glance. def sieve(n): array = [] for i in range(0, n + 1): array.append(True) for i in range(2, n + 1): for j in range(2*i, n+1, i): if array[j] != False: if j % (i) == 0: array[j] = False final = [] for i in range(0, n+1): if array[i] == True: final.append(i) return final This way, array indices directly correspond to their number, no conversion needed. This wastes array space for 0 and 1, but I think it is a lot easier to understand. The primary optimization that is possible adjusts the ranges of the loops. Each non-prime <= n has a divisor <= sqrt(n) and thus we can limit the outer loop to range(2, int(math.sqrt(n)) + 1) Additionally we don't actually need to run the inner loop for numbers which are non-prime (since each non-prime has prime divisors) and we can add an if array[i] == True: to reduce the number of inner loops further. The range of the inner loop can also be reduced. It actually can start at i^2 instead of 2*i since the argument made earlier also applies here. All non-primes smaller than i^2 must have a divisor < i and thus were already set to false in an earlier iteration of the outer loop.
If we apply these changes, we get the following code: def faster_sieve(n): array = [] for i in range(0, n + 1): array.append(True) for i in range(2, int(math.sqrt(n)) + 1): if array[i] == True: for j in range(i*i, n + 1, i): if array[j] != False: if j % i == 0: array[j] = False final = [] for i in range(2, n + 1): if array[i] == True: final.append(i) return final For comparison we can run faster_sieve(int(math.pow(10, 7))) and sieve(int(math.pow(10, 7))) On my machine: ~ time python faster_sieve.py python faster_sieve.py 5.44s user 0.04s system 99% cpu 5.477 total ~ time python sieve.py python sieve.py 32.97s user 0.03s system 99% cpu 33.003 total Which is a lot faster! Now we could to some python microoptimizations, but I don't know anything about python. A first step could be to replace the append-loop with something faster like array = [True] * (n + 1), which saves another second on my machine ~ time python faster_sieve.py python faster_sieve.py 4.46s user 0.02s system 99% cpu 4.475 total So, yes indeed, your code could be further optimized. /e I might add that there are already good reviews of python code for the sieve on this site. For example this which applies a lot of python optimization. /e2 Looking at the code again and looking at Wikipedia, the checks inside the inner loop are actually not needed since j is per loop definition a multiple of i and we can simplify it to for j in range(i*i, n + 1, i): array[j] = False which optimizes the program further: ~ time python faster_sieve.py python faster_sieve.py 2.61s user 0.01s system 99% cpu 2.617 total /e3 After reading jerry's response and finding two bugs in his implementation, I think there is another easy optimization that can be added to his approach. Since i is always odd in his inner loop, so is i * i. Thus we can increase the increment step of the inner loop from i to i * 2 (i * i + i would be even). 
This results in the following timings: Simon: 3.5496606826782227 Jerry (broken): 1.7031567096710205 Josay: 2.2623558044433594 Krypton: 5.4344189167022705 Krypton2: 2.5819575786590576 Jerry2: 1.396036148071289 Krypton2 and Jerry2 being: def krypton2(n): array = [True] * (n + 1) for i in range(2, int(math.sqrt(n)) + 1): if array[i] == True: for j in range(i*i, n + 1, i): array[j] = False final = [] for i in range(2, n + 1): if array[i] == True: final.append(i) return final def jerry2(n): array = [True] * n result = [] result.append(2) for i in range(4, n, 2): array[i] = False; for i in range(3, int(math.sqrt(n) + 1), 2): if array[i]: for j in range (i*i, n, i * 2): array[j] = False; for i in range(3, n, 2): if array[i]: result.append(i) return result
{ "domain": "codereview.stackexchange", "id": 22324, "tags": "python, performance, sieve-of-eratosthenes" }
Prove the time complexity of this algorithm of finding longest subarray with maximum value in the middle
Question: I'm trying to find the time complexity $T(n)$ of the algorithm given by this pseudocode: 1: for i:=1 to N do 2: begin 3: len := 1 4: while i - len >= 1 and i + len <= N do 5: begin 6: if a[i] > a[i - len] and a[i] > a[i + len] then 7: len := len + 1 8: else 9: break 10: end 11: print(i, len) 12: end I know that the answer is $T(n) = O(n\log n)$ but I struggle to prove that. From the pseudocode it's clear that this algorithm is designed to find the longest subarray with each element smaller than the middle one. (Actually, this algorithm prints the length of such an array for each array element.) I can't prove it formally because I struggle with the content-dependent loop at 5-10, but informally I went like this: Let's construct the array which will be the worst case for this algorithm. For the sake of simplicity let's use an array with 15 elements and start with an array filled with zeros: $$0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0$$ The external loop will execute 15 times and each time the loop 5-10 will perform a single comparison. Now we will do the following: we divide the array in two and put the maximum value in the middle. Then for each half we do the same, except the value is decreased by 1. We get the following array: $$0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0$$ Now, intuitively I can see why this is the worst case for the algorithm and where $\log n$ comes from, but I can't prove it properly. As a side note I would appreciate any pointers toward where one can practice this kind of problem (concrete algorithm time-complexity analysis, preferably with answers/explanations).
This prompts the following definition: Given $n$, the quantity $M(n)$ is the maximum of $\sum_{i=1}^n \ell(i)$ over all arrays of length $n$. Let $a$ be an array for which $M(n)$ is achieved, and let $a[i]$ be an element of maximum value. Then $\ell(i) = \min(i-1,n-i)$. Moreover, since $a[i]$ is not smaller than any other element, we have $\sum_{j=1}^{i-1} \ell(j) \leq M(i-1)$ and $\sum_{j=i+1}^n \ell(j) \leq M(n-i)$. This shows that $$ M(n) \leq \max_{i=1}^n M(i-1) + M(n-i) + \min(i-1,n-i). $$ In fact it is not hard to check that equality holds: take the maximum $i$, and combine two arrays attaining $M(i-1)$ and $M(n-i)$ with a larger element in the middle. This gives the recurrence $$ M(0)=0, \; M(n) = \max_{i=1}^n [M(i-1) + M(n-i) + \min(i-1,n-i)]. $$ This recurrence is A078903. It is intuitively clear that the best choice for $i$ is $\lfloor \frac{n+1}{2} \rfloor$ or $\lceil \frac{n+1}{2} \rceil$. We can solve the recurrence explicitly: for $0 \leq k < 2^m$, we have $$ M(2^m+k) = (m-2)2^{m-1}+1+N_2(k), $$ where $N_2(k)$ is the total number of 1s in the binary expansions of $1,\ldots,k$ (this is A000788). It is known that $N_2(k) = k\log_2 k + O(k)$, and so $M(n) = \Theta(n\log n)$. This asymptotic estimate also follows from the monotonicity of $M(n)$, by considering the cases $k = 0$. We conclude that the running time of the algorithm is indeed $O(n\log n)$, and moreover this estimate is tight.
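The recurrence and the closed form can be cross-checked with a small dynamic program; this is just a sanity check of the answer's formula, not part of the proof:

```python
def M(n):
    """Evaluate the recurrence M(k) = max_i [M(i-1) + M(k-i) + min(i-1, k-i)]."""
    m = [0] * (n + 1)
    for k in range(1, n + 1):
        m[k] = max(m[i - 1] + m[k - i] + min(i - 1, k - i)
                   for i in range(1, k + 1))
    return m[n]

def closed_form(n):
    """(m-2)*2^(m-1) + 1 + N2(k) for n = 2^m + k, 0 <= k < 2^m (n >= 2)."""
    mexp = n.bit_length() - 1
    k = n - 2**mexp
    n2 = sum(bin(j).count("1") for j in range(1, k + 1))   # A000788
    return (mexp - 2) * 2**(mexp - 1) + 1 + n2

print(all(M(n) == closed_form(n) for n in range(2, 65)))  # True
```

Since $M(n)$ bounds the total work of the inner loop, agreement with the $\Theta(n\log n)$ closed form is exactly what the $O(n\log n)$ claim requires.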
{ "domain": "cs.stackexchange", "id": 9099, "tags": "algorithms, complexity-theory, algorithm-analysis, time-complexity" }
What is the name of this landform?
Question: These are from the movie Insomnia. (Also visible in this video from 4 to 8 seconds.) What is the name of this landform? Is it just snow on lapiés, or is it some form of glaciated landform? Answer: This is a large glacier calving event. Calving occurs at the front of the glacier, probably a very wet (and likely warm-based) glacier where a considerable amount of water is flowing out of the system (from the glacier to the sea). The amount of water is important, as it can accelerate the ice flow speed on the valley floor, enabling big chunks of ice to break off along the way. This is what a typical calving event looks like: https://pubs.usgs.gov/of/2004/1216/c/images/calving1.gif When the glacier ice moves fast in a valley and encounters hard rock outcrops on the valley floor followed by sudden drops in topography, the ice will break, forming what is known as an icefall: https://pubs.usgs.gov/of/2004/1216/i/images/icefall1.gif , or, if the topography is moderate, seracs instead: https://pubs.usgs.gov/of/2004/1216/s/images/serac1.gif . But these typical events of normal glacier dynamics only answer part of the question. What happened in these pictures is that the glacier chunks went seaborne, in a fjord, after a big calving event. The fjord prevents the glacier fragments from spreading, forcing them to move along the fjord. This next picture, as reported by ABC News, shows a big calving event that happened in Greenland in 2018, similar to the pictures you are showing: (Source and caption: Tabular icebergs float in the Sermilik Fjord after a large calving event at the Helheim glacier near Tasiilaq, Greenland, June 23, 2018. Source: ABC News https://abcnews.go.com/International/watch-billions-tons-ice-collapse-climate-change-impacting/story?id=57972318)
{ "domain": "earthscience.stackexchange", "id": 1899, "tags": "glaciology, groundwater, snow, land-surface" }
What kind of hybridisation is there around the central carbon atoms?
Question: I am currently working on an assignment and I am confused about what kind of hybridisation exists for the first condensed structure CH3CN. Does the CH3 have sp2 hybridisation? I suspect it's that because it has 3 hydrogens (1s^1) and 1 carbon (1s^2 2s^2 2p^2), allowing for 3 sigma bonds and 1 pi bond. I believe that the C standing alone would have sp3 because it looks like a bent shape? I am not too sure. Thank you. Answer: In $\ce{CH3CN}$, the three atoms $\ce{C,C}$ and $\ce{N}$ are aligned. The terminal $\ce{C}$ atom (in $\ce{CH3}$) is hybridized sp3. The central $\ce{C}$ atom is hybridized sp. The bond $\ce{C-C}$ between $\ce{CH3}$ and $\ce{CN}$ is sigma, and the bond $\ce{CN}$ has one sigma and two pi bonds.
{ "domain": "chemistry.stackexchange", "id": 17614, "tags": "molecular-orbital-theory, orbitals, hybridization" }
What is the (intuitive) relation of NP-hard and #P-complete problems?
Question: From Wikipedia on $\mathrm{NP}$-completeness: "a [decision] problem is NP-complete if it is both in NP and NP-hard." [link] I think we can paraphrase this as the first statement: An $\mathrm{NP}$-hard (decision) problem $\Pi$ is $\mathrm{NP}$-complete if and only if $\Pi \in \mathrm{NP}$. $\#\mathrm P$-complete (counting) problems are at least as hard as $\mathrm{NP}$-complete problems. The second statement is from Wikipedia on $\#\mathrm P$-completeness [link]. I think it is fair to conclude that: The two classes of $\mathrm{NP}$-hard and $\#\mathrm P$-complete cannot intersect, since they are defined for fundamentally different types of problems (counting vs decision). However, the second statement suggests that there is a relation between the two. Can this relation be expressed intuitively, like in terms of intersection? After further reading on the $\#\mathrm P$ class, for which Wikipedia says that it is "the set of the counting problems associated with the decision problems in the set NP," I suspect that the following taxonomy can be established: Is this correct? Answer: The theory of NP-completeness studies decision problems and (indirectly) optimization problems. In contrast, the theory of #P-completeness studies counting problems. In particular, NP is a class of decision problems while #P is a class of counting problems. The elements of these two classes are different, so you cannot compare them directly as your Venn diagram suggests. Instead of giving precise definitions of the various types of problems, here are some examples: Decision problem (SAT): Given a CNF, determine whether it is satisfiable. Counting problem (#SAT): Given a CNF, determine the number of satisfying assignments. Optimization problem (MAX-SAT): Given a CNF, determine the maximum number of clauses which can be satisfied simultaneously. If you can solve #SAT efficiently then you can solve SAT efficiently. This is not an isolated case. 
Recall that a decision problem $L$ is in NP if there is a polynomial time verifier $V$ and a constant $C$ such that $$ L = \{ x : V(x,y) = 1 \text{ for some } y \text{ of length at most } C|x|^C \}. $$ A counting problem $f$ (which maps instances to natural numbers) is in #P if there is a polynomial time verifier $V$ and a constant $C$ such that $$ f(x) = |\{ y \text{ of length at most } C|x|^C : V(x,y) = 1\}|. $$ That is, $f(x)$ counts the number of witnesses for $x$. For a given verifier $V$, computing $f(x)$ is harder than deciding whether $x \in L$, since $x \in L$ iff $f(x) \geq 1$. This suggests that if a decision problem is hard, then its counting version is also hard. The converse is not true. For example, 2SAT (the special case of SAT in which all clauses contain at most two literals) is in P, but its counting version #2SAT is #P-complete. Where do optimization problem fit in? We can think of an optimization problem as given by a verifier $V(x,y)$ which checks whether $y$ is a feasible solution for $x$, together with a function $O(x,y)$ which computes the objective value of $y$. For example, in MAX-SAT, $V(x,y)$ checks that $y$ is a truth assignment for the variables appearing in the CNF $x$, and $O(x,y)$ counts the number of clauses of $x$ satisfied by $y$. We can associate with each maximization problem the following decision version: $$ \{ (x,o) : V(x,y) = 1 \text{ and } O(x,y) \geq o \text{ for some } y \text{ of length at most } C|x|^C \}. $$ In the case of MAX-SAT, we are given a CNF $x$ and a target $o$, and we need to determine whether some truth assignment satisfies at least $o$ clauses of $x$. We say that an optimization problem is NP-hard if its decision version is NP-hard. In the case of MAX-SAT, the problem is obviously NP-hard, since choosing $o$ to be the number of clauses reduces the problem to SAT. (In fact, even MAX-2SAT is NP-hard, although 2SAT itself is in P.) 
In many other cases (for example, Vertex Cover, Max Cut and so on), there is no natural decision problem (no analog of SAT) other than the one obtained from the optimization problem.
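The verifier-based definitions above can be made concrete with a toy brute-force example. The sketch below is illustrative only (it enumerates all $2^n$ assignments, so it runs in exponential time): it counts satisfying assignments of a CNF, with literals written as signed integers in the DIMACS style, and the decision version just checks whether the count is at least 1.

```python
from itertools import product

def count_sat(num_vars, clauses):
    """#SAT by brute force: count the witnesses y with V(x, y) = 1.

    A clause is a list of non-zero ints; literal v is satisfied iff
    assignment[abs(v) - 1] == (v > 0).
    """
    count = 0
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[abs(v) - 1] == (v > 0) for v in clause)
               for clause in clauses):
            count += 1
    return count

def is_sat(num_vars, clauses):
    """Decision version: x is in SAT iff f(x) >= 1."""
    return count_sat(num_vars, clauses) >= 1

# (x1 or x2) and (not x1 or x2): satisfied exactly when x2 is True.
print(count_sat(2, [[1, 2], [-1, 2]]))  # 2
print(is_sat(2, [[1, 2], [-1, 2]]))     # True
```

Note how `is_sat` falls out of `count_sat` for free, mirroring the text: computing $f(x)$ is at least as hard as deciding $x \in L$.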
{ "domain": "cs.stackexchange", "id": 20475, "tags": "complexity-theory, np-complete, np-hard, complexity-classes, counting" }
Writing a electrochemical cell representation in correct way
Question: The answer to the following question seems wrong to me. The book plainly states that the correct answer is (3), but that seems incorrect to me. Please help. The chemical reaction $\ce{2AgCl(s) + H2(g) -> 2 HCl(aq) + 2 Ag(s)}$ taking place in a galvanic cell (under standard conditions) is represented by the notation $$\ce{Pt(s) | H2(g), \pu{1 bar} | 1 M KCl(aq) | AgCl(s) | Ag(s)}\tag{1}$$ $$\ce{Pt(s) | H2(g), \pu{1 bar} | 1 M HCl(aq) | 1 M Ag+(aq) | Ag(s)}\tag{2}$$ $$\ce{Pt(s) | H2(g), \pu{1 bar} || 1 M HCl(aq) | AgCl(s) | Ag(s)}\tag{3}$$ $$\ce{Pt(s) | H2(g), \pu{1 bar} | 1 M HCl(aq) | Ag(s) | AgCl(s)}\tag{4}$$ Photo of problem: https://i.stack.imgur.com/Oflyg.jpg Answer: The chemical reaction is: $$\ce{2AgCl(s) + H2(g) -> 2 HCl(aq) + 2 Ag(s)}$$ If we split this into half reactions, we get an oxidation half reaction at the anode of: $$\ce{H2(g) -> 2 H+(aq) + 2 e-}$$ And a reduction half reaction at the cathode of: $$\ce{2AgCl(s) + 2e- -> 2 Ag(s) + 2 Cl-(aq)}$$ If you want all the species at standard state, you need a partial pressure of $\pu{1 bar}$ of hydrogen gas, a $\mathrm{pH}$ of about 0, and a chloride concentration of $\pu{1 mol/L}$. In cells where the reagents can't react directly, you don't need a salt bridge. In the reaction we are looking at, $\ce{AgCl}$ is confined to the cathode because it sticks to it, and hydrogen gas is confined to the anode because that's where we are releasing it, and it has low solubility in water. The cell notation starts with the conductor of the anode half reaction, and ends with the conductor of the cathode half reaction. Different phases are separated by vertical lines. If there is more than one species in a phase, they are separated by commas. $$\ce{Pt(s) | H2(g, \pu{1 bar}) | H+(aq, \pu{1 mol/L}) , Cl- (aq, \pu{1 mol/L}) | AgCl(s) | Ag(s)}$$ The book plainly states (3) to be correct. The answer is (5), none of the above. 
Using the format of the answer key, the best answer would be: $$\ce{Pt(s) | H2(g), \pu{1 bar} | 1 M HCl(aq) | AgCl(s) | Ag(s)}$$ You get this answer from answer 3 by changing the salt-bridge boundary ("||") to a phase boundary ("|"). Having a salt bridge between the hydrogen gas and the aqueous hydrogen ions would mean that you are generating hydrogen ions at the electrode where they are not in contact with $\pu{1 M}$ hydrogen ions.
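Under standard conditions the voltage of this cell follows from $E^\circ_\text{cell} = E^\circ_\text{cathode} - E^\circ_\text{anode}$. A quick sanity check in Python, using tabulated standard reduction potentials; treat the $\ce{AgCl/Ag}$ value (commonly quoted as about $\pu{+0.222 V}$) as an assumed input, not something derived here:

```python
# Standard reduction potentials in volts (tabulated values, assumed here).
E_STANDARD = {
    "AgCl/Ag,Cl-": 0.222,  # AgCl(s) + e- -> Ag(s) + Cl-(aq)
    "H+/H2": 0.000,        # 2 H+(aq) + 2 e- -> H2(g), zero by definition
}

def cell_potential(cathode, anode):
    """E_cell = E_cathode - E_anode, all species at standard state."""
    return E_STANDARD[cathode] - E_STANDARD[anode]

# Pt | H2 | HCl(aq) | AgCl | Ag: H2 is oxidized at the anode,
# AgCl is reduced at the cathode.
print(round(cell_potential("AgCl/Ag,Cl-", "H+/H2"), 3))  # 0.222
```

A positive $E^\circ_\text{cell}$ confirms the reaction is spontaneous as written for the galvanic cell.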
{ "domain": "chemistry.stackexchange", "id": 13639, "tags": "electrochemistry" }
How much weight would I need to put on the end of a tube to break it?
Question: Say I have a tube with a circular cross-section made from some material (for an example, I'd like to use acrylic). I support it horizontally from one end and hang a weight from the other end. How heavy does the weight have to be to break the tube? What if I support the tube from both ends and hang the weight from the middle? For the example, please use acrylic with inner diameter 2 inches, outer diameter 2.25 inches, length 60 inches. However, I'd like to know the formulae and theories that are used to make these calculations so that I can do them myself in the future. You might find the following useful: Properties of acrylic Answer: The equation you would apply is: $\sigma = \frac{M*Y}{I}$ Where M is the bending moment or torque, $Y$ is the distance from the center of the cross section to the topmost or bottommost fiber, and $I$ is the moment of inertia of the cross section about its x-axis. $\sigma$ is the stress. So, Maximum moment = $M= F * 60$ inches where $F$ = your downward force. $Y= 1.125$ inches. $I$ for this particular cross sectional shape equals $\frac{\pi(D_O^4 -D_I^4)}{64}$ where $D_O = 2.25$ inches and $D_I = 2.00$ inches. I kept all the units in inches. If you know what the maximum tolerable $\sigma$ is in PSI (pounds per square inch), then you plug that into the equation and solve for $F$ in pounds. This is an EXTREMELY basic structural engineering problem. If you apply a force in this fashion to this particular structural configuration, you end up creating a bending moment at the opposite end that causes tension in the uppermost fiber and compression in the bottommost fiber. In structural analysis, the loading possibilities and connection possibilities are innumerable and range from simple to complex. There have been cases in history where even simple structures have collapsed resulting in death because the designers simply neglected basic concepts. Every single weld has to be properly designed. Every single bolt has to be properly sized. 
Every single element must be correctly designed. Otherwise...possible disaster.
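Plugging the numbers from the answer into Python makes the procedure concrete. The flexural strength used below is a placeholder (roughly 10,000 psi is often quoted for acrylic, but check an actual datasheet before trusting the result):

```python
import math

# Cantilever tube: force F hung at the free end, length L from the support.
SIGMA_MAX = 10_000.0  # psi, assumed flexural strength of acrylic (placeholder)
D_OUTER = 2.25        # in
D_INNER = 2.00        # in
LENGTH = 60.0         # in

# Second moment of area of a hollow circular section: pi*(Do^4 - Di^4)/64.
I = math.pi * (D_OUTER**4 - D_INNER**4) / 64.0

# sigma = M*Y/I with M = F*L and Y = D_OUTER/2, solved for F.
Y = D_OUTER / 2.0
F = SIGMA_MAX * I / (LENGTH * Y)
print(f"I = {I:.4f} in^4, cantilever breaking load F = {F:.1f} lb")

# Simply supported at both ends with the load at midspan: M_max = F*L/4,
# so the tolerable load is 4x the cantilever case for the same length.
print(f"Center-loaded, simply supported: F = {4 * F:.1f} lb")
```

With these assumed numbers the cantilever case gives $F \approx \pu{70 lb}$; the both-ends-supported case works out to four times that, because the maximum moment drops from $FL$ to $FL/4$.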
{ "domain": "physics.stackexchange", "id": 19269, "tags": "forces, stress-strain, material-science" }
Using singleton pattern for a MySQL connection
Question: I'm new to web programming, and writing some web apis to enable users to sync their data between devices. Is it okay to use singleton pattern to prevent reconnecting the MySQL database on every api call? <?php final class MySQLiConnection { private static $connection = null; private function __construct() {} public static function getInstance() : MySQLiConnection { if ($connection == null || !$connection->ping()) { $connection = new mysqli("localhost", "id", "password", "database"); } return $connection; } public function execute(string $sql, iterable $params) : void { if ($statement = $connection->prepare($sql)) { foreach ($params in $param) { if (is_int($param)) { $statement->bind_param("i", $param); } else if (is_double($param)) { $statement->bind_param("d", $param); } else if (is_string($param)) { $statement->bind_param("s", $param); } } $statement->execute(); $statement->close(); } } public function getResult(string $sql, iterable $params) : mysqli_result { if ($statement = $connection->prepare($sql)) { foreach ($params in $param) { if (is_int($param)) { $statement->bind_param("i", $param); } else if (is_double($param)) { $statement->bind_param("d", $param); } else if (is_string($param)) { $statement->bind_param("s", $param); } } $statement->execute(); $result = $statement->get_result(); $statement->close(); return $result; } } } // example MySQLiConnection::getInstance()->execute("SELECT * FROM users WHERE forename = ? AND surname = ?", "ben", "dover"); ?> Answer: First, on your premises. First of all, no pattern would prevent PHP from making a connection to MySQL database on every web api call. Because PHP will die between different API calls, along with all its singletons, connections and any other stuff. So all you can prevent is reconnecting the MySQL database on every call to a database connection class. No, it is frowned upon using singleton pattern to prevent reconnecting the MySQL database on every call to a database connection class. 
A more accepted approach is Dependency Injection. Funnily enough, due to a typo, this class would not prevent reconnecting to MySQL anyway. Had you had error_reporting set to E_ALL, PHP would have signaled that you are trying to use a non-existent variable $connection every time getInstance() is called. Surely you wanted to refer to it as self::$connection instead (the property is static, so $this is not available). Now to the code. The intention is very good; I especially like the getResult() and execute() methods that let you avoid the bind-param hassle. However, there is evidently duplicated code. Why not make execute() return $statement? It would let you make getResult() as simple as public function getResult(string $sql, iterable $params) : mysqli_result { return $this->execute($sql, $params)->get_result(); } And finally. I just noticed, halfway through the review process, that your code is off topic, as it simply doesn't work. Mysqli is not PDO; you cannot bind your parameters in a loop. So you need to rework it. In order to help, here are two of my articles: How to properly connect to Mysql database using mysqli, which will show you important options missed in your connection code, and Mysqli made simple, to show you how to bind parameters for mysqli dynamically.
{ "domain": "codereview.stackexchange", "id": 33323, "tags": "php, mysqli" }
Bootstrap show many move one carousel modification
Question: I've been steadily improving a show many move one carousel sample that I found a few weeks ago. I want to post my improvements for others to use but don't want to spread bad code. I'm not an expert in JavaScript but I always like to produce clean code. Working demo (function(){ $('.carousel-showmanymoveone .item').each(function(){ var itemToClone = $(this); for (var i=1;i<4;i++) { itemToClone = itemToClone.next(); // wrap around if at end of item collection if (!itemToClone.length) { itemToClone = $(this).siblings(':first'); } // grab item, clone, add marker class, add to collection itemToClone.children(':first-child').clone() .addClass("cloneditem-"+(i)) .appendTo($(this)); } }); }()); Thoughts I've done all the improvements I can think of but: I'm wondering about seeing $(this) appear twice. Are the siblings, children, :first-child, :first usages the best options? Any other improvements or best practices I've missed? Is there something I could do better? Answer: Thanks for looking at this. I have concluded that I am happy with the code. To incorporate the valid suggestion by @vmariano, I have split this off into its own variable. This does make sense from a best-practices viewpoint: once I clone the item it is no longer itemToClone, it is a clonedItem. Final code: (function(){ $('.carousel-showmanymoveone .item').each(function(){ var itemToClone = $(this); for (var i=1;i<4;i++) { itemToClone = itemToClone.next(); // wrap around if at end of item collection if (!itemToClone.length) { itemToClone = $(this).siblings(':first'); } // grab item, clone, add marker class, add to collection var clonedItem = itemToClone.children(':first-child').clone(); clonedItem.addClass("cloneditem-"+(i)) .appendTo($(this)); } }); }());
{ "domain": "codereview.stackexchange", "id": 12832, "tags": "javascript, jquery, circular-list, twitter-bootstrap" }
Could I use some elements of my target variable to predict it?
Question: I'm trying to predict whether a company will go bankrupt. I use a dataset from 2020, and I manually created my target variable from the status of the company, the status date, and the status reason. Could I also use these variables in my model, or is that totally forbidden because I built my target with them? (My opinion is that I cannot, but I'm curious to hear what the community will say.) Thanks. Answer: If you would already have those data points before a company actually goes into bankruptcy, then you can use them in your model, since when predicting the future you would have access to that data. However, if you would only know the data once the bankruptcy event happens (e.g. the date of bankruptcy), then you cannot use that variable in your model, since you would be leaking data (using data that is in the future and that the model would not have access to when it is actually deployed and used).
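One practical way to enforce the rule in the answer is to keep a per-feature "available as of" timestamp and drop any feature that only becomes known at or after the event you are predicting. A minimal sketch; the feature names and dates below are made up for illustration:

```python
from datetime import date

# Hypothetical metadata: when each feature becomes observable.
FEATURE_AVAILABLE = {
    "revenue_2019": date(2020, 1, 1),     # known before prediction time
    "debt_ratio": date(2020, 1, 1),       # known before prediction time
    "status_reason": date(2020, 12, 31),  # only known once bankruptcy happens
    "status_date": date(2020, 12, 31),    # only known once bankruptcy happens
}

def usable_features(prediction_time):
    """Keep only features the model could actually see when predicting."""
    return sorted(name for name, available in FEATURE_AVAILABLE.items()
                  if available < prediction_time)

print(usable_features(date(2020, 6, 1)))  # ['debt_ratio', 'revenue_2019']
```

The status-derived columns that were used to build the target are filtered out automatically, because their availability date coincides with the bankruptcy event itself.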
{ "domain": "datascience.stackexchange", "id": 10409, "tags": "machine-learning, python, classification, pandas, predictive-modeling" }
Combining two arrays using nested foreach loops
Question: I have the following arrays: Array (db_values) ( [0] => Array ( [system_name] => object_car_brand [value] => Alfa Romeo [id] => 136 ) [1] => Array ( [system_name] => object_car_model [value] => Spider [id] => 137 ) ) Array (db_attributes) ( [0] => Array ( [id] => 105 [system_name] => object_car_brand ) [1] => Array ( [id] => 106 [system_name] => object_car_model ) ) I combine these two using the following code: foreach($db_attributes as $db_attribute){ foreach($db_values as $db_value){ if($db_value["system_name"] === $db_attribute["system_name"]){ $update[$db_attribute["id"]] = $db_value["value"]; } } } I do not think that this is the most resource friendly way of doing it. Is there a better way? Answer: I have made the assumption that there is a 1:1 relationship between the $attributes and $values array elements. With that I mean the array key in the $attributes array corresponds with an entry in the $values array. If that is the case it can be reduced to one foreach loop by using the key from the attributes array: $combined = []; // Make sure the $combined array exists. foreach($attributes as $key => $attribute) { // First check if the array key exists and that the 'system_name' is the same if(array_key_exists($key, $values) && $attribute['system_name'] == $values[$key]['system_name']) { $combined[$attribute['id']] = $values[$key]['value']; } } This should produce the following array with the data you have provided: Array(combined) ( [105] => 'Alfa Romeo' [106] => 'Spider' ) If my assumption is incorrect, then just ignore my answer.
{ "domain": "codereview.stackexchange", "id": 10715, "tags": "php, array" }
Building a string made easier?
Question: I have these lines of code in my view: <?php $count = count($product->getTags()); $tagsStr = ''; foreach($product->getTags() as $key => $tag){ $tagsStr.= " " . $tag->getTag(); if(($key == 0 && $count < 1) || ($key == 0 && $count >1 && $key != $count)){ $tagsStr .= ','; } } ?> It prints strings like: Fish, Onions Or Fish, Onions, Eggs All these items are stored in an ArrayCollection $product->getTags();. I find that these are a lot of lines for completing something this simple. I was wondering if you have ideas on simplifying this code. Answer: $tagsStr = implode(', ', array_map(function ($tag) { return $tag->getTag(); }, $product->getTags()->toArray()));
{ "domain": "codereview.stackexchange", "id": 4993, "tags": "php, strings" }
How to debug a gazebo model plugin by setting breakpoints with the drcsim run_gzserver_gdb script?
Question: Hello, I am quite new to gazebo and I would like to debug my gazebo plugin with breakpoints. I found a few questions here that do point out a similar problem. Some of the answers included the run_gzserver_gdb script of the drcSim. I copied the related drcSim scripts and only changed the package to my own. I did not modify the actual gdbrun script. https://bitbucket.org/osrf/drcsim/src/194be8500fef81593f79607a21ee2badd9700a0e/drcsim_gazebo/scripts/run_gzserver_gdb?at=default&fileviewer=file-view-default https://bitbucket.org/osrf/drcsim/src/194be8500fef81593f79607a21ee2badd9700a0e/drcsim_gazebo/scripts/gdbrun?at=default&fileviewer=file-view-default When I run my project with these scripts the server just starts running and I can only interact with gdb once my project has crashed. #!/bin/bash #gdbrun of drcSim extra_text="" if [ "$1" == "--break-main" ]; then extra_text="break main" shift fi EXEC="$1" shift run_text="run" for a in "$@"; do run_text="${run_text} \"$a\"" done TMPFILE=/tmp/gdbrun.$$.$#.tmp cat > ${TMPFILE} <<EOF ${extra_text} ${run_text} EOF gdb -x ${TMPFILE} "${EXEC}" rm -f "${TMPFILE}" I get that the problem is somewhere along these lines: run_text="run" for a in "$@"; do run_text="${run_text} \"$a\"" done TMPFILE=/tmp/gdbrun.$$.$#.tmp cat > ${TMPFILE} <<EOF ${extra_text} ${run_text} EOF The problem is that the gazebo server is run directly, since the temp file will only create a run configuration: run (gzserver and configuration args I pass to gzserver) If I try running this by hand and add breakpoints, the server does not start properly and shuts down after 30 seconds. How do I have to modify the script so I can set breakpoints? If it is not possible to use these scripts, how could I possibly debug my plugin with gdb and breakpoints? Regards Originally posted by winter2016 on Gazebo Answers with karma: 1 on 2016-07-22 Post score: 0 Answer: Here is the solution that worked for me. Add a breakpoint to whatever you want to debug. 
Do so at the very beginning of the gdb command in gdbrun. gdb -ex "set breakpoint pending on" -ex "break <some source file from plugin>:line_nr" -x ${TMPFILE} "${EXEC}" For debugging: Only start the gzserver in the roslaunch file. Check in the output that the breakpoints are set as pending. Start the gzclient manually in an extra window. Set additional breakpoints if required. Once gdb stops the gzserver, do the usual gdb debugging. Originally posted by winter2016 with karma: 1 on 2016-07-26 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 3954, "tags": "drcsim" }
Scrabble helper: find the highest score with any 7 letters
Question: I'm making a python script that accepts 7 letters and returns the highest scoring word along with all other possible words. At the moment it has a few "loops in loops" and others things that will slow down the process. import json #open file and read the words, output as a list def load_words(): try: filename = "dictionary_2.json" with open(filename,"r") as english_dictionary: valid_words = json.load(english_dictionary) return valid_words except Exception as e: return str(e) #make dictionary shorter as there will be maximum 7 letters def quick(): s = [] for word in load_words(): if len(word)<7: s.append(word) return s # takes letters from user and creates all combinations of the letters def scrabble_input(a): l=[] for i in range(len(a)): if a[i] not in l: l.append(a[i]) for s in scrabble_input(a[:i] + a[i + 1:]): if (a[i] + s) not in l: l.append(a[i] + s) return l #finds all words that can be made with the input by matching combo's to the dictionary and returns them def word_check(A): words_in_dictionary = quick() for word in scrabble_input(A): if word in words_in_dictionary: yield word #gives each word a score def values(input): # scrabble values score = {"a": 1, "c": 3, "b": 3, "e": 1, "d": 2, "g": 2, "f": 4, "i": 1, "h": 4, "k": 5, "j": 8, "m": 3, "l": 1, "o": 1, "n": 1, "q": 10, "p": 3, "s": 1, "r": 1, "u": 1, "t": 1, "w": 4, "v": 4, "y": 4, "x": 8, "z": 10} word_total = 0 for word in word_check(input): for i in word: word_total = word_total + score[i.lower()] yield (word_total, str(word)) word_total = 0 #prints the tuples that have (scrabble score, word used) def print_words(a): for i in values(a): print i #final line to run, prints answer def answer(a): print ('Your highest score is', max(values(a))[0], ', and below are all possible words:') print_words(a) answer(input("Enter your 7 letters")) I have removed some of the for loops and have tried to make the json dictionary I found shorter by limiting it to 7 letter words max. 
I suppose I could do that initially so that it doesn't need to do that each time I run the script. Any other tips on how to speed it up? Answer: Know your data structures. l=[] for i in range(len(a)): if a[i] not in l: l.append(a[i]) for s in scrabble_input(a[:i] + a[i + 1:]): if (a[i] + s) not in l: l.append(a[i] + s) How long does each of those not in checks take? If the list contains n strings, a not in check has to do up to n string comparisons. list is the wrong data structure to keep a collection of non-duplicate values: that is a set. l=set() for i in range(len(a)): l.add(a[i]) l.update([a[i] + s for s in scrabble_input(a[:i] + a[i + 1:])]) is easier to read and should be faster for non-trivial inputs. An alternative, which is arguably even easier to read, uses a single accumulator: def scrabble_input(rack, prefix='', accum=set()): if len(prefix) > 0: accum.add(prefix) for i in range(len(rack)): scrabble_input(rack[:i] + rack[i + 1:], prefix + rack[i], accum) return accum Further optimisation is possible by just avoiding local duplicates, and in that case you can switch back to using a list for the accumulator: def scrabble_input(rack, prefix='', accum=[]): if len(prefix) > 0: accum.append(prefix) used = set() for i in range(len(rack)): if not rack[i] in used: used.add(rack[i]) scrabble_input(rack[:i] + rack[i + 1:], prefix + rack[i], accum) return accum Tastes may vary as to which is the nicest implementation. You can test to see which is the fastest for your use cases. (Of course, for "serious" use as opposed to practice, itertools is your friend). if word in words_in_dictionary: The same applies: you should notice a significant speedup by using a set for words_in_dictionary. If you need to optimise further then you should be looking at more complicated data structures like tries or suffix trees. 
In principle you will be able to do a recursion with one parameter which is the current word (prefix or suffix, depending on your data structures); one parameter which is the unused letters from the Scrabble rack; and a final parameter which is the current node of the dictionary tree. Then if the dictionary tree tells you that no words exist with that prefix/suffix, you can abort early without generating all of its extensions.
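The reviewer's aside about itertools for "serious" use can be sketched briefly. With the dictionary held in a set, membership checks are O(1) on average, and itertools.permutations enumerates rack arrangements of every length; the tiny word list below is a stand-in for the real JSON dictionary:

```python
from itertools import permutations

def playable_words(rack, dictionary):
    """All dictionary words formable from the rack letters (set lookups)."""
    found = set()
    for length in range(1, len(rack) + 1):
        for combo in permutations(rack, length):
            word = "".join(combo)
            if word in dictionary:  # O(1) average with a set
                found.add(word)
    return found

words = {"cat", "act", "at", "a", "zebra"}  # stand-in dictionary
print(sorted(playable_words("cat", words)))  # ['a', 'act', 'at', 'cat']
```

This replaces the hand-rolled recursion entirely; the trie/suffix-tree approaches mentioned above would further prune branches whose prefix matches no dictionary word.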
{ "domain": "codereview.stackexchange", "id": 28436, "tags": "python, performance, game" }