Is there a simple trigonometric relationship between these two angles?
Question: The setup shows 3 rigid links (with fixed lengths) constrained by 4 joints. The blue joints are fixed in place and only allow rotation, while the black joints can move and also allow rotation. Initially θ1 < θ2, and the top link is perfectly horizontal. As θ1 decreases (due to the leftmost link rotating clockwise), it is clear that θ2 will increase, but I would like to get the specific value of θ2 as a function of θ1. Is there a name for this kind of problem? What would be the best software tool to simulate this? Answer: Since considering four-bar linkages is a useful tool in my research field, I happen to know of a paper that precisely solves what you want to do. George H. Martin published the paper "Four-Bar Linkages" in 1958 in the journal Machine Design; the following equations are taken from that paper. When you consider a four-bar linkage as shown below, then you know the angle $\theta_2$ and you want to know the angle that is the sum of $\beta$ and $\lambda$. Considering the triangle OAD, we can write $$ \beta = \sin^{-1} \left( \frac{b}{l} \sin \theta_2 \right).$$ So that is already the first half. For the triangle ABD, we can write $$ \psi = \cos^{-1} \left( \frac{c^2+l^2-d^2}{2cl} \right).$$ Using $\psi$, we can write $\lambda$ as $$ \lambda = \sin^{-1} \left( \frac{c}{d} \sin \psi \right).$$ With these, you should be able to work out a single equation for your problem.
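Not part of the original answer, but the three equations above chain into a short numeric sketch. The link naming (a = ground O-D, b = input O-A, c = coupler A-B, d = output B-D) and the sample lengths below are assumptions for illustration; check them against the figure in Martin's paper before relying on the conventions.

```python
import math

def output_angle(a, b, c, d, theta2):
    """Angle of the output link at D, measured from the ground line,
    following the triangle decomposition in the answer.
    a: ground O-D, b: input O-A, c: coupler A-B, d: output B-D (assumed naming)."""
    # Diagonal l = |AD| from the law of cosines in triangle OAD.
    l = math.sqrt(a**2 + b**2 - 2*a*b*math.cos(theta2))
    # beta: angle ADO, from the law of sines in triangle OAD.
    beta = math.asin(b / l * math.sin(theta2))
    # psi: angle at A in triangle ABD, from the law of cosines.
    psi = math.acos((c**2 + l**2 - d**2) / (2*c*l))
    # lambda: angle BDA, from the law of sines in triangle ABD.
    lam = math.asin(c / d * math.sin(psi))
    return beta + lam

# Hypothetical link lengths; theta2 is measured at O from the ground line.
theta4 = output_angle(a=4.0, b=2.0, c=3.5, d=2.5, theta2=math.pi/2)

# Sanity check: reconstruct the joints and verify the coupler closes the loop.
O, D = (0.0, 0.0), (4.0, 0.0)
A = (0.0, 2.0)                                   # input link at theta2 = 90 degrees
B = (D[0] - 2.5*math.cos(theta4), 2.5*math.sin(theta4))
coupler = math.hypot(B[0] - A[0], B[1] - A[1])   # should equal c = 3.5
```

The sign choices of the inverse trig functions correspond to the open (non-crossed) branch of the linkage; the crossed branch takes the other solutions.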
{ "domain": "engineering.stackexchange", "id": 4341, "tags": "mechanical-engineering, dynamics, kinematics" }
Algorithm to get all unique pair combinations
Question: I am working on making a site which outputs all possible combinations of Secret Santa pairs. I currently have this working by having the site visitor input the participants and any known couples. Once this is done, the visitor simply clicks the 'Calculate' button and it outputs the unique pairs (minus known couples, because they don't want to be paired), and the unique combinations of unique pairs. Current Implementation To achieve this, I have written what I believe to be an efficient algorithm to generate the unique pairs (minus the known couples):

function generateUniquePairs($participants, $known_couples) {
    $participantCount = count($participants);
    $pairs = [];
    $pos = 0;
    for ($i = 0; $i < $participantCount; $i++) {
        for ($j = $i + 1; $j < $participantCount; $j++) {
            $tempPair = [$participants[$i], $participants[$j]];
            if ($known_couples) {
                $isKnownCouple = false;
                foreach ($known_couples as $index => $known_couple) {
                    if (($tempPair[0] === $known_couple[0] && $tempPair[1] === $known_couple[1]) ||
                        ($tempPair[0] === $known_couple[1] && $tempPair[1] === $known_couple[0])) {
                        $isKnownCouple = true;
                    }
                }
                if ($isKnownCouple === false) {
                    $pairs[$pos++] = $tempPair;
                }
            } else {
                $pairs[$pos++] = $tempPair;
            }
        }
    }
    return $pairs;
}

Which I then pass to a modified (and not efficient) \$\binom{n}{k}\$ algorithm:

$result = array();
$combination = array();

function generateUniqueCombinations(array $unique_pairs, $choose) {
    global $result, $combination;
    $n = count($unique_pairs);
    function inner($start, $choose_, $arr, $n) {
        global $result, $combination;
        if ($choose_ == 0) {
            $combination_count = count($combination);
            $participantCounts = new stdClass();
            $hasDuplicates = false;
            for ($p = 0; $p < $combination_count; $p++) {
                if (!$hasDuplicates) {
                    if (property_exists($participantCounts, $combination[$p][0])) {
                        $hasDuplicates = true;
                    } else {
                        $participantCounts->{$combination[$p][0]} = 1;
                    }
                    if (!$hasDuplicates) {
                        if (property_exists($participantCounts, $combination[$p][1])) {
                            $hasDuplicates = true;
                        } else {
                            $participantCounts->{$combination[$p][1]} = 1;
                        }
                    }
                }
            }
            if (!$hasDuplicates) {
                array_push($result, $combination);
            }
            $hasDuplicates = false;
        } else {
            for ($i = $start; $i <= $n - $choose_; ++$i) {
                array_push($combination, $arr[$i]);
                inner($i + 1, $choose_ - 1, $arr, $n);
                array_pop($combination);
            }
        }
    }
    inner(0, $choose, $unique_pairs, $n);
    return $result;
}

Example Scenario The following participant list and known couples:

$participants = ["Tom", "Leanne", "Connor", "Sophie", "Tony", "Anita"];
$known_couples = [["Tom", "Leanne"]];
$unique_pairs = generateUniquePairs($participants, $known_couples);
$unique_combinations = generateUniqueCombinations($unique_pairs, count($participants) / 2);

should output: // $unique_pairs [["Tom","Connor"],["Tom","Sophie"],["Tom","Tony"],["Tom","Anita"],["Leanne","Connor"],["Leanne","Sophie"],["Leanne","Tony"],["Leanne","Anita"],["Connor","Sophie"],["Connor","Tony"],["Connor","Anita"],["Sophie","Tony"],["Sophie","Anita"],["Tony","Anita"]] // $unique_combinations [[["Tom","Leanne"],["Anita","Tony"]],[["Tom","Anita"],["Leanne","Tony"]],[["Tom","Tony"],["Leanne","Anita"]]] The Problem My code works, but if you use a list of participants > 12, it generates an array of unique pairs of length > 66, which means when it gets to running generateUniqueCombinations($unique_pairs, $choose), it times out. I understand the problem is that my code is doing a lot of throwaway processing: it iterates through hundreds of thousands of potential combinations unnecessarily, because those combinations are not viable due to them containing the same person more than once. Where I need help I would really like to understand if it is possible to skip past those non-viable combinations mathematically, so there is no need to count participants within combinations and iterate unnecessarily. I really would like my site to be more useful than being able to generate combinations of pairs for only <= 12 participants...
Answer: Firstly, the combination of global variables, an inner function, stdClass, and a lack of comments makes it harder than it should be to understand what the code is doing. Speed tests were carried out with 10 participants and one couple. Using foreach instead of a simple indexed for in for ($p = 0; $p < $combination_count; $p++) { and eliminating $combination_count reduced the time from about 9.7 secs to 7.9 secs. Replacing stdClass with a simple array and property_exists with isset reduced it further to 5.3 secs. And that doesn't yet address the biggest obvious problem:

function inner($start, $choose_, $arr, $n) {
    global $result, $combination;
    if ($choose_ == 0) {
        $combination_count = count($combination);
        $participantCounts = new stdClass();
        $hasDuplicates = false;
        ...
        if (!$hasDuplicates) {
            array_push($result, $combination);
        }
        $hasDuplicates = false;
    } else {
        for ($i = $start; $i <= $n - $choose_; ++$i) {
            array_push($combination, $arr[$i]);

This is where you should be checking for duplicates. Otherwise it will select ["Tom","Connor"] as the first pair, ["Tom","Sophie"] as the second pair, and for each of the (42 choose 3) = 11480 triples chosen from the remaining elements of $unique_pairs it will rerun the duplicate-finding code and find that "Tom" is duplicated. Having said that, I think the best approach would be to start from scratch. The problem you're trying to solve is indeed, as discussed in comments, finding all perfect matchings in an almost complete graph. (Or even a complete graph, if there are no couples.) Take a quick sanity check to ask yourself if you really need to find all of them or whether it would suffice to pick one at random. Bear in mind that with \$2n\$ people and no couples the number of perfect matchings is \$\frac{(2n)!}{2^n n!}\$. With 18 people that's already 34459425 matchings, so no algorithm which enumerates all of them is going to scale very far.
Whether you decide to generate them all or to randomise and generate one, I would also reconsider the graph structure. $unique_pairs is essentially a list of the edges in the graph, but since it's almost complete it would be more efficient to represent the graph as a list of vertices (people) and a list of edges which aren't in the graph (couples). If you implement the couples as an associative array then testing whether two people form a couple is very fast. The approach would be something like:

Pick the first person (P) in the list and remove them from the list
For each other person Q in the list who doesn't form a couple with P:
    Remove Q from the list
    Recurse to get partial solutions S
    Add the pair (P, Q) to each partial solution in S, and append to an accumulator
    Restore Q to the list
Return the accumulator
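The recursive sketch above translates almost line-for-line into code. This is an illustrative implementation of the reviewer's outline, not their code; the names are made up, and it also checks the matching-count formula quoted in the answer:

```python
from math import factorial

def all_matchings(people, couples):
    """Enumerate all perfect matchings of `people`, skipping excluded pairs.
    people: list of names; couples: set of frozenset({a, b}) pairs to exclude."""
    if not people:
        return [[]]                      # one empty matching of nobody
    first, rest = people[0], people[1:]  # pick P and remove from the list
    results = []
    for i, partner in enumerate(rest):   # each Q who doesn't form a couple with P
        if frozenset((first, partner)) in couples:
            continue
        remaining = rest[:i] + rest[i+1:]
        for partial in all_matchings(remaining, couples):
            results.append([(first, partner)] + partial)
    return results

people = ["Tom", "Leanne", "Connor", "Sophie", "Tony", "Anita"]
matchings = all_matchings(people, couples={frozenset(("Tom", "Leanne"))})

def count_matchings(two_n):
    # Perfect matchings of 2n people with no exclusions: (2n)! / (2^n * n!)
    n = two_n // 2
    return factorial(two_n) // (2**n * factorial(n))
```

For 6 people with no couples this yields the full 15 matchings; excluding one couple removes the 3 matchings that contained it.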
{ "domain": "codereview.stackexchange", "id": 27854, "tags": "performance, php, algorithm" }
Determining common divisors of two numbers
Question: Is there a better algorithm for finding the common divisors of two numbers? Can this code be shortened?

import java.util.*;

public class Testing1 {
    public static void main(String args[]) {
        Scanner x = new Scanner(System.in);
        System.out.print("Enter the 1st number(larger number) : ");
        int y = x.nextInt();
        System.out.print("Enter the 2nd number : ");
        int z = x.nextInt();
        int ar1[] = new int[y];
        int ar2[] = new int[z];
        for (int i = 1; i < y; i++) { // puts the divisors of the larger number into the array ar1[]
            int l = y % i;
            if (l == 0) ar1[i] = i;
        }
        for (int i = 1; i < z; i++) { // puts the divisors of the other number into the array ar2[]
            int l = z % i;
            if (l == 0) ar2[i] = i;
        }
        System.out.print("Common divisors of " + y + ", " + z + " = ");
        for (int i = 1; i <= ar2.length - 1; i++) { // printing the common elements of both arrays
            if ((ar1[i] == ar2[i]) && ar2[i] != 0) System.out.print(i + " ");
        }
    }
}

Answer: There are a few improvements you can make in your code: First, choose better variable names. I would name the Scanner variable scanner rather than x. Really, x, y, and z are the worst variable names, giving no idea as to what they denote. Secondly, I wouldn't force the user to pass the larger number first and then the smaller number. I would take the burden of finding that out myself. There is no need to store all the divisors of both numbers in two different arrays. Suppose you have to find the common divisors of 2 and 2132340. Will you store all the divisors of 2132340, or simply note that a common divisor cannot be greater than a divisor of 2, and use that fact? Another fact is, all the divisors of num1 (except num1 itself) will be less than or equal to num1 / 2. Now, if num1 < num2, then the common divisors must all be <= num1 / 2. So, iterate the loop till min / 2. Of course, you can move the code that finds the common divisors into a different method, and store them in a List<Integer>. Lastly, it might be possible that the smaller number itself is a divisor of the larger number. So, you need to store that too.
Here's the modified code which looks a bit cleaner:

public static void main(String args[]) {
    Scanner scanner = new Scanner(System.in);
    System.out.print("Enter the 1st number : ");
    int num1 = scanner.nextInt();
    System.out.print("Enter the 2nd number : ");
    int num2 = scanner.nextInt();
    List<Integer> commonDivisor = getCommonDivisor(num1, num2);
    System.out.println(commonDivisor);
}

public static List<Integer> getCommonDivisor(int num1, int num2) {
    List<Integer> list = new ArrayList<Integer>();
    int min = minimum(num1, num2);
    for (int i = 1; i <= min / 2; i++) {
        if (num1 % i == 0 && num2 % i == 0) {
            list.add(i);
        }
    }
    if (num1 % min == 0 && num2 % min == 0) {
        list.add(min);
    }
    return list;
}

public static int minimum(int num1, int num2) {
    return num1 <= num2 ? num1 : num2;
}
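Going one step beyond the review (this is my own addition, not part of the answer): every common divisor of two numbers divides their gcd, so you can restrict the search to divisors of gcd(num1, num2), and only test candidates up to its square root. Sketched in Python for brevity:

```python
from math import gcd, isqrt

def common_divisors(a, b):
    """All common divisors of a and b, i.e. all divisors of gcd(a, b)."""
    g = gcd(a, b)
    small, large = [], []
    for i in range(1, isqrt(g) + 1):   # only test up to sqrt(g)
        if g % i == 0:
            small.append(i)            # i is a divisor...
            if i != g // i:
                large.append(g // i)   # ...and so is its cofactor g // i
    return small + large[::-1]         # ascending order
```

For 2 and 2132340 this inspects only divisors of gcd = 2, instead of looping to a million.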
{ "domain": "codereview.stackexchange", "id": 5270, "tags": "java, optimization, beginner, mathematics" }
Which way will the scale tip?
Question: This problem has bothered me for quite some time and I can't solve it. I have even tried to make a construction, but it sometimes tips to the left and sometimes tips to the right :). When we submerge the body in the water, the water pushes it up. That is an action. At the same time the body pushes the water down. That is a reaction. So the balance should be maintained. But, since the water level rises, the hydrostatic pressure on the bottom is greater, so the right side should go down. There is another similar problem but it's not the same. Please help. Answer: The balance will be maintained because there is no EXTERNAL force applied on the left or right side. This is for the same reason you can't push a car while sitting in it!
{ "domain": "physics.stackexchange", "id": 38837, "tags": "homework-and-exercises, newtonian-mechanics, free-body-diagram, statics, scales" }
Transfer function of a sampler in the s-domain
Question: I would like to model my whole system in the s-domain. This is a mixed system: there is a digital part (corrector, ADC, DAC) and an analog part (plant transfer function, sensors, etc.). I know that it is possible to transform a DAC (Digital to Analog Converter) or a ZOH into the s-domain thanks to the Padé approximation. Nevertheless I do not know how to do this for the ADC, as it is nonlinear like the DAC. Is it possible to do the same with the Padé approximation? Does anyone know the transfer function in the s-domain, or an approximation of the transfer function of the ADC? Thank you and have a nice day! Answer: ADCs don't have a transfer function. An ideal ADC is an ideal sampler with a sampling period T. Real ADCs are not ideal: they have some sampling delays, jitter, quantization noise, thermal noise, etc. Usually the sampling delays are really small compared to the dynamics of the system, so we don't model them. Quantization noise and thermal noise should not be a factor unless the signal you measure is really small. And jitter is not an issue unless the signal frequency is really high. So bottom line, if your ADC is really fast compared to the dynamics of your system and if the measured signal is inside the dynamic range of your ADC, you can pretend that the ADC is not there.
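The answer says to drop the ADC entirely; for the DAC/ZOH side mentioned in the question, here is a quick numeric check (my own, not from the answer) that the first-order Padé approximant e^{-sT} ≈ (1 - sT/2)/(1 + sT/2) turns the ZOH response H(s) = (1 - e^{-sT})/s into the rational function T/(1 + sT/2), and that the two agree closely at frequencies well below the sampling rate:

```python
import cmath

T = 1.0                    # sampling period (arbitrary units)
w = 0.1                    # test frequency, chosen so that w*T << 1
s = 1j * w

zoh  = (1 - cmath.exp(-s * T)) / s   # exact ZOH frequency response
pade = T / (1 + s * T / 2)           # ZOH with e^{-sT} replaced by its (1,1) Pade approximant

rel_err = abs(zoh - pade) / abs(zoh) # should be well under 1% at this frequency
```

Algebraically, substituting the (1,1) Padé approximant gives (1 - (1 - sT/2)/(1 + sT/2))/s = T/(1 + sT/2), which is why the rational form above is exact in that approximation.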
{ "domain": "dsp.stackexchange", "id": 9774, "tags": "discrete-signals, z-transform, laplace-transform, analog-to-digital" }
Why is the Einstein Equivalence Principle the basic assumption for General Relativity instead of the Strong Equivalence Principle?
Question: The Einstein Equivalence Principle (EEP) and the Strong Equivalence Principle (SEP) both state that the trajectory of a point mass in a gravitational field depends only on its initial position and velocity, and is independent of its composition and structure. On top of that, the EEP says that the outcome of any local non-gravitational experiment in a freely falling laboratory is independent of the velocity of the laboratory and its location in spacetime, while the SEP claims that the outcome of any local experiment (gravitational or not) in a freely falling laboratory is independent of the velocity of the laboratory and its location in spacetime. According to all sources that I have read, it is the Einstein Equivalence Principle that we use as one of the hypotheses to derive General Relativity (for example, on page 50 in the book by Sean Carroll. Weinberg also talks about the EP on page 70 of his book, but it was not clear to me which of the two formulations he was choosing). I know that the EEP and SEP are only distinguished by local gravitational effects, like for example gravitational self-interaction energy or maybe some torsion experienced by a particle in a gravitational field. But from my understanding of General Relativity, the assumption that lets us describe the theory using pseudo-Riemannian manifolds is that spacetime must look exactly the same for a free-falling observer as it does for an accelerated observer. If we allow gravitational effects, like maybe gravitational radiation, to change the physics for an observer in a gravitational field, are manifolds still a good description of the physics? So in summary my questions are: Is the EEP enough for General Relativity or should we impose the SEP? If we included a term in our theory that causes objects to rotate as they fall in a gravitational field, it violates the SEP but not the EEP, right? Would such a term be allowed in General Relativity?
Edit: Am I wrong about objects that rotate while they fall satisfying the EEP? I guess you could detect such a rotation with non-gravitational experiments even if the cause was indeed of purely gravitational origin! Does that mean that a theory with the EEP rules out any possible observable effect except maybe affecting how other particles feel gravitational interactions? As in, for example, making the gravitational force between two bodies weaker when they are in free fall in a third body's gravitational field than it would be in an accelerated frame. But wouldn't that also be measurable with non-gravitational experiments like, for example, measuring the resulting velocity? Answer: I look forward to a more informed answer. This entry in Wikipedia, though: Einstein's theory of general relativity (including the cosmological constant) is thought to be the only theory of gravity that satisfies the strong equivalence principle From what I gather, if necessary, General Relativity is compatible with the strong equivalence principle. It is not imposed at present because there is no observation or measurement up to now that makes it necessary to impose it: Thus, the strong equivalence principle can be tested by searching for fifth forces (deviations from the gravitational force-law predicted by general relativity)
{ "domain": "physics.stackexchange", "id": 65522, "tags": "general-relativity, equivalence-principle" }
How does a chaotic situation arise in planetary motion in the solar system?
Question: So I have been recently reading about chaotic motion in the solar system. Some research articles particularly emphasize sensitivity to initial conditions as a prime governing factor behind the rise of a chaotic situation. But what does that physically mean for, let's say, our solar system? I may be asking a really basic question, but one thing that I am just not getting is how exactly I can make sense of sensitivity to initial conditions for our planetary motion. Answer: An easier-to-understand example of chaos is a frictionless billiard table. Suppose you have several perfectly spherical billiard balls in a perfectly straight line. Suppose you shoot the end ball perfectly straight toward the next ball. All the balls will bounce off each other. The end ball bounces off the table and returns to the next ball. All the balls will stay on the line forever. Suppose your aim is slightly off. The first ball hits the second slightly off center on the left. The second ball is deflected slightly off the line to the right. In this way, each ball is deflected. The longer you watch, the farther each ball travels from the line. When struck again, the misalignment will be larger. Soon this will look nothing like the first solution. The reason for the sensitivity to initial conditions is the curvature of the balls. A larger misalignment means a larger deflection angle. If the imperfection was instead a slight misalignment of one edge of the table, balls would be deflected the wrong way, but the deviation from a perfect table would not grow as quickly. For the solar system, several planets in nearly circular orbits aren't obviously chaotic. Replace them with many planets moving in random initial directions. Every so often two planets will pass close to each other and deflect each other strongly. Just how strongly depends on exactly how close they pass and their relative velocities. Small changes in these parameters make a large difference in deflection.
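The billiard-table story is hard to simulate in a few lines, but the same sensitivity shows up in any chaotic system. As an illustration of my own (not from the answer), take the logistic map x → 4x(1 − x), a standard chaotic toy model, and track two trajectories whose initial conditions differ by only 10⁻¹⁰:

```python
def logistic(x):
    return 4.0 * x * (1.0 - x)   # fully chaotic parameter r = 4

x, y = 0.2, 0.2 + 1e-10          # two almost identical initial conditions
gap = []
for _ in range(50):
    x, y = logistic(x), logistic(y)
    gap.append(abs(x - y))

# The separation grows roughly exponentially (Lyapunov exponent ~ ln 2 per step)
# until it saturates at the size of the attractor itself, after which the two
# trajectories are effectively unrelated, just like the billiard balls.
```

A 10⁻¹⁰ initial difference reaches order-one separation within a few dozen iterations; this doubling of small errors per characteristic time is what "sensitivity to initial conditions" means quantitatively.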
{ "domain": "physics.stackexchange", "id": 39556, "tags": "chaos-theory" }
Subscribe and publish velocity
Question: Hi, I have some code where I want to see the linear velocity of a powered wheelchair and then publish a linear velocity to it. An example of the code is:

ros::Publisher vel_pub_ = n.advertise<geometry_msgs::Twist>("cmd_vel", 1);

if (vel.linear.x > 1.8) {
    vel.linear.x = 1.8;
    vel_pub_.publish(vel);
}

--UPDATE-- To subscribe to the cmd_vel topic I do:

ros::Subscriber vel_sub = n.subscribe("cmd_vel", 0, velCallback);

My question is... What do I insert in the velCallback function? What does it contain? What should the header of the function be? Thank you! Originally posted by anamcarvalho on ROS Answers with karma: 123 on 2014-08-08 Post score: 1 Answer: If you want to synchronize the data and use one callback for both of the topics, have a look at the message_filter tutorial. Edit: The callback should look something like this (no guarantee that this works though, beware of typos):

void velCallback(const geometry_msgs::Twist::ConstPtr& vel)
{
    // Since vel is a const pointer you cannot edit the values inside but have to use the copy new_vel.
    geometry_msgs::Twist new_vel = *vel;
    if (vel->linear.x > 1.8) {
        new_vel.linear.x = 1.8;
        vel_pub_.publish(new_vel);
    }
}

Please have a look at this tutorial to see how subscribers and publishers work.
I would also suggest using a queue size of at least 1:

ros::Subscriber vel_sub = n.subscribe("cmd_vel", 1, velCallback);

Edit2: For the case you mentioned in the comments something like this could work (not tested, beware of typos):

#include "ros/ros.h"
#include "geometry_msgs/Twist.h"

ros::Publisher pub;

void velCallback(const geometry_msgs::Twist::ConstPtr& vel)
{
    geometry_msgs::Twist new_vel = *vel;
    if (vel->linear.x > 1.8) {
        new_vel.linear.x = 1.8;
    }
    pub.publish(new_vel);
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "my_node");
    ros::NodeHandle n;
    pub = n.advertise<geometry_msgs::Twist>("cmd_vel", 10);
    ros::Subscriber sub = n.subscribe("my_cmd_vel", 10, velCallback);
    ros::spin();
    return 0;
}

You only have to make sure that the node publishing the original cmd_vel values publishes to my_cmd_vel now, which you can achieve via remapping the topic in the launch file. The node I posted above will take that value and pass it on to cmd_vel, or change the linear speed if necessary and then pass it on to cmd_vel. Since the callback is running in a different thread than your main function, you cannot rely on always having a value for your linear_actual in the main function. So do whatever you need to do with the actual values in the callback. In this easy example this is good enough. Originally posted by Chrissi with karma: 1642 on 2014-08-10 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by anamcarvalho on 2014-08-13: Hi @Chrissi, do you think this will work?

void velCallback(const geometry_msgs::Twist::ConstPtr& vel_msg)
{
    ros::NodeHandle n;
    ros::Publisher vel_pub_ = n.advertise<geometry_msgs::Twist>("cmd_vel", 1);
    geometry_msgs::Twist vel;
    linear_actual = vel_msg->linear.x;
    angular_actual = vel_msg->angular.z;
}

Comment by anamcarvalho on 2014-08-13: What I do in the code I inserted in the comment above is update the linear and angular velocity to the current velocities!
Then, in the main function I do:

if (linear_actual > 1.8) {
    vel.linear.x = 1.8;
    vel_pub_.publish(vel);
}

Is this ok? Comment by Chrissi on 2014-08-13: I wouldn't create the publisher in the callback because then the publisher will be created whenever a new message arrives, which is unnecessary. Even though a kitten dies every time you use a global variable, I would make it a global variable. The thing with the cmd_vel is that still both messages will reach the robot, the one with linear.x > 1.8 and your new one, and the robot will not know what to do and start stuttering. You have to remap the output of whatever publishes the cmd_vel in the first place to a new topic and listen to that. Then you pass on every message you get to cmd_vel and change it where necessary. Comment by anamcarvalho on 2014-08-13: Sorry, my bad! I forgot to delete the publisher in the code!

void velCallback(const geometry_msgs::Twist::ConstPtr& vel_msg)
{
    linear_actual = vel_msg->linear.x;
    angular_actual = vel_msg->angular.z;
}

The code is just like this... It just updates the current velocity! Comment by anamcarvalho on 2014-08-13: The rest of the code, the comparison to see if it's greater than 1.8, I need to have in main, because in the real code I have, it isn't so simple! So I really need to do this comparison inside main! Is it ok this time? Comment by Chrissi on 2014-08-13: You have to make sure the callback is not writing to the variable while you are trying to read it. See this thread on how to use boost scoped_lock. Comment by anamcarvalho on 2014-08-13: This updates at a rate of 40Hz... It will be a matter of milliseconds, so I am assuming that shouldn't be a problem!
{ "domain": "robotics.stackexchange", "id": 18970, "tags": "ros, node, velocity, topic, subscribe" }
Torque when jumping from AT-REST Merry Go Round, Conservation of Angular momentum
Question: Say there is a child standing on the edge of an AT REST merry-go-round. When they jump off, the child has a linear velocity, and the merry-go-round begins to turn. They say there is no net torque on the merry-go-round/child system and angular momentum is conserved. I understand the torque applied to the merry-go-round from the child (force of child x radius of merry-go-round), but how does the merry-go-round provide a torque to the child? Doesn't it just apply a force? This is the same issue as a child jumping onto a merry-go-round while coming in straight/tangentially to the edge. Answer: I understand the torque applied to the merry-go-round from the child (force of child x radius of merry-go-round), but how does the merry-go-round provide a torque to the child? Doesn't it just apply a force? Torques are "just forces", but interpreted about a specific point or axis. If we have an axis of consideration (probably the platform axis is convenient), then a force applied off-axis is also a torque. So for any torque the child applies to the platform, the opposite torque is applied to the child (even though the child doesn't start rotating). The sum of torques and the sum of angular momentum remain zero. Why doesn't the child rotate, though, if they are experiencing a torque? Unbalanced torques create a change in angular momentum. Rotations are only one form of angular momentum. Motion that is not collinear with the axis is also a form of angular momentum. Does this change the work done by the child also, since now they are accelerating linearly and yet gaining angular momentum? No. The work done is the same. Just because we account for the motion as having angular momentum in this case doesn't change the energy transfer.
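A quick numeric version of that bookkeeping (the numbers are made up for illustration): a child of mass m jumps off tangentially at speed v from radius r, so their straight-line motion carries angular momentum m·v·r about the axle, and the platform must pick up the opposite amount:

```python
m, v, r = 30.0, 2.0, 1.5     # child mass (kg), jump speed (m/s), platform radius (m); assumed values
I = 120.0                    # platform moment of inertia about its axle (kg m^2); assumed

L_child = m * v * r          # angular momentum of the child's straight-line motion about the axle
omega = -L_child / I         # spin the platform must acquire so the total stays zero
L_platform = I * omega

total = L_child + L_platform # started at zero, stays at zero
```

The child never rotates about their own axis, yet their off-axis linear motion carries angular momentum about the platform axle, which is exactly the point the answer makes.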
{ "domain": "physics.stackexchange", "id": 71096, "tags": "classical-mechanics" }
Why can the divergent part of the 1-loop correction to the photon propagator *not* depend on the fermion mass $m$?
Question: Last week my QFT II professor claimed that the divergent part of the diagram of the one-loop correction to the photon propagator by means of a fermion cannot depend on the fermion's mass. I haven't been able to find out the reason for this. I have checked that indeed the divergent part of the diagram doesn't depend on $m^2$ (where $m$ is the mass of the fermion), but I don't understand why this must be like this, or how we could know this prior to the calculation. My guesses are: The divergence occurs when $k\rightarrow\infty$ (UV-divergent), and so the mass of the fermion can be neglected. The fermion having a mass in the divergent term would imply adding a mass to the photon, which would break gauge symmetry (I'm not sure if this makes any sense). Answer: Let us sketch an argument: The photon vacuum polarization $\Pi^{\mu\nu}(p,m)$ in $d=4$ has superficial degree of divergence (SDOD) $D=2$. Each time we differentiate wrt. the photon momentum $p^{\mu}$ or the fermion mass $m$, we effectively gain 1 more fermion propagator, and hence lower the SDOD by 1 unit, cf. Ref. 1. Lorentz covariance dictates that the tensor structure $\Pi^{\mu\nu}$ comes from either $p^{\mu}p^{\nu}$ or $g^{\mu\nu}$. We can therefore Taylor expand around $p=0$: $$\begin{align} \Pi^{\mu\nu}(p,m)~=~&A(m) g^{\mu\nu}\Lambda^2\cr &+ \{B(m) g^{\mu\nu}p^2 + C(m) p^{\mu}p^{\nu}\} \ln\Lambda \cr &+ \text{finite terms},\end{align}\tag{1} $$ where $\Lambda$ is a UV momentum cut-off. Then by differentiation: $$\begin{align} \frac{\partial\Pi^{\mu\nu}(p,m)}{\partial m}~=~&A^{\prime}(m) g^{\mu\nu}\Lambda^2\cr &+ \{B^{\prime}(m) g^{\mu\nu}p^2 + C^{\prime}(m) p^{\mu}p^{\nu}\} \ln\Lambda \cr &+ \text{finite terms}.\end{align}\tag{2} $$ The LHS of eq. (2) has SDOD $D=1$, so the singular terms$^1$ on the RHS of eq. (2) must vanish. This answers OP's question: The divergent terms do not depend on the fermion mass $m$.
Let us mention for completeness that from a Ward identity, we know that the vacuum polarization should be transversal, so in fact $$ A~=~0, \qquad B + C ~=~0.$$ References: M.E. Peskin & D.V. Schroeder, An Intro to QFT; Section 10.1, p. 319. -- $^1$ Note that already in eq. (1) we may from section 1 assume w.l.o.g. that $B(m)$ and $C(m)$ do not depend on $m$.
{ "domain": "physics.stackexchange", "id": 85685, "tags": "quantum-field-theory, mass, quantum-electrodynamics, renormalization, dimensional-analysis" }
Time complexity of Max counters
Question: As per the instructions given in MaxCounters-Codility: You are given N counters, initially set to 0, and you have two possible operations on them: increase(X) − counter X is increased by 1, max counter − all counters are set to the maximum value of any counter. A non-empty array A of M integers is given. This array represents consecutive operations: if A[K] = X, such that 1 ≤ X ≤ N, then operation K is increase(X), if A[K] = N + 1 then operation K is max counter. I have written this code:

public int[] maxCount(int[] A, int N) {
    int[] I = new int[N];
    for (int i = 0; i < A.length; i++) {
        try {
            I[A[i] - 1]++;
        } catch (Exception e) {
            Arrays.sort(I);
            Arrays.fill(I, I[I.length - 1]);
        }
    }
    return I;
}

It gives correct answers for all test cases. Any idea how to do this with time complexity O(N)? It's currently O(N*M). Answer: Let's start with the time complexity of your current algorithm: for (int i = 0; i < A.length; i++) has a time complexity of O(M) (the length of array 'A' is 'M'), Arrays.sort(I) has a time complexity of O(N*log(N)) [1], and Arrays.fill(I, I[I.length - 1]) has a time complexity of O(N) (the number of counters). That means the complexity of your current algorithm is O(M * N * log(N)). You can replace the sorting by keeping track of the maximum value for all counters like this:

public int[] maxCount(int[] A, int N) {
    int[] I = new int[N];
    // Initialize the max value to 0
    int max = 0;
    for (int i = 0; i < A.length; i++) {
        if (A[i] == N + 1) {
            Arrays.fill(I, max);
        } else {
            I[A[i] - 1]++;
            if (I[A[i] - 1] > max) {
                // Update the max value
                max = I[A[i] - 1];
            }
        }
    }
    return I;
}

The time complexity of this version is now O(M * N). This version also uses if statements to control the flow of the program, as opposed to exceptions, which is an anti-pattern [2]. UPDATE: I've used the suggestion of Mees de Vries from his comment to implement a data structure for the problem. The complexity of the function reading the instructions, incrementCounters(), is O(n).
public class SynchronizedCounters {
    private int[] counters;
    private int size;
    private int base = 0;
    private int max = 0;
    private final int INSTRUCTION_OFFSET = 1;

    public SynchronizedCounters(int size) {
        this.size = size;
        this.counters = new int[size];
    }

    public void incrementCounters(int[] instructions) {
        for (int instruction : instructions) {
            int instruct = instruction - INSTRUCTION_OFFSET;
            if (instruct >= size) {
                base = max;
            } else {
                normalizeCounter(instruct);
                counters[instruct]++;
                if (counters[instruct] > max) {
                    max = counters[instruct];
                }
            }
        }
    }

    public Integer getCounterValue(int counter) {
        normalizeCounter(counter);
        return counters[counter];
    }

    private void normalizeCounter(int index) {
        counters[index] = java.lang.Math.max(counters[index], base);
    }
}

Example using the class:

public static void main(String[] args) {
    SynchronizedCounters synchronizedCounters = new SynchronizedCounters(5);
    synchronizedCounters.incrementCounters(new int[]{1, 1, 1, 3, 2, 1, 1, 6, 2, 3});
    System.out.println("Value of first counter: " + synchronizedCounters.getCounterValue(0));
}

Output:

Value of first counter: 5

[1] https://stackoverflow.com/questions/21219777/the-running-time-for-arrays-sort-method-in-java
[2] https://web.archive.org/web/20140430044213/http://c2.com/cgi-bin/wiki?DontUseExceptionsForFlowControl
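The same lazy-maximum idea as the Java class above, condensed into Python (my own transcription, not part of the original answer): `base` records the floor set by the last max-counter operation, counters are lifted to that floor only when touched, and one final pass settles the rest, giving O(N + M) overall:

```python
def max_counters(N, A):
    counters = [0] * N
    base = 0      # floor set by the most recent "max counter" operation
    best = 0      # current maximum over all counters
    for op in A:
        if op == N + 1:
            base = best                           # O(1) instead of filling the array
        else:
            i = op - 1
            counters[i] = max(counters[i], base) + 1
            best = max(best, counters[i])
    return [max(c, base) for c in counters]       # settle outstanding floors once, O(N)
```

On the classic Codility example, max_counters(5, [3, 4, 4, 6, 1, 4, 4]) gives [3, 2, 2, 4, 2].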
{ "domain": "codereview.stackexchange", "id": 32008, "tags": "java, programming-challenge, time-limit-exceeded, complexity" }
Limits of integration
Question: In the following video, can someone explain why he took the limits of integration to be from $-\frac{\pi}{2}$ to $\frac{\pi}{2}$? https://www.youtube.com/watch?v=bJWFgJTxIFk&index=9&list=PLYVDsiuOZP5pNzoB-e4ugTz96dGpspdDI Answer: The person in the video took $\theta$ to be measured from the vertical axis; this was stated in the video itself. You might be more familiar with measuring an angle counter-clockwise from the 3 o'clock position. If you want to measure angles that way, then you can go from $0$ to $\pi$, as the maker of the video explained in the comments to the video.
{ "domain": "physics.stackexchange", "id": 24180, "tags": "classical-mechanics" }
How to find the percentage of chloroderivatives after monochlorination of a compound
Question: There is a series of examples in my textbook for working out chlorination selectivity. The example was to find the percentage of monochloro derivatives of n-pentane after free-radical halogenation, giving the products 1-, 2-, and 3-chloropentane (assuming the relative reactivity of all H atoms to be the same). The amount of each isomer formed depends on its rate of formation; therefore, the ratio of amounts (i.e., relative amounts) is equal to the ratio of rates. Relative amount of products = (Total number of equivalent hydrogens) * (Relative reactivity) For 1-chloropentane, there are two 1° carbons and so six 1° H atoms. After this I cannot understand how to get 4 and 2 H-equivalents for the other products. First, I tried to do it the same way I did for n-butane (six 1° H and four 2° H equivalents for 1-chlorobutane and 2-chlorobutane, respectively), but my answer was wrong. I searched some sites to find what I was missing. It seems to me that at least my initial approach was right: H-equivalents means 1°/2°/3° H atoms, and the rest is to go by the formula to get the answer? I tried again, but I am still not able to understand how there are 4 and 2 equivalents for the rest of the products. Do H-equivalents mean something else, and what key component am I missing? Here is one of the solutions (without assuming the same reactivity for H equivalents): Answer: A normal alkane $\ce{C_nH_{2n+2}}$ has two sorts of carbon atoms: $1$) terminal $\ce{C}$ atoms, which hold $3$ $\ce{H}$ atoms, and $2$) internal $\ce{C}$ atoms, holding only $2$ $\ce{H}$ atoms. Removing one $\ce{H}$ atom from anywhere in the molecule produces an alkyl radical $\ce{C_nH_{2n+1}}$. These radicals are primary ($1$-alkyl) if the $\ce{H}$ atom was attached to a terminal position, and secondary if the $\ce{H}$ atom comes from an internal $\ce{C}$ atom. The first alkane with both terminal and internal $\ce{C}$ atoms is propane $\ce{C3H8}$ with $n=3$. In propane $\ce{C3H8}$, the first and the last $\ce{C}$ atoms are terminal atoms.
Removing one of the $3$ $\ce{H}$ atoms per terminal $\ce{C}$ atom produces a $1$-propyl radical, and each terminal $\ce{C}$ atom can do so. Removing one of the two internal $\ce{H}$ atoms produces a $2$-propyl radical. In total, removing one of the eight $\ce{H}$ atoms in propane can produce a $1$-propyl radical in six different ways and a $2$-propyl radical in two different ways. For butane $\ce{C_4H_{10}}$, there are two terminal $\ce{C}$ atoms and two internal $\ce{C}$ atoms. Removing one $\ce{H}$ atom among the $10$ available can produce a $1$-butyl radical in $2 \cdot 3 = 6$ different ways (the same number as for propane). But removing one of the $4$ $\ce{H}$ atoms attached to one of the two internal $\ce{C}$ atoms can produce a $2$-butyl radical in $4$ different ways. In total, removing one of the ten $\ce{H}$ atoms in butane can produce a $1$-butyl radical in six different ways and a $2$-butyl radical in four different ways. Now I think you can discover for yourself why pentane $\ce{C_5H_{12}}$ can be chlorinated in $12$ ways to give chloropentane: six ways give $1$-chloropentane, four give $2$-chloropentane, and two give $3$-chloropentane.
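Putting those H-equivalent counts together under the question's equal-reactivity assumption, the product distribution is simple arithmetic; a quick sketch (counts taken from the reasoning above, variable names my own):

```python
# Arithmetic for the equal-reactivity case from the question.
# n-Pentane has 12 H atoms in total:
h_equivalents = {
    "1-chloropentane": 6,  # two terminal CH3 groups, 3 H each
    "2-chloropentane": 4,  # the two equivalent CH2 groups at C2 and C4
    "3-chloropentane": 2,  # the single central CH2 at C3
}
total = sum(h_equivalents.values())  # 12
percentages = {p: 100 * n / total for p, n in h_equivalents.items()}
# -> 50%, ~33.3%, ~16.7%
```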
{ "domain": "chemistry.stackexchange", "id": 17324, "tags": "organic-chemistry, halogenation, halogens" }
The 4D Physics of a Superstring Background
Question: Suppose I have a compactification of string theory to four Minkowski dimensions of the form $M^{1,3}\times X$, where the internal CFT on $X$ is a $c=9$, $(2,2)$ SCFT. For example, let the SCFT on $X$ be described by the IR fixed point of an LG model with the superpotential $W(\Phi)=\Phi^p$. Can someone explain to me (hopefully in detail) how we can extract the space-time physics in $M^{1,3}$ from the data of the internal CFT? What is the relation between the world-sheet superpotential $W(\Phi)=\Phi^p$ and the space-time superpotential? What is the space-time interpretation of the chiral ring of $W(\Phi)$? How are the non-perturbative corrections to the space-time superpotential related to corrections to $W(\Phi)$? I know this is a basic question in string theory and it should be easy to find an answer in Polchinski but I hope someone will clarify things further. Answer: There is no simple relationship between the worldsheet objects and spacetime objects (such as superpotentials) in this curved case (and many other general cases). The LG-like models correspond to Calabi-Yau compactifications whose typical radii are comparable to the string length so it is very hard to see the spacetime geometry - and moreover, the mirror symmetric geometry is as easy or as hard to see as the "original" one, whichever is which. Moreover, much of the world sheet objects are auxiliary and don't survive into spacetime. In particular, the whole world sheet supersymmetry is an auxiliary symmetry - a gauge symmetry - that doesn't really survive in spacetime. Its existence has consequences in spacetime (nonrenormalization theorems etc.) but its detailed building blocks don't have any counterparts because they're partly "unphysical states" from the spacetime viewpoint.
So the way to proceed when identifying the topology of Gepner-like models is to find the Hodge numbers and primary operators, choose a corresponding topology of a Calabi-Yau space that matches the constraints, and verify that this candidate is indeed equivalent. To see that there can't be any coordinate-by-coordinate map between the world sheet pieces and spacetime physics, note that the central charges of the world sheet CFT are fractional. The relationship between Gepner models and particular Calabi-Yaus is a form of duality, and as any duality, it must be nontrivial to be seen, otherwise it wouldn't be interesting and it wouldn't really be a "duality".
{ "domain": "physics.stackexchange", "id": 910, "tags": "string-theory" }
Atomically update robot joints and position without affecting sensors
Question: The gazebo threading model requires all sensors and plugins to effectively operate within their own threads, with an assumption that no one is clobbering the others data. As far as it seems, this is OK as long as physics is running, as I believe the physics update happens outside of the sensor updates. In the case of a plugin modifying the position of an object, in real-time, however, the sensor does not necessary know the true state of the world at the time it was run in its own thread. Thread A: updates a pose manually (i.e. manually setting the pose of a body or the position of a joint) Thread B: a laser scanner or camera viewing the scene If A wakes up at t=0.5 and performs its update, while B also wakes up at t=0.5, it is very difficult to know whether the sensor collected data before or after A did its action. Are there any built-in methods to allow such synchronization and make these changes atomic? Originally posted by gm on Gazebo Answers with karma: 1 on 2013-05-22 Post score: 0 Answer: Most sensors should collect data after a physics update. The method used by each sensor varies slightly but they mainly generate sensor data based on timestamped physics data published by the physics thread after an update. The same timestamp is attached to the message the sensors publish. In gazebo 1.8, a similar mechanism is put in place for rendering based sensors such as gpu ray sensor and camera. Sensors that do not enforce this yet are the physics based ray sensor and rfid sensor. Originally posted by iche033 with karma: 1018 on 2013-05-23 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 3309, "tags": "gazebo" }
Exceeding speed of light in simple mechanical setup
Question: Suppose a system as shown. If I pull blocks $m_1$ and $m_2$ (the two side masses) with speed $v$, and $\theta$ is the angle mentioned, the speed of the central block becomes $v\sec\theta$ in the vertical direction. If $\theta$ approaches $\pi/2$, does the speed of the central mass then EXCEED THE SPEED OF LIGHT? How? Where did I get enough energy (of order $\frac{1}{2}mc^2$) to take the central block to that speed? This can be extended to any setup where we get $\sec\theta$ as a multiplier. How is this possible? Answer: It is not possible. Leaving aside relativistic effects, as $\theta$ increases, the force you need to apply to move the outer blocks at a constant speed to raise the inner block increases more rapidly than $\sec\theta$, so the required force would approach infinity as $\theta$ approached a right angle. When you add in relativistic effects, the force would rise even more rapidly. In reality, for any finite force, $\theta$ would never reach a right angle, and in any case the apparatus would break under the strain.
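To see the divergence numerically, here is a tiny Python illustration (my own addition, not part of the answer) of the $\sec\theta$ multiplier blowing up as $\theta \to \pi/2$:

```python
import math

# The question's kinematic multiplier: vertical speed = v * sec(theta).
# As theta -> pi/2 the multiplier diverges, which is the first hint that
# no finite force can actually drive the linkage into that configuration.
def vertical_speed(v, theta):
    return v / math.cos(theta)

# sample the multiplier for angles creeping up on pi/2 (radians)
multipliers = [vertical_speed(1.0, t) for t in (1.0, 1.5, 1.57, 1.5707)]
```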
{ "domain": "physics.stackexchange", "id": 87480, "tags": "homework-and-exercises, special-relativity, classical-mechanics, speed-of-light, string" }
What is the concept of the technological singularity?
Question: I've heard the idea of the technological singularity, what is it and how does it relate to Artificial Intelligence? Is this the theoretical point where Artificial Intelligence machines have progressed to the point where they grow and learn on their own beyond what humans can do and their growth takes off? How would we know when we reach this point? Answer: The technological singularity is a theoretical point in time at which a self-improving artificial general intelligence becomes able to understand and manipulate concepts outside of the human brain's range, that is, the moment when it can understand things humans, by biological design, can't. The fuzziness about the singularity comes from the fact that, from the singularity onwards, history is effectively unpredictable. Humankind would be unable to predict any future events, or explain any present events, as science itself becomes incapable of describing machine-triggered events. Essentially, machines would think of us the same way we think of ants. Thus, we can make no predictions past the singularity. Furthermore, as a logical consequence, we'd be unable to define the point at which the singularity may occur at all, or even recognize it when it happens. However, in order for the singularity to take place, AGI needs to be developed, and whether that is possible is quite a hot debate right now. Moreover, an algorithm that creates superhuman intelligence (or superintelligence) out of bits and bytes would have to be designed. By definition, a human programmer wouldn't be able to do such a thing, as his/her brain would need to be able to comprehend concepts beyond its range. 
There is also the argument that an intelligence explosion (the mechanism by which a technological singularity would theoretically be formed) would be impossible: the difficulty of the design challenge of making itself more intelligent may grow in proportion to its intelligence, so that the difficulty of the design outpaces the intelligence available to solve it. Also, there are related theories involving machines taking over humankind, and all of that sci-fi narrative. However, that's unlikely to happen if Asimov's laws are followed appropriately. Even if Asimov's laws were not enough, a series of constraints would still be necessary in order to avoid the misuse of AGI by ill-intentioned individuals, and Asimov's laws are the nearest thing we have to that.
{ "domain": "ai.stackexchange", "id": 310, "tags": "philosophy, definitions, agi, superintelligence, singularity" }
How to make a decision tree with both continuous and categorical variables in the dataset?
Question: Let's say I have 3 categorical and 2 continuous attributes in a dataset. How do I build a decision tree using these 5 variables? Edit: For categorical variables, it is easy to say that we will split them just by {yes/no} and calculate the total gini gain, but my doubt tends to be primarily with the continuous attributes. Let's say I have values for a continuous attribute like {1,2,3,4,5}. What will be my split point choices? Will they be checked at every data point like {<1,>=1...... & so on} or will the splitting point be something like the mean of the column? Answer: Decision trees can handle both categorical and numerical variables at the same time as features; there is no problem in doing that. Theory Every split in a decision tree is based on a feature. If the feature is categorical, the split is done with the elements belonging to a particular class. If the feature is continuous, the split is done with the elements higher than a threshold. At every split, the decision tree will take the best variable at that moment. This will be done according to an impurity measure computed on the split branches. And the fact that the variable used to split is categorical or continuous is irrelevant (in fact, decision trees categorize continuous variables by creating binary regions with the threshold). Implementation Although, at a theoretical level, it is very natural for a decision tree to handle categorical variables, most of the implementations don't do it and only accept continuous variables: This answer discusses scikit-learn decision trees not handling categorical variables. However, one of the scikit-learn developers argues that At the moment it cannot. However RF tends to be very robust to categorical features abusively encoded as integer features in practice. This other post comments about xgboost not handling categorical variables.
rpart in R can handle categories passed as factors, as explained here. Lightgbm and catboost can handle categories. Catboost does "on the fly" target encoding, while lightgbm requires you to encode the categorical variable using ordinal encoding. Here's an example of how lightgbm handles categories: import pandas as pd from sklearn.datasets import load_iris from lightgbm import LGBMRegressor from category_encoders import OrdinalEncoder X = load_iris()['data'] y = load_iris()['target'] X = OrdinalEncoder(cols=[3]).fit_transform(X) dt = LGBMRegressor() dt.fit(X, y, categorical_feature=[3])
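To address the edit about split points directly: typical CART-style implementations sort the unique values of the continuous feature and try a threshold at the midpoint between each pair of consecutive values, keeping the one with the lowest impurity (so for {1,2,3,4,5} the candidates are 1.5, 2.5, 3.5, 4.5 — not the mean). A minimal sketch of that search (my own illustration, not taken from any library):

```python
def gini(labels):
    # Gini impurity of a list of class labels
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_threshold(values, labels):
    """Try a split at the midpoint between each pair of consecutive
    sorted unique values; return (threshold, weighted Gini impurity)."""
    pairs = sorted(zip(values, labels))
    xs = sorted(set(values))
    best = (None, float("inf"))
    n = len(pairs)
    for lo, hi in zip(xs, xs[1:]):
        t = (lo + hi) / 2.0
        left = [y for x, y in pairs if x <= t]
        right = [y for x, y in pairs if x > t]
        score = len(left) / n * gini(left) + len(right) / n * gini(right)
        if score < best[1]:
            best = (t, score)
    return best
```

For values {1,2,3,4,5} with labels {0,0,0,1,1}, the search picks the midpoint 3.5, which separates the classes perfectly (weighted impurity 0).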
{ "domain": "datascience.stackexchange", "id": 5210, "tags": "machine-learning, decision-trees" }
What are the partial differential equations for Solid Stress Analysis?
Question: When using finite element analysis for fluids we solve the Navier-Stokes equations and the continuity equation; when solving for temperature we solve the heat equation and Fourier's law; when dealing with diffusion we solve Fick's law. But when dealing with solid mechanics, what are the PDEs? The usual Hooke's law may relate stress to strain, but where do we get the partial time term? Is it because $$ F=ma=m \frac{d^2x}{dt^2}~? $$ Answer: After going through Comsol's documentation I found that the governing equations are: Newton's second law: $\boldsymbol{\nabla}\cdot\boldsymbol{\sigma} + \mathbf{F} = \rho\ddot{\mathbf{u}}$; Hooke's law: $ \boldsymbol{\sigma} = \mathsf{C}:\boldsymbol{\varepsilon}$; and linearized strain: $ \boldsymbol{\varepsilon} =\tfrac{1}{2} \left[\boldsymbol{\nabla}\mathbf{u}+(\boldsymbol{\nabla}\mathbf{u})^\mathrm{T}\right] \, .$
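For a homogeneous isotropic material, substituting Hooke's law and the linearized strain into the balance of momentum collapses these three relations into a single displacement equation, the Navier-Cauchy equation (a standard result added here for context; $\lambda$ and $\mu$ are the Lamé parameters, not symbols from the Comsol page):

$$ (\lambda + \mu)\,\boldsymbol{\nabla}(\boldsymbol{\nabla}\cdot\mathbf{u}) + \mu\,\nabla^2\mathbf{u} + \mathbf{F} = \rho\ddot{\mathbf{u}} $$

The $\rho\ddot{\mathbf{u}}$ term on the right is exactly the $F = ma$ contribution the question asks about; dropping it recovers static stress analysis.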
{ "domain": "physics.stackexchange", "id": 64238, "tags": "elasticity, continuum-mechanics, stress-strain, solid-mechanics" }
On the theoretical aspects of the development of the first nuclear bombs
Question: I've just read that 68 years ago Little Boy was dropped on Hiroshima, which made me wonder about some rather historical facts about the development of the first nuclear bombs; they seem to be several questions, but they boil down to the same thing: the theoretical aspects of the development of the bombs. As far as I know, nuclear fission was already understood at the time, so was all the competition between the American team and the German team to develop the bomb first a mere matter of engineering? Or what was the role of theoretical physicists such as Richard Feynman? Also, I've heard that the team led by Werner Heisenberg had misconceptions about nuclear fission, so that his team could not develop the bomb first. Can anyone put the theoretical aspects of the development of the bombs in perspective, taking these historical issues into account? Answer: As far as I know, nuclear fission was already understood at the time, so was all the competition between the american team and the german team to develop the bomb first, a mere matter of engineering? Only some very, very basic knowledge about the physics of nuclear fission was available at this time. I'll give a few details about this below. Also, according to this 1967 interview with Heisenberg, it's probably not accurate to imagine a competition between the US and the Nazis to build a bomb; the Germans were struggling to keep fighting at all in Europe, and didn't think it was realistic to produce more than research reactors given the time and resources they had available. The liquid drop model dated back to 1935, so physicists, even non-specialists like Einstein, could readily understand the basic idea that fission was possible and that it should release neutrons, making a chain reaction possible.
However, fission is a tunneling process, and tunneling depends exponentially on the width and height of the barrier, making it extremely difficult, even today, to calculate fission rates from first principles to even order-of-magnitude precision. People today would typically approach this kind of calculation using the Strutinsky smearing technique, which wasn't invented until 1968 (Strutinsky, Nucl. Phys. A122 (1968) 1; described in http://arxiv.org/abs/1004.0079 ). A primitive version of the nuclear shell model had been proposed, but it wasn't until Maria Goeppert-Mayer in the 50's that it was really developed into a detailed theory, and it only worked for spherical nuclei -- uranium and plutonium are deformed. Even gross features of the barrier, like the existence of a metastable minimum (fission isomers), were not to be discovered until the 60's. So induced fission cross-sections and average neutron multiplicities had to be measured empirically: [W]hile Glenn Seaborg's team had proven in March 1941 that plutonium underwent neutron-induced fission, it was not known yet if plutonium released secondary neutrons during bombardment. Further, the exact sizes of the "cross sections" of various fissionable substances had yet to be determined in experiments using the various particle accelerators then being shipped to Los Alamos. (source) The Germans also set themselves back because of Heisenberg's decision to use heavy water as a moderator, when graphite would have been easier. This was apparently partly based on a mistake in a 1940 measurement by Bothe. There have been lots of hints (possibly involving wishful thinking and retroactive rewriting of history) by the German physicists that they may have dragged their feet or intentionally made mistakes, because they didn't want their own country to get the bomb. 
The true nature of Heisenberg’s role in the Nazi atomic bomb effort is a fascinating question, and dramatic enough to have inspired a well- received 1998 theatrical play, “Copenhagen.” The real story, however, may never be completely unraveled. Heisenberg was the scientific leader of the German bomb program up until its cancellation in 1942, when the German military decided that it was too ambitious a project to undertake in wartime, and too unlikely to produce results. Some historians believe that Heisenberg intentionally delayed and obstructed the project because he secretly did not want the Nazis to get the bomb. Heisenberg’s apologists point out that he never joined the Nazi party, and was not anti-Semitic. He actively resisted the government’s Deutsche-Physik policy of eliminating supposed Jewish influences from physics, and as a result was denounced by the S.S. as a traitor, escaping punishment only because Himmler personally declared him innocent. One strong piece of evidence is a secret message carried to the U.S. in 1941, by one of the last Jews to escape from Berlin, and eventually delivered to the chairman of the Uranium Committee, which was then studying the feasibility of a bomb. The message stated “...that a large number of German physicists are working intensively on the problem of the uranium bomb under the direction of Heisenberg, [and] that Heisenberg himself tries to delay the work as much as possible, fearing the catastrophic results of success. But he cannot help fulfilling the orders given to him, and if the problem can be solved, it will be solved probably in the near future. So he gave the advice to us to hurry up if U.S.A. 
will not come too late.” The message supports the view that Heisenberg intentionally misled his government about the bomb’s technical feasibility; German Minister of Armaments Albert Speer wrote that he was convinced to drop the project after a 1942 meeting with Heisenberg because “the physicists themselves didn’t want to put too much into it.” Heisenberg also may have warned Danish physicist Niels Bohr personally in September 1941 about the existence of the Nazi bomb effort. On the other side of the debate, critics of Heisenberg say that he clearly wanted Germany to win the war, that he visited German-occupied territories in a semi-official role, and that he simply may not have been very good at his job directing the bomb project. On a visit to the occupied Netherlands in 1943, he told a colleague, “Democracy cannot develop sufficient energy to rule Europe. There are, therefore, only two alternatives: Germany and Russia. And then a Europe under German leadership would be the lesser evil.” Cassidy 2000 argues that the real point of Heisenberg’s meeting with Bohr was to try to convince the U.S. not to try to build a bomb, so that Germany, possessing a nuclear monopoly, would defeat the Soviets — this was after the June 1941 entry of the U.S.S.R. into the war, but before the December 1941 Pearl Harbor attack brought the U.S. in. Bohr apparently considered Heisenberg’s account of the meeting, published after the war was over, to be inaccurate. The secret 1941 message also has a curious moral passivity to it, as if Heisenberg was saying “I hope you stop me before I do something bad,” but we should also consider the great risk Heisenberg would have been running if he actually originated the message. David C. Cassidy, "A Historical Perspective on Copenhagen," Physics Today, July 2000, p. 28, http://www.aip.org/pt/vol-53/iss-7/p28.html
{ "domain": "physics.stackexchange", "id": 11924, "tags": "nuclear-physics, history" }
Prove correctness of this (context-free) grammar
Question: I created a context-free grammar for the language of words in which twice as many a's as b's occur. So, for example, the language would accept the words baaaab, bbbaaaaaa, aab, etc. The context-free grammar is G = ({X}, {a,b}, P, X) with P: X -> XaXaXbX, XaXbXaX, XbXaXaX, epsilon (where epsilon is the empty word). Now how can you justify the correctness of this grammar? I'm not asking for a strict proof, just a way to reason about the correctness of this grammar. The smallest possible strings are aab, aba, baa. And somewhere in between the characters of one of these, you can insert any of these three possibilities or just the empty word epsilon. You will always end up with twice as many a's as b's because all our combinations satisfy this condition and we are only using these combinations, so in the end the condition will still be satisfied. Would you count this as a "reasoning of correctness"? If I read what I have written once again, it sounds a bit cheap :p Answer: Now how can you justify the correctness of this grammar? In order to prove that some grammar $G$ generates a language $L$, i.e. $L = L(G)$, you need to prove two things: for any string $w \in L$, $G$ yields/generates $w$, i.e., $S \Rightarrow^* w$, and for any string $w$ (consisting of only terminal symbols), if $G$ yields/generates $w$, then $w \in L$. In your particular example, you first need to prove that if $w$ has twice as many $a$s as $b$s then your grammar $G$ yields $w$, and that if $G$ yields some string $w$ then $w$ has twice as many $a$s as $b$s. Induction on the length of $w$, or on the number of derivation steps, might be helpful. This is our reference question, which may be helpful for proving that a language is context-free.
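As a mechanical sanity check on the second direction (everything generated is in the language), one can enumerate derivations by brute force; here is a small Python sketch (my own, not part of the answer) that expands the leftmost X with each of the four productions and verifies the a/b condition on every terminal string produced:

```python
# Productions of X -> XaXaXbX | XaXbXaX | XbXaXaX | epsilon
RULES = ["XaXaXbX", "XaXbXaX", "XbXaXaX", ""]

def generate(max_len, max_steps):
    """Terminal strings derivable from X in <= max_steps leftmost steps,
    pruning sentential forms with more than max_len terminal symbols
    (terminals never disappear, so such forms can never shrink back)."""
    frontier = {"X"}
    terminal = set()
    for _ in range(max_steps):
        nxt = set()
        for w in frontier:
            i = w.find("X")
            if i < 0:
                terminal.add(w)      # fully derived word
                continue
            for r in RULES:          # rewrite the leftmost X
                new = w[:i] + r + w[i + 1:]
                if sum(ch != "X" for ch in new) <= max_len:
                    nxt.add(new)
        frontier = nxt
    terminal |= {w for w in frontier if "X" not in w}
    return terminal
```

Running `generate(6, 10)` confirms that every word of length up to 6 the grammar produces has twice as many a's as b's, and that the length-3 words are exactly aab, aba, baa. This only checks the "generated strings are in L" direction on a finite fragment, of course; a real proof still needs the induction the answer suggests.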
{ "domain": "cs.stackexchange", "id": 10293, "tags": "regular-languages, automata, context-free, formal-grammars" }
Why does Goldstein's book claim $\dot{p}_\theta = 0$ even though $r$ and $\theta$ depend on time (there is $\dot{r}$ and $\dot{\theta}$)?
Question: $$L=\frac{1}{2}m(\dot{r}^2+r^2\dot{\theta}^2)-V(r)$$ $$p_\theta=\frac{\partial L}{\partial \dot{\theta}}=mr^2\dot{\theta}$$ $$\dot{p}_\theta=\frac{d}{dt}(mr^2\dot{\theta})$$ Goldstein wrote that $\dot{p}_\theta=0$. I know $r$ and $\theta$ are both functions of time. So why did he write that the time derivative of this momentum is 0? Answer: You are correct that $r$ and $\theta$ are functions of time, and Goldstein is not claiming that they aren't. The whole point of the discussion is that $p_{\theta}$ is a constant, even though it may not appear so at first since $r = r(t)$ and $\theta = \theta(t)$. This follows from the Euler-Lagrange equations; there is one for each generalized coordinate: $$ \frac{d}{dt} \frac{\partial \mathcal L}{\partial \dot \theta} = \frac{\partial \mathcal L}{\partial \theta}, \qquad \frac{d}{dt} \frac{\partial \mathcal L}{\partial \dot r} = \frac{\partial \mathcal L}{\partial r}. $$ The first one implies $$ \frac{d}{dt} \underbrace{\frac{\partial \mathcal L}{\partial \dot \theta}}_{\equiv p_{\theta}} = \frac{\partial \mathcal L}{\partial \theta} = 0. $$ The right-hand side vanishes because $\theta$ itself does not appear in $\mathcal L$ (it is a cyclic coordinate); only $\dot\theta$ does.
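If you want to convince yourself numerically, you can integrate the two Euler-Lagrange equations for an assumed central potential $V(r) = -k/r$ and watch $m r^2 \dot\theta$ stay constant along the motion; a sketch (my own, not from Goldstein):

```python
# Numerical check that p_theta = m r^2 theta_dot is conserved for
# L = (1/2) m (r'^2 + r^2 th'^2) - V(r) with the assumed V(r) = -k/r.
m, k = 1.0, 1.0

def f(y):
    r, vr, th, w = y                      # w = theta_dot
    return (vr,
            r * w * w - k / (m * r * r),  # r'' = r th'^2 - V'(r)/m
            w,
            -2.0 * vr * w / r)            # th'' from d/dt(m r^2 th') = 0

def rk4_step(y, dt):
    # classical 4th-order Runge-Kutta step on the state (r, r', th, th')
    k1 = f(y)
    k2 = f(tuple(a + 0.5 * dt * b for a, b in zip(y, k1)))
    k3 = f(tuple(a + 0.5 * dt * b for a, b in zip(y, k2)))
    k4 = f(tuple(a + dt * b for a, b in zip(y, k3)))
    return tuple(a + dt / 6.0 * (b + 2 * c + 2 * d + e)
                 for a, b, c, d, e in zip(y, k1, k2, k3, k4))

def p_theta_drift(steps=20000, dt=1e-4):
    y = (1.0, 0.1, 0.0, 1.2)              # bound orbit, stays off r = 0
    p0 = m * y[0] ** 2 * y[3]
    drift = 0.0
    for _ in range(steps):
        y = rk4_step(y, dt)
        drift = max(drift, abs(m * y[0] ** 2 * y[3] - p0))
    return drift
```

The maximum drift of $p_\theta$ over the trajectory is at the level of the integrator's round-off, as conservation demands.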
{ "domain": "physics.stackexchange", "id": 82470, "tags": "newtonian-mechanics, lagrangian-formalism, angular-momentum, conservation-laws, coordinate-systems" }
robot_localization : no output from navsat_transform on odometry/gps
Question: Hi, I have a bag file which has IMU, GPS and wheel odometry, and I'm trying to fuse them through robot_localization. I have followed everything as per the documentation. The ekf node is working fine, but there is no output on the topic odometry/gps. I am manually tuning the covariance values for all three sensors. For this I have a small script which runs over the bag file and just adds the values into the covariance matrix. Please find the attached launch file and config file. Also, I am pasting a few messages for IMU, GPS and odometry: IMU: header: seq: 122 stamp: secs: 1531127130 nsecs: 412077068 frame_id: "imu_link" orientation: x: 0.01904296875 y: -0.0274047851562 z: -0.852172851562 w: 0.522216796875 orientation_covariance: [24.0, 0.0, 0.0, 0.0, 24.0, 0.0, 0.0, 0.0, 24.0] angular_velocity: x: 2.375 y: -0.875 z: 2.75 angular_velocity_covariance: [24.0, 0.0, 0.0, 0.0, 24.0, 0.0, 0.0, 0.0, 24.0] linear_acceleration: x: -0.270000010729 y: 1.48000001907 z: -1.27999997139 linear_acceleration_covariance: [24.0, 0.0, 0.0, 0.0, 24.0, 0.0, 0.0, 0.0, 24.0] GPS: header: seq: 8 stamp: secs: 1531127123 nsecs: 87058067 frame_id: "gps_link" status: status: 0 service: 1 latitude: 50.72745 longitude: 7.087071 altitude: 110.3 position_covariance: [0.01, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.01] position_covariance_type: 1 Odometry: header: seq: 633 stamp: secs: 1531127147 nsecs: 190536275 frame_id: "odom" child_frame_id: "base_link" pose: pose: position: x: 8.16897660416 y: -7.75282666754 z: 0.0 orientation: x: 0.0 y: 0.0 z: -0.6552952454 w: 0.755372849231 covariance: [0.375, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.375, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.375, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.375, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.375, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.375] twist: twist: linear: x: 0.866949711312 y: 0.0 z: 0.0 angular: x: 0.0 y: 0.0 z: 0.0533507514654 covariance: [0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01] Config file: # https://github.com/cra-ros-pkg/robot_localization/blob/kinetic-devel/params/dual_ekf_navsat_example.yaml ekf_se_odom: frequency: 30 sensor_timeout: 0.1 two_d_mode: true map_frame: map odom_frame: odom base_link_frame: base_link world_frame: odom odom0: odometry/cov odom0_config: [false, false, false, #X,Y,Z false, false, true, #roll,pitch,yaw true, true, false, #X˙,Y˙,Z false, false, true, #roll˙,pitch˙,yaw false, false, false] #X¨,Y¨,Z odom0_differential: false odom0_queue_size: 10 imu0: imu/cov imu0_config: [false, false, false, false, false, true, false, false, false, false, false, true, true, false, false] imu0_differential: false imu0_queue_size: 10 imu0_remove_gravitational_acceleration: false use_control: false ekf_se_map: frequency: 30 two_d_mode: true map_frame: map odom_frame: odom base_link_frame: base_link world_frame: map odom0: odometry/cov odom0_config: [false, false, false, #X,Y,Z false, false, false, #roll,pitch,yaw true, true, false, #X˙,Y˙,Z false, false, true, #roll˙,pitch˙,yaw false, false, false] #X¨,Y¨,Z odom0_differential: false odom0_queue_size: 10 odom1: odometry/gps odom1_config: [true, true, false, false, false, false, false, false, false, false, false, false, false, false, false] odom1_differential: false odom1_queue_size: 10 imu0: imu/cov imu0_config: [false, false, false, false, false, false, false, false, false, false, false, false, false, false, false] imu0_differential: false imu0_queue_size: 10 imu0_remove_gravitational_acceleration: false use_control: false navsat_transform: frequency: 30 delay: 3.0 magnetic_declination_radians: 0.034732052 # For lat/long 50.727246, 7.086934 yaw_offset: 2.35619449019 # IMU reads 0 facing magnetic north, not east broadcast_utm_transform: false zero_altitude: true publish_filtered_gps: false use_odometry_yaw: false wait_for_datum: false LAUNCH FILE Originally posted by pk11 on ROS Answers with karma: 13 on
2018-08-15 Post score: 1 Answer: Two main comments: Don't fuse yaw from your wheel odometry and your IMU in the odom EKF. Just use the data from the IMU. As you drive around, those values will diverge, and the filter will jump between them. Your GPS data is in the gps_link frame, and your IMU data is in the imu_link frame, but your launch file doesn't seem to provide any transforms from base_link->imu_link or base_link->gps_link. See the "Common errors" section of the wiki: Velocity data should be reported in the frame given by the base_link_frame parameter, or a transform should exist between the frame_id of the velocity data and the base_link_frame. In this case, even though the GPS and IMU both provide some pose data, you still need to provide transforms from base_link to the frame_ids in the messages. Your odom data is already in the odom and base_link frames, so that's fine. But my general advice is to back up and do one thing at a time. Start with just a single odom EKF, and only fuse the wheel encoder data. When you're satisfied that it's working, add the IMU data. Then move on to the next EKF, and repeat. Then add navsat_transform_node, and repeat. Originally posted by Tom Moore with karma: 13689 on 2018-08-27 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by pk11 on 2018-10-31: Thanks..! that helped, there was no Tf, I just had to launch the robot description. Basic mistake..!
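As a footnote to the accepted answer (my own addition, with placeholder zero offsets): the missing base_link→imu_link and base_link→gps_link transforms from the second point can also be published statically in the launch file, e.g.:

```xml
<launch>
  <!-- Placeholder offsets: replace the six zeros with the real mounting
       pose (x y z yaw pitch roll) of each sensor on the robot. -->
  <node pkg="tf2_ros" type="static_transform_publisher" name="base_to_imu"
        args="0 0 0 0 0 0 base_link imu_link" />
  <node pkg="tf2_ros" type="static_transform_publisher" name="base_to_gps"
        args="0 0 0 0 0 0 base_link gps_link" />
</launch>
```

As the asker's comment notes, launching a robot description (URDF) that contains these sensor links achieves the same thing.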
{ "domain": "robotics.stackexchange", "id": 31549, "tags": "ros, navigation, ros-kinetic, navsat-transform-node, robot-localization" }
Measuring the size of the proton from the hydrogen atom spectrum?
Question: I was reading that besides measuring the angle of ricocheted electrons bouncing off the proton to pin down its size, it is also possible to excite the electron and then measure the frequency of the light emitted by the excited electron. Why would the gap between ground state and excited state tell us the size of a proton? Is there something I have missed? Answer: This is an interesting and non-trivial problem. Basically the Coulomb potential assumes a point particle but, if the proton is modelled as a solid sphere of finite radius, part of the electron wave function would be "inside" the proton, where the assumption of point charge no longer holds. To account for this one must modify the Coulomb potential from $1/r$ outside the proton to (basically) $C r^2$ inside, where $C$ is some constant. The simplest model is to think of the proton as a uniformly charged sphere (constant volume charge density), so the $Cr^2$ term comes from Gauss's law for the potential inside this type of sphere. This small perturbation in the potential will slightly affect the energy levels. Since for small distances the radial probability density generally goes like $r^{2(\ell+1)}$, the smaller values of $\ell$ will produce wave functions with larger probabilities of having the electron "inside" the proton, so experiments were done measuring the energy difference between $2S_{1/2}$ and $2P_{1/2}$, which have $\ell=0$ and $\ell=1$ respectively. These states would normally have the same energy under the pure Coulomb potential since both are $n=2$ states, but they are affected differently under the assumption that the proton has a non-zero volume. The story of the "proton problem" goes back 10 years or so, when a group in Geneva made extremely accurate measurements of the size of the nucleus. 
Basically, they deduced what value of the radius of the proton (assumed as a uniform spherical charge distribution) was needed to reproduce their experimental measurements of energy levels, and it didn't agree with the accepted value. There's a good synopsis of this: "The proton -- smaller than thought: Scientists measure charge radius of hydrogen nucleus and stumble across physics mysteries" https://www.sciencedaily.com/releases/2010/07/100712103339.htm (They used muonic hydrogen since the Bohr radius of this system is smaller than that of the usual electron-proton system, thus enhancing the portion of the wavefunction inside the nucleus.) The unexpected result was only confirmed this year. A summary of new results can be found here, and the actual paper of the experiment, Bezginov, N., Valdez, T., Horbatsch, M., Marsman, A., Vutha, A.C. and Hessels, E.A., 2019. A measurement of the atomic hydrogen Lamb shift and the proton charge radius. Science, 365(6457), pp.1007-1012, appears to be available online from this link provided courtesy of GoogleScholar. Note there are other perturbations in hydrogen - the fine and hyperfine structure - which have to be accounted for as well, making this volume effect non-trivial to isolate. I love this stuff. It shows that the hydrogen atom is not completely archeological; there are still interesting surprises to be found in this canonical example of undergraduate-level quantum mechanics.
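As a rough sanity check on the numbers involved: for a uniformly charged sphere of radius $R$, first-order perturbation theory gives a shift of an $nS$ level of roughly $\Delta E = \frac{2}{5}\frac{e^2}{4\pi\varepsilon_0}\frac{R^2}{n^3 a^3}$, where $a$ is the Bohr radius of the orbiting particle. The sketch below (approximate constants, order-of-magnitude only, treating the wavefunction as constant over the nucleus) shows why muonic hydrogen makes the effect measurable:

```python
# Order-of-magnitude sketch of the finite-size shift of the 2S level.
# dE = (2/5) * (alpha*hbar*c) * R^2 / (n^3 * a^3); constants are approximate.
COULOMB_EV_M = 1.44e-9      # e^2/(4*pi*eps0) = alpha*hbar*c, in eV*m
R_P = 0.84e-15              # proton charge radius, m

def shift_2s(bohr_radius_m):
    n = 2
    return 0.4 * COULOMB_EV_M * R_P**2 / (n**3 * bohr_radius_m**3)

a_e = 5.29e-11                          # Bohr radius of ordinary hydrogen, m
m_e, m_mu, m_p = 0.511, 105.7, 938.3    # masses in MeV/c^2
a_mu = a_e * m_e / (m_mu * m_p / (m_mu + m_p))  # smaller by the reduced-mass ratio

shift_e = shift_2s(a_e)     # ~3e-10 eV: hopeless to isolate in ordinary hydrogen
shift_mu = shift_2s(a_mu)   # ~2 meV: a measurable piece of the muonic Lamb shift
print(shift_e, shift_mu, shift_mu / shift_e)
```

Since the shift scales as $1/a^3$, the muonic system gains a factor of order $10^6$-$10^7$, which is why the Geneva group used muonic hydrogen.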
{ "domain": "physics.stackexchange", "id": 63535, "tags": "electrons, atomic-physics, hydrogen, orbitals, protons" }
Maximum sub-matrix sum
Question: Given a $n\times m$ matrix $A$ of integers, find a sub-matrix whose sum is maximal. If there is one row only or one column only, then this is equivalent to finding a maximum sub-array. The 1D version can be solved in linear time by dynamic programming. The 2D version can be solved in $\cal O(n^3)$ by looping over all pairs of columns and using the 1D algorithm on the array whose length is the number of rows in the matrix where each position $r$ holds the sum of the elements at row $r$ between the two columns. If the matrix is given by: \begin{pmatrix} 1 & -2 & 0 & -1 \\ 5 & 43 & 31 & 78 \\ -45 & -12 & 19 & 9 \end{pmatrix} Then for the pair of columns $(0,2)$, the max sub-matrix sum can be found by using the 1D algorithm on the array (top to bottom): \begin{pmatrix} 1-2+0 & = & -1 \\ 5+43+31 & = & 79\\ -45-12+19 & = & -38 \end{pmatrix} Does anybody know of a $\cal O(n^2)$ algorithm for solving this problem? Answer: I found this: Sung Eun Bae, Sequential and Parallel Algorithms for the Generalized Maximum Subarray Problem. Read pages 18-30, where it says that there are just cubic $O(nm^2)$ and sub-cubic algorithms for this problem (in general case), for example Tadao Takaoka's $O\left(n^3 \sqrt{\frac{\log\log n }{\log n }}\right)$ algorithm. I've also found a forum comment saying that this problem can be solved in $O(N^2\log N )$ for matrices with N non-zero elements using "funny" segment tree (you can ask commentator about details).
{ "domain": "cs.stackexchange", "id": 2960, "tags": "dynamic-programming, maximum-subarray" }
Commutator with exponential $[\exp(A),\exp(B)]$
Question: $A,B$ are quantum mechanical operators. $[A,B]\neq 0$; that is given. $e^{A}=\sum_{n=0}^{\infty} \frac{A^n}{n!}$ Is the following correct? $$[e^{A},e^{B}]=e^{A}e^{B}-e^{B}e^{A}=e^{A+B}-e^{B+A}=0 $$ Answer: An even simpler example: \begin{align} s_z=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array} \right)\, ,& \qquad s_y=\left( \begin{array}{cc} 0 & i \\ -i & 0 \\ \end{array} \right)\\ e^{-i\alpha s_z}=\left( \begin{array}{cc} e^{-i \alpha } & 0 \\ 0 & e^{i \alpha } \\ \end{array} \right)&\qquad e^{-i\beta s_y}=\left( \begin{array}{cc} \cos (\beta ) & \sin (\beta ) \\ -\sin (\beta ) & \cos (\beta ) \\ \end{array} \right) \end{align} and clearly $[e^{-i\alpha s_z} ,e^{-i\beta s_y}]\ne 0$. Note that, in connection to the comments on the answer of @user124864, $e^{-i\alpha s_z}e^{-i\beta s_y}\ne e^{-i\alpha s_z-i\beta s_y-\frac{1}{2}\alpha\beta[s_z,s_y]}$ either. You can easily verify using the first few terms of the explicit expansion that, in general $$ e^A e^B= \sum_n \frac{A^n}{n!}\,\sum_m \frac{B^m}{m!} \ne e^{A+B}=\sum_{p}\frac{(A+B)^p}{p!}\, . $$ If anything: \begin{align} e^Ae^B &=\left(1+A+\frac{A^2}{2!}+\ldots \right) \left(1+B+\frac{B^2}{2!}+\ldots \right)\\ &= 1+ (A+B)+ \frac{1}{2!} (A^2+2AB+B^2)+\ldots \tag{1} \end{align} but $$ (A^2+2AB+B^2)\ne (A+B)^2= A^2+AB+BA+B^2 \tag{2} $$ unless $AB=BA$, i.e. unless $[A,B]=0$.
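The counterexample is easy to verify numerically; a sketch in NumPy (with a small series-based matrix exponential so nothing beyond NumPy is needed, using the same sign convention for $s_y$ as above):

```python
import numpy as np

def expm(M, terms=40):
    # Matrix exponential summed from its power series; fine for small
    # matrices of modest norm, which is all we need here.
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

s_z = np.array([[1, 0], [0, -1]], dtype=complex)
s_y = np.array([[0, 1j], [-1j, 0]], dtype=complex)  # convention as in the answer

alpha = beta = 1.0
A = -1j * alpha * s_z
B = -1j * beta * s_y

commutator = expm(A) @ expm(B) - expm(B) @ expm(A)
print(np.linalg.norm(commutator))                       # ~2.0, clearly nonzero
print(np.linalg.norm(expm(A) @ expm(B) - expm(A + B)))  # also nonzero
```

Both norms come out of order one, confirming that neither $[e^A,e^B]=0$ nor $e^Ae^B=e^{A+B}$ holds here.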
{ "domain": "physics.stackexchange", "id": 46172, "tags": "quantum-mechanics, operators, commutator" }
What is the immediate reward in value iteration?
Question: Suppose you're given an MDP where rewards are attributed for reaching a state, independently of the action. Then when doing value iteration: $$ V_{i+1}(s) = \max_a \sum_{s'} P_a(s,s') (R_a(s,s') + \gamma V_i(s'))$$ what is $R_a(s,s')$? The problem I'm having is that terminal states have, by default, $V(s_T) = R(s_T)$ (some terminal reward). Then when I'm trying to implement value iteration, if I set $R_a(s,s')$ to be $R(s')$ (which is what I thought), I get that states neighboring a terminal state have a higher value than the terminal state itself, since $$ P_a(s,s_T) ( R_a(s,s_T) + \gamma V_i(s_T) ) $$ can easily be greater than $V_i(s_T)$, which in practice makes no sense. So the only conclusion I seem to be able to get is that in my case, $R_a(s,s') = R(s)$... is this correct? Answer: what is $R_a(s,s')$? In this case, it appears to represent the expected immediate reward received when taking action $a$ and transitioning from state $s$ to state $s'$. It is written this way so it could be implemented as a series of square matrices, one for each action. Those matrices might of course be very sparse if only a few transitions are possible, but it's a nice generic form for describing MDPs. Notation varies between different RL tutorials, so do take care when looking at other sources. The problem I'm having is that terminal states have, by default, $V(s_T) = R(s_T)$ (some terminal reward). No, $V(s_T) = 0$ by definition. The value of a state is the expected discounted sum of future rewards. A terminal state has no future rewards, thus its value is always $0$. The "terminal reward" in your system occurs on the transition $s \rightarrow s_T$, so its expected value should be represented by $R_a(s,s_T)$. If the terminal state is some goal state, or a bad exit from an episode, and it doesn't matter how you arrive there, then in the formulation you have given, you still represent it as $R_a(s,s_T)$, just that the values of $a$ and $s$ don't matter for your case. 
In general they might matter, for other MDPs. So the only conclusion I seem to be able to get is that in my case, $R_a(s,s') = R(s)$... is this correct? Not in general. However, in some problems this might be a reasonable simplification. If you make that simplification, then you should take care when having a value associated with a single state to be consistent about whether the reward is for entering a particular state, or exiting it. There cannot be a reward for "being in a state"; the closest you can get to that is granting a consistent reward when entering a state (ignoring the previous state and action that caused the transition). It appears that you have suggested here that you want a reward for exiting a particular state - ignoring which action is taken in it, or what the next state is. I don't know the details of your MDP, so cannot say whether this would work for you. Maybe not, given the rest of your question. If you are asking in general, then you have the answer: $R_a(s,s') \ne R(s)$. If you are asking in order to work on a specific MDP, I suggest taking another look at your original problem, with the extra information in this answer.
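As a concrete illustration of these conventions, here is a minimal sketch (a hypothetical three-state chain, not the asker's MDP): the only reward is paid on the transition into the terminal state, and $V(s_T)$ is pinned to $0$, so no neighboring state ends up artificially "worth more" than the terminal state:

```python
# States: 0 -> 1 -> 2, where state 2 is terminal. One action ("right"),
# deterministic transitions, gamma = 0.9. The only reward (+10) is paid on
# the transition 1 -> 2, i.e. R(s=1, s'=2) = 10; V(terminal) stays 0.
gamma = 0.9
P = {0: 1, 1: 2}                 # next state under the single action
R = {(0, 1): 0.0, (1, 2): 10.0}  # reward attached to each transition
V = {0: 0.0, 1: 0.0, 2: 0.0}     # V[2] is the terminal state, pinned to 0

for _ in range(100):
    for s in (0, 1):             # never update the terminal state
        s2 = P[s]
        V[s] = R[(s, s2)] + gamma * V[s2]

print(V)  # -> {0: 9.0, 1: 10.0, 2: 0.0}
```

State 1 is worth 10 because of the reward on its outgoing transition, state 0 is worth the discounted 9, and the terminal state stays at 0, as the answer describes.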
{ "domain": "datascience.stackexchange", "id": 3916, "tags": "reinforcement-learning, q-learning" }
How do the electric motor and generator vary?
Question: What is the key difference between the windings of the electric motor and electric generator? Answer: Electric motors vary... a lot: size; shape; purpose; AC or DC; linear, rotational, etc. Electric generators, likewise, vary... a lot. Having said that, a DC motor can be thought of as a DC generator with the energy conversion process running in reverse: A DC motor converts electrical energy into mechanical energy, and the generator converts mechanical energy into electrical energy. AND the neat thing is that this process can run either direction in the very same device. Electric cars are doing this now: the electric motor which powers the wheels also acts as an electrodynamic brake when you want to slow down. The motors use the car's kinetic energy to power each wheel motor as a generator, reclaiming electrical energy to recharge the car's batteries which got the wheels rolling in the first place.
{ "domain": "physics.stackexchange", "id": 40961, "tags": "electromagnetism, electricity" }
Load SDF (.model) into robot_description
Question: In the past and currently, you can load a URDF/XACRO file into the parameter service via something like: <param name="robot_description" command="$(find xacro)/xacro.py 'my_robot.urdf.xacro'"/> The latest Gazebo package supplies several nice robot models (simulator_gazebo/gazebo/gazebo/share/gazebo-1.0.1/models/youbot.model for example) which are great to spawn in Gazebo. Is it possible to use the .model file (in the SDF format that Gazebo uses) in order to load the model into robot_description in Fuerte? Originally posted by rtoris288 on ROS Answers with karma: 1173 on 2012-05-09 Post score: 5 Answer: I believe the robot_description parameter can only handle Collada and URDF formatted files. This is then read by the "URDF Interface" for use with tf, Rviz, etc. A SDF .model file is not supported. Originally posted by Dave Coleman with karma: 1396 on 2012-06-18 This answer was ACCEPTED on the original site Post score: 7
{ "domain": "robotics.stackexchange", "id": 9320, "tags": "gazebo, sdf, xacro, ros-fuerte, robot-description" }
Can free protons show $β^+$ decay if provided with energy?
Question: I'm studying introductory Nuclear Physics in school and we were taught that free protons never undergo $β^+$ decay since the mass of the neutron is greater than that of the proton, so the $Q$ value of the reaction is negative. However, suppose we provide protons with the energy via a collision or some other means; can we expect them to undergo $β^+$ decay? If so, does this extend to any non-spontaneous nuclear reaction? Will sufficient energy lead to it happening? Answer: The overall process for $\beta^+$ decay is: $$ p \to n + e^+ + \nu_e \tag{1} $$ To supply energy to the proton we need something to collide with it, but there is no other particle on the left hand side to supply that energy. What we could do is in effect add an electron to both sides of the equation. On the right hand side the electron and positron cancel out and we're left with: $$ p + e^- \to n + \nu_e $$ which is the closely related process of electron capture. Now we can supply the required energy by giving the incoming electron the required energy, and as it happens this process has been discussed in the question Is there a term for electron capture outside the nucleus? The problem is that this process has a very low probability and in practice it's hard to think of a situation where it could be observed. Nevertheless it is theoretically possible. Another possibility would be to start with our initial equation (1) and add an antineutrino to both sides. This time the neutrino and antineutrino cancel on the right side and we get: $$ p + \bar\nu_e \to n + e^+ $$ This is known as inverse beta decay and the reaction is a standard way to detect anti-neutrinos. The kinetic energy of the incoming anti-neutrino supplies the required energy. So if you're prepared to accept electron capture or inverse beta decay as a form of $\beta^+$ decay then the answer to your question would be that yes, it is possible for a free proton. There is another way energy could be supplied. 
Hadrons have excited states, and the first excited state of the proton is the $\Delta^+$ particle. So we could imagine a process where a proton is excited to a $\Delta^+$ by a collision. The $\Delta^+$ certainly has enough energy to decay by $\beta^+$ emission, but the problem is that it has so much energy that strong force processes dominate and we typically get decay to a neutron and pion or a proton and pion. Weak force processes are so much slower that $\beta^+$ decay of a $\Delta^+$ would be fantastically unlikely.
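The negative $Q$ value mentioned in the question can be made concrete from the rest masses (approximate values in MeV/$c^2$):

```python
m_p = 938.272   # proton mass, MeV/c^2
m_n = 939.565   # neutron mass, MeV/c^2
m_e = 0.511     # electron/positron mass, MeV/c^2

# Q value for free-proton beta+ decay, p -> n + e+ + nu (neutrino ~ massless):
Q_beta_plus = m_p - (m_n + m_e)
print(round(Q_beta_plus, 3))   # -1.804 MeV: negative, so no spontaneous decay

# Inverse beta decay p + nubar -> n + e+ needs the antineutrino to supply at
# least this much energy (the true threshold is slightly higher with recoil).
print(round(-Q_beta_plus, 3))  # 1.804 MeV
```

The roughly 1.8 MeV deficit is exactly the energy that the incoming electron (in electron capture) or antineutrino (in inverse beta decay) must bring.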
{ "domain": "physics.stackexchange", "id": 56828, "tags": "nuclear-physics" }
How do astronomers find interesting events?
Question: I always wondered how those tiny dots representing moving stars or whatever, forming an interesting event (supernova explosions, stars being sucked into black holes, etc.), get caught in the huge solid angle of $4\pi$? It seems absolutely improbable to find any such event in that big solid angle. Do astronomers just appear lucky looking at the right place at the right time to find an interesting event, or do they use some systematic methods of finding them? Do they maybe use special software which analyzes tons of data and identifies such interesting events? Answer: The answer to your question is yes. Many times discoveries in astronomy are serendipitous in nature - some of the most well-known discoveries fall into this category: the discovery of the cosmic microwave background and the discovery of some of the planets in our solar system are but two examples of this. However, many times they are not serendipitous. Astronomers plan to discover things - look at the Kepler satellite which finds nearby exosolar planets by detecting planetary transits. The Sloan Digital Sky Survey was a planned galactic survey which gave us new insights about the properties of galaxies as well as structure formation. One also has to consider time scales. Many things astronomers care about have timescales which are millions or billions of years (merging clusters, the evolution of main sequence stars, etc.); however, some phenomena are on very short time scales, namely supernovae. Microlensing events can also be somewhat serendipitous in their discovery, though researchers still devise experiments to look for them (see OGLE). It is certainly a combination of luck and planning. This answer was intended to be a bit vague, since the question doesn't specifically ask about one phenomenon. Software does play a big role in identifying and classifying objects and events in the universe. 
I can talk a little bit more about this in the context of gravitational lensing if people care to know about it.
{ "domain": "astronomy.stackexchange", "id": 188, "tags": "observational-astronomy" }
Recognizing math functions within songs
Question: I'm new to DSP, and just discovered this StackExchange, so apologies if this isn't the right place to post this question. Is there a resource that describes genres in more mathematical terms? For example, if I've performed an FFT on the signal on this section of the song (2:09 if the link doesn't start there), is there any way for me to detect that this section has that rough sort of sound? Do sounds like this follow some mathematical function with which I can compare? http://www.youtube.com/watch?v=SFu2DfPDGeU&feature=player_detailpage#t=130s (link starts playing sound straight away) Is the only way to use supervised learning techniques, or is there a different approach (which preferably doesn't require supervision)? Thank you for any advice. Answer: I think the distinction you're looking for is more like empirical vs. theoretical (as opposed to supervised vs. unsupervised), but I could be wrong about that. In other words, the ideal thing would be to have a theoretical definition of various genres, rather than just a bunch of opaque data which can be used to classify a song [without any real understanding]. However, for general genre classification, you're probably stuck at least with training from examples, even if just to create the definitions of genres in the first place. With respect to your example, consider how frequently people will argue [on YouTube] over whether a given track is really dubstep (e.g. any track that's more dubby and less wobbly, even though the genre started out without any real wobble). People define genres over time through examples, so it's reasonable to expect that algorithms which replicate that behavior would also require some examples. The way people describe genres is almost like a feature vector anyway--they ask a list of questions about the song (e.g. Is it more breaky or wobbly? Does it have a lot of sub bass? How long is it? What's the tempo? Is there a vocal? etc.). 
Of course, you may be able to choose a list of features that also provide an intuitive understanding of the genre. A feature like "Dynamic Range" is something that a person can also detect by ear, but something like "Time Domain Zero Crossings" wouldn't be very intuitive--even if it works well for classification. The following paper has quite a few features that might be interesting to you: George Tzanetakis, Perry R. Cook: Musical genre classification of audio signals. IEEE Transactions on Speech and Audio Processing 10(5): 293-302 (2002) link. For measuring roughness, psychoacoustic roughness would be a good place to start, but it might not be sufficient to distinguish between dubstep leads and electro leads, for example. For finer-grained distinctions, one thing to look into is timbre recognition. The following thesis has a decent survey of techniques: T. H. Park, "Towards automatic musical instrument timbre recognition," Ph.D. dissertation, Princeton University, NJ, 2004. link. There's also a model related to perceptual roughness in Timbre, Tuning, Spectrum and Scale which is used for constructing custom scales for arbitrary timbres. The idea is that harmonics which are very close together produce beat frequencies which are perceived as dissonance. Paraphrasing from Appendices E and F: when $F$ is a spectrum with partials at frequencies $f_1,f_2,...,f_n$, the intrinsic dissonance [assuming unit amplitudes] is $$ D_F = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} d\left( \frac{|f_i - f_j|}{\min(f_i,f_j)} \right) $$ where $$d(x) = e^{-3.5 x} - e^{-5.75 x}$$ is a model of the Plomp-Levelt Curve. It's used for measuring how pleasing a given chord is with respect to a timbre (by minimizing the dissonance). I don't know if either roughness of the psychoacoustic variety, or intrinsic dissonance would be very fruitful for your purposes on their own, but they may be useful in combination with other metrics. 
You'll probably have more luck classifying timbres mathematically than genres. For example, strings have even and odd harmonics, but a clarinet has only odd harmonics (cf. Sawtooth wave, Square wave). Dubstep wobble tends to be done with LFO-driven filters (low pass and/or formant filters), so something like Spectral Flux (see [Tzanetakis], above) might be a good starting point as a feature. However, I doubt anyone has studied mathematical classification of wobble yet ;)
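The intrinsic-dissonance formula quoted above is simple to implement directly; a sketch with unit amplitudes, as in the quoted expression:

```python
import math

def d(x):
    # Model of the Plomp-Levelt curve: zero at x = 0, peaking near x ~ 0.22.
    return math.exp(-3.5 * x) - math.exp(-5.75 * x)

def intrinsic_dissonance(partials):
    # D_F = 1/2 * sum_i sum_j d(|f_i - f_j| / min(f_i, f_j)).
    # The i == j terms vanish automatically since d(0) = 0.
    return 0.5 * sum(
        d(abs(fi - fj) / min(fi, fj))
        for fi in partials for fj in partials
    )

# Two partials ~22% apart sit near the roughness peak; a 2:1 octave is smooth:
print(intrinsic_dissonance([440.0, 537.0]) > intrinsic_dissonance([440.0, 880.0]))  # True
```

Note the curve's shape: very closely spaced partials beat too slowly to sound rough, the roughness peaks at a moderate spacing, and widely separated partials barely interact.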
{ "domain": "dsp.stackexchange", "id": 38, "tags": "algorithms, fourier-transform, audio, frequency-spectrum" }
making an equation dimensionless
Question: I have a balance-of-energy equation as follows (for a spherical particle colliding with a spherical fluid droplet). The left-hand side is for before the collision and the RHS for after it: \begin{equation} \frac{\pi}{12}\rho_fD_d^3V_d^2+\frac{\pi}{12}\rho_pD_p^3V_{p, bc}^2=2\pi\sigma L(D_p+L\sin\theta)+\frac{2\pi\mu LU^2}{h}(D_p+L\sin\theta)\Delta t+\frac{\pi}{12}\rho_pD_p^3V_{p, ac}^2 \end{equation} in which $D$ stands for diameter, $d$ for droplet, $p$ for particle, $\sigma$ for fluid surface tension, $bc$ for before collision, $ac$ for after collision, $\Delta t$ for the collision time, $U$ for the relative velocity of drop and particle (collision velocity), and $H, L, h$ are geometrical parameters. The first term on the RHS of the above equation is related to the surface energy of the fluid and the second term to the dissipation of energy during the collision. I want to make this equation dimensionless. So I simplified it to: \begin{equation} \rho_fD_d^3V_d^2+\rho_pD_p^3(V_{p, bc}^2-V_{p, ac}^2)=24L(D_p+L\sin\theta)\left[\sigma+\frac{\mu U^2}{h}\Delta t\right] \end{equation} and then to: \begin{equation} \frac{V_d^2}{U^2}+\frac{\rho_p}{\rho_d}\left(\frac{D_p}{D_d}\right)^3 \frac{V_{p, bc}^2-V_{p, ac}^2}{U^2}=C_1\,\frac{L\,D_{eq}}{D_d^2}\left[\frac{1}{We}+\frac{C_2}{Re}\right] \end{equation} in which $We$ and $Re$ stand for the Weber and Reynolds numbers. However, I am trying to define the terms as groups of simpler, meaningful dimensionless parameters. Please let me know if you have any idea about a more appropriate definition of the groups. Thanks. Answer: Both sides of your equation are already dimensionless, so now it is just a matter of choice. For example, you can define your $\gamma_{...}\equiv V_{...}^2/U^2$ terms as dimensionless variables of your model and $\lambda \equiv L \cdot D_{eq}/D_d^2$ as a dimensionless parameter. 
Taking $\alpha \equiv \rho_p/\rho_d$ and $\beta \equiv (D_p/D_d)^3$, your equation reads $$\gamma_d + \alpha \beta (\gamma_{p,bc}- \gamma_{p,ac}) = C_1 \lambda \left(\frac{1}{We} + \frac{C_2}{Re}\right),$$ where all the symbols represent dimensionless numbers.
{ "domain": "physics.stackexchange", "id": 16705, "tags": "energy-conservation, dimensional-analysis" }
If I have concentrated acetic acid (99%), what are the output gases during electrolysis?
Question: If I use concentrated acetic acid (glacial acetic acid, $\ce{CH3COOH}$) as an electrolyte, what ions are produced at the anode and cathode? Can I get ethanol ($\ce{C2H6O}$) with it? If not, what would be the products? Answer: Upon electrolysis, organic acids will lose the carboxyl group and dimerize. This is known as the Kolbe electrolysis: https://en.wikipedia.org/wiki/Kolbe_electrolysis In the case of electrolysis of acetic acid, ethane and $\ce{CO2}$ will be formed at the anode, with hydrogen gas evolved at the cathode; ethanol is not among the products.
{ "domain": "chemistry.stackexchange", "id": 14335, "tags": "electrochemistry, electrolysis" }
How best to differentiate boolean values in a long list?
Question: I have a large C# Dictionary of string keys and boolean values. Background on the reason for it is this: my program builds up a bunch of the same objects from two different sources, and then compares all the properties to see if there are any disparities. But I want to ignore the differences where I know one source hasn't exposed that property for comparison, so I have this dictionary that lists the properties that are actually 'gathered'. Then, when filtering the differences before user display, I ask if (!gathered[prop]) and remove it in that case. It would throw an exception if I hadn't defined a case for the property, and I don't want to ignore that, as it may just mean I neglected to specify if it was or wasn't gathered.

public Dictionary<string, bool> TPORGatheredData = new Dictionary<string, bool>()
{
    // Represents fields that are actually retrieved from TP data
    { "Reference", true },
    { "Title", true },
    { "Status", true },
    { "Status.State", true },
    { "OriginatorName", true },
    { "InvestigatorName", true },
    { "ManagerName", true },
    { "ObservedOn", true },
    { "RaisedOn", true },
    { "ClosedOn", true },
    { "Project", true },
    { "VariantObservedOn", true },
    { "SoftwareVersionObserved", true },
    { "AreaObservedOn", true },
    { "StageObservedOn", true },
    { "VariantAppliedTo", true },
    { "AreaAppliedTo", true },
    { "FaultClassification", true },
    { "Type", true },
    { "SecurityClassification", true },
    { "CommercialClassification", true },
    { "SecurityGroup", true },
    { "SafetyRelated", true },
    { "Description", false },
    { "Recommendations", true },
    { "Recommendations.PointNumber", true },
    { "Recommendations.Id", true },
    { "Recommendations.Type", true },
    { "Recommendations.RecommendationText", true },
    { "ClosureText", true },
};

The problem with this is that the value of false is quite significant. How could I differentiate the false from the true here? 
Two options I can think of:

1. I could tab-align all the values, but this looks ugly, as the farthest extent they need to go for alignment is a bit too far to easily see which value is for which key.
2. Assume that if there's no key then the value is true (although that depends on the context making this a suitable option; as it stands, in my case not having the value may be an indicator of a field that needs checking to see if it is or isn't handled).

Answer: Is this more like what you're after?

var TPORGatheredData = new[]
{
    // Add all values in the logical order that you want them
    "Reference",
    "Title",
    "Status",
    "Status.State",
    // ...omitted for brevity
    "Description",
    "ClosureText"
}.ToDictionary(x => x, x => true);

// This is significant
TPORGatheredData["Description"] = false;
{ "domain": "codereview.stackexchange", "id": 4672, "tags": "c#, hash-map" }
How big is the Solar System?
Question: How big is the solar system? By "big", I guess I mean "wide", i.e. how far away from the Sun is the farthest object that is considered part of the Solar System? I've checked Wikipedia's pages on the Solar System, as well as Pluto, the Kuiper belt, and Trans-Neptunian Objects, but couldn't see the answer. Answer: There are a variety of definitions, most of which can be put into the graphic below, grabbed from Wikipedia. Note that the scale is a log scale, so don't think the solar system is quite like what is shown. The most commonly accepted definition is based on the heliosphere. Simply put, the heliosphere extends to where the pressure of the solar wind balances the pressure of the interstellar medium; this boundary (the heliopause) lies at roughly 120 AU, where 1 AU is the mean distance of the Earth from the Sun. Alternative definitions might include the distance of Neptune from the Sun, about 30 AU; the edge of the gravitational influence of the Sun, which would be almost 1 light year (there are bodies orbiting nearly 1 light year away from the Sun); the extent of the Kuiper Belt, at around 30-50 AU; or a fair number of other methods.
{ "domain": "physics.stackexchange", "id": 3097, "tags": "solar-system" }
Locality in Quantum Mechanics
Question: We speak of locality or non-locality of an equation in QM, depending on whether it has no differential operators of order higher than two. My question is, how could one tell from looking at the concrete solutions of the equation whether the equation was local or not... or, to put it another way, what would it mean to say that a concrete solution was non-local? edit: let me emphasise this grew from a problem in one-particle quantum mechanics. (Klein-Gordon eq.) Let me clarify that I am asking what is the physical meaning of saying a solution, or space of solutions, is non-local. Answers that hinge on the form of the equation are... better than nothing, but I am hoping for a more physical answer, since it is the solutions which are physical, not the way they are written down.... This question, which I had already read, is related but the relation is unclear. Why are higher order Lagrangians called 'non-local'? Answer: Presuming that there aren't nonlocal constraints, a differential operator that is polynomial in differential operators is local; it doesn't have to be quadratic. My understanding is that irrational or transcendental functions of differential operators are generally nonlocal (though that's perhaps a Question for math.SE). A given space of solutions implies a particular nonlocal choice of boundary conditions, unless the equations are on a compact manifold (which, however, is itself a nonlocal structure). There is always an element of nonlocality when we discuss solutions in contrast to equations. [For the anti-locality of the operator $(-\nabla^2+m^2)^\lambda$ for odd dimension and non-integer $\lambda$, one can see I.E. Segal, R.W. Goodman, J. Math. Mech. 14 (1965) 629 (for a review of this paper, see here).] EDIT: Sorry, I should have gone straight to Hegerfeldt's theorem. Schrodinger's equation is enough like the heat equation to be nonlocal in Hegerfeldt's sense. 
There are two theorems, from 1974 in PRD and from 1994 in PRL, but in arXiv:quant-ph/9809030 we have, of course with references to the originals, Theorem 1. Consider a free relativistic particle of positive or zero mass and arbitrary spin. Assume that at time $t=0$ the particle is localized with probability 1 in a bounded region V . Then there is a nonzero probability of finding the particle arbitrarily far away at any later time. Theorem 2. Let the operator $H$ be self-adjoint and bounded from below. Let $\mathcal{O}$ be any operator satisfying $$0\le \mathcal{O} \le \mathrm{const.}$$ Let $\psi_0$ be any vector and define $$\psi_t \equiv \mathrm{e}^{-\mathrm{i}Ht}\psi_0.$$ Then one of the following two alternatives holds. (i) $\left<\psi_t,\mathcal{O}\psi_t\right>\not=0$ for almost all $t$ (and the set of such t's is dense and open) (ii) $\left<\psi_t,\mathcal{O}\psi_t\right>\equiv 0$ for all $t$. Exactly how to understand Hegerfeldt's theorem is another question. It seems almost as if it isn't mentioned because it's so inconvenient (the second theorem, in particular, has a rather simple statement with rather general conditions), but a lot depends on how we define local and nonlocal. I usually take Hegerfeldt's theorem to be a non-relativistic cognate of the Reeh-Schlieder theorem in axiomatic QFT, although that's perhaps heterodox, where microcausality is close to the only definition of local. Microcausality is one of the axioms that leads to the Reeh-Schlieder theorem, so, no nonlocality.
{ "domain": "physics.stackexchange", "id": 2124, "tags": "quantum-mechanics, locality" }
Why do manufacturers still make aluminum kitchen utensils?
Question: I noticed many kitchen utensils or stand mixer attachments are made with injected aluminum. Aluminum (~\$20/kg) is usually more expensive than stainless steel (~\$10/kg) and is not compatible with dishwashers. For instance, I have a peeler made with anodized aluminum and a KitchenAid attachment with some aluminum parts that I cannot put in my dishwasher. I don't understand why engineers are still making such utensils in this material. Is there any good reason for this? Answer: Yes, steel (even some stainless) may be cheaper than aluminum, but the material cost of an item is seldom the majority of the total cost, especially a small item such as a potato peeler. Making aluminum parts with complex, curved shapes can be fairly easily done by casting. Aluminum pours at around 1500 °F, which can be achieved with low-cost furnaces and contained by low-cost crucibles. The molds can be easily made, and you'll find lots of instructions for doing so, e.g. using lost-plastic casting from 3D-printed templates done in the home. Aluminum can also be easily machined. Stainless steel, however, has a much higher pour temperature (above 2500 °F), which makes melting it, keeping it uncontaminated, and casting it much more difficult. This is why you'll seldom see a consumer item made from cast stainless; if it involves stainless it will generally be stamped or machined, either of which seriously boosts the price of the item. In a market where price is king, and speed to market is queen, the fact that the results aren't strictly dishwasher safe is a minor consideration.
{ "domain": "engineering.stackexchange", "id": 698, "tags": "steel, aluminum" }
What is the difference between a copper electromagnet (that is, an electromagnet with a copper core) and regular copper in terms of magnetism?
Question: If I bring a magnet close to a copper bar, there will be no attractive or repulsive force. Similarly, if I bring a magnet close to a copper electromagnet, there is still no attractive or repulsive force. So can't we say that the magnetism of regular copper and a copper electromagnet is the same? If not, then why? What difference is there between regular copper and a copper electromagnet? How does copper change if electricity passes through it? If there's any problem with my question please inform me. Thanks! Answer: Yes, the magnetism of the copper in the regular copper and the copper electromagnet is the same. The magnetic force of a copper electromagnet comes not from the copper, but from the electrons flowing through the copper. Currents produce magnetic forces. That's why it's called an electromagnet: you need electricity to produce a magnetic force.
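A small sketch of the point that the current, not the copper, produces the field: the textbook formula B = μ₀·n·I for a long solenoid contains the current and the winding density but no material property of the wire. The coil numbers here are made up for illustration:

```python
import math

mu0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

# Hypothetical coil: 1000 turns of copper wire over 0.1 m, carrying 2 A.
turns = 1000
length_m = 0.1
current_a = 2.0

n = turns / length_m          # turns per metre
B = mu0 * n * current_a       # field inside a long, air-cored solenoid, in tesla
print(f"B = {B:.4f} T")       # 0.0251 T for these numbers

# With the current switched off, B = 0: the copper itself contributes nothing.
```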
{ "domain": "physics.stackexchange", "id": 49725, "tags": "electromagnetism, magnetic-fields" }
Phase shift problem in Fast Fourier Transform
Question: I am trying to graph/print the magnitudes and phase shifts of an impulse response calculated by FFT. For magnitude everything works perfectly, but for phase shift I get some strange curve at higher frequencies. I can't figure out why. Could anyone help me? I calculate the phase shift like this: atan2(fftOutput.imag(),fftOutput.real()) * 180.0/M_PI; And for a simple impulse, like: impulse[1024] = { 1, 0, 0, 0 ... 0 } with no processing I expect a straight line (the phase shift for all frequency bins should be zero). But I get something like that (I drew it in Paint, because I can't run my app at the moment, but it looks almost exactly the same): Why is that? Answer: Something is wrong with your FFT. This looks like your input signal is either time-reversed or shifted (circularly) by one sample to the left.
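The two cases can be checked numerically (a sketch in NumPy rather than the asker's C++): an impulse at sample 0 gives identically zero phase, while an impulse shifted by one sample, consistent with the answer's diagnosis, gives a linear phase ramp:

```python
import numpy as np

N = 1024
impulse = np.zeros(N)
impulse[0] = 1.0            # delta at n = 0
shifted = np.zeros(N)
shifted[1] = 1.0            # delta at n = 1 (one-sample circular shift)

phase0 = np.degrees(np.angle(np.fft.fft(impulse)))  # identically zero
phase1 = np.degrees(np.angle(np.fft.fft(shifted)))  # ramp: -360*k/N degrees

print(phase0[:4])   # [0. 0. 0. 0.]
print(phase1[:4])   # approximately [ 0.  -0.352  -0.703  -1.055 ]
```

So a curve instead of a flat line at zero is the signature of the impulse not sitting exactly at sample 0.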
{ "domain": "dsp.stackexchange", "id": 6708, "tags": "fft, phase" }
E-store automatic email send, to be run daily
Question: This is a fictional email sending program for an e-store I've done for practice purposes. EmailSenderProgram is a program sending emails to customers. Currently it sends two types of email: "welcome" and "please come back" emails. It's supposed to run daily and write a debug log each day saying whether it worked or not. I'm gonna add more email types later and tried to make it easy for you to later add more emails. I'm kinda new to programming, but it seems to be working and I would love some improvement tips. using System; using System.Collections.Generic; namespace EmailSenderProgram { internal class Program { /// <summary> /// This application is run everyday /// </summary> /// <param name="args"></param> private static void Main(string[] args) { Console.WriteLine("Send Welcomemail"); bool success = DoEmailWork(); #if DEBUG Console.WriteLine("Send Comebackmail"); success = DoEmailWork2("CompanyComebackToUs"); #else if (DateTime.Now.DayOfWeek.Equals(DayOfWeek.Monday)) { Console.WriteLine("Send Comebackmail"); success = DoEmailWork2("CompanyComeBackToUs"); } #endif //Check if the sending went OK if (success == true) { Console.WriteLine("All mails are sent, I hope..."); } //Check if the sending was not going well...
if (success == false) { Console.WriteLine("Oops, something went wrong when sending mail (I think...)"); } Console.ReadKey(); } /// <summary> /// Send Welcome mail /// </summary> /// <returns></returns> public static bool DoEmailWork() { try { List<Customer> e = DataLayer.ListCustomers(); for (int i = 0; i < e.Count; i++) { //If the customer is newly registered, one day back in time if (e[i].CreatedDateTime > DateTime.Now.AddDays(-1)) { System.Net.Mail.MailMessage m = new System.Net.Mail.MailMessage(); m.To.Add(e[i].Email); //Add subject m.Subject = "Welcome as a new customer at Company!"; //Send mail from company@info.com m.From = new System.Net.Mail.MailAddress("compay@info.com"); m.Body = "Hi " + e[i].Email + "<br>We would like to welcome you as customer on our site!<br><br>Best Regards,<br>Company Team"; #if DEBUG //Don't send mails in debug mode, just write the emails in console Console.WriteLine("Send mail to:" + e[i].Email); #else //Create a SmtpClient to our smtphost: yoursmtphost System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("yoursmtphost"); smtp.Send(m); #endif } } return true; } catch (Exception) { return false; } } /// <summary> /// Send Customer ComebackMail /// </summary> /// <param name="v"></param> /// <returns></returns> private static bool DoEmailWork2(string v) { try { //List all customers List<Customer> e = DataLayer.ListCustomers(); //List all orders List<Order> f = DataLayer.ListOrders(); //loop through list of customers foreach (Customer c in e) { // We send mail if customer hasn't put an order bool Send = true; //loop through list of orders to see if customer don't exist in that list foreach (Order o in f) { // Email exists in order list if (c.Email == o.CustomerEmail) { //We don't send email to that customer Send = false; } } //Send if customer hasn't put order if (Send == true) { //Create a new MailMessage System.Net.Mail.MailMessage m = new System.Net.Mail.MailMessage(); //Add customer to receiver list m.To.Add(c.Email); //Add
subject m.Subject = "We miss you as a customer"; m.From = new System.Net.Mail.MailAddress("company@info.com"); //Add body to mail m.Body = "Hi " + c.Email + "<br>We miss you as a customer. Our shop is filled with nice products. Here is a voucher that gives you 50 kr to shop for." + "<br>Voucher: " + v + "<br><br>Best Regards,<br>Company Team"; #if DEBUG //Don't send mails in debug mode, just write the emails in console Console.WriteLine("Send mail to:" + c.Email); #else //Create a SmtpClient to our smtphost: yoursmtphost System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("yoursmtphost"); smtp.Send(m); #endif } } return true; } catch (Exception) { return false; } } } } Second class, a file with just the datalayers. namespace EmailSenderProgram { public class Customer { public string Email { get; set; } public DateTime CreatedDateTime { get; set; } } public class Order { public string CustomerEmail { get; set; } public DateTime OrderDatetime { get; set; } } class DataLayer { /// <summary> /// Mockup method for all customers /// </summary> public static List<Customer> ListCustomers() { return new List<Customer>() { new Customer(){Email = "mail1@mail.com", CreatedDateTime = DateTime.Now.AddHours(-7)}, new Customer(){Email = "mail2@mail.com", CreatedDateTime = DateTime.Now.AddDays(-1)}, new Customer(){Email = "mail3@mail.com", CreatedDateTime = DateTime.Now.AddMonths(-6)}, new Customer(){Email = "mail4@mail.com", CreatedDateTime = DateTime.Now.AddMonths(-1)}, new Customer(){Email = "mail5@mail.com", CreatedDateTime = DateTime.Now.AddMonths(-2)}, new Customer(){Email = "mail6@mail.com", CreatedDateTime = DateTime.Now.AddDays(-5)} }; } /// <summary> /// Mockup method for listing all orders /// </summary> public static List<Order> ListOrders() { return new List<Order>() { new Order(){CustomerEmail = "mail3@mail.com", OrderDatetime = DateTime.Now.AddMonths(-6)}, new Order(){CustomerEmail = "mail5@mail.com", OrderDatetime = DateTime.Now.AddMonths(-2)}, new
Order(){CustomerEmail = "mail6@mail.com", OrderDatetime = DateTime.Now.AddDays(-2)} }; } } } Answer: There are good and bad things about the code, but I will focus on some things that I believe to be bad about the code. The EmailSenderProgram namespace should be called EmailSender. You make an assumption that the program is run exactly once a day. What happens if it is run twice or not at all on a particular day? You can solve this by including a bool IsWelcomeEmailSent on the Customer class and updating it. Or even better, add a CustomerEmails store. This application is run everyday is a bad comment. What happens if the business requirement changes and now it only runs once a week? You should state what the program actually does. For example: This application sends emails to customers. DoEmailWork and DoEmailWork2 are bad function names. Instead explain the purpose of the function, for example SendWelcomeEmails and SendRetentionEmails. If the DoEmailWork function returns false, it will set bool success to false. Then if the DoEmailWork2 function returns true, it will overwrite bool success to true. Use 2 separate variables. The use of debug code, #if DEBUG etc, is pretty weird to be honest. You are essentially writing test code into the code itself, which is bad practice. Instead you should be using unit testing, injecting all your dependencies so that they can be mocked. This would allow you to mock the datasource as you are already doing, but then when you move to the real database, the code won't need to change. This is a large subject on its own and beyond the scope of this answer, but research unit testing, dependency injection and mocking. Give variables descriptive names. List<Customer> e = DataLayer.ListCustomers(); should be var customers = DataLayer.ListCustomers();. Then instead of for (int i = 0; i < e.Count; i++), use foreach (var customer in customers). Which brings me onto inconsistent looping.
In DoEmailWork there is a for loop, but in DoEmailWork2 there is a foreach loop. This is confusing. There is duplication of the send email code. Instead of building up the MailMessage object in each function, you should have a general SendEmail(List<string> recipients, string from, string subject, string body) function that you call from the other functions. There are other considerations to be made but I think you have enough to start off with. You should also look into Entity Framework so that you can store your data in a database. This will all mesh together with Unit Testing, Dependency Injection and Mocking over time. This is a lot of information but I hope it helps you further improve your code. EDIT Here is an example of a possible SendEmail method. using System.Net.Mail; public void SendEmail(List<string> recipients, string from, string subject, string body) { var mailMessage = new MailMessage(); mailMessage.To.Add(string.Join(",", recipients)); mailMessage.From = new MailAddress(from); mailMessage.Subject = subject; mailMessage.Body = body; } And you can call it like this. SendEmail( new List<string>() { "some@toemail.com", "another@toemail.com" }, "amazing@fromemail.com", "the subject", "the super amazing body text and html" ); Or using your data layer. SendEmail( DataLayer.ListCustomers().Select(customer => customer.Email).ToList(), "amazing@fromemail.com", "the subject", "the super amazing body text and html" ); You probably want to add a .Where(customer => customer.CreatedDateTime > DateTime.Now.AddDays(-1)) after ListCustomers() as well. Then just wrap that function in an appropriately named function. public void SendWelcomeEmails() { foreach (var customer in DataLayer.ListCustomers() .Where(customer => customer.CreatedDateTime > DateTime.Now.AddDays(-1))) { SendEmail( new List<string> { customer.Email }, "amazing@fromemail.com", "the subject", $"welcome {customer.Email}, this is the super amazing body text and html" ); } }
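The has-this-customer-ordered check can also be made linear-time instead of the nested O(n·m) loop over orders; a language-neutral sketch of the idea in Python, using hypothetical data that mirrors the question's mocks:

```python
# Hypothetical data mirroring the mock DataLayer in the question.
customers = ["mail1@mail.com", "mail2@mail.com", "mail3@mail.com"]
orders = [("mail3@mail.com", "2016-01-01")]

# Instead of a nested loop over all orders for every customer (O(n*m)),
# build a set of e-mails that have ordered once, then test membership in O(1).
emails_with_orders = {email for email, _ in orders}
needs_comeback_mail = [c for c in customers if c not in emails_with_orders]

print(needs_comeback_mail)  # ['mail1@mail.com', 'mail2@mail.com']
```

In C# the equivalent would be a `HashSet<string>` of order e-mails, but the shape of the optimization is the same.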
{ "domain": "codereview.stackexchange", "id": 26895, "tags": "c#, .net, email" }
Confused about weak/strong acids, conjugate acid-base pairs and buffers
Question: I learned that a) the conjugate base to a weak acid is a strong base (and vice versa). b) a buffer consists of a weak acid (base) and its conjugate base (acid). However, this explanation of buffers says the following: "[...] a weak acid is one that only rarely dissociates in water [...]. Likewise, since the conjugate base is a weak base, [...]" which seems to stand in conflict with my assumption a) above. So, what is correct? Furthermore, if b) is correct, isn't any solution of a weak acid a buffer, since any weak acid in water makes an equilibrium of the form $HA \text{ (weak acid)} \leftrightarrow H^+ + A^- \text{ (conjugate base)}$ and is thus a solution of a weak acid and its base? Answer: A strong acid (or base) forms a weak conjugate base (or acid). This is correct. By saying that $\ce{HA}$ is a weak acid and $\ce{A-}$ is a weak conjugate base they mean: Acid shouldn't be too strong - don't use $\ce{NaCl + HCl }$ Conjugate base shouldn't be too strong - don't use $\ce{EtOH/NaOEt }$ Use a "not-strong" acid (weak acid) whose conjugate base is also "not-strong" (weak conjugate base). For example, $\ce{H3CCOOH}$ is a relatively weak acid. But $\ce{H3CCOONa}$ is also a relatively weak base. Choose such acids to make a buffer.
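For a concrete feel of how such a weak-acid/conjugate-base pair fixes the pH, here is a small Henderson–Hasselbalch sketch (the pKa used is the commonly quoted value for acetic acid, ≈ 4.76):

```python
import math

# Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA]).
pKa = 4.76  # acetic acid, commonly quoted value

def buffer_pH(conc_base, conc_acid):
    """pH of a weak-acid buffer from the base/acid concentration ratio (mol/L)."""
    return pKa + math.log10(conc_base / conc_acid)

print(round(buffer_pH(0.10, 0.10), 2))  # 4.76 - equal concentrations: pH = pKa
print(round(buffer_pH(0.15, 0.10), 2))  # 4.94 - extra conjugate base nudges pH up only slightly
```

The point of the buffer is visible in the second line: shifting the base/acid ratio by 50% moves the pH by less than 0.2 units.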
{ "domain": "chemistry.stackexchange", "id": 6005, "tags": "acid-base" }
The Adiabatic Theorem and Symmetries of the Hamiltonian
Question: For this question, all operators and states are on a finite dimensional Hilbert space. Suppose I have a collection of continuously parametrized Hamiltonians $H(t), 0\leq t\leq T$. Suppose furthermore that I have a time-independent Hermitian operator $O$ such that $[H(t), O] = 0$ for all $t$. Informally, the adiabatic theorem states that if $|\psi(0)\rangle$ is an eigenstate of $H(0)$, then, provided the evolution of $H(t)$ is sufficiently slow/long, $|\psi(t)\rangle$ will remain an eigenstate of $H(t)$ for all $t$. I am wondering what I can say about the relationship between $|\psi(t)\rangle$ and the symmetry operator $O$. Suppose for instance that $|\psi(0)\rangle$ is also an eigenstate of $O$ with eigenvalue $\lambda$. My questions are Is it true that $|\psi(t)\rangle$ is an eigenstate of $O$ for all $t$? If so, will it necessarily have the same eigenvalue $\lambda$ as $|\psi(0)\rangle$? Does the physics change if the eigenspace of $O$ associated to $\lambda$ is degenerate? Answer: If $| \psi(0) \rangle$ is initially an eigenstate of $O$ with eigenvalue $\lambda$, and $O$ commutes with your Hamiltonian $H(t)$ for all times $t$, then indeed the state will always be an eigenstate of $O$ with eigenvalue $\lambda$. This follows trivially by writing the time evolution operator in terms of $H$. The most general form possible for the time evolution operator is obtained by the time ordered exponential $$ U(t, t_0) = T \exp \Bigg\{ -i \int_{t_0}^t dt' H(t') \Bigg\} = \sum_{n = 0}^{\infty} \frac{(-i)^n}{n!} \int_{t_0}^t dt_1 \ldots \int_{t_0}^t dt_n \ T \Big\{ H(t_1) \ldots H(t_n) \Big\}, $$ where $T$ is the time-ordering symbol, which orders the $n$ instances of the Hamiltonian from latest to earliest in time. This form for the time evolution operator is necessary when (for instance) $H$ does not commute with itself at different times.
In any case, you can immediately see that $[O,H(t)] = 0$ implies $[O,U(t)] = 0$ (I've set $t_0 = 0$), and therefore $$ O |\psi(t) \rangle = O U(t) |\psi(0) \rangle = U(t) O | \psi(0) \rangle = \lambda U(t) | \psi(0) \rangle = \lambda | \psi(t) \rangle $$ From the above, if the eigenspace associated to $\lambda$ is nondegenerate then clearly the state $| \psi(t) \rangle$ is proportional to $| \psi(0) \rangle$ at all times. On the other hand, if the eigenspace is degenerate then there's no reason $| \psi(t) \rangle$ can't explore the full eigenspace. As a trivial example, take $O = 1$ to be the identity operator on a 2-dimensional Hilbert space, and let $H(t)$ be anything that evolves nontrivially.
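Both claims can be checked numerically on a small example (a sketch of mine, with $H$ built block-diagonal in the eigenbasis of $O$ so that $[H, O] = 0$ holds by construction):

```python
import numpy as np

rng = np.random.default_rng(1)

# O with a degenerate eigenvalue: +1 on the first two basis vectors, -1 on the rest.
O = np.diag([1.0, 1.0, -1.0, -1.0])

def rand_herm(k):
    A = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
    return (A + A.conj().T) / 2

# H block-diagonal in the eigenbasis of O, so [H, O] = 0 by construction.
H = np.zeros((4, 4), dtype=complex)
H[:2, :2] = rand_herm(2)
H[2:, 2:] = rand_herm(2)
assert np.allclose(H @ O - O @ H, 0)

# Time evolution via the eigendecomposition: U(t) psi = V exp(-i w t) V^dagger psi.
w, V = np.linalg.eigh(H)
def evolve(psi, t):
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ psi))

psi0 = np.array([1, 0, 0, 0], dtype=complex)   # eigenvector of O, eigenvalue +1

for t in (0.5, 1.0, 2.0, 3.0):
    psi_t = evolve(psi0, t)
    # Still an eigenvector of O with the same eigenvalue +1 ...
    assert np.allclose(O @ psi_t, psi_t)

# ... but within the degenerate +1 subspace the state moves around:
spread = max(abs(evolve(psi0, t)[1]) for t in (0.5, 1.0, 2.0, 3.0))
print(spread > 1e-6)   # True for a generic 2x2 block
```

The eigenvalue of $O$ is conserved exactly, while the state is free to explore the degenerate eigenspace, just as the answer states.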
{ "domain": "physics.stackexchange", "id": 78994, "tags": "quantum-mechanics, operators, symmetry, time-evolution, adiabatic" }
Simple Email Subject Filtering (python)
Question: This challenge took me forever. I spent a lot of time messing with regular expressions, but couldn't get it to work. I ended up coming up with this abomination. If anybody wants to take the time I'm interested in simpler ways to accomplish this task. (I'm a brand new pythoner) def is_stressful(subj): """ recognise a stressful email subject we are looking for any of: all uppercase, or ending in 3 !!!, or containing 'help', 'asap', or 'urgent' despite have extraneous spellings """ import string flagged_words = ['help', 'asap', 'urgent'] if subj[-3:] == '!!!': # check for ending in at least 3 !!! return True if subj.isupper(): # check for uppercase return True stripped = "".join(c for c in subj if c not in string.punctuation) # get rid of confusing characters wordlist = stripped.lower().split(' ') # make the list to check for word in wordlist: # first easy check, everything is spelled correctly if word in flagged_words: return True for word in wordlist: # start the annoying check, getting rid of extra letters in flagged words r = 0 # our word list counter getgood = [] badletters = [] while r < len(flagged_words): # start the loop for i, l in enumerate(word): # going through each character if l in flagged_words[r]: # checking for good letters if l not in getgood: getgood.append(l) # no repeats except for 'a' in asap (don't know a good way for this) elif l == 'a' and getgood.count('a') < 2: getgood.append(l) else: badletters.append(False) # make sure we don't spell flagged words accidentally r += 1 # go to the next word in list i = 0 # reset letter counter if ''.join(getgood) in flagged_words and all(badletters): # our final check return True getgood = [] # reset the loop badletters = [] # reset the loop return False My tests: print(is_stressful("H!E!L!P! its urGent asAP")) print(is_stressful("asaaap")) print(is_stressful("Headlamp, wastepaper bin and supermagnificently")) print(is_stressful("I neeed advice!!!!")) Thanks for any tips! 
Answer: Improvements, with a new solution: import string. If the function is a top-level, commonly used function, it is better to move that import to the top of the enclosing module. Though in my proposed solution string won't be used. flagged_words. Instead of generating a list of flagged words on every call of the function, make it a constant with an immutable data structure defined in the outer scope: FLAGGED_WORDS = ('help', 'asap', 'urgent') empty subj. To avoid multiple redundant checks/conditions on an empty subj argument, if such would be passed in, a better way is handling an empty string at the start: subj = subj.strip() if not subj: raise ValueError('Empty email subject!') The if subj[-3:] == '!!!' and if subj.isupper() checks lead to the same result. That's a sign for applying the Consolidate Conditional Expression technique - the conditions are to be combined with the logical or operator. The last for ... while ... for traversal looks really messy and over-complicated. When trying to untangle that, your test cases allowed me to make an assumption that the crucial function (besides the trailing !!! chars and all upper-cased letters) should catch: exact word match with any of the flagged words, like urGent asAP (test case #1); exact word match with repeated allowed chars like asaaap (test case #2); and should not allow strings that contain only words which combine both allowed and unallowed chars, like Headlamp or wastepaper (though they contain he..l..p, .as....ap..)
(test case #3) Instead of going into a mess of splitting/loops, I'd suggest a complex regex solution that will cover both the exact word matching and the exact matching with repeated allowed chars cases. The underlying predefined pattern encompasses the idea of quantifying each char in each flagged word like h+e+l+p+ with respect for word boundaries \b: import re FLAGGED_WORDS = ('help', 'asap', 'urgent') def quantify_chars_re(words): """Adds `+` quantifier to each char for further use in regex patterns""" return [''.join(c + '+' for c in w) for w in words] RE_PAT = re.compile(fr"\b({'|'.join(quantify_chars_re(FLAGGED_WORDS))})\b", re.I) def is_stressful(subj): """ Recognize a stressful email subject we are looking for any of: all uppercase, or ending in 3 !!!, or containing 'help', 'asap', or 'urgent' despite have extraneous spellings """ subj = subj.strip() if not subj: raise ValueError('Empty email subject!') if subj.isupper() or subj[-3:] == '!!!' or RE_PAT.search(subj): return True return False if __name__ == "__main__": print(is_stressful("H!E!L!P! its urGent asAP")) print(is_stressful("asaaap")) print(is_stressful("Headlamp, wastepaper bin and super-magnificently")) print(is_stressful("I neeed advice!!!!")) The output: True True False True
{ "domain": "codereview.stackexchange", "id": 36532, "tags": "python, python-3.x, strings" }
In the double-slit experiment with electrons, why does the single-slit case not diffract as photons would?
Question: The famous double slit experiment used in books to describe the dual nature of particles usually presents a case where only one slit is open and there is no diffraction. Aren't we supposed to see diffraction from a single slit using electrons? They are also waves after all. If we use photons instead, they will diffract from a single slit. Answer: According to quantum mechanics, one cannot talk about a particle's position but about the probability of the position of the particle. A wave function is a function whose product with its conjugate gives the probability distribution of the position of the particle. Since the particle is known to be located somewhere in the range, the wave function should be normalizable. The wave function of a particle is determined from the famous Schrödinger Equation. The diffraction of photons can be explained by the particle-wave duality, but when it comes to a particle like an electron it is necessary to introduce the concept of the wave function. So when electrons are sprayed through a single slit their wave functions diffract. The probability of the position of an electron is in a wave form, and when passing through the slit the fashion of the probability of the position of the electron changes. That is why even if you send electrons one by one through a single or double slit you observe diffraction and interference. However, there is an effect of the observer. Earlier in the 20th century there was a famous discussion about the position of the particle just before the measurement is done. There were three suggested answers: agnostic, realistic and orthodox. For more details on this subject you can refer to the textbook on quantum mechanics by Griffiths. As a result of a well-prepared experiment it is found that the particle was not really anywhere. But, just after the measurement, the particle is found to be where it is measured to be.
So as a result of this phenomenon it is observed that the wave function of the particle collapses after the measurement. This should also answer your question. To clarify the effect of the observer, some textbooks (like Serway-Jewett, in the modern physics section) give figures where electrons do not diffract after passing through a single slit. This only occurs when the position of the electron is measured before passing through the slit. This measurement results in the collapse of its wave function, and the electron behaves like a small rigid ball as pictured in classical mechanics. This is also the case in the double slit experiment if you measure the position of the electron. You simply collapse the wave function in this way and somehow force it to be in that position.
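For comparison with the photon case, the single-slit (Fraunhofer) intensity pattern that an electron's wave function also produces can be sketched numerically. The slit width and de Broglie wavelength below are illustrative, not from any particular experiment:

```python
import numpy as np

# Fraunhofer single-slit intensity: I(theta) = I0 * sinc^2(a*sin(theta)/lambda),
# where np.sinc is the normalized sinc, sinc(x) = sin(pi*x)/(pi*x).
wavelength = 50e-12   # illustrative ~50 pm de Broglie wavelength
a = 1e-9              # illustrative 1 nm slit width

theta = np.linspace(-0.2, 0.2, 5)                 # radians
I = np.sinc(a * np.sin(theta) / wavelength) ** 2  # relative intensity, I0 = 1

print(np.round(I, 3))
# Central maximum at theta = 0; first zeros where a*sin(theta) = lambda:
first_zero = np.arcsin(wavelength / a)
print(f"first minimum at +/-{first_zero:.4f} rad")
```

The same sinc-squared envelope applies to the electron's probability distribution, which is why the single-slit case does diffract; the textbook figures without diffraction correspond to the measured (collapsed) case described above.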
{ "domain": "physics.stackexchange", "id": 49499, "tags": "quantum-mechanics, heisenberg-uncertainty-principle, double-slit-experiment, interference, diffraction" }
Publish a 3-dim array
Question: Hi, I have 3 variables of type double and I want to publish them as a 3-dim array. Do you have any idea about this? Originally posted by Emilien on ROS Answers with karma: 167 on 2016-04-21 Post score: 0 Answer: You need to create a msg file with the following contents: float64[3] my_array ... or alternatively float64 myvar1 float64 myvar2 float64 myvar3 Originally posted by Martin Günther with karma: 11816 on 2016-04-21 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Emilien on 2016-04-21: please can you explain it more? Comment by Martin Günther on 2016-04-21: Please go through the linked tutorial. When you have specific questions, open another question here.
{ "domain": "robotics.stackexchange", "id": 24414, "tags": "ros" }
Does a magnetic field slow in a medium?
Question: As far as I know, either you get a magnetic field to go through a material or you don't. Suppose instead of, say, molten iron I am using drinking water that maybe contains some traces of minerals such as iron, etc., as long as they are ferrous metals. I'm wondering what happens to the magnetic field of a permanent bar magnet when I drop it into the water, neglecting any oxidation? I know electromagnetic waves (light) are subject to Snell's law, so what about the magnetic field, which also travels at the speed of light under vacuum conditions? Answer: It is very important to understand the difference between a static EM field, and EM waves (like light). static field You are asking about a magnetic field, EM field, which is a static field. The effects (if they change) of the static field travel outwards from the source at speed c. This speed is not affected by the medium they are in, because this static field already exists everywhere around the source. The static field's effects are not mediated by real photons, but we use virtual photons to describe the mathematical model that we use for the static field. These virtual photons are off mass shell and are not constrained by the speed of light limit. An electromagnetic field (also EMF or EM field) is a magnetic field produced by moving electrically charged objects.[1] It affects the behavior of non-comoving charged objects at any distance of the field. https://en.wikipedia.org/wiki/Electromagnetic_field EM waves, light Now you are asking about EM waves, light, and you are correct, EM waves do slow down in media. Light and EM waves travel at the speed of light in vacuum, when measured locally, and they slow down in media. The main idea really is that the changed phase velocity is a collective phenomenon that only manifests in macroscopic electric field, but is due to mutual microscopic interactions of the medium elements, while those interactions take place at unchanged vacuum light speed c.
Mathematics supporting the classical explanation of why the phase speed of light slows down in a medium Now EM waves consist of real photons, that are massless, and travel at speed c in vacuum when measured locally. What really slows down in media is the wavefront of light. Individual photons do travel at speed c between the atoms of the media in vacuum (when measured locally). In physics, electromagnetic radiation (EM radiation or EMR) refers to the waves (or their quanta, photons) of the electromagnetic field, propagating (radiating) through space, carrying electromagnetic radiant energy.[1] https://en.wikipedia.org/wiki/Electromagnetic_radiation
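A one-line consequence of the above: what slows down in a medium is the phase velocity of the wavefront, v = c/n. A tiny sketch using a commonly quoted refractive index for water at optical frequencies:

```python
c = 299_792_458.0    # speed of light in vacuum, m/s (exact by definition)

# Phase velocity of light in a medium of refractive index n: v = c / n.
n_water = 1.33       # commonly quoted value for water at optical frequencies
v = c / n_water

print(f"{v:.3e} m/s")  # ~2.254e8 m/s: the wavefront slows, while the photons
                       # between the atoms still move at c, as described above
```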
{ "domain": "physics.stackexchange", "id": 62602, "tags": "electromagnetism, magnetic-fields" }
What is the largest "allowed" seed for a PRNG to not give any extra power to a deterministic machine?
Question: Suppose a polynomial time machine that has access to a polynomially long string of bits independent of the input. On average, it's impossible to compress this string to a subpolynomially long string using a deterministic polynomial time compression algorithm. As far as I understand, such a machine would be a $BPP$ machine, even though the source of bits is not necessarily a random number generator. In this case the number of independent bits is polynomial. I suppose, if we allow only a constant number of independent bits (e.g. as happens with most existing PRNGs using system time - a fixed-length independent string - for a seed) then the PRNG gives no extra computational power. However, can it be above constant and still keep the machine equivalent to a polynomial time deterministic machine? Answer: I think there must be some confusion in the problem setting. Given an input of $n$ bits chosen uniformly at random, there is no algorithm to compress it to something whose length is on average shorter than $n$ bits. This is a pure information theoretic fact, that holds regardless of how much computation power you throw at the problem. So I think your formulation in terms of compression is not accurately capturing what you actually care about. Any program that uses only a random seed of length $O(\log n)$ can be derandomized, by trying all $2^{O(\log n)}=O(\text{poly}(n))$ possibilities for the seed. So a seed length of up to $O(\log n)$ bits does not add extra power. It is an open question to what extent derandomization is possible. There is work relating derandomization and the existence of pseudorandom generators, which might be of interest to you.
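The derandomization-by-enumeration argument in the answer can be sketched concretely. The "randomized algorithm" below is a toy stand-in, not a real primitive; the point is only that with a c·log₂(n)-bit seed there are n^c seeds, so a majority vote over all of them is still polynomial-time and fully deterministic:

```python
def randomized_alg(x, seed):
    # Toy stand-in for an algorithm whose answer depends on a random seed.
    return (x ^ seed) % 3 == 0

def derandomized(x, n, c=1):
    """Deterministic majority vote over all 2^(c*log2 n) = n^c possible seeds."""
    seed_bits = c * max(1, n.bit_length())        # O(log n) bits of randomness
    num_seeds = 2 ** seed_bits                    # polynomial in n
    votes = sum(randomized_alg(x, s) for s in range(num_seeds))
    return votes > num_seeds // 2

print(derandomized(12, 16))   # False: only 11 of the 32 seeds accept this toy input
```

For a BPP-style algorithm correct on a 2/3 fraction of seeds, this vote always returns the right answer; the enumeration becomes super-polynomial as soon as the seed length grows beyond O(log n), which is exactly the boundary the answer describes.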
{ "domain": "cs.stackexchange", "id": 21619, "tags": "complexity-theory, time-complexity, randomized-algorithms, randomness, probabilistic-algorithms" }
Closure on regular languages
Question: A) Let $L$ be a regular language. According to the theorem there is a DFA which accepts the language. Briefly describe how to change the DFA into an NFA which accepts $L^R$, where R is reverse. There is no need to write how to build it formally or to prove correctness. B) True or False: if $L$ is not regular then $L^R$ is not regular My solution: A) First of all, because we want the reverse, the accepting state would become the start and the rejecting states will become accepting; also change the direction of the edges to the opposite side. But when it comes to changing it to an NFA I'm a bit stuck. B) I think it's true, since it doesn't matter if it's reversed: if it's regular then of course $L^R$ will be regular as well. Is this the way for A and B? Answer: A) First of all, because we want the reverse, the accepting state would become the start and the rejecting states will become accepting; also change the direction of the edges to the opposite side. You're almost there. What happens if you have multiple accepting states in the original DFA? That's where you need NFA capabilities to convert them. Here is a spoiler in case you want to check your answer: Designing a DFA and the reverse of it on ComputerScience.SE. B) Your answer is true although you might want to phrase it a bit more carefully. We want to prove "if $L$ is not regular, $L^R$ is not regular". So let $L$ be not regular. If $L^R$ were regular, then so would $(L^R)^R = L$ be, which is a contradiction. Hence, $L^R$ is not regular.
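The construction the answer hints at can be sketched in code (a hypothetical dict-based automaton representation of mine, not from the question): flip every transition, make the old accepting states the NFA's start states (this is where more than one accepting state forces an NFA), and make the old start state the only accepting state.

```python
def reverse_dfa(delta, start, accepting):
    """delta: dict mapping (state, symbol) -> state of a DFA.
    Returns (reversed transitions, NFA start states, NFA accepting states)."""
    rev = {}
    for (q, a), r in delta.items():
        rev.setdefault((r, a), set()).add(q)   # flip each edge q --a--> r
    return rev, set(accepting), {start}

def nfa_accepts(rev, starts, accepting, word):
    current = set(starts)
    for a in word:
        current = set().union(*(rev.get((q, a), set()) for q in current))
    return bool(current & accepting)

# Example DFA over {a, b} for strings ending in 'ab' (state 2 accepting);
# its reversal should accept exactly the strings *starting* with 'ba'.
delta = {(0, 'a'): 1, (0, 'b'): 0,
         (1, 'a'): 1, (1, 'b'): 2,
         (2, 'a'): 1, (2, 'b'): 0}
rev, starts, acc = reverse_dfa(delta, start=0, accepting={2})

print(nfa_accepts(rev, starts, acc, "ba"))   # True  ('ab' reversed)
print(nfa_accepts(rev, starts, acc, "ab"))   # False ('ba' reversed is not in L)
```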
{ "domain": "cs.stackexchange", "id": 15762, "tags": "automata, computation-models" }
Find largest smaller key in Binary Search Tree
Question: Given a root of a Binary Search Tree (BST) and a number num, implement an efficient function findLargestSmallerKey that finds the largest key in the tree that is smaller than num. If such a number doesn’t exist, return -1. Assume that all keys in the tree are nonnegative. The BstNode class is given to me. public class BstNode { public int key; public BstNode left; public BstNode right; public BstNode parent; public BstNode(int keyt) { key = keyt; } } You can ignore the unit test code; this is just a demo. [TestMethod] public void LargestSmallerKeyBstRecrussionTest() { BstNode root = new BstNode(20); var left0 = new BstNode(9); var left1 = new BstNode(5); var right1 = new BstNode(12); right1.left = new BstNode(11); right1.right = new BstNode(14); left0.right = right1; left0.left = left1; var right0 = new BstNode(25); root.right = right0; root.left = left0; Assert.AreEqual(14, helper.FindLargestSmallerKeyRecursion(17, root)); } This is the code I would like you to review and comment on. public static int FindLargestSmallerKey(uint num, BstNode root) { if (root == null) { return -1; } int temp = Math.Max( FindLargestSmallerKey(num, root.left), FindLargestSmallerKey(num, root.right)); if (root.key < num) { return Math.Max(root.key, temp); } return temp; }
tmp : root.key; } else { return FindLargestSmallerKey(num, root.left); } } This is faster because at each step, only one subtree is inspected instead of both, so that the time complexity is limited by the height of the tree. If the tree is balanced then the result is found in \$ O(\log N) \$ time where \$ N \$ is the number of nodes. The same idea can be used for an iterative solution: public static int FindLargestSmallerKey(uint num, BstNode root) { int result = -1; while (root != null) { if (root.key < num) { // root.key is a candidate, continue in right subtree: result = root.key; root = root.right; } else { // root.key is not a candidate, continue in left subtree: root = root.left; } } return result; } I would also add a more comprehensive set of unit tests. Start with the simplest trees (for example, small trees with root 10 and children 5 and/or 15) and call your function with num from 1 to 20. That would already have helped to find the flaws in your initial implementation.
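For readers outside C#, the same iterative descent can be sketched in Python. This is a translation of the reviewer's iterative solution, not part of the original review; the BstNode class and the tree below are reconstructed from the question's unit test.

```python
class BstNode:
    """Minimal re-creation of the question's BstNode, for illustration only."""
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def find_largest_smaller_key(num, root):
    """Walk down the BST once, remembering the best candidate smaller than num."""
    result = -1
    while root is not None:
        if root.key < num:
            # root.key is a candidate; anything larger must be in the right subtree.
            result = root.key
            root = root.right
        else:
            # root.key is too big; only the left subtree can hold candidates.
            root = root.left
    return result

# Tree from the question's unit test.
root = BstNode(20,
               left=BstNode(9,
                            left=BstNode(5),
                            right=BstNode(12, left=BstNode(11), right=BstNode(14))),
               right=BstNode(25))
print(find_largest_smaller_key(17, root))  # → 14
```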
{ "domain": "codereview.stackexchange", "id": 32071, "tags": "c#, interview-questions, binary-search" }
How can I install bioconductor-gviz and use it in jupyter notebook?
Question: I tried to install gviz in a conda environment, but that library seems to be incompatible with Python and R. I tried to set up a clean environment using conda create -n r -c conda-forge r-essentials jupyter and then add the library with: source activate r conda install -c bioconda bioconductor-gviz getting UnsatisfiableError: The following specifications were found to be in conflict: - atk I removed atk, but now I get: UnsatisfiableError: The following specifications were found to be in conflict: - bioconductor-gviz - r-bindr Has anyone managed to use Gviz from within a Jupyter notebook? I also tried to install gviz from a running R-notebook with: install.package('gviz') Warning in install.packages : package ‘gviz’ is not available (for R version 3.4.3) Same when I try 'bioconductor-gviz'. Answer: Did you try following the installation instructions? Try this in R: source("https://bioconductor.org/biocLite.R") BiocInstaller::biocLite(c("Gviz"))
{ "domain": "bioinformatics.stackexchange", "id": 838, "tags": "bioconductor, gviz, bioconda, jupyter" }
Odd "pulsed" acceleration when using amcl and move_base
Question: Hello, I am trying out amcl and move_base on a new home built robot that is similar to the TurtleBot but uses the Serializer microcontroller and Pololu motors and encoders. I also have a Hokuyo laser scanner. I'm running the odometry loop at 20Hz and the robot can do dead reckoning fairly nicely. Next I created a small map of a couple of rooms using gmapping and that also looks good. The trouble occurs when I then run amcl and move_base and set a navigation goal in RViz. The robot always gets to the goal fine and follows the planned path fairly well. However, as it is moving it does a kind of pulsed acceleration and deceleration, even when it is moving across a complete clear space. The frequency of the pulsing is about 2 seconds. Is this something that could be tweaked somewhere in the base local planner parameters? I am basically using the TurtleBot parameters for the planner and the cost maps so I won't post them here. Thanks! patrick UPDATE: Jan 1, 2012: It seems I spoke too soon. I am still getting oscillations in cmd_vel as sent to the robot by move_base. I have posted a bag file here demonstrating the phenomenon. I am running move_base with a blank map and fake localization. Then I set a nav_goal about a meter ahead and slightly to the right of the robot in an otherwise clear area. If you rxplot cmd_vel you can see the oscillations in the x component which are very noticeable when watching the robot. For move_base, I am using essentially the identical parameters as in the turtlebot_navigation package. Can anyone see the cause of this oscillation--perhaps some kind of asynchronization between /odom and /cmd_vel? 
Update 2: Here is an rxplot of /cmd_vel/linear/x and /odom/twist/twist/linear/x with the robot up on blocks so the wheels are spinning with no resistance: Originally posted by Pi Robot on ROS Answers with karma: 4046 on 2011-12-30 Post score: 2 Original comments Comment by Pi Robot on 2012-01-04: @Eric Perko - I was not able to create the rxplot on the running robot either--then I discovered that I was not publishing a timestamp on the odom message header. Once I added the timestamp, I was able to get the plot. Unfortunately, adding the timestamp did not fix the oscillation problem. Comment by Eric Perko on 2012-01-03: @Pi Robot: Just curious - are you able to create the rxplot while playing back from the bag file you posted or only against the running robot? Comment by Eric Perko on 2012-01-01: @Pi Robot: Can you plot rxplot /cmd_vel/linear/x,/odom/twist/twist/linear/x against the live system while you are seeing the oscillations? I can't seem to get it to plot on the same graph on your bag file, and this graph is the one that I think would be most illuminating. Comment by Pi Robot on 2011-12-31: Thanks @ahendrix and @Eric Perko. I looked at cmd_vel/linear/x in rxplot and the robot was definitely being sent an oscillating velocity command. Since I wrote the odometry node for the Serializer I figured that was the weakest link. ;-) So I went back to the code and sure enough, I had used Python time() instead of rospy.Time() when computing and publishing odometry messages. Once I fixed that bug, the oscillations disappeared. So I'm guessing I had a small time synchronization error between odom messages and move_base. Comment by Eric Perko on 2011-12-30: Have you looked at an rxplot of cmd_vel/linear/x and /odom/twist/twist/linear/x while this pulsing is going on? Do the commands pulse because of weirdness in the reported velocity or do the reported velocities pulse to follow pulsing commands? Answer: From comment above: Thanks @ahendrix and @Eric Perko. 
I looked at cmd_vel/linear/x in rxplot and the robot was definitely being sent an oscillating velocity command. Since I wrote the odometry node for the Serializer I figured that was the weakest link. ;-) So I went back to the code and sure enough, I had used Python time() instead of rospy.Time() when computing and publishing odometry messages. Once I fixed that bug, the oscillations disappeared. So I'm guessing I had a small time synchronization error between odom messages and move_base Originally posted by tfoote with karma: 58457 on 2011-12-31 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Pi Robot on 2012-01-01: @ahendrix and @Eric Perko -- please see Update to my original question.
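The root cause above — stamping odometry with Python's time.time() float instead of rospy.Time.now() — can be illustrated without ROS installed. The helper below is a hypothetical sketch of the conversion a proper stamp performs (splitting epoch seconds into the secs/nsecs pair a ROS header expects); in a real node you would simply assign odom.header.stamp = rospy.Time.now().

```python
import time

def to_ros_stamp(float_secs):
    """Split a float epoch time into the (secs, nsecs) integer pair that a
    ROS std_msgs/Header stamp carries -- the shape rospy.Time.now() produces.
    Publishing the raw float from time.time() instead (as in the bug) gives
    consumers like move_base an inconsistent clock to difference against."""
    secs = int(float_secs)
    nsecs = int(round((float_secs - secs) * 1e9))
    if nsecs >= 1_000_000_000:  # guard against rounding up to a full second
        secs += 1
        nsecs -= 1_000_000_000
    return secs, nsecs

secs, nsecs = to_ros_stamp(time.time())
assert 0 <= nsecs < 1_000_000_000
```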
{ "domain": "robotics.stackexchange", "id": 7762, "tags": "navigation, move-base, amcl" }
Grand canonical partition function: factorization
Question: The grand canonical ensemble partition function is defined to be $$ \mathcal{Z} := \sum_{\forall |n\rangle} e^{\beta \mu N_{|n\rangle} - \beta E_{|n\rangle}} $$ being $|n\rangle$ a notation for each microstate (not necessarily quantum, just a microstate), $N_{|n\rangle}$ the number of particles of that microstate and $E_{|n\rangle}$ the energy corresponding to this microstate. This is the formal definition. Now I have from my lecture notes that when we are dealing with non-interacting and indistinguishable particles (e.g. an ideal gas) the partition function can be written as $$ \mathcal{Z} = \prod_{\forall \epsilon_i} \sum_{\forall \text{ allowed } n}(z e^{-\beta \epsilon _i})^n $$ now being $\epsilon _i$ each "monoparticular state" and "$\forall \text{ allowed } n$" is $\{0,1\}$ for fermions and $\{0,1,2,\dots\}$ for bosons. The "monoparticular states" are the energy levels that each particle can be in. The question: How do we go from the definition to the second formula? My approach (wrong, or at least incomplete) If particles do not interact between one another then the energy of the microstate $|n\rangle$ can be written as a summation $$ E_{|n\rangle} = \epsilon_1 + \epsilon_2 + \dots + \epsilon_{N_{|n\rangle}} = \sum_{i=1}^{N_{|n\rangle}} \epsilon_i $$ where $\epsilon_i$ is the energy of each particle. Thus the partition function reads $$ \mathcal{Z} = \sum_{\forall |n\rangle} e^{\beta \mu N_{|n\rangle}} \prod_{i=1}^{N_{|n\rangle}} e^{-\beta \epsilon _i}$$ where I have already expanded the exponential of a summation as a product of exponentials. Now, if particles are identical then the allowed values for $\epsilon _i$ are for all the particles the same, say $\epsilon _i \in \{\varepsilon_0, \varepsilon_1, \dots\}$. Using this notation, $\epsilon_8 = \varepsilon_3$ reads as "particle number 8 is in the third energy level".
This allows one to arrange the microstates of the system as follows (sorry for changing the language in the pic): Now I don't know how to go on... Any help is appreciated. Answer: You are misunderstanding what is meant by a product over monoparticular states. This is not a product over the states of the $N$ particles in the system, it is a product over all possible single-particle states. The sum is then over the occupation number of those states, i.e. the number of particles actually in that state (which may be $0$). The advantage of this occupation number approach over keeping track of individual particles is that it automatically takes account of particle indistinguishability. The total number of particles in a state $\gamma$ is clearly simply the sum of the number of particles in each single particle state $$ N_\gamma = \sum_i n_i $$ The total energy is the energy of each single particle state, times the number of particles in that state $$ E_\gamma = \sum_i \epsilon_i n_i $$ Now the formula follows from a simple manipulation \begin{align} \mathcal{Z} &= \sum_{\gamma} e^{-\beta(E_\gamma - \mu N_\gamma)}\\ &= \sum_{\gamma} e^{-\beta\sum_i n_i(\epsilon_i-\mu)}\\ &= \sum_{n_0}\sum_{n_1}\ldots\; \left(\prod_i e^{\beta n_i(\mu - \epsilon_i)}\right)\\ &= \prod_i \sum_{n_i} \left(ze^{-\beta\epsilon_i}\right)^{n_i} \end{align}
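The factorization in the last line can be checked numerically for a small system. The sketch below uses illustrative single-particle energies and fermionic occupations ($n_i \in \{0,1\}$), comparing a brute-force sum over all occupation patterns with the product formula:

```python
import itertools, math

beta, mu = 1.0, 0.5
levels = [0.0, 0.3, 0.7, 1.2]   # illustrative single-particle energies
z = math.exp(beta * mu)          # fugacity z = e^{beta mu}

# Brute force: sum exp(-beta(E - mu N)) over every occupation pattern.
Z_brute = 0.0
for occ in itertools.product([0, 1], repeat=len(levels)):
    N = sum(occ)
    E = sum(n * e for n, e in zip(occ, levels))
    Z_brute += math.exp(beta * mu * N - beta * E)

# Factorized form: product over single-particle states, sum over n_i in {0, 1}.
Z_factored = math.prod(1 + z * math.exp(-beta * e) for e in levels)

assert abs(Z_brute - Z_factored) < 1e-12
```

For bosons the inner sum becomes a geometric series, and the factor for each level is $1/(1 - z e^{-\beta\epsilon_i})$ instead.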
{ "domain": "physics.stackexchange", "id": 46655, "tags": "statistical-mechanics, partition-function" }
Negative mass, or just the 'appearance' of negative mass? What's the acid test?
Question: Breaking news here claims researchers at Washington State University have created negative mass, based on the characteristic that if you push it, it accelerates in the opposite direction. The so-called negative mass is a collection of rubidium atoms forming a Bose–Einstein condensate. My question is: can one claim they have negative mass by this one characteristic? Doesn't the material at least need to be checked against gravitational forces to see if it 'falls' up? What is the 'acid test' that indeed these experimenters have negative mass? Perhaps they are just seeing some other mechanism of the system that makes things appear as though they have negative mass. Answer: You're right, there's some confusion due to the way this paper has been publicized. Negative effective mass The 'negative mass' referred to in the paper is effective mass. The idea is that while every fundamental constituent of a physical system has a known, nonnegative mass, the effective degrees of freedom of the system may behave as if they have a different mass. This isn't a new idea; it pops up in a lot of contexts: You can claim that an electron's mass is "really" zero, because if you turn off every other quantum field, electrons are massless. But what we call a physical electron is really a combination of excitations of the electron and Higgs fields, which does have a positive mass. Since we can't separate the two, the latter description is more useful. If you have a sealed, almost-full container of water, you can describe the system by the position of the air bubbles instead of the position of the water molecules. Within the fluid, the air bubbles act as if they have negative mass: if you push the water down, the bubbles go up. Electrons in a solid can act as if they have a mass different from the electron mass. The reason is that the electrons interact with all the lattice ions; when you push on the electron, you end up pushing on the ions too.
This can either increase or decrease the effective mass, possibly making it negative. The example in the paper is most like this, though it's in a BEC instead. Effective vs. fundamental mass Is a negative effective mass "really" a negative mass? On a fundamental level, we think of mass as the thing that goes into $E = mc^2$; alternatively it's the mass of an object that determines how it couples to the gravitational field. If you're thinking of mass this way, then no, none of the examples I listed above have negative mass, nor does the paper. But if you're in the business of atomic physics or condensed matter physics, it doesn't matter, because relativity is totally irrelevant to your experiments. The energies are low enough that the speed of light might as well be infinite, and the excitations you're studying really do have a preferred reference frame (the lab frame). If you're a fish that never leaves the water, it makes perfect sense to call an air bubble 'negative mass', even if people outside the water disagree. Does negative mass fall down? You also asked whether an object with negative mass falls up or down. The equivalence principle tells us that gravity is indistinguishable from uniform acceleration. That means that positive and negative masses have to behave the exact same way under gravity, so negative mass falls down. The common confusion here probably comes from the fact that an air bubble in water (with its negative effective mass) appears to fall up. This isn't actually true. If you drop a container of water containing an air bubble, the entire thing will accelerate downward uniformly, and the bubble will be stationary in the water, as required by the equivalence principle. You can see this explicitly in this video from the ISS (timestamp 1:05). If you hold a container of water on Earth, the air bubbles will accelerate upward, but this isn't due to gravity. 
Gravity is pulling both the air and water down, but your hand is pushing the water up, and the water in turn pushes the air bubbles up. The excitations in the BEC, which also have negative effective mass, are fully analogous. If you drop the BEC, they'll fall down. If you hold the BEC still, they might as well 'fall up', but this is just due to interactions within the material, not to gravity itself.
{ "domain": "physics.stackexchange", "id": 39684, "tags": "mass, bose-einstein-condensate, exotic-matter" }
Propagation of Electric Signal from Physics Perspective?
Question: We know that in terms of copper or metal conductors, when one part is given heat the other part also gets hot. This is some kind of transmission of electrons. Signal transmission in terms of data communication is not any different from that, I believe. But I can't get a clear picture of how the digital or analog signal that is passed through a coaxial or twisted pair copper cable is handled by the conductor. What happens to the voltage of the signal? Or why don't the collisions of electrons inside the conductor attenuate the signal, distort the signal, or add any noise to it? You can point me to any resource book if suitable or answer it here. Answer: You seem to have a good instinct about the issue. The resistance of conductors causes signal degradation for high frequency components and it adds noise. You can separate the two issues in most cases, but they both play a major role in the design of communications systems. The most important step in understanding signal conduction on wires is to realize that it is not actually the electrons that get transmitted, but it's the electromagnetic field that surrounds the conductors. The most obvious sign of this is that the electrons in a wire are moving at a fraction of a mm/s, but the signals on cables are being transmitted at roughly one half to two thirds of the speed of light. One way of thinking about this is that the main function of the electrons in the conductor is to keep the electromagnetic field from "escaping" into free space. You probably know that in the static case the potential on the surface of a conductor is constant. A wire is therefore a "guide" for an electrostatic potential. Wherever the wire leads, the potential follows. While the field of a free charge has a $1/r$ potential and decays very quickly with distance, the potential of the same charge on a long wire will be exactly the same, independent of the distance (for the static case without current flow).
This is the reason why we can generate a GW of electric power in a large power plant and transmit it hundreds of miles to where the energy is needed with reasonable losses. The potential along the wire does, of course, give rise to a radial electric field $\vec E$ surrounding the wire. In addition, when there is a current flowing through the wire, there will also be a cylindrical magnetic field $\vec B$ surrounding the wire and it will also be the strongest right around the wire along its entire length, so, again, the wire provides just the right boundary conditions for the magnetic field to stay localized right where we want it. In electrodynamics you will learn that there is also a quantity called the Poynting-vector $\vec S$, which is a cross product of electric and magnetic field: $\vec S=\vec E \times \vec B$. The Poynting vector describes the flow of energy due to the electromagnetic field. Because the electric field surrounding a straight wire is radial and the magnetic field has cylinder symmetry, the Poynting vector happens to be aligned parallel to the wire. Wikipedia has a nice drawing showing the fields and the Poynting vector in a simple circuit with energy flow: https://en.wikipedia.org/wiki/Poynting_vector This is how energy flows through space around the wire from the source to the load. It is not being transported by the electrons in the wire, but it is being transported by the field on the outside! So while the guiding electrons can only move very slowly, the energy can flow almost at the speed of light.
Now, if we want to understand the transmission of dynamic signals around pairs of conductors (we always need two because there has to be a current return path), then we have to solve Maxwell's equations or a simplified model of them called the Telegrapher's equations: https://en.wikipedia.org/wiki/Telegrapher%27s_equations To derive the Telegrapher's equations we imagine two long parallel conductors and we look at the electric and magnetic fields along a small section. One can then either use symmetries and simplifications to turn the Maxwell equations into a one dimensional wave equation or one can use a simplified circuit model for short segments. The most common way of thinking about it is that the wire segment is an inductor in series with a small resistance (that's the resistance of the wires on that length of wire) and a small capacitance between the two wires in parallel with another resistance which models the finite isolation resistance of the dielectric (the latter is usually not a big problem, though). Both approaches lead to the same equations and when we solve them for the lossless case then we get traveling wave solutions in both directions. The signal transmission can be characterized by an effective velocity and a cable impedance. When there are losses in the cable due to the finite conductivity of the wires, then there will be an exponential damping and some signal dispersion, i.e. a wave packet will slowly spread out as it travels down the cable. Both effects can be minimized by choosing proper materials and cable geometries and today they are being carefully managed by digital signal equalization technologies, which mathematically model the dispersion and correct for it. And this brings us to the question of noise. As we said, our conductors have a finite resistance and there is an exponential loss of signal along the length of a cable. This loss is frequency dependent and greatly increases with frequency.
In practice these signal losses will limit cable runs to around 100m for frequencies above 1GHz, which you will easily be able to identify as typical limits for e.g. GBit ethernet connections. But we are not only losing signal, we are also picking up noise along the way. A simple thermal spectral noise density formula is given by https://en.wikipedia.org/wiki/Johnson%E2%80%93Nyquist_noise: $v_n=\sqrt{4kTR}$. For a typical $50\Omega$ coaxial system this means that the irreducible thermal noise floor is set by the terminating resistance (either on the driver or the receiver side) and it is limited to approx. $0.9nV/\sqrt{Hz}$. For a 1GHz bandwidth system this gives us a noise voltage of roughly $0.9\times\sqrt{10^9}nV\approx 28.5\mu V$. In practice this will probably be more like 50-100% larger due to the additional noise in our electronic circuits. So if we want to have, at least, a $5\sigma$ signal to noise ratio, then we need to inject something on the order of $200\mu V$ times the signal attenuation along the cable, which can range from 30-50dB. In practice this means that we need tens to hundreds of mV for said GBit/s connections over 100m distances. One can get a little better than that with error correction codes, but that's an entire lecture on digital communication. :-) Hope this gives you an idea where all of this leads...
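The noise-floor numbers above are easy to reproduce. The short sketch below assumes room temperature T = 300 K (the answer does not state a temperature):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # assumed room temperature
R = 50.0                # terminating resistance, ohms
bandwidth = 1e9         # 1 GHz system bandwidth

v_density = math.sqrt(4 * k_B * T * R)       # Johnson noise density, V/sqrt(Hz)
v_total = v_density * math.sqrt(bandwidth)   # integrated noise voltage

print(f"{v_density * 1e9:.2f} nV/sqrt(Hz)")  # → 0.91 nV/sqrt(Hz)
print(f"{v_total * 1e6:.1f} uV")             # → 28.8 uV over 1 GHz
```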
{ "domain": "physics.stackexchange", "id": 32752, "tags": "electrons, voltage, conductors, data" }
Resources for finding all drugs of a certain class
Question: I may be embarking on a project involving a fairly extensive healthcare records data set, looking for the use of a particular type of drug (for example, "Proton Pump Inhibitors"). But these drugs are usually listed by their trade or generic names - is there a well-maintained resource for looking up what drugs are members of a certain class (if class is indeed the right word)? Answer: The WHO has its own methodology, the Anatomical Therapeutic Chemical (ATC, thank you commenter) classification system, for organizing such data (its impetus is comparing results between studies). Also, if you have $900 sitting around, you can get a subscription for the United States Pharmacopeia. I'm not sure if that can be spread across multiple subscriptions for an organization like the Red Book David mentions.
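To make the record filtering concrete: assuming the records carry generic drug names, class membership can be matched by ATC code prefix (A02BC is the ATC class for proton pump inhibitors). The tiny lookup table below is illustrative only; in practice you would load the full WHO ATC index rather than hard-code it.

```python
# Tiny illustrative slice of an ATC lookup table (generic name -> ATC code).
ATC = {
    "omeprazole":   "A02BC01",
    "pantoprazole": "A02BC02",
    "lansoprazole": "A02BC03",
    "metformin":    "A10BA02",
}

def in_class(drug_name, atc_class_prefix):
    """True if the drug's ATC code falls under the given class prefix."""
    code = ATC.get(drug_name.lower())
    return code is not None and code.startswith(atc_class_prefix)

records = ["Omeprazole", "Metformin", "Pantoprazole"]
ppis = [r for r in records if in_class(r, "A02BC")]  # proton pump inhibitors
print(ppis)  # → ['Omeprazole', 'Pantoprazole']
```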
{ "domain": "biology.stackexchange", "id": 326, "tags": "pharmacology" }
Heat pump intuition
Question: What is an intuitive explanation for the concept of heat pumps? I know that it is basically a reversed Carnot process. We can for example take an amount of heat $Q_1$ out of a warmer system and transform part of it into work W. The rest goes to the colder system. If we now reverse that process we need to take heat out of the colder system. For this to be done we need the same amount of energy W we got out of the process previously. But here my problem starts: How do you force the energy to come out of the colder reservoir? How can you explain that without just saying that it is an inverted Carnot process? Answer: You describe the process. As an example, you might think of how a gas refrigerator works. You take a gas and expose it to the cold area, which cools it to the cold temperature. Then you isolate it from the cold area, which costs no energy. You compress it, raising the temperature above that of the hot reservoir, at the cost of physical work. You then connect it to the hot reservoir and let some heat escape. Disconnect it again, then expand it to a temperature below the cold reservoir, recovering some energy in the process. Now connect it to the cold reservoir and you have a cycle with work in and heat moving from cold to hot. This is really just adding details to saying an inverse of the Carnot process.
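The "compress it until it is hotter than the hot reservoir" step can be made quantitative with the adiabatic relation $TV^{\gamma-1} = \text{const}$ for an ideal gas. The numbers below are illustrative, not taken from the answer:

```python
gamma = 5.0 / 3.0               # monatomic ideal gas
T_cold, T_hot = 275.0, 295.0    # reservoir temperatures in K (illustrative)

# Gas equilibrated with the cold reservoir, then compressed adiabatically 2:1.
compression_ratio = 2.0
T_after = T_cold * compression_ratio ** (gamma - 1)

print(round(T_after, 1))        # → 436.5 (K) -- now hotter than the hot reservoir
assert T_after > T_hot          # so heat can flow out into the hot reservoir
```

This is why the compression costs work: it is exactly that work that lifts the gas temperature above the hot reservoir so heat can be dumped there.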
{ "domain": "physics.stackexchange", "id": 32957, "tags": "thermodynamics, heat-engine" }
Arago Spot in the Shadows of Celestial Bodies
Question: Recently, I've watched this video by Veritasium describing Poisson's Spot, or the Arago Spot. It is explained in the video that near-circular (or spherical) objects can produce this optical effect, given that there aren't many obstructions about the circumference of the object to diffuse and scatter the light and the light producing the shadow is more or less in phase. My question is, Are Arago Spots possible/Have Arago Spots been observed in the solar shadows of any celestial bodies? Answer: @JamesK's answer is basically correct. I'd change "be point-like" to "be parallel" or "be collimated", but then I'd change that to "have a high degree of partial coherence" or maybe "have high etendue", because a slightly converging or diverging wave front could produce a similar spot-like structure. The important thing is that the spread in the directions of rays from the source does not blur the spot out so much that it becomes so weak and spread out that it is not noticeable, or isn't really a spot anymore. There are several other problems as well, mostly related to scale. For visible light, a planet's roughness is so huge compared to a wavelength that it will not produce a small well-defined spot. See how Deviation from circularity points out that the spot's existence and size can be thought of in terms of a point-spread function of the source. You wouldn't compare the planet's roughness with a wavelength directly, but perhaps to something like $\sqrt{\lambda \ R_{planet}}$. For the Earth that's a few meters, so the Earth and other planets are too rough. For the roughness example in Wikipedia using a 4 mm sphere, that's 45 microns, and that nicely explains why the simulated edge corrugations of 10 microns do not remove the central peak, but they are weaker at 50 microns and nearly gone at 100 microns!
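The $\sqrt{\lambda R}$ scale is easy to check numerically. In the sketch below the 500 nm wavelength is an assumption (the text says only "visible light"), and taking $R = 4$ mm for the Wikipedia sphere is likewise a guess at how that example is parameterized:

```python
import math

def roughness_scale(wavelength, radius):
    """The sqrt(lambda * R) scale against which edge roughness is judged."""
    return math.sqrt(wavelength * radius)

# Earth at an assumed visible wavelength of 500 nm: "a few meters".
earth = roughness_scale(500e-9, 6.371e6)
print(round(earth, 2))        # → 1.78 (meters)

# The ~4 mm sphere from the Wikipedia roughness example, same wavelength.
sphere = roughness_scale(500e-9, 4e-3)
print(round(sphere * 1e6))    # → 45 (microns)
```

Both numbers line up with the scales quoted in the answer, which is why 10-micron corrugations leave the spot intact while 100-micron ones wash it out.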
And while much of the contribution comes from rays that pass fairly near the obstruction, a planet's atmosphere's refraction will muddle that and have more refractive than diffractive effect. This is why most demonstrations use the collimated light from a laser. It's not for the wavelength purity (the detailed fine structure depends on wavelength, but the spot is an enhancement extended along the axis for any wavelength) but for the extremely high partial coherence. But that shouldn't completely stop you from looking for an Arago spot in the radio wavelengths, especially at low frequency where all the problems related to scale are less severe. Long wavelength signals from the Earth (e.g. AM radio) are blocked from reaching space by the ionosphere unfortunately, so you can't sneak around behind the Moon to look for your favorite radio station's Arago spot. That's too bad because the Moon would likely be smooth enough for those half-km wavelengths, and it's quite round (rather than oblate) as well. The nice thing about that would be the narrow frequency of the carrier in standard AM transmissions; detection of that would be far simpler than a broadband radio source.
{ "domain": "astronomy.stackexchange", "id": 3289, "tags": "optics" }
Parallel download of rpm packages
Question: I use zypper to download and install for openSUSE Tumbleweed (much like everyone else of course). Unfortunately it downloads packages serially, and having about a thousand packages to update a week, it gets quite boring. Also, while zypper can download packages one-by-one in advance, it can't be called concurrently. I found the libzypp-bindings project but it is discontinued. I set myself to improve the situation. Goals: Download all repositories in parallel (most often different servers); Download up to MAX_PROC (=6) packages from each repository in parallel; Save packages where zypper picks them up during system update: /var/cache/zypp/packages; Alternatively, download to $HOME/.cache/zypp/packages; Avoid external dependencies, unless necessary. Outline: Find the list of packages to update; Find the list of repositories; For each repository: Keep up to $MAX_PROC curl processes downloading packages. Copy files to default package cache; #!/bin/bash MAX_PROC=6 function repos_to_update () { zypper list-updates | grep '^v ' | awk -F '|' '{ print $2 }' | sort --unique | tr -d ' ' } function packages_from_repo () { local repo=$1 zypper list-updates | grep " | $repo " | awk -F '|' '{ print $6, "#", $3, "-", $5, ".", $6, ".rpm" }' | tr -d ' ' } function repo_uri () { local repo=$1 zypper repos --uri | grep " | $repo " | awk -F '|' '{ print $7 }' | tr -d ' ' } function repo_alias () { local repo=$1 zypper repos | grep " | $repo " | awk -F '|' '{ print $2 }' | tr -d ' ' } function download_package () { local alias=$1 local uri=$2 local line=$3 IFS=# read arch package_name <<< "$line" local package_uri="$uri/$arch/$package_name" local local_dir="$HOME/.cache/zypp/packages/$alias/$arch" local local_path="$local_dir/$package_name" printf -v y %-30s "$repo" printf "Repository: $y Package: $package_name\n" if [ ! 
-f "$local_path" ]; then mkdir -p $local_dir curl --silent --fail -L -o $local_path $package_uri fi } function download_repo () { local repo=$1 local uri=$(repo_uri $repo) local alias=$(repo_alias $repo) local pkgs=$(packages_from_repo $repo) local max_proc=$MAX_PROC while IFS= read -r line; do if [ $max_proc -eq 0 ]; then wait -n ((max_proc++)) fi download_package "$alias" "$uri" "$line" & ((max_proc--)) done <<< "$pkgs" wait } function download_all () { local repos=$(repos_to_update) while IFS= read -r line; do download_repo $line & done <<< "$repos" wait } download_all sudo cp -r ~/.cache/zypp/packages/* /var/cache/zypp/packages/ There's 2 or 3 places where grep/tr are subject to issues, but the nature of the data doesn't require much more than that. Answer: I completely changed the approach which resulted in faster downloads, and improved functionality. The overall approach of the initial version works well as a generic solution for adding concurrency to independent jobs. Removed zypper list-updates. This format is easier for machine consumption, but it's not intended for Tumbleweed. Replaced with zypper dup/inr/in --details; Removed background jobs and waits. Replaced with aria2, which handles the maximum number of concurrent connections; It pays off learning more about awk capabilities, in order to replace sequences of grep/awk/tr with a single awk; The main job of the script became building a plain text file with URIs and target directory for each .rpm file. aria2 is a really great tool. Superb quality. curl is not reliable in its native concurrent download capabilities.
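For reference, aria2's input-file format (consumed with aria2c -i) puts each URI on its own line, followed by indented per-download option lines such as dir=. The Python sketch below shows one way such a file could be generated; the URIs, paths, and package list are made up for illustration.

```python
# Sketch: build an aria2 input file ("aria2c -i downloads.txt") from a list of
# (uri, target_dir) pairs.  In aria2's input-file format, per-download options
# follow the URI on lines indented with whitespace.
packages = [
    ("https://example.org/repo/x86_64/foo-1.0.x86_64.rpm",
     "/var/cache/zypp/packages/repo-oss/x86_64"),
    ("https://example.org/repo/noarch/bar-2.1.noarch.rpm",
     "/var/cache/zypp/packages/repo-oss/noarch"),
]

lines = []
for uri, target_dir in packages:
    lines.append(uri)
    lines.append(f"  dir={target_dir}")   # option line for the URI above

with open("downloads.txt", "w") as fh:
    fh.write("\n".join(lines) + "\n")

# Then download with, e.g.: aria2c --max-concurrent-downloads=6 -i downloads.txt
```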
{ "domain": "codereview.stackexchange", "id": 37897, "tags": "bash, linux, curl" }
How can I specify an email address in the SLURM snakemake json configuration file?
Question: I am working on a SLURM-based cluster. I am currently using the following slurm json file: { "__default__" : { "A" : "overall", "time" : "96:00:00", "nodes": 1, "ntasks": 1, "cpus" : 24, "p" : "LONG", "mem": "16G", "output": "snakemake%A.out", "error": "snakemake%A.err" } } along with the following command: snakemake --jobs 200 --use-conda --printshellcmds --cluster-config slurm.json --cluster "sbatch -A {cluster.A} -p {cluster.p} -t {cluster.time} --output {cluster.output} --error {cluster.error} --ntasks {cluster.ntasks} --cpus-per-task {cluster.cpus} --mem {cluster.mem}" I would like to be able to specify my email and receive an email when a job finishes. I have tried adding a line in the json file with "email": my@email.com and an option to the sbatch command --email {cluster.email} but that did not work. Does anyone know how to do it? Answer: As noted by the comments and the SLURM manual (not so much bioinformatics related?), you need the following terms in your batch/queuing command: --mail-type=ALL --mail-user=you@email.com I used "ALL" which will send you all reports. If you just want the finish message then you would use "END" and "FAIL" like so: --mail-type=END,FAIL Here are the specifics from the manual if you wish to make a different selection: --mail-type= Notify user by email when certain event types occur. Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to BEGIN, END, FAIL, REQUEUE, and STAGE_OUT), STAGE_OUT (burst buffer stage out and teardown completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent of time limit), TIME_LIMIT_80 (reached 80 percent of time limit), TIME_LIMIT_50 (reached 50 percent of time limit) and ARRAY_TASKS (send emails for each array task). Multiple type values may be specified in a comma separated list. The user to be notified is indicated with --mail-user.
Unless the ARRAY_TASKS option is specified, mail notifications on job BEGIN, END and FAIL apply to a job array as a whole rather than generating individual email messages for each task in the job array. --mail-user= User to receive email notification of state changes as defined by --mail-type. The default value is the submitting user.
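To fold this into the cluster configuration itself — which is what the question asked — one option is to add the two mail settings to the __default__ block and reference them from the --cluster template. This is a sketch; the key names mail_type and mail_user are arbitrary placeholders chosen here, not special snakemake keywords:

```json
{
    "__default__" : {
        "A" : "overall",
        "time" : "96:00:00",
        "p" : "LONG",
        "mail_type" : "END,FAIL",
        "mail_user" : "you@email.com"
    }
}
```

and then extend the sbatch template in the snakemake call with --mail-type {cluster.mail_type} --mail-user {cluster.mail_user}.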
{ "domain": "bioinformatics.stackexchange", "id": 911, "tags": "snakemake, slurm" }
What makes water a liquid (when it is)?
Question: I'm curious about what the minimal conditions for some collection of water molecules to be liquid water are. What is the minimum number? What sort of bonds must hold? What other sorts of things I don't know to ask about need to be true? In my peeking around a subject I have little familiarity with, I've seen mention of a tetrahedron, suggesting a lower bound of 4 molecules for water to be a liquid. But then I also stumbled across the following (from p. 23, here): Whatever the model concerned, the presence of a fifth water molecule is necessary to be considered in explaining the molecular mechanism of defect formation and Hbond network rearrangement. So it seems that although only four molecules are in this bond network undergoing constant change, there must actually be a fifth "neighbor" molecule for the bonds to function appropriately. Is this fifth molecule part of the liquid, or merely a helpful neighbor who makes the four molecules' liquification possible? In short, what makes water a liquid (when it is)? Answer: Let me venture an answer as well as I can. I would point out that the article you are referencing is speaking of the arrangement of molecules in bulk liquid water when it is talking about this "fifth water" being present. The key is to note that they say a "fifth water molecule in the first coordination shell (23)." That means they are looking at how water molecules are coordinated with each other in liquid water. They are studying a massive (compared to a single water) amount of water, so the question of its liquidity is no question at all. The "fifth water" thing is curious because it is common belief that waters arrange themselves in a tetrahedral arrangement. This is known to be true for ice, but is in question for liquid water. For instance, this article about the liquid water hydrogen-bonding network indicates that water, as it is heated, begins participating in only two hydrogen bonds rather than four. 
Now, to answer the question of what makes liquid water be liquid water (when it is indeed liquid water). I think from the outset, it should be clear that this is a very difficult question to answer and any answer will likely be debated. I say that because you must define how many molecules it takes to make a liquid, which is a very difficult thing to do. I will tell you, however, that I have some useful information from research I am doing on water clusters and eventually liquid water (as a computational chemistry research assistant). The largest cluster we have studied is called DD*(20,1), which means a distorted dodecahedron with a water inside the cluster. An idea of what $\ce{(H2O)21}$ can look like is this (taken from this very interesting article): (figure: $\ce{(H2O)21}$ cluster isomers from the linked article) The DD*(20,1) arrangement is image C. As you can see from these different isomers, they are very distinct and no doubt each monomer behaves dramatically differently from the next monomer. 21 monomers is quite a lot for a cluster, but even this does not mimic the properties of liquid water at all, particularly because one would still consider this a gas-phase cluster. The fact that we even have isomers is indicative of not being near the point where something is considered liquid water. There are water clusters whose energy minima have been identified theoretically all the way from n=2 to (I've seen an article that says) n=60. Read more about some of that here. One point I would like to make is that in structure C, for the research I'm doing, we have identified the vibrational properties of many of the monomers as being significantly different from those of molecules in liquid water. So, all that being said about water clusters, we are certain that many more than five or 21 waters must be present to truthfully call something a liquid.
We should then start looking for a system which is just large enough to accurately predict the properties of liquid water which are found experimentally. As it happens, it is very difficult to do this on any scale, theoretically, at least, and identifying how many water molecules one is dealing with during an experiment to a quantifiable amount of precision is quite difficult. There are, however, good approximations of liquid water which are generated using Molecular Dynamics simulation software (here's a wikipedia page about MD software). The various models used for approximating the behavior of water do quite well at mimicking water's properties from an MD perspective. You can read about the TIP4P model of water if you want. From the paper I just linked about TIP4P, they say that they approximate the properties of liquid water using 360 water molecules in their MD simulations, so that can act as a reasonable baseline for how many monomers are necessary before a system behaves like liquid water. I say baseline for an important reason. The results found in that paper use MD simulations which pull a clever trick. Rather than having 360 waters that essentially float around in a box with nothing at all beyond the edges of the box, the software will actually mirror the behavior of a molecule beyond the walls of the box in which you are simulating. That means if you pick a single water, you would find that exact same water in the same place a full "simulation box" away from where you originally found it. So, if this says 360 waters, the number of waters for which interactions are being calculated is actually much larger than the amount of interactions 360 waters alone would have. All that leads me to conclude that water behaves like water probably somewhere around 500 monomers. It must be noted, however, that the monomers behaving like liquid water as we know it will be the monomers on the inside of those 500 (but likely more) waters. 
Once you get towards a boundary, things start getting weird. Hope that helps.
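The "mirror the behavior beyond the walls of the box" trick described above is what MD people call periodic boundary conditions. Its core arithmetic, the minimum-image convention, is tiny; here is a toy illustration in Python (a sketch for intuition only, not tied to any particular MD package):

```python
def minimum_image(dx, box):
    """Shortest displacement along one axis in a periodic box of length `box`.

    A particle 9 units away in a 10-unit box is really only 1 unit away
    through the wall: its nearest periodic image sits in the adjacent copy
    of the simulation box.
    """
    return dx - box * round(dx / box)

# Two waters near opposite walls of a 10-unit box:
print(minimum_image(9.0, 10.0))   # -1.0: nearest image is 1 unit away, the other side
print(minimum_image(4.0, 10.0))   #  4.0: already the shortest separation
```

This is why 360 explicitly simulated waters "see" far more neighbors than 360 isolated molecules would.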
{ "domain": "chemistry.stackexchange", "id": 4083, "tags": "water, molecules" }
Relational schema and query for multiplayer tabletop game
Question: As an exercise, I was asked to design a database schema (for MS SQL Server) for a tabletop game. The requirements were simple: players compete in matches and there are specific match types (e.g. 1v1 or 2v2). A hypothetical web site that would make use of this database would include things like player match history, a scoreboard page, and a leaderboard page. My initial design had some flaws and inconsistencies, but after thinking more on it, I came up with the following schema: create table Players ( Id int identity(1, 1) primary key, PlayerName nvarchar(50) not null ); create table MatchTypes ( Id tinyint primary key, Name nvarchar(50) not null ); create table Matches ( Id int identity(1, 1) primary key, MatchTypeId tinyint not null references MatchTypes(Id), IsComplete bit not null, CompletionDate datetime2 null ); create table Teams ( Id int identity(1, 1) primary key, MatchId int not null references Matches(Id), Score smallint not null, IsWinningTeam bit null ); create table PlayerTeams ( PlayerId int not null references Players(Id), TeamId int not null references Teams(Id) ); Although the design is fairly simple, I'm sure I've made sub-optimal design decisions. What could be improved on? In addition, I went ahead and wrote a SQL query to extract data for a hypothetical "Player Match History" page. The goal is something like the following: Player: Foo Matches: Against Bar ... Won! Against Quux ... Lost With Baz, against Bar & Quux ... Lost With Bar, against Quux & Baz ... Won! Essentially, given a player, select all matches that the player participated in, and display them with opponents/teammates. Here's the query, although I feel like it may be unnecessarily complex. 
select pm.MatchId, t.Id as TeamId, p.Id as PlayerId, p.PlayerName, case when t.Id = pm.TeamId then 1 else 0 end as IsTeammate, pm.Won from ( select m.Id as MatchId, m.CompletionDate, t.Id as TeamId, t.IsWinningTeam as Won from Players p inner join PlayerTeams pt on p.Id = pt.PlayerId inner join Teams t on pt.TeamID = t.Id inner join Matches m on t.MatchId = m.Id where m.IsComplete = 1 and p.Id = 1 ) pm inner join Teams t on pm.MatchId = t.MatchId inner join PlayerTeams pt on t.Id = pt.TeamId inner join Players p on pt.PlayerId = p.Id where p.Id != 1 order by pm.CompletionDate desc Answer: Good work on schema Overall, I think your schema is clean, easy to follow, and makes sense (in the limited scope of your scenario). Granted the overarching or "whole" schema for something like this would be much larger, but if you kept following this sort of normalization it should be fine. Aliases I see at least 5 unclear aliases: p, t, m, pt, pm. Mr. Maintainer would have to read through your entire script to even grasp what these stand for. Your column names are short enough, I really don't think aliases are of much use, and they just make your script more confusing to review. Imagine this kind of notation used on 1000 lines of codes in a real business setting, this should drive my point home. Common Table Expressions You have a rather large subquery (which I will address next point) and in SQL Server and several other RDBMS, you can simplify the way it reads by using a CTE. 
So this: from ( select m.Id as MatchId, m.CompletionDate, t.Id as TeamId, t.IsWinningTeam as Won from Players p inner join PlayerTeams pt on p.Id = pt.PlayerId inner join Teams t on pt.TeamID = t.Id inner join Matches m on t.MatchId = m.Id where m.IsComplete = 1 and p.Id = 1 ) pm You could instead write, at the very beginning of your script: with pm as ( select m.Id as MatchId, m.CompletionDate, t.Id as TeamId, t.IsWinningTeam as Won from Players p inner join PlayerTeams pt on p.Id = pt.PlayerId inner join Teams t on pt.TeamID = t.Id inner join Matches m on t.MatchId = m.Id where m.IsComplete = 1 and p.Id = 1 ) select /* bunch of work here */ from pm -- reference CTE in main query as if it were a table Redundancy I noticed your subquery (or CTE) is joining some of the same stuff your main query is doing. Look: from ( select m.Id as MatchId, m.CompletionDate, t.Id as TeamId, t.IsWinningTeam as Won from Players p inner join PlayerTeams pt on p.Id = pt.PlayerId -- (1) inner join Teams t on pt.TeamID = t.Id -- (2) inner join Matches m on t.MatchId = m.Id where m.IsComplete = 1 and p.Id = 1 ) pm inner join Teams t on pm.MatchId = t.MatchId inner join PlayerTeams pt on t.Id = pt.TeamId -- (2) inner join Players p on pt.PlayerId = p.Id -- (1) Best to avoid redundant operations, especially expensive join's, as it makes the execution slower. Only one of each should be needed in most cases. Nitpick This preferably should not be used: where p.Id != 1 Instead, use this: where p.Id <> 1 Although both work just the same, != is not ANSI-92 standard.
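To see the CTE version end to end, here is a self-contained sketch using Python's sqlite3. The schema is trimmed (no MatchTypes table) and the data is a single hypothetical 1v1 match, so treat it as an illustration of the query shape rather than a drop-in for SQL Server:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE Players (Id INTEGER PRIMARY KEY, PlayerName TEXT NOT NULL);
    CREATE TABLE Matches (Id INTEGER PRIMARY KEY, MatchTypeId INTEGER NOT NULL,
                          IsComplete INTEGER NOT NULL, CompletionDate TEXT);
    CREATE TABLE Teams (Id INTEGER PRIMARY KEY,
                        MatchId INTEGER NOT NULL REFERENCES Matches(Id),
                        Score INTEGER NOT NULL, IsWinningTeam INTEGER);
    CREATE TABLE PlayerTeams (PlayerId INTEGER NOT NULL REFERENCES Players(Id),
                              TeamId INTEGER NOT NULL REFERENCES Teams(Id));
    INSERT INTO Players VALUES (1, 'Foo'), (2, 'Bar');
    INSERT INTO Matches VALUES (1, 1, 1, '2024-01-01');
    INSERT INTO Teams VALUES (1, 1, 10, 1), (2, 1, 5, 0);
    INSERT INTO PlayerTeams VALUES (1, 1), (2, 2);
""")

# The match-history query for player 1, with the subquery lifted into a CTE
# and no single-letter aliases, so each join reads unambiguously.
history = cur.execute("""
    WITH pm AS (
        SELECT Matches.Id AS MatchId, Matches.CompletionDate,
               Teams.Id AS TeamId, Teams.IsWinningTeam AS Won
        FROM PlayerTeams
        JOIN Teams ON PlayerTeams.TeamId = Teams.Id
        JOIN Matches ON Teams.MatchId = Matches.Id
        WHERE Matches.IsComplete = 1 AND PlayerTeams.PlayerId = 1
    )
    SELECT pm.MatchId, Players.PlayerName,
           CASE WHEN Teams.Id = pm.TeamId THEN 1 ELSE 0 END AS IsTeammate,
           pm.Won
    FROM pm
    JOIN Teams ON pm.MatchId = Teams.MatchId
    JOIN PlayerTeams ON Teams.Id = PlayerTeams.TeamId
    JOIN Players ON PlayerTeams.PlayerId = Players.Id
    WHERE Players.Id <> 1
    ORDER BY pm.CompletionDate DESC
""").fetchall()
print(history)  # [(1, 'Bar', 0, 1)] -> Foo faced opponent Bar in match 1 and won
```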
{ "domain": "codereview.stackexchange", "id": 8844, "tags": "game, sql, sql-server" }
Why is the temperature *still* rising?
Question: 2015 is the hottest year on record, and the average temperature continues to rise. I don't understand why this continues, as so much work was put into reducing Global Warming over the past 40 years, yet not only does the temperature not fall, it continues to rise more than it did between 1870-1960. I don't understand something. The amount of industry went through the roof (literally) between 1870 to 1960, and no one cared about the environment. Now that we do care about it, and (at least somewhat) legislate cleaner cars, factories, etc, I would expect the temperature to even out, yet it doesn't. Why not? Answer: "So much work"? Actually, compared to the global rate of greenhouse gas emissions, it's a case of "so little work"! From a scientific perspective the 'economists' solution' of carbon trading was always unlikely to achieve the required carbon cuts, as has been verified by their ineffectiveness over the last decade or so. As farrenthorpe points out, the rate of increase of CO2 is largely population-driven, and hence there is still an inexorable rise in mean atmospheric CO2. The acid test of human efforts to limit global warming is whether the Hawaiian CO2 monitoring graph (the Keeling curve from Mauna Loa) is flattening off. It clearly isn't going to flatten anytime soon. In fact, if anything, it is getting steeper. So all the hot air from 'Paris', and previous talkfests, is evidently too little, too late. Realistically, limiting the average temperature rise to less than 2 °C is now effectively unattainable. We have yet to see what 'all this work' can achieve. So far, almost nothing.
{ "domain": "earthscience.stackexchange", "id": 1176, "tags": "atmosphere, climate-change, climate" }
rplidar_node on ros2 / Autoware drops message for 'Unknown' reason
Question: I'm using Autoware and the rplidar_node is showing correct messages (ros2 topic echo scan), but it fails to get them shown in rviz2, with this error message: [INFO] [1625892725.336256970] [rviz]: Message Filter dropping message: frame 'laser_frame' at time 1625892723.515 for reason 'Unknown' When I use ROS Melodic everything works fine. Originally posted by heavy02011 on ROS Answers with karma: 1 on 2021-07-10 Post score: 0 Answer: This is likely because the global frame that you have set in rviz2 is not laser_frame and you haven't defined a transform between laser_frame (the frame in which the laser scans are being published) and the global frame that you are using in rviz2. In order to visualize the scan data while using a global frame other than the sensor's frame, you must define and publish (either with a URDF file and robot_state_publisher, or with tf2_ros static_transform_publisher) the frame transform between the laser's frame and at least one other frame in your transform tree (usually base_link). Hope this helps. Originally posted by Josh Whitley with karma: 1766 on 2021-07-12 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 36683, "tags": "ros, rplidar" }
Code review: infix to postfix converter
Question: public static StringBuffer infixToPostfix(StringBuffer infix) throws InvalidCharacterException { StringBuffer postfix = new StringBuffer(""); Stack<String> myStack = new Stack<String>(); myStack.push("("); infix.append(')'); for(int i = 0; i < infix.length(); i++) { if(infix.charAt(i) == ' ') { //System.out.println("Space!"); } else if(Character.isDigit(infix.charAt(i)) == true) { postfix.append(infix.charAt(i) + " "); } else if(infix.charAt(i) == '(') { myStack.push("("); } else if(infix.charAt(i) == ')') { while(myStack.peek()!="(") { postfix.append(myStack.pop() + " "); } myStack.pop(); } else if(isOperator(infix.charAt(i) + "") == true) { String peekedItem = myStack.peek(); if(isOperator(peekedItem) == true) { if(getPrecedence(infix.charAt(i) + "") <= getPrecedence(peekedItem)) { String poppedOp = myStack.pop(); if(poppedOp == "+") { postfix.append("+ "); } else if(poppedOp == "-") { postfix.append("- "); } else if(poppedOp == "*") { postfix.append("* "); } else if(poppedOp == "/") { postfix.append("/ "); } else if(poppedOp == "%") { postfix.append("% "); } String op = String.valueOf(infix.charAt(i)); myStack.push(op); } } else if(isOperator(peekedItem)==false) { String op = String.valueOf(infix.charAt(i)); myStack.push(op); } } else throw new InvalidCharacterException(infix.charAt(i)); } return postfix; } public static double evaluatePost(StringBuffer postfix) { String str = new String(postfix); str = str.replaceAll(" ", ""); postfix = new StringBuffer(str); postfix.append(")"); Stack<Double> anotherStack = new Stack<Double>(); double answer = 0; for(int k = 0; postfix.charAt(k) != ')'; k++) { if(Character.isDigit(postfix.charAt(k)) == true) { anotherStack.push(Double.parseDouble(postfix.charAt(k)+"")); } else if(isOperator(postfix.charAt(k)+"")==true) { double x = anotherStack.pop(); double y = anotherStack.pop(); char op = postfix.charAt(k); if(op=='+') { answer = (x+y); anotherStack.push(answer); answer = 0; } else if(op=='-') { answer = (y-x); 
anotherStack.push(answer); answer = 0; } else if(op=='*') { answer = (x*y); anotherStack.push(answer); answer = 0; } else if(op=='/') { answer = (y/x); anotherStack.push(answer); answer = 0; } else if(op=='%') { answer = (x%y); anotherStack.push(answer); answer = 0; } } } double finalAnswer = anotherStack.pop(); return finalAnswer; } Answer: When doing String comparisons, do not use ==. In Java, == on objects compares references (are these the same object?), while equals() compares contents, so two Strings that read the same can still fail an == test. Change your lines: if(poppedOp == "+") { postfix.append("+ "); } ..... to be if("+".equals(poppedOp)) { postfix.append("+ "); } ..... The same applies to the myStack.peek()!="(" comparison in your ')' handling.
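Beyond the equality fix, the long if/else-if chains that invited the == bug can be replaced by table lookups. As an illustration, here is the same pair of algorithms in that style — a Python sketch, keeping the reviewed code's restrictions (single-digit operands, operators + - * / %):

```python
import operator

PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2, "%": 2}
APPLY = {"+": operator.add, "-": operator.sub, "*": operator.mul,
         "/": operator.truediv, "%": operator.mod}

def infix_to_postfix(expr):
    out, stack = [], []
    for ch in expr.replace(" ", ""):
        if ch.isdigit():
            out.append(ch)
        elif ch == "(":
            stack.append(ch)
        elif ch == ")":
            while stack[-1] != "(":          # pop operators back to the "("
                out.append(stack.pop())
            stack.pop()                      # discard the "(" itself
        elif ch in PRECEDENCE:
            # pop anything of equal or higher precedence first
            while stack and stack[-1] != "(" and PRECEDENCE[stack[-1]] >= PRECEDENCE[ch]:
                out.append(stack.pop())
            stack.append(ch)
        else:
            raise ValueError(f"invalid character: {ch!r}")
    while stack:
        out.append(stack.pop())
    return " ".join(out)

def evaluate_postfix(postfix):
    stack = []
    for token in postfix.split():
        if token.isdigit():
            stack.append(float(token))
        else:
            right, left = stack.pop(), stack.pop()
            stack.append(APPLY[token](left, right))
    return stack.pop()

print(infix_to_postfix("(1+2)*3"))                    # 1 2 + 3 *
print(evaluate_postfix(infix_to_postfix("(1+2)*3")))  # 9.0
```

The operator tables make precedence and evaluation data, not control flow, so adding an operator is a one-line change in each dictionary.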
{ "domain": "codereview.stackexchange", "id": 5084, "tags": "java, strings, stack, converting" }
$\alpha$ and $\beta$-decay, why don't they neutralize each other?
Question: $\alpha$ radiation consists of positively charged helium nuclei, $\beta$ radiation of negatively charged electrons. So why don't the $\alpha$ particles take those electrons to get neutral? Answer: I agree with jwenting but in some sense, I feel that he is not answering the question: why is there no "combined $\alpha$ plus $\beta$ decay" in which a nucleus emits e.g. a helium atom? Well, let me start with the $\beta$-decay. Nuclei randomly - after some typical time, but unpredictably - may emit an electron because a neutron inside the nucleus may decay via $$ n\to p+e^- +\bar\nu$$ which may be reduced to a more microscopic decay of a down-quark, $$ d\to u + e^- +\bar\nu.$$ This interaction, mediated by a virtual W-boson, is why a nucleus - with neutrons - may sometimes randomly emit an electron. So the $\beta$-decay is due to the weak nuclear force. On the other hand, the $\alpha$-decay is due to the strong nuclear force: the nucleus literally breaks into pieces, with a very stable combination of 2 protons and 2 neutrons appearing as one of the pieces (helium nucleus). The two processes above are independent, and each of them can kind of be reduced to a single elementary interaction whose origin is different. This independence and different origin are why the "combined" decay, with an emission of both an electron (or two electrons) and a helium nucleus, is extremely unlikely. Such an emission of a whole atom (which is electrically neutral but it is surely not "nothing"!) could only occur if several of the elementary decay interactions occurred at almost the same time, which is extremely unlikely.
{ "domain": "physics.stackexchange", "id": 726, "tags": "nuclear-physics, radiation" }
How to publish transform from odom to base_link?
Question: I have a recorded rosbag file which contains LaserScan data (LMS100) linked to base_link. I am trying to use amcl to localize the robot in the map, but amcl needs odom->base_link so that it can publish map->odom. Currently there is no link between odom and base_link. I loaded the map in rviz: when the global frame is set to base_link I get scan data; when set to odom I get odometry data; when set to map, nothing happens. I want to integrate it the way map->odom->base_link works. I have not been able to figure out how to link odom -> base_link. Can anyone help me solve the above problem? Originally posted by Arunkumar on ROS Answers with karma: 1 on 2013-11-29 Post score: 1 Original comments Comment by Arunkumar on 2013-11-29: Link to files as below: "https://www.dropbox.com/sh/iezxmpehxs11qbl/4kVXtOGaU7" Answer: There is no way you can do that without actual/simulated data. But, since you already have the laser data, the odometry data should be from the same 'run' as the laser data. That is, both should be in phase (or in time). So without the odometry data I don't think you can use amcl. If you have odometry data on a topic, say /odom, then the rest is simple: write a node that reads the /odom topic and publishes whatever it reads as a transform from odom to base_link. Look at this: http://wiki.ros.org/navigation/Tutorials/RobotSetup/Odom The only difference is you don't have to compute x, y and theta. They are already available in the odom message. Originally posted by McMurdo with karma: 1247 on 2014-07-07 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 16303, "tags": "navigation, odometry, mapping, base-link, amcl" }
Is it feasible to grow edible crops for lawns?
Question: Is it possible to grow some type of plant that functions well as a lawn (e.g. is somewhat grass-like), but that is humanly edible? Would anything that currently exists fit this role? For example possibly spinach or lettuce might work somewhat, but probably wouldn't be ideal. If not, could we bioengineer something to fit that role? I am thinking if we can, this could solve world hunger! Answer: There are lots of things that you can eat that you could grow as a lawn, and many are probably growing in your lawn now. My yard is about half violets and they are edible. Purslane is a delicious succulent lawn weed. Grab some, rinse it off and eat it. It is crunchy! I am telling you from experience you can eat a lot of it and you will not get sick. Purslane image from here. http://www.livescience.com/15322-healthiest-backyard-weeds.html This backyard weeds link says lambs quarters are edible and I did not know that. They are also volunteers but I did not know they were good for anything and so I was pulling them up. We are surrounded by things we could eat and which our ancestors ate all the time but which we do not eat. If you are in school, Johnathan, and this sort of thing interests you it would be a stellar project. You have all summer to grow weeds in pots and then at the science fair you could have people taste them.
{ "domain": "biology.stackexchange", "id": 7178, "tags": "genetics, plant-physiology, food, protein-engineering" }
Poor man's JIT using nested lambdas
Question: While answering this code review question, I came up with a way to convert an equation given at runtime to a std::function<double(double)> that would evaluate that equation at a given point. Assuming the equation was already split into tokens and converted to postfix notation, this is done by evaluating those tokens like you would do in a normal implementation of a RPN calculator by maintaining a stack of intermediate results, but instead of just storing the values, the stack stores lambda expressions that can calculate those values. At the end of the evaluation, the stack should thus contain a single lambda expression that implements the desired equation. Here is a simplified version of the parser than only supports a few operations: #include <cmath> #include <functional> #include <iostream> #include <numbers> #include <stack> #include <string> #include <string_view> std::function<double(double)> build_function(const std::vector<std::string_view> &rpn_tokens) { std::stack<std::function<double(double)>> subexpressions; for (const auto &token: rpn_tokens) { if (token.empty()) throw std::runtime_error("empty token"); if (token == "x") { // Variable subexpressions.push([](double x){ return x; }); } else if (isdigit(token[0])) { // Literal number double value = std::stof(token.data()); subexpressions.push([=](double /* ignored */){ return value; }); } else if (token == "sin") { // Example unary operator if (subexpressions.size() < 1) { throw std::runtime_error("invalid expression"); } auto operand = subexpressions.top(); subexpressions.pop(); subexpressions.push([=](double x){ return std::sin(operand(x)); }); } else if (token == "+") { // Example binary operator if (subexpressions.size() < 2) { throw std::runtime_error("invalid expression"); } auto right_operand = subexpressions.top(); subexpressions.pop(); auto left_operand = subexpressions.top(); subexpressions.pop(); subexpressions.push([=](double x){ return left_operand(x) + right_operand(x); }); } else { 
throw std::runtime_error("invalid token"); } } if (subexpressions.size() != 1) { throw std::runtime_error("invalid expression"); } return subexpressions.top(); } int main() { auto function = build_function({"1", "x", "2", "+", "sin", "+"}); // 1 + sin(2 + x) for (double x = 0; x < 2 * std::numbers::pi; x += 0.1) { std::cout << x << ' ' << function(x) << '\n'; } } I call this a poor man's JIT because while it looks like we get a function object that is seemingly created at runtime, it basically is a bunch of precompiled functions (one for each lambda body in the above code) strung together by the lambda captures. So there is quite a lot more overhead when calling the above function() than if one would write the following: auto function = [](double x){ return 1 + sin(2 + x); } But it should still be much faster than parsing the bunch of tokens that make up the equation each time you want to evaluate it. Some questions: Is there a better way to do this? Is this technique already implemented in some library? The argument to resulting function has to be passed down to most of the lambdas (the exception being the one returning literal numbers). Is there a better way to do this? How should one handle building a function that takes multiple arguments? Can we make build_function a template somehow that can return a std::function with a variable number of arguments? Performance measurements on an AMD Ryzen 3900X, code compiled with GCC 10.2.1 with -O2, after a warm-up of 60 million invocations, averaged over another 60 million invocations of function(): A simple lambda: 10 ns per evaluation The above code: 21 ns per evaluation Deduplicator's original answer: 77 ns per evaluation Adding stack.reserve(): 39 ns per evaluation Making stack static: 26 ns per evaluation Deduplicator's version using separate bytecode and data stacks: 19 ns per evaluation Hardcoding using a fixed small stack: 17 ns per evaluation Answer: Is there a better way to do this? 
Well, you have dynamic dispatch and yet another scattered island of memory for every single token. That is massively inefficient. A simple way to more efficiency is writing your own mini-VM for executing expressions. You will still only have to parse once, but now all the memory is in a single compact chunk which will be linearly used, and the compiler can see all the code. Switch-statements with a single compact range of valid values are simplicity itself, and you avoid the argument shuffling, register saving, and all the other overhead of dynamically calling arbitrary functions. The next step for efficiency would be compiling to native code instead. Also, putting all the arguments in an array and then using an instruction with operand or consecutive simple instructions to get them should be a simple modification. I also added extensive X-Macros to reduce repetition and avoid defining all applicable transformations all over the place. Adapted example live on coliru: #include <iostream> #include <numbers> #include <charconv> #include <cmath> #include <exception> #include <memory> #include <span> #include <string_view> #include <string> #include <vector> #define XX() \ /* token, tag, args, results, code */ \ X("x", x, 0, 1, *stack = x) \ X("sin", sin, 1, 1, *stack = std::sin(*stack)) \ X("+", plus, 2, 1, *stack += stack[-1]) enum class instruction : char { end, push, #define X(a, b, c, d, e) b, XX() #undef X }; auto build_function(std::span<const std::string_view> rpn_tokens) { std::vector<instruction> code; std::vector<double> data; double temp; unsigned count = 0, max_count = 0; auto process = [&](instruction id, unsigned args, unsigned results) { if (count < args) throw std::runtime_error("invalid expression: underflow"); code.push_back(id); count += results - args; max_count = std::max(max_count, count); }; for (auto token : rpn_tokens) { #define X(a, b, c, d, e) \ if (token == a) \ process(instruction::b, c, d); \ else XX() #undef X // if (auto [p, ec] = 
std::from_chars(token.begin(), token.end(), temp); !ec && p = token.end()) { // process(instruction::push, 0, 1); // data.push_back(temp); // } else // throw std::runtime_error("invalid token"); try { temp = std::stod(std::string(token)); data.push_back(temp); process(instruction::push, 0, 1); } catch(...) { std::throw_with_nested(std::runtime_error("invalid token")); } } process(instruction::end, 1, 0); if (count) throw std::runtime_error("invalid expression: overflow"); code.shrink_to_fit(); data.shrink_to_fit(); return [code, data, max_count](double x) { constexpr auto small_stack = 128; auto core = [](auto code, auto data, auto stack, auto x){ for (;;) { switch(*code++) { case instruction::end: return *stack; case instruction::push: *--stack = *data++; break; #define X(a, b, c, d, e) \ case instruction::b: \ stack += c - d; \ e; \ break; XX() #undef X default: throw std::runtime_error("unexpected"); } } }; auto small = [&]{ double stack[small_stack]; return core(code.cbegin(), data.cbegin(), std::end(stack), x); }; auto big = [&]{ auto stack = std::make_unique<double[]>(max_count); return core(code.cbegin(), data.cbegin(), &stack[0] + max_count, x); }; return max_count <= small_stack ? small() : big(); }; } #undef XX int main() { std::string_view rpn[] = {"1", "x", "2", "+", "sin", "+"}; // 1 + sin(2 + x) auto function = build_function(rpn); for (double x = 0; x < 2 * std::numbers::pi; x += 0.1) { std::cout << x << ' ' << function(x) << '\n'; } }
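The closure-composition technique in the question is not specific to C++; for comparison, here is a minimal sketch of the same "poor man's JIT" in Python, supporting the same four token kinds. (The f=f default arguments capture the operands by value, playing the role of the [=] captures.)

```python
import math

def build_function(rpn_tokens):
    stack = []
    for token in rpn_tokens:
        if token == "x":
            stack.append(lambda x: x)
        elif token == "sin":
            operand = stack.pop()
            stack.append(lambda x, f=operand: math.sin(f(x)))
        elif token == "+":
            right, left = stack.pop(), stack.pop()
            stack.append(lambda x, f=left, g=right: f(x) + g(x))
        else:
            value = float(token)                # literal number (ValueError if invalid)
            stack.append(lambda x, v=value: v)
    (fn,) = stack   # raises ValueError if the expression left != 1 value on the stack
    return fn

fn = build_function(["1", "x", "2", "+", "sin", "+"])   # 1 + sin(2 + x)
print(fn(0.0))   # 1 + sin(2.0)
```

The per-call overhead here is of course far worse than in C++ — every node is a Python function call — but the structure of the idea is identical: the RPN pass runs once, and evaluation walks a tree of closures.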
{ "domain": "codereview.stackexchange", "id": 41055, "tags": "c++, performance, parsing, calculator, lambda" }
Regular Expression Problem
Question: Question is $$\left\{ w | w \text{ contains an even number of } 0\text{s or exactly two }1\text{s} \right\}$$ I need to confirm my answer but I don't know where to ask someone. My answer is 1*(01*0)*+11 but I don't know if it's correct or not. Answer: Your expression is not quite right for the 'even number of zeros' part: 1*(01*0)* only allows 1s strictly between a matched pair of 0s, so it rejects strings such as 00100 even though they contain an even number of 0s. The standard pattern allows 1s after each 0 as well: 1*(01*01*)*. I.e: 1*(01*01*)* + 11 [where 'exactly two ones' means precisely that] or 1*(01*01*)* + 0*10*10* [where 'exactly two ones' implies any number of zeros]
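Because it is easy to mis-place where 1s may occur, a brute-force check is a worthwhile sanity test. The sketch below (Python; re.fullmatch supplies the implicit anchoring of a formal regular expression, and | plays the role of +) compares the standard even-zeros pattern 1*(01*01*)* unioned with 0*10*10* against the language definition for every string up to length 10:

```python
import re
from itertools import product

even_zeros = re.compile(r"1*(01*01*)*")  # even number of 0s
two_ones   = re.compile(r"0*10*10*")     # exactly two 1s, any number of 0s

def in_language(w):
    return w.count("0") % 2 == 0 or w.count("1") == 2

def matches(w):
    return bool(even_zeros.fullmatch(w)) or bool(two_ones.fullmatch(w))

for n in range(11):
    for bits in product("01", repeat=n):
        w = "".join(bits)
        assert matches(w) == in_language(w), w
print("regex agrees with the definition on all strings up to length 10")
```

Running the same check with 1*(01*0)*1* in place of 1*(01*01*)* fails immediately on 00100.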
{ "domain": "cs.stackexchange", "id": 8463, "tags": "automata, regular-languages, regular-expressions" }
turtlebot2 / kobuki base
Question: Hello, I want to use the turtlebot2 package ... with another robot. I'm experienced in using ROS (nodes, topics ...), in compiling ... and in using my robots with a previous version of ROS, and I now need to use Indigo with them, so I want to use the turtlebot2 package and modify it. My problem is finding the way to do it, especially finding where all the exchanges between hardware and nodes are defined. So I need to modify the turtlebot2 package to be able to use my robot. So my questions are simple: where can I find information about the types of exchanges between nodes and hardware (and in which node?); I have no IMU on my robot (but I have a Kinect). How can I find a solution to use the turtlebot2 package without it? Thanks a lot. Originally posted by goupil35000 on ROS Answers with karma: 113 on 2015-12-22 Post score: 0 Original comments Comment by mkhansen on 2015-12-22: Do you have an existing turtlebot to run as a reference? If so, you could run the turtlebot and do an rqt_graph to see all the running nodes & nodelets, then use that as your reference guide for how it works. Answer: The Turtlebot navigation package is based on the overall ROS navigation package, and I think that is what you really want. Read this page: http://wiki.ros.org/navigation and try tutorial #18. Make sure you have components for your sensors and actuators, you may have to write those yourself. Originally posted by mkhansen with karma: 156 on 2015-12-23 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 23284, "tags": "imu, kobuki" }
Axions and complex fields
Question: I have been reading about axions and pseudo-Nambu-Goldstone Bosons (pNGBs) and I was wondering the following: A complex field has two degrees of freedom, meaning that one can write the following non-linear realisation (similar to the coset construction with chiral symmetry breaking): $$\Phi= \rho e^{i\theta}$$ where $\rho,\theta$ are real fields. Moreover, $\theta$ is an axion, as it is shift symmetric: $\Phi(\theta)\sim\Phi(\theta+2\pi n)$ for $n\in \mathbb{Z}$. The kinetic term of that field is $$|\partial \Phi|^2=(\partial\rho)^2+\rho^2(\partial\theta)^2.$$ Now assume that $\rho$ has a large v.e.v. We can assume $\rho\simeq v=\text{const}$, and define a canonically normalised field $\bar\theta=\theta/v$. Should the v.e.v. be large enough, $\rho$ decouples and we can focus on an effective approach and consider only the canonically normalised field $\bar\theta$. If $\Phi$ is a pNGB and breaks the $U(1)$ symmetry through a mass term (e.g. a soft mass term in SUSY theories) $$V\supset m^2\Phi^2+\text{c.c}\simeq 2m^2v^2 \cos(\bar{\theta}/v),$$ what would the mass of $\bar\theta$ be in that case? Naively I would define the mass as $$m^2_{\bar\theta}=\frac{1}{2}\left.\frac{\partial^2 V}{\partial \theta^2}\right|_{\theta~=~0}=-m^2$$ In this case the mass is tachyonic. I would expect that assuming $m^2>0$, even in non-linear realisations, everything should be stable, i.e. $m^2_{\bar{\theta}}>0$. Am I missing something? Answer: Just expand around one of the correct minima, to which $\theta=0$ does not belong once the perturbation $\Phi^2$ is added. Before adding the perturbation any value of $\theta$ would have provided a good minimum (because of the shift symmetry), but with the perturbation turned on only a discrete subgroup of minima survives, namely $\theta=(2n+1)\pi$. The perturbation breaks more than the $U(1)$: it also breaks the discrete shift symmetry you started with, since now only odd-integer shifts are respected.
A better parametrization for $\Phi$ is actually $\Phi=-\rho e^{i\phi}$.
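To make the answer's point explicit, here is the expansion around one of the surviving minima (my addition, using the question's conventions and normalization): with $\bar\theta = \pi v + \delta$, $$V = 2m^2v^2\cos\!\left(\pi+\frac{\delta}{v}\right) = -2m^2v^2\cos\frac{\delta}{v} \simeq -2m^2v^2 + m^2\delta^2 + \mathcal{O}(\delta^4),$$ so that $$m^2_{\delta}=\frac{1}{2}\left.\frac{\partial^2 V}{\partial \delta^2}\right|_{\delta=0}=m^2>0,$$ and the fluctuation around the true minimum is not tachyonic.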
{ "domain": "physics.stackexchange", "id": 34909, "tags": "quantum-field-theory, symmetry-breaking, axion" }
Type of filter, given difference equation?
Question: Given a difference equation, how do we tell if it is an IIR filter or FIR filter? For example, y(n) = x(n-3)+y(n-1). Is it FIR or IIR? Can you please give me a way to figure this out? Thanks! :) Answer: Non-recursive filters are synonymous with FIR filters. If you take the z-transform of a non-recursive difference equation, you will find that its poles can lie only at zero. Hence, if the equation is non-recursive, it surely represents an FIR filter and its z-transform has all poles at zero. For example, y(n) = x(n) - 2x(n-1) is surely an FIR filter. Recursive equations, however, can represent either IIR or FIR filters. Case 1: when can a recursive equation represent an FIR filter? Introduce a unit delay in the equation above, y(n-1) = x(n-1) - 2x(n-2), and subtract it from the original to get y(n) = y(n-1) + x(n) - 3x(n-1) + 2x(n-2). This equation is recursive, and if you take its z-transform you will see that it has a pole at z = 1. But among its zeros there is one at exactly the same point, which cancels the effect of that pole, so the system still gives a finite response: H(z) = (1 - 3z^-1 + 2z^-2)/(1 - z^-1) = (1 - z^-1)(1 - 2z^-1)/(1 - z^-1) = 1 - 2z^-1. Case 2: when does a recursive equation represent an IIR filter? This is the case that fits the equation in your example. Take the z-transform and you will see a pole at a position other than 0, causing the system to give an infinite response.
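The distinction is easy to illustrate numerically (my addition, not part of the original answer): simulating each difference equation on a unit impulse with zero initial conditions shows that y(n) = x(n-3) + y(n-1) has an impulse response that never dies out, while the recursive form y(n) = y(n-1) + x(n) - 3x(n-1) + 2x(n-2) (obtained by substituting y(n-1) = x(n-1) - 2x(n-2) into y(n) = x(n) - 2x(n-1)) reproduces the finite response of the non-recursive filter exactly:

```python
def simulate(n_samples, update):
    """Run a causal difference equation on a unit impulse, zero initial state."""
    x = [1.0] + [0.0] * (n_samples - 1)
    y = []
    get = lambda seq, i: seq[i] if i >= 0 else 0.0  # zero initial conditions
    for n in range(n_samples):
        y.append(update(x, y, n, get))
    return y

# y(n) = x(n-3) + y(n-1): recursive, pole away from z = 0 -> IIR
h_iir = simulate(20, lambda x, y, n, g: g(x, n - 3) + g(y, n - 1))

# y(n) = x(n) - 2x(n-1): non-recursive -> FIR
h_fir = simulate(20, lambda x, y, n, g: g(x, n) - 2 * g(x, n - 1))

# y(n) = y(n-1) + x(n) - 3x(n-1) + 2x(n-2): recursive, but its pole at z = 1
# is cancelled by a zero, so it behaves as the same FIR filter
h_rec = simulate(20, lambda x, y, n, g: g(y, n - 1) + g(x, n) - 3 * g(x, n - 1) + 2 * g(x, n - 2))

print(h_iir[:8])              # stays at 1 from n = 3 onward: infinite response
print(h_fir[:5], h_rec[:5])   # identical finite responses
```

The truncated simulation can only show the first samples, but the closed-form argument above guarantees the behaviour continues.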
{ "domain": "dsp.stackexchange", "id": 1459, "tags": "filters, discrete-signals, signal-analysis" }
Is chromate a suitable indicator for the titration of Ag⁺ with Cl⁻?
Question: To titrate $\ce{Cl-}$ with $\ce{Ag+}$ we use chromate $\ce{CrO4^2-}$ as an indicator. The titration reaction is: $$\ce{Ag+ + Cl- <=> AgCl}\tag{R1}$$ $$K_1 = \frac{1}{K_\mathrm{sp}(\ce{AgCl})} = \frac{1}{1.8×10^{-10}} = 5.56×10^9\tag{1}$$ The theory says that after all $\ce{Ag+}$ are reacted with $\ce{Cl-}$ the end point of titration is detected when excess $\ce{Ag+}$ reacts with the indicator chromate to form silver chromate: $$\ce{2 Ag+ + CrO4^2- <=> Ag2CrO4}\tag{R2}$$ $$K_2 = \frac{1}{K_\mathrm{sp}(\ce{Ag2CrO4})} = \frac{1}{1.1×10^{-12}} = 9.1×10^{11}\tag{2}$$ However, as you see, $K_1 < 100K_2,$ so when both $\ce{Cl-}$ and $\ce{CrO4^2-}$ are present, $\ce{Ag+}$ will react with $\ce{CrO4^2-}$ and not with $\ce{Cl-}$. But our teacher and everywhere on Google they say $\ce{AgCl}$ precipitates before $\ce{AgCrO4}$. And that should be true since this method of titration (Mohr's method) has been used long ago. But, how can that be true? I don't understand why. Where have I mistaken? Answer: You got the solubility part reversed. The solubility of $\ce{AgCl}$ is lower than the solubility of $\ce{Ag2CrO4}:$ $$s(\ce{AgCl}) = \sqrt{K_\mathrm{sp}(\ce{AgCl})} = \sqrt{\pu{1.8E-10 mol2 L-2}} = \pu{1.34E-5 mol L-1}$$ $$s(\ce{Ag2CrO4}) = \sqrt[3]{\frac{K_\mathrm{sp}(\ce{Ag2CrO4})}{4}} = \sqrt[3]{\frac{\pu{1.1E-12 mol3 L-3}}{4}} = \pu{6.50E-5 mol L-1}$$ Therefore, if the $\ce{AgNO3}$ solution is gradually added to the solution containing the both $\ce{Cl-}$ and $\ce{CrO4^2-}$ ions, then initially the formation of a sparingly soluble $\ce{AgCl}$ salt occurs. After the $\ce{Cl-}$ ions are almost completely isolated in the form of $\ce{AgCl},$ the $\ce{Ag2CrO4}$ precipitation starts to occur, signifying the equivalence point is reached. The same reasoning can also be applied to the titration of even less soluble silver bromide $\ce{AgBr}$ with $K_\mathrm{sp}(\ce{AgBr}) = \pu{5.3E-13}$ (try it yourself). 
Note, however, that while using Mohr's method it's imperative to titrate halide salt solutions with $\ce{AgNO3}$ and not vice versa. Otherwise the precipitation condition $$c(\ce{Ag+}) · c(\ce{Cl-}) > K_\mathrm{sp}(\ce{AgCl})$$ will be overridden by $$c(\ce{Ag+})^2 · c(\ce{CrO4^2-}) > K_\mathrm{sp}(\ce{Ag2CrO4})$$ due to the high concentration of silver ions in solution, favoring silver chromate precipitation (note the squared term $c(\ce{Ag+})^2$) and thus shifting the equivalence point.
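The solubility comparison is easy to verify numerically; a short script (my addition) using the solubility products quoted above, including the $\ce{AgBr}$ case the answer suggests as an exercise:

```python
# Molar solubilities from solubility products, assuming ideal solutions
# and no common-ion or complexation effects.
Ksp_AgCl = 1.8e-10      # AgCl    <=> Ag+   + Cl-     => s = sqrt(Ksp)
Ksp_Ag2CrO4 = 1.1e-12   # Ag2CrO4 <=> 2 Ag+ + CrO4^2- => s = (Ksp/4)^(1/3)
Ksp_AgBr = 5.3e-13      # AgBr    <=> Ag+   + Br-     => s = sqrt(Ksp)

s_AgCl = Ksp_AgCl ** 0.5
s_Ag2CrO4 = (Ksp_Ag2CrO4 / 4) ** (1 / 3)
s_AgBr = Ksp_AgBr ** 0.5

print(f"s(AgCl)    = {s_AgCl:.2e} mol/L")     # ~1.34e-05
print(f"s(Ag2CrO4) = {s_Ag2CrO4:.2e} mol/L")  # ~6.50e-05
print(f"s(AgBr)    = {s_AgBr:.2e} mol/L")     # ~7.28e-07
```

Since s(AgBr) < s(AgCl) < s(Ag2CrO4), the same ordering argument applies to bromide titrations.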
{ "domain": "chemistry.stackexchange", "id": 12930, "tags": "equilibrium, aqueous-solution, analytical-chemistry, titration, precipitation" }
DFT (Discrete Fourier Transform) Algorithm in Swift
Question: I am looking to replicate in Swift what the FFT function does in Matlab. Essentially, it takes an arbitrary length signal (not necessarily a multiple of \$2^n\$) and gives the real and imaginary DFT coefficients. Since the FFT described in Accelerate can only handle sample sizes that are multiples of \$2^n\$, I wrote a brute force algorithm in Swift that produces exactly the same results as the Matlab FFT function for arbitrary sample size. The problem: When my sample size is > 15,000 samples (say), this algorithm takes about 20 s to complete. Could this be sped up? import Foundation public func fft(x: [Double]) -> ([Double],[Double]) { let N = x.count var Xre: [Double] = Array(repeating:0, count:N) var Xim: [Double] = Array(repeating:0, count:N) for k in 0..<N { Xre[k] = 0 Xim[k] = 0 for n in 0..<N { let q = (Double(n)*Double(k)*2.0*M_PI)/Double(N) Xre[k] += x[n]*cos(q) // Real part of X[k] Xim[k] -= x[n]*sin(q) // Imag part of X[k] } } return (Xre, Xim) } // Call FFT let x: [Double] = [1, 2, 3, 4, 5, 6] // works rapidly // let x = Array(stride(from: 0, through: 15000, by: (1.0))) // Will choke it let (fr, fi) = fft (x: x) print("Real:", fr) print(" ") print("Imag:", fi) // Call FFT Answer: The fastest algorithm would be a native implementation of the "chirp z-transform" as described here with example code in Python.
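The chirp z-transform (Bluestein's algorithm) computes an exact length-N DFT for any N using only power-of-two FFTs, which is precisely what Accelerate provides. A minimal reference sketch in Python/NumPy (my illustration of the technique, to be ported to Swift/Accelerate; this is not the linked article's code):

```python
import numpy as np

def bluestein_dft(x):
    """Length-N DFT for arbitrary N via Bluestein's chirp z-transform.

    Uses the identity nk = (n^2 + k^2 - (k-n)^2) / 2 to turn the DFT into
    a convolution, which is evaluated with power-of-two FFTs.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    chirp = np.exp(-1j * np.pi * n * n / N)        # w[n] = e^{-i pi n^2 / N}
    a = x * chirp
    M = 1 << (2 * N - 1).bit_length()               # power of two >= 2N - 1
    b = np.zeros(M, dtype=complex)
    b[:N] = np.conj(chirp)                          # b[n] = e^{+i pi n^2 / N}
    if N > 1:
        b[-(N - 1):] = np.conj(chirp[1:])[::-1]     # wrap negative indices
    conv = np.fft.ifft(np.fft.fft(a, M) * np.fft.fft(b))[:N]
    return chirp * conv

# Agrees with a direct FFT, including awkward (non power-of-two) lengths:
x = np.random.default_rng(0).standard_normal(15000)
assert np.allclose(bluestein_dft(x), np.fft.fft(x))
```

The two forward FFTs, the inverse FFT, and the chirp multiplications are all fixed-size vectorized operations, so a Swift port can lean entirely on vDSP for the heavy lifting.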
{ "domain": "codereview.stackexchange", "id": 24125, "tags": "performance, algorithm, swift, signal-processing" }
Can an optical phased array be used to create free-floating holograms?
Question: Frustratingly, most sources I've found about optical phased arrays only state that they can be used for "holograms" but do not explain what that means. Can optical phased arrays be used for "Star Trek" style, free-floating, volumetric holograms (at least in principle if not in practice), or are they only capable of displaying a hologram on a screen? Answer: Yes, an optical phased array can be used to display a 3D image -- but there are limitations (of course). A "phase-only spatial light modulator" is an example of an optical phased array. The limitations are basically due to the pixel count and the element size. For example: if the phase-shifting elements are 10 microns x 10 microns and the array is, say, 10 mm x 10 mm, then there are 1000 x 1000 elements. The maximum diffraction angle of visible light at the array is about 4 degrees due to the element spacing, so the maximum field of view would be about 4 degrees. You would need to be about a half meter away to see any part of the image with both eyes. If we ever come up with a spatial light modulator whose elements are sub-micron size and individually addressable - and with a pixel count of, say, 100,000 x 100,000 - then we will have the kind of 3D display imagined in SciFi. However, it will not be a "Princess Leia" type 3D display, whose image floats in mid air with nothing between or beyond your eye and the image. You will only see image points that are on a line between your eye and the array elements.
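The ~4° figure can be recovered from the grating equation d·sin(θ) = λ for a 10 μm element pitch (a quick check I've added; the choice of λ = 0.7 μm at the red end of the visible range is my assumption):

```python
import math

d = 10e-6      # element pitch from the answer: 10 microns
lam = 0.7e-6   # assumed wavelength: red end of the visible spectrum

# First grating order d*sin(theta) = lambda sets the usable deflection angle.
theta_deg = math.degrees(math.asin(lam / d))
print(f"max diffraction angle ~ {theta_deg:.1f} degrees")  # ~4.0

# At what distance do both eyes (separated ~65 mm) fit inside that cone?
eye_separation = 0.065
distance = eye_separation / (2 * math.tan(math.radians(theta_deg)))
print(f"viewing distance ~ {distance:.2f} m")  # ~0.46, i.e. "about a half meter"
```

Shorter (bluer) wavelengths give proportionally smaller angles, which is why the field of view is quoted as an approximate bound.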
{ "domain": "physics.stackexchange", "id": 58321, "tags": "optics, hologram" }
What's the derivation of an orbiting object's energy in Kerr Spacetime
Question: If in Schwarzschild spacetime, the energy per unit mass ε of an object is given by the following equation in timelike geodesics (derived from the squared magnitude of timelike tangent vectors at θ=π/2): $$\varepsilon^2=\left(\frac{dr}{d\tau}\right)^2+c^2-c^2\frac{r_s}{r}+\frac{\ell^2}{r^2}-\frac{r_s\ell^2}{r^3}$$ then what would the corresponding derivation of the following Kerr orbit equation be? $$\frac{1}{2}(\varepsilon^2-1)=\left(\frac{dr}{d\tau}\right)^2+\frac{\ell^2-a^2(\varepsilon^2-1)}{2r^2}-\frac{M(\ell-ae)^2}{r^3}-\frac{M}{r}$$ where ε is the energy per unit mass and the script $\ell$ is the object's angular momentum per unit mass. It's worth noting that the Schwarzschild equation is written in natural units whereas the Kerr equation is written in Planck units, so the values of -1 could represent -c^2. The second equation comes from a 2015 paper from Stockholm University: The Angular Momentum of Kerr Black Holes (p: 24, eqs: 4.45 & 4.46) http://3dhouse.se/ingemar/exjobb/The%20Angular%20Momentum%20of%20Kerr%20Black%20Holes.pdf Any derivations of ε in Kerr spacetime would be helpful. Answer: Kerr has two Killing vector fields: $$T^\mu = (-1,0,0,0)$$ $$L^\mu = (0,0,0,1)$$ related to the time-translation and rotational symmetries. Consequently, we have two constants of motion along geodesic orbits: if $$u^\mu = (\frac{dt}{d\tau},\frac{dr}{d\tau},\frac{d\theta}{d\tau},\frac{d\phi}{d\tau}),$$ they are the (specific) energy $$\epsilon = T_\mu u^\mu = (1+\frac{2M r(r^2+a^2)}{(r^2-2Mr+a^2)(r^2+a^2\cos^2\theta )})\frac{dt}{d\tau}+ \frac{2aM r}{(r^2-2Mr+a^2)(r^2+a^2\cos^2\theta )}\frac{d\phi}{d\tau} $$ and the (specific) axial angular momentum $$ \ell = L_\mu u^\mu=- \frac{2aM r}{(r^2-2Mr+a^2)(r^2+a^2\cos^2\theta )}\frac{dt}{d\tau}- (\frac{1}{\sin^2\theta(r^2+a^2\cos^2\theta )}-\frac{a^2}{(r^2-2Mr+a^2)(r^2+a^2\cos^2\theta )})\frac{d\phi}{d\tau}. $$ We also know that the norm of the 4-velocity $u^\mu$ $$u^\mu g_{\mu\nu} u^\nu =-1$$ is constant along the orbit.
Taking these three equations and specializing to the equatorial $\theta=\pi/2$ case, we can solve for $\frac{dr}{d\tau}$ to find $$\left( \frac{dr}{d\tau}\right)^2 = \frac{(\epsilon(r^2+a^2)-a\ell)^2 - (r^2-2Mr+a^2)(r^2+(a\epsilon-\ell)^2)}{r^4} $$
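As a consistency check (my addition, not part of the original answer): setting $a=0$ in this result recovers the Schwarzschild relation quoted in the question, with $c=1$ and $r_s=2M$: $$\left(\frac{dr}{d\tau}\right)^2\bigg|_{a=0} = \frac{\epsilon^2 r^4-(r^2-2Mr)(r^2+\ell^2)}{r^4} = \epsilon^2-1+\frac{2M}{r}-\frac{\ell^2}{r^2}+\frac{2M\ell^2}{r^3},$$ i.e. $$\epsilon^2=\left(\frac{dr}{d\tau}\right)^2+1-\frac{2M}{r}+\frac{\ell^2}{r^2}-\frac{2M\ell^2}{r^3}.$$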
{ "domain": "physics.stackexchange", "id": 91805, "tags": "general-relativity, black-holes, astrophysics, orbital-motion, kerr-metric" }
Fixed Frame [/laser] does not exist
Question: my launch file <launch> <node pkg="hokuyo_node" type="hokuyo_node" name="hokuyo_node"> <remap from="/scan" to="/base_scan" /> </node> </launch> Why rviz says Fixed Frame [/laser] does not exist? Originally posted by sam on ROS Answers with karma: 2570 on 2011-07-19 Post score: 1 Answer: Nobody is publishing tf information from the /laser frame. I guess, you input the fixed frame manually in the box and you also see the laser, so that should be fine. Problems will arise if you have data in other frames. Originally posted by dornhege with karma: 31395 on 2011-07-19 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by dornhege on 2011-07-20: It's only if you don't have tf information, you can hardcode something. The better way is to publish tf information as you did! Comment by sam on 2011-07-20: Uh,I know I can choose from Fixed Frame combo,but why I need to input some text(different than text which can be choose) in Fixed Frame box? Is it useful for something? Comment by dornhege on 2011-07-20: Usually the Fixed Frame has a combo containing the frames send by tf for you to choose. If nobody publishes you can still just input anything (e.g. /laser) Comment by sam on 2011-07-19: Thank you~I don't know what means of 'input the fixed frame manually in the box', but I found that when I add tf node with /laser transform,it works! Comment by ZoltanS on 2013-04-11: The option target frame is missing in groovy's rviz. What can be done instead? Comment by dornhege on 2013-04-11: This is a different question. Please open a new one.
{ "domain": "robotics.stackexchange", "id": 6195, "tags": "ros, rviz, laser, frame, hokuyo-node" }
C=O bond length comparison among cyclohex‐2‐en‐1‐one and cyclohex‐3‐en‐1‐one
Question: I can see that in the first case a resonance would occur, and in the second case, there would be no resonance effect. I have read the fact that resonance reduces bond length due to partial double bond character. According to that, shouldn't the $\ce{C=O}$ bond length in 1 be less than that in 2? The answer in my book is that the $\ce{C=O}$ bond in 1 is longer (reason is simply given as: "Resonance"). Could someone help me understand what's going on here? Answer: In case 1 the $\ce{C=O}$ bond has only partial double bond character (due to resonance with the conjugated $\ce{C=C}$), while in case 2 it retains full double bond character. So, relative to case 2, the $\ce{C=O}$ bond in case 1 is longer.
{ "domain": "chemistry.stackexchange", "id": 12730, "tags": "organic-chemistry, bond" }
Linear or quadratic air resistance with football?
Question: I'm modelling a football shot as a projectile - is the air resistance force proportional to the speed or speed squared? Answer: Neither. The air resistance (or drag) of a football would be proportional to the speed squared only if the drag coefficient were constant. It would be linear only if the coefficient were proportional to the inverse of the speed. Neither of these trends is actually the case. The viscous characteristics of the flow depend strongly on the Reynolds number (which is a function of speed), and cannot be characterized by such simple functions. If you are referring to an American football, then the variation of the coefficient would be similar to the top curve in the following figure from Hoerner's Fluid Dynamic Drag If you are referring to what the Americans call a "soccer ball," the coefficient would vary as shown in the figure from NASA (https://www.grc.nasa.gov/WWW/K-12/airplane/socdrag.html) Note that I'm assuming the football is going straight through the air. If it is inclined and creating a lift force, the drag coefficient would be varying in even more complicated ways.
{ "domain": "physics.stackexchange", "id": 80425, "tags": "newtonian-mechanics, projectile, estimation, drag, models" }
Pokémon style battle game
Question: I haven't been learning Python for too long and I was just wondering how this Pokémon style battle looks? It's based off of this: Turn Based Pokémon Style Game. It's my first proper time using classes so I'd love advice or critique on the usage. Also, where I make the CPU more likely to use heal when under 35 health, there must surely be a better way to do that. # Simple battle simulator in the style of Pokemon. # author: Prendy import random moves = {"tackle": range(18, 26), "thundershock": range(10, 36), "heal": range(10, 20)} class Character: """ Define our general Character which we base our player and enemy off """ def __init__(self, health): self.health = health def attack(self, other): raise NotImplementedError class Player(Character): """ The player, they start with 100 health and have the choice of three moves """ def __init__(self, health=100): super().__init__(health) def attack(self, other): while True: choice = str.lower(input("\nWhat move would you like to make? (Tackle, Thundershock, or Heal)")) if choice == "heal": self.health += int(random.choice(moves[choice])) print("\nYour health is now {0.health}.".format(self)) break if choice == "tackle" or choice == "thundershock": damage = int(random.choice(moves[choice])) other.health -= damage print("\nYou attack with {0}, dealing {1} damage.".format(choice, damage)) break else: print("Not a valid move, try again!") class Enemy(Character): """ The enemy, also starts with 100 health and chooses moves at random """ def __init__(self, health=100): super().__init__(health) def attack(self, other): if self.health <= 35: # increasing probability of heal when under 35 health, bit janky moves_1 = ["tackle", "thundershock", "heal", "heal", "heal", "heal", "heal"] cpu_choice = random.choice(moves_1) else: cpu_choice = random.choice(list(moves)) if cpu_choice == "tackle" or cpu_choice == "thundershock": damage = int(random.choice(moves[cpu_choice])) other.health -= damage print("\nThe CPU attacks with {0}, 
dealing {1} damage.".format(cpu_choice, damage)) if cpu_choice == "heal": self.health += int(random.choice(moves[cpu_choice])) print("\nThe CPU uses heal and its health is now {0.health}.".format(self)) def battle(player, enemy): print("An enemy CPU enters...") while player.health > 0 and enemy.health > 0: player.attack(enemy) if enemy.health <= 0: break print("\nThe health of the CPU is now {0.health}.".format(enemy)) enemy.attack(player) if player.health <= 0: break print("\nYour health is now {0.health}.".format(player)) # outcome if player.health > 0: print("You defeated the CPU!") if enemy.health > 0: print("You were defeated by the CPU!") if __name__ == '__main__': battle(Player(), Enemy()) Answer: Magic numbers Right off the bat I see some magic numbers moves = {"tackle": range(18, 26), "thundershock": range(10, 36), "heal": range(10, 20)} What does that mean? Being familiar with Pokemon I'd assume damage, or something, but that won't necessarily be apparent to the user. ABCs You have your base class, Character. It would benefit from being an abstract base class For example, import abc class Character(metaclass=abc.ABCMeta): def __init__(self, starting_health): self.current_health = starting_health @abc.abstractmethod def attack(self, other): raise NotImplementedError I've done a few things here. For one, I've used more clear variable names. The input parameter health is actually the starting_health of the character, while self.health is actually referring to the current_health of the character. Better variable names make code easier to read. The meat of this is the abc stuff. By giving the Character class a metaclass of abc.ABCMeta (don't worry about what a metaclass is) we're saying that it cannot be instantiated directly if it has any abstract methods or properties. 
With this definition, if you then tried to do this char = Character(100) you would get the following error: TypeError: Can't instantiate abstract class Character with abstract methods attack This carries into sub-types as well. It is a way of guaranteeing that all classes you instantiate that should override a method do override it. Moves Your moves should probably be classes. This will make it much, much easier to extend this, and simplify some other behaviors. I'd look at something like this from enum import Enum MoveTypes = Enum('MoveTypes', 'DAMAGING HEALING STATUS') Types = Enum('Types', 'ELECTRIC NORMAL') class Move(metaclass=abc.ABCMeta): @abc.abstractproperty def damage_type(self): return NotImplemented @abc.abstractproperty def move_type(self): return NotImplemented @abc.abstractmethod def health_change(self, modifiers=None): return NotImplemented class Thundershock(Move): _max = 36 _min = 10 @property def damage_type(self): return MoveTypes.DAMAGING @property def move_type(self): return Types.ELECTRIC def health_change(self, modifiers=None): if modifiers is None: return random.randint(self._min, self._max) else: # Do something here if they have some ability that reduces electric damage, or whatever I did a few things here. The first was the Enums. Enums let you group related constants together. For example, now I can do something like if move.move_type is Types.ELECTRIC: # Has lightningrod ability, immune to electric moves return 0 without having a magic number. I then gave the class some properties (i.e. attributes that I can get/set without using parentheses) as well as some methods (attributes that are functions). This gives you a slightly more well organized codebase, and is easy to extend. Just add a new subclass.
Now in your Player class you can do something like def attack(self, opponent): while True: try: move = moves[str.lower(input("stuff"))] except KeyError: print("Not a valid move, try again!") else: if move.move_type is MoveTypes.HEALING: self.health += move.health_change(None) elif move.move_type is MoveTypes.DAMAGING: opponent.health -= move.health_change(None) else: opponent.status = move.status_effect(None) break which to me is much cleaner, and also easier to extend if you add move types. It doesn't rely on the specific strings of the names (can you imagine typing all ~600 moves that exist in Pokemon?) just on what sort of effect they have. Weighted randomness I won't repeat everything here, but Ned Batchelder has a good suggestion for how to handle weighted randomness here Attack order You should randomise this. Good guys don't always go first :) String formatting Instead of doing print("The health of the CPU is now {0.health}.".format(enemy)) just do print("The health of the CPU is now {}.".format(enemy.health)) Improving your base class As per @200_success's answer below, you should expand the functionality of your Character base class. For example, attacking, healing, and taking damage are all shared between characters (mostly). What if you did this? class Character(metaclass=abc.ABCMeta): def __init__(self, starting_health): self.current_health = starting_health def attack(self, other, modifiers): move = self.get_move() if move.move_type is MoveTypes.DAMAGING: other.damage(move.health_change(modifiers)) elif move.move_type is MoveTypes.HEALING: self.heal(move.health_change(modifiers)) elif move.move_type is MoveTypes.STATUS: other.status(move.status_effect(modifiers)) else: raise NotImplementedError @abc.abstractmethod def get_move(self): raise NotImplementedError I've consolidated the act of attacking and abstracted away the only part that changes - how they pick their move.
Now you can do class Player(Character): def get_move(self): while True: try: move = moves[str.lower(input(""))] except KeyError: print("No such move") else: return move class Enemy(Character): def get_move(self): # do some pseudo random stuff return move And you don't have to repeat the attacking logic, and new types of characters only need to override how they pick a move.
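On the weighted-randomness point: since Python 3.6 the standard library covers this directly with random.choices, which removes the need for the duplicated "heal" entries. A sketch (my addition; the threshold and bias mirror the original code):

```python
import random

MOVES = ("tackle", "thundershock", "heal")

def pick_enemy_move(current_health, rng=random):
    """Pick a move, biased toward healing when health is low."""
    # Under 35 health, "heal" gets weight 5 vs 1 each for the attacks,
    # matching the 5-out-of-7 bias of the original moves_1 list.
    heal_weight = 5 if current_health <= 35 else 1
    return rng.choices(MOVES, weights=[1, 1, heal_weight], k=1)[0]

print(pick_enemy_move(10))   # usually "heal"
print(pick_enemy_move(100))  # uniform over the three moves
```

Passing a seeded random.Random instance as rng makes the behaviour reproducible in tests.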
{ "domain": "codereview.stackexchange", "id": 15215, "tags": "python, game, battle-simulation" }
Repeated measures simulator optimisation
Question: I am learning to program in R and, to do that and create something useful in the process, I have decided to rewrite this Java applet for repeated measures simulation and to implement some new functionality. I have succeeded, but the code seems to run very slowly (5K simulations in around 8 seconds as opposed to ~ 0.8 seconds in that applet) so it is likely to be a bad solution. I am looking for improvements to the code, tips on how to speed up the process (surely, it cannot be that much slower than Java), best practices, and overall comments on my solution. In order to compare, run the applet and click Simulate 5000 and then import the code to R and run simRM(5000). It is a lot, but I have tried to make it clear in the comments. > system.time(simRM(5000)) user system elapsed 8.896 0.063 8.843 library(MASS) library(ggplot2) # n = number of observations; m1 = mean of the first group; m2 = mean of the second group; # sd = standard deviation of both groups; rho = correlation coefficient; type = experimental design sml <- function(n = 8, m1 = 10, m2 = 15, sd = 5, rho = 0.5, type = "within") { n <<- n # Make number of observation pairs global type <<- type # Make type of experiment global if (type == "within") { # If paired (within group), then correlation rho cor <- matrix(rho, nrow = 2, ncol = 2) } else if (type == "between") { # If independent (between groups), then correlation 0 rho <- 0 cor <- matrix(rho, nrow = 2, ncol = 2) } diag(cor) <- 1 #sigma <- cor*as.matrix(c(sd,sd))%*%t(as.matrix(c(sd,sd))) # If different standard deviations sigma <- cor*(sd^2) # Compute covariance matrix.
res <- as.data.frame(mvrnorm(n, c(m1, m2), sigma)) # Simulate from a multivariate normal distribution using MASS::mvrnorm() dFrame <- data.frame(por=c(rep("A",n),rep("B",n)),cis=c(1:n,1:n),val=c(res[,1],res[,2])) # Data frame: por = phase; cis = subject ID; val = value #print(describeBy(dFrame, group=dFrame$por)) # Debugging return(dFrame) } glRes <- vector(length=4) # Prepare a vector to write multiple results in names(glRes) <- c("rep", "significant", "insignificant", "percent") # Name individual items # rep = number of repetitions/simulations; alpha = the probability of making a type I error simRM <- function(rep=1, alpha=0.05, ...) { t <- vector() # Prepare a vector to write t values in pb <- txtProgressBar(min = 0, max = rep, style = 3) # Set up the progress bar for (i in 1:rep) { dFrame <- sml(...) #x <- t.test(dFrame[dFrame$por=="A",3],dFrame[dFrame$por=="B",3],paired=T)$p.value # For p values if (type == "within") { # Student's paired t-test x <- unname(t.test(dFrame[dFrame$por=="A",3],dFrame[dFrame$por=="B",3],paired=T)$statistic) } else if (type == "between") { # Welch's t-test x <- unname(t.test(dFrame[dFrame$por=="A",3],dFrame[dFrame$por=="B",3])$statistic) } t <- abs(round(append(t, x),4)) # Add rounded t values to the vector t setTxtProgressBar(pb, i) # Update progress bar } cat("\n\n") # Introduce two line breaks to the output for better readability if (rep == 1) { # Print plot only if there is a single repetition/simulation if (type == "within") { # Plot the values and connect those coming from the same subject plot <- ggplot(data=dFrame, aes(x=por, y=val, group=cis)) + geom_point() + geom_line() + theme_bw() } else if(type == "between") { # Just plot the values plot <- ggplot(data=dFrame, aes(x=por, y=val, group=cis)) + geom_point() + theme_bw() } print(plot) } if (type == "within") { # Compute the critical value criticalValue <- abs(qt(alpha/2, n-1)) } else if (type == "between") { criticalValue <- abs(qt(alpha/2, 2*n-2)) } sig <- length(t[t > 
criticalValue]) # The number of significant outcomes (t > critical value) insig <- length(t[t < criticalValue]) # The number of insignificant outcomes (t < critical value) res <- c(sig, insig, (sig/rep)*100) # The result containing sig, insig and the percentage of sig names(res) <- c("significant", "insignificant", "percent") # Name items accordingly glRes[1] <<- glRes[1] + rep # Update each value accordingly and add to the global result variable glRes glRes[2] <<- glRes[2] + sig glRes[3] <<- glRes[3] + insig glRes[4] <<- round(glRes[2] / glRes[1] * 100, 2) return(glRes) # Return the final result } Answer: I am not sure if this will be under 2 seconds on your machine, but it should be quite close. The main points of optimisation: Converting a matrix (from mvrnorm) to a data.frame is slow. I dropped data.frame and used the resulting matrix directly. sigma can be computed only once, so I removed it from the loop. I generated multivariate normally distributed variables with length rep * n in one go. I subselect the rows in each iteration of the loop. It is better to define t with the required length, not to grow it in the loop. Some other small tweaks. It is possible to reduce the execution time even more by implementing parallel computing. All iterations are independent of each other, so it would be an effective approach.
Timing of the original code on my machine: user system elapsed 14.345 0.072 14.603 Timing of optimised code on my machine: user system elapsed 3.088 0.072 3.446 The code: rm(list = ls()) gc() library(MASS) library(ggplot2) sml <- function(n, m1, m2, sigma) mvrnorm(n, c(m1, m2), sigma) glRes <- vector(length=4) # Prepare a vector to write multiple results in names(glRes) <- c("rep", "significant", "insignificant", "percent") # Name individual items # rep = number of repetitions/simulations; alpha = the probability of making a type I error # n = number of observations; m1 = mean of the first group; m2 = mean of the second group; # sd = standard deviation of both groups; rho = correlation coefficient; type = experimental design simRM <- function(rep = 1, alpha = 0.05, n = 8, m1 = 10, m2 = 15, sd = 5, rho = 0.5, type = "within") { if (type == "within") { # If paired (within group), then correlation rho sigma <- matrix(c(1, rho, rho, 1), nrow = 2) * (sd^2) } else if (type == "between") { # If independent (between groups), then correlation 0 sigma <- matrix(c(0, rho, rho, 0), nrow = 2) * (sd^2) } else stop("Wrong type") dFrame <- sml(n = n * rep, m1 = m1, m2 = m2, sigma = sigma) t <- vector(mode = "double", length = rep) # Prepare a vector to write t values in pb <- txtProgressBar(min = 0, max = rep, style = 3) # Set up the progress bar for (i in 1:rep) { dF <- dFrame[n*(i-1)+(1:n), ] t[i] <- (t.test(dF[, 1], dF[, 2], paired = (type == "within"))$statistic) setTxtProgressBar(pb, i) # Update progress bar } #t <- abs(round(t, 4)) ### Why do you round t? 
t <- abs(t) cat("\n\n") # Introduce two line breaks to the output for better readability if (rep == 1) { # Print plot only if there is a single repetition/simulation if (type == "within") { # Plot the values and connect those coming from the same subject plot <- ggplot(data=dFrame, aes(x=por, y=val, group=cis)) + geom_point() + geom_line() + theme_bw() } else if(type == "between") { # Just plot the values plot <- ggplot(data=dFrame, aes(x=por, y=val, group=cis)) + geom_point() + theme_bw() } print(plot) } if (type == "within") { # Compute the critical value criticalValue <- abs(qt(alpha/2, n-1)) } else if (type == "between") { criticalValue <- abs(qt(alpha/2, 2*n-2)) } sig <- length(t[t > criticalValue]) # The number of significant outcomes (t > critical value) # insig <- length(t[t < criticalValue]) # The number of insignificant outcomes (t < critical value) insig <- rep - sig res <- c(sig, insig, (sig / rep) * 100) # The result containing sig, insig and the percentage of sig names(res) <- c("significant", "insignificant", "percent") # Name items accordingly glRes[1] <<- glRes[1] + rep # Update each value accordingly and add to the global result variable glRes glRes[2] <<- glRes[2] + sig glRes[3] <<- glRes[3] + insig glRes[4] <<- round(glRes[2] / glRes[1] * 100, 2) return(glRes) # Return the final result } set.seed(1) system.time(simRM(5000)) glRes
{ "domain": "codereview.stackexchange", "id": 4988, "tags": "optimization, simulation, r" }
Why doesn't planet Earth expand if I accelerate upwards when standing on its surface?
Question: According to General Relativity I am being accelerated upwards by planet earth while writing this question. But a curious person on the the other side of the planet relative to me would have the same experience. That means we are accelerated in opposite directions, although earths diameter do not seem to increase. How can this be? Answer: Spacetime curvature makes this possible. Here's an analogy. There are two paths on opposite sides of the equator, at a constant distance from it. Someone walking east along the path north of the equator will have to continually turn slightly left to stay on the path. (If that isn't obvious, imagine it's so far north that it visibly circles the pole.) Likewise, someone walking east on the path south of the equator will have to turn right. Two people walking side by side along the paths will stay the same distance apart, even though they're constantly turning away from each other. This wouldn't be possible on the Euclidean plane, but it's possible on a curved surface. That's what happens in general relativity, but the direction they're walking is the time direction, and the turning is acceleration.
{ "domain": "physics.stackexchange", "id": 99457, "tags": "general-relativity, spacetime, curvature, planets" }
What is the difference between continuous, discrete, analog and digital signal?
Question: It's my first time studying DSP and I've faced a problem finding a convenient definition. Are the following definitions correct? And if so, why are there some resources defining it in other terms, such as "Digital signal: is a signal with discrete time and discrete amplitude"? Discrete time signal: X-axis (time) is discrete and Y-axis (amplitude) may be continuous or discrete. Continuous time signal: X-axis (time) is continuous and Y-axis (amplitude) may be continuous or discrete. Digital signal: Y-axis (amplitude) is discrete and X-axis (time) may be continuous or discrete. Analog signal: Y-axis (amplitude) is continuous and X-axis (time) may be continuous or discrete. Answer: A signal is indeed a function. Given a signal $f(x)$, according to whether the variable $x$ and the function $f(x)$ are each continuous or discrete, there are four types of combinations: (1) $\mathbf{continuous}$ $x$ and $\mathbf{continuous}$ $f(x)$ This is the most common $\mathbf{analog}$ signal. (2) $\mathbf{continuous}$ $x$ and $\mathbf{discrete}$ $f(x)$ For this one, we can imagine the ideal base-band waveform used in digital communication, such as this one: (3) $\mathbf{discrete}$ $x$ and $\mathbf{continuous}$ $f(x)$ This is indeed the signal in most "digital signal processing" textbooks. An example, as others have pointed out, is the output of the CCD sensor. (4) $\mathbf{discrete}$ $x$ and $\mathbf{discrete}$ $f(x)$ This is the $\mathbf{digital}$ signal. Digital signals are used in practical implementations, and they actually exist in a conceptual manner. If we take the discrete nature of the function values into account, the problem becomes more complex; therefore, in most "digital signal processing" textbooks, the signals are $\mathbf{not}$ truly digital. An interesting fact is that, for the classical textbook by A. V. Oppenheim, the name was "digital signal processing" in the 1st edition, but the name was changed to "discrete-time signal processing" for the later editions.
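The four combinations in the answer can be demonstrated with a few lines of code: sampling makes the time axis discrete, and quantization makes the amplitude axis discrete. A minimal pure-Python sketch (the sample interval and quantization step are arbitrary illustration values):

```python
import math

def sample(f, t0, t1, dt):
    """Discrete time: evaluate f only at multiples of dt (case 3)."""
    n = int((t1 - t0) / dt)
    return [f(t0 + k * dt) for k in range(n)]

def quantize(x, step):
    """Discrete amplitude: round each value to the nearest multiple of step."""
    return round(x / step) * step

f = lambda t: math.sin(2 * math.pi * t)        # case 1: continuous t, continuous f(t)

binary = lambda t: 1.0 if f(t) >= 0 else -1.0  # case 2: continuous t, discrete f(t)

discrete_time = sample(f, 0.0, 1.0, 0.05)      # case 3: discrete t, continuous amplitude

digital = [quantize(v, 0.25) for v in discrete_time]  # case 4: discrete t AND amplitude

# every "digital" value now belongs to a small, finite set of levels
print(sorted(set(digital)))
```

Plotting `discrete_time` versus `digital` makes the distinction in the answer visible: both live on the same time grid, but only the latter is restricted to a finite set of amplitude levels.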
{ "domain": "dsp.stackexchange", "id": 4310, "tags": "discrete-signals, continuous-signals, digital, analog" }
Polarity index vs. Dipole moment
Question: I'm looking to find whether water or methanol is more polar, and I'm getting conflicting answers. My textbook says "the polarity of a bond is quantified by the size of its dipole moment." According to this table, at 20 °C, the dipole moments of methanol and water are 2.87 D and 1.87 D. This implies to me that methanol is more polar. But there seems to be another measure called "polarity index," which I can't seem to find a definition for online. This table says the polarity indices of methanol and water are 10.2 and 5.1, which implies to me that actually water is more polar. So what's the difference between the dipole moment and the polarity index? How is the latter defined? And is methanol or water more polar? Answer: From Roger E. Schirmer "Modern Methods of Pharmaceutical Analysis": Solvents are generally ranked by polarity, but polarity is not a uniquely defined physical property of a substance. Hence the relative polarity of a solvent will be somewhat dependent on the method used to measure it.... Solvent polarity is a complex function of many parameters in addition to adsorption energy. A more recent ranking of solvents by Snyder is based on a combination of parameters such as dipole moment, proton acceptor or donor properties, and dispersion force solvent... Snyder's polarity index ranks solvents according to a complex theoretical summation of these properties. As a rule, the higher the polarity index, the more polar the solvent. Snyder's paper was published in Journal of Chromatography A, Volume 92, Issue 2, 22 May 1974, Pages 223-230. Take a look at it to see how the index was calculated. There are different types of polarity indices. And each of them has different parameters and ways to calculate the polarity. Unlike polarity, dipole moment is a physical property. Polarity indices often take dipole moments as parameters when calculating the polarity of solvents.
The reason why there are polarity indices is because dipole moments alone couldn't explain the nature and interactions of solvents. Part of your confusion stems from the repetition of the word "polar" to describe different phenomena. There is polarity of bonds and there is polarity of solvents. They are different things.
{ "domain": "chemistry.stackexchange", "id": 7210, "tags": "polarity, dipole" }
Does the Hartree-Fock energy of a virtual orbital satisfy the virial theorem?
Question: In calculating the ground state of atoms or molecules at the equilibrium geometry, the expectation values of the kinetic, $\langle T\rangle$, and potential, $\langle V\rangle$, energies relate to the total energy, $E$, according to the virial theorem: $$ E = -\langle T\rangle=\tfrac{1}{2}\langle V\rangle. $$ Since the solution of the Schrödinger equation at the Hartree-Fock (HF) level is variational, the virial theorem holds for it. Also, the HF energy is the sum of the energies of occupied orbitals; therefore, these energies must also fulfill the virial conditions individually. Can the same be said about virtual orbitals? Answer: As you can easily see, the mean-field potential acting on every electron in atoms and molecules does not have the shape $V=Ar^\alpha$, so the virial theorem does not hold for the orbitals, because of the repulsive electron-electron interaction in atoms and the additional atomic potentials in molecules. If you perform a HF calculation you will see that the total energy is $E\approx -\langle T \rangle$. This means that the correlation effects do not deviate too much from this theorem, because the atomic potential has the dominant contribution. The total energy is not the sum of the occupied orbital energies. Strictly, the total energy is the negative of the sum of the ionization energies of all the electrons, which is why there is a correction taking into account the electron-electron interaction affecting every ionization. The virtual and occupied orbitals are issued from the same eigenvalue equation and, because of the potential, the virial theorem cannot be applied to them (or only very approximately).
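As a sanity check of the virial relation itself for a pure Coulomb potential (the $V \propto r^{\alpha}$ case where it does hold exactly), one can integrate $\langle T\rangle$ and $\langle V\rangle$ for the hydrogen 1s orbital $\psi = e^{-r}/\sqrt{\pi}$ in atomic units with a crude quadrature. This illustrates the theorem quoted in the question, not the HF-orbital issue the answer discusses:

```python
import math

def radial_trapz(g, rmax=40.0, n=200000):
    """Crude trapezoidal quadrature of 4*pi * int_0^rmax g(r) r^2 dr."""
    h = rmax / n
    s = 0.5 * (g(0.0) * 0.0 + g(rmax) * rmax**2)
    for k in range(1, n):
        r = k * h
        s += g(r) * r * r
    return 4.0 * math.pi * s * h

# hydrogen 1s in atomic units: psi(r) = exp(-r)/sqrt(pi), so |psi'|^2 = |psi|^2
psi2 = lambda r: math.exp(-2.0 * r) / math.pi

T = 0.5 * radial_trapz(psi2)                            # <T> = (1/2) int |grad psi|^2
V = -radial_trapz(lambda r: psi2(r) / max(r, 1e-300))   # <V> = -<1/r>
E = T + V
print(T, V, E)   # roughly 0.5, -1.0, -0.5 hartree: E = -<T> = <V>/2
```

The gradient form $\langle T\rangle = \tfrac12\int|\nabla\psi|^2$ is used so no second derivative is needed; for this real, normalized $\psi$ it equals the usual kinetic-energy expectation value.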
{ "domain": "chemistry.stackexchange", "id": 17372, "tags": "quantum-chemistry, orbitals" }
Checking if an integral converges (or diverges) using dimensional analysis
Question: Cross posted at Math.SE I have been watching some online lectures, and the lecturer uses dimensional analysis to make claims such as the following: Consider the integral \begin{equation} I(\xi, d) = \int_0^\xi \frac{\mathrm{d}^\mathrm{d}q}{(2\pi)^d} \frac{1}{q^2(q^2+1)}\quad\text{where}\quad q=|\boldsymbol{q}|. \end{equation} It is claimed that The "measure of the integral" is $\xi^{d-4}$, hence $I$ is a constant for $2<d<4$ in the limit $\xi\to\infty$ Using the above, $I$ diverges with $\xi$ for $d>4$ When $d=4$ (the "marginal case"), $I\sim\log(\xi)$ I recognise that these claims are imprecise: but that's exactly my question! How is such dimensional analysis used to determine the convergence or lack thereof of integrals in arbitrary dimension $d$? What does the "measure of the integral" mean? Furthermore, how do we know that there is no divergence due to the singularity at the origin? EDIT I'm accepting Qmechanic ♦'s answer below. For a purely mathematical answer (which I found very clear), see the linked Math.SE cross post. Answer: Here we try to give a conceptual (rather than a computational) answer to OP's 3 questions: The relevant notion is the superficial degree of (UV) divergence $D$, see e.g. my related Phys.SE answer here. The measure of the integral usually refers to $\mathrm{d}^\mathrm{d}q$. It seems the lecturer instead means that the measure of the integral is $\xi^D$, which is nonstandard terminology. Infrared (IR) singularities from massless fields are often regularized by giving them a small mass. (Also we assume that the integral has been Wick-rotated to Euclidean signature.) In OP's integral the possible IR singularity is the reason for the mentioned lower bound $d>2$.
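The claimed scaling is easy to check directly: after the angular integration, $I$ reduces (up to constants) to $\int_0^\xi q^{d-3}/(q^2+1)\,\mathrm{d}q$, which has elementary closed forms for $d = 3, 4, 5$. A quick numeric check of the three regimes (convergent, logarithmic, power-law divergent):

```python
import math

def radial_part(xi, d):
    """int_0^xi q**(d-3)/(q**2 + 1) dq, closed forms for d = 3, 4, 5."""
    if d == 3:
        return math.atan(xi)               # converges to pi/2 as xi -> infinity
    if d == 4:
        return 0.5 * math.log(1 + xi**2)   # grows like log(xi): the marginal case
    if d == 5:
        return xi - math.atan(xi)          # grows like xi = xi**(d-4)
    raise ValueError("only d = 3, 4, 5 are tabulated here")

for xi in (10.0, 100.0, 1000.0):
    print(xi,
          radial_part(xi, 3),                     # saturates at a constant
          radial_part(xi, 4) / math.log(xi),      # ratio tends to 1
          radial_part(xi, 5) / xi)                # ratio tends to 1
```

The $d=3$ column also shows why the lower bound $d>2$ matters: the integrand $q^{d-3}$ is still integrable at $q=0$ for $d=3$, but for $d\le 2$ the origin would produce an IR divergence.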
{ "domain": "physics.stackexchange", "id": 91863, "tags": "renormalization, dimensional-analysis, integration, singularities" }
Claim that 30-m class telescopes will have resolution far superior to Hubble: true?
Question: This article makes the claim that the Giant Magellan Telescope (GMT, number 4 in the list) will have resolution 10 times better than that of Hubble, while the Thirty Meter Telescope (TMT, number 3 in the list) will have resolution 12 times better than that of Hubble. These are claims I find impossible to believe. A simple application of, for instance, the Rayleigh criterion ($\theta = 1.22~\lambda/D$), where $\theta$ is the angular resolution of the telescope, $\lambda$ is the wavelength of the light in question, and $D$ is the diameter of the telescope, would show that a comparison between angular resolutions is as simple as comparing the diameters of the telescope apertures. Hubble's mirror is 2.4 m; comparing with the 24.5 m GMT and the 30 m TMT we see that the GMT will have 10.2 times the angular resolution of Hubble, while the TMT will have 12.5 times the angular resolution of Hubble. I think a calculation similar to the one I have described is how the article linked above came up with the numbers they did about the angular resolution of these telescopes compared to Hubble. However, the Rayleigh criterion only applies to telescopes working at their diffraction limit. Space telescopes (if they're designed well and built correctly) can work close to the diffraction limit (maybe even at the diffraction limit). Ground-based telescopes, however, are limited in angular resolution by the atmosphere, which at best will limit resolution to about an arc-second at the best sites on Earth. Thus the GMT and TMT by themselves will not have better image resolution than Hubble. My question, then, is whether this article is correct (possibly because of one of the reasons I list below) or whether it seems this article naively applied the Rayleigh criterion for angular resolution with no thought about how the atmosphere will affect the resolving capabilities of these large ground-based telescopes.
Possible reasons the article may still be correct: Adaptive optics, a developing technology which can allow telescopes to correct for distortions to the image produced by the atmosphere. Perhaps GMT and TMT will have very fancy adaptive optics systems. Another technique, such as speckle imaging or lucky imaging. Article could be referring to spectral resolution, rather than angular resolution (or image resolution). However, obtaining good spectral resolution is as much the job of the spectral instrument as it is of the telescope so I don't consider "spectral resolution" to be an inherent property of a telescope. Answer: Adaptive optics, as it says on the GMT website: One of the most sophisticated engineering aspects of the telescope is what is known as “adaptive optics.” The telescope’s secondary mirrors are actually flexible. Under each secondary mirror surface, there are hundreds of actuators that will constantly adjust the mirrors to counteract atmospheric turbulence. These actuators, controlled by advanced computers, will transform twinkling stars into clear steady points of light. It is in this way that the GMT will offer images that are 10 times sharper than the Hubble Space Telescope. and on the TMT web site: In addition to providing nine times the collecting area of the current largest optical/infrared telescopes (the 10-meter Keck Telescopes), TMT will be used with adaptive optics systems to allow diffraction-limited performance, i.e., the best that the optics of the system can theoretically provide. This will provide unparalleled high-sensitivity spatial resolution more than 12 times sharper than what is achieved by the Hubble Space Telescope. For many applications, diffraction-limited observations give gains in sensitivity that scale like the diameter of the mirror to the fourth power, so this increase in size has major implications.
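The back-of-envelope comparison in the question is easy to reproduce. At an assumed visible wavelength of 500 nm, the diffraction limits and the ratios to Hubble come out exactly as stated, which supports the conclusion that the articles quote diffraction-limited (i.e. adaptive-optics-assisted) figures:

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0
wavelength = 500e-9  # m, assumed visible-light value

def rayleigh_limit_arcsec(diameter_m):
    """Diffraction-limited resolution theta = 1.22 * lambda / D, in arcsec."""
    return 1.22 * wavelength / diameter_m * ARCSEC_PER_RAD

hubble = rayleigh_limit_arcsec(2.4)
gmt = rayleigh_limit_arcsec(24.5)
tmt = rayleigh_limit_arcsec(30.0)

print(f"Hubble: {hubble:.4f} arcsec")
print(f"GMT:    {gmt:.4f} arcsec ({hubble/gmt:.1f}x sharper)")
print(f"TMT:    {tmt:.4f} arcsec ({hubble/tmt:.1f}x sharper)")
# the ratios are just D_big / 2.4: about 10.2 for GMT and 12.5 for TMT --
# but only realized if adaptive optics actually reaches the diffraction limit
```

Note that the wavelength cancels out of the ratios entirely, which is why the articles can quote a single "10x/12x sharper" number without specifying a band.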
{ "domain": "astronomy.stackexchange", "id": 6928, "tags": "observational-astronomy, telescope, hubble-telescope, angular-resolution, giant-magellan-telescope" }
Stratosphere height vs. Temperature based on ozone concentration
Question: Why does temperature increase as height increases in the stratosphere (15 km - 60 km above earth), when the ozone molecules are most concentrated at about 25 km? Answer: There are two main points to make here. The first point is that UV radiation is entering the stratosphere from above (ignoring angular dependencies and scattering), so is preferentially absorbed by ozone higher in the stratosphere. This is a form of the Beer-Lambert law, which leads to greater absorption higher in the stratosphere. The second point is that the Dobson Units used in your plot standardize for (partial) pressure and temperature differences between different heights in the stratosphere, so that you can compare the mass or number density of ozone at different levels. But it doesn't tell you how much ozone there is relative to all the other gases making up the air at a particular height. A common way to look at this is the mole mixing ratio: A height–latitude cross section of annual average ozone mixing ratio (ppmv) from the climatology. Source: McPeters et al (2005), Ozone climatological profiles for satellite retrieval algorithms, GRL. While the ozone concentration peaks around 20 km, the mixing ratio peaks around 30 to 40 km and doesn’t decline so abruptly with height. Where the mixing ratio is greater, absorption of a particular amount of UV radiation can have a greater effect on the overall air temperature at that height. The net effect of these two properties is that heating of the stratosphere by UV absorption (see plot below) peaks in the upper stratosphere around 50 km. This is why the greatest temperatures are found in the upper stratosphere. Heating from absorption of UV radiation (K/day) Source: Haigh (1984) Radiative heating in the lower stratosphere and the distribution of ozone in a two‐dimensional model, QJRMS.
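The two effects in the answer — ozone density peaking near 25 km, but UV arriving from above and being attenuated on the way down (Beer–Lambert) — can be combined in a toy model. All numbers below are made-up illustration values, not a real atmosphere; the point is only that the heating maximum lands above the density maximum:

```python
import math

dz = 0.5                                   # km, grid spacing
heights = [15 + i * dz for i in range(int((60 - 15) / dz) + 1)]
# ozone number density: a bump peaking at 25 km (arbitrary shape/width)
ozone = [math.exp(-((z - 25.0) / 7.0) ** 2) for z in heights]

sigma = 0.6   # made-up absorption strength giving an optically thick column
heating = []
for i, z in enumerate(heights):
    # optical depth of all the ozone ABOVE this level (UV comes from the top)
    tau_above = sigma * sum(ozone[j] for j in range(i + 1, len(heights))) * dz
    flux = math.exp(-tau_above)            # Beer-Lambert attenuation
    heating.append(ozone[i] * flux)        # local absorption ~ density * flux

peak_density_z = heights[ozone.index(max(ozone))]
peak_heating_z = heights[heating.index(max(heating))]
print(peak_density_z, peak_heating_z)      # heating peaks above the density peak
```

Increasing `sigma` (a more opaque column) pushes the heating peak even higher, mirroring the answer's point that absorption, and hence warming, is skewed toward the upper stratosphere.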
{ "domain": "earthscience.stackexchange", "id": 1705, "tags": "temperature, atmospheric-chemistry, ozone, stratosphere" }
Quantum Katas - Tutorials - SingleQubitGates - Exercise 7 - Preparing an arbitrary state
Question: Exercise 7 "Preparing an arbitrary state" from the Quantum Katas - Tutorials - SingleQubitGates asks to prepare a state $\alpha|0\rangle + e^{i\theta}\beta|1\rangle$, using parameters $\alpha$, $\beta$, and $\theta$. In brief, $\theta$ is one of the known inputs; why don't we use $\theta$ for the Ry gate directly? Something like this: Ry(theta, q); R1(theta, q); But alas, I got an error: Qubit in invalid state. Expecting: Zero Expected: 0 Actual: 0.061208719054813704 Try again! Any ideas would be highly appreciated! Answer: The angle to use for the Ry gate is not necessarily the same one as the given angle $\theta$ to use for the R1 gate. This means that you need to figure out the angle for the Ry gate from the parameters $\alpha$ and $\beta$. If you're using $\theta$ for both angles, you'll be preparing the state $\cos \frac{\theta}{2}|0\rangle + e^{i\theta}\sin \frac{\theta}{2}|1\rangle$, not the $\alpha|0\rangle + e^{i\theta}\beta|1\rangle$ the task asks for. I recommend checking out the workbook for that tutorial - it has a very detailed explanation of the steps you need to take to solve this task.
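The accepted answer's point can be verified numerically: the Ry angle must be derived from $\alpha$ and $\beta$ (e.g. as $2\,\mathrm{atan2}(\beta, \alpha)$ for real, normalized amplitudes), while $\theta$ is only used for R1. A small matrix check with made-up values, assuming the usual Ry/R1 conventions:

```python
import cmath
import math

def ry(phi):    # Ry(phi) = [[cos(phi/2), -sin(phi/2)], [sin(phi/2), cos(phi/2)]]
    c, s = math.cos(phi / 2), math.sin(phi / 2)
    return [[c, -s], [s, c]]

def r1(theta):  # R1(theta) = diag(1, e^{i*theta})
    return [[1, 0], [0, cmath.exp(1j * theta)]]

def apply(gate, state):
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

alpha, beta, theta = 0.6, 0.8, 1.0   # example normalized amplitudes and phase

state = apply(r1(theta), apply(ry(2 * math.atan2(beta, alpha)), [1, 0]))
target = [alpha, beta * cmath.exp(1j * theta)]
print(max(abs(state[k] - target[k]) for k in (0, 1)))  # ~0: correct preparation

wrong = apply(r1(theta), apply(ry(theta), [1, 0]))     # using theta for Ry too
print(max(abs(wrong[k] - target[k]) for k in (0, 1)))  # clearly nonzero
```

The second print shows exactly the failure mode in the question: reusing $\theta$ in Ry produces amplitudes $\cos\frac{\theta}{2}, \sin\frac{\theta}{2}$ rather than $\alpha, \beta$.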
{ "domain": "quantumcomputing.stackexchange", "id": 2617, "tags": "programming, q#" }
How can I do ROS -> Gazebo -> GUI communication
Question: I want to do this kind of communication but I don't know how; are there some source codes that I can see? I know that ROS can communicate with Gazebo easily, but I want to return the Gazebo information to another node, such as a GUI application. Originally posted by GeorgeFilho on ROS Answers with karma: 1 on 2014-10-09 Post score: 0 Original comments Comment by GeorgeFilho on 2014-10-13: Yes, I saw these tutorials. What I'm trying to do is control the robot in ROS by teleoperation, make the robot move in Gazebo, and return the robot vision in another interface created by me (like RViz); I need to know how ROS and Gazebo do these communications. Comment by dblitz on 2016-10-25: Hey, were you ever able to figure this one out? Answer: I'm afraid we'll need a little more detail about what you're trying to do in order to help you. Have you seen these tutorials? Originally posted by Airuno2L with karma: 3460 on 2014-10-09 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 19687, "tags": "ros, gazebo, communication, turtlebot" }
Is the Standing Wave of a particle Hologram "interactive"?
Question: In other words, suppose one creates a particle hologram -- no photo reaction, just a standing waveform in space. Suppose the particle is something we are skilled at manipulating, like an electron. Next, one continuously blasts the kind of energy the particle would normally absorb at the hologram. Will the particle, when it is finally detected at some terminating point (like a phosphorous plate) have the same, more, or less "energy"? And, as a follow on, would there be any mechanism to do this (interact with the diffracted particle), under any conditions? In short: you can absorb energy from diffractions to record a hologram. Can you push energy into the system? Disclaimer: I recently asked a question on this topic, and discovered that, yes, particle holograms exist: Matter Holograms: What would be involved? Answer: Your question is a bit unclear, but I'll answer the question I think you want to ask: "If an unspecified kind of particle is diffracted by a standing wave pattern formed by a particle beam, how is the particle affected?" The experiment you describe probably has never been done using electron beams to form a standing wave. However, an inverse of the experiment has been done: Light has been used to form a standing wave pattern, and particles have been diffracted by that pattern. Perhaps more to the point, sound waves have been used to form a standing wave pattern, and light has been diffracted by that acoustic standing wave. See Acousto-Optic Modulator. When light is diffracted by the acoustic standing wave, it does indeed pick up (or lose) energy. Similarly, it is reasonable to expect that a standing wave produced by interfering particle waves would add (or subtract) energy from light or matter waves diffracted by the standing wave.
{ "domain": "physics.stackexchange", "id": 74508, "tags": "particle-physics, electrons, diffraction, wave-particle-duality, hologram" }
Robot upstart with serial library
Question: Hi, I'm using robot_upstart and the 'serial' package (http://wjwwood.io/serial/) on a Fitlet mini ubuntu PC. I've set permissions for serial ports using custom udev rules by navigating to /etc/udev/rules.d and (sudo) creating a new file called 40-permissions.rules with contents as follows KERNEL=="ttyUSB0", MODE="0666" ... repeat for each serial port used When I try to run the upstart service in the foreground with: sudo runslam-start everything works fine. I had a look at the log file on startup and the only piece of information I could find was: terminate called after throwing an instance of 'serial::IOException' what(): IO Exception (16): Device or resource busy, for ..../src/serial/src/impl/unix.cc, line 151. Why would the port be busy on startup? And how would I go about fixing this? Full Error: process[hector_mapping-4]: started with pid [1473] process[odom_2_base_footprint-5]: started with pid [1491] terminate called after throwing an instance of 'serial::IOException' what(): IO Exception (16): Device or resource busy, file /home/andre/adr_slam/src/serial/src/impl/unix.cc, line 151. 
process[base_link_2_laser-6]: started with pid [1500] process[hector_trajectory_server-7]: started with pid [1510] process[hector_geotiff_node-8]: started with pid [1543] [ INFO] [1463215260.338325073]: Waiting for tf transform data between frames /map and base_stabilized to become available HectorSM map lvl 0: cellLength: 0.1 res x:1024 res y: 1024 HectorSM map lvl 1: cellLength: 0.2 res x:512 res y: 512 [ INFO] [1463215260.934681633]: HectorSM p_base_frame_: base_footprint [ INFO] [1463215260.937439552]: HectorSM p_map_frame_: map [ INFO] [1463215260.938086638]: HectorSM p_odom_frame_: base_footprint [ INFO] [1463215260.938558139]: HectorSM p_scan_topic_: scan [ INFO] [1463215260.939030365]: HectorSM p_use_tf_scan_transformation_: true [ INFO] [1463215260.939420596]: HectorSM p_pub_map_odom_transform_: true [ INFO] [1463215260.939794708]: HectorSM p_scan_subscriber_queue_size_: 10 [ INFO] [1463215260.940222312]: HectorSM p_map_pub_period_: 0.500000 [ INFO] [1463215260.940644045]: HectorSM p_update_factor_free_: 0.400000 [ INFO] [1463215260.941224273]: HectorSM p_update_factor_occupied_: 0.800000 [ INFO] [1463215260.942148507]: HectorSM p_map_update_distance_threshold_: 0.300000 [ INFO] [1463215260.942606739]: HectorSM p_map_update_angle_threshold_: 0.040000 [ INFO] [1463215260.943586806]: HectorSM p_laser_z_min_value_: -1.000000 [ INFO] [1463215260.944245548]: HectorSM p_laser_z_max_value_: 1.000000 [ INFO] [1463215261.118777754]: Successfully initialized hector_geotiff MapWriter plugin TrajectoryMapWriter. [ INFO] [1463215261.119103784]: Geotiff node started No handlers could be found for logger "roslaunch" [fake_gps-3] process has died [pid 1467, exit code -6, cmd /home/andre/adr_slam/devel/lib/tf_to_serial_gps/tf_to_serial_gps_node __name:=fake_gps __log:=/tmp/944452a6-19af-11e6-9294-db3f83b218bf/fake_gps-3.log]. 
log file: /tmp/944452a6-19af-11e6-9294-db3f83b218bf/fake_gps-3*.log [ INFO] [1463215262.338960828]: Finished waiting for tf, waited 2.000792 seconds Originally posted by Andre_Preller_3000 on ROS Answers with karma: 48 on 2016-05-14 Post score: 1 Answer: I ended up fixing this problem by creating a script that delays and then launches my serial node according to this answer Originally posted by Andre_Preller_3000 with karma: 48 on 2016-05-17 This answer was ACCEPTED on the original site Post score: 1
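The accepted fix — a wrapper that waits before launching the serial node — can be sketched like this. The delay value and the launch command are placeholders for whatever a given setup needs; the demo at the bottom substitutes a harmless `echo` for the real launcher:

```shell
# delayed_launch.sh -- give udev/serial devices time to settle, then run the node
delayed_launch() {
    local delay="$1"
    shift
    sleep "$delay"
    "$@"    # the real command, e.g. roslaunch my_pkg slam.launch (placeholder)
}

# demo with a harmless command standing in for roslaunch:
delayed_launch 1 echo "starting serial node"
```

Wiring this script in as the upstart job's entry point (instead of calling the launcher directly) gives the serial port time to be released before the node tries to open it.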
{ "domain": "robotics.stackexchange", "id": 24649, "tags": "ros, usb, robot, robot-upstart" }
How is the water meniscus at the edge of a capillary tube?
Question: Suppose we have a capillary tube in which water can rise to a height of x cm. If we dip the tube such that the height above the surface is less than x, then how will the water meniscus be at the edge of the tube? Why? Excuse the flat water at the surface near the base of the capillaries. EDIT: @NewAlexandria Here's my reasoning. A is the most likely case as once the water molecules at the inner circumference reach the edge, they cannot go any further up as there is no glass to give the needed normal reaction (cosine component). That's all I can think of. Answer: The formula for capillary rise that most people know is easily derived through a pressure balance between the capillary pressure and the hydrostatic pressure. The hydrostatic pressure equals $$\Delta P_h=\rho g h$$ whereas the capillary pressure is $$\Delta P_c=\frac{2\gamma}{R}=\frac{2\gamma \cos \theta}{r}$$ So balancing these we get our 'famous' equation: $$h=\frac{2\gamma\cos\theta}{\rho g r} $$ Now we have a situation in which the height of our tube above the liquid, $h_{max}$ is smaller than $h$. For an equilibrium situation we still need the hydrostatic pressure and the capillary pressure to balance so we plug in the maximum height that we can get, $h_{max}$, and get: $$h_{max}=\frac{2\gamma \cos\theta_p}{\rho g r}$$ Note that I have changed $\theta$ into $\theta_p$ (see figure below), because that is in fact the only thing that can change, all the other parameters are fixed properties of the system. It is not entirely clear how you define $A$, but if we define $A$ as the situation for which $\theta_p=\theta$ then changing $\theta_p$ would result in a gradual shift from $A$ to $B$ depending on the value of $h_{max}$. In fact, $B$ is the limit for $h_{max}=0$, because $\cos \pi/2=0$, in which case, as pointed out by Olin and can be seen from the capillary pressure equation, no hydrostatic pressure can be sustained. The fact that $\theta$ can become bigger (i.e. 
change into $\theta_p$) is caused by contact angle hysteresis at the rim of the capillary tube.
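Plugging rough numbers into the answer's pressure balance shows how the apparent contact angle adjusts when the tube is shorter than the full Jurin height. The values for $\gamma$, $\rho$, $r$ below are typical illustration numbers for water in a clean glass capillary:

```python
import math

gamma = 0.072    # N/m, surface tension of water (approx.)
rho = 1000.0     # kg/m^3, density of water
g = 9.81         # m/s^2
r = 0.5e-3       # m, tube radius
theta = 0.0      # contact angle on clean glass (approx.)

# full capillary rise: h = 2*gamma*cos(theta) / (rho*g*r)
h_full = 2 * gamma * math.cos(theta) / (rho * g * r)
print(f"full capillary rise: {h_full * 100:.1f} cm")

h_max = 0.015    # m, the tube only sticks out 1.5 cm above the surface
# with the meniscus pinned at the rim, the balance becomes
#   h_max = 2*gamma*cos(theta_p) / (rho*g*r)  =>  cos(theta_p) = h_max / h_full
theta_p = math.acos(h_max / h_full)
print(f"apparent contact angle at the rim: {math.degrees(theta_p):.0f} deg")
```

As $h_{max} \to 0$ the ratio $h_{max}/h_{full} \to 0$ and $\theta_p \to 90°$, reproducing the flat-meniscus limit discussed in the answer.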
{ "domain": "physics.stackexchange", "id": 10723, "tags": "water, fluid-statics, surface-tension, capillary-action" }
Paramagnet: working with molar quantities
Question: I am currently working through A Modern Course in Statistical Physics by Linda E. Reichl. (2nd edition) I am working on problem 2.13 which discusses a paramagnet. The equation of state is given as $$m = \dfrac{DH}{T}$$ where m is the molar magnetisation, H is the magnetic field, D is a constant and T is temperature. The molar heat capacity at constant magnetisation is given as constant: $c_{m} = c$ I am having trouble working with molar quantities. The textbook states $C_{X, n} = \left(\dfrac{{\partial}U}{{\partial}T}\right)_{X, n}$, therefore $c_{x} = \left(\dfrac{{\partial}u}{{\partial}T}\right)_{x}$ and $\left(\dfrac{{\partial}U}{{\partial}X}\right)_{T, n}$ = $\left(\dfrac{{\partial}u}{{\partial}x}\right)_{T}$ where a lower case letter indicates we are dealing with a molar quantity ($U/n = u$). I tried to derive the following analogously: $C_{M, n} = T\left(\dfrac{{\partial}S}{{\partial}T}\right)_{M, n}$, therefore $c_{m} = T\left(\dfrac{{\partial}s}{{\partial}T}\right)_{m}$ And $\left(\dfrac{{\partial}S}{{\partial}M}\right)_{T, n}$ = - $\left(\dfrac{{\partial}H}{{\partial}T}\right)_{M, n}$ = $\left(\dfrac{{\partial}s}{{\partial}m}\right)_{T}$ where the first equality is a Maxwell relation. I have therefore worked out: $\left(\dfrac{{\partial}s}{{\partial}T}\right)_{m} = c/T$ and $\left(\dfrac{{\partial}s}{{\partial}m}\right)_{T} = -m/D$ I then integrated both equations to get $s(T, m) = c\ln(T) - \dfrac{m^{2}}{2D}$ This however isn't the correct answer. I am assuming I have made an error using molar quantities, however as far as I can tell I haven't done much more than repeat what the textbook says. Could someone explain why the textbook's equalities hold whereas mine don't? Edit: The correct answer is $$s(T, m) = (c+m^{2}/2D)\ln\left(\dfrac{T(c+m^{2}/2D)}{u_{0}}\right)$$ Answer: I think your answer is correct; the only thing that you need is to write the entropy in terms of its natural variables (for this case $U=U(S,M,N)$ or $u=u(s,m)$).
To get the entropy that is supposed to be the correct one, the internal energy must have a 'magnetic' contribution, which is not the case here, as you can easily prove by calculating $\left(\frac{\partial U}{\partial M}\right)$ at constant $T$ using that $c_m=$ constant. On the other hand, all your molar quantities are right; you haven't made any mistake. You can see Thermal Physics by Morse for more details on paramagnetic substances (although they assume that $C_M=F(T)$, all the calculations are quite similar to the ones needed to solve problem 2.13). Hope this is useful, even though your question was posted long ago.
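The asker's integration can be checked mechanically with finite differences: the candidate $s(T,m) = c\ln T - m^2/(2D)$ really does satisfy both partial-derivative relations, which supports the answer's point that the molar manipulations themselves are fine and the discrepancy with the book lies elsewhere. The constants below are arbitrary positive test values:

```python
import math

c, D = 1.3, 0.7          # arbitrary positive constants for the check
h = 1e-6                 # finite-difference step

def s(T, m):
    """The asker's integrated entropy candidate."""
    return c * math.log(T) - m**2 / (2 * D)

T0, m0 = 2.5, 0.4        # arbitrary test point (T > 0)

# central finite differences for the two partials
ds_dT = (s(T0 + h, m0) - s(T0 - h, m0)) / (2 * h)
ds_dm = (s(T0, m0 + h) - s(T0, m0 - h)) / (2 * h)

print(ds_dT - c / T0)    # ~0: matches (ds/dT)_m = c/T
print(ds_dm + m0 / D)    # ~0: matches (ds/dm)_T = -m/D
```

So the integration from the two partials to $s(T,m)$ is internally consistent; the issue, as the answer argues, is the form of $u$ assumed by the textbook, not a molar-quantity error.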
{ "domain": "physics.stackexchange", "id": 70320, "tags": "homework-and-exercises, thermodynamics" }
Confusion with thermal wind mechanism
Question: In meteorology, we know that cold air mass sinks while warm air mass rises due to density difference, resulting in higher pressure in cold air mass. This leads to a horizontal pressure gradient from cold to warm area (i.e. wind flows from cold to warm area). However, when considering the situation for thermal wind, based on the hypsometric equation, at any same level, the pressure over cold area is lower than that over warm area, resulting in horizontal pressure gradient from warm to cold area. This is the reverse of the above situation, which is confusing. How do we explain the difference between the two situations to eliminate any contradiction? Answer: based on the hypsometric equation, at any same level, the pressure over cold area is lower than that over warm area, resulting in horizontal pressure gradient from warm to cold area. Not necessarily. In a colder area, the isobars are more "compressed" while in a warmer area, the isobars are more "spread out". In most examples of thermal wind, the pressure at the surface is actually the same in the cold area and in the warm area. Take a look at this image. The horizontal pressure gradient is only up in the higher altitudes. As a matter of fact, thermal wind is greater in the upper troposphere and weakens closer to the surface. Does this contradict the concept of monsoons? No because monsoons are surface-level winds. Let's use land and sea breezes as an example since they are basically just small-scale monsoons. Take a look at this image regarding land and sea breezes. Air in the warm region rises which causes the isobars to "spread apart". A horizontal pressure gradient in the upper troposphere occurs so wind will flow from the warm area to the cooler area; this is thermal wind. Air above the cooler area sinks because it is more dense. This gives room for the thermal wind to also sink above the cooler area. 
The surface pressure in the cooler area is now higher than the warmer area since the sinking air is building up. A horizontal pressure gradient forms which causes wind on the surface to flow from the cooler area to the warmer area. I hope you understand since explaining this was a bit tricky without a lot of visualization. Also, check out wind flow maps of a monsoon. On the surface, wind flows from the colder area to the hotter area. But if you check the wind flow at higher altitudes, the wind actually reverses. Edit: Here's an analogy I just thought of. Think of the air as individual molecules. Think of these molecules as balls in an aquarium. These balls would always want to be a set distance away from other balls. If a ball is too near to another ball, they will push each other; if a ball is too far from another ball, they will try to go closer. One end of the aquarium is hot while the other is cold. The balls in the hot part rise and build up at the upper portion of the aquarium. The balls in the cold part sink and build up at the bottom part of the aquarium. The balls build up but since they don't want to be together, they flow towards the spaces with no balls. The hot balls will glide on the ceiling of the aquarium towards the colder part where they cool down and sink. The cold balls will glide on the floor of the aquarium towards the hot part where they heat up and rise. The cycle then just keeps repeating. The thermal wind phenomenon occurs on the ceiling of the aquarium, the monsoon/land-sea breeze phenomenon occurs on the floor of the aquarium.
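The hypsometric argument in the question is easy to quantify: with equal surface pressure, an isothermal warm column is "stretched" so that at a fixed height aloft its pressure exceeds the cold column's. The temperatures below are illustrative round numbers:

```python
import math

R_d, g = 287.0, 9.81         # J/(kg K) for dry air, m/s^2
p_surface = 1000.0           # hPa, assumed the SAME over both columns

def pressure_at(z_m, T_mean):
    """Isothermal hydrostatic pressure, consistent with the hypsometric equation."""
    return p_surface * math.exp(-g * z_m / (R_d * T_mean))

z = 5500.0                   # m, roughly the 500 hPa level
p_warm = pressure_at(z, 290.0)
p_cold = pressure_at(z, 250.0)
print(f"at {z:.0f} m: warm column {p_warm:.0f} hPa, cold column {p_cold:.0f} hPa")
# aloft the gradient points from warm to cold (thermal wind aloft), while at
# the surface the gradient is zero here -- or can even point the other way
# once sinking air raises the cold column's surface pressure, as in a monsoon
```

This reproduces both halves of the answer: the warm-to-cold gradient exists only aloft, so there is no contradiction with surface winds blowing from cold to warm.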
{ "domain": "earthscience.stackexchange", "id": 2653, "tags": "meteorology, atmosphere, wind, barometric-pressure" }
Why are synthetic elements unstable?
Question: So far 20 synthetic elements have been synthesized. All are unstable, decaying with half-lives between years and milliseconds. Why is that? Answer: Protons are positively charged, and neutrons are neutral, so large nuclei are highly positively charged. A positively charged sphere will energetically prefer to break up into two separate charged droplets which move far apart; this reduces the electrostatic energy, since the electrostatic field does work during this process. This process, spontaneous fission, is usually phase-space unlikely, since you need to have a large chunk of the nucleus tunnel away from another large chunk, and it's unlikely for all those particles to tunnel out together. But at large atomic numbers, you are unstable even just to shooting out an alpha-particle, and this doesn't require a conspiracy, so large Z nuclei are alpha unstable, usually with long half-lives. The positive charge on nuclei puts a limit to the stable ones. The reason is simply that the electrostatic force is long range, while the cohesion force is short range. The same phenomenon causes the instability of water droplets, so that if you charge one up, it will break into a fine mist. The cohesion of the droplets is local, while the electrostatic repulsion is long range. The scale at which you get a fission instability directly can be estimated from surface-tension considerations. If you break a sphere into two adjacent spheres of the same total volume, the radius is reduced by the cube root of two, so that each sphere's surface area decreases by the square of this, and you multiply by 2 (since there are two spheres), so the net factor is the cube root of 2, which is around 1.3. So the extra surface tension energy is increased by a factor of 1.3, or 30%. But in separating the two spheres, you have taken one ball of charge, with an energy of $Q^2\over R$ and separated it into two adjacent balls of reduced radius and half the charge.
Adding up the electrostatic energy, it is about 80% of the original electrostatic energy in the single sphere. So spontaneous droplet fission will happen when you have a charged ball for which 30% of the surface tension energy is less than 20% of the charge energy. Since charge goes up almost as the volume (not quite, but close) while the surface tension goes up as the area, there is a crossover, and charged droplets will spontaneously separate when they are too big. The surface tension can be found from the binding energy curve of nuclei, and these simple considerations limit stable nuclear size to about that of Uranium. The U nucleus can spontaneously fission at an extremely low rate, but the transuranics become progressively more unstable because their electrostatic energy is increasing as the volume to a power greater than 2/3, while their surface tension energy is increasing as the surface area, which grows as the 2/3 power of the volume. These considerations, in much more sophisticated form, are due to Niels Bohr in the seminal liquid drop model of the 1940s. This model explained the nuclear binding energy curve quantitatively, and accounted well for fission phenomena. The only major thing left out of this was the shell model and magic numbers, which was supplied by Mayer.
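This surface-versus-Coulomb competition can be put in numbers with the liquid-drop (semi-empirical mass formula) terms. A rough numeric sketch — the coefficient values $a_S \approx 17.8$ MeV and $a_C \approx 0.711$ MeV are typical textbook fits and are my assumption, not values from the answer above:

```python
# Liquid-drop estimate of the surface vs. Coulomb energy competition.
# Coefficients are typical semi-empirical mass formula fits (assumed values).
A_S = 17.8   # surface-term coefficient, MeV
A_C = 0.711  # Coulomb-term coefficient, MeV

def fissility(Z, A):
    """Bohr-Wheeler fissility x = E_Coulomb / (2 * E_surface).
    The fission barrier vanishes as x -> 1."""
    e_coulomb = A_C * Z**2 / A**(1 / 3)
    e_surface = A_S * A**(2 / 3)
    return e_coulomb / (2 * e_surface)

# Equivalent condition: Z^2/A exceeding 2*A_S/A_C (~50) means no barrier at all.
critical_z2_over_a = 2 * A_S / A_C

for Z, A, name in [(92, 238, "U-238"), (104, 261, "Rf-261"), (118, 294, "Og-294")]:
    print(name, round(Z**2 / A, 1), round(fissility(Z, A), 2))
```

Uranium sits around $Z^2/A \approx 36$ (fissility $\approx 0.71$), and the transuranics climb steadily toward the barrier-free limit — which is exactly the trend of increasing instability described above.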
{ "domain": "physics.stackexchange", "id": 100465, "tags": "nuclear-physics, radioactivity, binding-energy, elements, isotopes" }
annotation file for ham10000
Question: I am attempting to train a Faster R-CNN model using the HAM10000 dataset. However, I have been unable to locate an annotation file specifically for this dataset. I am seeking guidance on the most effective approach to annotate bounding boxes for this dataset. Is there any possibility of automating this process to streamline the annotation procedure? Answer: My suggestion would be this: manually annotate a few hundred samples, then train a YOLOv5 or SSD model on them to detect and classify the lesions. Use that model to generate annotations for the rest of the images, review the model-generated annotations and fix the errors, and finally fine-tune the model on the full dataset.
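The model-in-the-loop step above boils down to converting each predicted box into a label line the detector's training format expects. A minimal sketch of the conversion to YOLO's format (one text file per image, one line per box, coordinates normalized to [0, 1]); the function name and the pixel-space (x1, y1, x2, y2) box convention are my assumptions:

```python
def yolo_label_line(class_id, box, img_w, img_h):
    """Convert a pixel-space (x1, y1, x2, y2) box to a YOLO label line:
    'class x_center y_center width height', all coordinates normalized to [0, 1]."""
    x1, y1, x2, y2 = box
    x_c = (x1 + x2) / 2 / img_w
    y_c = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# One .txt file per image, one line per detected lesion:
lines = [yolo_label_line(0, (100, 150, 300, 350), 600, 450)]
print("\n".join(lines))  # 0 0.333333 0.555556 0.333333 0.444444
```

Running the trained detector over the unlabeled images and writing its boxes through a helper like this gives you draft annotation files to review and correct.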
{ "domain": "datascience.stackexchange", "id": 11829, "tags": "image-preprocessing, annotation" }
Most stable conformation of aldehydes and ketones
Question: Why is the most favorable conformation of ketones and aldehydes the one where the alkyl group is anti to the other alkyl group (or the hydrogen atom in an aldehyde), as in the attached figure? Why isn't the conformation in which the bond between the alpha and beta carbon atoms can participate in hyperconjugation with the carbonyl pi antibonding MO more stable? By contrast, for butene the slightly more favorable conformation is the one on the right: Answer: I found the answer in Ian Fleming's Molecular Orbitals and Organic Chemical Reactions, pages 93-94 (the aldehyde is propanal): The polarisation of the H-C bonds in the methyl group leaves a weak positive charge on the outside, and this is attracted to the partial negative charge on the oxygen atom, lowering the energy of this conformation (the difference in energy of the two eclipsed conformations of propanal is 8 kJ/mol).
{ "domain": "chemistry.stackexchange", "id": 4072, "tags": "organic-chemistry, stereochemistry, carbonyl-compounds, conformers" }
Issue with predict generator keras
Question: I'm new to Keras with the TensorFlow backend and I'm trying to do transfer learning with a pretrained net. The problem is that the accuracy on the validation set is very high, around 90%, but on the test set the accuracy is very bad, less than 1%. I solved the problem using OpenCV to read and resize the images, but I'd like to understand why I have this problem with the Keras methods. I paste my code below. from keras.preprocessing.image import ImageDataGenerator from keras.applications.xception import preprocess_input import keras train_val_datagen = ImageDataGenerator( validation_split=0.25, preprocessing_function=preprocess_input) train_val_generator = train_val_datagen.flow_from_directory( # training subset directory="./image-dataset/", target_size=(299, 299), color_mode="rgb", batch_size=32, class_mode="categorical", shuffle=True, subset = 'training', seed=17) val_train_generator = train_val_datagen.flow_from_directory( # validation subset directory="./image-dataset/", target_size=(299, 299), color_mode="rgb", batch_size=32, class_mode="categorical", shuffle=True, subset = 'validation', seed=17) final_train_generator = train_val_datagen.flow_from_directory( # final training set with all the data directory="./image-dataset/", target_size=(299, 299), color_mode="rgb", batch_size=32, class_mode="categorical", shuffle=True, seed=17) As you can see I used Xception as the pretrained net and I chose to resize my images to fit its input. After the training I created a new iterator for the test data as follows: test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input) test_generator = test_datagen.flow_from_directory( directory="./TEST/", target_size=(299, 299), color_mode="rgb", batch_size=1, # predict one at a time shuffle = False, class_mode=None # there is no reference class ) test_generator.reset() where the preprocessing function is exactly the same. 
The predictions are made with the following code: predictions = model_xcpetion.predict_generator(test_generator, 6104, verbose = 1 ) where 6104 is the number of images in the test folder. After this I generated a CSV with the image names and their categorical probabilities: import pandas as pd import numpy as np df = pd.DataFrame(predictions) cols = [('probability of ' + str(i)) for i in range(1, 30)] df.columns = cols df['images'] = imNames df.to_csv('predictions_xception_all_data.csv', sep=',') where the columns represent the labels (1 to 29) and imNames is obtained from the filenames attribute of test_generator. Finally I generated the CSV with the highest-probability label for each image and computed the accuracy, obtaining the value I wrote above. The code that I used to solve the problem is the same, but I read and resize the images with the following code: width = 299 height = 299 dim = (width, height) images = [] # for each img resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA) images.append(resized) where img is each image read with imread_collection of skimage.io. Thanks in advance for your help. EDIT1: the images resized with opencv have not been processed with the preprocessing function Answer: I solved the problem following the advice in the comments of this discussion. I paste my code here: dizionario = dict({'1': 0, '10': 1, '11': 2, '12': 3, '13': 4, '14': 5, '15': 6, '16': 7, '17': 8, '18': 9, '19': 10, '2': 11, '20': 12, '21': 13, '22': 14, '23': 15, '24': 16, '25': 17, '26': 18, '27': 19, '28': 20, '29': 21, '3': 22, '4': 23, '5': 24, '6': 25, '7': 26, '8': 27, '9': 28}) as you can see I created a dictionary to map class labels to indices. I used the output of final_train_generator.class_indices to create this dictionary. 
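The mapping looks odd because flow_from_directory assigns class indices in sorted *string* order of the folder names, so '10' comes before '2'. A quick sketch reproducing the dictionary above for numeric folder names '1' through '29':

```python
# flow_from_directory sorts class folder names as strings, so numeric
# folder names '1'..'29' get indices in lexicographic, not numeric, order.
folders = [str(i) for i in range(1, 30)]
class_indices = {name: idx for idx, name in enumerate(sorted(folders))}
print(class_indices['1'], class_indices['10'], class_indices['2'])  # 0 1 11
```

This is why argmax indices from the model must be mapped back through class_indices (or an inverse of it) rather than treated as the numeric labels themselves.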
After predict_generator I created the CSV of predictions with the following code: predicted_class_indices = np.argmax(predictions, axis = 1) final_predictions = [] for element in predicted_class_indices: final_predictions.append(list(dizionario.keys())[list(dizionario.values()).index(element)]) df = pd.DataFrame() df['class'] = final_predictions df['imnames'] = imNames df.to_csv('predictions_xception_all_data_bon.csv', sep=',') I also paste here the code that I changed following the discussion linked in the comments: test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input) test_generator = test_datagen.flow_from_directory( directory="./TEST/", target_size=(299, 299), color_mode="rgb", batch_size=20, shuffle = False, class_mode = "categorical", ) test_generator.reset() imNames = test_generator.filenames predictions = model_xcpetion.predict_generator(test_generator, steps=len(test_generator), verbose = 1 )
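The EDIT1 note also points at the other half of the discrepancy: the cv2-resized images skipped preprocess_input, which for Xception scales pixels from [0, 255] to [-1, 1], so feeding raw pixel values shifts the input distribution the network was trained on. A minimal NumPy sketch of that scaling (mimicking the documented behavior of keras.applications.xception.preprocess_input, without importing Keras):

```python
import numpy as np

def xception_preprocess(x):
    """Scale pixel values from [0, 255] to [-1, 1], matching the behavior
    of keras.applications.xception.preprocess_input ('tf' mode)."""
    return np.asarray(x, dtype="float32") / 127.5 - 1.0

batch = np.array([0.0, 127.5, 255.0])
print(xception_preprocess(batch))  # [-1.  0.  1.]
```

Applying the same function to the OpenCV-resized arrays (and making sure the channel order is RGB, since cv2.imread returns BGR) keeps the manual pipeline consistent with the generator pipeline.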
{ "domain": "datascience.stackexchange", "id": 11213, "tags": "machine-learning, deep-learning, keras, tensorflow, predict" }