local cost map not updated
Question: Hi all, I am using a 6 m laser range finder (Hokuyo URG) to generate a local cost map. I found that the local cost map was not updated promptly. What happens is that whenever an obstacle appears in range, an inflated object is generated in the local cost map; however, after the obstacle disappears, the inflated object remains there for a very long time. I have set the cost map update frequency to 10 Hz, but it does not seem to help. Any other suggestions? I have an unverified hypothesis: in the actual laser packets received, after the obstacle disappears, the range readings at those particular points become 0 or NaN, and somehow costmap_2d keeps the old data if a particular beam reads 0. Thanks, ray Originally posted by dreamcase on ROS Answers with karma: 91 on 2014-08-14 Post score: 3 Original comments Comment by dreamcase on 2014-08-14: http://wiki.ros.org/costmap_2d#Marking_and_Clearing I notice the error could be an anomaly in the clearing process; somehow the clearing part is not functioning. Below is my config file. Can someone help me check it?
costmap_common_params.yaml:

    obstacle_range: 4.0
    raytrace_range: 4.0
    transform_tolerance: 5.0
    footprint: [[0.75, 0.4], [0.8, 0], [0.75, -0.4], [-0.75, -0.4], [-0.75, 0.4]]
    robot_radius: 1.0
    inflation_radius: 2.0
    footprint_padding: 0.01
    cost_scaling_factor: 7
    lethal_cost_threshold: 100
    observation_sources: laser_scan_sensor point_cloud_sensor
    laser_scan_sensor: {sensor_frame: local_laser, data_type: LaserScan, topic: localscan, marking: true, clearing: true}
    point_cloud_sensor: {sensor_frame: local_laser, data_type: PointCloud, topic: point_cloud, marking: true, clearing: true}

local_costmap_params.yaml:

    local_costmap:
      global_frame: /world
      robot_base_frame: base_link
      update_frequency: 5.0
      transform_tolerance: 5.0
      publish_frequency: 2.0
      static_map: false
      rolling_window: true
      width: 10.0
      height: 10.0
      resolution: 0.05
      origin_x: -293.6
      origin_y: -100.0
      allow_unknown: false

Comment by pkohout on 2014-08-14: can you please also post the output of rostopic list? Comment by dreamcase on 2014-08-14: Some more observations: if I change the laser source to a SICK long-range laser (LMS111), the cost map gets updated properly. Has it something to do with the resolution of the laser? What if the laser is not able to cover every 5 cm x 5 cm grid cell within the local cost map? Does that cause residues on the local cost map? Answer: This looks like it's being caused by your laser driver reporting 0 and NaN when you'd like it to report maxRange or inf. This can be a safety constraint for when your scanner can't detect anything in range, e.g. when scanning a mirrored surface which doesn't get picked up as an obstacle. In practice, for controlled environments, this is probably not your desired behaviour.
You can set up a custom node, or implement a laser_filter, which would convert the 0s and NaNs to maxRange (which the costmap obstacle layer will pick up as 'no obstacles') or inf (which requires the inf_is_valid parameter to be set on the costmap obstacle layer for that sensor), and then republish the scan message on a filtered topic. This will allow the costmap to properly clear 'empty space' for rays with no obstacles in range. Originally posted by paulbovbel with karma: 4518 on 2014-08-14 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by musiqsoulchild on 2015-03-23: Does anyone know exactly how to implement a solution like this? I'm not sure where to start. How do I use laser_filter to get rid of 0s and NaNs for maxRange and inf? Comment by kevin.kuei.0321@gmail.com on 2015-10-29: I encountered the same problem. Anyone, please?
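As a sketch of the filtering idea (this is not the actual laser_filters plugin; the function and parameter names here are made up for illustration), the core transformation on a scan's ranges array could look like this:

```python
import math

def clear_invalid_ranges(ranges, range_max, inf_is_valid=False):
    """Replace 0.0 and NaN beams so the costmap treats them as free space.

    Invalid readings become range_max (which the obstacle layer reads as
    'no obstacle out to max range'), or +inf if the costmap obstacle layer
    has inf_is_valid set for this sensor.
    """
    fill = math.inf if inf_is_valid else range_max
    return [fill if (r == 0.0 or math.isnan(r)) else r for r in ranges]
```

In a node you would apply this to msg.ranges inside the LaserScan callback and republish the modified message on a filtered topic that the costmap subscribes to instead of the raw scan.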
{ "domain": "robotics.stackexchange", "id": 19053, "tags": "ros, navigation, mapping, update, rate" }
Asynchronous Function Calls
Question: I am writing a Node.js-inspired C++ library for asynchronous function calls. How efficient and stable does my code look? If you want, you can see some WIP code and demonstration code here. (net.hpp is for POSIX sockets and is far from done. It does not have any IO functions written.)

    /* ASYNC.HPP
     * Defines some functions for calling other functions in the background
     * This gives the ability to create callback based functions easily,
     * similar to node.js (But I hate javascript so I wrote this library for c++)
     */
    #ifndef ASYNC_H
    #define ASYNC_H

    #include <thread>
    #include <vector>

    std::vector<std::thread> asyncCalls; // Don't have to deal with threads leaving scope since this vector is global

    #define asyncCall asyncCalls.emplace_back // Whenever someone calls asyncCall this constructs a new std::thread which calls their function

    void finishAsync(){ //Call to block until all running asyncronous functions return
        while(!asyncCalls.empty()){ //Loop until the vector is empty
            asyncCalls.back().join(); //Get the thread from the back and join it to block
            //^ I feel like this line might throw an exception, but in my testing it hasn't thrown anything.
            asyncCalls.pop_back(); //Pop it and get a new one
        }
    } // Infinite running threads will block forever

    #ifdef _GLIBCXX_CHRONO //Include <chrono> before this header for this function
    void sleep(uint32_t millis){ //I just realized while adding comments that I could do this with a define. Oh well
        std::this_thread::sleep_for(std::chrono::milliseconds(millis));
    }
    #endif // End chrono function

    #endif //End header guard

Answer:

    std::vector<std::thread> asyncCalls;

This creates a global variable (so, you know, don't do that), and it creates the global variable in every .cpp file that includes this header file. So unless you only have one .cpp file in your project, you're going to get linker errors when you try to link your project.
What you wanted to do was put this variable definition in a .cpp file and put a declaration of it (using extern) in your .hpp file. Alternatively, as of C++17, you could have made it an inline variable:

    inline std::vector<std::thread> asyncCalls;

However, global variables are terrible; don't do anything like this. Maybe what you want is some notion of a "thread pool":

    class ThreadPool {
        std::vector<std::thread> asyncCalls;
        // ...
    };

Next:

    #define asyncCall asyncCalls.emplace_back // Whenever someone calls asyncCall this constructs a new std::thread which calls their function

This is a preprocessor macro (so, you know, don't do that). What you meant was

    class ThreadPool {
        std::vector<std::thread> asyncCalls;
    public:
        template<typename... Args>
        void asyncCall(Args&&... args) {
            asyncCalls.emplace_back(std::forward<Args>(args)...);
        }
    };

Then:

    void finishAsync(){ //Call to block until all running asyncronous functions return
        while(!asyncCalls.empty()){ //Loop until the vector is empty

Two things: first, please indent your code; four-space indents would be nice. Second, who do you expect to call this function? And how do you know that nobody else is going to be calling emplace_back at the same instant that that caller is calling empty? It looks like you have a huge thread-safety problem here.

    #ifdef _GLIBCXX_CHRONO //Include <chrono> before this header for this function

Don't do this. For one thing, your code won't do the right thing on Clang with libc++ (or MSVC, or basically any non-libstdc++ distribution). What you meant was

    #include <chrono>

This way, your header includes all the headers it depends on, recursively, and you never have to worry about whether your caller included something else before you or not. As far as "would this code be a good idea if it worked": a resounding no.
Your asyncCall function isn't really doing anything that std::async wouldn't do on its own; and std::async on its own is a bad idea because if you call it too many times you'll eventually run out of threads and start getting "Resource temporarily exhausted" errors. If you're going to be writing programs that use asynchrony as their primary implementation technique, the last thing you want to do is run out of threads. You need to figure out some way to queue up "tasks" without tying up a whole std::thread per (stalled) task.
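To make the "queue up tasks without one thread per task" idea concrete (sketched here in Python rather than C++, purely as an illustration of the shape of the fix): a fixed pool of workers pulls tasks from a thread-safe queue, so thousands of pending tasks never cost more than a handful of OS threads.

```python
import queue
import threading

class TaskPool:
    """Fixed-size worker pool: many queued tasks, few OS threads."""

    def __init__(self, workers=4):
        self.tasks = queue.Queue()
        self.threads = [threading.Thread(target=self._run, daemon=True)
                        for _ in range(workers)]
        for t in self.threads:
            t.start()

    def _run(self):
        while True:
            fn, args = self.tasks.get()
            if fn is None:          # sentinel: shut this worker down
                break
            fn(*args)
            self.tasks.task_done()

    def async_call(self, fn, *args):
        self.tasks.put((fn, args))

    def finish(self):
        """Block until every queued task has run, then stop the workers."""
        self.tasks.join()
        for _ in self.threads:
            self.tasks.put((None, ()))
        for t in self.threads:
            t.join()

# queue far more tasks than workers; only 2 OS threads ever exist
results = []
pool = TaskPool(workers=2)
for i in range(10):
    pool.async_call(results.append, i * i)
pool.finish()
```

The same design carries over to C++ directly: a mutex-protected deque of std::function tasks drained by a fixed vector of std::thread workers.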
{ "domain": "codereview.stackexchange", "id": 25348, "tags": "c++, multithreading" }
What's the difference between locally Lorentzian and locally Euclidean?
Question: What's the difference between locally Lorentzian and locally Euclidean? Is the former (Lorentzian) the restriction of the latter (Euclidean) to a hyperbolic surface? Answer: A pseudo-Riemannian manifold $(M,g)$ is locally Euclidean (respectively, locally Lorentzian) if the metric tensor $g$ has positive-definite (respectively, Minkowski) signature. NB: Concerning the use of the word Euclidean, see also my Phys.SE answer here.
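Concretely (a standard statement, added here only for reference): in suitable coordinates around any point $p$, the metric can be brought to its canonical form at that point, $$ g_{\mu\nu}(p) = \delta_{\mu\nu} = \mathrm{diag}(+1,\dots,+1) \quad \text{(locally Euclidean)}, $$ $$ g_{\mu\nu}(p) = \eta_{\mu\nu} = \mathrm{diag}(-1,+1,\dots,+1) \quad \text{(locally Lorentzian)}, $$ so the difference is the sign pattern of the eigenvalues of $g$, not a restriction of one geometry to a surface inside the other.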
{ "domain": "physics.stackexchange", "id": 67903, "tags": "general-relativity, spacetime, differential-geometry, metric-tensor" }
Are there improvements on Dana Angluin's algorithm for learning regular sets
Question: In her 1987 seminal paper, Dana Angluin presents a polynomial-time algorithm for learning a DFA from membership queries and theory queries (counterexamples to a proposed DFA). She shows that if you are trying to learn a minimal DFA with $n$ states, and your largest counterexample is of length $m$, then you need to make $O(mn^2)$ membership queries and at most $n - 1$ theory queries. Have there been significant improvements on the number of queries needed to learn a regular set? References and Related Questions Dana Angluin (1987) "Learning Regular Sets from Queries and Counterexamples", Information and Computation 75: 87-106 Lower bounds for learning in the membership query and counterexample model Answer: In his answer on cstheory.SE, Lev Reyzin directed me to Robert Schapire's thesis, which improves the bound to $O(n^2 + n\log m)$ membership queries in section 5.4.5. The number of counterexample queries remains unchanged. The algorithm Schapire uses differs in what it does after a counterexample query. Sketch of the improvement At the highest level, Schapire forces $(S,E,T)$ from Angluin's algorithm to satisfy the extra condition that, for a closed $(S,E,T)$ and each $s_1, s_2 \in S$, if $s_1 \neq s_2$ then $row(s_1) \neq row(s_2)$. This guarantees that $|S| \leq n$ and also makes the consistency property of Angluin's algorithm trivial to satisfy. To ensure this, he has to handle the results of a counterexample differently. Given a counterexample $z$, Angluin simply added $z$ and all its prefixes to $S$. Schapire does something more subtle: he instead adds a single element $e$ to $E$. This new $e$ will make $(S,E,T)$ not closed in Angluin's sense, and the update to restore closure will introduce at least one new string to $S$ while keeping all rows distinct.
The condition on $e$ is: $$\exists s, s' \in S, a \in \Sigma \quad \text{s.t} \quad row(s) = row(s'a) \; \text{and} \; o(\delta(q_0,se)) \neq o(\delta(q_0,s'ae))$$ where $o$ is the output function, $q_0$ is the initial state, and $\delta$ the update rule of the true 'unknown' DFA. In other words, $e$ must serve as a witness to distinguish the future of $s$ from that of $s'a$. To figure out this $e$ from $z$, we do a binary search for a suffix $r_i$ such that $z = p_ir_i$ with $0 \leq |p_i| = i < |z|$, at which the behavior of our conjectured machine differs based on one input character. In more detail, we let $s_i$ be the string corresponding to the state reached in our conjectured machine by following $p_i$. We use binary search (this is where the $\log m$ comes from) to find a $k$ such that $o(\delta(q_0,s_kr_k)) \neq o(\delta(q_0,s_{k+1}r_{k+1}))$. In other words, $r_{k+1}$ distinguishes two states that our conjectured machine finds equivalent and thus satisfies the condition on $e$, so we add it to $E$.
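A sketch of that binary search in Python (the DFA representation, oracle signature, access-string map, and the toy example at the end are all illustrative choices, not taken from the paper):

```python
def find_suffix(z, member, delta, access, q0):
    """Schapire-style counterexample processing (a sketch).

    z       -- counterexample string
    member  -- membership oracle for the true language
    delta   -- transition table of the conjectured DFA: delta[state][char]
    access  -- access string s_q for each conjectured state q
    q0      -- start state of the conjecture

    gamma(i) runs the conjecture on the prefix z[:i], then asks the TRUE
    DFA about access-string + remaining suffix z[i:].  A binary search
    finds adjacent i where the answer flips, witnessing two futures the
    conjecture wrongly merges; the suffix at the flip is the new e for E.
    """
    def gamma(i):
        q = q0
        for c in z[:i]:
            q = delta[q][c]
        return member(access[q] + z[i:])

    lo, hi = 0, len(z)
    # z being a counterexample guarantees gamma(lo) != gamma(hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if gamma(mid) == gamma(lo):
            lo = mid
        else:
            hi = mid
    return z[hi:]

# tiny sanity check: true language = number of a's divisible by 3,
# conjecture = a 2-state parity machine that confuses 0 mod 3 with 2 mod 3
member = lambda s: s.count("a") % 3 == 0
delta = {"s0": {"a": "s1"}, "s1": {"a": "s0"}}
access = {"s0": "", "s1": "a"}
e = find_suffix("aaa", member, delta, access, "s0")
```

On this toy instance the search returns the suffix "a", which indeed separates the futures of the two strings the parity conjecture maps to the same state.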
{ "domain": "cs.stackexchange", "id": 6352, "tags": "algorithms, learning-theory, machine-learning" }
Base class for implementing IComparable
Question: If I create a class that implements IComparable<T>, I must implement CompareTo(T). It is also recommended that I implement IEquatable<T> and the non-generic IComparable. If I do all that, I am required or encouraged to:

- Override GetHashCode()
- Implement CompareTo(Object)
- Override Equals(Object)
- Implement operator ==(T, T)
- Implement operator !=(T, T)
- Implement operator >(T, T)
- Implement operator <(T, T)
- Implement operator >=(T, T)
- Implement operator <=(T, T)

That's 9 additional methods, most of which depend on the logic that compares two instances of the class. Rather than having to implement all those methods in every class that implements IComparable<T>, I decided to create a base class that implements IComparable<T> and the other recommended interfaces (similar to the way Microsoft provides Comparer as a base class for implementations of IComparer<T>). It doesn't make sense to compare instances of two different classes that each inherit from the base class; preventing that was the main reason for making the class generic (although it makes coding a derived class a little more complicated). I would like to ask for a review of the code for the base class. Am I missing something? Can it be simplified? Is this a bad idea? Here is the base class:

    public abstract class Comparable<T> : IComparable, IComparable<T>, IEquatable<T>
        where T : Comparable<T>
    {
        public abstract override int GetHashCode();
        public abstract int CompareTo(T other);

        public int CompareTo(object obj)
        {
            T other = obj as T;
            if (other == null && obj != null)
            {
                throw new ArgumentException($"Objects of type {typeof(T).Name} can only be compared to objects of the same type", nameof(obj));
            }
            return CompareTo(other);
        }

        public override bool Equals(object obj)
        {
            return CompareTo(obj) == 0;
        }

        new public bool Equals(T other)
        {
            return CompareTo(other) == 0;
        }

        private static int Compare(Comparable<T> comp1, Comparable<T> comp2)
        {
            if (comp1 == null)
            {
                return ((comp2 == null) ? 0 : -1);
            }
            return comp1.CompareTo(comp2);
        }

        public static bool operator ==(Comparable<T> comp1, Comparable<T> comp2) { return Compare(comp1, comp2) == 0; }
        public static bool operator !=(Comparable<T> comp1, Comparable<T> comp2) { return Compare(comp1, comp2) != 0; }
        public static bool operator >(Comparable<T> comp1, Comparable<T> comp2) { return Compare(comp1, comp2) > 0; }
        public static bool operator <(Comparable<T> comp1, Comparable<T> comp2) { return Compare(comp1, comp2) < 0; }
        public static bool operator >=(Comparable<T> comp1, Comparable<T> comp2) { return Compare(comp1, comp2) >= 0; }
        public static bool operator <=(Comparable<T> comp1, Comparable<T> comp2) { return Compare(comp1, comp2) <= 0; }
    }

Below is a minimal implementation of the base class:

    public class SeasonCompare : Comparable<SeasonCompare>
    {
        public int Number { get; set; }

        public override int GetHashCode()
        {
            return Number;
        }

        public override int CompareTo(SeasonCompare other)
        {
            if (other == null)
            {
                return 1;
            }
            return Number.CompareTo(other.Number);
        }
    }

Answer: "It is also recommended that I implement ... the non-generic IComparable." I don't see that recommendation in the current doc for IComparable. I would recommend against it: having the non-generic method turns compile-time errors into runtime errors, which are more expensive to find and fix.

    private static int Compare(Comparable<T> comp1, Comparable<T> comp2)
    {
        if (comp1 == null)
        {
            return ((comp2 == null) ? 0 : -1);
        }
        return comp1.CompareTo(comp2);
    }

For consistency, this requires that subclasses guarantee that CompareTo(null) returns a positive value, but that requirement isn't documented. Perhaps a better solution would be:

    private static int Compare(Comparable<T> comp1, Comparable<T> comp2)
    {
        if (comp1 == null)
        {
            return comp2 == null ? 0 : 0.CompareTo(comp2.CompareTo(comp1));
        }
        return comp1.CompareTo(comp2);
    }

That way the only requirement is that CompareTo(null) be consistent.
There may be a more elegant way of inverting the sense of a comparison, but that's the easiest one I can think of which doesn't fail on corner cases.
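The "implement one comparison, derive the other operators" pattern this base class is reaching for exists ready-made in other languages; for instance Python's functools.total_ordering decorator (shown here only as a cross-language illustration of the design, with a made-up Season class) generates the missing rich-comparison methods from __eq__ and a single ordering method:

```python
from functools import total_ordering

@total_ordering
class Season:
    """Orderable by season number; only __eq__ and __lt__ are hand-written.

    total_ordering fills in __le__, __gt__, and __ge__ automatically.
    """

    def __init__(self, number):
        self.number = number

    def __eq__(self, other):
        if not isinstance(other, Season):
            return NotImplemented
        return self.number == other.number

    def __lt__(self, other):
        if not isinstance(other, Season):
            return NotImplemented
        return self.number < other.number

    def __hash__(self):
        return hash(self.number)
```

The design question is the same one the review raises: the derived operators are only as consistent as the one comparison they are built from.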
{ "domain": "codereview.stackexchange", "id": 32979, "tags": "c#" }
Why does the space of pure qudit states have dimension $2(D-1)$, rather than $D^2-2$?
Question: It is well known that two-dimensional states, that is, qubits, can be represented using the Bloch sphere: all pure states lie on the surface of a three-dimensional sphere, while all the mixed states are in its interior. This is consistent with a simple parameter counting argument: the space of all qubits is the space of all $2\times 2$ positive Hermitian operators with unit trace, which has dimension $2^2-1=3$, while for the pure states there are only $2\times2 - 2=2$ real degrees of freedom (two complex coefficients minus normalization and global phase). The same conclusion can be reached by considering that the pure states are a subset of all states satisfying the single additional constraint that $\operatorname{Tr}(\rho^2)=1$. What happens for higher dimensional qudits? Let's consider the states living in a Hilbert space of dimension $D$. The space of all states is again the space of positive Hermitian $D\times D$ operators with unit trace, which has dimension $D^2-1$. The pure states can on the other hand be represented as length-$D$ complex vectors, so the number of real degrees of freedom, after having considered the global phase and the normalization condition, seems to be $2D-2$. However, the set of pure states is also equal to the subset of all those states $\rho$ such that $\operatorname{Tr}(\rho^2)=1$. Now this seems like a single real condition, which therefore would make me think that the dimension of the space of pure $D$-dimensional states is $D^2-2$, in direct contrast with the argument above. What is the correct way to think about this? Answer: The Hilbert space counting which gets $2D-2$ is correct. When we think about parameter counting in the way you have in this question, we are implicitly assuming that the equations are sufficiently "generic" so that intersections work the way they do in linear algebra. 
This is not always the case, particularly when we consider equations with singularities or which are defined on spaces with boundaries. For an extreme example, in a normed $N$-dimensional real vector space, the equation $|\vec v|^2 = 0$ is a single equation, but it reduces the $N$-dimensional space down to a single (apparently $0$-dimensional) point. When we write the equation $\text{Tr} \rho^2 = 1$, a slightly more complicated version of the same thing is happening. If you diagonalize the density operator, you will get a set of $D$ real eigenvalues $\lambda_i$. These must each be non-negative for a density operator. Additionally, we know that $\rho$ has unit trace, meaning that $\sum_i \lambda_i = 1$. This means that each $\lambda_i \in [0,1]$. Under these conditions, $\lambda_i \ge \lambda_i^2$ with equality only for $\lambda_i \in \{0,1\}$. Thus $\text{Tr} \rho^2 = \sum_i \lambda_i^2 \le \text{Tr} \rho = 1$, and the two are equal only if all the $\lambda_i$ are either $0$ or $1$, which means that exactly one is $1$ and the others are $0$. Note that this is just saying that $\rho$ is a projection onto the single eigenvector with eigenvalue $1$, meaning that $\rho = | \psi \rangle \langle \psi |$ for some $|\psi\rangle$. Let us also note that, in general, the boundary of the set of mixed states is not the set of pure states. Being on the boundary means that (just) one inequality becomes an equality, which means we only need one $\lambda_i = 0$. Being a pure state is a much stronger condition. I think this may be part of your confusion as the boundary of the set of mixed states does have dimension $D^2 - 2$. The Bloch sphere is an unhelpful example in this case, because since the Hilbert space is only $2$ dimensional, one eigenvalue going to $0$ is equivalent to being a pure state, but for larger $D$ that is not true. 
Note that this still looks like you only impose $D$ real equations, namely one per eigenvalue, meaning the naive dimension counting still appears to be wrong. Why is that? The answer is tied to the fact that our end result has a degeneracy; specifically we have $D-1$ eigenvectors of $\rho$ with eigenvalue $0$. Thus the system, described in this way, has a fictitious $U(D-1)$ symmetry rotating those vectors. If you apply such a rotation, the density operator does not change, but our naive counting would not realize that. We would think that we should subtract the dimensionality of $U(D-1)$, namely $(D-1)^2$. But this $U(D-1)$ does not act freely; a transformation which only changes the phase of a given $|\psi_i \rangle$ leaves $|\psi_i \rangle \langle \psi_i |$ invariant, meaning that any basis for the Hilbert space is stabilized by $U(1)^{D}$, so we actually have only $(D-1)^2 - D$ real equation redundancies. Now we can finally get the right counting: $$(D^2 - 1) - D - ((D-1)^2 - D) = 2D-2.$$ The "trick" that makes this counting work, whereas the more naive counting fails, is that these equations can all be imposed on the affine space of trace-class unit trace operators without requiring positive semi-definiteness and still get the right set of pure states. Positive semidefiniteness is the set of inequalities that bounds the space of mixed states, and if we don't impose it we don't have the issues arising before where the solutions to the equations are on the boundary of the space.
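A quick numerical illustration of the "boundary is not pure" point for $D=3$ (pure Python, no libraries; the two states below are just convenient examples): a rank-1 projector has $\operatorname{Tr}\rho^2 = 1$, while a state on the boundary, with one vanishing eigenvalue, does not.

```python
def purity(rho):
    """Tr(rho^2) for a real symmetric density matrix given as nested lists."""
    d = len(rho)
    return sum(rho[i][j] * rho[j][i] for i in range(d) for j in range(d))

# pure state |psi> = (|0> + |1>)/sqrt(2) embedded in D = 3
rho_pure = [[0.5, 0.5, 0.0],
            [0.5, 0.5, 0.0],
            [0.0, 0.0, 0.0]]

# on the boundary (one zero eigenvalue) but mixed: diag(1/2, 1/2, 0)
rho_boundary = [[0.5, 0.0, 0.0],
                [0.0, 0.5, 0.0],
                [0.0, 0.0, 0.0]]
```

Both matrices have a zero eigenvalue, so both sit on the boundary of the set of states, but only the first satisfies $\operatorname{Tr}\rho^2 = 1$.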
{ "domain": "physics.stackexchange", "id": 48113, "tags": "quantum-mechanics, quantum-information, quantum-states" }
Stop giving current to motor with Roboclaw
Question: I am not an expert on this topic, and I know this is kind of an old thread, but I am facing the same issue and would like some help or advice. I am using an Arduino and a RoboClaw 2x7A (old version). At first I was also stopping the motors using roboclaw.SpeedAccelM2(address, 0, 0); on each one, and it worked, but later I saw it was still consuming current. I used your suggestion and it works, but only for M1; M2 doesn't seem to stop receiving current. It is less than an ampere, but I would like it to be zero. I am uploading a piece of code so I can explain myself and show you what I am doing. This is the part that checks whether the error (difference between origin and goal) is greater than the deadzone: if greater, keep moving; if lower, the motor stops by itself, but it must also stop receiving current.

    // M1 > Azimut
    // M2 > Zenith
    if (abs(error1) > deadzone1) {
        roboclaw.SpeedAccelDeccelPositionM1(address, 0, 0, 0, posicionM1, depth1);
    } else {
        Serial.println("Azimut - error es menor que deadzone");
        roboclaw.DutyM1(address, 0);
    }
    if (abs(error2) > deadzone2) {
        roboclaw.SpeedAccelDeccelPositionM2(address, 0, 0, 0, posicionM2, depth2);
    } else {
        Serial.println("Zenith - error es menor que deadzone");
        roboclaw.DutyM2(address, 0);
    }

And this is the output (with some extra info):

    Azimut - error es menor que deadzone
    Zenith - error es menor que deadzone
    Encoder1:15531 80 Speed1:0 Temp: 47.10 error : -24
    Encoder2:15474 80 Speed2:0 Temp2: 0.00 error : -81
    Pos Obj - M1 : 15555 Pos Real - M1 : 19.97 azimuthGlobal - M1 : 20.00
    Pos Obj - M2 : 15555 Pos Real - M2 : 19.90 zenithGlobal - M2 : 20.00
    PWM Zen : 0 Azi : 0
    switch Pin Zen : 0 Azi : 0
    e-Stop : 0
    total revs Zen : 280000 Azi : 280000
    Corrientes Zen : 0.06 Azi : 0.00

At the end, as you can see, the Zen motor is still receiving 0.06 A, but the other motor is fully stopped. Why does this only work with one of them? Or is there some configuration I am doing wrong? Answer: Solved.
Controller error: even when the controller "says" there is current, there actually isn't any! Update: The problem is inside the RoboClaw controller. RoboClaw's forum moderator says the problem is that the current is too low to measure accurately. A current of 0.06 A is noise, not actual current. Anything under 1 amp isn't really accurate, and even with no motors attached (so there is definitely no current) you will usually see readings from 60 to 200 mA (0.06 to 0.2 A) coming from the current sensors. Also, the sensor is not very well calibrated, and there is no way a user can calibrate it. The current sense is measuring 0.001 V/amp. "We already do a lot of filtering to get reasonably good values, but there is no way to completely eliminate the noise or offset error (not without adding crazy costs)." Solution: just ignore any current under 1 amp. RoboClaw forum moderator's solution: you should use a conditional in your code to display zero if the value is less than around 200 mA to 1 amp. More info here
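The suggested display-side fix is a one-line threshold, sketched here in Python (the 0.2 A floor is just the lower end of the moderator's suggested 200 mA to 1 A range; the same conditional works verbatim in the Arduino sketch):

```python
NOISE_FLOOR_AMPS = 0.2  # readings below ~200 mA are sensor noise on this board

def displayed_current(raw_amps):
    """Report zero for readings below the current sensor's noise floor."""
    return 0.0 if abs(raw_amps) < NOISE_FLOOR_AMPS else raw_amps
```

With this, the spurious 0.06 A on the Zen motor displays as 0.0, while real motor currents above the floor pass through unchanged.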
{ "domain": "robotics.stackexchange", "id": 1439, "tags": "arduino" }
AMCL works intermittently?
Question: So I've been trying to get AMCL to work on my ad-hoc network using amcl_demo.launch. I've tried and can successfully run AMCL every time I attempt it with a wired connection. With the ad-hoc network, though, it fails a majority of the time, resulting in

    [ INFO] [#]: Subscribed to Topics: scan
    [ INFO] [#]: Requesting the map...
    [ INFO] [#]: Still waiting on map...
    [ INFO] [#]: Still waiting on map...
    [ WARN] [#]: You have set map parameters, but also requested to use the static map. Your parameters will be overwritten by those given by the map server
    [ INFO] [#]: Received a 544 x 512 map at 0.050000 m/pix
    [ WARN] [#]: Waiting on transform from /base_link to /map to become available before running costmap, tf error:
    [ WARN] [#]: Waiting on transform from /base_link to /map to become available before running costmap, tf error:
    [ WARN] [#]: Waiting on transform from /base_link to /map to become available before running costmap, tf error:

At first I thought this might be a bandwidth issue, but when I looked at the consumption it was ~1.5 Mb/s at the peak. And when AMCL has successfully run on the ad-hoc network, there didn't seem to be many issues with sending it navigation goals and controlling it via the teleoperation keyboard. I do remember there being lots of warnings and errors with the successful runs, but because it doesn't work that often, I can't replicate their messages to report them. What could be going wrong with my network or ROS such that amcl_demo.launch works only on rare occasions? Upon further investigation with tf_monitor, I've noticed that I get a large delay in /base_footprint to /odom until the map is received. The delay is on the order of 20 s. I also noticed that there are delays in the nodes /robot_pose_ekf and /robot_state_publisher that are more consistent across the runtime, around 5 s and 1 s respectively. I also tried to pay attention to the CPU usage.
At the beginning of running the launch file, the CPU usage is around 90%, but once all the processes have begun, it tapers off. When I run gmapping_demo.launch, I don't have nearly as many problems. UPDATE: Just in retesting with a LAN connection, I've come across the same problem; it just doesn't occur as often. Side note: the TurtleBot does have an odd behavior of not being able to move forward smoothly, whereas it can move smoothly in all other directions. Originally posted by mculp42 on ROS Answers with karma: 28 on 2013-03-13 Post score: 0 Answer: After talking to Bill Morris from iheartrobotics, I changed my setup a bit. He suggested that it may be a problem with node priorities or nodelet managers miscommunicating, since I had roscore running on a different computer than the TurtleBot. Instead of having a master node on a separate machine, I made the TurtleBot laptop run ROS itself. With that, it has consistently run without the costmap errors, and for the most part it runs more smoothly with commands. Now rviz and viewing the map in real time are slow, but that's alright. Originally posted by mculp42 with karma: 28 on 2013-03-19 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 13350, "tags": "ros, navigation, amcl, network" }
Electrons Boiled Off At Cathode In Vacuum-Tube Diodes
Question: This is my first time asking a question on this forum so I hope my question wouldn't violate any policies. I have been trying to learn about the vacuum-tube diodes and there is a point mentioned in a lot of texts that I have not been able to understand. In a space-charge limited condition, the cathode is heated and electrons are boiled off and leave the cathode at zero velocity. At the same time, the electric field at the cathode is zero. This is my problem. If the electric field is zero at the cathode, how could the electrons there, which have zero velocity, accelerate and move to the anode? Answer: Good question. The electrons have charge, and they repel each other. So the "sea of electrons" that is formed close to the cathode slowly expands - eventually the outermost electrons start to "feel" the anode and move away. In the space charge limited case, the rate at which electrons leave the solid is limited by their internal energy and the work function of the material - there is no help from the electric field. But that doesn't mean that electrons at the edge of the cloud don't get pushed away.
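For reference (this is standard textbook material, not part of the original answer): the two boundary conditions in the question, zero field and zero emission velocity at the cathode, are exactly the assumptions behind the Child-Langmuir law for a planar vacuum diode. Solving Poisson's equation between the electrodes with $E=0$ and $v=0$ at the cathode gives the space-charge-limited current density $$ J = \frac{4\varepsilon_0}{9}\sqrt{\frac{2e}{m_e}}\,\frac{V^{3/2}}{d^2}, $$ where $V$ is the anode voltage and $d$ the cathode-anode spacing. The current is then set by the space charge itself, not by how many electrons the cathode could emit, which is the sense in which the diode is "space-charge limited."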
{ "domain": "physics.stackexchange", "id": 38457, "tags": "electrostatics, electric-fields, electric-current, semiconductor-physics, coulombs-law" }
Prove that Slater determinants form a complete basis
Question: I want to prove that the basis vectors of the antisymmetrized, $N$-particle vector space generated by the Slater determinants form a complete basis. Attempt: I started from the definition of a Slater determinant as $$ \lvert\psi\rangle = \frac{1}{\sqrt{N!}} \textrm{det} \left[ \,\,\lvert u_1\rangle\, |u_2\rangle\, ... |u_N\rangle \,\,\right], $$ and want to prove that $$ \sum_{u_1\, u_2\, ... ,u_N} \frac{1}{N!}|\psi\rangle \langle \psi| = I. $$ OK, first I think that $N!$ shouldn't be there in the equation immediately above, because each Slater determinant already contains it. Am I right? Secondly, about solving the problem itself: when expanded, each determinant will have $N!$ terms, so each $|\psi\rangle \langle\psi|$ contains $(N!)^2$ terms, $N!$ of which are "self" terms having the form $$ \sum_{u_1\, u_2\, ... ,u_N} |u_1\rangle\, |u_2\rangle\, ... |u_N\rangle \langle u_1|\, \langle u_2|\, ... \langle u_N| = I $$ Since there are $N!$ terms of this form, they add up to $N!\,I$. Now I am concerned with the "cross" terms in the product of the two determinants. For if they are all zero, then it is straightforward to see that we will get something like $N!I/N! = I$, hence proved. But are the cross terms indeed zero? EDIT: To illustrate what I have been working on, I will take the special case $N=2$. $$ \psi_{kl}(x_1,x_2) = \frac{1}{\sqrt{2}}(u_k(x_1)u_l(x_2)-u_k(x_2)u_l(x_1)) $$ Then $$ \sum_{k,l} \psi_{kl}(x_1,x_2) \psi^*_{kl}(x_1,x_2) = 1 $$ (note that here I don't use an extra $2!$). Inserting the expression for $\psi_{kl}(x_1,x_2)$ we will get four terms. The first two are $$ \frac{1}{2}\left(\sum_{k,l} u_k(x_1)u_l(x_2)u^*_k(x_1)u^*_l(x_2) + \sum_{k,l} u_l(x_1)u_k(x_2) u^*_l(x_1)u^*_k(x_2) \right)= 1 $$ because $\sum_{k,l} u_k(x_1)u_l(x_2)u^*_k(x_1)u^*_l(x_2) = \langle x_1|\sum_{k} |u_k\rangle \langle u_k| |x_1\rangle \langle x_2|\sum_{l} |u_l\rangle \langle u_l| |x_2\rangle = \langle x_1|x_1\rangle \langle x_2|x_2\rangle = 1$.
The other two terms are the cross terms $$ \frac{1}{2}\left(\sum_{k,l} u_k(x_1)u^*_k(x_2)u^*_l(x_1)u_l(x_2) + \sum_{k,l} u_l(x_1)u^*_l(x_2) u^*_k(x_1)u_k(x_2) \right)= 0 $$ Because $\sum_{k,l} u_k(x_1)u^*_k(x_2)u^*_l(x_1)u_l(x_2) = \langle x_1|x_2\rangle \langle x_2|x_1\rangle = 0$. In this last step I used the fact that $x_1$ must be different from $x_2$, otherwise all eigenfunctions vanish. Summing all those four terms, I get 1 as required. Answer: The nontrivial point to understand here is that the antisymmetrized $N$-particle states do not form a basis for the complete space of $N$-particle states, but only for its antisymmetric part. Consequently, they will not satisfy a completeness relation in the former, but only in the latter. Consider the simpler example of 2 fermions over two modes. The only possible state in this case is: $$ \lvert12\rangle_{\mathcal A} = \frac{1}{\sqrt2} (\lvert 12\rangle - \lvert 21\rangle ). $$ The matrix representation (in the total tensor product Hilbert space) of this state is: $$ \lvert12\rangle_{\mathcal A}\langle12\rvert = \frac{1}{2}\begin{pmatrix}0&0&0&0\\0&1&-1&0\\0&-1&1&0\\0&0&0&0\end{pmatrix},$$ which is quite clearly not the identity one could naively expect. Let us also work out the slightly more complex case of 2 fermions over 3 modes. There are now $\binom{3}{2}=3$ basis states: $$ \lvert12\rangle_{\mathcal{A}} = \frac{1}{\sqrt2}(\lvert12\rangle-\lvert21\rangle),\\ \lvert13\rangle_{\mathcal{A}} = \frac{1}{\sqrt2}(\lvert13\rangle-\lvert31\rangle), \\ \lvert23\rangle_{\mathcal{A}} = \frac{1}{\sqrt2}(\lvert23\rangle-\lvert32\rangle).
$$ The projector corresponding to the first one now has the matrix representation: $$ \lvert12\rangle_{\mathcal A}\langle12\rvert = \frac{1}{2}\begin{pmatrix}0&0&0&0&0&0&0&0&0\\0&1&0&-1&0&0&0&0&0\\0&0&0&0&0&0&0&0&0\\0&-1&0&1&0&0&0&0&0\\0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0\end{pmatrix},$$ and similar matrices for the other two states, which, again, you can easily see do not sum up to anything resembling an identity matrix. What you can do is show that these antisymmetric states nonetheless form a complete basis for the antisymmetric component of the Hilbert space, $\mathcal{H}_{\mathcal A}$. Let us call $N$ the number of fermions and $M$ the number of modes in which each fermion can be. It follows that $\mathcal{H}_{\mathcal A}$ has dimension $\binom{M}{N}$, which is also the number of possible antisymmetric states (you can see this by remembering that the binomial factor $\binom{M}{N}$ counts the number of ways in which you can choose groups of $N$ objects among a collection of $M$). The normalization follows from the definition of the antisymmetric state: $$ {}_{\mathcal A}\langle u_1\cdots u_N \rvert u_1\cdots u_N \rangle_{\mathcal A} = \frac{1}{N!} \sum_{\sigma,\tau} (-1)^{\sigma+\tau} \prod_{k=1}^N \langle u_{\sigma(k)}\rvert u_{\tau(k)}\rangle\\ = \frac{1}{N!} \sum_{\sigma,\tau} (-1)^{\sigma+\tau} \prod_{k=1}^N \delta_{\sigma(k),\tau(k)} = \frac{1}{N!} \sum_{\sigma} 1 = 1. $$ To finally prove the orthogonality we consider two states, $\lvert u_1\cdots u_N\rangle_{\mathcal A}$ and $\lvert v_1\cdots v_N\rangle_{\mathcal A}$, where at least one element of the latter, say $v_1$, is different from all the elements of the former: $ \forall i \in \{1,...,N\}, \,\, v_1 \neq u_i. $ Note that this is equivalent to saying that the two states are different.
If this is true, then we have: $$ {}_{\mathcal A}\langle u_1\cdots u_N \rvert v_1\cdots v_N \rangle_{\mathcal A} = \frac{1}{N!} \sum_{\sigma,\tau} (-1)^{\sigma+\tau} \prod_{k=1}^N \langle u_{\sigma(k)}\rvert v_{\tau(k)}\rangle = 0, $$ where every term of the above sum is trivially zero, because the product always contains a bra-ket of the form $\langle u_j \rvert v_1\rangle$, which vanishes because $v_1$ is different from all the $u_j$.
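The 2-fermions-over-3-modes case discussed in the answer can be verified numerically. The sketch below (Python with NumPy; mode labels and variable names are mine) checks that the three projectors sum to the antisymmetrizer $(I - \mathrm{SWAP})/2$, i.e. the projector onto the antisymmetric subspace, and not to the identity:

```python
import numpy as np
from itertools import combinations

d = 3          # number of modes
e = np.eye(d)  # single-particle basis vectors

def anti(i, j):
    # |ij>_A = (|i>|j> - |j>|i>) / sqrt(2)
    return (np.kron(e[i], e[j]) - np.kron(e[j], e[i])) / np.sqrt(2)

# Sum of projectors over all C(3,2) = 3 antisymmetric basis states
P = sum(np.outer(v, v) for v in
        (anti(i, j) for i, j in combinations(range(d), 2)))

# Swap operator on the two-particle space, and the antisymmetrizer (I - SWAP)/2
S = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        S[i * d + j, j * d + i] = 1.0
A = (np.eye(d * d) - S) / 2

print(np.allclose(P, A))             # True: complete on the antisymmetric part
print(np.allclose(P, np.eye(d * d))) # False: not the identity on the full space
```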
{ "domain": "physics.stackexchange", "id": 35117, "tags": "quantum-mechanics, homework-and-exercises, fermions, many-body" }
Do Timers run in different threads? [roscpp]
Question: I'm wondering: if I set up different timers, do they run in different threads when using roscpp? The background is that one Timer callback is taking a longer time to be executed and, therefore, I want to run it at a lower frequency. But it should not interrupt the execution frequency of ros::spin() (because of the callbacks there) and the other Timer callback. So my open questions are: Do different Timer callbacks run in different threads? Do callback functions of the same Timer run in different threads, and what happens if the frequency of a Timer is too high and execution of its callback takes longer than the period? Do Timers run in a different thread than ros::spin(), or is ros::spin() setting the minimal time? From what I've read, this is the case and the Timer is executed during ros::spin(). If callbacks during ros::spin() take longer than the Timer duration, what will happen? And as a possible solution: If I switch to AsyncSpinner, does this also affect the Timer callbacks? What I'm doing now is described in this solution: https://answers.ros.org/question/53055/ros-callbacks-threads-and-spinning/?answer=53088#post-id-53088 Typically, if you want to do time-consuming computations such as "leg detection", the callbacks are definitively not the place to do it. Just, copy your data to a LegDetector object instance and call in your main thread the method that will do the heavy work. However, how should this not block my other callbacks? I'm calling the computationally intense function during while(ros::ok()) and it is blocking the ROS spinner and, therefore, every other callback. So what is the intention behind this solution? Sources: http://wiki.ros.org/roscpp/Overview/Timers https://roboticsbackend.com/roscpp-timer-with-ros-publish-data-at-a-fixed-rate/ : Once created, the timer will return so the following of the code will be executed. It will then, in another thread, call the callback every X seconds, where X is the first parameter you gave to the Timer.
https://answers.ros.org/question/240388/do-topic-callbacks-and-timer-callbacks-run-in-the-same-thread/ : No, topicCallback and timerCallback cannot be executed in parallel, as long as you are using single-threaded spinning. For example, if you are using an AsyncSpinner, callbacks are called from multiple threads and additional care needs to be taken. Originally posted by prex on ROS Answers with karma: 151 on 2020-12-08 Post score: 2 Answer: After spending quite some time looking for information about ROS callback queues, multi-threaded spinners and timers, I think I can answer my questions. Seems like the ROS documentation could be improved on this front. A very useful article about multi-threaded spinners and multiple callback queues can be found here: https://levelup.gitconnected.com/ros-spinning-threading-queuing-aac9c0a793f As known, ROS does some internal threading. It runs each subscriber in a receiver thread and each timer in a timer thread. This allows data to be received independently of the spinner. However, these threads do not process the callback, they just collect it. For the subscriber, it adds an element to the subscriber queue, and for the timer, it adds the timer callback to the callback queue. Timer callbacks end up in the same callback queue as subscriber callbacks. Now, on each spin the subscriber queue is added to the callback queue (the number of elements to keep in the subscriber queue can be set; if the spinner runs at a much lower frequency than subscriber callbacks arrive, some might be dropped). The complete callback queue is processed in first-in-first-out order. This means in each spin, every element in the callback queue gets processed. Callbacks should always be fast, because multiple of the same callback can end up in the queue if the subscriber queue is not limited to 1.
For the timer to work properly, it is required that the spinner runs regularly at a fast frequency, because the ROS timer just adds elements to the callback queue when spin is called and might only execute them after other callbacks are processed. Also, for this reason, it makes sense to use timers instead of limiting the frequency of the spinner. Limiting the frequency means that callbacks can no longer be processed promptly just because of one function which should be executed at a certain frequency. But the spinner does much more and we should not limit it. Since the spinner is a bottleneck if processing the callback queue takes too much time, it makes sense to use a multi-threaded spinner. If you use a multi-threaded/async spinner, keep in mind that a lock is applied for a specific callback (no concurrency by default) and multiple callbacks of the same type are not processed in parallel. However, if enough threads are available, the next unlocked callback in the queue will be called (https://stackoverflow.com/a/48544551/8623933) (https://roboticsbackend.com/ros-asyncspinner-example/ ). This also means that if a timer callback takes longer than the timer duration, it will start with a delay. No new timer callback will be added to the callback queue during this time (https://answers.ros.org/question/248656/does-callbacks-get-drop-from-queue-when-it-is-exceeded-it-expected-execution-time-by-too-much/). If there are processes which are more time-critical and cannot wait until the elements in front of the callback queue are processed, we can use multiple callback queues and assign subscribers and timers (http://docs.ros.org/en/diamondback/api/roscpp/html/structros_1_1TimerOptions.html) to them. There are different options for doing this. The gitconnected article linked above describes how to create another spinner in a new thread.
Another option is to add an AsyncSpinner with its own callback queue in addition to a global callback queue as shown here: https://gist.github.com/bgromov/e6f5eb142346b3c88e9f96bce17eee92 So, in summary, the solution to my problem is to create an AsyncSpinner, which can process the timer callback in parallel so that it does not block the faster callbacks. If this is not enough, I would start using multiple callback queues for critical processes. Two more comments about spinners and how they process the callback queue: There are 2 steps for a spinner to execute a callback from the callback queue. First, it needs to get the callback from the queue, and second, it executes the callback. During the first step, there's a lock on the callback queue so that no other threads can access it at the same time. But once the callback is loaded, another callback can be loaded into another thread. Step 2 works in parallel. (Explained by the author of the gitconnected article) The AsyncSpinner should be able to continue executing a new spin (begin with a new callback from the queue) in parallel if a thread is available, even if one callback takes much more time. (See comment here: https://stackoverflow.com/a/48544551/8623933) Originally posted by prex with karma: 151 on 2020-12-10 This answer was ACCEPTED on the original site Post score: 7 Original comments Comment by jarvisschultz on 2020-12-10: Very helpful write-up! Thanks for doing some research. Couple of quick notes: The threading model is quite different in rospy https://answers.ros.org/question/9543/rospy-threading-model/ Agree that the documentation on these things is not great. Reminder, this documentation is a wiki. Feel free to add your own contributions and edits. Comment by prex on 2020-12-10: Thanks @jarvisschultz. I added the hint that I'm talking about roscpp.
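The single-queue model described in this answer can be illustrated without ROS at all. The sketch below is plain Python with made-up names, not the roscpp API; it mimics how the receiver threads only enqueue work while a single-threaded spinner dequeues and executes it in FIFO order, so one slow callback delays everything queued behind it:

```python
import queue
import time

cb_queue = queue.Queue()   # stand-in for the node's global callback queue
log = []

def slow_timer_cb():       # stands in for a heavy timer callback
    time.sleep(0.05)
    log.append('timer')

def fast_sub_cb():         # stands in for a cheap subscriber callback
    log.append('sub')

# "Receiver" side: the internal threads only *enqueue* work, they never run it
cb_queue.put(slow_timer_cb)
cb_queue.put(fast_sub_cb)

def spin_once():
    # Single-threaded spinner: drain the queue in first-in-first-out order
    while not cb_queue.empty():
        cb_queue.get()()

spin_once()
print(log)   # ['timer', 'sub'] -- the fast callback had to wait behind the slow one
```

An AsyncSpinner corresponds to several worker threads pulling from this queue, which is why a slow callback then no longer blocks the fast ones.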
{ "domain": "robotics.stackexchange", "id": 35849, "tags": "ros, ros-melodic, timer, asyncspinner, multi-thread" }
Is the double slit pattern a standing wave?
Question: This question is about terminology. The double slit pattern has nodal lines and antinodal lines, and therefore resembles a standing wave. However, the antinodal lines within the double slit pattern resemble travelling waves. Do the terms standing wave and travelling wave have a definition, and if so, are those definitions mutually exclusive? -- Edit, for clarification: Naively, I would tend to think that only a Chladni pattern is a true standing wave, because its antinodal areas are standing waves, not travelling waves. [image derived from a Wikimedia Commons image] Answer: Yes, the interference pattern produced by two slits (or, equivalently, two oscillators with the same frequency that are in phase with each other) is a type of two-dimensional standing wave. The nodes, where the amplitude of the combined wave is zero, lie along lines where the difference in the distance from the two slits is an odd number of half-wavelengths. Along these lines the two waves are $180^\circ$ out of phase so they cancel each other out. There are also lines of anti-nodes, where the difference in the distance from the two slits is a whole number of wavelengths. Along these lines the two waves are exactly in phase, so the amplitude of the combined wave is the sum of the amplitudes of the individual waves. The mid-line exactly halfway between the two slits is one example of a line of anti-nodes.
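The node/anti-node condition stated in the answer can be checked with a two-line superposition: two unit waves with path difference $\Delta$ combine to an amplitude $2\lvert\cos(\pi\Delta/\lambda)\rvert$. A quick numerical sketch (units are arbitrary):

```python
import numpy as np

lam = 1.0  # wavelength (arbitrary units)

def amplitude(delta):
    """Amplitude of two superposed unit waves with path difference delta."""
    # Each wave is a unit phasor; the relative phase is 2*pi*delta/lam
    total = np.exp(0j) + np.exp(2j * np.pi * delta / lam)
    return abs(total)

print(amplitude(0.5 * lam))  # ~0 -> node (odd number of half-wavelengths)
print(amplitude(1.0 * lam))  # ~2 -> anti-node (whole number of wavelengths)
```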
{ "domain": "physics.stackexchange", "id": 84107, "tags": "waves, interference, double-slit-experiment" }
How to align more than 2 sequences with Needleman-Wunsch?
Question: I have a DNA sequence of a protein and around 1200 zinc finger target sequences. These zinc finger target sequences are 9 bp long, resulting in BLASTn not finding them in the sequence. Needleman-Wunsch does find them, however, I can only search/align the DNA protein sequence with 1 zinc finger at a time. Is there a way to do this simultaneous (serial or parallel, computational time is not a big problem) for all 1200 zinc fingers and have the result in a table as with BLAST? Answer: (I assume that by "DNA protein" you mean "coding DNA".) "I can only search/align the sequence with 1 zinc finger at a time" - does this mean that you have not found out a way to do so, or that you desire to proceed in this fashion? I think that it is the first, based on the title of your question, but it is not at all clear what you are looking to do. If you want to locate specific 9bp motifs in your sequence, then I can suggest this small piece of code that I wrote that looks for exact matches of specific sequences, that outputs a BLAST-like table. It is slow relative to BLASTN but it does not use the short-match-ignoring heuristics that BLASTN relies upon so it may be helpful. The code was written in response to this bioinformatics SE question, which seems somewhat similar to your question here. Note that you will have to input each expected 9bp query against the reference sequence in the form of a FASTA file.
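The linked code is not reproduced here, but an exact-match scan of the kind the answer describes is simple to sketch. Everything below (the toy sequence, the motif, and the function name) is made up for illustration; coordinates in the output are 1-based, BLAST-style:

```python
def find_motifs(seq, motifs):
    """Exact-match scan; returns (motif, start, end) hits with 1-based coordinates."""
    hits = []
    for m in motifs:
        i = seq.find(m)
        while i != -1:
            hits.append((m, i + 1, i + len(m)))
            i = seq.find(m, i + 1)   # continue past this hit; allows overlaps
    return hits

seq = "AAGGCTAGCTAAA"          # toy coding-DNA sequence
fingers = ["GCTAGCTAA"]        # toy 9 bp zinc-finger target
for motif, start, end in find_motifs(seq, fingers):
    print(f"{motif}\t{start}\t{end}")
```

Looping this over all ~1200 targets is just a longer `motifs` list; since it is exact matching, there is no heuristic that can skip short hits the way BLASTN does.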
{ "domain": "biology.stackexchange", "id": 11767, "tags": "bioinformatics, sequence-alignment, blast" }
When can you reorder log operations?
Question: For example, you can reorder a softmax + nl (negative likelihood) to log_softmax + nll (negative log-likelihood) Essentially changing log(softmax(x)) to softmax(log(x)) However, what are the rules to reordering logging of things? Answer: In general, you cannot reorder like that. The example you give is a very special case that only works because softmax is based on the exponential function, which is the inverse function of the natural logarithm.
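A quick numerical check of the answer's point: the valid rewrite moves the logarithm into the softmax (log_softmax computes $\log(\mathrm{softmax}(x)) = x - \operatorname{logsumexp}(x)$); it is not the commuted form softmax(log(x)):

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())   # shift by max for numerical stability
    return z / z.sum()

x = np.array([1.0, 2.0, 3.0])

log_sm = np.log(softmax(x))          # log(softmax(x))
swapped = softmax(np.log(x))         # softmax(log(x)) -- a different thing entirely

print(np.allclose(log_sm, swapped))  # False: the two operations do not commute

# What log_softmax actually computes: x - logsumexp(x)
lse = np.log(np.exp(x).sum())
print(np.allclose(log_sm, x - lse))  # True
```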
{ "domain": "datascience.stackexchange", "id": 7129, "tags": "logistic-regression, mathematics, softmax" }
A lost and confused newbie - installation
Question: I am new to Linux, Ubuntu, and ROS. From the installation page: I ran this in the terminal, and there were no problems: sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu lucid main" > /etc/apt /sources.list.d/ros-latest.list' And no problems with this: sudo apt-get update I ran this a couple of times in the terminal: sudo apt-get install ros-fuerte-desktop-full and the terminal gave this: Reading package list... Done Building dependency tree Reading state information... Done ros-fuerte-desktop-full is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 13 not upgraded. Now what? Remember, I am clueless about the whole thing. I am running Ubuntu 12.04. Originally posted by kentoo on ROS Answers with karma: 21 on 2012-05-11 Post score: 2 Answer: That means you've completed the installation. Good start! The tutorials are the thing to do next. Originally posted by Mac with karma: 4119 on 2012-05-11 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 9352, "tags": "ros, installation" }
Prove that a language is bounded if and only if it's finite
Question: Let's assume $L$ is a language. $L$ is bounded if for some natural number $n \in \mathbb N$ we have $|x| \leq n$ for every $x \in L$, where $|x|$ is the length of a string. Let's also assume that $L$ is over a finite alphabet $\Sigma$. How to prove that $L$ is bounded if and only if it's finite? Answer: The claim holds only for languages over finite alphabets. Bounded $L$ $\implies$ Finite $L$ Let $\Sigma$ be the alphabet of $L$ and $L$ be bounded by some $n \in \mathbb{N}$. The largest possible such $L$, call it $L^\#$, is $\bigcup_{i=0}^{n} \Sigma^i$. $L^\#$ is finite since $|L^\#| = \sum_{i=0}^{n} |\Sigma|^i$. Therefore, any $L \subseteq L^\#$ must also be finite. Finite $L$ $\implies$ Bounded $L$ Let $x^\#$ denote the longest string in $L$. Such a string must always exist since $L$ is finite. Then, $\forall x \in L \ldotp |x| \leq |x^\#|$ and thus, $L$ is bounded. A simple counterexample for the infinite-alphabet case: Consider an infinite alphabet $\Sigma = \{ s_0, s_1, ... \}$. The language $L = \Sigma$ is bounded since $\forall x \in L \ldotp |x| \leq 1$, but is infinite.
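The counting step in the first direction can be sanity-checked by brute-force enumeration; a small sketch (the alphabet and the bound are chosen arbitrarily):

```python
from itertools import product

sigma = "ab"   # a finite alphabet, |Sigma| = 2
n = 3          # length bound

# Enumerate every string of length <= n over sigma (L#, the largest bounded language)
bounded = [''.join(t) for i in range(n + 1) for t in product(sigma, repeat=i)]

# |L#| = sum_{i=0}^{n} |Sigma|^i = 1 + 2 + 4 + 8 = 15
print(len(bounded))
print(sum(len(sigma) ** i for i in range(n + 1)))
```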
{ "domain": "cs.stackexchange", "id": 14627, "tags": "formal-languages, regular-languages, finite-automata" }
For loop that iterates through files and iteratively appends lists storing values from the read file
Question: How can I speed up this code, which reads text files sequentially and appends to lists? fp and fp_2 refer to two different files in the same subdirectory here import os import glob from IPython import embed import numpy as np dirs = os.listdir("/home/Set/") poseList = [] #Store poses for all files featuresList = [] # Store features def data(): for dir_name in dirs: fp = '/home/Set/left' fp_2 = '/home/Set/fc/fc2_0.npy' if os.path.exists(fp) and os.path.exists(fp_2): files = glob.glob(fp+"/*.txt") curr_pose = [] #Store curr pose for f in files: with open(f) as pose: pl = pose.readlines() pl = [p.strip() for p in pl] pl = map(float, pl) # str ->float curr_pose.append((pl)) poseList.append(curr_pose) #append to main list featuresList.append(np.load(fp_2)) print "Not available", dir_name data() embed() Answer: Variable naming Your variables' names do not convey their purpose and are even misleading. E.g. fp and fp_2 suggest file pointers, but are actually strings. Notwithstanding, for a better reference, I will use your variable names below. You should also follow PEP 8 (poseList vs. pose_list etc.). Recurring declaration You keep redefining the same constants fp and fp_2 in a loop: for dir_name in dirs: fp = '/home/Set/left' fp_2 = '/home/Set/fc/fc2_0.npy' Better declare them beforehand: fp = '/home/Set/left' fp_2 = '/home/Set/fc/fc2_0.npy' for dir_name in dirs: Avoiding race conditions By checking for existing files and folders using if os.path.exists(fp) and os.path.exists(fp_2): you've created yourself a race condition. It is better to just try to open the file and handle the respective exception raised if something goes wrong. Joining paths Your path concatenation fp+"/*.txt" might work for you. But there is os.path.join to do this safely and get a sanitized path: os.path.join(fp, "*.txt") # Leading "/" removed!
Just for the record: In Python 3 (which you are not using assuming from your print statement) you can use pathlib.Path to handle your paths: fp = Path('/home/Set/left') … str(fp.joinpath('*.txt')) Reading lines Files are iterable, so you can get a file's lines more easily: for f in files: with open(f) as pose: pl = [line.strip() for line in pose] …
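Putting the suggestions above together, a consolidated sketch of the reading loop might look like this (function names are mine; the try/except replaces the exists() pre-check, os.path.join builds the glob pattern, and the file is iterated directly):

```python
import glob
import os

def read_pose(path):
    """Read one pose file: one float per line -> list of floats."""
    with open(path) as fh:
        return [float(line) for line in fh if line.strip()]

def load_poses(left_dir):
    """Collect the poses from every .txt file under left_dir."""
    poses = []
    for path in sorted(glob.glob(os.path.join(left_dir, "*.txt"))):
        try:
            poses.append(read_pose(path))
        except OSError:
            print("Not available:", path)   # handle the error, don't pre-check
    return poses
```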
{ "domain": "codereview.stackexchange", "id": 28151, "tags": "python, performance" }
Convert big collection into simple array
Question: Is there a better way to do this? const data = [{id: 1, name: 'item1'}, {id: 2, name: 'item2'}] const a = [] data.forEach(x => { a[x.id] = x.name }) return a // ['1': 'item1', '2': 'item2'] Answer: Yes. Use Array.prototype.reduce and build an object. const a = data.reduce((items, item) => (items[item.id] = item.name, items), {}); // a = {1: 'item1', 2: 'item2'} You can still use the bracket notation to access the items, i.e. a[id]. A problem with using IDs as array indices is that you introduce gaps in the array. You also get a false array length. Take the following example: const a = []; a[999] = {/* User data */}; a.length // 1000; You only added a single item, with id "999" but now the array reports 1000 in length. Items 0 to 998 are actually empty.
{ "domain": "codereview.stackexchange", "id": 22014, "tags": "javascript, array, ecmascript-6" }
Suppose you put your hands on a wall and push it
Question: Suppose you put your hands on a wall and push it. If neither the wall nor you accelerates, how could one calculate the force? Answer: This is a very common misconception about Newton's third law. "The sum of these two forces is zero, that is why nothing is moving" — that is wrong. The two forces act on different bodies. You do get pushed by the same amount you push the wall, but you don't move horizontally because of the friction between the ground and your feet. We can't directly calculate the exact force exerted by you, but we can tell that it is less than the maximum value of static friction if you are not accelerating. The two forces here are: The force with which you push the wall. The force with which the wall pushes you back. These two forces are equal in magnitude and opposite in direction. They also form an action-reaction pair.
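To put rough numbers on this: whether you stay put is decided by comparing your horizontal push to the maximum static friction $\mu_s N$ available at your feet. All values below are assumed purely for illustration:

```python
mu_s = 0.6    # assumed static friction coefficient, shoes on floor
mass = 70.0   # kg, assumed
g = 9.81      # m/s^2

normal = mass * g       # N: the floor pushes up on you with your weight
f_max = mu_s * normal   # maximum static friction the floor can supply

push = 150.0            # N: assumed horizontal force you exert on the wall
# The wall pushes back on you with 150 N; static friction balances it
# as long as push <= f_max, so you do not accelerate.
print(push <= f_max)    # True: you stay put
print(round(f_max, 2))  # ~412 N before your feet would start to slip
```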
{ "domain": "physics.stackexchange", "id": 74294, "tags": "newtonian-mechanics, forces, free-body-diagram" }
What is a timestamp?
Question: What is a timestamp? How is it related to the 3D location of an object? Why is it useful? Originally posted by Rai on ROS Answers with karma: 30 on 2017-03-13 Post score: 0 Original comments Comment by gvdhoorn on 2017-03-14: @Rai: ROS Answers is definitely a website where we try to help out beginners, but we do expect a certain amount of effort on your side as well. We're happy to help, but could I ask you to include whatever you've already found yourself in future questions? That way we avoid duplicating answers. Comment by Rai on 2017-03-14: Thank you for your comment. If you browse my questions, you will see I have asked and re-asked these questions and have tried to add further improvements, but yet nothing. Comment by Rai on 2017-03-14: I was wondering if you could check this and tell me if I need to ask it in a different way, thank you. Comment by gvdhoorn on 2017-03-14: Just an observation, but I get the impression that you are trying to learn all sorts of things at the same time. ROS is not easy and trying to learn about robotics / computer vision in general while also climbing the steep learning curve of ROS will most likely not be most efficient. Comment by gvdhoorn on 2017-03-14: Pick your battles and try to get a general understanding of things first, then try to translate those into ROS domain concepts, which will probably lead you to packages, nodes and techniques that you can use for that. Also, pick up a book or two. Comment by Rai on 2017-03-14: Which means one or two months of sleeping on books. Do I have a shorter way to "just complete the mentioned project" or something like that, in two or three days? Okay, the answer is no, but still.. Comment by gvdhoorn on 2017-03-14: Well I don't know what you want me to say. Unfortunately there is no magic way to do everything someone would want in no time whatsoever. 
If your requirements are very close to something that an existing pkg provides, you could cut some corners, but you wouldn't gain any understanding. Comment by gvdhoorn on 2017-03-14: If you meant: are there any books which take you through a nr of example projects, then my answer is: yes, those exist. Just check out some of the books on the page I linked (anything by example would seem to apply here). Answer: The timestamp of e.g. an image is the point in time when the image was captured. If a robot moves through its operating environment, the mounted sensors move as well. Therefore, the timestamp, together with the motion trajectory of your robot, helps you keep track of where each image was captured. Originally posted by Wolf with karma: 7555 on 2017-03-14 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Rai on 2017-03-14: Oh, thank you now makes sense.. Thank you!!
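As a toy illustration of why the capture time matters (plain Python with hypothetical names, not a ROS API): given two stamped pose samples, the image's timestamp lets you interpolate where the robot was at the moment of capture:

```python
def interpolate_pose(t, t0, pose0, t1, pose1):
    """Linearly interpolate a 2-D pose (x, y) between two stamped samples."""
    a = (t - t0) / (t1 - t0)   # fraction of the way from t0 to t1
    return tuple(p0 + a * (p1 - p0) for p0, p1 in zip(pose0, pose1))

# Pose samples stamped at t = 0.0 s and t = 1.0 s; image stamped at t = 0.25 s
p = interpolate_pose(0.25, 0.0, (0.0, 0.0), 1.0, (2.0, 4.0))
print(p)   # (0.5, 1.0): where the robot was when the image was captured
```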
{ "domain": "robotics.stackexchange", "id": 27305, "tags": "ros, timestamp" }
Count of words in a mutable Map
Question: The function should get some text on input and return a map in the following format: ("word1" -> 1, "word2" -> 2 ...) The keys are words from the text and the values represent the number of times the word appeared in the text. We count only words which are at least 4 characters long. Special chars are ignored, words are case insensitive. import scala.collection.mutable.Map def wordCounter(text: String) = { var map:Map[String, Int] = Map() text.toLowerCase.replaceAll("[^a-z ]", "").split(" ").filter(_.length > 3).foreach(x => addWord(map, x)) map } def addWord(map: Map[String, Int], word: String) = { map(word) = map.getOrElse(word, 0) + 1 } Is it fine to use mutable collections in this case? Should I always be suspicious of solutions that require mutability and use it only when it's problematic to implement with immutable types? I'm doing an extra loop to filter words longer than 3 chars. In my mind this way it looks a bit cleaner than to have a condition in foreach. Should I care more about the performance or readability in my code? Answer: Solution Explanation So first off we drop everything to lowercase and then filter each Char of the string based on whether it is alphanumeric or a space. We've kept the spaces so that in the next line we may split the String into an Array of sub-strings which we then filter based on your length requirement. In the final line we utilize two more collection methods groupBy and mapValues. If res1 was equal to Array(abcd, abcd, scala) then res1.groupBy(w => w) would return Map[String, Array[String]](abcd -> Array(abcd, abcd), scala -> Array(scala)). ...mapValues then performs the final transformation to get your desired output. def wordCounter(text: String): Map[String, Int] = { val res0 = text.toLowerCase.filter(c => c.isLetterOrDigit || c == ' ') val res1 = res0.split(' ').filter(_.length > 3) res1.groupBy(w => w).mapValues(_.length) } Regarding your Questions Is that fine to use mutable in this case?
Should I always be suspicious about the solutions that... There are times when it is OK to use mutability. After all it is a part of the language, and the language is just a tool to get stuff done. In this case, however, your function wordCounter leaks mutability by returning a mutable.Map[...]. In other words, if you need to use mutability within a function don't let it escape the function. ... Should I care more about the performance or readability in my code? Don't take this the wrong way, but performance and readability aren't mutually exclusive. While it may be the case that performant code ends up being longer, it should still remain as readable as a succinct one-liner. But while we're on the subject of readability, note that I added a return type to your wordCounter function :) Fortunately for style related questions Scala has a great style guide. Within the guide the Declarations -> Methods subsection is one place you can find details on idiomatic method declaration style and the reasoning behind it. Other Details When performing operations on common data structures such as a String or Array as we do in the above code, and the overall scope of the program doesn't have a set of common domain specific descriptions I think it is OK to use short value names, e.g. res0, res1, etc. If we wanted to improve readability even further we could always include type signatures with our value declarations. For example: val res0: String = ... val res1: Array[String] = ...
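For comparison only, the same groupBy/mapValues pipeline translated to Python (a sketch; collections.Counter plays the role of groupBy followed by mapValues(_.length)):

```python
from collections import Counter

def word_counter(text):
    """Count words of length > 3, case-insensitive, stripping non-alphanumerics."""
    cleaned = ''.join(c for c in text.lower() if c.isalnum() or c == ' ')
    words = [w for w in cleaned.split(' ') if len(w) > 3]
    return dict(Counter(words))

print(word_counter("Scala, scala and more SCALA code!"))
# {'scala': 3, 'more': 1, 'code': 1}
```

Like the Scala version, it never mutates anything visible to the caller, which is the property the answer recommends preserving.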
{ "domain": "codereview.stackexchange", "id": 14372, "tags": "scala, hash-map" }
Normalisation of a wavefunction
Question: If the system is found in the state: $$\psi=\sqrt{\frac{1}{2\pi}}(\frac1{\sqrt3}e^{-i3\phi}+ce^{-i4\phi})$$ what value of $c$ normalizes the wavefunction? Clearly: $$\int_0^{2\pi}\psi^*\psi\, d\phi=1$$ But I get to the following point and can get no further: $$\frac1{2\pi}\int_0^{2\pi}\left(\frac13+c^*c+\frac1{\sqrt3}c^*e^{i \phi} + \frac1{\sqrt3}ce^{-i \phi}\right)d\phi=1$$ I'm not sure whether it's necessary to include both $c^*$ and $c$; please could you clear this up. Also, please tell me where to go from here in order to find $c$. Answer: By the normalization condition you get $$\int_0^{2\pi}\left(\frac13+c^*c+\frac1{\sqrt3}c^*e^{i \phi} + \frac1{\sqrt3}ce^{-i \phi}\right)d\phi=2\pi$$ Now we know that $e^{i\theta}=\cos{\theta}+i\sin{\theta}$, thus its integral over a period of $2\pi$ is 0. Thus our equation reduces to $$cc^*=\frac{2}{3}$$ Thus any complex number whose magnitude (modulus) is $\sqrt{\frac{2}{3}}$ normalizes your wave function.
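The result $|c| = \sqrt{2/3}$ can be checked numerically; a quick NumPy sketch (the grid size is arbitrary — the rectangle rule is essentially exact for periodic trigonometric integrands):

```python
import numpy as np

c = np.sqrt(2 / 3)                                    # the value found above
phi = np.linspace(0, 2 * np.pi, 4096, endpoint=False)

psi = np.sqrt(1 / (2 * np.pi)) * (np.exp(-3j * phi) / np.sqrt(3)
                                  + c * np.exp(-4j * phi))

# Integral of |psi|^2 over [0, 2*pi) via the rectangle rule
norm = (np.abs(psi) ** 2).mean() * 2 * np.pi
print(norm)   # ~1.0: the cross terms average to zero, leaving 1/3 + |c|^2 = 1
```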
{ "domain": "physics.stackexchange", "id": 21316, "tags": "quantum-mechanics, homework-and-exercises, wavefunction, normalization" }
Terminal Typing Game
Question: A few weeks ago, I developed a terminal game that increases your typing speed. The game is all about typing, providing difficulty levels from easy to hard. I feel like this terminal game won't become as popular as my others, so I need tips on how I could improve this code to make it shorter, easier to run, and more entertaining. Here is the code developed by me, PYWPM: import time import random logo = ''' _____ __ _______ __ __ | __ \ \ \ / / __ \| \/ | | |__) | \ \ /\ / /| |__) | \ / | | ___/ | | \ \/ \/ / | ___/| |\/| | | | | |_| |\ /\ / | | | | | | |_| \__, | \/ \/ |_| |_| |_| __/ | |___/ ''' print(logo) print(" ") difficulty = input("Enter difficulty level (easy/hard): ") if difficulty == "easy": openPlz = open('easywordbank.txt','r') readPlz = openPlz.read() wordBank = readPlz.split() elif difficulty == "hard": openPlz = open('hardwordbank.txt','r') readPlz = openPlz.read() wordBank = readPlz.split() open2 = open('highscore.txt','r+') open2lst = open2.readlines() stat = True strike = 0 score = 0 def gameMain(wordBank): #Primary game loop. Returns a lst: #lst[0] = added points, lst[1] = added strikes lst = [0,0] start = time.time() wordQuiz = wordBank[random.randint(0,(len(wordBank)-1))] wordType = input('Enter the word, '+ wordQuiz + ' : ') if wordType == wordQuiz and time.time()-start < 3: lst[0] += 1 elif time.time()-start >= 3: print('STRIKE! Too Slow! ') lst[1] += 1 else: print('STRIKE! Watch your spelling. 
Be careful with strikes!') lst[1] += 1 return lst def highScore(name,score,highScoreLst,zFile): for line in highScoreLst: if score >= int(line[-3:-1]): highScoreLst.insert(highScoreLst.index(line),name+'-'+str(score)+'\n') highScoreLst.pop() zFile.seek(0,0) zFile.writelines(highScoreLst) break def rsg(): print('Ready?') time.sleep(1) print('Set?') time.sleep(1) print('Go!') time.sleep(1) name = input('Enter a username for this session: ') print("Type the word then press enter in under 3 seconds!") time.sleep(2) rsg() #MainState while stat == True: lst = gameMain(wordBank) score += lst[0] strike += lst[1] if strike == 3: time.sleep(.5) print('Game Over! The game has ended..!\n') time.sleep(2) print('Your Typing & Accuracy Score: ' + str(score)) highScore(name,score,open2lst,open2) time.sleep(2) break print('\nHighscores for PyWPM:') time.sleep(2) for line in open2lst: print(line, end='') time.sleep(1.5) time.sleep(5) openPlz.close() open2.close() Yes, this game includes a word bank that randomizes words. The high scores aren't global. How could I make this better? Answer: I made some changes to your code, here's the description: Indentation correction; Removal of possible spaces in user input; Change of while stat == True to while stat. It's the same thing; PEP 8 (and good practice) calls for 1 blank space after a comma and 2 blank lines before and after functions; You can also simplify the import of packages by specifying exactly what you are going to use: I did it in the code below; Finally, the style guide calls for a trailing newline at the end of the file.
from time import time, sleep from random import randint logo = ''' _____ __ _______ __ __ | __ \ \ \ / / __ \| \/ | | |__) | \ \ /\ / /| |__) | \ / | | ___/ | | \ \/ \/ / | ___/| |\/| | | | | |_| |\ /\ / | | | | | | |_| \__, | \/ \/ |_| |_| |_| __/ | |___/ ''' print(logo) print(" ") difficulty = input("Enter difficulty level (easy/hard): ").strip() if difficulty == "easy": openPlz = open('easywordbank.txt', 'r') readPlz = openPlz.read() wordBank = readPlz.split() elif difficulty == "hard": openPlz = open('hardwordbank.txt', 'r') readPlz = openPlz.read() wordBank = readPlz.split() open2 = open('highscore.txt', 'r+') open2lst = open2.readlines() stat = True strike = 0 score = 0 def gameMain(wordBank): """ Primary game loop. Returns a lst: #lst[0] = added points, lst[1] = added strikes :param wordBank: :return: """ lst = [0, 0] start = time() wordQuiz = wordBank[randint(0,(len(wordBank)-1))] wordType = input('Enter the word, '+ wordQuiz + ' : ') if wordType == wordQuiz and time()-start < 3: lst[0] += 1 elif time()-start >= 3: print('STRIKE! Too Slow! ') lst[1] += 1 else: print('STRIKE! Watch your spelling. Be careful with strikes!') lst[1] += 1 return lst def highScore(name, score, highScoreLst, zFile): for line in highScoreLst: if score >= int(line[-3:-1]): highScoreLst.insert(highScoreLst.index(line), name+'-'+str(score)+'\n') highScoreLst.pop() zFile.seek(0, 0) zFile.writelines(highScoreLst) break def rsg(): print('Ready?') sleep(1) print('Set?') sleep(1) print('Go!') sleep(1) name = input('Enter a username for this session: ') print("Type the word then press enter in under 3 seconds!") sleep(2) rsg() # MainState while stat: lst = gameMain(wordBank) score += lst[0] strike += lst[1] if strike == 3: sleep(.5) print('Game Over!
The game has ended..!\n') sleep(2) print('Your Typing & Accuracy Score: ' + str(score)) highScore(name, score, open2lst, open2) sleep(2) break print('\nHighscores for PyWPM:') sleep(2) for line in open2lst: print(line, end='') sleep(1.5) sleep(5) openPlz.close() open2.close()
{ "domain": "codereview.stackexchange", "id": 41445, "tags": "python, game, console" }
NMR magnetically equivalent protons for a 1,4-disubstituted benzene ring
Question: Below I have drawn a molecule with random substituents $\ce{X}$ and $\ce{Y}$: If the distances from $\ce{H_b}$ and $\ce{H_a}$ to $\ce{H_d}$ are considered, they are different - $\ce{H_b}$ is 5 bonds away and $\ce{H_a}$ is 3 bonds away, so $\ce{H_b}$ sees $\ce{H_d}$ differently compared to $\ce{H_c}$. The same argument can be made vice versa and also between the same pair of protons towards $\ce{H_c}$. This was what the lecturer told me regarding this compound below. This led me to believe then that all the protons in the molecule are magnetically inequivalent and, using up to $\ce{^4J}$ coupling, all protons show a doublet of doublets. However, when I was in school, I was told that if symmetry was a property of a molecule, as above, $\ce{H_a}$ and $\ce{H_b}$ were equivalent, as are $\ce{H_c}$ and $\ce{H_d}$, and each pair of protons gives a doublet. This still held true when I researched the matter on Google during my undergraduate studies. What am I getting wrong here? I read a quote on another question highlighted here For two nuclei to be magnetically equivalent, they need to have equivalent coupling to all other non-chemical shift equivalent nuclei in the molecule. So while $\ce{H_a}$ and $\ce{H_b}$ are chemically equivalent, $\ce{H_b}$ couples differently to $\ce{H_c}$ than $\ce{H_a}$ does, which surely means that $\ce{H_a}$ and $\ce{H_b}$ are magnetically inequivalent? Answer: when I was in school, I was told that [...] each pair of [equivalent] protons gives a doublet This is only strictly true for magnetically equivalent protons. If there is magnetic inequivalence, such as in the p-disubstituted benzene ring, then it is no longer true. It's not as simple as a dd either; when you have magnetically inequivalent protons that see each other, the first-order rules (the so-called $n+1$ rule) don't work so well anymore.
You can see some real-life examples of what the spectra of these molecules look like here: https://organicchemistrydata.org/hansreich/resources/nmr/?page=05-hmr-15-aabb%2F For the p-disubstituted benzene it sort of looks like a doublet if you zoom out far enough, but if you look closely, there are additional peaks flanking the major peaks. I say "see each other" because if you think about it, the two protons labelled HA above are also magnetically inequivalent; but since they are so far apart, the two benzene rings are essentially independent of each other, and give rise to the same NMR spectrum.
{ "domain": "chemistry.stackexchange", "id": 13074, "tags": "organic-chemistry, nmr-spectroscopy" }
Is any work done if I walk in a circle?
Question: My friend and I were arguing about this and I was wondering if someone out there could settle this for us. Basically, he and I were walking to buy some stamps. When we were on our return trip he made the assertion that when I returned to my desk, which is where our trip began, I would have accomplished zero "net work". That is, utilizing our admittedly simple understanding of work from Wikipedia: In physics, a force is said to do work when it acts on a body, and there is a displacement of the point of application in the direction of the force. when I return to my departure point, that is my desk, the total displacement would be zero and using the definition from Wikipedia that $W = Fd$ and $d$ is displacement, no work was done. Intuitively, I suggested that there were two quantities of work done, one quantity of work from my desk to the shop and one quantity of work from the shop to my desk, but when I look at some descriptions of a displacement vector, I feel like the two displacements may also cancel each other out. Can someone help us sort this out? Thanks in advance for the help! Answer: The work done on an object by a force is the distance traveled by the object multiplied by the magnitude of the force in the direction traveled. So if you move along some closed path (i.e. a path that gets you back to where you started) of length $d$, acted on by a constant magnitude force $F$ that is always parallel to your path and pointing backwards along it (like friction), then the force will have done work $-Fd$, not zero. If instead your force was a constant in both magnitude and direction, then the force would do zero work over the journey. This is because on the way to the shop, the force would be (say) pointing forwards along your path and so doing positive work, whereas on the way back from the shop the force would be pointing backwards along your path and so doing negative work. These would cancel out.
In this particular example, the force (friction) is of the former kind --- it always points backwards along your path, trying to impede your motion. An example of a force of the latter kind is gravity --- it always point downwards (at least if we're talking about small terrestrial matters). Suppose you threw something vertically up into the air. To begin with the force of gravity would be pointing in the opposite direction to the path taken, doing negative work on the body. Then as the object turned around in the air, the force of gravity (whose direction remains the same) would start to act in the same direction to the path taken. Gravity would then do positive work, precisely equal and opposite to the work it did on the way up, such that when the object returned to its original position, it would have neither gained nor lost energy. The net work done would be zero. To reiterate: for the case of friction, the force changes direction so as to be perpetually impeding your motion. This is why the work done by the friction builds up continually, rather than cancelling out. This makes sense: the force of friction is dissipating energy in your legs for the entirety of the journey --- when you get back, you know you've used up some energy, since you might be a bit peckish! The friction hasn't at any point been putting energy back into your legs --- it's just been consistently sapping it --- and so it must have done some work (work being the amount of energy a force puts into an object).
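The two cases in the answer can be checked with a small numerical sketch (the 2 N force and 10 m out-and-back path are made-up numbers for illustration):

```python
# Discretized out-and-back walk. Friction always opposes the step direction;
# the constant force always points in +x. Only the latter integrates to zero
# over the closed path.

def work_done(path, force_fn):
    """Sum F(dx) * dx over each unit step of the path."""
    total = 0.0
    for a, b in zip(path, path[1:]):
        dx = b - a
        total += force_fn(dx) * dx
    return total

path = list(range(0, 11)) + list(range(9, -1, -1))   # 0 -> 10 m -> back to 0

friction = lambda dx: -2.0 if dx > 0 else 2.0   # magnitude 2 N, opposing motion
constant = lambda dx: 2.0                       # 2 N, always pointing in +x

print(work_done(path, friction))  # -40.0 J: friction saps energy the whole way round
print(work_done(path, constant))  # 0.0 J: positive work out cancels negative work back
```

The friction total is minus the force magnitude times the full 20 m path length, exactly the $-Fd$ of the answer, while the constant force cancels out over the round trip.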
{ "domain": "physics.stackexchange", "id": 73782, "tags": "newtonian-mechanics, energy, work" }
Solid in Liquid Heat Transfer
Question: If there is a solid immersed in a large (but finite) pool of water, where the solid has temperature $T_s$ and the water has temperature $T_w$, with $T_w>T_s$, how can I calculate $T_s(t)$ and $T_w(t)$? (Suppose that the masses and the material properties of the solid are known.) Answer: Typical time dependent heat equation problem with Robin (mixed) boundary conditions. The solution $T_s(x,y,z,t)$ will give you the evolution of temperature at every point of the solid. The boundary condition is $\nabla T = aT + b$ with $a$, $b$ constants, because you will suppose that energy is exchanged by natural convection and conduction (the latter might be neglected depending on the properties of the bodies and the initial conditions). In the general case the heat equation can only be solved numerically; there are no analytical solutions. In special cases, depending on the geometry (e.g. spherical symmetry), the problem can be simplified. If the heat capacity of the bath is large compared to that of the solid, further simplifications are possible. In summary: the only general method is numerical integration of a second-order partial differential equation with temperature gradients imposed on the boundary.
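As a sketch of what that numerical integration looks like in the simplest setting, here is an explicit finite-difference scheme for a 1D slab with the boundary condition above at both faces. Every number (diffusivity, h/k, temperatures, slab size, step counts) is made up for illustration, and the bath is treated as so large that $T_w$ stays constant:

```python
import numpy as np

# Illustrative 1D explicit finite-difference sketch (all values assumed):
# a slab initially at 20 C immersed in a large bath at T_w = 80 C, with the
# Robin condition k*(T[1]-T[0])/dx = h*(T[0]-T_w) at both faces.

alpha = 1e-4       # thermal diffusivity of the solid, m^2/s (assumed)
h_over_k = 200.0   # convection coefficient over conductivity, 1/m (assumed)
T_w = 80.0         # bath temperature, held constant (large bath)

T = np.full(51, 20.0)               # initial solid temperature profile
dx = 0.01 / (len(T) - 1)            # 1 cm slab
dt = 0.4 * dx**2 / alpha            # within the explicit stability limit (< 0.5)

for _ in range(20000):
    T_new = T.copy()
    # interior nodes: dT/dt = alpha * d2T/dx2
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    # Robin boundaries, rearranged for the surface node
    T_new[0] = (T_new[1] + dx * h_over_k * T_w) / (1 + dx * h_over_k)
    T_new[-1] = (T_new[-2] + dx * h_over_k * T_w) / (1 + dx * h_over_k)
    T = T_new

# The surfaces heat first; the whole slab relaxes toward the bath temperature.
print(round(float(T[0]), 1), round(float(T[len(T) // 2]), 1))
```

The lumped (uniform-temperature) simplification mentioned in the answer corresponds to the small-Biot limit, i.e. h/k times the slab thickness much less than one.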
{ "domain": "physics.stackexchange", "id": 8306, "tags": "thermodynamics, heat" }
What's the Time Complexity and total no of iterations?
Question: Consider these loops (for a=1 to m means for(a=1;a<=m;a++)):

for a = 1 to m
    for i = 1 to n
        for j = i to n
            c = c + 1

The total number of iterations is O(n^2):
for i=1, j executes n times;
for i=2, j executes n-1 times;
...
for i=n, j executes 1 time.
So the total number of iterations = n + (n-1) + (n-2) + ... + 1 = the sum of the first n natural numbers = O(n^2).

For the time complexity of the given algorithm, i executes n times and j executes from i to n, so it will be O(n*(n-i+1)) = O(n^2). Am I correct in my answer and approach for both of the solutions? Answer: The first approach is correct. The idea is that the sum of the first $n$ numbers can be computed as $$\dfrac{n(n+1)}{2} = \dfrac{n^2 + n}{2} = O(n^2)$$ I would not consider the second one complete: since $i$ is a free variable in your complexity expression, the result does not immediately follow unless you quantify over $i$ (which would result in you obtaining the expression above).
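The closed form can be checked empirically by counting the two inner loops directly (i.e. the work done per single iteration of the outer `a` loop):

```python
# Count iterations of the two inner loops and compare against n(n+1)/2.

def count_inner_iterations(n):
    c = 0
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            c += 1
    return c

for n in (1, 5, 10, 100):
    assert count_inner_iterations(n) == n * (n + 1) // 2   # sum of first n naturals

print(count_inner_iterations(10))  # 55 = 10 * 11 / 2
```

Including the outer loop, the grand total is simply m times this count.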
{ "domain": "cs.stackexchange", "id": 21618, "tags": "algorithms, time-complexity" }
If you are in a deep gravity well, where time goes by more slowly, do you see the unfolding of a cosmic event at a different rate?
Question: If you were on one of those planets orbiting a super massive black hole (ala Interstellar), where time is moving more slowly, would you time astronomical events differently or even the age of the universe? Answer: Time dilation is related to differences in gravitational potential in General Relativity. Observing a clock situated deep in a potential well, a distant observer would see it running slow. Vice-versa, an observer looking outwards from within a deep potential well would see distant clocks running faster. The situation is complicated by orbital motion. This results in a time dilation that would be predicted by Special Relativity. Any relative motion will make a moving clock appear to run slower to an observer in a stationary frame. To keep things simple, a stationary observer at a radial coordinate $r$ around a non-spinning black hole, would observe events far away from the black hole speed up by a factor $(1 - r_s/r)^{-1/2}$, where $r_s$ is the Schwarzschild radius (and is the smallest radius at which a stationary observer could possibly exist). However, you would only make incorrect estimates of durations and ages (by using times on your own clock) if you didn't understand General Relativity.
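As a rough numerical illustration of that factor (the radii below are arbitrary multiples of the Schwarzschild radius, not values taken from the film):

```python
# Speed-up of distant clocks as seen by a stationary observer at radius r,
# for a non-spinning black hole: (1 - r_s/r)**(-1/2).

def speedup_factor(r_over_rs):
    """Factor by which a stationary observer at r = r_over_rs * r_s sees distant clocks sped up."""
    return (1 - 1 / r_over_rs) ** -0.5

print(round(speedup_factor(10.0), 3))    # ~1.054: a modest effect at 10 r_s
print(round(speedup_factor(1.001), 1))   # ~31.6: hovering just outside r_s
```

The factor only becomes dramatic very close to the Schwarzschild radius, which is why the Interstellar scenario needs an orbit skimming the horizon of a rapidly spinning hole.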
{ "domain": "astronomy.stackexchange", "id": 7174, "tags": "gravity, time-dilation" }
Computing number of batches in one epoch
Question: I have been reading through Stanford's code examples for their Deep Learning course, and I see that they have computed num_steps = (params.train_size + params.batch_size - 1) // params.batch_size [github link]. Why isn't it num_steps = params.train_size // params.batch_size instead? Answer: The double-slash in python stands for “floor” division (rounds down to nearest whole number), so if the result is not an integer, it will always miss the last batch, which is smaller than the batch size. For example: Given a dataset of 10,000 samples and batch size of 15: # Original calculation: # (10000 + 15 - 1) // 15 = floor(667.6) = 667 # Your calculation: # 10000 // 15 = floor(666.667) = 666 Your formula calculates number of full batches, theirs total number of batches.
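The two formulas can be put side by side; the `(train_size + batch_size - 1) // batch_size` trick is just integer ceiling division:

```python
import math

# Stanford's formula vs plain floor division, with the example numbers above.

def num_batches_ceil(train_size, batch_size):
    # (a + b - 1) // b is integer ceiling division: counts the final partial batch
    return (train_size + batch_size - 1) // batch_size

def num_full_batches(train_size, batch_size):
    return train_size // batch_size

print(num_batches_ceil(10000, 15))   # 667, same as math.ceil(10000 / 15)
print(num_full_batches(10000, 15))   # 666: the last 10 samples are dropped
```

When the dataset size is an exact multiple of the batch size the two agree; otherwise the ceiling version counts one extra, smaller batch.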
{ "domain": "datascience.stackexchange", "id": 4578, "tags": "tensorflow, training, epochs" }
What is the biochemical explanation for tingling and burning sensation in brain due to certain food?
Question: Consumption of mustard (spicy English Mustard), wasabi and horseradish based food dressings usually results in a burning, tingling or freezing sensation in the brain/scalp and nostrils as the vapour goes through the nasal cavity. I read somewhere that the chemicals which give the burning sensation are isothiocyanates. And some of the posts roughly mention sinuses being agitated by this chemical vapour. How can this be explained from a biochemical and biological standpoint? e.g. What sort of cell reaction, whether any receptors are involved, etc. (Appreciate an answer that is similar to the type of explanation here.) Answer: You are right: the compound provoking this burning/tingling sensation is called allyl isothiocyanate. We (humans) perceive these compounds in two different ways when ingested, namely via the gustatory and olfactory systems. The molecular receptor sensing isothiocyanates is called the transient receptor potential channel A1 (TRPA1) [ref]. Here is a simplified view of how ingredients are perceived:

Gustatory sensation
First we get a burning sensation in the buccal cavity due to TRPA1 receptors present on the surface of the sensory neurons of the trigeminal nerve. In essence, isothiocyanates dissolved in the saliva will activate those neurons via TRPA1, provoking electrical impulses in the trigeminal nerve, which leads to a burning sensation.

Retronasal olfactory sensation
For several food ingredients we also perceive an aroma (taste = gustation, aroma = olfaction, taste + aroma = flavor). The aroma of an ingredient is perceived by olfactory sensory neurons in the nasal cavity: some of the ingested ingredient is vaporized inside the buccal cavity and travels through the nasal cavity, where it is perceived (retronasal perception). Not so surprisingly, isothiocyanates will again be perceived by sensory cells of the olfactory system via TRPA1, hence a burning sensation in the nostrils.
How receptor activation leads to a neuronal stimulus
This is a very broad subject and I will only give an overview of what is happening. We first start with the activation of a receptor by an agonist (in your question, isothiocyanates binding and activating TRPA1). Upon binding of the agonist, the conformation of the receptor changes from an inactive to an active state. The way the signal is transduced varies according to the type of receptor, but for the TRPA1 channel what happens is that the channel opens, leading to an influx of calcium ions ($Ca^{2+}$) into the cell, which changes its electrical state (as calcium ions are positively charged), a phenomenon called depolarization. Depolarization of specific sensory cells (here, the ones expressing TRPA1) starts an electrical signal in the nerve which travels to the brain, where it is then decoded. The exact way the brain decodes sensory signals is still under debate, but a likely explanation is that only some neurons within the nerve are activated, which allows the brain to decode the sensory signal based on the nerve activation pattern. As an analogy, it would be the same as having only specific wires (neurons) turned on within a bundle of wires (nerve).

Further reading
If you want to further extend your knowledge of taste perception at the molecular level, read this excellent review: Chandrashekar, 2006. For an in-depth review of olfaction refer to this book. For a book on signal transduction refer to this, while for TRP channel activation this is a good read. The field is quite broad and there are many good references. Sensory perception includes two distinct and broad fields, namely neurosciences and cell signalling.
{ "domain": "biology.stackexchange", "id": 4200, "tags": "biochemistry, molecular-biology, sensation" }
Flow of current across the cross section of a wire
Question: A current of 5 amperes is flowing through a wire which is thick at one end and narrow at the other. Will the current be the same at both ends, or different? Answer: It will be the same. Current is the total charge per second. Charge can be measured in Coulombs or number of electrons. The number of electrons passing by per second is the same for both ends. For every electron that goes in one end, an electron comes out the other. Current density is different. It measures the current per square meter. Though current per square millimeter might be a better unit for a wire. In the fat end the current is spread out over many square millimeters. Each square millimeter doesn't have all that many electrons passing by per second. In the narrow end, a square millimeter has more electrons per second. The word current is used for water flowing in a stream in a slightly different way. Thinking about a water current can confuse the issue. In water, current is just the velocity of the water. If you have a pipe that is fat on one end and narrow at the other, water has to move faster at the narrow end.
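A small sketch with made-up cross-sectional areas shows the distinction: the same 5 A at both ends, but different current densities:

```python
# Same total current, different current density (A per mm^2) at the two ends.
# The areas below are invented for illustration.

I = 5.0  # amperes, identical at both ends of the wire

def current_density(current, area_mm2):
    return current / area_mm2

thick_end = current_density(I, area_mm2=4.0)   # 1.25 A/mm^2
narrow_end = current_density(I, area_mm2=1.0)  # 5.0 A/mm^2

print(thick_end, narrow_end)
```

More electrons pass through each square millimeter per second in the narrow end, even though the total electrons per second is the same everywhere along the wire.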
{ "domain": "physics.stackexchange", "id": 91608, "tags": "electric-circuits, electric-current, electrical-resistance" }
Autocorrelation of random walk
Question: I want to analyze the auto-correlation of a received power signal that I captured. Unfortunately, I cannot publish the data but I found the same problem arises for a random walk, that's why I used the random walk in the code snippet below. The problem is that the progression scales with the number of input samples, i.e., the zero-crossing happens every time at approximately 0.23*len_walk, no matter what len_walk actually is. I'm not sure about the reason, maybe someone can help me interpret.

import matplotlib.pyplot as plt
import numpy as np
from statsmodels.tsa.stattools import acf

len_walk = 1000
n_walks = 100
acf_sum = np.zeros(len_walk)

for j in range(n_walks):
    w = [0]
    for i in range(len_walk-1):
        w.append(w[-1] + np.random.normal())
    acf_result = acf(w, nlags=len_walk)
    acf_sum = acf_sum + acf_result

plt.plot(acf_sum/n_walks, label='acf_sum')
plt.grid()
plt.legend()

Is it that the absolute difference between minimum and maximum value increases (on average) the longer the time is, so samples that have been previously close to uncorrelated look now correlated given more values? Answer: Let's try to figure this out from first principles.
Let's start with $x$ our zero-mean, Gaussian, independent, identically distributed noise sequence: $$ x[n] \sim N(0,\sigma^2_x) $$ Then our random walk is just the accumulation of these values \begin{align} y[n] &= y[n-1] + x[n], \mbox{ for } n \ge 0, x,y = 0 \mbox{ for } n < 0.\\ &= \sum_{k=0}^n x[k] \end{align} The variance of $y[n]$ is just \begin{align} \sigma^2_y &= E\left[ y[n] y[n] \right]\\ &= E\left[ \sum_{k=0}^n (x[k])^2\right] + \mbox{ expectation 0 terms}\\ &= (n+1) \sigma^2_x \end{align} We can extend this to look at the covariance of $y$ for $m \ge n \ge 0$ \begin{align} E[y[n]y[m]] &= E\left[ \sum_{k=0}^{n} (x[k])^2\right] + \mbox{ expectation 0 terms}\\ &= (n+1) \sigma^2_x \end{align} Then the autocorrelation is: \begin{align} \rho_y(n,m) &= \frac{E\left[ y[n] y[m]\right]}{\sqrt{E\left[ y[n]y[n]\right]E\left[ y[m]y[m]\right]}}\\ &= \sqrt{\frac{n+1}{m+1}} \end{align} And there's the problem: the $\color{red}{\bf \mbox{random walk is not a stationary process}}$. Because statsmodels.tsa.stattools.acf aims to capture second order statistics as a function of a single lag, this doesn't make sense for the random walk. For example, the lag of 1: \begin{align} \rho(0,1) &= 0.707\ldots\\ \rho(1,2) &= 0.816\ldots\\ \rho(2,3) &= 0.866\ldots\\ &\vdots\\ \rho(100,101) &= 0.995\ldots\\ \end{align}
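The derived result $\rho_y(n,m)=\sqrt{(n+1)/(m+1)}$ can be checked with a quick Monte Carlo sketch, simulating many independent walks rather than averaging one long one (walk count and indices below are arbitrary choices):

```python
import numpy as np

# Monte Carlo check of rho(n, m) = sqrt((n+1)/(m+1)) for the random walk.

rng = np.random.default_rng(0)
n_walks, length = 100_000, 101
y = np.cumsum(rng.standard_normal((n_walks, length)), axis=1)  # y[n] = x[0] + ... + x[n]

n, m = 10, 40
empirical = np.corrcoef(y[:, n], y[:, m])[0, 1]
theoretical = np.sqrt((n + 1) / (m + 1))   # sqrt(11/41), about 0.518

print(abs(empirical - theoretical) < 0.01)  # the simulation matches the derivation
```

The correlation depends on the two absolute times, not just the lag m - n, which is exactly the non-stationarity that trips up a single-lag estimator like `acf`.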
{ "domain": "dsp.stackexchange", "id": 11358, "tags": "python, autocorrelation, random" }
How to reconcile two different expressions for the Noether current?
Question: In A Modern Introduction to Quantum Field Theory by Maggiore (as well as in my quantum field theory course), the Noether current for an internal symmetry is found to be $$j^{\mu}=\frac{\partial\mathcal{L}}{\partial(\partial_{\mu} \phi)}\delta\phi\tag{3.32}$$ where the fields transform as $\phi\rightarrow\phi+\alpha\delta\phi$ for infinitesimal $\alpha$ (and the coordinates are unchanged). However, in An Introduction to Quantum Field Theory by Peskin and Schroeder, if the Lagrangian transforms as $$\mathcal{L}\rightarrow\mathcal{L}+\alpha\partial_{\mu}\mathcal{J}^{\mu},\tag{2.10}$$ it is given in general by $$j^{\mu}=\frac{\partial\mathcal{L}}{\partial(\partial_{\mu} \phi)}\delta\phi-\mathcal{J}^{\mu},\tag{2.12}$$ which is clearly not the same. Initially I thought that $\mathcal{J}^{\mu}=0$ for an internal symmetry so that they would match. However, answers on my last question have shown that this is not true. Both books assume the action is invariant, not necessarily the Lagrangian. So, what is going on here? Answer: No, Peskin & Schroeder between eqs. (2.9) & (2.10) assume that the infinitesimal transformation is a quasi-symmetry of the action, i.e. the action can change by a boundary term; while Maggiore below eq. (3.20) assumes a strict symmetry of the action. Correspondingly, the Noether current gets modified with an improvement term in Peskin & Schroeder's eq. (2.12) as compared to Maggiore's eq. (3.32). Btw, both Noether currents (2.12) & (3.32) assume so-called vertical transformations, cf. e.g. this related Phys.SE post.
{ "domain": "physics.stackexchange", "id": 87182, "tags": "lagrangian-formalism, symmetry, field-theory, gauge-theory, noethers-theorem" }
How do I use collision_object.header.frame_id? Can I use it for frame transformations?
Question: I want to add collision objects to the world using a frame that is broadcast by a node. Setting collision_object.header.frame_id = "task_frame" results in the planner responding that Unable to transform from frame 'task_frame' to frame '/base_link'. Returning identity. I tried setting my collision_object.header.stamp = ros::Time::now(); but no luck. I know task_frame exists, because I can run through tf_echo just fine. I can also manually transform poses, but the code will soon become really ugly as the list of objects increases. Am I using collision_object.header wrong? frames.png Originally posted by paturdc on ROS Answers with karma: 157 on 2014-05-19 Post score: 0 Original comments Comment by Maya on 2014-05-19: Can you upload an image of the tf using $ rosrun tf view_frames $ evince frames.pdf ? And I know it's working but could you add the output of rosrun tf tf_echo /map /odom ? Comment by paturdc on 2014-05-19: I don't have /map or /odom transforms, are they required? I'm transforming directly from the base_link of the robot (which is stationary), to a particular configuration of the end_effector. I'll edit my question to include an image of the tf. Comment by paturdc on 2014-05-19: Weird, it suddenly started working, I didn't change anything. I suppose there might be some suboptimal way I've set up my transforms/timing/planning_scene somewhere. Comment by paturdc on 2014-05-19: And now it is back where I started. I even added a wait for transform before setting my collision_object.header to prove to myself that the frame is coming through. Every node that I can think of can see "task_frame" except my planner. I'm out of ideas. Comment by Maya on 2014-05-19: It's probably a typo somewhere but when I look at your image, I don't see any task_frame, I see task_space... Comment by paturdc on 2014-05-19: Yeah, it's a typo in the question. But everything is spelled correctly in the code.
I imagine it has to do with the timestamp, embedded in the collision_object, but I can't find a solution that works. Comment by Maya on 2014-05-20: Ok sorry, it's the end of my knowledge. I have no idea why it does not work :/ Answer: It turns out that restarting the planner after the new transform has been published does the trick. I suppose it loads a list of transforms to subscribe to, and doesn't update that list after it has been started. Thanks for the help, it did at least clear up that I was on the right track :) Originally posted by paturdc with karma: 157 on 2014-05-21 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Maya on 2014-05-21: Hey! I just realized from what you said that it doesn't actually update the loaded transforms. I don't think that's normal behavior for the system. Are you, by any chance, doing something as described here: http://answers.ros.org/question/39643/problem-with-roscore-subscription-and-publishing/ ? Comment by paturdc on 2014-05-21: Yes, it looks like it is the same kind of problem. I can't tell if this is undesired behaviour from ROS or if I just need to make sure that things are running in the correct sequence. It is impossible to change the sequence in certain applications for me. Comment by Maya on 2014-05-21: Last attempt to understand: is your roscore launched in an independent terminal? =) Comment by paturdc on 2014-05-21: Yes, I have multiple terminals, and my subscriber is launched in a different terminal than my publisher (and the subscriber is launched before the publisher).
{ "domain": "robotics.stackexchange", "id": 17995, "tags": "ros, collision-object, frame-id" }
Why don't we use Leibniz integral rule when solving Diffusion equation using the Fourier transform?
Question: My question concerns the solution to the diffusion equation: $$\frac{\partial{p(x,t)}}{\partial{t}}=D\frac{\partial^2{p(x,t)}}{\partial{x}^2}~.\tag{1}\label{1}$$ I have a question about the solution presented in Ian Ford's Statistical Physics: An Entropic Approach. This solution consists of defining $$G(k,t)=\int\limits_{-\infty}^{\infty}p(x,t)\mathrm{e}^{\mathrm{i}kx}\mathrm{d}x~,$$ such that $$p(x,t)=\frac{1}{2\pi}\int\limits_{-\infty}^{\infty}G(k,t)\mathrm{e}^{-\mathrm{i}kx}\mathrm{d}k~,$$ then taking Fourier transform of Eq.$\eqref{1}$ yields $$\int\limits_{-\infty}^{\infty}\frac{\partial{p(x,t)}}{\partial{t}}\mathrm{e}^{\mathrm{i}kx}\mathrm{d}x= \int\limits_{-\infty}^{\infty}D\frac{\partial^2{p(x,t)}}{\partial{x}^2}\mathrm{e}^{\mathrm{i}kx}\mathrm{d}x~.$$ The right-hand side is integrated by parts and on the left-hand side we exchange integration with differentiation, and we get $$\frac{\partial{G(k,t)}}{\partial{t}}=-k^2DG(k,t)~.\tag{2}\label{2}$$ The thing which I don't understand is why are we allowed to exchange the integral and the partial derivative with respect to time on the left hand side of Eq.$\eqref{2}$. Why don't we have to use the Leibniz integral rule $${\displaystyle {\frac {\mathrm{d}}{\mathrm{d}t}}\left(\int _{a}^{b}f(x,t)\,\mathrm{d}x\right)=\int _{a}^{b}{\frac {\partial }{\partial t}}f(x,t)\,\mathrm{d}x~,}$$ but then we would have $$\frac{\mathrm{d}{G(k,t)}}{\mathrm{d}{t}}=\frac{\partial{G(k,t)}}{\partial{t}}+\frac{\partial{G(k,t)}}{\partial{k}}\frac{\mathrm{d}{k}}{\mathrm{d}{t}}=-k^2DG(k,t)~,$$ which is generally different from Eq.$\eqref{2}$. Could you please help me understand why one doesn't use Leibnitz rule in this case? Thank you in advance for your answers. Answer: $k$ is a completely independent variable from $t$ (just as $x$ is). The independence of $x$ and $t$ is the reason we are careful to use partial derivatives in the original equation. 
When we Fourier transform, we are replacing one independent variable ($x$) by another ($k$).
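Integrating eq. (2) in $t$ gives $G(k,t)=G(k,0)\,\mathrm{e}^{-Dk^2t}$, and this can be verified numerically with an FFT against the exact spreading Gaussian (grid size, $D$, and times below are arbitrary choices for the demonstration):

```python
import numpy as np

# Spectral solution of the diffusion equation: multiply by exp(-D k^2 t) in
# transform space, invert, and compare with the exact self-similar solution.
# (numpy's FFT uses the e^{-ikx} sign convention, opposite to the book's
# e^{+ikx}, but the evolution factor depends only on k^2, so it is unaffected.)

D, t0, t = 0.5, 1.0, 2.0
x = np.linspace(-40, 40, 2048, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(len(x), d=x[1] - x[0])

# initial condition: the exact Gaussian solution at "age" t0
p0 = np.exp(-x**2 / (4 * D * t0)) / np.sqrt(4 * np.pi * D * t0)

# evolve G(k,0) -> G(k,t) in transform space, then invert
p_t = np.real(np.fft.ifft(np.fft.fft(p0) * np.exp(-D * k**2 * t)))

# exact solution at age t0 + t
exact = np.exp(-x**2 / (4 * D * (t0 + t))) / np.sqrt(4 * np.pi * D * (t0 + t))

err = float(np.max(np.abs(p_t - exact)))
print(err < 1e-8)  # the spectral evolution reproduces the exact solution
```

Note that $k$ never acquires any $t$-dependence during the evolution: each mode just decays independently at rate $Dk^2$, which is the point of the answer.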
{ "domain": "physics.stackexchange", "id": 56479, "tags": "statistical-mechanics, fourier-transform, integration, diffusion" }
failed to load openni_kinect libraries
Question: I'm using my turtlebot for the first time with electric. Every time I try to roslaunch anything that uses packages from the openni_kinect stack (follower.launch, openni.launch, etc), I get a bunch of errors about failing to load libraries from openni_kinect. Here's an example of one of the errors.

[ERROR] [1337214239.808330340]: Failed to load nodelet [/camera/depth/metric_rect] of type [depth_image_proc/convert_metric]: Failed to load library /opt/ros/electric/stacks/openni_kinect/depth_image_proc/lib/libdepth_image_proc.so. Make sure that you are calling the PLUGINLIB_REGISTER_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Cannot load library: /opt/ros/electric/stacks/perception_pcl/pcl/lib/libpcl_visualization.so.1.1: symbol __cxa_pure_virtual, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference
[FATAL] [1337214239.809701780]: Service call failed!
[camera/depth/metric_rect-8] process has died [pid 16675, exit code 255]. log files: /home/turtlebot/.ros/log/f32ffc28-9fb2-11e1-ac38-485d603306cf/camera-depth-metric_rect-8*.log

There are a bunch of similar errors all with libraries from the same stack. I've tried reinstalling that stack but to no avail. Hopefully this is something really obvious that I'm missing. Thanks! Originally posted by selliott on ROS Answers with karma: 51 on 2012-05-16 Post score: 0 Answer: This looks like trouble between the version of libmysql that the debian packages were linked against at compile time and the version of libmysql that's installed on your system. I can see a couple of possible causes here:
1. The system is out of date; the ROS packages may depend on some arbitrary library features that it probably shouldn't. Updating should fix that.
2. Sourcing multiple versions of ROS in the same shell. I know some of the turtlebot installs source ROS from the system bashrc or /etc/profile, so this could be hiding somewhere non-obvious. This will show up as multiple versions in your $ROS_PACKAGE_PATH.
3. You're pulling in debs for the wrong version of ubuntu; take a look at your sources.list[.d], and make sure the version string in your deb line matches the version of ubuntu you're running.
Originally posted by ahendrix with karma: 47576 on 2012-05-16 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 9425, "tags": "kinect, openni, turtlebot, openni-kinect, ros-electric" }
Traveling Salesman Problem with Neural Network
Question: I was curious if there were any new developments in solving the traveling salesman problem using something like a Hopfield recurrent neural network. I feel like I saw something about recent research getting a breakthrough in this, but I can't find the academic papers anywhere. Is anyone aware of any new, novel developments in this area? Answer: This Medium post lists the latest (not a full list of course) studies in the combinatorial optimization domain. All three papers use Deep Reinforcement Learning, which does not need any training set but learns completely from its own experience. I have been working on the first paper for some time and inference time is at the milliseconds level. According to their experiments, the approximation ratio (a metric they use to benchmark their own method) on 1000-1200 test cases reaches 1.11.
{ "domain": "cs.stackexchange", "id": 14372, "tags": "np-hard, neural-networks, traveling-salesman" }
Problems with rosinstall on MAC OSX 10.5
Question: Hello, When I try to install ros by following the wiki, I get this error:

$ rosinstall ~/ros "http://packages.ros.org/cgi-bin/gen_rosinstall.py?rosdistro=cturtle&variant=base&overlay=no"
/Library/Python/2.5/site-packages/rosinstall-0.5.16-py2.5.egg/rosinstall/vcs/svn.py:49: Warning: 'with' will become a reserved keyword in Python 2.6
Traceback (most recent call last):
  File "/usr/local/bin/rosinstall", line 5, in <module>
    pkg_resources.run_script('rosinstall==0.5.16', 'rosinstall')
  File "/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/pkg_resources.py", line 442, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/pkg_resources.py", line 1167, in run_script
    exec script_code in namespace, namespace
  File "/usr/local/bin/rosinstall", line 28, in <module>
  File "/Library/Python/2.5/site-packages/rosinstall-0.5.16-py2.5.egg/rosinstall/vcs/svn.py", line 49
    with open(os.devnull, 'w') as fnull:
       ^
SyntaxError: invalid syntax

I have python 2.5:

$ ls -l `which python`
lrwxr-xr-x 1 root wheel 72 11 sep 2010 /usr/bin/python -> ../../System/Library/Frameworks/Python.framework/Versions/2.5/bin/python

I found the same problem happened on debian - link How can I solve the problem? Originally posted by gwilly on ROS Answers with karma: 1 on 2011-07-19 Post score: 0 Answer: To get this to work on python 2.5 you need to add from __future__ import with_statement at the top of the rosinstall script. Originally posted by tfoote with karma: 58457 on 2011-07-25 This answer was ACCEPTED on the original site Post score: 1
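For reference, this is the construct that fails on a bare Python 2.5; the future import below (a harmless no-op on 2.6+ and 3.x) is exactly what needs to go at the very top of the rosinstall script:

```python
# On Python 2.5 the `with` statement only exists behind this __future__ import,
# which is why svn.py's `with open(...)` raises SyntaxError without it.
from __future__ import with_statement

import os

with open(os.devnull, 'w') as fnull:
    fnull.write('output to discard\n')

print(fnull.closed)  # True: the file was closed when the with-block exited
```

Future imports must precede all other statements in the file, so the line has to go above rosinstall's existing imports.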
{ "domain": "robotics.stackexchange", "id": 6182, "tags": "ros, cturtle, rosinstall, osx" }
Is sugar necessary for the sweetening effect of food?
Question: A lot of food packages, nowadays, are mentioning "0 g sugar" or "sugar free". Then how do those still taste sweet? Answer: There are plenty of sweet things that are not sugar(s) People have a sweet tooth (or, to put it another way humans like sweet things). This is an evolutionary adaptation because fresh fruits are both good food (both nutritionally and in providing lots of energy) and rare in pre-modern food gathering cultures. So we are adapted to like things that are sweet as eating them is good for us (in moderation). In the modern world we can have as much sweet stuff as we want but this is no longer a survival advantage as too much sugar provides more energy than we need to survive or thrive and eating it just makes us fat. So the modern food industry has adapted to give us sweet-tasting things that don't overload us with unneeded energy. Many of those food ingredients mimic the sweetness of sugar without containing many calories, so allow us to eat sweet food without the same risk of consuming too many empty calories and getting fat. There are many chemical that taste sweet but are not sugars. The successful ones tend to tase a great deal sweeter than the typical sugars found in fruits and related plants (glucose, sucrose, fructose and others). For example, aspartame is hundreds of time sweeter than sucrose but is a peptide not a sugar; Saccharin is an aromatic suphimide and is also hundreds of times sweeter than sucrose; acesulfame K is similar to saccharin. There are also sugar-like sweetness that the body cannot digest but which taste sweet. Steria is a complex sugar with sweetness but few calories; Sucralose is a modified sugar the body can't digest so tastes sweet without containing the calories of sucrose. The point of the claims on "sugar-free" foods is that they are sweet but non-calorific. 
Plenty of compounds can mimic the sweetness of real sugars like sucrose but are calorie free either because they are far sweeter (so much less stuff is required for the same level of sweetness) or because the body can't digest the compound and turn it into unneeded calories.
{ "domain": "chemistry.stackexchange", "id": 10694, "tags": "food-chemistry, taste" }
Lagrangian mechanics formulation of a simple free motion of two masses in uniform gravity field
Question: As part of a larger project, I decided to test my Lagrangian formulation of a simple system of two rigidly connected point masses, as indicated below. I introduce the generalized coordinates vector $\bar{q} = [x, y, q]^T$, where $x,y$ are coordinates of position of mass $M$ and $q$ is the angular deflection from horizontal of the massless rod of length $l$, in the middle of which mass $m$ is attached. Then, the position coordinates of $m$ are given by $$ x_m = x + \dfrac{l}{2}\cos(q) $$ $$ y_m = y + \dfrac{l}{2}\sin(q) $$ Having this, I defined kinetic ($T$) and potential ($V$) energies $$ T = \dfrac{1}{2}M(\dot{x}^2 + \dot{y}^2) + \dfrac{1}{2}m(\dot{x}_m^2 + \dot{y}_m^2)$$ $$ V = g \left( My + m\left(y+\dfrac{l}{2}\sin(q)\right) \right) $$ Then, the Lagrangian $L$ and mechanical energy of the system $E$ are $$ L = T-V$$ $$ E = T+V$$ Further, the derivations were done by me by hand (a few times, to double check) and using SymPy (the symbolic math package for Python), and they match. Upon integrating the EOM, I receive nice-looking plots for the state vector variables $\bar{Q} = [x,y,q,\dot{x},\dot{y},\dot{q}]^T$, if my initial conditions involve null $q$ and $\dot{q}$. However, if they are non-zero, the motion looks highly non-physical visually (I created a Matlab animation of the motion) and additionally the mechanical energy of the system is not adding up to be constant (see pic below for $\bar{Q}_0 = [1,1000,1,1,0,-0.1]^T$). My question is therefore: are my assumptions and initial formulation correct for the given situation? I am quite new to the Lagrangian formalism and, after hours of tackling this seemingly simple problem, I started to think there might be some rule of Lagrangian mechanics I am violating. All classical mechanics problems I was able to find online are not free motion of multibody systems, therefore I was not able to check against an actual solution for a problem like this.
To give some more context, my intention is to expand this problem to put the multibody system in orbital flight (central point-mass gravity field) and increase complexity of structure, leading to n-link pendulum attached to bus mass $M$. If there is a fundamental error in the proceedings I presented above here, I assume I would not be able to extend the formulation to more complex system I just described. I will be eternally grateful for any help / advice! I will be also happy to provide more details on my solution. EDIT To clarify, I post my EL equations: $$ M \frac{d^{2}}{d t^{2}} x{\left(t \right)} - 0.5 m \left(l \sin{\left(q{\left(t \right)} \right)} \frac{d^{2}}{d t^{2}} q{\left(t \right)} + l \cos{\left(q{\left(t \right)} \right)} \left(\frac{d}{d t} q{\left(t \right)}\right)^{2} - 2 \frac{d^{2}}{d t^{2}} x{\left(t \right)}\right)= 0 $$ $$ M \frac{d^{2}}{d t^{2}} y{\left(t \right)} + g \left(M + m\right) + 0.5 m \left(- l \sin{\left(q{\left(t \right)} \right)} \left(\frac{d}{d t} q{\left(t \right)}\right)^{2} + l \cos{\left(q{\left(t \right)} \right)} \frac{d^{2}}{d t^{2}} q{\left(t \right)} + 2 \frac{d^{2}}{d t^{2}} y{\left(t \right)}\right) = 0 $$ $$ l m \left(0.5 g \cos{\left(q{\left(t \right)} \right)} + 0.25 l \frac{d^{2}}{d t^{2}} q{\left(t \right)} - 0.5 \sin{\left(q{\left(t \right)} \right)} \frac{d^{2}}{d t^{2}} x{\left(t \right)} + 0.5 \cos{\left(q{\left(t \right)} \right)} \frac{d^{2}}{d t^{2}} y{\left(t \right)}\right) = 0$$ $$ \begin{bmatrix} M+m & 0 & -\dfrac{l}{2}m\sin(q) \\ 0 & M+m & \dfrac{l}{2}m\cos(q) \\ -\dfrac{l}{2}m\sin(q) & \dfrac{l}{2}m\cos(q) & m\dfrac{l^2}{4} \end{bmatrix} \begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{q} \end{bmatrix} = \begin{bmatrix} \dfrac{l}{2}m\cos(q)\dot{q}^2 \\ -g(M+m) + \dfrac{l}{2}m\sin(q)\dot{q}^2 \\ -\dfrac{l}{2}gm\cos(q) \end{bmatrix} $$ Answer: This is my solution for your problem: The Kinetic Energy is: $$T=\frac{1}{2}\,M(\dot x^2+\dot y^2)+\frac 12\,m\,\left[\left(\dot x -\frac 12 l 
\sin(\varphi)\,\dot \varphi\right)^2 +\left(\dot y +\frac 12 \,l\cos(\varphi)\dot \varphi\right)^2\right] $$ and the potential energy $$U=M\,g\,y+m\,g\,\left(y +\frac 12 l\sin(\varphi)\right)$$ the generalized coordinates are: $$\vec q=\begin{bmatrix} x \\ y \\ \varphi \\ \end{bmatrix}$$ The EOMs: $$\ddot x-\frac 12 \frac{\cos(\varphi)\,m\,l}{M+m}\,\dot \varphi^2=0$$ $$\ddot y-\frac 12 \frac{\sin(\varphi)\,m\,l}{M+m}\,\dot \varphi^2+g=0$$ $$\ddot \varphi=0$$ the numerical simulation gives you $E=T+U$ over time; with the initial conditions all zero except for $\varphi(0)=0.3$, the total energy $E=T+U$ is constant, as it should be. edit: in case $\varphi(0)=\varphi_0$ and all other initial conditions are zero, the solution of the equations of motion is: $$x(\tau)=0~,y(\tau)=-\frac 12\,g\,\tau^2~,\varphi(\tau)=\varphi_0 $$ and $$E=\frac 12 m\,g\,l\,\sin(\varphi_0)=~\text{constant}$$ The equations of motion: from Euler-Lagrange you obtain these three equations $$\left( M+m \right) { \ddot x}-1/2\,ml\sin \left( \varphi \right) \ddot \varphi -1/2\,ml\cos \left( \varphi \right) {\dot \varphi }^{2} =0$$ $$\left( M+m \right) {\ddot y}+1/2\,ml\cos \left( \varphi \right) \ddot\varphi -1/2\,ml\sin \left( \varphi \right) {\dot \varphi }^{2}+M g+mg=0 $$ and $$-1/4\,ml \left( 2\,\sin \left( \varphi \right) { \ddot x}-2\,\cos \left( \varphi \right) {\ddot y}-\ddot\varphi \,l-2\,g\cos \left( \varphi \right) \right) =0$$ solve those equations for $\ddot x~,\ddot y~,\ddot \varphi$ and you obtain my equations above
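As a sanity check on the decoupled EOMs, one can integrate them numerically and confirm that $E=T+U$ stays constant (a sketch with assumed parameters $M=2$, $m=1$, $l=1$, $g=9.81$ and a nonzero spin rate, none of which come from the question):

```python
import math

# Integrate the decoupled EOMs: with phi'' = 0,
#   x'' = (1/2) m l cos(phi) phi'^2 / (M + m)
#   y'' = (1/2) m l sin(phi) phi'^2 / (M + m) - g
# and verify that E = T + U stays (numerically) constant.
M, m, l, g = 2.0, 1.0, 1.0, 9.81   # assumed illustrative values
x, y, phi = 1.0, 0.0, 0.3
vx, vy, om = 0.0, 0.0, 2.0          # nonzero spin to exercise the coupling

def energy(x, y, phi, vx, vy, om):
    # velocities of the midpoint mass m from the kinematics in the question
    xm_dot = vx - 0.5 * l * math.sin(phi) * om
    ym_dot = vy + 0.5 * l * math.cos(phi) * om
    T = 0.5 * M * (vx**2 + vy**2) + 0.5 * m * (xm_dot**2 + ym_dot**2)
    U = M * g * y + m * g * (y + 0.5 * l * math.sin(phi))
    return T + U

E0 = energy(x, y, phi, vx, vy, om)
dt = 1e-5
for _ in range(100000):             # 1 s of motion, semi-implicit Euler
    ax = 0.5 * m * l * math.cos(phi) * om**2 / (M + m)
    ay = 0.5 * m * l * math.sin(phi) * om**2 / (M + m) - g
    vx += ax * dt
    vy += ay * dt                   # om is constant since phi'' = 0
    x += vx * dt
    y += vy * dt
    phi += om * dt
E1 = energy(x, y, phi, vx, vy, om)  # should match E0 up to integration error
```

If the questioner's own EOMs fail this check with nonzero $\dot q$, the error is in the derivation or integration, not in the Lagrangian formalism itself.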
{ "domain": "physics.stackexchange", "id": 70366, "tags": "newtonian-mechanics, classical-mechanics, lagrangian-formalism, point-particles" }
How is Newton's 3rd law applied in rocket propulsion?
Question: I'm really wanting to get into physics more and I've had this question for a while. I do know a bit about rocketry as I think it's pretty cool, but I'm still struggling to understand how Newton's 3rd law applies on a more microscopic level. In a rocket, propellant is used to produce often very hot gas going very fast. This gas then exits through a rocket nozzle. Newton's 3rd law says that this gas moving with a lot of force away from the rocket will cause the rocket to experience an equal and opposite force, thus allowing the rocket to lift off. I'm curious where the actual collisions happen, as the rocket seems to just eject the exhaust without the particles "hitting" anywhere to allow the force to occur. If anything needs clarifying I'd be happy to try! Answer: There are two ways to think about it, depending on how you want to think about gases. You can think of gases as having a pressure. When a rocket is burning, there is a very high pressure inside the rocket. That pressure pushes gas downward, but also pushes up on the rocket. The other approach is to think of individual molecules of gas. They aren't just going straight down. They bounce around with thermal energy (they're very hot). Those gas molecules sometimes collide with the rocket, imparting momentum to the rocket. Both are valid ways of thinking about a rocket motor; it just depends on how you want to treat them. Sometimes it's best to think of the gases as a fluid. Other times it's best to think of them as a bunch of particles. But both have a rationale for why the rocket goes upward.
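Whichever picture you use, the numbers must satisfy the same momentum bookkeeping; a toy check with assumed illustrative values (none of these numbers are from the question):

```python
# Momentum carried away by the exhaust per second equals the thrust
# (equal and opposite) felt by the rocket, in either the pressure
# picture or the molecular-collision picture.
mdot = 5.0                 # kg of exhaust expelled per second (assumed)
v_exhaust = 3000.0         # exhaust speed relative to rocket, m/s (assumed)
p_dot_exhaust = mdot * v_exhaust   # momentum leaving per second (downward)
thrust = p_dot_exhaust             # reaction force on the rocket (upward), N

m_rocket = 1000.0          # rocket mass, kg (assumed)
g = 9.81
net_accel = (thrust - m_rocket * g) / m_rocket  # lifts off only if > 0
```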
{ "domain": "physics.stackexchange", "id": 100380, "tags": "newtonian-mechanics, momentum, conservation-laws, free-body-diagram, rocket-science" }
Frame dragging around a black hole
Question: If you have one stationary non-rotating black hole, being orbited by 2 or more black holes just outside each other's event horizons, does this alter the size of the central black hole's event horizon? What's the answer to this thought experiment: if you put a spaceship into orbit around the central black hole, and it falls into the event horizon just barely, and then introduce the multiple black holes orbiting just outside the central black hole's event horizon, would it enable the spaceship to escape the central black hole's event horizon via the pull of the exterior orbiting black holes? (I don't know if rotating or non-rotating would have any effect on the event horizon.) Answer: First of all, if two black holes are orbiting each other with their horizons almost touching, they will very quickly merge into one black hole. Even if they are orbiting at 10 times the horizon radius, they still will merge in a relatively short time. The reason that they merge is that the closely orbiting black holes will be radiating a significant amount of their orbital energy as gravitational waves. The energy for the outgoing gravitational waves will come from the orbital energy of the black holes, which will force them to get closer to each other, which will increase the gravitational radiation in a runaway process that quickly results in the black hole merger. You can see many interesting videos of simulations of this process at this website: http://www.black-holes.org/explore2.html . In the second part of your question, I think you are trying to arrange things such that the spaceship is sucked out of the event horizon of the first black hole by the second black hole. Now, if you look at the videos, you will see that when the black holes are far from each other, the event horizons that are facing each other get flattened somewhat, which I think is the effect you think might allow the spaceship to escape.
However, if you look at the videos, you see that as the holes get even closer to each other the event horizons "reach" out towards each other and as the two cusps from each hole touch, then the black holes quickly merge into one black hole. Here is a frame from a movie showing the event horizons flattened and the beginning of the cusps forming which will lead to the merger: The problem with your scheme for escaping the black hole is that the spaceship will have already fallen into the singularity by the time you move the second black hole in close enough to "flatten" the event horizon. The problem is that once the spaceship crosses the event horizon, the future time-like direction has become rotated into a spatial direction towards the black hole singularity at the center. See this answer on Astronomy Stack Exchange for more of an explanation: Why can't you escape a black hole? Here is the image from that question showing how the time direction gets rotated towards the center of the black hole:
{ "domain": "physics.stackexchange", "id": 2282, "tags": "black-holes, event-horizon, frame-dragging" }
rosbag play on two machines
Question: Hi, I have recorded simultaneously multiple cameras sequences on two machines with rosbag. I now would like to replay this sequence simulataneously on the two machines (to avoid transfer images on the network) with a good synchronization. Is it possible to do it with rosbag or should I need to code this myself ? Originally posted by Amaury Negre on ROS Answers with karma: 56 on 2012-12-09 Post score: 0 Answer: I don't think that is possible with rosbag. You will have to implement it yourself. Originally posted by Lorenz with karma: 22731 on 2012-12-10 This answer was ACCEPTED on the original site Post score: 0
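If you do implement it yourself, one low-tech approach is to agree on a wall-clock start instant and launch the playback on each machine at that moment, relying on NTP-synchronized clocks for alignment. The helper below is a hypothetical sketch (only `rosbag play` itself is a real command; everything else is an assumption):

```python
import subprocess
import time

def play_at(start_epoch, cmd):
    """Busy-wait until the agreed wall-clock instant, then launch `cmd`.

    Run on each machine with the same start_epoch, e.g.
        play_at(t0, ['rosbag', 'play', 'session.bag'])
    Synchronization is only as good as the clock sync between machines.
    """
    while time.time() < start_epoch:
        time.sleep(0.001)
    return subprocess.Popen(cmd)
```

A few milliseconds of NTP skew is often acceptable for camera streams; if tighter sync is needed, a custom player driven by a shared /clock source would be required.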
{ "domain": "robotics.stackexchange", "id": 12036, "tags": "ros, rosbag, machines, multiple, synchronization" }
Electrons carry the bulk of heat flux in the solar wind?
Question: In the introduction of the attached paper it is mentioned that: Electrons provide additional heating by carrying the bulk of the solar wind heat flux and through collisions with the protons. Electron and proton heating by solar wind turbulence What is meant by this phrase? Answer: Electrons provide additional heating by carrying the bulk of the solar wind heat flux and through collisions with the protons. First, the solar wind is a weakly collisional plasma at best (e.g., see https://physics.stackexchange.com/a/268594/59023). That is, collisions play an extremely minor role in nearly all processes in the solar wind. Second, the electron heat flux arises not due to collisions with protons but due to the Lorentz force and differences in thermal speeds between the two populations. That is, the electron thermal speed greatly exceeds the Sun's gravitational escape speed, so they are free to go whenever they move in the anti-sunward direction. The Lorentz force limits their mobility to being mostly along the quasi-static magnetic field. Finally, the conservation of the first adiabatic invariant (i.e., magnetic moment of particle gyration) reduces any perpendicular (with respect to the quasi-static magnetic field) velocity as the particle moves away from the Sun and the magnetic field strength decreases (e.g., see https://physics.stackexchange.com/a/670591/59023). These effects would take an isotropic, drifting Maxwellian and turn it into an anisotropic, skewed, narrow, magnetic field-aligned beam. This population of electrons is known as the strahl (German for beam). See the references below for more discussion. It is currently thought that the strahl scatters as it propagates away from the Sun and forms the other dominant suprathermal electron population in the solar wind called the halo. The cold, dense core of the electron population helps to balance the total electric current such that in the plasma rest frame, there is zero net current.
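The first-adiabatic-invariant argument can be made explicit (a standard textbook relation, stated here as a sketch; $m_e$, $v_\perp$, and $B$ are the electron mass, perpendicular speed, and field magnitude):

```latex
\mu = \frac{m_e v_\perp^2}{2B} = \mathrm{const}
\quad\Longrightarrow\quad
v_\perp(r) = v_\perp(r_0)\,\sqrt{\frac{B(r)}{B(r_0)}} ,
```

so as $B$ falls with distance from the Sun, $v_\perp$ shrinks and the distribution folds into the field-aligned strahl.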
In the core electron rest frame, the heat flux is dominated by the strahl electrons (sometimes the halo helps out too). In the plasma rest frame, the core electrons can have a rather large heat flux at times in the sunward direction. Though I don't think this is physically meaningful in the thermodynamic sense since the weakly collisional nature of the solar wind means the particles stream past each other and do not transfer significant energy through collisions via temperature gradients. What is meant by this phrase? This arXiv paper is based on a fluid approximation of the solar wind, which would automatically entail particle-particle collisions as a means of energy transfer, i.e., temperature gradients control the heat flux term. References Feldman, W.C., et al., "Electron Velocity Distributions Near the Earth's Bow Shock," Journal of Geophysical Research 88(A1), pp. 96--110, doi:10.1029/JA088iA01p00096, 1983. Maksimovic, M., et al., "Ulysses electron distributions fitted with Kappa functions," Geophysical Research Letters 24(9), pp. 1151--1154, doi:10.1029/97GL00992, 1997. W.G. Pilipp, et al., "Large-scale variations of thermal electron parameters in the solar wind between 0.3 and 1 AU," J. Geophys. Res. 95(A5), pp. 6305-6329, doi:10.1029/JA095iA05p06305, 1990. E.E. Scime, et al., "Regulation of the solar wind electron heat flux from 1 to 5 AU: Ulysses observations," J. Geophys. Res. 99(A12), pp. 23,401-23,410, doi:10.1029/94JA02068, 1994. S.J. Schwartz and E. Marsch "The radial evolution of a single solar wind plasma parcel," J. Geophys. Res. 88(A12), pp. 9919-9932, doi:10.1029/JA088iA12p09919, 1983. S. Stverak, et al., "Electron energetics in the expanding solar wind via Helios observations," J. Geophys. Res. 120(10), pp. 8177-8193, doi:10.1002/2015JA021368, 2015. L.B. Wilson III, et al., "The Statistical Properties of Solar Wind Temperature Parameters Near 1 au," Astrophys. J. Suppl. 236(2), pp. 41, doi:10.3847/1538-4365/aab71c, 2018. L.B. 
Wilson III, et al., "Electron Energy Partition across Interplanetary Shocks. I. Methodology and Data Product," Astrophys. J. Suppl. 243(8), pp. 26, doi:10.3847/1538-4365/ab22bd, 2019a. L.B. Wilson III, et al., "Electron Energy Partition across Interplanetary Shocks. II. Statistics," Astrophys. J. Suppl. 245(24), pp. 29, doi:10.3847/1538-4365/ab5445, 2019b. L.B. Wilson III, et al., "Electron Energy Partition across Interplanetary Shocks. III. Analysis," Astrophys. J. 893(22), pp. 21, doi:10.3847/1538-4357/ab7d39, 2020.
{ "domain": "physics.stackexchange", "id": 88730, "tags": "electrons, astrophysics, plasma-physics, thermal-conductivity, solar-wind" }
Time required to observe electrostatic phenomena
Question: So I had to do this project on Van de Graaff generators and explain the mechanism of how it works, so I thought I'd start by explaining how charge is induced on a spherical conductor due to another spherical conductor inside it. Now, what we learnt in class was that charges get induced because the field inside a conductor must always be zero; otherwise there would be current inside (which would be free energy*, which is impossible). So, using this property, I'd like to explain the magnitude of the charges being induced. This requires me to have an external spherical conductor of radius 'R' and an inner one of smaller radius; then distribute charges across the surface such that the no-field condition is satisfied, then increase the charge on the inner conductor, which gradually increases the charge present on the external conductor, and then just bring that close to a grounded ball, and done. But then I got curious: how long does it actually take to observe phenomena like induction of charges between two conductors, or just induction in general? Nanoseconds? Picoseconds? Is there a mathematical relation involving the kinetic energy of the free electrons in the conductor that can move to another point across a potential difference, or something mathematical about it, or is it completely observational? What about other phenomena like the triboelectric effect? Answer: The minimum time required would be $t=L/c$ where $L$ is the largest length involved and $c$ is the speed of light. If there are stray capacitances and non-zero resistances then it will take longer. A better estimate would be $t=5\ RC$ where $R$ is the largest non-zero resistance and $C$ is the largest stray capacitance. The 5 is just a rule of thumb that after 5 time constants the item is at steady state.
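Plugging in illustrative numbers (assumed, not from the question) for a bench-top apparatus:

```python
# Order-of-magnitude sketch of the two estimates in the answer.
c = 3.0e8            # speed of light, m/s
L = 0.30             # largest length scale of the apparatus, m (assumed)
t_min = L / c        # absolute lower bound: about 1 ns

R = 10.0             # largest non-zero resistance, ohms (assumed)
C = 1.0e-12          # largest stray capacitance, farads (assumed)
t_settle = 5 * R * C # rule-of-thumb settling time: 5 time constants, 50 ps
```

So for everyday conductors the redistribution is over in nanoseconds at most, consistent with the answer's guess range.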
{ "domain": "physics.stackexchange", "id": 90596, "tags": "electrostatics, time" }
Data hazard in load word after addi
Question: 5 stage pipeline addi $t1,$zero,0x30 lw $t2,0($t1) sw $t2,0xff18($zero) addi $t2,$zero,100 The question is to find the hazards existing in the code, and the answer is: Hazard 1: Between lines 1 and 2. Register t1. Solved using forwarding. Hazard 2: Between lines 2 and 3. Register t2. Solved using forwarding. But I find the answer rather strange. So there is a hazard between lines 1 and 2: in the first line t1 is written back in the fifth stage, and in the second line t1 is used in the decode stage; in my opinion there should be a stall first and then a forward to have access to t1 in the second line. And I don't see a hazard between lines 2 and 3: in line 2, t2 is written back in the fifth stage, and line 3 fetches t2 in the first stage. Am I thinking something strange? Why does the correct answer seem incorrect to me? Answer: The answers are correct. There is a data hazard when the information stored in the "regular" location (generally a data reg) is incorrect with respect to the program flow. Here the information copied in the ID stage at line 2 will be the previous value of $t1, as the new one will be copied at the end of the WB stage of instruction 1 (and hence at the end of the MEM stage of instr 2). Forwarding means discarding the regular information and replacing it by the new (and correct) value. It can be done 1/ as soon as the new value is available in the processor. Let's call this time t_a. 2/ up to the last time that this information is required (because it will be transformed by the ALU, written to the mem, etc.). Let's call this instant t_u. If t_a <= t_u, simple forwarding can be done. Otherwise one or more stalls are required. Let's look at the code 1 addi $t1,$zero,0x30 2 lw $t2,0($t1) Say stage IF of instr 1 is at time t1, ID at t2, etc. (1) the new value of $t1 will be available at the end of stage WB (t5); (2) the value of $t1 is read at the start of the ID stage (t3). There is a data hazard. When is the information associated with $t1 available? After the EX stage of 1.
So t_a=t3. When is the last time that this information is required by instruction 2? Just before the address (0+$t1) is computed by the ALU, i.e. t_u=t4. There is no need for any stall. The value read from the $t1 register is just discarded and replaced by the one in the pipeline register. Consider now instructions 2 and 3. 2 lw $t2,0($t1) 3 sw $t2,0xff18($zero) There is a hazard as $t2 is written at the end of the WB stage of inst 2 (t6) and read at the beginning of the ID stage of inst 3 (t4). When is the information associated with $t2 available in the processor? At the end of MEM of instr 2 it is written in some pipeline reg of the proc (t5=t_a). When is this information required? Just before the write to memory by inst 3; after that it is too late. That is at the start of the MEM stage of inst 3, t6=t_u. Again a simple forward can solve the problem. (Note that if instr 3 had been sw $t5,0xff18($t2) the situation would have been different. $t2 would have been required for the address computation (start of EX stage of inst 3 at t5), that is, before its production (end of MEM of instr 2, t5), and a stall would have been required in addition to the forwarding.)
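The t_a/t_u bookkeeping above can be sketched as a toy timing model (not a pipeline simulator; the cycle-numbering convention is one illustrative choice):

```python
# Instruction issued in slot s reaches stage 'EX' in cycle s + 3, etc.
# A value produced at the END of cycle t_a can be forwarded to a consumer
# needing it at the START of cycle t_u whenever t_a < t_u; otherwise stall.
STAGE = {'IF': 1, 'ID': 2, 'EX': 3, 'MEM': 4, 'WB': 5}

def cycle(slot, stage):
    return slot + STAGE[stage]

def stalls_needed(prod_slot, prod_stage, cons_slot, cons_stage):
    t_a = cycle(prod_slot, prod_stage)  # value exists after this cycle
    t_u = cycle(cons_slot, cons_stage)  # value must be present in this cycle
    return max(0, t_a - t_u + 1)

# Hazard 1: addi (slot 0) produces $t1 at end of EX; lw (slot 1) needs it
# at the start of its EX for the address computation.
h1 = stalls_needed(0, 'EX', 1, 'EX')    # 0: forwarding alone suffices

# Hazard 2: lw (slot 1) produces $t2 at end of MEM; sw (slot 2) needs it
# at the start of its own MEM to write to memory.
h2 = stalls_needed(1, 'MEM', 2, 'MEM')  # 0: forwarding alone suffices

# Classic load-use case for contrast: a lw result needed by the very next
# instruction's EX stage requires one stall even with forwarding.
h3 = stalls_needed(0, 'MEM', 1, 'EX')   # 1 stall
```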
{ "domain": "cs.stackexchange", "id": 12993, "tags": "computer-architecture, mips" }
Efficient Median Algorithm With Very Constrained Operators
Question: What is the most efficient algorithm for finding the median of an array when the available operations are limited to Max(), Min(), Multiply, Add, and no conditionals are allowed. Pivots and sorts are not allowed. The algorithm I have come up with so far seems very inefficient and goes like so: For an array of length n, create a working array of length 2n by doing pairwise Max() comparisons for every pair of elements in the input array. This 2n array is guaranteed not to contain the lowest value from the original (or one fewer of the lowest value if there were ties for the lowest value) Recursively call (1) n/2 times Take min() of final array There's got to be a faster algorithm! Answer: I can get you an improvement for the first step; every time you do that first step, you're basically trying to find the minimum, and then continue on the rest of the array. You don't need to do all that just to find the minimum; instead of making that large array (which sounds like it contains $\binom n 2$ elements, not $2n$, unless I misunderstood you), you can basically do the following: Starting from the left, take every pair of consecutive elements, and put the minimum of the two on the right. Something like this in pseudocode: temp1 = min(array[i], array[i+1]) temp2 = max(array[i], array[i+1]) array[i] = temp2 array[i+1] = temp1 What will happen is that the minimum element gets moved all the way to the far right using $2(n-1)$ mins and maxs. We can then do our recursive call on the first $n-1$ elements, and repeat until we finally pull out the median.1 1 Some of you reading this will note that basically what we end up with is a bubble sort (reverse in the example I gave) where we stop as soon as we figure out the median. If you decide to move the max to the right instead, then we pretty much are doing a normal bubble sort, but stopping once we get the median.
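The bubble-toward-the-median scheme from the answer can be sketched in runnable form (Python for illustration; fixed-bound loops are assumed to be allowed, since the constraint only bans data-dependent conditionals, pivots, and sorts):

```python
def median_minmax(a):
    """Median of an odd-length array using only min, max, and arithmetic.

    Each pass p sweeps the prefix a[0 .. n-p-1], pushing that prefix's
    minimum to its right end with 2*(len-1) min/max operations. After k
    passes the k-th smallest element sits at index n - k; the median of an
    odd-length array is the ((n+1)//2)-th smallest.
    """
    a = list(a)
    n = len(a)
    k = (n + 1) // 2
    for p in range(k):
        for i in range(n - p - 1):
            lo = min(a[i], a[i + 1])
            hi = max(a[i], a[i + 1])
            a[i], a[i + 1] = hi, lo   # smaller value drifts rightward
    return a[n - k]
```

This is the "bubble sort stopped at the median" the answer describes: O(n^2) comparisons in the worst case, but with no branches on the data.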
{ "domain": "cs.stackexchange", "id": 2277, "tags": "algorithms, search-algorithms" }
What is the difference between the Interaction picture (Dirac Picture) and a rotating reference frame?
Question: In David McIntyre's Quantum Mechanics, we examine an electron within a magnetic field $$\vec{B}=B_o \hat{z}+B_1[\cos(\omega t)\hat{x}+\sin(\omega t)\hat{y}]$$ The Hamiltonian is then time-dependent and in matrix form it is as follows $$H=\frac{\hbar}{2}\begin{bmatrix}\omega_0& \omega_1e^{-i\omega t}\\\omega_1e^{i\omega t}&-\omega_0\end{bmatrix}=H_0+H_t(t)$$ where $\omega_0=\frac{eB_0}{m}$ and $\omega_1=\frac{eB_1}{m}$. McIntyre then says that the solution to the Schrödinger equation can be written using the energy basis of the time-independent part of the Hamiltonian (this is the usual basis: $|\uparrow\rangle_z=(1,0)$and $ |\downarrow\rangle_z=(0,1)$) as $$|\psi(t)\rangle=c_+(t)|\uparrow\rangle_z +c_-(t)|\downarrow\rangle_z=(\,c_+(t)\,,\,c_-(t)\,) \tag{1}$$ Now as far as I am aware, the above solution is in the Schrödinger picture where $|\psi(t)\rangle_s=e^{-iHt/\hbar}|\psi(t=0)\rangle$. Now I have learnt in class that the state in the interaction picture is defined as $$ |\psi(t)\rangle_I=e^{iH_0t/\hbar}|\psi(t)\rangle \tag{2} $$ where the exponent only includes the time-independent part of the Hamiltonian. My issue is that McIntyre then says that we can simplify the problem by transforming the state in equation (1) to the rotating frame which he says (without justification) yields $$ |\psi(t)_*\rangle=c_+(t)e^{i\omega t/2}|\uparrow\rangle_z +c_-(t)e^{-i\omega t/2}|\downarrow\rangle_z=(\,c_+(t)e^{i\omega t/2}\,,\,c_-(t)e^{-i\omega t/2}\,) \tag{3} $$ Now I have read that the rotating frame is related to (if not entirely equivalent to) the interaction picture. This leads me to think that equation (3) is the state vector in the interaction picture for this problem. However, how can this be if the interaction picture is defined as in equation (2)? If we are to believe equation (2), then should we not get something like $$ |\psi(t)_*\rangle=(\,c_+(t)e^{i\omega_1 t/2}\,,\,c_-(t)e^{-i\omega_1 t/2}\,) $$ where we've used $\omega_1$ instead of $\omega$. 
In fact, why should we not use $\omega_0$, considering that in equation (3) it is the unperturbed time-independent Hamiltonian that is present, while $\omega$ is associated with the energy due to the time-dependent part? Answer: Yes, the interaction picture is a rotating frame, but not every rotating frame is "the" interaction picture. Therefore we do not use equation (2) to find (3) but rather some [yet-to-be-proven-to-be] useful transformation $$ \begin{pmatrix}c_+^R(t)\\c_-^R(t)\end{pmatrix}=\exp\left(\frac{i}{2} \begin{pmatrix}1&0\\0&-1\end{pmatrix}\omega t\right) \begin{pmatrix}c_+^S(t)\\c_-^S(t)\end{pmatrix}. $$ Note that we have not said anything about what functional form the coefficients $\begin{pmatrix}c_+^S(t)\\c_-^S(t)\end{pmatrix}$ actually take - we have only specified how they relate to the "rotated" coordinates $\begin{pmatrix}c_+^R(t)\\c_-^R(t)\end{pmatrix}$. You have tried to guess the answer for what the coefficients should look like, but that is exactly what is hard to do because we are solving a problem with a time-dependent Hamiltonian! Another way of looking at this is by considering the transformation to act on the basis states: $$ |\uparrow\rangle_z\to |\uparrow(t)\rangle_z\equiv e^{i\omega t/2}|\uparrow\rangle_z ,\quad |\downarrow\rangle_z\to |\downarrow(t)\rangle_z\equiv e^{-i\omega t/2}|\downarrow\rangle_z. $$ If we write the Hamiltonian in bra-ket notation, dropping the $z$ subscript, it looks like \begin{align} \frac{2H}{\hbar}=\omega_0|{\uparrow}\rangle\langle{\uparrow}|-\omega_0|{\downarrow}\rangle\langle{\downarrow}| +\omega_1e^{-i\omega t}|{\uparrow}\rangle\langle{\downarrow}| +\omega_1e^{i\omega t}|{\downarrow}\rangle\langle{\uparrow}|.
\end{align} We can substitute in our new definitions of the states to see that (remembering to take all of the complex conjugates) \begin{align} \frac{2H}{\hbar}=\omega_0|{\uparrow(t)}\rangle\langle{\uparrow(t)}|-\omega_0|{\downarrow(t)}\rangle\langle{\downarrow(t)}| +\omega_1|{\uparrow(t)}\rangle\langle{\downarrow(t)}| +\omega_1|{\downarrow(t)}\rangle\langle{\uparrow(t)}|. \end{align} Do you see the trick? Using this redefinition, we have made the Hamiltonian appear to be time-independent! It is only time-independent in this funky basis in which the states depend on time, but that time dependence is easy to sort out at the end of any calculation. This final form of the Hamiltonian justifies the choice of rotating frame. In other contexts, such as "the" interaction picture, another final form of the Hamiltonian justifies a different choice of rotating frame like the one you suggested using $\exp(i H_0 t)$.
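One can check numerically that the rotating frame removes the time dependence, using the general rule $H_\mathrm{rot}=UHU^\dagger+i\hbar\,\dot U U^\dagger$ (a sketch with illustrative frequencies; note that this active transformation also shifts the diagonal by the detuning, $\omega_0\to\omega_0-\omega$, which the basis-relabelling argument above keeps hidden inside the time-dependent kets):

```python
import cmath

# Illustrative values (not from the problem statement); units with hbar = 1.
hbar, w0, w1, w = 1.0, 2.0, 0.5, 1.7

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def H(t):  # the lab-frame Hamiltonian from the question
    return [[hbar / 2 * w0, hbar / 2 * w1 * cmath.exp(-1j * w * t)],
            [hbar / 2 * w1 * cmath.exp(1j * w * t), -hbar / 2 * w0]]

def U(t):  # U = diag(e^{i w t/2}, e^{-i w t/2})
    return [[cmath.exp(1j * w * t / 2), 0], [0, cmath.exp(-1j * w * t / 2)]]

def dUdt(t):
    return [[1j * w / 2 * cmath.exp(1j * w * t / 2), 0],
            [0, -1j * w / 2 * cmath.exp(-1j * w * t / 2)]]

def H_rot(t):  # H_rot = U H U^dagger + i*hbar*(dU/dt)*U^dagger
    A = mat_mul(mat_mul(U(t), H(t)), dagger(U(t)))
    B = mat_mul(dUdt(t), dagger(U(t)))
    return [[A[i][j] + 1j * hbar * B[i][j] for j in range(2)] for i in range(2)]

# H_rot evaluated at two unrelated times should agree to machine precision,
# with off-diagonals equal to hbar*w1/2 and diagonals hbar*(w0-w)/2.
H1, H2 = H_rot(0.3), H_rot(1.9)
same = all(abs(H1[i][j] - H2[i][j]) < 1e-12 for i in range(2) for j in range(2))
```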
{ "domain": "physics.stackexchange", "id": 80593, "tags": "quantum-mechanics, schroedinger-equation, mathematical-physics, perturbation-theory" }
How can I publish exactly once when the node is run?
Question: I want to publish a single /initialpose message when a node is run. How can I make sure it's only published once? This is what I have: #!/usr/bin/env python import rospy from geometry_msgs.msg import Pose, PoseWithCovarianceStamped from std_msgs.msg import Header from numpy import matrix class PoseChange: def __init__(self): self.msg = PoseWithCovarianceStamped() rospy.init_node("pose_change", anonymous=True) rospy.Subscriber('/pose', Pose, self.callback) self.pub = rospy.Publisher('/initialpose', PoseWithCovarianceStamped, queue_size=3) def callback(self, data): self.msg.header = Header() self.msg.header.stamp = rospy.Time.now() self.msg.header.frame_id = "map" self.msg.pose.pose = data self.msg.pose.covariance = matrix([[0.09, 0, 0, 0, 0, 0], [0, 0.09, 0, 0, 0, 0], [0, 0, 0.09, 0, 0, 0], [0, 0, 0, 0.25, 0, 0], [0, 0, 0, 0, 0.25, 0], [0, 0, 0, 0, 0, 0.25]]) def run(self): self.pub.publish(self.msg) if __name__ == "__main__": try: m = PoseChange() m.run() except rospy.ROSInterruptException: raise However, this never publishes when run. The /pose topic it's listening to is continuously publishing at 50 Hz. Originally posted by j1337 on ROS Answers with karma: 25 on 2017-07-24 Post score: 1 Answer: Hi @j1337, ROS takes some time to notify the publishers and subscribers of the topics. So, you have two options: 1 - Sleep some seconds before publishing the first message. In the "main" part of your code it could be done like: if __name__ == "__main__": try: from time import sleep print 'Sleeping 10 seconds to publish' sleep(10) print 'Sleep finished.' m = PoseChange() m.run() except rospy.ROSInterruptException: raise 2 - Make sure that there are subscribers (using get_num_connections) when you start publishing.
In the "main" part of your code it would be like: if __name__ == "__main__": try: m = PoseChange() rate = rospy.Rate(10) # 10hz while not rospy.is_shutdown(): connections = m.pub.get_num_connections() rospy.loginfo('Connections: %d', connections) if connections > 0: m.run() rospy.loginfo('Published') break rate.sleep() except rospy.ROSInterruptException, e: raise e If you still have doubts, I made a video (https://youtu.be/BGV6DI_PItA) that shows how to solve exactly this question by checking the subscribers. Originally posted by Ruben Alves with karma: 1038 on 2017-07-24 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by j1337 on 2017-07-24: Thank you. This was extremely helpful. Comment by jayess on 2017-08-09: Why not explain exactly how to solve this question here? The problem with linking to a video for more steps is that now part of the answer is external and this answer is not self-contained. What if the video gets taken down? Comment by Ruben Alves on 2017-10-09: Ok, now the answer is self contained, @jayess. Thanks for the advice. Comment by AlessandroSaviolo on 2020-03-19: THANK YOU!
{ "domain": "robotics.stackexchange", "id": 28426, "tags": "rospy" }
Electromagnetism: Dampening of Magnet in Coil
Question: Explain why, when the switch is closed and the magnet is oscillated vertically, the oscillations will dampen. Here's my approach: Assuming the bottom end of the magnet is the North pole, when this end moves downwards in the coil, the magnetic flux increases in the downwards direction. Thus, by Lenz's law, the resulting induced current must produce a magnetic field that is in the upwards direction inside the coil. For this to happen, the current must be going counter-clockwise. From here, I can't get to the conclusion that there will be an upwards force on the magnet when it moves downwards, and a downwards force on the magnet when it moves upwards. Answer: The upward movement would be produced by the spring stretched beyond the position of static equilibrium. Lenz tells you that the direction of the induced current will always be such as to oppose the motion producing it. As the magnet goes down the induced current will be such as to oppose the downward movement of the magnet. That is, the magnet will feel an upward force and have to do work to move down. While all this is going on the induced current is passing through a circuit which has resistance and so heat is being generated in the circuit. In terms of energy transfer within the system, as the magnet moves down its gravitational potential energy is being converted to kinetic energy, energy stored in the spring and heat. You have damped harmonic motion with the damping provided by the induced current. What happens next depends on the degree of damping. The magnet could eventually stop in the static equilibrium position for the system. The magnet could stop and then be pulled up by the extended spring, and then the induced current would be such as to produce a downward force on the magnet whilst all the time generating heat in the circuit. So you would get damped oscillatory motion until the final state of static equilibrium.
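The energy argument can be made quantitative with a small model (my own illustrative sketch, not part of the original answer): let $\Phi(x)$ be the flux through the coil when the magnet is displaced by $x$, $R$ the total circuit resistance, and $k$ the spring constant. Then \begin{align} \varepsilon = -\frac{d\Phi}{dt} = -\frac{d\Phi}{dx}\dot{x}, \qquad I = \frac{\varepsilon}{R}, \qquad m\ddot{x} = -kx - \frac{1}{R}\left(\frac{d\Phi}{dx}\right)^{2}\dot{x}. \end{align} The induced current thus contributes a drag force proportional to velocity and always opposed to it (the coefficient is a square, so its sign does not depend on the direction of motion), and the dissipated power $\varepsilon^{2}/R \geq 0$ is exactly the energy drained from the oscillation as heat. Closing the switch turns this damping term on; opening it sets $I=0$ and the damping disappears.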
{ "domain": "physics.stackexchange", "id": 33804, "tags": "electromagnetism" }
ROS+Qt question
Question: Hello, I want to develop an application in Qt that can run nodes in ROS. I know one can run the CMakeLists.txt of the package in Qt as per the necessary changes shown in http://answers.ros.org/question/12790/ros-qt-creator-gui/. I already have a developed application that has the normal .pro files. The application has loads of other algorithm code in it and I intend to integrate this application with some Qt code to run ROS nodes. But I cannot understand how to do so, as the ROS package can be run using the CMakeLists.txt. I may not have enough expertise on ROS+Qt, so any help will be highly appreciated. Thanks, Sen Originally posted by sen on ROS Answers with karma: 61 on 2012-09-30 Post score: 0 Answer: Although in theory it is possible to build ROS with qmake/.pro files, I'd still recommend cmake/rosbuild. This means you will need to port your .pro files to cmake. If this is too much work, you can build the majority of your code and algorithms using qmake into libraries (without any ROS dependencies) and then only provide a ROS interface wrapper that integrates ROS via cmake and just links your libraries. Originally posted by dornhege with karma: 31395 on 2012-10-01 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by sen on 2012-10-02: Thanks Dornhege. I like your suggestions and I intend to follow the second suggestion. I will let you know how it turns out.
{ "domain": "robotics.stackexchange", "id": 11184, "tags": "ros, qt" }
Why do metals appear lustrous?
Question: I came across a question asking me the reason for the lustrous appearance of many metals. The answer stated that it was due to the presence of free electrons in the metal. But I don't understand how this works. How do the free electrons affect the reflection of light from the metallic surface? Thanks. Answer: Light is an electromagnetic wave. A metal has a large cloud of relatively free electrons (electrons that are loosely bound to the metal surface). When a beam of light is incident on a metal surface, it polarizes the electron cloud, i.e. some regions on the metal become relatively more "positive" while some regions become relatively more "negative". Thus, this induces a field which makes the electrons start oscillating. This oscillation generates another electromagnetic wave which opposes the incident radiation (an ideal metal will completely oppose the incident light radiation), and hence our incident light rays get reflected.
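A hedged, quantitative version of this picture is the Drude free-electron model (a standard result, not something stated in the answer above): for a metal with free-electron density $n$, \begin{align} \omega_p = \sqrt{\frac{n e^{2}}{\varepsilon_0 m_e}}, \qquad \epsilon(\omega) \approx 1 - \frac{\omega_p^{2}}{\omega^{2}} \quad \text{(damping neglected)}. \end{align} For light with $\omega < \omega_p$ the dielectric function is negative, the refractive index is purely imaginary, the wave cannot propagate into the metal, and the reflectivity approaches unity. Since $\omega_p$ for typical metals lies in the ultraviolet, essentially the whole visible spectrum is strongly reflected, which is the luster.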
{ "domain": "chemistry.stackexchange", "id": 8002, "tags": "physical-chemistry, electrons, metal" }
Speeding up class that uses an ODBC connection
Question: I have created a class that gets data from an ODBC connection. The code works but it is really slow - I'm talking up to 1.20ish minutes to run. I know my code is inefficient but I'm really not sure why it's so slow. Tips or advice on speeding it up would be much appreciated. <?php class machine { public $name = ""; public function __construct($name, $Id) { $this->name = $name; $this->Id = $Id; } Public function Data() { $conn = odbc_connect('monitor', '', ''); if (!$conn) { exit("Connection Failed: " . $conn); } $sql = "SELECT TOP 1 ReaderData.ReaderIndex, ReaderData.CardID, ReaderData.ReaderDate, ReaderData.ReaderTime, ReaderData.controllerID, Left([dtReading],10) AS [date], ReaderData.dtReading FROM ReaderData WHERE ReaderData.controllerID=$this->Id AND ReaderData.CardID = 'FFFFFFF0 ' ORDER BY ReaderData.ReaderIndex DESC;"; $rs = odbc_exec($conn, $sql); if (!$rs) { exit("Error in SQL"); } while (odbc_fetch_row($rs)) { $this->DtReading = odbc_result($rs, "dtReading"); $result = strtotime($this->DtReading) + 2 * 60 * 60; $this->time = time() + 60 * 60 - $result; return round($this->time / 60, 2); } } public function Cycle() { $this->Arr = array(); $conn = odbc_connect('monitor', '', ''); if (!$conn) { exit("Connection Failed: " . 
$conn); } $sql = "SELECT TOP 2 ReaderData.ReaderIndex, ReaderData.CardID, ReaderData.ReaderDate, ReaderData.ReaderTime, ReaderData.controllerID, Left([dtReading],10) AS [date], ReaderData.dtReading FROM ReaderData WHERE ReaderData.controllerID=$this->Id AND ReaderData.CardID = 'FFFFFFF0 ' ORDER BY ReaderData.ReaderIndex DESC;"; $rs = odbc_exec($conn, $sql); if (!$rs) { exit("Error in SQL"); } $data = array(); while (odbc_fetch_row($rs)) { $data[] = $this->DtReading = odbc_result($rs, "dtReading"); } $this->Time1 = ($data[0]); $this->Time2 = ($data[1]); $Time1E = strtotime($this->Time1) + 2 * 60 * 60; $Time2E = strtotime($this->Time2) + 2 * 60 * 60; $this->cycle = $Time1E - $Time2E; return round($this->cycle, 2); } public function SageData() { $conn = odbc_connect('Data hub', '', ''); if (!$conn) { exit("Connection Failed: " . $conn); } $sql = "SELECT [SHOP FLOOR PRODUCTION PLAN].[MACHINE], [SHOP FLOOR PRODUCTION PLAN].[cycletime] FROM [SHOP FLOOR PRODUCTION PLAN] WHERE ((([SHOP FLOOR PRODUCTION PLAN].[MACHINE])='$this->name')); "; $rs = odbc_exec($conn, $sql); if (!$rs) { exit("Error in SQL"); } while (odbc_fetch_row($rs)) { $SageCycle = odbc_result($rs, "cycletime"); } // var_dump($this->SageCycle); odbc_close($conn); return @$SageCycle; } public function GetM() { $q = $this->Cycle(); $qq = $this->SageData(); $M = $q - $qq; // $this->P = $this->M / $this->sageData(); if ($qq == 0) { $this->P = 0; } else { $this->P = $M / $this->sageData(); } return round($this->P, 2) * 100; } public function name() { echo $this->name; } } $machine1 = new machine('ZW01001', 41); $machine4 = new machine('ZW01004', 37); $machine5 = new machine('ZW01005', 28); $machine6 = new machine('ZW01006', 38); $machine7 = new machine('ZW01007', 30); $machine8 = new machine('ZW01008', 31); $machine9 = new machine('ZW01009', 32); $machine10 = new machine('ZW01010', 40); $machine21 = new machine('ZW01021', 13); $machine22 = new machine('ZW01022', 2); $machine23 = new machine('ZW01023', 33); 
$machine24 = new machine('ZW01024', 34); $machine25 = new machine('ZW01025', 35); $machine26 = new machine('ZW01026', 36); $Cycle = (object) array( 'Machines' => array( array( 'cycle' => $machine1->Cycle(), 'percent' => $machine1->GetM(), 'Data' => $machine1->Data() ), array( 'cycle' => $machine4->Cycle(), 'percent' => $machine4->GetM(), 'Data' => $machine4->Data(), ), array( 'cycle' => $machine5->Cycle(), 'percent' => $machine5->GetM(), 'Data' => $machine5->Data() ), array( 'cycle' => $machine6->Cycle(), 'percent' => $machine6->GetM(), 'Data' => $machine6->Data() ), array( 'cycle' => $machine7->Cycle(), 'percent' => $machine7->GetM(), 'Data' => $machine7->Data() ), array( 'cycle' => $machine8->Cycle(), 'percent' => $machine8->GetM(), 'Data' => $machine8->Data() ), array( 'cycle' => $machine9->Cycle(), 'percent' => $machine9->GetM(), 'Data' => $machine9->Data() ), array( 'cycle' => $machine10->Cycle(), 'percent' => $machine10->GetM(), 'Data' => $machine10->Data() ), array( 'cycle' => $machine21->Cycle(), 'percent' => $machine21->GetM(), 'Data' => $machine21->Data() ), array( 'cycle' => $machine22->Cycle(), 'percent' => $machine22->GetM(), 'Data' => $machine22->Data() ), array( 'cycle' => $machine23->Cycle(), 'percent' => $machine23->GetM(), 'Data' => $machine23->Data() ), array( 'cycle' => $machine24->Cycle(), 'percent' => $machine24->GetM(), 'Data' => $machine24->Data() ), array( 'cycle' => $machine25->Cycle(), 'percent' => $machine25->GetM(), 'Data' => $machine25->Data() ), array( 'cycle' => $machine26->Cycle(), 'percent' => $machine26->GetM(), 'Data' => $machine26->Data() ), ) ); $fp = fopen('Rag.json', 'w'); fwrite($fp, json_encode($Cycle)); fclose($fp); Answer: Recycling is the solution For every Machine, you are performing 3 queries and opening/closing 3 connections. So for 10 machines, you are performing 30 queries (no problemo) but opening a connection 30 times (w t f?) and closing it again. There is your problem. 
Instead of having one connection that you query, you open a connection every time you need to query something. This is really bad for performance and kills your application. However, there are multiple things wrong with your code. Not only is it poorly written (it's just a bunch of functions put into a class), the design is simply wrong / there is no design. Application Design When writing software, we always solve problems. If the problem can't be solved in a few lines of code, we split the problem into smaller problems that can be solved. If a lot of small problems exist, it is best to create some structure in those solutions. You could for instance bundle them in a class, or namespace or file. Defining the problem Getting Data from an ODBC DataSource. For this, we have a set of functions provided to us by PHP: the odbc_* functions. Solved. NEXT! The real problem is that it's a real pain in the ass writing the exact same odbc_query all over the place. We need an interface to communicate with that helps us manage MachineObjects that are stored in an odbc MachineDataSource. Defining the solution Now we know the problem, we can start splitting it into smaller ones: We need a Machine object that holds all the Machine data and adds some extra meaning + calculations for us (e.g. the GetM() method). I need a MachineDataSource that, when I pass in an ID and a Name, returns me the correct MachineObject. So you need a class that can represent a Machine and you need an interface that creates these Machines for you. I would go for the Repository Pattern here (look it up). Fine tuning A good way to fine tune this is to use Eager loading.
Instead of performing multiple selects: SELECT something FROM somewhere WHERE id = 1; SELECT something FROM somewhere WHERE id = 2; SELECT something FROM somewhere WHERE id = 3; SELECT something FROM somewhere WHERE id = 4; SELECT something FROM somewhere WHERE id = 5; We could use eager loading and fetch them at the same time: SELECT something FROM somewhere WHERE id IN(1,2,3,4,5); //not sure if this query is correct, but you get the point This however adds a lot of complex stuff to your code; Laravel Eloquent does a good job using a Collection of Objects (Machines in your story) and then loading certain data for all Objects in the Collection.
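The eager-loading idea is language-agnostic. As a minimal sketch (written in Python rather than PHP for brevity, with `run_query` as a hypothetical stand-in for whatever executes SQL over your one shared connection), batching the IDs into a single parameterized IN query replaces N round trips with one:

```python
# Sketch of eager loading: one query for many IDs instead of one query per ID.
# `run_query` is a stand-in for whatever executes SQL over the shared connection.

def build_in_query(table, column, ids):
    """Build a parameterized SELECT ... WHERE column IN (?, ?, ...) query."""
    placeholders = ", ".join("?" for _ in ids)
    sql = f"SELECT * FROM {table} WHERE {column} IN ({placeholders})"
    return sql, list(ids)

def fetch_machines(run_query, ids):
    """Fetch all rows in a single round trip and index them by id."""
    sql, params = build_in_query("ReaderData", "controllerID", ids)
    rows = run_query(sql, params)  # one query, over one already-open connection
    return {row["controllerID"]: row for row in rows}
```

Combined with reusing a single connection for the whole batch, this removes both of the costs identified above: the 30 connection opens and the per-machine queries.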
{ "domain": "codereview.stackexchange", "id": 8590, "tags": "php, optimization, sql" }
Smach Userdata Input/Output keys error
Question: [ERROR] [WallTime: 1455271923.018549] [44.655000] Userdata key 'error_bool' not available. Available keys are: ['parameters'] I keep getting this error... Does anybody have a clue why this is happening? This is how the definition of the states and the state machine is done: def main(self): # Create a SMACH state machine sm = smach.StateMachine(outcomes=['outcome']) sm.userdata.parameters = [] # Open the container with sm: # Add states to the container smach.StateMachine.add('Ready', ready_state.Ready(), transitions={'cartesianMotionRequest':'CartesianMotion', \ 'articularMotionRequest':'ArticularMotion', \ 'fullBodyMotionRequest':'FullBodyMotion', \ 'endEffectorRequest':'EndEffectorCoordination', \ 'trajectoryExecutionRequest':'TrajectoryExecution', \ 'failure':'ErrorHandling', \ 'recordTrajectoryRequest':'RecordTrajectory', \ 'changeEndEffectorRequest':'ChangeEndEffector', \ 'exit':'Finish'}, remapping={ 'error_bool':'error_bool', 'parameters':'parameters' }) smach.StateMachine.add('CartesianMotion', cartesian_motion_state.CartesianMotion(self.hiro), transitions={'success':'Ready', \ 'failure':'ErrorHandling'}, remapping={'parameters':'parameters', 'error_bool':'error_bool'}) ________________________________________________________________________________________- class Ready(smach.State): def __init__(self): smach.State.__init__(self, outcomes=['cartesianMotionRequest', \ 'articularMotionRequest', \ 'fullBodyMotionRequest', \ 'endEffectorRequest', \ 'trajectoryExecutionRequest', \ 'failure', 'recordTrajectoryRequest', \ 'changeEndEffectorRequest', 'exit'], \ output_keys=['parameters'], input_keys=['error_bool'] ) class CartesianMotion(smach.State): def __init__(self, robot): smach.State.__init__(self, outcomes=['success', 'failure'], input_keys=['parameters'], output_keys=['error_bool'] ) Originally posted by MartinI on ROS Answers with karma: 1 on 2016-02-12 Post score: 0 Answer: You should initialize the userdata key before you use it, i.e.
sm.userdata.error_bool = False I copy-pasted your code and applied some simplifications (you can ignore the introspection/threading stuff at the bottom). Try the code below to test it: #!/usr/bin/env python # This Python file uses the following encoding: utf-8 import roslib import rospy import threading import smach from smach import StateMachine import smach_ros from smach_ros import IntrospectionServer, SimpleActionState class Ready(smach.State): def __init__(self): smach.State.__init__(self, outcomes=['test'], output_keys=['parameters'], input_keys=['error_bool'] ) def execute(self, userdata): rospy.loginfo("ready state key error_bool: "+str(userdata.error_bool)) rospy.sleep(1) userdata.parameters = ['some param']; return 'test' class CartesianMotion(smach.State): def __init__(self): smach.State.__init__(self, outcomes=['success', 'failure'], input_keys=['parameters','error_bool'], output_keys=['error_bool']) def execute(self, userdata): rospy.loginfo("cartesian state key parameters: "+str(userdata.parameters)) rospy.sleep(1) return 'success' def main(): rospy.init_node('test')#, log_level=rospy.DEBUG) # Create a SMACH state machine sm = smach.StateMachine(outcomes=['succeeded','aborted','preempted']) sm.userdata.parameters = [] sm.userdata.error_bool = False # Open the container with sm: # Add states to the container smach.StateMachine.add('Ready', Ready(), transitions={'test':'CartesianMotion'}) smach.StateMachine.add('CartesianMotion', CartesianMotion(), transitions={'success':'succeeded', 'failure':'aborted'}) #setup introspection sis = IntrospectionServer('server_name', sm, '/Test') sis.start() #set preemption handler smach_ros.set_preempt_handler(sm) # Create a thread to execute the smach container smach_thread = threading.Thread(target = sm.execute) smach_thread.start() # Wait for ctrl-c rospy.spin() # Request the container to preempt sm.request_preempt() # Block until everything is preempted # (you could do something more complicated to get the execution outcome
if you want it) smach_thread.join() #stop introspection sis.stop() if __name__ == '__main__': main() With the IntrospectionServer you can also use smach_viewer to check the states of your userdata keys / check your state machine (screenshot: http://s9.postimg.org/piqq5lg1r/Screenshot_from_2016_04_06_20_11_53.png). Hope this helps :) Originally posted by ski11io with karma: 46 on 2016-04-06 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 23738, "tags": "ros, smach, parameters" }
gazebo fuerte crashes after calling gazebo/delete_model service
Question: Hi I was following the simulator_gazebo tutorial on ROS wiki when I encountered this problem. First of all I spawned a model using " rosrun gazebo spawn_model -param coffee_cup_description -gazebo -model coffee_cup -x 0 -z 2 " and then I deleted it using " rosservice call gazebo/delete_model '{model_name: coffee_cup}' ". After I called the delete_model command, gazebo crashed unexpectedly and gave the following: [gazebo_gui-3] process has died [pid 3162, exit code 139, cmd /opt/ros/fuerte/stacks/simulator_gazebo/gazebo/scripts/gui __name:=gazebo_gui __log:=/home/chengxiang/.ros/log/fba33798-f058-11e1-8139-0017c4794a79/gazebo_gui-3.log]. log file: /home/chengxiang/.ros/log/fba33798-f058-11e1-8139-0017c4794a79/gazebo_gui-3*.log Other than this instance, gazebo also crashes unexpectedly sometimes when I am doing other things like spawning models. Did any of you encounter this issue before? Any advice will be appreciated. Thanks. Originally posted by ChengXiang on ROS Answers with karma: 201 on 2012-08-27 Post score: 0 Original comments Comment by ChengXiang on 2012-08-27: On a side note, my gazebo sometimes launches with a blackout screen, could this be related? Comment by ChengXiang on 2012-08-27: In addition, I am using Ubuntu 12.04. Answer: Please file a bug report on gazebo trac, and include all details needed to reproduce the error, as well as any debugging information if available. Thanks! Originally posted by hsu with karma: 5780 on 2012-08-28 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by ChengXiang on 2012-08-29: Hi Hsu. Actually the problem seems to have disappeared for now, after I installed the driver for my graphics card. So, I couldn't reproduce the errors to file the report. However, I will file a bug report if such a problem occurs again. Thanks!
{ "domain": "robotics.stackexchange", "id": 10782, "tags": "ros" }
Can a tidally locked planet have their own habitable zone?
Question: As we know, if a planet is near a star and is tidally locked, then the near side is very hot and the far side is very cold. But between the two sides there should be a temperature gradient, and hence an area in which the temperature is suitable for life. Can we also say it has a habitable zone? Answer: As you suggest, it might be possible for a habitable corridor to exist along the stationary terminator. But there are ideas around for more than that. The planet might be exposed to tidal forces, the resulting volcanism of which warms the far side. It might have a thick stormy ocean or atmosphere which evens out the surface temperature (all but the smallest planets have atmospheres). It might have habitable moons which, when tidally locked to the planet, rotate regularly relative to the star. And if it, like Mercury, has an eccentric enough orbit, it might rotate relative to its star although it is tidally locked. Planetary diversity is huge.
{ "domain": "astronomy.stackexchange", "id": 1276, "tags": "tidal-locking, habitable-zone, extra-terrestrial" }
Why the maximum number of supercharges in supergravity must be $Q=32$?
Question: For a supergravity theory not to have particles with spin greater than 2, all books state that $Q\leq 32$, where $Q$ is the number of fermionic supercharges and for a given dimension $D$ it's related to the number of supersymmetries $\mathcal{N}$ through the number of components of the fundamental spinors in that dimension, $C$ as $Q=\mathcal{N} C$. Naively, I would expect each supercharge $Q$ to raise or lower the spin of a given particle by $1/2$, as it does in the 4-dimensional case, but I suspect this is not the case in different dimensions (though all the books I have read are pretty confusing, just analyzing $D=4$ in detail using the chiral properties in that dimension and then hand-waving their way to higher dimensions). If each of the $\mathcal{N}$ supersymmetries could be used once and only once to raise the spin of the states, then I would expect the limit to be $\mathcal{N} \leq 8$ instead of a bound on $Q$, but this is not the case. According to one of my professors, $Q=32$ is just the consequence of wanting $\mathcal{N}=8$ at the most in 4 dimensions, where $C=4$, so that we can compactify the higher-dimensional theory and obtain an acceptable model for our world. However, the $D=11$ supergravity should only include the graviton in that case, right? And $D=10$ theories would only have the graviton and $\mathcal{N}$ gravitinos, which is not true. So what's the right justification for $Q \leq 32$? Answer: You have to first understand the construction of massless multiplets which is found in the beginning of every introduction to supersymmetry, so I won't repeat it here. Then the argument goes like this: Given $Q$ supercharges, half of them will be zero for the massless case, thus leaving you with $Q/2$ non-zero supercharges. From the remaining $Q/2$ supercharges we can construct $Q/4$ lowering operators, and $Q/4$ raising operators. Every raising/lowering operator changes the helicity $\lambda$ by $\pm 1/2$.
So avoiding helicities $\lambda$ greater than $|2|$ requires that \begin{equation} Q/4\leq 8 \end{equation} Therefore the maximum number of supercharges is $Q=32$ for any supergravity.
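To connect the bound $Q \leq 32$ back to specific dimensions, it helps to tabulate the number of real components $C$ of the minimal spinor (standard values, quoted here as a sketch): \begin{align} D=4:&\quad C=4 \ (\text{Majorana}), & \mathcal{N}_{\max} &= 32/4 = 8,\\ D=10:&\quad C=16 \ (\text{Majorana--Weyl}), & \mathcal{N}_{\max} &= 32/16 = 2,\\ D=11:&\quad C=32 \ (\text{Majorana}), & \mathcal{N}_{\max} &= 32/32 = 1. \end{align} So the single statement $Q \leq 32$ reproduces the familiar $\mathcal{N} \leq 8$ in four dimensions and singles out $D=11$, $\mathcal{N}=1$ as the maximal supergravity.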
{ "domain": "physics.stackexchange", "id": 66348, "tags": "supersymmetry, spacetime-dimensions, supergravity" }
How to tag a question on this site (properly)?
Question: Multiple robots in ROS Gazebo SITL with separate MAVlink/MAVproxy software codes is the original question which doesn't have proper tags; Commas are not accepted while tagging, and re-tagging apparently requires 100 to 500 Karma points; Unfortunately, I was able to set just one tag and all other tags require "number, characters or -+.#" (something that I have been unable to understand). I have spent about an hour on this and I would like to set the tags for the question as "gazebo .ros mav quadcopter .launch .world mavconn SITL roslaunch multi-agent". Can you please let me know if that is possible? (What am I doing wrong while tagging?) Thanks for your time and consideration. Prasad N R Appended information: Trying "1gazebo 2.ros 3mav 4quadcopter 5.launch 6.world 7mavconn 8SITL 9roslaunch 10multiagent" resulted in '1gazebo', '2.ros', '3mav', '4quadcopter', '5.launch', '6.world', '7mavconn', '8SITL', '9roslaunch' and '10multi-agent' tags (with numbers). Trying commas with spaces like "gazebo, .ros, mav, quadcopter, .launch, .world, mavconn, SITL, roslaunch, multi-agent" didn't work out either. Other similar attempts like "gazebo -+.# .ros -+.# mav -+.# quadcopter -+.# .launch -+.# .world -+.# mavconn -+.# SITL -+.# roslaunch -+.# multi-agent" didn't work out. Originally posted by PrasadNR on ROS Answers with karma: 37 on 2016-10-21 Post score: 0 Answer: I've just retagged your question for you. Not sure, but it would appear that the dots (.) before ros, launch and world did not make it through the filter and caused your tags to not be updated. If you feel the explanation of what acceptable tags are is confusing, please open an issue over at ros-infrastructure/answers.ros.org. Edit: not sure what you are trying to achieve. Adding numbers does not make sense. Tags are simply single words, separated by spaces. No need for commas or numbers. And please: this is not Twitter: no hashes.
Originally posted by gvdhoorn with karma: 86574 on 2016-10-21 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 26014, "tags": "ros" }
Why should the pharyngeal cavity be essential for articulated speech?
Question: In a lecture, I've been told that one of the signs that Homo habilis and Homo erectus couldn't speak was that they probably lacked the distinction between oral and pharyngeal cavity. But do we really use it, apart from the nasal consonants? Answer: This bit of the book Developmental Neuropsychiatry: Fundamentals describes the difference between the oral cavity in humans and other animals. It doesn't refer to Homo habilis in this context, but it says: As a result of this anatomical pattern, the range of sounds that an animal can make is limited because the pharyngeal cavity, which is necessary for sound amplification, is small. and: Still, this lower position of the larynx produces a larger pharyngeal space above the vocal cords, which makes possible a greater range of sound modification and becomes the key to the production of articulate speech. Which suggests that the pharyngeal cavity has a more general use in speech than just making nasal consonants. This article goes into some detail about Human speech anatomy and its evolution; what it says about the pharyngeal cavity is less about the cavity itself than about the position of the tongue, which allows the separation of that cavity into different sections, leading to different wavelengths being produced. They further say that apes have their tongue entirely in their oral cavity, and are as a result unable to make vowels other than schwas (the basic vowel you make when you're making no effort whatsoever) (they do mention one exception but that species apparently doesn't use its pharynx to make vowels like we do). The article makes another very interesting point that is relevant to your question, I quote:
Indeed, there would have been no selective advantage for retaining whatever mutations led to the evolution of the human supralaryngeal vocal tract unless some form of speech had already been part of hominid culture. That is an excellent point, which is that we need to separate the concept of "speech" from "fully modern speech". Things evolve gradually, and there will have been a continuum from simple communication and vocalizations that our latest common ancestor with other apes likely did, to modern speech with its sounds, grammar, and anatomical and cognitive abilities it implies. And since we no longer have examples of anything on that continuum, we don't know which parts of it we'd be more or less likely to think of as "speech".
{ "domain": "biology.stackexchange", "id": 6764, "tags": "human-biology, communication, anthropology, language" }
ros1_bridge build error for Turtlebot3
Question: Hello all, I am trying to build ros1_bridge from source and facing the following issue. Any idea where I am going wrong? It seems to be pointing to Turtlebot3's sound message/service: $ colcon build --packages-select ros1_bridge --cmake-force-configure --cmake-args -DBUILD_TESTING=FALSE Starting >>> ros1_bridge [Processing: ros1_bridge] [Processing: ros1_bridge] --- stderr: ros1_bridge CMakeFiles/ros1_bridge.dir/generated/turtlebot3_msgs__srv__Sound__factories.cpp.o: In function `ros1_bridge::Factory<turtlebot3_msgs::Sound_<std::allocator<void> >, turtlebot3_msgs::msg::Sound_<std::allocator<void> > >::convert_1_to_2(turtlebot3_msgs::Sound_<std::allocator<void> > const&, turtlebot3_msgs::msg::Sound_<std::allocator<void> >&)': turtlebot3_msgs__srv__Sound__factories.cpp:(.text+0x284): multiple definition of `ros1_bridge::Factory<turtlebot3_msgs::Sound_<std::allocator<void> >, turtlebot3_msgs::msg::Sound_<std::allocator<void> > >::convert_1_to_2(turtlebot3_msgs::Sound_<std::allocator<void> > const&, turtlebot3_msgs::msg::Sound_<std::allocator<void> >&)' CMakeFiles/ros1_bridge.dir/generated/turtlebot3_msgs__msg__Sound__factories.cpp.o:turtlebot3_msgs__msg__Sound__factories.cpp:(.text+0x284): first defined here CMakeFiles/ros1_bridge.dir/generated/turtlebot3_msgs__srv__Sound__factories.cpp.o: In function `ros1_bridge::Factory<turtlebot3_msgs::Sound_<std::allocator<void> >, turtlebot3_msgs::msg::Sound_<std::allocator<void> > >::convert_2_to_1(turtlebot3_msgs::msg::Sound_<std::allocator<void> > const&, turtlebot3_msgs::Sound_<std::allocator<void> >&)': turtlebot3_msgs__srv__Sound__factories.cpp:(.text+0x2a0): multiple definition of `ros1_bridge::Factory<turtlebot3_msgs::Sound_<std::allocator<void> >, turtlebot3_msgs::msg::Sound_<std::allocator<void> > >::convert_2_to_1(turtlebot3_msgs::msg::Sound_<std::allocator<void> > const&, turtlebot3_msgs::Sound_<std::allocator<void> >&)' 
CMakeFiles/ros1_bridge.dir/generated/turtlebot3_msgs__msg__Sound__factories.cpp.o:turtlebot3_msgs__msg__Sound__factories.cpp:(.text+0x2a0): first defined here collect2: error: ld returned 1 exit status make[2]: *** [libros1_bridge.so] Error 1 make[1]: *** [CMakeFiles/ros1_bridge.dir/all] Error 2 make: *** [all] Error 2 --- Failed <<< ros1_bridge [ Exited with code 2 ] Summary: 0 packages finished [1min 3s] 1 package failed: ros1_bridge 1 package had stderr output: ros1_bridge These are my steps: Terminal 1: Source ROS 1 workspace: $ source /opt/ros/melodic/setup.bash $ source ~/catkin_ws/devel/setup.bash Terminal 2: Source ROS 2 workspace: $ source /opt/ros/dashing/setup.bash $ source ~/ros2_dd_ws/install/setup.bash Terminal 3: ros1_bridge workspace get source: $ cd ~/ros1_bridge_ws/src $ git clone -b dashing https://github.com/ros2/ros1_bridge.git $ . ~/catkin_ws/devel/setup.bash $ . ~/colcon_ws/install/setup.bash ROS_DISTRO was set to 'melodic' before. Please make sure that the environment does not mix paths from different distributions. 
Verify environment: $ echo $CMAKE_PREFIX_PATH | tr ':' '\n' /home/swaroophs/ros2_dd_ws/install/rosrect-listener-agent /home/swaroophs/catkin_ws/devel /opt/ros/melodic ros1_bridge build (FAILS): $ colcon build --packages-select ros1_bridge --cmake-force-configure --cmake-args -DBUILD_TESTING=FALSE This fails with the same multiple-definition linker error shown above. Originally posted by swaroophs on ROS Answers with karma: 25 on 2020-06-08 Post score: 0 Answer: This seems to be a bug in the ros1_bridge due to the fact that in the turtlebot3_msgs package there is a message and a service with the same name: Sound. That case wasn't handled correctly. Please see https://github.com/ros2/ros1_bridge/pull/272 which fixes the problem on master. Backports to the various ROS distros will need to happen too. You should be able to apply the patch locally to the dashing branch you are using before building the ros1_bridge package. Originally posted by Dirk Thomas with karma: 16276 on 2020-06-08 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by swaroophs on 2020-06-08: Thanks @Dirk for commenting on this so quickly! I tried out the patch in __init__.py locally and it works! P.S: Thanks for being so active on these forums. I pretty much see you everywhere someone has a ROS 2 related question, helping them. I really appreciate the time and effort you put in :). Big fan.
{ "domain": "robotics.stackexchange", "id": 35071, "tags": "ros2, turtlebot3" }
Difference between a depth point cloud and a depth image (in their usage)
Question: Hello, I have recently started doing image processing with the Kinect v2 (with opencv) and I was thinking of adding the third dimension to this processing, but I have a basic question about the usage of the depth pointcloud and the depth image. I think I know their differences (the pc is a data structure containing data about (x,y,z) and the depth image is an actual image which contains data about the distance from the sensor), but I cannot understand why someone would use a depth point cloud for computations? Depth images sound much simpler for implementing tasks. I mean, if I detect an object and am able to determine its location in the 2 dimensions of an rgb image, then with the right calibration of the sensor, wouldn't it be easy to know its distance from the sensor from the depth image at the same location where I located it in the rgb image? Lastly, for what tasks would I use depth pointclouds and what are their advantages in comparison to, for example, a depth image? Thanks for answering and for your time in advance, Chris Originally posted by patrchri on ROS Answers with karma: 354 on 2016-10-07 Post score: 0 Answer: https://www.quora.com/What-is-the-relation-between-Depth-Maps-and-Point-Clouds Originally posted by patrchri with karma: 354 on 2016-10-14 This answer was ACCEPTED on the original site Post score: 0
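To make the relationship concrete: a point cloud is what you get by back-projecting every depth-image pixel through the camera intrinsics, and the 3D form is what registration and rigid-transform operations work on. A minimal sketch of that back-projection (the intrinsic values fx, fy, cx, cy below are placeholders for illustration, not real Kinect v2 calibration):

```python
def deproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z (metres) into camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A depth image plus known intrinsics carries the same information as a
# point cloud: the cloud is just the deprojection of every valid pixel.
# Placeholder intrinsics, for illustration only:
fx = fy = 525.0
cx, cy = 320.0, 240.0

# The principal-point pixel at 1 m depth lands on the optical axis.
point = deproject(320, 240, 1.0, fx, fy, cx, cy)
```

The practical difference is that once the data is in (x, y, z) form you can rotate, translate, and merge views from several sensors directly, which is awkward to do on the 2D image grid.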
{ "domain": "robotics.stackexchange", "id": 25923, "tags": "ros, kinect, ros-kinetic, camera-depth-points, depth-image" }
roboearth detection and other issues
Question: Hello everyone, I am a newbie roboearth user and have recently managed to integrate roboearth with ros, resolving some build and run-time errors. I started training roboearth with a bottle. The first problem I encountered is that I couldn't see any points on the screen (i.e. re_object_recorder does not receive any point clouds) when I enable depth-registration. On the contrary, I can see the points when I disable depth-registration. Here are two screenshots: I continued training with depth-registration disabled, even though the tutorial explicitly states otherwise. After training finished, I uploaded my model to roboearth; everything seemed fine up to this point. Then, I tried to detect the bottle as shown in the ros tutorials. I started openni (again, with depth-registration disabled for the same reason) and the other nodes (re_kinect_object_detector etc...). I downloaded my object, and roboearth just doesn't recognize the bottle. There is a hint in the tutorial page, which I really took into account. I believe my model is dense enough (you can search for my model through the roboearth api, it is called bottle.screensaversolution). So, what might have gone wrong? Thanks in advance. Originally posted by yigit on ROS Answers with karma: 796 on 2012-10-03 Post score: 1 Answer: Please note that what you are using there is basically a 2D-feature based recognition algorithm. The depth information from the Kinect is just used for filtering out implausibly detected object poses. So you will get best results with a well-textured object. A good first test is to use the video game package that usually comes with a Kinect. With that said, I really wonder about the problem with the depth registration here. It really should not make a difference for receiving a point cloud. Can you check with RViz if there is a point cloud published when you have depth registration enabled? If you did not see them already, maybe also take a look at these hints.
Originally posted by ddimarco with karma: 916 on 2012-10-03 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by yigit on 2012-10-30: Ddimarco thanks for your help and sorry it took me so long to reply. The problem about depth registration was caused by a bug (I suppose) in the openni drivers in the ubuntu repos. I built the drivers from the source and depth-registration problem is solved now. Then roboearth started to recognize.
{ "domain": "robotics.stackexchange", "id": 11218, "tags": "ros, openni, 3d-object-recognition, roboearth" }
How can there be really any instantaneous velocity?
Question: I have read about Zeno's arrow paradox that tells us there is no motion of the arrow at a particular instant of its flight. It can be inferred that there can be no velocity at any instant. Moreover we cannot calculate velocity at any instant in the real world (of course it can be done by using calculus) but how can this be possible? What is the intuition behind this concept? Answer: At a "frozen" instant of time, the arrow may not be moving - but this is a tautology, since movement is something that requires time. However, even in that frozen instant the arrow does have a velocity (instantaneous velocity, if you will). Imagine that time is a series of huge number of discrete frames (or instead imagine that it is continuous, and that we are taking finer and finer discrete approximations). The position of the arrow jumps to the right from frame to frame. How does the arrow "know" how far to travel from one frame to the next? If the only piece of information "stored" in one frame were its position, then the arrow wouldn't be able to determine this! The necessary information, which is the instantaneous velocity of the arrow, must be as much a part of this frozen frame as all the information related to the arrow's position. More formally, one says that the configuration space of a physical system, which is the set of all information needed to predict its future (and thus all the information associated with a point in time) includes not only the list of positions of all objects, but also their velocities.
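The "finer and finer discrete approximations" mentioned in the answer can be played out numerically: the average velocity over a shrinking frame duration approaches the instantaneous value that calculus assigns to the frozen instant. A small illustration with the made-up trajectory x(t) = t², whose exact derivative at t = 1 is 2:

```python
def position(t):
    return t * t  # example trajectory x(t) = t^2

def average_velocity(t, dt):
    # displacement over a small but finite "frame" of duration dt
    return (position(t + dt) - position(t)) / dt

# As dt shrinks, the frame-to-frame average approaches the
# instantaneous velocity dx/dt = 2t (exactly 2 at t = 1).
estimates = [average_velocity(1.0, 10.0 ** -k) for k in range(1, 6)]
```

Each entry is closer to 2 than the last: the instantaneous velocity is the limit the frames encode, which is exactly the extra piece of information a single "frozen" position cannot carry.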
{ "domain": "physics.stackexchange", "id": 15742, "tags": "kinematics, time, velocity, differentiation, discrete" }
Serializing a table for filing
Question: I have a table in Lua, which contains two 1-dimensional arrays in which each array contains approximately 800,000 elements. I want to serialize this Lua table to file efficiently. Hence, I planned to use Lua C bindings. #include "lua.h" #include "lauxlib.h" #include <stdio.h> #include <assert.h> static int do_it(lua_State *L) { assert(L && lua_type(L, -1) == LUA_TTABLE); int len, idx; void *ptr; FILE *f; size_t r; lua_pushstring(L, "p"); lua_gettable(L, -2); len = lua_objlen(L, -1); // instead of using lua_rawlen, i used lua_objlen. see below // len = lua_rawlen(L, -1); // it throws the following error // lua: error loading module 'savetable' from file './savetable.so': // ./savetable.so: undefined symbol: lua_rawlen int p_values[len]; for (idx = 1; idx <= len; idx++) { lua_rawgeti(L, -1, idx); p_values[idx - 1] = (int)lua_tonumber(L, -1); lua_pop(L, 1); } f = fopen("p.bin", "wb"); assert(f); r = fwrite(p_values, sizeof(int), len, f); printf("[p] wrote %zu elements out of %d requested\n", r, len); fclose(f); lua_pop(L, 1); lua_pushstring(L, "q"); lua_gettable(L, -2); len = lua_objlen(L, -1); double q_values[len]; for (idx = 1; idx <= len; idx++) { lua_rawgeti(L, -1, idx); q_values[idx - 1] = (double)lua_tonumber(L, -1); lua_pop(L, 1); } f = fopen("q.bin", "wb"); assert(f); r = fwrite(q_values, sizeof(double), len, f); printf("[q] wrote %zu elements out of %d requested\n", r, len); fclose(f); lua_pop(L, 1); return 1; } int luaopen_savetable(lua_State *L) { static const luaL_reg Map[] = {{"do_it", do_it}, {NULL, NULL}}; luaL_register(L, "mytask", Map); return 1; } Please note that for debugging purpose, I have defined two very small 1-dimensional arrays: my_table = {p = {11, 22, 33, 44}, q = {0.12, 0.23, 0.34, 0.45, 0.56}} require "savetable" mytask.do_it(my_table) I used the following commands to compile and run it: > gcc -I/usr/include/lua5.1 -o savetable.so -shared savetable.c -fPIC > lua wrapper.lua The code works, however, I am looking for suggestions 
to make the table serialization to file much faster than it currently is. Please note that I am using Lua 5.1 on a 64-bit Ubuntu PC. Answer: // undefined symbol: lua_rawlen This happens because there is no such function in the Lua 5.1 C API. As stated by the documentation, lua_objlen is the preferred way to get the table length. FYI the lua_rawlen function first appeared in Lua 5.2. It is very handy to check the Lua Reference Manual to see the available C API for a particular Lua version. Serialization with zero latency Wow, that title stands out. Let me explain what I mean: serialization with zero latency means there is no serialization at all. Watch out: it heavily depends on the origin of your table. The caveats: you must control the table creation only inserts, no removes If this is the case, follow on: As you stated, your table is actually an array of Lua numbers. In C a Lua number is lua_Number, which is probably a double value. We can create a custom table implementation to only store Lua numbers in it. The table will be continuous memory with lua_Number members in it. typedef struct { lua_Number *items; size_t len; // number of entries in items size_t cap; // maximum `len` value before we will `realloc(items)` } fasterarray; To provide fasterarray we'll use lua_newuserdata. It will allocate memory that will be watched by the Lua garbage collector. In particular, when there are no more references the allocated userdata will be collected. static int new(lua_State *L) { fasterarray *fa = lua_newuserdata(L, sizeof(fasterarray)); luaL_getmetatable(L, NAME); // <-- setting metatable, see below lua_setmetatable(L, -2); // <-- fa->len = 0; fa->cap = 8; fa->items = calloc(fa->cap * sizeof(lua_Number), 1); return 1; } The insert function will realloc items when more space is needed.
static int insert(lua_State *L) { fasterarray *fa = lua_touserdata(L, 1); lua_Number num = lua_tonumber(L, 2); if (fa->len == fa->cap) { fa->cap *= 2; fa->items = realloc(fa->items, fa->cap * sizeof(lua_Number)); assert(fa && fa->items); } fa->items[fa->len++] = num; return 0; } Important part of this implementation is garbage collection. userdata will be collected by Lua automagically, but the memory behind items must be freed directly in C. To handle this we will use __gc metatable method. static int gc(lua_State *L) { fasterarray *fa = lua_touserdata(L, 1); if (fa && fa->items) free(fa->items); printf("gc\n"); return 0; } Full example: fasterarray.c #include <lua.h> #include <lauxlib.h> #include <stdlib.h> #include <stdio.h> #include <assert.h> #define NAME "fasterarray" typedef struct { lua_Number *items; size_t len; size_t cap; } fasterarray; static int new(lua_State *L) { fasterarray *fa = lua_newuserdata(L, sizeof(fasterarray)); luaL_getmetatable(L, NAME); lua_setmetatable(L, -2); fa->len = 0; fa->cap = 8; fa->items = calloc(fa->cap * sizeof(lua_Number), 1); return 1; } static int gc(lua_State *L) { fasterarray *fa = lua_touserdata(L, 1); if (fa && fa->items) free(fa->items); printf("gc\n"); return 0; } static int insert(lua_State *L) { fasterarray *fa = lua_touserdata(L, 1); lua_Number num = lua_tonumber(L, 2); if (fa->len == fa->cap) { fa->cap *= 2; fa->items = realloc(fa->items, fa->cap * sizeof(lua_Number)); assert(fa && fa->items); } fa->items[fa->len++] = num; return 0; } int luaopen_fasterarray(lua_State *L) { // fasterarray metatable luaL_newmetatable(L, NAME); luaL_register(L, NULL, (luaL_Reg []) { {"__gc", gc}, {NULL, NULL} }); // exported funcs luaL_register(L, NAME, (luaL_Reg []) { {"new", new}, {"insert", insert}, {NULL, NULL} }); return 1; } example.lua require"fasterarray" local ar = fasterarray.new() local i = 700000 while i > 0 do fasterarray.insert(ar, i) i = i - 1 end
{ "domain": "codereview.stackexchange", "id": 31183, "tags": "performance, c, serialization, lua, lua-table" }
Best localization method?
Question: I am making a robot that is supposed to roam inside my house and pick up trash using openCV. I plan to send information from my arduino mega to my arduino nano connected to a windows PC using radio transceivers. I also plan to send a video feed from the raspberry pi camera over WiFi to the windows PC. The windows PC then uses openCV and processes other information from the sensors and sends commands back to the arduino mega. I have right now: Arduino mega raspberry pi + usb camera + wifi dongle Xbox 360 kinect wheel encoders sonar distance sensor arduino nano windows PC I want to know how to keep track of the robot, for example, which room it is in. I think what I am trying to do is SLAM, but I just need to make the map once because the rooms don't change much. I am open to ideas. Cost is a factor. Answer: Looking at your hardware, only the wheel encoders are suitable for environment-independent localization, which is not enough in most cases. The localization method using the wheel encoders is called "odometry". The other sensors you have are suited more for known environments and map-based navigation. The camera could be used for marker detection and mapping, and the distance sensor can be used for matching the location to a map (a Particle Filter is the thing). In any case, I would add some inertial sensors (gyro, accelerometer or an integrated IMU) to your system to improve the localization performance.
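Since odometry from the wheel encoders is the one environment-independent option listed, a minimal sketch of the standard differential-drive dead-reckoning update may help (the function name and the 0.3 m track width below are made up for illustration; wheel travel distances would come from encoder ticks times metres-per-tick):

```python
import math

def odom_update(x, y, theta, d_left, d_right, track_width):
    """One dead-reckoning step from left/right wheel travel distances (metres)."""
    d = (d_left + d_right) / 2.0                # distance moved by robot centre
    dtheta = (d_right - d_left) / track_width   # change in heading (radians)
    # the midpoint heading gives a better arc approximation than the start heading
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

# straight ahead: both wheels travel 0.5 m, heading unchanged
pose = odom_update(0.0, 0.0, 0.0, 0.5, 0.5, 0.3)
```

The drift of this estimate is exactly why the answer recommends adding an IMU and, for map-based correction, a particle filter over the range-sensor readings.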
{ "domain": "robotics.stackexchange", "id": 519, "tags": "arduino, mobile-robot, localization" }
Error when calculating Alt/Az from Ra/Dec
Question: I wrote a program based on this tutorial: http://www.stjarnhimlen.se/comp/ppcomp.html to calculate altitude and azimuth of a celestial object. When I compare the calculated RA/Dec values to the ones from Stellarium, they are very accurate. But when I compare the Alt/Az values to the ones from Stellarium, there is an error of about 2 degrees! I use the following method to calculate Alt, Az: GMST0 = Ls + 180_degrees # Ls = Sun's longitude GMST = GMST0 + UT LST = GMST + local_longitude HA = LST - RA x = cos(HA) * cos(Decl) y = sin(HA) * cos(Decl) z = sin(Decl) xhor = x * sin(lat) - z * cos(lat) yhor = y zhor = x * cos(lat) + z * sin(lat) az = atan2( yhor, xhor ) + 180_degrees alt = asin( zhor ) = atan2( zhor, sqrt(xhor*xhor+yhor*yhor) ) A specific example: Test date 15.09.2018, time 15:00 UT Planet: Mercury Coordinates: +47.55777777° +8.89888888° What stellarium says: RA = 11h 18m 13.26s Dec = +6°25'08.5" Az = +250°21'13.2" Alt = +25°25'00.1" What my program says: RA = 11h 18m 14s Dec = 6° 25' 6.59" Az = +248° 49' 6.9" Alt = +26° 33' 16.43" Thanks in advance! Answer: I have the answer to your question, in case you have not already worked it out! You have used the wrong value for the Longitude of the Sun in your calculation of GMST0. You used the Ecliptic Longitude, lonsun = v + w (using the symbols on the tutorial website you mentioned) but you need the Sun's mean longitude, Ls = M + w It so happens that I had already written a program based on the tutorial you mentioned: http://www.stjarnhimlen.se/comp/ppcomp.html and have used it to check the results for your example. Using 'lonsun' gives values very close to what you got: Altitude = +26deg 33min 16sec, Azimuth = 16hrs 35min 16sec = 248deg 49.0min but using the correct 'Ls' gives the following results: Altitude = +25deg 24min 57sec, Azimuth = 16hrs 41min 24sec = 250deg 21.0min which are very close to the Stellarium values.
Finally I must say that, in spite of the comments in previous answers, I would highly recommend that tutorial website, and get pretty accurate results using all of its calculation methods.
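For reference, the horizontal-coordinate step quoted in the question transcribes into a small runnable function (angles in radians; the GMST0 bug identified above lives upstream of this step, in the hour angle, so this conversion is the same either way):

```python
import math

def ha_dec_to_alt_az(ha, dec, lat):
    """Hour angle / declination -> altitude / azimuth; all angles in radians."""
    x = math.cos(ha) * math.cos(dec)
    y = math.sin(ha) * math.cos(dec)
    z = math.sin(dec)
    xhor = x * math.sin(lat) - z * math.cos(lat)
    yhor = y
    zhor = x * math.cos(lat) + z * math.sin(lat)
    # clamp guards against rounding pushing |zhor| marginally past 1
    zhor = max(-1.0, min(1.0, zhor))
    az = math.atan2(yhor, xhor) + math.pi   # azimuth measured from north
    alt = math.asin(zhor)
    return alt, az

# sanity check: on the meridian (HA = 0) with dec == lat the object is at the zenith
alt, az = ha_dec_to_alt_az(0.0, math.radians(47.5), math.radians(47.5))
```

Because the geometry step is this simple, a constant ~2-degree offset is a strong hint that the error is in the sidereal-time inputs rather than the trigonometry, which is exactly what the answer found.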
{ "domain": "astronomy.stackexchange", "id": 7283, "tags": "positional-astronomy, azimuth" }
Recursive merge sort in python
Question: I'm very new to python, however I'm not new to programming, as I've been doing C for some time. So here is my practice of a merge sort. I looked at other questions, however they were many more lines compared to mine, which leads me to believe I'm doing something wrong. I come here to look for best practices in python, and I mean the best of the best. I want to know all the little details! Most importantly, whether there is any way to shorten the code without sacrificing efficiency. #!/usr/bin/python def merge_sort(array): ret = [] if( len(array) == 1): return array; half = len(array) / 2 lower = merge_sort(array[:half]) upper = merge_sort(array[half:]) lower_len = len(lower) upper_len = len(upper) i = 0 j = 0 while i != lower_len or j != upper_len: if( i != lower_len and (j == upper_len or lower[i] < upper[j])): ret.append(lower[i]) i += 1 else: ret.append(upper[j]) j += 1 return ret array = [4, 2, 3, 8, 8, 43, 6,1, 0] ar = merge_sort(array) print " ".join(str(x) for x in ar) #>>> 0 1 2 3 4 6 8 8 43 Thanks CR. Answer: First of all, there is a bug in the code - if an array is empty, you'll get a "maximum recursion depth exceeded" error. Improve your base case handling: if len(array) <= 1: return array Other improvements: the merging logic can be simplified - loop while there are elements in both arrays and append what is left after the loop extract the merging logic into a separate merge() function improve the variable naming - e.g.
use left_index and right_index as opposed to i and j, result instead of ret no need to enclose if conditions into outer parenthesis Python 3 compatibility: use print() as a function instead of a statement use // for the floor division Improved version: def merge(left, right): """Merge sort merging function.""" left_index, right_index = 0, 0 result = [] while left_index < len(left) and right_index < len(right): if left[left_index] < right[right_index]: result.append(left[left_index]) left_index += 1 else: result.append(right[right_index]) right_index += 1 result += left[left_index:] result += right[right_index:] return result def merge_sort(array): """Merge sort algorithm implementation.""" if len(array) <= 1: # base case return array # divide array in half and merge sort recursively half = len(array) // 2 left = merge_sort(array[:half]) right = merge_sort(array[half:]) return merge(left, right)
{ "domain": "codereview.stackexchange", "id": 24129, "tags": "python, mergesort" }
regression with noisy target variable
Question: How can I approach a regression problem where the input data is not noisy but the target variable is noisy? Are there any regression algorithms that are robust to a noisy target variable? Also, is it possible to de-noise the target variable somehow? If so, how? Answer: It depends how much noise: If it's only a little noise, say for instance 2% of the target values are off by a small value, then you can safely ignore it since the regression method will rely on the most frequent patterns anyway. If it's a lot of noise, like 50% of the target values are totally random, then unless you can detect and remove the noisy instances you can forget it: the dataset is useless. In general ML algorithms are based on statistical principles, to some extent their job is to avoid the noise and focus on the regular patterns. But there are two things to pay attention to: Is the noise truly random, or does it introduce some biases in the data? The latter is a much more serious issue. Noisy data is even more likely to cause overfitting, so extra precaution should be taken against it: depending on the data, it might be necessary to reduce the number of features and/or the complexity of the model.
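As one concrete illustration of an estimator that tolerates a moderately noisy target (the answer's first case), here is a Theil–Sen line fit: it takes the median of all pairwise slopes, so isolated corrupted target values cannot drag the fit the way they drag a least-squares line. This is one classical robust choice added for illustration, not the only option:

```python
from statistics import median

def theil_sen(xs, ys):
    """Robust line fit: median of pairwise slopes, then median intercept."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs)) for j in range(i + 1, len(xs))
              if xs[j] != xs[i]]
    slope = median(slopes)
    intercept = median(y - slope * x for x, y in zip(xs, ys))
    return slope, intercept

xs = list(range(10))
ys = [3 * x + 1 for x in xs]   # true relationship y = 3x + 1
ys[7] = 1000                   # one badly corrupted target value
slope, intercept = theil_sen(xs, ys)
```

An ordinary least-squares fit on the same data would be pulled far off by the single outlier; the median-based fit recovers the true slope and intercept exactly, which is the "rely on the most frequent patterns" behaviour the answer describes.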
{ "domain": "datascience.stackexchange", "id": 9603, "tags": "machine-learning, deep-learning, regression, svm, supervised-learning" }
Temperature of a gas giant 23 AU from Fomalhaut
Question: If a gas giant, weighing about 30 Jupiter masses, orbited the A-type star Fomalhaut at 23 AU, what would its temperature be? Would it be warm enough to have ammonia clouds like Jupiter or Saturn, or would it be too cold to support these and instead "condense" into an ice giant? Answer: The mass ranges of stars and planets are separated by the mass range of a third type of object, brown dwarfs. Brown dwarfs have a mass range from about 13 Jupiter masses (or about 4,131.4 Earth masses) to about 75 Jupiter masses (or about 23,835 Earth masses). An object with about 30 Jupiter masses (or about 9,534 Earth masses) would be a brown dwarf, not a planet. If your planet is 23 AU from Fomalhaut, the amount of radiation it receives from Fomalhaut would be 1 divided by 23 squared, or 1 divided by 529, or 0.001890, times as much as it would receive if it were 1 AU from Fomalhaut. Fomalhaut A has a luminosity of about 16.63 plus or minus 0.48 times that of the Sun. 16.63 times 0.001890 is 0.031436672. Thus a planet 23 AU from Fomalhaut A would receive about 0.031436672 times as much radiation as Earth gets from the Sun. The semi-major axis of Saturn's orbit is 9.5826 AU. So Saturn receives about 1 divided by 9.5826 squared, or 1 divided by 91.82622276, or 0.010890135, times as much radiation as Earth receives from the Sun. So your brown dwarf would, if it were identical to Saturn, have a temperature somewhat higher than that of Saturn. Of course your 30 Jupiter mass brown dwarf would be very different in many ways from Saturn, which would change its temperature. I note that the star TW Piscis Austrini is also known as Fomalhaut B, since it is a companion star to Fomalhaut. And it is claimed that the star LP-876-10 is also a companion of Fomalhaut. https://en.wikipedia.org/wiki/TW_Piscis_Austrini https://en.wikipedia.org/wiki/Fomalhaut_C I also note that Fomalhaut A is known to have several rings of dust surrounding it. And there have been a number of attempts to detect planets orbiting Fomalhaut A.
https://en.wikipedia.org/wiki/Fomalhaut
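The answer's flux arithmetic can be reproduced in a few lines. The final temperature line is an addition for illustration, using the standard equilibrium-temperature scaling T ∝ flux^(1/4), not a figure from the answer itself:

```python
L_fomalhaut = 16.63                    # luminosity of Fomalhaut A, solar units
flux = L_fomalhaut / 23.0 ** 2         # insolation at 23 AU, Earth = 1
flux_saturn = 1.0 / 9.5826 ** 2        # Saturn's insolation, Earth = 1

# Equilibrium temperature scales as the fourth root of absorbed flux, so
# ~2.9x Saturn's insolation corresponds to only ~1.3x Saturn's
# equilibrium temperature (all else being equal).
temp_ratio_vs_saturn = (flux / flux_saturn) ** 0.25
```

The fourth-root scaling is why a near-tripling of insolation still leaves the object in the same broad cloud-chemistry regime as the answer suggests, before accounting for the brown dwarf's own internal heat.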
{ "domain": "astronomy.stackexchange", "id": 6835, "tags": "exoplanet, temperature, gas-giants" }
Sequence prediction with unlimited predictions
Question: I have a special kind of prediction problem. I have observed $M$ sequences $X_m = [x_1, x_2, ..., x_N]$ where the distance $d$ between $x_n$ and $x_{n+1}$ is drawn from the same normal distribution, eg $d \sim N(\mu, \sigma^2)$. I can learn the parameters $\mu, \sigma$. Now I need to predict/generate a whole sequence at once where a prediction $\hat{x}_n$ is considered correct if it falls within some absolute tolerance of the true data point $x_n$. There is one caveat: I can make as many predictions as I want without penalty but there must always be a minimum distance $\epsilon$ between predictions $\hat{x}_n$ and $\hat{x}_{n+1}$, where we can safely assume $\epsilon << d$. Intuitively, this makes me want to predict a pattern rather than trying to predict each point individually. To re-iterate: a prediction that is outside the tolerance of a true data point is not penalized. We only need to maximize the number of correct predictions (that fall within the tolerance of a true data point). Example 1 Prediction: [10, 20, 30] True observation: [11, 21, 31] ------------------------------ 3 correct predictions if tolerance >= 1, else 0 correct predictions Example 2 Epsilon = 4 (eg we can predict with a minimum distance 4) Prediction: [6, 10, 15, 22, 30, 35] True observation: [11, 21, 31] ----------------------------------- tolerance = 1 => 2 correct predictions (10, 30) tolerance = 2 => 3 correct predictions (10, 22, 30) What would be a good way to approach this problem? Are there problems that are similar? Edited for clarity. Answer: Your biggest issue with the evaluation scheme you have - "success" means within tolerance, "failure" means outside tolerance, plus your constraint on model outputs needing to vary per time step - is that it will be hard to extract gradients in order to train the prediction model directly. 
This rules out many simple and direct regression models, at least if you want to use "maximise number of scores within tolerance" as your objective function. The constraints on sequential predictions and allowing re-tries are also non-differentiable if taken as-is. I think you have two top level choices: 1. Soften the loss function, and add the hard function as a metric Use a differentiable loss function that has best score when predictions are accurate and constraints are met. For example your loss function for a single predicted value could be $$L(\hat{x}_n, \hat{x}_{n+1}, x_{n+1}) = (\hat{x}_{n+1} - x_{n+1})^2 + \frac{a}{1+e^{s(|\hat{x}_n - \hat{x}_{n+1}| - \epsilon)}}$$ the second constraint part is essentially sigmoid with $a$ controlling the relative weight of meeting constraints with accuracy of the prediction and $s$ controlling the steepness of cutoff around the constraint. a. The weighting between prediction loss and constraint loss will be a hyper-parameter of the model. So you would need to include $a$ and $s$ amongst parameters to search if you used my suggested loss function. b. You can use your scoring system, not as an objective function, but as a metric to select the best model on a hyper-parameter search. c. With this approach you can use many standard sequence learning models, such as LSTM (if you have enough data). Or you could just use a single step prediction model that you feed current prediction plus any other features of the sequence that is allowed to know, and generate sequences from it by calling it repeatedly. This system should encourage re-tries that get closer to the true value. 2. Use your scoring system directly as a learning goal This will require some alternative optimising framework to gradient descent around the prediction model (although some frameworks can generate gradients internally). 
Genetic algorithms or other optimisers could be used to manage parameters of your model, and can attempt to change model parameters to improve results. For this second case, assuming you have some good reason to want to avoid constructing a differentiable loss function at all, then this problem can be framed as Reinforcement Learning (RL): State: Current sequence item prediction (or a null entry), as well as any known information such as tolerance, length of sequence, current sequence item value (which may be different from current prediction) $\epsilon$, $d$, $\mu$ or $\sigma$ can be part of the current state. The action is to select next sequence value prediction, or probably more usefully, the offset for the next sequence item value. Using offsets allows you easily add constraint for minimum $\epsilon$ The reward is +1 for being within tolerance or 0 otherwise. Time steps match the time steps within a current sequence. You can use this to build a RL environment and train an agent that will include your prediction/generator model inside it. There are a lot of options within RL for how to manage that. But what RL gives you here is a way to define your goal formally using non-differentiable rewards, whilst internally the model can still be trained using gradient based methods. The main reason to not use RL here is if the prediction model must be assessed at the end of generating the sequence. In which case the "action" might as well be the whole sequence, and becomes much harder to optimise. It is not 100% clear to me from the question whether this is the case. Caveat: RL is a large and complex field of study. If you don't already know at least some RL, you can expect to spend several weeks getting to grips with it before starting to make progress on your original problem. There are alternatives to RL that could equally apply, such as NEAT - deciding which could be best involves knowing far more about the project (e.g. 
the complexity of the sequences you wish to predict) and practical aspects such as how much time you have available to devote to learning, testing and implementing new techniques. Have you forgotten something? If you allow infinite re-tries, then an obvious strategy is to generate a very large sequence moving up and down using different step sizes (all greater than $\epsilon$). This doesn't require any learning model, just a bit of smart coding to cover all integers eventually. Chances are this model is only a few lines of code in most languages. If this is to be ruled out, then some other rule or constraint is required: Perhaps only positive increments are allowed in the predicted sequence (so we cannot re-try by subtracting and trying again)? This conflicts with your "unlimited predictions" statement. Perhaps a sub-goal here is to make the guessing efficient? In which case RL could be useful, as you can add a discount factor to reward processing in order to make the model prefer to get predictions correct sooner.
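The softened loss proposed in option 1 transcribes directly into code; the parameter values a = 1, s = 10 and eps = 4 below are arbitrary demo choices, not values from the answer:

```python
import math

def soft_loss(prev_pred, pred, target, eps, a=1.0, s=10.0):
    """Squared prediction error plus a smooth penalty for steps below eps."""
    accuracy = (pred - target) ** 2
    # sigmoid term ramps from ~a (constraint violated, step << eps)
    # down to ~0 (constraint comfortably satisfied, step >> eps)
    constraint = a / (1.0 + math.exp(s * (abs(pred - prev_pred) - eps)))
    return accuracy + constraint

# a step of 10 with eps = 4: essentially no constraint penalty
ok = soft_loss(prev_pred=0.0, pred=10.0, target=10.0, eps=4.0)
# a step of 1 with eps = 4: the penalty saturates near a = 1
bad = soft_loss(prev_pred=0.0, pred=1.0, target=1.0, eps=4.0)
```

Because both terms are differentiable, gradients flow through the constraint as well as the accuracy term, which is what makes this formulation usable with standard sequence models before falling back to the heavier RL machinery of option 2.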
{ "domain": "datascience.stackexchange", "id": 5397, "tags": "regression, prediction, sequence" }
Representation of $SU(2)$, i.e., spin
Question: Let \begin{equation} X= \begin{bmatrix} 0 & 1\\ 0 & 0\\ \end{bmatrix}, \qquad Y= \begin{bmatrix} 0 & 0\\ 1 & 0\\ \end{bmatrix}, \qquad H= \begin{bmatrix} 1 & 0\\ 0 & -1\\ \end{bmatrix}\tag{1} \end{equation} If $V_m$ is the $(m+1)$-dimensional complex representation of $\text{sl}(2,\mathbb{C})$, then we know that there exists a basis $u_m,u_{m-2},...,u_{-m}$ such that $u_k$ is the eigenvector of $H$ with eigenvalue $k$ and $Y u_k = u_{k-2}$. In physics, on the other hand, we write $S_{\pm} =X,Y$ and $S_z = H/2$ and use the basis $|s,m_s \rangle$ such that $$\begin{align} S_z |s,m_s \rangle &= m_s |s,m_s \rangle \\ S_{\pm} |s,m_s \rangle &= \sqrt{s(s+1)-m_s (m_s \pm 1)} |s,m_s \pm 1\rangle \end{align}\tag{2}$$ and that $|s,m_s \rangle$ are orthogonal. Is there a deeper reason why we put the coefficients in front of $|s,m_s \rangle$, when $S_{\pm}$ acts on it and why $|s,m_s \rangle$ are orthogonal? Answer: In the specific case of the 2-dimensional representation, the coefficients are 1 so it doesn't matter much. On the other hand, for the higher-dimensional reps of $SU(2)$, the coefficients in front are not trivial, v.g. your raising operator $$ X\to \sqrt{2}\left(\begin{array}{ccc}0&1&0\\0&0&1\\ 0&0&0\end{array}\right) $$ and for even larger representations the coefficients are not all the same, $v.g.$ $$ X\to \left(\begin{array}{ccccc}0&2&0&0&0\\0&0&\sqrt{6}&0&0\\ 0&0&0&\sqrt{6}&0\\ 0&0&0&0&2\\ 0&0&0&0&0\end{array}\right)\, . $$ If you don't have the correct coefficients your matrices will not be a hermitian representation and thus will not exponentiate to a unitary rep. Alternatively, if your basis states are not properly normalized you will not get a unitary rep. either. We want unitary because it preserves the (complex) inner product $\langle \phi\vert\psi\rangle$ and (for instance) probabilities of outcomes depend on such overlaps.
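The coefficient pattern in these matrices follows directly from eq. (2) of the question. As a minimal sketch (basis ordered m = j, j-1, ..., -j; the function name is made up for illustration), the superdiagonal of the raising operator S+ can be generated for any spin j:

```python
import math

def raising_superdiagonal(j):
    """Coefficients sqrt(j(j+1) - m(m+1)) of S+, one per basis step."""
    two_j = round(2 * j)
    coeffs = []
    for k in range(two_j):
        m = j - 1 - k          # S+ maps |j, m> to |j, m+1>
        coeffs.append(math.sqrt(j * (j + 1) - m * (m + 1)))
    return coeffs

# spin 1: both entries are sqrt(2), matching the sqrt(2) prefactor
# pulled out of the 3-dimensional matrix in the answer
spin1 = raising_superdiagonal(1)
# spin 2: the [2, sqrt(6), sqrt(6), 2] pattern of the 5-dimensional rep
spin2 = raising_superdiagonal(2)
```

With these coefficients, $(S_+)^\dagger = S_-$ holds entry by entry, which is the hermiticity condition the answer says you need for the representation to exponentiate to a unitary one.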
{ "domain": "physics.stackexchange", "id": 61327, "tags": "angular-momentum, hilbert-space, quantum-spin, representation-theory, lie-algebra" }
PHP LDAP connection
Question: I have this class, which helps me to connect to LDAP: class adUser { private $_username; private $_password; public function __construct($username, $password) { $this->_username = $username; $this->_password = $password; } public function connectAD() { # Connect to Domain Controller $ldap = ldap_connect("censored"); # Case / When to see if user and password match if ($bind = ldap_bind($ldap, $this->_username, $this->_password)) { # todo: Successful return $this->_username; } else { # todo: Failure echo "Invalid login."; header('Location: index.php?eid=1'); } # Close connection ldap_close($ldap); } public function getFullName() { # Initialize the connection $ldap = ldap_connect("censored"); ldap_set_option($ldap, LDAP_OPT_PROTOCOL_VERSION, 3); ldap_set_option($ldap, LDAP_OPT_REFERRALS, 0); # Parameters bound $dn = "censored"; $bind = ldap_bind($ldap, $this->_username, $this->_password); # Will trim domain and slash from username $person = substr($this->_username,strpos($this->_username,"\\")+1,strlen($this->_username) - strpos($this->_username,"\\")); # Set search criteria (search on username -> return cn aka full name) $filter="(sAMAccountName=".$person.")"; $justthese = array("cn"); $sr=ldap_search($ldap, $dn, $filter, $justthese); $info = ldap_get_entries($ldap, $sr); # Loop through results - although we only have one entry at this time. for ($i=0; $i<$info["count"]; $i++) { //echo "dn is: ". $data[$i]["dn"] ."<br />"; echo $info[$i]["cn"][0]; } # Close the connection ldap_close($ldap); } } It's basically the second time I've tried OOP PHP - is there any way where I could significantly compress this class? Did I get this whole OOP thing right? 
This is how I call the functions: include_once("functions/LDAP.php"); $curUser = new adUser($_POST['username'], $_POST['password']); echo "logged in as: ".$curUser->connectAD(); echo "<br>"; $curUser->getFullName(); Answer: I've commented through your existing code below and noted what was removed/replaced/added (my comments are denoted by double #'s): ## Let's modify the class name 'adUser' to something more ## informative, like 'ActiveDirectoryUser' class ActiveDirectoryUser { ## The use of an underscore is a leftover from a past convention ## it is ok to not have an underscore on private properties private $username; private $password; private $ldap_db = "censored"; private $ldap_connection; ## We moved the connection to here for state /** * Always have documentation * @param $username string * @param $password string */ public function __construct($username, $password) { $this->username = $username; $this->password = $password; } /** * Always have documentation */ public function __destruct(){ ldap_close($this->ldap_connection); } /** * Always have documentation */ ## We'll rename connectAD to connect - it should be simple. public function connect() { ## Get rid of this comment - it doesn't tell us anything informative # Connect to Domain Controller $this->ldap_connection = ldap_connect($this->ldap_db); ## Replaced the string with a property for reuse ## Again with the uninformative comment - it ## doesn't tell us anything that the code already does # Case / When to see if user and password match if ($bind = ldap_bind($this->ldap_connection, $this->username, $this->password)) { ## We'll return true if it connects, otherwise throw an exception.
## Our connect() function shouldn't ever return a username or data return true; # return $this->username; # Your bracketing in else is weird and unconventional } else { # todo: Failure #echo "Invalid login."; # Don't handle your redirect logic in your class - do so at call level # Instead, throw an exception or return False #header('Location: index.php?eid=1'); throw new Exception("Invalid login."); } ## Why are we closing the connection in the connect function? Move this to the destruct() # Close connection #ldap_close($ldap); } /** * Always have documentation */ ## Renamed from getFullName to getName public function getName(){ # Initialize the connection ## No need to call LDAP connection again since we already have the connection stored in a property # $ldap = ldap_connect($this->ldap_db); ldap_set_option($this->ldap_connection, LDAP_OPT_PROTOCOL_VERSION, 3); ldap_set_option($this->ldap_connection, LDAP_OPT_REFERRALS, 0); # Parameters bound $bind = ldap_bind($this->ldap_connection, $this->username, $this->password); # Will trim domain and slash from username $person = substr($this->username,strpos($this->username,"\\")+1,strlen($this->username) - strpos($this->username,"\\")); # Set search criteria (search on username -> return cn aka full name) $filter="(sAMAccountName=".$person.")"; ## justthese can be moved to the ldap_search function since its inline and only called once ## additionally, we can use the shorthand [] array initiator #$justthese = array("cn"); $sr = ldap_search($this->ldap_connection, $this->ldap_db, $filter, ['cn']); $info = ldap_get_entries($this->ldap_connection, $sr); # Loop through results - although we only have one entry at this time. ## We'll have a returnData array to return rather than echo out. $returnData = []; for ($i=0; $i<$info["count"]; $i++) { //echo "dn is: ".
$data[$i]["dn"] ."<br />"; $returnData[] = $info[$i]["cn"][0]; } ## Don't need to close it as it's closed on class destruct # Close the connection #ldap_close($ldap); return $returnData; } } and here is the same, with my comments removed: /** * Always have documentation here */ class ActiveDirectoryUser { private $username; private $password; private $ldap_db = "censored"; private $ldap_connection; /** * Always have documentation * @param $username string * @param $password string */ public function __construct($username, $password) { $this->username = $username; $this->password = $password; } /** * Always have documentation */ public function __destruct(){ ldap_close($this->ldap_connection); } /** * Always have documentation */ public function connect() { $this->ldap_connection = ldap_connect($this->ldap_db); if ($bind = ldap_bind($this->ldap_connection, $this->username, $this->password)) { return true; } else { throw new Exception("Invalid login."); } } /** * Always have documentation */ public function getName(){ ldap_set_option($this->ldap_connection, LDAP_OPT_PROTOCOL_VERSION, 3); ldap_set_option($this->ldap_connection, LDAP_OPT_REFERRALS, 0); $bind = ldap_bind($this->ldap_connection, $this->username, $this->password); # Will trim domain and slash from username $person = substr($this->username, strpos($this->username, "\\") + 1, strlen($this->username) - strpos($this->username,"\\")); $filter="(sAMAccountName=" .
$person.")"; $sr = ldap_search($this->ldap_connection, $this->ldap_db, $filter, ['cn']); $info = ldap_get_entries($this->ldap_connection, $sr); $returnData = []; for ($i = 0; $i < $info["count"]; $i++) { $returnData[] = $info[$i]["cn"][0]; } return $returnData; } } And finally, the usage (note we have to try/catch for Exception): include_once("functions/LDAP.php"); $ActiveDirectoryUser = new ActiveDirectoryUser($_POST['username'], $_POST['password']); try{ $ActiveDirectoryUser->connect(); } catch (Exception $e){ # Do header redirect here } $user = $ActiveDirectoryUser->getName();
{ "domain": "codereview.stackexchange", "id": 9637, "tags": "php, ldap" }
Designing classes for non-standard arithmetics
Question: This question is inspired by http://anydice.com - a dice probability calculator web application. The Anydice language has three run-time types: a number, a sequence and a die. There is also a number of unary and binary operations defined on these types. Each binary operation can take any type pair (out of the 3) as arguments; there are separate definitions of what each operation does for each possible pair of types. Some operations are not commutative. For simplicity let's consider a single binary operation, which I'll call OpAccess, or @. My goal is to design a set of classes that represent the run-time values, so that if the access operation is executed on two such values the correct operation implementation (depending on argument types) is called. The main problem is that at compile time it is not yet known what run-time type a value has, yet it's required to dispatch the correct operation implementation during run-time. Let's start with defining the base class for our values: abstract class Primitive { public abstract Primitive OpAccess(Primitive right); } Our value will use itself as the left argument in the OpAccess operation and accept the right argument as the parameter.
Having this base we can design our value classes as follows: class Number : Primitive { public override Primitive OpAccess(Primitive right) { return OpAccess((dynamic)right); } public Primitive OpAccess(Number right) { Console.WriteLine("Number @ Number"); return null; } public Primitive OpAccess(Sequence right) { Console.WriteLine("Number @ Sequence"); return null; } public Primitive OpAccess(Die right) { Console.WriteLine("Number @ Die"); return null; } } class Sequence : Primitive { public override Primitive OpAccess(Primitive right) { return OpAccess((dynamic)right); } public Primitive OpAccess(Number right) { Console.WriteLine("Sequence @ Number"); return null; } public Primitive OpAccess(Sequence right) { Console.WriteLine("Sequence @ Sequence"); return null; } public Primitive OpAccess(Die right) { Console.WriteLine("Sequence @ Die"); return null; } } class Die : Primitive { public override Primitive OpAccess(Primitive right) { return OpAccess((dynamic)right); } public Primitive OpAccess(Number right) { Console.WriteLine("Die @ Number"); return null; } public Primitive OpAccess(Sequence right) { Console.WriteLine("Die @ Sequence"); return null; } public Primitive OpAccess(Die right) { Console.WriteLine("Die @ Die"); return null; } } Now if we run something like: Primitive a = new Sequence(); Primitive b = new Die(); a.OpAccess(b); b.OpAccess(a); We will get: Sequence @ Die Die @ Sequence Since all operations will be defined differently, the number of resulting methods is not an issue; it's really this many different ways to do an operation. What worries me more is that I had to duplicate public override Primitive OpAccess(Primitive right) { return OpAccess((dynamic)right); } in each class and do not have an easy way around it. Remember, it's just one operation we are considering here; there will be more than a dozen of those in reality. Also, I have no idea if the use of dynamic becomes a performance problem. (Manual dispatch with switch/case should be faster).
But maybe I'm attacking this problem from the completely wrong angle? What do you think? Note: this is in the context of writing a parser and an interpreter for Anydice language. Update Additional research prompted by Peter Taylor's answer uncovered the following article, which is most illuminating: Double Dispatch is a Code Smell Answer: You might want to define the base type like this abstract class Primitive { public Primitive OpAccess(Primitive right) { switch (right) { case Number number: return OpAccess(number); // ... other types default: throw new ArgumentOutOfRangeException(); } } protected abstract Primitive OpAccess(Number right); // ... other OpAccess } where there is only one public method and the new C# 7 switch takes care of the dispatch and derived classes need to implement only the concrete protected overloads: class Sequence : Primitive { protected override Primitive OpAccess(Number right) { Console.WriteLine("Sequence @ Number"); return null; } // ... other OpAccess } dynamic is no longer necessary.
{ "domain": "codereview.stackexchange", "id": 25628, "tags": "c#, object-oriented" }
Random (Over) Sampling signal and perfect Reconstruction in Nyquist form?
Question: Imagine we have a band-limited signal with bandwidth $B$, so the required Nyquist rate would be $f_{nyq}>2B$, and the signal is oversampled at rates $f_s = M \cdot f_{nyq}$ where $M$ is random and $M \geq 1$. By that I mean the sampling rate is at least the Nyquist rate, but most of the time it is well above Nyquist, and it is random. I have heard and read about iterative reconstruction methods. My questions are: 1. Is there a method to reconstruct the signal as if it had been sampled at the Nyquist rate, without degradation of signal quality? After all, we have more samples than needed, so I do not expect any degradation, right? 2. If there is some loss, how much is it, and what factors does it depend on? 3. Is there a toolbox or something that I can feed my data into and run some experiments of my own? 4. Is the technique used in MATLAB lossy? I appreciate any kind of contribution here :) Answer: If you assume the signal was strictly bandlimited to below some Nyquist frequency, then it can be decomposed into some number (N) of DFT basis vectors over the sampling aperture (although that decomposition will include rectangular windowing artifacts if the signal wasn't integer periodic in the aperture width). If you have enough sample points (M >= N) of that signal, then this becomes a problem of fitting M equations to N unknowns. IIRC, the farther the sample points are from being equally spaced over the aperture, the more sensitive any computed solution might be to noise and numerical issues. Once deconstructed into DFT basis vectors, any other sample points of that strictly bandlimited signal can be interpolated using a summation of the resultant complex exponential coefficients.
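A minimal numpy sketch of the least-squares fit the answer describes: nonuniform samples fitted to a small set of complex-exponential (DFT) basis vectors, then evaluated on a uniform grid. The aperture width, harmonic count, and sample count are made-up illustration values, not anything from the question.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1.0                 # aperture width (assumed)
ks = np.arange(-3, 4)   # harmonic indices, N = 7 basis vectors
N = len(ks)
M = 40                  # number of nonuniform samples, M >= N

# Band-limited test signal built exactly from those harmonics
coef_true = rng.normal(size=N) + 1j * rng.normal(size=N)

def signal(t):
    return np.exp(2j * np.pi * np.outer(t, ks) / T) @ coef_true

t_rand = np.sort(rng.uniform(0, T, M))   # random (over)sampling times
y = signal(t_rand)

# Fit M equations to N unknowns: A @ coef ~= y, in the least-squares sense
A = np.exp(2j * np.pi * np.outer(t_rand, ks) / T)
coef_fit, *_ = np.linalg.lstsq(A, y, rcond=None)

# Interpolate onto a uniform ("Nyquist-style") grid and compare
t_unif = np.linspace(0, T, 32, endpoint=False)
err = np.abs(signal(t_unif) - np.exp(2j * np.pi * np.outer(t_unif, ks) / T) @ coef_fit)
print(err.max())   # tiny: recovery is exact when the band-limited model is exact
```

When the model holds exactly and M >= N, the recovered coefficients match and the uniform-grid interpolation is exact up to numerical precision; noise and badly clustered sample points degrade the conditioning, as the answer notes.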
{ "domain": "dsp.stackexchange", "id": 5447, "tags": "sampling, nyquist, reconstruction" }
move_arm how ignore his arm to collision
Question: I know my title is not clear, but it's difficult to explain. I use a wizard description package with a robot arm (the arm_navigation package). I can send a goal (pose or joint-space) and the robot follows a trajectory. I also use a Kinect to generate a point cloud and then an octomap collision map. My problem is that the Kinect detects the robot itself as an obstacle, so the arm stops immediately. I can see collision_map cubes inside the robot. How can I avoid this? Can the environment server understand that these points belong to the robot? I think something for this must exist, because it's a very common problem in object grasping, for example. I don't know if that's clearer; if it isn't, please ask in a comment. Thanks Originally posted by jep31 on ROS Answers with karma: 411 on 2012-08-10 Post score: 0 Answer: The self-filtering functionality is implemented in the robot_self_filter package. You can check an example launch setup in the pr2_arm_navigation_perception package. Originally posted by Adolfo Rodriguez T with karma: 3907 on 2012-08-10 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by jep31 on 2012-08-10: Thanks for your answer, I think it's what I was looking for. Does this node take a PointCloud topic as argument and return the filtered PointCloud? How can I generate a yaml file like the one pr2_arm_navigation_perception uses in the example? Comment by Adolfo Rodriguez T on 2012-08-10: I presume you're referring to the self_filter config file. It contains info like the min distance at which the filter operates, and the robot links it will work with (along with geometric padding values to account for calibration and sensing errors). This info is specific to your robot setup. Comment by Adolfo Rodriguez T on 2012-08-10: Further, the list of links you specify in the config file corresponds to those that will show up in the field of view of your sensor(s).
That is, you don't want to waste computational resources trying to filter out links that will never come into view.
{ "domain": "robotics.stackexchange", "id": 10562, "tags": "kinect, collision-map, arm-navigation, move-arm, environment-server" }
Similarly structured loops with different big-O time complexities?
Question: I have a function: int sum = 0; for (int i = 1; i < n; i*= 2) for (int j = 0; j < n; j++) sum++; From my understanding this is $O(n\log(n))$ because the inner loop runs $n$ times for every time the outer loop runs, and the outer loop is running $\log(n)$ times. Putting them together gives me $O(n\log(n))$, which I understand. However, the following loop: int sum = 0; for (int i = n; i > 0; i/= 2) for (int j = 0; j < i; j++) sum++; I see this as $O(n\log(n))$ because the outer loop is still running $\log(n) + 1$ times and the inner loop runs $n + n/2 + n/4...$ times, whose sum will be some coefficient $c$ times $n$. Combining this with the outer loop, I simplified it to $O(n\log(n))$, but it turns out it is actually $O(n)$, and I don't see how. Answer: The first version increments sum $$ \underbrace{n + n + \dots + n}_{\log n \text{ terms}} = n\log n $$ times; the second does it $$ \underbrace{n + \frac{n}{2} + \frac{n}{4} + \dots + 1}_{\log n \text{ terms}} \leq 2n $$ times (use the formula for a geometric series). By the way, it's not actually wrong to say that the second is also $O(n\log n)$: it's just that you can give a more precise answer of $O(n)$.
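The two sums can be checked directly by counting increments (a quick sketch; `count_first` and `count_second` are ad-hoc names):

```python
def count_first(n):
    # for i = 1; i < n; i *= 2:  inner loop runs n times each pass,
    # so the total is n * (number of doublings up to n)
    s, i = 0, 1
    while i < n:
        s += n
        i *= 2
    return s

def count_second(n):
    # for i = n; i > 0; i //= 2:  inner loop runs i times each pass,
    # so the total is n + n/2 + n/4 + ... <= 2n (geometric series)
    s, i = 0, n
    while i > 0:
        s += i
        i //= 2
    return s

for n in (10, 1000, 10**6):
    print(n, count_first(n), count_second(n))
```

The first count grows like $n \log n$ while the second never exceeds $2n$, matching the answer's two sums.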
{ "domain": "cs.stackexchange", "id": 10588, "tags": "algorithms, asymptotics" }
What Prevents Delayed Choice Quantum Eraser Experiment from Being Used to Predict the Future?
Question: Reading about the DCQE experiment commentary by Ross Rhodes. My question is: why can't information about the future state of the idler photon (the one that heads toward the Glen-Thompson prism) be gleaned from the D0 detector in order to enable "predictions" of the future? I'll preempt any "because that would violate conservation of quantum information" responses by saying, yeah, that's why I'm asking this question. I'm certain there is something I don't understand and I'm hoping someone can help me understand it. I've seen these other questions: Can post-selecting on the screen in the Delayed Choice Quantum Eraser experiment be used to predict the quantum-eraser measurement results? Delayed Choice Quantum Eraser: Am I missing something here? Delayed Choice Quantum Eraser? Delayed Choice Quantum Eraser without retrocausality? Are photons locked in time, and does this explain the "delayed choice quantum eraser" experiment? None of them answers the exact question I'm asking, or (more likely) I'm not understanding the answer. I'm sorry to drag people through the explanation again, I'll do my best to propose a simple modification to the experiment which you can tell me the outcome of, your answer will help me understand. Since DCQE seems to have a tendency to attract magical thinking (myself included), hopefully, this can be asked and answered clearly enough to dispel the magic and uncover the mechanics. The experimental setup is this: A photon is sent through a double-slit where it passes randomly and then is split into two entangled photons. The upper entangled photon (signal photon) hits the D0 detector where its x position is recorded. The second photon (idler photon) goes down to the lower part of the diagram where it either is reflected into D3/D4 by BS (which gives us information about its path) or it passes through to another reflector (unlabeled in the diagram) that destroys its path information. 
If that explanation doesn't make sense, here is a video that explains the experiment. Fair warning, it devolves into magic at the end, but prior to that, I found it to be helpful. Whenever a photon is detected by D4 or D3 we should get a "clumping pattern", rather than an "interference pattern" (that's the terminology used in the video, I don't know if it's correct) because we know about the photon's path. If it hit D4 then we know it came from the lower slit (blue line) and if it hit D3 it came from the upper slit (red line). The Grand Reveal, amazingly, is that the result from D0 will also show that clumping pattern! Somehow, even though D0 happens before D4 or D3 it "knew" that its entangled twin was going to be detected before it even happened. I know, I know, this is where the physicists on this site are snapping their optical nerves with vigorous eye-rolling. I'm guessing this is where the break in my understanding is. The reason I think this is an error is that if it were true DCQE could be used to create a future predictor, which either means all the quantum physicists in the world should already be billionaires, or I'm missing something (I wonder which one). To demonstrate this, here is the same experiment with only the eraser (I removed BSa and BSb and labelled the final reflector BSc): Eraser Mode Let's call this Eraser Mode since all it does is erase our knowledge. In the top-right corner, I have connected the output of the D0 detector to a monitor where we will plot the x impacts. If the video and (my reading of) the commentary are correct, then since I do not have information about the path that the photon took, I should see an interference pattern on the ${Output}_{Erase}$ display. This, I'm guessing, is not true and is another hole in my understanding. Rather, the Coincidence Counter must be what enables the creation of the interference pattern, but the article seemed to indicate that the detector itself shows that pattern. 
Here is the experiment again, this time with just the Eraser removed (the one I labelled BSc) and only the signal reflectors remaining: Signal Mode Let's call this Signal Mode since regardless of how the experiment plays out we can figure out which slit the photon went through (D2 or D3 is bottom, D1 or D4 top). I've also connected the output of this system to a display named ${Output}_{Signal}$. The question is this: Will the output of ${Output}_{Signal}$ look any different than the output of ${Output}_{Erase}$? It could be an interference pattern (artist's rendition, be gentle): Or no interference pattern: Or something else, because I don't understand. The problem is that if there is a different pattern on ${Output}_{Signal}$ than there is on ${Output}_{Erase}$ then I could use that to predict the future. I could set up a delay mechanism between the Glenn-Thompson prism and the BBO so that it takes a long time for the photon to make it to the prism. Let's say I send it through 120,960,000,000 km of fiber optic cable (a cable so long that it could go around the equator >3 million times) so that it takes a week for the photon to make it from the BBO to the prism (assuming the speed of light in the cable is 200,000 km/s). In the meantime, I can measure the pattern on the D0 detector. If I have switched the lever to the Signal State, then BSc is removed and it will only show the non-interference pattern. If instead, I switch it to the Erase State then the BSa and BSb are removed and it will show me the interference pattern. If I see that the output is an interference pattern I can conclude the system will be in the Erase State when the photon arrives in a week. If I see a clumping pattern I can conclude the system will be in the Signal State when it arrives in a week. Of course, then I can get up to all sorts of shenanigans by, say, writing a script that flips the switch if Bitcoin ever goes over 10k in that week (or whatever). 
This future prediction trick makes me pretty sure that my understanding is incomplete. In particular, I think the gap is in understanding how the coincidence counter uses the incoming data to create the interference pattern. If that's the case, then I'm not that impressed by this experiment. I mean, essentially it's saying, "From the parts of the system where we have more information, we can get more information about the system." Or am I also missing something there? Probably the easiest way to answer this question is to tell me what pattern I will see at the D0 monitor. Answer: You are misunderstanding what information you get from each of the sensors. And by information I mean the peaks and troughs in the image formed by the photons, that can be further interpreted by humans. The first axiom I want you to take is D0 = sum(D_i), meaning that the information (peaks and troughs) at D0 is the sum of the information of all sensors detected using the idler photons. It does not matter what you do to the idler photons, the output of D0 in both cases (Signal Mode and Eraser Mode) will be this: If you turn on Eraser Mode and you check the photons of D0 that hit R01 = D1 or R02 = D2 you will get this: Take a look at how R01 and R02 are displaced from one another. If you add both signals together, you get R01 + R02 = D0. Notice how the peaks of R01 align with the troughs of R02, such that the final result is the value in D0. This mode results in an interference pattern only because you can sort between the photons of each sensor. If you now switch to the Signal Mode, such that R03 = D1 + D4 and R04 = D2 + D3, this is what you will get: In the same way as before, if you add R03 and R04 you will get D0, but in a different way from the Eraser Mode. And by "different way", I mean that now there is no interference pattern. You see no interference pattern since the photons are concentrated around one location (one peak for each slit/sensor).
As you can see, looking at D0 will not help you get any future predictions, since it will always yield the same result, no matter what you do to the entangled photon. The patterns only emerge after the coincidence counter sorts which photon went to each sensor.
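The R01 + R02 = D0 cancellation can be illustrated with a toy intensity model (the Gaussian envelope and fringe frequency are made-up numbers, not values from the experiment):

```python
import numpy as np

x = np.linspace(-1, 1, 400)        # position along the D0 axis (arbitrary units)
envelope = np.exp(-x**2 / 0.3)     # toy single-slit diffraction envelope

# Eraser-mode coincidence subsets carry complementary fringes:
# the peaks of R01 sit exactly on the troughs of R02
R01 = envelope * (1 + np.cos(20 * x)) / 2
R02 = envelope * (1 - np.cos(20 * x)) / 2

D0 = R01 + R02                     # the fringes cancel in the sum
print(np.allclose(D0, envelope))   # True: D0 alone shows no fringes
```

Each subset shows fringes, but their sum is the fringe-free envelope, which is why D0 by itself can never reveal which mode was chosen.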
{ "domain": "physics.stackexchange", "id": 62986, "tags": "quantum-mechanics, quantum-information, quantum-eraser" }
How does Bell measurement work in the teleportation?
Question: I'm a complete beginner and one of the first things I was taught was the teleportation protocol. In the protocol, the party sending its state (which we call, say, $|\phi\rangle$) makes a Bell measurement on a Bell state it has from before along with $|\phi\rangle$. From this, it finds the indices of its Bell state which it sends to the receiver. But my question is why does the fact that it is measuring $|\phi\rangle$ together with its Bell state have any bearing on the indices of the Bell state, considering $|\phi\rangle$ is not involved in the Bell measurement? Answer: In the teleportation protocol the two parties share an entangled Bell state, and the Bell measurement is implemented via a CNOT gate between the state to be sent (suppose Alice is sending it) and Alice's half of the entangled pair, followed by a Hadamard on the state to be sent. So $|\phi\rangle$ is very much involved in the Bell measurement: the CNOT entangles it with Alice's half of the pair. If the state to be sent is a one-qubit state, after the CNOT you have a three-qubit entangled state (the Bell state being two qubits). Alice then measures her two qubits, the one carrying $|\phi\rangle$ and her half of the pair, and the two classical bits she obtains are what she communicates to Bob, who applies the corresponding correction to his qubit. So you see the controlled-NOT acts as an entangling operator: the measurement outcomes have a bearing on $|\phi\rangle$ precisely because $|\phi\rangle$ has been entangled with the Bell pair before anything is measured.
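The whole protocol can be simulated in a few lines of numpy (a from-scratch sketch, not any library's API; the qubit ordering and helper names are my own: q0 carries $|\phi\rangle$, q1 is Alice's Bell half, q2 is Bob's):

```python
import numpy as np

rng = np.random.default_rng(1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def cnot(control, target, n=3):
    """CNOT as a 2^n x 2^n permutation matrix; qubit 0 is the most significant bit."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for b in range(dim):
        bits = [(b >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        b2 = sum(bit << (n - 1 - k) for k, bit in enumerate(bits))
        U[b2, b] = 1
    return U

# Random state |phi> to teleport (qubit 0), plus the shared Bell pair (qubits 1, 2)
phi = rng.normal(size=2) + 1j * rng.normal(size=2)
phi /= np.linalg.norm(phi)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(phi, bell)

# Alice's Bell measurement = CNOT(q0 -> q1), then H on q0, then measure q0, q1
state = cnot(0, 1) @ state
state = np.kron(H, np.eye(4)) @ state

for m0 in (0, 1):
    for m1 in (0, 1):
        # Project onto outcome (m0, m1); Bob's qubit is the remaining factor
        sub = state.reshape(2, 2, 2)[m0, m1, :]
        sub = sub / np.linalg.norm(sub)
        # Bob's correction: X if m1 = 1, then Z if m0 = 1
        bob = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ sub
        print(m0, m1, abs(abs(np.vdot(bob, phi)) - 1) < 1e-12)  # True for all outcomes
```

All four outcomes occur with probability 1/4, and in every branch Bob recovers $|\phi\rangle$ up to a global phase, which is exactly why the two classical bits, whose statistics depend on $|\phi\rangle$ only through the entangling CNOT, suffice.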
{ "domain": "quantumcomputing.stackexchange", "id": 1768, "tags": "teleportation, textbook-and-exercises, bell-basis" }
Hamiltonian for a system of 3 interacting particles and the meaning of potential
Question: I'd like to discuss some physics basics. Assume we have 3 particles $\{\vec{q}_i,\vec{p}_i\}, \quad i=1,2,3$, where $\vec{q}_i$ is the position and $\vec{p}_i$ is the momentum. We also have a "pair potential" $U(\vec{q}_i,\vec{q}_j)$ between particles $i$ and $j$. The Hamiltonian is then given by: $$H(\{\vec{q}_i\},\{\vec{p}_i\})=\sum_i \frac{\vec{p}_i^{\,2}}{2m}+\sum_{i < j}U(\vec{q}_i,\vec{q}_j) \tag{1}$$ Note that the second term in (1) is a sum over $i<j$. That's because we don't want to add up the potential between two particles twice. Which led me to think about what the potential actually describes. I always thought of it as "the ability to do work [work in the physics sense]". If you take 1 liter of water at a height of 1 m you can let it flow down and "do something with it". If you put it at 10 m, you could do even more. Now what exactly is going on here? I don't like my understanding of a situation like this. Could anyone try to interpret the meaning of the potential as given here? Answer: To be precise, this is not a potential, but the potential energy of two particles interacting. Let us take for example two charged particles with charges $q_1,q_2$, separated by vector $\mathbf{r}_{2} - \mathbf{r}_{1}$. The potential created by the first particle is $$\varphi_1(\mathbf{r}) = \frac{q_1}{|\mathbf{r} -\mathbf{r}_1|},$$ whereas the potential energy of the second particle in the field created by the first is $$U_2(\mathbf{r}_2) = \frac{q_1q_2}{|\mathbf{r_2} -\mathbf{r}_1|}.$$ One can repeat the same reasoning by considering the potential energy of the first particle in the field of the second with the same result. So it is reasonable to ignore which one is the first and which is the second and simply speak of the potential of their interaction: $$U_{12}(\mathbf{r}_1, \mathbf{r}_2) = U_1(\mathbf{r}_1) = U_2(\mathbf{r}_2) = \frac{q_1q_2}{|\mathbf{r_2} -\mathbf{r}_1|}.$$
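The $i<j$ bookkeeping can be checked numerically with the Coulomb-style pair energy from the answer (positions and charges here are made-up illustration values):

```python
import numpy as np

rng = np.random.default_rng(2)
q = rng.normal(size=(3, 3))            # positions of 3 particles in 3D (made up)
charges = np.array([1.0, -1.0, 2.0])   # illustrative charges, Gaussian units

def U(i, j):
    # Pair potential energy U(q_i, q_j) = q_i q_j / |r_j - r_i|, symmetric in i, j
    return charges[i] * charges[j] / np.linalg.norm(q[j] - q[i])

# Summing over i < j counts each interacting pair exactly once...
U_pairs = sum(U(i, j) for i in range(3) for j in range(3) if i < j)

# ...and equals half the sum over all ordered pairs i != j
U_ordered = sum(U(i, j) for i in range(3) for j in range(3) if i != j)
print(np.isclose(U_pairs, U_ordered / 2))  # True
```

Since $U$ is symmetric, $\sum_{i<j} U = \tfrac12 \sum_{i \ne j} U$, which is exactly the double-counting the $i<j$ restriction avoids.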
{ "domain": "physics.stackexchange", "id": 66211, "tags": "potential, hamiltonian" }
Are electron shells probability distributions?
Question: I know that orbitals are probability distributions. Are electron shells probability distributions too? Answer: In order to understand, you should know the differences between a shell, a sub-shell and an orbital. A shell represents the main energy level occupied by the electron. It is given by the principal quantum number (n). For example, if n=1 the electron is present in the first main shell, called the K-shell; if n=2, the electron is present in the second main shell, called the L-shell, and so on. A sub-shell represents the sub-energy level occupied by the electron (a main energy level is considered to consist of a number of energy sub-levels). It is given by the azimuthal quantum number (l). The sub-shells corresponding to l=0,1,2,3 are represented by s, p, d and f respectively. Orbitals you know well. You may understand it like this: when n=1 we have only the s sub-shell; when n=2 we have the s and p sub-shells; when n=3 we have the s, p and d sub-shells; and so on. Now, the s, p, d and f sub-shells have certain regions, which we call orbitals, and these are the probability regions you know. Here is the link for you, http://en.wikipedia.org/wiki/Atomic_orbital Must watch the video associated with it named [atomic orbitals and periodic construction] Last but not least: we certainly do not have electron-shell probability distributions; only orbitals are probability distributions. Shells were created by scientists to study the atom systematically, but they have physical significance too (I mean, this is where the sub-shells and orbitals lie). It seems you are not clear on shells, sub-shells and orbitals; you should first check what they actually mean in order to understand.
{ "domain": "chemistry.stackexchange", "id": 3546, "tags": "electrons" }
Lower bound on competitive ratio of $m$-machine scheduling
Question: Given a sequence of positive reals $a_1, a_2, \dots, a_n$ and an integer $m$, for each $j$ assign $a_j$ to a machine $i$, $1 \le i \le m$, so as to minimize the maximum, over $i$, of the sum of all reals assigned to machine $i$. Theorem: There is no randomized $m$-machine scheduling algorithm with a competitive ratio less than $4/3$ for any $m \geq 2$. The following is the Proof Sketch given in the paper: Consider the job sequences $m \times 1$ and $m \times 1, 2$. (Here $m \times 1$ denotes a sequence of $m$ 1's.) If the algorithm schedules the $m$ 1's on $m$ different machines with probability $p$, the worst-case ratio between its cost and the optimal cost is at least $\max\{2 - p, 1 + p/2\} \sim 4/3$. I have a problem understanding the proof. I would really appreciate it if anyone could explain the proof in detail, especially how to arrive at the worst-case ratio of $\max\{2-p, 1+p/2\}$. Answer: Consider first the sequence $m \times 1$. If the algorithm schedules all the 1's in different machines, then its competitive ratio is 1. Otherwise, its competitive ratio is at least 2 (since the maximum is at least 2 instead of the optimum 1). This case gives an average competitive ratio of at least $p \cdot 1 + (1-p) \cdot 2 = 2-p$. Consider next the sequence $m \times 1, 2$. If the algorithm schedules all the 1's in different machines, then its competitive ratio is 3/2, since the maximum is 3 while the optimum is 2. If it doesn't schedule the 1's in different machines, we use the trivial bound on the competitive ratio. This case gives an average competitive ratio of at least $p \cdot (3/2) + (1-p) \cdot 1 = 1 + p/2$. We conclude that the competitive ratio is at least $\max(2-p,1+p/2)$. If $p \leq 2/3$ then $2-p \geq 4/3$, whereas if $p \geq 2/3$ then $1+p/2 \geq 4/3$, and so whatever the value of $p$, we get that the competitive ratio must be at least $4/3$.
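The $\max\{2-p,\, 1+p/2\}$ bound can be sanity-checked numerically (a quick sketch):

```python
# Every mixing probability p gives a competitive ratio of at least 4/3:
# max(2 - p, 1 + p/2) >= 4/3, with equality only at p = 2/3.
ps = [k / 1000 for k in range(1001)]
ratios = [max(2 - p, 1 + p / 2) for p in ps]

assert all(r >= 4 / 3 - 1e-12 for r in ratios)
print(min(ratios))                    # minimum over the grid, close to 4/3
print(max(2 - 2 / 3, 1 + (2 / 3) / 2))  # the two bounds cross at p = 2/3
```

The decreasing bound $2-p$ and the increasing bound $1+p/2$ cross at $p = 2/3$, where both equal $4/3$, so no choice of $p$ does better.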
{ "domain": "cs.stackexchange", "id": 9359, "tags": "algorithms, scheduling, online-algorithms, competitive-ratio" }
Web Scraping Newspapers
Question: Wrote a python script to web scrape multiple newspapers and arrange them in their respective directories. I have completed the course Using Python to access web data on Coursera and I tried to implement what I learned in a mini project. I am sure there are multiple possible improvements to this script and I would like to learn and implement them.

```python
import urllib.request, urllib.error, urllib.parse
from bs4 import BeautifulSoup
import ssl
import requests
import regex as re
import os
from datetime import date, timedelta

today = date.today()

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

def is_downloadable(url):
    """
    Does the url contain a downloadable resource
    """
    h = requests.head(url, allow_redirects=True)
    header = h.headers
    content_type = header.get('content-type')
    if 'text' in content_type.lower():
        return False
    if 'html' in content_type.lower():
        return False
    return True

# dictionary for newspaper names and their links
newspaper = dict({'Economic_times': 'https://dailyepaper.in/economic-times-epaper-pdf-download-2020/',
                  'Times_of_India': 'https://dailyepaper.in/times-of-india-epaper-pdf-download-2020/',
                  'Financial_Express': 'https://dailyepaper.in/financial-express-epaper-pdf-download-2020/',
                  'Deccan_Chronicle': 'https://dailyepaper.in/deccan-chronicle-epaper-pdf-download-2020/',
                  'The_Telegraph': 'https://dailyepaper.in/the-telegraph-epaper-pdf-download-2020/',
                  'The_Pioneer': 'https://dailyepaper.in/the-pioneer-epaper-pdf-download-2020/',
                  'Business_Line': 'https://dailyepaper.in/business-line-epaper-pdf-download-2020/',
                  'Indian_Express': 'https://dailyepaper.in/indian-express-epaper-pdf-download-2020/',
                  'Hindustan_Times': 'https://dailyepaper.in/hindustan-times-epaper-pdf-free-download-2020/',
                  'The_Hindu': 'https://dailyepaper.in/the-hindu-pdf-newspaper-free-download/',
                  'Dainik_Jagran': 'https://dailyepaper.in/dainik-jagran-newspaper-pdf/',
                  'Dainik_Bhaskar': 'https://dailyepaper.in/dainik-bhaskar-epaper-pdf-download-2020/',
                  'Amar_Ujala': 'https://dailyepaper.in/amar-ujala-epaper-pdf-download-2020/'})

# dictionary to give serial numbers to each newspaper
# I think something better could be done instead of this dictionary
serial_num = dict({1: 'Economic_times', 2: 'Times_of_India', 3: 'Financial_Express',
                   4: 'Deccan_Chronicle', 5: 'The_Telegraph', 6: 'The_Pioneer',
                   7: 'Business_Line', 8: 'Indian_Express', 9: 'Hindustan_Times',
                   10: 'The_Hindu', 11: 'Dainik_Jagran', 12: 'Dainik_Bhaskar',
                   13: 'Amar_Ujala'})

print("The following Newspapers are available for download. Select any of them by giving number inputs - ")
print("1. Economic Times")
print("2. Times of India")
print("3. Financial Express")
print("4. Deccan Chronicle")
print("5. The Telegraph")
print("6. The Pioneer")
print("7. Business Line")
print("8. Indian Express")
print("9. Hindustan Times")
print("10. The Hindu")
print("11. Dainik Jagran")
print("12. Dainik Bhaskar")
print("13. Amar Ujala")

# taking serial numbers for multiple newspapers and storing them in a list
serial_index = input('Enter the number for newspapers - ')
serial_index = serial_index.split()
indices = [int(x) for x in serial_index]

for ser_ind in indices:
    url = newspaper[serial_num[ser_ind]]
    req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    html = urllib.request.urlopen(req).read()
    soup = BeautifulSoup(html, 'html.parser')
    tags = soup('a')
    list_paper = list()
    directory = serial_num[ser_ind]
    parent_dir = os.getcwd()
    path = os.path.join(parent_dir, directory)
    # make a new directory for given newspaper, if that exists then do nothing
    try:
        os.mkdir(path)
    except OSError as error:
        pass
    os.chdir(path)  # enter the directory for newspaper
    # storing links for given newspaper in a list
    for i in range(len(tags)):
        links = tags[i].get('href', None)
        x = re.search("^https://vk.com/", links)
        if x:
            list_paper.append(links)
    print('For how many days you need the '+ serial_num[ser_ind]+' paper?')
    print('i.e. if only todays paper press 1, if want whole weeks paper press 7')
    print('Size of each paper is 5-12MB')
    for_how_many_days = int(input('Enter your number - '))
    for i in range(for_how_many_days):
        url = list_paper[i]
        req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
        html = urllib.request.urlopen(req).read()
        soup = BeautifulSoup(html, 'html.parser')
        tags = soup('iframe')
        link = tags[0].get('src', None)
        date_that_day = today - timedelta(days=i)  # getting the date
        if is_downloadable(link):
            print('Downloading '+serial_num[ser_ind]+'...')
            r = requests.get(link, allow_redirects=True)
            with open(serial_num[ser_ind]+"_"+str(date_that_day)+".pdf", 'wb') as f:
                f.write(r.content)
            print('Done :)')
        else:
            print(serial_num[ser_ind] + ' paper not available for '+ str(date_that_day))
    os.chdir('../')  # after downloading all the newspapers go back to parent directory
```

Answer:

Usage of requests

Strongly consider replacing your use of bare urllib with requests. It's much more usable. Among other things, it should prevent you from having to worry about an SSL context.

Type hints

```python
def is_downloadable(url):
```

can be

```python
def is_downloadable(url: str) -> bool:
```

And so on for your other functions.

Boolean expressions

```python
content_type = header.get('content-type')
if 'text' in content_type.lower():
    return False
if 'html' in content_type.lower():
    return False
return True
```

can be

```python
content_type = header.get('content-type', '').lower()
return not (
    'text' in content_type
    or 'html' in content_type
)
```

Also note that if a content type is not provided, this function will crash unless you change the default of the get to ''.

Dictionary literals

This:

```python
newspaper = dict({ ...
```

does not need a call to dict; simply use the braces and they will make a dictionary literal.

URL database

Note what is common in all of your newspaper links and factor it out. In other words, all URLs match the pattern https://dailyepaper.in/
so you do not need to repeat the protocol and host in those links; save that to a different constant. Newspaper objects dictionary to give serial numbers to each newspaper I think something better could be done instead of this dictionary Indeed. Rather than keeping separate dictionaries, consider making a class Newspaper with attributes name: str, link: str and serial: int. Then after The following Newspapers are available for download, do not hard-code that list; instead loop through your sequence of newspapers and output their serial number and name. List literals list_paper = list() can be papers = [] Get default Here: links = tags[i].get('href',None) None is the implicit default, so you can omit it. However, it doesn't make sense for you to allow None, because you immediately require a non-null string: x = re.search("^https://vk.com/", links) so instead you probably want '' as a default. String interpolation 'For how many days you need the '+ serial_num[ser_ind]+' paper?' can be f'For how many days do you need the {serial_num[ser_ind]} paper?' Raw transfer r = requests.get(link, allow_redirects=True) with open(serial_num[ser_ind]+"_"+str(date_that_day)+".pdf",'wb') as f: f.write(r.content) requires that the entire response be loaded into memory before being written out to a file. In the (unlikely) case that the file is bigger than your memory, the program will probably crash. Instead, consider using requests, passing stream=True to your get, and passing response.raw to shutil.copyfileobj. This will stream the response directly to the disk with a much smaller buffer.
{ "domain": "codereview.stackexchange", "id": 38425, "tags": "python, beginner, web-scraping" }
rqt_graph topic statistics not working
Question: This is the documentation I'm following: http://wiki.ros.org/rqt_graph#Topic_statistics This feature is not working on my system. I can't find the reason why. ROS Indigo, Ubuntu 14.04. Tested by creating some instances of the turtlesim package's executables. Here's my result when using rqt_graph. You can see from the terminal window that the parameter is currently set to true. Image: Please ignore the line under the terminal window. Originally posted by dotcom on ROS Answers with karma: 120 on 2014-07-30 Post score: 1 Original comments Comment by dotcom on 2014-07-31: As answered below, besides setting the parameter to true, one needs to launch the topic monitor and tick each statistic that is desired. Launch rqt Plugins -> topics -> topic monitor Answer: You may need to set the parameter before launching the nodes you want to introspect. I am able to monitor the statistics between turtlesim and the robot steering rqt plugin on my machine running indigo: Also, remember that you need to manually refresh the rqt Node Graph window using the refresh button. Originally posted by William with karma: 17335 on 2014-07-30 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by dotcom on 2014-07-31: Thank you, I think I was missing the ticks on Topic Monitor. The rqt_graph documentation led me to believe all statistics measurements would be active after setting the parameter to true. Enabling isn't the same as doing that, though.
{ "domain": "robotics.stackexchange", "id": 18836, "tags": "ros, rqt-graph, ros-indigo" }
The photon propagator term in Peskin & Schroeder Eq. 6.38
Question: In Peskin and Schroeder, when calculating the one-loop vertex correction, the line above Eq. (6.38) reads $$ \rightarrow \int \frac{d^4 k}{(2\pi)^4} \frac{-ig_{\nu\rho}}{(k-p)^2 + i\epsilon} \bar{u}(p') (-ie\gamma^\nu) \frac{i(\not{k}' + m)}{k'^2 - m^2 + i\epsilon} \gamma^\mu \frac{i(\not{k} + m)}{k^2 - m^2 + i\epsilon}(-ie\gamma^\rho)u(p)$$ I am trying to figure out where the $\gamma^\mu$ between the $\not{k}'$ and $\not{k}$ propagators comes from. What I see is that this should be the vertex of the real photon, so the term should be $-ie\gamma^\mu$. Where did the $-ie$ go? Answer: I found the answer. The sum of all vertex corrections is written as $-ie\Gamma^\mu$. The expression above is hence $\bar{u}(p')\,\delta\Gamma^\mu\,u(p)$ with the overall $-ie$ already factored out; that factored-out $-ie$ is the one that would otherwise sit at the external photon vertex, which is why only a bare $\gamma^\mu$ appears there.
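A hedged sketch of the convention in question (reconstructed from memory of Peskin & Schroeder's Section 6.3, so treat the exact factors as a sketch rather than a verbatim quote): the sum of vertex diagrams dresses the tree-level vertex as

$$
-ie\gamma^\mu \;\longrightarrow\; -ie\,\Gamma^\mu(p',p),
\qquad
\Gamma^\mu = \gamma^\mu + \delta\Gamma^\mu + \mathcal{O}(\alpha^2),
$$

so the quantity being computed in the displayed integral is $\delta\Gamma^\mu$ with the overall $-ie$ stripped off, and the vertex where the external photon attaches contributes only the bare $\gamma^\mu$.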
{ "domain": "physics.stackexchange", "id": 47884, "tags": "quantum-electrodynamics, feynman-diagrams, dirac-matrices" }
Communicating to Docker container on OSX
Question: I would like to run some ROS-related things natively on OSX (simulators to take advantage of GPU acceleration), and the majority of the rest of a ROS system in an Ubuntu Docker container (to take advantage of packages and standards). Is this possible? If so, does anyone have an example of the configuration required? Originally posted by hawesie on ROS Answers with karma: 282 on 2017-09-25 Post score: 1 Answer: I think the Docker for Mac install has a special network config, as it really just hosts a lightweight VM for the linux kernel. That config kind of makes it tricky to expose ROS containers transparently. I don't think it's impossible, but I haven't seen a properly solved issue on it posted on answers.ros yet, and I don't own a mac to debug it. My guess is if you could just get to the point in being able to address the VM on Mac via a unique IP, then load containers onto the host via docker run --net=host, then all ports would be easily accessible, and all you'd need to do is just keep in mind the ROS master URI. Related: https://github.com/osrf/docker_images/issues/55 https://answers.ros.org/question/269335/adding-gui-to-docker-on-osx/ Originally posted by ruffsl with karma: 1094 on 2017-09-27 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 28918, "tags": "docker, osx" }
Do we know what causes the release of energy in nuclear fission?
Question: I was trying to put together all the things I've read in the last couple of days and I realized that, based on my current knowledge of the standard model and the way nuclear fission works, I'm not able to understand why any number of nucleons packed together by the strong nuclear force, no matter how tightly, would release energy when split. This took me down an interesting rabbit hole and then I found this answer. Leaving aside the rant, it made me wonder: do we really understand why the splitting releases energy, or do we just know it does based on observations? The way I see things now, since the nuclear forces are so strong, their tendency should be to always bring nucleons together, even if bombarded with an extra neutron. The extra neutron should stick, or, if the kinetic energy is high enough, break the binding and split the nucleus, but certainly not release energy, lose mass and shoot out 1-3 neutrons in the process. So, why is my expectation wrong, since experimentally it obviously is?

Answer: You are essentially asking why the binding energy per nucleon decreases beyond a certain point in the periodic table (the binding energies being strongest around iron). According to the liquid drop model (which is pretty accurate), there are two effects that play into this.

The simpler effect is that all the protons in the nucleus repel one another, and this electromagnetic repulsion is long range, while the strong attraction is only short range. Because it has such limited range, the residual strong interaction produces a binding energy proportional to the number of nearest-neighbor nucleon pairs. For large nuclei, this results in an essentially constant binding energy per nucleon: almost every nucleon is fully surrounded by neighbors, and so the total binding energy is proportional to the number of nucleons $A$. For smaller nuclei, a substantial fraction of the nucleons sit on the surface, which is why smaller nuclei are less bound.
However, the electrostatic energy of all the protons in the nucleus does not increase linearly with the number of constituents. Instead, it increases as $Z^{2}$, where $Z$, the atomic number, is the number of protons. So at some point, adding more protons to the nucleus is going to actually decrease the binding energy per nucleon, and this is the point at which fission becomes an exothermic reaction.

This does not answer the question of why we cannot simply add more and more neutrons to the nucleus, however. The reason for that is quantum mechanical. The neutrons (like the protons) are fermions and thus obey the exclusion principle. If we have more neutrons than protons, the Fermi/zero-point motion of the neutrons will be greater, while their binding potentials will be the same. If the nucleus becomes too neutron rich, it becomes energetically favorable for a neutron to decay into a proton. For small $Z$, when the electrostatic repulsion effect is small [because the fine structure constant $\alpha\approx1/137$, while the strong coupling is ${\cal O}(1)$], the most stable configurations have basically equal numbers of protons and neutrons, because this minimizes the energy of the nucleons' Fermi motion. As the electrostatic effect becomes more significant, the line of stability shifts toward having more neutrons than protons, but eventually (in the neighborhood of $Z\sim1/\alpha$) you cannot add more of either kind of nucleon without decreasing the stability.
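The volume, surface, Coulomb and symmetry effects described above are exactly the terms of the liquid drop model's semi-empirical mass formula. As a reference sketch (the coefficients are empirical fits that vary slightly between sources; values below are rough, in MeV):

$$
B(Z, A) \;\approx\; a_V A \;-\; a_S A^{2/3} \;-\; a_C \frac{Z(Z-1)}{A^{1/3}} \;-\; a_A \frac{(A - 2Z)^2}{A} \;+\; \delta(Z, A),
$$

with $a_V \approx 15.8$, $a_S \approx 18.3$, $a_C \approx 0.71$, $a_A \approx 23.2$. The volume term grows like $A$, the Coulomb term like $Z^2/A^{1/3}$, and the asymmetry term penalizes $N \neq Z$; the competition between the last two pushes the line of stability neutron-rich and eventually makes fission exothermic, i.e. $B/A$ for the fragments exceeds $B/A$ for the parent.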
{ "domain": "physics.stackexchange", "id": 54783, "tags": "nuclear-physics, mass-energy, binding-energy" }
Merge multi-robot's maps
Question: Hello, I want to know whether navigation_tutorials can merge multi-robot's maps? Or just try to control different robots? Is it real time, or just offline? Originally posted by arahp on ROS Answers with karma: 46 on 2019-10-08 Post score: 0 Answer: The problem of merging maps between robots is a slam problem. There are different solutions to do slam in ROS. (Amongst others gmapping, hector slam and cartographer) One solution that is known to support multiple agents is cartographer: https://google-cartographer-ros.readthedocs.io/en/latest/going_further.html#multi-trajectories-slam Originally posted by ct2034 with karma: 862 on 2019-10-09 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by arahp on 2019-10-21: Thanks a lot, if i want to merge maps using gmapping, do you have any suggestion? For example, a framework to realize it or some related work? It seems that Gmapping does not have this function itself. Comment by ct2034 on 2019-11-20: can you please click on the little checkmark next to this answer, then. Comment by Ga_phantom on 2020-03-10: There's a package called multirobot_map_merge can merges maps. Comment by Ga_phantom on 2020-03-15: I have some problem with /start_trajectory, like it will say topic [scan] is already used.can you speak more specific about it,Thanks in advance. Comment by ct2034 on 2020-03-17: This is cartographer specific, yes? I am not an expert on that. I would suggest opening a new question and clearly describe the problem you have with cartographer. Comment by Ga_phantom on 2020-03-22: OK, Thank you~.
{ "domain": "robotics.stackexchange", "id": 33867, "tags": "ros-kinetic" }
Sending a message to turtlesim
Question: Can anyone point me to an example of how to send a linear/angular message to turtlesim from a C++ program, including declaration of the publisher and message type. Basic, I know... but it's not obvious to me how to do this from the tutorials and docs. Originally posted by Paul0nc on ROS Answers with karma: 271 on 2011-10-18 Post score: 0 Answer: Check out the turtlesim teleop package. Originally posted by David Lu with karma: 10932 on 2011-10-18 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by Paul0nc on 2011-10-18: Thanks David. This was a great help.
{ "domain": "robotics.stackexchange", "id": 7002, "tags": "ros, turtlesim, message" }
Practical importance of Turing machines?
Question: I am an electrical engineer, and only had one CS course in college 26 years ago. However, I am also a devoted Mathematica user. I have the sense that Turing Machines are very important in computer science. Is the importance only in the theory of computer science? If there are practical implications/applications what are some of them? Answer: The importance of Turing machines is twofold. First, Turing machines were one of the first (if not the first) theoretical models for computers, dating from 1936. Second, a lot of theoretical computer science has been developed with Turing machines in mind, and so a lot of the basic results are in the language of Turing machines. One reason for this is that Turing machines are simple, and so amenable to analysis. That said, Turing machines are not a practical model for computing. As an engineer and a Mathematica user, they shouldn't concern you at all. Even in the theoretical computer science community, the more realistic RAM machines are used in the areas of algorithms and data structures. In fact, from the point of view of complexity theory, Turing machines are polynomially equivalent to many other machine models, and so complexity classes like P and NP can equivalently be defined in terms of these models. (Other complexity classes are more delicate.)
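To make "simple, and so amenable to analysis" concrete: a complete one-tape Turing machine simulator fits in a dozen lines. This is a hypothetical sketch (the names and the halting convention are illustrative, not from any particular textbook):

```python
def run_tm(tape, transitions, state='start', blank='_', max_steps=10_000):
    """Simulate a one-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, written_symbol, move),
    with move = -1 (left) or +1 (right). The machine halts when no
    transition matches the current (state, symbol) pair.
    """
    cells = dict(enumerate(tape))  # sparse tape, default cells are blank
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break  # halt: no applicable rule
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    lo, hi = min(cells), max(cells)
    return ''.join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# A two-symbol machine that inverts every bit of its input, then halts
# when it reaches the first blank cell.
flip_bits = {
    ('start', '0'): ('start', '1', +1),
    ('start', '1'): ('start', '0', +1),
}
```

Running `run_tm('0110', flip_bits)` returns `'1001'`. The whole formalism is on display in a few lines, which is exactly why proofs about computation are so often phrased in terms of this model.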
{ "domain": "cs.stackexchange", "id": 15303, "tags": "turing-machines" }
Can one make a synthetic dimension "curl around" into a cylinder?
Question: A really cool recent proposal, Synthetic Gauge Fields in Synthetic Dimensions. A. Celi et al. Phys. Rev. Lett. 112, 043001 (2014), arXiv:1307.8349, shows how you can simulate a synthetic magnetic field in a fictional 2D lattice by taking a 1D lattice and populating it with atoms that have an internal degree of freedom, usually a hyperfine manifold of ground states. A set of Raman coupling lasers with cleverly engineered phases allows one to set up a synthetic magnetic flux in each lattice plaquette and therefore a full synthetic magnetic field in the whole lattice. I also know of one experimental realization, Observation of chiral edge states with neutral fermions in synthetic Hall ribbons. M. Mancini et al. arXiv:1502.02495. This is limited, of course, by the number of available internal states, which means that the lattice looks more like a thin strip than anything else. Is it possible to alter the strip's topology? In particular, can one couple the maximal $m=M$ and $m=-M$ hyperfine states in a way that will 'knit' the two strip edges together to make the synthetic lattice doubly connected? This would obviously have really nice implications in terms of quantum simulation of "curled up" extra dimensions. Has anything like this been proposed? Answer: The simple answer to this is "yes, go read the paper". Indeed, the OP is somewhat negligent in their post, since the Celi et al. paper plainly states in the introduction that We also show that by using additional Raman and radio-frequency transitions one can connect the edges in the extra dimension. Thus, the possibility of engineering synthetic cylinders was on the cards to begin with. However, later work does propose much fancier topologies for these sorts of schemes. More specifically, in Quantum simulation of non-trivial topology. O. Boada et al. N. J. Phys. 
17 045007 (2015) (open access) the ICFO group proposes a number of interesting schemes, including a circle, a cylinder, a torus, a Möbius strip, and a twisted torus. Of course, whether these schemes can be realized experimentally is another matter. In particular, they require very specific hyperfine couplings to be turned on, while other couplings are not. This is in principle OK but it does require a good bit of fine-tuned control. Additionally, some of these schemes require sites to be individually targeted, which can be very hard, particularly if the links are directly implemented in the microwave domain. As an example of the difficulties, consider the implementation by Mancini et al. They used Raman transitions for the hyperfine couplings, as they (i) are generally easier to implement than direct microwave transitions, and (ii) have a much larger spatial phase variation, which is required to implement the large Peierls phases they needed to simulate the synthetic magnetic field. However, Raman transitions consist of two dipole transitions instead of one, which forces $\Delta m=2$ between lattice sites instead of $\Delta m=1$, and therefore cuts in half the available lattice width (i.e. three sites instead of six, with a spin-5/2 system). An additional point worth mentioning is that often you don't really even want such fancy topologies in the first place. This is the case, for example, with the Mancini et al. paper, where they care about chiral edge states. These are cyclotron orbits in a magnetic field, which can only exist in a single direction near a sharp edge. Thus, for these purposes it is vital to have a sharply defined cutoff, and it is advantageous to have a thin strip (as long as it is wide enough to support the cyclotron orbits), as this maximizes the relative contribution of the edge states. For other interesting effects, of course, you probably do care about the topology.
To see what signatures you might look for, and why you'd care about them, see the Boada et al. paper.
{ "domain": "physics.stackexchange", "id": 21764, "tags": "topology, cold-atoms, optical-lattices, synthetic-gauge-fields" }
Is it possible to calculate the distance to a rainbow by using the parallax method?
Question: A colleague of mine (a physicist) recently claimed that it is possible to calculate the distance to a rainbow by applying the parallax method and that the result would be ~150 million kilometers, the Earth-Sun distance. To me this statement seems quite odd, since to my knowledge, you need some fixed, far distant background for calculating the parallax angle. Can anyone explain how this is possible or is it simply false?

Answer: Your colleague is right. The rainbow is a virtual image (not a physical object) similar to a mirror image. The rain droplets are similar to small mirrors or scatterers. The virtual origin of the light rays, where the extrapolated rays intersect, is approximately at infinity. In geometrical optics, the exact distance of a virtual image is defined by its parallax. If the rainbow is observed from different points on Earth, as in the diagram below, the center of the rainbow (the antisolar point) has a small negative parallax, corresponding to the location of the Sun, at a distance of 1 AU (astronomical unit). The reference background for parallax measurement is the celestial sky. For example, in mid-January the rainbow is approximately positioned over the stars Aldebaran, Rigel and Sirius. The stars are not visible in the sky during daylight, but a planetarium app on a smartphone would reveal their location. From day to day the center of the rainbow moves slowly across the celestial sphere, at a speed of 1° per day along the ecliptic.

Generally, when discussing rainbows, it is more useful to think of that distance of 1 AU as approximately infinite, because for visual observations by eye or camera the difference is negligible. Then: The rainbow is a 42° circle around the antisolar point. The antisolar point is opposite to the sun, located at infinity. When using visual observations (eye or camera, possibly at different locations), the rainbow is indistinguishable from a 42° circle at infinity.
Imagine the Sun was replaced by an ideal monochromatic point source. The rainbow would then be a 42° sharp monochromatic circle instead of a blurry multicolor circular band. A camera lens would have to focus at infinity to obtain a sharp photo, otherwise the recording would be blurred. About three days before or after full moon it is possible, occasionally, to see the moon in the rainbow (example). With some luck it should be possible to make simultaneous photos from different places on earth. The parallax difference would show that the rainbow is further away than the moon. Some people think the rainbow is a cone. However, the cone is the location of the light rays, whereas the rainbow is the apparent origin of those rays: a 42° circle around the antisolar point, at infinity.
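The claim that the rainbow's parallax corresponds to a distance of 1 AU can be sanity-checked with the small-angle formula. This sketch (constants are standard values; the function name is illustrative) compares the shift of the antisolar point with that of the Moon over an Earth-sized baseline:

```python
import math

AU = 1.496e11            # Earth-Sun distance in metres
EARTH_RADIUS = 6.371e6   # metres
MOON_DISTANCE = 3.844e8  # metres

def parallax_arcsec(baseline_m: float, distance_m: float) -> float:
    """Angular shift, in arcseconds, of a point at distance_m when the
    observer moves baseline_m perpendicular to the line of sight."""
    return math.degrees(math.atan2(baseline_m, distance_m)) * 3600.0

# Shift of the Sun (and hence of the antisolar point, the rainbow's
# centre) for two observers one Earth radius apart: the classical
# solar parallax of roughly 8.8 arcseconds.
sun_shift = parallax_arcsec(EARTH_RADIUS, AU)

# The Moon shifts hundreds of times more over the same baseline, which
# is why simultaneous photos from two sites would show the Moon sitting
# clearly in front of the rainbow.
moon_shift = parallax_arcsec(EARTH_RADIUS, MOON_DISTANCE)
```

The tiny (and, for the antisolar point, negative) shift of the rainbow's centre is what pins its effective distance at 1 AU, while for practical photography it is indistinguishable from infinity.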
{ "domain": "physics.stackexchange", "id": 69564, "tags": "optics, visible-light, refraction" }
How fast does a spacecraft have to be to enter a primordial black hole without being torn apart?
Question: If there really is a primordial black hole beyond the Kuiper belt, we can send a probe to the black hole and into it. But how fast must the probe be in order to enter the black hole without being ripped apart by spaghettification? The black hole in question has an event horizon diameter of 2-3 inches and 5-15 Earth masses, so it is a tiny probe (tiny enough cameras do exist) we would have to launch. Let's say the probe is 1.5 inches broad and has a length of 2.5 inches. How fast must it be in order to not be torn apart and successfully enter the black hole?

Answer: Let's try a crude back-of-the-envelope calculation to get orders of magnitude. The tidal force near the event horizon is of the order of $G M/r^3$, which is something like $10^{18}\,g/\mathrm{m}$. So a probe with a mass of 1 gram and diameter of 1 cm would experience about $10^{12}\,\mathrm{N}$ of force trying to "spaghettify" it by accelerating its ends at about $10^{16}\,g$ relative to the centre. If it experienced that force for a time $t$, the ends would likely move about $10^{17} t^2\,\mathrm{m}$ relative to the centre, assuming $t$ is short enough that relativity does not complicate things, so if we want to limit that to, say, 1 mm we need $t < 10^{-10}\,\mathrm{s}$.

So basically we want the probe to go from a few Schwarzschild radii down to 1 in about 100 ps in its internal time frame, so that the tidal forces do not have time to tear it apart. Using the Newtonian approximation this is a few times faster than the speed of light, so what we learn is that the probe must be going at relativistic velocities to have a chance of surviving. From its perspective, that flattens out the gravitational field around the black hole in the direction of travel, so that the tidal forces seem even briefer in duration.
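The ~$10^{18}\,g/\mathrm{m}$ figure in the answer is easy to reproduce. This sketch (assuming the mid-range mass of 10 Earth masses and evaluating the Newtonian tidal gradient $2GM/r^3$ right at the horizon) gives the same order of magnitude:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_EARTH = 5.972e24   # Earth mass, kg
G0 = 9.81            # standard gravity, m/s^2

mass = 10 * M_EARTH                 # mid-range primordial black hole mass
r_s = 2 * G * mass / C**2           # Schwarzschild radius, metres
tidal = 2 * G * mass / r_s**3       # tidal gradient at the horizon, s^-2
tidal_g_per_m = tidal / G0          # differential acceleration, in units
                                    # of g per metre of probe length
```

`r_s` comes out near 9 cm and `tidal_g_per_m` near $10^{18}$, consistent with the back-of-the-envelope numbers above.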
{ "domain": "astronomy.stackexchange", "id": 4273, "tags": "black-hole, speed, space-travel" }
What is the power of the most powerful quasar found?
Question: Trying to find an answer to this question, I came across many sources that are in complete contradiction. For example, Wikipedia states that a typical quasar has a power of $10^{40}$ watts, while according to this, the most powerful quasar has a power of $10^{39}$ watts ($10^{46}$ erg/s). So now which is correct, and what is the correct number for a typical quasar?

Answer: It is difficult to keep track of these things, especially since quasars are also highly variable. Trying to answer the question of the post title, I found this example that sounds pretty impressive: $7\times 10^{14}\,{\rm L_\odot}$, or $1.4\times 10^{41}$ W. The ESO press release refers to the kinetic luminosity, or the kinetic energy of the outflow per unit time, not the quasar's radiative luminosity.

According to this paper based on SDSS data, just to give an example, the first thing is that the mean luminosity of the quasar population changes with time, from high to low redshift. But apparently, the quasar population distributes broadly around absolute magnitude $-26$, which is interesting since this is close to the apparent magnitude of the Sun. So a good rule of thumb might be that a quasar at 10 parsecs from Earth would look as bright as the Sun, or would have a luminosity of $\sim4\times 10^{36}$ W. These are very rough numbers.
{ "domain": "physics.stackexchange", "id": 15491, "tags": "gravity, black-holes, astronomy, astrophysics" }
What causes the circular polarization of light from outside Snell's window?
Question: Wikipedia and other sources claim that the internal reflection underwater outside Snell's window is circularly polarized. What is the mechanism that causes this circular polarization? Answer: The claim is true only for the following conditions: Reflexion that is highly grazing, i.e. the incident wave is almost parallel to the interface and the angle of incidence approaches $\pi/2$; The incident light is linearly polarized and the plane of polarization is exactly halfway between the $s$ and $p$ polarization planes, i.e. so that the electric field makes an angle of $45^\circ$ with the intersection line between the transverse plane of propagation and the interface. In short, at glancing angles, the TIR mechanism mimics the action of a quarter wave plate. I show the above is true with the Fresnel equations in my answer to the Physics SE question How does one calculate the polarization state of random light after total internal reflection (read the answer to about halfway, where I discuss the grazing angle case). The mechanism is that the Goos-Hänchen phase shift differs for the $s$ and $p$ states. It is a little hard to give an intuitive explanation; one has to refer to the Fresnel equations (as I do in my answer referred to above). But remember that TIR doesn't happen at the interface; the field tunnels beyond the interface a small distance as an evanescent field, as I explain in my answer here. So it is intuitively clear that there is an effective plane of reflexion a small distance beyond the physical interface owing to the nonzero tunnelling i.e. turnaround distance, and this distance happens to be a quarter of a wavelength at highly glancing incidence angles.
{ "domain": "physics.stackexchange", "id": 41825, "tags": "optics, water, reflection, refraction, polarization" }
How to implement simple noise filter based on the frequency domain information?
Question: Brief introduction: I am working on my university project. My task is to develop an application that captures sound from a mobile phone and removes/reduces noise (this is a simplified version of the task). I had never faced DSP before, and I spent a lot of time figuring out how to get the audio signal from the phone (no direct API is provided), so I have only a few weeks left to study DSP. There is a lot of information in this field; my basic source of information was http://www.dspguide.com/. My work, to be accepted, should include some work (algorithms) in the field of DSP (so I should avoid ready-made libraries). What I need:

- a simple algorithm (easy to understand) for noise reduction; I am very limited in time (I have only one day left, after which I should present my work)
- references to resources for understanding the FFT and inverse FFT (although I implemented those algorithms in code, I simply adopted code from the book)

What I have done: FFT, inverse FFT. I tried to remove frequencies by applying the FFT to the signal in the time domain and zeroing the needed ranges in the resulting frequency domain (ranges < 300 and ranges > 3700, as the human voice is in the range [300; 3700]; by doing that I tried to remove sounds not related to the human voice), but I got very bad results (https://stackoverflow.com/questions/24101814/why-ideal-band-pass-filter-not-working-as-expected). Could you please suggest how I can reduce noise based on the results of the FFT? (I don't need a super efficient method; I need something that I can present in my work and explain how I did it.) Please help! Answer: Fast convolution filtering with an FFT/IFFT requires zero padding and using overlap-add or overlap-save methods to remove circular convolution artifacts. You should also use a frequency domain response that has a shorter impulse response than a rectangle (which zeroing bins is).
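To illustrate what the answer means by overlap-add: the long signal is filtered in independent blocks, and each block's output, which is longer than the block by the filter tail, is summed back into the result at the right offset. A minimal pure-Python sketch (using direct convolution per block for clarity; in a real implementation each block would be zero-padded and convolved via FFT/IFFT):

```python
def convolve(x, h):
    """Direct linear convolution; output length is len(x) + len(h) - 1."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def overlap_add(x, h, block_size=8):
    """Filter x with h block by block, summing the overlapping tails."""
    y = [0.0] * (len(x) + len(h) - 1)
    for start in range(0, len(x), block_size):
        block = x[start:start + block_size]
        # In practice: FFT of the zero-padded block, multiply by the FFT
        # of h, then IFFT. Direct convolution gives the same segment.
        segment = convolve(block, h)
        for k, value in enumerate(segment):
            y[start + k] += value
    return y
```

The two functions agree exactly, which is the whole point: block processing introduces no artifacts as long as each per-block convolution is a linear (zero-padded) convolution rather than a circular one.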
{ "domain": "dsp.stackexchange", "id": 1864, "tags": "fft, filters, audio, noise" }
Associativity of fusion of anyons: Why are anyons ordered?
Question: Anyon theories are required to be associative, i.e. when fusing three anyons with labels $a,b,c$, we have $$(a\times b) \times c = a\times (b \times c)$$ This associativity is extended to the fusion and splitting spaces. Vectors in these spaces are represented by fusion trees as in the image. The image suggests that there are precisely two ways of fusing $a,b,c$ to $d$. My question is simple: what about first fusing $a$ with $c$ and then with $b$? Should this not be another possibility that is not covered by the diagrams? Indeed, in his paper https://arxiv.org/abs/cond-mat/0506438 Kitaev explicitly points out that the anyon theory is established on a line and that the order of anyons on that line matters. Still, the theory is claimed to describe particles in 2D. I cannot see how this fits, as in 2D the anyons need not be arranged on a line but can be literally anywhere. Answer: It depends on the context whether it's possible to fuse $a$ with $c$ first. If we just have a fusion category, this process is ambiguous. In this case, our quasiparticles live in one spatial dimension, so their positions are linearly ordered. To fuse $a$ with $c$, we would need to close our space into a circle, and then there would be a choice of boundary condition which could affect the fusion outcome. On the other hand, if we are in two spatial dimensions, then we have a braided fusion category (which is more algebraic data!), and we can freely move $a$ to the other side of $b \otimes c$ to obtain the fusion $(b \otimes c) \otimes a$, which could be compared to $a \otimes (b \otimes c)$ using the $R$-symbol. It's not enough to just have the associator $F$! The consistency relations between $F$ and $R$ are captured by the hexagon equation. Note that the 16 Ising categories in Kitaev's paper come with this braiding, so they can describe quasiparticles in 2d.
However, if we are just given a fusion category, we can use it to produce the Levin-Wen model, and so obtain a model with 2d anyons. But the 2d anyons are not the objects of the fusion category we started with! Instead they form the Drinfeld center, which one can think of as the universally smallest braided envelope of the fusion category (not all fusion categories are braided). Our original fusion category describes quasiparticles constrained to live on a certain universal boundary condition, which you can think of as a TQFT version of the Dirichlet boundary condition.
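Even before any braiding or $F$/$R$ data enters, associativity already constrains the fusion multiplicities $N_{ab}^c$ themselves. Here is a small numerical sketch of that constraint using the Fibonacci fusion rules $\tau \times \tau = 1 + \tau$ (the Fibonacci example and variable names are my own illustration, not taken from the answer above):

```python
import numpy as np

# Fusion multiplicities N[a, b, c] for the Fibonacci theory.
# Labels: 0 = vacuum, 1 = tau.
N = np.zeros((2, 2, 2), dtype=int)
N[0, 0, 0] = N[0, 1, 1] = N[1, 0, 1] = 1  # fusion with the vacuum is trivial
N[1, 1, 0] = N[1, 1, 1] = 1               # tau x tau = 1 + tau

# Associativity at the level of multiplicities:
# sum_e N[a,b,e] N[e,c,d] == sum_f N[b,c,f] N[a,f,d] for all a, b, c, d.
lhs = np.einsum('abe,ecd->abcd', N, N)
rhs = np.einsum('bcf,afd->abcd', N, N)
print((lhs == rhs).all())  # True
```

Note this only confirms that the fusion ring is associative; the $F$- and $R$-symbols (and the pentagon/hexagon equations relating them) live on top of these multiplicities.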
{ "domain": "physics.stackexchange", "id": 57708, "tags": "topological-field-theory, anyons" }
How can a microbe be in two places at once?
Question: As an ex-biologist I often comb the pop science columns for interest. But I was completely floored by this claim of this planned experiment to put a living organism in two different places at exactly the same time: http://www.theguardian.com/science/2015/sep/16/experiment-to-put-microbe-in-two-places-at-once-quantum-physics-schrodinger Now, my maths isn't great but I know a bit of physical chemistry. And I'm familiar with some of the odder quantum concepts like wave/particle duality and the uncertainty principle on a conceptual level. I was happy to accept these bizarre ideas when they applied to the world of single atoms, but a microbe? That's a whole other level. Can someone explain to me (without maths, please) how on earth this is possible? Answer: Firstly, the paper doesn't sound very exciting. They are basically saying they have a large membrane that has some quantum related dynamics and the microbe is small enough that gluing it to the large thing won't change its motion enough to make it not get the quantum dynamics. And then on top they found a thing in the microbe sufficiently isolated from the rest of the microbe that it can be entangled with other things. But you do have a serious misconception if you thought quantum mechanics is reserved for atoms. So you might benefit from a less mysterious description of quantum mechanics. For experts that are interested, I'll be describing the MIW version of quantum mechanics (which is not a misspelling of MWI). First let's talk about classical mechanics. You can imagine a mass on a spring. If you plot velocity on the y axis and position on the x axis, then you notice that you can specify the position and the velocity (the initial conditions) jointly by specifying a point on the plane. And that over time the position and the velocity both change, so you really get different points in the plane at different times. It always has to move clockwise, and for the spring it actually moves on an ellipse.
This is different than the earth moving on an ellipse around the sun because the y axis is velocity. Next step, still classical mechanics. This time you have two particles. In different places. Each on a spring. So you can imagine a 4d space, two of the dimensions specifying where each particle is and the other two specifying the velocity of each particle. So now you have a point in 4d space moving around and its initial position is all the initial conditions of each particle and it just moves around in 4d always specifying the location and velocity of every particle. So that's what we expect from classical mechanics. You know the specification is in some range of your 4d space and you know how each point in the 4d space moves, so you know what happens. In general particles can move in 3d and there can be n particles, so you need a point in 6n dimensional space to specify the system: 3n for the location of each and 3n more for the velocity of each. So a single point in 6n dimensional space tells you everything classically and classical mechanics tells you how that point in 6n dimensional space moves in time. In quantum mechanics you have a wavefunction. The wavefunction does some weird things. Firstly, it assigns velocities (from the probability current) for each location. So when you specify a point in 3n dimensions to describe a possible location where all the particles could be found, then the wavefunction specifies all 3n velocity components. Let's visualize that. Go back to the single particle on a spring. It could be sitting at rest with the spring at its natural length. That's being at the origin of the 2d space. Or it could start out displaced 1mm and at rest, that means starting it on the x axis and then going in a clockwise direction around an ellipse. If it was pulled out 2mm then it just goes on a larger ellipse. Once on an ellipse, it stays on that ellipse forever.
But if you pulled it out 2mm and gave it a velocity then you moved it in the x and y direction in the 2d space, effectively placing it on a larger ellipse, and eventually it will be displaced more than 2mm. In quantum mechanics once you specify the wavefunction there is a velocity determined for each configuration of the particles. So back to the 2d picture. Classically, it could have any starting point on the 2d plane and then move along an ellipse. Now it is more like a bar graph. You see that each location along the x axis has to have a particular velocity. That doesn't seem weird at first. But now imagine lots of points. If each one was a single classical world it would move along an ellipse. But because of this rule that each location has its own velocity, consider two points on the same ellipse; to be concrete, have one start on the positive y axis and the other one just a bit clockwise along the same ellipse. They can't just move along the ellipse like they classically would, because if they did and the rightmost one got to the x axis it would have to keep going clockwise, but the places to the left already have their own velocities, positive ones. It's like you have two classical worlds: each might want to move along the classical trajectory (in this case, the ellipse) but they can't, they get in each other's way. This means each point follows a different trajectory. That different trajectory is a purely quantum effect, and it always happens, no matter how many particles there are, how big something is, how massive, whatever. When it has more parts it just means your 2d space becomes 4d or 6000d (for 1000 particles moving in 3d). When the space is larger it simply becomes easier for these two configurations to evolve so they don't get in each other's way because there are so many directions to go. So you can imagine it like a fluid in this large 6000d space, a fluid with some dye.
You can think of it as a wave that is high only in a few places and the dye is a small tracer on the top of the water. The dye looks like a configuration and you can mathematically see it trace out its path. In the case of the spring different dye tracers track different configurations. If you have a hill, as the dye closer to the hill slows down the ones behind are forced to slow too, and that slows the ones behind them. The net effect is that some dye tracers are seen to bounce away from the hill long before they get to the hill, and some that classically would have slowed down and bounced back from the hill get pushed on based on the build up from behind. We call that tunneling and reflection. And it is a quantum effect. But it isn't mysterious and it isn't reserved for small things. Sometimes the fluid separates: part got pushed through, part got reflected, and then if you are in a 6000d space with lots of things that can deflect you around, those two bunches of fluid basically act like they are the only one. These collections that are confined to a small region of the huge space but act like they are the only thing, that's the classical experience: you know the configuration is somewhere in a particular region but you don't know exactly where, and besides you are getting split every so often anyway. When someone talks about quantum weirdness they are saying that two waves that could have acted on their own are set to overlap, and thus they will make the tracers move in a nonclassical way because of that single-velocity-per-location rule. And the way you assign velocities to locations depends on the wavefunction, so lots of different wavefunctions can give lots of different dynamics. Since you are the whole collection (not the/a tracer) you don't know coming into a splitting event that you will be going one way or the other. Both are going to happen. But each outcome will afterwards think of itself as the whole wave, simply because you won't ever meet the other wave.
So let's review what quantum mechanics is. There is a huge space of configurations, say 3000d, and each configuration has a velocity (so a 3000d space of velocities) determined by the wavefunction. There are regions with more tracers (or more fluid) and regions with fewer. So in this giant space there are regions with some fluid and regions without. Each little piece of fluid could act like an island and not notice the other islands as they each move along their own path through the huge space. That is your classical experience. Sometimes a region breaks into two or more distinct regions as it moves around; that is what you experience as indeterminism. Each piece afterwards thinks of itself as the whole world and doesn't know why it went the way it did instead of the other way. Because you only experience which group you are in. But each little tracer just follows its path. It doesn't always follow the classical path, because it could spread out from the other tracers as the fluid itself spreads out. But when the tracers are pushed onto paths that deviate from the classical path they still do it in ways that conserve a kind of joint energy and momentum. Now if you want to process information, one way is to do it at the level of the whole region. So you specify the wavefunction, which tells you the velocity of every tracer and the whole fluid, and you track information from that. And some of that information can dynamically change in a very classical way. For instance in a hydrogen atom in the ground state the fluid pushing on the tracers is like a force that totally opposes the classical force and the tracer can sit there at rest; specifically, the electron and proton have a relative separation that doesn't change in time. Quantum mechanics made that degree of freedom become dynamically boring. And the center of mass can just cruise along with no classical forces.
So one possible wavefunction has the x for the electron and the x for the proton cruise along at the same speed (and same for the ys and the zs) and you have different tracers for lots of differences between the $(x,y,z)$ of the electron and the $(x,y,z)$ of the proton. And the density of fluid and the tracers is (at each point) exactly what is needed to counter the classical electrostatic force they would exert on each other, so they don't push each other away or towards each other. So as a unit they act like a pair of particles with a fixed separation moving with a common velocity. So that center of mass moves in a very classical way. Similarly, there are regions of the fluid with information that can behave very classically. And that is what you are sensitive to. So, there is no reason to expect some cutoff based on size. Size makes it easier for split regions to stay split. But it is just about making it easier.
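The rule described above, that the wavefunction assigns one velocity to each location via the probability current, can be made concrete with a tiny numerical sketch. The Gaussian packet, the natural units, and the variable names below are my own illustrative assumptions; the point is only that $v(x) = (\hbar/m)\,\mathrm{Im}(\psi'/\psi)$, and for a packet with mean momentum $\hbar k_0$ every location gets the same velocity $\hbar k_0/m$:

```python
import numpy as np

hbar, m = 1.0, 1.0            # natural units (illustrative)
k0 = 2.0                      # mean momentum of the packet
x = np.linspace(-8, 8, 1601)
dx = x[1] - x[0]

# A Gaussian wave packet drifting to the right.
psi = np.exp(-x**2 / 4) * np.exp(1j * k0 * x)

# The velocity the wavefunction assigns to each location,
# from the probability current: v(x) = (hbar/m) * Im(psi'/psi).
dpsi = np.gradient(psi, dx)
v = (hbar / m) * np.imag(dpsi / psi)

# For this packet the assigned velocity is the same everywhere: hbar*k0/m.
print(np.allclose(v[100:-100], hbar * k0 / m, atol=1e-2))  # True
```

A different wavefunction (say, two overlapping packets) would assign a genuinely position-dependent velocity field, which is where the "tracers getting in each other's way" picture comes from.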
{ "domain": "physics.stackexchange", "id": 24958, "tags": "quantum-mechanics" }
Repository implementation
Question: I have a repository called PostsRepository: public class PostsRepository : IPostsRepository { private readonly DatabaseContext context = new DatabaseContext(); public IEnumerable<Post> All() { return context.Posts .OrderBy(post => post.PublishedAt); } public IEnumerable<Post> AllPublishedPosts() { return context.Posts .OrderBy(post => post.PublishedAt) .Where(post => !post.Draft); } public Post Find(string slug) { return context.Posts.Find(slug); } public void Create(Post post) { post.Slug = SlugConverter.Convert(post.Slug); post.Summary = Summarize(post.Content); post.PublishedAt = DateTime.Now; AttachTags(post); if (context.Posts.Find(post.Slug) != null) { throw new Exception("tag already exists. choose another."); } context.Posts.Add(post); context.SaveChanges(); } public void Update(Post post) { post.Slug = SlugConverter.Convert(post.Slug); post.Summary = Summarize(post.Content); AttachTags(post); if (context.Posts.Find(post.Slug) != null) { throw new Exception("tag already exists. choose another."); } context.Posts.Add(post); context.SaveChanges(); } public void Delete(Post post) { context.Posts.Remove(post); context.SaveChanges(); } private void AttachTags(Post post) { foreach (var tag in post.Tags) { if (context.Tags.Any(x => x.Name == tag.Name)) { context.Tags.Attach(tag); } } } private static string Summarize(string content) { // contrived. return content; } } I am worried that I might have wound up with a design that is not very testable, as it is not apparent to me how to test this code. I am going to ask another question on SO as to how to unit test this class soon, but before I pursue this implementation I would like to ask that you please review my repository implementation. Particular areas of concern: Testability I have read countless opinions about what a repository should do. Are there any pragmatic reasons why my implementation might be bad? The PostsRepository needs to access the Tags database set. Am I allowing this in the correct way?
Know that I plan to implement a TagsRepository in the future. I throw an exception when the slug (which must be unique) is occupied. Should I return a bool to indicate failure instead? Would this not violate the command-query segregation principle? I am aware that the Update method is hard to reason about and I am working on that. It is for this reason that my code does not currently adhere to DRY. Answer: Testability Your repository implements an interface which will allow it to get stubbed out easily, so that's a very testable thing. However if you want to also unit-test your repositories themselves then you are stuck, because you have a hardcoded dependency on DatabaseContext. You should move that up one layer by abstracting it out as well, and providing a custom DbContext for testing purposes or by mocking it out. More information on that here. I have read countless opinions about what a repository should do. Are there any pragmatic reasons why my implementation might be bad? One thing that comes to mind is the long-lived DbContext object. Some say it isn't an exact necessity, others say you definitely should avoid it, but the generally accepted notion is that you should dispose of your DbContext. A unit of work corresponds to one method inside your repository, so wrap a using statement around the context within each method. Note that this would interfere with testing the repository itself, so it's worth considering using integration-tests to test your repository in particular. The PostsRepository needs to access the Tags database set. Am I allowing this in the correct way? Know that I plan to implement a TagsRepository in the future. I am not familiar enough with EF to answer that. I throw an exception when the slug (which must be unique) is occupied. Should I return a bool to indicate failure instead? Would this not violate the command-query segregation principle? It does not violate it because it is just the result of your command, it is not the result of a query.
It holds exactly the same value as an exception. There is a more interesting way though: create a "result"-object. This can be as simple as class CallResult { public bool Success { get; set; } public string Message { get; set; } } which will be more descriptive than bool since you can also send a message along. Whether you choose this or an exception depends on your own preference: using an exception will force you to have a try-catch around it somewhere, which is rather ugly, but then again it will prevent you from accidentally leaving out validation on the return type (either a try-catch or if(result.Success)), which is not enforced with a custom type. I am aware that the Edit method is hard to reason about and I am working on that. It is for this reason that my code does not currently adhere to DRY. Since I don't see an Edit method I'll assume you haven't written it yet. DRY is nice and all, but for some niche situations (CRUD actions, tests, ...) I would argue that readability is more important than eliminating a few lines that are similar to each other. I suppose you could create a situation like this, but that depends on how different each action is. void Process(Post post) { post.Slug = SlugConverter.Convert(post.Slug); if (context.Posts.Find(post.Slug) != null) { throw new Exception("tag already exists. choose another."); } post.Summary = Summarize(post.Content); AttachTags(post); } void Create(Post post) { Process(post); post.PublishedAt = DateTime.Now; context.Posts.Add(post); context.SaveChanges(); } void Update(Post post) { Process(post); context.Posts.Add(post); context.SaveChanges(); } Note how I performed the validation before doing the other work. A method should be named as [action][context]. AllPublishedPosts makes me think it's a property, not a method. I would change this to GetAllPublishedPosts. I'm not commenting on All since that's a special situation, I suppose.
{ "domain": "codereview.stackexchange", "id": 8938, "tags": "c#, unit-testing, entity-framework, asp.net-mvc, repository" }
Ratio of the isospin of the photon
Question: As is known, the photon can have an isospin of 0 or 1. However, what is the ratio of both for a virtual photon (in e+e- annihilations)? Next to a 1^-- resonance like J/Psi or Psi' it should be 1 in my opinion (as the resonance cross section is much higher than the continuum cross section). But what ratio should one expect for the continuum? (I plan to estimate the ratio of Delta and Nucleon resonances within the continuum...) Thanks for your help. Answer: I did find the answer in a paper of J. L. Rosner (http://arxiv.org/abs/hep-ph/0411003). The ratio is 1:9, since (Q_u − Q_d)^2 = 9(Q_u + Q_d)^2. Thus, it can be described as 1/10 I_0 + 9/10 I_1. The reason for the different couplings is that the photon couples to the quarks' electric charges, which enter the isoscalar and isovector currents with different weights.
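Rosner's 1:9 split follows directly from the quark charges. Here is a quick arithmetic check (the quark charge values are standard; the decomposition coefficients and variable names are my own sketch of the calculation):

```python
from fractions import Fraction

Qu, Qd = Fraction(2, 3), Fraction(-1, 3)   # up and down quark charges

# The photon current splits into isovector (I=1) and isoscalar (I=0)
# pieces with coefficients (Qu - Qd)/2 and (Qu + Qd)/2.
iso_vector = (Qu - Qd) / 2   # 1/2
iso_scalar = (Qu + Qd) / 2   # 1/6

# Rates go as the squared couplings -> 9:1 in favour of I=1.
print(iso_vector**2 / iso_scalar**2)                    # 9
print(iso_scalar**2 / (iso_scalar**2 + iso_vector**2))  # 1/10
```

The last line is the 1/10 weight of the $I_0$ piece quoted in the answer.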
{ "domain": "physics.stackexchange", "id": 4589, "tags": "particle-physics" }
Why does air laterally diverge before entering a cyclone/anticyclone?
Question: I'm a newbie in meteorology so sorry if this is a dumb question. I get the general idea of how cyclones form, but one thing I can't wrap my head around is why must the air diverge laterally when forming the spin, instead of following the original drift by Coriolis force? For example, the formation of a cyclone in the northern hemisphere, why isn't it the case on the left? What really happens when the cyclone starts to form that gives its direction, before gaining the rotational inertia that sucks more air in that direction? Answer: It may help to focus on how pressure centers aren't initially about rotation. Picture you are air... you start moving towards a new cyclone (because of a lowered pressure). Straight inwards... Then you start to turn to the right because of Coriolis (in the NH). That results in only the right diagram (try adding the arrows in from other directions). The thing is, even if you managed to get a clockwise circulation like your left diagram... its basic forces don't suggest it can continue to work. Because the two forces of note typically directing the air (horizontally) in pressure systems... both point towards the low pressure if the low is rotating clockwise; Coriolis (turning to the right, NH) and the pressure gradient both pull the wind directly inward towards the center. The circulation would cease. Circulation is all about balance. It is indeed possible to get a clockwise low pressure circulation. But, given that pair of forces it would need to counteract, you need something relatively hefty to make it happen. Probably the most viable method to start up a clockwise low capable of persisting is by the tilting of horizontal rotation (generated by vertical wind shear) into the vertical, which will bring a counterclockwise area of rotation north (NH) of an updraft in a thunderstorm. (See Figure 3 from Dynamics Of Tornadic Thunderstorms, Klemp, 1987) So this means there are clockwise rotating mesocyclones.
From there, you can have enough rotation speed that centrifugal force can begin to be a bigger factor, which does point outwards [that's the balance that keeps satellites rotating around the planet... gravity pulls them in... centrifugal force pushes them out... result is they can maintain their height]. And that can allow them to persist, and even tighten up... leading to anticyclonic tornadoes, which are indeed low pressures that are rotating clockwise. There are other options for forced clockwise circulation, like a downdraft/rear inflow jet surging, leading to a diagram like this one from SPC's bow echo education: (Note the anticyclonic one on the bottom) However, these tend to weaken more/aren't in a favorable spot for continuation. The point is that such clockwise rotation can't develop "on its own" without those additional forces/motions helping, and then it needs just the right balance to continue (counterclockwise rotation does need just the right balance too, but those two opposing forces mentioned [PGF+Coriolis] tend to really help the balance be much more stable in the face of disturbance). For a little more discussion of the balancing tendency in a cyclonic (counterclockwise in NH) low pressure, see also this question here.
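The "needs something relatively hefty" point can be sketched with the gradient-wind balance around a low, $v^2/r + fv = G$, where $G$ is the inward pressure-gradient acceleration and $v > 0$ means counterclockwise flow. The quadratic has two roots: the ordinary counterclockwise solution, and an anomalous clockwise one that only exists at speeds large enough for centrifugal force to overpower both inward-pointing forces. The numbers below ($f$, $r$, $G$) are made-up illustrative values, not from the answer:

```python
import math

f = 1e-4   # Coriolis parameter, s^-1 (NH mid-latitudes)
r = 1e5    # radius of the circulation, m
G = 1e-3   # inward pressure-gradient acceleration, m s^-2 (illustrative)

# Gradient-wind balance around a low: v**2/r + f*v = G,
# i.e. v**2 + f*r*v - r*G = 0.
disc = math.sqrt((f * r)**2 + 4 * r * G)
v_regular = (-f * r + disc) / 2    # ordinary cyclonic (counterclockwise) low
v_anomalous = (-f * r - disc) / 2  # anomalous anticyclonic (clockwise) low

print(v_regular, v_anomalous)
# The clockwise solution always needs |v| > f*r, i.e. strong rotation,
# because centrifugal force must balance PGF plus Coriolis combined.
print(abs(v_anomalous) > f * r)  # True
```

With these numbers the regular low balances at a modest speed while the anomalous clockwise low requires a much faster one, matching the anticyclonic-tornado regime described above.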
{ "domain": "earthscience.stackexchange", "id": 2711, "tags": "meteorology, atmosphere, tropical-cyclone" }