How to calculate precision and recall in an unsupervised ranking method in order to compare it with a supervised one?
Question: I am working on a supervised ML project for key-phrase extraction. The evaluation dataset (separate from the training and validation datasets) contains around 200 phrases labeled as key or not key. The problem is that I want to compare the results (in terms of precision, recall and F1) obtained via different classifier algorithms with existing unsupervised methods. These methods are based on ranking and extract the top-k key phrases. Using the dataset I mentioned, what is the proper way to calculate precision and recall for these unsupervised ranking methods? Answer: You first need to decide what your "label" is, and this label will apply to both your supervised and unsupervised models. Obviously, in a supervised world, knowing your label makes training and evaluating the model easy, so I won't go into that because it seems like you have a good handle on it. In the unsupervised world, you'll have to think really hard about the mapping of the unsupervised outputs to the label you've decided to use for evaluation. A lot of this is going to come down to how you structure that unsupervised problem. For instance, if you're using unsupervised clustering, then maybe you run the model, see what the clusters look like, and see if there's a way to map those clusters to the label you're evaluating. It sounds like you're using unsupervised ranking and extracting top key phrases after the ranking. In this case, you'd probably want to map the possible top key phrases to the labels you want to evaluate. Then, after ranking and key-phrase extraction, you can get a mapped "prediction label" based on the key phrases for each prediction.
Of course, you'll likely have a messy collection of "prediction labels", because it's unlikely that each test prediction will output N key phrases that perfectly match one label, which means you'll have to come up with a way to round those mappings to a single prediction label (maybe you take the mode, maybe you round the average; it all depends on what you value most). From there, computing recall, precision and F1 is straightforward. I will say, though, that trying to bend an unsupervised model to evaluation like a supervised model is only useful if recall, precision and F1 are the best way to evaluate your model's usefulness in its problem space. This is very different from evaluating its effectiveness. The question here is: does this model's output fit the use case of the problem best? When I deliver this output to an end user, would they appreciate this model more than another model? I'd encourage you to, on top of using classic evaluation scores to measure model effectiveness, also find a way to measure or understand model usefulness in your application.
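As a concrete sketch of the evaluation described above: treat the unsupervised ranker's top-k phrases as its positive predictions and score them against the gold key-phrase labels. The phrase lists and the helper name `prf` here are made-up illustrations, not from the question:

```python
def prf(predicted: set, gold: set):
    """Precision, recall and F1 for a predicted key-phrase set
    against the gold key-phrases of the evaluation set."""
    tp = len(predicted & gold)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical gold labels: phrases marked "key" in the evaluation set.
gold = {"neural network", "gradient descent", "backpropagation", "overfitting"}

# Hypothetical unsupervised ranker output: phrases sorted by score, best first.
ranking = ["neural network", "learning rate", "gradient descent",
           "activation", "overfitting", "epoch"]

k = 3  # treat the top-k ranked phrases as the positive predictions
p, r, f = prf(set(ranking[:k]), gold)
```

The same `prf` helper can score the supervised classifier's predicted-positive set, so both families of models are measured against identical labels.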
{ "domain": "datascience.stackexchange", "id": 3504, "tags": "machine-learning" }
openni_node internal throttle to limit cpu usage
Question: When running openni_node on an embedded system I realized the CPU usage is quite high, so high in fact that the rest of the system lags. The CPU load only goes up when someone is actually subscribed, so my guess is that the marshalling takes time. Is this the correct assumption? Switching down the resolution with dynamic_reconfigure already helps, but I'm wondering if there is an option (planned?) to throttle the output in the node internally, so the data is not marshalled if it is needed at a lower rate. Originally posted by dornhege on ROS Answers with karma: 31395 on 2011-04-12 Post score: 2 Answer: Marshalling does take a good amount of time. The recent driver is already a nodelet in its implementation and can use the nodelet_throttle. For an example of doing this see the turtlebot_bringup robot.launch. That launch file shows how you can downsample while only passing pointers. The existing settings don't throttle the frequency much, since we can mostly keep up, but just change the parameters if you need less CPU usage. PS: you may want to experiment with running more things as nodelets on the embedded board, because by doing that you can entirely avoid serializing. Depending on what you are doing, it may be more efficient to operate on the data in place. Originally posted by tfoote with karma: 58457 on 2011-04-12 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by Felix Endres on 2011-04-19: To update the last comment: With an update rate of 1 Hz the nodelet has about 50% single-core CPU usage on an Intel Core2 Quad Q9650 @ 3.00 GHz (instead of ~110% for the non-throttled node). Plus the XnSensorServer with ~20%, but I assume that is independent. Comment by Felix Endres on 2011-04-19: Thanks for this info. I had a look at the launch file and the code of CloudThrottle, but a remaining problem seems to be that the point cloud will still be computed from the depth and pixel data, which will still consume CPU time.
But maybe that is insignificant versus the marshalling, I haven't tried yet.
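For reference, a throttling setup along the lines of the turtlebot_bringup example looks roughly like the following launch fragment. The package, nodelet and topic names here are reconstructed from memory and may differ between driver versions, so treat this as an illustrative sketch rather than a verified configuration:

```xml
<launch>
  <!-- Nodelet manager so the driver and throttle share a process
       (messages are passed as pointers, no serialization). -->
  <node pkg="nodelet" type="nodelet" name="openni_manager" args="manager"/>

  <!-- Throttle the point cloud to a low rate; names are illustrative. -->
  <node pkg="nodelet" type="nodelet" name="cloud_throttle"
        args="load pointcloud_to_laserscan/CloudThrottle openni_manager">
    <param name="max_rate" value="2.0"/>
    <remap from="cloud_in" to="/camera/depth/points"/>
    <remap from="cloud_out" to="/camera/depth/points_throttled"/>
  </node>
</launch>
```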
{ "domain": "robotics.stackexchange", "id": 5351, "tags": "ros, kinect, openni, openni-kinect, openni-node" }
Terminology used in the Wikipedia article for wormholes
Question: The lead section of the Wikipedia article for wormholes says the following: More precisely it [a wormhole] is a transcendental bijection of the spacetime continuum, an asymptotic projection of the Calabi–Yau manifold manifesting itself in Anti-de Sitter space. The above appears to originate from two edits made in 2017 and 2018, which have stayed there since then. I have never heard of the terms "transcendental bijection" and "asymptotic projection of the Calabi–Yau manifold" before. Are these real terms? If so, what do they mean? 2019/11/09: The sentence has been re-added by the same user, who called its removal "vandalism". Answer: While I can't vouch for having read every wormhole paper ever published, nor for knowing every piece of terminology out there, I'm going to go ahead and say this is nonsense. A few pointers: Bijections aren't typically discussed in general relativity. Any bijection likely to appear is going to be at least a homeomorphism. The distinction between transcendental and algebraic functions very rarely appears in physics, nor is it particularly salient for wormholes. The term "spacetime continuum" is not typically used by people in the field; it's used more in a pop-science context. Calabi–Yau manifolds are usually discussed in string theory. While they are nice manifolds, a wormhole spacetime isn't required to be related to either Calabi–Yau manifolds or Anti-de Sitter space. Also, there is no source at all. Wormholes aren't that easy to define (it's easy to make the definition too broad or too narrow), but whatever the definition is (whether it be based on the fundamental group of the manifold, some cut-and-paste procedure, or a local divergence of geodesic congruences), it's certainly not anything close to this. I'm guessing whoever wrote this had the ER = EPR conjecture in mind, where there is a possible AdS/CFT correspondence between wormholes and entangled states in a conformal field theory, but that's about the extent of what I can guess.
{ "domain": "physics.stackexchange", "id": 61306, "tags": "general-relativity, spacetime, astrophysics, wormholes" }
Voltmeter readings at different points in a circuit
Question: I have a circuit with a 6 V battery and two resistors, A and B, in that order. A has a potential drop of 2 V and B has a potential drop of 4 V, which means every electron moving across the respective resistor loses the respective amount of energy. Now I grab a voltmeter: 1) The voltmeter reading across the battery would be 6 V, because an electron gains 6 eV of energy when moving across the battery. 2) Now I connect the voltmeter to any two points before resistor A (with the insulation taken off). Note that there isn't a resistor between the two points of the voltmeter's connections; it's just plain wire. Since hardly any energy is lost by an electron while travelling in the connecting wires, the electric potential at those two points would be the same, which means my potential DIFFERENCE would be 0 or close to 0. So will the voltmeter read 0 when connected to two points in the connecting wire? Please explain if I'm wrong. 3) And what would the reading be at two points BETWEEN resistors A and B? Again, just plain wire. 4) And after resistor B? (Yes, just plain wire.) Please relate your answer to the energy lost by an electron. I have heard countless analogies (including something about a sandwich), but they annoy me; I want to get to the atomic level. Answer: We always consider that wire has negligible resistance, and if the wire is 99% copper it usually has a resistance too small to cause a voltage drop measurable by a common laboratory multimeter. Assuming the wire you used was such a wire: between a and b, between c and d, from a to the + terminal and from e to the - terminal, $P.D. \approx 0\ \mathrm V$. However, between b and c, $P.D. = 2\ \mathrm V$, and between d and e, $P.D. = 4\ \mathrm V$. $$V=\frac{E}{Q}$$ For every $1\ \mathrm C$ of charge passed, there must be a loss of $1\ \mathrm J$ to cause a potential difference of $1\ \mathrm V$. That means $6.24 \times 10^{18}$ electrons together lose $1\ \mathrm J$ of energy (a common analogy says by bumping into the ions). So when $P.D. \approx 0$, the change in energy is $E \approx 0$.
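To put numbers on the energy picture above, here is a quick sketch using the elementary charge (the only input not stated in the question):

```python
e = 1.602e-19                   # elementary charge, coulombs
electrons_per_coulomb = 1 / e   # about 6.24e18 electrons make up 1 C

# Energy lost by one electron crossing each resistor, in joules:
energy_A = 2 * e   # 2 V drop across resistor A
energy_B = 4 * e   # 4 V drop across resistor B
```

So one coulomb losing 2 J across resistor A is the same statement as each of its ~6.24e18 electrons losing 2 eV there.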
{ "domain": "physics.stackexchange", "id": 16465, "tags": "homework-and-exercises, electric-circuits, electrical-resistance, voltage" }
Shooting someone's past self using special relativity
Question: Suppose A and B are a long distance apart initially. B takes off in a spacecraft in A's direction at a really high speed. Both were aged 0 when B took off. When B is about to cross paths with A, A observes him to be 30 years old (while A is 60). At this point, the 60-year-old A shoots the 30-year-old B. B dies. Now, from B's frame, the bullet has to be fired by A when B is 120. That's because the event of 'bullet firing' must happen after the event of 'A turning 60', and A turns 60 in B's frame only when B is 120. So B has to die at 120 as seen from his own frame of reference. How can B die at both 30 years old and 120 years old? Answer: Case I: When B takes off, A and B are both aged 0 in A's frame. 1) When B passes A, B is 30 (given in your setup). 2) But B's clocks run at half speed according to A, so A says that B has been traveling for 60 years. 3) Therefore A is 60. 4) But B says A's clocks run at half speed, so B says A was born 120 years before the shooting, that is, 90 years before B was born. 5) So B's story is this: "90 years before I was born, A was born. He aged at half speed, so on my birthday he was 45. At that time I started my journey to Earth, which took 30 years. During that time, A aged another 15 years, so he was 60 when we met. Then he shot me. I died at 30." Your mistake: You said that "from B's frame, the bullet has to be fired by A when B is 120". That's not correct. The correct statement is "from B's frame, the bullet has to be fired 120 years after A was born". Since B is 30 at the time of the shooting, A must have been born 90 years before B. Your bigger mistake: You assumed that two different problems (namely this one and the one you asked in your last post) have to have exactly the same answer. In the other problem, A and B were in the same place at the same time when both were born. In this problem they weren't. Case II: When B takes off, A and B are both aged 0 in B's frame.
1) When B reaches earth, he is aged 30 (given in the problem). 2) According to B, A ages at half-speed. Therefore A is 15, not 60 as you supposed. The 15-year-old A shoots the 30 year old B. Game over for B.
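The Case I bookkeeping can be checked with a few lines of arithmetic; the time-dilation factor of 2 is taken from the setup (A ages 60 while B ages 30):

```python
gamma = 60 / 30   # mutual time-dilation factor from the setup

# B's-frame timeline of Case I:
shooting_after_A_birth = 60 * gamma     # A's 60 proper years take 120 B-frame years
A_born_before_B = shooting_after_A_birth - 30   # B is 30 at the shooting -> 90 years
A_age_at_B_birth = A_born_before_B / gamma      # A ages at half speed -> 45
A_age_at_shooting = A_age_at_B_birth + 30 / gamma   # plus 15 during the 30-year trip
B_age_at_shooting = 30
```

Both observers agree on the ages at the meeting event: A is 60 and B is 30; the "120" is the B-frame interval since A's birth, not B's age.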
{ "domain": "physics.stackexchange", "id": 59255, "tags": "special-relativity, relativity" }
Metallicity of Celestial Objects: Why "Metal = Non-metal"?
Question: The metallicity of an object refers to the amount of chemical elements present in it other than hydrogen and helium. Note: the other elements may or may not be actual metals in the true sense of the definition. But why did astronomers use such a term as metallicity? What is the history behind coining a term that may be (or rather, actually is being) confused with the metal content of a celestial object? Is there a scientific objective or explanation for this? I do not believe it is just an arbitrary choice; perhaps there is a motive behind it. If there is, what is it? Answer: To first order, the relative abundances of the heavier elements to iron (for instance) are constant, so the metal content of a star is shorthand for the content of any element heavier than He. We now know this is not true in many circumstances, and elements can be grouped by synthesis process: for example, we can talk about "alpha elements" (O, Mg, Si etc., produced by alpha-particle capture) or s-process elements (Ba, Sr etc., produced by the s-process). We know that the ratio of, say, O/Fe gets larger in more "metal-poor" stars, but Ba/Fe gets smaller. So talking about a single "metallicity" parameter only gets you so far, and the truth is more complex (and interesting). The next point is why they are referred to as "metals" rather than another term like "heavies". I would guess this is down to a bit of history and the fact that initial abundance analyses of stars were (and still are, on the whole) done in the visible part of the spectrum (e.g. in the early part of the 19th century by William Hyde Wollaston and Fraunhofer). The most abundant elements heavier than He are in fact not metals; they are oxygen, carbon, nitrogen and neon. However, the signatures of these elements are not at all obvious in the visible spectra of (most) stars.
By contrast, the signatures (absorption lines) of elements like Fe, Na, Mg, Ni etc., which decidedly are metals, are often very prominent. Thus there is a reason, and some history, behind the name "metals": apart from hydrogen and helium, the metallic elements have the most prominent features in the optical spectra of stars, whereas in most stars the signatures of the more abundant non-metals are hard to see.
{ "domain": "astronomy.stackexchange", "id": 4228, "tags": "star, galaxy, history, definition, stellar-structure" }
What are the current best upper bounds of #P?
Question: #P is the class of counting problems associated with the decision problems in NP. In other words, a solution to a #P problem returns the number of solutions to a particular problem in NP. I'm wondering if there have been any studies of the worst-case behavior of the current best algorithms for problems in #P. My focus in the past has been on 3-SAT, so I am particularly interested in the time it takes to count 3-SAT solutions in the worst case. But I also ask in general: what are the current best upper bounds for any (#P-complete) problem in #P? Answer: One such algorithm for $\#3\operatorname{SAT}$ is due to Kutzkov.
{ "domain": "cstheory.stackexchange", "id": 2136, "tags": "ds.algorithms, sat, counting-complexity, upper-bounds, worst-case" }
Solving the Kepler problem
Question: I'm trying to solve the Kepler problem using the Lagrangian $$L = \frac{1}{2} m (\dot{r}^2 + r^2 \dot{\phi}^2) - U(r),$$ which, after quite a bit of fiddling (noting that the angular momentum $M = mr^2 \dot{\phi}$ is a constant of motion and also that $M = 2m\dot{f}$, where $\dot{f}$ is the sectorial velocity), leads to $$\phi = \int \frac{M\, dr/r^2}{\sqrt{2m(E - U(r)) - M^2 / r^2}}.$$ Now for the Kepler problem $U(r) \propto 1 / r$; for the attractive case $U(r) = -\alpha / r$ with $\alpha > 0$. Plugging that in, we get $$\phi = \int \frac{M}{r^2\sqrt{2m(E + \alpha / r) - M^2 /r^2}}\,dr.$$ However, plugging that integral into WolframAlpha gives an imaginary solution. What am I doing wrong? Answer: You can use $$ \frac{d}{dr} \cos^{-1} \left(f\left(r\right)\right) = -\left[1-\left(f\left(r\right)\right)^2\right]^{-1/2} \frac{df}{dr} $$ with $$ f\left(r\right) = \frac{M/r - m \alpha/M}{\sqrt{2 m E + m^2 \alpha^2 / M^2}} $$ to show that $$ \int dr \frac{M / r^2}{\sqrt{2 m E + 2 m \alpha / r - M^2 / r^2}} = \cos^{-1} \left(\frac{M/r - m \alpha/M}{\sqrt{2 m E + m^2 \alpha^2 / M^2}}\right) + C $$
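The antiderivative claimed in the answer can be spot-checked numerically: differentiating the arccos expression (by central finite difference, with all constants set to 1 purely for illustration) should reproduce the integrand.

```python
import math

def antiderivative(r, m=1.0, E=1.0, M=1.0, alpha=1.0):
    # The cos^{-1} expression from the answer.
    D = math.sqrt(2*m*E + m**2 * alpha**2 / M**2)
    return math.acos((M/r - m*alpha/M) / D)

def integrand(r, m=1.0, E=1.0, M=1.0, alpha=1.0):
    # M/r^2 / sqrt(2mE + 2m*alpha/r - M^2/r^2)
    return (M / r**2) / math.sqrt(2*m*E + 2*m*alpha/r - M**2/r**2)

# Central finite-difference derivative of the antiderivative at r = 1.
h = 1e-6
numeric_derivative = (antiderivative(1.0 + h) - antiderivative(1.0 - h)) / (2*h)
```

Agreement between `numeric_derivative` and `integrand(1.0)` confirms the arccos form (up to the constant of integration).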
{ "domain": "physics.stackexchange", "id": 19103, "tags": "homework-and-exercises, newtonian-mechanics, lagrangian-formalism, orbital-motion, celestial-mechanics" }
tissue-specific expression of homolog genes
Question: Two or more genes are homologous if they have similar sequences. Homologous sequences between species are called orthologs (caused by speciation events), and homologous sequences within a species are called paralogs (caused by duplication events). I would like to know what the expression pattern of paralogs or orthologs is in a specific tissue. Do paralogous (or orthologous) genes have the same expression in a tissue, or do they express differently? I hope my question is clear, but if it has mistakes I appreciate your corrections. Answer: The evolution of new genes is frequently associated with gene duplication. The paralogous genes are then free to evolve on their own into new functions over time. Sometimes (usually) the paralog will end up with a frameshift or stop codon that inactivates it. But your question is lacking the element of time. At the instant of gene duplication, each copy is identical. However, one copy may lack the control elements (usually upstream) and not be transcribed. Or one copy may eventually evolve a new, tissue-specific function. But whether a copy is inactivated, evolves a new function, or remains active is not the same for every instance. It depends on how each copy changes under the influences of mutation and selection.
{ "domain": "biology.stackexchange", "id": 8022, "tags": "genomics, homology" }
How to tell theoretically whether an electron behaves as wave or particle
Question: I have seen many questions on SE about the dual nature of electrons, behaving in certain circumstances as particles and in others as waves. There is one thing I couldn't get a clear answer on. In the double-slit experiment, we all agree that electrons behave as waves. The same is true in atoms, where electron levels are described by the Schrödinger equation. However, in a field like plasma physics (my field of work), and perhaps beam physics, electrons are treated classically as particles, with Newton's equations describing their motion. The models built on the particle treatment of electrons show excellent agreement with experimental results. From experiments and testing, we know that electrons behave like waves (in the double-slit experiment) or as particles (gas-discharge models). My question is: is experiment the only way to decide which model (wave/particle) describes electrons better in particular circumstances? Isn't there any theoretical framework that decides whether electrons will behave as particles or waves in a particular circumstance? For the record, in plasma physics the strongest type of theoretical model is the Particle-In-Cell (PIC) model. In these models Newton's equation of motion is solved for a huge number of particles, including electrons, and the macroscopic properties are then determined by averaging. Although this method treats electrons classically, it is very successful in explaining what happens in experiments. Answer: When we treat quantum mechanical objects as if they are particles, this is often referred to as a classical treatment.
Intuitively, this is going to be valid based on a simple argument related to the thermal de Broglie wavelength: \begin{equation} \lambda_{dB} = \sqrt{\dfrac{2 \pi \hbar^2}{m k_B T}}.\end{equation} Most often, when this wavelength is on the order of the interatomic (or inter-'object') spacing, quantum mechanical effects become quite relevant and one must consider the wave-like nature of matter. For wavelengths much smaller than the distance between atoms (or molecules, elementary particles, etc.), quantum effects will be negligible and the classical treatment works just fine. You can notice that $\lambda_{dB}$ is a function of both the mass of the object and the temperature, so making either of these larger while the other is constant will decrease the de Broglie wavelength. You work in plasma physics, so this wavelength will most often be very small due to the high temperatures, even for very 'light' entities such as electrons. As such, you need not consider the wave-like properties of the electron to make accurate calculations of certain physical properties of the system. Electrons are negatively charged, and because of the Coulomb repulsion I would suspect that, no matter how much energy they have, they will not get to a separation that is on the order of this wavelength. I study low-temperature condensed matter most often, though, so I may be wrong about this spacing. Hope this helps give some intuitive picture of when the classical treatment is acceptable without having to refer to empirical evidence.
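A quick sketch of this criterion for plasma-like conditions; the temperature and density below are illustrative values chosen by me, not taken from the question:

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J*s
k_B = 1.3807e-23    # Boltzmann constant, J/K
m_e = 9.109e-31     # electron mass, kg

def thermal_de_broglie(m, T):
    """lambda_dB = sqrt(2*pi*hbar^2 / (m*k_B*T)), in metres."""
    return math.sqrt(2 * math.pi * hbar**2 / (m * k_B * T))

lam = thermal_de_broglie(m_e, 11600.0)   # electron at ~1 eV (about 11600 K)
spacing = (1e18) ** (-1.0 / 3.0)         # interparticle spacing at n = 1e18 m^-3
classical_ok = lam < 0.01 * spacing      # wavelength far below the spacing
```

Here the wavelength comes out under a nanometre while the spacing is around a micron, so the classical PIC treatment is comfortably justified; at cryogenic temperatures or solid densities the comparison can flip.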
{ "domain": "physics.stackexchange", "id": 9670, "tags": "quantum-mechanics, particle-physics, wave-particle-duality, plasma-physics" }
Get robot position wrt world frame in ROS2-Gazebo Garden
Question: I'm trying to understand how I can get the global position of a model in a Gazebo simulation through ROS2. I've seen that Gazebo publishes data to the /world/<world_name>/pose/info topic via messages of type gz.msgs.Pose_V. Following this, I created a bridge to ROS2 via ros_gz_bridge: ros2 run ros_gz_bridge parameter_bridge /world/world_demo/dynamic_pose/info@tf2_msgs/msg/TFMessage[gz.msgs.Pose_V (I got the corresponding ROS2 message from the ros_gz_bridge README.) Unfortunately, the "translation" is incomplete, and in the messages received in ROS2 there's no indication of the entity each portion refers to. Am I missing something here? Is there a way to accomplish this at the moment? I'm using: ROS2 Humble, Gazebo Garden. Answer: OK, in the end that was the right way, but the wrong tool. To publish global positions, the right plugin to use is the OdometryPublisher (docs). It works more or less the same way as the PosePublisher, apart from the published messages of course. ORIGINAL ANSWER: Found the solution after struggling some more... Scratch that; the PosePublisher publishes the transform of its child entities, as per the docs. I don't know about the seemingly incomplete conversion of the /world/<world_name>/pose/info topic, but I've found that to get a model position in the world you need to add the PosePublisher plugin to the model:

<?xml version="1.0"?>
<sdf version="1.6">
  <model name="quadcopter">
    <pose>0 0 0 0 0 0</pose>
    <link name="base">
      <pose frame="">0 0 0 0 0 0</pose>
      <!-- ......... -->
    </link>
    <!-- ......... -->
    <plugin filename="gz-sim-pose-publisher-system"
            name="gz::sim::systems::PosePublisher">
      <publish_link_pose>true</publish_link_pose>
      <publish_collision_pose>false</publish_collision_pose>
      <publish_visual_pose>false</publish_visual_pose>
      <publish_nested_model_pose>false</publish_nested_model_pose>
    </plugin>
  </model>
</sdf>

This will enable a /model/<model_name>/pose topic that you can then bridge to ROS2 via ros_gz_bridge.
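Once an odometry or pose topic exists on the Gazebo side, the bridge invocation is analogous to the one in the question. The model name below is the example "quadcopter" from the SDF above, and the exact topic name and message pairing should be double-checked against the ros_gz_bridge README for your versions:

```shell
# Bridge the model's odometry (gz.msgs.Odometry -> nav_msgs/msg/Odometry)
ros2 run ros_gz_bridge parameter_bridge \
  "/model/quadcopter/odometry@nav_msgs/msg/Odometry[gz.msgs.Odometry"
```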
{ "domain": "robotics.stackexchange", "id": 38983, "tags": "ros-humble, topic, gz-sim" }
Project 'davis_ros_driver' tried to find library 'pthread'. The library is neither a target nor built/installed properly
Question: When I run catkin build davis_ros_driver from https://github.com/uzh-rpg/rpg_dvs_ros, I get an error: CMake Error at /opt/ros/melodic/share/roscpp/cmake/roscppConfig.cmake:173 (message): Project 'davis_ros_driver' tried to find library 'pthread'. The library is neither a target nor built/installed properly. Did you compile project 'roscpp'? Did you find_package() it before the subdirectory containing its code is included? This problem has been bothering me for a week. I tried many solutions, such as changing the CMake version to 3.10.0 and 3.12.4, but it still didn't work, so I feel quite desperate. I hope someone can help me; any suggestion would be appreciated. Thanks. ROS Melodic, Ubuntu 18.04. Originally posted by luna wu on ROS Answers with karma: 46 on 2021-12-08 Post score: 1 Original comments Comment by osilva on 2021-12-08: Hi @luna wu, have you looked at this: https://answers.ros.org/question/297753/cannot-build-a-package/ Comment by luna wu on 2021-12-08: Yes, I have tried the suggestion there ("Using compile options via CMakeLists.txt worked for me: add_compile_options(-pthread)"), but it didn't work for me. Comment by osilva on 2021-12-09: I tried to replicate your error a couple of times but was unable to reproduce it; it compiles with no issues. Perhaps start with a fresh installation of ROS Melodic and Ubuntu 18.04, as the owners of this library are not very responsive. Comment by luna wu on 2021-12-09: Thank you very much for your attention to this issue. Because of too many problems, I reinstalled Ubuntu 18.04, and the recompilation was successful without any errors. Comment by osilva on 2021-12-09: Glad it worked. Please document it as an answer so others may benefit in the future. Answer: Since I did not solve this problem, I reinstalled Ubuntu 18.04, and the recompilation was successful without any errors...
I think, sometimes, reinstalling the system is faster than solving the problem. Originally posted by luna wu with karma: 46 on 2021-12-10 This answer was ACCEPTED on the original site Post score: 0
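For anyone hitting the same error who would rather not reinstall: the usual CMake-side fix for a missing 'pthread' is to resolve the threading library explicitly and link the imported target. This is a hedged sketch for the package's CMakeLists.txt, not a fix confirmed against rpg_dvs_ros:

```cmake
# Resolve the system threading library (provides the pthread dependency).
find_package(Threads REQUIRED)

# Link it into the driver target via CMake's imported target.
target_link_libraries(davis_ros_driver
  ${catkin_LIBRARIES}
  Threads::Threads
)
```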
{ "domain": "robotics.stackexchange", "id": 37229, "tags": "ros, ros-melodic" }
Which part of ROS navigation is responsible for path correction?
Question: I've been digging through ROS' navigation stack, looking for the path-correction code, and I couldn't find it. I am using a Kobuki iClebo TurtleBot platform, with a Kinect instead of a laser scanner. I am pretty new to robotics and ROS, so I might need to understand first: - There is the global planner (let's say A-star/Dijkstra in my case), which is responsible for the global path. - There is the local planner (DWA/base_local_planner) which, together with the obstacle updates to the map from the /scan topic, is responsible for obstacle avoidance and hopefully path correction (?). - There is move_base, which I think acts as a "manager" for both of the above. Now, I couldn't find anywhere in those three the part that is responsible for path correction: the procedure of understanding where you are (localization), understanding where you were supposed to be (given by one of the planners), and giving control commands in order to close the gap between the two. Originally posted by meirela on ROS Answers with karma: 3 on 2016-08-11 Post score: 0 Answer: There isn't one module that's responsible for "path correction"; it's actually an interaction between two different parts of the system. AMCL uses lidar and odometry information to estimate the position of the robot within the world. It corrects for drift in the odometry and publishes a TF transform between the global map frame and the odometry frame. The local planner receives the estimated position of the robot and tries to create a collision-free trajectory and a command that moves the robot toward and along the global plan. The result is that this corrects for any drift of the robot or motors. Originally posted by ahendrix with karma: 47576 on 2016-08-11 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by aarontan on 2018-07-04: From your answer, you are suggesting that the local planner is capable of avoiding obstacles without replanning of the global plan?
Comment by ahendrix on 2018-07-05: The move_base framework gives the local planners some leeway to choose a different path or abort, but each local planner will implement that slightly differently. That isn't really the topic of this question, so if you want more details I suggest you ask a new question. Comment by aarontan on 2018-07-05: @ahendrix, ok will do, please check my latest question and let me know if you know anything!
{ "domain": "robotics.stackexchange", "id": 25503, "tags": "ros, localization, navigation, path-planning, motion-planners" }
What are these underwater structures near 6°N 85°55'W
Question: While researching the sea floor in Google Earth near Malpelo and Cocos Islands (in the Colombian and Costa Rican Pacific Ocean), I came across these strangely geometrical structures on the sea floor, located around 6°N, 85°55'W. I'm fascinated by their look and geometric perfection, and have waited to ask this question for about 3 years, until today, when I realized there's a Stack Exchange site dedicated to earth sciences (I'm not an earth scientist, just a maps enthusiast). At first I thought they could be due to the path the satellite was traveling at the moment the image was captured, but then I realized there are depth differences in excess of 300 m. I find the parallelism of the lines that form the "beams" surprisingly perfect, and the width of the "elements" very consistent (about 7 to 8 km, according to Google Earth's ruler). Near 6°22'N 85°45'W there's a circular "dent" of about 4 $km^2$ that is almost perfectly round. I found something similar about 670 km NE from there, around 11°10'N 82°41'W (160 km SW of the San Andres Islands, in the Colombian Caribbean). While the structure there is less complex (there's only one "beam" and a concentric circular structure), the width of the "beam" (6 km) seems similar to the one near Cocos (the depth differences range between 300 and 500 m from the structures to the normal sea floor). What are these things? Are they glitches in the satellite images that coincidentally look like beams? If so, why are the depth differences so significant? Are they rocks or some other kind of geological formation? How did they get that shape, and where else can I find more like them? Answer: As the other answer suggests, these are sonar surveys of the ocean depths. But the answer is a bit more complicated. The vast majority of the ocean floor has never been mapped. We really only know about the water depth because the water above the sea floor is lighter than if it were rock.
So a deep ocean produces less gravity than a shallow one. And, miraculously, you can fly satellites over the oceans that pick up on how hard Earth pulls on the satellite at any given point and convert that data into an ocean depth. This is what Google Maps and other mapping services show by default. The problem is that this information is not very accurate: not just the point immediately under the satellite pulls on it, but also points elsewhere. So a very steep undersea mountain will not be distinguishable from a broader but less tall one. This is why, in general, these images of ocean depths are not shown in high resolution: we just don't know the depth of the ocean on length scales of a few kilometers. But then along comes the occasional ocean survey. (Think: research ships, the ships that lay undersea cables, the ships that search for shipwrecks, or the occasional plane that fell from the sky.) They tow a sonar array behind a ship, or send a side-scan-sonar-equipped submersible, to actually map the ocean floor on length scales of a few dozen meters. For map services, this presents a conundrum: they'd like to show the more accurate sonar data where it is available, but they need to overlay it on the low-resolution maps that come from satellites. This is exactly what you see here: the background is the low-resolution satellite map, and overlaid on it is the high-resolution sonar data. At the edges of the sonar maps, these two maps won't match, and so you will see the sharp edges. You can see the different resolutions if you zoom in and out of the view you showed. (Given the long straight tracks of these sonar maps, one can guess that in your view these were created by surveys for laying undersea cables, for example to the Galapagos Islands.)
{ "domain": "earthscience.stackexchange", "id": 1806, "tags": "satellites, sea-floor, ground-truth, satellite-oddities" }
Magnitude of charge on each plate of a parallel capacitor? Why should I multiply by two?
Question: Here's the full question: A parallel-plate capacitor has a plate area of $0.2\ \mathrm m^2$ and a plate separation of $0.1\ \mathrm{mm}$. To obtain an electric field of $2.0\times 10^6\ \mathrm{V/m}$ between the plates, the magnitude of the charge on each plate should be: Using the equations $C = \epsilon_0 A/d = q/V$ and $V=Ed$, I rearrange to: $q = \epsilon_0AV/d = \epsilon_0AE$ I get $3.5\times 10^{-6}\ \mathrm C$, but the answer is twice that. Why is this the case? If the charge is $Q$, each plate will have either $Q$ or $-Q$, so that's not it. I think it has to do with some assumption within the equation relating the electric field to the voltage and plate separation ($V=Ed$). I notice this comes from the equations $V=(kq)/d$ and $E=(kq)/d^2$, which (I think) are for point charges but can also apply to parallel plates. But I'm not sure how this tells me why my answer is half the given answer. Answer: It looks to me like the answer you have been given is incorrect. The way I see it, you have a $C$ of approximately $18\ \mathrm{nF}$. The electric field is the voltage divided by the gap, so the voltage across the capacitor is $200\ \mathrm V$. Now you can use $Q=CV$ to get $Q=3.6\ \mu\mathrm C$ (more precisely $3.5$; I just did an approximate calculation to check your working).
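A quick numerical check of the working above (a sketch; the numbers are the ones given in the question):

```python
eps0 = 8.854e-12   # F/m, vacuum permittivity
A = 0.2            # m^2, plate area
d = 0.1e-3         # m, plate separation
E = 2.0e6          # V/m, required field

C = eps0 * A / d   # ~18 nF, as in the answer
V = E * d          # 200 V across the plates
Q = C * V          # equals eps0 * A * E; the separation d cancels
print(C, V, Q)     # Q comes out ~3.5e-6 C, matching the questioner's value
```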
{ "domain": "physics.stackexchange", "id": 30698, "tags": "homework-and-exercises, electrostatics, charge, textbook-erratum" }
Frame-dragging of a rotating doughnut
Question: Gravitomagnetic arguments also predict that a flexible or fluid toroidal mass undergoing minor axis rotational acceleration (accelerating "smoke ring" rotation) will tend to pull matter through the throat (a case of rotational frame dragging, acting through the throat). In theory, this configuration might be used for accelerating objects (through the throat) without such objects experiencing any g-forces. —Wikipedia Assume I have a 30g doughnut (made of a flexible material that cannot be broken or torn apart). The major radius of my doughnut is 5cm, and the minor radius is 3cm. What should be its "minor axis rotational acceleration" in order to make the gravitomagnetic acceleration in the center exactly 10 m/s²? As I don't know how to do general relativity, I tried to use the simpler GEM equations but the maths is still too advanced for me. For example I don't know how to compute the mass flux. Answer: Forward's donut: The formula in Forward's classic paper is $$G=-\frac{d}{dt}\left(\frac{\eta N T r^2}{4\pi R^2}\right )$$ where $NT$ is the total mass current ($N$ windings of pipes carrying a heavy liquid in a spiral around the torus - here we will use the donut mass) and $\eta=3.73\cdot 10^{-26}$ m/kg. So plugging in your numbers, $r=0.03$ m, $R=0.05$ m and we assume $v(t)=0.03 a t$ kg m/s for some acceleration $a$, I get $3.2057\cdot 10^{-29}a$, so to get "antigravity" we need $a=3.1194\cdot10^{29}$ m/s$^2$. That donut better be pretty indestructible. (The acceleration is actually in principle physically possible, just a few orders of magnitude above electrons in wakefield accelerators, way below the Planck acceleration). Tajmar's donut: One somewhat similar calculation can be found in Tajmar, M. (2010). Homopolar artificial gravity generator based on frame-dragging. Acta Astronautica, 66(9-10), 1297-1301.
For a pair of rotating disks he states the field at the center as $B_g=(4G/c^2)mr\omega$ where $m$ is the disk mass, $r$ their radius and $\omega$ their angular velocity. The prefactor is $4G/c^2\approx 3\cdot 10^{-27}$. Note that the gravitomagnetic field acts on a particle with mass only if it is moving, just as a magnetic field will only affect moving charges. The force is at right angles to the field and velocity, and proportional to the speed. This is why there has to be an accelerating flow around the torus in Forward's paper: had it been constant there would have been a constant gravitomagnetic field, and there would not have been any acceleration of particles inside the torus. Tajmar suggests having a cabin moving at constant velocity along a hallway with the spinning disks to provide a velocity. However, the final model in the paper has a ring-shaped cabin surrounded by two rings of spinning disks that themselves spin around the centre. This way one can enjoy artificial gravity without having to move. While this model is a bit donut-shaped it doesn't correspond to a plausible motion of donut dough.
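The arithmetic in the Forward section can be reproduced numerically (a sketch using only the numbers quoted above, with the mass current taken as donut mass times ring velocity, as in the answer):

```python
import math

eta = 3.73e-26   # m/kg, the coupling constant quoted above
m = 0.030        # kg, donut mass
r = 0.03         # m, minor radius
R = 0.05         # m, major radius

# With v(t) = a*t, |G| = d/dt [eta*m*v*r^2 / (4*pi*R^2)] = k*a, where:
k = eta * m * r**2 / (4 * math.pi * R**2)
print(k)              # ~3.21e-29, the prefactor quoted in the answer

a = 10.0 / k          # ring acceleration needed for |G| = 10 m/s^2
print(a)              # ~3.12e29 m/s^2
```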
{ "domain": "physics.stackexchange", "id": 53572, "tags": "homework-and-exercises, gravity, frame-dragging" }
Relativity equations
Question: In the equations for time dilation and length contraction, what is a good way of choosing which are the relative time and length and which are the proper time and length, so we can get good measurements of the variables? Should the proper time and length be the static observer's $t$ and $l$, and the moving observer's $t'$ and $l'$? Answer: The question is kind of the whole point behind the principle of relativity: there's no real definition of "static" as opposed to "moving". Instead, you simply pick one observer and — if you want — call that one static, and any other one moving. The key requirement is that you should get the same physical results no matter what you pick. It's true that you'll get different numbers for certain things. For example, the velocity — if you swap the special "static" observer, the velocity of the "moving" observer will flip sign. But you've also changed the observer whose motion you're measuring, so why shouldn't it change? On the other hand, the principle of relativity tells us that we must get the same answer about certain things like whether or not two objects collide. As for measuring time dilation and length contraction, again, any choice you make for the static frame is a valid choice — but when you give the numbers, you have to specify which frame they're being measured in.
{ "domain": "physics.stackexchange", "id": 47347, "tags": "spacetime, reference-frames, time, relativity, inertial-frames" }
Why does a metal boat float?
Question: I was in class learning about density and stuff. Our teacher told us that things that are denser than water sink in water, and less dense things float. Then, our teacher asked us why metal boats float in water, even though they are denser than water. Is it because of the surface tension of water? Some other thing? Any help would be appreciated. Answer: This is because the whole boat, along with the air in the boat, is lighter than the water it displaces. For example, if a small boat displaces 1 cubic meter of water, then that cubic meter of water has to weigh more than the boat for the boat to float. This is explained in this post by What If here. For the same reason that bowling balls float (because salt water the size of a bowling ball weighs more), boats float (because the overall weight of a boat is less than the overall weight of salt water the size of a boat).
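The same point can be made with a back-of-the-envelope Archimedes check (illustrative numbers, not from the question): a hollow steel box floats because its total mass is less than the mass of the water it can displace.

```python
rho_water = 1000.0   # kg/m^3, fresh water
rho_steel = 7850.0   # kg/m^3, typical steel

# A 1 m cube "boat" with 5 mm steel walls, full of air:
side, wall = 1.0, 0.005
steel_volume = side**3 - (side - 2 * wall)**3   # m^3 of actual steel
boat_mass = rho_steel * steel_volume            # ~233 kg

displaced_mass = rho_water * side**3            # 1000 kg if fully submerged
print(boat_mass, displaced_mass)
# boat_mass < displaced_mass, so the box floats even though solid steel sinks
```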
{ "domain": "physics.stackexchange", "id": 78460, "tags": "water, density, buoyancy, fluid-statics" }
Raspberry + ROS + Servo
Question: Hi guys, could anyone tell me some tutorial that teaches how to control servos via ROS installed on a Raspberry Pi 3 (Ubuntu Mate)? Originally posted by zpguedes on ROS Answers with karma: 1 on 2016-11-21 Post score: 0 Answer: I've been working through this scenario. (See if this description is what you are attempting: http://elder.ninja/blog/p/6106) Here are a few things which may make your life a bit easier: Using Ubuntu server for RPi3 (rather than raspbian jessie) does make your query easier as you can install most packages using "apt-get install" and not need to "git" the source. For PWM for positional control of standard servos as well as speed and direction control of continuous servos, you might check out the ros_i2cpwmboard package: https://gitlab.com/bradanlane/ros-i2cpwmboard/tree/master Short description of end-to-end demonstration: http://elder.ninja/blog/p/6804 and accompanying video: https://vimeo.com/193201509 The documentation (http://bradanlane.gitlab.io/ros-i2cpwmboard/) for the ros_i2cpwmboard package gives simple test examples of rostopic() and rosservice(). The package supports controlling servos using direct PWM values, proportional values (-1.0 .. 1.0) and using geometry_msgs::Twist - the latter makes it easier to integrate with the turtlesim tutorials. Originally posted by suforeman with karma: 300 on 2016-12-10 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 26297, "tags": "ros, servo, rasbperrypi" }
Electronic oxygen concentration sensor
Question: I bought an electronic meter which measures the concentration of oxygen in air. But I think the sensor has failed because it is too old, as it had a crusty substance coming from it. But I can no longer find spare sensors for it. How might this sensor have worked? I presume the property of some substance was measured; this property changes with the concentration of oxygen. Perhaps one of these: Its resistance changed as air was passed through some sort of chemical, and the resistance was measured to determine the oxygen concentration. Or perhaps its opacity to light was measured with an LED and an appropriate optic sensor. Or the volume of the substance changed, or some other property, which ultimately varies the capacitance. What substance might this have been? I want to make my own device - I don't intend to fix this one, so I don't need to identify the substance used in this particular sensor. Answer: I cannot say as to your specific sensor, but most oxygen sensors are electrochemical sensors built around concentration cells. Take a look at the following diagram of a simple zirconia-based oxygen probe from Wikipedia. This type of sensor is very much like the one in automobiles. In the middle of the sensor is a gas permeable zirconia $(\ce{ZrO2})$ membrane between two gas permeable electrodes. The membrane and electrodes form the electrochemical cell. In many cases, an electrochemical cell will use a reference electrode with a standard reaction going on for comparison. In this sensor, the standard reaction is the same one that is going on at the other electrode. The difference is that at the standard electrode, the partial pressure ("concentration") of oxygen is known and at the other electrode it is unknown. But wait, how does that work? Let's take a generic electrochemical process, where $\ce{M}$ in the following equation represents something generic that we can oxidize: $$\ce{M(s) + O2(g) -> MO2(s)}$$ This cell has a standard potential $E^\circ$.
The actual measured potential, however, will depend on the partial pressure of oxygen $P_{\ce{O2}}$ because the standard potential is defined at $P_{\ce{O2}} = 1\ \text{atm}$. The relationship that gives us the actual electrode potential at some other oxygen level is the Nernst equation: $$E=E^\circ - \frac{RT}{nF}\ln{Q}$$ Where $R$ is the ideal gas constant, $T$ is the temperature, $n$ is the number of electrons transferred (4 in my example), $F$ is the Faraday constant, and $Q$ is the reaction quotient, in this case $Q=\frac{1}{P_{\ce{O2}}}$. If $P_{\ce{O2}}$ is large, then $E$ is large, and vice versa. Now, we have two electrodes, our unknown and our reference, each with their own potential: $$E_{ref}=E^\circ - \frac{RT}{nF}\ln{\frac{1}{P_{\ce{O2},ref}}}$$ $$E_{unk}=E^\circ - \frac{RT}{nF}\ln{\frac{1}{P_{\ce{O2},unk}}}$$ The voltmeter in the device measures the difference between these two potentials and does some maths to calculate back $P_{\ce{O2},unk}$ and then convert it to concentration or percent. Note that it does not even matter what $\ce{M}$ and $\ce{MO2}$ were, since they are not in the equation and even $E^\circ$ cancels out. $$E_{unk}-E_{ref} = \frac{RT}{nF}\ln{\frac{P_{\ce{O2},unk}}{P_{\ce{O2},ref}}}$$ What substance might this have been? If your sensor was this kind of sensor, the crusty mess is probably zirconia.
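As a rough illustration of the Nernst relationship above (a sketch; the sign of the measured difference depends on which electrode you call positive and on how $Q$ is written, so only the magnitude matters here):

```python
import math

R = 8.314     # J/(mol K), ideal gas constant
F = 96485.0   # C/mol, Faraday constant
n = 4         # electrons transferred in the example reaction
T = 298.15    # K, room temperature

def nernst_difference(p_unknown, p_reference=0.21):
    """Potential difference between the two electrodes; only the ratio
    of the two oxygen partial pressures matters, E° cancels out."""
    return (R * T / (n * F)) * math.log(p_unknown / p_reference)

print(nernst_difference(0.21))   # 0.0 V: equal O2 levels give no signal
print(nernst_difference(0.05))   # roughly 9 mV in magnitude at room T
```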
{ "domain": "chemistry.stackexchange", "id": 3276, "tags": "concentration" }
There cannot be an edge from u to v in a DAG if v is before u in a topological order
Question: I am trying to prove that, given a DAG, there exists a valid topological ordering that has v in front of u iff there is no path from u to v. The proof is related to the fact that reverse DFS post-visit order satisfies the condition of a topological order. How do I construct a proof? Answer: Assume that there is a path $u \rightarrow v$ as well. Then we have a cycle. What does this mean in terms of topological order? Can there be such an order? Whether u comes first or v comes first depends on where you start!
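The fact used in the proof, that reversed DFS post-visit order is a valid topological order, can be sketched as follows (a minimal illustration, not part of the original question):

```python
def topological_order(graph):
    """graph: dict mapping node -> list of successors (assumed a DAG).
    Returns nodes in reverse DFS post-visit order."""
    seen, post = set(), []

    def dfs(u):
        seen.add(u)
        for v in graph.get(u, []):
            if v not in seen:
                dfs(v)
        post.append(u)          # post-visit: all of u's descendants are done

    for u in graph:
        if u not in seen:
            dfs(u)
    return post[::-1]           # reversed post-order is a topological order

g = {"u": ["v"], "v": [], "w": ["u"]}
order = topological_order(g)
# every edge goes left-to-right in the order, so u precedes v here:
print(order.index("u") < order.index("v"))   # True
```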
{ "domain": "cs.stackexchange", "id": 12446, "tags": "algorithms, graphs, proof-techniques, dag" }
Why is no phase information lost when a received signal goes through a mixer?
Question: I'm sorry if the question is too philosophical or makes no sense, but I need to be able to explain why it doesn't make sense. When we take the in-phase and quadrature-phase base-band components of an intermediate frequency signal, it's very easy to show that any one of the two orthogonal components alone would be insufficient to distinguish the variations in a single-carrier phase from variations in its amplitude. This tutorial makes it quite clear. The Q component, although real, carries the imaginary component of the phasor $A(t)\exp(j(2\pi f_I t + \phi(t)))$. However, when we pass the RF signal through a mixer, there is no Q component lost. No phase modulation will be lost in the base-band signal obtained later. Why not? A figure to help create context. Answer: The receiver's mixer introduces a phase-shift to the in-phase and quadrature components. In very general terms, assume we have a signal $$a(t)e^{j(2\pi f_0 t + \phi(t))},$$ which is mixed with a local oscillator of frequency $f_{lo}$ and phase $\delta$: $$e^{j(2\pi f_{lo}t + \delta)}.$$ The output signal is \begin{align} a(t)e^{j(2\pi f_0 t + \phi(t))}e^{j(2\pi f_{lo}t + \delta)} &= a(t)e^{j(2\pi (f_0+f_{lo})t + \phi(t) + \delta)} \\ &= a(t)e^{j(2\pi (f_0+f_{lo})t + \phi(t))} e^{j\delta}. \end{align} In other words, even though the frequency has been shifted and a phase-shift equal to $\delta$ has been introduced, $a(t)$ and $\phi(t)$ are still present in the signal and that means that all the information carried by the original signal is still there. Usually the phase-shift is corrected in the digital back-end in the receiver, or you can use a differential encoding (such as DBPSK), in which the phase-shift becomes irrelevant.
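The algebra above can be checked numerically: mix a tone carrying both amplitude and phase modulation with a complex LO, remove the shifted carrier, and verify that $a(t)$ and $\phi(t)$ come back intact. (A sketch; the waveforms and frequencies are made up for illustration.)

```python
import cmath
import math

def recovery_error(t, f0=10e3, f_lo=25e3, delta=0.7):
    """Mix a modulated signal with an LO of phase offset delta, undo the
    shifted carrier, and report how far the recovered amplitude and phase
    are from the original a(t) and phi(t)."""
    a = 1.0 + 0.3 * math.sin(2 * math.pi * 500 * t)    # amplitude modulation
    phi = 0.5 * math.sin(2 * math.pi * 300 * t)        # phase modulation
    sig = a * cmath.exp(1j * (2 * math.pi * f0 * t + phi))
    lo = cmath.exp(1j * (2 * math.pi * f_lo * t + delta))
    mixed = sig * lo
    # remove the new carrier at f0 + f_lo and the constant offset delta:
    rec = mixed * cmath.exp(-1j * (2 * math.pi * (f0 + f_lo) * t + delta))
    return abs(abs(rec) - a), abs(cmath.phase(rec) - phi)

worst = max(max(recovery_error(n / 1e6)) for n in range(1000))
print(worst)   # tiny numerical residue: no information was lost in the mix
```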
{ "domain": "dsp.stackexchange", "id": 5373, "tags": "digital-communications, quadrature" }
Conserved current in scalar QED
Question: Consider a theory of a free massless complex scalar $\phi$ which undergoes global $U(1)$ transformations. The conserved current associated to this symmetry is the usual scalar current $$ J^\mu = i\left(\phi^\dagger \partial^\mu \phi - \partial^\mu\phi^\dagger \phi\right) \tag{1} $$ which is divergence-less on-shell: $$\partial_\mu J^\mu = \phi^\dagger \square \phi -h.c. = 0 \tag{2}$$ since $$\square\phi=0.\tag{3}$$ When we gauge the $U(1)$, we expect that gauge symmetry should not spoil global current conservation. The Lagrangian is now $$ -\frac{1}{4}F_{\mu\nu}F^{\mu\nu} - (D_\mu\phi)^\dagger(D^\mu\phi) = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}- (\partial_\mu\phi)^\dagger(\partial^\mu\phi) - A_\mu J^\mu - A_\mu A^\mu\phi^\dagger\phi \tag{4} $$ where $$D_\mu\phi = \partial_\mu \phi -i A_\mu \phi. \tag{5}$$ The equations of motion for the photon are $$ \partial_\mu F^{\mu\nu} = J^\nu + 2A^\nu\phi^\dagger\phi \tag{6} $$ which seems to imply that the global current is not even conserved, that is $$ \partial_\mu J^\mu = -2\partial_\mu\left[A^\mu \phi^\dagger \phi\right]. \tag{7} $$ Is this result wrong? It is against expectations. The theory is still invariant under global $U(1)$ transformations, so the global current should be conserved. Answer: No, the gauging just changes the partial derivatives $\partial_{\mu}$ in the current (1) to covariant derivatives $D_{\mu}$. The global $U(1)$ Noether current for the gauged theory [i.e. the right-hand side of OP's eq. (6)] is still conserved on-shell, cf. e.g. this Phys.SE post.
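Written out, the prescription in the answer is a one-line check (using $D^\mu\phi = \partial^\mu\phi - iA^\mu\phi$): $$ i\left(\phi^\dagger D^\mu \phi - (D^\mu\phi)^\dagger \phi\right) = i\left(\phi^\dagger \partial^\mu \phi - \partial^\mu\phi^\dagger\, \phi\right) + 2A^\mu \phi^\dagger\phi = J^\mu + 2A^\mu \phi^\dagger\phi, $$ which is exactly the source on the right-hand side of the photon equation of motion; its on-shell conservation then follows from $\partial_\nu\partial_\mu F^{\mu\nu} = 0$ by antisymmetry of $F^{\mu\nu}$.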
{ "domain": "physics.stackexchange", "id": 53407, "tags": "electromagnetism, symmetry, field-theory, quantum-electrodynamics, gauge-invariance" }
Why does this integral calculating the electrostatic energy converge?
Question: I came across the problem of calculating the interaction energy of two point charges separated by some distance $a$ in Griffiths's Introduction to Electrodynamics. Here and everywhere else that I look, I find that the calculation is done by using the expression for interaction energy: $$\epsilon_{0}\int\vec{E}_{1}\cdot\vec{E}_{2}\,d\tau$$ where the integration is done over all space. Why does this give the correct result when $\vec{E}_{1}$ and $\vec{E}_{2}$ are both undefined at the points where the two charges are situated? How does the integral come out finite when the integrand goes to infinity at two places in the domain of integration? Answer: This converges because the singularity at each charge is weak enough to be integrable. To put it more rigorously, let's look at the integral of two vector functions $\vec{v}_1(\vec{r})$ and $\vec{v}_2(\vec{r})$ near the origin, where $\vec{v}_1$ is bounded and $$ \vec{v}_2 = \frac{1}{r^2} \vec{w}(\vec{r}), $$ where $\vec{w}(\vec{r})$ is bounded. Now, the integral we want to perform is $$ I = \iiint_\text{all space} \vec{v}_1 \cdot \vec{v}_2 \, d\tau $$ but we are worried that the integrand diverges as $r \to 0$. We can "isolate" the problematic region by splitting up the integral into a ball of radius $\epsilon$ around the origin and the rest of space: $$ I = \iiint_{|\vec{r}| \geq \epsilon} \vec{v}_1 \cdot \vec{v}_2 \, d\tau + \iiint_{|\vec{r}| \leq \epsilon} \vec{v}_1 \cdot \vec{v}_2 \, d\tau. $$ We'll assume that the first integral is well-defined and look more closely at the second. If we look at this carefully, we can put a bound on it: \begin{align*} \left| \iiint_{|\vec{r}| \leq \epsilon} \vec{v}_1 \cdot \vec{v}_2 \, d\tau \right| &= \left| \iiint_{|\vec{r}| \leq \epsilon} \frac{\vec{v}_1 \cdot \vec{w}}{r^2} \, d\tau \right| \\ &\leq 4 \pi \max_{|\vec{r}| \leq \epsilon} |\vec{v}_1 \cdot \vec{w}| \int_0^{\epsilon} \frac{1}{r^2} r^2 \, dr \\ &= 4 \pi \max_{|\vec{r}| \leq \epsilon} |\vec{v}_1 \cdot \vec{w}| \epsilon.
\end{align*} So we can see that in the limit as $\epsilon \to 0$, the contribution from this portion of the integral becomes negligible. This means that it is well-defined to talk about the "integral over all space" as the limit as $\epsilon \to 0$ of "the integral over all space except for small spheres of radius $\epsilon$ around points where $\vec{v}_2$ diverges."1 In a real sense, this works because the singularity of $\vec{v}_2$ is sufficiently "weak" that we can still get a sensible limit out of this. This happened because the singularity was proportional to $1/r^2$, which was cancelled out by the volume factor of $r^2$ in the integral. If you applied the same logic to the quantity $\vec{v}_2 \cdot \vec{v}_2$ near the origin, the integrand would be proportional to $1/r^4$ near the origin, which is "too strong" of a singularity to be cancelled out by the volume factor. This is why Griffiths has to discuss the charge's "self-energy" separately from the "interaction energy." 1 To really be rigorous about this (and avoid doing a suspect $0/0$ cancellation in the demonstration above), we would instead want to fix some radius $r_0$ and take a sequence of $\epsilon$ values $\epsilon_1, \epsilon_2, \dots$ such that $\lim_{n \to \infty} \epsilon_n = 0$. We can then show, using similar logic, that the sequence of integrals $$ I_n \equiv \iiint_{\epsilon_n \leq |\vec{r}| \leq r_0} \vec{v}_1 \cdot \vec{v}_2 \, d\tau $$ also converges to some finite limit. This limit would then be what we mean by "the integral over the sphere of radius $r_0$ centered at the origin." The logic is much the same, but somewhat more opaque at a first pass, which is why I phrased my main answer the way I did.
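The scaling argument in the answer is easy to see numerically: replacing $|\vec{v}_1\cdot\vec{w}|$ by a constant bound, the integral over the ball of radius $\epsilon$ reduces to a radial integral that shrinks linearly with $\epsilon$. (A sketch; the $4\pi$ and the bound are folded into a constant.)

```python
import math

def ball_integral(eps, n=100_000):
    """Radial quadrature of (1/r^2) * 4*pi*r^2 dr from 0 to eps.
    The volume factor r^2 cancels the 1/r^2 singularity exactly."""
    dr = eps / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr          # midpoint rule; never evaluates r = 0
        total += (1.0 / r**2) * 4 * math.pi * r**2 * dr
    return total

for eps in (1.0, 0.1, 0.01):
    print(eps, ball_integral(eps))  # proportional to eps, namely 4*pi*eps
```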
{ "domain": "physics.stackexchange", "id": 84713, "tags": "electromagnetism, electrostatics, electric-fields, integration" }
What information can we extract from the electronic band structure?
Question: I have some difficulty in understanding the electronic band structure. I want to know, for a 3D crystal, what information I can extract from its complicated band structure, for example the band structure of SiC (I downloaded this figure from google). And what intuition can I build for such a complicated band structure? Answer: The most useful information you can extract from the band structure for an insulator like SiC involves: (1) the value of the bandgap (the energy difference between the highest occupied band--the valence band--and the lowest unoccupied band--the conduction band); (2) the direct or indirect nature of the band gap (direct if the valence band max occurs at the same k point as the conduction band minimum, and indirect otherwise); and (3) the band dispersion or the slope of the bands involved in the band gap--steeper slopes indicate stronger orbital interactions and higher carrier mobility; an ionic crystal, on the other hand, will have very flat bands.
{ "domain": "physics.stackexchange", "id": 39049, "tags": "quantum-mechanics, solid-state-physics" }
Using Reddit API in R
Question: I'm scraping some comments from Reddit using Reddit JSON API and R. Since the data does not have a flat structure, extracting it is a little tricky, but I've found a way. To give you a flavour of what I'm having to do, here is a brief example: x = "http://www.reddit.com/r/funny/comments/2eerfs/fifa_glitch_cosplay/.json" # example url rawdat = readLines(x,warn=F) # reading in the data rawdat = fromJSON(rawdat) # formatting dat_list = repl = rawdat[[2]][[2]][[2]] # this will be used later sq = seq(dat_list)[-1]-1 # number of comments txt = unlist(lapply(sq,function(x)dat_list[[x]][[2]][[14]])) # comments (not replies) # loop time: for(a in sq){ repl = tryCatch(repl[[a]][[2]][[5]][[2]][[2]],error=function(e) NULL) # getting replies all replies to comment a if(length(repl)>0){ # in case there are no replies sq = seq(repl)[-1]-1 # number of replies txt = c(txt,unlist(lapply(sq,function(x)repl[[x]][[2]][[14]]))) # this is what I want # next level down for(b in sq){ repl = tryCatch(repl[[b]][[2]][[5]][[2]][[2]],error=function(e) NULL) # getting all replies to reply b of comment a if(length(repl)>0){ sq = seq(repl)[-1]-1 txt = c(txt,unlist(lapply(sq,function(x)repl[[x]][[2]][[14]]))) } } } } The above example gets all comments, the first level of replies to each of these comments and the second level of replies (i.e. replies to each of the replies), but this could go down much deeper, so I'm trying to figure out an efficient way of handling this. To achieve this manually, what I'm having to do is this: Copy the following code from the last loop: for(b in sq){ repl = tryCatch(repl[[b]][[2]][[5]][[2]][[2]],error=function(e) NULL) if(length(repl)>0){ sq = seq(repl)[-1]-1 txt = c(txt,unlist(lapply(sq,function(x)repl[[x]][[2]][[14]]))) } } Paste that code right after the line that starts with txt = ... and change b in the loop to c. Repeat this procedure approximately 20 times or so, to make sure everything is captured, which as you can imagine creates a huge loop. 
I was hoping that there must be a way to fold this loop somehow and make it more elegant. Answer: Here are my main recommendations: use recursion; use names instead of list indices (for example, node$data$reply$data$children reads much better than node[[2]][[5]][[2]][[2]], and it is also more robust to data changes); and use well-named variables so your code reads easily. Now for the code: url <- "http://www.reddit.com/r/funny/comments/2eerfs/fifa_glitch_cosplay/.json" rawdat <- fromJSON(readLines(url, warn = FALSE)) main.node <- rawdat[[2]]$data$children get.comments <- function(node) { comment <- node$data$body replies <- node$data$replies reply.nodes <- if (is.list(replies)) replies$data$children else NULL return(list(comment, lapply(reply.nodes, get.comments))) } txt <- unlist(lapply(main.node, get.comments)) length(txt) # [1] 199
{ "domain": "codereview.stackexchange", "id": 9358, "tags": "json, r, reddit" }
Is there a type of detergent/surfactant that evaporates at STP?
Question: Are there any type of detergent/surfactant chemicals that would be good for removing dirt and grease from fabric that also would evaporate from the clothes within, say, 24 hours leaving no residue? Googling "volatile surfactant" turned up fluorocarbons and something called Surfynol 61, which is really 3,5-Dimethylhex-1-in-3-ol as possible candidates. I don't know if these would be safe for laundry. Answer: Volatile surfactants There are many molecules that are both surfactants (in water) and volatile, but of the ones I can think of, none can be used safely in home laundry applications. Hydrocarbon derivatives. The examples you found by googling are in this class. 3,5-dimethyl-1-hexyn-3-ol is the principal component of Surfynol 61 (probably named Surfynol 61 because it is a surface-active (surf) alkyne alcohol (ynol) with six carbons (6) and the alkyne moiety at carbon one (1). Other examples would be medium-chain fatty alcohols like hexanol, octanol, cyclohexanol, etc. These molecules reduce surface tension of water and have OK-ish cleaning power (not as good as larger but non-volatile alternatives like SDS), but their volatility is also their downfall: their use in home environments creates a strong fire and explosion hazard when used in automatic heated dryers. Fluorocarbon derivatives. I found the same article you probably did about volatile fluorocarbon surfactants. The most powerful at reducing the surface tension of water seems to be nonafluoro-tert-butyl alcohol, which reduces water surface tension by 78%! However this experiment required saturating the water with the vapor of this compound. And 1 gram costs >$80. And toxicity is a concern, despite this compound apparently being researched as a component of or precursor to artificial blood mixtures in the 1970s. Other halocarbon derivatives. These include solvents like perchloroethylene, chlorobutanol, etc. 
These are all highly toxic, and with some of them, there is a flammability/explosion risk as with hydrocarbon derivatives. Non-surfactant use You might consider just using organic solvents that are not (very) surface active as a way to degrease fabric. However, these may leach dyes out of the fabric or otherwise damage them. Isopropyl alcohol (IPA) is accessible to home users, and is relatively safe. Be sure to do the soaking in a well-ventilated area, and do not use heated dryers to remove the IPA! Instead, rinse out the IPA thoroughly with water. Depending on local regulations, you will have to dispose of all of the dirty IPA and IPA-contaminated rinsing water as solvent waste. The best option Pay to have your clothes dry cleaned!
{ "domain": "chemistry.stackexchange", "id": 4710, "tags": "water, safety, home-experiment, surfactants" }
Out-of-domain generalization
Question: Given $X$ the space of all $N \times N$-pixel images and $I=\{$airplane,clock,axe,...$\}$ a set of labels. An image classification task is generally concerned with learning a map $$F:X \rightarrow I$$ Let's look at the examples of pictures below taken from 2. The rows indicate the domain of the picture, i.e. a real plane or a sketched plane. Out-of-domain generalization deals with the question of whether it is possible to train an NN e.g. on real photos of planes such that it will also be able to classify sketched pictures of a plane. I have a hard time understanding these domains. My question is the following: Can I think of each domain as a disjoint set in $X$? Let's say $X_1$ are all sketched planes and $X_2$ all real planes. Then $X_1 \cap X_2 = \emptyset$. If not, what is the right way to think about domains? Answer: You can interpret the image as follows: Each column represents a class (or category) of an object; for example, what you denote with $X_1$ and $X_2$ are classes, not domains, which can overlap. Instead, each row is a domain, i.e. the broader, more general, and higher-level (semantically) concept that can include multiple categories even partially. In fact, the domain "quickdraw" contains objects that belong to the same classes as the domain "real": you still have plane, clock, etc., but the conceptual representation is different. I cannot claim that domains are mutually exclusive, maybe they are when there is no ambiguity. For example, a human can interpret a sketch as a plane (instead of as a set of lines and curves) thanks to its higher-level cognitive capabilities that allow, at the same time, to see something in its building blocks (e.g. the lines; which is kind of low-level understanding) and as a whole (giving it some semantics and meaning). Take as an example an image classifier, you train it on images from the real domain.
The classifier, in order to show out-of-domain (OOD) generalization, should correctly classify images from the quickdraw domain: if so, it has achieved OOD generalization on that domain. In order to do this, the model should (at least): 1) not over-adapt to the training domain (which translates to almost zero overfitting and plenty of regularization), 2) ignore the noise in the images (robustness), and 3) capture high-level features that describe the image semantically, i.e. it should capture the concept that an image describes, and capture the same concept on OOD images.
{ "domain": "ai.stackexchange", "id": 3757, "tags": "neural-networks, machine-learning, convolutional-neural-networks, image-recognition, generalization" }
Parallel faces planar capacitor: condition $d\ll \sqrt A$
Question: In my college notes from about 30 years ago, for a plane capacitor with parallel faces of separation $d$ and area $A$, I found written $$d\ll \sqrt A$$ What is the mathematical/physics reason? Answer: What is the mathematical/physics reason? The capacitance of a parallel plate capacitor as a function of the physical characteristics of the capacitor is given by the equation $$C=\frac{\epsilon A}{d}$$ where $A$ is the area of the plates ($m^2$), $d$ is the plate separation ($m$) and $\epsilon$ is the electrical permittivity of the dielectric between the plates (Farads/m). The equation assumes that the electric field between the capacitor plates is confined to the volume of the space between the plates. See FIG 1 below which shows a cross section of a parallel plate capacitor with all the electric field lines confined to the volume between the plates. In actuality, however, the electric field lines are not confined to that volume. They extend to the space beyond the edges, as shown in FIG 2. This is referred to as the "edge" or "fringe" effect. The fact that the field lines extend beyond the edges of the plates means the actual capacitance is greater than that predicted by the above equation, since the effective area of the plates is greater than the actual area. Calculating the actual capacitance involves complex mathematical modeling. If you are interested in that, see http://www.drjamesnagel.com/notes/Nagel%20-%20Numerical%20Poisson.pdf. To get around this, one can minimize the influence of the edge effects by reducing the percentage of the total number of field lines that are outside the volume between the plates. This can be accomplished by having the separation $d$ between the plates be much less than the linear dimensions of the plate, as shown in FIG 3. If the plates are circular, then the edge of the plate, shown in FIG 3, would equal its diameter.
Thus a condition for reducing the influence of the edge effects for a circular plate is to have the separation $d$ be much less than the diameter of the plates $D$, or $d\ll D$. Now, getting to the condition $d\ll\sqrt A$ that you found in your notes: it evidently assumes a square-plate capacitor, since the edge shown in FIG 3 would then be the same for each side of the plate. Moreover, for the same plate area there's less fringing with square plates than with rectangular plates. That's because fringing occurs at the perimeter, and the perimeter of a square is always less than that of a rectangle of the same area. Hope this helps.
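Both points above are easy to check numerically: the ideal formula with a geometry satisfying $d\ll\sqrt A$, and the claim that a square minimizes the perimeter (and hence the fringing edge) among rectangles of equal area. (A sketch with illustrative numbers, not from the question.)

```python
import math

eps0 = 8.854e-12      # F/m, permittivity of free space

# Ideal parallel-plate formula; the condition d << sqrt(A) holds here:
A, d = 0.01, 1e-4     # 10 cm x 10 cm square plates, 0.1 mm gap
C = eps0 * A / d
print(C)              # ~8.9e-10 F, with sqrt(A) = 0.1 m >> d = 1e-4 m

# For a fixed area, the square has the smallest perimeter:
def perimeter(a, b):
    return 2 * (a + b)

square = perimeter(math.sqrt(A), math.sqrt(A))      # 0.4 m
rectangle = perimeter(0.2, A / 0.2)                 # 20 cm x 5 cm, same area
print(square, rectangle)                            # 0.4 < 0.5
```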
{ "domain": "physics.stackexchange", "id": 85686, "tags": "electrostatics, capacitance" }
Is there a more satisfactory answer than just saying that the manifold of special relativity is the $\mathbb R^4$/some set of "events"?
Question: I'm an undergraduate who attended a course on differential manifolds, and now I have the task of reformulating the Maxwell equations in terms of differential forms. The most obvious question that arises first is: on what manifold are the forms defined? Unfortunately, it seems to be difficult to get good answers. Some people say Minkowski space is just $\mathbb R^4$, others completely ignore this point. I know that there are a lot of related questions, but the answers are either not satisfactory (especially for a mathematician studying differential geometry) or contain too-advanced maths for an undergraduate. I assume that it is not easy (or even impossible) to give a definite answer, but I'd like to hear some opinions on this. If you know of a reference where this is discussed in more detail, please let me know. Answer: Minkowski spacetime is a four-dimensional affine space whose space of translations is equipped with an indefinite, symmetric, non-degenerate metric with Lorentzian signature. Every affine space has a natural smooth (real analytic) manifold structure, induced by any arbitrarily fixed Cartesian coordinate system. That is the differentiable structure used in special relativity. Cartesian coordinate systems associated to pseudo-orthonormal bases are interpreted as coordinates at rest with respect to inertial reference frames. Actually, (1) one considers only bases whose temporal element is future oriented; (2) two Cartesian systems related by a spatial 3-rotation, a spacetime translation, or a combination of both are supposed to define the same inertial reference frame (inertial reference frames are equivalence classes).
{ "domain": "physics.stackexchange", "id": 73564, "tags": "electromagnetism, special-relativity, spacetime, differential-geometry, maxwell-equations" }
Would the enthalpy change of a chemical reaction in space be considered a change in internal energy?
Question: The formula for the enthalpy change at constant pressure is ΔH = ΔU + PΔV. In space, since there is no air, the atmospheric pressure is zero: P = 0, so PΔV = 0. So will ΔH = ΔU? Answer: Well, in principle yes, for example if you had a blob of liquid floating in empty space and wanted to consider the whole blob as a thermodynamic system. But it may be worth mentioning that this wouldn't be a typical use of thermodynamics in astrophysics. Usually you'd consider your thermodynamic system to be some small part of a larger system, and the larger system would be a thermal and volume reservoir, so T and P would both be nonzero. For example, it's common to consider the thermodynamics of stars, but what you do is consider a small packet of gas to be your system, and the rest of the star to be surroundings that establish a constant T and P for your packet.
{ "domain": "chemistry.stackexchange", "id": 10331, "tags": "thermodynamics" }
Custom ROS message with uint8[] in Python!
Question: I have a PacketMsg.msg as follows in my package:

uint8[] buf

I would like to decode /topic with this sample python script;

import rospy
from ouster_ros.msg import PacketMsg

def cb(msg):
    rospy.loginfo('len_msg: {}, type_msg: {}'.format(len(msg.buf), type(msg.buf)))
    rospy.loginfo('msg: {}'.format(msg.buf))

def main():
    rospy.init_node("my_sub")
    rospy.Subscriber('/topic', PacketMsg, cb)
    rospy.spin()

if __name__ == "__main__":
    main()

This is the output printed in terminal:

[INFO] [1579270050.540915]: len_msg: 49, type_msg: <type 'str'>
[INFO] [1579270050.541343]: msg: ��H?�~�a�B��`����g=��?M��

whereas, $ rostopic echo /topic returns:

buf: [47, 185, 250, 245, 155, 0, 0, 0, 54, 238, 166, ....................... , 23, 154, 0, 0]

Does anybody know how to change such <type 'str'> to a list for further usages? Cheers, Originally posted by Farid on ROS Answers with karma: 165 on 2020-01-17 Post score: 0 Answer: In Python a str can be iterated and indexed in the same way as a list:

msg.buf[i]

Each 8-bit character can be converted into a number using ord(). To get the same result as you see on the console you can do a list comprehension:

[ord(c) for c in msg.buf]

Originally posted by Dirk Thomas with karma: 16276 on 2020-01-18 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by v.leto on 2021-11-26: Hi! I have the same problem, I get a series of letters instead of number but I am coding with c++. How can I solve the same issue? Many thanks
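Outside ROS, the conversion the answer describes can be sketched like this (the byte values are invented to mimic the question's output):

```python
# In Python 2, a uint8[] message field arrives as a str of raw bytes;
# ord() turns each 8-bit character back into the integer rostopic shows.
buf = '\x2f\xb9\xfa'            # stand-in for msg.buf
values = [ord(c) for c in buf]  # -> [47, 185, 250]
```

Under Python 3 the same field typically arrives as a bytes-like object instead, in which case list(buf) yields the integers directly with no ord() needed.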
{ "domain": "robotics.stackexchange", "id": 34285, "tags": "ros, rospy, ros-kinetic, rosmsg" }
Simple tokenizer in C
Question: I implemented a simple tokenizer. Would love to hear your feedback on code style, best practices:

#include <stdio.h>
#include "lexer.h"
#include "mem.h"

static inline int delimiter(char c);

char **lexer_tokenize(const char *s)
{
    char **rv, **tmp;
    int i, j, k, len, cap;

    cap = 4;
    rv = mem_malloc(sizeof(char *) * cap);
    for (i = 0, j = 0, len = 0;; j++) {
        if (delimiter(s[j]) && i < j) {
            if (len == cap - 1) {
                cap <<= 1;
                if ((tmp = mem_realloc(rv, sizeof(char *) * cap))) {
                    mem_free(rv);
                    rv = tmp;
                } else {
                    mem_free(rv);
                    rv = 0;
                    fprintf(stderr, "tokenize: error resizing result array.\n");
                    break;
                }
            }
            rv[len] = mem_malloc(sizeof(char) * (j - i + 1));
            for (k = 0; (rv[len][k++] = s[i++]) != s[j];)
                ;
            rv[len++][k] = 0;
            if (!s[j])
                break;
            i = j + 1;
        }
    }
    rv[len] = 0;
    return rv;
}

static inline int delimiter(char c)
{
    return c == ' ' || c == '\n' || c == 0 || c == EOF;
}

Would also love to hear tips on performance improvements.

Answer:

Apparent uniform formatting - good

Hope OP is using an auto-formatter.

Bug?

mem_free(rv) is suspicious. Is a free needed on successful re-allocation?

if ((tmp = mem_realloc(rv, sizeof(char *) * cap))) {
    mem_free(rv);

Insufficient documentation

"a simple tokenizer" is insufficient to describe lexer_tokenize() functionality. I'd hope to see that in the not-included "lexer.h". Sample usage would have been informative. OP comments about punctuation symbols, yet nothing in the code suggests anything dealing with punctuation symbols.

O(n*n) vs. O(n)

With the nested loops, the code looks O(n*n). I'd expect a tokenizer to be O(n). As is, the code is unclear to me.

EOF is not a character of a string

// return c == ' ' || c == '\n' || c == 0 || c == EOF;
return c == ' ' || c == '\n' || c == 0;

Consider any whitespace

'\t', '\r', '\f', '\v' are white-spaces too.
#include <ctype.h>

static inline int delimiter(char c)
{
    // return c == ' ' || c == '\n' || c == 0;
    return isspace((unsigned char) c) || c == 0;
}

No checking for allocation failure

As mem_malloc() can return a failure indication, NULL, check for that and handle appropriately.

Declare objects when needed

// char **rv;
// ...
// rv = mem_malloc(sizeof(char *) * cap);
char **rv = mem_malloc(sizeof(char *) * cap);

Allocate to the referenced object, not the type

It is easier to code right, review and maintain.

// rv = mem_malloc(sizeof(char *) * cap);
char **rv = mem_malloc(sizeof rv[0] * cap);

// rv[len] = mem_malloc(sizeof(char) * (j - i + 1));
rv[len] = mem_malloc(sizeof rv[len][0] * (j - i + 1));

Validate "lexer.h" independence

I assume this code is in "lexer.c" and the lexer_tokenize() declaration is in "lexer.h". Rather than include #include <stdio.h> and then #include "lexer.h", reverse that order to test that "lexer.h" has no need for prior includes.

int vs. size_t

int is not certainly wide enough. size_t is the type for any sizeof of an object or for indexing. Note that size_t is an unsigned type.

// int i, j, k, len, cap;
size_t i, j, k, len, cap;

Informative names

i, j, k are not informative object names, other than that they are indexes - but how do they differ? What are they for?
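To make the O(n*n) vs. O(n) point concrete, here is a single-pass tokenizer sketched in Python (not a drop-in replacement for the C code, just the shape of the linear algorithm):

```python
def tokenize(s, delimiters=' \t\n\r\f\v'):
    """One sweep over s: remember where a token starts on the first
    non-delimiter, emit it when a delimiter (or end of input) is reached."""
    tokens, start = [], None
    for i, c in enumerate(s):
        if c in delimiters:
            if start is not None:
                tokens.append(s[start:i])
                start = None
        elif start is None:
            start = i
    if start is not None:  # input ended inside a token
        tokens.append(s[start:])
    return tokens
```

Each character is inspected exactly once, so the running time is linear in the input length regardless of how many tokens there are.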
{ "domain": "codereview.stackexchange", "id": 43291, "tags": "c, lexical-analysis" }
Cylinder rolling down slope problem
Question: A uniform cylinder of mass $m$ and radius $r$ is rolling down a slope of inclination $\theta$. The cylinder rolls without slipping. You may take the acceleration due to gravity to be $g$. At what rate does the cylinder accelerate down the slope? The answer is $\frac23 g \sin\theta$. How do you get to this answer? Answer: The gravitational force $\vec{F_g}$ is trying to rotate the cylinder around the contact point A. The lever arm has length $r_c= r\,\sin\theta$. The torque $\tau$ is $$\tau = F_g \, r_c = m \, g\, r \, \sin\theta$$ The torque causes an angular acceleration $\alpha$ of the cylinder. We are interested in the acceleration $\vec{a}$ of the centre point C. The absolute value of the acceleration of point C is $|\vec{a}| = \alpha \, r$. We need to compute the angular acceleration $\alpha$, which is the quotient of the torque $\tau$ and the moment of inertia $I$. $$\alpha = \frac{\tau}{I}$$ The moment of inertia of a cylinder is $I_{cylinder}=\frac{1}{2}\,m\,r^2$. Since the axis of rotation doesn't pass through the center of mass, we use the parallel axis theorem. $$I = I_{cylinder} + m \, r^2 = \frac{1}{2}\,m\,r^2 + m\,r^2 = \frac{3}{2}\,m\,r^2$$ And from the moment of inertia we compute the acceleration of point C: $$a=\alpha\,r=\frac{\tau}{I}\,r=\frac{m\,g\,r\,\sin\theta}{\frac{3}{2}\,m\,r^2}\,r=\frac{2}{3}\,\frac{m\,g\,r^2\,\sin\theta}{m\,r^2}$$ And finally $$a=\frac{2}{3}\,g\,\sin\theta$$
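The derivation can be checked numerically; the particular values of m, r, and θ below are arbitrary, since they cancel out of the final result:

```python
import math

def rolling_acceleration(m, r, theta, g=9.81):
    tau = m * g * r * math.sin(theta)  # torque about the contact point A
    I = 1.5 * m * r**2                 # (1/2) m r^2 + m r^2, parallel axis
    alpha = tau / I                    # angular acceleration
    return alpha * r                   # linear acceleration of the centre

a = rolling_acceleration(m=2.0, r=0.1, theta=math.radians(30))
# equals (2/3) * g * sin(theta), independent of m and r
```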
{ "domain": "physics.stackexchange", "id": 10072, "tags": "homework-and-exercises, newtonian-mechanics, angular-momentum, moment" }
Which particles can be described as an excitation of the electromagnetic field?
Question: In my teacher's class notes, I came across this definition of light today: "Light is an excitation of the electromagnetic field, with photons being the lowest energy excitation." The way in which it is phrased makes me think that more particles can be seen as excitations of the electromagnetic field, although I really don't know which, and the information I find about the different particles that constitute the standard model is kind of confusing. Exactly which particles are described as excitations of the electromagnetic field? Answer: It fits with the long discussion we had on this question: What is the connection between quantum optical photons and particle physics' photons? and the answers and comments therein. People working with quantum optics have a more general view of the term "excitations of the electromagnetic field". In the quantum field theory used in the standard model of particle physics, QED for electromagnetism, there is a single excitation of the electromagnetic field, called the photon field, which is the photon in the table. It is a zero-mass, point particle with spin ±1 and energy E = hν, where ν is the frequency of the classical electromagnetic wave built up by photons of this energy. Quantum optics physicists use a generalized quantum field theory, which coincides with the particle QED in vacuum. Classical light in matter has quantum behavior that can be described by a field theory, and they call the excitation of the field they use photons. This is what the sentence you quote, "Light is an excitation of the electromagnetic field, with photons being the lowest energy excitation", implies. I would say the professor is a quantum optics physicist.
{ "domain": "physics.stackexchange", "id": 76455, "tags": "particle-physics, electromagnetic-radiation, photons, quantum-electrodynamics" }
Statistical Analysis for Blood Sugar Measurements
Question: I've conducted a lab aimed towards finding the impact of soluble fiber intake on the change in blood sugar levels. Following an 8 hour fast, I had test subjects consume a fixed amount of carbohydrates, with a varying amount of soluble fiber, and measured their blood sugar in 20 minute intervals. This was done for two hours each test, generally until blood sugar levels dropped to normal. 5 test subjects were used for the test, each with individual blood sugar responses - for instance, each had consistently different fasting blood sugar levels before food consumption, each had consistently different blood sugar peaks, and so on. Each test subject was tested 4 times, once with no soluble fiber, and 3 more with varying amounts of soluble fiber. The raw data I am now left with is simply a collection of scatter plot graphs displaying rises in blood sugar from normal condition, a peak, and a subsequent drop to normal condition. I can visibly see from my graphs that more soluble fiber made blood sugar peaks smaller and longer to reach. Data did not always follow this trend, due to some source of error, but a relationship is nevertheless evident. I hope to make a conclusion from this data. What method of statistical analysis could I use to conclude a relationship from these scatter plots? I would like the method to take into account different individual responses, as well as the few data sets that did not confirm a relationship. I appreciate your help! Answer: It would help to see your plots. The way I would imagine to present this data is not scatter plots but line plots: (Each line would have a confidence interval/standard deviation from your different test subjects. This is just a schematic representation. Color represents different experimental conditions. Let me know if I misunderstood your experiment.) Now, what is your actual question? Your hypothesis? 
You want to know if the time or height of the peak of blood sugar levels is different between the different fiber levels, right? So, why not measure this? At what time do you find the peak for each subject and experimental condition, and what's the blood sugar level at that time point? Your null hypothesis would be that there is no difference between the test conditions. Here, some mock data show the spread of the peak height measurements from your 5 test subjects across the 4 different test conditions. Now you have reduced your data to a question where you could use a statistical test to see if you can reject the null hypothesis. One thing you could do is perform an ANOVA with e.g. Dunnett's post hoc test to compare your fiber conditions to the control condition. A problem with that is that your conditions are not really independent. And your actual question is not whether any of these conditions is different. Your hypothesis would rather be that there is a relationship between fiber content and peak height/delay (the null hypothesis would be: there is no relationship). Here is the same mock data in a scatter plot (color represents test subjects; they overlap, but let's assume there are 5 data points for each condition) with a trendline showing that with increasing fiber content there is a reduction in peak height. You could therefore try a correlation analysis and test the statistical significance of your correlation coefficient. As you see, the statistical analysis very much depends on your hypothesis and how you analyse your raw measurements. I hope I have correctly identified the issue and this helps you with your analysis.
{ "domain": "biology.stackexchange", "id": 9287, "tags": "statistics, biostatistics, carbohydrates, blood-sugar" }
Assign a unique cluster based on a dataframe column with KMeans Algorithm
Question: I have the following df:

x1    x2    x3    x4
1000  5000  0.8   restaurant1
2000  7000  0.75  restaurant1
500   1000  0.5   restaurant2
700   1400  0.6   restaurant2
1000  5000  0.8   restaurant2
100   600   0.9   restaurant3
200   1200  0.9   restaurant3
50    1000  0.9   restaurant3

Applying a KMeans algorithm with 2 clusters, what happens is this y:

x1    x2    x3    x4           Y
1000  5000  0.8   restaurant1  1
2000  7000  0.75  restaurant1  1
500   1000  0.5   restaurant2  2
700   1400  0.6   restaurant2  2
1000  5000  0.8   restaurant2  1
100   600   0.9   restaurant3  2
200   1200  0.9   restaurant3  2
50    1000  0.9   restaurant3  2

Possible desired outputs:

x1    x2    x3    x4           Y
1000  5000  0.8   restaurant1  1
2000  7000  0.75  restaurant1  1
500   1000  0.5   restaurant2  2
700   1400  0.6   restaurant2  2
1000  5000  0.8   restaurant2  2
100   600   0.9   restaurant3  2
200   1200  0.9   restaurant3  2
50    1000  0.9   restaurant3  2

or

x1    x2    x3    x4           Y
1000  5000  0.8   restaurant1  1
2000  7000  0.75  restaurant1  1
500   1000  0.5   restaurant2  1
700   1400  0.6   restaurant2  1
1000  5000  0.8   restaurant2  1
100   600   0.9   restaurant3  2
200   1200  0.9   restaurant3  2
50    1000  0.9   restaurant3  2

I would like to set this boundary: a restaurant must belong to 1 and only 1 cluster. I understand why there is this output, but how could I avoid and fix it? Below the code that I used in my notebook:

#Converting float64 to numpy array
x1 = df['x1'].to_numpy()
x2 = df['x2'].to_numpy()
x3 = (df['x5']/df['x2']).to_numpy()
x4 = df_joint_raw['x4'].cat.codes.to_numpy()
X = np.stack((x1, x2, x3, x4), axis=1)

#Getting clusters
y_pred = KMeans(n_clusters=2, random_state=0).fit_predict(X)

Answer: Very interesting question! I try my best: It depends a bit on the number of clusters and the number of restaurants, but in general I'll explain a bit. If the number of restaurants and clusters are the same, then, theoretically, your question has just one trivial answer: "each restaurant is a cluster". You don't even need any algorithm. I'll go a bit deeper into it. Most ML algorithms solve an optimization problem to find the answer.
Sometimes optimization problems are subject to some constraints. Example: Cluster restaurants such that all similar restaurants are necessarily assigned to the same cluster. Cluster restaurants such that the density of same restaurants in same clusters is maximum. The first one has the trivial answer I gave before, but the second one can be solved. You run several clustering methods (or just k-means but with several initial conditions) and accept the one in which the highest number of similar restaurants end up in identical clusters. For this you need to convert "density of same restaurants in same clusters" into a mathematical formulation and use it as the criterion of choice. If you need help on it, just drop a comment so I can update the answer. In any case, you change the output of the clustering and you don't let it "naturally" find the clusters, as you push a criterion which is not normally considered in the algorithm. But don't worry! The good thing is that at least you have a criterion for the "goodness" of your clustering, which does not normally exist in a clustering problem. UPDATE Let's try the $\chi^2$ test first. It is pretty simplified, but try it, and if it doesn't work we can think of something else. For the know-how, I prepared it for you in a simple way so you don't get confused by the different tutorials on the net. Imagine you have 4 restaurants and you want 4 clusters. You will end up with a frequency table which says how many restaurants of which type fall in which cluster. Then in Python, you simply calculate the $\chi^2$ statistic, which tells you if clusters and restaurants "are correlated or not":

import numpy as np
from scipy.stats import chi2_contingency

obs = np.array([[10, 1, 2, 1],
                [1, 11, 0, 1],
                [1, 2, 8, 1],
                [0, 2, 2, 12]])

chi, p, _, _ = chi2_contingency(obs)
print('The chi-square statistic of {} with p-value of {}'.format(chi, p))

The p-value, as you know, tells you if the statistic is significant. There are theoretical considerations behind this solution, but I am not going to confuse you with them.
I apologize as I did not go through your proposal in the comment. Will answer accordingly as soon as I find time to have a look at that. Good Luck!
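One simple post-processing that guarantees "a restaurant belongs to exactly one cluster" is to give each restaurant the majority label among its rows. This is my suggestion, not part of the answer's χ² machinery, and the row data below is copied from the question's first output:

```python
from collections import Counter

# (restaurant, k-means label) pairs from the question's first output
rows = [
    ('restaurant1', 1), ('restaurant1', 1),
    ('restaurant2', 2), ('restaurant2', 2), ('restaurant2', 1),
    ('restaurant3', 2), ('restaurant3', 2), ('restaurant3', 2),
]

votes = {}
for name, label in rows:
    votes.setdefault(name, Counter())[label] += 1

# each restaurant gets its mode cluster -> one cluster per restaurant
assigned = {name: c.most_common(1)[0][0] for name, c in votes.items()}
```

Here restaurant2 collapses to cluster 2, which matches the first desired output in the question.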
{ "domain": "datascience.stackexchange", "id": 6984, "tags": "python, clustering, pandas, k-means, numpy" }
How can it be proved that two different kinds of dfs unequivocally define a unique tree?
Question: How can it be proved that two different kinds of dfs (for example, let's call them inorder and postorder) unequivocally define a unique tree? Let's say we have two dfs arrays: A0 (inorder) = {a01, a02, a03, ... a0n}; A1 (postorder) = {a11, a12, a13, ... a1n}; So can we say that the pair {A0, A1} defines a unique tree, and if so, how can we prove it? Answer: Since you are talking about an in-order, I will suppose that the tree is a binary tree (otherwise I don't know how the in-order is defined). This uniqueness result can be proven by induction on the length of the arrays, but only if the nodes are distinct (which I will assume for the rest of the proof). Suppose $T$ is a tree with an in-order given by $A = (a_1, a_2, …, a_n)$ and a post-order given by $B = (b_1, …, b_n)$. If $n = 0$, then the tree is the empty tree and is unique. Suppose the result is true for all array lengths $\leqslant n$, $n$ being a fixed non-negative integer. Let $T$ be a tree with an in-order given by $A = (a_1, a_2, …, a_{n+1})$ and a post-order given by $B = (b_1, …, b_{n+1})$. Since $B$ is a post-order, there exists $k\in \{0,…, n\}$ such that $T = Node(b_{n+1}, T_l, T_r)$, where $T_l$ has post-order $B_l=(b_1, …, b_k)$ and $T_r$ has post-order $B_r = (b_{k+1}, …, b_{n})$. Since $A$ is an in-order, $T_l$ has in-order $A_l=(a_1, a_2, …, a_k)$, $b_{n+1} = a_{k+1}$, and $T_r$ has in-order $A_r=(a_{k+2}, …, a_{n+1})$. Since the nodes are distinct, the equality $b_{n+1} = a_{k+1}$ guarantees a unique value of $k$. Now, $T_l$ has in-order $A_l$ and post-order $B_l$, and since $|A_l| = |B_l| = k \leqslant n$, there exists (by the induction hypothesis) a unique corresponding tree. In the same way, there exists a unique possible $T_r$. We conclude that there exists a unique tree $T = Node(b_{n+1}, T_l, T_r)$ with in-order $A$ and post-order $B$.
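The proof is constructive, and turning it into code makes the uniqueness tangible (binary tree as nested tuples; nodes distinct, as the proof assumes):

```python
def build(inorder, postorder):
    """Rebuild the unique binary tree from its in-order and post-order."""
    if not inorder:
        return None
    root = postorder[-1]     # b_{n+1}: the last node of the post-order
    k = inorder.index(root)  # the unique split point, since nodes are distinct
    left = build(inorder[:k], postorder[:k])
    right = build(inorder[k + 1:], postorder[k:-1])
    return (root, left, right)

tree = build(list('dbeac'), list('debca'))
```

Every recursive call faces exactly one choice of root and split, which is why the result is unique.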
{ "domain": "cs.stackexchange", "id": 19320, "tags": "graphs, trees, discrete-mathematics, graph-traversal" }
Building a timelapse with ffmpy
Question: As a 'trying to learn Python' project, I am using ffmpy to stitch together a timelapse from a series of still images. I'd like the script to output a couple of formats for web use. This is what I have:

#!/usr/bin/env python3
import datetime
import ffmpy
import os

now = datetime.datetime.now()
ydr = now.strftime('%Y')
mdr = now.strftime('%m')
ddr = now.strftime('%d')

ipath = str(os.path.dirname(os.path.abspath(__file__))) + '/images/' + ydr + '/' + mdr + '/*/*.jpg'
opath1 = str(os.path.dirname(os.path.abspath(__file__))) + '/videos/' + ydr + mdr + '.mp4'
opath2 = str(os.path.dirname(os.path.abspath(__file__))) + '/videos/' + ydr + mdr + '.webm'

ff = ffmpy.FFmpeg(
    inputs={ipath: '-loglevel info -pattern_type glob -framerate 18 '},
    outputs={opath1: '-c:v libx264 -vf "scale=1280:-1,hqdn3d=luma_spatial=1" -pix_fmt yuv420p'}
)
ff.run()

ff = ffmpy.FFmpeg(
    inputs={ipath: '-loglevel info -pattern_type glob -framerate 18 '},
    outputs={opath2: '-c:v libvpx -vf "scale=1280:-1,hqdn3d=luma_spatial=1" -b:v 1M -c:a libvorbis'}
)
ff.run()

It works, but it's kinda ugly and I'm pretty sure there's a more efficient and 'Pythonic' way of doing this. Any pointers?

Answer:

Use path.join() instead of manually concatenating file paths

This will make sure that it will work on different OSs; Windows uses \ backslashes, for instance.

No need to convert with strftime

A datetime has years, months and days as properties. If you want them in str format you could use map(str, iterable) to convert them into strings. (Note: str(now.month) is not zero-padded the way strftime('%m') is; use something like '{:02d}'.format(now.month) if the padding matters for your filenames.)

Code

import datetime
import os.path

now = datetime.datetime.now()
y, m = map(str, (now.year, now.month))

location = os.path.dirname(os.path.abspath(__file__))
ipath = os.path.join(location, 'images', y, m + '.jpeg')
video_path_mp4 = os.path.join(location, 'videos', y, m + '.mp4')
video_path_webm = os.path.join(location, 'videos', y, m + '.webm')
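A pathlib variant of the same joins is also worth a look (stdlib; the base directory below is a stand-in, since the real one would come from __file__):

```python
from datetime import datetime
from pathlib import Path

now = datetime.now()
y, m = str(now.year), f'{now.month:02d}'  # :02d keeps the zero padding

base = Path('/srv/timelapse')             # stand-in for the script directory
ipath = base / 'images' / y / m
video_mp4 = base / 'videos' / f'{y}{m}.mp4'
```

The / operator reads naturally and produces Path objects that most stdlib file APIs accept directly.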
{ "domain": "codereview.stackexchange", "id": 33291, "tags": "python, beginner, image, video" }
Unprovable Post correspondence problem instance
Question: Since there is no algorithm for the Post correspondence problem, there exists an instance of this problem such that we can neither prove that the instance is positive nor prove that the instance is negative, i.e. an unprovable instance of this problem. Otherwise, for each instance, a proof that the instance is positive or a proof that the instance is negative would exist, and we could just use an algorithm which enumerates all the possible proofs until it finds such a proof; it always terminates and it correctly answers yes/no according to the proof found. To find such an instance, I pick a random instance, I try a semi-decision procedure and, after some time, if this doesn't work, I try to prove that the instance is negative. Since this heuristic has always finished, I have not found such an instance so far, but it should be possible. This is not specific to the Post correspondence problem; an unprovable instance should exist for every undecidable problem. Answer: I found an answer here: https://math.stackexchange.com/questions/1214823/have-we-found-a-turing-machine-for-which-halting-non-halting-is-unprovable Since we can encode Turing machines in PCP, we can construct an instance such that we can neither prove that the instance is positive nor prove that the instance is negative.
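The semi-decision procedure mentioned in the question, in its crudest form, just enumerates index sequences up to a depth bound: it can confirm positive instances but can never certify a negative one. The instance below is a well-known positive one with solution (3, 2, 3, 1) in 1-based indexing:

```python
from itertools import product

def pcp_search(pairs, max_len):
    """Bounded brute force over index sequences; None means inconclusive."""
    for n in range(1, max_len + 1):
        for seq in product(range(len(pairs)), repeat=n):
            top = ''.join(pairs[i][0] for i in seq)
            bottom = ''.join(pairs[i][1] for i in seq)
            if top == bottom:
                return seq
    return None  # NOT a proof that the instance is negative

pairs = [('a', 'baa'), ('ab', 'aa'), ('bba', 'bb')]
sol = pcp_search(pairs, max_len=4)  # finds (2, 1, 2, 0), i.e. 3, 2, 3, 1
```

Removing max_len would give the true semi-decision procedure: it halts on positive instances and runs forever on negative ones, which is exactly why it cannot settle an unprovable instance.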
{ "domain": "cs.stackexchange", "id": 4374, "tags": "computability, undecidability" }
Is Python a viable language to do statistical analysis in?
Question: I originally came from R, but Python seems to be the more common language these days. Ideally, I would do all my coding in Python, as the syntax is easier and I've had more real-life experience using it, and switching back and forth is a pain. Outside of ML-type stuff, all of the statistical analysis I've done has been in R: regressions, time series, ANOVA, logistic regression, etc. I have never really done that type of stuff in Python. However, I am trying to create a bunch of code templates for myself, and before I start, I would like to know if Python is deep enough to completely replace R as my language of choice. I do eventually plan on moving more towards ML, and I know Python can do that, and eventually I imagine I'll have to go to a more base language like C++. Does anyone know the limitations of Python when it comes to statistical analysis, or have a link to the pros and cons of using R vs. Python as the main language for statistical analysis? Answer: Python is more "general purpose" while R has a clear(er) focus on statistics. However, most (if not all) things you can do in R can be done in Python as well. The difference is that you need to use additional packages in Python for some things you can do in base R. Some examples: Data frames are base R while you need to use Pandas in Python. Linear models (lm) are base R while you need to use statsmodels or scikit in Python. There are important conceptual differences to be considered. For some rather basic mathematical operations you would need to use numpy. Overall this leads to some additional effort (and knowledge) needed to work fluently in Python. I personally often feel more comfortable working with base R since I feel like being "closer to the data" in (base) R. However, in other cases, e.g. when I use boosting or neural nets, Python seems to have an advantage over R. Many algorithms are developed in C++ (e.g. Keras, LightGBM) and adapted to Python and (often later to) R.
At least when you work with Windows, this often works better with Python. You can use things like Tensorflow/Keras, LightGBM, Catboost in R, but it sometimes can be daunting to get the additional package running in R (especially with GPU support). Many packages/methods are available for R and Python, such as GLMnet (for R / for Python). You can also see based on the Labs of "Introduction to Statistical Learning" - which are available for R and for Python as well - that there is not so much of a difference between the two languages in terms of what you can do. The difference is more like how things are done. Finally, since Python is more "general purpose" than R (at least in my view), there are interesting and funny things you can do with Python (beyond statistics) which you cannot do with R (at least it is harder).
{ "domain": "datascience.stackexchange", "id": 7813, "tags": "machine-learning, python, r, statistics, data-analysis" }
Are gametes determined by the sex of an organism?
Question: In the Wikipedia article for biological sex, I read the following sentence. "The gametes produced by an organism are determined by its sex:..." However, is it not through the gametes produced by an organism that "sex" is defined? Should this sentence be the other way around? Answer: The gametes produced by an organism are determined by the organism's sex - Thus, that statement from Wikipedia appears correct to me. In the case of humans, males have two different sex chromosomes. Thus, males produce gametes with either a Y-chromosome or an X-chromosome (XY). Female humans also have two sex chromosomes, but they are two different X chromosomes (XX). Thus, when females produce gametes they have a single copy of one of the X chromosomes in each gamete. In most cases, a fertilized embryo results and is either XX or XY - in the case of males the embryo gets the Y from Dad and the other sex chromosome is one of the two X's from Mom. In the case of a female the embryo gets the X chromosome from Dad and one of the two X's from Mom. Here is an image describing what I said above with a link to the page to read more: Here is a link from a basic genetics course from Dr. Young in the UK that has a more in-depth explanation. Here is a link to another article from Nature Education that help explain how this works in humans as well as some other organisms that are slightly different.
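The human mechanism described in the answer reduces to a two-by-two Punnett square, which is trivial to enumerate (a toy illustration added here, not part of the original answer):

```python
from itertools import product

mom_gametes = ['X', 'X']  # an XX mother: every egg carries an X
dad_gametes = ['X', 'Y']  # an XY father: sperm carry either X or Y

# every egg/sperm combination, sorted so the genotype reads XX or XY
offspring = [''.join(sorted(pair)) for pair in product(mom_gametes, dad_gametes)]
# -> ['XX', 'XY', 'XX', 'XY']: on average half XX (female), half XY (male)
```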
{ "domain": "biology.stackexchange", "id": 4815, "tags": "genetics, sex, reproductive-biology, gender" }
Energy balance in Kirchhoff's law of thermal radiation
Question: Kirchhoff's law of thermal radiation is usually motivated by an energy balance : at thermal equilibrium, the absorbed power should be exactly compensated by the radiated power, hence $$ \alpha(\Omega,\nu)\mathcal{K}(\Omega,\nu)=\mathcal{L}(\Omega,\nu)=\epsilon(\Omega,\nu) \mathcal{L}_{BB}(\Omega,\nu) $$ where $\alpha$ is the absorptivity (fraction of incident power absorbed by the body), $\mathcal{K}$ is the incident spectral radiance (incoming power per surface unit, steradian and Herz), $\mathcal{L}$ is the body spectral radiance (radiated power per surface unit, steradian and Herz) $\epsilon$ is the emissivity of the body (ratio between the body radiance and that of a black body). If the body is illuminated by a thermal radiation, $\mathcal{K}=\mathcal{L}_{BB}$ and we conclude $$ \alpha(\Omega,\nu)=\epsilon(\Omega,\nu) $$ My question is the following : why is this relation true for all frequency and solid angles ? Why would energy conservation apply in each mode, and not only globally ? Why don't we rather consider an integrated form such as $$ \int d\nu d\Omega \, \alpha(\Omega,\nu) \mathcal{L}_{BB}(\Omega,\nu)= \int d\nu d\Omega \, \epsilon(\Omega,\nu) \mathcal{L}_{BB}(\Omega,\nu) $$ Complementary question : can Kirchhoff law be related to transition amplitude in a quantum description ? Answer: I came up with a solution from the university of Arizona. Let's consider a body with emissivity $\epsilon_{1}(\lambda)$ and absorptivity $\alpha_{1}(\lambda)$. The energy balance for the body exposed to a thermal radiation of radiance $\mathcal{L}_{T}(\lambda)$ imposes $$ \int d\lambda\,\mathcal{L}_{T}(\lambda)\epsilon_{1}(\lambda)=\int d\lambda\,\mathcal{L}_{T}(\lambda)\alpha_{1}(\lambda) $$ which is not enough to conclude on the identity between emissivity and absorptivity at all wavelength. Let's now put the body inside a cavity, the radiative behavior of which is characterized by $\epsilon_{2},\,\alpha_{2}$. 
We assume thermal equilibrium at temperature T between the body and the cavity. The spectral irradiance arriving on the body is $$\phi_{in}(\lambda)=\mathcal{L}_{T}(\lambda)\epsilon_{2}(\lambda)+\left(1-\alpha_{2}(\lambda)\right)\phi_{out}(\lambda)$$ where the spectral irradiance reaching the cavity is expressed as $$\phi_{out}(\lambda)=\mathcal{L}_{T}(\lambda)\epsilon_{1}(\lambda)+\left(1-\alpha_{1}(\lambda)\right)\phi_{in}(\lambda).$$ Solving this simple system leads to $$ \phi_{in}(\lambda) =\mathcal{L}_{T}\frac{\epsilon_{2}+\epsilon_{1}(1-\alpha_{2})}{1-(1-\alpha_{1})(1-\alpha_{2})}\\ \phi_{out}(\lambda) =\mathcal{L}_{T}\frac{\epsilon_{1}+\epsilon_{2}(1-\alpha_{1})}{1-(1-\alpha_{1})(1-\alpha_{2})} $$ and the energy balance requires $$ \int d\lambda\,\phi_{in}(\lambda)=\int d\lambda\,\phi_{out}(\lambda)\\ \Rightarrow\int d\lambda\,\mathcal{L}_{T}\frac{\alpha_{1}\alpha_{2}}{1-(1-\alpha_{1})(1-\alpha_{2})}\left(\frac{\epsilon_{2}}{\alpha_{2}}-\frac{\epsilon_{1}}{\alpha_{1}}\right)=0 $$ The trick is that this relation holds for whatever cavity material, ie whatever $\epsilon_{2},\,\alpha_{2}$. This is possible if and only if $\epsilon\propto\alpha$, and the first equation imposes $\epsilon(\lambda)=\alpha(\lambda)$ for each wavelength, hence the Kirchhoff law of radiation. This proof is written in spectral domain, but can be extended to the angular domain as well. I read that Kirchhoff law was only holding in time-reversal symmetric systems. Where is this assumption used here ?
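A quick numeric check of the closed-form fluxes above: if ε = α for both surfaces, φ_in and φ_out coincide (and equal the blackbody radiance L_T) for any cavity coating, exactly as the balance demands. The coefficients below are arbitrary:

```python
def fluxes(eps1, alpha1, eps2, alpha2, L_T=1.0):
    """phi_in / phi_out from the two-surface system solved above."""
    denom = 1 - (1 - alpha1) * (1 - alpha2)
    phi_in = L_T * (eps2 + eps1 * (1 - alpha2)) / denom
    phi_out = L_T * (eps1 + eps2 * (1 - alpha1)) / denom
    return phi_in, phi_out

phi_in, phi_out = fluxes(eps1=0.3, alpha1=0.3, eps2=0.8, alpha2=0.8)
# both equal L_T = 1.0 once Kirchhoff's law holds for each surface
```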
{ "domain": "physics.stackexchange", "id": 39931, "tags": "electromagnetic-radiation, energy-conservation" }
How to calculate forces for most appropriate electric engine for soft sand?
Question: I'm super new to this topic I'm posting here so please forgive the nursery question and content. I'm hoping to use any answers to help me begin my research and investigation on what I need to learn. I've been looking around the interwebs on how to calculate what size electric engine I need to move this garden cart on the beach (https://www.gorillacarts.com.au/product-page/gorilla-carts-450kg-steel-mesh). I'm not at all interested in speed. I just want max torque to achieve my goal. As I understand the basics of it... Torque(wheel) = Force x Radius As mentioned, I'm new to all this so probably slow on the uptake. I want to understand what big factors I need to consider if I want to move this cart through soft beach sand. What measurements should I be taking, and how do I convert that data to then select an appropriate engine? I've seen similar questions posted on here on how to calculate forces for the most appropriate engine, but I'm not sure how wanting to operate on soft beach sand changes the math and approach. I bought a crane scale and pulled the cart across a variety of surfaces while at max intended load. Max weight + unit = 158 kg Flat cement = 3.5 kg peak initial force to pull Flat cement, 10%-ish uphill grade = 14 kg Soft flat sand = 41 kg I realise there are a bunch of factors which will be difficult to measure, i.e. engine/battery inefficiencies, but what are the big things I should be figuring out, and how do I translate that to the hardware requirements? The best tyre type for soft sand will, I'm sure, play a big role in engine choice. My values were on the stock wheels. I need to understand what would be the best tyres for soft sand. Thanks very much for any help anyone can share with their knowledge and/or experience in the topic. Thanks, Nick Answer: You have already measured some facts. You need continuous pulling force = the weight of 41 kg on the sand.
That's the case when the vehicle has already been lifted up from the notches which will form under the tyres soon after the vehicle is stopped. You must also know the needed initial pull. It depends on how deep the notches below the tyres already are and what acceleration you expect. And you need these with full load. The acceleration needs force (in Newtons) = total mass (kilograms) x the rate of speed increase (meters/second gained per second). That's an extra which must be added to what's needed to pull the tyres up from the notches below them and to overcome the continuous sand resistance. To be exact, some more is needed to accelerate the wheel rotation, but I guess the rotational inertia of the wheels is small when compared to the translational inertia of the whole system. It's easiest to measure the total start force when pulling the cart into motion. Then you must know how much forward pushing force your tyres can deliver. Lock the rear wheels (I guess they will be powered) and check what force is needed to pull the vehicle - do it both with a light and a full load. I'm afraid pulling with the 2 wheels locked will show that you need rear tyres with deep paddles. You simply do not get the needed traction with the originals, which probably are designed for low rolling resistance on hard surfaces. In theory any motor could produce as much forward pushing force as wanted if there's a gearbox with a high enough ratio. To also have some driving speed you need power. You can calculate it in 2 ways: The power in watts = Driving speed (meters/s) x Pulling force (Newtons). Sorry for using SI units, but they make everything much less error prone. The power in watts = Rotation speed of an axis (radians per second) x Torque (Newton-meters). The torque calculations are essential, as you probably knew. You can calculate the axle torque in Newton-meters by multiplying the pulling force (Newtons) by the radius of your powered wheel (meters).
You can change the driving speed to axle rotation speed by calculating how many revolutions of a wheel are needed per second for the wanted driving speed. Multiply it by 2Pi (=6.28) to get the result in radians per second, or by 60 to get revolutions per minute. Gears multiply the rotation speed and divide the axle torque by the same number, so gears do not affect the power (except by having friction, so you need some reserve). It's up to you to find a motor which can produce the needed torque for starting to move when stalled. Old-fashioned DC motors (with brushes) are best in that sense. Modern electronically controlled brushless motors can be close. In addition, the motor must output the needed torque for the nominal driving speed at the motor's nominal rotation speed continuously. It's a challenge to keep water and sand out of the motor and to keep the motor cool enough. Designing an easy-to-use and reliable electrical system with all needed protections is another challenge. I guess you should search for existing solutions when the power, speed and torque requirements are calculated. Have at least 25% reserve. Not asked, but do not even think that someone would sit in the cart. It's already an unstable vehicle (high, narrow, no stabilization mechanisms) and a person + the safety cage would make it intolerable. It can work as a carriage where electricity only helps on the sand. The max speed you need is about 1.5 meters/second. To reach that speed in 0.5 seconds with the full 158 kg (gross) load you'll need an acceleration force of 474 N (or about 48 kg as you may say). In addition you need what's needed to push the vehicle up from the notches and to overcome the sand resistance.
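The answer's figures can be strung together in a short script; a sketch in which the wheel radius is an illustrative assumption, not a value from the question:

```python
g = 9.81            # m/s^2

# Figures from the measurements and the answer above
m_total = 158.0     # kg, cart plus max load
pull_sand_kg = 41.0 # crane-scale reading on soft flat sand, kg
v_max = 1.5         # m/s, target speed from the answer
t_accel = 0.5       # s, time to reach v_max
r_wheel = 0.125     # m, wheel radius -- an illustrative assumption

F_roll = pull_sand_kg * g     # continuous pulling force on sand, N
a = v_max / t_accel           # required acceleration, m/s^2
F_accel = m_total * a         # extra force while accelerating, N (~474 N)
F_total = F_roll + F_accel    # force the tyres must deliver while speeding up

P = F_total * v_max           # mechanical power at top speed, W
torque = F_total * r_wheel    # axle torque for direct drive, N*m
omega = v_max / r_wheel       # axle speed at v_max, rad/s

print(f"rolling force {F_roll:.0f} N, accel force {F_accel:.0f} N")
print(f"power {P:.0f} W, axle torque {torque:.1f} N*m at {omega:.1f} rad/s")
```

On top of these figures add the answer's suggested 25% reserve; a gearbox then trades axle torque against motor speed at (roughly) constant power.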
{ "domain": "engineering.stackexchange", "id": 4427, "tags": "mechanical-engineering, torque, mathematics, electric-vehicles" }
Read first column of a space delimited file with Java 8 stream
Question: Code: import java.nio.charset.Charset; import java.nio.file.Files; import java.nio.file.Paths; import java.util.List; import java.util.regex.Matcher; import java.util.regex.Pattern; import java.util.stream.Collectors; /** * Created by IDEA on 16/11/14. */ public class TestStream { private final static Pattern pattern = Pattern.compile("^(\\S+)\\s+"); public static String firstWord(String s) { Matcher matcher = pattern.matcher(s); if(matcher.find()) { return matcher.group(); } return null; } public static void main(String[] args) throws Exception { List<String> x = Files.lines(Paths.get("/tmp/testfile"), Charset.defaultCharset()).map(line -> firstWord(line)).collect(Collectors.toList()); System.out.println(x); } } Answer: Your code will fail with a single-column input like: one two three four Additionally, it will produce odd results with space-prefixed lines like: normal line spaceprefixed line The reason it will fail is because the Files.lines method trims the line terminators off of each line, so, the input to the regex will be just one, etc. Since there's no whitespace after the one, the match will fail, and the method will return null. You should consider an alternative approach of doing a limited split, and returning the first value: private final static Pattern pattern = Pattern.compile("\\s+"); public static String firstWord(String s) { return pattern.split(s, 2)[0]; } This also changes the logic slightly because empty lines will return empty-string, instead of null. Additionally, your code will match " spaceprefixed" from " spaceprefixed line" whereas my code will consider the leading space to be a separator (and not part of the first word), and will match the empty string "" from " spaceprefixed line" In other words, I disagree with some of the handling you have with edge cases. 
I would handle the edge cases in a different way in the stream too: List<String> x = Files.lines(Paths.get("/tmp/testfile"), Charset.defaultCharset()) .map(line -> firstWord(line)) .filter(word -> !word.isEmpty()) .collect(Collectors.toList());
{ "domain": "codereview.stackexchange", "id": 10648, "tags": "java, parsing, io, stream" }
Why add a minus sign in the formula for gravity?
Question: Why do we add a minus sign in our formula for gravity, when we might as well choose the unit vector $r_{21}$, instead of $r_{12}$? I'm just wondering why we choose this convention. Is it because it's easier to remember that $F_{12}$ goes with $r_{12}$? Edit: Actually... Is wikipedia right? My syllabus says the following: Still, my question holds... Why go through the trouble of adding a minus sign? Answer: The minus sign is to indicate that the force is attractive: if there were no minus sign, two masses would repel. By the way, Wikipedia's article is correct: the vector F must point away from the mass on which the force is acting ($m_2$), and $\textbf{r}_{12}$ points to the mass $m_2$, so with a minus sign you need $\textbf{r}_{12}$; otherwise you would have repulsion. If it seems redundant to you (using the vector pointing to $m_2$ and introducing a minus sign - why not just use the opposite vector $\textbf{r}_{21}$ and cancel the minus sign?) well you are right in a sense, because when speaking of force this reasoning does indeed look a bit cramped. The real reason is that physicists like to think of forces in terms of fields. You can take a look at the Wikipedia article about the gravitational field. In this light, it makes much more sense to use the vector $\textbf{r}_{12}$ because it can be identified with the position vector (which has nothing to do with the mass $m_2$ anymore - this is the power of the concept of field as opposed to the force-between-objects concept)
{ "domain": "physics.stackexchange", "id": 36667, "tags": "newtonian-gravity, vectors, conventions" }
How do you prove that $L=I-V+1$ in $\lambda\phi^4$ theory?
Question: It is known that the number of loops in $\lambda\phi^4$ theory is given by the formula $$L=I-V+1$$ where $L$ is the number of loops, $I$ the number of internal lines and $V$ the number of vertices. I would like to know the proof of this statement. Answer: This formula is actually Euler's formula for planar graphs, and holds for all Feynman diagrams regardless of what theory we are in. The proof proceeds by induction and is easy if we first disregard the case of crossing lines: Observe that a one-loop graph has two vertices, one loop, and two internal lines, so the formula holds. Observe that a $(n+1)$-loop graph is produced from a $n$-loop graph by either drawing one additional line between two already existing vertices, which doesn't change $L-I$, or by adding a new vertex and connecting it to two other vertices, which doesn't change $L-I+V$. By induction, the formula holds for all graphs with finitely many loops. More formally, we can say that A Feynman diagram is called planar if the adjoint graph obtained by connecting all external lines to a single vertex is planar. and then we have proven up to now that the formula holds for all planar Feynman graphs. Interestingly, not even all $\phi^4$ graphs are planar. Consider $2\to 2$ (or $1\to 3$)-scattering with a box diagram, where each external line is connected to its own vertex, and each vertex is connected with each other vertex. The adjoint graph is the complete graph on five vertices, which is known to be not planar. Nevertheless, the "Feynman-Euler formula" $$ L-I+V = 1$$ still holds because of the way loops are formally counted. By the general Euler formula, $$ \#\{\mathrm{vertices}\} - \#\{\mathrm{edges}\} + \#\{\mathrm{faces}\} = 2 - 2g$$ where $g$ is the genus of the surface on which the graph can be drawn without intersections, and "faces" are all regions bounded by edges. 
A "face" does not have to have a vertex at every corner, so when you get two crossing lines in a Feynman graph, you get two additional faces that you do not count as loops - the above boxy $\phi^4 $ diagram has four faces inside the box, but only two loops. Since every crossing of lines that cannot be eliminated by deforming the graph (and is hence a "true crossing" and not just us being too dumb to draw the graph properly) increases the genus on which you could draw the graph without crossings by $1$, the "Feynman-Euler formula" for all graphs follows from the general Euler formula.
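The counting rule $L=I-V+1$ for a connected diagram can be checked mechanically; a sketch using a union-find spanning-tree construction (the graph encodings are illustrative):

```python
# Check L = I - V + 1 on connected multigraphs: build a spanning tree with
# union-find; every internal line whose endpoints are already connected
# closes exactly one independent loop.

def loops(n_vertices, edges):
    """edges: list of (u, v) pairs; multi-edges (parallel lines) allowed."""
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    extra = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            extra += 1          # endpoints already connected: a new loop
        else:
            parent[ru] = rv     # spanning-tree edge
    return extra

# One-loop bubble: V = 2 vertices, I = 2 internal lines -> L = 2 - 2 + 1 = 1
assert loops(2, [(0, 1), (0, 1)]) == 1

# Sunset diagram: V = 2, I = 3 -> L = 3 - 2 + 1 = 2
assert loops(2, [(0, 1)] * 3) == 2

# Induction step from the answer: one more line between existing
# vertices raises I and L by one each, leaving L - I unchanged.
assert loops(2, [(0, 1)] * 4) == 3
```

The spanning tree has $V-1$ edges for a connected graph, so the number of loop-closing edges is $I-(V-1)=I-V+1$, which is exactly the induction argument of the answer in combinatorial form.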
{ "domain": "physics.stackexchange", "id": 41386, "tags": "homework-and-exercises, quantum-field-theory, feynman-diagrams" }
Is the combination of blocks and lines in this old, very long telescope an implementation of some named structure or technique?
Question: Below are two cropped views of "Johannes Hevelius's 8 inch telescope with an open work wood and wire "tube" that had a focal length of 150 feet to limit chromatic aberration." from Harvard University, Houghton Library, pga_typ_620_73_451_fig_aa (found here), the first of which I've zoomed in on and enhanced the contrast of. I've used these images in two other questions in Astronomy SE: How did Johannes Hevelius' long telescope work? Why all the round holes? How does making a refracting telescope very long reduce the chromatic aberration of an uncorrected lens? This answer provides a link to the original text and image, but the discussion was published in 1673 and may be difficult to understand or paste into google translate. However, the structure itself may be familiar to engineers. I'm getting the feeling that the series of square blocks with round holes and string or lines connecting them (presumably under tension?) have no specific optical function, and are purely part of a system to keep the long "telescope tube" stiff. Is this combination of blocks and lines a recognizable implementation of some named structure or technique? Answer: Lateral stiffness is from the diamond stays and notched spreaders. The weight is distributed along the beam using a running "crow's foot" arrangement. The square blocks and holes are presumably there to aid in telling the rigger what they need to do to keep it all straight and true.
{ "domain": "engineering.stackexchange", "id": 2666, "tags": "structural-engineering, terminology, engineering-history" }
High level convenience classes for ros_serial
Question: I thought I'd come up with two convenience classes to wrap up ROS communication in my ATmega1280 code. This may not be the most indispensible part of my project, but just for the fun of learning C++ template programming I thought I'd give it a try. My code is as follows: #include <ros.h> template <class T> class ROSLogger { public: ROSLogger(ros::NodeHandle* nh, char* topic, short pubFrequency); virtual ~ROSLogger(); void advertise(); void publish(); T& getMsg(); private: ros::NodeHandle* nh_; ros::Publisher publisher_; T msg_; const char* topic_; short pubFrequency_; }; template <class T> ROSLogger<T>::ROSLogger(ros::NodeHandle* nh, char* topic, short pubFrequency) : nh_(nh), topic_(topic), pubFrequency_(pubFrequency), publisher_(topic_, static_cast<ros::Msg*>(&msg_)) {} template <class T> ROSLogger<T>::~ROSLogger() {} template <class T> void ROSLogger<T>::advertise() { nh_->advertise(publisher_); } template <class T> T& ROSLogger<T>::getMsg() { return msg_; } template <class T> void ROSLogger<T>::publish() { publisher_.publish(&msg_); } To use it one would create a ros::NodeHandle, initialize it, create a ROSLogger, advertise a topic and then publish data, as below. unsigned char hello[13] = "hello world!"; ros::NodeHandle nh; ROSLogger<std_msgs::String> rl(&nh, "test", 10); void setup() { nh.initNode(); rl.advertise(); } void loop() { nh.spinOnce(); rl.getMsg().data = hello; rl.publish(); } At the end I'd create a vector of ROSLogger's in a containing class and their publish methods would fire only with the desired frequency automatically. Of course the latter is not yet implemented. Even if the added value is small, I'd still like to make it work. The point is, now it doesn't, and I don't know why. After running serial_node.py only an error appears: [INFO] [WallTime: 1311526103.141689] ROS Serial Python Node [INFO] [WallTime: 1311526103.149892] Connected on /dev/ttyUSB0 [ERROR] [WallTime: 1311526118.161632] Lost sync with device, restarting... 
/opt/ros/diamondback/stacks/ros_comm/clients/rospy/src/rospy/topics.py:640: UserWarning: '' is not a legal ROS graph resource name. This may cause problems with other ROS tools super(Publisher, self).__init__(name, data_class, Registration.PUB) Unable to register with master node [http://localhost:11311]: master may not be running yet. Will keep trying. Any clues? Originally posted by tom on ROS Answers with karma: 1079 on 2011-07-24 Post score: 1 Answer: OK, I figured this out. The problem was a C++ peculiarity when initializing class members with a constructor's initialization list. The thing is, class members get initialized in the order of their declaration in the class's body, not in the order you name them in the initialization list (more on that, e.g., here). So changing the declaration this way solves the problem: template <class T> class ROSLogger { public: ROSLogger(ros::NodeHandle* nh, char* topic, short pubFrequency); virtual ~ROSLogger(); void advertise(); void publish(); T& getMsg(); private: ros::NodeHandle* nh_; const char* topic_; //DECLARATION ORDER CHANGED HERE T msg_; ros::Publisher publisher_; short pubFrequency_; }; The way I had programmed it before, publisher_ got initialized before topic_ was set, so this couldn't work. Originally posted by tom with karma: 1079 on 2011-07-24 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 6238, "tags": "ros" }
Which cultivated plants produce most nectar for pollinators?
Question: Pollinators are important for many crop types (e.g. wikipedia list), but I am wondering how important the crops are for feeding the pollinators. Obviously, some of the crop plants provide much more food than others, and the plants that depend on pollinators for their reproduction are likely to attract the pollinators with pollen, nectar, or both. But I am not sure that I can use the dependence on pollinators as an accurate estimator of the quantity of nectar that is actually available to pollinators. I can think of rapeseed as being very rich in nectar, but are there other "nectar rich" flowers? For instance, what about sunflower, potatoes, peas, sugar beet, mustard, flax, lucerne, clover, maize, wheat... To sum up, I would like to have a list of the most common European cultivated plants that produce a reasonable amount of nectar accessible to insect pollinators. (Note: I've asked a similar question about pollen, and maybe the answer is the same, but as far as I understood some plants have more pollen and some plants have more nectar.)
Here you will find a list of all significant nectar-producing plants in Europe Scroll to page 21 for average nectar production (mg sugar/day/flower) of important agricultural crop flowers Melliferous potential of twenty-seven plant families determined from the N data sets recorded in thirty-eight papers ranked by decreasing median values - scroll to page 5 under Table 1) The google keywords you are looking for are along the lines of: "melliferous potential of agricultural crops in europe" and variants of this
{ "domain": "biology.stackexchange", "id": 10166, "tags": "botany, entomology" }
$1°$ shift in sun's sidereal position over a day
Question: I was looking through the book "Astronomy - Principles and Practice 4th ed. - A. Roy, D. Clarke" and I got stuck at the following bold lines: The month is the next period of any significance to our watcher. During this time, the ideas about the heavens and their movements change. It will be noted that after a few nights the first group of stars seen above the eastern horizon just after sunset is markedly higher at first sight, with other groups under it becoming the first stars to appear. Indeed, after a month, the first group is about thirty degrees above the eastern horizon when the first stars are seen after sunset. It is then apparent that the Sun must shift its position against the stellar background as time passes. The rate is slow (about one degree per day—or about two apparent solar diameters) compared with its daily, or diurnal, movement about the Earth. I've just started reading, so I don't know anything about Astronomy. I tried to install Stellarium to check it out myself. I couldn't find any option, though, that could tell me the sidereal position of the Sun, so that I could see the phenomenon myself. Can anyone please tell me how to do so? (Probably by giving me a neat formula or theory/text based on this.) Also, according to Stellarium, the apparent diameter of the Sun is $0°32'$, so how come twice this is $1°$? Also, could anyone explain what the author means by "compared with its daily, or diurnal, movement about the Earth"? Is the author making the comparison that the diurnal movement of the Sun overshadows this $1°$ shift in the Sun's sidereal position over a day, hence "the rate is slow"? Answer: Imagine the Earth making a complete revolution around the Sun. As it does so, the stars "behind" the Sun from the point of view of the Earth appear to change. It takes a full year (365 days) for this process to complete. 
Not entirely by coincidence, there are 360 degrees in a circle, so the Sun's apparent position relative to the distant stars shifts by about 1 degree per day. By contrast the rotation of the Earth causes both the Sun and stars to appear to move across the sky once per day.
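The figures in the passage fit together as one line of arithmetic each; a quick sketch (the 23.934-hour sidereal day is a standard value, not from the text):

```python
# Orbital geometry behind "about one degree per day":
days_per_year = 365.25
drift_per_day = 360.0 / days_per_year          # sidereal drift of the Sun
print(f"drift: {drift_per_day:.3f} deg/day")   # ~0.986 deg/day

# Stellarium's apparent solar diameter of 0 deg 32' is 32/60 of a degree,
# so "two apparent solar diameters" is a rounded comparison with 1 degree:
solar_diameter_deg = 32.0 / 60.0
print(f"two diameters: {2 * solar_diameter_deg:.2f} deg")   # ~1.07 deg

# Versus the diurnal motion: the sky turns 360 deg in one sidereal day
diurnal_rate = 360.0 / 23.934                  # deg per hour
print(f"diurnal: {diurnal_rate:.1f} deg/hour")
```

So the ~1 degree/day sidereal drift is covered by the diurnal motion in about four minutes, which is why the book calls the rate "slow" by comparison.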
{ "domain": "physics.stackexchange", "id": 100214, "tags": "astronomy" }
Simple Cache Mechanism
Question: I have this simple cache mechanism for a repeated task in service. It uses static variable to store the information. Please suggest on how is it and if it could be better. Premise: I need to verify transaction from stores. There can be lots of transactions happening repeatedly from multiple stores. Thus, this simple cache manager for getting the store information. I want it to be simple yet effective and fast. StoreCacheManager.cs public class StoreCacheManager { private static List<StoreCacheInformation> _merchantStores = new List<StoreCacheInformation>(); private DBEntities _db; private TimeSpan _cacheTime = new TimeSpan(1, 0, 0);//1 Hour public TimeSpan CacheTimeSpan { get { return _cacheTime; } } public StoreCacheManager(DBEntities db) { _db = db; } public async Task<StoreCacheInformation> Get(int storeId) { if (_merchantStores.Any()) { var store = _merchantStores.FirstOrDefault(i => i.StoreID == storeId); if (store != null) { // Check if Cache time has expired if (store.CacheDateTimeUtc.Add(_cacheTime) < DateTime.UtcNow) { lock (_merchantStores) { _merchantStores.Remove(store); } } else { return store; } } } return await GetAndCache(storeId); } private async Task<StoreCacheInformation> GetAndCache(int storeId) { var store = await GetStoreInfo(storeId); if (store != null) { lock (_merchantStores) { _merchantStores.Add(store); } } return store; } private async Task<StoreCacheInformation> GetStoreInfo(int storeId) { var storeInfo = await _db.Stores.Where(i => i.StoreID == storeId).Select(i => new StoreCacheInformation() { CountryCode = i.CountryObj.CountryCode, MerchantID = i.VendorOrgID ?? 
0, TelephoneCode = i.CountryObj.TelephoneCountryCode, StoreID = storeId, //todo: deviceId and Token }).FirstOrDefaultAsync(); if (storeInfo != null) { storeInfo.CacheDateTimeUtc = DateTime.UtcNow; } return storeInfo; } } StoreCacheInformation.cs public class StoreCacheInformation { public int MerchantID { get; set; } public int StoreID { get; set; } public string TelephoneCode { get; set; } public string CountryCode { get; set; } public DateTime CacheDateTimeUtc { get; set; } public string DeviceId { get; set; } public string AuthToken { get; set; } } Answer: Concurrency If you want to implement it thread-safe, you also have to lock the whole transaction. For instance: store != null assumes that there is no item in the cache. Imagine that after that check another thread added one. That would result in a cache where the same item is cached twice. Consider using a thread-safe collection (e.g. ConcurrentDictionary) instead of using locking. .Net Framework already provides a thread-safe cache: MemoryCache. I am not very familiar with the entity framework, but as far as I know the DbContext is not thread-safe. Therefore it is not a good idea to use a single instance of it in multithreaded environments. Code Style _merchantStores, _db and _cacheTime should be read-only. Methods that return a Task should be called xxxAsync The property setters in StoreCacheInformation should be private or at least internal. Otherwise external code may modify the state of the cached items. For many cached items, it is better to use a dictionary instead of a list.
{ "domain": "codereview.stackexchange", "id": 20928, "tags": "c#" }
Understanding $\mathcal Z$-transforms and pole locations
Question: I am trying to gain a better understanding of pole locations in the $z$-plane of a given discrete transfer function, $H(z)$. I think I have a pretty good understanding of how to use the $\mathcal Z$-transform for filter design, but there is something about the core concepts that still confuses me. I understand that poles are locations in the $z$-plane (complex plane) where a specific value of $z$ results in $\left|H(z)\right|$ approaching infinity. This happens when the denominator of a $\mathcal Z$-transform expression has a zero at $z=a$, i.e. $$ H(z) =\frac{1}{z-0.5} $$ would have one pole at $z = 0.5$. The inverse $\mathcal Z$-transform of this fraction would be $$ h(n) = \mathcal{Z}^{-1}\left\{\frac{1}{z-0.5}\right\} = 2\cdot0.5^{n-1} $$ Since the pole magnitude is less than 1, this impulse response will decay as $n$ increases. So the response is stable (which happens as long as the pole lies within the unit circle). However, if we look at the definition of the $\mathcal Z$-transform, with $z$ written in complex exponential form, i.e. $z=re^{j\omega}$: $$ H(z) = H\left(re^{j\omega}\right) = \sum_{n=-\infty}^{n=\infty} h(n)\left(re^{j\omega}\right)^{-n} = \sum_{n=-\infty}^{n=\infty} h(n)r^{-n}e^{-j\omega n}, $$ we can now express the $\mathcal Z$-transform for our pole directly using $z=re^{j\omega}$ with $r=0.5$ and $\omega=0$: $$ H(0.5) = \sum_{n=-\infty}^{n=\infty} h(n)0.5^{-n} = \infty $$ We know this sum is infinity from our original transfer function. Now here is where my question begins to form. When $h(n)$ is summed against the $0.5^{-n}$ function, we get an unbounded summation, or a pole. Of course that can only happen if $h(n)$ is infinite (which it is), since the above summation for $H(0.5)$ could not be infinite otherwise. But if $h(n)$ is infinite such that it sums to infinity multiplied by $0.5^{-n}$, it would clearly also sum to infinity against $0.58^{-n}$ or $0.25^{-n}$ for that matter. 
So my question is, if there is a pole at 0.5, why aren't there poles all along the real axis (except for zero)? Why is there only a pole at 0.5? To make it explicit, we can use the specific $h(n)$ that was the inverse $\mathcal Z$-transform of $H(z)$, above: $$ H\left(\frac 14\right) = \sum_{n=-\infty}^{n=\infty} \left[2\cdot\left(\frac{1}{2}\right)^{n-1}\right]\cdot \left(\frac{1}{4}\right)^{-n} = \sum_{n=-\infty}^{n=\infty} 4\cdot 2^n = \infty $$ So, this sum makes it look like there is a pole at $z=0.25$ for the $\mathcal Z$-transform of $h(n)$, but the transfer function says there is not! In other words, the single pole makes sense to me looking at the $H(z)$ transfer function, but it doesn't make sense to me when I take it and use it directly in the $\mathcal Z$-transform expression. What am I doing wrong? Answer: The actual computation for the inverse Z transform is $$\frac1{z-0.5}=\frac1z\cdot\frac1{1-\frac{0.5}{z}}=z^{-1}\sum_{k=0}^{\infty}(0.5)^k z^{-k}=\sum_{n(=k+1)=1}^\infty (0.5)^{n-1}z^{-n}$$ From that you see that the series is one-sided, $h(n)=0$ for $n<1$. This should remove most of the infinities that you stumbled upon. Note that this geometric series only converges for $\left|\frac{0.5}{z}\right|<1$, that is $|z|>0.5$. For applications one is only interested in the values of the Z transform on or close to the unit circle, $|z|\approx 1$, the eigenvalues and singular values of the sequence shift operator. The unit circle is clearly and comfortably contained in the region of convergence. So for instance, if the pole were at $z=2$, then the inverse Z transform would again use the Laurent series on and close to the unit circle, i.e., for $|z|<2$, to give $$\frac1{z-2}=-\frac12\,\frac1{1-\frac z2}=-\sum_{n=0}^{\infty}2^{(-n)-1} z^{-(-n)},$$ that is, $h(n)=0$ for $n>0$ and $h(n)=-2^{n-1}$ for $n\le 0$
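The answer's point can be seen numerically: the defining sum only converges in the region of convergence $|z|>0.5$, so the blow-up the question found at $z=0.25$ signals leaving the ROC, not a second pole. A sketch using the one-sided $h(n)=0.5^{n-1}$, $n\ge 1$, derived in the answer:

```python
# The defining sum of the Z-transform converges only inside the ROC
# |z| > 0.5; there it agrees with the closed form 1/(z - 0.5), whose
# only pole is z = 0.5.  Outside the ROC the sum diverges even though
# the closed form (the analytic continuation) stays finite.

def partial_sum(z, n_terms):
    # h(n) = 0.5**(n-1) for n >= 1, and h(n) = 0 for n < 1 (one-sided)
    return sum(0.5 ** (n - 1) * z ** (-n) for n in range(1, n_terms + 1))

def closed_form(z):
    return 1.0 / (z - 0.5)

# Inside the ROC the partial sums converge to 1/(z - 0.5):
for z in (0.75, 1.0, 2.0):
    assert abs(partial_sum(z, 200) - closed_form(z)) < 1e-9

# At z = 0.25 (|z| < 0.5) the partial sums blow up: divergence, not a pole.
assert partial_sum(0.25, 50) > 1e6
print("ROC check passed")
```

The one-sidedness is what removes the question's paradox: had $h(n)$ been two-sided as the question implicitly assumed, the sum would indeed diverge everywhere.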
{ "domain": "dsp.stackexchange", "id": 1412, "tags": "discrete-signals, infinite-impulse-response, impulse-response, transfer-function, z-transform" }
How does Decimal to Binary conversion algorithm work?
Question: I'm reading a book which says that to transform a number from decimal representation to binary representation you have to follow these steps: Take the integer part (the one before the point) of the number and transform it to binary like you normally do with integer numbers: that is the integer part of the binary representation of the number we are trying to convert. Take the decimal part (the one after the point) of the original number and multiply it by 2; the integer part of the obtained number is a digit (the current one) of the decimal part of the binary number. Repeat step 2 until: You get 0 as the decimal part of the obtained number. Obtained numbers start to repeat; in that case the number is periodic. At the end you should get the binary transformation of the original number. I tried to apply the algorithm to the number 0,264 and got: 0,264 (dec part) -> 0,264 * 2 = 0,528 0,528 (dec part) -> 0,528 * 2 = 1,056 1,056 (dec part) -> 0,056 * 2 = 0,112 0,112 (dec part) -> 0,112 * 2 = 0,224 0,224 (dec part) -> 0,224 * 2 = 0,448 0,448 (dec part) -> 0,448 * 2 = 0,896 0,896 (dec part) -> 0,896 * 2 = 1,792 1,792 (dec part) -> 0,792 * 2 = 1,584 1,584 (dec part) -> 0,584 * 2 = 1,168 1,168 (dec part) -> 0,168 * 2 = 0,336 0,336 (dec part) -> 0,336 * 2 = 0,672 0,672 (dec part) -> 0,672 * 2 = 1,344 1,344 (dec part) -> 0,344 * 2 = 0,688 0,688 (dec part) -> 0,688 * 2 = 1,376 1,376 (dec part) -> 0,376 * 2 = 0,752 0,752 (dec part) -> 0,752 * 2 = 1,504 1,504 (dec part) -> 0,504 * 2 = 1,008 1,008 (dec part) -> 0,008 * 2 = 0,016 ... And so on. As you can see, we neither found 0 as the decimal part of any obtained number, nor did these numbers start to repeat. Nevertheless, trying to convert 0,264 using this online converter gets the following: 0,01000011100101011, which stops one step after where I stopped in the algorithm calculation I showed in this question. So I was wondering: Why does it stop there? 
Are there other criteria for stopping in the algorithm? Am I getting the algorithm wrong? I think it may have something to do with the precision you want to get, but then what should be the correct criteria for stopping? Answer: No, you have not done anything wrong. However, you have not done enough. You have done step 2 eighteen times. You will have to repeat it another 82 times. Then you will observe that the "obtained numbers start to repeat". The binary representation of $0.264$ is $$0.\underline{0100001110010101100000010000011000100100110111010010111100011010100111111011111001110110110010001011}$$ where I use the underline to indicate the first shortest period, which turns out to be the first 100 binary digits right after the decimal separator. This large period length is more the norm than an outlier. Here is a list of the lengths of the periods in the binary representations of a few decimal fractions (decimal fraction → period length of its binary representation): 0.6 → 4; 0.06 → 20; 0.006 → 100; 0.0006 → 500; 0.00006 → 2500. You might have noticed a few patterns in the list above. I will leave them for you to explore, discover, verify and extend. Why does it stop there? That website cuts off the computation of the binary digits at some arbitrary length. Or that is what appears to me. A different website or tool may choose a different cut-off point, such as this one. You can stop the algorithm at any time you prefer. If you would like to run the algorithm entirely, follow step 3. It is an algorithm after all.
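The book's algorithm, with both of its stopping criteria, can be run exactly using rational arithmetic; a sketch (periodicity is detected by watching for a repeated fractional remainder):

```python
from fractions import Fraction

def binary_fraction(x, max_digits=200):
    """Run the multiply-by-2 algorithm on an exact Fraction (not a float).
    Returns (prefix_digits, periodic_digits); the period list is empty
    when the expansion terminates (stopping criterion 1 of the book)."""
    seen = {}                        # fractional remainder -> position
    digits = []
    frac = Fraction(x)
    frac -= int(frac)                # keep only the fractional part
    while frac and frac not in seen and len(digits) < max_digits:
        seen[frac] = len(digits)
        frac *= 2
        digits.append(int(frac))     # integer part is the next bit
        frac -= int(frac)
    if frac == 0:
        return digits, []            # terminates exactly
    start = seen[frac]               # remainder repeats -> periodic
    return digits[:start], digits[start:]

prefix, period = binary_fraction(Fraction(264, 1000))
assert prefix == [] and len(period) == 100   # 0.264: purely periodic, 100 bits
print("0." + "".join(map(str, period[:17])) + "...")

# A terminating case for contrast: 0.625 = 0.101 in binary
assert binary_fraction(Fraction(5, 8)) == ([1, 0, 1], [])
```

The question's hand computation never saw a repeat because the remainders only recur after 100 steps; the exact `Fraction` arithmetic is essential here, since a float input would always terminate after at most ~53 bits.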
{ "domain": "cs.stackexchange", "id": 21075, "tags": "binary, number-formats, base-conversion" }
Do atoms get created or are they recycled?
Question: Basically, are the atoms that make up my body right now something that has existed since the big bang? Answer: Not exactly. Fusion of lighter nuclei in supernova nucleosynthesis is thought to be responsible for many of the heavier atoms that make up the periodic table. While there hasn't been a supernova in our part of the galaxy for quite some time, plenty of supernovae are occurring throughout the universe right now. So, while you are made of old stuff, in terms of atoms, most of it probably isn't as old as the big bang itself. “The nitrogen in our DNA, the calcium in our teeth, the iron in our blood, the carbon in our apple pies were made in the interiors of collapsing stars. We are made of star stuff.” -Carl Sagan
{ "domain": "physics.stackexchange", "id": 10175, "tags": "atoms, big-bang" }
Logic formula for exactly n unique objects (no more, no less)
Question: I have a question in Logic: If I am asked to construct a formula, using the '=' predicate, that shows that there are exactly n objects, I need to show that there are no n+1 objects, right? For example, to show that there are exactly 3 objects, I will show that there are no 4 objects. Also, I need to show that there exist n objects. My question is, do I need to show that there is no case where only n-1 objects exist? For example, to show that there are exactly 3 objects, do I need to show that there is no case in which there are only 2 objects? If so, is that the correct form to do so? : 1) Showing that there is no case of 4 elements: $ {\lnot}{\exists}x{\exists}y{\exists}z{\exists}w(x{\neq}y{\land}x{\neq}z{\land}x{\neq}w{\land}y{\neq}z{\land}y{\neq}w{\land}z{\neq}w) $ 2) Showing that there are 3 elements: $ {\exists}x{\exists}y{\exists}z(x{\neq}y{\land}x{\neq}z{\land}y{\neq}z) $ 3) Showing that there is no case of only 2 elements: $ {\lnot}{\exists}x{\exists}y{\lnot}{\exists}z(x{\neq}y{\land}x{\neq}z{\land}y{\neq}z) $ And finally, combining the three: $ (1){\land}(2){\land}(3) $ I am really not sure. Thanks in advance EDIT: Actually, assuming that I do need to show that there is no case of less than 3 elements (n elements), I would probably need to show that there is no case where there is just one element as well (from 1 until n-1), am I correct? Answer: You need to say that (a) there are at least $n$ elements, and (b) there are at most $n$ elements. To express (a), $$ L_n := \exists x_1\dotsc \exists x_n\, \left( \bigwedge_{1\le i < j \le n} x_i\ne x_j \right). $$ To express (b), $$ M_n := \forall x_1\dotsc \forall x_{n+1}\, \left( \bigvee_{1\le i < j \le n+1}x_i = x_j\right). $$ So the sentence $L_n \land M_n$ holds iff there are exactly $n$ elements.
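As a sanity check, the semantics of $L_n$ and $M_n$ can be brute-forced over small finite domains; direct enumeration of all variable assignments stands in for the quantifiers. A sketch (function names are mine):

```python
from itertools import product

def at_least(n, domain):
    # L_n: there exist x1..xn that are pairwise distinct
    return any(len(set(xs)) == n for xs in product(domain, repeat=n))

def at_most(n, domain):
    # M_n: for all x1..x_{n+1}, at least one pair is equal
    return all(len(set(xs)) <= n for xs in product(domain, repeat=n + 1))

def exactly(n, domain):
    # L_n AND M_n: the domain has exactly n elements
    return at_least(n, domain) and at_most(n, domain)

# L_3 AND M_3 holds precisely in the 3-element domains
for m in range(6):
    assert exactly(3, range(m)) == (m == 3)
print("ok")
```

This confirms that the conjunction $L_n \land M_n$ holds in a model iff the domain has exactly $n$ elements, with no extra clause needed for "not exactly $n-1$": that case is already excluded by $L_n$.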
{ "domain": "cs.stackexchange", "id": 5892, "tags": "logic, sets, first-order-logic, finite-sets" }
Exporting objects in various formats while reporting progress
Question: Description A WinForms application has a function to export objects of the following type, in various formats: class Item { public int id { get; set; } public string description { get; set; } } At the click of a button in a window, a SaveFileDialog is shown, and currently it provides the option to save the data in .txt, .csv or .xlsx format. Since there are sometimes hundreds or thousands of objects, and the UI should not freeze up, a Task is used to run this operation. This implementation works, but could be improved. Code public partial class ExportWindow : Form { // objects to be exported List<Item> items; // event handler for the "Export" button click private async void exportButton_click(object sender, System.EventArgs e) { SaveFileDialog exportDialog = new SaveFileDialog(); exportDialog.Filter = "Text File (*.txt)|*.txt|Comma-separated values file (*.csv)|*.csv|Excel spreadsheet (*.xlsx)|*.xlsx"; exportDialog.CheckPathExists = true; DialogResult result = exportDialog.ShowDialog(); if (result == DialogResult.OK) { var ext = System.IO.Path.GetExtension(exportDialog.FileName); try { // update status bar // (it is a custom control) statusBar.text("Exporting"); // now export it await Task.Run(() => { switch (ext.ToLower()) { case ".txt": saveAsTxt(exportDialog.FileName); break; case ".csv": saveAsCsv(exportDialog.FileName); break; case ".xlsx": saveAsExcel(exportDialog.FileName); break; default: // shouldn't happen throw new Exception("Specified export format not supported."); } }); } catch (System.IO.IOException ex) { statusBar.text("Export failed"); logger.logError("Export failed" + ex.Message + "\n" + ex.StackTrace); return; } } } private delegate void updateProgressDelegate(int percentage); public void updateProgress(int percentage) { if (statusBar.InvokeRequired) { var d = new updateProgressDelegate(updateProgress); statusBar.Invoke(d, percentage); } else { _updateProgress(percentage); } } private void saveAsTxt(string filename) { IProgress<int>
progress = new Progress<int>(updateProgress); // save the text file, while reporting progress.... } private void saveAsCsv(string filename) { IProgress<int> progress = new Progress<int>(updateProgress); using (StreamWriter writer = new StreamWriter(filename)) { // write the headers and the data, while reporting progress... } } private void saveAsExcel(string filename) { IProgress<int> progress = new Progress<int>(updateProgress); // EPPlus magic to write the data, while reporting progress... } } Questions How can this be refactored to make it more extensible? That is, if I wanted to add support for more file types, make it easier and quicker to modify. The switch statement could get very long. Essentially, how to comply with the Open/Closed principle? Answer: I would suggest moving the actual export(s) into their own class. We can create an interface for exports. Something along the lines of public interface IExport<T> { Task SaveAsync(string fileName, IEnumerable<T> items, IProgress<int> progress = null); string ExportType { get; } } Then each export type can implement this interface. public class ExportItemsToText : IExport<Item> { public Task SaveAsync(string fileName, IEnumerable<Item> items, IProgress<int> progress = null) { throw new NotImplementedException(); } public string ExportType => "txt"; } Then in your constructor of ExportWindow public ExportWindow(IEnumerable<IExport<Item>> exports) { // if using DI; otherwise you could just fill in the dictionary here ExportStrategy = exports.ToDictionary(x => x.ExportType, x => x); } Instead of a switch statement you can now just look up the key in the dictionary to find which export should be run; if it is not found, that is the same as your default case.
IExport<Item> exporter; if (ExportStrategy.TryGetValue(ext.ToLower().TrimStart('.'), out exporter)) { await exporter.SaveAsync(exportDialog.FileName, items, new Progress<int>(updateProgress)); } else { throw new Exception("Specified export format not supported."); } Now in the future, to add support for more types you just implement the interface and update your DI container. Or, if not using DI, you would need to add it to the constructor of your ExportWindow. If you really don't want to create a class per export (though I think you should), you could make the dictionary an IDictionary<string, Action<string>>, put your methods in there, and when adding a new type create the method and update the dictionary. I don't think this is a great idea, though.
{ "domain": "codereview.stackexchange", "id": 39201, "tags": "c#, .net" }
Rotating a lever — small end has more force but less speed?
Question: Suppose I have a massless lever with two masses at the ends and a fixed pivot as depicted below. Assume no gravity in this scenario. Further, let's say object $B$ is closer to the pivot, so $r<R$. Now let us suppose a force $\vec{F}_{1}$ perpendicular to the lever and of constant magnitude is applied to object $A$ for some time to supply a torque. By the law of the lever, a force of magnitude $$ F_{2} = F_{1}\frac{R}{r} $$ is applied to object $B$. Since $r < R$, it must be that the force applied to $B$ is of greater magnitude than that of the force on $A$. However, both objects are attached to the same lever, so they must have the same angular velocity and in particular $$ v_{B} = v_{A}\frac{r}{R}. $$ Thus the speed of $B$ must be less than the speed of $A$. It seems that $B$ received a greater force, yet it is moving with less speed. This seems like an apparent contradiction. Is there a way to resolve this? I am aware that there is centripetal force that needs to be considered in a thorough analysis. I am assuming the pivot keeps the entire system in place as it rotates. Answer: What accelerates object A is not the force $F_1$ that you applied to it, it's the net force that it experiences. $F_1$ is one component of the net force, but there is also a reaction force from the lever that you need to consider. Rather than calculate this reaction force, the usual way to solve the problem is to find the moment of inertia of the two joined masses, and the torque you are applying at object A, and from those calculate the angular acceleration of the lever and masses. If you must know the net force on object A, you can calculate it from its angular acceleration, lever arm, and mass.
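Following the answer's recipe, the calculation looks like this (a sketch; the mass symbols $m_A$ and $m_B$ are mine, not from the question):

```latex
% Moment of inertia of the two point masses about the pivot
I = m_A R^2 + m_B r^2
% Applied torque and the resulting angular acceleration
\tau = F_1 R \qquad\Longrightarrow\qquad
\alpha = \frac{\tau}{I} = \frac{F_1 R}{m_A R^2 + m_B r^2}
% Net tangential force actually experienced by B
F_B^{\text{net}} = m_B\, r\, \alpha = \frac{m_B\, r\, R\, F_1}{m_A R^2 + m_B r^2}
```

Note that $F_B^{\text{net}}$ is in general not the lever-law value $F_1 R/r$; the difference is supplied by the internal reaction of the rigid lever, which is exactly the point of the answer.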
{ "domain": "physics.stackexchange", "id": 88981, "tags": "newtonian-mechanics, rotational-dynamics, torque, rotation, angular-velocity" }
Elliptical orbit of revolution of earth
Question: Kepler's well-known laws state that the Earth revolves around the Sun in an elliptical path with the Sun at one of the foci. My question is rather simple: why? For the Earth to be in equilibrium, that is, not accelerated toward or away from the Sun, the gravitational force must be balanced by the centrifugal force (a pseudo-force in the frame rotating with the Earth). But along an elliptical path its distance from the Sun, and hence the gravitational force, keeps changing, so how can this balance hold? I suspect it may be due to the presence of other celestial bodies, but that seems too vague and feels like dodging the real answer. Also, most derivations in gravitation (at least at my level) assume a circular path of the Earth around the Sun. What is the real physics behind this? Answer: What you are missing is that the Earth's speed varies along its orbit, unlike on a circular one. More precisely, at any point of the orbit, the acceleration of the Earth has a component tangent to the orbit, and a component perpendicular to the orbit. For a circular orbit, the former is always zero, and the latter therefore exactly accounts for the gravitational force, i.e. it is a centripetal acceleration. That was in an inertial frame, so now if we take a frame rotating along with the Earth, that's the picture you had in mind: the centrifugal force, which is equal and opposite to the centripetal force in the inertial frame, balances the gravitational force. But for an elliptic orbit, the component of the acceleration tangent to the orbit is only zero at aphelion and perihelion. Since the gravitational force equals the mass times the sum of the tangential and normal components of the acceleration, we can say that part of the gravitational force bends the trajectory toward the Sun, and part accelerates the Earth along the trajectory (or decelerates it, depending on the position on the orbit).
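In symbols (notation assumed here, not from the original answer): with $\hat{\mathbf{t}}$ and $\hat{\mathbf{n}}$ the unit tangent and inward normal to the orbit, $v$ the orbital speed and $\rho$ the local radius of curvature, Newton's second law along the orbit reads

```latex
m\,\mathbf{a}
  \;=\; m\,\dot{v}\,\hat{\mathbf{t}} \;+\; m\,\frac{v^{2}}{\rho}\,\hat{\mathbf{n}}
  \;=\; -\,\frac{G M m}{r^{2}}\,\hat{\mathbf{r}}
```

On a circular orbit $\dot{v}=0$, so gravity is purely centripetal; on an ellipse $\dot{v}\neq 0$ everywhere except perihelion and aphelion, which is exactly the tangential component the answer describes.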
{ "domain": "physics.stackexchange", "id": 43631, "tags": "newtonian-mechanics, newtonian-gravity, orbital-motion, celestial-mechanics" }
How can I set the path so the programs are always found?
Question: So far, I have installed Ubuntu, installed ROS, and installed various related programs like TurtleBot. When I try to run the programs, ROS can't find the files. How can I set the path so the programs are always found? It seems that I am doing this the hard way. What is the easy way? Originally posted by MovieMaker on ROS Answers with karma: 11 on 2011-11-07 Post score: 0 Original comments Comment by Stefan Kohlbrecher on 2011-11-07: Your question title could be a bit more expressive, "What do i do now?" basically is the underlying theme of 80% of questions here :) Answer: I would suggest taking a look at the introductory tutorials - particularly configuring the ROS environment. That includes information on setting up paths. Originally posted by JonW with karma: 586 on 2011-11-07 This answer was ACCEPTED on the original site Post score: 4
{ "domain": "robotics.stackexchange", "id": 7215, "tags": "ros" }
Berry Phase for Bloch electrons
Question: I am new to the topic of Berry phase. The definition says that the Berry phase depends only on the path in the parameter space of $R$, where the Hamiltonian is $H(R)$, but in whatever problems I have seen, the parameter itself has a time dependence. Even for the case of Bloch electrons, we can calculate a Berry phase for a cyclic excursion in the parameter space $k$ of the lattice. The real space in the lattice is absolutely time independent; my question is: will there be a Berry phase if we perform a cyclic excursion of an electron in the real space of the lattice? Answer: In real space, the adiabatic Berry phase of a closed orbit just measures the magnetic flux through the orbit's area. Explanation: (please see Sundaram and Niu) The semiclassical equations of motion of a Bloch electron in phase space are given by (Sundaram and Niu equation 3.8) $$\mathbf{\dot{x}_c} = \frac{\partial \mathcal {E}_M}{\partial \mathbf{k_c} }-\mathbf{\dot{k}_c} \times\mathbf{ \Omega}$$ $$\mathbf{\dot{k}_c} = -e \mathbf{E} - \mathbf{\dot{x}_c} \times\mathbf{ B}$$ Where: $\mathbf{x_c}$, $\mathbf{k_c}$ are the electron wavepacket center of mass position and momentum respectively, $\mathcal{E}_M$ is the magnetic Bloch band energy, $\mathbf{ E}$ and $\mathbf{ B}$ are the electric and magnetic fields respectively and $\mathbf{ \Omega}$ is the Berry curvature. These equations of motion can be obtained from the Lagrangian (Sundaram and Niu equation 3.7): $$L = \mathcal {E}_M(\mathbf{k_c}) + e \phi(\mathbf{x_c})+ \mathbf{\dot{x}_c}\cdot\mathbf{k_c} - e \mathbf{\dot{x}_c} \cdot \mathbf{A} + \mathbf{\dot{k}_c} \cdot \mathbf{A}_B $$ Where $\phi$ is the electromagnetic scalar potential, $\mathbf{A}$ is the electromagnetic vector potential and $\mathbf{A}_B$ is the Berry potential. Please observe that the above formulation is symmetric between the configuration space and the momentum space. The Berry (geometric) phase emerges from the vector potential terms in the Lagrangian.
Thus just as the adiabatic Berry phase in the momentum space integrates the Berry potential over the orbit, the Berry phase in the configuration space integrates the electromagnetic potential over the orbit giving the magnetic flux. Namely, the full Berry phase is given by : $$\phi_B = e^{i \oint e \mathbf{A} \cdot d\mathbf{x} + i \oint e \mathbf{A}_B \cdot d\mathbf{k}} = e^{i \int_{\Sigma} e \mathbf{B} \cdot d\hat{\mathbf{x}} + i \int_{\Sigma_k} \mathbf{\Omega} \cdot d\hat{\mathbf{k}}} $$ Where the line integrals are over an orbit in phase space. The second equality is a consequence of Stokes' theorem; the second surface integral is the usual Berry phase evaluated in momentum space while the first is the magnetic flux.
{ "domain": "physics.stackexchange", "id": 50329, "tags": "quantum-mechanics, solid-state-physics, berry-pancharatnam-phase" }
PERT Chart exercise - is there enough information to solve?
Question: I'm posting this here as my question is from a computer science textbook. It is not a homework question as I am self-studying and I have the answers available. In the exercise below, I can't see how I'm supposed to know which activity to place on which arrow. I can't tell if there is insufficient information provided or if it is a lack of understanding on my part which makes it impossible to decide which arrow out of node 2 should be labelled B and which with C, for example. If it is solvable given the information provided, could someone please explain the method? Answer: I don't know if there is a standard method to identify which activity matches which arrow, but I was able to complete your PERT chart by examining a few possible cases, while filling the chart from left to right. Each case was accepted or rejected after a while. I found that the higher arrow matches B and the lower C. Note that if you try to match C with the higher arrow, (2,3), E will match with the (3,5) arrow, but then you will face the problem that only one activity depends on E while there are two arrows starting from it that have to be filled. Otherwise, if you match (2,3) with B, (3,5) will match with D, and then there are two activities dependent on D, as desired. More uncertainties will arise, but they are resolvable by considering the different cases. So the information provided is enough to solve the exercise. One possible typing error that I identified in the problem statement is that Q doesn't depend on K, L and P only, but it depends on M too. That's why there are four arrows entering node 15 instead of three. Otherwise there is no lack of information and you can solve the problem as I described above. A second way to solve the problem is to create the PERT chart based on the data written in the table by yourself.
The graph that you will make will be isomorphic to the graph that you will get if you merge the nodes connected by dotted edges on the graph of the problem statement. So after that you will have to create an edge matching between the two graphs in order to find which activity matches which arrow on the given graph. Both methods will lead to a correct solution; you can try them for practice.
{ "domain": "cs.stackexchange", "id": 13868, "tags": "network-analysis" }
Pandas: how can I create multi-level columns
Question: I have a pandas DataFrame which has the following columns: n_0 n_1 p_0 p_1 e_0 e_1 I want to transform it to have columns and sub-columns: 0 n p e 1 n p e I've searched in the documentation, and I'm completely lost on how to implement this. Does anyone have any suggestions? Answer: Finally, I found a solution. You can find the example script below. #!/usr/bin/env python3 import pandas as pd import itertools import numpy as np data = pd.DataFrame(np.random.randn(10, 5), columns=('0_n', '1_n', '0_p', '1_p', 'x')) indices = set() groups = set() others = set() for c in data.columns: if '_' in c: (i, g) = c.split('_') indices.add(int(i)) groups.add(g) else: others.add(c) columns = list(itertools.product(groups, indices)) columns = pd.MultiIndex.from_tuples(columns) ret = pd.DataFrame(columns=columns) for c in columns: ret[c] = data['%d_%s' % (int(c[1]), c[0])] for c in others: ret[c] = data['%s' % c] print("Before:") print(data) print("") print("After:") print(ret) Sorry for this...
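The same transformation can be written much more compactly: split each `letter_index` column name into a tuple and assign a `MultiIndex` directly. A sketch using the column names from the question (`sort_index` just groups the sub-columns under each top level):

```python
import numpy as np
import pandas as pd

data = pd.DataFrame(np.random.randn(4, 6),
                    columns=['n_0', 'n_1', 'p_0', 'p_1', 'e_0', 'e_1'])

# 'n_0' -> (0, 'n'): the number becomes the top level,
# the letter becomes the sub-column.
def split_name(name):
    letter, index = name.split('_')
    return int(index), letter

data.columns = pd.MultiIndex.from_tuples([split_name(c) for c in data.columns])
data = data.sort_index(axis=1)   # group sub-columns under each top level

print(data[0].columns.tolist())  # ['e', 'n', 'p']
```

Selecting `data[0]` then yields a plain DataFrame with columns n, p, e, which is the layout asked for in the question.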
{ "domain": "datascience.stackexchange", "id": 6333, "tags": "pandas" }
Do error checking costs of quantum computing shrink BQP?
Question: BQP is the set of problems solvable in polynomial time for a given error tolerance, and it is suspected to be larger than P (and BPP, which is probably equal to P). However, since gates cannot act perfectly, error correction would require some overhead. What is the overhead cost in the algorithm? In particular, does either the time overhead or the number of qubits grow more than polynomially in the problem size (if it did, BQP would be altered)? Answer: The threshold theorem says that if the error rate is below the threshold, a quantum algorithm with T locations (breadth times depth) can be made fault-tolerant with a blow-up (in both number of qubits and circuit size) by a factor which is a polynomial in the log of T. This is not enough to change BQP.
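Stated quantitatively (the constants and the exact polylog exponent depend on the particular code construction; this is a sketch of the standard statement):

```latex
% Threshold theorem: a circuit with T locations, target error \varepsilon,
% and physical error rate p below the threshold p_{\mathrm{th}}
\text{fault-tolerant size} \;=\; O\!\left( T \cdot \operatorname{polylog}\frac{T}{\varepsilon} \right)
\qquad \text{for } p < p_{\mathrm{th}}
```

A polylogarithmic blow-up applied to a polynomial-size circuit is still polynomial in the input size, which is why BQP survives intact.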
{ "domain": "physics.stackexchange", "id": 5168, "tags": "quantum-computer, noise" }
Clarification on U-Tube force analysis
Question: This is just a clarifying question about Chester Miller (https://physics.stackexchange.com/users/102308/chester-miller)'s answer to the following question: Oscillations in U-tube. Sorry if this is not the place to ask, but I don't have enough reputation to comment on his original answer so I figured I'd give this a shot. Chester, if this gets marked as a duplicate, I would really appreciate it if you are able to reach out to me at lcoumb@gmail.com :) I'm trying to determine the equation for oscillation of a U-tube specifically using force analysis (I already understand how to do so using conservation of energy). Chester created the following equations for the oscillation of fluid in a U-tube. My question is regarding the integration that happened between equations (5) and (6): I understand (more or less) the integration that occurred on the right side, but am unclear on what happened on the left side. Would it be possible to explain this in further detail? Thanks so much!! If we divide Eqn. 1 by $S\Delta z$ and take the limit as $\Delta z$ approaches zero, we obtain:$$\frac{\partial p}{\partial z}+\rho g=-\rho\frac{d^2x}{dt^2}\tag{3}$$ Eqn. 3 applies to the region above point A in the left column. Similarly, for the horizontal region between points A and B, we have: $$\frac{\partial p}{\partial y}=-\rho\frac{d^2x}{dt^2}\tag{4}$$where y is the horizontal coordinate measured from A to C. Finally, for the right hand column above point B, we have: $$\frac{\partial p}{\partial z}+\rho g=+\rho\frac{d^2x}{dt^2}\tag{5}$$ If we integrate these equations along the continuous path (contour) between the top of the left column at z = h+x (where the pressure is atmospheric) to the top of the right column at z = h-x (where the pressure is again atmospheric), where h is the equilibrium height, we obtain:$$2\rho g x=-L\rho \frac{d^2x}{dt^2}\tag{6}$$ Answer: $$\int_{h+x}^{h-x}{\rho gdz}=\int_{h+x}^{0}{\rho gdz}+\int_{0}^{h-x}{\rho gdz}=-2\rho g x$$
{ "domain": "physics.stackexchange", "id": 40653, "tags": "fluid-dynamics" }
Initializing a ROS2 Message before calling rcl_take
Question: I am writing a custom type support which uses the C type support to save work. Unfortunately I failed to find exact documentation about using the C type support, which left me with this question: does a message have to be initialized before being passed to rcl_take? I looked at the rclc example, which initializes the message, but the documentation for rcl_take says that only an allocated struct is required into which the taken message will be copied, which would lead to a memory leak if there is already a message with pointers to allocated arrays stored there. Answer: It should be initialized, and therefore all fields should be initialized, including strings and sequences. The type support code is responsible for resizing, or finalizing and then re-initializing, fields as needed. The docs don’t explicitly say initialized, and perhaps they should (tag me on a PR if you’d like to add that), but the implication is that you shouldn’t pass in uninitialized memory; otherwise it’s impossible to tell if the complex fields like strings and such need to be cleaned up first. Consider that it’s also a perfectly legitimate use case for a user to allocate one message on the stack, initialize it once (which is done automatically in C++), and then call take on it over and over again, reusing the message each time.
{ "domain": "robotics.stackexchange", "id": 38670, "tags": "ros2, ros-humble, c, memory" }
How would you improve braking capability on a hovercraft?
Question: Pretty much letting my mind free-wheel. Assume a fleet of air-supported hovercraft were to replace cars etc. on the streets. Assume also that the present traffic-signal/pedestrian rules remain unchanged. As I understand it, hovercraft come to a gradual stop, similar to a train. What would be the equivalent of disc brakes on a hovercraft? i.e. How would you improve braking capability on a hovercraft? p.s. Friction, Aerodynamics, Inertia, hence the post here in the Physics forum, but please feel free to vote as OT ... Answer: To deal with lift fan failure, you'd need some landing pads anyway. If you design those properly you can use them for braking, too. The biggest problem might be how you'd lose the air cushion rapidly in case of emergency braking. Maglev trains use a similar solution, as they've got the same problem.
{ "domain": "physics.stackexchange", "id": 11728, "tags": "friction, aerodynamics, inertia" }
Constructing and maintaining a complete binary tree
Question: Problem statement: Create and maintain a Complete Binary Tree in C. Include the following operations. Insert a given key and perform inorder Replace ALL occurrences of the given key with the then Last Element of the Tree. Then Remove the last node. Query -> return number of occurrences of the key Size -> Given a key, return the number of nodes in the subtree I have written this code, which passes all the public test cases. Is there a better way to maintain this structure, for example, with the help of an array? I am implementing every DS from scratch. #include<stdio.h> #include<stdlib.h> int lastLabel = 0; struct node { int data; int label; struct node* parent; struct node* rightChild; struct node* leftChild; }; struct node* createNode(int d) { struct node* newN = (struct node*)malloc(sizeof(struct node)); newN->data = d; newN->leftChild = '\0'; newN->rightChild = '\0'; lastLabel++; newN->label = lastLabel; return newN; } struct Queue { int front,rear; int size; struct node** array; }; typedef struct tree { struct node* root; int size; }BinaryTree; ////////Binary Tree Helper Functions////////////////////// BinaryTree* createTree() { BinaryTree* t = (BinaryTree*)malloc(sizeof(BinaryTree)); t->root = '\0'; t->size = 0; return t; } int size(BinaryTree *t) { return t->size; } struct node* root(BinaryTree *t) { return t->root; } struct node* parent(struct node* n) { return n->parent; } int isInternal(struct node *n) { return n->leftChild != '\0' || n->rightChild != '\0'; } int isExternal(struct node *n) { return !isInternal(n); } int isRoot(struct node* n) { return n->parent == '\0'; } int hasBothChild(struct node* temp) { if((temp!= '\0') && (temp->leftChild != '\0') && (temp->rightChild != '\0')) return 1; } ////////Binary Tree Helper Functions////////////////////// /////////Queue Helper Functions////////////////////////// // // //createQueue takes queueSize as input and returns a '\0'-initialized queue struct Queue* createQueue(int size) { struct Queue* 
queue = (struct Queue*) malloc(sizeof(struct Queue)); queue->front = queue->rear = -1; queue->size = size; queue->array = (struct node**)malloc(queue->size * sizeof(struct node*)); int i; for(i = 0; i < size; i++) queue->array[i] = '\0'; return queue; } //check if Queue is empty int isEmpty(struct Queue* queue) { return queue->front == -1; } //check if Queue is Full int isFull(struct Queue* queue) { return (queue->rear == queue->size-1); } //check if Queue has only one Item int hasOnlyOneItem(struct Queue* queue) { return (queue->front == queue->rear); } //ENQUEUE void Enqueue(struct node* root, struct Queue *queue) { if(isFull(queue)) return; queue->array[++queue->rear] = root; if(isEmpty(queue)) ++queue->front; } //DEQUEUE struct node* Dequeue(struct Queue* queue) { if(isEmpty(queue)) return '\0'; struct node* temp = queue->array[queue->front]; if (hasOnlyOneItem(queue)) queue->front = queue->rear = -1; else ++queue->front; return temp; } //Get Front of the Queue struct node* getFront(struct Queue* queue) { return queue->array[queue->front]; } /////////Queue Helper Functions////////////////////////// //Helper function to find the number of nodes of a particular subTree int sizeFind(struct node* stree) { if(stree == '\0') return 0; else return(sizeFind(stree->leftChild) + 1 + sizeFind(stree->rightChild)); } //Helper function to find the a particular nodes given the node's key int sizeQuery(struct node* root,int key, int size) { struct Queue *queue = createQueue(size); struct node *temp_node = root; while(temp_node) { if(temp_node->data == key) { return sizeFind(temp_node); } if(temp_node->leftChild != '\0') { Enqueue(temp_node->leftChild, queue); } if(temp_node->rightChild != '\0') { Enqueue(temp_node->rightChild, queue); } temp_node = Dequeue(queue); } return 0; } //insert data in the pre-existing Complete Binary Tree void insert(struct node** root, int data, struct Queue* queue) { struct node* temp = createNode(data); if(!*root) { *root = temp; } else { struct 
node* front = getFront(queue); if((front->leftChild) == '\0') { front->leftChild = temp; temp->parent = front; } else if((front->rightChild) == '\0') { front->rightChild = temp; temp->parent = front; } if(hasBothChild(front)) Dequeue(queue); } Enqueue(temp,queue); } //perform Level Order Traversal void levelOrder(struct node* root, int size) { struct Queue* queue = createQueue(size); Enqueue(root, queue); int label = 0; while(!isEmpty(queue)) { struct node* temp = Dequeue(queue); label++; temp->label = label; if(temp->leftChild) Enqueue(temp->leftChild, queue); if(temp->rightChild) Enqueue(temp->rightChild, queue); } } //perform InOrder Traversal void inOrder(struct node* root) { if(root == '\0') return; if(isInternal(root)) inOrder(root->leftChild); printf("%d ", root->data); if(isInternal(root)) inOrder(root->rightChild); } //perform Query int Query(struct node* root,int key,int size) { int count = 0; int rear,front; struct Queue *queue = createQueue(size); struct node *temp_node = root; while(temp_node) { if(temp_node->data == key) { count++; } if(temp_node->leftChild != '\0') { Enqueue(temp_node->leftChild, queue); } if(temp_node->rightChild != '\0') { Enqueue(temp_node->rightChild, queue); } temp_node = Dequeue(queue); } return count; } //Get Pointer will return the node given the Root of the CBT and the Label struct node* getPointer(int label,struct node* root) { struct node* parentPointer; struct node* child; if(root!= '\0' && label == 1) return root; else { parentPointer = getPointer(label/2, root); child = parentPointer->leftChild; // printf("What should have Happened here Label %d %d %d \n",label, child->data,child->label); // printf("What should have Happened here Label %d %d %d \n",label, parentPointer->leftChild->data, parentPointer->leftChild-> label); if(parentPointer != '\0' && child != '\0') { if((parentPointer->leftChild->label) == label) return parentPointer->leftChild; else return parentPointer->rightChild; } } } //The helper function will 
remove the node containing the Key(multiple instances possible), then it would replace that node with the Last Node struct node* Delete(struct node* root,int key,int size) { int count = 0; int rear,front; struct Queue *queue = createQueue(size); struct node *temp_node = root; while(temp_node) { if(temp_node->data == key) { struct node* lastValue = getPointer(lastLabel,root); if(lastValue != '\0') { temp_node->data = lastValue->data; if(lastValue->label == lastValue->parent->leftChild->label) lastValue->parent->leftChild = '\0'; else lastValue->parent->rightChild = '\0'; } free(lastValue); lastLabel--; } if(temp_node != NULL) { if(temp_node->leftChild != '\0') { Enqueue(temp_node->leftChild, queue); } if(temp_node->rightChild != '\0') { Enqueue(temp_node->rightChild, queue); } } if(!(temp_node != NULL && temp_node->data == key)) temp_node = Dequeue(queue); } return root; } int main() { int num_items; int key; int num_Ops; char op; int op_key; int ctr; int qcount; int i; int stree_ctr; scanf("%d",&num_items); struct node* root = '\0'; struct Queue* queue = createQueue(num_items); for(ctr = 0; ctr < num_items; ctr++) { scanf("%d",&key); insert(&root,key, queue); } levelOrder(root,num_items); inOrder(root); printf("\n"); //num_items is just the initial number of elements scanf("%d",&num_Ops); for(i = 0; i < num_Ops ; i++) { while((op = getchar())== '\n'); scanf("%d",&op_key); if(op == 'i') { insert(&root,op_key,queue); inOrder(root); printf("\n"); } else if(op == 'q') { qcount = Query(root,op_key,lastLabel); printf("%d\n",qcount); } else if(op == 's') { stree_ctr = sizeQuery(root,op_key,lastLabel); printf("%d\n",stree_ctr); } else if(op == 'r') { root = Delete(root,op_key,lastLabel); inOrder(root); printf("\n"); } } return 0; } Answer: I see a number things that may help you improve your code. Rethink your data structures The code comments claim that the structure being implemented is a tree, but there seems to be a Queue object tangled up in your tree. 
This both complicates the tree code and confuses the reader of the code. It would be better to strive for a clean tree implementation instead. For example, the current Query routine both leaks memory and is overly complex because it also incorporates a Queue. Here's a simpler recursive rewrite: int Query(struct node* root, int key) { if (root == NULL) return 0; int count = Query(root->leftChild, key); if (root->data == key) count++; count += Query(root->rightChild, key); return count; } Use NULL instead of '\0' for pointers The value '\0' is a single character, but the value NULL is an implementation-defined null-pointer constant. It is not guaranteed to have the value 0. Only check conditions once The current code for the inOrder routine is this (with the previous note implemented): void inOrder(struct node* root) { if(root == NULL) return; if(isInternal(root)) inOrder(root->leftChild); printf("%d ", root->data); if(isInternal(root)) inOrder(root->rightChild); } However, there's no need for the call to isInternal. If the check is eliminated, it only means that the first check will catch the issue on the next recursive call. In other words, the code could be cleaned up like this: void inOrder(struct node* root) { if(root == NULL) return; inOrder(root->leftChild); printf("%d ", root->data); inOrder(root->rightChild); } Eliminate unused variables Unused variables are a sign of poor code quality, so eliminating them should be a priority. In this code, Query and Delete both define but do not use rear and front. Your compiler is probably also smart enough to tell you that, if you ask it to do so. Eliminate global variables where practical The code declares and uses a global variable, lastLabel. Global variables obfuscate the actual dependencies within code and make maintenance and understanding of the code that much more difficult. It also makes the code harder to reuse.
For all of these reasons, it's generally far preferable to eliminate global variables and to instead pass pointers to them. That way the linkage is explicit and may be altered more easily if needed. Ensure every control path returns a proper value The hasBothChild routine returns 1 under some set of conditions but then doesn't return anything at all otherwise. This is an error. The code could instead be written like this: int hasBothChild(struct node* temp) { return temp != NULL && temp->leftChild != NULL && temp->rightChild != NULL; } There is a similar problem with getPointer. Create subfunctions that can be used in multiple places This code has multiple places where it looks for a node with a particular value. That is a strong clue that it should be its own routine. For example: // returns a pointer to the first node with matching key // or NULL if none found struct node *find(struct node *root, int key) { if (root == NULL) return NULL; if (root->data == key) return root; root = find(root->leftChild, key); if (root == NULL) root = find(root->rightChild, key); return root; } Now your sizeQuery routine becomes extremely simple: //Count the nodes in the subtree with the given key int sizeQuery(struct node* root,int key) { return sizeFind(find(root, key)); } Eliminate memory leaks The code does not release all memory that it allocates. That's a bug that should be fixed. Eliminating the Queue structure entirely should help with that considerably. Omit return 0 at the end of main The compiler will automatically generate a return 0; at the end of main so it is not necessary to supply your own.
{ "domain": "codereview.stackexchange", "id": 13767, "tags": "c, tree" }
What edges are not in a Gabriel graph, yet in a Delaunay graph?
Question: It is known that the Gabriel graph of a point set $P \subset \mathbb{R}^2$, $\mathcal{GG}(P)$ is a subset of the corresponding Delaunay graph $\mathcal{DG}(P)$, i.e. $\mathcal{GG}(P) \subseteq\mathcal{DG}(P)$. A Gabriel graph is defined as: The Gabriel graph $\mathcal{GG}(P)$ of a point set $P \subset \mathbb{R}^2$ is defined as follows. The vertex set is $P$ and two vertices $p, q$ in $P$ are connected by an edge if and only if the interior of the disk with diameter $\overline{pq}$ is empty and $p, q$ are the only two points on its boundary. In particular, this implies that $\overline{pq}$ is also an edge in the Delaunay graph of P. However, I am struggling to visualize or understand in what cases an edge $(p, q)$ would be in $\mathcal{DG}(P)$ but not in $\mathcal{GG}(P)$. Could someone give me an example? It seems to me that, by the empty-circle property of Voronoi edges, and their duality with the Delaunay graph, every edge in the Delaunay graph should satisfy the definition of a Gabriel graph. It is obvious that I am missing some edge case. Answer: From my answer to a similar question: A Delaunay edge $(x,y)$ won't be a Gabriel edge, if the set of all the possible empty disks with $x$ and $y$ on their boundaries doesn't contain the disk with minimum possible radius (whose radius is half the length of this edge). In other words, this will happen when the Delaunay edge $(x,y)$ doesn't intersect the segment (or ray), separating sites $x$ and $y$ in the Voronoi diagram.
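To make this concrete, here is a small Python sketch (the three coordinates are made-up values for illustration). With only three non-collinear points, all three triangle edges are Delaunay edges, so an obtuse triangle immediately yields an example: the long edge opposite the obtuse vertex fails the diametral-disk test that defines Gabriel edges.

```python
import math

# Hypothetical 3-point configuration (coordinates chosen for illustration):
# an obtuse triangle, with the obtuse angle at r, opposite the long edge pq.
p, q, r = (0.0, 0.0), (4.0, 0.0), (2.0, 0.5)

def in_diametral_disk(a, b, c):
    """True if c lies strictly inside the disk whose diameter is segment ab."""
    center = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    return math.dist(center, c) < math.dist(a, b) / 2.0

# With only three non-collinear points, all three edges are Delaunay edges.
# Edge pq fails the Gabriel test: r sits inside its diametral disk.
print(in_diametral_disk(p, q, r))                              # True  -> pq is Delaunay but not Gabriel
print(in_diametral_disk(p, r, q), in_diametral_disk(q, r, p))  # False False -> pr, qr are Gabriel edges
```

Here the smallest disk through $p$ and $q$ (the diametral one) is not empty even though some larger empty disk through $p$ and $q$ exists, which is exactly the answer's criterion for a Delaunay edge that is not a Gabriel edge.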
{ "domain": "cs.stackexchange", "id": 19475, "tags": "graphs, computational-geometry, planar-graphs, voronoi-diagrams" }
What does "Information" and Virtual Particles mean?
Question: I've read that attraction and repulsion between particles is caused by the exchange of virtual photons, and that virtual photons carry information. I don't understand how a virtual photon actually causes any attraction or repulsion, and how does it carry information anyway if it's "virtual"? Aren't photons an excitation of the electromagnetic field? Answer: Virtual particles, whether photons or electrons or... are, in the context of QFT, particles that are off-shell, i.e., their associated energy and momentum are not related by the relativistic energy-momentum relation. Please read this to get an idea of how virtual particle exchange can create attractive or repulsive forces. Photons are quanta of the modes of the quantized electromagnetic field. This isn't easy to explain without some background in the quantum harmonic oscillator and, even then... However, imagine that a guitar string can vibrate with only discrete amplitudes and these amplitudes have an energy that is a multiple of a fundamental energy. Then, if the guitar string were vibrating with the 1 unit of this fundamental energy, we would say that 1 "photon" of guitar string vibration were present. If the string were vibrating with n units of this fundamental energy, we would say that n "photons" were present.
{ "domain": "physics.stackexchange", "id": 10640, "tags": "quantum-field-theory, quantum-electrodynamics, information, virtual-particles" }
Salt generation in C#
Question: I have designed a small method to generate a salt for a password. All this does is create a random salt and does nothing to add it to the password. I have a few questions regarding my very simple method: Is this a secure generator? Currently, it encodes in base64. Is this an issue in any way? Are there any other potential issues? How could I improve this in terms of security, speed, etc... using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; using System.Text; using System.Security.Cryptography; namespace Test { public class Program { // Here is a method to generate a random password salt private static string getSalt() { var random = new RNGCryptoServiceProvider(); // Maximum length of salt int max_length = 32; // Empty salt array byte[] salt = new byte[max_length]; // Build the random bytes random.GetNonZeroBytes(salt); // Return the string encoded salt return Convert.ToBase64String(salt); } static void Main(string[] args) { System.Console.WriteLine(getSalt()); System.Console.ReadKey(); } } } Answer: var You should use it with max_length and salt too. If the type is obvious from the right hand side of the assignment you should use var. From here Under what circumstances is it necessary for a variable's type to be clearly understood when reading the code? Only when the mechanism of the code -- the "how it works" -- is more important to the reader than the semantics -- the "what its for". So basically in a line like byte[] salt = new byte[max_length]; or int max_length = 32; the type of the variables does not add any value to the code. It is too obvious what the assigned type is, using the type instead of var only adds noise to the code without any real value. Disposing If an object implements IDisposable you should either call its Dispose() method or enclose it in a using block.
Naming Although variables local to a method aren't mentioned in the naming guidelines, I would suggest using camelCase casing for naming them instead of snake_case casing. For naming private methods you should use PascalCase casing. Comments Comments should only describe why something is done. Let the code itself explain what is done by using meaningful and readable names. A very good answer about comments can be found here: https://codereview.stackexchange.com/a/90113/29371 getSalt() You should allow passing the max_length to the method instead of hardcoding it. This has the advantage that a change to this value won't require the class/method to be changed. If you make it an optional parameter you can still call it as if it had no parameter. Edit Based on the valid comment from Johnbot you should rather use an overloaded GetSalt() method instead of using an optional parameter. I don't see the need for converting to base64. Encryption algorithms use byte[] arrays, so you should just return the byte[]. Applying the above would lead to private static int saltLengthLimit = 32; private static byte[] GetSalt() { return GetSalt(saltLengthLimit); } private static byte[] GetSalt(int maximumSaltLength) { var salt = new byte[maximumSaltLength]; using (var random = new RNGCryptoServiceProvider()) { random.GetNonZeroBytes(salt); } return salt; }
{ "domain": "codereview.stackexchange", "id": 14134, "tags": "c#, security" }
Why do the liquid levels on both the containers match?
Question: When we take two containers and fill them both with same or different liquids and put one in the other and let the container reach equilibrium, why do we observe that both the liquid levels match, supposing the containers are ideal and are massless? (Diagram for reference) Answer: If the liquids have the same density and the inner container is massless the levels will match. Otherwise, they won't (except in some special cases). In general, you just need to use Archimedes's law: $V_{\mathrm{displaced}} \rho_{\mathrm{outside}} g = M g$, where $M$ is the total mass of the inner vessel and liquid contained in it.
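To see when the levels do match, here is a short worked case of the answer's criterion (assuming a thin-walled, massless inner vessel of uniform cross-section $A$, filled to height $h_{\mathrm{in}}$ with liquid of density $\rho_{\mathrm{in}}$, floating in an outer liquid of density $\rho_{\mathrm{out}}$):

```latex
% Archimedes: weight of displaced outer liquid balances the weight of the
% floating inner vessel plus its contents (vessel itself assumed massless)
\rho_{\mathrm{out}}\, A\, h_{\mathrm{sub}}\, g = \rho_{\mathrm{in}}\, A\, h_{\mathrm{in}}\, g
\qquad\Longrightarrow\qquad
h_{\mathrm{sub}} = \frac{\rho_{\mathrm{in}}}{\rho_{\mathrm{out}}}\, h_{\mathrm{in}}
```

Here $h_{\mathrm{sub}}$ is the submerged depth of the inner vessel, so its liquid surface sits at height $h_{\mathrm{in}} - h_{\mathrm{sub}}$ above the outer surface. That vanishes exactly when $\rho_{\mathrm{in}} = \rho_{\mathrm{out}}$: equal densities give matching levels, a denser inner liquid floats lower, a lighter one higher.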
{ "domain": "physics.stackexchange", "id": 85818, "tags": "fluid-statics" }
Unresolved External Symbol C++
Question: I need to use sockets from winsock2.h. So, I wrote a class NetObject to use it. But when I compile, I get an error: error LNK2019: unresolved external symbol "public: int __thiscall wsa::NetObject::connect(void)" (?connect@NetObject@wsa@@QAEHXZ) referenced in function _main I am using the Visual C++ compiler. Here is some code: //wsa.h #ifndef WSA_H_ #define WSA_H_ #include <winsock2.h> namespace wsa { sockaddr_in init(const int, const char *, const int); sockaddr_in init(const int, const char *); SOCKET init(const int, const int, const int); class NetObject { private: SOCKET sock; sockaddr_in addr; int check(const int); public: NetObject ( const int addr_family, const int sock_type, const int protocol, const char * ip_addr, const int port ): sock(init(addr_family, sock_type, protocol)), addr(init(addr_family, ip_addr, port)) {}; int connect(); }; } #endif //WSA_H_ //wsa.cpp #include "wsa.h" #include <winsock2.h> namespace wsa { sockaddr_in init(const int af, const char * ip_addr) { sockaddr_in addr; memset(&addr, 0, sizeof(addr)); addr.sin_family = af; addr.sin_addr.s_addr = inet_addr(ip_addr); return addr; } sockaddr_in init(const int af, const char * ip_addr, const int port) { sockaddr_in addr = init(af, ip_addr); addr.sin_port = htons(port); return addr; } SOCKET init ( const int af, const int type, const int protocol ) { SOCKET s; s = socket(af, type, protocol); while (s == INVALID_SOCKET) { if (closesocket(s) == SOCKET_ERROR) { return INVALID_SOCKET; } s = socket(af, type, protocol); } return s; } inline int NetObject::check(const int code) { if (code == SOCKET_ERROR) { if (closesocket(sock) == SOCKET_ERROR) return 1; return SOCKET_ERROR; } return 0; } inline int NetObject::connect() { int res = ::connect(sock, (sockaddr *)&addr, sizeof(addr)); return check(res); } } //main.cpp #include "wsa.h" #include <winsock2.h> const int af = AF_INET; const int type = SOCK_STREAM; const int protocol = IPPROTO_TCP; int main(int argc, char *
argv[]) { WSADATA ws; if (WSAStartup(MAKEWORD(2, 0), &ws) == -1) { return 1; } wsa::NetObject server(af, type, protocol, argv[1], 80); server.connect(); WSACleanup(); } I can't see any errors. Please, help! Answer: That's because in Visual Studio, including winsock2.h in your headers is not enough. You have to tell the linker that your project will need the library wsock32.lib. You can do that either by editing the properties of your project, or by adding this line to your code: #pragma comment(lib, "wsock32.lib")
{ "domain": "codereview.stackexchange", "id": 4851, "tags": "c++" }
Neutral current: terminology
Question: In particle physics, where does the term 'neutral current' originate? An example would be an electron exchanging a Z boson with another electron. I understand that the Z boson itself is neutral, but surely 'current' refers to electron and its associated amplitude in this case? Answer: The neutral current is electrically neutral. To see why, one must first understand what the current is. It is a composite field or (in the quantum theory) an operator, something like $$ J^{\mathrm{(NC)}\mu}(f) = \bar{u}_{f}\gamma^{\mu}\frac{1}{2}\left(g^{f}_{V}-g^{f}_{A}\gamma^{5}\right)u_{f}, $$ where $u_f$ is the Dirac field for the fermion $f$. Note that the current is a product of the charged field and its complex conjugate, along with some coefficients, contractions, and gamma matrices, so the electric charge cancels between $\bar u_f$ and $u_f$. Equivalently, the current is an operator that is creating a particle and an antiparticle at the same moment (or creates+destroys a particle; or creates+destroys an antiparticle) and this particle pair is electrically neutral. The field $u_f$ itself is charged but the current isn't just $u_f$. Of course, one may see more easily why the neutral current has to be neutral. It's because the Lagrangian contains terms like $Z_\mu J^\mu$ and because $Z_\mu$ is an electrically neutral field, $J^\mu$ must be electrically neutral as well for the Lagrangian to preserve the electric charge. The neutral currents should be contrasted with the previously identified charged currents which are schematically $\bar u_e \cdots u_\nu $ which is a product of an electron or positron field and the neutrino field so that the charges don't cancel. Equivalently, the operator of the charged current may be seen to carry the charge $\pm e$ because the Lagrangian contains products like $J_{\rm charged}^\mu W^\pm_\mu$ and the charge has to be compensated. 
In all cases, the charge of an operator is determined from the phase by which the operator transforms under gauge transformations – it adds an extra $\exp(iQ\lambda)$ – or, equivalently, from the commutator $[Q,L]$ of the charge $Q$ with the operator $L$ that is equal to the charge times $L$.
{ "domain": "physics.stackexchange", "id": 9054, "tags": "particle-physics, terminology, definition" }
What is the equivalent smoothing function to running the same Gaussian 8 times?
Question: Suppose there is an image which is to be smoothed by convolving it with a Gaussian kernel with standard deviation $\sigma$. If the image is then smoothed with this kernel 8 consecutive times, is the total smoothing still Gaussian with some calculable standard deviation $\hat{\sigma}$? Or is it some other distribution instead? Answer: From Wikipedia: Applying multiple, successive gaussian blurs to an image has the same effect as applying a single, larger gaussian blur, whose radius is the square root of the sum of the squares of the blur radii that were actually applied. So in your case, that would be equivalent to blurring the image with a single Gaussian with $\hat{\sigma}$: $$ \hat{\sigma} = \sqrt{8\sigma^2} = 2\sqrt{2} \sigma $$
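As a numerical sanity check of the $\hat{\sigma}=\sqrt{8}\,\sigma$ claim, here is a pure-Python sketch (the value $\sigma=2$ and the $5\sigma$ truncation radius are arbitrary choices). It exploits the fact that variances add exactly under discrete convolution, so the impulse response of eight successive blurs should have variance $8\sigma^2$, matching a single blur with $\sqrt{8}\,\sigma$:

```python
import math

def gauss_kernel(sigma):
    # sampled, normalized 1-D Gaussian, truncated at ~5*sigma
    radius = int(math.ceil(5 * sigma))
    k = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    # full (zero-padded) 1-D convolution
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, a in enumerate(signal):
        for j, b in enumerate(kernel):
            out[i + j] += a * b
    return out

def variance(seq):
    # second central moment, treating the normalized sequence as a distribution
    mean = sum(i * v for i, v in enumerate(seq))
    return sum(v * (i - mean) ** 2 for i, v in enumerate(seq))

sigma = 2.0
eight = [1.0]                    # unit impulse
for _ in range(8):               # blur it 8 consecutive times with the same kernel
    eight = convolve(eight, gauss_kernel(sigma))

single = gauss_kernel(sigma * math.sqrt(8))   # one blur with sigma_hat = sqrt(8)*sigma

print(variance(eight), variance(single))      # both come out close to 8*sigma**2 = 32
```

The tiny residual disagreement comes from kernel truncation and sampling, not from the underlying identity, which is exact for continuous Gaussians.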
{ "domain": "dsp.stackexchange", "id": 3288, "tags": "gaussian, smoothing" }
Polchinski Equation (7.2.4)
Question: On page 209 of Polchinski's string theory book he writes down the expectation value of a product of vertex operators on the torus; equation $(7.2.4)$. The derivation is analogous to an earlier calculation on the sphere, equation $(6.2.17)$, and I'm perfectly happy with the result except for the factor of $2\pi/\partial_\nu \vartheta_1(\nu\vert\tau)$. Can anyone give me an insight into how this term appears? Thanks. EDIT: Following Lubos's answer. The expectation value we wish to calculate is \begin{align} \Bigg< \prod_{i=1}^n :e^{ik_i \cdot X(z_i,\overline z_i)}:\Bigg>_{T^2} &= iC^X_{T_2}(\tau) (2\pi)^d \delta^d(\sum_i k_i) \\& \exp \Big(-\sum_{i<j} k_i \cdot k_j \, G'(w_i,w_j) - \frac{1}{2}\sum_i k_i^2 G_r'(w_i,w_i) \Big) \end{align} The second line follows just as in eq. $(6.2.17)$, and the Green functions are $$ G'(w,w') = -\frac{\alpha'}{2} \ln \Bigg\vert \vartheta_1\Big(\frac{w-w'}{2\pi}\Big\vert \tau\Big) \Bigg\vert^2 + \alpha' \frac{[Im(w-w')]^2}{4\pi\tau_2}$$ \begin{align} G'_r(w,w)&=G'(w,w)+\alpha'\omega(w)+\frac{\alpha'}{2}\ln \vert w-w'\vert^2 \\&= -\frac{\alpha'}{2}\ln\Bigg\vert \frac{\partial_\nu\vartheta_1(0|\tau)}{2\pi} \Bigg\vert^2 +\alpha'\omega(w) \end{align} Where we have used $$ \left. \vartheta_1 \left( \frac{w-w'}{2\pi} | \tau \right)\right|_{w\to w'} \to \partial_\nu\vartheta_1(0|\tau)\cdot \left(\frac{w-w'}{2\pi} \right) $$ as explained by Lubos. Substituting these into the original equation and taking the curvature to infinity $\omega\to 0$, we find \begin{align} \Bigg< \prod_{i=1}^n :e^{ik_i \cdot X(z_i,\overline z_i)}:\Bigg>_{T^2} &= iC^X_{T_2}(\tau) (2\pi)^d \delta^d(\sum_i k_i) \\& \times\prod_{i<j} \Bigg\vert \frac{2\pi}{\partial_\nu \vartheta_1(0\vert\tau)}\vartheta_1\Big(\frac{w_{ij}}{2\pi}\Big\vert\tau\Big)\exp\Big[-\frac{(Im w_{ij})^2}{4\pi\tau_2}\Big] \Bigg\vert^{\alpha' k_i \cdot k_j} \end{align} As in equation $(7.2.4)$.
Answer: This extra factor arises from the analogy of the conformal factor $\alpha'\omega$ term in (6.2.16). The required $\omega$ is $$\omega = \ln \left ( \frac{2\pi}{\partial_\nu\vartheta_1}\right) $$ and substituting it to the exponential we get $$ \exp\left( -\frac{\alpha'}{2}\sum_ik_i^2 \cdot \ln \frac{2\pi}{\partial_\nu\vartheta_1} \right) = \left(\frac{2\pi}{\partial_\nu\vartheta_1} \right)^{-\alpha' \sum_i k_i^2/2} = \left(\frac{2\pi}{\partial_\nu\vartheta_1} \right)^{+\alpha' \sum_{i\lt j} k_i k_j} $$ which gives exactly the factor whose origin you wanted to trace. As Joe says, this factor you asked about comes from "normalized self-contractions", which refers to the $\sum_i$ in the exponent, but because $\sum_i k_i = 0$ (and its inner-product square is zero, too), we may convert this $\sum_i$ to $\sum_{i\lt j}$ above. The aforementioned required $\omega$ is determined as follows. For the self-contractions, the following factor we use for the contractions should be substituted with $w-w'=0$ $$ \vartheta_1 \left( \frac{w-w'}{2\pi} | \tau \right) $$ but it vanishes at that point, so the value has to be computed by l'Hospital rule – or the first term from the Taylor expansion as a function of $w-w'$, if you wish: $$ \left. \vartheta_1 \left( \frac{w-w'}{2\pi} | \tau \right)\right|_{w\to w'} \to \partial_\nu\vartheta_1(0|\tau)\cdot \left(\frac{w-w'}{2\pi} \right) $$ This expression must coincide with the $\omega$-dependent "self-contraction" exponential from (6.2.17) which fixes the value of $\omega$. Effectively, if you got a result omitting the factor mentioned in the question, it is analogous to saying that $f(0)\to x$ if $f(0)=0$ but the right leading approximation is $f(0)\to f'(0) x$ for such functions.
{ "domain": "physics.stackexchange", "id": 22048, "tags": "homework-and-exercises, string-theory, conformal-field-theory" }
Guess how many are in the jar game in Java - Take 2
Question: Original question can be found here I took the advice I got to heart, and re-wrote it. Once again I'd appreciate any advice as to how I can improve this using best practices. Jar.java package com.tn.jar; public class Jar { private String itemName; private int numberOfItems; private int numberToGuess; private int numberOfGuesses; /* * Default constructor */ public Jar() { this.numberOfGuesses = 0; } /* * Getters */ public String getItemName() { return itemName; } public void setItemName(String itemName) { this.itemName = itemName; } public int getNumberOfItems() { return numberOfItems; } public void setNumberOfItems(int numberOfItems) { this.numberOfItems = numberOfItems; } public int getNumberToGuess() { return numberToGuess; } /* * Setters */ public void setNumberToGuess(int numberToGuess) { this.numberToGuess = numberToGuess; } public int getNumberOfGuesses() { return numberOfGuesses; } public void setNumberOfGuesses(int numberOfGuesses) { this.numberOfGuesses = numberOfGuesses; } public void incrementNumberOfGuesses() { numberOfGuesses++; } } Prompter.java package com.tn.jar; import java.util.Random; import java.util.Scanner; public class Prompter { private Jar jar; private Scanner scanner; /* * Public constructor */ public Prompter(Jar jar) { this.jar = jar; this.scanner = new Scanner(System.in); } /* * Method that actually starts the game */ public void play() { adminSetup(); playerSetup(); } /* * Admin setup */ private void adminSetup() { printTitle("Administrator setup"); setupGame(); } /* * Player setup */ private void playerSetup() { printTitle("Player"); gameExplanation(); areYouReady(); getGuess(); } /* * Setup game. * The user is prompted with a couple of questions for filling the jar instance. 
*/ private void setupGame() { String itemName = askQuestion("Name of items in the jar: "); String numberOfItems = askQuestion("Maximum of lentils in the jar: ");; while(isNan(numberOfItems)) { numberOfItems = askQuestion("Maximum of lentils in the jar: "); } jar.setItemName(itemName); jar.setNumberOfItems(Integer.parseInt(numberOfItems)); jar.setNumberToGuess(new Random().nextInt(Integer.parseInt(numberOfItems)) + 1); } /* * User must confirm whether or not he is ready to play before the game starts. */ private void areYouReady() { do { System.out.print("Ready? (press ENTER to start guessing): "); } while (scanner.nextLine().length( ) > 0); } /* * The user can guess the answer. It will loop until the answer is correct. */ private void getGuess() { do { System.out.print("Guess: "); jar.incrementNumberOfGuesses(); } while(guess() != jar.getNumberToGuess()); printResult(); } /* * Utility method for getting the users guess */ private int guess() { String answer = scanner.nextLine(); while(isNan(answer)) { System.out.print("Guess: "); answer = scanner.nextLine(); } return Integer.parseInt(answer); } /* * Some explanation before the game starts. */ private void gameExplanation() { System.out.printf("Your goal is to guess how many lentils are in the jar. Your guess should be between 1 and %d%n", jar.getNumberOfItems()); } /* * Utility method that accepts input. */ private String askQuestion(String question) { System.out.print(question); String result = scanner.nextLine(); return result; } /* * Utility method for printing out headers */ private void printTitle(String title) { System.out.printf("%n%s%n=========================%n%n", title.toUpperCase()); } /* * When the user answers correctly, he recieves the game stats. */ private void printResult() { System.out.printf("%nCongratulations - you guessed that there are %d" + " lentils in the jar! 
It took you %d" + " guess(es) to get it right.%n", jar.getNumberToGuess(), jar.getNumberOfGuesses()); } /* * Utility method for checking if the string read by the system can be parsed as an integer */ private static boolean isNan(String string) { try { Integer.parseInt(string); } catch(NumberFormatException nfe) { return true; } return false; } } Game.java package com.tn.jar; public class Game { public static void main(String[] args) { Jar jar = new Jar(); Prompter prompter = new Prompter(jar); prompter.play(); }; } Answer: Now this is a lot easier to follow than your previous version. Though, there are still a few things. /* * Getters */ If you require such comments to separate your classes into regions, you're doing something wrong. Personally, I like to have the following layout and order to my class: Constants Static members Members Static constructor Constructor Static methods Methods Subclasses And within all these categories everything is sorted alphabetically. Now many people will frown when I say "sort your stuff alphabetically", because many prefer to sort it logically, which means that the first used functions are first. I prefer alphabetically because there is never a doubt where a function is and you can easily navigate the whole file (you're seeing a function named "measure" and you know that "getValues" is above it and "setName" is below it). But that is my personal preference. /* * Utility method for checking if the string read by the system can be parsed as an integer */ private static boolean isNan(String string) { You obviously misunderstood what Javadoc is. It is not only commenting above the functions, it is associating documentation with a function. Well-formed Javadoc would look like this:
* @return {@code true} if the given value is not a number. * @see Integer#parseInt */ private static boolean isNan(String string) { Note the two asterisks, which indicate that this is not a comment, but a Javadoc block. From this, countless tools and IDEs are able to associate the Javadoc block with the function below and automatically provide and generate documentation for this function. Prompter is not only prompting, it is doing everything. I therefore suggest that you rename Game to Main, and Prompter to Game. Even though your naming is great, it is a little bit off from time to time. For example: adminSetup does not set up something that has to do with administration tasks, it initializes the game. playerSetup does not only set up the player, it also starts the game. getGuess is not only getting a guess, it loops until the guess is correct. Functions should be named after what they do, and they should only do that one thing. String numberOfItems = askQuestion("Maximum of lentils in the jar: ");; while(isNan(numberOfItems)) { numberOfItems = askQuestion("Maximum of lentils in the jar: "); } jar.setItemName(itemName); jar.setNumberOfItems(Integer.parseInt(numberOfItems)); Your askQuestion function reads and returns a String, which you then try to parse as an int. It would be a lot better if you had separate functions, like this: private String promptForString(String prompt); private int promptForInt(String prompt); Which would shorten your code to: jar.setItemName(promptForString("Name of items in the jar: ")); jar.setNumberOfItems(promptForInt("Maximum of lentils in the jar: ")); The Scanner class has multiple different read* methods, have a look at and use them.
String itemName = askQuestion("Name of items in the jar: "); String numberOfItems = askQuestion("Maximum of lentils in the jar: ");; Again, my personal preference: I hate that aligning of statements, I have a hard time reading such code because my brain switches into some sort of "column mode", and I have a hard time associating the values with the names. But that might just be me. Also, it is a lot of work to maintain. Otherwise this feels like an improvement to me. It feels like a lot less code which is easier to follow.
{ "domain": "codereview.stackexchange", "id": 20907, "tags": "java, object-oriented, number-guessing-game" }
Electrical gyroscope
Question: We all love the Gyroscope; basically it has a large angular momentum which stops it from being pushed over. I'm going to assume ideal conditions for a moment- no resistance etc. We can create angular momentum from electricity; just electrons moving around. Thus it should be possible to have a Gyroscope that instead of spinning physically in a way the eye can see, just has a large current through it. It would still have the magical property of not being pushed over. This would make a neat magic trick and so I'm hoping someone did this experiment but I couldn't find it on youtube. Is the reasoning correct? Is there a video demonstrating this? Thanks! Answer: The angular momentum of electrons circulating in a wire is negligibly small compared to the angular momentum of iron atoms in a wheel rotating around its axle, which makes the iron wheel a practical way of making a gyroscope. BTW note that the most modern gyroscopes use beams of light circulating around in fiber optic waveguides, and phenomenally sensitive detectors that can measure tiny shifts in the phase of those light beams which occur when the waveguide assembly is rotated about its axis.
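To put a number on "negligibly small", here is a back-of-the-envelope Python sketch (the current, loop radius, and flywheel parameters are assumed values chosen only for illustration). For electrons carrying current $I$ around a loop of radius $r$, the drift speed cancels out of the total angular momentum, leaving $L_e = (m_e/e)\, I\, 2\pi r^2$:

```python
import math

M_E = 9.109e-31       # electron mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C

def electron_angular_momentum(current, radius):
    # Electrons of drift speed v carry current I around a loop of radius r.
    # Charge in the loop: Q = (I / v) * 2*pi*r, so N = Q / e electrons, each
    # contributing m_e * v * r of angular momentum; the unknown v cancels:
    #   L = N * m_e * v * r = (m_e / e) * I * 2*pi*r**2
    return (M_E / E_CHARGE) * current * 2 * math.pi * radius ** 2

# Assumed illustrative numbers: a hefty 10 A circulating in a 10 cm loop...
L_electrons = electron_angular_momentum(10.0, 0.1)

# ...versus a modest flywheel: a 1 kg solid disk of the same radius at 100 rad/s.
L_wheel = 0.5 * 1.0 * 0.1 ** 2 * 100.0    # (1/2) M r^2 * omega

print(L_electrons, L_wheel)   # roughly 11 orders of magnitude apart
```

The electron contribution comes out around $10^{-12}\ \mathrm{kg\,m^2/s}$ against $0.5\ \mathrm{kg\,m^2/s}$ for the wheel, which is why a current loop shows no visible gyroscopic stiffness.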
{ "domain": "physics.stackexchange", "id": 85869, "tags": "electromagnetism, angular-momentum, gyroscopes" }
When using Planck's constant, how do I know when to use electron volts or joules?
Question: Planck's constant may be written as 6.63 x 10^-34 Js or 4.14 x 10^-15 eVs right? But how do I choose which one I should use to figure out an answer to a question? Answer: Your question is not primarily about Planck's constant, but about the meaning and use of units in physics. That is where you should focus your intellectual energy in order to resolve this question. You may think you are familiar with units, but I think your question suggests you should try to become even more familiar. I would recommend you first return to some very simple examples of units, and then gradually generalize until you have really 'got' it, where I mean understood it fully, to the point where your intuition and instincts align with your reasoning faculty. So a simple example would be buying bananas. Suppose, for the sake of argument, that the supermarket sells bananas in groups of 5. It is arranged that every bunch of bananas has 5 bananas. Then we can measure the number of items of fruit either in bananas or in bunches. 1 banana = 0.2 bunches 12 bananas = 2.4 bunches 2 bunches = 10 bananas etc. Your question about Planck's constant is like someone giving you a box of fruit and asking for calculations involving the contents of the box, and you are asking "should I use bananas or bunches when doing calculations and reporting results?" The answer is that you use whichever is more convenient. But since energy is less familiar than items of fruit, I'll write a bit more in order to work up to the example of electron-volts. We might go next to other familiar examples such as distance which can be measured in metres, miles, inches, millimetres etc., and time which can be measured in seconds, hours, nanoseconds etc. I am sure all this is reasonably familiar. From these one can construct units of velocity---not just the familiar metres per second and miles per hour, but also all other combinations such as inches per year and things like that. 
The point is that although the conversion factors are often real numbers rather than simple ratios of integers, all this involves fundamentally the same idea as my original example of bananas and bunches. Now we come to electron-volts. The first thing is to be clear that the electron-volt is a unit of energy. It is not a charge or a voltage or a time or a banana but an energy. Then by using the definition (the first line below) one can begin to work with it: $$ \begin{array}{rcl} 1 \mbox{ electron-volt} &=& 1.60218 \times 10^{-19} \mbox{ joules}\\ 12 \mbox{ electron-volt} &=& 12 \times 1.60218 \times 10^{-19} \mbox{ joules} \\ &=& 1.92261 \times 10^{-18} \mbox{ joules} \\ 1 \mbox{ joule} &=& 6.24151 \times 10^{18} \mbox{ electron-volts}\\ etc. \end{array} $$ Finally, I will offer some advice in order to minimise the chance of making mistakes. Because SI units are familiar, it is often helpful just to put everything in SI units during a calculation, and then convert to whatever other units you like right at the end. However, this is not always the best policy. I think we each discover, by calculation of many examples, when the use of some other units becomes more convenient. Whenever a group of factors in an equation gives an overall result which is dimensionless, then you can calculate that group on its own using whatever units you find convenient.
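The bananas-and-bunches bookkeeping is literally all there is to the eV/joule conversion. A small Python sketch (the function names are mine; the constants are the exact values fixed by the 2019 SI redefinition):

```python
# 1 eV is, by definition, the energy gained by one elementary charge crossing
# one volt; since the 2019 SI redefinition this conversion factor is exact.
EV_IN_JOULES = 1.602176634e-19

def ev_to_joules(energy_ev):
    return energy_ev * EV_IN_JOULES

def joules_to_ev(energy_j):
    return energy_j / EV_IN_JOULES

# Planck's constant: the same quantity quoted with two different "bunch sizes".
h_joule_s = 6.62607015e-34          # J*s (exact in the 2019 SI)
h_ev_s = joules_to_ev(h_joule_s)    # comes out near 4.136e-15 eV*s

print(h_ev_s)
```

Which form you quote is purely a matter of convenience: eV·s pairs naturally with atomic-scale energies, J·s with macroscopic SI quantities.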
{ "domain": "physics.stackexchange", "id": 60258, "tags": "quantum-mechanics, physical-constants, unit-conversion" }
refactor 3 lines of javascript to minimize code
Question: I am looking to refactor the three lines of code in the else part of the conditional, you'll see where I have it commented. You'll notice a naming convention for the id's: id, id-div, as seen in the first line: club-community-service, club-community-service-id I want to shorten that up. I thought maybe storing all names into an array, then looping thru them like (below). If there is a better way to go about this, I am all ears, thanks! //in theory array names = [names...]; foreach(names as n) { $("#" + n + "-div").toggle($("#" + n).get(0).checked); } $('#high-school-student').bind('click', function() { if($("input[id=high-school-student]").is(":checked")) { $('.field.full.club-community-service, .field.full.other-campus-activities, .field.full.community-public-service, .field.full.other-community-event-activity, .field.honors-program, .field.full.out-of-hs-5-years').hide(); $('#club-community-service-div, #other-campus-activities-div, #community-public-service-div, #other-community-event-activity-div').hide(); } else { $('.field.full.club-community-service, .field.full.other-campus-activities, .field.full.community-public-service, .field.full.other-community-event-activity, .field.honors-program, .field.full.out-of-hs-5-years').show(); // ------ RIGHT HERE - best way to refactor these next three lines $("#club-community-service-div").toggle($('#club-community-service').get(0).checked); $("#other-campus-activities-div").toggle($('#other-campus-activities').get(0).checked); $("#community-public-service-div").toggle($('#community-public-service').get(0).checked); } }); $('.watch-for-toggle').bind('click', function() { var id = $(this).attr('id'); $('#' + id + '-div').toggle(); }); Answer: I didn't figure out a way to shorten those three lines without being too tangled (without modifying the html), but I would rewrite your code in this way: $('#high-school-student').bind('click', function() { var highSchoolStudent = $(this).is(":checked"); 
$('.field.full.club-community-service, .field.full.other-campus-activities, .field.full.community-public-service, .field.full.other-community-event-activity, .field.honors-program, .field.full.out-of-hs-5-years').toggle(!highSchoolStudent); if (highSchoolStudent) { $('#club-community-service-div, #other-campus-activities-div, #community-public-service-div, #other-community-event-activity-div').hide(); } else { //Use a multiselector, convert the result to an array and iterate over it. //I used replace to remove the '-div'. A regular expression would be a more elegant solution. $.each($('#club-community-service-div, #other-campus-activities-div, #community-public-service-div').toArray(), function(i,v) { $('#' + v.id.replace('-div', '')).toggle($(v).is(':visible')); $(v).toggle($('#' + v.id.replace('-div', '')).is(':checked')) }); } });
{ "domain": "codereview.stackexchange", "id": 1973, "tags": "javascript, jquery" }
Obtaining the reduced density matrices for both subsystems of a bipartite system
Question: If we have a single copy of a bipartite quantum system with density matrix $\rho$, is it possible to extract the reduced density matrices of the constituent subsystems separately, i.e. to achieve the following transformation: $$\rho \longrightarrow \mathrm{Tr}_A \rho \otimes \mathrm{Tr}_B \rho $$ where $\mathrm{Tr}_A \rho$ and $\mathrm{Tr}_B \rho$ are the reduced density matrices of the constituent subsystems $A$ and $B$. If we just want $\mathrm{Tr}_B \rho$ we can just "ignore" the other subsystem: say we perform a measurement on subsystem $A$ and discard it without noting the outcome, to obtain $\mathrm{Tr}_B \rho$. But is it possible to get both $\mathrm{Tr}_A \rho$ and $\mathrm{Tr}_B \rho$ in one shot? Answer: No, the desired transformation is not physical. Consider its action on a basis consisting of pure product states. It sends every such state to itself, so it would seem that it's the identity. Clearly, it's not the identity. The contradiction arises because the transformation isn't linear$^1$. Hence, it isn't physical. That said, it is possible to realize $$ \rho\otimes\rho \longrightarrow \mathrm{Tr}_A \rho \otimes \mathrm{Tr}_B \rho $$ by performing a suitable partial trace. $^1$ And thus it is not uniquely determined by its action on a basis.
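The non-linearity argument can be checked numerically. Below is a small Python sketch (my own illustration, using plain nested lists rather than any quantum library): every pure product basis state is mapped to itself, yet applying the map to the mixture $(|00\rangle\langle 00| + |11\rangle\langle 11|)/2$ yields $I/4$, not the mixture itself, so the map cannot be linear:

```python
def partial_trace(rho, keep):
    """Partial trace of a 2-qubit density matrix (4x4 nested list of reals).
    keep='A' returns Tr over B (state of A); keep='B' returns Tr over A."""
    out = [[0.0, 0.0], [0.0, 0.0]]
    for a in range(2):
        for ap in range(2):
            for s in range(2):  # index of the subsystem being summed out
                if keep == 'A':
                    out[a][ap] += rho[2 * a + s][2 * ap + s]
                else:
                    out[a][ap] += rho[2 * s + a][2 * s + ap]
    return out

def kron(x, y):
    """Tensor (Kronecker) product of two 2x2 matrices."""
    return [[x[i // 2][j // 2] * y[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

# The classical mixture mu = (|00><00| + |11><11|) / 2:
mu = [[0.5, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0.5]]
mapped = kron(partial_trace(mu, 'A'), partial_trace(mu, 'B'))
print(mapped)  # diag(1/4, 1/4, 1/4, 1/4) = I/4, which differs from mu
```

If the map were linear and fixed every product basis state, it would have to fix `mu` as well; the output shows it does not.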
{ "domain": "quantumcomputing.stackexchange", "id": 5363, "tags": "quantum-operation, density-matrix" }
Gravity of a gaseous planet without a core
Question: Both Jupiter and Saturn have rocky cores. Is there such a thing as a gaseous planet without a core? And would a planet without a core have gravity? Answer: The gravitational force on a small mass on the outside of a planet is always the Newtonian $$F_{G}=-\frac{GM}{r^2},$$ so any planet, and indeed any mass in the universe, produces a gravitational field acting on everything else. So if, for example, the mass is $M=2\times 10^{27}\rm kg$ (i.e. one Jovian mass), then the gravity field outside the planet will always be the same (apart from tides and higher-order moments), no matter whether the mass is in hydrogen or solids. For the gas giants Jupiter and Saturn in our solar system, the mass in heavy refractories (i.e. everything heavier than helium) is about $M_{\rm ref}\approx 15-20 \rm m_{\oplus}$, where $\rm m_{\oplus}$ is an Earth mass. The rest of $M$ is hydrogen/helium. For Jupiter this is 300 $\rm m_{\oplus}$, for Saturn about $75 \rm m_{\oplus}$. This is a relatively large amount of refractories in those gas giants, compared to solar composition, which is why we think that they have been formed via core accretion, see Pollack (1996). However, there is another idea of how to form gas giants out there, which is that of gravitational disc instability, see Boss (2002). This idea posits that very massive protostellar discs, which form planets, can become unstable and fragment into large clumps, which form gas giants directly. Those disc-instability giant planets would have solar metallicity, i.e. a Jupiter-mass planet would have a refractory mass of only $M_{\rm ref} \approx 3 \rm m_{\oplus}$. Those refractories would presumably sink to the planetary center and form a small core. Exoplanets that were found at large semi-major axis distances (hundreds of AU, compared to the Jovian 5 AU) from their stars, such as YSES 2b, are candidates for such disc-instability models, and hence would host such a small core.
But that is about as small as a core gets; you cannot have a core much less massive than this.
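The point that the external field depends only on the total mass $M$, not on its composition, can be sketched in a few lines of Python (rough textbook values; my own illustration, not part of the original answer):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_JUP = 1.898e27   # one Jovian mass, kg

def external_gravity(mass_kg, radius_m):
    """Newtonian gravitational acceleration GM/r^2 outside a spherical mass.
    Composition does not enter: only the enclosed mass matters."""
    return G * mass_kg / radius_m ** 2

# At Jupiter's equatorial radius (~7.149e7 m) the field is ~24.8 m/s^2,
# whether the interior is hydrogen, rock, or anything else.
print(round(external_gravity(M_JUP, 7.149e7), 1))
```

A coreless hydrogen planet of one Jovian mass would pull on a spacecraft exactly as Jupiter does.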
{ "domain": "astronomy.stackexchange", "id": 5584, "tags": "gravity, gas-giants" }
How does dual nature of matter affect collision at the quantum level?
Question: Suppose two fundamental particles collide with each other at the quantum scale. How will the collision behave? Will the particle nature be dominant, so that momentum is conserved as we see with macroscopic objects, or will the wave nature be dominant, with the principle of superposition applied? Answer: In a sense both particle and wave properties come into play, so it is not an either-or scenario. The interaction of fields at the quantum level is in effect the direct manifestation of the quantum nature of such fields. During the interaction, each field exchanges a quantum of energy-momentum. In this sense, one can think of the fields in terms of their particle nature. However, the exchange allows all possible outcomes that satisfy the requirements of energy-momentum conservation. Therefore, the result is a superposition of all such outcomes. In this sense, the wave properties are revealed. However, "waves" in the quantum context are not like waves in classical theories. They could be in simple cases, but the kind of state produced by interactions consists of multiple entities (I am avoiding the word "particles" here) in superposition, which is not something you get in classical scenarios. One can write down the expression for the wave function of such a state, but it would not look like a classical wave.
{ "domain": "physics.stackexchange", "id": 94460, "tags": "quantum-mechanics, particle-physics, collision, superposition, wave-particle-duality" }
How do you separate enantiomers?
Question: There are some stereochemical reactions that result in the presence of enantiomers. When moving forward with a practical organic synthesis, how does one usually separate them in order to continue with one of the enantiomers? Answer: There are several ways that enantiomers can be separated, but none of them are particularly simple. The first way to separate them is chiral chromatography. In chiral chromatography, silica gel is bonded to chiral molecules to form what is called a chiral stationary phase. The enantiomers will then separate as they run down the column because one of the enantiomers will interact more strongly with the column and "stick" in place. Chiral sugars (ex. cellulose) are frequently used in chiral chromatography. The second common method is to react the enantiomers with another chemical to form diastereomers. While enantiomers are identical in terms of chemical properties, diastereomers are not. Diastereomers can be created by reacting a mixture of both the enantiomers with another chiral molecule, such as s-brucine, which is commonly used because it is cheap. Diastereomers have different chemical properties (for example melting points), so it is much easier to separate them. Then, after separation, the enantiomers can be recovered from the single diastereomer.
{ "domain": "chemistry.stackexchange", "id": 12332, "tags": "organic-chemistry, stereochemistry" }
License and Registration, Sir
Question: The following code will generate a number plate and then generate a random speed and see if it's speeding import random, string, time class Car: def __init__(self): self.plate = self.genPlate() self.limit = 60 self.checkSpeeding() def checkSpeeding(self): self.speeding = random.randint(0, 1) if self.speeding: self.speed = random.randint(self.limit+3, self.limit+20) else: self.speed = random.randint(self.limit-20, self.limit+2) def genPlate(self): plateFormat = ['L', 'L', 'N', 'N', 'L', 'L', 'L'] genPlate = [] for i in plateFormat: if i == 'L': genPlate.append(random.choice(string.ascii_letters[26:])) elif i == 'N': genPlate.append(str(random.randint(0, 9))) genPlate.insert(4, " ") return "".join(genPlate) allCars = [] x = 0 while True: allCars.append(Car()) print(allCars[x].plate + " was going " + str(allCars[x].speed) + "mph in a " + str(allCars[x].limit) + "mph zone") if allCars[x].speeding: print("speeding fine issued") print("\n") time.sleep(5) x += 1 Here is a link to the original source Answer: I am going to improve the following: allCars = [] x = 0 while True: allCars.append(Car()) print(allCars[x].plate + " was going " + str(allCars[x].speed) + "mph in a " + str(allCars[x].limit) + "mph zone") if allCars[x].speeding: print("speeding fine issued") print("\n") time.sleep(5) x += 1 Your x variable should be eliminated: allCars = [] while True: allCars.append(Car()) print(allCars[-1].plate + " was going " + str(allCars[-1].speed) + "mph in a " + str(allCars[-1].limit) + "mph zone") if allCars[-1].speeding: print("speeding fine issued") print("\n") time.sleep(5) [-1] is the last element and append puts the element in last position. 
Actually you don't even need a list: while True: new_car = Car() print(new_car.plate + " was going " + str(new_car.speed) + "mph in a " + str(new_car.limit) + "mph zone") if new_car.speeding: print("speeding fine issued") print("\n") time.sleep(5) The print message is very long and not nice, instead use .format(): while True: new_car = Car() print("{} was going {} mph in a {} mph zone".format( new_car.plate,new_car.speed,new_car.limit)) if new_car.speeding: print("speeding fine issued") print("\n") time.sleep(5) Adding dots to the end of phrases, capitalizing the first letter, removing an unnecessary statement and sleeping less: while True: new_car = Car() print("{} was going {} mph in a {} mph zone.".format( new_car.plate,new_car.speed,new_car.limit)) if new_car.speeding: print("Speeding fine issued.\n") time.sleep(1) You should use a main function and if __name__ == "__main__": so that you will be able to import your module without running it: def main(): while True: new_car = Car() print("{} was going {} mph in a {} mph zone.".format( new_car.plate,new_car.speed,new_car.limit)) if new_car.speeding: print("Speeding fine issued.\n") time.sleep(1) if __name__ == "__main__": main() You should always strive to make the code as simple as possible. About the rest of the code, minor remarks: You should use names_with_underscores and not camelCase because of PEP 8. 60 is hardcoded; it would be nicer to pass in a limit parameter when generating a new car.
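Putting the answer's last two remarks (snake_case names per PEP 8, and a limit parameter) into practice, the whole program might look like this sketch:

```python
import random
import string
import time

class Car:
    """Car with a random plate and speed; the limit is now a parameter."""
    def __init__(self, limit=60):
        self.limit = limit
        self.plate = self.gen_plate()
        self.check_speeding()

    def check_speeding(self):
        self.speeding = bool(random.randint(0, 1))
        if self.speeding:
            self.speed = random.randint(self.limit + 3, self.limit + 20)
        else:
            self.speed = random.randint(self.limit - 20, self.limit + 2)

    def gen_plate(self):
        # 'L' -> random uppercase letter, 'N' -> random digit (format LLNN LLL)
        parts = [random.choice(string.ascii_uppercase) if c == 'L'
                 else str(random.randint(0, 9)) for c in 'LLNNLLL']
        parts.insert(4, ' ')
        return ''.join(parts)

def main():
    while True:
        new_car = Car()
        print("{} was going {} mph in a {} mph zone.".format(
            new_car.plate, new_car.speed, new_car.limit))
        if new_car.speeding:
            print("Speeding fine issued.\n")
        time.sleep(1)

if __name__ == "__main__":
    main()
```

`string.ascii_uppercase` replaces the original `string.ascii_letters[26:]`, which selects the same characters.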
{ "domain": "codereview.stackexchange", "id": 11483, "tags": "python, classes, random" }
Is there a term for "this month last year" in a report?
Question: I'm building a report that has month over month data, but also "this month last year". Is there a better/standard way of describing this? Answer: Same period last year is what you want. BTW, your question does not appear to be about data science. References https://msdn.microsoft.com/en-us/library/ee634972.aspx http://www.wiseowl.co.uk/blog/s2477/same-period-previous-year.htm
{ "domain": "datascience.stackexchange", "id": 1489, "tags": "visualization, databases, terminology" }
Why is this code uniquely decodable?
Question: Source alphabet: $\{a, b, c, d, e, f\}$ Code alphabet: $\{0, 1\}$ $a\colon 0101$ $b\colon 1001$ $c\colon 10$ $d\colon 000$ $e\colon 11$ $f\colon 100$ I thought that for a code to be uniquely decodable, it had to be prefix-free. But in this code, the codeword $c$ is the prefix of codeword $f$ for example, so it is not prefix-free. However my textbook tells me that its reverse is prefix free (I don't understand this), and therefore it is uniquely decodable. Can someone explain what this means, or why it is uniquely decodable? I know it satisfies Kraft's inequality, but that is only a necessary condition, not a sufficient condition. Answer: Your code has the property that if you reverse all codewords, then you get a prefix code. This implies that your code is uniquely decodable. Indeed, consider any code $C = x_1,\ldots,x_n$ whose reverse $C^R := x_1^R,\ldots,x_n^R$ is uniquely decodable. I claim that $C$ is also uniquely decodable. This is because $$ w = x_{i_1} \ldots x_{i_m} \text{ if and only if } w^R = x_{i_m}^R \ldots x_{i_1}^R. $$ In words, decompositions of $w$ into codewords of $C$ are in one-to-one correspondence with decompositions of $w^R$ into codewords of $C^R$. Since the latter are unique, so are the former. Since prefix codes are uniquely decodable, it follows that the reverse of a prefix code is also uniquely decodable. This is the case in your example. The McMillan inequality states that if $C$ is uniquely decodable then $$ \sum_{i=1}^n 2^{-|x_i|} \leq 1. $$ In other words, a uniquely decodable code satisfies Kraft's inequality. Therefore if all you're interested in is minimizing the expected codeword length, there is no reason to look beyond prefix codes. Sam Roweis gives in his slides a nice example of a uniquely decodable code which is neither a prefix code nor the reverse of a prefix code: $$ 0,01,110. $$ In order to show that this code is uniquely decodable, it suffices to show how to decode the first codeword of a word. 
If the word starts with a $1$, then the first codeword is $110$. If it is of the form $01^*$, then it must be either $0$ or $01$. Otherwise, there must be a prefix of the form $01^*0$. We now distinguish several cases: $$ \begin{array}{c|cccc} \text{prefix} & 00 & 010 & 0110 & 01110 \\\hline \text{codeword} & 0 & 01 & 0 & 01 \end{array} $$ Longer runs of $1$ cannot be decoded at all.
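These properties are easy to check by brute force. The following Python sketch (my own illustration, not part of the original answer) verifies that the given code is not prefix-free, that its reverse is, and that Roweis' example is uniquely decodable up to a bounded concatenation depth:

```python
from itertools import product

def is_prefix_free(codewords):
    """True if no codeword is a proper prefix of another."""
    return not any(a != b and b.startswith(a)
                   for a in codewords for b in codewords)

def is_uniquely_decodable(codewords, max_len=4):
    """Bounded brute-force check: no string is produced by two different
    sequences of up to max_len codewords. (A full check would need an
    algorithm such as Sardinas-Patterson; this only tests small depths.)"""
    seen = {}
    for n in range(1, max_len + 1):
        for combo in product(codewords, repeat=n):
            w = ''.join(combo)
            if w in seen and seen[w] != combo:
                return False
            seen[w] = combo
    return True

code = ['0101', '1001', '10', '000', '11', '100']
print(is_prefix_free(code))                        # False: '10' prefixes '100'
print(is_prefix_free([c[::-1] for c in code]))     # True: the reversed code is prefix-free
print(is_uniquely_decodable(['0', '01', '110']))   # True: Roweis' example
```

The classic counterexample `['0', '00']` fails the check, since `'0'+'00'` and `'00'+'0'` produce the same string.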
{ "domain": "cs.stackexchange", "id": 13426, "tags": "encoding-scheme" }
What does "projection of a vector" really mean?
Question: Let $\vec{a}$ & $\vec{b}$ be two non-collinear, non-zero co-initial vectors having angle $\theta$ between them. The projection of $\vec{b}$ on $\vec{a}$ is given by the dot product of $\vec{b}$ and $\hat{a}$. This is the mathematical definition. But what is it actually? What is the definition of it? [ I guess it is the magnitude of the component of $\vec{b}$ on $\vec{a}$.] Answer: Consider for example, a plane vector and two orthogonal unit vectors $\hat x$ and $\hat y$. Any vector in the plane can be expressed as $$\vec v = (\vec v \cdot \hat x) \;\hat x + (\vec v \cdot \hat y) \; \hat y = v_x\; \hat x + v_y\; \hat y$$ So, you're correct, $\vec b \cdot \hat a$ is the component of $\vec b$ in the $\hat a$ direction. And further, the operator $$\left(\quad\cdot\; \hat a\right) \hat a $$ is a projection operator - it takes as input a vector and returns a vector - the projection ('shadow') of that vector in the $\hat a$ direction.
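A small Python sketch (my own illustration) of the projection operator acting on a concrete vector, separating the scalar component from the projected vector:

```python
import math

def dot(u, v):
    """Euclidean dot product of two vectors given as lists."""
    return sum(a * b for a, b in zip(u, v))

def project(b, a):
    """Return (b . a_hat, (b . a_hat) a_hat): the scalar component of b
    along a, and the projection ('shadow') vector itself."""
    norm_a = math.sqrt(dot(a, a))
    a_hat = [x / norm_a for x in a]
    scalar = dot(b, a_hat)          # magnitude of the component of b along a
    return scalar, [scalar * x for x in a_hat]

component, shadow = project([3.0, 4.0], [1.0, 0.0])
print(component, shadow)  # 3.0 [3.0, 0.0] -- the x-component of (3, 4)
```

Projecting onto $\hat x$ simply picks out $v_x$, exactly as the decomposition $\vec v = v_x \hat x + v_y \hat y$ says.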
{ "domain": "physics.stackexchange", "id": 16914, "tags": "vectors, mathematics, linear-algebra" }
ROS wrapper for Segway RMP440 Omni
Question: Is anyone developing a ROS wrapper for Segway RMP440 Omni or has already developed it? http://rmp.segway.com/about-segway-robotics/segway-rmp400-omni/ Thanks in advance! Originally posted by xylo on ROS Answers with karma: 184 on 2013-03-13 Post score: 1 Answer: I assume the RMP440 Omni uses the new protocol used by the RMP440 (http://rmp.segway.com/about-segway-robotics/segway-rmp440-le-and-se/). If so William's driver won't work on it. We have a student in our group currently working on a wrapper for the new protocol. I'm not sure of the status, but I'll check on it and try to post shortly. Originally posted by david.hodo with karma: 395 on 2013-03-14 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by xylo on 2013-03-14: @david.hodo I think you're right. The RMP400 Omni is a new model that came out last year. Look forward to hearing the student's progress. Thanks! Comment by William on 2013-03-14: I know someone who contacted me about supporting the x440 (similar protocol), but when I declined he said that he implemented his own. I have pointed him to this question, so hopefully he can chime in later.
{ "domain": "robotics.stackexchange", "id": 13346, "tags": "ros" }
Orbital parameters of the Sun
Question: What are the orbital parameters of the Sun, such as orbital velocity etc., in its orbit around the Solar System's center of mass? Consider the Sun pointlike; alternatively, when talking about the Sun's movement I mean its center of mass. Do not tell me that the Sun is stationary because the planets' masses can be neglected. I do not want such oversimplifications. Answer: Since the Solar system is a multi-body system (with $N>2$ bodies of significant mass), the orbits of its constituents are not exact Keplerian orbits. To lowest order, each planet orbits the Sun (or rather the centre of mass of all interior planets) on a Keplerian orbit, but the interactions with the planets as well as the fact that the centre of mass of the interior is not fixed lead to deviations of the true orbit from this simplification. These deviations can be treated either numerically or via perturbation theory, but are non-trivial functions of time. The same holds for the Sun: to lowest order one can neglect all planets but Jupiter (which is more than twice as massive as all the remaining planets combined), in which case the Sun follows an elliptic orbit with semi-major axis of about 0.005 AU (smaller than that of Jupiter by their mass ratio). This is of the same order as the radius of the Sun, i.e. the barycentre of the Solar system hardly leaves the Sun. However, as above, the pull of all other planets leads to a deviation from this simple model. Again, these deviations are non-trivial.
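The quoted 0.005 AU follows directly from the Sun/Jupiter mass ratio; a quick Python check (rough values, my own illustration):

```python
# Scale of the Sun's reflex orbit about the Sun-Jupiter barycentre.
M_SUN = 1.989e30   # solar mass, kg
M_JUP = 1.898e27   # Jovian mass, kg
A_JUP = 5.20       # Jupiter's semi-major axis, AU

# The Sun's semi-major axis about the barycentre is Jupiter's,
# scaled down by the mass ratio M_JUP / (M_SUN + M_JUP).
a_sun = A_JUP * M_JUP / (M_SUN + M_JUP)
print(round(a_sun, 4))  # ~0.005 AU, comparable to the solar radius (~0.00465 AU)
```

So the barycentre of the Sun-Jupiter pair sits just outside the solar surface, which is why the Sun "hardly leaves" it.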
{ "domain": "astronomy.stackexchange", "id": 5567, "tags": "the-sun, solar-system, celestial-mechanics" }
Why do "relativistic effects" come into play, when dealing with superheavy atoms?
Question: I have now read on the Wikipedia pages for unbihexium, unbinilium, and copernicium that these elements will not behave similarly to their forebears because of “relativistic effects”. When I read about rutherfordium, it too brings up the relativistic effects, but only to say that it compared well with its predecessors, despite some calculations indicating it would behave differently, due to relativistic effects. The dubnium page on Wikipedia says that dubnium breaks periodic trends, because of relativistic effects. The Wikipedia page on seaborgium doesn't even mention relativistic effects, only stating that it behaves as the heavier homologue to tungsten. Bohrium's Wikipedia page says it's a heavier homologue to rhenium. So, what are these relativistic effects and why do they only take effect in superheavy nuclei? When I think of relativistic effects, I think speeds at or above $.9 c$ or near incredibly powerful gravitational forces. So, I fail to see how it comes into play here. Is it because the electrons have to travel at higher speeds due to larger orbits? Answer: When quantum mechanics was initially being developed, it was done so without taking into account Einstein's special theory of relativity. This meant that the chemical properties of elements were understood from a purely quantum mechanical description i.e., by solving the Schrödinger equation. The more accurate models following that time, that do use special relativity, were found to be more consistent with experiment than compared with the ones that were used without special relativity. So when they quote "relativistic effects" they are referring to chemical properties for elements that were determined using special relativity. Is it because the electrons have to travel at higher speeds due to larger orbits? 
Changes to the chemical properties of elements due to relativistic effects are more pronounced for the heavier elements in the periodic table because in these elements, electrons have speeds worthy of relativistic corrections. These corrections show properties that are more consistent with reality than those where a non-relativistic treatment is given. A very good example of this would be the consideration of the color of the element gold, Au. Physicist Arnold Sommerfeld calculated that, for an electron in a hydrogenic atom, its speed is given by $$v \approx (Zc)\alpha$$ where $Z$ is the atomic number, $c$ is the speed of light, and $$\alpha\approx\frac{1}{137}$$ is a (dimensionless) number called the fine structure constant or Sommerfeld's constant. For Au, since $Z= 79$, its outer shell electrons would be moving$^1$ at about $0.58c$. This means that relativistic effects will be pretty noticeable for gold$^2$, and these effects actually contribute to gold's color. Interestingly, we also note from the above equation that if $Z\gt 137$ then $v\gt c$, which would violate one of the postulates of special relativity, namely that no object can have a velocity greater than that of light. But it is also well known that no element can have atomic number $Z\gt 137$ (what would happen is that with such a strong electric field due to the nucleus, there is enough energy for pair production $e^++e^-$, which quenches the field). $^1$Electrons are not "moving around" a nucleus, but they are instead probability clouds surrounding the nucleus. So "most likely distances of electrons" would be a better term. $^2$In the example of the element gold, which has an electron configuration $$\bf \small 1s^2 \ 2s^2\ 2p^6\ 3s^2\ 3p^6\ 4s^2\ 3d^{10}\ 4p^6\ 5s^2\ 4d^{10}\ 5p^6\ 6s^1\ 4f^{14}\ 5d^{10}$$ relativistic effects will increase the $\bf \small 5d$ orbital distance from the nucleus, and also decrease the $\bf \small 6s$ orbital distance from the nucleus.
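Sommerfeld's estimate is a one-liner to evaluate; this Python sketch (my own illustration) reproduces the $0.58c$ figure for gold and shows how small the effect is for hydrogen:

```python
ALPHA = 1 / 137.036   # fine structure constant

def electron_speed_fraction(z):
    """Sommerfeld's estimate v/c ~ Z * alpha for an inner electron
    in a hydrogenic atom with atomic number Z."""
    return z * ALPHA

print(round(electron_speed_fraction(79), 2))  # ~0.58 for gold (Z = 79)
print(round(electron_speed_fraction(1), 4))   # ~0.0073 for hydrogen
```

At $0.58c$ the relativistic mass correction factor $\gamma = 1/\sqrt{1-v^2/c^2}$ is about 1.2, large enough to shift orbital energies noticeably; for hydrogen it is negligible.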
{ "domain": "physics.stackexchange", "id": 86504, "tags": "quantum-mechanics, special-relativity, atomic-physics, orbitals, elements" }
Entity Framework Lazy loading performance comparison
Question: The following code takes about 20 seconds to run with about 200,000 records in the TaskLogs table: using (var db = new DAL.JobManagerEntities()) { return db.TaskLogs.Select(taskLog => new TaskLog() { TaskLogID = taskLog.TaskLogID, TaskID = taskLog.TaskID, TaskDescription = taskLog.Task.TaskDescription, TaskType = (TaskType)taskLog.Task.TaskTypeID, RunID = taskLog.RunID, ProcessID = taskLog.ProcessID, JobID = taskLog.JobID, JobName = taskLog.Job.JobName, Result = taskLog.Result, StartTime = taskLog.StartTime, TimeTaken = taskLog.TimeTaken }).OrderByDescending(t => t.RunID).ThenByDescending(t => t.RunID).ThenByDescending(t => t.StartTime).ToList(); } I tweaked it until I got something that runs faster. Here's where I got to: using (var db = new DAL.JobManagerEntities()) { db.Configuration.LazyLoadingEnabled = false; var tasks = db.Tasks.ToList(); var jobs = db.Jobs.ToList(); var result = db.TaskLogs.Select(x => new TaskLog() { TaskLogID = x.TaskLogID, TaskID = x.TaskID, RunID = x.RunID, ProcessID = x.ProcessID, JobID = x.JobID, Result = x.Result, StartTime = x.StartTime, TimeTaken = x.TimeTaken }).OrderByDescending(t => t.RunID).ThenByDescending(t => t.StartTime).ToList(); foreach (var r in result) { r.TaskDescription = tasks.Single(t => t.TaskID == r.TaskID).TaskDescription; r.TaskType = (TaskType)tasks.Single(t => t.TaskID == r.TaskID).TaskTypeID; r.JobName = jobs.Single(j => j.JobID == r.JobID).JobName; } return result;} Which runs in less than 6 seconds for the same number of records. The TaskLog table is linked to the Job and Task tables as follows: The Job and Task tables will have 100s and 1000s of records respectively. Is there anything else I could do in order to further improve the efficiency of the code? 
Answer: 1) Not sure if it impacts performance, but your first code fragment has one redundant order by: .OrderByDescending(t => t.RunID).ThenByDescending(t => t.RunID) 2) You could improve performance of your second code with "client side indexing" (using a dictionary): var tasksMap = tasks.ToDictionary(t => t.TaskID); var jobsMap = jobs.ToDictionary(t => t.JobID); foreach (var r in result) { var task = tasksMap[r.TaskID]; r.TaskDescription = task.TaskDescription; r.TaskType = (TaskType)task.TaskTypeID; r.JobName = jobsMap[r.JobID].JobName; }
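The "client side indexing" idea is language-agnostic: replace a per-row linear search (like `Single()`) with a one-time dictionary build, turning O(n) lookups into O(1). A Python analogue of the same trade-off (hypothetical records, not the OP's schema):

```python
import timeit

# Hypothetical stand-in for the Tasks table.
tasks = [{"task_id": i, "description": "task %d" % i} for i in range(2000)]

def lookup_linear(task_id):
    # Analogous to tasks.Single(t => t.TaskID == r.TaskID): O(n) scan per call.
    return next(t for t in tasks if t["task_id"] == task_id)

# Build the "index" once, analogous to tasks.ToDictionary(t => t.TaskID).
tasks_map = {t["task_id"]: t for t in tasks}

def lookup_indexed(task_id):
    # Analogous to tasksMap[r.TaskID]: O(1) hash lookup per call.
    return tasks_map[task_id]

slow = timeit.timeit(lambda: lookup_linear(1500), number=200)
fast = timeit.timeit(lambda: lookup_indexed(1500), number=200)
print(slow > fast)  # the dictionary wins once there are many lookups
```

With 200,000 result rows and two `Single()` calls per row, the original loop performed hundreds of thousands of scans; the dictionary version performs each lookup in constant time.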
{ "domain": "codereview.stackexchange", "id": 29454, "tags": "c#, performance, database, entity-framework" }
What happens to fecal matter if it's continually re-eaten?
Question: After class today, the topic of eating one's own fecal matter came up. There was a sharp divide as to how people thought the defecations would change. I realize this is a bit of an odd question but understanding it would help me understand how our body handles waste. One theory was that the body would continually reuse some elements of the fecal matter such that, eventually and with no other input, nothing would be defecated. Another theory was that there are elements in the fecal matter that the body would never make any use of, so it would continually get excreted (up until the person died, of course). Could someone please explain the likely outcome of continually and only eating your own poop (assuming you could stay alive long enough)? Answer: In a healthy person, stool weight depends mainly on the quality and quantity of the diet. The mean stool weight of a normal defecation is about 320 g [1]. Dry weight of fecal matter contains [2]: 30 % bacteria 30 % undigested food and fiber 10-20 % fat 2-3% protein Let's do the math (the only nutrients in feces are fats and proteins): 20% * 320g = 64g of fats which can release 64 * 9.3 = 595 kcal of energy 3% * 320 = 9.6g of proteins which can release 9.6 * 4.1 = 40 kcal of energy That's about 635 kcal. 2,000 kcal is a rough average of what people eat in a day. But your body might need more or less than 2,000 [3]. As you can see, fecal matter can't supply enough nutrients. Continuously (re)eating feces and only feces will lead to undernutrition, less feces and eventually death. References: Seyed Mohammad Kazem Hosseini Asl MD, Seyed Davood Hosseini MD. Determination of the Mean Daily Stool Weight, Frequency of Defecation and Bowel Transit Time: Assessment of 1000 Healthy Subjects shigeta's answer on What is the composition of human feces? HowStuffWorks.com. How many calories does a person need daily?
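Redoing the arithmetic in code (rough figures taken from the answer; my own illustration):

```python
STOOL_G = 320                 # mean stool weight per defecation, g
FAT_FRACTION = 0.20           # upper end of the 10-20% fat range
PROTEIN_FRACTION = 0.03       # upper end of the 2-3% protein range
KCAL_PER_G_FAT = 9.3          # energy density of fat
KCAL_PER_G_PROTEIN = 4.1      # energy density of protein

fat_kcal = STOOL_G * FAT_FRACTION * KCAL_PER_G_FAT              # 64 g -> ~595 kcal
protein_kcal = STOOL_G * PROTEIN_FRACTION * KCAL_PER_G_PROTEIN  # 9.6 g -> ~39 kcal
total = fat_kcal + protein_kcal
print(round(total))  # ~635 kcal, far short of a ~2000 kcal daily requirement
```

Even at the most generous fractions, the recoverable energy falls to roughly a third of an average daily requirement, and it shrinks further with each pass.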
{ "domain": "biology.stackexchange", "id": 2351, "tags": "scatology" }
Using FindOpenCV.cmake with libopencv2.3-dev
Question: While installing ros-electric-desktop-full, I ran into a conflict of ros-electric-vision-opencv (through libopencv2.3-dev) with the default OpenCV in Ubuntu 11.10. In order to resolve the conflict, I manually installed libopencv2.3-dev, which removed Ubuntu's default OpenCV installation (libcv-dev, libcvaux-dev and libhighgui-dev). However, I found out that the FindOpenCV.cmake file that can be downloaded from the OpenCV website does not work with libopencv2.3-dev. It used to work with default Ubuntu OpenCV installation though. That FindOpenCV.cmake file looks for OpenCV libraries with very different names than those installed by libopencv2.3-dev. Is there a FindOpenCV.cmake file that works with 2.3.1 and may also be compatible with default Ubuntu installation (libcv, libcvaux, libhighgui)? This is definitely a show stopper. Originally posted by Aditya on ROS Answers with karma: 287 on 2011-11-13 Post score: 0 Original comments Comment by AHornung on 2011-11-13: The conflict with the default Ubuntu cv libraries looks related to this question: http://answers.ros.org/question/2657/solve-conflict-between-ubuntu-libcv21-and-ros (with still no solution). Answer: The conflict now seems to be resolved with the latest update to the debs: sudo apt-get install libcv2.1 ros-electric-vision-opencv libopencv2.3 Reading package lists... Done Building dependency tree Reading state information... Done libcv2.1 is already the newest version. libopencv2.3 is already the newest version. ros-electric-vision-opencv is already the newest version. Package: libopencv2.3 Version: 2.3.1+svn6514+branch23-8~natty Package: ros-electric-vision-opencv Version: 1.6.8-s1321009674~natty Originally posted by AHornung with karma: 5904 on 2011-11-13 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 7282, "tags": "ros, opencv, libopencv" }
Finding the largest value in array - Recursion
Question: I know this may be a silly question (I'm just learning about recursion, and I think there are still things that are not clear to me), but... is this a good implementation of a recursive function to find the maximum number in an array? #include <iostream> #include <cstdlib> using std::cout; using std::endl; /* function prototype */ int find_max(const int [], int); /* main function */ int main(void) { int array[] = {2, 6, 8, 23, 45, 43, 51, 62, 83, 78, 61, 18, 71, 34, 72}; int size = sizeof(array) / sizeof(array[0]); cout << "HIGHEST: " << find_max(array, size) << endl; exit(EXIT_SUCCESS); } /* function definition */ int find_max(const int array[], int size) { int i = 1; int highest = array[0], l = 0; if (i < size) l = find_max(array+i, size-1); if (l > highest) highest = l; return highest; } output: ./main HIGHEST: 83 What I basically want to do is implement the same algorithm using a recursive structure, or rather implement the function below. int find_max(const int array[], int size) { int highest = array[0]; for (int i = 1; i < size; i++) { if (array[i] > highest) highest = array[i]; } return highest; } Answer: I respectfully disagree with the advice to prefer iteration to recursion. I personally prefer to express most algorithms recursively. It’s much better suited to a kind of programming style, using only static single assignments, where certain kinds of bugs become impossible to write without the compiler catching them. It’s often easier to reason about the code and prove it correct. And, properly written, it’s just as efficient. For several decades, programmers were taught to turn recursive functions into iterative loops, but that was originally because compilers used to be bad at optimizing recursive calls. That’s no longer as true as it was, so it’s now a matter of personal preference.
Besides, you’ll have to learn to program in a functional language that has only recursive calls and not loops, if you want a computer science degree these days. So, how do we write this in a recursive call? For the sake of explaining exactly what I’m doing and why, I’m going to ridiculously overdo it. This would be one line of code in a functional language. With that in mind, in functional style, all the local state of the loop becomes parameters of a function, and changes to that local state, such as the loop incrementing its counter and repeating, become the function returning a call to itself. What, then, is the local state? The code you want to transform starts out as int find_max(const int array[], int size) { int highest = array[0]; for (int i = 1; i < size; i++) { if (array[i] > highest) highest = array[i]; } return highest; } There are four local variables here: array, size, i and highest. You already figured out, in a similar question you asked, that you can reduce this to three by searching from back to front instead of front to back; the loop for ( auto i = size; i > 0; --i ) only uses size to initialize i, and only needs i thereafter. Another way to bring this down to three state variables would be to recursively search the subarray starting at array+1 of length size-1, the subarray starting at array+2 of length size-2, and so on. Another common idiom for this is to use a current and end pointer, and stop when current is equal to end. Here, let’s use the same approach you did in your other question and count down. Since the local state we want to keep around consists of array, i, and highest, our function signature should look like int find_max( const int array[], size_t i, int highest ) Only, like in my other answer, we want this to be constexpr so it can perform constant-folding, and noexcept to help the compiler optimize.
On top of that, the only time the state changes is when we make a tail-recursive call, and we can only update the entire state together, so there is no way we can accidentally forget to update one of the state variables or update it twice. So, with static single assignments, the function declaration becomes: constexpr int find_max( const int array[], size_t i, int highest ) noexcept Only, the original caller passed in array and size, so this is actually a helper function that find_max will call, and find_max itself will use the same interface as before. So, we actually want a pair of functions: static constexpr int find_max_helper( const int array[], const size_t i, const int highest ) noexcept; constexpr int find_max( const int array[], const size_t size ); We want to make sure to write only tail calls, so our function should have an if/then/else structure, where each branch returns either an int or a function call. Let’s see the original function again. int find_max(const int array[], int size) { int highest = array[0]; for (int i = 1; i < size; i++) { if (array[i] > highest) highest = array[i]; } return highest; } What I ideally try to do (but don’t always, especially for really short examples like this) is write the contract for my functions in a comment first. In this case, the name of the function and its parameters are self-explanatory, but that might look like: constexpr int find_max( const int array[], const size_t size ) /* Returns the maximum value of the array of the given size. */ Only, wait, what is the maximum value of a zero-length array? Your implementation just checks array[0] unconditionally, so find_max( NULL, 0 ) would have undefined behavior, usually a segmentation fault. You always, always, always check for a buffer overrun in C and C++. It is never too early to get in the habit. I seriously mean it. That could be an assert(), but for this example, let’s throw an std::logic_error exception.
And there are several possible ways we could have handled this, so it actually becomes important to document the behavior of the function in this case, or at least warn anyone who uses this function what assumptions it makes. So the first few lines of our function will look like this: constexpr int find_max( const int array[], const size_t size ) /* Returns the maximum value of the array of the given size. * Throws std::logic_error if size is 0. */ { if ( size == 0 ) { throw std::logic_error("Max of a zero-length array"); } What about the rest? We’re reducing the array from back to front and you initialize highest to the first element before you start looping. We refactored to search from the back, so the first element we check is now the rightmost instead of the leftmost, and we replace iterations of the loop with tail-calls to a helper function, but we can follow the exact same logic. Our initial state has i equal to size-1 (and we just checked that size is greater than 0, so this is in bounds), array equal to the value we were passed, and highest equal to the last element of array, which is array[size-1]. So far, then, we have static constexpr int find_max_helper( const int array[], const size_t i, const int highest ) noexcept; /* Searches the array from index i to 0 for an element greater than highest. * Intended to be called only from find_max. */ constexpr int find_max( const int array[], const size_t size ) /* Returns the maximum value of the array of the given size. * Throws std::logic_error if size is 0. */ { if ( size == 0 ) { throw std::logic_error("Max of a zero-length array"); } return find_max_helper( array, size-1, array[size-1] ); } Now we just need to implement find_max_helper. It’s pretty straightforward: check whether we’ve reached the start of the array, and if not, find the new maximum and recurse. So, there are three branches: the termination check, the case where the current element is higher, and the case where it isn’t.
static constexpr int find_max_helper( const int array[], const size_t i, const int highest ) noexcept /* Searches the array from index i to 0 for an element greater than highest. * Intended to be called only from find_max. */ { if ( i == 0 ) { return highest; } else if ( array[i-1] > highest ) { return find_max_helper( array, i-1, array[i-1] ); } else { return find_max_helper( array, i-1, highest ); } } You could also express this as a series of if statements that either return or fall through (making it clearer that the function always terminates in a valid return), or as a nested ternary expression like in your other question for review. Putting it all together, and setting our types correctly in the main test driver, we get: #include <cassert> #include <cstdlib> // for EXIT_SUCCESS #include <iostream> #include <stdexcept> // For logic_error using std::cout, std::endl, std::size_t; static constexpr int find_max_helper( const int array[], const size_t i, const int highest ) noexcept /* Searches the array from index i to 0 for an element greater than highest. * Intended to be called only from find_max. */ { if ( i == 0 ) { return highest; } else if ( array[i-1] > highest ) { return find_max_helper( array, i-1, array[i-1] ); } else { return find_max_helper( array, i-1, highest ); } } constexpr int find_max( const int array[], const size_t size ) /* Returns the maximum value of the array of the given size. * Throws std::logic_error if size is 0. */ { if ( size == 0 ) { throw std::logic_error("Max of a zero-length array"); } return find_max_helper( array, size-1, array[size-1] ); } int main(void) // Test driver for find_max. { constexpr int array[] = {2, 6, 8, 23, 45, 43, 51, 62, 83, 78, 61, 18, 71, 34, 72}; constexpr size_t size = sizeof(array) / sizeof(array[0]); cout << "HIGHEST: " << find_max(array, size) << endl; return EXIT_SUCCESS; } The call to find_max compiles to mov esi, 83 This is why you always write your functions as constexpr when you can.
In the real world, that can typically make a program run 33% faster. If you want to actually see the generated code, you can force the compiler to actually generate the code by making a call to some data it cannot constant-fold. int main(void) // Test driver for find_max. { #if 0 constexpr int array[] = {2, 6, 8, 23, 45, 43, 51, 62, 83, 78, 61, 18, 71, 34, 72}; constexpr size_t size = sizeof(array) / sizeof(array[0]); #endif extern const int array[]; extern const size_t size; cout << "HIGHEST: " << find_max(array, size) << endl; return EXIT_SUCCESS; } What this will tell you is that modern compilers don’t literally compile this as written. What they actually do is figure out for you that you are doing a reduction of the array with the max operation, which is associative and therefore can be automatically vectorized. And, in fact, most code with loops or a tail-recursive accumulator isn’t supposed to literally iterate or recurse one element at a time. We actually want the compiler to notice that a vectorized or parallelized algorithm is equivalent to the specification we gave it. But, there is a much more natural way to express this. Just say we want to perform a reduction operation on the array. In the STL, that’s <valarray> #include <cstdlib> // for EXIT_SUCCESS #include <iostream> #include <valarray> using std::cout, std::endl; int main(void) // Test driver for find_max. { const std::valarray array = {2, 6, 8, 23, 45, 43, 51, 62, 83, 78, 61, 18, 71, 34, 72}; cout << "HIGHEST: " << array.max() << endl; return EXIT_SUCCESS; }
{ "domain": "codereview.stackexchange", "id": 43060, "tags": "c++, algorithm, recursion" }
Why are electrons treated classically in cyclotron measurements?
Question: As I understand, systems having large angular momenta relative to the Planck constant (limit of large quantum numbers, e.g. $J/\hbar \to \infty$) can be treated as classical systems. Now in the case of cyclotron resonance type measurements, one often sees the classical equation of motion written down for the electron, e.g. in the presence of a magnetic field we have: $$m\mathbf{\dot{v}} = -e\mathbf{v} \times \mathbf{B} - \frac{m}{\tau}\mathbf{v} \tag{1}$$ With $\tau$ a relaxation time for the electron in its host material (e.g. a crystal), where we often have $\tau^{-1}\to 0$. Why is it physically allowed to assume the electron can be treated classically? What's the key idea behind this approximation in such contexts? Lastly, on a related note, if we add an external electric field to the above system, the first term on the rhs of (1) becomes $-e(\mathbf{E}+\mathbf{v}\times \mathbf{B})$ and in order to solve this system, the planewave ansatz is usually used for the velocity, i.e. $v(t)=v_0 e^{-i\omega t}$. Is this ansatz a sound choice here because we are treating the electron classically, or is there another unrelated underlying reason? Answer: Before addressing your question, there is a point where I kind of disagree with Orca's answer that I'd like to discuss: I will begin with part 2 of your question about plane waves. The use of this Ansatz is the first clue that you are actually treating the situation quantum-mechanically, but ending up with a result that exactly matches the classical result. The ansatz $v=v_0 \mathrm e^{-i\omega t}$ has nothing to do with Quantum Mechanics. In fact, cyclotron resonance can be very well understood in the framework of the Drude Model, which is from 1900 (before QM was born). The use of this ansatz is in fact related to a mathematical theorem, with no physical insight whatsoever: Theorem: the general solution of a linear differential equation with constant coefficients is always a combination of exponentials.
See Linear differential equation for the proof. This means: as your equation of motion is linear you know that the solution is an exponential. This is all there is: the ansatz $\mathrm e^{-i\omega t}$ has no physical explanation. To illustrate my point, let's consider a different damping term, where the physics are the same but the mathematics different: $$ m\mathbf{\dot{v}} = -e\mathbf{v} \times \mathbf{B} - \alpha\ v^2\hat{\mathbf{v}} \quad \text{with} \quad \alpha\in\mathbb R $$ In this case, you also have a friction term, which is now quadratic. As this equation is non-linear, you cannot use the exponential ansatz. In this case, $v\neq \mathrm e^{-i\omega t}$, even though the physics are essentially the same. A second point where I kind of disagree with Orca is in the statement To begin with, I will take the simple case of a free electron of momentum... By using a free particle, he/she is neglecting the most important point of your question: why can we treat the electrons as free, when in fact they are immersed in a lattice. Once we know that the electrons behave as free, we can study them Quantum mechanically or classically, depending on the temperature, carrier density, kinetic energy, etc. Orca is right that both QM and CM agree on their prediction for free particles, but we must argue why we are allowed to treat the electrons as free to begin with. This is answered in any good book on Solid State Physics, so I won't explain the details here, but in a nutshell the conclusion is that (because of Bloch's Theorem, or $k\cdot p$ theory, etc) we know that the effective Hamiltonian, once we "integrate out" the interaction of the electrons with the lattice, is quadratic in $\boldsymbol k$. Therefore, the electrons behave as free particles, where we define the effective mass as the parameter that appears in this effective Hamiltonian: $$ H_\text{eff}\equiv\frac{\boldsymbol k^2}{2m^*}+\cdots $$ where this relation defines $m^*$. 
This means that the effect of the lattice on the electrons is to make them move as free particles with a different mass. Or put it another way: for most materials, the band structure is approximately parabolic. If you search band structure on google you'll see that the energy looks like parabolas, that is, $$ H\approx a\boldsymbol k^2+\mathcal O(k^3) $$ where the approximation holds best if the excitation is not too high (that is, we are not far from the minima of these parabolas). This is not always true: in some materials, such as graphene, the band structure looks like $|\boldsymbol k|$ instead of $\boldsymbol k^2$ (google Dirac cones). When $H= a \boldsymbol k^2+\cdots$ we can define $a=\frac{1}{2m^*}$ for a certain parameter $m^*$ with units of mass. With this definition, the Hamiltonian looks like the Hamiltonian of a free particle even though it really isn't. The electrons are not free - in fact, in general they are tightly bound - but if you approximate the energy with a parabola, the spectrum looks like that of a free particle, but with a different mass. Note that the band structure can be both theoretically calculated and experimentally measured, and most of the times it indeed looks parabolic. This means that the "effective free particle approximation" is very well justified, both by theory and experiments. Once we know that electrons can be treated as free, we can ask ourselves whether we need to use Quantum Mechanics to study its dynamics, or we can use Classical Mechanics to get an approximate description. Orca's nice answer proves that actually both methods agree, so we can use whichever we like the most. There is something weird about magnetic fields and Quantum Mechanics: we very often get the same prediction if we use Classical Mechanics or (first order perturbation theory of) Quantum Mechanics. For example, the Zeeman effect can be both studied with CM and QM. 
Anyway, to answer your questions: Why is it physically allowed to assume the electron can be treated classically? What's the key idea behind this approximation in such contexts? In this case, the approximation is allowed because the temperature is such that $kT\gg \hbar\omega_c$, where $\omega_c$ is the cyclotron frequency. Therefore, the Landau levels $n_c$ are highly excited, $n_c\to\infty$, which means that the system is essentially classical. This is not always true: for example, to measure the electron magnetic moment in a Penning trap, they use a strong magnetic field and a very low temperature such that $kT\sim\hbar\omega_c$. In this case, the quantum effects are non-negligible, and we can't use CM. But to measure $m^*$ we use high temperatures so we can assume that the dynamics are classical. Is the ansatz $v=v_0\; \mathrm e^{-i\omega t}$ a sound choice here because we are treating the electron classically or there is another unrelated underlying reason? As I said above, the reason is mathematical: it has nothing to do with QM nor any other physical reason. It's simply because the differential equation is linear.
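To make the two points above concrete, here is a short worked example (my addition, not part of the original answer): substituting the exponential ansatz into equation (1) recovers the cyclotron frequency, and a quick numerical estimate shows why $kT\gg\hbar\omega_c$ holds at ordinary temperatures.

```latex
% Worked example (added for illustration; signs depend on convention).
% With $\mathbf B = B\hat z$, the combinations $v_\pm = v_x \pm i v_y$
% decouple the equation of motion (1):
\begin{align}
  m\dot v_\pm &= \pm\, ieB\, v_\pm - \frac{m}{\tau}\, v_\pm ,
\intertext{and substituting the ansatz $v_\pm = v_0\,\mathrm e^{-i\omega t}$ gives}
  -i\omega &= \pm\, i\omega_c - \frac{1}{\tau},
  \qquad \omega_c \equiv \frac{eB}{m},
\end{align}
% so the response is resonant at the cyclotron frequency, broadened by $1/\tau$.
% Rough numbers for a free electron in $B = 1\,\mathrm{T}$:
%   $\omega_c = eB/m \approx 1.8\times10^{11}\,\mathrm{rad/s}$,
%   $\hbar\omega_c \approx 1.2\times10^{-4}\,\mathrm{eV}$,
% while $kT \approx 2.6\times10^{-2}\,\mathrm{eV}$ at $300\,\mathrm{K}$,
% so $kT/\hbar\omega_c \sim 200$: the Landau levels are highly excited
% and the classical treatment is justified.
```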
{ "domain": "physics.stackexchange", "id": 29868, "tags": "quantum-mechanics, newtonian-mechanics, electromagnetism, electrons, semiclassical" }
Why do gummy bears explode when added to hot potassium chlorate?
Question: This link shows that a gummy bear explodes when in contact with heated potassium chlorate, $\ce{KClO3}$. But what in a gummy bear creates this reaction? Also, do other foods (fruit, icing sugar...) react as violently with potassium chlorate? Answer: Potassium chlorate is a source of oxygen. After heating, it decomposes to $\ce{O2}$ and $\ce{KCl}$: $$\ce{4 KClO3 → KCl + 3 KClO4}$$ $$\ce{KClO4 → KCl + 2O2}$$ The gummy bear is mainly composed of sugar and other carbohydrates. Those carbohydrates will react with oxygen, combustion occurs. For example, glucose will react in this manner: $$\ce{6O2 + C6H12O6 -> 6CO2 + 6H2O}$$ If there is any material present which does not burn, such as $\ce{H2O}$, the temperature will not rise as high. For gummy bears the reaction works spectacularly because they are mainly carbohydrates (>70%). An apple, for example, has only ~13% carbohydrates – unless you dry it, of course. On the other hand, this video on YouTube is an example of how sugar itself reacts violently with potassium chlorate.
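For completeness, here is the overall stoichiometry (my addition, not part of the original answer): multiplying the second step by 3 and adding it to the first gives the familiar net decomposition, which also shows how much oxygen is released.

```latex
% Net reaction (added for illustration): combine the two steps above.
\begin{align}
  \ce{4 KClO3 &-> KCl + 3 KClO4} \\
  \ce{3 KClO4 &-> 3 KCl + 6 O2} \\
  \intertext{Summing and simplifying gives the net decomposition:}
  \ce{2 KClO3 &-> 2 KCl + 3 O2}
\end{align}
% By mass: 2 mol KClO3 ($2 \times 122.55\ \mathrm{g} = 245.1\ \mathrm{g}$)
% release 3 mol O2 ($96.0\ \mathrm{g}$), so roughly 39\% of the salt's
% mass becomes oxygen gas available for combustion.
```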
{ "domain": "chemistry.stackexchange", "id": 2524, "tags": "redox, combustion, carbohydrates" }
Hi, when i run catkin_make i have an error with nodelet
Question: this is result of catkin_make : Base path: /home/fatima/catkin_ws1 Source space: /home/fatima/catkin_ws1/src Build space: /home/fatima/catkin_ws1/build Devel space: /home/fatima/catkin_ws1/devel Install space: /home/fatima/catkin_ws1/install #### #### Running command: "make cmake_check_build_system" in "/home/fatima/catkin_ws1/build" #### -- Using CATKIN_DEVEL_PREFIX: /home/fatima/catkin_ws1/devel -- Using CMAKE_PREFIX_PATH: /home/fatima/catkin_ws1/devel;/opt/ros/indigo -- This workspace overlays: /home/fatima/catkin_ws1/devel;/opt/ros/indigo -- Using PYTHON_EXECUTABLE: /usr/bin/python -- Using Debian Python package layout -- Using empy: /usr/bin/empy -- Using CATKIN_ENABLE_TESTING: ON -- Call enable_testing() -- Using CATKIN_TEST_RESULTS_DIR: /home/fatima/catkin_ws1/build/test_results -- Found gtest sources under '/usr/src/gtest': gtests will be built -- Using Python nosetests: /usr/bin/nosetests-2.7 -- catkin 0.6.16 -- BUILD_SHARED_LIBS is on -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- ~~ traversing 1 packages in topological order: -- ~~ - beginner2_tutorials -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- +++ processing catkin package: 'beginner2_tutorials' -- ==> add_subdirectory(beginner2_tutorials) -- Using these message generators: gencpp;genlisp;genpy -- beginner2_tutorials: 1 messages, 1 services CMake Error at /opt/ros/indigo/share/catkin/cmake/catkin_package.cmake:217 (message): catkin_package() DEPENDS on the catkin package 'nodelet' which must therefore be listed as a run dependency in the package.xml Call Stack (most recent call first): /opt/ros/indigo/share/catkin/cmake/catkin_package.cmake:98 (_catkin_package) beginner2_tutorials/CMakeLists.txt:13 (catkin_package) -- Configuring incomplete, errors occurred! See also "/home/fatima/catkin_ws1/build/CMakeFiles/CMakeOutput.log". See also "/home/fatima/catkin_ws1/build/CMakeFiles/CMakeError.log". 
make: *** [cmake_check_build_system] Error 1 Invoking "make cmake_check_build_system" failed fatima@fatima-K55VD:~/catkin_ws1$ ^C fatima@fatima-K55VD:~/catkin_ws1$ catkin_make Base path: /home/fatima/catkin_ws1 Source space: /home/fatima/catkin_ws1/src Build space: /home/fatima/catkin_ws1/build Devel space: /home/fatima/catkin_ws1/devel Install space: /home/fatima/catkin_ws1/install #### #### Running command: "make cmake_check_build_system" in "/home/fatima/catkin_ws1/build" #### -- Using CATKIN_DEVEL_PREFIX: /home/fatima/catkin_ws1/devel -- Using CMAKE_PREFIX_PATH: /home/fatima/catkin_ws1/devel;/opt/ros/indigo -- This workspace overlays: /home/fatima/catkin_ws1/devel;/opt/ros/indigo -- Using PYTHON_EXECUTABLE: /usr/bin/python -- Using Debian Python package layout -- Using empy: /usr/bin/empy -- Using CATKIN_ENABLE_TESTING: ON -- Call enable_testing() -- Using CATKIN_TEST_RESULTS_DIR: /home/fatima/catkin_ws1/build/test_results -- Found gtest sources under '/usr/src/gtest': gtests will be built -- Using Python nosetests: /usr/bin/nosetests-2.7 -- catkin 0.6.16 -- BUILD_SHARED_LIBS is on -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- ~~ traversing 1 packages in topological order: -- ~~ - beginner2_tutorials -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- +++ processing catkin package: 'beginner2_tutorials' -- ==> add_subdirectory(beginner2_tutorials) -- Using these message generators: gencpp;genlisp;genpy -- beginner2_tutorials: 1 messages, 1 services CMake Error at /opt/ros/indigo/share/catkin/cmake/catkin_package.cmake:217 (message): catkin_package() DEPENDS on the catkin package 'nodelet' which must therefore be listed as a run dependency in the package.xml Call Stack (most recent call first): /opt/ros/indigo/share/catkin/cmake/catkin_package.cmake:98 (_catkin_package) beginner2_tutorials/CMakeLists.txt:13 (catkin_package) -- Configuring incomplete, errors occurred! See also "/home/fatima/catkin_ws1/build/CMakeFiles/CMakeOutput.log". 
See also "/home/fatima/catkin_ws1/build/CMakeFiles/CMakeError.log". make: *** [cmake_check_build_system] Error 1 Invoking "make cmake_check_build_system" failed i think as Akif told i should bring it to the package.xml but how? i dont know its my result when i run catkin_make again : Base path: /home/fatima/~catkin_ws1 Source space: /home/fatima/~catkin_ws1/src Build space: /home/fatima/~catkin_ws1/build Devel space: /home/fatima/~catkin_ws1/devel Install space: /home/fatima/~catkin_ws1/install Creating symlink "/home/fatima/~catkin_ws1/src/CMakeLists.txt" pointing to "/opt/ros/indigo/share/catkin/cmake/toplevel.cmake" Running command: "cmake /home/fatima/~catkin_ws1/src -DCATKIN_DEVEL_PREFIX=/home/fatima/~catkin_ws1/devel -DCMAKE_INSTALL_PREFIX=/home/fatima/~catkin_ws1/install -G Unix Makefiles" in "/home/fatima/~catkin_ws1/build" -- The C compiler identification is GNU 4.8.4 -- The CXX compiler identification is GNU 4.8.4 -- Check for working C compiler: /usr/bin/cc -- Check for working C compiler: /usr/bin/cc -- works Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Using CATKIN_DEVEL_PREFIX: /home/fatima/~catkin_ws1/devel -- Using CMAKE_PREFIX_PATH: /opt/ros/indigo -- This workspace overlays: /opt/ros/indigo -- Found PythonInterp: /usr/bin/python (found version "2.7.6") -- Using PYTHON_EXECUTABLE: /usr/bin/python -- Using Debian Python package layout -- Using empy: /usr/bin/empy -- Using CATKIN_ENABLE_TESTING: ON -- Call enable_testing() -- Using CATKIN_TEST_RESULTS_DIR: /home/fatima/~catkin_ws1/build/test_results -- Looking for include file pthread.h -- Looking for include file pthread.h - found -- Looking for pthread_create -- Looking for pthread_create - not found -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not 
found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - found -- Found Threads: TRUE -- Found gtest sources under '/usr/src/gtest': gtests will be built -- Using Python nosetests: /usr/bin/nosetests-2.7 -- catkin 0.6.16 -- BUILD_SHARED_LIBS is on -- Configuring done -- Generating done -- Build files have been written to: /home/fatima/~catkin_ws1/build #### #### Running command: "make -j4 -l4" in "/home/fatima/~catkin_ws1/build" #### so if it was true rosrun beginner_tutorials talker could be run is it true? Originally posted by fatima on ROS Answers with karma: 62 on 2015-12-05 Post score: 1 Original comments Comment by joq on 2015-12-05: We need to see the dependencies section of you package.xml, please. Comment by fatima on 2015-12-07: hi sorry for these days late i have an problem , now i run catkin_make after a few days in root catkin and i receive this result , i edited my last question Comment by fatima on 2015-12-07: i dont know is it true or not? Comment by Akif on 2015-12-08: @fatima, did you get any errors when you remove nodelet dependency from your CMakeLists.txt as @joq suggested? Comment by fatima on 2015-12-08: hi akif, no after a few days when i run catkin_make again i hadnt any error without nodelet error i was surprised i edited my answer and put the result above but yet i have the same problem i couldnt run $ rosrun beginner_tutorials talker so i dont know is the catkin_make run true or not!!!! Comment by Akif on 2015-12-08: @fatima, in your question it is written that the package name is beginner2_tutorials not beginner_tutorials as you try. Can you check it? 
Comment by fatima on 2015-12-08: yes i did that and my result is : fatima@fatima-K55VD:~/catkin_ws1$ rosrun beginner2_tutorials talker [rosrun] Couldn't find executable named talker below /home/fatima/catkin_ws1/src/beginner2_tutorials Comment by fatima on 2015-12-08: i ask new question at link so thank you so much for your help Answer: Why did you list nodelet as a CATKIN_DEPENDS in your CMakeLists.txt? The tutorial does not do that. Try removing it. Maybe you should start over, copying from the tutorial exactly. Originally posted by joq with karma: 25443 on 2015-12-05 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 23157, "tags": "ros, catkin-make, build, cmake, make" }
Toggle the visibility of sections of a form
Question: I wrote this code myself. I have 3 main parts; when I click on any main part, its sub part appears. Each sub part contains some checkboxes. The main functionality is as follows: Click a main part to show its sub part; if the sub part is already displayed, hide it. Highlight the selected main part. Only highlight one main part and its children at a time. When the checkbox div inside a sub part is clicked anywhere, check the checkbox and highlight that part. Hint: Main part (class: filter-3) Sub part (class: filter-option-output) Sub part div (class: common-in-block) Please see my code $(function(){ $(".filter-section-color").on("click",function(){ $(".filter-section-pattern").removeClass("output-active"); $(".filter-section-room").removeClass("output-active"); if ( $( this ).hasClass( "output-active" ) ) { $(".filter-option-output").css("display","none"); $(this).removeClass("output-active"); } else{ $(".filter-option-output").css("display","none"); $(".color-output").toggle(); $(this).addClass("output-active"); } }); $(".filter-section-pattern").on("click",function(){ $(".filter-section-color").removeClass("output-active"); $(".filter-section-room").removeClass("output-active"); if ( $( this ).hasClass( "output-active" ) ) { $(".filter-option-output").css("display","none"); $(this).removeClass("output-active"); } else{ $(".filter-option-output").css("display","none"); $(".pattern-output").toggle(); $(this).addClass("output-active"); } }); $(".filter-section-room").on("click",function(){ $(".filter-section-pattern").removeClass("output-active"); $(".filter-section-color").removeClass("output-active"); if ( $( this ).hasClass( "output-active" ) ) { $(".filter-option-output").css("display","none"); $(this).removeClass("output-active"); } else{ $(".filter-option-output").css("display","none"); $(".room-output").toggle(); $(this).addClass("output-active"); } }); $(".color-box").on("click",function(){
$(".color-box").removeClass("active-box"); $(this).addClass("active-box"); var checkbox1 = $(this).find("input[type='checkbox']"); if(checkbox1.is(':checked')){ checkbox1.prop('checked',''); $(this).removeClass("active-box"); } else{ checkbox1.prop('checked','true'); } }); $(".pattern-box").on("click",function(){ $(".pattern-box").removeClass("active-box"); $(this).addClass("active-box"); var checkbox2 = $(this).find("input[type='checkbox']"); if(checkbox2.is(':checked')){ checkbox2.prop('checked',''); $(this).removeClass("active-box"); } else{ checkbox2.prop('checked','true'); } }); $(".room-box").on("click",function(){ $(".room-box").removeClass("active-box"); $(this).addClass("active-box"); var checkbox3 = $(this).find("input[type='checkbox']"); if(checkbox3.is(':checked')){ checkbox3.prop('checked',''); $(this).removeClass("active-box"); } else{ checkbox3.prop('checked','true'); } }); }); body{ background-color: #d6e9d880 !important; } .filter-section-p{ margin-bottom: 8px; } .filter-arrow{ margin-top: 5px; float: right; } .filter-section{ background: #E5E5E599; text-align:center; padding-top: 18px !important; padding-bottom: 41px; } .filter-section-h2{ margin-bottom: 2px !important; font-size: 24px; text-transform: uppercase; } .filter-icon{ width:35px; float:left; } .filter-inside-p{ float:left; text-transform: uppercase; font-size: 16px; margin-left: 15px; margin-bottom: 0px; } .filter-3{ border:1px solid #912C5E33; float:left; width:27%; margin-left: 3%; padding: 6px; cursor:pointer; } .filter-main-div{ margin-left: 49px; margin-bottom:3px; } .filter-option-output{ border-left: 1px solid #93A8B733; border-right: 1px solid #93A8B733; background: white; position: absolute; margin-top: 58px; z-index: 3; display:none; } .color-backround{ float:left; min-width: 47px; height: 32px; } .color-name{ margin-left: 12px !important; font-size: 14px !important; text-transform: uppercase !important; margin-top: 4px !important; float:left !important; margin-bottom: 15px; } 
.color-value{ float: right !important; margin-top: 11px !important; margin-right: 13px !important; } .common-in-block{ float:left; width:30%; margin-left: 3%; padding-top: 11px; border-right: 1px solid #80808033; padding-bottom: 0px !important; cursor:pointer; } .output-active{ background:white !important; border:2px solid #912C5E33 !important; } .color-block { border-bottom: 1px solid #80808033; overflow: hidden; padding-bottom: 4px; margin-top:4px; } .active-box { background:#f5deb380; } .color-black{ background-color:#252525; } .color-blue{ background-color:#99CEE8; } .color-brown{ background-color:#96776B; } .color-cream{ background-color:#EAE3D9; } .color-green{ background-color:#BED6A4; } .color-grey{ background-color:#919191; } .color-orange{ background-color:#FF9D78; } .color-pink{ background-color:#F9C5CA; } .color-purple{ background-color:#AF9EC7; } .color-red{ background-color:#E95A63; } .color-silver{ background-color:#DDDDDD; } .color-white{ background-color:#F8F8F8; } <html> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css"> <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script> <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script> <body> <div class=" col-sm-12 filter-main-div"> <div class="col-sm-8 filter-main"> <div class="filter-3 filter-section-color"> <img src="https://image.flaticon.com/icons/svg/401/401122.svg" class="filter-icon" /> <p class="filter-inside-p"> Part1</p> <i class="icon-chevron-down filter-arrow"></i> </div> <div class="filter-3 filter-section-pattern"> <img src="https://image.flaticon.com/icons/svg/401/401122.svg" class="filter-icon" /> <p class="filter-inside-p">Part2</p> <i class="icon-chevron-down filter-arrow"></i> </div> <div class="filter-3 filter-section-room"> <img src="https://image.flaticon.com/icons/svg/401/401122.svg" class="filter-icon"/> <p class="filter-inside-p">Part3</p> <i 
class="icon-chevron-down filter-arrow"></i> </div> </div> <div class="col-sm-8 filter-option-output color-output"> <div class="color-block"> <div class="common-in-block color-box"> <div class="color-backround color-cream">&nbsp;</div><p class="color-name">Part1-child1</p> <input type="checkbox" value="cream" class="color-value"> </div> <div class="common-in-block color-box"> <div class="color-backround color-green">&nbsp;</div><p class="color-name">Part1-child2</p> <input type="checkbox" value="green" class="color-value"> </div> <div class="common-in-block color-box"> <div class="color-backround color-grey">&nbsp;</div><p class="color-name">Part1-child3</p> <input type="checkbox" value="grey" class="color-value"> </div> </div> </div> <div class="col-sm-8 filter-option-output pattern-output"> <div class="color-block"> <div class="common-in-block pattern-box"> <div class="color-backround color-black">&nbsp;</div><p class="color-name">Part-2-child1</p> <input type="checkbox" value="black" class="color-value"> </div> <div class="common-in-block pattern-box"> <div class="color-backround color-blue">&nbsp;</div><p class="color-name">Part-2-child2</p> <input type="checkbox" value="blue" class="color-value"> </div> <div class="common-in-block pattern-box"> <div class="color-backround color-red">&nbsp;</div><p class="color-name">Part-2-child3</p> <input type="checkbox" value="brown" class="color-value"> </div> </div> </div> <div class="col-sm-8 filter-option-output room-output"> <div class="color-block"> <div class="common-in-block room-box"> <div class="color-backround color-black">&nbsp;</div><p class="color-name">Part#3-child1</p> <input type="checkbox" value="black" class="color-value"> </div> <div class="common-in-block room-box"> <div class="color-backround color-blue">&nbsp;</div><p class="color-name">Part#3-child2</p> <input type="checkbox" value="blue" class="color-value"> </div> <div class="common-in-block room-box"> <div class="color-backround 
color-red">&nbsp;</div><p class="color-name">Part#3-child3</p> <input type="checkbox" value="brown" class="color-value"> </div> </div> </div> </div> </body> </html> Please copy this code into your localhost and check it; all of the functionality works correctly. However, I feel the logic I applied is not that good, and the code can still be reduced. Please review it and suggest where I can reduce the code. Answer: As konijn said in his answer, the hasClass -> add/removeClass pattern can be replaced by toggleClass(). if ($(selector).hasClass('someClass')) { $(selector).removeClass('someClass'); } else { $(selector).addClass('someClass'); } can be replaced by $(selector).toggleClass('someClass'); The code $(".someClass").css("display","none"); appears in both the if and the else block in all three functions, so it can be moved outside of them. Also, I'd suggest using hide() instead of applying CSS explicitly, as it is more expressive and easier to understand: $('someSelector').hide(); The code to remove a class $(".filter-section-pattern").removeClass("output-active"); $(".filter-section-room").removeClass("output-active"); can be written by combining the selectors: $(".filter-section-pattern, .filter-section-room").removeClass("output-active"); jQuery takes care of iterating over the elements matched by the selector and performing the appropriate action, removeClass in this case. Now, if you look at the first three event handlers, you'll see that they are all similar: remove some class from some elements, toggle some class on the clicked element, hide some element, and toggle some element depending on a condition. Can we make this code dynamic? To do so, we first need to identify what differs between the handlers; by accessing those differences dynamically, the three functions can be combined. First, we need to add custom attributes to the elements on which the event is bound.
To take an example, I'll use the element below <div class="filter-3 filter-section-color"> We'll add a custom HTML5 data-* attribute to it to store custom information <div class="filter-3 filter-section-color" data-selector=".filter-section-pattern, .filter-section-room"> Here, we're storing, verbatim, the CSS selectors of the sibling filters that should be deactivated when this element is clicked. To access this from JavaScript when the element is clicked, data() can be used: $(this).data('selector') Note that $(this) inside an event handler refers to the element on which the event occurred. We're ready! Now, to bind the event on all three elements, we pass them as a comma-separated list of selectors, just as in CSS. Here's the code for the first three event handlers // Bind event on all the elements $(".filter-section-color, .filter-section-pattern, .filter-section-room").on("click", function() { // Hide $(".filter-option-output").hide(); // Get target selector and remove class $($(this).data("selector")).removeClass("output-active"); if ($(this).hasClass("output-active") === false) { $($(this).data('targetSelector')).toggle(); } // Toggle class on clicked element $(this).toggleClass("output-active"); }); Similarly, the last three event handlers can also be combined // Bind event on all elements $(".color-box, .pattern-box, .room-box").on("click", function() { // Dynamic $("." + $(this).data("myClass")).removeClass("active-box"); var $checkbox = $(this).find("input[type='checkbox']"); $checkbox.prop('checked', !$checkbox.is(':checked')); // The checkbox has already been flipped above, so the box should be // active exactly when the checkbox is now checked. $(this).toggleClass("active-box", $checkbox.is(':checked')); }); This also requires changes to the HTML of the elements on which the event is bound. As can be seen from the code, a data-my-class custom attribute should be added whose value is the element's own group class. For example, on the element <div class="common-in-block color-box">, adding the custom attribute gives <div class="common-in-block color-box" data-my-class="color-box">.
This is required because an element can have multiple classes, and picking out the right one would need a cumbersome lookup or a long if...else if chain. $(document).ready(function() { $(".filter-section-color, .filter-section-pattern, .filter-section-room").on("click", function() { $(".filter-option-output").hide(); $($(this).data("selector")).removeClass("output-active"); if ($(this).hasClass("output-active") === false) { $($(this).data('targetSelector')).toggle(); } $(this).toggleClass("output-active"); }); $(".color-box, .pattern-box, .room-box").on("click", function() { $("." + $(this).data("myClass")).removeClass("active-box"); var $checkbox = $(this).find("input[type='checkbox']"); $checkbox.prop('checked', !$checkbox.is(':checked')); $(this).toggleClass("active-box", $checkbox.is(':checked')); }); }); body { background-color: #d6e9d880 !important; } .filter-section-p { margin-bottom: 8px; } .filter-arrow { margin-top: 5px; float: right; } .filter-section { background: #E5E5E599; text-align: center; padding-top: 18px !important; padding-bottom: 41px; } .filter-section-h2 { margin-bottom: 2px !important; font-size: 24px; text-transform: uppercase; } .filter-icon { width: 35px; float: left; } .filter-inside-p { float: left; text-transform: uppercase; font-size: 16px; margin-left: 15px; margin-bottom: 0px; } .filter-3 { border: 1px solid #912C5E33; float: left; width: 27%; margin-left: 3%; padding: 6px; cursor: pointer; } .filter-main-div { margin-left: 49px; margin-bottom: 3px; } .filter-option-output { border-left: 1px solid #93A8B733; border-right: 1px solid #93A8B733; background: white; position: absolute; margin-top: 58px; z-index: 3; display: none; } .color-backround { float: left; min-width: 47px; height: 32px; } .color-name { margin-left: 12px !important; font-size: 14px !important; text-transform: uppercase !important; margin-top: 4px !important; float: left !important; margin-bottom: 15px; } .color-value { float: right !important; margin-top: 11px !important; 
margin-right: 13px !important; } .common-in-block { float: left; width: 30%; margin-left: 3%; padding-top: 11px; border-right: 1px solid #80808033; padding-bottom: 0px !important; cursor: pointer; } .output-active { background: white !important; border: 2px solid #912C5E33 !important; } .color-block { border-bottom: 1px solid #80808033; overflow: hidden; padding-bottom: 4px; margin-top: 4px; } .active-box { background: #f5deb380; } .color-black { background-color: #252525; } .color-blue { background-color: #99CEE8; } .color-brown { background-color: #96776B; } .color-cream { background-color: #EAE3D9; } .color-green { background-color: #BED6A4; } .color-grey { background-color: #919191; } .color-orange { background-color: #FF9D78; } .color-pink { background-color: #F9C5CA; } .color-purple { background-color: #AF9EC7; } .color-red { background-color: #E95A63; } .color-silver { background-color: #DDDDDD; } .color-white { background-color: #F8F8F8; } <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css"> <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script> <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script> <div class=" col-sm-12 filter-main-div"> <div class="col-sm-8 filter-main"> <div class="filter-3 filter-section-color" data-selector=".filter-section-pattern, .filter-section-room" data-target-selector=".color-output"> <img src="https://image.flaticon.com/icons/svg/401/401122.svg" class="filter-icon" /> <p class="filter-inside-p"> Part1</p> <i class="icon-chevron-down filter-arrow"></i> </div> <div class="filter-3 filter-section-pattern" data-selector=".filter-section-color, .filter-section-room" data-target-selector=".pattern-output"> <img src="https://image.flaticon.com/icons/svg/401/401122.svg" class="filter-icon" /> <p class="filter-inside-p">Part2</p> <i class="icon-chevron-down filter-arrow"></i> </div> <div class="filter-3 
filter-section-room" data-selector=".filter-section-color, .filter-section-pattern" data-target-selector=".room-output"> <img src="https://image.flaticon.com/icons/svg/401/401122.svg" class="filter-icon" /> <p class="filter-inside-p">Part3</p> <i class="icon-chevron-down filter-arrow"></i> </div> </div> <div class="col-sm-8 filter-option-output color-output"> <div class="color-block"> <div class="common-in-block color-box" data-my-class="color-box"> <div class="color-backround color-cream">&nbsp;</div> <p class="color-name">Part1-child1</p> <input type="checkbox" value="cream" class="color-value"> </div> <div class="common-in-block color-box" data-my-class="color-box"> <div class="color-backround color-green">&nbsp;</div> <p class="color-name">Part1-child2</p> <input type="checkbox" value="green" class="color-value"> </div> <div class="common-in-block color-box" data-my-class="color-box"> <div class="color-backround color-grey">&nbsp;</div> <p class="color-name">Part1-child3</p> <input type="checkbox" value="grey" class="color-value"> </div> </div> </div> <div class="col-sm-8 filter-option-output pattern-output"> <div class="color-block"> <div class="common-in-block pattern-box" data-my-class="pattern-box"> <div class="color-backround color-black">&nbsp;</div> <p class="color-name">Part-2-child1</p> <input type="checkbox" value="black" class="color-value"> </div> <div class="common-in-block pattern-box" data-my-class="pattern-box"> <div class="color-backround color-blue">&nbsp;</div> <p class="color-name">Part-2-child2</p> <input type="checkbox" value="blue" class="color-value"> </div> <div class="common-in-block pattern-box" data-my-class="pattern-box"> <div class="color-backround color-red">&nbsp;</div> <p class="color-name">Part-2-child3</p> <input type="checkbox" value="brown" class="color-value"> </div> </div> </div> <div class="col-sm-8 filter-option-output room-output"> <div class="color-block"> <div class="common-in-block room-box" data-my-class="room-box"> 
<div class="color-backround color-black">&nbsp;</div> <p class="color-name">Part#3-child1</p> <input type="checkbox" value="black" class="color-value"> </div> <div class="common-in-block room-box" data-my-class="room-box"> <div class="color-backround color-blue">&nbsp;</div> <p class="color-name">Part#3-child2</p> <input type="checkbox" value="blue" class="color-value"> </div> <div class="common-in-block room-box" data-my-class="room-box"> <div class="color-backround color-red">&nbsp;</div> <p class="color-name">Part#3-child3</p> <input type="checkbox" value="brown" class="color-value"> </div> </div> </div> </div>
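A further simplification is possible. The following is my own untested, browser-side sketch (not code from the original answer): the data-selector attribute can be dropped entirely, because "every filter except the clicked one" can be expressed with jQuery's .not() method, given that all three filter buttons already share the filter-3 class:

```javascript
// Untested sketch: deactivate all sibling filters with .not(this)
// instead of listing their selectors in a data-selector attribute.
$(".filter-3").on("click", function () {
  $(".filter-option-output").hide();
  $(".filter-3").not(this).removeClass("output-active");
  // toggle(state): show the target output only if this filter was not yet active.
  $($(this).data("targetSelector")).toggle(!$(this).hasClass("output-active"));
  $(this).toggleClass("output-active");
});
```

The data-target-selector attribute is still needed, since it is what maps each filter button to its output panel.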
{ "domain": "codereview.stackexchange", "id": 25528, "tags": "javascript, jquery, html, form, jquery-ui" }
Ways to speed up a Recursive Backtracking Algorithm
Question: When dealing with a recursive backtracking algorithm, what are the ways to speed it up, and what computational hardware is involved? I'm assuming, from ignorance, that everything is done by the CPU, so the only things that come to mind to make it faster are early exits, multithreading, or quantum computing. Is that it? Can a GPU be used, or do GPUs only work for mathematical operations? TL;DR: How can you speed up a recursive backtracking algorithm, via hardware or via software? Edit: Assume something simple, like having a goal amount and multiple coins, then having to find which combinations of coins add up to that goal amount. Answer: I think you pretty much got it there. There really aren't many ways to improve it; our best method is a slow one! (Though our human brains instinctively would love to find something better for such a simple problem.) When I think of recursive backtracking, I think of an algorithm to solve a Rubik's Cube or a Sudoku puzzle, which is just brute-forcing the problem, and it'll be slow no matter what. There are millions or billions of pathways to check to see if we get a solution, and even then we might not get the best one. However, if you're willing to step away from recursive backtracking, there are some algorithms that will "smartly" find you a good answer really efficiently. When a human tries to solve a Rubik's Cube, they don't try every possible combination. Computerphile has a good video on it: https://www.youtube.com/watch?v=ySN5Wnu88nE The general idea is that you model all of your possible options/choices/moves as a massive decision tree, or graph. You're attempting to find the shortest path from the start to the solution, but because we don't know how far away from the solution we are, we use a heuristic function, which gives us a guess of the remaining distance. This is the A* algorithm: almost like Dijkstra's algorithm, but guided by the heuristic function.
We selectively explore the paths which we guess will lead to the solution the quickest. I once applied this to a bubble-sorting game: my brute force explored 100,000 different pathways and gave you the BEST solution, which solves the game in only 9 steps. My A* algorithm with a good heuristic explored 100 pathways and gave a really good solution (but not the best), which took 10 steps. Roughly 1000x faster, but not guaranteed to be the best solution.
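The coin example from the question's edit can be sketched directly. The following is my own illustration (not code from either post), assuming each coin is usable at most once; the "early exit" mentioned in the question is the `remaining < 0` check, which prunes an entire subtree as soon as the running total overshoots the goal:

```javascript
// Recursive backtracking: find all subsets of `coins` that sum to `goal`.
function coinCombinations(coins, goal) {
  const results = [];
  function backtrack(start, remaining, chosen) {
    if (remaining === 0) {            // found a valid combination
      results.push(chosen.slice());
      return;
    }
    if (remaining < 0) return;        // early exit: overshot the goal, prune
    for (let i = start; i < coins.length; i++) {
      chosen.push(coins[i]);                          // choose
      backtrack(i + 1, remaining - coins[i], chosen); // explore
      chosen.pop();                                   // un-choose (backtrack)
    }
  }
  backtrack(0, goal, []);
  return results;
}

console.log(coinCombinations([1, 2, 3, 4], 5)); // [[1, 4], [2, 3]]
```

Sorting the coins first would allow an even stronger early exit: the loop can break as soon as a single coin exceeds `remaining`, since all later coins would too.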
{ "domain": "cs.stackexchange", "id": 21343, "tags": "algorithm-analysis, backtracking" }