How can I contribute to the scientific community using my telescope?
Question: I've been using my Meade 90 ETX for a few years trying to become a better astronomer and am familiar enough with the basics to navigate my way around. I was wondering in what way an amateur astronomer such as myself can contribute to the scientific community. I would like to try and do more than just stargaze and take pictures. Answer: Amateurs do useful scientific work by being many-handed and widely spread. Many supernovae are discovered by amateurs. You need a set-up that can image one galaxy after another and look for any "new stars" that appear in them. As you are looking for the appearance of a mag-14 star you need a big mirror; 90 mm might not be enough. However, look to the Backyard Observatory Supernova Search for more information. With the right equipment, you might find 1 SN for every 5000 galaxies imaged. You can also contribute by observing asteroid occultations. These occur when an asteroid passes in front of a star, blocking out its light. By timing the occultation you can get an idea of the size of the asteroid in one cross-section. If multiple people observe the same occultation from different locations, you can get a picture of the shape of the asteroid. See the asteroid occultation site, which links to FAQs and a page for submitting reports.
{ "domain": "astronomy.stackexchange", "id": 2715, "tags": "amateur-observing, fundamental-astronomy" }
what is it called: box potential with one infinite wall
Question: The finite square well and the infinite square well problems are well known; however, is there a reason that there is almost no reference to the one-sided infinite square well? Consider a particle with mass $m$ moving in the one-dimensional potential $V(x)= \infty$ for $x<0$, $V(x) = -V_{0}$ for $0\le x \le L$, $V(x) = 0$ for $x>L$. i) Can you scale this problem so that all units drop out? ii) Can you find the bound-state eigenenergies and associated wavefunctions as a function of the parameter $\lambda$? iii) Can you find the free eigenstates, which are characterized by the eigenenergies $E\ge 0$? I searched Griffiths' Quantum Mechanics, but it didn't have any clue about how to solve this. Can anybody tell me the proper formal name, so I can look it up, or tell me why no such name exists? Answer: The one-sided infinite square well eigenfunctions are all the odd-numbered eigenfunctions of a finite square well twice as wide, by reflection symmetry. The odd-parity solutions obey the boundary condition of the infinite wall at $x=0$, so this is exactly the same problem as the symmetric finite square well.
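To make (i) and (ii) concrete: after scaling, the bound states of the one-sided well are the odd-parity roots of the symmetric finite well's matching condition, $z \cot z = -\sqrt{z_0^2 - z^2}$ with $z = kL$ and $z_0 = L\sqrt{2mV_0}/\hbar$. A minimal numerical sketch (the value of $z_0$ is an assumption chosen only for illustration):

```python
import math

def bisect(f, a, b, tol=1e-12):
    """Simple bisection root finder; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or (b - a) < tol:
            return m
        if (fm > 0) == (fa > 0):
            a, fa = m, fm
        else:
            b = m
    return 0.5 * (a + b)

def odd_condition(z, z0):
    # Matching condition for the odd-parity states of a symmetric finite well
    # of half-width L -- equivalently, ALL bound states of the one-sided well:
    #   z cot z = -sqrt(z0^2 - z^2),  z = k L,  z0 = L sqrt(2 m V0) / hbar
    return z / math.tan(z) + math.sqrt(z0 ** 2 - z ** 2)

z0 = 5.0  # assumed dimensionless well strength (illustrative value)
roots = []
n = 1
while (2 * n - 1) * math.pi / 2 < z0:
    a = (2 * n - 1) * math.pi / 2 + 1e-9   # just above (2n-1) pi/2
    b = min(n * math.pi, z0) - 1e-9        # below n pi and below z0
    if a < b and odd_condition(a, z0) * odd_condition(b, z0) < 0:
        roots.append(bisect(lambda z: odd_condition(z, z0), a, b))
    n += 1

energies = [(z / z0) ** 2 - 1.0 for z in roots]  # E / V0 (negative => bound)
print(roots, energies)  # two bound states for z0 = 5
```

For $z_0 = 5$ this finds the two odd-parity levels of the doubled well, i.e. the two bound states of the one-sided well.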
{ "domain": "physics.stackexchange", "id": 10007, "tags": "quantum-mechanics, terminology" }
odometry opposite direction
Question: Hello, I recently started with SLAM, but there is a problem that bothers me deeply. In RViz, the car's odometry is shown in the opposite direction: when the car moves forward, the odometry arrow points backward. Could someone help me solve this? Thank you very much!! Originally posted by zhangstar on ROS Answers with karma: 1 on 2017-08-09 Post score: 0 Answer: I see two likely reasons for the issue: (1) Your rviz fixed frame is set to a non-standard frame. This could result in all sorts of "weird" motion being displayed. (2) Odometry is indeed buggy and has a wrong sign or so. You'll probably want to have a look at the output of rostopic echo odom and look at the raw numbers. You can update your question with this output, too. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2017-08-09 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by zhangstar on 2017-08-10: Hello! I checked the fixed frame; it is fine. And I have looked at rostopic echo /odom: the value of x is correct when I move the car forward. I found the problem may be in the robot_pose_ekf package, because I don't understand how tf links to the odom.
{ "domain": "robotics.stackexchange", "id": 28566, "tags": "navigation, odometry, rviz" }
how to compute the centroid of the PointCloud2
Question: I am wondering how I can compute the centroid of a sensor_msgs/PointCloud2. Is there a direct way to do this? I only found the pcl::CentroidPoint<PointT> class template reference (http://docs.pointclouds.org/trunk/classpcl_1_1_centroid_point.html#details) on the website. Originally posted by zhonghao on ROS Answers with karma: 27 on 2018-11-14 Post score: 0 Answer: You'll first need to convert the sensor_msgs/PointCloud2 into a pcl::PointCloud object as below. Note you may have to change the type of the point cloud to match what you have.

#include <pcl/conversions.h>
...
pcl::PCLPointCloud2 point_cloud2;        // Input PointCloud2 message
pcl::PointCloud<pcl::PointXYZ> pclObj;   // PCL version of the point cloud
// Do the conversion
pcl::fromPCLPointCloud2(point_cloud2, pclObj);

Now you can loop through the pclObj.points vector, adding them to a pcl::CentroidPoint object, and then extract the centroid at the end. I don't think there's a direct method to get the centroid from the PointCloud2 message itself. Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-11-14 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by alessiadele on 2019-05-05: Hi! I am trying to convert an image into a point cloud using PCL, and I am using the lines of code you have written; but my input is different, so I don't know how to change the line: pcl::PCLPointCloud2 point_cloud2; How can I use an image as input instead of sensor_msgs/PointCloud2? Consider that I have a frame with all the necessary information (intrinsic and extrinsic camera parameters, depth, and so on). Thanks! Comment by PeteBlackerThe3rd on 2019-05-06: Please ask a new question instead of tagging on to the end of this one.
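For reference, the quantity that pcl::CentroidPoint accumulates for plain XYZ points is simply the arithmetic mean of the coordinates, so once you have the points as an array the computation itself is trivial. A sketch of that same computation in Python with NumPy (the point values here are made up for illustration):

```python
import numpy as np

# Toy stand-in for the XYZ points of a converted cloud (an N x 3 array);
# in ROS these would come from the converted pcl::PointCloud.
points = np.array([
    [0.0, 0.0, 0.0],
    [2.0, 0.0, 0.0],
    [0.0, 2.0, 0.0],
    [0.0, 0.0, 2.0],
])

# The centroid is the per-axis mean, which is what accumulating points
# into pcl::CentroidPoint<pcl::PointXYZ> and calling get() yields for
# plain XYZ points.
centroid = points.mean(axis=0)
print(centroid)  # [0.5 0.5 0.5]
```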
{ "domain": "robotics.stackexchange", "id": 32044, "tags": "pcl, ros-kinetic, pointcloud" }
About the plight of Elephants and efforts to conserve them?
Question: Poachers are only after the ivory horns, right? I read a story from the New York Times (I think) about Park Rangers and a vet saving a rhinoceros; they tranquilized the rhino and took its horn off so it wouldn't be killed. I know rhinos need their horns, so maybe a hard plastic 'prosthesis' could be put in its place. Maybe the 'prosthesis' could be colored yellow or have some markings on it so any poacher would know it's fake. Of course this could all be done for the elephant too. Could this be a feasible way to save elephants or rhinos? Answer: Dehorning of rhinos has been tried with limited success. Poachers have killed dehorned rhinos anyway, either out of spite or to avoid tracking worthless prey in the future. There is the problem of anesthesia (always a risk) and the fact that rhino horns are usually not destroyed but saved in the event of decriminalization. But at least horn grows back. Considering the strength of horn and ivory, and the force applied in their use, plastic would not be of much use to them (especially elephants with long tusks). They are used for digging for water, salt, and roots; debarking or marking trees; and for moving trees and branches when clearing a path. When fighting, they are used to attack and defend, and to protect the trunk. - Wikipedia To detusk elephants would be akin to declawing a lion, though much more dangerous and painful. Though this might help prevent poaching, the future of these animals lies in prevention.
{ "domain": "biology.stackexchange", "id": 2934, "tags": "zoology, mammals, conservation-biology" }
Is the structure of a crystallized protein the native one?
Question: When one finds the 3D structure of a protein by crystallizing it and then performing an X-ray experiment, how does one know that the geometrical configuration in the crystal is the same as (or even close to) the structure the protein adopts in aqueous solution? Answer: I'll use quotes from B. Rupp, Biomolecular Crystallography (p. 7-8) to answer. Generally the structure is similar... Comparison of many nuclear magnetic resonance (NMR) solution structure ensembles with crystallographic structure has shown that the core structure of protein molecules remains unchanged compared with the solution state during crystallization. In addition, enzymes packed in crystals even maintain biological activity. with some parts not visible in the experiment... The maintenance of the core structure and of enzymatic function shows that crystal structures are a very good approximation of the native protein solution structure. Nonetheless, highly flexible or mobile regions, frequently the amino- or carboxyl-termini of the protein chain or flexible loops connecting secondary structure elements, can be poorly defined or even absent in the electron density and thus can be modeled only with limited confidence. but beware... In certain situation flexible and dynamic regions of a protein molecule can be rigidly fixed in a specific conformation as a result of crystal packing interactions. In most cases this represents just a snapshot of one possible conformation out of many and it must be understood that such a specific conformation may not locally represent the protein structure in solution. A simple safeguard against misinterpretations -- which is usually assignment of certain biological relevance to regions where that is de facto not warranted -- is to display all neighboring, symmetry-related molecules in the crystal structure and examine if any intermolecular interactions are present that are a result of crystal packing. 
Such packing induced artifacts can also hamper for example drug discovery by altering or blocking binding sites and thus preventing an otherwise active substance from binding.
{ "domain": "chemistry.stackexchange", "id": 5355, "tags": "structural-biology, proteins, x-ray-diffraction" }
Doesn't pasteurization kill gut bacteria such as Akkermansia muciniphila?
Question: I've recently watched two presentations on YouTube (this and this) about Akkermansia muciniphila, a commensal bacterium that feeds on the mucus of the gut. What I find confusing is their claim about stabilizing Akkermansia and making it commercially available via pasteurization. How can it be used as a probiotic if they pasteurize it? Doesn't that make it inactive? Answer: This is interesting, since Akkermansia muciniphila is a strict anaerobe and does not produce spores (which was my first thought on it). However, digging up the original publication (see reference 1) solved the question: Unexpectedly, we discovered that pasteurization of A. muciniphila enhanced its capacity to reduce fat mass development, insulin resistance and dyslipidemia in mice. These improvements were notably associated with a modulation of the host urinary metabolomics profile and intestinal energy absorption. We demonstrated that Amuc_1100, a specific protein isolated from the outer membrane of A. muciniphila, interacts with Toll-like receptor 2, is stable at temperatures used for pasteurization, improves the gut barrier and partly recapitulates the beneficial effects of the bacterium. The effect on the experimental animals seems to be mediated by the Amuc_1100 protein, which interacts with Toll-like receptor 2 and is responsible for bacteria-host interactions. It turned out that the protein is stable at the temperatures used for pasteurization. The pasteurized bacteria also seem to influence energy expenditure and decrease food efficiency (see reference 2): We confirmed that daily oral administration of pasteurized A. muciniphila alleviated diet-induced obesity and decreased food energy efficiency. We found that this effect was associated with an increase in energy expenditure and spontaneous physical activity. However, why this effect can be seen with the pasteurized bacteria still seems to be unknown (at least I haven't found a scientific article on it yet). 
References: A purified membrane protein from Akkermansia muciniphila or the pasteurized bacterium improves metabolism in obese and diabetic mice Pasteurized Akkermansia muciniphila increases whole-body energy expenditure and fecal energy excretion in diet-induced obese mice
{ "domain": "biology.stackexchange", "id": 12157, "tags": "microbiology, bacteriology, microbiome, gut-bacteria" }
Ball & Beam System Target Landing
Question: I'm working on a ball and beam project where I'm using an IR sensor to find the distance of the ball along the beam and use a PID controller on an Arduino to control the servo motor and balance the ball in the center of the beam. However, I'm now trying to achieve another task: I'd like to place a cup anywhere under the beam, and place another sensor to detect the location of the cup. Then, based on the location of the cup, I'd like the beam to rotate appropriately to land the ball inside the cup. Here is an illustration of what I mean: I'm unable to figure out how to start or how to do the calculations for the projectiles, since everything seems to be a variable. The x & y variables (distance from ball to cup horizontally and vertically, respectively) depend on the angle of the beam. The velocity of the ball also depends on the angle of the beam: The higher the angle, the faster the velocity is (assuming the ball starts from a set point, such as the center). Finally, the angle of the beam itself is variable and depends on the location of the cup (the closer the cup is, the higher the angle needs to be). How do you suggest I approach such a problem to find the equations needed to ensure the ball lands inside the cup? I'm assuming I'd have to fix one of those variables or make some sort of assumptions to eliminate some of them, but I'm unsure of how to start. Answer: You do not need to make additional assumptions, if I did not make a mistake, the solution is completely determined by the information you provided. Firstly, I use the symbols defined in the following figure: and a time scale so that the Ball is at rest at $t = -T$, it rolls off the beam at $t=0$ and enters the cup at $t = \tau$. This means, the ball's coordinates at $t = -T$ are $$ y(-T) = x(-T) \sin \theta + h~, \qquad x(-T)~, \tag{0} $$ the latter of which you measure. We also know $$ y(0) = - l \sin \theta + h~, \qquad y(\tau) = 0~, \qquad x(0) = -l\cos \theta~, \qquad x(\tau) = - b~. 
\tag{1} $$ The kinetic energy at $t = 0$ is $$ E(0) = (y(-T) - y(0)) mg = \frac 12 m v^2(0)~,\tag{2} $$ with $v$ being the total velocity at that time. It holds $$ \dot y(0) = v \sin \theta~, \qquad \dot x(0) = v \cos \theta~. $$ $$ \Rightarrow v = \frac{\dot y(0)}{\sin \theta} = \frac{\dot x(0)}{\cos \theta}~. $$ Plugging this into (2) yields $$ (y(-T) -y(0)) mg = \frac 12 m \frac{\dot y^2(0)}{\sin^2\theta}~, \qquad (y(-T) - y(0)) m g = \frac 12 m \frac{\dot x^2(0)}{\cos^2 \theta}~. $$ $$ \Rightarrow \dot y(0) = - \sqrt{2 g \sin^2 \theta ~ (y(-T) - y(0))}~, \qquad \dot x(0) = -\sqrt{2 g \cos^2 \theta ~ (y(-T) - y(0))}~. \tag{3} $$ Further, $\forall t \in [0,\tau]$ (with $g > 0$ and $y$ pointing upward, gravity enters as $-\frac 12 g t^2$): $$ y(t) = -\frac 12 g t^2 + \dot y(0) t + y(0)~, \qquad x(t) = \dot x(0) t - l \cos \theta~. $$ Using (1) and (3) I obtain $$ y(t) = -\frac 12 g t^2 - \sqrt{2g \sin^2 \theta ~ (y(-T) - y(0))} t - l \sin \theta + h~, \qquad x(t) = - \sqrt{2 g \cos^2 \theta ~ (y(-T) - y(0))} t - l \cos \theta~. $$ Again using (1) I arrive at the equations $$ y(\tau) = -\frac 12 g \tau^2 - \sqrt{2g \sin^2 \theta ~ (y(-T) - y(0))} \tau - l \sin \theta + h = 0~, $$ $$ x(\tau) = - \sqrt{2 g \cos^2 \theta ~ (y(-T) - y(0))} \tau - l \cos \theta = - b~. $$ And plugging in (0) and, once again, (1) yields $$ -\frac 12 g \tau^2 - \sqrt{2g (x(-T) + l)} \sin^{3/2} \theta ~ \tau - l \sin \theta + h = 0~, $$ $$ - \sqrt{2 g (x(-T) + l)} \sqrt{\sin \theta} \cos \theta ~ \tau - l \cos \theta + b = 0~. $$ In these equations, everything except $\tau$ and $\theta$ should be known, but the second one can easily be solved for $\tau$, which can then be plugged into the first one to get an equation for $\theta$. Looking at all those trigonometric functions, I assume that equation will have to be solved numerically and in theory there should not be a solution for every possible value of the parameters (e.g. if $b$ is sufficiently large, there is no way the ball will get to the cup).
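The two final equations can be solved numerically exactly as described: solve the horizontal equation for $\tau$, substitute into the vertical one, and root-find for $\theta$. A sketch in Python with made-up dimensions ($g$, $l$, $h$, $x(-T)$ and $b$ are all assumed values), taking $y$ upward so the vertical motion is $y(t) = y(0) + \dot y(0)\,t - \tfrac 12 g t^2$:

```python
import math

# Assumed, made-up dimensions (metres): beam half-length l, pivot height h,
# measured ball start coordinate x(-T) along the beam, horizontal cup offset b.
g, l, h = 9.81, 0.5, 1.0
x_T, b = 0.3, 0.8

def height_at_cup(theta):
    """Ball height y(tau) when it reaches the cup's horizontal position.

    The launch speed comes from the height drop (x(-T) + l) sin(theta):
        v = sqrt(2 g (x(-T) + l) sin(theta)),
    and tau follows from the horizontal equation x(tau) = -b.
    The root of this function in theta is the beam angle that lands the
    ball in the cup (y measured upward, so gravity contributes -g t^2 / 2).
    """
    v = math.sqrt(2.0 * g * (x_T + l) * math.sin(theta))
    tau = (b - l * math.cos(theta)) / (v * math.cos(theta))
    return (h - l * math.sin(theta)) - v * math.sin(theta) * tau - 0.5 * g * tau ** 2

# Bracket a sign change, then bisect.
lo, hi = 0.1, 1.2
assert height_at_cup(lo) > 0.0 > height_at_cup(hi)
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if height_at_cup(mid) > 0.0:
        lo = mid
    else:
        hi = mid
theta = 0.5 * (lo + hi)
print(theta)  # required beam angle in radians
```

As anticipated in the answer, the bracket has to be found by inspection, and for some parameter values (e.g. a distant cup) no sign change exists at all.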
{ "domain": "physics.stackexchange", "id": 82408, "tags": "newtonian-mechanics, kinematics, projectile" }
Error in --rosdep-install
Question: Hello everyone, When I run the command "rosmake --rosdep-install rgbdslam" it gives the following error: rosmake: error: no such option: --rosdep-install I installed rosdep using the command "sudo apt-get install python-rosdep". Then I found that the '--rosdep-install' option is only available in ROS Electric and earlier. I'm using Ubuntu 12.04 and ROS Groovy. How can I solve this? Can somebody please help? Thanks in advance. Originally posted by Cham on ROS Answers with karma: 11 on 2013-08-05 Post score: 1 Original comments Comment by tfoote on 2013-08-05: What instructions are you following? Answer: You can call rosdep directly. The change is simply that rosmake will no longer call rosdep for you. Originally posted by tfoote with karma: 58457 on 2013-08-05 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Cham on 2013-08-05: Thank you. I got it :)
{ "domain": "robotics.stackexchange", "id": 15181, "tags": "slam, navigation, rosdep, rosmake" }
What do we know about provably correct programs?
Question: The ever-increasing complexity of computer programs and the increasingly crucial position computers hold in our society leave me wondering why we still don't collectively use programming languages in which you have to give a formal proof that your code works correctly. I believe the term is a 'certifying compiler' (I found it here): a compiler for a programming language in which one not only has to write the code, but also state the specification of the code and prove that the code adheres to the specification (or use an automated prover to do so). While searching the internet, I only found projects that either use a very simple programming language or are failed projects that tried to adapt modern programming languages. This leads me to my question: Are there any certifying compilers implementing a full-blown programming language, or is this very hard/theoretically impossible? Additionally, I've yet to see any complexity class involving provable programs, such as 'the class of all languages decidable by a Turing machine for which a proof exists that this Turing machine halts', which I shall call $ProvableR$, as an analogue to $R$, the set of recursive languages. I can see advantages to studying such a complexity class: for instance, for $ProvableR$ the halting problem is decidable (I even conjecture $ProvableRE$, defined in the obvious way, would be the largest class of languages for which it is decidable). In addition, I doubt we would rule out any practically useful programs: who would use a program when you can't prove it terminates? So my second question is: What do we know about complexity classes that require their member languages to provably have certain properties? Answer: "Certifying compiler" usually means something slightly different: it means that you have a compiler which can prove that the machine code it emits correctly implements the high-level semantics. That is, this is a proof that there are no compiler bugs. 
The programs that people give to the compiler can still be wrong, but the compiler will generate a correct machine-code version of the wrong program. The biggest success story along these lines is the CompCert verified compiler, which is a compiler for a large subset of C. The CompCert compiler itself is a program with a correctness proof (done in Coq), which guarantees that if it generates code for a program, that code will be correct (with respect to the operational semantics of assembly & C that the CompCert designers used). The effort to machine-check these things is quite large; typically the correctness proof will be anywhere from 1x to 100x the size of the program you are verifying. Writing machine-checked programs and proofs is a new skill you have to learn -- it's not mathematics or programming as usual, though it depends on being able to do both well. It feels like you are starting from scratch, like being a novice programmer again. There are no special theoretical barriers to this, though. The only thing along these lines is the Blum size theorem: for any language in which all programs are total, you can find a program that will be at least exponentially larger when written in the total language than in a general recursive language. The way to understand this result is that a total language encodes not just a program, but also a termination proof. So you can have short programs with long termination proofs. However, this doesn't really matter in practice, since we're only ever going to write programs with manageable termination proofs. EDIT: Dai Le asked for some elaboration of the last point. This is mostly a pragmatic claim, based on the fact that if you can understand why a program works, then it's unlikely that the reason is some vast invariant millions of pages long. (The longest invariants I've used are a few pages long, and boy do they make the reviewers grumble! 
Understandably so, too, since the invariant is the reason the program works, stripped of all the narrative that helps people understand it.) But there are also some theoretical reasons as well. Basically, we don't know very many ways to systematically invent programs whose correctness proofs are very long. The main method is to (1) take the logic in which you prove correctness, (2) find a property which can't be directly expressed in that logic (consistency proofs are the typical source), and (3) find a program whose correctness proof relies on a family of expressible consequences of the inexpressible property. Because (2) is inexpressible, this means that the proof of each expressible consequence must be done independently, which lets you blow up the size of the correctness proof. As a simple example, note that in first-order logic with a parent relation, you can't express the ancestor relation. But $k$-ancestry (ancestry of depth $k$) is expressible, for each fixed $k$. So by giving a program that uses some property of ancestry up to some depth (say, 100), you can force a correctness proof in FOL to contain proofs of those properties a hundred times over. The sophisticated take on this subject is called "reverse mathematics", and is the study of which axioms are required to prove given theorems. I don't know that much about it, but if you post a question on its application to CS, I'm sure at least Timothy Chow, and probably several other people, will be able to tell you interesting things.
{ "domain": "cstheory.stackexchange", "id": 3739, "tags": "reference-request, complexity-classes, computability, pl.programming-languages" }
actionlib tutorial inconsistency
Question: The SimpleActionServer(ExecuteCallbackMethod) tutorial has a CMakeLists.txt inconsistency for Catkin/Groovy. At one place it says: find_package(catkin REQUIRED COMPONENTS actionlib_msgs) Note that CMake needs to find_package only actionlib_msgs (not message_generation, which is referenced implicitly by actionlib_msgs). Later it says: find_package(catkin REQUIRED COMPONENTS actionlib message_generation) I assume the second one was not updated and the first example is correct. Please confirm. Originally posted by Dave Coleman on ROS Answers with karma: 1396 on 2013-01-15 Post score: 1 Answer: You're right. The following should be correct. I updated the wiki. Thank you for reporting! find_package(catkin REQUIRED COMPONENTS actionlib_msgs) Originally posted by 130s with karma: 10937 on 2013-01-15 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 12433, "tags": "ros, actionlib-tutorials, actionlib" }
Do you feel gravity?
Question: I have been reading a few articles about the question why we don't feel/notice gravity in everyday life, but I couldn't understand why exactly we don't feel/notice it, that is, why we don't feel a strong force pulling on us at every moment. Answer: Theoretically, the only unambiguous way to "feel" gravity would be to travel near an extremely strong source of gravity, or rather an extremely strong gradient of gravity (e.g. a black hole), and feel tidal forces from such a massive object. (that, or gravitational waves from a binary system of black holes) The rest is a matter of definition. We could define "feeling" gravity as perceiving its effects on our bodies. And the way we perceive the consequences of gravity on Earth is a sensation of the ground "pushing" upwards on our feet, and preventing us from freefall towards the center of the planet. However, don't forget that gravity is basically curvature in spacetime. When you're in freefall, you simply follow a geodesic through the curved spacetime in your vicinity. In this sense, you can't "feel" gravity because there's nothing to be "felt" -- in the absence of other forces, your body is simply following a geodesic that is curved by the presence of a massive body (the earth). If you were inside a free-falling box, you would never be able to tell whether you're in deep space (far away from any massive objects), or plummeting towards the surface of the earth. So, in this sense, gravity cannot be "felt."
{ "domain": "physics.stackexchange", "id": 23306, "tags": "gravity, forces" }
Direct and indirect CP violation
Question: Experimentally, what is the difference between direct and indirect CP violation? An example of indirect CP violation is: $$ \Gamma(\overline{B}^0 \rightarrow B^0) \neq \Gamma(B^0 \rightarrow \overline{B}^0 ).$$ An example of direct CP violation is: $$ \Gamma(\overline{B}^0 \rightarrow \overline{K}) \neq \Gamma(B^0 \rightarrow K ).$$ If we start with a pool of $\overline{B}^0$ mesons, leave them for a bit and then come back and count how many $B^0$ have appeared, we can get $ \Gamma(\overline{B}^0 \rightarrow B^0) $. Similarly, with a pool of $B^0$ mesons we can get $\Gamma(B^0 \rightarrow \overline{B}^0 )$. So what is indirect about this? Answer: The difference, I think, is that $\overline B{}^0 \to B^0$ and $B^0 \to \overline B{}^0$ are inverse processes, and so a difference between them can be ascribed to broken $CP$ symmetry, as you say, or to broken time-reversal symmetry. Only with the additional (strongly motivated) assumption that $CPT$ is an exact symmetry of nature can you definitely say that the two interpretations are identical. By contrast, $\overline B{}^0 \to \overline K$ is time-conjugate to $\overline K\to\overline B{}^0$. So any difference between $\overline B{}^0 \to \overline K$ and $B^0 \to K$ cannot be blamed on $T$-violation, and is therefore a "direct" probe of $CP$.
{ "domain": "physics.stackexchange", "id": 21281, "tags": "particle-physics, cp-violation" }
Is there a relativistic (quantum) thermodynamics?
Question: Does a relativistic version of quantum thermodynamics exist? I.e. in a non-inertial frame of reference, can I, an external observer, calculate quantities like magnetisation within the non-inertial frame? I'd be interested to know if there's a difference between how to treat thermodynamics in a uniformly accelerated reference frame and in a non-uniformly accelerated reference frame. Thanks! Answer: There is a classic treatise, "Relativity, Thermodynamics and Cosmology" by R. C. Tolman, from the 1930s - it is still referenced in papers today. This generalises thermodynamics to Special Relativity and then General Relativity. As a simple example, the transformation law for temperature is stated as: $T=\sqrt{1-v^2/c^2}\,T_0$ when changing to a Lorentz moving frame. Another example is that an "entropy density" $\phi$ is introduced, which is also subject to a Lorentz transformation. Finally this becomes a scalar with an associated "entropy 4-vector" in GR. The Second Law is expressed using these constructs by Tolman. There is some discussion in Misner, Thorne and Wheeler too. Of course both these texts also include lots of regular General Relativity theory which you may not need.
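As a trivial numerical check of the quoted transformation law, here is how the temperature of a gas at rest would appear from a Lorentz-boosted frame (a one-line sketch; the numbers are arbitrary, and this is Tolman's convention as stated above, not the only one found in the literature):

```python
import math

def moving_frame_temperature(T0, v, c=1.0):
    """Tolman's transformation T = sqrt(1 - v^2/c^2) * T0 for a frame
    moving at speed v relative to a gas with rest-frame temperature T0."""
    return math.sqrt(1.0 - (v / c) ** 2) * T0

print(moving_frame_temperature(300.0, 0.6))  # 240.0 K at v = 0.6 c
```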
{ "domain": "physics.stackexchange", "id": 466, "tags": "quantum-mechanics, thermodynamics, relativity" }
Linear algebra for quantum physics
Question: A week ago I asked people on this site what mathematical background was needed for understanding Quantum Physics, and most of you mentioned Linear Algebra, so I decided to conduct a self-study of Linear Algebra. Of course, I'm just 1 week in, but I have some questions. How is this going to be applicable to quantum physics? I have learned about matrices (addition, subtraction, multiplication and inversion) and about how to solve multiple equations with 3 unknowns using matrices, and now I am starting to learn about vectors. I am just 1 week in, so this is probably not even the tip of the iceberg, but I want to know how this is going to help me. Also, say I master Linear Algebra in general in half a year (I'm in high school but I'm extremely fast with maths), what other 'types' of math would I need to self-study before being able to understand rudimentary quantum physics mathematically? Answer: Quantum mechanics "lives" in a Hilbert space, and Hilbert space is "just" an infinite-dimensional vector space, so that the vectors are actually functions. Then the mathematics of quantum mechanics is pretty much "just" linear operators in the Hilbert space.

Quantum mechanics      Linear algebra
-----------------      --------------
wave function          vector
linear operator        matrix
eigenstates            eigenvectors
physical system        Hilbert space
physical observable    Hermitian matrix
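The dictionary above can be made concrete with a few lines of NumPy: take a small Hermitian matrix as a toy "observable", and its eigendecomposition gives the possible measurement outcomes and the corresponding eigenstates (the choice of the Pauli-x matrix here is just an illustrative assumption):

```python
import numpy as np

# A toy "observable": a 2x2 Hermitian matrix (the Pauli-x matrix).
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Eigenvalues are the possible measurement outcomes; the columns of
# `eigenvectors` are the corresponding eigenstates.
eigenvalues, eigenvectors = np.linalg.eigh(H)
print(eigenvalues)  # -1 and +1

# Eigenstates of a Hermitian operator are orthonormal.
overlap = eigenvectors.T @ eigenvectors
print(np.allclose(overlap, np.eye(2)))  # True
```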
{ "domain": "physics.stackexchange", "id": 49289, "tags": "quantum-mechanics, resource-recommendations, soft-question, education, linear-algebra" }
measure angle between two lines, can rotation center constraint be used to refine the initial measured result?
Question: I'm trying to measure the angle between two needle positions in an analog meter (gauge), such as: After processing several images, I get the processed image below (an image showing different needle positions): Here, the black dots are the feature points of the needle area, so I use a line-fitting algorithm (usually the least-squares method, or a Hough transform to find the line) to get several line positions. I can get L1 and L2, so the angle between those two lines is theta1. But the needle can be located in many different places, so I can finally get many positions such as L3, L4, ..., Ln. Those lines should ideally intersect at one point, which is the rotation center of the needle. I can calculate the equivalent intersection point of all those lines (the red dot), since it is the rotation constraint of the movement. So, my question is: how can the equivalent point be used to refine the final angle? I mean, I want to get a better value of theta1, theta2, theta3, ... Thanks. Answer: You could recompute the lines with the constraint that they all go through the red point. If you normally optimize a line $$y=ax+b$$ by optimizing both parameters $a$ and $b$, you would now only have 1 degree of freedom for each line. If the red point has coordinates $(x_1,y_1)$ then you get the constrained line equation $$y=a(x-x_1)+y_1$$ with only one parameter $a$. Now you compute $a$ such that the squared distance to the other dots is minimized, and no matter which value of $a$ you obtain, the line will always pass through $(x_1,y_1)$.
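The constrained fit described in the answer has a closed form: minimizing $\sum_i \bigl(y_i - a(x_i - x_1) - y_1\bigr)^2$ over $a$ gives $a = \sum_i (x_i-x_1)(y_i-y_1) \,/\, \sum_i (x_i-x_1)^2$. A sketch with synthetic data (the point values and the rotation centre are made up for illustration):

```python
import numpy as np

def fit_line_through_point(x, y, x1, y1):
    """Least-squares slope of y = a*(x - x1) + y1, i.e. the best line
    constrained to pass through the fixed point (x1, y1).

    Setting d/da sum((y_i - a*(x_i - x1) - y1)^2) = 0 gives
    a = sum((x_i - x1)*(y_i - y1)) / sum((x_i - x1)^2).
    """
    dx = np.asarray(x) - x1
    dy = np.asarray(y) - y1
    return float(np.sum(dx * dy) / np.sum(dx * dx))

# Synthetic needle feature points scattered around a line through the
# assumed rotation centre (x1, y1) = (0, 0) with true slope 2.
rng = np.random.default_rng(0)
x = np.linspace(1.0, 5.0, 20)
y = 2.0 * x + rng.normal(0.0, 0.01, x.size)

a = fit_line_through_point(x, y, 0.0, 0.0)
print(a)  # close to 2.0, and the fitted line passes exactly through (0, 0)
```

The angle between two needle positions is then simply the difference of the arctangents of the two constrained slopes.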
{ "domain": "dsp.stackexchange", "id": 1718, "tags": "image-processing, measurement" }
Partition function of primon bosonic gas
Question: Can we interpret the Euler product formula " $\sum\frac{1}{n^s} = \prod_{p\;\mathrm{prime}} \frac{1}{1-p^{-s}} $ " in a statistical-physics sense, as a product of single-particle partition functions, considering them statistically independent? Answer: Umm... OK, well, let's see what happens. Let's let $s = \beta\varepsilon$, where $\varepsilon$ is some fixed energy and $$ Z\left(\frac{s}{\varepsilon}\right) = \zeta(s)~. $$ To get some kind of idea of what kind of system $Z$ describes, we need to find the energy levels of the system, and to do that we need to express $Z$ in the form $\sum_{i} e^{-\beta E_i}$. In general there will not be a unique way to do this, but Euler's formula gives us a couple of obvious ways to try. The left-hand side of Euler's formula gives us \begin{align} Z(\beta) & = \sum_n n^{-\varepsilon \beta}\\ & = \sum_n e^{-\beta \varepsilon \ln n}~. \end{align} So we have some system with logarithmically spaced energy levels. The right-hand side we are looking to interpret as a collection of independent, weakly interacting, distinguishable systems with partition functions $$ Z_p\left(\frac{s}{\varepsilon}\right) = \frac{1}{1-p^{-s}}~. $$ Again, the systems described by $Z_p$ will not in general be unique, but there is an obvious expansion as a geometric series \begin{align} Z_p(\beta) &= \sum_n p^{-\beta\varepsilon n}\\ & = \sum_n e^{-\beta\varepsilon n \ln p}~. \end{align} This does at least have a simple interpretation; it is the partition function of a harmonic oscillator with $\hbar\omega_p = \varepsilon \ln p$. Euler's formula tells us that $$ Z(\beta) = \prod_{p\;\mathrm{prime}}Z_p(\beta)~. $$ So we would expect the system with logarithmically spaced energy levels to have the same macroscopic properties as an infinite collection of harmonic oscillators with frequencies in ratios of the logarithms of the primes. 
Indeed \begin{align} e^{-s\ln n} &= e^{-s\ln p_1^{a_1}p_2^{a_2}\dots}\\ &=e^{-sa_1\ln p_1}e^{-sa_2\ln p_2}\dots \end{align} This is precisely the form of the terms obtained by multiplying out the $Z_p$s, which shows that the two systems have in fact got the same energy levels (This is essentially a rewriting of the proof of Euler's formula on the wiki page) Now what does this tell us? That's a good question. I cannot think of a naturally occurring system with logarithmically spaced energy levels, nor can I think where you would find a collection of oscillators with frequencies in ratios of $\ln p$, so there doesn't seem to be much physical insight here that I can see. There may also be other ways to expand $Z$ which give different energy levels which may be more interesting. Somebody else may know something I don't.
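The claimed equality of the two partition-function representations is easy to check numerically. Here is a sketch (the truncation limits and the choice $s = 2$ are arbitrary; requires Python 3.8+ for `math.prod`):

```python
import math

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def Z_sum(s, n_max):
    # left-hand side: one state per integer n, with energy eps * ln(n)
    return sum(n ** -s for n in range(1, n_max + 1))

def Z_product(s, p_max):
    # right-hand side: product of oscillator-like partition functions Z_p
    return math.prod(1.0 / (1.0 - p ** -s) for p in primes_up_to(p_max))

s = 2.0  # s = beta * epsilon; at s = 2 the exact value is zeta(2) = pi^2/6
print(Z_sum(s, 100000), Z_product(s, 5000), math.pi ** 2 / 6)
```

Both truncations converge to the same value, which is the numerical content of Euler's formula for this "primon gas".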
{ "domain": "physics.stackexchange", "id": 27702, "tags": "statistical-mechanics" }
Why does it hurt the next day after doing significant exercise?
Question: I think this is a fairly common observation that if one does some significant amount of exercise, he/she may feel alright for the rest of the day, but it generally hurts bad the next day. Why is this the case? I would expect that if the muscles have undergone significant strain (say I started pushups/plank today), then it should cause pain while doing the strenuous activity, or during rest of the day, but it happens often that we don't feel the pain while doing the activity or even on that day, but surely and sorely feel it the next day. Another example - say after a long time, you played a long game of basketball/baseball/cricket. You generally don't feel any pain during the game/that day, but there is a good chance it will hurt bad the next day. I am trying to understand both - why does the pain not happen on that day, and why it does, the next day (or the day after that). Answer: Unlike the conventional wisdom, the pain you feel the next day (after a strenuous exercise) has nothing to do with lactic acid. Actually, lactic acid is rapidly removed from the muscle cell and converted to other substances in the liver (see Cori cycle). If you start to feel your muscles "burning" during exercise (due to lactic acid), you just need to rest for some seconds, and the "burning" sensation disappears. According to Scientific American: Contrary to popular opinion, lactate or, as it is often called, lactic acid buildup is not responsible for the muscle soreness felt in the days following strenuous exercise. Rather, the production of lactate and other metabolites during extreme exertion results in the burning sensation often felt in active muscles. Researchers who have examined lactate levels right after exercise found little correlation with the level of muscle soreness felt a few days later. (emphasis mine) So if it's not lactic acid, what is the cause of the pain? What you're feeling in the next day is called Delayed Onset Muscle Soreness (DOMS). 
DOMS is basically an inflammatory process (with accumulation of histamine and prostaglandins), due to microtrauma or micro ruptures in the muscle fibers. The soreness can last from some hours to a couple of days or more, depending on the severity of the trauma (see below). According to the "damage hypothesis" (also known as "micro tear model"), microruptures are necessary for hypertrophy (if you are working out seeking hypertrophy), and that explains why lifting very little weight doesn't promote hypertrophy. However, this same microtrauma promotes an inflammatory reaction (Tiidus, 2008). This inflammation can take some time to develop (that's why you normally feel the soreness the next day) and, like a regular inflammation, has as signs pain, edema and heat. This figure from McArdle (2010) shows the proposed sequence for DOMS: Figure: proposed sequence for delayed-onset muscle soreness. Source: McArdle (2010). As anyone who works out at the gym knows, deciding how much weight to add to the barbell can be complicated: too little weight promotes no microtrauma, and you won't have any hypertrophy. Too much weight leads to too much microtraumata, and you'll have trouble to get out of the bed the next day. EDIT: This comment asks if there is evidence of the "micro tear model" or "damage model" (also EIMD, or Exercise-induced muscle damage). First, that's precisely why I was careful when I used the term hypothesis. Second, despite the matter not being settled, there is indeed evidence supporting EIMD. This meta-analysis (Schoenfeld, 2012) says: There is a sound theoretical rationale supporting a potential role for EIMD in the hypertrophic response. 
Although it appears that muscle growth can occur in the relative absence of muscle damage, potential mechanisms exist whereby EIMD may enhance the accretion of muscle proteins including the release of inflammatory agents, activation of satellite cells, and upregulation of IGF-1 system, or at least set in motion the signaling pathways that lead to hypertrophy. The same paper, however, discuss the problems of EIMD and a few alternative hypotheses (some of them not mutually exclusive, though). Sources: Tiidus, P. (2008). Skeletal muscle damage and repair. Champaign: Human Kinetics. McArdle, W., Katch, F. and Katch, V. (2010). Exercise physiology. Baltimore: Wolters Kluwer Health/Lippincott Williams & Wilkins. Roth, S. (2017). Why Does Lactic Acid Build Up in Muscles? And Why Does It Cause Soreness?. [online] Scientific American. Available at: https://www.scientificamerican.com/article/why-does-lactic-acid-buil/ [Accessed 22 Jun. 2017]. Schoenfeld, B. (2012). Does Exercise-Induced Muscle Damage Play a Role in Skeletal Muscle Hypertrophy?. Journal of Strength and Conditioning Research, 26(5), pp.1441-1453.
{ "domain": "biology.stackexchange", "id": 7354, "tags": "human-biology, muscles, exercise" }
Problems Solvable in Poly time but not verifiable in Poly time
Question: I was just wondering if there exist problems that are solvable in polynomial time (a correct solution can be found in polynomial time) but not verifiable in polynomial time. My professor says no, but doesn't really give a clear explanation besides saying you can just use the solution to verify, but I fail to see how this can be if there is more than one solution. Are we allowed to make the argument that any problem with a poly algorithm can be modified in polynomial time so that it produces the particular polynomial solution we are trying to verify? Edit: Sorry if the problem was confusing as I threw in P and NP, the title has been changed to reflect this. Answer: It appears you're asking about problems for which there may be more than one correct answer for a given input (not decision problems). In that case, there may exist a polynomial-time algorithm that finds a solution, but no algorithm that can verify arbitrary solutions. For example, given any string, it's easy to find a program that prints that string and then halts, but impossible in general to determine whether a program prints that string and then halts.
{ "domain": "cs.stackexchange", "id": 20711, "tags": "algorithms, np, p-vs-np" }
General solution to the wave equation proving dependence on $x \pm vt$
Question: I am trying to solve for a general solution to the wave equation and demonstrate that any solution has the form $f(x,t) = f_L (x+vt) + f_R (x-vt)$ I have used separation of variables f(x,t)=X(x)T(t) to decouple the equations: $\dfrac{d^2X}{dx^2} = k^2 X$ and $\dfrac{d^2T}{dt^2} = k^2 v^2 T$ My general solution looks like $X = \sum_k (c_1 x + c_2 + c_3 e^{kx} + c_4e^{-kx} +c_5 e^{ikx} + c_6 e^{-ikx})$, which, throwing out solutions that aren't square integrable, and taking the limit of k being continuous, looks like this: $X(x) =\int \phi(k_1) (e^{ik_1 x} + e^{-ik_1 x})dk_1$ and similarly I have $ T = \int \phi(k_2)(e^{ik_2 vt} + e^{-ik_2 vt})dk_2$ Now, I am thinking that preserving the positive and negative exponential solutions is not right, and seems meaningless to me since we integrate over all k. But I have kept them in there because it looks at first glance to me like they may help show that $f(x,t) = f_L (x+vt) + f_R (x-vt)$. Where I am stuck is this last part. Because $k_1$ and $k_2$ are different variables, I cannot show that $f(x,t) = f_L (x+vt) + f_R (x-vt)$ Further, the product $f(x,t)=X(x)T(t)$ will look like $\int \int dk_1 dk_2 \phi(k_1) \phi(k_2)$, which I think may not be correct. My question at this point is how can I show that any solution has the form $f(x,t) = f_L (x+vt) + f_R (x-vt)$? Answer: You can show this by noting that $k_1^2=k_2^2$ and $X(x)T(t)= F(k_1x\pm k_2vt)$. You can see that from: $$ { \partial^2 f \over \partial t^2 } = v^2 { \partial^2 f \over \partial x^2 } $$ It is more common to take $k_1=k$ and $k_2v=-\omega$. Then you may note that the equation constrains the choices of $k_1$ and $k_2$, or $k$ and $\omega$. The constraint is $k_1^2=k_2^2$, or $\omega^2=k^2 v^2$. Note that you made a mistake in representing the functions $X(x)$ and $T(t)$. 
Each orthogonal function inside the integration has a free parameter: $$ X(x) \rightarrow \int (\phi^+(k_1)e^{ik_1 x} + \phi^-(k_1)e^{-ik_1 x})dk_1 $$ $$ T(t) \rightarrow \int (\phi^+(k_2)e^{ik_2 vt} + \phi^-(k_2)e^{-ik_2 vt})dk_2 $$ So, because $\omega^2=k^2 v^2$, you need to insert a delta function $\delta(\omega^2-k^2 v^2)$ in the combined integration: $$ X(x)T(t) =\int\int (\phi^+(k)e^{ik x} + \phi^-(k)e^{-ik x}) (\phi^+(\omega)e^{i\omega t} + \phi^-(\omega)e^{-i\omega t})\delta(\omega^2-k^2 v^2)dkd\omega $$ As you can see in other answers, there exist other methods that are more natural than yours, obtained by rewriting the linear operator in terms of light-cone variables. But your method is correct anyway.
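For reference, the light-cone-variable route can be made explicit with the standard d'Alembert substitution (a sketch):

```latex
% light-cone (characteristic) variables
u = x - vt, \qquad w = x + vt
% chain rule for the partial derivatives
\partial_x = \partial_u + \partial_w, \qquad
\partial_t = v\,(\partial_w - \partial_u)
% substituting into the wave equation
\partial_t^2 f - v^2\,\partial_x^2 f
  = v^2\left[(\partial_w - \partial_u)^2 - (\partial_u + \partial_w)^2\right] f
  = -4\,v^2\,\partial_u \partial_w f = 0
```

So the wave equation reduces to $\partial_u\partial_w f = 0$, which integrates directly to $f = f_R(u) + f_L(w) = f_R(x - vt) + f_L(x + vt)$.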
{ "domain": "physics.stackexchange", "id": 21425, "tags": "homework-and-exercises, waves, differential-equations" }
Mad Libs using recursive IO for user input
Question: I'm writing my first small programs in Haskell and still getting a feel for the syntax and idioms. I've written this Mad Libs implementation using recursive IO. I've used IO actions throughout and I'm sure there must be a better way of splitting up this code to separate pure functions from IO actions. Also, I'm not happy with the printf statement, but I couldn't find a native way to apply an arbitrary number of list items to printf. import Text.Printf getAnswer :: String -> IO String getAnswer question = do putStrLn question answer <- getLine return answer getAnswers :: [String] -> [String] -> IO [String] getAnswers [] ys = return ys getAnswers (x:xs) ys = do answer <- getAnswer x let answers = ys ++ [answer] getAnswers xs answers main = do let questions = ["Enter a noun:", "Enter a verb:", "Enter an adjective:", "Enter an adverb:"] let madlib = "Your %s is %s up a %s mountain %s." answers <- getAnswers questions [] printf madlib (answers!!0) (answers!!1) (answers!!2) (answers!!3) putStrLn "" Answer: Can we make getAnswer simpler or out of IO? Well, not really. You want to ask the use a question, and you want to get an answer. So all we could do is to reduce the amount of unnecessary code: getAnswer :: String -> IO String getAnswer question = putStrLn question >> getLine -- or -- = do -- putStrLn question -- getLine However, getAnswers can be refactored quite heavily. First of all, its interface isn't really developer-friendly. What are the questions? What are the answers? We should probably hide that in the bowels of our function: getAnswers :: [String] -> IO [String] getAnswers xs = go xs [] where go [] ys = return ys go (x:xs) ys = do answer <- getAnswer x let answers = ys ++ [answer] go xs answers But ++ [...] isn't really best-practice. 
Instead, you would ask all the other questions and then combine them: where go [] = return [] go (x:xs) = do answer <- getAnswer x otherAnswers <- go xs return (answer : otherAnswers) But at that point, we're merely copying mapM's functionality. Therefore, your getAnswers should be getAnswers :: [String] -> IO [String] getAnswers = mapM getAnswer A lot simpler. Now for your main. If you don't know how many words you'll get you will need a list, correct. But let's check the structure of your result: "Your %s is %s up a %s mountain %s." 1 2 3 4 There is a pattern. We have our text, then whatever the user gave us, then again our text, and so on. Let's split that into fragments: ["Your ","%s"," is ","%s"," up a ","%s"," mountain ","%s","."] -- ^^^^ ^^^^ ^^^^ ^^^^ This brings up the following idea: if you have a list of your answers, you only need the list of the other words, right? ["Your "," is "," up a "," mountain ","."] And then we need to "zip" that list with yours: interleave :: [a] -> [a] -> [a] interleave (x:xs) (y:ys) = x : y : interleave xs ys interleave xs _ = xs We end up with the following main: main = do let questions = ["Enter a noun:", "Enter a verb:", "Enter an adjective:", "Enter an adverb:"] let madlib = ["Your "," is "," up a "," mountain ","."] answers <- getAnswers questions putStrLn $ concat $ interleave madlib answers Here's all the code at once: getAnswer :: String -> IO String getAnswer q = putStrLn q >> getLine getAnswers :: [String] -> IO [String] getAnswers = mapM getAnswer interleave :: [a] -> [a] -> [a] interleave (x:xs) (y:ys) = x : y : interleave xs ys interleave xs _ = xs main :: IO () main = do let questions = ["Enter a noun:", "Enter a verb:", "Enter an adjective:", "Enter an adverb:"] let madlib = ["Your "," is "," up a "," mountain ","."] answers <- getAnswers questions putStrLn $ concat $ interleave madlib answers Exercises The interleave function above is left-biased. Why? Could this pose problems for your program? Why not?
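As a language-neutral illustration of the left bias discussed in the exercises, the same interleave can be sketched in Python (purely illustrative, not part of the Haskell answer):

```python
def interleave(xs, ys):
    # alternate elements starting from xs; once the shorter list is
    # exhausted, keep only what remains of xs (hence "left-biased")
    out = []
    for x, y in zip(xs, ys):
        out.extend([x, y])
    return out + xs[len(ys):]

madlib = ["Your ", " is ", " up a ", " mountain ", "."]
answers = ["dog", "running", "steep", "quickly"]
print("".join(interleave(madlib, answers)))
# -> Your dog is running up a steep mountain quickly.
```

Note that leftover elements of the second list are silently dropped, while leftover elements of the first list are kept, which is exactly the behavior of the Haskell `interleave xs _ = xs` clause.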
{ "domain": "codereview.stackexchange", "id": 22862, "tags": "beginner, haskell, formatting" }
Showing $m\int \frac{d\textbf{v}}{dt} \cdot \textbf{v}\,dt = \frac{m}{2}\int \frac{d(v^2)}{dt}\,dt$
Question: Can someone please explain how this equation is valid, using intermediate steps if available? $$ m\int \frac{d\textbf{v}}{dt} \cdot \textbf{v}\,dt = \frac{m}{2}\int \frac{d(v^2)}{dt}\,dt$$ And what does the right side mean, if the $dt$ cancels with the $\frac {1}{dt}$? Answer: Using $v^2 = \vec v \cdot \vec v$ and $\dfrac{d}{dt} (f g) = f \dfrac{dg}{dt} + \dfrac{df}{dt} g $ write: $\dfrac{d}{dt}(v^2) = \dfrac{d}{dt}(\vec v \cdot \vec v) = \vec v \cdot \dfrac{d\vec v}{dt} + \dfrac{d\vec v}{dt} \cdot \vec v = 2 (\dfrac{d\vec v}{dt} \cdot \vec v)$ The fact that the $dt$ and $1/dt$ "cancel" in the RHS integral means that you're integrating a differential. $\int \dfrac{dx}{dt} dt = \int dx = x$
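The identity can also be sanity-checked numerically. Below is a sketch with an arbitrary made-up velocity curve and constants (nothing here comes from the question, it is purely illustrative):

```python
import math

def v(t):
    # an arbitrary smooth velocity curve, chosen only for illustration
    return (math.sin(t), t * t, math.exp(-t))

def a(t):
    # its analytic time derivative dv/dt
    return (math.cos(t), 2.0 * t, -math.exp(-t))

def dot(u, w):
    return sum(ui * wi for ui, wi in zip(u, w))

def work_integral(m, t0, t1, n=20000):
    # trapezoidal approximation of m * integral of (dv/dt) . v dt
    h = (t1 - t0) / n
    total = 0.5 * (dot(a(t0), v(t0)) + dot(a(t1), v(t1)))
    for k in range(1, n):
        t = t0 + k * h
        total += dot(a(t), v(t))
    return m * h * total

m, t0, t1 = 2.0, 0.0, 1.5
lhs = work_integral(m, t0, t1)
rhs = 0.5 * m * (dot(v(t1), v(t1)) - dot(v(t0), v(t0)))
print(lhs, rhs)  # the two sides agree up to discretization error
```

The left side integrates $m\,\mathbf a\cdot\mathbf v$ directly; the right side is $\tfrac{m}{2}\big(v(t_1)^2 - v(t_0)^2\big)$, i.e. the integrated differential from the answer.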
{ "domain": "physics.stackexchange", "id": 4099, "tags": "newtonian-mechanics, vectors, calculus" }
Deciding whether the head of a TM remains in K slots?
Question: I am trying to find the following:$$ \text{For a given TM M with initial empty tape is it possible to decide whether there} $$ $$\text{exists a constant K such that the head of M remains within the first K slots?} $$ I thought this was possible if you know $K$, since you could add the current configuration to a separate tape and understand whether the machine is looping or not as there exists finitely many configurations with length $K$. But what if you don't know this number? Is it still possible to find a TM that decides this language or is there a reduction from another undecidable language? Any help will be appreciated. Answer: Hint: Given a Turing machine $M$, you can construct another Turing machine $M'$ with the following properties: If $M$ halts then $M'$ halts. If $M$ doesn't halt then the head of $M'$ eventually reaches the $K$th square from the right for every $K$. This construction can be used to show that the halting problem reduces to your problem. As you mention, given $K$, your property is decidable. This shows that your problem is $\Sigma_1$-complete.
{ "domain": "cs.stackexchange", "id": 9037, "tags": "turing-machines, undecidability" }
Convert IEnumerable to HTML table string
Question: My code can generate HTML table strings well, but it depends on JSON.NET. I'm converting IEnumerable to an HTML table string using Json.NET but I think I shouldn't. void Main() { var datas = new[] { new {Name="Test"} }; var array = datas.ToArray().ToHtmlTable(); //Run Success var set = datas.ToHashSet().ToHtmlTable();//Run Success var list = datas.ToList().ToHtmlTable(); //Run Success var enums = datas.AsEnumerable().ToHtmlTable(); //Run Success } public static class HTMLTableHelper { public static string ToHtmlTable(this IEnumerable enums) { return ToHtmlTableConverter(enums); } public static string ToHtmlTable(this System.Data.DataTable dataTable) { return ConvertDataTableToHTML(dataTable); } private static string ToHtmlTableConverter(object enums) { var jsonStr = JsonConvert.SerializeObject(enums); var data = JsonConvert.DeserializeObject<System.Data.DataTable>(jsonStr); var html = ConvertDataTableToHTML(data); return html; } private static string ConvertDataTableToHTML(System.Data.DataTable dt) { var html = new StringBuilder("<table>"); //Header html.Append("<thead><tr>"); for (int i = 0; i < dt.Columns.Count; i++) html.Append("<th>" + dt.Columns[i].ColumnName + "</th>"); html.Append("</tr></thead>"); //Body html.Append("<tbody>"); for (int i = 0; i < dt.Rows.Count; i++) { html.Append("<tr>"); for (int j = 0; j < dt.Columns.Count; j++) html.Append("<td>" + dt.Rows[i][j].ToString() + "</td>"); html.Append("</tr>"); } html.Append("</tbody>"); html.Append("</table>"); return html.ToString(); } } I think this is bad because serializing the object to a JSON string and then deserializing that string into a DataTable wastes work. var jsonStr = JsonConvert.SerializeObject(enums); var data = JsonConvert.DeserializeObject<System.Data.DataTable>(jsonStr); I want to remove the dependency on JSON.NET because it would reduce the size of the library. Do you think I should replace it with something else? Is there a better way? 
Answer: I can use reflection to solve it: void Main() { var datas = new[] { new {Name="Test1",Value="Test2"} }; var array = datas.ToArray().ToHtmlTable(); //Run Success var set = datas.ToHashSet().ToHtmlTable();//Run Success var list = datas.ToList().ToHtmlTable(); //Run Success var enums = datas.AsEnumerable().ToHtmlTable(); //Run Success } public static class HTMLTableHelper { public static string ToHtmlTable<T>(this IEnumerable<T> enums) { var type = typeof(T); var props = type.GetProperties(); var html = new StringBuilder("<table>"); //Header html.Append("<thead><tr>"); foreach (var p in props) html.Append("<th>" + p.Name + "</th>"); html.Append("</tr></thead>"); //Body html.Append("<tbody>"); foreach (var e in enums) { html.Append("<tr>"); props.Select(s => s.GetValue(e)).ToList().ForEach(p => { html.Append("<td>" + p + "</td>"); }); html.Append("</tr>"); } html.Append("</tbody>"); html.Append("</table>"); return html.ToString(); } } <table><thead><tr><th>Name</th><th>Value</th></tr></thead><tbody><tr><td>Test1</td><td>Test2</td></tr></tbody></table>
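For comparison, the same read-the-property-names-once idea can be sketched in Python (the function and sample data below are made up for illustration; they are not part of the C# answer):

```python
from html import escape
from types import SimpleNamespace

def to_html_table(rows):
    # mirror of the reflection idea: read attribute names from the first
    # row, then emit a header plus one <tr> per row
    rows = list(rows)
    if not rows:
        return "<table></table>"
    cols = list(vars(rows[0]))
    head = "".join(f"<th>{escape(c)}</th>" for c in cols)
    body = "".join(
        "<tr>"
        + "".join(f"<td>{escape(str(getattr(r, c)))}</td>" for c in cols)
        + "</tr>"
        for r in rows
    )
    return f"<table><thead><tr>{head}</tr></thead><tbody>{body}</tbody></table>"

row = SimpleNamespace(Name="Test1", Value="Test2")
print(to_html_table([row]))
```

Unlike the C# version, this also HTML-escapes cell values, which is worth doing whenever the data can contain `<`, `>`, or `&`.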
{ "domain": "codereview.stackexchange", "id": 33240, "tags": "c#, .net" }
dynamic_reconfigure with programmatically filled enum?
Question: Hey ROS gurus. I want to ask you for your opinions on a topic I recently thought about. I'm writing a ROS&Python driver for a series of web cameras. There is a lot of variability in the specific models, including supported image resolutions. Now I want to offer a dynamic_reconfigure interface for setting (not only) video resolution. My basic thought was to filter a list of possible resolutions somehow to only publish the supported resolutions on parameter_descriptions (thus displaying only them in rqt_reconfigure). But then I found out that even the set of possible resolutions is not bounded and new resolutions may emerge with new camera models. I have 2 ideas on what to do with that: alter dynamic_reconfigure and create another edit_method (that's a lot of work, though) create my own implementation of dynamic_reconfigure.Server, which would read the allowed resolutions from somewhere and would publish the corresponding parameter_descriptions. In the cfg file I would probably create a dummy one-element enum in order not to stop existing tools from working. I'm only concerned about this solution working from dynamic_reconfigure CLI, rqt_reconfigure and Python (ideally using the standard Client). I don't (yet) care about C++ clients. Before I start working, could you please give me some feedback on how you like these ideas and what possible shortcomings do you see? Originally posted by peci1 on ROS Answers with karma: 1366 on 2015-11-04 Post score: 0 Answer: Ok, found an even easier way: directly changing the config_description attribute of the dynamic reconfigurable type before creating the server. The server then reads the changed enum description and correctly advertises the desired set of possible values. I'm not sure about how it works with C++ clients, but both rqt_reconfigure and dynparam CLI work without problems for getting and setting parameter values. And since dynparam is a Python script, I assume the Python client has also no problems. 
Here's a simple implementation (not considering parameter groups): https://gist.github.com/peci1/912549b79fd6e8801023 Originally posted by peci1 with karma: 1366 on 2015-11-04 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 22904, "tags": "dynamic-reconfigure" }
How to find kinematics of differential drive caster robot?
Question: I'm working on a little project where I have to do some simulations on a small robot. In my case I'm using a differential-drive robot as one of the wheels of a bigger robot platform (which has two differential-drive casters), and I really do not understand how to find its kinematics in order to describe it in a model for finding the speed V_tot of the platform. This is my robot and I know the following parameters d is the distance between a joint where my robot is constrained blue point is the joint where the robot is linked to the robot platform L is the distance between the wheels r the radius of the wheel the robot can spin around the blue point by an angle THETA As I know all these dimensions, I would like to apply two velocities V_left and V_right in order to move the robot. Let's assume that V_left = - V_right how do I find analytically the ICR (Instantaneous Center of Rotation) in this constrained robot? I mean that I cannot understand how to introduce d in the formula. Answer: Kinematics of mobile robots For the figure on the left: I = Inertial frame; R = Robot frame; S = Steering frame; W = Wheel frame; $\beta$ = Steering angle; For the figure on the right: L = Distance between the wheels; r = radius of the wheel; Now we can derive some useful equations. 
Kinematics: $\hspace{2.5em}$ $\vec{v}_{IW} = \vec{v}_{IR} + \vec{\omega}_{IR} \times \vec{r}_{RS}$ If we express the equation above in the wheel frame: $\hspace{2.5em}$ $\begin{bmatrix} 0 \\ r\dot{\varphi} \\ 0 \end{bmatrix} = R(\alpha+\beta)R(\theta)\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} + \begin{bmatrix} 0 & -\dot{\theta} & 0 \\ \dot{\theta} & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} l\cos(\beta) \\ -l\sin(\beta) \\ 0 \end{bmatrix}$ We obtain the rolling constraint and the no-sliding constraint respectively: $\hspace{2.5em}$ $[-\sin(\alpha+\beta)\hspace{1.0em}\cos(\alpha+\beta)\hspace{1.0em}l\cos(\beta)]\dot{\xi}_{R} = \dot{\varphi}r$ $\hspace{2.5em}$ $[\cos(\alpha+\beta)\hspace{1.0em}\sin(\alpha+\beta)\hspace{1.0em}l\sin(\beta)]\dot{\xi}_{R} = 0$ where $\dot{\xi}_{R} = [\dot{x_{R}}\hspace{1.0em}\dot{y_{R}}\hspace{1.0em}\dot{\theta}]^{T}$ Now we need to apply each of these constraints to the two differential wheels For the left wheel: $\alpha = -\frac{\pi}{2}$, $\beta = 0$, $l = -\frac{L}{2}$ For the right wheel: $\alpha = -\frac{\pi}{2}$, $\beta = 0$, $l = \frac{L}{2}$ Stacked equation of motion: $\hspace{2.5em}$ $\begin{bmatrix} 1 & 0 & \frac{L}{2} \\ 1 & 0 & -\frac{L}{2} \\ 0 & -1 & 0 \\ 0 & -1 & 0 \end{bmatrix}\dot{\xi}_{R} = \begin{bmatrix} r & 0\\ 0 & r \\ 0 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} \dot{\varphi}_{r} \\ \dot{\varphi}_{l} \end{bmatrix} $ $\hspace{2.5em}$ $A\dot{\xi}_{R} = B\dot{\varphi} $ For the forward kinematic solution, just do: $\hspace{2.5em}$ $\dot{\xi}_{R} = \big( A^{T}A \big)^{-1}A^{T}B\dot{\varphi} $ which yields: $\hspace{2.5em}$ $\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} \frac{r}{2} & \frac{r}{2} \\ 0 & 0 \\ \frac{r}{L} & -\frac{r}{L} \end{bmatrix} \begin{bmatrix} \dot{\varphi}_{r} \\ \dot{\varphi}_{l} \end{bmatrix}$ An excellent chapter that I suggest here.
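The least-squares step $\dot{\xi}_R = (A^TA)^{-1}A^TB\dot{\varphi}$ is easy to evaluate numerically as a cross-check. A sketch with assumed values $r = 0.05$ m and $L = 0.30$ m (both made up for illustration):

```python
# assumed wheel radius and wheel separation, for illustration only
r, L = 0.05, 0.30

# stacked constraint matrices from the derivation: A * xi_dot = B * phi_dot
A = [[1.0,  0.0,  L / 2],
     [1.0,  0.0, -L / 2],
     [0.0, -1.0,  0.0],
     [0.0, -1.0,  0.0]]
B = [[r, 0.0],
     [0.0, r],
     [0.0, 0.0],
     [0.0, 0.0]]

def transpose(X):
    return [list(row) for row in zip(*X)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

AtA = matmul(transpose(A), A)  # for this A it is diagonal: diag(2, 2, L^2/2)
AtB = matmul(transpose(A), B)

# because AtA is diagonal here, inverting it is just dividing by the diagonal
F = [[AtB[i][j] / AtA[i][i] for j in range(2)] for i in range(3)]
print(F)  # forward-kinematics matrix acting on [phi_r_dot, phi_l_dot]
```

Driving both wheels forward at equal rates gives $\dot x = r\dot\varphi$, $\dot y = 0$, $\dot\theta = 0$, i.e. straight-line motion, which is a quick sanity check on the signs.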
{ "domain": "robotics.stackexchange", "id": 937, "tags": "mobile-robot, kinematics, wheeled-robot, differential-drive, two-wheeled" }
The size of particlecloud in PoseArray Message
Question: Hi, I wrote a node that subscribes to amcl particlecloud. I would like to write a callback function that processes all the Pose array elements. How can I get the size of the particlecloud or specifically the number of poses in the PoseArray Message? Originally posted by Anas Alhashimi on ROS Answers with karma: 179 on 2014-09-27 Post score: 1 Answer: geometry_msgs::PoseArray has a poses vector; each element represents a particle in the case of the amcl particlecloud. geometry_msgs::PoseArray poseArray; int size = poseArray.poses.size(); Originally posted by bvbdort with karma: 3034 on 2014-09-27 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Anas Alhashimi on 2014-09-29: Thanks it works for me like that: int size = poseArray.poses.size(); Comment by Vini71 on 2023-05-19: Hi @bvbdort do you know how to increase the size and spread of amcl particles, I am not being able to trigger the plan module for my Ackermann truck and I believe it is due the fact of particles size. I have already changed the parameters but the size seems to keep the same: AMCL Small particles don't allow Planning
{ "domain": "robotics.stackexchange", "id": 19546, "tags": "navigation, amcl" }
Finding out pKa of acid from molar conductivity
Question: I'm reading about the electrical properties of solution where there is a problem like that: The molar conductivity of $0.0250\ \mathrm M$ $\ce{HCOOH(aq)}$ is $4.61\ \mathrm{mS\ m^2\ mol^{-1}}$. Determine the $\mathrm pK_\mathrm a=-\log K_\mathrm a$ of the acid. (limiting ionic conductivity of $\ce{H+}=34.96\ \mathrm{mS\ m^2\ mol^{-1}}$ and limiting ionic conductivity of $\ce{OH-}=19.91\ \mathrm{mS\ m^2\ mol^{-1}}$) I have to solve the problem using the equation below: $$\frac1{\Lambda_\mathrm m}=\frac1{\Lambda_\mathrm m^0}+\frac{\Lambda_\mathrm mc}{K_\mathrm a\left(\Lambda_\mathrm m^0\right)^2}$$ Here which limiting ionic conductivity should I use to solve the problem? $\ce{H+}$ or $\ce{OH-}$? Answer: You need to add the limiting ionic conductivities for $\ce{H+}$ and $\ce{OH-}$ together to get the limiting ionic conductivity for all the ions in solution ($\Lambda_{0}$, which will replace $\Lambda^{0}_{\mathrm m}$ in your equation). This arises from a simplification for calculating $\Lambda_{0}$ in weak electrolyte solutions (such as yours) according to Kohlrausch's Law in which it is stated: Each ionic species makes a contribution to the conductivity of the solution that depends only on the nature of that particular ion, and is independent of the other ions present. from which we can then estimate $\Lambda_{0}$ as: $$\Lambda_{0} = \sum_{i}\lambda_{i,+}^{0} + \sum_{i}\lambda_{i,-}^{0}$$ For your problem: $$\Lambda_{0} = \underbrace{(34.96 + 19.91)}_{54.87}\ \mathrm{mS\cdot m^{2}\cdot mol^{-1}}$$ we solve (I am omitting units here for clarity, and have confirmed via dimensional analysis that the answer is correct): $${1\over 4.61} = {1\over 54.87} + {4.61\times 0.025\over K_{\mathrm a}\times (54.87)^{2}}$$ which simplifies to yield the answer (spoiler alert): $$K_{\mathrm a} = 1.926\times 10^{-4} \implies \mathrm{p}K_{\mathrm a} = 3.72$$ and can be favorably compared to the value of 3.77 given here.
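The arithmetic is easy to replicate in a few lines. Here is a sketch that mirrors the answer's unit-omitting calculation (numbers taken directly from the problem statement):

```python
import math

# data from the problem statement; units are handled as in the answer,
# which omits them after confirming dimensional consistency
Lam = 4.61            # molar conductivity of 0.0250 M HCOOH, mS m^2 mol^-1
Lam0 = 34.96 + 19.91  # limiting conductivity: sum of the two ionic values
c = 0.0250            # concentration

# rearranging 1/Lam = 1/Lam0 + Lam*c/(Ka*Lam0^2) for Ka:
Ka = (Lam * c / Lam0 ** 2) / (1.0 / Lam - 1.0 / Lam0)
pKa = -math.log10(Ka)
print(Ka, pKa)  # ~1.93e-4 and ~3.72, matching the answer
```

This reproduces the answer's $K_\mathrm{a} \approx 1.93\times 10^{-4}$ and $\mathrm{p}K_\mathrm{a} \approx 3.72$.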
{ "domain": "chemistry.stackexchange", "id": 6947, "tags": "electrochemistry, solutions, conductivity" }
Can't read bagfile containing custom messages
Question: I have a bag file with video and imu data defined with custom messages that I want to read (especially topics /capture and /imu_raw). I know from documentation what the structure of the custom messages is. How can I read the data? Doing rosbag info on the file shows: types: libks/ImageSet [b3ef08c9cff052c83f5039d098add069] pximu/AttitudeData [bc609da5ddac9016c096c67da33d8b9c] pximu/RawIMUData [0380911195329d307d8f033bc714dbbf] [...] topics: /attitude 2174 msgs : pximu/AttitudeData /capture 3282 msgs : libks/ImageSet /imu_raw 10873 msgs : pximu/RawIMUData [...] I cannot play the file with 'rosbag play', probably because rosbag doesn't know the message definitions. Therefore, I thought I needed to use migration(see http_wiki_ros_org/rosbag_migration) and provide custom message definitions (see http_wiki_ros_org/msg) and use them in a rule file. I made a new package 'bag_analyser", and added the custom messages to the 'msg' folder and compiled. E.g. for ImageSet I made a file ImageSet.msg: Header header sensor_msgs/Image[] images This seems to work, e.g. for ' rosmsg show bag_analyser/ImageSet' I get: std_msgs/Header header uint32 seq time stamp string frame_id sensor_msgs/Image[] images std_msgs/Header header uint32 seq time stamp string frame_id uint32 height uint32 width string encoding uint8 is_bigendian uint32 step uint8[] data I also made messages for RawIMUData and AttitudeData. Then I generated a rules file (rules.bmr, see full output at end), adjusted the rules that needed to be adjusted and executed: rosbag fix original_bag.bag converted_bag.bag rules.bmr which told me "bag migrated successfully". Doing rosbag info on the converted file gives me: types: bag_analyser/AttitudeData [bc609da5ddac9016c096c67da33d8b9c] bag_analyser/ImageSet [b3ef08c9cff052c83f5039d098add069] bag_analyser/RawIMUData [0380911195329d307d8f033bc714dbbf] [...] 
topics: /attitude 2174 msgs : bag_analyser/AttitudeData /capture 3282 msgs : bag_analyser/ImageSet /imu_raw 10873 msgs : bag_analyser/RawIMUData My problem is that I still cannot read the file. I tried using 'rosbag play' but it simply gives me: [ INFO] [1402332944.127314834]: Opening converted_bag.bag [FATAL] [1402332944.127848761]: Error opening file: converted_bag.bag I also tried reading in the file using the rosbag API, which also fails opening the file (code in image_converter.cpp below). I changed the bagfile's permissions and set it to allow read/write/execute just to make sure it wasn't that. What am I doing wrong? How can I read the data from the file? I am fairly new to ROS, so any help or tips are greatly appreciated! File listings: folder bag/analyser/msg ImageSet.msg Header header sensor_msgs/Image[] images RawIMUData.msg Header header # Measured accelerations (in x, y, z) Vector3f acc # Measured angular speeds (around x, y, z) Vector3f gyro # Measured magnetic field (around x , y, z) Vector3f mag Vector3f.msg float32 x float32 y float32 z AttitudeData.msg Header header # roll angle [rad] float32 roll # pitch angle [rad] float32 pitch # yaw angle [rad] float32 yaw # roll angular speed [rad/s] float32 rollspeed # pitch angular speed [rad/s] float32 pitchspeed # yaw angular speed [rad/s] float32 yawspeed bag_analyser/image_converter.cpp ( includes message definition for ImageSet.h and tries to use rosbag API to read file): #include <ros/ros.h> #include <rosbag/bag.h> #include <rosbag/view.h> #include <boost/foreach.hpp> #include <message_filters/subscriber.h> #include <message_filters/time_synchronizer.h> #include <image_transport/image_transport.h> #include <cv_bridge/cv_bridge.h> #include <sensor_msgs/image_encodings.h> #include <opencv2/imgproc/imgproc.hpp> #include <opencv2/highgui/highgui.hpp> #include <vector> #include <bag_analyser/ImageSet.h> static const std::string OPENCV_WINDOW = "Image window"; void loadBag() { rosbag::Bag 
bag("~/robotics/rosbook/src/bag_analyser/bag/converted_bag.bag"); // fails here std::string l_cam_image = "/capture"; std::vector<std::string> topics; topics.push_back(l_cam_image); rosbag::View view(bag, rosbag::TopicQuery(topics)); BOOST_FOREACH(rosbag::MessageInstance const m, view) { if (m.getTopic() == l_cam_image || ("/" + m.getTopic() == l_cam_image)) { boost::shared_ptr<bag_analyser::ImageSet> image_array = m.instantiate<bag_analyser::ImageSet >(); bag_analyser::ImageSet* theSetPtr = image_array.get(); bag_analyser::ImageSet theSet = *theSetPtr; cv_bridge::CvImagePtr cv_ptr; try { cv_ptr = cv_bridge::toCvCopy(theSetPtr->images[0], sensor_msgs::image_encodings::BGR8); } catch (cv_bridge::Exception& e) { ROS_ERROR("cv_bridge exception: %s", e.what()); return; } // Update GUI Window cv::imshow(OPENCV_WINDOW, cv_ptr->image); cv::waitKey(3); } } bag.close(); } int main(int argc, char** argv) { ros::init(argc, argv, "image_converter"); loadBag(); ros::spin(); return 0; } bag_analyser/CMakeLists.txt cmake_minimum_required(VERSION 2.8.3) project(bag_analyser) ## Find catkin macros and libraries find_package(catkin REQUIRED COMPONENTS cv_bridge image_transport message_generation roscpp rospy std_msgs rosbag ) ## System dependencies are found with CMake's conventions find_package(Boost REQUIRED COMPONENTS system) find_package( OpenCV REQUIRED ) ## Generate messages in the 'msg' folder add_message_files( FILES ImageSet.msg RawIMUData.msg Vector3f.msg AttitudeData.msg ) ## Generate added messages and services with any dependencies listed here generate_messages( DEPENDENCIES std_msgs sensor_msgs ) ## CATKIN_DEPENDS: catkin_packages dependent projects also need catkin_package( CATKIN_DEPENDS message_runtime ) # include_directories(include) include_directories( ${catkin_INCLUDE_DIRS} ${OpenCV_INCLUDE_DIRS} ) ## Declare a cpp executable add_executable(image_converter src/image_converter.cpp) target_link_libraries ( image_converter ${OpenCV_LIBRARIES} ${catkin_LIBRARIES} ) 
rules.bmr class update_pximu_RawIMUData_0380911195329d307d8f033bc714dbbf(MessageUpdateRule): old_type = "pximu/RawIMUData" old_full_text = """ Header header # Measured accelerations (in x, y, z) Vector3f acc # Measured angular speeds (around x, y, z) Vector3f gyro # Measured magnetic field (around x , y, z) Vector3f mag ================================================================================ MSG: std_msgs/Header # Standard metadata for higher-level stamped data types. # This is generally used to communicate timestamped data # in a particular coordinate frame. # # sequence ID: consecutively increasing ID uint32 seq #Two-integer timestamp that is expressed as: # * stamp.secs: seconds (stamp_secs) since epoch # * stamp.nsecs: nanoseconds since stamp_secs # time-handling sugar is provided by the client library time stamp #Frame this data is associated with # 0: no frame # 1: global frame string frame_id ================================================================================ MSG: pximu/Vector3f float32 x float32 y float32 z """ new_type = "bag_analyser/RawIMUData" new_full_text = """ Header header # Measured accelerations (in x, y, z) Vector3f acc # Measured angular speeds (around x, y, z) Vector3f gyro # Measured magnetic field (around x , y, z) Vector3f mag ================================================================================ MSG: std_msgs/Header # Standard metadata for higher-level stamped data types. # This is generally used to communicate timestamped data # in a particular coordinate frame. 
# # sequence ID: consecutively increasing ID uint32 seq #Two-integer timestamp that is expressed as: # * stamp.secs: seconds (stamp_secs) since epoch # * stamp.nsecs: nanoseconds since stamp_secs # time-handling sugar is provided by the client library time stamp #Frame this data is associated with # 0: no frame # 1: global frame string frame_id ================================================================================ MSG: bag_analyser/Vector3f float32 x float32 y float32 z """ order = 0 migrated_types = [ ("Header","Header"),] valid = True def update(self, old_msg, new_msg): self.migrate(old_msg.header, new_msg.header) #No migration path between [Vector3f] and [Vector3f] new_msg.acc = self.get_new_class('bag_analyser/Vector3f')(old_msg.acc) #No migration path between [Vector3f] and [Vector3f] new_msg.gyro = self.get_new_class('bag_analyser/Vector3f')(old_msg.gyro) #No migration path between [Vector3f] and [Vector3f] new_msg.mag = self.get_new_class('bag_analyser/Vector3f')(old_msg.mag) class update_libks_ImageSet_b3ef08c9cff052c83f5039d098add069(MessageUpdateRule): old_type = "libks/ImageSet" old_full_text = """ Header header sensor_msgs/Image[] images ================================================================================ MSG: std_msgs/Header # Standard metadata for higher-level stamped data types. # This is generally used to communicate timestamped data # in a particular coordinate frame. 
# # sequence ID: consecutively increasing ID uint32 seq #Two-integer timestamp that is expressed as: # * stamp.secs: seconds (stamp_secs) since epoch # * stamp.nsecs: nanoseconds since stamp_secs # time-handling sugar is provided by the client library time stamp #Frame this data is associated with # 0: no frame # 1: global frame string frame_id ================================================================================ MSG: sensor_msgs/Image # This message contains an uncompressed image # (0, 0) is at top-left corner of image # Header header # Header timestamp should be acquisition time of image # Header frame_id should be optical frame of camera # origin of frame should be optical center of cameara # +x should point to the right in the image # +y should point down in the image # +z should point into to plane of the image # If the frame_id here and the frame_id of the CameraInfo # message associated with the image conflict # the behavior is undefined uint32 height # image height, that is, number of rows uint32 width # image width, that is, number of columns # The legal values for encoding are in file src/image_encodings.cpp # If you want to standardize a new string format, join # ros-users@lists.sourceforge.net and send an email proposing a new encoding. string encoding # Encoding of pixels -- channel meaning, ordering, size # taken from the list of strings in src/image_encodings.cpp uint8 is_bigendian # is this data bigendian? uint32 step # Full row length in bytes uint8[] data # actual matrix data, size is (step * rows) """ new_type = "bag_analyser/ImageSet" new_full_text = """ Header header sensor_msgs/Image[] images ================================================================================ MSG: std_msgs/Header # Standard metadata for higher-level stamped data types. # This is generally used to communicate timestamped data # in a particular coordinate frame. 
# # sequence ID: consecutively increasing ID uint32 seq #Two-integer timestamp that is expressed as: # * stamp.secs: seconds (stamp_secs) since epoch # * stamp.nsecs: nanoseconds since stamp_secs # time-handling sugar is provided by the client library time stamp #Frame this data is associated with # 0: no frame # 1: global frame string frame_id ================================================================================ MSG: sensor_msgs/Image # This message contains an uncompressed image # (0, 0) is at top-left corner of image # Header header # Header timestamp should be acquisition time of image # Header frame_id should be optical frame of camera # origin of frame should be optical center of cameara # +x should point to the right in the image # +y should point down in the image # +z should point into to plane of the image # If the frame_id here and the frame_id of the CameraInfo # message associated with the image conflict # the behavior is undefined uint32 height # image height, that is, number of rows uint32 width # image width, that is, number of columns # The legal values for encoding are in file src/image_encodings.cpp # If you want to standardize a new string format, join # ros-users@lists.sourceforge.net and send an email proposing a new encoding. string encoding # Encoding of pixels -- channel meaning, ordering, size # taken from the list of strings in include/sensor_msgs/image_encodings.h uint8 is_bigendian # is this data bigendian? 
uint32 step # Full row length in bytes uint8[] data # actual matrix data, size is (step * rows) """ order = 0 migrated_types = [ ("Header","Header"), ("sensor_msgs/Image","sensor_msgs/Image"),] valid = True def update(self, old_msg, new_msg): self.migrate(old_msg.header, new_msg.header) self.migrate_array(old_msg.images, new_msg.images, "sensor_msgs/Image") class update_pximu_AttitudeData_bc609da5ddac9016c096c67da33d8b9c(MessageUpdateRule): old_type = "pximu/AttitudeData" old_full_text = """ Header header # roll angle [rad] float32 roll # pitch angle [rad] float32 pitch # yaw angle [rad] float32 yaw # roll angular speed [rad/s] float32 rollspeed # pitch angular speed [rad/s] float32 pitchspeed # yaw angular speed [rad/s] float32 yawspeed ================================================================================ MSG: std_msgs/Header # Standard metadata for higher-level stamped data types. # This is generally used to communicate timestamped data # in a particular coordinate frame. # # sequence ID: consecutively increasing ID uint32 seq #Two-integer timestamp that is expressed as: # * stamp.secs: seconds (stamp_secs) since epoch # * stamp.nsecs: nanoseconds since stamp_secs # time-handling sugar is provided by the client library time stamp #Frame this data is associated with # 0: no frame # 1: global frame string frame_id """ new_type = "bag_analyser/AttitudeData" new_full_text = """ Header header # roll angle [rad] float32 roll # pitch angle [rad] float32 pitch # yaw angle [rad] float32 yaw # roll angular speed [rad/s] float32 rollspeed # pitch angular speed [rad/s] float32 pitchspeed # yaw angular speed [rad/s] float32 yawspeed ================================================================================ MSG: std_msgs/Header # Standard metadata for higher-level stamped data types. # This is generally used to communicate timestamped data # in a particular coordinate frame. 
# # sequence ID: consecutively increasing ID uint32 seq #Two-integer timestamp that is expressed as: # * stamp.secs: seconds (stamp_secs) since epoch # * stamp.nsecs: nanoseconds since stamp_secs # time-handling sugar is provided by the client library time stamp #Frame this data is associated with # 0: no frame # 1: global frame string frame_id """ order = 0 migrated_types = [ ("Header","Header"),] valid = True def update(self, old_msg, new_msg): self.migrate(old_msg.header, new_msg.header) new_msg.roll = old_msg.roll new_msg.pitch = old_msg.pitch new_msg.yaw = old_msg.yaw new_msg.rollspeed = old_msg.rollspeed new_msg.pitchspeed = old_msg.pitchspeed new_msg.yawspeed = old_msg.yawspeed Originally posted by Ketill on ROS Answers with karma: 36 on 2014-06-10 Post score: 1 Answer: The trick is to make empty packages with the same name as the packages of the message that you need, and then add the custom message definition to these packages. So in my case I made a package 'pximu' and added the custom message definition for AttitudeData and RawIMUData to it. Then I made a 'libks' package and added the custom message definition for ImageSet to it. After compiling I could read those messages with 'rosbag play'. Originally posted by Ketill with karma: 36 on 2014-06-21 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 18224, "tags": "ros, rosbag, bagfile, bagfiles" }
Missing log4cxx on latest ros_comm
Question: With the latest release of ros_comm (1.9.52) and roscpp_core (0.3.17), the dependency on log4cxx has been removed. This breaks a couple of very old packages that relied on log4cxx being the underlying implementation of ros logging. So far I've seen this show up in the Hokuyo driver: https://github.com/ros-drivers/hokuyo_node/issues/7 and in the PR2 Ethercat Driver: https://github.com/PR2/pr2_ethercat_drivers/issues/62 In both cases, the node in question is using log4cxx calls to adjust the verbosity of the underlying logger: https://github.com/PR2/pr2_ethercat_drivers/blob/hydro-devel/ethercat_hardware/src/motorconf.cpp#L642-L643 https://github.com/ros-drivers/hokuyo_node/blob/hydro-devel/src/getFirmwareVersion.cpp#L56 I don't see any recommendations on how to adjust the logger levels from C++ on the roscpp logging or the rosconsole wiki pages. While adding a log4cxx configuration file is suggested, it's a system-wide change that isn't viable for this sort of single-executable logging adjustment, and probably isn't compatible with the non-log4cxx logging backends. Is there a recommendation on how to migrate these calls to the new backend-agnostic logging? Originally posted by ahendrix on ROS Answers with karma: 47576 on 2014-01-09 Post score: 1 Original comments Comment by AHornung on 2014-01-09: The real question is: why is such a breaking change introduced in the released distribution which should only receive non-breaking bugfixes? Comment by Dirk Thomas on 2014-01-09: This is not a breaking change. The above described code is using log4cxx without depending on and including log4cxx explicitly. Relying on transitive dependencies is not stable and by no means guaranteed. If the code would state its dependency on log4cxx correctly it can continue as it is (and use the log4cxx log level enum directly rather than trying to resolve it via an internal global variable which is not part of the public API).
Comment by ahendrix on 2014-01-10: This is an API-breaking change because you're removing variables from the public headers: https://github.com/ros/ros_comm/commit/4225dd5b528b3f977a1e3ecc4924c9f267071530 Answer: You might consider using the function used by the get/set logger level RPCs: ros::console::set_logger_level(...) But you have to explicitly invoke ros::console::notifyLoggerLevelsChanged() when logger levels have been successfully set in order for them to be applied. Update: The removed symbol has been readded in https://github.com/ros/ros_comm/pull/336 But the missing explicit dependency is still something all affected packages will have to address. Originally posted by Dirk Thomas with karma: 16276 on 2014-01-09 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by ahendrix on 2014-01-09: Update: I REALLY don't like this solution, because it doesn't compile against the previous version of rosconsole in Hydro, because the set_logger_level function isn't public. I'm going to agree with @AHornung that this is a breaking change. It should be reverted in Hydro and delayed to Igloo. Comment by ahendrix on 2014-01-15: Final solution, using the re-added symbol here: https://github.com/PR2/pr2_ethercat_drivers/compare/1.8.5...1.8.7
{ "domain": "robotics.stackexchange", "id": 16622, "tags": "ros, ros-comm, log4cxx" }
Import management
Question: I am working on a personal iPhone ObjC project, and was recently getting frustrated with how tedious importing the same set of headers over and over was getting… So I created a header file that literally only imports other header files, and looks something along the lines of this: //BackboneTools.h #import "ASIHTTPRequest.h" #import "CPTXYGraph.h" #import "CPTGraphHostingView.h" #import "CPTColor.h" #import "CPTPlatformSpecificCategories.h" #import "CPTFill.h" #import "CPTPieChart.h" #import "DateFormatter.h" #import "JSONAssistant.h" #import "Query.h" #import "SegmentsManager.h" #import "SharedState.h" #import "NSString+Trimmable.h" #import "UIImage+Resizable.h" So now I usually just import this file in the .m files of my (many) View Controllers. As you can imagine, this makes imports a lot easier for me, especially for classes that mix and match these dependencies. I was wondering if this is good style in ObjC. I know that monoliths like Foundation.h and UIKit.h do the same thing, though I wasn't sure if that approach is frowned upon for smaller projects. Answer: You can import everything you need in the AppName_Prefix.pch header, and all of these imported headers will then be available app-wide. But I don't think this is a good idea. In most cases it's helpful to see which other classes and parts of the application the current class depends on.
{ "domain": "codereview.stackexchange", "id": 6977, "tags": "objective-c, import" }
If the moon also falls, then why can the astronaut moonwalk?
Question: Imagine astronauts inside the ISS while everything is in free fall; the astronauts are floating around. Now if the moon is also in free fall, then how come astronauts can perform a moonwalk on its surface but not on the ISS? btw ISS stands for international space station. Answer: The astronauts on the ISS are attracted to the space station, just as they are attracted to the Moon and to the Earth, which are also in free fall. However, the mass of the ISS is very much smaller than the mass of the Moon, by many orders of magnitude (as Photon says, about $4.2 \times 10^5\,kg$ compared with $7.3 \times 10^{22}\,kg$), so the gravitational pull of the ISS is many orders of magnitude less. In addition, the astronauts are surrounded by the mass of the ISS, instead of it being on one side of them. If the mass of the ISS were distributed with spherical symmetry, then the astronauts inside it would be weightless even if the ISS were as massive as the Sun, whatever position they occupied inside it. This is a consequence of the Shell Theorem. The strongest pull from the space station would be felt just outside its surface.
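The scale of the difference is easy to put numbers on. A rough sketch using $g = GM/r^2$, with commonly quoted masses (ISS ≈ 4.2 × 10^5 kg, Moon ≈ 7.3 × 10^22 kg) and the station crudely treated as a point mass 10 m from the astronaut — an assumption made purely for scale, since the real station's mass surrounds the crew:

```python
# Compare the gravitational pull of the Moon at its surface with the pull
# of the ISS's mass on an astronaut a few metres away (g = G*M / r^2).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
m_moon = 7.3e22        # mass of the Moon, kg
r_moon = 1.737e6       # radius of the Moon, m
m_iss = 4.2e5          # mass of the ISS, kg
r_iss = 10.0           # assumed distance from the station's mass, m

g_moon = G * m_moon / r_moon**2
g_iss = G * m_iss / r_iss**2

print(f"Moon surface gravity: {g_moon:.2f} m/s^2")   # about 1.6 m/s^2
print(f"Pull of ISS mass:     {g_iss:.2e} m/s^2")    # about 3e-7 m/s^2
```

Even with this generous point-mass treatment, the station's pull on an astronaut is millions of times weaker than lunar surface gravity — far too weak to stand on.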
{ "domain": "physics.stackexchange", "id": 48205, "tags": "newtonian-mechanics, newtonian-gravity, reference-frames, free-body-diagram, moon" }
Intuitive meaning of modal $\mu$-calculus formula
Question: I am solving one of the past exams and I am not certain about my solution to one of the exercises. The exercise asks to give the intuitive meaning of the modal $\mu$-calculus formula: $$ \phi = \mu Z. \langle - \rangle tt \wedge [-a]Z $$ According to the article Modal logics and mu-calculi: an introduction by Bradfield and Stirling[1], the intuition behind the $\mu$ operator is "finite looping". So my reasoning is the following: on every path through states in $Z$ there must be only a finite number of transitions with labels different from $a$, and then we must reach a state which is both non-terminal (from the first condition) and has all transitions labelled $a$ (from finiteness). Hence on every path through states in $Z$ there must eventually be a transition labelled $a$ (similar to the CTL formula $\forall F(a)$). Is my reasoning correct? I am unable to find any formal reason for my solution to be right; can you give me a little hint? [1] http://homepages.inf.ed.ac.uk/jcb/Research/bradfield-stirling-HPA-mu-intro.ps.gz Answer: Let's break it down. First, let's look at $[-a]\phi$. This means every non-$a$ transition leads to a state where $\phi$ holds. It follows then that $[-a]\mathrm{ff}$ holds for states that have no non-$a$ transitions, which we will use when looking at the least fixed point semantics. $\langle-\rangle\mathrm{tt}$ is pretty simple. It holds in any state that has any transition, i.e. is not deadlocked. So together $\langle-\rangle\mathrm{tt} \land [-a]\phi$ means the state can take a transition and $\phi$ holds after every non-$a$ transition. One way to view the meaning of $\mu Z.\phi(Z)$ is by the approximants referenced in your linked tutorial. If the formula is satisfied in state $s$ then there is some $\beta$ such that $\bigvee_{\alpha<\beta} \phi^{(\alpha)}(\mathrm{ff})$ is satisfied in $s$. The notation $\phi^{(n)}(x)$ means $\phi$ iterated on $x$, $n$ times, i.e. $\underbrace{\phi(\phi(\dots\phi(x)))}_{\text{$n$ times}}$.
Let's look at some of these. \begin{align} \phi^{(0)}(\mathrm{ff}) &= \mathrm{ff} \\ \phi^{(1)}(\mathrm{ff}) &= \langle-\rangle\mathrm{tt} \land [-a]\phi^{(0)}(\mathrm{ff}) \\ &= \langle-\rangle\mathrm{tt} \land [-a]\mathrm{ff} \\ \phi^{(2)}(\mathrm{ff}) &= \langle-\rangle\mathrm{tt} \land [-a]\phi^{(1)}(\mathrm{ff}) \\ &= \langle-\rangle\mathrm{tt} \land [-a](\langle-\rangle\mathrm{tt} \land [-a]\mathrm{ff}) \\ \phi^{(3)}(\mathrm{ff}) &= \langle-\rangle\mathrm{tt} \land [-a]\phi^{(2)}(\mathrm{ff}) \\ &= \langle-\rangle\mathrm{tt} \land [-a](\langle-\rangle\mathrm{tt} \land [-a](\langle-\rangle\mathrm{tt} \land [-a]\mathrm{ff})) \end{align} Hopefully it is clear that these have the meanings $\phi^{(1)}(\mathrm{ff})$: States that can take only $a$ transitions $\phi^{(2)}(\mathrm{ff})$: Live states that have only $a$ transitions; or all length 1 non-$a$ paths lead to a live state with only $a$ transitions $\phi^{(3)}(\mathrm{ff})$: Live states that have only $a$ transitions; or all length 1 non-$a$ paths lead to a live state with only $a$ transitions; or all length 2 non-$a$ paths lead to a live state with only $a$ transitions If that is unclear, remember that $[-a]\phi$ is trivially satisfied for states with no non-$a$ transitions. Now you should see that $\phi^{(n)}(\mathrm{ff})$ is true if and only if the state can take at most $n-1$ non-$a$ transitions before reaching a live state with only $a$ transitions. It turns out that $\phi^{(n)}(\mathrm{ff}) \implies \phi^{(n+1)}(\mathrm{ff})$ so we don't need to take the disjunction with lesser approximants and can simply say $\mu Z. \langle-\rangle\mathrm{tt} \land [-a]Z \iff \exists \beta \in \mathbb{N}. \phi^{(\beta)}(\mathrm{ff})$, or in english, after a finite number of non-$a$ transitions we reach a live state with only $a$ transitions.
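This approximant computation can be checked mechanically on a toy labelled transition system (the states and transitions below are made up for illustration). A sketch that iterates $\phi(Z) = \langle-\rangle\mathrm{tt} \land [-a]Z$ starting from $\mathrm{ff}$ (the empty set of states) until the least fixed point is reached:

```python
# Least fixed point of  Z -> <->tt /\ [-a]Z  computed by Kleene iteration.
# Transition system: state -> list of (label, successor) pairs.
ts = {
    "s0": [("b", "s1")],   # one non-a step, then only a-loops
    "s1": [("a", "s1")],   # live, only a-transitions
    "s2": [],              # deadlocked
    "s3": [("b", "s3")],   # can avoid doing an 'a' forever
}

def phi(Z):
    live = {s for s, trans in ts.items() if trans}              # <->tt
    box = {s for s, trans in ts.items()
           if all(t in Z for (lab, t) in trans if lab != "a")}  # [-a]Z
    return live & box

Z = set()                  # phi^(0)(ff) = ff
while phi(Z) != Z:
    Z = phi(Z)

print(sorted(Z))           # ['s0', 's1']: states where every non-a path
                           # soon reaches a live state with only a-moves
```

The state s3 illustrates why the *least* fixed point matters: it can keep taking transitions forever without ever doing an $a$, so the iteration correctly excludes it, even though the greatest fixed point of the same body would admit it.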
{ "domain": "cs.stackexchange", "id": 822, "tags": "modal-logic, mu-calculus" }
Something I don't understand in Quantum Mechanics
Question: I've just started on QM and I'm puzzled by a lot of new ideas in it. 1. In a recent lecture I attended, there is an equation that says: $\langle q'|\sum_q q|q\rangle \langle q|q' \rangle = \sum_q q\, \delta(q,q')$ I don't understand why $\langle q'|q\rangle \langle q|q' \rangle = \delta (q,q')$ Can you explain this equation for me? 2. Actually, I'm still not clear about the bra-ket notation. I've learnt that the bra and the ket can be considered as vectors. Then what are the elements of the vectors? Thank you very much! Answer: The equation is true if $|q\rangle$, $|q'\rangle$ are chosen from an orthonormal set of vectors, such as an eigenbasis of an operator. Then, by definition, $\langle q|q' \rangle = \delta_{q,q'}$. $| q \rangle$ just denotes some vector labeled $q$ in some Hilbert space. The dimension equals the number of distinct classical states that your system can be in.
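To spell out the step the question asks about: reading the lecture's operator as a spectral decomposition $\hat{Q} = \sum_q q\,|q\rangle\langle q|$ (my reading of the notation), orthonormality gives $\langle q'|q\rangle = \delta_{q,q'}$, and since $\delta_{q,q'}^2 = \delta_{q,q'}$,

$$\langle q'|q\rangle\langle q|q'\rangle = \delta_{q',q}\,\delta_{q,q'} = \delta_{q,q'}, \qquad \text{so} \qquad \langle q'|\hat{Q}|q'\rangle = \sum_q q\,\langle q'|q\rangle\langle q|q'\rangle = \sum_q q\,\delta(q,q') = q'.$$

As for the second question: in a finite-dimensional Hilbert space you can think of each ket $|q\rangle$ as a column of complex numbers (its components in some chosen basis), and the bra $\langle q|$ as the conjugate-transposed row.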
{ "domain": "physics.stackexchange", "id": 4936, "tags": "quantum-mechanics" }
Are there characteristics of photographic film that would make it an analog medium?
Question: There was an article a few years ago entitled "Analog is Not the Opposite of Digital". Film is not analog, period. I used to shoot film on a Canon AE-1 from the ’70s. Now I have a digital SLR from Canon, and they’re obviously extremely different. But we have to be careful not to confuse ‘old’ and ‘new’, with two very specific terms like analog and digital. The word digital, to most people, refers to a device that can capture, store, or display data in a binary fashion. Ones and zeros, on and off, digital is all about numbers. Digital shouldn’t be confused with binary, of course, as digital simply means concrete values. Any system that utilizes solid values (or digits) is digital, binary is simply the most common system. Digital cameras capture light with a sensor, that light is converted into data (numbers), so the use of the word ‘digital’ for your cell phone camera or DSLR is accurate. Analog, however, is a very abused word. I would venture a guess that the significant amount of readers have used the word ‘analog’ to refer to film cameras. If the new, fancy robot cameras are ‘digital’ then our aging film cameras are ‘analog’, right? Not at all. Older cameras capture light with film, which is basically plastic, gelatin, and silver halide. When you take a photo (perhaps of your dog drinking a beer), photons hit this material and produce a latent (invisible) image, that can later be brought into view by bathing the film in various chemicals. You could write hundreds of blog posts on film development alone, but the point is that film photography is a chemical process. I understand the author's point about this misuse of the word "analog" in general, but it seems as though chemical film possesses many of the characteristics of an analog medium, in that it can capture a continuous spectrum of color values over a particular range. 
Does this characteristic (or any other) of chemical film make it directly comparable to other analog media such as a record or a cassette, or does the fact that the method of storage of the light information is not a continuous wave like a phonograph record preclude it from being classified as such? Since I know it will likely show up in a search for these topics, I found that Analog Photography appears to be a subfield of the art of photography, in which "progressively changing [the] recording medium" creates the image, regardless of whether chemical film or digital capture is used. The technique itself seems to be the analog process in that case, and not any sort of statement about the photographic medium. Answer: Film isn't absolutely "analog", as in continuous. Every individual silver halide molecule, after exposure and development, is either metalized or not; and there are a finite number of these molecules in every frame of film, thus quantizing the exposure measurement. However the density and location of the film grains and silver halide molecules is semi-randomized, which helps noise-shape or dither the sampling, quantization, and aliasing noise more than if they were in a regular grid with fixed step values. So some current differences are not that film is in some sense a truly continuous measurement system (at least down to the Planck level), but more might be that the "sampling" is dithered, and also that the dynamic range may be higher than in the most common digital formats (8 bit integer per channel).
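The "randomized grains act as dither" point can be illustrated with a toy quantizer: threshold a dark-to-bright ramp to binary "grains", once with a fixed threshold and once with a randomized threshold (a stand-in for randomly placed halide crystals). This is only a sketch of the sampling argument, not a model of film chemistry:

```python
import random
random.seed(0)

gradient = [i / 999 for i in range(1000)]        # brightness ramp, 0..1

hard = [1 if v >= 0.5 else 0 for v in gradient]  # fixed-threshold grid
dithered = [1 if v >= random.random() else 0 for v in gradient]

# In the darkest quarter the hard threshold records nothing at all,
# while the randomized "grains" still encode the average brightness.
print(sum(hard[:250]))                            # 0
print(sum(dithered[:250]), "of 250 grains fire")  # roughly 1/8 of them
```

The fixed threshold loses the shadows entirely, while the randomized one still carries the average brightness there — the same reason randomly distributed grain positions shape quantization noise instead of producing hard banding.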
{ "domain": "dsp.stackexchange", "id": 673, "tags": "camera, soft-question, analog" }
How to create a map?
Question: I need to define a map for my robot to let it work with amcl. What kind of software are you guys actually using to draw maps? Originally posted by FuerteNewbie on ROS Answers with karma: 123 on 2013-09-25 Post score: 0 Answer: Use the GIMP image editor. Originally posted by FuerteNewbie with karma: 123 on 2013-10-17 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 15660, "tags": "navigation, mapping, amcl" }
What can I see using a celestron nexstar 127SLT?
Question: I am planning to buy a Celestron NexStar 127SLT and I want to know what I can see using my telescope. Answer: First of all, the high end of magnification is generally not as important as several other factors. However, considering that no additional information was given in the question beyond "reflecting", I will make some assumptions and only give you a very general estimate. The book "Fundamental Astronomy: H. Karttunen et al.", page 51, formula (3.5) gives a relationship between the diameter of a telescope's aperture and its useful magnification. This relation is as follows ($e$ is the resolving capacity of the human eye, $D$ is the aperture diameter, $ω$ is the maximum magnification and $λ$ is the wavelength of visible light, used in the formula for the telescope's resolving capacity): $$θ ≈ 1.22×λ/D ≈ λ/D,$$ $$ω_{max} = \frac{e}{θ} ≈ \frac{eD}{λ} = \frac{(5.8 × 10^{-4} × D)}{(5.5 × 10^{-7} m)} ≈ \frac{D}{1 mm}.$$ That's a key concept for you, because any additional magnification above that can be thought of as "empty magnification". It exists mathematically, but the image is so blurry that you will not see any additional detail, and you will probably be quite disappointed in your purchase if you paid a lot of money for that magnification! Let's try backtracking from 500x. If this is the true maximum useful magnification this telescope can achieve, then its aperture diameter would be about 50cm. This aperture gathers $D^2/d^2$ times more light than the naked eye ($d$ is the diameter of the human pupil), allowing fainter stars to be observed. When this is calculated and converted to magnitudes, it turns out you should be able to see stars up to magnitude $9+6=15$. However this is a very rough estimate and you certainly will not be able to see magnitude 15 stars due to atmospheric extinction, seeing and other factors ... my estimate is you'll see stars down to about the 12th magnitude on a good night.
If the 500x telescope you are looking at has a much smaller aperture than 50 cm (and probably that's true) then that 500x is not going to be so useful in reality. Take some time and read further about telescopes. This site has many existing questions and answers on the subject.
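For the 127SLT specifically, the answer's two rules of thumb can be applied directly (aperture D = 127 mm; the 7 mm dark-adapted pupil and the +6 naked-eye limiting magnitude are common textbook assumptions, not values from the answer):

```python
import math

D = 127.0        # NexStar 127SLT aperture, mm
d = 7.0          # dark-adapted pupil diameter, mm (assumed)

max_mag = D / 1.0                        # omega_max ~ D / 1 mm
gain = 2.5 * math.log10((D / d) ** 2)    # light-grasp gain in magnitudes
limiting = 6.0 + gain                    # naked-eye limit ~ +6 (assumed)

print(f"max useful magnification ~ {max_mag:.0f}x")   # ~127x
print(f"limiting magnitude ~ {limiting:.1f}")         # ~12.3
```

So in practice: roughly 120–130x of genuinely useful magnification, and stars down to roughly 12th magnitude before atmospheric extinction and seeing take their cut.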
{ "domain": "astronomy.stackexchange", "id": 3263, "tags": "telescope, amateur-observing" }
Why do helium filled balloons move away from the Earth?
Question: From my understanding objects do not fall but are pulled to the earth by gravity. With this in mind I can't understand: if helium-filled balloons are not pulled by gravity, shouldn't they be stationary in the sky (or drift like objects in space with no gravity) rather than actively moving away from the earth? Is gravity pushing them rather than pulling them? Why is this? Answer: Helium balloons are pulled by gravity, as are all objects with mass. The reason they don't fall is that there is another force acting on them, a buoyant force from air pressure that is equal to the weight of the air displaced by the balloon. The reason you don't float is that the weight of the air you displace is quite a bit less than your weight (a person is more dense than air). The reason a normal balloon doesn't float is that the weight of the air it displaces is just a little bit less than the weight of the balloon (because it is filled with air, but the rubber of the balloon itself is more dense than the air). The analogy you want is to objects floating (or suspended) in water. Most rocks sink to the bottom, pulled by gravity, because the weight of the water they displace is less than their own weight. A bowling ball (ironically) is very close to the same density as water, so it will float suspended in mid-water, just like the helium balloon that has leaked a little bit.
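The force balance is easy to put numbers on. A sketch for a typical party balloon, with rough assumed values (30 cm diameter, ~3 g of rubber, standard air and helium densities):

```python
import math

g = 9.81                 # m/s^2
rho_air = 1.20           # kg/m^3, air at room conditions
rho_he = 0.18            # kg/m^3, helium
r = 0.15                 # m, balloon radius (assumed 30 cm diameter)
m_rubber = 0.003         # kg, mass of the rubber (assumed)

V = (4 / 3) * math.pi * r ** 3
buoyant = rho_air * V * g                 # weight of displaced air, up
weight = (rho_he * V + m_rubber) * g      # gravity still pulls the balloon

net_up = buoyant - weight
print(f"net upward force: {net_up * 1000:.0f} mN")   # positive -> it rises
```

The net force comes out positive, so the balloon accelerates upward — gravity pulls on it the whole time, but the buoyant force from the displaced air is larger.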
{ "domain": "physics.stackexchange", "id": 204, "tags": "gravity, buoyancy" }
Is the following property true?
Question: I was looking at a solution of a Fourier Transform question and the following property was used: if $$ x(t)\rightarrow X(jw) $$ then: $$ e^{jw_0t}x(t)\rightarrow X(j(w-w_0)) $$ $$ x(t)\sin(w_0t)\rightarrow \frac{1}{2j}X(j(w-w_0)) - \frac{1}{2j}X(j(w+w_0)) $$ If the above statements are true, can we say that for cos: $$ x(t)\cos(w_0t)\rightarrow \frac{1}{2}X(j(w-w_0)) - \frac{1}{2}X(j(w+w_0)) $$ Answer: For $\cos$, assuming $\omega_0$ is real, the identity is: $$ x(t) \cos(\omega_0 t) \rightarrow \frac{1}{2} X(j(\omega - \omega_0)) + \frac{1}{2} X(j(\omega + \omega_0)) $$ This is because $$ \cos(\omega_0 t) = \frac{1}{2}e^{j \omega_0 t} + \frac{1}{2}e^{-j \omega_0 t} $$ Use this expression with your first identity and the superposition property of the Fourier transform to arrive at this result. As an aside, also note that $$ \sin(\omega_0 t) = \frac{1}{2j}e^{j \omega_0 t} - \frac{1}{2j}e^{-j \omega_0 t} $$ By the same reasoning, this is how you arrive at your second identity.
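Both identities rest on Euler's decomposition of cos and sin into complex exponentials, which is easy to sanity-check numerically:

```python
import cmath
import math

w0, t = 2.0, 0.37    # arbitrary test values

cos_lhs = math.cos(w0 * t)
cos_rhs = 0.5 * cmath.exp(1j * w0 * t) + 0.5 * cmath.exp(-1j * w0 * t)

sin_lhs = math.sin(w0 * t)
sin_rhs = (cmath.exp(1j * w0 * t) - cmath.exp(-1j * w0 * t)) / (2j)

print(abs(cos_lhs - cos_rhs) < 1e-12)   # True
print(abs(sin_lhs - sin_rhs) < 1e-12)   # True
```

Feeding each exponential through the shift property $e^{j\omega_0 t}x(t) \rightarrow X(j(\omega-\omega_0))$ and adding the results by linearity gives the two transform pairs; the $1/(2j)$ weights are what produce the minus sign in the sin pair.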
{ "domain": "dsp.stackexchange", "id": 6971, "tags": "fourier-transform" }
Decide if number is negative (in URM machine like language)
Question: So I'm learning to program in an assembly language for an abstract machine which is very similar to an URM Machine. An URM machine has 3 basic instructions (or 4 in some literature): To zero a register: Z(r) To increment a register: S(r) To jump to a line or label if registers r1 and r2 contain the same value: J(r1,r2,l) Now, my abstract machine is even weaker because, for jumping, it only allows a comparison between a register and the literal 0. To compensate, it allows assigning any value to a register (not just zero as in the URM) and basic arithmetic operations. Both machines allow an infinite number of registers. I was able to write a program that successfully compares two positive numbers and returns the maximum. Now I would like to make my program able to also accept negative numbers. My question: How can I check if a number is negative? Is it even possible with only these instructions? I confess that I'm not very clever with these kinds of low-level languages... My maximum program follows: (input goes on r1 and r2 and output comes on r3) maximo(){ r5 := r1 - r3 jump (r5,0,maxr2) r5 := r2 - r4 jump (r5,0,maxr1) r3 := r3 + 1 r4 := r4 + 1 jump (r1,r1,maximo) } maxr1(){ r3 := r1 } maxr2(){ r3 := r2 } Thank you! Answer: Initialise $x \gets v$ and $y \gets -v$. Then increment both $x$ and $y$ in parallel until one of them is equal to 0.
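The answer's trick can be checked with a quick simulation that uses only operations this machine has: an initial assignment with basic arithmetic ($y := 0 - v$), increments, and jump-if-zero tests. The helper name is made up for illustration:

```python
def is_negative(v):
    """Simulate the URM-style sign test: only increments and zero tests."""
    x = v          # r1 := v
    y = 0 - v      # r2 := 0 - v  (allowed: basic arithmetic on registers)
    while True:
        if y == 0:        # y reached 0 first (or v == 0): v is >= 0
            return False
        if x == 0:        # x reached 0 first: v was negative
            return True
        x = x + 1         # S(x)
        y = y + 1         # S(y)

print(is_negative(-3), is_negative(3), is_negative(0))  # True False False
```

Whichever counter hits zero first reveals the sign: for negative $v$ the counter $x$ climbs up to 0, for positive $v$ the negated copy $y$ does, and checking $y$ first makes $v = 0$ count as non-negative.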
{ "domain": "cstheory.stackexchange", "id": 168, "tags": "ds.algorithms, machine-models" }
Bats Challenge from CodeEval.com
Question: I am going through coding challenges and I would like to get feedback on one of them. I am not sure if I can post the link to the challenge but here it is. CodeEval Challenge Outside of your window, there's a wire between two buildings. Bats love to hang there but you notice they never hang closer than "d" centimeters from each other. They also don't hang closer than 6 centimeters from any of the buildings. Your goal is to determine the maximum number of additional bats that can fit on that wire assuming they have zero width. INPUT SAMPLE: Each line of input contains three space separated integers: the length of the wire "l", distance "d" and number of bats "n" already hanging on the wire. "n" numbers contain the positions of the bats in any order. All numbers are integers. You can assume that the bats already hanging on the wire are at least 6 cm from the poles and at least "d" centimeters apart from each other: 22 2 2 9 11 33 5 0 16 3 2 6 10 835 125 1 113 47 5 0 Output 3 5 0 5 8 Here is my solution. I commented it a bit. I had to convert the method to C to get it passed through the automated system. Any suggestions to improve it would be greatly appreciated. 
#define CENTIMETERS_FROM_WALL 6 int howManyBatsBetween(int pointOne, int pointTwo, int distanceBetween) { //calculates how many bats you can put between two bats already on the wire //Have to make sure that it would be proper distance between the first bat and second bat int nextBatPosition = pointOne+distanceBetween; int batCount = 0; while ((nextBatPosition+distanceBetween) <= pointTwo) { batCount++; nextBatPosition+=distanceBetween; } return batCount; } -(void)batCount { NSString *line = @"47 5 0"; //input //extract input NSArray *inputArray = [line componentsSeparatedByString:@" "]; int lengthWire = [[inputArray objectAtIndex:0] intValue]; int distanceFromEachOther = [[inputArray objectAtIndex:1] intValue]; int batsOnWire = [[inputArray objectAtIndex:2] intValue]; //create an array with bats already on the wire NSMutableArray *batsOnWireArray = [[NSMutableArray alloc]init]; for (int indx = 1; indx <= batsOnWire; indx ++){ [batsOnWireArray addObject:[inputArray objectAtIndex:indx+2]]; } [batsOnWireArray sortedArrayUsingSelector:@selector(compare:)]; int firstLocation = 0; int secondLocation = 0; int additionalBats = 0; if ([batsOnWireArray count] == 0) { //if there are no bats on the wire additionalBats = floor((lengthWire-(CENTIMETERS_FROM_WALL*2))/distanceFromEachOther)+1; }else{ //calculate how many bats can be added between the beginning of the wire + 6cm and the first bat additionalBats = floor(([[batsOnWireArray objectAtIndex:0] integerValue]-CENTIMETERS_FROM_WALL)/distanceFromEachOther); for(int indx = 0; indx < [batsOnWireArray count]; indx++){ firstLocation =[[batsOnWireArray objectAtIndex:indx] intValue]; //calculate how many bats you can add between bats already on the wire if (indx < [batsOnWireArray count]-1) { secondLocation = [[batsOnWireArray objectAtIndex:indx+1] intValue]; additionalBats += howManyBatsBetween(firstLocation, secondLocation, distanceFromEachOther); }else{ //calculate how many bats you can fit between the last bat and the end of the wire 
- 6cm additionalBats += floor(((lengthWire - CENTIMETERS_FROM_WALL) - firstLocation) / distanceFromEachOther); } } } NSLog(@"Result %d", additionalBats); } Answer: You posted this question with the objective-c tag, and made this comment: I had to convert the method to C to get it passed through the automated system. As such, I'm going to assume you're perhaps more interested in a review of the Objective-C stuff, so that's what this'll be. (Essentially, I'm ignoring the howManyBatsBetween function for this.) As a start, the first problem I notice is that our method takes no arguments and returns nothing. The problem statement gives sample inputs in the form of strings and expects an integer return. Our method should be changed to reflect this: - (NSInteger)batsOnWire:(NSString *)wire; Now we can remove the hardcoded // input. componentsSeparatedByString: is a good approach at breaking up the string, but we also need to make sure our input is in a good format. NSArray *batPositions = [wire componentsSeparatedByString:@" "]; if (batPositions.count < 3) { return NSNotFound; } We'll use NSNotFound as a return result representing some sort of problem with the input. Truly we should actually take an NSError ** as an argument, but I won't get into that mess. I'll keep it simple for now. The point here is, we're dealing with inputs, either typed by a user or read from a file... and inputs always have a chance of having errors, so we have to deal with that. Grabbing lengthWire, distanceFromEachOther, and batsOnWire as you do is fine. But again, we are dealing with user input. The problem with calling intValue is that if the string in question can't be parsed as an int, the method just returns 0. It's impossible to tell the difference between @"0" and @"Hello World" when looking at what they return intValue. [@"0" intValue] == [@"Hello World" intValue] Instead, we can convert these strings to NSNumber objects... 
which we'll want to do later for another reason which I'll address, but here's how we do it: NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init]; formatter.numberStyle = NSNumberFormatterDecimalStyle; NSNumber *lengthWire = [formatter numberFromString: batPositions[0]]; if (!lengthWire) { return NSNotFound; } NSNumberFormatter's numberFromString: method returns nil if the string cannot be parsed to a number. Otherwise, it returns an NSNumber object representing the value. Once we do the if(!lengthWire) check, we know we have a valid number. We can always extract the raw int value by calling the same intValue on this NSNumber object. But there's another reason we're going to want the entire array of strings converted into an array of NSNumber objects. Given the following mutable array of strings: @[@"1", @"2", @"40", @"10", @"300", @"100", @"9"]; If we sort it using sortedArrayUsingSelector:@selector(compare:), the result would be an array of this order: @[@"1", @"10", @"100", @"2", @"300", @"40", @"9"]; And this isn't the right order. The compare: method of NSString compares the strings character by character. If we have two strings we want to compare individually, we can call another method: [@"2" compare:@"100" options:NSNumericSearch]; And we'll get the correct numeric result (2 would be sorted before 100 correctly). But when using sortedArrayUsingSelector:, we can't use a method that takes other arguments. We can't specify that our strings are compared numerically. However, if they are NSNumber objects, they'll of course be compared as numbers and sorted appropriately. So if we turn our array of strings into an array of NSNumber objects, and then call the same: [arrayOfNumberObjects sortedArrayUsingSelector:@selector(compare:)]; We'll get our array sorted in the proper order. And once again, NSNumber responds to the intValue method in exactly the same way NSString does, so we can still use this to get a workable int out of the object. 
#define CENTIMETERS_FROM_WALL 6 As one final note about Objective-C, we don't really care for #define variables. I can't discourage it enough. In fact, Apple agrees with me. As evidence, you can't even use the #define functionality in their new language, Swift, at all. Instead, we should opt for a typed constant: static const int kCentimetersFromWall = 6;
{ "domain": "codereview.stackexchange", "id": 10453, "tags": "programming-challenge, objective-c" }
Why are analytic signals so important in Time-frequency analysis?
Question: I am a little confused about why we need analytic signals so badly in time-frequency analysis. What might happen if I use non-analytic signals to do time-frequency analysis? Answer: Assuming time-frequency analysis aims at providing a separation (at least visual) between signal components, the main reasons could be: for quadratic distributions, which tend to yield interference between components, "cancelling" the negative frequencies reduces the number of components that can interfere. for linear distributions, the filter bank formalism, and especially the down-sampling operators, is simplified, reducing the impact of aliasing errors. For real signals, the Hermitian symmetry implies that "no information" is lost in the analytic form, a complex-valued function that has no negative frequency components, and it is easy to go back to real. However, this is not so simple in practice. While the analytic signal constructed from a wide-sense stationary (WSS) real signal is always proper, this may not be the case for non-stationary signals, as underlined in Stochastic time-frequency analysis using the analytic signal: why the complementary distribution matters. Moreover, computing the analytic signal on discrete, finite-length data is often only approximate, especially for real-time applications. Hence the energy in the negative frequencies is rarely zero. Extending the concept of analyticity beyond 1D is not evident, and several designs still exist.
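A small numpy sketch of the "no negative frequencies" property for a discrete, even-length signal (this FFT masking is the standard construction, essentially what scipy.signal.hilbert does; the test signal is my own choice):

```python
import numpy as np

N = 256
t = np.arange(N)
x = np.cos(2 * np.pi * 7 * t / N) + 0.5 * np.sin(2 * np.pi * 20 * t / N)  # real signal

# Build the analytic signal: zero the negative-frequency bins, double the positive ones.
X = np.fft.fft(x)
H = np.zeros(N)
H[0] = H[N // 2] = 1                       # DC and Nyquist unchanged
H[1:N // 2] = 2                            # positive frequencies doubled
xa = np.fft.ifft(X * H)                    # analytic signal

print(np.allclose(xa.real, x))             # True: the real part is the original signal
print(np.max(np.abs(np.fft.fft(xa)[N // 2 + 1:])))  # ~0: no negative-frequency energy
```

This also illustrates the last caveat in the answer: the suppression is exact only for this finite, periodic construction; on streaming data the Hilbert transform must be approximated by a filter, so some negative-frequency energy survives.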
{ "domain": "dsp.stackexchange", "id": 5912, "tags": "stft, time-frequency" }
pandas groupby and sort values
Question: I am studying for an exam and encountered this problem from past worksheets: This is the data frame called 'contest', with granularity as each submission of a question from each contestant in the math contest. The question and its answer (marked in red) are shown in the worksheet screenshot. I get why that works, but why is the 4th choice wrong? I really can't figure it out - please help. (Please let me know if this is not allowed as a post in this community.) For the full description of the problem: In this question, we will be looking at the contest dataframe which contains data from a math contest in 2019. In the contest, each participant had a total of five questions. The participants submit each question separately and each row of the DataFrame records a particular submission of one of the questions by some participant. The Timestamp column specifies the time a given problem is submitted by a participant; each timestamp is discretized to the minute and has been properly converted to a Pandas datetime object with pd.to_datetime. The Contestant column contains the id-name pair of each participant. The Question column contains the question that was submitted. The Correct column tells us if the answer given in the submission is correct (1) or not (0). Assume each participant can have several submissions for the same problem, but they can only submit one question per minute. Answer: If a participant answered question 2 before question 1, you will lose the information on question 1 by using .agg("first"), as in the 4th option.
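Since the original answer choices aren't reproduced here, this is only a toy illustration of the answer's point (made-up contestant name and rows): .agg("first") keeps whichever row happens to come first in the current row order, so submission order silently decides what survives.

```python
import pandas as pd

# Toy version of the contest frame: question 2 was submitted *before* question 1.
df = pd.DataFrame({
    "Contestant": ["1 - Ann", "1 - Ann"],
    "Question":   [2, 1],
    "Correct":    [1, 0],
})

# .agg("first") keeps the first row per group in the current order,
# so the row for question 1 is silently dropped:
first = df.groupby("Contestant").agg("first")
print(first.loc["1 - Ann", "Question"])   # 2 -- question 1's info is lost
```

Grouping by both Contestant and Question (or sorting appropriately first) avoids this loss.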
{ "domain": "datascience.stackexchange", "id": 7891, "tags": "pandas, groupby" }
Force params to lowercase
Question: I think there must be a more pythonic way to achieve the lowercase-forcing that I need to do here. The keys in my lookup (self.adjustments) are all lc, params sent in to function can be any case (data coming from third party). Also, is it objectionable to have self.adjustments used here but not passed in? def getRevenue(self, metric, productParam, platformParam): product = productParam.lower() # better way to do this? platform = platformParam.lower() # better way to do this? # get adjustment for product.platform, multiply by metric if product not in self.adjustments: raise Exception('Unknown adsense product: {}'.format(product)) Answer: If you have many functions that have this requirement, and if all arguments should be lowercase, you can use a decorator: def lowercase_args(fn): def new_fn(*args): args = map(str.lower, args) return fn(*args) return new_fn You can then use this in your function definition: @lowercase_args def getRevenue(self, metric, product, platform): # get adjustment for product.platform, multiply by metric if product not in self.adjustments: raise Exception('Unknown adsense product: {}'.format(product))
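A variation of the same idea that also copes with non-string arguments and keyword arguments (the function and decorator names here are hypothetical, not from the original answer):

```python
def lowercase_str_args(fn):
    """Lowercase every str positional/keyword argument before calling fn."""
    def wrapper(*args, **kwargs):
        args = [a.lower() if isinstance(a, str) else a for a in args]
        kwargs = {k: (v.lower() if isinstance(v, str) else v)
                  for k, v in kwargs.items()}
        return fn(*args, **kwargs)
    return wrapper

@lowercase_str_args
def get_revenue(metric, product, platform):
    return metric, product, platform

print(get_revenue(42, "AdSense", platform="Web"))  # (42, 'adsense', 'web')
```

The isinstance check matters when decorating methods, since the first positional argument is then self and must pass through untouched.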
{ "domain": "codereview.stackexchange", "id": 12512, "tags": "python" }
If I use kinect2, what should I do? What do I need to install?
Question: I have a Kinect2, but I don't know what I should install. Does anyone have tutorials? If so, can you provide the links? Thanks. Originally posted by MrBin on ROS Answers with karma: 1 on 2015-12-07 Post score: 0 Answer: Well, take a look here: it links you to the information about the Kinect2 and ROS. You cannot use plain libfreenect or OpenNI. But there is a libfreenect2 for the Kinect2 as described in the above post. That could be found here: https://github.com/wiedemeyer/libfreenect2 Originally posted by Kenavera with karma: 56 on 2015-12-07 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by MrBin on 2015-12-08: Thank you! Comment by Kenavera on 2015-12-08: If this answer solved your question, you can accept it.
{ "domain": "robotics.stackexchange", "id": 23167, "tags": "ros" }
Why is a mild earthquake often mistaken as dizziness or vertigo?
Question: Is it due to the longitudinal nature of the seismic waves? When a heavy bus or truck goes by at high speed, the shaking in the nearby ground is not mistaken for dizziness; rather the feet and body feel a vertical jerk (seemingly a transverse wave vertical to the ground). But when a natural, mild earthquake takes place, it is not felt as a jerk or switch in a direction at all. Rather we feel it for a while as dizziness. I have experienced this, and heard of many others who have as well. In fact, it takes a few seconds to distinguish whether it is dizziness or an earthquake. Is it due to the longitudinal waves? Is it because they are horizontal with the ground, matching the direction of dizziness? Also, I guess there may be a similarity between the frequency of seismic waves and the motion shown in a video from Wikipedia's article on vertigo. Does this phenomenon's frequency act similarly to that motion and simulate dizziness? Answer: Having experienced many minor earthquakes (Mag 3 to 5 in the East African Rift Valley, Samoa, Solomon Islands, Philippines and Indonesia), and one major life-threatening earthquake (Magnitude 8, Nepal), I can make a few personal observations about physiological reactions. First, you are quite correct about the body picking up any swaying feeling, and interpreting this as dizziness. I was in an 11th floor hotel room in Bangkok when an aftershock of the Aceh earthquake occurred. My first thought was "Did I drink too much beer this evening?" I wasn't alone. When I opened the door into the corridor, the occupants of nearly every room were looking out and wondering much the same thing. Second, after experiencing a big one the body becomes hypersensitive - conditioned to pick up vibrations that could be the onset of a major quake. Now, at the onset of any vibration, my brain goes straight to "how big is it - should I run to the nearest exit?" This response is now much faster than the sensation of dizziness. 
Also, I now pick up many vibrations that nobody else around me seems to notice. Thirdly, I am not sure I agree with you about vertical motion being the trigger. It seems to me that lateral motion - moving one's feet sideways relative to the head - creates a slight imbalance, which the ears interpret as 'dizziness'.
{ "domain": "earthscience.stackexchange", "id": 2085, "tags": "earthquakes, waves, seismic" }
Qiskit Transpiler function - Why does Toffoli + Hadamard not work as a Basis Gate set?
Question: It is a well-established fact that Hadamard + Toffoli is a computationally universal gate set. Therefore I thought that the transpiler function in Qiskit would be able to decompose any valid quantum circuit into a circuit of Hadamard and Toffoli gates. This doesn't seem to be the case, however. For example: from qiskit import QuantumCircuit qc = QuantumCircuit(5) qc.cnot(0,1) qc.h(1) from qiskit import transpile basis = ['h','ccx','id','swap'] qc_basis = transpile(qc,basis_gates = basis) returns an error: "Unable to map source basis {('h', 1), ('cx', 2)} to target basis {'ccx', 'barrier', 'measure', 'delay', 'snapshot', 'swap', 'reset', 'id', 'h'}." Have I interpreted computational universality incorrectly, or is this functionality simply outside the scope of the Qiskit transpile function? Thank you Answer: Although $\{\textrm{CCX}, \textrm{H}\}$ is a universal gate set, it is not universal in the sense that any unitary can be expressed in terms of a finite sequence of its elements. For any quantum circuit, however, you can use $\{\textrm{CCX}, \textrm{H}\}$ to implement another quantum circuit which, when measured, will give the same measurement results as the original circuit. For more details, see the answers here, here, and here. So, you cannot use Qiskit's transpiler to transpile a quantum circuit into this gate set because BasisTranslator (the transpiler pass responsible for translating gates to a given target basis) does not currently support the required kind of translations.
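One way to make the answer's point concrete with plain numpy (my own illustration; these are the standard gate matrices, not Qiskit objects): H and CCX both have real entries, and real matrices are closed under matrix and tensor products, so no finite H/CCX circuit can exactly equal a genuinely complex gate such as S = diag(1, i).

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CCX = np.eye(8)
CCX[6:, 6:] = [[0, 1], [1, 0]]     # Toffoli: swap |110> and |111>
S = np.diag([1, 1j])               # phase gate, a genuinely complex unitary

# H and CCX (and any matrix/tensor product of them) stay real ...
print(np.isrealobj(H), np.isrealobj(CCX @ CCX), np.isrealobj(np.kron(H, H)))
# ... while S is not:
print(np.allclose(S.imag, 0))      # False
```

This is why the set is only "computationally" universal: it can reproduce any circuit's measurement statistics (using an extra qubit to encode imaginary parts), not any unitary exactly, so a gate-for-gate basis translation is impossible.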
{ "domain": "quantumcomputing.stackexchange", "id": 4277, "tags": "qiskit, universal-gates, transpile" }
Will sound be generated by the capture of air from a vacuum?
Question: I have a thought experiment. If I had a box-sized vacuum sitting on the floor in front of me and I were to instantaneously remove all the walls of my box, would the capturing of air by that vacuum make a noticeable sound? Answer: Yes. Once the walls are magically removed, the air surrounding the box will suddenly rush inwards toward the center of the box volume - thereby propagating a rarefaction wave outwards at the speed of sound - which you will hear. Very soon thereafter, all the inrushing air will meet at the center of the box volume and rebound, propagating a compression wave outwards. You would therefore hear something like a "KA-POP" noise.
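A back-of-envelope timescale for the "KA-POP" (my own numbers: assuming a 0.3 m box and that the inrush disturbance travels at roughly the speed of sound):

```python
L = 0.3        # assumed box edge length, m
c = 343.0      # speed of sound in air at ~20 C, m/s

t_inrush = (L / 2) / c          # time for the disturbance to reach the centre
print(f"{t_inrush * 1e3:.2f} ms")   # ~0.4 ms
```

So the rarefaction and the rebound compression arrive well under a millisecond apart, which is why it would register as a single sharp pop rather than two distinct sounds.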
{ "domain": "physics.stackexchange", "id": 57923, "tags": "kinematics, acoustics, vacuum" }
Cylinder attached to block rotational dynamics problem gives different answers when using different method
Question: In the exercise described by the attached picture, in which the cylinder A rolls without sliding, I was asked to find the distance traveled by the system when the block's speed equals 3 m/s. My approach to solving it was applying Newton's second law for each object, taking into account that the masses and linear accelerations of both objects are equal. This yielded the following equations: $$F_rr = I\alpha$$ $$mg\sin\theta-T-F_r = ma$$ $$mg\sin\theta+T-F_r = ma$$ $$r\alpha = a$$ The first two equations correspond to the cylinder, the third one to the block and the fourth one to the rolling condition. I have chosen the coordinate system to be positive in the direction the system moves. Solving the system I obtained $a = 4/3 \ m/s^2$, and using that to calculate the distance traveled when $v=3 \ m/s$ gives an incorrect answer. The correct way to solve the problem, according to the answer, is as follows: Although it's in Spanish, it is clear the work-energy principle was applied, and the distance traveled was calculated using the work friction did on the system. What I don't understand about the answer is: shouldn't the work be the sum of the work friction did on the block and the work friction did on the cylinder? Why isn't it multiplied by 2, and why is my initial answer wrong? Answer: from the FBD you obtain $$ I_Z\,\alpha=F_r\,r\\ m\,a_Z=m\,g\sin(\phi)+T-F_r\\ \alpha\,r=a_Z\\ m\,a_K=-T+m\,g\,\sin(\phi)-m\,g\,\cos(\phi)\,\mu_K\\ a_Z=a_K$$ those are 5 equations for the unknowns $~\alpha~,a_Z~,a_K~,T~,F_r$ from here $$v=a_K\,t\quad ,x=\frac 12 a_K\,t^2$$ the solution for x is: $$x=\frac 12\,\frac{\left(2\,m\,r^{2}+I_{Z}\right)v^{2}}{g\,m\,r^{2}\left(2\sin\left(\phi\right)-\cos\left(\phi\right)\mu_{K}\right)}\quad,I_z=\frac 12 \,m\,r^2\\ x=\frac 54\frac{v^2}{g\,(2\sin(\phi)-\cos(\phi)\,\mu_k)}=\frac 32~[m]$$
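The five equations in the answer can be handed to sympy to verify the closed form (a sketch; the symbol names are mine, and $v^2 = 2 a x$ is used for the distance):

```python
import sympy as sp

m, g, r, phi, muk, v = sp.symbols('m g r phi mu_k v', positive=True)
alpha, aZ, aK, T, Fr = sp.symbols('alpha a_Z a_K T F_r')
Iz = sp.Rational(1, 2) * m * r**2          # solid cylinder

eqs = [
    sp.Eq(Iz * alpha, Fr * r),             # torque about the cylinder's axis
    sp.Eq(m * aZ, m * g * sp.sin(phi) + T - Fr),
    sp.Eq(alpha * r, aZ),                  # rolling without slipping
    sp.Eq(m * aK, -T + m * g * sp.sin(phi) - m * g * sp.cos(phi) * muk),
    sp.Eq(aZ, aK),
]
sol = sp.solve(eqs, [alpha, aZ, aK, T, Fr], dict=True)[0]
x = sp.simplify(v**2 / (2 * sol[aK]))      # from v^2 = 2 a x
print(x)   # equivalent to (5/4) v^2 / (g (2 sin(phi) - mu_k cos(phi)))
```

This reproduces the answer's result $x=\frac 54 v^2/(g(2\sin\phi-\mu_k\cos\phi))$, confirming the common acceleration $a=\frac 25 g(2\sin\phi-\mu_k\cos\phi)$.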
{ "domain": "physics.stackexchange", "id": 98918, "tags": "homework-and-exercises, newtonian-mechanics, rotational-dynamics, coordinate-systems, free-body-diagram" }
Recursively using reflection to merge fields
Question: I'm using the Observer pattern to notify my UI that the object they're representing has changed. Also, I'm refreshing this object from the interwebs. Therefore, I'm ending up with two instances representing the same object. One with old values, one with refreshed values. I have written this util class that recursively merges all fields from the refreshed instance into the original instance (Full Gist). I am wondering if and how I can optimize this, and whether I've forgotten something. The code works for some simple use cases. public class MergeUtils { /** * Recursively merges the fields of the provider into the receiver. * * @param receiver the receiver instance. * @param provider the provider instance. */ public static <T> void merge(final T receiver, final T provider) { Field[] fields = receiver.getClass().getDeclaredFields(); for (Field field : fields) { try { Object receiverObject = field.get(receiver); Object providerObject = field.get(provider); if (receiverObject == null || providerObject == null) { /* One is null */ field.setAccessible(true); field.set(receiver, providerObject); } else if (field.getType().isAssignableFrom(Collection.class)) { /* Collection field */ //noinspection rawtypes mergeCollections((Collection) receiverObject, (Collection) providerObject); } else if (field.getType().isPrimitive() || field.getType().isEnum() || field.getType().equals(String.class)) { /* Primitive, Enum or String field */ field.setAccessible(true); field.set(receiver, providerObject); } else { /* Mergeable field */ merge(receiverObject, providerObject); } } catch (IllegalAccessException e) { /* Should not happen */ throw new RuntimeException(e); } } } /** * Recursively merges the items in the providers collection into the receivers collection. * Receivers not present in providers will be removed, providers not present in receivers will be added. * If the item has a field called 'id', this field will be compared to match the items. 
* * @param receivers the collection containing the receiver instances. * @param providers the collection containing the provider instances. */ public static <T> void mergeCollections(final Collection<T> receivers, final Collection<T> providers) { if (receivers.isEmpty() && providers.isEmpty()) { return; } if (providers.isEmpty()) { receivers.clear(); return; } if (receivers.isEmpty()) { receivers.addAll(providers); return; } Field idField; try { T t = providers.iterator().next(); idField = t.getClass().getDeclaredField(ID); idField.setAccessible(true); } catch (NoSuchFieldException ignored) { idField = null; } try { if (idField != null) { mergeCollectionsWithId(receivers, providers, idField); } else { mergeCollectionsSimple(receivers, providers); } } catch (IllegalAccessException e) { /* Should not happen */ throw new RuntimeException(e); } } /** * Recursively merges the items in the collections for which the id's are equal. * * @param receivers the collection containing the receiver items. * @param providers the collection containing the provider items. * @param idField the id field. * * @throws IllegalAccessException if the id field is not accessible. */ private static <T> void mergeCollectionsWithId(final Collection<T> receivers, final Iterable<T> providers, final Field idField) throws IllegalAccessException { /* Find a receiver for each provider */ for (T provider : providers) { boolean found = false; for (T receiver : receivers) { if (idField.get(receiver).equals(idField.get(provider))) { merge(receiver, provider); found = true; } } if (!found) { receivers.add(provider); } } /* Remove receivers not in providers */ for (Iterator<T> iterator = receivers.iterator(); iterator.hasNext(); ) { T receiver = iterator.next(); boolean found = false; for (T provider : providers) { if (idField.get(receiver).equals(idField.get(provider))) { found = true; } } if (!found) { iterator.remove(); } } } /** * Recursively merges the items in the collections one by one. 
Disregards equality. * * @param receivers the collection containing the receiver items. * @param providers the collection containing the provider items. */ private static <T> void mergeCollectionsSimple(final Collection<T> receivers, final Iterable<T> providers) { Iterator<T> receiversIterator = receivers.iterator(); Iterator<T> providersIterator = providers.iterator(); while (receiversIterator.hasNext() && providersIterator.hasNext()) { merge(receiversIterator.next(), providersIterator.next()); } /* Remove excessive receivers if present */ while (receiversIterator.hasNext()) { receiversIterator.next(); receiversIterator.remove(); } /* Add residual providers to receivers if present */ while (providersIterator.hasNext()) { receivers.add(providersIterator.next()); } } } Some pseudo use case code: MyObject myObject = retrieveFromWeb(1); myView.setMyObject(myObject); myObject.addObserver(myView); // now refresh the data MyObject myObject2 = retrieveFromWeb(1); // instead of propagating the new object all the way to the view, use the merge util class: MergeUtil.merge(myObject, myObject2); myObject.notifyObservers(); Re-setting the new object to the view can lead to hassle (in Android for instance). Answer: if (receivers.isEmpty() && providers.isEmpty()) { return; } if (providers.isEmpty()) { receivers.clear(); return; } You can remove the first if statement, it's not necessary. Clearing a list that has no elements is almost free anyway. for (T provider : providers) { if (idField.get(receiver).equals(idField.get(provider))) { found = true; } } You could break out of the for loop here. No need to go over the rest of the providers once you've found a match. field.getType() You have a lot of those in your merge function. How about storing it in a temporary variable to increase readability?
{ "domain": "codereview.stackexchange", "id": 8964, "tags": "java, optimization, performance" }
How do we stabilise satellites so precisely?
Question: Look at the Hubble Ultra Deep Field photo. The stars in it are on the order of 1 arcsecond across. To an order of magnitude, this is $10^{-6}$ radians in a $10\text m$ telescope which was held steady for $10^6$ seconds. In other words, the velocity of the aperture of the telescope around the light sensors had to be on the order of one angstrom per second. Perhaps my maths is wrong, but this seems like an extraordinary feat of control. I can't quite believe it. The computer programmer in me suspects that, since the image was captured across a number of occasions, each occasion would be smeared somewhat less than a single long exposure and some kind of correspondence-finding algorithm could align the images (and infer the drift of the telescope). In any case, even if that is what they did, the satellite is held amazingly steady. How do we achieve this? Answer: Actually reaction wheels or control moment gyros are only part of the answer. To maintain the accuracy and precision on the order of what Hubble demands requires a fully integrated feedback control system of actuators and sensors. For microradian pointing, reaction wheels provide only the first stage of isolating disturbances in a multi-stage pointing control system. Disturbances that can interfere with attitude stabilization include those from outside the spacecraft, such as magnetic anomalies and atmospheric drag for planetary orbits, or solar winds for spacecraft further away from a planet - as examples. Or disturbances can come from the spacecraft itself, such as vibrational modes excited by solar array stepping. Reaction wheels or CMGs can be used to change the attitude of the spacecraft, and together with feedback from gyros or inertial measurement units (IMUs), closed loop control systems maintain the attitude to perhaps 10's of microradians in the face of the disturbances. 
But to get down to microradian or submicroradian stability usually requires optical components in the line of sight that compensate for the residual higher frequency jitter that the reaction wheel control system is unable to remove. A fast steering mirror for example can be tipped or tilted to re-align the optical path according to what the imaging sensor reads from the target star or galaxy.
{ "domain": "physics.stackexchange", "id": 18989, "tags": "telescopes, satellites, gyroscopes" }
How does phase transition occur in finite sized ising model?
Question: I was simulating the square lattice Ising model via the Metropolis algorithm and found that at 0 magnetic field, there is spontaneous magnetisation below some temperature. I have used periodic boundary conditions on a 100x100 lattice. Is this an instance of a phase transition? I have heard that phase transitions occur in the thermodynamic limit. So how does this spontaneous magnetisation occur? If this is not a phase transition, is it an artifact of the Metropolis algorithm, related to non-convergence of this algorithm? If this is a phase transition, how does spontaneous magnetisation occur at all, since the probabilities carry the symmetry of the Hamiltonian and the partition function is finite, allowing the microstates to have a Boltzmann distribution for all magnetisation values? Answer: In a strict mathematical sense, you will not observe a phase transition in a finite volume, for the reason you mention. If you measure thermodynamic quantities and their derivatives, where you expect a completely sharp transition you will instead see a smooth curve that approximates the "correct" behavior as the volume gets larger and larger. There is a theory of finite-size scaling that addresses this quantitatively, and in fact explains how these finite-size effects can be exploited to measure critical exponents effectively. In practice, on a 100x100 lattice you should have no problem at all to detect a phase transition. If you use a good algorithm (like a cluster algorithm that flips many spins at once) you will find that the susceptibility attains a maximum $\chi_\text{max}(L)$ at some temperature $T_c(L)$ that slightly depends on the lattice size $L$. By measuring these quantities for different box sizes $L$ you can obtain estimates for the actual critical temperature $T_c$ and the critical exponents $\nu$ and $\gamma$.
{ "domain": "physics.stackexchange", "id": 60823, "tags": "statistical-mechanics, phase-transition" }
Is it inevitable to compute the quadrupole tensor in components? Why?
Question: I was trying to determine the quadrupole tensor for a given charge distribution in one go from this equation: $$\overleftrightarrow{D}=\int d^3r \varrho(\vec{r})\left(3\vec{r} \circ \vec{r}-r^2\hat{I} \right)$$ I am trying to understand why my findings were wrong. I found that the tensor is commonly calculated by components. Why is that so? Can't I just use this formula? Just to clarify, here is what I put for $\rho$ for the charge distribution below: $$\rho(\vec{r}) = Q\cdot \left(\delta^3\left(\vec{r}-\frac{3}{2}\vec{a}\right) +\delta^3\left(\vec{r}+\frac{3}{2}\vec{a}\right) -\delta^3\left(\vec{r}-\frac{1}{2}\vec{a}\right) -\delta^3\left(\vec{r}+\frac{1}{2}\vec{a}\right)\right)$$ Answer: Why would you want to? $$\tag{1}\mathbf{D}=\int d^3r\,\rho(r)\begin{pmatrix} 3x^2-r^2 & 3yx & 3zx \\ 3xy & 3y^2-r^2 & 3zy\\ 3xz & 3yz & 3z^2-r^2 \end{pmatrix}$$ Looks rather strange. Components $$D_{ij}=\int d^3x \,\rho(x)(3x_ix_j-r^2\delta_{ij})$$ are much better. Besides, you're going to calculate (1) component by component anyway. It's not like you can integrate a matrix in one fell swoop.
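For the point-charge distribution in the question, the integral collapses to a sum, and the component formula is easy to evaluate numerically (a sketch assuming, as the $\vec a$-dependence suggests, that all four charges lie on one axis, taken here as $z$ with $Q = a = 1$):

```python
import numpy as np

a, Q = 1.0, 1.0
# (charge, z-position): +Q at +-3a/2, -Q at +-a/2, all on the z-axis
charges = [(Q, 1.5 * a), (Q, -1.5 * a), (-Q, 0.5 * a), (-Q, -0.5 * a)]

D = np.zeros((3, 3))
for q, z in charges:
    r = np.array([0.0, 0.0, z])
    D += q * (3 * np.outer(r, r) - np.dot(r, r) * np.eye(3))  # D_ij per charge

print(np.diag(D))   # [-4. -4.  8.]  in units of Q a^2; off-diagonals vanish
```

As expected for an axially symmetric distribution, the tensor is diagonal and traceless, with $D_{zz} = -2D_{xx}$.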
{ "domain": "physics.stackexchange", "id": 19700, "tags": "homework-and-exercises, electrostatics, tensor-calculus, multipole-expansion" }
Schrodinger equation in spherical coordinates
Question: I read a paper on solving the Schrödinger equation with a central potential, and I wonder how the author gets equation (2) below. Full text. In Griffiths's book, it reads $$-\frac{1}{2}D^2\phi+\left(V+\frac{1}{2}\frac{l(l+1)}{r^2}\right)\phi=E\phi$$ They are quite different. Can anyone explain how to deduce equation (2)? Answer: The difference is due to the fact that solid harmonics are not spherical harmonics. So, equation (2) and the more conventional equation from Griffiths are equations for different functions $\phi$. The Schrodinger eq. (1) $$-\frac{1}{2r^2}\frac{\partial}{\partial r} \left( r^2\frac{\partial}{\partial r}\psi\right) + \frac{\hat{L}^2}{2r^2}\psi + V\psi ~=~ E\psi $$ is indeed turned by the substitution $$ \psi ~=~ R(r) Y_{\ell m}(\theta,\varphi)~=~ \phi(r) r^{\ell} Y_{\ell m}(\theta,\varphi) $$ into equation (2) if you do the math correctly. Note $r^{\ell}$ here: it is what distinguishes solid harmonics from spherical harmonics. On the other hand, Griffiths's function $\phi(r)$ is defined as $rR(r)$.
{ "domain": "physics.stackexchange", "id": 1341, "tags": "quantum-mechanics, schroedinger-equation" }
Problem with simulation of the desired shape of the signal
Question: I am trying to generate a signal with a specific frequency and shape. I have managed to get a time domain signal with the specified frequency, however the shape does not match. Is there any method to get the desired shape? Will phase be of any help? Answer: For a square wave as used in your example, phase doesn't do anything, since the Fourier series is built from sine waves alone, without phase terms. The Fourier series is: $$ S_N (x) = \frac{a_0}{2}+ \sum_{n=1}^{N} A_n \cdot \sin (\frac{2 \pi \cdot n \cdot x}{P}) \qquad \text{for integer } N \geq 1 $$ Since the square wave contains only the odd harmonics, you can choose for the sine waves: 1, 3, 5, 7, 9, 11, 13, ... For other functions it is different. In that case you have both sine and cosine terms, and the combination of the two gives the phase. If you include a series of 7 or 8 frequencies it is already close to a square wave.
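This buildup can be checked numerically; a sketch (Python/NumPy) summing odd sine harmonics with the ideal square wave's $4/(\pi n)$ amplitudes:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
f = 5.0                                   # fundamental frequency in Hz
square = np.sign(np.sin(2 * np.pi * f * t))

def partial_sum(n_max):
    """Sum of odd sine harmonics 1, 3, 5, ... up to n_max."""
    return sum(4.0 / (np.pi * n) * np.sin(2 * np.pi * f * n * t)
               for n in range(1, n_max + 1, 2))

err1 = np.mean((square - partial_sum(1)) ** 2)    # fundamental only
err39 = np.mean((square - partial_sum(39)) ** 2)  # harmonics up to 39
```

With only the fundamental the fit is rough; by harmonic 39 the mean squared error has dropped to roughly 1%, with the remaining mismatch concentrated in the Gibbs ringing at the jumps.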
{ "domain": "dsp.stackexchange", "id": 3497, "tags": "signal-analysis" }
Explanation for $M{\ddot{r}}=-\nabla \phi$
Question: Could someone please explain this equation $$M\bf {\ddot{r}}=-\nabla \phi$$ Where $\bf r$ is a position vector and $\phi$ is the potential function. Could someone briefly explain the potential function and tell me why we've got a minus sign before the nabla operator? Answer: The minus sign is only there by convention. You could replace $\phi$ with $-\phi$ and the minus sign would go away. Note that $\nabla \phi$ points in the direction of steepest ascent for $\phi$, whereas $-\nabla \phi$ points in the direction of steepest descent. Perhaps it seems nice for the force on an object to be pointing in a direction of descent for $\phi$, as if the object is trying to move to a location where $\phi$ is as small as possible. It's simply a mathematical fact that if a vector field $F$ is conservative, then there exists a scalar-valued function $\phi$ such that $-\nabla \phi = F$. If $F$ is a force field that some object is moving in, then we also have $F = Ma$, which yields your equation \begin{equation*} M {\ddot r} = -\nabla \phi. \end{equation*} Assuming that a vector field $F$ is conservative, how do we show mathematically the existence of $\phi$? You can do it by first selecting some reference point $x_0$ arbitrarily, then defining \begin{equation} \phi(x) = -\int_{x_0}^x F \cdot dr. \end{equation} This line integral is taken along any path connecting $x_0$ to $x$. You get the same answer no matter which path you take, because $F$ is conservative. Then note that \begin{align} \phi(x + \Delta x) - \phi(x) &= -\int_x^{x + \Delta x} F(r) \cdot dr \\ &\approx -\int_x^{x + \Delta x} F(x) \cdot dr \\ &= \langle -F(x), \Delta x \rangle. \end{align} The approximation is good when $\Delta x$ is small. Comparing this with the equation \begin{equation} \phi(x + \Delta x) - \phi(x) \approx \langle \nabla \phi(x), \Delta x \rangle \end{equation} shows that $\nabla \phi(x) = - F(x)$, or in other words \begin{equation*} -\nabla \phi(x) = F(x). \end{equation*}
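The identity is easy to verify numerically; a sketch for a spring-like potential $\phi = \tfrac{1}{2}k(x^2+y^2)$, whose force $F = -k\vec r$ indeed equals $-\nabla\phi$:

```python
import numpy as np

k = 3.0

def phi(r):
    """Potential 0.5 * k * |r|^2."""
    return 0.5 * k * np.dot(r, r)

def force(r):
    """Analytic force F = -grad(phi) = -k r."""
    return -k * r

def grad_phi(r, h=1e-6):
    """Gradient of phi via central finite differences."""
    g = np.zeros_like(r)
    for i in range(len(r)):
        dr = np.zeros_like(r)
        dr[i] = h
        g[i] = (phi(r + dr) - phi(r - dr)) / (2 * h)
    return g

r = np.array([0.7, -1.2])
# numerical -grad(phi) matches the analytic force at any point
```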
{ "domain": "physics.stackexchange", "id": 15306, "tags": "newtonian-mechanics, potential-energy, conventions" }
Why does the angular frequency of a particle in SHM not change when its velocity is changed
Question: $V = A \omega \sin(\omega t + \theta)$ gives the velocity of a particle in SHM at time $t$. But why doesn't the value of $\omega$ change when $V$ is changed? Answer: $\omega$ in this case is the angular frequency, not an angular velocity. It is given this term because in the equation for the position of the particle, $x(t)=A\sin(\omega t)$, the input to the $\sin$ function is an angle $\omega t$. It just tells you how "fast" the angle changes over time. Typically the angular frequency just depends on what forces are acting on the object, and in many cases (like a mass on an ideal spring or a simple pendulum) the forces do not depend on velocity, so $\omega$ doesn't either. In cases where forces do depend on velocity (like damped oscillators) things are more complicated, but the natural frequency of the system when damping is not present still plays an important role in describing the motion of the system over time.
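This independence can also be seen numerically: integrate $m\ddot x = -kx$ for two very different launch speeds and compare the measured periods (a sketch using the symplectic Euler-Cromer scheme):

```python
import numpy as np

def period(v0, k=4.0, m=1.0, dt=1e-4):
    """Oscillation period of m x'' = -k x, starting at x = 0 with speed v0.

    Measured as the time between the first two upward zero crossings.
    """
    x, v, t = 0.0, v0, 0.0
    crossings = []
    while len(crossings) < 2:
        v += -(k / m) * x * dt     # Euler-Cromer: update v first...
        x_new = x + v * dt         # ...then x with the new v
        t += dt
        if x < 0.0 <= x_new:       # upward zero crossing
            crossings.append(t)
        x = x_new
    return crossings[1] - crossings[0]
```

Both `period(1.0)` and `period(5.0)` come out as $2\pi\sqrt{m/k} = \pi$ s here, even though the launch speeds (and hence amplitudes) differ by a factor of five.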
{ "domain": "physics.stackexchange", "id": 55443, "tags": "harmonic-oscillator, angular-velocity" }
Confusion regarding Equilibrium Analysis Procedures
Question: I have trouble understanding the bolded sentence in the following Equilibrium Analysis Procedures: Draw a boundary around the system, so that you can clearly separate the system you are considering from its environment. Draw a free-body diagram showing all external forces that act on the system and their points of application. External forces are those that act through the system boundary that you drew in step 1; these often include gravity, friction, and forces exerted by wires or beams that cross the boundary. Internal forces (those that objects within the system exert on each other) should not appear in the diagram. Sometimes the direction of a force may not be obvious in advance. If you imagine making a cut through the beam or wire where it crosses the boundary, the ends of this cut will pull apart if the force acts outward from the boundary. If you are in doubt, choose the direction arbitrarily, and if you have guessed wrong your solution will result in negative values for the components of that force. What does it mean by "the ends of this cut will pull apart"? What does "pulling apart" look like? Why would "the ends of this cut will pull apart"? Answer: What does it mean by "the ends of this cut will pull apart"? What does "pulling apart" look like? Why would "the ends of this cut will pull apart"? Actually, the statement was "the ends of this cut will pull apart if the force acts outward from the boundary". For a beam or wire that would mean the beam or wire is in tension. I emphasized if because the ends of the cut will not pull apart if the force acts inward from the boundary. For a beam that would mean the beam is in compression. A wire would collapse. Perhaps the best way to visualize this is with an example from statics. The first figure below shows a simply supported truss where it is asked what the force is in a particular member (member BD).
Here I am using what is called the method of sections, in which I am isolating the section (which can be thought of as the "system") from the rest of the truss (which becomes the surroundings) by cutting members with the dotted boundary. Before proceeding, one determines the vertical reaction at support A by applying the requirements for equilibrium, i.e., the sum of the vertical forces equals zero ($\sum\vec F_{V}=0$) and the sum of the moments about H equals zero ($\sum\vec M_{H}=0$). The second figure then isolates section ABC from the rest of the truss by the red boundary, creating a free body diagram (FBD) for section ABC. The external forces acting on section ABC are then $\vec F_{CD}$, $\vec F_{BD}$ (the unknown being sought), $\vec F_{BE}$, reaction $R_A$, and the external vertical 5 kN load. The internal forces in this section (system) are $F_{AC}$, $F_{AB}$ and $F_{CB}$ and are therefore not shown. Now here is the key point. In the FBD I have shown all three unknown member forces, including the member in question, $\vec F_{BD}$, pointing outward, i.e., showing all members in tension. If the forces are actually pointing outward, then if the members are cut they will "pull apart". In this example, the value for $F_{BD}$ turns out to be negative, meaning it is in compression and would therefore not "pull apart" if cut. This approach is what the author means by the following statement: If you are in doubt, choose the direction arbitrarily, and if you have guessed wrong your solution will result in negative values for the components of that force. Hope this helps.
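The two equilibrium requirements used above, $\sum \vec F = 0$ and $\sum \vec M = 0$, can be sketched numerically for a hypothetical simply supported beam (not the truss in the figure; the span, load, and position below are invented):

```python
# Simply supported beam: support reactions from the two equilibrium equations.
# Hypothetical numbers, not taken from the truss above.
L = 10.0   # span (m)
P = 5.0    # downward point load (kN)
d = 4.0    # load position measured from support A (m)

# Sum of moments about A = 0:  R_B * L - P * d = 0
R_B = P * d / L
# Sum of vertical forces = 0:  R_A + R_B - P = 0
R_A = P - R_B

print(R_A, R_B)   # 3.0 2.0
```

Taking moments about the other support, $R_A L - P(L-d) = 0$, gives the same $R_A$, which is a quick consistency check on the free-body diagram.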
{ "domain": "physics.stackexchange", "id": 80782, "tags": "newtonian-mechanics, rotational-dynamics, torque" }
Is there a difference between heating a liquid from above and heating a solid?
Question: If you were to heat a container full of liquid from the top, would it behave the same as a solid being heated in the same fashion? (Assuming both the liquid and the solid have the same thermal conductivity.) By heating the container from the top I would think you would eliminate convection, and therefore make the liquid heat up the same as a solid, i.e. mainly through conduction. Answer: Yes, there would be no natural convection in this case. Unless you had something external to the heat source to disturb the liquid, it would remain stagnant and the heat transfer would be conductive. The same thing happens when you try to cool something from the bottom (unless it's liquid water near freezing or has similar non-standard properties).
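With convection suppressed, heating from the top reduces to a 1-D conduction problem; a minimal explicit finite-difference sketch (the material numbers are invented):

```python
import numpy as np

alpha = 1e-4          # thermal diffusivity (m^2/s), hypothetical
dx, dt = 0.01, 0.2    # grid spacing / time step; stable since alpha*dt/dx**2 <= 0.5
r = alpha * dt / dx**2

T = np.zeros(51)      # temperature profile; index 0 is the heated top surface
T[0] = 100.0
for _ in range(5000):
    T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])   # dT/dt = alpha d2T/dx2
    T[0] = 100.0      # heater held at a fixed temperature
    T[-1] = T[-2]     # insulated bottom
```

The profile stays monotone: heat only creeps downward by conduction, exactly as it would in a solid of the same diffusivity.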
{ "domain": "physics.stackexchange", "id": 38122, "tags": "thermodynamics, thermal-conductivity, convection" }
Threadsafe filtering queue
Question: I have implemented a thread safe filtering queue. The queue allows any objects, of the specified type to be added. A thread interested to take an object must specify which object it is interested in via Predicate<T>. A particularity of this implementation is that in my use case, the threads might not be allowed to remove the object from the queue, because others threads may be interested on that object as well. public class FilterQueue<T> { private readonly LinkedList<T> _values = new LinkedList<T>(); private readonly object _hasWaiters = new object(); private int _waiters; public void Add(T value) { lock (_values) { _values.AddLast(value); Monitor.PulseAll(_values); } } private LinkedListNode<T> FindNode(Predicate<T> pred) { var node = _values.First; while (node != null) { if (pred(node.Value)) { return node; } node = node.Next; } return null; } public void WaitForWaiters() { lock (_values) { while (_waiters == 0) { SyncUtils.Wait(_values, _hasWaiters); } } } public void Clear() { lock (_values) { if (_waiters != 0) { throw new InvalidOperationException("There is still someone waiting for requests"); } _values.Clear(); } } public T Take(Predicate<T> hasMessage) { return Take(hasMessage, Timeout.InfiniteTimeSpan); } public T Take(Predicate<T> hasMessage, TimeSpan timeout, bool removeObject = false) { lock (_values) { var now = Environment.TickCount; int totalTimeout = (int)timeout.TotalMilliseconds; ++_waiters; SyncUtils.Pulse(_values, _hasWaiters); try { while (true) { var node = FindNode(hasMessage); if (node != null) { if (removeObject) { _values.Remove(node); } return node.Value; } Monitor.Wait(_values, totalTimeout); if (SyncUtils.HasTimedOut(ref totalTimeout, now)) { return default(T); } } } catch (ThreadInterruptedException) { var node = FindNode(hasMessage); if (removeObject && node != null) { _values.Remove(node); Thread.CurrentThread.Interrupt(); return node.Value; } throw; } finally { --_waiters; } } } } And the utility methods used there, they 
are not for review either, they are here for completeness: public static class SyncUtils { private static void EnterUninterruptibly(Object lockObj, bool throwException = false, ThreadInterruptedException previous = null) { ThreadInterruptedException ex = previous; for (; ; ) { try { Monitor.Enter(lockObj); break; } catch (ThreadInterruptedException e) { ex = e; } } if (throwException && ex != null) { throw ex; } if (ex != null) { Thread.CurrentThread.Interrupt(); // NOTE: dont't throw ThreadInterruptedException but DO keep interrupted status } } public static void Pulse(Object lockObj, Object condObj) { if (lockObj == condObj) { Monitor.Pulse(condObj); } else { EnterUninterruptibly(condObj); // NOTE: a Pulse should never throw ThreadInterruptedException Monitor.Pulse(condObj); Monitor.Exit(condObj); } } public static bool HasTimedOut(ref int timeout, int referenceTime) { if (timeout == Timeout.Infinite) { return false; } timeout = timeout - (Environment.TickCount - referenceTime); if (timeout <= 0) { timeout = 0; return true; } return false; } } Concurrent software should always be tested, for that reason I include a couple of tests (you don't have to review them): public const int TakeNMessages = 10000; static void Main(string[] args) { foreach (var thread in TestWithWaitersFirst()) { thread.Join(); } Console.WriteLine("Completed " + "TestWithWaitersFirst"); foreach (var thread in TestWithProducersFirst()) { thread.Join(); } Console.WriteLine("Completed " + "TestWithProducersFirst"); Console.Read(); } private static void WriteWithThreadId(string message) { Console.WriteLine("Thread" + Thread.CurrentThread.ManagedThreadId + ": " + message); } private static IEnumerable<Thread> TestWithWaitersFirst() { var queue = new FilterQueue<int?>(); var takers = Enumerable.Range(0, 4).Select(i => { return new Thread(() => { for (int nTake = 0; nTake < TakeNMessages; ++nTake) { var value = TakeNMessages*i*10 + nTake; WriteWithThreadId("is waiting for " + value); queue.Take(n => 
n == value); WriteWithThreadId("received " + value); } }); }).ToList(); Thread.Sleep(200); var producers = Enumerable.Range(0, 4).Select(i => { return new Thread(() => { for (int nTake = 0; nTake < TakeNMessages; ++nTake) { var value = TakeNMessages * i * 10 + nTake; WriteWithThreadId("adding " + value); queue.Add(TakeNMessages * i * 10 + nTake); } }); }); var threads = takers.Concat(producers).ToArray(); foreach (var thread in threads) { thread.Start(); } return threads; } private static IEnumerable<Thread> TestWithProducersFirst() { var queue = new FilterQueue<int?>(); var producers = Enumerable.Range(0, 4).Select(i => { return new Thread(() => { for (int nTake = 0; nTake < TakeNMessages; ++nTake) { var value = TakeNMessages * i * 10 + nTake; WriteWithThreadId("adding " + value); queue.Add(TakeNMessages * i * 10 + nTake); } }); }); Thread.Sleep(200); var takers = Enumerable.Range(0, 4).Select(i => { return new Thread(() => { for (int nTake = 0; nTake < TakeNMessages; ++nTake) { var value = TakeNMessages * i * 10 + nTake; WriteWithThreadId("is waiting for " + value); queue.Take(n => n == value); WriteWithThreadId("received " + value); } }); }).ToList(); var threads = takers.Concat(producers).ToArray(); foreach (var thread in threads) { thread.Start(); } return threads; } Any comments are appreciated, especially regarding concurrency concerns that I may have missed. Answer: I cannot tell you whether it's 100% thread safe and correct but I had some general comments on the design. I wouldn't call this structure a Queue because it isn't one. There is no Push/Pop/Peek and it doesn't work like this. It is more an ObjectLocker than a queue. Take I think GetValueOrDefault would be a better name. Secondly, you might consider having it return Task<T> instead of just T, as this method can take some time. With this you could await the result. If you choose to do this, then you also might consider another argument, a CancellationToken, in case you decide to no longer wait.
SyncUtils A parameter like throwException is a really bad choice. If you want to be able to switch between throwing and not throwing an exception then you should have two methods: one called EnterUninterruptibly that throws exceptions, and another one, TryEnterUninterruptibly, that doesn't throw and doesn't have to catch any exceptions. You can solve it neatly this way because the Monitor class already has two such methods, Enter and TryEnter, that you can use for the new implementation. DI The Queue shouldn't rely on static SyncUtils. It would be better to pass it via DI as an instance that implements an IObjectSyncronizer or something similar.
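For what it's worth, the filtered-take pattern itself is language-agnostic; a minimal Python sketch with threading.Condition (class and method names are illustrative, not a translation of the C# above):

```python
import threading
import time

class FilterQueue:
    """Waiters block until some item satisfies their predicate; items stay unless removed."""

    def __init__(self):
        self._items = []
        self._cond = threading.Condition()

    def add(self, item):
        with self._cond:
            self._items.append(item)
            self._cond.notify_all()    # every waiter re-checks its own predicate

    def take(self, pred, timeout=None, remove=False):
        with self._cond:
            deadline = None if timeout is None else time.monotonic() + timeout
            while True:
                for i, item in enumerate(self._items):
                    if pred(item):
                        if remove:
                            del self._items[i]
                        return item
                remaining = None if deadline is None else deadline - time.monotonic()
                if remaining is not None and remaining <= 0:
                    return None        # timed out
                self._cond.wait(remaining)
```

The key invariant is the same as in the C# version: the predicate scan happens under the lock, and notify_all (the analogue of Monitor.PulseAll) is required because any waiter might match the newly added item.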
{ "domain": "codereview.stackexchange", "id": 22073, "tags": "c#, thread-safety, concurrency, collections" }
Controlling Stepper or DC motor with L298 or L293 and getting operation similar to Servo for 6 dof arm
Question: I will be using a 1:4 gear ratio and thus will require a motor with continuous 360° motion and high torque for efficient functioning. Servo motors with continuous rotation and high torque are very expensive and rare, so my choice has boiled down to a DC motor or a stepper motor. However, I have not completely understood the way to configure them for a robotic arm. I don't want to use rotary encoders, as installing a potentiometer or Hall effect sensor on the shaft will be cumbersome and inaccurate due to vibration. Is there a way one could move a DC motor or stepper motor to a particular angle with ICs/modules like the L293D or other motor drivers? Answer: You can't control the shaft angle of a DC motor without any feedback from it. Servos made for RC models or robots do just that - they use a potentiometer or rotary encoder to measure the shaft angle and control it, mostly with a PID regulator. You can use a stepper motor for your task, as these motors are made for this angular manner of control. For that, a dedicated stepper motor driver like the A4988 or DRV8825 will be better than the L293D (I don't know how large your robotic arm will be, so these are just example drivers). If you choose a DC motor over the stepper motor, mind that servos' mechanisms have a much higher gear ratio than 1:4 in order to gain torque from a small DC motor. EDIT: There is also the problem of holding the position of the stepper motor, common in CNC machines, 3D printers etc. Stepper motors can move (almost) freely when there is no holding current applied. Therefore, when a machine is turned on, every motor has to find its origin. This is usually done by placing a switch at the minimum or maximum position of the joint. The process of finding the origin of the machine is called homing. Check out here how it is done by a 3D printer - you can see a movement of the axis towards the switch.
When the switch is hit, it is recognized by the software as reaching the minimum of the axis, thus zeroing the joint's position. Moreover, there is also a way of sensorless homing by measuring stepper motor current draw. For example, TMC2208 (and also TMC2130 and TMC2100) stepper motor drivers are capable of doing that. When the joint mechanically reaches its constraint, the driver knows it by measuring that the motor is drawing more current than normal. It also sets the diag1 pin high (if it is properly configured), allowing external electronics to detect the machine's home position. These drivers are also very quiet and give smoother motor movement in comparison with the previously mentioned ones. You can check the difference in sound here.
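Once homed, open-loop angular positioning with a stepper is just arithmetic; a sketch with hypothetical numbers (200 full steps per revolution, 1/16 microstepping as on an A4988, and the question's 1:4 gear ratio):

```python
STEPS_PER_REV = 200     # typical 1.8-degree stepper, hypothetical choice
MICROSTEPS = 16         # e.g. an A4988 in 1/16 mode
GEAR_RATIO = 4          # motor turns 4x per joint turn (the 1:4 from the question)

def steps_for_angle(joint_degrees):
    """Microsteps to move the joint by the given angle, counted from the homed origin."""
    steps_per_joint_rev = STEPS_PER_REV * MICROSTEPS * GEAR_RATIO
    return round(steps_per_joint_rev * joint_degrees / 360.0)

print(steps_for_angle(90))    # 3200
```

That works out to 360/12800 ≈ 0.028° of joint motion per microstep - but always relative to the homed origin, which is exactly why the homing switch (or sensorless homing) matters.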
{ "domain": "robotics.stackexchange", "id": 1735, "tags": "robotic-arm, motor, stepper-motor, stepper-driver" }
a problem after "rosrun openni_camera openni_node"
Question: After I run the command "rosrun openni_camera openni_node", it shows as below: /opt/ros/electric/stacks/openni_kinect/openni_camera/bin/openni_node: relocation error: /opt/ros/electric/stacks/perception_pcl/pcl/lib/libpcl_visualization.so.1.1: symbol __cxa_pure_virtual, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference Is there anyone who can tell me why? Originally posted by sven_007 on ROS Answers with karma: 49 on 2012-10-30 Post score: 1 Answer: Try this: sudo apt-get install libmysqlclient16 If the command above didn't work, make sure to check that the update repositories are enabled by checking /etc/apt/sources.list, you may need this. Originally posted by Po-Jen Lai with karma: 1371 on 2012-10-30 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by sven_007 on 2012-10-30: it worked. Thanks! Comment by Po-Jen Lai on 2012-10-30: Welcome :)
{ "domain": "robotics.stackexchange", "id": 11566, "tags": "ros, openni-node" }
Monty Hall Simulation
Question: I wrote this code as my first self-directed effort and would appreciate any input on things that I've done seriously wrong. I worry that perhaps I'm over-relying on if-statements, for example, but am not sure how to change the pieces of code to use them less. import random def playchosengame(chosengame): if chosengame == 1: doorchoice = int(raw_input("How many doors would you like to play with? > ")) onegameatatime(doorchoice) elif chosengame == 2: numberofgames = int(raw_input("How many games would you like to play? > ")) strategy = int(raw_input("Would you like to always change your answer (1) or keep it the same (2)? > ")) doorchoice = int(raw_input("How many doors would you like to play with? > ")) onestrategy(numberofgames,strategy,doorchoice) else: print "What did you say?" def onestrategy(chosennumberofgames,chosenstrategy,doorchoice): # for playing x number of Monty Hall games with one type of strategy wincount = 0 i = 1 for i in range(1,chosennumberofgames + 1): possibleanswers = range(1,doorchoice + 1) correctanswer = random.randint(1,doorchoice) incorrectpossibilities = possibleanswers incorrectpossibilities.remove(correctanswer) youranswer = random.randint(1,doorchoice) if youranswer == correctanswer: otherremaining = random.choice(incorrectpossibilities) else: otherremaining = youranswer incorrectpossibilities.remove(otherremaining) print "The correct answer is NOT one of these: %r" % incorrectpossibilities if youranswer == correctanswer: print "Which means it is either your answer: %r, or the only other remaining option, %r." % (correctanswer, otherremaining) if chosenstrategy == 1: finalanswer = otherremaining else: finalanswer = correctanswer else: print "Which means it is either your answer: %r, or the only other remaining option, %r." 
% (youranswer, correctanswer) if chosenstrategy == 1: finalanswer = correctanswer else: finalanswer = youranswer print "You chose %r" % finalanswer if finalanswer == correctanswer: wincount += 1 print "You win!" else: print "You lose!" i += 1 print "You won %r out of %r games. Congrats!" % (wincount, i - 1) def onegameatatime(doorchoice): # for playing one Monty Hall game at a time. playagain = 1 while playagain == 1: possibleanswers = range(1,doorchoice + 1) correctanswer = random.randint(1,doorchoice) incorrectpossibilities = possibleanswers incorrectpossibilities.remove(correctanswer) youranswer = int(raw_input("Pick a door number from 1 - %r > " % doorchoice)) if youranswer == correctanswer: otherremaining = random.choice(incorrectpossibilities) else: otherremaining = youranswer incorrectpossibilities.remove(otherremaining) print "The correct answer is NOT one of these: %r" % incorrectpossibilities if youranswer == correctanswer: print "Which means it is either your answer: %r, or the only other remaining option, %r." % (correctanswer, otherremaining) else: print "Which means it is either your answer: %r, or the only other remaining option, %r." % (youranswer, correctanswer) finalanswer = int(raw_input("Which do you choose? > ")) if finalanswer == correctanswer: print "You win!" else: print "You lose!" playagain = int(raw_input("Play again? (1 = yes, 0 = no) > ")) print "Thanks for playing!" gamechoice = int(raw_input("Play one game at a time (1) or play your chosen number of games with one strategy (2)? > ")) playchosengame(gamechoice) Answer: Welcome to Code Review and coding in general: Code according to PEP8 – In your case this means using snake_case for variable and function names, add spaces after commas and between operators. Your code does look rather nice in general, some minor nitpicks here and there, but read through the guidelines and try to adhere to them. 
Feature: incorrect_possibilities = possible_answers invalidates possible_answers – When you later on do incorrect_possiblities.remove(correct_answer) this also changes possible_answers. To get a copy you need either to use a better variant of copy, or simply incorrect_possibilities = possible_answers[:]. The latter one uses slicing to denote the entire array, and makes a copy of the sliced area. Use ternary related to chosen_strategy – Based on the chosen strategy you choose either of two choices, this can be coded as final_answer = correct_answer if chosen_strategy == 1 else your_answer, and similar in the other cases. Use ternary related to print output also – This can also be used in printed output as well: print("Which means it is either your answer: {}, or the only other remaining option, {}.".format( correct_answer if your_answer == correct_answer else your_answer, other_remaining if your_answer == correct_answer else correct_answer)) Use the newer print('You chose {}'.format(final_answer)) syntax – This is the preferred version for new code in Python 2 and for code in Python 3. See Format specification mini language for examples and specification. Use docstrings, """ ... """, and not comments, # ..., when describing modules, functions, classes and methods. – If you use docstrings they can be used by your IDE or documentation tools, and it is according to the Pythonic way. :-) Avoid one-time temporary variables – The playagain can be dropped if you use while True: to keep the game going, and change the end to something like if int(raw_input("Play again (0 = no)?")) == 0: break. (Preferably with a newline before the break, which does break out of the while loop) Possibly add a loop or validation on the main loop – When choosing between playing one game at a time or playing a chosen number of games, you can enter something wrong and the script terminates. This could possibly be enhanced by adding a while loop around it.
Do also note that if you enter non-numeric text your code breaks very easily... Look into making a general int input function which uses try ... except around the input function. Introducing the if __name__ == '__main__': idiom – The previous code is often used to make the code reusable as a module, and to have as little code as possible at the top level. Combined with a main() function your code can easily be reused and extended. In your case you could use (possibly extended with the while loop): def main(): game_choice = int(raw_input("Play one game at a time (1) or play your chosen number of games with one strategy (2)? > ")) play_chosen_game(game_choice) if __name__ == '__main__': main() Consider shortening the text length – Your texts are rather wordy, and could benefit from being shortened a little. Conclusion Your code does look good for a beginner, and naming is good (with the exception of not using underscores in between words). You could benefit from learning to use the ternary operator (a if b else c), and a slight justification on using functions related to top level code and text lengths.
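The "general int input function" suggested above might look like this (Python 3 shown; under Python 2 swap input for raw_input and adjust the prints - the name read_int is made up):

```python
def read_int(prompt, valid=None):
    """Prompt until the user enters an integer, optionally restricted to `valid` values."""
    while True:
        try:
            value = int(input(prompt))
        except ValueError:
            print("Please enter a whole number.")
            continue
        if valid is None or value in valid:
            return value
        print("Please choose one of: {}".format(sorted(valid)))
```

Then e.g. game_choice = read_int("... (1) or ... (2)? > ", valid={1, 2}) can never crash the script on bad input.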
{ "domain": "codereview.stackexchange", "id": 17467, "tags": "python, beginner, game, python-2.x" }
ppl_detection package and training data
Question: Do you know if you have to train the ppl_detection package, or whether it should work at its best out of the box? Because I get very poor results. Almost every object is human. Thank you. Originally posted by Grega Pusnik on ROS Answers with karma: 460 on 2012-04-09 Post score: 1 Answer: It's labelling everything as a person for me as well (see the image). But I have a solution: don't use ppl_detection, use openni_tracker with the latest openni and nite libraries instead. It can automatically find people, is orders of magnitude more accurate than ppl_detection and tracks skeleton data too. This is a good tutorial on how to set it up. http://imageshack.us/a/img593/3188/humans.jpg Originally posted by James Diprose with karma: 123 on 2012-09-24 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 8919, "tags": "ros, people" }
Is this an accurate explanation of a covariance matrix?
Question: My intent is to provide a simpler description, instead of a mathematical proof, and a practical example of a covariance matrix, especially as they are used in ROS. Did I hit the mark with: Covariance matrices with a practical example Originally posted by billmania on ROS Answers with karma: 266 on 2012-08-17 Post score: 18 Original comments Comment by K7 on 2016-05-26: Thanks! Helped my understanding of it. (4 years later it's still helping people) Comment by psammut on 2017-10-31: I found it helpful 5 years later :) Comment by aarontan on 2018-07-10: nice read, 6 years later Comment by indraneel on 2019-06-06: Still a great read 7 years later! Comment by xinwf on 2019-07-25: Nice, very comprehensive Comment by denim on 2020-03-02: Helpful, 8 years later! Comment by berkayaskar on 2022-12-07: Helpful 10 years later! Comment by billmania on 2022-12-12: On occasion, one manages to understand something. Sometimes, others find it useful. I'm pleased to see this explanation has survived for ten years. I didn't expect that to happen way back in 2012 . Answer: Great read, very informative. Thanks! Originally posted by Huibuh with karma: 399 on 2014-06-02 This answer was ACCEPTED on the original site Post score: 8
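As a tiny companion to the linked write-up: a covariance matrix can be computed directly from simulated sensor readings (all numbers below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake x/y position estimates from a sensor: y is noisier than x and correlated with it.
x = rng.normal(0.0, 0.1, 10_000)
y = 0.5 * x + rng.normal(0.0, 0.3, 10_000)

C = np.cov(np.vstack([x, y]))
# Diagonal entries are the per-axis variances; off-diagonal entries measure how the
# two errors vary together -- which is what a ROS covariance field encodes.
```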
{ "domain": "robotics.stackexchange", "id": 10660, "tags": "ros" }
jQuery and CSS3 slideshow
Question: I have built a slideshow using jQuery and CSS3. jQuery is used to trigger class changes (.slide1, .slide2 ...) on the slide container (#slideshow) and CSS3 is used to handle the transition animations between slides according to these classes. My main concern is about the jQuery script. I feel it is very verbose as it is handling each case separately. How could I make it more efficient, shorter and maintainable? Here is the relevant code: DEMO $(document).ready(function() { var slideC = 1, slideR = 2, slideL = 4; $('#next').click(function(){ $('#slideshow').removeClass('slide' + slideC).addClass('slide' + slideR); slideC = slideR; if (slideC == 1) {slideL = 4;slideR = 2;} if (slideC == 2) {slideL = 1;slideR = 3;} if (slideC == 3) {slideL = 2;slideR = 4;} if (slideC == 4) {slideL = 3;slideR = 1;} console.log('slide =' + slideC); console.log('slideR =' + slideR); console.log('slideL =' + slideL); }); $('#prev').click(function(){ $('#slideshow').removeClass('slide' + slideC).addClass('slide' + slideL); slideC = slideL; if (slideC == 1) {slideL = 4;slideR = 2;} if (slideC == 2) {slideL = 1;slideR = 3;} if (slideC == 3) {slideL = 2;slideR = 4;} if (slideC == 4) {slideL = 3;slideR = 1;} console.log('slideC =' + slideC); console.log('slideR =' + slideR); console.log('slideL =' + slideL); }); $('#controls .bSlide1').click(function(){$('#slideshow').removeClass('slide' + slideC).addClass('slide1'); slideC = 1; slideL = 4;slideR = 2;}); $('#controls .bSlide2').click(function(){$('#slideshow').removeClass('slide' + slideC).addClass('slide2'); slideC = 2; slideL = 1;slideR = 3;}); $('#controls .bSlide3').click(function(){$('#slideshow').removeClass('slide' + slideC).addClass('slide3'); slideC = 3; slideL = 2;slideR = 4;}); $('#controls .bSlide4').click(function(){$('#slideshow').removeClass('slide' + slideC).addClass('slide4'); slideC = 4; slideL = 3;slideR = 1;}); }); body{background:grey;} #slideshow_wrap { position:relative; overflow:hidden; } #slideshow {
width:400%; } #slideshow .slide { width:25%; float:left; } /** slideshow content **/ #slideshow .client_img { position:relative; width:25%; padding-bottom:25%; float:left; margin-left:12.5%; } #slideshow .client_img img { position:absolute; width:90%; height:auto; left:5%; top:0; } #slideshow h3, #slideshow p { margin-left:50%; width:40%; } #slideshow .c_txt { padding:5% 0; } /** slideshow controls **/ #slideshow_wrap button { background:none; border:none; outline:none; padding:0; margin:0; cursor:pointer; } #slideshow_wrap #prev, #slideshow_wrap #next { position:absolute; top:50%; width:27px; height:27px; } #slideshow_wrap #prev { left:0; border-top:4px solid #fff; border-left:4px solid #fff; -webkit-transform-origin:0 0; -ms-transform-origin:0 0; transform-origin:0 0; -webkit-transform:rotate(-45deg); -ms-transform:rotate(-45deg); transform:rotate(-45deg); } #slideshow_wrap #next { right:0; border-top:4px solid #fff; border-right:4px solid #fff; -webkit-transform-origin:100% 0; -ms-transform-origin:100% 0; transform-origin:100% 0; -webkit-transform:rotate(45deg); -ms-transform:rotate(45deg); transform:rotate(45deg); } #controls { text-align:center; margin-top:5%; } #controls button { width:15px; height:15px; border-radius:50%; background-color: rgba(255, 255, 255, 0.4); display:inline-block; margin:0 2px; -webkit-transition: background-color .2s ease-in-out; transition: background-color .2s ease-in-out; } /** slideshow animation **/ #slideshow { -webkit-transform: translateX(0); -ms-transform: translateX(0); transform: translateX(0); -webkit-transition: transform .5s ease-in-out; transition: transform .5s ease-in-out; } #slideshow.slide2 { -webkit-transform: translateX(-25%); -ms-transform: translateX(-25%); transform: translateX(-25%); } #slideshow.slide3 { -webkit-transform: translateX(-50%); -ms-transform: translateX(-50%); transform: translateX(-50%); } #slideshow.slide4 { -webkit-transform: translateX(-75%); -ms-transform: translateX(-75%); transform: 
translateX(-75%); } #slideshow.slide1 ~ #controls .bSlide1 { background-color:#fff; } #slideshow.slide2 ~ #controls .bSlide2 { background-color:#fff; } #slideshow.slide3 ~ #controls .bSlide3 { background-color:#fff; } #slideshow.slide4 ~ #controls .bSlide4 { background-color:#fff; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> <div id="slideshow_wrap"> <div id="slideshow" class="slide1"> <div id="slide1" class="slide"> <div class="client_img"> <img src="http://lorempixel.com/output/people-q-g-300-280-1.jpg" alt="" /> </div> <h3>Lorem ipsum dolor</h3> <p class="c_txt">Lorem ipsum dolor sit amet, consectetur adipiscing elit. In pretium est eu enim hendrerit, nec cursus libero bibendum. Mauris elit turpis, ultricies sed odio sollicitudin, varius rhoncus erat. Morbi cursus feugiat arcu at efficitur. In nec vulputate erat. Integer laoreet neque nec neque imperdiet, quis luctus dolor consequat. Curabitur condimentum posuere purus, eget blandit est facilisis id. Donec pharetra tincidunt felis, in vehicula lacus imperdiet id. </p> </div> <div id="slide2" class="slide"> <div class="client_img"> <img src="http://lorempixel.com/output/people-q-g-300-280-6.jpg" alt="" /> </div> <h3>Lorem ipsum dolor</h3> <p class="c_txt">Lorem ipsum dolor sit amet, consectetur adipiscing elit. In pretium est eu enim hendrerit, nec cursus libero bibendum. Mauris elit turpis, ultricies sed odio sollicitudin, varius rhoncus erat. Morbi cursus feugiat arcu at efficitur. In nec vulputate erat. Integer laoreet neque nec neque imperdiet, quis luctus dolor consequat. Curabitur condimentum posuere purus, eget blandit est facilisis id. Donec pharetra tincidunt felis, in vehicula lacus imperdiet id. </p> </div> <div id="slide3" class="slide"> <div class="client_img"> <img src="http://lorempixel.com/output/people-q-g-300-280-9.jpg" alt="" /> </div> <h3>Lorem ipsum dolor</h3> <p class="c_txt">Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
In pretium est eu enim hendrerit, nec cursus libero bibendum. Mauris elit turpis, ultricies sed odio sollicitudin, varius rhoncus erat. Morbi cursus feugiat arcu at efficitur. In nec vulputate erat. Integer laoreet neque nec neque imperdiet, quis luctus dolor consequat. Curabitur condimentum posuere purus, eget blandit est facilisis id. Donec pharetra tincidunt felis, in vehicula lacus imperdiet id. </p> </div> <div id="slide4" class="slide"> <div class="client_img"> <img src="http://lorempixel.com/output/people-q-g-300-280-8.jpg" alt="" /> </div> <h3>Lorem ipsum dolor</h3> <p class="c_txt">Lorem ipsum dolor sit amet, consectetur adipiscing elit. In pretium est eu enim hendrerit, nec cursus libero bibendum. Mauris elit turpis, ultricies sed odio sollicitudin, varius rhoncus erat. Morbi cursus feugiat arcu at efficitur. In nec vulputate erat. Integer laoreet neque nec neque imperdiet, quis luctus dolor consequat. Curabitur condimentum posuere purus, eget blandit est facilisis id. Donec pharetra tincidunt felis, in vehicula lacus imperdiet id. 
</p> </div> </div> <button id="prev"></button> <button id="next"></button> <div id="controls"> <button class="bSlide1"></button> <button class="bSlide2"></button> <button class="bSlide3"></button> <button class="bSlide4"></button> </div> </div> Answer: $(document).ready(function() { var slideC = 1, slideR = 2, slideL = 4; var $Slideshow = $('#slideshow'); $('#next').click(function(){ $Slideshow.removeClass('slide' + slideC).addClass('slide' + slideR); slideC = slideR; switch(slideC){ case 1 : slideL = 4;slideR = 2; break; case 2 : slideL = 1;slideR = 3; break; case 3 : slideL = 2;slideR = 4; break; case 4 : slideL = 3;slideR = 1; } console.log('slide =' + slideC); console.log('slideR =' + slideR); console.log('slideL =' + slideL); }); $('#prev').click(function(){ $Slideshow.removeClass('slide' + slideC).addClass('slide' + slideL); slideC = slideL; switch(slideC){ case 1 : slideL = 4;slideR = 2; break; case 2 : slideL = 1;slideR = 3; break; case 3 : slideL = 2;slideR = 4; break; case 4 : slideL = 3;slideR = 1; } console.log('slideC =' + slideC); console.log('slideR =' + slideR); console.log('slideL =' + slideL); }); $('#controls .bSlide1').click(function(){$Slideshow.removeClass('slide' + slideC).addClass('slide1'); slideC = 1; slideL = 4;slideR = 2;}); $('#controls .bSlide2').click(function(){$Slideshow.removeClass('slide' + slideC).addClass('slide2'); slideC = 2; slideL = 1;slideR = 3;}); $('#controls .bSlide3').click(function(){$Slideshow.removeClass('slide' + slideC).addClass('slide3'); slideC = 3; slideL = 2;slideR = 4;}); $('#controls .bSlide4').click(function(){$Slideshow.removeClass('slide' + slideC).addClass('slide4'); slideC = 4; slideL = 3;slideR = 1;}); }); body{background:grey;} #slideshow_wrap { position:relative; overflow:hidden; } #slideshow { width:400%; } #slideshow .slide { width:25%; float:left; } /** slideshow content **/ #slideshow .client_img { position:relative; width:25%; padding-bottom:25%; float:left; margin-left:12.5%; }
#slideshow .client_img img { position:absolute; width:90%; height:auto; left:5%; top:0; } #slideshow h3, #slideshow p { margin-left:50%; width:40%; } #slideshow .c_txt { padding:5% 0; } /** slideshow controls **/ #slideshow_wrap button { background:none; border:none; outline:none; padding:0; margin:0; cursor:pointer; } #slideshow_wrap #prev, #slideshow_wrap #next { position:absolute; top:50%; width:27px; height:27px; } #slideshow_wrap #prev { left:0; border-top:4px solid #fff; border-left:4px solid #fff; -webkit-transform-origin:0 0; -ms-transform-origin:0 0; transform-origin:0 0; -webkit-transform:rotate(-45deg); -ms-transform:rotate(-45deg); transform:rotate(-45deg); } #slideshow_wrap #next { right:0; border-top:4px solid #fff; border-right:4px solid #fff; -webkit-transform-origin:100% 0; -ms-transform-origin:100% 0; transform-origin:100% 0; -webkit-transform:rotate(45deg); -ms-transform:rotate(45deg); transform:rotate(45deg); } #controls { text-align:center; margin-top:5%; } #controls button { width:15px; height:15px; border-radius:50%; background-color: rgba(255, 255, 255, 0.4); display:inline-block; margin:0 2px; -webkit-transition: background-color .2s ease-in-out; transition: background-color .2s ease-in-out; } /** slideshow animation **/ #slideshow { -webkit-transform: translateX(0); -ms-transform: translateX(0); transform: translateX(0); -webkit-transition: transform .5s ease-in-out; transition: transform .5s ease-in-out; } #slideshow.slide2 { -webkit-transform: translateX(-25%); -ms-transform: translateX(-25%); transform: translateX(-25%); } #slideshow.slide3 { -webkit-transform: translateX(-50%); -ms-transform: translateX(-50%); transform: translateX(-50%); } #slideshow.slide4 { -webkit-transform: translateX(-75%); -ms-transform: translateX(-75%); transform: translateX(-75%); } #slideshow.slide1 ~ #controls .bSlide1 { background-color:#fff; } #slideshow.slide2 ~ #controls .bSlide2 { background-color:#fff; } #slideshow.slide3 ~ #controls .bSlide3 { 
background-color:#fff; } #slideshow.slide4 ~ #controls .bSlide4 { background-color:#fff; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> <div id="slideshow_wrap"> <div id="slideshow" class="slide1"> <div id="slide1" class="slide"> <div class="client_img"> <img src="http://lorempixel.com/output/people-q-g-300-280-1.jpg" alt="" /> </div> <h3>Lorem ipsum dolor</h3> <p class="c_txt">Lorem ipsum dolor sit amet, consectetur adipiscing elit. In pretium est eu enim hendrerit, nec cursus libero bibendum. Mauris elit turpis, ultricies sed odio sollicitudin, varius rhoncus erat. Morbi cursus feugiat arcu at efficitur. In nec vulputate erat. Integer laoreet neque nec neque imperdiet, quis luctus dolor consequat. Curabitur condimentum posuere purus, eget blandit est facilisis id. Donec pharetra tincidunt felis, in vehicula lacus imperdiet id. </p> </div> <div id="slide2" class="slide"> <div class="client_img"> <img src="http://lorempixel.com/output/people-q-g-300-280-6.jpg" alt="" /> </div> <h3>Lorem ipsum dolor</h3> <p class="c_txt">Lorem ipsum dolor sit amet, consectetur adipiscing elit. In pretium est eu enim hendrerit, nec cursus libero bibendum. Mauris elit turpis, ultricies sed odio sollicitudin, varius rhoncus erat. Morbi cursus feugiat arcu at efficitur. In nec vulputate erat. Integer laoreet neque nec neque imperdiet, quis luctus dolor consequat. Curabitur condimentum posuere purus, eget blandit est facilisis id. Donec pharetra tincidunt felis, in vehicula lacus imperdiet id. </p> </div> <div id="slide3" class="slide"> <div class="client_img"> <img src="http://lorempixel.com/output/people-q-g-300-280-9.jpg" alt="" /> </div> <h3>Lorem ipsum dolor</h3> <p class="c_txt">Lorem ipsum dolor sit amet, consectetur adipiscing elit. In pretium est eu enim hendrerit, nec cursus libero bibendum. Mauris elit turpis, ultricies sed odio sollicitudin, varius rhoncus erat. Morbi cursus feugiat arcu at efficitur. In nec vulputate erat. 
Integer laoreet neque nec neque imperdiet, quis luctus dolor consequat. Curabitur condimentum posuere purus, eget blandit est facilisis id. Donec pharetra tincidunt felis, in vehicula lacus imperdiet id. </p> </div> <div id="slide4" class="slide"> <div class="client_img"> <img src="http://lorempixel.com/output/people-q-g-300-280-8.jpg" alt="" /> </div> <h3>Lorem ipsum dolor</h3> <p class="c_txt">Lorem ipsum dolor sit amet, consectetur adipiscing elit. In pretium est eu enim hendrerit, nec cursus libero bibendum. Mauris elit turpis, ultricies sed odio sollicitudin, varius rhoncus erat. Morbi cursus feugiat arcu at efficitur. In nec vulputate erat. Integer laoreet neque nec neque imperdiet, quis luctus dolor consequat. Curabitur condimentum posuere purus, eget blandit est facilisis id. Donec pharetra tincidunt felis, in vehicula lacus imperdiet id. </p> </div> </div> <button id="prev"></button> <button id="next"></button> <div id="controls"> <button class="bSlide1"></button> <button class="bSlide2"></button> <button class="bSlide3"></button> <button class="bSlide4"></button> </div> </div> Changes: #slideshow is cached in a variable, so jQuery does not have to search the DOM for it multiple times. This will improve performance. A switch stops checking conditions as soon as one case matches, whereas the chain of if statements always evaluates every condition. This will also improve performance.
{ "domain": "codereview.stackexchange", "id": 9760, "tags": "javascript, jquery, html, css, css3" }
What audio effects, filters, distortions, etc. create a "vinyl" effect?
Question: I want to make any song sound like it's being played on a gramophone record in the 1920s. What specific filters would I apply to make this happen? I'm looking for technical details, not some magical program that does it for me. I'm trying to create a programming interface that does this, so a user interface doesn't work. I'm a Math major with an understanding of FFTs, but I don't know all the lingo. Answer: I don't know the precise formula for a 1920s gramophone, but this is the general process for audio antiquing and should get you started. Depending on the exact settings, you can get vinyl, telephone, etc. Convert to mono. Add white noise. Bandpass. I would start with a HP at 80 Hz and a steep LP at 8 kHz, but that's a guess. You'll have to use your ears until it sounds like the gramophone. You should try Chebyshev and Butterworth filters if available. You may also want to add "clicks" and "pops". You can model those in a variety of ways. One way to start is with a Dirac impulse and filter it with bandpass filters. Depending on how strict your bandpass in step three is, you might be able to just add the clicks and pops as Diracs at the same time you add white noise, but then all your clicks and pops will sound about the same, which isn't very realistic. Some other things you may want to try for more realism: applying compression and saturation after step 1. applying wow and flutter emulation either before step 2 or after (or, for most accuracy, two stages of white noise before and after the wow and flutter emulation). You may want to ask on https://video.stackexchange.com/ to see exactly what filter settings they would recommend for a 1920s gramophone in particular. For tips on implementing basic audio filters, I have a blog post: http://blog.bjornroche.com/2012/08/basic-audio-eqs.html
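To make the steps above concrete, here is a minimal pure-Python sketch (my code, not the answerer's). The 80 Hz / 8 kHz cutoffs follow the guesses in the answer, and RBJ-cookbook biquads stand in for the suggested Butterworth/Chebyshev filters; all other parameter values are illustrative assumptions:

```python
import math
import random

def biquad_coeffs(kind, f0, fs, q=0.707):
    """RBJ-cookbook low-pass / high-pass coefficients, normalized so a0 == 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    if kind == "lowpass":
        b = [(1 - cw) / 2, 1 - cw, (1 - cw) / 2]
    else:  # "highpass"
        b = [(1 + cw) / 2, -(1 + cw), (1 + cw) / 2]
    a0 = 1 + alpha
    return [bi / a0 for bi in b], [1.0, -2 * cw / a0, (1 - alpha) / a0]

def biquad(x, b, a):
    """Direct-form-I filtering of the sample list x."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x1, x2, y1, y2 = xn, x1, yn, y1
        y.append(yn)
    return y

def antique(x, fs, noise=0.02, click_rate=0.0005, click_gain=0.6, seed=1):
    """Mono input -> add hiss and Dirac-like clicks, then band-limit."""
    rng = random.Random(seed)
    x = [s + rng.gauss(0.0, noise)
         + (click_gain * rng.choice([-1, 1]) if rng.random() < click_rate else 0.0)
         for s in x]
    x = biquad(x, *biquad_coeffs("highpass", 80.0, fs))   # HP at 80 Hz
    x = biquad(x, *biquad_coeffs("lowpass", 8000.0, fs))  # LP at 8 kHz
    return x
```

Because the clicks are added before the bandpass, they get colored by it, as the answer suggests. A fuller implementation would add the compression, saturation and wow/flutter stages mentioned above.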
{ "domain": "dsp.stackexchange", "id": 662, "tags": "audio" }
Potential operator for a particle in space
Question: Considering a particle in 3D, the corresponding Hilbert space $H$ is the tensor product of individual Hilbert spaces, $H = H_x \otimes H_y \otimes H_z$. If the particle is in a potential $V(x,y,z)$, what is the corresponding potential operator for it? Answer: If $V(x,y,z)$ can be expanded as a power series in $x,y,z$, then the potential operator $\hat V$ is obtained by replacing $x,y,z$ in the series by the corresponding operators $\hat x \otimes 1 \otimes 1, 1\otimes \hat y \otimes 1$ and $1\otimes 1 \otimes \hat z$.
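As an illustrative example (chosen here, not part of the original answer), for the 3D harmonic oscillator potential the prescription gives

```latex
V(x,y,z) = \tfrac{1}{2}m\omega^{2}\left(x^{2}+y^{2}+z^{2}\right)
\;\longmapsto\;
\hat V = \tfrac{1}{2}m\omega^{2}\left(
  \hat x^{2}\otimes 1\otimes 1
  + 1\otimes \hat y^{2}\otimes 1
  + 1\otimes 1\otimes \hat z^{2}\right),
```

and a cross term such as $xy$ would become $\hat x \otimes \hat y \otimes 1$.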
{ "domain": "physics.stackexchange", "id": 87023, "tags": "quantum-mechanics, operators, potential" }
How does the existence of gravitons help explain why two different objects fall at the same rate?
Question: Actually I want to see how gravitons help to explain why a feather and a bowling ball would fall at the same rate towards the ground, assuming no air resistance. I would imagine the bowling ball emits more gravitons than a feather, since it is much more massive, but what about the attractive force of the Earth acting on both the feather and the bowling ball? Answer: A graviton is a hypothetical elementary particle used in the effective quantized gravity in cosmological models. Actually I want to see how gravitons help to explain why a feather and a bowling ball would fall at the same rate towards the ground, assuming no air resistance. This was an observation that led to Newtonian gravity, and Newtonian mechanics is valid at macroscopic dimensions, whether gravity is quantized or not (i.e., whether the existence of gravitons is established). The quantum gravitational theory should be able to show mathematically how the classical theory emerges from it, similar to how classical electromagnetic theory is emergent from the underlying quantum electrodynamics. It is not simple, and it needs knowledge of quantum field theory.
{ "domain": "physics.stackexchange", "id": 85657, "tags": "acceleration, quantum-gravity, equivalence-principle, carrier-particles" }
Gale Shapley - Everyone ends up with second choice of partner
Question: So I was asked to prove or disprove that, in the Gale-Shapley algorithm, everyone can end up married to their second choice of partner. I think not, but I am not able to come up with a proof. Thanks! :) Answer: Let us number the men $1,2,3,4$ and the women $a,b,c,d$ (the men propose to the women), with the following preferences: $$ \begin{align*} &1\colon b \succ c \succ d \succ a & a\colon 1 \succ 2 \succ 3 \succ 4 \\ &2\colon b \succ a \succ d \succ c & b\colon 3 \succ 4 \succ 1 \succ 2 \\ &3\colon a \succ d \succ c \succ b & c\colon 2 \succ 1 \succ 4 \succ 3 \\ &4\colon a \succ b \succ c \succ d & d\colon 4 \succ 3 \succ 2 \succ 1 \end{align*} $$ Let us trace the Gale-Shapley algorithm: Round 1M: $1,2$ propose to $b$; $3,4$ propose to $a$. Round 1W: $b$ accepts $1$; $a$ accepts $3$. Round 2M: $2$ proposes to $a$; $4$ proposes to $b$. Round 2W: $a$ accepts $2$, rejecting $3$; $b$ accepts $4$, rejecting $1$. Round 3M: $3$ proposes to $d$; $1$ proposes to $c$. Round 3W: $d$ accepts $3$; $c$ accepts $1$. In the end, everyone is married to their second choice.
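The trace above can be checked mechanically. Here is a small deferred-acceptance implementation (my sketch, not part of the original answer) run on exactly these preferences:

```python
def gale_shapley(men_prefs, women_prefs):
    """Men propose; each woman holds on to her best proposer so far."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}  # index of the next woman each man tries
    fiance = {}                              # woman -> currently engaged man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in fiance:
            fiance[w] = m                    # w was free and accepts
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])           # w trades up; her old fiance is free again
            fiance[w] = m
        else:
            free.append(m)                   # w rejects m; he will try his next choice
    return {m: w for w, m in fiance.items()}

men = {1: "bcda", 2: "badc", 3: "adcb", 4: "abcd"}
women = {"a": [1, 2, 3, 4], "b": [3, 4, 1, 2], "c": [2, 1, 4, 3], "d": [4, 3, 2, 1]}
match = gale_shapley(men, women)             # {1: 'c', 2: 'a', 3: 'd', 4: 'b'}
```

Running it reproduces the matching 1-c, 2-a, 3-d, 4-b, in which every man and every woman is paired with their second choice.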
{ "domain": "cs.stackexchange", "id": 9856, "tags": "algorithms, data-structures" }
Is there an example of an FFT where the y axis wouldn't be power?
Question: My question is the same as in the title. Is there an example of a signal which, after an FFT, wouldn't have power on its y axis? Answer: Is there an example of a signal which, after an FFT, wouldn't have power on its y axis Pretty much all of them. For the discrete case the Fourier Transform has the same units as the time-domain quantity. For the time-continuous case it adds $1/Hz$ since it's a spectral density. So if your time-domain signal is a voltage, the units of the Fourier Transform would also be $V$ for the discrete case (which includes the FFT) and $V/Hz$ for the continuous case. You can use the FFT to calculate the power spectral density, but it typically takes a little massaging for the units to come out to be watts. In the case of a voltage signal you can square it, but to be precise you would have to label it as "power over a 1 Ohm real resistor" or something like this. This holds for all field quantities: voltage, current, pressure, displacement, force, etc.
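A small numerical illustration (mine, not the answerer's): take a pure 2 V sine and compute its DFT. With the usual 2/N normalization the peak bin reads directly in volts, not watts:

```python
import cmath
import math

def dft(x):
    """Plain O(n^2) DFT; the FFT computes exactly the same numbers."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

n = 64
amp_volts = 2.0                              # a 2 V sine sitting exactly on bin 5
x = [amp_volts * math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
spectrum = dft(x)
# |X[k]| carries the same units as x (volts); the N/2 peak factor is dimensionless.
peak_volts = 2.0 / n * abs(spectrum[5])      # recovers the 2.0 V amplitude
```

Only after squaring (and choosing a reference resistance, e.g. 1 Ohm) would the numbers become power.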
{ "domain": "dsp.stackexchange", "id": 10264, "tags": "fft" }
Remove the flaring edge of Moon/Earth in SkyX
Question: Using this pull request to gazebo_models and the suggestions there, we're now holding a moonlight earthlight party using Gazebo, by simply replacing SkyX_Moon.png with an earth image included in this PR. I think the flaring rim surrounding the earth was suitable for the moon seen from the earth, but not the other way round. Where should I edit SkyX to remove it? Thank you. UPDATE: Thanks to @iche033's answer, we now enjoy even more serene moment. $ diff /usr/share/gazebo-2.2/media/skyx/SkyX_Moon.fragment.org /usr/share/gazebo-2.2/media/skyx/SkyX_Moon.fragment 60c60 < haloIntensity = pow(haloIntensity, uMoonPhase.z); --- > haloIntensity = pow(0.0, uMoonPhase.z); Catch for me was that this .fragment file seems to not work right with some comment-out formats; I was trying to use # and it just didn't show the earth at all. Originally posted by IsaacS on Gazebo Answers with karma: 118 on 2015-04-09 Post score: 1 Answer: try setting the haloIntensity to 0.0 in SkyX_Moon.fragment Originally posted by iche033 with karma: 1018 on 2015-04-09 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 3745, "tags": "gazebo" }
Biomechanics of cells (stress, strain, tension..)
Question: I am confused about the difference between stress, strain, tension, pre-strain and prestress in cells (especially in in-vitro experiments, like cell spreading on a substrate, cell doublets, cell rearrangements in epithelial tissues). How are these 5 physical terms different or related? Is there a reference or a book that can explain them well? I usually get lost when an article uses terms like "they transmit tension" or "they impose stress". Answer: Strain and stress are two essential quantities in elasticity theory, corresponding to the deformation and the forces appearing in response to this deformation. Tension or tensile stress is a particular kind of stress appearing in elongated objects, such as ropes, filaments, etc. One could recommend any of the existing books on elasticity theory, but these may turn out to be a bit too "hardcore" in terms of the level of math and the background physics knowledge required. I therefore suggest reading more adapted to biologists (although not necessarily easy): Physical Biology of the Cell; Mathematical Biology I: An Introduction; or similar books on mathematical biology and biophysics.
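As a rough orientation (standard continuum-mechanics definitions, my summary rather than the answer's):

```latex
\varepsilon = \frac{\Delta L}{L_{0}} \quad \text{(strain: dimensionless relative deformation)}, \qquad
\sigma = \frac{F}{A} \quad \text{(stress: force per unit area, in Pa)}, \qquad
\sigma = E\,\varepsilon \quad \text{(linear elasticity)}.
```

Tension is the tensile (pulling) stress transmitted along an elongated element such as a filament, and pre-strain/prestress denote the strain/stress already present in the resting state, before any external load is applied.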
{ "domain": "biology.stackexchange", "id": 11719, "tags": "cell-biology, biophysics, bio-mechanics, stress" }
Winding a wire heating element after a few uses
Question: I recently bought 10 meters of Kanthal A1 AWG18 wire, which is a type of heating element (FeCrAl alloy). I intend to make a coil out of it, but it's no use doing that right now: I need more of it for my project (an electric furnace), have ordered more of it online and am waiting a couple of weeks till it arrives. My question is this: if I use the heating element a few times in a different project (assume here that the wattage is the maximum possible), in a way that I won't twist or wind it in any way, will this affect the physical properties of the wire in ways that might make it harder for me to turn it into a coil later on, for my initial project? (I'll be winding the coil manually; it will be between 5 and 10 mm in diameter.) I ask that because heating elements tend to get more brittle with time, but I don't know how long it might take for such changes to happen. Answer: I have not used Kanthal, but I imagine it has similar properties to nichrome. Basically, when you heat the wire up and cool it quickly, it quenches the wire. To reverse this process (or avoid it altogether) you need to bring the metal to a specific temperature (without looking it up, red hot is plenty), hold it there for some amount of time, then slowly cool the wire by decreasing current to it over time. This is called annealing. The amount of time depends on the metal, but you can repeat the process again and again until you get the right amount of time. The annealing time may be something unrealistic for DIY operation, and may not give you exactly the same properties you started with. But knowing this relationship and cooling the wire down slower will help. Kanthal is rated for high temperature operation, but if you approach or exceed the rated temperature of the alloy, oxidation may occur which will progressively damage the material. In this case the damage would not be reversible.
I'm not a chemist or metallurgist, but heating the material in the presence of carbon and other materials can result in chemical reactions that will modify the alloy (usually deteriorating its favorable properties). The color of the wire during operation can give you an idea of its operating temperature; see the Wikipedia incandescence chart below.
{ "domain": "engineering.stackexchange", "id": 1768, "tags": "materials, thermodynamics, heat-transfer, heating-systems, thermal-conduction" }
National University of Singapore admission test
Question: Can someone help me with this awful question? Thanks. A ball is thrown from a point P on a cliff of height h meters above the seashore. It strikes the shore at a point Q, where PQ is inclined at an angle α below the horizontal. If the angle of projection is also α, show that the speed of projection is $\frac{\sqrt{gh}}{2\sin\alpha}\ \mathrm{m\,s^{-1}}$, where g is the acceleration due to gravity. Prove also that the ball strikes the shore at an angle $\arctan(3\tan\alpha)$ with the horizontal. Answer: All you have to do is figure out the time and everything falls into place. The horizontal component of the velocity is $v\cos\alpha$ and the horizontal distance is $h\cot\alpha$. Dividing them, you get the time as $t = \frac{h}{v\sin\alpha}$. Put this in the equation $-h = v\sin\alpha\, t - \frac{1}{2}gt^2$. This gives you the vertical component as $v\sin\alpha = \frac{\sqrt{gh}}{2}$, and this gives you the required answer. For the second part, use the equation $v^2 = u^2 + 2gh$ (applied to the vertical components) to get the final vertical velocity, which turns out to be thrice the initial vertical velocity. The horizontal component remains the same, and hence the final tangent of the angle is thrice the initial one, giving you the second answer.
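The algebra can be sanity-checked numerically (my sketch, with arbitrarily chosen values of α and h):

```python
import math

def landing(alpha, h, g=9.81):
    """Throw at speed sqrt(g*h)/(2*sin(alpha)), angle alpha above the horizontal,
    from height h; return (horizontal distance, impact angle below horizontal)."""
    v = math.sqrt(g * h) / (2 * math.sin(alpha))
    vx, vy = v * math.cos(alpha), v * math.sin(alpha)
    t = (vy + math.sqrt(vy**2 + 2 * g * h)) / g   # positive root of -h = vy*t - g*t^2/2
    vy_final = vy - g * t                          # negative: moving downward at impact
    return vx * t, math.atan2(-vy_final, vx)

alpha, h = math.radians(35.0), 12.0                # arbitrary test values
x_range, impact = landing(alpha, h)
```

The range comes out as $h\cot\alpha$ (so PQ is inclined at α below the horizontal) and the impact angle as $\arctan(3\tan\alpha)$, for any α and h.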
{ "domain": "physics.stackexchange", "id": 37670, "tags": "homework-and-exercises, gravity, acceleration, projectile" }
What is the density of compressed natural gas (CNG) used in vehicles?
Question: I would like to know the density of the compressed natural gas used for natural gas vehicles in order to complete the formula in this Mathematics Stack Exchange post. Answer: The density of CNG at a pressure of 200 bar (which is the pressure optimized for use in vehicles) is 435 $\frac{kg}{m^3}$. However, less commonly, it is 175 $\frac{kg}{m^3}$ in some vehicles in some countries. Note that it differs according to your home country. Source 1 and source 2.
{ "domain": "chemistry.stackexchange", "id": 2780, "tags": "gas-laws, hydrocarbons, density" }
Sorting "k-tonic" sequences
Question: I hope somebody knows a ref to this, so I do not have to read the literature... Consider a sequence of numbers $x_1, \ldots, x_n$. Think about the sequence as $n-1$ intervals $[x_1, x_2], [x_2, x_3], \ldots, [x_{n-1},x_n]$. Clearly, the original sequence is bitonic if any point on the real line stabs at most 2 intervals. We will refer to a sequence where a point stabs at most $k$ intervals as being $k$-idiotic. Visually, if you draw the graph of the sequence (i.e., connect the points $p_i =(i,x_i)$ in order), then the above corresponds to the condition that no horizontal line intersects the graph more than $k$ times. It is not too hard (but not too easy, either) to see that $k$-idiotic sequences can be sorted in $O( n \log k )$ time, which is clearly optimal. Question: This result should be known. Do you know any appropriate ref? Answer: Here's a Levcopoulos-Petersson sorting algorithm reference, but a different one somewhat older than the one in Andreas' answer: Levcopoulos, Christos; Petersson, Ola (1989), "Heapsort - Adapted for Presorted Files", WADS '89: Proceedings of the Workshop on Algorithms and Data Structures, Lecture Notes in Computer Science, 382, London, UK: Springer-Verlag, pp. 499–509, doi:10.1007/3-540-51542-9_41. There's a description of the algorithm in http://en.wikipedia.org/wiki/Cartesian_tree#Application_in_sorting from which the O(n log k) bound is easy to see. More precisely the time for the algorithm is $O(\sum\log k_i)$ where $k_i$ is the number of intervals containing input item $x_i$. In a $k$-idiotic sequence, each $k_i$ is uniformly bounded by $k$ so the total time is just $O(n\log k)$.
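For reference, here is a compact Python sketch (mine, not from the question or answer) of the Cartesian-tree sorting scheme described in the linked Wikipedia section: build the tree with a stack in O(n), then repeatedly pop the minimum of a priority queue seeded with the root, pushing the popped node's children:

```python
import heapq

def cartesian_tree_sort(seq):
    """Build a min-rooted Cartesian tree in O(n) with a stack, then emit the
    elements via a priority queue seeded at the root.  The queue stays small
    when few intervals contain any given value, giving the O(sum log k_i) bound."""
    if not seq:
        return []
    n = len(seq)
    left, right = [-1] * n, [-1] * n
    stack = []                                   # indices on the right spine
    for i in range(n):
        last = -1
        while stack and seq[stack[-1]] > seq[i]:
            last = stack.pop()
        left[i] = last                           # last popped node hangs under i
        if stack:
            right[stack[-1]] = i
        stack.append(i)
    root = stack[0]
    out, pq = [], [(seq[root], root)]
    while pq:
        v, i = heapq.heappop(pq)                 # smallest remaining element
        out.append(v)
        for c in (left[i], right[i]):            # its children become candidates
            if c != -1:
                heapq.heappush(pq, (seq[c], c))
    return out
```

On a $k$-idiotic input the queue never grows much beyond the number of intervals stabbed at the current value, which is where the $O(n \log k)$ behavior comes from.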
{ "domain": "cstheory.stackexchange", "id": 3783, "tags": "ds.algorithms, reference-request, sorting" }
Unable to connect Autoware.auto and LGSVL with ROS2 native
Question: Hello everyone, I am trying to run LGSVL alongside autoware.auto but I am totally unable to make anything work. I am following these tutorials: https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/lgsvl.html In the tutorial it says to download the latest assets and set up the bridge as ROS2 native. But when starting the simulation with ROS2 native the simulator crashes with this error message in the command line: Aborted (core dumped) And this error message in the web UI: Invalid: type_support is null, at /tmp/binarydeb/ros-dashing-rmw-cyclonedds-cpp-0.5.1/src/rmw_node.cpp:1311, at /tmp/binarydeb/ros-dashing-rcl-0.7.9/src/rcl/publisher.c:171 Any idea? Thanks! Originally posted by Mackou on ROS Answers with karma: 196 on 2020-10-12 Post score: 0
{ "domain": "robotics.stackexchange", "id": 35622, "tags": "ros" }
How do you calculate the work of a Carnot Heat Engine using entropy?
Question: The problem given is as follows: A Carnot cycle has a heat engine fluid efficiency of 30%. The heat transfer to the fluid in the boiler Qh happens at 270 centigrade. At 270 centigrade, the entropy of saturated vapour and liquid water are 6.001KJ/(kgK) and 3.067KJ/(kgK), respectively. Determine the net work produced by the Carnot engine. I am pretty sure to solve it you need to use the Carnot efficiency equation so 0.30 = W / Qh but I am not too sure how to get Qh from the information provided. Please explain or put me on the right track if you can! Thank you in advance! Answer: Assuming reversible heat addition process: $$q = \int_{s_1}^{s_2} T \,ds = T \int_{s_1}^{s_2}ds=T\Delta s $$ $$ w = \eta_{carnot} \,q$$
{ "domain": "engineering.stackexchange", "id": 1107, "tags": "thermodynamics" }
Is it possible to implement a Neural Network using a graph data structure?
Question: I'm trying to implement a feedforward neural network using a graph. The thing is: I haven't found any example in which a graph data structure is used. So far the examples I've found used arrays. Can anyone please point me in the direction of some literature on the topic or some tutorial? Answer: Many implementations you can find on the web are done on matrices (MATLAB for instance) since they provide a compact notation. Haykin's textbook on neural networks takes this approach. Matrices also provide a simple translation to hardware design (FPGA, ASIC, etc.). They are also more often implemented on the FPU. If you implement a neural network in an object-oriented manner, you are effectively doing what your question asks: implementing a neural network on a graph. Your neurons are then objects that have relations with each other. There are a few books that take that approach. One I can think of is an undergrad-level book by Renard called Réseaux de neurones (sorry, only in French).
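As a sketch of the object-oriented/graph approach the answer describes (my code; the 2-2-1 topology and all weights are arbitrary illustrative choices):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class Neuron:
    """A graph node; incoming edges are (source neuron, weight) pairs."""
    def __init__(self, bias=0.0):
        self.in_edges, self.bias, self.value = [], bias, 0.0

    def connect(self, source, weight):
        self.in_edges.append((source, weight))

    def fire(self):
        z = self.bias + sum(src.value * w for src, w in self.in_edges)
        self.value = sigmoid(z)
        return self.value

# A tiny 2-2-1 feedforward network built edge by edge
x1, x2 = Neuron(), Neuron()
h1, h2, out = Neuron(), Neuron(), Neuron()
h1.connect(x1, 0.1); h1.connect(x2, 0.2)
h2.connect(x1, 0.3); h2.connect(x2, -0.1)
out.connect(h1, 0.5); out.connect(h2, 0.5)

x1.value, x2.value = 1.0, 0.5       # input neurons just hold the raw inputs
for neuron in (h1, h2, out):        # fire in topological order
    neuron.fire()
```

The same forward pass could be written as two matrix products; the graph form trades compactness for the flexibility of arbitrary topologies.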
{ "domain": "cs.stackexchange", "id": 13434, "tags": "graphs, artificial-intelligence, neural-networks" }
What is a non-classical carbocation?
Question: What is a non-classical carbocation? How is it different from a classical carbocation? I am confused as I have come across this term many times on Chem.SE but there seems to be nothing for my level of understanding on the Internet! Answer: Here is a picture of a "classical" carbocation: there is an electron-deficient carbon bearing a positive charge. There are many examples of "non-classical" carbocations, but the 2-norbornyl carbocation is among the best known. Labeling experiments have shown that the positive charge resides on more than one carbon in the 2-norbornyl ion. Early on, the data was explained by equilibrating classical ions, but soon another possibility emerged - one involving a single non-classical ion. The problem comes down to: are the equilibrating classical ions ground state structures with the non-classical ion serving as the transition state, or is the non-classical ion the ground state? This debate went on for a very long period of time, but now most agree that the non-classical structure is the ground state in the 2-norbornyl system. In fact, a recent, and difficult to obtain, crystal structure for the 2-norbornyl cation has been published proving that the ion exists with the non-classical geometry (thanks to Klaus for finding this reference, see his comment below).
Here is a link to a simpler non-classical carbocation discussed here yesterday.
{ "domain": "chemistry.stackexchange", "id": 3493, "tags": "organic-chemistry, bond, molecular-orbital-theory, carbocation" }
Turing machine accepts any word
Question: Let M be a Turing machine with tape alphabet = {0, 1} that does not move beyond the first 64 cells of its tape. Is the problem "Does M accept any word?" decidable? I would say it does not accept any word because we can have words longer than 64 characters. I would say it does because if it accepts any word, we don't care about the length: it always goes into an accepting state. Can someone please explain this to me? Answer: Let $P = \{ w \in \{ 0,1 \}^\ast \mid |w| \le 64 \land w \in L(M) \}$. Since $P$ contains only words over an alphabet (i.e., a finite set) and all words in $P$ have bounded length, $P$ is finite. Moreover, any word $w \in \{0,1\}^\ast$ with length $|w| > 64$ is in $L(M)$ if and only if there is a prefix of $w$ which is in $P$. Thus, to decide $L(M)$, we only need to check whether the input has a prefix out of finitely many possibilities (i.e., those in $P$). Hence, $L(M)$ is not only decidable, it is decidable in constant time. Note this construction does not require any knowledge of $M$ whatsoever (in particular, there is no need to simulate $M$). We only need to show the existence of a TM which decides $L(M)$, not actually construct one.
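The decider's structure can be sketched in Python (my sketch; the set P below is a made-up example, not derived from any actual machine M):

```python
def make_decider(p):
    """Given the finite set P of accepted words of length <= 64,
    return a constant-time decider for L(M) as described in the answer."""
    p = set(p)
    def decide(w):
        if len(w) <= 64:
            return w in p
        # long words: accepted iff some prefix of length <= 64 is in P
        return any(w[:i] in p for i in range(65))
    return decide

accepts = make_decider({"01", "110"})    # hypothetical P for some machine
```

The point of the answer is exactly that such a decider exists for the right choice of P, even if we never compute P from M.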
{ "domain": "cs.stackexchange", "id": 13602, "tags": "turing-machines" }
Is acetone saturated or not?
Question: This was a question in a chemistry exam: How many moles of hydrogen are required to convert 1 mole of this compound into a saturated compound? $\ce{CH3COCH3}$ (acetone) Some teachers said that it's saturated so the answer is zero. Others said that one mole is required to break the bond between $\ce{C}$ and $\ce{O}$ and convert it into an alcohol. So which answer is correct, and please mention a reference if available. Answer: Typically, the terms saturated and unsaturated only apply to the carbon chain in itself. A saturated compound would be one whose systematic name ends in -ane while unsaturated ones would end in -ene or -yne — grossly simplifying. Going by this rather strict definition, at first sight acetone (propanone) is saturated. But. You may have heard of keto-enol tautomerism. That is, a ketone is always in equilibrium with its corresponding enol form; for acetone that would be propen-2-ol — an unsaturated compound. It is shown in the scheme below. The first indication of the distinction being difficult. Furthermore, acetone can be reduced by molecular hydrogen to isopropanol. That means that given the correct conditions, it can basically react much like an unsaturated compound would. It also reacts with elemental bromine similarly to how an unsaturated hydrocarbon would, although only the first mechanistic step is identical; only one carbon is brominated in the product. There is also the measure of double bond equivalents, which is often used in the process of structure elucidation. Taking the molecular formula of a compound (which can be deduced well from high-resolution mass spectrometry) and performing a simple calculation allows one to arrive at a number of double bond equivalents for a compound. 
The formula is: $$\tag{1} \text{DBE} = \frac{2 \ce{C} - \ce{H} + \ce{N} + 2}{2}$$ where $\ce{C}$ is the number of carbon atoms (including other tetravalent atoms such as silicon), $\ce{H}$ is the number of hydrogen atoms (including other monovalent atoms such as chlorine) and $\ce{N}$ is the number of nitrogen atoms (including other trivalent atoms such as phosphorus). However, having 1 double bond equivalent or more does not automatically mean that a molecule is unsaturated, no matter by which definition. Take cyclohexane, $\ce{C6H12}$: It has one double bond equivalent which is caused by the fact that it is cyclic. It will not react with hydrogen or with bromine and no sane chemist would call it unsaturated. So this measure is not of much help. Concluding, I can only say that both sides of the discussion are wrong. It is wrong that acetone will not react with hydrogen to give isopropanol. It is equally wrong that that makes acetone an unsaturated compound. The concept of unsaturation simply does not extend well to $\ce{C=O}$ double bonds. The exam question should not be graded.
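As a quick arithmetic check of the DBE calculation (a throwaway helper, not from any library; the standard form of the formula adds 2 in the numerator, which reproduces the cyclohexane example above):

```python
def double_bond_equivalents(c, h, n=0):
    """DBE = (2*C - H + N + 2) / 2, with monovalent atoms (halogens)
    counted as H, trivalent atoms as N, and divalent atoms such as
    oxygen ignored."""
    return (2 * c - h + n + 2) / 2

print(double_bond_equivalents(3, 6))   # acetone C3H6O -> 1.0 (the C=O)
print(double_bond_equivalents(6, 12))  # cyclohexane C6H12 -> 1.0 (the ring)
```

Note that oxygen does not enter the formula at all, which is one reason the DBE count cannot distinguish a $\ce{C=O}$ double bond from a ring.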
{ "domain": "chemistry.stackexchange", "id": 5870, "tags": "organic-chemistry, carbonyl-compounds, terminology" }
Is there a linear-time algorithm for randomly sampling weighted combinations?
Question: For concreteness, here's the specific problem description: suppose we have a set $S$ of $n$ items $a_1, a_2, \ldots, a_n$ with weights $w_1, w_2, \ldots, w_n$ respectively. The goal is to select a subset of size $k$ such that the probability of selecting the subset $a_{i_1}, a_{i_2}, \ldots, a_{i_k}$ is exactly $\dfrac{w_{i_1}\cdot w_{i_2}\cdots\cdot w_{i_k}}{\sum_{S_k\subseteq S, |S_k|=k}(\prod_{i\in S_k}w_i)}$. If $k=1$ then this is the usual selection from a weighted distribution and can easily be done in time $O(n)$ (or even $O(\log n)$ per sample with $O(n)$ preprocessing). If the $w_i$ are all equal then this is the usual uniform selection of a combination of $k$ things from a set of size $n$, and there are several well-established algorithms to do this in time $O(n)$. But I don't know of any algorithm for the generalized problem that runs faster than the naive version in $\Theta_n({n\choose k})=\Theta_n(n^k)$ time, and even a dip into TAOCP didn't turn up anything. Are there any known fast algorithms for this problem? Answer: There is a very simple $O(n \log k)$ algorithm described in Weighted random sampling with a reservoir by Pavlos S. Efraimidis and Paul G. Spirakis, which can be summarized as: Associate value $r_i^{1/w_i}$ to $a_i$, where $r_i$ is an independent random uniform value on $[0, 1]$. Sort the elements by their associated values, the $k$ largest values indicate the desired subset. With a priority queue of size $k$ you don't need to do a full sort and can get away with $O(n \log k)$. To get some mathematical intuition of why this works, you can look at the $k = 2$ case. Let the cumulative density function $F_w(x) = \Pr[r^{1/w} \leq x] = \Pr[r \leq x^w] = x^w$, which gives the probability density function $f_w(x) = wx^{w-1}$. Then we have $$\Pr[r_1^{1/w_1} \leq r_2^{1/w_2}] = \int_{0}^1 F_{w_1}(x)f_{w_2}(x)dx = \int_{0}^1w_2x^{w_1+w_2-1}dx = \frac{w_2}{w_1 + w_2}.$$ For the full method I recommend reading the paper.
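The summarized method (algorithm A-Res from the cited paper) is short enough to sketch, using a size-$k$ min-heap for the $O(n \log k)$ bound; function and variable names here are illustrative:

```python
import heapq
import random

def weighted_sample(items, weights, k):
    """Weighted reservoir sampling (Efraimidis & Spirakis, A-Res):
    give each item the key r**(1/w) with r uniform on [0, 1), and
    keep the k items with the largest keys."""
    heap = []  # min-heap of (key, item); the root is the smallest kept key
    for item, w in zip(items, weights):
        key = random.random() ** (1.0 / w)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]
```

The min-heap holds the $k$ best keys seen so far, so each of the $n$ items costs at most one $O(\log k)$ heap operation.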
{ "domain": "cs.stackexchange", "id": 21939, "tags": "time-complexity, combinatorics, randomness" }
A hollow planet
Question: Consider a planet with the same mass and diameter as Earth, except that it is 1) perfectly spherical. 2) a hollow shell 1 meter thick, so the planet is almost like a massive ping-pong ball. Naturally it would be made of some super-dense material such that it had the same mass as Earth without most of its volume. If you stood on the surface of this planet, the gravity should be equivalent to Earth's, but if you were to drill a hole in the surface and climb through, would gravity: a) pull you toward the planet's center of mass, which is in the empty center? b) draw you toward the interior surface, and thus outward from the planet's center, such that you could walk along the inside of the planet? c) somewhere in between, with little or no gravitational tendency? I am posting this in Physics but I am not sure if World building is more appropriate. Answer: The gravitational force you would feel inside the hollow planet is zero. Let's prove it. Let's call the interior of the planet $P$ (it is an open ball in $\mathbb{R}^3$). 1) Gravitational force is conservative, which means that $F = - \nabla \Phi$, where $F$ is the gravitational force, and $\Phi$ is a smooth function (the potential). 2) By Gauss' Theorem for gravity, $\operatorname{div} F = -4 \pi G \rho$, where $\rho$ is the mass density at a point. Since, in $P$, $\rho = 0$, $\operatorname{div} F = 0$ in $P$. 3) By spherical symmetry, any sphere $S \subset P$ having the same center as $P$ is equipotential (otherwise you'd have a force field which is not spherically symmetric). 4) Therefore, $\Phi$ satisfies the equation $\operatorname{div}\nabla \Phi = \Delta \Phi = 0$, and $\Phi = \text{const}$ on the sphere $S$. This means that $\Phi = \text{const}$ inside $S$, which implies $F = 0$.
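The result can also be checked numerically, without the potential-theory machinery, by brute-force integration of the Newtonian attraction over a thin shell. This is a quick sketch with units chosen so that $G\sigma m = 1$ ($\sigma$ the surface density) and an arbitrary grid resolution:

```python
import math

def axial_force(d, R=1.0, n=400):
    """Net force (z-component) on a test mass on the z-axis at distance d
    from the centre of a thin spherical shell of radius R, by midpoint-rule
    integration over the shell surface. The x and y components vanish by
    symmetry, so only the z-component is accumulated."""
    dtheta, dphi = math.pi / n, 2 * math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta
        for j in range(n):
            phi = (j + 0.5) * dphi
            # position of the surface element
            x = R * math.sin(theta) * math.cos(phi)
            y = R * math.sin(theta) * math.sin(phi)
            z = R * math.cos(theta)
            dA = R * R * math.sin(theta) * dtheta * dphi
            rx, ry, rz = x, y, z - d      # vector from test mass to element
            r = math.sqrt(rx * rx + ry * ry + rz * rz)
            total += dA * rz / r ** 3     # z-component of the attraction
    return total

print(axial_force(0.5))  # inside the shell: ~0
print(axial_force(2.0))  # outside: ~ -4*pi*R**2/d**2, pulled toward the centre
```

Outside the shell the sum reproduces the point-mass value $4\pi R^2/d^2$ directed at the centre; inside, the near-side and far-side contributions cancel to numerical precision.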
{ "domain": "physics.stackexchange", "id": 28124, "tags": "newtonian-gravity, planets" }
Help initializing EKF to a set position
Question: I had asked this question a while back, but received no response. Hopefully the revised subject line and info helps me come to a solution. I'm running the latest robot_localization package on Kinetic/Ubuntu 16. For testing certain functions of the vehicle, we would like to place the vehicle at a certain point in the map (which is already generated and published). To initialize the EKF to a location, I use the /set_pose rosservice call, which works IF odom0_differential=true. /set_pose does not work if odom0_differential=false. There is a tiny blip on the EKF output to the set location, but then the EKF starts at 0 again. There are two sensors currently providing Pose data: the odometry (/odom) from wheel encoders and IMU yaw (/imu/data). The EKF is set up as follows. Everything else is set to default. map_frame: map # Defaults to "map" if unspecified odom_frame: odom # Defaults to "odom" if unspecified base_link_frame: base_link # Defaults to "base_link" if unspecified world_frame: odom # Defaults to the value of odom_frame if unspecified odom0: /odom imu0: /imu/data odom0_config: [true, true, false, false, false, true, false, false, false, false, false, false, false, false, false] imu0_config: [false, false, false, false, false, true, false, false, false, false, false, true, false, false, false] odom0_differential: false imu0_differential: false odom0_relative: true imu0_relative: true rosservice call, initializing EKF to (1,1): rosservice call --wait /set_pose "pose: header: seq: 0 stamp: secs: 0 nsecs: 0 frame_id: odom pose: pose: position: {x: 1.0, y: 1.0, z: 0.0} orientation: {x: 0.0, y: 0.0, z: 0.0, w: 0.0} covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]" Edit (April 4th, 2017, 8:15AM): I've stripped my vehicle bag file to a new bag file called ekfTest.bag, with just odom+imu+tf data - Download. 
This bag contains /tf, /odom and /imu/data topics. Please note that I've turned off "publish_tf" in the EKF parameters as the bag file already has this /tf data. I've also attached my ekf parameter file (ekf_template.yaml) and launch file. 1. Start the robot_localization node with the parameter file. 2. Open a window to view EKF data: rostopic echo /odometry/filtered/pose/pose/position 3. Run the bag file: rosbag play ekfTest.bag 4. Set pose: rosservice call /set_pose "pose: header: seq: 0 stamp: now frame_id: odom pose: pose: position: {x: 1.0, y: 1.0, z: 0.0} orientation: {x: 0.0, y: 0.0, z: 0.0, w: 1.0} covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]" Note: with @TomMoore's recommendation, I've made w=1. I usually press tab after /set_pose and was not paying attention to orientation. Originally posted by bluehash on ROS Answers with karma: 120 on 2017-04-03 Post score: 1 Original comments Comment by Tom Moore on 2017-04-04: Please post sample input messages from all sensors. Also, you are passing in an invalid quaternion (set w to 1.0). Comment by bluehash on 2017-04-04: I've added data to the bottom of my pose. I've made w=1. I usually press tab after /set_pose and was not paying attention to orientation. The default is 0. This is a controlled test with sensor data (odom+imu) streaming into the EKF. I see the EKF latching to the /set_pose value and not changing. Comment by Tom Moore on 2017-05-03: Would you mind accepting the answer (circular checkbox)? Thanks! Answer: Ah, sorry, didn't look at your config closely enough. You are fusing X and Y in your odom0 config, which is pose data, so the first measurement you get after you call set_pose will snap it right back. In other words, this is happening: Time t0: filter gets measurement from odom0. X and Y position in the filter output will be very near the odom0 measurement. 
Time t1: you call set_pose with some specific pose. EKF pose jumps to that position. Time t2: filter gets measurement from odom0. X and Y position in the filter output will be very near the odom0 measurement. This is why differential mode works: it converts the odom0 pose to a velocity, which won't interfere with your set_pose call. All set_pose is doing is sending your robot to the location you send it. If you then give it measurements that tell it that it is somewhere else, it's going to jump back. Originally posted by Tom Moore with karma: 13689 on 2017-04-11 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by bluehash on 2017-05-02: Thanks Tom!
{ "domain": "robotics.stackexchange", "id": 27507, "tags": "ros, navigation, ekf, robot-localization, ekf-localization" }
Can nanotechnology build a virus?
Question: I heard that nanotechnology can arrange atoms the way one wishes. The atomic force microscope can control the arrangement. As the composition of viruses is known, could a virus be reproduced by nanoprinting? Answer: See e.g. https://journals.asm.org/doi/epub/10.1128/mSystems.00770-21 mSystems, July/August 2021 Volume 6 Issue 4 e00770-21 "The Future of Virology is Synthetic" "Synthetic virology is a subdiscipline of virology that applies molecular, computational, and synthetic biology principles from the fundamentals obtained from naturally occurring viruses to engineer viruses. The first virus assembled from synthetic oligonucleotides was poliovirus (1), followed by the phiX174 bacteriophage (i.e., phage) (2). Synthetic viruses are built upon a previously sequenced genome, and then oligonucleotides are ordered and assembled synthetically (e.g., Gibson) (2)." "Process of building synthetic viruses. First, viral nucleic acids must be extracted and then sequenced using massively parallel nucleic acid sequencing. After sequencing, computational pipelines assemble the viral genomes de novo. Once the viral genomes are assembled computationally, synthetic DNA and oligonucleotides (oligos) can be ordered. Next, the synthetic DNA and oligos can be assembled into full-length viral genomes using Gibson or Golden Gate assembly. Finally, the assembled viral genome can be converted into viral particles using in vitro transcription and translation into synthetic virions."
{ "domain": "physics.stackexchange", "id": 85829, "tags": "biophysics, nanoscience" }
A problem containing constrained motion of strings and a block
Question: Here is the question :- Two methods came to my mind while trying to solve it, which are: I assumed the velocity of M as v (upwards). Then, as the strings are inextensible, the cosine component of v should be equal to u; if that doesn't happen then the strings will stretch or slacken, which we don't want to happen. Hence, $$v\cos\theta = u$$ which gives us $v = u/\cos\theta$ as our answer. The point on M which is attached to both strings will have 2 velocities. Its net velocity can be given by $2u\cos\theta$ along the dotted normal, and as that point is on the block, the block will also move with the same velocity. This gives us $v = 2u\cos\theta$ as our answer. The 1st method gives the correct answer but the 2nd method does not, and I am looking for the reason behind it. Answer: Generally, when it comes to constraint relations for inextensible strings, the components of velocity, acceleration, etc. of a particle attached to the string are taken along the direction of the string, since its length cannot change. It is wrong to take the component of the velocity of the string along the direction of motion of the object. Furthermore, your equation would imply that at $\theta = 0$ the particle is moving at $2u$ while the string is moving at $u$, meaning the string is not taut, which is impossible.
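The first method's constraint can be verified numerically. The sketch below assumes the standard geometry for this problem (pulleys at $(\pm a, 0)$, block on the vertical axis at depth $y$, each string hauled in at speed $u$), which may differ in detail from the figure in the original question:

```python
import math

a, y, u, dt = 3.0, 4.0, 1.0, 1e-6  # pulley half-spacing, block depth, haul speed, time step

def length(depth):
    """Length of one string from a pulley at (a, 0) to the block at (0, -depth)."""
    return math.sqrt(a * a + depth * depth)

# After time dt each string is shorter by u*dt; solve for the new block depth.
y_new = math.sqrt((length(y) - u * dt) ** 2 - a * a)
v = (y - y_new) / dt           # upward block speed, by finite difference
cos_theta = y / length(y)      # angle between string and the block's motion

print(v * cos_theta)           # ~= u, confirming v = u / cos(theta)
print(2 * u * cos_theta)       # the second method's value, which does NOT match v
```

With these numbers ($\cos\theta = 0.8$), the finite-difference speed comes out as $v = u/\cos\theta = 1.25u$, matching the first method, while the second method's $2u\cos\theta = 1.6u$ does not.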
{ "domain": "physics.stackexchange", "id": 73289, "tags": "homework-and-exercises, newtonian-mechanics, string, constrained-dynamics" }
Python-based Git pre-commit hook to manage multiple users/Git identities
Question: A couple of months ago I posted a bash script to manage multiple Git identities as a solution on Stack Overflow but soon found the hook isn't flexible enough. Thus I decided to rewrite the hook in Python. Basically it does what it should but I'd really like to have someone review the code in order to eliminate bad practices, to remove unnecessary complexity and how it could be improved in regards to style, readability and performance. Another question is if you see any point where it would be better to introduce a class. For the rewrite I created a git repository at github.com: git-passport - A Git command and hook written in Python to manage multiple Git accounts / user identities. #!/usr/bin/env python3 # -*- coding: utf-8 -*- """ git-passport is a Git pre-commit hook written in Python to manage multiple Git user identities. """ # ..................................................................... Imports import configparser import os.path import subprocess import sys import textwrap import time import urllib.parse # ............................................................ Config functions def config_create(filename): """ Create a configuration file containing sample data inside the home directory if none exists yet. Args: filename (str): The complete `filepath` of the configuration file """ if not os.path.exists(filename): preset = configparser.ConfigParser() preset["General"] = {} preset["General"]["enable_hook"] = "True" preset["General"]["sleep_duration"] = "0.75" preset["Passport 0"] = {} preset["Passport 0"]["email"] = "email_0@example.com" preset["Passport 0"]["name"] = "name_0" preset["Passport 0"]["service"] = "github.com" preset["Passport 1"] = {} preset["Passport 1"]["email"] = "email_1@example.com" preset["Passport 1"]["name"] = "name_1" preset["Passport 1"]["service"] = "gitlab.com" try: msg = """ No configuration file found. Generating a sample configuration file. 
""" print(textwrap.dedent(msg).strip()) with open(filename, "w") as configfile: preset.write(configfile) sys.exit("\n~Done~") except Exception as error: print(error) raise sys.exit("\n~Quitting~") def config_read(filename): """ Read a provided configuration file and «import» allowed sections and their keys/values into a dictionary. Args: filename (str): The complete `filepath` of the configuration file Returns: config (dict): Contains all allowed configuration sections """ raw_config = configparser.ConfigParser() raw_config.read(filename) # Match an arbitrary number of sections starting with pattern pattern = "Passport" # A generator to filter matching sections: # Let's see if user defined config sections match a pattern def generate_matches(): for section in raw_config.items(): if pattern in section[0]: yield dict(section[1]) # Construct a custom dict containing allowed sections config = dict(raw_config.items("General")) config["git_local_ids"] = dict(enumerate(generate_matches())) return config def config_validate(config): """ Validate and convert certain keys and values of a given dictionary containing a set of configuration options. If unexpected values are found we quit the script and notify the user what went wrong. Since ``ConfigParser`` only accepts strings when setting up a default config it is necessary to convert some values to numbers and boolean. Args: config (dict): Contains all allowed configuration sections Returns: config (dict): Contains valid and converted configuration options """ for key, value in config.items(): if key == "enable_hook": if value == "True": config[key] = True elif value == "False": config[key] = False else: msg = "E > Settings > {}: Expecting True or False." raise sys.exit(msg).format(key) elif key == "sleep_duration": try: config[key] = float(value) except ValueError: msg = "E > Settings > {}: Expecting float or number." raise sys.exit(msg).format(key) # Here the values could really be anything... 
elif key == "git_local_ids": pass else: msg = "E > Settings > {}: Section/key unknown." raise sys.exit(msg).format(key) return config # ............................................................... Git functions def git_get_id(config, scope, property): """ Get the email address or username of the global or local Git ID. Args: config (dict): Contains validated configuration options scope (str): Search inside a `global` or `local` scope property (str): Type of `email` or `name` Returns: git_id (str): A name or email address error (str): Exception """ try: git_process = subprocess.Popen([ "git", "config", "--get", "--" + scope, "user." + property ], stdout=subprocess.PIPE) git_id = git_process.communicate()[0].decode("utf-8") return git_id.replace("\n", "") except Exception as error: raise error def git_get_url(): """ Get the local remote.origin.url of a Git repository. Returns: git_url (str): The local and active remote.origin.url error (str): Exception """ try: git_process = subprocess.Popen([ "git", "config", "--get", "--local", "remote.origin.url" ], stdout=subprocess.PIPE) git_url = git_process.communicate()[0].decode("utf-8") return git_url.replace("\n", "") except Exception as error: raise error def git_set_id(config, value, property): """ Set the email address or username as a local Git ID for a repository. Args: config (dict): Contains validated configuration options value (str): A name or email address property (str): Type of `email` or `name` Returns: error (str): Exception """ try: subprocess.Popen([ "git", "config", "--local", "user." + property, value ], stdout=subprocess.PIPE) except Exception as error: raise error # ............................................................ Helper functions def get_user_input(pool): """ Prompt a user to select a number from a list of numbers representing available Git IDs. Optionally the user can choose `q` to quit the selection process. 
Args: pool (list): A list of numbers representing available Git IDs Returns: selection (int): A number representing a Git ID chosen by a user """ while True: # https://stackoverflow.com/questions/7437261/how-is-it-possible-to-use-raw-input-in-a-python-git-hook sys.stdin = open("/dev/tty") selection = input("» Select a passport [ID] or «(q)uit»: ") try: selection = int(selection) except ValueError: if selection == "q" or selection == "quit": sys.exit("\n~Quitting~\n") continue if selection not in pool: continue break return selection def print_choice(choice): """ Before showing the actual prompt by calling `get_user_input()` print a list of available Git IDs containing properties ID, «scope», name, email and service. Args: choice (dict): Contains a list of preselected Git ID candidates """ for key, value in choice.items(): if value.get("flag") == "global": msg = """ ~:Global ID: {} . User: {} . E-Mail: {} """ print(textwrap.dedent(msg).lstrip().format( key, value["name"], value["email"]) ) else: msg = """ ~Passport ID: {} . User: {} . E-Mail: {} . Service: {} """ print(textwrap.dedent(msg).lstrip().format( key, value["name"], value["email"], value["service"]) ) def add_global_id(config, target): """ Adds the global Git ID to a dictionary containing potential preselected candidates. Args: config (dict): Contains validated configuration options target (dict): Contains preselected local Git IDs """ global_email = git_get_id(config, "global", "email") global_name = git_get_id(config, "global", "name") local_ids = config["git_local_ids"] if global_email and global_name: position = len(local_ids) target[position] = {} target[position]["email"] = global_email target[position]["name"] = global_name target[position]["flag"] = "global" # .............................................................. Implementation def identity_exists(config, email, name, url): """ Prints an existing ID of a local gitconfig. 
Args: config (dict): Contains validated configuration options email (str): An email address name (str): A name url (str): A remote.origin.url """ duration = config["sleep_duration"] if not url: url = "«remote.origin.url» is not set." msg = """ ~Intermission~ ~Active Passport: . User: {} . E-Mail: {} . Remote: {} """ print(textwrap.dedent(msg).lstrip().format(name, email, url)) sys.exit(time.sleep(duration)) def url_exists(config, url): """ If a local gitconfig contains a remote.origin.url add all user defined Git IDs matching remote.origin.url as a candidate. However if there is not a single match then add all available user defined Git IDs and the global Git ID as candidates. Args: config (dict): Contains validated configuration options url (str): A remote.origin.url Returns: candidates (dict): Contains preselected Git ID candidates """ local_ids = config["git_local_ids"] netloc = urllib.parse.urlparse(url)[1] # A generator to filter matching sections: # Let's see if user defined IDs match remote.origin.url def generate_candidates(): for key, value in local_ids.items(): if value.get("service") == netloc: yield (key, value) candidates = dict(generate_candidates()) if len(candidates) >= 1: msg = """ ~Intermission~ One or more identities match your current git provider. remote.origin.url: {} """ print(textwrap.dedent(msg).lstrip().format(url)) else: candidates = local_ids msg = """ ~Intermission~ Zero passports matching - listing all passports. remote.origin.url: {} """ print(textwrap.dedent(msg).lstrip().format(url)) add_global_id(config, candidates) print_choice(candidates) return candidates def no_url_exists(config, url): """ If a local gitconfig does not contain a remote.origin.url add all available user defined Git IDs and the global Git ID as candidates. 
Args: config (dict): Contains validated configuration options url (str): A remote.origin.url Returns: candidates (dict): Contains preselected Git ID candidates """ candidates = config["git_local_ids"] msg = """ ~Intermission~ «remote.origin.url» is not set, listing all IDs: """ add_global_id(config, candidates) print(textwrap.dedent(msg).lstrip()) print_choice(candidates) return candidates # ........................................................................ Glue def main(): config_file = os.path.expanduser("~/.git_passport") config_create(config_file) config = config_validate(config_read(config_file)) if config["enable_hook"]: local_email = git_get_id(config, "local", "email") local_name = git_get_id(config, "local", "name") local_url = git_get_url() if local_email and local_name: identity_exists(config, local_email, local_name, local_url) elif local_url: candidates = url_exists(config, local_url) else: candidates = no_url_exists(config, local_url) selected_id = get_user_input(candidates.keys()) git_set_id(config, candidates[selected_id]["email"], "email") git_set_id(config, candidates[selected_id]["name"], "name") print("\n~Done~\n") if __name__ == "__main__": main() EDIT: Since there has been posted an answer I'll leave the question as it is / freeze it in order to avoid any discrepancies. Development continues at the repository. Answer: Architecture / general thoughts Overall looks good. You have lots of mostly useful comments which help to understand the program, the code is clear if a little verbose, so to me this was easy to read. That said I'm going to list a few things which could improve the program and then continue with in detail code issues below. Hope that helps and good luck with the program, it looks like a useful tool e.g. if you work both on personal projects and e.g. for a company. Could you post a link to your repository? It would be great if you could package and release this. 
A quick search also mentions the need to use different Github user names and SSH keys, so there are definitely more use cases (and options) to consider in the future. I'd get rid of the config_validate function and do that in config_read instead, using a separate config_read_passport or so, which then uses ConfigParser.getboolean and the other get... functions with a very strict schema, so you make sure that everything coming out of the config object is properly parsed. That way you have to write less verification code (as the ConfigParser object takes care of that for you) and you can spend that on making sure that the individual passports have the correct format before handing them off to the rest of the application. I personally don't use origin as a remote very often. It would be great if that could be configurable. A script like this is probably fine without using classes. I can see how using it for the configuration would help structuring it better, but it's by no means necessary if this works for you. I kind of find the output with lots of irregular characters unexpected, but of course that's your choice. Code The docstrings say Returns: error (str): Exception, but the functions don't return exceptions, they raise them, so the docstring should say Throws or Raises instead and mention what kind of exception it uses. If you can't know that, e.g. in most of the git_* functions, leave it out, or refer to the specific function which might cause problems, i.e. subprocess.Popen. Just as example, in config_create you can remove one level of indentation if you return early, i.e. if not os.path.exists(filename): return;. This is a style choice, but the creation of the preset dictionary can be shorter if you'd just use the literal dictionary syntax, i.e. preset = {"General": {...}, "Passport 0": {...}, ...} instead of repeating the keys all the time. 
textwrap.dedent(foo).strip() is nice, I'm copying that; since you use it very often I think that separate functions, like dedented or so, are in order; something short and simple. Same for lstrip. sys.exit already exits the process, no raise necessary, unless I'm completely missing the idiom here. And then maybe don't catch the exception in the first place, just let it propagate. The result will be the same if you already print the exception. Also, I would use less sys.exits in general. It is helpful if you can just import the script for testing purposes and it's jarring if using e.g. config_create suddenly kills the interpreter. Same for proper testing later. generate_matches could very well be a regular function and accept the two arguments pattern and raw_config instead. I'd wrap getting values from git in a separate function, maybe git_config_get, which would (for now) still use the same subprocess.Popen method with passed in arguments and then does the communicate/decode handling. And reraising the exception isn't necessary. Whether you create a separate git_config_set (instead of using the same mechanism as git config) is kind of a trade-off. Performance-wise there's always the option to not call a separate program and use something like a libgit binding, e.g. pygit2 instead. get_user_input should reset sys.stdin to its previous value I think. Again, think of reusing this; same with the sys.exit. I'd structure the loop more like while True: read selection; if in pool: return selection. That's way less confusing than if not in pool: continue; (else) break; return. For performance, you can always use iteritems on dictionaries if you don't need the intermediate list. add_global_id does nothing if either global_email or global_name doesn't exist. Shouldn't it rather mention that problem to the user? sys.exit(time.sleep(..)) implies that the return value of time.sleep is somehow significant. time.sleep(); sys.exit() is clearer. 
I'd also consider waiting for a keypress (or newline) from the user before exiting instead of using a timeout.
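The `dedented` helper suggested in the review is tiny; a possible sketch, exercised on one of the question's own messages:

```python
import textwrap

def dedented(text):
    """Dedent a triple-quoted block and strip surrounding blank lines,
    replacing the repeated textwrap.dedent(...).strip() idiom."""
    return textwrap.dedent(text).strip()

msg = """
    No configuration file found.
    Generating a sample configuration file.
"""
print(dedented(msg))
```

A `dedented_left` variant calling `.lstrip()` would cover the other idiom used in the script.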
{ "domain": "codereview.stackexchange", "id": 11578, "tags": "python, performance, object-oriented, python-3.x, git" }
How do the state-of-the-art pathfinding algorithms for changing graphs (D*, D*-Lite, LPA*, etc) differ?
Question: A lot of pathfinding algorithms have been developed in recent years which can calculate the best path in response to graph changes much faster than A* - what are they, and how do they differ? Are they for different situations, or do some obsolete others? These are the ones I've been able to find so far: D* (1994) Focused D* (1995) DynamicSWSF-FP (1996) LPA (1997) LPA*/Incremental A* (2001) D* Lite (2002) SetA* (2002) HPA* (2004) Anytime D* (2005) PRA* (2005) Field D* (2007) Theta* (2007) HAA* (2008) GAA* (2008) LEARCH (2009) BDDD* (2009 - I cannot access this paper :|) Incremental Phi* (2009) GFRA* (2010) MTD*-Lite (2010) Tree-AA* (2011) I'm not sure which of these apply to my specific problem - I'll read them all if necessary, but it would save me a lot of time if someone could write up a summary. My specific problem: I have a grid with a start, a finish, and some walls. I'm currently using A* to find the best path from the start to the finish. The user will then move one wall, and I have to recalculate the entire path again. The "move-wall/recalculate-path" step happens many times in a row, so I'm looking for an algorithm that will be able to quickly recalculate the best path without having to run a full iteration of A*. Though, I am not necessarily looking for an alteration to A* - it could be a completely separate algorithm. Answer: So, I skimmed through the papers, and this is what I gleaned. If there is anyone more knowledgeable in the subject matter, please correct me if I'm wrong (or add your own answer, and I will accept it instead!). Links to each paper can be found in the question-post, above. Simple recalculations D* (aka Dynamic A*) (1994): On the initial run, D* runs very similarly to A*, finding the best path from start to finish very quickly. 
However, as the unit moves from start to finish, if the graph changes, D* is able to very quickly recalculate the best path from that unit's position to the finish, much faster than simply running A* from that unit's position again. D*, however, has a reputation for being extremely complex, and has been completely obsoleted by the much simpler D*-Lite. Focused D* (1995): An improvement to D* to make it faster/"more realtime." I can't find any comparisons to D*-Lite, but given that this is older and D*-Lite is talked about a lot more, I assume that D*-Lite is somehow better. DynamicSWSF-FP (1996): Stores the distance from every node to the finish-node. Has a large initial setup to calculate all the distances. After changes to the graph, it's able to update only the nodes whose distances have changed. Unrelated to both A* and D*. Useful when you want to find the distance from multiple nodes to the finish after each change; otherwise, LPA* or D*-Lite are typically more useful. LPA*/Incremental A* (2001): LPA* (Lifelong Planning A*), also known as Incremental A* (and sometimes, confusingly, as "LPA," though it has no relation to the other algorithm named LPA) is a combination of DynamicSWSF-FP and A*. On the first run, it is exactly the same as A*. After minor changes to the graph, however, subsequent searches from the same start/finish pair are able to use the information from previous runs to drastically reduce the number of nodes which need to be examined, compared to A*. This is exactly my problem, so it sounds like LPA* will be my best fit. LPA* differs from D* in that it always finds the best path from the same start to the same finish; it is not used when the start point is moving (such as units moving along the initial best path). However... D*-Lite (2002): This algorithm uses LPA* to mimic D*; that is, it uses LPA* to find the new best path for a unit as it moves along the initial best path and the graph changes. 
D*-Lite is considered much simpler than D*, and since it always runs at least as fast as D*, it has completely obsoleted D*. Thus, there is never any reason to use D*; use D*-Lite instead. Any-angle movement Field D* (2007): A variant of D*-Lite which does not constrain movement to a grid; that is, the best path can have the unit moving along any angle, not just 45- (or 90-)degrees between grid-points. Was used by NASA to pathfind for the Mars rovers. Theta* (2007): A variant of A* that gives better (shorter) paths than Field D*. However, because it is based on A* rather than D*-Lite, it does not have the fast-replanning capabilities that Field D* does. See also. Incremental Phi* (2009): The best of both worlds. A version of Theta* that is incremental (aka allows fast-replanning). Moving Target Points GAA* (2008): GAA* (Generalized Adaptive A*) is a variant of A* that handles moving target points. It's a generalization of an even earlier algorithm called "Moving Target Adaptive A*". GFRA* (2010): GFRA* (Generalized Fringe-Retrieving A*) appears (?) to be a generalization of GAA* to arbitrary graphs (ie. not restricted to 2D) using techniques from another algorithm called FRA*. MTD*-Lite (2010): MTD*-Lite (Moving Target D*-Lite) is "an extension of D* Lite that uses the principle behind Generalized Fringe-Retrieving A*" to do fast-replanning moving-target searches. Tree-AA* (2011): (???) Appears to be an algorithm for searching unknown terrain, but is based on Adaptive A*, like all other algorithms in this section, so I put it here. Not sure how it compares to the others in this section. Fast/Sub-optimal Anytime D* (2005): This is an "Anytime" variant of D*-Lite, done by combining D*-Lite with an algorithm called Anytime Repairing A*. An "Anytime" algorithm is one which can run under any time constraints - it will find a very suboptimal path very quickly to begin with, then improve upon that path the more time it is given. 
HPA* (2004): HPA* (Hierarchical Path-Finding A*) is for path-finding a large number of units on a large graph, such as in RTS (real-time strategy) video games. They will all have different starting locations, and potentially different ending locations. HPA* breaks the graph into a hierarchy in order to quickly find "near-optimal" paths for all these units much more quickly than running A* on each of them individually. See also PRA* (2005): From what I understand, PRA* (Partial Refinement A*) solves the same problem as HPA*, but in a different way. They both have "similar performance characteristics." HAA* (2008): HAA* (Hierarchical Annotated A*) is a generalization of HPA* that allows for restricted traversal of some units over some terrains (ex. a small pathway that some units can walk through but larger ones can't; or a hole that only flying units can cross; etc.) Other/Unknown LPA (1997): LPA (Loop-free path-finding algorithm) appears to be a routing-algorithm only marginally related to the problems the other algorithms here solve. I only mention it because this paper is confusingly (and incorrectly) referenced on several places on the Internet as the paper introducing LPA*, which it is not. LEARCH (2009): LEARCH is a combination of machine-learning algorithms, used to teach robots how to find near-optimal paths on their own. The authors suggest combining LEARCH with Field D* for better results. BDDD* (2009): ??? I cannot access the paper. SetA* (2002): ??? This is, apparently, a variant of A* that searches over the "binary decision diagram" (BDD) model of the graph? They claim that it runs "several orders of magnitude faster than A*" in some cases. However, if I'm understanding correctly, those cases are when each node on the graph has many edges? Given all this, it appears that LPA* is the best fit for my problem.
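For reference, the plain A* baseline that all of the above improve on can be sketched as below (an illustrative implementation, not taken from any of the cited papers; the grid representation, wall set, and function name are my own). Re-running this from scratch after every wall move is exactly the cost that LPA*/D*-Lite avoid by reusing the g-values from the previous search.

```python
import heapq

def astar(walls, start, goal, width, height):
    """Length of a shortest 4-connected path from start to goal, or None."""
    def h(p):  # Manhattan distance: an admissible heuristic on a unit-cost grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    g = {start: 0}                      # best known cost-so-far per cell
    frontier = [(h(start), 0, start)]   # (f = g + h, g, cell) min-heap
    while frontier:
        _, cost, cell = heapq.heappop(frontier)
        if cell == goal:
            return cost
        if cost > g.get(cell, float("inf")):
            continue                    # stale heap entry, skip it
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height) or nxt in walls:
                continue
            if cost + 1 < g.get(nxt, float("inf")):
                g[nxt] = cost + 1
                heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt))
    return None                         # goal unreachable
```

The incremental algorithms keep the g dictionary (and a priority structure over "locally inconsistent" cells) alive between searches, so a single moved wall only triggers work near the cells it actually affects.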
{ "domain": "cstheory.stackexchange", "id": 1944, "tags": "ds.algorithms, shortest-path" }
Cosmology - how to do the $V/V_{max}$ test?
Question: In Cosmology, we have the co-moving distance (assuming $\Omega_k=0$), $$D_C=\frac c{H_0}\int_0^z\frac{dz'}{\sqrt{\Omega_m(1+z')^3+\Omega_\Lambda}}$$ and we also have the total co-moving volume formula $$V=\frac{4\pi}3 D_C^3$$ Then we can use what is called the $\langle V/V_{max}\rangle$ test to test if a sample of objects has uniform co-moving density & luminosity that is constant in time. The number of objects per unit co-moving volume with luminosity in the range $(L,L+dL)$ is given by $\Phi(L)$. Then the total number of objects in the sample is $$\int_0^\infty\Phi(L)\int_0^{V_{max}(L)}\mathrm dV\,\mathrm dL$$ If we have a uniform distribution of objects, then the value of this expectation should be $1/2$, as described by this similar question asked on Student Room, which did not get any responses. This test is supposedly well known, but I can't find any questions about it here, nor can I find a simple article about the actual test, or this result. There are many articles online instead showing generalisations to this test which seem very abstract to me. Question: From these definitions, I don't know how to get a value for $\langle V/V_{max}\rangle$, nor do I know what the explicit formula is. What is the explicit formula for $\langle V/V_{max}\rangle$? Is it that integral? Clearly, $V$ depends only on the value of $z$, but I don't know what a uniform distribution of objects implies about the distribution of $z$. Can someone help me understand this a bit better? Answer: I am not so sure about the cosmological application, but the principle is straightforward. If you have an estimated distance $D$ to an object, then that defines a volume of $$ V =\frac{4\pi}{3} D^3$$ If your survey is capable of detecting such objects to a distance $D_{\rm max}$, then this defines a volume $V_{\rm max}$. So for each object you can calculate $V/V_{\rm max}$. 
If the source population is uniform in space (and hence in time for cosmological sources), then the average $\langle V/V_{\rm max}\rangle = 0.5$. In fact you can go further and say that $V/V_{\rm max}$ ought to be uniformly distributed between 0 and 1. This can be done as a function of source type, or luminosity or whatever.
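The uniformity claim is easy to check numerically. In the sketch below (my own illustration, using plain Euclidean volumes rather than cosmological ones), "uniform in space" means $D^3$ is uniform on $[0, D_{\rm max}^3]$, so each source's $V/V_{\rm max}$ is uniform on $[0,1]$ and the sample mean comes out close to 0.5:

```python
import random

def v_over_vmax_sample(n, d_max=1.0, seed=42):
    """Mean V/Vmax for n sources distributed uniformly in (Euclidean) volume."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Uniform in volume => D^3 uniform on [0, d_max^3],
        # so draw u ~ U(0,1) and set D = d_max * u**(1/3).
        d = d_max * rng.random() ** (1.0 / 3.0)
        total += (d / d_max) ** 3       # this source's V / Vmax
    return total / n
```

A sample of 100,000 sources gives a mean within about a percent of 0.5; a real sample that deviates significantly signals non-uniform density or selection effects.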
{ "domain": "physics.stackexchange", "id": 48711, "tags": "general-relativity, cosmology" }
Game winning optimal strategy
Question: Consider a row of n coins of values v1 . . . vn, where n is even. a player selects either the first or last coin from the row, removes it from the row permanently, and receives the value of the coin. Determine the maximum possible amount of money we can definitely win if we move first. Let us understand the problem with few examples: [8, 15, 3, 7] : The user collects maximum value as 22(7 + 15) Does choosing the best at each move give an optimal solution? Is there any way to improve this code? def optimal_strategy_game(v): n = len(v) t = [[0 for x in xrange(n)] for x in xrange(n)] for gap in xrange(n): for i in xrange(n): j = i + gap if j<n: x = 0 if i+1 > j-1 else t[i+1][j-1] y = t[i+2][j] if i+2 <j else 0 z = t[i][j-2] if i <= j-2 else 0 t[i][j] = max(v[i]+min(x,y), v[j]+min(x,z)) print t print t[0][n-1] v = [8,15,3,7] n = len(v) print optimal_strategy(0,n-1,v) optimal_strategy_game(v) Answer: 1. Review It's usually more convenient to return a result rather than printing it. Then you can use the result in other computations, for example test cases. It's not clear what the meaning of the table t is. It would be helpful to have a comment. I think that if \$i ≤ j\$ then t[i][j] gets filled in with the maximum money obtainable by a player who starts with the coins \$v_i, \dots, v_j\$. There is some trouble in handling edge cases — you need to avoid an invalid lookup in the table t so instead of writing t[i+1][j-1] you have to write: 0 if i+1 > j-1 else t[i+1][j-1] to avoid trouble in the cases i+1 == n and j == 0. What this suggests is that t would be better implemented as a mapping from pair (i, j) to money, using collections.defaultdict: from collections import defaultdict t = defaultdict(int) # Map from (i,j) to max money on v[i:j+1]. 
Then you could look up any pair and it would default to zero even if it was out of bounds, like this: x = t[i + 1, j - 1] y = t[i + 2, j] z = t[i, j - 2] t[i, j] = max(v[i] + min(x, y), v[j] + min(x, z)) In this code: for gap in xrange(n): for i in xrange(n): j = i + gap if j<n: you could avoid the test j<n by iterating only over valid values of i: for gap in xrange(n): for i in xrange(n - gap): j = i + gap These lines of code: x = t[i + 1, j - 1] y = t[i + 2, j] z = t[i, j - 2] t[i, j] = max(v[i] + min(x, y), v[j] + min(x, z)) are looking two steps ahead (the next player chooses the minimum of their options, and the current player chooses the maximum of theirs). But it is simpler to look only one step ahead, like this: s = sum(v[i:j+1]) t[i, j] = s - min(t[i + 1, j], t[i, j - 1]) How does this work? Well, s is the sum of all the coins from i to j, so if the next player can win t[i + 1, j] (say), then the current player can win whatever's left. (This is a standard trick when implementing a minimax algorithm. Instead of having different code for the even (max) and odd (min) players, use the same code for both players and negate the score at each step so that max and min swap roles.) However, this is inefficient because of the computation of s, which requires iterating over all the coins from i to j. But we can avoid this by precomputing a list of running totals, using itertools.accumulate: from itertools import accumulate totals = [0] + list(accumulate(v)) and then the computation of s becomes: s = totals[j + 1] - totals[i] # sum(v[i:j+1]) 2. Revised code from collections import defaultdict from itertools import accumulate def optimal_strategy_game(coins): """Return the maximum possible amount of money that can be won by a player starting on the given row of coins. """ n = len(coins) totals = [0] + list(accumulate(coins)) # Running totals over coins money = defaultdict(int) # Map from (i,j) to max money on coins[i:j+1]. 
for gap in range(n): for i in range(n - gap): j = i + gap s = totals[j + 1] - totals[i] # s = sum(coins[i:j+1]) money[i, j] = s - min(money[i + 1, j], money[i, j - 1]) return money[0, n - 1] 3. Alternative approach The technique here (of building a table of solutions to sub-problems) is known as dynamic programming. But code that uses dynamic programming can always be rewritten to use recursion to visit the sub-problems and memoization to avoid duplicate work. Sometimes this results in clearer code. In Python 3.2 or later, many functions are easy to memoize using the @functools.lru_cache decorator: from functools import lru_cache from itertools import accumulate def max_money(coins): """Return the maximum possible amount of money that can be won by a player starting on the given row of coins. """ totals = [0] + list(accumulate(coins)) # Running totals over coins @lru_cache(maxsize=None) def money(start, stop): "Return maximum money that can be won on coins[start:stop]." if start >= stop: return 0 else: s = totals[stop] - totals[start] # s = sum(coins[start:stop]) return s - min(money(start + 1, stop), money(start, stop - 1)) return money(0, len(coins)) (Python 2.7 lacks lru_cache but there's a backport.)
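As a quick sanity check, the memoized version can be run against the example from the question, [8, 15, 3, 7], which should yield 22 (the function below is a self-contained copy of the code above):

```python
from functools import lru_cache
from itertools import accumulate

def max_money(coins):
    """Maximum money the first player can guarantee on this row of coins."""
    totals = [0] + list(accumulate(coins))  # running totals over coins

    @lru_cache(maxsize=None)
    def money(start, stop):
        "Maximum money winnable by the player to move on coins[start:stop]."
        if start >= stop:
            return 0
        s = totals[stop] - totals[start]    # sum(coins[start:stop])
        return s - min(money(start + 1, stop), money(start, stop - 1))

    return money(0, len(coins))

print(max_money([8, 15, 3, 7]))  # -> 22, matching the question's example
```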
{ "domain": "codereview.stackexchange", "id": 25359, "tags": "python, game, python-2.x" }
Random playlists program
Question: This is the first program I've created without help from a tutorial. Written in Python 3.4, it gathers a specified number of media files and opens them in the default player (I use VLC on Windows 8.1). Python is my first language I have been learning mostly through trial and error. I am posting here to see what I can improve and to learn as much as I can. import os, random, sys completed_media = (r"C:\Users\user_000\Desktop\Completed Media") all_media = [] playlist_ = [] def create_media_list(): for path, subdirs, files in os.walk(completed_media): for file in files: if file.lower().endswith(('.mkv', '.mp4', '.divx', '.avi')): all_media.append(os.path.join(path, file)) def random_selection_from_media(): random_selection = random.choice(all_media) if random_selection not in playlist_: playlist_.append(random_selection) else: pass def get_selections(): for i in range(number): random_selection_from_media() print_playlist_() playlist_confirmation() def number_of_selections(): while True: try: global number number = int(input('How many files would you like to add to playlist? >>> ')) break except ValueError: print('Enter a number.') def print_playlist_(): print('\n-------In playlist-------\n') print('[{0}]'.format('\n-------------------------\n'.join(str(i) for i in enumerate(playlist_, 1)))) print('\n-------End playlist------\n') def remove_selection(): while True: try: to_remove = int(input('To remove a selection enter the number of the selection you want removed here. >>> ')) if to_remove <= len(playlist_): break except ValueError: print('Enter a number.') remove_selection() try: playlist_.pop((to_remove - 1)) break except (IndexError, UnboundLocalError): print('Enter a vaild number') remove_selection() clear() print_playlist_() playlist_confirmation() def playlist_confirmation(): ok = input('This list ok? 
>>> ').lower() if ok == 'yes' or ok == 'y': play_playlist_() elif ok == 'no' or ok == 'n': while True: new = input('Get new list or remove a specific selection? >>> ').lower() if new == 'new list' or new == 'n': del playlist_[:] clear() get_selections() break elif new == 'specific selection' or new == 's': remove_selection() break else: print('Enter \'new\' or \'selection\'') else: playlist_confirmation() def play_playlist_(): for i in playlist_: play_cmd = "rundll32 url.dll,FileProtocolHandler \"" + i + "\"" os.system(play_cmd) def clear(): os.system('cls') def main(): create_media_list() number_of_selections() get_selections() if __name__=="__main__": main() Answer: Pretty good for a self-taught beginner, keep going like this and you're golden! Recursion and infinite loop This is probably the worst part of the script: def remove_selection(): while True: try: to_remove = int(input('To remove a selection enter the number of the selection you want removed here. >>> ')) if to_remove <= len(playlist_): break except ValueError: print('Enter a number.') remove_selection() try: playlist_.pop((to_remove - 1)) break except (IndexError, UnboundLocalError): print('Enter a vaild number') remove_selection() An infinite loop, a misused exception handling, an incorrect error handling, and recursive calls... This is confusing and more complicated than it needs to be. You can replace the recursive calls with continue. Exception handling for converting the input to integer is appropriate, because there is no better way. It is not appropriate for validating the input is within range, because a better way exists using conditionals. Also, it's not enough to check that the index is less than the length of the list, you also need to check that it's 0 or greater. UnboundLocalError is thrown when trying to access a variable that hasn't been assigned. That's not an exception to catch, it indicates a problem in the logic that needs to be fixed. 
Here's one way to address these issues: while True: try: to_remove = int(input('To remove a selection enter the number of the selection you want removed here. >>> ')) except ValueError: print('Enter a number.') continue if not (0 < to_remove <= len(playlist_)): print('Enter a valid number (within range: 1..{})'.format(len(playlist_))) continue playlist_.pop(to_remove - 1) break Global variables Try to avoid global variables as much as possible. For example, instead of this: def number_of_selections(): while True: try: global number number = int(input('How many files would you like to add to playlist? >>> ')) break except ValueError: print('Enter a number.') It would be better to make the function return the number: def number_of_selections(): while True: try: return int(input('How many files would you like to add to playlist? >>> ')) except ValueError: print('Enter a number.') And then adjust the callers appropriately to pass the number as parameter where needed, for example instead of: number_of_selections() get_selections() Write: get_selections(number_of_selections()) Adjust the rest of the code accordingly, and try to eliminate the other global variables too, all_media and playlist_. For these two, another good alternative will be to create a class, where these variables will be member fields. Simplifications Instead of this: if ok == 'yes' or ok == 'y': A bit simpler way to write: if ok in ('yes', 'y'): Optimizing imports, and coding style The script is not actually using sys, so you can drop that import. And the style guide recommends to import one package per line, like this: import os import random Do read the style guide, it has many other recommendations that apply to your script.
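To make the class suggestion concrete, here is one hypothetical shape it could take (the names and methods are my own, not from the original script): all_media and playlist_ become instance fields, and the retry-on-duplicate recursion in random_selection_from_media disappears by choosing directly from the files not yet selected.

```python
import random

class Playlist:
    def __init__(self, all_media):
        self.all_media = list(all_media)  # was the global all_media
        self.selections = []              # was the global playlist_

    def add_random(self):
        """Add a random file that is not already in the playlist."""
        remaining = [m for m in self.all_media if m not in self.selections]
        if remaining:
            self.selections.append(random.choice(remaining))

    def remove(self, index):
        """Remove the 1-based entry `index`; out-of-range is a no-op."""
        if 1 <= index <= len(self.selections):
            self.selections.pop(index - 1)
```

The functions that currently print and play the list would become further methods on this class, each reading self.selections instead of a global.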
{ "domain": "codereview.stackexchange", "id": 20012, "tags": "python, beginner, python-3.x" }
What are the issues with a set-like interpretation of quantifiers in type theory?
Question: In his answer to a question that tries to treat universal and existential quantifiers as intersections and unions of sets, Andrej Bauer says: Forget the intersections and unions. People get this idea that ∀ and ∃ are like ⋂ and ⋃, which is the sort of thing the Polish school was doing a long time ago with Boolean algebras, but it's really not the way to go (definitely not in computer science). and then introduces the traditional type-theoretic view of universal types as (collections of certain) functions and existential types as (collections of certain) pairs. Question: what is wrong with the set-theoretic view? (Why was socumbersome asked to forget it?) Further details: I ask because I'm interested in set-theoretic types (see e.g. [0, 1]), where types are interpreted as collections of values, and the (set-theoretic) connectives union, intersection, and sometimes even negation are available. Interpreting quantifiers like unions and intersections (as in the linked question): $$\forall x.T \overset{def}{:=} \bigcap_{S - type} T[x := S] $$ $$\exists x.T \overset{def}{:=} \bigcup_{S - type} T[x := S] $$ seems to me to be a natural extension of that view. One (former?) issue I am aware of is that of dealing with cardinality of $D$ because of $D \cong D \to D$. But [0] says: "Systems which wish to reason about types as sets of values and who feature function types can quickly run into a problematic circularity in the metatheory and cardinality issues. Fortunately, these issues have been thoroughly addressed in prior work[28]" (the [28] is my [1]). I even found an old paper[2] that I think (although I cannot claim much understanding) deals with the quantifiers that way. 
It, however, also syntactically restricts what type definitions are valid (on page 116): So say $\sigma$ is (formally) contractive in t iff one of the following conditions hold: $\sigma$ has one of the forms bool, int, $t'$ (with $t' \neq t$), $\sigma_1 \to \sigma_2$, $\sigma_1 \times \sigma_2$, or $\sigma_1 + \sigma_2$. $\sigma$ has one of the forms $\sigma_1 \cap \sigma_2$ or $\sigma_1 \cup \sigma_2$ with both $\sigma_1$ and $\sigma_2$ contractive in t. $\sigma$ has one of the forms $\forall t'.\sigma_1$, $\exists t'.\sigma_1$, or $\mu t'.\sigma_1$ with either $t' = t$ or $\sigma_1$ contractive in t. Now we take TExp to be the set of well-formed type expressions where $\sigma$ is well formed iff one of the following conditions hold: $\sigma$ is bool, int, or t. $\sigma$ has one of the forms $\sigma_1 \to \sigma_2$, $\sigma_1 \times \sigma_2$, $\sigma_1 + \sigma_2$, $\sigma_1 \cap \sigma_2$, $\sigma_1 \cup \sigma_2$ with both $\sigma_1$ and $\sigma_2$ well-formed. $\sigma$ has one of the forms $\forall t.\sigma_1$ or $\exists t.\sigma_1$ with $\sigma_1$ well formed. $\sigma$ has the form $\mu t.\sigma_1$ with $\sigma_1$ well formed and contractive in t. Below that it defines an interpretation function $\mathcal{T}\colon \mbox{TExp} \to \mbox{TEnv} \to \mathcal{P}(V)$, where $V$ is the space of all values (like booleans, naturals, functions of those, etc.) with the right isomorphisms. This paper makes it harder for me to believe there is something wrong with the "set-theoretic" quantifiers, but maybe I just misunderstand something important. Advanced Logical Type Systems for Untyped Languages - Andrew M. 
Kent; link Semantic subtyping: Dealing set-theoretically with function, union, intersection, and negation types - Alain Frisch, Giuseppe Castagna, Véronique Benzaken; link An ideal model for recursive polymorphic types - David MacQueen, Gordon Plotkin, Ravi Sethi; link Answer: I think there may be a little nuance that can be applied to the situation, where 2 different possible hats may be applied, and which both are valid views of type systems. View 1: Types are intrinsic In this view, it makes no sense to talk about a program/term independently of its type. In addition $\forall$s and $\exists$s are really "forall"s and "exists" in the logical sense. Subject reduction is typically rather easy to prove (sort of) Typically types are unique: a term has only one type up to some equivalence relation. Typically type-checking is decidable. We care about termination and logical consistency: ideally there should be no term $\vdash t : \forall X.X$. There is (generally) no natural notion of subtype: if you want to restrict the possible values of a given type $T$, you need to form a $\Sigma$-type which contains a witness that the program behaves as expected (itself a program of some type). E.g. the type of positive integers $\Sigma x:\mathrm{Int}. x \geq 0$ is actually the type of pairs $(i, pos)$ where $pos$ is an inhabitant of the type $i\geq 0$. Typically this is the view taken in dependent type theory, and I believe the view Andrej is promoting in his answer. View 2: Types are behaviors In this view, programs exist a priori without having an associated type, and the judgement $\vdash t : \sigma$ is taken to mean "program $t$ has behavior $\sigma$". This is generally the view taken whenever one mentions union or intersection types. Some contrasts with the previous approach: Types are definitely not unique! Usually $\vdash t: \sigma$ for every possible behavior of $t$, e.g. $\lambda x. 
x$ has behavior $\mathrm{Int}\rightarrow \mathrm{Int}$ and also the incomparable behavior $\mathrm{Bool}\rightarrow\mathrm{Bool}$ etc. As a result, type checking and inference is usually undecidable. Subject reduction becomes quite subtle to prove. Normalization sometimes holds, but is less of a central topic. There is a subtle theory of subtyping where one can have $\mathrm{Nat}\subseteq\mathrm{Int}$ or even $\mathrm{Int}_{\geq 0}\subseteq\mathrm{Int}$ if the type system can express it. Terms need not carry witnesses that they belong in such or such type. There is some close connection with domain theory, which I do not know much about. The papers you reference tend to adopt this second view. The two views are definitely not incompatible, but they tend to have different motivations and techniques. I should finally note that there is a model of system $\mathrm{F}$ where $\Pi$s are interpreted as $\bigcap$, and this is generally used to prove normalization which is a "View 1" property. In some sense, this is a fluke: we're trying to reconcile these non-classical set theoretic views with some set theoretic model. But it definitely "feels" like a "View 2" perspective. I'm not sure how to reconcile this, but there's probably something deep happening.
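For concreteness, the polymorphic identity under each hat (a textbook example, not drawn from the papers cited above):

```latex
% View 1: the term carries its type; \forall X binds a type abstraction.
\vdash \Lambda X.\,\lambda x{:}X.\,x \;:\; \forall X.\, X \to X

% View 2: the untyped term \lambda x.x simply *has* many behaviours,
% and the \forall-type denotes an intersection over all types S:
\vdash \lambda x.\,x \;:\; (\mathrm{Int}\to\mathrm{Int}) \cap (\mathrm{Bool}\to\mathrm{Bool}),
\qquad
[\![\,\forall X.\, X \to X\,]\!] \;=\; \bigcap_{S}\,(S \to S)
```

The last equation is exactly the "fluke" mentioned above: the standard normalization model of System F interprets the Π-type as an intersection, a View 2 reading pressed into service to prove a View 1 property.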
{ "domain": "cstheory.stackexchange", "id": 5410, "tags": "type-theory, type-systems, set-theory" }
europmc and tidypmc R library for extracting or making metadata from publication
Question: I'm trying to use the europepmc R library, where I have a list of PMIDs to look for. I tried PubTator, but it's a bit complicated. In Europe PMC I can get all the annotated terms, etc. library("europepmc") Example list of PMIDs: 30024784 30555165 30510081 31688884 31516032 28588019 29286103 Now what I'm doing is looking up each ID using the epmc_details function, which is not the way I would do it if I had to look for hundreds. epmc_details(ext_id = '30510081') My question is how I can run epmc_details in a loop, or in some other way where I can look up the PMIDs one by one and save the result in a data frame. epmc_details returns a list. The structure of the list is as such: [1] "basic" "author_details" "journal_info" "ftx" "chemical" "mesh_topic" "mesh_qualifiers" [8] "comments" "grants" I would only like to save basic, chemical, mesh_topic, and mesh_qualifiers in one data frame. For example, if my first ID is 30510081, the data frame should have my ID (which is basically basic[1]) as the first column, and the rest of the information appended to the next columns, such as: ID chemical mesh_qualifiers mesh_topic gene Any suggestion or help would be highly appreciated. I was looking at the Europe PMC site through the browser; this was one of my queries, and when I highlight the key terms I do see in the abstract itself that all the key terms are getting annotated, but when I do the same query search through R I see empty results, as shown below. Why is there a difference? $chemical # A tibble: 0 x 0 $mesh_topic # A tibble: 0 x 0 $mesh_qualifiers # A tibble: 0 x 0 I found a better way of getting data from PubMed using the tidypmc library. 
library(tidypmc) doc <- pmc_xml("PMC6365492") doc txt <- pmc_text(doc) txt count(txt, "section") cap1 <- pmc_caption(doc) filter(cap1, sentence == 1) tab1 <- pmc_table(doc) sapply(tab1, nrow) tab1[[1]] attributes(tab1[[2]]) collapse_rows(tab1, na.string="-") library(tibble) x <- xml_name(xml_find_all(doc, "//*")) tibble(tag=x) %>% count("tag") library(tidytext) x1 <- unnest_tokens(txt, word, text) %>% anti_join(stop_words) %>% filter(!word %in% 1:100) # Joining, by = "word" #filter(x1, str_detect(section, "Case description")) filter(x1, str_detect(section, "Results")) count(a$word) tbls <- pmc_table(doc) map_int(tbls, nrow) tbls[[1]] collapse_rows(tbls, na.string="-") But if I understand correctly, it can use only one PMC ID at a time. So, keeping my original question: how can I put this in a loop to query, say, 100 PMCIDs, get the results, and store them in a data frame? After using tidypmc I found that I can parse the whole publication based on attributes or tags, such as title, abstract, results, etc. Let's say I'm interested in the table tags, where they have metadata on patients as well as other information. So if a paper contains multiple tables, I would like to store each of them in a data frame under the respective publication. Since I have multiple IDs to search, I would like to do the same saving as mentioned above. How can I put this through a loop, or can it be done without a loop? Any suggestion or help would be really appreciated, as always. Answer: The idea for this code is to first convert PIDs to PMCIDs, then run the tidypmc in a loop over the PMCIDs. The only problem is that tidypmc failed to retrieve tables from most of the IDs in your example list. 
library(tidyverse) library(tidypmc) library(httr) library(jsonlite) example_pids <- c(30024784, 30555165, 30510081, 31688884, 31516032, 28588019, 29286103) %>% as.character() #-- Convert to PMC ids convertPIDtoPMCID <- function(pids) { #-------- Make API request pids4query <- paste(pids, collapse = "%0D%0A") idconv_req <- paste0("https://www.ncbi.nlm.nih.gov/pmc/utils/idconv/v1.0/?ids=", pids4query, "&idtype=pmid&format=json&versions=no&showaiid=no&tool=&email=&.submit=Submit") pids_json <- GET(idconv_req) %>% content("text") %>% fromJSON() #-------- Get info from JSON pid2pmc <- pids_json$records %>% select(pmcid, pmid) %>% as.data.frame() rownames(pid2pmc) <- pid2pmc$pmid pmcids <- pid2pmc[pids, "pmcid"] return(pmcids) } example_pmcids <- convertPIDtoPMCID(example_pids) #-- Try to get data with tidypmc pub_tables <- lapply(example_pmcids, function(pmc_id) { message("-- Trying ", pmc_id, "...") doc <- tryCatch(pmc_xml(pmc_id), error = function(e) { message("------ Failed to recover PMCID") return(NULL) }) if(!is.null(doc)) { #-- If succeed, try to get table tables <- pmc_table(doc) if(!is.null(tables)) { #-- If succeed, try to get table name table_caps <- pmc_caption(doc) %>% filter(tag == "table") names(tables) <- paste(table_caps$label, table_caps$text, sep = " - ") } return(tables) } else { #-- If fail, return NA return(NA) } }) names(pub_tables) <- example_pids #-- Inspect results pub_tables$`30555165`$`Table 1 - Patient demographic and baseline characteristics` pub_tables$`29286103`$`Table I - Sample summary.` Tables will require quite a bit of tidying after this. Good luck!
{ "domain": "bioinformatics.stackexchange", "id": 1510, "tags": "r, public-databases, parsing" }
Living organisms are not at equilibrium with their surroundings
Question: Why can living organisms never be at equilibrium with their surroundings? Answer: Being at equilibrium with your surroundings means that for everything (particles, chemicals, energy) that comes inside you from your surroundings, an equivalent amount of it is returned to the environment, unaltered (same structure, same energy, same concentration, etc.). At the same time, and in the same manner, anything going out to the environment will be returned to you. You will not change your environment. Equilibrium works both ways, so the environment will not change you, either. If you are alive, this cannot keep up for long. You will need to use your stored energy (you can't get it from your environment if you are at equilibrium with it) for your basic functions as a living being. Part of this energy will invariably become waste heat, which you must get out of your body or you will heat up too much and die. But if you are at equilibrium with your environment, you cannot get rid of it. So... But perhaps you are spore-like, and completely inactive (not even an inner timer)? Then one of two things may happen: you cannot receive anything from the environment, or you can. If you can't, you will never be able to get something to revive you. If you can, you will receive something, and then you are not in equilibrium with your environment.
{ "domain": "chemistry.stackexchange", "id": 10064, "tags": "biochemistry" }
Noughts and Crosses
Question: My first program using C. I would appreciate pointers on how to improve the code. I exit just before turns reach 9 and the grid is filled because it causes all sorts of bugs. The computer is random. #include <stdio.h> #include <stdlib.h> #include <time.h> // Struct with all game state variables. struct game_data { int win; int turn; int grid[3][3]; } game= { 0, 1, { { 8, 8, 8 }, { 8, 8, 8 }, { 8, 8, 8 } } }; void player_one_move(struct game_data* game) { int y_val, x_val; printf("You are '-1's. Please input co-ordinates in the form 'row column' for the 3x3 grid:\n"); scanf(" %d %d", &y_val, &x_val); //Passes player input to variables x_val and y_val //Stops illegal moves and places player's position. if (game->grid[y_val - 1][x_val - 1] == 8) { game->grid[y_val - 1][x_val - 1] = -1; printf("\nYour turn:\n\n"); } else { player_one_move(game); } } //Player two function. /*void player_two_move(struct game_data* game) { int y_val, x_val; printf("Please input co-ordinates in the form 'row column' for the 3x3 grid\n"); scanf(" %d %d", &y_val, &x_val); printf("\nYour turn:\n\n"); game->grid[y_val-1][x_val-1] = 1; } */ void computer_move(struct game_data* game) { int x_val = rand() / (RAND_MAX / 4); int y_val = rand() / (RAND_MAX / 4); if (game->grid[y_val][x_val] == 8) { game->grid[y_val][x_val] = 1; printf("\nComputer turn:\n\n"); } else { computer_move(game); } } void update(struct game_data* game) { /*for (int y_val = 0; y_val < 3; y_val++) { for (int x_val = 0; x_val < 3; x_val++) { printf("%d ", game->grid[y_val][x_val]); } printf("\n"); } printf("\n");*/ //Displays grid. 
printf("%d | %d | %d \n---+---+---\n %d | %d | %d \n---+---+---\n %d | %d | %d \n\n", game->grid[0][0], game->grid[0][1], game->grid[0][2], game->grid[1][0], game->grid[1][1], game->grid[1][2], game->grid[2][0], game->grid[2][1], game->grid[2][2]); } void game_event_won(struct game_data* game) { int left_diag_sum = 0; int right_diag_sum = 0; int col_sum = 0; int row_sum = 0; // Counts all columns and rows to find sum for (int y_val = 0; y_val < 3; y_val++) { for (int x_val = 0; x_val < 3; x_val++) { col_sum += game->grid[y_val][x_val]; row_sum += game->grid[x_val][y_val]; if (col_sum == -3 || row_sum == -3) { game->win = 1; printf("You have won.\n"); } if (col_sum == 3 || row_sum == 3) { game->win = 1; printf("You have lost.\n"); } } } // Sums diagonals for (int y_val = 0; y_val < 3; y_val++) { left_diag_sum += game->grid[y_val][y_val]; right_diag_sum += game->grid[y_val][2 - y_val]; if (left_diag_sum == -3 || right_diag_sum == -3) { game->win = 1; printf("You have won.\n"); } if (left_diag_sum == 3 || right_diag_sum == 3) { game->win = 1; printf("You have lost.\n"); } } } int main(void) { //Initialises random number generator. srand((unsigned)time(0)); while (game.win == 0 && game.turn < 9) { if (game.turn % 2) { player_one_move(&game); game.turn++; } else { //player_two_move(&game); Player two function computer_move(&game); game.turn++; } update(&game); game_event_won(&game); } return 0; } Answer: Before I even looked at your program, I thought I'd let the compiler discover any obvious flaws by compiling it with these flags: gcc -Wall -Wextra -pedantic -O2 noughts-and-crosses.c I expected a few warnings, but there weren't any. This means your code is already free from the worst mistakes. This shows that you have already put some good work into it, and that the code is ready to be inspected by human proofreaders. Very good. #include <stdio.h> #include <stdlib.h> #include <time.h> The headers are in alphabetical order. This is good.
Since there are only 3 of them I cannot say whether it is coincidence or arranged by your IDE or done by you manually. Anyway, this is how professional programs look. (Except when the headers do need a certain order. Then this order is of course more important than alphabetical.) // Struct with all game state variables. struct game_data { int win; int turn; int grid[3][3]; } You should remove the empty line between the comment and the beginning of the struct. As it is now, the comment reads like a general remark that applies to all the parts below it and not just the struct. This struct definition is the best place to document which values are valid for the win field. There are several possible choices: true, false 0, 1 0, 1, 2 -1, 0, 1 0, 'o', 'x' 0, '0', '1' 0, '1', '2' It's good style to avoid this possible confusion by commenting the possible values. Especially when you use int as the data type, since that data type is used for almost everything. For the turn field, I first thought it would mark the player whose turn it is. But that's not what the code says. It's actually the number of turns that have already been played. Therefore I'd expect it to be called turns instead of turn. The grid field is obvious since it is a 3 by 3 array, which for noughts and crosses can only mean the content of the board. There should be a comment that explains the possible values. Again, there are almost as many possibilities as for the win field. game= { 0, 1, { { 8, 8, 8 }, { 8, 8, 8 }, { 8, 8, 8 } } }; You surprised me a lot with this part. I first thought about a syntax error, but then I saw that you left out the semicolon after the struct definition. This is unusual since an empty line typically means that the two parts around the empty line are somewhat independent. This is not the case here.
The usual form is to put the semicolon at the end of the struct definition and then repeat the words struct game_data, so that the full variable declaration starts with struct game_data game = {. void player_one_move(struct game_data* game) { int y_val, x_val; printf("You are '-1's. Please input co-ordinates in the form 'row column' for the 3x3 grid:\n"); scanf(" %d %d", &y_val, &x_val); //Passes player input to variables x_val and y_val //Stops illegal moves and places player's position. if (game->grid[y_val - 1][x_val - 1] == 8) { game->grid[y_val - 1][x_val - 1] = -1; printf("\nYour turn:\n\n"); } else { player_one_move(game); } } When the game starts, the empty board is not printed. Therefore there is absolutely no clue that the coordinates are in the range 1..3. It would be far easier if there were some example coordinates written somewhere. Using 8 for an empty cell is something I don't understand. An 8 does not look like empty at all. A much better choice would be an actual space or at least an underscore or dot. Also, having -1 for one player and 1 for the other leads to a board layout in which the position of the vertical lines depends on which player plays where. This has nothing to do with the game in reality, where the vertical and horizontal lines are fixed during a game. void computer_move(struct game_data* game) { int x_val = rand() / (RAND_MAX / 4); int y_val = rand() / (RAND_MAX / 4); This again looks unusual. First, why do you divide by 4 instead of by 3? This gives you random numbers between 0 and 4, therefore it might happen that the computer plays off the board (if the memory right behind the struct game_data just happens to have an 8 stored there). Second, the usual pattern for generating a random number from 0 to n - 1 is to just calculate rand() % n, which in this case is rand() % 3.
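Concretely, the suggested pattern might look like the sketch below (my own illustration, not code from the reviewed program; the slight modulo bias of rand() % 3 is irrelevant for a game like this):

```c
#include <stdlib.h>

/* rand() % 3 always yields an index in 0..2, so the computer can
 * never address a cell outside the 3x3 grid, unlike
 * rand() / (RAND_MAX / 4), which can produce 3 or 4. */
int random_cell_index(void)
{
    return rand() % 3;
}
```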
if (game->grid[y_val][x_val] == 8) { game->grid[y_val][x_val] = 1; printf("\nComputer turn:\n\n"); } else { computer_move(game); } } void update(struct game_data* game) { //Displays grid. printf("%d | %d | %d \n---+---+---\n %d | %d | %d \n---+---+---\n %d | %d | %d \n\n", game->grid[0][0], game->grid[0][1], game->grid[0][2], game->grid[1][0], game->grid[1][1], game->grid[1][2], game->grid[2][0], game->grid[2][1], game->grid[2][2]); } The above code looks quite nice since it visually tells the reader that it prints the 3x3 board. You could make it even nicer if you'd split the string after each \n, like this: printf( "%d | %d | %d \n" "---+---+---\n" " %d | %d | %d \n" "---+---+---\n" " %d | %d | %d \n" "\n", game->grid[0][0], game->grid[0][1], game->grid[0][2], game->grid[1][0], game->grid[1][1], game->grid[1][2], game->grid[2][0], game->grid[2][1], game->grid[2][2]); Now the code looks almost exactly how the board will be printed, which is good. void game_event_won(struct game_data* game) { int left_diag_sum = 0; int right_diag_sum = 0; int col_sum = 0; int row_sum = 0; // Counts all columns and rows to find sum for (int y_val = 0; y_val < 3; y_val++) { for (int x_val = 0; x_val < 3; x_val++) { col_sum += game->grid[y_val][x_val]; row_sum += game->grid[x_val][y_val]; if (col_sum == -3 || row_sum == -3) { game->win = 1; printf("You have won.\n"); } if (col_sum == 3 || row_sum == 3) { game->win = 1; printf("You have lost.\n"); } } } // Sums diagonals for (int y_val = 0; y_val < 3; y_val++) { left_diag_sum += game->grid[y_val][y_val]; right_diag_sum += game->grid[y_val][2 - y_val]; if (left_diag_sum == -3 || right_diag_sum == -3) { game->win = 1; printf("You have won.\n"); } if (left_diag_sum == 3 || right_diag_sum == 3) { game->win = 1; printf("You have lost.\n"); } } } This function is the most important and the most complicated at the same time, which already sounds bad. It is also full of bugs. For example, it doesn't recognize it when I play at 2 1, 2 2, 2 3. 
That should be a win for me, but it isn't. The reason for this is that you also add each 8 (which means empty) to the sum. Therefore, the only situations in which I can currently win are the horizontal 1 1, 1 2, 1 3 or the vertical 1 1, 2 1, 3 1 (but only if the computer and I have also filled the cells at 1 2, 1 3, 2 2 and 2 3). Just counting up to 3 or down to -3 isn't enough. For example, the combination 1 3, 2 1, 2 2 is not a winning combination, but might be counted as such by your code. A better approach is to look at each possible combination (3 horizontal, 3 vertical, 2 diagonal) and check for each one separately and independently whether all its cells have the same value and at least one of these cells is not empty. One time I played against the computer, and to be sure I could not win, I played at 1 1, 1 2, 2 1, 2 2. The computer meanwhile got 3 in a row, but nevertheless the game said You have won., which is wrong. This function can also print You have won. twice for the same situation (once horizontally or vertically, and once more diagonally). This is another bug. int main(void) { //Initialises random number generator. srand((unsigned)time(0)); while (game.win == 0 && game.turn < 9) { if (game.turn % 2) { player_one_move(&game); game.turn++; } else { //player_two_move(&game); Player two function computer_move(&game); game.turn++; } update(&game); game_event_won(&game); } return 0; } There are some more things to say about how you should structure the code of the game, but the first thing to do is to fix the bugs. After that, you are welcome to post a follow-up question with the fixed code.
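To make the suggested per-line check concrete, here is one possible sketch (the function name and line table are my own illustration, assuming the program's encoding of 8 = empty, -1 = human, 1 = computer; it is not the only way to write this):

```c
/* Enumerate all 8 winning lines (3 rows, 3 columns, 2 diagonals)
 * and test each one independently: a line wins only when all three
 * of its cells hold the same non-empty value. Returns -1 (human
 * wins), 1 (computer wins) or 0 (no winner yet). */
int find_winner(int grid[3][3])
{
    static const int lines[8][3][2] = {
        {{0,0},{0,1},{0,2}}, {{1,0},{1,1},{1,2}}, {{2,0},{2,1},{2,2}}, /* rows */
        {{0,0},{1,0},{2,0}}, {{0,1},{1,1},{2,1}}, {{0,2},{1,2},{2,2}}, /* columns */
        {{0,0},{1,1},{2,2}}, {{0,2},{1,1},{2,0}}                       /* diagonals */
    };
    for (int i = 0; i < 8; i++) {
        int a = grid[lines[i][0][0]][lines[i][0][1]];
        int b = grid[lines[i][1][0]][lines[i][1][1]];
        int c = grid[lines[i][2][0]][lines[i][2][1]];
        if (a != 8 && a == b && b == c)
            return a;
    }
    return 0;
}
```

Because empty cells (8) never satisfy a != 8, partially filled lines can no longer be miscounted, and the function reports at most one winner per call.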
{ "domain": "codereview.stackexchange", "id": 33566, "tags": "beginner, c, tic-tac-toe" }
Why doesn't my kitchen clock violate thermodynamics?
Question: My kitchen clock has a pendulum, which is just for decoration and is not powering the clock. The pendulum's arm has a magnet that is repelled by a second magnet that is fixed to the clocks body. The repelling magnets are at their closest when the pendulum is at its lowest point. We all (hopefully) agree that a regular pendulum would eventually slow down due to friction. But I honestly cannot recall ever seeing the clock's pendulum at rest. By my calculations the magnet would slow the pendulum as it falls but accelerate it as it swings up the other side. So how would a magnet actually create any net benefit to the pendulum? Will the pendulum eventually stop, or if not, how is it not violating the laws of thermodynamics? Answer: The pendulum is being driven by the magnet: the fixed magnet in the clock is actually the pole of an electromagnet which the clock is using to drive the pendulum: the clock is putting energy into the pendulum via the electromagnet. Almost certainly the clock 'listens' for the pendulum by watching the induced current in the electromagnet, and then gives it a kick as it has just passed (or alternatively pulls it as it approaches). People have used techniques like this to actually drive a time-keeping pendulum (I presume this pendulum is not keeping time but just decorative) but I believe they are not as good as you would expect them to be, because the pendulum is effectively not very 'free'. 'Free' is a term of art in pendulum clock design which refers to, essentially, how much the pendulum is perturbed by the mechanism which drives it and/or counts swings, the aim being to make pendulums which are perturbed as little as possible. The ultimate limit of this is clocks where there are two pendulums: one which keeps time and the other which counts seconds to decide when to kick the good pendulum (and the kicking mechanism also synchronises the secondary pendulum), which are called 'free pendulum' clocks.
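A back-of-the-envelope energy balance (my own illustrative sketch, not part of the original answer) shows why the amplitude settles rather than growing or dying out. Per swing cycle, friction removes roughly a fixed fraction of the pendulum's energy $E$ (set by the quality factor $Q$), while the electromagnet's kick adds a roughly fixed amount:

```latex
\Delta E_{\text{loss}} \approx \frac{2\pi}{Q}\,E,
\qquad
\Delta E_{\text{kick}} \approx \text{const}
```

so the energy grows until $\frac{2\pi}{Q} E \approx \Delta E_{\text{kick}}$, i.e. the pendulum swings at the fixed amplitude where each kick exactly replaces the frictional loss of one cycle, which is why the pendulum is never seen at rest.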
{ "domain": "physics.stackexchange", "id": 46896, "tags": "thermodynamics, everyday-life, harmonic-oscillator" }
Why does azo coupling of β-naphthol take place at the alpha position and not at the gamma position
Question: I have searched for the coupling reaction of $\beta$-naphthol with benzene diazonium salt in quite a few places, but everywhere the coupling has been shown at the alpha position. Why doesn't it take place at the gamma position to minimise steric hindrance? Chemistry LibreTexts Chemguide My book Answer: The coupling reaction of β-naphthol with benzene diazonium is an example of electrophilic aromatic substitution. If the electrophile attacks at the alpha position, then two resonance structures, 1 and 2, with aromatic rings are possible. If the electrophile attacks at the gamma position, only one resonance structure, 3, with an aromatic ring is possible, and 4 is not aromatic. Therefore attack at the alpha position gives the major product. References: https://pubs.rsc.org/en/content/articlehtml/2015/gc/c4gc02381a https://en.wikipedia.org/wiki/Azo_coupling Mechanism of formation of 2-naphthol red dye (aka Sudan 1)
{ "domain": "chemistry.stackexchange", "id": 11982, "tags": "organic-chemistry" }
gmapping yields map with a bend at one point
Question: Hi, I am using gmapping in a hallway 100 feet long and 8 feet wide. There is an alcove for elevators on one side in the middle of the hallway. I am using a URG-04LX laser scanner. Each time I build a map, there is a bend that occurs at one end of the alcove. The hallway is mapped pretty straight except for the area where the bend occurs. Interestingly, the bend occurs at the same end of the alcove regardless of which end of the hallway the robot begins its mapping run. I have plotted my odometry, x, y, and theta, and they are quite straight for the entire length of the hallway. I have played with srr, srt, str and stt; reducing them to zero makes the turn a little more abrupt, doubling the terms from the default values has little effect. Of course the map should work pretty well for navigation, but the bend is annoying since my odometry is very good. Any suggestions will be appreciated. Alex Originally posted by Alex Brown on ROS Answers with karma: 176 on 2011-07-07 Post score: 0 Original comments Comment by Brian Gerkey on 2011-07-07: Do you get better results if you traverse the hallway twice (once in each direction) during the mapping run? Answer: Can you have a look at the laser data at the specific location? 8 ft is not that far from the URG's max range; perhaps there is very little data to match properly and the map you are getting is good (matching-wise). Otherwise you can play around with (at the cost of computation time): iterations (more) linearUpdate/angularUpdate (decrease) particles (more) Originally posted by dornhege with karma: 31395 on 2011-07-07 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by dornhege on 2011-07-08: If you want to trick/cheat the environment for the algorithm: Put something that is not aligned with the corridor, 90deg at best. Comment by Alex Brown on 2011-07-08: Well, I varied each suggested parameter to 50% and 200% of the default values while processing a bag file.
The results varied from no appreciable difference to a bit worse bend (maybe 25% more). Tomorrow, I can run the bot in the same hallway and will try it with the door closed and perhaps using a sheet of cardboard to extend the wall so that the laser doesn't see the alcove. Thanks for your help. I'll let you know what happens. Comment by dornhege on 2011-07-08: I don't think you were out of the maxrange, but maybe there was no clear data to match at that point. The bend seems to be at a point where the lower side has no ranges. Comment by Brian Gerkey on 2011-07-07: You might also try increasing ~lsigma, which is the standard deviation in the laser model. Comment by Brian Gerkey on 2011-07-07: linearUpdate and angularUpdate determine how far the robot must have translated or rotated before a filter update is performed; units are meters and radians, respectively. Decreasing them will take account of laser data more often. Comment by Alex Brown on 2011-07-07: Brian, the data shown is the result of making a round trip from one end of the hall and back. The map looks like this after going down the hall one way and doesn't change appreciably on the way back. To Dornhege, it was only about 6 feet from the center of the hall, where I drove the robot, to the farthest wall; so I don't think it was out of range. I'll try looking at your other suggestions. I'm not sure what parameters are for the "updates".
{ "domain": "robotics.stackexchange", "id": 6062, "tags": "slam, navigation, slam-gmapping, gmapping" }