How can I show in my browser what I see in RViz?
Question: I'm on ROS 2 Humble and I want to see my robot's location and movement on the map in my browser, the way I see it in RViz. Like the picture above, I want to take the Grid, RobotModel, TF, and Map from RViz and send them to my browser to show there, kind of like streaming or rendering them via a .js and .html file. I'm new to ROS 2 and I can't seem to find any sources to help me. Most of the things I find are for ROS 1, and ChatGPT doesn't help at all. Basically something like this: https://answers.ros.org/question/319626/real-time-map-generate-on-web-like-a-rviz/ Any tips/help would be greatly appreciated, thanks in advance! Answer: For anyone who wants the same thing as me: I ended up using https://github.com/MoffKalast/vizanti and it works like a charm!
{ "domain": "robotics.stackexchange", "id": 38822, "tags": "ros2, rviz, ros-humble, rosbridge, webots" }
cartesian velocity control loop implementation
Question: I'm using ROS (Noetic) to intuitively control a Franka manipulator, using the panda_robot package for the simulation. I've set up an extended Kalman filter which fuses the following measures:

- IMU data: the sensor is attached to my arm and reports the orientation of my hand w.r.t. the (virtual) base link of the manipulator, along with the angular velocity.
- Position data coming from my mouse. The z coordinate is controlled through the mouse wheel.
- Linear velocity obtained by differencing the position of the mouse. As for the position data, the velocity is with respect to the (virtual) base link of the manipulator, which corresponds to the origin of the world in the Gazebo simulator.

The filter outputs an odometry message, which I'm trying to exploit to implement the following cartesian velocity control loop: where $\dot{x}^*$ is the target velocity in cartesian space, while $x^*$ is the target position in cartesian space. Both quantities are obtained from the Kalman filter. Essentially, the filter acts as the reference generator depicted in the image, sending commands at about 100 Hz. The code for the robot control loop is the following (using rospy):

```python
import rospy
import numpy as np
import tf
...

def on_cmd(self, cmd):
    # cartesian speed and position obtained by the kalman filter
    tgt_pose = np.array(cmd.tgt_pose)
    tgt_twist = np.array(cmd.tgt_twist)
    # get current end-effector position and orientation in cartesian space
    ee_position, ee_orientation = self._panda_arm.ee_pose()
    ee_orientation = self._quat2array(ee_orientation)  # w,x,y,z --> x,y,z,w
    ee_orientation_rpy = tf.transformations.euler_from_quaternion(ee_orientation)
    ee_pose = np.concatenate([ee_position, ee_orientation_rpy])
    # get the jacobian for the current joint configuration
    J = self._panda_arm.zero_jacobian()
    J_inv = np.linalg.pinv(J)
    pose_error = tgt_pose - ee_pose
    # self._K is a 6x6 diagonal positive-definite matrix
    twist_error = tgt_twist + self._K @ pose_error
    joint_velocity_cmd = J_inv @ twist_error
    # finally execute the command
    self._panda_arm.exec_velocity_cmd(joint_velocity_cmd)
```

$K$ is chosen to be a 6x6 diagonal matrix with values of 0.01. By means of RViz, I can see that the cartesian position and velocity produced by the Kalman filter seem to be correct. Still, I'm struggling to make the manipulator move according to my mouse motion and my arm orientation. The robot is currently able to just move back and forth following the mouse, but it is basically ignoring the orientation. The result is that it appears to move randomly when I attempt to perform some more advanced moves (like rotating the gripper of the end effector). I've come to the assumption that the command values provided by the filter are correct, so I think that there should be some error in my implementation of the control loop. I'd like to ensure that the above code snippet does make sense, at least conceptually. Answer: The orientation error is not simply the subtraction of your current pose from the desired pose.
Given $\mathbf{R}_{a}, \ \mathbf{R}_{d} \in \text{SO}(3) $, where $\mathbf{R}_{d}$ is your desired orientation in matrix form and $\mathbf{R}_{a}$ is your current orientation in matrix form, you can compute the orientation error as: $$ \mathbf{e}_{o} = 0.5 \cdot\sum_{i=1}^{3} \mathbf{r}_{a,i} \times \mathbf{r}_{d,i} $$ where $\mathbf{r}_{a,i}$ denotes the $i^{\text{th}}$ column vector of $\mathbf{R}_{a}$, likewise for $\mathbf{r}_{d,i}$ and $\mathbf{R}_{d}$, and $\times$ denotes the cross product. This will convert any rotation to the appropriate axis-angle representation of error - capable of being used in conjunction with the end-effector Jacobian. The following image details converting from RPY velocity to angular velocity:
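The formula above can be written as a minimal NumPy sketch (function and variable names are my own). The error vanishes for identical orientations, and for a small rotation it reduces to the rotation axis scaled by approximately the rotation angle:

```python
import numpy as np

def orientation_error(R_a, R_d):
    """Axis-angle orientation error e_o = 0.5 * sum_i (r_a,i x r_d,i),
    where r_*,i are the columns of the current/desired rotation matrices."""
    return 0.5 * sum(np.cross(R_a[:, i], R_d[:, i]) for i in range(3))

I = np.eye(3)
theta = 0.1  # small rotation about z
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

print(orientation_error(I, I))   # zero vector: orientations agree
print(orientation_error(I, Rz))  # approximately [0, 0, theta] for small theta
```

This error vector would replace the plain `tgt_pose - ee_pose` subtraction for the rotational part of the error, while the translational part remains an ordinary difference.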
{ "domain": "robotics.stackexchange", "id": 2569, "tags": "control, robotic-arm, ros, kinematics, kalman-filter" }
Count elements in a collection based on a condition [Java 8]
Question: Came across this question here: Write a generic method to count the number of elements in a collection that have a specific property (for example, odd integers, prime numbers, palindromes). The following is my solution:

Counter.java

```java
public class Counter {

    public static <T> long countIf(Collection<T> collection, Predicate<T> predicate) {
        return collection.stream()
                         .filter(predicate)
                         .count();
    }
}
```

Behaviours.java

```java
public class Behaviours {

    public static boolean checkEvenNumber(final int num) {
        return num % 2 == 0;
    }

    public static boolean checkOddNumber(final int num) {
        return num % 2 != 0;
    }

    public static boolean checkPrimeNumber(final int num) {
        if (num == 0 || num == 1) {
            return false;
        }
        for (int i = 2; i * i <= num; i++) {
            if (num % i == 0) {
                return false;
            }
        }
        return true;
    }

    public static boolean checkPalindrome(final String word) {
        for (int i = 0; i < word.length() / 2; i++) {
            if (word.charAt(i) != word.charAt(word.length() - 1 - i)) {
                return false;
            }
        }
        return true;
    }
}
```

CounterTest.java

```java
public class CounterTest {

    /**
     * Counting odd numbers.
     */
    @Test
    public void testCountingOddNumbers() {
        System.out.println("testCountingOddNumbers");
        List<Integer> ci = Arrays.asList(1, 2, 3, 4, 5, 6);
        long expected = 3L;
        long actual = Counter.countIf(ci, Behaviours::checkOddNumber);
        Assert.assertEquals(expected, actual);
    }

    /**
     * Counting prime numbers.
     */
    @Test
    public void testCountingPrimeNumbers() {
        System.out.println("testCountingPrimeNumbers");
        List<Integer> ci = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9);
        long expected = 4L;
        long actual = Counter.countIf(ci, Behaviours::checkPrimeNumber);
        Assert.assertEquals(expected, actual);
    }

    /**
     * Counting palindromes.
     */
    @Test
    public void testCountingPalindromes() {
        System.out.println("testCountingPalindromes");
        List<String> cs = Arrays.asList("madam", "test", "tacocat", "hello");
        long expected = 2L;
        long actual = Counter.countIf(cs, Behaviours::checkPalindrome);
        Assert.assertEquals(expected, actual);
    }
}
```

Output

```
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.mycompany.demo.assignment001.question001.CounterTest
testCountingPalindromes
testCountingPrimeNumbers
testCountingOddNumbers
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.088 sec
```

Does my implementation take care of all the scenarios, or am I missing something? Can any other Java 8 concept be used here? Please also review overall correctness. Answer: The code looks very nice! However, here are my small complaints:

Tests

The tests are good, but you're testing the Behaviours implicitly when you test the Counter. The Counter itself does only the following things: stream, filter with the given predicate, and count. The static methods in Behaviours should be tested separately, as should the logic of the Counter. The problem is, if a test fails, you don't know whether the problem is in the Counter or in the Behaviours. The more "implicit" things you test, the harder it is to find the bug. You're also testing with too many values. The logic itself should be the same with two values (one a palindrome, one not) as with four values (two palindromes, two not). If the behaviour for two or four values is different, then those are two different test cases. The lists to count/filter could have better names. To make a test case more readable, it's usually split into the three blocks given/when/then, separated by an empty line: set up the test data / perform the action / assert. Also, I recommend (but that's personal preference) statically importing static methods, at least the assertion methods, so it looks a bit cleaner.

What's missing (at least what I couldn't find): You're missing the test for even numbers. You're missing the test for the 0/1 inputs in testCountingPrimeNumbers. What happens for an empty String (or a string of length 1 or 2) in the palindrome count?

Other

check-prefix: The convention is to use an is prefix if a method returns a boolean; it's clearer than "check". test-prefix in test methods: That was the convention for JUnit 3 (I think), when annotations weren't a thing. It's not needed anymore when you use the @Test annotation, so instead of testCountingPrimeNumbers, you can go with countPrimeNumbers. The JavaDocs for the test methods are obsolete; the method names are clear. I'd get rid of the System.outs, too; you get enough information in the JUnit test report. Instead of naming stuff actual, I'd give it a proper name, e.g. amountOfPalindromes. And maybe not declare the expected variable. Hope this helps...
{ "domain": "codereview.stackexchange", "id": 25723, "tags": "java, generics" }
Clarification on infinite mass/momentum argument
Question: While reasoning about why a particle cannot be accelerated to light speed $c$, it is argued that the mass/momentum approaches infinity as the speed approaches $c$. I think it is per GR. I am sure this also fits the mathematics, otherwise people would not be making this argument. I may be wrong, and please feel free to correct me if you think so. But I do not think that is the case - i.e. mass/momentum does not approach infinity. My simple argument is: if the mass/momentum of a moving particle approaches infinity as it moves at speeds close to $c$, then it would be almost impossible to stop that particle. In other words, it should be equally difficult/impossible to slow it down. We all know that although it is not possible to accelerate the particle further, it is no big deal to slow it down. Slowing down an infinite mass/momentum would not be that easy. The infinite-mass reasoning must apply both ways - in speeding up as well as in slowing down. Has it been experimentally shown that it also applies to slowing down at limits close to $c$? Therefore, I can argue that mass/momentum does not approach infinity; it is the forces that are rendered ineffective at such speeds, because the force itself propagates at $c$ and cannot accelerate anything as fast as itself, or faster. Force is rendered ineffective only in the direction of motion (acceleration), not in the opposite direction (slowing down). An analogy for how a force may become ineffective: in a way, we cannot accelerate a car that is already going at 300 miles/hr by pushing with our hands, because humans cannot move their hands that fast. But we can accelerate a car going at 5 miles an hour. As the speed gets closer and closer to that of the force, $c$, the force cannot push it any more - the same way we cannot move our hand faster than 300 miles/hr and cannot accelerate that car by pushing on it. But slowing down would be effective - dangerous and fatal, though.
Please correct me if I am missing something, instead of blankly downvoting. Considering the formula given by John Rennie in his answer - =========================================================== The momentum of an object of mass $m$ moving at velocity $v$ is: $$ p = \gamma m v = \frac{mv}{\sqrt{1 - \frac{v^2}{c^2}}} $$ which goes to infinity as $v \to c$. In the limit of $v \ll c$ the Lorentz factor $\gamma \approx 1$ and we recover the Newtonian approximation. =========================================================== The same math can be applied to the effectiveness of the force. The only thing is that $v$ is the velocity component (only positive) in the direction of the force. So, for slowing down, it will be 0, i.e. $\gamma \approx 1$. The effective force $F_1$ when the particle is moving at velocity $v$ and a force $F$ is applied: $$ F = \gamma F_1 = \frac{F_1}{\sqrt{1 - \frac{v^2}{c^2}}} $$ This way, the math does not change either. So at limits close to $c$, the force must be fully effective in slowing down and pretty much ineffective in accelerating. I am proposing the experiment below to prove/disprove the concept. If someone is aware of such an experiment being done, please share the results. Accelerate a particle to roughly the highest speed that the accelerator can achieve. Once this speed is achieved, continue to apply the force for another 1 minute. The particle should gain negligible speed during this 1 minute, but should gain a lot of momentum (per the momentum formula). Now stop the accelerating force and start an equal slowing force, i.e. reverse the force. Per the current (infinite mass/momentum) explanation, 1 minute of slowing should reduce the speed by a negligible amount - the same speed that was gained during the last 1 minute of acceleration. Because force is the rate of change of momentum, the same force in both directions should cause the same change of momentum/speed during the same amount of time. But per my explanation, a lot more slowing down will take place during the 1 minute, because $\gamma$ becomes 1 for slowing down. I think evidence and results of such an experiment, if it has been done, can answer this question definitively. But other equivalent answers would help too - like evidence of the 7 TeV energy of protons being physically measured rather than just being calculated via the momentum formula. Answer: The question is founded on an incorrect assumption. The math absolutely is symmetric between acceleration and deceleration (because velocity enters into the Lorentz factor squared), and we have machines that take advantage of this fact. Energy recovery linacs work in exactly the manner linacs usually work, only the field timing is maintained 180 degrees out of phase from the acceleration mode. This means that instead of the particle gaining energy at the expense of the field, the field gains energy at the expense of the particle. The forces are the same as in the accelerating case, only opposed to the direction of motion, and the particle exhibits the same magnitude of coordinate acceleration (i.e. very little, because it is highly relativistic) in the lab frame, only slowing rather than speeding up.
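The symmetry the answer relies on is easy to check numerically: $v$ enters the Lorentz factor only squared, so $\gamma$ is identical for $+v$ and $-v$, and a given force magnitude produces the same momentum change whether it speeds the particle up or slows it down. A small sketch (illustrative numbers only):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2); even in v, hence symmetric under reversal."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def momentum(m, v):
    """Relativistic momentum p = gamma * m * v (signed)."""
    return lorentz_factor(v) * m * v

v = 0.99 * C
# gamma is the same whether the particle is being sped up or slowed down:
assert lorentz_factor(v) == lorentz_factor(-v)
print(lorentz_factor(v))  # gamma at 0.99c
```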
{ "domain": "physics.stackexchange", "id": 37237, "tags": "special-relativity, mass, speed-of-light, momentum" }
What is the full name of TAPD? Structure provided
Question: I have a substituent called TAPD on a molecule I'm working with, and I have a hard time locating information about its properties. The bold square in the picture below is supposed to be the rest of the system to which the TAPD is attached. I am a physicist and thus not very good at chemical naming, so I could use some help. Answer: TAPD = Tetraalkyl-p-phenylenediamine. This is because you have a phenyl (benzenoid) ring with two amino groups (diamine) opposite (para) to each other, and each of those amino groups has two alkyl substituents, giving four (tetra) in total.
{ "domain": "chemistry.stackexchange", "id": 3245, "tags": "nomenclature" }
Best practice to write a ROS service for a serial-communication class with many options
Question: I have been asked to write code to implement serial communications with a camera, in order to control its pedestal (movable base) as well as to set a few dozen other camera options. The catch is that I have to make it usable from ROS. What would be the best practice for implementing this functionality in ROS? I understand the concept of services, but I think that there should be a better way than creating a different service/file for each option. Thanks, Daniel. Answer: Why not write a subscription-based node? It could take messages from whoever, then pass the information on to the camera. From what you've said, it seems like a service wouldn't be necessary. Response to a follow-up: Yes. Create a new message that has all the necessary fields and a new topic, maybe /camera_parameters.
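For concreteness, the single message the answer suggests might look something like this (every field name here is purely illustrative; the answer only says the message should carry all the necessary fields, published on a topic such as /camera_parameters):

```
# CameraParameters.msg (hypothetical sketch)
float32 pan          # pedestal pan, degrees
float32 tilt         # pedestal tilt, degrees
int32   exposure     # one of the "few dozen other options"
int32   gain
bool    auto_focus
```

A single subscriber node can then translate whichever fields changed into the camera's serial protocol, instead of exposing one service per option.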
{ "domain": "robotics.stackexchange", "id": 607, "tags": "ros, cameras, serial" }
Showing NP-completeness of a graph problem with vertex capacities
Question: The problem: Given an undirected graph $G = (V, E)$, a source vertex $s$, and each vertex having a "capacity" between 0 and $|V|$, is there a tree which covers all vertices and does not extend from a vertex more times than its capacity allows, while also not covering every edge? Demonstrate that it is NP-complete. An example graph with a source can be seen below, along with an example solution in which no vertex has more edges extending from it than its capacity allows. While showing it is in NP is trivial, performing a reduction from a known NP-complete problem to show this problem is NP-hard is where I am having some difficulty. I suspect that reducing from a problem like vertex cover might prove easiest, but I am quite stuck here. Answer: You can transform each instance of the Hamiltonian cycle problem (finding a cycle that visits each vertex exactly once) into an instance of your problem. Given a Hamiltonian cycle problem with a graph $G=(V,E)$, you can apply the reduction described here to turn the graph $G$ into a graph $f(G)$ for the Hamiltonian path problem, which is the same as your problem if you restrict the capacity of each vertex to 1. This reduction also gives you a source. Since this maps the set of instances of the Hamiltonian cycle problem to a subset of the instances of your problem, solving the latter in polynomial time would solve the Hamiltonian cycle problem. This proves your problem is $\mathsf{NP}$-hard.
{ "domain": "cs.stackexchange", "id": 20698, "tags": "graphs, np-complete, reductions, np" }
Velocity and acceleration (as vectors) in a straight line
Question: A student is trying to determine the acceleration of a feather as she drops it to the ground. If the student is looking to achieve a positive velocity and positive acceleration, what is the most sensible way to set up her coordinate system? A) Her hand should be a coordinate of zero and the upward direction should be considered positive. B) Her hand should be a coordinate of zero and the downward direction should be considered positive. C) The floor should be a coordinate of zero and the upward direction should be considered positive. D) The floor should be a coordinate of zero and the downward direction should be considered positive. I think, if her hand was the origin, downward should be positive in order for the velocity to be positive, but could anybody describe the acceleration in that instance? And if the floor was the zero point, what would happen? Answer: The velocity is downward, and the acceleration is downward. Whatever direction you choose, if you start with a velocity of zero, the sign of both will be the same (if you throw the feather down, it will decelerate - so the acceleration will be "up". I don't think that is intended here). Whether the floor or the hand is zero in the coordinate system doesn't change whether velocity and acceleration have a positive sign - only the sense of direction (up or down) can change that. However, what is "sensible" as far as the origin of the coordinate system goes is a matter of opinion. Choosing the hand has the advantage that you start with a velocity of zero at position zero; choosing the floor makes for a more robust coordinate system (less likely to vary from one experiment to the next).
{ "domain": "physics.stackexchange", "id": 26435, "tags": "homework-and-exercises, acceleration, velocity, vectors, coordinate-systems" }
Bisimulations: Proof that the following LTS are not bisimilar
Question: I have two LTSs (labelled transition systems), as seen in the following picture: And the book is telling me that between those two LTSs, the states $1$ and $1'$ are non-bisimilar. So I tried to get a bisimulation starting from the pair $\{\{1,1'\}\}$ by continuously extending it whenever I found a conflict, ending up with: $$\{ \{1,1'\} ,\{2,2'\},\{3,3'\},\{2,4'\},\{4,3'\},\{3,5'\},\{4,5'\} \}$$ Finally, to check whether it truly was a bisimulation, I checked each node of each pair, and asserted that all possible pairs of derivatives were part of the set. I am concluding that they are a bisimulation - is the crux in the $a$'s and $b$'s? (The book didn't really explain what they mean.) Answer: The left diagram has a $b$ where the right diagram has a $c$. Thus, the pair $(2,4')$ does not satisfy the conditions required of a bisimulation. In particular, the book is correct that those two systems are not bisimilar.
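The check the asker describes by hand can be automated. Here is a small Python sketch of a bisimulation test, run on a toy pair of LTSs (deliberately not the ones from the book's figure, which is not reproduced here) in which the left system offers a b-transition where the right system offers a c-transition:

```python
def is_bisimulation(R, trans1, trans2):
    """Check whether relation R is a (strong) bisimulation between two LTSs.

    trans1/trans2 map a state to a set of (label, successor) pairs.
    For every (p, q) in R, each move of p must be matched by a move of q
    with the same label landing in a related pair, and vice versa.
    """
    for p, q in R:
        for a, p2 in trans1.get(p, set()):
            if not any((p2, q2) in R for b, q2 in trans2.get(q, set()) if b == a):
                return False
        for a, q2 in trans2.get(q, set()):
            if not any((p2, q2) in R for b, p2 in trans1.get(p, set()) if b == a):
                return False
    return True

# Toy pair of LTSs (NOT the book's figure): the left one can do a then b,
# the right one can do a then c, so relating their initial states fails.
left  = {1: {("a", 2)}, 2: {("b", 3)}}
right = {10: {("a", 20)}, 20: {("c", 30)}}
print(is_bisimulation({(1, 10), (2, 20), (3, 30)}, left, right))  # False: b vs c mismatch
```

This mirrors the answer's point: a candidate relation fails as soon as one related pair cannot match labels, exactly as $(2,4')$ fails in the book's systems.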
{ "domain": "cs.stackexchange", "id": 12135, "tags": "coinduction" }
From where does irreversibility arise in the Navier-Stokes momentum equation?
Question: A form of the Navier-Stokes momentum equation can be written as: $$ \rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = - \nabla \bar{p} + \mu \, \nabla^2 \mathbf u + \tfrac13 \mu \, \nabla (\nabla\cdot\mathbf{u}) + \rho\mathbf{g}$$ This question feels quite basic, but where can irreversibility arise in this equation? For example, in this video exhibiting the reversibility of Taylor-Couette flow, I believe $|\rho \left(\mathbf{u} \cdot \nabla \mathbf{u} \right)|/|\mu \nabla^2 \mathbf u| = {\rm Re} \ll 1$, since it is in a regime of laminar flow (i.e. low Reynolds number). But why explicitly is Taylor-Couette flow reversible, while the stirring of coffee is irreversible, based upon the mathematical terms present in the momentum equation? Is it caused by the nonlinear term, due to a possible interaction between various scales in the system that makes the fluid hard to "unmix", or is it somehow due to the dynamic viscosity, $\mu$, being strictly positive? Or can irreversibility arise from other origins, such as the initial/boundary conditions or the fluid closure applied? Mathematical insights and physical intuition would be greatly appreciated. Answer: Which "reversibility" are you referring to? If you mean "thermodynamically reversible" (a flow which does not generate entropy), then viscous dissipation ($\mu\nabla^2\mathbf{u}$) always ensures irreversibility, whatever the Reynolds number. But perhaps you are referring to "kinematic reversibility", which implies reversal of the flow in every detail upon reversal of the external forces acting on it - that is, upon reversal of external forces, every fluid particle would retrace its trajectory backwards. Low Reynolds number flows, called "creeping flows", indeed display kinematic reversibility as the Reynolds number ($Re$) $\to 0$ (see G.I. Taylor's demo). Here, the non-linear advection term ($\mathbf{u}\cdot\nabla\mathbf{u}$) is negligible (because $Re\ll 1$).
To see why the advection term being negligible results in kinematic reversibility of the flow, consider the opposite extreme of a turbulent flow in which the advection term is not negligible. As a specific example, consider turbulent mixing between two initially separate fluids. You could imagine it to be a lab experiment or a numerical simulation - we shall imagine the latter. For simplicity, we imagine that the two fluids are identical but the two portions are given different colours. The turbulent mixing flow - in fact turbulence in general - will be chaotic. After mixing has progressed for some time, we stop the simulation and reverse the velocity field everywhere. Will the two fluids now "un-mix" and separate from each other? They will not, because dominance of the non-linear advection term makes the flow depend sensitively on the initial conditions; this sensitivity is the reason why turbulent flows cannot be reproduced in exact detail; and since we can only specify the initial conditions for the reversed flow with finite precision, the flow will not exactly reverse itself and we will still have mixing as before (in other words, no fluid particle will exactly retrace its previous trajectory). When the non-linear advection term is negligible, we have a highly ordered creeping flow which can be reversed, because it is more tolerant of small errors in the specification of the initial conditions. To summarize: although the Navier-Stokes equation governs all flows, the degree of non-linearity as measured by $Re$ is not the same for all flows; thus flows in the extreme limits of $Re$ can exhibit qualitatively different behaviour.
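The "finite precision of the reversed initial conditions" point can be illustrated with a toy invertible chaotic map (an analogy for the advection-dominated case only, not a fluid simulation): with exact arithmetic the reversal retraces the trajectory perfectly, but a tiny error introduced at the turnaround is amplified exponentially on the way back.

```python
from fractions import Fraction as F

def cat(p):
    """Arnold's cat map, an invertible chaotic map on the unit torus."""
    x, y = p
    return ((2 * x + y) % 1, (x + y) % 1)

def cat_inv(p):
    """Exact inverse of the cat map."""
    x, y = p
    return ((x - y) % 1, (-x + 2 * y) % 1)

start = (F(1, 3), F(2, 7))
state = start
for _ in range(30):
    state = cat(state)

# Exact reversal (infinite precision, zero error) retraces perfectly...
back = state
for _ in range(30):
    back = cat_inv(back)
assert back == start

# ...but a tiny error at the turnaround point is amplified exponentially,
# so the "reversed" trajectory does not return to the starting point.
perturbed = (state[0] + F(1, 10**6), state[1])
for _ in range(30):
    perturbed = cat_inv(perturbed)
print(perturbed == start)  # False
```

Exact rationals (`Fraction`) stand in for "infinite precision"; the perturbation stands in for the unavoidable finite precision of any real reversed experiment.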
{ "domain": "physics.stackexchange", "id": 84950, "tags": "fluid-dynamics, entropy, reversibility, turbulence, navier-stokes" }
Machine learning methods for panel (longitudinal) data
Question: I have a panel data set, for example:

obj  time   Y     x1     x2
1    1     0.1    1.28   0.02
2    1     0.11   1.27   0.01
1    2    -0.4    1.05  -0.06
2    2    -0.3    1.11  -0.02
1    3    -0.5    1.22  -0.06
2    3     1.2    1.06   0.11

I'm new at ML, and until recently I did not know that this is a special (panel) data type. I predicted the value of a variable $Y(t+1)$ from the values $x_1(t)$ and $x_2(t)$ (a time lag) using a linear regression model and an MLP. But now I have read some information about panel data analysis and realized that the methods I used were not suitable. At the moment I have found that fixed/random effects models are suitable for panel data analysis. So, I have several questions: What other methods are correctly used to analyze panel data (I'm interested in neural network models)? I read that these methods must take into account the dependency between an object's values and the previously occurring values of that same object (which is what fixed- and random-effects models do). I also tried to use an MLP by feeding 2D data to it: I divided the panel data into k = (number of time quanta) 2D blocks and passed this data to the MLP input. For the example above, k = 3 (input layer size = 4 = number of predictors * number of objects per block). In this case, batch size = 1. If I instead make the batch size 2 and feed the neural network with 1D data (input layer size = 2 for the example above), will there be any difference? In both cases, the weights of the neural network will be updated after the observations on all objects for one quantum of time have been passed in. Answer: Panel data = multi-object time series. In other words, you have a time-series problem (time) for different objects (obj) whose target (Y) you are trying to predict. If I were you, I would just dissect this problem and start thinking in terms of time series plus another discriminative column called obj. What approaches do you know there? Here is a really cool and modern time-series tutorial; check it out. Regarding NNs, why are you trying to squeeze one in so hard? Let the data tell you what algorithm can model it. Personally, given these 3 features, an NN is just too much, and you can achieve similar results with less complexity using simpler/less expensive approaches.
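On the practical side of "time series + an obj column", lagged predictors should be built within each object, so that one object's history never leaks into another's. A minimal pandas sketch using the example table:

```python
import pandas as pd

df = pd.DataFrame({
    "obj":  [1, 2, 1, 2, 1, 2],
    "time": [1, 1, 2, 2, 3, 3],
    "Y":    [0.1, 0.11, -0.4, -0.3, -0.5, 1.2],
    "x1":   [1.28, 1.27, 1.05, 1.11, 1.22, 1.06],
    "x2":   [0.02, 0.01, -0.06, -0.02, -0.06, 0.11],
})

df = df.sort_values(["obj", "time"])
# Lag the predictors within each object, so x1(t-1) for obj 1 never
# comes from obj 2's history.
df["x1_lag"] = df.groupby("obj")["x1"].shift(1)
df["x2_lag"] = df.groupby("obj")["x2"].shift(1)
# Rows with no previous observation (t = 1) get NaN and are dropped.
train = df.dropna()
print(train[["obj", "time", "Y", "x1_lag", "x2_lag"]])
```

The resulting frame can feed a linear model, an MLP, or a fixed-effects model alike; the key point is only that the lags respect the obj grouping.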
{ "domain": "datascience.stackexchange", "id": 6744, "tags": "machine-learning, neural-network, data" }
If phosphorus trioxide is present in the atmosphere of Venus, could it not react with water to form (sufficient) phosphorous acid?
Question: Sufficient phosphorous acid to be decomposed into the measured quantity of phosphine? From "Phosphine gas in the cloud decks of Venus": We also rule out the formation of phosphorous acid ($\ce{H3PO3}$). While phosphorous acid can disproportionate to $\ce{PH3}$ on heating, its formation under Venus temperatures and pressures would require quite unrealistic conditions, such as an atmosphere composed almost entirely of hydrogen. But according to Vega mission results and the chemical composition of Venusian clouds, there is $\ce{P4O6}$ in the atmosphere, which according to Wikipedia would react with water to form phosphorous acid. However, there is supplementary information in the chapter "Equilibrium thermodynamics in the atmosphere and surface" at the end of the article "Phosphine gas in the cloud decks of Venus": As an example of our approach, we present a calculation for phosphorous acid ($\ce{H3PO3}$). This compound will spontaneously decompose on heating to form phosphoric acid and phosphine; this is a standard laboratory method for making phosphine. Phosphorous acid is not stable in the gas phase, but could in principle be formed in cloud droplets by reduction of phosphoric acid. Edit, for some additional information. Also from the supplementary information (page 13): Reactions of $\ce{P4O6}$, $\ce{P4O10}$, $\ce{H3PO4}$ and $\ce{H3PO3}$ were considered (the last of these only in the solution phase in the clouds),... So the possible occurrence of $\ce{H3PO3}$ below the clouds was not considered. "The Recent Evolution of Climate on Venus" (page 23) states: The evaporation of $\ce{H2SO4}$ occurs at about 48 km, the average cloud base. The vapor phase continues to exist down to 432 K (38 km), where it is thermally decomposed. So below 38 km, and above 160 °C, there would be water in the gas phase, free from sulfuric acid, that could react with $\ce{P4O6}$. Could that possibility have been ruled out for some reason?
Answer: The issue seems to be not the reaction between $\ce{P4O6}$ and water, but whether $\ce{P(III)}$ can form at all. The OP's first reference says no whereas the second reference says yes. Basically the two references used different thermodynamic assumptions. The first one assumes an atmosphere in equilibrium while the second assumes only a partial equilibrium; in the latter case $\ce{CO}$ is allowed to form from nonequilibrium (radiation-driven) dissociation of $\ce{CO2}$, and minor species are then equilibrated with the $\ce{CO/CO2}$ couple. In the case of phosphorous oxides this favors $\ce{P4O6}$ over $\ce{P4O_{10}}$, although the former could be oxidized by other species such as sulfuric acid. The issue has become important because the recent discovery of $\ce{PH3}$ among the clouds of Venus has led to a hypothesis that this compound is generated by Venusian microbes, but if $\ce{P(III)}$ is available for a disproportionation reaction it could provide an alternative abiotic (non-life) source. As noted above, the references disagree on this, and the question of whether this alternative to a biological source of $\ce{PH3}$ is viable awaits resolution. Relevant to the question of whether $\ce{PH3}$ is a true biosignature is this question about organic compounds on Venus, which apparently spawned the current one.
{ "domain": "chemistry.stackexchange", "id": 14530, "tags": "inorganic-chemistry, thermodynamics, atmospheric-chemistry" }
Determining plane wave polarization given the magnetic-field vector phasor
Question: Given the following magnetic-field vector phasor: $$\vec H(\vec r)=\left[\hat x - j\hat y\right]H_o e^{jkz}$$ I need to find the associated E-field vector phasor so that I can determine the polarization, i.e., linear, circular, or elliptical, and whether it is left-handed (LH) or right-handed (RH). I've devised and utilized the following classification system on past problems with much success. Here are the steps I use: Put the E-field vector phasor in the form $\vec E(\vec r) = \left[\hat x + Ae^{j\phi}\hat y\right]E_oe^{-jkx}$.

Case I) $A=0,\phi\in\mathbb{R}\rightarrow$ Linearly Polarized.
Case II) $A\in\mathbb{R},\phi\in\pi\mathbb{Z}\rightarrow$ Linearly Polarized.
Case III) $A\in\left\{(-1,0)\cup(0,1) \right\},\phi\in\mathbb{R}-\pi\mathbb{Z},A\phi<0\rightarrow$ RH Elliptically Polarized.
Case IV) $A\in\left\{(-1,0)\cup(0,1) \right\},\phi\in\mathbb{R}-\pi\mathbb{Z},A\phi>0\rightarrow$ LH Elliptically Polarized.
Case V) $A\in\mathbb{R}-(-1,1),\phi\in\mathbb{R}-\left\{\frac{\pi}{2}\mathbb{Z}_O\right\},A\phi<0\rightarrow$ RH Elliptically Polarized.
Case VI) $A\in\mathbb{R}-(-1,1),\phi\in\mathbb{R}-\left\{\frac{\pi}{2}\mathbb{Z}_O\right\},A\phi>0\rightarrow$ LH Elliptically Polarized.
Case VII) $A\in\mathbb{R}^+-(-1,1),\phi\in\frac{\pi}{2}\left(4\mathbb{Z}+3\right)\rightarrow$ RH Circularly Polarized.
Case VIII) $A\in\mathbb{R}^--(-1,1),\phi\in\frac{\pi}{2}\left(4\mathbb{Z}+3\right)\rightarrow$ LH Circularly Polarized.
Case IX) $A\in\mathbb{R}^--(-1,1),\phi\in\frac{\pi}{2}\left(4\mathbb{Z}+1\right)\rightarrow$ RH Circularly Polarized.
Case X) $A\in\mathbb{R}^+-(-1,1),\phi\in\frac{\pi}{2}\left(4\mathbb{Z}+1\right)\rightarrow$ LH Circularly Polarized.

So I know the solution goes like this...
$$\vec H(\vec r)=\left[\hat x - j\hat y\right]H_o e^{jkz} \rightarrow\vec E(\vec r) = \frac{\nabla\times\vec H(\vec r)}{j\omega\epsilon_o}\implies\vec E(\vec r)=\left[\hat y + e^{j\frac{\pi}{2}}\hat x\right]\eta_oH_oe^{jkz}\rightarrow \style{font-family:inherit;}{\text{LH Circularly Polarized.}}$$ But the first step implies that $\nabla\times\vec H(\vec r)=j\omega\epsilon_o\vec E(\vec r)$ when it should be $\nabla\times\vec H(\vec r)=\vec J(\vec r) + j\omega\epsilon_o\vec E(\vec r)$ for the vector phasors of time-harmonic fields. Why is $\vec J(\vec r)$ assumed to be zero? Answer: I suspect you were either told that the wave was in a vacuum or another non-conducting medium, in which case it can be assumed that $\vec{J}=0$. If not, then I think you can still assume $\vec{J}=0$ if you are told $k$ is a real number, because solutions to Maxwell's equations in conductors are of the form $$ \vec{H}(\vec{r}) = H_0 e^{jkz} = H_0 e^{-k'z}e^{jk''z}\ ,$$ where $k = k'' + jk'$. i.e. They have a dissipative factor which $\rightarrow 1$ when $k$ is real. So, if $k$ is complex then you cannot assume $\vec{J}=0$. Or to put it another way, if $\vec{J} \neq 0$ then $k$ has an imaginary component.
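As a numerical cross-check of the curl step (a sketch added here, not part of the original answer; the values of $k$, $\omega$ and $\epsilon$ are arbitrary illustrative numbers, since only their ratio $\eta = k/\omega\epsilon$ matters), one can evaluate $\nabla\times\vec H$ by central differences and confirm that $\vec E \propto \left[\hat y + e^{j\pi/2}\hat x\right]e^{jkz}$:

```python
import cmath

# Illustrative (non-physical) parameter values; only the ratio k/(w*eps) matters
k, w, eps = 2.0, 3.0, 1.5
H0 = 1.0

def H(x, y, z):
    """Magnetic-field phasor H = (x_hat - j y_hat) H0 e^{jkz}."""
    ph = cmath.exp(1j * k * z)
    return [H0 * ph, -1j * H0 * ph, 0j]

def curl(F, x, y, z, h=1e-6):
    """Central-difference curl of a complex vector field at (x, y, z)."""
    def d(comp, axis):
        p_plus, p_minus = [x, y, z], [x, y, z]
        p_plus[axis] += h
        p_minus[axis] -= h
        return (F(*p_plus)[comp] - F(*p_minus)[comp]) / (2 * h)
    return [d(2, 1) - d(1, 2),   # (curl F)_x = dFz/dy - dFy/dz
            d(0, 2) - d(2, 0),   # (curl F)_y = dFx/dz - dFz/dx
            d(1, 0) - d(0, 1)]   # (curl F)_z = dFy/dx - dFx/dy

# E = (curl H) / (j w eps), assuming a source-free region (J = 0)
pt = (0.1, -0.2, 0.3)
E = [c / (1j * w * eps) for c in curl(H, *pt)]

# Expected from the answer: E = eta * H0 * (j x_hat + y_hat) e^{jkz}
eta = k / (w * eps)
ph = cmath.exp(1j * k * pt[2])
E_expected = [1j * eta * H0 * ph, eta * H0 * ph, 0j]
```

The two transverse components of $\vec E$ come out equal in magnitude and 90° out of phase, which is exactly the circular-polarization signature the classification system tests for.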
{ "domain": "physics.stackexchange", "id": 71036, "tags": "electromagnetism, magnetic-fields, electric-fields, polarization, plane-wave" }
How to count bits in cache (direct & 4-way)
Question: Let's say, I have a cache with: 2^32 bytes of memory 2048 blocks (of 16 bytes each) Now I'm trying to figure out how much bits each field will contain. Direct mapped: One block is 16 bytes (16 * 8 = 128 bits). The block also contains 1 dirty bit and 1 valid bit. I know that since there are 2048 (=2^11) blocks, and the whole block contains 16 bytes (=2^4) the tag-field will be 4 + 11 + 1 (valid) + 1 (dirty) = 17 bits. So the whole size of 1 block will be 17 + 128 bits. Is this right? And if so, where do I need the given 2^32 bytes of memory, and what is the difference with the direct-mapped cache in comparison to counting the 4-way-set-associative cache fields? I hope someone can help me, thanks in advance! Answer: Maybe it is easier to look at addresses : address range : A[31:0] block address : A[3:0] cache line index : A[14:4] : 2048 = 11bits Each tag contains : addresses A[31:15] : 17bits valid bit dirty bit, if it is a write-back cache So 18bits (write-thu) or 19bits (write-back) per tag. 128+18 per block (or 128+19), 2048*(128+18) bits for the whole cache. (excluding EDC, sub-blocking...) The tags contains the part of the address bits not indexed by the cache (not the contrary). Way associative caches need replicating both tags and data. Additional bits are also needed in that case for the replacement information (LRU, PLRU...) : Selecting which cache way should be replaced when a cache miss occurs.
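The field widths above can be reproduced mechanically, and the same arithmetic answers the 4-way part of the question (a sketch added here; the 32-bit address space, 2048 blocks and 16-byte blocks are the question's parameters):

```python
from math import log2

def cache_fields(addr_bits, total_blocks, block_bytes, ways):
    """Split a physical address into (tag, index, offset) bit widths."""
    offset_bits = int(log2(block_bytes))   # selects the byte within a block
    sets = total_blocks // ways            # ways blocks share one set
    index_bits = int(log2(sets))           # selects the set
    tag_bits = addr_bits - index_bits - offset_bits
    return tag_bits, index_bits, offset_bits

# Direct mapped: 2048 sets of 1 block each
direct = cache_fields(32, 2048, 16, ways=1)     # (17, 11, 4)

# 4-way set associative: 512 sets of 4 blocks each
four_way = cache_fields(32, 2048, 16, ways=4)   # (19, 9, 4)
```

Going 4-way shrinks the index from 11 to 9 bits and grows each tag from 17 to 19 bits; the valid bit (and dirty bit for write-back) are added on top of the tag, just as in the direct-mapped count above.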
{ "domain": "cs.stackexchange", "id": 3906, "tags": "computer-architecture, cpu-cache" }
What is the definition of the Rayleigh-Jeans tail?
Question: I've read some papers using the term "Rayleigh-Jeans tail" but cannot find a general definition. I would infer from context that it refers to the blackbody emission spectrum in the range of wavelengths that are long enough that the emission can be approximated by the Rayleigh-Jeans law. Is this correct? Example references: "We show that, despite stringent constraints on the shape of the main part of the cosmic microwave background (CMB) spectrum, there is considerable room for its modification within its Rayleigh-Jeans (RJ) end, ω ≪ TCMB.". PHYSICAL REVIEW LETTERS 121, 031103 (2018). "However, in the FUV band, the Rayleigh-Jeans (RJ) tail of the ∼ 10e5 K surface emission may be dominant and detectable by the HST." https://arxiv.org/abs/1901.07998#:~:text=Assuming%20a%20blackbody%20spectrum%2C%20we,models%20of%20old%20neutron%20stars. Answer: As far as I know, the two ends of the black-body radiation curve is historically described using the Rayleigh-Jeans law and the Wien law, as seen on Wikipedia for example The Rayleigh-Jeans law is the low frequency limit of the full curve, where the spectral radiance is inversely proportional to the wavelength to the fourth power. I am not entirely sure that this is the "tail" that you are referencing to, but I've personally encountered the term Rayleigh-Jeans tail in the context of the low frequency end of the radiation curve (in general, not only for black-body radiation).
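The "low frequency limit" statement is easy to check numerically (a sketch added here, not part of the original answer; T = 2.725 K is chosen as an example because the first quoted paper concerns the CMB):

```python
from math import exp

h = 6.62607015e-34   # Planck constant, J s
kB = 1.380649e-23    # Boltzmann constant, J/K
c = 2.99792458e8     # speed of light, m/s

def planck(nu, T):
    """Planck spectral radiance B_nu in W sr^-1 m^-2 Hz^-1."""
    return (2 * h * nu**3 / c**2) / (exp(h * nu / (kB * T)) - 1)

def rayleigh_jeans(nu, T):
    """Rayleigh-Jeans approximation, valid for h*nu << kB*T."""
    return 2 * nu**2 * kB * T / c**2

T = 2.725  # example: CMB temperature, K
low = planck(1e8, T) / rayleigh_jeans(1e8, T)     # deep in the RJ tail: close to 1
high = planck(1e12, T) / rayleigh_jeans(1e12, T)  # far above: RJ badly overestimates
```

In wavelength form the same limit gives $B_\lambda \approx 2ckT/\lambda^4$, the inverse-fourth-power dependence mentioned above; "Rayleigh-Jeans tail" refers to the spectral region where this approximation holds.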
{ "domain": "physics.stackexchange", "id": 90006, "tags": "thermal-radiation" }
Using method of images on ungrounded spheres
Question: I have recently learnt how to calculate the induced charges on a grounded sphere using the method of images. However, in all of the examples I did, the conductor itself is grounded, so I am wondering what would happen if the conductor is ungrounded. Since the electric potential on the surface is still constant, would the final result be different? Is it only a matter of "reference levels" ($V=0$) for the ungrounded case? Answer: The method of images for a grounded sphere of radius $R$ centred at $z=0$ and a charge $q$ at $z=a$ is given by the image charge $$q'=-\left(\frac{R}{a}\right)q$$ at $z=R^2/a$. This setup ensures that the surface of the sphere remains equipotential. What we can do here for an ungrounded sphere is exploit the principle of superposition: consider a superposition of (1) a charge $q$ and a grounded sphere and (2) a sphere at potential $V_0$. Setup (1) is solved by $q'$, while setup (2) is achieved by a charge $q''$ placed at the centre of the sphere $z=0$, such that $$ V_0 = \frac{1}{4\pi\epsilon_0}\frac{q''}{R},\quad q'+q''=0. $$ The second equality is true because the sphere is neutral, so $$ q'' = -q' = \left(\frac{R}{a}\right)q . $$ You could refer to Griffiths' Introduction to Electrodynamics chapter 3 for similar introductory problems.
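The superposition argument can be verified numerically: with the image charge at $z=R^2/a$ and the compensating charge $q''=(R/a)q$ at the centre, the potential is constant over the whole surface and equal to $q/(4\pi\epsilon_0 a)$ (a sketch added here, in units where $4\pi\epsilon_0 = 1$; R = 1, a = 3 and q = 1 are arbitrary example values):

```python
from math import sin, cos, pi, sqrt

R, a, q = 1.0, 3.0, 1.0          # sphere radius, charge position, real charge
q_img = -(R / a) * q             # grounded-sphere image charge at z = R^2/a
q_centre = (R / a) * q           # compensating charge at the centre (neutral sphere)

def dist(p1, p2):
    return sqrt(sum((u - v) ** 2 for u, v in zip(p1, p2)))

def potential(p):
    """Total potential at point p, in units where 4*pi*eps0 = 1."""
    return (q / dist(p, (0.0, 0.0, a))
            + q_img / dist(p, (0.0, 0.0, R ** 2 / a))
            + q_centre / dist(p, (0.0, 0.0, 0.0)))

# sample the sphere surface along a meridian, theta from 0 to pi
surface = [potential((R * sin(t), 0.0, R * cos(t)))
           for t in [i * pi / 10 for i in range(11)]]
```

The grounded-sphere pair ($q$, $q'$) contributes zero everywhere on the surface, so the surface value is just that of $q''$ alone, $q''/R = q/a$ in these units.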
{ "domain": "physics.stackexchange", "id": 98576, "tags": "electrostatics, method-of-images" }
Method that reserves a reservable entity and charges the user
Question: This method works for what I need it to do, but I just don't think it's extremely readable, and could be abstracted into different methods. I have a feeling I'm going against some best practices. Please let me know your suggestions: def reserve return render json: { error: "This entity is not available to reserve" }, status: :forbidden if @reservable.status != Reservable::STATUS[:available] Reservable.transaction do @reservable.status = Reservable::STATUS[:reserved] @reservable.current_use = Use.create( user_id: @user.id, reservable_id: @reservable.id, start_location: @reservable.location, start_time: DateTime.now, status: Use::STATUS[:progress]) # Check payment payment_type = params[:payment_type] if payment_type === Transaction::METHODS[:subscription] # put subscription logic here else return render json: { error: @user.errors }, status: :payment_required, location: new_payment_path unless @user.validates_has_payment_and_good_standing if payment_type === Transaction::METHODS[:prepay] @reservable.current_ride.trans = Transaction.charge_user_for_use(@user, @reservable.current_use, payment_type) else # :per_minute # put pay_per_minute logic here end end @reservable.save! rescue ActiveRecord::RecordInvalid => exception render :json => { :error => exception.messages }, status: :unprocessable_entity raise ActiveRecord::Rollback #force a rollback end end Answer: Brandon Keepers has a really good tip in his talk Why our code smells. Write a top level description of your class without using the words "and" or "or". If you can't do that then there is a risk that your class might be doing too much. Since ActiveModel and AR and ActionController give models and controllers such superpowers it's easy to fall into the trap of making godlike objects. So when you have a method which has an AND in the description then you definitely have a code smell. 
The first major code smell here is @reservable.status != Reservable::STATUS[:available] this is just a leaky class with the internals hanging out like loose wires. Fortunately it's really easy to fix: class Reservable def available? status == Reservable::STATUS[:available] end end But an even better way would be to crack out the awesomeness of ActiveRecord::Enum, as it will take care of the wiring for you: class Reservable enum status: [:available, :reserved] end This gives use reservable.available?, .reserved?, .reserved! etc. One good pattern to refactor the above would be Service objects. Service objects are dirt simple plain old ruby objects that just do one job - tops. Most service objects just have a single public method - often #call So lets see if we can start splitting this up into distinct tasks. # Reserves a reservable class ReservationService def initialize(user:, reservable:) @user = user @reservable = reservable end def call unless @reservable.available? raise Reservable::UnavailableError.new(object: @reservable) return false end end end # Charges a user for services rendered class ReservationChargingService attr_reader :transaction def initialize(charge, user){ @charge = charge @user = user } def call(amount, payment_type) # @todo charge user for amount # @todo return true / false or a status code for the payment end end This gives us nice detached pieces which can be tested in isolation - which is really nice since testing methods with DB transactions can be problematic. Then in your controller you can boil things down a bit: def reserve Reservable.transaction do begin @reserved = ReservationService.new( user: @user, reservable: @reservable ).call @charged = ReservationChargingService.new( user: @user, reservable: @reservable ).call raise ActiveRecord::Rollback unless @reserved && @charged rescue Reservable::UnavailableError raise ActiveRecord::Rollback, 'Reservable not available.' 
end rescue ActiveRecord::RecordInvalid raise ActiveRecord::Rollback, 'Validation error' end # @todo rescue other errors? end if @reserved && @charged render json: @reservable.current_use else errors = [@charged.transaction, @reservable] .reject(&:valid?).map { |o| o.errors.full_messages } render json: { errors: errors }, status :unprocessable_entity end end As you can see here you want to delegate as much work to services as possible. Also prefer using ActiveModel::Errors instead of creating error hashes in your controller. Our controller only has to deal with two contingencies here two which is really good: either the complete transaction passes and we return a success object or we return a errors hash.
{ "domain": "codereview.stackexchange", "id": 14621, "tags": "ruby, ruby-on-rails, active-record" }
Meaning of a negative step response with quaternion
Question: It's not technically robotics but: I've been trying to reproduce in Simulink a spacecraft attitude simulation using quaternions, and the kinematics and dynamics seem to work fine, however I'm having a bit of trouble with the controller. I followed the model give in the 7th chapter which seems to be some sort of a PD controller. The control equation I used is: $q_e$ is the quaternion error, $\omega_e$ is the rotation speed error But my results seems to be off. With : Initial quaternion and rotation speed are $q_i = [0;0;0;1]$ and $ \omega_i = [0;0;0]$ I give a desired reference of $q = [0;1;0;1]$ and $ \omega = [0;0;0]$. I get the following response: $q(1)$ and $q(3)$ are staying at zero as expected. But : $q(2)$ is going towards -1 instead of 1 (As far as I understand the sign ambiguity does not explain this since q(4) is staying around 1) $q(4)$ is not maintaining at 1. (I am not sure if this is related to the fact that the controller is only a PD) I've tried to add -1 gains but it doesn't seem to solve the problem. Why would the step response of q(2) be going to -1 instead of 1 ? And why is q(4) decreasing ? For reference I've added the simulink model: And the "Error quaternion" block: Edit: (Response after Chuck's answer) Answer: Welcome to Robotics, PaoloH! This is a fantastic question for Robotics - It has some Matlab/Simulink, some control theory, some spatial (quaternion) representations, etc. Robotics is the place to come when your question spans multiple fields! In looking at your question, the thing that I noticed is that your reference quaternion is $[0; 1; 0; 1]$. It is not a unit quaternion, and I believe this may be your issue. I looked over your block diagram, and I didn't see anything glaringly wrong there. As SteveO mentioned, the way you're treating the references is a little unusual, but the math all works out. 
I can't see what you're doing behind the "reference quaternion" or "error quaternion" blocks, but let's take a look at that unit quaternion. Right now, the magnitude of your reference quaternion is $$ \sqrt{0^2 + 1^2 + 0^2 + 1^2} = \sqrt{2} \approx 1.414 \\ $$ If you want to convert your reference quaternion to a unit quaternion, then you divide each term of the quaternion by the magnitude of the quaternion, and you wind up with a reference unit quaternion of: $$ q_{\mbox{ref}} = \left[\begin{array}{ccc} 0 \\ \frac{1}{\sqrt{2}} \\ 0 \\ \frac{1}{\sqrt{2}} \\ \end{array}\right] \\ q_{\mbox{ref}} \approx \left[\begin{array}{ccc} 0 \\ 0.707 \\ 0 \\ 0.707 \\ \end{array}\right]; $$ You can review your quaternion output plot and see that q(2) and q(4) are both moving toward a numeric value of ~0.7. The only real problem seems to be that the polarity on q(2) is wrong. I would guess your "reference quaternion" block is to make the skew symmetric matrix for the "error quaternion" block? The sign problem on your quaternion output could be hiding anywhere, but I'd double check the skew symmetric matrix (that it is actually skew symmetric; $\Omega^T == -\Omega$), and then I'd check the gains. When I said the math all works out on the reference handling, I double checked that just now and it works out for the $\omega$ speed handling. I can't actually tell for the quaternion handling. Typically, error is (reference - feedback), then you apply gains and sum the scaled error terms. You have your error as (feedback - reference), and then you apply gains and negate the scaled error terms. BUT, it looks like, for quaternion error, you actually ARE taking (reference - feedback) but then you're still inverting it anyways. If this isn't enough to get your question resolved, please edit your question to show what's going on under the "Reference quaternion" and "error quaternion" blocks.
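Normalising the reference before it enters the controller is a one-line fix worth building into the reference-generation path (a sketch added here; the [0; 1; 0; 1] values are the reference from the question):

```python
from math import sqrt

def normalize_quaternion(q, eps=1e-12):
    """Return q scaled to unit norm; reject near-zero quaternions."""
    n = sqrt(sum(c * c for c in q))
    if n < eps:
        raise ValueError("cannot normalise a near-zero quaternion")
    return [c / n for c in q]

# The question's reference [0; 1; 0; 1] has magnitude sqrt(2) ~ 1.414;
# normalising gives the ~0.707 components the simulation converges towards.
q_ref = normalize_quaternion([0.0, 1.0, 0.0, 1.0])
```

Guarding against a near-zero input matters in practice because a raw sensor-fusion output can momentarily pass through an invalid state.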
{ "domain": "robotics.stackexchange", "id": 1744, "tags": "control, pid, matlab, simulation" }
What are the problems with the best approximation ratio achieved by an algorithm returning a uniformly random solution?
Question: What are the problems with the best known approximation ratio achieved by an algorithm returning a uniformly random solution? I know one such example for permutation flow shop problem $F|perm|C_{max}$: in the paper "Tight Bounds for Permutation Flow Shop Scheduling" Viswanath Nagarajan and Maxim Sviridenko proved that random sequence of jobs have guarantee $2\sqrt{min\{m,n\}}$ ($m$-number of machines and $n$ - number of jobs) which is the best known currently. Answer: For constraint satisfaction problems, the property of having no better approximation algorithm than random assignment is known as approximation resistance. This is has been studied by several works in the last few years, with some results based on $P\neq NP$ and other more general results based on the unique games conjecture. A good source for this is Per Austrin's thesis.
{ "domain": "cstheory.stackexchange", "id": 490, "tags": "ds.algorithms, reference-request, approximation-algorithms" }
How much does the communications frequency of JWST vary?
Question: This question (How many light seconds away is the JWST?), or rather one of its answers, got me thinking. Since the communications time is about 5±0.75 seconds, it varies by about 1.5 seconds per 3 months. At times it is moving further away from us, at other times it is moving closer to us. How much of an effect does that have on the carrier frequency? Was it enough to affect the design of the telescope? Answer: The Doppler shift caused by the speed of the JWST relative to the Earth is fairly small. Yes, it does need to be accounted for, but dealing with it is a routine matter in space communications, and such Doppler data is a very useful way of measuring the speed of a spacecraft. However, sometimes, blunders have occured, eg with the Cassini–Huygens mission. The equation for the relativistic longitudinal Doppler effect is: $$\frac{f_r}{f_s} = \sqrt{\frac{1-\beta}{1+\beta}}$$ where $f_r, f_s$ are the receiver and source frequencies, and $\beta$ is the radial speed, in units where the speed of light is $1$. Positive speed means the bodies are separating, causing redshift (lower frequency, longer wavelength), negative speed means they're approaching one another, causing blueshift (higher frequency, shorter wavelength). Horizons provides radial range and range-rate data in its vector table ephemerides. You can obtain that data with a simple query URL, like this. Of course, it's nice to see the data in a more graphical format. ;) Here's a daily plot for the radial speed of the JWST relative to the centre of the Earth, for midnight UTC (actually TDB). However, that plot's a little misleading because the speed of the Earth's rotation is quite substantial. Here's a 48 hour plot, with a 1 hour time step, for the radial speed of the JWST relative to the Space Telescope Science Institute, the STScI. You can make your own plots with this script At these speeds, the Doppler shift is virtually linear, and quite small. 
Eg, at $500$ m/s, the redshift frequency ratio is ~$0.9999983322$. The horizontal axis is in m/s, the vertical axis shows the parts per million difference from $1$. From the JWST User Documentation at STScI: The JWST communication subsystem provides 2-way communications with the observatory via the NASA Deep Space Network. S-band frequencies are used for command uplink, low-rate telemetry downlink, and ranging. Ka-band frequencies are used for high rate downlink of science data and telemetry. All communications are routed through NASA's Deep Space Network, with 3 ground stations located in Canberra (Australia), Madrid (Spain), and Goldstone (USA). There are more details on that page, but for some strange reason it makes no mention of the actual frequencies used. I can't find an official JWST page with that info, but Daniel Estévez says: JWST uses S-band at 2270.5 MHz to transmit telemetry. The science data will be transmitted in K-band at 25.9 GHz, with a rate of up to 28 Mbps.
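The ratio quoted for 500 m/s follows directly from the longitudinal formula above (a quick sketch added here; positive speed means recession, as in the answer):

```python
from math import sqrt

C = 299_792_458.0  # speed of light, m/s

def doppler_ratio(radial_speed):
    """f_received / f_source for a radial speed in m/s (positive = receding)."""
    beta = radial_speed / C
    return sqrt((1 - beta) / (1 + beta))

ratio = doppler_ratio(500.0)    # ~0.9999983322, as quoted in the answer
ppm_shift = (1 - ratio) * 1e6   # ~1.67 parts per million redshift
```

At the ~25.9 GHz Ka-band downlink, a shift of ~1.67 ppm corresponds to a carrier offset of roughly 43 kHz, the kind of Doppler that Deep Space Network receivers routinely track (and exploit for velocity measurement, as noted above).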
{ "domain": "astronomy.stackexchange", "id": 6488, "tags": "james-webb-space-telescope, communication" }
Number of nodes in Van-Emde-Boas tree of universe size u?
Question: The universe size $u$ in vEB trees is square rooted at each level until $u = 2$. So, unlike search trees, the branching factor is different at each level. The height of the tree is $h = \lg \lg(u)$ and $u$ is an even power of 2, $u = 2^{2^k}$. I tried to calculate the count using the sum: $$\sum_{i=1}^{h-1} 2^i\,2^{i + 1}$$ But it doesn't work. Any idea on how to do the math? Edit: Sorry for the confusion that might be caused by the illustration. The illustration is based on the CLRS book's implementation of VEB trees, but the summary nodes are not shown. Thinking again about it now, I would like to know the count of all the nodes including the summary nodes and their tree nodes, as well as, the count of just the nodes without the summary. Answer: The nice picture in the question does not include the summary nodes nor the min and max fields, both of which are indispensable components of a van Emde Boas tree (vEB tree). A better illustration might be the following picture taken from how to read off the set represented by a van-Emde-Boas tree, which was drawn by Raphael based on a figure in CLRS, where a number in orange is drawn at a min field that marks its presence in the set of integers being represented. How many nodes are there in the vEB tree implemented like the illustration above? There are $1 + 5 + 5 \times 3 = 21$ nodes in the illustration above. For the sake of simplicity, let $w=2^h$ and $u=2^w=2^{2^h}$. A vEB tree over the universe $\{0,1,\cdots, u-1\}$ is of depth $h=\log\log u$. There is one node of depth 0, which is the root node. There are $2^{2^{h-1}} + 1$ nodes of depth 1. Each node of depth 1 has $2^{2^{h-2}} + 1$ child nodes of depth 2. ... Each node of depth i has $2^{2^{h-i-1}} + 1$ child nodes of depth $i+1$. ... Each node of depth $h-1$ has $2^{2^{h-h}}+1=3$ child nodes of depth $h$. 
In total, the number of all nodes is $$1 + \sum_{i=1}^{h}\prod_{k=1}^i(2^{2^{h-k}} + 1)=(2^{2^h} - 1)\sum_{i=0}^{h}\frac1{2^{2^i} - 1}\tag{1}$$ which is 1, 4, 21, 358, 92007, 6029862760, 25898063359598159721, $\cdots$ for $h=0,1,2,3,4,5,6,\cdots$ respectively. When $h\ge4$, the number of nodes is about $1.404u$. Similarly, the number of all summary nodes is, for $h\ge1$, $$1 + \sum_{i=2}^{h}\prod_{k=1}^{i-1}(2^{2^{h-k}} + 1)=(2^{2^{h}} - 1)\sum_{i=1}^{h}\frac1{2^{2^i} - 1},\tag{2}$$ which is 0, 1, 4, 21, 358, 92007, 6029862760, $\cdots$ for $h = 0,$$1, 2,3,4,5,$$6,\cdots$ respectively. What is the number of all non-summary nodes? Subtracting (2) from (1), we obtain $$(2^{2^{h}} - 1)\left.\frac1{2^{2^i} - 1}\right|_{i=0}=2^{2^h}-1=u-1.$$ The formula also hold for $h=0$. So we have obtained the following surprising formula. $$\text{the number of non-summary nodes in a vEB tree of universe size } u=2^{2^h}\text{ is }u-1.$$ Every node at the same depth use the same amount of space. Nodes at a smaller depth may use much more space than those at a larger depth. For example, for $h=5$, the root node contains a bit-array of size 65536 but a leaf node just contains several words. Since the number of nodes that use bigger space decreases very quickly as their depth becomes smaller, the total space used is $O(u)$. Exercise. The number of all leaf nodes, which are vEB trees of universe size 2 in a vEB tree of universe size $u=2^{2^h}$ is $u-1$.
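The depth-by-depth count and the closed form (1) can both be checked exactly (a sketch added here, not part of the original answer; exact rational arithmetic avoids any floating-point doubt):

```python
from fractions import Fraction

def veb_total_nodes(h):
    """Total nodes in a vEB tree over universe 2^(2^h), via the recurrence
    T(h) = 1 + (2^(2^(h-1)) + 1) * T(h-1), T(0) = 1:
    one root, whose children are one summary subtree plus 2^(2^(h-1))
    cluster subtrees, each a vEB tree of half the word size."""
    if h == 0:
        return 1
    return 1 + (2 ** (2 ** (h - 1)) + 1) * veb_total_nodes(h - 1)

def veb_closed_form(h):
    """Formula (1): (2^(2^h) - 1) * sum_{i=0}^{h} 1 / (2^(2^i) - 1)."""
    u_minus_1 = 2 ** (2 ** h) - 1
    s = sum(Fraction(1, 2 ** (2 ** i) - 1) for i in range(h + 1))
    return u_minus_1 * s

totals = [veb_total_nodes(h) for h in range(6)]
# matches the answer's sequence 1, 4, 21, 358, 92007, 6029862760
```

The recurrence and the closed form agree term for term, and the ratio totals[-1] / 2\*\*(2\*\*5) is already close to the ~1.404 constant quoted for large h.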
{ "domain": "cs.stackexchange", "id": 14426, "tags": "algorithms, data-structures, van-emde-boas-trees" }
Moving a ROS robot in Gazebo, are plugins necessary or can I use topic/node method instead?
Question: Hi all, I'm really a newbie in Gazebo, and over the last few days I have tried to read everything I could find about using Gazebo for my simulated robot. So far so good... I have a small robot which moves really well under RViz, and now I'm interested in simulating it in Gazebo. Since the ROS node is going to simulate the robot's dynamics, I first want to move it in Gazebo. I've found something similar in this question, but I'm still a little bit confused about plugins in Gazebo. To my understanding, one could move the robot just by publishing velocity and position information from ROS on the following two topics: /gazebo/set_link_state /gazebo/set_model_state So... why should I need a plugin? What is its purpose? Is this approach (publishing on these topics) right, or should one create a plugin for the robot to be simulated in Gazebo? Many thanks in advance Regards Originally posted by Antares on Gazebo Answers with karma: 3 on 2015-05-06 Post score: 0 Answer: I recommend you take a closer look at Gazebo plugins. As the documentation says, a plugin is a chunk of code that is compiled as a shared library and inserted into the simulation. The plugin has direct access to all the functionality of Gazebo through the standard C++ classes. So if you want to use all the functionality of Gazebo (get the model name, set velocity and acceleration, change simulation physics properties, etc.), not just be limited to publishing messages on /gazebo/set_link_state, /gazebo/set_model_state, or other topics to change the model or link state (yes, you can publish messages on these topics to make your model move), you should learn to write a plugin. Actually, Gazebo has been developed independently of ROS, so it no longer uses ROS messages, services, topics, etc. It adopts protobuf for message passing (take a look at the "Transport Library" tutorial), and you will not see ROS topics like /gazebo/set_link_state there. The reason that you can use them is because of a package called gazebo_ros_pkgs. 
As the tutorial said: To achieve ROS integration with stand-alone Gazebo, a new set of ROS packages named gazebo_ros_pkgs has been created to provide wrappers around the stand-alone Gazebo. They provide the necessary interfaces to simulate a robot in Gazebo using ROS messages, services and dynamic reconfigure. Now I am using model plugin to control my robot and works well. Actually, if you want to move your robot in simulation, I think it is better to use gazebo function SetAngularVel() or SetLinearVel() instead of publishing messages on topic /gazebo/set_model_state. Because messages on the topic /gazebo/set_model_state have fields of both velocity and pose and it is tricky to set both of them. Originally posted by winston with karma: 449 on 2015-05-07 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 3764, "tags": "ros" }
The Badge Bot 9000
Question: Because I'm both lazy and wanting to mess around more with time management and Ruby, I thought it would be fun to create a simple script that opens Code Review once every day to go towards the daily login badge awards! There are so many ways to go about this that I'm certain you could probably do this within several lines of code (especially with Ruby). require 'launchy' module BadgeBot def at_time(time) loop do before = Time.now yield interval = time - (Time.now - before) sleep interval if interval > 0 end end # every 23 hours, open the webpage at_time(82800) do Launchy.open('http://codereview.stackexchange.com/') end end I'm also aware that there's no code to handle closing the browser. This is handled manually, because as a software developer I'd be ashamed if I didn't attend the computer once at some point in the day. Answer: Naming the method at_time doesn't really tell the user what it's doing. If you used ActiveSupport::CoreExtensions::Numeric::Time, you could do something really cool: every 23.hours do # job end Some other name might be better, like verbose repeat_with_interval, but every + hours is terse and readable, thus nice. Also, if you are using a module, you could separate definitions (definition, actually) from actual job for clarity - it is good to separate what is being done from how it is done (if BadgeBot contained more methods, body of the module probably isn't where someone would look for method calls). module BadgeBot def every(time) # ... end end BadgeBot.every 23.hours do # cheat shamelessly end
{ "domain": "codereview.stackexchange", "id": 15866, "tags": "ruby, datetime, stackexchange" }
Kinect for Windows
Question: Has anybody tried the new Kinect for Windows with ROS? http://www.zdnet.com/blog/microsoft/whats-new-in-microsofts-kinect-for-windows-final-bits/11783 Hopefully it works with the standard ROS drivers. Originally posted by sedwards on ROS Answers with karma: 1601 on 2012-02-03 Post score: 4 Answer: So far, I have tested the new driver changes for the "Kinect for Windows" from Drew Fisher (k4w-drivers), and it does work fine with the "Kinect for Windows" using the modified libfreenect device drivers! The old drivers are the same; the only changes are the driver parameters which have to be added for the K4W into the source code. So far, I took these steps to change the OpenNI drivers: First, I changed the Product and Vendor IDs in XnDeviceSensorIO.cpp. Furthermore, I set the USB alternative interface to enable the two isochronous endpoints. More information on libusb (function: libusb_set_interface_alt_setting). Second, I modified the access permission rules for the PrimeSense sensor USB (55-primesense-usb.rules). I recompiled the modified OpenNI drivers. Basically, the driver is now adjusted only for the K4W; the Kinect for Xbox can no longer be identified, but this is okay for my purposes so far. Originally posted by GermanUser with karma: 125 on 2012-05-04 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Lean2 on 2012-09-29: I need to have the k4W with ROS, but im not really experiment with it. im having problems changing the XnDeviceSensorIO.cpp. could anybody explain a bit more all necessary changes needed to have the K4W working? Comment by S on 2013-03-25: Is there any progress in getting K4W work out of the box? If not, can someone tell me where XnDeviceSensorIO.cpp is located? I checked out openni drivers, but I can't find neither that file nor 55-primesense-usb.rules. Thanks! Comment by madeng84 on 2013-03-26: I need that file, too... Please give us a feedback
{ "domain": "robotics.stackexchange", "id": 8101, "tags": "kinect, windows" }
Do all objects we see emit light which then forms an image inside our eye?
Question: In daily life we see so many objects. Often in textbooks, the ray diagram for image formation in the eye is drawn with light rays coming from the object and forming an image on the inner layer of the eye. Do all objects and surroundings emit light, so that we can see them when they form an image inside the eye? If not, what is the exact reason we see most objects during the day? Is the tubelight at home or the sun outside the reason for this? Answer: Most of the things you see are not emitting light. Usually they are reflecting light from light sources, and this reflected light is what you see. Typical light sources that do emit light in everyday life are the sun outside and light bulbs inside.
{ "domain": "physics.stackexchange", "id": 85952, "tags": "optics, visible-light, everyday-life, geometric-optics, vision" }
Suggestions for self-studying experimental techniques
Question: I would appreciate any suggestion regarding self-studying experimental techniques used in biology for a theoretical scientist (candidate). Maybe I should be more specific. I'm a biology major, but took as little experimental courses as possible, and I didn't paid much attention to those I took. (shame on me, right). Then I got a MS in applied math. When I revisit my molecular biology/genetics and biochemistry textbooks, I realized that they are mostly descriptive. An experiment/technique oriented textbook would be very helpful for me, especially if it will also provide an overview of the methods. My purpose is not the make up for the lost undergrad years, change my research orientation or master experimental skills. I'm aware that this is not achievable by reading some texts. My sole purpose is to not get lost on theory/experiment interface. When someone asks how something could be proven experimentally, I want to know what options are there in the field. Answer: I cannot more highly recommend Molecular Cloning 4 ed.. It is a really comprehensive text which covers numerous molecular biology experimental techniques including different types of PCR. EDIT: Also, there are some great suggestions for other books and websites that also cover molecular biology techniques in the answers to this question: Short, concise, practical manual for doing experimental biology
{ "domain": "biology.stackexchange", "id": 3135, "tags": "experiment" }
What is the closest distance for an undiscovered old neutron star and how long would it take to get here? To help people scared of neutron stars
Question: This is to help people who are scared of neutron stars. My questions are What is the faintest a neutron star could be How close could it be and not yet be detected How long would it take to get here if it was traveling as fast as the fastest known stars. I am using these figures at present Absolute magnitude 23 a third of a light year for a 10" telescope limiting magnitude 13, and around 13 light years for the GAIA star survey Over 80 years for 10" telescope easy visibility and over 8,000 years for the GAIA star survey They are scared because of an absurd YouTube video with over a million views, by someone going by the name of TopMan 2.0. He names a particular neutron star. It's utterly absurd, it's an estimated 424 light years away and he claims it will hit us 75 years from now based on a spanish blogger who in turn got it from a National Geographic fictional over the top absurd "documentary" which didn't name any star as it was a made up scenario acted out by actors. Children as young as 7 are posting comments to his video, saying they are afraid they are going to die. I tried talking to him, but he is not going to add anything to the video to say that it is fake. He must be getting a fair bit of ad revenue from it. Even when I explain how absurd it is the people who contact me are still scared of neutron stars. This is part of my work trying to help people scared of these absurd fake doomsdays. Often they are getting help from therapists for the extreme fear and anxiety generated by these videos. Ordinary folk who flunked physics, and see one of these videos on the web, and then the fear overwhelms them and they get such severe anxiety they no longer think clearly and panic many times a day. Background and calculation here. Anyway, it would help them to know that we could detect a faint neutron star if there was one close by. I thought members here might be interested in the question. 
The main uncertainty is, for (1), how faint a neutron star can be. I'm basing it on Rob Jeffries' answer here, assuming the temperature of Sirius, but I'm not sure if that is the faintest it could get, if it is a billions-of-years-old neutron star in our stellar neighbourhood. Answer: There are all sorts of things to consider here and I doubt there can be a definitive answer. First: how many neutron stars are there - or more pertinently, what is their density in the solar neighbourhood? There are about 1000 stars within 15pc of the Sun down to about $0.2M_{\odot}$. Most of these are main sequence stars that are less massive (and more long-lived) than the Sun, with odd exceptions like Sirius and Arcturus. About 10% are white dwarfs that have evolved from objects with initial masses of $1-8M_{\odot}$. If we assume there are 900 stars within 15pc that were born with $M\leq 1M_{\odot}$, then we can integrate an assumed initial mass function and further assume that all stars with $8\leq M/M_{\odot}<25$ have already ended their lives as neutron stars. The lower limit is fairly solid; the upper limit is much more uncertain, but because of the steepness of the initial mass function ($N(M) \propto M^{-2.3}$) it doesn't really change the numbers of neutron stars much (but does change the [small] numbers of black holes!). Thus the fraction of stars that end up as white dwarfs would be $$f_{\rm WD} \sim \frac{\int^{8}_{1} M^{-2.3}\ dM}{\int^{25}_{0.2} M^{-2.3}\ dM} = 0.11,$$ which ignores the negligible contribution of even higher mass stars to the total. This is in reasonable agreement with observation, but will be slightly overestimated because not all stars more massive than $1M_{\odot}$ have died. Armed with some confidence that this calculation works for white dwarfs, we can do the same calculation for neutron stars. $$f_{\rm NS} \sim \frac{\int^{25}_{8} M^{-2.3}\ dM}{\int^{25}_{0.2} M^{-2.3}\ dM} = 0.006.$$ i.e. 
If there are 1000 total stars within 15pc, there should be 6 neutron stars. This is likely to be an overestimate because a large fraction of neutron stars are created in supernovae and obtain a large momentum kick that can give them velocities of hundreds of km/s. That means they should be under-represented in the Galactic disc and some will have been ejected from the Galaxy. This is probably a factor of $\sim 2$ effect. See also https://astronomy.stackexchange.com/questions/16678/how-far-away-is-the-nearest-compact-star-remnant-likely-to-be?noredirect=1&lq=1 for a similar calculation, where I used some slightly different assumptions and numbers (which gives a flavour of the uncertainties involved). Second: Could we actually see these nearby neutron stars? Now, the age distribution is likely to be reasonably uniform over the age of the Galaxy or perhaps even weighted to older ages. Neutron stars lose their original "birth heat" on timescales of thousands to millions of years, mostly by the emission of neutrinos. By the time they get to a million years old they can only be kept hot by accretion from the interstellar medium or perhaps by some sort of Ohmic heating driven by the decay of their magnetic fields or frictional processes associated with their spindown and decoupling between superfluids and "normal" fluids in the crust and core. Of these reheating mechanisms, accretion from the interstellar medium is probably not important for neutron stars close to the Sun, because our local ~100 pc is a local bubble of hot and relatively sparse interstellar gas. But the other processes are very uncertain and until we start detecting the thermal radiation from old neutron stars, we just don't know how luminous they will be. 
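The two IMF fractions quoted above are easy to reproduce numerically; a minimal sketch in Python, using the analytic integral of $M^{-2.3}$ and the same mass limits:

```python
def imf_fraction(m_lo, m_hi, m_min=0.2, m_max=25.0, slope=-2.3):
    """Fraction of stars born between m_lo and m_hi for N(M) proportional to M**slope."""
    def integral(a, b):
        # Analytic integral of M**slope from a to b (valid for slope != -1)
        e = slope + 1.0
        return (b**e - a**e) / e
    return integral(m_lo, m_hi) / integral(m_min, m_max)

f_wd = imf_fraction(1.0, 8.0)    # white-dwarf progenitors
f_ns = imf_fraction(8.0, 25.0)   # neutron-star progenitors
print(f_wd, f_ns)  # ~0.115 and ~0.0064, matching the quoted 0.11 and 0.006
```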
In Position of Neutron Stars in H R diagrams I gave an estimate of $M_{v} \sim 23$ for the absolute visual magnitude once the neutron star surface has cooled to 10,000 K or so, but I would say this could be uncertain by a factor of a few either way, which results in orders of magnitude uncertainties in their luminosities and so about $\pm 5$ magnitudes on $M_v$! If the absolute magnitude was $<20$, then there is a chance that Gaia might detect one of those $\sim$ few old neutron stars within 15 pc. It would have a large parallax, probably a large proper motion, and a calculation of its luminosity compared with its temperature would quickly reveal it was much smaller than a hot white dwarf. On the other hand, if $M_V>20$ then there is virtually no chance that the Gaia survey would spot it, because that is about its apparent magnitude sensitivity limit. So unless one of those few nearby neutron stars was closer than a few pc it would just be too faint to see. The Large Synoptic Survey Telescope, due to start operation in 2023, should survey the sky to much fainter limits and really does stand a chance of detecting a population of these objects. Third: I should point you to another possibility that I addressed (and dismissed) in https://astronomy.stackexchange.com/questions/16578/will-gaia-detect-inactive-neutron-stars/16699#16699 That is that Gaia might see the gravitational lensing of background stars by a foreground and nearby neutron star. In the answer referred to, I showed that this is possible but rather unlikely. In conclusion there is little reassurance to be given. The likelihood of a neutron star disrupting the solar system is very low and since they are orders of magnitude less common than "ordinary stars" (which would do just as much damage!) and none of these ordinary stars show much likelihood of coming nearer than 10,000 au to us in the foreseeable millions of years (e.g. 
Bailer-Jones 2018) it would be unfortunate in the extreme to be struck by something that is much rarer in the next 100 years. Conspiracy theorists and other wackos should focus on the very much more real threats of global warming and antibiotic-resistant bacteria, rather than claiming that something we see $>400$ light years away can reach us in 75 years...
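For the travel-time part of the question, a back-of-envelope check makes the 75-year claim collapse immediately; a sketch assuming ~1000 km/s, a round figure for the fastest (hypervelocity) stars known:

```python
C_KM_S = 299792.458  # speed of light in km/s

def travel_time_years(distance_ly, speed_km_s):
    # Light covers one light year per year, so scale the distance by c/v
    return distance_ly * C_KM_S / speed_km_s

# The star named in the video is an estimated 424 light years away
t = travel_time_years(424.0, 1000.0)
print(round(t))  # ~127,000 years, not 75
```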
{ "domain": "physics.stackexchange", "id": 55498, "tags": "temperature, astrophysics, neutron-stars, luminosity" }
sharp ir sensor
Question: I added 8 Sharp IR sensors to my robot. How do I see them in RViz, and how do I add a TF for each IR sensor? My main laser is a Hokuyo UST-10LX. Help me please, thank you. Originally posted by zucky on ROS Answers with karma: 11 on 2018-03-23 Post score: 1 Answer: Easiest way is to add the IR frame position to your URDF robot description. Assuming you are already publishing the robot state, that should add the TF for the IR sensor automatically. You'll also have to write a node to read the analog values from the IR sensor and convert to a distance, then publish that. It's probably easiest to make the IR sensor look like a laser scanner with only a single scan point. (Reason: some off-the-shelf ROS nodes will work with laser scan messages but not single distance reading messages. Sigh...) Originally posted by Mark Rose with karma: 1563 on 2018-04-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by zucky on 2018-04-03: thank you very much
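A rough sketch of the answer's "convert and publish as a single-point scan" idea. All the numbers and names here are placeholders (the power-law calibration constants, the range limits, the frame name), and in a real node the dict below would be a sensor_msgs/LaserScan message published with rospy:

```python
def ir_voltage_to_metres(voltage, k=0.2786, exponent=-1.15):
    # Power-law fit often used for Sharp analog IR rangers;
    # k and exponent are placeholder values -- calibrate against your own sensor.
    return k * voltage ** exponent

def single_point_scan(distance_m, frame_id):
    # LaserScan-shaped fields for a one-ray "scan" pointing straight ahead;
    # frame_id must match the sensor link added to the URDF.
    return {
        'frame_id': frame_id,
        'angle_min': 0.0, 'angle_max': 0.0, 'angle_increment': 0.0,
        'range_min': 0.04, 'range_max': 0.80,  # assumed usable range in metres
        'ranges': [distance_m],
    }

scan = single_point_scan(ir_voltage_to_metres(1.5), 'ir_front_link')
```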
{ "domain": "robotics.stackexchange", "id": 30434, "tags": "ros, laser, ros-indigo" }
What are the large round dark "holes" in this NASA Hubble image of the Crab Nebula?
Question: I came across this image of the Crab Nebula taken by the NASA Hubble Space Telescope. What are the large round "holes" and how are they formed? Answer: I think my deleted answer to your previous question covers this well, so I'll add it here. These two spots are known as the east and west bays of the Crab Nebula. They appear to be the result of a torus partially encircling a section of the nebula. The pulsar's magnetic field interacts with the gas and dust in the torus, which blocks synchrotron radiation from gases in the nebula. It has been conjectured that the torus is the result of a $\sim0.1M_{\odot}$ ejection of material from a circumstellar disk prior to the supernova, based on studies of the proper motion of gas on the edges of both bays. References Fesen et al. (1992) Hester et al. (1995) We can also rule out that this is the result of a defect in Hubble, because these features have been observed in different wavelengths, including infrared, visible, ultraviolet, and x-ray (though not in radio or gamma-ray wavelengths): Image courtesy of Wikipedia user Hunster under the Creative Commons Attribution-Share Alike 3.0 Unported license. The infrared, ultraviolet, and x-ray images come from the Spitzer Space Telescope, the SWIFT observatory, and the Chandra observatory, respectively.
{ "domain": "astronomy.stackexchange", "id": 1932, "tags": "hubble-telescope, nebula" }
How would you improve this point of interaction between Python and Javascript?
Question: I'm writing a checkout form for purchasing online tests. You can buy a certification or a recertification, as well as a pdf or book study manual. I'm using Jinja2 templates to provide test data into the Javascript. The test object is a dictionary with the following structure. All prices are in cents. test { id: Integer, name: String, acronym: String, certification_price: Integer, recertification_price: Integer, study_pdf_price: Integer, study_book_price: Integer, } If a user buys more than one test, they get a discount of 10% off for each additional test. <!doctype html> <html> <head> <script> $(document).ready(function() { var calcTotal = function(){ var total = 0; var numTests = 0; {% for test in tests %} {% if test.certification_price %} if ($('#buy-{{ test.acronym }}-certification').is(':checked')) { total += {{ test.certification_price }}; numTests++; } {% endif %} {% if test.recertification_price %} if ($('#buy-{{ test.acronym }}-recertification').is(':checked')) { total += {{ test.recertification_price }}; numTests++; } {% endif %} {% if test.study_pdf_price %} if ($('#buy-{{ test.acronym }}-pdf').is(':checked')) { total += {{ test.study_pdf_price }}; } {% endif %} {% if test.study_book_price %} if ($('#buy-{{ test.acronym }}-book').is(':checked')) { total += {{ test.study_book_price }}; } {% endif %} {% endfor %} // take 10 percent off for each additional test after the first if (numTests > 0) numTests--; var discountPrice = Math.round(((10 - numTests) * 0.1) * total); $('.product-total').html(Math.round(discountPrice)); $('#final-total').val(discountPrice); } $('.product-selection').click(calcTotal); calcTotal(); }); </script> <body> <h2>Selected Products</h2> <form id="registration-form" action="" method="POST"> <table id="products-table"> <tr> <th>Product</th> <th>Price</th> <th>Add to Cart</th> </tr> {% for test in tests %} {% if test.certification_price %} <tr> <td>{{ test.acronym }} Certification{% if test.study_pdf_price %} (PDF Manual 
Included){% endif %}</td> <td>${{ test.certification_price }}</td> <td><input class="product-selection" type="checkbox" id="buy-{{ test.acronym }}-certification" name="buy-{{ test.acronym }}-certification" /></td> </tr> {% endif %} {% if test.recertification_price %} <tr> <td>{{ test.acronym }} Recertification</td> <td>${{ test.recertification_price }}</td> <td><input class="product-selection" type="checkbox" id="buy-{{ test.acronym }}-recertification" name="buy-{{ test.acronym }}-recertification" /></td> </tr> {% endif %} {% if test.study_pdf_price %} <tr> <td>{{ test.acronym }} {{ test.study_pdf_description }}</td> <td>${{ test.study_pdf_price }}</td> <td><input class="product-selection" type="checkbox" id="buy-{{ test.acronym }}-pdf" name="buy-{{ test.acronym }}-pdf" /></td> </tr> {% endif %} {% if test.study_book_price %} <tr> <td>{{ test.acronym }} {{ test.study_book_description }}</td> <td>${{ test.study_book_price }}</td> <td><input class="product-selection" type="checkbox" id="buy-{{ test.acronym }}-book" name="buy-{{ test.acronym }}-book" /></td> </tr> {% endif %} <tr><td>&nbsp;</td><td>&nbsp;</td><td>&nbsp;</td></tr> {% endfor %} <tr><td><strong>Total</strong></td><td><strong>$<span class="product-total">0</span></strong></td></tr> </table> <input type="hidden" id="final-total" name="final-total" /> <p><input id="submit-button" type="submit" value="Place Order"></p> </form> </body> </html> I feel like I am repeating myself a lot, but I don't know how I would do this differently. Also, this way of providing data by writing Javascript code with the templating language seems messy. Any ideas? Answer: You're right, templating JavaScript code in the way you're doing is messy and usually a bad option. If possible it's almost always better to pass in a single JSON object with all of the values you need and then operate on that object in JavaScript. 
I don't know Python but something like this: var tests = {{ json.dumps( tests ) }}, // this is the only template insertion total = 0, numTests = 0, test, testIdPfx // only one `var` is necessary for multiple declarations ; // do this loop in JavaScript instead of the templating system for ( var i = 0; i < tests.length; i++ ) { test = tests[ i ]; // do this concatenation just once at the beginning testIdPfx = '#buy-' + test.acronym + '-'; // this condition also in JavaScript instead of the template--combined with the // `if` you're already doing if ( test.certification_price && $( testIdPfx + 'certification' ).is( ':checked' ) ) { total += test.certification_price; numTests++; } if ( test.recertification_price && $( testIdPfx + 'recertification' ).is( ':checked' ) ) { total += test.recertification_price; numTests++; } // and so on... } // ... (You could also load the JSON object with an Ajax request and avoid templating in your JavaScript entirely, but that might be overkill.) Already that looks a lot cleaner but you still have a lot of repetition. Basically the only thing different between your four if blocks is one word: certification, recertification, study_pdf and study_book. So why not put those four words in an array and then reuse the same code four times? var tests = {{ json.dumps( tests ) }}, // this is the only template insertion // here's your four words: types = [ 'certification', 'recertification', 'study_pdf', 'study_book' ], total = 0, numTests = 0, test, testIdPfx, type, price ; for ( var testIdx = 0; testIdx < tests.length; testIdx++ ) { test = tests[ testIdx ]; testIdPfx = '#buy-' + test.acronym + '-'; // repeat for each of the four words in `types` for ( var typeIdx = 0; typeIdx < types.length; typeIdx++ ) { type = types[ typeIdx ]; // exploit the fact that `test.foo` and `test['foo']` are equivalent in JavaScript price = test[ type + '_price' ]; if ( price && $( testIdPfx + type ).is( ':checked' ) ) { total += price; numTests++; } } } // ... 
(Note: You'll have to adjust your markup to have #buy-{{ test.acronym }}-study_pdf instead of just -pdf and #buy-{{ test.acronym }}-study_book instead of -book to match the types values, or vice versa.) A nice side-effect of doing the main for loop in JavaScript instead of the templating system is that you're sending proportionally less JavaScript code to the client, which saves on bandwidth. If you wanted you could perform a similar reduction in your markup. Hope that helps!
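Whichever refactor is chosen, the discount rule from the question (10% off the whole order for each test beyond the first, with study materials not counting as tests) is worth isolating so it can be unit-tested away from the DOM; a sketch:

```python
def order_total_cents(test_prices, extras=()):
    # Mirrors the question's formula round(((10 - numTests) * 0.1) * total),
    # where numTests has already been decremented once if positive
    num_tests = len(test_prices)
    total = sum(test_prices) + sum(extras)
    discount_steps = max(num_tests - 1, 0)
    return round((10 - discount_steps) * 0.1 * total)

order_total_cents([10000])           # 10000: one test, no discount
order_total_cents([10000, 10000])    # 18000: 10% off for the second test
order_total_cents([10000], [5000])   # 15000: a PDF manual adds no discount
```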
{ "domain": "codereview.stackexchange", "id": 1314, "tags": "javascript, python, html" }
Canonical transformation and Hamilton's equations
Question: I was trying to prove that, for a transformation to be canonical, one must have the relationship: $$ \left\{ Q_a,P_i \right\} = \delta_{ai} $$ where $Q_a = Q_a(p_i,q_i)$ and $P_a = P_a(p_i,q_i)$. Now to do the proof I started with $\dot{Q_a}$, using the chain rule and Hamilton's equations for the initial coordinates $q_i,p_i$: $$ \dot{Q_a} = \frac{\partial Q_a}{\partial q_j} \dot{q_j} + \frac{\partial Q_a}{\partial p_j} \dot{p_j} = \frac{\partial Q_a}{\partial q_j} \frac{\partial H}{\partial p_j} - \frac{\partial Q_a}{\partial p_j} \frac{\partial H}{\partial q_j} $$ Then I apply the chain rule to the Hamiltonian derivatives: $$ \dot{Q_a} = \frac{\partial Q_a}{\partial q_j} \left( \frac{\partial H}{\partial Q_i} \frac{\partial Q_i}{\partial p_j} + \frac{\partial H}{\partial P_i} \frac{\partial P_i}{\partial p_j} \right) - \frac{\partial Q_a}{\partial p_j} \left( \frac{\partial H}{\partial Q_i} \frac{\partial Q_i}{\partial q_j} + \frac{\partial H}{\partial P_i} \frac{\partial P_i}{\partial q_j} \right) $$ Now reordering the terms yields: $$ \dot{Q_a} = \frac{\partial H}{\partial Q_i} \left\{ Q_a,Q_i \right\} + \frac{\partial H}{\partial P_i} \left\{ Q_a,P_i \right\} $$ Now here is the problem: for the transformation from a coordinate system $(q_i,p_i)$ to a coordinate system $(Q_a(q_i,p_i), P_a(q_i,p_i))$ to be canonical we require: $$\left\{ Q_a,P_i \right\} = \delta_{ai}$$ But why do we also have the following requirement, or is it just too obvious, or true because of some property of any coordinate transformation? $$\left\{ Q_a,Q_i \right\} = 0$$ The problem I am having is as follows. I agree that the following two are true (if I used the covariant notation correctly): $$ \left\{ q^i,q_j \right\}_{q,p} = \frac{\partial q^i}{\partial q^k} \frac{\partial q_j}{\partial p_k} - \frac{\partial q^i}{\partial p^k} \frac{\partial q_j}{\partial q_k} = 0 $$ $$ \left\{ Q^i,Q_j \right\}_{Q,P} = 0 $$ But why is it the case that the following is also true? 
$$ \left\{ Q^i,Q_j \right\}_{q,p} = 0 $$ Answer: A canonical transformation $(q^i,p_j) \to (Q^i,P_j)$ preserves the form of Hamilton's equations. Similarly, a symplectic transformation$^1$ $(q^i,p_j) \to (Q^i,P_j)$ preserves the Poisson structure, aka a symplectomorphism. In other words, all the fundamental Poisson brackets (PB) $$ \{ q^i,p_j \} ~=~ \delta^i_j, \qquad \{q^i,q^j \}~=~0, \qquad \{ p_i,p_j \} ~=~ 0,\qquad i,j \in\{1, \ldots, n\},$$ have the same form in the new coordinates $$ \{ Q^i,P_j \} ~=~ \delta^i_j, \qquad \{Q^i,Q^j \}~=~0, \qquad \{ P_i,P_j \} ~=~ 0,\qquad i,j \in\{1, \ldots, n\}. $$ In particular, to answer OP's question (v2), the relations $\{Q^i,Q^j \}=0$ and $\{P_i,P_j\} = 0$ are only trivial if $n=1$, because of the skew-symmetry of the PB. As is well-known, canonical and symplectic transformations are the same. For a proof [at least in the case of restricted transformations, i.e. transformations without explicit time dependence], see e.g. Ref. 1, which uses so-called symplectic notation. An important point is that the Jacobian matrix of a symplectic transformation must be a symplectic matrix. References: H. Goldstein, Classical Mechanics, Section 9.4 in ed. 3 or Section 9.3 in ed. 2. -- $^1$ In this answer we will for simplicity only discuss non-degenerate Poisson brackets in finite dimensions using globally defined coordinates.
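The fundamental-bracket conditions are easy to verify symbolically for a concrete transformation; a sketch with sympy for $n=1$, using the exchange map $Q=p$, $P=-q$ (a textbook canonical transformation, chosen here purely as an illustration):

```python
import sympy as sp

q, p = sp.symbols('q p')

def pb(f, g):
    # Poisson bracket {f, g} with respect to the old variables (q, p)
    return sp.simplify(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q))

# Exchange transformation: Q = p, P = -q
Q, P = p, -q
assert pb(Q, P) == 1   # {Q, P} = 1: fundamental bracket preserved
assert pb(Q, Q) == 0   # trivially zero for n = 1, as the answer notes

# A non-symplectic map, Q = q, P = p**2, fails the test: {Q, P} = 2p
assert pb(q, p**2) != 1
```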
{ "domain": "physics.stackexchange", "id": 5960, "tags": "hamiltonian-formalism, commutator, poisson-brackets" }
Unable to find sources for uncommon specific heat
Question: I'm calculating the energy, temperature, and gas volume outputs of gunpowder. To solve this I need to find the specific heat of Potassium sulfate (K2SO4) and Potassium carbonate (K2CO3). If anyone can refer me to any directory that can list the values for these it would be very helpful. Thank you for your time. Answer: For potassium sulfate see here. For potassium carbonate see here. Generally, search in the NIST Chemistry WebBook and when you find the compound of interest, either click on "gas phase thermochemistry data" or "condensed phase thermochemistry data" as appropriate.
{ "domain": "chemistry.stackexchange", "id": 5754, "tags": "temperature" }
Would this code cause memoryleaks?
Question: My question is really simple. Does this code cause memoryleaks? If so, where/how/why? HDC hDC, memDC = 0; HBITMAP hBMP = 0; HBITMAP hOldBMP = 0; PAINTSTRUCT PS; HBRUSH hb212121, hb141414, hb070707, hb000, hbF7F7F7, hb989898, hb707070, hb494949, hb984921 = 0; HPEN hp353535 = 0; case WM_PAINT: hDC = BeginPaint(hWnd, &PS); memDC = CreateCompatibleDC(hDC); hBMP = CreateCompatibleBitmap(hDC, 450, 450); SelectObject(memDC, hBMP); hb212121 = CreateSolidBrush(RGB(33, 33, 33)); FillRect(memDC, &rMainClntNoBorder, hb212121); hb494949 = CreateSolidBrush(RGB(73, 73, 73)); hb984921 = CreateSolidBrush(RGB(152, 73, 33)); hb000 = CreateSolidBrush(RGB(0, 0, 0)); switch(tiles) { setTileRect(); case 1: FillRect(memDC, &rTile, hb494949); break; case 2: FillRect(memDC, &rTile, hb984921); break; case 7: FillRect(memDC, &rTile, hb000); break; } SelectObject(memDC, hOldBMP); DeleteObject(hBMP); DeleteObject(hb984921); DeleteObject(hb494949); DeleteObject(hb212121); DeleteObject(hb000); hp353535 = CreatePen(PS_SOLID, 1, RGB(53, 53, 53)); SelectObject(memDC, hp353535); GetClientRect(hWnd, &rClnt); MoveToEx(memDC, rClnt.left, rClnt.bottom - 1, 0); LineTo(memDC, rClnt.left, rClnt.top); LineTo(memDC, rClnt.right, rClnt.top); DeleteObject(hp353535); BitBlt(hDC, 0, 0, rMainClntNoBorder.right, rMainClntNoBorder.bottom, memDC, 0, 0, SRCCOPY); DeleteDC(memDC); EndPaint(hWnd, &PS); break; Answer: you select the pen in, but don't select out before delete. SelectObject(memDC, hp353535); // snip DeleteObject(hp353535) should be: hOldPen = SelectObject(memDC, hp353535); // snip SelectObject( memDC, hOldPen ); DeleteObject(hp353535) SelectObject(memDC, hBMP); should really be hOldBMP = SelectObject(memDC, hBMP); For some other comments: I recommend that you split the WM_PAINT (and all other messages) processing into its own function. It'll be easier to read/debug. Are rMainClntNoBorder (and rTile, rClnt) globals, or declared and initialised before the code supplied? 
Only declare one variable per line. It's easier to read and find where it's declared rather than searching. It also avoids an issue with pointers that you may come across. Use proper names for the brushes if you can - hbDarkOrange (or even hbCurrentActiveTile) means more than hb984921. Unless you have hundreds of colours, it may be better to create them at the start of the program and delete them at the end, and store their values in a struct in the 'user-area' of the window structure - You may want to avoid creating and deleting them continually. If you don't do the above, keep the creation and deletion as close together as possible. hbDarkGrey = CreateSolidBrush(RGB(33, 33, 33)); FillRect(memDC, &rMainClntNoBorder, hbDarkGrey); DeleteObject( hbDarkGrey ); Similarly, your switch statement could be simplified. Rather than creating three brushes and using one, just pick the one you want. setTileRect(); // implies that rTile is a global?? switch(tiles) { case 1: tile_colour = RGB( 73, 73, 73 ); break; case 2: tile_colour = RGB( 152, 73, 33 ); break; case 7: // allow fallthrough default: // you need a default!! tile_colour = RGB( 0, 0, 0 ); break; } hbTile = CreateSolidBrush( tile_colour ); FillRect( memDC, &rTile, hbTile ); DeleteObject( hbTile ); Instead of 1 and 2 etc in the case statement, create some meaningful constants for them with enum.
{ "domain": "codereview.stackexchange", "id": 3119, "tags": "c++, memory-management, windows" }
Javascript split array into n subarrays, size of chunks don't matter
Question: NOTE: This post was moved from stackoverflow as codereview.stackexchange is a better place to discuss the performance of this code problem/solution. I want to split an array into n subarrays. I don't care how many elements end up in each array but the elements must be spread through all the available sub arrays. e.g: Solutions A & B are two ways of doing it but I'm looking for Solution A: a = [1,2,3,4,5,6,7,8,9] into_subarrays(a, 2); Solution A => [[1,3,5,7,9],[2,4,6,8]] Solution B => [[1,2,3,4,5],[6,7,8,9]] into_subarrays(a, 4); Solution A => [[1,5,9],[2,6],[3,7],[4,8]] Solution B => [[1,2,3],[4,5],[6,7],[8,9]] into_subarrays(a, 6); Solution A => [[1,7],[2,8],[3,9],[4],[5],[6]] Solution B => [[1,2],[3,4],[5,6],[7],[8],[9]] into_subarrays(a, 12); Solution A => [[1],[2],[3],[4],[5],[6],[7],[8],[9],[],[],[]] Solution B => [[1],[2],[3],[4],[5],[6],[7],[8],[9],[],[],[]] I have this solution, I just want to make sure it's as efficient as possible: function into_subarrays(myArray, chunks=2){ var a = myArray.slice(); //Copy array so that the original is not modified var i = 0; var result = []; while(a.length){ //Create array if needed if (typeof result[i] == 'undefined'){ result[i] = []; } result[i].push(a.shift()); i++; i = (i == chunks) ? 0 : i; //Wrap around chunk selector } return result; } Thanks. This is the answer I've come to prefer from user ffflabs: function into_subarrays(sourceArray, chunks=2){ const result = Array.from(Array(chunks),item=>[]); for(let index=0; index<sourceArray.length ; index++) { result[index % chunks].push(sourceArray[index]); } return result; } Answer: First: Since you already know how many chunks you want, there's no point in checking if the chunk exists and otherwise declare it. In general terms, you should not need to check for objects you can define deterministically. So instead of doing this check on each loop: if (typeof result[i] == 'undefined'){ result[i] = []; } create an array of N empty arrays beforehand. 
const result = Array.from(Array(chunks),item=>[]); Second: Albeit the performance difference is negligible, checking for i's value and conditionally reassigning it is less efficient than using the modulo operator on its value regardless. So instead of results[i].push(...) i++ i = (i == chunks) ? 0 : i; You can do results[i % chunks].push(...) i++ With the above, your function could be expressed as function usingShift(myArray, chunks=5){ const copiedArray = myArray.slice(), result=Array.from(Array(chunks),item=>[]); let i=0; while(copiedArray.length){ result[i % chunks].push(copiedArray.shift()); i++; } return result; } Third: As you've been told, shifting from an array is expensive. I understand you're doing it because you want to populate the chunks in the same order as the original array. However you can achieve the same popping from a reversed array: If you declare const a = myArray.slice().reverse(); You can replace the usage of shift with result[i].push(a.pop()); The function would be something like: function usingPop(myArray, chunks=5){ const reversedArr = myArray.slice().reverse(), result=Array.from(Array(chunks),item=>[]); let i=0; while(reversedArr.length){ result[i % chunks].push(reversedArr.pop()); i++; } return result; } However... you'd still be copying the array and performing a mutation on the copy. @Miklós Mátyás's solution has the advantage of populating the result without copying or extracting items from the source array. Now, you haven't said the source array will always be the same (9 elements from 1 to 9). 
It could as well have repeated/unsorted items, so his solution should take into account not the item itself but its index, which can be expressed as: function filterByModulo(myArray, chunks=5){ return Array.from(Array(chunks),(_,modulo)=>{ return myArray.filter((item,index) => index % chunks === modulo); }); } That's pretty clean, but it's filtering the original array as many times as chunks you want, so its performance degrades according to the source array length AND the chunk quantity. Personally I believe this is a case in which reduce would be more appropriate and pretty concise, while avoiding the copying or mutation of the source array. function usingReduce(myArray,chunks=5) { const result=Array.from(Array(chunks),i=>[]); return myArray.reduce( (accum,item,index)=>{ accum[index%chunks].push(item); return accum; }, result); } Finally there's the classic for loop function classicFor(sourceArr, chunks=5) { const lengthOfArray=sourceArr.length; const result=Array.from(Array(chunks),i=>[]); for(let index=0; index<lengthOfArray ; index++) { result[index % chunks ].push(sourceArr[index]); } return result; } I made a test case at JSPerf in which it shows that the for loop is the most efficient. (I threw in forEach and for..of implementations too). Running with a source array of 5000 items and 5 chunks shows that using pop on the source is more efficient than using shift by a 2.89x factor. It even looks more efficient than reduce. The classic for loop is the fastest whereas filtering N times comes up last by a ratio of 9x the modulo filtering. If you use a source of 100000 items and 15 chunks the classic for is still the most efficient (still 9x modulo filtering) but the other implementations do scale a bit better than modulo filtering.
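The index-modulo idea is language-agnostic; for reference, the same round-robin distribution sketched in Python reproduces the question's "Solution A" outputs:

```python
def into_subarrays(source, chunks=2):
    # Round-robin: element i lands in bucket i % chunks,
    # just like result[index % chunks].push(...) above
    result = [[] for _ in range(chunks)]
    for index, item in enumerate(source):
        result[index % chunks].append(item)
    return result

a = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(into_subarrays(a, 2))   # [[1, 3, 5, 7, 9], [2, 4, 6, 8]]
print(into_subarrays(a, 12))  # trailing buckets stay empty, as in the question
```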
{ "domain": "codereview.stackexchange", "id": 38876, "tags": "javascript, array, ecmascript-6" }
How to set multiple variables at once in JavaScript?
Question: I have been setting multiple variables like this function doSomething() { var FirstName = $FirstName.data("ov"); var LastName = $LastName.data("ov"); var Company = $Company.data("ov"); var Website = $Website.data("ov"); } I just read some reviewed code and both reviewers advised setting the variables in a single statement, like this: function doSomething() { var FirstName = $FirstName.data("ov"), LastName = $LastName.data("ov"), Company = $Company.data("ov"), Website = $Website.data("ov"); } What would be the advantage of doing it the second way? Is there a performance benefit? Is it that there is less code? Answer: From a very good book called JavaScript Patterns: Using a single var statement at the top of your functions is a useful pattern to adopt. It has the following benefits: Provides a single place to look for all the local variables needed by the function Prevents logical errors when a variable is used before it's defined Helps you remember to declare variables and therefore minimize globals Is less code (to type and to transfer over the wire) The author also recommends initializing the variables when you declare them when possible.
{ "domain": "codereview.stackexchange", "id": 1892, "tags": "javascript" }
Reduce list manipulation for QGIS layer tree
Question: I am developing a plugin for the GIS software, QGIS. The code below reads the number of layers in various groups (as shown in the image) and adds them to a QTableWidget: For each group, I want to count the number of layers, divide 1 by this number and then add the result of this to each of the layers in each group in the table. Taking "Group1" as an example: Count the number of layers (in this case 3). Calculate 1 / 3 = 0.33.... Insert 0.33... to the first three layers in the table. Repeat for the remaining groups. So the table looks like this: However, there's a fair bit of list manipulation involved which seems unnecessary so was hoping to see if there's a way to reduce this. Here is the code: # Define QTableWidget qTable = self.dockwidget.tableWidget # Define group root = QgsProject.instance().layerTreeRoot() main_group = root.findGroup('Main group') def refresh_table(): # Define parameters for QTableWidget qTable.setRowCount(0) # Define list to contain all layer names layer_data = [] # Define list to contain the number of layers layer_count = [] # Find all groups in main_group for group in main_group.children(): # Define list to count number of layers in each group layer_list = [] # Find all layers in each group for child in group.children(): node = root.findLayer(child.layer().id()) try: # If layer is visible, add them to lists if node.isVisible() == Qt.Checked: layer_data.append(child.layerName()) layer_list.append(child.layerName()) except AttributeError: pass # Insert the number of layers in each group to layer_count list layer_count.append(len(layer_list)) # Get total number of layers layer_data_count = len(layer_data) try: # Create new list for layer_count but ignore any zeros new_layer_count = [x for x in layer_count if x != 0] except ValueError: pass # List manipulation # Calculate the number of layers in each group and divide 1 by this number value_list = [1 / float(x) for x in new_layer_count] # Format the list to one decimal place 
formatted_value_list = ['%.1f' % elem for elem in value_list] # Convert the values in list to float formatted_value_list_to_float = [float(x) for x in formatted_value_list] # Create final list containing each layer and the values of their group final_value_list = [x for n,x in zip(new_layer_count,formatted_value_list_to_float) for _ in range(n)] # Set number of rows/columns nb_row = len(layer_data) nb_col = 2 qTable.setRowCount(nb_row) qTable.setColumnCount(nb_col) # Hide row index number qTable.verticalHeader().setVisible(False) # Insert layer names and values for row in range(layer_data_count): for col in [0]: item = QTableWidgetItem(str(layer_data[row])) qTable.setItem(row,col,item) # Make first column non-editable item.setFlags(QtCore.Qt.ItemIsEnabled) for col in [1]: item = QTableWidgetItem(str(final_value_list[row])) qTable.setItem(row,col,item) Answer: Your lists manipulations are indeed messy. It seems like you are trying to perform multiple things at once and thus you end up mixing variables that serve different purposes into the same loops. Instead, you should try to extract logical steps to perform, such as: extract informations you need from the internals of QGis to a similar but simpler representation; compute the data you need out of this simplified representation; format the data you computed for presentation; use the formatted data to update your visual presentation. Here the second and third step are simple enough they can be made at once. But first, we will need an helper function to simplify the inner working of the first double for loop: def is_visible_layer(layer, root): node = root.findLayer(layer.id()) try: return node.isVisible() == Qt.Checked except AttributeError: return False Now I’m not QGis expert, but I’m wondering if the root is really necessary to get the meta-information about the node. Isn't it possible to write it like: def is_visible_layer(layer): try: return layer.isVisible() == Qt.Checked except AttributeError: return False ? 
This lets us write the for loop like: for group in main_group.children(): layer_list = [] for child in group.children(): if is_visible_layer(child.layer(), root): layer_data.append(child.layerName()) layer_list.append(child.layerName()) layer_count.append(len(layer_list)) Which can be simplified using list comprehensions: for group in main_group.children(): layer_list = [child.layerName() for child in group.children() if is_visible_layer(child.layer(), root)] layer_data.extend(layer_list) layer_count.append(len(layer_list)) Now recall what I said about performing several things at once? Here you already try to count things while extracting the information from the QGis internals. Let's save that for later and simplify the loop once again to build a list of lists of layer names: layers = [ [ child.layerName() for child in group.children() if is_visible_layer(child.layer(), root) ] for group in root.findGroup(main_group).children() ] and then, only afterwards, you can start counting. And as it is much easier using this structure, you can then convert the list of lists into a single list of couples that will mimic the table layout: table = [ (layer, len(group)) for group in layers for layer in group ] Turning this list of couples into an N × 2 table is then really easy. The whole code should look like: def is_visible_layer(layer, root): node = root.findLayer(layer.id()) try: return node.isVisible() == Qt.Checked except AttributeError: return False def refresh_table(root, qTable, main_group='Main group'): # Get names of relevant layers, as a list of lists of names layers = [ [ child.layerName() for child in group.children() if is_visible_layer(child.layer(), root) ] for group in root.findGroup(main_group).children() ] # Convert the layer list to the table layout i.e.
a list of couples table = [ (layer, len(group)) for group in layers for layer in group ] # Configure output table qTable.setRowCount(len(table)) qTable.setColumnCount(2) qTable.verticalHeader().setVisible(False) # Insert data into table for row, (layer, count) in enumerate(table): name_item = QTableWidgetItem(layer) name_item.setFlags(QtCore.Qt.ItemIsEnabled) count_item = QTableWidgetItem('{:.1f}'.format(1./count)) qTable.setItem(row, 0, name_item) qTable.setItem(row, 1, count_item) Call it like refresh_table( QgsProject.instance().layerTreeRoot(), self.dockwidget.tableWidget)
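The flattening step at the heart of this answer can be tried out with plain Python data, no QGIS required (the group and layer names below are made up for illustration):

```python
# Toy stand-in for the layer tree: a list of groups, each a list of
# visible layer names (an empty group simply contributes no rows).
layers = [
    ["roads", "rivers", "buildings"],  # a group with 3 layers
    ["parcels"],                       # a group with 1 layer
    [],                                # a group with no visible layers
]

# One (name, group-size) couple per layer, mimicking the table layout.
table = [(layer, len(group)) for group in layers for layer in group]

# Each layer's value is 1 divided by the size of its group.
rows = [(name, '{:.1f}'.format(1.0 / count)) for name, count in table]
print(rows)
```

Running this prints [('roads', '0.3'), ('rivers', '0.3'), ('buildings', '0.3'), ('parcels', '1.0')]. Note that an empty group never reaches the division step, which is why the zero-filtering of new_layer_count in the original code becomes unnecessary.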
{ "domain": "codereview.stackexchange", "id": 24754, "tags": "python, plugin" }
Why does the following contradiction arise in Lagrangian Formalism?
Question: If we look at Lagrange's equation $\frac{d}{dt}(\frac{\partial L}{\partial \dot{q_i}})- \frac{\partial L}{\partial q_i}=0$, it is clear that the equations of motion are invariant under a transformation $L \rightarrow L + \frac{dF (q_i,t)}{dt}$ because $\frac{\partial \dot{F}(q_i,t)}{\partial \dot{q_i}} = \frac{\partial F}{\partial q}$. But when we look at the action, $A=\int L{dt}$, a transformation of the kind $L \rightarrow L + \frac{dF(q_i, \dot{q_i}, t)}{dt}$ would leave the extrema of the action invariant, and hence the equations of motion should also be invariant (according to the principle of least action), because the total time derivative of $F$ would contribute just a constant. So under what kind of transformation are the equations of motion invariant? $L \rightarrow L + \frac{dF (q_i,t)}{dt}$ or $L \rightarrow L + \frac{dF(q_i, \dot{q_i}, t)}{dt}$? I mean, what kind of function should $F$ be? Is it allowed to have explicit dependence on $\dot{q_i}$? Answer: The Euler-Lagrange equation holds for Lagrangians that depend on at most first order derivatives of $q$. But when we take the transformation: $$L' = L+\frac{dF(q,\dot q,t)}{dt} $$ we find that our new Lagrangian now has a dependence on the second order derivative of $q$: $$L' = L+\frac{dF(q,\dot q,t)}{dt} = L+\frac{\partial F}{\partial q}\frac{dq}{dt}+\frac{\partial F}{\partial \dot q}\frac{d\dot q}{dt}+\frac{\partial F}{\partial t}$$ $$= L+\frac{\partial F}{\partial q}\dot q+\frac{\partial F}{\partial \dot q}\ddot q+\frac{\partial F}{\partial t}$$ and so the new Lagrangian now depends on $\ddot{q}$. The only way to avoid the new Lagrangian depending on $\ddot{q}$ is to demand that $$\frac{\partial F}{\partial \dot q} = 0$$ which brings us back to the case where $F$ depends only on $q$ and $t$.
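A concrete (purely illustrative) choice of $F$ makes the obstruction explicit:

```latex
% F depending on \dot{q} introduces \ddot{q} into L':
\frac{d}{dt}\left(q\,\dot{q}\right) = \dot{q}^{2} + q\,\ddot{q}
% whereas F depending only on q and t stays within (q, \dot{q}, t):
\frac{d}{dt}\left(q^{2}\right) = 2\,q\,\dot{q}
```

Adding $dF/dt$ with $F = q\dot q$ thus yields a Lagrangian containing $q\ddot q$, outside the first-order setting in which the Euler-Lagrange equation was derived, while $F = q^2$ merely shifts $L$ by a function of $q$ and $\dot q$.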
{ "domain": "physics.stackexchange", "id": 21109, "tags": "classical-mechanics, lagrangian-formalism" }
Is chirality a property of only molecules with exactly 1 chiral centre?
Question: I have seen chirality being defined in a number of ways in terms of a molecule not being superimposable with its mirror image. My syllabus has a statement: "know that optical isomerism is a result of chirality in molecules with a single chiral centre". This seems odd because surely you can have optical isomerism between molecules with more than 1 chiral centre, so why a single one? Overall, I am asking for either a confirmation that the statement is incorrect or an explanation of where my understanding is wrong. Answer: I don't want to try to interpret the intent behind the wording of the phrase you quoted, so I'll just state the situation and you can read into it as you wish. There is no truly general relationship between the number of chiral centres in a compound and whether it displays chirality overall. However, if a compound has exactly one chiral centre, then it must be chiral. Compounds with more than one chiral centre may or may not be chiral, depending on what those chiral centres are and their relative stereochemical relationship. And as has already been rightly pointed out (4 seconds before me), compounds with zero chiral centres may still be chiral.
{ "domain": "chemistry.stackexchange", "id": 12725, "tags": "chirality" }
Compilation of catkin work space fails on beaglebone black
Question: Hello all, I am running Debian 8.7 on the BeagleBone Black and I am following these instructions to help me get ROS installed and running. Everything is working fine until the final step, where the catkin workspace is created. It fails with the following errors: c++: internal compiler error: Killed (program cc1plus) Please submit a full bug report, with preprocessed source if appropriate. See <file:///usr/share/doc/gcc-4.9/README.Bugs> for instructions. ros_comm/roscpp/CMakeFiles/roscpp.dir/build.make:110: recipe for target 'ros_comm/roscpp/CMakeFiles/roscpp.dir/src/libros/subscriber.cpp.o' failed make[2]: *** [ros_comm/roscpp/CMakeFiles/roscpp.dir/src/libros/subscriber.cpp.o] Error 4 CMakeFiles/Makefile2:8823: recipe for target 'ros_comm/roscpp/CMakeFiles/roscpp.dir/all' failed make[1]: *** [ros_comm/roscpp/CMakeFiles/roscpp.dir/all] Error 2 Makefile:138: recipe for target 'all' failed make: *** [all] Error 2 Invoking "make -j1 -l1" failed Can anyone help me out in resolving this error? Many thanks in advance. If anyone has a better suggestion for getting ROS installed on the BBB, please do share with me; I have tried many threads and posts but all are outdated and I am unable to get OS images for the specified methods. Originally posted by aditya369007 on ROS Answers with karma: 18 on 2018-01-23 Post score: 0 Original comments Comment by gvdhoorn on 2018-01-24: c++: internal compiler error: Killed (program cc1plus) These sorts of errors typically come up when your system has run out of memory. Do you have swap enabled? Comment by aditya369007 on 2018-01-24: Thank you for your reply. No, I do not have swap enabled on the BBB. I will try doing that and report my results. Comment by gvdhoorn on 2018-01-25: I would recommend a topic change btw: compilation of your workspace fails. Creating the workspace itself (which is essentially just a directory) does not.
Answer: Hello friend, I regret to say that I have little experience installing ROS on embedded systems that are not officially supported. But I have been fighting for a long time with the problems that arise on the Raspberry Pi, and I do not recognize any of them in the error that you present. I know it does not solve your problem, but I would recommend you change the platform to an embedded system with more documentation, like a Raspberry Pi or an embedded computer. You can even find a memory card with Ubuntu and ROS preinstalled. I hope I could help you, even if only a little. Originally posted by fhfonsecaa with karma: 90 on 2018-01-23 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by aditya369007 on 2018-01-24: Thank you for taking the time to reply to my question. Unfortunately I need the BBB because of its ability to handle analog signals as compared to the Raspberry Pi. I will also try Ubuntu on it as you suggested.
{ "domain": "robotics.stackexchange", "id": 29841, "tags": "ros, catkin, beagleboneblack" }
Discrimination vs Calibration - Machine Learning Models
Question: I came across a new term called Calibration while reading about prediction models. Can you please help me understand how it differs from Discrimination? We build ML models to discriminate two or more classes from one another, but what does calibration mean, and what does it mean to say that "the model has good discriminative power but is poorly calibrated"? I thought we usually only look for separation between 2 classes. Can you help me with this with a simple example, please? Answer: Discrimination is the separation of the classes, while calibration gives us scores based on the risk in the population. For example, there are 100 people that we'd like to predict a disease for, and we know that only 3 out of 100 people have this disease. We get their probabilities from our model. Due to good predictive power, our model predicts probabilities between 0-0.05 for 70 people and 0.95-1 for 30 people. This is good discrimination between classes. We now know that 30 people are at high risk considering only discrimination. But we also know that only 3 out of 100 people get the condition, which is a 3% prevalence. We use the 3% prevalence to calibrate our scores, which will give the actual risk based on a population of 100. That means 0.95 x 0.03 = 0.0285 is their actual risk of the disease. This is a very crude approach; there are more advanced techniques like kernel methods, Platt scaling, etc.
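The crude prevalence rescaling described above takes only a few lines of Python (the numbers mirror the 100-person example; this is purely illustrative and is not Platt scaling):

```python
# Raw model scores: 70 people near 0, 30 people near 1 (good discrimination).
scores = [0.02] * 70 + [0.95] * 30

prevalence = 3 / 100  # 3 of the 100 people actually have the disease

# Crude calibration: rescale every score by the population prevalence.
calibrated = [s * prevalence for s in scores]

# Ranking (discrimination) is untouched; only the absolute risks change.
print(round(max(calibrated), 4))  # 0.0285, the 'actual risk' from the answer
```

The ordering of the scores, and hence the model's discrimination, is unchanged by this rescaling; only the absolute risk values move toward the population rate.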
{ "domain": "datascience.stackexchange", "id": 11694, "tags": "machine-learning, deep-learning, classification, data-mining, dataset" }
Transcription rate expressed in microarray per hour
Question: This article gives measurements of transcription rate, and the unit they're using is microarray per hour. For example, at 27°C the average expression of their genes is 236.1 microarray per hour (page 4). What does this unit mean? Given that each DNA spot in a microarray contains 1 picomole of DNA (wiki), does it mean that the gene is transcribed into 1 picomole of RNA (or of nucleotides?) per hour? Or does it mean that 236.1 RNAs are created every hour? Answer: The units here are relative units of intensity. There may be about a picomole of probes on a microarray spot, but the units of intensity are not scaled to the precise concentration of DNA on the spot. There are many variables which make exact measurements of intensity difficult to translate into the number of RNA bound to a spot. The main one is nonspecific or non-target binding. The oligomer on the array surface is typically between 25 and 50 bases long - it isn't the entire length of the RNA being tested by a given spot. Another RNA may have some or most of the same sequence as a probe, or some labelled RNA will tend to get associated with some spots. Usually factors such as the position of the spot on the array surface, the GC composition of the probe and variations in the batch of RNA preparation will also cause the intensity to vary. Still, even with all this, with the same prep conditions and with an identical set of arrays, it's valid to look at the intensity over time. In this paper, the investigators took array measurements at different time points and at two different temperatures after 4-Thiouracil is added to the cells, which allows one to follow RNA produced by transcription at a given time. Since the units are arbitrary but still proportional to the mRNA found in the cells, a difference experiment like this is valid, even if the absolute concentration of any given RNA is still a bit of a mystery.
{ "domain": "biology.stackexchange", "id": 2759, "tags": "molecular-biology, dna, dna-sequencing, transcription, dna-isolation" }
How do I determine the output equation in the state-space representation?
Question: In the state space representation, the state equation for a linear time-invariant system is: $$ \dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t)+\mathbf{B}\mathbf{u}(t) $$ This state equation can be derived by decomposing an $n^{th}$ order differential equation into $n$ first-order differential equations and then choosing the state variables $x_1(t),x_2(t),...,x_n(t)$ and their derivatives $\dot{x}_1(t),\dot{x}_2(t),...,\dot{x}_n(t)$. The state equation essentially describes the relationship between the state variables and the inputs in $\mathbf{u}(t)$. Additionally, the output equation for a linear time-invariant system is: $$ \mathbf{y}(t) = \mathbf{C}\mathbf{x}(t)+\mathbf{D}\mathbf{u}(t) $$ However, I am not sure how this output equation is derived. More precisely, what is an "output"? Is it the set of state variables and inputs that need to be observed by the engineer or another system downstream? If that is true, then if I have a mass-spring-damper system, where the displacement of the mass is represented by the state variable $x_1(t)$, the velocity of the mass is represented by the state variable $x_2(t)$, and an externally applied force on the mass is represented by the input variable $u_1(t)$, and I was interested in observing/measuring the displacement of the mass, would my output equation then be: $$ y(t) = x_1(t) $$ Alternatively, if I was interested in observing both the displacement of the mass and the externally applied force, then would my output equation be: $$ \mathbf{y}(t) = \begin{bmatrix} y_1(t) \\ y_2(t) \end{bmatrix} = \begin{bmatrix} x_1(t) \\ u_1(t) \end{bmatrix} $$ So far, neither the state variables nor the inputs have been scaled in my output equation. Because of this, I don't understand the purpose of the $\mathbf{C}$ and $\mathbf{D}$ matrices. Could they be used to linearly transform the state variables and inputs for another system downstream? 
From this image of a typical state-space representation (block diagram omitted), it seems that what I am saying is correct, but I would prefer a better explanation. Answer: I am not entirely sure whether your question has to do with the C and D matrices, or with how and why to select the output variables. I'll try to tackle both. Regarding the latter (how and why you decide on the output variables): You are right, in that for a simple system, there is not much point in developing the $\mathbf{y}$ vector and corresponding equation. In terms of logistics, you already have the relevant data. However, in more complex systems you might be interested in the response of only a few of the state variables, or in linear combinations of them. So I tend to think of the $\mathbf{y}$ as a way to perform a "boil down to essentials". However, there are other reasons, which I believe you suspect, such as taking linear combinations of the state variables to obtain a transformed solution. Example: think of a system of two masses connected by springs. You can select the absolute $x_1$ and $x_2$ displacements of the masses. Another equivalent representation is $x_1$ and $x_2-x_1$ (essentially the deformation of the spring). If you are only interested in the deformation of the spring, you might create a $C=[-1, 1]$ and you are done. However, it might be easier to see that it is easier to construct the equations for $x_1$ and $x_2$, because their ODEs are similar (while constructing the ODE for the spring deformation will be different). Bottom line: the state representation makes much more sense in more complex systems. Regarding the use of C and D matrices: The first good reason that comes to mind for the use of C and D matrices is to perform the Observability and Controllability tests. It's in the same Wikipedia link you provided.
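As a quick sketch of the "boil down to essentials" role of $\mathbf{C}$ and $\mathbf{D}$, here is a plain-Python evaluation of $y = Cx + Du$ for the asker's mass-spring-damper choices (the numeric state and input values are made up for illustration):

```python
def output(C, D, x, u):
    """Evaluate y = C x + D u using plain nested lists as matrices."""
    return [sum(c * xi for c, xi in zip(Crow, x)) +
            sum(d * ui for d, ui in zip(Drow, u))
            for Crow, Drow in zip(C, D)]

x = [0.5, 1.2]  # state: displacement x1, velocity x2 (illustrative values)
u = [2.0]       # input: externally applied force

# Observe only the displacement: y = x1
C1, D1 = [[1, 0]], [[0]]
print(output(C1, D1, x, u))  # [0.5]

# Observe the displacement and the applied force: y = (x1, u1)
C2, D2 = [[1, 0], [0, 0]], [[0], [1]]
print(output(C2, D2, x, u))  # [0.5, 2.0]
```

Changing only C and D reshapes what is reported downstream, while the state equation (and the state itself) stays untouched.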
{ "domain": "engineering.stackexchange", "id": 3477, "tags": "control-engineering" }
Why must the alcohol decolorizer be used for 30 seconds or less for Gram staining?
Question: I had a class in microbiology today, and my teacher said that the decolorizer component is the critical one and cannot be left on for more than 30 seconds. The decolorizer has alcohol in high concentration; my lecturer says 95%. I know that alcohol dissolves the lipid membrane, more exactly for RBCs, which should be a similar situation here, though I am not completely sure. What is the critical component that means you cannot keep alcohol on your bacterial samples for more than 30 seconds? Answer: The crystal violet stain will be removed if the decolorizer is left on too long. I'm assuming that the crystal violet complexes that are retained in the gram-positive bacteria's peptidoglycan layers end up being washed away (perhaps from over-dehydration).
{ "domain": "biology.stackexchange", "id": 637, "tags": "alcohol" }
Proving tautology with coq
Question: Currently I have to learn Coq and don't know how to deal with an "or". As an example, as simple as it is, I don't see how to prove: Theorem T0: x \/ ~x. I would really appreciate it if someone could help me. For reference I use this cheat sheet. Also, here is an example of a proof I have in mind, for double negation: Require Import Classical_Prop. Parameters x: Prop. Theorem T7: (~~x) -> x. intro H. apply NNPP. exact H. Qed. Answer: You cannot prove it in "vanilla" Coq, because it is based on intuitionistic logic: From a proof-theoretic perspective, intuitionistic logic is a restriction of classical logic in which the law of excluded middle and double negation elimination are not valid logical rules. There are several ways you can deal with a situation like this. Introduce the law of excluded middle as an axiom: Axiom excluded_middle : forall P:Prop, P \/ ~ P. There is no more need to prove anything after this point. Introduce some axiom equivalent to the law of excluded middle and prove their equivalence. Here are just a few examples. Double negation elimination is one such axiom: Axiom dne : forall P:Prop, ~~P -> P. Peirce's law is another example: Axiom peirce : forall P Q:Prop, ((P -> Q) -> P) -> P. Or use one of De Morgan's laws: Axiom de_morgan_and : forall P Q:Prop, ~(~P /\ ~Q) -> P \/ Q.
{ "domain": "cs.stackexchange", "id": 7769, "tags": "logic, coq" }
What composes the E Field of the Electromagnetic Wave where "disturbances" for propagation occurs?
Question: If electromagnetic waves cause disturbances in the electric field… what "is" in this E field which photons interact with? I ask because in vacuum, there are no electrons to excite. So what is "it" that's adding up in the E field as a disturbance in wave propagation? Answer: You are confused about electromagnetism. The thing is, there generally (if we leave quantum mechanics out of it) is only electromagnetism, which can manifest itself as a magnetic field, an electric field, or usually an electromagnetic field, depending on the reference frame you're observing it from. What you need to know is that moving electric charges create a magnetic field that propagates away from the charges at the speed of light. Static electric charges create an electric field that propagates away from them at the speed of light. A changing magnetic field (accelerating charges) creates an electric field around it. Photons don't interact with the electric field (under normal circumstances). They interact with charged particles like protons or electrons by exerting force on them. You can picture empty space as a space where there are many different fields which are 0 on average. When a photon, which is just an excitation in the electromagnetic field, travels through that space, it is a wave of electric and magnetic potential traveling through the empty space. When it's at a certain point, it increases the value of the electric field at that point, which you can actually measure. Now, if you were to put an electron in that same location, the electric field (caused by photons) would interact with it, pushing it in some direction.
{ "domain": "physics.stackexchange", "id": 37787, "tags": "quantum-field-theory, electromagnetic-radiation, electric-fields" }
Speed of field and data in wire
Question: The speed of the field in a wire is near the speed of light (the speed of the field, not of the electrons), so why is the speed of data given in MB/s? I mean, why does a byte not travel at light speed (field speed)? Answer: The two measurements have nothing to do with each other. The MB/s figure is about how long it takes to transmit a message of a given size. It has nothing to do with the delay between transmission and reception, which is essentially what the speed of light would give.
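Putting illustrative numbers on the distinction (the link rate, cable length, and propagation speed below are all assumed values):

```python
link_rate = 100e6 / 8     # a 100 Mbit/s link, in bytes per second
distance = 1_000_000      # 1000 km of cable, in metres
signal_speed = 2e8        # roughly 2/3 the speed of light, typical in a cable

message = 10 * 1024 * 1024  # a 10 MiB message

latency = distance / signal_speed  # when the first bit arrives
transmit = message / link_rate     # how long it takes to push all bytes out

print(f"propagation delay: {latency * 1e3:.1f} ms")  # 5.0 ms
print(f"transmission time: {transmit:.2f} s")        # 0.84 s
```

The 5 ms delay is set by the field's speed and does not change with message size; the 0.84 s is the MB/s figure at work, and shrinks only if the link gets faster.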
{ "domain": "physics.stackexchange", "id": 74374, "tags": "electricity, electric-circuits, electric-current, speed-of-light, data" }
calloc, malloc, free and realloc wrappers to store the size being allocated
Question: I'm currently building a node.js addon for libcurl. Right now I'm trying to correctly use v8::Isolate::GetCurrent()->AdjustAmountOfExternalAllocatedMemory to update v8 on the amount of memory being allocated by libcurl, and to do that I need to wrap the above mentioned functions that libcurl uses, using curl_global_init_mem(). I saw the following code being used here that basically does the same thing, but for another lib. I currently have the following code (almost identical to the one above): struct MemWrapper { size_t size; double data; }; #define MEMWRAPPER_SIZE offsetof( MemWrapper, data ) inline void* MemWrapperToClient( MemWrapper* memWrapper ) { return static_cast<void*>( reinterpret_cast<char*>( memWrapper ) + MEMWRAPPER_SIZE ); } inline MemWrapper* ClientToMemWrapper( void* client ) { return reinterpret_cast<MemWrapper*>( static_cast<char*>( client ) - MEMWRAPPER_SIZE ); } void AdjustMem( ssize_t diff ) { Nan::AdjustExternalMemory( static_cast<int>( diff ) ); } void* MallocCallback( size_t size ) { size_t totalSize = size + MEMWRAPPER_SIZE; MemWrapper* mem = static_cast<MemWrapper*>( malloc( totalSize ) ); if ( !mem ) return NULL; mem->size = size; AdjustMem( totalSize ); return MemWrapperToClient( mem ); } void FreeCallback( void* p ) { if ( !p ) return; MemWrapper* mem = ClientToMemWrapper( p ); ssize_t totalSize = mem->size + MEMWRAPPER_SIZE; AdjustMem( -totalSize ); free( mem ); } void* ReallocCallback( void* ptr, size_t size ) { if ( !ptr ) return MallocCallback( size ); MemWrapper* mem1 = ClientToMemWrapper( ptr ); ssize_t oldSize = mem1->size; MemWrapper* mem2 = static_cast<MemWrapper*>( realloc( mem1, size + MEMWRAPPER_SIZE ) ); if ( !mem2 ) return NULL; mem2->size = size; AdjustMem( ssize_t( size ) - oldSize ); return MemWrapperToClient( mem2 ); } char* StrdupCallback( const char* str ) { size_t size = strlen( str ) + 1; char* res = static_cast<char*>( MallocCallback( size ) ); if ( res ) memcpy( res, str, size ); return res; } void* 
CallocCallback( size_t nmemb, size_t size ) { size_t totalSize = size + MEMWRAPPER_SIZE; MemWrapper* mem = static_cast<MemWrapper*>( calloc( nmemb, totalSize ) ); if ( !mem ) return NULL; mem->size = nmemb * size; AdjustMem( nmemb * totalSize ); return MemWrapperToClient( mem ); } I hook it up with libcurl calling: curl_global_init_mem( CURL_GLOBAL_ALL, MallocCallback, FreeCallback, ReallocCallback, StrdupCallback, CallocCallback ); I compiled the add-on, and from what I can see, it's working fine, but as I'm inexperienced with C++, I need to know if this is correct. What kind of bad things can happen from modifying the pointer returned by calloc, malloc and realloc like that? Is there any way to improve it? Answer: I'm the author of the code you mentioned, with some suggestions by Nathan Zadoks. I've used the same idea in another project. So any mistakes others may find in the copied code will likely affect my code as well. async? One critical question is whether you intend to perform asynchronous operations. If so, this code will likely fail because Nan::AdjustExternalMemory has to be called from the main Node thread, i.e. must not be called from a worker thread. calloc The CallocCallback is wasteful. Your code allocates nmemb * (size + MEMWRAPPER_SIZE) while a better implementation would only allocate (nmemb * size) + MEMWRAPPER_SIZE. You can't get that by delegating to calloc, though. Instead I'd build on MallocCallback like this (similar to how StrdupCallback works, too): void* CallocCallback( size_t nmemb, size_t size ) { void* ptr = MallocCallback( nmemb * size ); if (!ptr) return NULL; memset( ptr, 0, nmemb * size ); // zero-fill return ptr; } alignment Since Loki raised concerns regarding alignment, I'll explain my rationale. Some platforms will suffer a performance penalty if certain data types are not aligned with memory addresses in a certain way; others may even prevent access entirely, or prevent certain kinds of operation (like SSE).
I'm guessing that the largest data type affected by this in the library in question is probably double. So I'm using a double item as the data entry of the structure, and you copied that. If the compiler thinks alignment is important, it will lay out the struct in such a way that the data item is properly aligned if and only if the structure as a whole is properly aligned. It can accomplish this by introducing some padding between size and data. So for example if sizeof(size_t) == 4 but sizeof(double) == 8 and more importantly alignof(double) == 8, then you'd get sizeof(MemWrapper) == 16, laid out as 4 bytes for size, 4 bytes padding, 8 bytes for data. You'd have offsetof(MemWrapper, data) == 8, taking both size and padding into account. The common calls like malloc will return memory which is properly aligned for any object type. So the wrapper will be aligned, and hence its data portion will be aligned as well. The correct way to actually follow the malloc spec to the letter (instead of relying on some idea of what objects the library actually uses) would be using std::max_align_t (introduced in C++11) instead of double. But the difference only becomes important if the library is dealing in objects larger than a simple double. It might also be the case that the compiler doesn't pad the data structure by default, risking a performance impact or failure of some (e.g. SSE) operations. On GCC-like compilers the aligned attribute can be used to control alignment and prevent packing. A more portable way to achieve the same would probably be the use of an (anonymous) union between the size field and a std::max_align_t element. But I'd only do this if the application in question were using SSE or something like that, since on architectures where unaligned access is a more common problem I'd expect the compiler to take care of this.
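The bookkeeping contract the wrappers must uphold (every positive AdjustMem of size plus header eventually matched by an equal negative one) can be modelled in a few lines of Python, independent of the C++ details; all names here are invented for the sketch:

```python
class TrackingAllocator:
    """Toy model of the size-header accounting done by the wrappers."""
    HEADER = 8  # stands in for MEMWRAPPER_SIZE

    def __init__(self):
        self.external = 0   # mirrors the counter AdjustMem maintains
        self._sizes = {}    # handle -> client-visible size

    def malloc(self, size):
        handle = object()   # stands in for the returned pointer
        self._sizes[handle] = size
        self.external += size + self.HEADER
        return handle

    def realloc(self, handle, size):
        old = self._sizes.pop(handle)
        new_handle = object()
        self._sizes[new_handle] = size
        self.external += size - old  # the header was already counted once
        return new_handle

    def free(self, handle):
        size = self._sizes.pop(handle)
        self.external -= size + self.HEADER

alloc = TrackingAllocator()
p = alloc.malloc(100)       # external: 108
p = alloc.realloc(p, 50)    # external: 58
alloc.free(p)               # external: 0
print(alloc.external)       # 0: the counter balances out
```

The same invariant is a quick sanity check for the real C++ wrappers: after any sequence of allocations and matching frees, the cumulative AdjustMem total must return to zero.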
{ "domain": "codereview.stackexchange", "id": 20045, "tags": "c++, node.js, memory-management, curl" }
What is the sense of introducing generating functional to the summands of expansion of S-matrix?
Question: Let's have the generating functional $Z(J)$: $$ Z(J) = \langle 0|\hat {T}e^{i \int d^{4}x (L_{Int}(\varphi (x)) + J(x) \varphi (x))}|0 \rangle , \qquad (1) $$ where $J(x)$ is the functional argument (the source), $\hat {T}$ is the chronological operator, and $\varphi (x)$ is some field. I want to understand the reasons for introducing it into the summands of the expansion of the S-matrix. As I read in the books, it helps to consider only the vacuum expectation values, forgetting about in- and out-states. But in $(1)$, summands like $\int \frac{J(p)dp}{p^2 - m^2 + i0}$ appear instead of the contributions from external lines. They may refer to the internal lines. So what should be done with them, and are there other reasons for introducing $(1)$ besides the ones I have written? Answer: The primary utility in introducing the generating functional is in using it to compute correlation functions of the given quantum field theory. Let's restrict the discussion to that of a theory of a single, real scalar field on Minkowski space, and let $x_1, \dots, x_n$ denote spacetime points. Of central importance are time-ordered vacuum expectation values of field operators evaluated at such points: \begin{align} \langle0|T[\phi(x_1)\cdots\phi(x_n)]|0\rangle. \end{align} It can be shown that these objects can be obtained from the generating functional by taking functional derivatives with respect to the $J(x_i)$ as follows: \begin{align} \langle0|T[\phi(x_1)\cdots\phi(x_n)]|0\rangle = \frac{1}{Z[0]}\left(-i\frac{\delta}{\delta J(x_1)}\right)\cdots \left(-i\frac{\delta}{\delta J(x_n)}\right)Z[J]\Bigg|_{J=0}. \end{align} This standard fact is proven in many books on QFT. It's often proven using the path integral approach, which makes it pretty transparent why it's true. The crux of the argument is that every time you take a functional derivative with respect to the source $J(x_i)$, it pulls down a factor of the field $\phi(x_i)$.
Dividing by $Z[0]$ is an important normalization relating to vacuum bubbles, and setting $J=0$ after computing the appropriate functional derivatives eliminates terms with more than $n$ factors of the field and renders the final result source-independent as it should be.
{ "domain": "physics.stackexchange", "id": 10577, "tags": "quantum-field-theory, scattering, correlation-functions, s-matrix-theory" }
Show: "Checking no solution for system of linear equations with integer variables and coefficients" $\in \mathbf{NP}$
Question: I've been struggling for a while trying to solve this problem: Show that the following problem is in $\mathbf{NP}$: Check that a system of linear equations with $m$ integer variables and integer coefficients has no solution. Let $L = \{\langle A, b \rangle\ |\ A \in \mathbb{Z}^{m\times m}\text{ and }Ax = b\text{ has no solution for }x \in \mathbb{Z}^{m} \}$ (please, feel free to correct me if I'm wrong anywhere) For showing that a language is in $\mathbf{NP}$ we have to construct a Turing machine, indicating what we have chosen as the certificate. If the TM is nondeterministic, then we nondeterministically pick a certificate and check the condition. If it is deterministic, we pass the certificate as one of the parameters (of the two-parameter TM; the other parameter is, in this case, the pair $\langle A, b \rangle$). I'm having trouble trying to choose the appropriate certificate. My approaches: 1) Certificate: a vector $x \in \mathbb{Z}^{m}$ that doesn't solve $Ax = b$. Issue: We would have to check every possible $x$ to show that no vector solving the system exists... 2) Certificate: the rank of $A$. If $\mathrm{rank}\,A \neq m$, then the system has no solution and we $Accept$. Issue: doesn't necessarily mean that $x \in \mathbb{Z}^{m}$. 3) Consider $\overline{L} = \{\langle A, b \rangle\ |\ A \in \mathbb{Z}^{m\times m}\text{ and }Ax = b\text{ has a solution for }x \in \mathbb{Z}^{m} \}$. Showing that the system has a solution seems easier than showing that there is none. I guess the certificate has to be a vector $x \in \mathbb{Z}^{m}$ for which $Ax = b$ is satisfied. Then, if I'm not mistaken, constructing a polynomial-time deterministic TM for $\overline{L}$ is straightforward. The thing is that this will show that $\overline{L} \in \mathbf{NP}$. This implies that $L \in$ co-$\mathbf{NP}$. So showing that $L \in \mathbf{NP}$ would be equivalent to showing that co-$\mathbf{NP} = \mathbf{NP}$??
Answer: There is a polynomial time algorithm that determines whether a system of linear equations over the integers has a solution. The algorithm uses Hermite normal form, which can be computed in polynomial time. See lecture notes of Swastik Kopparty: Hermite normal form and finding integer solutions. Your proof that your problem is in coNP is incomplete, since you haven't shown that the witness has polynomial size. Assuming you did manage to show it, all this implies is that the problem is in coNP, not that it is coNP-hard. There's absolutely no issue for a problem to be in both NP and coNP. Indeed, every problem in P is also in NP and coNP. If you can put a problem in both NP and coNP, then this suggests that it might actually be in P, though it is conjectured that $\mathsf{P} \subsetneq \mathsf{NP} \cap \mathsf{coNP}$. One problem which might lie in the middle is (a decision version of) integer factorization.
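For intuition on why integer solvability is decidable at all, the single-equation case $a_1x_1+\dots+a_mx_m=b$ reduces to a divisibility test by Bézout's identity; the Hermite normal form mentioned above is what generalizes this gcd reasoning to full systems:

```python
from math import gcd
from functools import reduce

def one_equation_solvable(a, b):
    """a[0]*x1 + ... + a[m-1]*xm = b has an integer solution
    iff gcd(a) divides b (with the convention that gcd of all zeros is 0)."""
    g = reduce(gcd, a)
    return b == 0 if g == 0 else b % g == 0

print(one_equation_solvable([6, 10, 15], 1))  # True: gcd is 1
print(one_equation_solvable([4, 6], 1))       # False: gcd 2 does not divide 1
```

Note that a solvability certificate for one equation (a satisfying integer vector from the extended Euclidean algorithm) has polynomial size, which is exactly the kind of witness-size argument the answer asks for in the coNP direction.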
{ "domain": "cs.stackexchange", "id": 13394, "tags": "turing-machines, np, decision-problem, complexity-classes" }
Methods for state estimation and real-time path planning of a mobile robot
Question: I'm looking to start a project that incorporates some form of state estimation and path planning for a simple simulated robot dynamic model, in an environment that contains obstacles. I'm hoping to use the combination of state estimation and path planning to allow the robot to efficiently navigate through its environment from an arbitrary starting position A, to another arbitrary ending position B, but was unsure where to start. With regards to the state estimation, I thought it would be good to implement a variant of SLAM (possibly Fast SLAM if it isn't too complicated), but I'm quite lost about where to start when it comes to the path planning side of the project, since there seem to be many different ways to do it. The first algorithms that seem to pop up are variants of A* and RRT*, but I was wondering if there are any "state-of-the-art" algorithms that may allow for real-time path planning. My previous work has looked at the use of convex optimization for optimal guidance and control of various dynamic systems, but it seems that using convex optimization would be very difficult in a highly constrained environment (i.e. environment with lots of obstacles). Any help would be much appreciated. Answer: I am assuming that by real time path planning, you mean starting off in a partially known environment and updating your 'plan' as you gain more knowledge through your SLAM algorithm. For a real world scenario, two of the biggest concerns here would be a) taking into account new information from the sensors to update your obstacle map and plan, b) being computationally efficient to actually compute plans quickly during navigation. Real time sampling based path planning has been examined in some recent papers: for state of the art algorithms, you can possibly look into Real Time RRT* (RT-RRT*) or RRTx, that does real time replanning as the environment information changes. There are various other perspectives of looking at this as well. 
On the most basic end of the spectrum, if you're only concerned about, say, dynamically moving obstacles, the simplest algorithm to implement that can deal with planning in that scenario would be the D* algorithm or its anytime variants. On the more advanced side, you can look into the idea of SLAP: simultaneous localization and planning, that attempts to capture factors such as uncertainty and rewiring path plans as SLAM builds your environment and constructs the robot's belief.
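To make the sampling-based idea above concrete, here is a minimal 2D RRT sketch (an illustration of the basic algorithm only, not RT-RRT* or RRTx; all parameter names and values are my own):

```python
import math
import random

def rrt(start, goal, is_free, bounds, step=0.5, goal_bias=0.2,
        goal_radius=0.5, max_iters=5000, seed=0):
    """Minimal 2D RRT: returns a list of waypoints from start to goal,
    or None if no path is found within max_iters samples.
    bounds = (xmin, xmax, ymin, ymax); is_free(p) tests collisions."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # With some probability, steer toward the goal (goal biasing).
        if rng.random() < goal_bias:
            sample = goal
        else:
            sample = (rng.uniform(bounds[0], bounds[1]),
                      rng.uniform(bounds[2], bounds[3]))
        # Nearest existing tree node to the sample.
        i_near = min(range(len(nodes)),
                     key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Steer a fixed step from the nearest node toward the sample.
        t = min(1.0, step / d)
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i_near
        if math.dist(new, goal) < goal_radius:
            # Walk back up the tree to recover the path.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None
```

The real-time variants cited above differ mainly in that they keep and rewire this tree as the obstacle map changes, instead of planning from scratch.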
{ "domain": "robotics.stackexchange", "id": 1660, "tags": "mobile-robot, path-planning, estimation" }
Do gluon jets occur spontaneously in the real world, or only in a particle accelerator?
Question: In this article by Matt Strassler, he says "A struck quark, like any accelerated particle, will radiate. A suddenly accelerated quark will radiate many gluons. So what actually emerges at the edge of the proton is not a fast quark but a collection of fast gluons along with the fast quark. The shape of the jet is actually determined mainly by the way that the gluons are radiated before the quark even emerges from the proton." Is this something that occurs spontaneously in the real world, or only in a particle accelerator? I find this to be an issue in many science papers, where they do not mention if what they are describing only occurs in a lab. Answer: Extremely high-energy cosmic rays, when they strike the Earth's atmosphere, spontaneously cause many of the reactions that we normally see only in particle accelerators, and in particularly high-energy cases it would not at all be surprising to see a gluon jet. In fact, their energy is often orders of magnitude higher than we can produce in particle accelerators. The highest-energy cosmic ray possessed something like 320 EeV, or equivalently 320 million TeV, of energy, while the LHC produces collisions with a center-of-mass energy of a measly 13 TeV in comparison. As far as actually directly observing a gluon jet from a cosmic-ray interaction in the upper atmosphere, the main problem is that we don't really have too many particle detectors that are just sitting in the upper atmosphere, at a spot where a high-energy cosmic-ray interaction happens to take place. So we inevitably observe these reactions from a significant distance, which means the short-length characteristics of the reaction are often obscured by the resulting particle shower as the products of the reaction hit other particles in the atmosphere, dispersing their energy over a wider and wider cone.
{ "domain": "physics.stackexchange", "id": 71714, "tags": "quantum-electrodynamics" }
Why should asteroid coming towards a star deflect away from star if gravity is an attractive force?
Question: I was finding the closest approach of an asteroid (whose energy is sufficient to exit the gravitational field of the star) so that it can escape the gravitational field of the star. The asteroid has a velocity, is being attracted by the star, and is leaving the star. Why should it deflect away from the star like this if the gravitational force is attractive? Well, I think it'll be useful if I mention the whole question, so here it is: An asteroid is approaching a star of radius r. The impact parameter is b. Find the minimum value of b for which the asteroid will just escape falling into the star. My instructor said the drawing should be like this (figure omitted). Why shouldn't it deflect like this (second figure omitted)? My instructor said that if this happens then it will be orbiting the star. Answer: If one thinks of the conic sections which are the solutions of the gravitational equations between two bodies, it is clear that one of the two bodies, the massive one, should be sitting at one of the foci of the conic: The top image in your question does not correspond to this. The lower one does.
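For reference, the quantitative answer to the stated problem follows from two conservation laws (a standard sketch, with my own notation: $v_\infty$ is the asteroid's speed far from the star, $M$ the star's mass, and the limiting trajectory grazes the surface at radius $r$, where the velocity is perpendicular to the radius vector). Angular momentum gives $$m v_\infty b = m v r,$$ and energy gives $$\tfrac{1}{2} m v_\infty^2 = \tfrac{1}{2} m v^2 - \frac{GMm}{r}.$$ Eliminating $v$ yields $$b_{\min} = r\sqrt{1 + \frac{2GM}{r\, v_\infty^2}},$$ and any smaller impact parameter makes the asteroid fall onto the star.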
{ "domain": "physics.stackexchange", "id": 52187, "tags": "forces, planets" }
Why do blue jays eat my cat food?
Question: We buy some run-of-the-mill big-box store cat food. Over the last several weeks, I've observed a blue jay landing on the cat food dish, taking a piece or two, and flying off with it. I assumed it was eating it then, and this morning I just saw it actually eat a piece. Why would a blue jay eat cat food? Edit: I just looked at the ingredient list and apparently corn is the first ingredient. But there's also "poultry by-product" and a variety of other "meats." Would the jay just smell a lot of corn or something? Answer: Like other Corvids (Jays, Crows, and Ravens), Blue Jays are omnivorous and can eat and digest both animal and plant matter. Animal matter can include insects, carrion, small birds and mammals, and the eggs of other birds. Then there is plant matter, including corn, seeds, etc. Your cat food was not only convenient but contains a good source of both animal and plant matter, and the opportunistic tendencies of Blue Jays allowed them to take advantage.
{ "domain": "biology.stackexchange", "id": 5729, "tags": "food, ornithology, diet" }
Relativistic Mass and Gravity
Question: Does relativistic mass affect spacetime in the same way as rest mass? To my understanding (as I am not an actual physicist, but simply a citizen scientist), relativistic mass is really the measure of an object's energy. It is not the same as rest mass, which is the definition of mass that a layperson would be familiar with (how much matter an object is composed of). However, does a change in relativistic mass amount to the same magnitude of gravitational variation as an equivalent change in rest mass? Answer: In general relativity, it is the stress-energy tensor that defines spacetime curvature. Thus mass is a secondary definition. The stress-energy tensor is the source of the gravitational field in the Einstein field equations of general relativity, just as mass density is the source of such a field in Newtonian gravity. Relativistic mass is not a good concept for defining a particle's energy and momentum, which enter the stress-energy equations, although it started from the $T_{00}$ component of the stress-energy tensor, by defining an energy density. Excess velocity will give excess energy and momentum, the same as excess Newtonian mass, but one should keep a distinction between the concepts to avoid confusion.
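For reference, the field equations being referred to can be written (in standard form) as $$G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu},$$ so the source of curvature is the full stress-energy tensor $T_{\mu\nu}$; the energy density $T_{00}$, from which the relativistic-mass idea originates, is only one of its components.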
{ "domain": "physics.stackexchange", "id": 41158, "tags": "general-relativity, gravity, spacetime, mass, mass-energy" }
Detecting boilerplate in text samples
Question: I have a corpus of unstructured text that, due to concatenation from different sources, has boilerplate metadata that I would like to remove. For example:

DESCRIPTION PROVIDED BY AUTHOR: The goal of my ...
Author provided: The goal of my ...
The goal of my ... END OF TRANSCRIPT
The goal of my ... END, SPONSORED BY COMPANY XYZ
The goal of my ... SPONSORED: COMPANY XYZ, All rights reserved, date: 10/21

This boilerplate can be assumed to occur at the beginning or end of each sample. What are some robust methods for wrangling this out of the data?

Answer: This might get you started. Phrase length is determined by the range() function. Basically this tokenizes and creates n-grams. Then it counts each token. Tokens with a high mean over all documents (i.e. that occur often across documents) are printed out in the last line.

from sklearn.feature_extraction.text import CountVectorizer
import numpy as np
import nltk

text = """DESCRIPTION PROVIDED BY AUTHOR: The goal of my a...
Author provided: The goal of my b...
The goal of my c... END OF TRANSCRIPT
The goal of my d... END SPONSORED BY COMPANY XYZ
The goal of my e... SPONSORED: COMPANY XYZ All rights reserved date: 10/21
"""

def todocuments(lines):
    for line in lines:
        words = line.lower().split(' ')
        doc = ""
        for n in range(3, 6):
            ts = nltk.ngrams(words, n)
            for t in ts:
                doc = doc + " " + str.join('_', t)
        yield doc

cv = CountVectorizer(min_df=.5)
fit = cv.fit_transform(todocuments(text.splitlines()))
vocab_idx = {b: a for a, b in cv.vocabulary_.items()}
means = fit.mean(axis=0)
arr = np.squeeze(np.asarray(means))
[vocab_idx[idx] for idx in np.where(arr > .95)[0]]
# ['goal_of_my', 'the_goal_of', 'the_goal_of_my']
{ "domain": "datascience.stackexchange", "id": 545, "tags": "text-mining, data-cleaning, data-wrangling" }
In an entangled system, what happens to Alice's wavefunction right after Bob makes a measurement?
Question: Suppose two entangled particles are far apart. One is with Alice and the other is with Bob. The relative velocity between Alice and Bob is zero (and spacetime is flat), so that we can define a notion of simultaneity that is agreed upon by both observers. Before measurement, the joint wavefunction of the particles is $\frac{1}{\sqrt{3}}|up, down\rangle +\frac{\sqrt{2}}{\sqrt{3}} |down,up\rangle$. The second spin label is that of Bob's particle. Suppose, at time $t_0$, Bob measures his particle and observes $|down\rangle$. Can we say that, after time $t_0$, Alice's description of the joint system should become: $|up, down \rangle$? Or should it become: $$\frac{1}{\sqrt{3}}|up, down, \text{Bob measured down}\rangle +\frac{\sqrt{2}}{\sqrt{3}} |down,up, \text{Bob measured up}\rangle$$ If the first option is correct, how does it not violate locality? I am thinking that the first option involves the information that Bob has made a measurement travelling instantaneously to Alice's end. Answer: If Bob observes "down", due to the wavefunction collapse the full system will be described by : $$ \left|up, down\right\rangle $$ There is indeed some sort of "spooky action at a distance", but this action cannot be used to transfer information. When Bob observes "down", he will instantaneously know the state of the particle in Alice's possession, and yes, that state will instantaneously change for Alice. However, this cannot be used to transfer information. This is mainly because Bob cannot choose what he observes. He will observe "down" $1/3$ of the time, and "up" $2/3$ of the time. Using entanglement, you can create correlations which cannot be made without using the entangled pair (this is at the core of Bell's inequalities), but this does not violate "locality" in the sense that no information or signal can be transmitted faster than light. You are right that there is "something" non-local happening at the moment of the wavefunction collapse.
{ "domain": "physics.stackexchange", "id": 91516, "tags": "wavefunction, quantum-entanglement, quantum-measurements, locality" }
How to discretize a numerical value with predefined ranges in Weka?
Question: I have imported a CSV file into Weka. One of the features has a value with minimum 0 and maximum 160. Now, I want to discretise that value into three ranges, as you can see below:

Less than 6 > L
More than 6 and less than 20 > M
More than 20 > H

How can I do that? Answer: Discretising, or binning, is very common. There is a filter for that in Weka. You'll find it under weka.filters.unsupervised.attribute.Discretize. It is in the GUI too. You can find the documentation under: https://weka.sourceforge.io/doc.dev/weka/filters/unsupervised/attribute/Discretize.html
{ "domain": "datascience.stackexchange", "id": 7019, "tags": "machine-learning, data-mining, feature-engineering, weka" }
Calculate the cost of cement for a project
Question: I am a newbie in programming, and I just randomly chose a task for training from some group on Facebook. The task is to calculate the cost of cement for a construction project. The input is the number of pounds of cement required (guaranteed not to be a multiple of 120). The store sells cement in 120-pound bags, each costing $45. Example input: 295.8 Output: 135

#include <stdio.h>
#include <stdlib.h>

#define PRICE 45
#define CAPACITY 120
#define MAXDIGITS 5

int sum(int);

int main(int argc, char *argv[])
{
    int val = 0;
    char inp[MAXDIGITS];

    if ((argc > 1) && (argv[1] > 0))
        val = strtol(argv[1], NULL, 0);
    else {
        do {
            printf("Please input the value of cement: ");
            scanf("%s", inp);
            val = strtol(inp, NULL, 0);
        } while (!val);
    }
    if (val)
        printf("Money you need: %d\n", sum(val));
    return 0;
}

int sum(int need)
{
    int mon = PRICE;
    int n = CAPACITY;

    while (n < need) {
        n += CAPACITY;
        mon += PRICE;
    }
    return mon;
}

I'm interested in code style, rational memory usage, etc.

Answer: Welcome to Code Review!

Review

#define PRICE 45
#define CAPACITY 120
#define MAXDIGITS 5

You define PRICE and CAPACITY as integers, but a price can have cents, and a bag can maybe contain more than a plain amount in pounds, maybe some ounces more. So you should use decimals here.

if ((argc > 1) && (argv[1] > 0))

Don't compare a char* to an int. This check doesn't ensure that argv[1] is a valid number. If the parameter isn't what you want, you can either print the usage and quit, or silently continue and ask for input.

val = strtol(argv[1], NULL, 0);

You don't validate the program argument. strtol can silently fail and return "0". In this case, a good option would be to ask for a good input. You parse the string to an integer, but the task asks for a decimal number (and shows "295.8" in the example). You could use strtod or atof, but since you are a conscientious programmer, you want to check for validity. So the combo scanf/sscanf (with "%f" or "%lf") is the solution. 
do {
    printf("Please input the value of cement: ");

Try to put a \n before your request. It will make the output clearer. Otherwise, it will print on the same line as the last output.

    scanf("%s", inp);
    val = strtol(inp, NULL, 0);

As above, use scanf("%lf", ...) instead (shorter, and you can check for errors).

int sum(int need)

Once what I said above is fixed (price and capacity as double), your function works fine. However, you can simplify it by doing the computation in only one line (which can be simplified even more with libmath). Here's my corrected version:

#include <stdio.h>
#include <stdlib.h>

#define PRICE 45.
#define CAPACITY 120.

int sum(int);

int main(int argc, char *argv[])
{
    double need = 0.;

    if (argc < 2 || sscanf(argv[1], "%lf", &need) != 1) {
        while (printf("\nPlease input the amount of cement you need (e.g. 295.8): ")
               && scanf("%lf", &need) != 1) {
            for (int c = 0; c != '\n' && c != EOF; c = getchar())
                ;
        }
    }
    printf("For %.02lf pounds you need: $%.02lf!\n", need,
           PRICE * ((int)((int)(need / CAPACITY) * CAPACITY < need) + (int)(need / CAPACITY)));
    // or with "#include <math.h>" and the compiler/linker flag "-lm"
    //printf("For %.02lf pounds you need: $%.02lf!\n", need, PRICE * ceil(need / CAPACITY));
    return 0;
}
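As a sanity check on the arithmetic (my own addition, not part of the review), the same ceiling-division computation in Python reproduces the example from the question:

```python
import math

def cement_cost(pounds, bag_weight=120, bag_price=45):
    # Whole bags needed (round up), times the price per bag.
    return bag_price * math.ceil(pounds / bag_weight)

print(cement_cost(295.8))  # 135: three 120-pound bags at $45 each
```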
{ "domain": "codereview.stackexchange", "id": 32757, "tags": "beginner, c, calculator" }
How can I avoid unchecked cast warning in my generic recursive Iterator?
Question: It's somewhat odd that Java's collection framework has no iterator for recursive data structures. Since I needed something like this, I wrote my own. First off, I need recursive elements:

public interface RecursiveElement<T> {
    public Iterator<T> getChildrenIterator();
}

And then an Iterator:

public class RecursiveIterator<T> implements Iterator<T> {

    private Deque<Iterator<T>> stack;
    private Iterator<T> currentStackItem;

    /**
     * Creates a new instance
     *
     * @param root
     *            all children of this node are iterated. The root node itself
     *            is not returned
     * @throws NullPointerException
     *             if root is null
     */
    public RecursiveIterator(final RecursiveElement<T> root) {
        if (root == null)
            throw new NullPointerException(
                    "root argument to this iterator must not be null");
        stack = new LinkedList<Iterator<T>>();
        currentStackItem = root.getChildrenIterator();
    }

    @Override
    public boolean hasNext() {
        return currentStackItem != null;
    }

    @Override
    public T next() {
        final T result = currentStackItem.next();
        if (result instanceof RecursiveElement) {
            stack.addLast(currentStackItem);
            // Here is the warning:
            currentStackItem = ((RecursiveElement<T>) result).getChildrenIterator();
        }
        while (currentStackItem != null && !currentStackItem.hasNext())
            currentStackItem = stack.pollLast();
        return result;
    }

    @Override
    public void remove() {
        currentStackItem.remove();
    }
}

That code works very well, but I do get a warning from the compiler in the next() method on the line I marked. It is clear to me why this warning occurs, but I have not come up with any solution on how to solve the problem without this warning (save suppressing the warning). Any ideas?

Answer: I don't think you can do anything about this. You have to cast here, and in the process you lose all information about the type parameter: The compiler can't know that if you have a RecursiveElement, it's always a RecursiveElement<T>, and "thanks" to type erasure it can't check the runtime type.
{ "domain": "codereview.stackexchange", "id": 7115, "tags": "java, recursion, casting, iterator" }
Is anyone else thinking about wrapping the leap for ROS?
Question: This seems like it would be really fun in our community! The Leap technology is 200 times more accurate than anything else on the market — at any price point. Just about the size of a flash drive, the Leap can distinguish your individual fingers and track your movements down to a 1/100th of a millimeter. Company website Originally posted by SL Remy on ROS Answers with karma: 2022 on 2012-05-22 Post score: 6 Original comments Comment by Eric Perko on 2012-05-22: Doesn't look like it has Linux support yet... :( Comment by Arkapravo on 2013-04-07: It does support Linux, however I am not sure what is it ! It is surely NOT OpenNi and they have not yet unveiled their technology. So a ROS port-in may only be possible if one can 'hack-into' the hardware, I was seriously thinking on the lines of using Arduino as a middle level hardware. Answer: I just integrated the Leap in ROS. Functionality is limited but you are welcome to fork me on GitHub: https://github.com/warp1337/rosleapmotion Originally posted by flier with karma: 76 on 2013-08-01 This answer was ACCEPTED on the original site Post score: 5 Original comments Comment by mirzashah on 2013-10-23: I'm in the process of packaging florian's work, documentation at http://wiki.ros.org/rosleapmotion
{ "domain": "robotics.stackexchange", "id": 9501, "tags": "ros, pcl, hand-interaction" }
Are single complements the same as separate complements?
Question: Is (A' + B') the same as (A + B)' ? Note: The apostrophe ( ' ) represents the complement Answer: No. As a counterexample pick $A=0$ and $B =1$. Then: $$ A' + B' = 0' + 1' = 1 + 0 = 1 \neq 0 = 1' = (0 + 1)' = (A+B)'. $$
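To complement the counterexample: the correct identity, by De Morgan's law, is $(A + B)' = A'B'$. A quick exhaustive check in Python (my own illustration, not part of the original answer):

```python
# Exhaustive check over Boolean values 0 and 1:
# (A + B)' always equals A'B' (De Morgan), while the proposed
# identity (A + B)' = A' + B' fails whenever exactly one input is 1.
for A in (0, 1):
    for B in (0, 1):
        not_or = 1 - (A | B)              # (A + B)'
        and_of_nots = (1 - A) & (1 - B)   # A'B'
        or_of_nots = (1 - A) | (1 - B)    # A' + B'
        assert not_or == and_of_nots
        if A != B:                        # e.g. the answer's A = 0, B = 1
            assert not_or != or_of_nots
```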
{ "domain": "cs.stackexchange", "id": 18259, "tags": "logic" }
Writing a tail command clone
Question: I'm reading Bruce Molay's book on Unix programming, and as an exercise I've implemented a copy of the tail command. My approach goes over the entire file once, to count newlines, then again to store their positions in an array, and finally a third time to print out the bytes past the position of the nth newline. I know this must be inefficient, though I'm not sure of the ideal way to go about it. Another point of uncertainty: to get around seeking restrictions on stdin, I write a copy of it to a temp file, then perform all operations on that. I think there's probably a way to do this without a temp file, though I'm not sure what it is.

tail1.c

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>

#define BUFSIZE 4096
#define N_LINES 10
#define TMPFILE "/tmp/stdin_tmpf.bin"

void oops(char *s1, char *s2)
{
    fprintf(stderr, "Error: %s\n(errno) ", s1);
    perror(s2);
    exit(1);
}

unsigned count_chars(const char *str, char byte, int n_chars)
{
    int cnt = 0;
    for (int i = 0; i < n_chars; ++i)
        if (str[i] == byte)
            ++cnt;
    return cnt;
}

unsigned find_cutoff(int fd)
{
    int linecnt, n_chars;
    int subpos, block;
    char cbuf[BUFSIZE];

    subpos = block = linecnt = n_chars = 0;

    // count lines to allocate linelocs array
    while ((n_chars = read(fd, cbuf, BUFSIZE)) > 0)
        linecnt += count_chars(cbuf, '\n', n_chars);
    if (linecnt <= N_LINES)
        return 0;

    // array of positions of newlines
    int linelocs[linecnt];
    linelocs[0] = 0;
    int loc_index = 0;

    if (lseek(fd, 0, SEEK_SET) == -1)
        oops("couldn't seek start", "");
    while ((n_chars = read(fd, cbuf, BUFSIZE)) > 0) {
        for (int i = 0; i < n_chars; ++i)
            if (cbuf[i] == '\n') {
                loc_index++;
                subpos = i + 1;
                linelocs[loc_index] = (BUFSIZE * block) + subpos;
            }
        block++;
    }
    return linelocs[linecnt - N_LINES];
}

// create temporary file holding stdin contents
int stdin_tmpf()
{
    int out_fd, in_fd;
    int n_chars;
    char buf[BUFSIZE];

    if ((in_fd = fileno(stdin)) == -1)
        oops("couldn't open stdin", "");
    if ((out_fd = open(TMPFILE, O_RDWR | O_CREAT)) == -1)
        oops("failed to create tmpf", "");
    while ((n_chars = read(in_fd, buf, BUFSIZE)) > 0)
        if (write(out_fd, buf, n_chars) != n_chars)
            oops("read/write error", "stdin_tmpf");
    if (lseek(out_fd, 0, SEEK_SET) == -1)
        oops("seek failure", "stdin_tmpf");
    return out_fd;
}

int main(int argc, char **argv)
{
    int in_fd, out_fd;
    bool cleanup = false;

    if (argc == 1) {
        in_fd = stdin_tmpf();
        cleanup = true;
    } else if ((in_fd = open(argv[1], O_RDONLY)) == -1)
        oops("Couldn't open file", argv[1]);
    if ((out_fd = fileno(stdout)) == -1)
        oops("Couldn't open stdout", "");

    unsigned cutoff = find_cutoff(in_fd);
    int n_chars;
    char buf[BUFSIZE];

    if (lseek(in_fd, cutoff, SEEK_SET) == -1)
        oops("couldn't seek cutoff", "");
    // TODO int to str
    while ((n_chars = read(in_fd, buf, BUFSIZE)) > 0)
        if (write(out_fd, buf, n_chars) != n_chars)
            oops("couldn't write stdout", "");
    if (close(in_fd) == -1 || close(out_fd) == -1)
        oops("couldn't close files", "");
    if (cleanup && unlink(TMPFILE) == -1)
        oops("failed to cleanup", TMPFILE);
    return 0;
}

Quite new to C programming, I've been a Python programmer for many years. Any tips are greatly appreciated!

Answer:

You only need one pass

Your solution of making three passes over the input has some big problems. Most notably, if the input is very large, you also need to create a large temporary file. But you now also have to deal with issues surrounding temporary files, like what if I run two instances of tail in parallel? What if someone created a symlink from /tmp/stdin_tmpf.bin to some other file? You can avoid all this by doing only a single pass over the input. The trick is that you know you only need to remember the last N_LINES, so just create a buffer for N_LINES lines. Start filling the buffer, and once it holds N_LINES lines, when you read in the next line, delete the oldest line. Once you finished reading the input, just write out the contents of the buffer. 
Note that this is also what the coreutils tail program does.

Use size_t for sizes, counts and indices

I see you use unsigned and int interchangeably for keeping track of counts, like n_chars and cnt in count_chars(). However, the right type to use is size_t. Do this wherever appropriate.

Reporting errors

I see you check every return value and report an error both to stderr and by exiting with a non-zero exit code. That's very good! However, I don't see why you both do a fprintf(stderr, ...) and call perror(). I think one is enough. Also, prefer using EXIT_FAILURE as the exit code. If you are targeting Linux or BSD only, you might consider calling err() instead of your own oops() function.
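The one-pass ring-buffer idea described above can be sketched in a few lines (shown in Python rather than C for brevity; this is an illustration of the approach, not a drop-in replacement for the reviewed program):

```python
from collections import deque

def tail_lines(stream, n=10):
    # A deque bounded to n entries silently discards the oldest line
    # each time a new one is appended, so memory stays O(n) regardless
    # of input size, and no temporary file or seeking is needed.
    last = deque(maxlen=n)
    for line in stream:
        last.append(line)
    return list(last)
```

Because the stream is consumed strictly front to back, this works unchanged for unseekable inputs such as stdin or pipes.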
{ "domain": "codereview.stackexchange", "id": 41516, "tags": "c, reinventing-the-wheel, linux, unix" }
/rviz_visual_tools not shown in MarkerArray topics
Question: I'm using ROS 2 Humble. I'm working on the MoveIt 2 tutorial. After creating the source code I have to add the RvizVisualToolsGui and the MarkerArray. In the MarkerArray I have to change the topic to /rviz_visual_tools. Unfortunately it doesn't show this topic in the drop-down menu. I can just choose between /visualization_marker_array, /display_cost_source and /display_contacts. After doing the step-by-step tutorial I just copied the whole source code but there is still the same problem. How can I solve this? Originally posted by LucB on ROS Answers with karma: 42 on 2022-10-11 Post score: 0 Answer: I tried to type the topic in manually (without using the drop-down menu). This worked for me. Originally posted by LucB with karma: 42 on 2022-10-12 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 38033, "tags": "ros2, c++, moveit" }
How can I self-learn Electrical Engineering?
Question: I know it is an awkward question, but I love maths, physics, and engineering! Due to my health problems (I am suffering from psoriasis) I was not able to opt for PCM (Physics, Chemistry, and Mathematics) as a stream. But I love it so much that I just started learning maths and physics from the internet. It would be amazing if anyone could suggest how I can self-learn the entire electrical engineering curriculum by myself. Please suggest some resources and what I need to focus upon. Also, what are the things that I already need to know? A detailed answer would be amazing. Please help me out with this. I also have several other interests, but I would like to tackle this one first. Update: Now this would be kinda funny: I am just 18 years old. But I already know single-variable and multivariable calculus. I also know like 2-3 programming languages (that is what inspired me toward electronics). But the worst thing is that I took commerce in class 11 as a stream. Now no schools and colleges will offer me any course in engineering. But I don't think that will ever stop anyone from learning. The things that I already know about are MIT OCW, edX, Coursera, and UC Irvine OCW, and I am pursuing things from these online MOOCs, but books and other things through which I will be able to achieve what I want would be great!
A word of warning to you however is that be aware that at times it might be tough. There is a reason that this degree is generally 3-5 years full time depending on where you take it. For me it was 5 years (3 yr bachelor and 2yr master).
{ "domain": "physics.stackexchange", "id": 63309, "tags": "electrical-engineering" }
Probability current vs. direction of wave function
Question: I did an exercise for my Quantum Mechanics lecture: Let $\hbar$=2m=1. A particle in 1 dimension has $j(x)=2\ Im(\overline{\psi} (x) \ \psi'(x))$ and one has to show that there are superpositions $\psi (x) = a_1 e^{i k_1 x} + a_2 e^{i k_2 x}$, where $k_1, k_2 > 0$, of waves which propagate to the right but have j(0)<0 at x=0. You can show that by calculating j(0), which leads to a quadratic form in $a_1,\ a_2$ that is not positive semidefinite. (Remark: This superposition cannot be normalized, but the exercise states that there are analogous waves which can.) I have trouble understanding that. How can the wave (and therefore the probability of the particle to be at position x) propagate to the right when the current is negative? Maybe someone could explain how to think about this? Edit: The official solution of the exercise: "With $\psi'=i(k_1 a_1 e^{i k_1 x} + k_2 a_2 e^{i k_2 x})$ we have: $\overline \psi(0) \psi'(0)=\sum_{i,j=1}^{2}i\ \overline{a}_i k_ja_j$ and $j(0)=\sum_{i,j=1}^{2}(k_i + k_j) \overline{a}_i a_j$. This quadratic form in $a_1, a_2$ is not positive semidefinite because the determinant is given by $-(k_1 - k_2)^2 < 0$." Answer: To your question "How can the wave propagate to the right when the current is negative?" I will answer that your statement that "the wave propagates to the right" is not exactly correct: what you have to consider here is group velocity, not each individual phase velocity. Since both plane waves propagate to the right with $k_1, k_2>0$, you implicitly assume $\omega_1 \equiv \omega(k_1)>0$ and $\omega_2 \equiv \omega(k_2)>0$, but your problem gives no more information about the dispersion relation. In order to have a better idea of what the flow of probability density is, you need to consider the group velocity, here given by $\dfrac{\Delta \omega}{\Delta k} = \dfrac{\omega_2-\omega_1}{k_2-k_1}$, which could indeed have any sign, depending on the dispersion relation.
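A concrete instance (numbers of my own choosing, not from the exercise): with $k_1 = 1$, $k_2 = 3$, $a_1 = 1$, $a_2 = -2/3$, both plane-wave components move to the right, yet $j(0) = 2\,\mathrm{Im}(\overline{\psi}(0)\psi'(0)) = -2/3 < 0$:

```python
k1, k2 = 1.0, 3.0          # both wavenumbers positive: right-moving components
a1, a2 = 1.0, -2.0 / 3.0   # amplitudes chosen to make the form negative

psi0 = a1 + a2                      # psi(0)  = a1 + a2
dpsi0 = 1j * (k1 * a1 + k2 * a2)    # psi'(0) = i (k1 a1 + k2 a2)
j0 = 2 * (psi0.conjugate() * dpsi0).imag
print(j0)  # -0.666...: negative current despite k1, k2 > 0
```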
{ "domain": "physics.stackexchange", "id": 28632, "tags": "quantum-mechanics, homework-and-exercises, wavefunction, electric-current, probability" }
Sum of primes up to 2 million using the Sieve of Eratosthenes
Question: I've solved question 10 on Project Euler using the Sieve of Eratosthenes; what can I do to optimize my code?

def prime_sum(n):
    l = [0 for i in range(n+1)]
    l[0] = 1
    l[1] = 1
    for i in range(2, int(n**0.5)+1):
        if l[i] == 0:
            for j in range(i * i, n+1, i):
                l[j] = 1
    s = 0
    for i in range(n):
        if l[i] == 0:
            s += i
    print(s)

if __name__ == '__main__':
    prime_sum(2000000)

Answer:

Organization

Your function prime_sum() does three things:

It computes primes up to a limit
It sums up the computed primes
It prints out the sum.

What are the odds the next Euler problem will use prime numbers? You're going to have to write and re-write your sieve over and over again if you keep embedding it inside other functions. Pull it out into its own function. Summing up the list of prime numbers you've generated should be a different function. Fortunately, Python comes with this function built-in: sum(). Will you always print the sum of primes every time you compute it? Maybe you want to test the return value without printing it? Separate computation from printing. Reorganized code:

def primes_up_to(n):
    # omitted

def prime_sum(n):
    primes = primes_up_to(n)
    return sum(primes)

if __name__ == '__main__':
    total = prime_sum(2000000)
    print(total)

Bugs

As your code presently reads, you compute primes up to n, using range(n+1) for the loop. The +1 ensures you actually include the value n in the loop. Then, you sum up all the primes using range(n) ... which means you stop counting just before n. Not a problem if you pass a non-prime number for n, but if you passed in a prime, your code would stop one number too early. It would be much easier to increase n by 1 at the start, to avoid the need to recompute it all the time, and risk accidentally forgetting a +1.

Memory usage

Lists in Python are wonderful, flexible, and memory hungry. A list of 2 million integers could take 16 million bytes of memory, if each element of the list is an 8 byte pointer. 
Fortunately, the integers 0 and 1 are interned, so no additional memory is required for the storage of each value, but if arbitrary integers were stored in the list, each integer can take 24 bytes or more, depending on the magnitude of the integer. With 2 million of them, that’s an additional 48 million bytes of memory. If you know in advance you are going to be working with 2 million numbers, which will only ever be zeros or ones, you should use a bytearray(). def bytearray_prime_sum(n): n += 1 sieve = bytearray(n) # An array of zero flags sieve[0] = 1 sieve[1] = 1 # ... etc ... There are a few tricks for speeding up your sieve. Two is the only even prime. You can treat it as a special case, and only test odd numbers, using a loop over range(3, n, 2). Once you’ve done that, when marking off multiples of a prime, you can loop over range(i*i, n, 2*i), since the even multiples don’t need to be considered. Finally, when generating the final list of primes (or summing up the primes if you are generating and summing in one step), you can again skip the even candidates, and only consider the odd candidates using range(3, n, 2). Just remember to include the initial 2 in some fashion. def bytearray_prime_sum(n): n += 1 total = 2 flags = bytearray(n) for i in range(3, n, 2): if not flags[i]: total += i for j in range(i*i, n, 2*i): flags[j] = 1 return total Memory usage: Take 2 Since we are only storing 0 and 1 flags, a bytearray actually uses 8 times more memory than is necessary. We could store the flags in individual bits. First, we'll want to install the bitarray module: pip3 install bitarray Then, we can reimplement the above using a bitarray instead of a bytearray.
from bitarray import bitarray def bitarray_prime_sum(n): n += 1 total = 2 flags = bitarray(n) flags.setall(False) for i in range(3, n, 2): if not flags[i]: total += i flags[i*i::2*i] = True return total Most notably in this bitarray implementation is the flagging of the multiples of a prime became a single statement: flags[i*i::2*i] = True. This is a slice assignment from a scalar, which is a fun and powerful extra tool the bitarray provides. Timings Unfortunately, the above graph shows Justin's implementation has actually slowed down the code over the OP's, due to a mistake Justin made when profiling his "improvement."
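To make the answer's claims reproducible, here is a minimal self-checking harness for the bytearray variant described above (the bitarray version is omitted since it needs a third-party package). naive_prime_sum is only a slow trial-division cross-check added for illustration, not part of the reviewed code:

```python
import timeit

def bytearray_prime_sum(n):
    """Odd-only bytearray sieve, as developed in the answer above."""
    if n < 2:
        return 0
    n += 1
    total = 2                      # 2 is the only even prime
    flags = bytearray(n)
    for i in range(3, n, 2):
        if not flags[i]:
            total += i
            for j in range(i * i, n, 2 * i):  # skip even multiples
                flags[j] = 1
    return total

def naive_prime_sum(n):
    """Slow trial-division reference, used only to cross-check the sieve."""
    def is_prime(k):
        if k < 2:
            return False
        d = 2
        while d * d <= k:
            if k % d == 0:
                return False
            d += 1
        return True
    return sum(k for k in range(n + 1) if is_prime(k))

if __name__ == "__main__":
    assert bytearray_prime_sum(100) == naive_prime_sum(100) == 1060
    assert bytearray_prime_sum(2_000_000) == 142913828922  # Project Euler 10
    print(timeit.timeit(lambda: bytearray_prime_sum(2_000_000), number=1))
```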
{ "domain": "codereview.stackexchange", "id": 35018, "tags": "python, performance, programming-challenge, primes, sieve-of-eratosthenes" }
How close to Earth's core can organisms live?
Question: We don't know much about organisms living deep below the Earth's crust. Recently a team led by S. Giovanni discovered some microbes 300 m below the ocean floor. The microbes were found to be a completely new and exotic species and apparently they feed off hydrocarbons like methane and benzene. Scientists speculate that life may exist in our Solar System far below the surface of some planets or moons. This raises some questions: What is the theoretical minimum distance from Earth's core where life can still exist? Please explain how you came up with this number. For example, there are temperature-imposed limits on many biochemical processes. Is there the potential to discover some truly alien life forms in the Earth's mantle (by this I mean, life which is not carbon based, or life which gets its energy in ways we have not seen before, or non DNA-based life, or something along these lines)? What is the greatest distance below the Earth's crust that life has been discovered? I believe it is the 300 m I cited above, but I am not 100% sure. Answer: There's a lot we don't know about life in deep caves, but we can bound the deepest living organism to at least 3.5 kilometers down, and probably not more than 30 kilometers down. The worms recovered from deep mining boreholes are not particularly specifically adapted to live that far down: they have similar oxygen/temperature requirements as surface nematodes. The Tau Tona mine is about 3.5 kilometers deep and about 60˚ C at the bottom. Hydrothermal vent life does just fine up to about 80˚C, and the crust gets warmer at "about" 25˚C per kilometer. It's entirely reasonable to expect life to about 5 kilometers down, but further than that is speculation. Increasing pressure helps to stabilize biological molecules that would otherwise disintegrate at those temperatures, so it's not impossible there could be life even deeper. It may even be likely, given that the Tau Tona life breathes oxygen.
I am certain no life we might recognize as life exists in the upper mantle.
{ "domain": "biology.stackexchange", "id": 8003, "tags": "life, extremophiles" }
In a binary system, why $m_1 a_1 = m_2 a_2$?
Question: In the center of mass coordinate, $m_1 r_1 = m_2 r_2$, which is straightforward. Yet in this detailed derivation of radial velocity page 27, it says that the $r_1$, which is the magnitude of the vector pointing from the CM to the star, is simply the semi‐major axis of the star’s orbit around the mutual CM, $a_1$. With the same reasoning, the $r_2$, which is the magnitude of the vector pointing from the CM to the planet, is the semi‐major axis of the planet’s orbit around the mutual CM, $a_2$. Therefore $m_1 a_1 = m_2 a_2$. However, I do not see why $r_1 (r_2)$ can be identical to $a_1(a_2)$ since the $r's$ are both changing (in an elliptical orbit for example) while the $a's$ are fixed. Or the other way to ask this question is I do not see why $r_1/a_1 = r_2/a_2$? Answer: It’s just poorly written. He doesn’t really mean that $r$ is $a$. He means that $r$ is $a$ at one point on the orbit, so if $m_1r_1=m_2r_2$ for the whole orbit then $m_1a_1=m_2a_2$.
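A slightly fuller version of that argument, sketched in LaTeX: because the mass ratio is constant in time, the two ellipses are similar figures, so the ratio of any pair of corresponding lengths (semi-major axes included) equals the ratio of the instantaneous distances.

```latex
m_1 r_1(t) = m_2 r_2(t) \;\;\forall t
\quad\Longrightarrow\quad
\frac{r_1(t)}{r_2(t)} = \frac{m_2}{m_1} = \text{const.}
\quad\Longrightarrow\quad
\frac{a_1}{a_2} = \frac{m_2}{m_1}
\quad\Longrightarrow\quad
m_1 a_1 = m_2 a_2 .
```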
{ "domain": "physics.stackexchange", "id": 61058, "tags": "newtonian-mechanics, newtonian-gravity, reference-frames, orbital-motion, celestial-mechanics" }
object movement in simulator-gazebo
Question: From the Gazebo tutorial link I learned to spawn objects in Gazebo, but can anyone tell me how to move these objects in Gazebo. For example, if I want to move an object under a force of 1 N in some x direction, how will I do it? Also, please give me a link to a tutorial. Originally posted by iit.saurav on ROS Answers with karma: 1 on 2012-12-06 Post score: 0 Answer: From the tutorial here you can find out how you can apply a downward force: rosservice call gazebo/apply_body_wrench '{body_name: "top::my_top" , wrench: { force: { x: 0.0, y: 0, z: -0.1 } , torque: { x: 0.0, y: 0 , z: 0.0 } }, start_time: 10000000000, duration: -1000000000 }' Just apply a force in the x direction. Originally posted by davinci with karma: 2573 on 2012-12-07 This answer was ACCEPTED on the original site Post score: 1
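If you call this service repeatedly it helps to build the YAML payload in one place. Below is a small sketch (pure Python, no ROS needed to run it) that assembles the same rosservice command as the answer; apply_wrench_cmd is a hypothetical helper, and actually executing the command (e.g. via subprocess) still requires a running roscore and Gazebo:

```python
def apply_wrench_cmd(body, force, torque=(0.0, 0.0, 0.0),
                     start_time=0, duration=-1_000_000_000):
    """Assemble the `rosservice call gazebo/apply_body_wrench` command line
    shown above.  Only builds the argument list; it does not talk to ROS."""
    fx, fy, fz = force
    tx, ty, tz = torque
    payload = (
        '{body_name: "%s", '
        'wrench: {force: {x: %s, y: %s, z: %s}, '
        'torque: {x: %s, y: %s, z: %s}}, '
        'start_time: %s, duration: %s}'
        % (body, fx, fy, fz, tx, ty, tz, start_time, duration)
    )
    return ["rosservice", "call", "gazebo/apply_body_wrench", payload]

# A 1 N push along +x on the body name used in the answer:
cmd = apply_wrench_cmd("top::my_top", force=(1.0, 0.0, 0.0))
```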
{ "domain": "robotics.stackexchange", "id": 12013, "tags": "gazebo" }
Generalised NxN Sudoku solver using heap
Question: My implementation of a Sudoku solver. It isn't done using the most naive way but still it does an exhaustive search with some assistance from a heap. The only constraints I have used are the basic rules of Sudoku (a number can occur only once in a row, column and its box). There probably are more techniques or reasonings with which it can be improved but before that I would like to get this as optimized as possible. I would appreciate any advice on how to make it faster and how my code can be made compatible with modern C++ best practices. Thank you for your time! Edit: I forgot to mention the main idea here. The heap is used to choose the next cell having the least total of possible numbers it can be filled with. When you place one of the possible numbers in that cell, say n in cell (x, y), then n is removed from the list of possibilities of all cells in row x, column y and the box which (x, y) belongs to AND these changes are reflected in the heap. To backtrack, n is added back to those lists (these changes too are reflected in the heap). When the heap becomes empty, all cells have been filled and we have found a solution. #include <iostream> #include <vector> #include <unordered_map> using namespace std; // table to calculate no.
of set bits in a number vector<int> bitset_table(256); // function to print the board ostream& operator<< (ostream& out, const vector<vector<int>>& M) { for (const vector<int>& V : M) { for (int e : V) out << e << ' '; out << endl; } return out; } // function used by heap to order it's elements based on the contents of `*ptr1` and `*ptr2` bool isLower(const int* ptr1, const int* ptr2) { int size1, size2; size1 = bitset_table[*ptr1 & 0xff] + bitset_table[*ptr1 >> 8 & 0xff] + bitset_table[*ptr1 >> 16 & 0xff] + bitset_table[*ptr1 >> 24 & 0xff]; size2 = bitset_table[*ptr2 & 0xff] + bitset_table[*ptr2 >> 8 & 0xff] + bitset_table[*ptr2 >> 16 & 0xff] + bitset_table[*ptr2 >> 24 & 0xff]; return size1 < size2; } class Heap { private: int heap_size; // no. of elements in the heap vector<int*> A; // heap container of elementes of type `int*` (for 1 by 1 mapping), note that `A.size()` can be greater than `heap_size` unordered_map<int*, int> mapping; // mapping to keep track of the index of `int*` in `A` int parent(int i) { return (i - 1) / 2; } int left(int i) { return 2 * i + 1; } int right(int i) { return 2 * i + 2; } // taken from CLRS. Puts A[i] at the correct place by "heapifying" the heap (requires A[left(i)] and A[right(i)] to follow heap propertey.) void minHeapify(int i) { int l, r, smallest; l = left(i); r = right(i); smallest = i; if (l < heap_size && isLower(A[l], A[i])) smallest = l; if (r < heap_size && isLower(A[r], A[smallest])) smallest = r; if (smallest != i) { swap(mapping[A[i]], mapping[A[smallest]]); swap(A[i], A[smallest]); minHeapify(smallest); } } // updated key at A[i] is pushed towards the top of the heap if it's priority is high otherwise towards the bottom. 
void heapUpdateKey(int i) { if (i == 0 || !isLower(A[i], A[parent(i)])) minHeapify(i); else { int p = parent(i); while (i > 0 && isLower(A[i], A[p])) { swap(mapping[A[i]], mapping[A[p]]); swap(A[i], A[p]); i = p; p = parent(i); } } } public: Heap() : heap_size(0) {} // `opt = 0` means delete `val` from `*ptr`, otherwise insert. // if it fails to detele, return false. (this fact is used in `search` method) bool heapUpdateKey(int *ptr, int opt, int val) { if (mapping.find(ptr) == mapping.cend() || (opt == 0 && !(*ptr & (1 << val)))) return false; if (opt == 0) *ptr &= ~(1 << val); else *ptr |= 1 << val; heapUpdateKey(mapping[ptr]); return true; } // inserts element at the end of the heap and calls `heapUpdateKey` on it void insert(int *ptr) { if (heap_size < A.size()) A[heap_size] = ptr; else A.push_back(ptr); mapping[ptr] = heap_size; heapUpdateKey(heap_size++); } // returns the element at the top of the heap and heapifies the rest of the heap. int* heapExtractMin() { //if (heap_size == 0) //return nullptr; int *res = A[0]; mapping.erase(res); A[0] = A[--heap_size]; mapping[A[0]] = 0; minHeapify(0); return res; } bool isEmpty() { return heap_size == 0; } }; class Solve { private: int N; // recursive function which basically performs an exhaustive search using backtracking bool search(Heap& H, unordered_map<int*, unordered_map<int, vector<int*>>>& adj, vector<vector<int>>& board, unordered_map<int*, pair<int, int>>& mapping) { if (H.isEmpty()) return true; int *ptr = H.heapExtractMin(); pair<int, int>& p = mapping[ptr]; for (int k = 1; k <= N; ++k) if (*ptr & (1 << k)) { board[p.first][p.second] = k; vector<int*> deleted_from; for (int *ptr2 : adj[ptr][k]) if (H.heapUpdateKey(ptr2, 0, k)) deleted_from.push_back(ptr2); if (search(H, adj, board, mapping)) return true; for (int *ptr2 : deleted_from) H.heapUpdateKey(ptr2, 1, k); } H.insert(ptr); return false; } public: Solve() {} Solve(vector<vector<int>>& board) : N(board.size()) { int n = (int)ceil(sqrt(N)); if (n*n != 
N) exit(0); // look at already filled cells like number 5 at cell say (x, y). // set the 5th bit at rows[x], columns[y] and the 3x3 (for 9x9 Sudoku) box which (x, y) belongs to. vector<int> rows(N), columns(N), boxes(N); for (int i = 0; i < N; ++i) for (int j = 0; j < N; ++j) if (board[i][j]) { int bit = 1 << board[i][j]; rows[i] |= bit; columns[j] |= bit; boxes[(i / n)*n + (j / n)] |= bit; } // possibilities[i][j] = list of numbers which the cell (i, j) can be filled with. // &possibilities[i][j] is the pointer int* used in the heap. vector<vector<int>> possibilities(N, vector<int>(N)); // mapping used in `search` method to get the coordinates (i, j) which &possibilities[i][j] represents. unordered_map<int*, pair<int, int>> mapping; // look at yet to be filled cells and calculate it's possibilities[i][j] for (int i = 0; i < N; ++i) for (int j = 0; j < N; ++j) if (!board[i][j]) { mapping.emplace(&possibilities[i][j], make_pair(i, j)); for (int k = 1; k <= N; ++k) { int bit = 1 << k; if (!(rows[i] & bit) && !(columns[j] & bit) && !(boxes[(i / n)*n + (j / n)] & bit)) possibilities[i][j] |= bit; } } // adjacency list used in 'search' method. // adj[p][k] is the list of pointers (of cells, i.e., &possibilities[i][j]) which are adjacent to cell at pointer p (same row, column and box) // and have their kth bit set. It seems complex and conjested but it simply creates adjencty list for adj[p][k] for all values of p and k. 
unordered_map<int*, unordered_map<int, vector<int*>>> adj; for (int i = 0; i < N; ++i) for (int j = 0; j < N; ++j) if (possibilities[i][j]) { for (int k = 0; k < N; ++k) if (!board[i][k] && k / n != j / n) for (int l = 1; l <= N; ++l) if (possibilities[i][k] & (1 << l)) adj[&possibilities[i][j]][l].push_back(&possibilities[i][k]); for (int k = 0; k < N; ++k) if (!board[k][j] && k / n != i / n) for (int l = 1; l <= N; ++l) if (possibilities[k][j] & (1 << l)) adj[&possibilities[i][j]][l].push_back(&possibilities[k][j]); int ti, tj; ti = (i / n)*n, tj = (j / n)*n; for (int tti = 0; tti < n; ++tti) for (int ttj = 0; ttj < n; ++ttj) if (!board[ti + tti][tj + ttj] && (ti + tti != i || tj + ttj != j)) for (int l = 1; l <= N; ++l) if (possibilities[ti + tti][tj + ttj] & (1 << l)) adj[&possibilities[i][j]][l].push_back(&possibilities[ti + tti][tj + ttj]); } // create heap and insert the address (int*) of the list of possibilities of unfilled cells. Heap H; for (int i = 0; i < N; ++i) for (int j = 0; j < N; ++j) if (possibilities[i][j]) H.insert(&possibilities[i][j]); if (search(H, adj, board, mapping)) cout << board << endl; } }; int main() { // fill the bitset_table (bitset_table[i] = no. 
of set bits of i) for (int i = 1; i < bitset_table.size(); ++i) bitset_table[i] = (i & 1) + bitset_table[i / 2]; int N; cin >> N; vector<vector<int>> board(N, vector<int>(N)); for (int i = 0; i < N; ++i) for (int j = 0; j < N; ++j) cin >> board[i][j]; Solve obj(board); } Some puzzles you can try: 9 8 0 0 0 0 0 0 0 0 0 0 3 6 0 0 0 0 0 0 7 0 0 9 0 2 0 0 0 5 0 0 0 7 0 0 0 0 0 0 0 4 5 7 0 0 0 0 0 1 0 0 0 3 0 0 0 1 0 0 0 0 6 8 0 0 8 5 0 0 0 1 0 0 9 0 0 0 0 4 0 0 16 0 2 14 0 0 0 16 4 0 0 0 1 0 0 5 0 0 0 9 0 0 10 0 1 0 0 0 0 0 4 0 0 0 0 0 0 13 6 0 0 0 14 0 0 15 12 0 16 6 5 10 0 8 2 0 0 0 12 0 0 0 1 0 7 9 0 5 4 1 0 0 2 0 0 0 0 12 0 7 0 0 0 0 0 11 0 0 13 0 3 0 0 0 0 0 1 0 0 0 0 16 0 0 0 13 10 15 9 14 0 4 0 10 0 0 11 0 4 8 15 0 0 0 0 5 0 13 0 0 11 0 1 0 0 0 0 10 7 4 0 3 0 0 6 0 7 0 2 14 16 6 10 0 0 0 11 0 0 0 0 16 0 0 0 0 0 1 0 12 0 0 14 0 0 0 0 0 4 0 10 0 0 0 0 15 0 0 2 16 5 0 11 11 0 12 0 0 0 14 0 0 0 13 7 0 9 6 2 8 0 7 9 0 0 11 0 0 0 14 10 0 0 0 0 0 0 4 0 0 0 0 0 11 0 2 0 0 8 0 0 0 6 0 0 12 0 0 0 9 8 0 0 0 14 1 0 25 0 0 12 6 0 0 7 0 18 0 5 24 0 10 1 0 0 4 0 0 0 0 0 0 0 2 0 19 0 13 0 0 0 10 0 0 0 0 0 0 0 0 18 5 0 0 0 0 0 1 0 0 0 0 0 0 0 22 0 0 0 0 3 0 2 0 0 14 12 0 16 8 25 0 0 0 16 0 0 0 2 23 0 0 13 12 22 0 0 0 21 15 19 3 0 0 0 0 14 0 23 0 24 0 0 0 0 0 25 8 4 0 16 19 21 0 0 7 0 0 0 3 12 0 9 0 4 0 2 0 0 0 0 0 0 0 10 0 24 12 17 16 0 0 0 5 0 0 0 0 0 0 9 0 0 6 25 0 0 0 8 0 5 3 0 0 0 0 0 0 20 0 0 18 19 15 0 10 11 0 0 0 18 12 19 0 0 0 0 0 0 0 23 0 0 7 0 0 4 0 0 0 0 0 0 0 0 14 0 22 0 0 18 16 20 0 6 11 13 0 0 0 0 0 0 0 22 0 25 0 0 1 17 5 4 7 0 0 14 0 8 3 21 0 0 11 0 0 0 6 0 20 13 15 0 0 0 0 0 0 9 0 0 2 0 25 0 1 8 0 0 5 0 21 0 0 1 0 0 0 0 16 10 0 7 0 0 4 20 0 0 9 0 0 14 0 24 0 17 0 25 2 5 0 0 0 0 0 13 0 0 0 0 0 22 0 0 0 0 0 19 1 8 0 0 0 0 7 21 0 0 12 0 2 17 0 0 0 18 6 16 0 0 15 0 0 13 0 10 0 8 10 18 12 16 9 0 0 0 5 0 0 0 0 19 0 0 17 0 21 0 15 0 0 22 0 8 0 0 15 0 3 0 6 0 21 0 0 7 0 18 14 5 0 1 0 0 0 0 0 0 0 0 19 0 1 0 16 11 0 0 0 10 22 25 15 0 0 0 0 0 0 21 0 0 0 3 1 0 21 0 0 4 0 0 0 
0 2 0 13 0 24 25 0 0 14 0 0 6 0 0 0 0 0 0 0 0 15 0 12 14 0 6 17 24 0 0 0 0 0 0 0 13 0 0 0 5 23 16 4 0 13 24 7 2 0 9 0 0 15 3 0 22 0 0 0 0 0 0 8 0 0 25 20 2 0 19 0 0 0 0 1 0 0 0 0 21 3 0 0 12 0 0 0 0 16 12 0 5 0 11 21 0 23 0 0 15 0 0 0 0 19 9 0 0 0 0 0 25 10 0 0 0 0 9 20 22 7 4 0 3 0 14 25 18 0 11 0 0 0 0 0 1 0 15 24 0 6 0 22 8 0 25 14 0 10 11 0 9 0 20 1 16 0 7 0 23 0 0 13 14 13 21 1 0 0 5 0 0 0 6 0 22 0 23 10 0 0 0 2 0 0 18 7 11 The 9x9 is supposedly the "hardest 9x9 Sudoku puzzle". Takes no time. The 16x16 is another hard one and takes about 20 minutes on my machine lol. Answer: Freebies Looking at the performance profile for the 16x16 puzzle (there is a profiler built into Visual Studio 2017, which you said you are using, and I used that, so you can reproduce this), I see that deleted_from.push_back(ptr2); is hotter than it deserves. That indicates the vector is growing too often. So change this: vector<int*> deleted_from; To this: vector<int*> deleted_from(8); Before: 6 seconds. After: 5.5 seconds. That's significant, but a trivial change to the code. Reading between the lines of the profile, it turns out that isLower is taking a substantial amount of time. It is not directly implicated by the profile, but the places where it is called are redder than they ought to be. It really should be trivial, but it's not. Here is an other way to write it: #include <intrin.h> ... // function used by heap to order it's elements based on the contents of `*ptr1` and `*ptr2` bool isLower(const int* ptr1, const int* ptr2) { return _mm_popcnt_u32(*ptr1) < _mm_popcnt_u32(*ptr2); } Before: 5.5 seconds. After: 5.0 seconds. That's nice, and it even made the code simpler. The Heap It should be no surprise that a lot of time is spent on modifying the heap. So let's tinker with it. This logic: if (l < heap_size && isLower(A[l], A[i])) smallest = l; if (r < heap_size && isLower(A[r], A[smallest])) smallest = r; Can be rewritten to: if (r < heap_size) { smallest = isLower(A[l], A[r]) ? 
l : r; smallest = isLower(A[i], A[smallest]) ? i : smallest; } else if (l < heap_size) smallest = isLower(A[l], A[i]) ? l : i; It looks like it should be about the same, but it's not. Before: 5.0 seconds. After: 2.0 seconds. What?! The biggest difference I saw in the disassembly of the function was that cmovl was used this way, but not before. Conditional-move is better than a badly-predicted branch, but worse than a well-predicted branch - it makes sense that these branches would be badly predicted, after all they depend on which path the data item takes "down the heap", which is some semi-randomly zig-zagging path. This on the other hand does not help: smallest = (l < heap_size && isLower(A[l], A[i])) ? l : i; smallest = (r < heap_size && isLower(A[r], A[smallest])) ? r : smallest; When MSVC chooses to use a cmov or not is a mystery. Clearly it has a large impact, but there seems to be no reliable way to ask for a cmov. An extra trick is using that what this "minHeapify" is doing is moving items up the heap along a path, and dropping the item which it was originally called on into the open spot at the end. That isn't how it's doing it though: it's doing a lot of swaps. In total it's doing twice as many assignments as are necessary. That could be changed such as this: void minHeapify(int i) { int l, r, smallest; int* item = A[i]; do { l = left(i); r = right(i); smallest = i; if (r < heap_size) { smallest = isLower(A[l], A[r]) ? l : r; smallest = isLower(item, A[smallest]) ? i : smallest; } else if (l < heap_size) smallest = isLower(A[l], item) ? l : i; if (smallest == i) break; A[i] = A[smallest]; mapping[A[i]] = i; i = smallest; } while (1); A[i] = item; mapping[item] = i; } Before: 2.0 seconds. After: 1.85 seconds. unordered_map Often some other hash map can do better than the default unordered_map. For example you could try Boost's version of unordered_map, or Abseil's flat_hash_map, or various others. There are too many to list. 
In any case, with Skarupke's flat_hash_map, the time went from 1.85 seconds to 1.8 seconds. Not amazing, but it's as simple as including a header and changing unordered_map to ska::flat_hash_map. By the way, for MSVC specifically, unordered_map is a common reason for poor performance of the Debug build. It's not nearly as bad for the Release build.
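The swap-free "hold the item aside, one assignment per level" rewrite of minHeapify generalizes beyond C++. Here is a language-neutral sketch in Python, with bin(x).count("1") standing in for _mm_popcnt_u32; the function names are illustrative, not taken from the reviewed code:

```python
def popcount_key(x):
    """Number of set bits; a portable stand-in for _mm_popcnt_u32."""
    return bin(x).count("1")

def sift_down(heap, i, key=popcount_key):
    # Hold the moving item aside and do ONE assignment per level,
    # instead of a swap (two assignments) per level.
    n = len(heap)
    item = heap[i]
    item_key = key(item)
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        smallest, smallest_key = i, item_key
        if left < n and key(heap[left]) < smallest_key:
            smallest, smallest_key = left, key(heap[left])
        if right < n and key(heap[right]) < smallest_key:
            smallest, smallest_key = right, key(heap[right])
        if smallest == i:
            break
        heap[i] = heap[smallest]   # pull the smaller child up
        i = smallest
    heap[i] = item                 # drop the held item into the hole

def build_min_heap(a):
    for i in range(len(a) // 2 - 1, -1, -1):
        sift_down(a, i)
    return a
```

In the reviewed solver the same trick also halves the bookkeeping on the `mapping` table, since each moved element is written (and re-indexed) once instead of twice.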
{ "domain": "codereview.stackexchange", "id": 38968, "tags": "c++, sudoku" }
Why is charge constant in series connections?
Question: Why does each capacitor in a series connection hold the same charge? I understand that voltages and capacitances across capacitor plate pairs in series vary, but why is it a necessity that charge be constant? Answer: This site from the University of Texas provides a good explanation, I think. Basically, when you have two capacitors connected in series, say $C_1$ and $C_2$, then the total charge in the middle wiring connecting the two components must remain constant, as it cannot escape anywhere. Any charge accumulation in $C_1$'s outer plate creates a virtual charge accumulation in its inner plate, but the total charge in the middle wiring must remain constant, so there will be an equal, but opposite virtual charge created at $C_2$'s inner plate. The virtual charge on $C_2$ will cause actual charge to accumulate on its exterior plate, and so the total charge in $C_1$ will be equal to the total charge in $C_2$. You can think of it as the boundary condition between the capacitors forcing the charge fluctuations to propagate through the structure, in a sense.
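A quick numeric check of the "same Q on both capacitors" statement, using arbitrary illustrative component values:

```python
# Two capacitors in series across a battery: the equivalent capacitance sets
# one charge Q = C_eq * V, and that same Q sits on both capacitors while the
# voltage divides as V_i = Q / C_i.
C1, C2 = 2e-6, 3e-6                  # farads (illustration values)
V = 10.0                             # volts
C_eq = 1.0 / (1.0 / C1 + 1.0 / C2)   # 1.2 uF
Q = C_eq * V                         # 12 uC on each capacitor
V1, V2 = Q / C1, Q / C2              # 6 V and 4 V
assert abs((V1 + V2) - V) < 1e-9     # the voltages add back up to V
```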
{ "domain": "physics.stackexchange", "id": 40248, "tags": "charge, capacitance" }
Figure Headings in Laboratory Reports
Question: How do I write a figure legend for a thin layer chromatography plate drawing? Answer: Provide all the data that are necessary to reproduce the experiment: the TLC material (silica gel or aluminium oxide); the composition of the mobile phase; special tricks, such as running the TLC in methanol first for 5 mm to squeeze the initial spot to a very narrow band; and the detection of the separated bands/spots: $\mathrm{R_f}$ values, whether the spots have a colour, whether they show fluorescence, and which reagents you used for staining. UPDATE If you report on a series of measurements, it's usually good practice to describe the conditions that didn't change (type of TLC plate, staining solutions, etc.) in the general section of the experimental part of the report/article/thesis.
{ "domain": "chemistry.stackexchange", "id": 3132, "tags": "chromatography" }
Biological Consequences of Asteroid Mining—Death by Isotope?
Question: It's been documented that NASA hope to capture an asteroid in 2025, and have subsequent aims to mine that asteroid. If if this is successful, we would expect other asteroids to be mined in the future. A consequence of this is that relative atomic masses of elements mined—those with two or more stable isotopes—will no longer be faithful to our current periodic table. Ruthenium alone, a high-value rare earth, has five stable isotopes alone, ranging from $^{98}\ce{Ru}$ to $^{102}\ce{Ru}$. The relative abundances of these isotopes are bound to differ in other places other than Earth, and so would the weighted average. Say after mining our ruthenium, we use it in an electronic device and consequently that device is thrown away. For argument's sake, also say that this ruthenium—of an isotopic distribution never before seen—leaches into the environment and later comes into contact with biological systems. Could it be possible that compounds hitherto considered non-toxic on Earth become toxic to biological systems by virtue of a newly realised isotopic discrimination? As an example, $^{13}\ce{CO2}$ is effectively discriminated against in uptake by plants in comparison to $^{12}\ce{CO2}$ (due to it being a diffusion limited reaction). Ruthenium is purported to be carcinogenic, but the most abundant isotope on Earth of Ruthenium is $^{102}\ce{Ru}$ which could be effectively non-toxic by virtue of the fact that compounds containing these atoms takes so long to diffuse, they don't diffuse across biological membranes at all. If a sample of Ruthenium from the moon found its most abundant isotope in $^{98}\ce{Ru}$, then this would be expected to diffuse much faster than its heavier counterparts. Therefore could it be possible that a compound previously thought of as non-toxic could become toxic via this effect. Does this sound plausible? NB: I've deliberately missed out other routes of exposure such as ingestion to only consider diffusion controlled reactions. 
Answer: This is an interesting question and you raise a number of points, let's step through them. "A consequence of this is that relative atomic masses of elements mined—those with two or more stable isotopes—will no longer be faithful to our current periodic table." But this is already happening. $\ce{^235U}$ constitutes 0.72% of uranium found on earth and decays to the stable isotope $\ce{^207Pb}$, which is found in a natural abundance of 22.1%. Before we get to the asteroid the abundance of various isotopes and their weighted average mass is already shifting. "...$\ce{^98Ru}$, then this would be expected to diffuse much faster than its heavier counterparts." Diffuse much faster? First off, we're talking about diffusion, not chemical reaction. Elements or compounds usually enter the body by ingestion of one sort or another (eating, breathing) rather than diffusion, and these ingestion pathways wouldn't involve isotopic discrimination. That is, if we breathe air that contains isotopes in a certain ratio, then that's the ratio that will initially appear in our lungs. Nonetheless if diffusion were found to be an issue, my guess is that isotopic discrimination by diffusion would be a very small effect. For a gas the maximum separation of two isotopes is given by $$\mathrm{\sqrt{\frac{[MW~of~compound ~with ~isotope~1]}{[MW~of~compound ~with ~isotope~2]}}}$$ In the case of separating $\ce{^235UF6/^238UF6}$ this amounts to a fractionation ratio of 1.0043 after 1 pass. Ruthenium is lighter, so the effect would be larger, about 1.02 ($\ce{^102Ru/^98Ru}$) after 1 pass, even less if it is in a molecule. Chemical reaction within our body of ingested isotopic compounds would also show discrimination due to primary kinetic isotope effects.
The maximum primary kinetic isotope effect is proportional to the reduced mass as follows: $$k~ \thicksim ~\sqrt{\frac{m_1 + m_2}{m_1 m_2}}$$ Applying this to $\ce{^102Ru}$ and $\ce{^98Ru}$ yields a primary kinetic isotope effect of 1.020, again, even less if the element is incorporated into a compound. To me, the effects look small and since shifts in natural abundance have been occurring for a long time here on earth, with no one raising a flag, my guess (and that's all it is, a guess) is that it's not something to worry about. Still, I'd feel better if a NASA toxicologist looked at the question and said "no problem"
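The two single-pass ratios quoted in the answer can be recomputed in a couple of lines; graham_ratio is an ad-hoc helper name, and the molecular masses are approximate values in atomic mass units:

```python
from math import sqrt

def graham_ratio(m_heavy, m_light):
    """Single-pass enrichment factor from Graham's law of effusion:
    rate_light / rate_heavy = sqrt(m_heavy / m_light)."""
    return sqrt(m_heavy / m_light)

# 235/238 UF6 (fluorine-19 is monoisotopic, so only U varies):
uf6 = graham_ratio(238.05 + 6 * 18.998, 235.04 + 6 * 18.998)   # ~1.0043
# Bare Ru atoms -- the extreme case with no molecule attached:
ru = graham_ratio(102, 98)                                     # ~1.0202
```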
{ "domain": "chemistry.stackexchange", "id": 1721, "tags": "biochemistry, periodic-table, isotope, astrochemistry" }
Opinions on an LSTM hyper-parameter tuning process I am using
Question: I am training an LSTM to predict a price chart. I am using Bayesian optimization to speed things slightly since I have a large number of hyperparameters and only my CPU as a resource. Making 100 iterations from the hyperparameter space and 100 epochs for each when training is still taking too much time to find a decent set of hyperparameters. My idea is this. If I only train for one epoch during the Bayesian optimization, is that still a good enough indicator of the best loss overall? This will speed up the hyperparameter optimization quite a bit and later I can afford to re-train the best 2 or 3 hyperparameter sets with 100 epochs. Is this a good approach? The other option is to leave 100 epochs for each training but decrease the no. of iterations. i.e. decrease the number of training with different hyperparameters. Any opinions and/or tips on the above two solutions? ( I am using keras for the training and hyperopt for the Bayesian optimisation) Answer: First of all you might want to know there is a "new" Keras tuner, which includes BayesianOptimization, so building an LSTM with keras and optimizing its hyperparams is completely a plug-in task with keras tuner :) You can find a recent answer I posted about tuning an LSTM for time series with keras tuner here So, 2 points I would consider: I would not loop only once over your dataset, it does not sound like enough times to find the right weights. 
I would rather control the number of possible hyperparams configurations as you said, which is something you can indicate in keras tuner via max_trials param About using keras tuner with Bayesian tuner, you can find some code below as an example for tuning the units (nodes) in the hidden layers and the learning rate: from tensorflow import keras from kerastuner.tuners import BayesianOptimization n_input = 6 def build_model(hp): model = Sequential() model.add(LSTM(units=hp.Int('units',min_value=32, max_value=512, step=32), activation='relu', input_shape=(n_input, 1))) model.add(Dense(units=hp.Int('units',min_value=32, max_value=512, step=32), activation='relu')) model.add(Dense(1)) model.compile(loss='mse', metrics=['mse'], optimizer=keras.optimizers.Adam( hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4]))) return model bayesian_opt_tuner = BayesianOptimization( build_model, objective='mse', max_trials=3, executions_per_trial=1, directory=os.path.normpath('C:/keras_tuning'), project_name='kerastuner_bayesian_poc', overwrite=True) bayesian_opt_tuner.search(train_x, train_y,epochs=n_epochs, #validation_data=(X_test, y_test) validation_split=0.2,verbose=1) bayes_opt_model_best_model = bayesian_opt_tuner.get_best_models(num_models=1) model = bayes_opt_model_best_model[0] You would get something like this, informing you about the searched configurations and evaluation metrics:
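If one-epoch scoring still feels too noisy, a middle ground is successive halving: score every configuration on a tiny budget, keep the best fraction, and rerun only the survivors with a larger budget. A framework-free sketch follows; successive_halving and toy_loss are illustrative stand-ins (not keras tuner API), and in practice train_for would fit your Keras model for the given number of epochs and return the validation loss:

```python
import random

def successive_halving(configs, train_for, min_epochs=1, eta=3, max_epochs=100):
    """Score every config on a small budget, keep the best 1/eta, and
    re-score the survivors with eta times the budget, until one remains."""
    survivors = list(configs)
    epochs = min_epochs
    while len(survivors) > 1 and epochs <= max_epochs:
        ranked = sorted(survivors, key=lambda c: train_for(c, epochs))
        survivors = ranked[:max(1, len(ranked) // eta)]
        epochs *= eta
    return survivors[0]

# Toy stand-in for "validation loss after `epochs` epochs of training":
def toy_loss(config, epochs):
    return config["quality"] + 1.0 / epochs   # loss shrinks as budget grows

configs = [{"quality": random.random()} for _ in range(27)]
best = successive_halving(configs, toy_loss)
```

This is essentially the idea behind the Hyperband family of tuners, so the same effect can also be had from a tuner library rather than hand-rolled code.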
{ "domain": "datascience.stackexchange", "id": 7504, "tags": "keras, lstm, hyperparameter-tuning, bayesian, epochs" }
Particle with constant momenta?
Question: Consider a particle of mass $m$ subjected to the following vector potential, $$ \vec{A}=\frac{1}{2}(\vec{B}\times\vec{r}) $$ where the magnetic field $\vec{B}$ is constant. Can someone prove that $\dot{p}_{i}=0$, $\forall i$? I.e. prove that $$ \dot{\vec{p}}=0 $$ Answer: The momenta are NOT constant. In the simplest case where $\vec B$ is along $\hat z$, the motion will be helicoidal and the momenta in the plane perpendicular to $\vec B$ cannot be constants, as this would imply a straight trajectory.
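A quick numerical illustration of the answer's point, with units scaled so the gyrofrequency $\omega = qB/m$ equals 1 (all values are arbitrary illustration choices):

```python
from math import cos, sin, isclose

# Uniform B along z: the Lorentz force rotates the in-plane velocity while
# leaving v_z alone, so p_x and p_y oscillate and only p_z is conserved --
# a helix, not a straight line.
def rotate_step(v, dt):
    vx, vy, vz = v
    return (vx * cos(dt) + vy * sin(dt),
            -vx * sin(dt) + vy * cos(dt),
            vz)

v0 = (1.0, 0.0, 0.5)
v = v0
for _ in range(1000):          # advance to t = 10 in units of 1/omega
    v = rotate_step(v, 0.01)

assert not isclose(v[0], v0[0])                   # p_x is not constant
assert v[2] == v0[2]                              # p_z is constant
assert isclose(v[0] ** 2 + v[1] ** 2,
               v0[0] ** 2 + v0[1] ** 2)           # |v_perp| is conserved
```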
{ "domain": "physics.stackexchange", "id": 36825, "tags": "homework-and-exercises, newtonian-mechanics, electromagnetism, lagrangian-formalism, hamiltonian-formalism" }
How are we observing the newly discovered "dark galaxy" J0613+52, if it has no stars and is so far away from other galaxies?
Question: I just came across a New York Times article talking about a newly found Low Surface Brightness (LSB) galaxy (also called “ultra-diffuse galaxies” or “dark galaxies”). The new galaxy, J0613+52, was accidentally discovered using the Green Bank Observatory. To be sure, I verified the story via a Space.com article, a page on the Green Bank Observatory’s website, and a slideshow on the American Astronomical Society’s website. What makes this galaxy unique from other LSB galaxies is that there is not a single star visible in the ENTIRE galaxy, meaning this could just be an ENORMOUS, two billion solar masses cloud of hydrogen gas spiraling around in space, which would further suggest that this could be a “primordial galaxy”, or one that formed just after the Big Bang, before the stellar medium was enriched with heavier elements. So far, the galaxy has only been imaged in radio waves, but more studies are to follow. The researchers were careful to note that it is possible there are stars in the galaxy, but we are just unable to observe them through the thick clouds of hydrogen. So, this led me to my question: How can we even “see” J0613+52? While I am aware that clouds of hydrogen gas are OFTEN imaged in astronomy, I had always presumed the way this stuff was imaged was either by: A) The gas is illuminated by nearby objects or objects behind the gas cloud. As I understand, atoms can absorb a photon, which excites the atom, but when the atom returns to its ground state, it emits the same wavelength of light, which is then visible to our detectors. — OR — by: B) Its gravitational effects (perturbations) on surrounding objects or, light that comes close to the area, a.k.a. lensing. The problem, or at least what appears to be a problem is that there are no known galaxies within 112 Mpc, or over 365 million light years from the diffuse cloud. 
Further, according to the articles, nearby galaxies could trigger stellar formation, but this galaxy is just so far away from anything else. If it is a primordial galaxy, would this not also imply that it has remained distant from other galaxies throughout its entire lifespan, because at some point if it were within the gravitational effects of another galaxy, some stars would have formed? Now, to be clear, this doesn’t mean that the hydrogen gas cloud is not absorbing and emitting ANY light, as space is awash in light from distant objects, its distance to other galaxies would simply mean it receives a lot less light. Also, considering they got the mass of the cloud, I presume this means some gravitational effect of some kind was measured regarding the galaxy? So, have I answered my own question, is it just minuscule light from other astronomical bodies that are illuminating the galaxy, or did I get something wrong? I considered the possibility that some of the hydrogen is radioactive, so maybe that is what we are seeing. However, after working with ChatGPT and a half-life calculator, I am convinced this is not the case. According to ChatGPT, there are (approximately?/exactly?) 2,379,999,999,999,999,761,365,273,050,982,686,751,747,396,855,793,718,681,448,271,052,800 (2.38×10^66) hydrogen atoms in 2 billion solar masses of hydrogen gas. Tritium has the longest half-life of any non-stable hydrogen isotope at 12.32 years. Then, according to Wikipedia, the first galaxies started to form 200 million to 500 million years after the Big Bang, and the age of the Universe is about 13.7 billion years. I took this info and used this OmniCalculator for half-life. For the initial quantity, I put the 2.38×10^66 number. I understand that only a small percentage of the original primordial hydrogen would have been tritium, but as we shall see, this does not matter. Next, I put the half-life time as 4,496.8 days. 
Finally, I (under)estimated the total time in the calculator at 11.2 billion years. While I believe this would be pretty late for a “primordial galaxy” (around 2.5 billion years ABB), and I technically think the time should start when hydrogen formed after the Big Bang, as we'll see, none of this matters. Given these inputs, the number of remaining radioactive hydrogen atoms is 0. In fact, according to the calculator, even if 100% of the hydrogen gas is tritium, there would be ZERO tritium left over after only 3,341 years. So I am convinced radioactivity is NOT the answer. So, to be clear, the question is “How do we observe J0613+52?” Again, I understand we are using radio astronomy, but where are these radio waves coming from that our detectors are picking up? What is emitting them? I appreciate any help anyone could give. -------------------------SORT OF UNRELATED AND UNIMPORTANT SIDE QUESTION------------------------ I don't think this deserves its own question, but if you believe it does, let me know and I will make a new question for this. The NYT article linked above contains the following sentence: Many astrophysicists attribute this discrepancy to their inability to model complex, messy phenomena like shock waves and magnetic fields — so-called gastrophysics — that prevail when atoms get close together. I have gone through the first two pages of Google when searching for “gastrophysics”, and without exception, every single one of them says this refers to an interdisciplinary approach to gastronomy and cooking. Can anyone find anything talking about gastrophysics in relation to astronomy? Answer: The low surface brightness survey at the GBT is looking for H(I) emission, i.e. emission from neutral hydrogen atoms (for example see O'Neil 2023). The most obvious signature they use is the 21 cm hydrogen line, which arises from "hyperfine" transitions in atoms where the proton and electron spins are aligned, "flipping" to become anti aligned. 
This happens spontaneously via a magnetic dipole transition with a very long half-life (about 10 million years). This means you need a lot of hydrogen to see it, but it produces a spectral line that is very narrow and hence easy to pick out in any spectrum. The line is fairly easy to thermally "excite" since the difference in energy levels corresponds to just 6 $\mu$eV - so the excited state is plentiful in essentially all atomic hydrogen gas with temperature $>1$ K. It turns out that there is usually three times as much hydrogen in the excited state, simply because there are three combinations of quantum numbers that can describe the aligned state but only one combination to describe the anti-aligned "ground state". Because the transition has a long lifetime, it is also very hard to absorb 21 cm photons. That means a cloud of hydrogen is essentially transparent to its own 21 cm emission and that the amount of 21 cm emission can be used directly to estimate the number of hydrogen atoms and hence the baryonic mass of a cloud of neutral hydrogen. The fact that the line is sharply defined in frequency means that Doppler shifts can be readily measured and, in this case, used to measure the rotation of the cloud, estimate its total mass from gravitational considerations and hence infer the amount of dark matter.
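As a quick consistency check on the numbers in the answer above, the ~6 $\mu$eV hyperfine splitting can be converted into a wavelength and frequency with a few lines of Python (a sketch using standard physical constants; the precise splitting of 5.87433 $\mu$eV corresponds to the well-known 1420.4 MHz line):

```python
# Convert the hyperfine energy splitting of neutral hydrogen into the
# wavelength/frequency of the emitted photon: lambda = h*c/E, nu = E/h.
h = 6.62607e-34        # Planck constant, J*s
c = 2.99792458e8       # speed of light, m/s
eV = 1.602177e-19      # joules per electron-volt

E = 5.87433e-6 * eV    # hyperfine splitting, the ~6 micro-eV quoted above

wavelength = h * c / E     # metres
frequency = E / h          # hertz

print(f"wavelength = {wavelength * 100:.1f} cm")   # ~21.1 cm
print(f"frequency  = {frequency / 1e6:.1f} MHz")   # ~1420.4 MHz
```

This is exactly the "21 cm" line the Green Bank survey detects.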
{ "domain": "astronomy.stackexchange", "id": 7282, "tags": "galaxy, dark-matter, star-formation, hydrogen, gas" }
On-shell SUSY-transformations for interacting Wess-Zumino model
Question: I'm learning SUSY with Quevedo, Cambridge Lectures on Supersymmetry and Extra Dimensions. Setup: The SUSY transformations of the component fields of a chiral field $\Phi$ are given by (p.41) \begin{align*} \delta_{\epsilon,\overline{\epsilon}}\varphi &= \sqrt{2}\epsilon^{\alpha}\psi_{\alpha}, \\ \delta_{\epsilon,\overline{\epsilon}}\psi_{\alpha} &= i\sqrt{2}\sigma^{\mu}_{\alpha\dot{\alpha}}\overline{\epsilon}^{\dot{\alpha}}\partial_{\mu}\varphi + \sqrt{2}\epsilon_{\alpha}F,\\ \delta_{\epsilon,\overline{\epsilon}} F &=i\sqrt{2}\overline{\epsilon}_\dot{\alpha}(\overline{\sigma}^{\mu})^{\dot{\alpha}\alpha}\partial_{\mu}\psi_{\alpha}, \end{align*} where $\varphi$ is a complex scalar, $\psi_{\alpha}$ is a left-handed Weyl spinor and $F$ is an auxiliary field. My questions: Let us choose the superpotential $W(\Phi)\equiv \frac{m}{2}\Phi^2 + \frac{g}{3}\Phi^3$ together with kinetic part $\Phi^{\dagger}\Phi$ and remove the auxiliary field $F$ via its algebraic equations of motion. Then, the transformation rules must change as well, correct? We can use the equations of motion of the auxiliary field $F$ to remove it from the Lagrangian. How do we account for this in the transformation rules of the component fields? The transformation rules do not know anything about the model (free/interacting/massless) we are considering, so it is us who should implement this choice into the transformation rules -- but how do we do this without messing up SUSY? Answer: When we eliminate/integrate out the auxiliary field $F$, the SUSY transformation for $F$ is rendered moot, and the appearance of $F$ on the RHSs of the other SUSY transformations is replaced with its algebraic EOM. It's not true that we do not know anything about the model -- we assume that the action $S$ is SUSY-invariant. In particular, the EOM for $F$ is derived from the action.
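For concreteness, here is a sketch of the resulting on-shell transformations (my own working, in the conventions of Quevedo's notes, up to convention-dependent signs): with $W(\Phi)=\frac{m}{2}\Phi^2+\frac{g}{3}\Phi^3$, the algebraic EOM fixes $F$ in terms of $\overline{\varphi}$, and substituting into $\delta\psi_\alpha$ gives

```latex
% EOM for the auxiliary field (sign depends on conventions):
%   F = -\overline{W}'(\overline{\varphi})
\begin{align*}
F &= -\overline{W}'(\overline{\varphi})
   = -\left(m\,\overline{\varphi} + g\,\overline{\varphi}^{\,2}\right),\\
\delta_{\epsilon,\overline{\epsilon}}\varphi
  &= \sqrt{2}\,\epsilon^{\alpha}\psi_{\alpha},\\
\delta_{\epsilon,\overline{\epsilon}}\psi_{\alpha}
  &= i\sqrt{2}\,\sigma^{\mu}_{\alpha\dot{\alpha}}\overline{\epsilon}^{\dot{\alpha}}\partial_{\mu}\varphi
     - \sqrt{2}\,\epsilon_{\alpha}\left(m\,\overline{\varphi} + g\,\overline{\varphi}^{\,2}\right).
\end{align*}
```

The price of eliminating $F$ is that the algebra now closes only on-shell: checking $[\delta_1,\delta_2]\psi$ requires using the fermion equation of motion.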
{ "domain": "physics.stackexchange", "id": 67448, "tags": "lagrangian-formalism, field-theory, supersymmetry" }
Force to use in harmonic oscillation through the inside of a planet
Question: I am to find an equation for the time it takes when one falls through a planet to the other side and returns to the starting point. I have seven different sets of values - mass of object falling, mass of planet, radius of the planet, and time. I'm not including friction in the calculations. I think this qualifies as a harmonic oscillator, and thus I work with the formula $$T = 2\pi \sqrt{\frac{m}{k}}$$ To find the spring constant $k$ I need force $F$, and this is where I get uncertain. Should I work with the gravitational force between the object and the planet when the fall begins? In other words $$F = G\times\frac{m \times M}{R^2}$$ When I try this I find that $$F = kx \iff k = \frac{F}{x}$$ $$\iff k = \frac{G\times\frac{m \times M}{R^2}}{2R} = \frac{G \times m \times M}{2R^3}$$ $$\Rightarrow T = 2\pi \sqrt{\frac{m}{\frac{G \times m \times M}{2R^3}}} \iff T = 2\pi \sqrt{\frac{2R^3}{G \times M}}$$ Using this equation for the values I have, however, I get the wrong results - $T = 7148$ instead of $T = 5055$. What am I doing wrong? Answer: The key to this problem is the fact that the planet's mass $M$ as it appears in Newton's law of gravitation, $$F=\frac{GMm}{r^2},$$ is not actually constant. This is because the layers of the planet that are above you cause zero net force: if you are inside of a hollow spherical shell of mass then diametrically opposite elements of solid angle exert equal forces in opposite directions. Thus, the effective mass of the planet in this problem is only that of a sphere of radius $r$ and density $3M_0/4\pi R^3$, i.e. $M(r)=\frac{r^3}{R^3}M_0$. The force is then $$F=\frac{GM_0m}{R^3}r$$ and it of course causes harmonic motion, with "spring constant" $k=GM_0m/R^3$.
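The correction above is easy to verify numerically. A short Python sketch (Earth-like values assumed for illustration, since the question's seven data sets are not given) compares the asker's formula $T=2\pi\sqrt{2R^3/GM}$ with the correct $T=2\pi\sqrt{R^3/GM}$:

```python
import math

# Illustrative Earth-like values (assumed; the question's data sets are not given).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # planet mass, kg
R = 6.371e6        # planet radius, m

# Wrong: treating the surface force F = GMm/R^2 as a spring force stretched
# over x = 2R gives k = GMm/(2R^3), too small by a factor of 2.
T_wrong = 2 * math.pi * math.sqrt(2 * R**3 / (G * M))

# Right: inside the planet F(r) = (GMm/R^3) r, so k = GMm/R^3.
T_right = 2 * math.pi * math.sqrt(R**3 / (G * M))

print(round(T_wrong), round(T_right))  # roughly 7157 vs 5061 seconds
```

Note that `T_wrong / T_right` is exactly $\sqrt{2}$, which matches the discrepancy in the question ($7148/5055 \approx 1.414$).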
{ "domain": "physics.stackexchange", "id": 4610, "tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, harmonic-oscillator" }
Why can’t I use quantum teleportation to transmit data FTL 1/4 of the time?
Question: $\newcommand{\bra}[1]{\left<#1\right|}\newcommand{\ket}[1]{\left|#1\right>}\newcommand{\bk}[2]{\left<#1\middle|#2\right>}\newcommand{\bke}[3]{\left<#1\middle|#2\middle|#3\right>}$ Assume there is an entangled pair $(q_1, q_2)$ owned by Alice and Bob, respectively, and some qubit $q_0$ in state $\ket{\psi}$ that Alice wants to teleport. Let Alice perform all the necessary operations to teleport $q_0$, namely, $\text{CNOT}(q_0, q_1)$, $H(q_0)$ (I'm not sure if this is sufficient, or if Alice has to measure her two qubits to collapse their superposition and complete the teleportation, but this isn't relevant to the question. Assume she does measure them if it is necessary). Now the state of $q_2$ should equal $\ket{\psi}$, or be closely related to it through one of the bell states. Assume that Alice and Bob coordinated on what time Alice would complete the teleportation, so that Bob is aware the teleportation has occurred. What is keeping Bob from assuming that $q_2$ is in some particular bell state, and measuring $q_2$? It would seem that would allow faster than light communication 25% of the time. In fact, Bob could even produce imperfect clones of $q_2$, and my understanding is that he could somehow account for the imperfection of these clones. These imperfect clones would then allow him to extract more information from the single teleportation, and, assuming he knows the sort of thing he’s looking for, could provide an even higher chance that he receives meaningful information out of this communication - even if no classical information is sent from Alice. What prevents this from working? Edit According to Holevo's Theorem, one can only retrieve up to $n$ classical bits given $n$ qubits. However, as I understand it, this does not prevent one from storing $n$ classical bits into a single qubit, imperfectly cloning it $n - 1$ times, and thus retrieving $n$ classical bits out. 
Given this, we can send a single qubit through teleportation and the receiver gets an accurate message approximately 25% of the time (less than this of course, due to the error introduced by the imperfect cloning). In regards to the user not knowing whether the information is correct and thus it being no use, consider the classical case of $n$ one-way radios. Only 25% of the radios send the correct message, on channel $x$, the rest send random noise. Say the message is a recorded English sentence of some substantial length (say 20 words). An observer of this message, flipping through the channels, would be able to tell with high certainty which of these radios is transmitting the correct message. How does this differ in the quantum case, such that we cannot apply the same logic? Answer: $\newcommand{\bra}[1]{\left<#1\right|}\newcommand{\ket}[1]{\left|#1\right>}\newcommand{\bk}[2]{\left<#1\middle|#2\right>}\newcommand{\bke}[3]{\left<#1\middle|#2\middle|#3\right>}$If you could encode an arbitrary amount of bits into a single qubit, and then retrieve those bits, then yes, quantum teleportation would allow you to send a fully-accurate message 25% of the time, which is better than random chance, and would count as faster than light communication. However, although you can encode an arbitrary amount of information into the state of a single qubit, due to Holevo's theorem, you can only ever get a single bit of classical information out. Even imperfect cloning does not allow you to get around this, as commenters have mentioned, as the imperfect clones are entangled and thus measurement of one collapses them all, limiting the amount of useful information one can retrieve. This is stated in the paper "Quantum copying: Beyond the no-cloning theorem". 
In fact, even Quantum Computation and Quantum Information makes the following strong and damning statements (emphasis added) "only if infinitely many identically prepared qubits were measured would one be able to determine $\alpha$ and $\beta$." and "the laws of quantum mechanics prevent [one] from determining the state when [one] only has a single copy of $\ket{\psi}$." Therefore, Holevo's theorem does prevent your single-qubit-with-arbitrary-encoded-information scheme from allowing faster than light communication. And, since due to Holevo's theorem you can only get one classical bit out of one qubit, that means that in order to send an $n$ bit message, you must send $n$ qubits. Since these qubits each have a 25% chance to be in a particular bell state, and they do not necessarily agree on the bell state, that means that only 25% of your bits will be correct, and you don't know which ones. As other answers have pointed out, this is worse than random chance and thus can't be considered communication.
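The "Bob guesses a Bell state" loophole can also be checked directly with a few lines of NumPy (my own sketch, not taken from the cited texts): before Alice's two classical bits arrive, Bob's qubit is described by the uniform average over the four possible Pauli corrections, which is the maximally mixed state regardless of $\ket{\psi}$, so any measurement he makes is pure coin-flipping:

```python
import numpy as np

# State Alice wants to teleport: |psi> = a|0> + b|1>  (any normalized a, b).
a, b = 0.8, 0.6
rho = np.outer([a, b], np.conj([a, b]))

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Alice's Bell measurement yields each of the 4 outcomes with probability 1/4;
# without her classical bits, Bob's state is the average over the corresponding
# Pauli "corrections" he did not get to apply.
bob = sum(P @ rho @ P.conj().T for P in (I, X, Z, X @ Z)) / 4

print(bob)  # identity/2: the maximally mixed state, independent of (a, b)
```

Changing `a, b` to any other normalized pair leaves `bob` unchanged, which is why no information crosses before the classical bits do.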
{ "domain": "quantumcomputing.stackexchange", "id": 1565, "tags": "entanglement, teleportation" }
VBA - Excel add in Convert number to Text currency
Question: No problem yet with the code. Here is a function that can be saved as an add-in and used as a UDF in Microsoft Excel to convert a number to text. For example, 4556.45 to RUPEES Four Thousand Five Hundred Fifty Six & PAISE Forty Five (defaults being set to Indian Rupee). But the function can be adapted to any currency through suitable parameters, making it very flexible. Submitting for review. Kindly suggest any modifications required. Thank you.

Function arguments:

myNumber = Number to be converted to text
Optional NumberSystem = 1 for International (Thousand, Million, Billion), 2 for Indian (Thousand, Lakh, Crore); Default Value 2
Optional CurrencyConversion = "YES" to convert the number to currency; Default Value "YES"
Optional CurrSYMSingular = for example USD or US DOLLAR, INR or Indian Rupee for one unit of currency; Default Value "RUPEE"
Optional CurrSYMPlural = for example USDs or US DOLLARs, INRs or Indian Rupees for multiple units of currency; Default Value "RUPEES"
Optional FractionSize = for example 100, for one INR = 100 Paise, one USD = 100 Cents; Default Value 100
Optional FracSYMSingular = for example Cent for US DOLLAR, Paisa for Indian Rupee for one unit of currency fraction; Default Value "PAISA"
Optional FracSYMPlural = for example Cents for US DOLLAR, Paise for Indian Rupee for multiple units of currency fraction; Default Value "PAISE"
Optional TextStyle = 1 for CurrencySYM and Amount, 2 for Amount and CurrencySYM; Default Value 1

Main function and other private functions supporting it are as below. 
Function TextCurrency(ByVal myNumber, Optional NumberSystem = 2, Optional CurrencyConversion = "YES", _
    Optional CurrSYMSingular = "RUPEE", Optional CurrSYMPlural = "RUPEES", Optional FractionSize = 100, _
    Optional FracSYMSingular = "PAISA", Optional FracSYMPlural = "PAISE", Optional TextStyle = 1)

    'Refer to following webpage for fractional units and sizes of different currencies
    'https://en.wikipedia.org/wiki/List_of_circulating_currencies
    ' Maximum fraction size is 1000 (eg. OMR).

    Dim Temp, myNumberInt, myNumberFrac, DecimalPlace, Count, RUPEEs, PAISE

    If Val(myNumber) <> 0 Then
        myNumber = Trim(Str(myNumber))
        DecimalPlace = InStr(myNumber, ".")
    End If

    If DecimalPlace > 0 Then
        myNumberInt = Trim(Left(myNumber, DecimalPlace - 1))
        myNumberFrac = Trim(Mid(myNumber, DecimalPlace + 1))
        If UCase(CurrencyConversion) = "YES" Then
            If FractionSize <= 1000 Then myNumberFrac = Left(myNumberFrac & "000", 3)
            If FractionSize <= 100 Then myNumberFrac = Left(myNumberFrac & "00", 2)
            If FractionSize <= 10 Then myNumberFrac = Left(myNumberFrac & "0", 1)
        End If
    Else
        myNumberInt = myNumber
    End If

    If NumberSystem = 2 Then
        If Val(myNumberFrac) <> 0 Then
            TextCurrency = NumberINTtoINDtext(myNumberInt) & " POINT " & NumberFRACtotext(myNumber)
        Else
            TextCurrency = NumberINTtoINDtext(myNumberInt)
        End If
    Else
        If Val(myNumberFrac) <> 0 Then
            TextCurrency = NumberINTtotext(myNumberInt) & " POINT " & NumberFRACtotext(myNumber)
        Else
            TextCurrency = NumberINTtotext(myNumberInt)
        End If
    End If

    If UCase(CurrencyConversion) = "YES" Then
        If NumberSystem = 2 Then
            If Val(myNumberFrac) <> 0 Then
                RUPEEs = NumberINTtoINDtext(myNumberInt)
                PAISE = NumberINTtotext(myNumberFrac)
            Else
                RUPEEs = NumberINTtoINDtext(myNumberInt)
            End If
        Else
            If Val(myNumberFrac) <> 0 Then
                RUPEEs = NumberINTtotext(myNumberInt)
                PAISE = NumberINTtotext(myNumberFrac)
            Else
                RUPEEs = NumberINTtotext(myNumberInt)
            End If
        End If

        If Val(myNumber) = 0 Then TextCurrency = CurrSYMSingular & " " & "ZERO"

        If TextStyle = 1 Then
            Select Case RUPEEs
                Case ""
                    RUPEEs = ""
                Case "One"
                    RUPEEs = CurrSYMSingular & " One"
                Case Else
                    RUPEEs = CurrSYMPlural & " " & RUPEEs
            End Select
            Select Case PAISE
                Case ""
                    PAISE = ""
                Case "One"
                    If RUPEEs = "" Then
                        PAISE = FracSYMSingular & " One"
                    Else
                        PAISE = " & " & FracSYMSingular & " One"
                    End If
                Case Else
                    If RUPEEs = "" Then
                        PAISE = FracSYMPlural & " " & PAISE
                    Else
                        PAISE = " & " & FracSYMPlural & " " & PAISE
                    End If
            End Select
        Else
            Select Case RUPEEs
                Case ""
                    RUPEEs = ""
                Case "One"
                    RUPEEs = "One " & CurrSYMSingular
                Case Else
                    RUPEEs = RUPEEs & " " & CurrSYMPlural
            End Select
            Select Case PAISE
                Case ""
                    PAISE = ""
                Case "One"
                    If RUPEEs = "" Then
                        PAISE = "One " & FracSYMSingular
                    Else
                        PAISE = " & One " & FracSYMSingular
                    End If
                Case Else
                    If RUPEEs = "" Then
                        PAISE = PAISE & " " & FracSYMPlural
                    Else
                        PAISE = " & " & PAISE & " " & FracSYMPlural
                    End If
            End Select
        End If

        TextCurrency = RUPEEs & PAISE
    End If
End Function
'___________________________________________________________
Private Function ConvertHundreds(ByVal myNumber)
    Dim Result As String
    ' Exit if there is nothing to convert.
    If Val(myNumber) = 0 Then Exit Function
    ' Append leading zeros to number.
    myNumber = Right("000" & myNumber, 3)
    'Debug.Print myNumber
    ' Do we have a hundreds place digit to convert?
    If Left(myNumber, 1) <> "0" Then
        Result = ConvertDigit(Left(myNumber, 1)) & " Hundred "
    End If
    ' Do we have a tens place digit to convert?
    If Mid(myNumber, 2, 1) <> "0" Then
        Result = Result & ConvertTens(Mid(myNumber, 2))
    Else
        ' If not, then convert the ones place digit.
        Result = Result & ConvertDigit(Mid(myNumber, 3))
    End If
    ConvertHundreds = Trim(Result)
End Function
'___________________________________________________________
Private Function ConvertTens(ByVal MyTens)
    Dim Result As String
    ' Is value between 10 and 19?
    If Val(Left(MyTens, 1)) = 1 Then
        Select Case Val(MyTens)
            Case 10: Result = "Ten"
            Case 11: Result = "Eleven"
            Case 12: Result = "Twelve"
            Case 13: Result = "Thirteen"
            Case 14: Result = "Fourteen"
            Case 15: Result = "Fifteen"
            Case 16: Result = "Sixteen"
            Case 17: Result = "Seventeen"
            Case 18: Result = "Eighteen"
            Case 19: Result = "Nineteen"
            Case Else
        End Select
    Else
        ' .. otherwise it's between 20 and 99.
        Select Case Val(Left(MyTens, 1))
            Case 2: Result = "Twenty "
            Case 3: Result = "Thirty "
            Case 4: Result = "Forty "
            Case 5: Result = "Fifty "
            Case 6: Result = "Sixty "
            Case 7: Result = "Seventy "
            Case 8: Result = "Eighty "
            Case 9: Result = "Ninety "
            Case Else
        End Select
        ' Convert ones place digit.
        Result = Result & ConvertDigit(Right(MyTens, 1))
    End If
    ConvertTens = Result
End Function
'___________________________________________________________
Private Function ConvertDigit(ByVal MyDigit)
    Select Case Val(MyDigit)
        Case 1: ConvertDigit = "One"
        Case 2: ConvertDigit = "Two"
        Case 3: ConvertDigit = "Three"
        Case 4: ConvertDigit = "Four"
        Case 5: ConvertDigit = "Five"
        Case 6: ConvertDigit = "Six"
        Case 7: ConvertDigit = "Seven"
        Case 8: ConvertDigit = "Eight"
        Case 9: ConvertDigit = "Nine"
        Case Else: ConvertDigit = ""
    End Select
End Function
'___________________________________________________________
Private Function NumberINTtotext(ByVal myNumber)
    If Len(myNumber) = 0 Or IsNumeric(myNumber) = False Then
        NumberINTtotext = ""
        Exit Function
    End If
    Dim Temp
    Dim myNumberInt, myNumberInteger
    Dim DecimalPlace, Count
    ReDim Place(9) As String
    Place(2) = " Thousand "
    Place(3) = " Million "
    Place(4) = " Billion "
    Place(5) = " Trillion "
    ' Convert MyNumber to a string, trimming extra spaces.
    myNumber = Trim(Str(myNumber))
    ' Find decimal place.
    DecimalPlace = InStr(myNumber, ".")
    ' If we find decimal place...
    If DecimalPlace > 0 Then
        myNumberInt = Trim(Left(myNumber, DecimalPlace - 1))
    Else
        myNumberInt = Trim(myNumber)
    End If
    If Val(myNumberInt) <> 0 Then
        Count = 1
        Do While myNumberInt <> ""
            ' Convert last 3 digits of MyNumber to English GBP.
            Temp = ConvertHundreds(Right(myNumberInt, 3))
            If Temp <> "" Then myNumberInteger = Temp & Place(Count) & myNumberInteger
            If Len(myNumberInt) > 3 Then
                ' Remove last 3 converted digits from MyNumber.
                myNumberInt = Left(myNumberInt, Len(myNumberInt) - 3)
            Else
                myNumberInt = ""
            End If
            Count = Count + 1
        Loop
    Else
        myNumberInteger = "ZERO"
    End If
    NumberINTtotext = myNumberInteger
    If Val(myNumber) = 0 Then NumberINTtotext = "ZERO"
End Function
'___________________________________________________________
Private Function NumberINTtoINDtext(ByVal myNumber)
    If Len(myNumber) = 0 Or Val(myNumber) = 0 Or IsNumeric(myNumber) = False Then
        NumberINTtoINDtext = ""
        Exit Function
    End If
    Dim Temp
    Dim RUPEEs, PAISE
    Dim DecimalPlace, Count
    ReDim Place(9) As String
    Place(2) = " Thousand "
    Place(3) = " Lac "
    Place(4) = " Crore "
    'Place(5) = " Arawb "
    'Place(6) = " Kharawb "
    'Place(7) = " Neel "
    myNumber = Trim(Str(myNumber))
    DecimalPlace = InStr(myNumber, ".")
    If DecimalPlace > 0 Then
        myNumber = Trim(Left(myNumber, DecimalPlace - 1))
    End If
    Do
        Count = 2
        If Len(myNumber) > 0 Then
            Hundreds = ConvertHundreds(Right(myNumber, WorksheetFunction.Min(3, Len(myNumber))))
            RUPEEs = Hundreds & RUPEEs
            myNumber = Left(myNumber, Len(myNumber) - WorksheetFunction.Min(3, Len(myNumber)))
        End If
        Do While Count < 4
            Temp = ConvertHundreds(Right(myNumber, WorksheetFunction.Min(2, Len(myNumber))))
            If Temp <> "" Then RUPEEs = Temp & Place(Count) & RUPEEs
            myNumber = Left(myNumber, Len(myNumber) - WorksheetFunction.Min(2, Len(myNumber)))
            Count = Count + 1
        Loop
        If Len(myNumber) <> 0 And Count = 4 Then RUPEEs = " Crore " & RUPEEs
    Loop While Val(Len(myNumber)) > 0
    NumberINTtoINDtext = RUPEEs
End Function
'___________________________________________________________
Private Function NumberFRACtotext(ByVal myNumber)
    Dim Temp, myNumberFrac, myNumberFraction, DecimalPlace, Count
    If Len(myNumber) = 0 Or IsNumeric(myNumber) = False Then
        NumberFRACtotext = ""
        Exit Function
    End If
    ' Convert MyNumber to a string, trimming extra spaces.
    myNumber = Trim(Str(myNumber))
    ' Find decimal place.
    DecimalPlace = InStr(myNumber, ".")
    ' If we find decimal place...
    If DecimalPlace > 0 Then
        myNumberFrac = Trim(Mid(myNumber, DecimalPlace + 1))
    Else
        NumberFRACtotext = "ZERO"
        Exit Function
    End If
    Count = DecimalPlace + 1
    Temp = ""
    Do While Val(Mid(myNumber, Count, 1)) = 0
        Temp = Temp & "ZERO "
        Count = Count + 1
    Loop
    Do While Count <> Len(myNumber) + 1
        If Val(Mid(myNumber, Count, 1)) = 0 Then
            Temp = Temp & "ZERO "
        Else
            Temp = Temp & ConvertDigit(Val(Mid(myNumber, Count, 1))) & " "
        End If
        Count = Count + 1
    Loop
    NumberFRACtotext = Temp
End Function

After installing the function as an add-in, and before using the function, run this procedure to see the parameter descriptions in the function dialogue box. It is better to save such procedures in your personal Excel workbook and have them run on Excel start.

Sub AddUDFToCustomCategory()
    Application.MacroOptions Macro:="TextCurrency", Description:="Converts number to text", _
        ArgumentDescriptions:=Array("Number to be converted to text", _
        "1 for International (Thousand,Million,Billion)," & vbCrLf & "2 for Indian (Thousand,Lakh,Crore)," & vbCrLf & "Default Value 2", _
        "Yes to convert the number to currency, Else please enter No" & vbCrLf & "Make sure the number is rounded to best suit the fraction size of the desired currency" & vbCrLf & "Default Value Yes", _
        "for example USD or US DOLLAR, INR or Indian Rupee for one unit of currency," & vbCrLf & "Default Value Rupee", _
        "for example USDs or US DOLLARs, INRs or Indian Rupees for multiple units of currency," & vbCrLf & "Default Value Rupees", _
        "for example 100" & vbCrLf & "for one INR = 100 Paise, one USD = 100 Cents, One OMR = 1000 Baizas" & vbCrLf & "Default value 100", _
        "for example Cent for US DOLLAR, Paisa for Indian Rupee for one unit of currency fraction," & vbCrLf & "Default Value Paisa", _
        "for example Cents for US DOLLAR, Paise for Indian Rupee for multiple units of currency fraction," & vbCrLf & "Default Value Paise", _
        "1 for CurrencySYM and Amount, for example USD One Hundred Fifty," & vbCrLf & "2 for Amount and CurrencySYM, for example One Hundred Fifty USD" & vbCrLf & "Default Value 1")
End Sub

Answer: While I understand it's not generally done, I'm adding a review of the slightly updated code, which should not have been posted as an answer. There seem to be sufficiently few changes made that it's not that much different from the original.

Implicit Variants - There are a large number of variables that are Dimmed, but no type is defined. 
These are implicitly declared Variant by the runtime engine and can lead to a variety of issues, the least of which is slower execution, the worst of which is probably future programmers (including future-you) misusing a variable because the "compiler" doesn't warn you that you're putting a String where a Long should be (for example), leading to run-time errors. One example: Function TextCurrency(ByVal myNumber, ... right there at the top of the code block.

Declaring multiple values in one Dim statement - While there's nothing inherently wrong with doing so, and some may consider it simply a matter of code style, it's generally frowned upon by "real" programmers, mostly because:

- It's easy to miss that there are multiple variables declared on one line - most programmers assume one declaration per physical line and mentally stop reading after the first one
- It's easy to forget to declare types for one or more of the variables in a line; for example, Dim X, Y, Z As Long is assumed by many to declare 3 variables of type Long; however, only Z is a Long, while X and Y are actually Variant.
- It's preferred to declare variables as close to their initial use as possible, and declaring multiple variables on one line tends to lead to a "wall of declarations" and a lot of scrolling to see what's actually going on

Declaring variables as Integer - Internally, VBA works with 32-bit integers (i.e. Long) and it converts each 16-bit Integer to a 32-bit Long each time it needs to operate on one, so you may as well just declare them all as Long to begin with. Frankly, there's no need to use Integer unless you're calling a function in some external DLL (like the WinAPI) that requires the use of a 16-bit integer. You'll gain tiny bits of performance improvement with each Integer-to-Long change you make (by the runtime engine not having to do the conversion for you), and you'll remove (or at least put off) another opportunity for an overflow error. 
Parameters are passed ByVal, yet are assigned a value - Once again, Function TextCurrency(ByVal myNumber, ... is a culprit. The very first If statement contains myNumber = Trim(Str(myNumber)). If you're expecting myNumber to be returned to the calling code differently from how it was sent in, it should be ByRef so the changes can be reflected externally. If you're not expecting it to be returned, it makes more sense to use a local copy of it, to be explicit and to write code that "says what it does and does what it says".

Using "" to represent an empty string when there's a perfectly good built-in constant vbNullString to do so - This is something of a matter of style, and it's certainly shorter to type "", but using vbNullString makes it very explicitly clear what you mean, where "" could mean that you forgot to put something between those quotes.

Parameters are passed ByRef by default, but none are explicitly declared as such, and most appear not to need to be - For example, in

Function TextCurrency(ByVal myNumber, Optional NumberSystem As Byte = 2, ...

the 2nd parameter NumberSystem is passed ByRef but is never assigned to, and could (and should) be passed ByVal to make it abundantly clear that this is the case:

Function TextCurrency(ByVal myNumber, Optional ByVal NumberSystem As Byte = 2, ...

TextCurrency implicitly returns a Variant - You did remember to declare a return type for all your other functions. It should have As String at the end of the signature to help "say what you mean and mean what you say".

The above are all issues that RubberDuck's Code Inspections caught (but not all the issues it found). The built-in QuickFixes will give you one or more options and actually fix the code for you. These are other observations:

Variable Naming

Variable names are somewhat declared with Hungarian Notation, yet are inconsistent with their variable type. Dim myNumberInt As String made me think that this should hold an Int, yet it's declared as String. 
Either the variable type is wrong or the variable name is poor. Naming variables is hard! Based on use, it appears that myNumberInt is the whole number portion of the value passed to the function. I'd suggest something like inputValueIntegerPart, or maybe a simpler integerPortion. While that's a lot to type, the VBE does offer auto-complete by pressing <Ctrl>-<Space>, which will either complete the word for you or offer up a list of everything starting with what you've typed so far, so you don't have to type the whole thing every time.

myNumber isn't particularly descriptive of what it holds. Something along the lines of inputValue, originalCurrencyAmount or completeCurrencyAmount is more descriptive and helps you remember, many lines later, what you're dealing with.

Capitalization consistency

You declare the "camelCase" myNumber, yet use "PascalCase" for NumberSystem, and "SHOUTCASE" for RUPEEs and PAISE. Convention says that methods (Sub and Function) are "PascalCase", while variables are "camelCase" and constants are "SHOUTCASE". Of course, there isn't much convention in VBA (other than "no convention at all"), but since you care enough to get your code reviewed, you probably won't be writing VBA forever, and you may want to consider training your brain now for other languages in the future. You don't have to follow that convention by any means (especially if you're a programming shop of 1), but pick some convention and follow it. It will make your life much easier.

Do be aware, however, that VBA "helpfully" fixes casing for you, and that can lead to annoying situations later. For example, if you Dim value As Long, you will end up with ThisWorkbook.Cells("A1").Value being "helpfully" fixed to ThisWorkbook.Cells("A1").value (note the lower case "v"). 
So A) try not to use common method names as variables (even though the scope is different and you can get away with it), and B) if you ever do, a simple Dim Value will "fix" all the occurrences in your code, then you can delete the line. Indentation This is always tricky, as often formatting is mangled/lost when pasting code into the text entry box. While code indentation doesn't matter one whit to the VBA compiler, it matters hugely to the poor soul who has to read it. This example is a particularly egregious one:

If CurrencyConversion Then
If FractionSize <= 1000 Then myNumberFrac = Left(myNumberFrac & "000", 3)
If FractionSize <= 100 Then myNumberFrac = Left(myNumberFrac & "00", 2)
If FractionSize <= 10 Then myNumberFrac = Left(myNumberFrac & "0", 1)
End If

You have a multi-line If statement wrapping several single-line If statements. The lack of indentation in the outer If/End If block makes it harder to mentally parse and requires a new reader (or future you) to slow down and take more time to understand what's going on. Our brains will naturally associate the End If with the last If statement, when in fact, it's actually associated with the first If statement.

If CurrencyConversion Then
    If FractionSize <= 1000 Then myNumberFrac = Left(myNumberFrac & "000", 3)
    If FractionSize <= 100 Then myNumberFrac = Left(myNumberFrac & "00", 2)
    If FractionSize <= 10 Then myNumberFrac = Left(myNumberFrac & "0", 1)
End If

is functionally identical, but makes the code structure much more obvious. Function size You have some bits of code pulled out into their own functions, and that's good, but the main TextCurrency function is still rather large.
There are a number of "chunks" of code that could stand on their own to make the main function more readable, for example: If Val(myNumber) <> 0 Then myNumber = Trim(Str(myNumber)) DecimalPlace = InStr(myNumber, ".") End If Could become DecimalPlace = FindDecimalPlace(myNumber) followed later by the function declaration: Private Function FindDecimalPlace(ByVal inValue As Double) As Long If Val(inValue) <> 0 Then Dim valueAsString As String valueAsString = Trim(Str(inValue)) FindDecimalPlace = InStr(valueAsString, ".") End If End Function This replaces five lines of code in the main function with only one line, making the main function more readable, and it's very explicit that this line is going to FindDecimalPlace and assign the result to a variable to use later. Another example would be extracting this into its own function: If DecimalPlace > 0 Then myNumberInt = Trim(Left(myNumber, DecimalPlace - 1)) myNumberFrac = Trim(Mid(myNumber, DecimalPlace + 1)) If CurrencyConversion Then If FractionSize <= 1000 Then myNumberFrac = Left(myNumberFrac & "000", 3) If FractionSize <= 100 Then myNumberFrac = Left(myNumberFrac & "00", 2) If FractionSize <= 10 Then myNumberFrac = Left(myNumberFrac & "0", 1) End If Else myNumberInt = myNumber End If As I was reviewing, looking to see how to refactor this, I ended up with the following: Dim RUPEEs As String, PAISE As String 'NOTE: I removed the `decimalPlace` variable declaration from above. 'because there are multiple declarations on this line, I had to edit the line instead of just deleting it 'decimalPlace = FindDecimalPlace(myNumber) 'NOTE: This line is being removed.
I'm commenting it here to make it obvious, but it should be removed once the code is proved to still work correctly; don't leave it behind, as that doesn't help readability myNumberFrac = SetFractionSize(myNumber, currencyConversion, fractionSize, myNumberInt, myNumberFrac) 'the remainder of the TextCurrency function is here, followed later by Private Function SetFractionSize(ByVal originalCurrencyAmount As Double, ByVal currencyConversion As Boolean, ByVal fractionSize As Long, _ ByRef outIntegerPortion As String, ByRef outDecimalPortion As String) As String Dim decimalPlaceLocation As Long decimalPlaceLocation = FindDecimalPlace(originalCurrencyAmount) If decimalPlaceLocation > 0 Then outIntegerPortion = Trim(Left(originalCurrencyAmount, decimalPlaceLocation - 1)) outDecimalPortion = Trim(Mid(originalCurrencyAmount, decimalPlaceLocation + 1)) If currencyConversion Then If fractionSize <= 1000 Then SetFractionSize = Left(outDecimalPortion & "000", 3) If fractionSize <= 100 Then SetFractionSize = Left(outDecimalPortion & "00", 2) If fractionSize <= 10 Then SetFractionSize = Left(outDecimalPortion & "0", 1) End If Else outIntegerPortion = originalCurrencyAmount End If End Function Pay attention to the comments in the code block; they explain a few of the things done and why. Also, note that the call to FindDecimalPlace was moved into this function since the original DecimalPlace variable is not used anywhere else in TextCurrency; therefore, it doesn't need to exist in TextCurrency, only in this function where it's actually used. Also, all the ByVal parameters are listed first, followed by the ByRef parameters. While there's no convention that I'm aware of that encourages this, I did this to help mentally separate them. Also, the ByRef parameters use systems Hungarian notation to indicate that they are expected to be set in the function and return a value to be used "on the outside".
This changes the beginning of the TextCurrency function from a bulky: Dim DecimalPlace As Integer, RUPEEs As String, PAISE As String If Val(myNumber) <> 0 Then myNumber = Trim(Str(myNumber)) DecimalPlace = InStr(myNumber, ".") End If If DecimalPlace > 0 Then myNumberInt = Trim(Left(myNumber, DecimalPlace - 1)) myNumberFrac = Trim(Mid(myNumber, DecimalPlace + 1)) If CurrencyConversion Then If FractionSize <= 1000 Then myNumberFrac = Left(myNumberFrac & "000", 3) If FractionSize <= 100 Then myNumberFrac = Left(myNumberFrac & "00", 2) If FractionSize <= 10 Then myNumberFrac = Left(myNumberFrac & "0", 1) End If Else myNumberInt = myNumber End If to a much more svelte and readable: Dim RUPEEs As String, PAISE As String myNumberFrac = SetFractionSize(myNumber, currencyConversion, fractionSize, myNumberInt, myNumberFrac) There are many more opportunities for refactoring; I've only shown the first two obvious ones, and hope that the reasoning for them will help you find others. Keep it DRY DRY: Don't Repeat Yourself. If you discover you're writing the same line(s) of code over and over (or even just twice), refactor those lines into a function and call the function. It makes the code more readable and more maintainable. You repeat this line in at least 3 locations: DecimalPlace = InStr(myNumber, ".") Each of those can now be replaced with: DecimalPlace = FindDecimalPlace(myNumber) 'or whatever variable is appropriate to pass in If you ever need to change the way you find the decimal point, or you find a bug, now you only need to fix it in the one function instead of tracking down everywhere the old version is and fixing it multiple times, most likely missing at least one of them. Maybe you want to internationalize it and allow for a , as the decimal separator instead of the US (and, I am a bit surprised to find out, also Indian) .
as the decimal separator - again, you only change it in one place. You also repeat this line in a number of places: myNumber = Trim(Str(myNumber)) Remember above where it's also flagged as being passed ByVal, yet is assigned a value. This is a good place to declare a new variable with a scope internal to TextCurrency and assign it one time, from the return value of your newly written function. You'll only use myNumber and never assign anything to it, plus you can skip all the other times you're Trim()ing it because you know it's already been cleaned up. Code Separators You have '______________________________________________ before every one of your functions. I fully understand why, however... Back in the "Tools", "Options" menu, check the "Procedure Separator" box and watch magic happen! :) There are a variety of other things you could do, too, but these are the big stand-outs that will make your code easier to read, easier to follow, easier to maintain in the future (for other programmers and future-you) and, much more importantly, less bug-prone for all those reasons.
{ "domain": "codereview.stackexchange", "id": 41642, "tags": "vba, excel" }
Why does 1-ethenyl-3-ethyl-5-ethynylbenzene have that name?
Question: I would have prioritized these functional groups based on two options (following the cyclohexane rules): molecular weight, or representing each pi bond as an alkane branch, e.g. ethenyl as an isopropyl backbone. But none of these options works. Why does this molecule have that name? Answer: Ordering substituents by complexity was abandoned long ago (before 1979) in favor of the much simpler alphabetical order. Here, ethenyl < ethyl < ethynyl. Note, however, that simple substituents are ordered alphabetically before adding a multiplicative prefix, which means that "dimethyl" is ordered as "methyl". Complex substituents are ordered by their first letter, e.g. isopropyl (I), dimethylphenyl (D)
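The alphabetization rule is easy to see in action by sorting the substituent names directly; the sketch below strips multiplicative prefixes for simple substituents before comparing. The prefix list is deliberately minimal and the helper name is made up for illustration.

```python
# Sketch: alphabetize simple substituents per current IUPAC practice,
# ignoring multiplicative prefixes (di-, tri-, ...) for simple substituents.
MULTIPLICATIVE = ("di", "tri", "tetra", "penta")

def sort_key(substituent: str) -> str:
    """Key used for alphabetical citation order of a simple substituent."""
    for prefix in MULTIPLICATIVE:
        if substituent.startswith(prefix):
            return substituent[len(prefix):]
    return substituent

names = ["ethynyl", "ethyl", "ethenyl"]
print(sorted(names, key=sort_key))
# -> ['ethenyl', 'ethyl', 'ethynyl'], the citation order in the name above
print(sort_key("dimethyl"))  # -> 'methyl'
```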
{ "domain": "chemistry.stackexchange", "id": 11345, "tags": "nomenclature, aromatic-compounds" }
How could chemical potential be interpreted as the molar Gibbs free energy?
Question: As is known, the Gibbs free energy for a closed system at constant temperature $T$ and constant pressure $p$, $G=U+pV-TS$, will be minimized, where $U$ is the total internal energy of the system. Could you explain in a simple and understandable way why the chemical potential can be interpreted as the molar Gibbs free energy, that is, $G_\text{mol}=\frac{G}{n}=\mu$, where $n$ is the amount of substance in moles. Answer: Let's first note that some of the ways we can add energy to a system are by heating it, doing mechanical work on it, or by adding mass; thus, $$dU=T\,dS-p\,dV+\mu\,dn$$ The Gibbs free energy potential $G=U-TS+pV$, which constitutes a Legendre transform, is of interest because (by differentiating) $$dG=dU-T\,dS-S\,dT+p\,dV+V\,dp=-S\,dT+V\,dp+\mu\,dn$$ which is particularly convenient to work with because many familiar processes occur at constant temperature and pressure. Under these conditions, $dG=\mu\,dn$ (or $\sum_i\mu_i\,dn_i$ for composite systems). Another approach is to simply start from the Euler relation $U=TS-pV+\mu n$, from which we can directly obtain $G=\mu n$.
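The bookkeeping in that differentiation can be sanity-checked numerically: treat the state variables and their increments as arbitrary numbers (not physical values), substitute $dU$ into $dG$, and confirm that everything except $-S\,dT+V\,dp+\mu\,dn$ cancels.

```python
import random

random.seed(0)
# Arbitrary, purely illustrative values for the state variables...
T, S, p, V, mu = (random.uniform(1.0, 10.0) for _ in range(5))
# ...and for the small increments (differentials)
dS, dT, dV, dp, dn = (random.uniform(-1e-3, 1e-3) for _ in range(5))

dU = T * dS - p * dV + mu * dn               # first law with the mass term
dG = dU - T * dS - S * dT + p * dV + V * dp  # differential of G = U - TS + pV

# The T dS and p dV pieces cancel, leaving only -S dT + V dp + mu dn
assert abs(dG - (-S * dT + V * dp + mu * dn)) < 1e-12
print("dG = -S dT + V dp + mu dn holds for arbitrary values")
```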
{ "domain": "physics.stackexchange", "id": 50212, "tags": "thermodynamics, chemical-potential" }
Super simple jQuery slider
Question: I created the most simple though still quite flexible jQuery slider ever! Or at least, I hope so. var slider = $(".slider-ul"); slider.each(function () { var e = $(this), images = e.find("li"), current = null; slide(); function slide() { images.each(function() { var li = $(this), next, pDone = false, sDone = false; if (li.hasClass("primary")) { li.removeClass("primary"); pDone = true; } else if (li.hasClass("secondary")) { li.removeClass("secondary").addClass("primary"); next = li.next(); console.log(next); if (next.size()) { next.addClass("secondary"); sDone = true; } else { images.filter(":first").addClass("secondary"); sDone = true; } } if (sDone && pDone) { return false; } }); setTimeout(slide, 5000); } }); How does this look? Answer: In this code, you set sDone = true in both branches of the if-else: if (next.size()) { next.addClass("secondary"); sDone = true; } else { images.filter(":first").addClass("secondary"); sDone = true; } So you could move that line outside of the if-else: if (next.size()) { next.addClass("secondary"); } else { images.filter(":first").addClass("secondary"); } sDone = true; At the top you declared the current variable, but then you don't use it. If you don't need it, then remove it.
{ "domain": "codereview.stackexchange", "id": 11791, "tags": "javascript, jquery" }
Python package for machine-learning aided data labelling
Question: In a lot of cases unlabelled data needs to be transformed to labelled data. The best solution is to use (multiple) human classifiers. However, going through all the data by hand (e.g. in text mining or image processing) is often a daunting task. Is there software that can combine human classifiers and machine-learning techniques in real time? I am especially interested in Python packages. To illustrate, classifying images from video streams is very repetitive. After 100 images (from different streams) a machine-learning algorithm could be used to predict the labels given by the human classifier. The machine classifier might be very confident about some (un)seen samples and very uncertain about others. The human classifier can then focus on the uncertain samples, helping the machine classifier to learn better what it does not yet know. Answer: It sounds like you are looking for active learning. In active learning, the classifier learns which samples would be most useful to have labelled by a human. There are many techniques for active learning, and many ways to adapt an existing (standard) learning algorithm to the active learning setting. The particular approach you mentioned is called "uncertainty sampling", and can be applied to any standard classifier that outputs confidence/certainty scores. There are other selection methods as well, which may perform better in some settings. You can also apply unsupervised methods to cluster the samples, then label one or a few samples from each cluster.
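The selection step of uncertainty sampling is easy to sketch independently of any particular classifier: take the per-sample class probabilities the model outputs and route the least-confident samples to the human. The probability matrix below is invented for illustration.

```python
# Uncertainty sampling sketch ("least confident" criterion): given class
# probabilities from any probabilistic classifier, pick the k samples whose
# top-class probability is lowest and send them to the human labeller.
def least_confident(probs, k):
    """Indices of the k samples the classifier is least confident about."""
    confidence = [max(p) for p in probs]
    return sorted(range(len(probs)), key=lambda i: confidence[i])[:k]

probs = [
    [0.98, 0.02],  # very confident -> no human label needed
    [0.55, 0.45],  # uncertain
    [0.90, 0.10],
    [0.51, 0.49],  # most uncertain
]
print(least_confident(probs, 2))  # -> [3, 1]
```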
{ "domain": "datascience.stackexchange", "id": 1647, "tags": "machine-learning, python, labels, active-learning, labelling" }
Don't understand what is meant by signal dimension
Question: I don't understand the concept of the dimension of a signal. I ran into it in an explanation of Shannon capacity, and in a paper on spread spectrum. I was hoping somebody could explain with an example. Does it apply to analog as well as digital? For instance, is AM two-dimensional, considering time and amplitude? Or is QPSK two-dimensional because it is a combination of a sine and a cosine term? Or does multilevel signaling like Pulse Amplitude Modulation have as many dimensions as the number of discrete levels it can take, or would dimensionality refer to the number of possible symbols it can represent in a pulse? Answer: In digital communication systems, the dimension of a modulation scheme refers to the number of basis functions, i.e. independent/orthogonal signals (in this case the $\sin$ and $\cos$ functions), used to represent the symbols. In these systems, $k=\log_2 M$ binary digits are mapped into the analog waveforms below: $$\left\{s_m(t), m = 1, 2, \ldots, M \ \big\vert \ M = 2^k\right\}$$ Take a PAM signal, for instance; it has the form \begin{align} s_m(t)&=\Re\left[A_mg(t)\exp\left(j2\pi f_c t\right)\right]\\ &= A_mg(t)\cos\left(2\pi f_c t\right) \end{align} One basis function (one axis), the in-phase component (here the $x$-axis), is used to represent the signal. The signal is one-dimensional. Now consider PSK modulation, whose signal waveforms are represented as \begin{align} s_m(t)&=\Re\bigg[g(t)\exp\left(j2\pi\frac{m - 1}{M}\right)\exp\left(j2\pi f_c t\right)\bigg]\\ &= g(t)\cos\left(2\pi f_c t + 2\pi\frac{m - 1}{M}\right) \end{align} which can be decomposed into sine and cosine components. This is a two-dimensional signal. The same goes for QAM signals. A common misconception is mistaking the dimension for the number of bits per symbol. In brief, the dimension is the number of orthogonal basis functions used in the linear combination representing the waveforms $s_m(t)$.
Note that, despite the digital setting, what is not always apparent is that the modulator takes charge of mapping the binary information onto symbols in the analog domain, ready for transmission. Having said that, there is also a notion of multi-dimensional signals, where the time and frequency intervals are subdivided into short intervals for transmission. For more on this and on the explanation above, see $[1]$. $[1]$ J. G. Proakis, Digital Communications, 4th edition, McGraw Hill, chapter 4.
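The "dimension = number of basis functions" statement can be checked numerically: sample a set of PAM waveforms and a set of PSK waveforms and compute the rank of the waveform matrix. This sketch assumes a rectangular pulse $g(t)=1$ and an integer number of carrier cycles per symbol; the sample count and carrier frequency are arbitrary choices.

```python
import numpy as np

# 64 samples over one symbol interval, carrier with 4 full cycles per symbol
t = np.arange(64) / 64.0
fc = 4

# 4-PAM: s_m(t) = A_m cos(2 pi fc t); every waveform is a scalar multiple
# of the single basis function cos(2 pi fc t)
pam = np.array([A * np.cos(2 * np.pi * fc * t) for A in (-3, -1, 1, 3)])

# 8-PSK: s_m(t) = cos(2 pi fc t + 2 pi m / 8); spans both cos and sin
psk = np.array([np.cos(2 * np.pi * fc * t + 2 * np.pi * m / 8)
                for m in range(8)])

print(np.linalg.matrix_rank(pam))  # 1 -> one-dimensional constellation
print(np.linalg.matrix_rank(psk))  # 2 -> two-dimensional constellation
```

Note that the rank stays at 2 for PSK no matter how many symbols $M$ the constellation has, which is exactly the point about not confusing the dimension with the number of bits per symbol.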
{ "domain": "dsp.stackexchange", "id": 6117, "tags": "signal-analysis, digital-communications, modulation, information-theory" }
"variable.features.n" in SCTransform
Question: What's the role of "variable.features.n" in the SCTransform function of Seurat? I tried setting "variable.features.n" to three different numbers (2000, 5000, 10000). Why are the cells clustering into different groups downstream? Answer: variable.features.n sets the number of features (you can think of them as genes in the case of scRNA-seq) you would like to use for the downstream steps such as clustering. Basically, using a gene that is expressed at more or less similar levels across different cell types would not be informative in terms of differentiating (for example, via clustering) these cell types; this parameter helps you to choose the "most informative" (most variable across the whole data set) features.
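The selection idea can be sketched in a few lines. Note this is only an illustration of "keep the top n most variable genes": Seurat's SCTransform actually ranks genes by residual variance from its regularized model, not by the plain variance used below, and the gene names and expression table here are made up.

```python
import random

random.seed(1)
n_cells = 50
# Made-up expression table: "housekeeping" genes are nearly flat across
# cells, "marker" genes switch between off (0) and on (20)
genes = {f"housekeeping_{i}": [10 + random.gauss(0, 0.1) for _ in range(n_cells)]
         for i in range(5)}
genes.update({f"marker_{i}": [random.choice([0, 20]) for _ in range(n_cells)]
              for i in range(5)})

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def top_variable_features(table, n):
    """Keep the n genes with the highest variance across cells."""
    return sorted(table, key=lambda g: variance(table[g]), reverse=True)[:n]

selected = top_variable_features(genes, 5)
print(selected)  # the five "marker_*" genes; the flat genes are dropped
```

Changing n here (like changing variable.features.n) changes which genes feed the downstream steps, which is why clustering results shift.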
{ "domain": "bioinformatics.stackexchange", "id": 1861, "tags": "seurat, sctransform" }
Why is `fftfilt` (i.e. `fft` of both inputs, then element-wise multiplication, then `ifft`) faster than direct convolution?
Question: I have the following Matlab code N = 500; M = 32; K = N + M - 1; % Length of full convolution x = randn(N); y = ones(M); % first one: convolution r = conv(x, y, 'same'); % second one: multiplication z = ifft(fft(x, K, K).*fft(y, K, K)); idx = round((K-N)/2+1):round((K+N)/2); % Indices of the central portion z = z(idx); % Obtaining the central part of the result May I know which operation is faster: getting $r$ or $z$? Why is this so? Thanks in advance Answer: Whether the direct convolution or the FFT/IFFT method is faster depends on the length of the impulse response, $N_\mathrm{i}$, and the signal length $N_\mathrm{s}$. With the formulas taken from here I've created a small Matlab script that calculates the required number of real multiplications and additions for the direct convolution and the FFT/IFFT method, respectively. I've assumed a real-valued impulse response and signal. Taking your example of $N_\mathrm{i}=32$ (or the other way round, this makes no difference), the direct convolution always requires fewer multiplications: But if we increase $N_\mathrm{i}$ to 50, for example, the direct convolution requires fewer multiplications up to a signal length of about 60. For signal lengths greater than 60 the FFT/IFFT method requires fewer multiplications: The actual computation time depends on some other parameters, especially the CPU architecture, but the number of multiplications is usually a good indicator.
The MATLAB code for the above figures for reference: Ns = 1:1000; % length of signal Ni = 50; % length of impulse response K = Ni + Ns - 1; % number of real multiplications for convolution: M_R_conv = Ns*Ni; % number of real additions for convolution: A_R_conv = Ns*(Ni-1); % number of complex multiplications for freq domain conv: M_C_dft = 3/4 * K .* (log2(K) + 1) + K; % number of complex additions for freq domain conv: A_C_dft = 3/2 * K .* log2(K); % number of real multiplications for freq domain conv: M_R_dft = 4*M_C_dft; % number of real additions for freq domain conv: A_R_dft = 2 * A_C_dft + 2 * M_R_dft; figure('Position', [0 0 400 300]); plot(M_R_conv, 'b'); hold on; plot(M_R_dft, 'r'); hold off; legend('direct', 'fft', 'location', 'Southeast'); title(['number of real multiplications. Ni: ' num2str(Ni)]); xlabel('length of signal'); A sidenote to your code: as it stands it is not running, as you're creating NxN and MxM matrices for x and y. What you probably want to do is x = rand(1,N); y = ones(1, M); % ... z = ifft(fft(x, K).*fft(y, K)); p.s. for those of you interested in Python code instead of Matlab, here is the same code in Python: https://gitlab.com/snippets/1789085
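The equivalence of the two methods (with the corrected row-vector code from the sidenote) is easy to confirm in Python: zero-padding both inputs to length K = N + M - 1, multiplying the spectra, and inverse-transforming reproduces the direct full convolution.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 500, 32
x = rng.standard_normal(N)  # row vector, as in the corrected Matlab note
y = np.ones(M)
K = N + M - 1                # length of the full linear convolution

direct = np.convolve(x, y)   # direct convolution ('full'), length K
# FFT method: pad both inputs to K so circular convolution equals linear
via_fft = np.fft.ifft(np.fft.fft(x, K) * np.fft.fft(y, K)).real

print(np.allclose(direct, via_fft))  # True: both compute the same result
```

Which of the two is faster then comes down to the operation counts plotted above (plus constant factors of the particular FFT implementation and CPU).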
{ "domain": "dsp.stackexchange", "id": 1372, "tags": "filters, discrete-signals, fourier-transform, convolution" }
Java and C++ stack implementations with dramatically different performance
Question: I'm developing a small programming language for use in any project I have where I feel a small scripting language could be used well. I've written two emulators for the language, one in C++ and one in Java. The C++ performs faster except for any recursion in the language, in which it suddenly performs terribly! Here is the code from my language that I am running which runs in 1.5s on the Java emulator and 4s on the C++ emulator: dec i = 0; recurse(); print i; def recurse: if i < 10000000: i++; recurse(); return; This compiles down into the following instructions: % i = 0; #define the alias i as being variable 0; # i needs to be initialised; STOREINT i 0; CALL @recurse; #call the label recurse, push this line onto the stack; PRINTINTLN $i; END; #end of program; @recurse; #define the function recurse; # if i is < 10000000 then inc i; GEQ $i 10000000; JCMP @return; #jump to the return statement INC i; CALL @recurse; @return; RETURN; #return back to the line on the top of the stack; The C++ implementation differs slightly from the Java implementation in that it uses an array of unions to store each "typeless" object, whereas Java just uses Object. Other than that, the code is almost identical (with C++ not needing to do any casting because from the instructions we can be sure which field in the union we are using, and therefore we access the correct field instead of casting the Object to the correct type in Java. C++ also doesn't need to use & 0xFF in order to fix sign problems with the java byte type, as we can make unsigned chars). 
The main difference between them seems to be the performance of the call stacks, which are implemented in each case as follows: Java public class ArrayStack<T> implements Stack<T> { private static final long serialVersionUID = 1L; protected Object[] stack; protected int capacity; protected int pointer; public ArrayStack(int capacity) { stack = new Object[capacity]; this.capacity = capacity; clear(); } public void push(T value) { stack[pointer++] = value; } public T pop() { return (T) stack[--pointer]; } public T peek() { return (T) stack[pointer-1]; } public boolean isEmpty() { return pointer == 0; } public boolean isFull() { return pointer == capacity; } public void clear() { pointer = 0; } public int size() { return pointer; } public int capacity() { return capacity; } } C++ template <class T> class ArrayStack { private: T* stack; int capacity; int pointer; public: ArrayStack(int capacity) { this->capacity = capacity; stack = new T[capacity]; clear(); } ~ArrayStack() { delete[] stack; } void push(T value) { stack[pointer++] = value; } T pop() { return stack[--pointer]; } T peek() const { return stack[pointer-1]; } bool isEmpty() const { return pointer == 0; } void clear() { pointer = 0; } int size() const { return pointer; } int getCapacity() { return capacity; } };
Answer: Performance I'll get to a bit of a review, but first, since you're most interested in performance, I've done some benchmarking. In short, all evidence points towards maaartinus' theory being the cause. Benchmark.java: public class Benchmark { private static class Stored { private final int[] big = new int[250]; public Stored() { } } public static void main(String[] args) { // Create the objects up front so we don't include allocation/construction time in the benchmark Stored stored[] = new Stored[1000000]; for (int i = 0; i < stored.length; ++i) { stored[i] = new Stored(); } for (int n = 0; n < 100; ++n) { long start = System.nanoTime(); ArrayStack<Stored> stack = new ArrayStack<Stored>(1000000); for (int i = 0; i < 1000000; ++i) { stack.push(stored[i]); } for (int i = 0; i < 1000000; ++i) { stack.pop(); } long end = System.nanoTime(); if (!stack.isEmpty()) { System.out.println("This is just here to make sure nothing gets optimized out."); } System.out.printf("Time: %f seconds%n", (end - start) / 1e9); } } } Once the JVM has warmed up and some JITing has (presumably) happened, each test takes 0.0038 seconds (with a very small variance). If the size of big in Stored is changed, this doesn't change timings since the stack stuff is still just copying a reference. Even if more references are added to Stored, this increasing the size of stored, there is still no time increase. This all makes sense: all Java has to do is copy a pointer, and the size of this pointer is fixed. benchmark.cpp int main() { // To be consistent with the Java approach, we'll create the objects up front. Noteably, this doesn't actually // matter since the copying of the objects is just as expensive as creation. // (Note: typically it wouldn't make sense for this to be dynamically allocated, but 1000000 objects was too big for // the stack.) 
Stored* stored = new Stored[1000000](); // Do the simple approach for (int n = 0; n < 100; ++n) { auto start = std::chrono::high_resolution_clock::now(); ArrayStack<Stored> stack(1000000); for (int i = 0; i < 1000000; ++i) { stack.push(stored[i]); } for (int i = 0; i < 1000000; ++i) { stack.pop(); } if (!stack.isEmpty()) { std::cout << "This is here so the code doesn't get optimized out.\n"; } auto end = std::chrono::high_resolution_clock::now(); std::chrono::duration<double> diff = end - start; std::cout << "Time: " << diff.count() << " seconds\n"; } } This was run with a few different Storeds. The first was a relatively large class: struct Stored { int x[250]; }; The second was a very light class: struct Stored { int x[1]; }; As a third option, the large class was kept, but rather than storing a value, a pointer was stored in the stack: for (int i = 0; i < 1000000; ++i) { stack.push(&stored[i]); } The timings were 0.30 seconds, 0.001 seconds, and 0.001 seconds for options 1, 2, and 3, respectively. This is consistent with the Java theory and findings: C++ is performing expensive copies that Java is not doing. Benchmark Conclusions The question now obviously becomes how to optimize your C++, and unfortunately there's no simple solution to that without knowing a lot more about what exactly it is that you're putting into your stack. Move semantics are one option, provided your copying is expensive and you'd be happy to sacrifice the object into the stack rather than copy it. The other option is to move away from value semantics so that copies become cheaper. If you have simple hierarchical ownership semantics (e.g. the objects being stored in the stack will always outlive the stack), this is super easy: just store pointers in the stack instead of objects. If your ownership isn't quite so simple, you would need to consider using something like heap allocation and a smart pointer. Both of these approaches are trade-offs though.
Smart pointers are safe but relatively expensive to copy, and raw pointers can quickly become a confusing mess if you're not very particular and consistent with your semantics. In both cases you no longer have the pleasantness of value semantics.
{ "domain": "codereview.stackexchange", "id": 13983, "tags": "java, c++, performance, comparative-review, stack" }
Orienting AMCL Map/Global_Costmap
Question: I'm running AMCL navigation on a map I created with the "Autonomous Navigation of a Known Map with Turtlebot" tutorial. As you can see below, when I open RViz with "roslaunch turtlebot_rviz_launchers view_navigation.launch", the robot (the little black dot) is not centered in the middle of the map, and I think it has to do with the map's origin, set in the .yaml file. I can reconfigure the origin of the .yaml file to fix the robot's position on the RViz visualization of the map (see the correction in the 3rd picture), but the origin of my Global_costmap hasn't been updated, which means the laserscans aren't aligned with the rest of the map. Any ideas of what I'm doing wrong? Originally posted by ElizabethA on ROS Answers with karma: 120 on 2017-06-21 Post score: 0 Original comments Comment by ElizabethA on 2017-06-21: I realized the global_costmap is somehow getting its Position (x, y) and Orientation set somehow...I just can't figure out what's setting it to my map's original origin and why it's not getting the update when I change the map's origin. Answer: I think you should be able to edit the param\Kinect_costmap_params.yaml file under turtlebot_navigation package, and add origin_x and origin_y values, and then pass that file in when I call AMCL. Example: Kinect_costmap_params.yaml origin_x: 15.0 origin_y: 6.5 AMCL would be started with this: roslaunch turtlebot_gazebo amcl_demo.launch map_file:= custom_param_file:= Unfortunately, that didn't work for me - I'm thinking I just didn't find the correct config file. ***Additionally, there's a known issue that you can't set the ORIENTATION of a costmap, which is what I need: https://github.com/ros-planning/navigation/issues/166 I think I'm going to recreate a map, but start my map-making in the middle of the "room" this time, so that hopefully, it will consider that (0,0). Originally posted by ElizabethA with karma: 120 on 2017-06-21 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 28177, "tags": "navigation, rviz, costmap, global-costmap, amcl" }
What's the difference between entropy and the (dis)order of a system?
Question: Entropy is often verbally described as the order/disorder of the thermodynamic system. However, I've been told that this description is a vague "hand-waving" attempt at describing what entropy is. For example, a messy bedroom doesn't have greater entropy than a tidy room. My question is: why is this the case? Also, what would better describe entropy verbally? Answer: Briefly, spontaneous processes tend to proceed from states of low probability to states of higher probability. The higher-probability states tend to be those that can be realized in many different ways. Entropy is a measure of the number of different ways a state with a particular energy can be realized. Specifically, $$S=k\ln W$$ where $k$ is Boltzmann's constant and $W$ is the number of equivalent ways to distribute energy in the system. If there are many ways to realize a state with a given energy, we say it has high entropy. Often the many ways to realize a high-entropy state might be described as "disorder", but the lack of order is beside the point; the state has high entropy because it can be realized in many different ways, not because it's "messy". Here's an analogy: if energy were money, entropy would be related to the number of different ways of counting it out. For example, there are only two ways of counting out two dollars with American paper money (2 1-dollar bills, or 1 two-dollar bill). But there are five ways of counting out two dollars using 50-cent or 25-cent coins (4 50-cent pieces, 3 50-cent pieces and 2 quarters, and so on). You could say that the "entropy" of a system that dealt in coins was higher than that of a system that dealt only in paper money. Let's look at the change in entropy for a reaction $\rm A\rightarrow B$, where A molecules can take on energies that are multiples of 10 energy units, and B molecules can take on energies that are multiples of 5 units. Suppose that the total energy of the reacting mixture is 20 units.
If we have 3 molecules of A, there are 2 ways to distribute our 20 units among energy levels with 0, 10, and 20 units: (20, 0, 0) and (10, 10, 0). If we have 3 molecules of B, there are 4 ways to distribute 20 units among energy levels with 0, 5, 10, 15, and 20 units: (20, 0, 0), (15, 5, 0), (10, 10, 0), and (10, 5, 5). The entropy of B is higher than the entropy of A because there are more ways to distribute the same amount of energy in B than in A. Therefore, $\Delta S$ for the reaction $\rm A\rightarrow B$ will be positive.
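The counting in this example is easy to verify by brute force, treating each "way" as an unordered assignment of the three molecules to energy levels:

```python
from itertools import combinations_with_replacement
from math import log

def count_ways(levels, n_molecules, total):
    """Number of distinct unordered level assignments summing to `total`."""
    return sum(1 for combo in combinations_with_replacement(levels, n_molecules)
               if sum(combo) == total)

W_A = count_ways(range(0, 21, 10), 3, 20)  # A: levels 0, 10, 20
W_B = count_ways(range(0, 21, 5), 3, 20)   # B: levels 0, 5, 10, 15, 20
print(W_A, W_B)                 # 2 4
# Delta S = k ln(W_B) - k ln(W_A) = k ln(W_B / W_A) > 0
print(log(W_B) - log(W_A) > 0)  # True
```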
{ "domain": "chemistry.stackexchange", "id": 2567, "tags": "physical-chemistry, thermodynamics, entropy" }
How deep must Earth's ocean be to form Ice VII on bottom?
Question: Is there a way to tell how deep Earth's ocean must be to form Ice VII at the bottom? Or is it impossible because our planet is too small? As far as I know, at the bottom of the 11.5 km Mariana Trench the pressure is about 108 MPa, and to form Ice VII you need 3 GPa. But 11.5 km is nothing compared to the size of the Earth, right? Answer: Ice VII occurs above 3 GPa. Pressure at depth $z$ is $p = \rho g z$, so the critical depth for $\rho=1050$ kg/m$^3$ is $z=290,951.4$ m. So we need a 291 km deep ocean to get high pressure ice. Not very likely on Earth.
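The arithmetic is a one-liner; here is a sketch using the answer's figures (the exact result depends on the values assumed for g and the density, so it matches the quoted 290,951.4 m only to within rounding):

```python
rho = 1050.0        # kg/m^3, density assumed in the answer
g = 9.81            # m/s^2
p_ice_vii = 3e9     # Pa, onset pressure for Ice VII

# Hydrostatic pressure p = rho * g * z  =>  z = p / (rho * g)
z = p_ice_vii / (rho * g)
print(f"critical depth: {z / 1000:.0f} km")
```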
{ "domain": "physics.stackexchange", "id": 61726, "tags": "pressure, water, estimation, planets, ice" }
Maximum Likelihood Detection of Signal Vectors in Gaussian Noise
Question: Consider a binary-input additive white Gaussian noise channel. Let $\mathbf{x}_0 = (\sqrt{E_s},\sqrt{E_s},\cdots,\sqrt{E_s})$ and $\mathbf{x}_1 = (-\sqrt{E_s},-\sqrt{E_s},\cdots,-\sqrt{E_s})$ be two codewords of length $d$. Suppose $\mathbf{x}_0$ is transmitted and the received vector is given by $\mathbf{y} = \mathbf{x}_0 + \mathbf{n}$, where the noise vector $\mathbf{n}$ is an i.i.d. Gaussian random vector with mean zero and variance $\frac{N_0}{2}$. Suppose that maximum-likelihood decoding is employed. The pairwise error probability $P_d$ is the probability that the decoder chooses the incorrect codeword $\mathbf{x}_1$ as the decision instead of the originally transmitted $\mathbf{x}_0$. Show that $P_d=Q(\sqrt{\frac{2dE_s}{N_0}})$, where $Q(x) = (\frac{1}{\sqrt{2 \pi}})\int ^{\infty}_{x}e^{-\frac{t^2}{2}}dt$. I know how to calculate the error probability for ordinary BPSK symbols, but not for codewords. Does anyone know how to calculate the probability for the codeword case? Answer: The probability of error of the ML detector is equal to the probability that the received vector $\mathbf{y}$ is closer to $\mathbf{x}_1$ than to $\mathbf{x}_0$, which is equal to the probability that the noise component in the direction of $\mathbf{x}_0-\mathbf{x}_1$ is greater than half of the Euclidean distance between $\mathbf{x}_0$ and $\mathbf{x}_1$. The Euclidean distance between $\mathbf{x}_0$ and $\mathbf{x}_1$ is $$D=||\mathbf{x}_0-\mathbf{x}_1||=\sqrt{\sum_{i=1}^d(x_{0,i}-x_{1,i})^2}=\sqrt{\sum_{i=1}^d4E_s}=2\sqrt{dE_s}\tag{1}$$ The noise variance in the direction of $\mathbf{x}_0-\mathbf{x}_1$ equals $N_0/2$ (as in any other direction). The probability that a zero mean Gaussian noise variable with variance $N_0/2$ assumes a value greater than $D/2$ is given by $$P_E=Q\left(\frac{D/2}{\sqrt{N_0/2}}\right)=Q\left(\frac{\sqrt{dE_s}}{\sqrt{N_0/2}}\right)=Q\left(\sqrt{\frac{2dE_s}{N_0}}\right)\tag{2}$$
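Equation (2) is easy to sanity-check by Monte Carlo. A sketch with hypothetical parameter values of my choosing: since the two codewords are antipodal, ML decoding reduces to the sign of the correlation with $\mathbf{x}_0-\mathbf{x}_1$, i.e. the sign of $\sum_i y_i$.

```python
import math
import random

def q_func(x):
    # Gaussian tail probability Q(x) via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def simulate_pairwise_error(d, es, n0, trials, seed=0):
    # Transmit x0 = (+sqrt(Es), ..., +sqrt(Es)); declare an error
    # whenever the correlation sum(y) falls below zero (closer to x1).
    rng = random.Random(seed)
    sigma = math.sqrt(n0 / 2)
    errors = 0
    for _ in range(trials):
        s = sum(math.sqrt(es) + rng.gauss(0, sigma) for _ in range(d))
        if s < 0:
            errors += 1
    return errors / trials

d, es, n0 = 4, 1.0, 2.0                       # hypothetical values
theory = q_func(math.sqrt(2 * d * es / n0))   # Q(2), about 0.0228
estimate = simulate_pairwise_error(d, es, n0, trials=200_000)
print(theory, estimate)
```

The empirical rate should agree with the closed form to within Monte Carlo noise.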
{ "domain": "dsp.stackexchange", "id": 6610, "tags": "noise, digital-communications, homework, maximum-likelihood-estimation" }
Condensation of Water. Classroom Controversy
Question: In our test there was a question that went like so: Question 4 You have a glass of iced water on an unshaded picnic table and went for a walk for 30 minutes. When you return you noticed the glass has water on the outside of it. a. In terms of heat transfer explain what has happened to the glass of water. The majority of the class understood this question and answered it correctly. b. Would there have been more or less water on the outside of the glass if the picnic table was in the shade? Explain. This question caused a lot of controversy with the majority of the students (including myself) believing that the shade would have caused more water on the outside. Whereas the teacher and a few students thought that the sun would have caused more water on the outside. The reasoning that the teacher provided was not very convincing and so we have come to this forum to ask what is the correct answer to part b and most of all WHY? We are 16 -17 years of age if you need to know the level for the explanation. Answer: Obviously it is the water vapor in the air that condensed onto the surface of the glass, and the ice inside the glass played the role of maintaining the glass-surface at a certain temperature. I am now going to simplify the problem to its essentials. I don't know what material the glass had, but say it was made of some highly conducting material, so that we may assume that the temperature was uniform over the entire glass (outer) surface. Also since it is ice inside the glass, it is reasonable to assume that temperature inside the glass is constant over time, with heat flux (from ambient to glass) adjusting itself to maintain this temperature. Also we shall assume that steady state has been reached. Let us also assume that glass is closed at its top, although this is not a serious assumption and may be relaxed. We shall neglect effect of winds and consequent evaporation. We shall assume that water vapor pressure in the air remains constant. 
So here's the simplified problem: given two closed hollow objects, both filled with ice so that temperature at inside wall of the object ($T_{inside}$) is constant, and one is kept in the shade while the other is in the sun, which will have greater rate of condensate formation (at its outer surface) after steady state has been reached? Heat flux to glass in the shade is $Q_{shade}$, and that to the glass in the sun is $Q_{sun}$, with $Q_{shade}<Q_{sun}$. Let $T_{shade}$ and $T_{sun}$ be the temperature of the outer surface of the glass in the shade and in the sun respectively; both must of course be higher than $T_{inside}$. Then since $Q_{shade}<Q_{sun}$ we must have $T_{shade}<T_{sun}$ (why? hint: heat transfer through wall depends on temperature difference across the wall). Greater rate of condensate formation occurs on the cooler surface, which is that in the shade. P.S. I will briefly explain why condensation rate must be higher for a colder surface. Condensation of water vapor on to a surface is a result of the fact that more number of water molecules are being deposited on the surface from air, than are being lost to air. The rate at which deposition takes place depends on water vapor pressure in air, which we have assumed to be constant and therefore is the same for both glasses. However the rate at which water molecules deposited on the glass are lost to air depends on the temperature of the surface, and lower the temperature lower is this net outward flux (see here).
{ "domain": "physics.stackexchange", "id": 38187, "tags": "homework-and-exercises, thermodynamics, temperature, cooling, condensation" }
Containment problem of an acyclic NFA in an NFA
Question: Let $A$ and $B$ be NFAs, such that $A$ is acyclic. In the general case, deciding whether $L(A)\subseteq L(B)$ is $PSPACE$-hard. However, since $A$ is acyclic, we know that for every $w \in L(A)$, it holds that $|w|$ is linear in $|A|$. It follows that if $L(A) \nsubseteq L(B)$, there must be a polynomial witness $w\in L(A)\setminus L(B)$. Thus, the containment problem when $A$ is acyclic is in $coNP$. Can it be shown that it is $coNP$-hard? Answer: This is coNP-hard even if $B$ is also acyclic. Let $D = \bigvee_{i=1}^m T_i$ be a DNF on variables $x_1, \dots, x_n$. We can easily construct an NFA $B$ accepting exactly the satisfying assignments of $D$, that is, the words $w \in \{0,1\}^n$ such that the assignment $a$ defined as $a(x_i) = w_i$ satisfies $D$. To do this, you build an automaton $B_i$ with $n+2$ states recognizing $T_i$ and add an initial state that non-deterministically chooses $i$ and jumps into $B_i$ with an $\epsilon$-transition. $B_i$ has states $q_1, \dots, q_{n+1}$, and $reject$. The initial state is $q_1$. When in state $q_j$ for $j \leq n$: if $x_j$ does not appear in $T_i$, then you go to state $q_{j+1}$ for every value of the next letter. If $x_j$ appears positively in $T_i$, then you go to $q_{j+1}$ only if you read letter $1$. If you read letter $0$, you go to state $reject$. If $x_j$ appears negatively in $T_i$, you do the same by swapping $0$ and $1$. $q_{n+1}$ is the only final state. Now you build $A$, acyclic, which accepts every word of length $n$ (same construction as before for $T_i$ empty). It is clear that $L(A) \subseteq L(B)$ iff $D$ is a tautology, which is coNP-complete.
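The correctness claim of the reduction can be checked at the level of languages with a brute-force sketch. The representation (a term as a dict from variable index to required bit) is my choice, and this enumerates words directly rather than simulating the automata:

```python
from itertools import product

def dnf_satisfied(terms, w):
    # terms: list of dicts mapping variable index -> required bit.
    # A word w (tuple of bits) is accepted if it satisfies some term.
    return any(all(w[i] == bit for i, bit in t.items()) for t in terms)

def containment_holds(terms, n):
    # L(A) = all words of length n; L(B) = satisfying assignments of D.
    # L(A) is a subset of L(B) iff D is a tautology.
    return all(dnf_satisfied(terms, w) for w in product((0, 1), repeat=n))

# x1 OR (not x1): a tautology over n = 2 variables
taut = [{0: 1}, {0: 0}]
# x1 AND x2: not a tautology
not_taut = [{0: 1, 1: 1}]

print(containment_holds(taut, 2))      # True
print(containment_holds(not_taut, 2))  # False
```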
{ "domain": "cstheory.stackexchange", "id": 4914, "tags": "complexity-classes, automata-theory, nfa" }
Gauss law in non-uniform electric field
Question: I am trying to figure out how Gauss's law would hold in an electric field configuration that varies with space. For simplicity, let us assume the classic XYZ coordinate system. Consider an electric field along the x axis that varies as follows: E(x,y,z) = K/(x^2), (similar to Coulomb's inverse square law) and assume the field vector to be along the positive direction of the X axis, the field lines are parallel and equally spaced, assumed to come from a very large distance. Now, for simplicity, suppose I choose a cube of side a, whose center lies on the X axis, let us say at some point x = A. With this cube as my Gaussian surface, and with the given configuration of the electric field, my calculations are as follows (using the integral form of Gauss's law): The only two planes that would contribute to the flux are the ones parallel to the YZ plane. Let P1 and P2 be the planes. If the electric field at P1 is E1, the flux of it will be E1A. Similarly the flux through P2 will be -E2A (assuming the direction of area vector). The area of the surfaces being the same, the field clearly is different at P1 and P2, because the two planes are separated by a distance = a. Thus if E1 = K/(x^2) then E2 must be K/(x±a)^2. How can the flux be equal to zero in this case? Please point out if I have any mistakes in my math, or if my analysis is wrong anywhere. Answer: The statements. Consider an electric field along the x axis that varies as follows E(x,y,z) = K/(x^2) and assume the field vector to be along the positive direction of the X axis, the field lines are parallel and equally spaced, assumed to come from a very large distance cannot be simultaneously true. In effect you have proved that with your evaluation of the electric flux through the opposite faces of a cube and showing it to be different. If the electric field lines are parallel then the electric field is uniform and hence cannot depend on position $x$.
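A quick numerical check of the answer's point, using hypothetical values (K = 1, a = 1, cube centered at x = 2, all my choices): the flux through the two contributing faces is nonzero, and it exactly equals the volume integral of div E over the cube. Gauss's law would therefore require a nonzero charge density inside the cube; the field cannot be of this form and source-free at the same time.

```python
K, a, A = 1.0, 1.0, 2.0          # hypothetical values
x1, x2 = A - a / 2, A + a / 2    # the two faces parallel to the YZ plane

def E(x):
    return K / x**2

# Net outward flux: +E(x2)*a^2 out of the far face, -E(x1)*a^2 into the near one
flux = (E(x2) - E(x1)) * a**2

# Divergence theorem check: div E = dE/dx = -2K/x^3, and the integral of
# -2K/x^3 dx from x1 to x2 is K/x2^2 - K/x1^2 (times the face area a^2)
div_integral = a**2 * (K / x2**2 - K / x1**2)

print(flux, div_integral)   # equal, and clearly nonzero
```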
{ "domain": "physics.stackexchange", "id": 49123, "tags": "electrostatics, electric-fields, gauss-law" }
Why does our eyes got wet when we feel emotional or whenever we are hurt?
Question: Whenever we feel emotional or get hurt, we get tears in our eyes. Why does that happen? Why do our eyes get wet? Answer: The lacrimal gland, the major source for tear production, is situated just above the eye. It sends its secretions to the eye via the lacrimal duct. Tears secreted collect in the conjunctiva of the upper lid. There is also a nasolacrimal duct which drains the tears from the eye into the nasal cavity. Hence the sniffles when we cry. How is it controlled? The lacrimal nucleus in the brain, the major source of innervation for the lacrimal gland, is a subnucleus of the superior salivary nucleus in the tegmentum of the pons (part of the brain stem). The lacrimal nucleus is connected to the lacrimal gland by the facial nerve (more specifically the greater petrosal nerve). This is parasympathetic (meaning it is part of the autonomic nervous system) and the neurotransmitter transmitting the signal from nerve to gland is acetylcholine (the receptor is the muscarinic subtype of the cholinergic receptor... though strangely wikipedia says the nicotinic subtype is involved as well, but maybe this is simply poorly worded and is just referring to the ganglia stopover between pre/post ganglionic parasympathetic neurons). This information helps explain how some drugs affect lacrimation (tearing up). Taken from source (1) [I do not know how reliable this website is]: How is the lacrimal nucleus told to start lacrimation? The lacrimal nucleus receives projections from the autonomic nervous system as well as from various brain structures, including the frontal lobe, globus pallidus, thalamus, and hypothalamus. The projections from the frontal lobe are thought to be important in human psychic lacrimation (crying because of emotions). What I know: Another brain area called the limbic system is involved in production of basic emotional drives, such as anger, fear, etc.
The limbic system [in the case of the sympathetic nervous system it would be the hypothalamus specifically, but this is irrelevant here since parasympathetic nerves cause lacrimation] also has a degree of control over the autonomic system. As I understand it, the frontal lobe usually inhibits the limbic system. Again from source (1): While humans across the world produce psychic tears in response to both positive and negative emotions, and while similar emotions have been identified in other animal species, no reports exist of other animals showing psychic lacrimation. According to wiki (3): Compared to tearing for physical reasons, tears for emotional reasons have a different chemical composition. They are composed of more protein-based hormones, such as prolactin, adrenocorticotropic, and leucine enkephalin (a natural pain killer), which is suggested to be the mechanism behind the experience of crying from emotion making an individual feel better. sources: (1) http://carta.anthropogeny.org/moca/topics/lacrimation-tearing (2) http://en.wikipedia.org/wiki/Lacrimal_gland#Innervation (3) http://en.wikipedia.org/wiki/Tears
{ "domain": "biology.stackexchange", "id": 4073, "tags": "human-biology, eyes, human-eye" }
How to learn from multiple data sources with different input variables but the same underlying pattern?
Question: I will explain with an example: Let's say you have 2 factories that produce pulp paper. Each has similar processes where the laws of physics give the same outcome. Now let's say these 2 factories have equipment and sensors from different manufacturers, so the output of those sensors is not comparable in any way (different number of variables, different metric system etc.). However, for both factories I can calculate the output easily and determine the learning target in a comparable way (e.g. metric tonnes of paper). Is there a way of using deep learning to learn from both datasets at the same time? I mean increase the predictive power upon a sample from factory 1 due to insights on factory 2? What about having 3 DNNs: 2 for reducing feature representation and standardizing output representation, and the third one for learning the general pattern common to both and predicting the final output? Answer: What you are referring to is multi-view learning. Multi-view learning basically tells us how multiple data sources or multiple feature subsets can be combined to create a more robust learning curve for the algorithm. In recent years, starting from 2013, a lot of research has been carried out in this rapidly growing field. A good introduction to the topic can be found in the link below. It contains a more theoretical and mathematical approach to understanding the method. http://research.ics.aalto.fi/airc/reports/R1011/msml.pdf
{ "domain": "datascience.stackexchange", "id": 2913, "tags": "neural-network, deep-learning, representation" }
LINQ Provider: Supporting Projections
Question: Up until recently, my LINQ-to-Sage provider didn't support projections, so the client code had to explicitly "transfer" to LINQ-to-Objects, like this: var vendorCodes = context.Vendors.ToList().Select(e => e.Code); Now, with a bit of help from Stack Overflow, I was able to modify my IQueryProvider implementation to support this: var vendorCodes = context.Vendors.Select(e => e.Code); Or even this: var vendors = context.Vendors.Select(e => new { e.Code, e.Name }); Under the hood it's still LINQ-to-Objects handling it. Here's the IQueryProvider implementation: public class SageQueryProvider<TEntity> : IQueryProvider where TEntity : EntityBase { private readonly IView _view; private readonly SageContextBase _context; public SageQueryProvider(IView view, SageContextBase context) { _view = view; _context = context; } public IQueryable CreateQuery(Expression expression) { var elementType = TypeSystem.GetElementType(expression.Type); try { return (IQueryable)Activator.CreateInstance(typeof (ViewSet<TEntity>).MakeGenericType(elementType), _view, this, expression, _context); } catch (TargetInvocationException exception) { throw exception.InnerException; } } public IQueryable<TResult> CreateQuery<TResult>(Expression expression) { var elementType = TypeSystem.GetElementType(expression.Type); if (elementType == typeof(EntityBase)) { Debug.Assert(elementType == typeof (TResult)); return (IQueryable<TResult>)Activator.CreateInstance(typeof(ViewSet<>).MakeGenericType(elementType), _view, this, expression, _context); } var methodCallExpression = expression as MethodCallExpression; if(methodCallExpression != null && methodCallExpression.Method.Name == "Select") { return (IQueryable<TResult>)Execute(methodCallExpression); } throw new NotSupportedException(string.Format("Expression '{0}' is not supported by this provider.", expression)); } public object Execute(Expression expression) { return Execute(expression, new ViewSet<TEntity>(_view, _context)); } public TResult 
Execute<TResult>(Expression expression) { return (TResult)Execute(expression, new ViewSet<TEntity>(_view, _context)); } private static object Execute<T>(Expression expression, ViewSet<T> viewSet) where T : EntityBase { var constantExpression = expression as ConstantExpression; if (constantExpression != null) { if (constantExpression.Value is ViewSet<T>) { return viewSet.Select(string.Empty); } } var filterFinder = new InnermostFilterFinder(); var filterExpression = filterFinder.GetInnermostFilter(expression); var filter = string.Empty; if (filterExpression != null) { if (filterExpression.Arguments.Count > 1) { var lambdaExpression = (LambdaExpression)((UnaryExpression)(filterExpression.Arguments[1])).Operand; // Send the lambda expression through the partial evaluator. lambdaExpression = (LambdaExpression)Evaluator.PartialEval(lambdaExpression); // Get the filter string to pass to the Sage API. var visitor = new FilterVisitor<T>(lambdaExpression.Body); filter = visitor.Filter; } switch (filterExpression.Method.Name) { case "Where": return viewSet.Select(filter); case "Single": var singleResult = viewSet.SingleOrDefault(filter); if (singleResult == null) { throw new InvalidOperationException("Sequence contains more than one element."); } return singleResult; case "SingleOrDefault": return viewSet.SingleOrDefault(filter); case "First": var firstResult = viewSet.FirstOrDefault(filter); if (firstResult == null) { throw new InvalidOperationException("Sequence contains no element matching specified criteria."); } return firstResult; case "FirstOrDefault": return viewSet.FirstOrDefault(filter); case "Last": var lastResult = viewSet.LastOrDefault(filter); if (lastResult == null) { throw new InvalidOperationException("Sequence contains no element matching specified criteria."); } return lastResult; case "LastOrDefault": return viewSet.LastOrDefault(filter); case "Count": return viewSet.Count(filter); case "Any": return viewSet.Any(filter); case "All": return 
viewSet.All(filter); default: throw new NotSupportedException("Method '" + filterExpression.Method.Name + "' is not currently supported by this provider."); } } var method = expression as MethodCallExpression; if (method != null && method.Method.Name == "Select") { // handle projections var lambda = ((UnaryExpression)method.Arguments[1]).Operand as LambdaExpression; if (lambda != null) { var returnType = lambda.ReturnType; var selectMethod = typeof(Queryable).GetMethods().First(m => m.Name == "Select"); var typedGeneric = selectMethod.MakeGenericMethod(typeof(T), returnType); var result = typedGeneric.Invoke(null, new object[] { viewSet.ToList().AsQueryable(), lambda }) as IEnumerable; return result; } } return viewSet.Select(filter); } } As you can see this class has changed quite dramatically since when I first wrote it, and it doesn't look like it's becoming any prettier - especially now that I'm considering adding support for SelectMany. How should I cure it? Answer: One of the things I think when I see a large case statement is would this be better off as some kind of lookup table. I think yours might have some scope for doing this since they all seem to do some processing on a viewSets and a filter. Using a simple lookup table to convert the string "Where", "Single" etc into a method call would allow you to separate the logic out a bit more. 
So, for example you could do something like this (better naming is left as an exercise for the reader): public static class FilterExpressionHelper { readonly static Dictionary<string, MethodInfo> _methods; static FilterExpressionHelper() { _methods = new Dictionary<string, MethodInfo>(); foreach(var methodInfo in typeof(FilterExpressionHelper).GetMethods(System.Reflection.BindingFlags.Static | System.Reflection.BindingFlags.Public).Where(x=>x.Name != "Execute")) { _methods.Add(methodInfo.Name, methodInfo); } } public static object Execute<T>(string methodName, ViewSet<T> view, string filter) { if(_methods.ContainsKey(methodName)) return _methods[methodName].MakeGenericMethod(typeof(T)).Invoke(null, new object [] { view, filter }); throw new NotSupportedException($"Method '{methodName}' is not currently supported by this provider."); } public static object Where<T>(ViewSet<T> viewSet, string filter) { return viewSet.Select(filter); } public static object Single<T>(ViewSet<T> viewSet, string filter) { var singleResult = viewSet.SingleOrDefault(filter); if (singleResult == null) { throw new InvalidOperationException("Sequence contains more than one element."); } return singleResult; } public static object SingleOrDefault<T>(ViewSet<T> viewSet, string filter) { return viewSet.SingleOrDefault(filter); } // etc } This would allow you to replace the large case statement in your filter logic with: FilterExpressionHelper.Execute(filterExpression.Method.Name, viewSet, filter);
{ "domain": "codereview.stackexchange", "id": 20509, "tags": "c#, linq, expression-trees, linq-expressions" }
I copied a workspace and now catkin_make doesn't work
Question: I created a workspace catkin_ws_01 and created some packages inside it. Then I duplicated that same workspace folder to create catkin_ws_02 and made some changes to the packages inside. Then I deleted the original workspace folder catkin_ws_01. Now when I try to run catkin_make from inside catkin_ws_02, it gives me the following error message: CMake Error: The source directory "/home/blabla/catkin_ws_01/src" does not exist. This means that catkin_make still refers to catkin_ws_01 even though I am executing the command from within catkin_ws_02. What is going on and how can I fix this? Originally posted by alexe on ROS Answers with karma: 60 on 2017-11-19 Post score: 2 Original comments Comment by marcoarruda on 2017-11-19: Have you copied the compiled folders (build and devel)? I would remove these folders and try to compile again. I don't think the compilation problem is related to that, but it's also important to check your .bashrc file and remove any sourcing command of your old catkin_ws_01 workspace. Comment by ahendrix on 2017-11-19: @alexe please don't delete questions after they've been answered. Comment by alexe on 2017-11-19: @ahendrix I'm sorry, when I deleted the question I didn't even realize that there had already been an answer. Answer: I'm using the ROS Indigo distribution. I tried to do the same on my computer and got the following error: CMake Error: The current CMakeCache.txt directory /home/marcoarruda/catkin_02/build/CMakeCache.txt is different than the directory /home/marcoarruda/catkin_01/build where CMakeCache.txt was created. This may result in binaries being created in the wrong place. If you are not sure, reedit the CMakeCache.txt Then, I have removed the folders build and devel from catkin_02 and tried to compile again. It worked!
I don't think the compilation problem is related to that, but it's also important to remove any source catkin_01/devel/setup.bash instruction (maybe you have added to your .bashrc file) I've created a video showing the steps to reproduce and solve it (https://youtu.be/Lg_oM4vFIEs). Originally posted by marcoarruda with karma: 541 on 2017-11-19 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by alexe on 2017-11-19: Thanks, the build and devel folders were exactly the issue! After deleting them, building worked as expected :)
{ "domain": "robotics.stackexchange", "id": 29404, "tags": "catkin-make" }
infinite parallel conducting planes
Question: Two infinite parallel grounded conducting planes are separated by a distance d. A point charge "q" is placed between the two planes. Using Green's reciprocity theorem, how do I show that the charge induced on each plane is "-q" multiplied by the fraction of the total separation given by the perpendicular distance from the charge to the opposite plane? Answer: If you place a charge q between two conducting plates, the total charge induced is -q summed over both plates, but the charge induced on either plate is proportional to the distance of the point from the opposite plate, so that if the distance between the plates is L, and the point is at a distance pL from the left plate, the charge on the left plate is -(1-p)q, and on the right plate is -pq. One way to see this is that the problem of solving Laplace's equation has a probability interpretation. If you start a random walk at the position of the charge, the induced charge on the left plate is equal to q times the probability that the random walk will hit the left plate before the right plate. This probability is the classical problem of a Brownian motion in 1d confined between two absorbing points, and this gives the answer. The solution of the 1d random walk problem allows you to understand that this problem is really one dimensional. If you smear the charge q into a parallel plane of charge, each infinitesimal charge on the plane induces the same charge on the plates, by symmetry. The solution for a plane of charge between two conductors is very simple, and it reproduces the given answer, in a way completely parallel to the probability argument, but without introducing probability concepts. "Using" Green's reciprocity theorem Green's reciprocity theorem is integration by parts twice.
$$ \int \phi_1 \nabla^2 \phi_2 = \int \phi_2 \nabla^2 \phi_1 $$ It has the interpretation that the potential energy from the field of 2 acting on body 1 is equal to the potential energy from the field of 1 on 2, and it is clearly true, because the potential is from pairwise interaction, and this potential is all the pairs in the separate bodies. By itself, this theorem proves nothing, because, being just integration by parts, it cannot be used to solve any differential equation. But if you use the additional fact that the potential between two uniformly charged plates is linear (this is the central fact used to get the result), you learn that if you add a charge density $\sigma_1$ to one plate and $\sigma_2$ to the other, then, up to units, the potential energy of the point charge is $$ \sigma_1 q x + \sigma_2 q (L-x) $$ By Green's reciprocity, this is the energy of the uniform charge densities in response to the field of the charge. The derivative with respect to each charge density then tells you how many electric field lines end on each plate, proportionally. The work is all in the solution of the linear 1d problem, and the Green theorem is adding nothing particularly remarkable. This is another "guess what I was thinking" problem all too common in education. This is possible, but it requires a knowledge not of physics, but of physicist psychology.
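The probability interpretation is easy to test numerically. A sketch (the discretization onto integer lattice sites is my choice): for a walker started a fraction p of the way across, the chance of being absorbed at the left wall first is 1 - p, so the left plate carries induced charge -(1-p)q.

```python
import random

def hit_left_first_probability(k, L, trials, seed=1):
    # Symmetric random walk on {0, ..., L} started at site k; returns
    # the fraction of walks absorbed at 0 before reaching L.
    rng = random.Random(seed)
    left_hits = 0
    for _ in range(trials):
        pos = k
        while 0 < pos < L:
            pos += rng.choice((-1, 1))
        if pos == 0:
            left_hits += 1
    return left_hits / trials

L, k = 20, 5   # charge a quarter of the way across, so p = 0.25
estimate = hit_left_first_probability(k, L, trials=20_000)
print(estimate)   # should be close to 1 - k/L = 0.75
```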
{ "domain": "physics.stackexchange", "id": 1741, "tags": "electromagnetism, homework-and-exercises" }