Spring (with mass) and Particle Work Energy Theorem Application
Question: I couldn't quite grasp my professor's explanation of this phenomenon. If there is a non-massless spring with a particle attached to it, why is there an error in using the work-energy principle if only the particle is considered and no external forces are included? With that said, why is it that, to properly apply the work-energy principle, if the force of the spring is treated as an external force then the system is no longer conservative and the path the spring takes must be considered? My professor explained it in words at first, which I somewhat understood, but the mathematical explanation that followed confused me. Thank you Answer: If the spring has mass then it can also have kinetic energy when it oscillates. If you count only the kinetic energy of the particle attached to the spring, you will be missing some energy. If the spring is fixed at one end then not all of it moves. Its effective mass $\mu$ is usually about $\frac13$ of its actual mass. The total kinetic energy of the particle-and-spring system is $\frac12 (m+\mu) v^2$, where $v$ is the speed of the particle attached to the end of the spring and $m$ is its mass. It is not clear to me what the rest of your question means. Perhaps "the path the spring takes" means its extension vs time, hinting at dissipation of energy through hysteresis. But this is not a consequence of the spring having mass. And how can the spring force (presumably the force which the spring exerts on the particle?) be external if the spring is part of the system? It is difficult for a third person to explain what your teacher said when you only have a vague recollection of what that was. I suggest that you ask your teacher for clarification.
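As a numerical sketch of the formula above (the function and numbers are hypothetical, purely for illustration):

```python
# Sketch: kinetic energy of a particle on a spring whose mass is not
# negligible. Assumption: uniform spring fixed at one end, so its
# effective mass is about one third of its actual mass.

def total_kinetic_energy(m_particle, m_spring, v):
    """KE of the particle-and-spring system via the effective-mass rule."""
    mu = m_spring / 3.0                     # effective spring mass
    return 0.5 * (m_particle + mu) * v**2

# Counting only the particle underestimates the kinetic energy:
ke_full = total_kinetic_energy(0.5, 0.3, 2.0)   # 0.5*(0.5 + 0.1)*4 = 1.2 J
ke_particle_only = 0.5 * 0.5 * 2.0**2           # 1.0 J
```

The 0.2 J difference is the kinetic energy carried by the spring itself, which is exactly the energy a particle-only bookkeeping misses.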
{ "domain": "physics.stackexchange", "id": 41718, "tags": "newtonian-mechanics, energy, work" }
RGB-D Handheld Mapping with RealSense R200
Question: I was following the RGB-D Handheld Mapping tutorial. First I ran the following command $ roslaunch realsense_camera r200_nodelet_default.launch While the realsense_camera node is running, in a new tab I ran $ roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start" depth_topic:=/camera/depth_registered/sw_registered/image_rect_raw The RTABMAP GUI is launched as expected but there is no image. In the terminal from which I launched the RTABMAP GUI, I am receiving the following warning. [ WARN] [1532462055.394145038]: /rtabmap/rgbd_odometry: Did not receive data since 5 seconds! Make sure the input topics are published ("$ rostopic hz my_topic") and the timestamps in their header are set. /rtabmap/rgbd_odometry subscribed to (approx sync): /camera/rgb/image_rect_color, /camera/depth/image_raw, /camera/rgb/camera_info Same thing happens when I run $ roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start" depth_topic:=/camera/depth/points/image_raw Can you help me solve this issue? Thank you. I am running ROS Kinetic on an Ubuntu 16.04 64-bit machine. 
Edit The published topics are: $ rostopic list /camera/color/camera_info /camera/color/image_raw /camera/color/image_raw/compressed /camera/color/image_raw/compressed/parameter_descriptions /camera/color/image_raw/compressed/parameter_updates /camera/color/image_raw/compressedDepth /camera/color/image_raw/compressedDepth/parameter_descriptions /camera/color/image_raw/compressedDepth/parameter_updates /camera/color/image_raw/theora /camera/color/image_raw/theora/parameter_descriptions /camera/color/image_raw/theora/parameter_updates /camera/depth/camera_info /camera/depth/image_raw /camera/depth/image_raw/compressed /camera/depth/image_raw/compressed/parameter_descriptions /camera/depth/image_raw/compressed/parameter_updates /camera/depth/image_raw/compressedDepth /camera/depth/image_raw/compressedDepth/parameter_descriptions /camera/depth/image_raw/compressedDepth/parameter_updates /camera/depth/image_raw/theora /camera/depth/image_raw/theora/parameter_descriptions /camera/depth/image_raw/theora/parameter_updates /camera/depth/points /camera/driver/parameter_descriptions /camera/driver/parameter_updates /camera/ir/camera_info /camera/ir/image_raw /camera/ir/image_raw/compressed /camera/ir/image_raw/compressed/parameter_descriptions /camera/ir/image_raw/compressed/parameter_updates /camera/ir/image_raw/compressedDepth /camera/ir/image_raw/compressedDepth/parameter_descriptions /camera/ir/image_raw/compressedDepth/parameter_updates /camera/ir/image_raw/theora /camera/ir/image_raw/theora/parameter_descriptions /camera/ir/image_raw/theora/parameter_updates /camera/ir2/camera_info /camera/ir2/image_raw /camera/ir2/image_raw/compressed /camera/ir2/image_raw/compressed/parameter_descriptions /camera/ir2/image_raw/compressed/parameter_updates /camera/ir2/image_raw/compressedDepth /camera/ir2/image_raw/compressedDepth/parameter_descriptions /camera/ir2/image_raw/compressedDepth/parameter_updates /camera/ir2/image_raw/theora 
/camera/ir2/image_raw/theora/parameter_descriptions /camera/ir2/image_raw/theora/parameter_updates /camera/nodelet_manager/bond /rosout /rosout_agg /tf /tf_static I think the RTABMAP GUI is not looking for the right topic; there is nothing matching "/camera/depth_registered/sw_registered/image_rect_raw". What should I replace the $ roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start" depth_topic:=/camera/depth_registered/sw_registered/image_rect_raw command with? Edit 2 As Martin suggested, I tried running $ roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start" depth_topic:=/camera/depth/image_raw Again, there's no image in the GUI. I also ran "rostopic hz /camera/depth/image_raw" and got: $ rostopic hz /camera/depth/image_raw subscribed to [/camera/depth/image_raw] average rate: 60.027 min: 0.016s max: 0.017s std dev: 0.00024s window: 57 average rate: 60.022 min: 0.016s max: 0.017s std dev: 0.00025s window: 117 average rate: 60.018 min: 0.016s max: 0.017s std dev: 0.00025s window: 177 average rate: 60.015 min: 0.016s max: 0.017s std dev: 0.00025s window: 237 average rate: 60.007 min: 0.016s max: 0.017s std dev: 0.00025s window: 297 average rate: 60.002 min: 0.016s max: 0.017s std dev: 0.00024s window: 357 average rate: 60.003 min: 0.016s max: 0.017s std dev: 0.00024s window: 417 average rate: 60.004 min: 0.016s max: 0.017s std dev: 0.00023s window: 477 average rate: 60.005 min: 0.016s max: 0.017s std dev: 0.00024s window: 538 average rate: 60.005 min: 0.016s max: 0.017s std dev: 0.00024s window: 598 This shows that we are receiving something from this topic. I also ran $ rqt_image_view just to check if the camera is working, and it is indeed working: https://ibb.co/fWS8ET Please notice that the chosen topic is "/camera/depth/image_raw". 
Originally posted by csg on ROS Answers with karma: 71 on 2018-07-23 Post score: 1 Original comments Comment by Humpelstilzchen on 2018-07-24: Looks like first we need to check what the topic is. "rostopic list" should display it, afterwards try "rostopic echo --noarr " to see if you get some data. Comment by csg on 2018-07-24: Hello @Humpelstilzchen can you check my edit? Thank you. Comment by matlabbe on 2018-07-24: Which realsense_camera package version do you have? Just tried with latest version of their indigo-devel branch and /camera/depth_registered/sw_registered/image_rect_raw should be published by realsense_camera node. Comment by matlabbe on 2018-07-24: Just found the difference, try roslaunch realsense_camera r200_nodelet_rgbd.launch instead like in the tutorial. Comment by csg on 2018-07-24: @matlabbe This one worked. Thank you very much! Comment by jayess on 2018-07-24: Can you please attach your images directly to the question? Answer: From all the topics you listed, /camera/depth/image_raw looks like it's the only candidate. First make sure that you're receiving something on that topic: rostopic hz /camera/depth/image_raw Then change your command to: $ roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start" depth_topic:=/camera/depth/image_raw Originally posted by Martin Günther with karma: 11816 on 2018-07-24 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by csg on 2018-07-24: Hello @MartinGünther please check out my Edit 2, thank you. Comment by Martin Günther on 2018-07-24: Just for somebody stumbling on this later: The comment by @matlabbe solved this problem.
{ "domain": "robotics.stackexchange", "id": 31335, "tags": "slam, navigation, ros-kinetic, rtabmap, rtabmap-ros" }
Algorithm to establish a global ranking given individual rankings
Question: I am looking for an algorithm(s) that can compute a global ranking (partial ordering) given individual rankings, in some kind of principled manner. I want to establish a partial-ordering of some projects, and had some users create their own partial orderings. I've looked at these algorithms so far: borda-0 borda-1 Dowdall My problem is that these algorithms were not really designed to establish partial-ordering and are instead for choosing N winners. I haven't been able to find any widely-used algorithms for this purpose. I looked at adapting STV to this purpose, but raw STV gives no instructions on how to resolve ties, and it's unclear to me what properties it would have when applied to partial ordering. Is there a widely-used algorithm/family of algorithms for the problem? Answer: There is an entire area, rank aggregation (in your case, partial rank aggregation) which deals with these issues. You can take a look at Dwork et al., Rank aggregation revisited and Ailon, Aggregation of partial rankings, $p$-ratings and top-$m$ lists and the pointers therein. There are many other relevant papers online.
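To make the Borda idea from the question concrete, here is a minimal sketch for complete rankings (handling partial rankings, ties, and missing items is precisely the harder problem the cited papers address; all names here are made up):

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate complete rankings with Borda counts.

    rankings: list of lists, each ordered best-to-worst over the same items.
    Returns items sorted by total Borda score (higher = better).
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] += n - 1 - pos   # best item earns n-1 points
    return sorted(scores, key=lambda item: -scores[item])

order = borda_aggregate([["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]])
# "a" is ranked first by two of the three voters, so it tops the aggregate
```

Note this yields a full ordering rather than N winners, but it silently breaks ties by iteration order, which is exactly the kind of underspecified behavior the question worries about.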
{ "domain": "cs.stackexchange", "id": 11956, "tags": "partial-order" }
Water pipes break when temperature goes down: how to estimate the energy involved?
Question: Normally low temperature is associated with a lower energy state and high temperature with a higher energy state. There is an apparent paradox when a water pipe breaks due to low temperature: when the temperature goes below 0°C/32°F, a water pipe can break due to ice expansion inside it. Breaking the pipe requires energy, so this is the apparent paradox: decreasing energy, associated with the temperature drop, causes an event (the pipe breaking) that requires energy. How can one evaluate the energy that breaks water pipes when the temperature goes below 0°C/32°F? Answer: Key to understanding the ability of freezing water to break a pipe is the phase transition from liquid water to ice. Let me first discuss water freezing generally, and then I will go to the specific case of water bursting a pipe. In an ice crystal the water molecules are in a stabilized position. The stabilization comes from the formation of a particular type of bond called a 'hydrogen bridge'. A hydrogen bridge is a much weaker form of binding than a molecular bond, but as we know, in the case of water there is a significant effect. As with any form of binding: when molecules fall into a bond there is some release of energy. Conversely, once a bond is formed, an input of energy is required to break that bond. At room temperature only a small percentage of the water molecules have a hydrogen bridge at any moment. The thermal motion of the water molecules is vigorous enough to break any hydrogen bridge that forms. Hydrogen bridges do form, but they don't last. The freezing point of water is the critical point. Below the freezing point the rate of formation of hydrogen bridges exceeds the rate of dissolution of hydrogen bridges. When you have an amount of water and you continuously withdraw heat, then at the freezing point the graph of temperature versus time will flatline for a while. It's only when all of the water is frozen solid that the temperature starts dropping again. 
Imagine an ice crystal in contact with liquid water. The water molecules do not all move at the same speed. Given how they are bumping into each other all the time, there is a statistical distribution of speeds. All the time it is the slowest water molecules that get "stuck" to the ice crystal, forming hydrogen bridges. That is, of the available water molecules, the lowest-energy ones transition to the ice form. So the remaining water molecules are slightly more energetic than the average. In effect the process of freezing replenishes the heat that is being withdrawn. Now to the case of water in a pipe. When the temperature is several degrees below freezing, the pressure increases because it is energetically favorable to transition to the ice crystal form. A sufficiently strong pipe would prevent the water inside from freezing completely. Since ice has a larger volume than water: the higher the pressure, the higher the energetic cost of going from the liquid form to the ice form. However, in the case of the pipes we're talking about here, the pressure that the freezing water can exert is more than the pipe can withstand, and the pipe bursts.
{ "domain": "physics.stackexchange", "id": 54914, "tags": "energy, temperature, water, ice" }
Angular velocity of disc attached to rod rotating about pivot
Question: Consider the following system: a disc attached to a rod that is pivoted to a point on the roof, moving under the influence of gravity, with the disc free to rotate about the end of the rod and the rod free to rotate about the pivot. It is found that in the motion the disc doesn't spin; why is this? As a contrasting question, if the disc were instead welded to the rod, then the disc would spin. I got this from the first problem discussed in this site. I reread their explanation about angular momentum a few times and I just don't get it; hopefully someone can write a simpler/clearer explanation of why it should be. I also found discussion of the problem in this video Answer: There is no torque applied to the disk (in theory). A round pin can only transfer reaction forces. By contrast, a square pin can transfer forces and torques. For the round pin case, the contact forces are in line with the centerline of the disk, so all contact forces pass through the center of mass and thus cause translation only. For the square pin case, the contact forces can be offset by a certain amount, causing not only forces to act on the disk, but also a net torque about the disk's center of mass. The second situation is similar to the welded case. In reality, there is friction, which is going to spin the disk. So you choose the level of detail you want to examine.
{ "domain": "physics.stackexchange", "id": 75620, "tags": "homework-and-exercises, angular-momentum" }
Does a zero-forcing equalizer need a known channel impulse response?
Question: I'm studying some of the basic equalizing structures and I understand how zero-forcing works, but it seems to me that a known channel impulse response is needed. Am I right? If so, what's the point? I mean, you're not likely to know what the channel is, so how is zero-forcing in any way useful? Answer: If the received signal can be written as $$\mathbf{y} = \mathbf{H}\,\mathbf{x} + \mathbf{n}$$ where $\mathbf{H}$ is the channel matrix, $\mathbf{x}$ is the transmitted vector, and $\mathbf{n}$ is the AWGN of the channel, then a zero forcing equalizer is simply (assuming that the channel matrix is square, and it's estimated perfectly at the receiver) $$\mathbf{H}^{-1}\mathbf{y} = \mathbf{x}+\mathbf{H}^{-1}\mathbf{n}$$ Obviously, you need the channel impulse response, which is captured in the channel matrix. This channel matrix is estimated in practice using any channel estimation technique, but the estimation is usually not perfect, and thus the aforementioned ZF equalizer serves as a theoretical limit.
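A toy numerical sketch of the equations above, with a hypothetical 2x2 channel and the noise omitted so the inversion is exact:

```python
# Zero-forcing equalization sketch for a toy 2x2 channel. The channel
# matrix H is assumed known (in practice it comes from channel estimation).

def zf_equalize_2x2(H, y):
    """Apply H^{-1} to the received vector y (2x2 inverse done by hand)."""
    (a, b), (c, d) = H
    det = a * d - b * c
    return [( d * y[0] - b * y[1]) / det,
            (-c * y[0] + a * y[1]) / det]

H = [[1.0, 0.4], [0.2, 1.0]]           # toy channel matrix (made up)
x = [1.0, -1.0]                        # transmitted symbols
y = [H[0][0]*x[0] + H[0][1]*x[1],      # y = Hx (noise omitted for clarity)
     H[1][0]*x[0] + H[1][1]*x[1]]

x_hat = zf_equalize_2x2(H, y)          # recovers x exactly without noise
```

With noise present, the same inversion yields $x + H^{-1}n$, which is why ZF can amplify noise badly when $H$ is poorly conditioned.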
{ "domain": "dsp.stackexchange", "id": 8020, "tags": "digital-communications, equalizer" }
Does finding slope/intercept using the formula for m, b always give the best-fit line in linear regression?
Question: In linear regression, we have to fit different lines and choose the one with minimum error. So what is the point of having a formula for m and b that gives the slope and intercept of the regression line, if it cannot give the best-fit line directly? 1. Consider that I applied the values in the dataset to the formula for m, b and found the regression line yhat = 17.5835x + 6, and just assume the error calculated for this line was 3. 2. Consider that I fit another line randomly (not using the formula for m, b; assume the m, b values for this random line were 16, 3). My second regression line is yhat = 16x + 3, and just assume the error calculated for this line was 1.5. Linear regression goal: to choose the best-fit line that has minimum error, so my second line would be better than the first line in this case. What is the point of having a formula that gives values for slope m and intercept b when it cannot give the best-fit line directly? OR is my understanding incorrect: does finding the slope/intercept using the formula for m, b always give the best line? If YES, then there is no need to try multiple lines, calculate errors, and choose the line with minimum error. If NO, then what is the point of having a formula for slope m and intercept b when it cannot give the best-fit line? Does that mean the maths/stats community needs to change this formula for slope and intercept? Answer: The formulae you mentioned give the coefficients of the line of best fit. The values are derived using the least squares method, where the goal is to minimize the sum of squared errors, so the scenario in your second point cannot happen: no other line can have a smaller sum of squared errors. Following is the derivation of the values of m and b. Let the line of best fit be $$\hat{y} = m*x + b$$ We then find the coefficients m and b which minimize the sum of squared errors between the actual value y and the predicted value $\hat{y}$. \begin{align} SSE &= \sum_{i=1}^{n}(y_{i}-\hat{y_{i}})^2 \\ &=\sum_{i=1}^{n}(y_{i}-m*x_{i}-b)^2 \end{align} Taking the first derivative of SSE with respect to b and equating to zero. 
\begin{align} \frac{\partial SSE}{\partial b} &= \sum_{i=1}^{n}-2*(y_{i}-m*x_{i}-b)\\ 0 &= \sum_{i=1}^{n}-2*(y_{i}-m*x_{i}-b) \end{align} Therefore we get b as $$ b = \bar{y} - m*\bar{x}$$ Similarly, in order to find m we take the partial derivative of SSE with respect to m and equate it to zero. \begin{align} \frac{\partial SSE}{\partial m} &= \sum_{i=1}^{n}-2x_{i}*(y_{i}-m*x_{i}-b)\\ 0 &= \sum_{i=1}^{n}-2x_{i}*(y_{i}-m*x_{i}-b)\\ 0 &= \sum_{i=1}^{n}x_{i}*(y_{i}-m*x_{i}-b)\\ 0 &= \sum_{i=1}^{n}x_{i}*y_{i} - \sum_{i=1}^{n}m*x_{i}^2 - \sum_{i=1}^{n}b*x_{i} \end{align} Substituting b and solving for m we get $$m = \frac{n\sum xy - \sum x\sum y}{n\sum x^2 - (\sum x)^2}$$
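So the direct answer to the question is YES: these formulas give the line minimizing SSE, so there is no need to try multiple lines. A quick sketch verifying this on made-up data:

```python
def fit_line(xs, ys):
    """Closed-form least-squares slope m and intercept b (the formulas above)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = sy / n - m * sx / n               # b = ybar - m * xbar
    return m, b

def sse(xs, ys, m, b):
    """Sum of squared errors of the line y = m*x + b on the data."""
    return sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))

xs, ys = [1, 2, 3, 4], [3, 5, 7, 10]      # toy dataset
m, b = fit_line(xs, ys)
# Any other line, e.g. a randomly guessed (2.0, 1.0), has SSE at least as big:
assert sse(xs, ys, m, b) <= sse(xs, ys, 2.0, 1.0)
```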
{ "domain": "datascience.stackexchange", "id": 8117, "tags": "regression, linear-regression, supervised-learning, linear-algebra" }
Are all least square filters adaptive?
Question: Are least square filters, or filters that minimize error energy, the same as least mean square adaptive filters? Answer: TL;DR: No, they are not necessarily the same. Gory Details Least squares is just an optimization technique. It is used in a variety of ways. For filter design it is used to select that realizable filter $H_r(e^{j\omega})$ that most closely matches, in the least squares sense, the ideal required filter response $H_i(e^{j\omega})$: $$ H_r(e^{j\omega}) = \arg \min \parallel H_r - H_i \parallel_2 $$ where $\parallel \cdot \parallel_2$ is the 2-norm or least-squares norm. This sort of filter $H_r$ is not adaptive. That is, it doesn't change once it has been designed. Adaptive filters may also use the least squares criterion, but in a different way: as part of the adaptation step. Adaptive filters start off with initial filter coefficients $\vec{w}_o[0]$ and then use an update: $$ \vec{w}_o[n] =\vec{w}_o[n-1] + \mu g[n-1] $$ where $\mu$ is the step-size and $g$ is the gradient of the least squared error surface in the direction of the minimum (from our current "location" of $\vec{w}_o[n-1]$). Here, $g$ is determined by our error criterion: least squares. This means: $$ \parallel \vec{w}_{\tt opt} - \vec{w}_o \parallel_2 $$ where $ \vec{w}_{\tt opt}$ is the unknown optimal (minimizing) solution.
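A minimal sketch of the adaptive update above, using LMS to identify a hypothetical 2-tap system (all names and values are made up; contrast this with a least-squares-designed filter, which is computed once and never changes):

```python
import random

# LMS adaptation sketch: coefficients are updated on every new sample,
# stepping down the gradient of the squared-error surface.
random.seed(1)

w_true = [0.5, -0.3]                 # unknown 2-tap system to identify
w = [0.0, 0.0]                       # adaptive filter, initial guess w_o[0]
mu = 0.05                            # step size

x_prev = 0.0
for _ in range(5000):
    x = random.uniform(-1, 1)        # white-ish excitation
    d = w_true[0] * x + w_true[1] * x_prev   # desired signal
    y = w[0] * x + w[1] * x_prev             # adaptive filter output
    e = d - y                                # instantaneous error
    # gradient step toward the minimum of the squared-error surface
    w[0] += mu * e * x
    w[1] += mu * e * x_prev
    x_prev = x
# w now closely approximates w_true
```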
{ "domain": "dsp.stackexchange", "id": 5351, "tags": "filter-design, adaptive-filters" }
Ricci tensor of direct product of manifolds
Question: Imagine I have a (Lorentzian) manifold with a metric $\left[ {\begin{array}{cc} g_{\mu\nu} &0\\ 0&g_{mn}\\ \end{array} } \right]$ Will the Ricci tensor be also block diagonal without any mixing of the two manifolds? Answer: Essentially, you have a manifold comprising two manifolds, say $M = N \times R$, and the metric is written as $m_{ab} = n_{ab} + r_{ab}$ with $n_{ab}$ defined on $N$ and $r_{ab}$ on $R$. Since the $N$, $R$ are disjoint, if you define a tensor in, say, $N$, its local coordinates corresponding to $R$ are zero, hence the Ricci tensor etc are all block diagonal.
{ "domain": "physics.stackexchange", "id": 13378, "tags": "general-relativity, differential-geometry, curvature" }
Greedy algorithm: Minimizing the maximum of a list
Question: Given a list $L$ of positive integers, assuming you can only modify the list by "splitting" its numbers a finite number $n$ of times, write an algorithm which minimizes the maximum of the last generated list. By "splitting" a number $x$ I mean deleting $x$ from $L$ and adding to $L$ two positive numbers $\alpha,\beta$ satisfying $\alpha+\beta=x$. We can only do this $n$ times. My try was taking the two greatest elements of $L$, $\alpha$ (the maximum) and $\beta$ (the next greatest), then splitting $\alpha$ into $c=\min(\alpha/\beta+1,n+1)$ equal parts (the last part will be $\frac{\alpha}{(\alpha/\beta+1)}+\alpha\text{mod}(\alpha/\beta+1)$). So I use up $c$ of the allowed splits. Then I repeat the process until I run out of splits or $\beta=1$, in which case I take $c=\min(\alpha/\beta+1,n+1)-1$. I know my try is not correct, as the list $(10, 4, 9)$ for $n=4$ is a counterexample. Any idea? I'm not looking for efficiency details (it can be achieved working with heaps), only for correctness. Answer: If we use $k$ splits on a single element $x$ of the list, the lowest value we can reach is $\dfrac{x}{k+1}$. In turn, we need $\displaystyle k = \left \lceil\frac{x}{v}\right\rceil - 1$ splits for $x$ to reach a certain value $v$. Elements do not affect each other; after all, to reach some maximum $v$ after splitting, each element must reach it individually. So you simply want to find the smallest possible $v$ such that $$\sum_{x \in L}\left\lceil\frac{x}{v}\right\rceil \leq n + |L|$$ You can find $v$ to arbitrary precision by using a binary search using the above check, starting with range $[0, \max L]$. Finding an exact answer is somewhat trickier though.
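A sketch of the answer's binary search, specialized to splits into positive integers as the question's list suggests (so the search is over integer candidate maxima; the function name is made up):

```python
from math import ceil

def min_max_after_splits(L, n):
    """Smallest achievable maximum, assuming splits into positive integers.

    Splitting x into k+1 integer parts each <= v is possible iff
    k >= ceil(x / v) - 1, so v is feasible iff the total piece count
    sum(ceil(x / v)) fits within the n extra pieces we may create.
    """
    def feasible(v):
        return sum(ceil(x / v) for x in L) <= n + len(L)

    lo, hi = 1, max(L)               # feasible(max(L)) always holds
    while lo < hi:                   # binary search on the monotone predicate
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# The counterexample from the question:
best = min_max_after_splits([10, 4, 9], n=4)   # -> 4
```

Here $v=4$ is achievable with the four splits 10 -> 4+3+3 and 9 -> 3+3+3, while $v=3$ would need 9 pieces against the budget of 7.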
{ "domain": "cs.stackexchange", "id": 10095, "tags": "algorithms, optimization, greedy-algorithms" }
Random particles on a grid: effect of increasing density on distance between them
Question: Say I have two boxes which both contain, say, 25 red particles (as shown in the picture). These particles are randomly placed in a 2D grid; one box has total area $A_{1}=20000$ and the other has area $A_{2}=10000$. If the average distance between any two particles before was $d_{1}$, what is the average (expected) distance between any two particles after, $d_{2}$? I have worked it out as follows: since we have vertically compressed the box by a factor of 0.5, we get a factor of $\frac{\sqrt{1.25}}{\sqrt{2}}$ = 0.7905.... of the distance $d_{1}$. However, I am not sure if this is correct, because surely the particles would be closer together in the x-direction as well. Answer: If you have a box of area $A$ containing $N$ particles then the average area per particle, $a$, is: $$ a = \frac{A}{N} \tag{1} $$ If the particles aren't arranged in any special way we expect the environment of any particular particle to be symmetric on average, so let's suppose that each particle on average occupies a circle of radius $r$, then the area of each particle's circle is $a = \pi r^2$. Putting this expression for $a$ in equation (1) we get: $$ \pi r^2 = \frac{A}{N} $$ or with a quick rearrangement: $$ r = \sqrt{\frac{A}{\pi N}} $$ The average separation, call this $d$, will be the distance between the centres of the circles so $d = 2r$ and therefore: $$ d = 2\sqrt{\frac{A}{\pi N}} $$ This is all rather approximate, but the key point is that if the number of particles is constant, as in your case, the important relationship is: $$ d \propto \sqrt{A} $$
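The $d \propto \sqrt{A}$ scaling can be checked numerically for an isotropic rescaling, where every pairwise distance scales exactly by the linear factor (the question's box is compressed in one direction only, which is where this simple rule is only approximate). A quick Monte Carlo sketch:

```python
import random
from math import dist, sqrt

# Halving the area in BOTH directions multiplies every pairwise distance
# by sqrt(1/2) ~ 0.707, so the mean distance scales as sqrt(A).
random.seed(0)

def mean_pairwise_distance(points):
    n = len(points)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(dist(points[i], points[j]) for i, j in pairs) / len(pairs)

pts = [(random.random(), random.random()) for _ in range(25)]
s = sqrt(0.5)                               # shrink area by half, isotropically
pts_scaled = [(s * x, s * y) for x, y in pts]

ratio = mean_pairwise_distance(pts_scaled) / mean_pairwise_distance(pts)
# ratio equals sqrt(0.5), since every distance scales by exactly s
```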
{ "domain": "physics.stackexchange", "id": 14519, "tags": "particle-physics, geometry, distance" }
How to calculate accuracy of an imbalanced dataset
Question: I would like to understand what the accuracy of an imbalanced dataset means. Let's suppose we have a medical dataset and we want to predict disease among the patients. Say, in an existing dataset, 95% of patients do not have the disease and 5% of patients have it. So clearly, it is an imbalanced dataset. Now, assume our model predicts that all 100 out of 100 patients have no disease. Accuracy means = (TP+TN)/(TP+TN+FP+FN) If the model predicts 100 patients do not have the disease, and we are predicting disease among the patients, then true positive refers to disease in a patient and true negative refers to no disease in a patient. In that case accuracy should be (0+100)/(0+100+0+0) = 1. We are going to predict how many patients have the disease, so if we get accuracy 1, does that mean 100% of patients have the disease? I am taking the example from 5 Techniques to Handle Imbalanced Data For a Classification Problem . I am not sure, at the time of accuracy calculation, why they calculate it as (0+95)/(0+95+0+5) = 0.95, if they have already described that their model predicts all 100 out of 100 patients have no disease. I hope I clarified my question. Thank you. Answer: Accuracy is the number of correct predictions out of the number of possible predictions. In many regards, it is like an exam score: you had an opportunity to get $100\%$ of the points and got $97\%$ or $79\%$ or whatever. The class ratio is not a factor. In your example, you had $95$ negative patients and $5$ positive. You predicted $100$ negative patients, meaning that you got $95$ correct and $5$ incorrect for an accuracy of $95\%$. Note that accuracy is a surprisingly problematic measure of performance, and this is true even when the classes are naturally balanced. With imbalance, however, accuracy has the potential to mislead in a way that is not present in many other measures of performance, and your example is a good demonstration of that. 
All your model does is predict the majority class; it does nothing clever. However, your model achieves an accuracy of $95\%$, which sounds like a high $\text{A}$ in school that indicates strong performance.
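A quick sketch of the computation, which also shows how recall exposes the all-negative model that accuracy hides (names are made up for illustration):

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of correct predictions over all predictions."""
    return (tp + tn) / (tp + tn + fp + fn)

# 95 healthy patients, 5 diseased; the model predicts "no disease" for all.
# The 95 healthy patients are true negatives, the 5 diseased patients are
# false negatives; there are no positive predictions at all.
tp, tn, fp, fn = 0, 95, 0, 5

acc = accuracy(tp, tn, fp, fn)       # 0.95, despite a useless model
recall = tp / (tp + fn)              # 0.0: not one diseased patient found
```

The asker's mistake was putting all 100 predictions into TN: the 5 diseased patients the model missed are false negatives, not true negatives, which is why the correct accuracy is 0.95 and not 1.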
{ "domain": "datascience.stackexchange", "id": 11125, "tags": "machine-learning, classification, class-imbalance, imbalanced-learn" }
Calculating errors based on other variables
Question: I am trying to calculate the mass and respective error of a star in kg. I have the numbers in units of $M_{Sun}$, in the form $M_{star}=(a\pm b)$ $M_{Sun}$. Given that I have the mass (with error) of the Sun in kg, in the form $M_{Sun}=(c\pm d)kg$, how can I use this to convert the mass and error of the star to kg? Answer: The mass of the star is the product of two numbers. The two numbers have uncertainties and so does the product: $$p\pm q=(a\pm b)(c\pm d)$$ The uncertainty in the product is given by $$(\frac{q}{p})^2=(\frac{b}{a})^2+(\frac{d}{c})^2$$ where $p=ac$. If $\frac{b}{a} \gt 3\frac{d}{c}$ then it makes little difference if you ignore $d$ and simply use $$p\pm q \approx (a\pm b)c=(ac)\pm (bc)$$
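A small sketch of the propagation formula with made-up numbers (a hypothetical 2.00 +/- 0.10 solar-mass star and a solar mass of 3.00 +/- 0.20 in some arbitrary unit, just to exercise the arithmetic):

```python
from math import sqrt

def multiply_with_uncertainty(a, b, c, d):
    """Product (a +/- b)(c +/- d) -> (p, q): relative errors in quadrature."""
    p = a * c
    q = p * sqrt((b / a) ** 2 + (d / c) ** 2)
    return p, q

p, q = multiply_with_uncertainty(2.0, 0.1, 3.0, 0.2)   # p = 6.0, q = 0.5
```

Here the relative errors are 5% and about 6.7%, combining in quadrature to about 8.3%, so the product carries an absolute uncertainty of 0.5.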
{ "domain": "physics.stackexchange", "id": 64668, "tags": "astrophysics, astronomy, units, error-analysis, statistics" }
What is the temperature of photons in a laser beam?
Question: I can extract the temperature of the CMB from its spectral width, the same as for any other black-body radiation. I guess that, physically, that temperature is the temperature of the object that is emitting the radiation? Say now I have a laser: monochromatic radiation at $\omega_0$ with a linewidth of $\delta \omega$. Can I extract a temperature from this spread? What would its physical significance be? Answer: The thermodynamic concept of the physical quantity temperature requires that the system you are considering is in equilibrium, or at least approximately in equilibrium, as in the assumed local equilibrium of non-equilibrium thermodynamics. This is not the case for a laser beam with a narrow frequency band around a frequency $\omega_0$, which definitely doesn't have a blackbody energy distribution following Bose-Einstein statistics for a given temperature like the cosmic microwave background radiation.
{ "domain": "physics.stackexchange", "id": 46626, "tags": "electromagnetism, photons, laser, thermal-radiation" }
Does the repellence of an insecticide affect its efficacy (measured in insect mortality)?
Question: I am currently working on assessing the efficacy of a suspected insecticide on a particular species of insect. I happen to be looking into any repellence caused by the substance. I test for both effects at the same time by putting the insect in a no-choice bioassay. To be specific, my assay set-up consists of a petri dish, and the test article treated with the test substance is placed on one side of the petri dish. Insect species X is introduced into the petri dish, and every 24 hours, I count and remove the dead ones. All these are repeated for different concentrations of the test substance. What I found is that mortality (proportion of insect X killed) initially increases with concentration, but then decreases after a particular concentration. I also found that repellence increases with the concentration of the test substance. So, what exactly is happening here? Is the repellence of the test substance having an effect on the mortality, such that repellence is causing a decrease in mortality? Answer: You have a good hypothesis: the decrease in mortality is mediated by an increase in repellence. It sounds like you have a measurement of increased repellence, and it's a plausible mechanism (if an insect isn't near an insecticide, it can't be killed by that insecticide). Now you should test it. If you are measuring repellence using a different assay (e.g., a choice assay, where the insect can enter a chamber with the insecticide vs. one without), then you could use as the denominator insects that enter the chamber with the insecticide. Compare how the mortality of insects that enter the chamber with insecticide changes with increasing concentration in this assay against the mortality in your no-choice assay.
{ "domain": "biology.stackexchange", "id": 8839, "tags": "zoology, entomology, behaviour, research-process" }
An identity over the symplectic group
Question: This comes from Woit's book. Let the symplectic group be defined as the subgroup of linear transformations $g$ of $M^* = \mathbb{R}^{2d}$ that satisfy $$ \Omega(gv_1, gv_2) = \Omega(v_1,v_2) $$ for $v_1,v_2\in M^*$. $M^*$ is the dual space of $M$ and $\Omega$ is the corresponding symplectic form on $M^*$. If we want the representation of $g$ on $M$, then it is claimed that $$ g\cdot \Omega(u,\cdot) = \Omega(u,g^{-1}(\cdot)) = \Omega(gu,\cdot)\in M. $$ Then it goes: "Here the first equality uses the definition of the dual representation ..., and the second uses the invariance of $\Omega$". Maybe it's too obvious, but why is the second equality true? How does one prove it? Answer: Using the invariance property of the symplectic form, $$ \Omega(u,g^{-1}(\cdot)) = \Omega(gu,gg^{-1}(\cdot))= \Omega(gu,\cdot) $$
{ "domain": "physics.stackexchange", "id": 83873, "tags": "classical-mechanics, differential-geometry" }
Computer Architecture: 3-level RAM hierarchy
Question: In all computer architecture books we study that cache memory can be divided into 3 levels (L1, L2 and L3) and that it is very beneficial to do so. Why don't we use the same approach in the case of main memory (RAM)? Is there any particular reason that we avoid this? Answer: Cache memory levels are inherently a "subdivision" of main memory (RAM). To speed up access to RAM, the cache was created using the notion of "the principle of locality". Frequently in programs, memory is accessed in locations that are close (local) to previously accessed locations, so it benefits to keep sections of data stored in nearby cache instead of going all the way to main memory. Also, cost has historically been a factor: cache memory is more expensive per byte than main memory, which in turn is more expensive than hard-drive storage. Keep in mind, the landscape of computer memory models is always changing, and it is likely that other methods will be used in the future to speed up memory access. I think the current trend is to move away from mechanical storage devices with moving parts, hence the SSDs.
{ "domain": "cs.stackexchange", "id": 18392, "tags": "computer-architecture, memory-hardware" }
Stability of N-order Thiran delay filter when required passband delay is N+1
Question: In this lecture note, on page 7 (or page 111, the same page), it is said that the Thiran allpass delay filter of order N with passband delay N+1 reproduces samples with zero error. I find it hard to believe, but am having a hard time proving or disproving it. Is this true? And the note also says that when the required passband delay is N+1, the filter is unstable. Again, I am finding it difficult to prove or disprove. Is this true? Answer: I assume you refer to "It is seen that the error is zero for integral delay values $N-D=-1$ or $N-D=0$." This appears to be a typo. The error is zero for $D = N-1$ and $D = N$. At these values of $D$ the delay is an integer number of samples. For $D = N$ the poles are all at $z=0$ and the allpass chain turns into a simple FIR delay line, i.e. $a_0 = 1$ and $a_k = 0, k \neq 0$. For $D = N-1$ the filter is marginally stable and we have $a_0 = a_1 = 1$ and $a_k = 0, k > 1$. That's easy enough to see if you put $D=N$ into table 3.1. As for "the note also says that when the required passband delay is N+1, the filter is unstable": it does not say that. It says it's stable for $D > N-1$ and marginally stable at $D = N-1$. For stability at $D > N$ you should refer back to Thiran's original paper. The proof for $N-1 < D \leq N$ was done "experimentally".
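Both claims can be probed numerically. The sketch below (my own, not from the note) evaluates the standard closed-form Thiran denominator $a_k = (-1)^k \binom{N}{k} \prod_{n=0}^{N} \frac{D-N+n}{D-N+k+n}$ and checks that $D = N$ collapses to a pure FIR delay and that the group delay near DC equals $D$ for a fractional delay:

```python
import cmath
from math import comb

def thiran_denominator(N, D):
    """Closed-form denominator a_0..a_N of an order-N Thiran allpass
    approximating a delay of D samples (a_0 = 1).  Note the naive product
    hits a removable singularity at the marginal point D = N - 1, which
    this sketch does not handle."""
    a = [1.0]
    for k in range(1, N + 1):
        prod = 1.0
        for n in range(N + 1):
            prod *= (D - N + n) / (D - N + k + n)
        a.append((-1) ** k * comb(N, k) * prod)
    return a

def allpass_group_delay(a, w, dw=1e-6):
    """Numerical group delay at frequency w of H(z) = z^{-N} A(1/z) / A(z)."""
    N = len(a) - 1
    def phase(w):
        A = sum(ak * cmath.exp(-1j * w * k) for k, ak in enumerate(a))
        B = sum(ak * cmath.exp(-1j * w * (N - k)) for k, ak in enumerate(a))
        return cmath.phase(B / A)
    return -(phase(w + dw) - phase(w - dw)) / (2 * dw)
```

At $D = N$ all higher coefficients vanish (every product contains the factor $D - N + 0 = 0$), confirming the pure-delay claim above.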
{ "domain": "dsp.stackexchange", "id": 11130, "tags": "infinite-impulse-response, digital-filters, group-delay, allpass" }
Cancellation in C++
Question: I am trying to figure out what the problem with the following expression in C++ is: y=std::log(std::cosh(x)); My first intention was that there might occur cancellation due to the cosh(x) part, because it is defined as $\frac{e^x+e^{-x}}{2}$ and the computation of $e^x$ with double x results in cancellation. Am I on the right track? Or is there something different that causes cancellation? Answer: Let me summarize what I wrote in the comments. It is not a complete answer, since the intervals on which to apply each formula still need to be investigated. It is enough to assume $x\geq0$, since $\cosh(x)$ is even. The types of cancellation that occur when evaluating, in finite precision floating point (FP2), the expression $$\log(\cosh(x))$$ are: When $x$ is large, in which case $\cosh(x)$ makes it even larger but $\log$ would bring the value back down. FP2 is more sparse for larger values. So, one should prevent $\cosh(x)$ making the value large. When $x$ is small, $\cosh(x)$ is close to $1$. This is cool on its own. FP2 is densest near $1$, but then $\log$ becomes close to $0$. In this case it is better to approximate the function $\log(1+x)$ and the function $\cosh(x)-1$ and compose those. So, for $x$ large one can approximate $\cosh(x)$ by $\frac{e^x}{2}$. Composing with $\log(x)$ one gets $x-\log(2)$. For $x$ small we can write $\cosh(x)=1+\frac{(e^x-1)^2}{2e^x}$ and compute $\log(\cosh(x))$ by composing $\log(1+x)$ and $\frac{(e^x-1)^2}{2e^x}$. The latter would be computed by approximating $e^x-1$ directly and not by evaluating $e^x$ and subtracting $1$. A C++ implementation could compute $\log(1+x)$ using std::log1p(x) and $e^x-1$ using std::expm1(x). Finally, one needs to investigate what would be good values $x_1,x_2$ such that one would use the last computation on the interval $[0,x_1]$, the computation std::log(std::cosh(x)) on $(x_1,x_2]$ and x-std::log(2) on $[x_2,\infty)$.
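The piecewise scheme described above can be sketched in Python for concreteness (math.expm1 and math.log1p mirror std::expm1 and std::log1p); the thresholds 0.5 and 20 are illustrative placeholders for the $x_1, x_2$ that still need to be investigated:

```python
import math

def log_cosh(x):
    """Cancellation-resistant log(cosh(x)); thresholds are illustrative."""
    x = abs(x)                        # cosh is even
    if x > 20.0:
        return x - math.log(2.0)      # large x: cosh(x) ~ e^x / 2
    if x < 0.5:
        e = math.expm1(x)             # e^x - 1 without cancellation
        # small x: cosh(x) - 1 = (e^x - 1)^2 / (2 e^x), fed into log1p
        return math.log1p(e * e / (2.0 * math.exp(x)))
    return math.log(math.cosh(x))     # intermediate range: direct
```

For tiny $x$ the result tracks the Taylor value $x^2/2$, which the naive expression would lose entirely once $\cosh(x)$ rounds to $1$.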
{ "domain": "cs.stackexchange", "id": 16955, "tags": "numerical-algorithms, numerical-analysis, c++" }
Angular momentum and torque in gyroscope
Question: In my textbook (Kleppner), the principle of a gyrocompass is given to be "A flywheel free to rotate about two perpendicular axes tends to orient its spin axis parallel to the axis of rotation of the system." While explaining the working, they do a step that I don't understand. This is the first part of explanation which I understand. I get that (moment of inertia)*(angular acceleration) will make a contribution to the rate of change of angular momentum along AB. Now this is the second part of their explanation. They explain that the spin angular momentum that is rotating with omega is also trying to have a component in the total angular momentum along AB This is where I get confused. In my mind, the rotating spin angular momentum can never have a component on AB. It will always stay perpendicular to AB and will not contribute in the change in total angular momentum along AB. I think I am missing something here. All I know is that if TORQUE ALONG A DIRECTION IS ZERO, ANGULAR MOMENTUM WILL NOT CHANGE ALONG THAT DIRECTION WHAT I DON'T KNOW IS THAT WHAT WILL HAPPEN IF THE DIRECTION ITSELF IS MOVING. I would highly appreciate answers that are not extremely advanced. I know RIGID BODY DYNAMICS till EULER'S EQUATIONS Answer: In my mind, the rotating spin angular momentum can never have a component on AB. It will always stay perpendicular to AB and will not contribute in the change in total angular momentum along AB. I think I am missing something here. All I know is that if TORQUE ALONG A DIRECTION IS ZERO, ANGULAR MOMENTUM WILL NOT CHANGE ALONG THAT DIRECTION WHAT I DON'T KNOW IS THAT WHAT WILL HAPPEN IF THE DIRECTION ITSELF IS MOVING. You have the right doubt, and when in doubt, best to revert to the fundamentals to seek resolution. Let us recall the basic principles: Rate of change of angular momentum in an inertial frame is equal to the torque of external forces (assuming torque of internal forces is zero). 
A vector can be changed by changing the magnitude or the direction. Keeping the above two principles in mind, we first choose the inertial frame to be the lab frame (in which the entire assembly is rotating about the vertical with angular speed $\Omega$). Let us station the origin of the lab frame at the center of the gyrocompass which is clearly stationary in the lab frame. Next, although the direction $AB$ is changing in space, imagine a fixed moment in time $\ t = t_{o}$. At $\ t=t_o$, $AB \ $ will point in a fixed direction in space. The torque equation (principle 1 above) then tells us that the torque in direction $AB$ at $\ t = t_o$ is equal to the instantaneous rate of change of angular momentum in the direction of $AB$. Mathematically, this means calculating $\frac{\mathrm{d}\vec{L}}{\mathrm{d}t}$ at $t=t_o$ and taking its projection along $AB$. As already explained by Kleppner-Kolenkow, the component of torque on the system along axis $AB$ in the lab frame about our chosen origin is zero (they are assuming that the center of mass of the gyroscope is at its geometrical center and no friction on the axle $AB$). So, the only task at hand is to calculate $\frac{\mathrm{d}\vec{L}}{\mathrm{d}t}$ at $t=t_o$ along $AB$. Now, to do the calculation for $\big(\frac{\mathrm{d}\vec{L}}{\mathrm{d}t}\big)_{t=t_o}$, note that the "spin angular momentum" has a vertical as well as a horizontal component. But the horizontal component is precessing about the vertical with angular velocity $\Omega$ (because the entire assembly is rotating about the vertical)! This implies that the "direction" of the horizontal component of "spin angular momentum" in the lab frame is constantly changing. By principle 2 (stated above), this precession leads to a contribution in the expression for $\big(\frac{\mathrm{d}\vec{L}}{\mathrm{d}t}\big)_{t=t_o}$. 
I'd now recommend (re-)reading the previous section in the same chapter of this book (probably goes by the name "gyroscope precession"; also check out this for visualization). The essence of that section is that in the case of purely precessional motion $-$ imagine a vector, $\vec{V}$, of fixed length, spinning about a fixed axis with instantaneous angular velocity $\vec{\omega} \ -$ we have $$ \frac{\mathrm{d}\vec{V}}{\mathrm{d}t} = \vec{\omega} \times \vec{V}$$ In this particular case, $\vec{\omega} = \Omega \ \hat{k}$, and $\vec{V}$ is the horizontal component of the "spin angular momentum" (because remember the entire assembly is spinning about the vertical and so the horizontal component of the "spin angular momentum" is precessing too). The only minor caveat here is that $\vec{V}$ might change in magnitude $-$ however, this contributes nothing in the direction $AB$ because $\vec{V}$ is directed perpendicular to $AB$. Clearly then, the precessional contribution in the direction $AB \ $ is given by $\Omega L_s \sin \theta $, and happens to be the only other contribution to $\big(\frac{\mathrm{d}\vec{L}}{\mathrm{d}t}\big)_{t=t_o}$ along $AB$ apart from the usual $I_{\perp}\ddot{\theta} \ - $ and this is exactly what Kleppner-Kolenkow are claiming. Thus, we have, $$ \boxed{ \bigg(\frac{\mathrm{d}\vec{L}}{\mathrm{d}t}\bigg)_{t=t_o}\cdot\vec{e}_{AB} = I_{\perp}\ddot{\theta} + \Omega L_s \sin \theta = 0 }$$ where $\vec{e}_{AB}$ is a unit vector in the direction $AB$. While this heuristically proves the torque equation, I'd still suggest using Euler's equations or explicitly writing out the components of $\vec{L}$ in the lab frame and taking time derivatives in order to not miss other contributions in more complex setups. Besides this, as explained by others, friction damps this (pendulum-like) oscillatory motion in $\theta$, eventually aligning the axis of the gyroscope with the axis about which the platform is spinning ($\theta = 0$). 
Note: This problem just illustrates the principle of a gyrocompass $-$ for an actual gyrocompass device the spinning platform is the earth. Hope this helps.
{ "domain": "physics.stackexchange", "id": 99086, "tags": "angular-momentum, rotational-dynamics, torque, rigid-body-dynamics, gyroscopes" }
Simulating a scale, balancing weight from lists
Question: I have simulated balancing a scale by having each side total the same weight. The scale has two weights (left and right side) and some spare weights: weights = [3, 4] spare_weights = [1, 2, 7, 7] A maximum of two weights can be used from the spare weights to balance the scale. The spare(s) may be on a single side or split between both. The output should be the minimum number of weights needed to balance the scale. import itertools weights = [3, 4] spare_weights = [1, 2, 7, 7] c = enumerate([subset for l in range(0, 3) for subset in itertools.combinations(spare_weights, l)]) cc = cc = list(itertools.combinations(c, 2)) left, right = weights qualifying_combos = [] for c in cc: l, r = c if sum(l[1]) + left == sum(r[1]) + right: qualifying_combos.append(list(l[1]) + list(r[1])) elif sum(l[1]) + right == sum(r[1]) + left: qualifying_combos.append(list(l[1]) + list(r[1])) print(min(qualifying_combos, key=len)) output: [1] Is there a more elegant way to code this? Answer: Write functions. Make this a balance_scale function, that returns the minimum. Your variable names are pretty poor, single letter variable names leave people guessing at what they mean. You should change cc = cc = ... to just one assignment. Since you can use a maximum of two spare weights the solution can be a lot simpler than you've produced. \$W_1\$ is the smaller weight in weights and \$W_2\$ is the larger. If \$W_1 = W_2\$ then you return an empty list. We can determine what weight is needed when only one weight can be used. \$S = W_2 - W_1\$ Therefore if \$S\$ is in spare_weights then we can return \$S\$. For each \$S_1\$ in spare_weights we can determine the weight needed, \$S_2\$. \$S_2 = |W_2 - W_1 - S_1|\$ We take the absolute as this weight can be on either side of the scale. If the non-absolute value of \$S_2\$ is positive then we add it to \$W_1 + S_1\$ if it's negative then we add it to \$W_2\$. 
If we add \$0\$ twice to spare_weights, then we can see that (4), (3) and (2) are all roughly the same equation. We don't need to check if \$S_1\$ and \$S_2\$ both exist in spare_weights, as the only time they are the same is if \$W_1 = W_2\$, and so they would both be 0. We however have to assign \$S_1\$ to 0 first. def balance_scale(scale, spares): w1, w2 = sorted(scale) spare_set = set(spares) | {0} for s1 in [0] + spares: s2 = abs(w2 - w1 - s1) if s2 in spare_set: return [i for i in [s1, s2] if i] return None print(balance_scale([4, 3], [1, 2, 7, 7])) [1]
{ "domain": "codereview.stackexchange", "id": 36868, "tags": "python, python-3.x" }
How do fields transform under special conformal transformations?
Question: A Question in Classical Field Theory $\underline{\text{Assumption 1}}$: The definition of a transformation specifies how both the coordinates and the fields transform: They are namely $(1$-$1)$ and $(1$-$2)$. $$ x'^{\mu}=x'^{\mu}(x) \tag{1-1}$$ $${\Phi'}(x')=\mathcal{F}(\Phi(x)) \tag{1-2}$$ I've seen the definitions for translations, Lorentz transformations, and dilatations. For example, the definition for translation is given as follows: $$x'^{\mu} = x^{\mu}+a^{\mu} \tag{i-1}$$ $$ \Phi'(x)=\Phi(x-a) \tag{i-2}$$ However, after considerable searches on various online sources, I have only been able to find $(1$-$1)$ for special conformal transformations and not $(1$-$2)$. $${x'}^{\mu}=\frac{x^{\mu}-b^{\mu}x^2}{1-2b\cdot x + b^2 x^2} \tag{ii-1}$$ Question: How do fields transform under special conformal transformations? Duplicate(s) in PhySE : D1: How tensor fields transform under special conformal transformation? D1 is unanswered. Reading the following message is not necessary to answer the aforementioned question. My Thoughts I have read Francesco's book [1] on this topic. But I'm unable to follow his arguments in sections $4.1$ and $4.2$ of his book. In section $4.1$, he finds the generators of the conformal group assuming that the fields transform trivially ($\Phi'(x')=\Phi(x)$)$^\mathbf{*}$, which are provided in $(4.18)$ of the book. He uses these generators to determine the conformal algebra in $(4.19)$. $^\mathbf{*}$ This goes against Assumption 1. For example, if we consider Lorentz transformations, we know that $\Phi'(x') = \Phi(x)$ is true only for scalar fields. Any help would be greatly appreciated. Just keep in mind that for the purposes of this question, the relevant field of interest is classical field theory and not quantum field theory. 
References: Francesco P., Mathieu P., Senechal D., Conformal field theory Answer: The fields transform under finite conformal transformations as$^1$ $$ \Phi^a(x') \mapsto {\Phi^a}'(x) = \Omega(x')^\Delta\,D(R(x'))^{\phantom{b}a}_b \,\Phi^b(x')\,. \tag{1}\label{main} $$ as given in equation $(55)$ of $[1]$. Let's break it down: $\Delta$ is the conformal dimension of $\Phi$. $\Omega$ is the conformal factor of the transformation. $D$ is the spin representation of $\Phi$. $R$ is the rotation Jacobian of the transformation. So let's compute these things. The spin and the conformal dimension $\Delta$ are given. The first thing we have to look at is the Jacobian. $$ \frac{\partial x^{\prime \mu}}{\partial x^\nu} = \Omega(x') R^\mu_{\phantom{\mu}\nu}(x')\,. $$ This implicitly defines both $\Omega$ and $R$ and it is not ambiguous because we require $R \in \mathrm{SO}(d)$, namely $$ R^{\mu}_{\phantom{\mu}\nu} \,\eta^{\nu\rho}\,R^{\lambda}_{\phantom{\lambda}\rho} \,\eta_{\lambda\kappa}= \delta^{\mu}_{\phantom{\mu}\kappa}\,. $$ You can immediately see that for the Poincaré subgroup of the conformal group $\Omega(x')= 1$, whereas for dilatations $\Omega(x') = \lambda$ and for special conformal transformations $$ \Omega(x') = \frac{1}{1+2(b\cdot x') + b^2 {x'}^2}\,.\tag{2}\label{omega} $$ This can be proven with a bit of algebra. Using your definition of SCT $$ {x'}^\mu = \frac{x^\mu - b^\mu x^2}{1-2(b\cdot x) + b^2 {x}^2}\,, $$ one can check $$ \frac{\partial x^{\prime \mu}}{\partial x^\rho} \eta_{\mu\nu} \frac{\partial x^{\prime \nu}}{\partial x^\lambda} = \frac{\eta_{\rho\lambda}}{(1-2(b\cdot x) + b^2 x^2)^2}\,. $$ That means that the Jacobian is an orthogonal matrix up to a factor, which is the square root of whatever multiplies $\eta_{\rho\lambda}$. Then we have to re-express that as a function of $x'$. After some algebra again one finds that it suffices to change the sign of the term linear in $b$. Finally, how does one compute $R$? 
Well, it's just the Jacobian divided by $\Omega$. In the case of special conformal transformations one has (there might be mistakes, redo it for safety) $$ R^{\mu}_{\phantom{\mu}\nu} = \delta^\mu_\nu + \frac{2 b_\nu x^\mu - 2 b^\mu (b_\nu x^2+ x_\nu - 2 (b\cdot x) x_\nu) -2b^2 x^\mu x_\nu }{1-2b\cdot x +b^2 x^2}\,, $$ which, as before, needs to be expressed in terms of $x'$. If you are interested in $\Phi$ scalar then $D(R) = 1$ and you can just plug \eqref{omega} into \eqref{main} to obtain the transformation. If you want to consider also spinning $\Phi$ then it's not much harder. For spin $\ell=1$ the $D$ is just the identity, namely $$ D(R)^{\phantom{\nu}\mu}_\nu = R^{\phantom{\nu}\mu}_\nu\,. $$ For higher spins one just has to take the product $$ D(R)^{\phantom{\nu_1\cdots \nu_\ell}\mu_1\cdots \mu_\ell}_{\nu_1\cdots \nu_\ell} = R^{\phantom{\nu_1}\mu_1}_{\nu_1}\cdots R^{\phantom{\nu_\ell}\mu_\ell}_{\nu_\ell}\,. $$ Again, by plugging these definitions in \eqref{main} you obtain the desired result. $\;[1]\;\;$TASI Lectures on the Conformal Bootstrap, David Simmons-Duffin, 1602.07982 $\;{}^1\;\;$The way the transformations are written in the lecture notes linked above differs a bit from Di Francesco Mathieu Sénéchal. The difference is that Di Francesco et al. make an active transformation $x \to x'$ with $$ \Phi(x) \mapsto \Phi'(x') = \mathcal{F}(\Phi(x))\,, $$ while David Simmons Duffin makes essentially the inverse transformation $x' \to x$ $$ \Phi(x') \mapsto \Phi'(x) = \mathcal{F}^{-1}(\Phi(x'))\,. $$ That is why in the above discussion the indices of $R^\mu_{\phantom{\mu}\nu}$ get swapped when passed inside $D$ as $D(R) = R^{\phantom{\nu}\mu}_{\nu}$. And that's also why we get a factor $\lambda^\Delta$ rather than $\lambda^{-\Delta}$ as Di Francesco et al. This is all consistent as long as it is clear what one is doing.
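The statement that the SCT Jacobian is orthogonal up to the factor $\Omega$ is easy to check numerically; below is an illustrative plain-Python sketch (Euclidean metric, $d = 2$, finite-difference Jacobian; my own, not from the cited references):

```python
def sct(x, b):
    """Special conformal transformation of x; also return the denominator,
    whose inverse is the local scale factor of the Jacobian."""
    x2 = sum(xi * xi for xi in x)
    b2 = sum(bi * bi for bi in b)
    bx = sum(bi * xi for bi, xi in zip(b, x))
    denom = 1.0 - 2.0 * bx + b2 * x2
    return [(xi - bi * x2) / denom for xi, bi in zip(x, b)], denom

def jacobian(x, b, h=1e-6):
    """Central-difference Jacobian J[i][j] = d x'_i / d x_j."""
    d = len(x)
    J = [[0.0] * d for _ in range(d)]
    for j in range(d):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, _ = sct(xp, b)
        fm, _ = sct(xm, b)
        for i in range(d):
            J[i][j] = (fp[i] - fm[i]) / (2.0 * h)
    return J
```

Checking $J^{T}J$ at a sample point shows it is proportional to the identity, with proportionality constant equal to the squared scale factor, which is precisely the orthogonal-up-to-a-factor property used above.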
{ "domain": "physics.stackexchange", "id": 65955, "tags": "field-theory, conformal-field-theory, classical-field-theory" }
How to implement a Fourier Convolution layer in keras?
Question: I'm currently investigating the paper FCNN: Fourier Convolutional Neural Networks. The main contribution of the paper is that CNN training is entirely shifted to the Fourier domain without loss of effectiveness. The proposed architecture looks as follows: The authors state that the implementation was done in keras; however, it is not publicly available. I know I can define a Fourier transformation in the following way: model.add(layers.Lambda(lambda v: tf.real(tf.spectral.rfft(v)))) But this is not a Fourier Convolution, right? How should I go from here? Answer: An FFT-based convolution can be broken up into 3 parts: an FFT of the input images and the filters, a bunch of element-wise products followed by a sum across input channels, and then an IFFT of the outputs (Source). Or as it is written in the paper: So, for a Fourier Convolution Layer you need to: Take the input layer and transform it to the Fourier domain: input_fft = tf.spectral.rfft2d(input) Take each kernel and transform it to the Fourier domain: weights_fft = tf.spectral.rfft2d(layer.get_weights()) Note: The Fourier domain "images" for the input and the kernels need to be of the same size. Perform element-wise multiplication between the input's Fourier transform and the Fourier transform of each of the kernels: conv_fft = keras.layers.Multiply(input_fft, weights_fft) Perform an inverse Fourier transformation to go back to the spatial domain: layer_output = tf.spectral.irfft2d(conv_fft) Note: I used pseudo-code; it will probably need some tuning to actually work.
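The reason the element-wise product works at all is the convolution theorem: multiplication in the Fourier domain corresponds to circular convolution in the signal domain. This can be demonstrated in a few lines of plain Python, independent of keras (1-D, naive $O(n^2)$ DFT, purely illustrative):

```python
import cmath

# Naive O(n^2) DFT and inverse DFT, enough to illustrate the idea.
def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

# Elementwise product in the Fourier domain ...
def circ_conv_fft(x, h):
    return [c.real for c in idft([a * b for a, b in zip(dft(x), dft(h))])]

# ... equals circular convolution in the signal domain.
def circ_conv_direct(x, h):
    n = len(x)
    return [sum(x[k] * h[(m - k) % n] for k in range(n)) for m in range(n)]
```

This is also why the Fourier-domain "images" of input and kernels must have the same size: spatial kernels have to be zero-padded to the input size before transforming.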
{ "domain": "datascience.stackexchange", "id": 4277, "tags": "keras, cnn" }
Does the density parameter change over time?
Question: I am aware that at present the density parameter has a value very close to one. Does this parameter change over time, and if so, how does that affect the fate of the universe, in terms of open/closed and accelerating/decelerating? Answer: It changes away from one, unless it is exactly one, in which case it doesn't change. So, if it is very close to one today, it would have been orders of magnitude closer to one in the early universe. Therefore chances are it is actually exactly one. In some other cosmological models it is exactly one based on the model topology.
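To make "changes away from one" concrete: for a toy universe containing only matter and curvature, the Friedmann equation can be rearranged to $\Omega(a)^{-1} - 1 = (\Omega_0^{-1} - 1)\,a$, where $a$ is the scale factor ($a = 1$ today). A small illustrative sketch (the model choice and function name are mine, not from the answer):

```python
def omega_matter(a, omega0):
    """Density parameter at scale factor a (a = 1 today) for a universe
    with only matter and curvature: 1/Omega(a) - 1 = (1/omega0 - 1) * a."""
    return 1.0 / (1.0 + (1.0 / omega0 - 1.0) * a)
```

Evaluating it shows the two behaviours described above: $\Omega = 1$ is a fixed point, any nearby value is driven toward 1 going back in time and away from 1 going forward.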
{ "domain": "physics.stackexchange", "id": 49831, "tags": "cosmology, universe, space-expansion, density" }
Product of the entries of the product of two matrices
Question: Let $A=(a_{ij})$ and $B=(b_{ij})$ be $n \times n$ integer matrices. Then $AB=(\Sigma_{k=1}^n a_{ik}b_{kj})_{ij}$. I am interested in finding the most efficient algorithm for computing the product of the entries of $AB$, $\Pi_{i,j=1}^n (\Sigma_{k=1}^n a_{ik}b_{kj})$. Certainly, the most straightforward way of doing this (by using the above formula) is not the most efficient way of doing this, since there are algorithms like Strassen's algorithm which calculate all of the entries of the matrix $AB$ faster than simply applying the definition $AB=(\Sigma_k a_{ik}b_{kj})_{ij}$ to each entry. Are there other efficient ways to compute the above product besides using algorithms like Strassen's algorithm? Extra: What about computing the sum of the entries of $AB$, $\Sigma_{i,j,k=1}^n a_{ik}b_{kj}$? Answer: You are asking two questions. I will only answer the second one. Let $1$ be the all ones column vector of dimension $n$. The sum of all entries of $AB$ is $1^TAB1 = (1^TA)(B1)$, which gives an $O(n^2)$ algorithm: compute the vectors $1^TA$ and $B1$, each in $O(n^2)$, and then compute their inner product in $O(n)$. This corresponds to the identity $$ \sum_{i,j,k=1}^n a_{ik} b_{kj} = \sum_{k=1}^n \left(\sum_{i=1}^n a_{ik}\right) \left(\sum_{j=1}^n b_{kj}\right). $$ As for the product of all entries, I suspect you can't do better than the trivial algorithm which first computes the entire product matrix, and runs in time $O(n^\omega)$.
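The $O(n^2)$ scheme in the answer is short to implement; a sketch in plain Python (names are mine):

```python
def sum_of_product_entries(A, B):
    """Sum of all entries of A @ B in O(n^2) via (1^T A)(B 1)."""
    n = len(A)
    col_sums_A = [sum(A[i][k] for i in range(n)) for k in range(n)]  # 1^T A
    row_sums_B = [sum(B[k][j] for j in range(n)) for k in range(n)]  # B 1
    return sum(col_sums_A[k] * row_sums_B[k] for k in range(n))
```

Comparing against the brute-force $O(n^3)$ computation of $AB$ followed by summation confirms the identity.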
{ "domain": "cs.stackexchange", "id": 9134, "tags": "algorithms, complexity-theory, time-complexity" }
Could you use a balloon to show the drag from the gas/atmosphere of something orbitting near the ISS?
Question: If a balloon were floating next to the International Space Station (ISS), how big, light, and/or dense would it need to be such that the gas/atmosphere at that distance from the earth's surface would cause it to visibly accelerate relative to the ISS? I'll say provisionally that the acceleration would need to be at least $10^{-3}\text{ m }\text{s}^{-2}$. I looked all over just now, trying to find out what the density of the atmosphere is at $400\text{ km}$, but all I could find were figures for the lower atmosphere and, for the thermosphere (which is what the ISS orbits inside), log graphs that I can't decipher and equations that I am not smart enough to be able to use. https://space.stackexchange.com/a/38130/40252 has a graph (is it reliable?) which, if accurate, implies that the density at four hundred kilometers is about one trillionth of a kilogram per cubic meter. This figure is conveniently a round number, and is close to one trillionth of the density of air at sea level (Wikipedia says: At $101.325$ kPa (abs) and $15$°C, air has a density of approximately $1.225$ kg m$^{-3}$.) Answer: I used some approximate numbers to do an approximate calculation. Assuming that: speed of ISS = 8000 m s$^{-1}$ Density of air outside ISS = $10^{-12}$ kg m$^{-3}$ Drag coefficient = 1 The drag force on the balloon is one half * density * speed$^2$ * drag coefficient * cross-sectional area So, I put the numbers into that formula. A is in square meters and m is in kilograms. 1/2 * $10^{-12}$ kg m$^{-3}$ * (8000 m s$^{-1}$)$^2$ * 1 * A = m * $10^{-3}\text{ m}\ \text{s}^{-2}$ According to the result, to get the required acceleration, the balloon has to have a mass/area ratio of around 0.032 kg m$^{-2}$. That's thirty-two grams per square meter of cross-sectional area. So, you could use a balloon to show the drag if the balloon had a ratio of mass/cross-sectional area of 0.032 in SI units AND provided that the balloon was built of a material strong enough. 
There might be an issue with feasibility as any material strong enough to survive the pressure differential may not be light enough to have this kind of mass/area.
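The arithmetic above can be packaged as a one-line formula, solving $\tfrac12 \rho v^2 C_d A = m a$ for $m/A$ (function and variable names are mine):

```python
def mass_per_area(rho, v, c_d, a_req):
    """Mass-to-area ratio at which drag yields the required acceleration:
    (1/2) * rho * v^2 * C_d * A = m * a  =>  m/A = rho * v^2 * C_d / (2 a)."""
    return 0.5 * rho * v ** 2 * c_d / a_req
```

Plugging in the answer's numbers (density $10^{-12}$ kg m$^{-3}$, speed 8000 m s$^{-1}$, $C_d = 1$, required acceleration $10^{-3}$ m s$^{-2}$) reproduces the quoted 0.032 kg m$^{-2}$.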
{ "domain": "physics.stackexchange", "id": 78347, "tags": "orbital-motion, atmospheric-science, drag, satellites, space-travel" }
Hamiltonian and Thermal Energy
Question: I was reviewing my lessons when I read this definition of Thermal Energy: The sum of potential energy and kinetic energy is equal to Thermal Energy But isn't this the same as the Hamiltonian? So, instead of saying "taking the Hamiltonian of ..." can I say "taking the Thermal Energy of ..."? Sorry for my ignorance. Answer: If you integrate the Gibbs equation $dU=TdS+ \sum_k Y_kdX_k$ with the assumption that $X_k$ and $U$ are extensive and $Y_k$ are intensive, then the function $U(S,X_k)$ is first-order homogeneous and it follows that $U=TS+\sum_kY_kX_k$. This can be interpreted as the total internal energy of a body containing two terms, $TS$ and $\sum_kY_kX_k$. The former, $TS$, is the internal thermal energy; the latter is the internal everything else, reflecting the various interactions the system may participate in, such as mechanical $X_1=V, Y_1=-p$, electrical $X_2=q_e, Y_2=\phi_e$, chemical, magnetic, etc. This has nothing to do with the Hamiltonian, for $U$ is a static (timeless) energy while the Hamiltonian is a dynamic quantity, so much so that even if $H$ is explicitly time independent it describes the time evolution of other dynamic quantities.
{ "domain": "physics.stackexchange", "id": 91750, "tags": "thermodynamics, energy, hamiltonian" }
Guide to implement our own algorithm for mapping/localization?
Question: Hello. Currently I am trying to run several examples of gmapping and amcl from various robots. As I am still new to ROS, is there any guide to making our own algorithm for mapping/localization? Thank you Originally posted by nobinov on ROS Answers with karma: 66 on 2017-05-12 Post score: 0 Answer: If your goal is to implement your own mapping/localization algorithm, to my knowledge there is no real easy recipe, in the sense that you need some background in areas including: Motion modelling Sensor modelling Probability theory That said, there is good material available on the web. A good start would be to go check out the free Udacity course cs373 Artificial Intelligence for Robotics. It gives an implementation-driven approach to a range of localization/mapping/path planning/control algos. Good to get some hands-on intuition on complex concepts. After that, you could follow up with either (or both): the online course videos on localization and mapping from Cyrill Stachniss (available in a playlist on his Youtube channel). the book Probabilistic Robotics by Dieter Fox, Sebastian Thrun, and Wolfram Burgard (not free) Both of those tackle localization, mapping and also control from the ground up, with in-depth theory, generic algos and examples. They are more of a mouthful than the Udacity course, sure, but they also don't compare in the level of understanding they provide. If your goal is to implement (and understand!) custom localization and mapping based on concepts like EKF, UKF, particle filter, histograms or any Bayesian approach, it is a very good read. On a side note, the online videos and book share a lot of material, so the main differences are the level of detail (+1 for the book) and the human-friendliness (+1 for the videos, if you have the patience to go through the hours of lessons). I'm sure there is a lot more good material to start with, but these were a good start in my case. 
Cheers Originally posted by serial with karma: 16 on 2017-05-29 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 27881, "tags": "ros, navigation, gmapping, amcl" }
Cumulative transition matrix Markov Chain simulation
Question: I have a cumulative transition matrix with probabilities for all the possible states from 1 to 5. Now the algorithm for simulating future states is as follows: the initial state is selected randomly, and a random value between 0 and 1 is then produced by a uniform random number generator. To determine the next state in the Markov process, the value of the random number is compared with the elements of the *i*-th row of the cumulative transition matrix determined from the preceding state. If the value of the random number is greater than the cumulative probability of the preceding state but less than or equal to the cumulative probability of the succeeding state, the succeeding state is chosen to represent the next state. cum_trans <- matrix(c(0.1686747,0.4337349,0.6265060,0.7289157,1, 0.2765957,0.5053191,0.6648936,0.7659574,1, 0.2518519,0.4740741,0.6518519,0.7407407,1, 0.1911765,0.4705882,0.6617647,0.7941176,1, 0.2096774,0.4892473,0.6827957,0.7419355,1 ), nrow = 5, ncol = 5, byrow = TRUE) [,1] [,2] [,3] [,4] [,5] [1,] 0.1686747 0.4337349 0.6265060 0.7289157 1 [2,] 0.2765957 0.5053191 0.6648936 0.7659574 1 [3,] 0.2518519 0.4740741 0.6518519 0.7407407 1 [4,] 0.1911765 0.4705882 0.6617647 0.7941176 1 [5,] 0.2096774 0.4892473 0.6827957 0.7419355 1 state<-matrix(NA, nrow=1,ncol=25) i<-sample(1:5,1) state[1]<-i for (k in 2:25){ rr<-runif(1) state[k]<-findInterval(rr,cum_trans[i,])+1 i<-state[k] } state So I'm not sure if I implemented the algorithm correctly. Can someone suggest modifications or improvements? Answer: I see nothing wrong with your implementation, in the sense it does exactly what you described in plain English, and somewhat efficiently. Here are however a few pieces of advice, mostly about improving your coding standards. Add some spaces to your code to make it more readable. Have a look at the body of basic functions (e.g. lm) to see what clean code should look like. 
In particular, spaces after commas and spaces on both sides of <-, =, and all binary operators. Replace your hardcoded values (5 and 25) with variables. 5 can be derived from the number of rows in cum_trans and 25 is an input to your process. By assigning it to a variable (or a function input), and reusing that variable throughout your code, your code becomes easier to read and maintain, and more robust to changes. Imagine for example what would be required of you if you wanted to change the size of the transition matrix or wanted to have 50 iterations instead of 25. Using a 1-row matrix is overkill and error-prone: use a vector. Choose well how you name your variables and comment your code. Again, to make it easier to read and understand. Learn the difference between numerics and integers. Here your states are obviously integers, yet your code stores numerics. Granted, it is not critical here, but using integers uses less memory and can make your code faster. Integers are also not subject to floating point errors so they can make your code more robust in some instances. Marginal speed improvement. The 25 random calls to runif(1) being independent, you can make a single call to runif(25) and store the results. Think of your code in terms of inputs and outputs, then write a function. 
Updated code taking some of these into account: random.walk <- function(cum_trans, length.out = 25L) { num.states <- nrow(cum_trans) states <- vector(mode = "integer", length = length.out) # pick the initial state randomly states[1L] <- sample(num.states, 1L) # jump randomly using the cum_trans matrix num.jumps <- length.out - 1L randoms <- runif(num.jumps) for (i in seq(num.jumps)) { current.state <- states[i] current.prob <- cum_trans[current.state, ] states[1L + i] <- findInterval(randoms[i], current.prob) + 1L } return(states) } Next, think of the assumptions your code is making: cum_trans must be a square matrix with each row being an increasing vector of positive values ending with 1. length.out must be an integer vector of length 1. All of these assumptions can be checked at the top of the function's body; I am only implementing a few of them here: stopifnot(nrow(cum_trans) == ncol(cum_trans), all(cum_trans >= 0), is.integer(length.out), length(length.out) == 1L) This way, there is no way you or your end-user can mess up with the inputs. I hope you find this useful and it encourages you to write better code.
{ "domain": "codereview.stackexchange", "id": 18964, "tags": "algorithm, r, markov-chain" }
What is happening at the various boundaries of a phase diagram?
Question: Hoping to clarify a few questions I have that the textbook doesn't address. Let's just say I have a piston (at equilibrium) with 100 moles of gaseous compound that looks like this: And here is the phase diagram for this compound: Now let's say I start increasing the pressure on the piston so that the gas starts to approach the gas-liquid boundary.

1.) What would start to happen as you approach the boundary? For example: would a few moles of gas begin to condense to liquid? Is this process exponential as you approach the boundary?

2.) What is happening at the boundary? For example: are the gas and liquid in perfect equilibrium (50 moles of each)? By that logic, at the triple point, are there 33 moles of each?

3.) Now let's say we approach the critical point along the liquid-vapor boundary. I read online that it's when the gas and liquid become "indistinguishable." If so, that means it has a uniform density. Does that make it another state of matter? If not, why is it special?

Answer: I understand that you are trying to visualize what is actually happening. I have not done the compression myself, but this is what I think will be observed. It would be good if someone with real experience could confirm the answer.

At the beginning you are in the region marked 'vapour'. When you are approaching the liquid-vapour boundary from below, before you touch the boundary, you will still observe only vapour, no liquid at all. You are still in the region marked 'vapour'. Nothing will happen. When you just reach the boundary, you will see some liquid droplets form. It should look just like dew forming on the wall.

Regarding the relative amounts of liquid and vapour at equilibrium, the pressure of the vapour present must be equal to the pressure exerted by you (or else the piston will move!) and equal to the pressure on the boundary (determined by temperature).
Say you try to push the piston further in: the volume occupied by the sample will decrease and more liquid will form, but you will still stay on the boundary. You will notice that the pressure needed to push the piston in further does not change, because the pressure of the vapour does not change (push in, and the vapour pressure should increase, but some vapour condenses, lowering the pressure back). Only when all the vapour has been converted into liquid can you depart from the boundary and move into the region marked 'liquid'. However, before all the vapour has turned into liquid, if you stop pushing and hold your piston still, the liquid and vapour amounts won't change. So there is no unique relative amount of vapour and liquid at equilibrium, as assumed in your question. Above the liquid-vapour boundary you have only liquid, and the piston surface is resting on the liquid surface. Beyond the critical point, where liquid and vapour become indistinguishable, the state is known as a supercritical fluid.
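To make the mole balance on the boundary concrete, here is a small Python sketch of the bookkeeping described above (a lever-rule calculation); the molar volumes are made-up illustrative numbers, not data for any real compound:

```python
# Sketch: amounts of liquid and vapour while sitting on the boundary.
# Two unknowns (n_liq, n_vap) from two equations:
#   n_liq + n_vap = n_total
#   n_liq * v_liq + n_vap * v_vap = v_total

def phase_split(n_total, v_total, v_liq, v_vap):
    """Split n_total moles between liquid and vapour (molar volumes
    v_liq, v_vap) so that they exactly fill the total volume v_total."""
    n_vap = (v_total - n_total * v_liq) / (v_vap - v_liq)
    n_liq = n_total - n_vap
    return n_liq, n_vap

# 100 mol sample; illustrative molar volumes (L/mol) on the boundary
n_liq, n_vap = phase_split(100, 600.0, 0.1, 24.0)
print(n_liq, n_vap)  # pushing the piston in (smaller v_total) raises n_liq
```

The split is fixed by the piston position, not by the phase diagram alone, which is why there is no unique 50/50 (or 33/33/33) ratio at the boundary.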
{ "domain": "chemistry.stackexchange", "id": 14304, "tags": "thermodynamics, phase" }
Adding a camera to my urdf
Question: Hello, I am trying to add a camera to the end of an arm using xacros. The camera link URDF is:

<robot xmlns:xacro="http://www.ros.org/wiki/xacro">
  <xacro:macro name="camera" params="parent">
    <link name="camera">
      <collision>
        <origin xyz="0 0 0" rpy="0 0 0"/>
        <geometry>
          <mesh filename="package://sensors_and_actuators/meshes/camara.stl" scale="0.1 0.1 0.1"/>
        </geometry>
      </collision>
      <visual>
        <origin xyz="0 0 0" rpy="0 0 0"/>
        <geometry>
          <mesh filename="package://sensors_and_actuators/meshes/camara.stl" scale="0.1 0.1 0.1"/>
        </geometry>
      </visual>
      <inertial>
        <mass value="1e-5" />
        <origin xyz="0 0 0" rpy="0 0 0"/>
        <inertia ixx="1e-6" ixy="0" ixz="0" iyy="1e-6" iyz="0" izz="1e-6" />
      </inertial>
    </link>
    <joint name="sensor_joint" type="fixed">
      <parent link="${parent}"/>
      <child link="camera"/>
      <origin rpy="0 1.5620 0" xyz="0.0475 0.0 0.0"/>
      <axis xyz="1 0 0" />
    </joint>
    <gazebo reference="camera">
      <sensor type="camera" name="camera_sensor">
        <parent link="camera"/>
        <update_rate>30.0</update_rate>
        <camera>
          <horizontal_fov>1.3962634</horizontal_fov>
          <image>
            <format>R8G8B8</format>
            <width>800</width>
            <height>800</height>
          </image>
          <clip>
            <near>0.01</near>
            <far>100</far>
          </clip>
        </camera>
        <plugin name="${name}_camera_controller" filename="libgazebo_ros_camera.so">
          <cameraName>arm_sensors/camera</cameraName>
          <alwaysOn>true</alwaysOn>
          <updateRate>0.0</updateRate>
          <imageTopicName>image_raw</imageTopicName>
          <cameraInfoTopicName>camera_info</cameraInfoTopicName>
          <frameName>camera</frameName>
        </plugin>
      </sensor>
    </gazebo>
  </xacro:macro>
</robot>

When I run the launch file it reports an error saying that the parent param is missing:

samper@samper:~$ roslaunch cern_tim_arm_simulator arm_tim_simulator.launch spawn_arm:=true robotic_arm:=schunk_lwa4p world:=tunnel spawn_collimator:=true camera_on_arm:=true
... logging to /home/samper/.ros/log/2c17a290-4b39-11e5-b79f-605718079cbb/roslaunch-samper-16690.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

Traceback (most recent call last):
  File "/opt/ros/indigo/share/xacro/xacro.py", line 60, in <module>
    xacro.main()
  File "/opt/ros/indigo/lib/python2.7/dist-packages/xacro/__init__.py", line 696, in main
    eval_self_contained(doc)
  File "/opt/ros/indigo/lib/python2.7/dist-packages/xacro/__init__.py", line 626, in eval_self_contained
    eval_all(doc.documentElement, macros, symbols)
  File "/opt/ros/indigo/lib/python2.7/dist-packages/xacro/__init__.py", line 552, in eval_all
    (",".join(params), str(node.tagName)))
xacro.XacroException: Parameters [parent] were not set for macro camera
while processing /home/samper/ROS_Workspaces/ROS_Cern/src/ros_cern/tim_gazebo/load_world.launch:
while processing /home/samper/ROS_Workspaces/ROS_Cern/src/ros_cern/robotic_arm/launch/schunk_lwa4p_and_tim.launch:
Invalid <param> tag: Cannot load command parameter [robot_description]: command [/opt/ros/indigo/share/xacro/xacro.py '/home/samper/ROS_Workspaces/ROS_Cern/src/ros_cern/robotic_arm/arm_description/robots/schunk_lwa4p/schunk_lwa4p_arm_and_tim_camera.urdf.xacro'] returned with code [1].

Param xml is <param command="$(find xacro)/xacro.py '$(find robotic_arm)/arm_description/robots/schunk_lwa4p/schunk_lwa4p_arm_and_tim_camera.urdf.xacro'" if="$(arg camera_on_arm)" name="robot_description"/>

The traceback for the exception was written to the log file.

I would really appreciate it if someone could give me any clue. Thank you in advance.
JLuis Samper
PS: it only works if I delete the tag and its content

Originally posted by Samper-Esc on ROS Answers with karma: 50 on 2015-08-25
Post score: 0

Answer: In case someone else has this problem: I solved it by changing the inertial values. Gazebo has problems when it has to work with very small values in the inertial block. Additionally, I just omitted the parent tag; it is taken from the joint.
If you need to orientate the position of the optical focus, you can do it via the pose property tag of the gazebo plugin:

<?xml version="1.0"?>
<robot xmlns:xacro="http://ros.org/wiki/xacro">
  <xacro:property name="M_PI" value="3.1415926535897931" />
  <xacro:macro name="camera_link" params="parent">
    <link name="camera_link">
      <collision>
        <origin xyz="0 0 0" rpy="0 0 0" />
        <geometry>
          <mesh filename="package://sensors_and_actuators/meshes/camara.stl" scale="0.1 0.1 0.1"/>
        </geometry>
      </collision>
      <visual>
        <origin xyz="0 0 0" rpy="0 0 0" />
        <geometry>
          <mesh filename="package://sensors_and_actuators/meshes/camara.stl" scale="0.1 0.1 0.1"/>
        </geometry>
        <material name="grey">
          <color rgba="0.2 0.2 0.2 1.0"/>
        </material>
      </visual>
      <inertial>
        <mass value="0.001" />
        <origin xyz="0 0 0" rpy="0 0 0" />
        <inertia ixx="0.0001" ixy="0" ixz="0" iyy="0.0001" iyz="0" izz="0.0001" />
      </inertial>
    </link>
    <joint name="camera_joint" type="fixed">
      <parent link="${parent}"/>
      <child link="camera_link"/>
      <origin rpy="0 0 0" xyz="0.047 0.0 0.0"/>
      <axis xyz="1 0 0" />
    </joint>
    <gazebo reference="camera_link">
      <sensor type="camera" name="camera_camera_sensor">
        <update_rate>30.0</update_rate>
        <camera>
          <pose>0.035 0 0.078 0 ${-M_PI/2} 0</pose>
          <horizontal_fov>${85 * M_PI/180.0}</horizontal_fov>
          <image>
            <format>R8G8B8</format>
            <width>1020</width>
            <height>1020</height>
          </image>
          <clip>
            <near>0.01</near>
            <far>100</far>
          </clip>
        </camera>
        <plugin name="camera_camera_controller" filename="libgazebo_ros_camera.so">
          <alwaysOn>true</alwaysOn>
          <updateRate>0.0</updateRate>
          <cameraName>arm_sensor/camera</cameraName>
          <imageTopicName>image_raw</imageTopicName>
          <cameraInfoTopicName>camera_info</cameraInfoTopicName>
          <frameName>camera_link</frameName>
          <hackBaseline>0.07</hackBaseline>
          <distortionK1>0.0</distortionK1>
          <distortionK2>0.0</distortionK2>
          <distortionK3>0.0</distortionK3>
          <distortionT1>0.0</distortionT1>
          <distortionT2>0.0</distortionT2>
        </plugin>
      </sensor>
    </gazebo>
  </xacro:macro>
</robot>

Originally posted by
Samper-Esc with karma: 50 on 2015-08-26 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2016-04-21: Can you please accept your own answer? That way it is clear from the question listing that it has been answered. Comment by Samper-Esc on 2016-04-21: I tried but it says that i need more than 10 points Comment by gvdhoorn on 2016-04-21: Ah, right. Just accepted it for you. Comment by Samper-Esc on 2016-04-21: Thank you very much
{ "domain": "robotics.stackexchange", "id": 22525, "tags": "urdf, ros-indigo, camera" }
Small Kivy application
Question: I've written a small app using the kivy framework, which aims at reviewing your times tables. My major concern is about making this app easier to improve in the future. That is, how to organize the code to make it clearer (by adding a User class that would store the data associated with the user, for instance). Since the code is pretty straightforward, I've not added documentation to it.

The Python code:

# -*- coding: utf-8 -*-
from numpy.random import randint

import kivy
kivy.require('1.9.1-dev')
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.core.window import Window
from kivy.properties import StringProperty

Window.size = (200, 100)


class MyWidget(Widget):
    help_message = StringProperty()
    question = StringProperty()
    feedback = StringProperty()

    def __init__(self):
        super(MyWidget, self).__init__()
        self.help_message = "Your answer"
        self.question = ""
        self.answer = 0
        self.points = 0
        self.attempts = 1e-10
        self.feedback = ""
        self.set_question()

    def set_question(self):
        a, b = randint(1, 11, 2)
        self.question = "{} x {} =".format(a, b)
        self.answer = a * b

    def check_answer(self, user_input):
        try:
            if int(user_input) == self.answer:
                self.points += 1
                self.feedback = "right"
            else:
                self.feedback = "{} {}".format(self.question, self.answer)
            self.set_question()
            self.attempts += 1
        except:
            self.feedback = "invalid answer"

    def set_score(self):
        score = self.points / self.attempts * 100
        self.feedback = "{:3.0f}% of right answers".format(score)


class MyApp(App):
    def build(self):
        return MyWidget()


if __name__ == '__main__':
    MyApp().run()

The .kv one:

<MyWidget>:
    canvas:
        Color:
            rgba: .93, .93, .93, 1.
        Rectangle:
            pos: self.pos
            size: self.size
    TextInput:
        size: 130, 30
        pos: 70, 30
        text: root.help_message
        background: 1., 1., 1., 1.
        font_size: 14
        multiline: False
        on_text_validate: root.check_answer(self.text); self.text = ""
    Label:
        size: 200, 30
        pos: 0, 60
        text: root.feedback
        color: .0, .0, .0, 1.
        font_size: 14
    Label:
        size: 70, 30
        pos: 0, 30
        text: root.question
        color: .0, .0, .0, 1.
        font_size: 14
    Button:
        size: 200, 30
        pos: 0, 0
        text: "Stop"
        color: .0, .0, .0, 1.
        font_size: 14
        on_release: root.set_score()

[EDIT] Mentioned code reviewer here.

Answer: Your code is very nice! There is almost nothing for me to fault. However, in your try/except statement the try block is far too large, and the except is bare. Let's say that, for whatever reason, your .format raises an error. You could have put in the correct answer, but for reasons beyond your control the program failed, and the user gets '1 x 3 = 3' when they put 3. To amend this you can use the else part of try/except/else/finally:

try:
    user_input = int(user_input)
except ValueError:
    self.feedback = "invalid answer. Please enter numbers."
else:
    if user_input == self.answer:
        self.points += 1
        self.feedback = "right"
    else:
        self.feedback = "{} {}".format(self.question, self.answer)
    self.set_question()
    self.attempts += 1

You could say that you don't want to do user_input = int(user_input). If that were the case, then you can have the try as just int(user_input), and have the else as your old try.

If I were to improve this more:

- Better class names. MyApp is not descriptive and is definitely not my app. TimesTablesApp could be an alternative, unless you wish to add more to it. The same goes for MyWidget, which you could likewise call TimesTablesWidget.
- Put almost everything in the Python 'main'. Window.size = (200, 100) should be in the main. It will still be global!
- Try to avoid large libraries when Python comes built in with the things you need. I don't think I should need to install NumPy when CPython already has randint. Sure, it's not as convenient, but asking an end user to install an entire library for something so small is kinda annoying. I know numpy is pip install numpy away, but Python can do this.
from random import randint
a, b = randint(1, 10), randint(1, 10)  # random.randint includes both ends, unlike numpy's randint(1, 11)

A single line at the beginning of each function, class and module is enough to document your code entirely:

def set_question(self):
    """Set the internal question and answer."""

Sure, it's something that can be understood from the code, but what if I don't want to read the code, or can't?

Finally, I don't know why you need self.attempts = 1e-10. It seems kinda strange, and gives a large answer in set_score. I think it's there to avoid a ZeroDivisionError in set_score. So you could try this:

try:
    score = self.points / self.attempts
except ZeroDivisionError:
    self.feedback = "You didn't attempt any answers"
else:
    self.feedback = "{:3.0f}% of right answers".format(score * 100)
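The recommendations above can be collected into a minimal, Kivy-free sketch; the function and variable names below are illustrative, not part of the original app:

```python
# Sketch of the reviewed logic using try/except/else, outside Kivy.
# 'state' stands in for the widget's points/attempts attributes.

def check_answer(user_input, answer, state):
    try:
        value = int(user_input)          # only the risky conversion is guarded
    except ValueError:
        return "invalid answer. Please enter numbers."
    else:
        state["attempts"] += 1
        if value == answer:
            state["points"] += 1
            return "right"
        return "wrong, the answer is {}".format(answer)

def score(state):
    try:
        ratio = state["points"] / state["attempts"]
    except ZeroDivisionError:            # no sentinel 1e-10 needed
        return "You didn't attempt any answers"
    else:
        return "{:3.0f}% of right answers".format(ratio * 100)

state = {"points": 0, "attempts": 0}
print(score(state))                      # no attempts yet
print(check_answer("12", 12, state))     # right
print(score(state))
```

Note that attempts starts at a true zero here; the division-by-zero case is handled explicitly instead of being masked by a tiny float.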
{ "domain": "codereview.stackexchange", "id": 15105, "tags": "python, python-3.x, gui, kivy" }
How would a carbon-fiber bone compare to a biological bone under common workloads?
Question: I am wondering how to approach the problem of carbon-fiber based bone replacements. Considering a femur, what downsides would carbon-fiber have under the types of torsion, compression, wear-and-tear, etc.. experienced by a biological femur? It would be great if you happen to consider any methods for countering downsides, if any. Question assumes production is not an issue but design is. Artificial Human Bones. So let's look at the facts. The bones in your body are made from material which has a tensile strength of 150MPa, a strain to failure of 2% and a fracture toughness of 4MPa(m)½. For a structural material that's not good. We can make alloy steels that are ten times better in all three of those properties. But of course there are some other factors we need to take account of in order to make a valid comparison. Bone is less dense than metals and this is important because the weight of our bones strongly affects the energy needed to move around. To do a quantitative analysis we need to consider the geometry and loading on the structure. The major bones are mostly tubular in shape, loaded in compression and bending. So a rational comparison is to imagine tubes made from different materials, all having the same length and diameter, with their thicknesses adjusted to give them all the same weight. Putting in some typical dimensions and material properties we find that the stresses in a bone made from titanium alloy, for example, would be about 1.3 times higher than in a bone of the same weight, made from bone. But the titanium alloy is 5 times stronger so obviously its safety factor is much higher. Answer: A simple question to address such a broad range of issues. It is assumed that bio-compatible / flexible resin systems would be used. As CFRP strength is more than natural bone stress in compression/bending it would be way better in function if considered separately.. 
That said, the rest is tough to satisfy: 1) the direction of the fibres is important if Nature's function, developed over many thousands of years of evolution, is to be "reverse-engineered" artificially; 2) the local stiffness variation should be accurately known so that it can be duplicated in the fibre placement. The diagram of bone structure and orientation shows the disposition of (hollow sandwich) bone matter. Bone filaments run along $\pm 45^\circ$ in the middle of the femur shaft as well as in the neck region above. Force enters normally at either end in the trabecular and condyle regions. The interface force resultant should be normal to the transmitted force so as not to induce cracks/delamination through the weakness of interlaminar stresses and edge stress effects in a laminated construction. How would the prosthetic element be made? That is the more important question. The devil is in the detail, or so one views design with composites. No over- or under-design is permissible. Material should not be where it is not required without a functional requirement; material should be placed in the right amount and in the right direction. As one possibility, a carbon cloth with more layers at the extremities could be rolled, inserted into a metal mould cavity and hot-resin cured. A design/FEM analysis is needed after considering FMECA, which are medically well known. Loads and their combinations at the neck of the ball/socket/pelvis joint should be known. The torsional strength required at the knee cap may need silicone-rich pockets for shock absorption in the vicinity of the two joints. Downsides with carbon/graphite fibre composites depend upon conformance of the degree of stiffness in multiaxial stress distribution. Perhaps 3D printing of ceramics provides a better stress conduction path than injection moulding or cloth-moulding carbon/graphite, for stiffness flexibility and grain direction control.
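The equal-weight tube comparison set up in the question can be sketched numerically. The densities and femur-like dimensions below are rough illustrative values I have assumed, so the resulting ratio is indicative only; it depends on the exact geometry and need not reproduce the quoted factor of about 1.3:

```python
# Sketch: bending stress in two tubes of equal outer diameter D, length L
# and mass m, made from materials of different density, under the same
# bending moment M. Less dense material -> thicker wall -> lower stress.
import math

def bending_stress_ratio(rho_a, rho_b, D, m, L, M=1.0):
    """Stress ratio (material a / material b) for equal-mass tubes."""
    def stress(rho):
        area = m / (rho * L)                       # required cross-section area
        d = math.sqrt(D**2 - 4 * area / math.pi)   # inner diameter from area
        I = math.pi * (D**4 - d**4) / 64           # second moment of area
        return M * (D / 2) / I                     # outer-fibre bending stress
    return stress(rho_a) / stress(rho_b)

# assumed: titanium alloy ~4430 kg/m^3, cortical bone ~1900 kg/m^3,
# femur-like D = 28 mm, L = 0.45 m, m = 0.25 kg
print(bending_stress_ratio(4430, 1900, 0.028, 0.25, 0.45))
```

With these numbers the titanium tube carries roughly twice the stress of the equal-mass bone tube, illustrating the trade-off the question describes even if the exact factor differs.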
{ "domain": "physics.stackexchange", "id": 39013, "tags": "biophysics, stress-strain, material-science" }
Methods of investigating the occurrence of non-standard bases in nucleic acids
Question: Recently the popular scientific press gave considerable publicity to three publications in Science (30th April 2021 — see below) relating to the substitution of the nucleic acid base, 2-amino adenine, for adenine in certain bacterial viruses (bacteriophages). For example Quantum Magazine: DNA Has Four Bases. Some Viruses Swap in a Fifth. A few decades ago, researchers found viruses that had swapped one of the four bases in their DNA for a novel fifth one. Now, in a trio of papers published in Science in April, three teams have identified dozens of other viruses that make this substitution, as well as the mechanisms that make it possible. The discoveries raise the thought-provoking possibility that this kind of fundamental genomic change could be much more widespread and important in biology than anyone imagined. I am interested in (and my question is) how this was established experimentally, and how widespread it is in nature, but find the original papers too technical for the non-specialist. Is it possible to provide a summary explaining this, and in general terms how one would search for nucleic acid analogues in organisms? Original Science Papers These are listed below, with an extract from the summary of each, where I have difficulty with the technical terminology. 
A third purine biosynthetic pathway encoded by aminoadenine-based viral DNA genomes
"We show that S-2L and Vibrio phage PhiVC8 encode a third purine pathway catalyzed by PurZ, a distant paralog of succinoadenylate synthase (PurA), the enzyme condensing aspartate and inosylate in the adenine pathway."

A widespread pathway for substitution of adenine by diaminopurine in phage genomes
"We identified dozens of globally widespread phages harboring such enzymes, and we further verified the Z genome in one of these phages, Acinetobacter phage SH-Ab 15497, by using liquid chromatography with ultraviolet and mass spectrometry."

Noncanonical DNA polymerization by aminoadenine-based siphoviruses
"Congruent phylogenetic clustering of the polymerases and biosynthesis enzymes suggests that aminoadenine has propagated in DNA alongside adenine since archaic stages of evolution."

Answer: General approach to identifying non-standard bases in organisms

1. Purify from source material with standard separation methods.
2. Identify using chemical methods and comparison with known compounds.
3. Establish the biochemical pathway for synthesis etc. — the specific reaction(s) required to produce (and handle) the compound and the enzymes responsible for catalysing these reactions.
4. Identify the genes for any new enzymes identified.
5. Search the Genbank nucleic acid database for similar groups of genes in other organisms.

Notes
1 and 2. Modern methods can detect molecules in small quantities in impure mixtures.
3 and 4. Often done in concert. Genes in a small viral genome may suggest possibilities.
5. If one wishes to work on a different organism, it is then necessary to establish that the genes do encode enzymes that catalyse the synthesis of the base.

In general, a knowledge of chemistry is much more important than biology in being able to carry out this sort of work.

SPECIFIC EXAMPLE FOR BACTERIOPHAGES SUBSTITUTING 2-AMINOADENINE FOR ADENINE

1. Purification

In this case purification is relatively easy.
One can physically separate the phage particles on the basis of size etc., and then extract the DNA from the protein using standard chemical separations and break it up (hydrolyse) into individual components including bases. This was first reported in 1977 in Nature by Kirnos et al.

Cyanophage S-2L was isolated from water samples taken in the outskirts of Leningrad. It can lyse blue-green algae Synechococcus sp. 698 and S. eloniatus strains 58 and 6907 … The phage was grown on S. sp. 698 and concentrated by chromatography on DEAE-cellulose (Whatman DE-32) and purified by centrifugation in CsCl gradient.

2. Identification

Classically a chemist would analyse the atomic constituents and their ratio in an unknown molecule, perform partial degradation to produce known constituents, deduce a possible structure and attempt to synthesize it to confirm this. The ease of identifying unknown molecules has increased over the years as the number of already identified compounds with which one can compare has increased (there had been much work done on minor bases found in transfer RNA) and the methods available have grown and become more discriminating. It is interesting that the cyanophage work was done in the former Soviet Union, where biological sciences were low in prestige and funding. Standard equipment for examining the spectral properties of compounds (the light or other electromagnetic radiation emitted or absorbed on excitation of particular chemical bonds) would be available, however. The conclusion that an unknown base was present in the DNA of this phage was almost certainly serendipitous, resulting from standard analysis of the purity of the DNA by simple spectrophotometry. (Protein contamination results in absorption at 280 nm, so purity is routinely assessed from the 260/280 nm absorbance ratio.) There were unusual features in the spectrophotometric properties, and even more marked changes when analysis was performed by the method of circular dichroism.
As 2-aminoadenine was among the base analogues that had already been described and characterized, establishing that this was the base in the phage DNA that was present instead of adenine was possible by comparison of their physical chemical behaviour. I find it ironic, in the context of the recent publicity, that this report (which was accompanied by a more extensive paper in the journal, Virology) received little attention in the subsequent 43 years. 3. and 4. Biochemical Pathway and Genes The first of the triplet of recent Science papers established the pathway for the synthesis of the dZTP (where Z is 2-aminoadenine) that is the precursor for DNA synthesis, the DNA polymerase enzyme that was needed to handle this modified base, and an enzyme dATPase that removed the normal DNA precursor, dATP. It also established that four of the genes that encoded these proteins are encoded by the phage genome. The strategy by which this was accomplished is worth presenting as it has been used in other circumstances to identify the genes for particular reactions. The general logic is: Enzymes to perform reactions not carried out by the host are likely to be coded by the phage. We expect a new phage enzyme to arise from the acquisition of a host enzyme that performs the required reaction, but on a different substrate (reacting molecule) which we know can quite easily mutate to bind a different molecule and hence give a different product. (Change its substrate specificity.) Families of enzymes that catalyse the same reaction share recognizable structural features, so we can identify candidates in the phage genome for the reaction of interest by conceptual translation into protein. We can then attempt to test our putative candidate by expression and biochemical assay and by observing the effects of deletion or other genetic manipulation. The summary sentence for the first paper indicates that in this case they found a gene in the phage related in this way to host gene purA. 
The host gene catalysed a reaction of the same chemical type as the one required, but reacted with normal purines (hence the Pur) in standard pathway, and they were subsequently able to establish that the product of the phage gene (which they named purZ) catalysed the reaction in the pathway above. A comparison of the chemistry is shown below. 5. Identification in other organisms One could, of course, search for 2-aminoadenine by purifying DNA from other phages (in 1977 this would have been the only option) and the authors of the recent paper did this in one case (the summary of their second paper) but they also employed the much faster general strategy of searching the phage genome sequences deposited in Genbank for genes similar to purZ (and the other three mentioned). This is aided by the grouping (clustering) of the genes of interest — an additional indication that they work together. This clustering is not always the same, but the diagram below shows the similarities between several phages and how this gives one confidence of the presence of the pathway in phages other than S-2L. How ancient is this evolutionarily? The last paper is concerned with the question of when this base change occurred in evolutionary time. The general strategy is to consider a phylogenetic tree of phages — which is generally assumed to reflect the phylogeny of their host bacteria. The diagram shows an imaginary simplified phylogenetic relationship. 1. Indicates the current species at the right, with their divergence from each other at various periods in the past on the left. 2. Represents the identification of phages in which the base changes are currently found by red balls, so that in 3. by colouring the branches in red we can deduce at what point in the past the enzyme system for producing and using the modified base arose. 
The third paper, in effect does this sort of thing and finds that this indicates that the 2-aminoadenine: …has been used as an information carrier among siphoviruses since the evolutionary divergence of actinobacteria, cyanobacteria, and proteobacteria ~3.5 billion years ago. Does this occur in bacteria? This cluster of genes has not been reported in any bacteria (and you can be sure that the authors of the three Science papers looked), although one cannot exclude that this change has happened in some unstudied bacteria. However it is unlikely to be occur in known bacteria because of the rationale for its existence in phages. Bacteria have a system to combat phages called the restriction/modification system. They encode specific restriction endonucleases that cleave at specific sites in the DNA (generally 4-6 base pseudo-palindromes) and protect their own DNA by methylating bases in these sites. It was established in the 1980s that the substitution of 2-aminoadenine for adenine in phage S-2L renders its DNA resistant to many restriction endonucleases — i.e. this is a viral response to the bacterial defence system.
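As a toy illustration of step 5 (searching databases for genes similar to a query such as purZ), here is a sketch using shared k-mers as a crude similarity score. Real searches use dedicated tools such as BLAST or HMMER, and the sequences below are invented for illustration:

```python
# Toy homology screen: score candidate protein sequences against a query
# by the Jaccard similarity of their amino-acid k-mer sets.

def kmers(seq, k=4):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b, k=4):
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

query = "MKVLITGASGFIGSHLVDRLLSQGH"          # hypothetical PurZ fragment
candidates = {
    "phage_A": "MKVLITGASGFIGSHLVERLLAQGH",  # near-identical: likely homologue
    "phage_B": "MSTNPKPQRKTKRNTNRRPQDVKFP",  # unrelated sequence
}
hits = {name: jaccard(query, s) for name, s in candidates.items()}
print(hits)  # phage_A scores far higher than phage_B
```

The real pipeline additionally checks whether the hits cluster with the other pathway genes (purZ, polymerase, dATPase) in the candidate genome, which is what gives confidence that the whole pathway is present.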
{ "domain": "biology.stackexchange", "id": 11536, "tags": "molecular-biology, virology, nucleic-acids" }
Removing recursion - a look into theory behind the scenes
Question: I am new to this site and this question is certainly not research level - but oh well. I have a little background in software engineering and almost none in CSTheory, but I find it attractive. To make a long story short, I would like a more detailed answer to the following if this question is acceptable on this site. So, I know that every recursive program has an iterative analog and I kind of understand the popular explanation that is offered for it by maintaining something similar to the "system stack" and pushing environment settings like return address etc. I find this kind of handwavy. Being a little more concrete, I would like to (formally) see how does one prove this statement in cases where you have a function invoking chain $F_0 \rightarrow F_1 \ldots F_i \rightarrow F_{i+1} \ldots F_n \rightarrow F_0$. Further, what if there are some conditional statements which might lead an $F_i$ make a call to some $F_j$? That is, the potential function call graph has some strongly connected components. I would like to know how can these situations be handled by let us say some recursive to iterative converter. And is the handwavy description I referred to earlier, really enough for this problem? I mean then why is it that I find removing recursion in some cases easy. In particular removing recursion from pre-order traversal of a Binary tree is really easy - its a standard interview question but removing recursion in case of post order has always been a nightmare for me. What I am really asking is $2$ questions (1) Is there really a more formal (convincing?) proof that recursion can be converted to iteration? (2) If this theory is really out there, then why is it that I find, for eg, iteratizing preorder easier and postorder so hard? (other than my limited intelligence) Answer: If I understand correctly, you are clear about converting functions that contain no other function calls but to themselves. 
So assume we have a "call chain" $F \to F_1 \to \dots \to F_n \to F$. If we furthermore assume that $F_1, \dots, F_n$ are not recursive themselves (because we have converted them already), we can inline all those calls into the definition of $F$, which thusly becomes a directly recursive function we can already deal with.

This fails if some $F_j$ has itself a recursive call chain in which $F$ occurs, i.e. $F_j \to \dots \to F \to \dots \to F_j$. In this case, we have mutual recursion, which requires another trick to get rid of. The idea is to compute both functions simultaneously. For example, in the trivial case:

f(0) = a
f(n) = f'(g(n-1))
g(0) = b
g(n) = g'(f(n-1))

with f' and g' non-recursive functions (or at least independent of f and g) becomes

h(0) = (a,b)
h(n) = let (f,g) = h(n-1) in (f'(g), g'(f)) end
f(n) = let (f, _) = h(n) in f end
g(n) = let (_, g) = h(n) in g end

This naturally extends to more functions involved and more complicated functions.
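A minimal Python sketch of this pairing trick, with arbitrary choices for f', g', a and b:

```python
# Mutually recursive f and g, and their combined iterative version h.
def f_prime(x): return x + 1     # illustrative stand-ins for f' and g'
def g_prime(x): return 2 * x

def f_rec(n): return 10 if n == 0 else f_prime(g_rec(n - 1))   # a = 10
def g_rec(n): return 20 if n == 0 else g_prime(f_rec(n - 1))   # b = 20

def h_iter(n):
    """Compute the pair (f(n), g(n)) together, with a plain loop."""
    f, g = 10, 20                        # h(0) = (a, b)
    for _ in range(n):
        f, g = f_prime(g), g_prime(f)    # h(k) from h(k-1)
    return f, g

for n in range(6):
    assert h_iter(n) == (f_rec(n), g_rec(n))
print(h_iter(5))
```

The loop carries exactly the pair that the recursive definitions would rebuild on the call stack, which is the whole content of the transformation.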
{ "domain": "cstheory.stackexchange", "id": 1369, "tags": "pl.programming-languages, recursion, recursive" }
Why does magnesium bromide transfer from a carbon atom to nitrogen in piperidine?
Question: In this reaction, why does $\ce{MgBr}$ go to the $\ce{>NH}$ group and form a bond with nitrogen by removing $\ce{-H}$? Also, where does the negative charge go? Answer: There is no step 1 and step 2 in this reaction. Both steps are consecutive, and the second one is fast because it is an acid-base reaction: Note that the approximate $\mathrm{p}K_\mathrm{a}$ values of the ring $\ce{C-H}$ and $\ce{N-H}$ are 50 and 40, respectively. Thus, as soon as the Grignard reagent is formed, it rapidly exchanges with the relatively acidic $\ce{N-H}$ protons (remember, a Grignard reagent is also a strong base). To visualize the exchange process, I included a schematic representation at the bottom of the image.
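The pKa gap quoted above fixes how far the exchange lies to the right; a quick back-of-the-envelope sketch using the values given in the answer:

```python
# Equilibrium constant for the proton transfer
#   R-MgBr + R'2N-H  <=>  R-H + R'2N-MgBr
# follows from the pKa difference: K = 10**(pKa of acid formed - pKa of acid consumed).
pKa_CH = 50   # ring C-H, the conjugate acid of the Grignard carbanion
pKa_NH = 40   # piperidine N-H
K = 10 ** (pKa_CH - pKa_NH)
print(K)  # ~1e10: the exchange lies overwhelmingly on the amide side
```

A ten-unit pKa gap means the deprotonation is effectively irreversible, which is why the second step is fast and complete.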
{ "domain": "chemistry.stackexchange", "id": 14372, "tags": "organic-chemistry, acid-base, nucleophilic-substitution, heterocyclic-compounds, grignard-reagent" }
A point between two charges has an electric potential of zero, but a charge placed at this point will gain kinetic energy. Why?
Question: If I have a positive charge of +q and a negative charge of -q that are set a distance of r apart from each other, their midpoint will have an electric potential of zero. However, if I put a test charge into this midpoint and release it, it will gain kinetic energy, and I am under the impression that it comes from some sort of potential energy. But according to my book, that midpoint that it was placed at has no potential? Where does the kinetic energy come from? Answer: The actual value of the potential is not meaningful; what is meaningful is the gradient: the field on a particle is given by $\vec{E} = -\vec{\nabla} V$. Equivalently, what is meaningful is only the potential difference between two points. Notice that one can define a new potential differing from the old by an additive constant: $V' = V + c$ and the force will still be the same. So we have a redundancy in the system when describing potential energy; this is a gauge degree of freedom. But never mind the jargon for now. In your case, the gradient of the (positive) test charge's potential energy gives rise to a force towards the negative charge, and so it moves towards the negative charge. Conservation of energy holds, and the gain in KE must be compensated by a loss (i.e. difference) of PE. The potential being $0$ at the midpoint is a consequence of choosing $c$ such that $V \to 0$ as $r\to \infty$, but one can equally well define the potential at $\infty$ to be $1$ GeV, for example, and nothing physical would change.
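That only the gradient matters can be checked numerically. A minimal sketch in illustrative units (k = q = 1, charges a distance d = 2 apart; all values hypothetical): the on-axis potential at the midpoint is zero, yet its slope - and hence the force on a test charge - is not.

```python
def potential(x, q=1.0, d=2.0, k=1.0):
    # On-axis potential of +q at x = -d/2 and -q at x = +d/2 (illustrative units).
    return k * q / abs(x + d / 2) - k * q / abs(x - d / 2)

h = 1e-6
V_mid = potential(0.0)                              # exactly zero by symmetry
E_mid = -(potential(h) - potential(-h)) / (2 * h)   # E = -dV/dx, numerically
```

With these numbers V_mid is 0 while E_mid is about 2, pointing toward the negative charge - the kinetic energy comes from the potential difference along the path, not from the value of V at the starting point.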
{ "domain": "physics.stackexchange", "id": 11616, "tags": "electric-fields" }
Genetic modifications in humans to prevent disease
Question: I was reading an article about "designer babies", which was about how, before birth, the DNA can be modified so as to achieve certain features. Can a technique such as this also be used to prevent diseases, such as Down syndrome or some others, before the birth of a child? How far in the future are we from achieving something like this? Answer: Regardless of the ethical and philosophical concerns involved, it is theoretically possible to modify a child's DNA to prevent various diseases, with a few caveats. First of all, many diseases have multiple potential causes, and there is rarely a single method which would work for every case. Second, the actual modification is not a trivial task in eukaryotes. The more controversial method would be for a mother to be implanted with "petri dish" embryos, which have undergone gene transplantation early enough for all of the embryo's cells to be transformed. The other method involves repeated gene therapy treatments later in the child's development (and even into adulthood) which would not necessarily prevent the disease, but would minimize its effects. Technically it could be done right now, but to the best of my knowledge neither procedure is approved for use as of yet. Given the controversy regarding modifying a child's DNA before birth, it may never be approved, but several gene therapies are undergoing clinical trials.
{ "domain": "biology.stackexchange", "id": 8371, "tags": "genetics, bioinformatics, human-genetics, gene" }
Can volcanoes change the climate?
Question: I have heard politicians claiming that volcanoes are the sole cause of global warming and using so-called "NASA data" to show that the Earth is actually cooling instead of warming. While the nature of this site is for specific questions and not for political debate, we all know from the Year Without a Summer that volcanic gases can impact the weather severely, making it very cold or hot. My question is this: Can a volcanic eruption, either a single or multiple explosions, be the sole cause of global warming at the planetary scale? Can volcanoes really change the climate so fast, relative to the geological timescale, that they produce the abnormal levels that we see today? I'm aware of the year without summer, but that ended in a nanosecond, geologically speaking. Answer: That the CO2 imbalance causing global warming is anthropogenic is accepted by 97% (source) of climatologists. This site has this to say about humans' effect on the CO2 imbalance: But consider what happens when more CO2 is released from outside of the natural carbon cycle – by burning fossil fuels. Although our output of 29 gigatons of CO2 is tiny compared to the 750 gigatons moving through the carbon cycle each year, it adds up because the land and ocean cannot absorb all of the extra CO2. About 40% of this additional CO2 is absorbed. The rest remains in the atmosphere, and as a consequence, atmospheric CO2 is at its highest level in 15 to 20 million years (Tripati 2009). (A natural change of 100ppm normally takes 5,000 to 20,000 years. The recent increase of 100ppm has taken just 120 years). Human CO2 emissions upset the natural balance of the carbon cycle. Man-made CO2 in the atmosphere has increased by a third since the pre-industrial era, creating an artificial forcing of global temperatures which is warming the planet.
While fossil-fuel derived CO2 is a very small component of the global carbon cycle, the extra CO2 is cumulative because the natural carbon exchange cannot absorb all the additional CO2. Also, this site lists the acceptance of anthropogenic global warming from eighteen different scientific associations. I'm not sure politicians are the final authority for global warming. Edit: it has come to my attention that I did not actually answer the question posed in the title of this question, to wit: "Can volcanoes change the climate?" I'll have to admit I did not. Taking the title and the body together as one topic, I'll try to address the contribution volcanoes make to global warming. This site has this to say about the contribution of CO2 from volcanoes: Volcanic eruptions can enhance global warming by adding CO2 to the atmosphere. However, a far greater amount of CO2 is contributed to the atmosphere by human activities each year than by volcanic eruptions. T. M. Gerlach (1991, American Geophysical Union) notes that human-made CO2 exceeds the estimated global release of CO2 from volcanoes by at least 150 times. The small amount of global warming caused by eruption-generated greenhouse gases is offset by the far greater amount of global cooling caused by eruption-generated particles in the stratosphere (the haze effect). And this one has this: Gas studies at volcanoes worldwide have helped volcanologists tally up a global volcanic CO2 budget in the same way that nations around the globe have cooperated to determine how much CO2 is released by human activity through the burning of fossil fuels. Our studies show that globally, volcanoes on land and under the sea release a total of about 200 million tonnes of CO2 annually. This seems like a huge amount of CO2, but a visit to the U.S.
Department of Energy's Carbon Dioxide Information Analysis Center (CDIAC) website (http://cdiac.ornl.gov/) helps anyone armed with a handheld calculator and a high school chemistry text put the volcanic CO2 tally into perspective. Because while 200 million tonnes of CO2 is large, the global fossil fuel CO2 emissions for 2003 tipped the scales at 26.8 billion tonnes. Thus, not only does volcanic CO2 not dwarf that of human activity, it actually comprises less than 1 percent of that value. Certainly volcanoes can cause short-term climate effects. Pinatubo in 1992 reduced global temperatures by 0.5 to 0.6 degrees Celsius (source). IOW, it lowered temperatures, not raised them. So I think it's safe to say that humans contribute considerably more to warming than do volcanoes.
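The "less than 1 percent" figure quoted from the USGS can be reproduced directly from the two numbers given:

```python
# The two figures quoted above: ~200 million tonnes/yr of volcanic CO2 versus
# 26.8 billion tonnes/yr of fossil-fuel CO2 (2003).
volcanic = 200e6
fossil = 26.8e9
fraction = volcanic / fossil    # ~0.0075, i.e. less than 1 percent
```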
{ "domain": "earthscience.stackexchange", "id": 858, "tags": "meteorology, climate, climate-change, volcanology, volcanic-hazard" }
TicTacToe game in Haskell
Question: As a beginner, I tried to implement TicTacToe in Haskell. How can I improve my code? I have refactored my code: General TicTacToe in Haskell Utils module Data.List.Utils where import Data.List (intersperse) surround :: a -> [a] -> [a] surround x ys = x : intersperse x ys ++ [x] nth :: Int -> (a -> a) -> [a] -> [a] nth _ _ [] = [] nth 0 f (x:xs) = f x : xs nth n f (x:xs) = x : nth (n - 1) f xs TicTacToe module TicTacToe where import Data.List (transpose) import Data.Foldable (asum) import Data.List.Utils (nth, surround) data Tile = Empty | O | X deriving (Show, Eq) showTile :: Tile -> String showTile Empty = " " showTile O = "O" showTile X = "X" type Board = [[ Tile ]] showBoard :: Board -> String showBoard = unlines . surround vert . map (concat . surround horiz . map showTile) where vert = "+-+-+-+" horiz = "|" whoWon :: Board -> Maybe Tile whoWon xs = asum . map winner $ diag : anti : rows ++ cols where rows = xs cols = transpose xs diag = zipWith (!!) xs [0..] anti = zipWith (!!) xs [2,1..] winner [O, O, O] = Just O winner [X, X, X] = Just X winner _ = Nothing fillTile :: Int -> Int -> Tile -> Board -> Board fillTile row col = nth row . nth col . const isOver :: Board -> Bool isOver = all (all (/= Empty)) overMsg :: Maybe Tile -> String overMsg (Just x) = "Winner: " ++ (showTile x) overMsg Nothing = "Game Over! No winner." inRange :: Int -> Bool inRange x = x >= 0 && x <= 2 gameLoop :: Tile -> Board -> IO () gameLoop player board | isOver board || winner /= Nothing = putStrLn $ overMsg winner | otherwise = do putStrLn $ showBoard board putStrLn $ "Now player " ++ showTile player putStr "Row: " row <- fmap read getLine putStr "Col: " col <- fmap read getLine if inRange row && inRange col && (board !! row !! col == Empty) then gameLoop (if player == O then X else O) $ fillTile row col player board else do putStrLn "Invalid position, please try again." 
gameLoop player board where winner = whoWon board startBoard :: Board startBoard = replicate 3 (replicate 3 Empty) main = gameLoop O startBoard Answer: Some thoughts about your Tic-Tac-Toe version. You made a confusion between Tile and Player. You assume a Player is the same as a Tile but it is not. A Tile can be Empty but a Player cannot be empty. You should separate these two notions. It will make your declarations clearer on what you expect. What if I call "gameLoop Empty startBoard" ? The putStr function does not flush immediately its output in a compiled program. Your program asks the user for a row and a column and displays these labels after the user inputs. Is it natural to ask for the row before the column ? You have intertwined the game logic with the user interface in the gameLoop function. What will you do if you want to implement a computer player ? Or have a GUI to play the game ? You will have to patch your gameLoop function though the game logic has not changed. Your solution is locked to the fact that your board is a 3×3 grid with a 3 tiles win. For example, your inRange functions works for row indices AND column indices, which are not the same. You use the constants 2 and 3 in your code (inRange, startBoard and whoWon), but nothing implies a relation between them. The 3 rows and 3 columns are characteristics of the Board, not of your functions. Messages could be more specific. When you tell the user "Invalid position", there’s no way to know if it’s because the coordinates are out of bounds or because the tile has already been played. Note: there are many Tic-Tac-Toe game related questions on CodeReview, you should take a look at: Tic Tac Toe game in Haskell (a brute first attempt) Tic Tac Toe game in haskell follow up (its follow up) TicTacToe in Haskell (a version using Lens) m,n,k-game (i.e. generalized tic-tac-toe) (its generalization)
{ "domain": "codereview.stackexchange", "id": 15027, "tags": "haskell, tic-tac-toe" }
Toggling a taxi between busy and available states
Question: Is there a better way to do this? That switch_status method seems like it could be made a lot more compact. class Taxi def initialize @status = :available end def switch_status if @status == :available @status = :busy else @status = :available end end end t = Taxi.new t.switch_status Answer: I voted to reopen, though I still don't consider this a good CodeReview question. Perhaps my answer here will shed some light on why I think that. You say you want to shorten the switch_status method. Well, Ruby has ternaries, same as many other languages so: def switch_status @status = (@status == :available ? :busy : :available) end And done. Shorter. But the reason this question feels incomplete is because that, in and of itself, is completely equivalent to the original code. So it's not really a review, just a demonstration of an alternative if..else syntax found in many languages. A review would require knowing more about why this method exists. What its purpose is, when you need it, why it's using symbols, and so on. To start, just having the switch_state method is odd. You presumably have to know what state the taxi (as the class name suggests it is) is already in, in order to see if switching it makes sense. But your code contains no mechanism for determining the current state. In fact, the only way to get a Taxi instance to return something is to change its state. Calling switch_state will return the new state, and you'll have to work backwards from there or switch it back again. E.g. suppose you have an instance, and want to know its state. Currently, you'd have to do this: some_taxi.switch_state # flip it once current_state = some_taxi.switch_state # flip it back again That makes very little sense. Also: You're switching between exactly two states - which is what booleans do already. So why aren't you using booleans? They're limited to two states, so you're certain it's either one or the other (whereas a symbol can be anything). 
So you could just as well do this: def switch_status @busy = !@busy end Really, that should do the same trick, since this taxi has only two states. And also, given the lack of context, I'd say your entire class could, in effect, be replaced with just a boolean, since the class's only purpose right now is to store a boolean state. So cut out the middleman. A raw boolean value would even tell you its current state, which is more than your class does now. But presumably you want the encapsulation and the symbols for something. Or do you want symbols? A more Ruby'esque way to inquire about a known set of states would be with ?-methods: def busy? @busy end def available? !@busy end Or you can just have one of 'em and negate their value, e.g. !taxi.busy?. Either way, it's infinitely better than flipping the state twice just to find out what it currently is. If you still want symbols, write an accessor: def status @busy ? :busy : :available end Then again, if this is a taxi, does it really have only those two states? Maybe one is :out_of_service for instance? Again, no idea. The question doesn't say. Or it could be that available/busy can be - and likely should be - inferred from something else. There's no reason to manually track multiple separate state values if they're dependent on each other. In fact, it's likely harmful to try. So perhaps your code could do this instead: def status @passengers.any? ? :busy : :available end Now status is derived from something (presumably) more meaningful. But does this make sense? No idea. The question doesn't say. As mentioned, just having switch_state is odd because presumably there's more to it than that: The state is connected to something. Otherwise, as I also mentioned, you could replace the whole class with a simple boolean value. Only being able to manipulate the state by flipping it seems weird. Like a light switch you can only ever flip to the opposite of whatever it is - and you don't know whether that's on or off.
{ "domain": "codereview.stackexchange", "id": 17229, "tags": "ruby" }
Why does a big protein move during gel electrophoresis but to a lesser extent through a size exclusion column?
Question: A big protein going through gel electrophoresis will be forced through the gel; it will drift a small way and will show on the top of the gel, while in size exclusion, the big protein is not forced to go through the pores and will elute first. Why is the protein forced through one medium and not through the other? Answer: In the polyacrylamide gel, there is no way to avoid the narrow passages. The gel is made in situ by a chemical crosslinking step. In size exclusion, the solid phase is made of beads with diameters ranging from 50 to 200 µm. The voids between these beads are connected and sufficiently large for the protein to go through. Another difference between the two methods is that for SDS gel electrophoresis, the protein is denatured, while for size exclusion, it is usually in the native state, so the shape and the flexibility are different, leading to different mobility.
{ "domain": "chemistry.stackexchange", "id": 16593, "tags": "biochemistry, proteins, separation-techniques" }
Show that decomposition does not hold for a non-linear system
Question: The solution to an inhomogeneous differential equation can be split up into a homogeneous solution and a particular solution (forced response). Another way to split up the solution to an inhomogeneous differential equation is into a zero-input response and a zero-state response. The zero-input response is the system's response to its own internal initial conditions - no input signal is applied. The zero-input response is the homogeneous solution to the system's differential equation, using the initial conditions at $t=0^-$. The zero-state response is the system's response to only the input signal - all initial conditions set to $0$. The zero-state response is found by convolving the system's impulse response $h(t)$ with the input signal $x(t)$. This excerpt is from Lathi's signal processing and linear systems: Lathi claims that for a linear system, one can show that the decomposition property holds. Question: How can I show that the decomposition property does not hold for a non-linear system, for example $\dot{y}(t) + y(t) = x(t) + 1$? Answer: Assume that $y_0(t)$ is the zero-input response. Then $y_0(t)$ must satisfy $$\dot{y}_0(t)+y_0(t)=1\tag{1}$$ because $x(t)=0$. Now let $y_1(t)$ be the zero-state response to an input $x(t)$, satisfying $$\dot{y}_1(t)+y_1(t)=x(t)+1\tag{2}$$ If the decomposition property holds, the function $y_2(t)=y_0(t)+y_1(t)$ must be the response to $x(t)$ with possibly non-zero initial conditions. I.e., $y_2(t)$ should satisfy $$\dot{y}_2(t)+y_2(t)=x(t)+1\tag{3}$$ However, adding Equations $(1)$ and $(2)$ gives $$\dot{y}_2(t)+y_2(t)=x(t)+2\tag{4}$$ which contradicts $(3)$ and shows that for the given system the decomposition property doesn't hold.
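The contradiction can also be observed numerically. A small sketch (forward-Euler integration, with an arbitrary constant input chosen purely for illustration): the zero-input and zero-state responses of ẏ + y = x + 1 do not sum to the full response.

```python
def simulate(y_init, x, dt=1e-4, t_end=1.0):
    # Forward-Euler integration of y'(t) + y(t) = x(t) + 1 (the system above).
    y, t = y_init, 0.0
    while t < t_end:
        y += dt * (x(t) + 1.0 - y)
        t += dt
    return y

x_in = lambda t: 0.5                    # arbitrary constant input for the check
y_zi = simulate(1.0, lambda t: 0.0)     # zero-input response: x = 0, y(0) = 1
y_zs = simulate(0.0, x_in)              # zero-state response: y(0) = 0
y_full = simulate(1.0, x_in)            # actual response with both present
gap = (y_zi + y_zs) - y_full            # nonzero: decomposition fails here
```

The gap at t = 1 comes out near 0.63 (in fact 1 - 1/e, exactly the extra "+1" term integrated once over); for a genuinely linear system it would vanish.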
{ "domain": "dsp.stackexchange", "id": 11767, "tags": "continuous-signals, linear-systems, non-linear, differential-equation" }
Counting people inside the house game
Question: This program animates people going inside and outside of a house; the user has to keep a running sum in their head and must give the answer at the end. The speed of the people, the maximum number of people inside the house and the number of rounds to be played can be adjusted; at the end the user will get a report saying how many they got right. You can see the program in action here (Italian commentary, not important). I took two random images from the internet for this; if you are concerned about copyright, you can just use rectangles of sizes: 396*324 for the house, 197*255 for the person. Folder with code and images I feel like the code contains too much repetition. It is easier for me to avoid repetition in logic/computational programming, but making this animation led me to more repetition than I would have liked, especially in ingoing vs outgoing. I am open to all kinds of suggestions: import pygame import random import os import time pygame.init() screen = pygame.display.set_mode((800, 640)) person = pygame.image.load('person.png') house = pygame.image.load('house.png') clock = pygame.time.Clock() pygame.display.set_caption('People counting game') myfont = pygame.font.SysFont('comicsansms', 80) smallfont = pygame.font.SysFont('comicsansms', 30) ingoing_persons = 5 outgoing_persons = 3 def animate_person_going_inside_house(speed): running = True x = -200 while running: event = pygame.event.poll() if event.type == pygame.QUIT: running = False x += speed screen.fill((255, 255, 255)) # fill the screen screen.blit(person, (int(x), 340)) screen.blit(house, (200, 260)) if x > 300: return pygame.display.update() # Just do one thing, update/flip.
clock.tick(40) # This call will regulate your FPS (to be 40 or less) def animate_person_going_outside_house(speed): running = True x = 300 while running: event = pygame.event.poll() if event.type == pygame.QUIT: running = False x += speed screen.fill((255, 255, 255)) # fill the screen screen.blit(person, (int(x), 340)) screen.blit(house, (200, 260)) if x > 900: return pygame.display.update() # Just do one thing, update/flip. clock.tick(40) # This call will regulate your FPS (to be 40 or less) def animate_all_people(ingoing, outgoing, difficulty): result = ingoing - outgoing ingoing_so_far = 0 outgoing_so_far = 0 people_inside = 0 while True: if random.choice( (0, 1) ): if ingoing_so_far < ingoing: people_inside += 1 animate_person_going_inside_house(difficulty) ingoing_so_far += 1 else: if outgoing_so_far < outgoing and people_inside > 0: # People can only exit if people are inside! people_inside -= 1 animate_person_going_outside_house(difficulty) outgoing_so_far += 1 if ingoing_so_far == ingoing and outgoing_so_far == outgoing: break running = True while running: event = pygame.event.poll() if event.type == pygame.QUIT: running = False if event.type == pygame.KEYDOWN: if (event.key == pygame.K_0 and people_inside == 0) or \ (event.key == pygame.K_1 and people_inside == 1) or \ (event.key == pygame.K_2 and people_inside == 2) or \ (event.key == pygame.K_3 and people_inside == 3) or \ (event.key == pygame.K_4 and people_inside == 4) or \ (event.key == pygame.K_5 and people_inside == 5) or \ (event.key == pygame.K_6 and people_inside == 6) or \ (event.key == pygame.K_7 and people_inside == 7) or \ (event.key == pygame.K_8 and people_inside == 8) or \ (event.key == pygame.K_9 and people_inside == 9): for _ in range(40 * 2): text = myfont.render('Correct! 
{}!'.format(str(result)), False, (255, 0, 0)) screen.blit(text, (320 - text.get_width() // 2, 240 - text.get_height() // 2)) pygame.display.flip() return 1 else: for _ in range(40 * 2): text = myfont.render('Wrong!, It was {}'.format(str(result)), False, (255, 0, 0)) screen.blit(text, (320 - text.get_width() // 2, 240 - text.get_height() // 2)) pygame.display.flip() return 0 def random_if_condition(minmax, condition): while True: r = random.randint(*minmax) if condition(r): return r """ def play_game(difficulty): ingoing, outgoing = random.randint(0, 9), random.randint(0, 9) animate_all_people(ingoing, outgoing, difficulty) """ def play_match(rounds, speed, max_people): while True: text = smallfont.render("Count the people inside the house.", False, (255, 0, 0)) screen.blit(text, (380 - text.get_width() // 2, 140 - text.get_height() // 2)) text = smallfont.render("When no more people are moving", False, (255, 0, 0)) screen.blit(text, (380 - text.get_width() // 2, 240 - text.get_height() // 2)) text = smallfont.render("press the number on the keyboard.", False, (255, 0, 0)) screen.blit(text, (380 - text.get_width() // 2, 340 - text.get_height() // 2)) text = smallfont.render("Press any key to start playing.", False, (255, 0, 0)) screen.blit(text, (380 - text.get_width() // 2, 440 - text.get_height() // 2)) event = pygame.event.poll() if event.type == pygame.QUIT: return if event.type == pygame.KEYUP or event.type == pygame.KEYDOWN: break pygame.display.flip() points = 0 for _ in range(rounds): ingoing = random.randint(0, max_people) # Careful to avoid more outgoing than ingoing points += animate_all_people(ingoing , random_if_condition( (0, max_people), lambda r: r <= ingoing), speed) for _ in range(40 * 5): # 5 seconds text = myfont.render("You got {}/{} right".format(points, rounds), False, (255, 0, 0)) screen.blit(text, (320 - text.get_width() // 2, 140 - text.get_height() // 2)) pygame.display.flip() #animate_all_people(random.randint(0, 9) , random.randint(0, 
9), 30) if __name__ == "__main__": play_match(rounds = 3, speed = 15, max_people = 6) Answer: You're right - there is a lot of duplication. Also, some organization is needed. Organize! Before you do anything else, get everything into a function of some kind. All those statements at module scope, move them into a setup function, write yourself a main, and do the standard Python thing: if __name__ == '__main__': main() You can call play_match from inside main after you call setup. You might even put in a while: loop to play multiple matches. Animate Let's have a look at your in/out functions: def animate_person_going_inside_house(speed): running = True x = -200 while running: event = pygame.event.poll() if event.type == pygame.QUIT: running = False x += speed screen.fill((255, 255, 255)) # fill the screen screen.blit(person, (int(x), 340)) screen.blit(house, (200, 260)) if x > 300: return pygame.display.update() # Just do one thing, update/flip. clock.tick(40) # This call will regulate your FPS (to be 40 or less) def animate_person_going_outside_house(speed): running = True x = 300 while running: event = pygame.event.poll() if event.type == pygame.QUIT: running = False x += speed screen.fill((255, 255, 255)) # fill the screen screen.blit(person, (int(x), 340)) screen.blit(house, (200, 260)) if x > 900: return pygame.display.update() # Just do one thing, update/flip. clock.tick(40) # This call will regulate your FPS (to be 40 or less) The lines that are different are: x = -200 x = 300 if x > 300: if x > 900: It seems fairly obvious that these could be parameters: def animate_person(speed, start_x, target_x): x = start_x if x > target_x: Then you can rewrite the two functions as: def animate_person_going_inside_house(speed): animate_person(speed, -200, 300) def animate_person_going_outside_house(speed): animate_person(speed, 300, 900) Or, you could simply eliminate them and call the animate_person directly. 
Clarify You have a lot of "magic" numbers, like -200, 900, 300, 260, 340, 200, and (255,255,255). I don't know what they mean. Replace them with named constants: X_PERSON_OFF_LEFT_EDGE = -200 X_DOOR_OF_HOUSE = 300 Remember, you are writing code for the next maintainer, not for the compiler. You have this while loop that is unclear: while True: if random.choice( (0, 1) ): if ingoing_so_far < ingoing: people_inside += 1 animate_person_going_inside_house(difficulty) ingoing_so_far += 1 else: if outgoing_so_far < outgoing and people_inside > 0: # People can only exit if people are inside! people_inside -= 1 animate_person_going_outside_house(difficulty) outgoing_so_far += 1 if ingoing_so_far == ingoing and outgoing_so_far == outgoing: break When you see a break, you have to ask, could I add this to the main loop test? In this case, yes, you can: while ingoing + outgoing != 0: direction = random.choice( ('in',) * ingoing + ('out',) * outgoing ) if direction == 'in': ingoing -= 1 animate_person_going_inside_house(difficulty) people_inside += 1 else: outgoing -= 1 animate_person_going_outside_house(difficulty) people_inside -= 1 Note that I have changed the behavior here, from 50/50 to proportional to the # remaining. You could certainly change it back. The point is to change the while condition. Note also that this is where you would replace calls to separate inside/outside functions with a single function. You could create a container with two sets of parameters, and index by a random number (0,1) or random string ('in', 'out') to select which parameter set to use. Help Yourself Don't be afraid to write helper functions. In fact, if you write a comment, ask yourself if that comment should be a function name instead. If you write a paragraph of code, and then break for another paragraph, ask yourself if that should be a new function. Even if you only call it once, it might be a structural function rather than a reusable function. 
You have this code: if (event.key == pygame.K_0 and people_inside == 0) or \ (event.key == pygame.K_1 and people_inside == 1) or \ (event.key == pygame.K_2 and people_inside == 2) or \ (event.key == pygame.K_3 and people_inside == 3) or \ (event.key == pygame.K_4 and people_inside == 4) or \ (event.key == pygame.K_5 and people_inside == 5) or \ (event.key == pygame.K_6 and people_inside == 6) or \ (event.key == pygame.K_7 and people_inside == 7) or \ (event.key == pygame.K_8 and people_inside == 8) or \ (event.key == pygame.K_9 and people_inside == 9): That cries out to be a function. Or two functions. if is_digit_key(event.key) and digit_val(event.key) == people_inside: Advance! One thing you don't do is support multiple people moving at the same time. I don't know how important that is to you, but your code isn't structured to handle it. You would need a list of moving people, and their target locations. You could create a simple class for that: class MovingPerson: def __init__(start_x, target_x, speed): Then your animate function would loop over all the moving sprites, update them, then refresh the screen: for p in moving_people: p.update() pygame.display.update()
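The two suggested helpers are tiny. A sketch, relying on pygame's digit-key constants K_0..K_9 being consecutive integers (they follow ASCII, so K_0 == 48); plain stand-in values are used here so the snippet runs without pygame installed:

```python
# Stand-ins for pygame.K_0 and pygame.K_9 (consecutive, ASCII-valued constants).
K_0, K_9 = 48, 57

def is_digit_key(key):
    # True for any of the ten digit keys.
    return K_0 <= key <= K_9

def digit_val(key):
    # The digit the key represents, e.g. K_7 -> 7.
    return key - K_0
```

With these, the ten-way condition collapses to a single line: is_digit_key(event.key) and digit_val(event.key) == people_inside.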
{ "domain": "codereview.stackexchange", "id": 29601, "tags": "python, game, animation, pygame" }
Time Dilation and "existence" of time
Question: I came up with what might be considered a strange conclusion when thinking about time dilation, and more specifically the Hafele and Keating experiment from 1971. It was shown that time either went faster or slower depending on which direction the plane was traveling in. Since time was measured using an atomic clock where a second is measured from the impact of a particle between two plates, time went slower when the plates moved away from each other, thus creating a larger distance for the particle to travel. (Vice versa when the plane was traveling in the opposite direction.) Time dilation can also occur due to gravity, i.e. when in a stronger gravity field, time passes more slowly. But isn't this the same as the Hafele and Keating experiment? Imagine placing an atomic clock in a larger gravitational field compared to Earth's. Time would pass more slowly because of the gravitational impact on the particle pulling it in one direction, causing the distance traversed to increase and thus making a second pass more slowly. My conclusion is then that time does not exist and time dilation is represented as the change in position between two inertial frames (as explained by classical mechanics). Because there is no change in time itself, it is the distance that changes. Time is more like a measure of change itself, a derivative property, not a physical one. Does this make sense or have I missed something? Answer: Since time was measured using an atomic clock where a second is measured from the impact of a particle between two plates This is not a correct description of how an atomic clock works. In an atomic clock there are atoms which undergo a specific atomic transition and in that transition they emit or absorb radiation of a specific frequency. The frequency of that radiation defines the second. In particular there are no relevant particle impacts nor is it necessary for the atoms to move from one location to another within the clock.
when in a stronger gravity field, time passes more slowly. This is also incorrect. Gravitational time dilation depends on the gravitational potential, not the gravitational field strength. In the HK experiment you reference, the gravitational acceleration is essentially the same on an aircraft in flight or on the ground. What differs is the gravitational potential. The duration of some impact, even if it were relevant, depends on the acceleration and not the potential. So it cannot explain the observed behavior. My conclusion is then that time does not exist and time dilation is represented as the change in position between two inertial frames (as explained by classical mechanics) Unfortunately, your conclusion is founded on faulty premises. Atomic clocks do not behave as you have assumed, and gravitational time dilation does not work the way you describe.
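For scale, a back-of-the-envelope sketch (these numbers are illustrative, not figures from the answer): in the weak-field limit the fractional rate difference between two clocks separated by a small height h is roughly gh/c², i.e. it is the potential difference gh that matters, not g itself.

```python
# Illustrative numbers: Earth's surface gravity and a ~10 km cruise altitude.
g = 9.81          # m/s^2
c = 2.998e8       # m/s
h = 1.0e4         # m
rate_shift = g * h / c**2   # ~1e-12: the higher clock runs faster by ~1 part in 10^12
```

A part-in-10^12 shift accumulates to tens of nanoseconds over a day-long flight, which is the order of magnitude the HK cesium clocks could resolve.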
{ "domain": "physics.stackexchange", "id": 67633, "tags": "newtonian-mechanics, special-relativity, time" }
Electric energy from dipole moment
Question: Conventionally one defines the electric energy as $$ U = \frac{1}{2} \int \vec{E}(r') \cdot \vec{E}(r') d^3 x' $$ where $\vec{E}$ is the electric field. And from a textbook like Griffiths, we know that the electric field generated by a dipole is given as $$ \vec{E}_{dipole}(r) = \frac{1}{4\pi} \frac{1}{r^3}(3 (\vec{p} \cdot \hat{r}) \hat{r} - \vec{p} ) $$ I am trying to plug this in and obtain an explicit formula for the electric energy $U$. I think the result should be in some textbook or paper, but I couldn't find it. Do you know the formula? The purpose of this question is actually to compute $U$ from $\vec{E}$. My trial was $$ U = \frac{1}{2} \frac{1}{16 \pi^2} \int \frac{1}{r^6} \left( 3 (\vec{p}\cdot \hat{r})^2 - \vec{p}\cdot\vec{p} \right) r^2 dr\sin(\theta) d\theta d\varphi$$ $$ = - \frac{1}{32\pi r^3} \int_{0}^{\pi} \left( 3 (\vec{p} \cdot \hat{r})^2 - (\vec{p}\cdot \vec{p}) \right) \sin(\theta) d\theta $$ Answer: The electric field of a dipole can be written in spherical coordinates (using your unit system) as $$\vec{E} =\frac{1}{4\pi r^3}\left( 2p\cos \theta\ \hat{r} + p\sin \theta\ \hat{\theta}\right)$$ $$ E^2 =\frac{p^2}{16\pi^2 r^6}\left(1 + 3\cos^2 \theta \right)$$ Integrating over a spherical volume (as you propose) from an inner radius $r_1$ to an outer radius $r_2$ yields $$U = \frac{p^2}{12\pi}\left[\frac{1}{r^3}\right]^{r=r_1}_{r=r_2}.$$ As you can see, there is no problem integrating out to $r_2 =\infty$, but you get an infinite energy if you allow $r_1 =0$ (because the electric field is infinite when $r=0$). The expression for $U$ in terms of $E^2$ cannot be used at the position of point charges, or in this case at the position of an (assumed) point dipole.
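As a sanity check on the closed-form shell integral, it can be evaluated numerically with a crude midpoint rule. This is only a sketch: the dipole moment, the radii, and the grid sizes are arbitrary choices.

```python
import math

p, r1, r2 = 1.0, 1.0, 5.0   # arbitrary dipole moment and shell radii
nr, nt = 800, 200           # midpoint-rule grid (coarse but sufficient)

dr = (r2 - r1) / nr
dt = math.pi / nt
U = 0.0
for i in range(nr):
    r = r1 + (i + 0.5) * dr
    for j in range(nt):
        th = (j + 0.5) * dt
        E2 = p**2 / (16 * math.pi**2 * r**6) * (1 + 3 * math.cos(th)**2)
        # dV = r^2 sin(theta) dr dtheta dphi; the phi integral gives 2*pi
        U += 0.5 * E2 * r**2 * math.sin(th) * dr * dt * 2 * math.pi

U_exact = p**2 / (12 * math.pi) * (r1**-3 - r2**-3)
print(U, U_exact)  # agree to better than 0.1% on this grid
```

The angular integral of $(1+3\cos^2\theta)\sin\theta$ is $4$, which combined with the $2\pi$ azimuthal factor and the radial integral $\int r^{-4}dr$ reproduces the $p^2/12\pi$ prefactor.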
{ "domain": "physics.stackexchange", "id": 63599, "tags": "homework-and-exercises, electrostatics, electric-fields, potential-energy, dipole" }
Constraints on digital mp3/wav sound reproduction?
Question: A recent tv broadcast on http://www.cuny.tv/ discussed a new and successful business startup in new york city selling vinyl LP's. Sorry, I can't find that episode, but I think it's this company http://nypost.com/2017/06/01/millennial-vinyl-startup-cues-up-success/ (or one pretty much like it). Apparently, analog/vinyl has a "warmer" sound that digital/mp3/wav can't achieve. And maybe it's more an engineering question, but why not??? That is, given a high enough sampling rate and a lossless format, then regardless of the "harshness" of individual samples, I'd think the overall effect could accurately reproduce any "musical quality". Ultimately, a waveform's a waveform, and if you reproduce it, then you've reproduced all its "qualities". Or so I'd think. Maybe you'd need a zillion samples-per-second, and maybe you'd need lossless wav format rather than lossy mp3, but I'd think it should be possible to re-master a vinyl original (even when played through some analog/tube-based amplifier) to a digital format, and accurately reproduce every conceivable musical nuance. But the success of these vinyl companies seems to suggest otherwise. So what am I missing? Answer: In digital music, the data consists of sample values and time intervals between samples. While these time intervals should all be the same in a given song, in reality their value fluctuates causing timing distortions of the resulting sound wave. These distortions are called jitter or phase noise and sound harsh. It so happens that the human ear is amazingly hypersensitive to this type of distortions and an enthusiast listener can easily tell the difference between two different quality quartz crystals stabilizing the timing of the signal. This in itself is not a problem yet. The actual problem comes from the way the digital music distribution standard was set up. Initially the idea was to convert the digital signal from a CD to the analog format in the CD player. 
For this purpose a digital protocol was developed called I2S that properly supported both signal values and timing. However, this protocol was designed as a short-range type to be used only inside the CD player. Later people started making higher quality digital-to-analog converters (DACs) in separate enclosures with digital inputs, thus requiring a digital output from a CD player. Because I2S was a short-range protocol, a different protocol was used for digital outputs, called S/PDIF (Sony/Philips Digital Interface). It is the shortcomings of this protocol that have caused massive dissatisfaction with the digital standard and a resurrection of vinyl. The problem with S/PDIF is that it passes only the sample values, but does not pass the time interval data between samples. Superficially it seems trivial: all intervals are the same, why do we need to pass each one? However, things weren't so simple. With no timing information, we must use a different clock (quartz crystal) in the DAC. However, no two clocks are the same. The frequency difference would eventually break the playback by putting it out of sync with the transmission. For this reason, initially, instead of a high quality quartz resonator, the signal timing in DACs was derived from S/PDIF by a PLL (Phase Locked Loop) circuit. The timing precision of this circuit is limited by the time delay required for analyzing the signal. We don't want the music delay to be too long, and this limits the precision and increases the jitter to clearly audible levels. Massive dissatisfaction with S/PDIF, along with the emergence of computer music, has driven the development of a new protocol called USB Audio, specifically its third iteration called Async USB Audio. This protocol for the first time uses the quartz clock in the DAC and sends timing commands to the player. In theory this should eliminate the transmission jitter and finally, after 35 years of the digital disaster, resolve the timing issue. 
In reality this has not happened. The new protocol still has numerous shortcomings. For example, it is not error-correcting. If you use USB to copy a file to a disk, transmission errors happen all the time, but they are corrected by retransmission and the file always copies correctly. Not so in USB Audio; there are no retransmissions there. All transmission errors are passed to the DAC and reproduced as audible distortions. This is the main reason why a USB cable affects the sound quality. There are numerous other issues with USB, including poor quality power supplies, galvanic isolation from the player, etc. It takes a lot of effort, but it is possible to make Async USB Audio sound better than vinyl. I have accomplished this in my digital setup, which is now better than my high-end vinyl turntable. However, I can attest that this is not possible without specialized equipment and expertise. There are other efforts underway to resolve the digital fiasco: Using higher bitrates. Most people don't know that the main purpose of higher bitrates is to reduce jitter. Higher bitrates contain virtually no additional musical information; all they do is reduce the transmission jitter. DSD or Direct Stream Digital (SACD) also reduces the transmission jitter and is being resurrected now along with vinyl. Some units use I2S inputs and outputs, but this is uncommon. New network-based audio protocols show promise, but only slowly gain recognition. Integrated players do not have this timing problem (e.g. portable players or DACs that accept SD memory cards). Low-jitter variable frequency quartz resonators have been recently developed. Various aftermarket "reclockers" are sold to improve the timing of the digital signal. To summarize, the main digital problem is the signal timing between the player and DAC due to poorly designed digital interface protocols. Vinyl records do not have this problem. 
Note that most vinyl records are digitally mastered, but still sound warm and "analog", because the problem is not in the digital format. The problem is in the use of inferior transmission protocols in consumer equipment. The digital technology breaks the music into bits and then puts the bits back together. If you break a crystal vase into pieces and then glue them back together, it would never look the same no matter how hard you try.
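Setting aside the protocol debate, the underlying claim that sampling-clock jitter raises the noise floor is easy to illustrate with a toy simulation: reconstruct a sine from samples taken at slightly wrong instants and measure the resulting error power. The 1 ns RMS jitter figure below is purely illustrative.

```python
import math, random

fs = 48_000.0      # sample rate, Hz
f = 10_000.0       # test tone, Hz
sigma = 1e-9       # assumed 1 ns RMS sampling-clock jitter (illustrative)
random.seed(0)

n = 48_000
sig_power = err_power = 0.0
for k in range(n):
    t = k / fs                                # ideal sample instant
    t_jit = t + random.gauss(0.0, sigma)      # jittered sample instant
    ideal = math.sin(2 * math.pi * f * t)
    err_power += (math.sin(2 * math.pi * f * t_jit) - ideal) ** 2
    sig_power += ideal * ideal

snr_db = 10 * math.log10(sig_power / err_power)
print(f"noise floor from 1 ns jitter at 10 kHz: SNR ~ {snr_db:.1f} dB")
```

The result is close to the textbook bound SNR = -20 log10(2*pi*f*sigma), around 84 dB here: a single nanosecond of clock error already limits a 10 kHz tone to well below 16-bit resolution, which is why clock quality matters so much in these discussions.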
{ "domain": "physics.stackexchange", "id": 43045, "tags": "acoustics, vibrations" }
Node graph for multi-host ROS network
Question: Assuming a situation where nodes are distributed across multiple hosts, is there a way to see in a single view which nodes are running on which hosts? I imagine something like rqt_graph, with an option to show the ROS_IP or ROS_HOSTNAME value on each node. I found this page in the rocon tutorial, but it doesn't seem to provide what I'm looking for. It could be a useful debugging tool for a multi-host ROS network. Whether it's a single- or multi-master ROS network does not matter to me here. Originally posted by 130s on ROS Answers with karma: 10937 on 2016-03-04 Post score: 0 Answer: I'm not sure, but does the multimaster_fkie/node_manager show this info? Originally posted by gvdhoorn with karma: 86574 on 2016-03-05 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by 130s on 2016-03-25: Although I don't find exactly what I asked for in multimaster_fkie/node_manager, I guess the multimaster_fkie pkg might be the best practical solution in ROS1 for communication between ROS masters. (Cont'd) Comment by 130s on 2016-03-25: multimaster_fkie requires running synchronization nodes per ROS master (as in the tutorial) AFAIK, but I don't know any examples that dynamically discover masters.
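A possible lightweight alternative, sketched below, is to query the ROS master's XML-RPC API (getSystemState and lookupNode) directly and group nodes by the hostname embedded in each node's URI. The master URI and the "/node_mapper" caller id are assumptions; adapt them to your setup.

```python
# Sketch: ask the ROS master which nodes exist and on which host each runs.
# Uses only the standard ROS Master XML-RPC API (getSystemState, lookupNode).
from collections import defaultdict
from urllib.parse import urlparse
from xmlrpc.client import ServerProxy

def host_of(node_uri: str) -> str:
    """Hostname part of a node's XML-RPC URI, e.g. http://robot1:54321/ -> robot1."""
    return urlparse(node_uri).hostname or ""

def nodes_by_host(master_uri="http://localhost:11311"):
    master = ServerProxy(master_uri)
    _, _, state = master.getSystemState("/node_mapper")
    # state = [publishers, subscribers, services]; collect every node name
    names = {n for section in state for _, nodes in section for n in nodes}
    grouped = defaultdict(list)
    for name in sorted(names):
        code, _, uri = master.lookupNode("/node_mapper", name)
        if code == 1:
            grouped[host_of(uri)].append(name)
    return dict(grouped)

print(host_of("http://robot1:54321/"))  # -> robot1
```

Calling nodes_by_host() of course requires a reachable master (point it at your ROS_MASTER_URI); only the URI-parsing helper is exercised in the demo line above.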
{ "domain": "robotics.stackexchange", "id": 23997, "tags": "ros, rqt-graph" }
Why didn't the Event Horizon Telescope team mention Sagittarius A*?
Question: At the press conference this morning, the Event Horizon Telescope team didn't say much about Sagittarius A*, which was the target many of us have been waiting for. Is there any explanation anywhere for this omission? Answer: There was a mention of Sagittarius A* during the Q+A portion of the press conference; the team indicated that they hope to produce an image sometime in the future (although they were careful to make no promises, and they're not assuming they'll be successful). That said, I'm not wholly surprised that we ended up seeing M87, rather than Sgr A*, for a couple reasons which the team mentions in their first paper: As Glorfindel said, Sgr A*'s event horizon is much smaller, meaning matter orbiting the black hole has a shorter orbital period. This contributes to variability on the timescale of minutes. The observations of M87 took place over the course of a week - roughly the timescale over which that target varies, meaning the source should not change significantly over that time. Second - and this is the reason I've seen cited more often - Sgr A* lies in the center of our galaxy, and so thick clouds of gas and dust lie between it and us. That results in scattering, which is a problem. There are ways to mitigate this, of course, and the team has spent a long time on this, but it's simpler to just look at the black hole that doesn't have that problem in the first place. That's why M87's black hole is an attractive target. Neither of these are impossible hurdles to overcome, but they're certainly very real difficulties that can't be ignored.
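The minutes-versus-days variability argument follows from the light-crossing time of the horizon, which scales linearly with mass. A rough estimate, using the commonly quoted approximate masses:

```python
G = 6.674e-11      # gravitational constant, SI
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def crossing_time(mass_in_suns):
    """Light-crossing time of the Schwarzschild radius: a rough floor on
    how quickly the emitting region can change."""
    r_s = 2 * G * mass_in_suns * M_sun / c**2
    return r_s / c

t_sgr = crossing_time(4.0e6)   # Sgr A*, ~4 million solar masses
t_m87 = crossing_time(6.5e9)   # M87*, ~6.5 billion solar masses

print(f"Sgr A*: ~{t_sgr:.0f} s   M87*: ~{t_m87 / 3600:.0f} h")
```

Sgr A* comes out at tens of seconds (so the source varies over minutes), while M87*, roughly 1600 times more massive, comes out at many hours, consistent with a source that is stable over a week-long observing run.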
{ "domain": "astronomy.stackexchange", "id": 3636, "tags": "black-hole, supermassive-black-hole, sagittarius-a, event-horizon-telescope" }
Will this conversion to/from fixed point cause me to lose precision?
Question: I have a situation where I have a fixed point number that I want to convert to and from a floating point number. Specifically it is the SANE_Fixed type from the SANE API. Here is what the documentation says (I grabbed the bits I think are relevant): SANE_Fixed is used for variables that can take fixed point values in the range -32768 to 32767.9999 with a resolution of 1/65535. #define SANE_FIXED_SCALE_SHIFT 16 typedef SANE_Word SANE_Fixed; The macro SANE_FIXED_SCALE_SHIFT gives the location of the fixed binary point. This standard defines that value to be 16, which yields a resolution of 1/65536. SANE_Word: A word is encoded as 4 bytes (32 bits). The bytes are ordered from most-significant to least-significant byte (big-endian byte-order). So, over the wire I get 4 bytes which I convert into a .Net int like this (you can guess what GetByte and SendByte do): public int GetWord() { int value = 0; value += (GetByte() << 24); value += (GetByte() << 16); value += (GetByte() << 8); value += (GetByte() << 0); return value; } Then I want to convert that int into a floating point number (I went for decimal) like this: public decimal ToFloating(int source) { decimal value = source / ((decimal)(1 << 16)); return value; } And I will also need to go back the other way, so I would convert to fixed like this: public int ToFixed(decimal source) { decimal value = source * ((decimal)(1 << 16)); return (int)value; } And then convert to four bytes to send it like so: public void SendWord(int word) { SendByte((word >> 24) & 0xff); SendByte((word >> 16) & 0xff); SendByte((word >> 8) & 0xff); SendByte((word >> 0) & 0xff); } So, here is my question: This seems a pretty simplistic implementation to me and I read lots of stuff on the internet about creating custom fixed point classes and whatnot but, given the constraints of my scenario, is this a safe approach to take? Could I lose precision and is the maths correct? 
Answer: If the SANE_Fixed type has a constantly fixed precision of 1/65536, that means it always stores the fractional part of the number as a 16 bit (two-byte) unsigned integer (hence the SANE_FIXED_SCALE_SHIFT of 16 bits). Since the whole number portion of the value can range from -32768 to 32767, that means that the whole part of the number is represented with a 16 bit signed integer (making the type use a total of 4 bytes). The decimal type in .NET, while it is a floating point type, is far more precise than the traditional float type. It stores a 96 bit integer (plus an additional bit for the sign). It then stores the position of the decimal point as a 5-bit integer, which allows the decimal point to be located after any digit in the 96-bit integer value. As such, the larger the value gets, the lower the fractional precision becomes. In other words, the decimal type can store a value up to 79,228,162,514,264,337,593,543,950,335 (29 digits). If it does store a 29 digit number, however, it will be unable to store any fractional value with that. It will only store the whole part of the number. If, however, it is only storing a single digit value, the fractional part can be up to 28 digits long (e.g. 0.0000000000000000000000000001). Bearing all of that in mind, if the maximum whole number that the SANE_Fixed type can store is 32767 (a five digit number), when cast to a decimal type, that leaves 24 digits of precision (e.g. 1/10^24) in the fractional part of the number. That is still far more precise than the 1/65536 precision provided by the SANE_Fixed type. Therefore, you can be confident that the decimal type will accurately retain the precise value. Obviously, when casting from a decimal back into a SANE_Fixed type, you will lose precision, but since the value came from that type in the first place, your values are safe.
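The answer's conclusion can be checked directly. Below is a Python sketch of the same round trip, using Python's Decimal as a stand-in for the .NET decimal type (both carry 28+ significant digits); the raw/65536 scaling mirrors the C# code above.

```python
from decimal import Decimal

SCALE = 1 << 16  # SANE_FIXED_SCALE_SHIFT = 16

def to_floating(raw: int) -> Decimal:
    return Decimal(raw) / SCALE

def to_fixed(value: Decimal) -> int:
    return int(value * SCALE)

# 1/2**16 has a terminating decimal expansion (0.0000152587890625, 16 digits)
# and the whole part needs at most 5 digits, so every 16.16 value fits well
# within decimal's 28-digit precision: the round trip is exact.
for raw in (0, 1, -1, 12345, 65535, 2**31 - 1, -2**31):
    assert to_fixed(to_floating(raw)) == raw
print("round trip exact for all sampled raw values")
```

Note this exactness argument relies on a decimal (base-10) type; a binary double would also happen to be exact here, but a 32-bit float would not, since 32767.9999847... needs more than 24 bits of mantissa.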
{ "domain": "codereview.stackexchange", "id": 3367, "tags": "c#, .net" }
Momentum a good definition
Question: We know that the effects produced by a moving body depend both on the speed at which it is moving and on its mass: $$\mathbf{p} = m \mathbf{v}$$ Therefore it is useful, to evaluate this effect, to introduce the momentum vector $\mathbf{p}$. The kinetic energy (in general, for an object of mass $m$ moving with speed $u$) is $$\mathcal K=mu^2/2$$ How can one express well in words the difference between kinetic energy and momentum? Related questions that I don't like: Definition of force, kinetic energy and momentum Difference between momentum and kinetic energy Answer: I believe that there are good reasons why physics uses formulas to describe its findings. Thus, asking us to skip the formulas and still provide "good" definitions is difficult. I'm not able to do so, but I'll try to explain my formulas and use them only to clarify what I mean. On a high school level, momentum and energy are related by the concept of force: Momentum is given by the sum of the force over "small" time intervals $$ p = \sum_i F_i \cdot \Delta t_i $$ where I use $\Delta p = p - p_0$ and assume $p_0=0$. By dropping the sum and considering a single time interval we obtain $p = F \cdot \Delta t$. Thus, momentum is the ability of a body to exert force over a "short" time interval. In contrast, energy is related to work, which is given by the sum of the force over "small" distance intervals $$ E = \sum_i F_i \cdot \Delta s_i $$ Again, we drop the sum, $E = F \cdot \Delta s$. Thus, energy is the ability of a body to exert a force over a "small" distance. Since time and distance are rather different, the quantities momentum and energy are rather different. I believe that the bullet/rifle example described here is great. Nevertheless, here is my own example, utilising the concepts described above: Let's assume we want to drive a nail into a piece of wood. There are two equally valid perspectives: Time perspective: During the swing of the hammer we apply a force over a time $\Delta t$. 
Thus, the hammer accumulates momentum. Distance perspective: During the swing of the hammer we apply a force over a distance $\Delta s$. Thus, the hammer accumulates (kinetic) energy. Both perspectives are true. Therefore, the hammer acquires momentum and kinetic energy. These two concepts are used to answer different questions: The momentum is a conserved quantity during the collision with the nail. Hence, it is useful to describe the effect the nail experiences. The (kinetic) energy tells us how much work we have to put into the hammer in order to achieve this effect. After you edited your question, I have to add this paragraph to my answer. Please do not change the focus of your question. I understand that you are looking for simplified explanations and believe that this is the right way of teaching at high school level and during the first year of university. However, if one uses such simplifications one should ask oneself (a) how wrong it is, and (b) does it really help to understand the key concept. In my opinion, the simplified explanation "the momentum gives us the effect that we observe when the sphere impacts on a surface (for example if a car impacts against a wall we observe fractures and visual damage)" is too vague to be helpful. Nobody knows what "the effect that we observe" means.
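The consistency of the two perspectives can also be checked numerically for a constant force. The mass, force, and step size below are arbitrary illustrative values:

```python
m, F = 1.5, 6.0            # illustrative mass (kg) and constant force (N)
dt, steps = 1e-5, 100_000  # 1 second of motion in small steps

v = s = 0.0
for _ in range(steps):     # simple Euler integration of a = F / m
    v += (F / m) * dt
    s += v * dt

t = steps * dt
p_from_impulse = F * t     # force summed over time -> momentum
E_from_work = F * s        # force summed over distance -> kinetic energy

print(p_from_impulse, m * v)         # both ~6.0 kg m/s
print(E_from_work, 0.5 * m * v**2)   # both ~12.0 J
```

The force-times-time sum reproduces $mv$ and the force-times-distance sum reproduces $\frac12 mv^2$, which is exactly the point of the two "perspectives" above.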
{ "domain": "physics.stackexchange", "id": 64809, "tags": "classical-mechanics, energy, momentum, definition" }
Walking over a N branching tree
Question: The aim of this code, written in Python 3.6, was to write a function that will non-recursively walk over an N-branching tree structure. It works by using a list of indices that keeps track of visited nodes and by retracing prior steps when a tree branch ends. The function non_recurs visits nodes top down, left to right (as far as I can tell). non_recurs allows the caller to provide a function to the func argument which will be applied to each node. The end_only argument will determine if the supplied function will be applied to all nodes (end_only=False) or only nodes at the end of branches (end_only=True). First, I would like to know if this attempt is actually capable of walking over N-branching trees, and second, I would like to know what you think about my implementation. NOTE: There are two sections to my code below. The first is to generate N-branching trees and is not the focus of this code review, but it is required to generate my demonstrated output. EDIT I added IndexError to the except of the try block tree generation code from itertools import product # Code to create test trees for walking def assign(tree, idx, value): new_idx = idx[:] assignment_idx = new_idx.pop() ref = tree for i in new_idx: ref = ref[i] ref[assignment_idx] = value return tree def n_branch_tree(height, branches): idx = ([i] for i in range(branches)) tree = list(range(branches)) count = 0 node = 0 while count < height: for i in idx: # mutates tree assign(tree, list(i), list(range(node, node+branches))) node += branches count += 1 idx = product(range(branches), repeat=count) return tree tree walk code # Code to walk over tree def walk(tree, idx): """Return tree node at provided index args: tree: tree to index idx: index of desired node returns: node """ for i in idx: tree = tree[i] return tree def non_recurs(tree, func=print, branches=2, end_only=True, ): """Non-recursively walk n-branching tree args: tree: n-branching tree func: function that takes tree node as first argument branches: 
The number of branches each node has end_only: Default is True. When True will only apply func to end nodes. When False will apply func to all Nodes """ branches = branches - 1 # Because index's start at 0 idx = [0] node = None while True: # print(idx) try: node = walk(tree, idx) except (TypeError, IndexError): # Could not find node at index try: # Walk back up tree until a node has not # been fully explored while idx[-1] == branches: idx.pop() except IndexError: # Means all nodes in index have been fully explored break # Increase index if current index is under node branch limit if idx[-1] < branches: idx[-1] += 1 # Apply func to end node if end_only and node: func(node) node = None else: idx.append(0) # Apply func to all nodes if not end_only: func(node) if __name__ == '__main__': tree = n_branch_tree(height=3, branches=3) print(tree) value_store = [] non_recurs(tree, func=value_store.append, branches=3, end_only=True) print(value_store) Outputs [[[18, 19, 20], [21, 22, 23], [24, 25, 26]], [[27, 28, 29], [30, 31, 32], [33, 34, 35]], [[36, 37, 38], [39, 40, 41], [42, 43, 44]]] [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44] Answer: About your solution Your tree generation code is wrong. It doesn't return a tree, it returns a multidimensional list instead. There is a good, detailed article about the tree data structure: Everything you need to know about tree data structures. The main idea of a tree is the connection of nodes. Everything begins from the root - the single node at the first level, which contains references to all the nodes at the next level; those in turn hold references to the nodes at the level below, and so on. Thus, every node should have at minimum two fields: one for its value and one for the list of its children ("childs" in the code below). 
Edit after the comment [start]: Actually, the tree generation code's return value does resemble a tree, especially taking into account that Python lists and numbers are objects. It has a root object with 3 children, each child has three children too, and so on (when height=3, branches=3). At the last level it has objects with a number value. Thus, it complies with the first tree data structure requirement: the connection of nodes (from my own definition :)). But not all nodes have a value - only the last level's nodes do (the leaves, in tree terminology). So you still can't walk through the tree changing or printing every node's value, because some nodes don't have one: root[ 2-nd lvl[ 4-th lvl with values 3-rd lvl[18, 19, 20], [21, 22, 23], [24, 25, 26] ], [ [27, 28, 29], [30, 31, 32], [33, 34, 35] ], [ [36, 37, 38], [39, 40, 41], [42, 43, 44] ] ] Edit after the comment [end]: I didn't do a thorough investigation of your tree-walk code, but at a glance it uses the same wrong tree idea as the tree generation code and therefore can't work correctly. My partial solution I read your requirements and wrote my own solution that creates a tree, walks through it, and applies a passed function to each node. It can't process only end nodes, but this functionality is easy to add. 
class Tree(object): def __init__(self): self.value = None self.childs = None # Build tree recursively # All nodes doesn't have values, just child list # Values will be set by the passed function def grow_tree(self, height, child_num): if height < 2: return self.childs = [] for i in range(child_num): child = Tree() self.childs.append(child) child.grow_tree(height - 1, child_num) # Walk through tree iteratively def walk_tree(self, func): all_current_level_nodes = [self] while all_current_level_nodes: all_next_level_nodes = [] for node in all_current_level_nodes: func(node) if node.childs: all_next_level_nodes.extend(node.childs) all_current_level_nodes = all_next_level_nodes ### Recursive implementation ### # def walk_tree(self, func): # func(self) # ## if isinstance(self.childs, list): # if self.childs: # for child in self.childs: # child.walk_tree(func) Testing ### Functions for passing to the "walk_tree" method def set_values(node): node.value = set_values.cnt set_values.cnt += 1 def print_values(node): print(node.value) def print_id(node): print(id(node)) def square_values(node): node.value = node.value ** 2 tree = Tree() tree.grow_tree(2, 3) tree.walk_tree(print_id) # Function is an object. Add the "cnt" field to this object. # It will work as a static variable, preserve counter between calls. set_values.cnt = 1 tree.walk_tree(set_values) tree.walk_tree(print_values) tree.walk_tree(square_values) tree.walk_tree(print_values) Output # "print_id" function was applied on each node while walk_tree 139632471400688 139632471400800 139632471400856 139632471400912 # "set_values" # then "print_values" were applied 1 2 3 4 # "square_values" # then "print_values" were applied 1 4 9 16
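The "only end nodes" behaviour mentioned above as easy to add could look like the following standalone sketch (same value/childs node layout as the answer; the demo tree is made up):

```python
class Node:
    def __init__(self, value=None, childs=None):
        self.value = value
        self.childs = childs or []

def walk(root, func, end_only=False):
    """Iterative depth-first walk; with end_only=True, func is applied
    only to leaf nodes (nodes with an empty childs list)."""
    stack = [root]
    while stack:
        node = stack.pop()
        if not end_only or not node.childs:
            func(node)
        stack.extend(reversed(node.childs))  # keep left-to-right order

# tiny demo tree: root -> (a -> [c, d]), b
root = Node("root", [Node("a", [Node("c"), Node("d")]), Node("b")])

leaves = []
walk(root, lambda nd: leaves.append(nd.value), end_only=True)
print(leaves)  # -> ['c', 'd', 'b']
```

An explicit stack gives a depth-first order (closer to the question's original traversal) rather than the level-by-level order of the answer's walk_tree, but the end_only flag works the same way in either style.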
{ "domain": "codereview.stackexchange", "id": 33897, "tags": "python, tree, iteration" }
Generating prime numbers using Sieve of Eratosthenes with CUDA
Question: I'm learning CUDA and wrote a little program which generates prime numbers using the Sieve of Eratosthenes. (I know the limitations of CUDA, especially with memory sizes and limits, but this program is for educational purposes). Questions: Did I set up the configuration correctly? (did I set dimBlock and dimGrid correctly?) In my kernel I seem to set a maxRoot constant variable for each thread spawned. Is there a way to set that once on the GPU and then have it shared across all threads?? In the given code below, on line 23 - there's optimization #2 - sieve only odds. How can I apply that to my kernel? The way I'm currently doing it is spawning threads from 1 to sqrt(max) and assigning one to each candidate (much like looping, but incrementing i by 1 each time. In that line of code, we see that it starts at 3 and increments by 2 in the for loop.) Can you give me any other feedback on my code? What else am I doing wrong, or what else could be improved? Any kind of feedback and improvements is good. 
This is the HOST code for the sieve I attempted to implement in CUDA: void sieveOfEratosthenes(uint64_t max) { // There are no prime numbers smaller than 2 if (max >= 2) { max++; // Create an array of size n and initialize all elements as 0 char arr[max]; // could call calloc() and return a pointer memset(arr, 0, sizeof(char) * max); arr[0] = 1; arr[1] = 1; uint64_t maxRoot = sqrt(max); // optimization #1 uint64_t i; int j; // sieve multiples of two for (j = 2 * 2; j < max; j += 2) { arr[j] = 1; } /* for (i = 2; i <= maxRoot; i++) */ for (i = 3; i <= maxRoot; i += 2) // optimization #2 - sieve only odds { if (arr[i] == 0 ) { int j; for (j = i * i; j < max; j += i) { arr[j] = 1; } } } // display for (i = 0; i < max; i++) if (arr[i] == 0) printf("%d ", i); printf("\n"); } } This is my actual CUDA file: prime.cu #include <stdio.h> #include <helper_cuda.h> // checkCudaErrors() #include <cuda.h> #include <cuda_runtime_api.h> #include <cuda_runtime.h> typedef unsigned long long int uint64_t; /****************************************************************************** * kernel for finding prime numbers using the sieve of eratosthenes * - primes: an array of bools. initially all numbers are set to "0". * A "0" value means that the number at that index is prime. 
* - max: the max size of the primes array ******************************************************************************/ __global__ static void sieveOfEratosthenesCUDA(char *primes, uint64_t max) { // first thread 0 if (threadIdx.x == 0 && threadIdx.y == 0) { primes[0] = 1; // value of 1 means the number is NOT prime primes[1] = 1; // numbers "0" and "1" are not prime numbers // sieve multiples of two for (int j = 2 * 2; j < max; j += 2) { primes[j] = 1; } } else { int index = blockIdx.x * blockDim.x + threadIdx.x; const uint64_t maxRoot = sqrt((double)max); // make sure index won't go out of bounds, also don't execute it // on index 1 if (index < maxRoot && primes[index] == 0 && index > 1 ) { // mark off the composite numbers for (int j = index * index; j < max; j += index) { primes[j] = 1; } } } } /****************************************************************************** * checkDevice() ******************************************************************************/ __host__ int checkDevice() { // query the Device and decide on the block size int devID = 0; // the default device ID cudaError_t error; cudaDeviceProp deviceProp; error = cudaGetDevice(&devID); if (error != cudaSuccess) { printf("cudaGetDevice returned error code %d, line(%d)\n", error, __LINE__); exit(EXIT_FAILURE); } error = cudaGetDeviceProperties(&deviceProp, devID); if (deviceProp.computeMode == cudaComputeModeProhibited || error != cudaSuccess) { printf("CUDA device ComputeMode is prohibited or failed to getDeviceProperties\n"); return EXIT_FAILURE; } // Use a larger block size for Fermi and above (see compute capability) return (deviceProp.major < 2) ? 
16 : 32; } /****************************************************************************** * genPrimesOnDevice * - inputs: limit - the largest prime that should be computed * primes - an array of size [limit], initialized to 0 ******************************************************************************/ __host__ void genPrimesOnDevice(char* primes, uint64_t max) { int blockSize = checkDevice(); if (blockSize == EXIT_FAILURE) return; char* d_Primes = NULL; int sizePrimes = sizeof(char) * max; uint64_t maxRoot = sqrt(max); // allocate the primes on the device and set them to 0 checkCudaErrors(cudaMalloc(&d_Primes, sizePrimes)); checkCudaErrors(cudaMemset(d_Primes, 0, sizePrimes)); // make sure that there are no errors... checkCudaErrors(cudaPeekAtLastError()); // setup the execution configuration dim3 dimBlock(maxRoot, 1, 1); dim3 dimGrid(1); //////// debug #ifdef DEBUG printf("dimBlock(%d, %d, %d)\n", dimBlock.x, dimBlock.y, dimBlock.z); printf("dimGrid(%d, %d, %d)\n", dimGrid.x, dimGrid.y, dimGrid.z); #endif // call the kernel sieveOfEratosthenesCUDA<<<dimGrid, dimBlock>>>(d_Primes, max); // check for kernel errors checkCudaErrors(cudaPeekAtLastError()); checkCudaErrors(cudaDeviceSynchronize()); // copy the results back checkCudaErrors(cudaMemcpy(primes, d_Primes, sizePrimes, cudaMemcpyDeviceToHost)); // no memory leaks checkCudaErrors(cudaFree(d_Primes)); } /****************************************************************************** * ******************************************************************************/ int main() { uint64_t maxPrime = 102; // find all primes from 0-101 char* primes = (char*) malloc(maxPrime); memset(primes, 0, maxPrime); // initialize all elements to 0 genPrimesOnDevice(primes, maxPrime); // display the results int i; for (i = 0; i < maxPrime; i++) if (primes[i] == 0) printf("%i ", i); printf("\n"); free(primes); return 0; } Answer: you shouldn't use the same incrementation variable twice in the same scope, it can get messy and 
cause weird bugs, and you do it with two different incrementing variables, i and j. uint64_t maxRoot = sqrt(max); // optimization #1 uint64_t i; int j; // sieve multiples of two for (j = 2 * 2; j < max; j += 2) { arr[j] = 1; } /* for (i = 2; i <= maxRoot; i++) */ for (i = 3; i <= maxRoot; i += 2) // optimization #2 - sieve only odds { if (arr[i] == 0 ) { int j; for (j = i * i; j < max; j += i) { arr[j] = 1; } } } // display for (i = 0; i < max; i++) if (arr[i] == 0) printf("%d ", i); printf("\n"); You actually declare j twice here. If you were using the resulting i and j values in the next loop it would be a totally different story. The other thing you are doing is declaring the i variable as a uint64_t while j is a plain old signed int. I think you should stay consistent and declare the j variable as a uint64_t as well. I think you might also come up against some issues with your array of chars: they are quite a bit smaller than your uint64_t variables, and so when you start getting the bigger primes it will error out because it can't hold those values, not because the application ran out of memory. I think you should change it to something bigger or at least an unsigned char. I know a char is basically an integer, so why not just declare the array a uint? Your display code could use some curly braces so that it is clear what is supposed to be going on. Imagine writing tons of lines of code and trying to read it when you think you may have forgotten to include something in an if or for block and the code won't run correctly... Just use curly braces. // display for (i = 0; i < max; i++) { if (arr[i] == 0) { printf("%d ", i); } } printf("\n");
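On the asker's question #3 (sieving only odds on the GPU), one common trick is to map each index to an odd candidate via candidate = 2*index + 3 instead of skipping indices. Below is a host-side Python sketch of that mapping; in a kernel, the loop over index would simply become blockIdx.x * blockDim.x + threadIdx.x:

```python
import math

def sieve_odds(limit):
    """Sieve of Eratosthenes: cross off evens in one pass, then iterate only
    odd candidates 3, 5, 7, ... (the question's optimization #2)."""
    if limit < 2:
        return []
    flags = [0] * limit
    flags[0] = flags[1] = 1
    for j in range(4, limit, 2):          # all even composites in one pass
        flags[j] = 1
    max_root = math.isqrt(limit - 1)
    # each 'index' maps to an odd candidate; on the GPU this loop is the grid
    for index in range((max_root - 1) // 2):
        candidate = 2 * index + 3         # 3, 5, 7, ... up to sqrt(limit)
        if flags[candidate] == 0:
            for j in range(candidate * candidate, limit, candidate):
                flags[j] = 1
    return [i for i, f in enumerate(flags) if f == 0]

print(sieve_odds(102))  # primes below 102
```

Note that a parallel version of this is no longer exactly the classic sieve: each thread crosses off multiples of its candidate whether or not the candidate is itself prime unless threads are synchronized, which is a known and acceptable (just slightly wasteful) simplification for a first CUDA port.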
{ "domain": "codereview.stackexchange", "id": 10512, "tags": "beginner, c, primes, sieve-of-eratosthenes, cuda" }
Segment Image of Small Elements on Neutral Background
Question: I have an 8-bit gray-level image where value = 122 means no information, value = 0 means highly negative, and value = 255 means highly positive. I want to connect as many positive-information pixels with each other as possible to form as large an area as possible (and not let accidental negative values affect it), and get the contours (which I will then filter based on the number of positive dots and the total area of each contour). The ideal contours are the red lines I have drawn onto the image above, and the full image is below. What is the best strategy to go about doing this? The third image is a zoomed-in view showing what the pixels actually look like after filtering/processing. Answer: The idea I have is as follows: treat the data as a density measure. Values > 122 are positive density, values < 122 negative. Cluster them by threshold. This is how it goes: Scale the image into the [0, 1] range. Shift the values by 122 / 255 so neutral values become zero. Convolve with a Gaussian kernel. Apply a threshold. Clean small artifacts. Do connected components. Draw the contour of each connected component. I will show steps 1-4. The input image. The surface after convolution (hills and valleys). The image after thresholding (basically regions that share the same density baseline). What I'd do next to finish this: Tweak the parameters. Clean small artifacts using morphological operations. Draw the contours based on the connected components. The MATLAB code: clear(); close('all'); zeroLvl = 122 / 255; thrLvl = 0.01; mI = imread('https://i.stack.imgur.com/QMTb5.png'); mI = mI(:, :, 1); mI = im2double(mI); figure(); imshow(mI); mK = fspecial('gaussian', [25, 25], 4.5); mB = mI - zeroLvl; mP = imfilter(mB, mK, 'replicate'); figure(); surf(mP, 'EdgeColor', 'none'); mT = mP > thrLvl; figure(); imshow(mT); sCC = bwconncomp(mT); figure(); contour(mP);
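The same density-based pipeline can be sketched in Python with SciPy — a rough equivalent of the MATLAB steps above (scale, shift by the neutral level, Gaussian smooth, threshold, connected components). The parameter values mirror the MATLAB script; the toy image here is made up for the demo:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def segment_density(img_u8, zero_lvl=122 / 255, sigma=4.5, thr=0.01):
    """Threshold-cluster a 'density' image: 122 = neutral, >122 positive."""
    img = img_u8.astype(float) / 255.0           # scale into [0, 1]
    shifted = img - zero_lvl                     # neutral pixels become ~0
    smoothed = gaussian_filter(shifted, sigma)   # the hills-and-valleys surface
    mask = smoothed > thr                        # keep positive-density regions
    labels, n_components = label(mask)           # connected components
    return mask, labels, n_components

# toy example: one bright blob on a neutral background
img = np.full((64, 64), 122, dtype=np.uint8)
img[20:40, 20:40] = 255
mask, labels, n = segment_density(img)
print(n)  # number of positive regions found
```

From `labels` you could then measure each region's area and discard small ones, which corresponds to the "clean small artifacts" step.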
{ "domain": "dsp.stackexchange", "id": 12143, "tags": "image-processing, image-segmentation, edge-detection, image-analysis, connected-components" }
Correlation between speed, distance, mass, etc between a planet and a star (or moon and planet)?
Question: While playing around with some little scripts, generating solar systems, I started to wonder about the correlation between the different objects. Up until now, distance, mass, size and speed are randomly generated. Are there any 'common rules' on how some of these objects correlate? Are planets close to a star always smaller and faster, and planets far away usually larger and slower? Is the rotation speed affected? And how about the correlations between moons and their planet? At the end of the day, I'm trying to make the generation a bit more accurate / realistic. Thanks in advance! Sorry for the English, it's a second language for me. Answer: I spot two pairs of related quantities in your list. Speed (I assume you mean orbital speed, not how fast the planet is spinning about its own axis) and distance. The orbital period and the semi-major axis of the orbit are related by Kepler's third law. For a circular orbit, the semi-major axis is the radius (distance) and the period is trivially related to the speed ($v=\frac{2 \pi a}{P}$, where $P$ is the period and $a$ is the semi-major axis). For elliptical orbits the speed changes over the course of an orbit, so the period instead tells you something about the average speed of the planet. Planets with large semi-major axes might move faster than planets with smaller semi-major axes at a particular instant (depending on the eccentricity of the orbits), but averaged over an orbit larger $a$ always means lower average $v$. Mass and size are also somewhat related, but this is something of a complex topic. More mass means stronger self-gravitation, which tends to compress the size of the planet, but it also means more matter, which tends to grow the size. A planet of a given mass will have a size such that the inward pull of gravity is balanced by the outward pressure supplied by the material making up the planet. For a gas giant/star this is called hydrostatic equilibrium.
I'm not sure if there's a similar term for a solid/molten planet but things are complicated somewhat by these condensed phases of matter. If you're trying to come up with a simple model, try picking a mass at random, and look up typical average densities for solid and gas planets. Pick a type of planet then use the average density and the mass to compute the radius. There is no strong evidence that supports smaller planets typically being closer to their stars than larger planets (which is the case in our solar system); many stars have been seen with gas giants orbiting as close or closer to their star as Mercury is to the Sun. It is generally believed that these gas giants formed further out and migrated inward. The rotation speed of a planet (about its own axis) is initially fixed by the total angular momentum of the material that collapsed to form the planet. Over time, however, tidal locking can influence the spin of two orbiting bodies. This is why Earth's Moon has period of revolution equal to its orbital period, for instance. A planet/moon that gets too close to another body with strong tidal forces can be ripped apart. There can also be favoured/disfavoured distances for orbits due to orbital resonance, or a planet can be entirely ejected from its orbit. The interaction of a planet and its moon(s) is essentially the same as a star and planet system, unless you expect effects particular to a star to be important (one example would be radiation pressure). In the end, to get a more realistic simulation going, I'd worry less about initial conditions and more about making sure your method is working accurately and accounting for not only gravitation between the star and the planets but also gravitation between planets and tidal effects as well. If you achieve that, instabilities will make sure that any "unrealistic" scenario will decay into a stable configuration over time (potentially ejecting one or several planets from the system in the process).
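For the generator, Kepler's third law gives a concrete way to derive the period (and hence the average speed) from the randomly chosen semi-major axis, instead of rolling speed independently. A minimal sketch for circular orbits in SI units — the star mass and distance used here are just example values:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def orbital_period(a, m_star=M_SUN):
    """Kepler's third law: P = 2*pi*sqrt(a^3 / (G*M))."""
    return 2 * math.pi * math.sqrt(a**3 / (G * m_star))

def orbital_speed(a, m_star=M_SUN):
    """Circular-orbit speed: v = 2*pi*a / P."""
    return 2 * math.pi * a / orbital_period(a, m_star)

# Sanity check with an Earth-like orbit: period ~ 1 year, speed ~ 30 km/s
p_years = orbital_period(AU) / (365.25 * 24 * 3600)
v_kms = orbital_speed(AU) / 1000
print(f"P = {p_years:.2f} yr, v = {v_kms:.1f} km/s")
```

Note how doubling the semi-major axis increases the period by $2^{3/2}$ and lowers the average speed, exactly the "larger $a$ means lower average $v$" rule from the answer.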
{ "domain": "physics.stackexchange", "id": 8281, "tags": "universe, planets, moon, stars" }
Java Battleship game
Question: I have been given a problem statement to create a Battleship game in java. My working code (Spring Boot+Web) is placed here along with the problem statement. https://github.com/ankidaemon/BattleShip This question is majorly focused on design, please help me figure out, how can I make it decoupled and apply suitable design patterns. StartGame.java - getting called from controller @Component public class StartGame { private static final Logger logger = LoggerFactory.getLogger(StartGame.class); public String init(File inputFile) throws FileNotFoundException, InputException { // TODO Auto-generated method stub ArrayList<BattleShips> p1s = new ArrayList<BattleShips>(); ArrayList<BattleShips> p2s = new ArrayList<BattleShips>(); int areaWidth = 0; int areahight = 0; ArrayList<Coordinate> player1missiles = null; ArrayList<Coordinate> player2missiles = null; try{ Scanner sc = new Scanner(inputFile); areaWidth = sc.nextInt(); if(areaWidth>9 || areaWidth<1){ raiseException("Supplied area width is invalid.",sc); } areahight = sc.next().toUpperCase().charAt(0) - 64; if(areahight>25 || areahight<0){ raiseException("Supplied area height is invalid.",sc); } sc.nextLine(); int noOfships = sc.nextInt(); if(noOfships>areahight*areaWidth || noOfships<1){ raiseException("Supplied no of ships is invalid.",sc); } sc.nextLine(); for (int j = 0; j < noOfships; j++) { char typeOfShip = sc.next().toUpperCase().charAt(0); if(typeOfShip!='P' && typeOfShip!='Q'){ raiseException("Supplied type of ship is invalid.",sc); } int shipWidth = sc.nextInt(); if(shipWidth>areaWidth || shipWidth<0){ raiseException("Supplied ship width is invalid.",sc); } int shiphight = sc.nextInt(); if(shiphight>areahight || shiphight<0){ raiseException("Supplied ship height is invalid.",sc); } BattleShips ship; for (int i = 0; i <= 1; i++) { char[] locCharArr = sc.next().toUpperCase().toCharArray(); int[] loc = new int[2]; loc[0] = locCharArr[0] - 65; loc[1] = locCharArr[1] - 49; if(loc[0]>areahight || 
loc[0]<0 || loc[1]>areaWidth || loc[1]<0){ raiseException("Supplied ship location is invalid.",sc); } ship = new BattleShips(shipWidth, shiphight, typeOfShip, loc); if (i % 2 == 0) p1s.add(ship); else p2s.add(ship); } sc.nextLine(); } player1missiles = returnMissileCoordinates(sc.nextLine()); player2missiles = returnMissileCoordinates(sc.nextLine()); sc.close(); }catch(InputMismatchException e){ throw new InputException("Invalid Input supplied.",ErrorCode.INVALIDINPUT); } BattleArea player1 = new BattleArea("player1", areaWidth, areahight, p1s); BattleArea player2 = new BattleArea("player2", areaWidth, areahight, p2s); player1.placeShips(); player2.placeShips(); while (!player1.isLost() && !player2.isLost()) { for (int i = 0; i < player1missiles.size();) { Coordinate c = player1missiles.get(i); while (player1.fireMissile(c, player2)) { player1missiles.remove(i); if (i < player1missiles.size()) { c = player1missiles.get(i); } else break; } if (player1missiles.size() > 0) { player1missiles.remove(i); } break; } for (int j = 0; j < player2missiles.size();) { Coordinate c = player2missiles.get(j); while (player2.fireMissile(c, player1)) { player2missiles.remove(j); if (j < player2missiles.size()) { c = player2missiles.get(j); } else break; } if (player2missiles.size() > 0) { player2missiles.remove(j); } break; } } if (player1.isLost()) { logger.info("-------------------------"); logger.info("Player 2 has Won the Game"); logger.info("-------------------------"); return "Player 2 has Won the Game"; } else { logger.info("-------------------------"); logger.info("Player 1 has Won the Game"); logger.info("-------------------------"); return "Player 1 has Won the Game"; } } private static ArrayList<Coordinate> returnMissileCoordinates(String nextLine) { // TODO Auto-generated method stub ArrayList<Coordinate> tmp = new ArrayList<Coordinate>(); String[] arr = nextLine.split("\\ "); Coordinate tmpC; for (String s : arr) { char[] charArr = s.toCharArray(); tmpC = new 
Coordinate(charArr[1] - 49, charArr[0] - 65); tmp.add(tmpC); } return tmp; } private void raiseException(String message, Scanner sc) throws InputException { sc.close(); throw new InputException(message, ErrorCode.INVALIDINPUT); } } BattleArea.java public class BattleArea { private static final Logger logger = LoggerFactory.getLogger(BattleArea.class); private String belongsTo; private int width,height; private ArrayList<BattleShips> battleShips; private Set<Coordinate> occupied=new TreeSet<Coordinate>(); private int[][] board=null; private boolean lost=false; public BattleArea(String belongsTo, int width, int height, ArrayList<BattleShips> battleShips) { super(); this.belongsTo = belongsTo; this.width = width; this.height = height; this.battleShips = battleShips; this.board=new int[this.width][this.height]; } public void placeShips(){ for(BattleShips ship:this.battleShips){ int x=ship.getLocation()[1]; int y=ship.getLocation()[0]; if(ship.getWidth()+x>this.width || ship.getHeight()+y>this.height){ logger.error("Coordinate x-"+x+" y-"+y+" for "+this.belongsTo+" is not avilable."); throw new ProhibitedException("Ship cannot be placed in this location.",ErrorCode.OUTOFBATTLEAREA); }else{ Coordinate c=new Coordinate(x, y); if(occupied.contains(c)){ logger.error("Coordinate x-"+c.getX()+" y-"+c.getY()+" for "+this.belongsTo+" is already occupied."); throw new ProhibitedException("Ship cann't be placed in this location.",ErrorCode.ALREADYOCCUPIED); }else{ Coordinate tempC; for(int i=x;i<ship.getWidth()+x;i++){ for(int j=y;j<ship.getHeight()+y;j++){ logger.debug("Placing at x-"+i+" y-"+j+" for "+this.belongsTo); tempC=new Coordinate(i, j); occupied.add(tempC); if(ship.getTypeOfShip()=='P'){ board[i][j]=1; }else if(ship.getTypeOfShip()=='Q'){ board[i][j]=2; } } } } } } } public boolean fireMissile(Coordinate c, BattleArea enemyBattleArea){ int x=c.getX(); int y=c.getY(); logger.info("Firing at "+enemyBattleArea.belongsTo+" x-"+x+" y-"+y+" :"); 
if(enemyBattleArea.board[x][y]!=0){ if(enemyBattleArea.board[x][y]==-1){ logger.debug("Already blasted!"); return false; } else if(enemyBattleArea.board[x][y]==1){ Coordinate temp=new Coordinate(x,y); enemyBattleArea.occupied.remove(temp); enemyBattleArea.board[x][y]=-1; if(enemyBattleArea.occupied.size()==0){ enemyBattleArea.setLost(true); } logger.debug("Suucessfully blasted!!"); return true; }else{ enemyBattleArea.board[x][y]=enemyBattleArea.board[x][y]-1; logger.debug("Half life left!!"); return true; } }else{ logger.debug("Missed"); return false; } } public boolean isLost() { return lost; } public void setLost(boolean lost) { this.lost = lost; } } BattleShips.java public class BattleShips { private int width,height; private char typeOfShip; private int[] location; public BattleShips(int width, int height, char typeOfShip, int[] loc) { super(); this.width = width; this.height = height; this.typeOfShip = typeOfShip; this.location = loc; } public int getWidth() { return width; } public int getHeight() { return height; } public char getTypeOfShip() { return typeOfShip; } public int[] getLocation() { return location; } } Coordinate.java public class Coordinate implements Comparable<Coordinate> { private int x,y; public Coordinate(int x, int y) { super(); this.x = x; this.y = y; } @Override public String toString() { return "Coordinate [x=" + x + ", y=" + y + "]"; } @Override public int compareTo(Coordinate o) { // TODO Auto-generated method stub if(this.x==o.x && this.y==o.y) return 0; else if(this.x<o.x && this.y<o.y) return -1; else return 1; } public int getX() { return x; } public int getY() { return y; } } Sample Input 5 E 2 Q 1 1 A1 B2 P 2 1 D4 C3 A1 B2 B2 B3 A1 B2 B3 A1 D1 E1 D4 D4 D5 D5 Rules 1. Player1 will fire first. Each player will get another chance till ( hit == successful ). 2. Battleships will be placed horizontally. 3. Type-Q ship requires 2 missiles hit to get destroyed. 4. Type-P ship requires 1 missile hit to get destroyed. 
Input First line of the input contains dimensions of battle area having width and height separated by space. Second line will have number (B) of battleships each player has. Then in the next line battleship type, dimensions (width and height) & positions (Y coordinate and X coordinate) for Player-1 and then for Player-2 will be given separated by space. And then in the next line Player-1’s sequence (separated by space) of missiles target location coordinates (Y and X) will be given and then for sequence for Player-2. Constraints: 1 <= Width of Battle area (M) <= 9 A <= Height of Battle area (N) <= Z 1 <= Number of battleships <= M * N Type of ship = {‘P’, ‘Q’} 1 <= Width of battleship <= M A <= Height of battleship <= N 1 <= X coordinate of ship <= M A <= Y coordinate of ship <= N Answer: OK, let's get hands on. The class name StartGame is not helpful; rename it to a more fitting name, for example BattleShipGame, and start the game from your controller instead: BattleShipGame game = new BattleShipGame(); game.start(); The init method is far too big, and it does not really init anything; it does many other things as well... so let's break that down a bit: init should return a boolean (or a Result) that indicates that init was successful. init looks like it's a delegate method, which means there should be very little logic inside it; instead it is useful to put most of the work into other methods. init should only initialize things and do nothing else. Use Player objects...
move the game logic out of the method. It could then look like this: private Player playerOne; private Player playerTwo; public boolean init(){ playerOne = new Player("player1"); playerTwo = new Player("player2"); GameSetup setup = readFile(inputFile); ArrayList<BattleShips> p1bs = setup.getFirstBattleShips(); ArrayList<BattleShips> p2bs = setup.getSecondBattleShips(); playerOne.setBattleShips(p1bs); playerTwo.setBattleShips(p2bs); playerOne.setMissiles(setup.getFirstMissileCoordinates()); playerTwo.setMissiles(setup.getSecondMissileCoordinates()); playerOne.setBoard(new BattleShipBoard(setup.getDimension())); playerTwo.setBoard(new BattleShipBoard(setup.getDimension())); playerOne.placeShips(); playerTwo.placeShips(); return true; } NOTE: the init method could be shortened far more, but I think this points out well what init should really do... As mentioned above, you have moved the game logic out of your init method and put it in the playGame() method. public Result playGame(){ Result result = new Result(); Score score = new Score(); while (!playerOne.isLost() && !playerTwo.isLost()) { for (int i = 0; i < playerOne.getMissiles().size();) { ... } } result.setWinner(playerOne); result.setScore(score); return result; } The BattleShipGame would now start in this manner: public void start(){ init(); Result result = playGame(); ... //do whatever you want with your result - for example print it to the console } For your BattleShips there are some more issues which can be talked about. I think it was a very good idea to use a class Coordinate, which looks good at first glance. But you don't use it consistently. Think about how it would be if you used Coordinate for your ship instead of int[]; that would make your code easier to read and the math would be much easier, too. And don't use a char for your ship type, use an enum instead. But let's be honest: you don't have a 'position and width and height', what you really have is a rectangle, so use a rectangle!
public class BattleShips { private ShipType shipType; private Rectangle bounds; private int lifePoints; public BattleShips(ShipType typeOfShip, Coordinate pos) { super(); shipType = typeOfShip; bounds = new Rectangle(shipType.getDimension(), pos); lifePoints = shipType.getLifePoints(); } public Rectangle getBounds() { return bounds; } ... } The dimension of the Rectangle (width/height) and the amount of life points can be determined by the ShipType: public enum ShipType { DESTROYER(2,4,2), SUBMARINE(1,3,1), ...; //don't use shiptype P or shiptype Q private final Dimension dimension; final int lifePoints; ShipType(int w, int h, int life){ dimension = new Dimension(w,h); lifePoints = life; } public Dimension getDimension(){ return dimension; } public int getLifePoints(){ return lifePoints; } } The BattleArea is now far easier to use; think about how simple placeShips can be now: public class BattleArea { private Player owner; private Rectangle boardBounds; private List<BattleShips> battleShips; private List<Coordinate> board; public BattleArea(Player owner, Rectangle bounds, List<BattleShips> battleShips) { super(); this.owner = owner; this.boardBounds = bounds; this.battleShips = battleShips; board = createBoard(); } public void placeShips(){ List<BattleShips> placedShips = new ArrayList<>(); for(BattleShips ship:this.battleShips){ Rectangle shipBounds = ship.getBounds(); if(!boardBounds.contains(shipBounds)){ throw new ProhibitedException( "Ship cannot be placed in this location.",ErrorCode.OUTOFBATTLEAREA); } for (BattleShips placedShip: placedShips){ if (shipBounds.intersects(placedShip.getBounds())){ throw new ProhibitedException( "Ship can't be placed in this location.",ErrorCode.ALREADYOCCUPIED); } } placedShips.add(ship); } } public boolean fireMissile(Coordinate c, BattleArea enemyBattleArea){ BattleShips shipAt = enemyBattleArea.getShipAt(c); if(shipAt == null){ return false; }else{ handleDamage(shipAt, enemyBattleArea); return true; } } private void
handleDamage(BattleShips opponent, BattleArea area){ int lifePointsLeft = opponent.getLifePoints() - 1; //hardcoded damage (that's bad) if(lifePointsLeft > 0){ //Log damage done }else{ //log destroyed area.removeBattleShip(opponent); } } } All the code above has not been compiled, so there may be some spelling errors, and a lot of methods are not implemented yet (like Rectangle.contains() and others). Summary: let's look at what we have now. You can change the ship types quite easily without modifying any code (you simply have to add another ship type in ShipType). You have reduced the complexity of your code considerably; you don't have dangerous calculations. You have separated concerns: the objects now do what they are supposed to do. You could easily change your code for another player (a three-player game). You could test your code now.
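The Rectangle.contains()/intersects() helpers the answer leaves unimplemented boil down to simple interval comparisons on each axis. A language-neutral sketch in Python (class and field names chosen here for illustration, not taken from the answer):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int  # left column
    y: int  # top row
    w: int  # width in cells
    h: int  # height in cells

    def contains(self, other: "Rect") -> bool:
        """True if `other` lies entirely inside this rectangle."""
        return (self.x <= other.x and self.y <= other.y
                and other.x + other.w <= self.x + self.w
                and other.y + other.h <= self.y + self.h)

    def intersects(self, other: "Rect") -> bool:
        """True if the two rectangles share at least one cell."""
        return (self.x < other.x + other.w and other.x < self.x + self.w
                and self.y < other.y + other.h and other.y < self.y + self.h)

board = Rect(0, 0, 5, 5)
ship_a = Rect(0, 0, 1, 1)  # 1x1 ship at A1
ship_b = Rect(3, 3, 2, 1)  # 2x1 ship at D4
print(board.contains(ship_a), ship_a.intersects(ship_b))  # True False
```

Placement validation is then exactly the two checks from the refactored placeShips: the board must contain the ship's bounds, and the bounds must not intersect any already-placed ship.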
{ "domain": "codereview.stackexchange", "id": 29735, "tags": "java, battleship" }
Impact of STFT window function and FFT length on computation time
Question: I have been doing a study, part of which includes a comparison of computation time vs window type and length (among some other contributions to the computation time; I speak in terms of relative computation time). I found that among the three windows I tested (Hann, Hamming and Blackman), the Blackman window had the slowest computation time (albeit only by ~20ms), with the Hann window outperforming the rest. This was tested over 100 runs for each. My question is, why is this the case? Is there something specific about the windowing function (as well as FFT length, of course) that causes this increase in computation time? Are there any sources that explore this in more depth? For reference, my input signal samples are 2s long with a sampling rate of 2MHz, so a total length of 4,000,000 samples. The code I am using is as follows: from datetime import datetime from scipy import signal for window in ['hann', 'hamming', 'blackman']: #'hamming', 'hann', for nperseg in [1024, 2048, 4096, 8192, 16384]: # 1024, 2048, 4096, 8192 win = signal.windows.get_window(window, nperseg) et = 0 for i in range(0, 100): start_time = datetime.now() # time.monotonic() f, t, Zxx = signal.stft(x, fs=fs, window=win, nperseg=nperseg) end_time = datetime.now() # time.monotonic() et += (end_time - start_time).total_seconds() * 1000 # convert to ms print(f'Time Taken (window={window},nperseg={nperseg}): {et / 100}ms') Answer: I think this is just measurement noise. You would not expect the window to make any difference (if it's precomputed). However, the window length should make a small but consistent difference, and that's not visible in the data either: execution time should increase slightly with window length, but it doesn't do so consistently in your data. Elapsed real time is always noisy since it depends heavily on what else is going on on the system: what interrupts are happening and how long it takes to serve them, how processes and threads are time-sliced, etc.
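To reduce that measurement noise, a common pattern is a monotonic high-resolution clock plus repeated trials, keeping the minimum rather than the mean (the minimum is far less sensitive to OS scheduling hiccups). A sketch of the idea — timing a plain FFT here instead of the asker's full STFT so the snippet stays self-contained:

```python
import time
import numpy as np

def best_of(fn, repeats=20):
    """Time fn() several times; return the best (lowest) wall time in ms.
    The minimum is less sensitive to scheduling noise than the mean."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()  # monotonic, high resolution
        fn()
        best = min(best, time.perf_counter() - t0)
    return best * 1000

x = np.random.default_rng(0).standard_normal(2**16)
for n in (1024, 4096, 16384):
    seg = x[:n]
    t = best_of(lambda: np.fft.rfft(seg))
    print(f"n={n:5d}: {t:.3f} ms")
```

For serious benchmarking the stdlib `timeit` module does essentially this (and disables garbage collection during the timed region as well).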
{ "domain": "dsp.stackexchange", "id": 11189, "tags": "fourier-transform, python, window-functions, stft" }
What does "~mitochondrial DNA ~bp linear DNA" mean?
Question: I'm surfing NCBI website -Nucleotide- to find some examples of real DNA sequences to use in my small homework project. My question is related to the title of a DNA sequence below: Sus scrofa mitochondrial DNA, D-loop region, isolate: Europeanwild boar 3 1,045 bp linear DNA I thought a mitochondrial DNA is different than a linear/nuclear DNA, especially that a mitochondrial DNA is circular while a linear/nuclear DNA is linear. So what does the above title actually means (in simple terms)? Is it still linear or not? Answer: Well, Mitochondrial DNA CAN be linear (in some organism), see the wikipedia page on this In most multicellular organisms, the mtDNA - or mitogenome - is organized as a circular, covalently closed, double-stranded DNA. But in many unicellular (e.g. the ciliate Tetrahymena or the green alga Chlamydomonas reinhardtii) and in rare cases also in multicellular organisms (e.g. in some species of Cnidaria) the mtDNA is found as linearly organized DNA. Most of these linear mtDNAs possess telomerase independent telomeres (i.e. the ends of the linear DNA) with different modes of replication, which have made them interesting objects of research, as many of these unicellular organisms with linear mtDNA are known pathogens.[19] For human mitochondrial DNA (and probably for that of metazoans in general), 100-10,000 separate copies of mtDNA are usually present per cell (egg and sperm cells are exceptions). In mammals, each double-stranded circular mtDNA molecule consists of 15,000-17,000[20] base pairs. The two strands of mtDNA are differentiated by their nucleotide content, with a guanine-rich strand referred to as the heavy strand (or H-strand) and a cytosine-rich strand referred to as the light strand (or L-strand). The heavy strand encodes 28 genes, and the light strand encodes 9 genes for a total of 37 genes. 
Of the 37 genes, 13 are for proteins (polypeptides), 22 are for transfer RNA (tRNA) and two are for the small and large subunits of ribosomal RNA (rRNA). This pattern is also seen among most metazoans, although in some cases one or more of the 37 genes is absent and the mtDNA size range is greater. Even greater variation in mtDNA gene content and size exists among fungi and plants, although there appears to be a core subset of genes that are present in all eukaryotes (except for the few that have no mitochondria at all). Some plant species have enormous mtDNAs (as many as 2,500,000 base pairs per mtDNA molecule) but, surprisingly, even those huge mtDNAs contain the same number and kinds of genes as related plants with much smaller mtDNAs.[21] https://en.wikipedia.org/wiki/Mitochondrial_DNA In the case of your sequence, it only means that it has been entered in NCBI as a linear piece of DNA (this is usually the case when you sequence only a piece of a circular DNA, for example). It does not mean that it is linear within the organism! I hope this answers your question!
{ "domain": "biology.stackexchange", "id": 5078, "tags": "bioinformatics, dna" }
Specificity over 100
Question: I am constructing a deep neural network for a classification task. When I look at the metrics, I have a specificity of 1.04. Is it possible for this metric to be over 100%? How do you interpret it? Thank you. Answer: The specificity is defined as $\text{Specificity} = \frac{\sum \text{True Negatives}}{\sum \text{True Negatives} + \sum \text{False Positives}}$. These counts are non-negative, so the specificity cannot be negative. It also cannot exceed 1, because the numerator is one of the terms in the denominator. A value of 1.04 therefore points to a bug in how the metric is being computed rather than a meaningful result. For more details on sensitivity and specificity you can check this answer: Usage of Precision Recall on an unbalanced dataset
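Computed directly from the confusion-matrix counts, the value is forced into [0, 1] since TN is part of its own denominator. A small sanity-check sketch with toy labels (0 = negative class, 1 = positive class):

```python
def specificity(y_true, y_pred):
    """Specificity = TN / (TN + FP), from binary label lists."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tn / (tn + fp) if (tn + fp) else float("nan")

y_true = [0, 0, 0, 1, 1, 0]
y_pred = [0, 1, 0, 1, 0, 0]
print(specificity(y_true, y_pred))  # 3 TN, 1 FP -> 0.75
```

If a library reports a specificity above 1, the usual culprits are swapped positive/negative labels, a denominator built from the wrong counts, or averaging across batches incorrectly.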
{ "domain": "datascience.stackexchange", "id": 10438, "tags": "classification, metric" }
My first Hangman game in Python
Question: This is my first game and I'm really proud of it, but I think it's horribly messy. Is there any way to improve it? #HANGMAN # Valid Word Checker def valid_word(word): ill_chars = ["!","@","#","$","%","^","&","*","(",")",",",".", " ","1","2","3","4","5","6","7","8","9","0"] for i in word: if i in ill_chars: print(f"\nError: It must be a letter. No symbols, numbers or spaces allowed. \n\n '{i}' is not allowed\n") return False if len(word) > 12: print("\n\nError: Word is too long. Use a word with eight characters or less.") return False return True word_list = [] spaces = [] guessed_letters = [] ill_chars = ["!","@","#","$","%","^","&","*","(",")",",",".", " ","1","2","3","4","5","6","7","8","9","0"] head =" O" armL = " /" torso = "|" armR = "\\" legL = "/" legR = " \\" hangman_parts = [head, armL, torso, armR, legL, legR] hangman_progress = ["", "|", "\n|", "\n|"] hangman_final = ["|~~~~|\n"] # Check if word is valid wordisValid = False while True: word = input("\n\n\nChoose any word\n\n\n").lower() if valid_word(word) == True: wordisValid = True break else: continue # Add to list for i in word: word_list.append(i) spaces.append("_ ") # Main Game Loop bad_letter = 0 while wordisValid == True: print("".join(hangman_final)) print("\n\n\n\n") print("".join(spaces)) print("\n\nThe word has: " + str(len(word)) + " letters.") print("\nYou've tried the following letters: " + "\n\n" + "".join(guessed_letters) + "\n\n") # Winning Loop if "".join(spaces) == word: print(f"YOU WIN! The word was: {word}") break # Choose Letters player_guess = input("\n\nPlease choose a letter: \n\n\n\n").lower() guessed_letters.append(" " + player_guess) if player_guess in ill_chars: print(f"\nError: It must be a letter. No symbols, numbers or spaces allowed. 
\n\n '{player_guess}' is not allowed\n") elif len(player_guess) > 1: print("\nError: You must use one letter.\n") elif player_guess == "": print("\nError: No input provided.\n") # Wrong Letter elif player_guess not in word_list: bad_letter += 1 if bad_letter == 1: hangman_final.append(hangman_progress[1] + head) elif bad_letter == 2: hangman_final.append(hangman_progress[2] + " " + torso) elif bad_letter == 3: hangman_final.pop(2) hangman_final.append(hangman_progress[2] + armL + torso) elif bad_letter == 4: hangman_final.pop(2) hangman_final.append(hangman_progress[2] + armL + torso + armR) elif bad_letter == 5: hangman_final.append(hangman_progress[3] + " " + legL) elif bad_letter == 6: hangman_final.pop(3) hangman_final.append(hangman_progress[3] + " " + legL + legR) print("\n\nThe word was: " + word) print("\n\n\n\n\n\n\n" + "".join(hangman_final)) print(" YOU GOT HUNG ") break print("".join(hangman_final)) print("\n\n\n\n") print("".join(spaces)) print("\n\nThe word has: " + str(len(word)) + " letters.") print("\nYou've tried the following letters: " + "\n\n" + "".join(guessed_letters) + "\n\n") print(f"\n\n\n{player_guess} is not in the word. Try again.\n\n") # END GAME if bad_letter == 6: break # Add letters guessed to a list counter = 0 for i in word: if player_guess == i: spaces[counter] = player_guess counter += 1 Even though it's working, I'm not sure that I have the right idea when it comes to making this type of project. Answer: While it's clear that you're new to python, it's still pretty good that you got it to run first time. Good job! Input Validation Currently, you have a list of forbidden characters. While that can work, by default python allows unicode input. That means there's literally thousands of letters someone can input. I suggest instead using a whitelist of valid inputs. Like this: VALID_LETTERS = "abcdefghijklmnopqrstuvwxyz" # This won't ever change, and typically this sort # of things is a module-level variable. 
def valid_word(word): for letter in word: if letter not in VALID_LETTERS: print(f"Letter {letter} is not a valid letter. Please use just a-z.") return False if len(word) > 8: # This was 12 in your script. Little mistake? And you might want a # module-level constant for this as well. print("\n\nError: Word is too long. Use a word with eight characters or less.") return False return True main() and game() Generally, in python we never execute code at the moment the module is loaded. Read here for details, and how we guard against it. I'll come back to this later. For now, it means we should put as much as possible into functions - let's call them main() and game(). main() will be the main menu function. Since we don't have a true menu, this will basically start a game, and perhaps ask if the player wants to play again. game() will be a function that lets us play a single game. Most of this will be a loop that gets player input and calculates the game state. Basically everything in your script that isn't another function should be in the game() function. I'll cut some variables we don't need later on. I'll explain them when we get to where we use them. Like this: def game(): # Variable naming is important. This is an example of a good name. guessed_letters = [] # ill_chars = [...] We already have this variable before here! # Let's assign the parts directly. We can index them as in "3rd part". # And for ourselves, we comment the meaning to make sure we still know what it means a year # from now. hangman_parts = [ " O", # Head " /", # Left Arm "|" , # Torso "\\", # Right Arm "/", # Left Leg " \\" # Right Leg ] hangman_progress = ["", "|", "\n|", "\n|"] hangman_final = ["|~~~~|\n"] # Check if word is valid # Naming: should be word_is_valid, to be consistent with your other variables.
    word_is_valid = False
    while not word_is_valid:
        word = input("\n\n\nChoose any word\n\n\n").lower()
        word_is_valid = valid_word(word)
        # With this loop condition, it'll keep asking until valid_word returns True. So we can just
        # assign that value to word_is_valid directly.

    # Since this is the word we're trying to guess, let's just push it upwards in
    # the console a lot, so we won't see it all the time:
    print("\n" * 100)  # Prints 100 newlines.

    # We don't need word_list, just word will do. We don't need spaces either.
    while True:
        # Main game loop. We don't need to check for anything, we'll just use break or
        # return to get out.

Now we're going to use a list comprehension. It's almost the same as a generator expression, but it gives us a true list, so we can ask python how long it is:

        # Calculate how many bad letters we have with a list comprehension:
        bad_letter = len([letter for letter in guessed_letters if letter not in word])

        draw_hangman(bad_letter)
        if bad_letter > 5:
            print("\n\n\n\nYOU GOT HUNG")
            return
        print("\n\n\n\n")

You used to draw your hangman at two different places, while it's more practical to do it in one place. It also ensures that if you want to change something, you'll only have to change it once. I've also changed how to draw it. It's much more useful to have a function for this drawing operation, and have it calculate how to draw from the number of bad letters so far. This makes the drawing independent of the current game state.
Here's the version I came up with, based on your modifications of hangman_final:

def draw_hangman(bad_letter):
    # Note: hangman_final, hangman_progress and hangman_parts should live at module
    # level (they're constants, not game state) so this function can see them.
    print("".join(hangman_final), end="")
    if bad_letter > 0:
        # Head
        print(hangman_progress[1] + hangman_parts[0], end="")
    if bad_letter == 2:
        # Torso and arms
        print(hangman_progress[2] + " " + hangman_parts[2], end="")
    elif bad_letter == 3:
        print(hangman_progress[2] + "".join(hangman_parts[1:3]), end="")
    elif bad_letter > 3:
        print(hangman_progress[2] + "".join(hangman_parts[1:4]), end="")
    if bad_letter > 4:
        # Legs
        print(hangman_progress[3] + " " + hangman_parts[4], end="")
    if bad_letter > 5:
        print(hangman_parts[5], end="")

Here we need to print the letters we guessed, and underscores otherwise. We can calculate and print it in one line:

        print(" ".join(letter if letter in guessed_letters else "_" for letter in word))

For every letter, this will print it if it's contained in guessed_letters, and otherwise print an underscore. This is a generator expression (note the conditional expression has to come before the for clause). This one is almost equal to the function:

def make_word(word, guessed_letters):
    result = []
    for letter in word:
        if letter in guessed_letters:
            result.append(letter)
        else:
            result.append("_")
    return result

I say almost, because it's not exactly a list that comes out of it, even if it acts the same if you use it in a for loop or feed it to a function that does, like " ".join(). It also acts more like a loop than a function, but that's not really important right now. We feed the result to the join. And note that we feed it to " ".join(), with a space in between, which will put a space between every letter like you did before.

        print(f"\n\nThe word has: {len(word)} letters.")  # f-strings are shorter, and easy to use.
        print(f"\nYou've tried the following letters:\n\n{''.join(guessed_letters)}\n\n")
        # For a string literal inside an f-string, use the other type of quotes

        # This is another generator expression - this one returns a not-quite-list of booleans, and
        # the all() builtin function returns True if and only if all results of the generator
        # expression are True.
        if all(letter in guessed_letters for letter in word):
            print(f"You WIN!\nThe word was: {word}")
            return

        input_valid = False
        while not input_valid:
            new_letter = input("\n\nPlease choose a letter: \n\n\n\n").lower()
            # VERY GOOD that you lowercase it! Prevents tons of weird behaviour.
            if len(new_letter) > 1:
                print("Please enter a single letter only.")
            elif new_letter not in VALID_LETTERS:
                print(f"{new_letter} is not a valid letter. Please enter a letter a-z")
            elif not new_letter:  # Empty strings evaluate to False
                print("Please enter a letter before pressing enter.")
            elif new_letter in guessed_letters:
                print(f"You already guessed {new_letter} before!")
            else:
                guessed_letters.append(new_letter)
                input_valid = True

This is pretty close to your input. But I put it in a loop, so if we have an invalid input, we don't have to traverse the entire game loop again. I use a pretty simple signalling boolean to keep it going, but perhaps there are more elegant ways to do this. Never wrong to keep it simple, though.

Right now, to play the game, all you have to do is call the game() function. But perhaps we want to play multiple games in series? Let's make our main() function:

def main():
    game()
    while "y" in input("Play again? [Y/n]").lower():
        game()

Why is this in a separate function? Again, for the sake of extensibility. Perhaps, we want to import our game from another module. If so, we don't want it to run when we import, but when we call a function. So we end the script with this guard:

if __name__ == "__main__":
    main()

If we execute this file directly, this will run our main() function and let us play the game.
But if we import it, it won't, so we can call the function when we want it.
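The two one-liners above (the bad-letter count and the masked word) are easy to sanity-check in isolation; the sample word and guesses below are made up for illustration:

```python
word = "python"
guessed_letters = ["p", "x", "o", "z"]

# Count wrong guesses with a list comprehension, as in game():
bad_letter = len([letter for letter in guessed_letters if letter not in word])

# Mask unguessed letters with a generator expression fed to " ".join():
masked = " ".join(letter if letter in guessed_letters else "_" for letter in word)

print(bad_letter)  # 2  ("x" and "z" are wrong)
print(masked)      # p _ _ _ o _
```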
{ "domain": "codereview.stackexchange", "id": 36075, "tags": "python, beginner, python-3.x, game, hangman" }
What is the name of the physical quantity which is the gyromagnetic ratio divided by $2 \pi$?
Question: I am reading Abramowitz and Stegun's Handbook of Mathematical Functions and I come across Table $2.3$, Adjusted Values of Constants. In there we see: Gyromagnetic ratio of proton: $\gamma$, with value $2.675 196 5 \times 10^8 \ \mathrm {rad \cdot s^{-1} \cdot T^{-1} }$ etc. Then immediately underneath it: $\gamma / 2 \pi$, with value $4.257 707 \times 10^7 \ \mathrm {Hz \cdot T^{-1} }$ I have looked up the page on gyromagnetic ratio on the internet, but while some sources reference $\gamma / 2 \pi$, nobody seems to actually explain what it is. Can anyone tell me what that "reduced" value is called, and what it is used for? Answer: It’s the gyromagnetic ratio. In one case in angular units, in the other in cyclic or “lab” units. This is the same as how we use the word “frequency” to refer to both angular and cyclic frequencies. It is an ambiguity in the language, so you just have to check for consistency with your reference if the factor of $2\pi$ matters for what you’re doing. “Reduced gyromagnetic ratio” is probably an ok name for it by analogy with reduced Planck constant. I don’t think anyone would argue with you calling it that if you explained what it is.
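You can check that the two tabulated numbers really are the same quantity in different units: dividing the angular value by $2\pi$ reproduces the cyclic one.

```python
import math

gamma = 2.6751965e8                     # rad s^-1 T^-1 (angular units, from the table)
gamma_over_2pi = gamma / (2 * math.pi)  # Hz T^-1 (cyclic, or "lab", units)

print(gamma_over_2pi)  # ~4.257707e7, matching the second table entry
```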
{ "domain": "physics.stackexchange", "id": 96722, "tags": "particle-physics, terminology, physical-constants" }
Is this momentum and if so how is it derived?
Question: So I'm reading Susskind's "The Theoretical Minimum" and on pages 128 and 129, he has the following equations: First he starts with the Lagrangian for the system: $$L=\frac{1}{2}(\dot{q_1}^2+\dot{q_2}^2) -V(q_1-q_2)$$ Then he shows the following two derivations: $$\dot{p_1} = -V'(q_1-q_2)$$ $$\dot{p_2} = V'(q_1-q_2)$$ Where $V'$ is stated to be the derivative of $V$. The derivation itself is left as an exercise to the reader and I can't figure it out. In the previous chapter, he had defined the momentum as: $$p_i=\frac{\partial L}{\partial \dot{q_i}}$$ on page 124. But since $V$ does not depend upon either $\dot{q_1}$ or $\dot{q_2}$, equations 2 and 3 don't seem to follow from equation 1. Could someone show me what this derivation looks like? Or maybe $p$ means something else here. Answer: I know knzhou already offered the proper answer in the form of a comment, but allow me to expand on it by showing a series of canonical relationships, not all of which are mentioned I believe in Susskind's book. Given the action $S=\int L~dt$, defined in terms of the Lagrangian $L(q,\dot{q})$, we have the following relationships: \begin{align} L&=\frac{dS}{dt},\\ p&=\frac{\partial S}{\partial q}=\frac{\partial L}{\partial\dot{q}},\\ H&=p\dot{q}-L=-\frac{\partial S}{\partial t},\\ \dot{p}&=\frac{\partial L}{\partial q}=-\frac{\partial H}{\partial q},\\ \frac{dH}{dt}&=\frac{\partial H}{\partial t}=-\frac{\partial L}{\partial t},\\ \dot{q}&=\frac{\partial H}{\partial p}. \end{align} These can all be derived from the properties of the Lagrangian, the Euler-Lagrange equation, or the definition of the Legendre transformation that leads to the Hamiltonian.
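If it helps to see the key step numerically: the Euler-Lagrange equation turns the definition $p_i = \partial L/\partial \dot{q_i}$ into $\dot{p_i} = \partial L/\partial q_i$, and for this Lagrangian that partial derivative is exactly $\mp V'(q_1-q_2)$. Here is a finite-difference check with a made-up sample potential (any smooth $V$ would do):

```python
def V(x):                      # sample interaction potential (chosen arbitrarily)
    return 0.3 * x**2 + 0.1 * x**3

def dV(x, h=1e-6):             # numerical V'
    return (V(x + h) - V(x - h)) / (2 * h)

def L(q1, q2, v1, v2):         # the Lagrangian from the book
    return 0.5 * (v1**2 + v2**2) - V(q1 - q2)

def dL_dq1(q1, q2, v1, v2, h=1e-6):   # Euler-Lagrange: pdot_1 = dL/dq1
    return (L(q1 + h, q2, v1, v2) - L(q1 - h, q2, v1, v2)) / (2 * h)

q1, q2, v1, v2 = 0.7, -0.2, 1.3, -0.4
print(dL_dq1(q1, q2, v1, v2))   # equals -V'(q1 - q2) ...
print(-dV(q1 - q2))             # ... as the book's equation for pdot_1 states
```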
{ "domain": "physics.stackexchange", "id": 35629, "tags": "lagrangian-formalism, momentum, hamiltonian-formalism" }
Does phase kickback require the system to be in the eigenstate?
Question: I've been watching this video for the introduction to phase kickback. And here's a diagram: I got confused if we really need $|\psi_k\rangle$ to be an eigenstate to make the kickback work. It seems to me (from the math) that $|\psi_k\rangle$ could be any state. My follow-up question is assume $|\psi_k\rangle$ has to be an eigenstate, then for this diagram: Is it true only if the second qubit is set to $|1\rangle$ state? I'm still a bit confused if there are any relations between the 2 diagrams. Why the top one requires the second register to be in the state $|\psi_k\rangle$, but the bottom one, it could be $|0\rangle$ or $|1\rangle$ state? Answer: The first circuit equality fails when $|\psi_k\rangle$ is not an eigenstate of $U$. A simple way to see this is to set the control qubit to $|1\rangle$. In this case, the RHS circuit is equivalent to $I\otimes I$ up to global phase while the LHS circuit is $I\otimes U$. These are equivalent up to global phase only when $|\psi_k\rangle$ is an eigenstate of $U$. Alternatively, you may notice that the RHS circuit never creates any entanglement. On the other hand, if $|\psi_k\rangle$ is not an eigenstate of $U$ then the LHS circuit may produce entangled output. Therefore, LHS and RHS circuits cannot be equal. The second circuit equality is always true. To see this, note that $\Delta$ is a scalar, so $e^{i\Delta}$ is a multiple of the identity, so every state is its eigenstate.
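The claim is easy to verify numerically. Below, $U = Z$ (so $|1\rangle$ is an eigenstate with eigenvalue $-1$ and $|+\rangle$ is not), and a two-qubit state with amplitudes $[a_{00}, a_{01}, a_{10}, a_{11}]$ is a product state exactly when $a_{00}a_{11} - a_{01}a_{10} = 0$:

```python
def cz(state):
    """Controlled-Z on |control, target>, amplitudes ordered [00, 01, 10, 11]."""
    a00, a01, a10, a11 = state
    return [a00, a01, a10, -a11]

def is_product(state, tol=1e-12):
    a00, a01, a10, a11 = state
    return abs(a00 * a11 - a01 * a10) < tol  # 2x2 determinant = 0 <=> unentangled

s = 2 ** -0.5
out_eig = cz([0, s, 0, s])            # |+>|1>: target is a Z eigenstate
out_non = cz([0.5, 0.5, 0.5, 0.5])    # |+>|+>: target is not an eigenstate

print(out_eig)              # [0, s, 0, -s] = |->|1>: the -1 kicked back onto the control
print(is_product(out_eig))  # True: still a product state
print(is_product(out_non))  # False: the LHS circuit entangled the qubits
```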
{ "domain": "quantumcomputing.stackexchange", "id": 3900, "tags": "gate-synthesis, quantum-phase-estimation, phase-kickback" }
Why doesn't Beta Decay violate the laws of physics?
Question: In Beta decay, a neutron decays into a proton, and "throws out" an electron at high speed. However, this, to me, suggests that the law of conservation of mass is not being kept here. Neutrons have a mass of "1", Protons have a Mass of "1", Electrons have a mass of "1/1840" This means that, before the decay, the total mass is "1", but after the decay has occurred, the total mass in the system is "1841/1840", we have gained an electron's worth of mass from somewhere. DISCLAIMER: I have been taught at a GCSE level, so much dumbing down has occurred in terms of what I have been taught. If I say something wrong, its because I've been taught that at school. Sorry! This means that, either energy is being converted to mass here, or the "mass" values I have been taught are wrong, or the reason could be completely different. Which is it? TL;DR: In beta decay, we seemingly gain one electron's worth of mass. Where has it come from? Answer: The mass of a free neutron is 939.566 MeV/c$^2$ (almost 1 GeV/c$^2$, so that's probably where your instructor got the "1" value), and the mass of a free proton is 938.272 MeV/c$^2$. A free neutron will decay into a free proton, free electron ($\beta^-$), and an anti-neutrino, $\bar{\nu}$. The mass of the electron is 0.511 MeV/c$^2$, and of the anti-neutrino, practically zero. In the center-of-mass (CoM) reference frame (the rest-frame of the neutron), the total energy to start with is the mass energy of the neutron: 939.566 MeV. After the decay, the mass energy of the products is 938.783 Mev, so there remains 0.783 MeV of energy to be shared as kinetic energy between the proton and the anti-neutrino. The net momentum in the CoM frame must be zero, but with three particles involved, the energy is not split uniquely among the three. The $\beta^-$ and $\bar{\nu}$ carry most of the kinetic energy, but again, not uniquely split. 
The non-unique energy and momentum of the $\beta^-$ is what led physicists to consider the existence of the third particle in the decay.
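The energy bookkeeping in the answer is just this subtraction (values in MeV/c², as quoted above):

```python
m_n, m_p, m_e = 939.566, 938.272, 0.511  # neutron, proton, electron masses (MeV/c^2)
m_nu = 0.0                                # anti-neutrino: practically zero

q_value = m_n - (m_p + m_e + m_nu)        # energy left over as kinetic energy
print(round(q_value, 3))                  # 0.783 MeV, shared among the products
```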
{ "domain": "physics.stackexchange", "id": 22267, "tags": "energy-conservation, mass-energy, neutrons" }
Miller Indices and the case of a cubic crystal
Question: My textbook, Solid-State Physics, Fluidics, and Analytical Techniques in Micro- and Nanotechnology, by Madou, presents the following image and explanation in a section on x-ray diffraction and Laue equations: ... Laue’s equations can also be interpreted as reflection from the $h,k,l$ planes. From Figure 2.25 it can be seen that the spacing between the ($hk$) planes, and by extension between ($hkl$) planes, is given as$^*$: $d_{hkl} = \frac{\hat{\mathbf{n}}_{hkl} \cdot \mathbf{a}_1}{h} = \frac{\hat{\mathbf{n}}_{hkl} \cdot \mathbf{a}_2}{k} = \frac{\hat{\mathbf{n}}_{hkl} \cdot \mathbf{a}_3}{l} \tag{2.27}$ $^*$ Notice that in the case of a cubic crystal, Equation 2.27 can be simplified to give the distance between planes as in Equation 2.12. Equation 2.12 referenced above is presented earlier in the textbook as follows: Adjacent planes ($hkl$) in a simple cubic crystal are spaced a distance $d_{hkl}$ from each other, with $d_{hkl}$ given by: $$d_{hkl} = \frac{a}{\sqrt{h^2 + k^2 + l^2}} \tag{2.12}$$ where $a$ is the lattice constant. Equation 2.12 provides the magnitude of $d_{hkl}$ and follows from simple analytic geometry. The following comment is not clear to me: $^*$ Notice that in the case of a cubic crystal, Equation 2.27 can be simplified to give the distance between planes as in Equation 2.12. I was wondering if someone would please take the time to explain and show this. EDIT: In an attempt to provide further context, I am posting more of the textbook section surrounding this problem. Constructive interference will occur in a direction such that contributions from each lattice point differ in phase by $2\pi$. This is illustrated for the scattering of an incident x-ray beam by a row of identical atoms with lattice spacing $\mathbf{a}_1$ in Figure 2.20. The direction of the incident beam is indicated by wave vector $\mathbf{k}_0$ or the angle $\alpha_0$, and the scattered beam is specified by the direction of $\mathbf{k}$ or the angle $\alpha$. 
Because we assume elastic scattering, the two wave vectors $\mathbf{k}_0$ and $\mathbf{k}$ have the same magnitude, i.e., $2\pi/\lambda$ but with differing direction. A plane wave $e^{i k \cdot r}$ is constant in a plane perpendicular to $\mathbf{k}$ and is periodic parallel to it, with a wavelength $\lambda = 2\pi/\mathbf{k}$ (see Appendix 2A). The path difference $A_1B − A_2C$ in Figure 2.20 must equal $e\lambda$ with $e = 0, 1, 2, 3, \dots$. For a fixed incident x-ray with wavelength $\lambda$ and direction $\mathbf{k}$, and an integer value of $e$, there is only one possible scattering angle $\alpha$ defining a cone of rays drawn about a line through the lattice points (see Figure 2.20). Because crystals are periodic in three directions, the Laue equations in 3D are then: $$\mathbf{a}_1 (\cos\alpha - \cos\alpha_0) = e \lambda$$ $$\mathbf{a}_2 (\cos\beta - \cos\beta_0) = f \lambda \tag{2.21}$$ $$\mathbf{a}_3 (\cos\gamma - \cos\gamma_0) = g \lambda$$ For constructive interference from a three-dimensional lattice to occur, the three equations above must all be satisfied simultaneously, i.e., six angles $\alpha$, $\beta$, $\gamma$, $\alpha_0$, $\beta_0$, and $\gamma_0$; three lattice lengths $\mathbf{a}_1$, $\mathbf{a}_2$, and $\mathbf{a}_3$; and three integers ($e$, $f$, and $g$) are fixed. Multiplying both sides of Equation 2.21 with $2\pi/\lambda$ and rewriting the expression in vector notation we obtain: $$\mathbf{a}_1 \cdot (\mathbf{k} - \mathbf{k}_0) = 2 \pi e$$ $$\mathbf{a}_2 \cdot (\mathbf{k} - \mathbf{k}_0) = 2 \pi f \tag{2.22}$$ $$\mathbf{a}_3 \cdot (\mathbf{k} - \mathbf{k}_0) = 2 \pi g$$ with $\mathbf{a}_1$, $\mathbf{a}_2$, and $\mathbf{a}_3$ being the primitive vectors of the crystal lattice. 
If we further define a vector $\Delta \mathbf{k} = \mathbf{k} − \mathbf{k}_0$, Equation 2.22 simplifies to $$\mathbf{a}_1 \cdot \Delta \mathbf{k} = 2 \pi e$$ $$\mathbf{a}_2 \cdot \Delta \mathbf{k} = 2 \pi f \tag{2.23}$$ $$\mathbf{a}_3 \cdot \Delta \mathbf{k} = 2 \pi g$$ Dealing with 12 variables for each reflection simultaneously [six angles ($\alpha$, $\beta$, $\gamma$, $\alpha_0$, $\beta_0$, and $\gamma_0$), three lattice lengths ($\mathbf{a}_1$, $\mathbf{a}_2$ and $\mathbf{a}_3$), and three integers ($e$, $f$, and $g$)] is a handful; this is the main reason why the Laue equations are rarely referred to directly, and a simpler representation is used instead. The reflecting conditions can indeed be described more simply by the Bragg equation. Further below we will learn that constructive interference of diffracted x-rays will occur provided that the change in wave vector, $\Delta \mathbf{k} = \mathbf{k} − \mathbf{k}_0$, is a vector of the reciprocal lattice. Bragg’s law is equivalent to the Laue equations in one dimension as can be appreciated from an inspection of Figures 2.24 and 2.25, where we use a two-dimensional crystal for simplicity. Suppose that vector $\Delta \mathbf{k}$ in Figure 2.24 satisfies the Laue condition; because incident and scattered waves have the same magnitude (elastic scattering), it follows that incoming ($\mathbf{k}_0$) and reflected rays ($\mathbf{k}$) make the same angle $\theta$ with the plane perpendicular to $\Delta \mathbf{k}$. The magnitude of vector $\Delta \mathbf{k}$, from Figure 2.24, is then given as: $$|\Delta \mathbf{k}| = 2\mathbf{k}\sin(\theta)$$ We now derive the relation between the reflecting planes to which $\Delta \mathbf{k}$ is normal and the lattice planes with a spacing $d_{hkl}$ (see Figure 2.25 and Bragg’s law in Equation 2.20). The normal unit vector $\hat{\mathbf{n}}_{hk}$ and the interplanar spacing $d_{hk}$ in Figure 2.25 characterize the crystal planes ($hk$). 
From Equation 2.23 we deduce that the direction cosines of $\Delta \mathbf{k}$, with respect to the crystallographic axes, are proportional to $e/a_1$, $f/a_2$, and $g/a_3$, or: $$e/a_1 : f/a_2 : g/a_3 \tag{2.25}$$ From the definition of the Miller indices, an ($hkl$) plane intersects the crystallographic axes at the points $a_1/h$, $a_2/k$, and $a_3/l$, and the unit vector $\hat{\mathbf{n}}_{hkl}$, normal to the ($hkl$) plane, has direction cosines proportional to: $$h/a_1, k/a_2, \text{ and } l/a_3 \tag{2.26}$$ Comparing Equations 2.25 and 2.26 we see that $\Delta \mathbf{k}$ and the unit normal vector $\hat{\mathbf{n}}_{hkl}$ have the same directions; all that is required is that $e = nh$, $f = nk$, and $g = nl$, where $n$ is a constant. The factor $n$ is the largest common factor of the integers $e$, $f$, and $g$ and is itself an integer. From the above, Laue's equations can also be interpreted as reflection from the $h,k,l$ planes. From Figure 2.25 it can be seen that the spacing between the ($hk$) planes, and by extension between ($hkl$) planes, is given as: $d_{hkl} = \dfrac{\hat{\mathbf{n}}_{hkl} \cdot \mathbf{a}_1}{h} = \dfrac{\hat{\mathbf{n}}_{hkl} \cdot \mathbf{a}_2}{k} = \dfrac{\hat{\mathbf{n}}_{hkl} \cdot \mathbf{a}_3}{l} \tag{2.27}$ Answer: This answer uses vector algebra and the so-called reciprocal lattice to avoid trigonometry. There are probably other, more geometric ways to connect the Laue conditions to the lattice spacing. $d_{hkl} = \dfrac{\hat{\mathbf{n}}_{hkl} \cdot \mathbf{a}_1}{h} = \dfrac{\hat{\mathbf{n}}_{hkl} \cdot \mathbf{a}_2}{k} = \dfrac{\hat{\mathbf{n}}_{hkl} \cdot \mathbf{a}_3}{l} \tag{2.27}$ We use the Laue conditions to define the reciprocal lattice vector $\mathbf{d}^*_{hkl}$ and reciprocal lattice unit vectors $\mathbf{a}^*_1, \mathbf{a}^*_2, \mathbf{a}^*_3$. The reciprocal lattice point is defined as: $$\mathbf{d}^*_{hkl} = \dfrac{\hat{\mathbf{n}}_{hkl}}{d_{hkl}},$$ i.e.
it is perpendicular to the reflecting planes and has a magnitude of one divided by the lattice spacing. We can rewrite the three Laue conditions as: $$h = \mathbf{d}^*_{hkl} \cdot \mathbf{a}_1$$ $$ k = \mathbf{d}^*_{hkl} \cdot \mathbf{a}_2 $$ $$ l = \mathbf{d}^*_{hkl} \cdot \mathbf{a}_3 $$ and define $$ \mathbf{a}^*_1 = \mathbf{d}^*_{100} $$ $$ \mathbf{a}^*_2 = \mathbf{d}^*_{010} $$ $$ \mathbf{a}^*_3 = \mathbf{d}^*_{001} $$ The reciprocal space unit vectors have magnitudes that are the reciprocals of the magnitude of the corresponding real space unit vectors (dot product equals to one), and are perpendicular to the other two real space unit vectors (dot product is zero), as you can see if you apply the Laue conditions to, say, $ \mathbf{a}^*_1 = \mathbf{d}^*_{100}$: $$1 = \mathbf{d}^*_{100} \cdot \mathbf{a}_1$$ $$ 0 = \mathbf{d}^*_{100} \cdot \mathbf{a}_2 $$ $$ 0 = \mathbf{d}^*_{100} \cdot \mathbf{a}_3 $$ You can now express a reciprocal lattice point via the Miller indices and the reciprocal unit vectors: $$\mathbf{d}^*_{hkl} = h \cdot \mathbf{a}^*_1 + k \cdot \mathbf{a}^*_2 + l \cdot \mathbf{a}^*_3$$ For the cubic case the three reciprocal unit cell vectors are mutually perpendicular just like the real space unit vectors, and they all have a magnitude of 1/a. We can calculate the magnitude of $\mathbf{d}^*_{hkl}$ as (using properties of the cartesion coordinate system, or Pythagoras' theorem): $$|\mathbf{d}^*_{hkl}| = \frac{\sqrt{(h^2 + k^2 + l^2)}}{a}$$ To get $d_{hkl}$, you simply have to take the reciprocal. Disclaimer I called $d_{hkl}$ the lattice spacing. This is correct if $h, k, l$ have no common divisor. If they do, it is the lattice spacing divided by that common divisor. For example, the (1,2,3) reflection and the (10,20,30) reflection has the same reflecting plane, but a different diffraction angle.
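The whole chain — reciprocal unit vectors built from the Laue conditions, then $d_{hkl} = 1/|\mathbf{d}^*_{hkl}|$ — can be checked numerically for a cubic cell (lattice constant chosen arbitrarily):

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

a = 2.5                                      # cubic lattice constant (arbitrary)
a1, a2, a3 = (a, 0, 0), (0, a, 0), (0, 0, a)

vol = dot(a1, cross(a2, a3))                 # cell volume
b1 = tuple(c / vol for c in cross(a2, a3))   # a*_1: satisfies a*_i . a_j = delta_ij
b2 = tuple(c / vol for c in cross(a3, a1))   # (no 2*pi factor, matching the
b3 = tuple(c / vol for c in cross(a1, a2))   #  convention used in this answer)

h, k, l = 1, 2, 3
d_star = tuple(h*x + k*y + l*z for x, y, z in zip(b1, b2, b3))
d_hkl = 1 / dot(d_star, d_star) ** 0.5

print(d_hkl)                          # equals a / sqrt(h^2 + k^2 + l^2), i.e. Eq. 2.12
print(a / (h*h + k*k + l*l) ** 0.5)
```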
{ "domain": "chemistry.stackexchange", "id": 13187, "tags": "crystallography, miller-indices" }
Can a molecule be neither gerade nor ungerade?
Question: Or can any molecular orbital always be written as a linear combination of gerade and ungerade basis states? Answer: g/u is not a property of a molecule; I assume you meant molecular orbital. All functions can be written as a linear combination of even + odd functions: there is a short explanation on Wikipedia. Basically, you have a function $f(x)$; now define $$\begin{align} f_\mathrm e(x) &= \frac{1}{2}[f(x) + f(-x)] \\ f_\mathrm o(x) &= \frac{1}{2}[f(x) - f(-x)] \\ \end{align}$$ It is clear from the definition of parity that $f_\mathrm e$ is even (since $f_\mathrm e(x) = f_\mathrm e(-x)$) and likewise that $f_\mathrm o$ is odd. Now since $f(x) = f_\mathrm e(x) + f_\mathrm o(x)$, we have shown that any function $f(x)$ can be expressed as a linear combination of even and odd functions. In three dimensions where you have $\psi(x,y,z)$ you simply need to define $$\begin{align} \psi_\mathrm g(x,y,z) &= \frac{1}{2}[\psi(x,y,z) + \psi(-x,-y,-z)] \\ \psi_\mathrm u(x,y,z) &= \frac{1}{2}[\psi(x,y,z) - \psi(-x,-y,-z)] \\ \end{align}$$ and analogously to above one can see that every function $\psi(x,y,z)$ can be expressed as the sum of a gerade component $\psi_\mathrm g$ and an ungerade component $\psi_\mathrm u$. These components don't have any physical meaning (unless $\psi$ itself is either g or u, in which case one component is simply $\psi$ and the other is zero), but that's somewhat beside the point. If a molecule does not possess a centre of inversion then its (canonical) molecular orbitals will not possess g/u symmetry. So the answer is yes, and yes. There is no "or".
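The one-dimensional decomposition is easy to play with numerically; here with $f(x)=e^x$, whose even and odd parts are $\cosh$ and $\sinh$:

```python
import math

f = math.exp

def f_even(x):                 # "gerade" part
    return 0.5 * (f(x) + f(-x))

def f_odd(x):                  # "ungerade" part
    return 0.5 * (f(x) - f(-x))

x = 0.73
print(f_even(x) + f_odd(x) - f(x))   # ~0: the parts sum back to f
print(f_even(-x) - f_even(x))        # ~0: even under inversion
print(f_odd(-x) + f_odd(x))          # ~0: odd under inversion
print(f_even(x) - math.cosh(x))      # ~0: here the even part is cosh
```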
{ "domain": "chemistry.stackexchange", "id": 9562, "tags": "molecular-orbital-theory, symmetry" }
Quantum complexity of TQBF
Question: There is no classical algorithm for $n$-bit TQBF with better than $O(2^n)$ complexity. Is that also the best known bound for quantum algorithms / circuits? Edit: As pointed out by Huck Bennett, in the full alternation subset of TQBF we can in fact beat $O(2^n)$ via randomized tree search. This is the case I care about most, so I would also be curious for the best exponent for a quantum algorithm if there are all $n-1$ quantifier flips. Answer: The best quantum algorithm for QBF on n variable formulas of size s runs in about $2^{n/2}poly(s)$ steps, regardless of quantifier alternations. This is due to a series of advances on quantum algorithms for game tree search in the early 2000s. Here is the first reference I found from googling; there are more: https://arxiv.org/abs/0907.1623
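For reference, the classical $O(2^n)$ baseline being improved on is just an exhaustive walk of the quantifier game tree. A toy evaluator (the formula is supplied as a Python predicate; the quantifier string and example are made up for illustration):

```python
def qbf(quantifiers, phi, assignment=()):
    """Brute-force TQBF: visits up to 2^n leaves for n quantified variables."""
    if len(assignment) == len(quantifiers):
        return phi(assignment)
    branches = (qbf(quantifiers, phi, assignment + (b,)) for b in (False, True))
    # 'A' = universal (forall), anything else treated as existential here
    return all(branches) if quantifiers[len(assignment)] == "A" else any(branches)

# Full alternation on 2 variables:
print(qbf("AE", lambda v: v[0] != v[1]))  # True:  forall x exists y (x != y)
print(qbf("EA", lambda v: v[0] != v[1]))  # False: exists x forall y (x != y)
```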
{ "domain": "cstheory.stackexchange", "id": 5183, "tags": "quantum-computing, pspace" }
Subscribing and Publishing with generic type of message
Question: Hello, This question is subsequent to this one. I'm still working on the connexion between Neural Network Simulator (Prométhé) and ROS. Now I know how to retrieve my publisher and subscriber pointers in the Prométhé communicating functions. But the issue is the following : When going from Prométhé to ROS, I have Groups of Neurons (i.e. a certain number of activity levels <=> a table of floating values between 0 and 1) that I want to put in a message to send through a topic. But I can't know before execution what type of topic will be required because it depends on the link that is drawn in the network (and the simulator will be compiled for a while). The problem is equivalent from ROS to Prométhé : to subscribe to a topic, I have to know which type it is to define the callback function. So is there a way to be transparent from the type of topic (and compatible with C, meaning that I can't use templates) like a function that dynamically determines the type, or a "generic" type that gathers every other type (a "common_message" type) ? Or must I define a look-up table / switch case to use the right message depending on the input/output data (considering that I can get a user specified option giving the wanted type) ? Originally posted by Erwan R. on ROS Answers with karma: 697 on 2012-07-17 Post score: 1 Answer: For me that sounds like a design issue, where you don't need such a generic message. Something like an unknown number of activity levels and an unknown number of groups of that can easily be defined as a generic ROS message. Can you give a more concrete example of your data? I think we'll be able to come up with a generic message. Originally posted by dornhege with karma: 31395 on 2012-07-17 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by joq on 2012-07-17: I agree. Much better to define a standard ROS message that is flexible enough for your application. Comment by Erwan R.
on 2012-07-19: Thanks for answering. I think I wasn't precise enough. The problem is not to define an ad-hoc message for the neural activity. The problem is, in the compiled code of the NN simulator, not to know the type of message that will be defined at execution (by user input). A solution is to have a ... Comment by Erwan R. on 2012-07-19: switch case on each know ROS type but it's not a robust solution. For example, if in Prométhé I define a link from the simulator to ROS on the /base_controller/command topic with geometry_msgs::Twist type, I want my simulator box to advertise with this type. But if I change my mind and I want to ... Comment by Erwan R. on 2012-07-19: publish a TwistWithCovariance message (because it finally fits better), I just want to change the link "msgtype" field (during execution) and not to recompile the whole simulator. Comment by dornhege on 2012-07-19: In that case the only way out I see is to use something like @Lorenz' suggestion. With python you might be able to do it cleaner. Comment by joq on 2012-07-19: Still seems like a bad idea to me. Type safety for ROS messages is an important feature. It should not be given up lightly. Comment by Erwan R. on 2012-07-19: So does it means that any third-party software that want to communicate with ROS and being technically independent from the solution implements a look-up function with hard-coded message type list, to ensure that what's injected is secure ? It sounds quite static to evolutions ... Comment by joq on 2012-07-19: I clearly do not understand your question. Comment by dornhege on 2012-07-19: I have to admit, I'm not sure, where you need this absolute versatility. You will definitely need to come of with some mapping from your data types to ROS, but usually there is some mapping from the interface types for third-party software that doesn't need to be that generic. Comment by Erwan R. 
on 2012-07-19: @joq : What I'm wondering more generally is : how can ROS interface with third-party software without explicitly using a technical solution ? I know that two programs can communicate through sockets for example, but it means explicitly choosing the TCP or UDP protocol, which has been rejected to ... Comment by Erwan R. on 2012-07-19: ... preserve the transparency to the solution. That's why we chose to insert a tool in the NN simulator that uses ROS's publishing and subscribing system. But there are some hidden design issues that address the question of type. @dornhege : you're right and I think my solution has to be ... Comment by Erwan R. on 2012-07-19: ... redesigned to be more specific (the idea was to find the highest common pattern to avoid an ad-hoc solution, but I think it's just impossible regarding the way the two systems are built). Anyway, many thanks to both of you for taking time to try to help me. Comment by joq on 2012-07-19: I think what @dornhege is suggesting is to create a custom ROS message that encapsulates your NN output, perhaps using variable-sized arrays of floats. One or more simple ROS nodes (perhaps in Python) could easily convert the NN output into any specific ROS message you want.
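To make dornhege's and joq's suggestion concrete: a message description along these lines (all names hypothetical, not an existing ROS package) would carry any number of groups, each with any number of activity levels, while keeping ROS's type safety:

```
# NeuronGroup.msg (hypothetical)
string    name        # optional group label
float32[] activity    # variable-length activity levels in [0, 1]

# NeuronGroups.msg (hypothetical)
NeuronGroup[] groups  # any number of groups per message
```

A fixed message like this sidesteps the need for a generic "common_message" type: the structure is static, and only the array lengths vary at runtime.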
{ "domain": "robotics.stackexchange", "id": 10233, "tags": "ros, message, publisher" }
What is the current status of parallel or concurrent programs in the Curry-Howard isomorphism?
Question: In Girard's Proofs and Types we can read : From an algorithmic viewpoint, the sequent calculus has no Curry-Howard isomorphism, because of the multitude of ways of writing the same proof. This prevents us from using it as a typed $\lambda$-calculus, although we glimpse some deep structure of this kind, probably linked with parallelism. Proofs and Types, J.Y Girard (Page 28) But we can also read (about Linear Logic) that From the viewpoint of computer science, it gives a new approach to questions of laziness, side effects and memory allocation [GirLaf, Laf87, Laf88] with promising applications to parallelism. Proofs and Types, J.Y Girard (Page 149, written by Yves Lafont) How are parallel programs linked to the Curry-Howard isomorphism ? What are the current thoughts about that ? Answer: The Concurrent Logical Framework is one interesting area including its descendants, like Linear Meld and LolliMon. This is based on intuitionistic linear logic. Classical linear logic has connections to the Linear Chemical Abstract Machine (CHAM) as described by e.g. A Calculus for Interaction Nets Based on the Linear Chemical Abstract Machine which explicitly describes the result as a Curry-Howard type result. Alexander Summers' thesis Curry-Howard Term Calculi for Gentzen-Style Classical Logics which I have not read seems to be aimed directly at the problem of providing a Curry-Howard correspondence for Gentzen-style calculi. The $\bar{\lambda}\mu\tilde{\mu}$-calculus by Curien and Herbelin introduced in The Duality of Computation is a seminal work in this vein of (non-linear) lambda calculi corresponding to classical logics. At any rate, this is all still a lively area of research. There are many recent papers on this topic. The above doesn't even mention the even more substructural side of separation logic and the corresponding Hoare Type Theory which focuses on imperative programming languages.
For example, there's Towards type-theoretic semantics for transactional concurrency whose references you can trace for prior work. (As a bit of a pedantic note, most of these are focused on concurrency, not parallelism per se.)
{ "domain": "cs.stackexchange", "id": 13046, "tags": "reference-request, logic, parallel-computing" }
Where should I put menu items in MVC with PHP - Model or Controller?
Question: This is my first attempt with MVC and I almost get it, but a small thing bothers me. I have this controller in CodeIgniter: <?php class Page extends CI_Controller { public function index($page = "home") { $data['page'] = $page; list($data['title']) = array_keys($this->_menu(), $page); $data['menuItems'] = $this->_menu(); $this->load->view('templates/header', $data); $this->load->view('templates/menu', $data); $this->load->view('pages/'.$page, $data); $this->load->view('templates/footer', $data); } private function _menu() { static $menuItems = array( "Home page" => "home", "Our history" => "history", "About us" => "about", "Contact page" => "contact" ); return $menuItems; } } And here is the menu view: <?php $menu = ""; foreach ($menuItems as $menuName => $view) { $menu .= '<li'; if ( $page == $view ) { $menu .= ' class="selected">'; $menu .= '<a href="#">'; } else { $menu .= '><a href="' . $view . '">'; } $menu .= $menuName . '</a></li>' .PHP_EOL ; } echo $menu; Where do the menu items belong in MVC logic? Is it ok to store things like the menu in the Controller or do I have to make a new Model for it like this? class Menu extends CI_Model { public function __construct() { parent::__construct(); } public function get_menu_items() { static $menuItems = array( "Home page" => "home", "Our history" => "history", "About us" => "about", "Contact page" => "contact" ); return $menuItems; } } Then load it from Controller: public function index($page = "home") { ... $this->load->model('Menu'); $data['menuItems'] = $this->Menu->get_menu_items(); $this->load->view(...); $this->load->view(...); $this->load->view(...); } Answer: This is a nice example: it shows how MVC is just a pattern which should be adjusted to your needs. The menu items you're showing seem very unlikely to change over time. This means you can simply put them in your "templates/menu" view. If they're shared among multiple views, then the controller is OK. 
If the menu items can change dynamically, then a model can provide a meaningful abstraction over your data, and that is where the menu items would belong.
{ "domain": "codereview.stackexchange", "id": 1687, "tags": "php, mvc, codeigniter" }
Flower identification (Singapore, April)
Question: I saw some flowers growing by the roadside today, and was wondering about the species of this plant. The plant also appeared to be fruiting, and it produced small fruits that formed small clusters. Answer: This is called "Red Powder Puff" or botanically Calliandra haematocephala. See the images (from the Wikipedia article): What you identified as the fruit appears to be the buds: Some more information can be found here.
{ "domain": "biology.stackexchange", "id": 5406, "tags": "botany, species-identification" }
Express boolean logic operations in zero-one integer linear programming (ILP)
Question: I have an integer linear program (ILP) with some variables $x_i$ that are intended to represent boolean values. The $x_i$'s are constrained to be integers and to hold either 0 or 1 ($0 \le x_i \le 1$). I want to express boolean operations on these 0/1-valued variables, using linear constraints. How can I do this? More specifically, I want to set $y_1 = x_1 \land x_2$ (boolean AND), $y_2 = x_1 \lor x_2$ (boolean OR), and $y_3 = \neg x_1$ (boolean NOT). I am using the obvious interpretation of 0/1 as Boolean values: 0 = false, 1 = true. How do I write ILP constraints to ensure that the $y_i$'s are related to the $x_i$'s as desired? (This could be viewed as asking for a reduction from CircuitSAT to ILP, or asking for a way to express SAT as an ILP, but here I want to see an explicit way to encode the logical operations shown above.) Answer: Logical AND: Use the linear constraints $y_1 \ge x_1 + x_2 - 1$, $y_1 \le x_1$, $y_1 \le x_2$, $0 \le y_1 \le 1$, where $y_1$ is constrained to be an integer. This enforces the desired relationship. See also https://or.stackexchange.com/q/37/2415. Logical OR: Use the linear constraints $y_2 \le x_1 + x_2$, $y_2 \ge x_1$, $y_2 \ge x_2$, $0 \le y_2 \le 1$, where $y_2$ is constrained to be an integer. Logical NOT: Use $y_3 = 1-x_1$. Logical implication: To express $y_4 = (x_1 \Rightarrow x_2)$ (i.e., $y_4 = \neg x_1 \lor x_2$), we can adapt the construction for logical OR. In particular, use the linear constraints $y_4 \le 1-x_1 + x_2$, $y_4 \ge 1-x_1$, $y_4 \ge x_2$, $0 \le y_4 \le 1$, where $y_4$ is constrained to be an integer. Forced logical implication: To express that $x_1 \Rightarrow x_2$ must hold, simply use the linear constraint $x_1 \le x_2$ (assuming that $x_1$ and $x_2$ are already constrained to boolean values). 
XOR: To express $y_5 = x_1 \oplus x_2$ (the exclusive-or of $x_1$ and $x_2$), use linear inequalities $y_5 \le x_1 + x_2$, $y_5 \ge x_1-x_2$, $y_5 \ge x_2-x_1$, $y_5 \le 2-x_1-x_2$, $0 \le y_5 \le 1$, where $y_5$ is constrained to be an integer. Another helpful technique for handling complex boolean formulas is to convert them to CNF, then apply the rules above for converting AND, OR, and NOT. And, as a bonus, one more technique that often helps when formulating problems that contain a mixture of zero-one (boolean) variables and integer variables: Cast to boolean (version 1): Suppose you have an integer variable $x$, and you want to define $y$ so that $y=1$ if $x \ne 0$ and $y=0$ if $x=0$. If you additionally know that $0 \le x \le U$, then you can use the linear inequalities $0 \le y \le 1$, $y \le x$, $x \le Uy$; however, this only works if you know an upper and lower bound on $x$. Alternatively, if you know that $|x| \le U$ (that is, $-U \le x \le U$) for some constant $U$, then you can use the method described here. This is only applicable if you know an upper bound on $|x|$. Cast to boolean (version 2): Let's consider the same goal, but now we don't know an upper bound on $x$. However, assume we do know that $x \ge 0$. Here's how you might be able to express that constraint in a linear system. First, introduce a new integer variable $t$. Add inequalities $0 \le y \le 1$, $y \le x$, $t=x-y$. Then, choose the objective function so that you minimize $t$. This only works if you didn't already have an objective function. If you have $n$ non-negative integer variables $x_1,\dots,x_n$ and you want to cast all of them to booleans, so that $y_i=1$ if $x_i\ge 1$ and $y_i=0$ if $x_i=0$, then you can introduce $n$ variables $t_1,\dots,t_n$ with inequalities $0 \le y_i \le 1$, $y_i \le x_i$, $t_i=x_i-y_i$ and define the objective function to minimize $t_1+\dots + t_n$. 
Again, this only works if nothing else needs to define an objective function (i.e., if, apart from the casts to boolean, you were planning to just check the feasibility of the resulting ILP, not try to minimize/maximize some function of the variables). For some excellent practice problems and worked examples, I recommend Formulating Integer Linear Programs: A Rogues' Gallery.
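These gadgets are small enough to check exhaustively. Below is a brute-force sanity check (a sketch of my own, not part of the original answer) that the AND, OR, and XOR constraint sets each admit exactly one feasible $y$ for every 0/1 assignment of $x_1, x_2$, and that it is the correct one:

```python
from itertools import product

def and_feasible(x1, x2, y):
    # y >= x1 + x2 - 1,  y <= x1,  y <= x2
    return y >= x1 + x2 - 1 and y <= x1 and y <= x2

def or_feasible(x1, x2, y):
    # y <= x1 + x2,  y >= x1,  y >= x2
    return y <= x1 + x2 and y >= x1 and y >= x2

def xor_feasible(x1, x2, y):
    # y <= x1 + x2,  y >= x1 - x2,  y >= x2 - x1,  y <= 2 - x1 - x2
    return (y <= x1 + x2 and y >= x1 - x2
            and y >= x2 - x1 and y <= 2 - x1 - x2)

for x1, x2 in product((0, 1), repeat=2):
    # each constraint set should leave exactly one feasible y: the right one
    assert [y for y in (0, 1) if and_feasible(x1, x2, y)] == [x1 & x2]
    assert [y for y in (0, 1) if or_feasible(x1, x2, y)] == [x1 | x2]
    assert [y for y in (0, 1) if xor_feasible(x1, x2, y)] == [x1 ^ x2]
```

The same pattern extends to the implication gadget by substituting $1 - x_1$ for $x_1$ in the OR constraints.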
{ "domain": "cs.stackexchange", "id": 8532, "tags": "linear-programming, integer-programming" }
Trying to understand how to connect the most general concept of a function to real world?
Question: I'm a beginner wrapping my head around how general a definition a "function" really is when connected to the real world; please help. I am trying to connect the mathematical definition of a function to real-world occurrences. Here is the Wikipedia definition of "function": a function[note 1] is a binary relation between two sets that associates every element of the first set to exactly one element of the second set. (Wikipedia: Function (mathematics)) Say for example two rocks in space hit each other, can you consider that a function? Each rock has an input state: its mass, velocity, spatial position, etc. and each rock has a resulting output state after impact. Does that mean that the relationship between the two rocks throughout the collision can be considered a function? If that is true can we then simply define a function as a relationship between objects? If so and assuming determinism (let's not go down that rabbit hole) can you then view change in the universe as occurring due to a massive network of functions? Or would it be a network of objects with the edges (of this imaginary graph) representing functions? What is the best mental image here? Can anyone recommend any good sources, where people consider these ideas at such a general and abstract level? (whilst still keeping the discussion grounded if possible) Answer: Let $f$ denote a function. A function $y = f(x)$ associates a unique $y$ to every $x$. Some functions have other properties such as: continuity, one-to-one, and onto. A function is a special type of relation. A function can be defined for more than one variable; for example, $y = f(a, b, c)$ provides a unique $y$ for a specific set of $\{a, b, c\}$ values. For example, the age of each of a set of persons can be described with a function; age = $f$(a specific person). 
The cities served by a set of large airports cannot be described by a function, because each large airport serves more than one city; a relation can be used for this description. Just as the mathematical construct of integers is used for counting discrete items, functions as defined mathematically describe a great deal of the real world (a couple of examples follow below). Similarly, the mathematical construct of vectors describes many real world aspects of dynamics, such as the resolution of a force into components along different directions. More complicated mathematical constructs have been developed, such as tensors to describe the inertia for general motion of a rigid body and to describe the stresses in a solid or fluid. For very many physical phenomena a function can be defined using an equation. Such a functional relationship allows us to quickly evaluate many of the variables of interest for a physical problem. For example, for an ideal gas the pressure, $P$, is a function of volume, $V$, number of moles, $n$, and temperature, $T$: $P = f(n, V, T)$; specifically $P = nRT/V$ where $R$ is a constant. Another example is the exponential probability distribution function which provides the probability of failure, $P$, of a component within a given time, $t$, as the function $P(t) = 1 - \exp(-bt)$ where $b$ is the failure rate. When a phenomenon can be described using a function, we can bring the entire mathematical machinery of functions to bear on evaluating the problem. For example, we can differentiate functions, integrate functions, etc.
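The two closing examples translate directly into code. Here is a minimal sketch (the function names and the choice of SI units are mine, not from the answer):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def ideal_gas_pressure(n, T, V):
    # P = f(n, T, V): a unique pressure for each (n, T, V) triple
    return n * R * T / V

def failure_probability(t, b):
    # exponential distribution: P(t) = 1 - exp(-b t), b = failure rate
    return 1.0 - math.exp(-b * t)

# one mole at 300 K in 0.025 m^3
p = ideal_gas_pressure(1.0, 300.0, 0.025)
# probability a component with failure rate b = 0.05 fails by time t = 10
q = failure_probability(10.0, 0.05)
```

Each call returns exactly one output for a given input tuple, which is precisely the defining property of a function.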
{ "domain": "physics.stackexchange", "id": 73201, "tags": "mathematical-physics" }
Why is it desirable to have a symmetry to make cosmological constant zero?
Question: It is sometimes stated that the absence of a symmetry to make the cosmological constant zero is a problem. But the observed value of dark energy is very small and non-zero. So why is it desirable to have a symmetry to make the cosmological constant zero? Second, why is the energy density due to vacuum fluctuations expected to be Lorentz invariant? What happens if it is not so? (In other words, why should the vacuum expectation value of $T_{ij}$ be of the form $\Lambda g_{ij}$?) Answer: 1) The cosmological constant in the context of quantum field theory is a set of calculations and leads to a sum that will look schematically like this: cc = 3 - 5 + 22 - 120 + 3042 - 50242 + ... + O(M^4), where M is some heavy mass scale. The problem is we don't know the exact numbers in the sum; we can only calculate a few of them, and the best we can do is guess their average magnitude, as there are known constants that set scales. Naively, if you weren't quite sure of your calculation, you would 'guess' the value of any such set of cancellations is going to be basically on the order of the biggest number there (e.g. M^4). The problem is that the measured value of the cosmological constant is about 10^-47 GeV^4, and if you set M to be the Planck scale (a natural cutoff scale where new physics ought to be important) you will find something like ~10^70 GeV^4. This implies that the sum above must cancel to one part in 10^120 or so, which is a fantastic coincidence, and highly unnatural unless there is some mechanism that forces the cancellations to be close. Now, if instead there was a symmetry that made the sum identically vanish (for instance, unbroken supersymmetry), you might be able to cook up a theory that for instance has an undetermined integration constant left over, which you could arbitrarily set to be whatever you wanted. It's just a number, big deal. 
However, trying to tailor a set of physical laws relating quantities that naively shouldn't know anything about one another (why should the vacuum energy of electron loops very precisely cancel the vacuum energy loops of another set of particles?) is a daunting task that requires either a miracle, or an as-yet-unknown set of new principles (symmetries or otherwise) that force mathematical relations inside the sum. 2) Lorentz invariance of the energy-momentum tensor guarantees that the only term available that satisfies the tensor structure is in fact of the form $P_{\text{vac}} g_{\mu\nu}$. You can of course choose another type of theory that is not Lorentz invariant (and consequently you are no longer dealing with General Relativity), but then you are generically presented with a new fine-tuning problem. A quantum vacuum that is not Lorentz invariant will typically generate relevant operators at our own scale that are constrained by experiment to be incredibly small. So you haven't really helped matters much, but some people try to do this with varying degrees of success. See e.g. Lifshitz gravity.
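The naturalness heuristic in point 1 is just arithmetic: summing terms of wildly different magnitudes, with no mechanism forcing cancellation, generically leaves a residue of the same order as the largest term. A toy illustration using only the schematic numbers from the answer:

```python
# schematic contributions from the toy sum in the answer
terms = [3, -5, 22, -120, 3042, -50242]

residue = sum(terms)                   # the "cosmological constant" of the toy
largest = max(abs(t) for t in terms)   # the biggest individual contribution

# without a symmetry enforcing cancellation, the residue sits at the same
# order of magnitude as the largest term, not 120 orders below it
assert abs(residue) > largest / 10
```

Getting a residue 10^120 times smaller than the largest term, as observation demands of the real sum, is exactly the "fantastic coincidence" described above.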
{ "domain": "physics.stackexchange", "id": 5500, "tags": "symmetry, dark-energy, cosmological-constant, fine-tuning" }
Reduction of graph chromatic number to hypergraph 2-colorability
Question: I'm following this paper titled "Coverings and colorings of hypergraphs" by Lovasz 1973, which is referenced in Garey and Johnson's Computers and Intractability, for the Set Splitting Problem. In this paper, the author tries to reduce graph chromatic number to hypergraph 2-colorability (same as set splitting). You can find the paper here: http://web.cs.elte.hu/~lovasz/old-papers.html I'm reproducing the reduction below. Let $G$ be a graph, $V(G) = \{x_1,\dots,x_n\}$. Let $G_i$ be an isomorphic copy of $G$, $(i=1,\dots,k)$, $V(G_i) = \{x_{i,1},\dots,x_{i,n}\}$ ($x_{i,v}$ is a point corresponding to $x_v$). Take a new point $y$ and let $f_v = \{x_{1,v},\dots,x_{k,v},y\}$. Define hypergraph $H$ by $$ H = E(G_1) \cup \dots \cup E(G_k) \cup \{f_1,\dots,f_n\}. $$ Then $H$ is 2-colorable if and only if $G$ is $k$-colorable. I don't understand this conclusion and the only example I've been able to come up with where this is true is $k = 1$. Answer: This is an error in Lovász' paper. However, a correct reduction is not too hard to come by. See this answer on math.se, in which a slight modification of the reduction you quote is shown to work.
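For concreteness, the quoted construction (well-defined as a construction, even though the answer notes it fails as a reduction) can be written out in a few lines. The encoding of the copies as pairs $(i, v)$ and the use of the string 'y' for the extra point are my own choices:

```python
def build_hypergraph(edges, n, k):
    """Lovász' construction: k disjoint copies of G, plus the sets
    f_v = {x_{1,v}, ..., x_{k,v}, y}.  Vertex x_{i,v} is encoded as the
    pair (i, v); 'y' is the extra point shared by all f_v."""
    H = []
    for i in range(1, k + 1):                      # edges of each copy G_i
        for (u, v) in edges:
            H.append(frozenset({(i, u), (i, v)}))
    for v in range(1, n + 1):                      # the sets f_1, ..., f_n
        H.append(frozenset({(i, v) for i in range(1, k + 1)} | {'y'}))
    return H

# Triangle (K3) with k = 2 copies: 2*3 edge sets plus 3 f_v sets
triangle = [(1, 2), (2, 3), (1, 3)]
H = build_hypergraph(triangle, 3, 2)
assert len(H) == 9                                 # 2*|E| + n hyperedges
assert len(set().union(*H)) == 7                   # 2*n copies plus y
```

Playing with small instances like this makes it easier to see where the claimed equivalence breaks down, as discussed in the linked math.se answer.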
{ "domain": "cs.stackexchange", "id": 10300, "tags": "complexity-theory, graphs, np-complete" }
Equivalent characterization of trees?
Question: I teach undergraduate graph theory and we tend to invent some weird (and false) characterizations of trees; recently I stumbled upon this one. Is the following true? $G$ is a tree if and only if $G$ is connected and between each pair of vertices of the same degree there is a unique path connecting them. If we remove the connectivity condition, then any tree together with an isolated vertex is a counterexample. What if we add connectivity? Obviously if $G$ is a tree, then the condition is true. But is it sufficient to show that $G$ is a tree? I tried to argue by contradiction, assuming it contains a cycle. Given any cycle $C$ in $G$, all vertices on $C$ must necessarily have distinct degrees. But that's all I can come up with. Any ideas on how to proceed? Edit: Some more thoughts. Assume, aiming for a contradiction, that $G$ contains a cycle $C$ and that $v_i, v_j$ are two vertices of the same degree (for $|V|\geq 2$, it cannot happen that all vertices in $V$ have distinct degrees). $v_i, v_j$ cannot lie on a common cycle (not only can they not both be on $C$; they cannot lie together on any cycle at all). This in turn means that $v_i, v_j$ belong to distinct $2$-connected components; denote them $X, Y$. Consider the graph $G'$ whose vertex set consists of the $2$-connected components of $G$, with an edge between $C_i$ and $C_j$ if and only if they share an articulation point. Let $X, C_1, C_2, \ldots, C_k, Y$ be a path in $G'$ between $X$ and $Y$... Now if the components were nontrivial (i.e. contained a cycle), this would imply a contradiction, but $G$ could be just a path, so here I am stuck again... There must be some assumption about the location of the cycle $C$ w.r.t. $v_i, v_j$ and the components $X, Y$. Here I am stuck again. Answer: This seems to be a correct characterization of trees; here is my proof of it (I hope there is no mistake). Let $G = (V, E)$ be an undirected graph. 
Denote by $(P)$ the property "$G$ is connected and between each pair of vertices of the same degree, there is a unique path connecting them." Suppose there exists a graph satisfying $(P)$ without being a tree. Let $G$ be such a graph with the least possible number of edges. There exist two vertices $u, v\in V$ such that $\deg_G(u) = \deg_G(v)$. By $(P)$, there is a unique path $p = (v_1, …, v_k)$ from $u = v_1$ to $v = v_k$. Since there is a unique path between any two vertices of $p$, deleting the edges of $p$ will create $k$ connected components $C_1, …, C_k$, such that $C_i$ contains $v_i$ (otherwise, there would be more than one path between two vertices of the path). Fact 1: for each $i \in \{1, …, k\}$ such that $|C_i| > 1$, $C_i\setminus \{v_i\}$ contains a vertex $w$ such that $\deg_G(w) = 1$. Proof of fact 1: suppose there is a $C_i$ of size $>1$ with no vertex of degree $1$ other than $v_i$. Then consider the graph $H$ obtained from $G[C_i]$ by adding one (if $i = 1$ or $i = k$) or two (if $1 < i < k$) dummy vertices, adjacent only to $v_i$. In $H$, all non-dummy vertices have the same degree as in $G$ (the dummy vertices are there to ensure that $v_i$ keeps the same degree). Moreover, there is a unique path between the two dummy vertices (if there are two). Note that there are very particular cases where $H$ has the same number of edges as $G$. This can only happen if both $u$ and $v$ are of degree $1$, and either $k = 2$, or $k = 3$ and $i = 2$. In the first case, $G$ is clearly a tree; in the second case, $C_2$ contains two vertices of the same degree $>1$ and we can use those vertices instead of $u$ and $v$. In all other cases, since $H$ satisfies $(P)$ and contains strictly fewer edges than $G$, it is a tree. By deleting the dummy vertices, it stays a tree. That means that $G[C_i]$ is a tree and contains at least two vertices of degree $1$. This is absurd given the hypothesis on $C_i$. 
Fact 2: let $C_i$ be a connected component of size $> 1$, and $w\in C_i, w\neq v_i$ a vertex of degree $1$. Then there is a unique path from $w$ to $v_i$. Proof of fact 2: suppose there are at least two paths from $w$ to $v_i$. WLOG, suppose $i \neq 1$ (if $i = 1$, just swap the roles of $1$ and $k$). Then consider $w'$, which is either $v_1$ (if $\deg_G(v_1) = 1$) or a vertex $w'\in C_1, w'\neq v_1$ of degree $1$. Then there would be at least two paths from $w$ to $w'$. This is absurd because $G$ satisfies $(P)$. Fact 3: for each $i \in \{1, …, k\}$, $G[C_i]$ is a tree. Proof of fact 3: either $|C_i| = 1$, in which case $G[C_i]$ is a trivial tree, or $|C_i| > 1$. In that case, consider the same construction $H$ as before. There is still a unique path between two vertices of the same degree $> 1$. Given fact 2, there is a unique path between two vertices of degree $1$. Again, we conclude that $H$ is a tree, and that $G[C_i]$ (obtained by removing the dummies) is a tree. Adding the path $p$ back to the union of the trees $G[C_i]$ cannot create a cycle, and therefore $G$ is a tree. We conclude by contradiction that $G$ is a tree if and only if $G$ satisfies $(P)$.
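As an empirical sanity check on the characterization, independent of the proof above, one can brute-force every graph on at most five vertices and confirm that a graph satisfies $(P)$ exactly when it is a tree:

```python
from itertools import combinations

def connected(n, adj):
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def count_paths(adj, s, t, limit=2):
    # count simple s-t paths, stopping once `limit` have been found
    count = 0
    def dfs(u, visited):
        nonlocal count
        if count >= limit:
            return
        if u == t:
            count += 1
            return
        for w in adj[u]:
            if w not in visited:
                dfs(w, visited | {w})
    dfs(s, {s})
    return count

def has_property_P(n, adj):
    # connected, and a unique path between every pair of same-degree vertices
    if not connected(n, adj):
        return False
    deg = {u: len(adj[u]) for u in adj}
    return all(count_paths(adj, u, v) == 1
               for u, v in combinations(range(n), 2) if deg[u] == deg[v])

for n in range(2, 6):
    pairs = list(combinations(range(n), 2))
    for mask in range(1 << len(pairs)):
        edges = [e for i, e in enumerate(pairs) if mask >> i & 1]
        adj = {u: set() for u in range(n)}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        # a connected graph on n vertices with n-1 edges is exactly a tree
        is_tree = connected(n, adj) and len(edges) == n - 1
        assert has_property_P(n, adj) == is_tree
```

Note that the pigeonhole argument in the question (a graph on at least two vertices always has two vertices of the same degree) guarantees the check in has_property_P is never vacuous for connected graphs.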
{ "domain": "cs.stackexchange", "id": 20628, "tags": "graphs, trees" }
Why do you need at least two rays to form an image?
Question: Why isn't one light beam enough to form an image, on your retina for example? Answer: When a sharp image is formed, every point on the object is reproduced in the image, and all the points around that point on the object are reproduced in the same relative positions in the image. In this first diagram the two grey rays $OXI$ and $OPI$ are the rays I used to find out where the top of the image is formed. They are called the construction rays and these are usually the only two rays that you see on a ray diagram. However, in reality all rays bounded by rays $OWI$ and $OZI$ go through the lens to form the image. The time it takes light to go from a point on the object $O$ to the corresponding point on the image $I$ is the same for all rays. If I put an obstruction in, i.e. reduce the aperture of the lens, the image is still formed but with diminished brightness because less of the light goes through to form the image. So only the rays bounded by rays $OXI$ and $OYI$ will get through the optical system to form the image of the top of the object. Having used the construction rays to find the position of the image, you can draw similar diagrams to show the rays which form other parts of the image, as shown below. In that diagram, if the top half of the lens were obscured, i.e. the construction rays did not get through to form the image, the image would still be formed but with reduced brightness.
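The answer locates the image with two construction rays; for a thin lens, the point that all rays through the aperture converge to is given by the thin-lens equation $1/f = 1/d_o + 1/d_i$. The answer does not invoke it explicitly, but a quick sketch shows the computation the diagram encodes:

```python
def image_distance(f, d_o):
    # thin-lens equation: 1/f = 1/d_o + 1/d_i  =>  d_i = f*d_o / (d_o - f)
    if d_o == f:
        raise ValueError("object at the focal point: rays emerge parallel")
    return f * d_o / (d_o - f)

# object 6 units from a lens of focal length 2: image forms 3 units behind it,
# regardless of which pair of rays through the aperture you happen to trace
assert image_distance(2.0, 6.0) == 3.0
```

Blocking part of the aperture changes how many rays reach that point (the brightness), not where the point is.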
{ "domain": "physics.stackexchange", "id": 28255, "tags": "optics, geometric-optics, lenses" }
Why does DMSO help facilitate halohydrin reactions?
Question: It would make sense, due to bond energy, that water molecules would have a hard time competing with $\ce{Br}$ in a bromonium ion intermediate. However, I was reading that doing the reaction in DMSO encourages the protonation of the water molecules. How? What is the rationale behind this? Answer: Alkenes predominantly give the corresponding 1,2-dihalo product when reacted with a halogen (e.g., $\ce{Br2}$) in an organic solvent such as chloroform or dichloromethane. If the reaction is performed in water, the product is the corresponding 1,2-halohydrin, because the polar water molecules cage the $\ce{Br-}$ ion. Organic molecules are only sparingly soluble in water as a solvent, so the halohydrin reaction is often done in a mixture of organic solvent and water. For example, a bromohydrin can be obtained by using N-bromosuccinimide as the electrophilic bromine source in $\ce{DMSO/H2O}$. Use of $\ce{DMSO/H2O}$ as a solvent has a few advantages: The bromine source, N-bromosuccinimide (NBS), is readily soluble in this solvent system [Ref. 1]. $\ce{DMSO}$ also participates in the reaction mechanism; Reference 1 describes a detailed labeling procedure using $\ce{^{18}O}$-labeled DMSO to show that the isotopic $\ce{^{18}OH}$ was incorporated in the $\ce{OH}$ group of the bromohydrin [Ref. 1]. Evidence also suggests the participation of water in the medium, probably due to the increased basicity of water in $\ce{DMSO}$ ($\mathrm{p}K_\mathrm{a}$ of water in $\ce{DMSO}$ is $31.4$ at $\pu{25 ^\circ C}$ [Ref. 2]). The suggested mechanism for the involvement of $\ce{DMSO}$ is as follows: References: Bromohydrin Formation in Dimethyl Sulfoxide: D. R. Dalton, V. P. Dutta, D. C. Jones, J. Am. Chem. Soc., 1968, 90(20), 5498–5501 (https://pubs.acs.org/doi/abs/10.1021/ja01022a030). Acidities of water and simple alcohols in dimethyl sulfoxide solution: W. N. Olmstead, Z. Margolin, F. G. Bordwell, J. Org. Chem., 1980, 45(16), 3295–3299 (https://pubs.acs.org/doi/abs/10.1021/jo01304a032).
{ "domain": "chemistry.stackexchange", "id": 10144, "tags": "organic-chemistry, halides" }
"Rock, Paper, Scissors" game
Question: This is a simple "Rock, Paper, Scissors" game I made in Python 3. Feel free to critique me and give suggestions on how can I improve my newbie coding skills. import os from random import randint def ask_player(): while True: print("Rock, paper or scissors?") ans = input(">") if ans in ('Rock', 'Paper', 'Scissors'): return ans def results_msg(x, y, result): message = f"{x} beats {y}. You {result}!" return message def comp_play(): comp_choice = randint(0, 2) if comp_choice == 0: comp_choice = 'Rock' elif comp_choice == 1: comp_choice = 'Paper' else: comp_choice = 'Scissors' return comp_choice def results(wins, losses, draws): player_choice = ask_player() comp_choice = comp_play() if player_choice == comp_choice: print("Draw. Nobody wins or losses.") draws += 1 elif player_choice == 'Rock': if comp_choice == 'Paper': print(results_msg(comp_choice, player_choice, 'lost')) losses += 1 else: print(results_msg(player_choice, comp_choice, 'won')) wins += 1 elif player_choice == 'Paper': if comp_choice == 'Rock': print(results_msg(player_choice, comp_choice, 'won')) wins += 1 else: print(results_msg(comp_choice, player_choice, 'lost')) losses += 1 else: if comp_choice == 'Rock': print(results_msg(comp_choice, player_choice, 'lost')) losses += 1 else: print(results_msg(player_choice, comp_choice, 'won')) wins += 1 return wins, losses, draws def play_again(): while True: print("\nDo you want to play again?") print("[1] Yes") print("[2] No") ans = input("> ") if ans in '12': return ans def main(): wins = 0 losses = 0 draws = 0 while True: os.system('cls' if os.name == 'nt' else 'clear') print(f"Wins: {wins}\nLosses: {losses}\nDraws: {draws}") wins, losses, draws = results(wins, losses, draws) if play_again() == '2': break if __name__ == '__main__': main() The follow-up is ""Rock, Paper, Scissors" game - follow-up". Answer: Welcome back! There are a lot of good things to say about your program, and a few not-so-good. 
First, the good: The code is clean, well laid-out, and generally written in a Pythonic style. You have used functions to break things down. You have structured your program as a module, which should make it easier to test. Here are some things that I think could be improved: Your function names need a little work. Consider this code: player_choice = ask_player() comp_choice = comp_play() The object is to get two choices, one made by the player and the other made by the computer. Why are the two names so different? ask_player doesn't sound like getting a player's choice. It sounds like a generalized function that asks the player something and gets a response (i.e., input()). On the other hand, if player is spelled out why do you abbreviate the opponent in comp_play? Using get is not always a good thing. It's one of the times when a function or method name doesn't need a verb in it - because it is frequently implicit when you are doing is_... or has_... or get_... or set_.... I don't think you need to spell out get_player_choice and get_computer_choice, but certainly player_choice and computer_choice would be appropriate. This same logic applies to results. Instead of calling a function named results, why not call play_once? Or one_game? It's obvious from the code in main what is going on, but the function name doesn't really match the nature of the "step" being executed. Your code breakdown is uneven. Consider this code: def main(): wins = 0 losses = 0 draws = 0 while True: os.system('cls' if os.name == 'nt' else 'clear') print(f"Wins: {wins}\nLosses: {losses}\nDraws: {draws}") wins, losses, draws = results(wins, losses, draws) if play_again() == '2': break Let's break those lines down. The key point I want to make is to keep neighboring statements at a similar level of abstraction. 
First, you initialize your variables: wins = 0 losses = 0 draws = 0 Because you are not using a class, and are not using globals (which would be appropriate in this scenario, IMO), you are stuck with doing variable initialization here. I suggest that you make this consistent with how you update the variables after each game: wins, losses, draws = starting_scores() Now, starting_scores could just return 0,0,0 or it could load from a saved-game file. But it makes the initialization sufficiently abstract, and it also spells out what you are doing. Next, you loop: while True: ... if play_again() == '2': break The while True ... break could be rewritten to use a boolean variable. That's not super-critical, since the value of that variable is determined at only a single location. I consider the break to be equivalent in this case. However, the comparison == '2' is not acceptable! Why? Because that's a detail, and your function name play_again should take care of that detail for you! Don't ask a question and then interpret the answer. Make your question-asking code handle the interpretation for you. Obviously play_again is short for "do you want to play again?" and '2' is not a valid answer. True or False are valid answers, so the code should look like: while True: ... if not play_again(): break Finally, the inside of your loop has the same problem: os.system('cls' if os.name == 'nt' else 'clear') print(f"Wins: {wins}\nLosses: {losses}\nDraws: {draws}") wins, losses, draws = results(wins, losses, draws) What are you doing here? Well, you are clearing the screen, showing a summary of the games played, and playing one more round of the game. So say that! clear_screen() show_statistics(wins, losses, draws) wins, losses, draws = play_one_round(wins, losses, draws) Use appropriate data structures. Your main code passes three variables to your play-game code. That code then returns three data items in a tuple, which you unpack into three variables. 
In fact, you never use one of those variables without also having the others at hand. This should tell you that you are dealing with one aggregate data item, instead of three independent pieces of data. If that's true, just treat the scores as a single item: def main(): scores = starting_scores() while True: clear_screen() show_statistics(scores) scores = rock_paper_scissors(scores) if not play_again(): break Similarly, you can treat the scores as an aggregate until you have to update them: # NB: was `results(wins, losses, draws):` def rock_paper_scissors(scores): player = player_choice() computer = computer_choice() outcome = game_outcome(player, computer) show_results(outcome, player, computer) new_scores = update_scores(scores, outcome) return new_scores At this point, the "play one game" has also become a collection of abstract statements. But notice that I'm treating scores as an opaque blob that I don't need to deal with: I just pass it along to the lower levels, with another data item describing the update to make. Be consistent! I notice that when asking the player to choose rock, paper, or scissors, you allow them to type in an answer. But given a Yes/No question, you require a selection of either 1 or 2. That's consistently surprising. When I ran your code, I wanted to keep typing my answers. (I kept hitting 'y' to play again.) I suggest you either present the Rock/Paper/Scissors options as a menu, or present the Yes/No options as a string input and look for 'y' or 'n'. Making the interface that much more consistent will be an improvement. Use data, or code. Not both. This one is a little subtle, but take a look: if comp_choice == 'Paper': print(results_msg(comp_choice, player_choice, 'lost')) losses += 1 else: print(results_msg(player_choice, comp_choice, 'won')) wins += 1 What's significant here is that you have an if/then statement that decides whether you won or lost. And then you pass that into your results_msg function as a string parameter. 
The result of this is that you have a string parameter to be substituted that gives information you already knew: whether the player won or lost. Let's look at results_msg: def results_msg(x, y, result): message = f"{x} beats {y}. You {result}!" return message You have to consider that Python f-strings are code. And they're a pretty compact form of code, compared to the horror of str.format(). So writing: print(results_msg(player_choice, comp_choice, 'won')) is not really an improvement on writing: print(f"{player} beats {computer}. You won!") It's not clearer. It's not shorter. It does avoid problems with changing the text of the message, although there isn't much text in the message to change. I don't think you need to hoist the f-string up into the calling function. I do think you should not pass 'won' or 'lost' as a parameter: you already decided you won or lost. Call a separate function instead. if ... win_message(player_choice, comp_choice) else: lose_message(player_choice, comp_choice) Note that this will appear to conflict with the code structure I showed above- because in that code structure, I chose to treat the result as data, not code. I'm not saying you have to use data, or that you have to use code. I'm saying that you should pick one and stick with it. If you determine your outcome as code, go ahead and hard-code the outcome. If you determine your outcome as data, go ahead and treat it as data. And as a side note, strings with substitutions in them make it hard to do i18n. So there's nothing wrong with having an array of totally spelled out messages at the bottom. It also gives a bit more "flavor" if you customize the verbs: "Rock breaks scissors. You won!", "Scissors cuts paper. You won!", "Paper covers rock. You won!", ...
{ "domain": "codereview.stackexchange", "id": 33863, "tags": "python, beginner, python-3.x, rock-paper-scissors" }
pocketsphinx recognizer.py has initial incorrect output
Question: I am using pocketsphinx and it works great for recognition. However, when recognizer.py first starts, it outputs one or two random words from the dictionary, as if there were a queue that has to be cleared. This happens even if the microphone is muted. Any idea where that might be originating? Is there some way to suppress that? Originally posted by dan on ROS Answers with karma: 875 on 2013-12-20 Post score: 0 Answer: I've not seen this before -- but I imagine it is likely an issue with either gstreamer or the vader gstreamer plugin which is used to segment the audio input. There are currently no built-in options to handle such a problem, but it could probably be easily added by storing the time at which the pipeline was enabled and making sure that some time has passed when an utterance arrives. Originally posted by fergs with karma: 13902 on 2013-12-21 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by dan on 2013-12-23: OK, I'll try that.
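The suggested workaround, storing the time at which the pipeline was enabled and discarding utterances that arrive too soon, could be sketched like this in Python; the class name and threshold are hypothetical, not part of the actual recognizer.py:

```python
import time

# Hypothetical startup guard: drop any utterance that arrives within
# the first couple of seconds after the pipeline is enabled, since
# those are likely spurious segmentation artifacts.
MIN_STARTUP_DELAY = 2.0  # seconds

class UtteranceFilter:
    def __init__(self, now=time.monotonic):
        self._now = now
        self._start = now()  # remember when the pipeline came up

    def should_accept(self, utterance):
        """Reject empty results and anything too close to startup."""
        if not utterance:
            return False
        return (self._now() - self._start) >= MIN_STARTUP_DELAY

# Simulate with a fake clock so the behavior is deterministic:
# the pipeline starts at t=0; utterances arrive at t=0.5 and t=3.0.
clock = iter([0.0, 0.5, 3.0])
f = UtteranceFilter(now=lambda: next(clock))
print(f.should_accept("hello"))  # too early, rejected
print(f.should_accept("hello"))  # past the delay, accepted
```

In the real node the check would sit wherever recognized utterances are published, so early garbage never reaches subscribers.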
{ "domain": "robotics.stackexchange", "id": 16510, "tags": "ros, pocketsphinx" }
Find all pairs in an array that sum to a given number
Question: Assume you have an array of random integers and a sum value. Find all pairs of numbers from the array that sum up to the given sum in O(n) time. Find all distinct pairs. (1,2) and (2,1) are not distinct. import java.util.HashSet; public class PairsSummingToElement { public static void main(String[] args) { PairsSummingToElement e = new PairsSummingToElement(); int[] input = new int[] { 2, 5, 3, 7, 9, 8 }; int sum = 11; HashSet<Pair> result = e.findAllPairs(input, sum); for (Pair p : result) { System.out.println("(" + p.getElement1() + "," + p.getElement2() + ")"); } } public HashSet<Pair> findAllPairs(int[] inputList, int sum) { HashSet<Integer> allElements = new HashSet<Integer>(); HashSet<Integer> substracted = new HashSet<Integer>(); HashSet<Pair> result = new HashSet<Pair>(); for (int i : inputList) { allElements.add(i); substracted.add(i - sum); } for (int i : substracted) { if (allElements.contains(-1 * i)) { addToSet(result, new Pair(-i, i + sum)); } } return result; } public void addToSet(HashSet<Pair> original, Pair toAdd) { if (!original.contains(toAdd) && !original.contains(reversePair(toAdd))) { original.add(toAdd); } } public Pair reversePair(Pair original) { return new Pair(original.getElement2(), original.getElement1()); } } class Pair { private int element1; private int element2; public Pair(int e1, int e2) { element1 = e1; element2 = e2; } public int getElement1() { return element1; } public int getElement2() { return element2; } public int hashCode() { return (element1 + element2) * element2 + element1; } public boolean equals(Object other) { if (other instanceof Pair) { Pair otherPair = (Pair) other; return ((this.element1 == otherPair.element1) && (this.element2 == otherPair.element2)); } return false; } } Answer: No comments on the time complexity, I do have a bunch of other comments: Code to interfaces, not implementations It's better to use Set<Integer> allElements = new HashSet<Integer>() (side note: guess you're not on >= Java 7, 
since you can use diamond operators otherwise) so that users of allElements do not need to know that it actually is a HashSet. This is just good practice. The lesson here is that it allows the programmer to replace the implementation with another in the future when required. Does not contain, then add to a Set In the following code block: if (!original.contains(toAdd) && !original.contains(reversePair(toAdd))) { original.add(toAdd); } I think the extra check !contains() is not really required because Set.add()'s Javadoc says: If this set already contains the element, the call leaves the set unchanged and returns false. So, even if original did contain toAdd, calling add() will leave the set unchanged anyway. I guess this is useful when you're running on a large set of inputs so that there is some short-circuiting done, hence this is strictly just 'food-for-thought' and not something to frown upon. hashCode() I'm guessing there's an implied range of inputs, such as all must be positive, right? Because your calculation (element1 + element2) * element2 + element1 will yield the same hash codes when element1 + element2 equals 1, and having a bunch of negative and positive integers (or just 0, 1) will 'break' this easily. You may want to update your question with any assumptions on the range of inputs. How to reverse a Pair I think that you can move your method reversePair(Pair) into the Pair class itself as such: public final class Pair { // also declared as final private final int element1; // also declared as final private final int element2; // also declared as final public Pair(int element1, int element2) { this.element1 = element1; this.element2 = element2; } ...
public Pair getReverse() { return new Pair(element2, element1); } } Using it then becomes slightly easier: if (!original.contains(toAdd.getReverse())) { original.add(toAdd); } Override toString() Oh yeah, given how you want to print the contents of Pair objects in the end, why not just override toString() too? :)
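As an aside, the same one-pass idea, checking whether sum - x has already been seen for each element x, is compact in Python; this is an alternative sketch for comparison, not the reviewed Java code:

```python
def find_pairs(values, target):
    """Return the distinct unordered pairs summing to target, in one pass."""
    seen = set()
    pairs = set()
    for x in values:
        complement = target - x
        if complement in seen:
            # Store each pair in a canonical order so that
            # (2, 9) and (9, 2) collapse into one entry.
            pairs.add((min(x, complement), max(x, complement)))
        seen.add(x)
    return pairs

print(find_pairs([2, 5, 3, 7, 9, 8], 11))  # {(2, 9), (3, 8)}
```

Storing pairs in sorted order sidesteps the reversed-pair problem entirely, which is the same effect the getReverse() check achieves in the Java version.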
{ "domain": "codereview.stackexchange", "id": 29938, "tags": "java, algorithm" }
Can the universe break like a balloon that pops?
Question: I have been investigating inflation and the big difference in the theoretically predicted value of the cosmological constant and the actually measured value of it. There would be 120 orders of magnitude in difference for the latter, which became known as 'the largest discrepancy between theory and experiment in all of science'. What we measure today is a very slightly accelerating expansion of the universe, which is ascribed to the negative pressure of dark energy. According to the theoretical prediction, dark energy density would be so great and the expansion of the universe would accelerate so fast that everything would be torn apart almost immediately (known as the Big Rip). So here comes my idea (which is, if I am right, a form of quintessence): what if there was a phenomenon (a field with negative potential?) that counterbalances the theoretically predicted negative pressure of dark energy with positive pressure? The main way in which I can imagine this is that the universe is like a rubber membrane with a certain elasticity (or plasticity). During inflation the membrane would be in the linear range of the stress-strain curve and expand fast. During the post-inflationary phase the expansion would be much slower because the membrane is partially plastic due to strain hardening. But if the universe keeps expanding, there will be a point in the stress-strain curve where the membrane breaks, followed by an immediate Big Rip (much faster than the classical Big Rip scenario). This would be the analogue of a balloon that pops. So is this a scenario that has been proposed in the literature, next to the Big Rip, the Big Crunch, and the Big Freeze scenarios for the future of the universe: the Big Break? And if not, are there good reasons to reject this possibility? I know that inflation is classically explained by an inflaton field. But that proposal does not seem to solve the infamous discrepancy with respect to dark energy. 
I also found there is a theory called the Big Pop, but that appears to be about the beginning of the universe, not the end of it. Answer: According to the theoretical prediction, dark energy density would be so great and the expansion of the universe would accelerate so fast that everything would be torn apart almost immediately (known as the Big Rip). Incorrect. Even a very large $\Lambda$ would just lead the Universe to grow exponentially. A Big Rip means the expansion is infinite in a finite time. A parameter called $w$ needs to satisfy $w<-1$ for this to happen, whereas $\Lambda$ dominance defaults to $w=-1$. what if there was a phenomenon... that counterbalances the theoretically predicted negative pressure of dark energy Something must make the true $\Lambda$ surprisingly small. There are two main ways this could happen: What you describe, which requires very nearly but not perfectly matched contributions (you try finding a person exactly $1-10^{-120}$ times as strong as another person; this is called a fine-tuning problem.) That the formula for $\Lambda$ is wrong. Basically, it's an integral from $0$ to $\infty$ whose integrand we probably only understand well for small values. the universe is like a rubber membrane with a certain elasticity I think there are some finite-elasticity spacetime models, although none are currently very popular, but surprisingly it doesn't look like any existing Physics SE questions concern them. During inflation the membrane would be in the linear range of the stress-strain curve In the consensus model the time-dependence of the universe's expansion depends only on density and a curvature parameter. I can't speak to how finite-elasticity models adjust this, but I doubt they modify inflation's explanation. is this a scenario that has been proposed in the literature, next to the Big Rip, the Big Crunch, and the Big Freeze scenarios for the future of the universe: the Big Break? 
And if not, are there good reasons to reject this possibility? Again, I'm no expert, but my experience with field theory suggests that what would instead happen in the equations is that processes that would otherwise cause infinite expansion would only cause finite expansion. If I can pick an analogy, it's like how arbitrary kinetic energy still doesn't get you beyond the speed of light in special relativity. an inflaton field... does not seem to solve the infamous discrepancy with respect to dark energy These problems might not warrant the same solution. I also found there is a theory called the Big Pop Judging by the Figure 2 caption here, that idea is something different: it applies to a combination of expansion and contraction, and not to something as simple as the usual scale factor.
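The w = -1 versus w < -1 distinction in the answer can be illustrated numerically: for a single flat-universe fluid, the Friedmann equation gives da/dt = H0 * a**(1 - 3*(1+w)/2), and for w < -1 the scale factor diverges in finite time (the Big Rip), while w = -1 gives plain exponential growth. The crude Euler integrator and all names below are illustrative assumptions, not from the answer:

```python
def time_to_reach(w, a_target, h0=1.0, dt=1e-4, t_max=10.0):
    """Euler-integrate da/dt = h0 * a**(1 - 1.5*(1 + w)) from a = 1
    and return the first time a exceeds a_target (None if never)."""
    a, t = 1.0, 0.0
    exponent = 1.0 - 1.5 * (1.0 + w)
    while t < t_max:
        a += dt * h0 * a ** exponent
        t += dt
        if a >= a_target:
            return t
    return None

# Lambda-like dark energy (w = -1): a grows like exp(h0*t),
# so a = 100 is reached near t = ln(100), about 4.6.
print(time_to_reach(-1.0, 100.0))

# Phantom dark energy (w = -1.5): a diverges at a finite rip time
# (about 4/3 here), so even a huge target is reached almost at once.
print(time_to_reach(-1.5, 1e9))
```

The point is qualitative: the phantom case reaches an absurdly large scale factor sooner than the exponential case reaches a modest one, because its solution blows up at finite t.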
{ "domain": "physics.stackexchange", "id": 91351, "tags": "cosmology, big-bang, stress-strain, cosmological-inflation, cosmological-constant" }
Not Seeing Robot in rviz
Question: I'm working through the URDF Tutorial and one of the first things to do is to load a example file, 01-myfirst.urdf, into rviz for visualization. So I load the example file as is, run the rviz launch file, rviz opens up, and the rviz grid is empty -- no robot! I do get a warning message in rviz that says: No tf data. Actual error: Fixed Frame [map] does not exist What does this mean? What's missing? Why can't I see the robot? P.S. I'm running Hydro. Originally posted by daalt on ROS Answers with karma: 159 on 2014-01-04 Post score: 0 Answer: I would think this is a duplicate of no robot visible in rviz for urdf tutorial. Have you used the search? Originally posted by gvdhoorn with karma: 86574 on 2014-01-04 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by daalt on 2014-01-04: Thanks. That was it! Yes, I used the search but I did not come across that post.
{ "domain": "robotics.stackexchange", "id": 16572, "tags": "rviz, urdf, ros-hydro" }
On Bell inequality and bound entangled states
Question: I have recently seen some presentation slides of Michał Horodecki (slide number 77) in which he discussed the following conjecture. Bound entangled states satisfy all Bell inequalities The conjecture is not true for multi-partite systems, and the references are given in the talk itself. I believe that by now, the bipartite case is also settled, one way or the other. Bell's theorem has some applications in computer science areas, like cryptography (non-local games). Is there some similar application of (the falsification of) the above conjecture? I mean, what are the interesting applications which could follow from disproving the above conjecture? Thanks in advance for any help/suggestions. To the Moderators: I do not know whether I should have asked this question in Physics or in Cstheory. Considering the fact that there can be physical applications also, I am posting the question here. Please suggest and/or take the required actions. Answer: This very recent paper: Negativity and steering: a stronger Peres conjecture makes a stronger version of Peres's original conjecture, which would appear to imply that the conjecture is still open.
{ "domain": "physics.stackexchange", "id": 13617, "tags": "quantum-mechanics, quantum-information, quantum-entanglement, bells-inequality" }
Why does it take a projectile as long to get to its apex as it does to hit the ground?
Question: I was once asked the following question by a student I was tutoring, and I was stumped by it: When one throws a stone, why does it take the same amount of time for the stone to rise to its peak as to fall back down to the ground? One could say that this is an experimental observation; after all, one could envisage, hypothetically, a situation where this is not the case. One could say that the curve that the stone describes is a parabola, and the two halves are symmetric around the perpendicular line through its apex. But surely the description of the motion of a projectile as a parabola was the outcome of observation; and even if it moves along a parabola, it may (putting observation aside) move along it with its descent speed different from its ascent, or varying; and this, in part, leads to the observation or is justified by Newton's description of time - it flows equably everywhere. It's because of the nature of the force: it's independent of the motion of the stone. I prefer the last explanation - but is it true? And is this the best explanation? Answer: I think it's because both halves of a projectile's trajectory are symmetric in every aspect. The projectile going from its apex position to the ground is just the time-reversed version of the projectile going from its initial position to the apex position.
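The symmetry also drops straight out of the constant-acceleration kinematics: with v(t) = v0 - g t the apex (v = 0) is reached at t = v0/g, while y(t) = v0 t - g t^2/2 returns to zero at t = 2 v0/g, so ascent and descent take equal times. A quick numeric check in Python:

```python
def flight_times(v0, g=9.81):
    """Return (time to apex, total flight time) for a launch straight up
    from ground level at speed v0, ignoring air resistance."""
    t_apex = v0 / g                  # where v(t) = v0 - g*t vanishes
    t_total = 2.0 * v0 / g           # nonzero root of v0*t - 0.5*g*t**2
    return t_apex, t_total

up, total = flight_times(20.0)
print(up, total - up)  # ascent time and descent time are equal
```

With air resistance the symmetry is broken: drag opposes the motion in both directions, and the descent generally takes longer than the ascent, which is one way to see that the equal-times result really does rest on the force being independent of the motion.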
{ "domain": "physics.stackexchange", "id": 19832, "tags": "newtonian-mechanics, symmetry, projectile, time-reversal-symmetry" }
Could super-caffeinating somebody's bloodstream be dangerous?
Question: I'm currently planning out an RPG based on Misfits. Basically young offenders get super powers. I want one of the villains to have a really lame power that they use to become incredibly dangerous. I thought about using a power to caffeinate drinks. If someone used that to super-caffeinate somebody's bloodstream, could that hurt or kill somebody? Or would it not cause any real damage? Answer: In extreme doses, caffeine can - like almost any substance - cause death. Wikipedia has this to say on the subject: Extreme overdose can result in death.[59][60] The median lethal dose (LD50) given orally is 192 milligrams per kilogram in rats. The LD50 of caffeine in humans is dependent on individual sensitivity, but is estimated to be about 150 to 200 milligrams per kilogram of body mass or roughly 80 to 100 cups of coffee for an average adult.[61] Though achieving lethal dose of caffeine would be difficult with regular coffee, it is easier to reach high doses with caffeine pills, and the lethal dose can be lower in individuals whose ability to metabolize caffeine is impaired. You may wish to consult the references mentioned in the Wikipedia article for further details (As we all know, one should not trust Wikipedia too much). Note that caffeine administered directly to the bloodstream will be more dangerous than the same amount ingested. Information on caffeine toxicity can also be found at Medscape. If you need concrete examples, Holmgreen, Norden-Petterson and Ahlner have published case reports on four recent cases of caffeine-related fatalities. (3) To cover all bases, it should be said that none of the information mentioned should be used to estimate a safe level of caffeine intake.
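The quoted LD50 figures translate into a simple back-of-the-envelope calculation; the 100 mg of caffeine per cup used below is a rough assumption for illustration, not a number from the answer:

```python
def estimated_lethal_dose_mg(body_mass_kg, ld50_mg_per_kg=150.0):
    """Lower-end LD50 estimate for oral caffeine in humans
    (150-200 mg per kg of body mass, per the quoted range)."""
    return body_mass_kg * ld50_mg_per_kg

def equivalent_cups(dose_mg, mg_per_cup=100.0):
    """Very rough conversion assuming ~100 mg of caffeine per cup."""
    return dose_mg / mg_per_cup

dose = estimated_lethal_dose_mg(70.0)   # a 70 kg adult
print(dose, equivalent_cups(dose))      # 10500.0 mg, 105.0 cups
```

The result is consistent with the article's "roughly 80 to 100 cups" order of magnitude, which is why a power that merely caffeinates drinks is lame, but one that dumps the equivalent dose directly into the bloodstream is not.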
{ "domain": "biology.stackexchange", "id": 2144, "tags": "pharmacology, dose" }
Why is the sign of the tangential component of acceleration negative in this problem?
Question: Since the train is slowing down at a constant velocity, shouldn't the acceleration equal zero? Why is it negative in this case? Answer: The problem doesn't say the train "is slowing down at a constant velocity." That doesn't even make sense --- either it's slowing down, and the velocity is not constant, or it's not slowing down, and the velocity is constant. It can't be both accelerating and at a constant velocity, since the very definition of acceleration is change in velocity with respect to time. Imagine you are driving in your car, dead north, which we'll call the positive direction. If you slam on the brakes, which direction will the acceleration be in? Since the car is slowing down, your acceleration will be in the South/negative direction. The brakes are decreasing the car's velocity, which is the same thing as increasing it in the negative direction. Thus the sign of acceleration is negative. However, if you have pushed the accelerator to the floor, that would be a positive acceleration, since it is accelerating your car in its direction of positive velocity. The same concept is true of the train here. Since it is slowing down over some frame of time, and it is moving in the positive direction, the acceleration will be negative. Since we're talking about tangential components, the fact that the train is in rotational motion isn't even relevant.
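The sign convention can be checked with the constant-acceleration relation v(t) = v0 + a t: a negative a steadily reduces a positive velocity, which is exactly the braking situation described in the answer. A minimal Python illustration:

```python
def velocity(v0, a, t):
    """Velocity under constant acceleration: v = v0 + a*t."""
    return v0 + a * t

v0 = 30.0   # m/s, moving north (taken as the positive direction)
a = -2.0    # m/s^2, braking: the acceleration points south

for t in (0.0, 5.0, 10.0, 15.0):
    print(t, velocity(v0, a, t))
# The speed drops from 30 m/s to 0 at t = 15 s: a negative tangential
# acceleration and a decreasing positive velocity, as in the problem.
```

Had the train been speeding up instead, a would carry a positive sign and the same formula would give a growing velocity.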
{ "domain": "physics.stackexchange", "id": 32426, "tags": "kinematics, acceleration, velocity" }
What is the ICGC normalized_read_count?
Question: I downloaded gene expression data (exp_seq) from the ICGC file browser. For each sample and gene, the file contains a normalized_read_count. What is that value? I couldn't find any information on the ICGC website. The values are definitly too low for TPM. Answer: By reading this thread on seqanswers and by comparing the data to TCGA, I figured out raw_read_count is the read count which you use as input for e.g. DESeq2. It has been estimated using RSEM normalized_read_count is equivalent to the scaled_estimate from TCGA. This is the estimated fraction of transcripts made up by a given gene, as estimated by RSEM. Multiplying this value with 1e6 yields the TPM.
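Following the answer, converting the normalized_read_count column to TPM is a single scaling step; the helper name below is illustrative:

```python
def normalized_read_count_to_tpm(values):
    """ICGC's normalized_read_count is RSEM's estimated fraction of
    transcripts per gene (TCGA's scaled_estimate); multiplying by 1e6
    yields TPM, so the TPM values of one sample sum to roughly 1e6."""
    return [v * 1e6 for v in values]

# Toy example: three genes whose transcript fractions sum to 1.
fractions = [0.5, 0.3, 0.2]
tpm = normalized_read_count_to_tpm(fractions)
print(tpm)       # [500000.0, 300000.0, 200000.0]
print(sum(tpm))  # 1000000.0
```

The sum-to-1e6 property is also a handy sanity check that a downloaded sample really contains scaled estimates rather than raw counts.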
{ "domain": "bioinformatics.stackexchange", "id": 371, "tags": "rna-seq, public-databases" }
Yet another bash backup script, using rsync --link-dest
Question: I’ve written this bash backup script. It uses the --link-dest option of rsync; that way, the user has access to the backed-up data at any time stamp with relatively affordable data overhead. Any duplicated data should be hard linked; the overhead mostly comes from the directory structure. It’s mostly based on this very nice guide by Mike Rubel and various other contributors, as well as a few answers from Unix SE and other web references. The script is meant to be run at regular (typically, hourly) intervals with cron, and other scripts are in charge of safely keeping daily/weekly backups. Of course, I want to minimize the size of the backups. I also want to delete older backups before newer ones. To do so, I build ${backups}, an array of every backup1 sorted by modification date from newest to oldest. Thus ${backups[0]} (if it exists) is the latest complete backup and ${backups[@]:$n} for some integer $n lists every backup but the $n newest (from 0 to $n - 1). As always with bash scripts, I’m especially afraid of quoting issues, but any remark is welcome. I in particular quite dislike how I use find with both -mindepth and -maxdepth, but couldn’t find any way around it. Most of the “standard” commands, such as cut, sort or grep, are provided by BusyBox 1.16.1 and may not have every option available on most recent Linux distributions. cut, in particular, cannot handle NUL-delimited input, hence the ugly tr trick. #!/bin/bash # Check if we are root (no one else should run this) # ================================================== if (( $(id -u) != 0 )); then echo "/ ! \ Only root can run this script. Backup cancelled."
>&2 exit 3 fi # Functions # ========= # Given a date (or a placeholder), returns the corresponding hourly backup name function scheme { local token="$@" echo "hourly ${token}" } # Parameters # ========== # password_file=/etc/backup/passwd # Network yet untested backup_directory=/path/to/backups # ABSOLUTE PATH required source_directory=/path/to/data backup_count=24 # number of backups to keep # Name new daily backup # ===================== new_backup="${backup_directory}"/$(scheme $(date +"%-d-%m-%Y a %Hh%M") ) # Check that we can run # ===================== # Check that we don’t overwrite anything if [[ -e "${new_backup}" ]]; then echo "/ ! \ ${new_backup} already exists! We don’t want to overwrite it; backup cancelled." >&2 exit 4 fi # Create the directory which contains all the backups if it doesn’t exist yet if [[ ! -e "${backup_directory}" ]]; then echo "Creating directory ${backup_directory}" mkdir -p "${backup_directory}" elif [[ ! -d "${backup_directory}" ]]; then echo "/ ! \ Destination ${backup_directory} already exists but is not a directory! Backup cancelled." 
>&2 exit 4 fi # Create a temporary working directory # ==================================== temp_backup=$(mktemp -d -p "${backup_directory}") # Manage previous backups # ======================= # List every previous backup and put it into an array backups=() while read -r -d ''; do backups+=("${REPLY}") done < <( find "${backup_directory}" -mindepth 1 -maxdepth 1 -name "$(scheme \*)" -printf "%A@:%p\0" | \ sort -z -t: -n -r | \ tr '\n\0' '\0\n' | cut -d: -f2 - | tr '\n\0' '\0\n' \ ) # If it exists, select the latest backup as a reference for rsync --link-dest if (( ${#backups[@]} > 0 )); then latest_backup="${backups[0]}" else latest_backup="" fi # Compute the backups to remove # We add one backup before cleaning up # Thus, we keep $backup_count - 1 from the ${backups[@]} old_backups=("${backups[@]:${backup_count} - 1}") # Cleanup function # ================ # We now have everything we need to define a cleanup function # It will be called only if the backup succeeds function cleanup { echo echo "Cleaning up" echo "===========" echo if (( ${#old_backups[@]} > 0 )); then echo "Deleting ${#old_backups[@]} backup(s)!" echo # echo rm -rf "${old_backups[@]}" (set -x; rm -rf "${old_backups[@]}") else echo "There is nothing to delete." fi } # User feedback # ============= echo "Backing up ${source_directory}" echo "Backing up ${source_directory}" | sed "s/./=/g" echo echo "New backup: ${new_backup}" # Setting up rsync options # ======================== RSYNC_FLAGS=("--archive" "--stats") # Set rsync --password-file if the matching variable is defined and # we are using rsync (::) **YET UNTESTED** if [[ "${password_file}" != "" && "${source_directory}" =~ "::" ]]; then RSYNC_FLAGS+=("--password-file=${password_file}") fi # Use rsync to backup. If a previous backup exists, # uses --link-dest to hard link to it. 
if [[ "${latest_backup}" != "" ]]; then echo "Previous backup: ${latest_backup}" RSYNC_FLAGS+=("--link-dest=${latest_backup}") else echo "This is the first backup ever, it might take a while." fi echo # Backing-up # ========== # TODO Check if something was actually written before creating a new backup # TODO Add an exclusion file (set -x; rsync "${RSYNC_FLAGS[@]}" "${source_directory}" "${temp_backup}") && \ (set -x; mv "${temp_backup}" "${new_backup}") && cleanup echo Actually the “name of the repository which contains the backup”, of course. Answer: The script is nicely written. I only have minor suggestions that are barely more than nitpicks. Function declaration style Instead of this: function scheme { The generally preferred style for declaring functions is this: scheme() { Redundant local variable The local variable token is redundant here: function scheme { local token="$@" echo "hourly ${token}" } You could simplify to: echo "hourly $@" Simplify condition This condition can be simplified: if (( ${#backups[@]} > 0 )); then latest_backup="${backups[0]}" else latest_backup="" fi To just this: latest_backup="${backups[0]}" Instead of this: if [[ "${password_file}" != "" ]]; then You can omit the != "": if [[ "${password_file}" ]]; then Don't repeat yourself The echo statement is duplicated for the sake of underlining: echo "Backing up ${source_directory}" echo "Backing up ${source_directory}" | sed "s/./=/g" It would be good to create a helper function for this purpose: print_heading() { echo "$@" echo "$@" | sed "s/./=/g" }
{ "domain": "codereview.stackexchange", "id": 19781, "tags": "bash" }
What are these lentil like eggs? (Utah, United States)
Question: I've found these (what appear to be eggs) attached to an apple tree in central Utah USA. The eggs are less than 1/2cm in length, have a waxy appearance and are attached to the branch at one end. Visually and tactilely they are quite similar to lentils. I thought they were Box Elder bug eggs, but after a google image search I am convinced that is not what they are. What are they? Answer: They look just like katydid eggs. Image for comparison from the Missouri Botanical Garden:
{ "domain": "biology.stackexchange", "id": 10521, "tags": "species-identification, zoology, entomology" }
A system of two quantum harmonic oscillators
Question: I have two quantum mechanical harmonic oscillators with the same frequency. The Hamiltonian of the combined system is: $$ H= \hbar \omega (2a^\dagger a+b^\dagger b+2)$$ In attempting to find the energy of the combined system, I denoted the energy eigenkets to be $\lvert n_1\rangle$ and $\lvert n_2\rangle$ for the first and the second oscillator respectively so that the energy eigenket of the combined system is a linear combination of them, i.e. $\lvert n\rangle=\lvert n_1\rangle+\lvert n_2\rangle$. Then I used the definition of the number operator $N_1 = a^\dagger a$ and $N_2 =b^\dagger b$, where $N_1\lvert n_1\rangle=n_1\lvert n_1\rangle$ and $N_2\lvert n_2\rangle=n_2\lvert n_2\rangle$. But when I apply the Hamiltonian onto the eigenkets, I have: $$H_0\lvert n\rangle=H_0(\lvert n_1\rangle+\lvert n_2\rangle)=\hbar \omega((2N_1 + N_2 +2)\lvert n_1\rangle+(2N_1 + N_2 +2)\lvert n_2\rangle)$$ My questions are: (1)what would $N_2\lvert n_1\rangle$ and $N_1\lvert n_2\rangle$ be? My intuition tells me they are $0$, but I'm not so sure. (2)I know the ground state is denoted by $n=0$, but what about the first excited state? Would the first excited state be denoted by $n=n_1+n_2=1$ or $n=n_1=n_2=1$? Answer: The combined system eigenstate is not a linear combination. When you combine two quantum systems you take a tensor product of the Hilbert spaces so the combined eigenstates are $$ |n_1,n_2\rangle \equiv |n_1\rangle \otimes|n_2\rangle $$ where the first form is the way that a physicist would write it, and the second the way a methematician would write it. Then $$ a^\dagger a |n_1,n_2\rangle= n_1 |n_1,n_2\rangle\\ b^\dagger b |n_1,n_2\rangle=n_2 |n_1,n_2\rangle. $$ In other worrds the $a$'s act only on the first factor in the tensor product and the $b$'s on the second. A mathematician would probably write $$ H= (a^\dagger a +\frac 12)\otimes {\rm identity}+ {\rm identity}\otimes (b^\dagger ab +\frac 12) $$ to make this clear.
{ "domain": "physics.stackexchange", "id": 77577, "tags": "quantum-mechanics, homework-and-exercises, harmonic-oscillator" }
Finding duplicate numbers
Question: I have written this code which finds duplicate numbers and the occurrence of a particular number in an array. I have used a HashMap, but I want to know if there is a more efficient way to do the same, or whether I should be using another method. import java.util.*; class test7 { public static void main(String ...a) { int []arr={10,20,10,2,11,10,32,15,15,10,10}; HashMap<Integer,Integer> num=new HashMap<Integer,Integer>(); for(int t: arr) { Integer tmp_int=new Integer(t); if(num.containsKey(tmp_int)) { Integer i_ob=num.get(tmp_int); num.put(tmp_int,new Integer(i_ob.intValue()+1)); } else { num.put(tmp_int,new Integer(1)); } } System.out.println(num); } } Output: {32=1, 2=1, 20=1, 10=5, 11=1, 15=2} Answer: I have modified your code. The logic is same as you have done. You do not need to create those Integer. Read about Autoboxing. If the input array is large, you will see performance hit for creating Integer. Note that Integer is a class, and not a primitive type in java. public static void main(String[] args) { int[] arr={10,20,10,2,11,10,32,15,15,10,10}; HashMap<Integer,Integer> result = new HashMap<Integer,Integer>(); for(int element : arr) { if(result.containsKey(element)) { result.put(element, result.get(element) + 1); } else { result.put(element, 1); } } System.out.println(result); }
{ "domain": "codereview.stackexchange", "id": 26985, "tags": "java" }
Fourier transform of $\cos(n\omega t)$
Question: My question is probably very stupid, but I've been strugling for a while on it now... In need to find the Fourier transform of $1+\cos^3(2\pi ft)$. I wrote that : $$\cos^3(2\pi ft)=\frac{\cos(6\pi ft)+3\cos(2\pi ft)}{4}$$ And so I have: $$\delta(f) +\frac 18 \bigg[\delta(f-3f_0)+\delta(f-3f_0)\bigg] +\frac 38 \bigg[\delta(f-f_0)+\delta(f-f_0)\bigg]$$ So, on my spectrum, I should have a dirac at $0$, a smaller one at $f_0$ and a smaller at $3f_0$... But when I process it with matlab (using fast fourier transform), I get this : (With a frequency of $10\textrm{ kHz}$). So the dirac I thought would be at $3f_0$ is in fact at $\frac{f_0}{2}$. What am I missing ? Answer: Let's take a look at the first half of your expansion; $cos(6\pi f_{0} t)$ The Fourier transform for this would be \begin{equation} X_{c}(j\Omega) = \pi \delta(\Omega - 6\pi f_{0}) + \pi \delta(\Omega + 6\pi f_{0}) \end{equation} For your Fourier transform to be correct, we need that \begin{equation} 6\pi f_{0} < \pi f_{s} \end{equation} This means that your sampling rate must be high enough to avoid aliasing. Are you sure that your $f_{0} < f_{s}/2?$ To me it looks like this is the problem.
{ "domain": "dsp.stackexchange", "id": 3881, "tags": "fourier-transform" }
Summarize JS Array more efficiently
Question: I am working on a project for Udacity, where I should use CSV files to display data on a fake corporate dashboard. The structure of the data I am working on is shown below (see the result of `CSV2JSON` at the end). I am using AngularJS for this project; I created a service for getting the data, formatting it to JSON, and summarizing the data into months. It works well, but I can see that it takes up to 1-2 seconds to load a page, and the data sheet has only around 90 lines. Does anybody have a suggestion for how to speed it up? I am also using MomentJS for formatting dates.

Service:

```javascript
var getIssues = function() {
    return $http.get('data/issues.csv').then(function(response) {
        return CSV2JSON(response.data);
    });
};

this.summorizeIssues = function() {
    return new Promise(function(resolve) {
        getIssues().then(function(data) {
            var datesarray = [];
            var returnarray = [];
            data.forEach(function(item) {
                if (item['submission timestamp']) {
                    var month = moment(item['submission timestamp'], 'M-D-YY h:m a').format('MMM-YY');
                    if (datesarray.indexOf(month) < 0) {
                        datesarray.push(month);
                    }
                }
            });
            datesarray.sort(function(a, b) {
                return moment(a, 'MMM-YY') - moment(b, 'MMM-YY');
            });
            datesarray.forEach(function(date) {
                var counter = 0;
                data.forEach(function(item) {
                    var month = moment(item['submission timestamp'], 'M-D-YY h:m a').format('MMM-YY');
                    if (month === date) {
                        counter += 1;
                    }
                });
                returnarray.push({ label: date, issues: counter });
            });
            resolve(returnarray);
        });
    });
};
```

Controller:

```javascript
issuesdataProvider.summorizeIssues().then(function(data) {
    _this.issues = data;
    _this.datesIssues = issuesdataProvider.monthstoYears(data);
    _this.currentDateIssues = _this.datesIssues[_this.datesIssues.length - 1];
});
```

This is the result of `return CSV2JSON(response.data);`:

```json
[
  {
    "submission timestamp": "4-7-16 2:28 PM",
    "customer name": "Tincom",
    "customer email address": "tincom@tincom.com",
    "description": "error 64",
    "open/closed status": "open",
    "closed timestamp": 0,
    "employee name": "Mickie Daley"
  },
  {
    "submission timestamp": "7-23-15 1:12 PM",
    "customer name": "Tripplehow",
    "customer email address": "tripplehow@tripplehow.com",
    "description": "error 95",
    "open/closed status": "closed",
    "closed timestamp": 42208.59706,
    "employee name": "Daine Whittington"
  },
  {
    "submission timestamp": "3-8-14 1:33 PM",
    "customer name": "Betadox",
    "customer email address": "betadox@betadox.com",
    "description": "error 94",
    "open/closed status": "closed",
    "closed timestamp": 41706.67644,
    "employee name": "Corliss Zhang"
  }
]
```

And this is how the data looks after the process is done:

```json
[
  { "label": "Jan-14", "issues": 5 },
  { "label": "Feb-14", "issues": 1 },
  { "label": "Mar-14", "issues": 7 },
  { "label": "Apr-14", "issues": 4 },
  { "label": "May-14", "issues": 2 },
  { "label": "Jun-14", "issues": 2 },
  { "label": "Jul-14", "issues": 2 },
  { "label": "Aug-14", "issues": 1 },
  { "label": "Sep-14", "issues": 1 },
  { "label": "Oct-14", "issues": 1 },
  { "label": "Nov-14", "issues": 2 }...
]
```
Answer:

> I am also using MomentJS for formatting dates.

I have experienced performance issues when using MomentJS before. It's not because it's terrible, but rather the opposite: it just does a lot of things to get dates right. You'll have to look out for potential bottlenecks.

> it takes up to 1-2 sec to load a page

Also consider that you're using AJAX. Network latency can become an issue as well, and there's nothing you can do about that since you have no control over the network. Consider other strategies like paging, caching, or data compression. We can do nothing about a slow network, but we can do something about slow code.

```javascript
data.forEach(function(item) {
    if (item['submission timestamp']) {
        var month = moment(item['submission timestamp'], 'M-D-YY h:m a').format('MMM-YY');
        if (datesarray.indexOf(month) < 0) {
            datesarray.push(month);
        }
    }
});
datesarray.sort(function(a, b) {
    return moment(a, 'MMM-YY') - moment(b, 'MMM-YY');
});
datesarray.forEach(function(date) {
    var counter = 0;
    data.forEach(function(item) {
        var month = moment(item['submission timestamp'], 'M-D-YY h:m a').format('MMM-YY');
        if (month === date) {
            counter += 1;
        }
    });
    returnarray.push({ label: date, issues: counter });
});
```

The lag you are experiencing is most likely the browser's garbage collector kicking in. Every `moment` call creates a new MomentJS object, which is a terrible thing to do in loops. In your case, you call it inside every loop you have: one loop that executes n times, one sort whose callback is called an arbitrary number of times, and a two-level loop which is n^2.

The goal here is to reduce the usage of `moment` and to avoid nested loops. The following code is my attempt at it. It uses `moment` only once per item to grab everything it needs into a regular object (keeping your parse format, since the timestamps are not ISO strings). Everything else is just linear object/array operations.

```javascript
const issuesByDate = data.reduce((data, item) => {
  // Return early when there's no timestamp
  if (!item['submission timestamp']) return data;

  // Group the data by using an object:
  // { 'MMM-YY': { label: ..., timestamp: ..., issues: ... } }

  // The only time we use moment
  const itemMoment = moment(item['submission timestamp'], 'M-D-YY h:m a');
  const label = itemMoment.format('MMM-YY');
  const timestamp = itemMoment.valueOf();

  // Use the label as key to group similar items.
  // Append timestamp, a number value. We'll need it later for sorting.
  data[label] = data[label] || { label, timestamp, issues: 0 };
  data[label].issues += 1;

  return data;
}, {});

// Convert the object into an array:
// [{ label: ..., timestamp: ..., issues: ... }]
return Object.keys(issuesByDate)
  .map(key => issuesByDate[key])
  // Sort here since object keys don't guarantee order
  .sort((a, b) => a.timestamp - b.timestamp)
  // Strip off the timestamp property that we don't need:
  // [{ label: ..., issues: ... }]
  .map(item => ({ label: item.label, issues: item.issues }));
```

The code above can be further tuned by using regular loops instead of array methods, grouping up loop operations, and pushing to existing arrays instead of returning new arrays after each operation. In summary: avoid calling `moment` too often and avoid nested loops.
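The same single-pass grouping idea translates to other languages too. Here is a rough Python sketch of it (the field name and timestamp format are taken from the CSV sample above; `summarize_issues` is my own name): bucket by `(year, month)` so chronological sorting needs no date parsing at all.

```python
from collections import Counter
from datetime import datetime

def summarize_issues(rows):
    """Count issues per month in one linear pass instead of nested loops.
    Bucketing by (year, month) makes chronological sorting trivial."""
    counts = Counter()
    for row in rows:
        ts = row.get('submission timestamp')
        if not ts:
            continue
        # parse once per row, e.g. '4-7-16 2:28 PM'
        dt = datetime.strptime(ts, '%m-%d-%y %I:%M %p')
        counts[(dt.year, dt.month)] += 1
    return [
        {'label': datetime(year, month, 1).strftime('%b-%y'), 'issues': n}
        for (year, month), n in sorted(counts.items())
    ]
```

Like the JavaScript version above, this parses each timestamp exactly once and does all grouping with plain dictionary operations.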
{ "domain": "codereview.stackexchange", "id": 20786, "tags": "javascript, angular.js" }
Can imaginary (illusionary) forces cause acceleration, and what exactly are they?
Question: We may be aware of illusory forces, such as the centrifugal force, that don't really exist, yet we certainly feel them. How exactly is that possible? I don't yet know much physics, but does any illusory force cause acceleration? To anyone who answers: can you also please give me a reference on the topic? Answer: Fictitious forces appear in non-inertial frames of reference, and they can cause acceleration when viewed in such a frame. Riding in a car that swerves left, my coffee and donuts slide across my lap to the right and spill all over the door. From my reference frame inside the car, I could say that a centrifugal force appeared, pointing to the outside of the turn (the right). This force acted on everything in the car. The snacks didn't have enough friction to resist, so this force accelerated them to the right until they hit the door. Meanwhile, my office mates watching from outside have a different opinion. From the nearly-inertial ground frame, they see the car turn left while my coffee and donuts continue at the same speed. The snacks do not accelerate (until the door intersects their path). No such force or acceleration appears in this frame.
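To put a rough number on the swerving-car example (the speed and turn radius here are my own illustrative figures, not from the answer): in the car's frame, loose objects appear to accelerate outward at v^2/r, the same magnitude as the centripetal acceleration seen from the ground.

```python
def centrifugal_acceleration(speed, radius):
    """Magnitude of the outward pseudo-acceleration felt in the car's
    rotating frame: v**2 / r, equal in size to the centripetal
    acceleration an outside observer attributes to the car."""
    return speed ** 2 / radius

# a car taking a 50 m curve at 20 m/s (roughly 72 km/h)
a = centrifugal_acceleration(20.0, 50.0)  # 8.0 m/s^2, close to 1 g sideways
```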
{ "domain": "physics.stackexchange", "id": 89687, "tags": "newtonian-mechanics, forces, reference-frames, acceleration" }
Similarity between magnetic field created by a straight wire and wire loop
Question: The magnetic field created by a straight wire is $B = \dfrac{\mu_0 I}{2 \pi d}$, where $I$ is the current flowing through that wire and $d$ the distance from the wire. The magnetic field at the centre of a current loop is $B = \dfrac{\mu_0 I}{2 R}$, where $I$ is the current flowing through the loop and $R$ its radius. They look so similar, don't they? Does anyone know of an intuitive explanation behind that similarity? Answer: We will start by looking at the solution to the problem of finding the magnetic field of a finite wire at a point $P$ at distance $r$ from the wire. The answer is:
$$B = \frac{\mu_0 i}{4\pi r}(\cos\alpha+\cos\beta)$$
where $\alpha$ and $\beta$ are the angles formed by the wire and the lines joining its two endpoints with $P$.

For a wire of infinite length, $\alpha=\beta=0$, and substituting this gives
$$B = \frac{\mu_0 i}{2\pi r}$$
For a circular loop, the Biot-Savart law gives at the centre:
$$B = \frac{\mu_0 i}{2r}$$
We know that a finite wire can always be bent into a circle. So let us now loop the finite wire and find the magnetic field using the very first equation!

First we write $\cos\alpha = \frac{h}{\sqrt{h^2+r^2}}$, where $h = r\cot\alpha$ is the length of wire between the foot of the perpendicular from $P$ and the endpoint at angle $\alpha$. Also, with $l$ the length of the wire, $\cos\beta = \frac{l-h}{\sqrt{(l-h)^2+r^2}}$. Putting everything together,
$$B = \frac{\mu_0 i}{4\pi r}\left(\frac{h}{\sqrt{h^2+r^2}}+\frac{l-h}{\sqrt{(l-h)^2+r^2}}\right)$$
When we loop this wire, the point $P$ becomes the centre of the circle, the length $l$ of the wire becomes $2\pi r$, and since all points on a circle are equivalent feet of perpendiculars dropped from $P$, $h$ is just an arbitrary arc length; for the sake of calculation we set it equal to 0 (it can be anything, the answer doesn't change). [You can see the last fact from the simple observation that at every point of the circle the tangent is perpendicular to the radius, and these tangents are the instantaneous directions of the current at those points.] Rewriting our equation in light of these new facts:
$$B = \frac{\mu_0 i}{4\pi r}\cdot\frac{2\pi r}{\sqrt{(2\pi r)^2+r^2}}$$
Cancelling terms and taking an $r$ out of the square root gives us:
$$B = \frac{\mu_0 i}{2r}\cdot\frac{1}{\sqrt{4\pi^2+1}}$$
So we have the magnetic field at the centre of a circular loop (the second equation from the top) with a correction factor. The correction appears because the wire was finite; working with an infinite wire results in no correction terms ($\infty$s are powerful cancelling factors sometimes). (The actual reason the correction term appears is that looping a wire can never be ideal: there have to be input and output current wires for the loop, which does not allow the loop to be complete, but this is not a problem for a straight wire. You can work it out.)

So, it's all in the mathematics. The form is similar because of the Biot-Savart law and the underlying symmetries of nature! Keep on thinking! Cheers!!
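As a quick numerical sanity check (my own sketch, SI units assumed), one can integrate the Biot-Savart law around a full loop and compare the result with $\frac{\mu_0 i}{2r}$; by symmetry, every current element sits at distance $R$ from the centre with $d\vec{l}$ perpendicular to the radius, so each contributes equally.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability in T*m/A

def loop_center_field(current, radius, segments=10000):
    """Numerically integrate the Biot-Savart law around a circular loop
    and return |B| at the centre.  Each element dl = R*dphi is
    perpendicular to the line to the centre and lies at distance R,
    so it contributes mu0*I*dl / (4*pi*R^2)."""
    dphi = 2 * math.pi / segments
    b = 0.0
    for _ in range(segments):
        dl = radius * dphi
        b += MU0 * current * dl / (4 * math.pi * radius ** 2)
    return b

analytic = MU0 * 1.0 / (2 * 0.1)       # B = mu0*I/(2R) for I = 1 A, R = 0.1 m
numeric = loop_center_field(1.0, 0.1)  # should agree to rounding error
```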
{ "domain": "physics.stackexchange", "id": 47030, "tags": "magnetic-fields" }
How is there no limit to a human lifespan again?
Question: It's probably just misconstrued pop science, but I thought I read an article recently that said there's no known limit on how long humans can live. I could have sworn, though, that there were a few automatic processes that take place, like the chromosomes all shortening in length every time they're copied (is there any limit to that? Also, why does that matter?), the retinas in the eyes hardening, the metabolism slowing down, the heart muscle wearing out, etc. So, how could it be true? Answer: You're probably reading about the recently published responses to a publication that argued there is a limit to human lifespan. The original article is "Evidence for a limit to human lifespan", and in the June 29 issue of Nature there are five responses to it:

- Contesting the evidence for limited human lifespan
- Many possible maximum lifespan trajectories
- Is there evidence for a limit to human lifespan?
- Questionable evidence for a limit to human lifespan
- Maximum human lifespan may increase to 125 years

Each of these responses has, in turn, a reply from the original authors. The arguments turn on fairly intricate details of statistical analysis and database interpretation, and I think it's fair to say that outside experts remain unconvinced either way -- neither the original article, nor any of the five responses, nor any of the five replies to the responses, presents a slam-dunk case for or against a limit to human lifespan.
{ "domain": "biology.stackexchange", "id": 7385, "tags": "human-biology" }
How could we figure out the recipe of Coca Cola?
Question: I know that the ingredients and the manufacturing process are both important, as discussed in this question. What would be a modern approach to figuring out the ingredients? The ingredients listed on the website are: "Carbonated Water, High Fructose Corn Syrup, Caramel Color, Phosphoric Acid, Natural Flavors, Caffeine." My ideal answer would provide information about how to get more detail about the specific ingredients (not just "natural flavors") and their proportions. Answer: I'd say some combination of GC-MS / LC-MS. The chromatography separates the components and can often be relied upon for ppm-level identification. Then compare the MS peaks for hits against one of the standard databases, e.g. NIST. I suppose that strategy would identify most of the peaks. The leftover peaks, which don't give a good match against a GC-MS database, would be more challenging to identify; one would need to screen potential raw materials one suspects may have gone into the blend to see if one generates a hit.

Of course, this identifies individual chemical species. Any practical recipe, on the other hand, is unlikely to be based on amounts of pure species but would use available ingredients which are each complex mixtures of pure species. The GC-MS / LC-MS approach won't tell you what pseudo-components (e.g. lemon oil) went in and in what proportions. So you would have figured out the chemical composition but not really the recipe.

Sometimes, rarely, two compounds can have very similar MS peak patterns. In those cases (or to reinforce a GC-MS analysis) one could also compute the Kovats index (KI) of each peak on the GC based on retention times alone (and the injection of a standard hydrocarbon mix). There is a good amount of information, and there are databases, on the KI of flavor / natural-product ingredients. That would be another strategy to confirm peak identity.

On the molecular biology side of things, I'm sure there are techniques using PCR etc. that could amplify and characterize traces of DNA segments that came in via ingredients of plant origin; e.g. one might be able to say that a certain plant oil extract was used. Although at high dilutions, and after extensive processing, it would be a challenge to get enough starting material for these techniques to work.
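For the isothermal case, the Kovats index mentioned above reduces to a logarithmic interpolation between the two n-alkanes bracketing the unknown peak. A minimal sketch (the function and variable names are my own, and the retention times are assumed to already be adjusted, i.e. dead-time corrected):

```python
import math

def kovats_index(t_unknown, t_n, t_next, n):
    """Isothermal Kovats retention index.
    t_unknown : adjusted retention time of the unknown peak
    t_n       : adjusted retention time of the n-alkane eluting just before
                it (carbon number n)
    t_next    : adjusted retention time of the n-alkane eluting just after it
    Interpolates logarithmically between the two bracketing alkanes."""
    return 100 * (n + (math.log10(t_unknown) - math.log10(t_n))
                  / (math.log10(t_next) - math.log10(t_n)))
```

An unknown co-eluting with octane would score exactly 800; one halfway (on a log scale) between octane and nonane would score close to 850.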
{ "domain": "chemistry.stackexchange", "id": 5547, "tags": "everyday-chemistry, food-chemistry" }
Evaluating an expression with integers, +, and *, as well as -, /
Question: There is a job interview question, and the source of the question is here. The solution is pretty simple: we just need to split the input string by `+` and then by `*`, compute the products in a nested loop, and sum up all the products.

```java
public class Main {
    // http://www.careercup.com/question?id=4911380140392448
    public static void main(String[] args) {
        String equation = "1*5*4+8*9+16";
        int result = compute(equation);
        System.out.println(result);
    }

    static int compute(String equation) {
        int result = 0;
        String[] byPluses = equation.split("\\+");
        for (String multipl : byPluses) {
            String[] byMultipl = multipl.split("\\*");
            int multiplResult = 1;
            for (String operand : byMultipl) {
                multiplResult *= Integer.parseInt(operand);
            }
            result += multiplResult;
        }
        return result;
    }
}
```

But what if I was asked the following question after solving the initial problem: "OK. What would you do if you needed to support not only `+` and `*` but also `-` and `/`?" The quick solution I thought up is this:

```java
package careepcup.fb;

public class Main {
    // http://www.careercup.com/question?id=4911380140392448
    public static void main(String[] args) {
        // An equation with +, -, /, *
        String anotherEquation = "1*5*4+8*9+16/8-9"; // 85
        double another = computeAnother(anotherEquation);
        System.out.println(another);
    }

    static double computeAnother(String equation) {
        double result = 0.0;
        String noMinus = equation.replace("-", "+-");
        String[] byPluses = noMinus.split("\\+");
        for (String multipl : byPluses) {
            String[] byMultipl = multipl.split("\\*");
            double multiplResult = 1.0;
            for (String operand : byMultipl) {
                if (operand.contains("/")) {
                    String[] division = operand.split("\\/");
                    double dividend = Double.parseDouble(division[0]);
                    for (int i = 1; i < division.length; i++) {
                        dividend /= Double.parseDouble(division[i]);
                    }
                    multiplResult *= dividend;
                } else {
                    multiplResult *= Double.parseDouble(operand);
                }
            }
            result += multiplResult;
        }
        return result;
    }
}
```

What do you think of my solution with `-`, `+`, `/`, `*`?
Answer: @tim has covered a bunch of what I was going to add, but here are some additional points:

You should anticipate having whitespace in the input. A "simple" replace-all would suffice:

```java
equation = equation.replaceAll("\\s+", "");
```

Your solution of converting `-` to `+-` is novel and effective, but it really should have a comment on it explaining that you are going to re-split the string and parse the new `-` as part of the integer, rather than as an operator. Leaving it as it is required some detective work on how it functions.

Bug: your system will fail on things like `1 + 2 * -3` (ignoring the whitespace).

The inner workings of the `*` and `/` handling are a bit messy, though. A logical progression for your challenge would be to add, say, a `%` operator... which would require a complicated change. Operators of equal precedence should be handled together; treating `*` specially compared to `/` makes it awkward because you have to test all combinations inside the `*` breakdown. I would suggest using a smarter split expression, one that splits on the gaps between the operators and the values:

```java
String[] parts = operand.split("(?=[/*])|(?<=[/*])");
```

With the above regex you will get, for example: `1/2*773` -> `[1, /, 2, *, 773]`.

The way the regex works is that it looks for two things (the regex is in two parts, using "lookaround" expressions):

- `(?=[/*])` -- a positive zero-width look-ahead. This says: find any gap between two characters where the next character is a `/` or `*`.
- `(?<=[/*])` -- a positive zero-width look-behind. This says: find any gap between two characters where the previous character is a `/` or `*`.

Put them together with an or-condition and it says: split the input on the gaps before and after `/` or `*`.

Now you can just initialize the result to the first part, and then loop through the rest:

```java
double result = Double.parseDouble(parts[0]);
for (int i = 1; i < parts.length; i += 2) {
    String op = parts[i];
    double val = Double.parseDouble(parts[i + 1]);
    switch (op) {
        case "*": result *= val; break;
        case "/": result /= val; break;
    }
}
return result;
```
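For comparison, here is the same two-precedence-level strategy sketched in Python (my own sketch, not from the review): rewrite `-` as `+-`, split on `+`, then fold each term left to right over `*` and `/` together. It deliberately shares the known bug with a unary minus after an operator, e.g. `2*-3`.

```python
import re

def evaluate(expr):
    """Left-to-right evaluation with two precedence levels:
    '-' becomes '+-', split the expression on '+', then fold each term
    over '*' and '/' (handled together, since they have equal precedence)."""
    expr = re.sub(r'\s+', '', expr).replace('-', '+-')
    total = 0.0
    for term in expr.split('+'):
        if not term:                       # empty piece from a leading '-'
            continue
        parts = re.split(r'([*/])', term)  # the group keeps the operators
        value = float(parts[0])
        for op, operand in zip(parts[1::2], parts[2::2]):
            value = value * float(operand) if op == '*' else value / float(operand)
        total += value
    return total
```

Running it on the question's example, `evaluate("1*5*4+8*9+16/8-9")`, reproduces the expected result of 85.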
{ "domain": "codereview.stackexchange", "id": 12707, "tags": "java, strings, interview-questions, math-expression-eval" }
Is it valid to compare SHAP values across models?
Question: Let's say I have three models:

- a random forest with 100 trees
- a random forest with 1000 trees
- an xgboost model

I can rank the importance of my features on my dataset for each model using SHAP, and compare relative importance across models. What is not clear is whether I can meaningfully compare the actual numerical SHAP values across models. I am using the same features for all models. Answer: Shapley values were designed in the context of game theory (source), to share the value created by a coalition of players in a game. They have multiple properties, including linearity. Linearity ensures that if you were to average your models, the resulting Shapley values would be the average of the Shapley values of the individual models. In that sense (considering an average model), Shapley values are comparable.

In the general case, though, I think the answer is the opposite. Intuitively, this is because looking at an individual Shapley value you can't know whether the value is due to an "individual performance" or the "overall performance" of the coalition. So when looking at two values in the context of ML, the difference might be explained both by a different contribution of the individual feature and by a difference in the overall performance of the models. I would therefore avoid doing that in the general case. (But I often do so, along with comparing individual predictions, to check whether two models with similar overall performance have learned the same things or not.)

Overall I would suggest you use a better-suited criterion (like an information criterion) for your model selection, then use Shapley values to explain the model you selected, rather than using Shapley values to do some sort of model selection.

Note that I am mainly talking about Shapley values and not SHAP, which is an approximation. You need to be cautious with SHAP, as the approximation relies on the absence of correlation between your features, which rarely happens in practice.
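The linearity property is easy to verify on a toy cooperative game with a brute-force Shapley computation (my own sketch; these are exact Shapley values, not the SHAP approximation): averaging two value functions averages their Shapley values.

```python
from itertools import permutations

def shapley(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over every ordering of the players (only feasible for toy games)."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    return {p: total / len(orders) for p, total in phi.items()}

players = ['a', 'b', 'c']
# two toy "models", expressed as coalition value functions
v1 = lambda s: 2 * len(s)        # every feature adds 2
v2 = lambda s: len(s) ** 2       # features interact
avg = lambda s: (v1(s) + v2(s)) / 2

phi1 = shapley(players, v1)
phi2 = shapley(players, v2)
phi_avg = shapley(players, avg)  # linearity: equals the average of phi1, phi2
```

This also shows the efficiency property in passing: each set of Shapley values sums to the value of the full coalition, which is exactly why values from models with different overall performance live on different scales.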
{ "domain": "datascience.stackexchange", "id": 8500, "tags": "explainable-ai, shap" }