If I push or hit an object in space will it rotate or move along a straight line?
Question: If I push or hit an object in space (vacuum, no gravitation) in a direction that does not pass through its centroid, will it rotate or move along a straight line? I expect that on earth it would depend on what is less difficult for the object (rotation or linear movement), so the object would do some combination of both movements (rotating while also moving along the direction of the impulse or force). But how could an object "decide" what to do in space, where there is no resistance? Answer: Any linear force not passing through the centre of mass will create torque, which, as I hope you know, is related to how far the line of force is from the centre of mass. So, if you manage to hit the object exactly at its centre of mass, i.e. the line of force passes directly through the centre of mass, it will show NO ROTATION. It will go straight ahead in a line. But if you fail to do so, i.e. the line of force misses the centre of mass, it will show BOTH kinds of motion, rotational and linear. It will go straight ahead in a line as in the previous case, but will also rotate. The speed of rotation depends on how badly you missed the centre of mass. But in both cases the total momentum will be (has to be) the same.
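The point above can be made quantitative with a small numeric sketch (not from the original answer; the mass, moment of inertia and impulse below are assumed example values): an impulse applied off-centre changes the linear velocity exactly as an on-centre hit would, and additionally spins the body in proportion to the miss distance.

```python
# An impulse J applied at a perpendicular offset d from the centre of mass
# of a free rigid body gives
#   linear speed  v = J / m        (independent of d)
#   angular speed w = J * d / I    (grows with the miss distance d)
m = 2.0   # mass in kg (assumed example value)
I = 0.1   # moment of inertia about the centre of mass, kg m^2 (assumed)
J = 1.0   # impulse in N s

for d in (0.0, 0.05, 0.10):            # miss distances in metres
    v = J / m                          # linear momentum: J = m v, same for every d
    w = J * d / I                      # angular momentum about the COM: J d = I w
    print(f"d={d:.2f} m  ->  v={v:.2f} m/s, w={w:.2f} rad/s")
```

The linear momentum (and hence v) is identical in every case; only the rotation changes, which is exactly the "both kinds of motion" conclusion above.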
{ "domain": "physics.stackexchange", "id": 34468, "tags": "newtonian-mechanics, rotational-dynamics" }
Rejected loop closure 93 -> 173: Too large rotation detected!
Question: Hi, I am doing 3D mapping with a Kinect 360 camera using RTABMAP. Mapping starts well and then I start getting these two errors: "[ WARN] (2020-07-22 14:50:09.654) Rtabmap.cpp:2144::process() Rejected loop closure 93 -> 173: Too large rotation detected! (pitch=0.000000, yaw=0.000000) max is -1.628242" [ WARN] (2020-07-22 14:57:33.872) Rtabmap.cpp:2144::process() Rejected loop closure 57 -> 99: Not enough inliers after bundle adjustment 0/20 (matches=70) between 57 and 99 They occur in between lots of successful readings. To move the robot I use: rostopic pub -1 cmd_vel geometry_msgs/Twist '[0.1, 0, 0]' '[0, 0, 0]' and to rotate it I use: rostopic pub -1 cmd_vel geometry_msgs/Twist '[0, 0, 0]' '[0, 0, 0.1]' I am using a Pioneer 3-AT with ROS Melodic. Here is my Gazebo world: https://imgur.com/a/B1bPeDE Here is my map while it is still correct: https://imgur.com/a/4Iqg4Tg, and when the error occurs: https://imgur.com/a/vv7x60W Here is my rtabmap.launch file: <launch> <param name="/use_sim_time" value="true"/> <node pkg="nodelet" type="nodelet" name="rgbd_sync" args="standalone rtabmap_ros/rgbd_sync" output="screen"> <remap from="rgb/image" to="/camera/rgb/image_raw"/> <remap from="depth/image" to="/camera/depth/image_raw"/> <remap from="rgb/camera_info" to="/camera/rgb/camera_info"/> <remap from="rgbd_image" to="rgbd_image"/> <!-- output --> <!-- Should be true for not synchronized camera topics (e.g., false for kinectv2, zed, realsense, true for xtion, kinect360)--> <param name="approx_sync" value="true"/> <param name="queue_size" type="int" value="10"/> </node> <!-- Odometry --> <node pkg="rtabmap_ros" type="rgbd_odometry" name="rgbd_odometry" output="screen"> <!--param name="subscribe_rgbd" type="bool" value="true"/--> <remap from="rgb/image" to="/camera/rgb/image_raw"/> <remap from="depth/image" to="/camera/depth/image_raw"/> <remap from="rgb/camera_info" to="/camera/rgb/camera_info"/> <param name="frame_id" type="string" value="base_link"/> <remap 
from="rgbd_image" to="rgbd_image"/> <param name="publish_tf" type="bool" value="false"/> <param name="queue_size" type="int" value="10"/> </node> <param name="map_frame" value="map"/> <param name="odom_frame" value="odom"/> <param name="base_link_frame" value="base_link"/> <param name="world_frame" value="map"/> <param name="odom0" value="odom"/> <param name="pose0" value="rtabmap/localization_pose"/> <!-- The order of the values is x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az. --> <rosparam param="odom0_config">[true, true, false, false, false, true, false, false, false, false, false, false, false, false, false]</rosparam> <rosparam param="pose0_config">[ true, true, false, false, false, true, false, false, false, false, false, false, false, false, false] </rosparam> <param name="odom0_differential" value="true"/> <param name="pose0_differential" value="false"/> <param name="odom0_relative" value="true"/> <param name="pose0_relative" value="false"/> <param name="odom0_queue_size" value="5"/> <param name="pose0_queue_size" value="2"/> <node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen" args="--delete_db_on_start"> <param name="frame_id" type="string" value="map"/> <param name="subscribe_depth" type="bool" value="false"/> <param name="subscribe_rgbd" type="bool" value="true"/> <remap from="odom" to="odom"/> <remap from="rgbd_image" to="rgbd_image"/> <param name="queue_size" type="int" value="500"/> <param name="approx_sync" type="bool" value="false"/> <!-- RTAB-Map's parameters --> <param name="RGBD/AngularUpdate" type="string" value="0.01"/> <param name="RGBD/LinearUpdate" type="string" value="0.01"/> <param name="RGBD/OptimizeFromGraphEnd" type="string" value="false"/> <param name="RGBD/OptimizeMaxError" type="int" value="0"/> <!--param name="Grid/FromDepth" type="string" value="false"/--> <param name="sensor_model/hit" value="1" /> <param name="sensor_model/miss" value="0" /> <param name="Reg/Force3DoF" value="true"/> 
<!-- 2D mode --> <!--param name="publish_tf" type="bool" value="false"/--> <param name="sensor_model/max_range" value="5.0" /> <param name="latch" value="false" /> </node> </launch> Has anyone had this problem, or does someone know how to fix it? Thanks in advance. Best regards. Originally posted by leodomitrovic on ROS Answers with karma: 33 on 2020-07-22 Post score: 0 Answer: Those are warnings; they mean a loop closure has been rejected because the returned transform was wrong or because there were not enough inliers. Your environment seems to have a lot of repetitive textures, which could generate those kinds of rejections. You can ignore them unless real loop closures are not found or are rejected. Originally posted by matlabbe with karma: 6409 on 2020-08-21 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by leodomitrovic on 2020-08-22: Ok. Thank you for your answer.
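As a hypothetical illustration of what the "Too large rotation detected!" check is doing (this is not RTAB-Map's actual code), a loop-closure transform can be sanity-checked by measuring its rotation angle and rejecting it when the angle exceeds a chosen limit:

```python
import numpy as np

# Sketch of a loop-closure rotation sanity check. The function and threshold
# are illustrative assumptions, not RTAB-Map internals.
def rotation_angle(R):
    """Angle of a 3x3 rotation matrix, from trace(R) = 1 + 2 cos(theta)."""
    c = (np.trace(R) - 1.0) / 2.0
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def accept_loop_closure(R, max_angle_rad):
    """Reject candidate transforms whose rotation is implausibly large."""
    return rotation_angle(R) <= max_angle_rad

# 90-degree yaw: rejected when the limit is 45 degrees
R_yaw90 = np.array([[0.0, -1.0, 0.0],
                    [1.0,  0.0, 0.0],
                    [0.0,  0.0, 1.0]])
print(accept_loop_closure(R_yaw90, np.pi / 4))   # False: 90 degrees > 45 degrees
```

A warning from such a check means a candidate match was discarded, not that the map is broken, which matches the advice in the answer above.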
{ "domain": "robotics.stackexchange", "id": 35308, "tags": "gazebo, navigation, kinect, ros-melodic, 3dmapping" }
In the photoelectric effect, how do you measure the kinetic energy of the ejected electron?
Question: I have read in textbooks that the kinetic energy of an ejected electron in the photoelectric effect depends on the frequency of the incident photon. My question is, how exactly is the electrons' kinetic energy measured while doing the experiment? Answer: You can measure the kinetic energy of the ejected electrons by applying an electric field that pushes the electrons back to the metal. Look at the experimental apparatus shown at Photoelectric effect. It also explains how to measure the kinetic energy of the electrons: Schematic of experimental apparatus to demonstrate the photoelectric effect. The filter passes light of certain wavelengths from the lamp at left. The light strikes the curved electrode, and electrons are emitted. The adjustable voltage can be increased until the current stops flowing. This "stopping voltage" is a function only of the electrode material and the frequency of the incident light, and is not affected by the intensity of the light. The energy $eV_S$ associated with this so-called "stopping voltage" $V_S$ (measured as described above) exactly cancels the kinetic energy of an electron, thus bringing the flying electrons to a halt.
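Numerically, the stopping voltage follows from Einstein's relation $eV_S = hf - \phi$. Here is a quick sketch (added for illustration; the work function value for caesium is an assumed round number of about 2.1 eV):

```python
# Stopping voltage from Einstein's photoelectric relation eV_s = hf - phi.
HC_EV_NM = 1239.84   # h*c expressed in eV.nm

def stopping_voltage(wavelength_nm, work_function_eV):
    """Stopping voltage in volts; zero means no electrons are ejected."""
    ke_eV = HC_EV_NM / wavelength_nm - work_function_eV  # max kinetic energy in eV
    return max(ke_eV, 0.0)  # numerically, V_s in volts equals the KE in eV

print(stopping_voltage(400.0, 2.1))  # about 1.0 V for 400 nm light on caesium
print(stopping_voltage(700.0, 2.1))  # 0.0: this red light ejects no electrons
```

Note the measured $V_S$ depends on the light's frequency (wavelength) but not on its intensity, exactly as the apparatus description states.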
{ "domain": "physics.stackexchange", "id": 63398, "tags": "photoelectric-effect" }
Symmetry of spectrum of tight binding model with quasiperiodic potential
Question: In the Aubry-André model, a tight binding model with nearest neighbor hopping and a cosine-like potential $\lambda_n = \lambda \cos(2\pi \beta n)$ (where $n$ is the lattice site, $\lambda$ is the potential strength and $\beta$ is typically irrational), it turns out that the spectrum is symmetric with respect to the energy zero. The model can be defined as $$ E a_n = a_{n+1} + a_{n-1} + \lambda_n a_n, $$ where $a_n$ is the component of the wave function at lattice site $n$, so the terms $a_{n \pm 1}$ describe the nearest neighbor hopping, and $E$ is the energy eigenvalue. This can equally be understood as the problem of finding eigenvalues $E$ and eigenvectors $\{ a_n \}$ of a matrix of the type $$ \begin{pmatrix} \lambda_{-2} & 1 & & & 1\\ 1 & \lambda_{-1} & 1 & & \\ & 1 & \lambda_0 & 1 & \\ & & 1 & \lambda_1 & 1\\ 1 & & & 1 & \lambda_2\\ \end{pmatrix} $$ where $\lambda_{-n} = \lambda_n$. This example shows the matrix for system size $5$, but I hope you get the scheme. The values $1$ in the off-diagonal corners show that periodic boundary conditions are applied, but I don't think this should be important from now on. Now the symmetry of the spectrum basically means that if there is a solution $\{ a_n \}$ with energy $E$, then there also exists another solution $\{ \tilde{a}_n \}$ with energy $-E$. How would one show that this is true? The reason I want to do this is because another tight binding model including also next nearest neighbor hopping and a modified potential, $$ E a_n = a_{n+1} + a_{n-1} + t_2 (a_{n+2} + a_{n-2}) + \lambda_n a_n $$ with $\lambda_n = \lambda (\cos(2\pi \beta n) + t_2 \cos(4\pi \beta n))$ does not have a symmetric spectrum, so I thought that this might be a starting point to understand the difference between the spectra of the two models. Better ideas are welcome, though. ;-)
Help on this would be appreciated as well. Here's a plot of the spectrum of the Aubry-André model for a system of size $233$ with $\beta = 144/233$ (a ratio of consecutive Fibonacci numbers, converging to the inverse of the golden mean, just in case you were wondering why I use a rational value of $\beta$). You can see the symmetry: UPDATE 1: Here's a plot of the spectrum of the second model, also including next nearest neighbor hopping, for the same system size and choice of $\beta$. The hopping parameter $t_2$ for next nearest neighbor hopping is chosen to be $0.3$ here. As you can see, this is a total loss of the $E \leftrightarrow -E$ symmetry. Already at zero potential, the dispersion becomes $E(k) = 2 \cos(k) + 2t_2 \cos(2k)$ and the spectrum therefore loses the symmetry. So you see there's a lot of difference between the band structures of the two models (symmetry, band overlap) even though their Hamiltonians are not so different ($t_2$ typically small, say less than $1/2$; self duality retained). I hope you can help me understand this. There's a lot of literature on the Aubry-André model and a few papers about the other model, but they mostly focus on the localization properties of the eigenstates, and very little attention is paid to the spectrum. Numerically it is no problem to observe the above-mentioned things, but it would be nice to have a more analytical approach to understand the difference between the models. UPDATE 2 (2017-05-05): The paper mentioned above states that the model has a tendency to have overlapping bands, and I didn't understand the reason for that. Now this paper chooses $t_2 = 1/3$ to be fixed. If one plots the dispersion relation at zero potential $E(k)=2\cos(k)+2t_2\cos(2k)$ for $0 \leq k \leq \pi$, one finds that for $t_2>1/4$ there's degeneracy in the low energy regions of the spectrum, thereby giving band overlap. 
Here this can easily be seen in the plots of the spectrum, while for smaller values of $t_2$ I only found avoided crossings. So it might be that the author of this paper was referring to the special case of $t_2 = 1/3$ while I mistakenly thought he was talking about the general case of $t_2 \neq 0$ which I'm interested in. Answer: The Aubry-Andre model only has exact $E\leftrightarrow -E$ symmetry in the thermodynamic limit or for particular rational choices of $\beta$. The easiest way (in my opinion) to see this is from the second quantized Hamiltonian: $$ \mathcal{H} = \lambda\sum_{n} \cos(2\pi\beta n)c_n^\dagger c_n + \sum_{n} (c_{n+1}^\dagger c_n + \mathrm{h.c.}) $$ Fourier transforming only the hopping term using $c_n = \frac{1}{\sqrt{L}}\sum_m e^{\mathrm{i} 2 \pi \beta n m} \tilde{c}_m$, you get the Hamiltonian in a form that clearly demonstrates the self-duality of the model: $$\mathcal{H} = \lambda\sum_{n} \cos(2\pi\beta n)c_n^\dagger c_n + 2\sum_{m} \cos(2\pi\beta m)\tilde{c}_m^\dagger \tilde{c}_m$$ Note that this only works for periodic boundary conditions with $L\beta$ an integer (where $L$ is the system size), as you have in your example. The $E\leftrightarrow -E$ symmetry is just saying that the Hamiltonian is equal to negative itself, up to some unitary transformation: $\mathcal{H} = -U\mathcal{H}U^\dagger$. Observe that $$ \begin{eqnarray} -\mathcal{H} &=& \lambda\sum_{n} \cos(2\pi\beta n + a\pi)c_n^\dagger c_n + 2\sum_{m} \cos(2\pi\beta m+b\pi)\tilde{c}_m^\dagger \tilde{c}_m \\ &=& \lambda\sum_{n} \cos(2\pi\beta (n + a/2\beta))c_n^\dagger c_n + 2\sum_{m} \cos(2\pi\beta (m+b/2\beta))\tilde{c}_m^\dagger \tilde{c}_m \end{eqnarray} $$ for any odd integers $a,b$. Therefore, you can see that flipping the sign of $\mathcal{H}$ is the same as just a translation in real space by $a/2\beta$, and in momentum space by $b/2\beta$. When these two translations can be chosen to be integer, then this can be encoded as a unitary and is therefore an exact symmetry. 
In your case, $a/2\beta = 233a/288$, which cannot be made an integer with odd $a$ (and so the model does not have this symmetry exactly). For the previous Fibonacci pair, $\beta = 89/144$ and therefore $a/2\beta = 72a/89$, which can be made an integer with $a=89$, and so the model does have this exact symmetry.
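This claim is easy to check numerically. The following sketch (an added illustration, not part of the original exchange) diagonalizes the Aubry-André matrix from the question and measures how far the sorted spectrum is from being its own negation:

```python
import numpy as np

# Numerical check: for beta = 89/144 on L = 144 sites the spectrum is exactly
# E <-> -E symmetric, while for beta = 144/233 on L = 233 sites it is not.
# lambda = 1.5 is an arbitrary example potential strength.
def aa_spectrum(L, beta, lam):
    n = np.arange(L)
    H = np.diag(lam * np.cos(2 * np.pi * beta * n))
    H += np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
    H[0, -1] = H[-1, 0] = 1.0            # periodic boundary conditions
    return np.sort(np.linalg.eigvalsh(H))

def asymmetry(E):
    # zero exactly when the sorted spectrum satisfies E_i = -E_{L-1-i}
    return float(np.max(np.abs(E + E[::-1])))

print(asymmetry(aa_spectrum(144, 89 / 144, 1.5)))   # essentially zero
print(asymmetry(aa_spectrum(233, 144 / 233, 1.5)))  # clearly nonzero
```

For $\beta = 89/144$ the symmetry can also be realized explicitly: translating by $72$ sites flips the sign of the potential (since $89$ is odd), and the gauge transformation $c_n \to (-1)^n c_n$ (consistent on an even ring) flips the sign of the hopping.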
{ "domain": "physics.stackexchange", "id": 39568, "tags": "solid-state-physics, tight-binding, quasi-periodic" }
Fibonacci Nth term using tail recursion
Question: I have created a program that prints the Nth term of the Fibonacci sequence. The fib function needs to use tail recursion. If what I have coded isn't tail recursion, I would like to know how to change the fib function so that it is. #include <iostream> #include <sstream> int fib(int n, int i = 0, int a = 0, int b = 1) { return (i >= n - 1) ? a : fib(n, i + 1, b, a + b); } int main(int argc, char* argv[]) { if (argc < 2) { std::cerr << "Argument 2 must be the Nth term." << std::endl; return -1; } std::stringstream ss_obj(argv[1]); unsigned long int number; ss_obj >> number; std::cout << "\nFibonacci number: " << fib(number) << std::endl; return 0; } Answer: To address your immediate concern, it is indeed tail recursion. OTOH, there is no need to be that terse. You may want to be a little more explicit: if (i >= n - 1) { return a; } return fib(n, i + 1, b, a + b); Now the tail-recursiveness is obvious. The error message "Argument 2 must be the Nth term." is misleading. The Nth term definitely refers to the Nth Fibonacci number, rather than the index of the number to be computed. Besides that, such a message is traditionally formatted as "Usage: " << argv[0] << " index\n";
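For reference, here is a direct Python port of the same accumulator-passing shape (an added illustration, not part of the review): the recursive call is in tail position, though note that CPython does not perform tail-call elimination, so in Python the call stack still grows.

```python
# Accumulator-passing Fibonacci, same shape as the C++ version under review.
# fib(n) returns the (n-1)th Fibonacci number counting F(0) = 0, F(1) = 1,
# i.e. fib(1) = 0, fib(2) = 1, fib(3) = 1, ...
def fib(n, i=0, a=0, b=1):
    return a if i >= n - 1 else fib(n, i + 1, b, a + b)

print([fib(n) for n in range(1, 10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21]
```

The off-by-one in the indexing convention is worth noticing: this is exactly why the answer calls the "Nth term" error message misleading.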
{ "domain": "codereview.stackexchange", "id": 32512, "tags": "c++, recursion, fibonacci-sequence" }
How is an inflatable parabolic antenna created?
Question: I'm intrigued by this and how it would work. 3 sub-questions if I may: Construction: As I understand it, it's a flexible sphere constrained by a rigid edge. a. Do we simply glue 2 flat circular pieces of flexible material together at the edges and inflate? (e.g. the inflatable dish on PE1RAH) Or, b. do you have to cut out sections to make it parabolic, with the inflation simply giving it rigidity? (e.g. page 4 of this document) Spherical antenna, as manufactured by e.g. GATR: would the design for this inflated dome with an internal parabola be in effect 2 inflatable structures - 1 sphere (to give a rigid circumference) and an internal parabola as in question 1 - or could you do something like this with only 1 inflatable structure? Design: I'm assuming the parabola generated using question 1.a would be a function of gas pressure + stretch of fabric, given there is no slack in the material? Or are other factors involved? Any suggestions for design or theory papers would be most appreciated. I'm going to have a try at building one of these for portable wifi :) Thanks EDIT: I just threw together a prototype from 2 plastic bags/drinking straw/tape - it's a bit rough to see a definitive answer, but it looks parabolic. EDIT 2: On further digging, I confirmed that the inner parabola in a spherical antenna holds its shape by continually topping up the pressure in the 'top' of the sphere to keep the parabolic shape bowed the right way. Answer: When you look at the shape in your first link with a mathematical eye, you will see that the form is more a section of a sphere, as theory predicts. In the second link there is a picture of the parabolic mandrel which was used to form the foil. The pressure will not deform the foil much; it just keeps it inflated.
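A small numeric sketch may help reconcile the answer with the prototype looking "parabolic": a membrane under uniform pressure bulges into (approximately) a spherical cap, and for shallow dishes a spherical cap is very close to a paraboloid anyway. The aperture radius and depth below are assumed example values:

```python
import math

# Compare a spherical cap with the paraboloid that has the same rim and depth.
a, h = 0.50, 0.05                  # aperture radius and dish depth, metres (assumed)
R = (a * a + h * h) / (2 * h)      # radius of the sphere through rim and vertex

def sphere_sag(r):
    return R - math.sqrt(R * R - r * r)

def parabola_sag(r):
    return h * (r / a) ** 2        # paraboloid with the same rim and depth

dev = max(abs(sphere_sag(r) - parabola_sag(r))
          for r in (i * a / 100 for i in range(101)))
print(f"max sphere-vs-parabola deviation: {dev * 1000:.3f} mm")
```

For these proportions the two surfaces differ by a small fraction of a millimetre, so a rough prototype cannot distinguish them by eye; whether that deviation matters depends on the operating wavelength.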
{ "domain": "physics.stackexchange", "id": 861, "tags": "antennas" }
What is the capacity of a WWII Submarine battery in kWh?
Question: After reading about submarines in World War II, I was curious about their battery capacity, specifically in comparison to modern Battery Electric Vehicles (i.e. Tesla, Bolt, Leaf, i3, etc.). I haven't been able to find a source that either answers the question in kWh or gives me enough information to calculate it myself. My current guess is based on a figure of 12000 Ah in a 120 cell system, and a voltage of 2.75 V - 1.05 V per cell. I would accept an answer for any class of submarine in WWII, but I was looking at the US Balao class. Answer: Collecting bits from sites, fleetsubmarines World War II American fleet submarines had two batteries, each composed of 126 cells. By comparison, a 12-volt car battery contains only 6 cells, each producing about 2.25 volts when fully charged, with a maximum power output of about 45-50 amps. Each cell in a submarine battery produces from 1.06 volts when fully discharged, to 2.75 volts at the optimum output, so connecting the 126 cells in each battery in series gives a usable output of from about 210 to 350 volts, and a current output of as much as 15,000 amps with both batteries connected in parallel. (no mention of total amp-hours) quora My submarines (Oberon class of the 1960s-1990s) had two lead acid batteries containing 224 cells each, with a nominal battery voltage of 440 volts. The cells were rated 7420 ampere-hours at a 5 hour rate (nominal voltage of each cell was 2.2 V). Taking 448 cells at 74.2 Ah and 2.2 V per cell gives 448 × 74.2 Ah × 2.2 V ≈ 73 kWh per rating figure quoted. uboat.net The US Navy "Balao" type submarine (1944/45) was fitted with four Elliot main electric motors, two on each shaft, with a total horsepower of 2,740. While submerged, these motors were powered by two massive (each cell weighing 1650#) 126-cell batteries (in series) capable of delivering 5,320 Amp/Hrs each. Assuming they meant amp-hours, and guessing 2.2 V per cell: 2.2 V × 126 cells × 2 batteries × 5,320 Ah ≈ 2,950 kWh
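The arithmetic above is just cells × volts-per-cell × amp-hours × number of batteries. A small helper (added for illustration, using the per-cell voltage guess of 2.2 V from the answer) makes the two estimates reproducible:

```python
# Back-of-the-envelope battery energy from the figures quoted above.
def battery_kwh(cells, volts_per_cell, amp_hours, batteries=1):
    """Energy in kWh for `batteries` series strings of `cells` cells."""
    return cells * volts_per_cell * amp_hours * batteries / 1000.0

# Oberon class: 2 batteries x 224 cells x 74.2 Ah at 2.2 V/cell
print(battery_kwh(224, 2.2, 74.2, batteries=2))   # roughly 73 kWh

# Balao class: 2 batteries x 126 cells x 5,320 Ah at 2.2 V/cell
print(battery_kwh(126, 2.2, 5320, batteries=2))   # roughly 2,950 kWh
```

On these numbers a Balao-class battery stored on the order of thirty to fifty times the energy of a modern long-range EV pack, not a comparable amount.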
{ "domain": "engineering.stackexchange", "id": 2559, "tags": "electrical-engineering, battery" }
Lagrange multipliers in Maxwell-Boltzmann statistics
Question: I'm following Wikipedia's derivation of Maxwell-Boltzmann statistics. After applying Lagrange multipliers, we arrive at this expression for energy: $${\displaystyle E={\frac {\ln W}{\beta }}-{\frac {N}{\beta }}-{\frac {\alpha N}{\beta }}}$$ with $\alpha$ and $\beta$ as the constants emerging from the constraints. Next, it is explained that Boltzmann simply identified this as an expression of the fundamental thermodynamic relation: $${\displaystyle E=TS-PV+\mu N}$$ and just set the constants $\alpha$ and $\beta$ equal to $-\mu/kT$ and $1/kT$ so that the two expressions agree. I can understand that setting the constants in this way does make the expressions the same, but why is it physically or mathematically justified? I've been taught the method of Lagrange multipliers and we always had to solve a system of equations to figure out the constants and then finally solve for the maxima or minima. But here we are simply setting the constants so that we can arrive at a nice expression; why is it ok to just set the constants this way and conclude that we have arrived at something that represents reality? Answer: Here we are not arbitrarily choosing constants; we are deriving the values of $\beta$ and $\alpha$. In the first equation, $\ln W$ and $N$ are variables that can take arbitrary values. Substituting $S=k \ln W$ and $PV=NkT$ into the second equation gives $$E = Tk\ln W - kTN + \mu N$$ Comparing the coefficients of $\ln W$ and $N$ with the first equation, which must agree for arbitrary $\ln W$ and $N$, gives $$\beta=\frac{1}{kT}$$ $$\alpha=\frac{-\mu}{kT}$$
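The coefficient matching can be spot-checked numerically (an added sketch): with $\beta = 1/kT$ and $\alpha = -\mu/kT$, the statistical expression for $E$ coincides with $Tk\ln W - kTN + \mu N$ for any values of $\ln W$ and $N$, which is exactly why the identification is forced rather than chosen.

```python
import random

# Check that beta = 1/(kT), alpha = -mu/(kT) make
#   E = lnW/beta - N/beta - alpha*N/beta
# equal to
#   E = T*(k lnW) - N*k*T + mu*N   (i.e. TS - PV + mu N after substitution)
# for arbitrary lnW and N.
random.seed(0)
for _ in range(5):
    k, T, mu = random.uniform(0.5, 2), random.uniform(0.5, 2), random.uniform(-1, 1)
    lnW, N = random.uniform(1, 10), random.uniform(1, 10)
    beta, alpha = 1 / (k * T), -mu / (k * T)
    E_stat = lnW / beta - N / beta - alpha * N / beta
    E_thermo = T * (k * lnW) - N * k * T + mu * N
    assert abs(E_stat - E_thermo) < 1e-9
print("identities consistent for arbitrary ln W and N")
```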
{ "domain": "physics.stackexchange", "id": 56032, "tags": "statistical-mechanics" }
Property of cyclic codes
Question: Let $C$ be an $[n,k]$ cyclic code over $\mathbb{F}_q$ with $(n,q)=1$. I want to show that $(1, \dots, 1)$ is a codeword iff $X-1 \nmid g(X)$, where $g(X)$ is the generator polynomial. Suppose that $(1, \dots, 1)$ is a codeword. We consider the following correspondence $$\pi: \mathbb{F}_q^n \to \mathbb{F}_q[x] / (x^n-1), (a_0, a_1, \dots, a_{n-1}) \mapsto a_0+ a_1 x+ \dots+ a_{n-1} x^{n-1}$$ Then $(1, \dots, 1) \mapsto 1+ x+ \dots+ x^{n-1}$. We want to show that $X-1 \nmid g(X)$. Suppose, for contradiction, that $g(X)=b(X)(X-1)$ for some polynomial $b(X)$. How can we get a contradiction? And how can we show that if $X-1 \nmid g(X)$ then $(1, \dots, 1)$ is a codeword? Answer: The word $(1,\ldots,1)$ is in your code iff there exists a polynomial $p(x)$ such that $$ \frac{x^n-1}{x-1} = p(x) g(x). $$ Since the code is cyclic, $g(x)$ divides $x^n-1$, and so $(1,\ldots,1)$ is in your code iff there exists a polynomial $p(x)$ such that $$ \frac{x^n-1}{g(x)} = p(x)(x-1). $$ In other words, $(1,\ldots,1)$ is in your code iff $$ x-1 \mid \frac{x^n-1}{g(x)}. $$ I'll let you complete the proof, using the fact that $x-1$ divides $x^n-1$ but not $\frac{x^n-1}{x-1}$.
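A concrete check over GF(2) may make the statement tangible (an added illustration): for the [7,4] Hamming code with $g(x) = x^3 + x + 1$ we have $g(1) = 1 \neq 0$, so $(x-1) = (x+1)$ does not divide $g(x)$, and indeed the all-ones word is a codeword.

```python
# Polynomials over GF(2) encoded as integer bitmasks, bit i = coefficient of x^i.
def gf2_mod(a, b):
    """Remainder of a divided by b in GF(2)[x] (XOR-based long division)."""
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

g = 0b1011               # g(x) = x^3 + x + 1, generator of the [7,4] Hamming code
all_ones = 0b1111111     # 1 + x + ... + x^6, the image of the word (1,...,1), n = 7

print(gf2_mod(all_ones, g) == 0)   # True: g divides (x^7-1)/(x-1), so (1,...,1) is a codeword
print(gf2_mod(g, 0b11) == 0)       # False: x + 1 does not divide g, consistent with the claim
```

This matches the factorization $x^7 + 1 = (x+1)(x^3+x+1)(x^3+x^2+1)$ over GF(2).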
{ "domain": "cs.stackexchange", "id": 6433, "tags": "coding-theory" }
Answer to a self-made mechanics problem
Question: This is a question I made up. I didn't really have a good way of checking whether my approach was right. So. Q: Consider a smooth surface on which a glass plate of mass $M$ is kept. If on this plate a wooden block of mass $m$ is given a horizontal velocity $u$, find the time taken by the block to come to rest relative to the glass plate, given that the coefficient of kinetic and static friction between the block's surface and the glass plate is $k$. Also predict how the block and the plate move after the block has come to rest relative to the glass plate. Working from the frame of reference of the smooth surface, the only force on the block is the frictional force due to the plate. So its acceleration is $a=-kg$, and the acceleration of the plate is $$b= \frac {km}{M} g.$$ We want to know the velocity of the plate when the two are moving with the same velocity with respect to the smooth surface. Therefore, $$u+at=0+bt$$ Plugging in the values and solving we get $$t=\frac{Mu}{(M+m)kg}$$ Since there is no longer a tendency for relative motion, there is no longer a frictional force between them. Hence, they continue to move with this new velocity. Assumption: the glass plate is as long as required to ensure that the block does not slip off its end. Answer: I did the calculations and I got the same answer as you. The force equations (from the friction force) for the block and the plate become $$ \begin{align} \frac{dv_{\mathrm{block}}}{dt} = -kg \\ \frac{dv_{\mathrm{plate}}}{dt} = \frac{kmg}{M} \end{align}$$ which together with the initial conditions $v_\mathrm{block}(0) = u$ and $v_\mathrm{plate}(0) = 0$ gives $$ \begin{align} v_{\mathrm{block}} &= u - kgt \\ v_{\mathrm{plate}} &= \frac{kmgt}{M} \end{align}$$ and setting them equal I get the same $t$ as you. 
The only thing I can add is that the final velocity, which you can get by $$ v_\mathrm{final} = \frac{kmgt}{M} = \frac{mu}{M+m} $$ can also easily be found by conservation of momentum: The initial momentum is $mu$ and the final momentum is $(M+m)v_\mathrm{final}$ and setting them equal directly gives you $v_\mathrm{final}$. It's also a good sanity check that your answer actually conserves momentum! Also, it is customary to call the friction coefficients $\mu$ but it's really up to you.
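A quick numeric sanity check of both results (the parameter values are assumed examples): the two velocities really meet at $t = Mu/((M+m)kg)$, and the common velocity conserves the initial momentum $mu$.

```python
# Check the meeting time and momentum conservation for example values.
m, M, u, k, g = 1.0, 4.0, 2.0, 0.3, 9.8

t = M * u / ((M + m) * k * g)        # derived meeting time
v_block = u - k * g * t              # block decelerates under friction
v_plate = k * m * g * t / M          # plate accelerates under the reaction force

assert abs(v_block - v_plate) < 1e-12            # they really meet at time t
assert abs((M + m) * v_block - m * u) < 1e-12    # total momentum is conserved
print(t, v_block)                                # common velocity = m*u/(M+m)
```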
{ "domain": "physics.stackexchange", "id": 36567, "tags": "homework-and-exercises, newtonian-mechanics, friction" }
Find unique variants of a product
Question: I am writing a piece of code that returns all the unique variants that a product is available in for an ecommerce app. For example, a shirt product can be available in different colors, sizes, and linen. If the available attributes are red, green, L, XL, Cotton, and Polyester, then a list of the unique variants should be eventually returned as: [{red; L; Cotton} ; {red; L; Polyester} ; {red; XL; Cotton} ; {red; XL; Polyester} ; {green; L; Cotton} ; {green; XL; Cotton} ; {green; L; Polyester} ; {green; XL; Polyester}] This would be the unique variants available for the product. The code below works and eventually returns a string list of IDs representing each variant available for the product. The only problem that I am having with this is that it generates a duplicate of each variant. I can easily take care of that with a Set.ofList function after this code runs, but would like to solve that problem here internally. I'm new to F#, so what can I do to optimize this code? type NewProductAttributeInfo = { AttributeId : string; AttributeCategoryId : string } let rec private returnVariant (curIdx: int) (listLength: int) (attList: (int * NewProductAttributeInfo * NewProductAttributeInfo) list) (curList: NewProductAttributeInfo list) = match curList with | x when x.Length = listLength -> curList | x -> let attTup = attList |> List.filter (fun x' -> let idx1,att1,att2' = x' idx1 >= curIdx && not(curList |> List.exists (fun x'' -> x'' = att2')) ) let idx1,att1,att2 = attTup |> List.head let newList = curList @ [att2] returnVariant idx1 newList.Length attList newList let rec calculateVariants (attList: NewProductAttributeInfo list) (currentList: (int * NewProductAttributeInfo * NewProductAttributeInfo) list) = // group attribute list by category id let attGrouped = attList |> List.groupBy (fun x -> x.AttributeCategoryId) let (firstGroupCatId,firstGroupDetails) = attGrouped.[0] match currentList with | [] -> let rawVariants = [for nxt in 0 .. 
(attGrouped.Length - 1) do if nxt > 0 then // begin iteration for d in firstGroupDetails do let _,det = attGrouped.[nxt] for det' in det do yield (nxt, d, det') ] calculateVariants attList rawVariants | x -> let groupLength = x |> List.groupBy (fun (idx,d0,nxtD) -> idx) |> List.length |> ((+)1) let sortedGroup = x |> List.sortBy (fun (x,y,z) -> x) if groupLength > 2 then // below is the block that generates the duplicates [for att in sortedGroup do for attCompare in sortedGroup do let idx1,att1,att2 = att let idx2,attC1,attC2 = attCompare if idx2 > idx1 && att2 <> attC2 then let idString = returnVariant idx2 groupLength x [att1; att2; attC2] |> List.map (fun nl -> nl.AttributeId) yield String.concat "," idString ] else [ for att in sortedGroup do let idx1,att1,att2 = att let idString = returnVariant idx1 groupLength x [att1; att2] |> List.map (fun nl -> nl.AttributeId) yield String.concat "," idString ] Answer: If I understand correctly, you're looking for the Cartesian product of the attributes in each attribute category. To get the Cartesian product, I've adapted* Eric Lippert's solution from his blog post Computing a Cartesian product with LINQ. let cartesianProduct xs = Seq.fold (fun acc xs -> seq { for accSeq in acc do for x in xs do yield Seq.append accSeq (Seq.singleton x) }) (Seq.singleton Seq.empty) xs Then we need to group by the attribute category, and pull out just the attributes**. 
let variants (attributes : seq<NewProductAttributeInfo>) = attributes |> Seq.groupBy (fun attribute -> attribute.AttributeCategoryId) |> Seq.map (snd >> Seq.map (fun attribute -> attribute.AttributeId)) |> cartesianProduct Here is a test on the sample data you provided let attributes = [ { AttributeId = "red"; AttributeCategoryId = "Color" }; { AttributeId = "L"; AttributeCategoryId = "Size" }; { AttributeId = "XL"; AttributeCategoryId = "Size" }; { AttributeId = "Cotton"; AttributeCategoryId = "Material" }; { AttributeId = "green"; AttributeCategoryId = "Color" }; { AttributeId = "Polyester"; AttributeCategoryId = "Material" } ] for variant in variants attributes do printfn "%A" variant Which gives: seq ["red"; "L"; "Cotton"] seq ["red"; "L"; "Polyester"] seq ["red"; "XL"; "Cotton"] seq ["red"; "XL"; "Polyester"] seq ["green"; "L"; "Cotton"] seq ["green"; "L"; "Polyester"] seq ["green"; "XL"; "Cotton"] seq ["green"; "XL"; "Polyester"] * Hopefully without introducing errors. ** This is a little bit nicer in C# since we have the overload of GroupBy that takes an elementSelector parameter: return attributes.GroupBy(attribute => attribute.CategoryId, attribute => attribute.Id) .CartesianProduct();
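For comparison, the same idea is a one-liner in Python's standard library (an added sketch, not part of the original answer): group the attributes by category, then take the Cartesian product of the groups.

```python
from itertools import groupby, product

# (category, attribute id) pairs, mirroring the sample data in the answer.
attributes = [
    ("Color", "red"), ("Size", "L"), ("Size", "XL"),
    ("Material", "Cotton"), ("Color", "green"), ("Material", "Polyester"),
]

# groupby requires its input sorted by the grouping key.
by_category = [
    [attr_id for _, attr_id in group]
    for _, group in groupby(sorted(attributes), key=lambda a: a[0])
]
variants = [list(combo) for combo in product(*by_category)]
print(len(variants))   # 2 colors x 2 sizes x 2 materials = 8 unique variants
```

Like the F# fold, `product` never generates duplicates in the first place, so no `Set.ofList`-style deduplication is needed afterwards.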
{ "domain": "codereview.stackexchange", "id": 16705, "tags": "combinatorics, f#" }
Classification: How to manage data sets where one data row depends on another data row
Question: I am trying to classify the headings, images and image captions of a webpage. I am preparing data by scraping selected URLs (around 1000) using the XPath of the DOM elements I need. Each data row in the CSV file contains tag name, x-coordinate, y-coordinate, text size, etc., with three target labels: heading, image, and image caption. But a piece of text cannot be an image caption unless it sits under an image. Is there any way to create this dependency among rows? Answer: I assume that you are solving a supervised classification problem, that is, you train your model on a labeled sample. I can think of two approaches to this problem. I. Classify tags, using neighbor tags for the features. For each tag you can calculate features like: (x, y) distances to the closest image; (x, y) distances to the closest image above this tag; number of images near this tag; ... Such features could be fed into any classifier, like an SVM or a decision tree. II. Classify pairs of tags. For each tag $i$, you can consider all other tags $j$ and predict the probability that $i$ is a caption of image $j$. This prediction may be based on a concatenation of: features of $i$; features of $j$; joint features of $i$ and $j$, like the vertical and horizontal distance from $i$ to $j$. After these probabilities are predicted, you can classify $i$ as a caption if any of these probabilities exceeds some threshold.
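A sketch of the pairwise feature construction from approach II (the field names and the alignment threshold are hypothetical assumptions, not from the original data): build one feature vector per (text, image) pair, then feed these vectors to any probabilistic classifier.

```python
# Build joint features for a (text tag, image tag) pair. The row fields and
# the 20-pixel alignment threshold are illustrative assumptions.
def pair_features(text_row, image_row):
    dx = text_row["x"] - image_row["x"]
    dy = text_row["y"] - image_row["y"]            # positive: text below the image
    return {
        "dx": dx,
        "dy": dy,
        "text_below_image": int(dy > 0),           # captions usually sit below
        "horizontally_aligned": int(abs(dx) < 20), # and roughly share the left edge
        "text_size": text_row["text_size"],
    }

text = {"tag": "p", "x": 105, "y": 540, "text_size": 11}
image = {"tag": "img", "x": 100, "y": 300, "text_size": 0}
print(pair_features(text, image))
```

Because each training example is a pair rather than a single row, the "a caption must be under an image" dependency is encoded directly in the features instead of being left for the model to discover.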
{ "domain": "datascience.stackexchange", "id": 2155, "tags": "machine-learning, svm, feature-extraction, feature-engineering, feature-construction" }
Logspace algorithm for s-t connectivity in undirected forests
Question: It has been shown that the decision problem $\{(G, s, t)~\mid~ G\text{ is an undirected forest and there is a path from }s \text{ to } t \}$ is complete for logspace (and therefore in $L$). But the proof involves a series of reductions which didn't seem very intuitive to me. Is there any natural algorithm that directly solves this problem in log space? Answer: This is proved by Cook and McKenzie. We make use of the following notation: $\deg(v)$ is the degree of a vertex $v$. $N(v,1),\ldots,N(v,\deg(v))$ is some fixed ordering of the neighbors of $v$. We construct a sequence $v_1,v_2,\ldots$ of nodes starting with $v_1 = s$ and $v_2 = N(s,1)$ (if $\deg(s) = 0$ then $s$ is connected to $t$ iff $s = t$). Given $v_{i-2},v_{i-1}$, we construct $v_i$ as follows: Suppose that $v_{i-2} = N(v_{i-1},j)$. Then $v_i = N(v_{i-1},j+1)$. (If $j = \deg(v_{i-1})$ then $j+1 = 1$.) It turns out that if the connected component of $s$ has size $m$ then the first $2(m-1)$ vertices contain each vertex $v$ in the connected component exactly $\deg(v)$ times, and after that the sequence repeats indefinitely. This is because the sequence essentially performs a depth-first search (an Euler tour of the tree). As an example, take the tree from the Wikipedia depth-first search illustration: F is the root, with children B and G; B has children A and D; D has children C and E; G has child I; and I has child H. Here $s = F$ and the neighbors of each vertex are ordered left to right, with the parent last. The first 16 elements of the sequence are F,B,A,B,D,C,D,E,D,B,F,G,I,H,I,G. The sequence then repeats. This implies the following simple algorithm: calculate the first $2(n-1)$ members of the sequence, and check whether $t$ ever appears. This can be implemented in logarithmic space. Reingold gave a (significantly) more complicated algorithm that works for any undirected graph, still using only logarithmic space. If the graph is directed then the problem is complete for NL, and so probably cannot be solved using only logarithmic space (at least not on deterministic Turing machines!).
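The sequence construction can be sketched directly (an added illustration). Note the real point of the result is the *space* bound: the algorithm only ever needs to remember two vertex names and a step counter, so it runs in $O(\log n)$ space; the list below is kept purely for display.

```python
# The walk from the answer: given the previous two vertices, move to the
# next neighbor (cyclically) after the one we arrived from.
def next_vertex(adj, prev, cur):
    j = adj[cur].index(prev)                   # prev = N(cur, j)
    return adj[cur][(j + 1) % len(adj[cur])]   # take N(cur, j+1), wrapping around

def walk(adj, s, n):
    """First 2(n-1) elements of the sequence, starting v1 = s, v2 = N(s, 1)."""
    if not adj[s]:
        return [s]
    prev, cur = s, adj[s][0]
    seq = [prev, cur]
    for _ in range(2 * (n - 1) - 2):
        prev, cur = cur, next_vertex(adj, prev, cur)
        seq.append(cur)
    return seq

def connected(adj, s, t):
    return t in walk(adj, s, len(adj))

# The example tree from the answer, neighbors ordered left to right, parent last.
adj = {"F": ["B", "G"], "B": ["A", "D", "F"], "A": ["B"], "D": ["C", "E", "B"],
       "C": ["D"], "E": ["D"], "G": ["I", "F"], "I": ["H", "G"], "H": ["I"]}
print("".join(walk(adj, "F", len(adj))))   # FBABDCDEDBFGIHIG, as in the answer
print(connected(adj, "F", "H"))            # True
```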
{ "domain": "cs.stackexchange", "id": 7712, "tags": "algorithms, graphs, graph-traversal" }
Once matching features are computed between a stereo pair, how can they be tracked?
Question: I am currently working on a SLAM-like application using a variable baseline stereo rig. Assuming I'm trying to perform visual SLAM using a stereo camera, my initialization routine would involve producing a point cloud of all 'good' features I detect in the first pair of images. Once this map is created and the cameras start moving, how do I keep 'track' of the original features that were responsible for this map, as I need to estimate my pose? Along with feature matching between the stereo pair of cameras, do I also have to perform matching between the initial set of images and the second set to see how many features are still visible (and thereby get their coordinates)? Or is there a more efficient way of doing this, through, for instance, locating the overlap in the two point clouds? Answer: This will depend a bit on how you are detecting your features and how you are estimating motion between frames. Classic approaches would start with detecting SIFT-like features in each image, which provide a descriptor for each detected feature as well as the point locations. These descriptors provide evidence of whether two detected features correspond to the same world point. In these classical approaches, you will need to do a search for corresponding feature points between successive stereo image frames in order to estimate motion, as well as possibly within the stereo pair. Depending on how your frame rate compares to camera motion, you may be able to restrict the search area for these correspondences. Then this same correspondence information provides most of what's needed for tracking. See this Stereo Visual Odometry Tutorial for a detailed overview of such an approach. There may be benefit to keeping track of particular points over longer periods as well, since longer-lived features show evidence of being more stable and could be less prone to be spurious matches. 
To take advantage of that, you need to perform some bookkeeping of things like the quality of matching over time or its agreement with estimated motion, which you can then use to decide how much to trust the point. Also see the KITTI odometry leaderboards for a list of algorithms that do very well at this task. The papers linked for those entries should give a good idea of the variation in how leading approaches go about this. Note that some of these use stereo vision, and others use laser data.
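A minimal sketch of the descriptor-matching step described above, in plain Python (illustrative only — a real pipeline would use e.g. OpenCV's SIFT/ORB detectors and a proper matcher on 128-d descriptors; the function name and toy data here are hypothetical):

```python
import math

def match_features(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test.

    desc_a, desc_b: lists of descriptor vectors from two frames.
    Returns (i, j) pairs where descriptor i of frame A matches descriptor j
    of frame B and the best match is clearly better than the second best,
    which suppresses ambiguous correspondences.
    """
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    matches = []
    for i, d in enumerate(desc_a):
        # Rank frame-B descriptors by distance to descriptor i of frame A.
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        if len(ranked) >= 2:
            best, second = ranked[0], ranked[1]
            if dist(d, desc_b[best]) < ratio * dist(d, desc_b[second]):
                matches.append((i, best))
    return matches
```

Matching frame t against frame t+1 (and against the stereo partner) with something like this, then intersecting the resulting correspondence sets, is the bookkeeping the answer refers to; restricting candidates by predicted position would replace the full sort over `desc_b`.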
{ "domain": "robotics.stackexchange", "id": 1585, "tags": "slam, computer-vision, stereo-vision" }
Chain transfer in polystyrene synthesis
Question: While I was studying the many side reactions of radical polymerization, I stumbled across this source, which illustrates the chain transfer side reaction in polystyrene synthesis: Of course, there is a typo in this picture, it shouldn’t be -CH2-H- but -CH-H-, but you get the idea. A monomer radical can abstract a hydrogen atom from an already existing chain in the first step, so that its radical function is forwarded to the chain. The chain then, in a second step, attacks another styrene molecule from the back side of the double bond so that the more stable, secondary radical forms, so the molecule can also form branches. Everything is alright with me so far, but I don’t get the regiochemistry here. In the first step, a secondary carbon atom is attacked, which also forms a secondary radical intermediate; but wouldn’t an attack on the tertiary -CH group be more favorable thermodynamically, as it is a tertiary radical intermediate which is also stabilized by the phenyl group? Or is it a kinetically controlled process, where more hydrogens can be abstracted from the bridging carbon atoms, so that after the chain transfer, the bridging carbons are tertiary as a result? Any tips would really be appreciated! Answer: The image shown in the question can be found with some context at https://www.open.edu/openlearn/science-maths-technology/science/chemistry/introduction-polymers/content-section-4.3.3 In that document, they say of chain transfer: A similar mechanism accounts for the side branches in LDPE where it is a more important mode of termination than in polystyrene This indicates that chain transfer is a rare event in a typical synthesis of polystyrene. [...] but wouldn’t an attack on the tertiary -CH group be more favorable thermodynamically as it is a tertiary radical intermediate which is also stabilized by the phenyl group? As this is a rare event, you don't expect any of the intermediates to be the most favorable ones. Or is it a kinetically controlled process [...] 
If the chain transfer is rare, you would expect faster processes to react with the radical intermediate faster than it forms (because these steps are less rare). This means it is kinetically controlled. Moreover, all the steps with two radicals combining to form a bond would be expected to be kinetically controlled because once formed, the covalent bond would not fall apart into radicals (that is why you need initiator molecules in the first place). References Synthesis of Polystyrene and Molecular Weight Determination by 1H NMR End-Group Analysis, Jay Wm. Wackerly and James F. Dunn, J. Chem. Educ. 2017, 94, 11, 1790–1793 Preparation and Properties of Branched Polystyrene through Radical Suspension Polymerization, Wenyan Huang et al., Polymers (Basel). 2017 Jan; 9(1): 14 Self-Branching in the Polymerization of Styrene J. C. Bevington, G. M. Guzman and H. W. Melville Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences Vol. 221, No. 1147 (Feb. 9, 1954),
{ "domain": "chemistry.stackexchange", "id": 16089, "tags": "organic-chemistry, reaction-mechanism, polymers, regioselectivity" }
Polchinski's doubling trick for extending open string theory to the whole complex plane
Question: Open string theory can be described on the upper-half complex plane. To simplify the description of open string theory, Polchinski asserts (eq. 2.6.28 in his Vol. I String Theory book) that it is convenient to use the "doubling trick" where we extend the domain of the stress-tensor $T$ to the lower half plane by defining $$ T_{zz} (z) = T_{\bar{z} \bar{z}} (\bar{z}'), \quad \mathrm{Im}(z) <0, \tag{2.6.28} $$ where $z' = \bar{z}$. This is where I get confused. It seems like the notation $\bar{z}' = \bar{\bar{z}} = z$ is not accurate. This statement would imply $T_{zz} (z) = T_{\bar{z} \bar{z}} (z)$, which gives no relation between the upper and lower half planes. I think the correct statement is $$T_{zz} (z) = T_{\bar{z} \bar{z}} (\bar{z}).\tag{1}$$ I thought it was a typo, but later in his book, he states in a similar setup (eq. 6.3.9) $$ \tilde{b}(\bar{z}) = b(z'), \quad \mathrm{Im}(z) > 0, \tag{6.3.9} $$ where again $z' = \bar{z}$. In my opinion, the correct statement here would be $ \tilde{b}(\bar{z}) = b(z)$. Polchinski's generic pattern for relating the upper half plane described by $z$, $\mathrm{Im}(z) > 0$ and the lower half plane described by $z' = f(z)$, $\mathrm{Im}(z') < 0$ is to write $A_1(z) = A_2(\bar{z}')$, but for me the correct relation in general is $A_1(z) = A_2(z')$. It looks like we should remove the "bar" on the $z$'s in all these equations. So my question is, did Polchinski make the same typo multiple times in his book, or am I misunderstanding his notation, and if so what am I missing? Answer: TL;DR: OP's eq. (1) is incorrect, while Polchinski's eq. (2.6.28) is correct. Since the holomorphic argument here lives in the upper half plane, then the anti-holomorphic argument lives in the lower half plane. Eq. (2.6.28) extends the holomorphic argument $z$ to the lower half plane, cf. above comment by Connor Behan. Let us check how the prescription (2.6.28) plays out in the very next eq. 
(2.6.29): $$ \begin{align} L_m~=~~&\frac{1}{2\pi i} \int_{\curvearrowleft} \left( dz~z^{m+1} T_{zz}(z)-d\bar{z}~\bar{z}^{m+1} T_{\bar{z}\bar{z}}(\bar{z})\right)\cr\cr ~\stackrel{z=re^{i\theta}}{=}& r^{m+2}\int_0^{\pi}\frac{d\theta}{2\pi} \left(e^{i(m+2)\theta}T_{zz}(re^{i\theta}) +e^{-i(m+2)\theta} T_{\bar{z}\bar{z}}(re^{-i\theta})\right)\cr\cr ~\stackrel{(2.6.28)}{=}& r^{m+2}\int_0^{\pi}\frac{d\theta}{2\pi} \left(e^{i(m+2)\theta}T_{zz}(re^{i\theta}) +e^{-i(m+2)\theta} T_{zz}(re^{-i\theta})\right)\cr\cr ~=~~& r^{m+2}\int_{-\pi}^{\pi}\frac{d\theta}{2\pi} e^{i(m+2)\theta}T_{zz}(re^{i\theta})\cr\cr ~\stackrel{z=re^{i\theta}}{=}& \oint_0 \frac{dz}{2\pi i} z^{m+1} T_{zz}(z), \end{align} \tag{2.6.29}$$ as it should.
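The folding step in (2.6.29) — combining the two half-circle integrals into one full-circle integral via the substitution $\theta \to -\theta$ — can be sanity-checked numerically. The sketch below is purely illustrative: it uses an arbitrary Laurent polynomial as a stand-in for $T_{zz}$. With $T(z) = z^{-3} + 2z^{-1} + z^2/2$ and $m = 1$, the folded integral should also extract the coefficient $a_m = 1$ of $z^{-m-2}$, i.e. $L_1 = 1$.

```python
import cmath
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule for a complex-valued integrand
    # (spectrally accurate here, since the integrands are periodic).
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

m, r = 1, 0.8
T = lambda z: z**-3 + 2 * z**-1 + 0.5 * z**2   # arbitrary test "stress tensor"

upper = lambda th: cmath.exp(1j * (m + 2) * th) * T(r * cmath.exp(1j * th))
lower = lambda th: cmath.exp(-1j * (m + 2) * th) * T(r * cmath.exp(-1j * th))

# Two half-circle integrals (the second uses the doubling prescription) ...
lhs = trapezoid(upper, 0.0, math.pi, 2000) + trapezoid(lower, 0.0, math.pi, 2000)
# ... equal one full-circle integral after substituting theta -> -theta:
rhs = trapezoid(upper, -math.pi, math.pi, 4000)

L_m = r**(m + 2) * rhs / (2 * math.pi)   # should pick out the coefficient a_m
```

This only checks the contour bookkeeping, not anything specific to the CFT; the identity holds for any integrand related by $\theta \to -\theta$.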
{ "domain": "physics.stackexchange", "id": 96968, "tags": "string-theory, conformal-field-theory, boundary-conditions, complex-numbers, analyticity" }
Weighted closest-pair-of-points problem
Question: I want to solve the following optimisation problem (an approximation or heuristic would be helpful as well). I have two sets of points in the plane: $P=\left\{ p_{1},p_{2},\dots,p_{N}\right\} $ and $Q=\left\{ q_{1},q_{2},\dots,q_{M}\right\} $. Each point has a value/weight, i.e there is a function $v:P\cup Q\rightarrow\mathbb{R}$ which assigns a weight/value to each point. I want to find a pair of points, one in $P$ and one in $Q$, which maximizes the function $f:P\times Q\rightarrow \mathbb{R} $,$ f\left(p,q\right)=v\left(p\right)+v\left(q\right)-\alpha\cdot d\left(p,q\right) $ where $d$ is the Manhattan distance between the points. So on one hand I want points with a high value, on the other hand I want them to be close together. $\alpha$ is just a constant which I will determine according to the specific application and it assigns a "relative importance" of the distance relative to the weights. The choice for Manhattan distance is because I want to use this with actual geographical points which are connected by actual road networks. Does anyone have a suggestion how to solve this without an exhaustive search over all pairs? even an approximation or heuristic suggestion would be helpful. Thank you! Answer: Suggested solution: use a k-d tree I recommend you use a k-d tree to store the points of $Q$, with $k=3$ (3 dimensions). The three dimensions of each point are the $x$-coordinate, the $y$-coordinate, and the $v$-value (the value/weight of the point). Store all of the points of $Q$ in a k-d tree. Then given a point $p$, you can use the structure of the $k$-d tree to find the point $q \in Q$ that maximizes $f(p,q)$ in a relatively efficient way -- more efficient than naively testing each possible point $q \in Q$. Then, you can iterate through the points of $P$, and for each point $p \in P$, use this procedure to find the point $q \in Q$ that maximizes $f(p,q)$. Keep track of the best pair you've seen, and this will solve your problem. 
I guess I'd better describe how to use the k-d tree. Recall that each node in the k-d tree splits by a binary condition of the form $x \le c$ or $y \le c$ or $v \le c$, where $c$ is some constant (all points that satisfy the condition are stored in the left subtree of that node and all points that don't satisfy the condition are stored in the right subtree). Each point $q \in Q$ is stored somewhere in the tree. So, any node $s$ in the k-d tree corresponds to a subset $S_s \subseteq Q$ of points from $Q$, namely, the points that appear somewhere in the subtree rooted at $s$. Due to the splitting criteria, each set $S_s$ has the form $Q \cap R$ where $R$ is some axis-aligned 3-dimensional rectangle. For a particular node $s$, define $$\begin{align*} s.x_\text{max} &= \max\{q.x : q \in S_s\}\\ s.x_\text{min} &= \min\{q.x : q \in S_s\}\\ s.y_\text{max} &= \max\{q.y : q \in S_s\}\\ s.y_\text{min} &= \min\{q.y : q \in S_s\}\\ s.v_\text{max} &= \max\{v(q) : q \in S_s\}\\ s.v_\text{min} &= \min\{v(q) : q \in S_s\} \end{align*}$$ As you form the k-d tree, precalculate the above 6 values for every node and store them in that node of the k-d tree, so they're available for ready lookup. Define $R_s = [s.x_\text{min},s.x_\text{max}] \times [s.y_\text{min},s.y_\text{max}] \times [s.v_\text{min},s.v_\text{max}]$. We can see that $S_s = Q \cap R_s$. It will be helpful to define $$d(p,R_s) = \min \{d(p,r) : r \in R_s\}.$$ Note that you can efficiently calculate $d(p,R_s)$, given the point $p$ and the 6 values associated with $s$. (Basically, this involves a case analysis through 27 cases, depending upon where $p$ is relative to $R_s$. The running time to compute $d(p,R_S)$ is $O(1)$.) Let's fix a point $p$. I'm going to describe an algorithm to find the point $q \in Q$ that maximizes $f(p,q)$, by recursively searching the k-d tree for $Q$ in a suitable way. 
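For the planar Manhattan metric the case analysis for $d(p, R_s)$ collapses to a coordinate-wise clamp; only the $x$ and $y$ extents of $R_s$ matter, since $d$ is a distance between plane points. A small sketch (hypothetical name; the rectangle is given by its four planar extents):

```python
def manhattan_dist_to_rect(p, xmin, xmax, ymin, ymax):
    # Distance from the point p = (x, y) to the nearest point of the
    # axis-aligned rectangle; each max(...) term is 0 when p's coordinate
    # lies inside the rectangle's extent on that axis, so the result is 0
    # for points inside the rectangle.
    dx = max(xmin - p[0], 0, p[0] - xmax)
    dy = max(ymin - p[1], 0, p[1] - ymax)
    return dx + dy
```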
The idea is that we'll recursively visit the nodes of the k-d tree, but pruning the traversal when we reach a node that cannot possibly hold any point $q \in Q$ that will be better than the best seen so far. To help us prune the traversal, notice that we can bound the maximum possible value of $f(p,q)$ over all $q \in S_s$, as follows: $$f(p,q) \le v(p) + s.v_\text{max} - \alpha \cdot d(p,R_s) \text{ for all } q \in S_s.$$ Thus, whenever our recursive traversal reaches a node $s$ of the k-d tree, we'll use the above bound to figure out whether there is any possibility that exploring the subtree rooted at $s$ could possibly yield some improvement over the best result seen so far. If not, we'll prune the traversal and won't visit any of the children or descendants of $s$. Thus, the algorithm looks something like the following: def findbestpoint(p): bestsofar := -infinity visit(p, root of k-d tree for Q) return bestsofar def visit(p, s): if v(p) + s.v_max - alpha * d(p,R_s) <= bestsofar: return update bestsofar based upon the point q stored in node s (if any) if s is a leaf: return visit(p, s.rightchild) visit(p, s.leftchild) Given a k-d tree for $Q$ and a point $p \in P$, this algorithm computes $\max \{f(p,q) : q \in Q\}$. It will visit some of the nodes of the k-d tree but not all of them. So, iterate through the points $p \in P$ and call findbestpoint() on each one. The above is correct but not necessarily optimal in performance. As an optimization, I suggest that at nodes that split on the $x$-coordinate or $y$-coordinate, you should choose the order of the recursive traversal to prefer the subtree that contains $p$; and for nodes that split on $v$-value, always prefer the subtree with the larger $v$-value. 
In other words, I suggest you replace the last two lines of visit(p, s) with if the condition associated with s is a split on the v-value: visit(p, s.rightchild) visit(p, s.leftchild) else if p satisfies the condition associated with s (i.e., p is contained within the subtree rooted at s.leftchild): visit(p, s.leftchild) visit(p, s.rightchild) else: visit(p, s.rightchild) visit(p, s.leftchild) This will choose a traversal ordering that is more likely to help you prune many unpromising nodes. As another optimization, rather than resetting bestsofar to $-\infty$ on each call to findbestpoint, I suggest you initialize it to $-\infty$ once and then don't reset it again. This will enable you to prune more effectively, and since you only want to find one pair $p,q$ that maximizes $f(p,q)$, this optimization is valid. As yet another optimization, you can choose the order in which you iterate through the points of $P$ based on some heuristic. For instance, you could try sorting $P$ by value and iterating through $P$ in order of decreasing value; or you could use value minus $\alpha$ times the distance to the centroid of $Q$, or something like that. Basically, you want to first try values of $P$ that are more likely to yield you a large $f(\cdot,\cdot)$ value, as that will make the pruning more effective. I have no theoretical guarantees on the asymptotic running time of this algorithm. However, when the set $Q$ is large, I expect this will give significant improvements. Once you understand how k-d trees work, it should be relatively easy to code this up and give it a try. Note that you can always choose whether to store $Q$ in the k-d tree and iterate through $P$ (as outlined above) or whether to store $P$ in the k-d tree and iterate through $Q$. I suggest you store the larger set in the k-d tree and iterate through the smaller set. 
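The whole first algorithm fits in a short, runnable sketch. The code below is illustrative only — hypothetical names, a naive median-split tree over $(x, y, v)$, and none of the traversal-ordering heuristics — but it implements the essential pruning rule: skip a subtree whenever $v(p) + s.v_\text{max} - \alpha \cdot d(p, R_s)$ cannot beat the best pair found so far.

```python
ALPHA = 0.5  # relative importance of distance, as in the question

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def build(points, depth=0):
    """Build a 3-d tree over points (x, y, v); each node also stores the
    x/y bounding box and the maximum v of its whole subtree."""
    if not points:
        return None
    axis = depth % 3                      # cycle through x, y, v splits
    points = sorted(points, key=lambda pt: pt[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build(points[:mid], depth + 1),
        "right": build(points[mid + 1:], depth + 1),
        "xmin": min(pt[0] for pt in points), "xmax": max(pt[0] for pt in points),
        "ymin": min(pt[1] for pt in points), "ymax": max(pt[1] for pt in points),
        "vmax": max(pt[2] for pt in points),
    }

def dist_to_box(p, node):
    # Manhattan distance from p to the node's (x, y) footprint (0 if inside).
    dx = max(node["xmin"] - p[0], 0, p[0] - node["xmax"])
    dy = max(node["ymin"] - p[1], 0, p[1] - node["ymax"])
    return dx + dy

def search(p, node, best):
    """Raise best[0] to max f(p, q) over the subtree, pruning subtrees whose
    bound p.v + vmax - alpha * d(p, R) cannot beat best[0]."""
    if node is None:
        return
    if p[2] + node["vmax"] - ALPHA * dist_to_box(p, node) <= best[0]:
        return                            # no q down here can improve the best
    q = node["point"]
    best[0] = max(best[0], p[2] + q[2] - ALPHA * manhattan(p, q))
    search(p, node["left"], best)
    search(p, node["right"], best)

def best_pair_value(P, Q):
    root = build(Q)
    best = [float("-inf")]   # shared across all p, per the "don't reset" tip
    for p in P:
        search(p, root, best)
    return best[0]
```

Checking the result against a brute-force `max` over all pairs on random data is an easy way to convince yourself the pruning bound is valid.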
A better algorithm using k-d trees The idea above can be generalized further, to obtain an algorithm that might be even more efficient, at the cost of making the algorithm more complicated and more tedious to implement. The algorithm above explores pairs $p,s$ where $p$ is a point and $s$ is a node in the tree (summarizing some subset of points from $Q$). In the refinement, we'll explore pairs $s,t$ where $s$ is a node in the k-d tree for $P$ and $t$ is a node in the k-d tree for $Q$ (so $s$ summarizes some subset of $P$ and $t$ summarizes some subset of $Q$). This will expose further opportunities for pruning. Assume we have built a k-d tree for $P$ and a k-d tree for $Q$, augmented with the information listed above. For simplicity of exposition, assume we're using one of the variants of the k-d tree data structure where the points of $P,Q$ are stored in the leaves of the tree; the internal nodes don't store points. (You can generalize the ideas here to the case where each internal node also stores a point from $P$ or $Q$; the extension is messy but not conceptually difficult.) Also, if $s$ is a node in the k-d tree for $P$ and $t$ is a node in the k-d tree for $Q$, define $$d(R_s,R_t) = \min \{d(r_1,r_2) : r_1 \in R_s, r_2 \in R_t\}$$ to be the distance between the two rectangles $R_s,R_t$. This is the distance at the "closest point of approach" of the two rectangles. For example, if the two rectangles $R_s,R_t$ overlap, $d(R_s,R_t)=0$. This distance is a property solely of the rectangles $R_s,R_t$ and doesn't have anything to do with the points $p,q$ that are contained within them. Given the descriptions of $R_s,R_t$ (i.e., the 6 values stored in the nodes $s,t$), you can compute $d(R_s,R_t)$ in $O(1)$ time via an ugly but straightforward case analysis. Now notice that $$f(p,q) \le s.v_\text{max} + t.v_\text{max} - \alpha \cdot d(R_s,R_t)$$ for all $p \in R_s, q \in R_t$. 
This gives us an upper bound on the value of $f(p,q)$ over a whole class of possible pairs $p,q$ of points. Conceptually, the node $s$ summarizes a bunch of points $p$ (the set $S_s \subseteq P$), and the node $t$ summarizes a bunch of points $q$ (the set $S_t \subseteq Q$); the above bound lets us upper-bound the best possible $f(\cdot,\cdot)$ value, among all pairs of points $p,q$ in these sets. Based on this insight, we can get a recursive algorithm to find the best pair of points: def findbestpair(): bestsofar := -infinity visitpair(root of k-d tree for P, root of k-d tree for Q) return bestsofar def visitpair(s, t): if s.v_max + t.v_max - alpha * d(R_s,R_t) <= bestsofar: return if s is a leaf and t is a leaf: let p := the point contained within s let q := the point contained within t if f(p,q) > bestsofar: bestsofar := f(p,q) else if s is a leaf: visitpair(s, t.rightchild) visitpair(s, t.leftchild) else if t is a leaf: visitpair(s.rightchild, t) visitpair(s.leftchild, t) else: visitpair(s.rightchild, t.rightchild) visitpair(s.rightchild, t.leftchild) visitpair(s.leftchild, t.rightchild) visitpair(s.leftchild, t.leftchild) We can see that this explores pairs $s,t$ (corresponding to a set $S_s \times S_t$ of points), pruning when we can prove that none of the pairs $(p,q) \in S_s \times S_t$ will have a larger value of $f(p,q)$ than the best we've seen so far. You can apply similar optimizations to those mentioned above (e.g., when $s$ is a leaf and $t$ splits on the $x$-coordinate or $y$-coordinate, first try the child of $t$ that $s$ is contained in; when $t$ splits on the $v$-value, always prefer the child with the larger $v$-value; and symmetrically when $t$ is a leaf). I would conjecture that if both sets are large enough, this might perform better than the solution mentioned above. However, I have no proof, and the only way to know for sure is to implement it and give it a try on your data set. 
A connection to computational geometry I also noticed that your problem is related to the following classical problem: Given a set of discs in the 2-D plane, determine whether any pair of discs overlap. and the following version of the problem: Given a set of discs in the 2-D plane, determine the pairs of discs that are closest (where the distance between two discs is given by the closest point of approach, i.e., the minimum distance between any point in the first disc and any point in the second disc). These are classic problems in computational geometry which I think have sub-quadratic time algorithms (by sweepline methods or something, if I recall correctly). Unfortunately your problem seems harder than those questions about discs, so I don't know whether those methods will transfer over. Here is the connection. Define $w:P \cup Q \to \mathbb{R}$ by $w(p) = v(p)/\alpha$ and $w(q) = v(q)/\alpha$. Define $g:P \times Q \to \mathbb{R}$ by $$g(p,q) = d(p,q) - w(p) - w(q).$$ Your problem is equivalent to asking for the pair of points $p \in P,q \in Q$ that minimizes $g(p,q)$, as $f(p,q) = - \alpha \cdot g(p,q)$. Next, treat each point $p$ as being associated with a disc centered at $p$ and having radius $w(p)$; and similarly for each point $q$. In this way if all of the $w(\cdot)$ values are positive, we obtain a collection of discs in the two-dimensional plane. Notice that if two discs $p,q$ don't overlap, then $g(p,q)$ denotes exactly the distance between those two discs. This condition is equivalent to requiring that $g(p,q)$ be positive for all $p,q$. In summary, if we were promised that $v(p)$ was positive for all $p,q$ and that the maximum value of $f(p,q)$ was negative, then your problem would be equivalent to finding the closest pair of discs in the 2-D plane. Of course, in your problem, we're not promised any of those things. Therefore, your problem is harder. 
However, if you were really eager on trying to find the best possible algorithm, you could explore the literature to find algorithms for finding the closest pair of discs and see if any of them can be generalized to your setting.
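The algebra behind this reduction is easy to spot-check numerically: since $f(p,q) = -\alpha \cdot g(p,q)$ identically, maximizing $f$ and minimizing $g$ pick out the same pair. A throwaway sketch (illustrative names and synthetic data):

```python
import random

ALPHA = 2.0

def d(p, q):
    # Manhattan distance between plane points
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def f(a, b):
    (p, vp), (q, vq) = a, b
    return vp + vq - ALPHA * d(p, q)

def g(a, b):
    # the "disc" reformulation, with weights w = v / alpha
    (p, vp), (q, vq) = a, b
    return d(p, q) - vp / ALPHA - vq / ALPHA

random.seed(0)
pts = [((random.random(), random.random()), random.random()) for _ in range(20)]
P, Q = pts[:10], pts[10:]

# same pair wins under both formulations:
best_f = max(((a, b) for a in P for b in Q), key=lambda ab: f(*ab))
best_g = min(((a, b) for a in P for b in Q), key=lambda ab: g(*ab))
```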
{ "domain": "cs.stackexchange", "id": 4429, "tags": "algorithms, time-complexity, optimization, computational-geometry, search-problem" }
Performance in particles.js library
Question: Hey I'm trying to optimise particles.js library from Vincent Garrau in order to get 60fps @ 500 particles ~. I've already refactored the code to use a quad tree instead but it does not seem to be enough. In fact, I saw no improvement using the quad tree. Maybe I implemented it incorrectly ? What I'm doing ? Generating n number of particles For each frame : reset the quad tree. For each particle, move it a little and insert it into the tree. Then for each particles, check through a tree query if other particles are close. If there are, draw a line between them. Render the canvas. Most code below. Full codepen here https://codepen.io/audrenbdb/pen/NWWVpmQ function update () { let particle let ms = 0 quadTree.clear() for (let i = 0, l = particlesList.length; i < l; i++) { particle = particlesList[i] ms = speed / 2 particle.x += particle.vx * ms particle.y += particle.vy * ms let new_pos = bounce ? { x_left: size, x_right: canvas.width, y_top: size, y_bottom: canvas.height } : { x_left: -size, x_right: canvas.width + size, y_top: -size, y_bottom: canvas.height + size } if (particle.x - size > canvas.width) { particle.x = new_pos.x_left particle.y = Math.random() * canvas.height } else if (particle.x + size < 0) { particle.x = new_pos.x_right particle.y = Math.random() * canvas.height } if (particle.y - size > canvas.height) { particle.y = new_pos.y_top particle.x = Math.random() * canvas.width } else if (particle.y + size < 0) { particle.y = new_pos.y_bottom particle.x = Math.random() * canvas.width } if (bounce) { if (particle.x + size > canvas.width) particle.vx = -particle.vx else if (particle.x - size < 0) particle.vx = -particle.vx if (particle.y + size > canvas.height) particle.vy = -particle.vy else if (particle.y - size < 0) particle.vy = -particle.vy } if (interaction.status === 'mousemove') { repulse(particle) } draw(particle) particle.circle.x = particle.x particle.circle.y = particle.y particle.circle.r = linkDistance quadTree.insert(particle) } 
let explored = [] var i var j for (i = 0; i < particlesList.length; i++) { let links = quadTree.query(particlesList[i].circle) for (j = 0; j < links.length; j++) { if (links[j] !== particlesList[i] && !explored.includes(links[j])) { linkParticles(particlesList[i], links[j]) } } explored.push(particlesList[i]) } } function repulse (particle) { const dx_mouse = particle.x - interaction.pos_x const dy_mouse = particle.y - interaction.pos_y const dist_mouse = Math.sqrt(Math.pow(dx_mouse, 2) + Math.pow(dy_mouse, 2)) const velocity = 100 const repulseFactor = Math.min( Math.max( (1 / repulseDistance) * (-1 * Math.pow(dist_mouse / repulseDistance, 2) + 1) * repulseDistance * velocity, 0 ), 50 ) let posX = particle.x + (dx_mouse / dist_mouse) * repulseFactor let posY = particle.y + (dy_mouse / dist_mouse) * repulseFactor if (bounce) { if (posX - size > 0 && posX + size < canvas.width) particle.x = posX if (posY - size > 0 && posY + size < canvas.height) particle.y = posY } else { particle.x = posX particle.y = posY } } function createParticle () { let x = Math.random() * canvas.width let y = Math.random() * canvas.height const vx = Math.random() - 0.5 const vy = Math.random() - 0.5 if (x > canvas.width - size * 2) x -= size else if (x < size * 2) x += size if (y > canvas.height - size * 2) y -= size else if (y < size * 2) y += size let particle = { x: x, y: y, vx: vx, vy: vy, circle: new Circle(x, y, size) } return particle } function setCanvasSize () { canvas.height = canvas.offsetHeight canvas.width = canvas.offsetWidth boundary = new Rectangle( canvas.width / 2, canvas.height / 2, canvas.width, canvas.height ) quadTree = new QuadTree(boundary, 4) context = canvas.getContext('2d') context.fillRect(0, 0, canvas.width, canvas.height) context.fillStyle = `rgba(${particleRGB},1)`; } function linkParticles (particle1, particle2) { let opacityValue = 1 const dist = Math.sqrt( Math.pow(particle1.x - particle2.x, 2) + Math.pow(particle1.y - particle2.y, 2) ) opacityValue = 1 - 
dist / (.7 * linkDistance) context.strokeStyle = `rgba(${linkRGB}, ${opacityValue})` context.lineWidth = linkWidth context.beginPath() context.moveTo(particle1.x, particle1.y) context.lineTo(particle2.x, particle2.y) context.stroke() context.closePath() } function draw (particle) { context.beginPath() context.arc( Math.floor(particle.x), Math.floor(particle.y), size, 0, Math.PI * 2, false ) context.closePath() context.fill() } function animate () { context.clearRect(0, 0, canvas.width, canvas.height) update() requestAnimationFrame(animate) } Quad tree code class Circle { constructor (x, y, r) { this.x = x this.y = y this.r = r } contains (point) { let d = Math.pow(point.x - this.x, 2) + Math.pow(point.y - this.y, 2) return d <= this.r * this.r } intersects (range) { let xDyst = Math.abs(range.x - this.x) let yDist = Math.abs(range.y - this.y) let r = this.r let w = range.w let h = range.h let edges = Math.pow(xDist - w, 2) + Math.pow(yDist - h, 2) if (xDist > r + w || yDist > r + h) return false if (xDist <= w || yDist <= h) return true return edges <= this.r * this.r } } class Rectangle { constructor (x, y, w, h) { this.x = x this.y = y this.w = w this.h = h } contains (point) { return ( point.x >= this.x - this.w && point.x <= this.x + this.w && point.y >= this.y - this.h && point.y <= this.y + this.h ) } intersects (range) { return !( range.x - range.w > this.x + this.w || range.x + range.w < this.x - this.w || range.y - range.h > this.y + this.h || range.y + range.h < this.y - this.h ) } } class QuadTree { constructor (boundary, capacity) { this.boundary = boundary this.capacity = capacity this.points = [] this.divided = false } insert (point) { if (!this.boundary.contains(point)) return false if (this.points.length < this.capacity) { this.points.push(point) return true } else { if (!this.divided) { this.subdivide() this.divided = true } if (this.northEast.insert(point)) return true else if (this.northWest.insert(point)) return true else if 
(this.southEast.insert(point)) return true else if (this.southWest.insert(point)) return true } } subdivide () { let x = this.boundary.x let y = this.boundary.y let w = this.boundary.w let h = this.boundary.h let ne = new Rectangle(x + w / 2, y - h / 2, w / 2, h / 2) let nw = new Rectangle(x - w / 2, y - h / 2, w / 2, h / 2) let se = new Rectangle(x + w / 2, y + h / 2, w / 2, h / 2) let sw = new Rectangle(x - w / 2, y + h / 2, w / 2, h / 2) this.northWest = new QuadTree(ne, this.capacity) this.northEast = new QuadTree(nw, this.capacity) this.southWest = new QuadTree(se, this.capacity) this.southEast = new QuadTree(sw, this.capacity) this.divided = true } query (range, found = []) { if (!this.boundary.intersects(range)) { } else { for (let p of this.points) { if (range.contains(p)) { found.push(p) } } if (this.divided) { this.northEast.query(range, found) this.northWest.query(range, found) this.southEast.query(range, found) this.southWest.query(range, found) } return found } } clear () { if (this.divided) { delete this.northEast delete this.northWest delete this.southEast delete this.southWest } this.points = [] this.divided = false } } Answer: See rewrite example to see how these changes are implemented. Batch render calls to avoid GPU state changes. Reuse arrays, don't delete them or recreate them. In the rewrite the array of found particles is never deleted. Rather than use its length there is a new foundCount that is used to add found points and return the number of points found from QuadTree.query Reuse data structures rather than delete and rebuild. The quadTree uses the close function to close all quads but does not delete the quads so it is much faster next frame as it does not need to rebuild what will be a very similar data structure. Use flags to avoid array searches. The loop where you find links you keep an array of explored items. For each found point you search that array. 
All you need to do is mark the point as explored saving you having to do all those searches. There are many more changes but this was a little longer than I expected and I am way over it now. In the example rewrite I removed the bounce setting (sorry I took it out before I knew what it was for and forgot to put it back.). Your copy has bounce set to false to match the rewrite. UPDATE I am refreshed and looked over the code to notice some issues that I have fixed. Poor timing fixed, Quad name miss matches NE was NW, and the distance check in the quad query was too small so doubled the size. Also added a few more optimizations. Added a depth property to quads. Only starts adding particles to quads 2 levels down or deeper. Quad point arrays do not change length. The quad property pointCount now used rather than the points.length Top three quad layers are never closed. Use the pointCount as early query skip when querying quads. And some other minor changes unrelated to performance Your code Your code copied from the link you provided with a few small mods made to fit the CR snippet and display the render time. The render time in the top left is the mean render time in ~1second blocks running mean time over 10 rendered frames. 
On the laptop I am using this renders in about ~50ms const number = 200 const speed = 6 const linkWidth = 1 const linkDistance = 120 const size = 2 const repulseDistance = 140 const particleRGB = '255, 255, 255' const linkRGB = '255, 255, 255' const bounce = false let interaction = { status: 'mouseleave', pos_x: 0, pos_y: 0 } let particlesList = [] let quadTree let boundary let canvas let context window.onload = () => { canvas = document.getElementById('quad-tree') canvas.style.height = "100%" canvas.style.width = "100%" setCanvasSize() for (let i = 0; i < number; i++) { let particle = createParticle() particlesList.push(particle) quadTree.insert(particle) } window.addEventListener('resize', () => setCanvasSize()) canvas.addEventListener('mousemove', e => { interaction.pos_x = e.offsetX interaction.pos_y = e.offsetY interaction.status = 'mousemove' }) canvas.addEventListener('mouseleave', () => { interaction.pos_x = null interaction.pos_y = null interaction.status = 'mouseleave' }) animate() } function update () { let particle let ms = 0 quadTree.clear() for (let i = 0, l = particlesList.length; i < l; i++) { particle = particlesList[i] ms = speed / 2 particle.x += particle.vx * ms particle.y += particle.vy * ms let new_pos = bounce ? 
{ x_left: size, x_right: canvas.width, y_top: size, y_bottom: canvas.height } : { x_left: -size, x_right: canvas.width + size, y_top: -size, y_bottom: canvas.height + size } if (particle.x - size > canvas.width) { particle.x = new_pos.x_left particle.y = Math.random() * canvas.height } else if (particle.x + size < 0) { particle.x = new_pos.x_right particle.y = Math.random() * canvas.height } if (particle.y - size > canvas.height) { particle.y = new_pos.y_top particle.x = Math.random() * canvas.width } else if (particle.y + size < 0) { particle.y = new_pos.y_bottom particle.x = Math.random() * canvas.width } if (bounce) { if (particle.x + size > canvas.width) particle.vx = -particle.vx else if (particle.x - size < 0) particle.vx = -particle.vx if (particle.y + size > canvas.height) particle.vy = -particle.vy else if (particle.y - size < 0) particle.vy = -particle.vy } if (interaction.status === 'mousemove') { repulse(particle) } draw(particle) particle.circle.x = particle.x particle.circle.y = particle.y particle.circle.r = linkDistance quadTree.insert(particle) } let explored = [] var i var j for (i = 0; i < particlesList.length; i++) { let links = quadTree.query(particlesList[i].circle) for (j = 0; j < links.length; j++) { if (links[j] !== particlesList[i] && !explored.includes(links[j])) { linkParticles(particlesList[i], links[j]) } } explored.push(particlesList[i]) } } function repulse (particle) { const dx_mouse = particle.x - interaction.pos_x const dy_mouse = particle.y - interaction.pos_y const dist_mouse = Math.sqrt(Math.pow(dx_mouse, 2) + Math.pow(dy_mouse, 2)) const velocity = 100 const repulseFactor = Math.min( Math.max( (1 / repulseDistance) * (-1 * Math.pow(dist_mouse / repulseDistance, 2) + 1) * repulseDistance * velocity, 0 ), 50 ) let posX = particle.x + (dx_mouse / dist_mouse) * repulseFactor let posY = particle.y + (dy_mouse / dist_mouse) * repulseFactor if (bounce) { if (posX - size > 0 && posX + size < canvas.width) particle.x = posX if (posY - 
size > 0 && posY + size < canvas.height) particle.y = posY } else { particle.x = posX particle.y = posY } } function createParticle () { let x = Math.random() * canvas.width let y = Math.random() * canvas.height const vx = Math.random() - 0.5 const vy = Math.random() - 0.5 if (x > canvas.width - size * 2) x -= size else if (x < size * 2) x += size if (y > canvas.height - size * 2) y -= size else if (y < size * 2) y += size let particle = { x: x, y: y, vx: vx, vy: vy, circle: new Circle(x, y, size) } return particle } function setCanvasSize () { canvas.height = innerHeight canvas.width = innerWidth boundary = new Rectangle( canvas.width / 2, canvas.height / 2, canvas.width, canvas.height ) quadTree = new QuadTree(boundary, 4) context = canvas.getContext('2d') context.fillRect(0, 0, canvas.width, canvas.height) context.fillStyle = `rgba(${particleRGB},1)`; } function linkParticles (particle1, particle2) { let opacityValue = 1 const dist = Math.sqrt( Math.pow(particle1.x - particle2.x, 2) + Math.pow(particle1.y - particle2.y, 2) ) opacityValue = 1 - dist / 80 context.strokeStyle = `rgba(${linkRGB}, ${opacityValue})` context.lineWidth = linkWidth context.beginPath() context.moveTo(particle1.x, particle1.y) context.lineTo(particle2.x, particle2.y) context.stroke() context.closePath() } function draw (particle) { context.beginPath() context.arc( Math.floor(particle.x), Math.floor(particle.y), size, 0, Math.PI * 2, false ) context.closePath() context.fill() } var times = []; var renderCount = 0 function animate () { context.clearRect(0, 0, canvas.width, canvas.height) const now = performance.now(); update() renderCount += 1; times[renderCount % 10] = (performance.now() - now); const total = times.reduce((total, time) => total + time, 0); info.textContent = "Running ave render time: " + (total / times.length).toFixed(3) + "ms"; requestAnimationFrame(animate) } class Circle { constructor (x, y, r) { this.x = x this.y = y this.r = r } contains (point) { let d = 
Math.pow(point.x - this.x, 2) + Math.pow(point.y - this.y, 2) return d <= this.r * this.r } intersects (range) { let xDist = Math.abs(range.x - this.x) let yDist = Math.abs(range.y - this.y) let r = this.r let w = range.w let h = range.h let edges = Math.pow(xDist - w, 2) + Math.pow(yDist - h, 2) if (xDist > r + w || yDist > r + h) return false if (xDist <= w || yDist <= h) return true return edges <= this.r * this.r } } class Rectangle { constructor (x, y, w, h) { this.x = x this.y = y this.w = w this.h = h } contains (point) { return ( point.x >= this.x - this.w && point.x <= this.x + this.w && point.y >= this.y - this.h && point.y <= this.y + this.h ) } intersects (range) { return !( range.x - range.w > this.x + this.w || range.x + range.w < this.x - this.w || range.y - range.h > this.y + this.h || range.y + range.h < this.y - this.h ) } } class QuadTree { constructor (boundary, capacity) { this.boundary = boundary this.capacity = capacity this.points = [] this.divided = false } insert (point) { if (!this.boundary.contains(point)) return false if (this.points.length < this.capacity) { this.points.push(point) return true } else { if (!this.divided) { this.subdivide() this.divided = true } if (this.northEast.insert(point)) return true else if (this.northWest.insert(point)) return true else if (this.southEast.insert(point)) return true else if (this.southWest.insert(point)) return true } } subdivide () { let x = this.boundary.x let y = this.boundary.y let w = this.boundary.w let h = this.boundary.h let ne = new Rectangle(x + w / 2, y - h / 2, w / 2, h / 2) let nw = new Rectangle(x - w / 2, y - h / 2, w / 2, h / 2) let se = new Rectangle(x + w / 2, y + h / 2, w / 2, h / 2) let sw = new Rectangle(x - w / 2, y + h / 2, w / 2, h / 2) this.northWest = new QuadTree(ne, this.capacity) this.northEast = new QuadTree(nw, this.capacity) this.southWest = new QuadTree(se, this.capacity) this.southEast = new QuadTree(sw, this.capacity) this.divided = true } query (range, found =
[]) { if (!this.boundary.intersects(range)) { } else { for (let p of this.points) { if (range.contains(p)) { found.push(p) } } if (this.divided) { this.northEast.query(range, found) this.northWest.query(range, found) this.southEast.query(range, found) this.southWest.query(range, found) } return found } } clear () { if (this.divided) { delete this.northEast delete this.northWest delete this.southEast delete this.southWest } this.points = [] this.divided = false } } .float { position: absolute; top: 0px; left: 0px; } canvas { background: #59F; } <canvas class="float" id="quad-tree"></canvas> <code class="float" id="info"></code> Rewrite 20 times faster. Again the time is in the top corner to show performance. "use strict"; const number = 200; const speed = 1; const linkWidth = 0.5; const linkDistance = 120; const size = 1.5; var repulseDistance = repulseDistance = Math.min(innerWidth,innerHeight) / 6; const PARTICLES_PER_QUAD = 4; const linkDistance2 = (0.7 * linkDistance) ** 2; const repulseDistance2 = repulseDistance ** 2; var showQuads = false; Math.TAU = Math.PI * 2; const particleStyle = "#FFF"; const linkRGB = "#FFF"; const quadStyle = "#F00"; const candidates = []; var W,H; const mouse = { x: 0, y: 0} const particlesList = []; const links = [[],[],[],[]]; const linkBatchAlphas = [0.2, 0.4, 0.7, 0.9]; const linkBatches = links.length; const linkPool = []; let quadTree; let boundary; const ctx = canvas.getContext("2d"); W = canvas.height = innerHeight; H = canvas.width = innerWidth; canvas.addEventListener('mousemove', e => { mouse.x = e.offsetX; mouse.y = e.offsetY; }) canvas.addEventListener('click', e => { showQuads = !showQuads; }) setTimeout(start, 42); function start(){ quadTree = new QuadTree(); for (let i = 0; i < number; i++) { particlesList.push(new Particle(canvas, size)); } animate(); } var times = []; var renderCount = 0 function animate () { if (canvas.width !== innerWidth || canvas.height !== innerHeight) { setCanvasSize() } ctx.clearRect(0, 0, 
canvas.width, canvas.height); const now = performance.now(); updateParticles(); updateLinks(); renderCount += 1; times[renderCount % 10] = (performance.now() - now); const total = times.reduce((total, time) => total + time, 0); info.textContent = "Running ave render time: " + (total / times.length).toFixed(3) + "ms"; requestAnimationFrame(animate); } function updateParticles() { quadTree.close(); ctx.fillStyle = particleStyle; ctx.beginPath(); for (const particle of particlesList) { particle.update(ctx, true) } ctx.fill(); } function updateLinks() { var i,j, link; if(showQuads) { ctx.strokeStyle = quadStyle; ctx.lineWidth = 1; ctx.beginPath(); } for(const p1 of particlesList) { p1.explored = true; const count = quadTree.query(p1, 0, candidates); for (j = 0; j < count; j++) { const p2 = candidates[j]; if (!p2.explored) { link = linkPool.length ? linkPool.pop() : new Link(); link.init(p1, candidates[j]); links[link.batchId].push(link); } } } if(showQuads) { ctx.stroke() } var alphaIdx = 0; ctx.lineWidth = linkWidth; ctx.strokeStyle = linkRGB; for(const l of links) { ctx.globalAlpha = linkBatchAlphas[alphaIdx++]; ctx.beginPath(); while(l.length) { linkPool.push(l.pop().addPath(ctx)) } ctx.stroke(); } ctx.globalAlpha = 1; } function resetParticles() { quadTree = new QuadTree(); for (const particle of particlesList) { particle.reset(canvas) }; } function setCanvasSize () { W = canvas.height = innerHeight; H = canvas.width = innerWidth; repulseDistance = Math.min(W,H) / 12; resetParticles(); } class Link { constructor() { } init(p1, p2) { // p1,p2 are particles this.p1 = p1; this.p2 = p2; const dx = p1.x - p2.x; const dy = p1.y - p2.y; this.alpha = 1 - (dx * dx + dy * dy) / linkDistance2; this.batchId = this.alpha * linkBatches | 0; this.batchId = this.batchId >= linkBatches ? 
linkBatches : this.batchId; } addPath(ctx) { ctx.moveTo(this.p1.x, this.p1.y); ctx.lineTo(this.p2.x, this.p2.y); return this; } } class Particle { constructor (canvas, r) { this.r = r; this.speedScale = speed / 2; this.reset(canvas, r); } reset(canvas, r = this.r) { const W = canvas.width - r * 2; // Canvas width and height reduced so // that the bounds check is not needed const H = canvas.height - r * 2; this.x = Math.random() * W + r; this.y = Math.random() * H + r; this.vx = Math.random() - 0.5; this.vy = Math.random() - 0.5; this.quad = undefined; this.explored = false; } addPath(ctx) { //ctx.moveTo(this.x + this.r, this.y); //ctx.arc(this.x, this.y, this.r, 0, Math.TAU); ctx.rect(this.x - this.r, this.y - this.r, this.r * 2, this.r * 2); } near(p) { return ((p.x - this.x) ** 2 + (p.y - this.y) ** 2) <= linkDistance2; } intersects(range) { const xd = Math.abs(range.x - this.x); const yd = Math.abs(range.y - this.y); const r = linkDistance; const w = range.w; const h = range.h; if (xd > r + w || yd > r + h) { return false } if (xd <= w || yd <= h) { return true } return ((xd - w) ** 2 + (yd - h) ** 2) <= linkDistance2; } update(ctx, repulse = true) { this.explored = false; const r = this.r; const W = ctx.canvas.width + r; const H = ctx.canvas.height + r; this.x += this.vx * this.speedScale; this.y += this.vy * this.speedScale; if (this.x > W) { this.x = 0; this.y = Math.random() * (H - r); } else if (this.x < -r) { this.x = W - r; this.y = Math.random() * (H - r); } if (this.y > H) { this.y = 0 this.x = Math.random() * (W - r); } else if (this.y < -r) { this.y = H - r; this.x = Math.random() * (W - r); } repulse && this.repulse(); this.addPath(ctx); quadTree.insert(this); this.quad && (this.quad.drawn = false) } repulse() { // I have simplified the math (I did not check as I went so behaviour may vary a little from the original) var dx = this.x - mouse.x; var dy = this.y - mouse.y; const dist = (dx * dx + dy * dy) ** 0.5; var rf = ((1 - (dist / repulseDistance) 
** 2) * 100); rf = (rf < 0 ? 0 : rf > 50 ? 50 : rf) / dist; // ternary is quicker than Math.max(Math.min(rf,50), 0) this.x += dx * rf; this.y += dy * rf; } } class Bounds { constructor(x, y, w, h) { this.init(x, y, w, h) } init(x,y,w,h) { this.x = x; this.y = y; this.w = w; this.h = h; this.left = x - w; this.right = x + w; this.top = y - h; this.bottom = y + h; this.diagonal = (w * w + h * h); } contains(p) { return (p.x >= this.left && p.x <= this.right && p.y >= this.top && p.y <= this.bottom); } near(p) { if (!this.contains(p)) { const dx = p.x - this.x; const dy = p.y - this.y; const dist = (dx * dx + dy * dy) - this.diagonal - linkDistance2; return dist < 0; } return true; } } class QuadTree { constructor(boundary, depth = 0) { this.boundary = boundary || new Bounds(canvas.width / 2,canvas.height / 2,canvas.width / 2 ,canvas.height / 2); this.divided = false; this.points = depth > 1 ? [] : null; this.pointCount = 0 this.drawn = false; this.depth = depth; if(depth === 0) { // BM67 Fix on resize this.subdivide(); this.NE.subdivide(); this.NW.subdivide(); this.SE.subdivide(); this.SW.subdivide(); } } addPath() { // getting ctx from global as this was a last min change const b = this.boundary; ctx.rect(b.left, b.top, b.w * 2, b.h * 2); this.drawn = true; } addToSubQuad(particle) { if (this.NE.insert(particle)) { return true } if (this.NW.insert(particle)) { return true } if (this.SE.insert(particle)) { return true } if (this.SW.insert(particle)) { return true } particle.quad = undefined; } insert(particle) { if (this.depth > 0 && !this.boundary.contains(particle)) { return false } if (this.depth > 1 && this.pointCount < PARTICLES_PER_QUAD) { this.points[this.pointCount++] = particle; particle.quad = this; return true; } if (!this.divided) { this.subdivide() } return this.addToSubQuad(particle); } subdivide() { if (!this.NW) { // if this is undefined we know all 4 are undefined const x = this.boundary.x; const y = this.boundary.y; const w = this.boundary.w / 2; 
const h = this.boundary.h / 2; const depth = this.depth + 1; this.NE = new QuadTree(new Bounds(x + w, y - h, w, h), depth); this.NW = new QuadTree(new Bounds(x - w, y - h, w, h), depth); this.SE = new QuadTree(new Bounds(x + w, y + h, w, h), depth); this.SW = new QuadTree(new Bounds(x - w, y + h, w, h), depth); } else { this.NE.pointCount = 0; this.NW.pointCount = 0; this.SE.pointCount = 0; this.SW.pointCount = 0; } this.divided = true; } query(part, fc, found) { //part = particle fc found count var i = this.pointCount; if (this.depth === 0 || this.boundary.near(part)) { if (this.depth > 1) { showQuads && !this.drawn && this.addPath(); while (i--) { const p = this.points[i]; if (!p.explored && part.near(p)) { found[fc++] = p } } if (this.divided) { fc = this.NE.pointCount ? this.NE.query(part, fc, found) : fc; fc = this.NW.pointCount ? this.NW.query(part, fc, found) : fc; fc = this.SE.pointCount ? this.SE.query(part, fc, found) : fc; fc = this.SW.pointCount ? this.SW.query(part, fc, found) : fc; } } else if(this.divided) { // BM67 Fix on resize fc = this.NE.query(part, fc, found); fc = this.NW.query(part, fc, found); fc = this.SE.query(part, fc, found); fc = this.SW.query(part, fc, found); } } return fc; } close() { if (this.divided) { this.NE.close(); this.NW.close(); this.SE.close(); this.SW.close(); } if (this.depth === 2 && this.divided) { // BM67 Fix on resize this.NE.pointCount = 0; this.NW.pointCount = 0; this.SE.pointCount = 0; this.SW.pointCount = 0; } else if (this.depth > 2) { this.divided = false; } } } .float { position: absolute; top: 0px; left: 0px; } canvas { background: #59F; } <canvas class="float" id="canvas"></canvas> <code class="float" id="info"></code>
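The explored-flag point made at the top of the answer ("mark the point as explored, saving you having to do all those searches") is language-agnostic. A minimal Python sketch of the same idea (my own illustration, names hypothetical, not the code above): stamping each particle with a boolean replaces the O(n) `explored.includes(...)` scan in the original pair loop with an O(1) test, while still visiting each unordered pair exactly once.

```python
# Hypothetical sketch of the explored-flag optimization: instead of scanning
# an "explored" list for every candidate (O(n) per check), stamp each particle
# with a boolean and test it in O(1).
class Particle:
    def __init__(self, name):
        self.name = name
        self.explored = False

def unordered_pairs(particles, neighbours_of):
    """Collect each unordered pair at most once using the explored flag."""
    pairs = []
    for p in particles:
        p.explored = True              # everything up to p is already handled
        for q in neighbours_of(p):
            if not q.explored:         # O(1) flag test instead of a list scan
                pairs.append((p.name, q.name))
    for p in particles:                # reset flags for the next frame
        p.explored = False
    return pairs

parts = [Particle(n) for n in "abc"]
# Toy neighbourhood: every particle is "near" every other one.
pairs = unordered_pairs(parts, lambda p: [q for q in parts if q is not p])
```

The per-frame reset is the price of the flag; in the rewrite above the same reset happens inside `Particle.update`, so no extra pass is needed.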
{ "domain": "codereview.stackexchange", "id": 36688, "tags": "javascript, performance, canvas" }
Computation of pH when an acid and base are mixed in solution
Question: I'm doing a basic chemistry course, and we are currently learning how to compute $\text{pH}$ from the acid dissociation constant (using $\left[\text{H}^{+}_{(\text{aq})}\right]=\sqrt{K_{a}\left[\text{HA}_{(\text{aq})}\right]}$) along with computing the $\text{pH}$ of strong bases by assuming full dissociation into $\text{OH}^{-}$ ions, and then using the ionic product of water to calculate the concentration of protons. The question I have been given is: Calculate the pH of the solution obtained when $14.9\text{ cm}^{3}$ of $0.100\text{ mol dm}^{-3}$ sodium hydroxide solution has been added to a $25.0\text{ cm}^{3}$ solution of methanoic acid of concentration $0.100\text{ mol dm}^{-3}$; $K_{a}=1.60\times 10^{-4}\text{ mol dm}^{-3}$ I'm not sure how I should go about solving this problem; I can calculate concentrations of hydrogen and hydroxide ions for each of the solutions but I'm unsure how to combine them (as presumably some of the $\text{OH}^{-}$ ions will react with the $\text{H}^{+}$ ions to form $\text{H}_{2}\text{O}$?) Answer: You are headed in the right direction. Pretend that the $\ce{H+}$ ions and the $\ce{OH-}$ ions neutralize each other 1 for 1. That will leave you with only one of those ions left. The water ionization won't affect your problem as that ionization is suppressed by the excess ion present in the solution. Now you should be able to solve the problem.
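The hint can be pushed through to a number. A minimal Python sketch of the resulting buffer arithmetic (my own illustration, not part of the original answer; it assumes 1:1 neutralization leaves a methanoic acid / methanoate buffer and that the dissociated $[\ce{H+}]$ is negligible next to the buffer amounts):

```python
import math

# Values from the question.
Ka = 1.60e-4                  # mol dm^-3
n_base = 0.0149 * 0.100       # mol OH- added (14.9 cm^3 of 0.100 mol dm^-3)
n_acid = 0.0250 * 0.100       # mol HCOOH initially present
n_HA = n_acid - n_base        # acid remaining after 1:1 neutralization
n_A = n_base                  # methanoate formed
# The common volume cancels in the ratio, so moles stand in for concentrations.
H = Ka * n_HA / n_A
pH = -math.log10(H)
```

This gives a pH a little below 4, consistent with an acid whose $\mathrm{p}K_a \approx 3.8$ partially neutralized past its half-equivalence point.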
{ "domain": "chemistry.stackexchange", "id": 9518, "tags": "acid-base, ph" }
Why are antiparticles associated with spin-flipped spinors?
Question: In section 2.2 of Elvang and Huang's Scattering Amplitudes in Gauge Theory and Gravity (http://arXiv.org/abs/1308.1697), beneath equation (2.9), it is mentioned that $u^{\pm}=v^{\mp}$, where $u^\pm$ are massless spinors corresponding to helicity eigenstates for particles, and $v^\mp$ are those for antiparticles. Why is this true in general? Or is it a just a convention for associating certain antiparticle spinors with particle spinors? From equation (3.136) in section 3.6 of Peskin and Schroeder, we have$$v^s(p)=\begin{pmatrix}\sqrt{p\cdot\sigma}\xi^{-s}\\-\sqrt{p\cdot\bar{\sigma}}\xi^{-s}\end{pmatrix},$$ which seems to suggest that it is just a matter of choosing some basis of two-component spinors $\xi^{-s}$, which in this case happen to have opposite spin from those used in $u^{s}(p)$. With this choice of $v^s(p)$, it is straightforward to get $u^{\pm}=v^{\mp}$ in the massless limit. I am sure that there is some physical justification for this but what is it? Elvang and Huang suggests using crossing symmetry but the choice of relating s and t channel diagrams seems as arbitrary as any convention. Answer: Let us say you decompose the Dirac field like the Klein-Gordon field, i.e. $$\Psi(x)=\int[dk]\left(u(k)e^{-ikx}+v(k)e^{ikx}\right),$$ with the Lorentz-invariant integration measure $[dk]:=d^4k/(2\pi)^4\times2\pi\delta(k_0-\omega(\mathbf{k}))$, and define your positive- and negative-frequency solutions that way as $u,v$ respectively. That means, if you interpret the solution $u$ as a Dirac fermion travelling in the direction $\mathbf{k}$ with positive frequency $k_0$, then $v$ is a Dirac fermion travelling into the direction $-\mathbf{k}$ with negative frequency $-k_0$. That is equivalent to saying that its Dirac-conjugate in $\int[dk]\overline{v}(k)e^{-ikx}$ is a Dirac-antifermion travelling in the direction $\mathbf{k}$ with positive frequency $k_0$. Thus, while $u(k)$ is your Dirac-fermion, $\overline{v}(k)$ is your Dirac-antifermion. 
That interpretation makes sense since if you look at $e^+e^-$ annihilation, the incoming positron is described by the Dirac-conjugate $\overline{v}(k)$, and not $v(k)$. So far so good, now we can look how the chirality projectors act on the Dirac-spinors. We define $u_{L/R}:=\frac{1}{2}(1\mp\gamma_5)u$ for the Dirac-fermion. Since $\overline{v}$ is the Dirac-antifermion, we should analogously define $$\overline{v}_{L/R}:=\overline{v}\frac{1}{2}(1\mp\gamma_5)=v^\dagger\gamma^0\frac{1}{2}(1\mp\gamma_5)=v^\dagger\frac{1}{2}(1\pm\gamma_5)\gamma^0=\left(v_{R/L}\right)^\dagger\gamma^0=\overline{v_{R/L}}.$$ We see that the left-chiral Dirac-antifermion $\overline{v}_L=\overline{v_R}$ is a Dirac-conjugated right-chiral $v_R$. Note that the bars over the spinor have different lengths. Now we do the last step and ditch the mass. Both $u$ and $v$ now fulfil the same Dirac equation $k_\mu\gamma^\mu u=0$, $k_\mu\gamma^\mu v=0$. The Dirac-spinors decompose into two decoupled Weyl-spinors $\hat{u}_{L/R}$, such that in the chiral representation $u_{L/R}=(\hat{u}_L,\hat{u}_R)^T$. These Weyl-spinors fulfil the Weyl equations $$0=k_\mu\overline{\sigma}^\mu \hat{u}_L=k_0\hat{u}_L+k_i\sigma^i\hat{u}_L\leftrightarrow\frac{k_i\sigma^i}{|\mathbf{k}|}\hat{u}_L=-\hat{u}_L,$$ $$0=k_\mu\sigma^\mu \hat{u}_R=k_0\hat{u}_R-k_i\sigma^i\hat{u}_R\leftrightarrow\frac{k_i\sigma^i}{|\mathbf{k}|}\hat{u}_R=\hat{u}_R.$$ Here, we have used that $k_0=|\mathbf{k}|$ in the massless case to show that the chiral eigenstates are also helicity eigenstates for $m=0$: The left-chiral state has negative helicity, the right-chiral has positive helicity. Since both $u$ and $v$ fulfill the same Dirac equation, the same two equations hold for the Weyl-spinors $\hat{v}_L,\hat{v}_R$. But as we should carefully remember, $v$ does not describe the antifermion - its Dirac-conjugated $\overline{v}$ does! 
In the chiral representation we have $\overline{v}=(\hat{v}^\dagger_L,\hat{v}^\dagger_R)$ and thus we have the Weyl equations $$0=\hat{v}^\dagger_Rk_\mu\overline{\sigma}^\mu\leftrightarrow \hat{v}^\dagger_R\frac{k_i\sigma^i}{|\mathbf{k}|}=-\hat{v}^\dagger_R,$$ $$0=\hat{v}^\dagger_Lk_\mu\sigma^\mu\leftrightarrow \hat{v}^\dagger_L\frac{k_i\sigma^i}{|\mathbf{k}|}=\hat{v}^\dagger_L.$$ As we manifestly see, the left-chiral antifermion $\hat{v}^\dagger_L$ has positive helicity, unlike the left-chiral fermion $\hat{u}_L$. And the converse is true for the opposite chirality. I must admit, I had to ponder a lot about this. The way I see it is that it is a typical case of 'be mindful about the maths in QFT and how you interpret it'. Indeed, the idea is that one must be careful that both the fermion and the antifermion field should propagate into the same direction. If you interpret the exponential $e^{-i(\omega t-\mathbf{kx})}$ as a planar wave moving in direction $\mathbf{k}$, then $u$ is your fermion field and $\overline{v}$ is your antifermion field travelling in the same direction. From then on it is clear that $\overline{v}$ will dictate the antifermion's properties, not $v$. And the same then extends to the massless limit where the Dirac-spinors $u,\overline{v}$ are replaced by the Weyl-spinors $\hat{u}_{L/R},\hat{v}^\dagger_{L/R}$.
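The chirality–helicity link used in the massless limit is easy to check numerically: for a massless momentum, the zero mode of $k_\mu\bar\sigma^\mu = k^0 + \vec k\cdot\vec\sigma$ is exactly the negative-helicity eigenvector of $\vec k\cdot\vec\sigma/|\vec k|$. A numpy sketch (my own illustration, not part of the answer):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

k = np.array([0.3, -0.4, 1.2])            # arbitrary spatial momentum
E = np.linalg.norm(k)                      # massless on-shell: k^0 = |k|
k_sigma = k[0] * sx + k[1] * sy + k[2] * sz

# Helicity operator k.sigma/|k| is Hermitian with eigenvalues -1 and +1.
vals, vecs = np.linalg.eigh(k_sigma / E)
xi_minus = vecs[:, 0]                      # eigh sorts ascending: first is -1

# Weyl equation residual for the left-handed spinor: (k^0 + k.sigma) xi = 0
residual = np.linalg.norm((E * np.eye(2) + k_sigma) @ xi_minus)
```

The residual vanishes to machine precision, confirming that the left-chiral Weyl equation forces negative helicity for the $u$-type solution, as derived above.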
{ "domain": "physics.stackexchange", "id": 76574, "tags": "antimatter, dirac-equation, spinors, chirality, helicity" }
Does high pressure reverse reaction between zinc and sulfuric acid?
Question: When zinc is added to sulfuric acid, it undergoes a chemical reaction that generates hydrogen gas and zinc sulfate. Can this reaction be reversed by applying pressure to the products converting them back into zinc and sulfuric acid? $$\ce{Zn(s) + H2SO4(aq) <=> ZnSO4(aq) + H2(g)}$$ Are there any sources claiming this reaction can be controlled by pressure at all? Answer: As a very rough estimation of pressure when the hydrogen redox potential equals zinc standard redox potential, we can use the extrapolation of the Nernst equation: \begin{align} E^\circ_\ce{Zn/Zn^2+} &= E^\circ_\ce{H2/H+} + \frac{\pu{0.059 V}}{2} \log\left(\frac{[\ce{H+}]^2}{p_{\ce{H2}}}\right) \tag{1}\\ \log(p_\ce{H2}) &= \left(E^\circ_\ce{H2/H+} - E^\circ_\ce{Zn/Zn^2+}\right) \frac{2}{\pu{0.059 V}} \tag{2}\\ p_\ce{H2} &= 10^{\left(E^\circ_\ce{H2/H+} - E^\circ_\ce{Zn/Zn^2+}\right) \frac{2}{\pu{0.059 V}}} \tag{3}\\ &= \pu{10^{\frac{2\times\pu{0.76 V}}{\pu{0.059 V}}} atm} \\ &\approx \pu{5.8E25 atm} \end{align} This extrapolating estimation is not realistic, being far out of validity scope of the Nernst equation. Even pressure many orders lower would make the system very different, everything solid, including hydrogen. $\pu{E25 bar}$ would cause an universal nuclear fusion. As Loong has noted, the Solar core pressure is $\pu{2.5E11 bar}$. If protons had been fusable directly like deuterium is… But it gives the clear message the pressure cannot help in reaction reversal.
{ "domain": "chemistry.stackexchange", "id": 17299, "tags": "inorganic-chemistry, physical-chemistry, reference-request, pressure, reaction-control" }
String replace templating utility
Question: I am new to Python and I am writing my first utility as a way to learn about strings, files, etc. I am writing a simple utility using string replacement to batch output HTML files. The program takes as inputs a CSV file and an HTML template file and will output an HTML file for each data row in the CSV file. CSV Input File: test1.csv The CSV file, which has a header row, contains some catalog data, one product per row, like below: stockID,color,material,url 340,Blue and magenta,80% Wool / 20% Acrylic,http://placehold.it/400 275,Purple,100% Cotton,http://placehold.it/600 318,Blue,100% Polyester,http://placehold.it/400x600 HTML Template Input File: testTemplate.htm The HTML template file is simply a copy of the desired output with string replace tags %s placed at the appropriate locations: <h1>Stock ID: %s</h1> <ul> <li>%s</li> <li>%s</li> </ul> <img src='%s'> The Python is pretty straightforward I think. I open the template file and store it as a string. I then open the CSV file using the csv.DictReader() class. I then iterate through the rows of the CSV, build the file names and then write the output files using string replacement on the template string using the dictionary keys. import csv # Open template file and pass string to 'data'. Should be in HTML format except with string replace tags. with open('testTemplate.htm', 'r') as myTemplate: data = myTemplate.read() # print template for visual cue. print('Template passed:\n' + '-'*30 +'\n' + data) print('-'*30) # open CSV file that contains the data and store to a dictionary 'inputFile'. with open('test1.csv') as csvfile: inputFile = csv.DictReader(csvfile) x = 0 # counter to display file count for row in inputFile: # create filenames for the output HTML files filename = 'listing'+row['stockID']+'.htm' # print filenames for visual cue. print(filename) x = x + 1 # create output HTML file.
with open(filename, 'w') as outputFile: # run string replace on the template file using items from the data dictionary # HELP--> this is where I get nervous because chaos will reign if the tags get mixed up # HELP--> is there a way to add identifiers to the tags? like %s1 =row['stockID'], %s2=row['color'] ... ??? outputFile.write(data %(row['stockID'], row['color'], row['material'], row['url'])) # print the number of files created as a cue program has finished. print('-'*30 +'\n' + str(x) + ' files created.') The program works as expected with the test files I have been using (which is why I am posting here and not on SO). My concern is that it seems pretty fragile. In 'production' the CSV file will contain many more columns (around 30-40) and the HTML will be much more complex, so the chances of one of the tags in the string replace getting mixed seems pretty high. is there a way to add identifiers to the tags? like %s1 =row['stockID'], %s2=row['color'] ...? that could be placed either in the template file or in the write() statement (or both)? Any method alternatives or improvements I could learn would be great (note I am well aware of the Makos and Mustaches of the world and plan to learn a couple of template packages soon.) Answer: Python has a number of templating options, but the simplest to start is probably the string.Template one described in https://docs.python.org/3/library/string.html#template-strings This supports targets such as $StockId and is used as below >>> from string import Template >>> s = Template('$who likes $what') >>> s.substitute(who='tim', what='kung pao') 'tim likes kung pao' If you need more output options, look at the string.format functionality, but this is probably best for starting with.
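Applied to the question's data, the `string.Template` suggestion looks like this (a sketch of my own; the template text mirrors the question's testTemplate.htm, with named `$` fields replacing the positional `%s` tags, and a `csv.DictReader` row, which is already a mapping, passed directly):

```python
from string import Template

template = Template(
    "<h1>Stock ID: $stockID</h1>\n"
    "<ul>\n"
    "    <li>$color</li>\n"
    "    <li>$material</li>\n"
    "</ul>\n"
    "<img src='$url'>"
)
# Stand-in for one row from csv.DictReader over test1.csv.
row = {"stockID": "340", "color": "Blue and magenta",
       "material": "80% Wool / 20% Acrylic", "url": "http://placehold.it/400"}
html = template.substitute(row)
```

Because the fields are named, reordering columns in the CSV cannot silently scramble the output, and literal `%` characters in the data (like "80% Wool") need no escaping.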
{ "domain": "codereview.stackexchange", "id": 12431, "tags": "python, beginner, html" }
Problem in resolving pursuit curve with Simulink
Question: Reading this document I had difficulty solving a differential equation with Simulink. I report the problem in case the link expires. Given a system like the one in the figure, we want to describe the trajectory that the fox will follow to capture the rabbit. The position of the rabbit is given by the parametric function $R(t)$, and the fox position is given by $F(t)= \langle x(t), y(t)\rangle$. $k$ is the ratio of the rabbit’s and the fox’s (constant) speeds. The differential equation describing the general pursuit curve is: $$ F'=k\lVert R'\rVert \frac{R-F}{\lVert R-F\rVert} $$ Without loss of generality, we may assume that the rabbit runs up the y-axis, and parameterize its path by $R(t) = \langle 0,rt \rangle$ . Let the fox’s initial position be given by $F (0) = \langle c,0 \rangle$ where c is a positive constant. The vector differential equation for the fox simplifies to the system: $$\begin{cases} x'=- \frac{krx}{\sqrt{x^2+(rt-y)^2}} \\ y'=\frac{kr(rt-y)}{\sqrt{x^2+(rt-y)^2}} \end{cases} $$ Now I tried to solve this system of equations with Simulink, but the XY plot of the fox is not what I expected. This is my Simulink schema. If I understand correctly, the trajectory of the rabbit is the line I drew in red with Paint. If so, shouldn't the fox's trajectory look like the green line? Why do I get the blue line instead? Do the x and y represent the fox's path? Sorry for this trivial / stupid question but I haven't found much on the internet... Answer: Apparently my Simulink model was wrong. This seems to be the correct implementation. Sorry for the stupid question.
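The system can also be sanity-checked without Simulink by integrating it directly; with k > 1 the fox should close on the rabbit and capture it. A minimal forward-Euler sketch (my own illustration; the step size and k = 2 are arbitrary choices):

```python
import math

r, k, c = 1.0, 2.0, 1.0        # rabbit speed, speed ratio, fox start (c, 0)
x, y, t, dt = c, 0.0, 0.0, 1e-4
captured = False
while t < 2.0:
    d = math.hypot(x, r * t - y)        # fox-to-rabbit distance
    if d < 1e-2:                        # close enough: call it a capture
        captured = True
        break
    x += -k * r * x / d * dt            # x' = -krx / sqrt(x^2 + (rt-y)^2)
    y += k * r * (r * t - y) / d * dt   # y' = kr(rt-y) / sqrt(...)
    t += dt
```

For these values the analytic capture time is $ck/(r(k^2-1)) = 2/3$, and the Euler trajectory hits the capture threshold shortly before that; plotting (x, y) along the way gives the curved green-line shape the question expects.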
{ "domain": "physics.stackexchange", "id": 84224, "tags": "kinematics, computational-physics, simulations, differential-equations" }
How to understand the term $\frac{1}{-2E(\vec{p})} e^{-ip(x-y)}$ of Klein-Gordon propagator in Peskin & Schroeder's book?
Question: I am reading Peskin & Schroeder's book on Chapter 2. I have a question about how to get the term $\frac{1}{-2E(\vec{p})} e^{-ip(x-y)}$. The original equation for propagator is $$ \langle 0 | [\phi(x), \phi(y)] | 0 \rangle = \int \frac{d^{3}p}{(2\pi)^3} \frac{1}{2E(\vec{p})} [ e^{-ip(x-y)} - e^{ip(x-y)} ].\tag{2.54} $$ We separate this equation, namely, $$ \langle 0 | [\phi(x), \phi(y)] | 0 \rangle = \int \frac{d^{3}p}{(2\pi)^3} \frac{1}{2E(\vec{p})} e^{-ip(x-y)} + \int \frac{d^{3}p}{(2\pi)^3} \frac{1}{-2E(\vec{p})} e^{ip(x-y)} $$ Peskin & Schroeder's Book says that the energy $p^{0}=-E(\vec{p})$ in the second term is less than 0. I tried to expand $e^{ip(x-y)}$ in the following: $$ e^{ip(x-y)} = e^{i(p^{\mu}(x-y)_{\mu})} = e^{i(Et-\vec{p}\cdot(\vec{x}-\vec{y}))} = e^{i[-(-Et)-\vec{p}\cdot(\vec{x}-\vec{y})]}=e^{-i[p^{0}t+\vec{p}\cdot(\vec{x}-\vec{y})]} $$ However, it seems that I can not write $e^{-i[p^{0}t+\vec{p}\cdot(\vec{x}-\vec{y})]}$ as $e^{-ip(x-y)}$, where $p^{0}=-E(\vec{p})$. Where is the problem? Answer: Yeah, basically as hft said in his comment, the main point is the integrals are only in the space/momentum components and not in time/energy components. And because you are integrating over all momenta $\vec{p}$, for every $\vec{x}-\vec{y}$ you pass, you will also pass through a $\vec{y}-\vec{x}$. 
Meaning that effectively, and only inside the integral: \begin{equation} \vec{x}-\vec{y}=\vec{y}-\vec{x} \end{equation} and because this only happens in space components the time component of $p$ which is $E$ will gain a minus sign, telling you that you are dealing with "negative energy" particles (antiparticles): $$ \langle 0 | [\phi(x), \phi(y)] | 0 \rangle = \int \frac{d^{3}p}{(2\pi)^3} \frac{1}{2E(\vec{p})} e^{-ip(x-y)} \Big|_{p_0=E} + \int \frac{d^{3}p}{(2\pi)^3} \frac{1}{2(-E(\vec{p}))} e^{-ip(x-y)} \Big|_{p_0=-E} $$ Notice you could also have absorbed the minus in the time component of $x$ instead, telling you that those antiparticles can be thought as normal particles kind of travelling back in time: $$ \langle 0 | [\phi(x), \phi(y)] | 0 \rangle = \int \frac{d^{3}p}{(2\pi)^3} \frac{1}{2E(\vec{p})} e^{-ip(x-y)} \Big|_{p_0=E, \ t_+} - \int \frac{d^{3}p}{(2\pi)^3} \frac{1}{2E(\vec{p})} e^{-ip(x-y)} \Big|_{p_0=E, \ t_-} $$ (This last paragraph has to be taken with caution, since in reality is more complex than that)
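The relabelling $\vec p \to -\vec p$ used above works because the measure and the weight $1/2E(\vec p)$ are even in $\vec p$. A one-dimensional numerical sketch of that statement (my own toy illustration, not part of the answer):

```python
import cmath
import math

m, a = 1.0, 0.7                              # toy mass and separation x - y
ps = [0.01 * n for n in range(-300, 301)]    # symmetric momentum grid

def weight(p):
    return 1.0 / (2.0 * math.sqrt(p * p + m * m))   # even function 1/2E(p)

# Relabelling p -> -p maps one sum onto the other term by term,
# so flipping the sign of the exponent leaves the sum unchanged.
plus = sum(weight(p) * cmath.exp(1j * p * a) for p in ps)
minus = sum(weight(p) * cmath.exp(-1j * p * a) for p in ps)
```

The two sums agree (and are real), which is exactly the manipulation that turns $e^{ip(x-y)}$ into $e^{-ip(x-y)}$ with $p^0 = -E(\vec p)$ inside the momentum integral.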
{ "domain": "physics.stackexchange", "id": 89862, "tags": "quantum-field-theory, fourier-transform, propagator, klein-gordon-equation" }
Reversi (Othello) in Python
Question: I'm a beginner-intermediate programmer who's mostly used Java. I taught myself Python and made this engine for playing Reversi. Tips on Pythonic ways to accomplish tasks and/or general advice appreciated! from ast import literal_eval as eval board_size = 8 BLACK = '\u26AB' WHITE = '\u26AA' EMPTY = '\u2B1c' offsets = ((0,1),(1,1),(1,0),(1,-1),(0,-1),(-1,-1),(-1,0),(-1,1)) def inverse(piece): return BLACK if piece is WHITE else WHITE def main(): if input('Display instructions? [Y/N]: ').lower() == 'y': instructions() board = create_board() piece = BLACK while has_valid_move(board, piece): game_loop(board, piece) if has_valid_move(board, inverse(piece)): piece = inverse(piece) print_board(board) black, white = 0,0 for row in board: for token in row: if token is WHITE: white += 1 if token is BLACK: black += 1 if black == white: print("It's a tie!") else: print() print('{token} is the winner!' % (BLACK if black>white else WHITE)) return def instructions(): print(''' Reversi, also known as Othello, is a strategy board game for two players. The two players alternate placing Black and White tokens on the board, respectively, and the winner is whomever has the most tokens of their color on the board when no legal moves remain.'''\ ) input('\nPress Enter to continue...') print(''' A legal move is one that causes other pieces to flip between White and Black. A move causes pieces of the opposite color to flip when placed down if and only if there is an unbroken line between it and another piece of the same color consisting of pieces of the opposite color.'''\ ) input('\nPress Enter to continue...') print(''' To play this version, when prompted, enter the cartesian coordinates of the location that you would like to place the piece. 
The upper left corner of the board is 0,0 and the lower right is {0},{0}.'''.format(board_size-1)\ ) input('\nPress Enter to continue...') def create_board(): board = [[EMPTY for x in range(board_size)] for x in range(board_size)] half = board_size//2 board[half-1][half-1] = WHITE board[half][half] = WHITE board[half-1][half] = BLACK board[half][half-1] = BLACK return board def print_board(board): for row in range(len(board)): print(*board[row], sep='') return def game_loop(board, piece): print() print_board(board) while(True): try: move = eval(input('Place %s where? ' % piece)) move = tuple(reversed(move)) # x,y -> y,x (easier to use) if is_valid_move(board, piece, move): place_piece(board, piece, move) return else: raise AssertionError except (TypeError, ValueError, IndexError, SyntaxError, AssertionError): # ------------------bad input------------------ ---bad move--- print('Invalid move. Try again.') def is_valid_move(board, piece, move): if board[move[0]][move[1]] is not EMPTY: return False for offset in offsets: check = [move[0]+offset[0], move[1]+offset[1]] while 0<=check[0]<board_size-1 and 0<=check[1]<board_size-1 and \ board[check[0]][check[1]] is inverse(piece): check[0] += offset[0] check[1] += offset[1] if board[check[0]][check[1]] is piece: return True return False def place_piece(board, piece, move): board[move[0]][move[1]] = piece for offset in offsets: check = [move[0]+offset[0], move[1]+offset[1]] while 0<=check[0]<board_size and 0<=check[1]<board_size: if board[check[0]][check[1]] is EMPTY: break if board[check[0]][check[1]] is piece: flip(board, piece, move, offset) break check[0] += offset[0] check[1] += offset[1] return def flip(board, piece, move, offset): check = [move[0]+offset[0], move[1]+offset[1]] while(board[check[0]][check[1]] is inverse(piece)): board[check[0]][check[1]] = piece check[0] += offset[0] check[1] += offset[1] return def has_valid_move(board, piece): for y in range(board_size): for x in range(board_size): if 
is_valid_move(board, piece, (y,x)): return True return False if __name__ == '__main__': main() There aren't that many comments as the code should be rather self-documenting. Answer: from ast import literal_eval as eval It's fabulous that you are using ast.literal_eval() instead of eval(). You shouldn't be naming it eval, however, because you are shadowing the built-in function. If sometime later you decide that you want to use eval() for some reason, you might be surprised if it doesn't work how you expect it to. board_size = 8 BLACK = '\u26AB' WHITE = '\u26AA' EMPTY = '\u2B1c' offsets = ((0,1),(1,1),(1,0),(1,-1),(0,-1),(-1,-1),(-1,0),(-1,1)) You use constants instead of magic values! board_size and offsets are both constants and should be ALLCAPS, but you aren't using magic values! def main(): That's good. Your main code is in a function instead of at module level. if input('Display instructions? [Y/N]: ').lower() == 'y': What if the user typed yes? It would accept it, but treat it as no. You should validate the input before continuing. You might want to create a function that keeps asking until y or n is given. You might get more fancy and go with yes, yeah, uh huh, yeah, man!, ... whatever you want to do. print('{token} is the winner!' % (BLACK if black>white else WHITE)) It looks like you're a little confused about what you're gonna do. That throws a TypeError. Either use '{token} ...'.format(token=...) or '%s...' % ..., but don't try to merge them. I prefer .format(), but you can do what you want. return It's useless. Take it out. You're already at the end of a function that doesn't return anything above, so it will return None implicitly. There is no need to add the extra line. ...'''\ ) Unclosed parentheses are also line-continuers. You don't need the backslash. You can remove all of those backslashes in that function. By the way, I'm very pleased to see that you aren't using magic values. You use board_size instead of writing in the actual numbers. 
board = [[EMPTY for x in range(board_size)] for x in range(board_size)] It's great that you are using list comprehensions. I would use _ instead of x just to make it a little more obvious that it isn't being used. def print_board(board): for row in range(len(board)): print(*board[row], sep='') return You aren't using row for anything except as an index for board. You should just iterate through board in the first place: for row in board: print(*row, sep='') My second problem is again the return. while(True): The parentheses are useless. Just do while True: You can change that in several places. I would highly recommend classes as pointed out by @kushj. Even if you changed just that, your code would look very good.
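One way to implement the reviewer's suggestion of a prompt that keeps asking until it gets a recognizable yes/no answer (a sketch; the accepted spellings are my own choice, not something the review specifies):

```python
def ask_yes_no(prompt):
    """Keep asking until the reply is a recognizable yes or no (case-insensitive)."""
    while True:
        reply = input(prompt).strip().lower()
        if reply in ('y', 'yes', 'yeah'):
            return True
        if reply in ('n', 'no', 'nope'):
            return False
        print('Please answer Y or N.')
```

In main() this would be used as `if ask_yes_no('Display instructions? [Y/N]: '): instructions()`, which also fixes the "yes is treated as no" problem noted above.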
{ "domain": "codereview.stackexchange", "id": 19488, "tags": "python, game, python-3.x" }
The universal value of Boltzmann constant?
Question: So I'm quite confused about Boltzmann's constant $k_B$ being fundamental. From here: ... the Boltzmann constant. Its value is well known but even if its value were 10 times bigger or if it were exactly 1 , or 45.90 or 106 well... the Universe would remain the same as it is now. The Boltzmann constant is not really fundamental to the existence of the Universe. This leaves me confused. Let's assume I have $2$ different universes: Is it possible they can have different Boltzmann's constants $k_B$ and $k_B'$ ? If yes what happens if I open a wormhole (assume existence of some exotic matter) and connect both universes? Is point 2. identical to arguing that $\beta = 1/k_BT$ and $\beta' = 1/k_B'T$ should be the same for all systems? Answer: Boltzmann constant is the coefficient for converting temperature units to energy units. There is nothing fundamental about it.
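The answer's point that $k_B$ is just a unit-conversion factor can be made concrete: with the exact post-2019 SI values below, a temperature is turned into a characteristic thermal energy by a single multiplication.

```python
k_B = 1.380649e-23      # J/K, exact by definition in the 2019 SI
eV = 1.602176634e-19    # J per electronvolt, also exact
T = 300.0               # room temperature in kelvin
E_thermal = k_B * T     # characteristic thermal energy in joules
print(E_thermal / eV)   # ~0.0259 eV, the familiar "kT at room temperature"
```

Choosing a different numerical value for $k_B$ would just relabel the kelvin scale; the energy $k_B T$ of any given physical state is unchanged.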
{ "domain": "physics.stackexchange", "id": 93657, "tags": "thermodynamics, statistical-mechanics, temperature, dimensional-analysis, physical-constants" }
Could dark matter be a kind of Goldstone boson?
Question: The common argument for the vanishing Goldstone boson is that if there is some massless particle generated from the spontaneous symmetry breaking, it should be detected. Since we never saw that, it is not possible. However, if there is some particle which has not been found, it could be (a) very heavy; (b) very weakly interacting. The Goldstone particle was rejected based on reason (a). Could (b) somehow be true? e.g. as a candidate of dark matter? Or since the Higgs mechanism was confirmed, is this possibility (very weakly interacting) already ruled out? Answer: A Goldstone boson is a generic type of particle formed when symmetries are spontaneously broken. If you want to suggest that dark matter is a Goldstone boson then that says very little unless you suggest a specific model with a symmetry to be broken. When exact symmetries are broken you get a massless Goldstone boson (except in a few special circumstances, e.g., in gauge theory the extra mode gives mass to the gauge bosons instead of forming a Goldstone boson). Dark matter cannot be formed from massless particles since they would not be gravitationally bound to galaxies and we know that dark matter is. Massless particles would fly past on the same trajectories as photons in the microwave background. If the broken symmetry is not perfect you get pseudo-Goldstone bosons which are light on the scale of the model, but not massless. The pion is an example from flavour chiral symmetry breaking, but it is not stable. Any theory that predicted such a particle would predict other new particles that could just as easily be part of dark matter if they are stable. Without a specific proposal for such a theory not much has been said. Note that it is actually very easy to dream up particle models of dark matter, e.g. you just need a new quantum number to explain stability. The difficulty is to find a theory that is well motivated from other considerations.
e.g., supersymmetry solves the hierarchy problem, axions solve the strong CP problem, and so on. However, there is no clear reason why dark matter needs to solve other problems in this way. Until we can detect a signature for dark matter interactions it is going to be very hard to settle what it is.
{ "domain": "physics.stackexchange", "id": 10207, "tags": "dark-matter, symmetry-breaking" }
Does learning content from additional encyclopedias consume much less amount of storage?
Question: Complex AI that learns lexical-semantic content and its meaning (such as a collection of words, their structure and dependencies), such as Watson, takes terabytes of disk space. Let's assume a DeepQA-like AI consumed the whole Wikipedia of size 10G, which took the same amount of structured and unstructured stored content. Will learning another 10G of a different encyclopedia (different topics in the same language) take the same amount of data? Or will the AI reuse the existing structure and take less than half (like 1/10 of it) additional space? Answer: It seems easy for this to be sublinear growth or superlinear growth, depending on context. If we imagine the space of the complex AI as split into two parts--the context model and the content model (that is, information and structure that is expected to be shared across entries vs. information and structure that is local to particular entries), then expanding the source material means we don't have much additional work to do on the context model, but whether the additional piece of the content model is larger or smaller depends on how connected the new material is to the old material. That is, one of the reasons why Watson takes many times the space of its source material is because it stores links between objects, which one would expect to grow with roughly order n squared. If there are many links between the old and new material, then we should expect it to roughly quadruple in size instead of double; if the old material and new material are mostly unconnected and roughly the same in topology, then we expect the model to roughly double; if the new material is mostly unconnected to the old material and also mostly unconnected to itself, then we expect the model to not grow by much.
{ "domain": "ai.stackexchange", "id": 86, "tags": "watson, storage" }
Is "dark clothes for winter, light for summer" relevant?
Question: We are told to wear light clothes in summer as they are better at reflecting sunshine and keeping us cool. And dark clothes absorb sunshine and keep us warm. But is it really relevant? If I buy identical t-shirts, one in black and one in white, will I feel significantly cooler or warmer? I have noticed that black surfaces get much warmer, but do they make the person warmer too? Answer: This article has some relevant results based on a study of bird plumage (it also happens to be cited in the abstract of the Nature paper mentioned in one of the other answers), and is summarized in simpler terms here. I'll attempt to summarize the summary. Black and fluffy/loose fitting clothing is best if it is hot out and there is any ($>3 \mathrm{m}/\mathrm{s}$) wind. The black clothing absorbs both solar radiation and radiation from the body. The air in the immediate vicinity is heated, then efficiently transported away by the wind. This is slightly better than white fluffy/loose fitting clothing, which reflects more sunlight and radiation from the body. The emission from the body is reflected, so it cannot heat the air near the clothing as efficiently and have a chance to be transported away. Tight black clothing is a terrible idea if trying to stay cool, regardless of windspeed. If there is no wind ($<3 \mathrm{m}/\mathrm{s}$), white clothing is better since the most important thing in these conditions is to reflect as much incoming sunlight as possible. I also have another possibility to think about. My recollection regarding loose fitting black robes in the desert is that - given a garment that is open at the bottom (robe) and top (not too tight fitting) - heating the air inside is actually advantageous to keeping cool since this drives a convection flow upward through the garment. This airflow makes cooling via sweating efficient, enough that the person wearing the garment doesn't feel as hot.
Unfortunately I can't find any experimental results to validate this picture, but it seems more or less in line with the results above, at least in as much as airflow seems to be key to answering the question.
{ "domain": "physics.stackexchange", "id": 12183, "tags": "thermodynamics, visible-light, everyday-life, thermal-radiation" }
If a = 0 for a Van der Waals gas, what does that signify?
Question: I know that if $a = 0$ for a gas at certain temperature and pressure, it means that the molecules of gas have almost no attractive forces acting between them. But does that also mean that the molecules have high repulsive forces acting between them? Answer: The simple answer is no. An "$a$" of zero simply means the attractive forces are so low that they don't affect the pressure of the gas in a measurable way. As Ivan Neretin mentioned in the comments, high repulsive forces would result in a negative $a$. More specifically, a positive $a$ is due to electrostatic attraction. Thus a negative $a$ would mean that, on average, the gas molecules are close enough to feel an average electrostatic repulsion. But you will never see a negative $a$, because there will never be a net electrostatic repulsion between gas particles. Here is why: At sufficiently high densities, real gases exert more pressure than would be predicted if they behaved ideally. This is not due to a repulsive electrostatic interaction, it is due to an excluded volume effect. Indeed, at even the closest average interparticle spacing that atoms and molecules can have and still be gases, their average electrostatic interaction is attractive. For them to experience a net electrostatic repulsive interaction, they would need to be so close that they are, on average, closer than the energy minimum in their interaction potential (typically modeled as a Lennard-Jones potential). And once the particles are that close together, the substance is no longer a gas. Rather, the positive pressure deviation from ideality at high densities comes from the fact that real gas particles take up space. At sufficiently high densities, there is consequently reduced free volume left for the gas particles to move around (compared to what they would have if they were ideal gas point particles). Hence the term "excluded volume". 
This lower free volume significantly reduces the configuration space available to the gas (i.e., the number of possible microstates), and is thus entropically unfavorable. I.e., the positive pressure deviation caused by the excluded volume effect is not energetic in origin, it's entropic. This can be most easily confirmed by looking at equations of state for the simplest possible real gases, namely noble gases like argon. I did this and found that, at extremely high densities, it's actually energetically favorable to compress them (confirming that their average electrostatic interactions remain attractive), but entropically unfavorable: $$\left(\frac{\partial E}{\partial V}\right)_{\!T} > 0 \text{ (compression is energetically favorable)}$$ $$\left(\frac{\partial S}{\partial V}\right)_{\!T} > 0 \text{ (compression is entropically unfavorable)}$$
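Since the answer invokes the Lennard-Jones picture, here is a small sketch of the 12-6 potential it refers to (reduced units, illustrative only, not fitted to any real gas); the energy minimum sits at $r = 2^{1/6}\sigma$, and particles close enough to probe the steep positive wall below $\sigma$ would no longer form a gas:

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential in reduced units."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

r_min = 2 ** (1 / 6)            # location of the energy minimum
print(lennard_jones(r_min))     # ~ -1.0, i.e. -epsilon at the minimum
print(lennard_jones(0.9) > 0)   # True: potential is positive below r = sigma
print(lennard_jones(2.0) < 0)   # True: still weakly attractive at larger r
```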
{ "domain": "chemistry.stackexchange", "id": 16381, "tags": "physical-chemistry, gas-laws, van-der-waals-behavior" }
What does "message of over a gigabyte was predicted in tcpros" mean?
Question: While streaming with theora image_transport, I receive the following error: a message of over a gigabyte was predicted in tcpros. that seems highly unlikely, so I'll assume protocol synchronization is lost. What might cause this error? Originally posted by miltos on ROS Answers with karma: 85 on 2011-06-18 Post score: 4 Original comments Comment by kingsimba0511 on 2021-10-28: In my case, I think it's the compiler's bug or some bug in roscpp. I created a very simple program, and the problem still exists. https://github.com/AutoxingTech/simple_publisher_crash Answer: As the error message indicates, it means that the connection somehow became out-of-sync. This could potentially happen if streaming between multiple computers and somehow the libraries are not the same, or something even more esoteric. Originally posted by kwc with karma: 12244 on 2011-09-02 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by rastaxe on 2014-10-07: I have the same error. In my case, it happens when a node calls a service and the server dies.
{ "domain": "robotics.stackexchange", "id": 5876, "tags": "image-transport" }
Why can't hash tables provide O(n) sorting?
Question: Since a sufficiently large hash table takes constant time to both insert and retrieve data, should it not be possible to sort an array by simply inserting each element into the hash table, and then retrieving them in order? You just insert each number into the hash table, and remember the lowest and highest number inserted. Then for each number in that range, in order, test if it is present in the hash table. If the array being sorted contains no gaps between values (i.e. it can be [1,3,2] but NOT [1,3,4]), this should give you O(N) time complexity. Is this correct? I don't think I've ever heard of hash tables being used this way - am I missing something? Or are the restrictions (numeric array with no gaps) too much for it to be practically useful? Answer: The algorithm you give is exponential time, not linear. If you're given $n$ $b$-bit entries, the size of your input is $nb$ bits but the algorithm takes time $\Theta(2^b)$, which is exponential in the input length. In particular, your algorithm takes $2^k$ steps to sort the roughly $2k$-bit input $\{0, 2^k\}$.
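To see the answer's point concretely, here is a sketch (my own, not from the question) of the proposed procedure for distinct integers; its running time is O(n + R) where R is the value range, so it is linear in n only when the range is small, and exponential in the bit-length b of the entries in general:

```python
def hash_range_sort(xs):
    """'Sort' distinct integers via a hash set, as proposed in the question."""
    seen = set(xs)                 # n expected-O(1) insertions
    lo, hi = min(xs), max(xs)
    # This loop runs hi - lo + 1 times regardless of n: O(R), not O(n).
    return [v for v in range(lo, hi + 1) if v in seen]

print(hash_range_sort([3, 1, 2]))   # dense range: cheap
# hash_range_sort([0, 2**40]) would probe about 10**12 values
# for an input of just two numbers -- the exponential blow-up.
```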
{ "domain": "cs.stackexchange", "id": 19550, "tags": "data-structures, runtime-analysis, sorting, hash-tables" }
Ending points of the root locus
Question: Let $$D(s) + KN(s) = 0 \tag{1}$$ where $D(s)$ and $N(s)$ are polynomials of $s \in \mathbb{C}$ such that $\text{Deg}(D) = n, \ \text{Deg}(N) = m$ and $n\ge m$. The root locus method tells us how the solutions of $(1)$ change as we change the parameter $K$ from $K=0$ to $K = \infty$. I'm trying to understand these extreme cases. Let $K \to 0$ and we have $$D(s) = 0$$ so in this case, the set of the solutions is $A = \{s \in \mathbb{C} \mid D(s) = 0\}$. Now let $K \to \infty$; if we choose $s$ such that $N(s) \not = 0$ then the answer will be infinity. So we should choose $s$ such that $N(s) = 0$. In that case, if $(1)$ holds, we also have $D(s) =0$, which means $N(s)$ and $D(s)$ have a common factor, but this isn't the result that should be obtained. Curiously, if we rewrite $(1)$ as $$\frac{N(s)}{D(s)} = -\frac{1}{K} \tag{2}$$ and let $K\to \infty $, one possible case in which $(2)$ holds is $N(s) = 0$, which gives us $m$ solutions and doesn't require $D(s)$ to have a common factor with $N(s)$! Why does this happen? And why is the first approach wrong? Example: Let $D(s) = s^2 - 4$ and $N(s) = s + 3$. So $(1)$ becomes $$s^2 - 4 + K(s+3) = 0$$ If $K \to \infty $ and $s = -3$ then $9 - 4 = 0$, which is clearly wrong. On the other hand, rewriting the equation $$\frac{s+3}{s^2 - 4} = -\frac{1}{K}$$ If $K \to \infty $ and $s = -3$ then $\frac{0}{9-4} = 0$, which is true, of course. Answer: The problem with your example is that $\infty\cdot 0$ isn't necessarily equal to zero. The only way to judge what is happening in the limit $K\to\infty$ is to divide the original equation by $K$: $$\frac{D(s)}{K}+N(s)=0\tag{1}$$ Now it is obvious that for $K\to\infty$ the actual value of $D(s)$ is irrelevant, as long as it is finite. Consequently, the only necessary condition for $(1)$ to be true as $K$ becomes large is that $N(s)=0$.
This problem is a bit similar to the problem of determining the limit $$\lim_{x\to\infty}\frac{x}{x+c}\tag{2}$$ The value of the limit $(2)$ is independent of the choice of the constant $c$, which becomes negligible compared to $x$. The same is true in the first equation of the question: the actual value of $D(s)$ becomes irrelevant compared to $KN(s)$.
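The limiting behaviour can also be checked numerically for the example above, $D(s)=s^2-4$ and $N(s)=s+3$ (a quick sketch using the quadratic formula; the specific K values are my own choice): as K grows, one root approaches the zero of N at s = -3 while the other runs off to minus infinity.

```python
import math

# s^2 - 4 + K(s + 3) = 0  is the quadratic  s^2 + K s + (3K - 4) = 0
def roots_for(K):
    disc = K * K - 4.0 * (3.0 * K - 4.0)
    r = math.sqrt(disc)            # real for large K
    return (-K - r) / 2.0, (-K + r) / 2.0

for K in (100.0, 1000.0, 10000.0):
    s1, s2 = roots_for(K)
    print(K, s1, s2)   # s2 -> -3 (the zero of N); s1 -> -infinity
```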
{ "domain": "dsp.stackexchange", "id": 9722, "tags": "continuous-signals, transfer-function, control-systems, poles-zeros" }
How to implement global contrast normalization in python?
Question: I am trying to implement global contrast normalization in python from Yoshua Bengio's Deep Learning book (section 12.2.1.1 pg. 442). From the book, to get a normalized image using global contrast normalization we use this equation: $$\mathsf{X}^{\prime}_{i,j,k}=s\frac{\mathsf{X}_{i,j,k}-\overline{\mathsf{X}}}{max\left\lbrace \epsilon, \sqrt{\lambda+\frac{1}{3rc}\sum_{i=1}^{r}\sum_{j=1}^{c}\sum_{k=1}^{3}(\mathsf{X}_{i,j,k}-\overline{\mathsf{X}})^2}\right\rbrace }$$ where $\mathsf{X}_{i,j,k}$ is a tensor of the image and $\mathsf{X}^{\prime}_{i,j,k}$ is a tensor of the normalized image, and $\overline{\mathsf{X}} = \frac{1}{3rc}\sum_{i=1}^{r}\sum_{j=1}^{c}\sum_{k=1}^{3} \mathsf{X}_{i,j,k}$ is the average value of the pixels of the original image. $\epsilon$ and $\lambda$ are some constants, usually with $\lambda=10$ and $\epsilon$ set to be a very small number, and here is my implementation: import Image import numpy as np import math def global_contrast_normalization(filename, s, lmda, epsilon): X = np.array(Image.open(filename)) X_prime=X r,c,u=X.shape contrast =0 su=0 sum_x=0 for i in range(r): for j in range(c): for k in range(u): sum_x=sum_x+X[i][j][k] X_average=float(sum_x)/(r*c*u) for i in range(r): for j in range(c): for k in range(u): su=su+((X[i][j][k])-X_average)**2 contrast=np.sqrt(lmda+(float(su)/(r*c*u))) for i in range(r): for j in range(c): for k in range(u): X_prime[i][j][k] = s * (X[i][j][k] - X_average) / max(epsilon, contrast) Image.fromarray(X_prime).save("result.jpg") global_contrast_normalization("cat.jpg", 1, 10, 0.000000001) original image: result image: I got an unexpected result. What is wrong with my implementation? Answer: there are multiple issues with the code: You force the values in the image to be uint8 (8-bit integer). Since the values are floats they will be casted/rounded to either 0 or 1. This will later be interpreted as image in black and the darkest form of gray (1 out of 255). 
Once you have proper floats as values PIL or pillow can't handle the array (they only do images with values in [0, 255]) The first problem happened because you/numpy wants the array to be a uint8. The normalize version will have floats. You should have used: X_prime = X.astype(float) Here is a working version of the code: import numpy import scipy import scipy.misc from PIL import Image def global_contrast_normalization(filename, s, lmda, epsilon): X = numpy.array(Image.open(filename)) # replacement for the loop X_average = numpy.mean(X) print('Mean: ', X_average) X = X - X_average # `su` is here the mean, instead of the sum contrast = numpy.sqrt(lmda + numpy.mean(X**2)) X = s * X / max(contrast, epsilon) # scipy can handle it scipy.misc.imsave('result.jpg', X) global_contrast_normalization("cat.jpg", 1, 10, 0.000000001) PS: X_prime = X will make X_prime reference X. So changing X_prime will also change X.
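The PS about X_prime = X deserves a tiny demonstration (plain Python lists are used here so it runs without NumPy; the same aliasing applies to NumPy arrays, where the independent copy is `a.copy()`):

```python
a = [1, 2, 3]
b = a            # b is another name for the SAME list, not a copy
b[0] = 99
print(a[0])      # 99 -- mutating b mutated a too
c = a[:]         # a shallow copy (for NumPy arrays: c = a.copy())
c[0] = 0
print(a[0])      # still 99 -- the copy is independent
```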
{ "domain": "datascience.stackexchange", "id": 1469, "tags": "python, image-classification, computer-vision, preprocessing" }
Physics after a Theory of Everything
Question: There is a lot of controversy over the existence of a Theory of Everything (ToE), and as far as I know, we are a long way from having a possible candidate. But what interests me is: what comes after that? If a Theory of Everything is truly found, will there be anything left for physics to do? Is a ToE actually the end of Physics? Or are there things that will remain unexplained even after a ToE is found? Edit (7/1/2016): Just to make it fit for this site, my question is: What are the areas of physics that would require working on, even if a ToE is discovered? I hope I have been able to "isolate an issue that can be answered in a few paragraphs". Answer: I think not. There are many open problems in condensed matter physics. For example, I heard from my professors that the mechanism behind new superconductors is still a mystery. Also, understanding the underlying rules does not always explain the system as a whole. There are complex systems that obey simple underlying rules but cannot be explained directly by those rules, like biological systems. Even simple rules can produce complex results; examples include Conway's Game of Life and cellular automata in general. So I think there is still much more for physicists to explain.
{ "domain": "physics.stackexchange", "id": 27518, "tags": "soft-question, theory-of-everything" }
Do vacuum bubbles exist in theories with normal ordered Hamiltonian?
Question: When we calculate the Hamiltonian in the free theory, we notice that it contains an infinitely large term \begin{align} H &= \int_V \mathrm{d }k^3 \frac{\omega_k}{(2\pi)^3 } a^\dagger(\vec k) a(\vec k)+ \frac{1}{2} \int_V \mathrm{d }k^3 \omega_k \delta(\vec 0) \, . \end{align} We can get rid of this term by claiming that nature only uses normal ordered operators, in which case there is no infinitely large term. As soon as we add an interaction term like $\phi^4$, we encounter the same problem. If we switch the operators around using the commutation relations we find infinitely large terms that are all related to field commutators evaluated at the same spacetime point $[\phi_{-}(x) ,\phi_{+}(x)] \sim \infty $. In diagrammatic form, these terms are given by self-loop diagrams. Again, the problem can be solved by demanding that the Hamiltonian is brought into normal form before we quantize it. If we work with a normal ordered Hamiltonian there are no self-loop diagrams. This seems to suggest that if we work with normal ordered Hamiltonians, at least some of the vacuum bubble diagrams no longer exist. Does normal ordering remove all vacuum bubble diagrams or are there some remnant diagrams that are not removed through the process of normal ordering? Answer: Not all the vacuum bubbles are removed by normal ordering. Your third diagram at the end of your post is an example.
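The divergent constant in the free Hamiltonian above traces back to a single commutator, and normal ordering discards exactly that term (a standard manipulation, written here with the same measure $d^3k/(2\pi)^3$ as the question):

```latex
a(\vec k)\,a^\dagger(\vec k') = a^\dagger(\vec k')\,a(\vec k)
  + (2\pi)^3 \delta^{(3)}(\vec k - \vec k'),
\qquad
{:}\,a(\vec k)\,a^\dagger(\vec k')\,{:} \;=\; a^\dagger(\vec k')\,a(\vec k),
```

so that $:H: = \int_V \mathrm{d}k^3 \, \frac{\omega_k}{(2\pi)^3} \, a^\dagger(\vec k) a(\vec k)$, with the $\frac{1}{2}\int_V \mathrm{d}k^3\, \omega_k\, \delta(\vec 0)$ piece dropped. In the interacting $\phi^4$ theory the same prescription discards every contraction of fields at the same spacetime point, which is why the self-loop diagrams disappear, but, as the answer notes, not every vacuum bubble is of that form.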
{ "domain": "physics.stackexchange", "id": 62859, "tags": "quantum-field-theory, operators, renormalization, feynman-diagrams" }
Does changing the order of the convolution layers in a CNN have any impact?
Question: Could changing the order of convolution layers in a CNN improve accuracy or training time? Answer: Conventionally, CNN layers downsample over and over, which enables them to capture details at different levels of abstractions. Usually, it is observed that the initial layers do nothing more than detecting edges, or filtering color channels; the combinations of these edges are what we perceive as 'features'. If you reverse the order, you essentially are changing sampling modes down the line. CNNs detect by 'downsampling' the inputs and therefore 'extracting' features. It may not work as expected!
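The repeated downsampling the answer describes can be illustrated with simple shape arithmetic (a sketch assuming 3x3 'valid' convolutions followed by 2x2 pooling; the specific numbers are my own, not from the answer):

```python
def after_conv_and_pool(n, kernel=3, pool=2):
    """Spatial size after one 'valid' conv layer and one pooling layer."""
    n = n - kernel + 1   # a valid convolution shrinks the map slightly
    return n // pool     # pooling then halves it

n = 64
for layer in range(1, 4):
    n = after_conv_and_pool(n)
    print(f"after block {layer}: {n}x{n}")
# 64 -> 31 -> 14 -> 6: early layers see fine detail over a small receptive
# field; later layers see coarse, abstract structure over a large one,
# which is the ordering that reversal would disturb.
```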
{ "domain": "ai.stackexchange", "id": 1094, "tags": "neural-networks, machine-learning, deep-learning, convolutional-neural-networks" }
How do foundries prevent zinc from boiling away when alloyed with Aluminum?
Question: How do foundries prevent lower boiling point metals such as zinc from boiling away when alloyed in a furnace with higher boiling point metals such as aluminum? Answer: When alloys are made by mixing molten metals (actually an alloy only need contain one metal and at least one other compound, metal or not) the metals only need to be heated to their melting point, not all the way to their boiling point. In the example you've given, the melting point of aluminum is $\pu{660^oC}$, which is $\pu{247^oC}$ below the boiling point of zinc, so the volatilization of zinc is negligible under these conditions. However, the issue you bring up does present problems in other cases. For example this article states the following: One difficulty in making alloys is that metals have different melting points. Thus copper melts at $\pu{1,083^oC}$, while zinc melts at $\pu{419^oC}$ and boils at $\pu{907^oC}$. So, in making brass, if we just put pieces of copper and zinc in a crucible and heated them above $\pu{1,083^oC}$, both the metals would certainly melt. But at that high temperature the liquid zinc would also boil away and the vapour would oxidize in the air. The method adopted in this case is to heat first the metal having the higher melting point, namely the copper. When this is molten, the solid zinc is added and is quickly dissolved in the liquid copper before very much zinc has boiled away. Even so, in the making of brass, allowance has to be made for unavoidable zinc loss which amounts to about one part in twenty of the zinc. Consequently, in weighing out the metals previous to alloying, an extra quantity of zinc has to be added. Summary, TL;DR: In your example of aluminum and zinc, each metal melts well below either of their boiling points so that loss via volatilization is not a problem. There are cases however, such as alloying copper and zinc, where the boiling point of one metal is lower than the melting point of the other. 
One way to minimize (but not eliminate) the loss of the more volatile metal is to quickly dissolve it in the high-melting metal and then cool the solution. Although this does not eliminate losses due to volatilization, it can greatly reduce the problem. And actually, since alloys are frequently composed of predominantly one metal, it is not uncommon to dissolve the lesser components into the primary component as a matter of practice anyway. I hope the example I gave addresses your question. Please don't hesitate to ask for any clarifications in the comments below.
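The "one part in twenty" allowance quoted above translates into a simple charge calculation (a sketch; the 5% figure is the article's rough estimate, not a precise process number):

```python
target_zn = 100.0        # grams of zinc wanted in the finished brass
loss = 1.0 / 20.0        # ~5% of the charged zinc boils away / oxidizes
charge = target_zn / (1.0 - loss)
print(round(charge, 1))  # about 105.3 g of zinc must be weighed out
```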
{ "domain": "chemistry.stackexchange", "id": 8406, "tags": "thermodynamics, metal, boiling-point, metallurgy, alloy" }
How can time-travel be possible if speed is relative?
Question: I have heard that time-travel is possible...relative to some observer. So, as I understand it, the following example would be accurate: There are two twins- TwinA and TwinB. Both have very accurate clocks, that are exactly synchronised. TwinA leaves earth on a rocket, which travels at approaching the speed of light. After he has counted 24 hours, he comes back to Earth, only to find that more time has passed there, and his clock is now behind TwinB's. However what confuses me is that speed is relative. So who is to say that TwinA was the one who was moving fast? Answer: The twin paradox is generally considered to be an illustration of time dilation, not time travel. To answer your question, the difference between the twins is that Twin A was accelerated several times. Speed is relative; if an object is moving at a constant speed, whether the object is considered to be moving or not depends on which inertial frame of reference you choose to measure the object's speed in. The same is not true of acceleration; an accelerating object's acceleration will be measured to be nonzero no matter which inertial frame of reference is used to measure the acceleration in. So all observers will agree that it was Twin A instead of Twin B who accelerated between different inertial frames of reference.
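For a sense of scale in the scenario above (the cruise speed is illustrative, not from the question): ignoring the acceleration phases, special-relativistic time dilation at a constant speed v gives the stay-at-home elapsed time as gamma times the traveller's proper time.

```python
import math

v = 0.8                       # cruise speed as a fraction of c (illustrative)
gamma = 1.0 / math.sqrt(1.0 - v**2)
tau = 24.0                    # hours counted by Twin A on board
print(gamma)                  # ~1.667
print(gamma * tau)            # ~40 hours pass for Twin B on Earth
```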
{ "domain": "physics.stackexchange", "id": 15838, "tags": "speed, time-travel" }
Progress bar wrapper class in C++
Question: I've recently written a simple progress bar class in C++ to mimic usage of similar libraries I've used in Python. The idea is to take some iterable container (e.g. std::vector), and iterate over the container while printing the progress to stdout. This can be useful when doing some computationally expensive operations on the container in the loop body, such as in physics simulations. The code as it stands is #include <iostream> #include <iterator> #ifndef __PBAR_H #define __PBAR_H namespace pbar { template<class It> class ProgressBar { public: ProgressBar(It&& it, It&& it_end, int width, const char symbol='=') : pos_(0), width_(width), symbol_(symbol), iter_(it), iter_begin_(it), iter_end_(it_end) {} using value_type = typename It::value_type; using reference = typename It::reference; class iterator : public std::iterator<typename It::iterator_category, value_type, typename It::difference_type, typename It::pointer, reference> { private: value_type val_ = *iter_; ProgressBar<It> *parent_; public: iterator(ProgressBar<It> *parent, value_type start) : val_(start), parent_(parent) {} iterator& operator++(); iterator operator++(int); bool operator==(iterator other); bool operator!=(iterator other); reference operator*(); }; iterator begin(); iterator end(); template<class I> friend std::ostream& operator<<(std::ostream &steam, const ProgressBar<I> &pbar); private: int pos_; int width_; char symbol_; char left_delim_{'['}; char right_delim_{']'}; char pointer_{'>'}; It iter_; It iter_begin_; It iter_end_; }; // class ProgressBar template<class It> using piter = typename ProgressBar<It>::iterator; template<class It> inline bool ProgressBar<It>::iterator::operator==(piter<It> other) { return val_ == other.val_; } template<class It> inline bool ProgressBar<It>::iterator::operator!=(piter<It> other) { return !(*this == other); } template<class It> inline typename It::reference ProgressBar<It>::iterator::operator*() { return val_; } template<class It> inline 
piter<It>& ProgressBar<It>::iterator::operator++() { ++(parent_->iter_); val_ = *(parent_->iter_); auto fraction = static_cast<double>(std::distance(parent_->iter_begin_, parent_->iter_))/std::distance(parent_->iter_begin_, parent_->iter_end_); parent_->pos_ = parent_->width_*fraction; std::cout << *parent_; return *this; } template<class It> inline piter<It> ProgressBar<It>::iterator::operator++(int) { auto retval = *this; ++(*this); return retval; } template<class It> inline piter<It> ProgressBar<It>::begin() { return ProgressBar<It>::iterator(this, *iter_begin_); } template<class It> inline piter<It> ProgressBar<It>::end() { return ProgressBar<It>::iterator(this, *iter_end_); } template<class It> inline std::ostream& operator<<(std::ostream &stream, const ProgressBar<It> &pbar) { stream << pbar.left_delim_; for (int i=0; i<pbar.width_; i++) { if (i < pbar.pos_) stream << pbar.symbol_; else if (i == pbar.pos_) stream << pbar.pointer_; else stream << " "; } stream << pbar.right_delim_ << int(double(pbar.pos_)/pbar.width_*100) << "%\r"; stream.flush(); return stream; } }; // namespace pbar #endif // __PBAR_H Using the class: #include <iostream> #include <vector> #include "pbar.h" using namespace pbar; int main() { std::vector<int> v = {1, 2, 3, 4, 5}; ProgressBar<std::vector<int>::iterator> pbar(v.begin(), v.end(), 50); for (auto i = pbar.begin(); i != pbar.end(); i++) { ; } // The constructor allows changing the bar symbol (default '=') ProgressBar<std::vector<int>::iterator> pbar2(v.begin(), v.end(), 50, '#'); std::cout << "\nRange based loops also work" << std::endl; for (auto& i: pbar2) { ; } } While I've been programming in C++ for a while now, I haven't had the opportunity to get a lot of feedback on my code yet, and it's making me anxious that I'm learning bad patterns. So, I'd like to know: Are there any obvious pitfalls in my code? Bad design decisions? General improvements or additions to the code? 
Should I separate the library into a header and implementation file, or keep it header-only? Is there a better way to implement iterators for custom types? I've taken this implementation from here, but I'm wondering if there's a different way than using nested classes. I appreciate any and all advice! Answer: The constructor takes two forwarding references, but doesn't actually forward them: ProgressBar(It&& it, It&& it_end, int width, const char symbol='=') iter_(it), iter_begin_(it), iter_end_(it_end) It's better to accept it and it_end by value, and move-construct from them: ProgressBar(It it, It it_end, int width, const char symbol='=') iter_(it), iter_begin_(std::move(it)), iter_end_(std::move(it_end)) The constructor also does no checking of its arguments - what does a zero or negative width mean? Should it even be a signed type at all? std::cout is a strange choice of stream for the ++ operator - progress information like this should normally go to std::clog rather than being mixed with program output. Inheriting from std::iterator is now deprecated - just define the member types directly in the class. It's wrong for iterator to forward the category tag of It, as it's at most a Forward Iterator - it's certainly not a Bidirectional Iterator. We could make iterator inherit from It to exactly forward its category, but I think that would be a mistake: we don't want to try to track progress of a bidirectional iterator. We'll need to be a bit clever when defining its category. 
Many of the members (of bar and of iterator) are missing const when I'd expect it: bool operator==(const iterator& other) const; bool operator!=(const iterator& other) const; reference operator*() const; Modern GCC (g++-8 -std=c++2a) doesn't believe you can use that template alias to define members of the iterator type: 204396.cpp:73:13: error: no declaration matches ‘bool pbar::ProgressBar<It>::iterator::operator==(pbar::piter<It>)’ inline bool ProgressBar<It>::iterator::operator==(piter<It> other) { ^~~~~~~~~~~~~~~ 204396.cpp:42:10: note: candidate is: ‘bool pbar::ProgressBar<It>::iterator::operator==(pbar::ProgressBar<It>::iterator)’ bool operator==(iterator other); ^~~~~~~~ (and many, many more like that) Computation of fraction can be expensive when It is less capable than RandomAccessIterator, due to the std::distance() calls. We can save work by storing the total and progress separately, which means only one std::distance() per ProgressBar instead of two per iteration. The percentage calculation doesn't need to go through double if we multiply before dividing (at least in the absence of overflow): stream << pbar.right_delim_ << (pbar.pos_ * 100 / pbar.width_) << "%\r"; Assignments through the iterator are lost. For example, this code doesn't work (the vector is unchanged): for (auto& i: ProgressBar(v.begin(), v.end(), 50, '#')) { i *= 2; } Really, we want to encapsulate the container iterator inside the progress-bar iterator (instead of a copy of its value), so that operator*() forwards right through to the container. These lines can be within the include-guard: #include <iostream> #include <iterator> Although they almost certainly have include guards of their own, it certainly does no harm to avoid repeating them. Finally, a specific answer to your specific question no. 4: there's nothing in the header that doesn't depend on the template parameter, so nothing can be moved to a separately-compiled implementation file without losing the benefit of templates.
{ "domain": "codereview.stackexchange", "id": 32120, "tags": "c++, c++11, iterator" }
Missing Terms in Weinberg's treatment of perturbations on Newtonian Cosmology
Question: I was reading Appendix F of Steven Weinberg's book "Cosmology". In this Appendix he works out the perturbations to a cosmological fluid described by non-relativistic hydrodynamics and Newtonian gravity. It turns out that the first order perturbations satisfy, $$ \frac{\partial \delta \rho }{\partial t } + 3 H \delta \rho + H \vec{X} \cdot \nabla \delta \rho + \bar{\rho} \nabla \cdot \vec{v} = 0, \qquad \tag{1} $$ $$ \frac{\partial \delta \vec{v}}{\partial t } + H \vec{X} \cdot \nabla \delta \vec{v} + H \delta \vec{v} = - \nabla \delta \phi, \qquad \tag{2} $$ $$ \nabla^2 \delta \phi = 4\pi G \delta \rho. \qquad \tag{3} $$ Weinberg applies the following Fourier transform to these equations, $$ f(\vec{X},t) = \int \exp \left( \frac{i \vec{q} \cdot \vec{X}}{a} \right) f_{\vec{q}}(t) \ \mathrm{d}^3\vec{q}, $$ where $f(\vec{X},t)$ is a placeholder for $\delta \vec{v}, \delta \rho, $ and $\delta \phi$. The resulting equations he gets are, $$ \frac{\mathrm d \delta \rho_{\vec{q}}}{\mathrm d t } + 3 H \delta \rho_{\vec{q}} + \frac{i\bar{\rho}}{a}\ \vec{q} \cdot \delta \vec{v}_{\vec{q}} = 0 \qquad \tag{1'}$$ $$ \frac{\mathrm d \delta \vec{v}_{\vec{q}}}{\mathrm d t } + H \delta \vec{v}_{\vec{q}} = -\frac{i}{a}\ \vec{q} \delta \phi_{\vec{q}} \qquad \tag{2'}$$ $$ \vec{q}^2 \delta \phi_{\vec{q}} = -4\pi G a^2 \delta \rho_{\vec{q}}. \qquad \tag{3'}$$ For the most part these new equations can be obtained by making the substitution $\nabla \rightarrow i \vec{q}/a$. My question: There doesn't seem to be any term in the transformed equations which corresponds to the terms $ H \vec{X} \cdot \nabla \delta \rho$ and $H \vec{X} \cdot \nabla \delta \vec{v}$. Weinberg makes no comment about their absence. Is anyone aware of a legitimate mathematical reason for these terms to disappear in the transformed equations? Answer: The answer turns out to be embarrassingly simple, and I suspect nobody came up with it because of my poor communication in the question.
The $a$ occurring in the Fourier transform is the scale factor, which has a time dependence. So if we look at the term $\frac{\partial}{\partial t} \rho$ we will get, $$ \frac{\partial}{\partial t} \rho = \frac{\partial}{\partial t} \int \exp \left( \frac{ i \vec{q} \cdot \vec{X}}{a(t)}\right) \rho_{\vec{q}}(t) \ \mathrm{d}^3 \vec{q} $$ $$ = \int \frac{\partial}{\partial t}\left( \frac{ i \vec{q} \cdot \vec{X}}{a(t)} \right) \exp \left( \frac{ i \vec{q} \cdot \vec{X}}{a(t)}\right) \rho_{\vec{q}}(t) + \exp \left( \frac{ i \vec{q} \cdot \vec{X}}{a(t)}\right) \frac{ \mathrm{d} \rho_{\vec{q}}}{\mathrm{d} t } \ \mathrm{d}^3 \vec{q} $$ $$ = \int \left(-\frac{\dot{a}}{a} \frac{ i \vec{q} \cdot \vec{X}}{a(t)} \right) \exp \left( \frac{ i \vec{q} \cdot \vec{X}}{a(t)}\right) \rho_{\vec{q}}(t) + \exp \left( \frac{ i \vec{q} \cdot \vec{X}}{a(t)}\right) \frac{ \mathrm{d} \rho_{\vec{q}}}{\mathrm{d} t } \ \mathrm{d}^3 \vec{q} $$ $$ = \int \left(-H\frac{ i \vec{q} \cdot \vec{X}}{a(t)} \right) \exp \left( \frac{ i \vec{q} \cdot \vec{X}}{a(t)}\right) \rho_{\vec{q}}(t) + \exp \left( \frac{ i \vec{q} \cdot \vec{X}}{a(t)}\right) \frac{ \mathrm{d} \rho_{\vec{q}}}{\mathrm{d} t } \ \mathrm{d}^3 \vec{q} $$ The left-hand term in the integrand is exactly minus the Fourier transform of $H (\vec{X} \cdot \nabla) \rho$, which is why the terms cancel.
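The chain-rule step in that derivation can also be checked numerically. The following sketch is my addition, using an assumed matter-era scale factor $a(t) = t^{2/3}$ (so $H = 2/3t$) and an arbitrary stand-in value for $\vec{q}\cdot\vec{X}$; it compares a finite-difference time derivative of the Fourier kernel with the analytic $-H\,(i\vec{q}\cdot\vec{X}/a)$ factor.

```python
import cmath

# Assumed toy scale factor a(t) = t**(2/3), so H = a'/a = 2/(3t).
def a(t):
    return t ** (2.0 / 3.0)

def H(t):
    return 2.0 / (3.0 * t)

qX = 1.7  # stands in for q . X (arbitrary test value)

def kernel(t):
    return cmath.exp(1j * qX / a(t))

t0, h = 2.0, 1e-6

# Finite-difference d/dt of the Fourier kernel ...
numeric = (kernel(t0 + h) - kernel(t0 - h)) / (2 * h)

# ... versus the chain-rule result -H * (i q.X / a) * kernel, i.e. minus
# the Fourier image of the H X.grad term that seems to go missing.
analytic = -H(t0) * (1j * qX / a(t0)) * kernel(t0)

print(abs(numeric - analytic))  # essentially zero: the term is accounted for
```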
{ "domain": "astronomy.stackexchange", "id": 886, "tags": "cosmology" }
A simple hangman game in Haskell
Question: I just finished a course of functional programming in uni and continued to study Haskell because I found it very interesting! I made a simple hangman game and would like to hear any of your thoughts and ideas that may arise looking at the code! What, for example, could make it more dogmatic in a functional programming sense? import Control.Monad import Data.List (elemIndices, sort) pictures = 9 main :: IO () main = do word <- getWord "" clearScreen hang word hang :: String -> IO () hang word = hang' word [] [] pictures hang' :: String -> [Char] -> [Char] -> Int -> IO () hang' word _ _ lives | lives == 0 = clearScreen >> putStrLn (renderHangman 0) >> putStrLn "You lost!" >> putStrLn ("The correct word was " ++ word ++ "\n") hang' word rights _ lives | win rights word = clearScreen >> putStrLn (renderHangman lives) >> putStrLn "You won!" >> putStrLn ("The correct word was " ++ word ++ "\n") hang' word rights wrongs lives = do clearScreen putStrLn $ renderHangman lives putStrLn $ renderWord rights word putStrLn $ renderLives lives putStrLn $ renderWrongs wrongs guess <- getGuess if guess `elem` (rights ++ wrongs) then hang' word rights wrongs lives else if correctGuess guess word then hang' word (guess : rights) wrongs lives else hang' word rights (guess : wrongs) (lives - 1) win :: [Char] -> String -> Bool win guesses = all (`elem` guesses) clearScreen :: IO () clearScreen = replicateM_ 100 (putStrLn "") correctGuess :: Char -> String -> Bool correctGuess guess word = guess `elem` word getGuess :: IO Char getGuess = do putStrLn "Guess a letter!" getChar getWord s = do clearScreen putStrLn "Give a secret word!" 
putStr ['*' | _ <- s] c <- getChar case c of '\n' -> return s char -> getWord (s ++ [char]) renderWord :: [Char] -> String -> String renderWord guesses = foldr hide "" where hide = \x xs -> if x `elem` guesses then x : xs else '_' : xs renderWrongs :: [Char] -> String renderWrongs [] = "" renderWrongs wrongs = "Wrong guesses: " ++ sort wrongs renderHangman :: Int -> String renderHangman = unlines . hangmanpics renderLives :: Int -> String renderLives lives = show lives ++ " guesses left!" hangmanpics :: Int -> [String] hangmanpics 9 = [" ", " ", " ", " ", " ", " ", "========="] hangmanpics 8 = [" ", " |", " |", " |", " |", " |", "========="] hangmanpics 7 = [" +---+", " |", " |", " |", " |", " |", "========="] hangmanpics 6 = [" +---+", " | |", " |", " |", " |", " |", "========="] hangmanpics 5 = [" +---+", " | |", " O |", " |", " |", " |", " ========="] hangmanpics 4 = [" +---+", " | |", " O |", " | |", " |", " |", " ========="] hangmanpics 3 = [" +---+", " | |", " O |", " /| |", " |", " |", " ========="] hangmanpics 2 = [" +---+", " | |", " O |", " /|\\ |", " |", " |", " ========="] hangmanpics 1 = [" +---+", " | |", " O |", " /|\\ |", " / |", " |", " ========="] hangmanpics 0 = [" +---+", " | |", " O |", " /|\\ |", " / \\ |", " |", " ========="] hangmanpics _ = [" +---+", " | |", " O |", " /|\\ |", " / \\ |", " |", " ========="] ``` Answer: looks really good! Here are some suggestions, in rough order of significance The game essentially can be described by the three values word, rights, wrongs, and lives which you pass around between functions. For readability, I would suggest wrapping these up into a datatype: data GameState = GameState { word :: String , rights :: [Char] , wrongs :: [Char] , lives :: Int } The end of hang' feels convoluted. By "end" I mean the code after getGuess. The semantic structure is this: "update" the "game state", and then restart hang'. 
But it's not easy to see that with the current code structure, since each branch is its own "independent" call to hang'. Storing state in its own GameState type, we can cleanly split the code into the parts (1) update the state; (2) call hang'. To cut to the chase, it looks like this: guess <- getGuess let state' = if guess `elem` (rights state ++ wrongs state) then state else if correctGuess guess state then state { rights = guess : rights state } else state { wrongs = guess : wrongs state, lives = lives state - 1 } hang' state' [Char] is not quite an appropriate datatype for rights and wrongs. Using [Char] here suggests to me that you care about the order of the Chars in the list and how many times they show up, but in fact you do not. I would recommend Data.Set instead. (This will give you better performance, too!) getWord can be implemented more simply as clearScreen >> putStrLn "Give a secret word!" >> getLine win -- sleek implementation of this function! Unrelated, I would rename it. win sounds like an action, but really the function is a predicate. Perhaps isWin? As a general rule of thumb, I would try to avoid unqualified imports like import Control.Monad. If you have more than one unqualified import, it can become difficult to know where a function or operator is being imported from. getWord is missing its type signature, as is pictures. This is not a huge deal, but it's generally recommended to include type signatures on (at least) all top-level values. In some places you chain IO actions with >> and in others you use do notation. These are actually the same thing; thingOne >> thingTwo and do { thingOne; thingTwo; } are the same code. (It's unclear to me if you know this already) Anyway, for readability I would suggest using do for longer chains and >> or >>= for shorter expressions. Concretely, rewrite the clearScreen >> putStrLn ... parts of hang' using do, and then rewrite getGuess as putStrLn "Guess a letter!" >> getChar. 
Hope this helps :-) See the code with all changes made (except for using Set) here
{ "domain": "codereview.stackexchange", "id": 43502, "tags": "game, haskell, hangman" }
Form of wave function of a free particle in the void
Question: Thanks to the answer of Stoby, I clarify my question. It may seem a silly question, but I wonder what form the wave function of a free particle alone in the void would take. I understood that in this case the main parameter would be its mass, but would it result in a very large wave function? And what about particles that have no mass, like the photon? EDIT: Thanks to the comment of Stoby, I was able to do some complementary research on the topic. My understanding is that the wave function is theoretically plane (so it is not really a wave anymore) and that in this situation the wave function equation is not able to describe the reality of the particle. My understanding of this impossibility is not that the equation is wrong, but that this theoretical situation is not possible, because there is no place that is both totally empty and infinite (unless it is the last particle in the universe, I suppose...). Is this understanding correct? Answer: You are correct that the plane wave cannot physically describe the particle in reality, as it can't be normalised. The correct normalisable wave function for a free particle would be a gaussian. It can be constructed by adding together many plane waves. Basically this corresponds to how a particle in reality would not have a definite momentum, unlike the plane wave solution. The plane wave solution can be seen as an extreme consequence of the Heisenberg uncertainty principle. A plane wave represents a particle with definite momentum, and as a result its position is totally undefined. In order to describe the free particle more realistically, you need to take into account the fuzzy quantum properties of the particle. As for photons, photons are inherently relativistic, meaning you would have to use Quantum Electrodynamics, a Quantum Field Theory, to describe photons. I believe you could find an analogous sort of wave function for a single photon, but in my opinion it would not be much of a useful exercise.
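As a numerical illustration of the answer's point (my sketch, with arbitrary units and made-up parameters): superposing many plane waves with Gaussian weights in momentum produces a localized, normalisable packet, whereas each individual plane wave is spread over all space.

```python
import cmath, math

# Superpose plane waves exp(ikx) with Gaussian weights centred on k0.
k0, sigma_k = 5.0, 1.0                                # central momentum, spread
ks = [k0 + sigma_k * (i - 50) / 10.0 for i in range(101)]  # k0 +/- 5 sigma

def psi(x):
    """|amplitude| of the superposition at position x."""
    amp = sum(math.exp(-((k - k0) / sigma_k) ** 2 / 2) * cmath.exp(1j * k * x)
              for k in ks)
    return abs(amp)

# The packet is peaked at x = 0 and decays away from it, unlike any
# single plane wave, whose modulus is constant everywhere:
print(psi(0.0) > psi(2.0) > psi(4.0))
```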
{ "domain": "physics.stackexchange", "id": 75861, "tags": "quantum-mechanics, wavefunction" }
Velocity of ion in electric field
Question: I have an ion beam which moves with a velocity $v$ and stays in a region for time $t$. The region has a decreasing electric field, so the velocity is different at each point in the region; the field decreases from a value of $A$ V/m to 0 deeper into the region. The ion velocity depends on the electric field in the region such that $v=\sqrt{\frac{2qU}{m}}$. $m$ is the ion mass, $U$ is the applied potential. How do I find the time the ion stays in the region? I know that the electric field is decreasing, which means the velocity changes from $v$ to 0. The time an ion stays in the region depends on how fast it moves (i.e. its velocity) through the region. I can't move past this point. Should I integrate the electric field of the region to find an average value? Answer: To find the time the ion stays in the region, you need to relate the distance the ion travels in the region to its velocity. Since the velocity of the ion depends on the electric field, which varies with position, you will need to integrate over the length of the region. The velocity equation is $v=\sqrt{\frac{2qU}{m}}$, where $m$ is the ion mass, $U$ is the applied potential, and $q$ is the charge of the ion. To find the time the ion stays in the region, integrate the reciprocal of the velocity over the distance the ion travels in the region, $L$: $$t = \int_0^L \frac{\mathrm{d}\ell}{v(\ell)} = \int_0^L \sqrt{\frac{m}{2qU(\ell)}}\ \mathrm{d}\ell$$ You can evaluate this integral numerically or analytically to find the time the ion stays in the region. Keep in mind that the integral will need to be evaluated over the entire length of the region, taking into account the changing electric field.
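As an illustration of evaluating the dwell-time integral numerically, here is a sketch (my addition; the proton mass, region length, entry potential, and linear potential profile are all made-up assumptions, not from the question):

```python
import math

q = 1.602e-19   # ion charge [C] (assumed singly charged)
m = 1.673e-27   # ion mass [kg] (a proton, as an example)
L = 0.01        # region length [m] (assumed)
U0 = 100.0      # potential at entry [V] (assumed)

def v(x):
    """Assumed profile: potential decreasing linearly across the region,
    kept slightly above zero so v never vanishes and the integral converges."""
    U = U0 * (1.0 - 0.99 * x / L)
    return math.sqrt(2.0 * q * U / m)

# t = integral of dx / v(x), evaluated with the midpoint rule
N = 100000
dx = L / N
t = sum(dx / v((i + 0.5) * dx) for i in range(N))
print(t)  # dwell time in seconds
```

For this particular linear profile the integral can also be done analytically, which is a useful cross-check on the numerical result.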
{ "domain": "physics.stackexchange", "id": 93725, "tags": "electric-fields, mass-spectrometry" }
What is the difference between training and testing in reinforcement learning?
Question: In reinforcement learning (RL), what is the difference between training and testing an algorithm/agent? If I understood correctly, testing is also referred to as evaluation. As I see it, both imply the same procedure: select an action, apply to the environment, get a reward, and next state, and so on. But I've seen that, e.g., the Tensorforce RL framework allows running with or without evaluation. Answer: What is reinforcement learning? In reinforcement learning (RL), you typically imagine that there's an agent that interacts, in time steps, with an environment by taking actions. On each time step $t$, the agent takes the action $a_t \in \mathcal{A}$ in the state $s_t \in \mathcal{S}$, receives a reward (or reinforcement) signal $r_t \in \mathbb{R}$ from the environment and the agent and the environment move to another state $s_{t+1} \in \mathcal{S}$, where $\mathcal{A}$ is the action space and $\mathcal{S}$ is the state space of the environment, which is typically assumed to be a Markov decision process (MDP). What is the goal in RL? The goal is to find a policy that maximizes the expected return (i.e. a sum of rewards starting from the current time step). The policy that maximizes the expected return is called the optimal policy. Policies A policy is a function that maps states to actions. Intuitively, the policy is the strategy that implements the behavior of the RL agent while interacting with the environment. A policy can be deterministic or stochastic. A deterministic policy can be denoted as $\pi : \mathcal{S} \rightarrow \mathcal{A}$. So, a deterministic policy maps a state $s$ to an action $a$ with probability $1$. A stochastic policy maps states to a probability distribution over actions. A stochastic policy can thus be denoted as $\pi(a \mid s)$ to indicate that it is a conditional probability distribution of an action $a$ given that the agent is in the state $s$. 
Expected return The expected return can be formally written as $$\mathbb{E}\left[ G_t \right] = \mathbb{E}\left[ \sum_{i=t+1}^\infty R_i \right]$$ where $t$ is the current time step (so we don't care about the past), $R_i$ is a random variable that represents the probable reward at time step $i$, and $G_t = \sum_{i=t+1}^\infty R_i $ is the so-called return (i.e. a sum of future rewards, in this case, starting from time step $t$), which is also a random variable. Reward function In this context, the most important job of the human programmer is to define a function $\mathcal{R}: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, the reward function, which provides the reinforcement (or reward) signal to the RL agent while interacting with the environment. $\mathcal{R}$ will deterministically or stochastically determine the reward that the agent receives every time it takes action $a$ in the state $s$. The reward function $R$ is also part of the environment (i.e. the MDP). Note that $\mathcal{R}$, the reward function, is different from $R_i$, which is a random variable that represents the reward at time step $i$. However, clearly, the two are very related. In fact, the reward function will determine the actual realizations of the random variables $R_i$ and thus of the return $G_i$. How to estimate the optimal policy? To estimate the optimal policy, you typically design optimization algorithms. Q-learning The most famous RL algorithm is probably Q-learning, which is also a numerical and iterative algorithm. Q-learning implements the interaction between an RL agent and the environment (described above). More concretely, it attempts to estimate a function that is closely related to the policy and from which the policy can be derived. This function is called the value function, and, in the case of Q-learning, it's a function of the form $Q : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$. 
The name $Q$-learning derives from this function, which is often denoted as $Q$. Q-learning doesn't necessarily find the optimal policy, but there are cases where it is guaranteed to find the optimal policy (but I won't dive into the details). Of course, I cannot describe all the details of Q-learning in this answer. Just keep in mind that, to estimate a policy, in RL, you will typically use a numerical and iterative optimization algorithm (e.g. Q-learning). What is training in RL? In RL, training (also known as learning) generally refers to the use of RL algorithms, such as Q-learning, to estimate the optimal policy (or a value function) Of course, as in any other machine learning problem (such as supervised learning), there are many practical considerations related to the implementation of these RL algorithms, such as Which RL algorithm to use? Which programming language, library, or framework to use? These and other details (which, of course, I cannot list exhaustively) can actually affect the policy that you obtain. However, the basic goal during the learning or training phase in RL is to find a policy (possibly, optimal, but this is almost never the case). What is evaluation (or testing) in RL? During learning (or training), you may not be able to find the optimal policy, so how can you be sure that the learned policy to solve the actual real-world problem is good enough? This question needs to be answered, ideally before deploying your RL algorithm. The evaluation phase of an RL algorithm is the assessment of the quality of the learned policy and how much reward the agent obtains if it follows that policy. So, a typical metric that can be used to assess the quality of the policy is to plot the sum of all rewards received so far (i.e. cumulative reward or return) as a function of the number of steps. One RL algorithm dominates another if its plot is consistently above the other. 
You should note that the evaluation phase can actually occur during the training phase too. Moreover, you could also assess the generalization of your learned policy by evaluating it (as just described) in different (but similar) environments to the training environment [1]. The section 12.6 Evaluating Reinforcement Learning Algorithms of the book Artificial Intelligence: Foundations of Computational Agents (2017) by Poole and Mackworth provides more details about the evaluation phase in reinforcement learning, so you should probably read it. Apart from evaluating the learned policy, you can also evaluate your RL algorithm, in terms of resources used (such as CPU and memory), and/or experience/data/samples needed to converge to a certain level of performance (i.e. you can evaluate the data/sample efficiency of your RL algorithm) robustness/sensitivity (i.e., how the RL algorithm behaves if you change certain hyper-parameters); this is also important because RL algorithms can be very sensitive (from my experience) What is the difference between training and evaluation? During training, you want to find the policy. During the evaluation, you want to assess the quality of the learned policy (or RL algorithm). You can perform the evaluation even during training.
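To make the training/evaluation distinction concrete, here is a minimal sketch (my addition, using a made-up 5-state chain MDP rather than anything from the answer): tabular Q-learning with $\epsilon$-greedy exploration for the training phase, followed by an evaluation phase that simply runs the learned greedy policy and accumulates reward without any further updates.

```python
import random

random.seed(0)

N = 5  # chain of states 0..N-1; reaching state N-1 ends the episode with reward 1

def step(s, a):
    """Toy chain MDP: action 1 moves right, action 0 moves left."""
    s2 = min(s + 1, N - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == N - 1 else 0.0
    return s2, r, s2 == N - 1

Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma, eps = 0.5, 0.9, 0.2

# --- Training phase: explore (epsilon-greedy) and update Q ---
for _ in range(2000):
    s, done = 0, False
    while not done:
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# --- Evaluation phase: follow the learned greedy policy, no updates ---
s, done, ret, steps = 0, False, 0.0, 0
while not done and steps < 100:
    a = Q[s].index(max(Q[s]))
    s, r, done = step(s, a)
    ret += r
    steps += 1
print(ret)  # 1.0: the learned greedy policy reaches the goal
```

Note that both phases use the same select-action/apply/observe loop; what differs is that training also updates the policy (via Q), while evaluation only measures the return it achieves.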
{ "domain": "ai.stackexchange", "id": 1956, "tags": "reinforcement-learning, training, comparison, testing" }
Why do air entrainment admixtures improve the freeze-thaw resistance of concrete?
Question: It is stated on here that: Air-entraining admixtures cause small stable bubbles of air to form uniformly through a concrete mix. The bubbles are mostly below 1 mm diameter with a high proportion below 0.3 mm. The benefits of entraining air in the concrete include increased resistance to freeze-thaw degradation, increased cohesion (resulting in less bleed and segregation) and improved compaction in low-workability mixes. This does not make sense to me. By introducing more voids into the concrete there are more paths of ingress for water which can then expand and crack the concrete. Why do air entrainment admixtures improve the freeze-thaw resistance of concrete? Answer: Where concrete is exposed to water, the water is going to permeate concrete no matter what you do (for the most part). The voids introduced by air entrainment allow the water some place to expand into when it does eventually freeze, thereby reducing the pressure on the concrete to crack. Edit 2021/02/17 to add some supplementary information: ACI 201-16, "Guide to Durable Concrete," Chapter 4 has a lot of information on freeze-thaw damage to concrete, and describes the method of attack that this has on concrete in great detail. I won't copy/paste whole sections to avoid copyright issues, but to summarize: Concrete below about 80% relative internal humidity is normally immune to freezing damage. ACI defines several Exposure Classes. The table below summarizes these: Your typical neighborhood sidewalk in a northern climate would be a good example of Class F2, assuming that the homeowner doesn't use salt (based on my neighborhood, this is probably pretty accurate). Exposure Class F1 generally has a recommended minimum air content range of 5% to 7% and Classes F2/F3 from 5.5% to 7.5% depending on maximum aggregate size. Tolerance on this is typically taken as +/- 1.5%. These numbers are based on keeping 18% air in the paste portion of the concrete.
{ "domain": "engineering.stackexchange", "id": 3848, "tags": "civil-engineering, concrete" }
How to count quarks using Deep Inelastic Scattering?
Question: The Wikipedia article on deep inelastic scattering suggests that the experiment shows baryons have three points of deflection (corresponding to three quarks) and mesons have two points of deflection. How are the electrons fired in this experiment being detected, and how exactly do the two or three points of deflection appear in the data? Are they fired at a target consisting entirely of baryons, or are collisions with non-baryons somehow filtered from the data? Answer: There may be too many questions here. I'll try to hit some of the high points of the technologies, but be aware that you could write an entire dissertation on the matter (mine was on a closely related topic). The incoming electron or proton beam is characterized by using current monitors (inductive, resonant cavity, charge cups, etc) and by measuring its bending radius in known magnetic fields. Scattered electrons (after the collision) are detected by their ionization in interactions with matter, by Cerenkov radiation, and/or by transition radiation. Their momenta are, again, characterized with magnetic fields. The detection of the reaction products uses the same techniques as for the scattered electrons. Filtering the junk to get just the events you want is a big topic. You put a lot of effort into designing the detector package, trigger, data acquisition, storage subsystem, and analytical programs to make it happen. This is what keeps grad students and post docs employed. Finally, baryon targets are easy: put any matter made of protons and neutrons (i.e. everything) in front of the beam, or collide electrons on protons. Meson targets are hard: you can't just get a pile of pions because they decay. Fast. So you have to generate a meson beam (which is a bit of a trick in and of itself) and direct it into either a fixed baryon target or another beam (electrons, say) in collider mode. Now some physics.
You don't follow a single electron through multiple independent scatters in a single collision with a hadron (the time and distance scales are prohibitively small).{*} Instead, you choose a center of mass energy for the collision that will tend to suppress the effects of scattering from the whole composite object, and allow you to look at the electron-on-valence-parton cross-section. That measurement tells you (after various corrections are factored in) the sum of the squares of the parton charges, and you already know the sum of the parton charges. {*} Actually my dissertation concerned itself with detecting the effects of secondary scattering events in one very closely defined circumstance and how the rate of such scattering might{+} depend on a parameter of the collision called $Q^2$ (the squared four-momentum transfer). {+} The theorists had said we might see an interesting effect at 5--10 GeV$^2$. In the eight years it took to design the experiment (to get to about 8 GeV$^2$), get approval and beam time, and set it up, they'd changed their minds. "It's outside your experimental range by a factor of two or three" they said. It was a world class, high-precision null result.
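As a concrete illustration of that "sum of the squares of the parton charges" remark, here is a quick arithmetic check with the standard valence-quark charges (a toy calculation added for illustration; the variable names are mine, not part of the original answer):

```python
from fractions import Fraction

# Valence-quark charges: u = +2/3, d = -1/3.  The sum of the charges gives
# the hadron charge; deep inelastic scattering is sensitive to the sum of
# the *squares* of the charges.
u, d = Fraction(2, 3), Fraction(-1, 3)
proton = [u, u, d]
neutron = [u, d, d]

for name, quarks in [("proton", proton), ("neutron", neutron)]:
    print(name, sum(quarks), sum(q * q for q in quarks))
# proton 1 1
# neutron 0 2/3
```

The ratio of the two squared sums (2/3 for the neutron vs. 1 for the proton) is one of the classic parton-model predictions.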
{ "domain": "physics.stackexchange", "id": 225, "tags": "particle-physics, experimental-physics, quarks" }
Compound mechanical advantage calculation confusion
Question: TL;DR: I can't figure out what the mechanical advantage (MA) is of the system shown below. Which, if any, of the calculations shown are correct? Update: stack exchange won't let me post 4 images so I'm combining them all and showing them at the bottom... I apologize for the need to scroll to see the images. Full story: I have recently started to use mechanical advantage, and for the most part it seems really straightforward. I was following a YouTube video on creating a 12:1 compound mechanical advantage system with rope and pulleys, but after I used the system I realized I hooked a few things up wrong. I drew the system on paper and searched the internet in an attempt to figure out what the mechanical advantage was, but came up quite short. I can't find a single diagram with the exact same setup. So, lacking an exact comparison, I searched on how to calculate mechanical advantage and found references to the T method, a method counting the ropes, and a method of counting the pulleys. Since counting the ropes and counting the pulleys seems to only apply to simple mechanical advantage, I am trying to use the T method. Every example I have found using the T method has one additional pulley that I don't have, so I'm not sure how to do it. Anyways, the actual system is shown in the first diagram below. If it matters, the anchor point is on the ceiling, and the load is on the floor. The red circles are pulleys, the black circles are connection points, the blue section is a prusik loop rope grab, and the arrow is the direction of force I apply to raise the load. Using the T method, I first calculated MA as shown in the second diagram, labeled 11:1. Afterwards, I googled 11:1 systems and haven't been able to find anything at all, so I started to doubt my calculations. After more searching, I found information indicating that when compound mechanical advantage is used, the two systems are supposed to be multiplied.
So, my system is really a simple 5:1 with a simple 2:1, as indicated in the third diagram, labeled 10:1. So with that, I set back to googling 10:1 systems, and found tons of examples, but again, none were exactly like mine and most had the 2:1 add-on in the exact opposite direction. So, I redrew the calculation differently and came up with the final diagram labeled 9:1. But once again I could not find another system labeled as 9:1 that looked like mine, and everything I could find had the final 3:1 in the exact opposite direction again. So, what is my actual mechanical advantage???? Answer: The answer and tension labels in your second diagram (the first with labels) are correct. The advantage is 11:1. The diagrams after that are nonsense.
{ "domain": "physics.stackexchange", "id": 57254, "tags": "classical-mechanics" }
Approximating an expression for a potential
Question: In a problem which I was doing, I came across an expression for the potential $V$ of a system as follows $$V = k\left(\frac{1}{l - x} + \frac{1}{l + x}\right)\tag{1}\label{1}$$ where $k$ is a constant, $l$ and $x$ are distances and $l \gg x$. Now I went to find an approximate expression $$V = k\left(\frac{(l + x) + (l - x)}{l^2 - x^2}\right)$$ and reasoned that since $l \gg x$, $l^2 - x^2 \approx l^2$, and thus $$V \approx \frac{2k}{l} \tag{2}\label{2}$$ but this turns out to be wrong, as the potential is expected to be that of a harmonic oscillator and thus proportional to $x^2$. The right way to approximate is \begin{align} V & = \frac{k}{l}\left(\frac{1}{1 - x / l} + \frac{1}{1 + x / l}\right) \\ & \approx \frac{k}{l}\left(\left(1 + \frac{x}{l} + \frac{x^2}{l^2}\right) + \left(1 - \frac{x}{l} + \frac{x^2}{l^2}\right)\right) \\ & \approx \frac{k}{l}\left(2 + \frac{2x^2}{l^2}\right) \end{align} Ignoring the constant $2$ as I'm concerned about the differences in the potential, I get $$ V \approx \frac{2kx^2}{l^3} \label{3}\tag{3}$$ which is correct. What mistake did I make in my approximation method? Answer: We have $$ x^2 \ll l^2 \implies \frac{2x^2}{l^2} \ll 2 \implies \frac{2x^2}{l^2} + 2 \approx 2 $$ So, $$ V \approx \frac{k}{l} \bigg(2 + \frac{2x^2}{l^2}\bigg) \approx 2\frac{k}{l} $$
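The expansion can be sanity-checked numerically (a sketch with arbitrary values of $k$ and $l$; any small $x/l$ behaves the same way):

```python
# Check that the x-dependent part of V matches 2*k*x**2 / l**3 for x << l.
# k, l, x values are arbitrary illustration choices.
k, l = 1.0, 1.0
x = 1e-3

def V(x):
    return k * (1.0 / (l - x) + 1.0 / (l + x))

exact = V(x) - V(0.0)           # subtract the constant 2k/l
approx = 2.0 * k * x**2 / l**3  # the harmonic-oscillator term
rel_err = abs(exact - approx) / exact
print(rel_err < 1e-5)  # True: the error is of relative order (x/l)**2
```

Shrinking $x$ by a factor of 10 shrinks the relative error by roughly a factor of 100, as expected from the next term of the series.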
{ "domain": "physics.stackexchange", "id": 60341, "tags": "homework-and-exercises, electromagnetism, electrostatics, error-analysis, approximations" }
How do i publish in a callback function?
Question: Good evening, my problem is that I want to subscribe and publish in the same node. I need to publish in a callback function, but the nodehandle is initialized in the main(). How can I publish a message in a callback function? Is it possible to use a pointer to a publisher? If yes, how could I do that? Please help! Originally posted by Gren8 on ROS Answers with karma: 11 on 2016-01-20 Post score: 0 Answer: This is easily possible and you should be able to find many examples of this in existing ROS packages. Some examples: node (inside class), nodelet. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2016-01-20 This answer was ACCEPTED on the original site Post score: 1
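A structural sketch of the usual pattern, in plain Python so it runs without ROS: create the publisher once (normally in main or the class constructor) and keep it reachable from the callback, e.g. as an attribute. In real code, `FakePublisher` would be a `rospy.Publisher` and the callback would be registered with `rospy.Subscriber(..., self.callback)`; the stand-in class here is mine, for illustration only.

```python
class FakePublisher:
    """Stand-in for a real publisher (illustration only)."""
    def __init__(self):
        self.sent = []

    def publish(self, msg):
        self.sent.append(msg)

class RelayNode:
    def __init__(self, publisher):
        self.pub = publisher  # created once, kept around for the callback

    def callback(self, msg):
        # publishing from inside a callback is fine: just use self.pub
        self.pub.publish(msg.upper())

pub = FakePublisher()
node = RelayNode(pub)
node.callback("hello")  # simulate an incoming message
print(pub.sent)  # ['HELLO']
```

The same idea works in C++ by storing the `ros::Publisher` as a class member or passing it to the callback via `boost::bind`/a lambda.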
{ "domain": "robotics.stackexchange", "id": 23502, "tags": "ros, callback, publisher, subscribe" }
Bungee balls vs. extension springs
Question: This is a mix between a DIY question and an applied physics question, but I figure the folks here are more interested in these details. My application is attaching a projection screen to a frame with tension to stretch the screen flat. One of the recommended products to do this is bungee balls. In my case, I am considering replacing the bungee balls with extension springs because I can affix them with a twist-tie on one side and it will be easier to attach/detach the screen on the other side. The specifications for extension springs are well detailed (e.g. spring rate measured in lbs/in), but I can't find any equivalent information on bungee balls. How can I find this information (my Google-fu failed)? Or is there a simple test I can do with a bungee ball to measure the spring rate? Answer: It is difficult to give a definitive answer. The spring constant refers to an ideal spring which obeys Hooke's Law (extension proportional to load). Real springs and especially elastic materials such as bungee cord do not necessarily follow this law. Usually they do initially for 'smallish' loads, but they depart from it gradually at large loads. What counts as 'large' differs between materials and structures (springs are structures rather than materials) and is very difficult to predict. The bungee cord is more likely than the springs to deviate from ideal behaviour. Since you have these already, you will be familiar with the way they extend. So as a tentative answer, as long as the load isn't 'too large' the test value of the spring constant should give a reliable comparison. Ultimately I think you can only be sure by getting hold of the springs and comparing performance in situ. Even if they are equally springy, you may discover some other reason to prefer one over the other.
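If you do run the simple test, hanging two known weights and measuring the extensions gives a secant estimate of the local spring rate; comparing estimates from a low-load and a high-load range shows how far the cord is from Hooke's law. A minimal sketch (the readings below are made up for illustration):

```python
def spring_rate(f1, x1, f2, x2):
    """Secant estimate of dF/dx between two (force, extension) readings."""
    return (f2 - f1) / (x2 - x1)

# hypothetical readings: force in lbs, extension in inches
k_low = spring_rate(1.0, 0.5, 2.0, 1.1)    # low-load range
k_high = spring_rate(4.0, 2.0, 5.0, 2.8)   # high-load range
print(round(k_low, 2), round(k_high, 2))   # 1.67 1.25
```

A large disagreement between the two estimates means the cord is noticeably non-linear over your working range, and a single "spring rate" number won't describe it well.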
{ "domain": "physics.stackexchange", "id": 30584, "tags": "spring, applied-physics" }
Number of states in NFA and DFA accepting strings from length 0 to n with alphabet Σ= {0,1}
Question: The question is in the title. Let me repeat: what is the number of states in an NFA and a DFA accepting strings of length 0 to n with alphabet $\Sigma = \{0,1\}$? I feel both the NFA and the DFA will take the following form: and thus both have $n+1$ states. Am I correct with this? Or does the DFA need to take the following form? I feel it's not necessary to have transitions defined for every symbol for every state, mainly because it does not eliminate the "deterministic" nature of the automaton. Am I right? Answer: As mentioned in the other answer, there are two common definitions of DFA. One definition requires the transition function to be total, and the other allows it to be partial. The former one is somewhat more common. Under the first definition, you need to include the additional state. Under the second definition, you don't have to. Using Myhill–Nerode theory, it is easy to show that $n+2$ is the minimum number of states in a (total) DFA accepting the words of length between $0$ and $n$. Indeed, consider the words $\epsilon, 0, 0^2, \dots, 0^{n+1}$. These $n+2$ words are pairwise distinguishable: if $0 \leq i < j \leq n+1$ then $0^j0^{n+1-j}$ doesn't belong to the language, but $0^i0^{n+1-j}$ does. Using the "fooling set" method, it is easy to show likewise that $n+1$ is the minimum number of states in an NFA accepting the same language. Here is a statement of the method: Let $L$ be a regular language. Suppose that there are $m$ pairs of words $x_i,y_i$ such that $x_iy_i \in L$ for all $i$, and for all $i \neq j$, either $x_iy_j \notin L$ or $x_jy_i \notin L$. Then every NFA for $L$ contains at least $m$ states. We choose $x_i = 0^i$ and $y_i = 0^{n-i}$ for $0 \leq i \leq n$, in total $n+1$ pairs. Clearly $x_i y_i = 0^n \in L$ (denoting our language by $L$), whereas for $i < j$, $x_j y_i = 0^{n-i+j} \notin L$.
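The $(n+2)$-state total DFA can be sketched directly (a toy construction of my own: states $0..n$ count the length read so far and are all accepting, plus one non-accepting dead state, matching the Myhill–Nerode bound above):

```python
def make_dfa(n):
    """Total DFA for all words of length 0..n over {0,1}: n+2 states."""
    dead = n + 1
    delta = {}
    for q in range(n + 1):
        for a in "01":
            delta[(q, a)] = q + 1 if q < n else dead
    for a in "01":
        delta[(dead, a)] = dead  # the dead state loops on every symbol
    accepting = set(range(n + 1))
    return delta, accepting, dead

def accepts(dfa, word):
    delta, accepting, _ = dfa
    q = 0
    for a in word:
        q = delta[(q, a)]
    return q in accepting

dfa = make_dfa(3)  # 5 states total: 0..3 accepting plus a dead state
print(accepts(dfa, ""), accepts(dfa, "010"), accepts(dfa, "0101"))  # True True False
```

Dropping the dead state gives the $(n+1)$-state partial DFA, which also serves as the minimal NFA.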
{ "domain": "cs.stackexchange", "id": 11452, "tags": "automata, finite-automata, nondeterminism" }
Can recent developments improve the total synthesis of B-12?
Question: The total synthesis of vitamin B-12 by Robert Burns Woodward and Albert Eschenmoser, is over 30 years old. At its time, it was considered a landmark in the field. With current developments (e.g. olefin metathesis, palladium catalysis, etc.) is it possible to optimize this synthesis? By "optimize" I mean reduce the number of steps in the synthesis. Answer: Almost certainly, yes. Applications of modern organic synthesis methods have drastically shortened the synthetic routes for such previously "daunting" targets such as strychnine see Vanderwal, tetracyclines Myers and pyrroloindole alkaloids Movassaghi, to name but a few - and recent efforts by Baran and White on C-H activation of alkanes have already yielded more efficient syntheses of many complex polycyclic terpenes. There is no reason to think that significant improvement could not occur for chlorophyll or B12. In fact, the absence of chlorophyll or Vitamin B12 from modern synthetic efforts represents a bit of a blind spot; it could well be that new chemistry could be discovered in the process. Although that 1,16 hydride shift by Eschenmoser will be hard to beat :-)
{ "domain": "chemistry.stackexchange", "id": 117, "tags": "organic-chemistry, synthesis" }
Other frequencies in a cavity
Question: This is a fairly basic question but is something that I've never properly understood. If you have a cavity with perfectly reflecting walls, I understand that there are obviously frequencies which generate standing waves, but I'm not sure I understand why they are the only frequencies which need to be considered. What happens to waves of other frequencies as they propagate in the cavity? The energy of the waves cannot just disappear, so by what mechanism are they suppressed? Answer: Let's consider a 1D cavity with one wall at $x=0$ and the other wall at $x=L$. We know we have the wave equation for the electric potential $\phi$, $\nabla^2 \phi - \frac{1}{c^2}\partial^2_t \phi = 0$. There would be a similar one for $\vec{A}$ in three dimensions. We additionally have the boundary condition that the potential must be zero on the boundary. The usual standing wave solutions are like $\sin(\pi n x /L) \sin(\pi n c t /L)$. Now given any other instantaneous potential that goes to zero at the boundary, we can find its time evolution by doing a Fourier decomposition to write it as a sum of plane waves. Since we know how each of these plane waves will evolve, we know the evolution of our initial potential profile by linearity. This applies just as well to the case of your suggestion where the initial potential profile is a localized wave packet with some frequency different from the standing mode frequency. What you will see is that the wave packet will spatially decohere as the different modes it is composed of oscillate at different rates. Eventually it will just look like a more or less random distribution of normal modes. Notice this discussion works just as well in 3D. Since the potential must be zero at the surface, you can prove that the $\nabla^2$ operator is Hermitian on the space of all instantaneous potential profiles satisfying the boundary condition.
Then it can be "diagonalized"; i.e., there exists a complete set of eigenfunctions, which each have their different eigenfrequencies. As before, if you start with a localized wave packet, you will see it decohere and turn into a random-looking superposition of eigenmodes. Now there is the question of energy. Energy ought to be conserved because the total energy should be the energy of an eigenmode times the square of the amplitude of the mode, summed over all modes. This does not care about the relative phases of the modes. As the wave-packet decoheres, the amplitudes of the modes remain constant even though the relative phases change. Since the amplitudes of each mode remains constant, the total energy will be constant.
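The mode-decomposition argument can be checked numerically. The sketch below (a toy 1-D setup with made-up packet parameters, not any particular cavity) expands a localized profile in the sine eigenmodes, evolves each mode's phase at its own frequency, and confirms that the energy-like sum of squared mode amplitudes is the same at every time:

```python
import math

# Cavity [0, L_cav]; each mode n oscillates at w_n = pi*n*c/L_cav.
L_cav, c = 1.0, 1.0
M = 400                      # spatial grid points
N = 64                       # number of modes kept
dx = L_cav / (M - 1)
xs = [i * dx for i in range(M)]

# a localized packet that vanishes at both walls (toy parameters)
phi0 = [math.exp(-(x - 0.3) ** 2 / 0.005) * math.sin(math.pi * x / L_cav)
        for x in xs]

def coeff(n):
    # sine-series coefficient c_n = (2/L) * integral phi0(x) sin(n pi x / L) dx
    return 2.0 / L_cav * sum(p * math.sin(n * math.pi * x / L_cav)
                             for p, x in zip(phi0, xs)) * dx

cs = [coeff(n) for n in range(1, N + 1)]

def energy_proxy(t):
    # each amplitude picks up a phase exp(-i w_n t); its modulus never changes
    total = 0.0
    for n, cn in enumerate(cs, start=1):
        phase = math.pi * n * c * t / L_cav
        amp = cn * complex(math.cos(phase), -math.sin(phase))
        total += abs(amp) ** 2
    return total

print(abs(energy_proxy(0.0) - energy_proxy(7.3)) < 1e-9)  # True
```

Plotting the reconstructed profile at later times would show the packet smearing out into a random-looking superposition, exactly as described, while this sum stays fixed.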
{ "domain": "physics.stackexchange", "id": 11965, "tags": "waves, resonance" }
difference between motifs, domains, patterns, signatures and profiles
Question: I can't get clear on the difference between those terms; I see them a lot while browsing on Prosite, Pfam, Expasy etc. I can find documentation about them, but it's still not clear what the difference is between those terms. I think it's something like: motifs: short residue stretch, not an indication of homology; domain: could contain motifs, however most of the time it is bigger, and it can be used to identify homology if two proteins share a domain. However, I can't fit patterns and profiles into this list. Answer: Looking at the documentation you provide, I think that I can provide definitions for those terms as defined by EXPASY. Note that those definitions may just be an in-house system that EXPASY uses that you can't generalize to other databases, for instance. First, in section I.C of the documentation, they write "The use of protein sequence patterns (or motifs) to determine the function(s) of proteins is becoming very rapidly one of the essential tools of sequence analysis." So it looks like for them patterns = motifs and motifs = patterns. Note again that "pattern" appears to be a somewhat informal term. I would certainly say that it is confusing that they use them as synonyms, but they do at least define them as such. Second, I think that they are using the word "profile" in a very narrow technical sense, with regard to the way that they detect homology between proteins. The standard way to detect homology between proteins is using programs called Hidden Markov Models (HMMs). In section II.B, they write "A profile or weight matrix (the two terms are used synonymously here)...". So a profile is really just a weight matrix derived from an HMM that can be applied to any given alignment to figure out whether or not it matches homology. They go on to give an example of how this weight matrix is applied.
In simple terms, how the HMM works is that you give it a bunch of sequences that you know belong to the same family, and it will try to learn the frequencies of amino acids at each position and what order they come in. This model of the amino acid frequencies is the weight matrix they talk about. Then you make sure that the HMM works well (gives high scores for proteins in the same family, low scores for proteins in different families), and then you can apply the program to try to find new homologs. Hope that answers the Q.
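As a toy illustration of a profile in the weight-matrix sense: one score per (position, residue), summed along a candidate sequence. Real profiles are derived from alignments by HMM software; the alphabet and numbers below are invented for illustration.

```python
# One dict per alignment position, mapping residue -> score.
weights = [
    {"A": 2, "C": -1, "G": -1, "T": -1},
    {"A": -1, "C": 2, "G": -1, "T": -1},
    {"A": -1, "C": -1, "G": 2, "T": -1},
]

def score(seq):
    # sum the per-position weight of each residue in the sequence
    return sum(w[res] for w, res in zip(weights, seq))

print(score("ACG"), score("TTT"))  # 6 -3
```

A candidate scoring above some threshold is then reported as matching the profile; the threshold is what tunes the trade-off between false positives and missed homologs.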
{ "domain": "biology.stackexchange", "id": 5686, "tags": "proteins, sequence-analysis, protein-interaction" }
Understanding Reinforcement Learning with Neural Net (Q-learning)
Question: I am trying to understand reinforcement learning and Markov decision processes (MDP) in the case where a neural net is being used as the function approximator. I'm having difficulty with the relationship between the MDP, where the environment is explored in a probabilistic manner, how this maps back to learning parameters, and how the final solution/policies are found. Am I correct to assume that in the case of Q-learning, the neural network essentially acts as a function approximator for the q-value itself so many steps in the future? How does this map to updating parameters via backpropagation or other methods? Also, once the network has learned how to predict the future reward, how does this fit in with the system in terms of actually making decisions? I am assuming that the final system would not probabilistically make state transitions. Thanks Answer: In Q-Learning, on every step you will use observations and rewards to update your Q-value function: $$ Q_{t+1}(s_t,a_t) = Q_t(s_t,a_t) + \alpha [R_{t+1}+ \gamma \underset{a'}{\max} Q_t(s_{t+1},a') - Q_t(s_t, a_t)] $$ You are correct in saying that the neural network is just a function approximation for the q-value function. In general, the approximation part is just a standard supervised learning problem. Your network uses (s,a) as input and the output is the q-value. As q-values are adjusted, you need to train the network on these new samples. Still, you will find some issues, as you are using correlated samples and SGD will suffer. If you are looking at the DQN paper, things are slightly different. In that case, what they are doing is putting samples in a vector (experience replay). To teach the network, they sample tuples from the vector, bootstrap using this information to obtain a new q-value that is taught to the network. When I say teaching, I mean adjusting the network parameters using stochastic gradient descent or your favourite optimisation approach.
By not teaching the samples in the order they are collected by the policy, they decorrelate them, and that helps the training. Lastly, in order to make a decision in state $ s $, you choose the action that provides the highest q-value: $$ a^*(s)= \underset{a}{argmax} \space Q(s,a) $$ If your Q-value function has been learnt completely and the environment is stationary, it is fine to be greedy at this point. However, while learning, you are expected to explore. There are several approaches, with $\varepsilon$-greedy being one of the easiest and most common.
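The update rule and the greedy decision rule above can be made concrete with a minimal tabular sketch (the 5-state chain MDP and all constants are my own toy choices; with a neural network, the table lookup becomes a forward pass and the update a gradient step on the same target):

```python
import random

# Chain of states 0..4, actions move left/right, reward 1 at the right end.
N_STATES, ACTIONS = 5, (-1, +1)
alpha, gamma, eps = 0.5, 0.9, 0.5

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy behaviour policy: explore with probability eps
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # the Q-learning update from the answer
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# greedy decision rule: a*(s) = argmax_a Q(s, a)
greedy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(greedy)  # the learned policy moves right in every state
```

After training, $Q(s, +1) > Q(s, -1)$ everywhere, so the greedy policy heads straight for the reward even though the behaviour policy kept exploring.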
{ "domain": "datascience.stackexchange", "id": 685, "tags": "machine-learning, neural-network, q-learning" }
Assumption behind Gibbs energy and maximum work
Question: I'm somewhat confused by this derivation (in Schroeder's Book of Thermal Physics) of the fact that in a closed system, the change in the Gibbs energy is always less than the non-expansion work done on the system. If we take the change in the entropy of the universe (system + ideal surroundings maintained at constant temperature and pressure, also assuming that reactions take place in the system only, i.e. the composition of the various species present in the surroundings remains constant), then it turns out (in this derivation) that it equals the negative of the change in Gibbs energy divided by the temperature of the system (which is the same as that of the surroundings). Now, since the entropy of the universe always increases, it turns out that the Gibbs energy of the system will always decrease. But this is only true in the absence of non-expansion work. However, nowhere in this derivation is it mentioned that no non-expansion work is assumed, yet the result only holds true in the absence of non-expansion work. I think that maybe it was assumed somewhere and I'm not able to spot where. Please help me clear up my doubt. Answer: The whole point of Gibbs free energy is to quantify a bound on possible non-expansion work at constant p and T. That non-expansion work is implicit as part of the term "dU" which in the reversible case can be written as $$dU = -pdV + TdS + dw_{\text{non-exp,rev}}$$ The $-pdV$ term is the expansion work if the process is carried out reversibly. You remove that contribution by adding a $pdV$ term: $$dU + pdV= -pdV + TdS + dw_{\text{non-exp,rev}} + pdV = TdS + dw_{\text{non-exp,rev}}$$ Finally if you subtract the entropy term you are left with the maximum possible non-expansion work (obtained when the process is performed in a reversible fashion): $$dU + pdV - TdS= dw_{\text{non-exp,rev}}$$
{ "domain": "chemistry.stackexchange", "id": 13757, "tags": "thermodynamics, entropy, free-energy" }
Tic Tac Toe game in Python - Beginner
Question: I'm a beginner to python and as part of my course I'm instructed to create a simple tic tac toe game. I'd be very appreciative and interested of any insight, criticism, instructions, best practices or code readability and tips. Are my comments descriptive, repetitive and useful enough? Any inputs and tips would be greatly appreciated. Here's my code

import random

# gives the appearance of rewriting the board on the screen
clear = lambda: print('\n' * 20)

# will be used as the game board; to hold the x's and o's and determine a win etc.
# the '#' is at index 0 so that players can input 1 to use the first position instead of 0
test_board = ['#', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ']

# prints test_board with lines so that it appears like a real board
def display_board(board):
    print(board[1] + '|' + board[2] + '|' + board[3])
    print('-|-|-')
    print(board[4] + '|' + board[5] + '|' + board[6])
    print('-|-|-')
    print(board[7] + '|' + board[8] + '|' + board[9])

# assigns x to player1 and o to player2
def player_input():
    marker = ''
    while not (marker == 'x' or marker == 'o'):  # loops until the player input is an x or o
        marker = input('player1 please choose a marker. "x" or "o"')
    if marker == 'x':
        return ('x', 'o')  # assign x to player1 and o to player2 with tuple unpacking
    else:
        return ('o', 'x')  # assign o to player1 and x to player2 with tuple unpacking

# converts a player's integer input and assigns either x or o to that index of the board
def place_marker(board, marker, position):
    board[position] = marker

# checks the board for a win
def win_check(board, mark):
    return ((board[1] == mark and board[2] == mark and board[3] == mark) or  # across top
            (board[4] == mark and board[5] == mark and board[6] == mark) or  # across middle
            (board[7] == mark and board[8] == mark and board[9] == mark) or  # across bottom
            (board[1] == mark and board[4] == mark and board[7] == mark) or  # down left
            (board[2] == mark and board[5] == mark and board[8] == mark) or  # down middle
            (board[3] == mark and board[6] == mark and board[9] == mark) or  # down right
            (board[1] == mark and board[5] == mark and board[9] == mark) or  # diagonal upper right to lower left
            (board[3] == mark and board[5] == mark and board[7] == mark))    # diagonal upper left to lower right

# chooses a player to go first
def choose_first():
    random_int = random.randint(1, 10)
    if random_int % 2 == 0:
        print('player1 may go first')
        return True
    else:
        print('player2 may go first')
        return False

# when a player gives an index position, checks that it is not already taken
def space_check(board, position):
    return board[position] == ' '

# checks if the board is full
def full_board_check(board):
    for x in range(1, 10):
        if space_check(board, x):
            return False  # that position is ' ' based on space_check, i.e. empty
    return True  # no position is ' ' based on space_check, i.e. full

# accepts the player's position for where he wants to go
def player_choice(board):
    while True:
        a = input('please input an integer 1-9')  # will be the index position of test_board
        try:
            a = int(a)  # must convert to int to be used in test_board indexing
        except ValueError:  # ensures that the player's input is an integer
            clear()
            print('\n please type a number')
            print('_____________________')
            display_board(board)
            continue
        if a >= 1 and a <= 9:  # there are only nine positions, so the input must be between 1 and 9
            if space_check(board, a):
                return a  # return a (the player's input) to be used in test_board
        elif a < 1 or a > 9:
            print('please input an integer from 1-9')
        else:
            clear()
            print('\n that space isn\'t available')
            print('____________________________')
            display_board(board)
            continue

# when the game is over, replay the game
def replay():
    replay = input('press any key to play again. type "q" to quit')
    while True:
        if 'q' in replay:
            return False
            break
        else:
            return True
            break

while True:
    # set the game up: empty the board at the beginning of the game
    test_board = ['#', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ']
    # randomly select a player to go first
    if choose_first():
        player1, player2 = player_input()
    else:
        player2, player1 = player_input()
    game_on = True
    # players take turns
    while game_on:
        # player1's turn
        display_board(test_board)
        print('%s may go' % player1)
        place_marker(test_board, player1, player_choice(test_board))  # assigns the player's input to test_board
        if win_check(test_board, player1):
            display_board(test_board)
            print('%s has won! \n game over' % player1)
            break  # breaks out of the local loop and jumps to the next loop (if not replay())
        elif full_board_check(test_board):
            display_board(test_board)
            print('board is full')
            break
        # player2's turn
        print('%s may go' % player2)
        display_board(test_board)
        place_marker(test_board, player2, player_choice(test_board))
        if win_check(test_board, player2):
            display_board(test_board)
            print('%s has won! \n game over' % player2)
        elif full_board_check(test_board):
            display_board(test_board)
            print('board is full')
            break
    if not replay():
        break

Answer: use a platform-independent clear function

from os import system, name

def clear():
    if name == 'nt':
        system('cls')
    else:
        system('clear')

This new function clear is platform-independent and slightly more elegant than the approach you used.

Notes on Comments

Comments are good but a necessary evil. Since you highlighted that you are doing a course, it might be a requirement to include "descriptive comments", but you won't be doing that course all your life. Comments should be avoided as much as possible. Use a comment to explain intent (why something was done), not how it was done. If the thought of including a comment crosses your mind, ask yourself, "How would I express myself in code?"

Consider using a 2D list to represent the board

Your internal representation of the board is a 1D list. This is fine, but I would advise you to mirror a real-life application; a tic-tac-toe board is mostly 2D or 3D, and a 2D representation would be

board = [[' ' for _ in range(3)] for _ in range(3)]

Accessing this is also easy with just an index

index = 5  # an example
row = index // len(board)
col = index % len(board)
board[row][col] = marker

Improvement on names and structure

Right now, your code is messy and hard to read; win_check is a little confusing. A nicer structure might be something like this

def check_victory():
    if row_victory():
        return True
    if column_victory():
        return True
    if diagonal_victory():
        return True
    return False

From the look of this, it is quite easy to follow what is going on here.
row_victory, column_victory, diagonal_victory might be implemented as follows

def row_victory():
    for i in range(len(board)):
        if (board[i][0] != ' ' and board[i][0] == board[i][1] == board[i][2]):
            return True
    return False

def column_victory():
    for i in range(len(board)):
        if (board[0][i] != ' ' and board[0][i] == board[1][i] == board[2][i]):
            return True
    return False

def diagonal_victory():
    if (board[0][0] != ' ' and board[0][0] == board[1][1] == board[2][2]):
        return True
    if (board[0][2] != ' ' and board[0][2] == board[1][1] == board[2][0]):
        return True
    return False

This would mean you need to keep a current_player variable; this should seem easy for you. When any of the above functions returns true, the current player is the winner.

Consider renaming full_board_check to stalemate.

Names like a should be avoided; a better alternative is user_input.

Consider using a class

Using a class would really improve the structure and readability of your code.
{ "domain": "codereview.stackexchange", "id": 40069, "tags": "python, beginner, game, tic-tac-toe" }
Spinor quantization: contradiction between covariant anticommutator and canonical rules?
Question: Starting from the free lagrangian $$\mathscr L = \bar\Psi(i\displaystyle{\not}\partial - m)\Psi$$ I compute the canonical momenta $$\Pi =\frac{\partial \mathscr L}{\partial\dot{\Psi}}=i\Psi^\dagger \qquad \bar\Pi =\frac{\partial \mathscr L}{\partial\dot{\bar\Psi}}= 0$$ and then I can perform canonincal quantization imposing the canonical anticommutation rules (equal time anticommutator) $$\left\{\Psi_\alpha(x^0,\vec{x}),\Pi_\beta(x^0,\vec{y})\right\} = i\left\{\Psi_\alpha(x^0,\vec{x}),\Psi_\beta^\dagger(x^0,\vec{y})\right\} \equiv i\delta_{\alpha\beta}\delta^3\left(\vec{x}-\vec{y}\right) \tag1$$ $\alpha,\beta = 1,2,3,4$ are spinor indices. I know the covariant anticommutator ($x$ and $y$ are 4-vectors) $$\left\{\Psi_\alpha(x),\bar\Psi_\beta(y)\right\} = iS_{\alpha\beta}(x-y)=i(\displaystyle{\not}\partial+m)_{\alpha\beta}\Delta(x-y) \tag2$$ A possible definition of the $\Delta$ function is $$\Delta(x) = -\frac{i}{2}\int\frac{d^3\vec{k}}{(2\pi)^3}\frac{e^{-ikx}-e^{ikx}}{\omega_k}= -\int\frac{d^3\vec{k}}{(2\pi)^3}\frac{\sin(kx)}{\omega_k}\qquad \text{where} \quad \omega_k = \sqrt{\left|\vec{k}\right|^2+m^2}$$ Now I can rewrite (2) as $$\left\{\Psi_\alpha(x),\Psi^\dagger_\sigma(y)(\gamma_0)^\sigma_{\ \ \beta}\right\} = \left\{\Psi_\alpha(x),\Psi^\dagger_\sigma(y)\right\}(\gamma_0)^\sigma_{\ \ \beta} \tag3$$ If I take equal time anticommutator in (2), I get $$\left\{\Psi_\alpha(x^0,\vec{x}),\bar\Psi_\beta(x^0,\vec{y})\right\} = i(\displaystyle{\not}\partial+m)_{\alpha\beta}\Delta(0,\vec{x}-\vec{y}) = 0$$ because $$\Delta(0,\vec{x}) = -\int\frac{d^3\vec{k}}{(2\pi)^3}\frac{\sin(-\vec{k}\cdot\vec{x})}{\omega_k} = 0 \quad \text{integral of an odd function}$$ But looking at (3) one might conclude $$0 = \left\{\Psi_\alpha(x^0,\vec{x}),\bar\Psi_\beta(x^0,\vec{y})\right\}=\left\{\Psi_\alpha(x^0,\vec{x}),\Psi^\dagger_\sigma(x^0,\vec{y})\right\}(\gamma_0)^\sigma_{\ \ \beta} \tag4$$ Which contradict the canonical anticommutation rules (1). 
For sure I made a mistake, but I can't find it! Answer: Even though $\Delta(0,\boldsymbol x)$ is given by an integral of an odd function, the integral diverges for $\boldsymbol x=\boldsymbol 0$. The correct statement is $$ \Delta(0,\boldsymbol x)=0 $$ and $$ \dot\Delta(0,\boldsymbol x)=\delta(\boldsymbol x) $$ (up to a phase, depending on conventions). This is consistent with the canonical algebra.
{ "domain": "physics.stackexchange", "id": 38755, "tags": "homework-and-exercises, quantum-field-theory, fermions, spinors, second-quantization" }
Giving a finite collection of infinite words "complex" enough with respect to automata measure
Question: We consider acceptance by Büchi automata. Let $X = \{0,1\}$ and $X^{\mathbb N}$ the set of all infinite sequences. Then for each $n$ do we have a finite collection $\{ \xi_1, \xi_2, \ldots, \xi_k \}$ of infinite words, such that for every other infinite word $\eta$ there exists some $\xi_i$ such that every Büchi automaton $\mathcal A$ with $$ |L(\mathcal A) \cap \{\xi_i, \eta\}| = 1 $$ (i.e. the automaton separates the two words: accepts one, but not the other) must have at least $n$ states? Observation. One necessary condition on the collection of infinite words $\{\xi_1, \ldots, \xi_k\}$: for a given $n$ we must have at least $2^{n-2}$ of them ($k \ge 2^{n-2}$), i.e. one for each prefix of length $n-2$, for otherwise if some prefix $u \in X^{n-2}$ is not among the finite collection, then we can separate $u1^{\omega}$ easily by an automaton having fewer than $n$ states from every word in $\xi_1, \ldots, \xi_k$: simply read up to the first position $i \le n-2$ where it differs from a given $\xi_j$ and then reject or accept according to the symbol there, which could be achieved by a Büchi automaton with $i+2$ states. But besides this observation, I do not see whether this is possible. Motivation: If we define $$ d(\xi, \eta) = 1/2^n $$ where $n := \min\{ |\mathcal A| \mid |L(\mathcal A) \cap \{\xi,\eta\}| = 1 \}$ for $\xi \ne \eta$, and $d(\xi,\xi) := 0$, then this gives a metric, and the above question asks whether the resulting metric space is totally bounded. Answer: It is possible. Consider all languages $L_1,\dots,L_m$ recognizable by automata with at most $n$ states. Now to each $x=x_1\dots x_m\in\{0,1\}^m$ we associate a (possibly empty) language $$L_x=\bigcap_{1\leq i\leq m} L_i^{x_i},$$ where $L^0$ is the complement of $L$ and $L^1=L$. Let $Y=\{x\in\{0,1\}^m~|~ L_x\neq\emptyset\}$. For each $x\in Y$, we choose $\xi_x\in L_x$. This gives a finite family $\{\xi_x~|~x\in Y\}$; we show that this family satisfies the wanted property. 
Take any $\eta\in X^{\mathbb N}$, and for each $i\leq m$ let $y_i$ be defined as $0$ if $\eta\notin L_i$ and $1$ if $\eta\in L_i$. We have $y=y_1y_2\dots y_m\in Y$, since nonemptiness of $L_y$ is witnessed by $\eta$. We show that $\xi_{y}$ satisfies the condition. Indeed, for any $L$ recognized by an automaton with at most $n$ states, $L$ does not separate $\eta$ from $\xi_y$, by definition of $\xi_y$. Notice that this family is optimal in size, since if some signature $x\in Y$ is not represented by some $\xi_x$, the property fails. Its size is at most doubly exponential in $n$.
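The combinatorial core of the answer (pick one representative per nonempty Boolean combination; any $\eta$ then shares all memberships with its representative) can be illustrated with a small finite toy model. The sketch below is my own illustration, not part of the answer: an abstract finite universe stands in for $X^{\mathbb N}$ and arbitrary subsets stand in for the languages $L_1,\dots,L_m$, but the signature map and the representative family play exactly the roles they play in the proof.

```python
# Toy universe standing in for the infinite words, and arbitrary subsets
# standing in for the languages recognizable with at most n states.
universe = list(range(20))
languages = [
    {w for w in universe if w % 2 == 0},   # "L1"
    {w for w in universe if w < 10},       # "L2"
    {w for w in universe if w % 3 == 0},   # "L3"
]

def signature(w):
    """Membership pattern of w across all languages (the vector x)."""
    return tuple(int(w in L) for L in languages)

# Choose one representative xi_x per nonempty intersection L_x.
representatives = {}
for w in universe:
    representatives.setdefault(signature(w), w)

# Any eta has the same signature as its representative, so no single
# language L_i separates them: |L_i ∩ {xi, eta}| is 0 or 2, never 1.
for eta in universe:
    xi = representatives[signature(eta)]
    assert all((eta in L) == (xi in L) for L in languages)

print(len(representatives))   # at most 2**len(languages) representatives
```

This also makes the size bound visible: the family can never exceed $2^m$ members, matching the doubly exponential bound stated at the end.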
{ "domain": "cstheory.stackexchange", "id": 4018, "tags": "fl.formal-languages, automata-theory" }
Should carbon tetrachloride really be considered an organic compound?
Question: Wikipedia says: Carbon tetrachloride […] is an organic compound with the chemical formula $\ce{CCl4}$. From an organic point of view, carbon tetrachloride can be called the tetrachloro derivative of methane and is described in many organic textbooks along with $\ce{CH3Cl}$, $\ce{CH2Cl2}$ and $\ce{CHCl3}$. From an inorganic point of view, carbon tetrachloride can be considered a chloride of a group 14 element (along with $\ce{SiCl4}$, $\ce{GeCl4}$). Shouldn’t carbon tetrachloride be considered an inorganic compound? It neither contains any $\ce{C-H}$ bond nor is it found in any living organism. Tetrachloromethane is its organic name. But it is generally not called by this name. Carbon tetrachloride is its more common name, and inorganic compounds are named like that. Answer: There are a number of questions here on Stack Exchange discussing the distinction between organic and inorganic compounds, such as: What is the definition of organic compounds? What does 'organic/non-organic molecule' means exactly? (Check out ron’s answer there for different contemporary definitions of organic compounds.) What is the first organic compound to be discovered ? Urea or alloxan? (Check out my answer there for one of the oldest definitions.) The distinction between organic and inorganic chemistry is one that is upheld mainly for traditional reasons. It is also upheld because some more specialised fields are often well included into one of the two traditional definitions: solid state chemistry is usually inorganic while natural product synthesis is almost entirely organic chemistry by all applicable definitions (save the very old historical ones). But remember that there are a number of fields that straddle the boundary. Coordination chemistry often involves organic and inorganic ligands around an inorganic metal centre. It depends on the research being performed whether the group wishes to be considered organic or inorganic. 
See for example the groups of Professor Klüfers and Professor Knochel at the LMU Munich: The former classifies himself as an inorganic chemist and investigates carbohydrate-metal complexes among other things; the latter classifies himself as an organic chemist and investigates organometallic reagents and reactions. So in a way, the organic/inorganic distinction compares well to the border between Europe and Asia. Everyone will agree that France is in Europe and China is in Asia but what about Georgia and Turkey? Note that your dualism falls short. $\ce{SiHCl3}$, trichlorosilane or silicochloroform, is produced on an industrial scale, and trichlorogermane $\ce{GeHCl3}$ is also known. On the silicon side of things, apart from tri- and tetrachlorosilane the di- and monochlorosilanes $\ce{SiH2Cl2}$ and $\ce{SiH3Cl}$ are known just like their carbon counterparts. And of course, silane $\ce{SiH4}$ is known, too. (I am not sure exactly how much is known about the corresponding germanium compounds. The English Wikipedia didn’t have entries for them but didn’t contain trichlorogermane either, which was only present in German and Russian.) So beware when drawing analogies. They could go in a different direction than you may think. Speaking of surprising facts: Did you know that chloromethane is the most-produced secondary metabolite by mass? So let’s add chlorosilane into the organic realm by analogy! Finally, as a side note: I actually use tetrachloromethane much more than I would carbon tetrachloride; that goes for both languages in which I do chemistry.
{ "domain": "chemistry.stackexchange", "id": 5456, "tags": "organic-chemistry, inorganic-chemistry, halides" }
How can we use a transformer model with new data if we still don't have the output?
Question: Transformer models are trained using inputs and outputs. They are both embedded and encoded and used to train multi-head attention mechanisms... But how can we use a transformer model to predict new data? We won't have any "output" to feed the model yet. For example, you use English and Spanish text to create a dictionary. But when you want to translate new English text you don't know the translation yet. Answer: Note: this answer assumes that the question is about how to use the Transformer model at inference if there is no output to use. At training time, we have the expected output of the model, so there is no problem, because on the decoder input we use the expected output prefixed with a special <bos> (beginning of sentence) token. At inference time, we have no output data. Here, we decode one token at a time: first, we pass as decoder input just the <bos> token, and the model generates a single output, which is the prediction for the first token of the output sequence; let's call it $P_1$. Then, we concatenate this predicted token with the sequence used previously as input to the decoder (i.e. [<bos>]), obtaining [<bos>, $P_1$], and we use it as new input to the decoder. In this second decoding step, the decoder generates 2 predictions: [$P_1$, $P_2$]. We take $P_2$ and concatenate it to the previous input, obtaining [<bos>, $P_1$, $P_2$]. We repeat the procedure until we obtain as prediction the <eos> (end of sequence) token, which marks the end of the predicted sequence.
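The decoding loop the answer describes can be sketched as follows. This is an illustrative mock-up rather than real Transformer code: toy_decoder_step is a hypothetical stand-in for the trained model that simply spells out a fixed target sequence, but the loop itself (feed the growing prefix, keep only the last prediction, stop at <eos>) is exactly the procedure described.

```python
BOS, EOS = "<bos>", "<eos>"

def toy_decoder_step(decoder_input):
    """Stand-in for the trained decoder: returns one prediction per input
    position. A real model would attend to the encoder output here."""
    target = ["hola", "mundo", EOS]
    return target[:len(decoder_input)]

def greedy_decode(max_len=10):
    decoder_input = [BOS]                     # start with just <bos>
    for _ in range(max_len):
        predictions = toy_decoder_step(decoder_input)
        next_token = predictions[-1]          # P_1, then P_2, ...
        decoder_input.append(next_token)      # grow the decoder input
        if next_token == EOS:                 # <eos> ends the sequence
            break
    return decoder_input[1:]                  # drop the <bos> prefix

print(greedy_decode())   # ['hola', 'mundo', '<eos>']
```

This greedy loop is the simplest variant; in practice beam search keeps several candidate prefixes at each step instead of just the single best one.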
{ "domain": "datascience.stackexchange", "id": 11974, "tags": "transformer, attention-mechanism" }
Is it possible to increase the maximum speed of a simulated equivalent of a real-life robot in Gazebo?
Question: Is it possible to increase the maximum speed of a simulated equivalent of a real-life robot in Gazebo? For instance, a real-life turtlebot3 has a max speed of 0.22m/s. The simulated turtlebot3 is set to have a max speed of 0.22m/s; is there a way to modify some files to increase the speed of the simulated turtlebot3? Edit: I modified the turtlebot_drive.h script, and the DWA_localplanner yaml file. Not sure which one did it but it worked! Originally posted by distro on ROS Answers with karma: 167 on 2022-08-23 Post score: 0 Answer: You can try to modify these values in the turtlebot3_simulations header turtlebot3_drive.h: https://github.com/ROBOTIS-GIT/turtlebot3_simulations/blob/ac7298dfa215d3f05580520d6a938c3dabb53bad/turtlebot3_gazebo/include/turtlebot3_gazebo/turtlebot3_drive.h#L35 It is used in the control loop in the source file: https://github.com/ROBOTIS-GIT/turtlebot3_simulations/blob/ac7298dfa215d3f05580520d6a938c3dabb53bad/turtlebot3_gazebo/src/turtlebot3_drive.cpp#L100 Originally posted by ljaniec with karma: 3064 on 2022-08-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by distro on 2022-08-23: @ljaniec Have you tried it yourself? Comment by ljaniec on 2022-08-23: No, because I have a real TB in the lab, so it would be irresponsible - I want to have the same setup IRL and in simulation. If you use the standard simulation in Gazebo with Turtlebot and teleop connected, you can check with the keyboard that you can accelerate from TB to higher speeds. There is no filter/damping on the speed input in the simulated TB robot. This way you can avoid changing the source code. Comment by distro on 2022-08-23: @ljaniec do you happen to use turtlebot2's by any chance? I posted another question about that Comment by ljaniec on 2022-08-24: Yes, you can link it. 
I modded it to ROS2 though but maybe I can check it anyway Comment by distro on 2022-08-24: @ljaniec This is it here Comment by distro on 2022-08-28: @ljaniec I git cloned the turtlebot3_simulations to my workspace and modified turtlebot3_drive.h #define LINEAR_VELOCITY 0.44 //0.3 #define ANGULAR_VELOCITY 2.2 //1.5 It did not seem to work. When I teleop, the linear velocity still remains at 0.22m/s. Comment by ljaniec on 2022-08-29: If you check it with turtlebot3_teleop_key, you can read here: https://github.com/ROBOTIS-GIT/turtlebot3/issues/897 that it has another set of "limits" (e.g. https://github.com/ROBOTIS-GIT/turtlebot3/blob/9867e2f1f5c9ed54d2b41998a77a9c8eb42ebafd/turtlebot3_teleop/nodes/turtlebot3_teleop_key#L38 and https://github.com/ROBOTIS-GIT/turtlebot3/blob/9867e2f1f5c9ed54d2b41998a77a9c8eb42ebafd/turtlebot3_teleop/nodes/turtlebot3_teleop_key#L184) You should check now two things: a modification of the turtlebot3_teleop_key package to your needs with manual tests with teleop_key a modification of the turtlebot3_simulations package with test with simple TestCmdVelPublisher node (for moving the TB2 in Gazebo) Comment by distro on 2022-08-31: @ljaniec sorry I had though I replied to this. I also wish for the max velocity to be higher when I use 2D Nav Goal, so when I'm not manually driving the robot. Do you know what files I might need to change for that? Comment by ljaniec on 2022-09-05: In general, you should review the code of the PR2 controller used in the TB2 URDF model - maybe somewhere here you can see if some ifs there have any speed restrictions specified/implemented Comment by distro on 2022-09-05: @ljaniec Would you mind leaving a link? Not sure which codes you are talking about. Comment by ljaniec on 2022-09-07: https://github.com/ROBOTIS-GIT/turtlebot3/blob/melodic-devel/turtlebot3_navigation/param/base_local_planner_params.yaml - YAML with parameters (tuning guide is here) used in the XML launch file. 
My previous comment was unclear, sorry. I didn't find any other places where you can modify these speed parameters (links here + mentioned above in other comments). Other links: https://github.com/ROBOTIS-GIT/turtlebot3/issues/615 https://github.com/ROBOTIS-GIT/turtlebot3/issues/451 Comment by distro on 2022-09-10: @ljaniec I modified the turtlebot_drive.h, and the DWA_planner yaml file. Not sure which one did it but it worked! I need help with more turtlebot stuff if you have the time, here and here Comment by ljaniec on 2022-09-11: You are welcome! You can upvote and accept this as an answer - please, describe your modification in the edit of the question for future readers; you can see that this topic of speed modification comes up quite often. I think you should share your knowledge and experience :) Comment by distro on 2022-09-20: @ljaniec please take a look at this other problem I am having with turtlebot2 recently. here
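A small sketch of the point raised in the comments: the drive node and the teleop script each carry their own speed caps, so a requested velocity is clamped by whichever limit applies, and raising one constant may have no visible effect if another cap still bites. The constants below mirror the values mentioned above (0.44 and 2.2 after the edit; stock values 0.22 and 1.5); the clamping helper itself is hypothetical, for illustration only.

```python
# Hypothetical mirror of the LINEAR_VELOCITY / ANGULAR_VELOCITY defines in
# turtlebot3_drive.h after the edit described above (stock: 0.22, 1.5).
LINEAR_VELOCITY = 0.44
ANGULAR_VELOCITY = 2.2

def clamp_cmd_vel(linear, angular):
    """Cap a requested (linear, angular) command the way a max-speed
    limit in a drive node or teleop script would."""
    return (max(-LINEAR_VELOCITY, min(LINEAR_VELOCITY, linear)),
            max(-ANGULAR_VELOCITY, min(ANGULAR_VELOCITY, angular)))

print(clamp_cmd_vel(1.0, 0.0))   # (0.44, 0.0): request capped by the limit
print(clamp_cmd_vel(0.3, 0.5))   # (0.3, 0.5): within limits, passed through
```

If both turtlebot3_drive.h and the teleop/planner limits implement something like this, only raising all of them lets the higher command through, which matches the questioner's experience that editing a single file was not enough.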
{ "domain": "robotics.stackexchange", "id": 37935, "tags": "gazebo, ros-melodic" }
Why is HIV associated with weight loss/being underweight?
Question: It seems that people with HIV are often underweight, is there something (or multiple things) which cause HIV sufferers to lose weight? What is happening within the body as a direct effect of HIV that will lead to increased weight loss? Answer: Here is a brief dissection of the causes of weight loss associated with HIV which are listed by AidsMap.com. Most were quite obvious (disease and side-effects of medicine often cause appetite loss, diarrhea, vomiting etc. - kind of trivial components of the answer) but the first two were more interesting (and new to me). HIV can increase the rate at which the body uses nutrients (increased metabolism) It seems that, at least in the early phases of HIV, the metabolic rate is higher. In one study, the resting energy expenditure of HIV patients was ~8% higher, and fat oxidation was increased. This will result in more calories being burned on a daily basis, and, to put it in perspective, ~8% of a male's recommended daily allowance of calories is 200 calories, or about one 40g bar of chocolate. Patients will have to consume more calories just to meet their basic daily needs. HIV can alter the lining of the gut, making it harder to absorb nutrients (malabsorption) It seems the HIV virus is associated with inflammation in the gut, causing damage to the intestinal lining. The intestine is a major area where we absorb nutrients from our food; if efficiency is reduced by damage and inflammation, then fewer calories will be absorbed. Patients will have to eat more food to get the same amount of calories into their system. other gut infections can cause malabsorption and/or diarrhoea Many illnesses that an HIV patient will contract will cause these problems. you may eat less than you used to (and need to) because of loss of appetite during ill health Simple one, less energy in = less energy to absorb; weight loss is all about calorie intake vs. calorie usage. 
specific conditions may make it harder to eat, such as mouth and throat infections Obviously some infections, such as herpes and canker sores, will make it unpleasant to eat. HIV leaves patients more prone to such problems so they will more frequently avoid eating because of the discomfort and pain. some drugs may suppress your appetite or cause side-effects that put you off food, such as nausea, vomiting, indigestion or altered taste. A lot of drugs, not just those that fight HIV, cause side-effects on appetite, vomiting, etc. Here is a list of various HIV/AIDS drugs and their common side effects. A quick eyeballing of that table suggests around 20 of 28 (~71%) are associated with some kind of vomiting or diarrhea.
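To put the metabolism point in numbers, here is the back-of-the-envelope arithmetic behind the ~200-calorie figure, plus a rough estimate of how fast such a deficit accumulates. The 2500 kcal allowance is implied by the answer's own arithmetic; the ~7700 kcal per kilogram of body fat is a common rule of thumb and is my assumption, not a figure from the answer.

```python
rda_calories = 2500          # implied adult male daily allowance (kcal)
metabolic_increase = 0.08    # ~8% higher resting energy expenditure

extra_daily = rda_calories * metabolic_increase
print(extra_daily)           # 200.0 kcal/day, about one 40 g chocolate bar

# Assumed rule of thumb: ~7700 kcal deficit corresponds to 1 kg of body fat.
days_per_kg_lost = 7700 / extra_daily
print(days_per_kg_lost)      # 38.5 days per kilogram if intake never adjusts
```

Even this single factor, left uncompensated, would cost on the order of a kilogram per month, before malabsorption and appetite loss compound it.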
{ "domain": "biology.stackexchange", "id": 5194, "tags": "hiv" }
"Outrunning pressure": Liquid in free-fall
Question: I've checked out related questions and didn't see an answer to my question. Scenario: Liquid in open-top container, together accelerating downward with magnitude $g$. It was straightforward for me to surmise that $\nabla P = \rho (\vec{g} - \vec{a}) = \vec{0} \Rightarrow \frac{\partial P}{\partial y} = 0 \Rightarrow P(y) = \text{constant} $, so the pressure is uniform in the vertical (and all other) direction. No problem. But what is the value of that uniform pressure? Since the container top is open to the atmosphere, does that guarantee the liquid's pressure is $P_{atm}$? Or, since the liquid is fleeing the atmosphere at exactly the acceleration that pulls the atmosphere down and generates atmospheric pressure, does that imply that a liquid in free-fall is free of atmospheric pressure? (I understand this would mean the liquid would vaporize, but perhaps that is beside the point.) It's strange to me to conclude that $ P(y) = P_{atm} $, because I believe an object moving through air experiences a higher pressure on the leading surface and a lower pressure on the trailing surface. So shouldn't this liquid experience a lower pressure on its trailing (top) surface than if it were at rest? (And thus a lower pressure throughout, since its leading/bottom surface is shielded from the air by the container.) Question: What is the value of the uniform pressure experienced by a liquid in free-fall (specifically in an open-top container)? Answer: It's strange to me to conclude that $P(y)= P_{\text{atm}}$ because I believe an object moving through air experiences a higher pressure on the leading surface and a lower pressure on the trailing surface I believe you are conflating acceleration with velocity. Consider when you drop a bucket of water from a height. If you release the bucket without giving it any initial velocity, then at the moment of release, it is accelerating downward at $-g$ but has zero velocity. 
This meets the premises of your question, and indeed without any velocity the absolute pressure in the fluid would be atmospheric. As no motion between the atmosphere and the fluid-bucket surfaces has been created, the same number of atoms strike the surfaces as when the bucket was supported, resulting in the same pressure. Once some velocity is obtained, then certainly there may be fewer atoms striking the fluid surface than the bucket’s surface and this could result in a change from atmospheric pressure. However, this becomes a fluid dynamics question, and whether the pressure increases or decreases would depend on the geometry and many other physical parameters.
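The questioner's own gradient formula makes this concrete: in the container frame $\frac{\partial P}{\partial y} = \rho\,(g-a)$, so at rest the pressure grows with depth, while in free fall ($a=g$) the gradient vanishes and the whole fluid sits at the surface (atmospheric) value. A minimal numerical sketch, with my own illustrative values for water:

```python
rho = 1000.0      # kg/m^3, water
g = 9.81          # m/s^2
P_atm = 101325.0  # Pa

def pressure(depth, a):
    """Pressure at `depth` below the surface when the container
    accelerates downward at `a`: P = P_atm + rho*(g - a)*depth."""
    return P_atm + rho * (g - a) * depth

# At rest: the familiar rho*g*h hydrostatic increase with depth.
print(pressure(0.5, 0.0) - P_atm)   # ~4905 Pa at half a metre

# In free fall (a = g): the gradient vanishes; every depth reads P_atm.
print(pressure(0.5, g) - P_atm)     # 0.0 Pa
```

The constant of integration is fixed by the boundary condition at the open top, which is exactly the point of the answer: zero velocity relative to the air means the surface, and hence the whole fluid, stays at atmospheric pressure.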
{ "domain": "physics.stackexchange", "id": 91171, "tags": "fluid-dynamics, pressure, fluid-statics, free-fall" }
Efficiently selecting spatially distributed weighted points
Question: Background: The motivation behind writing the following code originates in the area of computer vision. More specifically – image rectification. In order to obtain rectified images, one has to find a set of matching features/keypoints on both images beforehand. My code is supposed to operate on these points. When calculating rectifications for a series of aerial images, I was obtaining poor results. Around 80% of rectified images were squeezed and tilted at a large angle. For example: To improve the results my professor suggested selecting only the top X% of the keypoints, as there are probably many bad-quality keypoints (keypoints have weights or similarity scores by which we could sort them). However, that selection could introduce a strong bias, as, probably, one side of the image has much better keypoints than the other side. So, we would want them to be distributed evenly on the image. When looking for an algorithm that could select the most scattered points, I came across two posts on Stack Exchange: Farthest point algorithm in Python1 Selecting most scattered points from a set of points In the second post I found a link leading to the following article: Efficiently selecting spatially distributed keypoints for visual tracking. There they describe an algorithm that takes into account weights of the points. Since it's possible to sort the keypoints by a score of similarity, this looked like a way to go. I implemented their Suppression via Disk Covering (SDC) algorithm and got ~70% good-quality rectified images compared to the previous ~20%. Here is an example of a good one: The idea of the algorithm is pictured in the following image taken from the article: Relevant steps (citing the article): 1) Input image for which a set of, for example, k = 20 strong, well-distributed keypoints is sought. 2) Keypoints found by detector. 3) The first (strongest) keypoint is selected and all cells within the approximated radius r are covered. 
4) Strongest uncovered point is selected, surrounding cells covered. 5) Finally, with this radius, five keypoints have been selected. This is below the desired k, so a new iteration (6)) is started with a smaller r. 7) More than k points are selected and still uncovered points left: r is too small, the iteration can be aborted and the next iteration (8)) started. 9) Finally, with this r , exactly k keypoints have been selected and are returned as result What I want reviewed: Code organization, separating logic to functions, DRY, style, choice of names, etc. Performance. (It's possible that in the future this code will run in real time. Right now it's not critical, so I didn't do any profiling. But by taking a look on the current state of the code, I don't see how it could be sped up.) Missed bugs, edge-cases. Alternative algorithms. Is this an XY problem? I am new to computer vision and I'm afraid that I could take a wrong way to tackle this problem. I asked a separate question about tilted rectified images: Skewed rectified aerial images Code: suppression_via_disk_covering.py from functools import partial from typing import Tuple import numpy as np def select(points: np.ndarray, *, image_shape: Tuple[int, int], count: int, count_delta: int = 1, radius: int = 10, radius_delta: int = 2, max_iterations_count: int = 15, min_cell_size: int = 2, max_cell_size: int = 100) -> np.ndarray: """ Selects points by a Suppression via Disk Covering algorithm. 
For more details see: http://lucafoschini.com/papers/Efficiently_ICIP11.pdf :param points: original set that should be ordered by distance :param image_shape: shape of an image :param count: number of output points :param count_delta: let k = `count` and Δk = `count_delta`, if number of found points is within [k; k + Δk], return top-k points :param radius: initial radius of area where points will be removed :param radius_delta: determines width of cells :param max_iterations_count: prevents infinite loop :param min_cell_size: :param max_cell_size: :return: mask array with selected strong scattered keypoints """ if len(points) < count: raise ValueError('Not enough points to select.') grid_resolution = radius_delta * radius / np.sqrt(2) max_count = count + count_delta points_mask = partial(selected_points_mask, points, image_shape=image_shape, count=max_count, radius=radius) for _ in range(max_iterations_count): result_mask = points_mask(grid_resolution=grid_resolution) selected_points_count = result_mask.sum() if selected_points_count == count: return result_mask if count < selected_points_count <= max_count: return erase_extra_points(result_mask, count=count) if selected_points_count < count: max_cell_size = grid_resolution grid_resolution -= (grid_resolution - min_cell_size) / 2 else: min_cell_size = grid_resolution grid_resolution += (max_cell_size - grid_resolution) / 2 raise ValueError('Number of iterations exceeded.') def selected_points_mask(points: np.ndarray, *, grid_resolution: float, image_shape: Tuple[int, int], count: int, radius: int) -> np.ndarray: """ Calculates boolean mask corresponding to array of input points. True values are for those points that will be selected as scattered enough from each other. In case if there were too many points found, the mask still will be returned. 
:param points: input array :param grid_resolution: size of a cell in a grid :param image_shape: :param count: number of points to select :param radius: as number of cells where points won't be selected :return: boolean array with True values for selected points """ points_grid_indices = (points // grid_resolution).astype(int) grid_shape = (int(image_shape[0] // grid_resolution) + 1, int(image_shape[1] // grid_resolution) + 1) grid = np.full(shape=grid_shape, fill_value=False) result_mask = np.full(shape=points.shape[0], fill_value=False) for index, point_grid_index in enumerate(points_grid_indices): if grid[tuple(point_grid_index)]: continue result_mask[index] = True if result_mask.sum() > count: break mask = circular_mask(grid.shape, center=point_grid_index, radius=radius) grid[mask] = True return result_mask def circular_mask(array_shape: Tuple[int, int], *, center: Tuple[int, int], radius: int) -> np.ndarray: """ Returns 2d array with applied a disc shaped mask over it. For more details see: https://stackoverflow.com/questions/8647024/how-to-apply-a-disc-shaped-mask-to-a-numpy-array https://stackoverflow.com/questions/44865023/circular-masking-an-image-in-python-using-numpy-arrays :param array_shape: shape of original image :param center: center of the disc :param radius: radius of the disc :return: boolean array with applied circular mask """ y, x = np.ogrid[-center[0]:array_shape[0] - center[0], -center[1]:array_shape[1] - center[1]] return x * x + y * y <= radius * radius def erase_extra_points(array: np.ndarray, *, count: int) -> np.ndarray: """ Let n = `count`, sets to False all elements after the n-th occurrence of a True element. 
:param array: input boolean array :param count: number of True elements to remain :return: """ array = array.copy() last_true_index_to_remain = np.where(array)[0][count] array[last_true_index_to_remain:] = False return array Examples of usage: Simple example without opencv, ignoring ordering of points by strength: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import suppression_via_disc_covering as sdc image_shape = (1500, 2000) points = np.random.uniform(low=(0, 0), high=image_shape, size=(100, 2)) mask = sdc.select(points, image_shape=image_shape, count=8) plt.scatter(points[:, 0], points[:, 1]) plt.scatter(points[mask, 0], points[mask, 1], color='r') Output: Example with opencv (Shi-Tomasi corner detector): Taking the following image: duck.jpg import cv2 import suppression_via_disc_covering as sdc image = cv2.imread('duck.jpg') image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) corners = cv2.goodFeaturesToTrack(image, maxCorners=1000, qualityLevel=0.01, minDistance=10) corners = corners.reshape(-1, 2) for corner in corners: cv2.circle(image, center=tuple(corner), radius=3, color=0, thickness=-1) mask = sdc.select(corners, image_shape=image.shape[::-1], count=25) good_corners = corners[mask, :] image = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB) for corner in good_corners: cv2.circle(image, center=tuple(corner), radius=3, color=(0, 0, 255), thickness=-1) cv2.imshow('image', image) cv2.waitKey(0) Output: P.S.: I'm not including an example with pairs of aerial images, since there is much more code there that should be reviewed separately. Moreover, I still consider that 70% of good results is not good enough and hence this example would be considered as "doesn't work as intended". 1I tried this algorithm as well and quality of the results improved from ~20% of good images to ~45%. This is not good enough. Answer: I've read this question and the associated code a couple of times because I really wanted to review it. 
First of all I'd say that there's not much to review because it's really well written (and it's a complex subject). Usage of functools.partial This may be opinionated, but I don't think you should be using partial here. I think you used it to make the code cleaner, but all in all, what's the difference between these two pieces of code: points_mask = partial(selected_points_mask, points, image_shape=image_shape, count=max_count, radius=radius) for _ in range(max_iterations_count): result_mask = points_mask(grid_resolution=grid_resolution) and for _ in range(max_iterations_count): result_mask = selected_points_mask(points, image_shape=image_shape, count=max_count, radius=radius, grid_resolution=grid_resolution) There are two other reasons I don't think you should use it: you only re-use the partial function once, and it adds complexity to your code for nothing. grid_shape = (int(image_shape[0] // grid_resolution) + 1, int(image_shape[1] // grid_resolution) + 1) Careful here, though: since grid_resolution is a float, // returns a float, so the int conversion is actually needed in this case (floor division // only returns an int when both operands are ints). A small performance improvement: you compute radius*radius often while it could be computed once. You could create a radius_pow_2 variable and pass this to your circular_mask function. In the select function, I'd be inclined to rename count to k. This might not be a popular decision, but you use k everywhere when you explain the algorithm, so it's very clear what it's supposed to do. It's also a pretty popular parameter (think K-Means Clustering or K Nearest Neighbours). I also think you should revisit the documentation for this parameter: :param max_iterations_count: prevents infinite loop. The idea, if I understood correctly, isn't to prevent an infinite loop, but to set a "time limit" where you accept that the algorithm isn't finding a reasonable solution, and this difference is pretty important. This parameter could also use some love: :param radius: as number of cells where points won't be selected. 
It's not very clear what it means (it's clear what it does from your post, but the documentation itself should be clear; otherwise why have it?). You throw ValueError; first, I don't think that's the right exception (I also think the slim choice of exceptions we can throw in Python is... way too slim). Second, I think it could be more detailed as to why the algorithm didn't find a solution. Was K too large? Was the initial radius too big/small? I'm pretty sure that by analyzing the responses your algorithm gave while iterating, you could give a little more "meat" to your exception message. While that might not be a good solution for a real-time system, it could be an interesting addition for debugging. If I understood correctly, in the erase_extra_points method, you basically delete every point after the k-th one. This has the consequence that the points near the bottom of your image would be deleted (again, if I understood correctly), without concern for the importance of said points. Even if I'm mistaken in my previous sentence, the idea is that deleting points with such a "simple" algorithm could hurt your performance. Finally, I'm no CV expert either, but if there's one thing I've learned, it's that if you have a performance of 45% and you need it to be much higher, it might be wise to get more creative and think of other solutions.
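For the radius*radius point above, a sketch of what the suggested change could look like. The radius_sq parameter name is mine (the review suggests radius_pow_2); the caller computes the square once and circular_mask no longer repeats the multiplication on every call:

```python
import numpy as np

def circular_mask(array_shape, *, center, radius_sq):
    """Variant of the reviewed helper that takes the precomputed squared
    radius instead of recomputing radius * radius on every call."""
    y, x = np.ogrid[-center[0]:array_shape[0] - center[0],
                    -center[1]:array_shape[1] - center[1]]
    return x * x + y * y <= radius_sq

radius = 10
radius_sq = radius * radius   # computed once by the caller, per the review
mask = circular_mask((50, 50), center=(25, 25), radius_sq=radius_sq)
print(mask.sum())             # number of grid cells covered by the disc
```

The behaviour is unchanged; it only hoists one multiplication out of the per-keypoint loop, so the gain is small, as the review says.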
{ "domain": "codereview.stackexchange", "id": 35526, "tags": "python, algorithm, numpy, opencv" }
Human (imperfect) array (card) shuffle written in C
Question: I've got this humanoid_shuffle I wrote in Python. I've been wanting to learn C so as an exercise I ported it to C. As this is my first venture into C and even the ideas of memory management and the like, what are some ways to make this better from a code quality and efficiency perspective? A couple of quick thoughts from a C beginner: Interesting how it's harder to justify creating multiple new copies of the array to juggle between. Seems like you have to be much more clever and less practical with C (obviously the Python version is much easier to follow). Thoughts? Python @staticmethod def humanoid_shuffle(items, num_shuffles=6): # how many times items can be pulled from the same list consecutively MAX_STREAK = 10 # divide list roughly in half num_items = len(items) end_range = int(num_items / 2 + random.randint(0, int(.1 * num_items))) first_half = items[:end_range] # list up to 0 - end_range second_half = items[end_range:] # list after end_range - len(items) split_lists = (first_half, second_half) mixed = [] streak = current_item_index = 0 # while both lists still contain items while first_half and second_half: # calc the percentage of remaining total items remaining = (1 - float(len(mixed)) / num_items) # if we happen to generate a random value less than the remaining percentage # which will be continually be decreasing (along with the probability) # or # if MAX_STREAK is exceeded if random.random() < remaining or streak > MAX_STREAK: # switch which list is being used to pull items from current_list_index = 1 ^ current_list_index # reset streak counter streak = 0 # pop the selected list onto the new (shuffled) list mixed.append(split_lists[current_list_index].pop()) # increment streak of how many consecutive times a list has remained selected streak += 1 # add any remaining items mixed.extend(first_half) mixed.extend(second_half) num_shuffles -= 1 # if we still have shuffles to do if num_shuffles: # rinse and repeat mixed = humanoid_shuffle(mixed, 
num_shuffles) # finally return fully shuffled list return mixed C #include <stdio.h> #include <stdlib.h> #include <string.h> void shuffle(int *shuffle_array, int length) { int const MAX_STREAK = 10; int end_range; int mixed[length]; int m, f, l; int streak; int *current_ptr; srand(time(NULL)); current_ptr = (rand() % 2) ? &f : &l; end_range = (int)(length / 2 + rand() % (int)(.1 * length)); for(m = 0, f = 0, l = (end_range + 1), streak = 0; m < length && l < length && f < end_range + 1; m++, *current_ptr += 1) { float remaining = 1 - m / (float)length; float test = rand() / (float)RAND_MAX; if (test < remaining || streak > MAX_STREAK) { current_ptr = (current_ptr == &f ? &l : &f); streak = 0; } mixed[m] = shuffle_array[*current_ptr]; printf("Dropped from %p --> %d \n", current_ptr, mixed[m]); streak += 1; } // change the pointer to the one that didn't cause the for to exit current_ptr = (current_ptr == &f ? &l : &f); printf("remaing items in %p\n", current_ptr); while(m < length) { mixed[m] = shuffle_array[*current_ptr]; printf("Dropped from %p --> %d \n", current_ptr, mixed[m]); m++; *current_ptr += 1; } memcpy( shuffle_array, mixed, length * sizeof( int ) ); } int main(void) { int i; int array[52] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51 }; int length = sizeof(array) / sizeof(int); printf("\nShuffling...\n\n"); shuffle( array, length ); // for(i = 0; i < length; i++) // { // printf("%d\n", array[i]); // } return 0; } Answer: Code Review General Comments Be careful of identifiers in all caps. It is traditional to reserve all caps for macros (which can cause problems when you get clashes as macros do not respect scope). 
int const MAX_STREAK = 10; It is usually best to initialize variables as you declare them int end_range; int mixed[length]; Try not to declare multiple variables on a single line (Every coding standard I have seen will hit you on this). There are also some corner cases with pointers that are not so obvious, so it is best to just avoid them. int m, f, l; Initialization of the random number generator should be done once in the application. So it is best to do it just after main() has been entered. srand(time(NULL)); This is an excessive number of identifiers you are initializing in the for(;;). for(m = 0, f = 0, l = (end_range + 1), streak = 0; m < length && l < length && f < end_range + 1; m++, *current_ptr += 1) Algorithm Comments To be blunt I have a hard time following what you are trying to do. There should definitely be more comments explaining what you are trying to do at each step (the only way I managed to decode it was reading the python version). One technique you can use to make things easier is to create structures to hold groups of related information (You should have mimicked the python array). typedef struct Array { int* data; int size; // Other structures you want } Array; Unfortunately, unlike python, there are no built-in useful generic container types (that's one reason C++ is very popular), as C++ has std::vector (a dynamically sized array with push/pop). In fact I would consider C++ a better bet for a python programmer as it gives you some useful built-in types and allows you to do OO programming like python. Also as you mention in the comments (to the other questions) the downside (or the beauty, depending on how you look at it) is the requirement for memory management. C is very basic and you must do it all manually. If you return dynamically allocated memory from a C function you must be very careful to document what the user is required to do with it (this is solved in C++ where memory management is practically automatic).
{ "domain": "codereview.stackexchange", "id": 1189, "tags": "beginner, c, memory-management, simulation, shuffle" }
Language of lists of words, not all of which are different, is not context-free
Question: How do I prove that the following language isn't context-free using the pumping lemma? $$ L=\{w_1\#w_2\#\dots\#w_k \colon k ≥ 2, w_i \in \{0,1\}^*, w_i = w_j \text{ for some } i \ne j\} $$ I am having trouble choosing the string to use for the proof. I know that I have to choose a string such that at least two substrings separated by the # are equal to each other but am unsure of how to approach this. If someone could please help me with this, I would appreciate it. Answer: If $L$ were context-free then so would $L' = d(L \cap (0+1)^*\#(0+1)^*)$ be, where $d$ is the homomorphism that deletes $\#$. However, $L'$ is the language of squares (words of the form $w^2$), which is well-known not to be context-free. If for some reason you have to prove that $L$ is not context-free directly using the pumping lemma, this suggests looking at the proof that $L'$ is not context-free and trying to adapt it.
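Adapting the $\{ww\}$ proof as the answer suggests, one standard choice of string for a direct pumping argument is sketched below (a proof sketch under the usual pumping-lemma notation, not a full write-up):

```latex
% Let p be the pumping length and take
\[
s = 0^p 1^p \,\#\, 0^p 1^p \in L \qquad (k = 2,\ w_1 = w_2).
\]
% Write s = uvwxy with |vwx| <= p and |vx| >= 1. Three cases:
%
% 1. vwx lies inside the first block 0^p 1^p: pumping (n = 2) changes the
%    length of w_1 only, so w_1 \ne w_2, and with k = 2 there is no other
%    pair, hence the pumped word is not in L.
% 2. vwx lies inside the second block: symmetric.
% 3. vwx straddles #:
%    - If # occurs in v or x, pumping down (n = 0) deletes #, leaving a
%      word over {0,1} with k = 1 < 2, which is not in L.
%    - Otherwise # is in w, so v = 1^a and x = 0^b, and
\[
u v^n w x^n y = 0^p 1^{p+(n-1)a} \,\#\, 0^{p+(n-1)b} 1^p .
\]
%      Membership in L would require equal lengths (a = b) and then
%      0^p 1^{p+c} = 0^{p+c} 1^p with c = (n-1)a, forcing c = 0; since
%      |vx| >= 1, this fails for n \ne 1.
% Every case contradicts the pumping lemma, so L is not context-free.
```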
{ "domain": "cs.stackexchange", "id": 16449, "tags": "formal-languages, context-free, proof-techniques, pumping-lemma" }
Synchronization of two lists
Question: I'm looking for a better, more efficient way to write some code I have where I'm synchronizing two lists. Basically, the first list is a list of devices I need to check. The second list is a list of devices that I have already checked with dates. I need to synchronize the two lists so I only check devices that are new or out of date based on the date. I also need to delete any devices that have been removed. Here is my current code that works fine; it just looks and feels SO clunky. public IEnumerable<AssetBlob> Synchronize(IEnumerable<AssetBlob> assets, string id) { //Get list of devices already checked in lineitemsmap List<AssetBlob> items = new List<AssetBlob>(); IEnumerable<LineItemsMap> lineItems = auditRepo.GetLineItemsMap(id).DeviceResults; List<LineItemsMap> deleted = new List<LineItemsMap>(); //put new items into list items.AddRange((from t0 in assets join t1 in lineItems on t0.Id equals t1.BlobId into t1_join from t1 in t1_join.DefaultIfEmpty() where t1 == null select t0).ToList()); //list of existing items that need updated items.AddRange((from t0 in assets join t1 in lineItems on t0.Id equals t1.BlobId where t0.Imported > t1.Created select t0).ToList()); deleted.AddRange((from t0 in assets join t1 in lineItems on t0.Id equals t1.BlobId where t0.Imported > t1.Created select t1).ToList()); //Delete items in lineitems that don't exists in assets list deleted.AddRange((from t0 in lineItems join t1 in assets on t0.BlobId equals t1.Id into t1_join from t1 in t1_join.DefaultIfEmpty() where t1 == null select t0).ToList()); if (deleted.Any()) { auditRepo.RemoveLineItemMaps(deleted); } return items; } Any suggestions or improvements would be great. Thank you. Answer: I haven't tested this due to time constraints, and lack of knowledge of the data, but from my understanding of the problem, this should work. I have taken the linq join queries, two for the insert, two for the delete and created two IEnumerable.Where clauses, one for insert, one for delete.
The two sets of criteria are combined in an OR statement within the .Where statement. The one thing I wasn't sure on is whether the original list needed to be returned. That wasn't clear in the original code. If it does need to be returned, that's an easy change to implement. EDIT: Changed to do work per Kyle's suggestion in comments. public IEnumerable<AssetBlob> Synchronize(IEnumerable<AssetBlob> assets, string id) { var assetsList = assets.ToList(); //Get list of devices already checked in lineitemsmap IEnumerable<LineItemsMap> lineItems = auditRepo.GetLineItemsMap(id).DeviceResults; var items = assetsList.Where( existingAsset => // Existing items that need updating // Any returns true if any of the items in a list meet the criteria lineItems.Any(lineItem => lineItem.BlobId == existingAsset.Id && existingAsset.Imported > lineItem.Created) || // New items that need inserting. // All returns true if all of the items in a list meet the criteria // therefore All == !Any lineItems.All(lineItem => lineItem.BlobId != existingAsset.Id)).ToList(); var deleted = lineItems.Where(lineItem => assetsList.Any(existingAsset => existingAsset.Id == lineItem.BlobId && existingAsset.Imported > lineItem.Created) || assetsList.All(existingAsset => existingAsset.Id != lineItem.BlobId)).ToList(); // If there is a foreach in RemoveLineItemMaps you can probably remove this check. // Just cleans up the code a little, and the foreach will take care of this check. if (deleted.Any()) { auditRepo.RemoveLineItemMaps(deleted); } return items; }
{ "domain": "codereview.stackexchange", "id": 2142, "tags": "c#, linq" }
DX100 controller communication protocol
Question: I have a Motoman robot for use in a pick and place application. It has a DX100 controller which has an Ethernet interface which could be used to control a slave device using the Modbus TCP protocol. The DX100 controller also supports Ethernet/IP and DeviceNet. I know Modbus can be quite complex for first timers and I have little experience when it comes to programming these devices. I would like to know, if someone here has ever worked with this controller, which communication protocol they used and why. Answer: I did not work with Motoman controllers, but I did work with other robots, Modbus and DeviceNet. Modbus TCP is probably the easiest to use from the client side. On the DX100 side you need to check which Modbus address corresponds to the functionality you need. If you want to move the robot along the X axis, you need to check in the documentation which Modbus register(s) you need to write to make the robot move. (Please note that since I do not know the Motoman controller I am not certain that moving the robot via Modbus is possible.) After you have identified the needed registers, you can connect a freely available Modbus slave (or master, depending on configuration) running on a normal PC to see if the robot moves when you write the registers. You can find a number of programs capable of doing this here. I assume that in the end the robot needs to be connected not to a PC but to a PLC via Modbus. If you have correctly identified the required registers via the PC software, you can configure the PLC to do the same, read/write the required Modbus registers, and you are done.
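To make the register-writing idea concrete, here is a sketch that hand-assembles a Modbus TCP "Write Single Register" request. The transaction id, unit id, register address, and value are made-up example numbers; the real register map has to come from the DX100 documentation.

```python
import struct

def write_single_register_frame(transaction_id, unit_id, register, value):
    """Build a Modbus TCP 'Write Single Register' (function 0x06) request.

    MBAP header: transaction id, protocol id (always 0 for Modbus),
    length of the remaining bytes, unit id -- followed by the PDU
    (function code, register address, value). All fields big-endian.
    """
    function_code = 0x06
    # length = unit id (1) + function code (1) + register (2) + value (2) = 6
    return struct.pack(">HHHBBHH",
                       transaction_id,  # transaction identifier
                       0,               # protocol identifier (0 = Modbus)
                       6,               # byte count of what follows
                       unit_id,
                       function_code,
                       register,
                       value)

# Example: ask unit 1 to write the value 10 into holding register 0
frame = write_single_register_frame(transaction_id=1, unit_id=1, register=0, value=10)
print(frame.hex())  # 00010000000601060000000a
```

Sending these 12 bytes over a plain TCP socket to port 502 is all a Modbus TCP write amounts to, which is why the free PC tools mentioned above are handy for experimenting before the PLC is configured.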
{ "domain": "robotics.stackexchange", "id": 1298, "tags": "microcontroller, serial, communication" }
how to update ROS_PACKAGE_PATH?
Question: farshid@ubuntu:~$ roscreate-pkg farshid WARNING: current working directory is not on ROS_PACKAGE_PATH! Please update your ROS_PACKAGE_PATH environment variable. How do I update ROS_PACKAGE_PATH? Originally posted by abrsefid on ROS Answers with karma: 41 on 2011-10-22 Post score: 1 Answer: This is described in the tutorial Installing and Configuring Your ROS Environment. Originally posted by Dan Lazewatsky with karma: 9115 on 2011-10-22 This answer was ACCEPTED on the original site Post score: 10 Original comments Comment by aly on 2014-09-30: If, after configuring your environment, ROS commands like rosmake still don't work, go to the .bashrc file and remove all the duplicate lines at the bottom, then follow the instructions mentioned by Dan Lazewatsky.
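For reference, extending the variable from the shell looks like this (~/my_workspace is a placeholder path, not one from the question):

```shell
# Add a workspace to ROS_PACKAGE_PATH for the current shell session.
# ~/my_workspace is an example path; replace it with your own package directory.
export ROS_PACKAGE_PATH=~/my_workspace:$ROS_PACKAGE_PATH
echo "$ROS_PACKAGE_PATH"

# To make the change permanent, append the export line to ~/.bashrc, e.g.:
#   echo 'export ROS_PACKAGE_PATH=~/my_workspace:$ROS_PACKAGE_PATH' >> ~/.bashrc
```

New directories must appear in this colon-separated list before tools like roscreate-pkg or rospack will find packages inside them.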
{ "domain": "robotics.stackexchange", "id": 7055, "tags": "rospack" }
The number of triangulations of a set of $n$ planar points: Why so difficult?
Question: After hearing Emo Welzl speak on the subject this summer, I know the number of triangulations of a set of $n$ points in the plane is somewhere between about $\Omega(8.48^n)$ and $O(30^n)$. Apologies if I am out-of-date; updates welcomed. I mentioned this in class, and wanted to follow up with brief, sage remarks to give students a sense for (a) why it has proved so difficult to nail down this quantity, and (b) why so many care to nail it down. I found I did not have adequate answers to illuminate either issue; so much for my sageness! I'd appreciate your take on these admittedly vague questions. Thanks! Answer: Here's one more "applied" reason why we care about triangulations. There's a body of work on mesh compression where the goal is to use as few bits per vertex as possible to encode a mesh (mainly to aid in storage and transmission). The particular base of the exponent in the number of triangulations of a planar point set provides an information-theoretic lower bound on the number of bits needed per vertex (specifically, $8.48^n$ triangulations means you need at least $\log_2 8.48 \approx 3.08$ bits per vertex). Such bounds can then be compared with actual mesh compression schemes to determine their efficacy.
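For the classroom, the bound is worth spelling out numerically: an encoding that distinguishes at least $8.48^n$ triangulations needs at least $n \log_2 8.48$ bits in total, so:

```python
import math

# Information-theoretic bound: distinguishing T(n) >= 8.48**n triangulations
# requires at least log2(8.48**n) = n * log2(8.48) bits in any encoding.
bits_per_vertex_lower = math.log2(8.48)   # from the Omega(8.48^n) lower bound
bits_per_vertex_upper = math.log2(30.0)   # from the O(30^n) upper bound

print(f"{bits_per_vertex_lower:.2f}")  # 3.08
print(f"{bits_per_vertex_upper:.2f}")  # 4.91
```

Narrowing the gap between the two bases therefore pins the optimal bits-per-vertex rate to somewhere between about 3.08 and 4.91.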
{ "domain": "cstheory.stackexchange", "id": 1284, "tags": "co.combinatorics, cg.comp-geom" }
How can I make a toboggan to study projectile motion?
Question: I'm currently working on an experiment due in a month in which I have to recreate this toboggan : I would drop a ball at the top of it on the left and let it slide. I would then study the trajectory of the ball after exiting the toboggan (projectile motion study). The point of it is to study how the angle and speed of the ball will affect its trajectory. Using $E_p = E_c$ we would get an initial speed of $v = \sqrt{2gH}$ ; and an angle of $\alpha$. Therefore we should be able to manipulate and change the height $H$ and the angle $\alpha$. However me and my physics teacher can't sort it out. I thought of using a silicone tube but it would need to be quite large to accommodate the ball. Do you guys have any idea? Also I have a question : will the mass of the ball affect the initial speed? With $v = \sqrt{2gH}$ it seems like it won't but just to be sure. Answer: I think you can use any material, provided it gives minimal resistance. As for your second question: the initial speed of the ball won't be affected. However, once the ball is in the air it will experience drag, and a heavier ball is less affected by drag than a lighter one (the drag force is about the same, but the resulting deceleration is smaller). And best of luck for your experiment :)
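A small numerical sketch (example height, no friction or air resistance, launch and landing at the same height) makes the mass independence visible: the mass cancels before it is ever used.

```python
import math

g = 9.81  # m/s^2

def exit_speed(H):
    """Speed at the toboggan exit from energy conservation: m*g*H = m*v**2/2.
    The mass cancels, so it never appears here."""
    return math.sqrt(2 * g * H)

def flat_ground_range(H, alpha_deg):
    """Projectile range for exit angle alpha, landing at launch height,
    with no air resistance: R = v^2 * sin(2*alpha) / g."""
    v = exit_speed(H)
    return v**2 * math.sin(2 * math.radians(alpha_deg)) / g

v = exit_speed(0.50)                            # drop height H = 0.50 m (example)
print(f"{v:.2f} m/s")                           # 3.13 m/s, whatever the ball's mass
print(f"{flat_ground_range(0.50, 45):.2f} m")   # 1.00 m; range is maximal at 45 degrees
```

Varying H and alpha_deg here mirrors exactly the two knobs the experiment is supposed to turn.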
{ "domain": "physics.stackexchange", "id": 94118, "tags": "newtonian-mechanics, kinematics, experimental-physics" }
Does a uniform loading of an elastic half space result in a uniaxial stress state or a uniaxial strain state?
Question: Suppose for instance a soil is loaded by a building over an area of length $L$ (load is in the $z$ direction). In the neighborhood of a point at depth $h$, $h \ll,L$, in the soil under the loaded area, do we expect a uniform uniaxial $\sigma_{zz}$ stress state (all other terms of the stress tensor vanish) or a uniform $\epsilon_{zz}$ strain state (all other terms of the strain tensor vanish)? Answer: Strictly neither, but approximately both. The exact stress distribution can be calculated by integrating the Green tensor of elasticity for a vertical point load. (This is called the Boussinesq problem; see Chapter IX in Love and Farrell's "Deformation of the Earth by Surface Loads". The point-load solutions are given in many soil mechanics and elasticity texts, e.g., Eqs. (8-2) here.) The strain distribution can then be obtained from generalized Hooke's law. Various normal and shear stresses and strains arise from the influence of the material freedom outside the load. But perhaps we wish to look at limiting cases, e.g., $h\ll L$, as you specified. Is an approximation available? Directly at the surface, we have a single external stress $\sigma_{zz}$ within the loaded region and no other external loading. For very small depths, far inside the edge of the loading, the stress state can be idealized as that of a half-space with uniform loading $\sigma_{zz}$. (The justification is that the material doesn't "know" that there's any other loading type over the half-space other than that directly above and near it; the exact solution shows that the influence of distant loads disappears quickly as an inverse power of increasing distance.) It follows that the radial strain (symmetric $\varepsilon_{xx}=\varepsilon_{yy}$) could be assumed to be constrained as zero, as a uniformly loaded half-space—or semi-infinite medium—has nowhere to expand into or contract from. 
We apply generalized Hooke's Law for an isotropic material, with Poisson's ratio $\nu$, Young's modulus $E$, $\delta$ as the Kronecker delta, and repeated indices indicating summation: $$\varepsilon_{ij}=\frac{1+\nu}{E}\sigma_{ij}-\frac{\nu}{E}\sigma_{kk}\delta_{ij};\tag{Generalized Hooke's Law}$$ $$0=\varepsilon_{xx}(=\varepsilon_{yy})=\frac{1}{E}\sigma_{xx}-\frac{\nu}{E}\sigma_{yy}-\frac{\nu}{E}\sigma_{zz};\tag{Radial strain}$$ $$\sigma_{xx}=\sigma_{yy}=\frac{\nu}{1-\nu}\sigma_{zz};\tag{Solve for radial stress}$$ $$\varepsilon_{zz}=\frac{(1+\nu)(1-2\nu)}{E(1-\nu)}\sigma_{zz};\tag{Solve for vertical strain}$$ If the soil is porous, for example, then Poisson's ratio is small, and $\sigma_{zz}$ could be taken as $\varepsilon_{zz}E$, the same answer we get for uniaxial compression of a rod or bar with free sides—the basis of defining Young's modulus $E$. The idealization also results in zero shear stress and strain, which is not strictly the case for finite $L$. Note that the same derivation provides us with the P-wave modulus of acoustics; here, also, the material is essentially fixed in place laterally when considering a propagating elastic wave. To summarize: only uniaxial $\sigma_{zz}$ is applied at the surface, but lateral stresses arise underneath when Poisson effects are constrained. A reasonable assumption is uniaxial $\varepsilon_{zz}$ at depths $h\ll L$ underneath vertical loading of lateral dimension $L$. If Poisson's ratio is small, this corresponds to approximately uniaxial $\sigma_{zz}$. Please let me know if anything's unclear.
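A quick numerical sanity check of this algebra (the material constants are arbitrary example values):

```python
# Verify the laterally-constrained (uniaxial-strain) relations numerically.
E, nu = 50e6, 0.3        # example values: E = 50 MPa, Poisson's ratio 0.3
szz = 1e5                # example vertical stress, Pa

sxx = syy = nu / (1 - nu) * szz            # lateral stress from the derivation

# Generalized Hooke's law, written out per component:
exx = (sxx - nu * (syy + szz)) / E
ezz = (szz - nu * (sxx + syy)) / E

print(abs(exx) < 1e-12)                    # True: lateral strain vanishes
ezz_formula = (1 + nu) * (1 - 2 * nu) / (E * (1 - nu)) * szz
print(abs(ezz - ezz_formula) < 1e-12)      # True: matches the boxed result
```

Setting nu close to zero in the same script shows ezz approaching szz / E, the plain uniaxial-stress (Young's modulus) result mentioned for porous soil.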
{ "domain": "physics.stackexchange", "id": 97547, "tags": "continuum-mechanics" }
Space-time and gravitational fields
Question: I am reading a book by Carlo Rovelli, Seven Brief Lessons On Physics, and would like to check if I have understood something. Apologies if my question is badly phrased, feel free to edit where appropriate. I am not a physicist, just an enthusiast. It was this excerpt that made me think gravitational waves had something to do with time. The heat of the black holes is like Rosetta Stone of physics, written in a combination of three languages-Quantum, Gravitational and Thermodynamic - still awaiting decipherment in order to reveal the true nature of time And the following that made we wonder if they were to do with space as well? My understanding is that space and time are synonymous? The heat of black holes is a quantum effect upon an object, the black hole, which is gravitational in nature... It was the next line, following that inspired my question was this, ... It is the individual quanta of space, the elementary grains of space, the vibrating 'molecules' that heat the surface of black holes Talking about the gravitational field being space-time The gravitational field, as we saw in the first lesson, is space itself, in effect space-time Text referred to in the 'First Lesson' Einstein had been fascinated by this electromagnetic field and how it worked... soon came to understand that gravity, like electricity must be conveyed by a field as well... the gravitational field is not diffused through space; the gravitational field is that space itself Question: Is space-time 'made of' gravitational waves? Is that field it's fundamental building block? It seems to me from all of this that space-time is indeed Answer: I'll try to boil down several of your questions and answer what I think is most fundamental, and hopefully clarify things in the process: Gravity is completely synonymous with the shape of spacetime across all 4 dimensions (3 space, 1 of time). 
The reason we speak of spacetime is thus: When you (having negligible mass) stand in a "gravity field" such as that caused by a massive object such as the Earth, you notice 2 things: First, that space seems to have a direction, i.e. objects will "fall" towards the center of the dominant mass. Second, that your watch will tick somewhat more slowly than it did when you were far away from this gravity well. These phenomena of acceleration (a gradient in space) and changes in the rate of your watch ticking (so a gradient in time) are why we don't speak of space and time as separate entities - they are inextricably linked, and movement through one affects your movement through the other. The stronger the gravity field you're exposed to (so the more massive the object you're near), the slower your watch will tick when compared to someone standing safely outside the field. Spacetime is the fabric of the universe - so far as we know, there's nothing "under" it, nothing that it is "made up of" - and GR treats it as such. So when you think of spacetime, think of it as a landscape of hills and valleys in both 3D space and in time, all caused by the various masses that reside there, and understand that those hills and valleys are gravity. Gravity waves, then, are ripples in this landscape that can be caused by an accelerating massive object. So if two black holes accelerate and smash into each other, some ripples in spacetime will travel out from the disturbance and we may be able to detect it. With our understanding of what gravity is above, what would a gravity wave be expected to 'look like?' We'd expect to be able to pick it up one of two ways: Either by a change in space (seen as a distance fluctuation between 2 points in our detector) or a change in time (an aberration in the rate a very reliable clock is ticking). Which now makes perfect sense, because gravity is simply the shape of space and time, bound up together.
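For a rough sense of scale of the "watch ticks more slowly" effect, a back-of-the-envelope weak-field estimate: the fractional slowdown at the surface of a body of mass M and radius r is about GM/(rc^2).

```python
# Rough weak-field estimate of gravitational time dilation at Earth's surface:
# a clock deep in the well runs slow by a fraction of roughly G*M / (r * c**2).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
r = 6.371e6     # radius of the Earth, m
c = 2.998e8     # speed of light, m/s

fractional_slowdown = G * M / (r * c**2)
print(f"{fractional_slowdown:.2e}")   # 6.96e-10

# Accumulated over one year, that tiny rate difference amounts to:
seconds_per_year = 365.25 * 24 * 3600
print(f"{fractional_slowdown * seconds_per_year * 1e3:.1f} ms")  # 22.0 ms
```

Small as it is, this is exactly the kind of clock-rate gradient that GPS satellites have to correct for, and the "change in time" signature a gravity-wave detector could in principle look for.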
{ "domain": "physics.stackexchange", "id": 28240, "tags": "black-holes, spacetime, quantum-gravity, gravitational-waves, black-hole-thermodynamics" }
Path from the root to a given node in binary tree
Question: Based on the solution provided in this question , I have written non-recursive Java code to find the path form to a given element in a binary tree. This code is working for all the elements except root. Is this the correct implementation of the algorithm provided? Please check and suggest the required changes for the code to look more elegant. public void findPathFromRoot(Node rootNode,int key) { Node temp = rootNode; Stack<Node> nodeStack = new Stack<Node>(); nodeStack.push(rootNode); boolean present = true; Node topNode = rootNode; Node topNodeLeft,topNodeRight; while(true) { //three temp variables for top,left of top,right of top topNode = nodeStack.peek(); topNodeLeft = topNode.lnode; topNodeRight = topNode.rnode; //if left of top is not null, push to stack. Check if the push values is equla to the required key //If it is the require element break if(topNodeLeft != null ) { nodeStack.push(topNodeLeft); if(topNodeLeft.value == key) break; } //if right of top is not null, push to stack. Check if the push values is equal to the required key //If it is the require element break else if(topNodeRight != null) { nodeStack.push(topNodeRight); if(topNodeRight.value == key) break; } //If both the childs are null(ie leaf node) , pop the elements in stack, till the top node has some right child. else { Node lastPoppedNode; //pop the elements in stack, till the top node has some right child. do{ lastPoppedNode = nodeStack.pop(); topNode = nodeStack.peek(); }while(topNode.rnode==null); //Now the top node would be the node with some right child topNodeRight = topNode.rnode; //Push that right child nodeStack.push(topNodeRight); //Check if the push values is equal to the required key //If it is the require element break if(topNodeRight.value == key ) { break; } //To terminate the loop if no element is found.... 
//it checks if the popped element is getting added again if( lastPoppedNode == topNodeRight) { present = false; break; } } } //Printing based on the boolean variable. if(present) { for(Node tempNode : nodeStack) { System.out.println(tempNode.value); } } else { System.out.println("Element is not present in the tree"); } } Answer: Step 1: use an IDE to reformat this nicely Step 2: remove pointless comments For example: // if left of top is not null, push to stack. // Check if the push values is equla to the required key // If it is the require element break if (topNodeLeft != null) { nodeStack.push(topNodeLeft); if (topNodeLeft.value == key) break; } The comment explained in English exactly what the code does. This is pointless. On closer look, pretty much all the comments in this code are pointless. Step 3: rename variables to more readable names For example: lnode -> left rnode -> right Step 4: let your IDE guide you Pay attention to the warnings and try to fix them. For example IntelliJ tells this: The variable temp is never used: delete it The Node topNode = rootNode; initialization is pointless, skip the initialization Step 5: remove unnecessary elements Look at every variable with suspicion. What's the purpose of topLeftNode? After the renaming above, we have this line: topNodeLeft = topNode.left; Which doesn't make a whole lot of sense. Can't we use just topNode.left ? Yes we can. Why is topNode declared outside of the while loop when it is only used inside the loop, and assigned on the first line of the loop? It's better to declare it inside: while (true) { Node topNode = nodeStack.peek(); Step 6: look for bugs in simple (degenerate) cases The method crashes with EmptyStackException if the tree has only one node. Same thing if the tree has only two nodes with the child in left position. While fixing these bugs, you might find more opportunities to improve. Give it a try. 
With the above suggestions, the code becomes this: public void findPathFromRoot(Node rootNode, int key) { Stack<Node> nodeStack = new Stack<>(); nodeStack.push(rootNode); boolean present = true; Node topNodeRight; while (true) { Node topNode = nodeStack.peek(); topNodeRight = topNode.right; if (topNode.left != null) { nodeStack.push(topNode.left); if (topNode.left.value == key) { break; } } else if (topNodeRight != null) { nodeStack.push(topNodeRight); if (topNodeRight.value == key) { break; } } else { Node lastPoppedNode; do { lastPoppedNode = nodeStack.pop(); topNode = nodeStack.peek(); } while (topNode.right == null); topNodeRight = topNode.right; nodeStack.push(topNodeRight); if (topNodeRight.value == key) { break; } if (lastPoppedNode == topNodeRight) { present = false; break; } } } if (present) { for (Node tempNode : nodeStack) { System.out.println(tempNode.value); } } else { System.out.println("Element is not present in the tree"); } } Step 7: Stop and think What algorithm are you actually trying to implement here? The top answer on the question you linked suggests using pre-order traversal. That's a good suggestion that will work, though it won't be the fastest for all possible trees. I suggest to implement that algorithm first.
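The suggested pre-order traversal is compact. Here is a sketch in Python (chosen for brevity; the translation to Java is direct) that keeps the path alongside each stacked node, and also handles the root case that the original code misses:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def find_path_from_root(root, key):
    """Iterative pre-order traversal; each stack entry carries the node
    together with the path taken from the root to reach it."""
    stack = [(root, [root.value])]
    while stack:
        node, path = stack.pop()
        if node.value == key:
            return path
        # Push right first so the left subtree is explored first (pre-order).
        if node.right:
            stack.append((node.right, path + [node.right.value]))
        if node.left:
            stack.append((node.left, path + [node.left.value]))
    return None  # key not present in the tree

tree = Node(1, Node(2, Node(4), Node(5)), Node(3))
print(find_path_from_root(tree, 5))  # [1, 2, 5]
print(find_path_from_root(tree, 1))  # [1]  (works for the root too)
```

Checking the key when a node is popped, rather than when its children are pushed, is what removes both the root-node special case and the fragile backtracking loop.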
{ "domain": "codereview.stackexchange", "id": 12355, "tags": "java, algorithm, tree" }
Does magnetic field make particle move in the direction of magnetic field(In circles)?
Question: So my task says A particle with mass $m$ moves freely through space and there is no electromagnetic field. At moment $t = 0$ a uniform magnetic field is turned on with field strength $B$, and direction perpendicular to the velocity of the particle. At moment $t = T$ the field is turned off. What is the deviation of the direction of the particle that happened while the uniform field was turned on? The textbook says: To solve this I must assume that the magnetic force changes the particle's direction, not magnitude. So I will get $F_{cp}$ (centripetal force) $= F_{mg}$ (force of the magnetic field). But what I don't understand is how the magnetic force can make a particle move in a circle (direction of the magnetic field), if by applying the right hand rule, the force of the magnetic field is perpendicular to the direction of the magnetic field? Answer: As you mentioned, the force experienced by a moving charged particle in a magnetic field is perpendicular to both the magnetic field and the particle's velocity. This force causes the direction of the velocity of the particle to change, so the direction of the force in the next moment also changes. When you draw this out, the force always points towards the center of the circle.
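To make the textbook's hint quantitative (a sketch, using an electron and made-up field values as the example): setting the magnetic force equal to the centripetal force gives $qvB = mv^2/r$, so $r = mv/(qB)$, the angular speed is $\omega = v/r = qB/m$, and the deviation angle after time $T$ is $\theta = \omega T = qBT/m$, independent of the speed.

```python
def deviation_angle(q, m, B, T):
    """Angle (radians) the velocity turns through while the field is on.
    From q*v*B = m*v**2/r: omega = q*B/m, so theta = omega*T."""
    return q * B / m * T

# Example numbers: an electron in a 1 mT field, switched on for 10 ns.
q = 1.602e-19   # C
m = 9.109e-31   # kg
theta = deviation_angle(q, m, B=1e-3, T=10e-9)
print(f"{theta:.3f} rad")   # 1.759 rad -- note the speed never entered
```

The speed cancelling out of theta is the cleanest sign that the field bends the trajectory without changing its magnitude.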
{ "domain": "physics.stackexchange", "id": 59419, "tags": "homework-and-exercises, magnetic-fields" }
Mass-Spring Damper system - moving surface
Question: I need help with a physics problem; I don't know much about dampers. How can this be solved? We have $y_0(x)=\mu\sin(\Omega x)$. We arrive at this equation of motion (where we define $b$ and $w_0$ ourselves, and $z=y(t)-y_0(t)$) $$\displaystyle\frac{d^2z}{dt^2} + b\displaystyle\frac{dz}{dt} +w_0^2z = \mu\Omega^2U^2\sin(\Omega Ut)$$ Can someone show me the steps in between for determining this equation of motion? EDIT: What I have is (with $y(ii)$ meaning second derivative of $y$, and $y(i)$ first): $$m(y(ii)-y_0(ii))=-K(y-y_0)-C(y(i)-y_0(i))$$ which simplifies to $$(y(ii)-y_0(ii))+\frac{c}{m}(y(i)-y_0(i))+\frac{k}{m}(y-y_0)=0$$ which $\implies$ $z(ii)+bz(i)+w_0^2z=0$ (which is wrong) Maybe I could be shown a proper free body diagram by someone if my idea of the forces is wrong? Answer: What you have worked out so far is the left hand side of the answer. You have written that the sum of the accelerations is zero, but that is not correct because the system is being driven by the roller moving across the sinusoidal surface. Parameterize in terms of time by writing $x=Ut$, then rewrite your $y_0(x)$ equation as $y_0(t)$. Now that you have the vertical position of the roller as a function of time, you can find its acceleration by taking the derivative twice with respect to time. The result should look comforting. This new acceleration term goes with the force that drives the system.
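The missing steps can be filled in as follows (a sketch; note that Newton's second law acts on the absolute acceleration $\ddot y$, which is where the forcing term comes from):

```latex
% Newton's second law on the mass (absolute acceleration, not relative):
m\ddot{y} = -K\,(y - y_0) - C\,(\dot{y} - \dot{y}_0).
% Substitute z = y - y_0, i.e. y = z + y_0:
m(\ddot{z} + \ddot{y}_0) = -Kz - C\dot{z}
\quad\Longrightarrow\quad
\ddot{z} + b\dot{z} + w_0^2 z = -\ddot{y}_0,
\qquad b = \tfrac{C}{m},\quad w_0^2 = \tfrac{K}{m}.
% Parameterize the surface in time with x = Ut:
y_0(t) = \mu\sin(\Omega U t)
\quad\Longrightarrow\quad
-\ddot{y}_0 = \mu\,\Omega^2 U^2 \sin(\Omega U t),
% which is exactly the stated right-hand side.
```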
{ "domain": "physics.stackexchange", "id": 5553, "tags": "homework-and-exercises, newtonian-mechanics, forces, spring" }
Constraint terminology
Question: Suppose we want to pick a solution $S$ from a collection of items $C$ to maximize some function $f(S)$. The constraint that we pick at most $k$ items, i.e., $|S| \leq k$, is called the cardinality constraint. If the items in $C$ belong to different groups and we are allowed to pick at most $k_i$ items from group $i$, what is the right terminology for this constraint? I think some papers used to call it the group cardinality constraint but I'm not sure if it's still the popular terminology. Answer: This is not clear from your question, but I am going to assume that the groups are disjoint. I.e. $C = \bigcup_i C_i$ and $C_i \cap C_j = \emptyset$ for every $i \neq j$. Then the collection of sets $\mathcal{S} = \{S \subseteq C: |S \cap C_i| = k_i \ \ \forall i\}$ is the collection of bases of a partition matroid. In the context of combinatorial optimization, the constraint $S \in \mathcal{S}$ is often called a partition constraint. This is a special case of matroid constraints. There is, for example, an extensive body of work on maximizing a submodular function $f(S)$ subject to matroid constraints, i.e. subject to the set $S$ being a basis of a matroid.
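As a small illustration (in Python, with made-up items and caps), checking the question's "at most $k_i$ per group" version of the constraint, i.e. the independent sets of the partition matroid rather than its bases, is one counter lookup per group:

```python
from collections import Counter

def satisfies_partition_constraint(selection, group_of, limits):
    """True iff the selection picks at most limits[g] items from each group g.

    selection : iterable of chosen items
    group_of  : dict mapping each item to its group label
    limits    : dict mapping each group label to its cap k_i
    """
    picked = Counter(group_of[item] for item in selection)
    return all(picked[g] <= limits.get(g, 0) for g in picked)

group_of = {"a": 1, "b": 1, "c": 2, "d": 2, "e": 2}
limits = {1: 1, 2: 2}

print(satisfies_partition_constraint({"a", "c", "d"}, group_of, limits))  # True
print(satisfies_partition_constraint({"a", "b"}, group_of, limits))       # False: two items from group 1
```

Swapping `<=` for `==` (and checking every group in `limits`) turns the same function into a test for bases of the partition matroid, matching the answer's definition.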
{ "domain": "cstheory.stackexchange", "id": 4193, "tags": "terminology" }
The Riemannian Penrose Inequality in higher dimensions
Question: I am reading the proof of the Riemannian Penrose Inequality by Huisken and Ilmanen in The Inverse Mean Curvature Flow and the Riemannian Penrose Inequality and I was wondering why they restrict their proof to the dimension $n=3$. I thought it might be because of the definition of the Geroch-Hawking mass, or the monotonicity of such a mass, and I was told that it works only in dimension $n=3$ because the Geroch-Hawking mass monotonicity formula relies on the Gauss-Bonnet Theorem. But the latter can be generalized to higher dimensions (for an even dimension), right (wikipedia: Generalized Gauss-Bonnet Theorem)? Then which argument restricts their proof to $n=3$? Answer: This may be more of an extended comment than an answer, but I suspect the reason it works in $n=3$ is because, in their proof, Huisken and Ilmanen consider boundaries ($\partial M$) of the 3-space $M$, which is essentially the black hole exterior (the minimal surface in question). The 2-dimensional case $(\partial M)$ is very special because the Euler characteristic $\chi$ gives you $\textit{all}$ of the topological information about the surface. Essentially, the theorem of Hawking and Ellis telling you that the black hole horizon has Euler characteristic $\chi=2$ gives you more information than you would have in higher dimensions. This 'free' information is essential in Huisken and Ilmanen's proof. An independent proof of the Riemannian Penrose inequality, due to Bray, appeared in 2001 (H.L. Bray, $\textit{Proof of the Riemannian Penrose inequality using the positive mass theorem}$) using conformal flows. These techniques can be extended to higher dimensions, and indeed it was shown by Bray and Lee that the Riemannian Penrose inequality holds for (at least) $n < 8$ using these techniques.
It may be possible to employ the methods of H&I by invoking the extended topology results due to Galloway and Schoen, regarding the Yamabe classification of black hole horizons in higher dimensions, and then the generalised Gauss-Bonnet theorem. I believe this has not been done before (?).
{ "domain": "physics.stackexchange", "id": 27319, "tags": "general-relativity, black-holes" }
Is there a way to solve this complex statics problem algebraically?
Question: This problem can be solved in 2 ways: either with vectors, which would be relatively painful and more time consuming, or the faster way, algebraically; but I faced a problem when trying to find the points of intersection with the axes. After I calculated $R_x=403.3\ N$, $R_y=-131.81\ N$ and the moment about G (the origin): $$M_G=-460\cos(15)*0.47+100*0.59+120\cos(70)*0.47-120\sin(70)*0.19+100+135=88.98\ N\cdot m$$ Then I said that the sum of the moments of the forces about G = the moment of the resultant force about G: $$R_x*y+R_y*x=88.98$$ $$\therefore -131.81x+403.3y=88.98$$ Now when I plug in $y=0$: $x=0.675\,m=675\,mm$, and when $x=0$: $y=-0.2207\,m=-220.7\,mm$. Apparently there is no answer with the signs that I got. What did I do wrong here? Answer: The idea was correct, and all your calculations were correct. You only neglected to consider the sign of the generated moment; i.e. in the 2D case the moment of a force is given by the following equation (notice the minus): $$M= -F_x\cdot y + F_y\cdot x$$ So for horizontal forces, a positive $F_x$ applied at a positive y distance gives a negative moment $-M$, and at a negative y a positive moment $+M$; a negative $F_x$ gives $+M$ at positive y and $-M$ at negative y. Similarly for vertical forces: a positive $F_y$ gives $+M$ at positive x and $-M$ at negative x; a negative $F_y$ gives $-M$ at positive x and $+M$ at negative x. So when you were calculating the x coordinate, only the y component of the force generates a moment (i.e. $R_y$), so what you should have calculated was: $$M_G = +R_y\cdot x \Rightarrow$$ $$x = +\frac{M_G}{R_y}=\frac{88.98}{-131.81}= -0.675\,m = -675\,mm$$ and similarly for the y calculation $$M_G = -R_x\cdot y \Rightarrow$$ $$y = -\frac{M_G}{R_x}= -\frac{88.98}{403.3}= -0.2207\,m = -220.7\,mm$$
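The sign bookkeeping above can be checked numerically by plugging the question's values into $M = -F_x y + F_y x$ (Python used purely as a calculator here; only the numbers stated in the question are assumed):

```python
# Values from the question (forces in N, moment in N*m):
R_x, R_y = 403.3, -131.81
M_G = 88.98

# In 2D, a force (F_x, F_y) acting at the point (x, y) produces the moment
# M = -F_x*y + F_y*x about the origin. Every point on the resultant's line
# of action must reproduce M_G, so setting y = 0 or x = 0 isolates the two
# axis intercepts of that line.
x_intercept = M_G / R_y    # at y = 0 only the F_y term survives
y_intercept = -M_G / R_x   # at x = 0 only the -F_x term survives

print(x_intercept)  # about -0.675 m, i.e. -675 mm
print(y_intercept)  # about -0.221 m, i.e. -220.7 mm
```

Both intercepts come out negative, which is exactly the sign flip the question was missing.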
{ "domain": "engineering.stackexchange", "id": 4566, "tags": "mechanical-engineering, applied-mechanics, statics, solid-mechanics, moments" }
#define or NSString * const - Benefits and drawbacks of string literal constant syntax?
Question: I'm interested in discussing string literals in Objective-C and would like to know 1. if there's a preferred paradigm / style and 2. what (if any) the performance implications might be. There are parts of the Cocoa / CocoaTouch frameworks that use strings as identifiers. Some examples in Cocoa / CocoaTouch... -[NSNotificationCenter addObserver:selector:name:object:] -[UITableView dequeueReusableCellWithIdentifier:] -[UIViewController performSegueWithIdentifier:sender:] I find myself most often declaring a global variable within the class like so... NSString * const kMySegueIdentifier = @"Awesome Segueueueueue"; For segue identifiers, I will oftentimes expose the variable in the header file extern NSString * const kMySegueIdentifier; so that other modules can reuse it. The same behaviors can be accomplished with preprocessor macros: #define kMySegueIdentifier @"Awesome Segueueueueue". I believe this would also prevent the app from consuming memory to hold these globals. I cringe a little at this syntax however because it exposes the "implementation details" of my string literal constants. Both of these lines accomplish the end goal of abstracting the string so that it is easy to remember and type correctly, and will generate compile warnings / errors. Is one actually better than the other? What are the situations that would arise where one would be preferred over the other? Answer: Actually, they are completely equal (obviously sans the extern keyword on the constant define). When literal strings are declared @"", the compiler expands them out to a compile-time constant expression, which looks familiar to us all: (static) NSString *const; -albeit with a lot more compiler magic thrown in. Nearly the same process occurs with macros, but with one extra step: replacement. Macros are placeholders, which the compiler replaces with the value you #define at compile-time, which is why CLANG can show you errors and warnings.
Where the difference lies is how much work the compiler has to do to replace your abstractions, not in the "memory overhead" they will incur (which means there is absolutely no speed or performance to squeeze out). Besides, NSString* is a brilliant class cluster that's been optimized over the years, especially with literal constants, where a sort of caching occurs in the binary itself. That way, literals used over and over again don't get reallocated over and over again. Though, to make one thing perfectly clear: #define'd literals do NOT reduce memory overhead!
{ "domain": "codereview.stackexchange", "id": 2553, "tags": "objective-c" }
Law for tap water temperature
Question: I was wondering if anyone put together a law to describe the rising temperature of the water coming out of a tap. The setup is fairly simple: there's a water tank at temperature T, a metal tube of length L connected to it and a tap at the end where temperature is measured. The water flows at P l/s. Given that the metal tube is at room temperature initially, what law describes the temperature of the water at any instant? What is the limit temperature of the water? Thanks. Answer: We can consider the following model: a tube of constant temperature $T_e$ of length L, radius $r$ where water is flowing uniformly at a speed $v$ (that you can obtain from your flow $P$). A "slice" of water travels an interval $dx$ in a duration $dt = \frac{dx}{v}$. The tube will contribute to the "heating" of the water by $\frac{dQ}{dt} = (T-T_e) k 2 \pi r dx$ where $k$ is the conductivity and where we use a very simple model (in particular for the radius, we do not distinguish external and internal radii). During this interval the temperature $T(x)$ of the water will vary by $dT = -\frac{dQ}{C \rho dV}$ where $C$ is the heat capacity at constant pressure of water, and where $dV = 2 \pi r dx$. Replacing we have $\frac{dT}{T-T_e}=-\frac{k}{\rho C v} dx$ whose solution, if the temperature in the tank (i.e. x = 0) is $T_t$, is: $T(x) = (T_t - T_e) e^{-\alpha x}+T_e$ where $\alpha = \frac{k}{\rho C v}$. Depending on the length of the tube you have the temperature at the tap.
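A numerical sketch of this steady-state profile (all parameter values below are illustrative assumptions, not taken from the question; $\alpha$ is the answer's expression $k/(\rho C v)$, treated as having units of 1/m for the purpose of the plot-style evaluation):

```python
import math

# Illustrative, assumed parameters (not from the question):
k = 100.0      # effective heat-transfer coefficient in the answer's model
rho = 1000.0   # density of water, kg/m^3
C = 4186.0     # heat capacity of water, J/(kg K)
v = 0.5        # flow speed in the tube, m/s
T_t = 60.0     # tank temperature, deg C
T_e = 20.0     # tube / room temperature, deg C

alpha = k / (rho * C * v)  # the answer's decay constant

def T(x):
    """Temperature of a water slice after travelling x along the tube."""
    return (T_t - T_e) * math.exp(-alpha * x) + T_e

# At the tank (x = 0) the water is at T_t; for a very long tube the
# temperature approaches the limit T_e, the tube temperature.
print(T(0.0))   # T_t
print(T(1e9))   # close to T_e
```

The exponential makes the answer to the "limit temperature" question explicit: it is $T_e$ for a very long tube, and close to $T_t$ for a short one.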
{ "domain": "physics.stackexchange", "id": 6, "tags": "applied-physics, thermodynamics" }
Node.js Data-Completion script
Question: Following task: Customer-data, given in a JSON-file, have to be completed with additional address-data. Given in a second JSON-file. Then these data-stock has to be saved into a MongoDB-database. Schema of the customer-JSON: [ { "id": "1", "first_name": "Ario", "last_name": "Noteyoung", "email": "anoteyoung0@nhs.uk", "gender": "Male", "ip_address": "99.5.160.227", "ssn": "509-86-9654", "credit_card": "5602256742685208", "bitcoin": "179BsXQkUuC6NKYNsQkdmKQKbMBPmJtEHB", "street_address": "0227 Kropf Court" }, { "id": "2", "first_name": "Minni", "last_name": "Endon", "email": "mendon1@netvibes.com", "gender": "Female", "ip_address": "213.62.229.103", "ssn": "765-11-9543", "credit_card": "67613037902735554", "bitcoin": "135wbMcR98R6hqqWgEJXHZHcanQKGRPwE1", "street_address": "90 Sutteridge Way" }, ... ] Schema of the addresses-JSON: [ { "country": "United States", "city": "New Orleans", "state": "Louisiana", "phone": "504-981-8641" }, { "country": "United States", "city": "New York City", "state": "New York", "phone": "212-312-1945" }, ... ] Desired result: [ { "_id" : ObjectId("5b3f16f5743a6704739bf436"), "id" : "1", "first_name" : "Ario", "last_name" : "Noteyoung", "email" : "anoteyoung0@nhs.uk", "gender" : "Male", "ip_address" : "99.5.160.227", "ssn" : "509-86-9654", "credit_card" : "5602256742685208", "bitcoin" : "179BsXQkUuC6NKYNsQkdmKQKbMBPmJtEHB", "street_address" : "0227 Kropf Court", "country" : "United States", "city" : "New Orleans", "state" : "Louisiana", "phone" : "504-981-8641" }, ... 
] My solution: const mongodb = require("mongodb"); const filePathCustomer = "./customers.json"; const filePathAddresses = "./addresses.json"; const MongoClient = mongodb.MongoClient; completeCustomers = (filePathCustomers, filePathAddresses) => { const customers = require(filePathCustomers); const addresses = require(filePathAddresses); return (updatedCustomers = customers.map((customer, index) => { const updatedCustomer = Object.assign(customer, addresses[index]); return updatedCustomer; })); }; MongoClient.connect( "mongodb://localhost:27017/bitcoinExchange", (error, client) => { if (error) { throw new Error("Connecting to MongoDb has failed."); } const db = client.db(); let execCompleteCustomer = new Promise(resolve => { resolve(completeCustomers(filePathCustomer, filePathAddresses)); }); execCompleteCustomer .then(customer => { db.collection("customers").insertMany(customer, (error, result) => { if (error) { db.close(); throw new Error("Writing to database has failed"); } console.log( "Count of customer documents inserted:", result.insertedCount ); return true; }); }) .then(result => { if (result) db.close(); }) .catch(() => { db.close(); throw new Error("The merging of the customer-data has failed."); }); } ); What would you have done different and why? Is my error-handling done in a good way and fashion? How could it be improved? What bothers me a bit are this multiple occurences of db.close(). Is there a way in Node to avoid these redundancy? Something like finally in Java. Answer: Just a few things To start out, the return statement in your function completeCustomers should/could be changed. The way the function is defined should also be changed. But we'll get to that in a second. 
return (updatedCustomers = customers.map((customer, index) => { const updatedCustomer = Object.assign(customer, addresses[index]); return updatedCustomer; })); I would replace with: return (updatedCustomers = customers.map((customer, index) => ({ ...customer, ...addresses[index] }))) Note, you don't have to make it take up multiple lines if you don't want. Use const when you don't modify a variable, or arrow function This means that the two functions below need to change: completeCustomers = (filePathCustomers, filePathAddresses) => { Would become: const completeCustomers = (filePathCustomers, filePathAddresses) => { let execCompleteCustomer = new Promise(resolve => { And this would become: const execCompleteCustomer = new Promise(resolve => { That's all I have for now, but I'm not getting what part you are saying is redundant. And I think you handled the errors well.
{ "domain": "codereview.stackexchange", "id": 31239, "tags": "javascript, node.js, mongodb" }
Formatting phone number without knowing locale
Question: I am trying to format supplied phone numbers for a web application. Users in general will be supplying US-based phone numbers, but it is possible for some users to input a phone number from another country. Here are my assumptions: Default to US if the number of digits is 7 or 10 after stripping non-digits. If not, just return the String Use tested external libraries where possible since speed isn't an utmost concern (the service can be run in the background if it is really that slow). Prefer an external library over a custom regex. For the first assumption, I am thinking that if there is another country that has phone numbers with 7 or 10 digits, it's not the end of the world to have it formatted in US format. Here's my code anyway, using Google's libphonenumber: public static String formatPhoneNumber(String phoneNumber, Locale locale) throws NumberParseException { PhoneNumberUtil phoneUtil = PhoneNumberUtil.getInstance(); Phonenumber.PhoneNumber usNumber = phoneUtil.parse(phoneNumber, locale.getCountry()); return phoneUtil.format(usNumber, PhoneNumberUtil.PhoneNumberFormat.NATIONAL); } public static String formatUSPhoneNumber(String phoneNumber) throws NumberParseException { //First strip out any non-digits phoneNumber = stripNonDigits(phoneNumber); phoneNumber = (phoneNumber == null) ? 
BaseConstants.EMPTY_STRING : phoneNumber; if ((phoneNumber.length() != 7) && (phoneNumber.length() != 10)) { return phoneNumber; } return formatPhoneNumber(phoneNumber, Locale.US); } I am adding this method to be comprehensive, but it's not directly needed for review (credit to SO): public static String stripNonDigits(final CharSequence input){ final StringBuilder sb = new StringBuilder(input.length()); for(int i = 0; i < input.length(); i++){ final char c = input.charAt(i); if(c > 47 && c < 58){ sb.append(c); } } return sb.toString(); } The associated JUnit tests: @Test public void testFormatPhoneNumber() throws NumberParseException { assertEquals(FormUtils.formatPhoneNumber("2122222222", Locale.US), "(212) 222-2222"); assertEquals(FormUtils.formatPhoneNumber("2122222222", Locale.CANADA), "(212) 222-2222"); assertEquals(FormUtils.formatPhoneNumber("(021)11111111", Locale.GERMANY), "0211 1111111"); } @Test public void testFormatUSPhoneNumber() throws NumberParseException { assertEquals(FormUtils.formatUSPhoneNumber("2122222222"), "(212) 222-2222"); assertEquals(FormUtils.formatUSPhoneNumber("92122222222"), "92122222222"); assertEquals(FormUtils.formatUSPhoneNumber("(212)2222222"), "(212) 222-2222"); assertEquals(FormUtils.formatUSPhoneNumber("(212)222-2222"), "(212) 222-2222"); assertEquals(FormUtils.formatUSPhoneNumber("(212) 222-2222"), "(212) 222-2222"); } Is there a better way to do this? Are there tests I should add? Answer: Some improvements... Variable re-assignment and null-check You are re-assigning phoneNumber twice, and unfortunately I think you don't even need it at all. First, it's generally not a nice idea to re-assign the method argument because careless mistakes may result in you inadvertently changing the instance referenced by it somewhere inside the method, when you're expecting the original input. 
Even if you don't explicitly declare method parameters final, it's simpler to always treat method arguments as an invariant so that you know what the method is processing on/with. Therefore, you should simply assign the output of stripNonDigits() to a new variable, and you can inline the new variable in the subsequent calls: public static String formatPhoneNumber(String phoneNumber) throws NumberParseException { String strippedNumber = stripNonDigits(phoneNumber); return strippedNumber.length() != 7 && strippedNumber.length() != 10 ? strippedNumber : formatAsUSPhoneNumber(strippedNumber); } Now, you don't need a null-check (it's not in the right place anyways - it should be done at the start), and I renamed the main method as formatAsUSPhoneNumber(). I feel that formatUSPhoneNumber() is suggesting that it is formatting a US phone number into another representation. stripNonDigits() If stripNonDigits() should be used only by formatAsUSPhoneNumber(), consider making it private static. Ok, so even if you prefer not to use regex (fair point), how about Character.isDigit()? I feel that it's easier to read than the condition you have, short of the one-liner regex: // replace all non-digits with an empty String return input.toString().replaceAll("\\D+", "");
{ "domain": "codereview.stackexchange", "id": 13360, "tags": "java, formatting, i18n" }
Recursive, Recursively Enumerable and None of the Above
Question: Let $A = \mathrm{R}$ be the set of all languages that are recursive, $B = \mathrm{RE} \setminus \mathrm{R}$ be the set of all languages that are recursively enumerable but not recursive and $C = \overline{\mathrm{RE}}$ be the set of all languages that are not recursively enumerable. It is clear that for example $\mathrm{CFL} \subseteq A$. What is a simple example of a member of set B? What is a simple example of a member of set C? In general, how do you classify a language as either A, B or C? Answer: You can choose the language of the halting problem $\qquad \displaystyle B_1 = \{\langle T \rangle \mid T \text{ halts on } \langle T \rangle\} \in B$ and its complement $\qquad \displaystyle C_1 = \overline{B_1} \in C$. This is fairly standard material. The proof for $B_1$ not being recursive is the well-known diagonalization. Proving $B_1$ to be RE (recursively enumerable) is a tad tricky, involving interleaved simulation of multiple TMs, but is widely documented. If $C_1$ were RE, then $B_1$ being RE would imply that both are recursive; hence $C_1$ is not RE. This illustrates some of the techniques for such proofs in general.
{ "domain": "cs.stackexchange", "id": 283, "tags": "formal-languages, computability" }
Possibility of further chlorination of hexachlorocyclohexane
Question: The reaction of benzene with excess of chlorine in the presence of sunlight produces 1,2,3,4,5,6-hexachlorocyclohexane as the major product. This is also given in the preparation section of the Wikipedia page on lindane. Multiple chlorination of methane is also well-known, especially when chlorine is in excess. For example, it is mentioned at the bottom of this page from MasterOrganicChemistry. My question: After hexachlorocyclohexane (lindane) is formed, I thought that it is an ordinary haloalkane, comparable to chloromethane, so if chlorine is in excess, it should also substitute the remaining six hydrogen atoms from lindane - forming 1,1,2,2,3,3,4,4,5,5,6,6-dodecachlorocyclohexane. Since the reaction is used industrially, what I propose obviously doesn't occur. Why is that so? Answer: Given enough exposure time, the reaction will lead to formation of various products, even the fully chlorinated compound: dodecachlorocyclohexane. Increasing the concentration of chlorine also helps in the formation of this product. Doing some literature survey confirms this: [...] During most of the course of this reaction (photochemical chlorination of benzene), both substitution and addition takes place simultaneously, the final reaction product after long periods approaching the composition of dodecachlorocyclohexane ($\ce{C6Cl12}$) Photochemical Studies. XVII. The Chlorination of Chlorobenzene; a Comparison with Benzene, Edwin J. Hart and W. Albert Noyes, Journal of the American Chemical Society 1934 56 (6), 1305-1310, DOI: 10.1021/ja01321a016 The photochlorination of aromatic hydrocarbons may result in either addition or substitution or both. The photochlorination of benzene in either liquid phase or gas phase yields mainly hexachlorocyclohexane, but when high chlorine concentration and long exposure time are employed, the product is mainly dodecachlorocyclohexane.
Technique of Organic Chemistry: Catalytic, photochemical, and electrolytic reactions, Arnold Weissberger, Interscience Publishers, 1956
{ "domain": "chemistry.stackexchange", "id": 15818, "tags": "organic-chemistry, halogenation" }
Pointer handle - absolute follow-up
Question: This is a follow-up to: Pointer handle - follow-up Pointer class/handle Please review my pointer class. template<typename T> class Ptr { public: Ptr(T* t, int s = 1) : sz{s<1 ? throw std::logic_error("Invalid sz parameter") : s} { sz = s; p = new T[sz]; std::copy(t,t+sz,p); } Ptr(const Ptr& t) : Ptr(t.p, t.sz) { } Ptr& operator=(Ptr copy) { std::swap(copy.sz, sz); std::swap(copy.p, p); return *this; } Ptr(Ptr &&t) :p{t.p}, sz{t.sz} { t.p = nullptr; t.sz = 0; } Ptr& operator=(Ptr &&t) { std::swap(t.p,p); std::swap(t.sz,sz); return *this; } T& operator*() { check_range(index); return p[index]; } T& operator[](int i) { check_range(i); return p[i]; } T* get() const { return p; } void operator+=(int i) { check_range(index+i); index += i; } void operator-=(int i) { operator+=(-i); } Ptr operator+(int i) { Ptr old{*this}; old += index+i; return old; } Ptr operator-(int i) { return operator+(-i); } Ptr& operator++() { operator+=(1); return *this; } Ptr operator++(int) { Ptr old{p+index}; operator++(); return old; } Ptr& operator--() { operator-=(1); return *this; } Ptr operator--(int) { Ptr<T> old{p+index}; operator--(); return old; } ~Ptr() { delete[] p; } private: T* p; int sz; int index = 0; void check_range(int i) { if (i < 0 || i > sz-1) { throw std::out_of_range("out of range"); } if (p+i == nullptr) { throw std::out_of_range("null pointer"); } } }; Answer: Ptr(T* t, int s = 1): sz{s<1 ? throw std::logic_error("Invalid sz parameter") : s} { sz = s; //this is unnecessary p = new T[sz]; std::copy(t,t+sz,p); } Here you are writing to sz twice, first in the initializer list then in the constructors body. In general it's best if you just write only once in the initializer list if possible: Ptr(T* t, int s = 1): sz{s<1 ? throw std::logic_error("Invalid sz parameter") : s} { p = new T[sz]; std::copy(t,t+sz,p); } In the move constructor: Ptr(Ptr &&t): p{t.p}, sz{t.sz} { t.p = nullptr; t.sz = 0; } we are changing the pointer and size in this class but not the index. 
I would "move" everything and take the index from the other class here as well. We need to set t.p to nullptr so that when t gets destructed the call to delete[] p; will be a no-op, so that part is necessary. However I don't see why sz needs to be set to 0: it's a primitive and it isn't used in the destructor. Because the range checking function always checks p for a nullptr we can't index the old object anyway, so I think I would just remove that expression. Ptr(Ptr &&t): p{t.p}, sz{t.sz}, index{t.index} { t.p = nullptr; } (Disclaimer: I'm not especially experienced with c++ move semantics so if you think it's more readable to keep explicitly setting every element to a "null" type of value for readability please make a comment, I'd like to know what people think about that.)
{ "domain": "codereview.stackexchange", "id": 11373, "tags": "c++, c++11, reinventing-the-wheel, pointers" }
Boolean circuit with two inputs and advice input is hard-wired
Question: Claim : $\cup_{c,d} $ DTIME$(n^c)/n^d \subseteq$ $\mathsf{P}/\mathrm{poly}$ Proof : if $L$ is decidable by a polynomial-time Turing machine $M$ with access to advice family $\{\alpha_n\}_{n\in \mathbb{N}}$ of size $a(n)$, then we can use the Cook-Levin construction to build, for every $n$, a polynomial-sized circuit $D_n$ such that on every $x \in \{0,1\}^n$, $\alpha_n \in \{0,1\}^{a(n)}$, $D_n(x,\alpha) = M(x,\alpha)$. That is, $C_n$ is equal to the circuit $D_n$ with the string $\alpha_n$ "hard-wired" as its second input. Question : I do not understand the part where the circuit has two inputs, one of which is hard-wired as the second input. How would a circuit look with two inputs? I have seen circuits with one input but not with two inputs (i.e. one input is hard-wired). I am thinking that it may be the case that they are attaching an advice bit to each AND and OR gate (increasing the fan-in). So I do not understand: how are they hard-wiring the advice string? Reference : http://theory.cs.princeton.edu/complexity/book.pdf Answer: Just imagine the circuit $D_n(x,\alpha)$ has $n+a(n)$ bits of input, where the first $n$ bits are treated as the first parameter for the machine $M$, and the last $a(n)$ bits are treated as the advice. To forget about this separation and have a simpler notation, we just say $D_n$ accepts two inputs. Now fix the "second parameter" with the correct advice $\alpha_n$, and this induces a polynomial circuit for $L$ with $n$ inputs.
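The hard-wiring can be illustrated with ordinary functions: a toy two-input function stands in for $D_n$, and fixing its second argument yields the one-input $C_n$ (this is only an analogy for intuition, not a circuit construction):

```python
from functools import partial

def D_n(x, alpha):
    """A stand-in for a circuit on n + a(n) input bits: here just the
    parity of all input bits together."""
    return (sum(x) + sum(alpha)) % 2

alpha_n = (1, 0, 1)                  # the fixed advice string for length n
C_n = partial(D_n, alpha=alpha_n)    # alpha is now "hard-wired"

# C_n takes only the n "real" input bits; the advice bits are constants
# inside it, exactly as constant gates would sit inside the circuit C_n.
print(C_n((1, 1, 0)) == D_n((1, 1, 0), alpha_n))  # True
```

In circuit terms, each of the $a(n)$ advice wires is simply replaced by a constant-0 or constant-1 gate, leaving a circuit with only $n$ live inputs.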
{ "domain": "cs.stackexchange", "id": 8768, "tags": "complexity-theory, circuits" }
Constructing a Turing machine which decides whether a fixed TM will halt on a fixed input or not
Question: It is known that the halting problem is decidable for every fixed $M_0$ Turing machine and every fixed $w_0$ input. My related question would be the following: is it true that for every fixed $M_0$ Turing machine and every fixed $w_0$ input, an $M_{M_0,w_0}$ Turing machine can be constructed for which the possible inputs are $(M, w)$ machine-input pairs, and for the $(M_0, w_0)$ pair, the output is "1" if $M_0$ will halt on $w_0$ and "0" if $M_0$ will not halt on $w_0$? ($M_{M_0, w_0}$ can give false answers for other pairs, it is not demanded that it has to run correctly for every $(M, w)$ pair.) Answer: Since $M_0$ and $w_0$ are fixed parameters of the problem, the answer is yes: for every fixed $M_0$ and $w_0$, there exists a Turing machine $M_{M_0, w_0}$ (depending on $M_0$ and $w_0$) such that, for the input $(M_0, w_0)$, $M_{M_0, w_0}$ returns $1$ if $M_0(w_0)$ halts and 0 otherwise. In particular one such Turing machine $M_{M_0, w_0}$ must be one of the following two machines: $M'_1$: Write $1$. Halt. $M'_0$: Write $0$. Halt. If, instead, you are looking for an algorithm that takes $M_0$ and $w_0$ as input and outputs a machine $M_{M_0, w_0}$ with the above property, then you are out of luck: there is not such algorithm in general (it might exist if you restrict the set of input machines $M_0$). Suppose that such an algorithm (i.e., Turing machine) $A$ existed, then it would allow to solve the halting problem: Given $M_0$ and $w_0$, compute $M_{M_0, w_0}$ by simulating $A$ on input $(M_0, w_0)$. Simulate $M_{M_0, w_0}$ on input $(M_0, w_0)$. By definition of $M_{M_0, w_0}$ this step requires finite time. Return "yes" if the output of $M_{M_0, w_0}( (M_0, w_0) )$ was $1$, otherwise return "no".
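The non-uniformity in this argument is easy to miss, so here is the answer's pair of candidate machines written as trivial Python functions (a toy illustration, not part of the original answer):

```python
# The two candidate machines M'_1 and M'_0: each ignores its input and
# writes a fixed constant.
def M_prime_1(M, w):
    return 1   # the correct choice if M_0 halts on w_0

def M_prime_0(M, w):
    return 0   # the correct choice if M_0 does not halt on w_0

# For any fixed (M_0, w_0), one of these two IS a valid M_{M_0, w_0}.
# The point of the answer's second half is that no algorithm can, in
# general, tell us WHICH one, given (M_0, w_0) as input.
print(M_prime_1("M_0", "w_0"), M_prime_0("M_0", "w_0"))
```

Existence of the right machine is trivial; only its computable construction from $(M_0, w_0)$ fails.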
{ "domain": "cs.stackexchange", "id": 15788, "tags": "turing-machines, computability, halting-problem" }
An alternative package of opende for indigo
Question: Dear all I have a ros-project which works nicely for all the previous distributions of ROS. I updated my ros and I've found out that opende is not supported anymore. Do you have any suggestions? Best Sina Originally posted by sina on ROS Answers with karma: 3 on 2015-01-22 Post score: 0 Answer: ODE is available from Ubuntu as a system package. We've transitioned to using that from upstream. You should be able to do the same. You can use the rosdep key 'opende' to reference it. Originally posted by tfoote with karma: 58457 on 2015-03-12 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 20654, "tags": "ros-indigo" }
combine xgb feature importance
Question: Is it possible to combine the results from the xgb.importance function? For example, due to one-hot encoding, I have a feature age=35 and another age=60. Is there a way that I can add these to get an overall importance of age, and not just age at every value? In case it matters the model is a binary:logistic one. library(xgboost) imp_matrix= xgb.importance(features,model=mod) Answer: You can just sum them. The three importances reported are all fractions that a given feature contributes out of the total. The measures are Gain, the impurity improvement a split provides; Cover, the number of datapoints passing through a node; and Frequency, just the number of nodes. All of these are summed across the nodes that use a given feature to split, and so adding them together across dummy variables for a given categorical also makes sense.
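In code, the summing looks like this (a Python sketch with made-up importance numbers; in R you would aggregate a column of the importance matrix the same way, splitting each feature name on the one-hot separator):

```python
from collections import defaultdict

# Hypothetical Gain importances for one-hot encoded features; the real
# numbers would come from the importance matrix.
importances = {
    "age=35": 0.10,
    "age=60": 0.05,
    "income": 0.40,
    "sex=M": 0.20,
    "sex=F": 0.25,
}

def combine_one_hot(importances, sep="="):
    """Sum the importances of dummy columns back onto the source variable
    (everything before the one-hot separator in the column name)."""
    combined = defaultdict(float)
    for name, value in importances.items():
        combined[name.split(sep)[0]] += value
    return dict(combined)

print(combine_one_hot(importances))  # age ~0.15, income 0.40, sex ~0.45
```

Because each measure is a fraction of a total taken over all tree nodes, the combined values still sum to 1 across the original variables.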
{ "domain": "datascience.stackexchange", "id": 11087, "tags": "r, xgboost, feature-importances" }
What causes the sound of climaxing bees?
Question: I just saw the funniest video of bumblebees(?) having sex. (mirror) As they were getting it on, there seemed to be an almost audible 'moaning' coming from the pair. From what I presume it was a sexual climax we heard! Bees have neither vocal cords nor lungs, so I wonder: what is that sound? As the pair is mating on a newspaper, I'm almost certain we're hearing transduced vibrations from the bees made audible through the paper and not the bees themselves. (Nonetheless, what a beautifully humorous anthropomorphism! Moaning bees certainly give vivid color to the birds and the bees!) I have multiple theories: The ejaculation of sperm by the drone is known to be 'explosive', even rupturing the endophallus and making a 'popping' sound. I wouldn't call the sound in the video 'popping' by any means, though. (Un)controlled movements of legs or wings by either of the pair. This seems very likely considering the subjective frequency of the sound, but that's just speculation on my part. Perhaps it's the queen piping? I'm really grasping with this one. I don't even know if that species' queens pipe or even have the ability to pipe. Any excited entomologists wanna fill me in? Answer: Firstly, those aren't bumblebee faces ... those are big fly eyes. Those are Bumblebee Hover Flies (Volucella bombylans): The noise in the first part of the video is identical to a fly flying around your head and the noise at the end sounds like what a fly would sound like when you press it against a surface like a wall or screen (or a newspaper, apparently)
{ "domain": "biology.stackexchange", "id": 10892, "tags": "entomology, sex, sexual-reproduction" }
Graph classes in which CLIQUE is known to be NP-hard?
Question: Given a graph $G$ and a positive integer $k$, the CLIQUE problem asks if $G$ contains a clique (complete subgraph) on at least $k$ vertices. This problem is long known to be NP-complete --- in fact, it was on Karp's list of 21 NP-complete problems. My question is: For what restricted families of graphs is CLIQUE known to be NP-complete? I could find one such graph class with Google's help: the class of $t$-interval graphs for any $t\ge 3$ (Butman et al., TALG 2010) [1]. Do you know of other graph classes where this problem has been shown to be NP-complete? [1] Butman, Hermelin, Lewenstein, Rawitz. Optimization problems in multiple-interval graphs. ACM Transactions on Algorithms 6(2), 2010 Answer: It's NP-complete to find maximum cliques in claw-free graphs [Faudree, Ralph; Flandrin, Evelyne; Ryjáček, Zdeněk (1997), "Claw-free graphs — A survey", Discrete Mathematics 164 (1–3): 87–147] and in string graphs [Jan Kratochvíl and Jaroslav Nešetřil, INDEPENDENT SET and CLIQUE problems in intersection-defined classes of graphs, Commentationes Mathematicae Universitatis Carolinae, Vol. 31 (1990), No. 1, 85–93]. At least as of the 1990 paper it was open whether the problem remained hard for intersection graphs of straight line segments. However, finding maximum cliques is easy for planar graphs, for minor-closed graph families, or more generally for any family of graphs with bounded degeneracy: find the minimum degree vertex, search for the largest clique among its O(1) neighbors, remove the vertex, and repeat. It's also easy for perfect graphs and the many important subfamilies of perfect graphs. Although maximum independent set is hard for many other interesting graph classes, that doesn't generally lead to interesting hardness results for clique, because the complement of an interesting graph class is not necessarily itself interesting.
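The peel-off procedure described for bounded-degeneracy families can be written out directly (my own minimal Python sketch; the brute force over a vertex's remaining neighbours is the step that is O(1)-sized when the degeneracy is bounded):

```python
from itertools import combinations

def max_clique_bounded_degeneracy(adj):
    """adj: dict mapping each vertex to the set of its neighbours.
    Repeatedly take a minimum-degree vertex v, brute-force the largest
    clique among v plus its current neighbours, then delete v. For a
    graph family of bounded degeneracy the neighbour set examined here
    has O(1) size, so the whole procedure runs in polynomial time."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    best = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        neigh = sorted(adj[v])
        # largest subset of neigh that forms a clique together with v
        done = False
        for r in range(len(neigh), -1, -1):
            for subset in combinations(neigh, r):
                if all(b in adj[a] for a, b in combinations(subset, 2)):
                    if 1 + r > len(best):
                        best = [v, *subset]
                    done = True
                    break
            if done:
                break
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return best

# Triangle 1-2-3 with a pendant vertex 4: the maximum clique is the triangle.
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(sorted(max_clique_bounded_degeneracy(g)))  # [1, 2, 3]
```

Correctness rests on the observation that when the first vertex of any clique (in removal order) is processed, all other vertices of that clique are still among its neighbours.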
{ "domain": "cstheory.stackexchange", "id": 430, "tags": "reference-request, np-hardness, clique" }
Are there more than 3 dynamical pictures of quantum mechanics?
Question: There are 3 well known dynamical pictures of quantum mechanics: the Schrödinger picture, the Heisenberg picture and the interaction picture. In the above Wikipedia article, their connection is nicely summarized in the following table: What is done physically in each of these can be summarized as follows: Schrödinger picture: all time-dependence in the states Heisenberg picture: all time-dependence in the operators Interaction picture: free time-dependence in the operators, interaction time-dependence in the states Each of them is used frequently and has different advantages. However, if you look at the physical summary, there is really (at least) one missing that is rarely mentioned in undergraduate courses: "Interaction picture #2": free time-dependence in the states, interaction time-dependence in the operators So here are the questions: Is the "interaction picture #2" relevant in quantum theory? If so, where is it used and how is it useful? If so, why on earth does nobody ever talk about it? Are there possibly even more useful dynamical pictures of quantum theory? To anticipate a nitpick: you could probably interpret "interaction picture #2" as a normal interaction with a redefinition of the free and interacting Hamiltonian. However, I would argue that this defeats the point in many cases, since the free Hamiltonian often needs to be something simple. So swapping it for a complicated interaction Hamiltonian is a bit of a cheat and does not comply with the physical notion of each picture summarized above either. Answer: I am adding my own answer on a few of the points, since this was the solution to the discussion in chat. However, it is by no means complete and other answers are (needless to say) more than welcome! Yes! "Interaction picture #2" is frequently used when one considers quantum Langevin dynamics and quantum stochastic processes, such as for example the input-output formalism (see e.g. 1).
It is useful there, for example, when developing operator-based perturbation theory or operator scattering theory. In this field people usually just call it the interaction picture. However, it is funny to point out that this terminology is, in some sense, inconsistent with the textbook and Wikipedia definition of the interaction picture. There are only two places into which you can shift the free and interacting time dependences: states and operators. That means four should be the total number of useful dynamical pictures of quantum mechanics.
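To make picture #2 concrete (a sketch in my own notation, assuming a time-independent split $H = H_0 + H_{\text{int}}$; the symbols $\tilde U$ and $A_{II}$ are names I am introducing here, not standard ones): put the free evolution on the states,

$$|\psi_{II}(t)\rangle = e^{-iH_0 t/\hbar}\,|\psi(0)\rangle,$$

and the remainder of the evolution on the operators,

$$A_{II}(t) = \tilde U^\dagger(t)\, A\, \tilde U(t), \qquad \tilde U(t) = e^{-iHt/\hbar}\, e^{iH_0 t/\hbar}.$$

One can check directly that this reproduces all Schrödinger-picture matrix elements, since the two exponentials of $H_0$ cancel:

$$\langle\psi_{II}(t)|\,A_{II}(t)\,|\psi_{II}(t)\rangle = \langle\psi(0)|\, e^{iHt/\hbar}\, A\, e^{-iHt/\hbar}\,|\psi(0)\rangle.$$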
{ "domain": "physics.stackexchange", "id": 59236, "tags": "quantum-mechanics, quantum-field-theory" }
Finding missing numbers in ranges
Question: I have a list of (potentially overlapping) ranges, e.g. [(3, 9), (8, 10), (1, 18), (1, 1000000)]. I need to process them in order and for each range calculate how many numbers from the range have not been seen so far (i.e. not present in previous ranges). For example, for the above list the result would be [7, 1, 10, 999982]. A simple solution using a set:

```python
def get_missing(ranges):
    seen = set()
    result = []
    for start, end in ranges:
        missing = 0
        for n in range(start, end + 1):
            if n not in seen:
                missing += 1
                seen.add(n)
        result.append(missing)
    return result
```

How can I improve the performance of this solution (i.e. do it without looping through each number of each range)?

Answer: The primary problem with your current approach. As you know from previous answers and comments, the limitation of your current approach is that it falters when faced with very large intervals, which cause the inner loop to balloon in size. A little bit of progress can be made by taking fuller advantage of sets to perform the intersection logic (as shown below in the benchmarks). But that improvement is only modest: the sets eliminate the inner loop in your own code, but behind the scenes the Python sets are doing iteration of their own.

Some infrastructure. Let's create a simple dataclass to make the code more readable, a utility function to generate intervals to our specifications (many or few, big or small), and a utility function to benchmark the various approaches to the problem. We will compare your code (slightly modified to handle the new dataclass), a set-based alternative similar to the one in another answer, and a much faster approach using an IntervalUnion (to be discussed below). 
```python
import sys
import time
from dataclasses import dataclass
from random import randint

def main(args):
    intervals = tuple(create_intervals(
        n = 1000,
        start_limit = 100000000,
        max_size = 100000,
    ))
    funcs = (
        get_missing_orig,
        get_missing_sets,
        get_missing_interval_union,
    )
    exp = None
    for func in funcs:
        got, dur = measure(func, intervals)
        if exp is None:
            exp = got
        print(func.__name__, dur, got == exp)

@dataclass(frozen = True, order = True)
class Interval:
    start: int
    end: int

def create_intervals(n, start_limit, max_size):
    for _ in range(n):
        start = randint(0, start_limit)
        end = start + randint(0, max_size)
        yield Interval(start, end)

def measure(func, intervals):
    t1 = time.time()
    got = func(intervals)
    return (got, time.time() - t1)

def get_missing_orig(ranges):
    # Your original implementation, slightly adjusted.
    seen = set()
    result = []
    for x in ranges:
        missing = 0
        for n in range(x.start, x.end + 1):
            if n not in seen:
                missing += 1
                seen.add(n)
        result.append(missing)
    return result

def get_missing_sets(intervals):
    # A set-based approach.
    counts = []
    seen = set()
    for x in intervals:
        s = set(range(x.start, x.end + 1)) - seen
        counts.append(len(s))
        seen.update(s)
    return counts

def get_missing_interval_union(intervals):
    # An approach that just stores intervals, as few as possible.
    iu = IntervalUnion()
    return [iu.add(x) for x in intervals]

if __name__ == '__main__':
    main(sys.argv[1:])
```

The intuition behind IntervalUnion. We're trying to avoid two problems. First, we want to process and store only the intervals, not all of their implied values. Second, we don't want to end up having to make passes over an ever-growing collection of intervals. Instead, we would rather merge intervals whenever they overlap. If we can keep the size of the data universe in check, our computation will also be quick. 
For starters we need a couple of utility functions: one to tell us whether two intervals can be merged and, if so, how many of their values are overlapping; and another that can merge two intervals into one.

```python
def overlapping(x, y):
    # Takes two intervals. Returns a (CAN_MERGE, N_OVERLAPPING) tuple.
    # Intervals can be merged if they overlap or abut.
    # N of overlapping values is 1 + min-end - max-start.
    n = 1 + min(x.end, y.end) - max(x.start, y.start)
    if n >= 0:
        return (True, n)
    else:
        return (False, 0)

def merge(x, y):
    # Takes two overlapping intervals and returns their merger.
    return Interval(
        min(x.start, y.start),
        max(x.end, y.end),
    )
```

The data structure of an IntervalUnion. An IntervalUnion holds a SortedList of Interval instances. The SortedList provides the ability to add and remove intervals without having to keep the list sorted ourselves. The SortedList will do that work efficiently for us, and the various add/remove operations will operate on the order of O(logN) rather than O(N) or O(NlogN). The add() method orchestrates those details, which are explained in the code comments, and returns the number that you need -- namely, how many distinct values are represented by the interval we just added.

```python
from sortedcontainers import SortedList

class IntervalUnion:

    def __init__(self):
        self.intervals = SortedList()

    def add(self, x):
        # Setup and initialization:
        # - The N of values in the initial interval x.
        # - N of overlaps observed as we add/merge x into the IntervalUnion.
        # - Convenience variable for the SortedList of existing intervals.
        # - Existing intervals to be removed as a result of those mergers.
        n_vals = x.end - x.start + 1
        n_overlaps = 0
        xs = self.intervals
        removals = []

        # Get the index where interval x would be added in the SortedList.
        # From that location we will look leftward and rightward to find
        # nearby intervals that can be merged with x. To the left, we
        # just need to check the immediate neighbor. To the right, we
        # must keep checking until no more merges are possible.
        i = xs.bisect_left(x)
        for j in range(max(0, i - 1), len(xs)):
            y = self.intervals[j]
            can_merge, n = overlapping(x, y)
            if can_merge:
                # If we can merge, do it. Then add y to the list of intervals
                # to be removed, and increment the tally of overlaps.
                x = merge(x, y)
                removals.append(y)
                n_overlaps += n
            elif j >= i:
                # Stop on the first rightward inability to merge.
                break

        # Remove and add.
        for y in removals:
            xs.remove(y)
        xs.add(x)

        # Return the distinct new values added to the IntervalUnion.
        return n_vals - n_overlaps
```

Benchmarks. In terms of space, the IntervalUnion is quite efficient: it stores only intervals and it merges them whenever possible. At one extreme (all of the intervals overlap), the space used is O(1) because the IntervalUnion never contains more than one interval. At the other extreme (no overlap), the space used is O(N), where N represents the number of intervals. In terms of time, the IntervalUnion becomes faster than the other approaches when the interval sizes reach about 300 (at least in my limited number of experiments). When the intervals get even bigger, the advantages of the IntervalUnion are substantial. For example:

```
# max_size = 300
get_missing_orig 0.027595043182373047 True
get_missing_sets 0.01658797264099121 True
get_missing_interval_union 0.013303995132446289 True

# max_size = 1000
get_missing_orig 0.10612797737121582 True
get_missing_sets 0.054525136947631836 True
get_missing_interval_union 0.013611078262329102 True

# max_size = 10000
get_missing_orig 1.1063508987426758 True
get_missing_sets 0.5742030143737793 True
get_missing_interval_union 0.013240814208984375 True

# max_size = 100000
get_missing_orig 9.316476106643677 True
get_missing_sets 6.468451023101807 True
get_missing_interval_union 0.016165733337402344 True
```
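For completeness, here is a self-contained sketch of the same merged-interval idea without the sortedcontainers dependency (my own simplification, not part of the original answer: disjoint intervals kept in a plain sorted list, with a linear scan per insert, so O(n²) worst case instead of the SortedList's O(log n) lookups):

```python
def get_missing(ranges):
    # merged holds disjoint, sorted, closed intervals as (start, end) tuples.
    merged = []
    result = []
    for s, e in ranges:
        new_vals = e - s + 1
        overlap = 0
        keep = []
        for ms, me in merged:
            if me < s - 1 or ms > e + 1:
                # No overlap and no abutment: keep the interval as-is.
                keep.append((ms, me))
            else:
                # Count the overlapping values, then absorb (ms, me) into (s, e).
                overlap += max(0, min(me, e) - max(ms, s) + 1)
                s, e = min(s, ms), max(e, me)
        keep.append((s, e))
        keep.sort()
        merged = keep
        result.append(new_vals - overlap)
    return result

print(get_missing([(3, 9), (8, 10), (1, 18), (1, 1000000)]))
# → [7, 1, 10, 999982]
```

The linear scan keeps the sketch short; swapping the plain list for the SortedList-based IntervalUnion above restores the logarithmic lookups.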
{ "domain": "codereview.stackexchange", "id": 44400, "tags": "python, algorithm, interval" }
Why is urobilinogen reabsorbed from the gut just to be excreted via the kidneys?
Question: The Wikipedia article on Urobilin states: "Bilirubin is... excreted as bile, which is further degraded by microbes present in the large intestine to urobilinogen... Some is reabsorbed into the bloodstream and then delivered to [the] kidney." This process is puzzling to me since it doesn't mention any function or reason for urobilinogen to be absorbed into the body, aside from it being there so it can be removed by the kidneys. The only reasons I can speculate are that:

- Urobilinogen shares a transporter/receptor with a similar, useful compound.
- It has a direct useful function when in the bloodstream, not mentioned on the Wikipedia article or a few other sources I've skimmed.
- High urobilinogen is harmful to the gut and it's necessary to remove it via the blood to the kidneys.
- Urobilinogen is inherently able to diffuse across the intestinal barrier into blood as a result of its chemical structure.

Is it one of the above, or some other reason I haven't taken into account? Answer: I think that it is missing some details. Note that the vast majority of urobilinogen (~80%) is actually eliminated via fecal elimination. I would recommend looking at the article on enterohepatic circulation, which provides more context on this issue. Basically, bile salts can be useful (even or especially after metabolism by bacteria), so a fraction is reabsorbed by the large intestine and then dumped back into the digestive system: "However, just like bile, some of the urobilinogen reabsorbed is resecreted in the bile which is also part of enterohepatic circulation. The rest of the reabsorbed urobilinogen is excreted in the urine where it is converted to an oxidized form, urobilin, which gives urine its characteristic yellow color. ... The net effect of enterohepatic recirculation is that each bile salt molecule is reused about 20 times, often multiple times during a single digestive phase." 
Obviously urobilinogen isn't one of the salts recirculated that much if most of it gets excreted in feces, but apparently it still happens (possibly just because it is chemically similar to the other bile salts, as you suggest). I think that the mechanism which is emphasized in the urobilinogen wiki page is a secondary mechanism for excretion when urobilinogen is at toxic levels in the bloodstream (e.g. jaundice). These high levels come for example from blood breakdown as pointed out in the article, which may be emphasized because it is a common medical issue (which is of course not a focus of SE Biology). So, in other words, the wiki article is targeted towards the potential medical problems (toxic levels requiring elimination via kidney) rather than the normal physiology (mostly eliminated in feces, sometimes-useful salt that helps digest stuff), possibly leading to confusion. This flash card resource succinctly notes the functions of bile salts such as bilirubin and its derivatives and may be helpful for other basic info: Functions: (1) Digestion and absorption of lipids and fat-soluble vitamins (2) Cholesterol excretion (body's only means of eliminating cholesterol) (3) Antimicrobial activity (via membrane disruption) More about (unfortunately medical focused) physiology of bilirubin etc. can be found here.
{ "domain": "biology.stackexchange", "id": 11026, "tags": "physiology, gastroenterology" }
Enthalpy change defined at constant pressure only?
Question: Is change in enthalpy defined only at constant pressure? I know that $Q-W=\Delta U$, and $Q$ at constant pressure equals $\Delta H$ (enthalpy change). Can $Q$ (heat given to the system) be used interchangeably with $\Delta H$ in the first equation? Is $\Delta H = nC_p\Delta T$? How? Answer: Is change in enthalpy defined only at constant pressure? No. The definition of $\Delta H$ is $\Delta H = \Delta U + \Delta (PV)$ Simple as that. Can $Q$ (heat given to the system) be used interchangeably with $\Delta H$ in the first equation? No. Only if the applied pressure is held constant during the change. Is $\Delta H = nC_p\Delta T$? How? No. Only for an ideal gas. For real gases, this equation is not correct outside the limit of ideal gas behavior. For an ideal gas, $\Delta U=nC_V\Delta T$ and $\Delta (PV)=nR\Delta T$, so $\Delta H=n(C_V+R)\Delta T$. And then $C_P=(\partial H/\partial T)_P=C_V+R$, so $\Delta H=nC_P\Delta T$. For a real gas, liquid, or solid, $$dH=nC_PdT+\left[V-T\left(\frac{\partial V}{\partial T}\right)_P\right]dP$$ Note that the term in brackets is zero for an ideal gas.
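As a quick consistency check on the final formula (a short verification of my own, not part of the original answer): for an ideal gas, $PV=nRT$ gives $(\partial V/\partial T)_P = nR/P$, so the bracketed term vanishes,

$$V - T\left(\frac{\partial V}{\partial T}\right)_P = V - T\,\frac{nR}{P} = V - V = 0,$$

and the general expression correctly reduces to $dH = nC_P\,dT$ for an ideal gas.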
{ "domain": "chemistry.stackexchange", "id": 5435, "tags": "thermodynamics, enthalpy" }
Regarding density of states, not counting of kx, ky, kz of its negative wavefunctions
Question: In the density of states, we divide the count by 8 because we remove $k_x$, $k_y$, $k_z$ in the negative directions. Why is that? I understand that it's kind of intuitive, but I don't follow the intuition. Answer: For a free particle in a box of volume $L^3$, the eigenfunction will be: $$\psi_{n_x,n_y,n_z}(x, y, z)=\sqrt{\frac{8}{L^3}} \sin(\pi n_x x/L) \sin(\pi n_y y/L) \sin(\pi n_z z/L).$$ I can take the same eigenstate with a negative set of quantum numbers (I could also just flip the sign of one or two of them, the argument would be the same): $$\psi_{-n_x,-n_y,-n_z}(x, y, z)=-\sqrt{\frac{8}{L^3}} \sin(\pi n_x x/L) \sin(\pi n_y y/L) \sin(\pi n_z z/L).$$ But, in this case, wave functions with $(n_x, n_y, n_z)$ represent the same physical state as wave functions with $(\pm n_x, \pm n_y, \pm n_z)$ because they are just the same functions with a different sign. Thus, they are not distinct, and since the density of states is really counting the number of states in which the system can be found, if two wavefunctions represent the same physical state, you don't count them twice, just once. On the more mathematical side, you only want linearly independent wavefunctions to form a basis, and this is what you count: the number of linearly independent states. Since $\psi_{\vec n} = -\psi_{-\vec n}$, they are not linearly independent. Edit: Here is an interesting thing (in 1D for notational ease). If instead of considering hard boundary conditions with $\psi(0) = \psi(L) = 0$ you consider periodic boundary conditions, $\psi'(x) = \psi'(x + L)$, your eigenstate has the form $\psi'_{n}(x)\sim\exp\left(\frac{2n\pi ix}{L}\right)$, and you can remark that $\psi'_{n}(x) \neq \alpha\psi'_{-n}(x)$. They are genuinely different states. So here, we should not divide the density of states by 8. Everything is fine because in the case of hard boundary conditions you have $k= n \pi/L$ with $n\in \mathbb{N}$, while with periodic boundary conditions you have $k= 2n \pi/L$ with $n\in \mathbb{Z}$. 
So in the case of periodic boundary conditions, you would have a factor $V/(2\pi)^3 = \dfrac{1}{8}V/\pi^3$ instead of a factor $V/\pi^3$. The factor $1/8$ appears automatically, and you don't have to divide by 8 manually at the end to restrict your sphere to positive quantum numbers. The same thing happens also for the energy, see this PSE post. Edit 2: By "indistinguishable" he really means linearly independent wavefunctions (wavefunction1 $\neq \alpha\,$wavefunction2) and not wavefunctions with the exact same probability density distribution $|\psi(x)|^2$, because $|\psi'_{-n}(x)|^2 = |\psi'_{n}(x)|^2 \ \forall x$ for the periodic-boundary-condition wavefunctions, but the two states are genuinely different ones that both need to be counted in the density of states.
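The counting argument in the edit can also be checked numerically (a sketch of my own, not from the original answer): count the allowed $\vec k$ points with $|\vec k| \le K\pi/L$ for both boundary conditions. With hard walls, $k_i = n_i\pi/L$ and $n_i \ge 1$ (one octant); with periodic conditions, $k_i = 2\pi n_i/L$ and $n_i \in \mathbb{Z}$ (full sphere, but a grid twice as coarse). The two totals converge to the same value as $K$ grows, which is exactly the statement that the explicit factor of $1/8$ in one convention equals the $1/8$ hidden automatically in $V/(2\pi)^3$ in the other.

```python
def count_hard(K):
    # Hard walls: k_i = n_i * pi / L with n_i = 1, 2, 3, ...
    # |k| <= K*pi/L  <=>  n_x^2 + n_y^2 + n_z^2 <= K^2 (positive octant only).
    R = int(K)
    return sum(1 for nx in range(1, R + 1)
                 for ny in range(1, R + 1)
                 for nz in range(1, R + 1)
                 if nx * nx + ny * ny + nz * nz <= K * K)

def count_periodic(K):
    # Periodic box: k_i = 2*pi*n_i / L with n_i any integer (full sphere).
    # |k| <= K*pi/L  <=>  n_x^2 + n_y^2 + n_z^2 <= (K/2)^2.
    R = int(K / 2)
    return sum(1 for nx in range(-R, R + 1)
                 for ny in range(-R, R + 1)
                 for nz in range(-R, R + 1)
                 if nx * nx + ny * ny + nz * nz <= (K / 2) ** 2)

for K in (10, 20, 40):
    h, p = count_hard(K), count_periodic(K)
    print(K, h, p, round(h / p, 3))  # ratio approaches 1 as K grows
```

The residual difference at finite $K$ comes from the excluded $n_i = 0$ planes of the hard-wall octant, a surface effect that vanishes in the large-$K$ (thermodynamic) limit.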
{ "domain": "physics.stackexchange", "id": 98392, "tags": "statistical-mechanics, solid-state-physics" }
High Dimensional Data Structures
Question: I have a 20-dimensional dataset with a large number of data points. I would like to have each dimension discretized into bins. Per bin, I would like to be able to access two neighbours per dimension (i.e. +1 and -1 per dimension). Basically, I want to be able to easily access 2*d (where d is the dimensionality) neighbours. For lower dimensions, this would be quite easy to do using a multidimensional array (i.e. for point data[0][1][2] I would access its neighbours data[0][1][3] and data[0][1][1] for the third dimension). However, when this approach is scaled up to higher dimensions, memory becomes an issue. What kinds of data structures would be suitable to use, where the most important criterion is quick and easy access to the +1 and -1 neighbours? Answer: I would suggest using hierarchical k-means, or a hierarchical vocabulary tree approach. For example: http://gecco.org.chemie.uni-frankfurt.de/hkmeans/H-k-means.pdf http://www.vlfeat.org/overview/hikm.html CVPR paper with application: http://www.vis.uky.edu/~stewe/publications/nister_stewenius_cvpr2006.pdf FLANN also has some hierarchical capabilities.
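To make the neighbour-access pattern from the question concrete (a sketch of my own, separate from the hierarchical k-means the answer recommends): a sparse hash grid keyed by bin-index tuples stores only occupied bins, so memory scales with the number of data points rather than with the full 20-dimensional grid, and each of the 2*d axis-aligned neighbours is a single O(1) dictionary lookup:

```python
def neighbors(key):
    # Yield the 2*d axis-aligned neighbours (+1 / -1 per dimension)
    # of a d-dimensional bin-index tuple.
    for dim in range(len(key)):
        for delta in (-1, 1):
            yield key[:dim] + (key[dim] + delta,) + key[dim + 1:]

# Sparse 20-D grid: only occupied bins consume memory.
grid = {
    (0,) * 20: "a",
    (1,) + (0,) * 19: "b",   # differs from "a" only in dimension 0
}

center = (0,) * 20
found = [grid[k] for k in neighbors(center) if k in grid]
print(found)  # only "b" occupies a neighbouring bin of "a"
```

This gives the fast ±1 access the question asks for, at the cost of giving up the implicit ordering a dense array provides; range queries beyond immediate neighbours would still favour tree-based structures like those in the answer.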
{ "domain": "cs.stackexchange", "id": 3409, "tags": "data-structures, big-data" }