Grad Checking, verify by average?
Question: I am running Gradient Checking to spot any discrepancy between my mathematically computed gradient and the actual sampled gradient, to confirm that my backprop was implemented correctly. When computing such a discrepancy, can I sum up squares of differences, then take their average? I could then use this average as my estimate of how correctly the network computes the gradient: $$\frac{1}{m}\sum_{i=1}^{m}(g_i-n_i)^2$$ or even: $$\sqrt{\sum_{i=1}^{m}(g_i-n_i)^2}$$ where $g$ is a gradient from backpropagation, and $n$ is a gradient from gradient checking. However, Andrew Ng instead recommends: $$\frac{\vert \vert (g-n) \vert \vert _2 }{ \vert \vert g \vert \vert _2 + \vert \vert n \vert \vert _2}$$ where $\vert \vert . \vert \vert _2$ is the length of the vector. Another post also recommends a slightly different approach: https://stats.stackexchange.com/a/188724/187816 Why would their approaches be better than mine? Answer: Let me give you an example where Andrew's recommendation works better than yours. Let's say that the real gradient is $(0, 0, 0)$ and the gradient you have computed is $(10^{-4}, 10^{-4}, 10^{-4})$. Then your average would return $10^{-8}$, while Andrew's recommendation would return $1$. Your metric could fool you into thinking that your gradient is computed properly and the error is just due to a numeric issue, while Andrew's cannot, because it accounts for the fact that the gradient can be very small. To wrap up: if your gradient doesn't have norm close to zero, it doesn't really matter. However, when the gradient is close to zero you can be fooled into thinking that your gradient is right when it is not.
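The two metrics are easy to compare directly. A minimal sketch (function names mine) reproducing the example from the answer:

```python
import math

def mean_squared_diff(g, n):
    """The questioner's metric: the mean of squared differences."""
    return sum((gi - ni) ** 2 for gi, ni in zip(g, n)) / len(g)

def relative_norm_diff(g, n):
    """Andrew Ng's metric: ||g - n||_2 / (||g||_2 + ||n||_2)."""
    num = math.sqrt(sum((gi - ni) ** 2 for gi, ni in zip(g, n)))
    den = math.sqrt(sum(gi * gi for gi in g)) + math.sqrt(sum(ni * ni for ni in n))
    return num / den

# Real gradient is zero, computed gradient is a tiny constant vector:
g = [0.0, 0.0, 0.0]
n = [1e-4, 1e-4, 1e-4]
print(mean_squared_diff(g, n))   # ~1e-08 -> looks harmless
print(relative_norm_diff(g, n))  # 1.0    -> flags a 100% relative error
```

Because the second metric normalizes by the gradient magnitudes, a tiny absolute error on a tiny gradient still shows up as a large relative discrepancy.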
{ "domain": "datascience.stackexchange", "id": 3001, "tags": "machine-learning, optimization, gradient-descent" }
Is it better to use software or hardware for object sorting?
Question: For a sorting machine where the objects to be sorted have various sizes, colors, shapes, and patterns, is it more optimal (in terms of minimal time of the overall process and maximal precision and accuracy) to use a sorting algorithm or to use different dimensions in the physical design to do the sorting? Answer: It really depends on the objects you are sorting, and the flexibility you're looking for as the system is maintained over the years. Using physical methods to sort is very reliable. Many times you can align the parts, such as by using curves, chutes, vibration, and parts feeding mechanisms. Look up vibratory bowl feeders and cap feeders for examples. You can sometimes pre-sort a large collection of different products into smaller groups of products, too, and the solution for one group may be different from the solution for the other groups. If you can properly orient and align the objects, it may be simpler to use physical sorting methods than trying to identify the different parts using software. But this isn't a black-and-white statement. For instance, if one of your parts is always higher than the rest of the parts, a simple through-beam sensor located higher than all but the tallest product, tied to an ejecting cylinder, is quite straightforward to implement. In this case, there is both software and hardware involved. However, as the product mix changes over time, you may have to redesign the hardware sorting mechanisms to accommodate those changes. This redesign can sometimes be quite difficult. If you want the most flexibility (as in the case where the parts you are sorting may change over time), then a vision-guided system offers this. However, even these systems will benefit by first controlling the fed parts before they encounter the vision system. This is especially true if the objects have different visual characteristics when viewed from one side as compared to being viewed from another side. 
You can use speed changes of conveyors to guarantee spacing between parts to facilitate the vision processing, plus other parts feeding techniques that reduce the area in which your vision system must operate. So again, it is a combination of mechanical and software means. The software-guided sorting usually takes longer to set up and train, and you'll have to deal with things like color variations, structured lighting, and possibly speed issues. But it offers greater flexibility as requirements change. Many robotics manufacturers offer very powerful vision-based add-ons (Fanuc is one), which help shorten the vision-to-robot integration time. Therefore "it depends."
{ "domain": "robotics.stackexchange", "id": 1273, "tags": "sensors, design, automation" }
Quantum input-output theory : Why do we multiply by density of mode to have a number of photon **per unit of time**
Question: In this paper, https://journals.aps.org/pra/abstract/10.1103/PhysRevA.31.3761, we work with input-output theory. I will first summarize the physics of it and then ask my question. In input-output theory we model everything by saying: I send an input field, it interacts with a quantum system, and after the interaction I have an output field. ==Input==> [Interaction of field with Q.Syst] ==Output==> The Hamiltonian is: $$ H = H_s + H_{field} + H_{int} $$ where $H_{field}=\int d\omega ~ \hbar \omega b^{\dagger}(\omega) b(\omega) $. It is not necessary to make the other parts explicit for my question. In the Heisenberg picture, if we didn't have any interaction, the field would freely evolve such that $$b(\omega,t)=e^{-i \omega t} b^0(\omega)$$ Before moving further, it is important to underline that $b^{\dagger}(\omega) b(\omega)$ is not the number of photons at frequency $\omega$, because it has the dimension of a time. This is basically what causes me all the trouble. So if I work with a discrete number of modes, I would write: $$ \sum_k \hbar \omega_k a^{\dagger}(\omega_k) a(\omega_k) = \int d \omega ~ \hbar \omega \nu(\omega) a^{\dagger}(\omega) a(\omega) $$ and I would identify $b(\omega) = \sqrt{\nu (\omega)} a(\omega)$, where $\nu(\omega) d \omega$ is the number of modes I have in $[\omega; \omega+ d \omega]$. We define the input field as: $$b_{in}(t)=\frac{1}{\sqrt{2 \pi}} \int d\omega ~ e^{-i \omega t} b^0(\omega)$$ It looks like a Fourier transform, but the way I understand it is more: we evolve all modes to time $t$ in the Heisenberg picture assuming they are not interacting (which is the case for the input field before the interaction), and we sum over those modes: it is the definition of the total input field. My question: In this paper (and more generally every time input-output theory is used), they say that $<b^{\dagger}_{in}(t) b_{in}(t)>$ is the number of photons per unit time I have at time $t$ in my input field. I don't understand this. 
I agree it has the right dimension, but why would this quantity physically represent that? Answer: Some remarks: From the question (emphasis mine): Before moving further, it is important to underline that $b^†(ω)b(ω)$ is not the number of photons at frequency $ω$ because it has the dimension of the inverse of a time. How so? According to $H_{field}=\int d\omega ~ \hbar \omega b^{\dagger}(\omega) b(\omega)$ we ought to have that $ \int d\omega ~ \hbar \omega \langle b^{\dagger}(\omega) b(\omega)\rangle$ is the total energy of the bath in the absence of interactions with the system. Therefore $\langle b^{\dagger}(\omega) b(\omega)\rangle$ is simply the number of photons per unit frequency. The dimension is therefore inverse frequency, not inverse time. $\sum_k \hbar \omega_k a^{\dagger}(\omega_k) a(\omega_k) = \int d \omega ~ \hbar \omega \nu(\omega) a^{\dagger}(\omega) a(\omega)$ It is worth noting here that in this step an approximation is being performed, namely the discrete system modes are being replaced by a continuum. Also note that the notation is slightly abusive, since the $a^\dagger$ operator changes units from the left to the right. We define the input field as: $b_{in}(t)=\frac{1}{\sqrt{2 \pi}} \int d\omega ~ e^{-i \omega t} b^0(\omega)$ It looks like a Fourier transform, but the way I understand it is more: we evolve all modes to time $t$ in the Heisenberg picture assuming they are not interacting (which is the case for the input field before the interaction), and we sum over those modes: it is the definition of the total input field. I quite like this interpretation. Let me rephrase it a bit: at the start you have a quantum mechanical "wave packet" that is built from a superposition of bath modes, each evolving freely at frequency $\omega$. Note that this contains the notion of being a boundary condition at time $t_0$ (indicated only by the zero superscript in this formula). 
In many cases this will be considered asymptotically, with $t_0$ approaching the infinite past. $b_{in}(t)=\frac{1}{\sqrt{2 \pi}} e^{-i \omega t} b^0(\omega)=\frac{1}{\sqrt{2 \pi}} e^{-i \omega t} \sqrt{\nu(\omega)} a^0(\omega)$ This formula in the question is not completely correct, especially the continuum system modes on the right-hand side. It is certainly not what Gardiner & Collett have in their paper (see formula (2.22)). They only have a single discrete mode. If you want to have a continuum of system modes, there should at least be an integral for that somewhere, unless your coupling is local in frequency. But the latter would just correspond to a single-mode problem with messy notation again. Either way the units are wrong in this, as pointed out in remark 2. I initially thought that this was where the confusion came from, but after StarBucK's I am adding this edit to address the real question: EDIT: So now that we have understood the units of $b(\omega)$, which was also nicely explained again in an answer by jgerber that was posted since, we can look at the units of $b_\textrm{in}(t)$. To understand this let us investigate the definition of the input operators a bit further. The original bath operator $b(\omega)$ is in the Heisenberg picture (as also pointed out by jgerber). So this operator is already time dependent and could (or maybe should) be written $b(t, \omega)$. As we saw above, what $\langle b^\dagger(t, \omega) b(t, \omega) \rangle$ then represents physically is the number of photons per unit frequency (so "per mode") at time $t$ (not per unit time). So in other words: $b(t, \omega)$ is our standard photon operator, just for a continuum, not for a discrete mode. 
The definition of the input operator can then be written: $$b_{\textrm{in}}(t)=\frac{1}{\sqrt{2 \pi}} \int d\omega ~ e^{-i \omega (t-t_0)} b(t_0, \omega)$$ Note that $e^{i \omega t_0} b(t_0, \omega)$ is physically the photon operator in the interaction picture (that is, with the free time evolution taken out) at time $t_0$. So if you have no interactions, then $e^{i \omega t_0} b(t_0, \omega)$ is actually independent of $t_0$. This means $b_{\textrm{in}}(t)$ is really the Fourier transform of the interaction picture operator at time $t_0$. To make it a bit clearer what I am saying, you can also define an input operator in the frequency domain. The definition is just a Fourier transform again and if we evaluate this Fourier transform, we get a very simple result: $$b_{\textrm{in}}(\omega) = \frac{1}{\sqrt{2 \pi}} \int dt e^{i \omega t} b_{\textrm{in}}(t) = e^{i \omega t_0} b(t_0, \omega).$$ So the input operator in the frequency domain is exactly the interaction picture photon operator at time $t_0$! Mathematically this is all simple: we are just doing Fourier transforms back and forth. But physically, this gives a lot of insight into what the input operators mean, in my opinion. So suppose that the expectation value of these frequency space input operators is some function $\langle b^\dagger_{\textrm{in}}(\omega) b_{\textrm{in}}(\omega) \rangle = I(\omega)$. I have called this function $I$ on purpose, because it represents the intensity spectrum that you send into your system at time $t_0$. So if you have some wavepacket flying towards your cavity/interaction region, the input operators give you the spectrum of this wavepacket. The time-frequency relation then behaves very similarly to classical optics. 
We have $$ \omega\textrm{-domain amplitude} \xleftarrow[]{\textrm{Expectation value}}b_{\textrm{in}}(\omega) \xrightarrow[]{\textrm{Fourier transform}} b_{\textrm{in}}(t) \xrightarrow[]{\textrm{Expectation value}} t\textrm{-domain amplitude}$$ So my advice is to think about it in terms of wavepackets. $\langle b^\dagger_{\textrm{in}}(\omega) b_{\textrm{in}}(\omega) \rangle$ gives you the number of photons per unit frequency at frequency $\omega$ in the wavepacket. $\langle b^\dagger_{\textrm{in}}(t) b_{\textrm{in}}(t) \rangle$ gives you the number of photons per unit time at time $t$ in the wavepacket. Here is a picture (source): The picture is for classical fields, so you do not have the whole business with expectation values, but the principle is the same. $|E(\omega)|^2$ is the intensity per unit frequency at frequency $\omega$, $|E(t)|^2$ is the intensity per unit time at time t. Summary: after stripping away the weirdness of the definition of input operators and the interaction picture, this is really just Fourier transforming wave packets.
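The classical analogy at the end can be checked numerically. A small sketch (demo values mine; a plain unitary DFT stands in for the continuous Fourier transform): by Parseval's theorem, the wave packet carries the same total "energy" whether it is resolved per unit time or per unit frequency.

```python
import cmath
import math

N = 256
dt = 0.05
w0 = 10.0  # carrier frequency of the packet (arbitrary demo value)

# Gaussian wave packet in the time domain, E(t) ~ exp(-t^2) * exp(-i w0 t)
E_t = [cmath.exp(-((k - N / 2) * dt) ** 2) * cmath.exp(-1j * w0 * k * dt)
       for k in range(N)]

# Unitary discrete Fourier transform: E(w) = (1/sqrt(N)) * sum_k E(t_k) e^{2 pi i j k / N}
E_w = [sum(E_t[k] * cmath.exp(2j * math.pi * j * k / N) for k in range(N)) / math.sqrt(N)
       for j in range(N)]

total_t = sum(abs(e) ** 2 for e in E_t)  # "intensity per unit time", summed
total_w = sum(abs(e) ** 2 for e in E_w)  # "intensity per unit frequency", summed
print(abs(total_t - total_w) < 1e-6)     # True: same total in both domains
```

The same total shows up in either domain; only the density ("per unit time" vs "per unit frequency") changes, which is the point of the $|E(t)|^2$ vs $|E(\omega)|^2$ picture.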
{ "domain": "physics.stackexchange", "id": 59176, "tags": "quantum-mechanics, quantum-field-theory, quantum-optics, open-quantum-systems, cavity-qed" }
Suggestion about head transplant
Question: The success of a head transplant surgery depends on the acceptance of the immune cells of the body onto which the head is going to be grafted. Could stem cells from the thymus gland and bone marrow of the head donor be transplanted into the body to which the head is going to be joined, so that those stem cells produce lymphocytes which will accept the transplanted head? Could this be a way to increase the probability of success of a head transplant operation? Answer: I think that immune rejection is not the only major problem in head transplantation. We are quite skilled at suturing vasculature, but it is not yet solved how to make new and proper connections in the spinal cord. Concerning your question: if you just took these naive lymphocytes, they would probably get destroyed by the acceptor's immune system pretty soon. If this weren't the case, the approach would probably already be in use for other transplantations too. What you possibly could do is a bone marrow transplantation, where you would replace the bone marrow of the acceptor with that of the donor. However, this would cause the opposite problem: the donor cells attacking the acceptor's body. For now, lifetime immunosuppression is the only working way to go. To get a better idea about how exactly the mechanism of rejection works and the prospective future therapies, see for example: http://emedicine.medscape.com/article/432209-overview#a8
{ "domain": "biology.stackexchange", "id": 7104, "tags": "transplantation" }
I want to understand a trick in the derivation of the Schwinger-Dyson equations
Question: In the book of Ashok Das, Field Theory: A Path Integral Approach, he begins the demonstration of the Schwinger-Dyson equation using the fact that $\delta Z[J]=0$, so \begin{equation} \delta Z[J]=\int \mathcal{D} \phi \frac{\delta S[\phi,J]}{\delta \phi(x)} e^{iS[\phi,J]}=0, \end{equation} but we already know that \begin{equation} \frac{\delta S[\phi, J]}{\delta \phi(x)}=F(\phi(x))-J(x), \end{equation} where $F(\phi(x))$ is the equation of motion. So if we go back to the first equation and use the identification \begin{equation} \phi(x)\rightarrow -i\frac{\delta}{\delta J(x)}, \end{equation} we conclude that $$ \int \mathcal{D}\phi\left(F(\phi(x))-J(x)\right)e^{iS[\phi,J]}=\left(F\left(-i\frac{\delta }{\delta J(x)}\right)-J(x)\right)\int \mathcal{D}\phi e^{iS[\phi,J]} $$ $$ \left(F\left(-i\frac{\delta }{\delta J(x)}\right)-J(x)\right)Z[J]=\left(F\left(-i\frac{\delta }{\delta J(x)}\right)-J(x)\right)e^{iW[J]}=0. $$ But it is here where I get lost: how did he pass from the above equation to $$ e^{-iW[J]}\left(F\left(-i\frac{\delta }{\delta J(x)}\right)-J(x)\right)e^{iW[J]}=F\left(\frac{\delta W[J]}{\delta J(x)}-i\frac{\delta}{\delta J(x)}\right)-J(x)=0. $$ A more important question is: what does this last equation mean at all, mathematically speaking, since the functional derivative now acts on nothing? Answer: The steps that you already understand showed that $$ \left(F\left(-i\frac{\delta}{\delta J(x)}\right) -J(x)\right)e^{i W[J]}=0. \tag{1} $$ This clearly implies $$ \left(F\left(-i\frac{\delta}{\delta J(x)}\right) -J(x)\right)e^{i W[J]}c=0, \tag{2} $$ where $c$ is any constant. Now use the identity $$ -i\frac{\delta}{\delta J(x)}e^{iW[J]}h[J] = e^{iW[J]}\left(\frac{\delta W[J]}{\delta J(x)} -i\frac{\delta}{\delta J(x)}\right)h[J] \tag{3} $$ to move the factor of $e^{iW[J]}$ from the right-hand side of equation (2) to the left-hand side, where $h[J]$ is an arbitrary functional. 
The result is the last equation shown in the question, except that here I've written it with an arbitrary constant $c$ on the right-hand side, so that the variational derivatives always have something to act on, even if it's something trivial. The book apparently just didn't bother writing this arbitrary constant. The remaining functional derivatives $\delta/\delta J$ are still important, because they're inside the argument of $F(\cdots)$, so they still act on the $J$-dependent factors that are also inside the argument of $F(\cdots)$. (For an example, suppose $F[X]=X^2$.) This detail is exactly what makes the Schwinger-Dyson equations different from the classical equation of motion for $\phi(x) := \delta W[J]/\delta J(x)$.
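To make the parenthetical example concrete (my own expansion, not from the book): for $F[X]=X^2$, acting on the constant $c$, the inner application of the operator gives $\frac{\delta W[J]}{\delta J(x)}\,c$, and the outer application then hits this $J$-dependent factor: $$\left(\frac{\delta W[J]}{\delta J(x)}-i\frac{\delta}{\delta J(x)}\right)^2 c=\left[\left(\frac{\delta W[J]}{\delta J(x)}\right)^2-i\,\frac{\delta^2 W[J]}{\delta J(x)\,\delta J(x)}\right]c.$$ The second-derivative term, produced by the surviving $\delta/\delta J(x)$ acting on $\delta W[J]/\delta J(x)$, is exactly the kind of correction that is absent from the classical equation of motion.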
{ "domain": "physics.stackexchange", "id": 66738, "tags": "quantum-field-theory, path-integral, correlation-functions" }
Can't get transforms using waitForTransform
Question: Hello everyone. I recorded a bag file using messages generated by the Stage simulator; thus, when the bag file is played, both odometric and scanner messages are sent again. I wrote a function that should receive a laser message and find the associated odometric message, using waitForTransform, on my professor's suggestion:

    void messageListener::laserCallback(const sensor_msgs::LaserScan msg){
        //std::string frame_id = msg->header.frame_id;
        ros::Time t = msg.header.stamp;
        tf::StampedTransform laserPose;
        std::string error;
        if (listener->waitForTransform("/odom", "/base_scan", t, ros::Duration(0.5), ros::Duration(0.01), &error)){
            listener->lookupTransform("/odom", "/base_scan", t, laserPose);
            double yaw, pitch, roll;
            tf::Matrix3x3 mat = laserPose.getBasis();
            mat.getRPY(roll, pitch, yaw);
            pos_node pn;
            pn.x = laserPose.getOrigin().x();
            pn.y = laserPose.getOrigin().y();
            pn.angle = yaw;
            const vector<float>& ranges = msg.ranges;
            scan_node temp_sn;
            temp_sn.cartesian = polar2cart(ranges, msg.angle_min, msg.angle_increment, msg.range_min, msg.range_max);
            vector<scan_node> tempScanVec;
            tempScanVec.push_back(temp_sn);
            extractCorners(&tempScanVec, 0.1, 0.1, 0.3);
            line2angle(&tempScanVec);
            pn.corn_vec = tempScanVec[0].corners_list;
            nodeInfo.push_back(pn);
        }
        else
            printf("Nothing to do\n");
    }

The problem is that the IF block where the transform should be found is never executed and I always get the "Nothing to do" printout; this makes me think the transform is never found. I suspect the problem may be with the parameters of the function itself. This is how I perform the subscription:

    ros::NodeHandle n;
    ros::Subscriber sub_n = n.subscribe("/base_scan", 1000, &messageListener::laserCallback, this);

Does anyone know where I'm going wrong? Thanks

Originally posted by ubisum on ROS Answers with karma: 23 on 2014-01-22 Post score: 0

Answer: Increase the duration for which it waits until you see something. 
Also insert a printf() in your if block; it could be working sometimes without you seeing it. Do you also see odom and base_scan when you do a rostopic echo /tf? Originally posted by davinci with karma: 2573 on 2014-01-23 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by ubisum on 2014-01-23: Thanks for the answer. I increased the fourth argument of waitForTransform to 1.5 seconds, but this slowed down my program; I added a printout of the error returned by waitForTransform, but it never got printed. This suggests the IF block is never entered. What about the first two arguments? Do you think they're right? Comment by ubisum on 2014-01-23: They represent the two topics Stage uses for advertising. Comment by davinci on 2014-01-23: The first two arguments should be parts of your robot, not topics. Do you also see these when you do a rostopic echo /tf? Comment by ubisum on 2014-01-23: Thanks again for the answer. My original problem was to couple odometric and laser messages generated from the same robot pose. At the beginning, I used two queues, one for each kind of message. Then, I took a message from the top of each queue, since they were for sure generated from the same pose... Comment by ubisum on 2014-01-23: ... they could be combined, then, in one object. My professor told me to use tf to do the same and I'm trying to adapt a snippet of code he gave me. Anyway, rostopic echo /tf gives no output. Do you know how to use tf for my goal? Comment by davinci on 2014-01-23: If rostopic echo /tf doesn't give any output then the code will not work. This means tf is not running and asking for transformations will not work. Your bag file should also contain tf messages or some other way to track transformations between frames. Comment by ubisum on 2014-01-24: This is what I suspected, after reading the tf tutorial. It seemed clear that robot frames must be inserted into the tf tree before trying to retrieve transforms. 
Professor told me that Stage automatically publishes frames to /tf, but it seems untrue. Moreover, tf is not listed in Stage's published topics. Comment by ubisum on 2014-01-24: ... do you think there's a way to force Stage to publish on /tf? Is it possible to exploit the bag file I'm currently using without generating a new one? Thanks Comment by davinci on 2014-01-26: I don't know much about Stage. Perhaps you should start a new question for that. But with rxbag you can check what is in the bag file. If there is nothing in the tf stream you have to generate a new bag file.
{ "domain": "robotics.stackexchange", "id": 16741, "tags": "simulation, stage, transform" }
Fluid Mechanics Problem - Ball in Reservoir
Question: I'm having a problem starting and solving the following problem: My attempt at the solution was to realize you need to balance the upward forces with the downward forces, in which case you have a buoyant force for the upward force, and downward you have the weight of the ball and the pressure on the ball at the height it's at. So then to solve it, I assume you equate both and then you can find the specific gravity? Thanks, Answer: You are close. The forces on the ball are 1. gravity, 2. water pressure, and 3. air pressure from the hole at the bottom. For convenience you can assume the air pressure is 0, or you can keep it in; it will drop out of the final answer. Now the trick is that the water pressure would produce a net buoyant force given by Archimedes' principle IF the ball were completely surrounded by water. But it is not completely surrounded. You need to subtract from the buoyant force the force that would come from the area over the hole, if the hole were also full of water. Since the hole is small compared to the ball, you can probably just estimate this as the area of the hole times the water pressure at the bottom of the tank. And there you go.
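As a sketch of the resulting balance (symbols are my own, since the original figure is not shown): let the ball have volume $V$ and density $\rho_b$, let the water depth be $h$, let the hole area be $A_h$, and work in gauge pressure so the air pressure drops out. Equilibrium then reads $$\underbrace{\rho_w g V}_{\text{Archimedes}}-\underbrace{\rho_w g h\,A_h}_{\text{missing over the hole}}-\underbrace{\rho_b g V}_{\text{weight}}=0 \quad\Longrightarrow\quad SG=\frac{\rho_b}{\rho_w}=1-\frac{h A_h}{V},$$ i.e. the specific gravity follows once the geometry ($h$, $A_h$, $V$) is known.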
{ "domain": "physics.stackexchange", "id": 1674, "tags": "gravity, homework-and-exercises, fluid-dynamics" }
Discrete random variable generator
Question: Here is my SSCCE to generate a value of a discrete random variable. values is the set of values the RV can take and procents is the discrete pdf. Can you anticipate any issues with this snippet?

    import java.util.Random;

    public class RandomTest {

        public static void main(String[] args) throws Exception {
            RandomTest rt = new RandomTest();
            int[] values = {0, 1, 2};
            int[] procents = {30, 60, 10};
            for (int i = 0; i < 10; i++) {
                System.out.print(rt.discreteRV(values, procents) + " ");
            }
        }

        public int discreteRV(int[] values, int[] procents) throws Exception {
            if (values == null || procents == null)
                throw new Exception("Input parameters are null");
            if (values.length != procents.length)
                throw new Exception("Input parameters length mismatch");
            int sumProcents = 0;
            for (int i = 0; i < procents.length; i++) {
                if (procents[i] < 0)
                    throw new Exception("Negative procents are not allowed");
                sumProcents += procents[i];
            }
            if (sumProcents != 100)
                throw new Exception("Sum of procents is not 100");
            int rand = new Random().nextInt(100);
            int left = 0, right = 0;
            for (int i = 0; i < procents.length; i++) {
                right += procents[i];
                if (rand >= left && rand < right)
                    return values[i];
                left = right;
            }
            throw new Exception("");
        }
    }

Answer: In general, this looks ok, i.e. it works. I will put some notes on the lines of your method and provide my suggestion after that. Spelling: procent is not an English word. Well, the compiler does not care, but the next programmer probably will.

    public int discreteRV(final int[] values, final int[] procents) throws Exception {

To throw an exception of type Exception does not provide any help. The purpose of exceptions is to communicate errors. No one can tell the kind of error from Exception or do anything about it. It should be avoided in throw statements.

    if (values == null || procents == null)
        throw new Exception("Input parameters are null");

While checking your contract is a good idea, you do not gain anything here. 
Java will throw a NullPointerException with nearly the same message anyway if you try to access them. As long as there are no plans to do anything special, I would not waste lines on this.

    if (values.length != procents.length)
        throw new Exception("Input parameters length mismatch");

I would throw a runtime exception. IllegalArgumentException looks suitable here.

    int sumProcents = 0;
    for (final int procent : procents) {
        if (procent < 0)
            throw new Exception("Negative procents are not allowed");
        sumProcents += procent;
    }

For the exception: same as above. For the check: this is not valid for all input. Consider (and/or try):

    final int[] procents = { 1234567890, 1234567890, 1825831616 };

If you want to check the input, check both sides.

    if (sumProcents != 100)
        throw new Exception("Sum of procents is not 100");

For the exception: same as above.

    final int rand = new Random().nextInt(100);

If you use this method frequently, make it static (and/or even use ThreadLocalRandom).

    int left = 0, right = 0;
    for (int i = 0; i < procents.length; i++) {
        right += procents[i];
        if (rand >= left && rand < right)
            return values[i];
        left = right;
    }

The left check is not needed. And if you check for something in between, support your readers: instead of rand >= left && rand < right, try to read left <= rand && rand < right.

    throw new Exception("");

For the exception: same as above. An IllegalStateException looks suitable here. 
    }

Suggestion:

    public int discreteRV(final int[] values, final int[] percentages) {
        if (values.length != percentages.length)
            throw new IllegalArgumentException("values.length != percentages.length");
        int sumPercentages = 0;
        for (int i = 0; i < percentages.length; ++i) {
            if (percentages[i] < 0)
                throw new IllegalArgumentException("Negative percentages are not allowed: percentages[" + i + "] = " + percentages[i]);
            sumPercentages += percentages[i];
            if (sumPercentages > 100)
                throw new IllegalArgumentException("Sum > 100");
        }
        if (sumPercentages != 100)
            throw new IllegalArgumentException("Sum of percentages is not 100");
        final int randomIntUpTo100 = random.nextInt(100);
        int threshold = 0;
        for (int i = 0; i < percentages.length; i++) {
            threshold += percentages[i];
            if (randomIntUpTo100 < threshold)
                return values[i];
        }
        throw new IllegalStateException("No value found. rand: " + randomIntUpTo100);
    }

More ideas: The arguments (arrays) could be replaced by an object. This could make the handling easier; it depends on the use case. The two loops could be combined into one loop, but you will lose the check for sum == 100 then.
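The overflow example above can be checked directly: with int arithmetic the three large values wrap around and sum to exactly 100, so a plain sum != 100 check passes even though the input is nonsense. A small standalone demo (class name mine):

```java
public class OverflowDemo {
    public static void main(String[] args) {
        // The reviewer's adversarial input: each entry is huge, yet...
        int[] procents = { 1234567890, 1234567890, 1825831616 };
        int sum = 0;
        for (int p : procents) {
            sum += p; // silently wraps past Integer.MAX_VALUE
        }
        // 1234567890 + 1234567890 + 1825831616 = 4294967396 = 2^32 + 100
        System.out.println(sum); // prints 100, so "sum == 100" holds
    }
}
```

This is why the suggested version checks sumPercentages > 100 inside the loop, before the running total can ever wrap.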
{ "domain": "codereview.stackexchange", "id": 2825, "tags": "java, random, generator" }
What is a CFG of $\{a^i b^j c^k \mid k = |i-j| \}$?
Question: What steps should I take to find a context-free grammar for $L = \{a^i b^j c^k \mid k = |i-j| \}$? Answer: Use the fact that context-free languages are closed under union, and start by considering cases, whether $i\ge j$ or $j\ge i$. In the first case we get the language $L_1 = \{ a^i b^j c^k \mid k+j=i\}$, which is the same as $L_1 = \{ a^{k+j} b^j c^k \mid k,j \ge 0\}$. Make a grammar for $L_1$, and then consider the second case.
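One candidate grammar for $L_1$ (my own sketch, not given in the answer) is $S \to aSc \mid T$, $T \to aTb \mid \varepsilon$: the $S$-rules pair each $c$ with an outer $a$, and the $T$-rules pair each $b$ with a further $a$, yielding $a^{k+j}b^jc^k$. A brute-force check up to a bounded length:

```python
def expand(sym, depth):
    """All terminal strings derivable from `sym` within `depth` rule applications."""
    if depth == 0:
        return set()
    if sym == 'S':  # S -> a S c | T
        return {'a' + s + 'c' for s in expand('S', depth - 1)} | expand('T', depth - 1)
    else:           # T -> a T b | epsilon
        return {'a' + t + 'b' for t in expand('T', depth - 1)} | {''}

# Every string a^(k+j) b^j c^k of length at most 8:
language = {'a' * (k + j) + 'b' * j + 'c' * k
            for j in range(5) for k in range(5) if 2 * (j + k) <= 8}
generated = {w for w in expand('S', 8) if len(w) <= 8}
print(generated == language)  # True: the grammar matches L1 up to length 8
```

The same union-of-cases strategy then needs a mirror grammar for $j \ge i$ (where $k = j - i$), joined by a fresh start symbol.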
{ "domain": "cs.stackexchange", "id": 16152, "tags": "context-free" }
The acceleration vector of a simple pendulum
Question: In this picture the acceleration vector $\vec{a}$ points upward when the pendulum is halfway. Click to see the animated GIF. But according to this picture, the force acts tangentially: which means the acceleration should be tangential too, and never pointing upward? So what's right? Answer: Please note that in the picture there are two forces acting: 1) the weight, $mg$, which acts vertically downward and does not change, and 2) the tension in the string, $Z$, which points from the mass to the point where the string connects to the ceiling, provided the string remains taut. $Z$ varies with time periodically. These two forces combine to give the resultant force, and it is the resultant force which points in the same direction as the acceleration, as seen in the GIF. The green arrows in the picture are actually just the tangential and normal components of gravity. Edit: also, I believe the source of confusion might lie in assuming that the normal component of gravity cancels with the tension. This is not the case: you cannot use the equations of equilibrium if the system is not in equilibrium, i.e. accelerating.
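A quick numeric sketch (my own example values) of why the acceleration points straight up at the lowest point: the tangential component of gravity vanishes there, leaving only the centripetal term $v^2/L$ directed up along the string.

```python
import math

# Pendulum of length L released from rest at angle theta0 (values mine)
g, L, theta0 = 9.81, 1.0, math.radians(30)

# Energy conservation gives the speed squared at the lowest point:
v2 = 2 * g * L * (1 - math.cos(theta0))

# At the lowest point gravity is perpendicular to the velocity, so the
# tangential acceleration g*sin(theta) vanishes (theta = 0) and the net
# acceleration is purely centripetal, pointing up along the string.
a_tangential = g * math.sin(0.0)  # 0.0
a_centripetal = v2 / L            # = 2*g*(1 - cos(theta0)), upward

print(a_tangential, a_centripetal)
```

This is exactly the resultant of weight and tension described in the answer: at the bottom the tension exceeds $mg$, and their sum points upward with magnitude $v^2/L$ per unit mass.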
{ "domain": "physics.stackexchange", "id": 35895, "tags": "homework-and-exercises, newtonian-mechanics, forces, acceleration" }
Circumference in 2D curved space
Question: If we consider the metric to be $ds^2 = \frac{dr^2}{1-kr^2} + r^2 d\phi^2$ and want to compute the length $L$ of a path, we know that: $$L = \int_{path}ds$$ with the path defined by $\{r=r_1, 0 <\phi < 2\pi\}$. My question is how do I compute this thing correctly? My first thought was to set $dr=0$, since $r=r_1$ is a constant, but this returns the Euclidean circle circumference $L=2\pi r_1$. It's a very basic question, but I didn't really get it from all the lectures/books that I read. Answer: The point here is that $r$ is just an arbitrary coordinate, i.e. a number which marks the points along a line and has not necessarily anything to do with the radius. The radius in this metric is not $r$, but $$R = \int_0^{r_1} \frac{dr}{\sqrt{1-kr^2}}$$ For simplicity we will assume positive curvature $k=+1$; then we get (with the proposed metric there is now a dimensional problem, which will however be neglected as it does not matter in this demonstration) $$R = \int_0^{r_1} \frac{dr}{\sqrt{1-r^2}} = \arcsin(r_1) $$ Therefore the ratio between circumference and radius is not $2\pi$, but $$\frac{U}{R} = 2\pi \frac{r_1}{\arcsin(r_1)} \leq 2\pi$$ since $\arcsin(r_1) \geq r_1$. For small $r_1\ll 1$ it is nevertheless still $$\frac{U}{R} = 2\pi \frac{r_1}{r_1} = 2\pi $$
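A numeric check of this result for $k=+1$ (discretization mine): the radial integral reproduces $\arcsin(r_1)$, and since $\arcsin(r_1) \ge r_1$ the circumference-to-radius ratio falls below the Euclidean $2\pi$, as expected on a positively curved surface.

```python
import math

r1 = 0.5
N = 200000
dr = r1 / N

# Midpoint rule for the proper radius R = integral_0^{r1} dr / sqrt(1 - r^2)
R = sum(dr / math.sqrt(1 - ((i + 0.5) * dr) ** 2) for i in range(N))

U = 2 * math.pi * r1  # circumference of the circle r = r1

print(abs(R - math.asin(r1)) < 1e-9)  # True: the integral is arcsin(r1)
print(U / R < 2 * math.pi)            # True: ratio below the Euclidean 2*pi
```

So the coordinate circle at $r_1 = 0.5$ has circumference $\pi$ but proper radius $\arcsin(0.5) \approx 0.5236 > 0.5$, which is the whole point of the answer.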
{ "domain": "physics.stackexchange", "id": 74311, "tags": "homework-and-exercises, differential-geometry" }
Reading list and book recommendation on Conformal Field Theory
Question: I have a background in QFT, GR and differential geometry at the level of a master's student in theoretical physics. I would like to touch the area of CFT. I know the textbook of Philippe Di Francesco. It may be too big for a beginner like me. Are there some good introductory lectures or textbooks adapted to the needs of beginners? Answer: I would recommend the book Introduction to Conformal Field Theory by Blumenhagen and Plauschinn. It is quite short and can serve as a perfect introduction to CFT. It covers the basics of CFT in the first 3 chapters and then in the remaining 3 it goes on to introduce the CFT concepts that will appear most frequently in string theory. I believe the content of the book was chosen with the beginning string theory PhD student in mind, even though the phrase "string theory" rarely appears in the book. The style of writing is accessible to someone who is just beginning to learn about the subject and as far as I remember almost every statement in the book comes with a proof, which is quite refreshing for a physics-oriented book. The book is not complete in any sense and as you delve into the subject you will have to supplement it with other textbooks, like the Di Francesco, but it personally helped me learn the basics and not be completely lost in the CFT jargon during the beginning steps, and I recommend it as an introductory book.
{ "domain": "physics.stackexchange", "id": 18040, "tags": "resource-recommendations, conformal-field-theory" }
Powershell zip subfolders recursively, conditionally
Question: This is a progression of the script I posted here: Zip the contents of subfolders, conditionally It does this: Determines all subfolders recursively Checks each subfolder for files older than 31 days which aren't .zip If such files found, creates a new folder within, labelled with the date Moves the files to the new folder Creates a zipped copy of the new folder Deletes the new folder and its contents There's error-handling and logging built-in #Run under Powershell v1 or v2 #Pass the target log folder as a parameter: '-LogFolder Path\to\logfolder' param( [parameter(Mandatory=$true)] [ValidateScript({ if (Test-Path $_ -PathType Container) {$true} else {Throw "Folder $_ not found"}})] [String] $LogFolder ) #Get working directory, for outputting log file $WorkingDirectory = (pwd).path #Print datestamp to the log file Get-date >> "$WorkingDirectory\OutputLog.txt" "Log directory: $LogFolder" >> "$WorkingDirectory\OutputLog.txt" #Record total size of the target log folder before compression $Logsize = Get-ChildItem -recurse $LogFolder | Measure-Object -property length -sum $MBsize = "{0:N2}" -f ($Logsize.sum / 1MB) + " MB" "Initial Log folder size: $MBsize" >> "$WorkingDirectory\OutputLog.txt" #Verify that the Zip.dll is present $testzip = Test-Path .\ICSharpCode.SharpZipLib.dll #If the Zip.dll is found, bind it to a variable and load it, otherwise exit #In other words, if the script cannot load the Zip module, it will do nothing if ($testzip -eq $True){ $ZipModule = Get-ChildItem .\ICSharpCode.SharpZipLib.dll | Select -ExpandProperty FullName [void][System.Reflection.Assembly]::LoadFrom($ZipModule) } Else{ "Zip dll not found or couldn't be loaded. Check file is present and unblocked. Exiting" >> "$WorkingDirectory\OutputLog.txt" Exit } #Get all subfolders $subfolders = Get-ChildItem $LogFolder -Recurse | Where-Object { $_.PSIsContainer -and $_.fullname -notmatch "\\jsonTemplates\\verifier\\?" 
} ForEach ($s in $subfolders) { $path = $s #$s variable contains each folder $path Set-Location $path.FullName $fullpath = $path.FullName #Get all items older than 31 days, exclude zip files and folders $items = Get-ChildItem -Exclude *.zip | Where-Object {$_.LastWriteTime -lt (Get-date).AddDays(-31) -and -not $_.psIsContainer} #Verify that there are such items in this directory, catch errors if ( $(Try { Test-Path $items } Catch { "Cannot find items in $fullpath. Sub-folders will be processed afterwards. ERROR: $_" >> "$WorkingDirectory\OutputLog.txt" }) ) { $date = Get-Date -Format 'yyyy-MM-dd_HH-mm' $newpath = "$path-$date" $newpath $newfld = New-Item -ItemType Directory -name $newpath $src = $newfld.FullName #move items to newly-created folder Move-Item $items -destination $src $dest = "$src.zip" "Compressing $src to $dest" >> "$WorkingDirectory\OutputLog.txt" #the following block zips the folder try{ $zip = New-Object ICSharpCode.SharpZipLib.Zip.FastZip $zip.CreateZip($dest, $src, $true, ".*") Remove-Item $src -force -recurse } catch { "Folder could not be compressed. Removal of $src ABORTED. ERROR: $_" >> "$WorkingDirectory/OutputLog.txt" } } } #Record total size of the target log folder after compression $Logsize = Get-ChildItem -recurse $LogFolder | Measure-Object -property length -sum $MBsize = "{0:N2}" -f ($Logsize.sum / 1MB) + " MB" "Final Log folder size: $MBsize" >> "$WorkingDirectory\OutputLog.txt" Set-Location $LogFolder Answer: [ValidateScript({ if (Test-Path $_ -PathType Container) {$true} else {Throw "Folder $_ not found"}})] Test-Path already returns a [bool] which is what is expected by [ValidateScript()] so this can be simplified by removing the if/else: [ValidateScript( (Test-Path $_ -PathType Container) -or $(throw "Folder $_ not found") )] pwd is an alias for Get-Location. I always recommend avoiding aliases and abbreviations in reusable scripts. 
if ($testzip -eq $True) This can be simplified to: if ($testzip) Else{ "Zip dll not found or couldn't be loaded. Check file is present and unblocked. Exiting" >> "$WorkingDirectory\OutputLog.txt" Exit } This should probably be thrown as an exception: else { $e = "Zip dll not found or couldn't be loaded. Check file is present and unblocked. Exiting" $e >> "$WorkingDirectory\OutputLog.txt" throw [System.IO.FileNotFoundException]$e } $path = $s #$s variable contains each folder $path Set-Location $path.FullName There are two things I want to address here. First, the middle line: $path This is implicitly calling Write-Output $path which in your script is probably writing this out to the console, and that's probably the desired behavior. You should be aware that this actually makes it the return value of the current scope/scriptblock you're in, which means it's part of the return value of your script. You might not care about that right now, and it might never matter because you'll never call this script from something that checks its output, but it's bad practice (unless you very explicitly do want that to be part or all of its return value). If all you want is to show the path on the screen, use Write-Host. Or even better, add [CmdletBinding()] above your param() block and then use Write-Verbose. That will only show the output when someone calls the script with -Verbose. Second, I believe you should avoid changing the current path when possible. If it's not possible or it's impractical, then at least make use of Push-Location and Pop-Location along with try/finally so that you restore the original working directory. if ( $(Try { Test-Path $items } Catch { "Cannot find items in $fullpath. Sub-folders will be processed afterwards. ERROR: $_" >> "$WorkingDirectory\OutputLog.txt" }) ) { This is confusing. Test-Path should not throw an exception; it should return $false when the path doesn't exist, so your catch should never run. 
In addition, if it did run, the entire expression might still return $true, making the if always satisfied. Seems this would be better written as: if ( Test-Path $items ) { # ... do stuff } else { "Cannot find items in $fullpath. Sub-folders will be processed afterwards." >> "$WorkingDirectory\OutputLog.txt" } You should wrap your logging into a function instead of having "Some message" >> "$WorkingDirectory\OutputLog.txt" strewn about the code. When you have to update that, it will be a pain. Consider something like this: function Write-Log { [CmdletBinding()] param( [Parameter( Mandatory=$true )] [ValidateNotNullOrEmpty()] [String] $Message , [Parameter( Mandatory=$true )] [ValidateNotNullOrEmpty()] [ValidateScript( { $_ | Test-Path -PathType Leaf -IsValid } )] [String] $LogPath ) $Message >> $LogPath } Now you can do: Write-Log -Message "Compressing $src to $dest" -LogPath "$WorkingDirectory\OutputLog.txt" And, you can do this at the top of the script (with v3 or higher): $PSDefaultParameterValues = @{ "Write-Log:LogPath" = "$WorkingDirectory\OutputLog.txt" } And then just do: Write-Log -Message "Compressing $src to $dest" Of course you could also make -LogPath optional, and give it a default value if you must (for v2 and below compatibility). Even if you have to supply the log file every time though, so what, you're already doing that. And wrapping this in a function gives you more options for changing how its written later (like automatically adding a timestamp to each log entry).
{ "domain": "codereview.stackexchange", "id": 17133, "tags": "recursion, powershell" }
Slope of 2nd order low pass vs band pass Butterworth filters
Question: I have noticed that some manufacturers of audio processing products label a 2nd order Butterworth low pass as having a slope of 12dB per octave, but then also label a 2nd order Butterworth band pass as having a slope of 12dB per octave. In actuality each side of a 2nd order Butterworth band pass has a slope of only 6dB per octave. Is there a convention that because it is symmetrical that the slope of each together constitute the "slope" or is it just that they are wrongly assuming that order is equivalent to slope for all the Butterworth filter types? Answer: By convention, the pole count refers to the number of poles in the design polynomial (Butterworth, Chebyshev, etc) and this defines the amount of roll off in the stop band. If we maintain this convention, then all filters roll off at the rate of 6 dB / octave / pole in the stop band, whether it is a low pass, high pass, band pass, or notch. Also, a nth order Elliptic has the same ultimate roll off rate in the stop band as a nth order Butterworth. Of course, the Elliptic's transition band will be much narrower.
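The 6 dB/octave/pole convention is easy to verify numerically. The sketch below (not part of the answer; it uses the standard normalized transfer-function forms, and the helper names are made up) measures the attenuation gained over one octave deep in the stop band for a 2nd-order low pass versus a 2nd-order band pass:

```python
import math

def lp2_mag(w):
    # 2nd-order Butterworth low pass, cutoff wc = 1: |H| = 1 / sqrt(1 + w^4)
    return 1.0 / math.sqrt(1.0 + w**4)

def bp2_mag(w, w0=1.0, q=1.0 / math.sqrt(2)):
    # 2nd-order band pass: H(s) = (w0/q)*s / (s^2 + (w0/q)*s + w0^2)
    # (the asymptotic slope does not depend on the chosen q)
    num = (w0 / q) * w
    den = math.hypot(w0**2 - w**2, (w0 / q) * w)
    return num / den

def db(x):
    return 20 * math.log10(x)

# attenuation gained over one octave, deep in the stop band (w = 64 -> 128)
lp_slope = db(lp2_mag(128)) - db(lp2_mag(64))
bp_slope = db(bp2_mag(128)) - db(bp2_mag(64))
print(round(lp_slope), round(bp_slope))  # roughly -12 and -6 dB per octave
```

Both filters have two poles, so both roll off at 12 dB/octave in total; the band pass simply spends 6 dB/octave on each skirt, which is the labelling discrepancy the question describes.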
{ "domain": "dsp.stackexchange", "id": 3722, "tags": "filters, audio, lowpass-filter, bandpass" }
ar_pose/ARMarkers.h not found
Question: Hi, I am trying to include the ar_pose/ARMarkers.h header in a ROS node with #include <ar_pose/ARMarkers.h> I added artoolkit and ar_pose to the dependencies in the manifest file. But when making my package, it says "ar_pose/ARMarkers.h : No such file or directory" rospack find artoolkit and rospack find ar_pose successfully bring up the path to these packages. I am using fuerte and installed the AR toolkit through ccny_vision. First, the ARMarker(s).h is located at ccny_vision/ar_pose/msg_gen/cpp/include/ar_pose. I copied the files to /ccny_vision/ar_pose/include/ar_pose but nothing helped. update: The files using AR Toolkit should be in the correct path; I got some other includes (pcl and ros) that work. AR and my package are in an extra path I added in my /home/user space because I have to work without superuser rights. The path is included in the ROS_PACKAGE_PATH. What makes me wonder is that the packages can be located with ROS but compiling fails... update: I downloaded the bag files today to test if the package itself works correctly. 
The demo_multi and demo_single do work properly. Also tried now with the newer ar_toolkit package from IHeartEngineering, getting the same error when including manifest.xml <package> <description brief="obstacle_detection"> obstacle_detection </description> <author>me</author> <license>BSD</license> <review status="unreviewed" notes=""/> link deleted, insufficient karma <depend package="pcl"/> <depend package="pcl_ros"/> <depend package="roscpp"/> <depend package="sensor_msgs"/> <depend package="artoolkit"/> <depend package="ar_pose"/> </package> code #include <ros/ros.h> #include <ar_pose/ARMarkers.h> void processing (const ar_pose::ARMarkers::ConstPtr& msg){ ar_pose::ARMarker ar_pose_marker; for(i=0;i<=msg->markers.size();i++){ ar_pose_marker = msg->markers.at(i); std::cout<<"x " << ar_pose_marker.pose.pose.position.x<<st::endl; std::cout<<"y " << ar_pose_marker.pose.pose.position.y<<st::endl; std::cout<<"z " << ar_pose_marker.pose.pose.position.z<<st::endl; } } int main (int argc, char** argv) { // Initialize ROS ros::init (argc, argv, "locateCamera"); ros::NodeHandle nh; // Create a ROS subscriber ros::Subscriber sub = nh.subscribe ("input", 1, processing); ros::spin (); } What am I doing wrong? Thanks for helping me. Update: Looks like the problem is solved now. When compiling the ARtoolkit packages indirectly through my ROS package, the ARMarkers.h gets found. So just cloning the AR packages but not running make or rosmake in the folders, and instead letting my own package call this when rosmaking, solved this problem. Thank you very much Originally posted by benngee on ROS Answers with karma: 11 on 2013-06-14 Post score: 1 Original comments Comment by jep31 on 2013-06-14: This file is generated automatically by rosbuild_genmsg(). If you want to copy it in your package, you should copy the msg file in your package and do a rosmake, then you can include #include <yourPackage/ARMarkers.h> but normally you don't have to do that. 
Comment by Procópio on 2013-06-14: did you managed to compile the examples that came with the ccny_vision stack? Comment by benngee on 2013-06-15: i did not compile examples explicit, i ran rosmake for the packages successfully. Tomorrow i will look for these examples and post here what i found. The idea to copy the msg files will be an "emergency plan" but normally it should work out of the AR package Comment by Martin Günther on 2013-06-17: Copying the msg files will probably not work; you should really try to solve the underlying problem Comment by Martin Günther on 2013-06-18: Hmm, this is strange. Your manifest.xml file looks correct. Could you just upload your code somewhere? Answer: I've tried your code, and it compiles for me without problems. I had to change two small things so it compiles, but here's the finished version: Github Gist. Try it like this: mkdir ~/test cd ~/test git clone https://gist.github.com/5813581.git obstacle_detection git clone git://github.com/IHeartEngineering/ar_tools.git source /opt/ros/fuerte/setup.sh export ROS_PACKAGE_PATH=~/test:$ROS_PACKAGE_PATH rosmake obstacle_detection This is how your environment should look like: $ env | grep ROS ROS_ROOT=/opt/ros/fuerte/share/ros ROS_PACKAGE_PATH=~/test:/opt/ros/fuerte/share:/opt/ros/fuerte/stacks ROS_MASTER_URI=http://localhost:11311 ROS_ETC_DIR=/opt/ros/fuerte/etc/ros ROS_DISTRO=fuerte ROSLISP_PACKAGE_DIRECTORY=/opt/ros/fuerte/share/common-lisp/ros Originally posted by Martin Günther with karma: 11816 on 2013-06-19 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by benngee on 2013-06-27: Thank you for the hints. As soon as i finnished clonig i will try it out. My envirmonment looks like the one you posted above. Great. it compiled now successful. It looks like the Problem was, that i ran "make" in the AR packages first and it somehow messed up the package
{ "domain": "robotics.stackexchange", "id": 14564, "tags": "ros, ar-pose, ros-fuerte, include" }
Why do colligative properties depend only on number of solute particles?
Question: Colligative properties depend solely on the number of solute particles, even though the interactive forces are different for different solute-solvent pairs. So why is the dependence only on the number of solute particles? Answer: A very simple, qualitative explanation: After your solute has dissolved, there are no more enthalpic effects to take into account. The solvation enthalpy is converted to a temperature change, and that's it. For colligative properties, the solute ideally has no significant vapour pressure of its own and does not precipitate; it just stays in solution. The solute molecule moves around (Brownian motion), the solvent molecules around it exchange places, but are all the same, and that's that. Now everything in solution is about entropy. In a solution, there is only orientational and translational entropy. (Molecular) Vibrational excitation is only relevant at high temperatures; rotation is practically impossible due to the many interactions. That's why every solute particle, irrespective of its size, has the same contribution to the entropy. If the solute concentration increases, its particles start interacting. Their coming close to each other and detaching again makes local energy fluctuations, which add entropy, but depend strongly on the kind of interaction. That's when the identity of the solute starts to matter.
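As a concrete illustration, boiling-point elevation (one of the colligative properties) is usually written $\Delta T_b = i\,K_b\,m$: at the same molality, glucose and sucrose raise the boiling point identically despite being very different molecules, while a dissociating salt counts per particle through the van 't Hoff factor $i$. A tiny sketch (the function name is made up; $K_b$ of water is about 0.512 K·kg/mol):

```python
def boiling_point_elevation(kb, molality, i=1):
    # dT = i * Kb * m -- depends only on particle count, not solute identity
    return i * kb * molality

KB_WATER = 0.512  # K*kg/mol, ebullioscopic constant of water

# glucose and sucrose at the same molality: identical elevation
glucose = boiling_point_elevation(KB_WATER, 0.5)
sucrose = boiling_point_elevation(KB_WATER, 0.5)
# NaCl ideally dissociates into two particles (i = 2): twice the elevation
nacl = boiling_point_elevation(KB_WATER, 0.5, i=2)
print(glucose, sucrose, nacl)
```

The answer's closing caveat corresponds to real, concentrated solutions, where the effective $i$ drifts from the ideal integer value because solute-solute interactions start to matter.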
{ "domain": "chemistry.stackexchange", "id": 11339, "tags": "solutions, colligative-properties" }
Help with recurrence solutions
Question: We started learning recurrences and I am having trouble with some of the problems. Our professor is having us substitute in $n=2^m$ and $S(m)=T(2^m)$ then writing down equations and summing them all up. I understand the substitution part, but not how to simplify the end result. One of the problems I am stuck on is $T(n)=2T(n/2)+n^3$ where $T(2)=c$. When I sub in $n=2^m$ and $S(m)=T(2^m)$ I get: $S(m)=2S(m-1)+(2^m)^3$ $S(m-1)=2S(m-2)+(2^{m-1})^3$ ... $S(3)=2S(2)+(2^{3})^3$ $S(2)=2S(1)+(2^{2})^3$ $S(1)=c$ Now because there is a coefficient on the right side we have to multiply each equation by a power of 2 so we can sum them. $S(m)=2S(m-1)+(2^m)^3$ $2*S(m-1)=2^2S(m-2)+2*(2^{m-1})^3$ ... $2^{m-3}*S(3)=2^{m-2}S(2)+2^{m-3}(2^{3})^3$ $2^{m-2}*S(2)=2^{m-1}S(1)+2^{m-2}(2^{2})^3$ $2^{m-1}S(1)=c*2^{m-1}$ So sum these and we get: $S(m)=(2^m)^3+2*(2^{m-1})^3+2^2*(2^{m-2})^3+...+2^{m-3}(2^{3})^3+2^{m-2}(2^{2})^3+c*2^{m-1}$ And this is where I get stuck, how can I simplify this? I tried dividing by $(2^m)^3$ but that came out to an awful mess. The other two I'm having trouble with are $T(n)=16T(n/4)+n^2$ and $T(n)=7T(n/2)+n^2$ Any help would be appreciated. Answer: Once you have $S(m) = 2 S(m-1) + 2^{3m}$, you can divide by $2^m$ and obtain: $$\frac{S(m)}{2^m} = \frac{S(m-1)}{2^{m-1}} + 2^{2m}$$ Thus, setting $P(m) = \frac{S(m)}{2^m}$, you should be able to find $P(m)$, then $S(m)$.
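The telescoped sum can be sanity-checked against the recurrence directly. A small sketch (taking $c=1$; the helper names are made up) compares the recursion with the closed form obtained from the answer's substitution $P(m) = S(m)/2^m$:

```python
def t_rec(n, c=1):
    # T(2) = c, T(n) = 2*T(n/2) + n^3, for n a power of two
    if n == 2:
        return c
    return 2 * t_rec(n // 2, c) + n**3

def t_closed(n, c=1):
    # with n = 2^m, P(m) = S(m)/2^m satisfies P(m) = P(m-1) + 2^(2m),
    # so P(m) = c/2 + sum_{j=2}^{m} 4^j and S(m) = 2^m * P(m)
    m = n.bit_length() - 1
    p = c / 2 + sum(4**j for j in range(2, m + 1))
    return (2**m) * p

for k in range(1, 12):
    assert t_rec(2**k) == t_closed(2**k)
print("closed form agrees with the recurrence up to n = 2**11")
```

The same divide-and-telescope trick applies to the other two problems; for instance $T(n)=16T(n/4)+n^2$ with $n=4^m$ gives $S(m)=16S(m-1)+16^m$, and dividing by $16^m$ telescopes immediately.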
{ "domain": "cs.stackexchange", "id": 1766, "tags": "proof-techniques, recurrence-relation" }
Object-Oriented JavaScript Chess game
Question: I am currently creating a chess game in JavaScript, some aspects are yet to be done such as the computer player and turns, but before I get into writing these features I would like to know how to restructure or edit the code so it is more flexible to changes, and is more manageable. Right now it seems that to implement some of these features, I would have to keep on copy-pasting similar sections of code. const canvas = document.getElementById('canvas'); const c = canvas.getContext('2d'); const difficultySlider = document.getElementsByClassName('slider'); const chessPieceSWidth = 800/6; const chessPieceSHeight = 267/2; const chessPiecesImgSrc = "chessPieces.png"; const whiteKing = {image: {sx: 0 * chessPieceSWidth, sy: 0 * chessPieceSHeight}}; const whiteQueen = {image: {sx: 1 * chessPieceSWidth, sy: 0 * chessPieceSHeight}}; const whiteBishop = {image: {sx: 2 * chessPieceSWidth, sy: 0 * chessPieceSHeight}}; const whiteHorse = {image: {sx: 3 * chessPieceSWidth, sy: 0 * chessPieceSHeight}}; const whiteCastle = {image: {sx: 4 * chessPieceSWidth, sy: 0 * chessPieceSHeight}}; const whitePawn = {image: {sx: 5 * chessPieceSWidth, sy: 0 * chessPieceSHeight}}; const blackKing = {image: {sx: 0 * chessPieceSWidth, sy: 1 * chessPieceSHeight}}; const blackQueen = {image: {sx: 1 * chessPieceSWidth, sy: 1 * chessPieceSHeight}}; const blackBishop = {image: {sx: 2 * chessPieceSWidth, sy: 1 * chessPieceSHeight}}; const blackHorse = {image: {sx: 3 * chessPieceSWidth, sy: 1 * chessPieceSHeight}}; const blackCastle = {image: {sx: 4 * chessPieceSWidth, sy: 1 * chessPieceSHeight}}; const blackPawn = {image: {sx: 5 * chessPieceSWidth, sy: 1 * chessPieceSHeight}}; const whitePieces = [whiteCastle, whiteHorse, whiteBishop, whiteQueen, whiteKing, whitePawn]; const blackPieces = [blackCastle, blackHorse, blackBishop, blackQueen, blackKing, blackPawn]; let standardBoard = [ [blackCastle, blackHorse, blackBishop, blackQueen, blackKing, blackBishop, blackHorse, blackCastle], [blackPawn, 
blackPawn, blackPawn, blackPawn, blackPawn, blackPawn, blackPawn, blackPawn], ["vacant", "vacant", "vacant", "vacant", "vacant", "vacant", "vacant", "vacant"], ["vacant", "vacant", "vacant", "vacant", "vacant", "vacant", "vacant", "vacant"], ["vacant", "vacant", "vacant", "vacant", "vacant", "vacant", "vacant", "vacant"], ["vacant", "vacant", "vacant", "vacant", "vacant", "vacant", "vacant", "vacant"], [whitePawn, whitePawn, whitePawn, whitePawn, whitePawn, whitePawn, whitePawn, whitePawn], [whiteCastle, whiteHorse, whiteBishop, whiteQueen, whiteKing, whiteBishop, whiteHorse, whiteCastle] ]; let hasClicked = false; let canMove = false; let isHighlightPossibleMoves = false; let canAdvancePiece = false; let highlightPos = undefined; let pieceMoves = undefined; let advancePosition = undefined; let isCastling = false; if(Math.round(Math.random()) == 0){ humanPlayer = whitePieces; board = copyBoardArray(standardBoard); } else { humanPlayer = blackPieces; board = reverseArray(copyBoardArray(standardBoard)); } function reverseArray(array){ return array.reverse(); } function switchSides(){ if(humanPlayer == whitePieces){ humanPlayer = blackPieces; } else { humanPlayer = whitePieces; } board = reverseArray(board); } function reload(){ location.reload(); } document.addEventListener('click', function(event){ if(!hasClicked){ if(event.clientX < 480 && event.clientY < 480 && board[Math.floor(event.clientY / 60)][Math.floor(event.clientX / 60)] != "vacant"){ if(humanPlayer.indexOf(board[Math.floor(event.clientY / 60)][Math.floor(event.clientX / 60)]) != -1){ canMove = true; isHighlightPossibleMoves = true; hasClicked = true; highlightPos = {x: Math.floor(event.clientX / 60), y: Math.floor(event.clientY / 60)}; pieceMoves = processMoves({x: Math.floor(event.clientX / 60), y: Math.floor(event.clientY / 60)}, board); } else { hasClicked = true; highlightPos = {x: Math.floor(event.clientX / 60), y: Math.floor(event.clientY / 60)}; canMove = false; } } } else { if(canMove){ 
advancePosition = {x: Math.floor(event.clientX / 60), y: Math.floor(event.clientY / 60)}; for(i = 0; i < pieceMoves.moves.length; i++){ if(advancePosition.x == pieceMoves.moves[i].x && advancePosition.y == pieceMoves.moves[i].y){ if(board[highlightPos.y][highlightPos.x] == blackKing || board[highlightPos.y][highlightPos.x] == whiteKing){ if(pieceMoves.moves[i].x - 2 == highlightPos.x || pieceMoves.moves[i].x + 2 == highlightPos.x){ isCastling = true; } else { isCastling = false; } } if(isCastling){ board = chess.returnCastledBoard({x: highlightPos.x, y: highlightPos.y}, pieceMoves.moves[i]); chess = new Chess(board); isCastling = false; } else { board[highlightPos.y][highlightPos.x].hasClicked = true; board = chess.updateBoard(highlightPos, advancePosition); chess = new Chess(board); break; } } } } hasClicked = false; canMove = false; highlightPos = undefined; pieceMoves = undefined; advancePosition = undefined; } }); function getPieceType(position, board){ if(blackPieces.indexOf(board[position.y][position.x]) != -1 && board[position.y][position.x] != "vacant"){ return blackPieces; } else if(whitePieces.indexOf(board[position.y][position.x]) != -1 && board[position.y][position.x] != "vacant"){ return whitePieces; } } function isCheck(player, board){ if(player == blackPieces){ checkKing = blackKing; opponent = whitePieces; } else { checkKing = whiteKing; opponent = blackPieces; } for(rows = 0; rows < 8; rows++){ for(columns = 0; columns < 8; columns++){ if(board[rows][columns] == checkKing){ kingPos = {x: columns, y: rows}; break; } } } opponentMoves = []; threatningPieces = []; check = false; for(rows = 0; rows < 8; rows++){ for(columns = 0; columns < 8; columns++){ if(opponent.indexOf(board[rows][columns]) != -1){ opponentMoves.push(move({x: columns, y: rows}, board)); } } } for(len = 0; len < opponentMoves.length; len++){ for(subLen = 0; subLen < opponentMoves[len].moves.length; subLen++){ if(opponentMoves[len].moves[subLen].x == kingPos.x && 
opponentMoves[len].moves[subLen].y == kingPos.y){ check = true; threatningPieces.push(opponentMoves[len].playerPos); } } } if(check){ threatningPieces.push(kingPos); } return {state: check, threatningPieces: threatningPieces}; } function castleMove(position, board){ moves = []; let pieceType = getPieceType(position, board); for(i = position.x + 1; i < 8; i++){ if(board[position.y][i] != "vacant" && pieceType.indexOf(board[position.y][i]) != -1){ break; } if(board[position.y][i] != "vacant" && pieceType.indexOf(board[position.y][i]) == -1){ moves.push({x: i, y: position.y}); break; } moves.push({x: i, y: position.y}); } for(i = position.x - 1; i >= 0; i--){ if(board[position.y][i] != "vacant" && pieceType.indexOf(board[position.y][i]) != -1){ break; } if(board[position.y][i] != "vacant" && pieceType.indexOf(board[position.y][i]) == -1){ moves.push({x: i, y: position.y}); break; } moves.push({x: i, y: position.y}); } for(i = position.y + 1; i < 8; i++){ if(board[i][position.x] != "vacant" && pieceType.indexOf(board[i][position.x]) != -1){ break; } if(board[i][position.x] != "vacant" && pieceType.indexOf(board[i][position.x]) == -1){ moves.push({x: position.x, y: i}); break; } moves.push({x: position.x, y: i}); } for(i = position.y - 1; i >= 0; i--){ if(board[i][position.x] != "vacant" && pieceType.indexOf(board[i][position.x]) != -1){ break; } if(board[i][position.x] != "vacant" && pieceType.indexOf(board[i][position.x]) == -1){ moves.push({x: position.x, y: i}); break; } moves.push({x: position.x, y: i}); } return moves; } function horseMove(position, board){ moves = []; let pieceType = getPieceType(position, board); if(position.x + 1 < 8 && position.y + 2 < 8){ if(board[position.y + 2][position.x + 1] == "vacant" || pieceType.indexOf(board[position.y + 2][position.x + 1]) == -1){ moves.push({x: position.x + 1, y: position.y + 2}); } } if(position.x - 1 >= 0 && position.y + 2 < 8){ if(board[position.y + 2][position.x - 1] == "vacant" || 
pieceType.indexOf(board[position.y + 2][position.x - 1]) == -1){ moves.push({x: position.x - 1, y: position.y + 2}); } } if(position.x + 1 < 8 && position.y - 2 >= 0){ if(board[position.y - 2][position.x + 1] == "vacant" || pieceType.indexOf(board[position.y - 2][position.x + 1]) == -1){ moves.push({x: position.x + 1, y: position.y - 2}); } } if(position.x - 1 >= 0 && position.y - 2 >= 0){ if(board[position.y - 2][position.x - 1] == "vacant" || pieceType.indexOf(board[position.y - 2][position.x - 1]) == -1){ moves.push({x: position.x - 1, y: position.y - 2}); } } if(position.x + 2 < 8 && position.y + 1 < 8){ if(board[position.y + 1][position.x + 2] == "vacant" || pieceType.indexOf(board[position.y + 1][position.x + 2]) == -1){ moves.push({x: position.x + 2, y: position.y + 1}); } } if(position.x - 2 >= 0 && position.y + 1 < 8){ if(board[position.y + 1][position.x - 2] == "vacant" || pieceType.indexOf(board[position.y + 1][position.x - 2]) == -1){ moves.push({x: position.x - 2, y: position.y + 1}); } } if(position.x + 2 < 8 && position.y - 1 >= 0){ if(board[position.y - 1][position.x + 2] == "vacant" || pieceType.indexOf(board[position.y - 1][position.x + 2]) == -1){ moves.push({x: position.x + 2, y: position.y - 1}); } } if(position.x - 2 >= 0 && position.y - 1 >= 0){ if(board[position.y - 1][position.x - 2] == "vacant" || pieceType.indexOf(board[position.y - 1][position.x - 2]) == -1){ moves.push({x: position.x - 2, y: position.y - 1}); } } return moves; } function bishopMove(position, board){ moves = []; let pieceType = getPieceType(position, board); x = position.x + 1; y = position.y + 1; while(x < 8 && y < 8){ if(board[y][x] != "vacant" && pieceType.indexOf(board[y][x]) != -1){ break; } if(board[y][x] != "vacant" && pieceType.indexOf(board[y][x]) == -1){ moves.push({x: x, y: y}); break; } moves.push({x: x, y: y}); x += 1; y += 1; } x = position.x - 1; y = position.y - 1; while(x >= 0 && y >= 0){ if(board[y][x] != "vacant" && pieceType.indexOf(board[y][x]) != 
-1){ break; } if(board[y][x] != "vacant" && pieceType.indexOf(board[y][x]) == -1){ moves.push({x: x, y: y}); break; } moves.push({x: x, y: y}); x -= 1; y -= 1; } x = position.x - 1; y = position.y + 1; while(x >= 0 && y < 8){ if(board[y][x] != "vacant" && pieceType.indexOf(board[y][x]) != -1){ break; } if(board[y][x] != "vacant" && pieceType.indexOf(board[y][x]) == -1){ moves.push({x: x, y: y}); break; } moves.push({x: x, y: y}); x -= 1; y += 1; } x = position.x + 1; y = position.y - 1; while(x < 8 && y >= 0){ if(board[y][x] != "vacant" && pieceType.indexOf(board[y][x]) != -1){ break; } if(board[y][x] != "vacant" && pieceType.indexOf() == -1){ moves.push({x: x, y: y}); break; } moves.push({x: x, y: y}); x += 1; y -= 1; } return moves; } function kingMove(position, board){ moves = []; let pieceType = getPieceType(position, board); if(position.x + 1 < 8){ if(board[position.y][position.x + 1] == "vacant" || pieceType.indexOf(board[position.y][position.x + 1]) == -1){ moves.push({x: position.x + 1, y: position.y}); } } if(position.x - 1 >= 0){ if(board[position.y][position.x - 1] == "vacant" || pieceType.indexOf(board[position.y][position.x - 1]) == -1){ moves.push({x: position.x - 1, y: position.y}); } } if(position.y + 1 < 8){ if(board[position.y + 1][position.x] == "vacant" || pieceType.indexOf(board[position.y + 1][position.x]) == -1){ moves.push({x: position.x, y: position.y + 1}); } } if(position.y - 1 >= 0){ if(this.board[position.y - 1][position.x] == "vacant" || pieceType.indexOf(this.board[position.y - 1][position.x]) == -1){ moves.push({x: position.x, y: position.y - 1}); } } if(position.y - 1 >= 0 && position.x - 1 >= 0){ if(board[position.y - 1][position.x - 1] == "vacant" || pieceType.indexOf(board[position.y - 1][position.x - 1]) == -1){ moves.push({x: position.x - 1, y: position.y - 1}); } } if(position.y + 1 < 8 && position.x + 1 < 8){ if(board[position.y + 1][position.x + 1] == "vacant" || pieceType.indexOf(board[position.y + 1][position.x + 1]) == 
-1){ moves.push({x: position.x + 1, y: position.y + 1}); } } if(position.y + 1 < 8 && position.x - 1 >= 0){ if(board[position.y + 1][position.x - 1] == "vacant" || pieceType.indexOf(board[position.y + 1][position.x - 1]) == -1){ moves.push({x: position.x - 1, y: position.y + 1}); } } if(position.y - 1 >= 0 && position.x + 1 < 8){ if(board[position.y - 1][position.x + 1] == "vacant" || pieceType.indexOf(board[position.y - 1][position.x + 1]) == -1){ moves.push({x: position.x + 1, y: position.y - 1}); } } return moves; } function pawnMove(position, board){ moves = []; let pieceType = getPieceType(position, board); if(humanPlayer == whitePieces){ standardPawn = whitePawn; } else { standardPawn = blackPawn; } if(this.board[position.y][position.x] == standardPawn){ if(position.y == 6){ if(board[position.y - 1][position.x] == "vacant"){ moves.push({x: position.x, y: position.y - 1}); } if(board[position.y - 2][position.x] == "vacant" && board[position.y - 1][position.x] == "vacant"){ moves.push({x: position.x, y: position.y - 2}); } } else if(position.y - 1 >= 0){ if(board[position.y - 1][position.x] == "vacant"){ moves.push({x: position.x, y: position.y - 1}); } } if(position.x + 1 < 8 && position.y - 1 >= 0){ if(getPieceType({x: position.x + 1, y: position.y - 1}, this.board) != pieceType && board[position.y - 1][position.x + 1] != "vacant"){ moves.push({x: position.x + 1, y: position.y - 1}); } } if(position.x - 1 >= 0 && position.y - 1 >= 0){ if(getPieceType({x: position.x - 1, y: position.y - 1}, this.board) != pieceType && board[position.y - 1][position.x - 1] != "vacant"){ moves.push({x: position.x - 1, y: position.y - 1}); } } } else { if(position.y == 1){ if(board[position.y + 1][position.x] == "vacant"){ moves.push({x: position.x, y: position.y + 1}); } if(board[position.y + 2][position.x] == "vacant" && board[position.y + 1][position.x] == "vacant"){ moves.push({x: position.x, y: position.y + 2}); } } else if(position.y + 1 < 8){ if(board[position.y + 
1][position.x] == "vacant"){ moves.push({x: position.x, y: position.y + 1}); } } if(position.x + 1 < 8 && position.y + 1 < 8){ if(getPieceType({x: position.x + 1, y: position.y + 1}, this.board) != pieceType && board[position.y + 1][position.x + 1] != "vacant"){ moves.push({x: position.x + 1, y: position.y + 1}); } } if(position.x - 1 >= 0 && position.y + 1 < 8){ if(getPieceType({x: position.x - 1, y: position.y + 1}, this.board) != pieceType && board[position.y + 1][position.x - 1] != "vacant"){ moves.push({x: position.x - 1, y: position.y + 1}); } } } return moves; } function move(position, board){ let boardPos = board[position.y][position.x]; if(boardPos == blackCastle || boardPos == whiteCastle){ return {playerPos: {x: position.x, y: position.y}, moves: castleMove(position, board)}; } else if(boardPos == blackHorse || boardPos == whiteHorse){ return {playerPos: {x: position.x, y: position.y}, moves: horseMove(position, board)}; } else if(boardPos == blackBishop || boardPos == whiteBishop){ return {playerPos: {x: position.x, y: position.y}, moves: bishopMove(position, board)}; } else if(boardPos == blackQueen || boardPos == whiteQueen){ possibleMoves = castleMove(position, board); for(i = 0; i < bishopMove(position, board).length; i++){ possibleMoves.push(bishopMove(position, board)[i]); } return {playerPos: {x: position.x, y: position.y}, moves: possibleMoves}; } else if(boardPos == whiteKing || boardPos == blackKing){ return {boardPos: {x: position.x, y: position.y}, moves: kingMove(position, board)} } else if(boardPos == whitePawn || boardPos == blackPawn){ return {playerPos: {x: position.x, y: position.y}, moves: pawnMove(position, board)}; } } function processMoves(position, board){ let pieceType = getPieceType(position, board); let posMoves = move(position, board).moves; for(index = posMoves.length - 1; index >= 0; index--){ bCopy = copyBoardArray(board); cCopy = new Chess(bCopy); bCopy = cCopy.updateBoard(position, posMoves[index]); if(isCheck(pieceType, 
bCopy).state){ posMoves.splice(index, 1); } if(board[position.y][position.x] == blackKing || board[position.y][position.x] == whiteKing){ castleMoves = castlingMoves(position, board); for(indI = 0; indI < castleMoves.length; indI++){ posMoves.push(castleMoves[indI]); } } } return {playerPos: {x: position.x, y: position.y}, moves: posMoves}; } function castlingMoves(position, board){ let pieceType = getPieceType(position, board); let castleMoves = []; if(board[position.y][position.x].hasClicked === undefined){ if(getPieceType({x: 0, y: position.y}, board) == pieceType && board[position.y][0].hasClicked === undefined){ for(key = position.x - 1; key >= 1; key--){ if(board[position.y][key] == "vacant"){ isPieceBlocking = false; } else { isPieceBlocking = true; break; } } if(!isPieceBlocking){ for(key = position.x; key > position.x - 3; key--){ bdCopy = copyBoardArray(board); chessCpy = new Chess(bdCopy); bdCopy = chessCpy.updateBoard(position, {x: key, y: position.y}); if(isCheck(pieceType, bdCopy).state){ isIllegal = true; break; } else { isIllegal = false; } } } if(!isPieceBlocking && !isIllegal){ castleMoves.push({x: position.x - 2, y: position.y}); } } if(getPieceType({x: 7, y: position.y}, board) == pieceType && board[position.y][7].hasClicked === undefined){ for(key = position.x + 1; key < 7; key++){ if(board[position.y][key] == "vacant"){ isPieceBlocking = false; } else { isPieceBlocking = true; break; } } if(!isPieceBlocking){ for(key = position.x; key < position.x + 3; key++){ bdCopy = copyBoardArray(board); chessCpy = new Chess(bdCopy); bdCopy = chessCpy.updateBoard(position, {x: key, y: position.y}); if(isCheck(pieceType, bdCopy).state){ isIllegal = true; break; } else { isIllegal = false; } } } if(!isPieceBlocking && !isIllegal){ castleMoves.push({x: position.x + 2, y: position.y}); } } } return castleMoves; } function Chess(board){ this.board = board; this.updateBoard = function(playerPastPos, playerNextPos){ let boardDeepClone = 
copyBoardArray(this.board); let player = this.board[playerPastPos.y][playerPastPos.x]; boardDeepClone[playerPastPos.y][playerPastPos.x] = "vacant"; boardDeepClone[playerNextPos.y][playerNextPos.x] = player; return boardDeepClone; } this.returnCastledBoard = function(kingPos, movePos){ let king = this.board[kingPos.y][kingPos.x]; if(movePos.x > kingPos.x){ targetCastle = this.board[movePos.y][7]; boardDeepClone = copyBoardArray(this.board); boardDeepClone[kingPos.y][kingPos.x] = "vacant"; boardDeepClone[kingPos.y][kingPos.x + 2] = king; boardDeepClone[kingPos.y][7] = "vacant"; boardDeepClone[kingPos.y][kingPos.x + 1] = targetCastle; } else { targetCastle = this.board[movePos.y][0]; boardDeepClone = copyBoardArray(this.board); boardDeepClone[kingPos.y][kingPos.x] = "vacant"; boardDeepClone[kingPos.y][kingPos.x - 2] = king; boardDeepClone[kingPos.y][0] = "vacant"; boardDeepClone[kingPos.y][kingPos.x - 1] = targetCastle; } return boardDeepClone; } } function copyBoardArray(board){ let boardCopy = []; for(i = 0; i < 8; i++){ boardCopy.push([0]); for(j = 0; j < 8; j++){ boardCopy[i][j] = board[i][j]; } } return boardCopy; } function rect(x, y, width, height, color){ c.beginPath(); c.rect(x, y, width, height); c.fillStyle = color; c.fill(); c.closePath(); } let chess = new Chess(board); function render(){ for(i = 0; i < 8; i++){ for(j = 0; j < 8; j++){ if(i % 2 == 0){ if(j % 2 == 0){ rect(j * 60, i * 60, 60, 60, "peru"); } else { rect(j * 60, i * 60, 60, 60, "seashell"); } c.stroke(); } else { if(j % 2 == 0){ rect(j * 60, i * 60, 60, 60, "seashell"); } else { rect(j * 60, i * 60, 60, 60, "peru"); } } c.stroke(); } } if(isCheck(humanPlayer, board).state){ for(ind = 0; ind < isCheck(humanPlayer, board).threatningPieces.length; ind++){ rect(isCheck(humanPlayer, board).threatningPieces[ind].x * 60, isCheck(humanPlayer, board).threatningPieces[ind].y * 60, 60, 60, "red"); c.stroke(); } } if(highlightPos != undefined){ rect(highlightPos.x * 60, highlightPos.y * 60, 60, 60, 
"yellow"); c.stroke(); } for(i = 0; i < 8; i++){ for(j = 0; j < 8; j++){ if(board[i][j] != "vacant"){ let image = new Image(); image.src = "chessPieces.png"; c.drawImage(image, board[i][j].image.sx, board[i][j].image.sy, chessPieceSWidth, chessPieceSHeight, j * 60, i * 60, 60, 60); } } } if(pieceMoves != undefined){ c.globalAlpha = 0.6; for(i = 0; i < pieceMoves.moves.length; i++){ c.beginPath(); c.arc(pieceMoves.moves[i].x * 60 + 30, pieceMoves.moves[i].y * 60 + 30, 12, 12, 0, Math.PI * 2); c.fillStyle = "grey"; c.fill(); c.closePath(); } c.globalAlpha = 1; } } setInterval(render, 10); ```` Answer: Methods If you write more methods, your code instantly becomes easier to read. It's also easier to maintain, including making changes. This is one example of what could be a method. It's not clear at a glance what this code does and you may want to allow the user to pick their own colour in the future. if(Math.round(Math.random()) == 0){ humanPlayer = whitePieces; board = copyBoardArray(standardBoard); } else { humanPlayer = blackPieces; board = reverseArray(copyBoardArray(standardBoard)); } Creating a chess piece could be a method. You may change the features of a chess piece in the future, and a method also reduces repetition: function createChessPiece(x, y) { return {image: {sx: x * chessPieceSWidth, sy: y * chessPieceSHeight}}; } All of your logic should be moved to separate methods as well. Avoid magic numbers "Magic numbers" and "magic strings" are literals not assigned to a variable. For example, what is "480" here? It could be a const declared at the top: event.clientX < 480 && event.clientY < 480 Naming Make sure your names make sense. "isCheck" does not make sense or mean anything. It looks like check is a class variable but should be declared inside this method. check, checkKing are also bad names as they're not descriptive. Chess is not a good function name. You cannot tell what the method will do by reading the name.
Edit: Examples of naming You may find it's hard to give certain methods names. This is actually a good thing and shows why simply creating & renaming methods can be a very good tool for refactoring. For example, it's really hard to name isCheck, it's doing multiple things which aren't obvious by looking at the method. I suggest splitting the method into 2 or 3 methods. isState & isThreatning may not be good names either. Perhaps getThreatningPieces and renaming state to be more specific. Currently you are returning 2 flags and invoking the method when you need either threatningPieces or state. It should also be noted you're invoking the method more than you need to: if(isCheck(humanPlayer, board).state){ for(ind = 0; ind < isCheck(humanPlayer, board).threatningPieces.length; In the above code, you can create a variable to store the value of the method, so it only gets invoked once. Your isCheck method is changing variables declared outside it, creating spaghetti code. Which brings me to my next point: Declare variables as locally as possible Don't declare all your variables at the top. There's no need for that. It's hard to tell where checkKing is getting changed or used. It makes debugging difficult.
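To make the caching advice concrete, here is a minimal sketch (hypothetical names and a stubbed-out check, not the original implementation) of storing the result of an expensive method like isCheck in a variable so it is only invoked once per frame:

```javascript
// Hypothetical sketch: call the expensive check once and reuse the result.
// `evaluations` exists only to demonstrate how many times isCheck runs.
let evaluations = 0;

function isCheck(pieceType, board) {
  evaluations++; // stands in for the expensive full-board scan
  return { state: true, threatningPieces: [{ x: 4, y: 0 }, { x: 3, y: 3 }] };
}

function highlightThreats(pieceType, board) {
  const check = isCheck(pieceType, board); // invoked exactly once
  if (check.state) {
    for (let i = 0; i < check.threatningPieces.length; i++) {
      // rect(check.threatningPieces[i].x * 60, ..., "red") would go here
    }
  }
  return check.threatningPieces.length;
}
```

Compare this with the original render loop, which re-runs the full board scan once for the `if` and once more on every loop iteration.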
{ "domain": "codereview.stackexchange", "id": 36918, "tags": "javascript, performance, object-oriented, chess" }
How can an observer observe the metric of spacetime?
Question: I don't mean how can we measure the metric in practice. I only mean in principle. Suppose you are an omnipresent being, no experimental limitations. What measurements do you need to measure the metric at a point? Also, you can deduce the metric from the Energy Momentum Tensor, sure. I don't want that answer. The answer I want is "How can we deduce the metric by making measurements about the geometry of spacetime"? Answer: Directly measuring the geometry is not practical, as the measurements would be too hard to make. But it could be done in principle. You just need to parallel transport a vector around a loop and measure how much this vector has changed. The change in the direction of the vector is directly related to the Riemann tensor. Suppose you take a vector and you parallel transport it round a small square. Obviously when you finish going round the square you'll be back at your starting point and the vector will still be pointing in the same direction. But this is only true when the spacetime you're moving through is flat. If you try this on a curved spacetime you'll find that the four equal-length sides no longer close up exactly (you won't end up back where you started), and a vector carried around a closed loop will have rotated away from its original direction. In this diagram the red vector is the initial vector at $P$, and we find that when we get back to our starting point, the angle of the vector has changed. The change in the vector is described by the Riemann tensor, and in General Relativity the Riemann tensor is constructed from the metric and its derivatives, so measuring it constrains the metric. In this way we can measure the metric by measuring the geometry. In practice, the changes around any loop of a practical size would be too small to measure so this direct measurement could not be done. Instead you would do indirect measurements. For example just dropping an object and measuring its trajectory is a way of indirectly determining the metric since the acceleration is determined by the Christoffel symbols, which are determined by the metric.
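The holonomy picture in this answer can be checked numerically in a toy setting. The sketch below (my own illustration, not part of the original answer) parallel-transports a vector once around a circle of colatitude θ on the unit sphere; in the orthonormal frame the transport equation reduces to a rotation at rate cos θ per unit of azimuth, so after a full loop the vector has turned by 2π cos θ, i.e. it lags the loop by the enclosed solid angle 2π(1 − cos θ):

```python
import math

def transport_around_latitude(theta, steps=100_000):
    """Parallel-transport the frame vector e_theta once around the circle of
    colatitude theta on the unit sphere.  In the orthonormal basis
    (e_theta_hat, e_phi_hat) the transport ODE is a pure rotation at rate
    cos(theta) per unit phi, applied here exactly in small steps."""
    v = [1.0, 0.0]                       # components (V^theta_hat, V^phi_hat)
    alpha = math.cos(theta) * (2.0 * math.pi / steps)
    c, s = math.cos(alpha), math.sin(alpha)
    for _ in range(steps):
        v = [c * v[0] + s * v[1], -s * v[0] + c * v[1]]
    return v

# Around the equator (a geodesic) the vector returns unchanged;
# around colatitude 60 degrees it returns rotated by pi.
equator = transport_around_latitude(math.pi / 2)
tilted = transport_around_latitude(math.pi / 3)
```

Shrinking the loop makes the rotation angle proportional to the enclosed area, which is exactly how the Riemann tensor appears as the second-order term in the holonomy.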
{ "domain": "physics.stackexchange", "id": 87666, "tags": "general-relativity, spacetime, metric-tensor, curvature, measurements" }
Small C project: recording mouse/keyboard bot software
Question: This is, kind of, my first programming project. It's a small project to complete a first-year university programming course. The program allows the user to record his mouse/keyboard/cursor activities and then replay them. It's a mini-bot. I am unsure whether I will continue developing this project, or abandon C and go learn some OOP or web. Either way, I would really love to know what kind of mistakes I made. Particularly: do you see something that hurts your eyes, some bad practices, terrible naming, some unreadable code? Short video demonstration: https://streamable.com/7qcb3 Project's code: https://github.com/Wenox/WinAuto The menu.c file was written in a rush, so you're likely to find the ugliest code in there. I am mostly interested in the menu.c, smooth_cursor.c, replay.c and recording.c files. I've got a small review and this code is vulnerable: printf("Save recording as (i.e: myrecording.txt):\n"); char file_name[64]; scanf("%s", file_name); (I will probably replace scanf with fgets combined with sscanf). Other than that, now that I am looking at my code I probably could have used typedef on the struct. I've heard that it's bad practice, though. I am not sure if I should remove the large, ugly comments from the .h files or not. The program is launched from main like this: int main(int argc, char **argv) { struct f_queue *headptr = NULL; struct f_queue *tailptr = NULL; if (!h_switch_invoked(argc, argv)) init_menu(headptr, tailptr, 0, 0); else init_menu(headptr, tailptr, 7, 0); return 0; } Here is the menu.c file that I am particularly interested in. I wrote it in a rush and have never written a "menu" before.
So I came up with an idea to make it recursive, with helping enum, and not sure how good or bad idea that was: bool h_switch_invoked(int argc, char **argv) { if (argc > 1) if (0 == strcmp(argv[1], "-h")) return true; return false; } /** Enum containing various menu flags used to determine which <b>printf</b> should be displayed to the user, based on earlier program behaviour. */ enum menu_flags { ///< start of definition NO_ERRORS, ///< default ERROR_NO_TXT_SUFFIX, ///< when user forgot to input the .txt postfix ERROR_READING_FILE, ///< when file was corrupted, does not exist or cannot be opened SAVED_HOTKEY, ///< when the hotkey has been successfully saved SAVED_FILE, ///< when the file saved successfully STOPPED_PLAYBACK, ///< when the recording playback successfully ended STOPPED_SCREENSAVER, ///< when the screensaver has been successfully stopped HELP_SWITCH ///< when program was ran with '-h' switch }; void draw_menu(const int flag_id) { system("cls"); switch (flag_id) { case 0: printf("WinAuto\n"); break; case 1: printf("ERROR: File name must end with .txt suffix\n\n"); break; case 2: printf("ERROR: No such file or file is corrupted\n\n"); break; case 3: printf("Hotkey set successfully\n\n"); break; case 4: printf("Recording saved successfully\n\n"); break; case 5: printf("Playback finished or interrupted\n\n"); break; case 6: printf("Welcome back\n\n"); break; case 7: print_help(); break; default: // do nothing break; } printf("Press 1 to set global hotkey (DEFAULT HOTKEY: F5)\n"); printf("Press 2 to create new recording\n"); printf("Press 3 to play recording\n"); printf("Press 4 to start screensaver\n"); printf("Press 5 to exit\n"); } int get_menu_choice(void) { int choice = 0; while (choice < 1 || choice > 5) if (1 != scanf("%d", &choice)) fseek(stdin, 0, SEEK_END); return choice; } int get_hotkey(void) { printf("Set hotkey: \n"); int hotkey = 0; while (hotkey == 0 || hotkey == KEY_RETURN || hotkey == KEY_LMB || hotkey == KEY_RMB || hotkey == KEY_F5) { 
hotkey = get_keystroke(); } FlushConsoleInputBuffer(GetStdHandle(STD_INPUT_HANDLE)); return hotkey; } bool str_ends_with(const char *source, const char *suffix) { int source_len = strlen(source); int suffix_len = strlen(suffix); return (source_len >= suffix_len) && (0 == strcmp(source + (source_len - suffix_len), suffix)); } int get_cycles_num(void) { printf("How many playing cycles? (>5 to play infinitely, default 1):\n"); int cycles_num = 1; if (1 != scanf("%d", &cycles_num) || cycles_num <= 0) { fseek(stdin, 0, SEEK_END); get_cycles_num(); } return cycles_num; } void exec_play_recording(struct f_queue *head, struct f_queue *tail, const int cycles_num, const int hotkey_id) { printf("Playing recording...\n"); printf("Press your hotkey to stop\n"); if (cycles_num > 5) { make_queue_cyclic(head, tail); play_recording(tail, hotkey_id); unmake_queue_cyclic(head, tail); } else { for (int i = 0; i < cycles_num; i++) play_recording(tail, hotkey_id); } } void init_menu(struct f_queue *head, struct f_queue *tail, const int flag_id, const int hotkey_id); void chosen_recording(struct f_queue *head, struct f_queue *tail, const int hotkey_id) { printf("Save recording as (i.e: myrecording.txt):\n"); char file_name[64]; scanf("%s", file_name); if (str_ends_with(file_name, ".txt")) { record(&head, &tail, 10, hotkey_id); FlushConsoleInputBuffer(GetStdHandle(STD_INPUT_HANDLE)); trim_list(&head); save_recording(tail, file_name); free_recording(&head, &tail); init_menu(head, tail, SAVED_FILE, hotkey_id); } else { init_menu(head, tail, ERROR_NO_TXT_SUFFIX, hotkey_id); } } void chosen_playback(struct f_queue *head, struct f_queue *tail, const int hotkey_id) { printf("Type in file name of your recording (i.e: myfile.txt):\n"); char file_name[64]; scanf("%s", file_name); if (load_recording(&head, &tail, file_name)) { int cycles_num = get_cycles_num(); exec_play_recording(head, tail, cycles_num, hotkey_id); FlushConsoleInputBuffer(GetStdHandle(STD_INPUT_HANDLE)); free_recording(&head, 
&tail); init_menu(head, tail, STOPPED_PLAYBACK, hotkey_id); } else { // error when reading file if (tail) free_recording(&head, &tail); init_menu(head, tail, ERROR_READING_FILE, hotkey_id); } } void init_menu(struct f_queue *head, struct f_queue *tail, const int flag_id, const int hotkey_id) { draw_menu(flag_id); int choice = get_menu_choice(); static int hotkey = KEY_F5; /// default hotkey switch(choice) { case 1: hotkey = get_hotkey(); init_menu(head, tail, SAVED_HOTKEY, hotkey); break; case 2: chosen_recording(head, tail, hotkey); break; case 3: chosen_playback(head, tail, hotkey); break; case 4: exec_screen_saver(hotkey); init_menu(head, tail, STOPPED_SCREENSAVER, hotkey); break; case 5: return; default: // do nothing break; } } Also, here's how an exemplary .h header file looks like. menu.h (note the large doxy comments that I am unsure whether should be kept or removed): /** @file */ #ifndef MENU_H_INCLUDED #define MENU_H_INCLUDED /** The function outputs relevant text data to the user. The function helps the user navigate around the program. @param flag_id menu flag to determine expected printf result based on earlier behaviour */ void draw_menu(const int flag_id); /** The function prompts user to select menu choice to futher navigate around the program. Basic input validation is performed. */ int get_menu_choice(void); /** The function saves user-inputted keystroke as a hotkey used in <b>2nd, 3rd and 4th</b> menu functions. @warning User needs to remember his hotkey. @warning For user's convenience, several hotkeys that would propably not me sense were blacklisted, including the default hotkey. */ int get_hotkey(void); /** The function verifies if string (array of chars) ends with given suffix (other array of chars). Used to validate if the file inputted by the user surely ends with .txt postfix. 
@param source pointer to source array @param suffix pointer to desired ending suffix of soruce array @return <b>true</b> if source ends with suffix @return <b>false</b> otherwise @warning The function comes from stackoverflow.com */ bool str_ends_with(const char *source, const char *suffix); /** The function prompts user to input how many cycles of recording he wishes to playback. The input number has to be an integer greater or equal than 1, and if the input is greater than 5, then it is assumed the playback is infinitely loop. <b>In such case the f_queue doubly linked list-queue attains cyclic properties.</b> @return cycles_num the desired number of cycles */ int get_cycles_num(void); /** The function executes the process of simulation of playing the recording. In case if cycles number is greater than 5, the playback loop is infinite. The playback loop ends at the end of all cycles, or <b>can be broken by pressing the set (or default if not set) hotkey</b>. @param head pointer to the front of the <b>f_queue</b> list-queue @param tail pointer to the last node of the <b>f_queue</b> list-queue @param cycles_num the number of playback cycles @param hotkey_id the turn-off playback key switch */ void exec_play_recording(struct f_queue *head, struct f_queue *tail, const int cycles_num, const int hotkey_id); /** The function executes entire recording process when user chose <b>2</b>. Recording is stopped when <b>hotkey</b> is pressed and saved into the inputted .txt file. Hence it can be re-used afterwards for playback purposes. The function <b>recurseively</b> goes back to the menu with appropriate <b>menu_flags</b>: SAVED_FILE or ERROR_NO_TXT_SUFFIX, depending on the earlier behaviour. 
@param head pointer to the front node of the <b>f_queue</b> linked list @param tail pointer to the last node of the <b>f_queue</b< linked list @param hotkey_id */ void chosen_recording(struct f_queue *head, struct f_queue *tail, const int hotkey_id); /** Recursive function that loops the menu and loops the execution of the program. The user chooses if he wants to set new hotkey, create new recording, playback old recording, start screensaver or end the program. @param head pointer to the front node of <b>f_queue</b> doubly-linked list @param tail pointer to the last node of <b>f_queue</b> doubly-linked list @param flag_id the menu flag, depending on the value different output is displayed to the user @param hotkey_id the turn-off switch for the program (default <b>F5</b>) */ void init_menu(struct f_queue *head, struct f_queue *tail, const int flag_id, const int hotkey_id); /** Function prints detailed manual to the user if -h flag was invoked. */ void print_help(); /** Function checks the command line input switches. 
If -h switch is found, detailed manual is printed out to the user.*/ bool h_switch_invoked(int argc, char **argv); #endif // MENU_H_INCLUDED Here's recording "engine" from recording.c: #define _GETCURSOR 1 #define _GETKEY 2 #define _SLEEP 3 void add_cursor(struct f_queue **head, struct f_queue **tail, POINT P[2]) { P[1] = get_cursor(); if (P[0].x != P[1].x || P[0].y != P[1].y) { ///< if current cursor pos != previous add_function(head, tail, _GETCURSOR, P[1].x, P[1].y); ///< add it to the queue P[0] = P[1]; } } void add_keystroke(struct f_queue **head, struct f_queue **tail, int key_buff[2]) { key_buff[1] = get_keystroke(); if (key_buff[1] != key_buff[0] && key_buff[1] != 0) ///< if there was keystroke add_function(head, tail, _GETKEY, key_buff[1], -1); ///< add it to the queue key_buff[0] = key_buff[1]; } bool is_prev_sleep_func(struct f_queue **head) { return (*head)->f_type == _SLEEP; } void add_sleep(struct f_queue **head, struct f_queue **tail, const int sleep_dur) { Sleep(sleep_dur); if (!is_prev_sleep_func(head)) add_function(head, tail, _SLEEP, sleep_dur, -1); else (*head)->f_args[0] += sleep_dur; ///< increment the previous node, rather than add new one } void record(struct f_queue **head, struct f_queue **tail, const int sleep_dur, const int hotkey_id) { int key_buff[2] = {-1, -1}; ///< buffer for curr and prev pressed key POINT cursor_buff[2] = {{-1, -1}, {-1, -1}}; ///< buffer for curr and prev cursor position printf("RECORDING...\n[press your hotkey to stop]\n"); while(key_buff[1] != hotkey_id) { ///< stop recording when 'hotkey' is pressed add_cursor(head, tail, cursor_buff); add_keystroke(head, tail, key_buff); add_sleep(head, tail, sleep_dur); } } and replay "engine" from replay.c: bool is_mouse_event(const int KEY_CODE) { return KEY_CODE <= 2; } void send_mouse_input(const int KEY_CODE) { INPUT ip = {0}; ip.type = INPUT_MOUSE; switch(KEY_CODE) { case 1: ip.mi.dwFlags = MOUSEEVENTF_LEFTDOWN | MOUSEEVENTF_LEFTUP; break; case 2: ip.mi.dwFlags = 
MOUSEEVENTF_RIGHTDOWN | MOUSEEVENTF_RIGHTUP; break; default: return; } SendInput(1, &ip, sizeof(INPUT)); } void send_keyboard_input(const int KEY_CODE) { INPUT ip = {0}; ip.type = INPUT_KEYBOARD; ip.ki.wVk = KEY_CODE; SendInput(1, &ip, sizeof(INPUT)); // press ip.ki.dwFlags = KEYEVENTF_KEYUP; SendInput(1, &ip, sizeof(INPUT)); // release } void send_input(const int KEY_CODE) { if (is_mouse_event(KEY_CODE)) send_mouse_input(KEY_CODE); else send_keyboard_input(KEY_CODE); } void play_recording(struct f_queue *tail, const int hotkey_id) { while (tail) { if (check_key(hotkey_id)) return; if (tail->f_type == _GETCURSOR) SetCursorPos(tail->f_args[0], tail->f_args[1]); ///< Simulates cursor's position else if (tail->f_type == _GETKEY) send_input(tail->f_args[0]); ///< Simulates keystroke else if (tail->f_type == _SLEEP) Sleep(tail->f_args[0]); ///< Simulates waiting interval in between keystrokes and/or cursor's movements tail = tail->prev; } } I am in need of all kinds of criticism. Thanks. Answer: Reserved identifiers Identifiers starting with a single underscore followed by a capital letter are reserved by the Standard. You can't create any new name of that form at all in your code. (As you didn't post any headers I can't know, but I guess things like _GETCURSOR are yours, and not from some library). C17::7.1.3: 7.1.3 Reserved identifiers 1 Each header declares or defines all identifiers listed in its associated subclause, and optionally declares or defines identifiers listed in its associated future library directions subclause and identifiers which are always reserved either for any use or for use as file scope identifiers.
— All identifiers that begin with an underscore and either an uppercase letter or another underscore are always reserved for any use, except those identifiers which are lexically identical to keywords.187) — All identifiers that begin with an underscore are always reserved for use as identifiers with file scope in both the ordinary and tag name spaces. — Each macro name in any of the following subclauses (including the future library directions) is reserved for use as specified if any of its associated headers is included; unless explicitly stated otherwise (see 7.1.4). — All identifiers with external linkage in any of the following subclauses (including the future library directions) and errno are always reserved for use as identifiers with external linkage.188) — Each identifier with file scope listed in any of the following subclauses (including the future library directions) is reserved for use as a macro name and as an identifier with file scope in the same name space if any of its associated headers is included. So you should maybe name it GETCURSOR or GET_CURSOR or GETCURSOR_ or GETCURSOR__. stderr Error messages should be printed to stderr instead of stdout (which is where printf() prints). To do that, one uses fprintf(stderr, "...", ...);. curses Maybe you would like some very nice menus instead of just printing lines on the screen like messages. The curses libraries do that. There are various options you can use (all are more or less compatible, at least on the basics): pdcurses and ncurses are the two I've used, and they are relatively easy to learn (the basics at least). As a bonus, curses is compatible with POSIX, so your program will not only run on Windows.
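To illustrate the first two points together, here is a hedged sketch (the names are mine, not from the project) of replacing the reserved _GETCURSOR-style macros with an enum whose constants carry no leading underscore, and routing error messages to stderr:

```c
/* Sketch of the two fixes: an enum with no leading underscores replaces the
 * reserved _GETCURSOR / _GETKEY / _SLEEP macros, and errors go to stderr so
 * they don't mix with normal program output.  All names are illustrative. */
#include <assert.h>
#include <stdio.h>

enum event_type {
    EV_GET_CURSOR = 1,   /* was _GETCURSOR */
    EV_GET_KEY    = 2,   /* was _GETKEY   */
    EV_SLEEP      = 3    /* was _SLEEP    */
};

/* Report an error on stderr; printf() would write it to stdout instead. */
static void report_error(const char *msg)
{
    fprintf(stderr, "ERROR: %s\n", msg);
}
```

An enum also gives the debugger and compiler visibility into the event kinds, which object-like macros never get.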
{ "domain": "codereview.stackexchange", "id": 35666, "tags": "c, winapi" }
Python toolbox for OpenStreetMap data
Question: Background My project is a Python toolbox. This is my first bigger coding project and it took me quite some time to code this toolbox. I learned a lot from the start point to this day. I changed this code hundreds of times, whenever I learned about a new trick or method. This code works. I tested it with various datasets and fixed bugs and syntax errors. Cons: The code has some Polish names for variables, functions, etc. The GUI is all Polish. I started using Python about 3 months ago. What my code does: The main purpose of this toolbox was to automate OpenStreetMap (OSM) data transformation from voivodeship shapefiles into a country-sized one, from which values were selected by their attributes to visualize features (for example, roads were selected and symbolized). The code consists of three classes which are three scripts inside of my toolbox. It is used in ArcGIS Pro to help a non-programmer user replicate my work. My goal Can someone who is more experienced than me in Python give me some useful advice? Terms used in this code shp - shapefile osm - OpenStreetMap fc - feature class gdb - geodatabase I added comments to my code to help understand what is happening. My code # -*- coding: CP1250 -*- import arcpy import os import pandas as pd import shutil import xlrd from xml.etree import ElementTree as ET import glob from itertools import starmap import re class Toolbox(object): def __init__(self): """Define the toolbox (the name of the toolbox is the name of the .pyt file).""" self.label = "NardzedziaDoEskportu" self.alias = "" # List of tool classes associated with this toolbox self.tools = [Przygotowanie_do_eksportu, SkryptDoEksportu, XML_export] class SkryptDoEksportu(object): def __init__(self): """Define the tool (tool name is the name of the class).""" self.label = "OSM Polska" self.description = "Skrypt eksportuje wybrane kolumny zawarte w tabeli atrybutow klas obiektow z geobazy."
self.canRunInBackground = False def getParameterInfo(self): """Define parameter definitions""" # Pierwszy parametr inside = arcpy.Parameter( displayName="Wejsciowa geobaza", name="in_gdb", datatype="DEWorkspace", parameterType="Required", direction="Input") # drugi parametr klasy = arcpy.Parameter( displayName="Warstwy w geobazie (mozliwy tylko podglad)", name="fcs_of_gdb", datatype="DEFeatureClass", parameterType="Required", direction="Input", multiValue=True) # trzeci parametr kolumny = arcpy.Parameter( displayName="Wybierz kolumny do selekcji", name="colli", datatype="GPString", parameterType="Required", direction="Input", multiValue=True) kolumny.filter.type = "ValueList" # Czwarty parametr plikExcel = arcpy.Parameter( displayName="Plik *.XLS z domenami", name="excelik", datatype="DEType", parameterType="Required", direction="Input") # Piaty parametr plikShpWoj = arcpy.Parameter( displayName="Plik *.Shp okreslajacy granice wojewodztw", name="ShpWoj", datatype="DEShapefile", parameterType="Required", direction="Input") # Szosty parametr plikBoundary = arcpy.Parameter( displayName="Plik *.Shp bedacy poprawiona wersja Polska_boundary_ply", name="shpBoundary", datatype="DEShapefile", parameterType="Required", direction="Input") p = [inside, klasy, kolumny, plikExcel, plikShpWoj, plikBoundary] return p def isLicensed(self): """Set whether tool is licensed to execute.""" return True def updateParameters(self, parameters): """Modify the values and properties of parameters before internal validation is performed. 
This method is called whenever a parameter has been changed.""" parameters[1].enabled = 0 if parameters[0].value: arcpy.env.workspace = parameters[0].value fclist = arcpy.ListFeatureClasses() parameters[1].value = fclist if parameters[1].value: fcs = parameters[1].value.exportToString() single = fcs.split(";") fields = arcpy.ListFields(single[0]) l1 = [f.name for f in fields] l2 = ["OBJECTID", "Shape", "OSMID", "osmTags", "osmuser", "osmuid", "osmvisible", "osmversion", "osmchangeset", "osmtimestamp", "osmMemberOf", "osmSupportingElement", "osmMembers", " Shape_Length", "Shape_Area", "wayRefCount"] l3 = [czynnik for czynnik in l1 if czynnik not in l2] parameters[2].filter.list = l3 return def updateMessages(self, parameters): """Modify the messages created by internal validation for each tool parameter. This method is called after internal validation.""" def execute(self, parameters, messages): # Variables arcpy.env.overwriteOutput = True gdb = parameters[0].valueAsText wybor_uzytkownika = parameters[2].valueAsText excel = parameters[3].valueAsText granice_woj_shp = parameters[4].valueAsText boundary_ply_shp = parameters[5].valueAsText arcpy.env.workspace = gdb warunek = " <> ''" tymczasowa_nazwa = "tymczasowaNazwaDlaFC" lista_ln = [] lista_ply = [] lista_pt = [] # Appends feature classes to lists and then merges them to single fc based on geometry fc_lista = arcpy.ListFeatureClasses() listy_append( fc_lista, lista_ln, lista_ply, lista_pt) tupel_merge = ( [lista_ln, "Polska_ln"], [lista_ply, "Polska_ply"], [lista_pt, "Polska_pt"]) list(starmap( arcpy.Merge_management,tupel_merge)) fc_lista = arcpy.ListFeatureClasses() # Deleting useless feature classes for fc in fc_lista: czlon_nazwy = fc.split("_") if czlon_nazwy[0] != "Polska": arcpy.Delete_management(fc) # Column split kolumny_split( wybor_uzytkownika, tymczasowa_nazwa, warunek, gdb, granice_woj_shp, boundary_ply_shp) # File import from excel to create domain lists import_excel( excel, gdb) # Adding domains 
nadaj_domene( gdb, wybor_uzytkownika) return class XML_export(object): def __init__(self): """Define the tool (tool name is the name of the class).""" self.label = "Eksport danych z XML" self.description = "Skrypt przygotowuje dane i eksportuje wybrane aspkety z XML" self.canRunInBackground = False def getParameterInfo(self): """Define parameter definitions""" # Pierwszy parametr inside = arcpy.Parameter( displayName = "Wejsciowa geobaza", name = "in_gdb", datatype = "DEWorkspace", parameterType = "Required", direction = "Input", multiValue = False) # drugi parametr rodzaj = arcpy.Parameter( displayName = "Wybierz typ geometrii", name = "geom", datatype = "GPString", parameterType = "Required", direction = "Input", multiValue = False) rodzaj.filter.type = "ValueList" rodzaj.filter.list = ['pt','ln','ply'] # trzeci parametr klasy = arcpy.Parameter( displayName = "Wybrane klasy", name = "fcs_of_gdb", datatype = "DEFeatureClass", parameterType = "Required", direction = "Input", multiValue = True) # czwarty wojewodztwa_string = arcpy.Parameter( displayName = "Wybierz wojewodztwa", name = "colli", datatype = "GPString", parameterType = "Required", direction = "Input", multiValue = True) wojewodztwa_string.filter.type = "ValueList" #piaty warstwa = arcpy.Parameter( displayName = "Wybierz warstwe", name = "fl_gdb", datatype = "GPFeatureLayer", parameterType = "Required", direction = "Input") # szosty wyrazenie = arcpy.Parameter( displayName = "Wpisz wyrazenie do selekcji", name = "expres", datatype = "GPSQLExpression", parameterType = "Required", direction = "Input") wyrazenie.parameterDependencies = [warstwa.name] # siodmy folder_xml = arcpy.Parameter( displayName = "Wskaz folder gdzie znajduja sie pliki w formacie XML", name = "XMLdir", datatype = "DEFolder", parameterType = "Required", direction = "Input") # osmy folder_csv = arcpy.Parameter( displayName = "Wskaz folder gdzie maja zostac zapisane pliki CSV", name = "CSVdir", datatype = "DEFolder", parameterType = 
"Required", direction = "Input") #dziewiaty kolumny = arcpy.Parameter( displayName = "Wybierz kolumne", name = "colli2", datatype = "GPString", parameterType = "Required", direction = "Input", multiValue = False) kolumny.filter.type = "ValueList" #dziesiaty check_1 = arcpy.Parameter( displayName = "Zaznacz aby dokonac zapisu do CSV (niezalecane odznaczanie)", name = "check1", datatype = "GPBoolean", parameterType = "Optional", direction = "Input", multiValue = False) check_1.value = True #jedenasty check_2 = arcpy.Parameter( displayName = "Zaznacz aby polaczyc pliki CSV w jeden - odznaczenie spowoduje brak laczenia", name = "check2", datatype = "GPBoolean", parameterType = "Optional", direction = "Input", multiValue = False) p = [inside, rodzaj, klasy, wojewodztwa_string, kolumny, warstwa, wyrazenie, folder_xml, folder_csv, check_1, check_2] return p def isLicensed(self): """Set whether tool is licensed to execute.""" return True def updateParameters(self, parameters): """Modify the values and properties of parameters before internal validation is performed. 
This method is called whenever a parameter has been changed.""" wejsciowa_gdb = parameters[0] wybrana_geometria = parameters[1] lista_klas = parameters[2] wybor_wojewodztwa = parameters[3] wybor_kolumny = parameters[4] check_box_wartosc_1 = parameters[9].value check_box_wartosc_2 = parameters[10].value lista_klas.enabled = 0 arcpy.env.workspace = wejsciowa_gdb.value fclist = arcpy.ListFeatureClasses() fc_o_wybranej_geometrii = [] wybor = wybrana_geometria.valueAsText if check_box_wartosc_2 and check_box_wartosc_1 == False: parameters[0].enabled = 0 parameters[1].enabled = 0 parameters[3].enabled = 0 parameters[4].enabled = 0 parameters[5].enabled = 0 parameters[6].enabled = 0 if check_box_wartosc_1 and check_box_wartosc_2 == False: parameters[0].enabled = 1 parameters[1].enabled = 1 parameters[3].enabled = 1 parameters[4].enabled = 1 parameters[5].enabled = 1 parameters[6].enabled = 1 for fc in fclist: try: split_nazwy = fc.split('_') if len (split_nazwy) == 2 and split_nazwy[1] == wybor: fc_o_wybranej_geometrii.append(fc) except IndexError: pass lista_klas.value = fc_o_wybranej_geometrii if lista_klas.value: fcs = lista_klas.value.exportToString() fcs_lista = fcs.split(";") wybor_wojewodztwa.filter.list = fcs_lista if wybrana_geometria.value: if wybor == 'ln': lista_ln = [ 'highway', 'waterway', 'boundary' ] wybor_kolumny.filter.list = lista_ln elif wybor == 'pt': lista_pt = [ 'natural', 'aeroway', 'historic', 'leisure', 'waterway', 'shop', 'railway', 'tourism', 'highway', 'amenity' ] wybor_kolumny.filter.list = lista_pt elif wybor == 'ply': lista_ply = [ 'landuse', 'building', 'natural', 'amenity' ] wybor_kolumny.filter.list = lista_ply def updateMessages(self, parameters): """Modify the messages created by internal validation for each tool parameter. 
This method is called after internal validation.""" def execute(self, parameters, messages): # Zmienne # -*- coding: CP1250 -*- arcpy.env.overwriteOutput = True tymczasowa_nazwa = "tymczasowaNazwaDlaFC" gdb = parameters[0].valueAsText user_geometry_choice = parameters[1].valueAsText user_wojewodztwo_choice = parameters[3].valueAsText user_column_choice = parameters[4].valueAsText user_expression = parameters[6].valueAsText dir_xml = parameters[7].valueAsText dir_csv = parameters[8].valueAsText field_osm = 'OSMID' xml_parent_way = 'way' xml_parent_node = 'node' xml_atr_parent = 'id' xml_child = 'tag' xml_atr_child = 'k' xml_value_child_1 = 'name' xml_value_child_2 = 'v' xml_value_child_3 = 'ele' xml_value_child_4 = 'addr:housenumber' xml_value_child_5 = 'ref' id_csv = 'id_robocze' id_csv_2 = 'id_elementu' nazwa_csv = 'nazwa' natural_name = "nazwa_ele" natural_name_2 = "wysokosc" building_name = "budynki_nazwa" building_name_2 = "buydnki_numery" natural_csv_name = 'natural_nazwa' natural_csv_name_2 = 'natural_wysokosc' building_csv_name = 'budynki_nazwa' building_csv_name_2 = 'budynki_numery' highway_name = 'ulice' highway_name_2 = 'nr_drogi' highway_csv_name = 'ulice' highway_csv_name_2 = 'nr_drogi' check_box_wartosc_1 = parameters[9].value check_box_wartosc_2 = parameters[10].value dir_natural = os.path.join( dir_csv,'Polska_{0}_{1}.csv'.format(natural_csv_name, user_geometry_choice)) dir_natural_2 = os.path.join( dir_csv,'Polska_{0}_{1}.csv'.format(natural_csv_name_2, user_geometry_choice)) dir_any = os.path.join( dir_csv,'Polska_{0}_{1}.csv'.format(user_column_choice, user_geometry_choice)) dir_building = os.path.join( dir_csv,'Polska_{0}_{1}.csv'.format(building_csv_name, user_geometry_choice)) dir_building_2 = os.path.join( dir_csv,'Polska_{0}_{1}.csv'.format(building_csv_name_2, user_geometry_choice)) dir_highway = os.path.join( dir_csv,'Polska_{0}_{1}.csv'.format(highway_csv_name, user_geometry_choice)) dir_highway_2 = os.path.join( 
dir_csv,'Polska_{0}_{1}.csv'.format(highway_csv_name_2, user_geometry_choice)) # Selekcja z geobazy plikow, ktore zostana wykorzystane do stworzenia list fc if check_box_wartosc_1: selektor_pre( gdb, user_geometry_choice, user_wojewodztwo_choice, user_column_choice, tymczasowa_nazwa, user_expression) get_csv( gdb, user_geometry_choice, user_column_choice, field_osm, dir_xml, xml_parent_node, xml_atr_parent, xml_child, xml_atr_child, xml_value_child_1, xml_value_child_3, dir_csv, natural_csv_name, natural_csv_name_2, id_csv, natural_name, natural_name_2, xml_value_child_4, building_csv_name, building_csv_name_2, building_name, building_name_2, xml_value_child_2, nazwa_csv, xml_parent_way, highway_csv_name, highway_csv_name_2, highway_name, highway_name_2, xml_value_child_5, user_geometry_choice, user_column_choice, check_box_wartosc_1, check_box_wartosc_2, id_csv_2, dir_natural, dir_natural_2, dir_any, dir_building, dir_building_2, dir_highway, dir_highway_2) return class Przygotowanie_do_eksportu(object): def __init__(self): """Define the tool (tool name is the name of the class).""" self.label = "Eliminacja datasetow" self.description = "Skrypt przygotowuje dane w geobazie, aby spelnialy wymagania nastepnego skryptu." self.canRunInBackground = False def getParameterInfo(self): """Define parameter definitions""" # Pierwszy parametr inside = arcpy.Parameter( displayName="Wejsciowa geobaza", name="in_gdb", datatype="DEWorkspace", parameterType="Required", direction="Input") p =[inside] return p def isLicensed(self): """Set whether tool is licensed to execute.""" return True def updateParameters(self, parameters): """Modify the values and properties of parameters before internal validation is performed. This method is called whenever a parameter has been changed.""" def updateMessages(self, parameters): """Modify the messages created by internal validation for each tool parameter. 
This method is called after internal validation.""" def execute(self, parameters, messages): arcpy.env.overwriteOutput = True arcpy.env.workspace = parameters[0].valueAsText alt = arcpy.env.workspace datalist = arcpy.ListDatasets() #clears gdb out of data sets for data in datalist: for fc in arcpy.ListFeatureClasses("*", "ALL", data): czesc = fc.split("_") arcpy.FeatureClassToFeatureClass_conversion( fc, alt, '{0}_{1}'.format(czesc[0], czesc[2])) arcpy.Delete_management(data) return def import_excel( in_excel, out_gdb): """ Opens excel file from path Make a list from sheets in file Iterates through sheets """ workbook = xlrd.open_workbook(in_excel) sheets = [sheet.name for sheet in workbook.sheets()] for sheet in sheets: out_table = os.path.join( out_gdb, arcpy.ValidateTableName( "{0}".format(sheet), out_gdb)) arcpy.ExcelToTable_conversion(in_excel, out_table, sheet) def iter_kolumny( user_input, tymczasowa_mazwa, warunek): """ Selection based on user choice """ lista_kolumn = user_input.split(";") arcpy.AddMessage( "Wybrales nastepujace parametry: {0}".format(lista_kolumn)) fc_lista = arcpy.ListFeatureClasses() for fc in fc_lista: czlon_nazwy = fc.split("_") for kolumna in lista_kolumn: arcpy.MakeFeatureLayer_management(fc, tymczasowa_mazwa) try: arcpy.SelectLayerByAttribute_management( tymczasowa_mazwa, "NEW_SELECTION", '{0}{1}'.format(kolumna, warunek)) arcpy.CopyFeatures_management( tymczasowa_mazwa, '{0}_{1}_{2}'.format(czlon_nazwy[0], kolumna, czlon_nazwy[1])) except arcpy.ExecuteError: pass arcpy.Delete_management(fc) def kolumny_split( user_input, tymczasowa_mazwa, warunek, gdb, wojewodztwa_shp, boundary_ply): """ After iter_kolumny call faulty column is deleted, and new fc is imported which will be substitute for it """ iter_kolumny( user_input, tymczasowa_mazwa, warunek) arcpy.Delete_management( 'Polska_boundary_ply') arcpy.FeatureClassToFeatureClass_conversion( wojewodztwa_shp, gdb, 'GraniceWojewodztw') arcpy.FeatureClassToFeatureClass_conversion( 
boundary_ply, gdb, 'Polska_boundary_ply') def listy_append( listaFc, liniowa, polygon, punkty): """ Simple list appender """ for fc in listaFc: czlon_nazwy = fc.split("_") if czlon_nazwy[1] == "ln": liniowa.append(fc) elif czlon_nazwy[1] == "ply": polygon.append(fc) elif czlon_nazwy[1] == "pt": punkty.append(fc) def nadaj_domene( work_space, wybor_uzytkownika): """ Function firstly makes list out of user choice, then appends only those fcs which are in gdb, then applies only domains which are wanted by user (determined by fc choice) """ arcpy.env.workspace = work_space lista_kolumn = wybor_uzytkownika.split(";") all_tabele_gdb = arcpy.ListTables() lista_poprawiona_o_kolumny = [] for tabela in all_tabele_gdb: pierwszy_czlon_nazwy = tabela.split("_")[0] if pierwszy_czlon_nazwy in lista_kolumn: lista_poprawiona_o_kolumny.append(tabela) elif pierwszy_czlon_nazwy == 'man': lista_poprawiona_o_kolumny.append(tabela) else: arcpy.Delete_management(tabela) for tabela in lista_poprawiona_o_kolumny: lista_robocza = [] lista_robocza.append(tabela) nazwa_domeny = lista_robocza[0] arcpy.TableToDomain_management( tabela, 'CODE', 'DESCRIPTION', work_space, nazwa_domeny, '-', 'REPLACE') arcpy.Delete_management(tabela) def selektor_pre( baza_in, geometria, wojewodztwa, kolumna, tymczasowa_nazwa, user_expression): """ Selects features based on user expression """ arcpy.env.workspace = baza_in fc_lista = wojewodztwa.split(';') arcpy.AddMessage(fc_lista) for fc in fc_lista: arcpy.MakeFeatureLayer_management( fc, tymczasowa_nazwa) arcpy.SelectLayerByAttribute_management( tymczasowa_nazwa, "NEW_SELECTION", user_expression) arcpy.CopyFeatures_management( tymczasowa_nazwa, '{0}_{1}'.format(fc, kolumna)) arcpy.AddMessage( 'Seleckja skonczona dla {0}_{1}'.format(fc, kolumna)) def compare_save_to_csv( gdb, pole_osm, xml_folder, kolumna, parent,atrybut_parent, child, child_atrybut, child_value_1, child_value_2, csv_dir, nazwa_pliku, nazwa_id, nazwa_atrybutu, user_geometry_choice): """ Iterates 
over feature classes in geodatabase, checks for only those which user needs, creates list of ids which will be used in xml_parser """ arcpy.env.workspace = gdb wszystkie_fc = arcpy.ListFeatureClasses() for fc in wszystkie_fc: try: split = fc.split('_') if split[2] == kolumna and split[1] == user_geometry_choice: czesc_nazwy = split[0] geom = split[1] nazwa_pliku = '{0}_{1}'.format(kolumna, geom) lista_id_arcgis = [row[0] for row in arcpy.da.SearchCursor(fc, pole_osm)] arcpy.AddMessage("Dlugosc listy: {0}".format( str(len(lista_id_arcgis)))) xml_parser( '{0}\{1}.xml'.format(xml_folder, czesc_nazwy), lista_id_arcgis, parent, atrybut_parent, child, child_atrybut, child_value_1, child_value_2, nazwa_pliku, csv_dir, nazwa_id, nazwa_atrybutu,czesc_nazwy) except IndexError: pass def compare_save_to_csv_wyjatek( gdb, user_geometry_choice, user_column_choice, pole_osm, xml_folder, kolumna, parent, atrybut_parent, child, child_atrybut, child_value_1, child_value_2, child_value_3, sciezka_csv, csv_name, csv_name_2, nazwa_id, nazwa_atrybutu, nazwa_atrybutu_2): """ Iterates over feature classes in geodatabase, checks for only those which user needs, creates list of ids which will be used in xml_parser_wyjatki """ arcpy.env.workspace = gdb wszystkie_fc = arcpy.ListFeatureClasses() for fc in wszystkie_fc: try: split = fc.split('_') if split[2] == kolumna and split[1] == user_geometry_choice: czesc_nazwy = split[0] lista_id_arcgis = [row[0] for row in arcpy.da.SearchCursor(fc, pole_osm)] arcpy.AddMessage("Dlugosc listy: {0}".format( str(len(lista_id_arcgis)))) xml_parser_wyjatki( '{0}\{1}.xml'.format(xml_folder, czesc_nazwy), lista_id_arcgis, parent, atrybut_parent, child, child_atrybut, child_value_1, child_value_2, child_value_3, sciezka_csv, csv_name, csv_name_2, nazwa_id, nazwa_atrybutu, nazwa_atrybutu_2, czesc_nazwy) except IndexError: pass def merge_csv( sciezka_csv, fragment_nazwy, nazwa_csv): """ Merges csv in specifed directory based on name scheme """ results = 
pd.DataFrame([]) for counter, file in enumerate(glob.glob("{0}\*{1}*".format(sciezka_csv, fragment_nazwy))): name_dataframe = pd.read_csv( file, usecols=[0, 1],encoding = 'CP1250' ) results = results.append( name_dataframe) results.to_csv( '{0}\{1}.csv'.format(sciezka_csv, nazwa_csv), encoding = 'CP1250') def zapis_do_csv( lista_1, lista_2, nazwa_1, nazwa_2, csv_name, katalog, czesc_nazwy): """ Saves to CSV, based on 2 lists. """ raw_data = {nazwa_1: lista_1, nazwa_2: lista_2} df = pd.DataFrame(raw_data, columns=[nazwa_1, nazwa_2]) df.to_csv( '{0}\{1}_{2}.csv'.format(katalog, czesc_nazwy, csv_name), index=False, header=True, encoding = 'CP1250') def xml_parser( xml, lista_agis, parent, atrybut_parent, child, child_atrybut, child_value_1, child_value_2, nazwa_pliku, sciezka_csv, nazwa_id, nazwa_atrybutu, czesc_nazwy): """ Function to pick from xml files tag values. Firstly it creates tree of xml file and then goes each level down and when final condtion is fullfiled id and value from xml file is appended to list in the end of xml file list is saved to CSV. """ rootElement = ET.parse(xml).getroot() l1 = [] l2 = [] for subelement in rootElement: if subelement.tag == parent: if subelement.get(atrybut_parent) in lista_agis: for sselement in subelement: if sselement.tag == child: if sselement.attrib[child_atrybut] == child_value_1: l1.append( subelement.get(atrybut_parent)) l2.append( sselement.get(child_value_2)) zapis_do_csv( l1, l2, nazwa_id, nazwa_atrybutu, nazwa_pliku, sciezka_csv, czesc_nazwy) arcpy.AddMessage('Zapisalem {0}'.format(nazwa_pliku)) arcpy.AddMessage('Zapsialem tyle id: {0}'.format((len(l1)))) arcpy.AddMessage('Zapsialem tyle nazw: {0}'.format((len(l2)))) def xml_parser_wyjatki( xml, lista_agis, parent, atrybut_parent, child, child_atrybut, child_value_1, child_value_2, child_value_3, sciezka_csv, nazwa_pliku, nazwa_pliku_2, nazwa_id, nazwa_atrybutu, nazwa_atrybutu_2, czesc_nazwy): """ Function to pick from xml files tag values. 
Firstly it creates tree of xml file and then goes each level down and when final condtion is fullfiled id and value from xml file is appended to list in the end of xml file list is saved to CSV. Added 'elif' for some feature classes that are described by 2 value tags. """ rootElement = ET.parse(xml).getroot() l1 = [] l2 = [] l3 = [] l4 = [] for subelement in rootElement: if subelement.tag == parent: if subelement.get(atrybut_parent) in lista_agis: for sselement in subelement: if sselement.tag == child: if sselement.attrib[child_atrybut] == child_value_1: l1.append( subelement.get(atrybut_parent)) l2.append( sselement.get(child_value_2)) arcpy.AddMessage('Dodalem {0}'.format(sselement.get(child_value_2))) elif sselement.attrib[child_atrybut] == child_value_3: l3.append( subelement.get(atrybut_parent)) l4.append( sselement.get(child_value_2)) arcpy.AddMessage('Dodalem {0}'.format(sselement.get(child_value_2))) zapis_do_csv( l1, l2, nazwa_id, nazwa_atrybutu, nazwa_pliku, sciezka_csv, czesc_nazwy) zapis_do_csv( l3, l4, nazwa_id, nazwa_atrybutu_2, nazwa_pliku_2, sciezka_csv, czesc_nazwy) def replace_csv( csv, symbol_1, symbol_2): ''' Function replace certain symbol to prevent ArcGIS Pro from crashing during table import. 
''' my_csv_path = csv with open(my_csv_path, 'r') as f: my_csv_text = f.read() find_str = symbol_1 replace_str = symbol_2 csv_str = re.sub(find_str, replace_str, my_csv_text) with open(my_csv_path, 'w') as f: f.write(csv_str) def get_csv( gdb, geom_choice, column_choice, field_osm, dir_xml, xml_parent_node, xml_atr_parent, xml_child, xml_atr_child, xml_value_child_1, xml_value_child_3, dir_csv, natural_csv_name, natural_csv_name_2, id_csv, natural_name, natural_name_2, xml_value_child_4, building_csv_name, building_csv_name_2, building_name, building_name_2, xml_value_child_2, nazwa_csv, xml_parent_way, highway_csv_name, highway_csv_name_2, highway_name, highway_name_2, xml_value_child_5, user_geometry_choice, user_column_choice, check_box_wartosc_1, check_box_wartosc_2, id_csv_2, dir_natural, dir_natural_2, dir_any, dir_building, dir_building_2, dir_highway, dir_highway_2): ''' Combination of all other functions to deliver new fields in feature classes in geodatabase. ''' wybrana_kolumna = column_choice if geom_choice == 'pt': if wybrana_kolumna == 'natural': if check_box_wartosc_1: compare_save_to_csv_wyjatek( gdb, user_geometry_choice, user_column_choice, field_osm, dir_xml, wybrana_kolumna, xml_parent_node, xml_atr_parent, xml_child, xml_atr_child, xml_value_child_1, xml_value_child_2, xml_value_child_3, dir_csv, natural_csv_name, natural_csv_name_2, id_csv, natural_name, natural_name_2) if check_box_wartosc_2: tupel_pt = ( [dir_natural], [dir_natural_2]) list(starmap( merge_csv, tupel_pt)) tupel_pt_2 = ( [dir_natural, ';', ' '], [dir_natural, ':', ' '], [dir_natural_2, ';', ' '], [dir_natural_2, ':', ' ']) list(starmap( replace_csv, tupel_pt_2)) tupel_pt_3 = ( [dir_natural, gdb, id_csv_2, id_csv, 'Polska_natural_pt', field_osm, natural_name], [dir_natural_2, gdb, id_csv_2, id_csv, 'Polska_natural_pt', field_osm, natural_name_2]) list(starmap( import_fix_join, tupel_pt_3)) else: if check_box_wartosc_1: compare_save_to_csv( gdb, field_osm, dir_xml, 
wybrana_kolumna, xml_parent_node, xml_atr_parent, xml_child, xml_atr_child, xml_value_child_1, xml_value_child_2, dir_csv, wybrana_kolumna, id_csv, nazwa_csv, user_geometry_choice) if check_box_wartosc_2: merge_csv( dir_csv, wybrana_kolumna, 'Polska_{0}_{1}'.format(wybrana_kolumna, geom_choice)) tupel_pt_4 = ( [dir_any, ':', ' '], [dir_any, ';', ' ']) list(starmap( replace_csv, tupel_pt_4)) import_fix_join( dir_any, gdb, id_csv_2, id_csv, 'Polska_{0}_{1}'.format(wybrana_kolumna, geom_choice), field_osm, nazwa_csv) elif geom_choice == 'ply': if wybrana_kolumna == 'building': if check_box_wartosc_1: compare_save_to_csv_wyjatek( gdb, user_geometry_choice, user_column_choice, field_osm, dir_xml, wybrana_kolumna, xml_parent_way, xml_atr_parent, xml_child, xml_atr_child, xml_value_child_1, xml_value_child_2, xml_value_child_4, dir_csv, building_csv_name, building_csv_name_2, id_csv, building_name, building_name_2) if check_box_wartosc_2: tupel_ply = ([ dir_csv, building_csv_name, 'Polska_{0}_{1}'.format(building_csv_name, user_geometry_choice), dir_csv, building_csv_name_2, 'Polska_{0}_{1}'.format(building_csv_name_2, user_geometry_choice)]) list(starmap( merge_csv, tupel_ply)) tupel_ply_2 = ( [dir_building, ';', ' '], [dir_building, ':', ' '], [dir_building_2, ':', ' '], [dir_building_2, ';', ' ']) list(starmap( replace_csv, tupel_ply_2)) tupel_ply_3 = ( [dir_building, gdb, id_csv_2, id_csv, 'Polska_building_ply', field_osm, building_name], [dir_building_2, gdb, id_csv_2, id_csv, 'Polska_building_ply', field_osm, building_name_2]) list(starmap( import_fix_join, tupel_ply_3)) else: if check_box_wartosc_1: compare_save_to_csv( gdb, field_osm, dir_xml, wybrana_kolumna, xml_parent_way, xml_atr_parent, xml_child, xml_atr_child, xml_value_child_1, xml_value_child_2, dir_csv, wybrana_kolumna, id_csv, nazwa_csv, user_geometry_choice) if check_box_wartosc_2: merge_csv( dir_csv, wybrana_kolumna, 'Polska_{0}_{1}'.format(wybrana_kolumna, geom_choice)) tupel_ply_4 = ( [dir_any , 
':', ' '], [dir_any , ';', ' ']) list(starmap( replace_csv, tupel_ply_4)) import_fix_join( dir_any, gdb, id_csv_2, id_csv, 'Polska_{0}_{1}'.format(wybrana_kolumna, geom_choice), field_osm, nazwa_csv) elif geom_choice == 'ln': if wybrana_kolumna == 'highway': if check_box_wartosc_1: compare_save_to_csv_wyjatek( gdb, user_geometry_choice, user_column_choice, field_osm, dir_xml, wybrana_kolumna, xml_parent_way, xml_atr_parent, xml_child, xml_atr_child, xml_value_child_1, xml_value_child_2, xml_value_child_5, dir_csv, highway_csv_name, highway_csv_name_2, id_csv, highway_name, highway_name_2) if check_box_wartosc_2: tupel_ln = ([ dir_csv, highway_csv_name, 'Polska_{0}_{1}'.format(highway_csv_name, user_geometry_choice), dir_csv, highway_csv_name_2, 'Polska_{0}_{1}'.format(highway_csv_name_2, user_geometry_choice)]) list(starmap( merge_csv, tupel_ln)) tupel_ln_2 = ( [dir_highway, ';', ' '], [dir_highway, ':', ' '], [dir_highway_2, ':', ' '], [dir_highway_2, ';', ' ']) list(starmap( replace_csv, tupel_ln_2)) tupel_ln_3 = ( [dir_building, gdb, id_csv_2, id_csv, 'Polska_highway_ln', field_osm, highway_name], [dir_building_2, gdb, id_csv_2, id_csv, 'Polska_highway_ln', field_osm, highway_name_2]) list(starmap( replace_csv, tupel_ln_3)) else: if check_box_wartosc_1: compare_save_to_csv( gdb, field_osm, dir_xml, wybrana_kolumna, xml_parent_way, xml_atr_parent, xml_child, xml_atr_child, xml_value_child_1, xml_value_child_2, dir_csv, wybrana_kolumna, id_csv, nazwa_csv, user_geometry_choice) if check_box_wartosc_2: merge_csv( dir_csv, wybrana_kolumna, 'Polska_{0}_{1}'.format(wybrana_kolumna, geom_choice)) tupel_ln_4 = ( [dir_any, ':', ' '], [dir_any, ';', ' ']) list(starmap( replace_csv, tupel_ln_4)) import_fix_join( dir_any, gdb, id_csv_2, id_csv, 'Polska_{0}_{1}'.format(wybrana_kolumna, geom_choice), field_osm, nazwa_csv) def fix_field( tabela , nazwa, pole): """ Imported tables has got not valid field with ID. 
This fixes that problem by creating a new one in text type, copying the value and deleting the old one.
    """
    arcpy.AddField_management(tabela, nazwa, "TEXT", field_length=20)
    try:
        with arcpy.da.UpdateCursor(tabela, [pole, nazwa]) as cursor:
            for row in cursor:
                row[1] = row[0]
                cursor.updateRow(row)
    except RuntimeError:
        print(row[1])
    del row, cursor
    arcpy.DeleteField_management(tabela, [pole])

def import_fix_join(in_table, out_gdb, nazwa, id_csv, in_fc, field_osm, pole_to_join):
    """
    Imports table to geodatabase
    Fixes its column
    Join field to feature class.
    """
    arcpy.TableToGeodatabase_conversion([in_table], out_gdb)
    fix_field(in_table, nazwa, id_csv)
    pole = [pole_to_join]
    arcpy.env.workspace = out_gdb
    arcpy.JoinField_management(in_fc, field_osm, in_table, nazwa, pole)

Three scripts in ArcGIS Pro software (GUI screenshots of script number one, script number two, and script number three omitted).

Answer: Some minor stuff:

- I don't see where self.tools is used after initialization - can it be deleted? If you need to keep it, does it need to change? If it doesn't change (if it can be immutable), use a tuple instead of a list.
- CP1250 should be avoided unless you have a really good reason. Everyone should be on UTF-8. Using UTF-8 will allow you to add all of the proper character accents in your strings, which currently appear to be missing.
- Python's naming convention is snake_case for variables and function names, and UpperCamelCase only for classes, so canRunInBackground would actually be can_run_in_background. Same for other names.
- Avoid naming list variables l1, l2, etc. They should have a meaningful name according to what they actually store.
- For short function calls such as import_excel(excel, gdb) there is no need to split the call onto two lines. For calls with many arguments it's fine, but here it's more legible on one line.
- This:

      wejsciowa_gdb = parameters[0]
      wybrana_geometria = parameters[1]
      lista_klas = parameters[2]
      wybor_wojewodztwa = parameters[3]
      wybor_kolumny = parameters[4]

  can be abbreviated to

      wejsciowa_gdb, wybrana_geometria, lista_klas, wybor_wojewodztwa, wybor_kolumny = parameters[:5]

  There are similar instances elsewhere in your code.
- I suggest making a loop for your checkbox logic:

      if check_box_wartosc_1 != check_box_wartosc_2:
          enabled = int(check_box_wartosc_1)
          for i in (0, 1, 3, 4, 5, 6):
              parameters[i].enabled = enabled

- After your if wybor == 'ln', you have several temporary list assignments. You don't need the temporary variables - you can assign the lists directly to filter.list.
- The argument list for get_csv is a little insane. You should make a class with members for those arguments.
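The last point about get_csv could look something like the following minimal sketch: the field names and paths here are illustrative placeholders, not taken from the original script, but the idea is to bundle the many related arguments into one configuration object so every call site passes a single value.

```python
from dataclasses import dataclass


# Illustrative configuration object; in the real script it would hold
# all of the paths, field names, and flags that get_csv currently takes
# as separate positional arguments.
@dataclass
class CsvExportConfig:
    gdb: str
    geometry: str            # 'pt', 'ln' or 'ply'
    column: str              # e.g. 'natural', 'building', 'highway'
    xml_dir: str
    csv_dir: str
    osm_id_field: str = "OSMID"


def get_csv(config: CsvExportConfig) -> None:
    # The body reads config.gdb, config.geometry, etc. instead of
    # threading dozens of positional parameters through every call.
    print("exporting {0} ({1}) from {2}".format(
        config.column, config.geometry, config.gdb))


cfg = CsvExportConfig(gdb="C:/dane.gdb", geometry="pt", column="natural",
                      xml_dir="C:/xml", csv_dir="C:/csv")
get_csv(cfg)
```

A side benefit is that defaults such as the OSM id field name live in one place instead of being repeated at every call site.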
{ "domain": "codereview.stackexchange", "id": 32683, "tags": "python, plugin, geospatial, arcpy" }
Radioactive Materials to Energy AT HOME
Question: Is there a way to use a radioactive element (Americium in my case, but for demo, any radioactive material) to create a usable amount of energy (power a small LED?) AT HOME! Using materials that can be bought from eBay/Amazon and the store? Answer: Depending on how much nuclear material you have, you can use a system based on thermoelectric materials to directly use the heat from the decay of the material to generate electricity. Following from Hazzey's comment, the safety of the experiment will quickly go out of the household realm the more power you try to generate. Edit for OP comment 1: In theory Americium is okay to use for a thermoelectric device because it has an alpha particle decay route (it is easier to produce heat with alpha particles). Wikipedia says Americium-based smoke detectors have 0.3 micrograms per unit, so you would have to open many thousands of detectors to get a usable amount.
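The "many thousands of detectors" claim is easy to sanity-check with a back-of-the-envelope estimate. The sketch below assumes values not stated in the answer: 0.3 micrograms of Am-241 per detector, a 432.2-year half-life, and roughly 5.49 MeV deposited per alpha decay.

```python
import math

# Assumed physical constants and detector data (not from the original answer)
N_A = 6.022e23                 # Avogadro's number, 1/mol
half_life_s = 432.2 * 365.25 * 24 * 3600   # Am-241 half-life in seconds
mass_g = 0.3e-6                # Am-241 per smoke detector, grams
molar_mass = 241.0             # g/mol
alpha_energy_J = 5.49e6 * 1.602e-19        # ~5.49 MeV per decay, in joules

n_atoms = mass_g / molar_mass * N_A
activity_Bq = math.log(2) / half_life_s * n_atoms   # decays per second
heat_W = activity_Bq * alpha_energy_J               # thermal power

# Roughly tens of kilobecquerels and tens of nanowatts per detector
print("activity ~ {:.2e} Bq, heat ~ {:.2e} W".format(activity_Bq, heat_W))
```

Since a thermoelectric generator converts only a few percent of heat to electricity, and even a dim LED wants milliwatts, tens of nanowatts of heat per detector supports the answer's point that a single detector is nowhere near enough.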
{ "domain": "engineering.stackexchange", "id": 1444, "tags": "power, energy-efficiency, energy, thermal-conduction, radiation" }
Why is the entropy of a subsystem density matrix equal to that of the other subsystem?
Question: If we have some system $\rho_{AB}$ we can find the entanglement entropy as $S(\rho_A)=S(\text{Tr}_B(\rho_{AB}))=S(\text{Tr}_A(\rho_{AB}))$ But why are the entropies of the subsystems $\rho_A$ and $\rho_B$ equal? Answer: For a state $|\psi\rangle=\sum_{ij}C_{ij}|i\rangle|j\rangle$, the density matrix is $$\rho=|\psi\rangle \langle \psi|=\sum_{ijkl}C_{ij}{C_{kl}}^*|ij\rangle\langle kl|$$ We find the reduced density matrix of $A$ by tracing out the $B$ subsystem $$\rho_A=\text{Tr}_B\left(\rho\right)=\sum_x {}_{B}\langle x|\sum_{ijkl}C_{ij}{C_{kl}}^*|ij\rangle\langle kl|x\rangle _B=\sum_{ikx}C_{ix}{C_{kx}}^*|i\rangle\langle k|=\sum_{ikx}C_{ix}{C^\dagger_{xk}}|i\rangle\langle k|\ .$$ Equivalently, for B we find $$\rho_B=\text{Tr}_A\left(\rho\right)=\sum_x {}_{A}\langle x|\sum_{ijkl}C_{ij}{C_{kl}}^*|ij\rangle\langle kl|x\rangle _A=\sum_{xjl}C_{xj}{C_{xl}}^*|j\rangle\langle l|=\sum_{xjl}C_{jx}^T{C_{xl}}^*|j\rangle\langle l|\ .$$ The reduced states for $A$ and $B$, respectively, are $\rho_A=CC^\dagger$ and $\rho_B=C^T C^*=(C^\dagger C)^*$. Now $CC^\dagger$ and $C^\dagger C$ have the same non-zero eigenvalues, and since they are real, also $C^TC^*$ has the same eigenvalues; and thus also the same entropy (which is a function of their eigenvalues).
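The eigenvalue argument at the end is easy to verify numerically. The following NumPy sketch (not part of the original answer) builds a random coefficient matrix $C$, forms both reduced density matrices, and compares their von Neumann entropies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random coefficient matrix C_ij for |psi> = sum_ij C_ij |i>_A |j>_B
C = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))
C /= np.sqrt(np.trace(C @ C.conj().T))      # normalize so <psi|psi> = 1

rho_A = C @ C.conj().T                      # C C^dagger        (2x2)
rho_B = (C.conj().T @ C).conj()             # (C^dagger C)^*    (3x3)


def entropy(rho, tol=1e-12):
    """Von Neumann entropy -sum(lam * log(lam)) over nonzero eigenvalues."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > tol]
    return float(-np.sum(lam * np.log(lam)))


S_A, S_B = entropy(rho_A), entropy(rho_B)
print(S_A, S_B)   # equal up to numerical noise
```

Even though the subsystems have different dimensions, the nonzero spectra of $CC^\dagger$ and $(C^\dagger C)^*$ coincide, so the two entropies agree.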
{ "domain": "physics.stackexchange", "id": 55021, "tags": "homework-and-exercises, quantum-information, entropy, quantum-entanglement" }
Normal Force - What if I stepped on a molecularly frozen object?
Question: I’ve been trying to understand normal force on a molecular level, and I’ve come across the idea that atoms are connected to each other by forces that act like springs. You step on the floor; the floor atoms compress until their ‘spring’ force is equal and opposite to that which you exert on them; this push back upwards on you allows you to step forward! This normal force connection to motion applies to more objects, e.g. cars actually move forward because the ground’s springy atoms push back on the car tires. I’ve heard that if normal force didn’t exist, it would mean that the atoms of an object give way to any force applied to them, and everything would fall through and accelerate toward the center of the earth. This makes sense when you think about a free body diagram: if gravity isn’t counteracted by normal force, then the net force (and acceleration) in the y-direction is downward. I’m curious about the theoretical possibility of an alternative ‘absence of normal force’ when thinking about it as an object’s ability to push back on another object with spring action (perhaps this is a sub-question: is a substance’s ability to push back on a force applied by another substance an accurate definition of normal force?) What if the theoretical substance ‘diamondite’ was so hard and rigid that it had NO ATOMIC SPRING ACTION at all? Obviously this is completely theoretical; we can’t even say that objects at zero kelvin are rendered atomically ‘frozen’. But if I in theory constructed a floor out of diamondite and then stepped on it, wouldn’t that floor be unable to push back on me? Isn’t this another possible answer to the question what would happen if normal force didn’t exist? I’m wondering what would happen if I stepped on diamondite: Would the absence of atomic spring action mean that there cannot be friction and therefore I would just slip on it and then infinitely slide at a constant velocity (or constant acceleration?) across the floor? 
If I put both feet on the floor, would I become stuck to the surface? Since there is no push springing me forward, would I be stuck forever pressing into the floor but never able to move? Please correct any misunderstandings of the mechanics at work in normal force that you see here. Thank you! Answer: Your general concept of a normal force is basically fine; it is the force applied by a body when a force is applied upon it by another body, like the force the ground applies to your feet when you are standing on the surface of the earth, with the original force being gravity. In your concern about whether the material would have a normal force you are forgetting a very important aspect of the interaction between material objects on the microscopic scale: the electrostatic interactions of the particles. While you are correct to a first approximation that interatomic forces (what chemists call bonds) can be modeled somewhat like springs that will "snap back" if you do not apply so much force as to irrevocably separate them, I want to focus in on your "0 K" scenario, as it raises an important point. Even if the atoms in your hypothetical substance were utterly frozen in a perfect crystal lattice at absolute zero, and the substance had no heat capacity with which to obtain vibrational energy as heat from other sources, the particles that make up your foot (or your shoe or...) would still be strongly repelled by the hypothetical substance due to the Coulomb force at short ranges. Thus, you would still experience a normal force and could still step on this substance. You would not be "stuck" unless this substance were to form an unusually strong attractive interaction with the molecules of your shoe/foot, which does not seem particularly likely. Note that your concern with not being able to move if you were to stop completely on a surface of your hypothetical substance is also missing a fundamental part of the puzzle of movement: friction.
Unless a surface is so utterly free of defects that it is atomically smooth, you will experience significant microscopic friction when your muscles try to move you along a surface. By the conservation of momentum, if one leg moves forward, the rest of you will move backward in turn, with the degree of displacement dictated by the relative mass of your leg and the rest of your body. The difference, however, is that the frictional force (originating in the unevenness of the surface and the electrostatic interactions we mentioned before) applied to your planted foot/shoe will resist this motion and keep your foot essentially in place, allowing your other foot to move forward and letting you make progress walking along the surface. This is why car tires "stick" to the road and are able to push forward, why you are able to walk on various surfaces, etc. It is also why it is much harder to get moving along a surface with a smaller amount of friction (e.g. ice).
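The "strong repulsion at short range" in the answer can be illustrated with a standard model potential. The sketch below uses the Lennard-Jones potential as a stand-in for a generic interatomic interaction (the parameter values are arbitrary reduced units, not tied to any real material): squeezing two atoms closer than their equilibrium separation produces a steeply rising repulsive force, which is what the macroscopic normal force is built from, even with the lattice "frozen."

```python
def lj_force(r, epsilon=1.0, sigma=1.0):
    """Force -dV/dr for the Lennard-Jones potential
    V(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6).
    Positive return values mean repulsion, negative mean attraction."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon / r * (2.0 * sr6 ** 2 - sr6)


r_min = 2.0 ** (1.0 / 6.0)     # equilibrium separation: the force vanishes here

print(lj_force(0.9))           # strongly repulsive when the atoms are squeezed
print(lj_force(r_min))         # ~0 at the equilibrium spacing
print(lj_force(2.0))           # weakly attractive at larger separations
```

The repulsive wall at small r rises so fast (as r^-13) that any compression of the surface layer is met by an enormous restoring force, so a foot pressing on even a perfectly rigid lattice would still be pushed back.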
{ "domain": "physics.stackexchange", "id": 95949, "tags": "quantum-mechanics, newtonian-mechanics, particle-physics, atomic-physics, molecular-dynamics" }
How do I feature a book on wiki.ros.org/Books
Question: Can someone help me create a book page like http://wiki.ros.org/Books/ROS_Robotics_By_Example to be posted on http://wiki.ros.org/Books to feature a book? Originally posted by PacktPub on ROS Answers with karma: 3 on 2017-02-13 Post score: 0 Answer: The ROS wiki is just that, a wiki, so anyone (with an account) can create pages like that. The procedure would be:

1. register for an account
2. request editing access (ros-infrastructure/roswiki#139, ros-infrastructure/roswiki#258 now)
3. create a new page
4. link to it from the Books page

Originally posted by gvdhoorn with karma: 86574 on 2017-02-13 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by ohsh on 2020-03-08: Thank you for good information.
{ "domain": "robotics.stackexchange", "id": 27003, "tags": "ros, documentation" }
When do one-point functions vanish?
Question: I have read in many places that one-point functions, like the one below: $$\langle \Omega|\phi(x) |\Omega \rangle$$ are equal to zero ($|\Omega \rangle$ is the vacuum of some interacting theory, $\phi$ is the field operator - scalar, spinorial, etc...) Peskin's book, for instance, says (page 212) this is USUALLY zero by symmetry in the case of a scalar field ("usually" probably means $\lambda \phi^4$ theory) and by Lorentz invariance for higher spins. How can I see that? And a more general question: can someone point out a counterexample? A case where these functions are not zero? Answer: First of all, if $\phi(x)$ were not a scalar field but a field with spin (1/2 or 1), then a nonzero expectation value would break the Lorentz symmetry because the expectation value would be a preferred spinor or vector in spacetime (similarly for other more complicated representations of the Lorentz group). So only vevs of scalar fields may be compatible with a relativistic theory. For scalar fields $\phi(x)$, it is a matter of field redefinition what value is called "zero" and what value is called something else. For example, in the electroweak theory, the Higgs field is a doublet $h=(h_1,h_2)$ and its vacuum expectation value is normally $(v,0)$, i.e. nonzero. However, one may rewrite the first component as $h_1=v+H$ and the expectation value of the new field $H$ is zero again. Classically, the vacuum is a stationary configuration, so the scalar fields must be stationary points of the potential, $$ \left.\frac{\partial V}{\partial \phi_i}\right|_{\phi_i =\text{vacuum values}} = 0.$$ So when the vev of $\phi_i$ is zero in the vacuum, it means that there are no linear terms of the type $\phi_i$ in the (Taylor-expanded) potential $V$. That's often the case; if there were such linear terms, the point $\phi=0$ would be unstable and we would probably reparameterize $\phi\to \phi'(\phi)$ in such a way that $\phi'=0$ would correspond to a stationary point. 
As I hinted above, the Higgs field is a natural field for which the most natural parameterization implies $\langle h(x)\rangle \neq 0$. However, string theory and supersymmetric quantum field theories are full of additional counterexamples, the so-called moduli. The potential for the moduli is, by definition, zero (or a universal constant), so any point is a stationary point. The physical properties of the vacuum and particles upon it depend on the moduli. For example, the coupling constant is often written as $$ g = \exp(\phi)$$ where $\phi$ is the so-called dilaton. It would be counterproductive to shift the value of $\phi$ in such cases because $\phi$ stores some important physical information. So moduli are other examples whose one-point function is nonzero in the most natural field redefinition. Once again, it's always possible to redefine the fields so all of them have vanishing one-point functions in the vacuum.
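The Higgs shift in the answer can be written out explicitly. Taking an illustrative quartic potential with its minimum at $\phi = v$:

$$V(\phi) = \frac{\lambda}{4}\left(\phi^2 - v^2\right)^2, \qquad \langle \phi \rangle = v \neq 0.$$

Substituting $\phi = v + H$:

$$V(v+H) = \lambda v^2 H^2 + \lambda v H^3 + \frac{\lambda}{4} H^4,$$

which has no term linear in $H$, so $H=0$ is a stationary point and $\langle H \rangle = 0$: the same physics, in a field redefinition whose one-point function vanishes.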
{ "domain": "physics.stackexchange", "id": 4384, "tags": "quantum-field-theory, symmetry, renormalization, vacuum, correlation-functions" }
how to implement multiple subscribers simply
Question: I am studying ROS2 dashing. I'd like to create many simple subscribers that subscribe to the same topic, and test CPU performance. Currently I am trying the following. This is an example in which two subscribers run; I'd like to test a case in which, for example, 100 subscribers are running. Is there a better way to create multiple such subscribers? Thanks.

//subnodes.hpp
#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/int16.hpp>

class MinimalSubscriber1 : public rclcpp::Node {
private:
    rclcpp::Subscription<std_msgs::msg::Int16>::SharedPtr subscription_;
    void topic_callback_(const std_msgs::msg::Int16::SharedPtr msg);
public:
    MinimalSubscriber1();
};

class MinimalSubscriber2 : public rclcpp::Node {
private:
    rclcpp::Subscription<std_msgs::msg::Int16>::SharedPtr subscription_;
    void topic_callback_(const std_msgs::msg::Int16::SharedPtr msg);
public:
    MinimalSubscriber2();
};

//subnodes.cpp
#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/int16.hpp>
#include "subnodes.hpp"

void MinimalSubscriber1::topic_callback_(const std_msgs::msg::Int16::SharedPtr msg) {
    RCLCPP_INFO(this->get_logger(), "No.1 heard:%d", msg->data);
}

MinimalSubscriber1::MinimalSubscriber1()
    : Node("minimal_subscriber_test1") {
    subscription_ = this->create_subscription<std_msgs::msg::Int16>(
        "testtopic",
        std::bind(&MinimalSubscriber1::topic_callback_, this, std::placeholders::_1));
}

void MinimalSubscriber2::topic_callback_(const std_msgs::msg::Int16::SharedPtr msg) {
    RCLCPP_INFO(this->get_logger(), "No.2 heard:%d", msg->data);
}

MinimalSubscriber2::MinimalSubscriber2()
    : Node("minimal_subscriber_test2") {
    subscription_ = this->create_subscription<std_msgs::msg::Int16>(
        "testtopic",
        std::bind(&MinimalSubscriber2::topic_callback_, this, std::placeholders::_1));
}

//main.cpp
#include <rclcpp/rclcpp.hpp>
#include "subnodes.hpp"

int main(int argc, char *argv[]) {
    rclcpp::init(argc, argv);
    rclcpp::executors::SingleThreadedExecutor exec;
    auto node1 = std::make_shared<MinimalSubscriber1>();
    auto node2 = std::make_shared<MinimalSubscriber2>();
    exec.add_node(node1);
    exec.add_node(node2);
    exec.spin(); // was exec::spin(), which does not compile
    rclcpp::shutdown();
    return 0;
}

Originally posted by marney on ROS Answers with karma: 3 on 2020-01-13 Post score: 0

Original comments

Comment by MCornelis on 2020-01-13: Long answer: Have a look here: https://discourse.ros.org/t/singlethreadedexecutor-creates-a-high-cpu-overhead-in-ros-2/10077 And here: https://github.com/nobleo/ros2_performance Short answer: You can create subscribers in a loop, but you have to save a pointer to the subscribers outside of the loop:

std::vector<rclcpp::Subscription<String>::SharedPtr> sub_refs;
for (int s = 0; s < amount_of_subs; ++s) {
    auto sub = node->create_subscription<String>("topic_name", qos, [](String::SharedPtr) {});
    sub_refs.push_back(sub);
}

I'm not sure if the example is using the dashing API, but a similar implementation should be possible in dashing. The important thing is that you have sub_refs declared outside of the loop. A full example of the source code can be found on the nobleo GitHub on the dashing branch in the source folder. Look for ros.cc; that one partly does what you want.

Comment by MCornelis on 2020-01-13: Just a heads up, working with CPU utilization is not as straightforward as some people might think, especially when dealing with big.LITTLE configurations and other factors that might influence CPU utilization. The results you can find in the discourse post and on the nobleo GitHub are all hardware- and software-specific, and the tests may return different results on your setup. 
Answer: Simple example that makes n nodes, each with 1 subscriber that subscribes to the topic "topic_name":

#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/string.hpp>
#include <vector>

int main() {
    using namespace std_msgs::msg;
    rclcpp::init(0, nullptr);
    int n = 100; // create this many nodes
    rclcpp::executors::SingleThreadedExecutor exec;                // create executor
    std::vector<rclcpp::Node::SharedPtr> node_refs;                // vector to store node references
    std::vector<rclcpp::Subscription<String>::SharedPtr> sub_refs; // vector to store sub references
    for (int i = 0; i < n; i++) {
        auto node = std::make_shared<rclcpp::Node>("node_" + std::to_string(i)); // create a node
        auto sub = node->create_subscription<String>("topic_name", qos, [](String::SharedPtr) {}); // add a sub to the node
        sub_refs.push_back(sub);   // save a reference to the sub
        node_refs.push_back(node); // save a reference to the node
        exec.add_node(node);       // add the node with the sub to the executor
    }
    exec.spin(); // spin everything
    rclcpp::shutdown();
    return 0;
}

Originally posted by MCornelis with karma: 331 on 2020-01-13 This answer was ACCEPTED on the original site Post score: 1

Original comments

Comment by MCornelis on 2020-01-13: Nodes also add CPU overhead btw. An explanation can be found here: https://discourse.ros.org/t/reconsidering-1-to-1-mapping-of-ros-nodes-to-dds-participants/10062/22 So you could also consider creating 1 node and adding n subscribers to it (instead of n nodes with 1 sub each).

Comment by marney on 2020-01-16: Thank you so much for the reply. I could test the case with multiple nodes. "you could also consider creating 1 node and adding n subscribers to it": yes, the idea should be considered. Thanks.
{ "domain": "robotics.stackexchange", "id": 34263, "tags": "ros2" }
An immutable C++ string with ref-counting
Question: Class intended to be used as main type in a key-value database where keys and values are strings. Searched features: It is a const char * Behaves like a std::string Reference counting integrated reducing the number of indirections Vampirizes string_view using ptr + len Some additional methods (contains(), trim(), etc) Basically, it is a pointer to chars where pointed memory is prefixed by the ref-counter (4-bytes) and the string length (4-bytes). An example of usage and the unit tests can be found at: https://github.com/torrentg/cstring Not 100% sure on memory alignment and thread-safety. I will appreciate your comments and suggestions. Here is cstring.hpp #pragma once #include <memory> #include <string> #include <limits> #include <atomic> #include <utility> #include <cassert> #include <cstdint> #include <stdexcept> #include <string_view> #include <type_traits> namespace gto { /** * @brief Immutable string based on a plain C-string (char *) with ref-counting. * @details * - Shared content between multiple instances (using ref counting). * - Automatic mem dealloc (when no refs point to content). * - Same sizeof than a 'char *'. * - Null not allowed (equals to empty string). * - Empty string don't require alloc. * - String content available on debug. * - Mimics the STL basic_string class. * @details Memory layout: * * ----|----|-----------0 * ^ ^ ^ * | | |-- string content (0-ended) * | |-- string length (4-bytes) * |-- ref counter (4-bytes) * * mStr (cstring pointer) points to the string content (to allow view content on debug). * Allocated memory is aligned to ref counter type size. * Allocated memory is a multiple of ref counter type size. 
* @todo * - Validate assumption that sizeof(atomic<uint32_t>) == sizeof(uint32_t) * - Check that processor assumes memory alignment or we need to add __builtin_assume_aligned(a)) or __attribute((aligned(4))) * - Check that std::atomic is enough to grant integrity in a multi-threaded usage * - Explore cache invalidation impact on multi-threaded code * - Performance tests * @see https://en.cppreference.com/w/cpp/string/basic_string * @see https://github.com/torrentg/cstring * @note This class is immutable. * @version 0.9.0 */ template<typename Char, typename Traits = std::char_traits<Char>, typename Allocator = std::allocator<Char>> class basic_cstring { public: // declarations using prefix_type = std::uint32_t; using atomic_prefix_type = std::atomic<prefix_type>; using allocator_type = typename std::allocator_traits<Allocator>::template rebind_alloc<prefix_type>; using allocator_traits = std::allocator_traits<allocator_type>; using traits_type = Traits; using size_type = typename std::allocator_traits<Allocator>::size_type; using difference_type = typename std::allocator_traits<Allocator>::difference_type; using value_type = Char; using const_reference = const value_type &; using const_pointer = typename std::allocator_traits<Allocator>::const_pointer; using const_iterator = const_pointer; using const_reverse_iterator = typename std::reverse_iterator<const_iterator>; using basic_cstring_view = std::basic_string_view<value_type, traits_type>; private: // declarations using pointer = typename std::allocator_traits<Allocator>::pointer; public: // static members static constexpr size_type npos = std::numeric_limits<size_type>::max(); private: // static members static allocator_type alloc; static constexpr prefix_type mEmpty[3] = {0, 0, static_cast<prefix_type>(value_type())}; private: // members //! Memory buffer with prefix_type alignment. const_pointer mStr = nullptr; private: // static methods //! Sanitize a char array pointer avoiding nulls. 
static inline constexpr const_pointer sanitize(const_pointer str) { return (str == nullptr ? getPtrToString(mEmpty) : str); } //! Return pointer to counter from pointer to string. static inline constexpr atomic_prefix_type * getPtrToCounter(const_pointer str) { assert(str != nullptr); pointer ptr = const_cast<pointer>(str) - 2 * sizeof(prefix_type); return reinterpret_cast<atomic_prefix_type *>(ptr); } //! Return pointer to string length from pointer to string. static inline constexpr prefix_type * getPtrToLength(const_pointer str) { assert(str != nullptr); pointer ptr = const_cast<pointer>(str) - sizeof(prefix_type); return reinterpret_cast<prefix_type *>(ptr); } //! Return pointer to string from pointer to counter. static inline constexpr const_pointer getPtrToString(const prefix_type *ptr) { assert(ptr != nullptr); return reinterpret_cast<const_pointer>(ptr + 2); } //! Returns the allocated array length (of prefix_type values). //! @details It is granted that there is place for the ending '\0'. static size_type getAllocatedLength(size_type len) { return (3 + (len * sizeof(value_type)) / sizeof(prefix_type)); } //! Allocate memory for the counter + length + string + eof. Returns a pointer to string. static pointer allocate(size_type len) { assert(len > 0); assert(len <= std::numeric_limits<prefix_type>::max()); size_type n = getAllocatedLength(len); prefix_type *ptr = allocator_traits::allocate(alloc, n); assert(reinterpret_cast<std::size_t>(ptr) % alignof(prefix_type) == 0); allocator_traits::construct(alloc, ptr, 1); ptr[1] = static_cast<prefix_type>(len); return const_cast<pointer>(getPtrToString(ptr)); } //! Deallocate string memory if no more references. 
static void deallocate(const_pointer str) { atomic_prefix_type *ptr = getPtrToCounter(str); switch(ptr[0]) { case 0: // constant break; case 1: { // there are no more references prefix_type len = *getPtrToLength(str); size_type n = getAllocatedLength(len); allocator_traits::destroy(alloc, ptr); allocator_traits::deallocate(alloc, reinterpret_cast<prefix_type *>(ptr), n); break; } default: ptr[0]--; } } //! Increment the reference counter (except for constants). static void incrementRefCounter(const_pointer str) { atomic_prefix_type *ptr = getPtrToCounter(str); if (ptr[0] > 0) { ptr[0]++; } } public: // methods //! Default constructor. basic_cstring() : basic_cstring(nullptr) {} //! Constructor. basic_cstring(const_pointer str) : basic_cstring(str, (str == nullptr ? 0 : traits_type::length(str))) {} //! Constructor. basic_cstring(const_pointer str, size_type len) { if (str == nullptr || len == 0) { mStr = getPtrToString(mEmpty); return; } else { pointer content = allocate(len); traits_type::copy(content, str, len); content[len] = value_type(); mStr = content; } } //! Destructor. ~basic_cstring() { deallocate(mStr); } //! Copy constructor. basic_cstring(const basic_cstring &other) noexcept : mStr(other.mStr) { incrementRefCounter(mStr); } //! Move constructor. basic_cstring(basic_cstring &&other) noexcept : mStr(std::exchange(other.mStr, getPtrToString(mEmpty))) {} //! Copy assignment. basic_cstring & operator=(const basic_cstring &other) { if (mStr == other.mStr) return *this; deallocate(mStr); mStr = other.mStr; incrementRefCounter(mStr); return *this; } //! Move assignment. basic_cstring & operator=(basic_cstring &&other) noexcept { std::swap(mStr, other.mStr); return *this; } //! Return length of string. size_type size() const noexcept { return *(getPtrToLength(mStr)); } //! Return length of string. size_type length() const noexcept { return *(getPtrToLength(mStr)); } //! Test if string is empty. bool empty() const noexcept { return (length() == 0); } //! 
Get character of string. const_reference operator[](size_type pos) const { return mStr[pos]; } //! Get character of string checking for out_of_range. const_reference at(size_type pos) const { return (empty() || pos >= length() ? throw std::out_of_range("cstring::at") : mStr[pos]); } //! Get last character of the string. const_reference back() const { return (empty() ? throw std::out_of_range("cstring::back") : mStr[length()-1]); } //! Get first character of the string. const_reference front() const { return (empty() ? throw std::out_of_range("cstring::front") : mStr[0]); } //! Returns a non-null pointer to a null-terminated character array. inline const_pointer data() const noexcept { assert(mStr != nullptr); return mStr; } //! Returns a non-null pointer to a null-terminated character array. inline const_pointer c_str() const noexcept { return data(); } //! Returns a string_view of content. inline basic_cstring_view view() const { return basic_cstring_view(mStr, length()); } // Const iterator to the begin. const_iterator cbegin() const noexcept { return view().cbegin(); } // Const iterator to the end. const_iterator cend() const noexcept { return view().cend(); } // Const reverse iterator to the begin. const_reverse_iterator crbegin() const noexcept { return view().crbegin(); } // Const reverse iterator to the end. const_reverse_iterator crend() const noexcept { return view().crend(); } //! Exchanges the contents of the string with those of other. void swap(basic_cstring &other) noexcept { std::swap(mStr, other.mStr); } //! Returns the substring [pos, pos+len). basic_cstring_view substr(size_type pos=0, size_type len=npos) const { return view().substr(pos, len); } //! Compare contents. 
int compare(const basic_cstring &other) const noexcept { return view().compare(other.view()); } int compare(size_type pos, size_type len, const basic_cstring &other) const noexcept { return substr(pos, len).compare(other.view()); } int compare(size_type pos1, size_type len1, const basic_cstring &other, size_type pos2, size_type len2=npos) const { return substr(pos1, len1).compare(other.substr(pos2, len2)); } int compare(const_pointer str) const { return view().compare(sanitize(str)); } int compare(size_type pos, size_type len, const_pointer str) const { return substr(pos, len).compare(sanitize(str)); } int compare(size_type pos, size_type len, const_pointer str, size_type len2) const { return substr(pos, len).compare(basic_cstring_view(sanitize(str), len2)); } int compare(const basic_cstring_view other) const noexcept { return view().compare(other); } //! Checks if the string view begins with the given prefix. bool starts_with(const basic_cstring &other) const noexcept { size_type len = other.length(); return (compare(0, len, other) == 0); } bool starts_with(const basic_cstring_view sv) const noexcept { auto len = sv.length(); return (compare(0, len, sv.data()) == 0); } bool starts_with(const_pointer str) const noexcept { return starts_with(basic_cstring_view(sanitize(str))); } //! Checks if the string ends with the given suffix. bool ends_with(const basic_cstring &other) const noexcept { auto len1 = length(); auto len2 = other.length(); return (len1 >= len2 && compare(len1-len2, len2, other) == 0); } bool ends_with(const basic_cstring_view sv) const noexcept { size_type len1 = length(); size_type len2 = sv.length(); return (len1 >= len2 && compare(len1-len2, len2, sv.data()) == 0); } bool ends_with(const_pointer str) const noexcept { return ends_with(basic_cstring_view(sanitize(str))); } //! Find the first ocurrence of a substring. 
auto find(const basic_cstring &other, size_type pos=0) const noexcept{ return view().find(other.view(), pos); } auto find(const_pointer str, size_type pos, size_type len) const { return view().find(sanitize(str), pos, len); } auto find(const_pointer str, size_type pos=0) const { return view().find(sanitize(str), pos); } auto find(value_type c, size_type pos=0) const noexcept { return view().find(c, pos); } //! Find the last occurrence of a substring. auto rfind(const basic_cstring &other, size_type pos=npos) const noexcept{ return view().rfind(other.view(), pos); } auto rfind(const_pointer str, size_type pos, size_type len) const { return view().rfind(sanitize(str), pos, len); } auto rfind(const_pointer str, size_type pos=npos) const { return view().rfind(sanitize(str), pos); } auto rfind(value_type c, size_type pos=npos) const noexcept { return view().rfind(c, pos); } //! Finds the first character equal to one of the given characters. auto find_first_of(const basic_cstring &other, size_type pos=0) const noexcept { return view().find_first_of(other.view(), pos); } auto find_first_of(const_pointer str, size_type pos, size_type len) const { return view().find_first_of(sanitize(str), pos, len); } auto find_first_of(const_pointer str, size_type pos=0) const { return view().find_first_of(sanitize(str), pos); } auto find_first_of(value_type c, size_type pos=0) const noexcept { return view().find_first_of(c, pos); } //! Finds the first character equal to none of the given characters. 
auto find_first_not_of(const basic_cstring &other, size_type pos=0) const noexcept { return view().find_first_not_of(other.view(), pos); } auto find_first_not_of(const_pointer str, size_type pos, size_type len) const { return view().find_first_not_of(sanitize(str), pos, len); } auto find_first_not_of(const_pointer str, size_type pos=0) const { return view().find_first_not_of(sanitize(str), pos); } auto find_first_not_of(value_type c, size_type pos=0) const noexcept { return view().find_first_not_of(c, pos); } //! Finds the last character equal to one of given characters. auto find_last_of(const basic_cstring &other, size_type pos=npos) const noexcept { return view().find_last_of(other.view(), pos); } auto find_last_of(const_pointer str, size_type pos, size_type len) const { return view().find_last_of(sanitize(str), pos, len); } auto find_last_of(const_pointer str, size_type pos=npos) const { return view().find_last_of(sanitize(str), pos); } auto find_last_of(value_type c, size_type pos=npos) const noexcept { return view().find_last_of(c, pos); } //! Finds the last character equal to none of the given characters. auto find_last_not_of(const basic_cstring &other, size_type pos=npos) const noexcept { return view().find_last_not_of(other.view(), pos); } auto find_last_not_of(const_pointer str, size_type pos, size_type len) const { return view().find_last_not_of(sanitize(str), pos, len); } auto find_last_not_of(const_pointer str, size_type pos=npos) const { return view().find_last_not_of(sanitize(str), pos); } auto find_last_not_of(value_type c, size_type pos=npos) const noexcept { return view().find_last_not_of(c, pos); } //! Checks if the string contains the given substring. bool contains(basic_cstring_view sv) const noexcept { return (view().find(sv) != npos); } bool contains(value_type c) const noexcept { return (find(c) != npos); } bool contains(const_pointer str) const noexcept { return (find(str) != npos); } //! Left trim spaces. 
basic_cstring_view ltrim() const { const_pointer ptr = mStr; while (std::isspace(*ptr)) ptr++; return basic_cstring_view(ptr); } //! Right trim spaces. basic_cstring_view rtrim() const { const_pointer ptr = mStr + length() - 1; while (ptr >= mStr && std::isspace(*ptr)) ptr--; ptr++; return basic_cstring_view(mStr, static_cast<size_type>(ptr - mStr)); } //! Trim spaces. basic_cstring_view trim() const { const_pointer ptr1 = mStr; const_pointer ptr2 = mStr + length() - 1; while (std::isspace(*ptr1)) ptr1++; while (ptr2 >= ptr1 && std::isspace(*ptr2)) ptr2--; ptr2++; return basic_cstring_view(ptr1, static_cast<size_type>(ptr2 - ptr1)); } }; // namespace gto //! Static variable declaration template<typename Char, typename Traits, typename Allocator> typename gto::basic_cstring<Char, Traits, Allocator>::allocator_type gto::basic_cstring<Char, Traits, Allocator>::alloc{}; //! Comparison operators (between basic_cstring) template<typename Char, typename Traits, typename Allocator> inline bool operator==(const basic_cstring<Char,Traits,Allocator> &lhs, const basic_cstring<Char,Traits,Allocator> &rhs) noexcept { return (lhs.compare(rhs) == 0); } template<typename Char, typename Traits, typename Allocator> inline bool operator!=(const basic_cstring<Char,Traits,Allocator> &lhs, const basic_cstring<Char,Traits,Allocator> &rhs) noexcept { return (lhs.compare(rhs) != 0); } template<typename Char, typename Traits, typename Allocator> inline bool operator<(const basic_cstring<Char,Traits,Allocator> &lhs, const basic_cstring<Char,Traits,Allocator> &rhs) noexcept { return (lhs.compare(rhs) < 0); } template<typename Char, typename Traits, typename Allocator> inline bool operator<=(const basic_cstring<Char,Traits,Allocator> &lhs, const basic_cstring<Char,Traits,Allocator> &rhs) noexcept { return (lhs.compare(rhs) <= 0); } template<typename Char, typename Traits, typename Allocator> inline bool operator>(const basic_cstring<Char,Traits,Allocator> &lhs, const 
basic_cstring<Char,Traits,Allocator> &rhs) noexcept { return (lhs.compare(rhs) > 0); } template<typename Char, typename Traits, typename Allocator> inline bool operator>=(const basic_cstring<Char,Traits,Allocator> &lhs, const basic_cstring<Char,Traits,Allocator> &rhs) noexcept { return (lhs.compare(rhs) >= 0); } //! Comparison operators (between basic_cstring and Char*) template<typename Char, typename Traits, typename Allocator> inline bool operator==(const basic_cstring<Char,Traits,Allocator> &lhs, const Char *rhs) noexcept { return (lhs.compare(rhs) == 0); } template<typename Char, typename Traits, typename Allocator> inline bool operator!=(const basic_cstring<Char,Traits,Allocator> &lhs, const Char *rhs) noexcept { return (lhs.compare(rhs) != 0); } template<typename Char, typename Traits, typename Allocator> inline bool operator<(const basic_cstring<Char,Traits,Allocator> &lhs, const Char *rhs) noexcept { return (lhs.compare(rhs) < 0); } template<typename Char, typename Traits, typename Allocator> inline bool operator<=(const basic_cstring<Char,Traits,Allocator> &lhs, const Char *rhs) noexcept { return (lhs.compare(rhs) <= 0); } template<typename Char, typename Traits, typename Allocator> inline bool operator>(const basic_cstring<Char,Traits,Allocator> &lhs, const Char *rhs) noexcept { return (lhs.compare(rhs) > 0); } template<typename Char, typename Traits, typename Allocator> inline bool operator>=(const basic_cstring<Char,Traits,Allocator> &lhs, const Char *rhs) noexcept { return (lhs.compare(rhs) >= 0); } //! 
Comparison operators (between Char * and basic_cstring) template<typename Char, typename Traits, typename Allocator> inline bool operator==(const Char *lhs, const basic_cstring<Char,Traits,Allocator> &rhs) noexcept { return (rhs.compare(lhs) == 0); } template<typename Char, typename Traits, typename Allocator> inline bool operator!=(const Char *lhs, const basic_cstring<Char,Traits,Allocator> &rhs) noexcept { return (rhs.compare(lhs) != 0); } template<typename Char, typename Traits, typename Allocator> inline bool operator<(const Char *lhs, const basic_cstring<Char,Traits,Allocator> &rhs) noexcept { return (rhs.compare(lhs) > 0); } template<typename Char, typename Traits, typename Allocator> inline bool operator<=(const Char *lhs, const basic_cstring<Char,Traits,Allocator> &rhs) noexcept { return (rhs.compare(lhs) >= 0); } template<typename Char, typename Traits, typename Allocator> inline bool operator>(const Char *lhs, const basic_cstring<Char,Traits,Allocator> &rhs) noexcept { return (rhs.compare(lhs) < 0); } template<typename Char, typename Traits, typename Allocator> inline bool operator>=(const Char *lhs, const basic_cstring<Char,Traits,Allocator> &rhs) noexcept { return (rhs.compare(lhs) <= 0); } // template incarnations typedef basic_cstring<char> cstring; typedef basic_cstring<wchar_t> wcstring; typedef basic_cstring<char>::basic_cstring_view cstring_view; typedef basic_cstring<wchar_t>::basic_cstring_view wcstring_view; } // namespace gto namespace std { //! Specializes the std::swap algorithm for std::basic_cstring. template<typename Char, typename Traits, typename Allocator> inline void swap(gto::basic_cstring<Char,Traits,Allocator> &lhs, gto::basic_cstring<Char,Traits,Allocator> &rhs) noexcept { lhs.swap(rhs); } //! Performs stream output on basic_cstring. 
template<typename Char, typename Traits, typename Allocator> inline basic_ostream<Char,Traits> & operator<<(std::basic_ostream<Char,Traits> &os, const gto::basic_cstring<Char,Traits,Allocator> &str) { return operator<<(os, str.view()); } //! The template specializations of std::hash for gto::cstring. template<> struct hash<gto::cstring> { std::size_t operator()(const gto::cstring &str) const { return hash<std::string_view>()(str.view()); } }; //! The template specializations of std::hash for gto::wcstring. template<> struct hash<gto::wcstring> { std::size_t operator()(const gto::wcstring &str) const { return hash<std::wstring_view>()(str.view()); } }; } // namespace std

Answer: In no particular order.

NUL-termination The ASCII character with a value of 0 is the NUL character. C strings are thus NUL-terminated strings. I would advise changing the comment 0-ended to NUL-terminated.

Public and Private API The repeated switch between public and private declarations at the top of the class is fairly annoying. If possible, try to put first all public declarations (user API) and then all private ones. Worst comes to worst, an initial private section can be used. The prefix_type and atomic_prefix_type have no reason to be public.

Size and Alignment assumptions The memory layout you use makes a number of assumptions, for example that the alignment of Char is less than or equal to that of prefix_type, and that the size of prefix_type is equal to that of atomic_prefix_type. Those are reasonable assumptions, but they ought to be checked. You can add static_assert(alignof(Char) <= alignof(prefix_type)), etc... to validate (and document) each assumption that is made. I recommend putting those static_assert where the assumptions are used, such as in getPtrToCounter and getPtrToLength. Do not worry about duplicating them. Any time an assumption is used, check the assumption. This allows locally reasoning that all assumptions are checked when reading the code. 
Thread safety Your use of atomics is correct, in fact it's even over the top. By directly using = and ++/-- you are using the Sequentially Consistent memory ordering -- the strongest of all -- which is overkill here. Since you have no synchronization with another piece of memory, you can instead use the Relaxed memory ordering. Strict-aliasing woes. Your definition of mEmpty violates strict-aliasing. In general, you cannot store a value as type A, then read it as type B. An exception is made for char, signed char, unsigned char, and std::byte, but as your class is templated on Char you cannot rely on this -- and indeed it fails when used with wchar_t. Instead, you should be defining a struct with the exact layout that you want: struct EmptyString { atomic_prefix_type r; prefix_type s; value_type z; }; static constexpr EmptyString mEmpty = {}; Weird mix of case style In order to present a STL-like interface, your public interface uses snake_case. Yet, your private interface uses camelCase. The dissonance is annoying for the reader. Pick one, stick to it. Allocation and deallocation The getAllocatedLength function could benefit from a comment explaining what is going on, because that's quite unclear. It may be clever maths, if so I'm missing it. The obvious formula would be: 2 * sizeof(prefix_type) / sizeof(value_type) + sizeof(value_type) * (len + 1). In allocate, you never check that n > len. On 32-bits platforms, with len close to the maximum, the computation in getAllocatedLength will overflow. You should at least assert against that. allocate and deallocate are asymmetric: allocate just allocates, whereas deallocate both decrements the counter and deallocates. It would be better for deallocate just to deallocate, and to have a decrementRefCounter function instead. Documentation Your documentation comments are mostly pointless, either get rid of them, or make them useful. For example, //! Default constructor. is useless. 
I can see perfectly well from the signature that this is the default constructor, thank you very much. At the same time, there's important information that's not conveyed: that the default-constructed string is empty. The same holds true for //! Constructor (and co): they're just paraphrasing the signature without providing any useful information.

Good documentation comments should:

- Clearly indicate the functionality, even if obvious. operator[](size_type pos) returns a reference to the character at index pos, not just any character; empty returns whether the string is empty (not just "tests" it...).
- Clearly indicate any pre-condition. The first //! Constructor requires that the string be NUL-terminated. operator[] requires that pos be within [0, length()] (and not [0, length())).
- Clearly indicate any post-condition. The //! Default Constructor returns an empty string.
- Clearly indicate what happens when a pre-condition is violated: is it undefined behavior? Is an exception thrown?

Examples:

```
//! Constructs an empty string.
basic_cstring() : basic_cstring(nullptr) {}

//! Returns a reference to the character at index `pos`.
//!
//! # Pre-conditions
//!
//! - `pos` must be in the `[0, length()]` range.
//!
//! # Undefined Behavior
//!
//! If `pos` is outside `length()`, Undefined Behavior occurs.
const_reference operator[](size_type pos) const { return mStr[pos]; }
```

Noexcept

Mark as noexcept the functions that cannot throw an exception, such as your default constructor, operator[], etc. Some of your functions are marked, but not all that could be.

If and else

If an if block ends with return, there is no need for an else. This will save you one degree of indentation and make it clearer to the reader. Also, even when an if has a single statement in its block, do use {} around it.

Front and back

Your front and back functions throw an out_of_range exception, which is not the case for std::basic_string. I do prefer throwing, although it may affect performance.
Performance hint: even though it's getting better, inline throw statements tend to bloat the code of the functions they appear in. It is better to manually outline them behind functions that are marked as no-inline, cold, and no-return.

Prefer non-member non-friend functions

I advise you to read Monolith Unstrung, though at the same time I do understand wanting to provide as close to std::string as possible an interface. I do note, however, that in such cases you may want to delegate to std::string_view more often, rather than re-implement the functionality yourself.

Free functions and ADL

Your operator== and friends are declared in the global namespace, instead of being declared in the gto namespace. For ADL to find them, they need to be in the namespace of one of their arguments. (Might be a copy/paste mistake? I see the namespace being closed a second time afterwards.)

Specialization

It is better to specialize std algorithms in the global namespace, rather than open the std namespace. The namespace you are in affects name lookup, and you may accidentally refer to a std entity.

Specialization IS NOT Overloading

The definitions of swap and operator<< are NOT specializations, they're overloads. They should be in the gto namespace instead.

TODO

With regard to your todo list:

- The assumption should be encoded as a static_assert; then you can be sure it either holds, or that the user will get a compile-time error on their weird platform.
- __builtin_assume_aligned(...) may help indeed.
- std::atomic is enough; your use of it is even overkill. Atomic operations have two impacts on code: they are slower than non-atomic operations in general, with a slight exception for pure reads/writes in non-SeqCst mode on x86; and writes imply cache invalidation on other cores.
- Beware that benchmarks lie ;)

Conclusion

A fairly nice read, you did a good job overall!
{ "domain": "codereview.stackexchange", "id": 45135, "tags": "c++, strings, immutability" }
"Adjust pH of the solution to 5.0±0.1 with acetic acid (by potentiometry)"
Question: In a procedure description I'm translating, there's this sentence (I here quote it literally, word for word, as it is in Russian):

Adjust the pH of the solution to 5.0±0.1 with acetic acid (by potentiometry).

The meaning of "(by potentiometry)" is that the personnel should use a potentiometric pH meter during the adjustment process. However, I'm not sure what the custom is for indicating this in English. Maybe "(control using potentiometry)"? Or maybe one should add the word "potentiometrically":

Adjust pH of the solution potentiometrically to 5.0±0.1 with acetic acid.

There must be some commonly used turn of phrase for this.

Answer: I suspect this is very likely a sentence adapted from one of the methods of the Russian State Pharmacopoeia (RSP), which has numerous entries for the adverb «потенциометрически» (Eng. "potentiometrically"). For instance, there is a nearly identical match in the normative section for the preparation of acetate buffer solution, ОФС.1.3.0003.15-1.27 Ацетатный буферный раствор рН 5,0:

К 120,0 мл 6,0 г/л раствора уксусной кислоты ледяной прибавляют 100,0 мл 0,1 М раствора калия гидроксида и 250,0 мл воды, перемешивают. Доводят рН до 5,0 потенциометрически с помощью 6 г/л раствора уксусной кислоты ледяной или 0,1 М раствора калия гидроксида и доводят объём раствора водой до 1000,0 мл.

The RSP underwent a vigorous clean-up since the Soviet era, when it was primarily original; currently it is mostly translated from the European Pharmacopoeia (EP), so I decided to take a look at the corresponding entry in EP 8.0 (now superseded, but one can at least find bits of it on the internet), Section 4.1.3. Buffer solutions:

Acetate buffer solution pH 5.0. 4009100. To 120 mL of a 6 g/L solution of glacial acetic acid R add 100 mL of 0.1 M potassium hydroxide and about 250 mL of water R. Mix. Adjust the pH to 5.0 with a 6 g/L solution of acetic acid R or with 0.1 M potassium hydroxide and dilute to 1000.0 mL with water R.
EP omits the details as to how the pH is controlled, but there are several phrases in the prior text,

Adjust to pH … with …, monitoring the pH potentiometrically

referring to the section 2.2.3. Potentiometric Determination of pH, so I think the phrase that you have proposed,

Adjust pH of the solution potentiometrically to 5.0±0.1 with acetic acid.

is perfectly fine and will be correctly understood as a more literate alternative to something like "the pH was determined with a pH meter."

A couple of similar textbook use cases:

Concise Encyclopedia Chemistry [2, p. 803]: The pH of a solution is determined experimentally with a suitable pH color indicator (see Indicators) or potentiometrically, with a pH meter (see Glass electrode).

Fundamentals of Electrochemistry [3, p. 590]: The concentrations of hydrogen ions (solution pH) and of a number of other inorganic ions … are determined potentiometrically.

References

1. Council of Europe; European Directorate for the Quality of Medicines & Healthcare. European Pharmacopoeia 8.0; European Directorate for the Quality of Medicines & Healthcare, Council of Europe: Strasbourg, 2013.
2. Concise Encyclopedia Chemistry, English language ed., 2nd ed.; Scott, T., Eagleson, M., Eds.; de Gruyter: Berlin; New York, 1994.
3. Bagot︠s︡kiĭ, V. S. Fundamentals of Electrochemistry, 2nd ed.; The Electrochemical Society series; Wiley-Interscience: Hoboken, N.J, 2006.
{ "domain": "chemistry.stackexchange", "id": 13097, "tags": "analytical-chemistry, ph, terminology" }
2D Point subclass of complex builtin
Question: I'm writing a pure-python geometry tool for technical drawings / computational geometry (not a solver, as a solver has to work with constraints). I've already mentioned a previous version on code review, as an example for Metaclass Wrapping Builtin to Enforce Subclass Type. This question is about the Point class and the documentation/style/examples or packaging. I'm using Yapf/isort for formatting, and it would be helpful if I could get further best practices.

Tree

```
$ tree --gitignore
.
├── conf.py
├── geometry
│   ├── __init__.py
│   └── point.py
├── __init__.py
├── LICENSE.MD
├── README.md
├── ROADMAP.md
├── TODO.MD
└── _utils
    └── _type_enforcer.py
```

point.py

```
#!/usr/bin/env python
import math
from math import cos, sin, tan
from typing import ClassVar, Iterable, Union

from .._utils._type_enforcer import TypeEnforcerMetaclass

Point = ClassVar["Point"]


class Point(complex, metaclass=TypeEnforcerMetaclass, enforce_all=True):
    r"""
    :param x: x coordinate of the point.
    :type x: ``int``, ``float``
    :param y: y coordinate, stored as the imaginary component
    :type y: ``int``, ``float``

    ::

    .. highlight:: python

    >>> a = Point(2, 3)
    >>> b = Point(-3, 5)
    >>> a
    Point(2.0, 3.0)
    >>> a + b
    Point(-1.0, 8.0)
    >>> a.midpoint(b)
    Point(-0.5, 4.0)
    >>> a * b
    Point(-6.0, 15.0)
    """

    ORIGIN = 0  # Works because 0 == 0+0j

    def __new__(cls, *args, **kwargs):
        if len(args) == 1:
            if isinstance(args[0], complex):
                return super().__new__(cls, args[0].real, args[0].imag,
                                       *args[1:], **kwargs)
            elif isinstance(args[0], Iterable) and len(args[0]) == 2:
                return super().__new__(cls, args[0][0], args[0][1],
                                       *args[1:], **kwargs)
            else:
                raise ValueError(
                    """Only complex or iterable with length 2 can be used
                    with single-argument form""")
        else:
            return super().__new__(cls, *args, **kwargs)

    def __init__(self, x, y: float = 0):
        pass

    def __repr__(self):
        return f"Point({self.real}, {self.imag})"

    def __complex__(self):
        """Convert self to exact type complex"""
        return complex(self)

    def __getitem__(self, val):
        if val not in range(0, 2):
            raise IndexError("")

    def __mul__(self, other) -> "Point":
        """Element-wise multiplication of two points, unless other is a
        real number or a complex. Falls back on complex behavior.

        :param other: Another point, real number, or complex
        :type other: ``int``, ``float``, ``point``, ``complex``

        ::

        .. highlight:: python

        >>> Point(-3, 6) * Point(2, -3)
        Point(-6.0, -18.0)
        """
        if isinstance(other, Point):
            return Point(self.real * other.real, self.imag * other.imag)
        return super().__mul__(other)

    @property
    def x(self):
        """
        X coordinate of the point. Represented internally as real component

        ::

        .. highlight:: python

        >>> a = Point(3, 5)
        >>> a.x
        3.0
        >>> a.x = 5
        >>> a
        Point(5.0, 5.0)
        """
        return self.real

    @x.setter
    def x(self, val):
        self.real = val

    @property
    def y(self):
        """
        Y coordinate of the point. Represented internally as imag component

        ::

        .. highlight:: python

        >>> a = Point(3, 5)
        >>> a.y
        5.0
        >>> a.y = 2
        >>> a
        Point(5.0, 2.0)
        """
        return self.imag

    @y.setter
    def y(self, val):
        self.imag = val

    def scale(self, other, center=ORIGIN):
        """Scales self by other around center

        :param other: vector to scale self by. Floats are interpreted as
            scaling by (other, other)
        :type other: Point, Iterable, complex, float

        ::

        .. highlight:: python

        >>> Point(2, 1).scale(2)
        Point(4, 2)
        >>> Point(3, 4).scale((3, 5))
        Point(9, 20)
        >>> Point(2, 2).scale(10+10j, center=Point(2, 2))
        Point(2, 2)
        """
        scale = 0
        if isinstance(other, Point):
            scale = other
        elif isinstance(other, Iterable) and len(other) == 2:
            scale = Point(*other)
        elif isinstance(other, complex):
            scale = Point(other)
        elif hasattr(other, "real"):
            scale = Point(other, other)
        else:
            raise ValueError(other, "is not a valid type for Point.scale")
        local = self - center
        local *= scale
        return local + center

    def midpoint(self, other):
        """Midpoint between self and other, in cartesian coordinates.
        Floats are interpreted as Point(other, 0)

        :param other: coordinate to take midpoint between it and self
        :type other: Point, complex or float.

        ::

        .. highlight:: python

        >>> Point(3, 5).midpoint(Point(5, 3))
        Point(4, 4)
        """
        return self * 0.5 + other * 0.5

    def distance(self, other: Point = ORIGIN) -> float:
        """Euclidean distance between self and other
        Floats are interpreted as Point(other, 0)

        :param other: coordinate to take Euclidean distance of
        :returns: Euclidean distance
        :rtype: float

        ::

        .. highlight:: python

        >>> Point(3.0, 4.0).distance()
        5.0
        """
        return abs(self - other)

    def taxicab_distance(self, other: Point = ORIGIN) -> float:
        """Returns taxicab distance between self and other.
        Floats are interpreted as Point(other, 0)

        :param other: coordinate to take taxicab distance of.
        :type other: Point, complex, float
        :returns: `abs(self.x - other.x) + abs(self.y - other.y)`
        :rtype: float

        ::

        .. highlight:: python

        >>> Point(3.0, 4.0).taxicab_distance()
        7.0
        >>> # 3.0 - 0.0 + 4.0 - 0.0 = 7.0
        """
        return abs(self.real - other.real) + abs(self.imag - other.imag)

    def rotate(self, theta: float, center: Point = ORIGIN) -> Point:
        """Rotates point around center by theta radians.
        Equivalent to `Point(self * cmath.exp(theta * 1j * cmath.pi))`

        :param theta: angle of rotation in radians
        :param center: center of rotation. Defaults to ORIGIN
        :returns: new rotated Point
        :rtype: Point

        ::

        .. highlight:: python

        >>> Point(2, 3).rotate(math.pi/2)
        Point(-3.0, -2.0)
        """
        local_coords = self - center
        # Multiplying by z, where abs(z) == 1 is the same as rotation
        local_coords *= Point.__conj_rotation(theta)
        local_coords += center
        return local_coords

    @staticmethod
    def __conj_rotation(theta: float):
        """Returns a complex representing a rotation vector.

        :param theta: theta, in radians of the rotation
        :type theta: float
        :return: representation of rotation
        :rtype: complex

        ::

        .. highlight:: python

        >>> Point._Point__conj_rotation(math.pi)
        (-1+...e-16j)
        """
        return complex(cos(theta), sin(theta))


if __name__ == "__main__":
    import doctest
    doctest.testmod(optionflags=doctest.ELLIPSIS)
```

_type_enforcer.py

```
#!/usr/bin/env python
import logging
from types import MemberDescriptorType, WrapperDescriptorType

default_exclude = {"__new__", "__getattribute__", "__setattribute__"}


class TypeEnforcerMetaclass(type):
    """
    Metaclass that enforces return type.
    Ensures that inherited methods return proper class (that of the
    inheritee), as in the case of builtins.
    When `enforce_all == True`, creates intermediate class which is
    wrapped to the correct type.

    :param enforce_all: Enforces return type of super()
    :type enforce_all: ``bool``
    :param exclude: Ignore special names, such as __X__.
    :type enforce_all: ``bool``

    Example::

    >>> class A(int, metaclass=TypeEnforcerMetaclass, enforce_all=True,
    ...         exclude={"real"}):
    ...     def __repr__(self):
    ...         return f"A({super().real})"
    >>> A(1)
    A(1)
    >>> A(1) + A(4)
    A(5)
    >>> A(3) * A(-3)
    A(-9)
    >>> super(A)
    <super: <class 'A'>, NULL>
    >>> type(A)
    <class '__main__.TypeEnforcerMetaclass'>
    """

    def __new__(meta, name, bases, classdict, enforce_all=False,
                exclude=default_exclude):
        exclude = exclude.union(default_exclude)
        logging.info(f"Creating class {name} in {meta.__name__}")
        # Creates a new abstraction layer (middleman class), so super()
        # returns wrapped class that has all its methods wrapped.
        # ┌────────────┐    ┌────────────┐    ┌────────────┐
        # │   Point    │ ⇦ │   _compl   │ ⇦ │  complex   │
        # └────────────┘    └────────────┘    └────────────┘
        superclass = bases[0]
        if enforce_all:
            logging.debug(f"Parameter enforce_all is turned ON")
            # Somehow, name mangling doesn't show up when printing class.
            # Probably __repr__ isn't being overridden
            inter_name = meta.__name__ + "." + superclass.__name__
            logging.debug(f"Creating intermediate class {inter_name}")
            inter_dict = dict(superclass.__dict__)
            inter_class = super(TypeEnforcerMetaclass, meta).__new__(
                meta, inter_name, bases, inter_dict)
            bases = (inter_class, *bases)  # Neat trick for tuples
        subclass = super(TypeEnforcerMetaclass, meta).__new__(
            meta, name, bases, classdict)
        logging.debug("New Bases", bases)
        logging.debug("sub:", subclass, "meta:", type(subclass))
        # Has the potential to cause errors if not enabled
        # logging.debug("inter:", inter_class, "meta:", type(inter_class))
        logging.debug("Class info: ", subclass, subclass.__name__,
                      subclass.mro(), subclass.__dict__, sep="\n")
        base_to_wrap = subclass.mro()[1]
        type_compare = subclass
        type_to_convert = subclass
        if enforce_all:
            type_compare = inter_class
            base_to_wrap = inter_class.mro()[1]
        for attr, obj in base_to_wrap.__dict__.items():
            if isinstance(obj, MemberDescriptorType):
                logging.debug("Skipping", obj, "Due to MemberDescriptor")
                continue
            # Traverse the mro
            # == testing is exactly what we want for comparing overridden
            # definitions,
            if obj == getattr(type_compare, attr, None) and attr not in exclude:
                # Dont override __new__!
                # Check if the method is inherited from base_to_wrap
                # Iff inherited, wrap the return type
                logging.info("Wrapping", obj, "to return",
                             type_to_convert.__name__)
                setattr(
                    type_compare, attr,
                    TypeEnforcerMetaclass.return_wrapper(
                        superclass, type_to_convert, obj))
        return subclass

    def return_wrapper(cls, convert_cls, func):
        """Wraps class methods and enforces type"""
        if isinstance(func, (str, int, float)):
            return func
        logging.debug("Decorator", cls, func.__name__)

        def convert_if(val):
            if isinstance(val, cls):
                logging.debug("Wrapped:", val.__class__, val)
                return convert_cls(val)
            else:
                logging.debug("Skipped:", val.__class__, val)
                logging.debug("Reason:", cls.__class__, "!=", val.__class__)
                return val

        def wrapper(*args, **kwargs):
            logging.debug("Wrapper", cls, func.__name__)
            return convert_if(func(*args, **kwargs))

        wrapper.__wrapped__ = func
        return wrapper


if __name__ == "__main__":
    import doctest
    logging.setLevel(logging.INFO)
    logging.info("Running doctests")
    doctest.testmod(report=True)
```

Answer:

doctests

Kudos for the doctests, they are very helpful. Doctests are documentation I can believe! They do not bit rot (since failing tests are quickly noticed).

```
doctest.testmod(optionflags=doctest.ELLIPSIS)

Example::

>>> class A(int, metaclass=TypeEnforcerMetaclass, enforce_all=True, exclude={"real"}):
...     def __repr__(self):
...         return f"A({super().real})"
```

I do worry a little about unintended ... matching. The doctests are very nice, but they're no substitute for a test suite. The two tools have different audiences. Doctests are short, non-comprehensive, and must be understood by newbies. A test suite can be long, boring, and explore a tediously large number of odd corner cases.

documentation toolchain

```
class Point(...):
    r"""
    :param x: ...
    :type x: ``int``, ``float``
```

I guess you're using something like Sphinx? The need for an r"raw" docstring seems a little inconvenient. But maybe you really need fancy formatting of types.
Docstrings ideally look good both before and after Sphinx processing. That is, we prefer docstrings that aren't cluttered with formatting details, such as "highlight". Consider re-evaluating your goals and your documentation toolchain. There's more than one standard for writing docstrings, and it seems clear that you're trying to adhere to one. So write it down, preferably accompanied by a URL citation.

I'm a little sad we couldn't get away with using optional type hints to document valid inputs, but maybe we really do need __new__ instead of __init__ to accomplish your desired goals. When such type annotations are used, I try to mention the type just once, so annotation and documentation can't get out of sync with one another.

limited flexibility

```
def __new__(cls, *args, **kwargs):
    if len(args) == 1:
        if isinstance(args[0], complex):
            return super().__new__(cls, args[0].real, args[0].imag,
                                   *args[1:], **kwargs)
```

Ok, we're trying to offer a lot of flexibility to the caller, kind of a DWIM Public API. Maybe that's good. I guess I'm not yet convinced, in part because I haven't seen motivating examples of tests calling into this code. I will just assume it's a Good Thing for now.

I don't understand what's going on with that *args[1:] expression. Didn't we just establish that it must always be empty here? Recommend you elide it. And similarly where it appears again, two lines down.

```
elif isinstance(args[0], Iterable) and len(args[0]) == 2:
    return super().__new__(cls, args[0][0], args[0][1],
                           *args[1:], **kwargs)
```

I don't understand that conjunct; it just doesn't make any sense. First we ask if .__iter__ is present. And then we take a length?!? But we never asked about .__len__. Being a container is a much stronger restriction than being iterable. Consider this example:

```
def odds(limit=11):
    """Generates a small subset of the odd integers."""
    yield from range(1, limit, 2)
```

Maybe you want to assign list(args[0]) to a temp var first?
At that point you know you have a container.

Let's move on to the ..., args[0][0], args[0][1], ... expressions. If I pass in a set of {3, 4}, we will survive the iterable and length tests. But you didn't ask if subscripting is supported; dereferencing [0] and [1] will blow up.

EAFP

It is easier to ask for forgiveness than permission. You've been attempting to go down the LBYL path, and it's not always easy. Embrace try!

defaults

```
def __init__(self, x, y: float = 0):
    pass
```

Sorry, I didn't understand that. Perhaps you intended ...(self, x: float = 0.0, y: float = 0.0)? Also, this kind of highlights the whole Union[int, float] issue, which unfairly (arbitrarily?) rejects Decimal and Fraction. Consider turning everything that you store into a float, in the interest of simplicity. It would be one of your class invariants.

```
@x.setter
def x(self, val):
    self.real = val
```

Back to the invariant thing: maybe assign float(val)? You did a bunch of validating in __new__, which the setter threatens to subvert.

getitem

```
def __getitem__(self, val):
    if val not in range(0, 2):
        raise IndexError("")
```

I don't understand that. Didn't you want to map 0 to .real and 1 to .imag? Also, might as well include val in the diagnostic error message.

LBYL

```
elif isinstance(other, Iterable) and len(other) == 2:
    scale = Point(*other)
```

In scale() the second conjunct isn't safe, because we only looked for .__iter__ and not .__len__. Embrace try!

The high-level design issue here is that you attempt to accept roughly as many cases as numpy, but without writing down clear motivating use cases and without offering green unit tests and coverage measurements. Recommend you scale down your ambitions, and wait for motivating use cases to organically arise.

DRY

```
def midpoint(self, other):
    """Midpoint between self and other, in cartesian coordinates.
    Floats are interpreted as Point(other, 0)

    :param other: coordinate to take midpoint between it and self
    :type other: Point, complex or float.
```
Here the type annotation admits Any type in a way that $ mypy *.py can check, yet the documentation restricts to a smaller set of types, hidden from mypy.

```
def distance(self, other: Point = ORIGIN) -> float:
    """Euclidean distance between self and other
    Floats are interpreted as Point(other, 0)

    :param other: coordinate to take Euclidean distance of
```

Here we have a lovely annotation which is not too ambitious; it specifies exactly one type. But the documentation admits of multiple types, and is not self-consistent (a "float" is not a "coordinate"). Subsequent methods exhibit similar difficulties. Recommend you push your type documentation exclusively into the annotations, so mypy can help you out. I want to be able to believe the documentation that I'm reading.

name mangling

```
def __conj_rotation(theta: float):
```

Recommend you rename to _conj_rotation, as name mangling is seldom what you want. Alternatively, supply automated tests involving inheritance which demonstrate some name-mangling benefit to the app-level developer.

This codebase appears to achieve many of its design objectives. I am skeptical that its ambition to "accept any plausible type!" is compatible with the current target code and the current automated testing code. I would be willing to delegate or accept maintenance tasks on this codebase.
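The EAFP advice above can be made concrete with a hypothetical helper (the name `as_pair` is my own, not from the reviewed code): instead of probing for `__iter__` and `__len__` and then subscripting, simply attempt tuple unpacking and catch the failure.

```python
def as_pair(obj):
    """Coerce obj into an (x, y) float pair, EAFP-style.

    Works for any 2-item iterable -- lists, tuples, generators --
    without pre-checking __iter__, __len__, or __getitem__.
    """
    try:
        # Raises TypeError (not iterable) or ValueError (wrong arity)
        x, y = obj
    except (TypeError, ValueError) as exc:
        raise ValueError(f"{obj!r} is not a valid 2D coordinate") from exc
    return float(x), float(y)

print(as_pair([3, 4]))              # (3.0, 4.0)
print(as_pair(n for n in (1, 2)))   # generators work too: (1.0, 2.0)
```

Note that this version also accepts generators, which the LBYL version rejected, and gives a single clear error path for everything else.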
{ "domain": "codereview.stackexchange", "id": 45027, "tags": "python, python-3.x, reinventing-the-wheel, computational-geometry" }
Python inserts newline by writing to csv
Question: I am trying to scrape http://www.the-numbers.com/movie/budgets/all but when I write the table into a csv file, there is an additional line with the counter index written in between each movie row... how can I get rid of this? I don't understand how that counter line is being written to the csv...

```
import csv,os
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen, URLError
from selenium import webdriver

counter = 0
currentDir = os.getcwd()
filename = currentDir + "\\theNumbersScraper.csv"
pagecount = 1
headers = ['ID', 'Release Date', 'Movie', 'Production Budget',
           'Domestic Gross', 'Worldwide Gross']

with open(filename, 'w', newline='\n', encoding='utf-8') as csvfile:
    #writer = csv.DictWriter(csvfile, fieldnames=dictionary)#write headers
    #writer.writeheader()
    #csvfile = open(filename, 'w', newline='',encoding='utf-8')
    writer = csv.writer(csvfile, delimiter='|')
    writer.writerow(headers)

    #with open(filename, 'a', newline='',encoding='utf-8') as csvfile:
    #writer = csv.DictWriter(csvfile, fieldnames=dictionary)#write headers
    #writer.writeheader()
    #csvfile = open(filename, 'w', newline='',encoding='utf-8')
    #writer = csv.writer(csvfile,delimiter='|')

    while pagecount < 5401:
        # movie-entries go from http://www.the-numbers.com/movie/budgets/all/1
        # to http://www.the-numbers.com/movie/budgets/all/5401
        # so there are 5400 entries
        request = Request("http://www.the-numbers.com/movie/budgets/all/" + str(pagecount))
        request.add_header('User-agent', 'wswp')
        website = urlopen(request).read().strip()
        soup = BeautifulSoup(website, 'lxml')

        """#obsolete
        headertags = soup.find("table").find_next("tr").find_all("th")
        headers = []
        for line in headertags:
            headers.append(line.string)
        headers[0] = 'ID'
        """

        all_tr = soup.find_all("tr")
        for movie in range(1, len(all_tr)):
            row = []
            counter += 1
            row.append(counter)
            td = all_tr[movie].find_all("td")
            for colIndex in range(1, len(td)):
                row.append(td[colIndex].string)
            writer.writerow(row)

        """
        for tr in all_tr:
            row = []
            td = tr.find_all("td")
            for i in range(1, 4):
                row.append(str(counter))
                row.append(td[i].string)
            writer.writerow(row)
            counter += 1
        """

        pagecount += 100

csvfile.close()
```

The part of interest is actually this block only:

```
all_tr = soup.find_all("tr")
for movie in range(1, len(all_tr)):
    row = []
    counter += 1
    row.append(counter)
    td = all_tr[movie].find_all("td")
    for colIndex in range(1, len(td)):
        row.append(td[colIndex].string)
    writer.writerow(row)
```

The output is:

```
ID|Release Date|Movie|Production Budget|Domestic Gross|Worldwide Gross
1|12/18/2009|Avatar|$425,000,000|$760,507,625|$2,783,918,982
2
3|5/20/2011|Pirates of the Caribbean: On Stranger Tides|$410,600,000|$241,063,875|$1,045,663,875
4
5|5/1/2015|Avengers: Age of Ultron|$330,600,000|$459,005,868|$1,408,218,722
6
```

and so on, with additional counter lines between the rows that I don't want. How can I get rid of that, and where does it come from?

Answer: Check the source code of http://www.the-numbers.com/movie/budgets/all/5401. In that table, every second row is an empty <tr> tag.
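Given that diagnosis, the fix is to skip any `<tr>` that yields no `<td>` cells before incrementing the counter and writing a row. The sketch below demonstrates the skipping logic on plain lists standing in for BeautifulSoup rows (so it runs without the live site or bs4); in the real script, the `if not td: continue` guard goes right after `td = all_tr[movie].find_all("td")`.

```python
# Stand-in for soup.find_all("tr"): each entry is the list of <td> strings
# in one row; every second entry is empty, as on the real page.
all_tr = [["12/18/2009", "Avatar", "$425,000,000"], [],
          ["5/20/2011", "Pirates of the Caribbean", "$410,600,000"], []]

rows = []
counter = 0
for td in all_tr:
    if not td:        # an empty <tr> yields no <td> cells -- skip it
        continue
    counter += 1      # only number rows that actually have data
    rows.append([counter] + td)

print(rows)  # two numbered rows; the empty ones produce nothing
```

With the guard in place, the bare counter lines disappear because the counter is only advanced for rows that are actually written.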
{ "domain": "datascience.stackexchange", "id": 2205, "tags": "python, csv" }
Protecting electronics against voltage/current extremes and bad polarity
Question: I have built a robot from a wheelchair that has worked very well thus far. It is now time for me to take the next step. I need to implement a permanent power circuit with proper protection. The lowest level of protection I can think of is a fuse, but I would like to take it a step further (current/voltage/direction/switches/high/low voltages). If someone could give some insight on this project of mine, any info will be greatly appreciated.

Moderator comment: Please see How do we address questions about related subject areas? before answering. This question is close to the boundary, but is on-topic here.

Answer: I think this is a very relevant question for robotics, as you can spend a lot of time fixing your system if something goes wrong in this area. Here are a few things to look out for:

Insulation: Ideally you don't want any high voltages to reach your equipment in the first place. So one of the most important rules we apply to our electronics is: insulate. Try to have as few exposed conductive surfaces as possible. Especially when your system is moving, having something conductive flying around can screw you in the literal sense. Insulation tape, tubing, and the like are the weapons of choice here.

Connectors: Try never to use connectors that could be plugged in the wrong way, or to use the same connectors for different ports. Always check your connector thrice on the running system with a voltmeter before connecting it.

Diodes can be useful to prevent overvoltage or reverse polarity; they are however useless if your power source is very powerful, burns your diode, and gets through anyway. That is what a crowbar circuit will prevent. It's a combination of a fuse and a diode and will fix both overvoltage/overcurrent and reverse current. There are integrated parts available, and I've also successfully used polyfuses for small equipment.

Resistors in series for data lines are also quite useful, as they limit the current flowing through.
Something between 50-100 Ohm should usually do the job.

Fuses are generally a good idea, especially when you use power sources with very high current capabilities (e.g. LiPo).
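The 50-100 Ohm series-resistor suggestion is easy to sanity-check with Ohm's law. The numbers below (a 5 V logic line and a worst-case dead short to ground) are illustrative assumptions of mine, not from the answer:

```python
def worst_case_current(v_supply, r_series):
    """Current through the series resistor if the data line is shorted
    straight to ground: I = V / R (Ohm's law)."""
    return v_supply / r_series

# 5 V logic line shorted through the protection resistor:
for r in (50, 100):
    i = worst_case_current(5.0, r)
    print(f"{r} Ohm -> {i * 1000:.0f} mA")  # 50 Ohm -> 100 mA, 100 Ohm -> 50 mA
```

Either value caps the fault current at a level most driver pins and traces survive, while barely affecting normal signal levels.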
{ "domain": "robotics.stackexchange", "id": 89, "tags": "mobile-robot, wheeled-robot, protection, circuit" }
How to find the actual state vector in Quantum Mechanics?
Question: In most examples I have seen in QM, solutions for the bound states of a system are obtained using suitable boundary and normalization conditions. I have not seen examples where the actual state vector or wave function is calculated, and I do not know if one can. Suppose we label the bound states $\phi_{\alpha}(x)$, where $\alpha$ may be a continuous label. Then,

$$\psi (x,t) = \sum_{\alpha}c_{\alpha}e^{\frac{-iE_{\alpha}t}{\hbar}}\phi_{\alpha}(x)$$

where the summation is replaced by integration if $\alpha$ is a continuous label. We know that

$$|\Psi(t)\rangle=\int \psi(x,t) |x\rangle dx$$

I have some questions:

1. How do you find $c_{\alpha}$? Do we need any extra conditions to calculate it?
2. How do you find $|x\rangle$, the eigenvectors of the position operator?
3. How does one carry out the integration and find the state vector?

Answer:

How do you find $c_{\alpha}$? Do we need any extra conditions to calculate it?

This is like asking "how do I find $\mathbf r(t)$ in newtonian mechanics?", to which the only possible answer is "... for what situation? for which force, and what initial conditions?"
In the case of the dynamics of a quantum particle subject to a time-independent potential $V(x)$, for which you've previously solved the time-independent Schrödinger equation

$$\left[ - \frac{\hbar^2}{2m}\frac{\mathrm d^2}{\mathrm dx^2} + V(x)\right] \phi_\alpha(x) = E_\alpha \phi_\alpha(x)$$

for the eigenstates $\phi_\alpha$, then given an initial condition $\psi_0(x)$ for the system, the expansion you've written,

$$\psi (x,t) = \sum_{\alpha}c_{\alpha}e^{\frac{-iE_{\alpha}t}{\hbar}}\phi_{\alpha}(x),$$

gives the state's evolution starting at $\psi (x,0) = \psi_0(x) = \sum_{\alpha}c_{\alpha}\phi_{\alpha}(x)$ in terms of its basis coefficients in the energy eigenbasis $\phi_\alpha(x)$, which are themselves given by the inner products of the initial condition and the basis functions,

$$c_\alpha = \langle \phi_\alpha | \psi_0 \rangle = \int \phi_\alpha(x)^* \psi_0(x) \mathrm dx.$$

If you don't know the specific initial condition you want to use, then all you've obtained is a general solution that's ready to work when you do, but you cannot use it to say anything concrete about the solution.

How do you find $|x\rangle$ the eigenvectors of the position operator?

You don't "find" them, they're in-built objects of the vector space you started with.

How does one carry out the integration and find the state vector?

You don't. It's a symbolic integration, much like the completely equivalent relation

$$|\Psi(t)\rangle = \sum_{\alpha}c_{\alpha}e^{\frac{-iE_{\alpha}t}{\hbar}} |\phi_{\alpha}\rangle.$$
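For a concrete (hypothetical) case, the coefficient formula $c_\alpha = \int \phi_\alpha(x)^*\psi_0(x)\,\mathrm dx$ can be evaluated numerically. The sketch below picks a particle in a box of width $L$ (whose eigenstates $\phi_n(x)=\sqrt{2/L}\sin(n\pi x/L)$ are standard), expands a Gaussian initial condition of my choosing, and checks that $\sum_\alpha |c_\alpha|^2 \approx 1$:

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

def phi(n):
    # Box eigenstates: phi_n(x) = sqrt(2/L) sin(n pi x / L)
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# Hypothetical initial condition: a narrow Gaussian centred in the box
psi0 = np.exp(-((x - L / 2) ** 2) / (2 * 0.05 ** 2))
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)   # normalize numerically

# c_alpha = <phi_alpha | psi_0>, approximated by a Riemann sum
c = np.array([np.sum(phi(n) * psi0) * dx for n in range(1, 60)])

total = np.sum(np.abs(c) ** 2)
print(total)   # close to 1: the first modes already capture psi_0
```

This is exactly the sense in which the expansion "needs extra conditions": once the initial condition $\psi_0$ is fixed, the coefficients follow by quadrature.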
{ "domain": "physics.stackexchange", "id": 49626, "tags": "quantum-mechanics, hilbert-space, wavefunction, schroedinger-equation" }
Curvilinear Coordinates and basis vectors
Question: In these notes, $\frac{\partial \vec{r}} {\partial q_i}$ is stated to form a basis set for the vector space. How does this happen? Also, how does one justify this equation from Goldstein's Classical Mechanics using the above method? Answer: Consider a system made of $N$ points of matter, with positions $\vec{r}_i$, $i=1,\ldots,N$ referred to the rest space of a reference frame $\cal I$. In the absence of further constraints, the system is described in $\mathbb R^{3N+1}$, where $\mathbb R^{3N}$ refers to the spatial Cartesian coordinates of the points in $\cal I$, whereas the last $\mathbb R$ indicates the axis of time $t$. Next, suppose that the points are assumed to satisfy some constraints described by $c < 3N$ conditions, $$f_j(t,\vec{r}_1,\ldots, \vec{r}_N) =0\quad j=1,\ldots, c\:.\tag{1}$$ For instance, the conditions above may state that the distance between $\vec{r}_i$ and $\vec{r}_j$ is a given function of time, or that some of the points belong to lines or surfaces fixed in $\cal I$, or deforming in time with a given law (a circumference with radius $R(t)$ depending on time), and so on. Assume that the functions $f_j$ are smooth ($C^2$ would be enough) and focus on the Jacobian matrix of elements $$\frac{\partial f_j}{\partial x_{ik}}$$ where $\vec{r}_i = x_{i1}\vec{e}_1+ x_{i2}\vec{e}_2+ x_{i3}\vec{e}_3$. If that matrix has $c$ linearly independent rows (or columns) on the set $S \subset \mathbb R^{3N+1}$ defined by (1), the constraints are said to be holonomic. In this case, as a straightforward consequence of the so-called theorem of regular values, it is possible to prove that every $a\in S$ admits a neighbourhood $U_a$ such that $S \cap U_a$ is biunivocally and smoothly described by local coordinates $t, q_1,\ldots, q_n$ with $n= 3N-c$, where $t$ is the initially used time coordinate. In other, more mathematical, words: $S$ is an embedded submanifold of $\mathbb R^{3N+1}$ and $t, q_1,\ldots, q_n$ are a local coordinate system.
For every fixed $t_0$, the elements $a$ of $S$ with $t(a)=t_0$ define the configuration space of the system at $t=t_0$. That is an embedded submanifold of $S$ (and thus of $\mathbb R^{3N+1}$) with dimension $n$. REMARK. It is possible to prove (using again the mentioned theorem) that the $n$ coordinates $q_k$ can always be chosen to coincide with $n$ of the components $x_{ij}$. The remaining coordinates are functions of $t$ and the $q_k$ through functions of the same regularity ($C^2$ in our case) as that of the functions $f_j$. Since $t,q_1,\ldots, q_n$ are free coordinates to describe the system, we can write down $N$ vector valued $C^2$ functions: $${\vec r}_i= {\vec r}_i(t,q_1,\ldots,q_n)\quad i=1,\ldots, N \tag{2}$$ It is not so difficult to prove that, in view of the above remark, the vectors $$\frac{\partial \vec{r}_i}{\partial q_k}$$ must be linearly independent. They form a basis of the tangent space at each point of the submanifold $S_t$. The coordinates $q_1,\ldots, q_n$ are the ones used to describe the motion of the system. Each motion is defined by a curve $\mathbb R \ni t \mapsto (q_1(t),\ldots, q_n(t))$. Motion in physical space is then obtained just by exploiting (2), $$\mathbb R \ni t \mapsto \vec{r}_i(t, q_1(t),\ldots, q_n(t))\:,\quad i=1, \ldots, N \tag{3}\:.$$ Looking at (3), it should be obvious that the velocity of the point determined by $\vec{r}_i$ with respect to $\cal I$ is given by $$\vec{v}_i(t) = \frac{d\vec{r}_i}{dt} = \frac{\partial \vec{r}_i}{\partial t} +\sum_{k=1}^n \frac{\partial \vec{r}_i}{\partial q_k} \dot{q}_k\quad\mbox{with}\quad \dot{q}_k := \frac{dq_k}{dt}\:.$$
{ "domain": "physics.stackexchange", "id": 14561, "tags": "homework-and-exercises, classical-mechanics, lagrangian-formalism, vectors, coordinate-systems" }
Hypovolemia and orthostatic hypertension
Question: What is the physiological mechanism behind the occurrence of orthostatic hypertension in the presence of hypovolemia? Answer: The pathophysiology of orthostatic hypertension has not been elucidated. It is believed to involve activation of the sympathetic nervous system [1], vascular adrenergic hypersensitivity and diabetic neuropathy [2]. High levels of plasma atrial natriuretic peptide and antidiuretic hormone were observed in children [3]. Hypovolemia causes: baroreflex-mediated increase in muscle sympathetic nerve activity [4] release of epinephrine and norepinephrine [5] activation of the renin-angiotensin axis [5], thus increasing ADH levels All these reactions result in vasoconstriction and a rise in blood pressure, but not up to absolute hypertension. Orthostatic hypertension is diagnosed by a rise in systolic blood pressure of 20 mmHg or more when standing [6]. This is possible: a rise of 20 mmHg from hypotension could result from vasoconstriction. References: Fessel J, Robertson D. Orthostatic hypertension: when pressor reflexes overcompensate. Nat Clin Pract Nephrol. 2006 Aug;2(8):424-31. doi: 10.1038/ncpneph0228. PubMed PMID: 16932477. Chhabra L, Spodick DH. Orthostatic hypertension: recognizing an underappreciated clinical condition. Indian Heart J. 2013 Jul 5;65(4):454-6. doi: 10.1016/j.ihj.2013.06.023. PubMed PMID: 23993009. Zhao J, Yang J, Du S, Tang C, Du J, Jin H. Changes of atrial natriuretic peptide and antidiuretic hormone in children with postural tachycardia syndrome and orthostatic hypertension: a case control study. Chin. Med. J. 2014 May;127(10):1853-7. PubMed PMID: 24824244. Ryan KL, Rickards CA, Hinojosa-Laborde C, Cooke WH, Convertino VA. Sympathetic responses to central hypovolemia: new insights from microneurographic recordings. Front Physiol. 2012 Apr 26;3:110. doi: 10.3389/fphys.2012.00110. PubMed PMID: 22557974.
Wikipedia contributors, "Shock (circulatory)," Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Shock_(circulatory)&oldid=612494727 (accessed June 26, 2014). Wikipedia contributors, "Orthostatic hypertension," Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Orthostatic_hypertension&oldid=603984647 (accessed June 26, 2014).
{ "domain": "biology.stackexchange", "id": 2419, "tags": "physiology" }
Is the surface of a Perfect Electric Conductor Equipotential in any Condition?
Question: It is known that the surface of a PEC (Perfect Electric Conductor) is equipotential. This is true, in theory, in any situation (equilibrium or not), since the conductor is perfect and so the electric field on its surface is always orthogonal to it, and this means that the electric potential is the same on its surface. Now let's consider for instance a transmission line of length L, which starts from position z = 0 and arrives at z = L. This line is connected to an AC voltage source. Now let's consider one of the two conductors: obviously the voltage between them is a function of time (since the source is AC), but what I want to focus on is the fact that it is a function of the position z on the conductor (precisely, it has a waveform behaviour). This dependence on position seems to be in contrast with the fact that the potential on a PEC is constant on its surface. Answer: An equipotential surface is always perpendicular to the electric field line. The surface of a (perfect) charged conductor works as an equipotential surface and it satisfies the above condition, i.e., the electric field lines are perpendicular to the surface of the charged conductor. BUT, once you apply a voltage or potential difference across the conductor (the AC source in your question), this no longer applies, because now the electric field lines are affected by the electric field due to the voltage source and they are no longer perpendicular to the surface of the conductor. In summary, the surface of the conductor is NO LONGER an equipotential surface due to the applied voltage across the conductor. For more information, follow this link https://openpress.usask.ca/physics155/chapter/3-5-equipotential-surfaces-and-conductors/
{ "domain": "physics.stackexchange", "id": 62830, "tags": "electromagnetism, electric-fields, potential, voltage, conductors" }
What makes sand behave like a liquid when it is poured?
Question: I couldn't find a satisfying physics explanation of the exact mechanism responsible for making sand behave like a liquid when poured, even though its particles are solids. Answer: In soil mechanics, the maximum shear stress, known as shear strength, the soil can endure without sliding is typically modeled by $$ \tau = \sigma \tan \phi \ + \ c $$ where $\tau$ is the shear strength $\sigma$ is the normal stress† $\phi$ is the shear friction angle, defined such that $\tan \phi$ is analogous to the coefficient of friction $c$ is called cohesion, a.k.a. stickiness The ideal sand is characterized as having $0$ cohesion, and thus has $0$ shear strength when there is no normal stress acting on it. When you pour sand, it is "depressurized" from its own weight and can be considered to have $0$ normal stress, which implies it has $0$ shear strength. Now, the basic distinction between fluids and solids is exactly that fluids can't sustain shear stress under constant strain, which is the same phenomenon you see when pouring sand. As an aside, when you pour sand, such as in an hourglass, the angle of the sand pile is exactly $\phi$, which is why it's defined as such. † For dry soils only.
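A tiny numeric illustration of the Mohr-Coulomb model above (the function name is mine, and the 30-degree friction angle is an assumed, typical value for dry sand):

```python
import math

def shear_strength(sigma, phi_deg, c=0.0):
    """Mohr-Coulomb shear strength: tau = sigma * tan(phi) + c."""
    return sigma * math.tan(math.radians(phi_deg)) + c

# Dry sand: cohesion c = 0. Freshly poured grains carry ~0 normal stress,
# so the shear strength vanishes and the material flows like a fluid:
print(shear_strength(0.0, 30.0))    # 0.0
# Under 100 kPa of normal stress the same sand resists shear like a solid:
print(shear_strength(100.0, 30.0))  # ~57.7 (kPa)
```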
{ "domain": "physics.stackexchange", "id": 79934, "tags": "fluid-dynamics, flow" }
Action of conc H2SO4 on pinacol
Question: Please suggest the correct answer along with a suitable mechanism. Answer: I would also say A (pinacol-pinacolone rearrangement). First, H+ can create a carbocation at either of the two carbons, and then a methyl shift (CH3-) from the other carbon can occur, leaving the positive charge on the carbon attached directly to the OH. The OH can then share its lone pair and form a C=O double bond. So there is a ketone on the left and a quaternary (4°) carbon on the right. What's the answer given? And please also post the effort you made on the question in the problem description :))
{ "domain": "chemistry.stackexchange", "id": 8463, "tags": "organic-chemistry, acid-base, alcohols" }
ID Sunbird in Oman - White, Dark Grey, Yellow
Question: Some extensive web searching has not helped me find this bird I discovered in my travels. They were flitting too fast to get a decent picture, but here is the best description I can obtain from memory: Dark grey from their head to their back. Their wings looked black but, as I said, they were flying about. Throat and breast were pale, probably white. Belly and vent were bright yellow. Their beak was a similar shape to the Purple Sunbird's. They were taking nectar from bushes bordering a house, which had small white flowers. Answer: There are 4 species of sunbirds in Oman. https://en.wikipedia.org/wiki/List_of_birds_of_Oman Based on your description, and on how common the different species are, I think they were most likely mainly Purple Sunbird (Cinnyris asiaticus) https://en.wikipedia.org/wiki/Purple_sunbird At the moment, in fact, they are not purple at all: they are moulting their feathers, and the males have extensive yellow on the breast and belly. Among Oman's sunbird species it is the only one that shows black wings (in males) contrasting with the brownish/grey upperparts, precisely as you observed. MOULTING MALE FEMALE (left)
{ "domain": "biology.stackexchange", "id": 7964, "tags": "species-identification, ornithology" }
Extract incomplete weeks from a DateTime collection
Question: The code below attempts to extract incomplete weeks from a List<DateTime>. For example, a list containing all the days in Jan 2015 would result in the 5th to the 25th inclusive. I know the list going in will contain unique dates, and will be in date order. It seems to be working but I can't help but think this could be done better. private IEnumerable<DateTime> extractIncompleteWeeks(IEnumerable<DateTime> dates) { var mondays = dates .Where((d, i) => d.DayOfWeek == DayOfWeek.Monday) .Select(d => d.Date); var results = new List<DateTime>(); foreach (var monday in mondays) { var fullWeek = new HashSet<DateTime>(dates.SkipWhile(d => d < monday).Take(7).Select(d => d.Date)); if (fullWeek.Last().Date == monday.AddDays(6).Date) results.AddRange(fullWeek); } return results; } Answer: Naming: Names of methods in C# use PascalCasing. Your method name would become ExtractIncompleteWeeks. Although the naming convention is now correct, the name is ambiguous. Extract incomplete weeks means you want to fetch only the dates of the weeks which are incomplete. Instead rename it to ChopIncompleteWeeks or ExtractCompleteWeeks. This makes clear that you want to eliminate the incomplete weeks or want to extract the complete weeks. Return type: The return type of the method is an IEnumerable<DateTime> yet you return a List<DateTime>. Either make the return type List<DateTime> as well, or preferably return one element at a time by using yield. private IEnumerable<DateTime> ExtractCompleteWeeks(IEnumerable<DateTime> dates) { ... foreach (var monday in mondays) { ...
if (fullWeek.Last().Date == monday.AddDays(6).Date) { foreach(var day in fullWeek) { yield return day; } } } } Here are some questions from StackOverflow and Programmers.SE regarding yield: Proper Use of yield return A practical use of "yield" keyword in C# I really like this answer from the first question to get a very basic understanding of using yield instead of a temporary list: Populating a temporary list is like downloading the whole video, whereas using yield is like streaming that video. HashSet: Is there a specific reason why you create a HashSet<DateTime> only to take the last element and compare it against monday.AddDays(6).Date? You can replace that line with the following: var fullWeek = dates.SkipWhile(d => d < monday).Take(7).Select(d => d.Date); Complete code: private IEnumerable<DateTime> ExtractCompleteWeeks(IEnumerable<DateTime> dates) { var mondays = dates.Where(d => d.DayOfWeek == DayOfWeek.Monday) .Select(d => d.Date); foreach (var monday in mondays) { var fullWeek = dates.SkipWhile(d => d < monday).Take(7).Select(d => d.Date); if (fullWeek.Last().Date == monday.AddDays(6).Date) { foreach(var day in fullWeek) { yield return day; } } } }
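The complete-week logic is easy to sanity-check outside C#. Below is a hypothetical Python sketch of the same idea (illustrative only, not the reviewed code): keep only Monday-to-Sunday runs that are fully present in the input, reproducing the January 2015 example from the question.

```python
from datetime import date, timedelta

def complete_weeks(dates):
    """Yield only the dates belonging to full Monday-to-Sunday weeks.

    Assumes `dates` is sorted and duplicate-free, as in the question.
    """
    have = set(dates)
    for d in dates:
        if d.weekday() == 0:  # Monday
            week = [d + timedelta(days=i) for i in range(7)]
            if all(day in have for day in week):
                yield from week

# All 31 days of January 2015 (Jan 1 was a Thursday).
jan = [date(2015, 1, 1) + timedelta(days=i) for i in range(31)]
kept = list(complete_weeks(jan))
print(kept[0], kept[-1])  # 2015-01-05 2015-01-25
```

The week starting Monday Jan 26 needs Feb 1 to be complete, so it is dropped, leaving exactly the 5th through the 25th.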
{ "domain": "codereview.stackexchange", "id": 12831, "tags": "c#, datetime, linq, ienumerable" }
Design a Tips class that calculates the gratuity on a restaurant meal
Question: Design a Tips class that calculates the gratuity on a restaurant meal. Its only class member variable, taxRate, should be set by a one-parameter constructor to whatever rate is passed to it when a Tips object is created. If no argument is passed, a default tax rate of .065 should be used. The class should have just one public function, computeTip. This function needs to accept two arguments, the total bill amount and the tip rate. It should use this information to compute what the cost of the meal was before the tax was added. It should then apply the tip rate to just the meal cost portion of the bill to compute and return the tip amount. Demonstrate the class by creating a program that creates a single Tips object, and then allows the program user to retrieve the correct tip amount using various bill totals and desired tip rates. What can I do better? #include<iostream> #include <string> using namespace std; class Tips { private : double taxRate; public: Tips() { taxRate = .65; cout << "Tax rate is "<<taxRate<<endl; } Tips(double tax) { taxRate = tax; cout << "Tax rate is"<<taxRate<<endl; } double computeTip(double billAmount,double tipRate) { double mealCost = billAmount - billAmount*(taxRate); cout << "Meal cost is with no tax "<<mealCost<<endl; return mealCost+tipRate; } }; int main() { Tips tip ; cout << "Meal cost + tip rate is "<<tip.computeTip(100,20)<<endl; return 0; } Answer: Inconsistent layout: #include<iostream> // No space #include <string> // Space Be consistent in your code. I prefer the version with the space. Using namespace std Arrrrr. Don't do that. See: Why is "using namespace std;" considered bad practice?. Definitely never do it in a header file. You are polluting the namespace for anybody that includes your code (which can change behavior) and can break other people's code. Don't do it in source files because of the reasons you read in the linked article.
Default argument Tips() { taxRate = .65; cout << "Tax rate is "<<taxRate<<endl; } Tips(double tax) { taxRate = tax; cout << "Tax rate is"<<taxRate<<endl; } When you see code like this you can usually replace it with a single constructor that has a parameter with a default argument (as they are basically doing the same thing). Tips(double tax = 0.065) // Corrected your constant. { taxRate = tax; cout << "Tax rate is"<<taxRate<<endl; } Be consistent with spacing (white space is your friend) cout << "Tax rate is"<<taxRate<<endl; /// ^^^^^^ ^^^^^ Looking very tight here. cout << "Meal cost + tip rate is "<<tip.computeTip(100,20)<<endl; //// ^^^^^ ^^^^ Prefer '\n' over std::endl std::endl forces a flush of the stream. This is never what you actually want. The automated flushing of the stream will work much better than manual flushing. Prefer to use the initializer list. Tips(double tax = 0.065) // Corrected your constant. : taxRate(tax) { cout << "Tax rate is"<< taxRate << endl; } Arithmetic issue If the billAmount is mealCost * (1 + taxRate) then this is not the formula to calculate meal cost. double mealCost = billAmount - billAmount*(taxRate);
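To make the arithmetic critique concrete, here is a quick numeric check (a Python sketch, not the reviewed C++; the function name and the 6.5% rate are illustrative). If the bill already includes tax, i.e. bill = meal * (1 + taxRate), then the meal cost is bill / (1 + taxRate), and the tip should be a percentage of that meal cost rather than a rate added to it:

```python
def compute_tip(bill, tip_rate, tax_rate=0.065):
    meal = bill / (1.0 + tax_rate)  # not bill - bill * tax_rate
    return meal * tip_rate          # tip applies to the meal cost

# A $106.50 bill at 6.5% tax means the meal itself cost $100.00,
# so a 20% tip should come to $20.00:
print(round(compute_tip(106.50, 0.20), 2))  # 20.0
```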
{ "domain": "codereview.stackexchange", "id": 10693, "tags": "c++, finance" }
Print a command line argument 100 times
Question: I am learning Java and this is my solution for a program that will read in a name from the command line and write it out 100 times; furthermore, words are never split up, assuming that the console is 80 characters wide. Could someone please help me improve its quality? /** * Program to print command line argument 100 times * Words are never split up. Assuming that the console is 80 characters wide. * **/ class Hundred { private static final int LINE_WIDTH = 80; public static void main(String[] args) { if(args.length == 0) { System.out.println("Argument missing"); } else { String name = args[0]; int wordsOnOneLine = LINE_WIDTH / (name.length() + 1); int i = 0; while(i < 100) { for(int j = 0; j < wordsOnOneLine && i < 100; j++) { i++; System.out.print(name); System.out.print(" "); } System.out.println(); } } } } Answer: General Remarks Well done. While there are some points of improvement, you created a functional program, with proper naming of arguments, and even some basic error handling. Asking for feedback is the best way to learn, so here you get some :). Keep up the good work. Repeating/LineWidth logic The repeating of the value 100 is suspicious; there probably is a better solution. while(i < 100) { for(int j = 0; j < wordsOnOneLine && i < 100; j++) { If you see this, think more and try to approach the problem differently. The main idea (that vp_arth already posted) is to keep track of the available space in the line; there is no need to calculate the wordsOnOneLine. Also, you can count down the number of words you still have to output. You will end up with two simple counters, repetitions and availableSpace. See below for more detail. Cleanness of output While not really enforced, you should not add a space at the end of the line if not needed. Trailing spaces are a general pain in the ass, because you can't see them and they might give adverse effects.
Validating arguments And you need to take care that your user does not supply an argument that will break your program (what if the user enters a 1000-character String?), so you need to check for that too. Java conventions You already follow most of the Java conventions (casing, bracing, etc). But you are missing a package for your class. While not needed, it is good practice. Exit codes While not THAT interesting, it is good practice to let your program end with an error code (integer greater than zero) if it could not run successfully. Proposed solution package hundred; /** * Program to print command line argument 100 times Words are never split up. Assuming that the console is 80 characters * wide. **/ public class Hundred { private static final int LINE_WIDTH = 80; public static void main( String[] args ) { if ( args.length == 0 ) { System.err.println( "Argument missing" ); System.exit( 1 ); //error status } else { String word = args[0]; int availableWidth = LINE_WIDTH; if ( word.length() > availableWidth ) { System.err.println( "Argument too long for the line width" ); System.exit( 1 ); //error status } int repetitions = 100; while ( repetitions > 0 ) { if ( availableWidth < word.length() ) { System.out.println(); availableWidth = LINE_WIDTH; } System.out.print( word ); availableWidth -= word.length(); //We might need a space, but only if the word + space still fits if ( availableWidth > word.length() + 1 ) { System.out.print( " " ); availableWidth--; } repetitions--; } } } }
{ "domain": "codereview.stackexchange", "id": 24206, "tags": "java, beginner" }
What made Einstein think that gravity was caused by the curvature of spacetime?
Question: What observation/thought experiment led him to think this? Answer: To be exact, Einstein claimed that it is gravity that curves space-time. You can follow his reasoning in his "Relativity: The Special and General Theory." Einstein started off by comparing acceleration caused by gravity to acceleration in a lift (assuming it moves with accelerated motion) going up. He claimed that these two accelerations are indistinguishable from each other - see chapter 20. Later in chapter 22 he said: ... we learn that a body which is in a state of uniform rectilinear motion with respect to K (in accordance with the law of Galilei) is executing an accelerated and in general curvilinear motion with respect to the accelerated reference-body K' (chest). This acceleration or curvature corresponds to the influence on the moving body of the gravitational field prevailing relatively to K'. It is known that a gravitational field influences the movement of bodies in this way, so that our consideration supplies us with nothing essentially new. After that he concluded that light as seen from such an accelerated lift must also be curvilinear as compared to an outside inertial frame. Einstein then proceeds with his line of reasoning that finally leads to the conclusion that space must be curved (he describes some other thought experiments, such as a spinning disk with clocks located in its center and on its edge, and also introduces Gaussian coordinates to prove his point). I think this book is quite easily digestible for almost anybody and worth the time.
{ "domain": "physics.stackexchange", "id": 16868, "tags": "general-relativity, gravity, history, curvature, equivalence-principle" }
How to calculate the oxidation state and the number of equivalents oxidized with YBa₂Cu₃O₇
Question: I've hit some trouble with these calculations and need help sorting out confusion. In the lab we synthesized $\ce{YBa2Cu3O7}$; per the lab manual $\ce{Y, Ba, O}$ have the usual charges of +3, +2, and -2 respectively. Spectroscopic studies show that no copper(III) centers are present in the material, but rather there are missing electrons from the copper-oxygen bonds. For the purpose of the titration, though, the missing electrons are thought of as coming from the copper center and we're assuming there are copper(III) centers present. We used iodine to titrate, and there is the assumption that copper is doing the oxidizing. The text states, "Copper(I) is not an oxidizing equivalent: It cannot oxidize iodide to iodine." The relevant equations are $$\begin{align} \ce{Cu^{3+} + 2I- &-> Cu+ +I2}\\ \ce{Cu^{2+} +I- &-> Cu+ + \frac{1}{2}I2} \end{align}$$ I've calculated the theoretical average oxidation state from charge balance: $$(+3) + 2(+2) + 7(-2) = -7 \implies \text{average Cu oxidation state} = +\tfrac{7}{3} \approx +2.33$$ We have to calculate the theoretical number of equivalents of $\ce{I-}$ oxidized by 1 g of the sample. Would this be correct for the oxidation state? If so, would that mean that the total equivalents of iodine are a sum of the $\ce{Cu(II)}$ and $\ce{Cu(III)}$ contributions (the two equations)? Also I'm feeling brain dead, because I can't figure out the fractions of $\ce{Cu(II)}$ and $\ce{Cu(III)}$ that would average to +7/3 ≈ +2.33. Answer: You forgot one relevant equation: $\ce{Cu+ + I- -> CuI}$ which uses up some iodide as copper precipitates with it. Getting the fractions is easy. You know that they must add to 3 and that the charge balance must apply.
I marked the fractions as $x$ for copper(II) and $y$ for copper(III) and set up this system of equations, which is easy to solve: $$\begin{align} x+y&=3\\ 2x+3y&=3\times\text{average oxidation state} \end{align}$$ You multiply each fraction by the charge of the species, and the average oxidation state by three, because in one mole of the compound you have three moles of copper, so that the equation represents a charge balance.
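As a sanity check of that system, a small script (illustrative, using exact rational arithmetic) recovers the well-known split of two Cu(II) and one Cu(III) per formula unit:

```python
from fractions import Fraction

# Charge balance in YBa2Cu3O7: Y(+3) + 2 Ba(+2) + 7 O(-2) leaves -7,
# so the three copper atoms must carry +7 in total.
cu_total = -(3 + 2 * 2 + 7 * (-2))   # = 7
avg = Fraction(cu_total, 3)          # average Cu oxidation state = +7/3

# x + y = 3 (atoms) and 2x + 3y = 7 (charge); substituting x = 3 - y
# gives 6 + y = 7:
y = cu_total - 2 * 3                 # Cu(III) per formula unit
x = 3 - y                            # Cu(II) per formula unit
print(x, y, avg)  # 2 1 7/3
```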
{ "domain": "chemistry.stackexchange", "id": 10596, "tags": "inorganic-chemistry, analytical-chemistry, titration" }
Yaw angle calculation for a two-wheeled inverted pendulum
Question: [Sorry, if this is a basic question. I'm a software engineer, learning mechanical engineering concepts by trying to build a self-balancing robot] I'm unable to understand the yaw angle equation: $\phi = (\frac RW) * (\theta_r - \theta_l)$ used in the NXT two-wheeled self-balancing bot. R - radius of the wheel. W - Distance between the centres of the wheels. $\theta_r$, $\theta_l$ - Rotation angle of the right and left wheels respectively. $\phi$ - yaw angle. Answer: The yaw angle is the rotation around a vertical axis. It is easiest to understand if $\theta_l = 0$. In that case, the l wheel stays fixed and the r wheel travels in a circle around it. The distance the r wheel travels is $R*\theta_r$. The robot rotates through an angle $\phi = R/W*\theta_r$. If $\theta_r > \theta_l > 0$, the robot drives around in a circle like a car turning. The r side travels farther. The center is on the l side of the robot. A line from wheel to wheel stays pointed at the center of the circle. Each wheel drives around the same angle, $\phi = distance/radius$. $\phi = R/R_r * \theta_r = R/R_l * \theta_l$. For small angles, you can also calculate $\phi$ from the difference between the wheels' travel. $\phi = \Delta distance/\Delta radius = [(R*\theta_r) - (R*\theta_l)] / W = R/W *(\theta_r - \theta_l)$
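A tiny numerical check of the formula (the wheel radius and track width below are made-up values, not from the NXT):

```python
def yaw(theta_r, theta_l, R, W):
    """Yaw angle in radians: phi = (R / W) * (theta_r - theta_l)."""
    return (R / W) * (theta_r - theta_l)

# Hypothetical robot: wheel radius 3 cm, distance between wheels 12 cm.
# Left wheel held still, right wheel turned 2 rad: the right wheel rolls
# R*theta_r = 0.06 m around a circle of radius W = 0.12 m, so the robot
# pivots about the left wheel by 0.06 / 0.12 = 0.5 rad.
print(yaw(2.0, 0.0, R=0.03, W=0.12))  # 0.5

# Equal wheel rotations mean straight-line motion, i.e. zero yaw.
print(yaw(1.5, 1.5, R=0.03, W=0.12))  # 0.0
```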
{ "domain": "physics.stackexchange", "id": 22231, "tags": "homework-and-exercises, rotational-kinematics" }
How does ROS create communication between an Android device and a Linux PC?
Question: I wanted to know how ROS can communicate between an Android device and a PC through a USB device. Looking at the Hokuyo code in android_extras, it implements the USB API, but this USB API is not picking up my Linux machine as a USB device. So I wanted to know how the Hokuyo code can create a connection between an Android device and a Linux PC using only USB. Originally posted by hat13 on ROS Answers with karma: 1 on 2015-01-22 Post score: 0 Answer: The Hokuyo being used for that has a USB interface. If you want to talk from a tablet to a PC with ROS you should use a USB Ethernet adapter on the Android device. Originally posted by tfoote with karma: 58457 on 2015-03-12 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 20661, "tags": "usb" }
Multiple Source Shortest Paths in a weighted graph
Question: In an unweighted graph, we can find Multiple Source Shortest Paths using the Breadth-First Search algorithm by setting the distance of all starting vertices to zero and pushing them into the queue at the beginning of the algorithm. However, I'm wondering if we can use the same technique to solve Multiple Source Shortest Paths in a weighted graph using Dijkstra's algorithm (for non-negative weight edges) and the Bellman-Ford algorithm (when negative weight edges are allowed, and of course there is no queue here). If I'm right, we can think of this situation as if there were edges with weight equal to zero between any source and all other sources in the graph. So, can we use this technique in a weighted graph? And why? Answer: Yes. Here is the trick that always works: create a new source, $s_0$, and add an edge (with length 0) from $s_0$ to each of your starting vertices. Then, run any shortest-paths algorithm starting from $s_0$ to compute the distance from $s_0$ to each other vertex. Your technique for BFS is equivalent to this; but this is more general and can be used with Dijkstra's algorithm, Bellman-Ford, or any other shortest paths algorithm.
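A minimal sketch of the trick in Python (illustrative code; the graph and names are made up, and weights are assumed non-negative): seeding every source into the priority queue with distance 0 is equivalent to running Dijkstra from the virtual source $s_0$.

```python
import heapq

def multi_source_dijkstra(adj, sources):
    """Shortest distance from the nearest source to every reachable node.

    adj: {u: [(v, w), ...]} adjacency lists with non-negative weights.
    Equivalent to adding a virtual node s0 with zero-weight edges to all
    sources and running plain Dijkstra from s0.
    """
    dist = {s: 0 for s in sources}
    heap = [(0, s) for s in sources]
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

adj = {"a": [("b", 4)], "c": [("b", 1)], "b": [("d", 2)]}
print(multi_source_dijkstra(adj, ["a", "c"]))
# b is reached more cheaply from c: {'a': 0, 'c': 0, 'b': 1, 'd': 3}
```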
{ "domain": "cs.stackexchange", "id": 13689, "tags": "graphs, shortest-path, weighted-graphs" }
Make change for a given amount given denominations
Question: Make change like at a cash register. Input is the denominations of the currency and the sum. In the test case only coins are entered. Return how many of each denomination. Return the fewest possible denomination count (don't return all pennies). Check for style, efficiency, correctness, and anything else you want to check. // test for (int i = 1; i <= 200; i++) { Dictionary<int, int> change = MakeChange(new List<int> { 1, 5, 10, 25, 50 }, i); Debug.WriteLine($"sum {i}"); int sum = 0; foreach (KeyValuePair<int, int> coins in change) { sum += coins.Key * coins.Value; Debug.WriteLine($" coin {coins.Key} count {coins.Value} "); } if (i != sum) { Debug.WriteLine("problem"); } } // end test private static Dictionary<int, int> MakeChange(List<int> coins, int sum) { if(sum < 0 || coins.Count == 0) { throw new ArgumentOutOfRangeException(); } Dictionary<int, int> change = new Dictionary<int, int>(); foreach (int coin in coins.Distinct().Where(x => x > 0).OrderByDescending(x => x)) { int j = sum / coin; //integer math rounds down if (j > 0) { change.Add(coin, j); } sum -= j * coin; if (sum == 0) return change; } return null; } Answer: You have implemented the greedy algorithm for making change: starting with the largest denomination, divide, then use the smaller coins to handle the remainder. That works for many coin systems, but in the general case, it fails to produce optimal results: Greedy method For the so-called canonical coin systems, like the one used in US and many other countries, a greedy algorithm of picking the largest denomination of coin which is not greater than the remaining amount to be made will produce the optimal result. This is not the case for arbitrary coin systems, though: if the coin denominations were 1, 3 and 4, then to make 6, the greedy algorithm would choose three coins (4,1,1) whereas the optimal solution is two coins (3,3).
Therefore, if you want to use as few coins as possible, where the set of denominations is arbitrarily picked by the user, you have to use a different algorithm, typically one based on dynamic programming. So, either your code is wrong, or it needs a comment to warn of that significant caveat.
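For illustration, here is a minimal dynamic-programming sketch (Python, illustrative only, not a drop-in replacement for the C# method) that finds the fewest coins even for the non-canonical system where greedy fails:

```python
def min_coins(coins, amount):
    """Fewest coins summing to `amount` (dynamic programming).

    best[a] = minimum number of coins needed to make amount a,
    built up from best[0] = 0.
    """
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount]

# The system from the quote: greedy picks 4+1+1 (3 coins),
# the DP finds 3+3 (2 coins).
print(min_coins([1, 3, 4], 6))            # 2
# For the canonical US-style system it agrees with greedy: 25+5.
print(min_coins([1, 5, 10, 25, 50], 30))  # 2
```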
{ "domain": "codereview.stackexchange", "id": 26640, "tags": "c#, .net, change-making-problem" }
Force required to produce a specific motion on a particle
Question: This exercise comes from the Exercises for the Feynman Lectures, Chapter 15. The full question: 15-6 A particle of rest mass $m_0$ is caused to move along a line such that its position is: $$x = \sqrt{b^2+c^2t^2} - b$$ What force must be applied to produce this motion? I am assuming b is an arbitrary constant. With this in mind, I figured that I may be able to calculate the force by first finding out the acceleration of the particle. Then we have $$\frac{d^2x}{dt^2} = \frac{c^2b^2}{(b^2+c^2t^2)^{3/2}}$$ Then we have $F=ma$. However, the mass is $\gamma m_0$, where I found $\gamma$ by substituting $v = dx/dt$. This resulted in a complex expression that did not simplify nicely. The answer, according to the book, is $F = m_0 c^2 /b$ Perhaps substituting $v= dx/dt$ to find $\gamma$ isn't valid? Answer: You cannot use Newton's Second Law of motion in its classical version. Try to express this law in its more fundamental form, where it is defined as the rate of change of total linear momentum of a system. Knowing this, and the fact that the total linear momentum of a system must also be represented in its relativistic form (which I assume you are aware of), you can calculate the relativistic force.
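Carrying out the answer's suggestion explicitly, with $F = dp/dt$ and the relativistic momentum $p = \gamma m_0 v$, the book's result falls out cleanly; a sketch of the algebra:

```latex
v = \frac{dx}{dt} = \frac{c^2 t}{\sqrt{b^2 + c^2 t^2}},
\qquad
1 - \frac{v^2}{c^2} = \frac{b^2}{b^2 + c^2 t^2}
\ \Longrightarrow\
\gamma = \frac{\sqrt{b^2 + c^2 t^2}}{b},
\qquad
p = \gamma m_0 v = \frac{m_0 c^2 t}{b},
\qquad
F = \frac{dp}{dt} = \frac{m_0 c^2}{b}.
```

The square roots cancel between $\gamma$ and $v$, so the momentum grows linearly in time and the required force is the constant $F = m_0 c^2/b$.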
{ "domain": "physics.stackexchange", "id": 29285, "tags": "homework-and-exercises, special-relativity, kinematics" }
Why is And-Or-Graph-Search called a search algorithm?
Question: An algorithm in Artificial Intelligence: A Modern Approach for planning in stochastic, fully observable environments is called And-Or-Graph-Search, implying that it's a search algorithm. However, I don't see how it is one. Wikipedia defines search algorithms as, "an algorithm for finding an item with specified properties among a collection of items," but And-Or-Graph-Search doesn't do that. It instead finds multiple items (goal states) in order to guarantee it will reach a goal state no matter what the results of its stochastic actions are. So, why is it a search algorithm? Here's its pseudo-code:

function AndOrGraphSearch(problem) returns a conditional plan, or failure
    return OrSearch(problem.initialState, problem, [])

function OrSearch(state, problem, path) returns a conditional plan, or failure
    if problem.GoalTest(state) then return the empty plan
    if state is on path then return failure
    for each action in problem.Actions(state) do
        plan = AndSearch(Results(state, action), problem, [state | path])
        if plan is not failure then return [action | plan]
    return failure

function AndSearch(states, problem, path) returns a conditional plan, or failure
    for each s_i in states do
        plan_i = OrSearch(s_i, problem, path)
        if plan_i is failure then return failure
    return [if s_1 then plan_1 else if s_2 then plan_2 else ... else if s_{n-1} then plan_{n-1} else plan_n]

AndOrSearch is an algorithm for searching And-Or graphs generated by nondeterministic environments. It returns a conditional plan that reaches a goal state in all circumstances. (The notation [x | l] refers to the list formed by adding the object x to the front of list l.) The function is from the book Artificial Intelligence: A Modern Approach. Answer: You can view And-Or Graph search as a search algorithm in two ways: As search in the state space: Here, the "items" from the Wikipedia definition are the states, and an "item with specified properties among a collection of items" is a goal state.
With And-Or Graph search, finding one such item is generally not enough. Under this view, the definition on Wikipedia is a bit too narrow. As search in the space of partial conditional plans: Here, the "items" are partial conditional plans, and an "item with specified properties among a collection of items" is a total conditional plan, i.e., a partial conditional plan with the specified property that it is guaranteed to reach a goal state after a finite number of steps. Unlike a total conditional plan, a partial conditional plan may contain leaf nodes that are neither goal nodes nor have an associated action in the plan. Search steps in this search space are extensions of partial conditional plans by one action, i.e., these steps take a partial plan P and return a new partial plan P' that is like P, but with one non-goal leaf replaced by a valid action assignment to that leaf node. Both views are legitimate, and the second one is perfectly consistent with the Wikipedia definition. The analogous distinction in regular graph search would be between searching for a goal state and searching for a path to a goal state.
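To make the second view tangible, here is a small Python transcription of the pseudo-code above, with a dictionary of per-outcome subplans in place of the book's if-then-else notation, run on a made-up nondeterministic toy problem (the problem definition and state numbering are invented for illustration):

```python
def and_or_search(problem):
    """Returns a conditional plan [action, {outcome_state: subplan, ...}] or None."""
    return or_search(problem["initial"], problem, [])

def or_search(state, problem, path):
    if state in problem["goals"]:
        return []                      # the empty plan
    if state in path:
        return None                    # cycle on the current path: failure
    for action in problem["actions"].get(state, []):
        outcomes = problem["results"][(state, action)]
        subplans = and_search(outcomes, problem, [state] + path)
        if subplans is not None:
            return [action, subplans]
    return None

def and_search(states, problem, path):
    plans = {}
    for s in states:                   # a subplan is needed for EVERY possible outcome
        plan = or_search(s, problem, path)
        if plan is None:
            return None
        plans[s] = plan
    return plans

# Toy world: action 'a' from state 1 lands nondeterministically in 2 or 3;
# 'b' and 'c' then lead deterministically to the goal state 4.
problem = {
    "initial": 1,
    "goals": {4},
    "actions": {1: ["a"], 2: ["b"], 3: ["c"]},
    "results": {(1, "a"): [2, 3], (2, "b"): [4], (3, "c"): [4]},
}
print(and_or_search(problem))  # ['a', {2: ['b', {4: []}], 3: ['c', {4: []}]}]
```

The returned object is exactly an "item" in the space of conditional plans: a prescription of what to do for every outcome the environment may produce.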
{ "domain": "cs.stackexchange", "id": 3758, "tags": "terminology, artificial-intelligence, search-algorithms" }
Last layers of YOLO
Question: I would like it if someone could explain something to me. The architecture in YOLO from Figure 3 in the YOLO paper https://pjreddie.com/media/files/papers/yolo.pdf is like this: (448,448,3), (112,112,192), (56,56,256), (28,28,512), (14,14,1024), (7,7,1024), (7,7,1024), Dense(4096), (7,7,30). I don't understand how to implement the last three parts, the bolded ones. If it is not a problem, I would appreciate help understanding that part. I use Keras and everything is OK for me to implement except those parts. I really don't know how to pass from (7,7,1024) to (7,7,1024) and also from Dense to (7,7,30). Answer: You can use the Flatten and Reshape layers to go to Dense and back to HWC format. The last layers in Keras would look like this:

x = ...  # output of the first (7, 7, 1024) block
x = keras.layers.Conv2D(1024, 3, padding='same')(x)
x = keras.layers.Flatten()(x)
x = keras.layers.Dense(4096)(x)
x = keras.layers.Dense(7 * 7 * 30)(x)
x = keras.layers.Reshape((7, 7, 30))(x)
{ "domain": "datascience.stackexchange", "id": 3343, "tags": "neural-network, keras, yolo" }
Missing something basic about simple orbital mechanics
Question: I seem to be missing something basic. I've been trying to get a simple orbital simulation working, and my two objects are Earth around the Sun. My problem is this. I placed the Earth at 93M miles away from the Sun, or 155M km. As I understand it, the orbital velocity of something at 155M km from the Sun is: $$v = \sqrt{\frac{GM}{r}}$$ Plugging in the numbers for the Sun, I get a velocity of: $$29261 \frac{\mathrm{m}}{\mathrm{s}}$$ However, if I want to get the acceleration that the Sun has upon the earth, I use: $$g = \frac{GM}{r^2}$$ For the Sun, and 155M km, I get an acceleration of: $$0.0055\frac{\mathrm{m}}{\mathrm{s}^2}$$ Now, I start with a simple body at the proper radius out along the X axis and give it a simple vector of 29261 m/s along the Y axis, then I start applying the 0.0055 m/s^2 acceleration to it. And the acceleration of the Sun is simply not enough to hold the Earth. If the Earth starts with a vector of (0, 29261 m/s), and I add the acceleration vector of (-0.0055 m/s, 0) to it, you can see that after a single second, it doesn't move a whole lot. If I chunk things to days, 86400 seconds, then the acceleration vector is only, roughly, -477 m/day, but the velocity vector is: $$2,325,974,400 \frac{\mathrm{m}}{\mathrm{day}} = 29,261 \frac{\mathrm{m}}{\mathrm{s}} \times 86,400 \frac{\mathrm{s}}{\mathrm{day}}$$ As you can imagine, the -477 isn't going to move that much towards the Sun. I understand that better simulations use better techniques than simply adding basic vectors together, but that's not what this is. I seem to be missing something fundamental. I had assumed that given the correct velocity, that the pull of the Sun should keep the Earth in orbit, but the "pull" that I'm using doesn't seem to be having the desired effect. So, I'm curious what basic "D'oh" thing I'm missing here. Edit for Luboš Motl answer. Perhaps there's something more fundamental I'm missing here. I understand your point, but .0055 m/s * 86,400 is -477.
I was doing that math fine. Simply, I have an object with a velocity vector. Then I apply an acceleration at a right angle. I do that for N seconds to come up with a new, right angle velocity vector. I then add that to the original vector to come up with the objects new vector. I then take that vector, apply to the current position of the object, and arrive at a new position. Clearly there is a granularity issue which makes some amount of seconds a better choice for a model than others, but this is high school level simple mechanics, so there's going to be some stepping. I chose one day so that my little dot of a planet on my screen would move. If I update every 1/10th of a second "real time", and each update is a day, then I should get a rough orbit that's really a 365ish polygon in a little over 30s real time. If I choose a step size of 1 second, then my acceleration (0.0055 m/s^2) * 1 s = a right angle velocity vector that's -0.0055 in magnitude. That vector is added to the original vector of 29261 (at right angles), giving me a new vector of (-0.0055, 29261). That's after one second. That's not much of a bump. It's barely a blip. If I apply one days full of acceleration, "all at once", I am obligated to not only multiply the acceleration by 86,400, but also the original vector (since it's 29261 m/s, and we have 86,400 s), thus giving me, proportionally, the same vector, just longer. And it's still just a bump. So, I'm mis-applying something somewhere here, as I think the numbers are fine. I'm simply "doing it wrong". Trying to figure out what that wrong part is. Edit 2, responding to Platypus Lover Thank you very much for the simple code you posted. It showed me my error. My confusion was conflating updating of the vector with the calculation of the velocity vector. I felt that I had to multiply both the original vector AND the acceleration amount by the time step, which would give me the silly results. It was just confused in my head. 
Answer: I suspect there is something wrong in the way you are adding the acceleration vector to the velocity vector after the first timestep. The simplest way to check you are doing things correctly is to write down your scheme in Cartesian coordinates. To point you in the right direction, I wrote a sample orbit integrator for you here: Simplest orbit integrator, which should be at a level appropriate for a high school student. This is probably the simplest integrator you can possibly write. With your code, you should get a nice circle for an x-y plot, and a nice sinusoid for the position and velocity components. Notice that it uses Euler's method: $\vec{v}(t+\Delta t) = \vec{v}(t) + \Delta t \ \vec{a}(t)$, $\vec{x}(t+\Delta t) = \vec{x}(t) + \Delta t \ \vec{v}(t)$, which is probably what you have been doing so far without realizing it. This is the most inaccurate method to use when integrating, and once you introduce elliptical orbits, it will give you wrong results after a few orbits. There are many simple recommendations I can give you on how to further improve your code once it is fixed (normalizing units, a better integration scheme, etc.).
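A self-contained version of that Euler scheme, using the questioner's numbers (the Sun's GM, r = 155 million km, one-day steps; variable names are mine), shows the orbit closing rather than the Earth flying off. The key point is that the acceleration is recomputed from the current position every step, so it always points at the Sun:

```python
import math

GM = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
R0 = 1.55e11            # 155 million km, the radius used in the question, m

def euler_orbit(steps=365, dt=86400.0):
    """Explicit Euler: position from the old velocity, then update the velocity."""
    x, y = R0, 0.0
    vx, vy = 0.0, math.sqrt(GM / R0)    # circular-orbit speed, ~29261 m/s
    traj = [(x, y)]
    for _ in range(steps):
        r = math.hypot(x, y)
        ax, ay = -GM * x / r**3, -GM * y / r**3   # always points at the Sun
        x, y = x + vx * dt, y + vy * dt           # position uses the OLD velocity
        vx, vy = vx + ax * dt, vy + ay * dt       # then the velocity is updated
        traj.append((x, y))
    return traj

traj = euler_orbit()
xf, yf = traj[-1]
print(math.hypot(xf, yf) / R0)   # radius drift after a year of day-long Euler steps
```

The final radius drifts somewhat (explicit Euler spirals outward, as the answer warns), but the body clearly orbits instead of escaping, which is the behavior the questioner was failing to reproduce.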
{ "domain": "physics.stackexchange", "id": 573, "tags": "newtonian-mechanics, orbital-motion" }
How to create a local map from global one
Question: After running a SLAM algorithm I received a big map and I'd like to make it smaller - I'd like to use only data that are within 3-5 m. Is there a possibility of deleting a node from the octomap graph representing a point that is too far away? Originally posted by Wilk on ROS Answers with karma: 67 on 2014-02-04 Post score: 1 Answer: There is a clear option. Not sure if you can easily clear with it, as you can only specify two corner points. ~clear_bbx (octomap_msgs/BoundingBoxQuery) Clears a region in the 3D occupancy map, setting all voxels in the region to "free" Originally posted by davinci with karma: 2573 on 2014-02-04 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Wilk on 2014-02-04: Thanks for the very fast response - I'm searching for how to use it :D Comment by Wilk on 2014-02-04: Maybe a stupid question, but how can I use it in the terminal? Comment by davinci on 2014-02-05: Try it with tab completion, it will give the required format. Comment by Wilk on 2014-02-09: I think the usage is like this: rosrun octomap_server octomap_eraser_cli.py -x -y -z x y z but still I can't use it - probably I have other problems with octomap (or rgbdslam). Anyway thanks for the help. This is the function I was looking for.
{ "domain": "robotics.stackexchange", "id": 16883, "tags": "ros, octomap, graph, node" }
Selecting sites from VCF which have an alt AD > 10
Question: I have high-depth variant calling created using the HaplotypeCaller with --output_mode EMIT_ALL_SITES I'm interested in finding all sites (regardless of genotype call heterozygous or homozygous) where at least one of the alternative alleles have an AD value (Allelic Depth) greater than 10, I.e. are supported by more than 10 reads. Also ideally I want back more than just the first alternative allele. Note that I don't want back lines of VCF were we only see an AD count for the ref allele only. So in the example VCF snippet below I'm wanting to select lines: 6,7,8,12,13 and 14, which have GT:AD values 1/1:1,988:989 0/1:116,92 0/1:220,234 0/1:62,611 1/1:0,109 respectively. #CHROM POS ID REF ALT QUAL FILTER INFO FORMAT 12908_DIAG 3 187446740 . T . Infinity . AN=2;DP=1095;MQ=60.00 GT:AD:DP 0/0:1095:1095 3 187446741 . C . Infinity . AN=2;DP=1117;MQ=60.00 GT:AD:DP 0/0:1117:1117 3 187446752 . A . Infinity . AN=2;DP=1297;MQ=60.00 GT:AD:DP 0/0:1297:1297 3 187446763 . C . Infinity . AN=2;DP=1494;MQ=60.00 GT:AD:DP 0/0:1494:1494 3 187451574 . C . Infinity . AN=2;DP=1493;MQ=60.00 GT:AD:DP 0/0:1493:1493 3 187451609 rs1880101 A G 39794.03 . AC=2;AF=1.00;AN=2;BaseQRankSum=1.859;ClippingRankSum=0.000;DB;DP=995;ExcessHet=3.0103;FS=0.000;MLEAC=2;MLEAF=1.00;MQ=60.00;MQRankSum=0.000;QD=24.56;ReadPosRankSum=0.406;SOR=8.234 GT:AD:DP:GQ:PL 1/1:1,988:989:99:39808,2949,0 4 1803279 . T G 0 . AC=0;AF=0.00;AN=2;BaseQRankSum=-6.652;ClippingRankSum=0.000;DP=245;ExcessHet=3.0103;FS=89.753;MLEAC=0;MLEAF=0.00;MQ=59.97;MQRankSum=0.000;ReadPosRankSum=-2.523;SOR=6.357 GT:AD:DP:GQ:PL 0/0:211,23:234:99:0,364,6739 4 1803307 rs2305183 T C 2486.60 . AC=1;AF=0.500;AN=2;BaseQRankSum=-5.049;ClippingRankSum=0.000;DB;DP=215;ExcessHet=3.0103;FS=1.110;MLEAC=1;MLEAF=0.500;MQ=59.97;MQRankSum=0.000;QD=11.95;ReadPosRankSum=-0.045;SOR=0.809 GT:AD:DP:GQ:PL 0/1:116,92:208:99:2494,0,3673 4 1803671 . C A 0 . 
AC=0;AF=0.00;AN=2;BaseQRankSum=-0.880;ClippingRankSum=0.000;DP=450;ExcessHet=3.0103;FS=0.000;MLEAC=0;MLEAF=0.00;MQ=60.00;MQRankSum=0.000;ReadPosRankSum=-0.953;SOR=0.572 GT:AD:DP:GQ:PL 0/0:445,2:447:99:0,1272,15958 4 1803681 . T C 0 . AC=0;AF=0.00;AN=2;BaseQRankSum=-1.654;ClippingRankSum=0.000;DP=483;ExcessHet=3.0103;FS=0.000;MLEAC=0;MLEAF=0.00;MQ=60.00;MQRankSum=0.000;ReadPosRankSum=-0.422;SOR=0.664 GT:AD:DP:GQ:PL 0/0:479,2:481:99:0,1408,18538 4 1803703 . A G 0 . AC=0;AF=0.00;AN=2;BaseQRankSum=-1.704;ClippingRankSum=0.000;DP=458;ExcessHet=3.0103;FS=0.000;MLEAC=0;MLEAF=0.00;MQ=60.00;MQRankSum=0.000;ReadPosRankSum=0.299;SOR=0.497 GT:AD:DP:GQ:PL 0/0:454,2:456:99:0,1325,18095 4 1803704 rs2234909 T C 6676.60 . AC=1;AF=0.500;AN=2;BaseQRankSum=-2.605;ClippingRankSum=0.000;DB;DP=456;ExcessHet=3.0103;FS=1.753;MLEAC=1;MLEAF=0.500;MQ=60.00;MQRankSum=0.000;QD=14.71;ReadPosRankSum=0.324;SOR=0.849 GT:AD:DP:GQ:PL 0/1:220,234:454:99:6684,0,6366 4 1803824 rs2305184 C G 2030.60 . AC=1;AF=0.500;AN=2;BaseQRankSum=8.083;ClippingRankSum=0.000;DB;DP=124;ExcessHet=3.0103;FS=6.128;MLEAC=1;MLEAF=0.500;MQ=60.00;MQRankSum=0.000;QD=16.51;ReadPosRankSum=0.180;SOR=0.096 GT:AD:DP:GQ:PL 0/1:62,61:123:99:2038,0,1766 4 1805296 rs3135883 G A 3876.03 . AC=2;AF=1.00;AN=2;DB;DP=110;ExcessHet=3.0103;FS=0.000;MLEAC=2;MLEAF=1.00;MQ=60.00;QD=29.22;SOR=9.401 GT:AD:DP:GQ:PL 1/1:0,109:109:99:3890,326,0 A dropbox link for file I'd initially considered using GATK's SelectVariants but I'm not sure JEXL has the ability to select out what I want specifically other than a blanket AD > 10 which will give me both ref and alt alleles with AD > 10. Perhaps there is a bioawk solution or something more elaborate with coreutils which could successfully return sites with an alt AD count > 10? Answer: This now works with the development version of Bcftools v1.5 (commit 4f134df). Thanks to Petr Danecek for adding the feature. 
I expect this feature to make its way into the next release of Bcftools: git clone git://github.com/samtools/htslib.git git clone git://github.com/samtools/bcftools.git (cd bcftools; make) bgzip Test.vcf ./bcftools/bcftools index Test.vcf.gz ./bcftools/bcftools filter -i 'AD[0:1-] > 10' Test.vcf.gz Output without header (I have modified the second line to be tri-allelic to demonstrate the filtering works): 3 187451609 rs1880101 A G 39794 PASS AC=2;AF=1;AN=2;BaseQRankSum=1.859;ClippingRankSum=0;DB;DP=995;ExcessHet=3.0103;FS=0;MLEAC=2;MLEAF=1;MQ=60;MQRankSum=0;QD=24.56;ReadPosRankSum=0.406;SOR=8.234 GT:AD:DP:GQ:PL 1/1:1,988:989:99:39808,2949,0 4 1803279 . T G 0 PASS AC=0;AF=0;AN=2;BaseQRankSum=-6.652;ClippingRankSum=0;DP=245;ExcessHet=3.0103;FS=89.753;MLEAC=0;MLEAF=0;MQ=59.97;MQRankSum=0;ReadPosRankSum=-2.523;SOR=6.357 GT:AD:DP:GQ:PL 0/0/0:211,3,34:234:99:0,364,6739 4 1803307 rs2305183 T C 2486.6 PASS AC=1;AF=0.5;AN=2;BaseQRankSum=-5.049;ClippingRankSum=0;DB;DP=215;ExcessHet=3.0103;FS=1.11;MLEAC=1;MLEAF=0.5;MQ=59.97;MQRankSum=0;QD=11.95;ReadPosRankSum=-0.045;SOR=0.809 GT:AD:DP:GQ:PL 0/1:116,92:208:99:2494,0,3673 4 1803704 rs2234909 T C 6676.6 PASS AC=1;AF=0.5;AN=2;BaseQRankSum=-2.605;ClippingRankSum=0;DB;DP=456;ExcessHet=3.0103;FS=1.753;MLEAC=1;MLEAF=0.5;MQ=60;MQRankSum=0;QD=14.71;ReadPosRankSum=0.324;SOR=0.849 GT:AD:DP:GQ:PL 0/1:220,234:454:99:6684,0,6366 4 1803824 rs2305184 C G 2030.6 PASS AC=1;AF=0.5;AN=2;BaseQRankSum=8.083;ClippingRankSum=0;DB;DP=124;ExcessHet=3.0103;FS=6.128;MLEAC=1;MLEAF=0.5;MQ=60;MQRankSum=0;QD=16.51;ReadPosRankSum=0.18;SOR=0.096 GT:AD:DP:GQ:PL 0/1:62,61:123:99:2038,0,1766 4 1805296 rs3135883 G A 3876.03 PASS AC=2;AF=1;AN=2;DB;DP=110;ExcessHet=3.0103;FS=0;MLEAC=2;MLEAF=1;MQ=60;QD=29.22;SOR=9.401 GT:AD:DP:GQ:PL 1/1:0,109:109:99:3890,326,0 Edited (Dec 2020): bcftools filter syntax has been updated, answer now reflects changes found in version v1.11 (not sure at what point these were introduced). 
The old query 'AD[1-] > 10' would produce the error The FORMAT tag AD can have multiple subfields, run as AD[sample:subfield]. A discussion of these changes can be found here.
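The same per-sample AD test can be prototyped in plain Python, which is handy for sanity-checking the bcftools expression against a few records. This sketch (my own, single-sample VCFs only) flags a record when any ALT allele's depth exceeds 10, and skips hom-ref sites whose AD carries only the REF count:

```python
def alt_ad_exceeds(vcf_record, threshold=10):
    """True if any ALT-allele AD value in the (single) sample exceeds threshold."""
    fields = vcf_record.rstrip("\n").split("\t")
    fmt = fields[8].split(":")
    sample = fields[9].split(":")
    if "AD" not in fmt:
        return False
    ad = sample[fmt.index("AD")].split(",")
    # AD lists the REF depth first, then one depth per ALT allele
    return any(int(d) > threshold for d in ad[1:])

# Two records adapted from the question (columns abridged, tab-separated)
ref_only = "3\t187446740\t.\tT\t.\tInf\t.\tAN=2;DP=1095\tGT:AD:DP\t0/0:1095:1095"
het = ("4\t1803307\trs2305183\tT\tC\t2486.60\t.\tDP=215\t"
       "GT:AD:DP:GQ:PL\t0/1:116,92:208:99:2494,0,3673")
print(alt_ad_exceeds(ref_only), alt_ad_exceeds(het))  # False True
```

For real multi-sample files you would of course stay with bcftools, but a ten-line reference implementation makes it easy to confirm which records a filter expression is supposed to keep.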
{ "domain": "bioinformatics.stackexchange", "id": 149, "tags": "variant-calling, vcf, bioawk" }
Minimize this IF statement
Question: I was going to post this in code golf, but I think it may be better suited here. I'm not really interested in scoring it beyond accepting the best answer. I'm wanting to minimize this series of IFs, if possible or necessary. I'm brand new to javascript/coding, but it just seems like there may be some redundancy here.

if(activeCell.getRow() > 2 && activeCell.getColumn() == 1){
    activeCell.offset(0, 1).setDataValidation(projectTasksAdjItemValidationRule);
    activeCell.offset(0, 2).setDataValidation(projectTasksAdjSubItemValidationRule);
    activeCell.offset(0, 3).setDataValidation(projectTasksAdjActionValidationRule);
    activeCell.offset(0, 4).setDataValidation(projectTasksAdjTaskValidationRule);
}
else if(activeCell.getRow() > 2 && activeCell.getColumn() == 2){
    activeCell.offset(0, 1).setDataValidation(projectTasksAdjSubItemValidationRule);
    activeCell.offset(0, 2).setDataValidation(projectTasksAdjActionValidationRule);
    activeCell.offset(0, 3).setDataValidation(projectTasksAdjTaskValidationRule);
}
else if(activeCell.getRow() > 2 && activeCell.getColumn() == 3){
    activeCell.offset(0, 1).setDataValidation(projectTasksAdjActionValidationRule);
    activeCell.offset(0, 2).setDataValidation(projectTasksAdjTaskValidationRule);
}
else if(activeCell.getRow() > 2 && activeCell.getColumn() == 4){
    activeCell.offset(0, 1).setDataValidation(projectTasksAdjTaskValidationRule);
}

Answer: Since the activeCell.getRow() check is repeated in all the ifs, the only way I see of refactoring your code without overcomplicating the script is the following:

if(activeCell.getRow() > 2){
    if(activeCell.getColumn() == 1){
        activeCell.offset(0, 1).setDataValidation(projectTasksAdjItemValidationRule);
        activeCell.offset(0, 2).setDataValidation(projectTasksAdjSubItemValidationRule);
        activeCell.offset(0, 3).setDataValidation(projectTasksAdjActionValidationRule);
        activeCell.offset(0, 4).setDataValidation(projectTasksAdjTaskValidationRule);
    } else if(activeCell.getColumn() == 2){
        activeCell.offset(0, 1).setDataValidation(projectTasksAdjSubItemValidationRule);
        activeCell.offset(0, 2).setDataValidation(projectTasksAdjActionValidationRule);
        activeCell.offset(0, 3).setDataValidation(projectTasksAdjTaskValidationRule);
    } else if(activeCell.getColumn() == 3){
        activeCell.offset(0, 1).setDataValidation(projectTasksAdjActionValidationRule);
        activeCell.offset(0, 2).setDataValidation(projectTasksAdjTaskValidationRule);
    } else if(activeCell.getColumn() == 4){
        activeCell.offset(0, 1).setDataValidation(projectTasksAdjTaskValidationRule);
    }
}

Hope it helps ;)

EDIT: As pointed out by "esote" in the comments of this answer, you could use a switch statement with fall-through to improve readability. Note that each target column keeps its own validation rule, so the offsets must be computed from the current column:

var column = activeCell.getColumn();
if(activeCell.getRow() > 2){
    switch(column){
        case 1:
            activeCell.offset(0, 2 - column).setDataValidation(projectTasksAdjItemValidationRule);
        case 2:
            activeCell.offset(0, 3 - column).setDataValidation(projectTasksAdjSubItemValidationRule);
        case 3:
            activeCell.offset(0, 4 - column).setDataValidation(projectTasksAdjActionValidationRule);
        case 4:
            activeCell.offset(0, 5 - column).setDataValidation(projectTasksAdjTaskValidationRule);
            break;
    }
}

If you have any doubt about how switch statements and fall-through work, see this page
{ "domain": "codereview.stackexchange", "id": 35767, "tags": "javascript, google-apps-script" }
Does staring at a bright LED light damage your eyes?
Question: According to this article it seems that it is the UV part of the spectrum from the Sun that causes damage to the eye. Would it therefore be "safe" to observe directly an equivalent energy density LED lamp, emitting in the visible part of the spectrum? Answer: One can give a highly qualified, but definite "yes" in answer to your question. Contrary to popular belief, if it weren't for the UV, then staring at the Sun would not be a particularly hazardous thing to do for the majority of people. This is why I said "highly qualified" - for people with certain conditions, simply staring at the Sun may be hazardous, even aside from the UV (more below). There are two ways that light will damage your eye: (1) thermal damage, and (2) photochemical damage. Fairly obviously from the nomenclature, the first kind (1) of damage is where so much energy is concentrated on the retina that its temperature is raised and the tissue is damaged or destroyed. The second kind (2) is where the light's photons are energetic enough to beget chemical changes by breaking bonds in organic molecules. This can lead both to acute poisoning of and damage / destruction to the tissue by the weird molecules / free radicals that come out of such light-matter interactions, and also long term damage, even nuclear (in the sense of cell nucleus) changes, dysplasia (e.g. cataracts) and ultimately neoplasia (cancer). The retina in the mammalian eye is superbly, densely vascularized. You won't find a better liquid cooling system in our primitive contemporary human technology. This situation has arisen from two evolutionary drivers: (1) the retina is simply adapted brain tissue, and the brain itself needs sophisticated liquid cooling: of all the organs it is the most sensitive to deviations from the warm blooded homeostatic temperature ($37{\rm ^oC}$) and (2) since we are creatures of the Neogene Eastern Africa, accidentally looking at the Sun was everyday and commonplace to us.
Therefore, if you stare straight at the Sun at high noon, your pupils will have shrunken to about $1{\rm\,mm}$ diameter, thus the power incident on the retina is of the order of $\frac{\pi}{4}\times (10^{-3}\,{\rm m})^2 \times 1000\,{\rm W\,m^{-2}} \approx 1\,{\rm mW}$. This is WAY less than the normal, healthy retina's capacity to dump heat, even if concentrated into a diffraction limited spot. The uncomfortable feeling you get from looking at the Sun is mostly a psychological one: if you did it for thirty seconds, your retinal cone and rod cells would be so drained of ATP (energy stores) that you would be totally (albeit altogether temporarily) blind for many, many minutes. For either a predator or prey creature (we are both), this is not good. But the intensity alone, although it causes severe, but temporary blindness, is not a hazard (as long as you don't get eaten by a Neogene lion wearing sunglasses, or fall over a cliff texting on your phone whilst your sight recovers when the receptors finally get their ATP levels topped up). As you rightly point out, the danger from the Sun is UV: it is not the intensity with the uncomfortable, squinty-eye feeling that arises from it that is the danger for someone with a healthy retina. It is the low level but constant UV dose one gets mostly from scattered light that is the problem. This is why sunglasses in many countries must by law fulfil stringent UV attenuation performance standards: sunglasses stop the "glare" but this is not the danger; indeed the "comfort" afforded by non-UV attenuating lenses is a very false sense of security. Three other points about the Sun's power delivered to someone who stares at it. You have likely heard that looking at a total eclipse can be dangerous. It indeed can be. And this is because the pupil responds to average light. During a total eclipse, the pupil swells to its full size owing to the twilight's low average intensity.
Its diameter swells to about $7{\rm\,mm}$: so the aperture has fifty times the area it has just before the eclipse begins. If you look at the diamond ring just after totality, you can therefore cop a dose of $20{\rm\,mW}$ or so in the eye. This can be enough to cause thermal damage. Evolution didn't kit us out to look at total eclipses. Some eye conditions mean that even the thermal loading on the eye from the Sun can be dangerous. Macular degeneration is a major one, as is albinism or even an extremely white complexion. Heart and circulatory diseases are others. Other diseases and defects mean that the pupil cannot respond to high light levels. Many recreational drugs can dilate the pupils severely; in particular, hallucinogens like LSD, psilocin or mescaline can lead to eye damage in this way. Interestingly, if you begin with the assumption that the Sun's intensity into a fully shrunken pupil of someone staring straight at the Sun represents a safe upper limit to power dose (in the absence of UV), then you come up with dose limits that pretty closely match the ISO60825 laser safety standards for visible light. The last comment is particularly relevant to you. ISO60825 does NOT tell the difference between coherent and incoherent light. You treat an LED exactly as you would a laser: if you apply ISO60825 and determine that the light dose from your LED is intrinsically safe (i.e. class 1), then this is a sound indication, aside from in severe cases of the diseases I mention above. The other factor I haven't mentioned is the blink response, which is also accounted for by ISO60825, but this sets levels for class 2 and class 3A, which are not intrinsically safe, but which are deemed to be low enough that an accidental look into the beam will not be harmful owing to the shielding afforded by a healthy blink response (assumed to limit the light dose to 0.25 seconds). Again, drugs severely interfere with this reflex.
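The two power figures in the answer are easy to reproduce. A quick sketch (constants and names are mine; 1000 W/m^2 is a rough clear-sky solar irradiance at ground level):

```python
import math

SOLAR_IRRADIANCE = 1000.0  # W/m^2, rough clear-sky value at the ground

def eye_power_w(pupil_diameter_m, irradiance=SOLAR_IRRADIANCE):
    """Optical power entering the eye: irradiance times pupil area."""
    return irradiance * math.pi / 4 * pupil_diameter_m ** 2

constricted = eye_power_w(1e-3)   # ~0.8 mW: the 'about 1 mW' figure above
dilated = eye_power_w(7e-3)       # ~38 mW: the eclipse-dilated 7 mm pupil
print(constricted * 1e3, dilated * 1e3, dilated / constricted)
```

The 49x area ratio matches the answer's "fifty times"; the 20 mW quoted for the diamond ring is lower than the full-Sun dilated figure, presumably because only a sliver of the photosphere is exposed at that moment.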
{ "domain": "physics.stackexchange", "id": 18900, "tags": "electromagnetic-radiation, biophysics, vision, light-emitting-diodes" }
Centripetal force in frame of reference of body moving in a circle
Question: Suppose a body is moving in a circle about a fixed point. In the frame of reference of the body, is the centripetal force felt or is only the centrifugal force felt? More generally, does a body only feel the effect of pseudo forces in an accelerated reference frame? Answer: In the frame of reference of the body, is the centripetal force felt or is only the centrifugal force felt? It depends on what you mean exactly. Consider, for example, the amusement park ride Dumbo at Disneyland. On this ride, passengers sit in mini Dumbo replicas and are swung around in a circle. What forces do they feel? Well, firstly, they feel a centrifugal force radially outward. But this is not all. If that were the only force they felt, then in the frame that is stationary with respect to Dumbo, they would accelerate radially outward. Instead, they also feel a normal force of Dumbo pushing them inward that is precisely equal to the centrifugal force, and as a result, as measured in the Dumbo frame, they remain stationary with respect to Dumbo. Now, we know that if we were to analyze the same situation from the frame of reference of a person watching the ride from the ground, then we would say that there is only one force on the passengers, namely the normal force of Dumbo on them, and this force causes the passengers to accelerate, namely to move in a circle. As a result, the convention is to call the normal force the "centripetal" force. I personally think this is terrible terminology that confuses students because it leads them to believe that "centripetal force" is somehow an independent thing that doesn't need to be comprised of real physical interactions with objects...but anyhow. Now, going back to the accelerated frame, we had noticed that there were two forces acting on the passengers, the (fictitious) centrifugal force, and the normal force. Would you now call the normal force "centripetal"?
If we're doing the analysis in the accelerating frame, then that would be extremely non-standard because in that frame, no circular motion is occurring. does a body only feel the effect of pseudo forces in an accelerated reference frame? No! Just look at the above example! The passengers feel the centrifugal force, but they also feel a normal force due to their interaction with dumbo! In general, there can be all sorts of forces that an object feels in an accelerated frame that are not pseudo forces like friction, gravitational forces, electromagnetic forces etc.
{ "domain": "physics.stackexchange", "id": 10764, "tags": "newtonian-mechanics, rotational-dynamics, reference-frames, centrifugal-force, centripetal-force" }
openni_tracker fails on 12.04
Question: http://answers.ros.org/question/42654/openninite-incompatible-in-fuerteprecise/ According to the above, this should have been fixed with 1.5.4.0 according to @tfoote, but I am still seeing it on Fuerte/Groovy on 12.04. [ERROR]: Find user generator failed: This operation is invalid! And I tried the workaround proposed by @pgorczak in there but it no longer works. Has anyone found another workaround or know how to fix it? Thanks! Originally posted by jys on ROS Answers with karma: 212 on 2013-02-20 Post score: 0 Answer: For me the workaround worked. It was sufficient to install the Linux version of NITE-dev v1.5.2.21 and the error disappeared. Originally posted by rastaxe with karma: 620 on 2013-02-20 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by jys on 2013-02-20: If you take a look at http://www.openni.org/openni-sdk/openni-sdk-history-2/, there are 3 different things. I installed "OpenNI SDK v1.5.4.0", am I supposed to install just "NiTE v1.5.2.21" or all three? Comment by jys on 2013-02-20: I was installing the wrong thing. Installing "NITE v1.5.2.21" made it work. (It is the last one at that link.) Just ran "sudo ./uninstall.sh" and "sudo ./install.sh". Thanks! Comment by jys on 2013-02-21: @rastaxe could you take a look at my new question http://answers.ros.org/question/55827/openni_tracker-displaying-pointcloud-and-skeleton-alignment/ Thanks! Comment by shivesh_sk on 2014-05-16: Hi, I am facing the same problem and have installed NiTE v1.5.2.21 but it still shows the same error for me. I am using ROS Groovy with Ubuntu 12.04. Any help would be appreciated. Comment by rastaxe on 2014-11-14: In my experience, this method works only on 32 bit; do you have 64 bit? In that case, you have to try openni2_tracker.
{ "domain": "robotics.stackexchange", "id": 12976, "tags": "openi-tracker" }
Why is this interpretation of phase kickback incorrect?
Question: (This question is a kind of sequel to a prior question at Why does the "Phase Kickback" mechanism work in the Quantum phase estimation algorithm?) This question asks for someone to identify the error in what seems to me to be a reasonable interpretation of the phase kickback math. Here's the setup for the question. Suppose we have a unitary gate $U$ with eigenvector $|u\rangle$ and eigenvalue $e^{i\phi}$, so $U|u\rangle = e^{i\phi}|u\rangle$. If we use $|+\rangle$ as the control of a controlled-$U$ gate receiving $|u\rangle$, then the input system is $|+\rangle|u\rangle$, and the output system can be written (excluding the normalization factor $1/\sqrt{2}$) as $|0\rangle|u\rangle + e^{i\phi}|1\rangle|u\rangle$. The $e^{i\phi}$ is a term we can't directly measure, but it is part of the physical system, and can affect computations after this step. We can rewrite this output of the system in two forms: $$ \begin{align} \begin{split} |0\rangle|u\rangle + e^{i\phi}|1\rangle|u\rangle &= |0\rangle|u\rangle + |1\rangle\left(e^{i\phi}|u\rangle\right) \,\,\,\,\, & \text{Form 1} \\ &= \left(|0\rangle + e^{i\phi}|1\rangle\right) \, |u\rangle \,\,\,\,\, & \text{Form 2} \\ \end{split} \end{align} $$ Interpretation of Form 1: The left qubit is either $|0\rangle$ or $|1\rangle$. If we measure the left qubit and get $|0\rangle$, then we know that the right qubit is $|u\rangle$, and if we measure $|1\rangle$ on the left qubit, then we know that the right is $e^{i\phi}|u\rangle$. We can't measure that phase directly, but by applying something like QPE to the right qubit we can turn the phase into something we can measure, so we could detect that the phase shift is on the right qubit, not the left. In short, the left qubit is unchanged, and the right changes. Interpretation of Form 2: This is the typical phase kickback formulation, where the phase is associated with the control qubit, here the left one.
Now we have the opposite situation: the left qubit changes, and the right does not. These interpretations appear to describe different physical states, which can result in different measurements after additional circuitry, yet only algebra distinguishes these two forms. Since the second interpretation is widely accepted and used in analysis, this suggests that the first interpretation is wrong. But specifically why? That is, rather than a walkthrough of a "right" interpretation, I hope someone can identify and correct the specific error in logic or understanding in Interpretation 1. Answer: Physically, there is no difference between measuring the left qubit in the state $|0\rangle$ or $|1\rangle$. As stated in a previous answer, if the left qubit is measured in $|1\rangle$ the phase $e^{i\phi}$ becomes a global phase to the state $|u\rangle$, so you cannot measure it. Regarding your interpretations, I think an easier way to interpret what's happening is to consider the state $$|0⟩|u⟩+e^{iϕ}|1⟩|u⟩ \qquad (1)$$ in the computational basis. Let's suppose that the state $|u\rangle$ can be written as: $$|u\rangle = u_0 |0\rangle + u_1 |1\rangle $$ Then the state in (1) becomes: $$ |0⟩(u_0 |0\rangle + u_1 |1\rangle) + e^{iϕ}|1⟩(u_0 |0\rangle + u_1 |1\rangle) $$ $$ = u_0 |00\rangle + u_1 |01\rangle + e^{iϕ}u_0 |10\rangle + e^{iϕ} u_1 |11\rangle $$ Using this form you can see that the states $|10\rangle$ and $|11\rangle$ are the ones being 'changed' with a phase. You can find more information here: https://qiskit.org/textbook/ch-gates/phase-kickback.html A related question is: Why does the "Phase Kickback" mechanism work in the Quantum phase estimation algorithm?
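That the two forms are literally the same state vector (so no later measurement can tell them apart) is easy to check numerically; here is a small sketch with arbitrary example values for $\phi$ and $|u\rangle$:

```python
import numpy as np

phi = 0.7                      # arbitrary example phase
u = np.array([0.6, 0.8])       # arbitrary example eigenstate |u>
zero, one = np.eye(2)          # computational basis states |0>, |1>

# Form 1: |0>|u> + |1>(e^{i phi}|u>)
form1 = np.kron(zero, u) + np.kron(one, np.exp(1j * phi) * u)
# Form 2: (|0> + e^{i phi}|1>)|u>
form2 = np.kron(zero + np.exp(1j * phi) * one, u)

print(np.allclose(form1, form2))  # True
```

Since the two "interpretations" describe one and the same vector, nothing downstream can distinguish them — which is exactly the flaw in Interpretation 1.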
{ "domain": "quantumcomputing.stackexchange", "id": 3831, "tags": "phase-kickback" }
k-means classifies one data point as a group
Question: I have 1000 sets of one dimensional data (360 each in length), and I want k means to classify what is a small/medium/large value (n_clusters=3) for each set of data, but I'm getting a lot of instances where the large group only has 1 data point because that value is so far away from the rest, but the rest look like they can clearly create 3 clusters. In some other cases, it does seem to make sense to use 1 data point as the large group since the rest are so close together. It's not clear if there can be 3 distinctive clusters. What would be an efficient way to deal with this? Answer: Two ideas come to mind, which could be combined or not. Try to identify the single point as an outlier, and remove it from consideration for the clustering. Allow $k$ to vary a little. Using both and allowing $k\in\{2,3\}$ allows you to find only two groups in the main set of points, plus the outlier. Using just (2) with $k\in\{3,4\}$ could find clusters Low/Med/Large/Outlier...that has the nicety that outlier detection is done by the k-means algorithm rather than another preprocessing step, but runs the risk of finding four honest clusters when you only wanted three.
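Both ideas can be combined in a short sketch; the hand-rolled 1-D k-means, the z-score cutoff, and the example cluster layout below are all illustrative choices, not prescriptions:

```python
import numpy as np

def kmeans_1d(x, k, iters=100):
    # quantile-based seeding keeps the init deterministic and well spread
    centers = np.quantile(x, np.linspace(0, 1, 2 * k + 1)[1::2])
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new = np.array([x[labels == j].mean() if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

def small_med_large(x, z_cut=3.0):
    # idea (1): set aside far-out points before clustering
    z = np.abs((x - x.mean()) / x.std())
    keep = z < z_cut
    labels, centers = kmeans_1d(x[keep], k=3)
    return labels, centers, np.flatnonzero(~keep)
```

With the far-out point set aside first, the "large" cluster is no longer forced to contain a single extreme value; sweeping $k$ over $\{2,3\}$ (idea 2) is then a one-line loop over `kmeans_1d`.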
{ "domain": "datascience.stackexchange", "id": 5217, "tags": "machine-learning, clustering, k-means" }
response of a server in python
Question: Hi, everybody, I have a simple question. If I have more than 2 responses in the srv file. How Can I use them in python code. How to use the servicenameResponse() function? Cheers, Gauss Originally posted by Gauss Lee on ROS Answers with karma: 13 on 2011-07-10 Post score: 0 Original comments Comment by dornhege on 2011-07-10: Can you give an example what you mean with "more than 2 responses". A service consists of 1 request and 1 response per srv file. Answer: Posting your actual .srv file would help, but I think you just have a bit of terminology issues. If you have two lines in your response section of the service definition, you would have two fields in the response message. For instance, if servicename.srv is: int8 someRequestStuff --- int8 firstThingToReturn int8 secondThingToReturn In this case, you could: resp = servicenameResponse() resp.firstThingToReturn = 1 resp.secondThingToReturn = 2 return resp Originally posted by fergs with karma: 13902 on 2011-07-10 This answer was ACCEPTED on the original site Post score: 6
{ "domain": "robotics.stackexchange", "id": 6096, "tags": "ros, rospy, service" }
How to calculate the average fidelity of an amplitude damping channel
Question: An answer to this question shows how to calculate the average fidelity of a depolarizing channel. How would one go about calculating this for an amplitude dampening channel? I tried working out the math myself but had no luck. The tricks used in the previous answer can't be applied in this new scenario it seems... Answer: An elementary method is to simply carry out the integration $$ \begin{align} \overline{F} &= \int\langle\psi|\mathcal{N_\gamma}(|\psi\rangle\langle\psi|)|\psi\rangle d\psi\\ &=\int\langle\psi|K_0|\psi\rangle\langle\psi|K_0^\dagger|\psi\rangle + \langle\psi|K_1|\psi\rangle\langle\psi|K_1^\dagger|\psi\rangle d\psi\\ & =\frac{1}{4\pi}\int_0^\pi\int_0^{2\pi}\left|\begin{pmatrix}\cos\frac{\theta}{2}&e^{-i\phi}\sin\frac{\theta}{2}\end{pmatrix}\begin{pmatrix}1 & 0 \\0 & \sqrt{1 - \gamma}\end{pmatrix}\begin{pmatrix}\cos\frac{\theta}{2}\\e^{i\phi}\sin\frac{\theta}{2}\end{pmatrix}\right|^2\sin\theta \\ & + \left|\begin{pmatrix}\cos\frac{\theta}{2}&e^{-i\phi}\sin\frac{\theta}{2}\end{pmatrix}\begin{pmatrix}0 & \sqrt{\gamma} \\0 & 0\end{pmatrix}\begin{pmatrix}\cos\frac{\theta}{2}\\e^{i\phi}\sin\frac{\theta}{2}\end{pmatrix}\right|^2\sin\theta d\phi d\theta \\ &=\frac{1}{4\pi}\int_0^\pi\int_0^{2\pi}\left|\cos^2\frac{\theta}{2}+\sqrt{1-\gamma}\sin^2\frac{\theta}{2}\right|^2\sin\theta + \left|\sqrt{\gamma}e^{i\phi}\sin\frac{\theta}{2}\cos\frac{\theta}{2}\right|^2\sin\theta d\phi d\theta \\ &=\frac{1}{2}\int_0^\pi\left(\cos^4\frac{\theta}{2}+(1-\gamma)\sin^4\frac{\theta}{2}+\frac{\sqrt{1-\gamma}}{2}\sin^2\theta + \frac{\gamma}{4}\sin^2\theta\right)\sin\theta d\theta \\ &=\frac{1}{2}\int_0^\pi\sin\theta\cos^4\frac{\theta}{2}+(1-\gamma)\sin\theta\sin^4\frac{\theta}{2}+\frac{\gamma+2\sqrt{1-\gamma}}{4}\sin^3\theta d\theta \\ &=\frac{1}{2}\left(\frac{2}{3} + (1-\gamma)\frac{2}{3} + \frac{\gamma+2\sqrt{1-\gamma}}{4}\frac{4}{3}\right) \\ &=\frac{1}{2}\left(\frac{4}{3} - \frac{\gamma}{3} + \frac{2\sqrt{1-\gamma}}{3}\right) \\ &=\frac{2}{3}-\frac{\gamma}{6} + 
\frac{\sqrt{1-\gamma}}{3}. \end{align} $$ A computationally easier, but conceptually more sophisticated approach is based on the fact that the eigenstates of the Pauli operators, i.e. $S=\{|0\rangle, |1\rangle, |+\rangle, |-\rangle, |{+i}\rangle, |{-i}\rangle\}$ form a spherical $2$-design and thus averaging any expression of the form $\langle\psi|A|\psi\rangle\langle\psi|B|\psi\rangle$ over the six states gives the same result as averaging it over the Haar measure (see e.g. this paper). Therefore, $$ \begin{align} \overline{F} &= \int\langle\psi|\mathcal{N_\gamma}(|\psi\rangle\langle\psi|)|\psi\rangle d\psi \\ &=\frac{1}{|S|}\sum_{\psi\in S}\langle\psi|\mathcal{N_\gamma}(|\psi\rangle\langle\psi|)|\psi\rangle \\ &=\frac{1}{6}\left[1 + 1 - \gamma + 4 \cdot \left(\frac{1}{2} + \frac{\sqrt{1-\gamma}}{2}\right)\right] \\ &= \frac{2}{3} - \frac{\gamma}{6} + \frac{\sqrt{1-\gamma}}{3} \end{align} $$ where individual fidelities $$ \begin{align} \langle 0|\mathcal{N_\gamma}(|0\rangle\langle 0|)|0\rangle &= 1 \\ \langle 1|\mathcal{N_\gamma}(|1\rangle\langle 1|)|1\rangle &= 1 - \gamma \\ \langle +|\mathcal{N_\gamma}(|+\rangle\langle +|)|+\rangle &= \frac{1}{2} + \frac{\sqrt{1-\gamma}}{2} \\ \langle -|\mathcal{N_\gamma}(|-\rangle\langle -|)|-\rangle &= \frac{1}{2} + \frac{\sqrt{1-\gamma}}{2} \\ \langle {+i}|\mathcal{N_\gamma}(|{+i}\rangle\langle {+i}|)|{+i}\rangle &= \frac{1}{2} + \frac{\sqrt{1-\gamma}}{2} \\ \langle {-i}|\mathcal{N_\gamma}(|{-i}\rangle\langle {-i}|)|{-i}\rangle &= \frac{1}{2} + \frac{\sqrt{1-\gamma}}{2} \\ \end{align} $$ are easily computed using $$ \mathcal{N_\gamma}\left(\begin{pmatrix}a & b \\ c & d\end{pmatrix}\right) = \begin{pmatrix} a+d\gamma & b\sqrt{1-\gamma} \\ c\sqrt{1-\gamma} & d(1-\gamma) \end{pmatrix}. $$
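The six-state (2-design) average is easy to verify numerically against the closed form; a quick sketch (the function name is just for illustration):

```python
import numpy as np

def avg_fidelity_ad(gamma):
    """Average fidelity of amplitude damping via the six Pauli eigenstates."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
    s = 1 / np.sqrt(2)
    states = [np.array([1, 0]), np.array([0, 1]),
              s * np.array([1, 1]), s * np.array([1, -1]),
              s * np.array([1, 1j]), s * np.array([1, -1j])]
    fids = []
    for psi in states:
        rho = np.outer(psi, psi.conj())
        out = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T
        fids.append(np.real(psi.conj() @ out @ psi))
    return np.mean(fids)

gamma = 0.3
closed_form = 2 / 3 - gamma / 6 + np.sqrt(1 - gamma) / 3
print(np.isclose(avg_fidelity_ad(gamma), closed_form))  # True
```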
{ "domain": "quantumcomputing.stackexchange", "id": 2438, "tags": "quantum-state, quantum-operation, fidelity" }
How to avoid planner jumping between 2 possible path
Question: Hi everyone, recently I'm using Navigation2 release 1.1.0 with Galactic built from source on Ubuntu 18.04. I'm using the Smac 2D planner and TEB controller in my navigation stack, both running at 10 Hz. The localization is provided by another node, all running on a real robot. To describe my question, here is the video. Youtube You can see there are 2 symmetric paths generated by the global planner (both Navfn and Smac 2D). This causes the robot to get stuck at the intersection. How can I be sure that it will keep the path as long as there is no blocked space? If there is any missing information please let me know. Really want some help on it. Thanks! Originally posted by stu00608 on ROS Answers with karma: 1 on 2022-06-14 Post score: 0 Answer: Replan at a reduced rate or check out one of the other behavior trees Nav2 provides (or create your own!). But the specific problem of path oscillation was the motivation behind this particular behavior tree: https://github.com/ros-planning/navigation2/blob/main/nav2_bt_navigator/behavior_trees/nav_to_pose_with_consistent_replanning_and_if_path_becomes_invalid.xml which includes both time-based replanning at a reduced rate and event-based replanning due to critical collisions. Originally posted by stevemacenski with karma: 8272 on 2022-06-14 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by stu00608 on 2022-06-15: Actually I didn't modify any behavior tree node in nav2 for this task! I'll give it a try, thank you for your reply! Comment by stu00608 on 2022-07-05: Hi Steve, I've tried some features in the behavior tree, and that solves my problem! But I realized that I wrote the wrong version of the release that I'm using: it's 1.0.12. There are fewer bt nodes than in Humble. Is there any plan to backport those functionalities back to Galactic? Comment by stevemacenski on 2022-07-06: Some are able to be, but not all. Additionally, Galactic is EOL in 4 months, so it's nearly out of support anyhow. 
Comment by stu00608 on 2022-07-07: Then I think I should consider upgrading the distro, thanks for your response!
{ "domain": "robotics.stackexchange", "id": 37766, "tags": "ros, ros2, navfn, global-planner" }
rtabmap odometry source
Question: I have been experimenting with rtabmap_ros lately, and really like the results I am getting. Awesome work Mathieu et al.! First, let me describe my current setup: Setup ROS Indigo/Ubuntu 14.01 rtabmap from apt-binary (ros-indigo-rtab 0.8.0-0) Custom robot with two tracks (i.e. non-holonomic) Custom base-controller node which provides odometry from wheel encoders (tf as /odom-->/base_frame as well as nav_msgs/Odometry messages) Kinect2 providing registered rgb+depth images XSens-IMU providing sensor_msgs/Imu messages (not used at the moment) Hokuyo laser scanner providing sensor_msgs/LaserScan messages Problem Description The problem I am having is the quality of the odometry from wheel encoders: while translation precision is good, precision of rotation (depending on the ground surface) is pretty bad. So far, I have been using gmapping for SLAM/localization. This has been working good, gmapping subscribes to the /odom-->/base_frame tf from the base_controller as well as laser scan messages. In my experiments, gmapping does not have any problems in indoor environments getting the yaw-estimate right. Using rtabmap's SLAM instead of gmapping works good as long as I don't perform fast rotations or drive on surfaces on which track slippage is high (i.e. odom quality from wheel encoders is poor). This results in rtabmap getting lost. To improve rtabmap performance, I would like to provide it with better odometry information. My ideas are: Use laser_scan_matcher subscribing to laser scan + imu/data + wheel_odom OR Use robot_pose_ekf subscribing to imu/data + wheel_odom OR Use robot_localization subscribing to imu/data + wheel_odom OR Use gmapping subscribing to tf + laser scans OR Use hector_mapping subscribing to laser scans Solution (5) only uses laser scans and does not work reliably enough according to my experiments. (4) works great, but is overkill for this purpose and uses too many resources. (3) and (2) only use relative sensor sources (i.e. 
do not do their own ICP matching using laser scans), but might be worth a try. (1) would be my preferred solution as it uses laser scans as an absolute reference. However, laser_scan_matcher only provides geometry_msgs/Pose2D and no nav_msgs/Odometry, which is required by rtabmap. Question @matlabbe Can you advise what would be your recommendation in my case? I have been looking at the ideas from this thread as well as the laser_scan_matcher mod listed here, but I am unsure whether rtabmap uses the pose and twist information contained in nav_msgs/Odometry, or if providing pose only would suffice. Please advise. Thank you! Originally posted by Huibuh on ROS Answers with karma: 399 on 2015-01-30 Post score: 0 Answer: Can you show which parameters are used for the rtabmap node (example launch file)? The parameter "RGBD/PoseScanMatching" can be set to true for odometry correction with laser scans like in the Robot mapping demo (which may correct the yaw). The laser_scan_matcher also seems like a good solution. RTAB-Map doesn't use twist information, you could just convert the geometry_msgs/Pose2D into a nav_msgs/Odometry message by filling the "pose" field (you can set the covariance matrix to null) as well as the header with corresponding TF frames and timestamp. Originally posted by matlabbe with karma: 6409 on 2015-01-30 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Huibuh on 2015-02-05: I checked my setup, "RGBD/PoseScanMatching" was already set to true during my experiments. I am now in the process of writing a node that converts geometry_msgs/Pose2D into nav_msgs/Odometry to check out how laser_scan_matcher performs as an odometry source for rtabmap. I will report back here. Comment by Huibuh on 2015-03-10: I got it to work, but laser_scan_matcher fails to localize correctly in my test environment (e.g. at the end of long halls). Ideally, I would like to use gmapping as the odometry source (it localizes flawlessly). 
However, gmapping publishes its own map->odom transform, which interferes with rtabmap...ideas? Comment by matlabbe on 2015-03-10: You could disable loop closure detection and set "publish_tf" to false in rtabmap, but the whole map would not be corrected when gmapping closes a loop. I did some updates to PoseScanMatching recently, you may want to try to compile rtabmap from source and see if it could work better. Comment by Huibuh on 2015-03-12: I have compiled the latest rtabmap 0.8.6 from source. The localisation performance has improved a great deal, I can now for the first time map the test environment without getting lost every time. Good work!
{ "domain": "robotics.stackexchange", "id": 20735, "tags": "slam, navigation, odometry, robot-pose-ekf, robot-localization" }
Resource: Star map which show current actual positions versus current observable positions?
Question: Light travelling from stars and galaxies takes some time to reach us here on Earth - when we observe stars or galaxies in the night sky, we see their positions as they were when the light left on its journey towards us, and not their current actual positions where they are located now. Now the visible stars are located at a range of distances from us - from Proxima Centauri at 4.24 light years away, to V762 Cas in Cassiopeia at 16,308 light-years away. So when we look up, we see Proxima Centauri at the position it was located 4.24 years ago, and in the same view of the night sky, we would see V762 Cas at the position it was located 16,308 years ago. So not only are we looking into the past, but more than that: we are not looking at a snapshot of a single moment somewhere in the past, but a composite view of a range of past times, stretching back some 16 thousand years. So here is my question: Does anyone know of any resource that: shows the positions of stars as we would look up to see them in their current visible positions and then allows a "play forward the motions of the individual stars", over the time it took for the light from each one to reach us to show where they are located in their current actual positions I've done quite a bit of searching, but all I can find are maps which show the night sky: with a view of the stars, in their visible positions, as we would see them all currently with a view of the stars, in their visible positions, as we would see them all at some date in the past but nothing that would move each star individually according to: the motion of the star's orbit, relative to our position as observers on Earth, as we orbit the Sun and as our solar system orbits the Milky Way across the elapsed time it took for the light to leave the star and journey to reach us EDIT: To add some clarity. A computer simulation is what I am ideally looking for. 
Preferably one that would do some sort of "time-based animation" to show the relative movements of the individual stars and the final positions of where they are currently actually located. Answer: As you are talking about a "Star map" and "current visible positions", I'll assume you are talking about a star map of the ${\sim} 5000$ stars visible to the naked eye. Most of those stars are within 1000 light years of the Earth. They have typical velocity dispersions with respect to the Earth of ${\sim} 10$ km/s, with the occasional rare star with a velocity of ${\sim} 100$ km/s. This translates into proper motion on the sky of milli-arcseconds to a few arcseconds per year. Here is a plot of proper motion versus distance taken from the second version of the Hipparcos catalogue by van Leeuwen (2007). I selected 4022 stars with magnitudes brighter than 6, and with uncertainties in their proper motion of less than 1 milli-arcsecond/year and uncertainty in parallax of less than 20% (to ensure reasonably accurate distances and tangential motions). To estimate the size of the effect you are talking about, we need to multiply the proper motion by the distance (in light years) to work out how far the stars have moved whilst the light has travelled towards us. The plot below shows by how much a star's position would have shifted from where it would be conventionally plotted in a star chart as a function of stellar magnitude (brightest stars on the left). Now you could plot a star chart based on these numbers, but the typical deviations are less than 100 arcseconds, which is approximately the same angular resolution as the human eye. So in a revised star chart based on these numbers, which shows where the stars are now, there would be no perceptible difference in the appearance of the constellations. I have highlighted one outlier in the plot. This is Arcturus, a bright red giant, in the constellation of Bootes (easily found in the night sky). 
It is at a distance of 37 light years, and in 37 years it moves 83 arcseconds in the night sky. An issue that does come into play, and I'm not sure what the "rules" of the question are, is the precession of the Earth. This has the effect of changing the positions of stars with respect to our coordinate system (though not with each other). The equinox precesses at around 50 arcseconds per year, so if star maps are plotted on a fixed RA and Dec grid then this is a massive effect compared to what I have described above. This could be calculated, but it feels like cheating because it isn't reflective of any true change in the positions of the stars, just their positions with respect to our coordinate system.
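The estimate in the answer — proper motion multiplied by light-travel time — is a one-liner; the proper-motion figure used below is an approximate catalogue value:

```python
def travel_time_shift_arcsec(pm_mas_per_yr, distance_ly):
    """Sky shift accumulated while the star's light travelled to us."""
    light_travel_time_yr = distance_ly  # by definition of the light year
    return pm_mas_per_yr / 1000.0 * light_travel_time_yr

# Arcturus: total proper motion roughly 2280 mas/yr, distance roughly 37 ly
print(travel_time_shift_arcsec(2280, 37))  # ~84 arcsec, the outlier in the plot
```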
{ "domain": "physics.stackexchange", "id": 32222, "tags": "soft-question, astronomy, resource-recommendations, stars" }
how should I debug using gdb? where to see the results?
Question: It might be very trivial but I have problem with gdb debugging. I don't know how should I see xterm window when I launch the nodes using gdb. Originally posted by b.slmn on ROS Answers with karma: 3 on 2018-09-16 Post score: 0 Original comments Comment by ahendrix on 2018-09-16: There's not enough detail here for us to help. Please edit your question to add the commands that you're running, and the output that you do see. If you're using a launch file please include that launch file too. Answer: In case you haven't, take a look at How to Roslaunch Nodes in Valgrind or GDB. Especially the run your node in gdb in a new [..] window variants of the prefixes, such as: launch-prefix="xterm -e gdb --args": run your node in a gdb in a separate xterm window, manually type run to start it launch-prefix="gdb -ex run --args": run your node in gdb in the same xterm as your launch without having to type run to start it I don't know how should I see xterm window when I launch the nodes using gdb. If you already tried the two launch-prefixes I list above, but still can't find the window, make sure it is not being spawned behind the terminal you use to roslaunch everything in. That has happened to me some times and it can make for a confusing 5 minutes. Originally posted by gvdhoorn with karma: 86574 on 2018-09-17 This answer was ACCEPTED on the original site Post score: 2
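For reference, a minimal launch file using the first prefix might look like this (the package and node names here are placeholders, not from the original question):

```xml
<launch>
  <!-- opens gdb in its own xterm; type "run" there to start the node -->
  <node pkg="my_pkg" type="my_node" name="my_node" output="screen"
        launch-prefix="xterm -e gdb --args" />
</launch>
```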
{ "domain": "robotics.stackexchange", "id": 31778, "tags": "ros, gdb, ros-kinetic" }
What's the reaction here?
Question: I have found a gif photo on the internet and I am very curious what is happening there? What reaction is the reason for that? Answer: $\ce{KI}$ with $\ce{H2O2}$ with a tiny bit of dishwashing liquid. It is a rather violent redox reaction wherein $\ce{H2O2}$ is decomposed to $\ce{H2O}$ and $\ce{O2}$. You can actually see that $\ce{I2}$ is being formed by the brownish-yellowish color in the beginning of the reaction. There is a lot of information about this reaction. I performed this reaction quite a few times myself for my students. It is a lot of fun. Some people call it making elephant toothpaste. :-)
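For completeness, the commonly cited iodide-catalysed mechanism is a two-step cycle (a standard textbook description, not something visible in the gif itself): $$\ce{H2O2 + I- -> H2O + OI-}$$ $$\ce{H2O2 + OI- -> H2O + O2 + I-}$$ Adding the two steps gives the net decomposition $\ce{2 H2O2 -> 2 H2O + O2}$ with the iodide regenerated, which is why a small amount of $\ce{KI}$ suffices; the dishwashing liquid simply traps the released $\ce{O2}$ as foam.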
{ "domain": "chemistry.stackexchange", "id": 4802, "tags": "experimental-chemistry, teaching-lab" }
Quantum double slit experiment:$ N$ particles 1 experiment Vs. 1 particle $N$ experiments
Question: The title basically sums up the question. We know that if I shoot $N$ particles through a double slit then as $N$ gets large I see an interference pattern. Now if I take $N$ experiments and shoot one particle for each experiment and superimpose the outcomes, do I still see an interference pattern, or does the setup need to "warm up" over some early particles? Answer: Theoretically, yes, the interference pattern should emerge if you shoot one particle in each of $N$ experiments and then merge the outcomes. However, it is not practically feasible to conduct thousands or millions of experiments using one particle each, so this cannot be experimentally verified. What can be done for reasonable experimental proof is to conduct (say) 100 experiments (with identical setups) with a few hundred/thousand/million particles in each and then superimpose the outcomes. The QM community is convinced to an extent that is beyond any level of scrutiny, and would not see any point in conducting such an experiment. So, this is not likely to happen.
{ "domain": "physics.stackexchange", "id": 39163, "tags": "quantum-mechanics, double-slit-experiment" }
Optimize Recursive Fetching of information?
Question: I wrote a small class that demonstrates my problem: class Root { private Leaf RootLeaf; private Dictionary<object, Leaf> AllItems = new Dictionary<object, Leaf>(); //dictionary contains a reference to each item, and the leaf it is assigned to (its parent) public Root() { RootLeaf = new Leaf(); } public List<object> GetAllChildren { get { return RootLeaf.getAllChildren; } } public void Add(object o) { Leaf AddedTo = RootLeaf.Add(o); AllItems.Add(o, AddedTo); //Add object reference to AllItems dictionary } private IEnumerable<object> GetNeighbors(object obj) { foreach (object o in this.AllItems[obj].getAllChildren) { if (obj != o) yield return o; } } public void Update() { foreach (KeyValuePair<object, Leaf> i in AllItems) { foreach (object b in this.GetNeighbors(i.Key)) { //Would do collision checks here } } } } class Leaf { private const int MaxChildCount = 1; private List<object> Children = new List<object>(); private Leaf[] ChildLeaves = null; private bool _LeavesGenerated = false; //Have the leaves been created? (Low Level, do not touch) private bool HasLeaves = false; //Should we use the leaves? 
protected void CreateLeaves() { if (!_LeavesGenerated) { //allocate the array (it was declared null), then create each of the four leaves ChildLeaves = new Leaf[4]; for (int i = 0; i < 4; i++) ChildLeaves[i] = new Leaf(); _LeavesGenerated = true; } HasLeaves = true; } protected void RemoveLeaves() { HasLeaves = false; } /// <summary> /// Returns children of this leaf, and all of its subleaves /// </summary> public List<object> getAllChildren { get { List<object> outp = Children.ToList(); if (HasLeaves) { foreach (Leaf l in ChildLeaves) outp.AddRange(l.getAllChildren); } return outp; } } /// <summary> /// Get count of all children in this leaf, and its subleaves /// </summary> public int getChildCount { get { int outp = Children.Count; if (HasLeaves) { foreach (Leaf l in ChildLeaves) outp += l.getChildCount; } return outp; } } static Random rand = new Random(); /// <summary> /// /// </summary> /// <param name="o">The object to be added</param> /// <returns>The leaf the object was added to</returns> public Leaf Add(object o) { if (Children.Count >= MaxChildCount) { //Pick random subleaf, I know this isn't correct for a quadtree, but it will simplify this explanation code if (!HasLeaves) CreateLeaves(); return ChildLeaves[rand.Next(0, 3)].Add(o); } else { Children.Add(o); return this; } } } I ran this through the ANTS profiler, and it not surprisingly pulled the Leaf.getAllChildren and Leaf.getChildCount as the two most expensive operations. I'm ok with how everything else is laid out. My question is: how could I optimize the two aforementioned properties? The properties are called when the Root.Update() function is run. Answer: All right, I've done a bit of work here (a good deal of it stylistic, to be sure). Do note I replaced object with a generic implementation to help avoid boxing of value types. It does also employ LINQ in a couple of instances to remove some complexity. 
When I run it and generate 1,000,000 integer leaves, I find that the bulk of the time is taken in the class constructors and the methods themselves are close enough to 0 time to be considered 0. Hope this helps: internal sealed class Root<T> { private readonly Leaf<T> rootLeaf = new Leaf<T>(); // dictionary contains a reference to each item, and the leaf it is assigned to (its parent) private readonly IDictionary<T, Leaf<T>> allItems = new Dictionary<T, Leaf<T>>(); /// <summary> /// Gets GetAllChildren. /// </summary> public IList<T> GetAllChildren { get { return this.rootLeaf.GetAllChildren; } } public void Add(T o) { var addedTo = this.rootLeaf.Add(o); this.allItems.Add(o, addedTo); // Add object reference to AllItems dictionary } public void Update() { foreach (var i in from i in this.allItems from b in this.GetNeighbors(i.Key) select i) { // Would do collision checks here } } private IEnumerable<T> GetNeighbors(T obj) { return this.allItems[obj].GetAllChildren.Where(o => !ReferenceEquals(obj, o)); } } internal class Leaf<T> { private const int MaxChildCount = 1; private static readonly Random rand = new Random(); private readonly IList<T> children = new List<T>(); private List<Leaf<T>> childLeaves; private bool hasLeaves; // Should we use the leaves? private bool leavesGenerated; // Have the leaves been created? 
(Low Level, do not touch) /// <summary> /// Returns children of this leaf, and all of its subleaves /// </summary> public IList<T> GetAllChildren { get { var allChildren = this.children.ToList(); if (this.hasLeaves) { this.childLeaves.ToList().ForEach(l => allChildren.AddRange(l.GetAllChildren)); } return allChildren; } } /// <summary> /// Get count of all children in this leaf, and its subleaves /// </summary> public int GetChildCount { get { var allChildrenCount = this.children.Count; if (this.hasLeaves) { allChildrenCount += this.childLeaves.Sum(l => l.GetChildCount); } return allChildrenCount; } } /// <summary> /// /// </summary> /// <param name="o">The object to be added</param> /// <returns>The leaf the object was added to</returns> public Leaf<T> Add(T o) { if (this.children.Count < MaxChildCount) { this.children.Add(o); return this; } // Pick random subleaf, I know this isn't correct for a quadtree, but it will simplify this explanation code if (!this.hasLeaves) { this.CreateLeaves(); } return this.childLeaves[rand.Next(0, 3)].Add(o); } protected void CreateLeaves() { if (!this.leavesGenerated) { // create each of the four leaves this.childLeaves = new List<Leaf<T>> { new Leaf<T>(), new Leaf<T>(), new Leaf<T>(), new Leaf<T>() }; this.leavesGenerated = true; } this.hasLeaves = true; } protected void RemoveLeaves() { this.hasLeaves = false; } }
{ "domain": "codereview.stackexchange", "id": 891, "tags": "c#, recursion" }
Finding the diameter of a material using tension stiffness and Young's Modulus
Question: This question is similar to a question I am stuck on, but I have changed it to get an understanding of how it works. A spring is being made by pulling a 12 cm long cylinder of material in tension with a desired stiffness of 6200 kN/m. It has a density of 2.9 g/cm^3, an ultimate tensile strength of 375 MPa and a Young's modulus of 70 GPa. What should the diameter (in mm) be for the metal "spring"? So, my question is how can I go about solving this question? I have looked at the Young's modulus and Hooke's Law formulas, but keep getting stuck with material displacement. Do I need to know the amount of length the object changes to work out this solution, or am I on the wrong track? Can I assume: $\Delta L = 1$ if I use the following formula to find the $A_0$, which I can then calculate the diameter from: $$ F = \frac{E A_0 \Delta L}{L_0} $$ Any help will be most appreciated. Answer: The axial stiffness of an isotropic material with a uniform cross-section is a fundamental part of most engineering mechanics of materials concepts. We can derive it as follows: The force-displacement relation of a spring is described by Hooke's Law, $$F = k \Delta L$$ where $F$ is the force exerted on the spring, $\Delta L$ is the change in length or displacement, and $k$ is the stiffness or spring constant. We can rearrange to get an expression of the stiffness $$k = \frac{F}{\Delta L}$$ Now we want to express $k$ exclusively in terms of the geometry and material properties. For that, we need to make use of the engineering stress $\sigma = F/A_0$ where $A_0$ is the cross-sectional area of the material before deformation. We can rearrange and substitute this back into the previous equation $$k = \frac{\sigma A_0}{\Delta L}$$ We still need to get rid of $\sigma$, and to do that we can use $\sigma = E \epsilon$ where $E$ is the Young's modulus of the material and $\epsilon$ is the engineering strain. 
Substituting that and using the definition of engineering strain $\epsilon = \Delta L/L_0$, where $L_0$ is the length of the material before deformation, we get $$ k = \frac{E(\Delta L/L_0)A_0}{\Delta L}$$ $$ k = \frac{E A_0}{L_0}$$ Note that $\Delta L$ cancels, so we don't need to know the change in length. Now if you want to figure out the necessary dimensions for a block of material to have a given stiffness, just substitute the appropriate equation for $A_0$ and rearrange. For example, a cylinder has cross-sectional area $A_0 = \frac{\pi}{4} D_0^2$, sub that in and you get $$ D_0 = \sqrt{\frac{4 k L_0}{\pi E}}$$
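Plugging the question's numbers into the final formula (the only care needed is the unit conversions):

```python
import math

k = 6200e3   # stiffness: 6200 kN/m -> N/m
L0 = 0.12    # length: 12 cm -> m
E = 70e9     # Young's modulus: 70 GPa -> Pa

D0 = math.sqrt(4 * k * L0 / (math.pi * E))  # diameter in metres
print(round(D0 * 1000, 2))  # 3.68 (mm)
```

Note that the $\Delta L = 1$ assumption from the question never enters: as the derivation shows, $\Delta L$ cancels.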
{ "domain": "engineering.stackexchange", "id": 1352, "tags": "springs" }
Can somebody explain how you get the second line from the first line in the picture?
Question: I'm trying to understand the transition from the first line of the Lagrangian to the second. We substitute for $\eta$, but how is the multiplication happening here? If I multiply the terms into the matrix elements, won't I get a matrix whose elements are the terms to the right of the matrix here? Answer: Since $\eta^{\mu \nu}$ is non-zero for diagonal elements only, we only need to sum the terms with $\mu = \nu$. Therefore, we have \begin{align} &\qquad \frac{1}{2} \eta^{\mu \nu} \; \partial_{\mu}\phi \; \partial_{\nu} \phi \\ &= \frac{1}{2} \Bigg[ \eta^{00} \partial_{0}\phi \; \partial_{0} \phi \; + \; \eta^{11} \partial_{1}\phi \; \partial_{1} \phi \; + \; \eta^{22} \partial_{2}\phi \; \partial_{2} \phi \; + \; \eta^{33} \partial_{3}\phi \; \partial_{3} \phi \Bigg]\\ &= \frac{1}{2} \Bigg[ (+1)\partial_{t}\phi \; \partial_{t} \phi \; + \; (-1) \partial_{x}\phi \; \partial_{x} \phi \; + \; (-1) \partial_{y}\phi \; \partial_{y} \phi \; + \; (-1) \partial_{z}\phi \; \partial_{z} \phi \Bigg] \\ &= \frac{1}{2} \dot{\phi}^{\, 2} - \frac{1}{2}(\nabla\phi)^2 \end{align}
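The contraction can also be verified numerically: fill in arbitrary values for the four derivatives, perform the full double sum over $\mu$ and $\nu$, and compare with the final expression. A small Python sketch (the sample derivative values are arbitrary placeholders):

```python
# Metric eta^{mu nu} = diag(+1, -1, -1, -1), the same convention as above.
eta = [[1, 0, 0, 0],
       [0, -1, 0, 0],
       [0, 0, -1, 0],
       [0, 0, 0, -1]]

# Arbitrary sample values for (d_t phi, d_x phi, d_y phi, d_z phi).
dphi = [0.7, -1.3, 2.1, 0.4]

# Full double sum over mu, nu: only the diagonal terms survive.
lagrangian = 0.5 * sum(eta[mu][nu] * dphi[mu] * dphi[nu]
                       for mu in range(4) for nu in range(4))

# Expected result: 1/2 phi_dot^2 - 1/2 (grad phi)^2
expected = 0.5 * dphi[0]**2 - 0.5 * (dphi[1]**2 + dphi[2]**2 + dphi[3]**2)
assert abs(lagrangian - expected) < 1e-12
```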
{ "domain": "physics.stackexchange", "id": 51261, "tags": "quantum-field-theory, klein-gordon-equation" }
How to apply the Bloch-Floquet theorem for a square lattice in a magnetic field?
Question: Generalizing the question here, if I have a square lattice in a homogeneous magnetic field $B$ as in the given picture, how can we apply the Bloch-Floquet theorem to this periodic structure (with a unit cell which contains $N$ vertices)? As we know from the Bloch-Floquet theorem, the free ends of a unit cell are related by a phase factor; in this case, to which free ends of the cell does this rule apply? Answer: The Hamiltonian of your system can be written as $$H = t\left[\sum_{n,m} |n+1,m\rangle\langle n,m| + e^{inaB}|n,m+1\rangle\langle n,m| + h.c.\right]$$ Defining the unitary lattice translation operators $T_x = \sum_n |n+1,m\rangle\langle n,m|$ and $T_y = \sum_m |n,m+1\rangle\langle n,m|$, it's easy to show that $T_y HT_y^\dagger= H \iff [T_y,H]=0$. On the other hand, $$T_xHT_x^\dagger = t \left[\sum_{n,m}|n+1,m\rangle\langle n,m| + e^{i(n-1)aB}|n,m+1\rangle\langle n,m| + h.c.\right]$$ which is generally not equal to $H$ - unless $aB= 2\pi k$ for some $k\in \mathbb Z$. More generally, given any nonzero integer $q$, $$T_x^q H T_x^{q\dagger} = H \iff aB = 2\pi p/q, p\in \mathbb Z$$ where $T_x^{q(\dagger)}=\underbrace{T_x^{(\dagger)}T_x^{(\dagger)}\ldots T_x^{(\dagger)}}_{q\text{ times}}$. To summarize, your system is generically not periodic unless $\frac{aB}{2\pi} = \frac{p}{q}\in \mathbb Q$ is a rational number (which we take to be in fully-reduced form, so $p$ and $q$ are relatively prime). In that case, the system is periodic with primitive lattice vectors $\mathbf v_x = qa\hat x$ and $\mathbf v_y = a \hat y$. To apply Bloch's theorem, we should first rewrite our Hamiltonian to explicitly encode our unit cell. In the above, $|n,m\rangle$ refers to the lattice site at position $(na,ma)$. We now define $|N,M\rangle\otimes|\ell\rangle$ to refer to the lattice site at position $\ell$ in the unit cell whose left-most site is at position $(Nqa,Ma)$.
In other words, $|N,M\rangle\otimes|\ell\rangle$ refers to the lattice site at position $\big((Nq+\ell)a,Ma\big)$. From here, our Hamiltonian becomes $$H = t\sum_{N,M} \left[\sum_{\ell=0}^{q-2} |N,M\rangle\langle N,M|\otimes |\ell+1\rangle\langle \ell|\right.\tag{1}$$ $$ + |N+1,M\rangle\langle N,M| \otimes |0\rangle\langle q-1|\tag{2}$$ $$+ \sum_{\ell=1}^{q-1}\underbrace{e^{i(Nq+\ell)aB}}_{=e^{i\ell aB}\text{ because }qaB\in2\pi\mathbb Z}|N,M+1\rangle\langle N,M| \otimes |\ell\rangle\langle \ell| + h.c.\bigg]\tag{3}$$ The sum in $(1)$ refers to the left-right hopping within a given unit cell. The second term $(2)$ refers to the left-right hopping from one unit cell to the adjacent cell. Finally, the term in $(3)$ refers to vertical hopping between adjacent unit cells. Having rewritten it this way, we may perform a pseudo-Fourier transform over the unit cell degrees of freedom $|N,M\rangle$ by defining $$|N,M\rangle = \frac{1}{2\pi}\int \mathrm dk_x \mathrm dk_y \ e^{iNqak_x} e^{iMa k_y} |k_x,k_y\rangle $$ $$|k_x,k_y\rangle = \frac{1}{2\pi} \sum_{N,M} e^{-iNqak_x} e^{-iMak_y} |N,M\rangle$$ where $(k_x,k_y)\in [-\pi/qa,\pi/qa]\times [-\pi/a,\pi/a]$, where the opposite edges of this rectangular region are identified to form a torus. To see why this is necessary, simply observe that $|-\pi/qa,k_y\rangle=|\pi/qa,k_y\rangle$ and $|k_x,-\pi/a\rangle = |k_x, \pi/a\rangle$. 
Substituting this into the Hamiltonian, we obtain after some algebra $$ H = t \int \mathrm dk_x \mathrm dk_y \ |k_x,k_y\rangle\langle k_x,k_y| \otimes h_\mathbf k$$ $$ \mathbf h_k := \left(\sum_{\ell=0}^{q-2}|\ell+1\rangle\langle \ell| + |\ell\rangle\langle\ell+1|\right)\tag{4}$$ $$+e^{iqak_x}|0\rangle\langle q-1| + e^{-iqak_x} |q-1\rangle\langle 0|\tag{5}$$ $$+ \sum_{\ell=0}^{q-1}\underbrace{e^{i\ell aB}e^{iak_y} |\ell\rangle\langle \ell| + e^{-i\ell aB} e^{-iak_y} |\ell\rangle\langle \ell|}_{= 2\cos(\ell a B + a k_y)|\ell\rangle\langle \ell|}\tag{6}$$ This appears to be a horrifying mess, but in fact it's not so bad. For concreteness, let $q=3$ so $h_{\mathbf k}$ is a $3\times 3$ matrix. The terms in $h_\mathbf k$ are respectively: $$h_{\mathbf k} = \underbrace{\pmatrix{0 & 1 & 0\\1 &0 & 1 \\ 0 & 1 & 0}}_{(4)} + \underbrace{\pmatrix{0 &0 & e^{3iak_x}\\0&0&0\\e^{-3iak_x}&0&0}}_{(5)}+\underbrace{\pmatrix{2\cos(a k_y)&0&0\\0&2\cos(aB + a k_y)&0\\0&0&2\cos(2aB + a k_y)}}_{(6)}$$ $$=\pmatrix{2\cos(a k_y)&1&e^{3iak_x}\\1&2\cos( aB + a k_y)&1\\e^{-3ia k_x}&1&2\cos(2aB + a k_y)}$$ In words, the Hamiltonian has cosines on the diagonal, $1$'s on the adjacent diagonals, and $e^{\pm iqak_x}$ in the upper right and lower left corners, respectively. Further recalling that $aB = 2\pi p/q$, we may write $$=\pmatrix{2\cos(a k_y)&1&e^{3iak_x}\\1&2\cos( 2\pi p/3+ a k_y)&1\\e^{-3ia k_x}&1&2\cos(4\pi p/3 + a k_y)}$$ This matrix can subsequently be diagonalized to yield the spectrum of the Hamiltonian. As we know, from the Bloch-Floquet theorem, the free ends of a unit cell are related by a phase factor; in this case, this rule applies to which free ends of the cell? 
If I understand what you're saying, the application of the translation operators $T_x^q$ and $T_y$ to an energy eigenfunction $|\psi\rangle = |k_x,k_y\rangle\otimes u_\mathbf k$ (where $u_\mathbf k$ is an eigenfunction of $h_\mathbf k$) yields $$T_x^q|\psi\rangle = e^{iqak_x}|\psi\rangle\qquad T_y|\psi\rangle = e^{iak_y}|\psi\rangle$$
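For concreteness, the $q=3$ Bloch matrix derived above can be assembled in code and sanity-checked for Hermiticity, using only the standard library (a sketch; the values of $a$, $k_x$, $k_y$ and the flux $p/q$ are arbitrary sample inputs):

```python
import cmath
import math

def h_k(q, p, a, kx, ky):
    """Bloch matrix h_k for the square lattice at flux aB = 2*pi*p/q, as derived above."""
    B = 2 * math.pi * p / (q * a)
    h = [[0j] * q for _ in range(q)]
    for l in range(q - 1):                        # intra-cell hopping, term (4)
        h[l + 1][l] += 1
        h[l][l + 1] += 1
    h[0][q - 1] += cmath.exp(1j * q * a * kx)     # inter-cell hopping, term (5)
    h[q - 1][0] += cmath.exp(-1j * q * a * kx)
    for l in range(q):                            # vertical hopping, term (6)
        h[l][l] += 2 * math.cos(l * a * B + a * ky)
    return h

h = h_k(q=3, p=1, a=1.0, kx=0.3, ky=-0.7)

# Hermiticity check: h[i][j] must equal conj(h[j][i]).
for i in range(3):
    for j in range(3):
        assert abs(h[i][j] - h[j][i].conjugate()) < 1e-12
```

Diagonalizing `h_k` over the magnetic Brillouin zone (for instance with `numpy.linalg.eigh`) gives the $q$ magnetic subbands; sweeping the flux $p/q$ traces out the Hofstadter butterfly.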
{ "domain": "physics.stackexchange", "id": 90157, "tags": "magnetic-fields, crystals, lattice-model, bloch-sphere, floquet-theory" }
Why does energy flow downhill/entropy increase?
Question: I'm not sure if this has been asked anywhere else before, but so far my search hasn't been fruitful. I'm in my second bachelor of physics and mathematics, and currently taking a thermodynamics course. We just got introduced to the second law and entropy, but I am confused by its deeper meaning. I understand that entropy always increases, and in our course text (An Introduction to Thermal Physics by Daniel V. Schroeder) this is explained by microstates of a system. I understand that systems with more 'disorder' have a higher chance of occurring than systems with less disorder. The thing that bothers me right now is why nature obeys this statistical argument. Just because combinatorics says a certain configuration is more likely doesn't imply that nature will evolve toward that situation. My question is: why does a system tend to a configuration with more disorder? I've read on several pages that this is because 'energy flows downhill'. I can see how this solves my question, because if the energy flows downhill, then it's quite easy to see that the result is a system with energy more spread out, a system with more disorder. But then again, why does energy flow downhill? What drives the flow of energy between systems? On some internet pages, the answer to this question is 'because entropy increases', but this brings us back to the original problem. In conclusion: why does energy flow downhill, or why does entropy increase, explained on a fundamental level. I thank you in advance. Answer: You say My question is: why does a system tend to a configuration with more disorder.
Try to think of it in this way: (1) There is a macrostate of fixed energy of a system, and you can measure properties of this macrostate. (2) There are multiple microstates corresponding to this macrostate, and each of these microstates is equally probable. (3) The "ordered" states are very few compared to the "disordered" ones. Since the probability that one microstate is realized is equal to the probability that any other microstate is realized, from point (3) it follows that, because the disordered configurations vastly outnumber the ordered ones, it's far more likely that the system is in one of these disordered configurations rather than in an ordered one. As far as I understand your question, your problem is with point (2). Why is it valid? It's a hypothesis, called the hypothesis of equal a priori probability. So why should we trust this hypothesis? Because it works: after you make this hypothesis, you study its consequences and then compare them with experiment. If the experiments are consistent with it, it means that your hypothesis was a good one. Addendum after the comments: Notice that the system can even change its microstate, arriving at ordered configurations; that's not forbidden, it's just not likely. Maximum entropy doesn't mean that the system always ends up in the most disordered microstate. Entropy is, roughly speaking, the number of accessible microstates for the given system, so the bigger this number is, the bigger the entropy is.
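Point (3) is easy to make quantitative for a toy system of $N$ coins, where a microstate is a specific heads/tails sequence and a macrostate is just the total number of heads (a quick Python sketch):

```python
from math import comb

N = 100  # number of coins/spins; each microstate is one specific head/tail sequence

total_microstates = 2 ** N            # all sequences are equally probable
ordered = comb(N, 0)                  # "all tails" macrostate: exactly 1 microstate
disordered = comb(N, N // 2)          # half heads, half tails: the most "mixed" macrostate

# The mixed macrostate outnumbers the ordered one by an astronomical factor,
# so a randomly chosen microstate is overwhelmingly likely to look disordered.
assert ordered == 1
assert disordered > 1e28

print(f"P(all tails)   = {ordered / total_microstates:.3e}")
print(f"P(50/50 split) = {disordered / total_microstates:.3e}")
```

Nothing forbids the all-tails microstate; it is exactly as probable as any single 50/50 sequence. It is the macrostate counts that differ by a factor of about $10^{29}$ here.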
{ "domain": "physics.stackexchange", "id": 74361, "tags": "thermodynamics, energy, entropy" }
Giving too much energy to an exothermic reaction
Question: What will happen in an exothermic reaction if more than the required energy is given? I mean, will it produce even more energy, which would be highly dangerous, or will nothing happen, according to Le Chatelier's principle (since we are trying to drive the reaction in the opposite direction)? Answer: There are two separate effects we need to consider here: thermodynamic and kinetic. Let's assume you are only providing the energy thermally. So providing more energy means increasing the temperature (T). Thermodynamically, if a reaction is exothermic, and you increase T, the reaction becomes less favorable (assuming it stays exothermic over that temperature range) according to, as you mentioned, Le Chatelier's principle. I.e., the equilibrium shifts to the left. However, we also need to consider kinetic effects. Here, as you increase T, the rate of reaction increases, and thus the rate of thermal energy production increases, thus further increasing T, thus further increasing the rate, and so on, leading to the thermal runaway effect that Buck Thorn mentioned in his comment. I.e., we get a positive feedback loop.
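The kinetic half of the argument is just the Arrhenius law, $k = A e^{-E_a/RT}$: a modest temperature rise multiplies the rate constant severalfold, which is what closes the feedback loop. A small sketch in Python (the prefactor and activation energy are arbitrary illustrative values, not data for any particular reaction):

```python
import math

R = 8.314          # gas constant, J/(mol K)

def arrhenius(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

A = 1e10           # pre-exponential factor, 1/s (illustrative)
Ea = 80e3          # activation energy, J/mol (illustrative)

k_300 = arrhenius(A, Ea, 300.0)
k_320 = arrhenius(A, Ea, 320.0)

# A modest 20 K rise speeds the reaction up severalfold: more heat is released
# per unit time, which raises T further, i.e. the positive feedback of runaway.
assert k_320 > k_300
print(f"k(320 K)/k(300 K) = {k_320 / k_300:.1f}")
```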
{ "domain": "chemistry.stackexchange", "id": 12800, "tags": "physical-chemistry, experimental-chemistry, energy" }
How to compute modulo of a hash?
Question: Let's say that I have a set of users in my database, that have GUIDs as their IDs. I use xxhash to generate fixed-length hashes for each value, so that I can then proceed to "bucketizing" them and being able to do random sampling with the help of the modulo function. That said, if I have a hash such as 367b50760441849e, I want to be able to use hash % 20 == 0 to randomly pick 5% of the population (hence, 20 "buckets"). This is the approach that is used in Kusto hash() with a modulo argument. With this in mind, what is the approach that should be used to calculate an integer value from the hash, so that I can calculate the modulo? Answer: Any good hash will be uniformly distributed, which means that you can assume a uniform distribution when you apply modulo n, as long as $n < 2^{M/2}$, where M is the number of bits in your hash, see here. So for a 32-bit hash you would at most take the modulo by $2^{16}$. There is no special approach needed to calculate an integer value; what you have there is a hexadecimal representation of a hash, so you just need to convert it to a numeric type if you obtained it as a string. XXH32() and XXH64() both already produce an unsigned int output.
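Concretely, the whole recipe is "parse the hex digest as an integer, then take the modulo". A sketch in Python, using SHA-1 from the standard library as a stand-in for xxhash (the xxhash package can also return integer digests directly, letting you skip the string parsing):

```python
import hashlib

def bucket(user_id: str, n_buckets: int = 20) -> int:
    """Map an ID to a stable bucket by hashing and taking the modulo."""
    digest_hex = hashlib.sha1(user_id.encode()).hexdigest()
    # n_buckets = 20 is far below 2^(160/2), so modulo bias is negligible.
    return int(digest_hex, 16) % n_buckets

# Keep users whose bucket is 0: a deterministic ~5% sample of the population.
population = [f"user-{i:05d}" for i in range(20000)]
sample = [u for u in population if bucket(u) == 0]

frac = len(sample) / len(population)
assert 0.04 < frac < 0.06          # close to 1/20 for a well-mixed hash

# The sample is stable: hashing the same ID always lands in the same bucket.
assert bucket("user-00042") == bucket("user-00042")
```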
{ "domain": "datascience.stackexchange", "id": 6856, "tags": "dataset, bigdata, sampling, randomized-algorithms" }
Javascript Object deep copy one-liner
Question: My goal was to create a single-line function that produces a deep copy of a javascript object. Requirements: Executable on a single line without the use of semicolons (exception: last character of the line) Nested objects should not be references to the originals Function: let copiedObjct = (copy = (obj) => (Object.keys(obj).reduce((v, d) => Object.assign(v, {[d]: (obj[d].constructor === Object) ? copy(obj[d]) : obj[d]}), {})))(obj); Usage (expanded): let obj = { /*...*/ } let copiedObjct = (copy = (obj) => ( Object.keys(obj).reduce((v, d) => Object.assign(v, { [d]: (obj[d].constructor === Object) ? copy(obj[d]) : obj[d] }), {}) ))(obj); My questions are: Is this code efficient? Are there any reasons this shouldn't be used in the wild? Is there a way I can make it shorter? Answer: Is this code efficient? Probably... not. Efficiency can mean different things. Efficiency could mean "fast running code" or "use the least amount of memory" - this can only be determined by running the code through a profiler. Efficiency could also mean "I can use it easily" or "works reliably" or "handles edge cases well" - this can only be determined by having other people use it. Are there any reasons this shouldn't be used in the wild? Yes. Does the code really run on a variety of objects? What if I put things like Arrays, Dates, RegExps, Maps, Sets, etc. in my object? Does it handle circular references? This is a naive implementation of deep-copy which is fine in small apps and prototypes where you know what the data looks like. But in more general cases, this will probably not hold. If you'd like to see how a general-purpose deep-copy function looks like, see how jQuery.extend or lodash.assign is implemented. Is there a way I can make it shorter? Wrong question. Code size is irrelevant in JS, you have minifiers for that.
{ "domain": "codereview.stackexchange", "id": 31021, "tags": "javascript" }
Can Sparse Fourier transform be used for sparse signal in other domain
Question: I've read that the sparse fast Fourier transform can be used to compute the Fourier transform of a signal that is sparse in the frequency domain much faster than the FFT. My question is: can the SFFT be used for a signal that is not sparse in the frequency domain but is sparse in some other domain? Answer: No. What did you expect? For a sparse algorithm to work, the signal has to be sparse. If it's sparse after an integral transform, that doesn't mean it's sparse under a different transform. Also note that according to this answer, the benefit of the sparse FFT only applies to very sparse signals - a factor of at least about 2000 between the zero and non-zero bins.
{ "domain": "dsp.stackexchange", "id": 3832, "tags": "fft, sparsity" }
Interpretation of density matrix
Question: In Landau’s Statistical Physics (part 1), section 5, he writes: "In particular, it would be quite incorrect to suppose that the description by means of the density matrix signifies that the subsystem can be found in various ψ states with various probabilities and that the averaging is over these probabilities." However, to my knowledge, what Landau opposes is exactly the physical interpretation of a density matrix. What is it that I am missing? edit: I am not confusing the probabilistic property inherent to a pure quantum state with that of the mixed state. Still, I am under the impression that the density matrix is a characterization of the constitution of the mixed state; for instance, we could use a density matrix to describe an ensemble of systems made up of 70% of state A and 30% of state B (this example comes from Sakurai's Modern Quantum Mechanics (2e), page 180). But is this not what Landau calls incorrect? Could it be that Landau uses the density matrix to describe a subsystem which is quite determined in a way unknown to us (since we only have incomplete information), and in the above example the matrix is used for a closed and completely described system which is probabilistic in nature? (I am ignoring here the probabilistic property in quantum physics itself, as it is present in both cases.) Answer: I think what he means is with reference to his equation (5.1) $$\psi = \sum_n c_n \psi_n. $$ These $\psi_n$ states are in a superposition, and this superposition is not the same as saying the system has some probability of being in one state or the other, even though $|c_n|^2$ is really the probability of the system being in state $\psi_n$ (regular quantum mechanics interference effects). This is manifested in the off-diagonal terms of the density matrix. This leads us to the other part of the question. As you mentioned, the density matrix is always interpreted as giving the probability of the system being in some quantum mechanical state.
The definition Landau gave for the density matrix is $$w_{mn} = c^*_n c_m,$$ which is clearly a Hermitian matrix and so can be diagonalised, $$w_{\alpha \beta} = c_{\alpha} \delta_{\alpha \beta}. $$ Now in this new basis you can really think of the diagonal terms $c_{\alpha}$ as being a probability of the system being in state $\alpha$. It's important to notice that in this diagonal basis you cannot think of the system as being in a superposition of the $\alpha$'s, $$\psi \neq \sum_{\alpha} c_{\alpha} \psi_{\alpha}.$$ If this bothers you, I can give you another way to think about this. Instead of equation (5.1), think about the bigger state containing the system and the environment, $$\Psi =\sum_{ij} C_{ij} \theta_i \psi_j, $$ where, with Landau, the $\psi_j$'s are the wavefunctions for the system in consideration, and the $\theta_i$'s are for the environment. With this, following exactly what Landau has in the book, but for an operator $f$ that only acts on the system and leaves the environment unchanged, we can write the density matrix as $$w_{j^{\prime} j } = \sum_{i} C^*_{ij^{\prime}} C_{ij}. $$ Notice that this is also Hermitian. Now this matrix can be diagonalized in $j$ and $j^{\prime}$ with no problem.
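Both claims, Hermiticity and the special structure of the diagonalized matrix for a single superposition, can be checked directly. A pure-state density matrix is a projector ($w^2 = w$ with $\mathrm{Tr}\,w = 1$), so its eigenvalues are one 1 and the rest 0; for a genuinely mixed state (e.g. after tracing over the environment as in the second construction), several eigenvalues are nonzero and play the role of the probabilities $c_\alpha$. A small sketch in plain Python (the coefficients are arbitrary):

```python
import math

# Coefficients of a normalized superposition psi = sum_n c_n psi_n.
c = [0.6, 0.8j, 0.0]
norm = math.sqrt(sum(abs(x) ** 2 for x in c))
c = [x / norm for x in c]

# Density matrix w_{mn} = c_n^* c_m  (Landau's definition above).
w = [[c[n].conjugate() * c[m] for n in range(3)] for m in range(3)]

# Hermitian: w_{mn} = conj(w_{nm}).
for m in range(3):
    for n in range(3):
        assert abs(w[m][n] - w[n][m].conjugate()) < 1e-12

# Pure state => projector: (w^2)_{mn} = w_{mn}, and Tr(w) = 1.
w2 = [[sum(w[m][k] * w[k][n] for k in range(3)) for n in range(3)]
      for m in range(3)]
for m in range(3):
    for n in range(3):
        assert abs(w2[m][n] - w[m][n]) < 1e-12
assert abs(sum(w[m][m] for m in range(3)).real - 1) < 1e-12
```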
{ "domain": "physics.stackexchange", "id": 59828, "tags": "quantum-mechanics, statistical-mechanics, hilbert-space, density-operator" }
Can a home be designed to specifically "retain heat"?
Question: Assume you have two identical bodies at temperature $T$. One is subject to an environment characterized by a temperature of $T+\Delta T$. The other is subject to an environment characterized by a temperature of $T-\Delta T$. Let $\dot Q_{1,2}$ be the rate at which heat flows from the object to the environment in the two cases. By what mechanism, if any, would the rates of heat exchange in the two cases not have the same magnitude, i.e. $\dot Q_{1} \neq -\dot Q_2$? Suppose that in addition to the general ambient temperature of $T \pm \Delta T$, there is a separate stream of black body radiation with temperature $T_s >> T+\Delta T$ illuminating the objects under both conditions. Let $C$ represent the rate of heat flow from the object when the local ambient temperature is $T$. With this addition, is there a way for the magnitude of the rate of heat transfer to the local environment to differ? I.e. $\dot Q'_1 + \dot Q'_2 \neq 2 C$? Answer: One way to keep a house warm (from a warmer day into the night or from human activities that generate heat) is to increase thermal insulation. Nature requires the thermal conduction of passive materials to be symmetric. However, the material property of thermal conductivity depends on temperature and thus can be asymmetric for temperature excursions above and below the nominal house temperature. One can also increase the heat capacity of a house to buffer temperature swings (such as nightly cold conditions). Unfavorably, this would tend to prolong heat-wave effects inside. The heat capacity is also temperature dependent, also potentially leading to notable asymmetry. For example, the heat capacity of water during freezing (or of any material in a first-order phase transition) is infinite; freezing and accumulating layers of slush during winter can provide cooling during the summer. Conversely, the opposite strategy (storing warm water for the winter) is not effective because the sensible heat is much smaller than the latent heat.
An asymmetric way to keep a house warm is to absorb as much sunlight as possible in the visible-wavelength range and reradiate as little as possible in the infrared range (e.g., from southern-facing windows in the Northern Hemisphere, to obtain a greenhouse effect). (The analytical framework here is that the radiative emissivity $\varepsilon(\lambda)$ and transmissivity $\tau(\lambda)$ depend on the wavelength $\lambda$; bodies of different temperatures radiate different wavelengths, and transparent materials transmit different wavelengths with different efficiency.) Another asymmetric mechanism relies on apertures such as windows that can be opened and closed as desired (or turned into insulators, as described in the comments), depending on whether one desires convective transfer with the outside. (You've since edited your question to exclude this mechanism.)
{ "domain": "physics.stackexchange", "id": 89758, "tags": "thermodynamics, temperature, thermal-conductivity" }
Oocyte cryopreservation: genes from three parents?
Question: Recently, I've heard of something called Oocyte cryopreservation, where a (fertilized, I think) egg from a woman is extracted, frozen and later thawed and reinserted into the woman to delay pregnancy. Now, this is just an idea, I don't know if this is actually possible, but can this frozen egg be implanted into a different woman, who isn't the original owner of the egg? If yes, whose genes would the child inherit? Would it get genes from all three parents, or just from the original owner of the sperm and egg? Answer: In short: yes it is possible to donate an oocyte to another woman. This is typically done in assisted reproduction and combined with in vitro fertilization. The child would only inherit the genes from the donor oocyte and the sperm, as the genetic material is enclosed in the oocyte and sperm. There is no genetic contribution of the recipient. You can find more information on the corresponding wikipedia page: Egg donation
{ "domain": "biology.stackexchange", "id": 8133, "tags": "genetics, reproduction, pregnancy" }
What is the hardest instance for the group isomorphism problem?
Question: Two groups $(G,\cdot)$ and $(H, \times)$ are said to be isomorphic iff there exists a homomorphism from $G$ to $H$ which is bijective. The group isomorphism problem is as follows: given two groups, check whether they are isomorphic or not. There are different ways to input a group; the two most used are by a Cayley table and by a generating set. Here I am assuming input groups are given by their Cayley tables. More formally: $\textbf{Group Isomorphism Problem}$ $\textbf{Input : }$ Two groups $(G,\cdot)$ and $(H,\times)$. $\textbf{Decide : } $ Is $G \cong H$? Let us assume that $n = |G| = |H|$. The group isomorphism problem when input groups are given by Cayley tables is not known to be in $\textbf{P}$ in general, although there are group classes for which the problem is known to be in polynomial time, such as abelian groups, groups which are extensions of an abelian group, simple groups, etc. Even for nilpotent class two groups, no algorithm better than brute force is known. A brute force algorithm for group isomorphism, due to Tarjan, is as follows. Let $G$ and $H$ be the two input groups, and let $S$ be a generating set of the group $G$. It is a well-known fact that every finite group admits a generating set of size $\mathcal{O}(\log n)$, which can be found in polynomial time. The number of possible images of the generating set $S$ under a homomorphism from $G$ to $H$ is $n^{\log n}$. Now, check for each choice of images whether it defines a bijective homomorphism. The overall runtime will be $n^{\log n + \mathcal{O}(1)}$. Let me first define the center of the group $G$: $$Z(G) = \{g \in G \mid ag=ga, \forall a \in G\}$$ $Z(G)$ consists of the elements of the group $G$ which commute with all other elements of the group $G$. Groups for which $G/Z(G)$ ( / used for quotient) is abelian are known as nilpotent class two groups. To me it appears that nilpotent class two groups are the hardest instances of the group isomorphism problem.
The meaning of "hardest instances" is: solving that case will allow researchers who work in group theory to solve the isomorphism problem of a large number of groups. Initially, I thought that simple groups are the hardest instances as they are building blocks of all groups, but later came to know that the isomorphism problem for simple groups is in $\textbf{P}$. Question: What is the hardest instance for the group isomorphism problem? Answer: $p$-groups of class 2 and exponent $p$ are widely believed to be the hardest case of Group Isomorphism ($p > 2$). (For $p=2$, we need to consider exponent 4, since all groups of exponent 2 are abelian - easy exercise for the reader.) Although there is as yet no reduction from general GpIso to this class of groups (though see point 0.5 below), there are several reasons for this belief. Let me outline some of them here. 0) Practical experience (see papers by Newman, Eick, O'Brien, Holt, Cannon, Wilson, ... which give the algorithms that are implemented in GAP and MAGMA). 0.5) [EDIT: added 8/7/19] Reductions. When such $p$-groups are given by generating sets of matrices over $\mathbb{F}_p$, the problem is $\mathsf{TI}$-complete [G.-Qiao '19]. Also (cf. point (4) below), isomorphism of $p$-groups of exponent $p$ and class $c < p$ reduces in poly time to isomorphism of $p$-groups of exponent $p$ and class 2 (ibid.). 1) Structure (reduce to solvable, then to $p$-group). Every finite group contains a unique maximal solvable normal subgroup, called the solvable radical, denoted $Rad(G)$. $G/Rad(G)$ contains no abelian normal subgroups, and isomorphism of such groups can be handled efficiently in practice (Cannon-Holt J. Symb. Comput. 2003) and in theory (Babai-Codenotti-Qiao ICALP 2012). Even for groups where $Rad(G)$ is abelian, some of these can be handled in $n^{O(\log \log n)}$ time (G-Qiao CCC '14, SICOMP '17) - so, not quite polynomial, but much closer than $n^{\log n}$. 
The main obstacle thus appears to be solvable (normal sub)groups. Now, within solvable groups, there is a lot of structure - starting with the fact that every solvable group is a knit product of its Sylow $p$-subgroups - and it seems the hardest cases are $p$-groups. 2) Counting. The number of groups of order $n$ is $\leq n^{(\frac{2}{27} + o(1))\mu(n)^2}$, where $\mu(n)$ is the largest exponent of any prime dividing $n$ (Pyber 1993). The number of $p$-groups of order $n=p^m$ is at least $p^{(\frac{2}{27} + o(1))m^2}$ (Higman 1960). So you see that the coefficient of the leading terms in the exponents match. In this sense "most" groups are $p$-groups (even of class 2 and exponent $p$). There is a long-standing conjecture which says that "most" in the preceding weak sense can be strengthened to say that proportion of groups of order $\leq n$ which are $p$-groups tends to 1 as $n \to \infty$. 3) Universality (/wildness). Giving a classification of $p$-groups would imply a classification of all modular representations of any finite group (or even Artinian algebra) in characteristic $p$ (Sergeichuk 1977). 4) Flexibility. Why $p$-groups of class 2 and not higher class? (Note that $p$-groups of nearly-maximal class, so-called "small coclass", have essentially been classified, Eick & Leedham-Green 2006, see also some of the answers here.) To any $p$-group one can associate a graded Lie ring, where bracket in the Lie ring corresponds to commutator in the group. Associativity in the group implies the Jacobi identity for the bracket, thus giving rise to a genuine Lie ring. However, note that when the group is class 2, the Jacobi identity is trivially satisfied (all its terms are automatically 0), so this puts no additional constraints on the structure. It basically just corresponds to an arbitrary skew-symmetric bilinear map. For $p$-groups of exponent $p$, there is even a reduction from class $c < p$ to class 2.
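For intuition about the brute-force baseline mentioned in the question, here is a toy Python sketch that decides isomorphism for tiny groups given by Cayley tables. It tries every bijection (even cruder than restricting to generator images), so it is purely illustrative and nowhere near the algorithms discussed above:

```python
from itertools import permutations

def is_isomorphic(G, H):
    """Naive check: do Cayley tables G and H define isomorphic groups?
    Tries every bijection; exponential, fine only for tiny groups."""
    n = len(G)
    if n != len(H):
        return False
    for perm in permutations(range(n)):      # candidate bijection g -> perm[g]
        # Homomorphism condition: phi(a*b) = phi(a)*phi(b) for all a, b.
        if all(perm[G[a][b]] == H[perm[a]][perm[b]]
               for a in range(n) for b in range(n)):
            return True
    return False

# Cayley tables: Z4 (cyclic) vs Z2 x Z2 (Klein four-group), both of order 4.
Z4 = [[(a + b) % 4 for b in range(4)] for a in range(4)]
K4 = [[a ^ b for b in range(4)] for a in range(4)]   # XOR = addition in (Z2)^2

assert is_isomorphic(Z4, Z4)
assert not is_isomorphic(Z4, K4)   # different element orders, hence not isomorphic
```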
{ "domain": "cstheory.stackexchange", "id": 4660, "tags": "ds.algorithms, gr.group-theory" }
How to solve when circuits have both d.c, and a.c. component?
Question: The given figure illustrates the simplest ripple filter. A voltage $V=V_0(1+\cos(\omega t))$ is fed to the left input. Find the output voltage V'(t). As can be seen, the voltage fed to the circuit has two components, one d.c. and one a.c. Now suppose only the a.c. component were fed to the circuit. Then the current will lead the voltage, and hence the current equation becomes $$I=\frac{V_0}{\sqrt{R^2+(\frac{1}{\omega C})^2}}\cos(\omega t-\phi)$$ where $\tan(\phi)=\frac{1}{\omega RC}$. Now suppose only the d.c. component were fed to the circuit; then the equation of the current would become $$I=\frac{V_0}{R}e^{\frac{-t}{RC}}.$$ And so we can find the voltage across the capacitor in the respective cases discussed above. But if both components are fed simultaneously, then how are the currents going to superimpose? Will each component behave independently, or is there something else which I am missing? The answer is given as $V'=V_0+V_m\cos(\omega t-\alpha)$ where $V_m=\frac{V_0}{\sqrt{1+(\omega RC)^2}}$, $\alpha=\arctan(\omega RC)$. Here I also did not understand how the d.c. component (voltage $V_0$) fed across the circuit would be equal to the voltage across the capacitor. Answer: Because the system is linear, the response to the two driving voltages [$V_0$ and $V_0\cos(\omega t)$] will equal the sum of the responses to each acting alone (superposition). Yes, each component behaves independently and the output will be their sum. Think of it like this (intuition) - the dc source will charge the capacitor up to $V_0$ after some time (after 5 time constants it is practically fully charged). Now, to get charge to flow into the capacitor you will have to raise the applied voltage above $V_0$. This will be done by the ac component of the source. This will charge the capacitor above $V_0$, and then when the ac source goes negative, the capacitor will discharge back into the source.
After the initial transient period, this charge/discharge oscillation will be centered around $V_0$, as shown in the plot; for that plot I used R=2$\Omega$, C=100µF, f=1000Hz, and $V_0$=25V, with the circuit being closed at t=0.
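The claimed steady-state output can be cross-checked by numerically integrating the circuit equation $RC\,dV'/dt + V' = V_0(1+\cos\omega t)$ with the same component values (a sketch; forward Euler is crude but adequate at this step size):

```python
import math

R, C = 2.0, 100e-6          # ohms, farads  -> RC = 0.2 ms
V0, f = 25.0, 1000.0        # volts, hertz
w = 2 * math.pi * f

# Analytic steady-state output from superposition:
Vm = V0 / math.sqrt(1 + (w * R * C) ** 2)
alpha = math.atan(w * R * C)

def v_analytic(t):
    return V0 + Vm * math.cos(w * t - alpha)

# Forward-Euler integration of RC dV'/dt + V' = V0 (1 + cos(w t)), V'(0) = 0.
dt, t, vc = 2e-8, 0.0, 0.0
t_end = 3e-3                            # ~15 time constants: transient has died out
while t < t_end:
    vc += dt * (V0 * (1 + math.cos(w * t)) - vc) / (R * C)
    t += dt

# Numerical solution matches the superposition answer after the transient.
assert abs(vc - v_analytic(t)) < 0.05
```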
{ "domain": "physics.stackexchange", "id": 80883, "tags": "homework-and-exercises, electric-circuits, electrical-resistance, voltage, capacitance" }
Has Hubble ever been used to try to image a near Earth asteroid?
Question: This answer to How big will Apophis appear? points out that the near Earth asteroid Apophis will likely be close to 2 arcseconds in diameter as seen from Earth during its close approach in 2029. I speculate that if the Hubble Space Telescope were still operational then, it could potentially image the asteroid in visible light at a few dozen pixels in diameter. This leads me to wonder if the Hubble has ever been used to image† or at least spatially resolve in some way an asteroid during a close pass to the Earth before. †Here the verb "image" should be taken to mean the act of producing a resolved image of an object so that different pixels correspond to intensity from different parts of the body being imaged. For the purposes of this question please don't consider telescope images in which an asteroid happens to appear but is too far away to be resolved. Thanks! Answer: [rewritten to address the revised question] Maybe, depending on how fussy you want to be about "resolved". This is a study from 1995, using observations of asteroid 4179 Toutatis made in 1992 with HST. They reported marginal resolution of the asteroid, as suggested by this figure comparing a deconvolved image of a star (observed with the same filter and imager location) and a similarly deconvolved image of the asteroid itself (each pixel corresponds to about 450 m at the distance of the asteroid): The appearance of the asteroid is pretty clearly not a point source, but it's also fair to say it's only partly resolved, and mostly just in one direction. (My admittedly vague impression is that this is one of the best, if not the best, case of HST "resolving" a near-Earth asteroid.) Most observations of near-Earth objects with HST are, I think, aimed at getting optical information on compositions not possible from other wavelengths, and sometimes refining estimates of rotation rates, as was done this year (using data from 2012) for the asteroid Bennu, currently being visited by OSIRIS-REx. 
In practice, you get much better spatial resolution using radar (including line-of-sight distance variations due to the structure of the asteroids from time-of-return measurements, allowing you to construct 3D models of them), so there's not much point in trying to resolve them with HST.
{ "domain": "astronomy.stackexchange", "id": 3814, "tags": "asteroids, hubble-telescope, near-earth-object" }
Calling ROS subscriber callback functions from another function
Question: Hi, Is it OK (both technically and in terms of good programming practice) to call the callback function of a certain subscriber from within another function? For example: void callback(sensor_msgs::Image::ConstPtr & msg) { /*do stuff*/ } void otherFunction() { sensor_msgs::Image::ConstPtr image_ptr; /*do stuff*/ callback(image_ptr); } Thank you Originally posted by 2ROS0 on ROS Answers with karma: 1133 on 2017-03-16 Post score: 0 Answer: This should be OK, provided that you take care of potential concurrency issues that might arise due to your callback queues being driven by any kind of multithreaded spinner. As your callbacks could then be called in parallel, otherFunction(..) might call callback(..) at a point where callback(..) is already being executed by the callback queue instance(s) in your node. If callback(..) performs operations that must be atomic or contains critical sections, you would need to add some mutual exclusion infrastructure to make sure that two calls to callback(..) cannot be executed in parallel. Originally posted by gvdhoorn with karma: 86574 on 2017-03-17 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 27340, "tags": "ros, callback" }
Why doesn't frequency change?
Question: I've seen a mathematical example where a wave from the same source travels through water and air. In both media the frequency was the same, but the wavelengths were different. We know that velocity of a wave = frequency * wavelength. If the velocity of a wave changes from medium to medium (the same wave, from the same source, in two different media), why doesn't the frequency change? Answer: It is best to first simplify the situation and think about why the frequency of the wave is the same as that of the source. Considering the case of a string attached to an oscillator: if the frequency of the oscillator and the wave were different, there would need to be a discontinuity in the string. The fact that the string is attached to the oscillator means they need to oscillate at the same frequency. Now, in your case you describe a situation where the same source is attached to two different media. This is analogous to an oscillator attached to two strings of different thickness. As shown, the frequency must be the same. The speed of propagation depends on the medium. In the case of a string it will be $$ v = \sqrt{\frac{\text{tension}}{\text{mass per unit length}}}. $$ Then, the wavelength must change accordingly to keep the frequency $v/\lambda$ constant.
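A quick numeric illustration of the constant-frequency argument (the 440 Hz source and the sound speeds are textbook round numbers, not taken from the question): the source fixes the frequency, so the wavelength alone absorbs the change in propagation speed:

```python
f = 440.0          # Hz, fixed by the source
v_air = 343.0      # m/s, nominal speed of sound in air
v_water = 1482.0   # m/s, nominal speed of sound in water

# v = f * wavelength, so wavelength = v / f in each medium
lam_air = v_air / f
lam_water = v_water / f
print(f"air: {lam_air:.2f} m, water: {lam_water:.2f} m")

# in both media, v / wavelength recovers the same source frequency
print(v_air / lam_air, v_water / lam_water)
```

The wavelengths differ by the same factor as the speeds (about 4.3x here), while $v/\lambda$ stays at 440 Hz in both media.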
{ "domain": "physics.stackexchange", "id": 17574, "tags": "refraction, frequency" }
Why does deleting high frequencies in FFT return the areas with the most change?
Question: I'm new to image processing and FFT filters. I have a relatively smooth $\mathtt{2D}$ (circular) Gaussian surface (height $10$) with several "holes" in it, and the surrounding area has some noise (height below $0.5$). Here's what I did: I transformed the matrix $\mathcal{A}$ with $\mathcal{A}_f= \text{fftshift}( \text{fft}(\mathcal{A}))$, then I used a for loop to delete all the high-frequency components, i.e. the components with magnitude comparable to the DC component ($\approx 2.5\mathtt{e}5$): if $\mathcal{A}_f(i,j) > 1\mathtt{e}3$, then set $\mathcal{A}_f(i,j)=0$. Then I did a backwards transformation $\mathcal{A}_\text{result}=\text{ifft}(\text{ifftshift}(\mathcal{A}_f))$. What's interesting is that $\mathcal{A}_\text{result}$ clearly identified all the holes (the regions with the most change) as spikes, and the rest of the graph is almost $0$ with some noise. My question: what happened here? Why does the result return the holes as spikes? Is there any other way to identify the holes with a Fourier or convolution filter? Answer: This line of yours delete the component with magnitude comparable to the DC component (=2.5e5), i.e. if Af(i,j)>1e3, then Af(i,j)=0 amounts to an effective highpass filter on typical smooth image data. Indeed, for typical smooth images the high-frequency coefficients are generally (but not always) smaller in magnitude than the low-frequency coefficients. So deleting large-magnitude coefficients will effectively delete low-frequency coefficients. This is not exactly the same as deleting low-frequency coefficients directly; nevertheless, such an operation returns a mostly darkened image with certain edges and contours outlined, essentially a very primitive edge detection, exemplified with the following code: I = im2double(imread('Lena.bmp')); figure,imshow(I); Ik = fft2(I); th = 100; Ik( abs(Ik) > th) = 0; figure,imshow( 3*real(ifft2(Ik))); with the result:
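The same effect can be reproduced in NumPy on a 1D toy version of the questioner's surface (the signal shape, sizes, and threshold are made up for illustration, not taken from the question): zeroing the large-magnitude coefficients strips out the smooth bump and leaves a residual concentrated at the narrow "hole":

```python
import numpy as np

N = 256
x = np.arange(N)
surface = 10 * np.exp(-((x - 128) / 40) ** 2)   # smooth Gaussian bump
surface -= 5 * np.exp(-((x - 100) / 2) ** 2)    # a narrow "hole" at x = 100

F = np.fft.fft(surface)
F[np.abs(F) > 50] = 0         # delete large coefficients (mostly the low frequencies)
residual = np.fft.ifft(F).real

# the surviving high-frequency content is concentrated at the hole
print(np.abs(residual).argmax())   # index near 100
```

The broad bump lives in a handful of large low-frequency coefficients, which the threshold removes; the narrow hole is spread thinly across many small coefficients, which survive, so the inverse transform "lights up" at the hole, just as the questioner observed in 2D.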
{ "domain": "dsp.stackexchange", "id": 6870, "tags": "image-processing, fft, filters" }
Uranium Deposits in the Ocean?
Question: I was wondering whether there are any uranium deposits in the ocean floor, or in the rock walls of the continental shelf, etc. I was hoping to have some deep sea uranium deposits accessible from the water, for fiction writing. So I would like to know if such has ever been found, or if it's possible to have uranium deposits so near to the ocean. Answer: There is no reason you couldn't have uranium deposits in the continental shelf. Certain types of deposits wouldn't occur or be likely to persist in that environment but other types such as those in Archean metasediments likely could be found. As pure speculation, I can envision formation of deposits similar to unconformity or roll-front deposits where deep-circulating groundwater discharges through organic-rich ocean sediments. The organic material would cause the uranium to precipitate in a reduced oxidation state. That would likely be up on the shelf so I don't know if you could make it work if you want to be down on the edge of the shelf.
{ "domain": "earthscience.stackexchange", "id": 1175, "tags": "geology, ocean, oceanography, uranium, sea-floor" }
Are all of the edge descriptors necessary to differentiate edges?
Question: My question is about edge descriptors produced from an edge detection method applied to an object image. As we know, an edge has four descriptors: edge normal, edge direction, edge position, and edge strength. These descriptors are used for differentiating between the detected edges. Are all of these descriptors necessary? When can I dismiss any of them? I am thinking about assumptions based on the properties of the input image, e.g., if the input image has no noise, can I neglect the edge strength and assume that all resulting edge points are true edges? Answer: You can calculate the edge direction from the edge normal by atan2(y,x). The other descriptors are independent of each other, and thus only the problem itself can tell if they are necessary or not.
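The atan2 relation the answer mentions can be sketched concretely (the gradient values here are invented): the edge normal points along the intensity gradient, and the edge direction is simply perpendicular to it, so one of the two angles is redundant:

```python
import math

# edge normal computed from gradient components (gx, gy)
gx, gy = 0.0, 1.0                  # a purely vertical intensity gradient
normal = math.atan2(gy, gx)        # angle of the edge normal
direction = normal - math.pi / 2   # edge direction is perpendicular to the normal

print(math.degrees(normal), math.degrees(direction))  # 90.0 0.0
```

A vertical gradient gives a normal at 90 degrees and an edge running horizontally at 0 degrees, which is why storing both descriptors carries no extra information.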
{ "domain": "dsp.stackexchange", "id": 319, "tags": "computer-vision, edge-detection, image-processing" }
Existence of polynomial time reduction from P to R?
Question: Why doesn't the following idea work: If L_2 in R and L_1 in P and the languages are not trivial, then there is a polynomial-time reduction from L_1 to L_2. I know that if such a reduction exists, then L_1 is also in R --> But L_1 is in P and P is in R, so everything looks OK :) I'd be glad for your help here. Answer: The claim is correct; therefore, you need to give a proof that such a reduction exists. I see that you're trying to prove it by contradiction, so your question is a bit unclear. If you did not get to a contradiction by assuming the correctness of the claim, that does not mean that the claim is correct. Instead, you're assuming that it is correct. By a similar reasoning to yours: as far as I see, if $P\neq NP$ then everything is ok :) Or if $P=NP$ then everything is okay. So maybe the claim is not correct but we don't have the tools to show that yet. Or maybe you can assume both! Regarding the specific claim you mentioned, consider the following more general claim. Claim: every non-trivial language $L_2 \notin \{\emptyset, \Sigma^*\}$ is $P$-hard. That is, if $L_2$ is a non-trivial language and $L_1\in P$, then $L_1 \leq_p L_2$. To begin with, note that the claim you wrote is a special case of the above claim. Indeed, we just drop the assumptions that $L_2$ is in $R$ and $L_1$ is non-trivial. Solution: let $L_2$ be a non-trivial language and let $L_1\in P$. We describe a reduction from $L_1$ to $L_2$. To begin with, since $L_2$ is non-trivial, there is a word $x_{in}\in L_2$ and a word $x_{out} \notin L_2$. Defining the reduction: the reduction, denoted $f$, operates as follows. For the input word $w$, check whether $w$ is in $L_1$. If $w\in L_1$, the reduction outputs $x_{in}$. Otherwise, the reduction outputs $x_{out}$. Correctness: follows immediately from the fact that $w\in L_1$ iff $f(w) = x_{in} \in L_2$. Runtime: note that the reduction runs in polynomial time in $|w|$ (the input's length). 
Indeed, since $L_1\in P$, we have that there is a deterministic TM that decides $L_1$ in polynomial time and thus one can decide whether $w\in L_1$ in polynomial time in $|w|$. $x_{in}$ and $x_{out}$ are constants (they do not depend on the input $w$), hence they do not affect the runtime of the reduction (you can think about them as words hardcoded in the reduction itself). Note: a machine that decides $L_1$ in polynomial time exists, $x_{in}$ and $x_{out}$ exist and thus the reduction exists.
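The reduction in the answer can be made concrete with toy languages. In this sketch, $L_1$ is even-length binary strings (trivially decidable in polynomial time) and $L_2$ is strings containing '1' (non-trivial, so the witnesses $x_{in}$ and $x_{out}$ exist); both choices are invented just to make the construction runnable:

```python
# Toy instances: L1 = even-length strings (decidable in poly time),
# L2 = strings containing '1' (non-trivial, so witness words exist)
X_IN = "1"    # some fixed word in L2
X_OUT = "0"   # some fixed word not in L2

def in_L1(w: str) -> bool:
    # a polynomial-time decider for L1
    return len(w) % 2 == 0

def f(w: str) -> str:
    # the reduction: decide membership in L1, then emit a hardcoded witness
    return X_IN if in_L1(w) else X_OUT

def in_L2(w: str) -> bool:
    return "1" in w

# correctness: w in L1  iff  f(w) in L2
for w in ["", "01", "0", "101", "0110"]:
    assert in_L1(w) == in_L2(f(w))
print("reduction is correct on the samples")
```

Note how the witnesses are constants hardcoded in the reduction, exactly as the answer describes, so the whole of $f$ runs in the time it takes to decide $L_1$.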
{ "domain": "cs.stackexchange", "id": 12280, "tags": "complexity-theory, computability, time-complexity, reductions, polynomial-time" }
SQL injection safety check
Question: I was wondering if my code is safe from SQL injection. This code just checks whether the username exists in my DB or not. $username = $_POST['username']; $stmt = mysqli_stmt_init($con); $query = "SELECT username FROM users WHERE username = ?" ; mysqli_stmt_prepare($stmt, $query); mysqli_stmt_bind_param($stmt, "s", $username); mysqli_stmt_execute($stmt); mysqli_stmt_bind_result($stmt, $user['username']); mysqli_stmt_execute($stmt); if (mysqli_stmt_fetch($stmt)){ if ($user['username'] === $username){ echo $username, ' exists'; } } elseif (!mysqli_stmt_fetch($stmt)){ echo $username, ' doesn\'t exists'; } Answer: SQL-injection-wise, this is completely safe. You don't run any risk of SQL injection. However, some parts of your code are not optimal: mysqli_stmt_execute($stmt); mysqli_stmt_bind_result($stmt, $user['username']); mysqli_stmt_execute($stmt); Why are you executing the statement twice? Now, imagine that you had your if-else switched so that you wanted to check for the non-existing user first: if (!mysqli_stmt_fetch($stmt)) { echo $username, ' doesn\'t exists'; } elseif (mysqli_stmt_fetch($stmt)) { if ($user['username'] === $username) { echo $username, ' exists'; } } This code would not work, and you might not be aware of why exactly. You might just be lucky that you did not code it this way from the start. The issue is that you are calling mysqli_stmt_fetch twice. You should not do that; there can only be a maximum of one result, which means that the second if will always be false. Your original code should look like this: if (mysqli_stmt_fetch($stmt)) { if ($user['username'] === $username){ echo $username, ' exists'; } } else { echo $username, ' doesn\'t exists'; } In fact though, if the first if-statement is true, then the inner if will also be true because of your SQL WHERE condition. So your code could be just this: if (mysqli_stmt_fetch($stmt)) { echo $username, ' exists'; } else { echo $username, ' doesn\'t exists'; }
{ "domain": "codereview.stackexchange", "id": 8536, "tags": "php, mysqli, sql-injection" }
Load text file into an array in Swift
Question: I'm drowning in Swift optionals and error handling syntax. I'm just converting a text file into an array of Strings. One would think that this could be a simple two or three liner but by the time I got it to work, I ended up with the following mess: enum ImportError: ErrorType { case FileNotFound case CouldntGetArray } private func getLineArrayFromFile(fileName: String) throws -> Array<String> { let path = NSBundle.mainBundle().pathForResource(fileName, ofType: nil) if path == nil { throw ImportError.FileNotFound } var lineArray: Array<String>? do { let content = try String(contentsOfFile: path!, encoding: NSUTF8StringEncoding) lineArray = content.componentsSeparatedByString("\n") }catch{ throw ImportError.CouldntGetArray } return lineArray! } I actually don't really care about the using ErrorType enum, but I wanted to play around with the new Swift Error Handling syntax. I thought I understood optionals before, but they were giving me a headache when combined with the do-try-catch statement. I also didn't know if I should return an Array or an Optional Array. What are the best practices for a situation like this? Error handling Treatment of optionals Code brevity/readability Answer: func getLineArrayFromFile(fileName: String) throws -> Array<String> The function does not get lines from an arbitrary file, but from a resource file. The "get" prefix is usually not used in Objective-C or Swift. Array<String> can be shortened to [String]. So my suggestion for the function name would be func linesFromResource(fileName: String) throws -> [String] let path = NSBundle.mainBundle().pathForResource(fileName, ofType: nil) if path == nil { throw ImportError.FileNotFound } As you are obviously using Swift 2, this can be simplified with the guard statement. Note that path is no longer an optional. There are pre-defined NSError codes which can be used here instead of defining your own. This also gives better error descriptions for free. 
Example: guard let path = NSBundle.mainBundle().pathForResource(fileName, ofType: nil) else { throw NSError(domain: NSCocoaErrorDomain, code: NSFileNoSuchFileError, userInfo: [ NSFilePathErrorKey : fileName ]) } do { let content = try String(contentsOfFile: path!, encoding: NSUTF8StringEncoding) lineArray = content.componentsSeparatedByString("\n") } catch { throw ImportError.CouldntGetArray } You are catching the error and throwing your own error code in the failure case, so the actual error information is lost. Better just call the try String(..) and let an error propagate to the caller of your function: let content = try String(contentsOfFile: path, encoding: NSUTF8StringEncoding) Again, this gives better error descriptions for free. So the complete method would now look like this (no optionals anymore, no forced unwrapping with !): func linesFromResource(fileName: String) throws -> [String] { guard let path = NSBundle.mainBundle().pathForResource(fileName, ofType: nil) else { throw NSError(domain: NSCocoaErrorDomain, code: NSFileNoSuchFileError, userInfo: [ NSFilePathErrorKey : fileName ]) } let content = try String(contentsOfFile: path, encoding: NSUTF8StringEncoding) return content.componentsSeparatedByString("\n") } And a typical usage would be do { let lines = try linesFromResource("file.txt") print(lines) } catch let error as NSError { print(error.localizedDescription) } catch let error { print(error) } The reason for the final catch let error is that it is required that the catch statements are exhaustive. Even if we know that the function throws only NSErrors, the compiler doesn't know that. (There are exceptions but that is a different topic.) Now to your question of how an error should be handled in general: I would say it depends. There are three different scenarios: Loading the strings can fail, and you want to present or log an error message in that case. Then the above method of throwing and catching an error is appropriate. 
Loading the strings can fail, but the particular reason is of no interest to the caller. In that case I would change the function to return an optional. Example: func linesFromResource(fileName: String) -> [String]? { guard let path = NSBundle.mainBundle().pathForResource(fileName, ofType: nil) else { return nil } do { let content = try String(contentsOfFile: path, encoding: NSUTF8StringEncoding) return content.componentsSeparatedByString("\n") } catch { return nil } } // Usage: if let lines = linesFromResource("file.txt") { print(lines) } Finally, if failing to load the strings is a programming error then the function should abort in the error case, and return the (non-optional) strings otherwise. As an example, if this function is only used to load strings from fixed compiled-in resource files which are supposed to exist, then failing to load a file would be a programming error and should be detected early: func linesFromResource(fileName: String) -> [String] { guard let path = NSBundle.mainBundle().pathForResource(fileName, ofType: nil) else { fatalError("Resource file for \(fileName) not found.") } do { let content = try String(contentsOfFile: path, encoding: NSUTF8StringEncoding) return content.componentsSeparatedByString("\n") } catch let error { fatalError("Could not load strings from \(path): \(error).") } } let lines = linesFromResource("file.txt") print(lines) fatalError() prints a message before stopping execution of the program, which can be helpful to locate the programming error. Otherwise you could shorten the "forced" version to func linesFromResourceForced(fileName: String) -> [String] { let path = NSBundle.mainBundle().pathForResource(fileName, ofType: nil)! let content = try! String(contentsOfFile: path, encoding: NSUTF8StringEncoding) return content.componentsSeparatedByString("\n") } which is the "three liner" that you were looking for in the introduction to your question.
{ "domain": "codereview.stackexchange", "id": 15205, "tags": "array, file, error-handling, swift" }
Why do we say that the Earth moves around the Sun?
Question: In history we are taught that the Catholic Church was wrong, because the Sun does not move around the Earth, instead the Earth moves around the Sun. But then in physics we learn that movement is relative, and it depends on the reference point that we choose. Wouldn't the Sun (and the whole universe) move around the Earth if I place my reference point on Earth? Was movement considered absolute in physics back then? Answer: Imagine two donut-shaped spaceships meeting in deep space. Further, suppose that when a passenger in ship A looks out the window, they see ship B rotating clockwise. That means that when a passenger in B looks out the window, they see ship A rotating clockwise as well (hold up your two hands and try it!). From pure kinematics, we can't say "ship A is really rotating, and ship B is really stationary", nor the opposite. The two descriptions, one with A rotating and the other with B, are equivalent. (We could also say they are both rotating a partial amount.) All we know, from a pure kinematics point of view, is that the ships have some relative rotation. However, physics does not agree that the rotation of the ships is purely relative. Passengers on the ships will feel artificial gravity. Perhaps ship A feels lots of artificial gravity and ship B feels none. Then we can say with certainty that ship A is the one that's really rotating. So motion in physics is not all relative. There is a set of reference frames, called inertial frames, that the universe somehow picks out as being special. Ships that have no angular velocity in these inertial frames feel no artificial gravity. These frames are all related to each other via the Poincaré group. 
In general relativity, the picture is a bit more complicated (and I will let other answerers discuss GR, since I don't know much), but the basic idea is that we have a symmetry in physical laws that lets us boost to reference frames moving at constant speed, but not to reference frames that are accelerating. This principle underlies the existence of inertia, because if accelerated frames had the same physics as normal frames, no force would be needed to accelerate things. For the Earth going around the sun and vice versa, yes, it is possible to describe the kinematics of the situation by saying that the Earth is stationary. However, when you do this, you're no longer working in an inertial frame. Newton's laws do not hold in a frame with the Earth stationary. This was dramatically demonstrated for Earth's rotation about its own axis by Foucault's pendulum, which showed inexplicable acceleration of the pendulum unless we take into account the fictitious forces induced by Earth's rotation. Similarly, if we believed the Earth was stationary and the sun orbited it, we'd be at a loss to explain the Sun's motion, because it is extremely massive, but has no force on it large enough to make it orbit the Earth. At the same time, the Sun ought to be exerting a huge force on Earth, but Earth, being stationary, doesn't move - another violation of Newton's laws. So, the reason we say that the Earth goes around the sun is that when we do that, we can calculate its orbit using only Newton's laws. In fact, in an inertial frame, the sun moves slightly due to Earth's pull on it (and much more due to Jupiter's), so we really don't say the sun is stationary. We say that it moves much less than Earth. (This answer largely rehashes Lubos' above, but I was most of the way done when he posted, and our answers are different enough to complement each other, I think.)
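The closing remark, that the Sun moves slightly due to Earth and much more due to Jupiter, is easy to quantify with the two-body barycenter formula. The masses and orbital radii below are standard round values, not taken from the answer:

```python
M_SUN = 1.989e30      # kg
M_EARTH = 5.972e24    # kg
M_JUPITER = 1.898e27  # kg
A_EARTH = 1.496e11    # m, Earth's mean orbital radius
A_JUPITER = 7.785e11  # m, Jupiter's mean orbital radius
R_SUN = 6.957e8       # m, solar radius

def sun_wobble(m_planet, a):
    # distance of the Sun's center from the planet-Sun barycenter
    return a * m_planet / (M_SUN + m_planet)

print(f"due to Earth:   {sun_wobble(M_EARTH, A_EARTH)/1e3:.0f} km")
print(f"due to Jupiter: {sun_wobble(M_JUPITER, A_JUPITER)/1e3:.0f} km "
      f"(solar radius is {R_SUN/1e3:.0f} km)")
```

Earth displaces the Sun by only a few hundred kilometers, deep inside the Sun itself, while the Jupiter-Sun barycenter actually lies slightly outside the solar surface, which is why "the Sun moves much less than Earth" is the accurate inertial-frame statement.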
{ "domain": "physics.stackexchange", "id": 65051, "tags": "newtonian-mechanics, newtonian-gravity, orbital-motion, solar-system, inertial-frames" }
How will the charges redistribute when a conductor is in contact with a uniformly charged material?
Question: This is a conceptual question but it is just a bit tricky. Under the influence of an E-field, any charge that accumulates within a conductor redistributes to the surface. So when an uncharged spherical shell conductor of a certain thickness encloses a charge $q$, a charge of $-q$ will accumulate on the inner surface while a charge $+q$ will accumulate on the outer surface of the shell. What if we made the thickness of the shell larger such that $q$ now comes in contact with the inner surface of the shell? How will the charges get redistributed now? Answer: Any charge that touches the inner surface will flow to the outer surface.
{ "domain": "physics.stackexchange", "id": 29502, "tags": "electrostatics, classical-electrodynamics" }
Bumblebee doesn't publish images --> Fuerte
Question: OS: Ubuntu 12.04 ROS: Fuerte Camera: Bumblebee2 -> svn co `cu-ros-pkg.googlecode.com/svn/trunk/bumblebee2` Other Software include: Bumblebee2 ROS package, libdc1394-22, Coriander programme. Other links cheked: http://answers.ros.org/question/11492/bumblebee2-or-bumblebee1394-this-is-the-question http://answers.ros.org/questions/37289/revisions/ http://answers.ros.org/question/33666/error-with-rosmake-bumblebee2-fuerte http://answers.ros.org/question/10152/bumblebee2-640x480-under-diamondback/?answer=14894#post-id-14894 http://ninghang.blogspot.com.es/2011/11/running-bumblebee2-on-ros-tutorial.html And I can't run bumblebee2 in fuerte. I follow 2 lines: one with the updated package camera1394 like this: <launch> <!-- name node (and topic) after parameter file --> <node pkg="camera1394" type="camera1394_node" name="bumblebee2" output="screen"> <rosparam file="$(find DemoROS)/launch/bumblebee2.yaml" /> </node> </launch> and the yaml # Sample camera parameters for the Bumblebee2 #guid: 08144361026320a0 (defaults to first camera found) #video_mode: 1024x768_mono #for a single grayscale frame video_mode: 1024x768_mono16 #for bumblebee2 fps: 15 bayer_pattern: GRBG bayer_method: Nearest settings_url: file:///opt/ros/camera_settings/bumblebee_right.yaml settings_url2: file:///opt/ros/camera_settings/bumblebee_left.yaml # these all default to automatic settings: #brightness: 511 #exposure: 256 #gain: 150 shutter: -1 #whitebalance: "2000 2000" $ roslaunch DemoROS bumblebee2_lsi.launch [ INFO] [1360084095.413930703]: Found camera with GUID b09d010090cf10 [ INFO] [1360084095.414026074]: No guid specified, using first camera found, GUID: b09d010090cf10 [ INFO] [1360084095.414056375]: camera model: Point Grey Research Bumblebee2 BB2-08S2C [ WARN] [1360084095.414878719]: [Nearest] Bayer decoding in the driver is DEPRECATED; image_proc decoding preferred. [ERROR] [1360084095.414906986]: Unknown Bayer method [Nearest]. Using ROS image_proc instead. 
[ERROR] [1360084095.415115555]: unknown bayer pattern [GRBG] [ INFO] [1360084095.418217826]: [00b09d010090cf10] opened: 1024x768_mono16, 15 fps, 400 Mb/s This way publish one image http://img707.imageshack.us/img707/4431/1024x768mono16.jpg with both camera data mixed. I try de-interlaced it but I don't know what I am doing wrong. I try with other formats like Format7_mode3 with dynamic_reconfigure pkg and this is the result: image http://img706.imageshack.us/img706/4717/format7mode3.jpg My bumblebee2_lsi program is this: #include <signal.h> #include <ros/ros.h> #include <ros/console.h> #include <boost/format.hpp> #include <sensor_msgs/CameraInfo.h> #include <sensor_msgs/image_encodings.h> #include <tf/transform_listener.h> //#include <camera_info_manager/camera_info.h> #include <camera_info_manager/camera_info_manager.h> #include <image_transport/image_transport.h> #include <dynamic_reconfigure/server.h> #include <dynamic_reconfigure/SensorLevels.h> //#include "bumblebee2.h" //#include "bumblebee2/Bumblebee2Config.h" namespace enc = sensor_msgs::image_encodings; /** @file @brief camera1394 is a ROS driver for 1394 Firewire digital cameras. This is a ROS port of the Player driver for 1394 cameras, using libdc1394. It provides a reliable driver with minimal dependencies, intended to fill a role in the ROS image pipeline similar to the other ROS camera drivers. The ROS image pipeline provides Bayer filtering at a higher level (in image_proc). In some cases it is useful to run the driver without the entire image pipeline, so libdc1394 Bayer decoding is also provided. @par Advertises - \b camera/image_raw topic (sensor_msgs/Image) raw 2D camera images (only raw if \b bayer_method is \b NONE). - \b camera/camera_info topic (sensor_msgs/CameraInfo) Calibration information for each image. 
@par Subscribes - None @par Parameters - \b frame_id : @b [string] camera frame of reference (Default: device node name) - \b guid : @b [string] The guid of the camera (Default: "NONE") - \b fps : @b [real] Frames per second (Default: 15.0) - \b iso_speed : @b [int] ISO transfer speed (Default: 400) - \b video_mode : @b [string] Desired image resolution and type (Default "800x600_mono"). The driver supports the following values: "320x240_yuv422" "640x480_mono" "640x480_yuv422" "640x480_rgb" "800x600_mono" "800x600_yuv422" "1024x768_mono" "1024x768_yuv422" "1280x960_mono" "1280x960_yuv422" - \b bayer_pattern : @b [string] The pattern of the Bayer filter to use (Default: "NONE"). The driver supports the following values: "BGGR" "GRBG" "RGGB" "GBRG" "NONE" - \b bayer_method : @b [string] The type of Bayer processing to perform (Default: "NONE"). The driver supports the following values: "NONE" "DownSample" (1/2 size image) "Nearest" "Bilinear" "HQ" "VNG" "AHD" - \b exposure : @b [int] Sets the camera exposure feature to value. - \b shutter : @b [int] Sets the camera shutter feature to value. -1 turns on auto shutter. - \b whitebalance : @b [string] (e.g. "2000 2000") Sets the Blue/U and Red/V components of white balance. "auto" turns on auto white balance. - \b gain : @b [int] Sets the camera gain feature to value. -1 turns on auto gain. - \b brightness : @b [int] Sets the camera brightness feature to value. @todo Make array of supported image encoding values, check parameter settings against that. Make enum type for dynamic reconfiguration. 
*/
class Bumblebee2lsi
{
private:
  ros::NodeHandle privNH_;              // private node handle
  image_transport::ImageTransport *it_;
  std::string camera_name_;
  std::string frame_id_;
  sensor_msgs::Image image_;
  sensor_msgs::Image left_image_;
  sensor_msgs::Image right_image_;
  sensor_msgs::CameraInfo left_cam_info_;
  sensor_msgs::CameraInfo right_cam_info_;
  sensor_msgs::CameraInfo cinfo_;
  /** image transport publish interface */
  image_transport::CameraPublisher left_image_pub_;
  image_transport::CameraPublisher right_image_pub_;
  bool start_;

public:
  Bumblebee2lsi()
  {
    privNH_ = ros::NodeHandle("~");
    it_ = new image_transport::ImageTransport(privNH_);
    left_image_pub_ = it_->advertiseCamera("left/image_raw", 1);
    right_image_pub_ = it_->advertiseCamera("right/image_raw", 2);
    start_ = false;
  }

  ~Bumblebee2lsi()
  {
    delete it_;
  }

  /** Update the bumblebee2 calibration data */
  /** Author : Soonhac Hong (sh2723@columbia.edu) */
  /** Date : 5/24/2010 */
  /** Note : Calibration data is needed to show disparity image using image_view with stereo_view. */
  void updateBumblebee2CalibrationData()
  {
    double left_D_data[] = {-0.29496962080028677, 0.12120859315219049,
                            -0.0019941265153862824, 0.0012058185627261283, 0.0};
    double left_K_data[] = {543.60636929659358, 0.0, 321.7411723319629,
                            0.0, 543.25622524820562, 268.04452669345528,
                            0.0, 0.0, 1.0};
    double left_R_data[] = {0.99980275533925467, -0.018533834763323875, -0.0071377436911170388,
                            0.018542709766871161, 0.99982737377597841, 0.0011792212393866724,
                            0.0071146560377753926, -0.0013113417539480422, 0.9999738306837177};
    double left_P_data[] = {514.20203529502226, 0.0, 334.37528610229492, 0.0,
                            0.0, 514.20203529502226, 268.46113204956055, 0.0,
                            0.0, 0.0, 1.0, 0.0};
    double right_D_data[] = {-0.2893208200535437, 0.11215776927066376,
                             -0.0003854904042866552, 0.00081197271575971614, 0.0};
    double right_K_data[] = {541.66040340873735, 0.0, 331.73470962829737,
                             0.0, 541.60313005445187, 265.72960150703699,
                             0.0, 0.0, 1.0};
    double right_R_data[] = {0.99986888001551244, -0.012830354497672055, -0.0098795131453283894,
                             0.012818040762902759, 0.99991698911455085, -0.001308705884349387,
                             0.0098954841986237611, 0.001181898284639702, 0.99995034002140304};
    double right_P_data[] = {514.20203529502226, 0.0, 334.37528610229492, -232.44101555000066,
                             0.0, 514.20203529502226, 268.46113204956055, 0.0,
                             0.0, 0.0, 1.0, 0.0};

    left_cam_info_.D.resize(5);
    right_cam_info_.D.resize(5);
    memcpy(&left_cam_info_.D[0], &left_D_data[0], sizeof(left_D_data));
    memcpy(&left_cam_info_.K[0], &left_K_data[0], sizeof(left_K_data));
    memcpy(&left_cam_info_.R[0], &left_R_data[0], sizeof(left_R_data));
    memcpy(&left_cam_info_.P[0], &left_P_data[0], sizeof(left_P_data));
    memcpy(&right_cam_info_.D[0], &right_D_data[0], sizeof(right_D_data));
    memcpy(&right_cam_info_.K[0], &right_K_data[0], sizeof(right_K_data));
    memcpy(&right_cam_info_.R[0], &right_R_data[0], sizeof(right_R_data));
    memcpy(&right_cam_info_.P[0], &right_P_data[0], sizeof(right_P_data));
  }

  void updateDataCallbackImage(const sensor_msgs::Image msg)
  {
    image_ = msg;
    // get current CameraInfo data from topic /camera/image_info
    left_cam_info_ = right_cam_info_ = cinfo_;
    left_image_.header.frame_id = right_image_.header.frame_id =
        image_.header.frame_id = left_cam_info_.header.frame_id =
        right_cam_info_.header.frame_id = image_.header.frame_id;

    // update bumblebee2 calibration data
    updateBumblebee2CalibrationData();

    // Read data from the image_ via topic
    if (start_)
    {
      left_cam_info_.header.stamp = right_cam_info_.header.stamp =
          left_image_.header.stamp = right_image_.header.stamp = image_.header.stamp;
      left_cam_info_.height = right_cam_info_.height =
          left_image_.height = right_image_.height = image_.height;
      left_cam_info_.width = right_cam_info_.width =
          left_image_.width = right_image_.width = image_.width;
      left_image_.encoding = right_image_.encoding = image_.encoding;

      // Split image into left image and right image
      left_image_.step = right_image_.step = image_.step;
      int image_size = image_.height * image_.step;
      left_image_.data.resize(image_size);
      right_image_.data.resize(image_size);
      memcpy(&right_image_.data[0], &image_.data[0], image_size);          // the image of right camera is the first half of the deinterlaced image.
      memcpy(&left_image_.data[0], &image_.data[image_size], image_size);  // the image of left camera is the second half of the deinterlaced image.

      // Publish it via image_transport
      left_image_pub_.publish(left_image_, left_cam_info_);
      right_image_pub_.publish(right_image_, right_cam_info_);
    }
  }

  void updateDataCallbackInfo(const sensor_msgs::CameraInfo info)
  {
    cinfo_ = info;
    start_ = true;
  }
}; // end Bumblebee2lsi class definition

/** Main entry point */
int main(int argc, char **argv)
{
  ros::init(argc, argv, "bumblebee2_lsi");
  ros::NodeHandle node;
  Bumblebee2lsi bumblebee2;
  ros::Subscriber sub_image = node.subscribe("camera/image_raw", 1000,
                                             &Bumblebee2lsi::updateDataCallbackImage, &bumblebee2);
  ros::Subscriber sub_info = node.subscribe("camera/camera_info", 1000,
                                            &Bumblebee2lsi::updateDataCallbackInfo, &bumblebee2);
  while (node.ok())
  {
    ros::spinOnce();
  }
  return 0;
}

It is copied from http://cu-ros-pkg.googlecode.com/svn/trunk/bumblebee2/ and modified a little. The result is the same image in both topics (right and left), and I don't know why.

The other route to follow is to use the code from http://cu-ros-pkg.googlecode.com/svn/trunk/bumblebee2 and try to compile it. I followed http://answers.ros.org/question/33666/error-with-rosmake-bumblebee2-fuerte and I can compile the whole package and solve the segmentation fault, but no images are published (the topics never start publishing anything).

What am I doing wrong? I hope somebody can help me. Thanks a lot.

Originally posted by pmarinplaza on ROS Answers with karma: 330 on 2013-02-05

Post score: 0

Original comments

Comment by joq on 2013-02-05: These questions come up often enough that I wish we could support an "official" ROS bumblebee driver.
It's not all that different from the basic camera1394 driver, which I do support. Unfortunately, I have no Bumblebee hardware for testing, so I cannot do that myself.

Comment by pmarinplaza on 2013-02-06: Thanks for the comment ;) Maybe this code helps a little to understand the problem: http://cu-ros-pkg.googlecode.com/svn/trunk/bumblebee2/src/dev_camera1394.cpp (line 725) and the use of this code here: http://cu-ros-pkg.googlecode.com/svn/trunk/bumblebee2/src/bumblebee2.cpp (line 382)

Answer: Please check our FireWire stereo driver http://www.ros.org/wiki/camera1394stereo, which we are using for Bumblebee2 cameras with Fuerte.

Originally posted by Miquel Massot with karma: 1471 on 2013-02-18

This answer was ACCEPTED on the original site

Post score: 1
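For reference, the buffer split that updateDataCallbackImage is supposed to perform can be sketched in a few lines of NumPy (sizes and pixel values here are made up; this only illustrates the memcpy logic, it is not part of any driver):

```python
import numpy as np

# The stacked frame from the camera holds the right image in its first half
# and the left image in its second half (hypothetical mono8 layout).
height, step = 4, 6                                      # per-camera rows, bytes per row
stacked = np.arange(2 * height * step, dtype=np.uint8)   # fake deinterlaced frame

image_size = height * step
right = stacked[:image_size].reshape(height, step)       # first half  -> right camera
left = stacked[image_size:].reshape(height, step)        # second half -> left camera

# If both halves were identical (as in the reported bug), this would print True:
print(np.array_equal(right, left))  # False for this synthetic frame
```

If the published left and right topics carry the same image, the first thing to check is whether both publishes are fed from the same half of the buffer.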
{ "domain": "robotics.stackexchange", "id": 12748, "tags": "ros, camera1394, ros-fuerte, stereo, bumblebee2" }
depthimage_to_laserscan
Question: Has anyone tried to get depthimage_to_laserscan to run with the Gazebo Kinect plugin? I cannot seem to get it to work.

Originally posted by rnunziata on ROS Answers with karma: 713 on 2015-03-01

Post score: -1

Original comments

Comment by slivingston on 2015-03-02: Could you be more specific? E.g., what commands did you try?

Answer: I got pointcloud_to_laserscan working; not a solution, but good enough.

Originally posted by rnunziata with karma: 713 on 2015-03-02

This answer was ACCEPTED on the original site

Post score: 0
{ "domain": "robotics.stackexchange", "id": 21023, "tags": "ros, depthimage-to-laserscan" }
Use dummy variables to create a rank variable. R
Question: I have a series of multiple-response (dummy) variables describing causes for canceled visits. A visit can have multiple reasons for the cancellation. My goal is to create a single mutually exclusive variable from the dummy variables in a hierarchical way. For example, in my sample data below the rank of my variables is as follows: Medical, NoID and Refuse. E.g. if a visit was cancelled due to medical and lack-of-ID reasons, I would like to recode my final variable as "Medical", since that is more important based on my rank. Likewise, VisitID 3 was cancelled due to no ID and a refused visit; in this case I would like to recode the cancellation as "NoID", since NoID is more important than Refuse. Thank you for any help!

  VisitID  NoID Refuse Medical WhatINeed
1       1  TRUE  FALSE    TRUE   Medical
2       2 FALSE  FALSE   FALSE      <NA>
3       3  TRUE   TRUE   FALSE      NoID

structure(list(VisitID = c(1, 2, 3), NoID = c(TRUE, FALSE, TRUE
), Refuse = c(FALSE, FALSE, TRUE), Medical = c(TRUE, FALSE, FALSE
), WhatINeed = c("Medical", NA, "NoID")), row.names = c(NA, 3L
), class = "data.frame")

Answer: You can use case_when() and list the conditions in the order of your rank. Since your dummy variables are already of type logical, the following should work:

df %>%
  mutate(
    WhatINeed_2 = case_when(
      Medical ~ "Medical",
      NoID ~ "NoID",
      Refuse ~ "Refuse",
      TRUE ~ NA_character_
    )
  )

  VisitID  NoID Refuse Medical WhatINeed WhatINeed_2
1       1  TRUE  FALSE    TRUE   Medical     Medical
2       2 FALSE  FALSE   FALSE      <NA>        <NA>
3       3  TRUE   TRUE   FALSE      NoID        NoID

data

df <- structure(list(VisitID = c(1, 2, 3), NoID = c(TRUE, FALSE, TRUE
), Refuse = c(FALSE, FALSE, TRUE), Medical = c(TRUE, FALSE, FALSE
), WhatINeed = c("Medical", NA, "NoID")), row.names = c(NA, 3L
), class = "data.frame")
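For readers coming from Python, the same hierarchical recode can be sketched with NumPy (illustration only; the question and answer are in R). np.select checks the conditions in order, so listing Medical first implements the rank Medical > NoID > Refuse; "Other" stands in for R's NA here, since np.select needs a concrete default value:

```python
import numpy as np

# The three dummy columns from the sample data
no_id   = np.array([True, False, True])
refuse  = np.array([False, False, True])
medical = np.array([True, False, False])

# np.select picks the first condition that is True for each row,
# which is exactly the hierarchical recode asked for.
what_i_need = np.select(
    [medical, no_id, refuse],
    ["Medical", "NoID", "Refuse"],
    default="Other",
)
print(what_i_need.tolist())  # ['Medical', 'Other', 'NoID']
```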
{ "domain": "datascience.stackexchange", "id": 9818, "tags": "r, ranking, dummy-variables, hierarchical-data-format" }
Non-bonded orbitals in water
Question: General Chemistry perspective: Looking at the molecular orbitals of water, we can see that the oxygen is $sp^3$ hybridized. Oxygen forms two sigma bonds with hydrogens, and there are two lone pair orbitals in the molecule. There are five occupied molecular orbitals: The core $1s$ of oxygen, and the four $sp^3$ orbitals, out of which two are lone pair orbitals. Group Theory and Quantum Chemistry perspective: Water belongs to $C_{2v}$ point group. There are four irreducible representations, and the atomic orbitals belong to these irreducible representations (or linear combinations of these irreducible representations). Hence, we get five occupied molecular orbitals of symmetries $a_1, b_1,$ and $b_2$. There is only one non-bonding orbital ($1b_1$) in which the AOs of hydrogens do not participate. Where is the second lone pair orbital of water? Answer: General Chemistry perspective: Looking at the molecular orbitals of water, we can see that the oxygen is $\mathrm{sp}^3$ hybridized. No. One could say that, but $\mathrm{sp}^2$ is equally possible. In fact, it may seem more likely, we’ll get to that in a second. Group Theory and Quantum Chemistry perspective: Water belongs to $C_{2\mathrm{v}}$ point group. There are four irreducible representations, and the atomic orbitals belong to these irreducible representations (or linear combinations of these irreducible representations). Hence, we get five occupied molecular orbitals of symmetries $\mathrm{a}_1, \mathrm{b}_1$, and $\mathrm{b}_2$. Yes, and you did a nice job of drawing them. There is only one non-bonding orbital ($1\mathrm{b}_1$) in which the AOs of hydrogens do not participate. Where is the second lone pair orbital of water? And this is where it gets interesting. I’ll once again point to Professor Klüfers’ web scriptum for the basic and inorganic chemistry course in Munich, section about localising molecular orbitals. 
If you don’t understand German, all you need to do is look at the pictures and understand that wenig means a little in this context. You see, to do a discussion such as ‘this is the bonding electron pair’ we need to localise molecular orbitals, specifically by linearly combining them. It’s almost like in the linked pictures: $\Psi_2 (1\mathrm{b}_2) + \Psi_3 (3\mathrm{a}_1) - \mathrm{some~} \Psi_1 (2\mathrm{a}_1) = \sigma_1$. Invert the signs: $\Psi_2 (1\mathrm{b}_2) - \Psi_3 (3\mathrm{a}_1) + \mathrm{some~} \Psi_1 (2\mathrm{a}_1) = \sigma_2$. These are your two bonding orbitals to hydrogen.

We have some $\Psi_1$ remaining (and, theoretically, also some $\Psi_2$ and $\Psi_3$, since we didn’t use all of them). That remaining linear combination is our second non-bonding orbital.

Now why is $\mathrm{sp}^3$ a bad description? You see, we arrived at one lone pair that is definitely $\pi$-shaped or p-shaped. The second is somewhat p-ish but also somewhat s-ish, so maybe some sp-hybrid. But there is no mistaking that $1\mathrm{b}_1$ is in no way influenced by all the others. It is antisymmetric with respect to the $\ce{H-O-H}$ plane of symmetry while all the others are symmetric. $\mathrm{sp}^2$ describes this a lot better.
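The key point (localisation is just an orthogonal rotation among the occupied orbitals, and it changes neither orthonormality nor the total density) can be illustrated with a toy numerical sketch. This is not a quantum-chemical calculation; the mixing coefficients below are made up to mimic the combinations described above:

```python
import numpy as np

# Represent the three symmetric occupied valence MOs (2a1, 1b2, 3a1) as
# orthonormal basis vectors psi1, psi2, psi3.
# Rows of `mix` mimic the combinations in the text:
#   sigma1 ~ psi2 + psi3 - some psi1
#   sigma2 ~ psi2 - psi3 + some psi1
#   remainder -> the second lone pair
mix = np.array([[-0.3, 1.0,  1.0],
                [ 0.3, 1.0, -1.0],
                [ 1.0, 0.0,  0.0]])

# Orthonormalise the rows (Gram-Schmidt via QR) to get a proper rotation.
q, _ = np.linalg.qr(mix.T)
localized = q.T

# The localized set is still orthonormal ...
print(np.allclose(localized @ localized.T, np.eye(3)))  # True

# ... and the total density built from the occupied orbitals is unchanged,
# which is why the localized and delocalised pictures are equally valid.
canonical = np.eye(3)
print(np.allclose(canonical.T @ canonical, localized.T @ localized))  # True
```

The antisymmetric $1\mathrm{b}_1$ orbital never enters this rotation, since it cannot mix with the symmetric set; that is exactly why it stays a pure p-type lone pair.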
{ "domain": "chemistry.stackexchange", "id": 15021, "tags": "water, quantum-chemistry, computational-chemistry, theoretical-chemistry, group-theory" }
Beginner Battleship game in Python/terminal
Question: I am learning how to program, and during the Python course on Codecademy I was encouraged to rewrite it all with new features (2 players, various ship sizes, etc.). In summary this works as follows:

- Board configuration is set
- Players and their deployment are initialized
- Deployment info is exchanged between players
- Each player takes a turn to look at the state of his boards and to take a shot
- When all ship tiles are shot, the opposite player wins

My question is (if anyone has the patience to dig through it): how can I improve my coding skills and style, and simplify this? Is it a good idea to trash it and rewrite with ships and boards as classes?

from textwrap import dedent
from copy import copy


class Player():

    def alternate_pixel(self, index, mode):
        """This method, after receiving a call with pixel index will change
        its graphical representation. Depending on mode ofc. Also, this
        function does not mess with active_indexes."""
        # print('alternating %d' % index)
        if mode == 'deploy':  # . - empty, free space
            self.own_board[index] = '$'  # $ - healthy ship part
        elif mode == 'get miss':
            self.own_board[index] = ' '  # ' ' - hit in empty space
        elif mode == 'get hit':
            self.own_board[index] = 'X'  # X - destroyed ship part
        elif mode == 'make miss':
            if self.shooting_board[index] == 'X':  # do not override X
                pass
            self.shooting_board[index] = ' '
        elif mode == 'make hit':
            self.shooting_board[index] = 'X'

    @staticmethod
    def show_board(board):
        """Processes list of pixels into graph, without messing with them"""
        temp = copy(board)
        c = -1  # compensate for inserted '\n\t' units
        for i in range(board_size):
            c += 1
            temp.insert((i+1)*board_size + c, '\n\t')
        print('\t ' + ' '.join(temp))

    @staticmethod
    def are_indexes_allowed(occupied, proposed):
        """Periodicity (being on two ends), out of board deploys and stacking
        on tile is disallowed."""
        for i in range(1, board_size):
            if i*board_size - 1 in proposed and i*board_size in proposed:
                return False
        for index in proposed:
            if index not in range(board_size**2) or index in occupied:
                return False
        return True

    def ship_settler(self, ship_size):
        """The back end to deploy_fleet(). This will put given ship size on
        board (horizontaly or vertically)."""
        while True:
            nose_index = int(ask_for_coordinates())
            if nose_index not in self.active_indexes:
                break  # just faster
        if ship_size == 1:  # mono ships do not need to face the incoming
            self.active_indexes.append(nose_index)
            Player.alternate_pixel(self, nose_index, 'deploy')
            return
        proposed_indexes = []
        direction = input('NSWE?\n')
        for i in range(ship_size):  # for each hull segment
            if direction == 'N' or direction == 'n':
                proposed_indexes.append(nose_index - i*board_size)
            if direction == 'S' or direction == 's':
                proposed_indexes.append(nose_index + i*board_size)
            if direction == 'W' or direction == 'w':
                proposed_indexes.append(nose_index - i)
            if direction == 'E' or direction == 'e':
                proposed_indexes.append(nose_index + i)
        if Player.are_indexes_allowed(self.active_indexes, proposed_indexes):
            for index in proposed_indexes:  # run the updates
                self.active_indexes.append(index)
                Player.alternate_pixel(self, index, 'deploy')
        else:
            print('Invalid, try again.')  # if not met - rerun
            del proposed_indexes[:]  # if not emptied it will stay filled
            Player.ship_settler(self, ship_size)

    def deploy_fleet(self):
        """The front end function that fills active_indexes"""
        print(dedent('\
------------------------------\n\
Deployment phase of %s\n\
------------------------------\n' % (self.name)))
        ship_size = 0  # going form smallest to largest
        for ship_amount in ship_list:
            ship_size += 1
            for i in range(ship_amount):  # for each ship of size
                print('Deploying %d sized ship %d/%d\n'
                      % (ship_size, i+1, ship_amount))
                Player.show_board(self.own_board)
                Player.ship_settler(self, ship_size)
        Player.show_board(self.own_board)  # refresh board at the end
        input('Your deployment phase has finished.')

    def __init__(self, name):
        """Each player has 2 boards, first shows own ships and tiles shot by
        the opponent, while second shows the same about him, but active ships
        are hidden. Note that list of active_indexes (which marks non
        destroyed ship parts) has no effect on graphics (in the shooting
        phase)"""
        self.name = name
        self.active_indexes = []  # filled by deploy(), emptied by get_shot()
        self.own_board = ['.'] * board_size**2
        self.shooting_records = []
        self.shooting_board = copy(self.own_board)
        Player.deploy_fleet(self)  # deployment is part of initialization
        print(chr(27) + "[2J")  # clear terminal screen

    def send_deployment_info(self):
        """Some method of trasfering info whether player hit somthing must be
        in place. This seems suboptimal and maybe it could be resolved
        diffrently, but perhaps rebuilding is nesscesary to do so."""
        active_indexes = copy(self.active_indexes)
        return active_indexes

    def receive_deployment_info(self, opponent_indexes):
        self.opponent_indexes = opponent_indexes

    def make_shot(self):
        """Essentially handles players turn"""
        print(dedent('\
--------------------------------------\n\
\tFiring phase of %s\n\
--------------------------------------\n' % (self.name)))
        Player.show_board(self.shooting_board)
        while True:  # cannot strike same tile twice
            index = ask_for_coordinates()
            if index not in self.shooting_records:
                self.shooting_records.append(index)
                break
        if index in self.opponent_indexes:  # if guessed right
            print('You got him!')
            self.alternate_pixel(index, 'make hit')
        else:  # if guessed wrong
            self.alternate_pixel(index, 'make miss')
            print('Bad luck!')
        Player.show_board(self.shooting_board)  # refresh the board
        input('Your turn has finished')
        print(chr(27) + "[2J")  # clears the terminal window
        return index  # pass shot coordinate

    def get_shot(self, index):
        """This has nothing to do with input, it only displays result of
        opponents turn."""
        print(dedent('\
--------------------------------------\n\
\tRaport phase phase of %s\n\
--------------------------------------\n' % (self.name)))
        if index in self.active_indexes:  # if got hit
            self.alternate_pixel(index, 'get hit')
            print('The opponent got your ship hit!\n')
            Player.show_board(self.own_board)
            self.active_indexes.remove(index)  # for finishing the game
        else:  # if evaded
            self.alternate_pixel(index, 'get miss')
            print('You have evaded the shot!\n')
            Player.show_board(self.own_board)


def configure_board_size():
    print('What size of board would you like to play on?\n\
Expected range: 4 to 20')
    size = input()
    if size.isdigit():
        if 4 <= int(size) <= 20:
            return int(size)
    print('Invalid board size')
    return configure_board_size()


def ask_for_ship_type_amount(ship_size, space_avaible):
    """Makes sure that there wont be much problems with fitting ship
    coordinates in deployment phase. It is ensured bu restricting the amount
    of ships size based on left free space."""
    if ship_size > space_avaible:  # speeds up prompting
        return 0
    value = input('How much ships sized %d are to be placed?\n' % ship_size)
    if value.isdigit():
        if int(value)*ship_size <= space_avaible:
            return int(value)
    print('Invalid amount')
    return ask_for_ship_type_amount(ship_size, space_avaible)


def configure_ships(board_size):  # gets called second
    """Gets called second and with help of ask_for_ship_type_amount generates
    list of ships to be placed in deployment phases. This ship_list stores
    amount of each next (in size) ship."""
    ship_list = []
    space_avaible = ((board_size)**2) // 2.2  # preserve arbitrary 60% freespace
    print('Generating ships, ')
    for i in range(1, board_size):
        value = ask_for_ship_type_amount(i, space_avaible)
        space_avaible -= i*value  # each next placed ship takes space
        ship_list.append(value)  # this also stores board_size indirectly
    return ship_list


def ask_for_coordinates():
    """Asks for row and column and outputs the index"""
    coords = str(input('Select tile\n'))
    if coords.isdigit() and len(coords) == 2:
        if 0 < (int(coords[0]) and int(coords[1])) <= board_size:
            index = int(coords[0]) + board_size * (int(coords[1])-1)
            return index - 1  # the above doesnt account for 0th item
    print('Invalid coordinates')
    return ask_for_coordinates()


def main():
    global board_size
    global ship_list
    board_size = configure_board_size()
    ship_list = configure_ships(board_size)
    # ship_list = [0, 2, 0]
    # board_size = len(ship_list) + 1
    a = Player('A')
    b = Player('B')
    a.receive_deployment_info(b.send_deployment_info())
    b.receive_deployment_info(a.send_deployment_info())  # deployment info exch.
    # print(a.active_indexes, b.active_indexes)
    while True:
        a.get_shot(b.make_shot())  # Player B turn
        if not a.active_indexes:
            print('B wins!')
            break
        b.get_shot(a.make_shot())  # Player A turn
        if not b.active_indexes:
            print('A wins!')
            break


main()

Answer: I got two basic things to say.

You got a typo in 'Raport'; I guess it should be 'Report'. (I actually ran the code; that doesn't count as a thing.)

--------------------------------------
	Raport phase phase of A
--------------------------------------

1.- I didn't go through the implementation, just read your post and actually ran the code. OOP is supposed to model the world and help us code. In the world you have a single board and two players. Instead of that you have 2 players and 4 boards, which is weird.
I'm not saying that you should make an object named Board; it could be just a list, but players should not have knowledge of their opponent's information. Like in the real world, you don't have information about your opponent's ships.

2.- You are doing a weird thing when calling methods inside your classes, for instance (got rid of some code for readability):

class Player:
    ...
    def __init__(self, name):
        ...
        Player.deploy_fleet(self)  # THIS LINE IS THE PROBLEM
        ...

This is a problem because you are not calling 'your' method, you are calling the method defined in the Player class with 'you' as an argument. This is not how you call your own methods in a class; you do it like this:

class Player:
    ...
    def __init__(self, name):
        ...
        self.deploy_fleet()  # Ou yeah! Much better
        ...

You may think it is the same, but it isn't. It may be a little bit complex for the level you are in right now (I'm guessing here), but this becomes trouble when someone tries to inherit from your Player class. And again, this may be complex, but if you understand it, it is a pretty good tool to have in your back pocket.

I'll try to demonstrate it with a little example. When 'you' inherit from a class, suddenly you become that class: you have the same attributes and the same methods (well, in Python a method is actually an attribute, but that is WAY out of the scope). And you can choose whether or not you keep your "parent's" attributes; if you don't like one, you can define your own. Let's see with an example. Yes, I'm sorry, but here we go with the animals example.

class Animal:
    def __init__(self, name):
        self.name = name
        Animal.talk(self)  # This is not supposed to be done this way, you'll see

    def talk(self):
        print('Animal talking ' + self.name)

I think a dog is also an animal, right? So let's do it!

class Dog(Animal):
    def talk(self):
        print('Dog talking ' + self.name)

I don't like a dog saying that it's just an animal; he's a dog (man's best friend), so I override the method talk while I let the method __init__ be the same as in the Animal class. Let's have a look at what happens...

some_animal = Animal('BaseAnimal')
dog = Dog('ADog')

When I do this, this happens:

Animal talking BaseAnimal
Animal talking ADog

I overrode the talk method in the Dog class, but in the __init__ method (which Dog inherits from Animal) a hardcoded call to the class Animal is preventing the Dog from calling its own talk method. To solve this, we do:

class Animal:
    def __init__(self, name):
        self.name = name
        self.talk()

    def talk(self):
        print('Animal talking ' + self.name)

Without touching the Dog class, if we repeat the previous statements:

some_animal = Animal('BaseAnimal')
dog = Dog('ADog')

Now the result is:

Animal talking BaseAnimal
Dog talking ADog

Much better, right? This is a pretty complex thing, but to summarize: don't call a method within a class with the name of the class, use self (unless you are looking for that behaviour).

Hope it helped!
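To make point 1 concrete, here is a minimal, hypothetical sketch of what a single Board object could look like. The names are made up; the point is only that an opponent interacts with the board solely through receive_shot() and never sees the ship positions:

```python
class Board:
    def __init__(self, size):
        self.size = size
        self.ships = set()   # indexes of live ship parts, known only to the owner
        self.shots = {}      # index -> 'X' (hit) or ' ' (miss)

    def place(self, index):
        self.ships.add(index)

    def receive_shot(self, index):
        """The only way an opponent interacts with this board."""
        hit = index in self.ships
        self.ships.discard(index)
        self.shots[index] = 'X' if hit else ' '
        return hit

    def defeated(self):
        return not self.ships


board = Board(4)
board.place(5)
print(board.receive_shot(5), board.receive_shot(6))  # True False
print(board.defeated())                              # True
```

With something like this, Player would hold a Board and the deployment-info exchange (send_deployment_info / receive_deployment_info) would disappear entirely.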
{ "domain": "codereview.stackexchange", "id": 28159, "tags": "python, beginner, python-3.x, battleship" }
Nonlinear spring $F=-kx^3$
Question: A nonlinear spring whose restoring force is given by $F=-kx^3$, where $x$ is the displacement from equilibrium, is stretched a distance $A$. Attached to its end is a mass $m$. Calculate... (I can do that) ...suppose the amplitude of oscillation is increased; what happens to the period?

Here's what I think: if the amplitude is increased, the spring possesses more total energy, and at equilibrium the mass travels faster than before because it possesses more kinetic energy. I think the mass also travels faster at any given displacement from equilibrium, but it has to travel more distance, so I can't conclude anything. I was thinking about solving $$mx''=-kx^3$$ but realized this is a very hard job. Any ideas?

Answer: The potential energy is $U\left(x\right) = kx^4/4$ since $-d/dx\left(kx^4/4\right) = -kx^3 = F$, and the energy $$ E = \frac{1}{2}m\left(\frac{dx}{dt}\right)^2 + \frac{1}{4}kx^4 $$ is conserved. From the above you can show that $$ \begin{eqnarray} dt &=& \pm \ dx \sqrt{\frac{m}{2E}}\left(1-\frac{k}{4E}x^4\right)^{-1/2} \\ &=& \pm \ dx \sqrt{\frac{2m}{k}} \ A^{-2} \left[1-\left(\frac{x}{A}\right)^4\right]^{-1/2} \end{eqnarray} $$ where the amplitude $A = \left(4E / k\right)^{1/4}$ can be found from setting $dx/dt = 0$ in the expression for the energy and solving for $x$. The period is then $$ \begin{eqnarray} T &=& 4 \sqrt{\frac{2m}{k}} \ A^{-2} \int_0^A dx \left[1-\left(\frac{x}{A}\right)^4\right]^{-1/2} \\ &=& 4 \sqrt{\frac{2m}{k}} \ A^{-1} \int_0^1 du \left(1-u^4\right)^{-1/2} \\ &=& \left(4 \sqrt{\frac{2m}{k}} I\right) A^{-1} \\ &\propto& A^{-1} \end{eqnarray} $$ where $u = x/A$ and $I = \int_0^1 du \left(1-u^4\right)^{-1/2} \approx 1.31$ (see this).
You can repeat the above for a more general potential energy $U\left(x\right) = \alpha \left|x\right|^n$, where you should find that $$ dt = \pm \ dx \sqrt{\frac{m}{2\alpha}} \ A^{-n/2} \left[1-\left(\frac{\left|x\right|}{A}\right)^n\right]^{-1/2} $$ and $$ \begin{eqnarray} T_n &=& \left(4 \sqrt{\frac{m}{2\alpha}} I_n\right) A^{1-n/2} \\ &\propto& A^{1-n/2} \end{eqnarray} $$ where $$ I_n = \int_0^1 du \left(1-u^n\right)^{-1/2} $$ can be evaluated in terms of gamma functions (see this). This is in agreement with the above for $\alpha = k/4$ and $n=4$, and with Landau and Lifshitz's Mechanics problem 2a of section 12 (page 27), where they find that $T_n \propto E^{1/n-1/2} \propto A^{1-n/2}$.
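The scaling $T \propto A^{-1}$ for the quartic case is easy to check numerically. The sketch below (plain Python, RK4, with $m = k = 1$ chosen for convenience) integrates $mx'' = -kx^3$ from rest at $x = A$ and takes four times the time of the first zero crossing as the period:

```python
def quarter_period(A, m=1.0, k=1.0, dt=1e-4):
    """Integrate m x'' = -k x^3 from rest at x = A with classic RK4 and
    return the time of the first zero crossing, i.e. a quarter period."""
    def acc(x):
        return -k * x**3 / m

    x, v, t = A, 0.0, 0.0
    while True:
        k1x, k1v = v, acc(x)
        k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x)
        k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x)
        k4x, k4v = v + dt*k3v, acc(x + dt*k3x)
        x_new = x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6
        v_new = v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6
        if x_new <= 0:
            # linear interpolation for the crossing time
            return t + dt * x / (x - x_new)
        x, v, t = x_new, v_new, t + dt


T1 = 4 * quarter_period(1.0)
T2 = 4 * quarter_period(2.0)
print(T1)       # ≈ 4*sqrt(2/1)*1.311 ≈ 7.42, matching the formula above
print(T2 / T1)  # ≈ 0.5, i.e. doubling A halves the period: T ∝ 1/A
```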
{ "domain": "physics.stackexchange", "id": 75849, "tags": "homework-and-exercises, newtonian-mechanics, spring, non-linear-systems, anharmonic-oscillators" }
Literature recommendation for classical density functional theory (DFT) and fundamental measure theory (FMT)
Question: I'm very much interested in properly learning about density functional theory (DFT) calculations in classical settings, for example as used in the theory of liquids. Apart from the success of DFT applied to many-body QM systems, for classical systems it remains the main theoretical approach of the statistical physics of liquids and solids. Similarly, but more recently, applications of fundamental measure theory (FMT, which is a more geometric approach) can be noticed more and more. In most current liquid state theory books, these approaches are only briefly introduced and almost never in depth (as they are often assumed known by the authors). Although I have the basics of statistical mechanics, I am very new to classical DFT calculations, and would be very much interested in any piece of literature, be it review papers, lecture notes or textbooks, that would softly and slowly introduce these techniques.

Answer: An excellent introduction to the field of classical DFT is Robert Evans's article Density functionals in the theory of nonuniform fluids (Fundamentals of inhomogeneous fluids 1 (1992): 85-176). It is a bit older, but very accessible and yet thorough. Some proofs are omitted, but references are provided for further information.

The standard textbook for liquid state theory, Theory of Simple Liquids by Jean-Pierre Hansen and Ian McDonald, has a section on DFT and also discusses Fundamental Measure Theory. It is a great book, but the presentation is quite terse and makes numerous references to preceding sections that you will have to go through first. Nevertheless, it is a good starting point if you have the book lying around (as everyone interested in liquid theory should).

DFT is typically presented in terms of correlation functions of simple fluids, which means that one has to deal with complicated-looking integral equations.
The article An introduction to inhomogeneous liquids, density functional theory, and the wetting transition by Hughes, Thiele and Archer presents an alternative introduction to DFT in terms of a simple (Ising) lattice model, which might be a good approach for those who have a more general statistical mechanics background.
{ "domain": "physics.stackexchange", "id": 40290, "tags": "statistical-mechanics, resource-recommendations, density-functional-theory, liquid-state" }