SO(3) Representation
Question: Consider an SO(3) invariant Lagrangian density $$\mathcal{L}=\frac{1}{2}\sum_{a=1}^{3}(\partial_{\mu}\phi_{a}\partial^{\mu}\phi_{a}-m^{2}\phi_{a}^{2})$$ Denote the three SO(3) generators by $T_{a}$, and the corresponding Noether charges by $Q_{a}$. In the quantised theory, there are three types of excitation, one for each field, and a single particle excitation may be written $|k,a\rangle$ where $k$ denotes its four momentum and $a$ its type. It is possible to show that $$Q_{a}|k,b\rangle=\sum_{c=1}^{3}(T_{a})_{cb}|k,c\rangle$$ Apparently this proves that the single particle states form a three-dimensional representation of SO(3). However I don't understand where the final conclusion comes from. Could anyone help? Please try not to stray into too much mathematical language if possible :) Answer: The logic is as follows. Fix a momentum $k$, and take the space of single-particle states with this momentum. Any state in this space can be written as a linear combination $$| \psi \rangle = \alpha | k, 1 \rangle + \beta | k, 2 \rangle + \gamma | k, 3 \rangle$$ so you can see already that if these states are a representation, whatever that means, it's surely going to be a three-dimensional one, because the space of one-particle states with fixed momentum is three-dimensional. Now, a warning. When physicists say that some vectors are a representation, what they really mean is that you have some operators on that space of vectors which form a representation of some matrices. So we really need to look at the $Q_a$, not so much at the states. Another warning: the $Q_a$ really are a representation of the Lie algebra of $SO(3)$. This is a fancy way of saying that they act just like the generators $T_a$ of $SO(3)$: they're representing them, not the actual group elements. What does it mean to say they're a representation?
We know that the generators obey the commutation relations $$[T_a, T_b] = \sum_c i \epsilon_{abc} T_c.$$ (Remember that the $T_a$ are $3\times3$ matrices, not operators on Hilbert space.) The $Q_a$ being a representation means that they too should fulfill these conditions: we need to check that $$[Q_a, Q_b] = \sum_c i \epsilon_{abc} Q_c.$$ To do that, we take the left hand side, act on some state $|k, d \rangle$, use the action of the $Q_a$ (that you wrote in your question) and the commutation relations of the $T_a$ (which, again, are just matrices), and see if we get the right hand side. I'll let you do the math, but this is the general idea.
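If you want to see the commutator check play out numerically, here is a small sketch. It assumes the generators in the adjoint (three-dimensional) representation, $(T_a)_{bc} = -i\epsilon_{abc}$, which is the standard convention but not spelled out above:

```python
import numpy as np

# Levi-Civita symbol: eps[a, b, c] = epsilon_{abc}
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c] = 1.0
    eps[a, c, b] = -1.0

# Adjoint-representation generators of so(3): (T_a)_{bc} = -i * epsilon_{abc}
T = [-1j * eps[a] for a in range(3)]

# Check [T_a, T_b] = i * sum_c epsilon_{abc} T_c for every pair (a, b)
for a in range(3):
    for b in range(3):
        lhs = T[a] @ T[b] - T[b] @ T[a]
        rhs = 1j * sum(eps[a, b, c] * T[c] for c in range(3))
        assert np.allclose(lhs, rhs)
print("so(3) commutation relations hold")
```

The same loop is exactly the check the answer asks you to do for the $Q_a$, with matrix multiplication standing in for operator composition on the one-particle states.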
{ "domain": "physics.stackexchange", "id": 41026, "tags": "quantum-field-theory, group-theory, group-representations" }
Trying to Classify MNIST where pixels are shuffled with a seed. Why is it not working?
Question: I transformed the MNIST dataset as follows (X (70000 x 784) is the training matrix):

np.random.seed(42)
def transform_X():
    for i in range(len(X[:,1])):
        np.random.shuffle(X[i,:])

I had thought that shuffling the pixels in an image would make the digits unrecognizable by humans, but the machine learning algorithms would still be able to learn from the images with shuffled pixels correctly. I cannot understand the mathematical reason behind the failure to learn. Do you think this should work, and should I recheck my code for mistakes? I trained on SVC and SGDClassifier from sklearn. Results from both are just randomness. Answer: You are essentially setting each image to a (more or less) random image. And now you wonder why it is not learning? There is (almost) no data left to learn from. Let me give you a similar example. Here is some (unshuffled) training data:

0 0 0 -> 0
0 0 1 -> 0
0 1 0 -> 0
0 1 1 -> 0
1 0 0 -> 1
1 0 1 -> 1
1 1 0 -> 1
1 1 1 -> 1

The pattern is pretty obvious: the first column is equal to the label. The next two columns are irrelevant. Now let me give you some shuffled training data:

0 0 0 -> 0
0 1 0 -> 0
1 0 0 -> 0
0 1 1 -> 0
1 0 0 -> 1
1 1 0 -> 1
1 0 1 -> 1
1 1 1 -> 1

Good luck finding the correct pattern there. And this is only 3-dimensional; things get much worse with more dimensions. One thing that could work: store a random permutation and apply the same permutation to each image.
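To make the last suggestion concrete, here is a minimal sketch (with a random matrix standing in for MNIST) contrasting per-image shuffling with one fixed permutation; the variable names are mine, not from the question:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.integers(0, 256, size=(6, 784)).astype(float)  # stand-in for MNIST rows

# What the question's code does: a fresh permutation for every image,
# which destroys any pixel-to-pixel correspondence across images.
X_bad = X.copy()
for i in range(len(X_bad)):
    rng.shuffle(X_bad[i, :])

# The fix: draw ONE permutation and apply it to every image. Each output
# column still has a consistent meaning, so a classifier can learn.
perm = rng.permutation(X.shape[1])
X_good = X[:, perm]

# The fixed permutation is invertible, so no information is lost.
inv = np.argsort(perm)
assert np.array_equal(X_good[:, inv], X)
```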
{ "domain": "cs.stackexchange", "id": 9417, "tags": "machine-learning, neural-networks, statistics" }
EfficientNet: Compound scaling method intuition
Question: I was reading the paper EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks and couldn't get my head around this sentence: Intuitively, the compound scaling method makes sense because if the input image is bigger, then the network needs more layers to increase the receptive field and more channels to capture more fine-grained patterns on the bigger image. In the case of a big image, why does the network need more layers to increase the receptive field? What does increasing the receptive field mean? Increasing its width/height? If so, can't we do that directly, without increasing the number of layers in the network? Is "fine-grained patterns" referring to the noisy shapes we can see after visualizing convolution output? I feel like I am missing/misunderstanding something evident. Answer: Receptive field refers to the number of input pixels that a convolutional filter will operate on. There's a nice distill article about how to calculate receptive field size for your filters (with a nice visualization of receptive field size) and an interactive calculator here if you're only curious about how receptive field size grows with changes to depth and filter size. Increases to receptive field size typically come from adding layers and from increasing the kernel size. A larger kernel operates on more pixels, which grows the receptive field. Increasing the depth of your network refers to adding additional convolutional layers. These downstream filters operate on the feature maps produced by the initial conv. layers of your net, which increases the receptive field for the filters in those additional layers (if this isn't clear, this is a good guide). The distill article also goes into detail about how other operations affect receptive field size. As for the claim about capturing more fine-grained patterns, it's more in line with the intuition that more filters will give the network more ways to learn specific features of your data.
See articles around visualizing convolutional filters for a sense of what types of features are captured (this tutorial on object detection links to a nice visualization). Hope this helps!
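As a rough illustration of why depth grows the receptive field, here is a sketch of the standard recurrence for receptive field size (my own helper, not from the paper), assuming a simple chain of convolutions:

```python
def receptive_field(layers):
    """Receptive field size for a stack of conv layers.

    `layers` is a list of (kernel_size, stride) pairs. Uses the standard
    recurrence: rf += (k - 1) * jump; jump *= s.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Each extra 3x3 conv (stride 1) grows the receptive field by 2 pixels...
print(receptive_field([(3, 1)] * 2))  # -> 5
print(receptive_field([(3, 1)] * 5))  # -> 11
# ...while strided layers compound the growth of the layers after them.
print(receptive_field([(3, 2), (3, 2), (3, 1)]))  # -> 15
```

For a bigger input image, you need more (or strided) layers before a single unit "sees" the whole image, which is the intuition behind the quoted sentence.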
{ "domain": "datascience.stackexchange", "id": 6924, "tags": "deep-learning, neural-network, cnn, computer-vision, convolutional-neural-network" }
3-stack implementation with a single array
Question: I'm teaching myself data structures and would really appreciate some feedback on my stack implementation. A couple of things I'm not sure if I should be doing: creation of the array and pointers using new.

// Implement 3 stacks with one array
#include <iostream>

class SingleArrayStacks {
private:
    int stack_size;
    int *array;
    int *pointers;

    int get_top_position(int stack_num) {
        return (stack_num * stack_num) + pointers[stack_num];
    }

public:
    SingleArrayStacks(int array_size = 100, int num_stacks = 3) {
        array = new int[array_size];
        pointers = new int[num_stacks];
        stack_size = array_size / num_stacks;
        std::fill_n(pointers, num_stacks, -1);
    }

    ~SingleArrayStacks() {
        delete[] array;
        delete[] pointers;
    }

    void print_stack(int stack_num) const {
        std::cout << "Current stack state: ";
        for (int i = 0; i < sizeof(array); i++) {
            std::cout << array[i];
        }
        std::cout << std::endl;
    }

    bool is_empty(int stack_num) const {
        return pointer[stack_num] == -1;
    }

    void push(int stack_num, int val) {
        if (pointers[stack_num] > stack_size) {
            throw std::runtime_error("Stack is full");
        } else {
            array[get_top_position(stack_num) + 1] = val;
            pointers[stack_num]++;
        }
    }

    int pop(int stack_num) {
        if (is_empty(stack_num) {
            throw std::runtime_error("Stack is empty");
        } else {
            int val = array[get_top_position(stack_num)];
            array[get_top_position(stack_num)] = NULL;
            pointers[stack_num]--;
            return val;
        }
    }

    int top(int stack_num) {
        if (is_empty(stack_num) {
            throw std::runtime_error("Stack is empty");
        } else {
            return array[get_top_position(stack_num)];
        }
    }
};

Answer: Compiler errors: Your code did not compile under Clang! It would not compile anywhere, for that matter, as it has a few syntax errors: In the method is_empty(), pointer is not declared. It should be pointers (plural). In both the pop() and top() methods, this line is broken:

if (is_empty(stack_num) {
//                     ^-------- Missing a `)` here!

Compiler warnings: Always compile with warnings turned on and set them to the highest level practical.
If you have the habit of ignoring warnings, try compiling with "warnings as errors" (-Werror for Clang and GCC) to force yourself into fixing them. That said, your code only produced one warning after the errors above were fixed:

array[get_top_position(stack_num)] = NULL;
//                                   ^^^^--------- implicit conversion of NULL constant to 'int'

NULL is not the same as int. In fact, an implementation is free to define NULL as whatever it likes, so don't assume it will be convertible to an integer on all compilers/platforms. Code review: Now, if I get the idea behind your code, you intend to have a single array with several stacks sharing this array. Your implementation doesn't seem to be doing that correctly. I could not test it thoroughly, but the helper array pointers, which doesn't store pointers by the way, seems questionable. The method get_top_position() also seems a bit contrived to me. print_stack() is broken, so I couldn't print the stacks to validate the state of the structure. I would suggest that you attempt to simplify this by storing actual pointers (or indexes) to the sub-arrays inside the main array. Then you won't need any additional offset calculation when pushing/popping. You also have the advantage that all stacks share the same size.

main array of ints:
+--+--+--+--+--+--+--+--+--+--+--+--+----
|  |  |  |  |  |  |  |  |  |  |  |  | ...
+--+--+--+--+--+--+--+--+--+--+--+--+----

 |           |           |           |
 V           V           V           V
+-----------+-----------+-----------+----
|  stack 0  |  stack 1  |  stack 2  | ...
+-----------+-----------+-----------+----

 |           |           |           |
 V           V           V           V
pointer[0]  pointer[1]  pointer[2]  pointer[N] ...

Overall code improvements: sizeof misuse: This is not doing what you expect:

for (int i = 0; i < sizeof(array); i++) {

sizeof is a compile-time operator, so it cannot infer the size of dynamically allocated arrays; only arrays whose size is known at compile time (e.g.: char buf[128]) can have their size inferred with sizeof.
You must keep a member variable with the size of the stacks and another with the main array's size. top() only inspects data, so it should also be a const method. Simplify if-else logic where it is not needed. Example:

if (is_empty(stack_num)) {
    throw std::runtime_error("Stack is empty");
} else {
    return array[get_top_position(stack_num)];
}

No need to keep the if-else when both paths will exit the function:

if (is_empty(stack_num)) {
    throw std::runtime_error("Stack is empty");
}
return array[get_top_position(stack_num)];

Instead of hardcoding cout in print_stack(), you could take the output parameter as an std::ostream &. However, such a function is asking to become an output stream operator. Manual memory management (with new/delete) is a dated practice in C++. Even for custom containers, the use of smart pointers is strongly advised. I would replace the raw pointers with at least a std::unique_ptr, or even better, a std::vector.
{ "domain": "codereview.stackexchange", "id": 12458, "tags": "c++, stack" }
Misunderstanding about Simon Stevinus's proof
Question: I came across this proof in my textbook, but I am not sure exactly what it's proving. Could someone provide an explanation as to why this proof proves that forces combine in the same way as displacements? Simon Stevinus's proof. Answer: They're proving that if something sits on an incline, its weight along the line parallel to the incline is $mg \sin \theta$. The reason they say this is significant is because you can get the same exact result by treating forces as vectors and adding up the normal forces and gravitational forces. To restate his argument more clearly: If we put a loop of chain around an incline, it'll eventually come to an equilibrium (obviously it can't be a perpetual motion machine and move forever!) Once we have it at an equilibrium, we can remove the part that hangs below, as it is perfectly symmetrical and will not affect the equilibrium of each side. Now, we know, since the system is in equilibrium, that the weight of side $a$ must equal the force that side $b$ exerts. If we assume constant mass density, $\lambda$, then the weight of side $a$ is $(a\lambda)g$, and therefore the force needed to support side $b$ is $(a\lambda)g$. However, we can rewrite $a$ in terms of $b$; namely, $a=b\sin \theta$. Therefore, the force needed to support side $b$ is $(b\lambda \sin \theta)g$, which is $mg\sin \theta$, where $m$ is the mass of side $b$. You can find more information here: Inclined Plane.
{ "domain": "physics.stackexchange", "id": 93150, "tags": "newtonian-mechanics" }
Did vertebrates diverge from other deuterostomes via neoteny?
Question: Question: Is there any research corroborating my suspicion that vertebrates diverged from other deuterostomes via neoteny (the attainment of sexual maturity while still in larval stage)? Context: Most echinoderms have pelagic, bilaterally symmetric larvae, while the adults are not bilaterally symmetric (except for sea cucumbers) and live on the sea floor. The same is true for tunicates, which are supposed to be the sister group to vertebrates. Adult echinoderms also have very reduced nervous systems, although I am not sure whether the larvae have more advanced nervous systems than the adults. For tunicates, which are more closely related to vertebrates, the nervous system definitely does degenerate when transitioning from pelagic larvae to sessile adults. Echinoderms are also one of the few clades of bilateria, except for vertebrates, which have endoskeletons. And of course, echinoderms, tunicates, and vertebrates are all deuterostomes. In contrast to tunicates and echinoderms, the MRCA of vertebrates is thought to have resembled a lancelet, which is pelagic and which has a more advanced nervous system than adult tunicates or echinoderms. In other words, it seems like ancestral vertebrates retained larval features into adulthood, and then just grew bigger in size. Answer: Echinoderms are sort of bilaterally symmetrical: imagine if you took a velvet worm and then twisted it around in a big loop with the legs sticking out until the head was stuck to the tail end but still pointed away from each other. That is basically how echinoderms went from bilateral to radial. Of course, in practice this is done by rearranging hox genes, but it is a good way to visualize it.
https://evodevojournal.biomedcentral.com/articles/10.1186/2041-9139-5-22 As for chordates, look at hemichordates (which also have endoskeletons), and you can see why the general belief is that tunicates evolved that life stage after branching off (kind of like barnacles), rather than vertebrates losing that life stage.
{ "domain": "biology.stackexchange", "id": 6303, "tags": "evolution, zoology, palaeontology" }
When computing asymptotic time complexity, can one variable dominate over another?
Question: I want to express the asymptotic time complexity for the worst case scenario of sorting a list of $n$ strings, each string of length $k$ letters. Using merge-sort, sorting a list of $n$ elements requires $O(n\log n)$. Comparing two strings of length $k$ has a cost of $O(k)$. Therefore, the cost would be $O(kn\log n)$. However, I know some restrictions about $k$ and $n$ due to the nature of the problem. In particular, I know that for any list, $0 \lt k \leq 20$, and $0 \lt n \leq 80000$. In other words, the number of words in a list might vary in a much larger range than the length of the words. In that case, would it be correct to say that $n$ dominates over $k$ and therefore the cost could be expressed as $O(n\log n)$? Or does the fact that we are discussing asymptotic costs make those restriction meaningless (as we are describing how the algorithm is impacted by the growth of each variable, regardless of how much they can actually grow)? In general, if two variables are independent, is it possible to dismiss one of them from the asymptotic cost under certain circumstances? Answer: First of all, if $k$ and $n$ are bounded then all complexities trivialize to $O(1)$. Hence a better assumption is something like $k = O(\log n)$. Under this assumption, you can, for example, say that $O(k + n) = O(n)$, and even $O(k + \log n) = O(\log n)$. However, you cannot say that $O(kn) = O(n)$, since it's not necessarily true! If $k = \log n$, then it's not true that $O(kn) = O(n)$. Instead, all you can say is that $O(kn) = O(n\log n)$.
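If it helps to see the $O(kn\log n)$ bound concretely, here is a sketch that counts character comparisons during a merge sort; the helper names are mine, not from the question:

```python
import math
import random

char_cmps = 0

def le(a, b):
    """String comparison scanning at most k characters: O(k) per call."""
    global char_cmps
    for x, y in zip(a, b):
        char_cmps += 1
        if x != y:
            return x < y
    return len(a) <= len(b)

def merge_sort(xs):
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if le(left[i], right[j]):
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

k, n = 20, 4096
words = ["".join(random.choices("ab", k=k)) for _ in range(n)]
assert merge_sort(words) == sorted(words)

# Merge sort performs at most n*ceil(log2 n) string comparisons, each
# costing at most k character steps: O(k * n log n) overall.
assert char_cmps <= k * n * math.ceil(math.log2(n))
print(char_cmps, "character comparisons; bound:", k * n * math.ceil(math.log2(n)))
```

The count stays below $k \cdot n\lceil\log_2 n\rceil$; whether you may then drop $k$ from the bound depends on the assumption you make about how $k$ grows with $n$, as the answer explains.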
{ "domain": "cs.stackexchange", "id": 16049, "tags": "time-complexity, asymptotics" }
Do not understand reason for compile error
Question: Consider the following partial application in my new Hydro installation #include <ros/ros.h> int main(int argc, char** argv) { //... ros::start(); //... } when compiled with g++ -I/opt/ros/hydro/include I get the error: "undefined reference to 'ros::start()'" There is no error about a missing include and I even included <ros/init.h> and got the same error. I can see ROSCPP_DECL void start(); Does anybody know why I get this compile error? Originally posted by flb on ROS Answers with karma: 30 on 2014-02-28 Post score: 0 Answer: It is a linker error. You will have to link against libroscpp. Edit: You might find these excellent tutorials by Jonathan Bohren useful. They explain step by step and bottom up how programs using ROS are built and how this is aided by catkin. http://jbohren.com/articles/roscpp-hello-world/ http://jbohren.com/articles/gentle-catkin-intro/ http://jbohren.com/articles/modular-ros-packages/ Originally posted by demmeln with karma: 4306 on 2014-02-28 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by flb on 2014-03-01: That was it, thanks.
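For reference, a compile-and-link line matching the answer's advice might look like the following; the library directory and output name are assumptions based on a default Hydro install, so adjust the paths for your setup (with catkin, the equivalent is listing roscpp in target_link_libraries via catkin):

```shell
# -lroscpp resolves the undefined reference to ros::start();
# -L adds the directory where libroscpp.so lives on a default install.
g++ main.cpp -I/opt/ros/hydro/include -L/opt/ros/hydro/lib -lroscpp -o my_node
```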
{ "domain": "robotics.stackexchange", "id": 17130, "tags": "ros-hydro" }
What happens to an object's velocity as it enters and travels through a Black Hole?
Question: Assuming an object is traveling at some speed that is less than the speed of light, and it then enters a Black Hole. Would that object's velocity increase, decrease, or remain constant between the time it passes the Schwarzschild radius and before it reaches the singularity? Answer: For the following I will consider a static, uncharged Schwarzschild black hole. We can use GR perfectly well beyond the horizon ($r=2M$) of a black hole. The horizon is just a coordinate singularity of the Schwarzschild metric: it is not a physical singularity. It is true that beyond the horizon there are no geodesics out of it, but we can still use GR inside. We cannot receive signals/"see" the object beyond the horizon (from outside), but that does not mean that there is some crazy stuff happening there. For discussing motion across the horizon, Gullstrand–Painlevé (PG) coordinates are very well suited. These coordinates have no singularity at $r=2M$; the singularity of the PG coordinates is at the center ($r=0$) of the black hole, and this is the physical singularity of the black hole. The speed inside the horizon stays finite and below the speed of light. Only at the singularity at the center does the speed become infinite, but at that point, very close to the singularity, GR may not hold on its own, since effects of quantum mechanics become important. To begin with I would highly recommend reading the sections "Motion of raindrop" and "Speeds of light" in the Wikipedia article on the PG metric. In short: the speed of the free-falling object increases but stays finite and below $c$ until it reaches the singularity. The acceleration and speed are continuous across the horizon, since again the horizon is only a coordinate singularity, not a physical one. Apart from this, it is always a bit difficult to discuss such things in words, since time, distances and speeds are relative: especially around black holes.
So the question of where we are observing and describing from is always important and tricky. It is very hard to describe these effects/motions precisely (at least for me). For rotating/charged black holes there might be similar discussions, but for those cases I do not have references, so I only discussed the situation for the Schwarzschild black hole. I am certainly not an expert on geodesics and effects around black holes, but I am pretty confident that what I said here should be correct.
{ "domain": "physics.stackexchange", "id": 34554, "tags": "black-holes" }
Doubt with Relativistic Lagrangian Expression
Question: I'm trying to learn some Lagrangian mechanics by myself, and I found on Wikipedia this relativistic Lagrangian $$L=\frac{1}{2}mu_{\alpha}u^{\alpha}, \tag{1}$$ where $$u^\alpha=\frac{dx^{\alpha}}{d\tau}\tag{2}$$ is the four-velocity, and I don't know if $m>0$ is the rest or relativistic mass. And here is my problem. I used these Euler-Lagrange equations $$\frac{\partial L}{\partial x^{\beta}}-\frac{{\rm d}}{{\rm d}\tau}\left(\frac{\partial L}{\partial u^{\beta}}\right)=0, \tag{3}$$ which are in the article, and I finish with this: $$\frac{{\rm d}}{{\rm d}\tau}\left(mu_{\beta}\right)=0,\tag{4}$$ which I think is the conservation of four-momentum, if $m$ is the rest mass. Right? But a problem came up when I calculated the energy with this formula $$E=\frac{\partial L}{\partial \dot{r}}\cdot\dot{r}-L\tag{5}$$ with $r$ the position vector. I did this, I don't know if it's correct: $$E=\frac{\partial L}{\partial u^{\beta}}u^{\beta}-L\tag{6}$$ $$E=mu_{\beta}u^{\beta}-\frac{1}{2}mu_{\alpha}u^{\alpha}\tag{7}$$ Now because $$u_{\beta}u^{\beta}=c^{2}\tag{8}$$ I get $$E=mc^{2}-\frac{1}{2}mc^{2}=\frac{1}{2}mc^{2}.\tag{9}$$ This is not the relativistic energy of the particle, of course; there is a missing gamma factor, which could appear if $m$ is the relativistic mass. But then it still isn't the energy, because of the $\frac{1}{2}$, and this confuses me about what $m$ is. Is anything there correct? What am I doing wrong? Answer: For your second question, observe that your Hamiltonian is in fact $$\mathcal{H}=\frac{p_{\alpha}p^{\alpha}}{2m}$$ because $\mathcal{H}=\mathcal{H}\left(x^{\alpha},p_{\alpha}\right)$, so you are not allowed to use $u_{\alpha}$.
Let's test it by trying to set up Hamilton's equations: $$\begin{cases}\frac{{\rm d}p_{\alpha}}{{\rm d}\tau}=-\frac{\partial\mathcal{H}}{\partial x^{\alpha}}=0\\\frac{{\rm d}x^{\alpha}}{{\rm d}\tau}=\frac{\partial\mathcal{H}}{\partial p_{\alpha}}=\frac{p^{\alpha}}{m}\end{cases}$$ That's exactly identical to your equation using Lagrangian mechanics, so it seems like you've constructed the right Hamiltonian. Keep in mind that the Hamiltonian is not the total energy; it just happens to be so in some cases. The key point here is that you can't put $p_{\alpha}p^{\alpha}=m^{2}c^{2}$, for the same reasons you didn't put $u_{\alpha}u^{\alpha}=c^{2}$ in the Lagrangian, ending up with $$\mathcal{L}=\frac{1}{2}mc^{2}$$ The fact that both $\mathcal{L}$ and $\mathcal{H}$ are constants if you substitute the invariants means that they are constants over trajectories. EDIT 1: If you want your Hamiltonian to be the total energy, you should write your Lagrangian as $$\mathcal{L}^{\prime}=-\frac{mc^{2}}{\gamma}=-mc^{2}\sqrt{1-\frac{v^{2}}{c^{2}}}$$ and use $t$ instead of $\tau$ to get $$\boldsymbol{p}=\frac{\partial\mathcal{L}^{\prime}}{\partial\boldsymbol{v}}=\frac{m\boldsymbol{v}}{\sqrt{1-\frac{v^{2}}{c^{2}}}}$$ and so $$\mathcal{H}^{\prime}=\frac{\partial\mathcal{L}^{\prime}}{\partial\boldsymbol{v}}\cdot\boldsymbol{v}-\mathcal{L}^{\prime}=\frac{mv^{2}}{\sqrt{1-\frac{v^{2}}{c^{2}}}}+mc^{2}\sqrt{1-\frac{v^{2}}{c^{2}}}=$$ $$=\frac{mc^{2}}{\sqrt{1-\frac{v^{2}}{c^{2}}}}=\gamma mc^{2}$$ EDIT 2: To get the Lagrangian $\mathcal{L}^{\prime}=-\frac{mc^{2}}{\gamma}$ from yours, you should look at the action $$S\left[u_{\alpha}\right]=\int\mathcal{L}\,{\rm d}\tau$$ Our goal is to change variables to $\boldsymbol{x},\boldsymbol{v},t$ instead of $x^{\alpha},u_{\alpha},\tau$.
Using $\frac{{\rm d}\tau}{{\rm d}t}=\frac{1}{\gamma}$ this gives $$S\left[\boldsymbol{x}\right]=\int\mathcal{L}\frac{{\rm d}\tau}{{\rm d}t}{\rm d}t=\int\frac{mc^{2}}{2\gamma}{\rm d}t$$ so we get our new Lagrangian to be $$\mathcal{L}^{\prime\prime}=\frac{mc^{2}}{2\gamma}=-\frac{1}{2}\mathcal{L}^{\prime}$$ This is identical to $\mathcal{L}^{\prime}$ up to a constant, so the physics of $\mathcal{L}^{\prime}$ and $\mathcal{L}^{\prime\prime}$ is the same.
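The EDIT 1 computation can be checked symbolically; here is a small sketch using SymPy (the symbols are the answer's $m$, $c$, $v$):

```python
import sympy as sp

m, c, v = sp.symbols("m c v", positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

# The coordinate-time Lagrangian L' = -m c^2 / gamma
L = -m * c**2 / gamma

# Canonical momentum p = dL'/dv and Hamiltonian H' = p*v - L'
p = sp.diff(L, v)
H = sp.simplify(p * v - L)

assert sp.simplify(p - gamma * m * v) == 0      # p = gamma m v
assert sp.simplify(H - gamma * m * c**2) == 0   # H' = gamma m c^2
print(H)
```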
{ "domain": "physics.stackexchange", "id": 45261, "tags": "special-relativity, energy, lagrangian-formalism, hamiltonian-formalism, point-particles" }
Does Alcubierre drive allow time travel?
Question: In the Alcubierre drive, one can travel apparently faster than the speed of light by "compressing" space in front, in the direction of travel. So let's say we have a stationary observer at point A, and an object travels from point A to point B and back, departing A at $t_0$ according to the clock of the stationary observer. Is it possible to use the Alcubierre drive to arrive back at point A before $t_0$ as far as the stationary observer is concerned? Answer: Alcubierre [1] and others have shown how a suitable shaping of space-time curvature can be used to generate a warp bubble, where the contents of the bubble can be transported at arbitrarily large speeds, even exceeding the local speed of light, without suffering inertial effects. This can be called an inertialess warp drive. “Abstract: It is shown how, within the framework of general relativity and without the introduction of wormholes, it is possible to modify a spacetime in a way that allows a spaceship to travel with an arbitrarily large speed. By a purely local expansion of spacetime behind the spaceship and an opposite contraction in front of it, motion faster than the speed of light as seen by observers outside the disturbed region is possible. The resulting distortion is reminiscent of the 'warp drive' of science fiction. However, just as happens with wormholes, exotic matter will be needed in order to generate a distortion of spacetime like the one discussed here.” [1] This well-known paper has inspired other “warp drive” designs [2]; however, all of these share a common defect: the requirement for “exotic matter” to provide the proper shaping of the required space-time curvature. This exotic-matter defect is shared with wormhole techniques. To answer your question: any FTL drive can be used to create closed timelike loops. With these you can travel back in time, but you get stuck in a loop.
All of this is mathematically consistent with the field equations for General Relativity, but probably not possible in our universe due to the inability to establish the proper conditions. [1] Alcubierre, M. "The Warp Drive: Hyper-fast Travel within General Relativity," Classical and Quantum Gravity, 11(5), L73-77 (1994). [2] Everett, Allen, and Roman, Thomas, "Time Travel and Warp Drives", University of Chicago Press, Chicago and London (2012).
{ "domain": "physics.stackexchange", "id": 29747, "tags": "general-relativity, time-travel, warp-drives" }
Did I wire these solar panels correctly?
Question: Recently I bought 3 solar panels rated at 5 V, 200 mA each. I want to use them to charge a 5 V battery bank to charge a phone. Thinking about the proper way to arrange them, I thought I could connect all of them in parallel to get maximum current, but realized that if the sunlight was a little weak, it would not generate the full 5 V, thus preventing charging. So I decided to put 2 in parallel, to give the equivalent of one 5 V solar panel, connected in series with the 3rd panel, to give the equivalent of 10 V: sacrificing a little current to get higher voltage, to allow the charging to happen over a wider range of sunlight power. The following picture shows the wiring and the schematic I intend to replicate. Now that I have the equivalent of a 10 V, 400 mA solar panel, I used a 7805 voltage regulator to cut down the excess to 5 V. As a final step, I added a standard diode to prevent the panels from draining the battery in the shade. And now, measuring: Questions: I didn't think about this before putting in the diode, but is it okay to put the blocking diode on the ground wire? I know some applications do not use the ground except for safety (i.e. 3-phase systems). It would also be helpful to avoid the 0.7 V drop across the diode before the regulator. According to my previous calculations, I'm supposed to get a maximum of 10 V output before regulation, and considering that the sun was pretty bright today, why was the reading I got not more than 6 V? I have measured across each panel separately and got around 5.5 V. Are the connections right? Answer: Putting a single panel in series with two other panels that are in parallel does not accomplish what you think it does. The overall current of such a setup is limited by the single panel to 200 mA, so the three panels will not produce any more power than you'd get by just putting two panels in series. A single solar cell can be thought of as a current source in parallel with a silicon diode. The current source is driven by the incoming light.
The diode "shorts out" the current source, which is why the voltage across a single cell can never be more than about 0.65 V, the forward drop of a silicon diode. A 5V panel is approximately 10 such cells connected in series. The current through all of the cells will be limited by the cell that is receiving the least amount of light. Also, the blocking diode in your diagram is pointing the wrong way. If your panels are rated at 5V, and your "battery bank" requires 5V to charge, then you don't need to do anything more than put all three panels in parallel and hook them directly to the battery. Forget about blocking diodes.
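A quick power-budget sketch of the two wirings, treating the panels as ideal 5 V / 200 mA sources (real panels sag under load, so this is only bookkeeping):

```python
V_PANEL, I_PANEL_MA = 5, 200  # rated panel output: 5 V at 200 mA

# Asker's wiring: (two in parallel) in series with the lone third panel.
# The series current is limited by the weakest element: the single panel.
v_series = 2 * V_PANEL                         # 10 V total
i_series_ma = min(2 * I_PANEL_MA, I_PANEL_MA)  # 200 mA, not 400 mA
p_series = v_series * i_series_ma / 1000       # watts
print(p_series)  # prints 2.0 -- no more than two panels in series alone

# All three panels in parallel instead: 5 V at 600 mA.
p_parallel = V_PANEL * 3 * I_PANEL_MA / 1000
print(p_parallel)  # prints 3.0
```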
{ "domain": "engineering.stackexchange", "id": 5389, "tags": "electrical-engineering, photovoltaics, solar-energy" }
Newton's third law between moving charge and stationary charge
Question: Between two stationary charges, Newton's third law holds. But what if one of the charges is moving? The electric field of a moving charge is different from the field generated by a stationary charge. So let's say there are two charges: a charge $q_1$ moves toward another charge $q_2$, which is stationary. The relative velocity is $v$. In this situation, what would be the force between the two charges? Will the force be different from that between two stationary charges, or will it be the same? Will Newton's third law apply? Answer: Newton's third law does not apply directly to moving electric charges. Newton's third law is derived from conservation of linear momentum. For moving charges, the changing momentum in the electromagnetic field has to be taken into account, so $\bf F_{21}=-F_{12}$ does not always hold for moving charges.
{ "domain": "physics.stackexchange", "id": 69740, "tags": "newtonian-mechanics, electromagnetism, special-relativity, forces" }
Using mass of the observable Universe to estimate an energy equivalent
Question: For quite some time now, physicists have been able to estimate the mass of the observable universe. Reportedly it's around $10^{50} \:\mathrm{kg}$. There is also special relativity, which states that $E=mc^2$. If we can calculate the energy equivalent of the observable universe, can we extend the same logic to the rest of it? Let's leave electromagnetic radiation out of the picture for starters, since I don't know either the average photon density or the spectral distribution. Although if anyone does, please feel free to elaborate. Answer: Conservation of energy doesn't apply to cosmology. General relativity doesn't have a conserved scalar mass-energy that can be defined in all spacetimes [MTW]. There is no standard way to define the total energy of the universe (regardless of whether the universe is spatially finite or infinite). There is not even any standard way to define the total mass-energy of the observable universe. There is no standard way to say whether or not mass-energy is conserved during cosmological expansion. The WP article you linked to discusses these points. You've picked a number out of the article without paying attention to the text surrounding that number, which says that the number doesn't really mean much and can't be extended in the way you suggest. MTW: Misner, Thorne, and Wheeler, Gravitation, 1973. See p. 457.
{ "domain": "physics.stackexchange", "id": 1345, "tags": "energy, general-relativity" }
Why are gravitational waves not produced by objects moving at constant velocity?
Question: A gravitational wave is produced by a change in the gravitational field (source). If something is moving away from me at constant speed, its gravitational field will vary. So why do only accelerating bodies produce gravitational waves? Answer: Because radiation from such a body undergoing inertial motion would gainsay Galileo's postulate of relativity, i.e. that there is no measurement one can make within an inertial frame that could detect that frame's motion relative to another inertial frame. Gravitational radiation would bear energy away from the body, and in principle (at least from the classical paradigm that holds in special and general relativity) this energy loss could be measured from within the frame. If you think about it, the proposition that bodies emit radiation when in uniform motion relative to your own inertial frame is a tacit assumption that your own frame is privileged. If you are looking for an answer with more "technology" applied, then, as user Danu comments, the lack of radiation from uniformly moving bodies is not special to gravitation: there is an approximation to GR called Gravitoelectromagnetism which is almost exactly like Maxwellian electrodynamics and can be derived from assuming a delay, propagating at speed $c$, in gravitational effects. It's kind of like Coulomb's electrostatic law modified to heed the locality requirements of special relativity. In this picture, the gravitational field lines of a moving body are distorted in a relatively uniformly moving frame, but their shape does not change and the whole solution (essentially like that described by the Liénard–Wiechert potential for zero acceleration) is a field of constant shape translating uniformly with the body. One can easily see from this picture that only accelerating bodies can radiate energy: a field of constant shape whose only change is uniform, rigid translation with the body producing it must have a constant energy, so there can be no radiation.
One should be warned, however, that the gravitoelectromagnetic approach grossly overestimates the gravitational radiation power radiated from accelerating bodies when one uses the analogue of Larmor's formula / the Liénard–Wiechert potential. Such an approach gives several gigawatts of radiation from the Earth, whereas full GR calculates about 200 W. (Incidentally, even the several-gigawatt figure would be too small to measure for something Earth-sized.)
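The ~200 W figure can be checked with the standard quadrupole formula for a circular two-body orbit, $P = \frac{32}{5}\, G^4 (m_1 m_2)^2 (m_1+m_2) / (c^5 r^5)$ — assuming, as is usual for this quoted number, that it refers to the Earth–Sun orbit:

```python
# Back-of-the-envelope check of the ~200 W figure via the quadrupole
# formula for a circular orbit, applied to the Earth-Sun system.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
m_sun = 1.989e30     # kg
m_earth = 5.972e24   # kg
r = 1.496e11         # Earth-Sun distance, m

P = (32 / 5) * G**4 * (m_sun * m_earth)**2 * (m_sun + m_earth) / (c**5 * r**5)
print(P)             # roughly 200 watts
```

The steep $c^{-5}$ suppression is what makes gravitational radiation from planetary motion so absurdly feeble.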
{ "domain": "physics.stackexchange", "id": 35719, "tags": "general-relativity, gravitational-waves" }
What is the difference between zero background radiation and field background radiation in Nuclear Physics measurements?
Question: Can someone please explain the difference between these two terms (Zero Background Radiation and Field Background Radiation) used in radiometric prospecting measurements? Answer: In the abstract of the article Measurements of environmental background radiation at location of coal-fired power plants (PubMed, 2004) the terms are used in context. "Zero background radiation" is the natural expected radiation measured far away from the specific pollutant site (the coal-fired plant). It is a control or reference value in the study. "Background radiation field" is the effective ambient radiation at the site itself, as in this sentence from the same abstract: "Record radiation levels were found in a house where the effective dose due to ambient radiation fields was 131 mSv/a." The abstract also states the aim: "This investigation has been primarily done in order to check the impact of coal-fired power plants on the background radiation level in its vicinity."
{ "domain": "physics.stackexchange", "id": 21374, "tags": "nuclear-physics, radiation, measurements, nuclear-engineering" }
Comparing 2 ints without using comparative operators
Question: I was looking for questions that might be asked in a technical interview and I found this one: Write a function that determines if two integers are equal without using any comparative operators. I'm using Java and my solution was to wrap the ints in an Integer object, convert that Integer to a string and return: (firstInt.toString()).equals(secondInt.toString()). I have 2 questions: Is there a better Java answer to this question? Does this response fulfill the requirements and if so, is it a good one? public class Main{ public static void main(String[] args){ log(equal(1,2)); //prints out false log(equal(67,67)); //print out true log(equal(-5,-5)); //prints out true log(equal(-9,-4)); //prints out false; } public static boolean equal(int a, int b){ Integer first = a; Integer sec = b; return (first.toString()).equals(sec.toString()); } public static void log(Object o){ System.out.println(o); } } Answer: I find it highly unlikely that the interview question is designed to be answered in Java. It is more than likely referring to languages that can evaluate non-boolean value types as boolean expressions. A possible solution in JavaScript could be: function equal(a, b) { return !(a - b); // returns true only if (a - b) === 0 } Is there a better Java answer to this question? You can simplify your method by calling toString on the Integer class and passing in the integer you wish to convert. public static boolean equal(int a, int b){ return Integer.toString(a).equals(Integer.toString(b)); } Not that this is a better solution: a == b would be favored in any scenario. Does this response fulfill the requirements and if so, is it a good one? String.equals is doing a series of comparisons to verify the sequence of characters in your converted integers match; arguably this does not fulfill the requirements, but simply obfuscates it through an internal method call.
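The same truthiness trick carries over to other languages. Here is a Python analogue (a sketch added here, not from the original post) that uses XOR instead of subtraction: `a ^ b` is zero iff every bit matches, and `not` tests the integer's truthiness without any comparison operator. Unlike the subtraction trick, XOR also cannot overflow in fixed-width languages.

```python
def equal(a: int, b: int) -> bool:
    # a ^ b == 0 exactly when all bits of a and b agree;
    # `not` maps the zero/nonzero int to True/False with no comparison.
    return not (a ^ b)

print(equal(67, 67))   # True
print(equal(-9, -4))   # False
```

In Java the analogous `a ^ b` still needs an explicit `== 0` test, which is why the flattened-truthiness version only works in languages where integers are directly usable as booleans.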
{ "domain": "codereview.stackexchange", "id": 22132, "tags": "java, interview-questions" }
What gives things their colour?
Question: My 6 year old daughter asked me 2 biology-related questions yesterday and I tried my best to answer them with the aid of YouTube videos. One of the questions (I may post the other one too) was How do we see colour? I showed her a video on YouTube where it was explained that light hits an object and if that object is a tangerine then that object will absorb all colours except orange and will reflect orange and that's why we see it as being orange. I then repeated this explanation and she said, "But how do we see different colours?" I then showed her a video describing corneas, vitreous humours, lenses, optic nerves and so on. I think I lost her. I got a bit lost too. Now I have two problems, which I'd like to ask you all as questions: I do not know how to explain to a 6 year old how we are able to perceive colour. Does anyone know how this can be explained? I have never quite understood this idea that an object has a colour depending on the light that hits it. Okay, I understand that in low light objects have a different colour because there is not much light hitting them, and that different objects absorb different wavelengths of light and therefore appear as different colours. What I don't understand is that if I place a bar of gold and a bar of silver side by side in the same lighting conditions they do have different colours, so therefore there must be something inherent in these objects that gives them different colours. What is that something? Answer: I do not know how to explain to a 6 year old how we are able to perceive colour. Does anyone know how this can be explained? Well, depending on the depth you want to introduce her to, it can be difficult to explain to adults - much less children. You explained the basics well enough. Without going to the molecular mechanisms, here's a useful diagram: Light will pass through the eye and Retina until it hits the Cones and Rods.
That produces a reaction (which I'll address next) - which then causes signals to either resume or cease (Rods are, ironically, shut off by light - not turned on). These signals pass through Ganglion cells, through the Optic Nerve, and interpreted by the brain. When some cells are switched on, like a 3-Way traffic stop, they will prevent the information from other cells being transmitted. A simple way of explaining it might be that inside our eyes are millions of teeny-tiny molecules that act like light-switches, and the brightness and color of light determine which of the "light-switches" are on or off. If you want to go further into the actual mechanisms (or just have the background knowledge for future reference) the next diagram shows the mechanism itself (and the original in full-size since the one shown is squished if you prefer): The big things to note above is the change from cis-Retinal to trans-Retinal which occurs after a Photon is absorbed. This causes signaling molecules to go a bit wild, open ion channels, and the depolarization of the membrane propagates the charge down the cell - much like a neuron. I have never quite understood this idea that an object has a colour depending on the light that hits it. Okay, I understand that in low light objects have a different colour because there is a not much light hitting it and that different objects absorb different wavelengths of light and therefore appear as different colours. Well, let me stop you here. Low-light is a different situation than colored-light. Let's get the basics down: Photons are absorbed by the electrons of the atoms that compose the molecules of an object. Whatever Wavelength of light is not absorbed by the electrons is reflected, and it's this Wavelength that we perceive as the color of the object as our Cones absorb the reflected light. Photons can also be emitted when an electron moves to a lower-energy state. 
The Wavelength emitted by the electron is directly related to the difference between the High and Low energy states, as this diagram shows fairly well: The emitted photons are the object's "Incandescence" - which is when an object produces a color of light by itself. The color you perceive is NOT going to change if the object emits incandescent light because the object is generating its own light. Neon signs are a great example: The gases being subjected to a current emit light, and will appear whatever color it's supposed to be whether or not it's a blue moon or sunset. Photons which are reflected and not emitted - i.e. almost everything that doesn't have a power source - as I said above, are then absorbed by our Cones and our brain interprets the signals to produce a color. The reason why objects that reflect light can change color is because not all environmental light is the same. Red objects will appear Black under Blue light because Blue light doesn't contain Red Wavelengths - there's nothing to reflect, so the object absorbs all of the available Wavelengths - the very definition of Black! A lot of our color perception depends on ambient light, and most of the time - thanks to the Sun - that is a full-spectrum white light. Which brings me to answer the last bit of your question: What I don't understand is that if I place a bar of gold a bar of silver side by side in the same lighting conditions they do have different colours, so therefore there must be something inherent in these object that give them different colours. What is that something? Yes, there definitely is something inherent in both objects. That is: Their electron configurations absorb different chunks of the spectrum, and accordingly reflect different chunks of the spectrum. Although that's still a bit simplistic, since metals have some unique properties that other Elements do not. Their electrons exist in more of an "ocean" than around central atoms, but that's a whole other question. 
As a fun bit of Trivia to impress your daughter when she's old enough, it's always a fun fact to know that the color Yellow is completely constructed in your head. The human eye only has Rho (Red), Gamma (Green), and Beta (Blue) color receptors which have the following absorption pattern (from photo.net): What everybody sees as "Yellow" is actually when both the Green and Red receptors are activated at the Wavelength where they intercept above, which your brain interprets as "Yellow": Because your brain does a lot of processing, despite having only three color receptors we can perceive millions of colors (and shades/tones). Now for a while everywhere you look you'll be utterly amazed at what's going on, and you rightfully should be. ;-)
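The "red object under blue light looks black" point can be made concrete with a toy spectral calculation (a sketch added here; the step-function spectra are deliberately crude stand-ins for real reflectance and illuminant curves): the light reaching the eye is the product of the illuminant spectrum and the object's reflectance spectrum.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)   # visible range, nm

# Toy "red" object: reflects only wavelengths above ~600 nm.
red_reflectance = (wavelengths >= 600).astype(float)
# Toy illuminants: a narrow blue lamp and full-spectrum white light.
blue_light = ((wavelengths >= 440) & (wavelengths <= 490)).astype(float)
white_light = np.ones_like(wavelengths, dtype=float)

reflected_under_white = white_light * red_reflectance
reflected_under_blue = blue_light * red_reflectance

print(reflected_under_white.sum())   # positive: the object looks red
print(reflected_under_blue.sum())    # 0.0: nothing to reflect, looks black
```

The reflectance curve is the "something inherent" in the object; the perceived colour is that curve multiplied by whatever light happens to fall on it.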
{ "domain": "biology.stackexchange", "id": 628, "tags": "vision" }
Understanding the principle of photo diode signal generation
Question: I have two lasers, both illuminating a photodiode, and I'm not very sure what's physically going on in the photodiode. My understanding is the following: the electric fields of both lasers add up and are absorbed in the photodetector; the electrons excited by photons in the process generate a photocurrent; these charge carriers then oscillate like an antenna to emit a microwave frequency, which travels via the microwave cable to an oscilloscope; and the microwave in turn induces an oscillating voltage in the charge carriers inside the oscilloscope. I don't understand how a photocurrent turns into a microwave signal and back into a current. Answer: the electrons excited by photons in the process generate a photocurrent I'm with you so far. Each photon absorbed in the detector generates an electron-hole pair, which is quickly swept out of the depletion region, forming a photocurrent. these charge carriers then oscillate like an antenna to emit a microwave frequency I think you have cause and effect reversed here. You illuminated the photodiode with two lasers. These two lasers will create an interference pattern on the surface of the photodiode, although this pattern will be changing much too fast for the eye to see. If the laser emission frequencies are separated by some difference $\Delta f$, then the electric field at the photodiode surface will be fluctuating at this same rate. Seen at the quantum level, the rate of photon absorption will be fluctuating at this frequency also. If $\Delta f$ is in the microwave region (300 MHz - 300 GHz or so) then the rate of photon absorption will be varying at a microwave frequency (it typically takes special care to tune two lasers to be as near in frequency to each other as a few GHz). If the diode capacitance isn't high enough to damp out this variation, you'll also see this frequency in the photocurrent.
If you set up an oscilloscope to measure the photocurrent, and the oscilloscope is fast enough to measure microwave frequencies, then you should see a (quite noisy) microwave signal on your oscilloscope screen. Given the random fluctuations of frequency you're likely to see in this experiment, it might make more sense to measure the signal with a spectrum analyzer rather than an oscilloscope. Main point: the current doesn't start oscillating after the absorption occurs. The rate of absorption depends on the interference between the two laser beams, and the photocurrent is just proportional to the photon absorption rate.
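The square-law detection step can be simulated directly (a toy model added here: real optical frequencies are ~$10^{14}$ Hz, so the tone frequencies below are scaled down by many orders of magnitude, but the algebra is the same): a detector responds to intensity, i.e. the square of the total field, and squaring the sum of two tones produces a component at the difference frequency.

```python
import numpy as np

fs = 100_000                      # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f1, f2 = 10_000.0, 10_300.0       # two "laser" tones; difference = 300 Hz
field = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
photocurrent = field**2           # square-law (intensity) detection

spectrum = np.abs(np.fft.rfft(photocurrent))
freqs = np.fft.rfftfreq(len(photocurrent), 1 / fs)
# A real diode's bandwidth filters out the sum-frequency and 2f terms;
# emulate that by searching only well below f1.
mask = (freqs > 0) & (freqs < 5_000)
beat = freqs[mask][np.argmax(spectrum[mask])]
print(beat)                       # ~300 Hz, the difference frequency
```

Nothing "turns into" a microwave and back: the photocurrent simply follows the photon absorption rate, which already oscillates at $\Delta f$.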
{ "domain": "physics.stackexchange", "id": 46134, "tags": "semiconductor-physics, electronics, photoelectric-effect, microwaves" }
How to link against API package in ROS from a catkin package (as a user application)
Question: Hello all, I am working on KUKA youbot. To drive this robot I have to use an API package provided by Kuka. To use this APIs I need to make a ROS based user application (A ros package) which links against the API package. The API package contains other packages and folders such as 'lib'(including libsoem.a , libYouBotDriver.a and libYouBotDriver.so ), 'config', 'build', 'bin', 'soem'... My problem is that I don't know how to link my user application (the main package) to this API package? I use catkin workspace for making the main package. Later I will expand my package by applying other packages such as kinect camera to receive some data and command the youbot from main package and APIs to control it. At this step I would appreciate it if you tell how I can link a ros package (a user application) against a package contains some APIs which can control the movements of the robot. Thanks, Originally posted by A.M Dynamics on ROS Answers with karma: 93 on 2014-12-02 Post score: 0 Answer: The manufacturer's documentation should describe how to link against their package. If you're trying to link a catkin package against another catkin package, you can simply add that package as a dependency in your package.xml and add it as a component to find_package(catkin REQUIRED COMPONENTS ...) in your cmakelists. If you're trying to link a catkin package against a system package, you should look into the cmake find_package() command. It looks like there are already ROS wrappers for the youbot; you might want to check those out as well: http://wiki.ros.org/youbot_driver and http://wiki.ros.org/youbot_common Originally posted by ahendrix with karma: 47576 on 2014-12-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by A.M Dynamics on 2014-12-03: Thanks for your answer. Would you please tell me briefly what a system package is and how do I discriminate a system package with a catkin package? 
Does a system package mean the same as a repository? Comment by ahendrix on 2014-12-04: A system package is a package that is outside of the ROS ecosystem; boost and libsensors are both examples of system packages. Comment by A.M Dynamics on 2014-12-12: Thanks for helping me, but I am facing this error: [rosrun] Couldn't find executable named node_pkg/main.ccp. I explained my problem in this topic: http://answers.ros.org/question/199276/rosrun-couldnt-find-executable-named-error-for-youbot/ Comment by A.M Dynamics on 2014-12-12: Unfortunately, the manufacturer's documentation does not describe how to link against their package.
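For the case where the vendor package only ships prebuilt libraries (the question mentions libsoem.a and libYouBotDriver.so in a lib folder), a CMakeLists.txt along these lines is one way to link against them. This is a hedged sketch, not from the original answer: the project name, source file, and API path are hypothetical placeholders, only the library names come from the question.

```cmake
cmake_minimum_required(VERSION 2.8.3)
project(youbot_app)                       # hypothetical package name

find_package(catkin REQUIRED COMPONENTS roscpp)
catkin_package()

# Hypothetical path -- point this at wherever the KUKA API actually lives.
set(YOUBOT_API_DIR /path/to/youbot_api)

include_directories(${catkin_INCLUDE_DIRS} ${YOUBOT_API_DIR}/include)
link_directories(${YOUBOT_API_DIR}/lib)

add_executable(youbot_node src/main.cpp)  # hypothetical source file
target_link_libraries(youbot_node
  ${catkin_LIBRARIES}
  YouBotDriver    # libYouBotDriver.so / .a from the API's lib folder
  soem            # libsoem.a
)
```

As the answer notes, if the youbot_driver ROS wrapper works for your setup, depending on it via package.xml and find_package(catkin REQUIRED COMPONENTS ...) is cleaner than hand-linking vendor binaries.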
{ "domain": "robotics.stackexchange", "id": 20222, "tags": "ros, catkin, package, youbot" }
Why is SAT in NP?
Question: I know that CNF SAT is in NP (and also NP-complete), because SAT is in NP and NP-complete. But what I don't understand is why? Is there anyone that can explain this? Answer: Try reading up on the Cook-Levin theorem. SAT is basically the first problem proven NP-complete. High level sketch of the proof: simulate a nondeterministic (NP-time, nondeterministic polynomial time) TM computation using a cascading circuit that computes the TM iterations, which can be converted to SAT. Loosely speaking, the "size" of the circuit is "polynomial".
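The other half of membership in NP, separate from Cook-Levin, is the easy direction: a satisfying assignment is a polynomial-size certificate that can be verified in time linear in the formula size. A minimal verifier sketch (clause encoding and names are my own, using the common convention that a negative integer denotes a negated variable):

```python
def check_cnf(clauses, assignment):
    """Verify a CNF certificate in O(total number of literals).

    clauses: list of clauses, each a list of nonzero ints
             (literal k means variable k; -k means NOT k).
    assignment: dict mapping variable number -> bool.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3)
clauses = [[1, -2], [2, 3]]
print(check_cnf(clauses, {1: True, 2: True, 3: False}))    # True
print(check_cnf(clauses, {1: False, 2: False, 3: False}))  # False
```

"SAT is in NP" is exactly the statement that such a verifier exists; "SAT is NP-hard" is the Cook-Levin circuit-simulation argument sketched in the answer.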
{ "domain": "cs.stackexchange", "id": 3053, "tags": "complexity-theory, satisfiability, decision-problem, np" }
$fl(x)=x(1+\delta)$
Question: The floating point representation of a real number $x$ in a machine is given by $fl(x)=x(1+\delta),\: |\delta| = \frac{|x^*-x|}{|x|} \le \epsilon$. But I do not find this equation very insightful. Insert $\delta = \frac{x^*-x}{x}$ in the equation and you get $x^*$. So $fl(x)$ is just $x^*$. Why write $x^*$ in this fancy way: $$fl(x)=x(1+\delta)$$ Does this equation have a name, by the way? Answer: The idea that this expression is trying to convey is that the nature of the error in floating-point arithmetic is multiplicative rather than additive (which is the case for fixed-point arithmetic). This is because of the way that floating-point numbers are stored: as a mantissa multiplied by an exponent. Since the error is incurred only when rounding the mantissa, it is multiplicative (ignoring overflow and underflow). The kind of guarantee that numerical algorithms can make is: if the input is correct up to a multiplicative error of $1 \pm \delta$, then the output will be correct up to a multiplicative error of $1 \pm \epsilon$. So $\delta$ (in your case, the machine $\delta$) measures the relative error involved when expressing real numbers as floating-point numbers, which is the information required to deduce the accuracy of numerical algorithms.
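The bound $|\delta| \le \epsilon$ can be checked exactly for a concrete value using rationals (a sketch I'm adding: `Fraction(0.1)` recovers the exact binary value of the stored IEEE-754 double, so $\delta$ is computed with no further rounding):

```python
from fractions import Fraction

x = Fraction(1, 10)        # the real number x = 0.1, exactly
fl_x = Fraction(0.1)       # exact rational value of the double fl(x)
delta = (fl_x - x) / x     # the delta in fl(x) = x(1 + delta), exactly
u = Fraction(1, 2**53)     # unit roundoff for IEEE-754 double precision

print(delta != 0)          # True: 0.1 is not exactly representable
print(abs(delta) <= u)     # True: |delta| is within the unit roundoff
```

The relative (multiplicative) error stays below $2^{-53}$ no matter the magnitude of $x$, which is exactly the point the $x(1+\delta)$ notation is making.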
{ "domain": "cs.stackexchange", "id": 16915, "tags": "numerical-analysis" }
Wheel encoder triggers interrupt too many times
Question: I am building a simple robot with two driving wheels. I want to control the wheel rotation using a wheel encoder like this one. Here is some code I have on an Arduino to try to understand the problem I'm facing: int count = 0; void setup() { Serial.begin(9600); attachInterrupt(digitalPinToInterrupt(2), upL, RISING); pinMode(2, INPUT); } void loop() { Serial.println(String(digitalRead(2)) + "__" + String(count)); } void upL(){ count++; } What I notice is: The interrupt is triggered multiple times when the sensor beam is cut once. But when I digitalRead the pin, there is only one change. I also noticed that the interrupt is also triggered when going from HIGH to LOW. Here is an example of the output I have: 0__0 0__0 0__0 0__0 ... ... 0__0 0__0 0__0 <<< change from LOW to HIGH... 1__9 <<< the interrupt should have incremented only once... 9 instead! 1__9 1__9 1__9 ... ... 1__9 1__9 <<< change from HIGH to LOW. the interrupt shouldn't be triggered 0__24 <<< still... we have 15 increments 0__24 0__24 0__24 0__24 ... ... 0__24 0__24 <<< same here... 1__51 <<< 26 increments 1__51 ... ... 1__51 <<< same here... 0__67 <<< 16 increments 0__67 0__67 The only way I can explain that is that during a change of state, the signal received from the sensor is not really square, but somehow noisy. Like in this image: Therefore we would have, indeed, many RISING edges on one change... (However, reading the output of the sensor on an analog pin shows a direct variation from 880 (HIGH) to 22 (LOW).) Does anyone have another explanation? Or a solution to this problem? EDIT Thanks to @TobiasK I know that this is called a bouncing effect. By doing further research I came across this solution: playground.arduino.cc/Main/RotaryEncoders (Ctrl+F for rafbuff). I'm trying it and I'll let you know. Answer: After studying the code I cited in the EDIT of my question,
I came up with this solution which is working so far # define encoderRight 2 # define encoderLeft 3 volatile int countR = 0; volatile int countL = 0; volatile boolean firstChangeR = true; // expecting the 1st change of the bounce volatile boolean firstChangeL = true; // same boolean right_set = true; boolean left_set = true; void setup() { Serial.begin(9600); // Right encoder attachInterrupt(digitalPinToInterrupt(encoderRight), doEncoderRight, CHANGE); pinMode(encoderRight, INPUT); digitalWrite(encoderRight, HIGH); // turn on pullup resistor // Left encoder attachInterrupt(digitalPinToInterrupt(encoderLeft), doEncoderLeft, CHANGE); pinMode(encoderRight, INPUT); digitalWrite(encoderLeft, HIGH); } void loop() { firstChangeR = true; // we reset to true to expect next change triggered firstChangeL = true; // by the interrupt // printing String txt = String(countL); txt += "__"; txt += String(countR); Serial.println(txt); } void doEncoderRight(){ if (firstChangeR) delay(1); // if this is the first detection then we wait // for the bounce to be over // if the current state is different from the last saved state then: // a real change happened and it's not part of the bouncing but // actually the real beginning of the change: the first bounce ! if (digitalRead(2) != right_set) { right_set = !right_set; // so we change the real state countR ++; // we also increment the right encoder // since this was the firstChange the next are part of bouncing, so: firstChangeR = false; } } void doEncoderLeft(){ if (firstChangeL) delay(1); if (digitalRead(3) != left_set) { left_set = !left_set; countL ++; firstChangeL = false; } } Tell me what do you think about it? Do you think it's reliable and is there any improvement you can propose?
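The core idea behind the delay(1) in the sketch above, stripped of the hardware details, is a dead-time filter: after accepting one edge, ignore every further edge that arrives within the dead time, so a bounce burst collapses into a single count. A toy simulation of that logic (timestamps and function name are hypothetical; this is not the Arduino code itself):

```python
def count_edges(edge_times_ms, dead_time_ms=1.0):
    """Count edges, ignoring any edge within dead_time_ms of the
    last *accepted* edge (non-retriggerable debounce)."""
    count, last_accepted = 0, float("-inf")
    for t in edge_times_ms:
        if t - last_accepted >= dead_time_ms:
            count += 1
            last_accepted = t
    return count

# Two real transitions, each followed by sub-millisecond contact bounce:
edges = [10.0, 10.1, 10.3, 10.6, 25.0, 25.2]
print(count_edges(edges, dead_time_ms=0.0))   # 6: every bounce counted
print(count_edges(edges, dead_time_ms=1.0))   # 2: one count per transition
```

The 1 ms dead time must be longer than the bounce burst but shorter than the minimum interval between genuine encoder edges at top wheel speed, or real counts get swallowed.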
{ "domain": "robotics.stackexchange", "id": 947, "tags": "arduino, wheel, two-wheeled, interrupts" }
Gradient vector starts to increase at some point, gradient descent from scratch
Question: I have a simple linear function y = w0 + w1 * x, where w0 and w1 are weights, And I'm trying to implement a gradient descent for it. I wrote the function and tested in on the data(a dataset of two columns X and Y, Y is dependent on X). For the first few iterations it goes as intended, my gradient vector decreases and I expect it to reach preselected treshhold and stop there. But at some point vector starts to increase, I don't know what mistake I did in my code, please help me to find it. Thanks. Here's the code: def squared_error(prediction, y): error = np.sum((prediction-y)**2)/len(y) return error def gradient_step(x, y, prediction): w0_derivative = np.sum(prediction-y) w1_derivative = np.sum((prediction-y)*x) return [w0_derivative, w1_derivative] def gradient_descent(x, y): errors = [] eps = 1e-6 step = 0.00001 current_error = 0 weights = np.random.rand(1,2)[0] L = len(x) t = 0 while np.linalg.norm(weights) > eps: print(t, weights, np.linalg.norm(weights), current_error) prediction = weights[0]+weights[1]*x current_error = squared_error(prediction, y) errors.append(current_error) gradient = gradient_step(x, y, prediction) weights[0] = weights[0] - step * 2/L * gradient[0] weights[1] = weights[1] - step * 2/L * gradient[1] t+=1 return t, weights (If I can somehow hide the below output please clarify) And here's the sample output, you can see that on iteration 38 vector norm starts to increase thus it can't reach the stopping threshold. 
iteration: 0 weights: [0.31865964 0.70571233] vector norm: 0.7743215427455836 current error: 0 iteration: 1 weights: [0.3182195 0.64808332] vector norm: 0.7219942084596928 current error: 539.9063798449935 iteration: 2 weights: [0.31792583 0.60922537] vector norm: 0.6871916691263193 current error: 261.86786860245604 iteration: 3 weights: [0.31773094 0.58302432] vector norm: 0.6639806537564518 current error: 135.4577412755645 iteration: 4 weights: [0.31760264 0.56535753] vector norm: 0.6484601525092629 current error: 77.98541136268015 iteration: 5 weights: [0.31751924 0.55344519] vector norm: 0.6380595931367428 current error: 51.85562952403169 iteration: 6 weights: [0.31746612 0.54541294] vector norm: 0.6310784575355897 current error: 39.97572738325473 iteration: 7 weights: [0.31743342 0.53999696] vector norm: 0.6263870177268759 current error: 34.57452859878733 iteration: 8 weights: [0.31741449 0.53634506] vector norm: 0.62323188578263 current error: 32.118870272836105 iteration: 9 weights: [0.31740483 0.53388265] vector norm: 0.6211090962595095 current error: 31.002400996852963 iteration: 10 weights: [0.31740144 0.53222227] vector norm: 0.6196807404591702 current error: 30.494793604390054 iteration: 11 weights: [0.31740226 0.5311027 ] vector norm: 0.6187198631788634 current error: 30.264005077205262 iteration: 12 weights: [0.31740593 0.53034777] vector norm: 0.6180738454010467 current error: 30.159072160707378 iteration: 13 weights: [0.31741152 0.52983871] vector norm: 0.6176399692755613 current error: 30.11135945832206 iteration: 14 weights: [0.3174184 0.52949544] vector norm: 0.6173490616027659 current error: 30.08966190842252 iteration: 15 weights: [0.31742615 0.52926396] vector norm: 0.6171545204075297 current error: 30.079792139481196 iteration: 16 weights: [0.3174345 0.52910785] vector norm: 0.6170249413446866 current error: 30.075299867461972 iteration: 17 weights: [0.31744324 0.52900257] vector norm: 0.6169391575773583 current error: 30.073252472741405 
iteration: 18 weights: [0.31745224 0.52893155] vector norm: 0.6168829006509288 current error: 30.072316640723134 iteration: 19 weights: [0.31746143 0.52888364] vector norm: 0.6168465514539311 current error: 30.0718861803421 iteration: 20 weights: [0.31747074 0.52885132] vector norm: 0.6168236248664436 current error: 30.071685487019835 iteration: 21 weights: [0.31748013 0.52882949] vector norm: 0.6168097485077322 current error: 30.071589257220854 iteration: 22 weights: [0.31748957 0.52881476] vector norm: 0.616801974365135 current error: 30.071540521731656 iteration: 23 weights: [0.31749906 0.52880479] vector norm: 0.6167983147497292 current error: 30.07151337951737 iteration: 24 weights: [0.31750857 0.52879805] vector norm: 0.6167974294520366 current error: 30.07149605468038 iteration: 25 weights: [0.31751809 0.52879348] vector norm: 0.6167984148216327 current error: 30.07148319331257 iteration: 26 weights: [0.31752763 0.52879038] vector norm: 0.6168006615593393 current error: 30.071472361261364 iteration: 27 weights: [0.31753717 0.52878826] vector norm: 0.616803758834994 current error: 30.071462451839917 iteration: 28 weights: [0.31754672 0.52878681] vector norm: 0.6168074296388016 current error: 30.07145296189346 iteration: 29 weights: [0.31755627 0.5287858 ] vector norm: 0.6168114871914999 current error: 30.07144366266287 iteration: 30 weights: [0.31756583 0.5287851 ] vector norm: 0.6168158055533827 current error: 30.07143445014285 iteration: 31 weights: [0.31757539 0.5287846 ] vector norm: 0.6168202998069839 current error: 30.07142527704745 iteration: 32 weights: [0.31758494 0.52878424] vector norm: 0.6168249126949067 current error: 30.071416121878023 iteration: 33 weights: [0.3175945 0.52878398] vector norm: 0.6168296056101267 current error: 30.071406974860405 iteration: 34 weights: [0.31760406 0.52878377] vector norm: 0.6168343525210258 current error: 30.071397831550584 iteration: 35 weights: [0.31761362 0.52878361] vector norm: 0.6168391358752219 current 
error: 30.071388689928252 iteration: 36 weights: [0.31762318 0.52878348] vector norm: 0.6168439438376392 current error: 30.071379549074727 iteration: 37 weights: [0.31763274 0.52878336] vector norm: 0.61684876842822 current error: 30.0713704085725 iteration: 38 weights: [0.3176423 0.52878326] vector norm: 0.6168536042662351 current error: 30.071361268231524 iteration: 39 weights: [0.31765186 0.52878317] vector norm: 0.6168584477236104 current error: 30.071352127965643 iteration: 40 weights: [0.31766142 0.52878308] vector norm: 0.6168632963540365 current error: 30.071342987735534 iteration: 41 weights: [0.31767098 0.528783 ] vector norm: 0.6168681485080346 current error: 30.07133384752333 iteration: 42 weights: [0.31768054 0.52878292] vector norm: 0.6168730030734078 current error: 30.07132470732093 iteration: 43 weights: [0.3176901 0.52878284] vector norm: 0.6168778593002311 current error: 30.071315567124664 iteration: 44 weights: [0.31769966 0.52878276] vector norm: 0.6168827166828511 current error: 30.07130642693288 iteration: 45 weights: [0.31770922 0.52878269] vector norm: 0.6168875748803186 current error: 30.071297286744738 iteration: 46 weights: [0.31771878 0.52878261] vector norm: 0.6168924336627406 current error: 30.071288146559965 iteration: 47 weights: [0.31772834 0.52878254] vector norm: 0.6168972928751059 current error: 30.07127900637834 iteration: 48 weights: [0.3177379 0.52878246] vector norm: 0.6169021524128935 current error: 30.071269866199884 iteration: 49 weights: [0.31774746 0.52878239] vector norm: 0.6169070122056274 current error: 30.071260726024477 iteration: 50 weights: [0.31775702 0.52878231] vector norm: 0.6169118722057866 current error: 30.071251585852142 iteration: 51 weights: [0.31776658 0.52878224] vector norm: 0.616916732381328 current error: 30.071242445682856 iteration: 52 weights: [0.31777614 0.52878216] vector norm: 0.6169215927106454 current error: 30.07123330551661 iteration: 53 weights: [0.3177857 0.52878209] vector norm: 
0.6169264531791692 current error: 30.071224165353485 iteration: 54 weights: [0.31779526 0.52878201] vector norm: 0.6169313137770746 current error: 30.071215025193343 iteration: 55 weights: [0.31780482 0.52878194] vector norm: 0.6169361744977363 current error: 30.0712058850363 iteration: 56 weights: [0.31781438 0.52878186] vector norm: 0.6169410353366862 current error: 30.071196744882265 iteration: 57 weights: [0.31782394 0.52878179] vector norm: 0.6169458962909106 current error: 30.0711876047313 iteration: 58 weights: [0.3178335 0.52878171] vector norm: 0.6169507573583767 current error: 30.07117846458336 iteration: 59 weights: [0.31784306 0.52878164] vector norm: 0.6169556185377126 current error: 30.07116932443854 iteration: 60 weights: [0.31785262 0.52878157] vector norm: 0.6169604798279933 current error: 30.071160184296716 iteration: 61 weights: [0.31786218 0.52878149] vector norm: 0.6169653412285931 current error: 30.071151044158 iteration: 62 weights: [0.31787174 0.52878142] vector norm: 0.6169702027390902 current error: 30.071141904022262 iteration: 63 weights: [0.3178813 0.52878134] vector norm: 0.6169750643591987 current error: 30.071132763889636 iteration: 64 weights: [0.31789086 0.52878127] vector norm: 0.6169799260887253 current error: 30.071123623760045 iteration: 65 weights: [0.31790042 0.52878119] vector norm: 0.6169847879275386 current error: 30.071114483633497 iteration: 66 weights: [0.31790998 0.52878112] vector norm: 0.6169896498755493 current error: 30.071105343510006 iteration: 67 weights: [0.31791954 0.52878104] vector norm: 0.6169945119326963 current error: 30.07109620338955 iteration: 68 weights: [0.3179291 0.52878097] vector norm: 0.6169993740989372 current error: 30.071087063272184 iteration: 69 weights: [0.31793866 0.52878089] vector norm: 0.6170042363742433 current error: 30.071077923157834 iteration: 70 weights: [0.31794822 0.52878082] vector norm: 0.6170090987585936 current error: 30.07106878304657 Answer: Keep track of the delta weight, 
not the weight itself.

def gradient_descent(x, y):
    errors = []
    eps = 1e-7
    step = 0.001
    current_error = 0
    weights = np.random.rand(1, 2)[0]
    new_weights = np.zeros(2)
    delta_weights = weights.copy()
    L = len(x)
    t = 0
    while np.linalg.norm(delta_weights) > eps:
        # if t > 40:
        #     break
        print(t, weights, np.linalg.norm(delta_weights), current_error)
        prediction = weights[0] + weights[1] * x
        current_error = squared_error(prediction, y)
        errors.append(current_error)
        gradient = gradient_step(x, y, prediction)
        new_weights[0] = weights[0] - step * 2 / L * gradient[0]
        new_weights[1] = weights[1] - step * 2 / L * gradient[1]
        delta_weights[0] = new_weights[0] - weights[0]
        delta_weights[1] = new_weights[1] - weights[1]
        weights[0] = new_weights[0]
        weights[1] = new_weights[1]
        t += 1
    return t, weights

x = np.random.randn(100) * 5
y = 2 + 3 * x + np.random.randn(100)

import matplotlib.pyplot as plt
plt.scatter(x, y)
plt.show()

gradient_descent(x, y)
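As written, the code above depends on two helpers from the question (squared_error and gradient_step) that are not shown. A self-contained, seeded sketch of the same idea, with assumed implementations of those helpers, could look like this:

```python
import numpy as np

def squared_error(prediction, y):
    # Assumed helper: mean squared residual.
    return np.mean((prediction - y) ** 2)

def gradient_step(x, y, prediction):
    # Assumed helper: gradient of the summed squared error
    # with respect to the intercept and the slope.
    residual = prediction - y
    return np.array([np.sum(residual), np.sum(residual * x)])

def gradient_descent(x, y, step=0.001, eps=1e-7):
    weights = np.zeros(2)
    delta = np.full(2, np.inf)
    L = len(x)
    t = 0
    while np.linalg.norm(delta) > eps:  # stop on the *change* in weights
        prediction = weights[0] + weights[1] * x
        gradient = gradient_step(x, y, prediction)
        new_weights = weights - step * 2 / L * gradient
        delta = new_weights - weights
        weights = new_weights
        t += 1
    return t, weights

rng = np.random.default_rng(0)
x = rng.standard_normal(100) * 5
y = 2 + 3 * x + rng.standard_normal(100)
t, w = gradient_descent(x, y)  # w should land near (2, 3)
```

The key point of the answer is visible in the loop condition: convergence is judged by the norm of the change in weights, not by the norm of the weights themselves.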
{ "domain": "datascience.stackexchange", "id": 11640, "tags": "python, linear-regression, gradient-descent" }
IMU and GPS fusion without odom - robot_localization
Question: I'm using ROS Indigo and trying to fuse IMU and GPS; I don't have any odom source. Reading the forum and the ROSCon presentation about the robot_localization package, I tried the following configuration:

<launch>
  <!-- gps and imu fusion -->
  <node pkg="robot_localization" type="navsat_transform_node" name="navsat_transform_node" respawn="true" output="screen">
    <param name="magnetic_declination_radians" value="0.36878808"/>
    <param name="zero_altitude" value="false"/>
    <param name="publish_filtered_gps" value="true"/>
    <param name="broadcast_utm_transform" value="true"/>
    <param name="wait_for_datum" value="true"/>
    <remap from="/gps/fix" to="/gps"/>
    <remap from="/imu/data" to="/imu"/>
    <remap from="/odometry/filtered" to="/odometry/map"/>
  </node>

  <!-- map frame -->
  <node pkg="robot_localization" type="ekf_localization_node" name="map_transform" clear_params="true">
    <param name="odom0" value="odometry/gps"/>
    <param name="imu0" value="/imu"/>
    <param name="frequency" value="30"/>
    <param name="sensor_timeout" value="2"/>
    <param name="two_d_mode" value="true"/>
    <param name="map_frame" value="map"/>
    <param name="odom_frame" value="odom"/>
    <param name="base_link_frame" value="base_link"/>
    <param name="world_frame" value="map"/>
    <rosparam param="imu0_config">[false, false, false, false, false, true, false, false, false, false, false, true, true, false, false]</rosparam>
    <param name="imu0_differential" value="false"/>
    <param name="imu0_remove_gravitational_acceleration" value="true"/>
    <rosparam param="odom0_config">[true, true, false, false, false, false, false, false, false, false, false, false, false, false, false]</rosparam>
    <param name="odom0_differential" value="false"/>
    <param name="print_diagnostics" value="true"/>
    <param name="debug" value="false"/>
    <param name="debug_out_file" value="$(env HOME)/adroit_files/debug_ekf_localization.txt"/>
    <remap from="odometry/filtered" to="/odometry/map"/>
  </node>

  <!-- odom frame -->
  <node pkg="robot_localization" type="ekf_localization_node" name="odom_transform" clear_params="true">
    <param name="imu0" value="/imu"/>
    <param name="odom0" value="/odometry/map"/>
    <param name="frequency" value="30"/>
    <param name="sensor_timeout" value="2"/>
    <param name="two_d_mode" value="true"/>
    <param name="map_frame" value="map"/>
    <param name="odom_frame" value="odom"/>
    <param name="base_link_frame" value="base_link"/>
    <param name="world_frame" value="odom"/>
    <rosparam param="imu0_config">[false, false, false, false, false, true, false, false, false, false, false, true, true, false, false]</rosparam>
    <rosparam param="odom0_config">[true, true, false, false, false, false, false, false, false, false, false, false, true, false, false]</rosparam>
    <param name="odom0_differential" value="false"/>
    <param name="print_diagnostics" value="true"/>
    <param name="debug" value="false"/>
    <param name="debug_out_file" value="$(env HOME)/debug_ekf_localization.txt"/>
  </node>
</launch>

As a result, I'm getting a drift that increases over time, and the vector is always perpendicular to the robot. This warning appears several times too:

[ WARN] [1465502165.359329185, 20.510000000]: Could not obtain base_link-> transform. Will not remove offset of navsat device from robot's origin.
[ WARN] [1465502167.550982133, 21.310000000]: Could not obtain transform from to base_link. Error was Invalid argument passed to lookupTransform argument source_frame in tf2 frame_ids cannot be empty

I'm not using a real robot yet; for now I'm on Gazebo, using the hector_gazebo plugins for the IMU and GPS.
IMU sample message: header: seq: 10464 stamp: secs: 10 nsecs: 918000000 frame_id: base_link orientation: x: 0.019510418191 y: -0.00138183083974 z: -0.00971101353096 w: 0.999761536739 orientation_covariance: [2.6030820491461885e-07, 0.0, 0.0, 0.0, 2.6030820491461885e-07, 0.0, 0.0, 0.0, 0.0] angular_velocity: x: -0.00310991562662 y: -0.00912178750215 z: 0.0116196962976 angular_velocity_covariance: [2.5e-05, 0.0, 0.0, 0.0, 2.5e-05, 0.0, 0.0, 0.0, 2.5e-05] linear_acceleration: x: -2.21806651905 y: 2.26472137133 z: 6.24316872463 linear_acceleration_covariance: [2.5e-05, 0.0, 0.0, 0.0, 2.5e-05, 0.0, 0.0, 0.0, 2.5e-05] --- GPS sample message: header: seq: 244 stamp: secs: 61 nsecs: 500000000 frame_id: base_link status: status: 0 service: 1 latitude: -23.7134276213 longitude: -46.5536684712 altitude: -3.02017915306 position_covariance: [25.0, 0.0, 0.0, 0.0, 25.0, 0.0, 0.0, 0.0, 25.0] position_covariance_type: 2 --- The ekf output after some time, with the robot stoped in gazebo header: seq: 1747 stamp: secs: 222 nsecs: 58000000 frame_id: odom child_frame_id: base_link pose: pose: position: x: 44.8376708689 y: -2.12248261091 z: 0.0 orientation: x: 0.0 y: 0.0 z: -0.0360110406313 w: 0.99935139213 covariance: [15.30443876297224, 85.29210019200963, 0.0, 0.0, 0.0, 7.403568159067386e-13, 85.2921001920097, 1661.3211796485384, 0.0, 0.0, 0.0, 7.592048297986206e-12, 0.0, 0.0, 3.331484049746691e-07, -8.211483406385409e-17, -3.2792082619740134e-15, 0.0, 0.0, 0.0, -8.211483406385409e-17, 3.3296398857637426e-07, -1.4560869721212304e-21, 0.0, 0.0, 0.0, -3.2792082619740122e-15, -1.4560869721108193e-21, 3.3296398857631613e-07, 0.0, 7.403568159067387e-13, 7.592048297986206e-12, 0.0, 0.0, 0.0, 9.99999523817672e-10] twist: twist: linear: x: 1.06759861895 y: -0.0267929278598 z: 0.0 angular: x: 0.0 y: 0.0 z: -0.00130429742184 covariance: [0.23737282750479055, -0.15158265506748317, 0.0, 0.0, 0.0, 1.405394133138543e-15, -0.1515826550674825, 23.524227691364764, 0.0, 0.0, 0.0, 
2.5224508749013602e-14, 0.0, 0.0, 3.3305613291777575e-07, 9.51454551412301e-26, -2.0191178601715107e-24, 0.0, 0.0, 0.0, 9.514545514123008e-26, 3.3223139406571205e-07, 6.682293294008273e-30, 0.0, 0.0, 0.0, -2.0191178601715103e-24, 6.682280875348762e-30, 3.3223139406571205e-07, 0.0, 1.4053941331385318e-15, 2.52245087490136e-14, 0.0, 0.0, 0.0, 2.4165639655974852e-05] ---

Does anyone have a clue about what I'm doing wrong? I've tried some other configurations, but none of them worked.

Update 1: I added this to my urdf file:

<gazebo>
  <plugin name="gps" filename="libhector_gazebo_ros_gps.so">
    <alwayson>1</alwayson>
    <updaterate>50.0</updaterate>
    <bodyname>base_link</bodyname>
    <topicName>/gps</topicName>
    <service>1</service>
    <frameId>gps_link</frameId>
    <velocityTopicName>/gps/fixvelocity</velocityTopicName>
    <drift>5.0 5.0 5.0</drift>
    <referenceLatitude>-23.713367</referenceLatitude>
    <referenceLongitude>-46.553640</referenceLongitude>
    <gaussiannoise>0.1 0.1 0.1</gaussiannoise>
    <velocitydrift>0 0 0</velocitydrift>
    <velocitygaussiannoise>0.1 0.1 0.1</velocitygaussiannoise>
  </plugin>
</gazebo>

and I have the tf from gps_link to base_link.

Originally posted by torugobeck on ROS Answers with karma: 13 on 2016-06-09
Post score: 1

Original comments
Comment by rajnunes on 2016-06-11: How exactly did you get the gps plugin to work..?
Comment by torugobeck on 2016-06-13: added the plugin configuration to the topic, too big to post here
Comment by Tom Moore on 2016-06-14: Hmmm...this may be an issue. Can you post a bag?
Comment by torugobeck on 2016-06-14: I generated it without any node open, just the gazebo with the plugins https://www.dropbox.com/s/11tcvbdze5azhj8/2016-06-14-12-55-58.bag?dl=0

Answer: So there are a lot of issues here:
You appear to be feeding the output of your map_transform instance of ekf_localization_node into your odom_transform instance. That is probably not going to work well at all.
You have wait_for_datum set to true for navsat_transform_node, but your other settings indicate that you don't actually want this set to true. I just realized that I need to document that parameter on the wiki, so that's my fault. In any case, it's not relevant for your use case.
If all you have is IMU and GPS data, then you don't need to run two EKFs, as it won't really help you.
I did find a bug with the node crashing when publish_filtered_gps was set to true. I have fixed it and it is in the upstream repository. Please pull down that version until I can get a new release out.
Anyway, here are the settings that worked for me. Please note that I added a static transform publisher for the base_link->gps transform:

<launch>
  <node name="bl_gps" pkg="tf2_ros" type="static_transform_publisher" args="0 0 0 0 0 0 1 base_link gps_link" />

  <node pkg="robot_localization" type="navsat_transform_node" name="navsat_transform_node" respawn="true" output="screen">
    <param name="magnetic_declination_radians" value="0.36878808"/>
    <param name="zero_altitude" value="false"/>
    <param name="publish_filtered_gps" value="true"/>
    <param name="broadcast_utm_transform" value="true"/>
    <param name="wait_for_datum" value="false"/>
    <remap from="/gps/fix" to="/gps"/>
    <remap from="/imu/data" to="/imu"/>
  </node>

  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom" clear_params="true">
    <param name="odom0" value="odometry/gps"/>
    <param name="imu0" value="/imu"/>
    <param name="frequency" value="30"/>
    <param name="sensor_timeout" value="2"/>
    <param name="two_d_mode" value="true"/>
    <param name="map_frame" value="map"/>
    <param name="odom_frame" value="odom"/>
    <param name="base_link_frame" value="base_link"/>
    <param name="world_frame" value="odom"/>
    <rosparam param="imu0_config">[false, false, false, false, false, true, false, false, false, false, false, true, true, false, false]</rosparam>
    <param name="imu0_differential" value="false"/>
    <param name="imu0_remove_gravitational_acceleration" value="true"/>
    <rosparam param="odom0_config">[true, true, false, false, false, false, false, false, false, false, false, false, false, false, false]</rosparam>
    <param name="odom0_differential" value="false"/>
    <param name="print_diagnostics" value="true"/>
    <param name="debug" value="false"/>
    <param name="debug_out_file" value="$(env HOME)/adroit_files/debug_ekf_localization.txt"/>
  </node>
</launch>

Originally posted by Tom Moore with karma: 13689 on 2016-06-15
This answer was ACCEPTED on the original site
Post score: 2

Original comments
Comment by torugobeck on 2016-06-15: Hello Tom, thanks for your help. Actually I'm using datum; I have a python script that sets it. I'm using it since I have a map that is streamed through map_server, and I want the origin of the map to be the GPS coordinate set in datum. I'll try this configuration with a static tf from map to odom.
Comment by torugobeck on 2016-06-15: I just tried the configuration that you posted, with wait_for_datum and publish_filtered_gps set to false, but I'm still getting the drift in the position, without any other odom source. Is this continuous drift expected?
Comment by Tom Moore on 2016-06-15: Is your GPS drifting?
Comment by torugobeck on 2016-06-15: I plotted some data in MATLAB, and it looks pretty stable, as you can see here: https://www.dropbox.com/s/1rauxv3ajbkmvys/Captura%20de%20tela%202016-06-15%2015.25.08.png?dl=0
Comment by Tom Moore on 2016-06-15: Try removing the linear acceleration from your state estimate (odom0 input).
Comment by torugobeck on 2016-06-15: I'm using the same launch file that you posted; it already has this configuration. https://www.dropbox.com/s/9fbjdn9l1rumlzc/Captura%20de%20tela%202016-06-15%2017.32.png?dl=0 But I'm getting this output as a result. Can you see any problem that may cause this?
Comment by Tom Moore on 2016-06-16: No, the file I posted has X linear acceleration enabled (the last "true" value in imu0_config). I am suggesting that you set it to false instead.
Comment by torugobeck on 2016-06-16: Sorry, I thought that you were talking about the odom0 parameter. It worked well. I have one more doubt: the odom output is getting perpendicular to the robot. Do you have any idea what may cause this? Thanks for the help!!
Comment by Tom Moore on 2016-06-17: Please read the "yaw_offset" parameter on the wiki here.
{ "domain": "robotics.stackexchange", "id": 24889, "tags": "navigation, ekf, gps, navsat-transform-node, robot-localization" }
What is the opposite of explosive decompression?
Question: Is it something like "implosive compression"? Cannot find the correct term. Explosive decompression is when the pressure rapidly drops, e.g. spaceship gets a hole and passengers cannot breathe; I'm looking for a coined term for when the pressure rapidly rises, e.g. deep sea submarine gets a hole underneath and the air inside is compacted towards its ceiling. Answer: What you describe (damaging something by suddenly applying a large overpressure) is how bombs kill people and destroy buildings. On the human body this is called concussive shock. In the case of bombs this is called blast damage.
{ "domain": "physics.stackexchange", "id": 69628, "tags": "pressure, air" }
Can I apply survival analysis to predict if a user will revisit the website?
Question: I have a business problem in hand: predicting whether a user will revisit the website within 6 months. I mainly need to understand which factors make a user return, and also to give business recommendations on what can be done to make a new user return to the website. My initial idea was to do logistic regression. Lately, I read about survival analysis. I want to know if I can use survival analysis for this problem. Also, my dataset has 20k users, each with multiple transactions; the target variable was not given, so I aggregated the dataset to one record per user and did some feature engineering to come up with a target variable. If I want to use survival analysis for this problem, shall I consider only the last transaction of each user, or shall I use the aggregated dataset? Answer: If you want to use survival analysis (which can be more flexible and insightful), I'd recommend this package and this great tutorial. In short, as a result you'll get a "probability of being alive" for each customer. If you want to use logistic regression, I think it's trickier. Here's why: like any other churn problem, it's hard to define churn properly. The definition depends on your task and on where the model outcome will be used. Let's say churn is a particular amount of inactivity, e.g. 30 days. You can do an initial analysis to find this number: just pick a particular date (you can do this multiple times) and check the % of people who made their next transaction after it. The important thing is that the time window should be the same on both sides for all users: a new user hasn't had the chance to be inactive for a long period, right? And if you pick the last date in the dataset (e.g. yesterday), not all users will have had the chance to perform a transaction within 1 day, and you'll get inflated churn rates. So be attentive to dates. In other words, you need to understand from your data which amount of inactivity is "normal" for the average user, and define it as N.
After that, you can label users with a binary target like "if inactivity > N then churned (1) else not churned (0)", and use this label with any classification model.
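The labeling rule described above can be sketched with the standard library alone; the user IDs, dates, and threshold N below are made up for illustration:

```python
from datetime import date

# Toy transaction log: one list of transaction dates per user (made-up data).
transactions = {
    1: [date(2023, 1, 1), date(2023, 1, 10), date(2023, 2, 1)],
    2: [date(2023, 1, 5), date(2023, 3, 20)],
    3: [date(2023, 1, 15)],
}

N = 30                        # "normal" inactivity threshold, in days
snapshot = date(2023, 4, 1)   # one fixed observation date for all users

# 1 = churned (inactive longer than N days), 0 = not churned.
labels = {
    user: int((snapshot - max(dates)).days > N)
    for user, dates in transactions.items()
}
```

Note that `snapshot` is the same for every user, which is exactly the "same time window on both sides" caveat from the answer.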
{ "domain": "datascience.stackexchange", "id": 6307, "tags": "predictive-modeling, churn, aggregation, survival-analysis" }
Why hasn't there been an encryption algorithm that is based on the known NP-Hard problems?
Question: Most of today's encryption, such as RSA, relies on integer factorization, which is not believed to be an NP-hard problem, but it belongs to BQP, which makes it vulnerable to quantum computers. I wonder why there has not been an encryption algorithm based on a known NP-hard problem. It sounds (at least in theory) like it would make a better encryption algorithm than one which is not proven to be NP-hard. Answer: Worst-case hardness of NP-complete problems is not sufficient for cryptography. Even if NP-complete problems are hard in the worst case ($P \ne NP$), they could still be efficiently solvable in the average case. Cryptography assumes the existence of average-case intractable problems in NP. Also, proving the existence of hard-on-average problems in NP using the $P \ne NP$ assumption is a major open problem. An excellent read is the classic by Russell Impagliazzo, A Personal View of Average-Case Complexity, 1995. An excellent survey is Average-Case Complexity by Bogdanov and Trevisan, Foundations and Trends in Theoretical Computer Science, Vol. 2, No. 1 (2006), 1–106.
{ "domain": "cs.stackexchange", "id": 14803, "tags": "complexity-theory, np-hard, encryption, cryptography" }
How to bring map in rviz (indigo-gazebo)
Question: Specification first: Ubuntu 14.04 LTS / ROS Indigo / Gazebo / P3AT. I have used the following manual to set up ROS-Gazebo-P3AT: [http://people.oregonstate.edu/~chungje/Code/Pioneer3dx%20simulation/ros-indigo-gazebo2-pioneer.pdf] Teleoperation is working fine. Problem: we are unable to set the rviz display parameters. What should we do? Should I give more information? Originally posted by sumant on ROS Answers with karma: 77 on 2016-02-23 Post score: 0 Answer: If I have understood correctly, you would like to load a map in Rviz. To load a map in Rviz you have two options: via a launch file, or manually. To do it manually, open a terminal, go to the directory where you have put your map file, and run: rosrun map_server map_server sample.yaml After you run rviz (e.g., rosrun rviz rviz), add Map from the bottom-left tab. Then, under the topic section, choose /map. Originally posted by MahsaP with karma: 79 on 2016-02-23 This answer was ACCEPTED on the original site Post score: 1
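For completeness, the launch-file option the answer mentions could look roughly like the sketch below; the package name and map path are assumptions, not taken from the original answer:

```xml
<launch>
  <!-- Serve the map (substitute your own package and YAML path) -->
  <node pkg="map_server" type="map_server" name="map_server"
        args="$(find my_package)/maps/sample.yaml" />
  <!-- Start rviz; once it is up, add a Map display and point it at /map -->
  <node pkg="rviz" type="rviz" name="rviz" />
</launch>
```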
{ "domain": "robotics.stackexchange", "id": 23879, "tags": "rviz" }
RTT (Round Trip Time) calculation
Question: How do I calculate RTT? Do I start measuring the time from the beginning of the segment transmission or from the moment the whole segment has been transmitted? The end time is the moment the sender gets the "ACK". Answer: The round-trip time is the time required to go and return, so it must be measured from the start of the transmission to the start of the reception of the response. If you measure from the end of the packet to the end of the ACK, you are subtracting the packet length and adding in the length of the acknowledgement's packet (the ACK could be piggybacked on any packet doing the return trip). This can very well be measured during a stream of data (and it is done so by your neighborhood TCP stack to tune timeouts and so on).
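As a toy illustration of that measurement rule, the sketch below times a local echo over a socket pair: the clock starts when transmission begins and stops when the reply (standing in for the ACK) arrives:

```python
import socket
import threading
import time

def echo(conn):
    # The far end: echo the segment back, standing in for the ACK.
    conn.sendall(conn.recv(1024))

client, server = socket.socketpair()
threading.Thread(target=echo, args=(server,), daemon=True).start()

start = time.monotonic()   # start of transmission, not the end of it
client.sendall(b"segment")
reply = client.recv(1024)  # "ACK" received
rtt = time.monotonic() - start
```

A monotonic clock is used so the measurement is immune to wall-clock adjustments; real TCP stacks do the analogous bookkeeping per segment.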
{ "domain": "cs.stackexchange", "id": 958, "tags": "computer-networks" }
Face Recognition
Question: Hi all, I just tried the new face recognition package, but when I run Fserver I get the following error:

[ INFO] [1325746855.901582077]: Could not load Haar cascade Face detection classifier in 'haarcascade_frontalface_alt.xml'.

I am using Ubuntu 10.4 32-bit and ROS Electric with vision_opencv installed from debs.
Originally posted by Peter Heim on ROS Answers with karma: 215 on 2012-01-04
Post score: 1

Original comments
Comment by tfoote on 2012-01-04: Please post more information. There is not enough information here to reproduce your efforts. See http://www.ros.org/wiki/Support

Answer: Hi,
Fserver requires haarcascade_frontalface_alt.xml to be in the same directory as you run the Fserver. The file was missing in the package, but now it has been uploaded in the repository in the face_recognition directory.
Cheers,
Pouyan
Originally posted by Pouyan with karma: 143 on 2012-01-05
This answer was ACCEPTED on the original site
Post score: 2

Original comments
Comment by joq on 2012-01-05: No problem, I was just recommending a more general solution in a future release.
Comment by Pouyan on 2012-01-05: joq, thanks for your comments, will consider that for the future, but for now cmd works.
Comment by Pouyan on 2012-01-05: When you use cmd="node", the working directory is face_recognition/bin. If you put the files in the bin directory, there should be no problem.
Comment by Peter Heim on 2012-01-05: I just tried it and I can't get it to work; it shows an error when I use the directory. If I use node or ROS_HOME and put the file in those locations, I get the file-not-found error.
Comment by joq on 2012-01-05: That is really not the recommended way to use roslaunch. I am not even sure it works. Have you tried it?
Comment by Pouyan on 2012-01-05: As mentioned above, the program searches for its input files in its working directory, which is not necessarily the package directory. In 'roslaunch', you can set the working directory using the 'cwd' attribute to point to the directory in which the necessary input files are placed.
Comment by joq on 2012-01-05: Depending on the current directory being the package directory is very unfriendly for people using roslaunch. Perhaps a future release might provide a more flexible access mechanism?
Comment by Peter Heim on 2012-01-05: Running from the installation directory was the key; the server started and I was able to acquire a face. Thanks.
Comment by Pouyan on 2012-01-05: You might also wish to put the data folder (containing sample pictures) and the train.txt file (which lists the training examples) in the same directory. As all these are in the face_recognition directory by default, simply try: 1- roscd face_recognition 2- rosrun face_recognition Fserver
Comment by Pouyan on 2012-01-05: rosmake is not necessary again. You should just run the Fserver from the directory in which haarcascade_frontalface_alt.xml is placed. For example: 1- copy haarcascade_frontalface_alt.xml to /home/new_dir 2- cd /home/new_dir 3- rosrun face_recognition Fserver. Let me know if it works :)
Comment by Peter Heim on 2012-01-05: Hi, I just updated and did a rosmake; the new file is in the face_recognition directory, but I still get the same error.
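The 'cwd' attribute mentioned in the comments could be used like the minimal sketch below (the node name is an assumption based on the discussion above):

```xml
<launch>
  <!-- cwd="node" makes the working directory the directory of the node's
       executable (face_recognition/bin per the comments), so the cascade
       file and train.txt would need to be placed there -->
  <node pkg="face_recognition" type="Fserver" name="Fserver" cwd="node" />
</launch>
```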
{ "domain": "robotics.stackexchange", "id": 7790, "tags": "ros" }
Classes for user registration and authentication
Question: <?php

namespace Users;

use mysqli;

class Users
{
    protected $db;

    public function __construct()
    {
        $this->db = new mysqli('localhost', 'root', 'php123', 'sms');
    }

    public function login($username, $password)
    {
        $query = $this->db->prepare("SELECT username, password FROM users WHERE username=? AND password=? LIMIT 1");
        $password = UserData::hash($password);
        $query->bind_param('ss', $username, $password);
        $query->execute();
        $query->bind_result($username, $password);
        $query->store_result();
        if ($query->num_rows == 1) {
            while ($query->fetch()) {
                $_SESSION['loggedin'] = 1;
                UserData::getAndSet($username);
                header("Location: dashboard.php");
            }
        } else {
            return false;
        }
        $query->free_result();
        $query->close();
    }

    public function register($username, $password, $rank, $name)
    {
        $errors = array();
        $query = $this->db->prepare("INSERT INTO users VALUES('', $username, $password, $rank, $name);");
        $password = UserData::hash($password);
        $query->bind_param('ssss', $username, $password, $rank, $name);
        if (strlen($username) < 3 || strlen($username) > 15) {
            $errors[] = "The length of the username can only range from 3 to 15 characters.";
        }
        if (strlen($password) < 3 || strlen($password) > 15) {
            $errors[] = "The length of the password can only range from 3 to 15 characters.";
        }
        if (strlen($name) < 3 || strlen($name) > 15) {
            $errors[] = "The length of the name can only range from 3 to 15 characters.";
        }
        $query->execute();
        $query->close();
    }
}

class UserData
{
    protected $db;
    public $username;
    public $password;
    public $rank;
    public $name;

    public function __construct()
    {
        $this->db = new mysqli('localhost', 'root', 'php123', 'sms');
    }

    public function getAndSet($username)
    {
        $query = $this->db->prepare("SELECT username, password, rank, name FROM users WHERE username=?");
        $query->bind_param('s', $username);
        $query->execute();
        $query->bind_result($username, $password, $rank, $name);
        $query->store_result();
        while ($query->fetch()) {
            $this->username = $username;
            $this->password = $password;
            $this->rank = $rank;
            $this->name = $name;
        }
        $query->free_result();
        $query->close();
    }

    public function hash($string)
    {
        $fSalt = "{z6Q9&2eHn69,9/cXa=<,Nm[?^^Tw-";
        $sSalt = ":L+;fwS<q%)3f?N_d)(Ta_*fN&ax}+";
        $sting = md5(md5($fSalt) . md5($string) . md5($sSalt));
        for ($i = 0; $i < 3; $i++) {
            $string = md5($string);
            $string = sha1($string);
            $string = bin2hex($string);
            $string = md5($string);
        }
        return $string;
    }
}

class UserUtil
{
    protected $db;

    public function __construct()
    {
        $this->db = new mysqli('localhost', 'root', 'php123', 'sms');
    }
}

$Users = new Users();
$UserData = new UserData();
$UserUtil = new UserUtil();
?>

This is the beginning of my project, "Student Management System". Before I proceed, I want this code to be reviewed, and to learn how I can improve it.

Answer: Why are you starting a new database connection for each class? Pass in a database connection object to the constructor:

public function __construct(mysqli $dbConnection) {

This allows your class to work without worrying about actually doing the connection. The class's job is to handle users, not handle database connections!
You not only said LIMIT 1 in your query, but you also check that $query->num_rows is 1, and then you even loop over the results (even though there's only one). This sort of triple check is redundant and unreadable. You know there's either 1 or 0 results because you LIMITed the query!
Your hashing algorithm is not secure. Use PHP 5.5's password_hash(), or if you don't use 5.5 (and you should!!!) use Password Compat, a library written by the same guy who wrote the password_hash() function for PHP's core.
Do not limit your users' password length - setting a minimum length is good. Setting a maximum length is a kitten-killing crime. Don't.
Your login() method returns false on error, but nothing on success. Either return true on success, or simply throw an exception on errors.
Your UserData class is not a class, it's a namespace.
You only use it for static functions (even though you don't declare them as static in your class!). What does your getAndSet method do? It's not clear. Use a proper name or add documentation.
Your UserUtil class does absolutely nothing.
Generally, I'd say you can improve the structure of your code by having the following 3 classes:
UserService - is responsible for registering, authenticating and verifying users using the following two objects. Basically higher level management.
User - Represents a single user in the system. Has absolutely no database interaction whatsoever, and has no idea where it came from. It accepts parameters through the constructor and getters/setters:

class User
{
    public function __construct($id, $username, $hashedPassword, $name)
    {

UserMapper - Which is responsible for mapping and/or creating user objects to/from the database. For example:

class UserMapper
{
    public function fetch(User $user)
    {
        // $user object will already have the ID filled in.
        // Fetch according to that and fill the same $user object.
    }
}

Example of execution:

class UserService
{
    private $mapper;

    public function __construct(UserMapper $mapper) { ... }

    public function register($username, $password, $name)
    {
        $user = new User($username, $password, $name);
        $user->validateRegistrationDetails(); // Throws ValidationException on validation error.
        $user->generateHash();
        $this->mapper->save($user); // Save user to database;
    }
}

Note that in the above, the User is unaware of the database, which means it's perfectly reusable across applications with different storage systems.
{ "domain": "codereview.stackexchange", "id": 8449, "tags": "php, object-oriented, mysqli, authentication" }
Regarding the law of Inertia
Question: I am still a student in high school, so don't judge me because of this question. Imagine you are on a train and you pass by a tree; to me, this tree appears to be in motion. The train has moved quite a distance and the tree is no longer in sight, so how will the law of inertia work on this tree? (We don't know if the tree is still in motion or not.) I think that the tree should be in motion as long as the train is moving, so the law of inertia is not in action in this scenario... Please correct me if I am wrong. Answer: The law of inertia is one of Newton's laws. Newton's laws are assumptions. Given them, the motion of objects can be predicted. We have no reason to expect Newton's laws to be true. We know they are because we have checked and checked and checked. If you are sitting on a train, the whole world is in motion. You can check on the tree so long as you can see it. When it is out of sight, perhaps a friend in the back of the train can still check. Perhaps you can tie a string to the tree and watch it unwind from a spool. As long as the string is unwinding at the right rate, the tree isn't moving. After enough checks, you see the pattern, and you expect the pattern to hold even when you are not checking.
{ "domain": "physics.stackexchange", "id": 89473, "tags": "inertia" }
Finding period of a square wave with varying sampling frequency
Question: I have a square wave (0-1.8 V) with a varying sampling frequency (from a circuit simulator). It is also not a perfect square wave (the high and low signals can be very close to, but not precisely, zero or 1 volt) and the fall time is slightly slower than the rise time. Another factor is that I will retake the data, and the period (and possibly the duty cycle as well) will change when I alter another parameter, so I am not working with a fixed number of cycles per window. I tried autocorrelation, but it seems to give me double or so the actual period. And FFT seems to need a fixed sampling rate. What would be the best solution for finding the period of this square wave? Answer: Pick a nice center value, say 1.0, and scan across looking for crossings from down to up (or the reverse). Mark the spot of your first crossing, count up some fixed number of crossings, and call the count N. Take your system timer value at your first crossing and your last crossing. Divide the time interval by N and you'll have a very good estimate of the period. How uneven the crossings are within the interval is immaterial. If you can ensure that the two crossings you are timing happen in "the same state", you will lose one more level of uncertainty. You can't do it much simpler than that. Increased accuracy will come from interpolating the crossing point rather than taking the closest sample's time.

Follow-up: You'll have to figure the Interpolate routine out yourself. It's not that hard. The logic is very straightforward and significantly less computationally heavy than an autocorrelation technique. More accurate in this case too.
Pseudo-code:

FirstIndex = -1
N = 0
For i = 0 to PointCount
    If signal[i] <= 1.0 and signal[i+1] > 1.0 Then
        If FirstIndex < 0 Then
            T1A = System.Time(i)
            T1B = System.Time(i+1)
            FirstIndex = i
        Else
            TNA = System.Time(i)
            TNB = System.Time(i+1)
            LastIndex = i
            N += 1
        End If
    End If
Next

T1 = Interpolate( T1A, T1B, Signal[FirstIndex], Signal[FirstIndex+1] )
TN = Interpolate( TNA, TNB, Signal[LastIndex], Signal[LastIndex+1] )
Period = ( TN - T1 ) / N

You may not need to interpolate; (TNA - T1A)/N may be accurate enough.
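A sketch of this crossing-count approach in Python (the uneven sample times and signal levels are simulated here, since the original data comes from a circuit simulator):

```python
import numpy as np

def estimate_period(t, signal, threshold=1.0):
    """Estimate the period from upward threshold crossings,
    interpolating each crossing instant between the two samples."""
    crossings = []
    for i in range(len(signal) - 1):
        if signal[i] <= threshold < signal[i + 1]:
            # Linear interpolation of the exact crossing time.
            frac = (threshold - signal[i]) / (signal[i + 1] - signal[i])
            crossings.append(t[i] + frac * (t[i + 1] - t[i]))
    if len(crossings) < 2:
        raise ValueError("need at least two upward crossings")
    # N intervals between the first and last crossing.
    return (crossings[-1] - crossings[0]) / (len(crossings) - 1)

# Imperfect square wave sampled at uneven times.
rng = np.random.default_rng(1)
t = np.cumsum(rng.uniform(0.8e-3, 1.2e-3, 5000))  # varying sample spacing
true_period = 0.05
signal = np.where((t % true_period) < 0.5 * true_period, 1.78, 0.02)
period = estimate_period(t, signal)
```

Because the first and last crossing errors are divided by the number of intervals, the estimate stays accurate even with the uneven sampling.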
{ "domain": "dsp.stackexchange", "id": 7312, "tags": "matlab, discrete-signals, autocorrelation, periodic, measurement" }
Fusion boosted fission reactor
Question: With weapons, fusion boosted fission seems to already be a thing. But I'm not interested in weapons. Are there any types of fission reactor (other than a nuclear explosive) that would make more efficient use of fissile material by mixing in fusionable material? Obviously the average temperature of a fission reactor isn't high enough for fusion, but the fission products will be in the right energy range for a couple of collisions while they are in the process of cooling down. Answer: Something very close to this is the fusion-fission hybrid idea where fusion does not have to be at the point of self-sustaining burning (so it is below the ignition threshold) but rather driven by an external power source. Still, such a driven fusion reactor can produce lots of neutrons that can be used for driving fission burn of surrounding fissionable material. For example this could be used to extract energy from (and dispose of) nuclear waste that is not rich enough to go into self-sustaining fission reaction but with external neutron radiation one can drive fission reaction in it. Here is the Wikipedia article with detailed explanations https://en.wikipedia.org/wiki/Nuclear_fusion-fission_hybrid
{ "domain": "physics.stackexchange", "id": 34101, "tags": "nuclear-physics, fusion, nuclear-engineering" }
Density operator of a system $S$ coupled to a bath $B$
Question: In the second equation of section 8.1 in this MIT OCW lecture notes, I can't understand how they went from $$\rho_{S}(t)=Tr_{B}\{\rho_{SB}(t)\}=\sum_{k}\langle k|U_{SB}(\rho_{S}(0)\otimes|0\rangle\langle 0|_{B})U^{+}_{SB}|k\rangle$$ to $$\rho_{S}(t)=\sum_{k}\langle k|U_{SB}|0\rangle \rho_{S}(0)\langle 0|U^{+}_{SB}|k\rangle$$ I do understand that there is some kind of rearrangement of the terms, but how do we know that it is a valid rearrangement? Answer: Let's express the identity operator on $\mathcal{H} = \mathcal{H}_S \otimes \mathcal{H}_B$ in the following way $$\mathbb{I}_{SB}= \mathbb{I}_S\otimes\mathbb{I}_B= \mathbb{I}_S\otimes\sum_l |l\rangle \langle l|. $$ Let's insert the identity between the time evolution operator and the full density matrix $$\sum_{k}\langle k|U_{SB}(\rho_{S}(0)\otimes|0\rangle\langle 0|_{B})U^{+}_{SB}|k\rangle= \sum_{k}\langle k|U_{SB} \mathbb{I}_{SB}(\rho_{S}(0)\otimes|0\rangle\langle 0|_{B}) \mathbb{I}_{SB} U^{+}_{SB}|k\rangle\\ =\sum_{k,l,m}\langle k|U_{SB}( \mathbb{I}_{S}\otimes|l\rangle\langle l | )(\rho_{S}(0)\otimes|0\rangle\langle 0|_{B}) ( \mathbb{I}_{S}\otimes|m\rangle\langle m | ) U^{+}_{SB}|k\rangle.$$ Here each operator acts on its respective Hilbert space so this vanishes for all $m,l \neq 0$ so we're left with the traced out time evolution operators on each side of the reduced density matrix. $$\sum_{k}\langle k|U_{SB}( \mathbb{I}_{S}\otimes|0 \rangle )(\rho_{S}(0)) ( \mathbb{I}_{S}\otimes\langle 0 | ) U^{+}_{SB}|k\rangle\\ =\sum_{k}\langle k|U_{SB}|0 \rangle \rho_{S}(0) \langle 0 | U^{+}_{SB}|k\rangle,$$ where the operators $\sum_k \langle k | U_{SB}|0\rangle$ and $\sum_k \langle 0 | U_{SB}^{\dagger}|k\rangle$ are defined on $\mathcal{H}_S$. EDIT: This is precisely what user @spiridon_the_sun_rotator said in their comment, I have just tried to make it slightly more explicit by putting an identity operator there...
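The rearrangement can also be sanity-checked numerically: the operators $E_k = \langle k|U_{SB}|0\rangle$ are Kraus operators on $\mathcal{H}_S$, and the partial trace of the evolved joint state must equal $\sum_k E_k \rho_S(0) E_k^\dagger$. A NumPy sketch with arbitrary small dimensions (my own illustration, not taken from the lecture notes):

```python
import numpy as np

rng = np.random.default_rng(0)
dS, dB = 2, 3  # dimensions of system and bath (arbitrary small choices)

# random unitary U_SB on the joint space, via QR of a complex Gaussian matrix
M = rng.normal(size=(dS * dB, dS * dB)) + 1j * rng.normal(size=(dS * dB, dS * dB))
U, _ = np.linalg.qr(M)

# random system density matrix rho_S(0)
A = rng.normal(size=(dS, dS)) + 1j * rng.normal(size=(dS, dS))
rho_S = A @ A.conj().T
rho_S /= np.trace(rho_S)

# bath starts in |0><0|
ket0 = np.zeros(dB)
ket0[0] = 1.0
rho_SB = np.kron(rho_S, np.outer(ket0, ket0))  # joint index i = s*dB + b

# left-hand side: evolve, then trace out the bath (axes 1 and 3 after reshape)
evolved = U @ rho_SB @ U.conj().T
lhs = evolved.reshape(dS, dB, dS, dB).trace(axis1=1, axis2=3)

# right-hand side: Kraus operators E_k = <k|U|0>, bath row k and bath column 0 of U
rhs = np.zeros((dS, dS), dtype=complex)
for k in range(dB):
    E_k = U.reshape(dS, dB, dS, dB)[:, k, :, 0]
    rhs += E_k @ rho_S @ E_k.conj().T

assert np.allclose(lhs, rhs)
```

Note that the trace of the reduced state stays 1, which reflects $\sum_k E_k^\dagger E_k = \mathbb{I}_S$ when the bath starts pure.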
{ "domain": "physics.stackexchange", "id": 69342, "tags": "quantum-mechanics, quantum-information, density-operator, open-quantum-systems" }
What is a multicast graph?
Question: Martin Fowler in his blog post on "The LMAX Architecture" tried to describe the component called Disruptor as a multicast graph of queues At a crude level you can think of a Disruptor as a multicast graph of queues where producers put objects on it that are sent all the consumers for parallel consumption through separate downstream queues... I'm having difficulties understanding the nature of a multicast graph Answer: Here is a technical paper on the LMAX Architecture (I just followed a link found on the blog you linked here): http://disruptor.googlecode.com/files/Disruptor-1.0.pdf An example of a multicast graph is on page 9 of the PDF. There are also examples of multicast graphs (compared to various other options) here: http://en.wikipedia.org/wiki/Multicast
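In case a concrete model helps, Fowler's "multicast graph of queues" can be caricatured as a producer that appends every published item to one downstream queue per consumer. A toy Python sketch, illustrative only: the real Disruptor deliberately avoids per-consumer queues, using a single shared ring buffer with per-consumer sequence counters instead.

```python
from collections import deque

class MulticastQueue:
    """Toy model of a multicast graph of queues: every item a producer
    publishes is delivered to ALL consumers, each via its own downstream
    queue, so they can consume in parallel at their own pace."""

    def __init__(self):
        self.queues = {}

    def subscribe(self, consumer_name):
        # each consumer gets its own downstream queue
        q = deque()
        self.queues[consumer_name] = q
        return q

    def publish(self, item):
        # multicast: one copy of the item per consumer
        for q in self.queues.values():
            q.append(item)

mq = MulticastQueue()
journal = mq.subscribe("journaler")
replica = mq.subscribe("replicator")
mq.publish("order-1")
mq.publish("order-2")
```

Each consumer sees the full stream in publication order, and draining one consumer's queue has no effect on the others; that independence is what "parallel consumption through separate downstream queues" refers to.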
{ "domain": "cstheory.stackexchange", "id": 991, "tags": "graph-theory, queueing-theory" }
C++ Qt simple GUI game
Question: I'm trying to learn how to use Qt (5.11.1) for GUI applications so I've done a simple memory game where 12 tiles are displayed and every time the user clicks on a tile it will show an image, so they have to match them into 6 pairs of images. There's a countdown of 1 minute. Game ends if the time is up before all 6 pairs have been matched, or if all 6 pairs are matched, only that it will show different messages to the user. There is no next level, saving score or anything, so it's very simple. I know there's room for adding many more features, but would like to know what can be improved from what I've done so far. My mainwindow.h file: #ifndef MAINWINDOW_H #define MAINWINDOW_H #include <QMainWindow> #include <QTimer> #include <QTime> #include <QString> #include <QVector> #include <QHash> #include <QRandomGenerator> #include <QPushButton> #include <QMessageBox> namespace Ui { class MainWindow; } class MainWindow : public QMainWindow{ Q_OBJECT public: explicit MainWindow(QWidget *parent = nullptr); ~MainWindow(); QTimer *timer=new QTimer(); QTime time; QVector<QString> tiles{"tile01", "tile02", "tile03", "tile04", "tile05", "tile06", "tile07", "tile08", "tile09", "tile10", "tile11", "tile12"}; QHash<QString, QString> tile_image; int score=0; bool isTurnStarted; QPushButton* previousTile; QPushButton* currentTile; int matchesLeft; QMessageBox msgBox; private slots: void updateCountdown(); void tileCliked(); void randomize(QVector<QString> &tiles); void bindTileImage(QVector<QString> &tiles, QHash<QString, QString> &tile_image); void findTurnResult(); void restartTiles(); void showImage(); void findFinalResult(); void updateState(); void initalizeGame(); private: Ui::MainWindow *ui; }; #endif // MAINWINDOW_H My mainwindow.cpp file: #include "mainwindow.h" #include "ui_mainwindow.h" MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow){ ui->setupUi(this); //Connect timer to the slot that will handle the timer connect(timer, 
SIGNAL(timeout()), this, SLOT(updateState())); //Connect each button to the same slot, which will figure out which button was pressed and show its associated image file accordingly connect(ui->tiles01, SIGNAL(clicked()), this, SLOT(tileCliked())); connect(ui->tiles02, SIGNAL(clicked()), this, SLOT(tileCliked())); connect(ui->tiles03, SIGNAL(clicked()), this, SLOT(tileCliked())); connect(ui->tiles04, SIGNAL(clicked()), this, SLOT(tileCliked())); connect(ui->tiles05, SIGNAL(clicked()), this, SLOT(tileCliked())); connect(ui->tiles06, SIGNAL(clicked()), this, SLOT(tileCliked())); connect(ui->tiles07, SIGNAL(clicked()), this, SLOT(tileCliked())); connect(ui->tiles08, SIGNAL(clicked()), this, SLOT(tileCliked())); connect(ui->tiles09, SIGNAL(clicked()), this, SLOT(tileCliked())); connect(ui->tiles10, SIGNAL(clicked()), this, SLOT(tileCliked())); connect(ui->tiles11, SIGNAL(clicked()), this, SLOT(tileCliked())); connect(ui->tiles12, SIGNAL(clicked()), this, SLOT(tileCliked())); initalizeGame(); } void MainWindow::tileCliked(){ //get the tile that was clicked currentTile=qobject_cast<QPushButton*>(sender()); //get the image linked to that tile in the map and set tile background to it showImage(); //disable current tile so it can't be clicked again (unless there is no match, in which case it will be re-enabled) currentTile->setEnabled(false); //do something depending on whether the revealed tile is the first or the second tile in the turn if (!isTurnStarted){ previousTile=currentTile; isTurnStarted=true; } else{ //change score and display it findTurnResult(); ui->lblScore->setText(QString::number(score)); //reset turn isTurnStarted=false; } } void MainWindow::showImage(){ QString tile_name=currentTile->objectName(); QString img=tile_image[tile_name]; currentTile->setStyleSheet("#" + tile_name + "{ background-image: url(://" + img + ") }"); } void MainWindow::restartTiles(){ //return tiles from current turn to the default state (remove backgrounds) 
previousTile->setStyleSheet("#" + previousTile->objectName() + "{ }"); currentTile->setStyleSheet("#" + currentTile->objectName() + "{ }"); //re-enable both tiles so they can be used on another turn currentTile->setEnabled(true); previousTile->setEnabled(true); //re-enable the whole tile section ui->frame->setEnabled(true); } void MainWindow::findFinalResult(){ msgBox.setWindowTitle("Game has ended"); msgBox.setIcon(QMessageBox::Information); msgBox.setStandardButtons(QMessageBox::Yes); msgBox.addButton(QMessageBox::No); msgBox.setDefaultButton(QMessageBox::Yes); msgBox.setEscapeButton(QMessageBox::No); if (matchesLeft==0){ timer->stop(); msgBox.setText("Good job! Final score: " + QString::number(score) + "\nPlay again?"); if (QMessageBox::Yes == msgBox.exec()){ initalizeGame(); } else{ QCoreApplication::quit(); } } else{ if (time.toString()=="00:00:00"){ timer->stop(); ui->frame->setEnabled(false); msgBox.setText("Game over.\nPlay again?"); if (QMessageBox::Yes == msgBox.exec()){ initalizeGame(); } else{ QCoreApplication::quit(); } } } } void MainWindow::findTurnResult(){ //check if there is a match (the current tile matches the previous tile in the turn) if (tile_image[currentTile->objectName()]==tile_image[previousTile->objectName()]){ score+=15; matchesLeft--; //if there is a match, find out if all tiles have been matched. 
findFinalResult(); } else{ score-=5; //disable the whole tile section so no tiles can be turned during the 1-second "memorizing period" ui->frame->setEnabled(false); //if there is no match, let user memorize tiles and after 1 second hide tiles from current turn so they can be used on another turn QTimer::singleShot(1000, this, SLOT(restartTiles())); } } void MainWindow::initalizeGame(){ //start turn isTurnStarted=false; //Set score score=0; ui->lblScore->setText(QString::number(score));; //Set matches counter matchesLeft=6; //Set clock for countdown time.setHMS(0,1,0); //Initialize countdown ui->countdown->setText(time.toString("m:ss")); // Start timer with a value of 1000 milliseconds, indicating that it will time out every second. timer->start(1000); //Randomly sort tiles in container randomize(tiles); //Grab pairs of tiles and bind the name of an image file to each pair bindTileImage(tiles, tile_image); //enable tiles frame ui->frame->setEnabled(true); //enable every tile and reset its image QList<QPushButton *> btns = ui->centralWidget->findChildren<QPushButton*>(); foreach (QPushButton* b, btns) { b->setEnabled(true); b->setStyleSheet("#" + b->objectName() + "{ }"); } } void MainWindow::updateCountdown(){ time=time.addSecs(-1); ui->countdown->setText(time.toString("m:ss")); } void MainWindow::updateState(){ updateCountdown(); findFinalResult(); } void MainWindow::randomize(QVector<QString> &tiles){ int a,b,min,max; min = 0; max = tiles.size()-1; for(int i=0; i<tiles.size(); i++){ a=QRandomGenerator::global()->generate() % ((max + 1) - min) + min; b=QRandomGenerator::global()->generate() % ((max + 1) - min) + min; std::swap(tiles[a],tiles[b]); } } void MainWindow::bindTileImage(QVector<QString> &tiles, QHash<QString, QString> &tile_image){ auto iter=tiles.begin(); for (int i=1; i<=6; i++){ QString file_name="0"+QString::number(i)+".png"; tile_image[(*iter)]=file_name; iter++; tile_image[(*iter)]=file_name; iter++; } } MainWindow::~MainWindow(){ delete ui; } And 
the mainwindow.ui file: <?xml version="1.0" encoding="UTF-8"?> <ui version="4.0"> <class>MainWindow</class> <widget class="QMainWindow" name="MainWindow"> <property name="geometry"> <rect> <x>0</x> <y>0</y> <width>800</width> <height>600</height> </rect> </property> <property name="windowTitle"> <string>Memory game</string> </property> <property name="styleSheet"> <string notr="true">#centralWidget { background-image: url(://background.png); } #howToPlay { color: white; } #countdown { color: white; } #scoring { color: white; } #lblScore { qproperty-alignment: AlignCenter; color: white; background: teal; border: 3px solid silver; border-radius: 7px; }</string> </property> <widget class="QWidget" name="centralWidget"> <widget class="QLabel" name="howToPlay"> <property name="geometry"> <rect> <x>160</x> <y>40</y> <width>471</width> <height>31</height> </rect> </property> <property name="font"> <font> <pointsize>14</pointsize> <weight>75</weight> <bold>true</bold> </font> </property> <property name="text"> <string>Click on two tiles and try to match the images</string> </property> </widget> <widget class="QLabel" name="countdown"> <property name="geometry"> <rect> <x>690</x> <y>20</y> <width>81</width> <height>20</height> </rect> </property> <property name="font"> <font> <pointsize>10</pointsize> <weight>75</weight> <bold>true</bold> </font> </property> <property name="text"> <string>cronómetro</string> </property> </widget> <widget class="QLabel" name="scoring"> <property name="geometry"> <rect> <x>330</x> <y>520</y> <width>71</width> <height>21</height> </rect> </property> <property name="font"> <font> <pointsize>12</pointsize> <weight>75</weight> <bold>true</bold> </font> </property> <property name="text"> <string>Puntos:</string> </property> </widget> <widget class="QLabel" name="lblScore"> <property name="geometry"> <rect> <x>410</x> <y>510</y> <width>41</width> <height>31</height> </rect> </property> <property name="font"> <font> <pointsize>14</pointsize> 
<weight>75</weight> <bold>true</bold> </font> </property> <property name="text"> <string>0</string> </property> </widget> <widget class="QFrame" name="frame"> <property name="geometry"> <rect> <x>70</x> <y>80</y> <width>661</width> <height>431</height> </rect> </property> <property name="frameShape"> <enum>QFrame::StyledPanel</enum> </property> <property name="frameShadow"> <enum>QFrame::Raised</enum> </property> <widget class="QPushButton" name="tile10"> <property name="geometry"> <rect> <x>180</x> <y>300</y> <width>131</width> <height>111</height> </rect> </property> <property name="text"> <string/> </property> </widget> <widget class="QPushButton" name="tile05"> <property name="geometry"> <rect> <x>20</x> <y>160</y> <width>131</width> <height>111</height> </rect> </property> <property name="text"> <string/> </property> </widget> <widget class="QPushButton" name="tile06"> <property name="geometry"> <rect> <x>180</x> <y>160</y> <width>131</width> <height>111</height> </rect> </property> <property name="text"> <string/> </property> </widget> <widget class="QPushButton" name="tile09"> <property name="geometry"> <rect> <x>20</x> <y>300</y> <width>131</width> <height>111</height> </rect> </property> <property name="text"> <string/> </property> </widget> <widget class="QPushButton" name="tile07"> <property name="geometry"> <rect> <x>340</x> <y>160</y> <width>131</width> <height>111</height> </rect> </property> <property name="text"> <string/> </property> </widget> <widget class="QPushButton" name="tile03"> <property name="geometry"> <rect> <x>340</x> <y>20</y> <width>131</width> <height>111</height> </rect> </property> <property name="text"> <string/> </property> </widget> <widget class="QPushButton" name="tile11"> <property name="geometry"> <rect> <x>340</x> <y>300</y> <width>131</width> <height>111</height> </rect> </property> <property name="text"> <string/> </property> </widget> <widget class="QPushButton" name="tile01"> <property name="geometry"> <rect> <x>20</x> 
<y>20</y> <width>130</width> <height>110</height> </rect> </property> <property name="text"> <string/> </property> </widget> <widget class="QPushButton" name="tile04"> <property name="geometry"> <rect> <x>500</x> <y>20</y> <width>131</width> <height>111</height> </rect> </property> <property name="text"> <string/> </property> </widget> <widget class="QPushButton" name="tile12"> <property name="geometry"> <rect> <x>500</x> <y>300</y> <width>131</width> <height>111</height> </rect> </property> <property name="text"> <string/> </property> </widget> <widget class="QPushButton" name="tile02"> <property name="geometry"> <rect> <x>180</x> <y>20</y> <width>131</width> <height>111</height> </rect> </property> <property name="text"> <string/> </property> </widget> <widget class="QPushButton" name="tile08"> <property name="geometry"> <rect> <x>500</x> <y>160</y> <width>131</width> <height>111</height> </rect> </property> <property name="text"> <string/> </property> </widget> </widget> <zorder>frame</zorder> <zorder>howToPlay</zorder> <zorder>countdown</zorder> <zorder>score</zorder> <zorder>lblScore</zorder> </widget> <widget class="QToolBar" name="mainToolBar"> <attribute name="toolBarArea"> <enum>TopToolBarArea</enum> </attribute> <attribute name="toolBarBreak"> <bool>false</bool> </attribute> </widget> <widget class="QStatusBar" name="statusBar"/> </widget> <layoutdefault spacing="6" margin="11"/> <resources/> <connections/> </ui> Please note I translated variable and slot names into English for better understanding. I could have missed something. Thanks!! Answer: When dealing with Qt 5+ prefer the new connect syntax: QObject::connect(ui->tiles01, &QPushButton::clicked, this, &MainWindow::tileCliked); Instead of using sender() in the slot you can use a functor to pass the sender along: QObject::connect(ui->tiles01, &QPushButton::clicked, this, [=](){tileCliked(ui->tiles01)}); Then tileCliked becomes void MainWindow::tileCliked(QPushButton* sender){ //... 
} Your shuffle isn't a proper shuffle. Instead you want to do a Fisher-Yates shuffle:

void MainWindow::randomize(QVector<QString> &tiles){
    int max = tiles.size() - 1;
    for(int i = 0; i < tiles.size(); i++){
        int min = i;
        int b = QRandomGenerator::global()->generate() % ((max + 1) - min) + min;
        if(b != i)
            std::swap(tiles[i], tiles[b]);
    }
}
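To see concretely why the original randomize() is biased while Fisher-Yates is not, you can enumerate every possible sequence of random draws for a 3-element list: Fisher-Yates produces each of the 6 permutations exactly once, while n swaps of two uniformly random positions spread 9³ = 729 equally likely draw sequences over 6 permutations, and 729 is not divisible by 6. A Python sketch of this counting argument (helper names are my own):

```python
from collections import Counter
from itertools import product

def naive_shuffle(items, draws):
    # mimics the original randomize(): n iterations, each swapping two
    # uniformly random positions a and b
    out = list(items)
    for a, b in draws:
        out[a], out[b] = out[b], out[a]
    return tuple(out)

def fisher_yates(items, draws):
    # proper Fisher-Yates: at step i, swap position i with a random j in [i, n-1]
    out = list(items)
    for i, j in enumerate(draws):
        out[i], out[j] = out[j], out[i]
    return tuple(out)

n = 3
items = list(range(n))

# every possible draw sequence, each equally likely
naive = Counter(naive_shuffle(items, draws)
                for draws in product(product(range(n), repeat=2), repeat=n))
fy = Counter(fisher_yates(items, draws)
             for draws in product(*(range(i, n) for i in range(n))))

print(sorted(fy.values()))     # [1, 1, 1, 1, 1, 1] -> uniform
print(sorted(naive.values()))  # unequal counts -> biased
```

The same divisibility argument applies at any size: nⁿ draw sequences can never split evenly into n! permutations for n > 2, so the random-pair-swap loop is biased no matter how many iterations it runs.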
{ "domain": "codereview.stackexchange", "id": 33190, "tags": "c++, gui, qt" }
Why do the two methods give different answers for the Fourier transform of $Y = \cos(\omega_0 t + \phi)$?
Question: Why do the following two methods give different answers (or are they the same) for the Fourier transform of $Y = \cos(\omega_0 t + \phi)$, with respect to $t \to \omega$ ? Answer: They are the same. Using the Dirac impulse sifting property $f(x) \delta(x-a) = f(a)\delta(x-a)$ you can also verify that the second method produces the same output as the first method.
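For reference, here is the result both methods should agree on, assuming the convention $Y(\omega) = \int y(t)\, e^{-i\omega t}\,dt$. This is my own derivation, sketched here since the question's two methods aren't reproduced above; it is consistent with the sifting-property remark in the answer.

```latex
% Using \cos x = \tfrac{1}{2}(e^{ix}+e^{-ix}) and
% \mathcal{F}\{e^{i\omega_0 t}\} = 2\pi\,\delta(\omega-\omega_0):
\begin{aligned}
Y(\omega) &= \int_{-\infty}^{\infty} \cos(\omega_0 t + \phi)\, e^{-i\omega t}\,dt \\
          &= \frac{1}{2}\int_{-\infty}^{\infty}
             \left(e^{i\phi}e^{i\omega_0 t} + e^{-i\phi}e^{-i\omega_0 t}\right)
             e^{-i\omega t}\,dt \\
          &= \pi\, e^{i\phi}\,\delta(\omega-\omega_0)
           + \pi\, e^{-i\phi}\,\delta(\omega+\omega_0).
\end{aligned}
```

Whether the phase appears as a constant factor $e^{\pm i\phi}$ or inside a function evaluated at $\mp\omega_0$ is exactly where the sifting property $f(\omega)\,\delta(\omega \mp \omega_0) = f(\pm\omega_0)\,\delta(\omega \mp \omega_0)$ shows the two forms are equal.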
{ "domain": "dsp.stackexchange", "id": 7399, "tags": "fourier-transform" }
Simple html5/javascript web database application front end
Question: This application is the front end to a very basic database application. The front end assumes that the back end database would have fields as per the html form. some concerns on db2form.js I have are: some very specific html doc references in the javascript - eg document.forms.searchform.elements.search.innerText = "Search"; current_contact_idx global variable doesn't seem right. As for the css file, that could probably be improved a lot. Any feedback on this application would be very welcome. The html page: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Itel Office</title> <link rel="stylesheet" href="style.css"> <script src="db2form.js"></script> </head> <body> <nav> <a href="">Contacts</a> <a href="call_identifier_pretty.html" target="_blank">Call Log</a> </nav> <section> <h1>Contacts</h1> <p>Enter text below and click Search button to find a contact</p> <form name="searchform" action="/cgi-bin/database.exe" method="POST"> <label for="rowid">ID: </label> <input id="rowid" type="text" name="rowid" value="" readonly disabled> <br> <label for="name">Name: </label> <input id="name" type="text" name="name" value=""> <br> <label for="company">Company: </label> <input id="company" type="text" name="company" value=""> <br> <label for="email">Email: </label> <input id="email" type="email" name="email" value=""> <br> <label for="ddi">Telephone: </label> <input id="ddi" type="tel" name="ddi" value=""> <br> <label for="mobile">Mobile: </label> <input id="mobile" type="tel" name="mobile" value=""> <br> <label for="switchboard">alt Telephone: </label> <input id="switchboard" type="tel" name="switchboard" value=""> <br> <label for="url">Web: </label> <input id="url" type="text" name="url" value=""> <br> <label for="address1">Address line 1: </label> <input id="address1" type="text" name="address1" value=""> <br> <label for="address2">Address line 2: </label> <input id="address2" type="text" name="address2" value=""> <br> <label 
for="address3">Address line 3: </label> <input id="address3" type="text" name="address3" value=""> <br> <label for="address4">Address line 4: </label> <input id="address4" type="text" name="address4" value=""> <br> <label for="postcode">Postcode: </label> <input id="postcode" type="text" name="postcode" value=""> <br> <label for="category">Category: </label> <input id="category" type="text" name="category" value=""> <br> <label for="notes">Notes: </label> <textarea id="notes" name="notes"></textarea> <br> <div class="buttons"> <button name="search" type="button" onclick="process(document.forms.searchform.elements.search.innerText)">Search</button> <button name="new" type="button" onclick="process('New')">New</button> <button name="edit" type="button" onclick="process('Edit')" disabled>Edit</button> <button name="save" type="button" onclick="process('Save')" disabled>Save</button> <button name="delete" type="button" onclick="process('Delete')" disabled>Delete</button> <button name="first" type="button" onclick="process('First')" disabled>First</button> <button name="next" type="button" onclick="process('Next')" disabled>Next</button> <button name="prior" type="button" onclick="process('Prior')"disabled>Prior</button> <button name="last" type="button" onclick="process('Last')" disabled>Last</button> </div> </form> <div id="status"> </div> </section> </body> </html> The css file, style.css: body{ background-color: #ffff00; } nav{ box-sizing:border-box; background-color:#409fff; /* blue we like */ display: inline-block; width: 20%; min-width: 125px; margin-right:15px; height:100vh; overflow: auto; } nav a{ display:block; line-height: 45px; height:45px; color: #FFFFFF; text-decoration: none; padding-left: 50px; margin:10px 0 10px 5px; } section{ display: inline-block; width:70%; height:100vh; overflow: auto; } h1{ color: #409fff; padding: 2px; margin: 0; } form { display: grid; grid-template-columns: 150px 1fr; border: 0; } label { grid-column: 1 / 2; margin: 0; 
padding:0; border: 0; } input{ grid-column: 2 / 3; margin: 0; padding:0; border: 0; border-radius: 5px; } /*input:focus{ background-color: #fcfab1; } */ textarea{ border-radius: 5px; height: 20px; } .buttons{ display: grid; grid-column: 2 / 3; grid-gap: 10px; grid-template-columns: 1fr 1fr 1fr 1fr 1fr 1fr 1fr 1fr 1fr; } The javascript file, db2form.js: let current_contact_idx = -1; let records = null; function search_mode() { // now change button to say Search document.forms.searchform.elements.search.innerText = "Search"; document.forms.searchform.elements.new.disabled = false; document.forms.searchform.elements.edit.disabled = true; document.forms.searchform.elements.save.disabled = true; document.forms.searchform.elements.delete.disabled = true; document.forms.searchform.elements.first.disabled = true; document.forms.searchform.elements.next.disabled = true; document.forms.searchform.elements.prior.disabled = true; document.forms.searchform.elements.last.disabled = true; } function found_mode() { // now change button to say Cancel document.forms.searchform.elements.search.innerText = "Cancel"; document.forms.searchform.elements.new.disabled = false; document.forms.searchform.elements.edit.disabled = false; document.forms.searchform.elements.save.disabled = true; document.forms.searchform.elements.delete.disabled = false; document.forms.searchform.elements.first.disabled = false; document.forms.searchform.elements.next.disabled = false; document.forms.searchform.elements.prior.disabled = false; document.forms.searchform.elements.last.disabled = false; } function new_edit_mode() { // now change button to say Cancel document.forms.searchform.elements.search.innerText = "Cancel"; document.forms.searchform.elements.new.disabled = true; document.forms.searchform.elements.edit.disabled = true; document.forms.searchform.elements.save.disabled = false; document.forms.searchform.elements.delete.disabled = true; document.forms.searchform.elements.first.disabled = true; 
document.forms.searchform.elements.next.disabled = true; document.forms.searchform.elements.prior.disabled = true; document.forms.searchform.elements.last.disabled = true; } function server_response_callback_search(ajax) { let form_elements = document.forms.searchform.elements; if(ajax.responseText.length == 0) { cancel_step(form_elements); document.getElementById('status').innerHTML = "No record found for your search." return; } console.log("server_response_callback_search response type: " + ajax.getResponseHeader('content-type')); records = JSON.parse(ajax.responseText); if (records.contacts.length > 0) { current_contact_idx = 0; populate_field(records.contacts[current_contact_idx]); found_mode(); } else { current_contact_idx = -1; // reset to no record found search_mode(); // stay in search mode } // display message if (current_contact_idx == -1) { document.getElementById('status').innerHTML = "No record found which matches the criteria"; } else { document.getElementById('status').innerHTML = "Displaying record " + (current_contact_idx + 1).toString() + " of " + records.contacts.length; } } function server_response_callback_update(ajax, rowid) { console.log("server_response_callback_update response type: " + ajax.getResponseHeader('content-type')); let form_elements = document.forms.searchform.elements; search_mode(); // empty all input and textarea fields for (let element of form_elements) { if(element.type != 'hidden') { element.value = ""; } } document.getElementById('status').innerHTML = ajax.responseText;; } function server_response_callback_insert(ajax) { console.log("server_response_callback_insert response type: " + ajax.getResponseHeader('content-type')); let form_elements = document.forms.searchform.elements; search_mode(); // empty all input and textarea fields for (let element of form_elements) { if(element.type != 'hidden') { element.value = ""; } } document.getElementById('status').innerHTML = ajax.responseText; } // We need to display what it is 
that database.exe returns for these cases function server_response_callback_delete(ajax, rowid) { console.log("server_response_callback_delete response type: " + ajax.getResponseHeader('content-type')); let form_elements = document.forms.searchform.elements; search_mode(); // empty all input and textarea fields for (let element of form_elements) { if(element.type != 'hidden') { element.value = ""; } } document.getElementById('status').innerHTML = ajax.responseText; } function populate_field(element) { let formelements = document.forms.searchform.elements; // formelements is an array for (let i = 0; i < formelements.length; i++) { if (formelements[i].name in element) { formelements[i].value = element[formelements[i].name]; } else { formelements[i].value = ""; } } document.getElementById('status').innerHTML = "Displaying record " + (current_contact_idx + 1).toString() + " of " + records.contacts.length; } function edit_step() { new_edit_mode(); } function cancel_step(form_elements) { search_mode(); // empty all input and textarea fields for (let element of form_elements) { if(element.type != 'hidden') { element.value = ""; } } document.getElementById('status').innerHTML = ""; } function new_step(form_elements) { new_edit_mode(); // empty all input and textarea fields for (let element of form_elements) { if(element.type != 'hidden') { element.value = ""; } } document.getElementById('status').innerHTML = "Enter data for new contact, then click Save button to save to database"; } function extract_form_values(form_elements) { let query = ""; let first = "yes"; for (let element of form_elements) { if(["text", "textarea", "tel", "email"].includes(element.type)) { if(first == "no") { query += "&"; } first = "no"; query += element.name; query += "="; query += element.value; } } return query; } function save_step(form_elements) { let request_payload = extract_form_values(form_elements); if(request_payload.length == 0) { //alert("You need to enter some data to save to 
database"); document.getElementById('status').innerHTML = "You need to enter some data to save to database"; return; } // we determine whether to UPDATE or INSERT based on presence of rowid. // if a rowid assume updating an existing contact, otherwise a new contact if (document.forms.searchform.elements.rowid.value == "") { // go down INSERT route // remove rowid= from payload let pos = request_payload.indexOf("rowid=&"); if (pos != -1) { // remove string request_payload = request_payload.replace("rowid=&", ""); } request_payload += "&operation=INSERT"; console.log("sending query to database server: " + request_payload); // setup ajax callback to handle response ajax_post("/cgi-bin/database.exe", request_payload, server_response_callback_insert); } else { let rowid = parseInt(document.forms.searchform.elements.rowid.value, 10); request_payload += "&operation=UPDATE"; console.log("sending query to database server: " + request_payload); // setup ajax callback to handle response ajax_post("/cgi-bin/database.exe", request_payload, server_response_callback_update, rowid); } } function has_values(form_elements) { for (let element of form_elements) { if(["text", "textarea", "tel", "email"].includes(element.type) && element.name != "rowid" && element.value != "") { return true; } } return false; } function insert_step(form_elements) { // check user actually entered some data in fields if(!has_values(form_elements)) { console.log("attempting to insert but no values populated"); document.getElementById('status').innerHTML = "Enter contact details to add a new contact"; return; } let request_payload = extract_form_values(form_elements); if(request_payload.length == 0) { document.getElementById('status').innerHTML = "You need to enter some update a contact"; return; } request_payload += "&operation=INSERT"; console.log("sending query to database server: " + request_payload); // setup ajax callback to handle response ajax_post("/cgi-bin/database.exe", request_payload, 
server_response_callback_insert); } function search_step(form_elements) { let query = extract_form_values(form_elements); query += query.length == 0 ? "operation=SELECT" : "&operation=SELECT"; console.log("sending query to database server: " + query); // setup ajax callback to handle response ajax_post("/cgi-bin/database.exe", query, server_response_callback_search); } function ajax_post(url, request, callback, arg) { // setup ajax callback to handle response var xhttp = new XMLHttpRequest(); xhttp.onreadystatechange = function() { if (this.readyState == 4 && this.status == 200) { callback(this, arg); } }; xhttp.open("POST", url, true); xhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded"); xhttp.send(request); } function delete_step(form_elements) { if(form_elements.rowid.value == "") { const delete_msg = "Form not in correct state to delete a contact"; document.getElementById('status').innerHTML = delete_msg; alert(delete_msg); return; } let rowid = parseInt(form_elements.rowid.value, 10); // DELETE FROM table_name WHERE condition; let request = `rowid=${rowid}&operation=DELETE`; console.log("sending request to database server: " + request); let confirmation = confirm("Click Ok if you are absolutely sure you want to delete this contact from the database"); if (confirmation) { // setup ajax callback to handle response ajax_post("/cgi-bin/database.exe", request, server_response_callback_delete, rowid); } } function process(buttontext) { console.log(`buttontext=${buttontext}`); let form_elements = document.forms.searchform.elements; if (buttontext == "New") { new_step(form_elements); }else if (buttontext == "Edit") { edit_step(); } else if (buttontext == "Save") { save_step(form_elements); } else if (buttontext == "Search") { search_step(form_elements); } else if (buttontext == "Cancel") { cancel_step(form_elements); } else if (buttontext == "Delete") { delete_step(form_elements); } else if (buttontext == "First") { if (records.contacts.length 
!= 0) { current_contact_idx = 0; populate_field(records.contacts[current_contact_idx]); } } else if (buttontext == "Next") { if (records.contacts.length > (current_contact_idx + 1)) { populate_field(records.contacts[++current_contact_idx]); } else { document.getElementById('status').innerHTML = "You are on the last record"; } } else if (buttontext == "Prior") { if (current_contact_idx > 0) { populate_field(records.contacts[--current_contact_idx]); } else { document.getElementById('status').innerHTML = "You are on the first record"; } } else if (buttontext == "Last") { if (records.contacts.length != 0) { current_contact_idx = records.contacts.length - 1; populate_field(records.contacts[current_contact_idx]); } } else { document.getElementById('status').innerHTML = "something has gone wrong - button text incorrectly set"; } } // user can press Enter key to invoke search, Esc key to cancel (go back to ready to search mode) document.onkeydown = function(evt) { evt = evt || window.event; var isEscape = false; var isEnter = false; if ("key" in evt) { isEscape = (evt.key === "Escape" || evt.key === "Esc"); isEnter = (evt.key === "Enter"); } else { isEscape = (evt.keyCode === 27); isEnter = (evt.keyCode === 13); } if (isEscape) { // only handle Escape if Cancel button enabled if(document.forms.searchform.elements.search.innerText == "Cancel") { process("Cancel"); } } else if (isEnter) { // only handle Enter if Search button enabled if(document.forms.searchform.elements.search.innerText == "Search") { process("Search"); } } }; Answer: On the Javascript side of things: Don't use inline handlers, they have way too many problems to be worth using. Instead, attach listeners with Javascript and addEventListener instead. Since on every button click including Search, you want to pass the text content of the button to process, you can do that concisely by examining the textContent of the clicked button inside the handler. 
It's generally preferable to select elements with querySelector (which accepts concise, flexible CSS strings) rather than going through document.forms: document.querySelector('.buttons').addEventListener('click', ({ target }) => { if (!target.matches('button')) return; process(target.textContent); }); Using the above code will allow you to remove all inline handlers from the .buttons > button elements, including the onclick="process(document.forms.searchform.elements.search.innerText)". const process = console.log; document.querySelector('.buttons').addEventListener('click', ({ target }) => { if (!target.matches('button')) return; process(target.textContent); }); <div class="buttons"> <button name="search" type="button">Search</button> <button name="new" type="button">New</button> <button name="edit" type="button" disabled>Edit</button> <button name="save" type="button" disabled>Save</button> <button name="delete" type="button" disabled>Delete</button> <button name="first" type="button" disabled>First</button> <button name="next" type="button" disabled>Next</button> <button name="prior" type="button" disabled>Prior</button> <button name="last" type="button" disabled>Last</button> </div> Best to use textContent, the standard way to extract text from an element, not innerText, a peculiar property from Internet Explorer that has pretty strange behavior. innerText is almost never what you want. 
Rather than selecting the buttons over and over again in search_mode, found_mode, new_edit_mode, consider selecting them once, and construct an object indexed by button text instead: const buttons = {}; for (const button of document.querySelectorAll('.buttons > button')) { buttons[button.textContent.toLowerCase()] = button; } function enableDisableButtons(newVal) { for (const button of Object.values(buttons)) { button.disabled = newVal; } } function search_mode() { buttons.search.textContent = 'Search'; enableDisableButtons(true); buttons.new.disabled = false; } function found_mode() { buttons.search.textContent = 'Cancel'; enableDisableButtons(false); buttons.save.disabled = true; } function new_edit_mode() { buttons.search.textContent = 'Cancel'; enableDisableButtons(true); buttons.save.disabled = false; } You can also save a reference to the status element instead of re-selecting it frequently. const status = document.querySelector('#status'); // ... status.innerHTML = "Displaying record " + (current_contact_idx + 1).toString() + " of " + records.contacts.length; The above code also points to another issue - unless you're deliberately inserting HTML markup, you should set text content of elements by assigning to textContent, not innerHTML. Using innerHTML can result in arbitrary code execution if the inserted content is untrustworthy, in addition to being slower than textContent and more confusing for script readers. So, for the above, you'd want to instead do status.textContent = "Displaying record " + (current_contact_idx + 1).toString() + " of " + records.contacts.length; In your process function, rather than a whole bunch of if/else checks on the argument, you could consider making an object indexed by the button text instead, whose values are the function you'd want to run when that button needs to be processed. In the handler, just look up the function on the object and run it: const actionsByButtonText = { New: new_step, Edit: edit_step, Save: save_step, // ...
}; function process(buttontext) { console.log(`buttontext=${buttontext}`); const fn = actionsByButtonText[buttontext]; if (fn) fn(); else status.textContent = "something has gone wrong - button text incorrectly set"; } (There's no need to pass form_elements to those functions; they can iterate through the buttons object above. It doesn't make much sense as an argument anyway, since it never changes.) It looks like you're using let by default when declaring variables. Best to always use const - don't use let unless you have to reassign, and never use var (like in your ajax_post). Using const indicates to later readers of the script, including you, that the variable name will never be reassigned, which results in less cognitive overhead than permitting reassignment with let. In Javascript, variables are almost always named using camelCase, which you might want to consider if you want to conform. The script is a bit long - originally, 371 lines. Once you have a script with more than 3-4 functions, I'd strongly consider organizing it using modules instead. Having separate modules which each do their own thing is more maintainable than having one big file. Modules are also useful because the dependencies between them are explicit, rather than everything being global and potentially able to reference everything else - that can make things a bit confusing when the code isn't trivial. Look into something like webpack. You should also consider using proper indentation in the HTML; it'll make the structure more readable at a glance.
Eg, this: </nav> <section> <h1>Contacts</h1> <p>Enter text below and click Search button to find a contact</p> <form name="searchform" action="/cgi-bin/database.exe" method="POST"> <label for="rowid">ID: </label> <input id="rowid" type="text" name="rowid" value="" readonly disabled> should probably be
</nav>
<section>
  <h1>Contacts</h1>
  <p>Enter text below and click Search button to find a contact</p>
  <form name="searchform" action="/cgi-bin/database.exe" method="POST">
    <label for="rowid">ID: </label>
    <input id="rowid" type="text" name="rowid" value="" readonly disabled>
... There are other improvements that can be made as well, but this should be a good start.
{ "domain": "codereview.stackexchange", "id": 38106, "tags": "javascript, css, html5" }
Why does $T(n)=6T(n-1) + n^3$ have such a messy solution?
Question: I tried to solve the recurrence relation $T(n) = 6T(n-1) + n^3$ using the tree method, and figured out that the root will be $n^3$, the second level will be $6^1(n-1)^3$, the third will be $6^2 (n-2)^3$, and so on. The formula as I understood it is: $\sum_{i=0}^n 6^i(n-i)^3$. After entering this in Wolfram, the result is: $$ \sum_{i=0}^n 6^i(n-i)^3 = \frac{1}{625}(-125n^3-450n^2-630n+366(6^n-1)). $$ And it doesn't look like a valid solution. Did I miss anything? Answer: I'm not sure why you think this solution is invalid. It implies that $$ \sum_{i=0}^n 6^i(n-i)^3 \sim \frac{366}{625} 6^n, $$ and in particular, $$ \sum_{i=0}^n 6^i(n-i)^3 = \Theta(6^n). $$ You can also check it for particular values of $n$. For example, when $n = 0$ you clearly get zero, and for $n = 1$ you get $$ \frac{-125-450-630+366 \cdot 5}{625} = 1 = 6^0 (1-0)^3 + 6^1 (1-1)^3. $$
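For anyone who wants to double-check, a quick numerical sanity check (a Python sketch; the function names are just for illustration) confirms that the direct sum and Wolfram's closed form agree, and that the division by 625 is exact over the integers:

```python
# Compare the direct sum with the closed form returned by Wolfram.
def direct_sum(n):
    return sum(6**i * (n - i)**3 for i in range(n + 1))

def closed_form(n):
    # (1/625)(-125 n^3 - 450 n^2 - 630 n + 366 (6^n - 1)); exact integer division
    return (-125 * n**3 - 450 * n**2 - 630 * n + 366 * (6**n - 1)) // 625

for n in range(15):
    assert direct_sum(n) == closed_form(n)
print(direct_sum(2))  # 8 + 6*1 + 0
```

The dominant term is $\frac{366}{625}6^n$, which is exactly the $\Theta(6^n)$ behaviour described in the answer.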
{ "domain": "cs.stackexchange", "id": 16051, "tags": "trees, recurrence-relation" }
Using the BBC iPlayer API for managing TV programs
Question: I have been learning Ruby by reading books and tutorials, and practising some problems on Codewars to level up my new skills, when I had an idea for a little Ruby code project. It will store programmes you like watching on the BBC iPlayer website, and list upcoming episodes, using an API (link is in the 2nd line of code). For those unfamiliar with iPlayer, it lets you watch most BBC TV programmes, but only for a set time period after they were shown on TV. Sometimes I get behind in watching shows, so in this example it will list episodes in order of soonest-to-expire, so I know what I need to watch first. I decided to create a class which is an extension of an Array, thinking that people can then do normal operations on it like they would an array. I remain unsure if this is a good idea. Then there's a simple CLI to access it. I've written it to be extendable. Two extensions I had in mind were: a GUI could use this class as-is; a list of programmes could be stored in an external file and loaded when the program is run. So, let me know what you think. The main thing I thought of is error-trapping. It does, to a degree, assume that the API will do what you expect it to.
class Iplayer_data < Array #for iPlayer API documentation, see http://mermade.github.io/swagger/index.html?url=https://raw.githubusercontent.com/Mermade/bbcparse/master/iblApi/swagger.json#!/Search/Search require 'httpclient' #I had difficulty installing a cUrl gem on windows, so I used this instead require 'json' attr_reader :categories, :regions def initialize @http = HTTPClient.new @lang = "en" @rights = "web" @version = "v1" # load categories and regions straight away, for future use, and so we have a local copy of the data @categories = load_standing_data("categories") @regions = load_standing_data("regions") end def i_search(query) url = "http://ibl.api.bbci.co.uk/ibl/#{@version}/search?q=" url += URI.escape(query) url += "&lang=#{@lang}&rights=#{@rights}&availability=available" get_http_content(url)["search"]["results"].each do |c| #don't bring back all the attributes. Basically just a test to see if I could do it #plus it's less data to display, which helps with testing. #there's no real good programmatic reason to limit what we bring back c.select! { |k,v| ["id", "title", "synopses"].include?(k) } end end def add_programme(name) if self.select {|x| x["title"].downcase==name.downcase}.count > 0 return "Programme #{name} is already in your list of programmes" end s = i_search(name) return "No program found with a name like that" if s.count==0 if s[0]["title"].downcase == name.downcase self << s[0]; "Successfully added #{s[0]['title']}" else "Could not find a program of the name #{name}. Did you mean #{s[0]['title']}?"
end end def delete_programme(name) if self.select {|x| x["title"].downcase==name.downcase}.count == 0 return "Programme #{name} is not in your list of programmes" end self.delete_if {|x| x["title"].downcase==name.downcase} "#{name} has been deleted" end def get_episodes episodes = [] self.each do |programme| pid = programme["id"] url = "http://ibl.api.bbci.co.uk/ibl/#{@version}/programmes/#{pid}/episodes?rights=#{@rights}" url += "&availability=available&initial_child_count=2&per_page=200" episodes += get_http_content(url)["programme_episodes"]["elements"] end episodes end def get_http_content(url);JSON.parse(@http.get(url).content);end private #categories and regions have pretty much the same API so we can load them both through 1 method def load_standing_data(name) raw = get_http_content("http://ibl.api.bbci.co.uk/ibl/#{@version}/#{name}?lang=#{@lang}") # from an array of hashes, return a hash of hashes, with the id field as the key output = Hash.new raw["#{name}"].each { |x| output[x["id"]] = x } output end end ##################### END OF CLASS ##################### ###### NOW CREATE A SIMPLE COMMAND-LINE INTERFACE ###### def display_help puts 'Iplayer_data helps you keep track of your favourite programs.' puts 'It can store the programs you want to follow, then display information regarding episodes of those programs.' puts 'You can (a)dd, (d)elete or (v)iew your favourite programs...' puts 'Display (e)pisodes, (c)ategories, or (r)egions, or e(x)it.' puts 'Some instructions require extra input - just put a space after the first letter then type it in.' puts 'eg to add Top Gear, input "a top gear" (case insensitive)' #(d)elete, (p)rogrammes end require 'pp' # pretty print ip = Iplayer_data.new re='' until re[0] == 'x' do print '>' re = gets.chomp case re[0] when '?'
then display_help when 'a' then puts ip.add_programme(re[2..-1]) when 'd' then puts ip.delete_programme(re[2..-1]) when 'v' then pp ip when 'c' then pp ip.categories when 'r' then pp ip.regions when 'e' then #list of episodes, ordered by soonest to latest to expire ep = ip.get_episodes.sort_by { |episode| episode["versions"][0]["availability"]["end"]} ep.each do |e| e["time_left"] = e["versions"][0]["availability"]["remaining"]["text"] puts e.select {|k,v| ["id", "title", "subtitle", "time_left"].include?(k) } end puts "No episodes found" if ep.count==0 when 'x' then puts "Exiting..." else puts "Sorry, I did not recognize that input. Type \"?\" for help." end end Answer: My thoughts, in no particular order: Fix your indentation. Ruby uses 2 spaces of indentation; no more, no less, and not tabs. requires should generally be file-level, not class-level. Don't name a class with an underscore. Classes are PascalCase in Ruby. Besides, the _data suffix doesn't make much sense; if anything the class is an "iPlayer client" and should be named something like that. Why derive the class from Array? That's crazy overkill compared to just having a programs (or programmes) array as an instance variable. Besides, if it's an array instance, I can do weird things like ip << "foobar", which doesn't make much sense. If you do want an array-like API for the class, implement an each method, and include Enumerable for cleaner code. Model the domain. You should probably create a separate Programme class, which can have an episodes method. That would be cleaner than having one class both deal with programs and episodes, and just loading all episodes for everything. It might be good to subdivide further into a pure API client class, which can be used to make API calls by a "programs manager" class, a "program" class, and perhaps even an "episode" class.
You have some instance variables that should probably be constants since that's how they're used (@lang, @rights). You repeat most of the URL in several places. Make the base URL a constant. Make a method for generating URLs instead of gluing together bits of string in multiple places. E.g. def url_for(path, params = {}) query = URI.encode_www_form(params) "#{BASE_URL}#{path}?#{query}" end # Example url_for('search', q: 'top gear', rights: 'web', availability: 'available', lang: 'en') # => http://ibl.api.bbci.co.uk/ibl/v1/search?q=top+gear&rights... Can't remember if an HTTPClient instance can be configured to handle some of this too (e.g. a common base URL). It might. Don't do this: output = Hash.new raw["#{name}"].each { |x| output[x["id"]] = x } output Don't create a closure variable and manipulate it from inside a block. Don't write ["#{name}"] when [name] has the same result. Do something like this instead: raw[name].map { |item| [item['id'], item] }.to_h If categories and regions aren't necessarily used for anything, don't eagerly load them. Instead load them lazily, when needed, and memoize: def categories @categories ||= load_standing_data('categories') end Incidentally, load_standing_data is an odd name... not terribly explanatory.
{ "domain": "codereview.stackexchange", "id": 21295, "tags": "beginner, ruby, api" }
Data Structure - Array of Array
Question: Suppose that we have a family $F=\{a_1,\dots,a_s\}$ of sorted arrays to merge. Our strategy is to choose two of them, say $a_i$ and $a_j$, remove them from $F$, merge them and put the resulting array back into $F$; we keep doing that until there is only one array in $F$. We will use a greedy strategy: at each step, we choose $a_i$ of minimal length in $F$, and $a_j$ of minimal length in $F-\{a_i\}$. What data structure would you use to store $F$? Obviously, I assumed $F$ to be an array of arrays, as each element of $F$ is an array of integers. However, I think the question is asking about the abstract data structures that were taught in class. We have been taught about the Priority Queue ADT, Stack ADT, Queue ADT, and heaps so far. Can $F$ be one of them? If so, please explain why $F$ can be such a structure. Thank you, Answer: You would like to have the ability to extract the minimal array (in terms of length), then extract the next minimal, merge them, and insert the result back into the data structure. Since you want the best performance possible, you now would like to ask yourself how much each operation would cost. If you use a stack you would have the option to $\text{pop}$, which is good if you assume your stack is given to you with the arrays already sorted by length (it will be $O(1)$). So, you will be able to $\text{pop}$ the first two arrays and then merge them. But now, what will you do with the merged array? You have no guarantee that the new array is indeed the shortest one. A simple queue won't do either. So, a better option would be to use a priority queue, implemented as a heap. You will have a heap where the keys are the lengths of the arrays. To get the two shortest arrays you will use $\text{extract-min}$. You can use Fibonacci heaps, which will give you $O(\log n)$ amortized time for extraction. Next, after merging the arrays you can insert the result back with $O(1)$ amortized time complexity. *Note that the complexity times I wrote here disregard the lengths of the arrays themselves.
I treated them as simple elements for simplicity, since the question focuses on the use of the data structure.
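To make the suggestion concrete, here is a small sketch (Python rather than pseudocode; the names are illustrative) that keeps $F$ in a binary min-heap keyed on array length. It uses Python's heapq, a binary heap rather than a Fibonacci heap, so both extraction and insertion cost $O(\log s)$, which is still fine for this strategy:

```python
import heapq

def merge_two(a, b):
    # standard two-way merge of sorted arrays
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out += a[i:] + b[j:]
    return out

def greedy_merge_all(family):
    # heap entries are (length, tiebreaker, array) so ties never compare arrays
    heap = [(len(a), k, a) for k, a in enumerate(family)]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        _, _, a = heapq.heappop(heap)   # shortest array in F
        _, _, b = heapq.heappop(heap)   # shortest array in F - {a}
        merged = merge_two(a, b)
        heapq.heappush(heap, (len(merged), counter, merged))
        counter += 1
    return heap[0][2]

print(greedy_merge_all([[1, 4], [2, 3], [5]]))  # [1, 2, 3, 4, 5]
```

The integer tiebreaker is only there because Python compares tuples element-wise, and two entries with equal lengths would otherwise fall back to comparing the arrays themselves.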
{ "domain": "cs.stackexchange", "id": 11415, "tags": "algorithms, algorithm-analysis, data-structures" }
Why are some of the gene peptides returned by biomaRt missing an asterisk in the end?
Question: Note: this question has also been asked on BioStars I am getting peptides using the biomaRt library in R for the gene 'BRCA1'. 27 different amino acid sequences are returned, and 12 of those have an asterisk (*) at the end, while 15 do not. What does that mean? I know that an asterisk is used to indicate a stop codon. So, does that mean the peptides without an asterisk do not have a stop codon? Is that possible at all? Are all of those valid peptides or not? Below is the R code I used: library(biomaRt); mart = useMart('ensembl', dataset='hsapiens_gene_ensembl'); seq = getSequence(id='BRCA1', type='hgnc_symbol', seqType='peptide', mart = mart); MDLSALRVEEVQNVINAMQKILECPICLELIKEPVSTKCDHIFCKVLLCCPSWSTVVRS* is an example sequence with an asterisk at the end. MDLSALRVEEVQNVINAMQKILECPICLELIKEPVSTKCDHIFCKSLQESTRFSQLVEEL‌​LKIICAFQLDTGLEYANSYN‌​FAKKENNSPEHLKDEVSI is an example without an asterisk. Answer: If it's 3' incomplete that means the evidence used to create it was a fragment. Here's the evidence used to construct BRCA1-214 ENST00000477152.5, a 3' incomplete. You can see that there's a full length cDNA from EMBL, AK307553.1, which was used to create this model. The sequence was mapped against the genome sequence to create the transcript model. When searching for the translation, it was found that the open reading frame was started, but there was no stop codon. This suggests that the cDNA AK307553.1 is actually a fragment, that the mRNA was broken or cleaved before it was reverse-transcribed and sequenced. We display it in Ensembl in the hope that this will lead someone to identify the full length transcript that it represents. Many people choose not to work with these incomplete transcripts. You can exclude them by filtering by Gencode Basic in biomaRt.
{ "domain": "bioinformatics.stackexchange", "id": 208, "tags": "gene, proteins, biomart" }
Are vacuum actuated artificial muscles as hard to control as the pneumatic ones?
Question: I was searching for questions on this stack and I found this one asking why pneumatic/hydraulic artificial muscle robots aren't common. TLDR: the reason is that it is incredibly hard to control them. However, one thing that the answer didn't talk about was vacuum-actuated artificial muscles. My gut feeling says that they are as difficult as pneumatic ones, since they are simply the opposite, but even then I want to know. Answer: With a vacuum you are limited to the pressure differential between zero and atmospheric (~100kPa or ~15PSI). With pneumatics you are limited to the pressure differential between atmospheric and whatever pressure you design your compressor and muscles to. A normal shop compressor easily exceeds 100PSI. Also, with a compressor you can do things like store the compressed air at a higher pressure than you need and use a valve to regulate it, which lets you use a smaller reservoir. You can't "store a more extreme vacuum" and then regulate the vacuum to be less. The only thing you can do is use a larger vacuum reservoir tank.
{ "domain": "engineering.stackexchange", "id": 4989, "tags": "robotics, vacuum, actuator, vacuum-pumps" }
Heat capacity in canonical ensemble?
Question: This is a problem from Understanding Molecular Simulation by Frenkel and Smit. Show that the heat capacity can be calculated from fluctuations in the total energy of a canonical ensemble: $$C_v=\frac{\langle U^2\rangle-\langle U\rangle^2}{k_BT^2}.$$ I know that $$C_v=\left(\frac{dU}{dT}\right)_V.$$ How do I calculate this slope in a statistical ensemble? What am I missing here? Answer: If $ Z = \sum_\mu \mathrm e^{-\beta E_\mu} $ is the partition function in the canonical ensemble, then $$ U = \langle U \rangle = -\frac{\partial}{\partial \beta} \ln Z \text{ and}\\ \Delta U^2 = \langle U^2 \rangle - \langle U \rangle^2 = \frac{\partial^2}{\partial\beta^2} \ln Z \;. $$ (If you don't know why, prove this by calculating the derivatives.) Can you take it from here?
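Filling in the remaining step (spoiler for the exercise): with $\beta = 1/(k_B T)$, the chain rule turns the $\beta$-derivative from the answer into the temperature derivative:

```latex
% With \beta = 1/(k_B T) we have d\beta/dT = -1/(k_B T^2), so
C_v = \left(\frac{\partial U}{\partial T}\right)_V
    = \frac{\partial U}{\partial \beta}\,\frac{d\beta}{dT}
    = \left(-\frac{\partial^2}{\partial \beta^2}\ln Z\right)
      \left(-\frac{1}{k_B T^2}\right)
    = \frac{\langle U^2\rangle - \langle U\rangle^2}{k_B T^2}.
```

Both minus signs cancel: one from $U = -\partial_\beta \ln Z$ and one from $d\beta/dT$.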
{ "domain": "physics.stackexchange", "id": 44286, "tags": "thermodynamics, statistical-mechanics" }
concatenating the content of list in python
Question: I have a list. list = ['It is a delightfully naive and entertaining movie', 'The songs are boring and dated in 2009', 'was a great movie from genre director Luc Besson'] and I want a result like: list_result = [ It is a delightfully naive and entertaining movie The songs are boring and dated in 2009 was a great movie from genre director Luc Besson] How can I do this? (list_result can also be a corpus.) Answer: result = "" for sentence in list: result += sentence result += " " list_result = [result] Look into list comprehensions if you want a more Pythonic way to do it; the version here is the most understandable one to begin with.
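As a side note, the idiomatic way to do this in Python is str.join, which also avoids the trailing space left by the loop above (the list is renamed to sentences here so it does not shadow the built-in list):

```python
sentences = ['It is a delightfully naive and entertaining movie',
             'The songs are boring and dated in 2009',
             'was a great movie from genre director Luc Besson']

# join with a single space between sentences; no trailing separator
list_result = [' '.join(sentences)]
print(list_result[0])
```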
{ "domain": "datascience.stackexchange", "id": 2288, "tags": "python, pandas" }
Is it possible to use an RNN to predict a feature that is not an input feature?
Question: I came across RNNs a few minutes ago, and they might solve a problem with sequenced data I've had for a while now. Let's say I have a set of input features, generated every second. Corresponding to these input features is an output feature (also available every second). One set of input features does not carry enough data to correlate with the output feature, but a sequence of them most definitely does. I read that RNNs can have node connections along sequences of inputs, which is exactly what I need, but almost all implementations/explanations show prediction of the next word or number in a text sentence or in a sequence of numbers. They predict what would be the next input value, the one that completes the sequence. However, in my case, the output feature will only be available during training. During inference, only the input features will be available. Is it possible to use an RNN in this case? Can it also predict features that are not part of the input features? Thanks in advance! Answer: Is it possible to use an RNN in this case? Can it also predict features that are not part of the input features? Yes. No changes are required to an RNN in order to do this. All you need is correctly labelled data mapping a sequence of $x$ to the correct $y$ in order to train, and of course an RNN architecture which has input vectors matching the shape of $x$ and output vectors matching the shape of $y$. The case where $x$ and $y$ are the same data type is just a special case of RNN design, and not a requirement. You may need to consider some details: If the relationship between $x$ and $y$ is complex and non-linear even accounting for accumulated hidden state during the sequence, you may need to add deeper layers. The output of the LSTM can be some vector $h$ and you can add fully-connected layers to help with predicting $y$ from $h$. This, or adding more LSTM layers, is a choice of hyperparameter that you may want to experiment with.
Start with a basic LSTM to see how that goes first. If you wish to predict a sequence of output features that is either not the same length as the input feature sequence, or logically should come after the whole sequence (think language translation) then this may need a slight change in setup to get best results. For a predict-same-kind sequence you can feed your predicted output value into the next input, but if input and output have different data types, this will not work. Instead, you will need to have some dummy input or other setup for creating sequences of $y$. In your specific case the second point does not seem to apply, as you want to predict a single $y$ immediately after a sequence of $x$.
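To illustrate the point that $y$ need not have the same type or shape as $x$, here is a minimal many-to-one recurrent forward pass in plain Python (untrained, with made-up dimensions; in practice you would use an LSTM layer from a deep-learning framework and learn the weights from labelled data):

```python
import math
import random

random.seed(0)
n_in, n_hidden, n_out, seq_len = 4, 8, 2, 10   # made-up sizes

def rand_matrix(rows, cols):
    return [[random.gauss(0, 0.5) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# randomly initialised weights (training would fit these to labelled data)
W_xh = rand_matrix(n_hidden, n_in)
W_hh = rand_matrix(n_hidden, n_hidden)
W_hy = rand_matrix(n_out, n_hidden)

def predict(xs):
    # consume the whole sequence, then emit a single output vector;
    # the hidden state h carries information across time steps
    h = [0.0] * n_hidden
    for x in xs:
        pre = [a + b for a, b in zip(matvec(W_xh, x), matvec(W_hh, h))]
        h = [math.tanh(p) for p in pre]
    return matvec(W_hy, h)  # y's shape is independent of x's shape

xs = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(seq_len)]
y = predict(xs)
print(len(y))  # 2 output features from a sequence of 4-dimensional inputs
```

The only structural choice that matters here is that the output weight matrix maps the final hidden state to a vector of a different size than the input, which is exactly the asker's setting.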
{ "domain": "ai.stackexchange", "id": 690, "tags": "neural-networks, machine-learning, recurrent-neural-networks" }
Scope of Haloform reaction
Question: We know that methyl ketones, and secondary alcohols oxidizable to methyl ketones (such as isopropanol), give the haloform reaction. The only primary alcohol and aldehyde to undergo this reaction are ethanol and acetaldehyde, respectively. 1,3-Diketones such as acetylacetone also give the haloform reaction. Do terminal alkynes give the haloform reaction? If yes, how does the mechanism proceed? Answer: Terminal alkynes, including phenylacetylene, react with iodine (in MeOH) to give 1,2-diiodoalkenes according to this JOC paper here. Presence or absence of base will not alter the outcome. Presence of AgNO3 gives some di-iodo ketone which can then give rise to the iodoform reaction.
{ "domain": "chemistry.stackexchange", "id": 14000, "tags": "organic-chemistry" }
I have a python program. How can I integrate ROS data in it?
Question: Hi, I have a python program running perfectly. Now I need some data in the program which is being published on a rostopic. In the ROS environment, I know I can get that data using rostopic echo, or save it in a text file. Can I add something to my python program so that it collects data from a ROS topic and uses it? All I want is that every time my program runs, it collects new data from the rostopic and performs its operation on that data. Right now my program is using dummy data and I urgently need to replace it with actual data being published over a rostopic. Urgent, please. Originally posted by Latif Anjum on ROS Answers with karma: 79 on 2013-03-20 Post score: -1 Answer: If this is truly urgent, you should consider reading the basic ROS tutorials. You will probably find Writing a Publisher and Subscriber in Python helpful. Originally posted by joq with karma: 25443 on 2013-03-20 This answer was ACCEPTED on the original site Post score: 4
{ "domain": "robotics.stackexchange", "id": 13454, "tags": "ros, python" }
.Net Core 6.0 : Entity Framework - Either Or / Neither Nor / Both in Where clause
Question: I am building a .Net Core 6.0 application that uses Entity Framework. I have a situation where I need to apply a filter (Where clause) based on two properties (e.g. Guid? skillType, string skillName). These properties may or may not have values, meaning that: neither property has a value; either one of the properties has a value; or both properties have values. How do I construct the Where clause? I tried something like this: var paginationListResult = await this.DbContext.Skills .Where(st => skillType == null || st.SkillTypeId == skillType) .Where(st => string.IsNullOrEmpty(skillName) || st.Name == skillName) .Include(sk => sk.SkillType) .Include(sk => sk.Regroupment) .Include(sk => sk.SkillRoleRequirements) .OrderBy(s => s.Name) .ToPaginateListAsync(pageIndex, itemsPerPage, default(CancellationToken)).ConfigureAwait(false); Is there a better way to handle the situation? Answer: Since you're using Entity Framework, you can use the null-coalescing operator to fall back to the property itself instead of the given value. This would be translated into the COALESCE function, which is a standard SQL function supported by most database providers. Example: .Where(st => skillType == null || st.SkillTypeId == skillType) .Where(st => string.IsNullOrEmpty(skillName) || st.Name == skillName) would be modified to this: .Where(st => st.SkillTypeId == (skillType ?? st.SkillTypeId)) .Where(st => st.Name == (skillName ?? st.Name)) In SQL, it would be translated into something similar to this query: SELECT * FROM table WHERE SkillTypeId = COALESCE(@skillType, SkillTypeId) AND Name = COALESCE(@skillName, Name) So, the above query will first check whether @skillType is null; if it is, it will take the current value of SkillTypeId and compare it to itself, which has the same effect as 1 == 1. The same applies to Name.
This approach only works with the null-coalescing operator, and because of that, you will need to reassign null to the value (rather than a default value, or an empty or whitespace string) before you pass it to the query. Something like this would be applicable: skillName = string.IsNullOrWhiteSpace(skillName) ? null : skillName; var paginationListResult = await this.DbContext.Skills .Where(st => st.SkillTypeId == (skillType ?? st.SkillTypeId)) .Where(st => st.Name == (skillName ?? st.Name)) .Include(sk => sk.SkillType) .Include(sk => sk.Regroupment) .Include(sk => sk.SkillRoleRequirements) .OrderBy(s => s.Name) .ToPaginateListAsync(pageIndex, itemsPerPage, default(CancellationToken)).ConfigureAwait(false);
{ "domain": "codereview.stackexchange", "id": 43716, "tags": "c#, asp.net-core, .net-core, entity-framework-core, asp.net-core-webapi" }
Bounds on "well dispersed" sparse matrices
Question: Suppose we have an $n\times n$ zero/one matrix $M$, with $k$ ones. Let us say that the extent of $M$ is the maximum of $i+j$ over all ones at positions $(i,j)$ of the matrix, and the quality $q(M)$ is the minimum of the extent of $PMQ$ taken over all permutation matrices $P$ and $Q$. Let $f(n,k)$ be the maximum of $q(M)$ over all matrices $M$ with $k$ ones. What bounds can we give on $f(n,k)$? Specifically, I am interested in finding whether there are any matrices $M$ with $k=O(n)$, such that $q(M)\ge (1+\epsilon)n$ for some $\epsilon>0$, i.e. $f(n,k)\ge (1+\epsilon)n$. Is it possible to achieve $f(n,k)\ge (1+\epsilon)n$ for $k=O(n)$? Less formally, we have a grid that we can fill with a small number of dots, and the adversary is allowed to permute rows and columns in order to prevent the bottom corner from being filled. For example, I can have a matrix like \begin{matrix} \bullet&\cdot&\cdot&\cdot&\bullet\\ \cdot&\bullet&\cdot&\cdot&\bullet\\ \cdot&\cdot&\bullet&\cdot&\bullet\\ \cdot&\cdot&\cdot&\bullet&\bullet\\ \bullet&\bullet&\bullet&\bullet&\cdot\\ \end{matrix} and the adversary can permute this to \begin{matrix} \cdot&\bullet&\bullet&\bullet&\bullet\\ \bullet&\cdot&\cdot&\cdot&\bullet\\ \bullet&\cdot&\cdot&\bullet&\cdot\\ \bullet&\cdot&\bullet&\cdot&\cdot\\ \bullet&\bullet&\cdot&\cdot&\cdot\\ \end{matrix} so this matrix gets a quality of $7$. We want to distribute the dots so the quality is as large as possible. Here are some observations: $f(n,k)\le 2n$ trivially. $f(n,k)=2n$ only for $k=n^2$. This is because if $M$ has fewer than $n^2$ ones, then there is a spot containing a zero, which can be moved to the corner by a permutation. $f(n,n)=n+1$. The identity matrix has $n$ ones, and all permutations of the identity matrix are permutation matrices. The best the adversary can do on a permutation matrix is put all the ones on the antidiagonal, which has $i+j=n+1$ for each of its ones.
$f(n,n+m^2-m)\ge n+m$: Similar to the above, where we use an $m\times m$ block of ones followed by ones on the diagonal. Permutations cannot break up the block, so the best option for the adversary is to put it on the antidiagonal, where it will extend out beyond the antidiagonal by $m$, like so: \begin{matrix} \cdot&\cdot&\cdot&\cdot&\bullet\\ \cdot&\cdot&\cdot&\bullet&\cdot\\ \bullet&\bullet&\bullet&\cdot&\cdot\\ \bullet&\bullet&\bullet&\cdot&\cdot\\ \bullet&\bullet&\bullet&\cdot&\cdot\\ \end{matrix} Answer: The answer to the main question is yes, and the key to the construction is the notion of an expander graph. A (bipartite $d$-left-regular) expander graph between vertices $\mathcal{I}$ and $\mathcal{O}$ (the bipartition), where $|\mathcal{I}|=|\mathcal{O}|=n$, with expansion factor $c>0$, is a graph which is $d$-left-regular, meaning that each $x\in \mathcal{I}$ has degree $d$, such that for all subsets $A\subseteq \mathcal{I}$ such that $|A|\le n/2$, the set $B\subseteq\mathcal{O}$ of neighbors of elements of $A$ has $|B|\ge (1+c)|A|$. There are many proofs of existence of expander graphs like this by the probabilistic method and some deterministic methods as well; for the present purpose we only need that expander graphs as defined above exist for some constant $c>0$ independent of $n$. Given such a graph, we can construct an $n\times n$ matrix from the adjacency matrix of the graph, with $\mathcal{I}$ for the rows and $\mathcal{O}$ for the columns. This matrix has exactly $k=dn=O(n)$ ones, because of the $d$-left-regular property, and the expander property is unaffected by any permutations performed by the adversary. Now for $\alpha>0$, let $A$ be the last $\alpha n$ elements of $\mathcal{I}$ and let $C$ be the last $\alpha n$ elements of $\mathcal{O}$, after the adversary has chosen an ordering of the rows and columns.
If there is no $1$ in this square of the matrix, then there must correspondingly be no edge between $A$ and $C$, and hence the neighbors $B$ of elements of $A$ must satisfy $B\subseteq C^c$, so by the expander graph property (which applies directly when $\alpha\le 1/2$, and applies to a size-$n/2$ subset of $A$ to give the same contradiction otherwise) this implies $$(1+c)|A|=(1+c)\alpha n\le |B|\le |C^c|=(1-\alpha)n\implies \alpha\le\frac{1}{2+c}.$$ Thus any empty bottom-right square must be smaller than $n/(2+c)\times n/(2+c)$, so in the language of the OP, $$f(n,k)\ge 2n\left(1-\frac{1}{2+c}\right)=(1+\epsilon)n$$ where $\epsilon=\dfrac{c}{2+c}>0$.
{ "domain": "cs.stackexchange", "id": 13150, "tags": "graphs, optimization, asymptotics, combinatorics" }
Complexity of covering subset of the monoid $(\{0,1\}^n, \text{OR})$
Question: (At the very bottom of this, I will shortly describe the motivation for this question.) Assume we have a commutative monoid $(G,\circ)$, i.e. a set $G$ with a commutative binary operation $\circ$ that satisfies associativity and the existence of a neutral element. The task is as follows: given some $X \subseteq G$, find a smallest $U \subseteq G$ such that $X \subseteq \langle U \rangle$, where $\langle U \rangle$ is the submonoid generated by the elements of $U$. Now, if instead of a monoid we had a vector space, the task would be simple: determine the subspace spanned by $X$, and find a base for it. However, this relies on vectors having inverses, which we can't assume to exist in our monoid. The monoid can be assumed to be fixed, i.e. we don't have to receive it as input, and the only input we receive is $X$. To be more precise, the set $G$ can implicitly be assumed to be $\{0,1\}^n$ for some $n$, and $\circ$ is the component-wise $\max$ operation, which is the same as $\{\text{true}, \text{false}\}^n$ with component-wise $\text{OR}$. As an example, assume $n=4$ and we want to cover $$X= \{(0,1,0,1), (1,1,0,0),(1,0,1,0),(1,1,0,1),(1,1,1,1)\}$$ then one optimal solution would be $\{(0,1,0,1), (1,0,1,0), (1,1,0,0)\}$. Question: what is the complexity of this problem? I have a feeling it's somewhere between a covering problem and a hull problem, but I can't quite put my finger on whether I've come across a variant of this before or not. The problem arose when a colleague of mine played around with a building block used in LED clocks. They have the structure _ |_| |_| and each of those seven lines has its own control wire. We only want to use $10$ of the possible $2^7$ configurations of those control wires, so we tried to figure out whether we can build all $10$ configurations using less than $7$ primitives via OR-ing. Since OR over binary vectors doesn't come with inverses, we only have a monoid. Answer: L. J. 
Stockmeyer proves that SET BASIS is $\mathrm{NP}$-complete. Reference: L.J. Stockmeyer, The set basis problem is $\mathrm{NP}$-complete, Tech. Report RC-5431, IBM, 1975. Each vector in your problem can be seen as a subset of $\{1, 2, \dots, n\}$, and the component-wise OR operation corresponds to set union. So a basis for a collection of Boolean vectors in your problem is exactly a set basis.
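Since the problem is NP-complete, exhaustive search is the honest baseline for tiny instances like the LED-clock one; the sketch below (helper names are mine, vectors encoded as bitmasks so that OR = union) confirms the optimal size of 3 for the example in the question:

```python
from itertools import combinations

def spanned(U):
    # all ORs of subsets of U (the submonoid generated by U, plus 0)
    gen = {0}
    for u in U:
        gen |= {g | u for g in gen}
    return gen

def min_basis_size(X, n):
    # exhaustive search over U subset of G = {0,1}^n, smallest k first
    for k in range(1, len(X) + 1):
        for U in combinations(range(1 << n), k):
            if set(X) <= spanned(U):
                return k

# X from the question, components encoded as bits (bit i = coordinate i)
def enc(v):
    return sum(b << i for i, b in enumerate(v))

X = [enc(v) for v in [(0,1,0,1), (1,1,0,0), (1,0,1,0), (1,1,0,1), (1,1,1,1)]]
print(min_basis_size(X, 4))  # → 3
```

Note that $|U|=2$ can never work here: subsets of a 2-element $U$ give at most three distinct nonzero ORs, while $X$ has five distinct elements.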
{ "domain": "cs.stackexchange", "id": 12176, "tags": "complexity-theory, optimization, algebra" }
Static helper function to set a property on another object
Question: A situation has come up in which we have a block of code that will be repeated in multiple files, and this block of code has to: Create some object based on the values of other controls/properties. Set a property on another object to be the object from #1. Here is an example of what this looks like: Repeated code before refactoring (TypeScript) public class PageViewModel { private _name: TextBox; private _type: TextBox; private _timestampFormat: RadioButton; .. // This should generate a data model out of the current state of its controls. public generateDataModel(): IDataModel { // Here will be some logic to generate a property value based on // the radio button value and some other conditions. let timestampProperty; if (this._radioControl.value() === 1) { timestampProperty = { format: "foo" }; } else if(this._radioControl.value() === 2) { timestampProperty = { format: "bar" }; } var dataModelObject = <IDataModel>{ name: this._name.value(), type: this._type.value(), timestamp: timestampProperty }; return dataModelObject; } } Problem The generateDataModel function will be present on more than one view model, all of which will generate the timestamp field from the radio control with that logic, so code will be repeated. How it was solved A helper method was extracted to do both #1 and #2 from above. This method is being called with references to both the control and the IDataModel object being created. 
Helpers.ts: public static setTimestampRadioButton(radioControl: RadioButton, dataModelObject: IDataModel): void { let timestampProperty; if (radioControl.value() === 1) { timestampProperty = { format: "foo" }; } else if(radioControl.value() === 2) { timestampProperty = { format: "bar" }; } dataModelObject.timestamp = timestampProperty; } PageViewModel.ts: public generateDataModel(): IDataModel { var dataModelObject = <IDataModel>{ name: this._name.value(), type: this._type.value() }; Helpers.setTimestampRadioButton(this._radioControl, dataModelObject); return dataModelObject; } I have some concerns with this approach from the point of view of readability, and also because I think Helpers.ts knows too much about IDataModel and how it is used. My questions: Is this a good way to solve the problem of repetition here? Why / why not? If not, is there a better way you'd suggest? What do you think of the pattern static setSomethingOnSomethingElse(thing1, thing2)? Answer: I would argue that setTimestampRadioButton can be improved in three ways. Side effects. By mutating the passed dataModelObject, the function immediately makes any function that uses setTimestampRadioButton harder to understand as it magically sets a property name that the calling function did not specify. Typing. Mutating the argument of the function forces you to create a Partial<IDataModel> (or cast an object which is not an IDataModel to IDataModel). This is not ideal. Alternative property names. Currently only IDataModel uses the format from a RadioButton; what happens when ITimeModel needs the timestamp format stored in a format property? We are now back to the original problem. I would instead recommend creating a method to get the expected format.
public static getTimestampFormat(radioControl: RadioButton): IFormat { if (radioControl.value() === 1) { return { format: "foo" }; } else if(radioControl.value() === 2) { return { format: "bar" }; } return { format: "" }; } This simplifies your generateDataModel method even more. public generateDataModel(): IDataModel { return { name: this._name.value(), type: this._type.value(), timestamp: Helpers.getTimestampFormat(this._radioControl) }; } Instead of a static helper method, you might also consider adding the method as an instance method of the RadioButton class, or creating a child of the RadioButton class with a TimestampRadioButton class. A couple other general things: Avoid casting like the plague. It is almost as bad as using the any type as it hides potential problems in a very similar way. I can't think of any instance in which adding Object or data to a variable name makes it more descriptive. dataModelObject doesn't really tell you any more than model.
{ "domain": "codereview.stackexchange", "id": 27256, "tags": "typescript, static, helper" }
Best text books for reference (dynamic-electromagnetic systems)?
Question: For modeling, design and optimization purposes relative to electrodynamic systems, what text book(s) would be ideal? I'm a mechanical engineer, and during my undergrad I used David J. Griffiths' Introduction to Electrodynamics, which was very insightful and seemed inclusive and interesting (I read it all beyond the course's requirements), and use it for reference. However, should I consider other texts? I've come across Jackson's book, and honestly...I wasn't able to learn much beyond the first chapter. Answer: Here's my attempt to make an answer from the extended comments. Physics texts like Jackson or Griffiths may not be much help w/r to modeling and simulation, design and optimization studies for systems. They are meant to teach the first principles approach to understanding fundamental physics. They will rely heavily on exact solutions, special function expansion and abstractions of problems with a bent towards understanding grand ultimate truths about nature. If you are looking for modeling and simulation resources you may need to go outside physics and search for specialized numerical methods for various modeling paradigms. For solving the field equations with boundary conditions you have a variety of approaches: Exact solutions, method of images Finite difference and finite difference - time domain Finite Element Method Special function expansions Boundary element method or method of moments. Each has its virtues and difficulties. And there are some subtleties that you won't learn about in the textbook lit. For RF I would recommend the following: Computational Electromagnetics for RF and microwave Engineering by David Davidson The Method of Moments in Electromagnetics by Walton Gibson Field Computation by Method of Moments RF Engineering for Wireless Networks by Dobkin Radar Cross Section by Knott et al Additional good references for modeling and simulation of almost anything: A first course in Numerical Analysis of Differential Equations by A.
Iserles Numerical Recipes by W. Press et al Methods of Theoretical Physics by Morse and Feshbach You can tell by the titles what kind of work I do but don't be turned off. They are all pretty general in the physics and treatment of modeling but the examples are clearly industry specific. Despite being fairly complete there are pitfalls. Davidson's book covers almost every technique at the surface but some presentations are unstable (and there is a disclaimer), being presented for "educational purposes". There is active research on developing fast, stable methods. One that comes to mind for solving the time-dependent Maxwell's equations is Yee's method, now a very famous staple in numerics. You can find the original article online for free. I hope that helps.
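To make "Yee's method" concrete, here is a minimal 1D staggered-grid FDTD sketch in normalized units (purely illustrative; the grid size, Courant number and Gaussian source are arbitrary choices of mine, not anything from the references above):

```python
import numpy as np

nx, nt = 200, 150
ez = np.zeros(nx)   # E field sampled on integer grid points
hy = np.zeros(nx)   # H field sampled on the staggered half-step grid
S = 0.5             # Courant number; S <= 1 keeps the 1D scheme stable

for t in range(nt):
    hy[:-1] += S * (ez[1:] - ez[:-1])          # update H from the curl of E
    ez[1:]  += S * (hy[1:] - hy[:-1])          # update E from the curl of H
    ez[50]  += np.exp(-((t - 30) / 10) ** 2)   # soft additive Gaussian source

# the leapfrog scheme stays bounded, and the pulse has propagated away
print(np.isfinite(ez).all())  # → True
```

The two staggered updates combine into the discrete wave equation, which is the whole trick of the Yee scheme; the pulse launched at cell 50 travels at $S$ cells per step in both directions.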
{ "domain": "physics.stackexchange", "id": 54427, "tags": "electromagnetism, resource-recommendations, classical-electrodynamics" }
Why is Larry Niven's Ringworld Unstable?
Question: In his 1970 science fiction novel Ringworld, author Larry Niven describes the eponymous Ringworld, a gigantic structure shaped as a ring with a radius of around 1 AU, rotating around a star in the center of the ring. This system is described as physically stable; however many readers have complained that it is actually unstable and the structure will drift away in time. My first thought reading this is that the Ringworld's center of mass is identical to the star's center of mass, so the system should be stable. Why is it unstable? Some additional details of the structure: Radius: ~1 AU Mass: ~1 Solar Mass Year duration: ~220 hours The Ringworld is also made of material strong enough to withstand the stresses affecting it in such a system. Answer: It is only for a spherically symmetric shape that you can treat an extended body as if it were a point mass at the CoM. The Ringworld is stable against axial displacements, after which it will gently bob back and forth around the star, but unstable against transverse (in-plane) ones, because the gravitational attraction of the near side is greater than that of the far side.
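Both signs can be checked numerically by integrating the star's pull around a displaced rigid ring (a sketch in units $GM\lambda = 1$, $R = 1$; the helper names are mine):

```python
import numpy as np

def inplane_force_x(d, R=1.0, n=200_000):
    # net x-force on a rigid ring whose centre is displaced by d in its
    # own plane; the star sits at the origin
    th = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y = d + R * np.cos(th), R * np.sin(th)
    return float(np.mean(-x / (x * x + y * y) ** 1.5)) * 2.0 * np.pi * R

def axial_force_z(z, R=1.0, n=200_000):
    # net z-force on a ring displaced by z along its axis of symmetry
    th = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y = R * np.cos(th), R * np.sin(th)
    return float(np.mean(-z / (x * x + y * y + z * z) ** 1.5)) * 2.0 * np.pi * R

print(inplane_force_x(0.1) > 0)  # → True: force points along the displacement (unstable)
print(axial_force_z(0.1) < 0)    # → True: force opposes the displacement (stable)
```

Expanding the in-plane integrand to first order in $d$ gives a net force $\propto +d/2$, i.e. the displacement grows, which is exactly the near-side-wins argument above.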
{ "domain": "physics.stackexchange", "id": 32156, "tags": "newtonian-mechanics, newtonian-gravity, orbital-motion, celestial-mechanics, stability" }
robot_self_filter + rgb pointcloud
Question: Hi, if I use robot_self_filter to filter points from a Kinect colored pointcloud, the resulting output is without color. Is it an error or a feature? Is there a way to filter a colored pointcloud without losing color? I'm using ROS Electric. Thanks. Originally posted by ZdenekM on ROS Answers with karma: 704 on 2012-07-25 Post score: 1 Answer: I would consider that a bug. Best if you file a ticket for that. Btw. you could also try the realtime_urdf_filter package that should be much faster since it is based on OpenGL rendering. Originally posted by Lorenz with karma: 22731 on 2012-07-26 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 10369, "tags": "kinect, pointcloud, ros-electric" }
What is $dm$ in the center of mass formula in the $y$-direction $y_{cm} = \frac{\int y\ dm}{\int dm}$?
Question: I want to figure out the COM (Center of Mass) of an infinitesimally thin semicircular ring of uniform density with mass $M$ and radius $R$. I begin to draw the following picture: As I have understood it, the typical way to go about solving these kinds of problems is to try and think of the rigid body as consisting of an infinite number of point particles. Each particle will have a COM $m_{i}\bar{r_{i}}$ with $\bar{r_{i}}$ being the position vector of the COM for particle $i$. By summing up all of these infinite COMs and dividing by the total mass of the particles $M$, we obtain the COM of a system of particles. The equations for the COM in the $x$- and $y$-directions are $$ x_{cm} = \frac{\int x\ dm}{\int dm},\ \ \ \text{The position vector is }x = R\cos(\theta) $$ $$ y_{cm} = \frac{\int y\ dm}{\int dm},\ \ \ \text{The position vector is }y = R\sin(\theta), $$ but what is $dm$? It feels reasonable that $dm$ is the mass of the point particle whose location is given by the position vector $\bar{r_{i}}$. How do I figure out what that mass is? Answer: Perhaps it would be better to think of $dm$ as an infinitesimal mass element instead of a point particle. Usually you use the density to write $\text dm$ in terms of an infinitesimal volume/area/distance which you can integrate. Depending on the number of dimensions you get the following expressions for $\text dm$ \begin{align} \text dm&=\rho(x)\text dV&\text{in 3D}\\ \text dm&=\sigma(x)\text d A& \text{in 2D}\\ \text dm&=\lambda(x)\text ds&\text{in 1D} \end{align} Where $\rho,\sigma, \lambda$ are the density, area density and linear density respectively (sometimes different notation is used). Your problem is one dimensional and has uniform density so we can put $\text dm=\lambda \text ds$ where $\lambda$ is now constant. Before we can calculate $\text ds$ we first have to parametrize the domain.
Define $s$ as the path length along the arc $$s(\theta)=R\theta\implies\text ds=R\text d\theta$$ We can now write the integral as \begin{align} x_\text{COM}&=\dfrac{\int \lambda(R\text d\theta)(R\cos\theta)}{\int \lambda(R\text d\theta)} \end{align}
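Carrying the integral out numerically confirms the well-known result $x_{cm}=0$, $y_{cm}=2R/\pi$ for the semicircular ring; a midpoint-rule sketch (note the constant $\lambda R$ cancels between numerator and denominator, so only $d\theta$ survives):

```python
import numpy as np

R = 1.0
n = 1_000_000
theta = np.linspace(0.0, np.pi, n, endpoint=False) + np.pi / (2 * n)  # midpoints
dtheta = np.pi / n
# dm = lambda * R * dtheta; lambda * R cancels, leaving integrals over theta
x_cm = np.sum(R * np.cos(theta) * dtheta) / np.pi
y_cm = np.sum(R * np.sin(theta) * dtheta) / np.pi
print(x_cm, y_cm)  # x_cm ≈ 0 by symmetry, y_cm ≈ 0.63662 = 2R/pi
```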
{ "domain": "physics.stackexchange", "id": 84034, "tags": "newtonian-mechanics, mass" }
Vector-valued function for 2 point charges
Question: I am trying to plot such a vector field, but I'm not getting the function behind it. My best guess for $E: \vec{r} = \left(\begin{array}{cc}x\\y\end{array}\right) \rightarrow \mathbb{R}^2, \quad E(x,y) = \underbrace{\dfrac{\mathrm{c}\cdot Q_1}{\vert\vec{r}\vert^3}\cdot \left(\begin{array}{cc}x\\y\end{array}\right)}_{Q_1^+} + \underbrace{\dfrac{\mathrm{c}\cdot Q_2}{\vert\vec{r}\vert^3}\cdot \left(\begin{array}{cc}x\\y\end{array}\right)+\left(\begin{array}{cc}2\\0\end{array}\right)}_{Q_2^+ \quad\text{displaced}} $ But the plot ain't right... Answer: The electric field at a position $\mathbf x$ caused by a point charge $q$ at point $\mathbf x'$ is given by $$ \mathbf E = \frac{q}{r^2}\hat{\mathbf r} $$ (in Gaussian units), where $r=|\mathbf x-\mathbf x'|$ is the distance between the two points, and $\hat{\mathbf r} = (\mathbf x-\mathbf x')/r$ is the unit vector that points from $\mathbf x'$ to $\mathbf x$. If you then multiply the denominator and numerator by $r$ you can transform this into $$ \mathbf E = \frac{q}{r^3}{\mathbf r}, $$ where $\mathbf r=\mathbf x-\mathbf x'$ is the separation vector between the field point and the charge. If you implement this correctly, then the plot will appear as required. On the other hand, if you input the wrong quantities, then that won't work. Particular points to pay attention to are checking that the distances $r$ between the point and the charge are correct, that you're using the correct separation vectors, and that you are distributing correctly when you multiply $(\mathbf x-\mathbf x')$ by a scalar.
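In code form, the fix is to build the full separation vector for each charge separately (a sketch with the constant set to 1; the question's setup puts $Q_2$ at $(2,0)$):

```python
def E_field(x, y, charges):
    # charges: list of (q, xs, ys); the field of each charge uses its OWN
    # separation vector r = x - x', not a shared one with an offset added
    Ex = Ey = 0.0
    for q, xs, ys in charges:
        rx, ry = x - xs, y - ys
        r3 = (rx * rx + ry * ry) ** 1.5   # |r|^3
        Ex += q * rx / r3
        Ey += q * ry / r3
    return Ex, Ey

# two equal positive charges at (0,0) and (2,0): the field cancels midway
print(E_field(1.0, 0.0, [(1.0, 0.0, 0.0), (1.0, 2.0, 0.0)]))  # → (0.0, 0.0)
```

Evaluating this on a grid and feeding the components to a quiver-style plot gives the expected two-charge field pattern.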
{ "domain": "physics.stackexchange", "id": 79041, "tags": "electrostatics, electric-fields, charge" }
Elimination reaction
Question: In this question I understand that we have to break a C–N bond. But which one to choose? Answer: This is an example of Cope Elimination where the base takes up hydrogen from the least sterically hindered Carbon atom to avoid crowding in the Transition state. Hence, the Hofmann alkene is obtained and ethene is formed. The reaction is not an example of E1cb since the Leaving Group is a stable alkene whereas the basic requirement for a reaction to be categorised as E1cb is that the leaving group should be a weak base.
{ "domain": "chemistry.stackexchange", "id": 13553, "tags": "reaction-mechanism" }
Seeking Alternate Proof Regarding Closure Of Recursively Enumerable Languages
Question: So I would like to show that the class of Recursively Enumerable languages are closed under the shrink operation. In other words, $\text{shrink}_a(L) = \{\text{shrink}_a(w)\mid w\in L\}$ and where $\text{shrink}_a(w)$ is the string formed from $w$ by replacing every maximal substring of two or more $a$'s by a single a. For example, $\text{shrink}_a(baaab) = bab$. So I was browsing around for other examples to study, and I came across the following proof for the prefix operation: Proving that recursively enumerable languages are closed against taking prefixes (the proof given by the user Wu Yin). I thought that this was a very cool way of proving something like this, instead of just directly building an alternate TM. I'm curious to know, can anyone come up with a proof that is of a similar style and flavor to the one pointed above? I would be very curious to see a similar bijective proof regarding countable/uncountable sets!! This has reminded me that there can be many ways to prove something, so I wanted to see what kind of flavor other people's proofs might have to this sample problem. I find that too often, students (and myself included) get caught up in a single procedure for finding solutions to a particular type of problem and neglect to see other ways of showing the same result. Answer: For any kind of a serious argument about computably enumerable sets, you need to free yourself of details concerning Turing machines. So you should look for general useful principles that allow you to prove properties of c.e. sets. Here is one: If $A$ is computably enumerable and $f$ is computable then $f(A)$ is computably enumerable. Using this, we immediately get: The set $\mathrm{shrink}_a(L)$ is computably enumerable because it is the image of the computably enumerable set $L$ by the computable function $\mathrm{shrink}_a$. 
Other such useful principles are (note that I am using the new terminology in which "recursive" is replaced by "computable"): computable sets are computably enumerable c.e. sets are closed under finite intersections c.e. sets are closed under inverse images by computable functions c.e. sets are closed under computably enumerable unions c.e. sets are closed under projection: if $E \subseteq \mathbb{N} \times \mathbb{N}$ is a c.e. set, then so is its projection $\lbrace n \in \mathbb{N} \mid \exists m \in \mathbb{N} . (n,m) \in E\rbrace$ a set is c.e. if and only if, it is the domain of a computable (partial) function a set is c.e. if and only if, it is the image of a computable (partial) function With these you can get pretty far.
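The principle "the image of a c.e. set under a computable function is c.e." is also easy to see operationally: enumerate $L$ and apply $\mathrm{shrink}_a$ to each string as it appears. A sketch (the enumerator `some_L` is a hypothetical stand-in for whatever enumerates $L$):

```python
import re
from itertools import islice

def shrink_a(w, a="a"):
    # replace every maximal run of two or more a's by a single a
    return re.sub(re.escape(a) + "{2,}", a, w)

def shrink_enumeration(enumerate_L):
    # the image of a c.e. set under a computable function is c.e.:
    # just map the enumeration through the (total, computable) function
    for w in enumerate_L():
        yield shrink_a(w)

def some_L():  # stand-in enumerator for a c.e. language
    yield from ["baaab", "aab", "b", "aaaa"]

print(shrink_a("baaab"))                            # → 'bab'
print(list(islice(shrink_enumeration(some_L), 4)))  # → ['bab', 'ab', 'b', 'a']
```

Because $\mathrm{shrink}_a$ is total and computable, the mapped enumeration never stalls, which is exactly why the image is computably enumerable.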
{ "domain": "cs.stackexchange", "id": 709, "tags": "formal-languages, computability, proof-techniques, closure-properties" }
Light travels in a medium
Question: According to Snell's law: $${n_1 \over n_2} = {v_2 \over v_1}$$ $v_2 = v_1 n_1 / n_2$ Assuming that $n_1$ is vacuum, we will find the following equation: $$v = c / n$$ (We may find the same equation directly according to the definition of refractive index: $v = c /n$) Theoretically, according to this equation, in a certain medium the speed of light can become small and even 0. 1- Under these conditions, will the light be at rest? 2- If true, a viewer in such a medium can move faster than the speed of light. Theoretically, is such an occasion possible? Under certain conditions, is it possible to move faster than light? Answer: Light will never be completely at rest, but we have succeeded in slowing it down significantly. (See this for example) In a medium, particles can move faster than the speed of light. (The speed of light in that medium) In fact, this is used in some particle accelerators to detect certain particles. When a charged particle travels faster than the speed of light in a medium, it emits Cherenkov Radiation. (See this for a brief explanation)
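Putting numbers to $v = c/n$ and to the Cherenkov condition (a sketch; the refractive indices are typical textbook values):

```python
c = 299_792_458.0  # speed of light in vacuum, m/s

def phase_speed(n):
    return c / n   # v = c / n

# Cherenkov condition: a particle moving at beta*c in a medium of
# refractive index n radiates when beta*c > c/n, i.e. beta > 1/n
for name, n in [("vacuum", 1.0), ("water", 1.33), ("glass", 1.5)]:
    print(name, phase_speed(n), "m/s")

beta = 0.99
print(beta > 1 / 1.33)  # → True: a 0.99c particle radiates in water
```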
{ "domain": "physics.stackexchange", "id": 23408, "tags": "homework-and-exercises, special-relativity, waves, speed-of-light, faster-than-light" }
Making CRUD abstracted class in PHP
Question: I did a working CRUD abstracted class, and I am not sure that this is a good way for it, so I would like to hear your opinion. protected function properties(){ $properties = array(); foreach (self::$table_fields as $db_field){ if(property_exists($this, $db_field)){ $properties[$db_field] = $this->$db_field; } } return $properties; } public function create(){ $db = db::getConnection()->conn; $properties = $this->properties(); $key = implode(",",array_keys($properties)); $value = implode(",:",array_keys($properties)); $sql = "INSERT INTO ".self::$table."(" . $key . ") "; $sql.= "VALUES (:" . $value . ")"; $stmt = $db->prepare($sql); foreach($properties as $key=>$value){ $stmt->bindParam($key,$value); } if($stmt->execute()){ print_r($stmt); $this->id = db::the_insert_id(); return true; }else{ print_r($stmt); return false; } return $stmt; } This is working well: I am getting TRUE as response and it creates a user in database, but I am just not sure about it. Answer: Format all identifiers in order to avoid a syntax error. Assuming it's mysql, use backticks for the purpose: $key = "`".implode("`,`",array_keys($properties))."`"; You are going to repeat the second half of the code in the every crud method. To avoid that, add a query() method to your db class public function query($sql, $params = []) { $stmt = self::getConnection()->conn->prepare($sql); $stmt->execute($params); return $stmt; } so you'll be able to make your crud methods much more concise. public function create(){ $properties = $this->properties(); $key = "`".implode("`,`",array_keys($properties))."`"; $value = implode(",:",array_keys($properties)); $sql = "INSERT INTO `".self::$table."` (" . $key . ") "; $sql.= "VALUES (:" . $value . 
")"; db::query($sql, $properties); $this->id = db::the_insert_id(); } There is no point in returning true or false from such a method, the only reason for it to return false is a database error and such an error should be thrown in the form of Exception and handled elsewhere, making all this true false stuff rather useless. Your current setup is following Active Record pattern, making a data object to contain all the database interaction related code. Consider switching to Data Mapper pattern where you have two classes - your data class and a mapper that is responsible for all the database interactions.
{ "domain": "codereview.stackexchange", "id": 29209, "tags": "php, pdo, crud" }
Solving EOMs for massless particle in a potential
Question: As discussed here and within, the Lagrangian for a massless particle, using the $(-,+,+,+)$ metric signature, is $$L = \frac{\dot{x}_\mu \dot{x}^\mu}{2e} - V,\tag{1}$$ where $\dot{x}^\mu := \frac{dx^\mu}{d\lambda}$ is the velocity, $\lambda$ is some worldline parameter, $e$ is the auxiliary einbein and $V$ is the potential. The EL equations give us the EOMs $$\dot{x}_\mu \dot{x}^\mu = 0,\tag{2}$$ $$\ddot{x}^\mu + \Gamma^\mu_{\sigma\rho} \dot{x}^\sigma \dot{x}^\rho - \frac{\dot{e}\dot{x}^\mu}{e} + e\partial^\mu V = 0,\tag{3}$$ where $\Gamma^\mu_{\sigma\rho}$ are the Christoffel symbols of the metric $\eta_{\mu\nu}$ for some choice of coordinates. After this, I'm not sure how to proceed, for the following reasons. In the $V=\text{constant}$ case, the system is underdetermined, and we are free to choose some $e$, such as setting $e=1$. We then get the consistent EOMs $$\dot{x}_\mu \dot{x}^\mu = 0,\tag{4}$$ $$\ddot{x}^\mu + \Gamma^\mu_{\sigma\rho} \dot{x}^\sigma \dot{x}^\rho = 0.\tag{5}$$ In the general case, however, we seem to end up with the two EOMs being inconsistent. For example, suppose we use Cartesian coordinates and a $z$-direction potential such as $$V = z,\tag{6}$$ choosing initial conditions satisfying the null velocity condition such as $\dot{x}^\mu \left(0\right) = \left(1,0,0,1\right)$. The EOM for the coordinates becomes $$\ddot{x}^\mu = \left(0,0,0,-1\right),\tag{7}$$ yielding $\dot{x}^\mu \left(\lambda\right) = \left(1,0,0,1-\lambda\right)$, which fails to satisfy the null velocity condition for $\lambda \neq 0$. Answer: It is usually implicitly assumed that OP's Lagrangian (1) for the massless relativistic point particle exhibits world-line (WL) reparametrization covariance, cf. point 4 in my related Phys.SE answer here. This puts strong conditions on the potential $V$, i.e. the 1-form $V\mathrm{d}\lambda$ should be a WL invariant. One allowed potential is an E&M background $V=-qA_{\mu}(x)\dot{x}^{\mu}$. OP's example (6) is not allowed.
{ "domain": "physics.stackexchange", "id": 90765, "tags": "special-relativity, lagrangian-formalism, potential, action, point-particles" }
Elastic collision in 2 dimension: Simulation
Question: I am writing an MD simulation of an ideal gas. To solve for the final velocities of two particles that collide, I came across the following code on the Internet: // they are just touching. find vector // separating their centers and normalize it. //m and m2 are colliding at this moment. m.dx represents velocity of molecule m in x direction. Similarly for m2.dx. sx = m.x-m2.x; sy = m.y-m2.y; double sxynorm = java.lang.Math.sqrt(sx*sx+sy*sy); double sxn = sx/sxynorm; double syn = sy/sxynorm; // find velocity of center of mass double totmass = m.mass + m2.mass; double comdx = (m.mass*m.dx+m2.mass*m2.dx)/totmass; double comdy = (m.mass*m.dy+m2.mass*m2.dy)/totmass; double pn = (m.dx-comdx)*sxn + (m.dy-comdy)*syn; double px = 2*sxn*pn; double py = 2*syn*pn; // subtracting this vector from m's momentum m.dx -= px; m.dy -= py; // adjusting m2's momentum so that total momentum // is conserved double mult = m.mass/m2.mass; m2.dx += px*mult; m2.dy += py*mult; I come from a computer science background, and studied a bit of collision theory in high-school that involved the center of mass frame. The above code seems to be using it but I can't decipher what exactly it is doing. Can someone help me? Answer: To understand this kind of collision, you need to start with a diagram: This is the situation "in the lab frame of reference" - that is, the velocities and positions in a stationary frame of reference. Now it turns out the math is much easier in the center of mass (COM) frame of reference, because the net momentum in that frame is zero - both before, and after the collision. This also means that the velocities $v_1'$ and $v_2'$ will be pointing in opposite directions. 
And since you are saying you have an ideal gas (thus perfectly elastic collisions), then in the COM frame the magnitude of the velocities after the collision will be the same as before - they will just be reflected along the line perpendicular to the line connecting the two spheres: In the final step, you convert the velocities $u_1'$ and $u_2'$ back to the lab frame (that is, you add the velocity of the center of mass back in). To calculate the velocity of the center of mass, you just take the vector sum of the momentum, and divide by the total mass: p_x = m_1*v_1x + m_2*v_2x p_y = m_1*v_1y + m_2*v_2y Vx = p_x/(m_1+m_2) Vy = p_y/(m_1+m_2) I recommend that you use these diagrams, and the equations given by @ja72, to derive the correct code yourself - rather than relying on someone else's code that you don't understand.
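For reference, the COM-frame recipe condenses to a few lines; a hedged re-implementation sketch in Python (not the original Java, and the function name is mine):

```python
import numpy as np

def collide(m1, r1, v1, m2, r2, v2):
    # elastic 2D collision: reflect the COM-frame velocity component
    # along the line of centres, exactly as in the diagrams above
    n = (r1 - r2) / np.linalg.norm(r1 - r2)   # unit centre-to-centre vector
    v_com = (m1 * v1 + m2 * v2) / (m1 + m2)   # centre-of-mass velocity
    p = 2.0 * n * np.dot(v1 - v_com, n)       # twice the normal COM component
    return v1 - p, v2 + p * (m1 / m2)         # momentum-conserving update

r1, v1 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
r2, v2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
w1, w2 = collide(1.0, r1, v1, 1.0, r2, v2)
print(w1, w2)  # head-on, equal masses: the velocities are exchanged
```

Because the update is a pure reflection in the COM frame, both total momentum and total kinetic energy are conserved for any masses and impact geometry, which is the property an ideal-gas MD loop needs.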
{ "domain": "physics.stackexchange", "id": 35882, "tags": "collision, ideal-gas, molecular-dynamics, simulations" }
AVL tree insertion and deletion of nodes in C. 2.0
Question: I asked a question yesterday, based on the answers to that question and some personal insights I was able to update the original code. which I am posting here to get reviewed. I also thought about posting it as an answer to my own question but since that will not provide any more inputs to improve the code, I am posting a new question. So I believe this code now provides a good AVL tree implementation with basic functions. Any feedback is welcome. HEADER FILE #ifndef HEADER typedef struct mytree tree; typedef struct treenode node; // constants for nodetype #define TYPEONE 1 // root node #define TYPETWO 2 // leaf node #define TYPETHREE 3 //internal node with only right child #define TYPEFOUR 4 //internal node with only left child #define TYPEFIVE 5 //internal node with both childs //constants for child type #define LEFTCHILD 1 #define RIGHTCHILD 2 //constant for getheight #define NULLHEIGHT -1 tree * new_tree(); node * getposition(node * ptr, int value); node * parent(tree * obj, int value); bool isempty(tree * obj); node * leftchild(tree * obj, int value); node * rightchild(tree * obj, int value); void add(tree * obj, int value); void rmroot(tree * obj); void rmnode(tree * obj, int value); void searchvalue(tree * obj, int value); void inorder(tree * obj); void postorder(tree * obj); static void posttraverse(node * ptr); static void intraverse(node * ptr); static void rem(tree * obj, node * ptr); static node * createnode(int value); static int getheight(node * ptr); static void setheight_toroot(node * ptr); static void setheight(node * start, node * stop); static int getbalfac(node * ptr); static node * findpredecessor(tree * obj, node * ptr); static bool isleftheavy(node * ptr); static bool isrightheavy(node * ptr); static bool isunbalanced(node * ptr); static bool isheightchanged(node * ptr); static void rotateright(tree * obj, node * ptr); static void rotateleft(tree * obj, node * ptr); static void rearrange(tree * obj, node * ptr); static void 
balance(tree * obj, node * ptr); static void delntype1(tree * obj); static void delntype2(node * ptr); static void delntype3(node * ptr); static void delntype4(node * ptr); static void delntype5(tree * obj, node * ptr); static int findmax(int a, int b); static int roottype(node * ptr); static int nodetype(node * ptr); static int findtype(node * ptr, int flag); static int childtype(node * ptr); #endif AVL.c #include<stdio.h> #include<stdlib.h> #include<stdbool.h> #include "newAVL.h" struct treenode { int value; int height; node *left; node *right; node *parent; }; struct mytree { node *root; bool status; }; // functions provided by ADT tree* new_tree() { // constructor function for tree tree *temp = (tree*)malloc(sizeof(tree)); temp->root = NULL; temp->status = true; return temp; } node* getposition(node *ptr, int value) { /*Return address of the node having value equal to "value", returns NULL if not found ptr should be root of tree*/ if (ptr == NULL) { return NULL; } else if (value < ptr->value) { return getposition(ptr->left, value); } else if (value > ptr->value) { return getposition(ptr->right, value); } else { return (ptr); } } node* parent(tree *obj, int value) { //Returns parent of "ptr" node *ptr = getposition(obj->root, value); if (ptr) { return ptr->parent; } else { return NULL; } } bool isempty(tree *obj) { return obj->status; } node* leftchild(tree *obj, int value) { //Returns leftchild of "ptr" node *ptr = getposition(obj->root, value); if (ptr) { return ptr->left; } else { return NULL; } } node* rightchild(tree *obj, int value) { //Returns right child of "ptr" node *ptr = obj->root; if (ptr) { return ptr->right; } else { return NULL; } } void add(tree *obj, int value) { // adds "value" to the tree, value must not be a member of set S. 
where S is set of all values // stored in tree if (obj->root == NULL) { node *temp = createnode(value); obj->root = temp; obj->status = false; } else { node *ptr = obj->root; node *prev = NULL; bool flag = true; while (ptr != NULL) { if (value < ptr->value) { flag = true; prev = ptr; ptr = ptr->left; } else { flag = false; prev = ptr; ptr = ptr->right; } } node *temp = createnode(value); temp->parent = prev; if (flag) { prev->left = temp; } else { prev->right = temp; } balance(obj, temp); } } void rmroot(tree *obj) { //Removes root node of tree if (obj->root) { rem(obj, obj->root); } } void rmnode(tree *obj, int value) { //Removes node having value equal to "value" . no operation is performed if such a node is not found node *temp = getposition(obj->root, value); if (temp) { rem(obj, temp); } } void searchvalue(tree *obj, int value) { //search the tree for value node *temp = getposition(obj->root, value); if (temp) { printf("\n\nValue found\n\n"); } else { printf("\n\nValue not found\n\n"); } } void inorder(tree *obj) { /*Performs inorder traversal of tree*/ node *ptr = obj->root; printf("\nInorder Traversal : "); intraverse(ptr); printf("\n\n"); } void postorder(tree *obj) { //Performs post order traversal of tree rooted at ptr printf("\nPostorder Traversal : "); node *ptr = obj->root; posttraverse(ptr); printf("\n\n"); } // Helper functions for the ADT functions static void posttraverse(node *ptr) { if (ptr) { posttraverse(ptr->left); posttraverse(ptr->right); printf(" %d", ptr->value); } } static void intraverse(node *ptr) { if (ptr) { intraverse(ptr->left); printf(" %d", ptr->value); intraverse(ptr->right); } } static void rem(tree *obj, node *ptr) { /*Deletes node given by "ptr"*/ int var = nodetype(ptr); node *father = ptr->parent; switch (var) { case TYPEONE: delntype1(obj); break; case TYPETWO: delntype2(ptr); break; case TYPETHREE: delntype3(ptr); break; case TYPEFOUR: delntype4(ptr); break; case TYPEFIVE: delntype5(obj, ptr); break; default: break; } if 
(father) { if (isheightchanged(father)) { setheight_toroot(father); } while (father) { if (isunbalanced(father)) { rearrange(obj, father); setheight_toroot(father); } father = father->parent; } } } static node* createnode(int value) { node *ptr = (node*)malloc(sizeof(node)); ptr->left = ptr->right = ptr->parent = NULL; ptr->value = value; ptr->height = 0; return ptr; } static int getheight(node *ptr) { if (ptr == NULL) { return NULLHEIGHT; } else { return ptr->height; } } static void setheight_toroot(node *ptr) { setheight(ptr, NULL); } static void setheight(node *start, node *stop) { //sets height of nodes beggining from start to stop(excluding) while (start != stop) { start->height = findmax(getheight(start->right), getheight(start->left)) + 1; start = start->parent; } } static int getbalfac(node *ptr) { //Returns balance factor of a node if (ptr) { return (-(getheight(ptr->left)) + (getheight(ptr->right))); } else { return 0; } } static node* findpredecessor(tree *obj, node *ptr) { if (ptr) { node *run = ptr->left; node *father = ptr->parent; if (run) { while (run->right) { run = run->right; } } else if ((father) && (ptr->value > father->value)) { run = ptr->parent; } else { if (ptr->value > obj->root->value) { run = obj->root; } else { run = NULL; } } return run; } else { return NULL; } } static bool isleftheavy(node *ptr) { return (getheight(ptr->left) > getheight(ptr->right)); } static bool isrightheavy(node *ptr) { return (getheight(ptr->right) > getheight(ptr->left)); } static bool isunbalanced(node *ptr) { //Returns pointer to the first node which is unbalanced int var = getbalfac(ptr); if ((var < -1) || (var > 1)) { return true; } else { return false; } } static bool isheightchanged(node *ptr) { int first = ptr->height; int second = findmax(getheight(ptr->left), getheight(ptr->right)) + 1; if (first == second) { return false; } else { return true; } } static void rotateright(tree *obj, node *ptr) { node *new_root = ptr->left; ptr->left = new_root->right; 
if (ptr->left) { ptr->left->parent = ptr; } new_root->right = ptr; node *father = ptr->parent; if (!father) { obj->root = new_root; new_root->parent = NULL; } else { int var = childtype(ptr); if (var == LEFTCHILD) { father->left = new_root; } else { father->right = new_root; } new_root->parent = father; } ptr->parent = new_root; } static void rotateleft(tree *obj, node *ptr) { node *new_root = ptr->right; ptr->right = new_root->left; if (ptr->right) { ptr->right->parent = ptr; } new_root->left = ptr; node *father = ptr->parent; if (!father) { obj->root = new_root; new_root->parent = NULL; } else { int var = childtype(ptr); if (var == LEFTCHILD) { father->left = new_root; } else { father->right = new_root; } new_root->parent = father; } ptr->parent = new_root; } static void rearrange(tree *obj, node *ptr) { if (isleftheavy(ptr)) { if (isrightheavy(ptr->left)) { ptr->left->right->height++; ptr->height -= 2; ptr->left->height--; rotateleft(obj, ptr->left); rotateright(obj, ptr); } else { ptr->height -= 2; rotateright(obj, ptr); } } else if (isrightheavy(ptr)) { if (isleftheavy(ptr->right)) { ptr->right->left->height++; ptr->height -= 2; ptr->right->height--; rotateright(obj, ptr->right); rotateleft(obj, ptr); } else { ptr->height -= 2; rotateleft(obj, ptr); } } } static void balance(tree *obj, node *ptr) { if (isheightchanged(ptr->parent)) { while ((ptr) && (!isunbalanced(ptr))) { setheight(ptr, ptr->parent); ptr = ptr->parent; } if (ptr) { setheight(ptr, ptr->parent); rearrange(obj, ptr); } } } static void delntype1(tree *obj) { int var = roottype(obj->root); node *temp = NULL; switch (var) { case TYPETWO: // root is leaf free(obj->root); obj->root = NULL; obj->status = true; break; case TYPETHREE: //root has only right child temp = obj->root->right; free(obj->root); obj->root = temp; obj->root->parent = NULL; break; case TYPEFOUR: //root has only left child temp = obj->root->left; free(obj->root); obj->root = temp; obj->root->parent = NULL; break; case TYPEFIVE: 
//root has both childs delntype5(obj, obj->root); break; } } static void delntype2(node *ptr) { //this function deletes a leaf node node *father = ptr->parent; int var = childtype(ptr); if (var == LEFTCHILD) { father->left = NULL; } else { father->right = NULL; } free(ptr); } static void delntype3(node *ptr) { //this function deletes a internal node having only right child node *father = ptr->parent; int var = childtype(ptr); if (var == LEFTCHILD) { father->left = ptr->right; } else { father->right = ptr->right; } ptr->right->parent = father; free(ptr); } static void delntype4(node *ptr) { //this function delete an internal node having only left child node *father = ptr->parent; int var = childtype(ptr); if (var == LEFTCHILD) { father->left = ptr->left; } else { father->right = ptr->left; } ptr->left->parent = father; free(ptr); } static void delntype5(tree *obj, node *ptr) { node *temp = findpredecessor(obj, ptr); int var = temp->value; rmnode(obj, temp->value); ptr->value = var; } static int findmax(int a, int b) { if (a > b) { return a; } else { return b; } } static int roottype(node *ptr) { return findtype(ptr, 0); } static int nodetype(node *ptr) { return findtype(ptr, 1); } static int findtype(node *ptr, int flag) { /*Determines type of node. ex. 
leaf or internal*/ if ((ptr->parent == NULL) && (flag)) { return TYPEONE; //root } else if ((ptr->left == NULL) && (ptr->right == NULL)) { return TYPETWO; //leaf node } else if (ptr->left == NULL) { return TYPETHREE; //internal node with right child } else if (ptr->right == NULL) { return TYPEFOUR; //internal node with left child } else { return TYPEFIVE; //internal node with left and right childs } } static int childtype(node *ptr) { /*Determine wether ptr is left child of its parent or right child*/ node *temp = ptr->parent; if (temp) { if (temp->left == ptr) { return LEFTCHILD; } else { return RIGHTCHILD; } } else { return 0; } } int main() { tree *obj = new_tree(); for (int i = 0;i < 50;i++) { add(obj, i); } for (int i = 0;i < 40;i++) { rmroot(obj); } inorder(obj); postorder(obj); searchvalue(obj, 91); return 0; } Answer: Overall the code is very readable. The function names and variable names are generally descriptive, which is a good thing. The indentation is great, and the level of comments is pretty good, although AVL.c could use a description of what it provides and how it provides it. I would replace the use of obj and ptr in your functions with more meaningful variable names. The function name add() isn't descriptive enough, what are you adding? Additional Things to Learn I suggest that you learn about make and/or cmake if you are not using visual studio. The main() function should be in its own C file. The AVL.c file should only contain the functions that implement and modify the tree. The static functions you use should only be in AVL.c and should not be in the header file. If the functions that use the tree need access to the struct, then the struct should be declared as part of the typedef. Named Constants Having constants defined is really good practice. One flaw in the code is the constant naming. Constants should be named using WORD1_WORD2 rather than WORD1WORD2, see the C coding standard at Coding Standard.
The named constants can probably have more meaningful names: TYPEONE could be ROOT_NODE TYPETWO could be LEAF_NODE TYPETHREE could be INTERNAL_NODE_NO_LEFT_CHILD Reduce Function Complexity The add() function is very long, and some parts of it, such as the while (ptr != NULL) loop might be useful to other functions you write. Consider creating a function/subroutine just for that code. This Programmers Question discusses when is it good to break up a function, and points to a good reference book. Naming Conventions There are a number of ways to differentiate words in a variable name or function name, CamelCase is one of them. An example of CamelCase is getPosition(). Using one of these conventions helps make the code more readable.
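The "Reduce Function Complexity" point above can be made concrete. Below is a hypothetical sketch (the helper name find_insert_parent and its exact signature are mine, not from the posted code) of pulling the descent loop out of add(), so that add() only has to create the node and link it in:

```c
#include <stddef.h>
#include <assert.h>

/* Minimal stand-in for the treenode struct from AVL.c */
typedef struct treenode {
    int value;
    struct treenode *left;
    struct treenode *right;
} node;

/* Hypothetical helper extracted from add(): walk down from root and
 * return the node that will become the parent of `value`.  *go_left
 * is set to 1 when the new node belongs in the parent's left slot,
 * 0 for the right slot.  Returns NULL for an empty tree. */
static node *find_insert_parent(node *root, int value, int *go_left)
{
    node *prev = NULL;
    while (root != NULL) {
        prev = root;
        if (value < root->value) {
            *go_left = 1;
            root = root->left;
        } else {
            *go_left = 0;
            root = root->right;
        }
    }
    return prev;
}
```

add() would then call this helper, attach the new node on the indicated side, and proceed with balance() as before; other routines that need the insertion point could reuse it.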
{ "domain": "codereview.stackexchange", "id": 21116, "tags": "algorithm, c, tree" }
What does it mean that two frames are "in a state of constant, rectilinear motion with respect to one another"?
Question: This expression (applied to reference frames), "being in a state of constant, rectilinear motion with respect to one another", is frequently used as self-explanatory. Though I might appear stupid, I absolutely do not understand what it means! I perfectly understand what "rectilinear" motion and "constant" motion mean when applied to a single motion. But I would like to know what it means to say that 2 states of motion are "rectilinear and constant" to one another. Does this mean that 2 frames are "in a state of constant..." just in case they are following parallel lines in the same sense, with the same constant speed? Is it possible, please, to formulate precisely the definition of "in a state of rectilinear and constant motion to one another" as a list of distinct conditions that are individually necessary and jointly sufficient? Answer: Saying that the reference frame $K$ has constant rectilinear motion with respect to the reference frame $K'$ means that if you consider a point $p$ at rest in $K$, the velocity of that point computed in $K'$ (a) is constant in time and (b) does not depend on the position of $p$ at rest in the first reference frame. It turns out that, with the definition above, being in constant rectilinear motion is an equivalence relation among reference frames.
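As a sketch in symbols (my notation, not from the original answer): conditions (a) and (b) together amount to saying that the coordinate map between the two frames has the form $$\mathbf{x}'(t) = R\,\mathbf{x} + \mathbf{v}\,t + \mathbf{b},$$ where $R$ is a fixed rotation, $\mathbf{v}$ a fixed velocity and $\mathbf{b}$ a fixed offset. Every point $\mathbf{x}$ at rest in $K$ then moves in $K'$ with the same constant velocity $\dot{\mathbf{x}}'(t) = \mathbf{v}$, independent of $\mathbf{x}$. Symmetry and transitivity of the relation follow by inverting and composing maps of this form, which gives the equivalence-relation claim.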
{ "domain": "physics.stackexchange", "id": 62521, "tags": "classical-mechanics, inertial-frames, definition, relative-motion, galilean-relativity" }
Minimalistic mysql db class
Question: I am really just looking to make sure I am not making any weird mistakes that could hurt me in the long run or if something seems odd in the way I imagine it to work. This code does work for the way I use it and may lack other more advanced things that I may use in the future but have not implemented yet. I left in my phpdoc code to maybe make some sense of weird things. Any feedback is appreciated. Note: I am constantly reading up on things and try to cover everything in each of the stack sites but I might have overlooked some very basic resources due to be newer to the stack family. /** * Class Main Mysql Database */ class mydb { /** * Connect to DB */ public function __construct() { // Connect to Database $this->mydb = new mysqli('host', 'database', 'user', 'pass'); if ($this->mydb->connect_errno) { // Error on connection failure echo "Failed to connect to MySQL in Construct: (" . $this->mydb->connect_errno . ") " . $this->mydb->connect_error; } } /** End construct */ /** * Choose items from the database (SELECT Statement with WHERE equals and like options) * Designed to be slim and rely more on PHP than mysql for result formating * @param string $select * @param string $from * @param string $config * @param array $options * @return array */ public function choose ($select, $from, $config = NULL, $options = NULL) { if ($config === NULL) { $stmt = "SELECT {$select} FROM {$from}"; } elseif ($config === 'EQUALS') { $stmt = "SELECT {$select} FROM {$from} WHERE {$options['where_comp']} = '{$options['where_value']}'"; } elseif ($config === 'LIKE') { $stmt = "SELECT {$select} FROM {$from} WHERE {$options['where_comp']} LIKE '{$options['where_value']}'"; } /** End if $config */ if (!($result = $this->mydb->query($stmt))) { echo 'Query Error: ' . $result->error . 
'<br/>'; } else { while ($row = $result->fetch_assoc()) { $payload[] = $row; } return $payload; } /** End if mydb->query */ } /** End choose method */ /** Put items into the database (INSERT INTO $table ($col) VALUES ($set) * This method was designed to do basic inserts without complicating it and filter when $clean is required * @param string $table * @param string $col * @param array $set * @param string $clean * */ public function put ($table, $col, $set, $clean = NULL) { if ($clean) { $set = "'" . implode("','",filter_var_array($set,FILTER_SANITIZE_STRING)) . "'"; $stmt = "INSERT INTO {$table} ({$col}) VALUES ({$set})"; } else { $set = "'" . implode("','",$set) . "'"; $stmt = "INSERT INTO {$table} ({$col}) VALUES ({$set})"; } /** End form $stmt */ if (!($putResult = $this->mydb->query($stmt))) { echo 'Insert Error: ' . $this->mydb->error . '<br/>'; } else { return $putResult; } } /** End put method */ /** Change items in the database (UPDATE $table SET item=value WHERE condition) * This method was designed to do basic updates without complicating it and filter when $clean is required * @param string $table * @param array $set * @param array $where * @param string $clean * */ public function change ($table, $set, $where, $clean = NULL) { if ($clean) { foreach ($set as $key => $value) { $value = filter_var($value, FILTER_SANITIZE_STRING); $setArray[] = "{$key}='{$value}'"; } $set = implode(',',$setArray); $stmt = 'UPDATE ' . $table . ' SET ' . $set . ' WHERE ' . $where['target'] . ' = ' . $where['match']; } else { foreach ($set as $key => $value) { $setArray[] = "{$key}='{$value}'"; } $set = implode(',',$setArray); $stmt = 'UPDATE ' . $table . ' SET ' . $set . ' WHERE ' . $where['target'] . ' = ' . $where['match']; } /** End form $stmt */ if (!($putResult = $this->mydb->query($stmt))) { echo 'Insert Error: ' . $this->mydb->error . 
'<br/>'; } else { return $changeResult; } /** End Query Check */ } /** End change method */ public function __destruct() { if (is_object($this->mydb)) { $this->mydb->close(); } } /** End destruct */ } /** End mydb Class */ Example of CODE use: ::Include above and create $mydb object:: $set = array('team'=>'First Shift Tech','access'=>'None'); $where = Array('target'=>'author','match'=>'John Doe'); $mydb->change('agent',$set,$where); Can I pass array's in this line instead? Seems I get an inspection error in phpstorm if I try to do some type of lazy deal. Answer: First of a note: you say yourself that everything is working. And that is what often counts. I hope I'm not to harsh, but it's my way of telling things ;) Overall: you should (try and) avoid an 'if' statement that change the action of a method. And document the $config (what does it mean, what does it do) you should use method names that are self-explicable (choose is actually select, change is actually update, put is actually insert.) Again, use method names that say what they do. 'change': change what? a record? the table declaration? the column definition? Never echo inside a method. If there is an error throw one. Of use a global Error handling class. e.g. Log::setError(..); (just imagining trying to figure out why there is a 'header allready sent' error because of that error). Now,I don't really know what you wrote. Is it a wrapper class for mysqli? That failed, you can now do less and code doesn't get easier to read (imo). Is it an adapter that adapts mysqli to a certain interface? Where is the interface? Does it help you abstract the database-layer and easily change the DB? again, no. mysqli is tightly coupled with this mysql class. What if we want to start using PDO? Or even step away from mySQL? All that code is useless then. 
Now, some specific remarks: function choose(); Designed to be slim and rely more on PHP than mysql for result formating In what way does it rely more on php for result formatting? You are simply calling fetch_assoc(); where is the formatting happening? function change(); & function put(); This method was designed to do basic updates without complicating it and filter when $clean is required How doesnt it complicate things? Why not use the mysqli prepared statements and let mysqli handle the filtering of of inputted variables? And your code isn't really basic. There is a lof of 'Repeat-yourself code'. So, let's fix what is 'broken'. I think that what you actually wanted to create was an adapter. The task of an adapter is translating the desired interface onto the correct method of the actual db class (mysqli in your case). To start of, we will have to design the desired interface. Let's for the sake of my answer (I'm lazy) create a simple Select, Update, Delete and Insert interface using an 'identifier' string as primary key. <?php interface Database { public function __construct($client); public function select($table, $identifier); public function insert($table, $identifier, $data); public function update($table, $identifier, $data); public function delete($table, $identifier); } Ok, brilliant. We now let other programmers (and ourself) know that if you are passing me a Database object, it has to implement these methods. Sadly for us, mysqli doesn't implement these methods directly. So the adapter pattern comes in handy here. We write an adapter for mysqli to fit our Database interface: <?php class MysqliToDatabaseAdapter implements Database { private final $client; public function __construct($client) { $this->client = $client; } public function select($table, $identifier) { $stmt = $this->client->prepare('SELECT * FROM ' . $table . 
' WHERE identifier = ?'); $stmt->bind_param('s', $identifier); $stmt->execute(); return $stmt->fetch(); } public function insert($table, $identifier, $data) { $stmt = $this->client->prepare( 'INSERT INTO ' . $table . ' (`identifier`, `' . implode('`,`', array_keys($data)) . '`)' . ' VALUES (? ' . str_repeat(',?', count($data)) . ')' ); $data = array_values($data); array_unshift($data, $identifier); $stmt->bind_param( str_repeat('s', count($data)), ...$data ); return $stmt->execute(); } public function update($table, $identifier, $data) { //... } public function delete($table, $identifier, $data) { //... } } We would then use this class as follows: <?php $mysqli = new mysqli('host', 'database', 'user', 'pass'); $myDbHandler = new MysqliToDatabaseAdapter($mysqli); //somewhere in the code $myDbHandler->insert('my_table','my_identifier',array('my_column'=>'my_column_data')); //and then we select it $data = $myDbHandler->select('my_table','my_identifier'); Now this code is easy to maintain, the methods do what they are called (select, insert,...) and we can now easily change the databaseHandler without having to change anything in our application. Of course at this point the Database interface is of not much use. There is no way to 'filter' data, bulk update, ... But these could all be implemented. A good rule here is if you have to write 'if' in the documentation, then split up the method into two different methods. e.g.: update() and bulkUpdate() or select() and selectByColumn(). Always keep your code small. Don't stuff one interface with a massive amount of functions. Create little small interfaces that focus on one part. e.g. DatabaseCrud (for simple create, read update and delete) DatabaseBulk (for bulk manipulations) DatabaseSomethingCompletlyDifferent (some other functions) Then depending on what you need you can easily pass different DatabaseAdapters to different objects. Most code will only need simple CRUD functions.
So you pass in the simple DatabaseCrudAdapter we made. For some special exotic method we can then create a bigger Adapter that implements extra interfaces and uses the same mysqli object. Some code for demonstrations: <?php //create the mysqli object $mysqli = new mysqli('host', 'database', 'user', 'pass'); //create our adapters $simpleCrud = new MysqliToSimpleDatabaseCrud($mysqli); // implements SimpleDatabaseCrud(); $simpleBulkCrud = new MysqliToSimpleBulkDatabase($mysqli); // extends MysqliToSimpleDatabaseCrud implements DatabaseBulk //create some objects $foo = new IOnlyNeedSimpleCrudFunctions($simpleCrud); $bar = new INeedSomeSpecialBulkFunctions($simpleBulkCrud); I hope this makes some sense :p So, now it's up to you. You will have to write out a decent set of interfaces used inside your program. Then create adapters for existing code to fit those interfaces. More on the adapter pattern: http://en.wikipedia.org/wiki/Adapter_pattern Some other useful links: http://en.wikipedia.org/wiki/SOLID_%28object-oriented_design%29 http://en.wikipedia.org/wiki/Don%27t_repeat_yourself http://www.jamisbuck.org/presentations/rubyconf2011/index.html http://thc.org/root/phun/unmaintain.html
{ "domain": "codereview.stackexchange", "id": 4319, "tags": "php, object-oriented, mysqli" }
Return the repeated number and the missing number
Question: Input: Array of n integers, containing numbers in the range [1,n]. Each integer appears once except A which repeats twice and B which is missing. Output: Return A and B. Example: Input:[3 1 2 5 3] Output:[3, 4] A = 3, B = 4 My approach: public class Solution { // DO NOT MODIFY THE LIST. IT IS READ ONLY public ArrayList<Integer> repeatedNumber(final List<Integer> A) { //O(n) solution //Store the count of all the numbers appearing in the list int [] count = new int[A.size()]; int rep_num = 0,miss = 0; //Increase the count at the index location for( int i = 0; i < A.size(); i++ ) { int ind = A.get(i); count[ind - 1]++; // if(count[ind - 1] == 2) { rep_num = ind; break; } } //If the count has not been updated, then the number is missing in the array for(int i = 0; i < count.length; i++ ) { if(count[i] == 0) miss = i+1; } ArrayList<Integer> num = new ArrayList<Integer>(); num.add(rep_num); num.add(miss); return num; } } I have the following questions: How can I further optimize my code? Is there any better way to solve this question (i.e. using a better data structure, lesser lines of code)? Question asked on: interviewbit.com Answer: Your solution takes about 25 lines which is, IMO, a lot for this task. I'll propose my solution at the end. int rep_num = 0,miss = 0; Avoid declaring two variables on the same line, also rep_num does not correspond to the java recommended notations. Java uses lowerCamelCase for variable (and method) name so you should write rep_num as repNum. Also miss is not really clear as a variable name IMO. Maybe you should consider missingNumber. You are using an array to store duplicated elements, but it's clearer to use Set or a Map (or even multiSet with a library) to count duplicate elements. Actually if you have to count a lot of duplicate (or missing) elements, it's likely to be faster than your solution. Here is a code sample : Set<Integer> seen = ... 
for (Integer elementFromA : a) { if (seen.contains(elementFromA)) { // a set can only contain an element once, thus the current value of elementFromA is a duplicate // let's do something with it } seen.add(elementFromA); } You should consider turning this method into a static method as it doesn't use any field from your object. Here is the solution I came up with (it should be about the same performance as yours) : public static List<Integer> repeatedNumber(final List<Integer> a) { ArrayList<Integer> res = IntStream.range(1, a.size() + 1).boxed().collect(toCollection(ArrayList::new)); // by removing every element of a, we are basically doing an exclusion/disjunction // res will now contain only the missing elements from a res.removeAll(a); final Set<Integer> seen = new HashSet<>(); for (Integer v : a) { if (seen.contains(v)) { res.add(0, v); break; } seen.add(v); } return res; } To make sure I wasn't making mistakes, I made a quick unit test class : public class SolutionTest { @Test public void basicTest() { List<Integer> expected = Arrays.asList(3, 4); Assert.assertEquals(expected, Solution.repeatedNumber(Arrays.asList(3, 1, 2, 5, 3))); } @Test public void testWithLastNumberModified() { List<Integer> expected = Arrays.asList(6, 7); Assert.assertEquals(expected, Solution.repeatedNumber(Arrays.asList(1, 2, 3, 4, 5, 6, 6))); } @Test public void testWithBigList() { List<Integer> expected = Arrays.asList(2, 3); Assert.assertEquals(expected, Solution.repeatedNumber(Arrays.asList(11, 10, 9, 8, 7, 6, 5, 4, 2, 2, 1))); } @Test public void anotherTestWithBigList() { List<Integer> expected = Arrays.asList(6, 1); Assert.assertEquals(expected, Solution.repeatedNumber(Arrays.asList(10, 11, 7, 8, 9, 3, 2, 6, 4, 5, 6))); } }
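A further alternative the answer doesn't cover: because the values are known to lie in [1, n], the pair can be recovered with no auxiliary collection at all, from the first two power sums. The class name below is hypothetical and the sketch is mine, not from the original post:

```java
import java.util.Arrays;
import java.util.List;

public class RepeatedAndMissing {

    /**
     * Returns {A, B} where A appears twice and B is missing,
     * for a list containing the values 1..n with that one defect.
     * Uses the sum and sum-of-squares identities, so O(1) extra
     * space; long arithmetic keeps the intermediate sums from
     * overflowing for moderate n.
     */
    public static int[] find(List<Integer> a) {
        long n = a.size();
        long sum = 0;
        long sumSq = 0;
        for (int v : a) {
            sum += v;
            sumSq += (long) v * v;
        }
        long diff = sum - n * (n + 1) / 2;                           // A - B
        long total = (sumSq - n * (n + 1) * (2 * n + 1) / 6) / diff; // A + B
        int repeated = (int) ((diff + total) / 2);
        int missing = (int) (total - repeated);
        return new int[] { repeated, missing };
    }
}
```

For the example input [3 1 2 5 3] this gives A = 3 and B = 4. The division by diff is safe because A ≠ B guarantees diff ≠ 0.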
{ "domain": "codereview.stackexchange", "id": 29196, "tags": "java, performance" }
Negative unity matrix not hermitian? (stabilizer formalism)
Question: I read the section in the attached picture about the stabilizer formalism and was wondering about the last sentence in the pic. It says that all operators of the stabilizer group are hermitian, because the negative unity operator is not included, but isn't that operator hermitian as well? Answer: Notice that an element $A$ of the Pauli group is either Hermitian and then $A^2=I$, or anti-Hermitian and then $A^2=-I$. If the stab' group included a non-Hermitian element, it would also include its square, $-I$. Hence if it doesn't include $-I$ it cannot include non-Hermitian elements.
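Spelled out a bit more (this is my paraphrase of the argument): every element $A$ of the Pauli group is unitary, $A^\dagger A = I$, and a non-Hermitian Pauli-group element is necessarily anti-Hermitian, $A^\dagger = -A$, so $$A^2 = -A^\dagger A = -I.$$ Since a stabilizer group is closed under multiplication, containing such an $A$ would force it to contain $A^2 = -I$; excluding $-I$ therefore excludes every non-Hermitian element, leaving only Hermitian ones.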
{ "domain": "physics.stackexchange", "id": 55161, "tags": "quantum-mechanics, operators, quantum-information, mathematical-physics, group-theory" }
Is transforming (tf) a pointcloud faster than transforming the points one by one?
Question: I have about 600 points with the same time stamp and frame id. I am transforming them all one by one by multiplying the transform pre-obtained by lookupTransform with the point: t = lookupTransform() for ( point in points ) btVector3 p = point btVector3 q = t*p point = q This is taking quite some time, which is a problem because these are points from the velodyne so I have to do it again and again for millions of them. I am wondering if there would be an advantage in throwing them all in a point cloud and transforming the whole cloud at once. I saw that the tf::TransformListener::transfromPointCloud function is using boost::numeric::ublas in the background, which construct one big matrix with all the points and multiplies it with the transform. see http://www.ros.org/doc/api/tf/html/c++/transform__listener_8cpp_source.html#l00288 I also saw that joq's code for his velodyne driver is using pcl_ros to transform the cloud, which ends up calling pcl::transformPointCloud that can be seen here: http://docs.pointclouds.org/trunk/transforms_8hpp_source.html#l00042 At the end, you can't beat the fact that you have to multiply each point by the transform matrix, so it's only a question of computational efficiency... Which would be more efficient then: pcl_ros or ublas, and why? Of course I will know if I experiment all of those solutions, but I am hoping that someone will save me that pain... Also I have been thinking about GPUs these days. Is there a GPU implementation of this? Would that make sense? I guess it would for large enough point clouds... Originally posted by brice rebsamen on ROS Answers with karma: 1001 on 2013-01-17 Post score: 3 Answer: So I gave it a go and used tf::TransformListener::transfromPointCloud and it turned out to be much much faster (CPU usage went from hogging a core to 10%). I haven't tried pcl::transformPointCloud yet, but I guess I will, because that would be interesting to know which one is faster. I am a bit puzzled though. 
I am copying my points to a point cloud, then they get copied to a ublas matrix. This matrix is then multiplied by the transform. Then points are copied back to a point cloud and back to my data format. So first of all there are 2 copy operations (but in my own implementation I also had to copy to a btVector3 back and forth ...). Then the number of mathematical operations is still the same, isn't it: the 600 points still have to be multiplied by the transform. So either ublas is exceptionally well optimized, or my implementation was really crappy. EDIT: Since then, I have switched from diamondback to fuerte. I noticed that the implementation of tf::TransformListener::transfromPointCloud has changed, starting from electric. It is now transforming all the points one by one. See the following API links for a comparison: diamondback fuerte (electric to groovy) I haven't thoroughly timed them, but it seems that it's still doing a good job at it. Originally posted by brice rebsamen with karma: 1001 on 2013-01-21 This answer was ACCEPTED on the original site Post score: 5 Original comments Comment by joq on 2013-01-22: There are all kinds of crazy optimizations that can be done in serious numerical libraries. No sane programmer would do them in a simple loop. Comment by Philip on 2013-03-04: Have you checked the performance of tf::TransformListener::transfromPointCloud versus pcl::transformPointCloud? I assume they are about equal, but would be interested in the results nevertheless :-) Comment by brice rebsamen on 2013-03-04: no I haven't tried
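For intuition about where the time goes (this is a hypothetical plain-C++ sketch, not the actual tf or pcl_ros code): the per-point arithmetic is identical whether you transform one point or a whole cloud, so a batch call mostly wins by hoisting work out of the loop — no per-point btVector3 construction, no repeated lookup/call overhead, and a contiguous memory layout the compiler can vectorize.

```cpp
#include <array>
#include <vector>
#include <cassert>

using Point = std::array<double, 3>;

struct Transform {
    std::array<std::array<double, 3>, 3> R;  // rotation part
    std::array<double, 3> t;                 // translation part
};

// Transform every point of the cloud in one tight pass.  The math is
// exactly one matrix-vector multiply plus a translation per point,
// the same as transforming the points individually.
void transformCloud(const Transform& T, std::vector<Point>& pts) {
    for (Point& p : pts) {
        Point q;
        for (int i = 0; i < 3; ++i) {
            q[i] = T.R[i][0] * p[0] + T.R[i][1] * p[1]
                 + T.R[i][2] * p[2] + T.t[i];
        }
        p = q;
    }
}
```

Libraries like ublas or pcl go further (expression templates, SIMD), but even this naive batching avoids the per-point wrapper objects the original loop paid for.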
{ "domain": "robotics.stackexchange", "id": 12473, "tags": "ros, transforms, tf2, transform, gpu" }
Call center escalation exercise, using Chain of Responsibility pattern
Question: This is a coding interview I found online, to be solved within 30 minutes: Design an OOP concept for an application where employee can dispatch their incoming phone call according to their seniority level if they are not able to solve. I chose to use Chain of Responsibility pattern, I also used Null pattern. Please review OOP principles and the specific design pattern implementation. The unit test is just a code sample. using System; using Microsoft.VisualStudio.TestTools.UnitTesting; namespace DesignPatternsQuestions { [TestClass] public class ChainOfResponsibilityTest { [TestMethod] public void PhoneCallDispatchCORTest() { PhoneCallHandler gilad = new PhoneCallHandler(new LowLevel("Gilad")); PhoneCallHandler bonno = new PhoneCallHandler(new LowLevel("Bonno")); PhoneCallHandler batel = new PhoneCallHandler(new MediumLevel("Batel")); PhoneCallHandler daniel = new PhoneCallHandler(new HighLevel("Daniel")); gilad.RegisterNext(batel); bonno.RegisterNext(batel); batel.RegisterNext(daniel); BankPhoneCall call1 = new BankPhoneCall(1500); bool res = gilad.GetPhoneCall(call1); Assert.IsTrue(res); } } public interface IPhoneCall { int Budget { get; set; } } public class BankPhoneCall :IPhoneCall { public BankPhoneCall(int budget) { Budget = budget; } public int Budget { get; set; } } public interface IPhoneCallHandler { bool GetPhoneCall(IPhoneCall phoneCall); //the same item is able to call next Item in the chain of command void RegisterNext(IPhoneCallHandler employee); } public class PhoneCallHandler : IPhoneCallHandler { private IPhoneCallHandler _nextCallHandler = EndOfChainHandler.Instance; private Employee _employee; public PhoneCallHandler(Employee employee) { _employee = employee; } public bool GetPhoneCall(IPhoneCall phoneCall) { bool res = _employee.Resolve(phoneCall); if (!res) { //handle _nextcallHandler == null with null pattern return _nextCallHandler.GetPhoneCall(phoneCall); } return res; } public void RegisterNext(IPhoneCallHandler nextCallHandler) 
{ _nextCallHandler = nextCallHandler; } } /// <summary> /// null pattern /// </summary> public sealed class EndOfChainHandler : IPhoneCallHandler { private static readonly Lazy<EndOfChainHandler> lazy = new Lazy<EndOfChainHandler>(()=> new EndOfChainHandler()); public static EndOfChainHandler Instance { get { return lazy.Value; } } public bool GetPhoneCall(IPhoneCall phoneCall) { return false; } public void RegisterNext(IPhoneCallHandler employee) { throw new InvalidOperationException("can't register next to null"); } } public abstract class Employee { protected int _maxBudget; public string Name { get;set; } public Employee(string name, int maxBudget) { _maxBudget = maxBudget; Name = name; } public bool Resolve(IPhoneCall phoneCall) { return phoneCall.Budget <= _maxBudget; } } public class LowLevel : Employee { public LowLevel(string name) : base(name, 1000) { } } public class MediumLevel : Employee { public MediumLevel(string name) : base(name, 5000) { } } public class HighLevel : Employee { public HighLevel(string name) : base(name, 10000) { } } } Answer: Review Well done, there isn't much I would change in this implementation. I would grant you the humble badge for putting yourself as LowLevel employee :) The lazy and null pattern are well implemented (except a small issue that could possibly introduce null -> see minor issues). The chain of responsibility looks to be by the book. You've even used a visitor pattern between the handler and the employee. Each specific employee yields different behavior handling a phone call. Minor issues IPhoneCall as interface seems overkill to me. This is just a POCO. Check arguments against null in public entry points such as public PhoneCallHandler(Employee employee). GetPhoneCall could be written more compact: return _employee.Resolve(phoneCall) || _nextCallHandler.GetPhoneCall(phoneCall);. RegisterNext should not allow null, as this bypasses your EndOfChainHandler.Instance pattern to avoid null handling.
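The review's compact rewrite of GetPhoneCall (`return _employee.Resolve(phoneCall) || _nextCallHandler.GetPhoneCall(phoneCall);`) works because `||` short-circuits. The same shape, including the null-object terminator, can be sketched in Python (the names and budget thresholds are taken from the post; this is an illustrative sketch, not a translation of the C#):

```python
class EndOfChain:
    """Null-object terminator: always fails, so callers never test for None."""
    def handle(self, budget):
        return False

END_OF_CHAIN = EndOfChain()

class Handler:
    def __init__(self, name, max_budget, successor=END_OF_CHAIN):
        self.name = name
        self.max_budget = max_budget
        self.successor = successor

    def handle(self, budget):
        # Short-circuit: resolve locally, otherwise delegate down the chain.
        return budget <= self.max_budget or self.successor.handle(budget)

high = Handler("Daniel", 10000)
medium = Handler("Batel", 5000, successor=high)
low = Handler("Gilad", 1000, successor=medium)
```

Because the default successor is the shared null object, `handle` never needs a null check, which is exactly the point of the EndOfChainHandler in the reviewed code.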
{ "domain": "codereview.stackexchange", "id": 35881, "tags": "c#, design-patterns, interview-questions" }
Calculating average of a function of molecule's orientation (Euler angles)
Question: In this paper, the orientational average of a function of Euler angles, $f(\phi,\theta,\chi)$, is defined as: $$\langle f\rangle=\frac{1}{8 \pi^2} \int_0^\pi \int_0^{2 \pi} \int_0^{2 \pi} f(\theta, \phi, \chi) \sin \theta \mathrm{d} \theta \mathrm{d} \phi \mathrm{d} \chi$$ My question is: Why do we need the $\sin(\theta)$? What's wrong with defining it as follows: $$\langle f\rangle=\frac{1}{4 \pi^3} \int_0^\pi \int_0^{2 \pi} \int_0^{2 \pi} f(\theta, \phi, \chi) \mathrm{d} \theta \mathrm{d} \phi \mathrm{d} \chi$$ This still has the correct normalization without the $\sin(\theta)$. Answer: The reason is the same as why, in spherical coordinates, the surface area element contains the $\sin(\theta)$ factor: we would like to have a uniform measure on the unit sphere surface described by the coordinates $\theta$ and $\phi$. Lines at constant longitude ($\phi$), the "meridians," have the same length. Lines at constant latitude ($\theta$), the "parallels," have an increasing size going from the poles (zero length) to the "Equator" (same length as the meridians). It is a simple exercise to check that the length of the parallels is proportional to a $\sin (\theta)$ factor. Therefore, the surface area of two small elements on the surface of a unit-radius sphere, corresponding to the same coordinate variation $d\theta, d\phi$, at different latitudes (like the green and the red elements in the following figure) must be written as $$ \sin (\theta) d \theta d \phi.$$ Similarly for the average over Euler angles.
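The difference between the two proposed measures is easy to check numerically: for $f = \cos^2\theta$ the uniform-on-the-sphere average is 1/3, while the flat-in-$\theta$ average is 1/2. A small quadrature sketch (plain Python; the $\phi$ and $\chi$ integrals factor out for a function of $\theta$ alone):

```python
import math

def average_cos2_theta(weighted, n=20_000):
    """Midpoint-rule average of cos^2(theta) over theta in [0, pi].

    weighted=True uses the uniform-on-the-sphere measure sin(theta) dtheta;
    weighted=False uses the flat measure dtheta. The phi and chi integrals
    cancel in the ratio for any function of theta alone, so they are omitted.
    """
    num = den = 0.0
    for k in range(n):
        theta = (k + 0.5) * math.pi / n
        w = math.sin(theta) if weighted else 1.0
        num += math.cos(theta) ** 2 * w
        den += w
    return num / den
```

With the $\sin\theta$ weight the result matches the exact sphere average $\langle\cos^2\theta\rangle = 1/3$; dropping it biases the average toward the poles and gives 1/2 instead.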
{ "domain": "physics.stackexchange", "id": 89280, "tags": "statistical-mechanics, coordinate-systems, integration, mathematics, molecular-dynamics" }
How strong are the experimental constraints on the neutrino helicity really?
Question: It is commonly accepted that all neutrinos have left-handed helicity and all anti-neutrinos have right-handed helicity.$^1$ The experimental base comes from Goldhaber's famous experiment performed in the 1950's. Its results are consistent with only left-handed neutrinos, but the uncertainties are actually rather large. I have been searching for more recent, improved measurements of the neutrino helicity, but everything I found online was only references to Goldhaber's original experiment. It looks to me like there were no serious attempts to reproduce or improve Goldhaber's more than 60-year-old result, which I find somewhat intriguing, given that the result had a major impact on the development of the Standard Model of particle physics. Yet, it is often made to appear like the question was unambiguously settled experimentally by the Goldhaber experiment. So my questions are: have there been any experiments which were able to improve the uncertainty (say, to the 1%-level)? Or is a better measurement simply too difficult, even with modern hardware and techniques? If there really were no lab experiments for a better value of the amount of left-handed neutrinos, are there other constraints, e.g. from cosmology? If not, then I would say that in principle right-handed neutrinos are still a possibility which should be more thoroughly tested experimentally, since it would have a major impact on particle physics. ${^1}_\textrm{For the purpose of this question I'll not consider reference frames in which the observer is faster than the (massive) neutrino.}$ Answer: It is true that Goldhaber's experiment has not been directly repeated recently (as far as I can tell, since I couldn't find any recent paper trying to repeat it with higher precision). However, neutrino helicity and the non-existence of the right-handed neutrino are a direct consequence of Dirac's equation for massless particles.
50+ years later, there is no experimental or theoretical evidence that this result could be false. However, even if not measured directly, neutrino helicity is a very important mechanism in recent discoveries such as the Higgs boson and its interactions, meaning that if the neutrino helicity theory were incorrect, other recent experiments would have shown very different results. In addition, it has recently been proved that the neutrino must have mass, meaning that right-handed neutrinos should exist, but measuring this is very hard, taking into account how difficult neutrino detection is and how low the probability is of producing one. Here is a link with more information about the topic: http://ctp.berkeley.edu/neutrino/neutrino5.html
{ "domain": "physics.stackexchange", "id": 34036, "tags": "experimental-physics, neutrinos, helicity" }
Longest common subsequence — recursion and memoization
Question: use std::{cmp::max_by_key, collections::HashMap}; fn longest_common_subsequence(s1: Vec<u8>, s2: Vec<u8>) -> Vec<u8> { struct Helper { s1: Vec<u8>, s2: Vec<u8>, cache: HashMap<(usize, usize), Vec<u8>>, } impl Helper { fn helper(&mut self, i: usize, j: usize) -> Vec<u8> { if self.cache.contains_key(&(i, j)) { return self.cache[&(i, j)].clone(); } let value = { if i >= self.s1.len() || j >= self.s2.len() { Vec::new() } else if self.s1[i] == self.s2[j] { let mut tmp = vec![self.s1[i]]; tmp.append(&mut self.helper(i + 1, j + 1)); tmp } else { max_by_key(self.helper(i + 1, j), self.helper(i, j + 1), |s| s.len()) } }; self.cache.insert((i, j), value.clone()); return value; } } return Helper { s1, s2, cache: HashMap::new(), } .helper(0, 0); } fn main() { println!( "{}", String::from_utf8(longest_common_subsequence( String::from("abcd").into_bytes(), String::from("adfsfsdfbcd").into_bytes() )) .unwrap() ); } I'm playing around with Rust (only a few days into it) for a toy project in which I had to find the canonical representation for a given term -- I figured longest common subsequence of first few Google search results (titles) might do the trick, so I started implementing it and this is what I have (from a mental translation of how I'd do it in Python). Among other things, it looks.. ugly to me. What can I do better (shorter / more readable e.t.c.)? Answer: Your code looks dense because you've stuffed a lot of stuff into a single function, without any structuring whitespace. But that is merely cosmetic and can be fixed. For example, you might structure the code with some blank lines … fn longest_common_subsequence(s1: Vec<u8>, s2: Vec<u8>) -> Vec<u8> { return Helper { s1, s2, cache: HashMap::new(), } .helper(0, 0); struct Helper { ... } impl Helper { ... } } … or move the helper outside entirely: fn longest_common_subsequence(s1: Vec<u8>, s2: Vec<u8>) -> Vec<u8> { Helper { s1, s2, cache: HashMap::new(), } .helper(0, 0) } struct Helper { ... } impl Helper { ... 
} I've thought about whether a separate Helper struct even makes sense. In principle, you could just pass the values like the cache as arguments to a helper function, but given the pair of vecs s1, s2 that should be kept together, a struct makes sense. Often, the alternative would be to simply use a closure, but we can't directly have recursive closures. A different debate can be had about whether the helper() function should be a method. I'd argue “no” – this is a purely syntactic choice here, and we could instead write the helper function to take a &mut Helper parameter. But that is primarily opinion-based. When I look into the helper() implementation, I note this pattern: if self.cache.contains_key(&(i, j)) { return self.cache[&(i, j)].clone(); } This unnecessarily makes two accesses to the cache. Instead, we can write: if let Some(value) = self.cache.get(&(i, j)) { return value.clone(); } The get() method returns an Option<&_>, which we can pattern-match using if let. Next, we have the pattern if let Some(value) = self.cache.get(&(i, j)) { return value.clone(); } let value = ...; self.cache.insert((i, j), value.clone()); value Again, the get() and insert() accesses do duplicate work. Typically, we can avoid this using the HashMap Entry API. An entry gives access to a slot in the hash map which may be occupied or may be vacant, and we can efficiently insert a value. The inserting function will return a reference to the value in either case, which we can clone: self.cache.entry((i, j)).or_insert_with(|| ... ).clone() which is very elegant except that this doesn't work here: the entry() keeps a &mut reference to the self Helper so that it can insert an item, but the closure also wants to borrow self. For that reason, it makes sense to stick with your general design – but note that this could be avoided by using a different type for the cache, e.g. a Vec instead of a HashMap. A Vec or slice can be split_at() so that different sub-ranges can be borrowed separately.
The problem as a whole is also amenable to using Dynamic Programming instead of memoization: without recursion, we wouldn't have to hold on to the cache for the duration of the recursive calls. This is a common theme with Rust: things seem like they should be easy, but they're not that easy when thinking about the actual ownership or safety issues. For example, I was thinking about returning a &[u8] from the helper() function to avoid unnecessary copies. This mostly works and saves a wasted allocation in the tmp.append(...) statement, but will ultimately have little effect since the max_by_key(...) statement has two calls into helper(). If their return value were to borrow the Helper struct, they would conflict with each other. In this particular case, it's also not possible to break the &mut requirement unless we put the cache into a RefCell and guarantee that the vec contents won't be moved, which we could do with a Pin and a bit of unsafe code. In order to avoid excessive Vec copies, an alternative to references would be to track indexes into an array. This is a common strategy in Rust to avoid lifetime issues. Instead of struct Helper { ... cache: HashMap<(usize, usize), Vec<u8>>, } we might use: struct Helper { ... 
cache: HashMap<(usize, usize), usize>, subseqs: Vec<Vec<u8>>, } Then: fn longest_common_subsequence(s1: Vec<u8>, s2: Vec<u8>) -> Vec<u8> { let mut helper = Helper { s1, s2, cache: HashMap::new(), subseqs: vec![Vec::new()], }; let result_index = helper.cached_value_for(0, 0); helper.subseqs[result_index].clone() } impl Helper { fn cached_value_for(&mut self, i: usize, j: usize) -> usize { if let Some(&value) = self.cache.get(&(i, j)) { return value; } let value = self.computed_value_for(i, j); self.cache.insert((i, j), value); value } fn computed_value_for(&mut self, i: usize, j: usize) -> usize { if i >= self.s1.len() || j >= self.s2.len() { return 0; } if self.s1[i] == self.s2[j] { let next_index = self.cached_value_for(i + 1, j + 1); let mut tmp = vec![self.s1[i]]; tmp.extend_from_slice(&self.subseqs[next_index]); self.subseqs.push(tmp); return self.subseqs.len() - 1; } max_by_key( self.cached_value_for(i + 1, j), self.cached_value_for(i, j + 1), |&s| self.subseqs[s].len(), ) } } The above code also separates the value construction from cache management. If desired, we could use this to eliminate the recursion by priming the cache in the correct pattern: for i in (0..s1.len()).rev() { for j in (0..s2.len()).rev() { cache.insert((i, j), ...); } } A disadvantage of using indices instead of references is that we lose type safety: all those usize values look the same. This can often be addressed with crates like typed-arena.
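For reference, the bottom-up "prime the cache" idea at the end corresponds to the classic dynamic-programming table for LCS. A compact sketch of that table construction, in Python for brevity since the point is the table-filling order rather than Rust specifics (a Rust version would fill a `Vec<Vec<usize>>` in the same reversed order):

```python
def longest_common_subsequence(s1: bytes, s2: bytes) -> bytes:
    n, m = len(s1), len(s2)
    # lcs_len[i][j] = length of the LCS of the suffixes s1[i:] and s2[j:].
    lcs_len = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):          # same order as (0..n).rev()
        for j in range(m - 1, -1, -1):
            if s1[i] == s2[j]:
                lcs_len[i][j] = 1 + lcs_len[i + 1][j + 1]
            else:
                lcs_len[i][j] = max(lcs_len[i + 1][j], lcs_len[i][j + 1])
    # Walk the table once to reconstruct one LCS.
    out, i, j = bytearray(), 0, 0
    while i < n and j < m:
        if s1[i] == s2[j]:
            out.append(s1[i]); i += 1; j += 1
        elif lcs_len[i + 1][j] >= lcs_len[i][j + 1]:
            i += 1
        else:
            j += 1
    return bytes(out)
```

This removes the recursion entirely, so no cache needs to outlive any recursive call, which sidesteps the borrow-checker issues discussed above.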
{ "domain": "codereview.stackexchange", "id": 41780, "tags": "recursion, rust, memoization" }
What happens to the electric field of dipoles and in the electric field of capacitors when we introduce a dielectric in a capacitor?
Question: I am trying to understand the concept of dielectric materials. I know that when we insert a dielectric inside a capacitor held at constant voltage, the electric field of the capacitor does not change, but what happens to the electric field of the dipoles? I found this equation: $$ E_{local} = \frac{1}{3} \ (\varepsilon_{r} + 2) \ E_{ext} $$ https://chat.openai.com/share/88abd4d0-3239-4965-adec-ce3b9dd0db06 Is this right? Why is the local electric field increased, if it has the opposite direction of the capacitor's electric field? Answer: The formula is wrong, or if you wish, not quite right... You can find the correct formula in, for example, Jackson: Classical Electrodynamics (3rd edition), Sect. 4.4, p. 158, Eq. 4.55, according to which the internal electric field intensity $\mathbf E_{in}$ of a sphere of homogeneous dielectric in external field $\mathbf E_{ext}$ is $$ \mathbf E_{in}= \frac{3}{\epsilon_r+2} \mathbf E_{ext}$$ and as you expected $E_{in} < E_{ext}$ because $\epsilon_r >1$ for all real materials. Note that for a sphere $\mathbf E_{in}$ is parallel with $\mathbf E_{ext}$!
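A trivial numerical check of Jackson's sphere result: the reduction factor $3/(\epsilon_r + 2)$ equals 1 in vacuum and falls below 1 for any $\epsilon_r > 1$, so the internal field is always weakened, never enhanced. The permittivity values below are illustrative, not taken from the post:

```python
def internal_field_factor(eps_r: float) -> float:
    """Ratio E_in / E_ext for a homogeneous dielectric sphere
    in a uniform external field (Jackson, Eq. 4.55)."""
    return 3.0 / (eps_r + 2.0)

# Illustrative relative permittivities: vacuum, a typical polymer, water.
for eps_r in (1.0, 2.3, 80.0):
    print(eps_r, internal_field_factor(eps_r))
```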
{ "domain": "physics.stackexchange", "id": 96909, "tags": "electric-fields, capacitance, dielectric" }
Execution time of a linear pipeline
Question: A simple linear pipeline has three stages. The execution times in the stages are 10, 15, and 12 units respectively. If the pipeline is used to process 100 inputs, then the execution time is (a) 1530 units (b) 3700 units (c) 1500 units (d) 1537 units I am not able to understand this question. Can anyone help me understand it, and also the solution? Answer: Every input ("job", "task", whatever) must be processed by all three pipeline stages. The respective execution times for each stage are given; they are the same for each input. If there are no other inputs (imagine an empty system), a single input will leave the pipeline after 10+15+12 time units, right? But if there is more than only one input, you might have to deal with congestion... that's what this question is all about. So, how long does it take until the last of the 100 inputs has left the pipeline? (But I'm not giving out solutions here...)
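Under the standard textbook model of a synchronous linear pipeline, every stage is clocked at the slowest stage's delay τ, so n inputs through k stages take (k + n − 1)·τ; without pipelining, each input occupies the whole path. A sketch of both models (the clocked assumption is the usual one for such exam questions, not something stated in the post; note it charges even a lone input k·τ rather than the sum of stage times):

```python
def pipelined_time(stage_times, n_inputs):
    """Total time for n_inputs through a linear pipeline whose clock
    period is the slowest stage's delay: (k + n - 1) * max(stage_times)."""
    k = len(stage_times)
    return (k + n_inputs - 1) * max(stage_times)

def unpipelined_time(stage_times, n_inputs):
    """For comparison: each input traverses all stages before the next starts."""
    return n_inputs * sum(stage_times)
```

With the stage times from the question, comparing the two models shows how much the overlap ("congestion" handled in lock-step) saves for 100 inputs.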
{ "domain": "cs.stackexchange", "id": 9009, "tags": "cpu-pipelines" }
Tetris clone in C
Question: I just finished writing an ncurses-based Tetris clone in C. It's only my second project of this size, the first being a Snake clone. I would really appreciate any and all suggestions/improvements as I'm relatively new to C and I would like to improve. As this is a 300-plus-line codebase, anything I can add to this question to more easily facilitate its review, I will. main.c: #include "game.h" #include "ncurses_util.h" #include <ncurses.h> #include <unistd.h> int main(void) { ncurses_init(); curs_set(0); // Invisible cursor draw_game_board(); int ch; enum Direction dir; while ((ch = getch())) { switch (ch) { case KEY_DOWN: dir = Down; break; case KEY_UP: dir = Up; break; case KEY_LEFT: dir = Left; break; case KEY_RIGHT: dir = Right; break; default: dir = None; break; } update_game(dir); usleep(20000); // 50FPS } } game.h: #ifndef TETRIS_GAME_H #define TETRIS_GAME_H #define BOARD_WIDTH 10 #define BOARD_HEIGHT 12 enum Direction { Up, Down, Left, Right, None }; struct Coords { int y; int x; }; void draw_game_board(); void update_game(enum Direction press); #endif /* TETRIS_GAME_H */ game.c: #include "game.h" #include "ncurses_util.h" #include <ncurses.h> #include <stdbool.h> #include <stdlib.h> #include <string.h> enum Piece { T, S, Z, L, J, O, I }; const struct Coords pieces[7][4] = { { { 1, 4 }, { 2, 5 }, { 1, 6 }, { 1, 5 } }, // T { { 2, 5 }, { 1, 6 }, { 2, 4 }, { 1, 5 } }, // S { { 1, 4 }, { 2, 6 }, { 2, 5 }, { 1, 5 } }, // Z { { 1, 6 }, { 2, 4 }, { 2, 6 }, { 2, 5 } }, // L { { 1, 4 }, { 2, 4 }, { 2, 6 }, { 2, 5 } }, // J { { 1, 5 }, { 1, 6 }, { 2, 5 }, { 2, 6 } }, // O { { 1, 4 }, { 1, 5 }, { 1, 6 }, { 1, 7 } }, // I }; int tetris_check() { static char *fullLine; static char *emptyLine; if (!fullLine) { fullLine = malloc(sizeof(char) * (BOARD_WIDTH + 1)); emptyLine = malloc(sizeof(char) * (BOARD_WIDTH + 1)); for (int i = 0; i < BOARD_WIDTH; i++) { fullLine[i] = '#'; emptyLine[i] = ' '; } fullLine[BOARD_WIDTH] = '\0'; emptyLine[BOARD_WIDTH] = '\0'; } int 
tetrises = 0; char *line = malloc(sizeof(char) * (BOARD_WIDTH + 1)); for (int i = 1; i <= BOARD_HEIGHT; i++) { mvinnstr(i, 1, line, BOARD_WIDTH); if (strcmp(line, fullLine) == 0) { mvaddstr(i, 1, emptyLine); tetrises++; } } int dropAmount = 0; for (int i = BOARD_HEIGHT + 1; i >= 1; i--) { mvinnstr(i, 1, line, BOARD_WIDTH); if (strcmp(line, emptyLine) == 0) { dropAmount++; } else { mvaddstr(i + dropAmount, 1, line); if (dropAmount) { mvaddstr(i, 1, emptyLine); } } } return tetrises; } void rotate_piece(struct Coords *originalPiece, enum Piece pieceType) { if (pieceType == O) return; // No point in rotating an O struct Coords piece[4]; memcpy(piece, originalPiece, sizeof(struct Coords) * 4); if (pieceType == I) { if (piece[0].x == piece[1].x) { // Horizontal for (int i = 0; i < 4; i++) { if (i == 1) continue; piece[i].x = piece[1].x + (piece[i].y - piece[1].y); piece[i].y = piece[1].y; } } else { // Vertical for (int i = 0; i < 4; i++) { if (i == 1) continue; piece[i].y = piece[1].y + (piece[i].x - piece[1].x); piece[i].x = piece[1].x; } } } else { // The last coordinate pair always corresponds to the "center" of a non-I/O piece for (int i = 0; i < 3; i++) { if (piece[i].x == piece[3].x && piece[i].y == piece[3].y - 1) { piece[i].x += 1; piece[i].y += 1; } else if (piece[i].x == piece[3].x && piece[i].y == piece[3].y + 1) { piece[i].x -= 1; piece[i].y -= 1; } else if (piece[i].x == piece[3].x - 1 && piece[i].y == piece[3].y) { piece[i].x += 1; piece[i].y -= 1; } else if (piece[i].x == piece[3].x + 1 && piece[i].y == piece[3].y) { piece[i].x -= 1; piece[i].y += 1; } else if (piece[i].x == piece[3].x - 1 && piece[i].y == piece[3].y - 1) { piece[i].x += 2; } else if (piece[i].x == piece[3].x + 1 && piece[i].y == piece[3].y - 1) { piece[i].y += 2; } else if (piece[i].x == piece[3].x + 1 && piece[i].y == piece[3].y + 1) { piece[i].x -= 2; } else if (piece[i].x == piece[3].x - 1 && piece[i].y == piece[3].y + 1) { piece[i].y -= 2; } } } for (int i = 0; i < 4; i++) { for 
(int j = 0; j < 4; j++) { if (piece[i].x == originalPiece[j].x && piece[i].y == originalPiece[j].y) { goto ROTATE_PIECE_OUTER_CONTINUE; } } chtype ch = mvinch(piece[i].y, piece[i].x); if (ch == '#' || ch == L'│' || ch == L'▁' || ch == L'▔') return; ROTATE_PIECE_OUTER_CONTINUE: continue; } for (int i = 0; i < 4; i++) { mvaddch(originalPiece[i].y, originalPiece[i].x, ' '); } for (int i = 0; i < 4; i++) { mvaddch(piece[i].y, piece[i].x, '#'); } refresh(); memcpy(originalPiece, piece, sizeof(struct Coords) * 4); } bool update_piece_on_screen(struct Coords *piece, enum Direction dir) { switch (dir) { case Down: for (int i = 0; i < 4; i++) { for (int j = 0; j < 4; j++) { if (i == j) continue; if (piece[i].x == piece[j].x && piece[i].y == piece[j].y - 1) goto UPDATE_PIECE_DOWN_CONTINUE; } chtype adjCh = mvinch(piece[i].y + 1, piece[i].x); if (adjCh == '#' || adjCh == L'▔') return false; UPDATE_PIECE_DOWN_CONTINUE: continue; } for (int i = 0; i < 4; i++) { mvaddch(piece[i].y, piece[i].x, ' '); piece[i].y++; } for (int i = 0; i < 4; i++) { mvaddch(piece[i].y, piece[i].x, '#'); } break; case Left: for (int i = 0; i < 4; i++) { for (int j = 0; j < 4; j++) { if (i == j) continue; if (piece[i].y == piece[j].y && piece[i].x == piece[j].x + 1) goto UPDATE_PIECE_LEFT_CONTINUE; } chtype adjCh = mvinch(piece[i].y, piece[i].x - 1); if (adjCh == '#' || adjCh == L'│') return false; UPDATE_PIECE_LEFT_CONTINUE: continue; } for (int i = 0; i < 4; i++) { mvaddch(piece[i].y, piece[i].x, ' '); piece[i].x--; } for (int i = 0; i < 4; i++) { mvaddch(piece[i].y, piece[i].x, '#'); } break; case Right: for (int i = 0; i < 4; i++) { for (int j = 0; j < 4; j++) { if (i == j) continue; if (piece[i].y == piece[j].y && piece[i].x == piece[j].x - 1) goto UPDATE_PIECE_RIGHT_CONTINUE; } chtype adjCh = mvinch(piece[i].y, piece[i].x + 1); if (adjCh == '#' || adjCh == L'│') return false; UPDATE_PIECE_RIGHT_CONTINUE: continue; } for (int i = 0; i < 4; i++) { mvaddch(piece[i].y, piece[i].x, ' '); piece[i].x++; 
} for (int i = 0; i < 4; i++) { mvaddch(piece[i].y, piece[i].x, '#'); } break; default: break; } return true; } int add_new_piece() { int piece = 6; // rand() % 7; for (int i = 0; i < 4; i++) { if (mvinch(pieces[piece][i].y, pieces[piece][i].x) == '#') return -1; mvaddch(pieces[piece][i].y, pieces[piece][i].x, '#'); } return piece; } void draw_game_board() { // mvaddstr is used in place of mvaddch for wide characters because macOS's ncurses is broken mvaddch(0, 0, ' '); for (int x = 1; x <= BOARD_WIDTH; x++) { mvaddstr(0, x, "▁"); } mvaddch(0, BOARD_WIDTH + 1, ' '); for (int y = 1; y <= BOARD_HEIGHT; y++) { mvaddstr(y, 0, "│"); for (int x = 1; x <= BOARD_WIDTH; x++) { mvaddch(y, x, ' '); } mvaddstr(y, BOARD_WIDTH + 1, "│"); } mvaddch(BOARD_HEIGHT + 1, 0, ' '); for (int x = 1; x <= BOARD_WIDTH; x++) { mvaddstr(BOARD_HEIGHT + 1, x, "▔"); } mvaddch(BOARD_HEIGHT + 1, BOARD_WIDTH + 1, ' '); refresh(); } void update_game(enum Direction press) { static int downTimer = 0; static struct Coords *piece; static enum Piece pieceType; if (!piece) { piece = malloc(sizeof(struct Coords *) * 4); srand(time(NULL)); rand(); pieceType = add_new_piece(); memcpy(piece, pieces[pieceType], sizeof(struct Coords) * 4); } switch (press) { case Up: rotate_piece(piece, pieceType); break; case Left: case Right: update_piece_on_screen(piece, press); break; case Down: goto MOVE_DOWN; case None: break; } downTimer++; if (downTimer >= 50) { MOVE_DOWN: downTimer = 0; if (!update_piece_on_screen(piece, Down)) { tetris_check(); pieceType = add_new_piece(); if (pieceType == -1) { // Game over ncurses_deinit(); printf("%s\n", "Game over"); exit(0); } memcpy(piece, pieces[pieceType], sizeof(struct Coords) * 4); for (int i = 0; i < 4; i++) { mvaddch(piece[i].y, piece[i].x, '#'); } } } refresh(); } ncurses_util.h: #ifndef TETRIS_NCURSES_UTIL_H #define TETRIS_NCURSES_UTIL_H void ncurses_init(); void ncurses_deinit(); void ncurses_should_exit(int signalValue); #endif /* TETRIS_NCURSES_UTIL_H */ 
ncurses_util.c: #include "ncurses_util.h" #include <locale.h> #include <ncurses.h> #include <signal.h> #include <stdlib.h> void ncurses_init() { // Default cleanup for ^C, etc. signal(SIGINT, ncurses_should_exit); signal(SIGTERM, ncurses_should_exit); signal(SIGQUIT, ncurses_should_exit); setlocale(LC_ALL, ""); initscr(); start_color(); use_default_colors(); cbreak(); nodelay(stdscr, TRUE); keypad(stdscr, TRUE); noecho(); } void ncurses_deinit() { endwin(); } void ncurses_should_exit(int signalValue) { ncurses_deinit(); exit(128 + signalValue); } Some of the aspects I'm most interested in: I use goto in several places to escape nested loops. Is this use acceptable, or is there a cleaner way to do this? Are there any (common) mistakes that I've made? My comments are relatively sparse; is this OK or should I comment more thoroughly? Is my code generally readable/understandable? For this code to properly function on macOS, you must compile it with -D_XOPEN_SOURCE_EXTENDED -lncurses. On Linux, you must compile it with -lncursesw. This is due to its usage of wide characters. Answer: Assignment-in-expression Whereas this - while ((ch = getch())) is a "common" paradigm in C, that doesn't mean it's a great idea. You're better-served by splitting it into separate assignment and condition-check statements. Helpful constants It's good that you wrote a comment here: usleep(20000); // 50FPS it would be even nicer if you had constants that derive this programmatically, i.e. #define FRAME_RATE 50 #define FRAME_TIME_US (1000000 / FRAME_RATE) or equivalent const declarations. typedefs It's generally more usable to add utility typedefs to your enum and struct declarations so that you can (for example) drop the enum in enum Direction dir; State management The way you manage state is interesting. The state of the game is (from what I can tell) entirely represented in your display term buffer.
That's not crazy, but it's a measure toward a level of efficiency that in this context is really not necessary, and harms your application in other categories. What if you wanted to play a "headless" game? What if you wanted to run unit tests in the absence of a real terminal? What if you have a real terminal, but you want to quickly page between different game instances? Your current implementation makes all of these very difficult. Consider prying apart your application layer from your presentation layer. Rather than representing the game state as display character data, represent it as a "logical" two-dimensional array of booleans, or maybe bitfields if you want to be fancy. Also, eliminate statics such as static char *fullLine; static char *emptyLine; with prejudice. They will introduce surprising side-effects for anyone attempting to use your code in a re-entrant manner.
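The "logical board" idea can be sketched without ncurses at all. A hypothetical Python sketch (BOARD_WIDTH/BOARD_HEIGHT mirror the C defines; the function names are made up for illustration) of the line-clear step, operating purely on state:

```python
BOARD_WIDTH, BOARD_HEIGHT = 10, 12

def new_board():
    """Empty board: a row-major grid of booleans, True = occupied cell."""
    return [[False] * BOARD_WIDTH for _ in range(BOARD_HEIGHT)]

def clear_full_lines(board):
    """Remove full rows, pad with empty rows on top; return rows cleared.

    This is pure state manipulation -- no terminal access -- so it can be
    unit-tested headlessly, which is the review's point.
    """
    kept = [row for row in board if not all(row)]
    cleared = len(board) - len(kept)
    board[:] = [[False] * BOARD_WIDTH for _ in range(cleared)] + kept
    return cleared
```

A rendering layer would then translate this grid into '#' characters (or anything else) in one place, instead of the game logic reading cells back out of the screen buffer with mvinch().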
{ "domain": "codereview.stackexchange", "id": 40048, "tags": "c, game, tetris, curses" }
Physical reasons for why systems are chaotic?
Question: Are there any reasons why a system would exhibit chaotic behavior? Or is this something only found through numerical modelling or experimental testing? For example, the simple forced, damped pendulum or the duffing oscillator. Were these experimented on and it was found that they were sensitive to initial conditions, and then examined further to prove the 3 chaotic properties and finally deemed to be chaotic? Or is there something physical about them that gives away a possibility to chaos? If it is the former, how would we determine chaotic systems? Just trial and error until all 3 properties are proven? Answer: There is a bit to disentangle here, so this may seem like a somewhat roundabout answer, but we will get to your questions eventually: First of all, chaos is a property of deterministic and bounded dynamical systems, which in turn are mathematical constructs. The general mechanisms of chaos in such systems are generally well understood, but are nothing that I can explain within the scope of this answer. Moreover, for such systems, chaos is the norm, more specifically the non-chaotic regions of parameter space become increasingly small with increasing complexity of the system. So, in some sense the interesting question is rather why some (mathematical model) systems are not chaotic and why we can derive valuable insights from those. There are some necessary ingredients required for chaos¹, but these can be found in every real system. All definitions of chaos employ criteria that are inherently mathematical, in particular employing infinitesimal or infinite quantities. As such, they cannot be applied to natural systems. The one aspect of chaos that most definitions agree upon is the sensitivity to initial conditions, which can be determined for natural (and simulated) systems in good approximation (see this answer of mine for some details). 
Other properties like topological transitivity and dense periodic orbits are not reasonably accessible in nature at all. Rigorously proving chaos in nature is as impossible as proving whether the ratio of the proton and electron mass is a rational number. With that as a background, let’s turn to your specific questions: Are there any reasons why a system would exhibit chaotic behavior? As elaborated above, chaos is more or less the default. Rather, there are reasons why chaos may be absent in a model. Or is this something only found through numerical modelling or experimental testing? For some mathematical models, chaoticity according to some definition was rigorously proven. For most model systems, it is shown numerically (and not rigorously). For natural systems, you can only show the sensitivity to initial conditions experimentally, either by reproducing initial conditions and observing their divergence (example by me) or analysing data from the system. Alternatively, it may satisfy you to create a model for the system that explains the relevant observed behaviour and show that this model is chaotic. After all, showing chaos for its own sake is pretty pointless in nature, as you will always find chaos if you look closely enough, and the question is rather how to best describe the relevant properties of the system. Were these […] examined […] to prove the 3 chaotic properties and finally deemed to be chaotic? I am not exactly sure what three properties you are referring to, but as elaborated above only sensitivity to initial conditions is usually experimentally accessible. ¹ three dynamical variables (in continuous-time systems, which real physical systems are), non-linearities
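As a concrete illustration of sensitivity to initial conditions (the one criterion called experimentally accessible above), here is the textbook logistic map x ↦ r·x·(1−x) at r = 4 in Python; this map is a standard chaotic example, not one of the systems named in the question:

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two initial conditions differing by 1e-10 end up macroscopically apart:
# at r = 4 the separation roughly doubles per step (Lyapunov exponent ln 2).
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)
separation = [abs(x - y) for x, y in zip(a, b)]
```

Plotting (or just inspecting) `separation` shows near-exponential growth until it saturates at order-one values, which is what "sensitivity to initial conditions" means in practice.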
{ "domain": "physics.stackexchange", "id": 84762, "tags": "chaos-theory" }
My Blackjack Game in Python
Question: I am a beginner programmer (just recently picked it up). I just completed my first ever project: a Blackjack game in Python. I would like to know your opinions about it and how it can be improved! Thanks in advance. This code is original and I made it myself without any external assistance.

import random
import os
import time

def clear():
    os.system('cls' if os.name == 'nt' else 'clear')

def addPlayerCard(playerCards, cards):
    playerCards.append(random.choice(cards))

def addDealerCard(dealerCards, cards):
    dealerCards.append(random.choice(cards))

def addSum(hand):
    totalsum = 0
    hand2 = []
    for card in hand:
        if(card == 'A'):
            hand2.append(card)
            hand2.remove(card)
        else:
            hand2.append(card)
    for cards in hand2:
        if(cards == 'A'):
            hand2.append('A')
            hand2.remove('A')
        if(cards == 'J' or cards == 'K' or cards == 'Q'):
            totalsum += 10
        elif(cards == 'A'):
            if(totalsum + 11 <= 21):
                totalsum += 11
            else:
                totalsum += 1
        else:
            totalsum += cards
    return totalsum

def showDealerHand(dealerCards):
    print("Dealer's hand: [X,", str(dealerCards[1]) + "]")

def showPlayerHand(playerCards):
    print("Player's hand: [" + str(playerCards[0]) + ", " + str(playerCards[1]) + "]")

def checkBlackjack(cards):
    flag1 = 0
    flag2 = 0
    if(cards[0] == 'A' or cards[1] == 'A'):
        flag1 += 1
    if(cards[0] == 'J' or cards[1] == 'J' or cards[0] == 'K' or cards[1] == 'K' or cards[0] == 'Q' or cards[1] == 'Q'):
        flag2 += 1
    if(flag1 + flag2 == 2):
        return True
    else:
        return False

def handCount(hand):
    handCount = len(hand)
    return handCount

def hitOrStand(playerCards, dealerCards, cards, fiveHandRule):
    i = 0
    while(i != 2):
        hitstand = str(input("\nDo you want to hit or stand?\n")).lower()
        if(hitstand[0] == "h"):
            addPlayerCard(playerCards, cards)
            flag1 = addSum(playerCards)
            flag2 = handCount(playerCards)
            clear()
            print("Dealer's hand: [X,", str(dealerCards[1]) + "]")
            print("Player's hand: " + str(playerCards))
            if(flag1 > 21):
                print("\nYOU BUSTED!")
                time.sleep(2)
                return True
            elif(flag2 >= 5 and flag1 <= 21):
                fiveHandRule.append("PASS")
                return True
        if(hitstand[0] == "s"):
            return True

def dealerCheck(dealerSum, dealerCards, cards, playerCards):
    if(dealerSum <= 16):
        print("Dealer's hand: " + str(dealerCards))
        print("Player's hand: " + str(playerCards) + "\n")
        print("_________________________________________________\n")
        print("The Dealer's Cards' Total Sum do not exceed 16.")
        print("The Dealer Hits!")
        addDealerCard(dealerCards, cards)
        dealerSum = addSum(dealerCards)
        time.sleep(2)
    return dealerSum

def blackjackCard(playerCards, dealerCards):
    print("You Got a Blackjack!\n")
    time.sleep(1)
    print("Revealing Dealer's Hand...")
    time.sleep(1)
    print("Dealer's hand: " + str(dealerCards))
    print("Player's hand: " + str(playerCards) + "\n")
    if(checkBlackjack(dealerCards)):
        print("The Dealer Also Has a Blackjack!\n")
        return 8
    else:
        print("The Dealer Does Not Have a Blackjack.\n")
        return 9

def winLose(pSum, dSum, pHandCount):
    if(pSum > dSum and pSum <= 21):
        return 1
    elif(dSum > pSum and dSum <= 21):
        return 2
    elif(pHandCount >= 5 and pSum <= 21):
        return 3
    elif(dSum == pSum and pSum <= 21 and dSum <= 21):
        return 4
    elif(pSum > 21 and dSum <= 21):
        return 5
    elif(dSum > 21 and pSum <= 21):
        return 6
    elif(pSum > 21 and dSum > 21):
        return 7

def gameEnd(result):
    if(result == 1):
        print("\nCongratulations, You Won! Your Cards' Total Sum Is Higher Than The Dealer's.")
        return 1
    elif(result == 2):
        print("\nYou Lost! The Dealer's Cards' Total Sum Is Higher Than Your's.")
        return 2
    elif(result == 3):
        print("\nCongratulations, You Won! You Have a 5 Card Hand Without Busting, Winning by 5-Card Charlie.")
        return 1
    elif(result == 4):
        print("\nIt's a Push! Your Cards' Total Sum Is The Same As The Dealer's!")
        return 3
    elif(result == 5):
        print("\nYou Lost! You Busted but The Dealer Did Not. The Total Sum of Your Cards Exceeded 21.")
        return 2
    elif(result == 6):
        print("\nCongratulations, You Won! The Dealer Busted but You Did Not. The Total Sum of The Dealer's Cards Exceeded 21.")
        return 1
    elif(result == 7):
        print("\nIt's a Push! Both Parties Busted. Both Parties' Total Sum of Their Respective Cards Exceeded 21.")
        return 3
    elif(result == 8):
        print("\nIt's a Push! Both Parties Have a Blackjack!")
        return 3
    elif(result == 9):
        print("\nCongratulations, You Won! You Have a Blackjack but The Dealer Does Not!")
        return 1

def gameEnd2(result):
    if(result == 1 or result == 3 or result == 6 or result == 9):
        return 1
    if(result == 2 or result == 5):
        return 2
    if(result == 4 or result == 7 or result == 8):
        return 3

def Blackjack():
    clear()
    cards = ['A', 'J', 'K', 'Q', 2, 3, 4, 5, 6, 7, 8, 9, 10]
    dealerCards = []
    playerCards = []
    addPlayerCard(playerCards, cards)
    addPlayerCard(playerCards, cards)
    addDealerCard(dealerCards, cards)
    addDealerCard(dealerCards, cards)
    showDealerHand(dealerCards)
    showPlayerHand(playerCards)
    checkBlackjack(playerCards)
    checkBlackjack(dealerCards)
    blackjack = False
    if(checkBlackjack(playerCards)):
        blackjack = True
    fiveHandRule = []
    if(blackjack == False):
        while(hitOrStand(playerCards, dealerCards, cards, fiveHandRule) != True):
            hitOrStand(playerCards, dealerCards, cards, fiveHandRule)
        if(len(fiveHandRule) <= 0):
            clear()
            print("Revealing Dealer's Hand...")
            time.sleep(1)
            dSum = addSum(dealerCards)
            if(dSum <= 16):
                while(dSum <= 16):
                    dealerCheck(dSum, dealerCards, cards, playerCards)
                    dSum = addSum(dealerCards)
            if(dSum > 16):
                print("Dealer's hand: " + str(dealerCards))
                print("Player's hand: " + str(playerCards))
            if(dSum > 21):
                print("\nTHE DEALER BUSTED!")
                time.sleep(2)
            pSum = addSum(playerCards)
            dSum = addSum(dealerCards)
            pHandCount = handCount(playerCards)
            winLose(pSum, dSum, pHandCount)
            result = winLose(pSum, dSum, pHandCount)
            gameEnd(result)
        else:
            print("Five-Hand Charlie!")
            time.sleep(2)
            result = 3
            gameEnd(result)
    else:
        result = blackjackCard(playerCards, dealerCards)
        gameEnd(result)
    input("Input any key to continue.\n")
    return result

def endgame(result, gamelog):
    clear()
    print("The Match Has Been Concluded.")
    flag = gameEnd2(result)
    if(flag == 1):
        print("Result: WIN")
        gamelog.append('W')
    elif(flag == 2):
        print("Result: LOSS")
        gamelog.append('L')
    elif(flag == 3):
        print("Result: TIE")
        gamelog.append('T')
    input("Input any key to continue.\n")
    clear()

def lobby(gamelog):
    totalgames = len(gamelog)
    totalwins = 0
    totallosses = 0
    totalties = 0
    for game in gamelog:
        if(game == 'W'):
            totalwins += 1
        elif(game == 'L'):
            totallosses += 1
        elif(game == 'T'):
            totalties += 1
    winrate = (2 * totalwins + totalties) / (2 * totalgames) * 100
    winrate = round(winrate, 2)
    print("Your Stats")
    print("_________________________________________________\n")
    print("TOTAL GAMES: " + str(totalgames))
    print("WINS: " + str(totalwins))
    print("LOSSES: " + str(totallosses))
    print("TIES/PUSHES: " + str(totalties) + "\n")
    print("YOUR WINRATE: " + str(winrate) + "%")
    print("_________________________________________________\n")

gamelog = []
print("Welcome to Darrance's Blackjack. Have fun!")
time.sleep(2)

def main():
    result = Blackjack()
    endgame(result, gamelog)
    lobby(gamelog)
    option = input("Would you like to play again? (Y/N)\n").lower()
    if(option[0] == 'y'):
        main()
    elif(option[0] == 'n'):
        clear()
        print("Thank you for playing Darrance's Blackjack!")
        input("Input any key to exit program.\n")
        clear()

main()

Answer: I felt the code was well written! Pretty modular, cleanly written, with very verbose print statements and well-established interactive gameplay. :D

Overall

For a longer game, it might be a bit more readable to write the same code using a Blackjack class or something similar. It allows you to extend the code to handle multiple ongoing games easily, save and load games, and basically treat each game as a single object on which you call methods to interact with the game. It won't be very difficult to convert most of what you've written into a class format.

Important

I do not understand the need for hand2.append('A') followed immediately by hand2.remove('A').
Since you begin hand2 as an empty list, these two lines cancel each other out: you will never have an 'A' in hand2, and the if block does nothing. Moreover, in the next for loop you are editing hand2 while iterating over it. It is a bad idea to change a mutable list while iterating over it. Since hand2 never contains 'A' in your case you don't run into trouble here, but editing mutable objects while iterating over them can lead to quite problematic bugs. Here is a replacement which works equivalently to the code you have. Does this really perform a valid calculation of the hand sum?

def addSum(hand):
    totalsum = 0
    hand2 = []
    for card in hand:
        if card != 'A':
            hand2.append(card)
    for cards in hand2:
        if(cards == 'J' or cards == 'K' or cards == 'Q'):
            totalsum += 10
        else:
            totalsum += cards
    return totalsum

Game

I think the above point might break the core-game logic. Apart from that, I am not sure if the remaining code works as you want it to for a valid Blackjack game; I don't know Blackjack well enough to check the core-game logic. However, a couple of suggestions: since it is more accurate to play Blackjack with a full deck (13 ranks in 4 suits), you can have the cards list contain all 52 cards, and with every addPlayerCard or addDealerCard remove the chosen card. This keeps the selection probabilities more accurate than in the current implementation, where each rank is always equally likely. On removing an 'A', for example, only 3 more 'A's remain in the deck while the other ranks still have 4 each, making 'A' slightly less probable. Further enhancements could include the possibility of splitting your hand, different rules for the dealer to hit, etc.

Code shortening

You can shorten some if conditions, making the code a bit more compact and readable. Here are three examples.
Also, != True is the same as == False, which can be written as shown below. The same shortening can be applied in gameEnd2.

# if(cards[0] == 'A' or cards[1] == 'A')
if 'A' in cards[:2]:

# if(cards[0] == 'J' or cards[1] == 'J' or cards[0] == 'K' or cards[1] == 'K' or cards[0] == 'Q' or cards[1] == 'Q')
if cards[0] in ('J', 'Q', 'K') or cards[1] in ('J', 'Q', 'K'):

# if(blackjack == False)
if not blackjack:

(Note: use tuples rather than the string 'JQK' for the membership test, since a hand can also contain integers and 2 in 'JQK' would raise a TypeError.)

handCount is not needed, since you can use len wherever you use handCount. Furthermore, inside the handCount function you define a variable and call it handCount. Such shadowing can give longer programs difficult-to-debug bugs; it is a good idea not to give variables names which overlap with already-used names, especially in the same scope.

There are two calls to checkBlackjack(playerCards) inside the Blackjack function which do not store the returned value anywhere and have no effect. You can remove them. Similarly, there is a call to winLose which does nothing, right above the useful call result = winLose(...) which does.

Instead of the for loop to count the number of wins, losses and ties, you can use list.count('W') with 'W', 'L', 'T'.

Instead of the blackjack code block, you can directly use checkBlackjack where needed.

# blackjack = False
# if(checkBlackjack(playerCards)):
#     blackjack = True
#
# fiveHandRule = []
# if(blackjack == False):

fiveHandRule = []
if not checkBlackjack(playerCards):

Modularity (and further shortening :D)

Since you already have showDealerHand and showPlayerHand defined, you should use them in the other functions instead of re-writing the print calls that display cards. You can rewrite the two functions so that they work even when the player's hand has more than two cards.

Misc

In hitOrStand, the i variable is never changed, and hence you will never get out of the while loop until you reach a return True statement. You can remove i completely and keep while True.
And in the Blackjack function where you call hitOrStand, you can replace the while loop with a single call to hitOrStand, since it returns nothing but True and always returns True when the function ends.

# i = 0
# while(i != 2):
while True:

# while(hitOrStand(playerCards, dealerCards, cards, fiveHandRule) != True):
#     hitOrStand(playerCards, dealerCards, cards, fiveHandRule)
hitOrStand(playerCards, dealerCards, cards, fiveHandRule)

Instead of calling main() recursively to continue playing, you can use a single while loop to achieve the same result. While there is nothing wrong in your implementation, you keep creating recursive calls and scopes inside the function without needing to. In more complicated code this might run into space issues or reach the maximum recursion depth allowed by Python. (Glossing over the possibility of tail recursion, since I am unsure how well the Python interpreter optimizes tail calls.) Here is one alternative:

# result = Blackjack()
# endgame(result, gamelog)
# lobby(gamelog)
# option = input("Would you like to play again? (Y/N)\n").lower()
# if(option[0] == 'y'):
#     main()
# elif(option[0] == 'n'):
#     clear()
#     print("Thank you for playing Darrance's Blackjack!")
#     input("Input any key to exit program.\n")
#     clear()

def main():
    result = Blackjack()
    endgame(result, gamelog)
    lobby(gamelog)
    while input("Would you like to play again? (Y/N)\n").lower()[0] == 'y':
        result = Blackjack()
        endgame(result, gamelog)
        lobby(gamelog)
    clear()
    print("Thank you for playing Darrance's Blackjack!")
    input("Input any key to exit program.\n")
    clear()

Minor comments

len cannot be negative, hence len(x) <= 0 is the same as len(x) == 0.

In checkBlackjack, instead of the flags becoming 1 and then checking whether their sum equals 2, it might be more readable (i.e. your intent would be clearer) to use flag as a boolean.
See below:

flag = True
if 'A' not in cards[:2]:
    flag = False
if not (cards[0] in ('J', 'Q', 'K') or cards[1] in ('J', 'Q', 'K')):
    flag = False
return flag

input returns a string already, so you do not need to do str(input()).

For prettier printing, you can use the .format method.
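One more note on the hand sum: as the review hints, neither the original addSum nor the equivalent replacement ever counts an ace, because aces are filtered out of hand2 before the scoring loop. Here is a sketch of an ace-aware version (standard blackjack valuation assumed; this is not code from the original post):

```python
def add_sum(hand):
    """Blackjack hand value: J/Q/K count 10, aces count 11 or 1."""
    total = sum(10 if c in ('J', 'Q', 'K') else 11 if c == 'A' else c
                for c in hand)
    aces = hand.count('A')
    while total > 21 and aces:  # demote aces from 11 to 1 while the hand busts
        total -= 10
        aces -= 1
    return total

print(add_sum(['A', 'K']))       # 21
print(add_sum(['A', 'A', 9]))    # 21
print(add_sum(['A', 'K', 'Q']))  # 21
```

Each ace starts at 11 and is demoted to 1 only while the hand would otherwise bust, which is exactly the branching the original elif tried (and failed) to reach.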
{ "domain": "codereview.stackexchange", "id": 41420, "tags": "python, beginner" }
Reading lines of integers into a list of lists
Question: I have a text file containing integers on different lines which I want to read into a list of lists. So

01
11 14
45 50 09

should become ((1) (11 14) (45 50 9)). The real file is longer, so I want to read it in using CL instead of transforming it with Emacs macros etc. It is important that every line becomes exactly one list and the data is stored as integers, i.e. 09 becomes 9. I ended up with the following (using the external library split-sequence):

(defun read-data (&optional (file "data.txt"))
  "Returns the numerical data in FILE as a list of lists."
  (with-open-file (data file)
    (loop :for line = (read-line data nil nil)
          :while line
          :collect (map 'list #'parse-integer (split-sequence #\Space line)))))

which is great because it does the job, I needed only two minutes for it, and I knew exactly what I wanted to do. Nevertheless, I found the chain of read-line, split-sequence and map a bit too much for this rather simple job. Is there a simpler way just using read which I am missing?

Answer: A simple possibility could be to enclose each row between parentheses and use read-from-string (assuming that the file does not contain incorrect data):

(defun read-data (&optional (file "data.txt"))
  "Returns the numerical data in FILE as a list of lists."
  (with-open-file (data file)
    (let (*read-eval*) ; set *read-eval* to NIL to avoid malicious code in the input file
      (loop :for line = (read-line data nil nil)
            :while line
            :collect (read-from-string (concatenate 'string "(" line ")"))))))
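For comparison only (not a replacement for the Common Lisp answer), the same line-by-line split-and-parse idea in Python likewise turns a leading-zero token like 09 into the integer 9:

```python
def parse_lines(lines):
    """One list per line; each whitespace-separated token becomes an int."""
    return [[int(tok) for tok in line.split()] for line in lines if line.strip()]

def read_data(path="data.txt"):
    with open(path) as f:
        return parse_lines(f)

# Demo without touching the filesystem:
parsed = parse_lines(["01", "11 14", "45 50 09"])
print(parsed)  # [[1], [11, 14], [45, 50, 9]]
```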
{ "domain": "codereview.stackexchange", "id": 28842, "tags": "strings, file, integer, common-lisp" }
Selection rules in neutron scattering
Question: Why doesn't neutron scattering have selection rules? According to Wikipedia, it seems that the value of the transition moment integral will be non-zero, but I guess another way of phrasing my question is, What does the transition moment integral for neutron scattering represent? Answer: Selection rules in atomic transitions (including the virtual transitions which characterize Raman scattering) occur because the incident photon is changing the internal state of the atom, and must do so in ways which conserve energy, angular momentum, parity, and perhaps some other quantum numbers. Neutron scattering is generally done with milli-eV neutrons, but the lowest excitations in nuclei typically have mega-eV energies above the ground state. So a scattered neutron typically can’t drive any excitations in the target nucleus, and therefore doesn’t care about any of the low-lying nuclear quantum numbers. To the extent that there are quantized states, they are in the virtual compound nucleus $(\text{target}+\text n)$ whose decay towards its ground state is the mechanism for neutron capture. However, most nuclei have an extremely dense set of very broad, short-lived states that overlap with their neutron separation energy, so you don’t end up with selection rules there, either. If you need some quantum numbers in a compound nucleus, there’s almost certainly an available state which has them. If your neutrons are energetic enough to scatter from nuclear resonances, their wavelengths are probably too short to be interesting for materials science. And there are selection rules for some capture reactions, such as the $\rm^3He(n,p)^3H$ reaction whose spin dependence is important for neutron polarimetry. But that’s nucleon scattering, which is much more exciting than neutron scattering.
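The energy/wavelength mismatch in the answer is easy to check numerically with the non-relativistic de Broglie formula λ = h/√(2mE) (a rough sketch; constants rounded to four figures): a thermal ~25 meV neutron has a wavelength of a few ångströms, comparable to atomic spacings, while an MeV neutron's wavelength is down at nuclear scales.

```python
import math

h   = 6.626e-34   # Planck constant, J s
m_n = 1.675e-27   # neutron mass, kg
eV  = 1.602e-19   # joules per electronvolt

def de_broglie(E_eV):
    """Non-relativistic neutron de Broglie wavelength (m) for kinetic energy in eV."""
    return h / math.sqrt(2 * m_n * E_eV * eV)

print(de_broglie(25e-3))  # thermal neutron: ~1.8e-10 m (angstrom scale)
print(de_broglie(1e6))    # MeV neutron:     ~2.9e-14 m (nuclear scale)
```

(The non-relativistic formula is still a fair approximation at 1 MeV, since the neutron rest energy is about 939 MeV.)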
{ "domain": "physics.stackexchange", "id": 85671, "tags": "quantum-mechanics, condensed-matter, scattering, spectroscopy, neutrons" }
How could Deep Blue possibly cheat?
Question: On Wikipedia we can read: Kasparov accused IBM of cheating and demanded a rematch. IBM refused and retired Deep Blue. What was the accusation, and how was Deep Blue allegedly able to cheat? Answer: The allegation was based on the fact that Deep Blue made a choice that did not yield the immediate (or short-term) benefit characteristic of chess programs back then (1997). Computational capability was significantly less powerful at the time, and Kasparov claimed that only a grandmaster would have made the decision that the system did, so he alleged that the Deep Blue team had cheated by having a human perform the move instead of the machine. He asked for a rematch, but IBM did not allow this, which only added to the suspicion. This is a great article with deep analysis of the specific moves and circumstances; suffice it to say that Kasparov was trying to bait the system into taking a weak pawn, but the system chose otherwise and instead put Kasparov into a compromised position: https://en.chessbase.com/post/deep-blue-s-cheating-move
{ "domain": "ai.stackexchange", "id": 63, "tags": "history, chess, deep-blue" }
$sp^3$ Hybridization wavefunctions and probability density
Question: I have plotted a hydrogen-like sp3-hybrid orbital probability density and it looks like this: I can plot 4 overlapping probability densities in a tetrahedral shape: So far it looks OK. But when I'm trying to sum the orbitals (wavefunctions) and square the resulting function to get the probability density, I always come up with a sphere: So I have the following questions: 1.) Is this an expected result? If not, what am I doing wrong? 2.) What is usually depicted on the sp3-hybridization schemes: overlapping wavefunctions, sums of the wavefunctions, probability density or something else? Answer: The $sp^3$ orbitals are a mix of an $s$ orbital and all three $p$ orbitals and form four equivalent hybrids. Due to orthogonality, it can be proven that the orbitals have the following form: $$|h_1\rangle=\frac{1}{2}\left(|s\rangle+|p_x\rangle+|p_y\rangle+|p_z\rangle\right)\\ |h_2\rangle=\frac{1}{2}\left(|s\rangle+|p_x\rangle-|p_y\rangle-|p_z\rangle\right)\\ |h_3\rangle=\frac{1}{2}\left(|s\rangle-|p_x\rangle+|p_y\rangle-|p_z\rangle\right)\\ |h_4\rangle=\frac{1}{2}\left(|s\rangle-|p_x\rangle-|p_y\rangle+|p_z\rangle\right)$$ The addition of all four hybrid orbitals is, wait for it, $2|s\rangle$: a pure $|s\rangle$ orbital, up to normalisation! So yes, the summed-and-squared wavefunction should give a spherical density. Source: Electronic Structure of Materials, A. Sutton
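A quick numerical sanity check of the final claim, treating $|s\rangle, |p_x\rangle, |p_y\rangle, |p_z\rangle$ as orthonormal basis vectors: all $p$ components cancel in the sum of the four hybrids, leaving a pure (unnormalised) $s$ component.

```python
import numpy as np

# Coefficient vectors in the basis (|s>, |px>, |py>, |pz>)
s, px, py, pz = np.eye(4)

h1 = 0.5 * (s + px + py + pz)
h2 = 0.5 * (s + px - py - pz)
h3 = 0.5 * (s - px + py - pz)
h4 = 0.5 * (s - px - py + pz)

total = h1 + h2 + h3 + h4
print(total)  # [2. 0. 0. 0.]  -> purely |s>, up to normalisation

# The four hybrids are also mutually orthonormal:
H = np.stack([h1, h2, h3, h4])
print(np.allclose(H @ H.T, np.eye(4)))  # True
```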
{ "domain": "physics.stackexchange", "id": 56112, "tags": "quantum-mechanics, condensed-matter, wavefunction, orbitals, quantum-chemistry" }
Numerical Solution of least action principle
Question: I am trying to numerically find the path of least action between two points (ignoring the time step normalizing factor): $$S=\sum_i (x_{i+1}-x_{i})^2/2-V(x_i)$$ I don't have the potential in explicit form, but only in a look-up table. The initial $x_0$ and final point $x_N$ are fixed. For finding the path of least action, all that needs to be done is to minimize wrt to all the intermediate points $x_1, x_2, ... x_{N-1}$. If one does this, then this can be shown to reduce to a linear system of equations, where the matrix is in tridiagonal form: $$\begin{bmatrix} -2 & 1 & 0 & 0 &... &0\\ 1 & -2 & 1 & 0 & ... &0\\ 0 & 1 & -2 & 1 & ... &0\\ ...\\ 0 & ... & & 1 & -2 &1\\ 0 & ... & & 0 & 1 &-2\\ \end{bmatrix} \begin{bmatrix} x_1\\ x_2\\ x_3\\ ...\\ x_{N-2}\\ x_{N-1}\\ \end{bmatrix}= \begin{bmatrix} -V'(x_1)-x_0\\ -V'(x_2)\\ -V'(x_3)\\ ...\\ -V'(x_{N-2})\\ -V'(x_{N-1})-x_N\\ \end{bmatrix}$$ I have checked my notes in "Computational Physics". This problem is very similar to solving Poisson's equation in 1D. However the function are the coordinates themselves and the Potential depends on the coordinates. I am not sure how to solve this system of equations. Answer: $$\underbrace{\begin{bmatrix} -2 & 1 & 0 & 0 &... &0\\ 1 & -2 & 1 & 0 & ... &0\\ 0 & 1 & -2 & 1 & ... &0\\ ...\\ 0 & ... & & 1 & -2 &1\\ 0 & ... 
& & 0 & 1 &-2\\ \end{bmatrix}}_{\mathbf A} \underbrace{\begin{bmatrix} x_1\\ x_2\\ x_3\\ ...\\ x_{N-2}\\ x_{N-1}\\ \end{bmatrix}}_{\mathbf x}=- \underbrace{\begin{bmatrix} V'(x_1)+x_0\\ V'(x_2)\\ V'(x_3)\\ ...\\ V'(x_{N-2})\\ V'(x_{N-1})+x_N\\ \end{bmatrix}}_{\mathbf b}$$ You have to solve the vector equation $$\mathbf f(\mathbf x)=\mathbf A\,\mathbf x+\mathbf b(\mathbf x)=\mathbf 0.\tag 1$$ The numerical solution is the Newton iteration $$\mathbf x_{n+1}=\mathbf x_n-\left[\mathbf F(\mathbf x)^{-1}\,\mathbf f(\mathbf x)\right]_{\mathbf x=\mathbf x_n},\quad n=0,1,\ldots$$ where $$\mathbf F=\frac{\partial \mathbf f}{\partial \mathbf x}=\mathbf A+\frac{\partial \mathbf b}{\partial \mathbf x}$$ is the Jacobian, $\mathbf x_0$ is the initial guess (the "start solution"), and $n$ counts the iteration steps. You stop the iteration once $|f_i(\mathbf x_n)|\le \epsilon$ for every component $i$; the solution is then $\mathbf x_n$.
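A minimal NumPy sketch of this Newton-type iteration, using a hypothetical quadratic potential $V(x)=kx^2/2$ so that $V'$ and $V''$ are available in closed form (with a tabulated potential you would instead interpolate $V'$ and $V''$ from the look-up table):

```python
import numpy as np

N = 10                     # time steps; x_1 .. x_{N-1} are the unknowns
x_start, x_end = 0.0, 1.0  # fixed endpoints x_0 and x_N
k = 0.5                    # assumed potential V(x) = k x^2 / 2
Vp  = lambda x: k * x                  # V'(x)
Vpp = lambda x: k * np.ones_like(x)    # V''(x)

n = N - 1
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # tridiagonal matrix

def f(x):
    """f(x) = A x + b(x); vanishes on the stationary path."""
    b = Vp(x)
    b[0]  += x_start   # boundary terms from the fixed endpoints
    b[-1] += x_end
    return A @ x + b

def F(x):
    """Jacobian: F = A + db/dx = A + diag(V''(x_i))."""
    return A + np.diag(Vpp(x))

x = np.linspace(x_start, x_end, N + 1)[1:-1]  # initial guess: straight line
for _ in range(50):
    x = x - np.linalg.solve(F(x), f(x))       # Newton step
    if np.max(np.abs(f(x))) < 1e-12:
        break

print(np.max(np.abs(f(x))))  # residual near machine precision
```

For this quadratic potential $\mathbf f$ is affine, so a single Newton step lands on the solution; a genuinely nonlinear tabulated potential would need several iterations and a sensible start solution.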
{ "domain": "physics.stackexchange", "id": 86936, "tags": "lagrangian-formalism, computational-physics, variational-principle, action" }
Is the Solar core hard?
Question: This may seem like a weird question, but something got me thinking about it just recently. The Sun's core is composed of mainly hydrogen and helium, and is present in the form of a extremely hot supercrushed plasma. The Sun's core is mind-bogglingly dense, about 150,000 kg/m3 (about 15x denser than lead, 7x denser than uranium, 6x denser than osmium). The density can get extremely high at the center of stars. This leads me to think that the solar core, due to the immense amount of atoms packed together would behave like a extremely hard solid, as per my understanding, most dense metals (excluding gold) are extremely hard, like tungsten. I decided to dig into it a bit on the Internet, but whatever information I got were merely about the pressure at the center of the Sun, and not about the hardness of the solar core. By hardness, I mean having stiffness/rigidity, an ability to retain a certain shape when subjected to anisotropic stress. To clarify things a bit: Supposing we submerged an "indestructible" observer really deep into the Sun, just inside the solar core. Supposing we got the indestructible observer to throw a punch randomly inside the Sun, what would this observer feel? More specifically, would the observer perceive the solar core material as being extremely hard, like a solid, or would it act like an extremely viscous fluid? TL;DR Would the solar core be extremely stiff and hard? Or would it simply behave like a dense and viscous gas? Answer: The solar core can be considered soft in a relative sense (compared to other materials at the same density), but hard and incompressible in an absolute sense. The material behaves almost exactly like a perfect gas but would be as viscous as ketchup. The equation of state is that of a perfect monatomic gas and thus the pressure $P \propto \rho^{\alpha}$, with $\alpha \sim 4/3$ in the solar core, where heat transport is dominated by radiative diffusion. 
This is a "soft equation of state" - the material is highly compressible - it takes a small fractional increase in pressure to produce a compression. For most solids, $\alpha$ would be in double figures and they are approximately incompressible. However, what you are asking about could be represented by the "bulk modulus" (Young's modulus and Shear modulus are not meaningfully defined for a fluid). This is roughly equal to the pressure of a gas and is a measure of how much force in an absolute sense is required to change the volume of something. At the centre of the Sun, this is $2\times 10^{16}$ Pa. This can be compared with the bulk modulus of diamond which is $4\times 10^{11}$ Pa. Thus in that sense, the solar core is much harder than a diamond to compress. In terms of viscosity, the microscopic kinematic viscosity in the solar core is of order $10^{-4}$ m$^2$/s (Ruediger & Kitchatinov 1996) and hence a dynamic viscosity of 15 Pa s. For comparison, the kinematic viscosity of water at 293K is $10^{-6}$ m$^2$/s and the dynamic viscosity is $10^{-3}$ Pa s. Thus the solar interior fluid is 100-10000 times more viscous than water, depending on how viscosity is defined. Fluids of comparable viscosity would be honey or ketchup. The viscosity of solids (like rocks) meanwhile, is of order $\sim 10^{20}$ Pa s. To understand why the Sun behaves like a perfect gas, one must compare the interaction energies between the particles with their kinetic energies. At a density of 150000 kg/m$^3$ the mean separation of protons and electrons is $\sim 2\times 10^{-11}$ m, with a mutual Coulomb energy of about 100 eV. The kinetic energy of the particles is $3k_BT/2$, and with temperature $T \sim 1.5\times 10^7$ K in the solar core, this is about 1000 eV. Thus the kinetic energies are much greater than the Coulombic binding energies and so the particles behave like a gas. 
To "freeze" into a solid you would need the binding energy to be about 100 times the kinetic energy, which would require much higher densities at that temperature.
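The two energy scales quoted in the answer are easy to reproduce with a back-of-envelope script (a rough pure-hydrogen sketch with rounded constants): the thermal kinetic energy comes out tens of times larger than the Coulomb energy at the mean particle separation, which is why the core behaves like a gas.

```python
k_B = 1.381e-23   # Boltzmann constant, J/K
e   = 1.602e-19   # elementary charge, C (also J per eV)
m_p = 1.673e-27   # proton mass, kg
k_e = 8.988e9     # Coulomb constant, N m^2 / C^2

rho, T = 1.5e5, 1.5e7        # solar-core density (kg/m^3) and temperature (K)

d = (m_p / rho) ** (1 / 3)   # mean proton separation, ~2e-11 m
E_coulomb = k_e * e**2 / d / e   # ~65 eV  (the answer's "about 100 eV")
E_kinetic = 1.5 * k_B * T / e    # ~1900 eV (the answer's "about 1000 eV")

print(d, E_coulomb, E_kinetic, E_kinetic / E_coulomb)
```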
{ "domain": "astronomy.stackexchange", "id": 6773, "tags": "the-sun, core, plasma-physics" }
Expense Sharing Calculation
Question: The following is my solution to the problem presented on this CodeReview post:

Friends A, B, C, D go for a trip. They spend on various expenses. Cost of the expense is shared. Example:
A spends 100 for breakfast for A, B, C and D
D spends 500 for cab for B and C
B spends 300 for lunch for A, B and C
Write a program to calculate how much each should get or each should give to one another. The app should be scalable so that the number of friends can change.

Design: We need two data sets, which respectively record transactions and tally the amount due to each person. Simply strip off the info we need from the transactions and record either how much each person is owed, or how much each person should give. Any input is welcome!

using System;
using System.Collections.Generic;

namespace CodeReview
{
    class Transaction
    {
        class Account
        {
            private int totalDebt = 0;
            private Dictionary<char, int> debtors = new Dictionary<char, int>();

            public Account(char initialDebtor, int initialDebt)
            {
                debtors.Add(initialDebtor, initialDebt);
            }

            public Dictionary<char, int> Debtors
            {
                get { return debtors; }
            }

            public int TotalDebt
            {
                get
                {
                    foreach (char debtor in debtors.Keys)
                    {
                        totalDebt += debtors[debtor];
                    }
                    return totalDebt;
                }
            }
        }

        static void Main(string[] args)
        {
            Queue<Tuple<char, int, List<char>>> transactions = new Queue<Tuple<char, int, List<char>>>();
            Dictionary<char, Account> ledger = new Dictionary<char, Account>();

            transactions.Enqueue(Tuple.Create('A', 100, new List<char>() { 'A', 'B', 'C', 'D' }));
            transactions.Enqueue(Tuple.Create('D', 500, new List<char>() { 'B', 'C' }));
            transactions.Enqueue(Tuple.Create('B', 300, new List<char>() { 'A', 'B', 'C' }));

            while (transactions.Count > 0)
            {
                Tuple<char, int, List<char>> transaction = transactions.Dequeue();
                foreach (char beneficiary in transaction.Item3)
                {
                    if (!beneficiary.Equals(transaction.Item1))
                    {
                        if (ledger.ContainsKey(beneficiary))
                        {
                            Dictionary<char, int> debtors = ledger[beneficiary].Debtors;
                            if (debtors.ContainsKey(transaction.Item1))
                            {
                                debtors[transaction.Item1] += transaction.Item2;
                            }
                            else
                            {
                                debtors.Add(transaction.Item1, transaction.Item2);
                            }
                        }
                        else
                        {
                            ledger.Add(beneficiary, new Account(transaction.Item1, transaction.Item2));
                        }
                    }
                }
            }

            foreach (char beneficiary in ledger.Keys)
            {
                Account account = ledger[beneficiary];
                Console.WriteLine("User " + beneficiary + " owes $" + account.TotalDebt + ":");
                foreach (char debtor in account.Debtors.Keys)
                {
                    int debt = account.Debtors[debtor];
                    Console.WriteLine(" - $" + debt + " to " + debtor);
                }
            }
            Console.ReadKey();
        }
    }
}

Answer: The main issue I can see is the excessive use of generic data types like tuples and dictionaries instead of domain classes. That may be a matter of taste, but I prefer meaningfully named domain classes like your Account class.

In the Account class, the dictionary is exposed as a public member. IMHO internal data structures should be encapsulated. The class should be extended with appropriate methods to manipulate those data structures. That would automatically move code like

if (debtors.ContainsKey(transaction.Item1))
{
    debtors[transaction.Item1] += transaction.Item2;
}
else
{
    debtors.Add(transaction.Item1, transaction.Item2);
}

into the domain class. Imagine you have to add items somewhere else: that kind of code must be duplicated each time.

Instead of

if (!beneficiary.Equals(transaction.Item1)) { ... }

you could use

if (beneficiary.Equals(transaction.Item1)) { continue; }
....

to reduce nesting. The instance variable debtors should be read-only.

Another small point: I would use strings to store the users, because you can store names and you do not run into problems if the group becomes greater than 26 persons :P - of course, that is not relevant for the algorithm.

General Approach

Your approach is to build up two dictionaries (the ledger and the debtors within the Account class). The data structure of nested dictionaries matches the requirements exactly; if the requirements change, you have to restructure your whole code. It is better to transform the data into a simple, generally usable data structure and implement your algorithm on top of that. Such a simple data structure may be a list of Transactions:

public class Transaction
{
    public Transaction(string payer, string payee, int amount)
    {
        Payer = payer;
        Amount = amount;
        Payee = payee;
    }

    public string Payer { get; }
    public int Amount { get; }
    public string Payee { get; }
}

The algorithms could be implemented in a class like Bank:

public class Bank
{
    private readonly List<Transaction> myTransactions = new List<Transaction>();
    private readonly HashSet<string> myPayers = new HashSet<string>();
    private readonly HashSet<string> myPayees = new HashSet<string>();

    public void AddTransaction(string payer, string payee, int amount)
    {
        myTransactions.Add(new Transaction(payer, payee, amount));
        myPayers.Add(payer);
        myPayees.Add(payee);
    }

    public void PrintPayees()
    {
        foreach (var payee in myPayees)
        {
            var trans = myTransactions.Where(t => t.Payee == payee);
            Console.WriteLine($"User {payee} owes ${trans.Sum(t => t.Amount)} total:");
            foreach (var groupedTransaction in trans.GroupBy(t => t.Payer))
                Console.WriteLine($"- ${groupedTransaction.Sum(t => t.Amount)} from {groupedTransaction.Key}");
        }
    }

    public void PrintPayer()
    {
        foreach (var payer in myPayers)
        {
            var trans = myTransactions.Where(t => t.Payer == payer);
            Console.WriteLine($"User {payer} gets ${trans.Sum(t => t.Amount)} total:");
            foreach (var groupedTransaction in trans.GroupBy(t => t.Payee))
                Console.WriteLine($"- ${groupedTransaction.Sum(t => t.Amount)} to {groupedTransaction.Key}");
        }
    }
}

Usage based on the list of tuples:

void Main()
{
    var bank = new Bank();
    foreach (var transaction in new[]
    {
        Tuple.Create("A", 100, new [] { "A", "B", "C", "D" }),
        Tuple.Create("D", 500, new [] { "B", "C" }),
        Tuple.Create("B", 300, new [] { "A", "B", "C" })
    })
    {
        var payer = transaction.Item1;
        var amount = transaction.Item2;
        foreach (var payee in transaction.Item3)
            if (payee != payer)
                bank.AddTransaction(payer, payee, amount);
    }
    bank.PrintPayees();
    bank.PrintPayer();
}
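Independent of the C# design discussion, the stated end goal ("how much each should get or give") boils down to a net balance per person: what they paid in minus their share of what was spent on them. A compact illustrative sketch (not a translation of the reviewed code):

```python
from collections import defaultdict

def net_balances(transactions):
    """transactions: (payer, amount, beneficiaries). Positive = should receive."""
    balance = defaultdict(float)
    for payer, amount, beneficiaries in transactions:
        balance[payer] += amount            # payer fronted the full amount
        share = amount / len(beneficiaries)
        for person in beneficiaries:
            balance[person] -= share        # each beneficiary owes their share
    return dict(balance)

trips = [
    ('A', 100, ['A', 'B', 'C', 'D']),  # breakfast
    ('D', 500, ['B', 'C']),            # cab
    ('B', 300, ['A', 'B', 'C']),       # lunch
]
print(net_balances(trips))
# D should receive 475, C should give 375, etc.; the balances sum to zero.
```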
{ "domain": "codereview.stackexchange", "id": 29077, "tags": "c#, interview-questions" }
Why is the partial derivative of strain energy function with respect to strain equal to stress
Question: In Elasticity, we have a strain energy function, $W$, that is a function of the strain tensor, $E$. Then the Cauchy stress tensor, $T$, can be determined by: $$T_{ij}=\frac{\partial W}{\partial E_{ij}} \tag{$\star$}$$ First question: does this equation hold for all elastic bodies, or just linear elasticity? Second, I am having trouble finding the intuition behind this relation. Could someone please help explain why the partial derivative of the strain energy with respect to strain components gives the stress components? My attempt at deriving ($\star$): The strain energy is the energy stored in a body due to deformation. And since we are considering elastic bodies, it's equal to the work done in order to deform it. In 1D, for a beam of cross-sectional area $A$ being stretched by a length of $u_0$ we can write an integral for the work as: $$\mathrm{Work} = \int_{0}^{u_0}\sigma(u)\,A(u)\,du$$ where $\sigma$ is just the stress normal to the cross-sectional area in this 1D case. So, I can see how the stress tensor definitely plays a role in the strain energy, but I just can't figure out how to properly derive ($\star$). Can someone walk me through the derivation? Answer: 1. Yes, the relation $$\mathrm{stress}=d(\mathrm{strain\,energy\,density})/d(\mathrm{strain})$$ holds for all elastic bodies, not just linearly elastic bodies. This equation implies that all differential work goes into elastic strain energy, which holds even for nonlinearly elastic materials (e.g., hyperelastic materials). However, the equation wouldn't apply to plastic deformation, for example, in which substantial amounts of work are converted to heat and expended through the formation of crystal defects. 2. Regarding the intuition behind this equation, we can say that any way to add energy to a system involves two parameters (called thermodynamic conjugate variables): a generalized force and a generalized displacement.
The first term is intensive; i.e., if you doubled the system size, then the generalized force would stay the same. The second term is extensive; if you doubled the system size, then this term would also double. The simplest example of a generalized force and displacement is an actual force $F$ and displacement $x$ and the familiar equations $w=\boldsymbol{F\cdot x}$ and $dw=F\,dx$ for the work $w$. Another example is the pressure $P$ and volume $V$: $dw=-P\,dV$, with the minus sign appearing because pressure is compressive. Note how a gradient in pressure, the intensive variable, drives a shift in volume, the extensive variable. This effect is common for all of these pairs, whose units invariably multiply to give units of energy. (This framework applies even to heating: the system energy $U$ increases with $T\,dS$, where gradients in temperature $T$ drive shifts in the entropy $S$. Here again, the units multiply to give units of energy.) Yet another example of a conjugate pair is the stress and strain. Well actually, this isn't entirely true. If you look at the units, you'll see that the product of stress and strain has units of volumetric energy. So we can work with the elastic strain energy density or what you call above the strain energy function $W$, or we can work in terms of energy by multiplying by the volume, as in the fundamental relation for a first-order closed system under a general mechanical load: $dU=T\,dS+\boldsymbol{\bar{\sigma}} V\,d\boldsymbol{\bar{\epsilon}}$, where $\boldsymbol{\bar{\sigma}}$ and $\boldsymbol{\bar{\epsilon}}$ are the stress and strain tensors, respectively. (If the load is pressure, or equitriaxial compressive stress, then we recover the familiar $dU=T\,dS-P\,dV$.) 3. 
As for deriving your starred equation, I checked Nye's Physical Properties of Crystals and Ugural & Fenster's Advanced Strength and Applied Elasticity, and they proceed as you do: define the increase in strain energy from a uniaxial load applied to a differential element and then build up to the complete 3D case. For an isotropic material (which obeys generalized Hooke's Law), for example, Ugural & Fenster obtain a strain energy density of $$W=\frac{1}{2E}\left(\sigma_{x}^2+\sigma_{y}^2+\sigma_{z}^2\right)-\frac{\nu}{2E}\left(\sigma_{x}\sigma_y+\sigma_{y}\sigma_z+\sigma_{x}\sigma_z\right)+\frac{1}{2G}\left(\tau_{xy}^2+\tau_{yz}^2+\tau_{xz}^2\right).$$
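As a numerical illustration of the starred relation (not from the original answer; the Young's modulus and strain values below are made-up 1D numbers), here is a quick Python check that differentiating $W$ with respect to strain recovers the Hooke's-law stress:

```python
# Toy 1D check that stress = dW/d(strain) for a linear elastic bar.
# E (Young's modulus, Pa) and the strain value are illustrative numbers.

def strain_energy_density(strain, E=200e9):
    """Strain energy density W = (1/2) * E * strain^2 for 1D Hooke's law."""
    return 0.5 * E * strain**2

def stress_hooke(strain, E=200e9):
    """Direct Hooke's law stress, sigma = E * strain."""
    return E * strain

def stress_from_energy(strain, E=200e9, h=1e-9):
    """Central-difference approximation of dW/d(strain)."""
    return (strain_energy_density(strain + h, E)
            - strain_energy_density(strain - h, E)) / (2 * h)

eps = 1e-3  # 0.1% strain
print(stress_hooke(eps))        # ~2e8 Pa from Hooke's law
print(stress_from_energy(eps))  # ~2e8 Pa from differentiating W
```

For the quadratic $W$ of linear elasticity the central difference is exact up to round-off, but the derivative definition itself is what carries over to nonlinear elastic laws.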
{ "domain": "physics.stackexchange", "id": 75280, "tags": "elasticity, continuum-mechanics, stress-strain, solid-mechanics" }
Ackermann Conundrum
Question: Source: https://en.wikipedia.org/wiki/Ackermann_steering_geometry The image above shows a vehicle with Ackermann steering. The front wheels are connected by a mechanism so that they turn at different angles such that the projected axles of the wheels meet at the same point, the center of rotation. Source: https://www.google.com/patents/US6634109 This image shows a case where all four wheels do not have the same center of rotation. If the Ackermann condition is not satisfied, as shown in the figure for a car taking a turn, then what would be the equivalent centre of rotation for the vehicle? Answer: This is quite easy: (1) choose one of the wheels; (2) build a second wheel so it would satisfy the Ackermann condition; (3) get the first rotation point (O1); (4) do the same with the second wheel, and get a second rotation point (O1'). The actual rotation point can be anywhere between these two points. If you choose, for instance, the first point as the rotation point, it means that the first wheel has no issue, but the second wheel therefore has additional damping on its movement. To find the real rotation center you could compute it including the damping on each wheel, but in a driving situation equal damping is quite unlikely; moreover, it can change as a function of the direction. On top of this, the damping changes as a function of the vehicle dynamics. So you will most likely end up with an irrelevant model.
{ "domain": "engineering.stackexchange", "id": 1423, "tags": "automotive-engineering, dynamics" }
Understanding KL Stopping and KL Cutoff for the PPO algorithm
Question: I am reading a couple of review papers to optimize the PPO algorithm. It seems like the review papers are saying the same thing but used slightly different terms. Could someone please tell if the following terms are equivalent - This paper talks about Policy regularization using KL Divergence Whereas this paper uses the terms KL Stopping and KL Cutoff - I think "Penalty" from the first paper is the same as "KL-cutoff". Also "Constraint" from the first paper is the same as "KL-Stopping". Could someone let me know if I am correct? Answer: In my opinion both "Penalty" and "Constraint" have somewhat similar ideas to "KL-Cutoff", but "KL-Stopping" is entirely different. The idea of policy regularization is to add some form of regularization to your loss function, much like adding L2-norm regularization in supervised learning. So for ppo you have: ppo_loss = -torch.mean(torch.min(rho * adv, clip(rho, 1-eps, 1+eps) * adv)) loss = ppo_loss + alpha * reg Now, the first paper discusses different types of regularization, e.g. entropy regularization or KL-divergence regularization. And also explains how to go about choosing the parameter alpha: You could just treat it as a hyperparameter, just like you would treat the learning rate. (Penalty) Or, you could optimize this parameter using a more sophisticated procedure. (Constraint) You can add these regularization techniques to any policy gradient algorithm, e.g. vanilla policy gradient, actor-critic, or ppo. KL-Cutoff and KL-Stopping from the second paper actually address a different problem. Using the ppo objective you are allowed to perform many policy updates before throwing away your data and performing new rollouts. But if you update too many times then your training might be unstable. So after every update you check the KL-divergence between the old policy (behavioral policy) and the current policy. 
If the KL-divergence reaches a given threshold, then you should do something: KL-Stopping says to stop updating your policy, throw away the data, and roll out the new policy to collect new data. KL-Cutoff suggests something very similar to the idea of policy regularization: continue updating your policy, but add the term $\pm \alpha(D_{KL} - D_{thr})^2$ to the loss function. As far as I understand the authors suggest a fixed parameter alpha, which is the same as "Penalty" from the first paper. However, your regularizer is not the KL-divergence, but how much you are violating the threshold. In my opinion these techniques are complementary. If you want to see this implemented in code I will add here a link to one of my repos where I implemented ppo: here I use policy regularization with entropy, here I use KL-stopping.
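The distinction described above can be sketched in a few lines of toy Python (no real RL here; the threshold, the penalty coefficient, and the fake KL growth per update are all made-up numbers):

```python
# Toy illustration of KL-Stopping vs KL-Cutoff. After each policy update we
# measure the KL divergence between the behavior policy and the current
# policy; the "update" below is faked as a linearly growing KL.

D_THR = 0.01   # KL threshold (illustrative)
ALPHA = 50.0   # cutoff penalty coefficient (illustrative)

def kl_after_update(epoch):
    return 0.003 * epoch  # pretend KL grows with each update

# KL-Stopping: abort the update loop once KL exceeds the threshold.
epochs_run = 0
for epoch in range(1, 11):
    kl = kl_after_update(epoch)
    if kl > D_THR:
        break  # throw away the data, roll out the new policy
    epochs_run += 1

# KL-Cutoff: keep updating, but penalize threshold violations in the loss.
def cutoff_penalty(kl):
    return ALPHA * (kl - D_THR) ** 2 if kl > D_THR else 0.0

print(epochs_run)            # 3 updates complete before KL exceeds 0.01
print(cutoff_penalty(0.02))  # ~0.005 added to the loss for a violation
```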
{ "domain": "ai.stackexchange", "id": 3972, "tags": "reinforcement-learning, deep-rl, proximal-policy-optimization, on-policy-methods" }
Why do birds' legs not get frostbite?
Question: In a nature documentary I watched a while ago, there was a scene where a flock of flamingos slept in a lake that froze overnight. Each morning they just had to wait until the lake defrosted sufficiently to release them so they could get on with their day. How do they not get tissue damage etc doing this? Answer: Birds have a few adaptations and behaviors to keep their legs warm in cold weather: Blood vessels (veins and arteries) run very close together or are intertwined: The arteries and veins intertwine in the legs, so heat can be transferred from arteries back to veins before reaching the feet. Such a mechanism is called countercurrent exchange. Gulls can open a shunt between these vessels, turning back the bloodstream above the foot, and constrict the vessels in the foot. This reduces heat loss by more than 90 percent. (Wikipedia) Birds have specialized scales on their feet and legs which act as an insulator. Birds often alternately tuck their legs into their body feathers to minimize heat loss. (Audubon) Other survival mechanisms such as shivering and torpor come into play with extreme cold. Other sources: Mother Nature Network. I don't have my texts handy, but can highly recommend Manual of Ornithology: Avian Structure and Function by Noble S. Proctor and Patrick J. Lynch.
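The ">90 percent" figure quoted from Wikipedia can be illustrated with a toy effectiveness model of countercurrent exchange (a minimal sketch; the temperatures and the linear heat-loss assumption are illustrative, not taken from the answer's sources):

```python
# Minimal effectiveness model of countercurrent heat exchange in a bird's
# leg. Heat lost through the foot is assumed proportional to the difference
# between the temperature of arterial blood entering the foot and the
# environment; an exchanger of effectiveness eps pre-cools that blood by
# transferring heat to the returning venous blood. All values illustrative.

def foot_heat_loss(T_core, T_env, effectiveness, k=1.0):
    """Relative heat loss k * (T_inlet - T_env), where the arterial blood
    enters the foot at T_core - eps * (T_core - T_env)."""
    T_inlet = T_core - effectiveness * (T_core - T_env)
    return k * (T_inlet - T_env)

loss_none = foot_heat_loss(40.0, 0.0, 0.0)  # no exchanger
loss_ex = foot_heat_loss(40.0, 0.0, 0.9)    # 90% effective exchanger
print(loss_ex / loss_none)  # ~0.1, i.e. roughly a 90% reduction
```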
{ "domain": "biology.stackexchange", "id": 10628, "tags": "ornithology, behaviour" }
Isothermal compression without a heat reservoir
Question: I have devised a method to isothermally compress a gas without the use of a heat reservoir. Consider a container of gas. To compress the gas normally, one would simply move one of the walls of the container inwards, which will do work on the gas when the gas particles collide with the moving wall, increasing its temperature. However, consider this. Whenever I move the side of the container, I do it when none of the particles are touching that wall, and then I move it to right next to the nearest particle. Thus, none of the particles collide when the wall is moving. I can continue doing this until I achieve the volume I want to compress to. This doesn't violate the ideal gas law as the pressure still increases due to the increased frequency of collisions, but the temperature of the gas should remain constant because there is no work done on the gas! Thus, I have achieved an isothermal compression of the gas without the use of a heat reservoir. Is this method valid? What are the implications? If it's invalid, why? Answer: As pointed out in the comments, this is just Maxwell's demon in disguise. Why? Because this is a reversed isothermal free expansion. Suppose we have an ideal gas trapped in an adiabatic container of volume $V$, but all of the gas is compressed by a piston in half of the volume ($V/2$). Say the temperature of the gas is $T$, and the pressure is $P$. If the piston is suddenly removed, the gas quickly expands and occupies the whole volume $V$. Since the gas did no work and exchanged no heat, its energy didn't change at all and its temperature remains at $T$. We may use the ideal gas law to find that the final pressure will just be $P/2$. Note that while expanding, no molecules hit the piston, because it was removed instantly. The method you are proposing for a "non-reservoir isothermal" is exactly this free expansion, but reversed in time.
If we have the knowledge of where all particles are and where they are going, we may push the piston little by little without hitting any of them. In the end, we did no work (because no particles hit the piston), the gas is still in temperature $T$ and can be contained back in the $V/2$ volume, for example. In other words: if we have the knowledge of every particle's position and momentum, we may reverse an irreversible process (free expansion) without exchanging heat with a reservoir and doing no work, therefore causing the gas' entropy to decrease. Sounds a lot like Maxwell's demon, does it not?
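The entropy bookkeeping behind the Maxwell's-demon objection can be made explicit. A small Python sketch (per mole of ideal gas, using the standard result $\Delta S = nR\ln(V_2/V_1)$ for an isothermal volume change):

```python
import math

# Entropy change for the answer's scenario, per mole of ideal gas.
# Free expansion V/2 -> V at constant T raises the gas entropy by R*ln(2);
# the proposed "no-collision" compression would have to remove exactly that
# entropy with no heat flow and no work, which the second law forbids
# (absent a demon with perfect knowledge of the microstate).

R = 8.314  # gas constant, J/(mol K)

def delta_S_isothermal(V_initial, V_final, n=1.0):
    return n * R * math.log(V_final / V_initial)

dS_expand = delta_S_isothermal(0.5, 1.0)    # free expansion, V/2 -> V
dS_compress = delta_S_isothermal(1.0, 0.5)  # the proposed reversal
print(dS_expand)    # +R ln 2, about +5.76 J/K
print(dS_compress)  # -R ln 2, about -5.76 J/K
```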
{ "domain": "physics.stackexchange", "id": 56293, "tags": "thermodynamics, temperature, work, ideal-gas" }
Password Validation Class
Question: I created a class designed to validate passwords based on user-defined criteria. Each PasswordValidator object accepts a minimum and maximum password length, a character blacklist, as well as customizable Uppercase, Lowercase, and digit requirements. I am well aware that I could have used regex but I wanted to avoid using it for extra practice with classes. Here is an example object I created as well as the class. var passwordValidator = new PasswordValidator { MinLength = 6, MaxLength = 13, UpperRequirement = 1, LowerRequirement = 1, DigitRequirment = 1, CharBlacklist = new char[] { 'T', '&' } }; while (true) { var input = Console.ReadLine(); if (input == "exit") { break; } else { Console.WriteLine(passwordValidator.CheckPassword(input)); } } class PasswordValidator { public uint MinLength { get; init; } public uint MaxLength { get; init; } public uint UpperRequirement { get; init; } public uint LowerRequirement { get; init; } public uint DigitRequirment { get; init; } public char[]?
CharBlacklist { get; init; } public bool CheckPassword(string password) { bool hasBlacklist = CharBlacklist != null; bool hasMaxLength = MaxLength != 0; // Length Checking if (password.Length < MinLength || (hasMaxLength && password.Length > MaxLength)) { return false; } uint upperCount = 0; uint lowerCount = 0; uint digitCount = 0; foreach (char passwordChar in password) { if (hasBlacklist && !AllowedChar(passwordChar)) { return false; } if (char.IsUpper(passwordChar)) { upperCount++; } else if (char.IsLower(passwordChar)) { lowerCount++; } else if (char.IsDigit(passwordChar)) { digitCount++; } } if (upperCount < UpperRequirement || lowerCount < LowerRequirement || digitCount < DigitRequirment) { return false; } else { return true; } } private bool AllowedChar(char passwordChar) { foreach (var blacklistChar in CharBlacklist) { if (passwordChar == blacklistChar) { return false; } } return true; } } All types of feedback are appreciated but here are some questions I have about how I could improve my code. My C# knowledge is fairly limited. The language I know the best is Java (0.5 years). Are there any language specific style issues you can see in my code? Did I get carried away with C# language features? I chose not to create any constructors for my class in favor of object initializer syntax. I did this because all of the properties in the class are meant to be optional. Did I make the right move? One design issue I see is that MinLength could be greater than MaxLength. How would I fix this? My CheckPassword method is quite lengthy and contains a lot of if else statements. It also has two special cases where there is no MaxLength and where there is a Character Blacklist. What improvements (if any) should I make to that method? Answer: You've mixed Requirement and Validator into one role. A Validator should only read the requirements and validate against them, but in your class, you're setting the requirements from within the validator.
So, what you need is to separate both Requirement and Validator, giving each of them its own class. This would give more flexibility and better maintainability over them. So you should end up having something like this: public interface IPasswordRequirement { ... } public class PasswordValidator { private readonly IPasswordRequirement _requirement; // this constructor ensures this instance won't be initialized without providing a requirement. public PasswordValidator(IPasswordRequirement requirement) { _requirement = requirement; } public bool Validate(string password) { // validate the password using _requirement instance. } } Now, in the example above I've used an interface as a baseline contract to validate upon. This would give you the ability to define different requirements (StrongPasswordRequirement, DefaultPasswordRequirement, etc.), and also gives you the ability to expand the validator to accept multiple requirements at once (for instance, you can define three requirements and validate the password against them, to see if the password matches any of them). CheckPassword should be renamed to ValidatePasswordRequirement, because CheckPassword by itself could mean many things, such as checking the password hash, checking the password expiry, etc. All will go under password validity. So, it would be a good idea to tell which part of validation the method will process. I chose not to create any constructors for my class in favor of object initializer syntax. I did this because all of the properties in the class are meant to be optional. Did I make the right move? It's fine, as long as they're optional. You only need to consider the properties' default values and whether you need to use the system default value or a custom one. Also, if you're working with different .NET versions, you should consider a backward compatibility approach, where you would favor a default constructor over init-only setters.
One design issue I see is that MinLength could be greater than MaxLength. How would I fix this? Just validate them before initializing, something like this: private uint _minLength; private uint _maxLength; public uint MinLength { get => _minLength; init { if(_maxLength > 0 && value >= _maxLength) throw new ArgumentOutOfRangeException(nameof(MinLength)); _minLength = value; } } public uint MaxLength { get => _maxLength; init { if(value <= _minLength) throw new ArgumentOutOfRangeException(nameof(MaxLength)); _maxLength = value; } } This way you avoid overlap between the two properties. Also, AllowedChar can be replaced with CharBlacklist.Contains(character), and your loop can be done like this: foreach (char passwordChar in password) { if (hasBlacklist && CharBlacklist.Contains(passwordChar)) { return false; } else if (upperCount < UpperRequirement && char.IsUpper(passwordChar)) { upperCount++; } else if (lowerCount < LowerRequirement && char.IsLower(passwordChar)) { lowerCount++; } else if (digitCount < DigitRequirment && char.IsDigit(passwordChar)) { digitCount++; } } This way, the upper, lower, and digit branches only hit if their counter is less than its requirement; whenever any of them fulfils its requirement, it will be skipped in the upcoming iterations. Here is an example of the above points: public interface IPasswordRequirement { int MinimumLength { get; } int MaximumLength { get; } int NumberOfUpperCase { get; } int NumberOfLowerCase { get; } int NumberOfDigits { get; } char[]? BlacklistCharacters { get; } } /// this would be used as system default. // then you can define another custom requirement // that the consumer can customize, and use the default instance // if the consumer did not provide any custom requirement.
public class DefaultPasswordRequirement : IPasswordRequirement { private static readonly char[] _defaultBlacklistCharacters = new char[] { 'T', '&' }; public int MinimumLength { get; } = 6; public int MaximumLength { get; } = 13; public int NumberOfUpperCase { get; } = 1; public int NumberOfLowerCase { get; } = 1; public int NumberOfDigits { get; } = 1; public char[]? BlacklistCharacters { get; } = _defaultBlacklistCharacters; } public class PasswordRequirementValidator { // create the default instance, and reuse it. private static readonly IPasswordRequirement _defaultPasswordRequirement = new DefaultPasswordRequirement(); private readonly IPasswordRequirement _requirement; // when using the default constructor, it will apply the default requirement. public PasswordRequirementValidator() : this(_defaultPasswordRequirement) { } // use a custom requirement instead public PasswordRequirementValidator(IPasswordRequirement requirement) { if(requirement == null) throw new ArgumentNullException(nameof(requirement)); _requirement = requirement; } public bool Validate(string password) { if(string.IsNullOrWhiteSpace(password)) return false; if(password.Length < _requirement.MinimumLength || password.Length > _requirement.MaximumLength) return false; uint upperCount = 0; uint lowerCount = 0; uint digitCount = 0; foreach (char passwordChar in password) { if (_requirement.BlacklistCharacters != null && _requirement.BlacklistCharacters.Contains(passwordChar)) { return false; } else if (upperCount < _requirement.NumberOfUpperCase && char.IsUpper(passwordChar)) { upperCount++; } else if (lowerCount < _requirement.NumberOfLowerCase && char.IsLower(passwordChar)) { lowerCount++; } else if (digitCount < _requirement.NumberOfDigits && char.IsDigit(passwordChar)) { digitCount++; } } return upperCount >= _requirement.NumberOfUpperCase && lowerCount >= _requirement.NumberOfLowerCase && digitCount >= _requirement.NumberOfDigits; } } Also, you could consider replacing the bool with something more descriptive that would give more meaningful results.
{ "domain": "codereview.stackexchange", "id": 43606, "tags": "c#, object-oriented, parsing" }
How can we make embeddings from the text?
Question: I have a dataset with questions and answers to them. I want to make embeddings of questions and save them in a vector database. Next, I will make a query to the database. With the help of the Pinecone service, I will be able to do what I have planned. I'm not vectorizing the answer text itself, just the question. But I have some questions. How should I provide the text of the questions for embedding in order to simplify the search for the right embedding? That is, do I need to provide multiple questions referring to a single answer? I mean, the wording of a sentence can be different, and the answer to it can be the same. Answer: Using Text Embeddings for Semantic Search To create embeddings of questions and save them in a vector database for semantic search, you can utilize the OpenAI Embedding API to generate vector embeddings of your questions and then upload those vector embeddings into Pinecone, which can store and index millions/billions of these vector embeddings and search through them at ultra-low latencies. Providing Text for Embedding When providing text for embedding, it's important to consider the variations in the wording of questions that may lead to the same answer. To simplify the search for the right embedding, you can provide multiple questions that refer to a single answer. This approach allows the semantic search pipeline to identify the meaning between each of the queries and enables the system to return the most relevant results. By using these embeddings with Pinecone, you can effectively retrieve the desired information based on the semantic meaning of the queries. You can refer to the documentation for more info. Hope that helps!
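The "multiple phrasings, one answer" idea can be sketched with toy vectors (the 3-d embeddings, questions, and answer ids below are hand-made stand-ins for real embedding-API output; Pinecone's actual interface is not shown):

```python
import math

# Toy in-memory "vector database": each stored phrasing maps to an
# (embedding, answer_id) pair. Two different phrasings deliberately point
# at the same answer id, so either wording of a query retrieves it.

index = {
    "How do I reset my password?":     ([0.9, 0.1, 0.0], "answer_42"),
    "I forgot my password, what now?": ([0.8, 0.2, 0.1], "answer_42"),
    "Where do I change billing info?": ([0.1, 0.9, 0.2], "answer_7"),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec):
    """Return the answer id of the stored phrasing most similar to the query."""
    best = max(index.values(), key=lambda item: cosine(query_vec, item[0]))
    return best[1]

# A query embedding close to either password phrasing hits the same answer.
print(search([0.85, 0.15, 0.05]))  # answer_42
```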
{ "domain": "datascience.stackexchange", "id": 12186, "tags": "data-cleaning, embeddings" }
Metatranscriptomics Extraction
Question: I am attempting to perform metatranscriptomics analysis of a cleanroom. The DNA input is known to be rather low (<1pg). Despite this, I still want to attempt it. I am thinking it might be best to perform extraction trials beforehand where certain parameters are manipulated. I can then determine from these extraction trials which values for which parameters are best. As an example, I am reading a peer-reviewed paper called "Validation of picogram- and femtogram-input DNA libraries for microscale metagenomics" where they used the Illumina Nextera XT DNA library preparation kit on low biomass samples. Two of the parameters they manipulated were: 1) PCR cycle numbers (Values = 12, 14, 16, 18, 20) 2) ATM dilution (Values = undiluted, 1:5, 1:10, 1:20, 1:50) I am hoping to use the same type of extraction trials to determine which combination of values from these two parameters is best for a clean room. My first question is: How can I determine which combination is best (as there are 25 combinations)? I have two ideas (but am open to others): 1) Check the amount of RNA derived 2) Check some quality metric on RNA As I hope to do triplicates of each of the 25 combinations, I will have 75 samples. I am hoping not to have to do sequencing, as it is time-consuming and possibly expensive, just to choose the best combination. Is there a way to choose which combination works best without having to sequence? My second question is: Are there any other parameters in this workflow that I can also test for if I am not getting decent results in these low-biomass cleanrooms? (As you can tell, I do not have too much direct molecular biology experience, so feel free to correct or question any of my post and/or "dumb down" your response). Thank you for sharing advice (for either question)! Answer: Consider doing triplicates of just 4 combinations of cycle/dilution: low+low, low+high, high+low, high+high. Simpler. Most important: prove you can extract DNA from a clean room.
If you extract from a real clean room and get no result from the PCR, maybe it is because you failed to collect anything or maybe nothing was there. Idea - spray a known quantity of DNA into a room that will serve as your sham clean room. Then do your extraction protocol. You can test the cycle / dilution parameters with the sham clean room extraction and have those variables optimized for when you collect from a real "wild" clean room you did not dope with DNA.
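The reduction from the full 5×5 grid to the four corner combinations is easy to tally (a Python sketch using the parameter values quoted from the paper; the triplicate count follows the question):

```python
from itertools import product

# Full factorial design vs the answer's "corner" screen: instead of all
# 25 cycle/dilution combinations (x3 replicates = 75 extractions), screen
# only the four low/high corner combinations first.

cycles = [12, 14, 16, 18, 20]
dilutions = ["undiluted", "1:5", "1:10", "1:20", "1:50"]

full_grid = list(product(cycles, dilutions))
corners = list(product([min(cycles), max(cycles)],
                       [dilutions[0], dilutions[-1]]))

print(len(full_grid) * 3)  # 75 samples for the full design
print(len(corners) * 3)    # 12 samples for the corner screen
```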
{ "domain": "biology.stackexchange", "id": 8822, "tags": "rna-sequencing" }
The mean kinetic energy of a gas particle
Question: I'm in undergraduate stat mech/thermo. In the context of the Maxwell-Boltzmann distribution, the mean kinetic energy of a gas particle is $\langle KE \rangle = \frac{1}{2}m \langle v^2 \rangle$. I do not see why we use $\langle v^2 \rangle$, and not $\langle v \rangle ^2$. I understand that they are different terms mathematically, just can't figure out what necessitates the use of one over the other. Thank you. Answer: In the kinetic theory of gases, the average velocity of a particle is derived by considering the molecules of gas to be point particles with velocities $v_{i}$, where $i\, =\, 1,\,2, \, \ldots,\,N$ is the particle index and $N$ is the total number of particles. In this context, $ \langle v \rangle$ is the average velocity of the collection: $$ \langle v \rangle^2 = \left( \frac{1}{N} \sum_{i=1}^{N} v_i \right)^2 $$ and $ \langle v^{2} \rangle$ is: $$ \langle v^2 \rangle = \frac{1}{N} \sum_{i=1}^{N} v_i^2. $$ Expanding a few terms reveals the problem: $$ \langle v \rangle^2 = \frac{1}{N^2} (v_1 + v_2 + v_3 + \ldots + v_N)^2, $$ whereas: $$ \langle v^2 \rangle = \frac{1}{N} \sum_{i=1}^{N} v_i^2 = \frac{1}{N} (v_1^2 + v_2^2 + v_3^2 + \ldots + v_N^2). $$ I hope this helps!
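A quick numeric example (toy 1-d velocities) makes the difference concrete: for velocities that can point either way, $\langle v \rangle$ can vanish while $\langle v^2 \rangle$ does not, so $\frac{1}{2}m\langle v \rangle^2$ would wrongly give zero kinetic energy:

```python
# Toy 1-d velocities in arbitrary units, chosen symmetric about zero
# (as in an equilibrium gas, where as many particles move left as right).

v = [-3.0, -1.0, 1.0, 3.0]

mean_v = sum(v) / len(v)                 # average velocity
mean_v2 = sum(x * x for x in v) / len(v) # average squared velocity

print(mean_v ** 2)  # 0.0 -- square of the mean: would imply zero KE
print(mean_v2)      # 5.0 -- mean of the squares: the correct KE measure
```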
{ "domain": "physics.stackexchange", "id": 100045, "tags": "thermodynamics, statistical-mechanics, ideal-gas, gas" }
DWA suddenly stops when it arrives at the goal
Question: I am using move_base and dwa_local_planner. Robot moves smoothly at start but when it arrives to goal it suddenly stops instead of slowing down when getting closer to the goal. I have tried different acceleration values but it didn't effect the behavior. My planner parameters are set like this DWAPlannerROS: max_vel_trans: 1.0 min_vel_trans: 0.1 max_vel_x: 1.0 min_vel_x: -1.0 max_vel_theta: 1.0 min_vel_theta: 0.1 max_vel_y: 0.0 min_vel_y: 0.0 acc_lim_x: 0.15 acc_lim_y: 0.0 acc_lim_theta: 0.35 acc_lim_trans: 0.15 prune_plane: true xy_goal_tolerance: 0.5 yaw_goal_tolerance: 6.28 latch_xy_goal_tolerance: false sim_time: 1.5 sim_granularity: 0.1 angular_sim_granularity: 0.1 vx_samples: 20 vy_samples: 1 vth_samples: 20 path_distance_bias: 32.0 goal_distance_bias: 15.0 occdist_scale: 0.02 forward_point_distance: 0.325 stop_time_buffer: 0.2 scaling_speed: 0.25 max_scaling_factor: 0.2 oscillation_reset_dist: 0.05 publish_traj_pc : true publish_cost_grid_pc: true controller_frequency: 1.0 Example of this behavior can be seen from rqt plots Edit 1: I have noticed that yaw goal tolerance is too high which prevents rotating_to_goal behavior in computeVelocityCommandsStopRotate function at the end and directly sets cmd_vel to 0. However, it still doesn't fix sudden stop problem if the goal yaw and final position yaw is in the given tolerance Originally posted by rappy on ROS Answers with karma: 25 on 2021-08-10 Post score: 0 Original comments Comment by Mike Scheutzow on 2021-08-11: Why do we not see cmd_vel.linear.x in these plots? What is the black line? Comment by rappy on 2021-08-12: Black line is cmd_vel actually I have changed the plot style manually but I think rqt didn't update the legend Comment by Mike Scheutzow on 2021-08-12: A couple thoughts on your DWAPlanner config: Your controller_frequency is insanely low, and is not consistent with the black dots in first plot. Most people prefer to inherit this rate from the parent move_base. 
Under normal circumstances, DWAPlanner will linearly ramp the x velocity down to 0 once it is a certain distance from the goal. Your very low acc_lim_x and very short stop_time_buffer might be interfering with this ramp-down mechanism. Comment by rappy on 2021-08-13: You are right I forgot to remove controller frequency parameter from this config and it is set to 15 in move_base config. So I have changed the parameters you have suggested to and it fixed the problem. Thanks for the advice. acc_lim_trans: 0.3 acc_lim_x: 0.3 stop_time_buffer: 1 Answer: A couple thoughts on your DWAPlanner config: Your controller_frequency is insanely low, and is not consistent with the black dots in first plot. Most people prefer to inherit this rate from the parent move_base. Under normal circumstances, DWAPlanner will linearly ramp the x velocity down to 0 once it is a certain distance from the goal. Your very low acc_lim_x and very short stop_time_buffer might be interfering with this ramp-down mechanism. Originally posted by Mike Scheutzow with karma: 4903 on 2021-08-13 This answer was ACCEPTED on the original site Post score: 1
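A back-of-envelope check of the accepted fix (a hedged sketch; constant deceleration is assumed, which is only an approximation of DWA's actual ramp-down behavior): at speed v with deceleration limit a, the robot needs d = v²/(2a) to stop, so a very low acc_lim_x makes that distance large and the planner may still be at speed when cmd_vel is zeroed at the goal.

```python
# Minimum stopping distance under a constant deceleration limit.
# Values match the question's config: max_vel_x = 1.0 m/s,
# acc_lim_x = 0.15 originally, raised to 0.3 in the accepted fix.

def stopping_distance(v, a):
    return v * v / (2.0 * a)

print(stopping_distance(1.0, 0.15))  # ~3.33 m with the original acc_lim_x
print(stopping_distance(1.0, 0.30))  # ~1.67 m after raising it to 0.3
```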
{ "domain": "robotics.stackexchange", "id": 36789, "tags": "ros, navigation, ros-melodic, dwa-local-planner" }
Project Euler+ #35 - Summing circular primes less than N
Question: Overview The following code is a solution for a Project Euler+ problem on Hackerrank. A circular prime is a prime number whose rotations are also prime. For example, 197 is a circular prime since it and all its rotations (197, 971, 719) are prime. The sum of all circular primes less than 100 is: sum(2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, 97) = 446 Code Readability After learning about list comprehensions in Python, I'm concerned that my code readability may have taken a nosedive. I'd like to make my code as understandable as possible so that someone reading it could quickly deduce the strategy I used to solve the problem. I'm mostly concerned with my generate_circular_primes_less_than function as it goes 5 nested blocks deep and seems to be more confusing than it could be. Here's the code for my solution: from math import sqrt, ceil from functools import reduce # generates a list of n booleans where indexes correspond to primality def prime_sieve(n): N = [True] * n N[0] = False N[1] = False for i in range(2, int(ceil(sqrt(n)))): if N[i]: for j in range(i*2, n, i): N[j] = False return N # rotates the first i chars of a string to the end def rotate(s, i): return s[i:] + s[:i] def generate_circular_primes_less_than(n): large = 10**len(str(n)) is_prime = prime_sieve(large) for num in range(n): if is_prime[num]: s = str(num) rotations = [int(rotate(s, i)) for i in range(len(s))] if reduce(lambda y, x: y and is_prime[x], rotations, True): for circular_prime in (r for r in rotations if r < n and is_prime[r]): is_prime[circular_prime] = False # remove duplicates (like 11) yield circular_prime else: for r in rotations: is_prime[r] = False # no need to recheck non-circular primes # # MAIN if __name__ == "__main__": n = int(input()) print(sum(generate_circular_primes_less_than(n))) However, the pseudocode I wrote when designing my solution is much simpler: 1. sieve all primes less than the maximum rotation value 2.
get the rotations for each prime less than n if all rotations are prime: add the rotations less than n to the sum Sometimes it seems that adding comments only masks unintuitive code (like putting lipstick on a pig). What would you suggest to improve the readability? Answer: Replace the list of all numbers up to n with a dictionary of primes only As it stands, N holds mainly False values, which are never used. It would be more idiomatic to use a dictionary, plus Python's in operator, which tells you whether a particular key exists in the dictionary. As the number of primes below a given number n is approximated by \$n/\ln(n)\$, a dictionary containing only primes as keys would be smaller than your list by a factor of about \$\ln(n)\$, while look-up would be just as fast. Your prime sieve would look like this: def prime_sieve(n): '''Your docstring here. ''' sieve = dict.fromkeys(range(2, n)) # make a dict of all numbers from 2 up to n for i in range(2, ceil(sqrt(n))): if i in sieve: for j in range(i**2, n, i): if j in sieve: del sieve[j] # remove composites from dict return sieve (Note: even with this modification, the sieve still begins at size n, and takes on the order of \$\mathcal{O}(n \log \log n)\$ operations to generate. This is a fundamental property of the Sieve of Eratosthenes. A faster implementation might forgo the sieve, generate candidate circular primes, and test them individually, e.g. using the Miller-Rabin primality test.) Follow your own pseudocode They say in programming that good code should read like prose. That means it should read as close to a natural sentence as possible. Writing out your program as pseudocode is a great way to prioritize readability. The pseudocode already gives you the most readable version of your code as a place to start. The task is then to rewrite it according to the syntax and idioms of the language you choose. Based on your own pseudocode, we need: (1) A simple way to invoke the sieve.
You have that already with your prime_sieve function. Here, your variable large should be something more descriptive. Taking cues from your pseudocode, we can call it max_rotation:

```python
max_rotation = 10**len(str(n))
primes = prime_sieve(max_rotation)
```

(Note: I changed the name of the output of prime_sieve to primes, as the new dict implementation contains only primes.)

(2) A loop header that makes it clear that we are getting all rotations of each prime less than n. Translated into Python, the second line of your pseudocode becomes:

```python
for rotations in [get_rotations(i) for i in range(n)]:
```

This implies that operations such as converting the integer i to a string, removing duplicates, and accumulating the rotations into a collection should all be handled by the function get_rotations, rather than clogging up the main body of your generator. Such a function would look like this:

```python
def get_rotations(num):
    '''Your docstring here.
    '''
    def rotate(s, i):
        return s[i:] + s[:i]

    s = str(num)
    return set(int(rotate(s, i)) for i in range(len(s)))
```

(Note: using a set makes sure all rotations returned are unique.)

(3) Check if all the resulting rotations are prime. Here, it's clearest and most idiomatic to use the all() built-in:

```python
if all(r in primes for r in rotations):
```

(4) And finally, we must yield only those circular primes that are less than the original n.

The final code looks like this:

```python
from math import sqrt, ceil

def prime_sieve(n):
    '''Your docstring here.
    '''
    sieve = dict.fromkeys(range(2, n))
    for i in range(2, ceil(sqrt(n))):
        if i in sieve:
            for j in range(i**2, n, i):
                if j in sieve:
                    del sieve[j]
    return sieve

def get_rotations(num):
    '''Your docstring here.
    '''
    def rotate(s, i):
        return s[i:] + s[:i]

    s = str(num)
    return set(int(rotate(s, i)) for i in range(len(s)))

def get_circular_primes(n):
    '''Your docstring here.
    '''
    circular = set()
    max_rotation = 10**len(str(n))
    primes = prime_sieve(max_rotation)
    for rotations in [get_rotations(i) for i in range(n)]:
        if all(r in primes for r in rotations):
            circular.update(rotations)
    yield from (c for c in circular if c < n)
```

Notice that the last five lines closely match your pseudocode (with the exception of updating the set circular, and of using yield from so that the caller can still write sum(get_circular_primes(n))).

[Below is a summary of edits suggested by enedil.]

You use int() unnecessarily here:

```python
for i in range(2, int(ceil(sqrt(n)))):
```

Because ceil() already returns an integer in Python 3, wrapping it in int() is redundant.

The following line can be improved to skip multiples of primes that have already been crossed off:

```python
for j in range(i*2, n, i):
```

As you have it, j starts at 2*i, but every composite multiple of i below i**2 has a prime factor smaller than i, and has therefore already been removed from the prime list in an earlier round. So j should start at i**2, the first composite that has not been seen before. The improved loop looks like this:

```python
for j in range(i**2, n, i):
```
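Picking up the note above about skipping the sieve entirely: here is one way the whole task could be put together around a Miller-Rabin primality test instead. This is an illustrative sketch, not part of the reviewed code, and the names (`is_prime`, `get_rotations`, `sum_circular_primes_below`) are my own.

```python
def is_prime(n):
    """Deterministic Miller-Rabin; the fixed witness set below is known
    to be exact for all n < 3,317,044,064,679,887,385,961,981."""
    if n < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in witnesses:
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def get_rotations(num):
    s = str(num)
    return {int(s[i:] + s[:i]) for i in range(len(s))}

def sum_circular_primes_below(n):
    # int() drops leading zeros in rotations (e.g. "091" -> 91), but this
    # causes no false positives: any number containing a 0 digit has a
    # rotation ending in 0, which is composite.
    return sum(p for p in range(2, n)
               if all(is_prime(r) for r in get_rotations(p)))
```

Running `sum_circular_primes_below(100)` reproduces the 446 from the problem statement, which makes this a convenient cross-check against the sieve-based versions.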
{ "domain": "codereview.stackexchange", "id": 24511, "tags": "python, programming-challenge, python-3.x, primes" }
Approximation algorithm for weighted set cover, using multiplicative weights
Question: It is known that the fractional set cover problem can be rephrased as a linear program and approximated using the multiplicative weights method; for instance, this lecture note shows how to do so. The running time depends on the "width" of the problem, which in the unweighted case equals the number of sets. In the weighted case, however, the width depends on the weight function, so the running time can be exponential in the size of the problem's representation. Is there a way to overcome this issue? Either a way to reduce it to polynomial running time, or a proof that it's impossible (under plausible complexity assumptions)?

Answer: Yes, fractional weighted set cover and, indeed, all explicitly given packing/covering LPs can be solved using multiplicative-weights algorithms in time polynomial in the input size, independent of the width. There are many papers in this line of work; the earliest are probably due to Garg and Konemann. See e.g. https://cstheory.stackexchange.com/questions/4697/toy-examples-for-plotkin-shmoys-tardos-and-arora-kale-solvers/14388#14388 .
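To make the setup concrete, the naive width-dependent scheme from the lecture-note setting can be sketched as a toy solver for the fractional covering LP. Everything here (function name, parameters, update rule) is invented for illustration; this is emphatically not the width-independent Garg-Konemann method the answer points to, and its guarantees degrade with the width.

```python
def mwu_fractional_cover(n_elems, sets, costs, eps=0.1, rounds=300):
    """Toy multiplicative-weights sketch for the covering LP
        min  sum_S cost_S * x_S   s.t.  sum_{S : e in S} x_S >= 1  for every e.
    """
    w = [1.0] * n_elems            # one weight per covering constraint
    x = [0.0] * len(sets)
    for _ in range(rounds):
        # oracle: pick the set covering the most constraint weight per unit cost
        best = max(range(len(sets)),
                   key=lambda i: sum(w[e] for e in sets[i]) / costs[i])
        x[best] += 1.0
        # constraints hit by the chosen set get their weight reduced
        for e in sets[best]:
            w[e] *= 1.0 - eps
    # scale the accumulated iterate so every element is fractionally covered
    coverage = [sum(x[i] for i, s in enumerate(sets) if e in s)
                for e in range(n_elems)]
    return [xi / min(coverage) for xi in x]
```

On the three-element instance with sets {0,1}, {1,2}, {0,2} and unit costs, the iterates settle at roughly x = (0.5, 0.5, 0.5), total cost 1.5, which is the fractional optimum for that instance.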
{ "domain": "cs.stackexchange", "id": 13702, "tags": "approximation, linear-programming, set-cover, online-algorithms" }
Unable to connect two FlexBE states
Question: I am a beginner in FlexBE. I was able to connect the start state to the initial_wait state, and then the initial_wait state to the print_greeting state, but I am unable to connect the print_greeting state to the finished state. You can see what I am trying to say from the image given below. Could anyone help?

Originally posted by Paul Sebastian on ROS Answers with karma: 41 on 2019-10-07
Post score: 1

Original comments

Comment by gvdhoorn on 2019-10-07: Please attach your screenshot to the question directly. I've given you sufficient karma for that.

Comment by Paul Sebastian on 2019-10-07: Okay. Thanks

Answer: I figured out how to do it. The unconnected outcomes of each state are listed at the bottom of that state's visualization. First click on the outcome you want to connect (in this case, "done" in Print_Greeting), and then click on the target state.

Originally posted by Paul Sebastian with karma: 41 on 2019-10-08
This answer was ACCEPTED on the original site
Post score: 2
{ "domain": "robotics.stackexchange", "id": 33858, "tags": "ros-kinetic" }