Normalize ATAC-seq/DNase-seq sequencing read coverage signals over estimated background
Question: I'm trying to normalize the coverage signals of ATAC-seq reads against their own background using a normal distribution, as described in this paper. It says: Finally, all open chromatin coverage measurements were normalized by standardization to the mean and standard deviation of coverage over a set of 25,000 randomly selected background regions. To select background regions, the set of peak open regions were widened to 20,000 bp, reduced, and subtracted from the genome assembly. Thereafter, 25,000 random positions were selected and widened to reflect the distribution of widths in the set of open peaks. Coverage within these background regions was then calculated, and regions with zero coverage were discarded (~5%). The distribution of counts within background regions approximated a log-normal distribution. Mean and standard deviation of these background regions was calculated and used to transform the coverage measurements for the entire genome. Does anyone know of any existing code that does this? I know Anshul's pipeline does it, but I can't find the specific script for that step. Answer: Supplementary Files 5 and 6 contain the code you're looking for.
{ "domain": "bioinformatics.stackexchange", "id": 428, "tags": "normalization, atac-seq" }
JavaScript, looping, and functional approach
Question: Data Structure coming back from the server [ { id: 1, type: "Pickup", items: [ { id: 1, description: "Item 1" } ] }, { id: 2, type: "Drop", items: [ { id: 0, description: "Item 0" } ] }, { id: 3, type: "Drop", items: [ { id: 1, description: "Item 1" }, { id: 2, description: "Item 2" } ] }, { id: 0, type: "Pickup", items: [ { id: 0, description: "Item 0" }, { id: 2, description: "Item 2" } ] } ]; Each element represents an event. Each event is only a pickup or drop. Each event can have one or more items. Initial State On initial load, loop over the response coming from the server and add an extra property called isSelected to each event, each item, and set it as false as default. -- Done. This isSelected property is for UI purpose only and tells user(s) which event(s) and/or item(s) has/have been selected. // shove the response coming from the server here and add extra property called isSelected and set it to default value (false) const initialState = { events: [] } moveEvent method: const moveEvent = ({ events }, selectedEventId) => { // de-dupe selected items const selectedItemIds = {}; // grab and find the selected event by id let foundSelectedEvent = events.find(event => event.id === selectedEventId); // update the found event and all its items' isSelected property to true foundSelectedEvent = { ...foundSelectedEvent, isSelected: true, items: foundSelectedEvent.items.map(item => { item = { ...item, isSelected: true }; // Keep track of the selected items to update the other events. selectedItemIds[item.id] = item.id; return item; }) }; events = events.map(event => { // update events array to have the found selected event if(event.id === foundSelectedEvent.id) { return foundSelectedEvent; } // Loop over the rest of the non selected events event.items = event.items.map(item => { // if the same item exists in the selected event's items, then set item's isSelected to true. 
const foundItem = selectedItemIds[item.id]; // foundItem is the id of an item, so 0 is valid if(foundItem >= 0) { return { ...item, isSelected: true }; } return item; }); const itemCount = event.items.length; const selectedItemCount = event.items.filter(item => item.isSelected).length; // If all items in the event are set to isSelected true, then mark the event as isSelected true as well. if(itemCount === selectedItemCount) { event = { ...event, isSelected: true }; } return event; }); return { events } } Personally, I don't like the way I've implemented the moveEvent method, and it seems like an imperative approach even though I'm using find, filter, and map. All this moveEvent method is doing is flipping the isSelected flag. Is there a better solution? Is there a way to reduce the amount of looping? Maybe events should be an object and even its items. At least, the lookup would be fast for finding an event, and I wouldn't have to use Array.find initially. However, I still have to either loop over each of the other non-selected events' properties or convert them back and forth using Object.entries and/or Object.values. Is there a more functional approach? Can recursion resolve this? Usage and Result // found the event with id 0 const newState = moveEvent(initialState, 0); // Expected results [ { id: 1, type: 'Pickup', isSelected: false, items: [ { id: 1, isSelected: false, description: 'Item 1' } ] } { id: 2, type: 'Drop', // because all items' isSelected properties are set to true (even though it is just one), then set this event's isSelected to true isSelected: true, // set this to true because event id 0 has the same item (id 0) items: [ { id: 0, isSelected: true, description: 'Item 0' } ] } { id: 3, type: 'Drop', // since all items' isSelected properties are not set to true, this should remain false.
isSelected: false, items: [ { id: 1, isSelected: false, description: 'Item 1' }, // set this to true because event id 0 has the same item (id 2) { id: 2, isSelected: true, description: 'Item 2' } ] } { id: 0, type: 'Pickup', // set isSelected to true because the selected event id is 0 isSelected: true, items: [ // since this belongs to the selected event id of 0, then set all items' isSelected to true { id: 0, isSelected: true, description: 'Item 0' }, { id: 2, isSelected: true, description: 'Item 2' } ] } ] Answer: You can just get all selected items and compute everything in one go using functional approach with map: const moveEvent = ({ events }, selectedEventId) => { const selectedEvent = events.find(({id}) => id === selectedEventId); const selectedItems = new Set(selectedEvent.items.map(({id}) => id)); const allItemsSelected = (items) => items.every(({ id }) => selectedItems.has(id)); return { events: events.map((event) => ({ ...event, isSelected: (selectedEventId === event.id || allItemsSelected(event.items)), items: event.items.map((item) => ({ ...item, isSelected: selectedItems.has(item.id) })) })) }; }; If you want to be truly functional you can transform temp variables at the beginning of the moveEvent to pure functions.
{ "domain": "codereview.stackexchange", "id": 39990, "tags": "javascript, object-oriented, array, functional-programming" }
Does Potassium Superoxide also absorb Carbon Monoxide?
Question: Potassium Superoxide ($\ce{KO2}$) is used as an oxygen provider and carbon dioxide scrubber in life support systems $$\ce{2KO2 + H2O -> 2KOH + O2} \\ \ce{2KOH + CO2 -> K2CO3 + H2O}$$ I encountered a question that asked if Potassium superoxide can also serve as a scrubber for $\ce{CO}$. I thought of a reaction where $\ce{KO2}$ oxidized $\ce{CO}$ to $\ce{CO2}$ and the $\ce{CO2}$ was absorbed as previously mentioned $$\ce{2KO2 + 2CO -> K2O2 + 2CO2}$$ Another possible reaction would be reduction of $\ce{KO2}$ to $\ce{K2O}$ $$\ce{2KO2 + 3CO -> K2O + 3CO2}$$ I could not find these reactions or similar ones mentioned in any literature online. This article describes the working of $\ce{KO2}$ in detail but I could not find a mention of carbon monoxide in there. Can $\ce{KO2}$ also be used to scrub $\ce{CO}$? If so, how? Answer: As the other answer and comments imply, potassium superoxide is not a good reagent for absorbing carbon monoxide. Cuprous chloride solution works better, as in this study[1] where the salt is dissolved in an ammonia-bearing solution. Reference: R. V. Gholap and R. V. Chaudhari, "Absorption of carbon monoxide with reversible reaction in cuprous chloride solutions", Ind. Eng. Chem. Res. 27, 11, 2105–2110 (1988).
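As a side note, the question's two proposed reactions are at least mass-balanced, which is easy to verify by counting atoms on each side (a quick sketch; the formulas are hard-coded here):

```python
from collections import Counter

def atoms(formula_counts):
    """Total atom counts for a list of (coefficient, composition) pairs."""
    total = Counter()
    for count, comp in formula_counts:
        for el, n in comp.items():
            total[el] += count * n
    return total

KO2  = {"K": 1, "O": 2}
CO   = {"C": 1, "O": 1}
K2O2 = {"K": 2, "O": 2}
CO2  = {"C": 1, "O": 2}
K2O  = {"K": 2, "O": 1}

# 2KO2 + 2CO -> K2O2 + 2CO2
assert atoms([(2, KO2), (2, CO)]) == atoms([(1, K2O2), (2, CO2)])
# 2KO2 + 3CO -> K2O + 3CO2
assert atoms([(2, KO2), (3, CO)]) == atoms([(1, K2O), (3, CO2)])
print("both proposed reactions are mass-balanced")
```

Balance alone, of course, says nothing about whether the reactions actually occur, which is the point of the answer.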
{ "domain": "chemistry.stackexchange", "id": 14224, "tags": "inorganic-chemistry, alkali-metals" }
Work done by gravitational field and proof of gravity being a conservative force
Question: We know the gravitational force is a conservative force, so the work done by the field in moving an object from one position to another and back again to the initial position should be zero. While studying the work done by the field I wanted to verify this, but I obtained $W_1+W_2\neq 0$. It would be very helpful if someone pointed out my mistake (conceptual or mathematical) and helped me learn more about this topic. Answer: This is a standard doubt. One can encounter a similar problem with the Coulomb force (as both have the same form). The whole problem lies in the limits of the integration. Consider the function $y=x^2$. The integral $\int_1^2y\,dx$ means that $x$ varies from 1 to 2 infinitesimally, and in the process we are summing $y\,dx$. So the limits in the integral themselves give a sense of the direction in which the change in $x$ takes place. $\int_2^1y\,dx$ shows that $x$ varies from 2 to 1. So, vectorially, there is no need to write $-dx$ in the expression $\int_2^1y\,dx$. Now coming back to the original question. Case 1: the particle moves from $P$ to $Q$. Gravitational force $= -\frac{GMm}{r^2}\hat r$ and displacement $= dr\,\hat r$. Work done $= \int \vec F\cdot d\vec r=\int_{r_a}^{r_b}-\frac{GMm}{r^2}\hat r\cdot dr\,\hat r$ $\implies W= GMm\Big[\frac{1}{r}\Big]_{r_a}^{r_b}$ So, $W_{P\to Q}=GMm\Big[\frac{1}{r_b}-\frac{1}{r_a}\Big]$ So the work done by gravity is negative, which is in accordance with our intuition. Case 2: the particle moves from $Q$ to $P$. Gravitational force $= -\frac{GMm}{r^2}\hat r$ and displacement $= dr\,\hat r$. Here there is no need to write displacement $= -dr\,\hat r$, as the direction of the displacement is given by the limits of the integral.
So, work done by gravity, $W=\int_{r_b}^{r_a}-\frac{GMm}{r^2}\hat r\cdot dr\,\hat r$ $W=-GMm\Big[-\frac{1}{r}\Big]_{r_b}^{r_a}$ $W_{Q\to P}=-GMm\Big[\frac{1}{r_b}-\frac{1}{r_a}\Big]$ So the work done by gravity from Q to P is positive, which is in accordance with our intuition. Thus, $W_{P\to Q}+W_{Q\to P}=0$. So we can see that the whole problem lies in the wrong interpretation of the limits when we traverse the domain in the reverse direction (not in the usual increasing fashion). The limits of integration themselves give the direction of the displacement, so there is no need to add another minus sign in the expression for the displacement.
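The sign bookkeeping above is easy to check numerically: integrate $-GMm/r^2$ once with limits $r_a \to r_b$ and once with $r_b \to r_a$, changing nothing but the limits (a sketch with assumed values $GMm = 1$, $r_a = 1$, $r_b = 2$):

```python
import numpy as np

# Assumed values for the illustration; only the signs matter here.
GMm = 1.0
r_a, r_b = 1.0, 2.0

def work(r_start, r_end, n=200_000):
    """Trapezoidal approximation of W = integral of -GMm/r^2 dr, r_start -> r_end."""
    r = np.linspace(r_start, r_end, n)
    f = -GMm / r**2
    return float(np.sum((r[1:] - r[:-1]) * (f[1:] + f[:-1]) / 2))

W_PQ = work(r_a, r_b)   # outward trip: gravity does negative work
W_QP = work(r_b, r_a)   # return trip: positive work of the same magnitude
print(W_PQ, W_QP, W_PQ + W_QP)
```

The two trips come out as $\pm GMm(1/r_b - 1/r_a)$ and sum to zero, with no extra minus sign inserted by hand.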
{ "domain": "physics.stackexchange", "id": 77783, "tags": "newtonian-mechanics, newtonian-gravity, work, potential-energy, conservative-field" }
Correct interpretation of $\langle x | \psi \rangle$?
Question: Suppose $|x\rangle$ is an eigenvector of the position operator $\hat{x}$ and let $|\psi\rangle$ be an arbitrary state on this Hilbert space. What is the correct interpretation of the complex number $\langle x| \psi\rangle$? Is it the probability amplitude of finding the particle at position $x$ in the state $|\psi\rangle$, or is it the probability of finding the particle in the state $|\psi\rangle$ at position $x$? Or are these two equivalent? In particular, if $|p\rangle$ is an eigenstate of $\hat p$, is $\langle p| x\rangle$ the probability amplitude of finding the particle at position $x$ with momentum $p$? Answer: The first of your two suggestions doesn't make sense, "the probability amplitude of finding the particle at position $x$ in the state $|\psi\rangle$". The particle is either in the state $|\psi\rangle$ or at position $x$ (in which case it would be in the state $|x\rangle$), assuming $|\psi\rangle\neq|x\rangle$. The quantity $\langle x|\psi\rangle$ is actually the wavefunction of the state $|\psi\rangle$, usually denoted $\psi(x)$. From here the interpretation is exactly what you'd expect for a wavefunction: $|\psi(x)|^2$ is the probability density for finding the particle, after measurement, at the point $x$. In your second example, the quantity $|\langle p|x\rangle|^2$ is the probability density - given a particle in the state $|x\rangle$ - that a measurement of the particle's momentum returns the value $p$, after which the system will be in the state $|p\rangle$.
{ "domain": "physics.stackexchange", "id": 76274, "tags": "quantum-mechanics, hilbert-space, wavefunction, quantum-interpretations, born-rule" }
question about algorithm
Question: As you know, we have an equivalence condition for graphs, so I want to ask a very basic question. What exactly are $w_1(e_1)$, $w_1(e_2)$, $w_2(e_1)$, and $w_2(e_2)$? If $e_1$ means the path from A to B, then does $w_1$ mean the weight of the path $e_1$? And what does $w_2$ mean? Is there any link on this kind of topic to understand it better? Thank you so much. This is the main question. Answer: The solution provided here is highly inelegant as it uses a lemma which is a more general result. This 'solution' should instead be considered as more of a hint: a minimal solution can instead be found by extracting the most relevant parts of the proof of the lemma. Note that $w_1(e_1)<w_1(e_2)\Leftrightarrow w_2(e_1)>w_2(e_2)$ implies that $w_1(e_1)=w_1(e_2)\Leftrightarrow w_2(e_1)=w_2(e_2)$. In particular, if we order the edges in a non-decreasing order according to $w_1$, then we have also ordered them in non-increasing order according to $w_2$. Now, recall that any minimal spanning tree (for $(G,w_1)$) can be found by Kruskal's algorithm (lemma -- https://cs.stackexchange.com/a/95625/109876), so in particular there is an order on the edges which is non-decreasing for $w_1$ such that $\mathcal{T}(\mathcal{V},\mathcal{E}')$ is selected. This same order is non-increasing for $w_2$, and can therefore be used to find a maximum spanning tree (for $(G,w_2)$) using Kruskal's algorithm. But since Kruskal's selection process is independent of the actual weights (and depends only on the ordering and whether the current edge creates a cycle or not), the algorithm will output $\mathcal{T}(\mathcal{V},\mathcal{E}')$ again.
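The answer's key point, that Kruskal's selection depends only on the edge ordering and not on the weights themselves, can be illustrated with a small sketch (the graph and weights below are made up; $w_2 = 10 - w_1$ realizes the order-reversing condition from the exercise):

```python
def kruskal(n, ordered_edges):
    """Kruskal's selection over an already-ordered edge list (union-find)."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    chosen = []
    for u, v in ordered_edges:
        ru, rv = find(u), find(v)
        if ru != rv:                  # keep the edge only if it creates no cycle
            parent[ru] = rv
            chosen.append((u, v))
    return chosen

# Hypothetical 4-vertex graph as (u, v, w1) triples.
edges = [(0, 1, 1), (1, 2, 2), (0, 2, 3), (2, 3, 4)]
by_w1 = [(u, v) for u, v, w in sorted(edges, key=lambda e: e[2])]                    # non-decreasing in w1
by_w2 = [(u, v) for u, v, w in sorted(edges, key=lambda e: 10 - e[2], reverse=True)]  # non-increasing in w2

print(by_w1 == by_w2)      # True: the same ordering
print(kruskal(4, by_w1))   # minimum tree for w1 == maximum tree for w2
```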
{ "domain": "cs.stackexchange", "id": 14725, "tags": "algorithms, graphs, weighted-graphs" }
Why do electrons orbit protons?
Question: I was wondering why electrons orbited protons rather than protons orbiting electrons. My first thought was that it was due to the small amount of gravitational attraction between them that would cause the orbit to be very close to the proton (or nucleus). The only other idea that I would have is that the strong interaction between protons and neutrons has something to do with this. I have heard that the actual answer is due to something in QM, but haven't seen the actual explanation. The only relation to QM that I can think of is that due to a proton's spin and the fact that they are fermions, the atomic orbitals should be somewhat similar. Do protons have the same types of orbitals, that are just confined by the potential of the strong force? A related question that came up while thinking of this being due to a gravitational interaction: do orbits between protons and electrons have a noticeable rotation between each other (as the sun orbits the earth just as the earth orbits the sun), or is any contribution this has essentially nullified by the uncertainty of the location of the electron (and possibly proton as well)? Answer: Technically the electron and proton are both orbiting the barycenter of the system, both in classical and quantum mechanics, just as in gravitational systems. You find the same dynamics for the system if you assume the proton and electron are moving independently about the barycenter, or if you convert to a one-body problem of a single "particle" with the reduced mass $$ \mu = \frac{m_p m_e}{m_p + m_e } \approx m_e \left(1 - \frac{m_e}{m_p}\right). $$ However, the proton is nearly 2000 times more massive than the electron. If we assume that the proton is fixed and infinitely massive, and model our atom using $\mu=m_e$, we introduce errors starting in the fourth decimal place. Usually that's good enough.
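The size of the reduced-mass correction quoted in the answer is easy to check (the masses below are standard values quoted from memory; treat them as approximate):

```python
# Masses in kg (approximate standard values, assumed for this illustration).
m_e = 9.1093837015e-31
m_p = 1.67262192369e-27

mu = m_p * m_e / (m_p + m_e)        # reduced mass of the electron-proton system
relative_error = (m_e - mu) / m_e   # error made by pretending mu = m_e
print(mu / m_e, relative_error)     # relative_error ~ 5.4e-4: fourth decimal place
```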
{ "domain": "physics.stackexchange", "id": 17052, "tags": "quantum-mechanics, mass, reference-frames, atomic-physics" }
Is it possible to remove rosdep?
Question: I'm developing a project with ROS and have a question now. I found that rosdep is really huge (installing rosdep will increase the disk usage of /usr by about 700 MB). As I understand it, rosdep is meant to solve dependency issues. So I'm wondering if I can remove the whole of rosdep to save disk space. If possible, what should I do? Originally posted by bear234 on ROS Answers with karma: 71 on 2018-03-07 Post score: 0 Answer: You can remove rosdep by calling sudo apt-get remove python-rosdep; however, if you're looking to clean up disk space, that's not what you want. Rosdep's size is well under 1 MB. What you're probably seeing is the dependencies that you asked rosdep to install for you. They will need to be uninstalled just like any other package you installed. To get a list of what rosdep installed for a workspace, use the --reinstall --simulate options and reinvoke rosdep install like you did before. Note that when you uninstall these dependencies, anything that relies on them will no longer work. Originally posted by tfoote with karma: 58457 on 2018-03-07 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 30235, "tags": "ros, rosdep, ros-kinetic" }
Finding the output of these three systems when connected in series
Question: I have just started to study Oppenheim's "Signals & Systems, Second Edition" and there is this easy-looking problem that has evaded me its solution for the past 48 hours: consider systems $S1$, $S2$, and $S3$, where $ x[n] \xrightarrow{S1} y_{(1)}[n] = \begin{cases} x[n/2] & \text{if n is even;}\\ 0 & \text{if n is odd.}\\ \end{cases}$ $ x[n] \xrightarrow{S2} y_{(2)}[n] = x[n] + (1/2)x[n-1] + (1/4)x[n-2] $ $ x[n] \xrightarrow{S3} y_{(3)}[n] = x[2n]$ To work out the result of $x[n] \xrightarrow{S1,S2,S3} y_{(1,2,3)}[n]$, I first tried to work out the result of $x[n] \xrightarrow{S1,S2} y_{(1,2)}[n]$ as follows: $x[n] \xrightarrow{S1,S2} y_{(1,2)}[n] = \\ y_{(1)}[n] + (1/2)y_{(1)}[n-1] + (1/4)y_{(1)}[n-2] = \\ \begin{cases} x[n/2] + (1/4)x[(n-2)/2] & \text{if n is even;}\\ (1/2)x[(n-1)/2] & \text {if n is odd.}\\ \end{cases}$ after that, feeding $y_{(1,2)}[n]$ to $S3$, we have: $y_{(1,2)}[n] \xrightarrow{S3} y_{(1,2,3)}[n] = \\ y_{(1,2)}[2n] = \\ \begin{cases} x[2n/2] + (1/4)x[(2n-2)/2] & \text{if 2n is even;}\\ (1/2)x[(2n-1)/2] & \text {if 2n is odd.}\\ \end{cases}$ which simplifies to $y_{(1,2,3)}[n] = x[n] + (1/4)x[n-1].$ but the solution manual says the output is in fact $x[n] \xrightarrow{S1,S2,S3} y_{(1,2,3)} = x[n] + (1/2)x[n-1] + (1/4)x[n-2].$ :( What I want to know is, who is actually wrong: me or the manual? Thanks a lot in advance! Answer: You are right and the manual is wrong. 
Given $S_1, S_2, S_3$ and the respective input-output signals as below: $$x[n] \rightarrow \boxed{S_1} \rightarrow v[n] \rightarrow \boxed{S_2} \rightarrow w[n] \rightarrow \boxed{S_3}\rightarrow y[n]$$ $ x[n] \xrightarrow{S_1} v[n] = \begin{cases} x[n/2] & \text{if n is even;}\\ 0 & \text{if n is odd.}\\ \end{cases}$ $ v[n] \xrightarrow{S_2} w[n] = v[n] + (1/2)v[n-1] + (1/4)v[n-2] $ $ w[n] \xrightarrow{S_3} y[n] = w[2n] $ You would compute the output $y[n]$ in three steps (you have actually combined steps 1 and 2): $v[n] \xrightarrow{S_2} w[n]$: $ \begin{align} w[n] &= v[n] + (1/2)v[n-1] + (1/4)v[n-2] \\ &= \begin{cases} x[n/2] + (1/4)x[(n-2)/2] & \text{if n is even;}\\ (1/2)x[(n-1)/2] & \text{if n is odd.} \end{cases} \end{align} $ (the $(1/2)v[n-1]$ term drops when $n$ is even because $v$ vanishes at odd indices, and vice versa). Then, feeding $w[n]$ to $S_3$, we have: $w[n] \xrightarrow{S_3} y[n] = w[2n]$, so $ y[n] = \begin{cases} x[2n/2] + (1/4)x[(2n-2)/2] & \text{if 2n is even;}\\ (1/2)x[(2n-1)/2] & \text{if 2n is odd.}\\ \end{cases}$ Since $2n$ is always even, only the first case applies. Hence your solution is right: $$y[n] = x[n] + (1/4)x[n-1].$$ The solution manual is wrong: $$x[n] \xrightarrow{S_1,S_2,S_3} y[n] = x[n] + (1/2)x[n-1] + (1/4)x[n-2].$$ You can verify this simply by putting in some test signals, or even more easily by writing a simple MATLAB/Octave/Python script to implement the $S_1,S_2,S_3$ systems.
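Following the answer's own suggestion, here is a simple script (a sketch with an assumed random test input) implementing $S_1, S_2, S_3$ and comparing the cascade against both closed forms:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(32)       # assumed random test input

def S1(x):                        # y[n] = x[n/2] if n even, 0 if n odd
    v = np.zeros(2 * len(x))
    v[::2] = x
    return v

def S2(x):                        # y[n] = x[n] + (1/2)x[n-1] + (1/4)x[n-2]
    xp = np.concatenate([[0.0, 0.0], x])
    return xp[2:] + 0.5 * xp[1:-1] + 0.25 * xp[:-2]

def S3(x):                        # y[n] = x[2n]
    return x[::2]

y = S3(S2(S1(x)))                 # the series connection S1 -> S2 -> S3

xp = np.concatenate([[0.0, 0.0], x])
yours  = xp[2:] + 0.25 * xp[1:-1]                   # x[n] + (1/4)x[n-1]
manual = xp[2:] + 0.5 * xp[1:-1] + 0.25 * xp[:-2]   # x[n] + (1/2)x[n-1] + (1/4)x[n-2]

print(np.allclose(y, yours), np.allclose(y, manual))  # True False
```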
{ "domain": "dsp.stackexchange", "id": 8013, "tags": "discrete-signals" }
Why are wavefunctions in Quantum Mechanics shown as complex circular waves instead of real planar waves?
Question: I'm currently learning Quantum Mechanics from online video lectures and resources. In most of the web articles and videos, the wave functions are shown as circular waves $e^{i\omega t}$ instead of planar waves $\sin{\omega t}$. [Note: I'm considering a fixed position and hence the equation $e^{i(k\cdot r + \omega t)}$ reduces to $e^{i\omega t}$] Some examples from the web: This video shows the wave amplitude to be rotating around the position (i.e. a circular wave in accordance with $e^{i\omega t}$): Quantum Wave Function Visualization The Wikipedia Article on Schrödinger equation describes the plane wave using $e^{i(k\cdot r + \omega t)}$ instead of $\sin{\omega t}$ even though they call it a planar wave: Schrödinger equation In this Video the derivation of Probability density is based on a circular wave: Quantum Mechanics 1 Lecture 3 Answer: There's a misunderstanding what the word "plane" represents in the term "plane wave". A plane wave is a wave in which the surface of constant phase (wavefront) is a plane: (image source) What is shown as a circular thing that rotates for $e^{i\omega t}$ is the phasor that represents the value of the wavefunction at a given (single!) point of space. Phasors are used not only for quantum mechanical wavefunctions: this concept originated in the theory of electric circuits, and is also useful for treatment of other types of waves—even real-valued—e.g. electromagnetic. What makes quantum mechanical wavefunction special is that it's not usually observable, only its absolute value is. But the effect of interference of quantum particles, like in the double-slit experiment, makes it necessary to introduce an additional parameter to capture this kind of effects. This parameter is the phase, and it's the thing that makes the phasor rotate in the animations you see in the resources on quantum mechanics. 
Note that a phasor is a vector not in ordinary physical space: it's a vector in the complex plane, and it doesn't point in any direction in real physical space; rather, it is a mathematical abstraction.
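A two-line numeric illustration of the phasor picture (with an assumed angular frequency): the modulus of $e^{i\omega t}$ stays fixed at 1 while its real part oscillates, i.e. the "rotation" happens entirely in the complex plane, not in physical space:

```python
import numpy as np

omega = 2 * np.pi                  # assumed angular frequency, for illustration
t = np.linspace(0.0, 1.0, 500)
psi = np.exp(1j * omega * t)       # the phasor e^{i omega t} at one point in space

# |psi| is constant (the observable |psi|^2 never changes), while Re(psi)
# oscillates like a cosine: the rotation lives in the complex plane.
print(np.allclose(np.abs(psi), 1.0), np.allclose(psi.real, np.cos(omega * t)))
# True True
```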
{ "domain": "physics.stackexchange", "id": 68290, "tags": "quantum-mechanics, waves, wavefunction, schroedinger-equation, complex-numbers" }
Can someone prove that the $\tilde{E}$ and $\tilde{H}$ fields in a waveguide look as pictured?
Question: Hi, I'm trying to use the solution to the wave's equation in a rectangular waveguide for $\tilde{E}$ and $\tilde{H}$ to show how I can get the above picture. For example, why is the magnetic field looping around? Why is the electric field diverging towards the walls? Can someone show me how I can get the above picture based on the following set of equations? (taken from here) Thank you! Answer: For example, why is the magnetic field looping around? Yours is nothing more (nor less) than the intuitive statement of $\nabla\cdot B=0$. Flux lines of a divergenceless field cannot "begin" or "end"; they must loop if $\nabla\cdot B=0$ holds at all points. Otherwise, a nonzero divergence betokens charge density, as with the electric field lines which are described by $\nabla\cdot E=\rho/\epsilon_0$. Why is the electric field diverging towards the walls? The walls are assumed to be perfect conductors. That means that, if there were a tangential component of the electric field at the walls, charge would instantly shift, thus giving rise to a cancellation field. Equilibrium is reached when the electric field lines pierce the walls at right angles. You assume equilibrium because you must assume a good enough conductivity that the charge shifting is a great deal faster than the wave's frequency. The field lines end on the wall, in keeping with $\nabla\cdot E=\rho/\epsilon_0$: there is a time varying charge density in the walls. You may even want to try graphing some of those field lines: for the magnetic field for example, the last two equations tell you that $\frac{H_y}{H_x} = \frac{m\,b}{n\,a}\, \tan\left(\frac{m\,\pi\,x}{a}\right)\,\cot\left(\frac{n\,\pi\,y}{b}\right)$. 
$\frac{H_y}{H_x}$ defines the direction of the tangent to the field line, so you have a differential equation: $$\frac{\mathrm{d}\,y}{\mathrm{d}\,x}=\frac{m\,b}{n\,a}\, \tan\left(\frac{m\,\pi\,x}{a}\right)\,\cot\left(\frac{n\,\pi\,y}{b}\right)$$ which is readily integrable and which you can plot in something like Mathematica.
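For what it's worth, the differential equation separates: $\tan(n\pi y/b)\,dy = \frac{m\,b}{n\,a}\tan(m\pi x/a)\,dx$, which integrates to field lines of the form $\cos(n\pi y/b) = K\cos(m\pi x/a)$ (my own integration, not from the original answer). A quick numeric check of this invariant along an Euler-integrated field line, with assumed mode numbers and guide dimensions:

```python
import numpy as np

# Assumed mode and guide dimensions for the illustration.
mode_m, mode_n, a, b = 1, 1, 2.0, 1.0

def rhs(x, y):
    """dy/dx = (m b)/(n a) tan(m pi x / a) cot(n pi y / b)."""
    return (mode_m * b) / (mode_n * a) * np.tan(mode_m * np.pi * x / a) / np.tan(mode_n * np.pi * y / b)

# Start from an interior point and record the invariant K there.
x, y = 0.3, 0.4
K = np.cos(mode_n * np.pi * y / b) / np.cos(mode_m * np.pi * x / a)

h = 1e-5
for _ in range(10_000):     # simple forward-Euler trace of the field line
    y += h * rhs(x, y)
    x += h

# cos(n pi y / b) / cos(m pi x / a) should stay (approximately) constant:
deviation = abs(np.cos(mode_n * np.pi * y / b) / np.cos(mode_m * np.pi * x / a) - K)
print(deviation)
```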
{ "domain": "physics.stackexchange", "id": 17989, "tags": "electromagnetism, waveguide" }
Math behind trajectory planning
Question: Let's assume the very simple case of a particle and a control system in one-dimensional space, so our particle can move only in a straight line and the dynamics of the system are described by $m\vec{a} = u$. Now the problem: we would like to make our particle move from point $A$ to point $B$ in time $t$ and constrain our acceleration by some value $a_{m}$, i.e. $a$ cannot exceed $a_{m}$ at any moment. How would one do this, assuming that our control system allows us to control either velocity or acceleration? The most important things here are the names of the mathematical methods behind this task and an explanation of how to apply them. Also consider that $x(0) = A = 0\\ x(t) = B\\ v(0)=0\\a(0)=0\\ v(t)=0\\ a(t)=0$ Answer: Since the problem is one dimensional, you are actually asking to compute a velocity profile. (A velocity profile is the information of how a path is traversed with respect to time.) Now the problem is "How to travel $B$ units within time $T$?" (Let's call the duration $T$ instead.) A velocity profile can be viewed as a curve in the $v$-$t$ (velocity vs time) plane. And as we all know, the area under a curve in that $v$-$t$ plane is the displacement. So any curve which passes through points $(v_0, t_0) = (0, 0)$ and $(v_2, t_2) = (0, T)$ with area $B$ will be what you are looking for. One velocity profile which solves the problem is as shown below. It contains two segments. From time $0$ to $t_s$, you travel with a constant positive acceleration. Then after that you travel with a constant negative acceleration (of equal magnitude). The peak velocity $v_p$ can be easily computed since we know that $$ \frac{1}{2}v_{p}T = B. $$ This is fine as long as the magnitude of acceleration of both parts, $|a| = v_p/t_s = 2v_p/T$, does not exceed $a_m$. Normally, people would ask how to get from $x_0 = A$ to $x_1 = B$ as fast as possible.
When there are only velocity and acceleration limits, the time-optimal velocity profile can be computed analytically (see, e.g., this paper) using polynomial interpolation. People may also be interested in other higher-order constraints such as jerk limits (i.e., limits on the rate of change of acceleration). I don't think there is any more specific name for these things than something like trajectory generation using polynomials or splines, etc.
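The triangular profile described in the answer can be written down directly (a sketch with assumed values for $B$, $T$, and the acceleration limit $a_m$; note it uses $t_s = T/2$):

```python
# Assumed displacement, duration, and acceleration limit for the illustration.
B, T = 10.0, 4.0
a_m = 5.0

v_p = 2.0 * B / T            # from (1/2) * v_p * T = B
a = 2.0 * v_p / T            # required constant acceleration magnitude
assert a <= a_m, "goal not reachable within the acceleration limit"

def x(t):
    """Position along the triangular velocity profile, x(0) = 0, x(T) = B."""
    if t <= T / 2:
        return 0.5 * a * t**2                 # accelerating phase
    tau = t - T / 2
    return B / 2 + v_p * tau - 0.5 * a * tau**2  # decelerating phase

print(x(T / 2), x(T))        # B/2 at the peak, B at the end
```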
{ "domain": "robotics.stackexchange", "id": 1376, "tags": "control, motion-planning, movement" }
Get the output for each value of the input - MATLAB ODE
Question: I am simulating a mass - damper - system in Matlab and I have the following vector (1x100) as input to my system: ut = linspace(0, 10); u = 5 * sin(2 * ut) + 10.5; % input of our system - external force Now, I want to take the value of the output for each value of the input vector using this differential equation: function dx = odefun_4(t,x) m = 15; b = 0.2; k = 2; dx = [x(2); u/m - (b/m)*x(2) - (k/m)*x(1)]; end which is called from my main .m file like this: [t,X] = ode45(@odefun_4, [0 10], [0;0]) I tried to pass the u vector by making it a global variable, but I get an error stating: Dimensions of arrays being concatenated are not consistent. I also tried to set the Refine parameter using odeset and the options argument of the ode45 solver, but I still don't get the desired results. I could really use some help. Answer: If you want $u$ to be considered as a smooth continuous function, then you should define it as such, since you also have $t$ available as well. So you could use: function dx = odefun_4(t,x) m = 15; b = 0.2; k = 2; u = 5 * sin(2 * t) + 10.5; dx = [x(2); u/m - (b/m)*x(2) - (k/m)*x(1)]; end I often also find it more convenient to use anonymous functions in Matlab, for example: m = 15; b = 0.2; k = 2; u = @(t) 5 * sin(2 * t) + 10.5; odefun_4 = @(t,x) [x(2); u(t)/m - (b/m)*x(2) - (k/m)*x(1)];
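For comparison, the same fix in Python (a sketch using a fixed-step RK4 integrator rather than ode45): the input $u$ is defined as a function of $t$ and evaluated inside the right-hand side, instead of passing a pre-sampled 1x100 vector into the ODE function:

```python
import numpy as np

m, b, k = 15.0, 0.2, 2.0
u = lambda t: 5.0 * np.sin(2.0 * t) + 10.5    # external force, as in the question

def f(t, x):
    """Right-hand side of the mass-damper system in first-order form."""
    return np.array([x[1], u(t) / m - (b / m) * x[1] - (k / m) * x[0]])

t, x, h = 0.0, np.array([0.0, 0.0]), 0.001
while t < 10.0 - 1e-12:                       # classic 4th-order Runge-Kutta
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

print(x[0])   # position at t = 10
```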
{ "domain": "engineering.stackexchange", "id": 2441, "tags": "mathematics, matlab" }
Inorder Traversal of the Ternary Tree
Question: As per Wikipedia, the algorithm for in-order traversal of a binary tree is: If the current node is empty/NULL/None, return nothing. Traverse the left subtree by recursively calling the in-order function. Display the data part of the root (or current node). Traverse the right subtree by recursively calling the in-order function. I was interested in the algorithm for in-order traversal of a ternary tree. Upon referring to Professor Robert Sedgewick's lecture on Ternary Search Tries, I found that if I do the in-order traversal on a Ternary Search Tree (a type of Trie data structure) for a particular searched string, then the visiting order of nodes should be: Check if the current node is empty or null. Traverse the left subtree by recursively calling the in-order function. Display the data part of the current node. Traverse the middle subtree by recursively calling the in-order function. Traverse the right subtree by recursively calling the in-order function. But I got results different from those claimed in one Assignment Problem and in one Competitive Exam Problem. Problem 1: Find the in-order traversal of the following tree. My Answer: AKBJCLIEDHFG Given Answer: AKBJCLIDEHFG Problem 2: Consider the rooted tree with the vertex labelled P as root. Find the order in which nodes are visited during an in-order traversal of the tree. My Answer: QSPTRUWV Given Answer: SQPTRWUV Please verify whether my answers are correct or not, and if I am wrong somewhere, please correct me. Answer: In my opinion, the answer should be: AKBJCLIEDHFG Your answer is ((A) (K) ((B) (J) (C))) (L) (I) (((E) (D)) (H) (F) (G)) Please take a close look at the part ((E) (D)). Here lies the ambiguity/confusion/uncertainty. The ambiguity is whether D is the left subtree, the middle subtree, or the right subtree of E. Because of the limited drawing space that shows the tree, it can be argued in a few ways. I could swear that D is meant to be the left subtree of E.
On a different day, I might, agreeing with you, insist that D is quite obviously the middle subtree of E. On another day, I could stretch myself a little and unabashedly claim that D is in fact the right subtree of E. Unless your textbook, or whatever material you have been using, has defined rules for how to tell the class of a subtree from the way it is drawn in cases of ambiguity, it is impossible to conclude which subtree of E the node D is. So your question reduces to whether there are such rules in your material, or in your instructor's notes or oral guidance, or in whatever general convention your context uses to interpret "left", "middle", and "right". This question is not so much about computer science as about linguistics, drawings, and convention. If I had to pick one solution with no context, I would be very frustrated in deciding whether D is a left subtree or right subtree of E. I would choose one of many possible actions below, without any particular preference: try finding the context or the rules; just choose the left subtree; just choose the middle subtree; wave my hands, declaring that there is no value in solving a question that is not well-formed; present two or three solutions, each with its assumption stated clearly; redraw the graph in the question; raise a question about that question to seek others' judgements, as you just did (had I been a student, my TA would have been my savior and my professor the ultimate arbitrator); or take the last option, which stands for all the remaining possibilities. (By the way, I searched the lecture briefly and have not found any definitive guide on how to tell a left from a middle or right subtree. Of course, I might have missed some hints or conspicuous rules.)
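For reference, the five-step traversal listed in the question is straightforward to implement; a minimal Python sketch (the node class and example tree are illustrative, not from the lecture):

```python
class Node:
    def __init__(self, data, left=None, mid=None, right=None):
        self.data, self.left, self.mid, self.right = data, left, mid, right

def inorder(node, out):
    """In-order traversal of a ternary tree: left, node, middle, right."""
    if node is None:          # step 1: empty node
        return
    inorder(node.left, out)   # step 2: left subtree
    out.append(node.data)     # step 3: current node
    inorder(node.mid, out)    # step 4: middle subtree
    inorder(node.right, out)  # step 5: right subtree

# Hypothetical tree: root B with children A (left), C (middle), D (right).
root = Node("B", left=Node("A"), mid=Node("C"), right=Node("D"))
out = []
inorder(root, out)
print("".join(out))  # ABCD
```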
{ "domain": "cs.stackexchange", "id": 20034, "tags": "algorithms, data-structures, trees, tree-traversal" }
Do multiple electrons exist during superposition?
Question: Wikipedia says: Quantum superposition is a fundamental principle of quantum mechanics that holds that a physical system—such as an electron—exists partly in all its particular, theoretically possible states (or, configuration of its properties) simultaneously;[...] What interests me is what happens when location is the state that is superposed. If an electron exists in two locations at once, as a result of quantum superposition, does that mean that the mass of the universe has increased so long as the superposition exists? So in other words, does it follow that effectively two electrons exist during superposition? Why or why not? Answer: In QM you have only the one electron, but according to the Copenhagen interpretation (which is positivist) we can only know what we measure, and we may not measure position/momentum with equal precision as they are complementary. This is popularly interpreted as indeterminacy of position, or as the wave nature of the particle. The first cannot be quite right, as one expects Heisenberg's uncertainty principle to hold even if measurements are not taking place; and neither is the second - it is neither particle nor wave, as traditionally envisaged, but something else entirely. In QFT, we no longer have an electron, but an electron field. The excitations of this field represent electrons in different positions. But in a sense the excitations could be said to exist globally in the field. According to Heisenberg's uncertainty principle you can have variation in mass/energy with respect to time. So yes, the energy does vary, but there is still a conservation law of some kind.
{ "domain": "physics.stackexchange", "id": 11708, "tags": "quantum-mechanics, quantum-interpretations, superposition" }
Could epicycles approximate anything to any precision? In what way is QED different?
Question: Just curiosity, as follows: I was trying to explain/illustrate to a non-technical friend that physics is just "mathematical models", which may or may not represent/correspond_to some "underlying reality". And we can't infer it does just because the calculated numbers work (correspond to observed measurements). And the example that crossed my mind was this: classical Greeks (mostly) thought planets revolve around the earth. But to explain retrograde motion, etc, they introduced epicycles. And when that didn't exactly work, they introduced epicycles on epicycles. Now, I suggested, if they'd known a little more math, they could've "expanded" the observed motion in epicycles (if epicycles are "complete" for describing such orbital curves). And then they could've argued along the lines, "Look, our calculated numbers are accurate to 16 significant decimal digits. So our epicycle model of planetary motion must be right. How could we obtain such incredible accuracy otherwise???" So how good/bad an illustration is this? Are epicycles complete in this sense? And, of course, an underlying question I didn't mention to my friend: how can you "protect" QED, etc, from such objections? Or can't you? Answer: Yes they could, here is a fun YouTube video where Homer Simpson is sketched by a tower of epicycles. We can think of epicycles as sums of $a_ne^{i\lambda_n t}$ by identifying the plane of the ecliptic with the complex plane. The $a_n$ then code their radii and phases and $\lambda_n$ are the frequencies. This is a generalization of the Fourier expansion, in which $\lambda_n$ must all be integers. Any continuous almost periodic function admits such an expansion, the epicyclic exponents are complete in their space (some discontinuous functions can also be approximated). Such functions were introduced and studied by Harald Bohr, the famous physicist's brother. In particular any continuous periodic motion can be accomodated by epicycles. 
For more details see Mathematical Power of Epicyclical Astronomy by Hanson. As for QED, it is at least protected from the Homer Simpson objection: unlike epicyclic astronomy, it only has finitely many parameters to "adjust". And it made predictions of (even today) remarkable accuracy without any analog of the mounting of epicycles upon epicycles that Islamic astronomers engaged in at the end of the Middle Ages; see Ancient Planetary Model Animations, especially the Arabic models for the outer planets.
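The epicycle sum $\sum_n a_n e^{i\lambda_n t}$ described above is easy to illustrate numerically in the Fourier special case (integer frequencies). The sketch below is my own construction - the test curve, a superposition of three circular motions, is an arbitrary choice - and shows that keeping more epicycle terms drives the approximation error down:

```python
import numpy as np

# Sketch: approximate a closed plane curve z(t) by a finite "epicycle" sum
# a_n * exp(i * n * t) -- the Fourier special case with integer frequencies.
# The test curve (three superposed circular motions) is an arbitrary choice.

t = np.linspace(0, 2 * np.pi, 2048, endpoint=False)
z = np.exp(1j * t) + 0.3 * np.exp(-2j * t) + 0.1 * np.exp(5j * t)

def epicycle_approx(z, n_terms):
    """Keep the n_terms epicycles with the lowest |frequency|."""
    coeffs = np.fft.fft(z) / len(z)                # radii and phases a_n
    freqs = np.fft.fftfreq(len(z), d=1 / len(z))   # integer frequencies n
    keep = np.argsort(np.abs(freqs))[:n_terms]
    return sum(coeffs[k] * np.exp(1j * freqs[k] * t) for k in keep)

err5 = np.max(np.abs(z - epicycle_approx(z, 5)))    # misses the freq-5 epicycle
err51 = np.max(np.abs(z - epicycle_approx(z, 51)))  # captures every term
print(err5, err51)  # ~0.1 vs ~1e-13: more epicycles, better approximation
```

Swapping the test curve for any sampled closed curve (the Homer Simpson outline included) works the same way; only the number of terms needed for a given accuracy changes.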
{ "domain": "physics.stackexchange", "id": 41028, "tags": "mathematics" }
If the gravitational force were inversely proportional to distance (rather than distance squared), will celestial bodies fall into each other?
Question: If gravity was inversely proportional to distance, will the dynamics of celestial bodies be much different from our world? Will celestial bodies fall into each other? Answer: Why not test it? The following Mathematica code numerically integrates the equations of motion for $F\propto 1/r$. G = 1; M = 1; T = 20; r0 = 1; dv = .1; sols = NDSolve[{ x''[t] == -((G M)/(x[t]^2 + y[t]^2)) x[t], y''[t] == -((G M)/(x[t]^2 + y[t]^2)) y[t], x[0] == r0, y[0] == 0, x'[0] == 0, y'[0] == Sqrt[G M] + dv}, {x, y}, {t, 0, T}]; ParametricPlot[ Evaluate[{{Cos[t], Sin[t]}, {x[t], y[t]}} /. sols], {t, 0, T}, AspectRatio -> 1] where $T$ is the integration time, $r0$ is the starting radius and $dv$ is the deviation from a circular trajectory. The circular trajectory has been calculated with the help of @joseph h’s answer. This code gives the following plots for different $dv$: The blue circle shows the reference circular trajectory. We notice two important things. Firstly the orbits precess. They generally don't end up at their starting point. Non-precessing orbits are a special characteristic of Keplerian orbits. Secondly the orbits are still bound. They don't spiral inward. To make sense of this we can look at the potential $V(r)=GM\log(r)$. If you plot this and compare it to $V(r)=-\frac{GM}{r}$ they actually have very similar shape. But, because $\log(r)$ doesn't have an asymptote, there are no longer escape trajectories. Every orbit will eventually return even though it will take very long to return for large velocities. Edit: for reference I will also include these plots for a $F\propto 1/r^2$ to get a sense of how large $dv$ is. At $dv=+0.5$ we already have an escape trajectory.
{ "domain": "physics.stackexchange", "id": 75999, "tags": "newtonian-mechanics, gravity, angular-momentum, orbital-motion" }
Angle of refraction at minimum deviation of two different colours
Question: I was thinking, if two separate light beams of red and violet colour are passed separately through an equilateral prism, such that the angle of deviation is minimum in both cases, will the angle of refraction inside the prism just simply be 30 degrees (from the relation A=2r) in both cases, or do we have to consider the fact that one beam is of red colour and the other of violet colour (which have different angles of refraction in a non-minimum-deviation case)? So basically my question is: does the colour (wavelength) of light have an effect on the angle of refraction when we are specifically considering the case of minimum deviation through an equilateral prism? Answer: Yes, the angle of refraction would be the same for both lights, but their deviations would not be equal. Let's work it out for a thin prism (as we can do the calculation for it!). The angle of deviation is given by $$δ=(n-1)A$$ where n is the refractive index of the prism and A is the angle of the prism. The refractive index of a material depends upon the wavelength of the light (it is an inverse relation). Hence n will be greater for violet, as a result of which it will suffer greater deviation than red light (see dispersion of light by a prism). Now the point to be noted here is that the angle of incidence will not be equal for the two lights if each is to have minimum deviation (calculation done below). From Snell's Law, $$n_1\sin i=n_2\sin r$$ Assuming $n_1=1$ and $n_2=n$ and the angle of incidence to be small (in order to get a clear picture), $$i=nr$$ Also $$r=\frac{A}{2}$$ so $$i=n\frac{A}{2}$$ As mentioned above, n is greater for violet light, and hence its angle of incidence (for minimum deviation) will be greater.
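As a numerical companion to the derivation above, here is a sketch using the exact minimum-deviation relations $r = A/2$ and $\sin i = n\sin(A/2)$ rather than the small-angle thin-prism approximation. The refractive indices for red and violet are illustrative values for a typical glass, not measured data:

```python
import math

# Numeric companion to the answer, using the exact minimum-deviation
# relations r = A/2 and sin(i) = n * sin(A/2) (not the small-angle
# thin-prism approximation). The indices are illustrative, not data.

A = math.radians(60)                 # equilateral prism
n_red, n_violet = 1.514, 1.532       # assumed indices for red and violet light

def min_deviation_incidence(n, A):
    return math.degrees(math.asin(n * math.sin(A / 2)))

r = math.degrees(A / 2)              # same 30 degrees for both colours
i_red = min_deviation_incidence(n_red, A)
i_violet = min_deviation_incidence(n_violet, A)
print(r, i_red, i_violet)            # 30.0; violet needs the larger incidence
```

So the internal angle of refraction is 30 degrees for both colours, while the incidence angle that achieves minimum deviation differs with wavelength.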
{ "domain": "physics.stackexchange", "id": 39843, "tags": "refraction" }
Proof that tensor product of unit vectors is a unit vector
Question: I am trying to find a simple proof that $\|v \otimes u\| = 1 $ if $\|v\|=1$ and $\|u\|=1$. I have a proof by induction, where I can fix the length of $v$ and show by induction on the length of $u$ that the previous statement is true. The base case is for length of $u=2$. Let $u = [\alpha_1, \alpha_2]$ and $v=[v_1, \dots, v_n]$; we get a formula of the kind $$ \sum_{i=1}^n (\alpha_1v_i)^2 + \sum_{i=1}^n (\alpha_2v_i)^2 = \sum_{i=1}^n v_i^2(\alpha_1^2+\alpha_2^2) = 1$$ Do you have another idea, maybe using properties of the tensor product or properties of the norm? Answer: Let $\mathbf{u}$ and $\mathbf{v}$ be two vectors, which we write as: $$\mathbf{u}=\sum_{j=1}^nu_j\mathbf{e}_j$$ and: $$\mathbf{v}=\sum_{j=1}^nv_j\mathbf{e}_j$$ with $\left\{\mathbf{e}_j\right\}_j$ being an orthonormal basis of the Hilbert space we're working in. We thus have: $$\begin{align}\mathbf{u}\otimes\mathbf{v}&=\left[\sum_{j=1}^nu_j\mathbf{e}_j\right]\otimes\left[\sum_{k=1}^nv_k\mathbf{e}_k\right]\\&=\sum_{j=1}^nu_j\left[\mathbf{e}_j\otimes\sum_{k=1}^nv_k\mathbf{e}_k\right]\\&=\sum_{j=1}^n\sum_{k=1}^nu_jv_k\left[\mathbf{e}_j\otimes\mathbf{e}_k\right]\end{align}$$ We now want to compute the squared norm of this vector. Note that $\left\{\mathbf{e}_j\otimes\mathbf{e}_k\right\}_{j,k}$ is an orthonormal basis of the new Hilbert space we're working in. $$\begin{align}\|\mathbf{u}\otimes\mathbf{v}\|^2&=\sum_{j=1}^n\sum_{k=1}^n\overline{u_jv_k}u_jv_k\\&=\underbrace{\left(\sum_{j=1}^n\overline{u_j}u_j\right)}_{\|\mathbf{u}\|^2}\underbrace{\left(\sum_{k=1}^n\overline{v_k}v_k\right)}_{\|\mathbf{v}\|^2}\end{align}$$ Thus, since $\sqrt{\|\mathbf{u}\|^2\times\|\mathbf{v}\|^2}=\sqrt{\|\mathbf{u}\|^2}\times\sqrt{\|\mathbf{v}\|^2}=\|\mathbf{u}\|\times\|\mathbf{v}\|$, it follows that if $\mathbf{u}$ and $\mathbf{v}$ are unit vectors, then so is $\mathbf{u}\otimes\mathbf{v}$.
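The identity $\|\mathbf{u}\otimes\mathbf{v}\|=\|\mathbf{u}\|\,\|\mathbf{v}\|$ derived above is easy to sanity-check numerically, using np.kron as one coordinate representation of the tensor product; the sample vectors are arbitrary:

```python
import numpy as np

# Numerical check of ||u (x) v|| = ||u|| * ||v||, with np.kron as one
# coordinate representation of the tensor product. Sample vectors arbitrary.

rng = np.random.default_rng(0)
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)

lhs = np.linalg.norm(np.kron(u, v))
rhs = np.linalg.norm(u) * np.linalg.norm(v)
print(lhs, rhs)  # equal up to floating-point error

# In particular, normalizing u and v makes u (x) v a unit vector:
u_hat, v_hat = u / np.linalg.norm(u), v / np.linalg.norm(v)
print(np.linalg.norm(np.kron(u_hat, v_hat)))  # ~1.0
```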
{ "domain": "quantumcomputing.stackexchange", "id": 4774, "tags": "linear-algebra" }
Simulating Toffoli Gates with Fredkin Gates with no garbage bits
Question: I found this answer for how to simulate a Toffoli gate with a Fredkin gate; however, it leaves a bunch of garbage bits. Is there some way to do this without garbage bits (i.e., at the end of the computation, the only outputs are $a,b,c\oplus ab$ and a bunch of 1s and 0s that do not depend on the inputs)? The simple answer would be to fan-out the result - $c\oplus ab$ - and then reverse the computation on one of the duplicates of this. However, this leaves an extra bit of $\overline{c\oplus ab}$ that I don't know how to remove. No matter how hard I try, I can't seem to find an arrangement of Fredkin gates and not gates that will turn $(x,x)$ into $(x,0)$, but I don't know if this is provably impossible or just something I can't figure out. Answer: Fredkin gates preserve the total number of ON bits. Toffoli gates don't. Because of that, it's impossible to implement Toffoli gates with Fredkin gates without some source/dump of ON bits. See the paper 'The Classification of Reversible Bit Operations' by Scott Aaronson et al.
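The counting argument can be verified exhaustively over all eight 3-bit inputs. This small sketch is my own, modelling the bits as plain tuples; it confirms that the Fredkin gate (a controlled swap, hence a permutation of the bits) preserves the number of ON bits while the Toffoli gate does not:

```python
from itertools import product

# Exhaustive check of the counting argument: the Fredkin gate (controlled
# swap) preserves the number of ON bits on every input, while the Toffoli
# gate (controlled-controlled-NOT) does not.

def fredkin(c, a, b):
    return (c, b, a) if c else (c, a, b)   # swap a and b iff c == 1

def toffoli(a, b, c):
    return (a, b, c ^ (a & b))             # flip c iff a == b == 1

fredkin_preserves = all(sum(fredkin(*bits)) == sum(bits)
                        for bits in product((0, 1), repeat=3))
toffoli_preserves = all(sum(toffoli(*bits)) == sum(bits)
                        for bits in product((0, 1), repeat=3))
print(fredkin_preserves, toffoli_preserves)  # True False
```

The failing input for Toffoli is (1, 1, 0) -> (1, 1, 1), which raises the ON-bit count and is exactly why the outputs of any Fredkin-only circuit cannot all be input-independent constants here.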
{ "domain": "physics.stackexchange", "id": 42303, "tags": "quantum-information, quantum-computer" }
Implementation of tree with different node types and faux-polymorphism in C
Question: I'm currently learning C by working on my first project. It's a calculator with a parser that transforms input into an operator tree. These trees consist of nodes of different types: Operators (inner nodes), constants and variables. My first attempt was to create a single struct node and just put everything a Node might have in it -- which wastes a lot of memory because not all of the fields are used, depending on the actual type a Node has. My solution: I implemented three different structs for the three different Node types. typedef enum { NTYPE_OPERATOR, NTYPE_CONSTANT, NTYPE_VARIABLE } NodeType; typedef NodeType* Node; struct VariableNode_ { NodeType type; char var_name[]; }; struct ConstantNode_ { NodeType type; double const_value; }; struct OperatorNode_ { NodeType type; Operator *op; size_t num_children; Node children[]; }; They all have a type as their first field -- a header to let me know which type a Node actually has. A Node is always a pointer to the heap, so I typedef'ed it to hide its pointer type. A node may be created and used by the following "constructors" and accessors: Node malloc_variable_node(char *var_name) { VariableNode res = malloc(sizeof(struct VariableNode_) + (strlen(var_name) + 1) * sizeof(char)); if (res == NULL) return NULL; res->type = NTYPE_VARIABLE; strcpy(res->var_name, var_name); return (Node)res; } Node malloc_constant_node(ConstantType value) { ConstantNode res = malloc(sizeof(struct ConstantNode_)); if (res == NULL) return NULL; res->type = NTYPE_CONSTANT; res->const_value = value; return (Node)res; } Node malloc_operator_node(Operator *op, size_t num_children) { if (num_children > MAX_ARITY) { // Max. 
arity exceeded return NULL; } OperatorNode res = malloc(sizeof(struct OperatorNode_) + num_children * sizeof(Node)); if (res == NULL) return NULL; for (size_t i = 0; i < num_children; i++) { res->children[i] = NULL; } res->type = NTYPE_OPERATOR; res->op = op; res->num_children = num_children; return (Node)res; } void free_tree(Node tree) { if (tree == NULL) return; if (get_type(tree) == NTYPE_OPERATOR) { for (size_t i = 0; i < get_num_children(tree); i++) { free_tree(get_child(tree, i)); } } free(tree); } NodeType get_type(Node node) { return *node; } Operator *get_op(Node node) { return ((OperatorNode)node)->op; } size_t get_num_children(Node node) { return ((OperatorNode)node)->num_children; } Node get_child(Node node, size_t index) { return ((OperatorNode)node)->children[index]; } Node *get_child_addr(Node node, size_t index) { return &((OperatorNode)node)->children[index]; } void set_child(Node node, size_t index, Node child) { ((OperatorNode)node)->children[index] = child; } char *get_var_name(Node node) { return ((VariableNode)node)->var_name; } double get_const_value(Node node) { return ((ConstantNode)node)->const_value; } Is this common practice? Would there be a better way to do it? Would it have been okay to just waste the space and use a single struct as I did before? This would almost always safe me a level of indirection when dealing with trees and transforming them because all Nodes would have the same size and could be recycled. If you want to look at the full file(s): node.h hosted at GitHub Thanks in advance :) Edit: Changed ConstantType to double because the type was not defined in the excerpt I posted Answer: Is this common practice? Enum-based type tracking in C? Yes. Would there be a better way to do it? Depends on a few things, including your definition of better. 
I think this is fine, but if your type-conditional code ends up being extremely long, then you can move to a more C++-style approach, where instead of tracking a type enum, you track function pointers; then your if (get_type()) code disappears and you can blindly call the function pointer. It's a little more complicated, and can have performance implications. Would it have been okay to just waste the space and use a single struct as I did before? Again - it depends. How many of these structures are you instantiating? Seven, or seven billion? Wasting the space can actually be more performant - you can simplify your allocation logic. There's yet another option where you don't waste space at all - C allows you to do a dirty trick called union punning. Basically, write multiple versions of the structure whose memory overlaps, and choose the right one based on context. Until performance becomes a very specific and dominant issue, just stick with what you have, which is simple and legible, and resist the urge to prematurely optimize.
{ "domain": "codereview.stackexchange", "id": 36219, "tags": "beginner, object-oriented, c, parsing, polymorphism" }
When is ${catkin_EXPORTED_TARGETS} needed
Question: This is basically a follow-up to #q285772 During my use of ROS, I have adopted the habit of calling the following line on any (C++) target that I try to build: add_dependencies(MYTARGET ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS}) using ${${PROJECT_NAME}_EXPORTED_TARGETS} only when building msg/srv/actions/dynamic_reconfigure in the package I am working on, and ${catkin_EXPORTED_TARGETS} always. This is a constant source of error, as seen in the above-cited question. Thus, I have always advised people to follow the same strategy. In a comment to the above question, @gvdhoorn described the approach of always using ${catkin_EXPORTED_TARGETS} as "cargo-cultish" (nice term :-D) re: adding catkin_EXPORTED_TARGETS: that's a bit cargo cult-ish. I wouldn't add that unless it's needed. Checking the documentation (e.g. wiki, catkin docs 1 or catkin docs 2) I'm not quite sure when to use it. The wiki suggests that: If you have a target which (even transitively) depends on some other target that needs messages/services/actions to be built, you need to add an explicit dependency on target catkin_EXPORTED_TARGETS, so that they are built in the correct order. This case applies almost always, unless your package really doesn't use any part of ROS. Unfortunately, this dependency cannot be automatically propagated. (some_target is the name of the target set by add_executable()): add_dependencies(some_target ${catkin_EXPORTED_TARGETS}) This is basically saying: you need it always (solution 1). The catkin docs rather say: use it if you need any msg/srv/action/dynamic_reconfigure from another catkin package (solution 2). Which is the right way to go? I'd be happy for any insights. Originally posted by mgruhler on ROS Answers with karma: 12390 on 2018-03-23 Post score: 4 Original comments Comment by peci1 on 2022-01-25: I think solution 1 becomes more important with the number of packages you build from source for the first time.
Imagine the theoretical case where you'd like to build whole ROS up to your package from source. If your package's targets are missing the dependency on ${catkin_EXPORTED_TARGETS}, they could be built earlier than the messages from the dependencies are generated. Or at least this is how I understand it. Thanks to the ordering catkin creates, builds with -j1 should succeed, while multi-job builds could run into this issue. Answer: Afaik, the main difference between catkin_EXPORTED_TARGETS and ${PROJECT_NAME}_EXPORTED_TARGETS is that the former contains the exported targets (ie: message, service, action generation targets, dynamic_reconfig, etc) of all your build dependencies, while the latter contains those targets for just the current package (see ros/catkin/cmake/catkinConfig.cmake). By making a target depend on catkin_EXPORTED_TARGETS, that specific target will only be built after all its build dependencies have been built (ie: all the exported targets in all the build dependencies). By adding a dependency on ${PROJECT_NAME}_EXPORTED_TARGETS, a target will be built after the targets in just that set have been built. It's rarely the case that ${catkin_EXPORTED_TARGETS} == ${${PROJECT_NAME}_EXPORTED_TARGETS}, so this can affect the build order significantly. As an extreme example: a node that needs a single header for a custom message that it provides itself, but does have 10 other build dependencies (but none that don't exist already), would be prevented from building until all 10 build dependencies have had all their targets completely built, even though it doesn't actually depend on any of those. See also the Catkin documentation's section on Extracted CMake API reference - catkin_package(..). 
Originally posted by gvdhoorn with karma: 86574 on 2018-03-23 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by gvdhoorn on 2018-03-23: Note that this may not actually be such a big problem, after all: waiting for everything to complete >= waiting for just your specific dependencies. But your question was slightly academic, so the answer is as well. Comment by mgruhler on 2018-03-23: Thanks @gvdhoorn for the detailed answer (and I agree that this is rather academic and of less practical relevance). I've been aware about most parts. Basically, catkin_EXPORTED_TARGETS contains ALL targets (i.e. msg/srv/action/dyn_reconf/libraries etc.) right? This could obviously lead ... Comment by mgruhler on 2018-03-23: to a slow-down of the build process. Even though it should be pretty reproducable then. What I don't get is: How can ${catkin_EXPORTED_TARGETS} == ${${PROJECT_NAME}_EXPORTED_TARGETS} except for the obvious case of both being empty. Doesn't this mean we have some cyclic dependency then? Comment by mgruhler on 2018-03-23: And even then, to my understanding, this would only mean that ${${PROJECT_NAME}_EXPORTED_TARGETS} is only a subset of ${catkin_EXPORTED_TARGETS}, and not completely equal. Or am I missing something?
{ "domain": "robotics.stackexchange", "id": 30425, "tags": "ros-lunar, catkin, ros-kinetic, cmake, ros-indigo" }
What kind of ribes is this?
Question: I have a plant in my USDA zone 5 garden that appears to be in the ribes genus. Details: The location is east of Toronto (Ontario, Canada). The plant is currently flowering (today is May 18th, 2019). It is approx 4 feet tall. The plant flowers every year, but does not set fruit (not self-fruitful). More photos here: https://i.stack.imgur.com/coEp5.jpg https://i.stack.imgur.com/wv0TA.jpg https://i.stack.imgur.com/pFmtk.jpg https://i.stack.imgur.com/KvjGe.jpg What species is it? Answer: I think this is Ribes americanum or wild black currant. I'm basing this on page 306 of Newcomb's Wildflower Guide, an excellent source for the Eastern US and some of Canada. On that page we have two main choices: Base of flowers prickly or bristly... or Base of flowers not prickly or bristly... Your excellent pictures suggest that your specimen is in category 2. From there, we have: a. Flowers solitary or 2-3 in a cluster; branches bearing a few thorns. or b. Flowers 5 or more in racemes. Your specimen looks to have quite a few flowers, so in category 2b we have a choice of Ribes americanum: Whitish or yellowish flowers, longer than wide (and fruit black, you can check later I guess), or Ribes sativum: Greenish flowers, wider than long (and fruit red). Color in the white/green/yellow range is a bit subjective, but your flowers look longer than wide, hence Ribes americanum. Of course, with this in mind you might re-inspect the plant and refine this a bit.
{ "domain": "biology.stackexchange", "id": 9773, "tags": "species-identification, botany, fruit" }
Won't decrease in kinetic energy decrease the momentum as well?
Question: My doubt stems from the question attached. If some kinetic energy gets converted into spring potential energy, shouldn't the velocity of the block be reduced, and subsequently its momentum? I also considered the second block to move due to the spring, but then the spring would still have some potential energy and thus the velocity isn't the highest it can be. Any help regarding the question will be appreciated. Answer: I will assume the statements in the second image refer to 'any moment in time' after A has attached to the spring. Also, velocity in this one-dimensional example is a number whose sign determines its direction. So if A has a higher velocity than B, that means that the distance between A and B is getting smaller, since A is approaching B faster than B is moving away from A. If the distance between them is getting smaller, the length of the spring is getting smaller too. And when a spring is compressed, the bodies that are attached to it gain elastic potential energy. So the system as a whole is gaining potential energy. The law of conservation of energy (which I think this exercise is about) states that the total energy of an isolated system cannot change in time. If there is an increase in potential energy in the system, there must be a decrease in another energy somewhere. This system is isolated from anything else, so the only other source of energy is its own kinetic energy. Therefore, if the velocity of A is higher than that of B, the system gains potential energy and loses kinetic energy. i) is correct and ii) is incorrect. On the other hand, we have momentum. The other law this exercise is likely about is conservation of momentum. As the system is isolated from anything else, its momentum cannot change at any point in time. Therefore iii) and iv) are both incorrect.
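A quick numerical sketch of the setup shows both conclusions at once. All masses, spring constant, and speeds below are illustrative values of my own choosing, not from the original exercise:

```python
# Numerical sketch of the two-block-and-spring setup; masses, spring
# constant, and initial speeds are illustrative values. It shows that
# momentum stays constant at every step while kinetic energy dips while
# the spring is compressed.

m_a, m_b, k = 1.0, 1.0, 50.0      # kg, kg, N/m (assumed)
x_a, x_b = 0.0, 1.0               # positions; spring natural length = 1 m
v_a, v_b = 2.0, 0.0               # A moves toward B, B initially at rest
dt = 1e-4

momenta, kinetic = [], []
for _ in range(20000):
    compression = max(0.0, 1.0 - (x_b - x_a))  # spring pushes only when compressed
    f = k * compression                        # force on B (+x); reaction on A (-x)
    v_a -= (f / m_a) * dt                      # semi-implicit Euler: velocities first
    v_b += (f / m_b) * dt
    x_a += v_a * dt
    x_b += v_b * dt
    momenta.append(m_a * v_a + m_b * v_b)
    kinetic.append(0.5 * m_a * v_a**2 + 0.5 * m_b * v_b**2)

print(max(momenta) - min(momenta))  # ~0: momentum conserved throughout
print(kinetic[0], min(kinetic))     # KE dips below its initial value mid-interaction
```

The kinetic energy reaches its minimum exactly when the two velocities are equal (maximum compression), while the momentum list is flat up to floating-point error.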
{ "domain": "physics.stackexchange", "id": 72673, "tags": "newtonian-mechanics, energy, momentum, inertial-frames, spring" }
Will Q-learning converge to the optimal state-action function when the reward periodically changes?
Question: Imagine that the agent receives a positive reward upon reaching a state $s$. Once the state has been reached, the positive reward associated with it vanishes and appears somewhere else in the state space, say at state $s'$. The reward associated to $s'$ also vanishes when the agent visits that state once, and re-appears at state $s$. This goes on periodically forever. Will discounted Q-learning converge to the optimal policy in this setup? If yes, is there any proof out there? I couldn't find anything. Answer: No, it will not converge in the general case (maybe it might in extremely convenient special cases, not sure, didn't think hard enough about that...). Practically everything in Reinforcement Learning theory (including convergence proofs) relies on the Markov property; the assumption that the current state $s_t$ includes all relevant information, that the history leading up to $s_t$ is no longer relevant. In your case, this property is violated; it is important to remember whether or not we visited $s$ more recently than $s'$. I suppose if you "enhance" your states such that they include that piece of information, then it should converge again. This means that you'd essentially double your state-space. For every state that you have in your "normal" state space, you'd have to add a separate copy that would be used in cases where $s$ was visited more recently than $s'$.
{ "domain": "ai.stackexchange", "id": 946, "tags": "reinforcement-learning, q-learning" }
Transmitted electrons in TEM
Question: When some electrons of the electron beam pass through the specimen, what exactly happens after that in order to produce the black-and-white image? What I understand is that electrons transmitted collide with a film and make that part of it black on the image, and those that don't collide make their part white. But why is it said that electrons that pass readily make the area brighter or more electron-lucent, and vice versa for scattered electrons? Answer: The pixels in a TEM image are whiter or blacker based on the dose of electrons that impact that pixel in whatever detector is being used to form the image, be that photographic film, a scintillation screen, or a direct detector camera. As for how the image that reaches the detector is formed by the specimen being imaged, there are two different mechanisms. Amplitude Contrast is relatively simple - denser parts of the specimen scatter electrons strongly enough that some electrons do not make it to the detector at all, so pixels below these parts of the specimen will receive a lower dose than pixels below less dense parts of the specimen. Phase Contrast is more complex, and is responsible for most of the contrast in high-resolution cryo-TEM. Briefly, the specimen is illuminated by a parallel beam of electrons and every "pixel" of the specimen scatters some electrons but lets most of them pass straight through without interaction. The scattered electrons go off in a different direction than the main beam but are focused back by the lenses to the same point on the detector as the unscattered electrons. Because the scattered beam took a different path from the unscattered beam, and because electrons are waves as well as particles, there is a phase difference between the two beams which causes them to interfere. Typically, areas of the specimen that scatter electrons more will cause destructive interference, so darker pixels. If there is no scattering, then there is no interference, so brighter pixels.
There are many resources explaining the technical details of this mechanism (1, 2, 3), but these tend to focus on crystalline specimens.
{ "domain": "biology.stackexchange", "id": 10339, "tags": "microscopy" }
Robust phase extraction of STFT bins
Question: I am interested in tracking how the phase of a signal at a particular frequency changes over time. The method I am using calculates a series of STFTs and takes the arctangent of the imaginary and real components of the desired bin for each FFT, however, due to noise in the system I am having issues with phase unwrapping. Are there any alternate and perhaps more robust methods of extracting phase from a signal in this manner? Thank you Answer: If your signal is not exactly integer periodic within the width of the FFT, then you may need to window and interpolate. Try windowing the data using a Blackman-Nutall window, doing an FFTshift to center the window at element 0, performing the FFT, interpolating the complex results to the periodicity of your signal (windowed Sinc interpolation works), then taking atan2() of that interpolated complex result. Increasing the Overlap of the FFT windows helps reduce the phase shift between adjacent overlapped windows, which can make phase unwrapping much easier.
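A minimal sketch of the windowed approach described in the answer. All signal parameters are illustrative; np.blackman stands in for the recommended Blackman-Nuttall window, and the tracked tone is placed exactly on a bin so no interpolation step is needed:

```python
import numpy as np

# Sketch of the suggested pipeline: window each frame, fftshift so the
# window is centered at sample 0, take the FFT, then the angle of the bin
# of interest. Parameters illustrative; np.blackman stands in for the
# recommended Blackman-Nuttall window; the tone sits exactly on a bin.

fs, f0, n, hop = 8000, 1000.0, 256, 32   # heavy overlap eases unwrapping
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * f0 * t + 0.3) \
    + 0.05 * np.random.default_rng(1).standard_normal(fs)

win = np.blackman(n)
k = int(round(f0 * n / fs))              # bin of the tracked tone
phases = []
for start in range(0, len(x) - n, hop):
    frame = x[start:start + n] * win
    spec = np.fft.fft(np.fft.fftshift(frame))
    phases.append(np.angle(spec[k]))
phases = np.unwrap(phases)

# With hop an integer number of tone periods, every frame should report the
# tone's initial phase (0.3) minus pi/2 from the sine convention:
print(phases.mean(), 0.3 - np.pi / 2)
```

With the heavy overlap, adjacent frames differ in phase by much less than pi even with noise, which keeps np.unwrap well-behaved; for an off-bin tone the interpolation step from the answer would be added before the angle is taken.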
{ "domain": "dsp.stackexchange", "id": 5344, "tags": "fft, fourier-transform, phase" }
Any way to reduce the Fermi-Dirac EEDF to Maxwell-Boltzmann?
Question: I just read an article which confused me somewhat (B. Deschaud et al 2014 EPL 108 53001). The author claims that for very high temperatures the electron energy distribution function becomes the classical Maxwell-Boltzmann distribution for energy. Note that I'm talking about $$F_{FD}(E)=\frac{2}{(2\pi)^2 n_e} \left(\frac{2m}{\hbar^2}\right)^{3/2}\sqrt{E} \times \frac{1}{1+e^{(E-\mu)/kT}} $$ and that $$F_{MB}(E)=2\sqrt{E/\pi} (kT)^{-3/2} e^{-E/kT}$$ I do see how the Fermi-Dirac distribution becomes Maxwellian in the high-temperature or low-density case, but what about the density of states? Is there some smart series expansion which gets me there? Answer: The key is that for Maxwellian electrons the chemical potential may be written as $$\mu/kT=\ln(n_e \Lambda^3/2) $$ with the thermal de Broglie wavelength $$\Lambda=\left(\frac{2 \pi \hbar^2}{m_e k T} \right)^{1/2} $$ This makes the FD-EEDF indeed become the MB-EEDF.
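The limit itself can be checked numerically: for $\mu/kT \ll 0$ the Fermi-Dirac occupation $1/(1+e^{(E-\mu)/kT})$ approaches the Boltzmann factor $e^{-(E-\mu)/kT}$, and since the $\sqrt{E}$ density-of-states factor appears in both $F_{FD}$ and $F_{MB}$, the two EEDFs then coincide. A small sketch (energies in units of $kT$; the $\mu$ values are illustrative):

```python
import numpy as np

# Numeric sketch of the non-degenerate limit: for mu/kT << 0 the
# Fermi-Dirac occupation 1/(1 + exp((E - mu)/kT)) approaches the
# Boltzmann factor exp(-(E - mu)/kT). Energies in units of kT;
# the mu values are illustrative.

E = np.linspace(0.0, 20.0, 2001)   # E/kT

def fd_occ(E, mu):
    return 1.0 / (1.0 + np.exp(E - mu))

def mb_occ(E, mu):
    return np.exp(-(E - mu))

errs = []
for mu in (-1.0, -5.0, -10.0):     # increasingly non-degenerate
    errs.append(np.max(np.abs(fd_occ(E, mu) - mb_occ(E, mu)) / mb_occ(E, mu)))
print(errs)  # maximum relative error shrinks roughly like exp(mu/kT)
```

The worst-case relative error sits at E = 0 and equals $e^{\mu/kT}/(1+e^{\mu/kT})$, so it vanishes exponentially as the plasma becomes hotter or more dilute.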
{ "domain": "physics.stackexchange", "id": 64552, "tags": "thermodynamics, statistical-mechanics, solid-state-physics" }
MIPS assembly addition program
Question: I am new to assembly and have made a simple addition program to sum two integers read from the keyboard. The program outputs correctly, but I want to know if there is a way to streamline my code. It seems a bit cumbersome for such a simple program and I may have instructions that are unnecessary. # Author: Evan Bechtol # Description: This program prompts the user to enter 2 integers and computes their sum. #---------------------------------------------------------------------------------------# .data A: .word # Store the number 4 as an integer in var1 # $t0 is used B: .word # Store the number 2 as an integer in var2 # $t1 is used S: .word # Store the sum of A and B # $t2 is used Prompt1: .asciiz "Please enter first number: " Prompt2: .asciiz "Please enter second number: " Result: .asciiz "The sum of A and B is: " .text main: #--------------------------------------------------------# #Display first prompt li $v0, 4 # Load instruction "print string" la $a0, Prompt1 # Load prompt into $a0 syscall #Read first integer li $v0, 5 # Read 1st integer la $t0, A # $t0 = A syscall #Store first integer into memory move $t0, $v0 # Move contents in $v0 to $t0 sw $t0, A # A = value at $t0 #--------------------------------------------------------# #Display second prompt li $v0, 4 # Load instruction "print string" la $a0, Prompt2 # Load prompt into $a0 syscall #Read second integer li $v0, 5 # Read 1st integer la $t1, B # $t0 = A syscall #Store second integer into memory move $t1, $v0 # Move contents in $v0 to $t0 sw $t1, B # A = value at $t0 #--------------------------------------------------------# #Add the two variables la $t2, S # $t2 = S add $t2, $t0, $t1 # $t2 = $t0 + $t1 sw $t2, S # S = value at $t2 #Display the Result prompt la $a0, Result # Loads Output label to be printed li $v0, 4 # Sysycall to print string syscall #Display the sum lw $a0, S # $a0 = value at S li $v0, 1 # Syscall to print integer syscall #Exit the program li $v0, 10 # Load exit code to $v0 
syscall Answer: The comments are misleading: #Read second integer li $v0, 5 # Read 1st integer la $t1, B # $t0 = A umm... are we reading second or 1st? Bottom line is, do not overcomment the code. syscall 5 leaves a value in $v0. The contents of $t0 (or $t1) are irrelevant during the syscall. Set them up when you need them: li $v0, 5 syscall la $t0, A move $t0, $v0 You store data to memory just to load them back. This is very anti-assembly. Generally you want to use registers as much as possible, and avoid memory as much as possible: li $v0, 5 syscall move $t0, $v0 ... li $v0, 5 syscall # At this moment you have first integer in $t0, and the second in $v0. # Just add them together. No memory access is necessary. Consult your documentation on which registers are guaranteed to survive a syscall (I suspect, all of them besides $v0). Nothing to simplify reading and printing.
{ "domain": "codereview.stackexchange", "id": 12285, "tags": "beginner, assembly" }
Python genetic sequence visualizer
Question: I'm a beginner at python, and this is my first major project. Inspired by this reddit post, I wrote a program that accepts a DNA sequence (in a FASTA file format) and generates a graphical representation Ebola Virus I've optimized it as much as I can, but I've hit a bottleneck with generating the actual image and rendering the path onto it. I run into memory errors with larger source files. Here are some of the source files I'm testing. I can render everything except for the Fruit Fly genome, which crashes before completion. Does anyone have any tips for refactoring and optimizing this? Particularly I've been wondering if I could break up the image into chunks, render them one by one, save them to disk (like with the array class), and then stitch them back together but I can't figure out a way to. #!/usr/bin/env python3 # Python 3.6 from PIL import Image, ImageDraw from re import sub from copy import deepcopy, copy from os import listdir, path, makedirs from shutil import rmtree import pickle filepath = input("""What is the input file? FASTA format recommended (path/to/file) ?> """) file = open(filepath,'r') print("Serializing %s ..."%filepath) raw = ''.join([n for n in file.readlines() if not n.startswith('>')]).replace('\n',"").lower() file.close() del file raw = sub(r'[rykmswbdhv-]', "n", raw) # Handles miscellaneous FASTA characters raw = sub(r'[^atgcn]', "", raw) # Handles 4 bases and not-known sequence = deepcopy(list(raw)) # Completed filtered list containing all the bases del sub, raw print("The input file has been serialized. (%s items) Calculating path..." 
%str(len(sequence))) action = { # The different bases and their respective colours and movements "a": ((0,255,0),0,-1), #green - Moves up "t": ((255,0,0),0,1), #red - Moves Down "g": ((255,0,255),-1,0), #hot pink - Moves Left "c": ((0,0,255),1,0), #blue - Moves Right "n": ((0,0,0),0,0), #black - Stays on spot } class array(): # Array class that dynamically saves temp files to disk to conserve memory def __init__(self): self.a = [] self.maxmem = int(5e6) # How much data should be let accumulated in memory before being dumped to disk? # 1e6 ~ 20,000mb and 5e6 ~ 100,000mb self.fc = 0 # File Count if path.exists("temp"): rmtree('temp') makedirs("temp") self.path = "temp/temp%d.dat" # Name of temp files def append(self,n): self.a.append(n) if len(self.a) >= self.maxmem: self.fc += 1 with open(self.path%self.fc,'wb') as pfile: pickle.dump(self.a,pfile) # Dump the data del self.a[:] def setupiterate(self): if len(self.a) > 0: self.fc += 1 with open(self.path%self.fc,'wb') as pfile: pickle.dump(self.a,pfile) self.maxfc = copy(self.fc) self.fc = 0 def iterate(self): # This is called in a loop self.fc += 1 with open(self.path%self.fc,'rb') as pfile: return pickle.load(pfile) # Get the data count = [[0,0],[0,0]] # Top left and bottom right corners of completed path curr = [0,0] pendingactions = array() for i in sequence: #get the actions associated from dict curr[0] += action[i][1] curr[1] += action[i][2] if curr[0] > count[0][0]: count[0][0] = curr[0] elif curr[0] < count[1][0]: count[1][0] = curr[0] if curr[1] > count[0][1]: count[0][1] = curr[1] elif curr[1] < count[1][1]: count[1][1] = curr[1] pendingactions.append((copy(curr),action[i][0])) pendingactions.setupiterate() del sequence, deepcopy, copy # Final dimensions of image + 10px border dim = (abs(count[0][0]-count[1][0])+20,abs(count[0][1]-count[1][1])+20) print("The path has been calculated. Rendering image... 
%s"%("("+str(dim[0])+"x"+str(dim[1])+")")) img = Image.new("RGBA", dim, None) # Memory intensive with larger source files draw = ImageDraw.Draw(img) for i in range(0,pendingactions.maxfc): for j in pendingactions.iterate(): # This method returns a single file's worth of data draw.point((j[0][0]+abs(count[1][0])+10,j[0][1]+abs(count[1][1])+10), fill=j[1]) # Plots a point for every coordinate on the path def mean(n): # I couldn't find an average function in base python s = float(n[0] + n[1])/2 return s # Start and End points are dynamically sized to the dimensions of the final image draw.ellipse([((abs(count[1][0])+10)-round(mean(dim)/500),(abs(count[1][1])+10)-round(mean(dim)/500)),((abs(count[1][0])+10)+round(mean(dim)/500),(abs(count[1][1])+10)+round(mean(dim)/500))], fill = (255,255,0), outline = (255,255,0)) #yellow draw.ellipse([((curr[0]+abs(count[1][0])+10)-round(mean(dim)/500),(curr[1]+abs(count[1][1])+10)-round(mean(dim)/500)),((curr[0]+abs(count[1][0])+10)+round(mean(dim)/500),(curr[1]+abs(count[1][1])+10)+round(mean(dim)/500))], fill = (51,255,255), outline = (51,255,255)) #neon blue del count, curr, mean, dim, draw, ImageDraw print("The image has been rendered. Saving...") loc = '%s.png'%filepath.split(".", 1)[0] img.save(loc) img.close() del img print("Done! Image is saved as: %s"%loc) rmtree('temp') print("Temp files have been deleted.") input("Press [enter] to exit") Answer: You should make your array class behave a bit more like a Python list. It would be nice if there was a nice representation of it, if you could extend the list, if you could just iterate over it without having to worry about setting up the iteration first or even initialize it with some values already in the array. Currently you can also not have more than one instance of array, because it deletes the temp directory upon initialization All of this can be achieved using the so-called magic or dunder-methods. 
These are specially named methods that get automatically called by Python in certain circumstances. For a nice representation this is __repr__, if you call print(x) or str(x), it is __str__ and to iterate it is __iter__. With these (and a few more methods) your class becomes: from itertools import islice from os import path, makedirs from shutil import rmtree import pickle class array(): """1D Array class Dynamically saves temp files to disk to conserve memory""" def __init__(self, a=None, maxmem=int(5e6)): # How much data to keep in memory before dumping to disk self.maxmem = maxmem # 1e6 ~ 20,000mb and 5e6 ~ 100,000mb self.fc = 0 # file counter # make a unique subfolder (unique as long as the array exists) self.uuid = id(self) self.dir = ".array_cache/%d" % self.uuid if path.exists(self.dir): rmtree(self.dir) makedirs(self.dir) self.path = self.dir + "/temp%d.dat" # Name of temp files self.a = [] if a is not None: self.extend(a) def append(self, n): """Append n to the array. If size exceeds self.maxmem, dump to disk """ self.a.append(n) if len(self.a) >= self.maxmem: with open(self.path % self.fc, 'wb') as pfile: pickle.dump(self.a, pfile) # Dump the data self.a = [] self.fc += 1 def extend(self, values): """Convenience method to append multiple values""" for n in values: self.append(n) def __iter__(self): """Allows iterating over the values in the array. Loads the values from disk as necessary.""" for fc in range(self.fc): with open(self.path % fc, 'rb') as pfile: yield from pickle.load(pfile) yield from self.a def __repr__(self): """The values currently in memory""" s = "[..., " if self.fc else "[" return s + ", ".join(map(str, self.a)) + "]" def __getitem__(self, index): """Get the item at index or the items in slice. 
Loads all dumps from disk until start of slice for the latter.""" if isinstance(index, slice): return list(islice(self, index.start, index.stop, index.step)) else: fc, i = divmod(index, self.maxmem) with open(self.path % fc, 'rb') as pfile: return pickle.load(pfile)[i] def __len__(self): """Length of the array (including values on disk)""" return self.fc * self.maxmem + len(self.a) I also added docstrings to describe what the methods do and made the size an optional parameter (for testing purposes, mostly) You can use this class like this: x = array(maxmem=5) x.extend(range(21)) print(x) # [..., 20] list(x) # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20] x[4] # 4 x[:4] # [0, 1, 2, 3] len(x) # 21 for i in x: print(i) # 0 # 1 # 2 # ... x2 = array(range(10), maxmem=9) list(x2) # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] This implementation is still far from perfect. Indeed extend could probably be implemented better, by extending in batches of self.maxmem (taking care of the first slice). Adding elements at an arbitrary location is not implemented (__setitem__), and neither is deleting (__delitem__). It would be better if the path was also configurable, so if you have two instances of array, they don't overwrite each other, and so on. But for now those are left as an exercise for the reader...
{ "domain": "codereview.stackexchange", "id": 30167, "tags": "python, beginner, memory-optimization, bioinformatics, data-visualization" }
Does the width of spectral lines depend on the intensity of light?
Question: Suppose I observe the spectral lines of a mercury or sodium lamp with a prism spectrometer. If there is a slit in front of the source to make the light collimated, and if I regulate the width of that slit, do I see the spectral lines change in their width? In other words, does the spectral line width depend on the intensity ($W/m^2$) of the light? Answer: No, the width of the lines does not depend on the intensity of the light, it depends on the width of the slit! You could replace the lamp with a more powerful version and the width of the lines would be unchanged. When you form a spectrum, what you are (crudely) doing is forming an image of the slit at each wavelength; so the width of the lines corresponds to the width of the slit. Actually if you think about it, in the experiment you have described, the intensity of the light is not changed at all.
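A quick numerical sketch of this point (illustrative only, not from the original answer): if the recorded line profile is modelled as the intrinsic line convolved with the rectangular image of the slit, then scaling the lamp intensity scales the whole profile but leaves its width untouched, while widening the slit widens the line. All names here are made up for the sketch.

```python
import numpy as np

# Wavelength axis (arbitrary units) and an intrinsically very narrow line at 0.
x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
intrinsic = np.exp(-x**2 / (2 * 0.01**2))  # narrow Gaussian line shape

def recorded_profile(intensity, slit_width):
    """Profile seen on the detector: the intrinsic line convolved with a
    rectangular slit image, scaled by the lamp intensity."""
    slit = (np.abs(x) <= slit_width / 2).astype(float)
    return np.convolve(intensity * intrinsic, slit, mode="same") * dx

def fwhm(profile):
    """Full width at half maximum, in units of x."""
    above = np.where(profile >= profile.max() / 2)[0]
    return (above[-1] - above[0]) * dx

w1 = fwhm(recorded_profile(intensity=1.0, slit_width=1.0))
w2 = fwhm(recorded_profile(intensity=10.0, slit_width=1.0))  # brighter lamp
w3 = fwhm(recorded_profile(intensity=1.0, slit_width=2.0))   # wider slit
print(w1, w2, w3)  # w1 == w2, while w3 is roughly twice w1
```

Scaling a profile scales its half-maximum by the same factor, so the half-max crossing points (and hence the width) are unchanged; only the slit width moves them.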
{ "domain": "physics.stackexchange", "id": 40473, "tags": "electromagnetism, visible-light, physical-chemistry, spectroscopy" }
How to circularly shift a signal by a fraction of a sample?
Question: The shift theorem says: Multiplying $x_n$ by a linear phase $e^{\frac{2\pi i}{N}n m}$ for some integer m corresponds to a circular shift of the output $X_k$: $X_k$ is replaced by $X_{k-m}$, where the subscript is interpreted modulo N (i.e., periodically). Ok, that works fine: plot a N = 9 k = [0, 1, 2, 3, 4, 5, 6, 7, 8] plot ifft(fft(a)*exp(-1j*2*pi*3*k/N)) It shifted by 3 samples, as I expected. I thought you could also do this to shift by fractions of a sample, but when I try it, my signal becomes imaginary and not at all like the original: plot real(ifft(fft(a)*exp(-1j*2*pi*3.5*k/N))) plot imag(ifft(fft(a)*exp(-1j*2*pi*3.5*k/N))), 'b--' I didn't expect this at all. Isn't this equivalent to convolving with a real impulse that's been shifted by 3.5 samples? So the impulse should still be real, and the result should still be real? And it should have more or less the same shape as the original, but sinc interpolated? Answer: If you want the shifted output of the IFFT to be real, the phase twist/rotation in the frequency domain has to be conjugate symmetric, as well as the data. This can be accomplished by adding an appropriate offset to your complex exp()'s exponent, for the given phase slope, so that the phase of the upper (or negative) half, modulo 2 Pi, mirrors the lower half in the FFT aperture. The complex exponential shift function can also be made conjugate symmetric by indexing it from -N/2 to N/2 with a phase of zero at index 0. It just so happens that the appropriate offset for phase twists or spirals, that complete an exact integer multiples of 2 Pi rotations in aperture, to be conjugate symmetric in aperture, is zero. With a conjugate symmetric phase twist vector, the result should then end up as a circular Sinc interpolation for non-integer shifts. 
Elaboration by OP: Your choice of k = [0, 1, 2, 3, 4, 5, 6, 7, 8] is producing an asymmetrical complex exponential: If you use k = [0, 1, 2, 3, 4, -4, -3, -2, -1] instead, you get a Hermite-symmetric complex exponential: plot(fftshift(exp(-1j * 2*pi * 0.5/N * k))) and now when you use the same exponential formula to shift by 0.5 or 3.5 samples, you get a real result: plot ifft(fft(a)*exp(-1j * 2 * pi * 0.5/N *k)) plot ifft(fft(a)*exp(-1j * 2 * pi * 3.5/N *k))
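The fix above can be wrapped into a small runnable helper (a numpy sketch; the function name is made up). For odd N, np.fft.fftfreq(N, d=1/N) yields exactly the Hermite-symmetric index vector described, e.g. [0, 1, 2, 3, 4, -4, -3, -2, -1] for N = 9; for even N the Nyquist bin would need extra care.

```python
import numpy as np

def fractional_circular_shift(a, shift):
    """Circularly shift a real signal by a possibly fractional number of
    samples, using the conjugate-symmetric phase ramp described above."""
    N = len(a)
    k = np.fft.fftfreq(N, d=1.0 / N)  # for N=9: [0,1,2,3,4,-4,-3,-2,-1]
    phase = np.exp(-1j * 2 * np.pi * shift * k / N)
    return np.fft.ifft(np.fft.fft(a) * phase).real

a = np.zeros(9)
a[4] = 1.0
print(fractional_circular_shift(a, 3))    # impulse moves from index 4 to 7
print(fractional_circular_shift(a, 3.5))  # real-valued circular sinc interpolation
```

An integer shift reproduces np.roll to machine precision, and for the half-sample shift the imaginary part of the raw IFFT output sits at rounding-error level, as expected from the conjugate symmetry.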
{ "domain": "dsp.stackexchange", "id": 1105, "tags": "fourier-transform, convolution, phase, frequency-domain" }
Late and Early Bisimulation
Question: This is a follow-up to my earlier questions on coinduction and bisimulation. A relation $R \subseteq S \times S$ on the states of an LTS is a bisimulation iff $\forall (p,q)\in R,$ $$ \begin{array}{l} \text{ if } p \stackrel\alpha\rightarrow p' \text{ then } \exists q', \; q \stackrel\alpha\rightarrow q' \text{ and } (p',q')\in R \text{ and } \\ \text{ if } q \stackrel\alpha\rightarrow q' \text{ then } \exists p', \; p \stackrel\alpha\rightarrow p' \text{ and } (p',q')\in R. \end{array} $$ This is a very powerful and very natural notion, after you come to appreciate it. But it's not the only notion of bisimulation. In special circumstances, such as in the context of the $\pi$-calculus, other notions such as open, branching, weak, barbed, late and early bisimulation exist, though I do not fully appreciate the differences. But for this question, I want to limit focus to just two notions. What are late and early bisimulation and why would I use one of these notions instead of standard bisimulation? Answer: Late and early only make sense when building the LTS. The notion of bisimulation stays the same. However there are several notions of bisimilarity in the π-calculus for example. These are related to how new names are handled. It happens in the π-calculus because it is a name-passing process calculus. There is no such thing in CCS where names cannot be sent. Usually, the notion of barbed equivalence or congruence is more unanimous. Then you have to prove the equivalence with the late/early/open bisimilarity to give it some credit. 
To be quick about it, early transitions are that way: $$ a(x).P \stackrel{a(b)}{\longrightarrow} P[b/x] \qquad \begin{array}{c} P \stackrel{a(b)}{\longrightarrow} P' \quad Q \stackrel{\overline ab}{\longrightarrow} Q' \\ \hline P\mid Q → P' \mid Q' \end{array} $$ and late are like this: $$ a(x).P \stackrel{a(x)}{\longrightarrow} P \qquad \begin{array}{c} P \stackrel{a(x)}{\longrightarrow} P' \quad Q \stackrel{\overline ab}{\longrightarrow} Q' \\ \hline P\mid Q → P'[b/x] \mid Q' \end{array} $$ You see the difference is subtle and is just about where the substitution is done. My preference goes to the late one because when I reason about processes, I just remember that the $x$ in the transition $a(x)$ is bound. Also the late bisimilarity is the strongest one (besides the open one, which is really not the same) and the processes that are early but not late bisimilar are kind of weird, so you don't lose much.
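For the standard (strong) bisimulation recalled in the question, a finite LTS can be checked by the usual greatest-fixpoint refinement: start from the full relation and delete pairs that violate the two transfer conditions until nothing changes. A minimal Python sketch; the function name and the example LTS are illustrative, not from the answer.

```python
def bisimilar(states, transitions, p0, q0):
    """Greatest-fixpoint check of strong bisimilarity on a finite LTS
    given as (state, action, state) triples."""
    succ = {}  # (state, action) -> set of successor states
    for s, a, t in transitions:
        succ.setdefault((s, a), set()).add(t)
    actions = {a for _, a, _ in transitions}

    # Start from the full relation and remove violating pairs until stable.
    R = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(R):
            ok = all(
                any((p1, q1) in R for q1 in succ.get((q, a), set()))
                for a in actions for p1 in succ.get((p, a), set())
            ) and all(
                any((p1, q1) in R for p1 in succ.get((p, a), set()))
                for a in actions for q1 in succ.get((q, a), set())
            )
            if not ok:
                R.discard((p, q))
                changed = True
    return (p0, q0) in R

# The classic pair a.(b + c) vs a.b + a.c, encoded as an LTS:
ts = [("P", "a", "P1"), ("P1", "b", "0"), ("P1", "c", "0"),
      ("Q", "a", "Q1"), ("Q1", "b", "0"),
      ("Q", "a", "Q2"), ("Q2", "c", "0")]
states = {"P", "P1", "Q", "Q1", "Q2", "0"}
print(bisimilar(states, ts, "P", "Q"))  # False
```

The example pair has the same traces (ab and ac) yet is not bisimilar, since after Q's a-step the choice between b and c has already been made, illustrating why bisimilarity is finer than trace equivalence.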
{ "domain": "cs.stackexchange", "id": 65, "tags": "semantics, formal-methods, process-algebras, pi-calculus" }
What is a tower?
Question: In many tensorflow tutorials (example) "towers" are mentioned without a definition. What is meant by that? Answer: According to tensorflow documentation about CNN, The first abstraction we require is a function for computing inference and gradients for a single model replica. In the code we term this abstraction a "tower". To get the relevant context and more, check this.
{ "domain": "datascience.stackexchange", "id": 10222, "tags": "deep-learning, tensorflow, terminology" }
z-transform help
Question: I'm trying to solve this exercise: And the solutions manual states that the resolution is this one: but I cannot understand the last step, which is indicated with an arrow. Also, how do you find the ROC? Answer: Multiply the numerator and denominator by $z^N$ and factor out (in the denominator) $z^{N-1}$.
{ "domain": "dsp.stackexchange", "id": 7315, "tags": "z-transform" }
Why is Formanilide not named N-benzamide in IUPAC Nomenclature
Question: The preferred IUPAC name for formanilide is N-phenylformamide. However, it is very similar to benzamide, except for the atom to which phenyl is attached How does the bonding arrangement, in this case, change the parent group? Answer: According to the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book), amides are considered derivatives of the corresponding acids. P-66.1.0 Introduction Amides are derivatives of organic oxoacids in which each hydroxy group has been replaced by an amino or substituted amino group. (…) Simple amide names (such as hexanamide) are generally formed by substitutive nomenclature. P-66-1.1.1.1.1 Alicyclic mono- and diamides are named substitutively by adding the suffix ‘amide’, to the appropriate parent hydride name, with elision of the final letter ‘e’ before ‘a’. The multiplying prefix ‘di’ is used to name diamides. A few amides, however, still have retained names that are derived from retained names of the corresponding carboxylic acids. This includes benzamide ($\ce{C6H5{-}CO{-}NH2}$, see Rule P-66.1.1.1.2.1) and formamide ($\ce{HCO{-}NH2}$, see Rule P-66.1.1.1.2.2). Accordingly, formanilide is a substituted formamide and not a substituted benzamide since the acid part of formanilide is derived from formic acid and the amino part is derived from aniline. Thus, the correct name is N-phenylformamide. Note that the name “N-benzamide” doesn't make sense anyway. N is a locant that does not describe anything here; so it reads like N-what-benzamide. It’s like writing “2-butane”.
{ "domain": "chemistry.stackexchange", "id": 16621, "tags": "nomenclature" }
Change of variables, Fermi Integral
Question: This is a really basic question, but I'm kind of confused. I have this integral $$\int_{0}^{\infty}\frac{p^{2}dp}{e^{\alpha+\beta p^{2}/2m}+1}$$ where $p:=|\mathbf{p}|=\left(p_{x}^{2}+p_{y}^{2}+p_{z}^{2}\right)^{1/2}$ is the magnitude of the momentum of a particle of mass $m$. This integral represents the total number of particles in a Fermi-Dirac gas. Now, I want to convert this integral to something of the form $$\int n(p_{x})dp_{x}$$ where $n(p_x)$ is the number of particles with $x$-momentum in the interval $(p_x,p_x+dp_x)$. To this end I must use cylindrical coordinates but I don't know how to convert the term $dp$ into something like $dp_xdp_r$. Can you help me please? Answer: You have spherical coordinates but you want cylindrical coordinates with $p_x$ the axial direction (commonly labelled $z$). When you do an integral, you want to integrate over an infinitesimal volume element, $d\tau$. For your spherical coordinates, this is $d\tau = p^2\sin\theta dp d\theta d\phi$ and for cylindrical coordinates it is $d\tau = p_\perp dp_\perp d\phi dp_x$ where $p_\perp^2 = p_y^2+p_z^2$. For pictures of these volume elements and discussion, see e.g. Griffiths, Electrodynamics, Section 1.4. Your integral is spherically symmetric, so the angular variables can be integrated out. For the spherical integral we get: $$\int d\tau\, f(p) = \int_0^\infty p^2 f(p) dp \int_0^{\pi}\sin \theta \,d\theta \int_0^{2\pi}d\phi = 4\pi\int_0^\infty p^2 f(p) dp $$ and for the cylindrical integral, we get: $$\int d\tau\, f(p_x,p_\perp) = \int_{-\infty}^\infty dp_x\int_0^\infty p_\perp dp_\perp \,f(p_x,p_\perp)\int_0^{2\pi}d\phi = 2\pi\int_{-\infty}^\infty dp_x\int_0^\infty p_\perp dp_\perp \,f(p_x,p_\perp) $$ So, replace $4\pi\int_0^\infty p^2 f(p)dp$ with $2\pi \int_{-\infty}^\infty dp_x \int_0^\infty p_\perp dp_\perp f(p_x,p_\perp)$. 
Your integral is then: $$ 4\pi\int_0^\infty \frac{p^2 dp}{e^{\alpha+\beta p^2/2m}+1} = 2\pi\int_{-\infty}^\infty dp_x\int_0^\infty \frac{p_\perp dp_\perp}{e^{\alpha+\beta (p_x^2+p_\perp^2)/2m}+1} = \int_{-\infty}^\infty dp_x n(p_x) $$ where $n(p_x) = \frac {2\pi m}{\beta} \ln\left[ e^{-\alpha-\beta p_x^2/2m}+1 \right]$.
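The closed form quoted for $n(p_x)$ can be checked explicitly (an added verification step, not in the original answer) with the substitution $u = \alpha + \beta(p_x^2+p_\perp^2)/2m$, so that $du = (\beta/m)\,p_\perp\,dp_\perp$ and the lower limit becomes $u_0 = \alpha + \beta p_x^2/2m$:

```latex
$$
\int_0^\infty \frac{p_\perp\, dp_\perp}{e^{\alpha+\beta (p_x^2+p_\perp^2)/2m}+1}
 = \frac{m}{\beta}\int_{u_0}^{\infty} \frac{du}{e^{u}+1}
 = \frac{m}{\beta}\Bigl[-\ln\bigl(1+e^{-u}\bigr)\Bigr]_{u_0}^{\infty}
 = \frac{m}{\beta}\,\ln\bigl(1+e^{-\alpha-\beta p_x^2/2m}\bigr),
$$
```

since $d/du\,[-\ln(1+e^{-u})] = 1/(e^u+1)$. Multiplying by the $2\pi$ from the azimuthal integral reproduces the quoted $n(p_x) = \frac{2\pi m}{\beta}\ln[e^{-\alpha-\beta p_x^2/2m}+1]$.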
{ "domain": "physics.stackexchange", "id": 7129, "tags": "statistical-mechanics, integration, fermions" }
Saving an uploaded file and returning form data
Question: I am new to F#. This code is basically a port I did from C#. I am sure there is room for a ton of improvement, so how can I improve and make this more efficient? let SingleFIle (req : HttpRequestMessage) dirName typeDir (fileType : string) userName clearDir deleteExistingFile = async{ // Check Request if not (req.Content.IsMimeMultipartContent()) then return req.CreateResponse(HttpStatusCode.UnsupportedMediaType) else // create temp file dir let tempFileFolder = HttpContext.Current.Server.MapPath(@"~/TempFileUploads") // create the temp file folder Directory.CreateDirectory(tempFileFolder) |> ignore // create provider let provider = new MultipartFormDataStreamProvider(tempFileFolder) // save file data try // create the server path let dirPath = ref "" if String.IsNullOrWhiteSpace(userName) then dirPath := HttpContext.Current.Server .MapPath(@"~/Uploads/" + typeDir + @"/" + dirName) else dirPath := HttpContext.Current.Server .MapPath(@"~/Uploads/" + typeDir + @"/" + dirName + @"/" + userName) // clear directory if true if clearDir then if Directory.Exists(!dirPath) then Directory.Delete(!dirPath, true) // create the final directory path Directory.CreateDirectory(!dirPath) |> ignore // save to provider let! 
readToProvider = (req.Content.ReadAsMultipartAsync provider) |> Async.AwaitIAsyncResult // file name and type check let fileNameAndTypeIsValid = // file name length list let nameLength = [ for x in provider.FileData -> x.Headers.ContentDisposition.FileName.Length ] // file type length let fileTypeList = [ for x in provider.FileData -> x.Headers.ContentType.MediaType .Substring(0, fileType.Length) ] // nameLength & fileType valid check let nameLengthNotValid = query{ for x in nameLength do exists (x <= 2) } let fileTypeNotValid = query{ for x in fileTypeList do exists (x <> fileType) } (not nameLengthNotValid) && (not fileTypeNotValid) && (provider.FormData.Count > 0) // if valid then create file details if fileNameAndTypeIsValid then // get the current file info let fileInfo = new FileInfo(provider.FileData.ElementAt(0) .LocalFileName) // get the extension let stripQuotes = new FileInfo(provider.FileData.ElementAt(0) .Headers.ContentDisposition.FileName .Replace(@"""", String.Empty)) let ext = stripQuotes.Extension // create new file path and move file File.Move(fileInfo.FullName, (Path.Combine(!dirPath, fileInfo.Name + ext))) // create file Url let fileUrl = ref "" if (String.IsNullOrWhiteSpace(userName)) then fileUrl := @"/Uploads/" + typeDir + @"/" + dirName + @"/" + fileInfo.Name + ext else fileUrl := @"/Uploads/" + typeDir + @"/" + dirName + @"/" + userName + @"/" + fileInfo.Name + ext provider.FormData.Add("FileUrl", !fileUrl) // delete existing files if true if deleteExistingFile then for x in provider.FormData .GetValues("ExistingPath") do File.Delete( System.Web.Hosting.HostingEnvironment .MapPath(x)) return req.CreateResponse(HttpStatusCode.OK, provider.FormData) else return req.CreateResponse HttpStatusCode.InternalServerError with | ex -> return req.CreateResponse(HttpStatusCode.InternalServerError) } Answer: Here's a cleaned up version. It's by no means exhaustive, but may get you thinking about other ways to improve it. 
Some of the changes: removed comments (they didn't add anything since the code is adequately descriptive) replaced refs with let-bound expressions replaced Substring with slicing cleaned up some of the logic refactored file validation into a separate function replaced query expressions with (more suitable) collection functions replaced unused exception with wildcard pattern let singleFile (req : HttpRequestMessage) dirName typeDir (fileType : string) userName clearDir deleteExistingFile = let isValidFile (fileData: seq<MultipartFileData>) = let nameLength = [ for x in fileData -> x.Headers.ContentDisposition.FileName.Length ] let fileTypeList = [ for x in fileData -> x.Headers.ContentType.MediaType.[..fileType.Length-1] ] let nameLengthNotValid = nameLength |> List.exists ((>=) 2) let fileTypeNotValid = fileTypeList |> List.exists ((<>) fileType) not (nameLengthNotValid || fileTypeNotValid) async { if not (req.Content.IsMimeMultipartContent()) then return req.CreateResponse HttpStatusCode.UnsupportedMediaType else let tempFileFolder = HttpContext.Current.Server.MapPath @"~/TempFileUploads" Directory.CreateDirectory tempFileFolder |> ignore let provider = new MultipartFormDataStreamProvider(tempFileFolder) try let dirPath = let path = @"~/Uploads/" + typeDir + @"/" + dirName + if String.IsNullOrWhiteSpace userName then "" else @"/" + userName HttpContext.Current.Server.MapPath path if clearDir && Directory.Exists dirPath then Directory.Delete(dirPath, true) // create the final directory path Directory.CreateDirectory dirPath |> ignore let! 
readToProvider = req.Content.ReadAsMultipartAsync provider |> Async.AwaitIAsyncResult let fileNameAndTypeIsValid = isValidFile provider.FileData && provider.FormData.Count > 0 if fileNameAndTypeIsValid then let fileInfo = new FileInfo( provider.FileData.ElementAt(0).LocalFileName ) let ext = Path.GetExtension( provider.FileData.ElementAt(0) .Headers.ContentDisposition.FileName .Replace(@"""", String.Empty) ) File.Move(fileInfo.FullName, Path.Combine(dirPath, fileInfo.Name + ext)) let fileUrl = @"/Uploads/" + typeDir + @"/" + dirName + @"/" + if String.IsNullOrWhiteSpace userName then "" else userName + @"/" + fileInfo.Name + ext provider.FormData.Add("FileUrl", fileUrl) if deleteExistingFile then for x in provider.FormData.GetValues("ExistingPath") do File.Delete (System.Web.Hosting.HostingEnvironment.MapPath(x)) return req.CreateResponse(HttpStatusCode.OK, provider.FormData) else return req.CreateResponse HttpStatusCode.InternalServerError with | _ -> return req.CreateResponse HttpStatusCode.InternalServerError }
{ "domain": "codereview.stackexchange", "id": 13560, "tags": "f#, network-file-transfer" }
Why do phasor derivations related to LCR circuits consider $V_c = I * X_c$ even if voltage and current are out of phase?
Question: In a capacitor, the current leads by $\pi/2$. But even then derivations of total voltage in LCR circuits using phasor diagrams assume that $V_{inst.} = I_{inst.} * X_c$ where $V_{inst.}$ is the instantaneous voltage and the $I_{inst.}$ is the instantaneous current and $X_c$ is the capacitive reactance. Answer: But even then derivations of total voltage in LCR circuits using phasor diagrams assume that Vinst. = Iinst. * Xc This is not correct. The magnitude of the reactance $X_{\rm C}$ is given by $X_{\rm C} = \dfrac {V_{\rm C,peak}}{I_{\rm C,peak}} = \dfrac {V_{\rm C,rms}}{I_{\rm C,rms}}$
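A tiny numerical check of this (with made-up component values, not from the question): the instantaneous ratio $v(t)/i(t)$ is not constant because of the $\pi/2$ phase difference, but the peak and rms ratios both equal $X_C = 1/(\omega C)$, which is all the phasor construction uses.

```python
import numpy as np

# Illustrative values: a 10 uF capacitor driven at 50 Hz.
C, f, V_peak = 10e-6, 50.0, 10.0
w = 2 * np.pi * f
t = np.linspace(0, 1 / f, 10000, endpoint=False)  # one full period

v = V_peak * np.sin(w * t)          # capacitor voltage
i = C * w * V_peak * np.cos(w * t)  # i = C dv/dt: leads v by pi/2

Xc = 1 / (w * C)
peak_ratio = np.max(np.abs(v)) / np.max(np.abs(i))
rms_ratio = np.sqrt(np.mean(v**2)) / np.sqrt(np.mean(i**2))
print(peak_ratio, rms_ratio, Xc)  # all three agree
```

Note that $v(t)/i(t) = \tan(\omega t)/(\omega C)$ varies over the cycle, so the relation holds for magnitudes (peak or rms), not instant by instant.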
{ "domain": "physics.stackexchange", "id": 56583, "tags": "electric-circuits, capacitance, inductance" }
Confusion about thermodynamic equilibrium
Question: I’m studying thermodynamics and statistical mechanics from the book by Kerson Huang. I’m having a conceptual difficulty with the notion of thermodynamic equilibrium. At the beginning of the first chapter, the author states A thermodynamic state is specified by a set of values of all the thermodynamic parameters necessary for the description of the system. and Thermodynamic equilibrium prevails when the thermodynamic state of the system does not change in time. Then, after stating the laws of thermodynamics, a criterion for finding the equilibrium state is provided: for a closed system, the equilibrium state is the one that maximizes the entropy. This only makes sense if entropy is defined for non-equilibrium states, too. The same thing applies for the minimization of the free energy etc. However, in the statistical mechanics part of the book, there is the following statement The entropy in thermodynamics, just as $S$ here, is defined only for equilibrium situations. It's not that this does not make sense: in fact, one talks about “equilibrium thermodynamics” because the subject is only concerned with equilibrium states. Thermodynamic parameters are average values over times longer than the relaxation time, and as such they require equilibrium situations in order to be defined. However, in my understanding, this makes the criterion of maximization of entropy meaningless. I found this question, which is basically the same I'm asking, but I do not understand the answer. To be more specific, the accepted answer seems to boil down to the following example: one imagines that some “constraint” is placed on the system, that maintains the system in equilibrium. Then the constraint is relaxed, allowing the system to evolve, and then it is placed again. In this way, one can approximate the evolution of the system as a sequence of equilibrium states separated by “infinitesimal transformations”. 
The state that maximizes the entropy is then the one which does not require a constraint to remain in equilibrium. I don't understand this explanation because it seems to me that the constraint invalidates the hypothesis that the system is closed. In other words, the presence of the constraint “forces” a state to be an equilibrium state, but then would no longer be described by the equilibrium parameters of the system alone. I also have something else to add: in the book by Landau and Lifshitz (Course of theoretical physics V) the picture for the law of increase of entropy is that a non equilibrium system can be viewed as a composition of smaller system which are approximately in equilibrium, and for them an entropy can be defined. The total entropy is then the sum of the individual entropies. As the subsystem reach mutual equilibrium, the total entropy increases. This is to me a more logical picture. However, does it correspond to the thermodynamics one? i.e., can such a non equilibrium system be represented by a set of thermodynamic parameters? I guess not, but then the law of increase of entropy would have a different meaning. In short, I'm terribly confused. Given everything that I said (if it is correct), what does the criterion of maximization of entropy actually mean? Does the law of increase of entropy require to consider non equilibrium systems? Edit: I now think that what I said about the hypothesis of closed system is nonsense, because my definition is wrong (see comments below) and also because the entropy increases when the system is thermally isolated, not when it is closed: from Clausius theorem, $$ \Delta S \ge \int_A^B \frac{\delta Q}{T}, $$ and the right hand side is zero when $\delta Q = 0$. However, the questions don’t strictly depend on this. 
Answer: Instead of saying that "The entropy in thermodynamics, just as S here, is defined only for equilibrium situations," it is better to say that "The entropy in thermo-statics, just as S here, is defined only for equilibrium situations." The entropy concept would be of very limited use if it were not applicable to dynamic situations. Below is a quote from Truesdell: Rational Thermodynamics,(1984, 2nd ed.) pp79-80 regarding this issue, and the whole book is about how to extend the concept of equilibrium entropy into non-equilibrium entropy. I am surprised that this is still controversial. Note too that the concept of a non-equilibrium entropy is not any more controversial than non-equilibrium temperature is. For the slightly disequilibrated professional I will make only two remarks in passing. First, in mechanics the concept of force originated in statics and was carried over bodily, if with much delay and discussion, to motions. If the restoring force exerted by a spring is proportional to the increase of its length in a static experiment, will it still be so when a ball is attached to the end and set into oscillation, especially if the experiment is performed in a spaceship in orbit around the moon? Indeed, does it make sense to talk about forces at all in a moving system? The forces, it seems, might be affected by the motions, yet we are supposed to know the forces first in order to determine what the motion will be. These questions, and far subtler ones of the same kind, were asked in the seventeenth century; today the freshman is trained specifically not to ask them. Perhaps he ought to; but if he does, he is more likely to take up philosophy than science; and if NEWTON had insisted that forces be used only when they can be measured by an operationalist who works and thinks very, very slowly, it is unlikely anyone would be designing spaceships today. 
The professionals who moan that temperature is a concept for equilibrium alone are not solving any problems themselves; they are merely pronouncing our problems insoluble and sneering at us for trying to solve them. Second, the professional theorists of thermodynamics show a lack of respect for experiment I find hard to admire. Experiments are being done today in all sorts of extreme conditions. My friends and colleagues tell me they measure the temperatures in polymer solutions undergoing strong normal-stress effects, they control the temperatures in explosion fronts, they infer the temperatures on the skins of artificial satellites. It would be presumptuous on my part to question the details of the work of these men, and I see no reason to deny that they know their business. The temperatures occurring as values of the temperature function $\theta(\cdot)$ in modern theories of materials are intended to represent the numbers these experimenters report and call "temperatures", although their systems bear little likeness to those the old books on thermodynamics describe as being "in equilibrium ", and their processes seem to have little in common with whatever it is the thermodynamicists mean by "quasi-static". I do not claim that we yet know whether or not a particular theory be borne out by any particular experiment, but I do not see why the professionals should forbid us from trying to explain the experiments by a rational theory, approached in that very spirit everyone today regards as the right one for mechanics and electromagnetism.
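As an aside, Landau's picture mentioned in the question above, a non-equilibrium system decomposed into internally equilibrated subsystems whose entropies add, can be checked numerically. The following sketch is my own illustration (not from either book): two bodies with constant heat capacities equilibrate inside a thermally isolated box, and the summed entropy change is non-negative.

```python
import math

def delta_S_total(C1, T1, C2, T2):
    """Total entropy change when two subsystems with (constant) heat
    capacities C1, C2 and initial temperatures T1, T2 equilibrate
    inside a thermally isolated box.  Each subsystem is treated as
    internally equilibrated, so S_total = S1 + S2 and each change is
    the integral of C dT / T, i.e. C * ln(Tf / Ti)."""
    Tf = (C1 * T1 + C2 * T2) / (C1 + C2)  # energy conservation fixes Tf
    return C1 * math.log(Tf / T1) + C2 * math.log(Tf / T2)

# Unequal initial temperatures: the summed entropy strictly increases.
assert delta_S_total(1.0, 300.0, 1.0, 400.0) > 0.0
# Already in mutual equilibrium: no entropy change.
assert abs(delta_S_total(1.0, 350.0, 1.0, 350.0)) < 1e-12
```

The positivity for any unequal starting temperatures follows from the concavity of the logarithm, which is exactly the "entropy increases as subsystems reach mutual equilibrium" statement.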
{ "domain": "physics.stackexchange", "id": 96529, "tags": "thermodynamics, statistical-mechanics, entropy, equilibrium" }
Why do ligands have such a small effect on overall absorption of a complexed ion?
Question: When a metal cation is complexed, there is strong UV-Vis absorption due to the splitting of its $d$ orbitals, thereby allowing electronic transitions. My understanding is that ligands contribute very little to overall observed absorption. This is evident, for example, in copper solutions. Copper(II) chloride, copper(II) sulfate, and copper(II) nitrate – all solutions of complexed ions – are all very similar in color and have similar absorption spectra. I think the simple answer is that transition metals have the capacity to allow for electronic transitions while most ligands do not, but I don't believe it is this simple. Also, it is curious that sulfate, chloride, and nitrate are all colorless when dissolved in solution. My question is: why do ligands have such a small effect on overall absorption of a complexed metal cation? Answer: This is absolutely not true. Many ligands can strongly dictate the color of transition metal solutions. Your example picks three simple salts and then questions whether the color of the solutions (dictated largely by $\ce{[Cu(H2O)6]^{2+}}$) is very different. No, because the resulting majority complex in aqueous solution is likely identical. Even taking a simple ammonia complex, e.g. $\ce{[Cu(NH3)4(H2O)2]^{2+}}$ you can see a substantial color change. Many copper acetonitrile compounds are weakly colored or colorless, with UV/Vis optical absorption occurring near the edge of the red to near-IR. When you get into octahedral complexes, you can find substantial changes in color due to MLCT and LMCT absorptions, as well as significant modulation of the d-d transitions due to the ligand (i.e., high-field and low-field ligands).
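To see quantitatively why a stronger-field ligand changes the color, one can convert the ligand-field splitting into the absorbed wavelength via λ = hc/Δ. The sketch below is my own back-of-the-envelope illustration; the two splitting values are assumed, approximate numbers for the aqua and ammine copper(II) complexes, not measured ones.

```python
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # J per eV

def absorbed_wavelength_nm(delta_eV):
    """Wavelength of light absorbed by a d-d transition of energy delta_eV."""
    return H * C / (delta_eV * EV) * 1e9

# Assumed, approximate ligand-field splittings: the stronger-field NH3
# raises the splitting and shifts absorption to a shorter wavelength,
# which is why the ammine complex looks a deeper blue.
for label, delta in (("[Cu(H2O)6]2+", 1.55), ("[Cu(NH3)4(H2O)2]2+", 2.05)):
    print(label, round(absorbed_wavelength_nm(delta)), "nm")
```

With these assumed values the aqua complex absorbs near 800 nm (red edge, leaving a pale blue) while the ammine complex absorbs near 605 nm, consistent with the visible color change the answer describes.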
{ "domain": "chemistry.stackexchange", "id": 6939, "tags": "inorganic-chemistry, ions, coordination-compounds, spectrophotometry" }
What happens to the electric field lines when an electron and positron collide?
Question: A month back I began my course in classical electrodynamics. I know about the field lines of moving charges. But I have this question: if an electron and a positron are situated some distance apart and assuming that the electric force is 'somehow' great enough to accelerate them towards each other, then what happens to the electric field lines after the two have collided? The net charge becomes 0 and, taking into account the fact that photons are released, my question is what happens to the electric field? Edit - So is it that the Electric Field exists outside the radius ct and is zero within the circle of radius ct which increases in diameter with time and spreads in space? Answer: The electron and positron are two point charges with opposite sign, and classically, as the field lines are a pictorial representation of the charge, when the charge becomes zero there will be no electric field lines from the spot where the two point particles overlap. BUT electrons and positrons are quantum mechanical particles and when close enough classical electrodynamics has to be replaced by quantum mechanical equations. For the low energies you are discussing, when they get close enough they may get caught in each other's potential forming positronium, similar to the hydrogen atom. This has energy levels and the annihilation will happen from one of these energy levels. The positron and electron will be in probability loci, called orbitals, and these, depending on the quantum numbers of the energy levels, will have asymmetries which will allow for dipole and multipole electric fields as long as the positronium survives intact. It will quickly annihilate into two photons, which will carry electric and magnetic field information in their complex wave functions, and balance the quantum numbers, energy and momentum and angular momentum. The calculations for the probability of annihilation can only be done accurately using quantum electrodynamics, QED. EDIT after comment. 
But what happens if they do collide or suppose we make them collide somehow? There are experiments with electron-positron colliders, where the particles have higher energy (not starting at zero kinetic energy as in your question). Again the collision dynamics is successfully modeled using QED. When electrons scatter on positrons, with enough energy other channels open and the scattering produces particles and resonances, i.e. the electron and positron "disappear" and other elementary particles come out. Here is what the cross-section looks like for this scattering (the top figure; go to the link for the caption). It needs serious study of QED to understand these scatterings. For the simple case of annihilation to two photons, the Feynman diagram which defines the probability of this to happen is The e- radiates a photon and becomes virtual, and meets the e+ and annihilates into another photon, conserving momentum and energy in the center of mass system. It is meaningless to talk of fields within this context of the quantum mechanical framework. Again the two photons will have in their complex wavefunction information on electric and magnetic fields, but the classical picture does not work at this level.
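As a small numeric aside (my own check, not part of the answer): for annihilation essentially at rest, momentum conservation forces the two photons to be emitted back-to-back, each carrying one electron rest energy.

```python
M_E = 9.1093837015e-31  # electron mass, kg
C = 2.99792458e8        # speed of light, m/s
KEV = 1.602176634e-16   # J per keV

# Each of the two back-to-back photons carries one electron rest energy.
E_photon_keV = M_E * C**2 / KEV
print(round(E_photon_keV, 1))  # 511.0 -- the familiar PET-scanner line
```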
{ "domain": "physics.stackexchange", "id": 31596, "tags": "electrons, electric-fields, quantum-electrodynamics, antimatter" }
What is entropy change in electrochemistry?
Question: I keep seeing entropy change (or reversible effects) and irreversible effects (Joule heating) when reading about charging and discharging of Li-ion batteries. I understand the Joule heating part (the irreversible), but I have trouble understanding the reversible effects. What is going on in the structure of the electrodes that makes the process reversible? As far as I know, reversibilities come from the fact that there is no friction or other losses. If one isolates a single ion, the charging or discharging, in my interpretation, would look roughly like in the figure below: What happens inside the battery during this charge and discharge that would make one think of reversibility? And why is entropy important in this case? And why would a reversible process release heat? Answer: I think your diagram seems plausible; the physical process happening here is that, when charging, an applied potential difference (voltage) causes lithium ions to leave the crystal lattice and migrate to the cathode. When the battery discharges, the reverse happens, and lithium ions rejoin the anode. The definition of reversibility is an infinitesimal change in some thermodynamic quantity of the system with respect to its surroundings without increasing the entropy. The electrode is crystalline, so Li+ ions can rejoin the lattice without increasing the configurational entropy, whilst infinitesimally changing the chemical potential. If the temperature is low enough for crystallisation to be entropically and energetically favourable with respect to dissolving, then the free energy of the system will be reduced by adding an ion to the crystal lattice. The chemical potential induced by charging causes the lithium ions to return to the anode when discharging.
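To put a number on the reversible (entropic) heat: thermodynamically it is Q_rev = TΔS per mole of reaction, and ΔS can be obtained from the temperature coefficient of the open-circuit voltage via ΔS = nF(∂E/∂T). The sketch below is my own illustration; the entropy-coefficient value is an assumed, order-of-magnitude figure, not a measured one.

```python
F = 96485.33212  # Faraday constant, C/mol

def reversible_heat_J_per_mol(n, T, dE_dT):
    """Reversible (entropic) heat per mole of cell reaction,
    Q_rev = T*dS with dS = n*F*(dE/dT) taken from the temperature
    dependence of the open-circuit voltage E."""
    return n * F * T * dE_dT

# Assumed entropy coefficient of order -0.1 mV/K (sign and magnitude
# vary with chemistry and state of charge): with this sign the cell
# releases this entropic heat on top of the irreversible Joule heating.
q = reversible_heat_J_per_mol(n=1, T=298.15, dE_dT=-0.1e-3)
print(round(q))  # about -2877 J/mol
```

This is the "reversible" part of battery heating the question asks about: unlike Joule heating it changes sign between charge and discharge.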
{ "domain": "chemistry.stackexchange", "id": 12626, "tags": "electrochemistry, entropy" }
Cannot record sensor data using rosbag from kinect
Question: Hi, I tried to record the data from Kinect by using rosbag record -a after I opened the openni_node driver for Kinect. wei@wei-Aspire-4810T:~/ros/bagfiles$ roslaunch openni_camera openni_node.launch Then in another terminal I used rosbag trying to record all topics: wei@wei-Aspire-4810T:~/ros/bagfiles$ rosbag record -a [ INFO] [1330246493.151671247]: Recording to 2012-02-26-16-54-53.bag. [ INFO] [1330246494.158586547]: Subscribing to /camera/depth/image [ INFO] [1330246494.163723907]: Subscribing to /camera/rgb/image_raw [ INFO] [1330246494.167274829]: Subscribing to /camera/rgb/image_mono/compressed [ INFO] [1330246494.169913071]: Subscribing to /camera/rgb/image_mono/compressed/parameter_updates [ INFO] [1330246494.180194984]: Subscribing to /camera/depth/image/theora [ INFO] [1330246494.185385283]: Subscribing to /camera/depth/image_raw/theora [ INFO] [1330246494.199527654]: Subscribing to /tf [ INFO] [1330246494.205979210]: Subscribing to /camera/depth/image_raw/theora/parameter_descriptions [ INFO] [1330246494.212827463]: Subscribing to /camera/rgb/image_mono/theora/parameter_descriptions [ INFO] [1330246494.219018234]: Subscribing to /camera/rgb/image_raw/compressed [ INFO] [1330246494.223620402]: Subscribing to /camera/rgb/image_mono/compressed/parameter_descriptions [ INFO] [1330246494.226451684]: Subscribing to /camera/rgb/image_mono/theora/parameter_updates [ INFO] [1330246494.229063037]: Subscribing to /camera/rgb/image_raw/compressed/parameter_updates [ INFO] [1330246494.231719646]: Subscribing to /camera/depth/image/theora/parameter_updates [ INFO] [1330246494.234358167]: Subscribing to /openni_node1/parameter_updates [ INFO] [1330246494.236913228]: Subscribing to /camera/depth/image_raw/compressed/parameter_descriptions [ INFO] [1330246494.239575774]: Subscribing to /camera/rgb/points [ INFO] [1330246494.242255780]: Subscribing to /camera/rgb/camera_info [ INFO] [1330246494.244859171]: Subscribing to /camera/rgb/image_color/compressed [ 
INFO] [1330246494.261203557]: Subscribing to /camera/rgb/image_color/compressed/parameter_descriptions [ INFO] [1330246494.267427571]: Subscribing to /camera/depth/image/compressed [ INFO] [1330246494.273242877]: Subscribing to /camera/depth/image_raw/theora/parameter_updates [ INFO] [1330246494.283536873]: Subscribing to /camera/depth/image_raw/compressed [ INFO] [1330246494.291459070]: Subscribing to /camera/rgb/image_color/theora/parameter_updates [ INFO] [1330246494.299466124]: Subscribing to /camera/depth/camera_info [ INFO] [1330246494.302353488]: Subscribing to /camera/rgb/image_color [ INFO] [1330246494.328684988]: Subscribing to /camera/rgb/image_raw/compressed/parameter_descriptions [ INFO] [1330246494.337562469]: Subscribing to /rosout [ INFO] [1330246494.349211658]: Subscribing to /camera/depth/disparity [ INFO] [1330246494.352240939]: Subscribing to /camera/depth/image_raw [ INFO] [1330246494.359652878]: Subscribing to /camera/rgb/image_color/compressed/parameter_updates [ INFO] [1330246494.372313713]: Subscribing to /rosout_agg [ INFO] [1330246494.380340812]: Subscribing to /camera/depth/image/theora/parameter_descriptions [ INFO] [1330246494.386094727]: Subscribing to /camera/rgb/image_mono [ INFO] [1330246494.392173263]: Subscribing to /camera/rgb/image_color/theora [ INFO] [1330246494.400247295]: Subscribing to /camera/rgb/image_color/theora/parameter_descriptions [ INFO] [1330246494.410299151]: Subscribing to /camera/depth/points [ INFO] [1330246494.419197515]: Subscribing to /camera/rgb/image_raw/theora [ INFO] [1330246494.425615758]: Subscribing to /camera/rgb/image_raw/theora/parameter_updates [ INFO] [1330246494.432190793]: Subscribing to /camera/depth/image_raw/compressed/parameter_updates [ INFO] [1330246494.455170068]: Subscribing to /camera/rgb/image_mono/theora [ INFO] [1330246494.466239857]: Subscribing to /camera/rgb/image_raw/theora/parameter_descriptions [ INFO] [1330246494.478321710]: Subscribing to 
/camera/depth/image/compressed/parameter_updates [ INFO] [1330246494.484514087]: Subscribing to /openni_node1/parameter_descriptions [ INFO] [1330246494.494957891]: Subscribing to /camera/depth/image/compressed/parameter_descriptions ^C However, the openni_node crashed immediately after I started recording all topics. The error messages were: OpenCV Error: Image step is wrong () in cvInitMatHeader, file /tmp/buildd/libopencv-2.3.1+svn6514+branch23/modules/core/src/array.cpp, line 162 terminate called after throwing an instance of 'cv::Exception' what(): /tmp/buildd/libopencv-2.3.1+svn6514+branch23/modules/core/src/array.cpp:162: error: (-13) in function cvInitMatHeader [openni_node1-2] process has died [pid 13733, exit code -6]. log files: /home/wei/.ros/log/7efa7378-6057-11e1-bb4c-fd7118ed2e09/openni_node1-2*.log Is there anything I'm doing wrong? Thanks for answering! Originally posted by Mchaiiann on ROS Answers with karma: 130 on 2012-02-25 Post score: 0 Answer: Don't use the -a flag; rosbag will subscribe to all offered topics, even if some are meant as alternatives and might not be used in combination. So, think about what you want to do with the data and only subscribe to the relevant topics. The most commonly used topics are /tf /camera/rgb/image_mono /camera/rgb/image_color /camera/rgb/points /camera/rgb/camera_info /camera/depth/image /camera/depth/points /camera/depth/camera_info But still you will only need a subset of those. Edit: I just stumbled upon this again and want to add that I recently wrote a small tool to assemble point clouds from (rgb and) depth images. It can be found in our repository. The wiki page will be generated in a few days. Until then have a look into the manifest for usage information. Originally posted by Felix Endres with karma: 6468 on 2012-02-26 This answer was ACCEPTED on the original site Post score: 6 Original comments Comment by Mchaiiann on 2012-02-26: Thanks. 
Besides, the reason I said "openni_node crashed" is that the bag file only recorded some info or parameter_description instead of the real sensor data. But why did this happen? (Should I open a new question for this?) Now I specify the topics I want and this doesn't happen again. Thanks a lot. Comment by joq on 2012-02-26: @Felix Endres is right. Note that you can save space in the bags by saving compressed images like /camera/rgb/image_image/compressed, etc.
{ "domain": "robotics.stackexchange", "id": 8392, "tags": "ros, rosbag, openni-node, openni-camera" }
Escalation approval tool, copy to clipboard via Python
Question: I created a simple little script for the people I work with, what it does is copy information to the users clipboard for escalation approvals. It's pretty simple and not very exciting but I want to know if there are ways to do this better: from Tkinter import Tk import getpass import os import time class ClipBoard(object): full_string = '' # Full string that will contain the entire escalation approval def __init__(self, string, addition): self.string = string self.addition = addition def add_addition_to_string(self): # Add the string together return self.string + self.addition def copy_string_to_clipboard(self): # Copy to the windows clipboard cb = Tk() cb.withdraw() cb.clipboard_clear() cb.clipboard_append(self.full_string) def create_full_string(self): # Create the string and return it self.full_string = self.add_addition_to_string() return self.full_string class ConfigureProgram(object): def __init__(self, directory, escalator): self.directory = directory self.escalator = escalator def check_directory(self): # Check if the directory exists if os.path.exists(self.directory) is False: os.makedirs(self.directory) # If it doesn't create it else: return True def check_esc_name(self): # Check if there's already a file made if self.check_directory() is True: with open(self.escalator, 'a+') as esc: # If not create it esc.write(raw_input('Enter what you want appear on escalations(be specific): ')) def get_user(): # Get the username that's running at the moment return getpass.getuser() def create_addition_for_string(): # Where's it going to? return raw_input('Escalate to where? 
') def check_if_run_yet(path_to_file): # Check if you need to configure the program return os.path.exists(path_to_file) def read_username(): # Read the escalators name from the file created with open("C:\Users\{}\AppData\Roaming\esc\esc_name.txt".format(get_user())) as f: return f.readline() if __name__ == '__main__': path = "C:\Users\{}\AppData\Roaming\esc\esc_name.txt".format(get_user()) if check_if_run_yet(path) is not False: opening = "Ticket has been reviewed and approved by {}. Ticket assigned to ".format(read_username()) cliptext = ClipBoard(opening, create_addition_for_string()) cliptext.create_full_string() cliptext.copy_string_to_clipboard() print 'Press CNTRL-V to paste the approval.' time.sleep(1) else: config = ConfigureProgram( "C:\Users\{}\AppData\Roaming\esc".format(get_user()), path ) config.check_directory() config.check_esc_name() print "Program configured, please run again." time.sleep(2) Answer: The main part of your program is only def copy_string_to_clipboard(self): # Copy to the windows clipboard cb = Tk() cb.withdraw() cb.clipboard_clear() cb.clipboard_append(self.full_string) the rest being used only as a formatting tool. You should thus focus into making such formatting as clear as possible. But first of, you may want to read about docstrings as your comment should be one: def copy_string_to_clipboard(self): """Copy to the windows clipboard""" cb = Tk() cb.withdraw() cb.clipboard_clear() cb.clipboard_append(self.full_string) So now, onto cleaning a bit everything around this function. First off, you don't need a class with 3 other functions just to concatenate two strings. Based on your opening variable, what you need to do is simply: user = read_username() assignation = raw_input('Escalate to where? ') copy_to_clipboard( 'Ticket has been reviewed and approved by {}. 
' 'Ticket assigned to {}.'.format(user, assignation)) where copy_to_clipboard has been slightly modified to accept the whole string as an argument: def copy_to_clipboard(text): """Copy the content of a string to the windows clipboard""" cb = Tk() cb.withdraw() cb.clipboard_clear() cb.clipboard_append(text) The last thing to do is to simplify a bit the reading of the user config file. It's a bit of a shame to be required to run the program twice in order to create a file if it doesn't exist. Instead, you could try to read the file and, if it fails, ask the user for its content: def read_username(filename='esc_name.txt'): """Get the user's name from config file or command line""" try: with open(filename) as f: return f.readline() except OSError: pass # If we are here, we couldn't read the configuration # file so ask the user and try to save their choice username = raw_input('Enter what you want appear on escalations (be specific): ') try: with open(filename, 'w') as f: f.write(username) except OSError: pass return username But this function assumes we are in the right directory, which may not exist yet. Let's tackle that as well: from getpass import getuser def setup(program_dir='esc'): """Create our own directory structure and move to the right place""" appdata = r'C:\Users\{}\AppData\Roaming'.format(getuser()) os.chdir(appdata) if not os.path.exists(program_dir): os.mkdir(program_dir) os.chdir(program_dir) as you can see, I used getuser from getpass directly without having to create a wrapper that just removes 1 level of nesting. Also, if I recall correctly, os.environ['APPDATA'] should contain the value we are building into appdata. 
So you could simplify it to: def setup(program_dir='esc'): """Create our own directory structure and move to the right place""" os.chdir(os.environ['APPDATA']) if not os.path.exists(program_dir): os.mkdir(program_dir) os.chdir(program_dir) Putting it all together, you can end up with: import os from Tkinter import Tk def copy_to_clipboard(text): """Copy the content of a string to the windows clipboard""" cb = Tk() cb.withdraw() cb.clipboard_clear() cb.clipboard_append(text) def setup(program_dir='esc'): """Create our own directory structure and move to the right place""" os.chdir(os.environ['APPDATA']) if not os.path.exists(program_dir): os.mkdir(program_dir) os.chdir(program_dir) def read_username(filename='esc_name.txt'): """Get the user's name from config file or command line""" try: with open(filename) as f: return f.readline() except OSError: pass # If we are here, we couldn't read the configuration # file so ask the user and try to save their choice username = raw_input('Enter what you want appear on escalations (be specific): ') try: with open(filename, 'w') as f: f.write(username) except OSError: pass return username if __name__ == '__main__': setup() user = read_username() assignation = raw_input('Escalate to where? ') copy_to_clipboard( 'Ticket has been reviewed and approved by {}. ' 'Ticket assigned to {}.'.format(user, assignation)) print 'Press CTRL-V to paste the approval.'
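One small caveat worth noting about the read_username suggestion above (my own addition, not part of the review): since the script targets Python 2 (raw_input, print statement), a missing config file raises IOError, and in Python 2 IOError is a sibling of OSError rather than a subclass, so an `except OSError` clause alone would not catch it. A sketch of the fix, written so it also runs under Python 3 (where the two names are the same class):

```python
def read_or_ask(filename, ask):
    """Read the first line of filename, falling back to the ask()
    callback if the file cannot be opened.  Catching IOError as well
    matters on Python 2, where open() raises IOError and IOError is
    not a subclass of OSError there."""
    try:
        with open(filename) as f:
            return f.readline()
    except (IOError, OSError):
        return ask()

# A missing file falls back to the callback instead of crashing.
print(read_or_ask("definitely_missing_file.txt", lambda: "fallback"))
```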
{ "domain": "codereview.stackexchange", "id": 21837, "tags": "python, python-2.x, tkinter" }
Ideal gas in a vessel: kinetic energy of particles hitting the vessel's wall
Question: Reading Landau's Statistical Physics Part (3rd Edition), I am trying to calculate the answer to Chapter 39, Problem 3. You are supposed to calculate the total kinetic energy of the particles in an ideal gas hitting the wall of a vessel containing said gas. The number of collisions per unit area (of the vessel) per unit time is easily calculated from the Maxwellian distribution of the number of particles with a given velocity $\vec{v}$ (we define a coordinate system with the z-axis perpendicular to a surface element of the vessel's wall; more on that in the above mentioned book): $$ \mathrm{d}\nu_v = \mathrm{d}N_v \cdot v_z = \frac{N}{V}\left(\frac{m}{2\pi T}\right)^{3/2} \exp\left[-m(v_x^2 + v_y^2 + v_z^2)/2T \right] \cdot v_z \mathrm{d}v_x \mathrm{d}v_y \mathrm{d}v_z $$ Integration of the velocity components in $x$ and $y$ direction from $-\infty$ to $\infty$, and of the $z$ component from $0$ to $\infty$ (because for $v_z<0$ a particle would move away from the vessel wall) gives for the total number of collisions with the wall per unit area per unit time: $$ \nu = \frac{N}{V} \sqrt{\frac{T}{2\pi m}} $$ Now it gets interesting: I want to calculate the total kinetic energy of all particles hitting the wall, per unit area per unit time. I thought, this would just be: $$ E_{\text{tot}} = \overline{E} \cdot \nu = \frac{1}{2} m \overline{v^2} \cdot \nu $$ The solution in Landau is given as: $$ E = \nu \cdot 2T $$ That would mean that for the mean-square velocity of my particles I would need a result like: $$ \overline{v^2} = 4\frac{T}{m} $$ Now, I consider that for the distribution of $v_x$ and $v_z$ nothing has changed and I can still use a Maxwellian distribution. That would just give me a contribution of $\frac{T}{m}$ each. That leaves me with $2\frac{T}{m}$, which I have to obtain for the $v_z$, but this is where my trouble starts: How do I calculate the correct velocity distribution of $v_z^2$? 
Answer: The reason your calculation is not right is that the total energy carried to the wall is not the collision rate times the mean energy per molecule: the fast molecules hit the walls more frequently than the slow ones, so the average energy of the molecules hitting the wall must be weighted by their collision rate.
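A quick Monte Carlo check of this flux-weighting argument (my own sketch, in units with k_B = 1): weighting each Maxwellian particle by its wall-collision rate, proportional to v_z for v_z > 0, raises the mean kinetic energy from (3/2)T to 2T, reproducing Landau's result E = 2Tν.

```python
import math
import random

def flux_weighted_mean_energy(T=1.0, m=1.0, n=200_000, seed=1):
    """Sample Maxwellian velocities and average the kinetic energy with
    each particle weighted by its wall-collision rate (proportional to
    v_z for v_z > 0).  Units with k_B = 1."""
    rng = random.Random(seed)
    s = math.sqrt(T / m)  # per-component standard deviation
    w_sum = e_sum = 0.0
    for _ in range(n):
        vx = rng.gauss(0.0, s)
        vy = rng.gauss(0.0, s)
        vz = rng.gauss(0.0, s)
        if vz <= 0.0:
            continue  # moving away from the wall: never hits it
        w = vz        # collision-rate weight
        e_sum += w * 0.5 * m * (vx * vx + vy * vy + vz * vz)
        w_sum += w
    return e_sum / w_sum

print(flux_weighted_mean_energy())  # close to 2*T = 2.0, not (3/2)*T
```

Analytically, the x and y components still contribute T/2 each, while the flux-weighted ⟨v_z²⟩ becomes 2T/m (the distribution of v_z picks up an extra factor v_z), giving T for the z component and 2T in total.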
{ "domain": "physics.stackexchange", "id": 12150, "tags": "thermodynamics, statistical-mechanics" }
CANopen test based on the canopen_test_utils package
Question: Hello, I am trying to communicate with a single motor through CANopen. I had posted earlier about it in this question: #q294210 EDIT (21/06): This question was answered by Mathias Lüdtke. The error was in the EDS file: Mathias noticed that the [6502] object had a wrong DataType value; the right DataType value is 0x0007. After I changed the value the controller was spawned correctly. **However, a new ERROR occurred after the spawning.** This problem is discussed in #q294869 ORIGINAL question starts here: Since no answer arrived, I tried to find reference CANopen-based projects. Two popular projects/packages were recommended by Mathias Lüdtke: 1) canopen_test_utils and 2) schunk_robots. The canopen_test_utils seemed perfect to me, since it provides a CANopen-based control solution for only 1 joint, e.g. one motor. Therefore I only needed to replace the Elmo.dcf file with my EDS file in the canopen_rig1.yaml file. I changed nothing else in the package. So I did the following: sudo ip link set can0 up type can bitrate 500000 roslaunch canopen_test_utils hw_rig1.launch rosservice call /rig1/driver/init And of course the call crashed with the following message: success: False message: "Throw location unknown (consider using BOOST_THROW_EXCEPTION)\nDynamic exception\ \ type: boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::lock_error>\ \ >\nstd::exception::what: boost: mutex lock failed in pthread_mutex_lock: Invalid\ \ argument\n; Could not set transition" OK.. (so I thought...) Since only the DCF/EDS file differs, and this project has been tested, I checked and compared the Elmo.dcf and my EDS file line by line in a brute-force way. I modified my original EDS file based on this Elmo.dcf, so it has the same T/R PDO mappings, transmission types, etc., more or less everything. I repeated the aforementioned 3 steps using the new updated EDS file to test the package. And of course it crashes again. 
I try to give here every information that i got in the terminals: After step 2, i have the following output in the terminal (2 suspicious warnings: Controller Spawner couldn't find the expected controller_manager ROS interface.): xacro: Traditional processing is deprecated. Switch to --inorder processing! To check for compatibility of your document, use option --check-order. For more infos, see http://wiki.ros.org/xacro#Processing_Order deprecated: xacro tags should be prepended with 'xacro' xml namespace. Use the following script to fix incorrect usage: find . -iname "*.xacro" | xargs sed -i 's#<\([/]\?\)\(if\|unless\|include\|arg\|property\|macro\|insert_block\)#<\1xacro:\2#g' when processing file: /home/akosodry/catkin_ws/src/canopen_test_utils/urdf/rig1.urdf.xacro xacro.py is deprecated; please use xacro instead started roslaunch server http://akosodry-8700K-linux:33007/ SUMMARY ======== CLEAR PARAMETERS * /rig1/driver/ PARAMETERS * /rig1/driver/bus/device: can0 * /rig1/driver/bus/master_allocator: canopen::SimpleMa... * /rig1/driver/defaults/dcf_overlay/1016sub1: 0x7F0064 * /rig1/driver/defaults/dcf_overlay/1017: 100 * /rig1/driver/defaults/dcf_overlay/6098: 35 * /rig1/driver/defaults/eds_file: config/ASDA_A2_10... * /rig1/driver/defaults/eds_pkg: canopen_test_utils * /rig1/driver/heartbeat/msg: 77f#05 * /rig1/driver/heartbeat/rate: 20 * /rig1/driver/nodes/rig1_plate_joint/id: 1 * /rig1/driver/sync/interval_ms: 10 * /rig1/driver/sync/overflow: 0 * /rig1/joint_state_controller/joints: ['rig1_plate_joint'] * /rig1/joint_state_controller/publish_rate: 50 * /rig1/joint_state_controller/type: joint_state_contr... 
* /rig1/joint_trajectory_controller/action_monitor_rate: 10 * /rig1/joint_trajectory_controller/constraints/goal_time: 0.6 * /rig1/joint_trajectory_controller/constraints/rig1_plate_joint/goal: 0.1 * /rig1/joint_trajectory_controller/constraints/rig1_plate_joint/trajectory: 0.1 * /rig1/joint_trajectory_controller/constraints/stopped_velocity_tolerance: 0.5 * /rig1/joint_trajectory_controller/joints: ['rig1_plate_joint'] * /rig1/joint_trajectory_controller/required_drive_mode: 7 * /rig1/joint_trajectory_controller/state_publish_rate: 25 * /rig1/joint_trajectory_controller/stop_trajectory_duration: 0.5 * /rig1/joint_trajectory_controller/topic: test * /rig1/joint_trajectory_controller/type: position_controll... * /rig1/joint_trajectory_pid_controller/action_monitor_rate: 10 * /rig1/joint_trajectory_pid_controller/constraints/goal_time: 0.6 * /rig1/joint_trajectory_pid_controller/constraints/rig1_plate_joint/goal: 0.1 * /rig1/joint_trajectory_pid_controller/constraints/rig1_plate_joint/trajectory: 0.1 * /rig1/joint_trajectory_pid_controller/constraints/stopped_velocity_tolerance: 0.5 * /rig1/joint_trajectory_pid_controller/gains/rig1_plate_joint/d: 0.0 * /rig1/joint_trajectory_pid_controller/gains/rig1_plate_joint/i: 0.0 * /rig1/joint_trajectory_pid_controller/gains/rig1_plate_joint/i_clamp: 0.0 * /rig1/joint_trajectory_pid_controller/gains/rig1_plate_joint/p: 0.5 * /rig1/joint_trajectory_pid_controller/joints: ['rig1_plate_joint'] * /rig1/joint_trajectory_pid_controller/required_drive_mode: 3 * /rig1/joint_trajectory_pid_controller/state_publish_rate: 25 * /rig1/joint_trajectory_pid_controller/stop_trajectory_duration: 0.5 * /rig1/joint_trajectory_pid_controller/topic: test * /rig1/joint_trajectory_pid_controller/type: position_controll... * /rig1/rig1_plate_joint_position_controller/joint: rig1_plate_joint * /rig1/rig1_plate_joint_position_controller/required_drive_mode: 1 * /rig1/rig1_plate_joint_position_controller/type: position_controll... 
* /rig1/rig1_plate_joint_velocity_controller/joint: rig1_plate_joint * /rig1/rig1_plate_joint_velocity_controller/required_drive_mode: 3 * /rig1/rig1_plate_joint_velocity_controller/type: velocity_controll... * /robot_description: <?xml version="1.... * /robot_state_publisher/publish_frequency: 50.0 * /robot_state_publisher/tf_prefix: * /rosdistro: kinetic * /rosversion: 1.12.13 * /script_server/rig1/action_name: /rig1/joint_traje... * /script_server/rig1/default_vel: 0.3 * /script_server/rig1/down: [[3.14]] * /script_server/rig1/home: [[0.0]] * /script_server/rig1/joint_names: ['rig1_plate_joint'] * /script_server/rig1/left: [[1.5708]] * /script_server/rig1/right: [[-1.5708]] * /script_server/rig1/roundtrip: ['down', 'left', ... * /script_server/rig1/service_ns: /rig1/driver * /script_server/rig1/ticktack: ['left', 'right',... * /script_server/rig1/up: [[0.0]] NODES /rig1/ driver (canopen_motor_node/canopen_motor_node) rig1_controller_spawner (controller_manager/spawner) rig1_joint_state_controller_spawner (controller_manager/spawner) / robot_state_publisher (robot_state_publisher/robot_state_publisher) auto-starting new master process[master]: started with pid [9070] ROS_MASTER_URI=http://localhost:11311 setting /run_id to 157337bc-73ee-11e8-931c-78321b043645 process[rosout-1]: started with pid [9083] started core service [/rosout] process[robot_state_publisher-2]: started with pid [9101] process[rig1/driver-3]: started with pid [9102] process[rig1/rig1_joint_state_controller_spawner-4]: started with pid [9103] process[rig1/rig1_controller_spawner-5]: started with pid [9115] [ INFO] [1529432704.218127226]: Using fixed control period: 0.010000000 [WARN] [1529432734.635528]: Controller Spawner couldn't find the expected controller_manager ROS interface. [WARN] [1529432734.636938]: Controller Spawner couldn't find the expected controller_manager ROS interface. 
[rig1/rig1_joint_state_controller_spawner-4] process has finished cleanly log file: /home/akosodry/.ros/log/157337bc-73ee-11e8-931c-78321b043645/rig1-rig1_joint_state_controller_spawner- 4*.log [rig1/rig1_controller_spawner-5] process has finished cleanly log file: /home/akosodry/.ros/log/157337bc-73ee-11e8-931c-78321b043645/rig1-rig1_controller_spawner-5*.log If i do rosservice list there is no controller_manager: /rig1/driver/get_loggers /rig1/driver/get_object /rig1/driver/halt /rig1/driver/init /rig1/driver/recover /rig1/driver/set_logger_level /rig1/driver/set_object /rig1/driver/shutdown /robot_state_publisher/get_loggers /robot_state_publisher/set_logger_level /rosout/get_loggers /rosout/set_logger_level And also, if i call the /rig1/driver/init, it returns with success: True, i can hear that my servo is turned on. I got the following message in the other terminal: [ INFO] [1529433440.546777357]: Initializing XXX [ INFO] [1529433440.547440823]: Current state: 1 device error: system:0 internal_error: 0 (OK) [ INFO] [1529433440.547911808]: Current state: 2 device error: system:0 internal_error: 0 (OK) EMCY: 81#0000000000000000 I assume that these messages are OK. 
But the problem is that i have no topics related to the controllers: /diagnostics /joint_states /rosout /rosout_agg /tf /tf_static If i do rosservice list i have now the controller manager: /rig1/controller_manager/list_controller_types /rig1/controller_manager/list_controllers /rig1/controller_manager/load_controller /rig1/controller_manager/reload_controller_libraries /rig1/controller_manager/switch_controller /rig1/controller_manager/unload_controller /rig1/driver/get_loggers /rig1/driver/get_object /rig1/driver/halt /rig1/driver/init /rig1/driver/recover /rig1/driver/set_logger_level /rig1/driver/set_object /rig1/driver/shutdown /robot_state_publisher/get_loggers /robot_state_publisher/set_logger_level /rosout/get_loggers /rosout/set_logger_level However, if i call for example: rosservice call /rig1/controller_manager/load_controller "name: 'rig1_plate_joint_position_controller'" It returns with false and message: [ERROR] [1529433712.087792833]: Exception thrown while initializing controller rig1_plate_joint_position_controller. Could not find resource 'rig1_plate_joint' in 'hardware_interface::PositionJointInterface'. [ERROR] [1529433712.087895663]: Initializing controller 'rig1_plate_joint_position_controller' failed Also after the rosservice call i get the following: [ WARN] [1529433780.238559532]: RPDO timeout Did not receive a response message abort1001#0, reason: Client/server command specifier not valid or unknown. 
Could not process message discarded message [ WARN] [1529433790.288558376]: RPDO timeout [ WARN] [1529433800.298543451]: RPDO timeout [ WARN] [1529433810.298551968]: RPDO timeout EMCY: 81#3081108001000000 EMCY: 81#3081108001000000 [ERROR] [1529433816.158555573]: RPDO timeout; Node has emergency error [ERROR] [1529433826.168516048]: not operational; not operational [ERROR] [1529433836.178398516]: not operational; not operational [ERROR] [1529433846.178455135]: not operational; not operational [ERROR] [1529433856.188484882]: not operational; not operational At this point i am stuck. I really tried to give all the information i have regarding the testing of this package. I would like to ask some help from the forum members. Especially, the help of @Mathias Lüdtke would be appreciated, (if im not mistaken) the owner/author of the canopen_test_utils package. Thank you in advance. Best regards, Akos Originally posted by akosodry on ROS Answers with karma: 121 on 2018-06-19 Post score: 0 Original comments Comment by Mathias Lüdtke on 2018-06-20:\ Also after the rosservice call i get the following: Which service call? The errors do not look good. The error register (1001) cannot be read/reset and your drive has a heartbeat failure. In addition "RPDO timeout" might occur once in a while (given heavy CPU or bus load), but not always. Comment by akosodry on 2018-06-20: After i call: rosservice call /rig1/controller_manager/load_controller "name: 'rig1_plate_joint_position_controller'" Answer: The config and launch files in canopen_test_utils describe a rather specific set-up. They might be good to copy out some snippets, but not the full files. If you start with a slow sync rate (~10Hz), you could try it without any specific PDO mapping. Everthing should work with SDOs as well, but most-likely not at a high rate. PDO mapping could be added afterwards. First you should identify the drive mode you want to use (not all devices support all modes). 
Afterwards you can create the controller config for it and set up the units in the driver config. If i do rosservice list there is no controller_manager: The controller_manager services will get started during driver/init. I assume that these messages are OK. But the problem is that i have no topics related to the controllers: These will become available after the controller was spawned successfully. Could not find resource 'rig1_plate_joint' in 'hardware_interface::PositionJointInterface' Does your drive support mode 1 and report it in 0x6502? Update: I have checked ASDA_A2_1042sub980_C_ORIG_mod_bonSchunkv1.eds with CANeds. Looks like 0x6502 has the wrong type (Integer32 instead of Unsigned32). Please try to change DataType to 0x0007. Originally posted by Mathias Lüdtke with karma: 1596 on 2018-06-20 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by akosodry on 2018-06-20: Hello Mathias! Thank you for the reply. [6502] ParameterName=Supported drive modes ObjectType=0x7 DataType=0x0004 LowLimit= HighLimit= AccessType=ro DefaultValue=0x0000006D PDOMapping=1 ObjFlags=0x0 This is how my 6502 object looks. Should i add ParamVal=1? Comment by Mathias Lüdtke on 2018-06-20: 0D is fine, it is a bit field. And it includes bit 1 for mode 1. Comment by akosodry on 2018-06-20: Ok, it's 6D. Then the 6502 object is correct (i assume). What can then be wrong with the 'hardware_interface::PositionJointInterface'? Additionally, i also created a new package, based on the schunk package, where the same error occurs.. Comment by akosodry on 2018-06-20: I also checked that ros-control is installed, ros-controllers as well.. i have no idea what could be the problem.... Comment by akosodry on 2018-06-20: In the rig1_controller.yaml i have only the rig1_plate_joint_position_controller and joint_state_controller, the rest is commented out. I also changed the test_setup_controller.xml to spawn the rig1_plate_joint_position_controller.
Comment by akosodry on 2018-06-20: If you have 3 minutes, HERE is the test video (first 3 minutes). Thank you in advance for your help. Comment by Mathias Lüdtke on 2018-06-20: Have you changed any joint/node name? It might be helpful if you update the files somewhere (e.g. github).. Comment by akosodry on 2018-06-20: No i did not! I just commented out the controllers that (i thought) are not needed for me. I will upload the package in a minute. And again, thanks for the help! Comment by akosodry on 2018-06-20: This is the link for the package: github link So this is basically YOUR package, i only changed the EDS file: ASDA_A2_1042sub980_C_ORIG_mod_bonELMO.eds and i commented out some controllers. Comment by akosodry on 2018-06-21: Hello @Mathias Lüdtke I also uploaded my own package (LINK) which crashes also with: Could not find resource 'joint_1' in 'hardware_interface::PositionJointInterface I have no idea what could be wrong, or what is missing?! Comment by akosodry on 2018-06-21: Dear Mathias! Thank you for the help, i updated the DataType and it seems that it could spawn the controller :) I can't believe that this was the problem. I also checked with Vector CANeds, but it did not give me any error. However there is new error, i am now editing,/updating the originalquestion Comment by akosodry on 2018-06-21: @Mathias Lüdtke : I've just updated the original question. Comment by Mathias Lüdtke on 2018-06-21: I would prefer to discuss the new error in another question. Otherwise, it is really hard to follow. Comment by akosodry on 2018-06-21: Ive just posted a new question: link Iam now updating this question, and mark your answer correct. Thanks again for the help.
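For reference, the accepted fix applied to the 0x6502 object dump quoted in the comments would look like this in the EDS file — 0x0007 is UNSIGNED32 and 0x0004 is INTEGER32 in the standard CiA 301 type codes; only the DataType line changes, the 0x6D bit field of supported modes stays as-is:

```ini
[6502]
ParameterName=Supported drive modes
ObjectType=0x7
DataType=0x0007
LowLimit=
HighLimit=
AccessType=ro
DefaultValue=0x0000006D
PDOMapping=1
ObjFlags=0x0
```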
{ "domain": "robotics.stackexchange", "id": 31036, "tags": "ros, ros-control, ros-kinetic, hardware-interface, ros-canopen" }
can't find a viable import class for keras.utils.Sequence
Question: I am using Google Colab. tensorflow version = 2.8.0, and keras is the same. I am trying to get a BalancedDataGenerator(Sequence) class created, but I can't get keras.utils.Sequence to load. from tensorflow.python.keras.utils import Sequence from tensorflow.python.keras.utils.np_utils import Sequence from tensorflow.python.keras.utils.all_utils import Sequence I've tried it taking out "python", or taking out "tensorflow.python", or searching as to where it is now currently located, but haven't found it. The errors I get are: ImportError: cannot import name 'Sequence' from 'tensorflow.python.keras.utils' (/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/utils/__init__.py) AttributeError: module 'keras.utils' has no attribute 'Sequence' ImportError: cannot import name 'Sequence' from 'keras.utils' (/usr/local/lib/python3.7/dist-packages/keras/utils/__init__.py) ImportError: cannot import name 'Sequence' from 'keras.utils.all_utils' (/usr/local/lib/python3.7/dist-packages/keras/utils/all_utils.py) NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. When I go to /usr/local/lib/python3.7/dist-packages/keras/utils and read all_utils.py I see 'from keras.utils.data_utils import Sequence' as an option and it says that this all_utils.py module is 'used as a shortcut to access all the symbols. Those symbols were exposed under __init__, and was causing some hourglass import issue.' If I read data_utils.py I do see the Sequence module inside there. If I type 'from keras.utils.data_utils import Sequence' in a cell and run it, it looks like it was accepted. However, if I put that in the balanceddatagenerator.py and import the class I created that uses Sequence, and then import the class, I get this error. I don't know how to overcome this. How do I get this installed and imported successfully?
Answer: imblearn.keras.balanced_batch_generator was pointing to an old version of keras.utils.Sequence and causing this error. I just found that imblearn also has a tensorflow version of balanced_batch_generator, so I switched to tensorflow.keras.utils.Sequence and the imblearn.tensorflow.balanced_batch_generator, and it seems to import without any errors. Now let's see if I can finally make progress.
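For anyone landing here: with TF 2.x the import that works is `from tensorflow.keras.utils import Sequence`. The contract a Sequence subclass has to satisfy is tiny — `__len__` (batches per epoch), `__getitem__` (one batch), and optionally `on_epoch_end`. Below is a minimal sketch of that protocol, written against a stand-in base class so it runs even without TensorFlow installed; the class and field names are illustrative, not imblearn's API, and a real BalancedDataGenerator would add resampling logic inside `__getitem__`:

```python
import math

class Sequence:
    """Stand-in with the same protocol as tensorflow.keras.utils.Sequence."""
    def on_epoch_end(self):
        pass

class DemoBatchGenerator(Sequence):
    """Hypothetical example: yields fixed-size (x, y) batches."""
    def __init__(self, x, y, batch_size=2):
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):
        # number of batches per epoch
        return math.ceil(len(self.x) / self.batch_size)

    def __getitem__(self, idx):
        # one batch: slice [idx*bs, (idx+1)*bs)
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        return self.x[lo:hi], self.y[lo:hi]

gen = DemoBatchGenerator(list(range(5)), list("abcde"), batch_size=2)
```

When subclassing the real `tensorflow.keras.utils.Sequence`, an instance implementing these two methods can be passed directly to `model.fit`.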
{ "domain": "ai.stackexchange", "id": 3309, "tags": "tensorflow, keras" }
Can we recover a vector from one element of resulted vector after multiplication?
Question: I have a matrix $X = \begin{bmatrix} 0.5000 + 0.5000i & 0.5000 - 0.5000i\\ 0.5000 - 0.5000i & 0.5000 + 0.5000i \end{bmatrix}$ multiplied with a column containing a complex number and its conjugate, as below: $y = \begin{bmatrix} y_1\\ y_2 \end{bmatrix} = \begin{bmatrix} 0.5000 + 0.5000i & 0.5000 - 0.5000i\\ 0.5000 - 0.5000i & 0.5000 + 0.5000i \end{bmatrix} \times \begin{bmatrix} s\\ s' \end{bmatrix}$ I am wondering if we can recover $s$ from only $y_1$ or $y_2$. I mean, as long as the vector contains only a complex number and its conjugate, we should be able to estimate $s$ from only $y_1$. But I don't know how we can estimate it. Answer: I think the answer is there is no way to recover $s$. Here I will be using a superscript $*$ to indicate the complex conjugate. First let's expand the matrix multiplication: $$y_1 = \frac{1}{2}\left[(1+i)s + (1-i)s^*\right]$$ $$y_2 = \frac{1}{2}\left[(1-i)s + (1+i)s^*\right]$$ Let $a = 1+i:$ $$y_1 = \frac{1}{2}\left[as + a^*s^*\right]$$ $$y_2 = \frac{1}{2}\left[a^*s + as^*\right]$$ We can immediately see here that the term in the square bracket, for either case, is the sum of a number and its complex conjugate and therefore must be real. This means there is no hope of recovering the value of $s$ if only $y_1$ or $y_2$ is provided, unless $s$ is real. To see what's going on let's solve for $y_1$ $$y_1 = \frac{1}{2}\left[(1+i)s + (1-i)s^*\right]$$ Let $s = s_R + is_I$ $$y_1 = \frac{1}{2}\left[(1+i)(s_R + is_I) + (1-i)(s_R + is_I)^*\right]$$ $$y_1 = \frac{1}{2}\left[(1+i)(s_R + is_I) + (1-i)(s_R - is_I)\right]$$ $$y_1 = \frac{1}{2}\left[s_R + is_I + is_R - s_I + s_R - is_I - is_R -s_I\right]$$ $$y_1 = \frac{1}{2}[2s_R - 2s_I] = s_R - s_I$$ Similarly the expression for $y_2$ can be determined to be $y_2 = s_R + s_I$. So neither expression by itself will let you solve for a complex $s$, but both together will.
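The algebra is easy to sanity-check numerically with plain Python complex arithmetic, taking $s' = s^*$ as in the question:

```python
s = 3 - 2j                      # arbitrary complex test value
s_conj = s.conjugate()

# y = X @ [s, s*] with X as given in the question
y1 = 0.5 * ((1 + 1j) * s + (1 - 1j) * s_conj)
y2 = 0.5 * ((1 - 1j) * s + (1 + 1j) * s_conj)

# Both outputs are real: y1 = Re(s) - Im(s), y2 = Re(s) + Im(s)
assert abs(y1 - (s.real - s.imag)) < 1e-12
assert abs(y2 - (s.real + s.imag)) < 1e-12

# Together the two outputs do recover s:
s_recovered = complex((y1 + y2).real / 2, (y2 - y1).real / 2)
```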
{ "domain": "dsp.stackexchange", "id": 10931, "tags": "signal-analysis, matrix, linear-algebra" }
Reversible process in thermodynamics
Question: How can we look at reversible processes in an intuitive manner? How is it possible that, at any moment, a process can be reversed by an infinitesimal change? Answer: I sometimes explain it this way: imagine a staircase where each step is $h$ high. If $h$ is small then it is quite easy to take a single step up or down, i.e. the process is relatively easily reversed. However, as $h$ increases it starts to become increasingly difficult to go up (and hazardous to go down). Even for small steps there is a difference between going up and down since you expend more energy going up (against gravity) than going down, so it's not truly reversible, but as $h$ decreases the difference becomes minuscule (but never 0). Similarly, there is no truly reversible thermodynamic process, but the more slowly you let a process proceed (and equilibrate) the more it resembles the reversible ideal process.
{ "domain": "chemistry.stackexchange", "id": 3781, "tags": "thermodynamics" }
show that special case of NP-complete problem is also NP-complete?
Question: I want to show that a problem is NP-hard by reducing a known NP-complete problem to it. However, I will have to use a special case of the NP-complete problem for the reduction to work. I'm pretty sure that the special case version is also NP-complete but I have no idea how to prove that. Are there any general guidelines for how to do this? For example, consider this version of the SUBSET SUM problem without repetition: Given an integer I and a multiset S of integers in the range 1,2,...,10, is there a non-empty subset of S whose sum is I? I could be wrong, but don't think the restriction of possible values in S to {1, ..., 10} affects the NP-completeness of the problem. How would one go about showing this? EDIT: Apparently that version of the problem is actually in P. I might restate my question later. Answer: "Are there any general guidelines for how to do this?" Frequently, one does that by reducing from the general case to the special case. Otherwise, being a special case of a known NP-complete problem isn't really helpful for that. How would one go about showing this? This particular restriction is solvable in polynomial time by dynamic programming, so to show it NP-complete one would have to prove P = NP.
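Concretely: the standard subset-sum dynamic program runs in O(|S|·I) time, and since every element is between 1 and 10, any reachable target satisfies I ≤ 10·|S| — so the running time is polynomial in the input size. A minimal sketch:

```python
def restricted_subset_sum(S, I):
    """Is there a non-empty subset of multiset S (values in 1..10) summing to I?"""
    reachable = {0}  # sums attainable so far (0 = empty subset)
    for v in S:
        reachable |= {r + v for r in reachable if r + v <= I}
    # all values are >= 1, so I > 0 rules out the empty subset
    return I in reachable and I > 0

# e.g. dropping one 6 from this multiset leaves a subset summing to 30
assert restricted_subset_sum([1, 2, 3, 4, 5, 6, 5, 4, 6], 30)
```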
{ "domain": "cs.stackexchange", "id": 6572, "tags": "np-complete, reductions, decision-problem" }
Determining potentials at points in a circuit with multiple batteries
Question: I am asked to find the potential difference across the points $P$ and $Q$. Using Kirchhoff's second law, I calculated the 'resultant' emf as being $E$. The p.d. across each resistor would then be $\frac{E}{3}$. To solve the question I went with the idea that the potential at the negative pole of the top battery (at P) would be 0. The potential at Q would then be $E-\frac{E}{3}$ which happens to give the correct answer of $\frac{2E}{3}$. My assumption that the potential at the negative pole has to be zero also implies that the p.d. across the left resistor has to be $E$, which doesn't agree with the calculation. My question is: how do I find the potential at points on a circuit with multiple batteries, when some oppose others? Answer: A reliable, and usually easy, way to do this is using superposition. The Khan Academy has an excellent article on it here. I urge you to take the time to read the article because it explains the procedure in detail. But in brief you take the circuit and replace all the batteries but one with a wire. Do this for each battery, so in your circuit you'd get three separate circuits. For each of the three circuits calculate the voltages and currents, then simply add the three circuits together and that will give you the correct values for the circuit with all three batteries present. So in this case superposition gives you these three circuits to combine: Superposition may initially seem a bit complicated, but once you get used to it the method gives you a quick and reliable method for approaching circuits like this.
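To see superposition in action on a small example (note: this is an illustrative two-source circuit, not the three-battery circuit from the question): let a node at potential V connect to source V1 through R1, to source V2 through R2, and to ground through R3. Solving the full circuit by nodal analysis must agree with summing the two one-source circuits. A quick check with exact rational arithmetic:

```python
from fractions import Fraction as F

def par(a, b):
    # parallel resistance
    return a * b / (a + b)

def node_voltage(V1, V2, R1, R2, R3):
    # full circuit, nodal analysis at V:
    # (V1-V)/R1 + (V2-V)/R2 = V/R3
    return (V1 / R1 + V2 / R2) / (F(1) / R1 + F(1) / R2 + F(1) / R3)

def node_voltage_superposition(V1, V2, R1, R2, R3):
    # V2 replaced by a wire: R2 appears in parallel with R3
    Va = V1 * par(R2, R3) / (R1 + par(R2, R3))
    # V1 replaced by a wire: R1 appears in parallel with R3
    Vb = V2 * par(R1, R3) / (R2 + par(R1, R3))
    return Va + Vb

args = (F(6), F(3), F(1), F(2), F(3))  # made-up component values
```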
{ "domain": "physics.stackexchange", "id": 49838, "tags": "electric-circuits, potential, electrical-resistance, voltage, batteries" }
Plausible reason for the Hot Neptune gap
Question: This NASA article talks about how there's a dearth of hot Neptune exoplanets that have been discovered so far. https://www.nasa.gov/feature/goddard/2018/in-search-of-missing-worlds-hubble-finds-a-fast-evaporating-exoplanet The article makes no attempt at postulating why this gap may exist. One possible explanation I have is that the silicate material of a terrestrial planet is difficult to vapourise and the surface gravity of a hot Jupiter is too great to drive off hydrogen to any great extent, whereas a hot Neptune lacks both the solid surface and the gravity to maintain its size. Answer: An explanation could be provided by irradiation and evaporation of planetary envelopes. The introduction in the article by Bourrier et al. 2018, which is the source of the NASA press release, explains that as exoplanets migrate inwards, they are increasingly challenged to hang on to their envelopes due to irradiation and heating (insolation) by their star. The Neptune-sized planets are less capable of retaining their envelopes because of lower densities and lower "surface" gravities. The idea is that many of them get stripped of their outer envelopes and end up as "Chthonian" planets that are of "super-Earth" size. A hypothesis is that many hot Jupiters have enough mass and density to maintain a large size for billions of years and hence it is only the hot Neptunes that are depleted. The recent work on GJ 3470b (cited in the NASA press release) supports this hypothesis by finding a very large atmospheric loss rate from a Neptune-sized object at the edge of the "hot Neptune desert".
{ "domain": "astronomy.stackexchange", "id": 3380, "tags": "exoplanet, hot-jupiter" }
Find a substring which could be rearranged into a palindrome
Question: Problem statement: To find maximum length substring in an input string which could be arranged into a palindrome, only even length palindromes are expected. Input is one line String which contains only integers. Output is the length of the substring which could be arranged in palindrome. Example: Input: 123456546 Output: 6 (substring 456546 can be rearranged to an even palindrome) My approach (I am not sure if this is the most optimal way to do it, please point out any modifications): Find the integers which are occurring in pairs in the original input string For each possible length look for a possible palindrome by starting from each integer (which we listed earlier as possible palindrome members) and searching for the required length. Although this approach and the code works, I don't think it is optimal at all (I am using multiple nested for loops and the code doesn't look good at all). Should I use some other data structures? Can someone please help in optimising the solution? 
public class Code2 { public static int lengthofPalindrome(String input1) { /*Check if the input is valid * return 0 : if length = 0 , * contains anything other than numbers * */ if(input1.length() <= 0 ) return 0; if (input1.matches("[0-9]+")) ; else return 0; Integer[] input_array = new Integer[input1.length()]; // copy of the input string used to compare List<Integer> input_array_2 = Arrays.asList(input_array); //copy of the input string in array list List<Integer> tempString = new ArrayList(); // temp arrayList List<Integer> sub1 = new ArrayList(); // contains unique digits getting repeated 2 times List<Integer> sub2 = new ArrayList(); // contains all digits getting repeated 2 times int flag_2 =0; //copy the string into integer array for (int i = 0 ; i < input1.length(); i ++) { input_array[i] = Integer.parseInt(String.valueOf(input1.charAt(i))); } //find the int which have even occurences in the input array and populate sub1 arrayList with the values for(int i =0 ; i < input1.length(); i++) { if(i==0) { tempString.add(input_array[0]); } else { for(int j = 0; j <tempString.size(); j++) { if(input_array[i]==tempString.get(j)) { tempString.remove(j); sub1.add(input_array[i]); tempString.add(j, -2); flag_2 = 1; } if(flag_2 ==1) { break; } } if(flag_2 ==1) { flag_2=0; } else { tempString.add(input_array[i]); } } } //Make a copy of sub1 and populate sub2 for(Integer a : sub1) { sub2.add(a); } //Remove duplicates from the sub1 for (int i =0 ; i < sub1.size(); i ++) { for(int j =i+1 ; j <sub1.size();j++) { if(sub1.get(i)==sub1.get(j)) { sub1.remove(j); } } } /*Length of sub2: used to calculate the legths of possible pallindrome substrings Eg. 
If the length of sub2(contains the ints which occur in pairs in the input string) = 3 Lengths of possible substrings : 6, 4, 2 */ int length = sub2.size(); int value =0; // value is the length of the substring that can be rearranged as a palindrome for (int i =length; i >= 1; i--) { value = find_sub(i*2, sub1,input_array_2 ); if(value !=0) { break; } } return value; } /*Parameters * length : Length of the substring to be found * subString : Contains the int which could be part of the substring * input1 : Original input string * This function finds the length of the substring of given length inside the main String, if it exists otherwise returns 0 * */ public static int find_sub(int length, List<Integer> subString, List<Integer> input1) { if(length ==0 || subString.size() == 0 || input1.size()==0) { return 0; } int index = -2; int sum = 0; List<Integer> allIndex = new ArrayList<Integer>(); int breakFlag =-2; // List of all the indices of input1 string which are present in subString (occur in pair) for(int j =0; j < subString.size();j++) { for (int i =0; i < input1.size(); i ++) { if (input1.get(i) == subString.get(j)) { allIndex.add(i); } } } Collections.sort(allIndex); // Store the values of the integer elements and its occurence if the substring exists Map<Integer, Integer> palin ; int val, finalFlag =-2; int bound; List<Integer> subList= new ArrayList<Integer>(); // Main loop to chcek from each index value for the substring of desired length for(int j =0; j < allIndex.size(); j++) { //gets the first index of the substring index = allIndex.get(j); //check if the length to search doesnot fit in the main string break; if(index+length-1 >= input1.size()) { break; } //make a sublist with the desired length subList = input1.subList(index, index+length-1); //check if the substring of given length contain anything other than the ints in the subString for(int i =0; i < subList.size(); i++) { for(int m =0 ; m < subString.size(); m++) { if(subList.get(i)==subString.get(m)) 
{ breakFlag =0; break; } else { breakFlag =1; } } //subList contains other int if(breakFlag ==1) { break; } } //breakFlag : is set when the substring in input1 contains any other integer than what is expected (subString) if(breakFlag ==0) { //logic to check if the pallindrome can be formed from this index and this length //add all the recuring values in the map with number of occurences palin= new HashMap<Integer, Integer>(); for(int n =index; n <= index+length-1; n++) { if(palin.containsKey(input1.get(n))) { val = palin.get(input1.get(n)); palin.put(input1.get(n), val+1); } else { palin.put(input1.get(n), 1); } } for(Map.Entry<Integer, Integer> entry: palin.entrySet()) { } sum=0; for(Map.Entry<Integer, Integer> entry: palin.entrySet()) { if(entry.getValue() %2 !=0) { finalFlag = 0; break; } else { //string can be rearranged into a pallindrome finalFlag =1; sum = sum+entry.getValue(); } } } if(finalFlag ==1) { break; } } if(finalFlag ==1) { return sum; } else return 0; } public static void main(String[] args) throws IOException{ Scanner in = new Scanner(System.in); int output = 0; String ip1 = in.nextLine().trim(); output = lengthofPalindrome(ip1); System.out.println(String.valueOf(output)); } } Answer: Observations You need to find the longest substring that can be arranged into an even palindrome. Basically, you can split the problem in: making all substrings, and check for each one if that can be arranged into a palindrome. That is inefficient, as you make a lot of new String objects. So it is better to just use the characters and see if the character-frequency is OK. (all freqencies must be even). Because scanning each character only increases the freqency of that given character, we can quickly check if the substring starting from i until j is a even palindrome. So, we need two loops, one for the substring start, and a nested one from the substring-end. Observation 2 As the input can only be 0..9, you could also use an int[] for the frequencies. 
This is a bit better for the performance, but the main algorithm does not change with that. Observation 3 As you are only interested in even/uneven, you could also implement this as a bitmask, flipping each ith bit when you encounter an integer i. This is probably most efficient, as you only need a short to store the 'isPalinDrome' state. The length of the palindrome can be calculated by j-i+1. (if i=0 and j=1 we got 2 characters, at position 0 and 1) Proposed solution I skipped all the validation, and cut right to the algorithm. import java.util.HashMap; import java.util.Map; public class LongestPalindromeCandidate { /** An even palindrome is possible only if every character frequency is even */ public static boolean isPalinDromePossible(Map<Character, Integer> freqMap) { int countUneven = (int) freqMap.values().stream().filter(i -> i % 2 == 1).count(); return countUneven < 1; } public static void main(String[] args) { String test = "123456546"; int maxSize = 0; //create all sub-string frequencies starting from i for (int i = 0; i < test.length() - 1; i++) { Map<Character, Integer> freqMap = new HashMap<Character, Integer>(); //let the sub-string go to j for (int j = i ; j < test.length(); j++) { //each time we encounter a character, we add it to the substring-from-i frequency-map char ch = test.charAt(j); //increase the freq. by one, setting it to 1 if it is not already in the map freqMap.compute(ch, (k, v) -> v == null ? 1 : v + 1); //check if the freq.map is a palindrome. If so, we can check if it is longer than the current max if (isPalinDromePossible(freqMap)) { int size = freqMap.values().stream().mapToInt(k -> k).sum(); if (size > maxSize) { maxSize = size; } //just for debugging System.out.println("Palindrome possible:" + freqMap.keySet()); } } } System.out.println("Maxsize:" + maxSize); } } Bit flipping solution for performance and fun :) public class LongestPalindromeCandidate2 { public static void main(String[] args) { String test = "123456546"; int length = test.length(); int[] input = new int[length]; for (int i=0; i<input.length; i++) { input[i] = test.charAt(i) - '0'; } int maxSize = 0; //create all sub-string frequencies starting from i for (int i = 0; i < length - 1; i++) { int parity =0; //let the sub-string go to j for (int j = i ; j < length; j++) { int n = input[j]; //flip the nth bit (must be input[j], the digit just added to the window, not input[i]) parity = parity ^ (1 << n); //check if parity indicates an even palindrome. If so, we can check if it is longer than the current max if (parity == 0) { if ((j-i +1 ) > maxSize) { maxSize = (j-i+1 ); } } } } System.out.println("Maxsize:" + maxSize); } }
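For comparison, the bit-parity idea fits in a few lines of Python (same algorithm as LongestPalindromeCandidate2; note the flipped bit belongs to the digit at the inner index j, i.e. the digit just added to the window):

```python
def longest_even_palindrome_candidate(s):
    """Length of the longest substring of digit-string s whose digit counts
    are all even (so it can be rearranged into an even palindrome)."""
    best = 0
    for i in range(len(s)):
        parity = 0                       # bit d set <=> digit d seen an odd number of times
        for j in range(i, len(s)):
            parity ^= 1 << (ord(s[j]) - ord('0'))
            if parity == 0:              # every digit count in s[i..j] is even
                best = max(best, j - i + 1)
    return best

assert longest_even_palindrome_candidate("123456546") == 6   # "456546"
```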
{ "domain": "codereview.stackexchange", "id": 28387, "tags": "java, strings, palindrome" }
Script for creating a hierarchy of directories
Question: This script creates a hierarchy of directories. Is this a good approach or can it be made better? I am mainly concerned with maintainability here, not efficiency. Suppose I wanted to add more directories to this hierarchy. Would this structure of the script make it easier to do? Hierarchy ./project_euler ./001_050 ./051_100 ... ./401_450 ./codechef ./easy ./medium ./hard ./spoj ./functions ./utilities The script import os TOP = ['project_euler', 'codechef', 'spoj', 'functions', 'utilities'] CODECHEF = ['easy', 'medium', 'hard'] def safe_make_folder(i): '''Makes a folder if not present''' try: os.mkdir(i) except: pass def make_top_level(top): for i in top: safe_make_folder(i) def make_euler_folders(highest): def folder_names(): '''Generates strings of the format 001_050 with the 2nd number given by the function argument''' for i in range(1,highest, 50): yield ( 'project_euler' + os.sep + str(i).zfill(3) + '_' + str(i + 49).zfill(3) ) for i in folder_names(): safe_make_folder(i) def make_codechef_folders(codechef): for i in codechef: safe_make_folder('codechef' + os.sep + i) def main(): make_top_level(TOP) make_euler_folders(450) make_codechef_folders(CODECHEF) if __name__ == "__main__": main() Answer: One of the things I would do is remove the double occurrence of the strings 'project_euler' and 'codechef'. If you ever have to change one of these in TOP, you are bound to miss the repetition in the functions. You should at least use TOP[0] in make_euler_folders() and TOP[1] in make_codechef_folders. A better approach however would be to take both definitions out of TOP and change def safe_make_folder(): TOP = ['spoj', 'functions', 'utilities'] def safe_make_folder(i): '''Makes a folder (and its parents) if not present''' try: os.makedirs(i) except: pass The standard function os.makedirs() creates the 'project_euler' resp. 'codechef', as the first subdirectory of each is created. 
The other thing is that I would create the directory names using os.path.join() (as it prevents e.g. the mistake of providing double path separators), in combination with standard string formatting to get the leading zeros on the subfolder names: os.path.join('project_euler', '{:03}_{:03}'.format(i, i+49)) the {:03} gives a 3 character wide field with leading zeros. @Josay's improved function would then become: def make_euler_folders(highest): for i in (os.path.join('project_euler', '{:03}_{:03}'.format(i, i+49)) \ for i in range(1,highest, 50)): safe_make_folder(i) And the other function would become: def make_codechef_folders(codechef): for i in codechef: safe_make_folder(os.path.join('codechef', i))
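Putting both suggestions together (on Python 3, os.makedirs's exist_ok=True can replace the try/except wrapper entirely), the whole script collapses to something like:

```python
import os

TOP = ['spoj', 'functions', 'utilities']
CODECHEF = ['easy', 'medium', 'hard']

def euler_folder_names(highest, root='project_euler'):
    """Yield 'project_euler/001_050', 'project_euler/051_100', ..."""
    for i in range(1, highest, 50):
        yield os.path.join(root, '{:03}_{:03}'.format(i, i + 49))

def make_folders(base='.'):
    names = list(TOP)
    names += euler_folder_names(450)
    names += [os.path.join('codechef', sub) for sub in CODECHEF]
    for name in names:
        # makedirs creates missing parents, so 'project_euler' and
        # 'codechef' never need to be created explicitly
        os.makedirs(os.path.join(base, name), exist_ok=True)

if __name__ == '__main__':
    make_folders()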
{ "domain": "codereview.stackexchange", "id": 4165, "tags": "python, directory" }
Problem with momentum operator
Question: Why is there no problem with the eigenfunction of the momentum operator being non-normalisable? How can it be a valid quantum state? Answer: It is not a valid quantum state, it is an idealization of very long wave-packets emitted by atom-lasers. These wave-packets are almost coherent waves, very close, by their quantum description, to Fourier components, though they have finite length, e.g. 0.35mm (see arXiv quant-ph/9812.258, "An Atom Laser with a cw Output Coupler", by Bloch-Hänsch-Esslinger, and see the picture below). As to the actual Fourier components, they are normalizable to the Dirac $\delta$ function, s.t. we can work with them: $\int_{-\infty}^{+\infty} e^{ik'x}\, e^{-ikx}\, dx = 2\pi\,\delta(k' - k)$. Atom laser output: A collimated atomic beam derived from a Bose-Einstein condensate. In the upper side of the figure one can see the condensate, typically $^{87}$Rb.
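The $\delta$-normalization can be made tangible numerically: truncating the integral to a packet of length L gives $\int_{-L/2}^{L/2} e^{i(k'-k)x}\,dx = 2\sin((k'-k)L/2)/(k'-k)$, which equals L at $k'=k$ but stays bounded by $2/|k'-k|$ elsewhere — exactly the spike that tends (up to the $2\pi$ convention) to a $\delta$ function as the packet length grows. A quick check with made-up wavenumbers:

```python
import math

def overlap(k, kp, L):
    """Integral over [-L/2, L/2] of exp(i(kp-k)x) dx (real by symmetry)."""
    d = kp - k
    if d == 0:
        return L
    return 2.0 * math.sin(d * L / 2.0) / d

L = 1e4
same = overlap(2.0, 2.0, L)    # grows linearly with L
other = overlap(2.0, 2.5, L)   # bounded by 2/|dk| = 4, however large L gets
```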
{ "domain": "physics.stackexchange", "id": 20326, "tags": "quantum-mechanics, mathematical-physics, operators, momentum, hilbert-space" }
Difference between dot product attention and "matrix attention"
Question: As far as I know, attention was first introduced in Learning To Align And Translate. There, the core mechanism which is able to disregard the sequence length, is a dynamically-built matrix, of shape output_size X input_size, in which every position $(o, i)$ holds the (log) probability that output $o$ should attend to input $i$. That (log) probability is obtained by operating a learned function $a(h, s)$, where $h$ is a hidden state of the input, and $s$ is a cell state of the output. Please let's disregard the fact that these inputs are RNN-based, and only look at the attention mechanism itself - a dynamic matrix of (log) probabilities is built, each slot is built by a function taking in two vectors, and outputting their "correspondence". Jump forward to the iconic Attention Is All You Need. Please disregard the fact that in this paper, $K$ was separated from $V$, unlike in the previous one. I just want to look at the mechanism itself. Let's look only at Multi-Head Attention, and in it, let's look only at the part actually doing the attention: $ QK^T $ Let's assume $Q$ and $K$ are vectors and not matrices, for simplicity. Their attention score is their dot product. Let's compare the core attention mechanisms of "align and translate" against "all you need". In "align and translate", the function learns how two vectors correspond to one another In "all you need, the function learns to project embeddings into a continuous space, where they can be compared against other such projections by their dot-product. One could easily implement multi-head-attention with the dynamic matrix method, by a function $b(k, q)$ yielding the (log) probability that the two correspond, and putting that into a dynamic-size matrix. My question is what in the "all you need" core attention method makes it better than the "align and translate" core attention method? Are there ablation studies for this point? 
My intuition tells me it would be easier for a network to learn how to correspond vectors, rather than to learn an entire continuous space. Again, please disregard the other contributions in "all you need", such as self-attention, separation of key from value, normalization, the Transformer architecture, etc. Answer: The key difference between the attention mechanisms used in "Learning To Align And Translate" and "Attention Is All You Need" lies in the way that the similarity between the query and key vectors is calculated. "Learning To Align And Translate" The attention score is calculated by a learned function using a feed-forward neural network that takes in the query and key vectors and outputs a (log) probability of correspondence between them. This approach requires the model to learn a mapping from the input and output spaces to a joint space where the vectors can be compared against each other. "Attention Is All You Need" Here the attention is calculated as the similarity between the query and key vectors by taking their dot product and scaling it by the square root of their dimensionality. This approach does not require the model to learn a mapping to a joint space, but instead relies on the inherent structure of the vector space. Pros/Cons One advantage of the approach used in "Attention Is All You Need" is that it is computationally more efficient than the method used in "Learning To Align And Translate", especially for long sequences. Specifically, scaled dot-product attention is faster than "general/Bahdanau attention" in the sense that the latter is learnt via a usually shallow feed-forward neural network. In that sense, additional space and time overhead is incurred while traversing the computational graph of the model as part of training. That being said, there have been studies that have explored the use of different attention mechanisms in Transformers, including variants of the dot product and learned similarity functions.
While the dot product attention used in "Attention Is All You Need" has been shown to be effective in many cases, other mechanisms may be more appropriate for certain tasks or data types. I copy below recent studies relevant to transformer variations and attention mechanisms. Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. CoRR, abs/1904.10509, 2019. Nikita Kitaev, L. Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. ArXiv, abs/2001.04451, 2020. Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, and T. Lillicrap. Compressive transformers for long-range sequence modelling. ArXiv, abs/1911.05507, 2020. Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. ArXiv, abs/2006.04768, 2020. A. Katharopoulos, A. Vyas, N. Pappas, and F. Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In Proceedings of the International Conference on Machine Learning (ICML), 2020. Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao, and Che Zheng. Synthesizer: Rethinking self-attention in transformer models, 2020.
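To make the contrast concrete, here is a minimal NumPy sketch of the two scoring functions with randomly initialized, untrained weights; the dimensions and weight matrices are illustrative assumptions, not taken from either paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8          # embedding dimension (illustrative)
n_keys = 5     # number of input positions

q = rng.normal(size=d)             # one query/output-state vector
K = rng.normal(size=(n_keys, d))   # one key/hidden-state vector per input

# "Attention Is All You Need": similarity is the scaled dot product,
# relying on the geometry of the shared embedding space.
dot_scores = K @ q / np.sqrt(d)

# "Learning To Align And Translate" (additive/Bahdanau form): similarity is
# a small learned feed-forward network a(h, s) over both vectors.
W_q, W_k = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v = rng.normal(size=d)
add_scores = np.tanh(K @ W_k.T + q @ W_q.T) @ v

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Either way, softmax turns the scores into attention weights over inputs.
weights = softmax(dot_scores)
print(weights)  # a probability distribution over the 5 input positions
```

The dot-product variant involves no extra parameters beyond the projections, which is one way to see the efficiency argument made above.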
{ "domain": "ai.stackexchange", "id": 3776, "tags": "papers, transformer, attention, sequence-modeling" }
How can I use Machine learning for inter-relationship between Features?
Question: Machine learning is used mostly for prediction and there are numerous algorithms and packages for this. How can I use machine learning for studying inter-relationships between features? What are major packages and functions for this? Are there any packages for graphics in this area? Can artificial neural networks also be used for this purpose? If so, any particular type is specifically suited for this? I do not want to limit to any particular language like Python or R. Answer: I use pair plots to study the inter-relationship between features. Pair plots give first-level information about the features. The Seaborn library has a pairplot function, and matplotlib has an equivalent. Another thing you can use is a heatmap, which shows the correlation between features; using a heatmap we can see features that are highly correlated and may eliminate one of them. As a word of caution, you must have a good reason or domain knowledge to drop a feature
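As a sketch of the correlation/heatmap workflow described in the answer (the toy DataFrame and column names are made up for illustration):

```python
import numpy as np
import pandas as pd

# Toy data (made up): "b" is nearly a linear function of "a", "c" is independent.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
df = pd.DataFrame({
    "a": x,
    "b": 2 * x + rng.normal(scale=0.1, size=200),
    "c": rng.normal(size=200),
})

corr = df.corr()                 # pairwise Pearson correlations
print(corr.loc["a", "b"])        # close to 1: a candidate pair for elimination

# To visualize, as suggested in the answer:
#   import seaborn as sns
#   sns.pairplot(df)                # pairwise scatter plots
#   sns.heatmap(corr, annot=True)   # correlation heatmap
```

The numeric correlation matrix is what the heatmap renders, so inspecting it directly is a quick first check before plotting.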
{ "domain": "datascience.stackexchange", "id": 4073, "tags": "machine-learning, features" }
What is wrong with my ros installation
Question: I have just installed the ROS Indigo system following the sdk.rethinkrobotics.wiki tutorial on my laptop. However, every time I type a command such as rosrun or roscore in the terminal, the response is "command not found". ROS installed on a desktop following the same installation tutorial does not respond like that. So I would like to ask what is wrong with my installation or my laptop. Originally posted by riddick on ROS Answers with karma: 3 on 2015-08-25 Post score: 0 Answer: You need to source your setup.bash file in your .bashrc file. Edit the ~/.bashrc file and add the line source /home/<user_name>/<catkin_ws>/devel/setup.bash to the bottom of the file, then restart your terminal. Originally posted by Akif with karma: 3561 on 2015-08-25 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by duck-development on 2015-08-25: Do not forget to first source /opt/ros/indigo/setup.bash to get the basic ros commands Comment by riddick on 2015-08-26: Thank you, your answer perfectly solved my problem!
{ "domain": "robotics.stackexchange", "id": 22524, "tags": "ros" }
The role of symmetry in geometric complexity theory?
Question: I'm not well versed in geometric complexity theory so my question could be trivial. I understand that the GCT program studies the symmetries of the determinant and permanent to prove Valiant's Hypothesis: $VP\ne VNP$. In simple terms: What is the invariant property? Which mapping (transformation) is applied to the determinant and permanent? How is the symmetry of the permanent used to explain the difficulty of computing the permanent? Answer: As for what invariant properties are being used, and what transformations are being applied to the permanent and determinant, see the answer to a related question. As for how the symmetry is used to explain computational hardness: it's not. First, it's not just symmetry that is being used, but characterization by symmetries. As discussed in another question of yours, symmetry can very well make a function easier to compute. Second, even characterization by symmetries does not explain hardness - since both $det$ and $perm$ are characterized by their symmetries, but $det$ is easy. The idea instead is this: it should be easier to understand the algebro-geometric and representation-theoretic properties of functions that are characterized by their symmetries, and hence to carry out the GCT plan of attack. Since most functions are not characterized by their symmetries, using characterization by symmetries in a crucial way also gives some hope of bypassing the natural proofs barrier, as discussed here. (That characterization by symmetries should make understanding easier can be formalized a tiny bit more as follows. Since GCT is studying the orbits of $det$ and $perm$, and there is a sort of duality between orbits and stabilizers, functions that are characterized by their stabilizers should have "special" orbits in some sense. I guess this alone doesn't actually say anything about how difficult it should be to understand these orbits. 
But understanding "generic" orbits can be very hard; having a nice property like characterization by symmetries at least gives us something to grab on to that we can try to use to gain more understanding.)
{ "domain": "cstheory.stackexchange", "id": 224, "tags": "cc.complexity-theory, gct, symmetry" }
What is an intuitive explanation using forces for the equatorial bulge?
Question: The earth is not a sphere, because it bulges at the equator. I tried fiddling with centripetal force equations and gravity, but I couldn't derive why this bulge occurs. Is there (a) a mathematical explanation using forces (not energies) and (b) a simple intuitive explanation to explain to others why the bulge occurs? Answer: Equatorial bulging of a planet is caused by the combination of gravity and centrifugal force. To show this I will first make a few assumptions: The planet is assumed to be made up of a liquid of constant density. All liquid is at rest relative to itself, which means that there are no shear stresses within the liquid, since these would induce a flow. The equatorial bulging is small, such that the acceleration due to gravity, $\vec{a}_g$, at the surface can be approximated with: $\vec{a}_g=-G\frac{M}{\|\vec{x}\|^3}\vec{x}$, where $G$ is the gravitational constant, $M$ the mass of the planet and $\vec{x}$ the position on the surface relative to the center of mass of the planet. The planet is axisymmetric and rotates around this axis with a constant angular velocity $\omega$. A small volume, $dV$, experiences two volumetric accelerations, namely gravitational and centrifugal, and normal forces from the neighboring liquid in terms of pressure. The sum of all accelerations on $dV$ should add up to zero to comply with the second assumption (the centrifugal acceleration already accounts for the fact that the reference frame is rotating). At any point on the surface there is a constant pressure, because above it there would be the vacuum of space. This means that neighboring parcels of liquid, also at the surface, have the same pressure and therefore cannot exert any force on each other in the plane of the surface. The only direction in which the liquid can exert force is the normal direction to the surface. 
However the sum of all accelerations still should add up to zero, and therefore the sum of the gravitational and centrifugal acceleration should also point in the normal direction of the surface. The magnitude of the gravitational acceleration, $a_g$, is defined by assumption three and its direction is always radially inwards. The magnitude of the centrifugal acceleration, $a_c$, is equal to: $$ a_c = \omega^2 \sin\phi\ \|\vec{x}\|, $$ where $\phi$ is equal to $\pi/2$ minus the latitude; its direction is always parallel to the plane of the equator and its line of action always goes through the axis of rotation. These accelerations are illustrated in the figure below. For the next part I will define local unit vectors $\vec{e}_r$ and $\vec{e}_t$, where $\vec{e}_r$ points in the local radially outwards direction and $\vec{e}_t$ is perpendicular to it, lies in the plane spanned by the axis of rotation and $\vec{x}$, and faces the direction closest to the equator. The directions of these vectors also correspond to the grey vectors in the figure above. Using these unit vectors, the vector sum of the gravitational and centrifugal acceleration can be written as $$ \vec{a}_g + \vec{a}_c = \vec{e}_r \left(\omega^2 \sin^2\!\phi\ \|\vec{x}\| - G\frac{M}{\|\vec{x}\|^2}\right) + \vec{e}_t\ \omega^2 \sin\phi\ \cos\phi\ \|\vec{x}\|. $$ If there were no bulging then the normal vector would always point radially outwards. However the normal vector has to point in the opposite direction to the vector sum above, which means that for $\omega>0$ it will not point in the same direction as $-\vec{e}_r$ for all values of $\phi$. This means that the surface will have a small slope, $\alpha$, relative to $\vec{e}_t$ $$ \alpha = \tan^{-1}\left(\frac{\omega^2 \sin\phi\ \cos\phi\ \|\vec{x}\|}{G\frac{M}{\|\vec{x}\|^2} - \omega^2 \sin^2\!\phi\ \|\vec{x}\|}\right). $$ A slope means a change of height, and thus radius, when displacing horizontally. 
To simplify the expression, $r$ will substitute $\|\vec{x}\|$. For a slope $\alpha$ the change of the radius, $dr$, for a small change in $\phi$, $d\phi$, will be equal to: $$ dr = \tan\alpha\ r\ d\phi. $$ By substituting in the equation for $\alpha$ the following differential equation can be obtained $$ \frac{dr}{d\phi} = \frac{\omega^2 \sin\phi\ \cos\phi\ r^2}{G\frac{M}{r^2} - \omega^2 \sin^2\!\phi\ r} . $$ When $\phi$ is equal to $0$ or $\frac{\pi}{2}$, the poles and the equator respectively, this equation will be zero; however for any value in between, it will be positive, since if the denominator were to become negative it would mean that the centrifugal force is bigger than gravity and the liquid would be flung into space. So this planet has its smallest radius at the poles, after which the radius increases with $\phi$ until you reach the equator.
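As a sanity check on the final differential equation, one can evaluate it numerically with Earth-like parameter values (my own choice of numbers, not part of the original answer):

```python
import numpy as np

# Earth-like parameter values (assumed for illustration)
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M = 5.972e24       # planetary mass [kg]
omega = 7.292e-5   # rotation rate [rad/s]
r = 6.371e6        # mean radius [m], held fixed for this small-bulge estimate

def drdphi(phi, r):
    """Right-hand side of the surface-shape ODE derived above."""
    num = omega**2 * np.sin(phi) * np.cos(phi) * r**2
    den = G * M / r**2 - omega**2 * np.sin(phi)**2 * r
    return num / den

# The slope vanishes at the pole (phi = 0) and equator (phi = pi/2),
# and is positive everywhere in between, as the answer argues.
n = 10_000
dphi = (np.pi / 2) / n
bulge = sum(drdphi((i + 0.5) * dphi, r) * dphi for i in range(n))
print(bulge / 1000)  # roughly 11 (km)
```

This crude estimate gives a pole-to-equator radius increase of about 11 km, roughly half of Earth's actual ~21 km equatorial bulge; the point-mass gravity assumption ignores the extra gravitational pull of the bulge itself, which amplifies the deformation.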
{ "domain": "physics.stackexchange", "id": 14529, "tags": "newtonian-mechanics, newtonian-gravity, rotational-dynamics, centrifugal-force" }
Is flow speed between two pressures dependent on absolute or relative pressure difference?
Question: Imagine you have a pressure chamber which is divided by a removable wall in two equal parts. Scenario A) Air pressure on the left side is 1 bar, on the right side it is 2 bar. Scenario B) Air pressure on the left side is 10 bar, on the right side it is 11 bar. The absolute pressure difference between the two sides is the same (1 bar in both scenarios). The relative pressure difference is greater in scenario A than in scenario B. When the wall in the middle is removed air flows from the right side to the left side. Now, is the flow speed of the air the same for both scenarios or is it greater in scenario A? Intuitively, I assume the flow speed to be greater in scenario A (because of a greater relative pressure difference), but I can't back it up with a solid argument. What is happening? Answer: Well, let's look at the equations for a flow initially at rest: $$ \frac{\partial \rho u}{\partial t} = -\frac{\partial p}{\partial x}$$ So, if we assume density is constant in the flow, you get: $$ \frac{\partial u}{\partial t} = -\frac{1}{\rho}\frac{\partial p}{\partial x}$$ and so you can see that in both your cases, the gradient in pressure is the same size regardless of the magnitude of pressure. So, if your case B is hotter such that the two densities between case A and B are the same, the acceleration should be the same. On the other hand, if your temperatures are the same, then your acceleration will be higher in case A because density is lower. Since your system is closed and transient, it doesn't make sense to look at the total flow velocity because it is always changing. But the acceleration will change depending on how you changed your pressures. Another way to look at it. If we assume isentropic flow, the total pressure is conserved. This means that: $$ P_0 = P_s + \frac{1}{2}\rho u^2 \rightarrow \Delta P = \frac{1}{2}\rho u^2$$ where $\Delta P = P_0 - P_s$ somewhere along the streamline. 
So again we can see that the velocity depends on the change in pressure and the density and we're back at the same argument we had above.
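To illustrate the density argument numerically, here is a rough sketch treating air as an ideal gas at a common assumed temperature; the temperature and the 1 m length scale over which the gradient acts are my own assumptions, not from the question:

```python
# Initial acceleration -(1/rho) dp/dx for the two scenarios, with air treated
# as an ideal gas so that rho = p / (R_s * T).
R_s = 287.0   # specific gas constant of dry air [J/(kg K)]
T = 293.0     # common temperature assumed for both scenarios [K]
L = 1.0       # assumed distance over which the pressure drops [m]

def initial_accel(p_low, p_high):
    rho = p_high / (R_s * T)        # density on the high-pressure side
    dp_dx = (p_low - p_high) / L    # pressure gradient (negative toward low side)
    return -dp_dx / rho             # acceleration from the momentum equation above

a_A = initial_accel(1e5, 2e5)     # scenario A: 1 bar vs 2 bar
a_B = initial_accel(10e5, 11e5)   # scenario B: 10 bar vs 11 bar
print(a_A > a_B)  # A accelerates faster; the ratio equals the density ratio 11/2
```

With the same pressure difference, the initial acceleration scales inversely with density, matching the intuition in the question that the relative pressure difference matters at equal temperature.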
{ "domain": "physics.stackexchange", "id": 28178, "tags": "fluid-dynamics, pressure" }
epos_hardware_node can't find parameters
Question: Hey, I tried to get the epos_hardware package to work. I have the right parameters in the yaml file, and if I start the launch file directly with roslaunch mrd.launch it initializes the motors and I can send the velocity to one motor with the velocity command. But when I want to start it with epos_hardware_node like the wiki says, I get ae@AErobotNUC:~$ rosrun epos_hardware epos_hardware_node actuator_right [ INFO] [1475145806.123784737]: Loading EPOS: actuator_right [ERROR] [1475145806.133245540]: You must specify an actuator name [ERROR] [1475145806.135072354]: You must specify a serial number [ERROR] [1475145806.137558224]: You must specify an operation mode [ INFO] [1475145806.137672590]: [ INFO] [1475145806.230669194]: Waiting for robot_description I think that the problem is that the node doesn't get the parameters, so I tried to first start the launch file and then the node; I get this ae@AErobotNUC:~$ rosrun epos_hardware epos_hardware_node left [ INFO] [1475146396.731178459]: Loading EPOS: left [ERROR] [1475146396.739065881]: You must specify an actuator name [ERROR] [1475146396.741005635]: You must specify a serial number [ERROR] [1475146396.742883926]: You must specify an operation mode [ INFO] [1475146396.743017379]: [ INFO] [1475146396.854288335]: Initializing Motors [ERROR] [1475146396.854556155]: Not Initializing: 0x0, initial construction failed [ERROR] [1475146396.854641495]: Could not configure motor: left [FATAL] [1475146396.854699196]: Failed to initialize motors I'm new to ROS and don't understand the problem; thanks for the help. Originally posted by Milliau on ROS Answers with karma: 33 on 2016-09-29 Post score: 0 Answer: You should take a closer look at the example launch file and fully understand what it does. It does load the urdf and yaml to the parameter server, but it also calls epos_hardware_node. Without more information about your launch file, urdfs and yamls it is a bit difficult to predict what is wrong. 
I would also first check the output of the list_devices node and be sure that everything is being detected. For further help you should specify more about the context of your problem (devices etc.) Originally posted by Bogdar_ with karma: 136 on 2016-10-05 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 25858, "tags": "ros, velocity" }
Show $x^y$ is a primitive recursive function
Question: As the thread title gives away, I need to prove $x^y$ to be a primitive recursive function. So mathematically speaking, I think the following are the recursion equations, well aware that I am assigning to $0^0$ the value $1$, which shouldn't be, since it is an "indeterminate" form. \begin{cases} x^0=1 \\ x^{n+1} = x^n\cdot x \end{cases} More formally I would write: \begin{cases} h(x,0) = 1 \\ h(x,y+1) = g(y,h(x,y),x) \end{cases} where $g(x_1, x_2, x_3) = m\left(u^3_2(x_1, x_2, x_3),u^3_3(x_1, x_2, x_3)\right)$ and provided $m(x,y) = x \cdot y$ is primitive recursive. Is my proof acceptable? Am I correct, am I missing something or am I doing anything wrong? Answer: Supposing that $\times~(mult)$ is a primitive recursive function, you could write: $exp(x,y)=x^y$ 1) $exp(x,0)=x^0=1$ 2) $exp(x,y+1)=x^{y+1}=(x^y)\times x=mult(exp(x,y),x)$ For $mult$ you could show that: $mult(x,y)=x\times y$ 1) $mult(x,0)=x \times 0=0$ 2) $mult(x, y + 1) = x \times (y + 1) = (x \times y) + x = add(mult(x, y), x)$ and for addition $add$ the proof is straightforward. Hope these are useful!
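The recursion equations in the answer translate directly into code. Here is a sketch where each function is defined only from zero, the successor operation, and the previously defined function, mirroring the primitive recursion schema:

```python
# Each function below uses only zero, the successor (+1), and the previously
# defined function, mirroring the primitive recursion schema in the answer.
def add(x, y):
    if y == 0:
        return x                    # add(x, 0) = x
    return add(x, y - 1) + 1        # add(x, y+1) = S(add(x, y))

def mult(x, y):
    if y == 0:
        return 0                    # mult(x, 0) = 0
    return add(mult(x, y - 1), x)   # mult(x, y+1) = add(mult(x, y), x)

def exp(x, y):
    if y == 0:
        return 1                    # exp(x, 0) = 1 (so 0^0 = 1, as in the question)
    return mult(exp(x, y - 1), x)   # exp(x, y+1) = mult(exp(x, y), x)

print(exp(2, 10))  # 1024
```

The recursion on the second argument in each definition is exactly the primitive recursion scheme, with the previously proved function supplying the composition step.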
{ "domain": "cs.stackexchange", "id": 984, "tags": "computability, recursion, check-my-proof" }
Prerelease melodic Qt5 cmake error
Question: Hey, I am currently trying to do my prerelease tests for a melodic version of my meta-package robot_statemachine which was already released for kinetic. Unfortunately, when running the prerelease tests for melodic, I encountered various errors related to Qt5 dependencies. The meta-package includes two packages which are basically plugins, one for rqt and the other for rviz. Unfortunately, when running the prerelease script on docker, both fail with the below errors: rsm_rviz_plugins error: CMake Error at CMakeLists.txt:31 (find_package): By not providing "FindQt5.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "Qt5", but CMake did not find one. Could not find a package configuration file provided by "Qt5" (requested version 5.9.5) with any of the following names: Qt5Config.cmake qt5-config.cmake Add the installation prefix of "Qt5" to CMAKE_PREFIX_PATH or set "Qt5_DIR" to a directory containing one of the above files. If "Qt5" provides a separate development package or SDK, be sure it has been installed. rsm_rqt_plugins error: CMake Error at CMakeLists.txt:17 (find_package): By not providing "FindQt5Widgets.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "Qt5Widgets", but CMake did not find one. Could not find a package configuration file provided by "Qt5Widgets" with any of the following names: Qt5WidgetsConfig.cmake qt5widgets-config.cmake Add the installation prefix of "Qt5Widgets" to CMAKE_PREFIX_PATH or set "Qt5Widgets_DIR" to a directory containing one of the above files. If "Qt5Widgets" provides a separate development package or SDK, be sure it has been installed. After some research I found this question. Following the answer I added a build dependency for qtbase5-dev. 
Unfortunately, this only led to the following new error: CMake Error at /opt/ros/melodic/share/catkin/cmake/catkinConfig.cmake:83 (find_package): Could not find a package configuration file provided by "qtbase5-dev" with any of the following names: qtbase5-devConfig.cmake qtbase5-dev-config.cmake Add the installation prefix of "qtbase5-dev" to CMAKE_PREFIX_PATH or set "qtbase5-dev_DIR" to a directory containing one of the above files. If "qtbase5-dev" provides a separate development package or SDK, be sure it has been installed. Call Stack (most recent call first): CMakeLists.txt:6 (find_package) Therefore, my question: Could someone point out to me how to correctly include the Qt5 dependencies for rviz and rqt plugins in ROS melodic? Below you can see my CMakeLists and package files for the two packages throwing errors. rsm_rqt_plugins: CMakeLists.txt: cmake_minimum_required(VERSION 2.8.3) project(rsm_rqt_plugins) add_compile_options(-std=c++11) find_package(catkin REQUIRED COMPONENTS roscpp rqt_gui rqt_gui_cpp rsm_msgs std_msgs std_srvs ) set(CMAKE_AUTORCC ON) find_package(cmake_modules REQUIRED) find_package(Qt5Widgets REQUIRED) include_directories(${Qt5Widgets_INCLUDE_DIRS} include) ################################### ## catkin specific configuration ## ################################### catkin_package( INCLUDE_DIRS ${INC_DIR} LIBRARIES ${PROJECT_NAME} ${CMAKE_CURRENT_BINARY_DIR}/.. 
${CMAKE_CURRENT_BINARY_DIR} CATKIN_DEPENDS roscpp rqt_gui rqt_gui_cpp ) ########### ## Build ## ########### ## set variables for Statemachine Control node set(SRCS_CONTROLS src/RSMControls.cpp ) set(HDRS_CONTROLS src/RSMControls.h ) set(UIS_CONTROLS src/rsm_controls.ui ) set(INC_DIR include ${CMAKE_CURRENT_BINARY_DIR} ) ########################################################################################## # qt5_wrap_cpp produces moc files for all headers listed # qt5_wrap_ui produces .h files for all .ui files listed ## Statemachine Control: qt5_wrap_cpp(MOCS_SRCS_CONTROLS ${HDRS_CONTROLS}) qt5_wrap_ui(UI_HEADER_CONTROLS ${UIS_CONTROLS}) ########################################################################################## include_directories(${INC_DIR} ${catkin_INCLUDE_DIRS}) ## Statemachine Control: add_library(rsm_rqt_plugins ${SRCS_CONTROLS} ${MOCS_SRCS_CONTROLS} ${UI_HEADER_CONTROLS}) target_link_libraries(rsm_rqt_plugins ${catkin_LIBRARIES} ${QT_QTCORE_LIBRARY} ${QT_QTGUI_LIBRARY}) add_dependencies(rsm_rqt_plugins ${catkin_EXPORTED_TARGETS}) install(TARGETS rsm_rqt_plugins ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} RUNTIME DESTINATION ${CATKIN_GLOBAL_BIN_DESTINATION}) install(FILES rsm_rqt_plugins.xml DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}) package.xml: <?xml version="1.0"?> <package format="2"> <name>rsm_rqt_plugins</name> <version>1.2.0</version> <description>The rsm_rqt_plugins package includes the Robot Statemachine GUI plugin for rqt</description> <license>BSD</license> <buildtool_depend>catkin</buildtool_depend> <build_depend>roscpp</build_depend> <build_depend>rqt_gui</build_depend> <build_depend>rqt_gui_cpp</build_depend> <build_export_depend>roscpp</build_export_depend> <build_export_depend>rqt_gui</build_export_depend> <build_export_depend>rqt_gui_cpp</build_export_depend> <exec_depend>roscpp</exec_depend> <exec_depend>rqt_gui</exec_depend> 
<exec_depend>rqt_gui_cpp</exec_depend> <build_depend>rsm_msgs</build_depend> <build_export_depend>rsm_msgs</build_export_depend> <exec_depend>rsm_msgs</exec_depend> <build_depend>cmake_modules</build_depend> <build_export_depend>cmake_modules</build_export_depend> <exec_depend>cmake_modules</exec_depend> <build_depend>std_msgs</build_depend> <build_export_depend>std_msgs</build_export_depend> <exec_depend>std_msgs</exec_depend> <build_depend>std_srvs</build_depend> <build_export_depend>std_srvs</build_export_depend> <exec_depend>std_srvs</exec_depend> <build_depend>qtbase5-dev</build_depend> <export> <rqt_gui plugin="${prefix}/rsm_rqt_plugins.xml" /> </export> </package> rsm_rviz_plugin: CMakeLists.txt: cmake_minimum_required(VERSION 2.8.12) project(rsm_rviz_plugins) # C++ 11 add_definitions(-std=c++11) find_package(catkin REQUIRED COMPONENTS rviz rsm_msgs visualization_msgs interactive_markers std_msgs std_srvs pluginlib tf ) # Qt Stuff if(rviz_QT_VERSION VERSION_LESS "5") find_package(Qt4 ${rviz_QT_VERSION} REQUIRED QtCore QtGui) include(${QT_USE_FILE}) macro(qt_wrap_ui) qt4_wrap_ui(${ARGN}) endmacro() macro(qt_wrap_cpp) qt4_wrap_cpp(${ARGN}) endmacro() else() find_package(Qt5 ${rviz_QT_VERSION} REQUIRED Core Widgets) set(QT_LIBRARIES Qt5::Widgets) macro(qt_wrap_ui) qt5_wrap_ui(${ARGN}) endmacro() macro(qt_wrap_cpp) qt5_wrap_cpp(${ARGN}) endmacro() endif() set(CMAKE_AUTORCC ON) find_package(cmake_modules REQUIRED) find_package(Qt5Widgets REQUIRED) include_directories(${Qt5Widgets_INCLUDE_DIRS} include) catkin_package( INCLUDE_DIRS ${INC_DIR} LIBRARIES ${PROJECT_NAME} ${CMAKE_CURRENT_BINARY_DIR}/.. 
${CMAKE_CURRENT_BINARY_DIR} CATKIN_DEPENDS roscpp ) ########### ## Build ## ########### ## set variables for Statemachine Control node set(SRCS_CONTROLS src/PlantWaypointTool.cpp src/RSMControls.cpp ) set(HDRS_CONTROLS src/RSMControls.h ) set(UIS_CONTROLS src/rsm_controls.ui ) set(INC_DIR include ${CMAKE_CURRENT_BINARY_DIR} ) ########################################################################################## # qt5_wrap_cpp produces moc files for all headers listed # qt5_wrap_ui produces .h files for all .ui files listed ## Statemachine Control: qt5_wrap_cpp(MOCS_SRCS_CONTROLS ${HDRS_CONTROLS}) qt5_wrap_ui(UI_HEADER_CONTROLS ${UIS_CONTROLS}) include_directories(${INC_DIR} ${catkin_INCLUDE_DIRS}) ## Statemachine Control: add_library(${PROJECT_NAME} ${SRCS_CONTROLS} ${MOCS_SRCS_CONTROLS} ${UI_HEADER_CONTROLS}) add_executable(waypointFollowingVisualizationNode src/WaypointFollowingVisualizationNode.cpp src/WaypointFollowingVisualization.cpp) target_link_libraries(${PROJECT_NAME} ${rviz_DEFAULT_PLUGIN_LIBRARIES} ${QT_LIBRARIES} ${catkin_LIBRARIES}) target_link_libraries(waypointFollowingVisualizationNode ${catkin_LIBRARIES}) add_dependencies(${PROJECT_NAME} ${catkin_EXPORTED_TARGETS}) add_dependencies(waypointFollowingVisualizationNode ${catkin_EXPORTED_TARGETS}) ############# ## Install ## ############# ## Mark executables and/or libraries for installation install(TARGETS ${PROJECT_NAME} ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} ) # Mark cpp header files for installation install(DIRECTORY include/${PROJECT_NAME}/ DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION} ) install(TARGETS waypointFollowingVisualizationNode RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}) # Mark config files for installation install(FILES rsm_rviz_plugins.xml DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION} ) install(DIRECTORY launch/ DESTINATION 
${CATKIN_PACKAGE_SHARE_DESTINATION}/launch) install(DIRECTORY icons/ DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}/icons) install(DIRECTORY media/ DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}/media) package.xml: <?xml version="1.0"?> <package format="2"> <name>rsm_rviz_plugins</name> <version>1.2.0</version> <description>The rsm_rviz_plugins package includes the Robot Statemachine GUI plugin for RViz and the waypoint visualization </description> <license>BSD</license> <buildtool_depend>catkin</buildtool_depend> <build_depend>roscpp</build_depend> <build_export_depend>roscpp</build_export_depend> <exec_depend>roscpp</exec_depend> <build_depend>rviz</build_depend> <build_export_depend>rviz</build_export_depend> <exec_depend>rviz</exec_depend> <build_depend>interactive_markers</build_depend> <build_export_depend>interactive_markers</build_export_depend> <exec_depend>interactive_markers</exec_depend> <build_depend>visualization_msgs</build_depend> <build_export_depend>visualization_msgs</build_export_depend> <exec_depend>visualization_msgs</exec_depend> <build_depend>rsm_msgs</build_depend> <build_export_depend>rsm_msgs</build_export_depend> <exec_depend>rsm_msgs</exec_depend> <build_depend>cmake_modules</build_depend> <build_export_depend>cmake_modules</build_export_depend> <exec_depend>cmake_modules</exec_depend> <build_depend>std_msgs</build_depend> <build_export_depend>std_msgs</build_export_depend> <exec_depend>std_msgs</exec_depend> <build_depend>std_srvs</build_depend> <build_export_depend>std_srvs</build_export_depend> <exec_depend>std_srvs</exec_depend> <build_depend>pluginlib</build_depend> <build_export_depend>pluginlib</build_export_depend> <exec_depend>pluginlib</exec_depend> <build_depend>tf</build_depend> <build_export_depend>tf</build_export_depend> <exec_depend>tf</exec_depend> <build_depend>qtbase5-dev</build_depend> <export> <rviz plugin="${prefix}/rsm_rviz_plugins.xml" /> </export> </package> Originally posted by MarcoStb on ROS Answers with 
karma: 80 on 2020-11-19 Post score: 0 Answer: Seems like I was testing with a detached git head or something when trying to add <build_depend>qtbase5-dev</build_depend> to the package.xml. I tried the prerelease test on a different machine and it worked with the above packge.xml and CMakeLists.txt. So, adding the build dependency on qtbase5-dev fixed it. Originally posted by MarcoStb with karma: 80 on 2020-11-25 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 35780, "tags": "ros, rviz, ros-melodic, qt5, rqt" }
How can the Holevo bound be used to show that $n$ qubits cannot transmit more than $n$ classical bits?
Question: The inequality $\chi \le H(X)$ gives the upper bound on accessible information. This much is clear to me. However, what isn't clear is how this tells me I cannot transmit more than $n$ bits of information. I understand that if $\chi < H(X)$, then reliable inference isn't possible, with the Fano inequality giving the lower bound for the chance of an error being made. However, I've seen some examples state that $\chi\le n$ proves this, which I can only see being the case if $H(X)$ is maximal for each qubit. Do they mean that if $\chi = H(X)$ then, given that this is all the information about one qubit, for $n$ qubits, if $\chi=H(X)$ for all of them, then $\chi = n$? Is it taking $H(X)$ as all the information of a single qubit/bit, regardless of its value, and as such, if $\chi$ is equal to it, it is said to have access to all that information as well? Edit: Maybe to make this clearer, I am asking where $n$ comes from if we take $\chi \le H(X)$, as in many cases $H(X)$ will not be maximal. Answer: Like many ideas in quantum information theory, I think this is best understood using a $2$-party communication scenario. Suppose Alice has a classical random variable, $X$, which can take values $1,2, \cdots, k$ with probabilities $p_{X}(1), p_{X}(2), \cdots, p_{X}(k)$. Alice then encodes this information by encoding the classical index $j$ in the state $\rho^{j}$. One can represent this scenario as a classical ensemble, $\mathcal{E} = \{ p_X(j), \rho^{j} \}_{j=1}^{k}$ (note that the set $\{\rho^j\}$ is, per se, not mutually orthogonal). For convenience, let's explicitly keep the classical index $j$ by representing this as a classical-quantum state (where the classical index $j$ is correlated to the state $\rho^{j}$ that carries its information) $$ \sigma = \sum\limits_{j=1}^{k} p_X(j) | j \rangle_{X} \langle j | \otimes \rho^{j}. 
$$ Now, Alice sends this state to Bob, whose task is to determine the classical index $j$ by performing some (optimal) measurement on the state. After some thought, it becomes clear that the amount of information Bob can extract is the maximum mutual information of this ensemble. Define, $$ I_{\mathrm{acc}}(\mathcal{E})=\max _{\left\{\Lambda_{y}\right\}} I(X ; Y), $$ where $\{ \Lambda_{y} \}$ is a POVM and $Y$ is a random variable corresponding to the outcome of the measurement. This quantity $I_{\mathrm{acc}}(\mathcal{E})$ is called the accessible information of the ensemble $\mathcal{E}$. Now, in general, one has $$ I_{\mathrm{acc}}(\mathcal{E}) \leq \chi(\mathcal{E}) $$ where $$ \chi(\mathcal{E}) \equiv H\left(\rho_{B}\right)-\sum_{x} p_{X}(x) H\left(\rho_{B}^{x}\right) $$ is the Holevo information --- and this is where our classical-quantum state will become useful. Interestingly, for classical-quantum states, the Holevo information is equal to the mutual information. That is, $$ \chi(\mathcal{E})=I(X ; B)_{\sigma}, $$ which, when combined with the following (simple) bound: $$ I(X;Y) \leq \log \left( \mathrm{dim}(\mathcal{H}) \right), $$ gives us the desired result. Here, $\mathrm{dim}(\mathcal{H})$ is the dimension of the Hilbert space to which the states $\{\rho^j\}$ belong. To make the final result transparent, it is instructive to ask what kind of states will saturate this upper bound on the mutual information (and in turn the accessible information). This would correspond to the case where the maximum amount of information can be encoded and accessed from this protocol. It is a simple exercise to show that this happens when the set $\{ \rho^{j} \}$ is mutually orthogonal and hence all states $\rho^j$ are distinguishable. Now, if $k=2^n$, say, for example, because the random variable takes values in $n$-bit strings, then we need $\mathrm{dim}(\mathcal{H}) = 2^n$, which can be achieved by choosing states from an $n$-qubit space, $\mathcal{H} \cong (\mathbb{C}^{2})^{\otimes n}$.
Therefore, if we want to classically encode (and retrieve) $n$ bits, then we need $n$ qubits. Conversely, $n$ qubits can carry at most $n$ bits of information. A few remarks: You don't need to store the information in $n$ qubits. You can store the information in any $k$-dimensional quantum system (I'm pointing this out because the tensor product structure of the qubit space plays no role in this protocol; it might as well be a single-particle space with $k$ levels). The key constraint comes from the ability to successfully retrieve the information, which requires the states to be distinguishable. More details can be found in Section 11.6 of Mark Wilde's book.
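For a concrete feel, the bound $\chi(\mathcal{E}) \le \log \mathrm{dim}(\mathcal{H})$ can be checked numerically. The following sketch (my own, not part of the original answer) computes the Holevo information $\chi = S(\sum_j p_j \rho^j) - \sum_j p_j S(\rho^j)$ for a small qubit ensemble with NumPy; the particular states $|0\rangle$ and $|+\rangle$ are an arbitrary illustrative choice of non-orthogonal states:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop (near-)zero eigenvalues: 0 log 0 = 0
    return float(-np.sum(evals * np.log2(evals)))

def holevo_chi(probs, states):
    """chi = S(sum_j p_j rho_j) - sum_j p_j S(rho_j)."""
    avg = sum(p * rho for p, rho in zip(probs, states))
    return von_neumann_entropy(avg) - sum(p * von_neumann_entropy(rho)
                                          for p, rho in zip(probs, states))

# Ensemble of two non-orthogonal pure qubit states |0> and |+>, each with p = 1/2
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rhos = [np.outer(k, k.conj()) for k in (ket0, ketp)]
chi = holevo_chi([0.5, 0.5], rhos)

# chi is strictly below H(X) = 1 bit = log2(dim) for this non-orthogonal pair
print(chi)
```

For orthogonal states ($|0\rangle$, $|1\rangle$) the same function returns exactly 1 bit, illustrating the saturation condition discussed above.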
{ "domain": "quantumcomputing.stackexchange", "id": 1806, "tags": "quantum-state, entropy, information-theory" }
Is electricity instantaneous?
Question: My question is basically what exactly is electricity? I've simply been told before that it's a flow of electrons, but this seems too basic and doesn't show that electricity is instant. What I mean is turning a switch has no delay between that and a light coming on. Is it really instantaneous? Or is it just so fast that we don't notice it? Answer: It's just so fast you don't notice it. You won't see the effect of the travel time in something like turning on a light, because your eyes aren't fast enough to register the delay, but if you do even moderately precise experiments involving signal transmission and look at it on an oscilloscope, you will find that the travel time is easily measurable. The speed of signal propagation is close to that of light, or about a foot per nanosecond. (It's worth noting that this is not the speed of electrons moving through the wires, which is dramatically slower. The signal is a disturbance that propagates more rapidly than the drift velocity of electrons in a conductor.)
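To put a number on "so fast you don't notice it", here is a quick back-of-envelope sketch (my own illustration, not from the answer; the 0.66 velocity factor is a typical assumed value for signal propagation in ordinary cable):

```python
C = 299_792_458.0          # speed of light in vacuum, m/s

def propagation_delay_ns(length_m, velocity_factor=0.66):
    """One-way signal travel time along a cable, in nanoseconds."""
    return length_m / (velocity_factor * C) * 1e9

# A 10 m lamp cord: roughly 50 ns, far below human visual perception (~10 ms)
print(propagation_delay_ns(10.0))
```

A few tens of nanoseconds is invisible to the eye but, as the answer notes, easily measurable on an oscilloscope.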
{ "domain": "physics.stackexchange", "id": 16492, "tags": "electricity, everyday-life, electrons" }
Passing parameters by reference? Let me fix that for you
Question: I've implemented a code inspection that verifies whether a procedure has parameters that are implicitly passed by reference - in VBA a parameter is passed ByRef unless it's explicitly specified ByVal. Here is the implementation in question: [ComVisible(false)] public class ImplicitByRefParameterInspection : IInspection { public ImplicitByRefParameterInspection() { Severity = CodeInspectionSeverity.Warning; } public string Name { get { return "Parameter is passed ByRef implicitly"; } } public CodeInspectionType InspectionType { get { return CodeInspectionType.CodeQualityIssues; } } public CodeInspectionSeverity Severity { get; set; } public IEnumerable<CodeInspectionResultBase> GetInspectionResults(SyntaxTreeNode node) { var procedures = node.FindAllProcedures().Where(procedure => procedure.Parameters.Any(parameter => !string.IsNullOrEmpty(parameter.Instruction.Value))); var targets = procedures.Where(procedure => procedure.Parameters.Any(parameter => parameter.IsImplicitByRef) && !procedure.Instruction.Line.IsMultiline); return targets.SelectMany(procedure => procedure.Parameters.Where(parameter => parameter.IsImplicitByRef) .Select(parameter => new ImplicitByRefParameterInspectionResult(Name, parameter.Instruction, Severity))); } } Would there be a better way to do this? Here is the ImplicitByRefParameterInspectionResult class (gosh that's a long name!): [ComVisible(false)] public class ImplicitByRefParameterInspectionResult : CodeInspectionResultBase { public ImplicitByRefParameterInspectionResult(string inspection, Instruction instruction, CodeInspectionSeverity type) : base(inspection, instruction, type) { } public override IDictionary<string, Action<VBE>> GetQuickFixes() { return !Handled ? 
new Dictionary<string, Action<VBE>> { {"Pass parameter by value", PassParameterByVal}, {"Pass parameter by reference explicitly", PassParameterByRef} } : new Dictionary<string, Action<VBE>>(); } private void PassParameterByRef(VBE vbe) { if (!Instruction.Line.IsMultiline) { var newContent = string.Concat(ReservedKeywords.ByRef, " ", Instruction.Value); var oldContent = Instruction.Line.Content; var result = oldContent.Replace(Instruction.Value, newContent); var module = vbe.FindCodeModules(Instruction.Line.ProjectName, Instruction.Line.ComponentName).First(); module.ReplaceLine(Instruction.Line.StartLineNumber, result); Handled = true; } else { // todo: implement for multiline throw new NotImplementedException("This method is not [yet] implemented for multiline instructions."); } } private void PassParameterByVal(VBE vbe) { if (!Instruction.Line.IsMultiline) { var newContent = string.Concat(ReservedKeywords.ByVal, " ", Instruction.Value); var oldContent = Instruction.Line.Content; var result = oldContent.Replace(Instruction.Value, newContent); var module = vbe.FindCodeModules(Instruction.Line.ProjectName, Instruction.Line.ComponentName).First(); module.ReplaceLine(Instruction.Line.StartLineNumber, result); Handled = true; } else { // todo: implement for multiline throw new NotImplementedException("This method is not yet implemented for multiline instructions."); } } } Any comment is welcome. 
Answer: Starting from this var procedures = node.FindAllProcedures().Where(procedure => procedure.Parameters.Any(parameter => !string.IsNullOrEmpty(parameter.Instruction.Value))); var targets = procedures.Where(procedure => procedure.Parameters.Any(parameter => parameter.IsImplicitByRef) && !procedure.Instruction.Line.IsMultiline); return targets.SelectMany(procedure => procedure.Parameters.Where(parameter => parameter.IsImplicitByRef) .Select(parameter => new ImplicitByRefParameterInspectionResult(Name, parameter.Instruction, Severity))); Let's collapse it into one expression return node.FindAllProcedures() .Where(procedure => procedure.Parameters.Any(parameter => !string.IsNullOrEmpty(parameter.Instruction.Value))) .Where(procedure => procedure.Parameters.Any(parameter => parameter.IsImplicitByRef) && !procedure.Instruction.Line.IsMultiline) .SelectMany(procedure => procedure.Parameters.Where(parameter => parameter.IsImplicitByRef) .Select(parameter => new ImplicitByRefParameterInspectionResult(Name, parameter.Instruction, Severity))); Move the last Select out of SelectMany return node.FindAllProcedures() .Where(procedure => procedure.Parameters.Any(parameter => !string.IsNullOrEmpty(parameter.Instruction.Value))) .Where(procedure => procedure.Parameters.Any(parameter => parameter.IsImplicitByRef) && !procedure.Instruction.Line.IsMultiline) .SelectMany(procedure => procedure.Parameters) .Where(parameter => parameter.IsImplicitByRef) .Select(parameter => new ImplicitByRefParameterInspectionResult(Name, parameter.Instruction, Severity)); Now it seems like procedure.Parameters.Any(parameter => parameter.IsImplicitByRef) is redundant, so let's remove that and join the successive Wheres return node.FindAllProcedures() .Where(procedure => procedure.Parameters.Any(parameter => !string.IsNullOrEmpty(parameter.Instruction.Value)) && !procedure.Instruction.Line.IsMultiline) .SelectMany(procedure => procedure.Parameters) .Where(parameter => parameter.IsImplicitByRef) 
.Select(parameter => new ImplicitByRefParameterInspectionResult(Name, parameter.Instruction, Severity)); It's not clear why procedure.Parameters.Any(parameter => !string.IsNullOrEmpty(parameter.Instruction.Value)) is necessary -- if it is, a comment is required, otherwise we can just write return node.FindAllProcedures() .Where(procedure => !procedure.Instruction.Line.IsMultiline) .SelectMany(procedure => procedure.Parameters) .Where(parameter => parameter.IsImplicitByRef) .Select(parameter => new ImplicitByRefParameterInspectionResult(Name, parameter.Instruction, Severity)); Which is looking much more manageable. Or if you prefer the alternative syntax return from procedure in node.FindAllProcedures() where !procedure.Instruction.Line.IsMultiline from parameter in procedure.Parameters where parameter.IsImplicitByRef select new ImplicitByRefParameterInspectionResult(Name, parameter.Instruction, Severity);
{ "domain": "codereview.stackexchange", "id": 11201, "tags": "c#, rubberduck" }
What is the unbraced length of the compression portion of an X brace?
Question: In the diagram below, the compression member is $AC$. What is its unbraced length for determining buckling capacity? This also matters because certain codes put limits on the maximum value of $\frac{KL}{r}$ for bracing members. I know that the conservative answer is to consider the entire length, AC. This can be too conservative when lots of braces are used (total weight of steel increases). It can also be conservative if the actual compressive force is low enough that a member size is increased solely to meet the $\frac{KL}{r}$ requirements. I can justify to myself that the tension member braces the compression member in one direction (in the plane of the page), but can the tension member be considered to provide bracing in the orthogonal direction (in and out of the page)? There are two reasons why I think this might be possible: The tension brace provides resistance solely from being present. (i.e. something is better than nothing) The other brace is in tension so it will provide additional restraint similar to a bow string. Note: Some codes have requirements that a brace be able to withstand 5% of the axial force in the member being braced. I would think that this would be the lower limit for restraint coming from the tension member. Answer: What is its unbraced length for determining buckling capacity? AISC defines $L$ as the laterally unbraced length of the member (emphasis mine) (AISC 14th Ed. Steel Construction Manual, Part 16, Section E2). I can justify to myself that the tension member braces the compression member in one direction (in the plane of the page) I would agree. but can the tension member be considered to provide bracing in the orthogonal direction (in and out of the page)? Note: Some codes have requirements that a brace be able to withstand 5% of the axial force in the member being braced. I would think that this would be the lower limit for restraint coming from the tension member. 
Simply put, I don't think you'll be able to argue that the tension member provides enough out-of-plane resistance to bending of the compression member since the tension member would resist the out-of-plane load via bending, which is a relatively "soft" support condition. This also matters because certain codes put limits on the maximum value of $\frac{KL}{r}$ for bracing members. Further discussion from the Commentary Section E2 says: It's generally a good idea to stay under 200 for your slenderness ratio, though the Commentary discussion appears to give you an "out" if you think you need it.
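Since the question raises code limits on $\frac{KL}{r}$, the ratio itself is simple arithmetic; here is a sketch with entirely hypothetical numbers (K, L, and r below are illustrative values, not taken from the question or answer):

```python
def slenderness_ratio(k, length, r):
    """KL/r: effective length factor * unbraced length / radius of gyration."""
    return k * length / r

# Hypothetical brace: K = 1.0, full diagonal length 3000 mm, r = 25 mm
print(slenderness_ratio(1.0, 3000.0, 25.0))  # 120.0, under the customary limit of 200

# If the tension member is assumed effective as an in-plane brace at the
# crossing point, the in-plane unbraced length halves:
print(slenderness_ratio(1.0, 1500.0, 25.0))  # 60.0
```

This is why the bracing assumption matters: halving the unbraced length halves the slenderness ratio in that plane, while the out-of-plane ratio keeps the full length unless the out-of-plane restraint can be justified.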
{ "domain": "engineering.stackexchange", "id": 675, "tags": "structural-engineering" }
Cryptographically secure version of the core array_rand() function
Question: I want a cryptographically secure version of array_rand(). Is this it? /** * retrieve a random key from an array, using a cryptographically secure rng. * - it does the same as array_rand(), except that this one uses a cryptographically secure rng. * - relatively speaking, it should be significantly slower and more memory hungry than array_rand(), because it is creating a copy of all the keys of the array. * * @param array $arr * @throws ValueError if array is empty * @return mixed */ function array_rand_cryptographically_secure(array $array) { if (count ( $array ) < 1) { throw new ValueError ( 'Argument #1 ($array) cannot be empty' ); } if (PHP_MAJOR_VERSION >= 8 && array_is_list ( $array )) { // optimization, this avoids creating a copy of all the keys return random_int ( 0, count ( $array ) - 1 ); } $keys = array_keys ( $array ); return $keys [random_int ( 0, count ( $keys ) - 1 )]; } Significant edit: I figured I can use array_is_list() to avoid copying the keys when given a list. Answer: array_is_list() is available from PHP 8.1. You can check the PHP major and minor version with version_compare() (version_compare(PHP_VERSION, '8.1', '>=')) or function_exists() -- the latter is more concise and explicit than the former. The $max key position is shared by both branches of the conditional logic, so you can count() just once. In other words, regardless of whether you call array_keys(), the array's size will not change. With $max already declared, $keys becomes a single-use variable and is therefore omittable. I prefer to type hint whenever possible. Since array keys can only either be integers or strings, the return value can be typed. With PHP 8, union types allow the explicit dual-typing of the return value as a string or an integer. Of course, if you are trying to make a function that works under PHP 8, then disregard this point.
Code: (Demo) function array_rand_cryptographically_secure(array $array): int|string { $max = count($array) - 1; if ($max < 0) { throw new ValueError('Argument #1 ($array) cannot be empty'); } if (function_exists('array_is_list') && array_is_list($array)) { // optimization, this avoids creating a copy of all the keys return random_int(0, $max); } return array_keys($array)[random_int(0, $max)]; } $tests = [ [5, 6, 7], ['a' => 1, 'b' => 2, 'c' => 3], ['zero', 4 => 'four', 9 => 'nine'] ]; foreach ($tests as $test) { echo array_rand_cryptographically_secure($test) . "\n"; } Alternatively, you can slice and preserve the original keys to avoid doubling the memory cost. (Demo) function array_rand_cryptographically_secure(array $array): int|string { $max = count($array) - 1; if ($max < 0) { throw new ValueError('Argument #1 ($array) cannot be empty'); } return key(array_slice($array, random_int(0, $max), 1, true)); } And finally, the most compact and perhaps hardest to read because of all of the nested function calls: (Demo) function array_rand_cryptographically_secure(array $array): int|string { if (!$array) { throw new ValueError ('Argument #1 ($array) cannot be empty'); } return key(array_slice($array, random_int(0, count($array) - 1), 1, true)); }
{ "domain": "codereview.stackexchange", "id": 45418, "tags": "php, random, security" }
Evaporation from a capillary tube
Question: Consider a capillary tube (say from a liquid/capillary thermometer), that is, a tube of small internal diameter which holds liquid by capillary action. The tube is filled with water and closed at one end. How long would it take for the water to completely evaporate out through the open end of the tube? To be specific, let the diameter of the tube be $d=1$ mm, the length of the tube $L=1$ m, the environmental temperature $T=300$ K and pressure $P=10^5$ Pa, the relative humidity $f=60$ %, and the contact angle a) $\theta=0^{\circ}$ and b) $\theta=180^{\circ}$. Does it have any effect on the time when the tube is positioned horizontally or vertically (open end above)? Answer: As a rough answer: you can calculate the rate of evaporation from a water surface for the given temperature, humidity and a parabolic surface area. The problem starts a few seconds later: the relative humidity above the water surface will rise close to 100%, and now the rate of evaporation is limited by the diffusion of water molecules through the air out of the capillary. If you are not interested in the time dependence, one can consider these two cases separately. After some time, the rate of evaporation will be equal to the diffusion rate at the end of the capillary. Additionally, the length of the tube compared to the length of the water column can influence the result, because it determines the length of the capillary through which the water vapour has to diffuse. To your last question: in theory yes, as you will have less diffusion against the additional gravity; in practice it will only matter if you have a much longer tube. (The opposite is true if the experiment is surrounded by normal air. The mass of air molecules like O$_2$, N$_2$, ... is higher than that of H$_2$O, as Georg pointed out.)
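The diffusion-limited stage can be estimated with a quasi-steady sketch (my own back-of-envelope, not part of the answer): once the meniscus has receded a distance $x$ from the open end, vapour must diffuse over that length, so $\rho\,\mathrm{d}x/\mathrm{d}t \approx D\,\Delta c / x$, which integrates to $t \approx \rho L^2 / (2 D \Delta c)$. The material constants below are typical handbook values; curvature, contact-angle, and Stefan-flow corrections are ignored:

```python
# Quasi-steady, diffusion-limited drying time for a dead-end capillary.
# Assumed constants (typical values near 300 K, not from the question):
D   = 2.5e-5      # diffusivity of water vapour in air, m^2/s
c_s = 0.026       # saturation vapour concentration, kg/m^3 (p_sat ~ 3.5 kPa)
rho = 1000.0      # liquid water density, kg/m^3

def drying_time_years(L, rel_humidity):
    """t = rho * L^2 / (2 * D * dc), with dc = (1 - RH) * c_sat."""
    dc = (1.0 - rel_humidity) * c_s
    t_seconds = rho * L**2 / (2.0 * D * dc)
    return t_seconds / (365.25 * 24 * 3600)

# For L = 1 m at 60% relative humidity the estimate is on the order of decades
print(drying_time_years(1.0, 0.60))
```

Note that the tube diameter drops out of this one-dimensional estimate, and the time grows as $L^2$, consistent with the answer's point that diffusion through the capillary dominates.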
{ "domain": "physics.stackexchange", "id": 1852, "tags": "thermodynamics" }
How are the patterns in solid rock born
Question: I took this photo on my vacation in Finland showing white stripes in the rock going different ways. How are these patterns born? Answer: One can only speculate based upon a photograph - however, they look very much like mineralized fractures. At some time in the past, this rock mass may have fractured in response to thermal or tectonic stresses. Fluid may then have penetrated relatively long distances along the fractures into the area and infiltrated shorter distances into the wall rock along small-scale porosity such as cracks and grain boundaries (and possibly also by diffusion). The fluid and rock were most likely not in chemical equilibrium when they came into contact, and there were chemical reactions that produced additional minerals along the trace of the original fracture. This is probably what you see - the result of a process of fracture, mass transport, and fluid-rock chemical reaction. But a geologist would need much more information about this system (than is available from one photograph) to describe the details. Seeing something like this in the field is always very exciting because it raises so many questions. Where did the fluid come from? What was the temperature? When did the rock fracture?
{ "domain": "earthscience.stackexchange", "id": 192, "tags": "geology, rocks, petrology, bedrock" }
Troubleshooting 'rviz-9' Process Death via TeamViewer, Node Runs Fine Without TeamViewer
Question: Rviz dies whenever I run the node via TeamViewer. Without using TeamViewer and running the node directly on the computer, the node runs fine. This is the error that pops up on my remote desktop: REQUIRED process [rviz-9] has died! process has died [pid 6774, exit code -11, cmd /opt/ros/........... log file: /home/......... Initiating shutdown! Originally posted by Jish on ROS Answers with karma: 1 on 2018-09-04 Post score: 0 Answer: TeamViewer most likely interferes with your video card's ability to properly render 3D graphics. That error is just about always caused by something wrong in the graphics pipeline, so that would be a good first thing to check. Originally posted by gvdhoorn with karma: 86574 on 2018-09-04 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 31703, "tags": "rviz, ros-kinetic" }
Heat capacity: practical or logical meaning (question for student of an high school without calculus)
Question: The thermal or heat capacity $C$ of a body (or more generally of any system) is defined as the ratio between the heat exchanged between the body and the environment and the resulting temperature variation: $$C=\frac{Q_{\text{heat}}}{\Delta T}$$ Thus defined, the heat capacity depends both on the substance and on the amount of substance we are heating. I have made some observations: is the heat capacity, structurally, an amount of heat per temperature variation (just as the average velocity is the ratio between a displacement and a time interval)? Could we also define it as the ability of a body subjected to a certain heat to increase its temperature more easily, considering that $C$ represents the slope of a line in a temperature-heat graph or vice versa? When we say that the heat capacity of water is $4186\, J/K$, can we say that water holds heat better than other liquids or solids, or that it releases it more easily? Possible connection to the question topic: Why and who has established that $1\, cal \equiv 4.186\, J$? Answer: When we say that the heat capacity of water is $4186\text{ }\mathrm{J/K}$, [...] But we don't say that. The value of $4186$ is the specific heat capacity $c_p$: $$c_p=4186\text{ }\mathrm{Jkg^{-1}K^{-1}}$$ i.e., $c_p$ is in joules per $\text{kg}$ and per $\text{K}$. [...] can we say that water holds heat better than other liquids? Compared to other substances (or mixtures of substances) with a lower specific heat capacity, yes. On request of the OP: The specific heat capacity is the amount of heat energy needed to heat $1\text{ }\mathrm{kg}$ of the material by $1\text{ }\mathrm{K}$: $$c_p(T)=\frac{1}{m}\Big(\frac{\text{d}Q}{\text{d}T}\Big)_T$$ If $c_p$ is constant over an interval $\Delta T$, then we can write: $$c_p=\frac{\Delta Q}{m\Delta T}$$ For a uniform object made of a material of specific heat capacity $c_p$ and of mass $m$, its heat capacity is: $$C=mc_p$$
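The relations $C = m c_p$ and $Q = m c_p \Delta T$ invite a quick worked example (the numbers are my own, purely illustrative; only the water $c_p$ value comes from the answer):

```python
c_p_water = 4186.0   # specific heat capacity of water, J/(kg*K)

def heat_capacity(mass_kg, c_p):
    """C = m * c_p, in J/K."""
    return mass_kg * c_p

def heat_required(mass_kg, c_p, delta_T):
    """Q = m * c_p * dT (c_p assumed constant over the interval dT)."""
    return mass_kg * c_p * delta_T

# Warming 2 kg of water by 10 K:
print(heat_capacity(2.0, c_p_water))        # 8372.0 J/K
print(heat_required(2.0, c_p_water, 10.0))  # 83720.0 J
```

This makes the extensive/intensive distinction concrete: doubling the mass doubles $C$ and $Q$, while $c_p$ stays fixed for the substance.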
{ "domain": "physics.stackexchange", "id": 75523, "tags": "thermodynamics" }
Propagation mode for anisotropic medium
Question: Let's assume that we have a sourceless anisotropic medium with $\epsilon_1, \epsilon_2, \epsilon_3$ as the diagonal values. Also assume $\vec{k}=k_z\hat{z}$ and the $e^{i \omega t} e^{-i \vec{k} \cdot \vec{r}}$ form. We have $\vec{k} \cdot \vec{D} = 0 \implies \vec{k} \cdot \underline{\underline{\epsilon}} \vec{E} = 0 \implies k_z \epsilon_3 E_z = 0 $. From the curl equations and the fact that from above $E_z = 0$ and $\vec{k} \cdot \vec{E} = 0$, we then have $k_z^2 \vec{E} = \omega^2 \mu_0 \underline{\underline{\epsilon}} \vec{E}$, which implies $E_x k_z^2 = \omega^2 \mu_0 \epsilon_1 E_x$ and $E_y k_z^2 = \omega^2 \mu_0 \epsilon_2 E_y$. So is this saying that the wave can only propagate in two modes? One where $E_x=0, E_y \neq 0$ and one where $E_y=0, E_x \neq 0$? For if there were nonzero $x$ and $y$ components (with $\epsilon_1 \neq \epsilon_2$), then $k_z^2$ would have to equal two different values. Answer: You are right that the propagation constant differs for these two polarizations, but remember also that any linear combination of these two modes is also a perfectly valid solution. It's just that the components of the wave polarized along the $x$ and $y$ directions propagate with different propagation constants (assuming they have the same frequency). This situation, arising from the anisotropy of the medium, is called birefringence in optics. The difference of the propagation constants for the two polarizations causes their relative phase relationship to change as a function of position ($z$-coordinate), which results in a polarization state that depends on position. This property is exploited in quarter-wave plates, which can be used to create circularly polarized light from a linearly-polarized source.
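The position-dependent polarization described at the end can be quantified: the two modes accumulate a phase difference $\Delta\phi = (k_1 - k_2)z = 2\pi\,\Delta n\, z / \lambda_0$, and a quarter-wave plate chooses $z$ so that $\Delta\phi = \pi/2$. Here is a small sketch of that arithmetic (my own illustration; the $\Delta n \approx 0.17$ value is typical of calcite near 590 nm and is only an assumed example):

```python
import math

def retardance_rad(delta_n, length_m, wavelength_m):
    """Phase difference accumulated between the two polarizations over length_m."""
    return 2.0 * math.pi * delta_n * length_m / wavelength_m

def quarter_wave_thickness(delta_n, wavelength_m):
    """Thinnest plate giving a pi/2 retardance: L = lambda / (4 |dn|)."""
    return wavelength_m / (4.0 * abs(delta_n))

# Calcite-like birefringence |dn| ~ 0.17 at 590 nm (illustrative values):
L = quarter_wave_thickness(0.17, 590e-9)
print(L)                                  # under a micrometre
print(retardance_rad(0.17, L, 590e-9))    # pi/2 by construction
```

Real wave plates are usually made many times thicker (multiple-order or zero-order stacks), but the phase relation is the same.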
{ "domain": "physics.stackexchange", "id": 69455, "tags": "electromagnetism, waves" }
gazebo does not load simple model
Question: I created a simple .world file: <?xml version="1.0"?> <sdf version="1.4"> <world name="office"> <include> <uri>model://sun</uri> </include> <include> <uri>model://ground_plane</uri> </include> </world> </sdf> My launch file also adds a robot to this world. It launches fine in gazebo, I can see the gray ground plane with grid markings on it and also my robot. Next I created a simple building layout (some walls created using Gazebo's building editor). I saved the building model and added these lines to my .world file: <include> <uri>model://car_description/models/tunnel</uri> <pose>25 20.2 0 0 0 0</pose> </include> When I launch the file again, this time it takes ages to launch. I see a 'Preparing your world ...' screen for a long time (~10 mins): and then I see this dark screen: Other solutions which I have tried and which did not work for me are: Adding the following code to the package's package.xml <export> <gazebo_ros gazebo_model_path="${prefix}/models"/> <gazebo_ros gazebo_media_path="${prefix}/models"/> </export> Adding the environment variable GAZEBO_MODEL_PATH to my .bashrc file: export GAZEBO_MODEL_PATH=/home/my_user/my_catkin_ws/src/ There are no errors thrown in the terminal. What am I doing wrong? Originally posted by Subodh Malgonde on Gazebo Answers with karma: 31 on 2018-08-11 Post score: 0 Answer: I placed the model files in the ~/.gazebo/models directory and changed the code in the .world file to: <include> <uri>model://tunnel</uri> <pose>25 20.2 0 0 0 0</pose> </include> It's working fine now. However, I still don't know how to provide a reference to models which are not in the ~/.gazebo/models directory. Originally posted by Subodh Malgonde with karma: 31 on 2018-08-12 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Raskkii on 2018-08-13: What is the output of "echo $GAZEBO_MODEL_PATH"? Comment by Subodh Malgonde on 2018-08-23: This environment variable is not set by ROS. So the output is blank.
After I set it in .bashrc, I get the output. However it did not fix my problem. Comment by Raskkii on 2018-08-23: Try changing the command set in .bashrc slightly: export GAZEBO_MODEL_PATH=${GAZEBO_MODEL_PATH}:/home/my_user/my_catkin_ws/src/ The difference here is that this adds your custom path to the existing model path (which should be ~/.gazebo/models) and doesn't overwrite it, allowing both paths to be used.
{ "domain": "robotics.stackexchange", "id": 4311, "tags": "gazebo" }
error in import
Question: Dear ROS users, I created a service in my_package but I am not able to import it. In particular, using "from my_package import srv" in a ROS Python node raises this error: ImportError: cannot import name srv However, doing the same "from my_package import srv" from a Python shell, I don't get any error. I'm using catkin on ros-hydro and both catkin_make and catkin_make install don't give me any error. Thank you for your help Originally posted by Rahndall on ROS Answers with karma: 133 on 2014-10-15 Post score: 1 Original comments Comment by Rahndall on 2014-10-15: Could it be related to the PYTHONPATH? Answer: The problem was the name of the file: it caused problems when the package, the file, and the node all had the same name. Sorry for the "noob" question Originally posted by Rahndall with karma: 133 on 2014-10-29 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 19735, "tags": "python, catkin" }
Transfer video using Wifi
Question: Hey, I am trying to transfer video from one laptop to another for real-time analysis. I followed the instructions given in the following link: http://wiki.ros.org/ROS/Tutorials/MultipleMachines I want to transfer video at 720p, but there is a lot of lag and the received frame rate is also very low. Is there a problem with video transfer using Wi-Fi? That shouldn't be the case, since some cameras have built-in Wi-Fi, and drones are also used in concerts and matches for live streaming. Are there other methods to transfer besides ROS, or another method within ROS? I could only get real-time video when I reduced the resolution to 200 x 200 (very low). Edit: I am using the usb_cam package to get webcam data and using rviz to view the video feed. Thanks. Originally posted by Trishant_Roy_221b on ROS Answers with karma: 1 on 2018-01-09 Post score: 0 Answer: Are you using any compression? Or sending raw, uncompressed, full-frame, full-colour 720p frames over a sensor_msgs/Image topic? In the latter case, that could get up to ~ 3.5MB per frame (megabytes, not bits). For 30fps, that would be ~105MB/s. Your wifi probably can't handle that. Originally posted by gvdhoorn with karma: 86574 on 2018-01-09 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Trishant_Roy_221b on 2018-01-14: I tried the compression topic as well, and that too in 480p. Though the output is better, the issue still persists. Is there any other possible reason besides the Wi-Fi? Because some surveillance cameras do use Wi-Fi to send output to a PC/mobile. Comment by gvdhoorn on 2018-01-14: Can you describe what you have used exactly? Edit your original question with the new info. Packages/nodes used, topics, configuration settings, etc. Note that sending individual frames over sensor_msgs/Image (even if compressed) is always going to be less efficient than a proper video codec. Comment by Trishant_Roy_221b on 2018-01-14: Edited. Yes...
that is true... but then the video feed should be better, like Wi-Fi cameras
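The bandwidth arithmetic in the accepted answer is easy to reproduce. A sketch of it (assuming 3 bytes per pixel for raw RGB; the answer's ~3.5 MB/frame figure suggests it allowed closer to 4 bytes per pixel plus overhead):

```python
def raw_video_bandwidth_MBps(width, height, fps, bytes_per_pixel=3):
    """Uncompressed video bandwidth in megabytes per second."""
    return width * height * bytes_per_pixel * fps / 1e6

print(raw_video_bandwidth_MBps(1280, 720, 30))  # ~83 MB/s for raw 720p RGB at 30 fps
print(raw_video_bandwidth_MBps(200, 200, 30))   # ~3.6 MB/s -- why 200x200 worked
```

Consumer Wi-Fi cameras stay well under these figures by using a video codec (H.264 etc.), which is the answer's underlying point about sending individual raw frames.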
{ "domain": "robotics.stackexchange", "id": 29699, "tags": "ros, wifi" }
Sudoku-Solver - follow-up
Question: I've now tried to use the suggestions you can find here to improve my Sudoku-Solver. Here's the updated code: import javax.swing.*; import java.awt.*; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import static javax.swing.WindowConstants.EXIT_ON_CLOSE; public class SudokuSolver { public static void main(String[] args) { SwingUtilities.invokeLater(SudokuSolver::createGUI); } private static void createGUI() { int[][] board = { { 0, 0, 0, 0, 0, 0, 0, 0, 0 }, { 0, 0, 0, 0, 0, 0, 0, 0, 0 }, { 0, 0, 0, 0, 0, 0, 0, 0, 0 }, { 0, 0, 0, 0, 0, 0, 0, 0, 0 }, { 0, 0, 0, 0, 0, 0, 0, 0, 0 }, { 0, 0, 0, 0, 0, 0, 0, 0, 0 }, { 0, 0, 0, 0, 0, 0, 0, 0, 0 }, { 0, 0, 0, 0, 0, 0, 0, 0, 0 }, { 0, 0, 0, 0, 0, 0, 0, 0, 0 }, }; JFrame frame = new JFrame(); frame.setSize(800, 700); frame.setDefaultCloseOperation(EXIT_ON_CLOSE); JPanel panel = new JPanel(); JPanel subpanel1 = new JPanel(); subpanel1.setPreferredSize(new Dimension(500,500)); subpanel1.setLayout( new java.awt.GridLayout( 9, 9, 20, 20 ) ); JTextField[][] text = new JTextField[9][9]; Font font = new Font("Verdana", Font.BOLD, 40); for(int i = 0; i < 9; i++) { for(int j = 0; j < 9; j++) { text[i][j] = new JTextField(); text[i][j].setText("0"); text[i][j].setEditable(true); text[i][j].setFont(font); subpanel1.add(text[i][j]); } } JPanel subpanel2 = new JPanel(); JButton button = new JButton("OK"); button.addActionListener(new ActionListener() { @Override public void actionPerformed(ActionEvent actionEvent) { for(int i = 0; i < 9; i++) { for(int j = 0; j < 9; j++) { String s = text[i][j].getText(); board[i][j] = Integer.valueOf(s); } } boolean solve = solver(board); if(solve) { for(int i = 0; i < 9; i++) { for(int j = 0; j < 9; j++) { text[i][j].setText("" + board[i][j]); text[i][j].setEditable(false); } } } else { JOptionPane.showMessageDialog(null,"Not solvable."); } button.setVisible(false); } }); subpanel2.add(button); panel.add(subpanel1, BorderLayout.WEST); panel.add(subpanel2, 
BorderLayout.EAST); frame.add(panel); frame.setVisible(true); } //Backtracking-Algorithm public static boolean solver(int[][] board) { for (int i = 0; i < 9; i++) { for (int j = 0; j < 9; j++) { if (board[i][j] == 0) { for (int n = 1; n < 10; n++) { if (checkRow(board, i, n) && checkColumn(board, j, n) && checkBox(board, i, j, n)) { board[i][j] = n; if (!solver(board)) { board[i][j] = 0; } else { return true; } } } return false; } } } return true; } public static boolean checkRow(int[][] board, int row, int n) { for (int i = 0; i < 9; i++) { if (board[row][i] == n) { return false; } } return true; } public static boolean checkColumn(int[][] board, int column, int n) { for (int i = 0; i < 9; i++) { if (board[i][column] == n) { return false; } } return true; } public static boolean checkBox(int[][] board, int row, int column, int n) { row = row - row % 3; column = column - column % 3; for (int i = row; i < row + 3; i++) { for (int j = column; j < column + 3; j++) { if (board[i][j] == n) { return false; } } } return true; } } I think there are some improvements already, but I would appreciate any suggestions to further improve the code (especially the GUI). The follow-up question can be found here. Answer: GUI Design Changes BorderLayout You are adding subpanels with: panel.add(subpanel1, BorderLayout.WEST); panel.add(subpanel2, BorderLayout.EAST); but you declare panel with: JPanel panel = new JPanel(); which by default uses FlowLayout. You probably want to explicitly use BorderLayout: JPanel panel = new JPanel(new BorderLayout()); and then I find: panel.add(subpanel1, BorderLayout.CENTER); panel.add(subpanel2, BorderLayout.PAGE_END); produces a more pleasing layout. Centred Text With the BorderLayout, the 9x9 grid layout will expand to fill most of the application window. 
With larger windows, the left-aligned text fields look wrong, so instead, add: text[i][j].setHorizontalAlignment(JTextField.CENTER); Grid Gaps At this point, I removed the GridLayout hgap and vgap, and removed the preferred size for subpanel1: JPanel subpanel1 = new JPanel(new GridLayout(9, 9)); GUI Code Refactoring Member variables The createGUI() method is a little large; it contains the event handler for the button. Let's move that out into its own function. Since it will need access to the text[i][j] and button, let's move those into members of a SudokuSolver object. Obviously, we'll need to create a SudokuSolver object, so let's use invokeLater to create the object, and build the GUI inside the constructor. public class SudokuSolver { public static void main(String[] args) { SwingUtilities.invokeLater(SudokuSolver::new); } private final JTextField[][] text; private final JButton button; SudokuSolver() { JFrame frame = new JFrame(); ... text = new JTextField[9][9]; ... button = new JButton("OK"); button.addActionListener(this::solveBoard); ... panel.add(subpanel1, BorderLayout.CENTER); panel.add(subpanel2, BorderLayout.PAGE_END); frame.add(panel); frame.setVisible(true); } private void solveBoard(ActionEvent action_event) { ... } ... } The Board The createGUI() method had a board matrix which was explicitly initialized to a 9x9 grid of 0's. The creation of the GUI didn't use this board at all; it was used by the actionPerformed handler. So it does not need to be included in the constructor's GUI creation code. It can be created as a local variable in the solveBoard() method. private void solveBoard(ActionEvent action_event) { int board[][] = new int[9][9]; for(int i = 0; i < 9; i++) { for(int j = 0; j < 9; j++) { board[i][j] = Integer.valueOf(text[i][j].getText()); } } ... } Hiding UI elements When you make something invisible, the entire UI may need to be updated, as components may grow to use the newly vacated space.
It is usually preferable to disable components, instead of making them invisible, when they are no longer needed or appropriate: button.setEnabled(false); Quality of Life Improvements Suggestions for improvements Input Validation What happens if the user enters a bad input for one of the cells, and presses "OK"? The program might crash, which is usually unacceptable behaviour for a GUI application. The user might not even see a console message explaining why the crash occurred! What if the user enters bad, but valid input, like "10" or "-1" into a cell? The solver won't have any problem finding values that work to solve the puzzle, but does it make sense to even attempt solving it? Perhaps the "OK" button should only be enabled if all the cells contain only a single digit, and disabled otherwise? Retry After solving, or attempting to solve a puzzle, what can the user do? Only close the application. They can't reset the puzzle. What if they made a mistake, and want to change something? They have to reenter all the givens. This is most apparent if "Not solvable." is displayed. None of the user inputs have been changed, all the cells are still editable, but the "OK" button to solve the puzzle is no longer available? Then the user sees "Oh, that cell was supposed to be a 7, not a 1" ... and while they can change the cell value, they can't reattempt the solution! They have to relaunch the application, and enter the values again. Or you could be nice and in this one case, you could leave the "OK" available to try again. How about a "Reset" button, which is enabled after a successful solve, which removes the solved values and reenables all the input cells? Uniqueness Can you tell if there are multiple solutions? Other Review Comments Naming button, text are poor variable names. solve_button would be a little clearer if there was more than one button. Similarly, cell_text_field might be better than text, but perhaps a little verbose; how about cell_tf?
Or tfCell if you like Hungarian notation. subpanel1 could be called grid_panel, and subpanel2 could be called button_panel.
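The single-digit check suggested under Input Validation can be sketched as a small helper. This is my own illustration, not code from the review; the class and method names are made up.

```java
// Sketch of the Input Validation suggestion: enable the "OK" button only
// while every cell holds exactly one digit. Names below are illustrative.
public class CellValidation {

    // True only for single-character digit strings like "0".."9";
    // rejects "", "10", "-1", and non-numeric input.
    static boolean isValidCell(String s) {
        return s != null && s.length() == 1 && Character.isDigit(s.charAt(0));
    }

    static boolean allCellsValid(String[][] cells) {
        for (String[] row : cells) {
            for (String cell : row) {
                if (!isValidCell(cell)) {
                    return false;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidCell("7"));   // single digit: accepted
        System.out.println(isValidCell("10"));  // two characters: rejected
        System.out.println(isValidCell("-1"));  // sign is not a digit: rejected
    }
}
```

In the real GUI this check could run from a DocumentListener attached to each JTextField, calling button.setEnabled(allCellsValid(...)) after every edit.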
{ "domain": "codereview.stackexchange", "id": 36983, "tags": "java, swing, gui, sudoku" }
Is any language that can express its own compiler Turing-complete?
Question: A comment over on tex.SE made me wonder. The statement is essentially: If I can write a compiler for language X in language X, then X is Turing-complete. In computability and formal languages terms, this is: If $M$ decides $L \subseteq L_{\mathrm{TM}}$ and $\langle M \rangle \in L$, then $F_L = \mathrm{RE}$. Here $L_{\mathrm{TM}}$ denotes the language of all Turing machine encodings and $F_L$ denotes the set of functions computed by machines in $L$. Is this true? Answer: The informal statement is not true, as shown by the following programming language. Any string of, say, ASCII characters is a valid program and the meaning of every program is, "Output a program that just outputs a copy of its input." Thus, every program in this language is a compiler for the language but the language is not Turing-complete. I'm not sure if your "computability theory version" is equivalent but it is also not true. By Kleene's second recursion theorem, for any coding of Turing machines, there is a TM that accepts its own coding and rejects all others.1 This machine is a counterexample to the proposition. More concretely, we can achieve the result by choosing a coding. For example, let every odd number code the machine $M$ defined by "If my input is odd, accept it; otherwise, reject" and let the number $2x$ code the machine coded by $x$ in your own favourite coding scheme for Turing machines. $\langle M\rangle$ is in the language $L$ accepted by $M$ but $F_L$ is not Turing complete. 1 Kleene's second recursion theorem says that, for any enumeration $(\phi_i)_{i\geq 0}$ of the partial recursive functions (i.e., for any coding of programs as integers), and any partial recursive function $Q(x,y)$, there is an integer $p$ such that $\phi_p$ is the function that maps $y$ to $Q(p,y)$. So, in particular, let $Q$ be the function that accepts if $x=y$ and rejects otherwise. By the theorem, there is an integer $p$ that codes the program $\phi_p(y) = Q(p,y)$. 
That is, $\phi_p$ accepts its own coding $p$ and rejects all other inputs.
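The first counterexample can be made concrete. Below is a sketch of an interpreter for that toy language; all names are mine, purely for illustration. Every string is a valid program, and every program denotes the same function: emit a fixed program that copies its input to its output.

```python
# Interpreter sketch for the answer's toy language: any string is a valid
# program, and the meaning of every program is "output a program that just
# outputs a copy of its input". (All names here are illustrative.)

CAT_PROGRAM = "import sys; sys.stdout.write(sys.stdin.read())"

def run(program: str) -> str:
    """Run a program of the toy language: ignore its text, emit the cat program."""
    return CAT_PROGRAM

# Every program, including the cat program itself, maps source text to a
# runnable program, so each one is technically a "compiler written in the
# language". Yet the language computes only one function, so it is nowhere
# near Turing-complete.
out_a = run("any string at all")
out_b = run(CAT_PROGRAM)
```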
{ "domain": "cs.stackexchange", "id": 2246, "tags": "computability, turing-completeness" }
Questions about terminology used for Mars and its moons
Question: I am having difficulty with some terminology: sub-Martian hemisphere of Deimos ... what place on Deimos does it refer to? leading/trailing apex of Phobos ... what place are those? Apex ... in general it means a top of something, or an end of something pointed; but what does it mean on a relatively spherical body? Anti-Mars meridian ... is this a meridian on the side of the moon facing away from Mars? Thank you. Answer: Deimos is, like our own moon, tidally locked to Mars, so one hemisphere of Deimos faces the planet, and the other faces away. The sub-Martian hemisphere faces Mars. Since Deimos is locked, there is a point on its surface that always faces in the direction of orbital motion; that point is the leading apex. It is an apex in the sense of a "leading point". A meridian is a line of longitude, running from the North to the South pole. One of Deimos's meridians goes through the centre of its hemisphere that faces away from Mars; this is the anti-Mars meridian. It is, as you say, the "meridian on the side of Deimos facing away from Mars". The image here shows a map of Deimos: The leading face is the left side of the image, and the trailing face is on the right. The central 50% is the sub-Martian hemisphere and the anti-Mars meridian is the line on the left and right edge, where the 3D image of Deimos has been cut to make a 2D map.
{ "domain": "astronomy.stackexchange", "id": 1137, "tags": "planet, fundamental-astronomy, terminology, observational-astronomy" }
Probability of Attaining the/a Global Optimum
Question: Suppose for a given optimization problem, $A$, the size of the search space $S$ is $|S|$. If the fitness landscape defined by $A$ is unimodal, then there is clearly only one global optimum. Given that $G$ is the event "reach the global optimum", can we then infer that $P(G) = \frac{1}{|S|}$, where $P(G)$ denotes the probability of reaching the/a global optimum? For $N$ ($N < |S|$) such global optima (in the case of a multimodal landscape), the probability increases to $P(G) = \frac{N}{|S|}$. Is this a valid argument or can we not conclude anything regarding the probability of attaining the/a global optimum on the basis of the number of optima and the size of the search space? I realize that often one cannot know in advance the number of global optima, but say, if the problem was to maximize a simple continuous function over some domain, then the function of interest could easily be plotted and the number of (local) optima examined. Answer: No, you can't infer that. You seem to be assuming that the optimization process tries one element of the search space, chosen uniformly at random, and hopes that it got lucky. But that's not how real optimization algorithms work. They try multiple solutions, and they're not chosen uniformly at random; they're typically chosen in some way that we hope is more likely to hit a good solution. In general, you cannot conclude anything about the probability of hitting the optimum after some amount of optimization, without further information. The probability might be much larger than $N/|S|$ (e.g., if the optimization problem is easy or the optimization algorithm is effective on this kind of problem). In the kinds of optimization problems we typically use a computer to solve, you often can't just "plot it out", since we're typically working with a multidimensional search space.
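The answer's point can be illustrated with a toy experiment (entirely my own sketch, not from the answer): on a unimodal landscape, even a crude hill climber reaches the optimum far more often than the $1/|S|$ that a single uniform-random guess achieves, so the hitting probability depends on the algorithm, not just on $N/|S|$.

```python
import random

random.seed(0)  # deterministic run for the illustration

S = 1000                       # search space {0, ..., 999}
OPT = 700                      # the single global optimum

def fitness(x):                # unimodal: one peak at OPT
    return -abs(x - OPT)

def random_guess():
    """One uniform draw: succeeds with probability exactly 1/|S|."""
    return random.randrange(S) == OPT

def hill_climb(steps=2000):
    """Greedy +/-1 walk that never accepts a worse point."""
    x = random.randrange(S)
    for _ in range(steps):
        nxt = x + random.choice([-1, 1])
        if 0 <= nxt < S and fitness(nxt) >= fitness(x):
            x = nxt
    return x == OPT

trials = 500
guess_rate = sum(random_guess() for _ in range(trials)) / trials
climb_rate = sum(hill_climb() for _ in range(trials)) / trials
# guess_rate stays near 1/1000, while climb_rate is near 1.
```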
{ "domain": "cs.stackexchange", "id": 17867, "tags": "algorithms, optimization" }
Maritime telescopes: Stabilisation requirements for optical vs. radio telescopes?
Question: SOFIA stands for Stratospheric Observatory for Infrared Astronomy. She says: My telescope stays stable with a spherical bearing, shock absorbers, and gyroscopes. I suppose a similar system would work for an optical telescope on a maritime vessel as well. But what about a radio-telescope on a ship with a dish of around 10m? What are arguments (ideally mathematical ones) against a similar construction like for SOFIA? References Tom Nardi A Miniature Radio Telescope In Every Backyard. hackaday.com (2019) Gregory Redfern: Cruise Ship Astronomy and Astrophotography - The go-to astronomy resource for cruise travelers Answer: You asked for arguments against this, but I don't think there are any other than money. Consider the AN/SPQ-11 passive radar that was aboard the US Navy ship Observation Island. The actual receiver area is not as large (7M according to https://fas.org/irp/program/collect/cobra_judy.htm), but we could imagine a larger antenna being placed on a larger ship such as the USS Enterprise. The beam (maximum width) of the Enterprise is 41 meters, which could certainly accommodate a 10-20m dish. I think the only reason this would not be built would be lack of funds. From an engineering perspective, it seems doable.
{ "domain": "astronomy.stackexchange", "id": 5381, "tags": "observational-astronomy, telescope, radio-telescope, sofia" }
Red-blue intersection requirements
Question: In the red-blue line segments intersection problem, what does it mean that the red-red (and blue-blue) lines cannot intersect? Does it mean that the algorithms won't work correctly, or does it mean that these kinds of intersections will simply be ignored? In other words, do the fast intersection algorithms require that there are no single-color intersections? Or can they work with such intersections but find only bi-chromatic intersections? The red-blue line segments intersection problem can be stated as: "Given a disjoint set of red line segments and a disjoint set of blue line segments in the plane, with a total of n segments, report all k intersections of red segments with blue segments." (From "A Simple Trapezoid Sweep Algorithm for Reporting Red/Blue Segment Intersections", Timothy M. Chan). The idea, if I understood correctly, is that the problem should be easier to solve than the problem of reporting all intersections between line segments in general. So I'm trying to understand how hard the requirements are... Answer: I guess it depends on the particular algorithm you have in mind. However, the red/blue segment intersection algorithms that I'm aware of require that there are no monochromatic (red/red or blue/blue) intersections. For example, Chan's red/blue intersection algorithm builds a trapezoidal decomposition on the blue segments. That does not really work if there are blue/blue intersections as well. By the way, note that both red/blue segment intersection, as well as general (monochromatic) segment intersection, can be solved in optimal $O(n \log n + k)$ time.
{ "domain": "cs.stackexchange", "id": 6844, "tags": "algorithms, computational-geometry" }
From kinetic theory - why does heat rise?
Question: I know the explanation from fluid elements and densities; however, I don't find it satisfying. Is there an explanation from kinetic theory? Answer: Think of it at the scale of individual particles and it clearly becomes a simple statistical issue. Liquids in a gravitational field have a density distribution. If you consider a "low energy" liquid from any given location, one sees more particles below that spot and fewer above it. Now take a bunch of "high energy" particles and fire them in random directions from that location. If they happen to be going up, they'll have fewer collisions over time and therefore travel further than the ones going down. So, in bulk, "heat rises".
{ "domain": "physics.stackexchange", "id": 58910, "tags": "thermodynamics, kinetic-theory, convection" }
Unmet dependencies on Ubuntu 19.10 eloquent
Question: I have spent a very frustrating afternoon trying to install ROS2 eloquent on a clean Ubuntu 19.10 VM without success. The VM was fully updated before the install. I chose the minimal installation as I only want to use ROS2 in the VM. I started by following the installation instructions: https://index.ros.org/doc/ros2/Installation/Eloquent/Linux-Install-Debians/. I installed the keys and the ros2-latest.listfile, did the sudo apt update and then tried: $ sudo apt install ros-eloquent-desktop Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package ros-eloquent-desktop This format of command used to work on ROS but failed this time. Reading other questions, the following information was requested, so here it is. $ uname -a Linux andy-VirtualBox 5.3.0-46-generic #38-Ubuntu SMP Fri Mar 27 17:37:05 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux $ cat /etc/apt/sources.list.d/ros2-latest.list deb [arch=amd64] http://packages.ros.org/ros2/ubuntu eoan main $ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=19.10 DISTRIB_CODENAME=eoan DISTRIB_DESCRIPTION="Ubuntu 19.10" After a bit of searching around, I read that each ROS2 release is paired with an Ubuntu release, so I tried the following command. $ sudo apt install ros-desktop-full Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies. ros-desktop-full : Depends: ros-desktop but it is not going to be installed Depends: ros-perception but it is not going to be installed Depends: ros-simulators but it is not going to be installed E: Unable to correct problems, you have held broken packages. 
Finally, something that apt can install, but still no cigar! I then spent about an hour trying to manually fill in the missing dependencies. I ended up with this command: $ sudo apt install -y \ > ros-desktop \ > ros-robot \ > tf-tools \ > python-tf\ > tf2-tools \ > python-tf2 \ > python-rospy \ > python-tf2-ros \ > python-actionlib \ > libtf2-ros-dev \ > ros-viz \ > ros-base \ > ros-core \ > python3-rosbag \ > python-roslib \ > catkin \ > python3-catkin \ > python3-catkin-pkg \ > python3-catkin-pkg-modules \ > python-rospkg \ > python-rospkg-modules \ > python-catkin-pkg-modules \ > python3-rosclean \ > python-rosgraph \ > python-rosmaster \ > python-rosparam \ > python-roslaunch \ > python-rosmsg \ > python-rosnode \ > python-rosservice \ > python-rostopic \ > python-message-filters \ > python-roswtf Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies. catkin : Depends: python-catkin-pkg but it is not going to be installed python-catkin-pkg-modules : Conflicts: catkin but 0.7.18-1 is to be installed python3-catkin-pkg : Conflicts: catkin but 0.7.18-1 is to be installed python3-catkin-pkg-modules : Conflicts: catkin but 0.7.18-1 is to be installed python3-rosbag : Depends: python3-roslib but it is not going to be installed Depends: python3-rospy but it is not going to be installed python3-rosclean : Depends: python3-rospkg but it is not going to be installed E: Unable to correct problems, you have held broken packages. Clearly, I'm missing something, but I have no idea what. Any help would be welcome! 
Originally posted by Andy Blight on ROS Answers with karma: 33 on 2020-04-24 Post score: 1 Answer: Ubuntu 19.10 is not a supported platform for Eloquent Supported platforms are listed in REP 2000 That's why there are no packages available. The first line of the tutorial you linked to is, "Debian packages for ROS 2 Eloquent Elusor are available for Ubuntu Bionic." When you tried to switch to a non-distro based version you were starting to try to install UpstreamPackages directly from the Ubuntu source repositories. Please see the discussion on that wiki page about mixing the sources which is giving you your conflicts. Originally posted by tfoote with karma: 58457 on 2020-04-24 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2020-04-25: I've answered my share of these kinds of questions. Clearly something is not clear enough when it comes to supported platforms/OS and the versions of ROS that can be installed on them. Comment by Andy Blight on 2020-04-25: Thanks for your answers. I don't know how I missed something that is now so obvious. I guess I thought it would probably work anyway but clearly it doesn't. Perhaps the line "Debian packages for ROS 2 Eloquent Elusor are available for Ubuntu Bionic." could be made bold to make it more obvious?
{ "domain": "robotics.stackexchange", "id": 34828, "tags": "ros, ros2, apt" }
Can't compute real cepstrum of real signal
Question: I'm trying to compute the real cepstral coefficients of recorded telephone audio in Matlab using the rceps function. On some audio frames (480 samples per frame -- 60ms of audio at 8kHz), I get a Matlab error: "rceps:ZeroInFFT". The cepstrum does not exist because some of the DFT coefficients are 0. The frames in question are not zero, nor are any of the sample values complex. Going by the RMS of the frame, many of the frames that have errors have significant energy. About 10% of my frames give this error, so it seems like a bigger problem than a fluke. I'm confused because this type of analysis is very common, especially for speech analysis, but I can't find a record of anyone else having this problem. Any advice would be greatly appreciated. audio % is a 236000x1 vector of doubles containing PCM audio data at 8kHz window_size = 3 %Working with multiples of 20-ms frames errs = zeros(N_frames,1); for i=0:N_frames-window_size %% Cepstral analysis s = audio((i*160+1):((i+window_size)*160)); %Grab 60ms of audio -- 480x1 matrix try c = rceps(s); catch err errs(i+1) = 1; end end Answer: It's common, when computing a cepstrum, to replace any zeros or tiny magnitudes in the 1st FFT result with some (noise) floor value to keep the scale and range of the log function "reasonable looking". Huge negative spikes (or -inf) from the log() of tiny spectrum magnitudes don't usually provide that much added useful information to the rest of the results in cepstral-type analysis.
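The floor idea translates directly into code. Here is a NumPy sketch of a floored real cepstrum; this is my own illustration of the suggestion, not what Matlab's rceps does, and the `floor` value is an arbitrary choice.

```python
import numpy as np

def safe_rceps(x, floor=1e-12):
    """Real cepstrum with a magnitude floor so log() never sees zero.

    Mirrors the answer's suggestion; `floor` is an arbitrary small value.
    """
    mag = np.abs(np.fft.fft(x))
    mag = np.maximum(mag, floor)      # clamp exact-zero / tiny bins
    return np.real(np.fft.ifft(np.log(mag)))

# A frame whose DFT has (near-)exact zeros: a constant frame has energy
# only in bin 0, so a naive log|FFT| would blow up at every other bin.
frame = np.ones(480)
c = safe_rceps(frame)                 # finite everywhere
```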
{ "domain": "dsp.stackexchange", "id": 1468, "tags": "matlab, speech-processing, cepstral-analysis" }
How does ecology differ from biology?
Question: What precisely is ecology? How does it differ from biology? Because I never studied biology after high school, please explain as if I were 10 years old. I only know that ecology is a subset of biology. I tried some dictionaries but they didn't adequately discriminate. I tried to find an explanation from a scientist: the following appears to claim that only ecology concerns some organism's external interactions with other entities? But how? Biology must also? For example, suppose that someone studies prions' interactions with humans, and not just prions. Then this is biology, not ecology? Source: by Matthew Fraser, PhD Candidate (Marine Ecology) at University of Western Australia So what makes us fully fledged marine ecologist different from our biologist counterparts? Well, I think that marine ecology is even cooler than marine biology because as marine ecologists we link what we know about the biology of a given species with other plants/animals and the environment as well. [...] If we were splitting hairs, ecology is technically a form of biology, but I felt the need to write this post given how passionately I see some researchers stating that they are in one camp or another. [...] But as an ecologist (albeit a biased one!) what gets me excited isn’t just finding out how the amazing plants and animals we find in the ocean work, but how they interact with each other and their environment, explaining why we see certain species in some places and not others! Answer: Ecology has two meanings: the popular and the scientific meaning. Ecology: the popular definition: Here the term ecology is probably quite poorly defined. To my gut feeling, the concept relates to the concept of global change. It encompasses many fields such as biology (ecology (in the scientific sense), evolutionary biology and conservation biology especially), ethics and morals, politics, meteorology, public policy, ...
Ecology: the scientific definition: Ecology is a subfield of biology and earth sciences that studies interactions among organisms and their environment. Interaction is an important word here. Biology has a much broader meaning. Biology is the science that studies life. Biology studies the structure, the ecology (impact on their surroundings), the evolution, development, biochemical processes, etc., of living things. Biology is a very big field of science. A researcher in biology will probably not really consider him/herself as a biologist but rather as a molecular geneticist, a neurologist, an epidemiologist, a plant physiologist, a biochemist, a bioinformatician, a systems biologist, etc. For example, I would rather consider myself a population geneticist than a biologist, as there are many fields of biology I know nothing about. For example, I am a pretty bad naturalist. In short, ecology is to biology what mechanics is to physics. Some people may not like this comparison, as mechanics might take a larger part of physics than ecology does of biology. Earth scientists may not like this comparison either, as they are part of ecology without necessarily feeling like being part of biology. But anyway. From the text you cite [..] as an ecologist [..] what gets me excited isn’t just finding out how the amazing plants and animals we find in the ocean work, but how they interact with each other and their environment [..] It shows that indeed ecologists are interested in the interactions between organisms and between organisms and their abiotic environment. Matthew Fraser says "he's not only interested in how they work". 'How they work' is obviously an extremely inaccurate sentence.
A less misleading reformulation would be: I am not interested in everything about the biology of marine animals; "I am especially interested in how animals interact with each other and how they interact with the environment". But obviously that is less exciting for the reader. Matthew's goal, I guess, was to spark the reader's (or audience's) interest and excitement about ecology, and for this purpose he kinda implied that ecology is more than biology, while ecology is only a subfield of biology. For more information, wikipedia is your friend!
{ "domain": "biology.stackexchange", "id": 4023, "tags": "ecology, terminology" }
How do I find the count of a particular column, based on another column(date) using pandas?
Question: I have a dataframe with 3 columns, such as SoldDate,Model and TotalSoldCount. How do I create a new column, 'CountSoldbyMonth' which will give the count of each of the many models sold monthly? Date Model TotalSoldCount Jan 19 A 4 Jan 19 A 4 Jan 19 A 4 Jan 19 B 6 Jan 19 C 2 Jan 19 C 2 Feb 19 A 4 Feb 19 B 6 Feb 19 B 6 Feb 19 B 6 Mar 19 B 6 Mar 19 B 6 The new df should look like this. Date Model TotalSoldCount CountSoldbyMonth Jan 19 A 4 3 Jan 19 A 4 3 Jan 19 A 4 3 Jan 19 B 6 1 Jan 19 C 2 2 Jan 19 C 2 2 Feb 19 A 4 1 Feb 19 B 6 3 Feb 19 B 6 3 Feb 19 B 6 3 Mar 19 B 6 2 Mar 19 B 6 2 I tried doing df['CountSoldbyMonth'] = df.groupby(['date','model']).totalsoldcount.transform('sum') but it is generating a different value. Answer: data['CountSoldbyMonth']= data.groupby(['Date','Model']).TotalSoldCount.transform('count') is working perfectly.
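As a runnable illustration of the accepted one-liner, here is my own sketch reconstructing the question's frame:

```python
import pandas as pd

# Rebuild the example frame from the question.
df = pd.DataFrame({
    "Date":  ["Jan 19"] * 6 + ["Feb 19"] * 4 + ["Mar 19"] * 2,
    "Model": ["A", "A", "A", "B", "C", "C", "A", "B", "B", "B", "B", "B"],
    "TotalSoldCount": [4, 4, 4, 6, 2, 2, 4, 6, 6, 6, 6, 6],
})

# transform('count') returns one value per input row, aligned on the
# original index, so it can be assigned directly as a new column.
# ('sum' adds up TotalSoldCount instead of counting rows, which is likely
# why the question's attempt produced different numbers.)
df["CountSoldbyMonth"] = (
    df.groupby(["Date", "Model"])["TotalSoldCount"].transform("count")
)
```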
{ "domain": "datascience.stackexchange", "id": 5485, "tags": "python, pandas, data-cleaning" }
Bash script that checks for package dependencies and installs them if necessary
Question: I'm new to writing bash scripts and was wondering if I could get someone's advice on a part of the script I'm working on. Intended purpose of code Check if a package exists using dpkg, and if it doesn't, offer to install it for the user. This snippet is part of a larger script that installs a particular Conky configuration along with all of its dependencies with minimal effort from the user. Concerns I feel as though there is a more elegant way to check if a package is installed using dpkg (code was found on Stack Overflow). Is there a better way of handling the (y/n) response? Here is the code that I am using: declare -a packages=("conky-all" "lm-sensors"); for i in "${packages[@]}"; do if [ $(dpkg-query -W -f='${Status}' $i 2>/dev/null | grep -c "ok installed") -eq 0 ]; then echo "$i is not installed, would you like to install it now? (Y/N)"; read response if [ "$response" == "y" ] || [ "$response" == "Y" ]; then sudo apt-get install "$i"; else echo "Skipping the installation of $i..."; echo "Please note that this Conky configuration will not work without the $i package."; fi else echo "The $i package has already been installed."; fi done Answer: Instead of this: if [ $(dpkg-query -W -f='${Status}' $i 2>/dev/null | grep -c "ok installed") -eq 0 ]; then A better way to write the same thing: if ! dpkg-query -W -f='${Status}' $i 2>/dev/null | grep -q "ok installed"; then Instead of this: if [ "$response" == "y" ] || [ "$response" == "Y" ]; then A simpler way to write it is: if [[ $response == [yY]* ]]; then This is not exactly the same. It will match anything that starts with "y" or "Y". If you want to match strictly only those letters, just drop the * from the pattern: if [[ $response == [yY] ]]; then Finally, all the ; at line endings are unnecessary. The purpose of ; is to separate multiple statements on the same line. Line breaks naturally serve as statement separators.
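Pulling the suggestions together, a revised script might look like this. This is my own sketch, not from the review; the helper names and the prompt wording are made up.

```shell
#!/usr/bin/env bash
# Revised version folding in the review suggestions. Helper names are mine.

packages=("conky-all" "lm-sensors")

# Exit status 0 iff the package is installed (the reviewer's grep -q form).
is_installed() {
    dpkg-query -W -f='${Status}' "$1" 2>/dev/null | grep -q "ok installed"
}

# Exit status 0 iff the reply starts with y or Y.
affirmative() {
    [[ $1 == [yY]* ]]
}

install_missing() {
    for pkg in "${packages[@]}"; do
        if is_installed "$pkg"; then
            echo "The $pkg package has already been installed."
            continue
        fi
        read -r -p "$pkg is not installed, would you like to install it now? (Y/N) " response
        if affirmative "$response"; then
            sudo apt-get install "$pkg"
        else
            echo "Skipping the installation of $pkg..."
        fi
    done
}
```

Keeping the pieces as functions (with install_missing called at the end of the real script) also makes the y/n check easy to exercise in isolation.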
{ "domain": "codereview.stackexchange", "id": 13342, "tags": "bash, linux, sh" }
What is a light cone?
Question: What is a light cone? Why can't we escape the light cone? And if the speed of light is the limit that keeps us from escaping the cone, why is it the light cone that governs future and past events? Answer: To escape one's light cone requires you to exceed the speed of light. Since this is impossible, the light cone divides your future into two pieces: that part of space in which events are close enough to possibly affect you and that part of space where events are too far away to have any influence on your future. Similarly, any event which occurred outside your past light cone could not have had any effect on your present, but may have an effect on your future.
{ "domain": "physics.stackexchange", "id": 52347, "tags": "special-relativity, speed-of-light, metric-tensor, faster-than-light, causality" }
Quantum mechanically speaking: why is it that electrons get bound to a nucleus? ..and why doesn't the electron's wavefunction get infinitely small?
Question: I have a pretty good intuitive understanding of quantum mechanics. But one thing that I don't really intuitively understand is why electrons end up in bound states. An electron might have some positional uncertainty, but no matter what that position is, it still experiences a force towards the nucleus. I would have expected, given my understanding of QM, that each individual possible position of the electron gets individually pulled toward the nucleus, and you end up with a coherent superposition of all of the possibilities. But, as the electron gets closer it also gets pulled with more strength, so I would have thought that essentially all the possibilities get pushed infinitely close to the nucleus. Yes, then that implies that there is maximum uncertainty in the momentum distribution, but how exactly does this uncertainty counteract the strong pull of the nucleus? I guess a bound state is just an equilibrium of the electron having its position wavefunction pulled to the center but having a component that is trying to escape due to propagation of the uncertainty in momentum? Answer: I think your confusion really comes from a misunderstanding at the classical level – that attractive forces work like vacuum cleaners. A vacuum cleaner creates a wind with a certain velocity that tends to make objects move with that velocity. It's an Aristotelian force, more or less. Electromagnetism and gravity cause acceleration, not velocity. An accelerated object picks up speed as it gets closer to the source, overshoots it, recedes while slowing down, and reverses direction, and the process starts over. In other words, it orbits. If it weren't for the second law of thermodynamics, it would keep orbiting forever. A hydrogen atom in the ground state is a quantum version of that process. The electron is in a superposition of approaching and receding from the nucleus (and orbiting circularly, etc.), and no direction dominates over any other.
There are no dissipative processes to break the time symmetry because the system is already in the lowest energy state that's compatible with the uncertainty principle.
{ "domain": "physics.stackexchange", "id": 87031, "tags": "quantum-mechanics, atomic-physics, coulombs-law, orbitals" }
What kind of spider is this?
Question: This large white spider has made her home on our garage. We live in the upper peninsula of Michigan. I believe it's an orb spider but couldn't find any with the large black, almost hourglass-shaped mark on the back. Answer: This is probably Araneus marmoreus var. pyramidatus. Here is a better image for comparison: Source: http://www.uksafari.com/marbled_orb_weavers.htm PS: Despite A. marmoreus being found in the US, this variety (pyramidatus) is normally a European one. I don't know of any observations of it in the US. However, I'll leave this answer here until another ID is provided.
{ "domain": "biology.stackexchange", "id": 7811, "tags": "species-identification" }
Prove if $\{x^iy^jz^k \mid i \le 2j\text{ or }j \le 3k\}$ is regular or not
Question: $$L = \{x^iy^jz^k \mid i \le 2j\text{ or }j \le 3k\}$$ To prove: whether the given language is regular or not. I know that it is not a regular language, but I am not able to come up with a string I can use in the pumping lemma to prove that it is not regular. We can also divide $L$ into two parts: $$\begin{align*} L_1 &= \{x^iy^jz^k \mid i \le 2j\}\\ L_2 &= \{x^iy^jz^k \mid j \le 3k\}\,, \end{align*}$$ so I just need the strings to be used in the pumping lemma for $L_1$ and $L_2$. Answer: It's not regular. Hint: Let $p$ be the integer of the pumping lemma and pump the string $x^{2p}y^{p}$ (i.e., take $k = 0$, so $j \le 3k$ already fails; pumping the $x$'s up then breaks $i \le 2j$ as well).
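Spelling the pumping argument out in full (using the string $w = x^{2p}y^{p}$, i.e. $k = 0$, which is one choice that works), the steps the answer leaves to the reader look like this:

```latex
Take $w = x^{2p}y^{p} \in L$ (it satisfies $i \le 2j$, since $2p \le 2p$).
Any decomposition $w = \alpha\beta\gamma$ with $|\alpha\beta| \le p$ and
$|\beta| \ge 1$ forces $\beta = x^{t}$ for some $1 \le t \le p$.
Pumping up once gives
\[
    \alpha\beta^{2}\gamma \;=\; x^{2p+t}\,y^{p},
\]
which violates both defining conditions: $i = 2p + t > 2p = 2j$, and
$j = p > 0 = 3k$. Hence $\alpha\beta^{2}\gamma \notin L$, contradicting the
pumping lemma, so $L$ is not regular.
```

Note that a string satisfying $j \le 3k$ with slack (for example $x^{6p}y^{3p}z^{2p}$, where $3p \le 6p$) would stay in $L$ no matter how its $x$-prefix is pumped, so the choice $k = 0$ is what makes the argument go through.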
{ "domain": "cs.stackexchange", "id": 4480, "tags": "formal-languages, regular-languages" }
Dynamixel Control on Raspbian (ROS Groovy)
Question: Hi everybody, I have a project in which I want to control an RX-28 Dynamixel motor with a Raspberry Pi running Raspbian with ROS Groovy. The problem is that the Dynamixel packages are not yet available in the Raspbian Groovy repository mentioned here /repos/rospbian/debbuild/groovy. So I thought it would be possible to build the packages myself and then use them. I already downloaded the packages to my Raspberry Pi and put them in /opt/ros/groovy/share, but then I followed the dynamixel_controllers/Tutorials/ConnectingToDynamixelBus tutorial. I changed the baudrate to 57142 (which I know is correct for my motor; I've checked it in Windows with RoboPlus). After the roslaunch I get this File "/opt/ros/groovy/share/dynamixel_motor-master/dynamixel_controllers/nodes/controller_manager.py", line 52, in from dynamixel_driver.dynamixel_serial_proxy import SerialProxy ImportError: No module named dynamixel_driver.dynamixel_serial_proxy I know it is maybe a stupid question, but I have tried everything and I still cannot communicate with the motor. Thanks! Originally posted by Mike Rasp on ROS Answers with karma: 1 on 2013-11-24 Post score: 0 Answer: You shouldn't be manually installing things into /opt. There's a tutorial about how to set up a ROS workspace: http://wiki.ros.org/ROS/Tutorials/InstallingandConfiguringROSEnvironment The port is not yet the problem; you don't yet have your environment set up such that it can find all the required source files. Originally posted by tfoote with karma: 58457 on 2013-11-27 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 16254, "tags": "raspberrypi" }
How does axial offset affect rotor balance?
Question: I have a thin rod rotating about its primary axis, rigidly attached to mass $m_1$ whose center sits at distance $r_1$ from the axis and position $y_1$ along the axis. $m_1$ exerts a centripetal force $F_1 = m_1 r_1 \omega^2$ where $\omega$ is angular velocity. This force leaves the rotor unbalanced and will cause the housing to shake, so I must add masses to balance it. I am constrained here because $m_1$ projects light up and out, so I can't put balancing masses near or above it; I can only add balancing masses to the axis below $m_1$. So I add mass $m_2$ centered at $(r_2, y_2)$. If $F_1+F_2=0$ then the rotor is statically balanced, meaning that it will stand on its end without toppling over. However, because $y_2<y_1$, the rotor has a couple unbalance and will still cause the housing to shake. So I add mass $m_3$ centered at $(r_3, y_3)$. I can attain static balance by ensuring $F_1+F_2+F_3=0$. I know anecdotally that this 3-mass configuration can remove the couple unbalance, but I'm struggling to express it mathematically. What is the relation between $m_{1..3}$, $r_{1..3}$, and $y_{1..3}$ that guarantees zero couple unbalance? Answer: Two conditions must hold. First, for static balance the net centripetal force on the rod must vanish; each force is $m r \omega^2$ and $\omega^2$ is common to every term, so that simply yields: $m_1r_1 + m_3r_3=m_2r_2$. Second, to remove the couple unbalance, the moments of these forces about the lowest point of the rotating thin rod must also sum to zero: $m_1 r_1 \omega^2y_1+m_3 r_3 \omega^2y_3=m_2 r_2 \omega^2y_2$. $\omega^2$ drops out again, so we get: $m_1 r_1y_1+m_3 r_3y_3=m_2 r_2 y_2$. Now we have too many degrees of freedom: 3 variables and only 2 equations (I assume $m_1$, $r_1$ and $y_1$ are known!) To solve this, I would set $m_1=m_3$, $m_2=m_1+m_3$ and $r_1=r_3$. It's now possible to calculate the remaining unknowns from the knowns and these equations.
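To make the answer concrete, here is a small numeric sketch (my own addition; all input values are made-up examples) that applies the suggested choice $m_3 = m_1$, $m_2 = m_1 + m_3$, $r_3 = r_1$ and then solves the two balance equations for $r_2$ and $y_2$:

```python
# Illustrative sketch: solve the two balance equations after fixing
# m3 = m1, m2 = m1 + m3, r3 = r1 as the answer suggests.
# All numeric values are made-up example inputs.
m1, r1, y1 = 2.0, 0.10, 0.50   # known mass, radius, axial position
y3 = 0.10                       # chosen axial position of the third mass

m3, r3 = m1, r1
m2 = m1 + m3

# Force balance  m1*r1 + m3*r3 = m2*r2   -> solve for r2
r2 = (m1 * r1 + m3 * r3) / m2
# Moment balance m1*r1*y1 + m3*r3*y3 = m2*r2*y2  -> solve for y2
y2 = (m1 * r1 * y1 + m3 * r3 * y3) / (m2 * r2)

# With these choices r2 = r1 and y2 lands midway between y1 and y3.
assert abs(r2 - r1) < 1e-12
assert abs(y2 - 0.5 * (y1 + y3)) < 1e-12
```

With this particular choice the balancing mass $m_2$ sits at the same radius as $m_1$ and halfway between the two outer masses along the axis, which matches the intuition of a symmetric three-mass rotor.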
{ "domain": "physics.stackexchange", "id": 24632, "tags": "rotational-dynamics, rigid-body-dynamics" }
Under which operations is the class of non-recursive languages closed?
Question: I am currently studying Turing computability and related problems such as the halting problem, with a background in formal languages. I know that the class of recursive (decidable) languages is closed under union, intersection and complement, and that recursively enumerable (semi-decidable) languages are closed under union and intersection, but what about non-recursive (undecidable) languages? To be more specific, I am trying to prove that a specific language of the form $$L = \{ w \in \Sigma^* : w \in A \lor w \in B \}$$ is non-recursive. I managed to prove that neither $A$ nor $B$ is recursive by reduction from the universal halting problem, but I am not sure what that implies for $L$. The exact language is $$L = \left\{ w\#u \in \left\{ 0, 1, \# \right\}^* \mid \\ M_w \text{ will halt with input } u \text{ or } M_u \text{ will halt with input } w \right\}$$ where $M_b$ is the Turing machine whose transition function $\delta$ is represented using Gödel numbering as a binary string $b$. Is the class of non-recursive languages closed under union? If it is, how could I prove it myself, or where could I find an existing proof? Answer: Non-recursive languages are closed under complement, but not under union or intersection. Indeed, a decider for $A$ exists iff there is a decider for $A^c$. Hence the closure under complement. However, let $B$ be any non-recursive language. $B^c$ is also non-recursive. But $B \cap B^c = \emptyset$ and $B \cup B^c = \Sigma^*$ are both recursive. Hence, non-recursive languages are not closed under union or intersection. In the example you mention, you cannot conclude anything about $L$ from $A,B$ being non-recursive. I will provide a hint for $L$, only. When you have a language of the form $L=\{f(x,y) \mid x,y\mbox{ s.t. } p(x,y)\}$, it is sometimes convenient to choose a particular value $a$ and study the related language $L_a = \{ y \mid y\mbox{ s.t. } p(a,y) \}$ first. 
Maybe one can choose $a$ to make $p(a,y)$ simple enough so that other proof techniques apply.
{ "domain": "cs.stackexchange", "id": 8437, "tags": "formal-languages, turing-machines, closure-properties, halting-problem" }
Is this calculation of inverse z-transform proper
Question: I wonder whether my calculation of the inverse z-transform is correct. My IIR system is described as follows in the z-domain: $H(z) = \frac{z^{-2}}{1-0.5z^{-2}}$ After using partial fraction decomposition I obtained $H(z)$ as follows: $H(z)=\frac{\frac{1}{2}}{1-\sqrt{\frac{1}{2}}z^{-1}} + \frac{\frac{1}{2}}{1+\sqrt{\frac{1}{2}}z^{-1}}$. After applying the inverse z-transform I obtained this form: $h[n]=\frac{1}{2}(\sqrt{\frac{1}{2}}^{n}-\sqrt{\frac{1}{2}}^{n})u[n]$. What confused me is that this is 0, as the insides of the parentheses cancel out. Maybe this is some dumb mistake I am making (it is quite late). Answer: HINT: Write $H(z)$ as $$H(z)=G(z^2)\tag{1}$$ with $$G(z)=\frac{z^{-1}}{1-0.5z^{-1}}\tag{2}$$ Determine the inverse transform $g[n]$ of $G(z)$. Figure out what replacing $z$ by $z^2$ means in the time domain. From this, the inverse transform of $H(z)$ is easily obtained from $g[n]$. EDIT: Your partial fraction expansion lacks the factor $z^{-2}$, and the major mistake is that you transform both terms to the same sequence, even though the poles have opposite signs. If you do it right, the two terms of the sequence only cancel each other for odd $n$, and you can combine them for even $n$. The method shown as a hint in this answer is probably more straightforward because it avoids partial fraction expansion.
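If it helps to sanity-check the hint numerically, here is a short Python sketch (my own, not from the answer) that runs the difference equation implied by $H(z)$, namely $y[n] = 0.5\,y[n-2] + x[n-2]$, on an impulse and compares the result with the closed form the hint leads to: $h[n] = (1/2)^{n/2 - 1}$ for even $n \ge 2$ and $0$ otherwise.

```python
# Numerical check of the inverse z-transform of H(z) = z^-2 / (1 - 0.5 z^-2).
# H(z) corresponds to the difference equation y[n] = 0.5*y[n-2] + x[n-2];
# feeding it an impulse should reproduce the closed form
# h[n] = (1/2)**(n/2 - 1) for even n >= 2, and 0 otherwise.

def impulse_response(n_samples):
    x = [1.0] + [0.0] * (n_samples - 1)   # unit impulse
    y = [0.0] * n_samples
    for n in range(n_samples):
        if n >= 2:
            y[n] = 0.5 * y[n - 2] + x[n - 2]
    return y

def closed_form(n):
    if n >= 2 and n % 2 == 0:
        return 0.5 ** (n // 2 - 1)
    return 0.0

h = impulse_response(12)
assert all(abs(h[n] - closed_form(n)) < 1e-12 for n in range(12))
```

The odd-indexed samples come out exactly zero, which is where the two partial-fraction terms cancel; the even-indexed samples are where they add up.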
{ "domain": "dsp.stackexchange", "id": 7125, "tags": "homework, z-transform, exam" }
KNN algorithm implemented in Python
Question: This is the first time I tried to write some code in Python. I think it gives proper answers but probably some "vectorization" is needed import numpy as np import math import operator data = np.genfromtxt("KNNdata.csv", delimiter = ',', skip_header = 1) data = data[:,2:] np.random.shuffle(data) X = data[:, range(5)] Y = data[:, 5] def distance(instance1, instance2): dist = 0.0 for i in range(len(instance1)): dist += pow((instance1[i] - instance2[i]), 2) return math.sqrt(dist) # Calculating distances between all data, return sorted k-elements list (whole element and output) def getNeighbors(trainingSetX, trainingSetY, testInstance, k): distances = [] for i in range(len(trainingSetX)): dist = distance(testInstance, trainingSetX[i]) distances.append((trainingSetX[i], dist, trainingSetY[i])) distances.sort(key=operator.itemgetter(1)) neighbour = [] for elem in range(k): neighbour.append((distances[elem][0], distances[elem][2])) return neighbour #return answer def getResponse(neighbors): classVotes = {} for x in range(len(neighbors)): response = int(neighbors[x][-1]) if response in classVotes: classVotes[response] += 1 else: classVotes[response] = 1 sortedVotes = sorted(classVotes.items(), key=operator.itemgetter(1), reverse = True) return sortedVotes[0][0] #return accuracy, your predicitons and actual values def getAccuracy(testSetY, predictions): correct = 0 for x in range(len(predictions)): if testSetY[x] == predictions[x]: correct += 1 return (correct / (len(predictions))) * 100.0 def start(): trainingSetX = X[:2000] trainingSetY = Y[:2000] testSetX = X[2000:] testSetY = Y[2000:] # generate predictions predictions = [] k = 4 for x in range(len(testSetX)): neighbors = getNeighbors(trainingSetX, trainingSetY, testSetX[x], k) result = getResponse(neighbors) predictions.append(result) accuracy = getAccuracy(testSetY, predictions) print('Accuracy: ' + str(accuracy)) start() Answer: First a style nitpick: Python has an official style-guide, PEP8, which 
recommends using lower_case_with_underscores for variable and function names instead of camelCase. Second, the comments you have above your functions should become docstrings. These appear, for example, when calling help(your_function) in an interactive session. Just have a string as the first line below the function header like so: def f(a, b): """Returns the sum of `a` and `b`""" return a + b It is recommended to always use triple double-quotes (i.e. """). Now I am going to focus on the distance calculation. First, you can greatly simplify your getNeighbors function using list comprehensions: def getNeighbors(trainingSetX, trainingSetY, testInstance, k): distances = sorted((distance(testInstance, x), x, y) for x, y in zip(trainingSetX, trainingSetY)) return [(d[1], d[2]) for d in distances[:k]] Here I used the fact that tuples already sort naturally, by first comparing the first index then (if they are equal) the second and so on. So I put the distance as the first index of the tuple and you don't need the key function any longer. sorted can take a generator expression and sort it directly. We can also iterate over multiple iterables at the same time using zip. Since your variables are all numpy arrays, you could also do it in a more vectorized way. For this I would first re-define the distance function to use numpy functions: def distance(x, y): return np.sqrt(((x - y)**2).sum()) And then put the distances into a numpy array as well, ordering its rows by the distance column with argsort. Returning only the second and third columns then becomes easier with array slicing. def getNeighbors(trainingSetX, trainingSetY, testInstance, k): distances = np.array([(distance(testInstance, x), x, y) for x, y in zip(trainingSetX, trainingSetY)], dtype=object) distances = distances[distances[:, 0].argsort()] return distances[:k, 1:] This can probably be modified even further by trying to make the distance call vectorized as well. 
Your getResponse function can be simplified using the collections.Counter class, which implements exactly what you do here: def getResponse(neighbors): classVotes = Counter(int(neighbor[-1]) for neighbor in neighbors) return max(classVotes.items(), key=operator.itemgetter(1))[0] Your function getAccuracy can be slightly simplified using a generator expression and sum: def getAccuracy(testSetY, predictions): correct = sum(y == p for y, p in zip(testSetY, predictions)) return correct * 100.0 / len(predictions) And lastly, in your start function, you can directly iterate over the elements of testSetX, make it a list comprehension (a plain generator expression would not work with the len call in getAccuracy) and use the fact that print can take multiple arguments: def start(): trainingSetX = X[:2000] trainingSetY = Y[:2000] testSetX = X[2000:] testSetY = Y[2000:] # generate predictions k = 4 predictions = [getResponse(getNeighbors(trainingSetX, trainingSetY, x, k)) for x in testSetX] accuracy = getAccuracy(testSetY, predictions) print('Accuracy:', accuracy)
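Following up on the last remark about vectorizing the distance call, here is one sketch of what a fully vectorized neighbor lookup could look like (my own addition, not part of the review; it assumes k is smaller than the number of training rows):

```python
import numpy as np

def get_neighbors(train_x, train_y, test_instance, k):
    """Return the k training rows and labels closest to test_instance."""
    # One distance computation for the whole training set via broadcasting.
    dists = np.sqrt(((train_x - test_instance) ** 2).sum(axis=1))
    idx = np.argpartition(dists, k)[:k]   # k smallest distances, unordered
    idx = idx[np.argsort(dists[idx])]     # order those k by distance
    return train_x[idx], train_y[idx]
```

np.argpartition avoids a full sort of all distances, which starts to matter once the training set grows.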
{ "domain": "codereview.stackexchange", "id": 24213, "tags": "python, beginner, algorithm, python-3.x, clustering" }
Setting frame of IMU message in Gazebo Fortress
Question: I am trying to set up a robot in Gazebo Fortress, under ROS 2 Humble on Ubuntu 22.04. This robot has 6 wheels, a lidar and an IMU. I have set up the xacro files and I am able to spawn the robot into Gazebo and get laser and IMU readings. However, the frame of the IMU messages is not correct. This is my IMU Gazebo code in its corresponding xacro file: <gazebo reference="${prefix}_${imu_name}"> <material>Gazebo/Black</material> <gravity>true</gravity> <sensor name="${prefix}_imu_sensor" type="imu"> <always_on>true</always_on> <update_rate>25</update_rate> <visualize>false</visualize> <topic>${prefix}/imu</topic> <plugin filename="libignition-gazebo-imu-system.so" name="ignition::gazebo::systems::Imu"> </plugin> <pose>0 0 0 0 0 0</pose> </sensor> </gazebo> I have tried to add frameID to the plugin parameters, as I did in Gazebo Classic, but it didn't work. My questions are: How do I change the frame published in the messages? What parameters are available for the IMU plugin? Is there any documentation, or where can I read the source code? Thank you! Answer: From the source code it appears you can use the tag gz_frame_id or the older ignition_frame_id (deprecated) as a child of the sensor tag. The relevant source code is here.
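As a concrete sketch, the tag would sit as a direct child of the <sensor> element in the xacro above (the frame name base_imu_link is only an illustrative placeholder, not something prescribed by the plugin):

```xml
<sensor name="${prefix}_imu_sensor" type="imu">
  <always_on>true</always_on>
  <update_rate>25</update_rate>
  <topic>${prefix}/imu</topic>
  <!-- Sets the frame_id stamped on the published IMU messages;
       ignition_frame_id is the older, deprecated spelling. -->
  <gz_frame_id>base_imu_link</gz_frame_id>
  <plugin filename="libignition-gazebo-imu-system.so"
          name="ignition::gazebo::systems::Imu">
  </plugin>
</sensor>
```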
{ "domain": "robotics.stackexchange", "id": 38627, "tags": "ros2, imu, gazebo-ignition, ignition-fortress" }