Are all nuclear fusion reactions exothermic and fission reactions endothermic?
Question: I have been told in class that while the $Q$ value of a nuclear reaction is positive the process is exothermic, and if this parameter is negative the reaction is endothermic. Are all of the fusion processes exothermic, and the fission ones endothermic? Answer: The term endothermic process describes a process or reaction in which the system absorbs energy from its surroundings; usually, but not always, in the form of heat. The term was coined by Marcellin Berthelot from the Greek roots endo-, derived from the word "endon" (ἔνδον) meaning "within" and the root "therm" (θερμ-) meaning "hot." The intended sense is that of a reaction that depends on absorbing heat if it is to proceed. The opposite of an endothermic process is an exothermic process, one that releases, "gives out" energy in the form of heat. Thus in each term (endothermic & exothermic) the prefix refers to where heat goes as the reaction occurs, though in reality it only refers to where the energy goes, without necessarily being in the form of heat. Italics mine. What makes me wonder is: are all of the fusion processes exothermic, and the fission ones endothermic? In both cases, fission and fusion, energy is released, so both are exothermic according to the definition of the words (remember that heat and kinetic energy are related in statistical thermodynamics).
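To make the sign convention concrete, here is a quick numerical check of the $Q$ value for the classic D-T fusion reaction. The atomic masses below (in unified atomic mass units) are approximate values assumed from standard mass tables:

```python
# Q-value of D + T -> He-4 + n.  The atomic masses (in unified atomic mass
# units) are approximate values assumed from standard mass tables.
U_TO_MEV = 931.494  # energy equivalent of 1 u, in MeV

masses = {
    "D": 2.014102,
    "T": 3.016049,
    "He4": 4.002602,
    "n": 1.008665,
}

def q_value(reactants, products):
    """Q = (mass in - mass out) * c^2; Q > 0 means the reaction is exothermic."""
    dm = sum(masses[r] for r in reactants) - sum(masses[p] for p in products)
    return dm * U_TO_MEV

q = q_value(["D", "T"], ["He4", "n"])
print(f"Q = {q:.1f} MeV")  # positive (~17.6 MeV): this fusion is exothermic
```

The same bookkeeping applied to fusing nuclei heavier than iron, where binding energy per nucleon decreases, gives a negative $Q$: not every fusion is exothermic.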
{ "domain": "physics.stackexchange", "id": 38334, "tags": "nuclear-physics" }
What are these tiny, swarming, jumping bugs?
Question: I built a small plywood box in my yard (on Long Island, NY, USA) for some stray cats about a year ago. I didn't bother waterproofing the box, and so over time the box has gotten damp. About two weeks ago I started seeing small reddish brown bugs on the box, and in the food/water dishes near the box. Can anyone help identify them? Here are some details: there are 100s of them; they're really small, the biggest ones are about the size of a sesame seed; they jump when disturbed, usually in an upwards direction; they don't seem to go on the cats; some venture 2-4 ft from the box to find food dishes on the ground (concrete patio); they drown themselves in the water dish by the dozen; they're more active when it's warmer (~ 45°). When it's cold (25-35°) they mostly hide somewhere. All of this makes them sound like fleas, but they're not fleas -- they don't jump as high or have the flat body of a flea. They actually look like arachnids. OK, that makes them sound like baby jumping spiders -- but I've seen baby jumping spiders at work, and these look different (they're reddish brown, not black, and they don't jump in an "across" direction, they jump upwards, kind of randomly). Maybe they're a different type of jumping spider, but why haven't they moved on? Why are they taking up residence on the box, instead of spreading out and going their separate ways like baby spiders usually do? They're about the size of a sesame seed so it's hard to get a good picture. EDIT: new picture collage: Answer: Very useful image updates! These are actually not arachnids but hexapods called springtails (order Collembola). Although springtails are often very tiny and hard to see without a lens, you happened to find some "larger" specimens. Specifically, these appear to be some species in the family Dicyrtomidae. Credit: M. 
Bertone (source) Your species resembles the image of Dicyrtomina minuta seen here or Dicyrtomina saundersi seen here (though I can't be certain of these sources' accuracy). In general, springtails usually feed on decaying matter, and -- you guessed it -- they're known for their ability to jump! Springtails are so called because they have a unique structure, the furcula, that allows them to jump a considerable distance relative to their tiny size. [source]. Springing mechanism of a generalized springtail. Credit: Marianne Alleyne [Source] According to NC State Extension: Rarely, springtails may become exceedingly abundant and may congregate in heaps several inches high on driveways, sidewalks and poolsides. Collembola.org has many excellent close-up images as well as detailed descriptions of nuances and sexual dimorphism seen in the Dicyrtomina group of springtails.
{ "domain": "biology.stackexchange", "id": 9409, "tags": "species-identification, entomology, arachnology, arthropod" }
Optimizing code solution for Palindrome Index-Hackerrank
Question: I submitted my solution for the Palindrome Index coding challenge but I get "test cases terminated due to time out error". My code is working and so I don't know what else to do to optimize it. Please help: function palindromeIndex(s) { let palindrome = s === s.split('').reverse().join('') if(!palindrome) { let i = 0 let integerIndex = [] let arr = s.split('') while(i < s.length) { arr.splice(i,1) if(arr.join('') === arr.reverse().join('')) { integerIndex.push(i) } arr = s.split('') i++ } return integerIndex.length > 0 ? integerIndex[0] : - 1 }else { return -1 } } Answer: It may well be an error @ Hackerrank. If I'm not mistaken the nodejs-code expects you to provide console input. Or you may have accidentally changed something in the surrounding code. Concerning your code: writing ES20xx, it's good practice to terminate lines with a semicolon (;), because not doing so may result in nasty bugs. let palindrome = s === s.split('').reverse().join('') You don't need this variable. It could've been: if(s !== s.split('').reverse().join('')) { Furthermore, if you wanted to declare a variable, it could've been a const here (you are not modifying it afterwards). Just for fun, here's an alternative approach, using substrings from the original given string: "hannach,ava,reopaper,annana,ewve,blob,otto,michael,racecaar,wasitacatiwsaw" .split(",") .forEach(name => console.log(`[${name}] => ${palindromeIndex(name)}`)); function palindromeIndex(s) { if (`${[...s]}` === `${[...s].reverse()}`) { return "is palindrome"; } let i = 0; while(i < s.length) { const sx = `${i < 1 ? s.substr(1, 0) : s.substr(0, i)}${s.substr(i + 1)}`; const rsx = `${[...sx].reverse().join("")}`; if (sx === rsx) { return `removing '${s[i]}' (@ position ${i}): '${sx}'`; }; i += 1; } return -1; }
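For what it's worth, the timeout can also be fixed algorithmically rather than by tweaking the harness: instead of rebuilding and reversing a copy of the string for every candidate index (quadratic work), walk two pointers inward and test only the two possible removals at the first mismatch. A sketch of that idea in Python (the function name is mine, not HackerRank's):

```python
def palindrome_index(s):
    """Index whose removal makes s a palindrome, or -1 if it already is one."""
    def is_pal(lo, hi):
        # check s[lo..hi] in place, without building reversed copies
        while lo < hi:
            if s[lo] != s[hi]:
                return False
            lo += 1
            hi -= 1
        return True

    lo, hi = 0, len(s) - 1
    while lo < hi:
        if s[lo] != s[hi]:
            # at the first mismatch, only two removals can possibly help
            if is_pal(lo + 1, hi):
                return lo
            if is_pal(lo, hi - 1):
                return hi
            return -1  # not fixable by removing a single character
        lo += 1
        hi -= 1
    return -1  # already a palindrome

print(palindrome_index("aaab"))      # 3
print(palindrome_index("baa"))       # 0
print(palindrome_index("racecaar"))  # 5
print(palindrome_index("aaa"))       # -1
```

Each character is examined a constant number of times, so the whole search is O(n) rather than O(n²).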
{ "domain": "codereview.stackexchange", "id": 38456, "tags": "javascript, algorithm, programming-challenge, palindrome" }
Falling electric dipole contradicts equivalence principle?
Question: Consider an electric dipole, with total mass $M$, consisting of charges $q$ and $-q$, separated by a distance $d$. The total mass $M$ includes the mass defect due to the negative electrostatic energy associated with the opposite charges. If the dipole is given an acceleration $a$ perpendicular to its moment the total electric force on it, due to each charge acting on the other, is given approximately by $$F_e=\frac{e^2a}{c^2d}$$ where we introduce $e^2 \equiv q^2/4\pi\epsilon_0$ for clarity. The exact expression is given in Electrostatic Levitation of a Dipole Eq(5). Now suppose the dipole, initially oriented horizontally, is dropped in a vertical gravitational field of strength $g$. Applying Newton's second law of motion to the dipole we have: gravitational force (gravitational mass times field strength) plus electric force equals the inertial mass times acceleration $$Mg+F_e=M a$$ where, following the equivalence principle, we assume gravitational and inertial masses are equal. Therefore the acceleration $a$ of the dipole is given by $$a=g\large(1-\frac{e^2}{Mc^2d}\large)^{-1}.$$ Thus the dipole accelerates faster than gravity. An observer falling with the dipole sees it move away from him whereas in deep space the observer does not see the dipole move away. Surely this contradicts the equivalence principle? P.S. I think that if advanced EM fields are somehow included in the calculation of the electric force $F_e$ then $F_e=0$ and the equivalence principle is obeyed. Answer: In your calculation you assume that gravitational mass $M_G$ of the system is $2m$ where $m$ is rest mass of a single particle, thus you assume it is independent of the mutual distance between the charged particles $d$. In other words, you do not take into account the force of gravity acting on the system due to the concentrated bound negative energy of EM field near the charged particles. 
However, since the system has lower inertial mass, it should also have lower gravitational mass. It is well known that systems with negative potential EM energy have inertial mass defect. In this case, the dipole is such a system, so it will have lower inertial mass than $2m$, thanks to its negative electrostatic potential energy $-\frac{e^2}{d}$. This "mass defect" effect comes from the forces of "acceleration electric fields" acting (in this case) to speed up the charged particles. This you have taken into account by including force $F_{em/self} = \frac{e^2}{c^2d}a$, which is the electromagnetic self-force acting on the dipole. But defect in inertial mass should mean also defect in gravitational mass. Heuristically/naively, the gravitational mass to use in the formula $F_G = M_G g$ should correspond to total energy of the system via Einstein's formula $$ E = M_Gc^2 $$ where $E$ is total energy of the system, including its internal potential energy. Using the Coulomb formula for potential energy, $$ E = 2mc^2 - \frac{e^2}{d} $$ and so the gravitational mass of the dipole should be taken as $$ M_{G} = 2m -\frac{e^2}{c^2d}. $$ Then, the Newtonian equation of motion turns out as follows. We have $$ M_G g + F_{em/self} = 2ma; $$ using the above expression for $M_G$ and $\frac{e^2}{c^2d}a$ for $F_{em/self}$, we obtain $$ \left(2m - \frac{e^2}{c^2d}\right)g = 2ma - \frac{e^2}{c^2d}a $$ which always implies $$ a = g, $$ confirming that if $M_G\neq 0$, the dipole will move in accordance with the equivalence principle.
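The cancellation at the end is easy to sanity-check numerically. A minimal sketch, with arbitrary assumed values for $m$, $e^2$, $c$, $d$ and $g$ in consistent units:

```python
# Numerical check that the corrected equation of motion yields a = g.
# All parameter values are arbitrary assumptions, in consistent units.
m, e2, c, d, g = 3.0, 0.7, 2.0, 0.5, 9.8

defect = e2 / (c ** 2 * d)   # e^2 / (c^2 d), the electromagnetic mass defect
M_G = 2 * m - defect         # gravitational mass including the defect

# Solve  M_G g + (e^2 / c^2 d) a = 2 m a  for the acceleration a:
a = M_G * g / (2 * m - defect)
print(a)  # equals g (up to float rounding): the equivalence principle holds

# The naive version keeps M_G = 2m and reproduces the apparent paradox:
a_naive = 2 * m * g / (2 * m - defect)
print(a_naive > g)  # True: faster than free fall, as in the question
```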
{ "domain": "physics.stackexchange", "id": 55252, "tags": "electromagnetism, general-relativity, classical-electrodynamics, equivalence-principle" }
Increasing Image Resolution
Question: I know of some oscilloscopes (DSA8300) that repeatedly sample at a few hundred kS/s to reconstruct a few GHz signal. I was wondering if this could be extended to 2D signals (photographs). Can I take a series (say 4) of still pictures using a commercial 16MP camera to finally reconstruct a 32MP image? Will doing this remove aliasing I have from each image? Should such a thing be attempted from a single image it would obviously not work as no new information is being introduced. If all the pictures taken are absolutely identical, will I still be at the same point as having one image? So are variations essential? Is CCD / CMOS noise enough variation for such a thing to work? Is there a name for such a technique or algorithm? What should I look for? Answer: One word for that technique is superresolution. Robert Gawron has a blog post here and the Python implementation here. Usually, this technique relies on each image being slightly offset from the others. The only gain you'd get from not moving between shots would be to reduce the noise level.
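A toy 1-D version of the idea may help: two coarse samplings of the same signal, offset by half a sample, interleave into a double-rate reconstruction, while two identical shots add nothing. This is a deliberate oversimplification (real super-resolution has to estimate the sub-pixel shifts and solve an inverse problem):

```python
import math

# The underlying "scene": a function the camera can only sample coarsely.
f = lambda x: math.sin(2 * math.pi * x / 16)

N = 8
shot1 = [f(2 * i) for i in range(N)]      # samples at 0, 2, 4, ...
shot2 = [f(2 * i + 1) for i in range(N)]  # same camera, shifted half a "pixel"

# Interleaving the two offset shots doubles the effective sampling rate.
highres = [v for pair in zip(shot1, shot2) for v in pair]
reference = [f(i) for i in range(2 * N)]
print(highres == reference)  # True: the offset contributed genuinely new samples

# Two identical shots, by contrast, add no information:
print([v for pair in zip(shot1, shot1) for v in pair] == reference)  # False
```

This is why the answer stresses the slight offsets between frames: without them, averaging can only reduce noise, never recover detail beyond the original sampling grid.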
{ "domain": "dsp.stackexchange", "id": 1640, "tags": "image-processing, aliasing, resolution, superresolution, inverse-problem" }
Perpetual motion?
Question: While I was studying my A-level physics course, I came across the term "perpetual machines" just at the end of the conservation of energy concept. Anyway, as a curious person I googled "perpetual machines" and saw lots of concepts, but the one that really confused me is the "John Wilkins perpetual machine". I want to ask: how does it work? Will it keep going forever? Does it even work? Link: https://www.youtube.com/watch?v=V70w3cxDJIM Answer: The video you see is a hoax. The principle (as explained in the video) is that the ball is attracted by the magnet ("lodestone") to go up the ramp, then drops down and slides along the bottom ramp. There are three problems with the implementation: first, the force of a magnet on an object is a function of the gradient - and the gradient of the magnet shown drops VERY quickly with distance. The apparent "keeps rolling up the slide" you see is NOT what you expect from a ball in the field of a magnet. Second, when you get near the top, the magnet should hold the ball firmly - there is no mechanism for making it drop down the hole (again, the force of the magnet on the ball should be MUCH stronger when it's very close). Third - if the magnetic field is on all the time, the force is conservative: that is, after a complete loop by the ball, no net work is done by the magnet (or gravity). The only way to explain the video is to realize there is probably a bit of electronics in the framework, and the magnetic force is modulated (turned on and off) to keep the ball rolling. Which makes for great "kinetic art", but not a perpetual motion machine. Conservation of energy tells us that such machines cannot exist. And that is a law that has been proven right, over and over again.
{ "domain": "physics.stackexchange", "id": 39676, "tags": "electromagnetism, energy-conservation, perpetual-motion" }
Prove that there exists a $d \times d$ unitary matrix $U$ which cannot be decomposed as a product of fewer than $d-1$ two-level unitary matrices
Question: I'm trying to solve exercise 4.38 from Nielsen and Chuang, which asks to "Prove that there exists a $d \times d$ unitary matrix $U$ which cannot be decomposed as a product of fewer than $d-1$ two-level unitary matrices". In this context, a two-level matrix is a matrix which acts nontrivially on at most two levels. In other words, we say that $A$ is a two-level matrix if it can be written as $A=\tilde A \oplus I$ for some $2\times2$ matrix $\tilde A$ (up to a rearrangement of the matrix components). This definition is found in section 4.5.1 in the 10th edition of the book. If you find unitary matrices $U_{d-1}, U_{d-2}, \ldots, U_1$ such that the matrix $U_{d-1}U_{d-2}\ldots U_1U$ has a one in the top left-hand corner, all zeroes elsewhere in the first row and column, and the remaining $d-1 \times d-1$ submatrix (when you remove the first row and column) is not a two-level unitary, then the decomposition of $U$ must require more than $d-1$ two-level unitaries. That seems pretty clear, I'm just not sure where to go from here. Any hints/suggestions? Answer: Suppose $U$ is a $d\times d$ unitary matrix which can be decomposed using less than $d-1$ two-level unitaries. We can think of each two-level unitary as an "edge" linking some pair of modes, interpreting each mode as a vertex. Let us then ask what kinds of configurations can be obtained using less than $d-1$ edges. In other words, what kinds of graphs are possible using less than $d-1$ edges. As discussed in this math.SE post, any such graph must contain at least two connected components. This means that there must be at least two subsets of vertices/modes, call them $V_1$ and $V_2$, which are not connected by any edge. Physically, we can understand this as saying that if less than $d-1$ two-level unitaries are used to decompose $U$, then there must be two subsets of modes on which $U$ acts independently. 
Upon rearranging the order of the levels, this means that $U=U_1\oplus U_2$ for some unitaries $U_1,U_2$. In other words, if $U$ can be built with less than $d-1$ two-level unitaries, then $U$ is block-diagonal in the computational basis. Clearly not every $U$ has this form. To name one example, the QFT matrix doesn't. To also verify this numerically, here's a Mathematica snippet that combines together $d-2$ random two-level unitaries and shows the resulting matrix. Consistently with the reasoning above, we can notice that any resulting matrix has a block-diagonal form: d = 5; numberTwoLevelUnitaries = d - 2; randomTwoLevelUnitaries = Table[RandomUnitary@2, numberTwoLevelUnitaries]; randomPositions = Table[RandomSample[Range@d, 2], numberTwoLevelUnitaries] // Echo; embeddedRandomUnitaries = Table[ With[{pos = posAndUnitary[[1]], u = posAndUnitary[[2]]}, ArrayFlatten[{ {u, 0}, {0, IdentityMatrix[d - 2]} }] // ReplacePart[IdentityMatrix@d, Table[ {pos[[i]], pos[[j]]} -> #[[i, j]], {i, 2}, {j, 2} ] // Flatten[#, 1] & ] & ], {posAndUnitary, Thread@{randomPositions, randomTwoLevelUnitaries}} ]; overallUnitary = Dot @@ embeddedRandomUnitaries; overallUnitary // MatrixForm // Chop
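The graph-theoretic step can also be checked without Mathematica: with $d$ vertices and fewer than $d-1$ edges, the mode graph always has at least two connected components. A small Python sketch using union-find (the helper names are mine):

```python
import itertools
import random

def component_count(num_vertices, edges):
    """Connected components of an undirected graph, via union-find."""
    parent = list(range(num_vertices))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in range(num_vertices)})

d = 5
# Any choice of d - 2 = 3 edges (two-level unitaries) on d = 5 modes...
for _ in range(200):
    edges = random.sample(list(itertools.combinations(range(d), 2)), d - 2)
    # ...leaves at least two disconnected groups of modes, i.e. the product
    # unitary is block-diagonal after reordering the levels.
    assert component_count(d, edges) >= 2
print("every 3-edge graph on 5 vertices is disconnected")
```

This is the combinatorial core of the argument: a graph on $d$ vertices with $k$ edges has at least $d-k$ components, so $k < d-1$ forces at least two.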
{ "domain": "physics.stackexchange", "id": 76373, "tags": "quantum-mechanics, homework-and-exercises, quantum-information, quantum-computer, unitarity" }
Question on how to actually use momentum space Feynman rules in $\phi^4$-theory
Question: The momentum space Feynman rules state that we "integrate over all undetermined momenta" and "impose momentum conservation at each vertex". This is given for example on page 95 of Peskin and Schroeder, however, I am unsure what these two statements mean. Firstly, for each external leg with momentum $p$ we have a factor: $e^{-ip\cdot x}$, if we have more than one of these we can obtain a delta function (that would seem to impose momentum conservation at each vertex), but only if we integrate over space, not momentum, however there is no rule stating that we integrate over space. I am unsure how we obtain the delta functions that are seen in momentum space evaluations of Feynman diagrams. Regarding the second rule, I am unsure why any of the momentum would be "undetermined", or why the momentum of one propagator/external line would be more "undetermined" than any other. Could someone clear this up slightly for me? Answer: Momentum-space Feynman rules are almost always used in calculations, and this entails transforming the propagators (from the field contractions) to momentum space. However, the integrals over spacetime that arise from the Dyson series expansion of the S-matrix still remain - interchanging the order of integration (as physicists do) and performing these spacetime integrals furnishes the delta functions at vertices that effect momentum conservation. For simplicity, I'll talk about $\phi^4$ theory here, but the argument here is easily generalisable. Every momentum-space propagator brings with it another $\int \mathrm d^4 k_{(i)}$ to the calculation - this means that a priori, the internal momenta are off-shell (to be expected from virtual propagators), and hence each such $k_{(i)}$ is an additional unconstrained degree of freedom. The delta functions produced by the integration over spacetime have the effect of constraining the momentum of the internal lines - killing the momentum integral, as it were. 
Obviously, there will be one such delta function for each vertex in the diagram. Exactly one of these forces overall momentum conservation, and each of the remaining ones constrain the momenta of one internal line (i.e. reduces the number of degrees of freedom by one). Consequently there will still remain $n_\text{int. lines} + 1 - n_\text{vertices}$ of these $\int \mathrm d^4 k_{(i)}$ integrals. Since a delta function like $\delta(k_{(i)} - k_{(j)})$ can be used to kill either the $\int \mathrm d^4 k_{(i)}$ integral or the $\int \mathrm d^4 k_{(j)}$ integral, it really makes no difference which one you choose to call "unconstrained", since the "constrained" one can in principle be calculated from the momentum conservation equation at that vertex. Finally, an external line can never have undetermined momentum - these are essentially the "initial conditions" for your scattering process. I'll use an explicit example here, take this diagram: Let's work in one dimension and use "scalar momentum" - suppose the momentum going in is $10$. By overall momentum conservation, the momentum coming out should also be $10$. The Feynman rules dictate that the internal lines should take on all real values, and we should sum over all of these processes (so supposedly we have three undetermined momenta here). However, momentum conservation dictates that these three values can't be arbitrary real numbers, but should sum to $10$. So we would have to sum over processes with $k_1 = 5, \ k_2 = 4, \ k_3 = 1$ and $k_1 = -\pi, \ k_2 = 10, \ k_3 = \pi$ and what have you. The main thing to realise here, is that for any one of these processes, we can calculate any one of the momenta from the others - for example, if you know that the incoming momentum is $10$, $k_1 = 4$ and $k_2 = 1$, you know that $k_3$ is constrained to be $5$. 
The choice of line to be constrained is completely arbitrary (in general, any line connected to the same vertex can be chosen), so you are correct in that no one internal line is favoured over another. We finally see that there are two undetermined momenta in this diagram, which agrees with $n_\text{int. lines} + 1 - n_\text{vertices} = 3 + 1 - 2 = 2$ Explicitly (in 3+1D), this diagram is, up to symmetry factors, given by $$ F= -\lambda^2\iint \mathrm d^4 x \ \mathrm d^4 y \ e^{iqy} D_{xy} D_{xy}D_{xy} e^{-ipx} \\ D_{xy}\rightarrow \int \frac{\mathrm d^4 k}{(2\pi)^4} \ \frac{i}{k^2-m^2+i\varepsilon} e^{-ik\cdot(x-y)} = \int \frac{\mathrm d^4 k}{(2\pi)^4} \ D_k e^{-ik\cdot(x-y)} \\ F= -\lambda^2\iint \mathrm d^4 x \ \mathrm d^4 y \ e^{iqy} \int \frac{\mathrm d^4 k_1}{(2\pi)^4} \ D_{k_1} e^{-ik_1\cdot(x-y)} \int \frac{\mathrm d^4 k_2}{(2\pi)^4} \ D_{k_2} e^{-ik_2\cdot(x-y)} \int \frac{\mathrm d^4 k_3}{(2\pi)^4} \ D_{k_3} e^{-ik_3\cdot(x-y)} e^{-ipx} \\ = -\lambda^2 \iiint \frac{\mathrm d^4 k_1}{(2\pi)^4} \frac{\mathrm d^4 k_2}{(2\pi)^4} \frac{\mathrm d^4 k_3}{(2\pi)^4} D_{k_1}D_{k_2}D_{k_3} \ \int\mathrm d^4 x \ e^{i(\sum_i k_i - p)\cdot x}\underbrace{\int\mathrm d^4 y \ e^{-i(\sum_i k_i - q)\cdot y}}_\text{Momentum Conservation at Vertex} \\ = -\lambda^2 \iiint \frac{\mathrm d^4 k_1}{(2\pi)^4} \frac{\mathrm d^4 k_2}{(2\pi)^4} \frac{\mathrm d^4 k_3}{(2\pi)^4} D_{k_1}D_{k_2}D_{k_3} \ \int\mathrm d^4 x \ e^{i(\sum_i k_i - p)\cdot x} (2\pi)^4\delta(k_1+k_2+k_3-q) \\ = -\lambda^2 \iint \frac{\mathrm d^4 k_1}{(2\pi)^4} \frac{\mathrm d^4 k_2}{(2\pi)^4} D_{k_1}D_{k_2}D_{\color{red}{q - k_1 - k_2}} \ \underbrace{\int\mathrm d^4 x \ e^{i(q - p)\cdot x}}_\text{Overall Momentum Conservation} \\ = -\lambda^2 (2\pi)^4 \delta(p - q) \iint \frac{\mathrm d^4 k_1}{(2\pi)^4} \frac{\mathrm d^4 k_2}{(2\pi)^4} D_{k_1}D_{k_2}D_{q - k_1 - k_2} $$ Normally this overall delta function is absorbed into the definition of the matrix element, see here. 
As you can see, $k_1$ and $k_2$ are the undetermined momenta (but the $\delta(k_1+k_2+k_3-q)$ could have killed any one of the $k_{(i)}$ integrals).
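The counting rule used above, $n_\text{int. lines} + 1 - n_\text{vertices}$, is just arithmetic, but it is handy to package; a trivial Python sketch (the helper name is mine):

```python
def undetermined_momenta(internal_lines, vertices):
    """Loop integrals (undetermined momenta) left over in a connected diagram.

    Each vertex contributes one delta function, but one combination of them
    only enforces overall momentum conservation, so (vertices - 1) deltas
    actually constrain internal momenta.
    """
    return internal_lines - (vertices - 1)

# The two-vertex diagram worked out above: 3 internal lines, 2 vertices.
print(undetermined_momenta(3, 2))  # 2, matching 3 + 1 - 2 = 2

# The one-vertex tadpole correction to the phi^4 propagator: 1 internal line.
print(undetermined_momenta(1, 1))  # 1
```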
{ "domain": "physics.stackexchange", "id": 75721, "tags": "quantum-field-theory, momentum, conservation-laws, feynman-diagrams, dirac-delta-distributions" }
O(Log n) Search - Array
Question: So, there's a LeetCode problem that has you find a O(log n) solution to finding a target number in a rotated sorted array. As an example: array = [5,6,7,1,2,3] target = 4 Basically, the trick is that you can find the point of rotation in O(log n) time and then you can do a binary search over the appropriate subsection in O(log n) time to find the index of the target (if it exists) in the array. This sort of got me thinking, what are the conditions under which you can do a O(log n) search for an element in an array in which all numbers are distinct? Clearly, you can still do a O(log n) search even if the array is something less than strictly sorted. But, I don't know how far you can push the array before it is no longer searchable in log n time. How would one look to solve or describe a problem like this? Answer: I think you essentially answered it yourself (and @Yves Daoust) with the point that there's an O(log(n)) trick. We could say that it's the set of problems for which you can modify a known O(log(n)) algorithm by adding O(log(n)) operations. You asked for "the conditions" under which an array can be searched in O(log(n)). The answer above is informal, but I expect there's a nice mathematical way to express it. That might start with defining a problem as a set of arrays, such as a set identified by some conditions or specification; e.g., "any array that is a rotation of a sorted array." In the case of the rotated array, the added operations are finding the smallest or largest element, which can be done in O(log(n)). Another example: if you have an array of numbers such that every number is no more than 10 positions away from where it would be if sorted, then you modify a binary search algorithm by walking the 20 positions surrounding the point where the algorithm expected to find it. That's just adding a constant number of operations (~20), which is less than O(log(n)). 
I think we could keep coming up with new O(log(n)) problems by taking one of these that we've discovered and "perverting" it. For example, we could have a rotated array with numbers no more than 10 spots away from their rotated-sorted position. That sounds a little more complicated, but I expect it's still O(log(n)). I believe the ways the problem could be modified are infinite and potentially very complicated, so it's probably not worth trying to identify them . . . unless you are looking for good job interview questions.
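The "at most 10 positions from sorted" example can be sketched concretely: an ordinary binary search augmented with a constant-width window scan. A Python sketch (the function name and the conservative ±2k window are my choices; the answer's ~20-position scan is the same idea):

```python
def search_k_displaced(arr, target, k=10):
    """Search an array whose every element sits at most k positions away from
    its fully sorted position.  O(k + log n): binary search plus window scans.

    The +/-2k window is a conservative choice that makes the discard step
    safe: if target were within 2k of mid it would have been seen in the scan.
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        # scan the window around the probe before committing to a side
        for i in range(max(0, mid - 2 * k), min(len(arr), mid + 2 * k + 1)):
            if arr[i] == target:
                return i
        # target (if present) is more than 2k from mid, so the displacement
        # bound lets us discard one whole side, as in plain binary search
        if target > arr[mid]:
            lo = mid + 2 * k + 1
        else:
            hi = mid - 2 * k - 1
    return -1

# Example: a sorted list with adjacent pairs swapped (displacement 1 <= k)
arr = [2, 1, 4, 3, 6, 5, 8, 7]
print(search_k_displaced(arr, 5, k=1))   # 5
print(search_k_displaced(arr, 9, k=1))   # -1
```

For fixed k this adds only a constant number of comparisons per probe, so the overall search stays O(log n).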
{ "domain": "cs.stackexchange", "id": 20839, "tags": "search-algorithms, binary-search" }
Publishing array data with rqt
Question: I'm trying to use the rqt message publisher to publish a message of type std_msgs/msg/Float64MultiArray. I can't figure out how I'm supposed to give the data field of type sequence<double>. The default value in the expression field suggests that one should simply specify the array initializer in the array('d') constructor. I tried this to no avail with: array('d', [0.1, 0.1]) >>> [ERROR] [1644749632.166618229] [get_message_class]: Malformed message_type: sequence<double> Python eval failed for expression "array('d', [0.1, 0.1])" with an exception "name 'array' is not defined" Publisher._evaluate_expression(): failed to evaluate expression: "array('d', [0.1, 0.1])" as Python type "None" Traceback (most recent call last): File "/opt/ros/rolling/lib/python3.8/site-packages/rqt_publisher/publisher.py", line 161, in change_publisher new_text = handler(self._publishers[publisher_id], topic_name, new_value) File "/opt/ros/rolling/lib/python3.8/site-packages/rqt_publisher/publisher.py", line 262, in _change_publisher_expression expression, error_prefix, slot_type.__name__) AttributeError: 'NoneType' object has no attribute '__name__' The interesting bit here is what seems to be the root cause of the problem: Python eval failed for expression "array('d', [0.1, 0.1])" with an exception "name 'array' is not defined" If this isn't the correct way to do it, what am I supposed to do? I also tried, all in vain, these options: [0.1, 0.1] array.array('d', [0.1, 0.1]) import array; array.array('d', [0.1, 0.1]) I'm using ROS2 Rolling at the moment, although I've had the same issue with Foxy and Galactic before. Originally posted by kvik on ROS Answers with karma: 23 on 2022-02-13 Post score: 1 Answer: Version 1.6.0 of rqt_publisher solves the problem. 
I also tested this version in Foxy and it works fine with the expression array('d', [0.1, 0.1]). Originally posted by halejo with karma: 26 on 2022-09-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by kvik on 2022-09-06: Nice. Thanks for the tip.
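For context, the expression field is evaluated as plain Python, so `array('d', ...)` can only work once the `array` constructor is present in the eval namespace, which is what the fixed rqt_publisher provides. The standalone standard-library equivalent, including a reproduction of the NameError from the question:

```python
from array import array

# What the expression field ultimately has to produce: a typed double array
# matching the sequence<double> 'data' field of Float64MultiArray.
data = array('d', [0.1, 0.1])
print(data)        # array('d', [0.1, 0.1])
print(list(data))  # [0.1, 0.1]

# Without the name in scope, eval fails exactly as in the error above:
try:
    eval("array('d', [0.1, 0.1])", {"__builtins__": {}}, {})
except NameError as err:
    print(err)     # name 'array' is not defined
```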
{ "domain": "robotics.stackexchange", "id": 37437, "tags": "ros, ros2, array, rqt" }
Where does CMB come/emit from?
Question: Where exactly does the CMB come from? I've seen it in documentaries as a huge sphere with Earth in the middle. But if all this radiation was ejected from the start of the universe some time after the big bang, why can we see it? Surely the radiation should be travelling away from us? Just like every galaxy is? Answer: The radiation was "produced" about 380,000 years after the Big Bang, and it was produced at every point of the Universe. From the very beginning, it's been (almost) uniform (the same at all places) and isotropic (the same in all directions). Since that time, the radiation was moving in all directions, essentially without any interactions. The cosmic microwave radiation "decoupled" - separated - from the rest of the matter in the Universe in this era we call "decoupling". Before the decoupling, the temperature of the Universe was so high that electrons and protons were largely separated in a plasma filling the Universe. Plasma carries a lot of random electric charge that severely interacts with photons all the time - so the plasma was opaque for the radiation. However, after the "decoupling", the atoms were formed for the first time - mostly Hydrogen atoms. Hydrogen atoms are neutral and their interactions with the photons are much weaker so the Universe became essentially transparent. The photons - and everything else in the Universe - at the moment of decoupling had a certain high temperature (thermal equilibrium, about 3000 Kelvin) which means that their spectrum was Planck's black-body thermal spectrum corresponding to the temperature. From that moment, photons were moving without any interactions and their wavelength was increasing proportionally to the size of the Universe. That also means that the energy of each photon was decreasing by the same factor; the temperature of the black body radiation did the same thing. That's why the current CMB temperature is just 2.7 Kelvin. 
You may see that the Universe's linear dimensions expanded about 1,000 times since the decoupling. (Note that 13.73 billion years over 380,000 years is substantially more than 1,000. That's because in the early stages, the expansion of the Universe was "decelerating" as a function of time. Only in the last few billion years did the expansion actually start accelerating, because of dark energy that gradually became important.) When the WMAP probe detects a photon of the cosmic microwave background, this collision with the telescope is the first interaction of this photon since the moment when the Universe was 380,000 years old. This fact allows you to deduce how far away is the point where the photon was born - or where it last interacted with another object. The birth place is clearly a point in the direction where the photon is coming from. The distance is always the same, so all the photons we see here today had to be produced at a particular spherical shell in spacetime. The center of the shell is "our place in the past" and the radius is such that the photons from the shell, when travelling inwards, needed exactly those 13.7 billion years of cosmic time to get here.
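The factor-of-1,000 expansion quoted above follows directly from the two temperatures, since the black-body temperature scales inversely with the scale factor; a quick arithmetic check:

```python
# Black-body temperature scales as 1/a (wavelengths stretch with the scale
# factor a), so the linear expansion since decoupling is T_then / T_now.
T_decoupling = 3000.0  # kelvin, from the answer
T_today = 2.7          # kelvin, the measured CMB temperature

expansion_factor = T_decoupling / T_today
print(round(expansion_factor))  # ~1100, the "about 1,000 times" quoted above
```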
{ "domain": "physics.stackexchange", "id": 434, "tags": "cosmology, cosmic-microwave-background" }
Oxidation state of Manganese
Question: We know that Mn shows variable oxidation states ranging from +2 to +7, but why is the +1 oxidation state of manganese (Mn) not stable? The +1 oxidation state of Mn would have a configuration of 4s1 3d5. What is it that I am missing, or where am I wrong? Answer: $\ce{Mn+}$ has configuration $\mathrm{(4s)^0(3d)^6},$ while on the other hand $\ce{Mn^2+}$ is $\mathrm{(3d)^5(4s)^0}$ and also, $\ce{Mn}$ in the ground state is $\mathrm{(4s)^2(3d)^5}.$ Therefore you can see that in the +2 oxidation state the orbitals are half-filled, while in +1 they are not. As half-filled configurations are more stable, +2 and 0 are more stable than +1. Therefore Mn prefers not to stay in +1, due to the higher stability of the adjacent states.
{ "domain": "chemistry.stackexchange", "id": 13014, "tags": "inorganic-chemistry, transition-metals, oxidation-state" }
State evolution and phase correction for iterative phase estimation
Question: I'm learning about the iterative phase estimation (IPE) algorithm from the qiskit textbook. Here's a circuit I generated to implement this algorithm on a random single-qubit Hamiltonian. Instead of using the phase gate to perform time evolution as demonstrated in the textbook, I used controlled $U$ gates. From the qiskit document, the qubit $q_1$ remains in the same state $|\psi\rangle$ throughout the algorithm (here it's initialized to $|0\rangle$). I don't quite understand why this should be the case, given the circuit above. I was thinking that when we perform a measurement on $q_0$, we are kind of 'projecting' the quantum state on $q_1$ onto a subspace associated with the measurement outcome. If we just look at the unitary operations on $q_1$, I still don't know why that state should be the same. Also, should the phase corrections (controlled by the classical bits) starting from the second iteration be the same for the implementation of IPE on any system? It seems like the phase we want to correct after each iteration is independent of the system we simulate, but I'm worried there's something else we need to take into account for a large system when we calculate the phase that needs to be corrected. Thanks so much for the help :) Answer: Quick answer: the qubit $q_1$ remains in the same state $|\psi\rangle$ after measurement because it is not entangled with $q_0$. Yes! The phase correction is the same regardless of the unitary $U$. Details: Assume that $U|\psi\rangle = e^{2 \pi i \theta}|\psi\rangle$, where $\theta$ is a fraction whose binary representation is $0.a_1a_2 \dots a_k$. Initially, the state is $|\psi\rangle|0\rangle$ (here I'm using little endian bit ordering, same as Qiskit). After applying the first Hadamard gate the state becomes $\frac{1}{\sqrt{2}}(|\psi\rangle|0\rangle + |\psi\rangle|1\rangle)$. 
And after applying the controlled $U^{2^{k-1}}$ the state becomes $$\frac{1}{\sqrt{2}}(|\psi\rangle|0\rangle + U^{2^{k-1}}|\psi\rangle|1\rangle) = \frac{1}{\sqrt{2}}(|\psi\rangle|0\rangle + e^{2\pi i 2^{k-1}\theta}|\psi\rangle|1\rangle) = \frac{1}{\sqrt{2}}|\psi\rangle(|0\rangle + e^{2\pi i 2^{k-1}\theta}|1\rangle) $$ Based on the assumed binary representation of $\theta$, the fractional part of $2^{k-1}\theta$ equals $0.a_k$. Hence, $e^{2\pi i 2^{k-1}\theta} = e^{\pi ia_k} = (-1)^{a_k}$ (because $a_k$ equals $0$ or $1$). So the state equals $\frac{1}{\sqrt{2}}|\psi\rangle(|0\rangle + (-1)^{a_k} |1\rangle)$, which becomes $|\psi\rangle|a_k\rangle$ after applying the second Hadamard. When measuring the first qubit we get $a_k$, and the second qubit remains in its initial state $|\psi\rangle$. The purpose of the phase correction step is to remove the bits of $\theta$ we already know. After $m$ iterations we know the values of $a_k, a_{k-1}, \dots, a_{k-m+1}$. To remove these bits, we add a relative phase of $e^{-2 \pi i\, 0.0a_{k-m+1}\dots a_{k-1}a_k}$ so that after applying the controlled $U^{2^{k-m-1}}$ the remaining relative phase is $e^{2 \pi i\, 0.a_{k-m}}$, and then we can do the same steps as before to get the value of $a_{k-m}$. And so on until we get all the bits of $\theta$.
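The bit-by-bit bookkeeping can be cross-checked with a tiny classical simulation. The sketch below is my own illustration (not from the qiskit textbook); it tracks only the ancilla's relative phase, assumes $\theta$ has an exact $k$-bit binary fraction so every measurement is deterministic, and extracts the least significant bit first:

```python
import numpy as np

def iterative_phase_estimation(theta, k):
    """Recover the k fraction bits of theta = 0.a1 a2 ... a_k (binary)."""
    bits = {}
    tail = 0.0                        # 0.a_{j+1} ... a_k, the bits already known
    for j in range(k, 0, -1):         # least significant bit a_k first
        # ancilla phase after controlled-U^(2^(j-1)) and the phase correction:
        # 2*pi * (2^(j-1)*theta - 0.0 a_{j+1}...a_k) = pi * a_j  (mod 2*pi)
        phi = 2 * np.pi * ((2 ** (j - 1)) * theta - tail / 2)
        amp0 = (1 + np.exp(1j * phi)) / 2     # <0| amplitude after the final H
        bits[j] = 0 if abs(amp0) ** 2 > 0.5 else 1
        tail = bits[j] / 2 + tail / 2         # prepend the newly learned bit
    return [bits[j] for j in range(1, k + 1)]

# theta = 0.101 in binary = 0.625; the protocol recovers a_1 a_2 a_3 = 1 0 1
print(iterative_phase_estimation(0.625, 3))   # [1, 0, 1]
```

Note that the system register never even appears in the simulation: it stays in $|\psi\rangle$ throughout, which is exactly the point made in the answer.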
{ "domain": "quantumcomputing.stackexchange", "id": 3536, "tags": "quantum-algorithms, hamiltonian-simulation, quantum-phase-estimation" }
What is spin state of two bound spin half particles and two independent spin half particles?
Question: This question arises from my study of bound and unbound states when constructing wave functions, so I phrase it in terms of spin-half particles. For the bound state, I understand that we can measure the total spin of the bound state, such that the spin state of two bound spin-half particles should be an eigenstate of $$S=S^{(1)}+S^{(2)}.$$ So the spin state should be $\mid1,1\rangle,\mid1,0\rangle,\mid1,-1\rangle$ for the triplet states and $\mid0,0\rangle$ for the singlet state. As for the spin state of two independent spin-half particles, I don't know. I guess it should be $\mid+,+\rangle$ or $\mid+,-\rangle$, something like these. If yes, why are they written like that? Answer: You should first remember that spin is a vector quantity. There isn't just one spin operator, but three: $\hat S_x$, $\hat S_y$, $\hat S_z$. They are non-commuting operators, so it's impossible to measure all three simultaneously (except for the case $s=0$). They all however commute with the operator $$ \hat {\bf S}^2 = \hat S_x^2 + \hat S_y^2 + \hat S_z^2$$ which can also be written as $$\hat {\bf S}^2 = \hat S_z (\hat S_z +1) + \hat S_-\hat S_+ $$ where $$\hat S_\pm = \hat S_x \pm {\rm i}\hat S_y$$ The spin state of the particle can be characterized by two numbers $s$ and $m_s$ $$ \hat{\bf S}^2|s, m_s\rangle = s(s+1)|s,m_s\rangle$$ $$ \hat S_z|s, m_s\rangle = m_s|s,m_s\rangle$$ It can be shown that the operators $\hat S_\pm$ move between the states. The exact formulas are: $$ \hat S_+|s, m_s\rangle = \sqrt{s(s+1)-m_s(m_s+1)}|s, m_s+1\rangle $$ $$ \hat S_-|s, m_s\rangle = \sqrt{s(s+1)-m_s(m_s-1)}|s, m_s-1\rangle $$ The operator $\hat {\bf S}^2$, with eigenvalue $s(s+1)\hbar^2$, is what really determines the total spin of a particle: from its eigenvalue you can calculate $s$. The $m_s$, the eigenvalue of $\hat S_z$, can take values from $-s$ to $s$ and is no clear indicator of the total spin of the particle.
For example, it is possible for a particle of spin 1 to be in an eigenstate of $S_z$ with eigenvalue $0$ or $-1$, but it doesn't mean that it has spin $0$ or $-1$. So when you try to determine the total spin of a bound state, you need to find the eigenstates of the operator $$ \hat{\bf S}^2 = (\hat S^{(1)}_x+\hat S^{(2)}_x)^2 + (\hat S^{(1)}_y+\hat S^{(2)}_y)^2 +(\hat S^{(1)}_z+\hat S^{(2)}_z)^2$$ Since the state of one particle can be expressed in the basis of states $|s,m_s\rangle$, the state of two particles can be expressed in the basis: $$|s_1,m_{s1}\rangle\otimes|s_2, m_{s2}\rangle $$ Since $s_1$ and $s_2$ for fundamental particles are always constant, and are usually known, they are usually skipped to shorten the notation; let's introduce the notation $$ |m_{s1}; m_{s2}\rangle = |s_1,m_{s1}\rangle\otimes|s_2, m_{s2}\rangle$$ For example, for two particles of spin $\frac12$, we have four basis states: $$|\frac12;\frac12\rangle, \qquad |\frac12;-\frac12\rangle, \qquad |-\frac12;\frac12\rangle, \qquad |-\frac12;-\frac12\rangle$$ which can also be denoted as $$|+;+\rangle, \qquad |+;-\rangle, \qquad |-;+\rangle, \qquad |-;-\rangle$$ They are all eigenstates of $\hat S_{z}^{(1)}$ and $\hat S_{z}^{(2)}$: $$\hat S_{z}^{(1)} |m_{s1};m_{s2}\rangle = m_{s1}|m_{s1};m_{s2}\rangle$$ $$\hat S_{z}^{(2)} |m_{s1};m_{s2}\rangle = m_{s2}|m_{s1};m_{s2}\rangle$$ and as a consequence, they are also eigenstates of $\hat S_z = \hat S_{z}^{(1)} + \hat S_{z}^{(2)}$: $$(\hat S_{z}^{(1)} + \hat S_{z}^{(2)}) |m_{s1};m_{s2}\rangle = (m_{s1} + m_{s2})|m_{s1};m_{s2}\rangle$$ However, they are not all eigenstates of the operator $\hat{\bf S}^2$.
If we write, as before, that $$ \hat {\bf S}^2 = \hat S_z (\hat S_z +1) + \hat S_-\hat S_+ $$ where $\hat S_\pm = \hat S_\pm^{(1)} +\hat S_\pm^{(2)}$, we can find that $$ \hat {\bf S}^2 |+;+\rangle = 2|+;+\rangle$$ $$ \hat {\bf S}^2 |+;-\rangle = |+;-\rangle + |-;+\rangle $$ $$ \hat {\bf S}^2 |-;+\rangle = |+;-\rangle + |-;+\rangle $$ $$ \hat {\bf S}^2 |-;-\rangle = 2|-;-\rangle$$ We can see that $|+;-\rangle$ and $|-;+\rangle$ are not eigenstates of $\hat {\bf S}^2$. Looking for actual eigenstates, we find that $$ \hat {\bf S}^2 (|+;-\rangle + |-;+\rangle) = 2(|+;-\rangle + |-;+\rangle) $$ $$ \hat {\bf S}^2 (|-;+\rangle - |+;-\rangle) = 0 $$ We should also add a factor $\frac{1}{\sqrt{2}}$ to normalize them. To sum up, we have \begin{align} &\hat {\bf S}^2 |+;+\rangle &=& 2|+;+\rangle \\ &\hat {\bf S}^2 \frac{|+;-\rangle + |-;+\rangle}{\sqrt{2}} &=& 2\frac{|+;-\rangle + |-;+\rangle}{\sqrt{2}} \\ &\hat {\bf S}^2 |-;-\rangle &=& 2|-;-\rangle\\ &\hat {\bf S}^2 \frac{|+;-\rangle - |-;+\rangle}{\sqrt{2}} &=& 0 \\ &\hat S_{z} |+;+\rangle &=& |+;+\rangle \\ &\hat S_{z} \frac{|+;-\rangle + |-;+\rangle}{\sqrt{2}} &=& 0 \\ &\hat S_{z} |-;-\rangle &=& -|-;-\rangle \\ &\hat S_{z} \frac{|+;-\rangle - |-;+\rangle}{\sqrt{2}} &=& 0 \end{align} comparing it with the formulas $$ \hat{\bf S}^2|s, m_s\rangle = s(s+1)|s,m_s\rangle$$ $$ \hat S_z|s, m_s\rangle = m_s|s,m_s\rangle$$ we can identify $$ |+;+\rangle = |1,1\rangle $$ $$ \frac{|+;-\rangle + |-;+\rangle}{\sqrt{2}} = |1,0\rangle $$ $$ |-;-\rangle = |1,-1\rangle $$ $$ \frac{|+;-\rangle - |-;+\rangle}{\sqrt{2}} = |0,0\rangle $$ where on the right side the states were written in terms of their total spin and total z-component of the spin.
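These algebraic results are easy to verify numerically. The following sketch (my own, not part of the original answer; $\hbar = 1$, standard Pauli-matrix representation) builds $\hat{\bf S}^2$ for two spin-1/2 particles and confirms the triplet/singlet eigenvalues:

```python
import numpy as np

# Single-particle spin-1/2 operators S_i = sigma_i / 2 (units of hbar)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

# Total spin components S_i = S_i^(1) (x) I + I (x) S_i^(2)
S = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]
S2 = sum(Si @ Si for Si in S)

# Eigenvalues of S^2 are s(s+1): one singlet (0) and three triplet states (2)
eigvals = np.linalg.eigvalsh(S2)
print(np.round(eigvals.real, 6) + 0.0)   # [0. 2. 2. 2.]

# The singlet (|+;-> - |-;+>)/sqrt(2) is annihilated by S^2,
# while |+;+> has S^2 eigenvalue 2, as derived above
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
print(np.allclose(S2 @ singlet, 0))                            # True
print(np.allclose(S2 @ np.kron(up, up), 2 * np.kron(up, up)))  # True
```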
{ "domain": "physics.stackexchange", "id": 66931, "tags": "quantum-mechanics" }
Difference between coordinate time and proper time in general relativity
Question: I was watching a video on relativity on YouTube that talked about the difference between coordinate time $t$ and proper time $\tau$ and I have a couple of questions. As I understand it, the video said that the coordinate time $\Delta t$ along a path between two events is the time between the two events measured by a faraway observer. The proper time $\Delta \tau$ along a path between two events is the time measured by an observer traveling along that path. I understand this in the context of special relativity. However, in the context of general relativity, what would a faraway observer entail, given that the definition of coordinate time says it's the time measured by a faraway observer? For example, consider a case where we are comparing the amount of time measured between two events in a strong gravitational field by two different observers. One observer is traveling through the gravitational field and the other observer is not in the gravitational field. Would the coordinate time be the time the observer far away from the location of the two events occurring in the gravitational field (i.e. an observer in a flat Minkowski spacetime) measures? In general, how does the distinction between coordinate time and proper time work in general relativity? Is the coordinate time the time measured between two events by an observer in flat Minkowski spacetime? Answer: I honestly believe this sort of question requires some formulas. First of all, let us agree on the setting. In general relativity (GR) the metric $g_{\mu\nu}$ is a dynamical tensor, meaning it is a tensor which is not constant. The metric encodes how one measures distances and time intervals, or better, space-time intervals.
This metric will depend on the coordinates you choose for the patch of the space-time you are considering; without loss of generality call them as follows: $$g_{\mu\nu} = g_{\mu\nu}(t,x_1,x_2,x_3)$$ The important thing is that locally, say on a small enough patch, things are like in special relativity. This means that there is one coordinate, namely $t$ in this example, associated with a diagonal term $g_{tt}$ of opposite relative sign. This coordinate is usually called the coordinate time, or at least is responsible for defining what time-like means. Different coordinates and metrics have different behaviors and names, but they all share the same metric signature (for realistic, non-Euclidean metrics), and this special coordinate always exists. So far we have only chosen a set of coordinates for our patch of the "Universe" and recognized that one of them behaves slightly differently. Now let us speak about proper time. Over these chosen coordinates let us consider some geodesics, that is, paths that experience no acceleration. Mathematically, in these coordinates, a path in space-time is just some function depending on a parameter $s$ that returns a point in space-time: $$\gamma(s)=(t(s),x_1(s),x_2(s),x_3(s))$$ As you might know, there are infinitely many ways to parameterize a curve; in other words, $s$ can be changed for some other parameter. But again, for the sake of comparison one looks for a "standard", and the natural choice is the arc length of the path itself. Assuming this path is time-like (meaning simply that its velocity is always lower than the speed of light), the arc length of this path in 4 dimensions is what we call proper time; mathematically: $$\gamma(\tau)=(t(\tau),x_1(\tau),x_2(\tau),x_3(\tau))\Leftrightarrow \bigg|\frac{d\gamma}{d\tau}\bigg|^2=1$$ It has units of time, and has the interpretation of being what a clock traveling along that geodesic would display.
It is the parameterization that ensures a constant speed of 1 w.r.t. the parameter $\tau$. Above I presented just the definitions as best as I could without going full math mode. Let us make contact with observers, and with what has been mentioned in the post. Asymptotic observers are thought of as experiencing a flat metric (Minkowski, if you will), and it simply happens that their proper time might coincide with the coordinate time as defined above, hence the terminology and its usage. Notice how coordinate time does not depend on any geodesic; it depends only on our coordinate choice. Proper time, on the other hand, is different for every geodesic, but its intervals do not depend on our choice of coordinates: it is an intrinsic property of the geodesic. To address the last part of your question: events are points in space-time, for example $$(t_1,x_1^1,x_1^2,x_1^3)$$ $$(t_2,x_2^1,x_2^2,x_2^3)$$ where I have used the same names for the coordinates as before. These points, as written, have coordinate times $t_1$ and $t_2$, and you may subtract them to find the coordinate-time interval. Nonetheless, I can speak about the same points in many different ways: I can change the coordinates altogether, or, if geodesics happen to pass through them, describe them by the value of the geodesic's parameter at those points. Take this just as an invitation to think about the geometry of the situation. To close, one could say that for certain space-time metrics that are asymptotically flat, the time on the clock of a faraway observer (its proper time) coincides with coordinate time, so the time intervals it measures will be intervals of coordinate time as well.
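The "arc length" interpretation can be made concrete with a short numerical sketch (my own illustration, in flat Minkowski spacetime with $c = 1$; the helper name is an assumption). It integrates $d\tau = \sqrt{1 - v^2}\,dt$ along a path given in coordinate time:

```python
import numpy as np

# Proper time along a time-like path gamma(t) = (t, x(t)) in flat
# (Minkowski) spacetime with c = 1:  tau = integral of sqrt(1 - v^2) dt,
# i.e. the 4-dimensional arc length of the path.
def proper_time(x_of_t, t0, t1, n=100_000):
    t = np.linspace(t0, t1, n)
    v = np.gradient(x_of_t(t), t)         # coordinate velocity dx/dt
    integrand = np.sqrt(1.0 - v ** 2)
    # trapezoidal rule over the coordinate-time interval
    return float(np.sum((integrand[:-1] + integrand[1:]) / 2 * np.diff(t)))

# Uniform motion at v = 0.6: the moving clock shows sqrt(1 - 0.36) = 0.8
# of the coordinate-time interval (here 10.0), so tau ~ 8.0 < 10.0
tau = proper_time(lambda t: 0.6 * t, 0.0, 10.0)
print(round(tau, 3))   # 8.0
```

For a static faraway observer, $v = 0$ and the integral returns the coordinate-time interval itself, which is the closing point of the answer.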
{ "domain": "physics.stackexchange", "id": 70221, "tags": "general-relativity, spacetime, coordinate-systems, time, observers" }
What is the difference of spectra of EI-MS and ESI-MS/MS?
Question: I want to ask not about the difference in principle or in applicability to GC and LC, but about the peak behavior of the spectra. EI is a hard ionization method, so many fragments occur, sometimes losing the molecular ion. In contrast, ESI is a soft ionization method, so there are only a few fragments. But this can be a disadvantage because of the lack of molecular information. To compensate for this, ESI-MS/MS performs further fragmentation. Then what is the difference between EI-MS and ESI-MS/MS? Both methods involve fragmentation. If my sample is amenable to both GC and LC, can I get similar peaks and information from GC-EI-MS and LC-ESI-MS/MS? Answer: Assuming the present question relates to the characterization of a commercial sample of 2,3,5-tribromothiophene here, with results used to monitor the progress of a chemical synthesis of simple small molecules (no forensics, no proteins, no polymers, etc.), then EI-MS might suffice (especially if combined with IR, 1H and 13C-NMR spectroscopy) and ESI-MS/MS would be overkill. GC-MS is a hyphenated technique. This means that you first separate compounds by gas chromatography; these fractions are then ionized and shatter into fragments, which eventually are separated by their mass-to-charge ratio in the mass spectrometer. In LC-MS (equally a hyphenated technique), liquid chromatography performs the separation of the compounds. Because the mechanism of LC differs from that of GC, you cannot equate retention times in GC with those of the same compounds in LC. Because of the large excess of solvent molecules in LC compared to your analyte molecules, the chemistry of ionization in the mass spectrometer equally differs from that in GC. In contrast to spectroscopic methods like NMR, the shape of the spectra obtained by mass spectrometry, i.e. the relative intensities of the individual peaks experimentally recorded, may differ quite a lot from one experiment to another.
Among the influential parameters is the method of ionization, like CI (chemical ionization) with a gas, or EI (electron ionization), where electrons hit your molecules. The parameters of the ionization matter too: e.g., EI with an accelerating potential of $\pu{20 eV}$ is softer (with less fragmentation) than the standard $\pu{70 eV}$, however with the possibility that some molecules are ionized less well and are subsequently seen less well because of the lesser fragmentation. The conditions of fragmentation in a GC-MS equally differ from those in an LC-MS. In the case of the latter, ionization and fragmentation of your analyte molecules interfere much more with the ionization and fragmentation of the carrier molecules (the solvent running the LC, present in excess compared to your analyte). As nicely described in @M.Farooq's answer, the MS-MS combination primarily targets larger molecules where, owing to the sheer complexity of the molecule's structure (e.g., proteins, polymers), one identifies first-generation fragments by characterizing their subsequently formed fragments. This is an example of tandem mass spectrometry. For the synthesis of small organic molecules, however, the detailed elucidation of these fragmentation pathways in the mass spectrometer is often not relevant. Your time and energy are typically better invested in the characterization of your compounds along the compound characterization checklist of your group and/or the targeted journal (example: J. Org. Chem., yet you may use it anyway as a guide).
In many cases, this includes recording a) a low-resolution MS with characteristic peaks assigned ($\ce{M+}$, adducts, important isotopic peaks such as a halogen pattern), a few important assigned fragmentations (e.g., $\ce{- H2O}$, decarboxylation, tropylium cation) or patterns (e.g., McLafferty), and b) one high-resolution HRMS for the newly prepared compound, together with a self-consistent set of other (spectroscopic) data (UV, IR, NMR, melting point, combustion analysis, etc.).
{ "domain": "chemistry.stackexchange", "id": 15898, "tags": "mass-spectrometry" }
How does a planet lose mass to its host star?
Question: How do bodies like WASP-12b lose mass to their central body? The process is never really explained in popular media, which instead show visualisations of a thin veil of gas being "sucked" in a rather straight line (and then into a ring around the star) rising straight from the surface of the planet. I highly doubt the mechanics these illustrations insinuate: gravitation doesn't work like a vacuum cleaner on a dusty rug. Either the rug goes with it, or the dust stays on the rug. The only mechanism I can think of is particles from the upper atmosphere which happen to pick up enough thermal energy to exceed the escape velocity of the planet. But is this a process which effects a big enough mass transfer to be of significance? Am I overlooking something? How does such mass loss really proceed? Answer: In the case of WASP-12b, at least, the close proximity to the star has actually deformed the planet so much that it is overflowing its Roche lobe, the region around a body within which orbiting material remains gravitationally bound to that body. We can show this mathematically by finding the approximate Roche lobe of the planet using: $\frac{r_1}{A}=.46224\sqrt[3]{\frac{M_1}{M_1+M_2}}$ for $\frac{M_1}{M_2}<.8$ (which it is), where $A$ is the orbital separation and $r_1$ is the radius of the Roche lobe around $M_1$. Solving for $r_1$ we get $97788\pm 2318$ miles. The planet's radius is $77759 \pm 3909$ miles. Taking the best possible scenario, there would be $\approx$ 13,000 miles between the planet's surface and its Roche lobe if the planet were a perfect sphere. It is unknown how much the star affects the planet, but it would not be a stretch to think that the major axis of the planet is 13,000 miles longer than the minor axis, allowing the planet's gas to be shoved out of the Roche lobe and "sucked" into the star.
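The quoted Roche-lobe radius can be sanity-checked in a few lines. The WASP-12 parameters below are approximate published values that I am assuming for illustration (they are not stated in the answer), so the result only reproduces the order of magnitude:

```python
# Paczynski approximation for the Roche-lobe radius, valid for M1/M2 < 0.8:
#   r1 / A = 0.46224 * (M1 / (M1 + M2))^(1/3)
M_planet = 1.4 * 1.898e27        # assumed: ~1.4 Jupiter masses, in kg
M_star   = 1.35 * 1.989e30       # assumed: ~1.35 solar masses, in kg
A        = 0.0234 * 1.496e11     # assumed: ~0.0234 AU orbital separation, in m

r1 = 0.46224 * A * (M_planet / (M_planet + M_star)) ** (1 / 3)
r1_miles = r1 / 1609.344
print(f"Roche-lobe radius ~ {r1_miles:,.0f} miles")  # on the order of 1e5 miles
```

With these assumed inputs the result lands near 100,000 miles, consistent with the $97788 \pm 2318$ miles quoted in the answer.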
{ "domain": "astronomy.stackexchange", "id": 2526, "tags": "supernova, binary-star, white-dwarf, hot-jupiter" }
A standard-conforming C++17 std::optional implementation
Question: I took quite some time to implement a fully standard-conforming std::optional in C++17. It turned out more sophisticated than I initially thought. My code is just below 1000 lines (excluding empty lines), and I have tested the functions extensively. There have been some attempts to implement std::optional on Code Review. A simple search brings up two: Reinventing std::optional - far from standard conforming; `std::optional` under C++14 v1 - nice in general, but doesn't implement the interaction between constexpr and triviality correctly. Some facts that complicate the implementation: Many operations are constexpr friendly. With constexpr, the aligned_storage + explicit construction / destruction technique becomes useless. The standard is effectively asking us to use a union. The fact that the constexpr-ness of the copy / move operations depends on the triviality of the corresponding operations on the value type is clear evidence, because that's exactly how unions work. The special member functions conditionally get defined as deleted / participate in overload resolution. Since special member functions cannot be templates, SFINAE cannot be used, and the only way to implement this that I can think of is to write a chain of base classes and use class template specialization, and then use = default to "inherit" the (possibly deleted) special member functions. I used N4659 (C++17 final draft) as a reference. The relevant parts are [optional], [unord.hash], and [depr.func.adaptor.binding] (for the deprecated std::hash<...>::result_type and std::hash<...>::argument_type). Except for std::hash, all functionality is provided in the my_std namespace. As you can see, basically everything is boilerplate code and the actual code is almost zero.
// C++17 std::optional implementation #ifndef INC_OPTIONAL_HPP_9AEkHPjv56 #define INC_OPTIONAL_HPP_9AEkHPjv56 #include <cassert> #include <exception> #include <initializer_list> #include <memory> // for std::destroy_at #include <typeindex> // for std::hash #include <typeinfo> #include <type_traits> #include <utility> namespace my_std { // [optional.optional], class template optional template <class T> class optional; // [utility.syn], [in-place construction] struct in_place_t { explicit in_place_t() = default; }; inline constexpr in_place_t in_place{}; // [optional.nullopt], no-value state indicator struct nullopt_t { constexpr explicit nullopt_t(int) {} }; inline constexpr nullopt_t nullopt{0}; // [optional.bad.access], class bad_optional_access class bad_optional_access :public std::exception { public: bad_optional_access() = default; }; // [optional.relops], relational operators template <class T, class U> constexpr bool operator==(const optional<T>&, const optional<U>&); template <class T, class U> constexpr bool operator!=(const optional<T>&, const optional<U>&); template <class T, class U> constexpr bool operator<(const optional<T>&, const optional<U>&); template <class T, class U> constexpr bool operator>(const optional<T>&, const optional<U>&); template <class T, class U> constexpr bool operator<=(const optional<T>&, const optional<U>&); template <class T, class U> constexpr bool operator>=(const optional<T>&, const optional<U>&); // [optional.nullops], comparison with nullopt template <class T> constexpr bool operator==(const optional<T>&, nullopt_t) noexcept; template <class T> constexpr bool operator==(nullopt_t, const optional<T>&) noexcept; template <class T> constexpr bool operator!=(const optional<T>&, nullopt_t) noexcept; template <class T> constexpr bool operator!=(nullopt_t, const optional<T>&) noexcept; template <class T> constexpr bool operator<(const optional<T>&, nullopt_t) noexcept; template <class T> constexpr bool operator<(nullopt_t, const 
optional<T>&) noexcept; template <class T> constexpr bool operator>(const optional<T>&, nullopt_t) noexcept; template <class T> constexpr bool operator>(nullopt_t, const optional<T>&) noexcept; template <class T> constexpr bool operator<=(const optional<T>&, nullopt_t) noexcept; template <class T> constexpr bool operator<=(nullopt_t, const optional<T>&) noexcept; template <class T> constexpr bool operator>=(const optional<T>&, nullopt_t) noexcept; template <class T> constexpr bool operator>=(nullopt_t, const optional<T>&) noexcept; // [optional.comp.with.t], comparison with T template <class T, class U> constexpr bool operator==(const optional<T>&, const U&); template <class T, class U> constexpr bool operator==(const U&, const optional<T>&); template <class T, class U> constexpr bool operator!=(const optional<T>&, const U&); template <class T, class U> constexpr bool operator!=(const U&, const optional<T>&); template <class T, class U> constexpr bool operator<(const optional<T>&, const U&); template <class T, class U> constexpr bool operator<(const U&, const optional<T>&); template <class T, class U> constexpr bool operator>(const optional<T>&, const U&); template <class T, class U> constexpr bool operator>(const U&, const optional<T>&); template <class T, class U> constexpr bool operator<=(const optional<T>&, const U&); template <class T, class U> constexpr bool operator<=(const U&, const optional<T>&); template <class T, class U> constexpr bool operator>=(const optional<T>&, const U&); template <class T, class U> constexpr bool operator>=(const U&, const optional<T>&); // [optional.specalg], specialized algorithms template <class T> std::enable_if_t<std::is_move_constructible_v<T> && std::is_swappable_v<T>> swap(optional<T>& x, optional<T>& y) noexcept(noexcept(x.swap(y))) { x.swap(y); } template <class T> constexpr optional<std::decay_t<T>> make_optional(T&& v) { return optional<std::decay_t<T>>(std::forward<T>(v)); } template <class T, class... 
Args> constexpr optional<T> make_optional(Args&&... args) { return optional<T>(in_place, std::forward<Args>(args)...); } template <class T, class U, class... Args> constexpr optional<T> make_optional(std::initializer_list<U> il, Args&&... args) { return optional<T>(in_place, il, std::forward<Args>(args)...); } } namespace std { // [optional.hash], hash support template <class T> struct hash<my_std::optional<T>>; } namespace my_std::detail { template <class T, class U> struct is_cv_same :std::is_same< std::remove_const_t<std::remove_volatile_t<T>>, std::remove_const_t<std::remove_volatile_t<U>> > { }; template <class T, class U> inline constexpr bool is_cv_same_v = is_cv_same<T, U>::value; template <class T> struct enable { // constructors template <class... Args> using in_place = std::enable_if_t<std::is_constructible_v<T, Args...>, int>; template <class U> using conv_implicit = std::enable_if_t<std::is_constructible_v<T, U&&> && !std::is_same_v<std::decay_t<U>, in_place_t> && !std::is_same_v<std::decay_t<U>, optional<T>> && std::is_convertible_v<U&&, T>, int>; template <class U> using conv_explicit = std::enable_if_t<std::is_constructible_v<T, U&&> && !std::is_same_v<std::decay_t<U>, in_place_t> && !std::is_same_v<std::decay_t<U>, optional<T>> && !std::is_convertible_v<U&&, T>, int>; template <class U> static constexpr bool conv_common = !std::is_constructible_v<T, optional<U>& > && !std::is_constructible_v<T, optional<U>&&> && !std::is_constructible_v<T, const optional<U>& > && !std::is_constructible_v<T, const optional<U>&&> && !std::is_convertible_v< optional<U>& , T> && !std::is_convertible_v< optional<U>&&, T> && !std::is_convertible_v<const optional<U>& , T> && !std::is_convertible_v<const optional<U>&&, T>; template <class U> using copy_conv_implicit = std::enable_if_t<conv_common<U> && std::is_constructible_v<T, const U&> && std::is_convertible_v<const U&, T>, int>; template <class U> using copy_conv_explicit = std::enable_if_t<conv_common<U> && 
std::is_constructible_v<T, const U&> && !std::is_convertible_v<const U&, T>, int>; template <class U> using move_conv_implicit = std::enable_if_t<conv_common<U> && std::is_constructible_v<T, U&&> && std::is_convertible_v<U&&, T>, int>; template <class U> using move_conv_explicit = std::enable_if_t<conv_common<U> && std::is_constructible_v<T, U&&> && !std::is_convertible_v<U&&, T>, int>; // assignment template <class U> using conv_ass = std::enable_if_t<!std::is_same_v<optional<T>, std::decay_t<U>> && !(std::is_scalar_v<T> && std::is_same_v<T, std::decay_t<U>>) && std::is_constructible_v<T, U> && std::is_assignable_v<T&, U>, int>; template <class U> static constexpr bool conv_ass_common = conv_common<U> && !std::is_assignable_v<T&, optional<U>& > && !std::is_assignable_v<T&, const optional<U>& > && !std::is_assignable_v<T&, optional<U>&&> && !std::is_assignable_v<T&, const optional<U>&&>; template <class U> using copy_conv_ass = std::enable_if_t<conv_ass_common<U> && std::is_constructible_v<T, const U&> && std::is_assignable_v<T&, const U&>, int>; template <class U> using move_conv_ass = std::enable_if_t<conv_ass_common<U> && std::is_constructible_v<T, U> && std::is_assignable_v<T&, U>, int>; // emplace template <class U, class... 
Args> using emplace_ilist = std::enable_if_t< std::is_constructible_v<T, std::initializer_list<U>, Args...> , int>; }; // deal with destructor // trivially destructible version template <class T, bool = std::is_trivially_destructible_v<T>> class destroy_base { static_assert(std::is_object_v<T>, "[optional.optional]/3"); static_assert(std::is_destructible_v<T>, "[optional.optional]/3"); static_assert(!detail::is_cv_same_v<T, in_place_t>, "[optional.syn]/1"); static_assert(!detail::is_cv_same_v<T, nullopt_t>, "[optional.syn]/1"); public: constexpr destroy_base() noexcept {} ~destroy_base() = default; constexpr destroy_base(const destroy_base& rhs) = default; constexpr destroy_base(destroy_base&& rhs) = default; destroy_base& operator=(const destroy_base& rhs) = default; destroy_base& operator=(destroy_base&& rhs) = default; constexpr destroy_base(nullopt_t) noexcept {} template <class... Args, typename enable<T>::template in_place<Args...> = 0> constexpr explicit destroy_base(in_place_t, Args&&... args) :object(std::forward<Args>(args)...), contains{true} { } template <class U, class... Args, typename enable<T>::template in_place<std::initializer_list<U>&, Args...> = 0> constexpr explicit destroy_base(in_place_t, std::initializer_list<U> ilist, Args&&... args) :object(ilist, std::forward<Args>(args)...), contains{true} { } constexpr bool has_value() const noexcept { return contains; } void reset() noexcept { destroy(); } protected: constexpr T* get() noexcept { return &object; } constexpr const T* get() const noexcept { return &object; } template <typename... Args> void construct(Args&&... 
args) { assert(!has_value()); ::new (get()) T(std::forward<Args>(args)...); contains = true; } void destroy() noexcept { assert(has_value()); contains = false; } private: union { char dummy{'\0'}; T object; }; bool contains{false}; }; // non-trivially destructible version template <class T> class destroy_base<T, false> { static_assert(std::is_object_v<T>, "[optional.optional]/3"); static_assert(std::is_destructible_v<T>, "[optional.optional]/3"); static_assert(!detail::is_cv_same_v<T, in_place_t>, "[optional.syn]/1"); static_assert(!detail::is_cv_same_v<T, nullopt_t>, "[optional.syn]/1"); public: constexpr destroy_base() noexcept {} constexpr destroy_base(const destroy_base& rhs) = default; constexpr destroy_base(destroy_base&& rhs) = default; destroy_base& operator=(const destroy_base& rhs) = default; destroy_base& operator=(destroy_base&& rhs) = default; ~destroy_base() { reset(); } constexpr destroy_base(nullopt_t) noexcept {} template <class... Args, typename enable<T>::template in_place<Args...> = 0> constexpr explicit destroy_base(in_place_t, Args&&... args) :object(std::forward<Args>(args)...), contains{true} { } template <class U, class... Args, typename enable<T>::template in_place<std::initializer_list<U>&, Args...> = 0> constexpr explicit destroy_base(in_place_t, std::initializer_list<U> ilist, Args&&... args) :object(ilist, std::forward<Args>(args)...), contains{true} { } constexpr bool has_value() const noexcept { return contains; } void reset() noexcept { if (has_value()) destroy(); } protected: constexpr T* get() noexcept { return &object; } constexpr const T* get() const noexcept { return &object; } template <typename... Args> void construct(Args&&... 
args) { assert(!has_value()); ::new (get()) T(std::forward<Args>(args)...); contains = true; } void destroy() noexcept { assert(has_value()); std::destroy_at(get()); contains = false; } private: union { char dummy{'\0'}; T object; }; bool contains{false}; }; template <class T> class common_base :public destroy_base<T> { public: using destroy_base<T>::destroy_base; constexpr common_base() = default; constexpr common_base(const common_base&) = default; constexpr common_base(common_base&&) = default; common_base& operator=(const common_base&) = default; common_base& operator=(common_base&&) = default; constexpr T* operator->() { assert(*this); return this->get(); } constexpr const T* operator->() const { assert(*this); return this->get(); } constexpr T& operator*() & { assert(*this); return *this->get(); } constexpr const T& operator*() const & { assert(*this); return *this->get(); } constexpr T&& operator*() && { return std::move(*this->get()); } constexpr const T&& operator*() const && { return std::move(*this->get()); } constexpr explicit operator bool() const noexcept { return this->has_value(); } protected: // assign if has value, construct otherwise template <typename U> void assign(U&& arg) { if (this->has_value()) **this = std::forward<U>(arg); else this->construct(std::forward<U>(arg)); } }; // deal with copy constructor // trivially copy constructible version template <class T, bool = std::is_copy_constructible_v<T>, bool = std::is_trivially_copy_constructible_v<T>> class copy_construct_base :public common_base<T> { using Base = common_base<T>; public: using Base::Base; constexpr copy_construct_base() = default; constexpr copy_construct_base(const copy_construct_base& rhs) = default; constexpr copy_construct_base(copy_construct_base&&) = default; copy_construct_base& operator=(const copy_construct_base&) = default; copy_construct_base& operator=(copy_construct_base&&) = default; }; // non-trivially copy constructible version template <class T> class 
copy_construct_base<T, true, false> :public common_base<T> { public: using common_base<T>::common_base; constexpr copy_construct_base() = default; copy_construct_base(const copy_construct_base& rhs) // not constexpr { if (rhs) this->construct(*rhs); } constexpr copy_construct_base(copy_construct_base&&) = default; copy_construct_base& operator=(const copy_construct_base&) = default; copy_construct_base& operator=(copy_construct_base&&) = default; }; // non-copy constructible version template <class T> class copy_construct_base<T, false, false> :public common_base<T> { public: using common_base<T>::common_base; constexpr copy_construct_base() = default; copy_construct_base(const copy_construct_base&) = delete; constexpr copy_construct_base(copy_construct_base&&) = default; copy_construct_base& operator=(const copy_construct_base&) = default; copy_construct_base& operator=(copy_construct_base&&) = default; }; // deal with move constructor // trivially move constructible version template <class T, bool = std::is_move_constructible_v<T>, bool = std::is_trivially_move_constructible_v<T>> class move_construct_base :public copy_construct_base<T> { using Base = copy_construct_base<T>; public: using Base::Base; constexpr move_construct_base() = default; constexpr move_construct_base(const move_construct_base&) = default; constexpr move_construct_base(move_construct_base&& rhs) noexcept(std::is_nothrow_move_constructible_v<T>) = default; move_construct_base& operator=(const move_construct_base&) = default; move_construct_base& operator=(move_construct_base&&) = default; }; // non-trivially move constructible version template <class T> class move_construct_base<T, true, false> :public copy_construct_base<T> { public: using copy_construct_base<T>::copy_construct_base; constexpr move_construct_base() = default; constexpr move_construct_base(const move_construct_base&) = default; move_construct_base(move_construct_base&& rhs) // not constexpr 
noexcept(std::is_nothrow_move_constructible_v<T>) { if (rhs) this->construct(std::move(*rhs)); } move_construct_base& operator=(const move_construct_base&) = default; move_construct_base& operator=(move_construct_base&&) = default; }; // non-move constructible version template <class T> class move_construct_base<T, false, false> :public copy_construct_base<T> { public: using copy_construct_base<T>::copy_construct_base; constexpr move_construct_base() = default; constexpr move_construct_base(const move_construct_base&) = default; move_construct_base(move_construct_base&& rhs) = delete; move_construct_base& operator=(const move_construct_base&) = default; move_construct_base& operator=(move_construct_base&&) = default; }; // deal with copy assignment // copy constructible and assignable version template <class T, bool = (std::is_copy_constructible_v<T> && std::is_copy_assignable_v<T>)> class copy_assign_base :public move_construct_base<T> { using Base = move_construct_base<T>; public: using Base::Base; constexpr copy_assign_base() = default; constexpr copy_assign_base(const copy_assign_base&) = default; constexpr copy_assign_base(copy_assign_base&&) = default; copy_assign_base& operator=(const copy_assign_base& rhs) { if (rhs) this->assign(*rhs); else this->reset(); return *this; } copy_assign_base& operator=(copy_assign_base&&) = default; }; // non-(copy constructible and assignable) version template <class T> class copy_assign_base<T, false> :public move_construct_base<T> { public: using move_construct_base<T>::move_construct_base; constexpr copy_assign_base() = default; constexpr copy_assign_base(const copy_assign_base&) = default; constexpr copy_assign_base(copy_assign_base&&) = default; copy_assign_base& operator=(const copy_assign_base&) = delete; copy_assign_base& operator=(copy_assign_base&&) = default; }; // deal with move assignment // move constructible and assignable version template <class T, bool = (std::is_move_constructible_v<T> && 
std::is_move_assignable_v<T>)> class move_assign_base :public copy_assign_base<T> { using Base = copy_assign_base<T>; public: using Base::Base; constexpr move_assign_base() = default; constexpr move_assign_base(const move_assign_base&) = default; constexpr move_assign_base(move_assign_base&&) = default; move_assign_base& operator=(const move_assign_base&) = default; move_assign_base& operator=(move_assign_base&& rhs) noexcept(std::is_nothrow_move_assignable_v<T> && std::is_nothrow_move_constructible_v<T>) { if (rhs) this->assign(std::move(*rhs)); else this->reset(); return *this; } }; // non-(move constructible and assignable) version template <class T> class move_assign_base<T, false> :public copy_assign_base<T> { public: using copy_assign_base<T>::copy_assign_base; constexpr move_assign_base() = default; constexpr move_assign_base(const move_assign_base&) = default; constexpr move_assign_base(move_assign_base&&) = default; move_assign_base& operator=(const move_assign_base&) = default; move_assign_base& operator=(move_assign_base&&) = delete; }; } namespace my_std { template <class T> class optional :public detail::move_assign_base<T> { using Base = detail::move_assign_base<T>; using Enable = detail::enable<T>; public: using value_type = T; using Base::Base; optional() = default; ~optional() = default; optional(const optional&) = default; optional(optional&&) = default; optional& operator=(const optional&) = default; optional& operator=(optional&&) = default; template <class U = T, typename Enable::template conv_implicit<U> = 0> constexpr optional(U&& v) :Base{in_place, std::forward<U>(v)} { } template <class U = T, typename Enable::template conv_explicit<U> = 0> explicit constexpr optional(U&& v) :Base{in_place, std::forward<U>(v)} { } template <class U, typename Enable::template copy_conv_implicit<U> = 0> optional(const optional<U>& rhs) { if (rhs) this->construct(*rhs); } template <class U, typename Enable::template copy_conv_explicit<U> = 0> explicit 
optional(const optional<U>& rhs) { if (rhs) this->construct(*rhs); } template <class U, typename Enable::template move_conv_implicit<U> = 0> optional(optional<U>&& rhs) { if (rhs) this->construct(std::move(*rhs)); } template <class U, typename Enable::template move_conv_explicit<U> = 0> explicit optional(optional<U>&& rhs) { if (rhs) this->construct(std::move(*rhs)); } optional& operator=(nullopt_t) noexcept { this->reset(); return *this; } template <class U = T, typename Enable::template conv_ass<U> = 0> optional& operator=(U&& v) { this->assign(std::forward<U>(v)); return *this; } template <class U, typename Enable::template copy_conv_ass<U> = 0> optional& operator=(const optional<U>& rhs) { if (rhs) this->assign(*rhs); else this->reset(); return *this; } template <class U, typename Enable::template move_conv_ass<U> = 0> optional& operator=(optional<U>&& rhs) { if (rhs) this->assign(std::move(*rhs)); else this->reset(); return *this; } template <class... Args> T& emplace(Args&&... args) { static_assert(std::is_constructible_v<T, Args...>, "[optional.assign]/25"); this->reset(); this->construct(std::forward<Args>(args)...); return **this; } template <class U, class... Args, typename Enable::template emplace_ilist<U, Args...> = 0> T& emplace(std::initializer_list<U> ilist, Args&&... 
args) { this->reset(); this->construct(ilist, std::forward<Args>(args)...); return **this; } void swap(optional& rhs) noexcept(std::is_nothrow_move_constructible_v<T> && std::is_nothrow_swappable_v<T>) { if (*this && rhs) { using std::swap; swap(**this, *rhs); } else if (*this) { rhs.construct(std::move(**this)); this->destroy(); } else if (rhs) { this->construct(std::move(*rhs)); rhs.destroy(); } } constexpr T& value() & { if (*this) return **this; else throw bad_optional_access{}; } constexpr const T& value() const & { if (*this) return **this; else throw bad_optional_access{}; } constexpr T&& value() && { if (*this) return std::move(**this); else throw bad_optional_access{}; } constexpr const T&& value() const && { if (*this) return std::move(**this); else throw bad_optional_access{}; } template <class U> constexpr T value_or(U&& v) const & { static_assert(std::is_copy_constructible_v<T>, "[optional.observe]/18"); static_assert(std::is_convertible_v<U&&, T>, "[optional.observe]/18"); if (*this) return **this; else return static_cast<T>(std::forward<U>(v)); } template <class U> constexpr T value_or(U&& v) && { static_assert(std::is_move_constructible_v<T>, "[optional.observe]/20"); static_assert(std::is_convertible_v<U&&, T>, "[optional.observe]/20"); if (*this) return std::move(**this); else return static_cast<T>(std::forward<U>(v)); } }; template <class T> optional(T) -> optional<T>; template <class T, class U> constexpr bool operator==(const optional<T>& x, const optional<U>& y) { if (x) return y && static_cast<bool>(*x == *y); else return !y; } template <class T, class U> constexpr bool operator!=(const optional<T>& x, const optional<U>& y) { if (x) return !y || static_cast<bool>(*x != *y); else return static_cast<bool>(y); } template <class T, class U> constexpr bool operator<(const optional<T>& x, const optional<U>& y) { if (x) return y && static_cast<bool>(*x < *y); else return static_cast<bool>(y); } template <class T, class U> constexpr bool 
operator>(const optional<T>& x, const optional<U>& y) { if (x) return !y || static_cast<bool>(*x > *y); else return false; } template <class T, class U> constexpr bool operator<=(const optional<T>& x, const optional<U>& y) { if (x) return y && static_cast<bool>(*x <= *y); else return true; } template <class T, class U> constexpr bool operator>=(const optional<T>& x, const optional<U>& y) { if (x) return !y || static_cast<bool>(*x >= *y); else return !y; } template <class T> constexpr bool operator==(const optional<T>& x, nullopt_t) noexcept { return !x; } template <class T> constexpr bool operator==(nullopt_t, const optional<T>& x) noexcept { return !x; } template <class T> constexpr bool operator!=(const optional<T>& x, nullopt_t) noexcept { return static_cast<bool>(x); } template <class T> constexpr bool operator!=(nullopt_t, const optional<T>& x) noexcept { return static_cast<bool>(x); } template <class T> constexpr bool operator<(const optional<T>&, nullopt_t) noexcept { return false; } template <class T> constexpr bool operator<(nullopt_t, const optional<T>& x) noexcept { return static_cast<bool>(x); } template <class T> constexpr bool operator<=(const optional<T>& x, nullopt_t) noexcept { return !x; } template <class T> constexpr bool operator<=(nullopt_t, const optional<T>&) noexcept { return true; } template <class T> constexpr bool operator>(const optional<T>& x, nullopt_t) noexcept { return static_cast<bool>(x); } template <class T> constexpr bool operator>(nullopt_t, const optional<T>&) noexcept { return false; } template <class T> constexpr bool operator>=(const optional<T>&, nullopt_t) noexcept { return true; } template <class T> constexpr bool operator>=(nullopt_t, const optional<T>& x) noexcept { return !x; } template <class T, class U> constexpr bool operator==(const optional<T>& x, const U& v) { if (x) return *x == v; else return false; } template <class T, class U> constexpr bool operator==(const U& v, const optional<T>& x) { if (x) return v == 
*x; else return false; } template <class T, class U> constexpr bool operator!=(const optional<T>& x, const U& v) { if (x) return *x != v; else return true; } template <class T, class U> constexpr bool operator!=(const U& v, const optional<T>& x) { if (x) return v != *x; else return true; } template <class T, class U> constexpr bool operator<(const optional<T>& x, const U& v) { if (x) return *x < v; else return true; } template <class T, class U> constexpr bool operator<(const U& v, const optional<T>& x) { if (x) return v < *x; else return false; } template <class T, class U> constexpr bool operator<=(const optional<T>& x, const U& v) { if (x) return *x <= v; else return true; } template <class T, class U> constexpr bool operator<=(const U& v, const optional<T>& x) { if (x) return v <= *x; else return false; } template <class T, class U> constexpr bool operator>(const optional<T>& x, const U& v) { if (x) return *x > v; else return false; } template <class T, class U> constexpr bool operator>(const U& v, const optional<T>& x) { if (x) return v > *x; else return true; } template <class T, class U> constexpr bool operator>=(const optional<T>& x, const U& v) { if (x) return *x >= v; else return false; } template <class T, class U> constexpr bool operator>=(const U& v, const optional<T>& x) { if (x) return v >= *x; else return true; } } namespace my_std::detail { template <typename T> struct hash_is_enabled :std::is_default_constructible<std::hash<std::remove_const_t<T>>> {}; template <typename T> inline constexpr bool hash_is_enabled_v = hash_is_enabled<T>::value; template <typename T> struct optional_hash { using result_type [[deprecated]] = std::size_t; using argument_type [[deprecated]] = my_std::optional<T>; constexpr std::size_t operator()(const optional<T>& o) { if (o) return std::hash<std::remove_const_t<T>>{}(*o); else return typeid(T).hash_code(); } }; struct disabled_hash { disabled_hash() = delete; disabled_hash(const disabled_hash&) = delete; disabled_hash& 
operator=(const disabled_hash&) = delete; disabled_hash(disabled_hash&&) = delete; disabled_hash& operator=(disabled_hash&&) = delete; }; } namespace std { template <typename T> struct hash<my_std::optional<T>> :std::conditional_t<my_std::detail::hash_is_enabled_v<T>, my_std::detail::optional_hash<T>, my_std::detail::disabled_hash> {}; } #endif Here's the test if you want to see. It's a bit unorganized, and not the most important part :) #include <cassert> #include <string> #include <vector> #include "optional.hpp" using namespace my_std; struct Disabled { Disabled() = delete; Disabled(const Disabled&) = delete; Disabled& operator=(const Disabled&) = delete; Disabled(Disabled&&) = delete; Disabled& operator=(Disabled&&) = delete; ~Disabled() = default; }; struct Nontrivial_copy { Nontrivial_copy() = default; Nontrivial_copy(const Nontrivial_copy&) {} Nontrivial_copy& operator=(const Nontrivial_copy&) = delete; }; template <bool Noexcept = true> struct Moveonly { Moveonly() = default; Moveonly(const Moveonly&) = delete; Moveonly& operator=(const Moveonly&) = delete; Moveonly(Moveonly&&) noexcept(Noexcept) {} Moveonly& operator=(Moveonly&&) noexcept(Noexcept) {} }; struct Direct_init { // strict pattern constexpr Direct_init(int&, int&&) {} // no braced init template <class U> Direct_init(std::initializer_list<U>) = delete; }; int main() { // ill formed instantiation { // optional<int&> a; // optional<const in_place_t> b; // optional<volatile nullopt_t> c; } // value_type { static_assert(std::is_same_v<optional<int>::value_type, int>); } // deduction guide { static_assert(std::is_same_v<optional<int>, decltype(optional{42})>); static_assert(std::is_same_v<optional<Moveonly<>>, decltype(optional{Moveonly<>{}})>); } // default / nullopt constructor { constexpr optional<int> a{}; constexpr optional<int> b = nullopt; static_assert(!a); static_assert(!b); constexpr optional<Disabled> c{}; constexpr optional<Disabled> d = nullopt; static_assert(!c); static_assert(!d); 
static_assert(std::is_nothrow_constructible_v<optional<Disabled>>); static_assert(std::is_nothrow_constructible_v<optional<int>, nullopt_t>); } // trivial (constexpr) copy constructor { constexpr optional<int> a{}; constexpr auto b = a; static_assert(!a && !b); constexpr optional c{42}; constexpr auto d = c; static_assert(c == 42 && d == 42); } // non-trivial (non-constexpr) copy constructor { constexpr optional<Nontrivial_copy> a{}; constexpr optional<Nontrivial_copy> b{in_place}; /* constexpr */ auto c = a; /* constexpr */ auto d = b; assert(!c); assert(d); } // deleted copy constructor { static_assert(!std::is_copy_constructible_v<optional<Disabled>>); static_assert(!std::is_copy_constructible_v<optional<Moveonly<>>>); } // move constructor { optional<Moveonly<true>> a{}; auto b = std::move(a); assert(!a); assert(!b); optional<Moveonly<false>> c{in_place}; auto d = std::move(c); assert(c); assert(d); } // move constructor noexcept specification { static_assert(std::is_nothrow_move_constructible_v<Moveonly<true>>); static_assert(!std::is_nothrow_move_constructible_v<Moveonly<false>>); } // deleted move constructor { static_assert(!std::is_move_constructible_v<optional<Disabled>>); } // in place constructor { int x = 21; constexpr optional<Direct_init> a{in_place, x, 42}; static_assert(a); } // in place initializer list constructor { optional<std::vector<int>> b{in_place, {30, 36, 39, 42, 45}}; assert((b == std::vector<int>{30, 36, 39, 42, 45})); } // in place constructor explicit { static_assert(!std::is_convertible_v<in_place_t, optional<Direct_init>>); } // single value constructor { optional<std::vector<int>> a{5}; // => std::vector<int>(5) assert(a->size() == 5); // not 1 constexpr optional<double> b = 42; static_assert(b == 42.0); } // explicit { static_assert(std::is_convertible_v<const char*, optional<std::string>>); static_assert(!std::is_convertible_v<std::size_t, optional<std::vector<int>>>); } // copying converting constructor { optional<int> a{5}; 
optional<double> b = a; optional<std::vector<int>> v{a}; // => std::vector<int>(5) assert(b == 5); assert(v->size() == 5); // not 1 static_assert(std::is_convertible_v<const optional<int>&, optional<double>>); static_assert(!std::is_convertible_v<const optional<int>&, optional<std::vector<int>>>); optional<int> c{}; optional<double> d = c; optional<std::vector<int>> w{c}; assert(!d && !w); } // moving converting constructor { optional<int> a{5}; optional<double> b = std::move(a); optional<std::vector<int>> v{std::move(a)}; assert(a == 5 && b == 5 && v->size() == 5); static_assert(!std::is_convertible_v<optional<int>&&, optional<std::vector<int>>>); } // destructor { static_assert(std::is_trivially_destructible_v<optional<Disabled>>); static_assert(!std::is_trivially_destructible_v<optional<std::string>>); } // nullopt assignment { optional<std::vector<std::string>> a{in_place, 5, "foo"}; auto b = a; a = nullopt; assert(!a && b); } // copy assignment { optional<std::string> a; optional<std::string> b{"foo"}; optional<std::string> c{"bar"}; a = b; assert(a == "foo"); a = c; assert(a == "bar"); static_assert(!std::is_copy_assignable_v<optional<Disabled>>); static_assert(!std::is_copy_assignable_v<optional<Moveonly<>>>); } // move assignment { static_assert(std::is_nothrow_move_assignable_v<optional<Moveonly<>>>); static_assert(!std::is_nothrow_move_assignable_v< optional<Moveonly<false>>>); static_assert(!std::is_move_assignable_v<Disabled>); optional<std::string> a{"foo"}; optional<std::string> b{"bar"}; b = std::move(a); assert(a == "" && b == "foo"); } // single value assignment { optional<std::string> a{"foo"}; a = "bar"; static_assert(std::is_assignable_v<optional<std::string>&, const char*>); static_assert(!std::is_assignable_v<optional<std::string>&, int>); } // converting copy assignment { optional<std::string> a{"foo"}; optional<const char*> b{"bar"}; a = b; assert(a == "bar"); static_assert(!std::is_assignable_v<optional<std::string>&, optional<int>&>); } // 
converting move assignment { optional<std::string> a{"foo"}; optional<const char*> b{"bar"}; a = std::move(b); assert(a == "bar" && b); static_assert(!std::is_assignable_v<optional<std::string>&, optional<int>>); } // emplace { optional<std::string> a{"foo"}; optional<std::string> b{"bar"}; a.emplace(5, 'a'); assert(a == "aaaaa"); a.emplace({'a', 'b', 'c'}); assert(a == "abc"); a.emplace(std::move(*b)); assert(a == "bar" && b == ""); } // swap, general { static_assert(std::is_nothrow_swappable_v<optional<Moveonly<>>>); static_assert(!std::is_nothrow_swappable_v<optional<Moveonly<false>>>); static_assert(!std::is_swappable_v<optional<Disabled>>); } // swap, case one { optional<int> a{1}, b{2}; a.swap(b); assert(a == 2 && b == 1); swap(a, b); assert(a == 1 && b == 2); } // swap, case two { optional<int> a{1}, b; a.swap(b); assert(!a && b == 1); swap(a, b); assert(a == 1 && !b); } // swap, case three { optional<int> a, b{2}; a.swap(b); assert(a == 2 && !b); swap(a, b); assert(!a && b == 2); } // swap, case four { optional<int> a, b; a.swap(b); assert(!a && !b); swap(a, b); assert(!a && !b); } // observers { optional<std::string> a{"foo"}; assert(a->size() == 3); assert(*a == "foo"); assert(a); assert(a.has_value()); assert(a.value() == "foo"); assert(a.value_or("bar") == "foo"); optional<std::string> b{*std::move(a)}; assert(a == ""); a = "foo"; b = std::move(a).value(); assert(a == ""); a = "foo"; b = std::move(a).value_or("bar"); assert(a == "" && b == "foo"); constexpr optional<std::pair<int, int>> c; static_assert(!c && !c.has_value()); // static_assert(c.value().first == 5); // throws bad_optional_access static_assert(c.value_or(std::pair(21, 42)) == std::pair(21, 42)); } // reset { optional<std::string> a{"foo"}; a.reset(); assert(!a); a.reset(); assert(!a); } // nullopt features { static_assert(std::is_empty_v<nullopt_t>); static_assert(!std::is_default_constructible_v<nullopt_t>); static_assert(!std::is_aggregate_v<nullopt_t>); } // bad_optional_access { 
static_assert(std::is_default_constructible_v<bad_optional_access>); static_assert(std::is_base_of_v<std::exception, bad_optional_access> && std::is_convertible_v<bad_optional_access*, std::exception*>); } // comparison between optionals { constexpr optional<int> a{42}, b{21}, c; static_assert(a == a && !(a == b) && c == c && !(a == c) && !(c == a)); static_assert(!(a != a) && a != b && !(c != c) && a != c && c != a); static_assert(!(a < a) && !(a < b) && !(c < c) && !(a < c) && c < a); static_assert(a <= a && !(a <= b) && c <= c && !(a <= c) && c <= a); static_assert(!(a > a) && a > b && !(c > c) && a > c && !(c > a)); static_assert(a >= a && a >= b && c >= c && a >= c && !(c >= a)); } // comparison with nullopt { constexpr optional<int> a{42}; static_assert(!(a == nullopt || nullopt == a)); static_assert(a != nullopt && nullopt != a); static_assert(!(a < nullopt) && nullopt < a); static_assert(!(a <= nullopt) && nullopt <= a); static_assert(a > nullopt && !(nullopt > a)); static_assert(a >= nullopt && !(nullopt >= a)); constexpr optional<int> b; static_assert(b == nullopt && nullopt == b); static_assert(!(b != nullopt || nullopt != b)); static_assert(!(b < nullopt) && !(nullopt < b)); static_assert(b <= nullopt && nullopt <= b); static_assert(!(b > nullopt) && !(nullopt > b)); static_assert(b >= nullopt && nullopt >= b); } // comparison with T { constexpr optional<double> a{42.0}; static_assert(a == 42 && 42 == a && !(a == 21) && !(21 == a)); static_assert(!(a != 42) && !(42 != a) && a != 21 && 21 != a); static_assert(!(a < 42) && !(42 < a) && !(a < 21) && 21 < a); static_assert(a <= 42 && 42 <= a && !(a <= 21) && 21 <= a); static_assert(!(a > 42) && !(42 > a) && a > 21 && !(21 > a)); static_assert(a >= 42 && 42 >= a && a >= 21 && !(21 >= a)); constexpr optional<double> b; static_assert(!(b == 42) && !(42 == b)); static_assert(b != 42 && 42 != b); static_assert(b < 42 && !(42 < b)); static_assert(b <= 42 && !(42 <= b)); static_assert(!(b > 42) && 42 > b); 
static_assert(!(b >= 42) && 42 >= b); } // make optional { constexpr int ans = 42; auto a = make_optional(ans); static_assert(std::is_same_v<decltype(a), optional<int>>); assert(a == 42); constexpr auto b = make_optional<std::pair<double, double>>(ans, ans); static_assert(b == std::pair(42.0, 42.0)); auto c = make_optional<std::vector<int>>({39, 42}); assert((c == std::vector<int>{39, 42})); } // hash { assert(std::hash<optional<double>>{}(42) == std::hash<double>{}(42)); using disabled = std::hash<optional<std::vector<double>>>; static_assert(!std::is_default_constructible_v<disabled>); } }

Answer: This looks pretty good. My comments are trivial nitpicking.

The constructor of struct in_place_t gains nothing from explicit (it can't be considered as a conversion if it has no arguments). Whilst explicit prevents users writing in_place_t x = {}, I certainly think that's a reasonable thing to want to do, and it won't cause any surprising conversions.

The comment // [optional.comp.with.t], comparison with T probably should read "comparison with value" or similar, given that the other argument is a const U&.

It shouldn't be necessary to provide my_std::swap(): providing a member swap should be sufficient to allow std::swap() to work.

Instead of writing out the return type again in make_optional, we can simply use a brace-expression: return {std::forward<T>(v)};. Sadly this won't work for the in_place overloads, as that uses an explicit constructor.

I'm not a fan of else return false in this: if (x) return *x == v; else return false; I'd probably rewrite it as return x && *x == v;, and similarly for all these related comparisons.

I don't think there's a need for static_cast<bool> in the optional/optional comparisons, since the arguments of logical operators are contextually converted to bool.
{ "domain": "codereview.stackexchange", "id": 35578, "tags": "c++, reinventing-the-wheel, c++17, stl" }
Checks if all nested objects don't have 'status' attribute as 'deleted'
Question: I currently have this code working, but its performance is very poor — 1.45 seconds seems a bit too much for a simple recursive if statement that only checks attribute values.

def _check_delete_status(self, obj) -> bool:
    obj_name = obj._sa_class_manager.class_.__name__
    self.visited.append(obj_name)
    if getattr(obj, 'status', 'deleted').lower() != 'deleted':
        children = [parent for parent
                    in self._get_parents(self._get_table_by_table_name(obj_name))
                    if parent not in self.visited]
        for child in children:
            if (child_obj := getattr(obj, child, None)) and child not in self.visited:
                if self._check_delete_status(child_obj):
                    return True
    else:
        return True
    return False

Although self._get_parents seems like a counterintuitive name (it's used elsewhere in the code), in this case it is still very useful to this solution: it returns a list with all possible attribute names that an object might have as children. For example, an object named appointment will have ['patient', 'schedule'] as the response; of those, patient will have [] returned, since it doesn't have any children, and schedule will have ['physiotherapist', 'address', 'patient', 'service'] returned. When those values are then used in getattr(object, child_name), it returns the object corresponding to the child.

I tried to think about how to do this iteratively, but couldn't come up with any solutions.

PS: The reason for the self.visited list is that sometimes an object might have the exact same object nested inside, and since they have the same values they can be skipped.

EDIT: The "helper" methods:

def _get_table_by_table_name(self, table_name: str) -> Table:
    return self._all_models[table_name]['table']

@staticmethod
def _get_parents(table: Table) -> set:
    parents = set()
    if table.foreign_keys:
        for fk in table.foreign_keys:
            parent_name = fk.column.table.name
            parents.add(parent_name) if parent_name != table.name else None
    return parents

Answer: Your code is improperly using lists.
children = [parent for parent
            in self._get_parents(self._get_table_by_table_name(obj_name))
            if parent not in self.visited]

By using a list comprehension you have to generate every parent before you can validate a parent. Let's instead say you want to know if 0 is in range(1_000_000). What your code would be doing is building a list of 1 million numbers before you check whether the first value is 0. You should use a generator expression, or just standard for loops, to build children so we can exit early. (Of course, doing so would rely on self._get_parents and self._get_table_by_table_name not returning lists, which I don't have access to, so I cannot comment on that.)

self.visited.append(obj_name)
parent not in self.visited
child not in self.visited

We know self.visited is a list, so we know in runs in \$O(n)\$ time. You want to instead use a set, which runs in \$O(1)\$ time.

Ignoring changing self.visited to a set, you can simplify the code using any and a generator expression.

def _check_delete_status(self, obj) -> bool:
    obj_name = obj._sa_class_manager.class_.__name__
    self.visited.append(obj_name)
    if getattr(obj, 'status', 'deleted').lower() == 'deleted':
        return True
    else:
        return any(
            child not in self.visited
            and (child_obj := getattr(obj, child, None))
            and child not in self.visited
            and self._check_delete_status(child_obj)
            for child in self._get_parents(self._get_table_by_table_name(obj_name))
        )

You should also notice you don't need child not in self.visited twice.
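Both points can be illustrated with a minimal, self-contained sketch (plain Python with made-up data, no SQLAlchemy involved):

```python
import itertools

# Point 1: any() over a generator expression short-circuits.
# This producer is infinite; a list comprehension over it would never
# terminate, but the generator version stops at the first match.
def parents():
    yield from itertools.count()

assert any(p == 3 for p in parents())  # stops after inspecting 0, 1, 2, 3

# Point 2: membership tests on a set are O(1) on average, versus O(n) on a list.
visited = set()            # instead of self.visited = []
visited.add("appointment")
visited.add("patient")
assert "patient" in visited
assert "schedule" not in visited
```

The same `in` syntax works for both containers, so switching `self.visited` from a list to a set requires no change at the call sites other than `append` becoming `add`.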
{ "domain": "codereview.stackexchange", "id": 41727, "tags": "python, python-3.x" }
LaserScans not in the Map
Question: Is there a node that returns all of the LaserScan points that are (probably) not in the map?

Edit: Clarification: I was assuming a static map relative to a robot that has already been localized. What I'd like to be able to say is "these laser scans are of the wall (which has always been there) and this person-shaped blob of laser readings is new."

Originally posted by David Lu on ROS Answers with karma: 10932 on 2011-12-20
Post score: 2

Original comments
Comment by David Lu on 2011-12-21: Clarification posted above.
Comment by Eric Perko on 2011-12-20: Could you add some detail to this question? What map (local or global from costmap_2d with a standard nav_stack config? or something entirely different?)? And do you mean points that are to be inserted into a map as fresh obstacles, or a "scan" representing things your scan missed in the map?

Answer: For future users who stumble upon this question, I've written a node (not yet pushed to our public repository) that does this task.

Originally posted by David Lu with karma: 10932 on 2012-02-13
This answer was ACCEPTED on the original site
Post score: 0
{ "domain": "robotics.stackexchange", "id": 7696, "tags": "ros, navigation, mapping, laser-filters, laserscan" }
Message Queue for inter-thread communication
Question: I wrote this little piece of code, a while back. The intention behind it was to create a system to send messages between consumer and producer threads. I have no idea for what i wanted to use it. I just found it in an old snippet folder of mine. The type of the queue is FIFO. The usage Create a queue with new MessageQueue<MessageType>(). let's assume we will store an instance of the message queue in an private field inside the classes that will be using it. This field is called messageQueue Use messageQueue.push(new MessageType([...])) in a producer thread to add a Message. Take it out of the queue from a consumer thread with messageQueue.pull(). Check if there is a message in the queue with messageQueue.hasMessage(). I know that I thought about implementing a messageQueue.peek() but I never implementet it. The code /** * Holds a queue of messages for inter-thread communication * Producer and Consumer should <b>not</b> be the same thread! * * @param <T> type of the messages */ public class MessageQueue<T> { private static final int MAX_SIZE = 255; private T[] messages; private int lastReadIndex; private int lastInsertIndex; private int adjustIndex(int index) { if (index >= MAX_SIZE) { return 0; } return index; } /** * Constructor of the {@link MessageQueue} */ @SuppressWarnings("unchecked") // Java can't create an Array of a generic type... public MessageQueue() { messages = (T[]) new Object[MAX_SIZE]; } /** * Pushes a new entry into the queue. * This function will block when the queue is full. * * @param msg The entry to push on the stack * @throws InterruptedException */ public synchronized void push(T msg) throws InterruptedException { int newIndex = adjustIndex(lastInsertIndex + 1); // Check if we can push the message while (messages[newIndex] != null) { wait(); } // Push the message messages[newIndex] = msg; lastInsertIndex = newIndex; notifyAll(); } /** * Pulls an entry from the queue. * The entry will be returned and removed from the queue. 
* * @return The entry from the queue */ public synchronized T pull() { int newIndex = adjustIndex(lastReadIndex + 1); T shouldReturn = messages[newIndex]; messages[newIndex] = null; lastReadIndex = newIndex; notifyAll(); return shouldReturn; } /** * Checks if there is an entry in the queue. * * @return <code>true</code>, when there is an entry in the queue */ public synchronized Boolean hasMessage() { if (lastInsertIndex > -1) { int newIndex = adjustIndex(lastReadIndex + 1); if (messages[newIndex] != null) { return true; } } return false; } } Answer: Nothing in-depth, just a few things that threw me off as a developer that would be using your class: If you design a queue, use proper interfaces. There are a few interfaces in Java, designed to offer standardized access to a Queue public class MessageQueue<T> { // A queue should implement the Queue<E> interface, preferably BlockingQueue<E> Mention constants in your documentation, together with their reason. I have a hard time understanding your limit of 255 messages. As a developer that cannot look at private variables, I would be thrown off with errors once I go over that limit. Document it. Tell others about this limitation! private static final int MAX_SIZE = 255; // Mention this constant in your documentation Is there a reason to use the Boolean object type in favor of the boolean primitive? Using Boolean involves unnecessary boxing and might, if used excessively and in time-sensitive applications, introduce lower performance. public synchronized boolean hasMessage() { // Use primitives when objects are not necessary This isn't meant as an exhaustive list, but only my subjective opinion.
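As a side-by-side illustration of the interface points above (blocking push/pull, a capacity bound, standard FIFO semantics), here is how the same producer/consumer pattern looks with Python's standard-library `queue.Queue`; this is a usage sketch, not a translation of the reviewed class:

```python
import queue
import threading

mq = queue.Queue(maxsize=255)  # bounded, blocking FIFO

results = []

def producer():
    for i in range(3):
        mq.put(f"msg-{i}")          # blocks while the queue is full

def consumer():
    for _ in range(3):
        results.append(mq.get())    # blocks while the queue is empty

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
```

Because the queue is FIFO and a single consumer drains it, `results` ends up as `["msg-0", "msg-1", "msg-2"]`.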
{ "domain": "codereview.stackexchange", "id": 26413, "tags": "java, multithreading, queue" }
No laser scan received (and thus no pose updates have been published) for 18.974000 seconds. Verify that data is being published on the /scan topic
Question: Hi all, I already searched for this question all over answers.ros.org, but I didn't find an answer to the issue I have. I built a custom 4-wheel bot and added a navigation stack (map, move_base, amcl and other param yaml files) to it. If I run the launch file, I get this error: [ WARN] [1656224367.752167333, 18.974000000]: No laser scan received (and thus no pose updates have been published) for 18.974000 seconds. Verify that data is being published on the /scan topic. So the issue I have is that even though I see the map in rviz, there are no particle filters I can see detected by the laser, which matches the error mentioned above. I clearly understand that my bot doesn't localize at first, which means I can't navigate or set a goal for my bot on the map. I have been trying to solve this issue for the last 3 days. Would be glad to have some guidance. Thanks in advance. Regards. Originally posted by the_one on ROS Answers with karma: 41 on 2022-06-26 Post score: 0 Original comments Comment by tianb03 on 2022-06-28: You'd better provide the result when executing roswtf, and the tf tree in rqt. It is easier for others to debug Comment by the_one on 2022-06-29: Thank you for your reply. Sure, I attached a drive link here, which has all the error image files in it for your reference. link text I named those files with respect to the errors... thanks in advance. Comment by tianb03 on 2022-07-02: any output from the /scan topic using rostopic echo? Answer: Hi, after much consideration and many references, I finally cleared that issue. The fault was mine: I made a mistake in the topic name. It's working now. After that I wanted to scan the map (turtlebot3 house) with my bot, but it ended up like this. Also, if I run the launch file, the bot behaves like this. I don't know why the bot behaves this way. Thanks in advance. Originally posted by the_one with karma: 41 on 2022-07-02 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 37807, "tags": "ros, navigation, ros-melodic, amcl, move-base" }
Circular array rotation Java
Question: I have a solution to the HackerRank Circular Array Rotation challenge. It passes 7 test cases out of 15. The rest of them are getting timed out. I think it's because of the huge data set that has been given as the input. Input Format The first line contains 3 space-separated integers, n (the length of the array), k (the number of right circular rotations), and q (the number of queries). The second line contains n space-separated integers a_0, a_1, a_2, …, a_{n-1}. Each of the q subsequent lines contains a single integer denoting m. For each of those queries, output a_m of the rotated array. Constraints 1 ≤ n ≤ 10^5, 1 ≤ a_i ≤ 10^5, 1 ≤ k ≤ 10^5, 1 ≤ q ≤ 500 Can you point out how I can improve this code in order to avoid those timed-out test cases? public class CircularArrayRotation { public static int[] circularArray(int[] beforeArray){ int[] afterArray = new int[beforeArray.length]; afterArray[0] = beforeArray[beforeArray.length-1]; System.arraycopy(beforeArray,0,afterArray,1,beforeArray.length-1); return afterArray; } public static void main(String[] args){ Scanner sc = new Scanner(System.in); int n = sc.nextInt(); int k = sc.nextInt(); int q = sc.nextInt(); sc.nextLine(); int[] source = new int[n]; String[] elements = sc.nextLine().split(" "); for (int i=0;i<elements.length;i++){ source[i] = Integer.parseInt(elements[i]); } source = repeatCirculating(source,k); int[] ques = new int[q]; for (int i=0;i<q;i++){ int position = Integer.parseInt(sc.nextLine().trim()); ques[i] = position; } for (int ask:ques) { System.out.println(source[ask]); } } public static int[] repeatCirculating(int[] source, int times){ for (int i =0; i<times; i++){ source = circularArray(source); } return source; } } Answer: The array may be up to 10^5 elements long. If you actually perform the rotation, then you will be copying at least 10^5 elements. (You actually do ridiculously more work, as @JoeC and @OhadR have both pointed out.) However, there will be at most 500 queries. 
It would be nice if you didn't have to modify 10^5 entries just to satisfy 500 queries. You don't actually need to perform the rotation — you only need to pretend to have performed the rotation. import java.util.Scanner; public class CircularArrayRotation { /** * Performs one query. * * @param a The original array * @param k The number of right circular rotations * @param m The index of the rotated array to retrieve */ public static <T> T query(T[] a, int k, int m) { int n = a.length; return a[(((m - k) % n) + n) % n]; } public static void main(String[] args) { Scanner sc = new Scanner(System.in); int n = sc.nextInt(), k = sc.nextInt(), q = sc.nextInt(); sc.nextLine(); // End of first line String[] a = sc.nextLine().split(" "); // Second line while (q-- > 0) { // Perform q queries System.out.println(query(a, k, sc.nextInt())); } } } Strictly speaking, you don't even need to parse the a_i as integers — you just need to read and regurgitate them. You also don't need to store all q queries — you can just reply to each one as soon as you read m.
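The same constant-time indexing idea, sketched in Python for quick experimentation (note that Python's `%` already yields a non-negative result for a positive modulus, so the double-mod dance from the Java version isn't needed):

```python
def rotated_query(a, k, m):
    """Element m of `a` after k right circular rotations,
    computed without materializing the rotated array."""
    n = len(a)
    return a[(m - k) % n]  # Python's % is non-negative for positive n

# example: [1, 2, 3, 4, 5] rotated right twice is [4, 5, 1, 2, 3]
```

Each query is O(1), so q queries cost O(q) regardless of n and k.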
{ "domain": "codereview.stackexchange", "id": 22702, "tags": "java, programming-challenge, array, time-limit-exceeded, circular-list" }
ekf_localization node not responding
Question: I am trying to use the robot_localization package to fuse absolute attitude from an IMU, velocity in the world_frame from the IMU with a SONAR and pressure sensor (all aboard an AUV). Following your tutorials I set up the launch file, but when I publish sensor data nothing gets fused at all. In fact I haven't seen a single message on topic /odometry/filtered. I read all tutorials and everything on answers.ros.org but can't get my head around it. Any ideas? Thanks a lot, Rapha The sensors frame_id is set to "odom". My launchfile looks like this: [false, false, false, false, false, false, true, true, true, false, false, false, false, false, false] [false, false, true, false, false, false, false, false, false, false, false, false, false, false, false] [true, true, false, false, false, false, false, false, false, false, false, false, false, false, false] [false, false, false, true, true, true, false, false, false, false, false, false, true, true, true] [0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.002, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.002, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.004] The data for the IMU velocity (in ENU frame-->odom) coming in looks like this: header: seq: 778403 stamp: secs: 1415909887 nsecs: 951959536 frame_id: odom twist: twist: linear: x: -4.52138569919 y: 3.20782521003 z: 0.199781238464 angular: x: 0.0 y: 0.0 z: 0.0 covariance: [0.0, 
0.0007748282199090048, 0.0, 0.0, 0.0, 0.0, 0.0, 0.000604634932647141, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0015846222091318446, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] EDIT: Hey Tom, thanks for the quick answer. I have checked the rostopics and both sensor topics are being connected correctly to ekf_localization. It turned out to be the missing transform: Changing the frame_id of the IMU to base_link did start output, but I am not sure everything is in the correct frame now. The attitude solution from the IMU (which runs its own eKF) is in reference to ENU. More specifically it states that it: "...is the orientation of the sensor fixed frame with respect to cartesian earth-fixed system" Q(1): Which frame for the IMUs ENU attitude solution? My understanding is that these values are therefore in odom. Following what you said in Issue 22, I added a static transform (imu-->base_link). And now changed the IMUs frame_id to imu. The odom-->base_link is now being published and the eKF runs. I turned on the differential integration of the IMU. However, then the IMUs orientation solution doesn't match the node's solution. When I turn the differential integration off, they match (apart from having the opposite sign at some orientations. However, I believe Q=-Q, so that should be fine.) The second data source (ENU referenced linear velocities from the same IMU) publishes its data in the imu frame. I know that I could fuse in the accelerometer data directly. However, my integration algorithm also handles calibration, deadbands, static offset and a very basic zero velocity detection. Gravity is already removed by the IMU. I will have to compare both approaches. Q(2): which publish_rate for imu->base_link? Am I right to assume that I will need the transforms to be published at at least the same frequency of the sensor data? EDIT regarding Tom's clarification and suggestion: Thanks for the elaborate answer. 
I might not have been quite clear on the IMU frame. The frame is just a static transform from base_link to account for different mounting positions and headings of the sensor. So this is not ENU. I have double checked the frame that the velocity is expressed in. (I checked this on the accelerometer output as it was easier, and I do not transform anything within the velocity publishing node.) It is actually the body frame: if heading east and moving east, too, there is only a change in x; if heading north and moving east, there is a change in y only; if heading is in between, i.e. NE, there is a change in x and y; rolling the sensor +90 deg and moving up will show a change in y. Thank you very much for the clarifying questions. I will leave the velocity messages in my imu frame and for clarification change the transform to 0 offset and 0 rotation. For the imu orientation data (imu/data) I am not sure I quite understand. So I'll give you an overview of my understanding. The orientation data will always show the offset from ENU. So regardless of the starting position it will always show yaw==0deg if orienting the x axis to point east (or +90deg if the x axis is pointing north). I think I now understand the effect of the differential integration. Since it differentiates the position into velocity, it does not matter what heading the initial message states. When starting the robot_localization node, it will start with a heading and position of 0. The point I struggle with is the correct frame, and that ekf_localization_node always tries to transform IMU data - both angular velocity and orientation - into base_link. Q(3) So even if I were to change the frame_id of my imu/data to odom and only use the orientation data in that message, robot_localization_node still wouldn't interpret it in odom? Or would it try to transform the orientation to base_link and we'd be at the chicken and egg problem again? Later today I'll try these settings and will update here again. 
Again thank you so much for your time and effort. Originally posted by Raphael Nagel on ROS Answers with karma: 15 on 2014-11-13 Post score: 0 Answer: A few things: No output at all only occurs when it's not receiving sensor data or it can't transform the data into the target frame. Do rostopic info on all your topics to make sure that they are being published and subscribed to. Velocities should ideally be reported in the frame of the vehicle (base_link), but if they're not, the package will transform them into base_link anyway before fusing them with the state estimate. In this case, I'm guessing that it's attempting to transform your velocity message from odom->base_link, but can't, as the node that produces that transform is ekf_localization_node itself. It's a chicken-or-egg problem. However, the odom->base_link transform should be getting generated if any one of the sensors is being successfully fused, so check (1) again, and perhaps post examples of your other sensor messages? EDIT: Also, turn debug mode off unless you want to eat up disk space in a hurry. If you want to run your node for a few seconds with all of the sensor data coming in and then send me that log, that will help, but in general, don't run with that on. EDIT in response to update. Forgive me if I go over anything you already know. I just want to make sure we're on the same page. Q1: Let me back up a bit. First, in your system, we actually have three separate coordinate frames (at least with regard to this question, you probably have more): odom base_link imu (which is equivalent to the ENU frame) Just so we're clear, both your IMU's velocity message frame_id (as reported in xsens/velocity) and its orientation frame_id (as reported in imu/data) are now reported in the imu (i.e., ENU) frame, correct? Assuming this is true, let's consider the velocity first. 
If your robot heads directly east, your IMU velocity sensor reads only +X velocity, and if it drives straight north, you get only +Y, correct? (Incidentally, this would surprise me. If you're using accelerations, the velocities you generate, unless you transform them, are going to be in the body frame of the IMU, but not in the ENU frame. However, the rest of my answer assumes you did mean that the velocities are reported in the ENU frame. Apologies if I misunderstood.) Assuming I'm still correct, then what you need is a transform from base_link->imu, which you said you have. Question: how did you generate this transform? It can't be static, as it will constantly change depending on the heading of the robot. For example, if you drive northeast, in the imu (ENU) frame, you'd get equal values for X and Y velocity. However, in base_link, you'd only have +X velocity. Now if you turned around and drove southwest, you'd have -X and -Y velocity in the imu frame, but would still have +X velocity in the base_link frame. Essentially, your base_link->imu transform would always have to contain the vehicle's current heading. It may be easier to edit your velocity node so that it automatically applies this transform internally before publishing, and then you can just publish the velocity with the base_link frame_id. Before I respond to your issue regarding the differential setting, let me make sure we're on the same page with the odom frame. When you first start your robot, the odom frame is automatically aligned with your starting position and orientation. If you're facing northeast, then your robot still believes it's at (0, 0) with a heading of 0 (simplifying to 2D for illustration). This obviously is not the same as the ENU frame, which always has the same orientation. So at the starting point, your odom frame heading is 0, but your imu (ENU) frame heading is, say, pi/4. Again, you have a couple options here. 
First, if you didn't create a base_link->imu transform, you could now create an odom->imu transform (SEE NOTE AT THE END OF THIS UPDATE). In our example, you would transform a new IMU heading by -pi/4 to get the odom frame heading. That transform would be static, but you'd have to set it programmatically when you first start running. A second (and much easier) option is to simply turn the differential setting on. This will effectively treat the initial orientation as a "zero point" for all future measurements, so your first measurement has a heading of pi/4, but that becomes 0, and future measurements would be relative to it. The differential behavior is a bit more complicated than that, but for this question, that's all that's relevant. To summarize: I would edit the IMU velocity generation node to output velocities in the base_link frame. I would then turn on the differential setting for your IMU so that the orientation starts out at zero. Feel free to upload a bag and I'll check it out at some point. Q2: That seems reasonable, sure. That's really a tf question. I'm using message filters to ensure that transforms are available for any message I receive. END OF UPDATE NOTE The statements I made above regarding transforming the IMU data are only partially true. I still need to fix this. Right now, ekf_localization_node always tries to transform IMU data - both angular velocity and orientation - into base_link. I either need to let users specify a target frame for IMU orientation and velocity data, or simply make it so all orientation data gets transformed into odom, and all angular velocity and acceleration data get transformed into base_link. EDIT: Responding to update that contains Q3: OK, so your velocities are being reported in the body frame. In this case, what I would do is: Generate a base_link->imu transform that contains the IMU's orientation and position w.r.t the body frame. Change your IMU data's frame to be imu. 
Enable the differential setting for your IMU. What you want to avoid (at least in this case; this is not generally true) is to require ekf_localization_node to use the very transform it's meant to be generating (e.g., odom->base_link). Re: the coordinate frame of the IMU orientation data, part of the issue is the IMU message itself. It has only one frame_id specified in the message, but the orientation data is usually reported in a world-fixed frame, and the angular velocity and linear acceleration is usually reported in another (e.g., the base_link frame). ekf_localization_node will try to transform the data into the frames it requires using the frame_id of the IMU message. While it treats orientation and velocity/acceleration data separately, it's still going to use the frame_id in that message as the source frame when it attempts to carry out the transform. So what is the target frame? For orientation data, it ought to be your world_frame (e.g., map or odom), but it can't be if your IMU transform is relative to your base_link frame, as you're back to square one with your issue (the chicken-or-egg problem). For angular velocity and acceleration data, it's the body frame (e.g., base_link). The problem is that we cannot define an odom->imu transform and a base_link->imu transform, as that would be dual-parenting in the tf tree, which would break the tree. In any case, for you, the differential setting for your IMU ought to fix the orientation issue anyway. Originally posted by Tom Moore with karma: 13689 on 2014-11-13 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Raphael Nagel on 2014-11-17: Hi Tom, I have implemented your last suggestions and it does seem to work now. However, without integrating the next sensor (i.e. Position from Sonar) I cannot really investigate further - the position drifts a lot. I'll update here once that the Sonar is in place. Thank you for your help. 
Comment by Tom Moore on 2014-11-17: Yeah, double-integration of acceleration data will usually result in a lot of drift. Glad I could help, and good luck!
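The frame bookkeeping discussed above is easier to see with a tiny numeric example. A minimal sketch (my own illustration, not robot_localization code) of rotating a planar ENU-frame velocity into the body (base_link) frame given the vehicle's yaw:

```python
import math

def enu_velocity_to_base_link(vx_enu, vy_enu, yaw):
    """Rotate a planar ENU-frame velocity into the robot body frame.
    yaw is the vehicle heading measured counterclockwise from East (ENU)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * vx_enu + s * vy_enu,    # body x (forward)
            -s * vx_enu + c * vy_enu)   # body y (left)
```

This reproduces the example from the answer: driving northeast while heading northeast (yaw = pi/4) gives pure forward (+X) velocity in base_link, and driving southwest while heading southwest also gives pure +X body velocity.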
{ "domain": "robotics.stackexchange", "id": 20048, "tags": "navigation, robot-localization" }
How damping constant of pendulum varies with length of wire? (experiment result confusion)
Question: I'm a high school student doing an experiment for my physics course. The experiment aims to find out the relationship between the damping constant and the length of the wire. The constant is calculated with this formula: $$\ln \left(\frac{θ}{θ_0}\right)=-\frac{k\,t}{2}$$ From this paper http://dx.doi.org/10.4236/jamp.2017.51013 Since I don't have any advanced equipment, I put the pendulum in water to make the damping more observable. My hypothesis was that the longer the length, the bigger the damping constant will be, since the bob moves faster and damping is proportional to velocity. However, my experiment shows the completely opposite result: the shorter the wire, the bigger the damping constant. I can't figure out how to explain this result; is there any possible logical explanation? Or is my experiment just wrong? Answer: The increase in velocity for the longer pendulum increases the total damping force, but not the damping constant. This can easily be shown by analyzing the mechanical problem. Damping that leads to $e^{-kt}$-like decay of the oscillations is due to friction forces that are proportional to the velocity (such as the drag in a fluid at sufficiently low speeds; for a sphere this is given by the Stokes formula $F = 6\pi \eta r v$, with the viscosity $\eta$, the radius of the sphere $r$ and the velocity $v$). As is explained in the answer by @mmesser314 this formula is only valid if the velocities are low enough and the cutoff depends roughly on the Reynolds number – but we will assume in this answer that the formula is valid for the case of the water (which it may well be for slow oscillations). 
The differential equation describing the motion of a pendulum with this kind of friction then is: $$ m l \frac{d^2\phi(t)}{dt^2} = -m g \sin(\phi(t)) - 6\pi\eta r\, l \frac{d\phi(t)}{dt} $$ (This is just Newton's $F = ma$ written down along the arc of the pendulum, with the velocity of the bob expressed as $v = l\,d\phi/dt$.) Dividing by $m l$ gives $$ \frac{d^2\phi(t)}{dt^2} = -\frac{g}{l} \sin(\phi(t)) - \frac{6\pi\eta r}{m} \frac{d\phi(t)}{dt} $$ As you can see, the coefficient of the damping term, $6\pi\eta r/m$, does not depend on the length $l$ of the pendulum, so under the given assumptions this parameter cannot affect the damping constant! Possibilities for this to go wrong: My best guess is that the behavior you observe is due to a friction component that's not proportional to the velocity, but constant (such as the friction of the bearing where your wire is suspended – dry friction of solids on solids is typically independent of the velocity). This would mean an extra term in the equation above that does not change with the length of the wire, so as the wire gets longer and longer, this term's influence will get smaller and smaller. Experimental limitations. For example, the period of a pendulum is no longer constant when the elongation gets too large (we typically say larger than $5°$) – this in turn leads to a higher velocity of the pendulum and therefore to larger friction losses. (The same linear elongation leads to a larger angular elongation when the wire gets shorter.) Yet another possible explanation is that it is purely a measurement error. It is quite challenging to measure maximal elongation precisely (especially when the wire is short). A good way to identify, or at least restrict, the possible causes is to do multiple measurements of the maximal elongation for one run of the experiment (e.g. every ten periods), and then graph $\ln \theta_{\text{max}}$ over $t$. This way you can see the following: Does the oscillation actually follow the expected $e^{-kt}$-behaviour – then the points should lie on a straight line (and its slope gives you $k$)? 
Inaccuracies of the individual time and elongation measurements can potentially be compensated by combining several measurements. By also recording the initial elongation and then graphing the number of oscillations over time, you can check whether you really are in the regime of small angles that the analysis assumes (Are the periods the same for the different string lengths? Does the period time remain constant?)
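The length-independence of the damping constant can also be checked numerically. Below is a small simulation sketch (small-angle limit; the damping coefficient, lengths, and time step are arbitrary illustration choices, not values from the question): for the same coefficient k, the oscillation envelope after a fixed time comes out very nearly the same for a 1 m and a 4 m pendulum, decaying roughly as θ0·e^(−kt/2) per the question's formula.

```python
import math

G = 9.81  # m/s^2

def damped_amplitude(length, k=0.5, phi0=0.1, t_end=10.0, dt=1e-4):
    """Integrate phi'' = -(G/length) phi - k phi' (small-angle limit)
    and return the oscillation envelope amplitude at t_end."""
    gamma = k / 2.0
    w = math.sqrt(G / length - gamma**2)  # damped angular frequency
    phi, omega = phi0, 0.0
    for _ in range(round(t_end / dt)):
        omega += (-(G / length) * phi - k * omega) * dt  # semi-implicit Euler
        phi += omega * dt
    # for phi = A e^{-gamma t} cos(w t + d) this recovers A e^{-gamma t}:
    return math.sqrt(phi**2 + ((omega + gamma * phi) / w) ** 2)
```

For k = 0.5 the envelope after 10 s is close to 0.1·e^(−2.5) for both lengths (up to small corrections of order (k/ω0)² in the starting amplitude), while the oscillation frequency itself of course differs.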
{ "domain": "physics.stackexchange", "id": 93450, "tags": "newtonian-mechanics, experimental-physics, drag, oscillators" }
How does SMAP detect soil statistics?
Question: So there's a NASA satellite called SMAP currently orbiting Earth right now, and it can measure soil moisture, and detect whether or not soil is frozen or thawed. It knows this since it uses radar to detect natural microwave emissions from the ground. But what are natural microwave emissions, and what tools are necessary to detect and process the information (doesn't have to be related to NASA)? Answer: The SMAP includes both an active L-band synthetic aperture radar and a passive microwave radiometer. Any object above a temperature of absolute 0 will emit thermal radiation over a wide range of frequencies. The spectrum of this radiation can vary from that of a theoretical "black body" depending on its physical properties, especially the dielectric constant. Since the bulk dielectric constant of the soil depends on the soil moisture, the spectrum of the thermal radiation from the soil is related to the soil moisture. The dielectric constant of frozen water is different than for liquid water, so there's a different signal depending on whether the soil moisture is liquid water or ice. Needless to say, it takes a very sensitive and accurate instrument to measure this thermal radiation in order to estimate the soil moisture. Similarly, for the active radar, the signal reflected from the soil depends on the dielectric constant of the soil, which is directly related to the soil water content. The data from these instruments are combined to produce a surface soil moisture product- this is an estimate of the moisture in the top 5 cm (2 inches) of the soil. It's important to understand that this is not an estimate of (and is a very poor substitute for) the root zone soil moisture available to crops or other vegetation. This distinction is important in identifying drought conditions and in predicting transpiration from plants. There's a large research literature on using passive radiometers and active radar to estimate surface soil moisture. 
Is there something more specific about this you'd like to understand?
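To make the radiometry concrete, here is a toy sketch using the Fresnel power reflectivity of a smooth surface at normal incidence; this is an illustration of the dielectric-constant/emission relationship, not the SMAP retrieval algorithm, and the dielectric constants used (around 4 for dry soil, around 20 for wet soil) are only ballpark values:

```python
import math

def normal_incidence_reflectivity(eps):
    """Fresnel power reflectivity at nadir for a smooth surface with
    (real) relative dielectric constant eps."""
    n = math.sqrt(eps)
    return ((n - 1) / (n + 1)) ** 2

def brightness_temperature(eps, t_soil_kelvin):
    """Emissivity times physical temperature: the quantity a passive
    microwave radiometer senses (smooth-surface toy model)."""
    return (1.0 - normal_incidence_reflectivity(eps)) * t_soil_kelvin
```

In this toy model, wetter soil (higher dielectric constant) reflects more and emits less, so its microwave brightness temperature is lower at the same physical temperature: that contrast is what the radiometer exploits.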
{ "domain": "earthscience.stackexchange", "id": 1866, "tags": "soil-moisture, radar" }
Python: OOP design for event pricing updates
Question: This was an interview question; I was supposed to refactor the EventPricingUpdate class to reduce the amount of nested ifs. Here are the basic rules: EventPricingUpdate() is responsible for updating the price of a list of events after each day. Each event's price will be reduced by 10 for each day. The price of each event must be in the range of 50 <= price <= 500. The field days_from_start represents how many days before the event begins. If days_from_start < 7, then the effect of the pricing change will be doubled. For example, a regular event's price will be reduced by 10 * 2 each day in the last 6 days. There are special event_types that behave differently. "music" events' price will always go up by 10 each day, and up by 10 * 2 each day in the last 6 days. "construction" events' price will never change. "sports" events' price will drop by double the regular amount (20 instead of 10). The class should support easily adding new conditions for a new event_type. As you can see below, I had trouble refactoring the nested ifs in the update() method; what are some good approaches to refactoring it? 
class Event(object): def __init__(self, price, days_from_start, event_type): self.price = price self.days_from_start = days_from_start self.event_type = event_type class EventPricingUpdate(object): def __init__(self, events): self.events = events def update(self): for event in self.events: if event.event_type == 'construction': continue elif event.event_type == 'music': if event.days_from_start < 7 and event.price <= 480: event.price += 20 elif event.price <= 490: event.price += 10 elif event.event_type == 'sports': if event.days_from_start < 7 and event.price >= 90: event.price -= 40 elif event.price >= 70: event.price -= 20 elif event.days_from_start < 7 and event.price >= 70: event.price -= 20 elif event.price >= 60: event.price -= 10 event.days_from_start -= 1 return self.events Answer: After a small check, it seems the best solution is to add a condition per type of event, where you adjust accordingly. Since all events have their effect 'doubled' on the last 7 days, you can use a single if for it Lastly, you use a single if as well to check if you can update the event, regarding final price Also I added a list comprehension to avoid looping through 'construction' events, whose value never changes class EventPricingUpdate(object): ORIGINAL_REDUCTION = 10 def __init__(self, events): self.events = events def events_to_update(self): return [event for event in self.events if event.event_type != 'construction'] def update(self): for event in self.events_to_update(): reduction = self.ORIGINAL_REDUCTION if event.event_type == 'music': reduction = reduction * -1 if event.event_type == 'sports': reduction = reduction * 2 if event.days_from_start < 7: reduction = reduction * 2 # if you use python 2 this if needs a slight change if 500 >= event.price - reduction >= 50: event.price -= reduction event.days_from_start -= 1 return self.events # Fill some events events = [ Event(event_type='construction', days_from_start=10, price=100), Event(event_type='music', 
days_from_start=10, price=100), Event(event_type='sports', days_from_start=10, price=100) ] EventPricingUpdate(events).update() # Check update assert events[0].price == 100 assert events[1].price == 110 assert events[2].price == 80 As you can see, it is easy to plug in new event types: just add a new condition in the loop. If you want to go further with the refactoring, you could move the reduction into the list comprehension method.
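One further variant (my own sketch, not from the review): keeping the per-type daily deltas in a dict makes adding a new event type a one-line data change instead of a new branch. The Event class is restated so the snippet is self-contained.

```python
class Event:
    def __init__(self, price, days_from_start, event_type):
        self.price = price
        self.days_from_start = days_from_start
        self.event_type = event_type


class EventPricingUpdate:
    # per-type daily price change; unknown types fall back to the default
    DELTAS = {'music': +10, 'sports': -20, 'construction': 0}
    DEFAULT_DELTA = -10

    def __init__(self, events):
        self.events = events

    def update(self):
        for event in self.events:
            delta = self.DELTAS.get(event.event_type, self.DEFAULT_DELTA)
            if event.days_from_start < 7:
                delta *= 2                      # last-week doubling
            if delta and 50 <= event.price + delta <= 500:
                event.price += delta
            event.days_from_start -= 1
        return self.events
```

With this layout, a hypothetical new type like 'theatre' simply inherits the default behaviour until an entry is added to DELTAS.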
{ "domain": "codereview.stackexchange", "id": 29713, "tags": "python, object-oriented, interview-questions" }
Physical interpretation of the bra-ket notation
Question: The bra-ket notation generally consists of a 'ket', i.e. a vector, and a 'bra', i.e. some linear map that maps a vector to a number in the complex plane. Now, using this bra-ket notation we can compute the inner product of some operator, say $\hat{H}$, so $\langle\psi|\hat{H}|\psi\rangle$ defines the eigenvalue of some hermitian operator $\hat{H}$. This is also called the expectation value of $\hat{H}$ and describes the probability of measuring this operator given the state $\psi$. I hope this is correctly understood. We can also derive the inner product $\langle\phi|\psi\rangle$. I must admit that I am a little confused about this representation although it makes sense mathematically. Does this mean the probability of being in the state $\phi$ given the state $\psi$? I hope someone can clarify. Answer: Now, using this bra-ket notation we can compute the inner product of some operator, say $\hat{H}$, so $\langle\psi|\hat{H}|\psi\rangle$ defines the eigenvalue of some hermitian operator $\hat{H}$. The inner product is a thing between two vectors - "the inner product of some operator" is not a meaningful phrase. If $|\psi\rangle$ is a normalized eigenvector of $\hat H$ with eigenvalue $\lambda$, then it's true that $\langle \psi|\hat H|\psi\rangle = \lambda$, but the definition of an eigenvector/eigenvalue pair is that $\hat H|\psi\rangle = \lambda|\psi\rangle$. This is also called the expectation value of $\hat{H}$ and describes the probability of measuring this operator given the state $\psi$. $\langle \psi|\hat H|\psi\rangle$ is referred to as the expectation value (or expected value) of $\hat H$ (corresponding to the normalized state vector $|\psi\rangle$). The interpretation of this number is that if you take a large number of identical systems all prepared in the state $|\psi\rangle$ and measure $\hat H$ in each of them, you would expect the mean value of all of those results to be $\langle \psi|\hat H|\psi\rangle$. 
We can also derive the inner product $\langle\phi|\psi\rangle$. I must admit that I am a little confused about this representation although it makes sense mathematically. Does this mean the probability of being in the state $\phi$ given the state $\psi$? There is no immediate physical interpretation of the inner product between two vectors - it is a quantity which shows up in all kinds of different contexts, and essentially measures the "overlap" between $\psi$ and $\phi$. It is analogous to the ordinary dot product between vectors in $\mathbb R^3$. If $\psi$ is a normalized state vector representing the state of the system and $\phi$ is a normalized eigenvector of some observable $\hat A$ with (non-degenerate) eigenvalue $\lambda$, then $|\langle \phi|\psi\rangle|^2$ is the probability of measuring $\hat A$ to take the value $\lambda$. So that is one context in which the expression could arise. But trying to assign a single physical meaning to the inner product is like trying to assign a single physical meaning to the dot product between vectors in $\mathbb R^3$.
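As a concrete illustration of both points, here is a small numerical sketch (the operator and the states are made-up two-level examples, not anything from the question):

```python
import numpy as np

# A made-up two-level example: H is Hermitian (Pauli-z), psi and phi are normalized.
H = np.array([[1.0, 0.0],
              [0.0, -1.0]])              # eigenvalues +1 and -1
psi = np.array([1.0, 1.0]) / np.sqrt(2)  # |psi> = (|0> + |1>)/sqrt(2)
phi = np.array([1.0, 0.0])               # eigenvector of H with eigenvalue +1

expectation = np.vdot(psi, H @ psi).real   # <psi|H|psi>: mean over many measurements
overlap_prob = abs(np.vdot(phi, psi))**2   # |<phi|psi>|^2: probability of outcome +1

print(expectation)   # 0.0 -- not itself a possible outcome, just the average
print(overlap_prob)  # 0.5
```

Note how the expectation value 0 is not one of the possible measurement outcomes (those are ±1) — exactly the distinction the answer draws between the expectation value and an eigenvalue.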
{ "domain": "physics.stackexchange", "id": 96284, "tags": "quantum-mechanics, operators, hilbert-space, linear-algebra, eigenvalue" }
Nonlocality of a bug on movie screen
Question: I am currently learning quantum mechanics using Griffiths. In the appendix, he goes on to talk about EPR and Bell's inequality, and that experimental verification of Bell's inequality rejects the "local hidden variable" theory. This means if Quantum Mechanics is correct, the collapse of the wave function is instantaneous, and not subject to locality. He then tried to argue that many things in fact travel faster than the speed of light but we don't have to worry about them, "if a bug flies across the beam of a movie projector, the speed of its shadow is proportional to the distance to the screen; in principle, that distance can be as large as you like, and hence the shadow can travel at arbitrarily high velocity. However, the shadow does not carry any energy, nor can it transmit any information from one point on the screen to another". I get how the shadow is not carrying any energy, but why can't it transmit information? For example, suppose Bob is on one side of the screen and encodes his message on the contour of the bug then sends it to the projector, then the bug flies across the projector at a very close distance to the light source. Then can this message be transferred to Alice on the other side of the screen faster than the speed of light? Answer: Part 1: In the scenario you describe, let's imagine the fastest possible bug, which travels at nearly the speed of light. Alice: A Bob: B Projector Alice: PA Projector Bob: PB That bug has to travel from B to PB, then PB to PA, then the bug's shadow has to travel from PA to A since there are still light rays en route after the bug is in position--A can only "see the shadow" after the light en route finishes propagating. So the total time it takes for A to receive the message from B will be the time it takes for light to travel from B to PB to PA to A. Which is roundabout and hence a longer distance than the direct B to A (by the triangle inequality). 
So the message will not be transferred to Alice faster than the speed of light. Part 2: In Griffiths' description, the bug does NOT come from B in the first place, only PB, so obviously does not carry information from B. Imagine at t=0 the bug is at PB and at t=dt the bug is at PA, and it takes time P for the shadow to be projected. Then the shadow reaches B (from PB) at t=P and reaches A (from PA) at t=P+dt, so indeed there is a time difference of only dt between A seeing the shadow and B seeing the shadow, and dt can be quite short compared to the A-B distance over the speed of light. So there is a sense in which the shadow "travels faster than light" (two shadow events separated by a distance longer than $c \cdot \Delta t$) which is different from your scenario, but in this scenario, no information is being carried.
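Part 1 is easy to check numerically. The coordinates below are made up for illustration; the only point is the triangle inequality:

```python
import math

# Made-up geometry: Alice and Bob at the screen, projector 5 units behind it.
B, A = (0.0, 0.0), (100.0, 0.0)     # Bob and Alice on the screen
PB, PA = (0.0, 5.0), (100.0, 5.0)   # projector-side points nearest Bob and Alice
c = 1.0                             # speed of light (natural units)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Best case: the "bug" itself moves at nearly c along B -> PB -> PA,
# then light still has to propagate PA -> A.
t_via_shadow = (dist(B, PB) + dist(PB, PA) + dist(PA, A)) / c
t_direct = dist(B, A) / c           # just sending light straight from B to A

print(t_via_shadow, t_direct)       # 110.0 100.0 -- the shadow route is slower
```

Whatever coordinates you pick, the detour via the projector is never shorter than the direct line from B to A.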
{ "domain": "physics.stackexchange", "id": 92430, "tags": "special-relativity, faster-than-light, information, non-locality" }
Magnetic field of dipole derivation
Question: How can we derive the following formula: $$\vec{B}(\vec{r})=\frac{\mu_0}{4\pi}\left[ \frac{3(\vec{m}\cdot\vec{r})\vec{r}}{r^5} - \frac{\vec{m}}{r^3}\right]\; ,$$ I want to derive it as a limit of a pair of magnetic charges, as the source shrinks to a point, while keeping the magnetic moment $m$ constant. I know that it can also be derived as the limit of a current loop, but here we have to use the vector potential: $$\vec{A}(\vec{r}) = \frac{\mu_0}{4 \pi}\frac{\vec{m} \times \vec{r}}{r^3} \;$$ and to be honest I don't really know what that actually is, and where it comes from. So I would like to derive it using pair of magnetic charges. Thank you! I tried to find some online sources with this type of derivation, but no luck (urls would be appreciated). Answer: This is going to be quite formal, but a nice analogy to electrostatics. The ideal electric dipole If you take two opposite point charges and at constant (electric) dipole moment $\vec{p}$ reduce their distance to zero you get the charge density $$ \rho = - \vec{p} \cdot \nabla \delta $$ with the delta distribution $\delta$. To compute the field created by this charge distributions we need three ingredients: Gauß' law $$ \nabla \cdot \vec{D} = \rho\,, $$ which relates the dielectric displacement $\vec{D}$ to the charge density, Faraday's law (static version here) $$ \nabla \times \vec{E} = 0\,, $$ which yields the fact that the electric field can be described as the gradient of a potential, $\vec{E} = - \nabla \phi$, and a material law for the vacuum, $$ \vec{D} = \varepsilon_0 \vec{E}\,. $$ Together they yield the Poisson equation $$ -\Delta \phi = \frac{\rho}{\varepsilon_0}\,. $$ The operator on the left has the fundamental solution $$ G = \frac{1}{4 \pi r}\,, $$ i.e. $$ - \Delta G = \delta $$ and for a general charge the solution to the Poisson equation is $$ \phi = \frac{1}{\varepsilon_0} \rho \ast G $$ where the asterisk denotes the convolution. 
This way, for example, we get the Coulomb potential for the point charge. Using the charge distribution of the ideal dipole we get the potential $$ \phi = -\frac{1}{\varepsilon_0} \vec{p} \cdot \nabla G $$ and the electric field $$ \vec{E} = - \nabla \phi = \frac{1}{\varepsilon_0} \nabla \left( \vec{p} \cdot \nabla G \right)\,. $$ The ideal magnetic dipole For the ideal magnetic dipole it's exactly the same, only a little different. ;) Take a current loop of given magnetic dipole moment $\vec{m}$ and reduce its size to zero. The distributional limit will be the current density $$ \vec{j} = - \vec{m} \times \nabla \delta\,. $$ To compute the field created by this current distribution we need three ingredients: Ampère's law (magnetostatic version) $$ \nabla \times \vec{H} = \vec{j}\,, $$ which relates the magnetic field to the current density, the equation $$ \nabla \cdot \vec{B} = 0\,, $$ which ensures the existence of a vector potential such that $\vec{B} = \nabla \times \vec{A}$, and you guessed it, a material law of the vacuum $$ \vec{B} = \mu_0 \vec{H}\,. $$ Together they yield the Poisson equation for the vector potential $$ -\Delta \vec{A} = \mu_0 \vec{j} $$ We already know how to solve it - the vector potential will be the convolution $$ \vec{A} = \mu_0 \vec{j} \ast G\,. $$ And so, in complete analogy to electrostatics we here find for the magnetic dipole $$ \vec{A} = -\mu_0 \vec{m} \times \nabla G\,, $$ and the field of the magnetic dipole is $$ \vec{B} = \nabla \times \left(-\mu_0 \vec{m} \times \nabla G\right) \\ = \mu_0 \vec{m}\, \delta + \mu_0 \nabla \left( \vec{m} \cdot \nabla G \right) \,. $$ Explicitly calculating the derivatives of the fundamental solution now yields what you are interested in. For including the delta terms you might want to look at Frahm, C.P. Some novel delta-function identities. Am. J. Phys. 1983. You don't have to if you just want to get the classical part of the solution...
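For completeness, the last step can be carried out away from the origin (where the delta terms vanish); this is just the computation the answer alludes to, using $G = 1/4\pi r$:

```latex
\nabla G = -\frac{\vec{r}}{4\pi r^3},
\qquad
\vec{m}\cdot\nabla G = -\frac{\vec{m}\cdot\vec{r}}{4\pi r^3},
\qquad
\nabla\left(\vec{m}\cdot\nabla G\right)
  = \frac{1}{4\pi}\left[\frac{3(\vec{m}\cdot\vec{r})\,\vec{r}}{r^5}
  - \frac{\vec{m}}{r^3}\right],
```

so $\mu_0\nabla(\vec{m}\cdot\nabla G)$ reproduces exactly the dipole field quoted in the question.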
{ "domain": "physics.stackexchange", "id": 88929, "tags": "electromagnetism, magnetic-fields, dipole, magnetic-monopoles, dipole-moment" }
Kinematics of actuated disk with no slip
Question: Given the following kinematic problem, how would one calculate the velocity $v_C$ of point C given a certain horizontal velocity $v_{E}$ in point E? Given is that the disk does not slip and cannot move in the vertical direction. Point E can also only move horizontally. My train of thought is as follows: Calculate $v_D=v_E+\omega_{DE} \times r_{D/E}$. Then use $v_D=v_C+\omega_{BD}\times r_{D/B}$. With these two equations I could find $v_C$. However, I cannot find the angular velocities. What am I missing here? Answer: You need one more equation $v_B=v_C+\omega_{BD}\times r_{B/C}$ and the fact that $\omega_{DE}=\dot{\theta }$. Now we have $v_B=v_C+\omega_{BD}\ \hat{k} \times R_3\ \hat{j}=v_C-\omega_{BD}\ R_3\hat{i}$ and since $v_B=0$, we get $v_C=\omega_{BD}\ R_3\hat{i}$. For D we have, $v_D=v_C-\omega_{BD}\ \hat{k} \times R_3\ \hat{j}=v_C+\omega_{BD}\ R_3\hat{i} = 2\ \omega_{BD}\ R_3\hat{i}$ Thus $$v_C=v_D/2$$ Using $v_D=v_E+\omega_{DE} \times r_{D/E}$, we get $v_D=v_E\hat{i}-\dot{\theta} \hat{k} \times \{-L_{DE} \cos\theta\hat{i}+L_{DE} \sin\theta\hat{j}\} = v_E\hat{i}+\dot{\theta}L_{DE}\cos\theta\hat{j}+\dot{\theta}L_{DE}\sin\theta\hat{i}$ Thus $$v_C=\frac{1}{2}(v_E\hat{i}+\dot{\theta}L_{DE}\sin\theta\hat{i}+\dot{\theta}L_{DE}\cos\theta\hat{j})$$ (For the points C and D to remain at the constant height with no vertical motion, either $\theta=90 {}^{\circ}$ or $\dot{\theta}=0$.)
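A quick numeric sketch of the final expression (the values of $\theta$, $\dot\theta$, $L_{DE}$ and $v_E$ below are made up; note the answer's remark that $\dot\theta = 0$ if C and D stay at constant height):

```python
import math

# Made-up inputs for illustration
theta = math.radians(30)   # link angle
theta_dot = 0.0            # = 0 so that C and D keep constant height (see answer)
L_DE = 2.0                 # length of link DE (assumed)
v_E = 1.5                  # given horizontal speed of E

# v_D = (v_E + theta_dot*L_DE*sin(theta)) i + theta_dot*L_DE*cos(theta) j
v_D = (v_E + theta_dot * L_DE * math.sin(theta),
       theta_dot * L_DE * math.cos(theta))

# Rolling without slipping makes B the instantaneous centre, so v_C = v_D / 2
v_C = (0.5 * v_D[0], 0.5 * v_D[1])
print(v_C)   # (0.75, 0.0) for these inputs
```

With $\dot\theta = 0$ this collapses to $v_C = v_E/2\,\hat i$, as the closed-form result predicts.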
{ "domain": "engineering.stackexchange", "id": 4918, "tags": "dynamics, kinematics" }
Cannot edit pg_ident.conf
Question: Hi, how can I make pg_ident.conf easily accessible and edit it? Any suggestions? Originally posted by Amal on ROS Answers with karma: 229 on 2012-12-16 Post score: 2 Answer: You cannot edit this file because it belongs to user postgres; you don't have the privilege to edit it. This can be verified by using ls -l /etc/postgresql/9.1/main/pg_ident.conf You can edit this file in two ways. 1. Change to user postgres sudo su - postgres Then edit the file by vim /etc/postgresql/9.1/main/pg_ident.conf 2. Use root privilege to edit the file sudo vim /etc/postgresql/9.1/main/pg_ident.conf If you don't have vim installed yet, use sudo apt-get install vim Once you enter the vim editor, press "i" to enter INSERT mode, and you can edit the file. When you have finished editing, type ":wq" to save the file and leave the vim editor. Originally posted by Po-Jen Lai with karma: 1371 on 2012-12-17 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by RiskTeam on 2013-02-26: I have the same question as @Amal, and when I tried to become user postgres, a password for postgres was requested! @Ricky Comment by Po-Jen Lai on 2013-03-01: When you type sudo su - postgres to become user postgres, the password you need is your original user's. Comment by RiskTeam on 2013-03-02: Yes, I have tried my original password and it worked, but I have stopped at the step of making an administrator of my created database using pgadmin. There was an error that the version of pgadmin isn't compatible with the version of PostgreSQL.
{ "domain": "robotics.stackexchange", "id": 12126, "tags": "ubuntu-precise, ubuntu" }
Leap year check in Haskell, using pattern matching or bind operator
Question: One of the first Haskell puzzles on http://exercism.io/ is to implement a leap year check. I've done this twice. Using pattern matching: isLeapYear :: Integer -> Bool isLeapYear year | year `mod` 400 == 0 = True | year `mod` 100 == 0 = False | year `mod` 4 == 0 = True | otherwise = False Using the bind operator >>=: isLeapYear :: Integer -> Bool isLeapYear year = head $ [(400, True), (100, False), (4, True), (1, False)] >>= check year where check y (interval, isLeap) = [isLeap | y `mod` interval == 0] I'd like to know which implementation is "better" / more idiomatic in Haskell. I am unsure whether I might have misused a too powerful concept in the second try, and the first try might just be more readable. Answer: The first one is a lot more readable, whereas the second one uses a "hack". I would go with the first one, except that I would use rem, which is a little bit faster. And one could introduce some DRY: isDivisibleBy :: Integral n => n -> n -> Bool isDivisibleBy x n = x `rem` n == 0 isLeapYear :: Integer -> Bool isLeapYear year | divBy 400 = True | divBy 100 = False | divBy 4 = True | otherwise = False where divBy n = year `isDivisibleBy` n That being said, for a programming challenge, your version is perfectly fine: isLeapYear :: Integer -> Bool isLeapYear year | year `rem` 400 == 0 = True | year `rem` 100 == 0 = False | year `rem` 4 == 0 = True | otherwise = False The latter can be rewritten without >>= as list comprehension: isLeapYear :: Integer -> Bool isLeapYear year = head [isLeap | (interval, isLeap) <- classifications , year `isDivisibleBy` interval] where classifications = [(400, True), (100, False), (4, True), (1, False)] You could get rid of the "hack" with safeHead and maybe False, but that's left as an exercise. If you really want to use check, remove the y. 
It just introduces an additional error source: isLeapYear :: Integer -> Bool isLeapYear year = head $ [(400, True), (100, False), (4, True), (1, False)] >>= check where check (interval, isLeap) = [isLeap | year `rem` interval == 0] Note that this shows perfectly that >>= is just flip concatMap for lists. So let's take advantage: isLeapYear :: Integer -> Bool isLeapYear year = head $ concatMap check classifications ++ [False] where check (interval, isLeap) = [isLeap | year `rem` interval == 0] classifications = [(400, True), (100, False), (4, True)] Which is easier to grasp than the version with >>=.
{ "domain": "codereview.stackexchange", "id": 38082, "tags": "programming-challenge, haskell, datetime, comparative-review, monads" }
What is the purpose of the various qiskit libraries?
Question: I can't find many resources about how Qiskit API libraries are organised. We could see versions by using: import qiskit print(qiskit.__qiskit_version__) >> {'qiskit-terra': '0.21.2', 'qiskit-aer': '0.10.4', 'qiskit-ignis': None, 'qiskit-ibmq-provider': '0.19.2', 'qiskit': '0.37.2', 'qiskit-nature': None, 'qiskit-finance': None, 'qiskit-optimization': None, 'qiskit-machine-learning': None} What is the purpose of these libraries? Answer: -terra is the "core" of qiskit, containing things like the circuit class, gates, etc. -aer contains circuit simulators to test out your circuit. -ignis is for noise modeling, containing various error channels and quantum information and error correction utilities. -ibmq-provider is for running on the IBM cloud, be it QPUs or hosted simulators. Note: -ignis has been superseded by qiskit-experiments, as in @Egretta.Thula's comment. qiskit itself just packages everything together. -nature, -finance, -optimization, and -machine-learning are pretty self-explanatory, containing tools to express problems in their respective domains as quantum programs.
{ "domain": "quantumcomputing.stackexchange", "id": 4162, "tags": "qiskit" }
What does a curved natural log graph suggest?
Question: Sorry if this is rather simple, but I've only just started learning about using logarithms in experimental physics. I did an experiment to test the amount of time it would take for an amount of water to leave a burette. I used the starting volume of water in the burette as a control variable, $50\mbox{cm}^3$. I recorded the time it took for a given volume of water to be left in the burette. For example, $10\mbox{cm}^3$ left took a time of roughly $71\mbox{s}$; $45\mbox{cm}^3$ left took roughly $6\mbox{s}$, and then many values in between. I would expect this to represent exponential decay, seen as different concentrations and masses of water in the burette would have different effects on the speed of the water leaving the burette. (Correct me if I'm wrong.) So I plotted a graph of volume against time and it showed exponential decay, but it was only very slightly curved, but curved nonetheless. So I decided then to plot a graph of $\ln\left(V/\operatorname{cm}^3\right)$ against time/s. However, this did not produce a straight line. If I were to follow the plotted points with a curve, the gradient of the line would have been negative and increased in negative 'magnitude'. I'm meant to analyse the extent of whether or not my experiment shows exponential decay. I'm quite stuck, because my original graph shows very slight decay, whereas my log graph isn't a straight line. Does the fact that the log graph doesn't produce a straight line show that there isn't exponential decay? Does it not matter? Would it have been straight had there been very few experimental errors/uncertainties (there would have been a lot)? So I guess, fundamentally, my question is: What does the curved line on my natural log graph suggest? 
Answer: Some quick scribbling on an envelope suggests that the volume in the burette will vary with time according to: $$ V = V_0 \left(\frac{t_0 - t}{t_0} \right) ^2 $$ where $V_0$ is the initial volume (50cc in this case) and $t_0$ is the time the burette takes to empty. So the curve is not an exponential decay. It's actually a section of a parabola, but shifted along the time axis. Some more quick scribbling in Excel and I get a graph that looks like (assuming $t_0$ is 20 seconds): If you do a graph of ln(V) against time with this data you get a distinctly non-straight line: It's dangerous to assume that any curve vaguely resembling an exponential is actually an exponential, even though this is a common error amongst budding physicists. You really need to have some mathematical model against which you can evaluate your data.
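The claim that $\ln V$ against $t$ is not straight for this model is easy to check numerically (using the answer's illustrative $V_0 = 50$, $t_0 = 20\,\mathrm{s}$):

```python
import math

V0, t0 = 50.0, 20.0                       # answer's illustrative values
V = lambda t: V0 * ((t0 - t) / t0) ** 2   # parabolic emptying law from the answer

ts = [0.0, 5.0, 10.0]                     # equally spaced sample times
lnV = [math.log(V(t)) for t in ts]

# For a straight line the second difference would vanish; here it does not:
second_diff = (lnV[2] - lnV[1]) - (lnV[1] - lnV[0])
print(second_diff)   # nonzero and negative => ln V vs t curves downward
```

A genuinely exponential decay $V = V_0 e^{-t/\tau}$ would give exactly zero second difference, which is the quick test the log plot is for.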
{ "domain": "physics.stackexchange", "id": 8365, "tags": "experimental-physics, data-analysis" }
Angular Momentum about a Point
Question: I am confused by two different definitions of the angular momentum of a particle P about a moving point Q, with point O as the origin of the inertial frame. I learned the first definition from my sophomore dynamics course, and now I am taking analytical dynamics and the textbook gave a different definition. The first definition looks like this: $$ \vec{h}_{P/Q} = \vec{r}_{P/Q} \times m\vec{v}_{P/O} $$ Where h_P/Q is the angular momentum of particle P w.r.t point Q; r_P/Q is the relative position of P w.r.t Q, v_P/O is the inertial velocity of particle P w.r.t a fixed point O. The second definition looks like this: $$ \vec{h}_{P/Q} = \vec{r}_{P/Q} \times m\vec{v}_{P/Q} $$ Basically the velocities are taken w.r.t different points. Which one is the correct one? Answer: If the angular momentum is calculated with respect to Q, both the position vector and the velocity vector must be related to Q. So, the second expression is correct. If the momentum is calculated with respect to the origin of the inertial frame: $\mathbf L = \mathbf r_{OP} \times m\mathbf v_{OP} = (\mathbf r_{OQ} + \mathbf r_{QP}) \times m\mathbf v_{OP} $
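One way to see how the two definitions relate (a sketch in the question's notation, labelling the two expressions (1) and (2) in the order given): subtract them and use $\vec v_{P/O} = \vec v_{P/Q} + \vec v_{Q/O}$,

```latex
\vec{h}^{(1)}_{P/Q} - \vec{h}^{(2)}_{P/Q}
  = \vec{r}_{P/Q} \times m\left(\vec{v}_{P/O} - \vec{v}_{P/Q}\right)
  = \vec{r}_{P/Q} \times m\,\vec{v}_{Q/O}\,.
```

So the two definitions coincide exactly when Q is fixed in the inertial frame ($\vec v_{Q/O} = 0$); for a moving reference point they differ, and the corresponding moment equations pick up different correction terms.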
{ "domain": "physics.stackexchange", "id": 88538, "tags": "newtonian-mechanics, rotational-dynamics, angular-momentum" }
Tonelli-Shanks algorithm implementation of prime modular square root
Question: I did an implementation of the Tonelli-Shanks algorithm as defined on Wikipedia. I put it here for review and sharing purposes. Legendre Symbol implementation: def legendre_symbol(a, p): """ Legendre symbol Define if a is a quadratic residue modulo odd prime http://en.wikipedia.org/wiki/Legendre_symbol """ ls = pow(a, (p - 1)/2, p) if ls == p - 1: return -1 return ls Prime modular square root (I just renamed the solution variable R to x and n to a): def prime_mod_sqrt(a, p): """ Square root modulo prime number Solve the equation x^2 = a mod p and return list of x solution http://en.wikipedia.org/wiki/Tonelli-Shanks_algorithm """ a %= p # Simple case if a == 0: return [0] if p == 2: return [a] # Check solution existence on odd prime if legendre_symbol(a, p) != 1: return [] # Simple case if p % 4 == 3: x = pow(a, (p + 1)/4, p) return [x, p-x] # Factor p-1 in the form q * 2^s (with q odd) q, s = p - 1, 0 while q % 2 == 0: s += 1 q //= 2 # Select a z which is a quadratic non-residue modulo p z = 1 while legendre_symbol(z, p) != -1: z += 1 c = pow(z, q, p) # Search for a solution x = pow(a, (q + 1)/2, p) t = pow(a, q, p) m = s while t != 1: # Find the lowest i such that t^(2^i) = 1 i, e = 0, 2 for i in xrange(1, m): if pow(t, e, p) == 1: break e *= 2 # Update next value to iterate b = pow(c, 2**(m - i - 1), p) x = (x * b) % p t = (t * b * b) % p c = (b * b) % p m = i return [x, p-x] If you have any optimization or have found any error, please report it. Answer: Good job! I don't have a lot to comment on in this code. You have written straightforward clear code whose only complexity stems directly from the complexity of the operation it is performing. It would be good to include some of your external commentary (such as the renames of R and n) in the code itself to make it easier for someone to follow the documentation on wikipedia. You may want to include some of that documentation as well. 
For reference, the rest of this review assumes that the code functions correctly; I don't have my math head on tonight. There appears to be one case of redundant code, unless m can ever be 1, resulting in an empty range and thus no reassignment of i. Otherwise you can skip the assignment to i in the following: i, e = 0, 2 for i in xrange(1, m): ... There are a number of small strength-reduction optimizations you might consider, but in Python their impact is likely to be minimized - definitely profile before heading too deeply down the optimization path. For example in the following while loop: # Factor p-1 on the form q * 2^s (with Q odd) q, s = p - 1, 0 while q % 2 == 0: s += 1 q //= 2 Both operations on q can be reduced. The modulus can be rewritten as a binary and q & 1, and the division as a binary shift q >>= 1. Alternately, you can use divmod to perform both operations at once. Similarly, 2**(m - i - 1) is identical to 1 << (m - i - 1) for non-negative exponents.
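Since the original is Python 2 (`xrange`, integer `/`), here is a Python 3 sketch of the same algorithm with the suggested shift-based strength reductions folded in (intended to be behaviour-preserving, not an authoritative rewrite):

```python
def legendre_symbol(a, p):
    """1 if a is a quadratic residue mod odd prime p, -1 if not, 0 if p | a."""
    ls = pow(a, (p - 1) // 2, p)
    return -1 if ls == p - 1 else ls

def prime_mod_sqrt(a, p):
    """Solve x^2 = a (mod p) for prime p; return the list of solutions."""
    a %= p
    if a == 0:
        return [0]
    if p == 2:
        return [a]
    if legendre_symbol(a, p) != 1:
        return []                      # no solution exists
    if p % 4 == 3:                     # simple case
        x = pow(a, (p + 1) // 4, p)
        return [x, p - x]
    # Factor p - 1 as q * 2^s with q odd, using bit operations
    q, s = p - 1, 0
    while q & 1 == 0:
        s += 1
        q >>= 1
    z = 2                              # find a quadratic non-residue
    while legendre_symbol(z, p) != -1:
        z += 1
    c = pow(z, q, p)
    x = pow(a, (q + 1) // 2, p)
    t = pow(a, q, p)
    m = s
    while t != 1:
        i, e = 0, 2                    # lowest i with t^(2^i) == 1
        for i in range(1, m):
            if pow(t, e, p) == 1:
                break
            e *= 2
        b = pow(c, 1 << (m - i - 1), p)
        x = x * b % p
        t = t * b * b % p
        c = b * b % p
        m = i
    return [x, p - x]

print(prime_mod_sqrt(10, 13))  # [7, 6]: 7^2 = 49 = 10 (mod 13)
```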
{ "domain": "codereview.stackexchange", "id": 32118, "tags": "python, algorithm, primes, mathematics" }
Given an array of integers, return all pairs that add up to 100
Question: I was recently given this programming challenge on an interview, and I decided to use javascript to solve it. I did, but I'm not happy with my implementation. I can't help thinking there must be a better way of doing this. The exercise goes like this: Given an array of integers, write a function that returns an array of each pair of integers that add up to 100. The input is [0, 1, 100, 99, 0, 10, 90, 30, 55, 33, 55, 75, 50, 51, 49, 50, 51, 49, 51] and the function should return something like this (the order is not important). [ [0,100], [1,99], [10,90], [50,50], [49,51] ] My implementation looks like this, is there a different approach out there? var sample_data = [0, 1, 100, 99, 0, 10, 90, 30, 55, 33, 55, 75, 50, 51, 49, 50, 51, 49, 51] function process(data){ var result = [] var a; var b; for (var i=0; i < data.length; i++) { a = data[i]; for (var j=0; j < data.length; j++) { b = data[j] if ( (parseInt(a) + parseInt(b)) === 100 && result.indexOf(a+","+b) == -1 && result.indexOf(b+","+a ) == -1 ) { result.push( a+","+b ) } } } for (var i=0; i < result.length; i++) { result[i] = result[i].split(',') } return result } process(sample_data); Answer: Style It is considered good practice to terminate all statements with a semicolon, even though they are optional. Follow namingConventions in JavaScript. The function should have a more descriptive name than process(). It could take the desired sum as a parameter. Algorithm Your function, which relies on brute force, is an inefficient \$O(n^2)\$, bordering on \$O(n^3)\$. It's \$O(n^2)\$ because i takes on \$n\$ values, and for each i, j takes on \$n\$ values. The result.indexOf(…) operations could also be \$O(n)\$ in the worst case, so the overall complexity could be as bad as \$O(n^3)\$. 
One simple optimization would be to take advantage of symmetry to cut the work in half: for (var i = 0; i < data.length; i++) { a = data[i]; for (var j = 0; j < i; j++) { … } } Another simplification would be to avoid stringifying and parsing the number pairs: result.push([a, b]); One possible smart solution would be to sort the data, and have i increasing from 0, and j decreasing from the end, until they meet in the middle. function pairsWithSum(sum, data) { data = data.slice(0); data.sort(function(a, b) { return a - b; }); var pairs = []; var i = 0, j = data.length - 1; while (i < j && i < data.length && j >= 0) { var a = data[i], b = data[j]; if (a + b == sum) { pairs.push([a, b]); while (i < data.length && data[i] == a) { i++; } while (j >= 0 && data[j] == b) { j--; } } else if (a + b < sum) { while (i < data.length && data[i] == a) { i++; } } else { while (j >= 0 && data[j] == b) { j--; } } } return pairs; }
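As a further alternative (my own sketch, not part of the review above): with a Set you can do it in a single \$O(n)\$ pass, at the cost of extra memory:

```javascript
// O(n) single-pass variant using Sets; a pair is emitted when its second
// member is seen, and a canonical "min,max" key prevents duplicates.
function pairsWithSumSet(sum, data) {
    var seen = new Set();   // numbers encountered so far
    var found = new Set();  // canonical keys already emitted
    var pairs = [];
    for (var k = 0; k < data.length; k++) {
        var x = data[k];
        var y = sum - x;
        var key = Math.min(x, y) + ',' + Math.max(x, y);
        if (seen.has(y) && !found.has(key)) {
            found.add(key);
            pairs.push([Math.min(x, y), Math.max(x, y)]);
        }
        seen.add(x);
    }
    return pairs;
}

var sample = [0, 1, 100, 99, 0, 10, 90, 30, 55, 33, 55, 75, 50, 51, 49, 50, 51, 49, 51];
console.log(pairsWithSumSet(100, sample));
// [ [0,100], [1,99], [10,90], [49,51], [50,50] ]
```

Note that because `seen` only records that a value occurred, `[50,50]` is reported only once the second 50 appears, which matches the expected output.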
{ "domain": "codereview.stackexchange", "id": 11555, "tags": "javascript, array, interview-questions" }
Does the mass of the observable universe ever change?
Question: First do we have anyway to even estimate the mass of the entire observable universe? And then is there any data that shows mass being gained or lost? Would we ever know if someone was playing with the til. Also I want to be clear that I am not talking about small masses on the outskirts of the "universe" or small discrepancies in measurement or anything of the sort. Note: I would like to add that maybe we should define the observable universe as NOW (x-date) so that we aren't calculating a moving target. Answer: Yes, the mass of the observable Universe is always increasing. Matter Even if you're only referring the "ordinary" matter (such as stars, gas, and bicycles) and dark matter, the mass of the observable Universe does increase, not because mass is being created, but because the size of the observable Universe increases. In a billion years from now, we can see stuff that today is too far away for the light to have reached us, so its radius has increased. Since the mass $M$ equals density $\rho_\mathrm{M}$ times volume $V$, $M$ increases. As called2voyage mentions, we have several ways of measuring the density, and we know it's close to $\rho_\mathrm{M}\simeq 2.7\times10^{-30}\,\mathrm{g}\,\mathrm{cm}^{-3}$ (Planck Collaboration et al. 2020). The radius is $R = 4.4\times10^{28}\,\mathrm{cm}$, so the mass is $$ M = \rho_\mathrm{M} \times V = \rho_\mathrm{M} \times \frac{4\pi}{3}R^3 \simeq 10^{57}\,\mathrm{g}, $$ or $5\times10^{23}M_\odot$ (Solar masses). Mass increase of matter Every second, the radius of the observable Universe increases by $dR = c\,dt = 300\,000\,\mathrm{km}$, in addition to the expansion. Here, $c$ is the speed of light, and $dt$ is the time interval that I choose to be 1 second. 
That means that its mass (currently) increases by $$ \begin{array}{rcl} dM & = & A \times dR \times \rho_\mathrm{M}\\ & = & 4\pi R^2 \times c\,dt \times \rho_\mathrm{M}\\ & \sim & 10^6\,M_\odot\,\text{per second,} \end{array} $$ where $A=4\pi R^2$ is the surface area of the Universe. Dark energy However, another factor contributes to the mass increase, namely the so-called dark energy, which is a form of energy attributed to empty space. And since new space is created as the Universe expands, dark energy is being created all the time. Currently, the energy density of dark energy, expressed as mass density through $E=mc^2$, is more than twice that of matter ($\rho_\Lambda \simeq 6\times10^{-30}\,\mathrm{g}\,\mathrm{cm}^{-3}$). The rate at which the observable Universe grows due to expansion can be calculated from the Hubble law, which says that objects at a distance $d$ from us recedes at a velocity $$ v = H_0 \, d, $$ where $H_0\simeq 70\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$ is the Hubble constant. Expansion thus makes the edge of the observable Universe recede at $v=H_0 R = 3.2c$ (yes, more than three times the speed of light), in addition to the factor of $1c$ that comes from more light reaching us (as above). Mass increase of dark energy Hence, every second the "total" radius of the observable Universe (i.e. expansion + more light) increases by $dR = (3.2c + 1c)\times dt$, such that the increase in mass/energy from dark energy is $$ \begin{array}{rcl} dM & = & A \times dR \times \rho_\Lambda\\ & = & 4\pi R^2 \times (3.2c + 1c)dt \times \rho_\Lambda\\ & \sim & 10^7\,M_\odot\,\text{per second,} \end{array} $$ an order of magnitude more than that of regular/dark matter.
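The arithmetic above is easy to reproduce (cgs values as quoted in the answer; $M_\odot \simeq 2\times10^{33}\,\mathrm{g}$ is an assumed round value for the solar mass):

```python
import math

# Rough numeric check of the figures quoted in the answer (cgs units)
rho_M = 2.7e-30    # matter density, g/cm^3
rho_L = 6e-30      # dark-energy density (as mass density), g/cm^3
R = 4.4e28         # radius of the observable Universe, cm
c = 3e10           # speed of light, cm/s
M_sun = 2e33       # assumed solar mass, g

M = rho_M * (4 * math.pi / 3) * R**3                    # total matter mass
dM_matter = 4 * math.pi * R**2 * c * rho_M              # per second, matter
dM_dark = 4 * math.pi * R**2 * (3.2 + 1) * c * rho_L    # per second, dark energy

print(f"M ~ {M:.1e} g = {M / M_sun:.1e} M_sun")         # ~1e57 g
print(f"dM matter ~ {dM_matter / M_sun:.1e} M_sun/s")   # ~1e6
print(f"dM dark   ~ {dM_dark / M_sun:.1e} M_sun/s")     # ~1e7
```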
{ "domain": "astronomy.stackexchange", "id": 2029, "tags": "universe, mass" }
Markov Decision Process - calculate value in each iteration
Question: I have the following decision tree: I calculated the value of the plan using the following parameters (given): the policy {S0 → a1, S1 → a3, S2 → a4}, Discount factor ($\gamma$) = 0.2 I used this formula to calculate the linear equations to find the value of the plan: $V^\pi(s) = R(s) + \gamma \sum_{s'} T(s' \mid s, \pi(s))\, V^\pi(s')$ Linear equations: V0 = 1 + 0.2 (0.5 V0 + 0.5 V2 ) V1 = -1 + 0.2 (0.7 V0 + 0.3 V1) V2 = -2 + 0.2 (1 V1 ) Now I need to calculate the first three iterations of the value-iteration algorithm, if a discount factor of 0.2 is used and starting initially (iteration 0) with state values all equal to 0. And write it in the format below: S0 = {Value at iteration1, value at iteration2, value at iteration3} S1 = {Value at iteration1, value at iteration2, value at iteration3} S2 = {Value at iteration1, value at iteration2, value at iteration3} How can I find the value at different iterations? I know that we can use the Bellman Equation: But how do I use this equation? Thanks! Answer: The value iteration algorithm defines the following update rule (reference is slide 11 in this MIT course): $$V_{i+1}(s) = \max_{a}\{R(s,a) + \gamma E_{s'\sim T(.|s,a)} V_i(s')\}$$ for all states $s$, with $E_{s'\sim T(.|s,a)} V_i(s') = \sum_{s'} T(s'|s,a)V_i(s')$. I adapted the transition probability name from $P$ to $T$ to conform with your notation. Note the index $i$ for the value function on the right hand side of the equations. That's what enables you to do the computations: the value of the state at iteration $i+1$ is calculated recursively from the values at iteration $i$. We also have $\max_a R(a,s) =$ reward for reaching the state, given that the reward assigned to each state doesn't depend on the action which was taken in your example. Therefore the value iteration update rule simplifies to: $$V_{i+1}(s) = R(s) + \gamma \max_a \{\sum_{s'} T(s'|s,a)V_i(s')\}$$ If you initialize $V_0(s)$ to the value of the reward function for that state, we have: $V_0(S_0) = 1$, $V_0(S_1) = -1$, $V_0(S_2) = -2$. 
Then for $S_0$: $$\begin{align} V_1(S_0) = R(S_0) + \gamma \max \{ & T(S_0| a_1, S_0) V_0(S_0) + T(S_2 | a_1, S_0)V_0(S_2); \\ & T(S_0 | a_2, S_0) V_0(S_0) + T(S_1| a_2, S_0) V_0(S_1)\}\end{align}$$ where the first term in the $\max$ is the expected value if you take action $a_1$ from state $S_0$ and the second term is the expected value if you take action $a_2$ from state $S_0$. This gives, with $\gamma = 0.2$ $$\begin{align}V_1(S_0) & = 1 + 0.2 * \max \{0.5*1 + 0.5 *-2; 0.2*1 + 0.8*-1\} \\ & = 1 + 0.2 * \max \{-0.5; -0.6\} \\ & = 1 - 0.2*0.5 = 0.9\end{align}$$. Similarly for $S_1$: $$\begin{align} V_1(S_1) = R(S_1) + \gamma \max \{ & T(S_1| a_3, S_1) V_0(S_1) + T(S_0 | a_3, S_1)V_0(S_0); \\ & T(S_1 | a_5, S_1) V_0(S_1)\}\end{align}$$ which gives: $$\begin{align}V_1(S_1) & = -1 + 0.2 * \max \{0.3*-1 + 0.7*1; 1*-1\} \\ & = -1 + 0.2 * \max \{0.4; -1\} \\ & = -1 + 0.2*0.4 = -0.92\end{align}$$ And finally for $S_2$: $$\begin{align} V_1(S_2) = R(S_2) + \gamma \max \{ & T(S_2| a_3, S_2) V_0(S_2) + T(S_0 | a_3, S_2)V_0(S_0); \\ & T(S_1 | a_4, S_2) V_0(S_1)\}\end{align}$$ which gives: $$\begin{align}V_1(S_2) & = -2 + 0.2 * \max \{0.2*-2 + 0.8*1; 1*-1\} \\ & = -2 + 0.2 * \max \{0.4; -1\} \\ & = -2 + 0.2*0.4 = -1.92\end{align}$$ To get the values at the next iteration, you can reuse the equations above incrementing the indices by one.
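The whole computation can also be scripted; the transition table below is read off the worked equations above (action names as in the answer):

```python
# A sketch of the value-iteration computation for this MDP. The transition
# probabilities are taken from the worked equations in the answer.
R = {'S0': 1.0, 'S1': -1.0, 'S2': -2.0}
T = {  # T[state][action] = [(next_state, probability), ...]
    'S0': {'a1': [('S0', 0.5), ('S2', 0.5)], 'a2': [('S0', 0.2), ('S1', 0.8)]},
    'S1': {'a3': [('S1', 0.3), ('S0', 0.7)], 'a5': [('S1', 1.0)]},
    'S2': {'a3': [('S2', 0.2), ('S0', 0.8)], 'a4': [('S1', 1.0)]},
}
gamma = 0.2

def step(V):
    """One sweep of V_{i+1}(s) = R(s) + gamma * max_a sum_{s'} T(s'|s,a) V(s')."""
    return {s: R[s] + gamma * max(sum(p * V[s2] for s2, p in outcomes)
                                  for outcomes in T[s].values())
            for s in T}

V = dict(R)              # iteration 0: V_0(s) = R(s)
for i in range(1, 4):    # iterations 1, 2, 3
    V = step(V)
    print(i, V)
```

The first sweep reproduces the hand calculation above, $V_1(S_0)=0.9$, $V_1(S_1)=-0.92$, $V_1(S_2)=-1.92$, and the next two sweeps give the iteration-2 and iteration-3 values the exercise asks for.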
{ "domain": "ai.stackexchange", "id": 3447, "tags": "reinforcement-learning, machine-learning, markov-decision-process, statistics" }
Scalar positronium decay Feynman diagram
Question: Charged scalar particles appear in some Beyond the Standard Model theories and are regularly searched for at high energy colliders. For positronium there is a simple Feynman diagram as shown below Would the same diagram work for a scalar charged particle-antiparticle bound state? Unfortunately, if we try to apply this diagram to such a state we have a problem with the spin conservation being violated. What is the Feynman diagram for scalar charged particle-antiparticle bound state annihilation? Answer: There are (only) 2 possibilities for the spin state of a positronium: the singlet state $S=0$, where the spins of electron and positron are anti-parallel, which is called para-positronium, and the $S=1$ state, where the spins of electron and positron are parallel, which is called ortho-positronium. Each of the two cases has to be treated differently. The para-positronium decays into 2 photons as shown in the Feynman diagram of the post, while the ortho-positronium decays into an odd number of photons, preferentially 3, but 5, ... is a priori also possible, though very improbable. So if you consider an $S=1$ positronium (ortho-positronium), then you have to compute a Feynman diagram with 3 out-going photons. The third photon is emitted from the virtual exchange particle between the 2 vertices of the Feynman diagram already shown in the post. If you want to consider a positronium in scalar electrodynamics, there is chapter 61 on this in the book of Srednicki http://web.physics.ucsb.edu/~mark/ms-qft-DRAFT.pdf Figure 61.2 shows the Feynman diagram and at the bottom of p. 366 the calculation is shown. If the ingoing particles are scalar, the computation is very similar to the decay of the para-positronium, the singlet state. But the in-going particles of the "positronium" are no longer electrons and positrons. To recap, the Feynman diagram is the same as the one shown in the post, only the in-going particles change to scalar. 
These particles could be, for instance, supersymmetric particles. But such particles would hardly build up a bound state, at most for a very short time; therefore one would not call it a "positronium". As for the spin state of the virtual particle, note that spin is not conserved. In order to fulfill angular momentum conservation, the virtual particle can adopt some orbital angular momentum. Anyway, it is a virtual particle, whose physical quantities are not measurable. What counts in the end is that angular momentum is conserved between the in-going and the out-going particles. This can be easily achieved for in-going scalar particles as described above.
{ "domain": "physics.stackexchange", "id": 92476, "tags": "feynman-diagrams, klein-gordon-equation, positronium" }
What does "cellular" mean in this context?
Question: I came across a confusing word when I was reading a Scientific American story, “Controversial Spewed Iron Experiment Succeeds as Carbon Sink” (by David Biello). It goes like this: “One key to the whole experiment’s success turns out to be the specific diatoms involved, which use silicon to make their shells and tend to form long strands of cellular slime after their demise that falls quickly to the seafloor.” I'm wondering what “cellular” means in this context. Does it mean “of (diatom) cells” or “porous”? Could someone kindly enlighten me on this? Answer: My understanding is that the slime in question is formed of the bodies (the cells) of the dead diatoms. Where does porous come into it? Is it mentioned in a previous sentence?
{ "domain": "biology.stackexchange", "id": 4668, "tags": "microbiology" }
how to monitor the node's state?
Question: Here is the problem: I have several nodes running together, and they should work well for a long time. But I'm not sure whether they will break or not. So, is there anything that can be used to monitor the state of the nodes? If one breaks, it should show something or report that something is wrong. Originally posted by Tomas yuan on ROS Answers with karma: 56 on 2017-06-24 Post score: 0 Answer: You could create bonds. Originally posted by NEngelhard with karma: 3519 on 2017-06-24 This answer was ACCEPTED on the original site Post score: 0
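For readers unfamiliar with bonds: a bond is essentially a mutual heartbeat between two processes, so either side can detect that the other has died when the heartbeat stops. Below is a minimal, ROS-free Python sketch of that idea — the real mechanism is the bondpy/bondcpp API; the class and node names here are purely illustrative:

```python
import time

class HeartbeatMonitor:
    """Toy version of the idea behind ROS bonds: each node reports a
    periodic heartbeat, and a peer declares it broken after a timeout."""

    def __init__(self, timeout=1.0):
        self.timeout = timeout
        self.last_beat = {}

    def beat(self, node_name, now=None):
        # Record the most recent heartbeat time for this node.
        self.last_beat[node_name] = time.monotonic() if now is None else now

    def broken_nodes(self, now=None):
        # A node is "broken" if its last heartbeat is older than the timeout.
        now = time.monotonic() if now is None else now
        return [n for n, t in self.last_beat.items() if now - t > self.timeout]

m = HeartbeatMonitor(timeout=1.0)
m.beat("camera_node", now=0.0)   # last heard from 2.0 s before the check
m.beat("planner_node", now=1.5)  # last heard from 0.5 s before the check
print(m.broken_nodes(now=2.0))   # → ['camera_node']
```

In a real system each node would publish the heartbeat itself (which is exactly what the bond classes automate), rather than being ticked from outside as in this sketch.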
{ "domain": "robotics.stackexchange", "id": 28199, "tags": "ros, node" }
8 shades of grey (sliding puzzle)
Question: I've written a smaller version of 15 puzzle in C, using OpenGL 3.3 and freeglut. I've found a place you can play it online to get familiar. I'm new to OpenGL, so I have no idea if I'm doing this in the right way. Nevertheless, this game compiles and works as intended on my 64-bit HP COMPAQ laptop with Intel HD graphics card and lubuntu 15.04 (64-bit) OS installed. I would be glad if you would point out the major (and minor) mistakes I made to avoid them before moving on to bigger projects. ~200 CLOC + comments Makefile all: 8 8: main.o render.o gcc main.o render.o -lglut -lGLEW -lGL -o 8 main.o: gcc -c -std=c99 main.c render.o: gcc -c -std=c99 render.c clean: rm -f main.o rm -f render.o main.c /* 8 shades of grey - a sliding puzzle */ /* OpenGL dependencies */ #include <GL/glew.h> #include <GL/freeglut.h> /* Standard dependencies */ #include <stdlib.h> #include <stdio.h> /* Home dependencies */ #include "render.h" /* Constants */ #define WIN_TITLE "8 shades of grey" #define WIN_WIDTH 600 #define WIN_HEIGHT 600 int empty_tile_loc = 8; int board_loc_contents[9] = {0, 1, 2, 3, 4, 5, 6, 7, 8}; /* Swaps tiles */ void swap_tiles(int is_x, int diff) { empty_tile_loc += diff; board_loc_contents[empty_tile_loc - diff] = board_loc_contents[empty_tile_loc]; board_loc_contents[empty_tile_loc] = 8; if(is_x) { if(diff < 0) offsetx[board_loc_contents[empty_tile_loc - diff]] += 0.666; else offsetx[board_loc_contents[empty_tile_loc - diff]] -= 0.666; } else { if(diff < 0) offsety[board_loc_contents[empty_tile_loc - diff]] -= 0.666; else offsety[board_loc_contents[empty_tile_loc - diff]] += 0.666; } glutPostRedisplay(); } /* Keyboard arrow callback */ void kboard_arrow_callback(int key, int mouse_x, int mouse_y) { switch(key) { case GLUT_KEY_LEFT: if(empty_tile_loc % 3 != 0) swap_tiles(1, -1); break; case GLUT_KEY_RIGHT: if(empty_tile_loc % 3 != 2) swap_tiles(1, 1); break; case GLUT_KEY_UP: if(empty_tile_loc > 2) swap_tiles(0, -3); break; case GLUT_KEY_DOWN: 
if(empty_tile_loc < 6) swap_tiles(0, 3); break; } } /* The main function */ int main(int argc, char **argv) { /* Initializing freeGLUT */ glutInit(&argc, argv); glutInitContextFlags(GLUT_FORWARD_COMPATIBLE); glutInitContextProfile(GLUT_CORE_PROFILE); glutInitContextVersion(3, 3); glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE); glutInitWindowSize(WIN_WIDTH, WIN_HEIGHT); glutCreateWindow(WIN_TITLE); /* Initializing GLEW */ glewExperimental = GL_TRUE; /* to render VAO */ if(glewInit() != GLEW_OK) { fprintf(stderr, "GLEW failed to initialize!\n"); return EXIT_FAILURE; } /* Setting freeGLUT function callbacks */ glutDisplayFunc(draw_callback); glutSpecialFunc(kboard_arrow_callback); /* Initializing VBOs, VAO, shader program and variables... */ if(!draw_initialize()) { fprintf(stderr, "Drawing failed to initialize!\n"); draw_terminate(); return EXIT_FAILURE; }; /* Main loop! */ glutMainLoop(); /* Exiting... */ draw_terminate(); return EXIT_SUCCESS; } render.h #ifndef RENDER_H #define RENDER_H /* Amount of tiles */ #define NUM_BUFFERS 8 /* Shader variables */ float offsetx[NUM_BUFFERS]; float offsety[NUM_BUFFERS]; /* Self-explanatory public draw functions */ int draw_initialize(void); void draw_terminate(void); void draw_callback(void); #endif render.c #include "render.h" /* OpenGL dependencies */ #include <GL/glew.h> #include <GL/freeglut.h> /* Standard dependencies */ #include <string.h> #include <stdio.h> /* Size of infolog char array - used to report errors */ #define INFOLOG_SIZE 1024 /* Drawing data */ GLuint VAO; GLuint VBO_ids[NUM_BUFFERS]; GLfloat VBO0[] = {-1.00, 1.00,0,-.333, 1.00,0,-1.00, .333,0, -1.00, .333,0,-.333, .333,0,-.333, 1.00,0}; GLfloat VBO1[] = {-.333, 1.00,0, .333, 1.00,0,-.333, .333,0, -.333, .333,0, .333, .333,0, .333, 1.00,0}; GLfloat VBO2[] = { .333, 1.00,0, 1.00, 1.00,0, .333, .333,0, .333, .333,0, 1.00, .333,0, 1.00, 1.00,0}; GLfloat VBO3[] = {-1.00, .333,0,-.333, .333,0,-1.00,-.333,0, -1.00,-.333,0,-.333,-.333,0,-.333, .333,0}; GLfloat VBO4[] 
= {-.333, .333,0, .333, .333,0,-.333,-.333,0, -.333,-.333,0, .333,-.333,0, .333, .333,0}; GLfloat VBO5[] = { .333, .333,0, 1.00, .333,0, .333,-.333,0, .333,-.333,0, 1.00,-.333,0, 1.00, .333,0}; GLfloat VBO6[] = {-1.00,-.333,0,-.333,-.333,0,-1.00,-1.00,0, -1.00,-1.00,0,-.333,-1.00,0,-.333,-.333,0}; GLfloat VBO7[] = {-.333,-.333,0, .333,-.333,0,-.333,-1.00,0, -.333,-1.00,0, .333,-1.00,0, .333,-.333,0}; GLfloat *VBOs[NUM_BUFFERS] = { VBO0, VBO1, VBO2, VBO3, VBO4, VBO5, VBO6, VBO7 }; size_t VBO_sizes[] = { sizeof(VBO0), sizeof(VBO1), sizeof(VBO2), sizeof(VBO3), sizeof(VBO4), sizeof(VBO5), sizeof(VBO6), sizeof(VBO7) }; float offsetx[NUM_BUFFERS] = {0.00}; float offsety[NUM_BUFFERS] = {0.00}; float color_r[NUM_BUFFERS] = {0.00, .125, .250, .375, .500, .675, .750, .875}; float color_g[NUM_BUFFERS] = {0.00, .125, .250, .375, .500, .675, .750, .875}; float color_b[NUM_BUFFERS] = {0.00, .125, .250, .375, .500, .675, .750, .875}; /* Shader program data */ GLuint shader_prog_id; GLuint vshade_offsetx; GLuint vshade_offsety; GLuint fshade_color_r; GLuint fshade_color_g; GLuint fshade_color_b; const GLchar *vert_src = "#version 330\n" "uniform float offsetx;\n" "uniform float offsety;\n" "layout (location = 0) in vec3 Pos;\n" "void main() {\n" " gl_Position = vec4(Pos.x + offsetx, Pos.y + offsety, Pos.z, 1.0); }\n"; const GLchar *frag_src = "#version 330\n" "uniform float color_r;\n" "uniform float color_g;\n" "uniform float color_b;\n" "out vec4 FragColor;\n" "void main() {\n" " FragColor = vec4(color_r, color_g, color_b, 1.0); }\n"; /* Function declarations */ int draw_initialize(void); void draw_callback(void); void draw_terminate(void); void vbo_init(void); int shader_init(const GLchar **v_src, const GLchar **f_src, GLint *v_size, GLint *f_size); int find_shader_vars(void); int compile_shader(GLuint shader); void shader_err(GLuint id, GLchar *msg); /* Initializes drawing */ int draw_initialize(void) { glClearColor(1.0f, 1.0f, 1.0f, 1.0f); vbo_init(); GLint vsrcsize = 
strlen(vert_src); GLint fsrcsize = strlen(frag_src); if(!shader_init(&vert_src, &frag_src, &vsrcsize, &fsrcsize)) { fprintf(stderr, "Shader program failed to initialize!\n"); return 0; } if(!find_shader_vars()) { fprintf(stderr, "Could not find all shader variables!\n"); return 0; } return 1; } /* Draw callback */ void draw_callback(void) { glClear(GL_COLOR_BUFFER_BIT); glUseProgram(shader_prog_id); glEnableVertexAttribArray(0); for(int i = 0; i < NUM_BUFFERS; ++i) { glBindBuffer(GL_ARRAY_BUFFER, VBO_ids[i]); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL); glUniform1f(vshade_offsetx, offsetx[i]); glUniform1f(vshade_offsety, offsety[i]); glUniform1f(fshade_color_r, color_r[i]); glUniform1f(fshade_color_g, color_g[i]); glUniform1f(fshade_color_b, color_b[i]); glDrawArrays(GL_TRIANGLES, 0, VBO_sizes[i]/sizeof(GLfloat)); } glDisableVertexAttribArray(0); glutSwapBuffers(); } /* Draw termination */ void draw_terminate(void) { glDeleteProgram(shader_prog_id); } /* Initializes VBOs and VAO */ void vbo_init(void) { glGenVertexArrays(1, &VAO); glBindVertexArray(VAO); glGenBuffers(NUM_BUFFERS, VBO_ids); for(int i = 0; i < NUM_BUFFERS; ++i) { glBindBuffer(GL_ARRAY_BUFFER, VBO_ids[i]); glBufferData(GL_ARRAY_BUFFER, VBO_sizes[i], VBOs[i], GL_STATIC_DRAW); } } /* Prints shader program error */ void shader_prog_err(GLuint id, GLchar *msg) { GLchar infolog[INFOLOG_SIZE]; glGetProgramInfoLog(id, INFOLOG_SIZE, NULL, infolog); fprintf(stderr, "%s '%s'\n", msg, infolog); } /* Initializes shader program */ int shader_init(const GLchar **v_src, const GLchar **f_src, GLint *v_size, GLint *f_size) { GLint success; shader_prog_id = glCreateProgram(); GLuint v_shader = glCreateShader(GL_VERTEX_SHADER); GLuint f_shader = glCreateShader(GL_FRAGMENT_SHADER); glShaderSource(v_shader, 1, v_src, v_size); glShaderSource(f_shader, 1, f_src, f_size); if(!compile_shader(v_shader)) { fprintf(stderr, "Vertex shader compilation failed.\n"); glDeleteShader(v_shader); glDeleteShader(f_shader); 
return 0; } if(!compile_shader(f_shader)) { fprintf(stderr, "Fragment shader compilation failed.\n"); glDeleteShader(v_shader); glDeleteShader(f_shader); return 0; } glAttachShader(shader_prog_id, v_shader); glAttachShader(shader_prog_id, f_shader); glLinkProgram(shader_prog_id); glDetachShader(shader_prog_id, v_shader); glDetachShader(shader_prog_id, f_shader); glDeleteShader(v_shader); glDeleteShader(f_shader); glGetProgramiv(shader_prog_id, GL_LINK_STATUS, &success); if(!success) { shader_prog_err(shader_prog_id, "Error: failed to link shader program:"); return 0; } glValidateProgram(shader_prog_id); glGetProgramiv(shader_prog_id, GL_VALIDATE_STATUS, &success); if(!success) { shader_prog_err(shader_prog_id, "Error: failed to validate shader program:"); return 0; } return 1; } /* Finds location of uniform variables in shader program */ int find_shader_vars(void) { vshade_offsetx = glGetUniformLocation(shader_prog_id, "offsetx"); vshade_offsety = glGetUniformLocation(shader_prog_id, "offsety"); fshade_color_r = glGetUniformLocation(shader_prog_id, "color_r"); fshade_color_g = glGetUniformLocation(shader_prog_id, "color_g"); fshade_color_b = glGetUniformLocation(shader_prog_id, "color_b"); if(vshade_offsetx == -1 || vshade_offsety == -1) return 0; if(fshade_color_r == -1 || fshade_color_g == -1 || fshade_color_b == -1) return 0; return 1; } /* Compiles shader, returns if was succesfull, prints error msg if not */ int compile_shader(GLuint shader) { GLint success; glCompileShader(shader); glGetShaderiv(shader, GL_COMPILE_STATUS, &success); if(!success) { GLchar infolog[INFOLOG_SIZE]; glGetShaderInfoLog(shader, INFOLOG_SIZE, NULL, infolog); fprintf(stderr, "Failed to compile shader: '%s'\n", infolog); } return success; } Answer: Makefile Your Makefile isn't as useful as it could be, since it doesn't rebuild anything when you change the source: there's no dependency information in it. 
Also if you want to change your compiler or compiler flags, you'll need to do that all over the place. Adding a new source file requires adding three lines and modifying (at least) two others. So, I'd start off with: CC=gcc CFLAGS=-std=c99 -Wall -Wextra -pedantic These are standard variable names, people will be familiar with them. Then you could: SOURCES=main.c render.c LIBS=-lglut -lGLEW -lGL Now you've got all your source files and the necessary libraries in one place, easy to change down the line. OBJECTS=$(SOURCES:.c=.o) (See Substitution References.) This generates the name foo.o from foo.c: got all your object files now too. So you can: all: 8 8: $(OBJECTS) $(CC) $(CFLAGS) -o $@ $(OBJECTS) $(LIBS) clean: rm -f $(OBJECTS) (See Automatic Variables.) And... that's it. Make knows how to build .o files from .c files, and will use $(CC) and $(CFLAGS), so no need to create those rules yourself. If the built-in rule doesn't suit your need, create a generic one, e.g.: %.o: %.c @echo Building $@ from $^ @$(CC) $(CFLAGS) -c -o $@ $^ This isn't quite complete, in particular you have no build dependency on your header file. If you want to go further, though that would be a bit overkill here, look for generating dependencies automatically, e.g. generate dependencies for a makefile. Header float offsetx[NUM_BUFFERS]; float offsety[NUM_BUFFERS]; These should be declared extern. Code Unfortunately I don't know OpenGL, so can't really comment on that. But your bracing style is very strange for me, very hard to read and makes your code way too compact. I'd urge you to put the closing braces on their own line. And put a space between if (and while) and the conditional. 
Examples: int compile_shader(GLuint shader) { GLint success; glCompileShader(shader); glGetShaderiv(shader, GL_COMPILE_STATUS, &success); if (!success) { GLchar infolog[INFOLOG_SIZE]; glGetShaderInfoLog(shader, INFOLOG_SIZE, NULL, infolog); fprintf(stderr, "Failed to compile shader: '%s'\n", infolog); } return success; } void swap_tiles(int is_x, int diff) { empty_tile_loc += diff; board_loc_contents[empty_tile_loc - diff] = board_loc_contents[empty_tile_loc]; board_loc_contents[empty_tile_loc] = 8; if (is_x) { if (diff < 0) offsetx[board_loc_contents[empty_tile_loc - diff]] += 0.666; else offsetx[board_loc_contents[empty_tile_loc - diff]] -= 0.666; } else { if (diff < 0) offsety[board_loc_contents[empty_tile_loc - diff]] -= 0.666; else offsety[board_loc_contents[empty_tile_loc - diff]] += 0.666; } glutPostRedisplay(); } If you want to practice "ultra-dense" C, you could hop over to Programming Puzzles and Code Golf for a real challenge, but otherwise don't try and minimize your LOC count. int draw_initialize(void); void draw_callback(void); void draw_terminate(void); You've already got those in the header, not needed in render.c.
{ "domain": "codereview.stackexchange", "id": 15701, "tags": "c, game, opengl" }
My perfect shopping cart class
Question: Here is my shopping cart class! I want it to be perfect and never to have to implement it. What improvements would you do? New : IComparable interface IComparable { public function equals($object); } New : ICartItem interface ICartItem extends IComparable { public function setQuantity($quantity); public function getQuantity(); } New : ICart interface ICart { public function add(ICartItem $item); public function remove(ICartItem $item); public function getQuantity(ICartItem $item); public function setQuantity(ICartItem $item, $quantity); public function isEmpty(); public function getItems(); public function clear(); } New : Cart class SessionCart implements ICart { const IDENTIFIER = '_CART_'; protected $items; public function __construct(&$container = null) { if (is_null($container)) { if (session_id() == '') { session_start(); } if (!isset($_SESSION[self::IDENTIFIER])) { $_SESSION[self::IDENTIFIER] = array(); } $container = & $_SESSION[self::IDENTIFIER]; } $this->items = & $container; } public function add(ICartItem $item) { $index = $this->getIndexOfItem($item); if ($index == -1) { $this->items[] = $item; } else { $item = $this->items[$index]; $item->setQuantity($item->getQuantity() + 1); } return $item->getQuantity(); } public function remove(ICartItem $item) { $index = $this->getIndexOfItem($item); if ($index == -1) { throw new Exception('The item isn\'t inside the cart.'); } $item = $this->items[$index]; $quantity = $item->getQuantity() - 1; if ($quantity > 0) { $item->setQuantity($quantity); } else { unset($this->items[$index]); } return $quantity; } public function getQuantity(ICartItem $item) { $index = $this->getIndexOfItem($item); if ($index == -1) { return 0; } else { return $this->items[$index]->getQuantity(); } } public function setQuantity(ICartItem $item, $quantity) { if (($quantity = (int)$quantity) < 1) { throw new Exception('A positive quantity is required.'); } $index = $this->getIndexOfItem($item); if ($index == -1) { 
$item->setQuantity($quantity); $this->items[] = $item; } else { $item = $this->items[$index]; $item->setQuantity($quantity); } return $item->getQuantity(); } public function isEmpty() { return empty($this->items); } public function getItems() { return $this->items; } public function clear() { $this->items = array(); } private function getIndexOfItem(ICartItem $item) { foreach ($this->items as $key => $value) { if ($item->equals($value)) { return $key; } } return -1; } } Answer: General You should reconsider the naming of some methods. For example, interface IComparable { public function equals($obj); } is more clear about what it really does (comments are omitted to save space; of course, proper DocBlock comments should always be included with your code). This way, you can easily add other methods like isLessThan, isGreaterOrEqual, and so on. A method named compareTo as in $a->compareTo($b) I'd expect to be a valid callback for usort, thus returning -1 on $a < $b, 0 on $a == $b, or 1 on $a > $b. Cart One day, you might want to store your cart items in a database, so using $_SESSION directly is not optimal. I'd define a cart interface: interface ICart { public function add(IComparable $obj); public function remove(IComparable $obj); public function getQuantity(IComparable $obj); public function setQuantity(IComparable $obj, $qty); public function isEmpty(); public function getAll(); public function clear(); } class Cart implements ICart { const IDENTIFIER = '_CART_'; protected $container; public function __construct(&$container = null) { if (is_null($container)) { if (session_id() == '') { session_start(); } if (!isset($_SESSION[self::IDENTIFIER])) { $_SESSION[self::IDENTIFIER] = array(); } $container = &$_SESSION[self::IDENTIFIER]; } $this->container = &$container; } } With this approach, you can always build other cart implementations, like a database-aware one. The Cart class can be provided with an array, which will be used to store the data. 
If you want, you can pass in the $_SESSION superglobal directly, or use any other array. If omitted, a kind of namespace within the session variables (self::IDENTIFIER, '_CART_') is used to store the data. Since the session - if needed - is started within the constructor, you can rely on its existence in the subsequent methods. BTW: the existence of the $_SESSION array does not guarantee that a session has been started! Use session_id() to check that instead.) Internal Data Structure Since your data array can be accessed from outside the cart, the current data structure is prone to get out of sync. It is more robust to swap the indices. Then you can replace getIndex() with getEntry(), which makes the handling much easier. class Cart implements ICart { ... // see above public function add(IComparable $obj) { $entry = &$this->getEntry($obj); $entry['quantity']++; return $entry['quantity']; } public function remove(IComparable $obj) { $entry = &$this->getEntry($obj); $entry['quantity'] = max(0, --$entry['quantity']); return $entry['quantity']; } public function getQuantity(IComparable $obj) { $entry = &$this->getEntry($obj); return $entry['quantity']; } public function setQuantity(IComparable $obj, $qty) { $entry = &$this->getEntry($obj); if ($entry['quantity'] > 0) { $entry['quantity'] = (int) $qty; } return $entry['quantity']; } public function isEmpty() { $total = 0; foreach ($this->container as $entry) { $total += $entry['quantity']; } return $total <= 0; } public function getAll() { $cart = array(); foreach ($this->container as $entry) { if ($entry['quantity'] > 0) { $cart[] = $entry; } } return $cart; } public function clear() { $this->container = array(); } private function &getEntry(IComparable $obj) { foreach ($this->container as &$entry) { if ($obj->equals($entry['item'])) { return $entry; } } $entry = array( 'item' => $obj, 'quantity' => 0 ); $this->container[] = &$entry; return $entry; } } As you can see, all methods (excl. isEmpty()) become much simpler.
{ "domain": "codereview.stackexchange", "id": 3706, "tags": "php, e-commerce" }
Why can't allene be described with an allyl system?
Question: Why is it impossible in allene to have two pi-bonds in the same orientation? => Why isn't allene planar? (4 $p$ electrons are in two 2x2 $p$-orbitals) From my chemical intuition I would guess that there are 4 electrons in a $p$-system with three orbitals, but frankly I don't know how to allocate the lone pair electrons of $\ce{CS2}$ into the $sp$ and $p$ orbitals. => Why does $\ce{CS2}$ have a different geometry than $\ce{H2CCCH2}$? I've tried to rationalize this by allocating electrons to different hybrid orbitals (especially in $\ce{CS2}$ I was not sure how many electrons we have in the 2x3 $p$-orbitals): Answer: Why isn't allene planar? Let's start by looking at the structure of allene and seeing what happens if we make it planar. (image source) As the above image shows, the 3 carbons in allene are linear (form a straight line) and the terminal hydrogens lie in orthogonal planes (i.e. the planes containing the hydrogens are rotated 90° one from the other as shown in the bottom half of the drawing). Allene has two orthogonal pi bonds (i.e. the planes of the two pi bonds are also rotated 90° one from the other). Each one of those pi bonds contains 2 electrons. The terminal carbons are $\ce{sp^2}$ hybridized, while the central carbon is $\ce{sp}$ hybridized. Now let's twist one of the terminal methylenes by 90°. It doesn't matter which one, so let's just say we twist the one on the right, and twist it so that its red p-orbital lobe lines up with the red lobes on the other double bond. Now our molecule still has the 3 carbons arranged in a line, but all of the atoms - including the hydrogens - now lie in the same plane; we've made our molecule planar. To accomplish this we broke one pi bond. In our planar molecule we have 3 pi electrons (2 from the original pi bond on the left and 1 from the terminal methylene p-orbital that we rotated) in the 3 aligned p orbitals that exist in the plane of the screen. 
We are also left with 1 electron in the p-orbital on the central carbon that is perpendicular to the screen (the p-orbital that was part of the double bond on the right side of the molecule). So we took a molecule with two stable double bonds, rotated a p-orbital 90° and wind up with a planar molecule that has 3 aligned p-orbitals with 3 pi electrons and one electron by itself in a single p-orbital. This is a biradical, an allyl-like (it's just allyl-like because the 3 carbons in our molecule are still linear, whereas the 3 carbons in the allyl system are bent) radical and a radical in a p-orbital. Converting 2 double bonds to a biradical is energetically uphill, it does not happen at room temperature. Allene prefers to keep its 2 stable double bond structure rather than be a planar biradical molecule because the former geometry is much more stable. Why does $\ce{CS2}$ have another geometry than $\ce{H2CCCH2}$? It doesn't. Carbon disulfide and carbon dioxide have the same structure and hybridization as allene. It is just a little harder to notice because we don't have hydrogens on the ends of those molecules to "tip us off" that the two double bonds are orthogonal. Here is a picture of the orbitals in carbon dioxide (it's exactly the same in carbon disulfide) (image source) Just like in allene, we have 3 atoms arranged in a linear fashion and the central carbon is $\ce{sp}$ hybridized; there are two pi bonds that are orthogonal to one another.
{ "domain": "chemistry.stackexchange", "id": 3657, "tags": "orbitals, hybridization" }
A Satellite's Perspective
Question: If a planet is spinning east to west and there is a satellite spinning from west to east... Can the satellite travel at a speed sufficient to make the planet appear, from the vantage point of the satellite, to be rotating from west to east? What kind of calculations could be made to determine that speed? Answer: No. Suppose we take the sign convention to be that the velocity of the planet's surface (i.e. East to West) is positive, then because the satellite is moving in the opposite direction (West to East) its velocity will be negative. We need to consider angular velocities, $\omega$, given by $v = r\omega$, where $r$ is the distance from the centre of the planet, but this doesn't change the argument because $v$ and $\omega$ always have the same sign. The relative angular velocity of the planet's surface as seen from the satellite is: $$\omega_{rel} = \omega_{planet} - \omega_{satellite}$$ and because we've already decided that $\omega_{satellite}$ must have a negative sign, $\omega_{rel}$ is always positive regardless of how fast or slow the satellite is orbiting, i.e. the surface is always moving East to West. A more interesting question is if the planet and satellite are moving in the same direction (let's take this to be East to West). In this case there is a geostationary orbit at which the satellite is moving at exactly the same angular velocity as the planet, so the planet's surface appears to be stationary. If the satellite moves further away its angular velocity decreases and it will see the planet rotate East to West, but if it moves nearer its angular velocity increases and it will see the planet rotate West to East.
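Both calculations mentioned in the answer can be sketched numerically. The sign argument needs no physical constants at all, and the geostationary radius follows from setting $\omega_{satellite} = \omega_{planet}$ in Newton's gravity, giving $r = (GMT^2/4\pi^2)^{1/3}$. Earth-like values are assumed below purely for illustration:

```python
import math

# Sign argument: East->West taken positive, satellite West->East (negative).
T = 86164.1                      # s, sidereal rotation period (Earth-like)
omega_planet = 2 * math.pi / T
for omega_sat in (-1e-6, -omega_planet, -10 * omega_planet):
    omega_rel = omega_planet - omega_sat
    assert omega_rel > 0         # surface always seen moving East to West

# Geostationary radius, where omega_satellite = omega_planet:
G = 6.674e-11                    # m^3 kg^-1 s^-2
M = 5.972e24                     # kg, Earth's mass (assumed)
r_geo = (G * M * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(f"{r_geo / 1e3:.0f} km")   # ≈ 42164 km from the planet's centre
```

The loop confirms that no choice of counter-rotating orbital rate can flip the sign of $\omega_{rel}$, which is the whole content of the "No" above.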
{ "domain": "physics.stackexchange", "id": 2893, "tags": "kinematics, orbital-motion, speed" }
Why does a narrow line of falling water seem to fall drop by drop when you look quickly at it from top to bottom?
Question: When I was in the shower I noticed that when I look quickly at a single line of water falling from one pore (from top to bottom), it looks like separate drops ("drop drop") instead of continuous. Why is that? Answer: The stream of water is not as continuous and uniform as you might imagine. If you are looking at the falling water under an electric light run from the mains supply, there is a strobe effect because the intensity of the light varies due to the alternating voltage supply. You can enhance the effect by spreading out the fingers of a hand and then moving the vertically orientated hand up and down rapidly. With sunlight or a DC-powered light source you can see the effect because the water stream is not uniform and reflects light in different directions, changes which the response of your eye is fast enough to observe. Again the effect can be enhanced by observing the stream through a waving spread-out hand. The effect can easily be seen in a frame from a video.
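The strobe explanation can be made quantitative with the standard aliasing (wagon-wheel) formula: a pattern repeating at $f$ Hz, illuminated at $f_{strobe}$ Hz, appears to move at $f - f_{strobe}\,\mathrm{round}(f/f_{strobe})$. A small sketch — the 100 Hz figure assumes a 50 Hz mains supply, whose light intensity peaks twice per cycle, and the drop rates are made up for illustration:

```python
def apparent_frequency(f_true, f_strobe):
    """Frequency the eye perceives when a periodic pattern at f_true Hz
    is illuminated (i.e. sampled) at f_strobe Hz -- the wagon-wheel effect."""
    return f_true - f_strobe * round(f_true / f_strobe)

F_STROBE = 100.0  # Hz: 50 Hz mains, two intensity peaks per cycle

print(apparent_frequency(100.0, F_STROBE))  # 0.0  -> pattern appears frozen
print(apparent_frequency(98.0, F_STROBE))   # -2.0 -> slow backward drift
print(apparent_frequency(103.0, F_STROBE))  # 3.0  -> slow forward drift
```

So even a small mismatch between the drop rate and the flicker rate turns a fast-falling stream into slowly drifting, apparently discrete drops.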
{ "domain": "physics.stackexchange", "id": 97528, "tags": "water, vision" }
Why is a silent electric discharge used for the preparation of ozone from oxygen?
Question: I learnt that ozone is prepared by passing oxygen through a 'silent' electric discharge. Can someone please explain what is meant by a 'silent' discharge here, and why that helps? I couldn't get a satisfactory answer while Googling. Answer: According to this Wikipedia article: Dielectric-barrier discharge (DBD) is the electrical discharge between two electrodes separated by an insulating dielectric barrier. Originally called silent (inaudible) discharge and also known as ozone production discharge or partial discharge, it was first reported by Ernst Werner von Siemens in 1857. On right, the schematic diagram shows a typical construction of a DBD wherein one of the two electrodes is covered with a dielectric barrier material. The lines between the dielectric and the electrode are representative of the discharge filaments, which are normally visible to the naked eye. DBDs can be used to generate optical radiation by the relaxation of excited species in the plasma. The main application here is the generation of UV-radiation. Those excimer ultraviolet lamps can produce light with short wavelengths which can be used to produce ozone in industrial scales. Ozone is still used extensively in industrial air and water treatment. Early 19th-century attempts at commercial nitric acid and ammonia production used DBDs as several nitrogen-oxygen compounds are generated as discharge products.
{ "domain": "chemistry.stackexchange", "id": 7651, "tags": "inorganic-chemistry, electricity, ozone" }
Why $\frac{d}{dt}r_{a}\nabla_{a}U_{ab}+\frac{d}{dt}r_{b}\nabla_{b}U_{ba}=\frac{d}{dt}U_{ab}?$
Question: In classical mechanics for two mass particles $a$,$b$ we assume the symmetric potential arising from $F_{ab}$ and $F_{ba}$ given by $$U_{ab}(r)=-\int^{r}_{r_{0}}F_{ab}(r')dr'$$ and $$U_{ba}(r)=-\int^{r}_{r_{0}}F_{ba}(r')dr'$$ The book Mechanics by Florian Scheck gives $$\frac{d}{dt}r_{a}\nabla_{a}U_{ab}+\frac{d}{dt}r_{b}\nabla_{b}U_{ba}=\frac{d}{dt}U_{ab}$$ because $$[\frac{d}{dt}r_{a}\nabla_{a}+\frac{d}{dt}r_{b}\nabla_{b}]U_{ab}=\frac{d}{dt}U_{ab}$$ I am confused about how we get the summation form. My derivation goes as follows: Notice $F_{ab}=-\nabla_{b} U_{ab}$. Thus we should have $$\frac{d}{dt}U_{ab}=\frac{d}{dr}*\frac{dr}{dt}U_{ab}=\frac{dr}{dt}\frac{d}{dr}U_{ab}=\frac{dr}{dt}[-F_{ab}]=\frac{d}{dt}[r_{a}-r_{b}][-F_{ab}]=[\frac{d}{dt}r_{a}\nabla_{a}+\frac{d}{dt}r_{b}\nabla_{b}]U_{ab}$$ My simple question is just whether my derivation is correct, for I assume $r=r_{a}-r_{b}$ here. Answer: Yes, the $r$-argument really is $r_{ik}:=|r_i-r_k|$, as he writes two pages earlier at the beginning of "Systeme von endlich vielen Teilchen". But then you don't need the force to show the relation, it's just the chain rule, which makes derivatives of $U$ into a two-term expression, and notice that $U(r_{ik})=U(|r_i-r_k|)=U(|r_k-r_i|)=U(r_{ki})$. Also, it's not so good that you write "$\frac{d}{dt}r_{a}\nabla_{a}U_{ab}$" for "$\frac{dr_{a}}{dt}\nabla_{a}U_{ab}$", because it suggests that you mean "$\frac{d}{dt}(r_{a}\nabla_{a}U_{ab})$". Moreover, the book's name is not the author's name. And I would change the title to something readable, and by that I don't mean the problem with the total derivative, but a title which is a sentence, not a formula. E.g. "A problem deriving the energy conservation for radial two particle potentials".
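The chain-rule identity is easy to sanity-check numerically. The sketch below picks an example potential $U(r)=1/r$ with $r=|r_a-r_b|$ and arbitrary smooth trajectories (all choices are illustrative, not from the book), and compares $\frac{d}{dt}U$ against $\dot r_a\cdot\nabla_a U+\dot r_b\cdot\nabla_b U$ using central differences:

```python
import math

def U(ra_pt, rb_pt):
    """Example radial potential U(|r_a - r_b|) = 1/|r_a - r_b| (illustrative)."""
    return 1.0 / math.dist(ra_pt, rb_pt)

# Arbitrary smooth trajectories for the two particles:
def ra(t): return (math.cos(t), math.sin(t), 0.3 * t)
def rb(t): return (0.5 * t, -t, 1.0 + 0.1 * t)

h, t0 = 1e-6, 0.3

def grad(f, p):
    """Central-difference gradient of a scalar field f at the 3-point p."""
    g = []
    for i in range(3):
        pp, pm = list(p), list(p)
        pp[i] += h
        pm[i] -= h
        g.append((f(tuple(pp)) - f(tuple(pm))) / (2 * h))
    return g

a0, b0 = ra(t0), rb(t0)
va = [(x1 - x0) / (2 * h) for x1, x0 in zip(ra(t0 + h), ra(t0 - h))]
vb = [(x1 - x0) / (2 * h) for x1, x0 in zip(rb(t0 + h), rb(t0 - h))]

lhs = (U(ra(t0 + h), rb(t0 + h)) - U(ra(t0 - h), rb(t0 - h))) / (2 * h)
rhs = (sum(v * g for v, g in zip(va, grad(lambda p: U(p, b0), a0)))
       + sum(v * g for v, g in zip(vb, grad(lambda p: U(a0, p), b0))))

assert abs(lhs - rhs) < 1e-6   # dU/dt = (dr_a/dt)·∇_a U + (dr_b/dt)·∇_b U
print(lhs, rhs)
```

The two numbers agree to numerical precision, which is exactly the two-term chain-rule expansion the answer describes.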
{ "domain": "physics.stackexchange", "id": 3655, "tags": "classical-mechanics" }
What's the relation between two symmetry groups, if one has all the symmetry of the other and some more?
Question: Question: Consider two molecular symmetry groups, for example $C_s$ and $C_{2v}$. $C_s$ has one mirror plane, and two irreducible representations: the symmetric $A'$, and the antisymmetric $A''$. $C_{2v}$ has one mirror plane and one 2-fold axis; and four irreducible representations: $A_1$, $B_1$, $A_2$, $B_2$. It is clear that if a molecule has a $C_{2v}$ symmetry, it also has a $C_s$ symmetry. It also stands (in a selected coordinate system), that an orbital of a molecule with $C_{2v}$ symmetry that belongs to $A_1$ or $B_1$ also belongs to $A'$, and similarly, orbitals belonging to $A_2$ and $B_2$ also belong to $A''$. What mathematical term describes the relationship between $C_s$ and $C_{2v}$? Is $C_{2v}$ a subgroup of $C_s$? Is it rather a subset of it? Should we rather say that $C_{2v}$ implies $C_s$? What's the correct term for the relationship between $A_1$/$B_1$ and $A'$, and between $A_2$/$B_2$ and $A''$? Are they subspecies? Does $A_1$ imply $A'$? Background: I'm comparing some orbitals of two similar molecules. One is a root with $C_{2v}$ symmetry, and the other is an alkylated version of it with $C_s$ symmetry. For certain reasons, symmetry classifications of the molecules play an important part in the writeup. I want to explicitly point out that the symmetry and the irreducible representations of the root correspond to the ones of the alkylated molecule, even though they're labeled differently. While this sounds like a basic question, I found it surprisingly hard to find an answer. I've been struggling for almost a week at this point. All textbooks and online resources talk about how you can obtain $C_{2v}$ from $C_s$ with certain symmetry operations, and about what properties they have individually, but never mention the terms I'm looking for. I don't really have a strong background in group theory, so if I used any other terms incorrectly, I'm glad to be corrected.
Answer: What mathematical term describes the relationship between $C_s$ and $C_{2v}$? Is $C_{2v}$ a subgroup of $C_s$? Is it rather a subset of it? Should we rather say that $C_{2v}$ implies $C_s$? $C_\mathrm{s}$ is a subgroup of $C_\mathrm{2v}$, yes. I'm speaking very informally here, but a group is a set of elements, together with a binary operator $*$ that 'combines' the elements to form other elements. In the case of symmetry groups, these elements are symmetry operations. The binary operator is composition, i.e. $B * A$ means do symmetry operation $A$ first and then $B$ (or the other way round, it's just a matter of definition); and it turns out that $B * A$ is always also a symmetry element. So, a group has more structure than just a set. A set is just a collection of elements, it doesn't have to have a way to combine them. It's true that the elements of $C_\mathrm{s}$ are a subset of the elements of $C_\mathrm{2v}$. However, when referring to the groups themselves, subgroup is the correct term. I don't think 'implies' is appropriate here, that's typically used in a context like propositional logic. What's the correct term for the relationship between $A_1$/$B_1$ and $A'$, and between $A_2$/$B_2$ and $A''$? Are they subspecies? Does $A_1$ imply $A'$? $A$, $B$, etc. are representations of a group. Defining the word 'representation' is actually quite involved! I've written a bit about it here before, but it's a whirlwind tour, and I don't necessarily think it makes for good reading. If you're just looking for the correct term, though, then my understanding is that $A'$ is the restriction of the representation $A_1$ to the subgroup $C_\mathrm{s}$: https://en.wikipedia.org/wiki/Restricted_representation Writing this in a chemistry paper/report will probably throw most readers off. (I reckon most chemists probably don't understand the formal meaning of a subgroup, but would have an intuitive understanding of it. 
I don't think a restricted representation means anything to the typical chemist, though, except for the very mathematically inclined.) So if I were in your position, I'd probably write that there is a 'correspondence' or a 'correlation' between the two irreps. That's not super-precise language, in that it doesn't describe the nature of the 'correspondence', but it'll get the message across. (Of course, you could define it properly once at the beginning, and then stick to vague language afterwards.) For a different perspective, these kinds of relationships between representations are tabulated in so-called descent in symmetry tables, which you can find online. I also spent some time typing them up on Chemistry Meta, and in the original source from Atkins et al., similarly vague language is used: The following tables show the correlation between the irreducible representations of a group and those of some of its subgroups. In a number of cases more than one correlation exists between groups.
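For readers who want to see the subgroup relation concretely, here is a small numpy sketch (my own illustration; the axis conventions are the usual ones, with z along the $C_2$ axis and the mirror planes as xz and yz). It represents the four $C_\mathrm{2v}$ operations as 3×3 matrices and checks that the two operations shared with $C_\mathrm{s}$ are closed under composition, which is the defining requirement for a subgroup:

```python
import numpy as np

# 3x3 Cartesian representations of the C2v operations
# (z along the C2 axis; sigma_v = xz plane, sigma_v' = yz plane).
E   = np.diag([ 1,  1, 1])   # identity
C2  = np.diag([-1, -1, 1])   # 2-fold rotation about z
sxz = np.diag([ 1, -1, 1])   # reflection through the xz plane
syz = np.diag([-1,  1, 1])   # reflection through the yz plane

C2v = [E, C2, sxz, syz]
Cs  = [E, sxz]               # the operations Cs shares with C2v

def closed(group):
    # Every product of two group operations must again be in the group.
    return all(any(np.array_equal(a @ b, g) for g in group)
               for a in group for b in group)

assert closed(C2v)
assert closed(Cs)   # Cs is closed under composition, hence a subgroup of C2v
```

Dropping any single operation other than these (say, keeping only {E, C2, sxz}) would fail the closure test, which is why "subgroup" rather than "subset" is the meaningful term.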
{ "domain": "chemistry.stackexchange", "id": 17338, "tags": "terminology, symmetry" }
What are the ways to partition a large file that does not fit into memory so it can later be fed as training data?
Question: Is there any other way to partition a large file that does not fit into memory so it can be fed as training data, other than using Spark or Hadoop? Answer: Yes, of course, though in practice Spark and Hadoop make this easier. The idea is simple. Suppose your memory can hold 100,000 examples. Then split your data set into files of fewer than 100,000 examples each. The key (and most complex) step is how to train a classifier with those data. For gradient-descent-style optimization algorithms (GD, SGD and so on), most models (SVM, GBDT, naive Bayes, logistic regression, deep learning and so on) support this: load one file into RAM, feed it to the classifier, and repeat until you find good parameters. My code is very simple. Before each iteration, re-shuffling the order of samples and re-splitting the data set will boost the classifier.

import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.random((100, 2))
y = [1 if x[0] > x[1] else 0 for x in X]

lr_cly = LogisticRegression()

def stop_train(cly, X_s, y_s, threshold):
    scores = [cly.score(X_i, y_i) for X_i, y_i in zip(X_s, y_s)]
    return np.mean(scores) > threshold

def iter_train(cly, X, y, threshold=0.99, max_iter=10):
    # Two 50-example chunks standing in for files read from disk.
    X_s = [X[:50, :], X[50:, :]]
    y_s = [y[:50], y[50:]]
    iter_times = 0
    while iter_times <= max_iter:
        print("--------------")
        for X_i, y_i in zip(X_s, y_s):
            cly.fit(X_i, y_i)
            print(cly.score(X_i, y_i))
        if stop_train(cly, X_s, y_s, threshold):
            break
        iter_times += 1

iter_train(lr_cly, X, y)
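One caveat with the approach above: scikit-learn's fit retrains from scratch on each chunk, so each call forgets the previous chunks. For genuine out-of-core learning, scikit-learn's partial_fit API (available on SGDClassifier and a few other incremental estimators) updates the model in place instead. A minimal sketch with made-up data; the chunk size and epoch count are illustrative:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
X = rng.random_sample((200, 2))
y = (X[:, 0] > X[:, 1]).astype(int)   # a linearly separable toy problem

clf = SGDClassifier(random_state=0)
for epoch in range(20):                    # several passes over the "files"
    for start in range(0, len(X), 50):     # 50-row chunks standing in for files
        clf.partial_fit(X[start:start + 50], y[start:start + 50],
                        classes=[0, 1])    # all labels must be declared up front

score = clf.score(X, y)
```

Unlike repeated calls to fit, every chunk here contributes to the same model, so the memory footprint is bounded by the chunk size rather than the data set size.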
{ "domain": "datascience.stackexchange", "id": 4889, "tags": "machine-learning, bigdata" }
Speed of a falling pencil
Question: If you balance a pencil of length $d$ on its tip, and let it fall, how do you compute the final velocity of its other end just before it touches the ground? (Assume the pencil is a uniform one dimensional rod) Answer: The amount of kinetic energy in the pencil just before it touches the ground is equal to the gravitational potential energy it lost while falling. That quantity is easy to compute, being simply m*g*h, where h = d/2 (the center of mass of the pencil started at height d/2 and ends at height 0). If we assume the tip of the pencil hasn't moved (this becomes a much more complex problem if there is friction between the tip and the table, etc.) then all of that energy is now rotational kinetic energy, which is equal to (1/2)*I*ω². Here I is the moment of inertia about the pivot, which for a uniform rod rotating about its end is (1/3)*m*d², and ω is the angular velocity. The mass terms cancel out: setting m*g*(d/2) = (1/2)*(1/3)*m*d²*ω² gives ω = √(3g/d), so the linear velocity of the free end is v = ω*d = √(3gd).
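Carrying the energy balance through to numbers (a quick sketch; the 19 cm pencil length is just an illustrative value):

```python
import math

# Energy balance for a uniform rod pivoting about its tip:
#   m g (d/2) = (1/2) * (1/3 m d^2) * omega^2   ->   omega = sqrt(3 g / d)
# so the free end moves at v = omega * d = sqrt(3 g d).
g = 9.81          # m/s^2
d = 0.19          # m, a typical pencil length (illustrative)

omega = math.sqrt(3 * g / d)   # angular velocity at impact
v_end = omega * d              # linear speed of the free end
```

With these numbers the tip of a falling pencil hits the table at roughly 2.4 m/s; note the mass cancels, so only the length matters.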
{ "domain": "physics.stackexchange", "id": 9161, "tags": "homework-and-exercises, newtonian-mechanics, energy-conservation, angular-velocity" }
Earth Science Search Engine
Question: Now that Scirus has closed, which search engine do you all use? I know of arXiv and ADS, which only partially overlap with Earth Science related issues. Any other recommendations would be appreciated. Answer: The Elsevier service Geofacets "is designed to search for, and extract, maps, sections and other geographically-referenced geoscientific data from a very large and growing volume of published content", and as such is perhaps relevant -- though it focusses on georeferenced data rather than on scientific articles.
{ "domain": "earthscience.stackexchange", "id": 135, "tags": "resources, informatics" }
Tokenizer for my programming language
Question: Here's my attempt at porting the Lua codebase for my programming language to C++(11). This is just the first step, the tokenizer, and I wanted to remove all the bad performance / practices / code before passing to the next steps. I'm also still learning C++ as I go through this experience, so I wanted to get it reviewed to get feedback on how I am doing and to learn more. Here's a formal definition of what a token is in my programming language in a syntax I hope looks like EBNF: token ::= symbol | string | number | name; symbol ::= '{' | '}' | '[' | ']' | '(' | ')' | '.' | ',' | ';' | ':' | '$' | '?' | '!' | '#' | '_' | '\''; string ::= '"' {(any_character | string_escape)} '"'; string_escape ::= c_escape | ('\\' digit [digit] [digit]); number ::= [('+' | '-')] {digit} ('.' [digit] {digit}); digit ::= '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'; name ::= name_char {(name_char | digit)}; name_char ::= //all printable characters which aren't a symbol, a digit or " and ~ A single line comment starts with a ~ and ends with a new line character. A block comment instead starts with ~{ and ends with ~}. Each opening bracket must have a matching closing bracket (they can be nested): as an example, a string like ~{ ~{ ~{ ~} ~{ ~{ ~} ~} won't be accepted because there are some unmatched opening brackets. Strings aren't single-line: they can span multiple lines without the need to escape the newline with \ like in most languages.
But here's my actual code: Path/include/error.hpp #ifndef ERROR_HPP_INCLUDED #define ERROR_HPP_INCLUDED #include <string> #include <sstream> namespace patch { template <typename T> std::string to_string(const T &n) { std::stringstream stm; stm << n; return stm.str(); } } class Error { public: std::string message; Error(std::string); }; #endif//ERROR_HPP_INCLUDED Path/src/error.cpp #include "../include/error.hpp" #include <string> Error::Error(std::string new_message): message(new_message) {} Path/include/types.hpp #ifndef TYPES_HPP_INCLUDED #define TYPES_HPP_INCLUDED #include <string> enum token_type { //not sure about how I should name this thing none, symbol, number, name, string }; class Token { public: void* value = nullptr; token_type type = none; int line; Token() = default; Token(void*, token_type, int); ~Token(); }; std::string stringof_value(Token*); std::string stringof_type(Token*); #endif//TYPES_H_INCLUDED Path/src/types.cpp #include "../include/types.hpp" #include <string> #include "../include/error.hpp" Token::Token(void* new_value, token_type new_type, int new_line): value(new_value), type(new_type), line(new_line) {} Token::~Token() { if (!value) { switch (type) { case number: delete (double*) value; break; default: delete (std::string*) value; } } } std::string stringof_value(Token* t) { void* value = t->value; switch (t->type) { case number: return patch::to_string(*(double *) value); default: return *(std::string*) value; } } std::string stringof_type(Token* t) { switch (t->type) { case symbol: return std::string("symbol"); case number: return std::string("number"); case name: return std::string("name"); case string: return std::string("string"); case none: //(no Token should get here) throw t->type; } } Path/include/syntax.hpp #ifndef SYNTAX_HPP_INCLUDED #define SYNTAX_HPP_INCLUDED #include "error.hpp" enum syntax_subtype { escape_sequence, decimal_escape_sequence, unfinished_obj }; //escape_sequence template <syntax_subtype> Error 
SyntaxError(char, int); //decimal_escape_sequence template <syntax_subtype> Error SyntaxError(int, int); //unfinished_obj template <syntax_subtype> Error SyntaxError(const char*, int); #endif//SYNTAX_HPP_INCLUDED Path/src/syntax.cpp #include "../include/error.hpp" #include "../include/syntax.hpp" #include <string> template <> Error SyntaxError<escape_sequence>(char c, int line) { return Error( std::string("SyntaxError: invalid escape sequence '\\") + patch::to_string(c) + "' (at line " + patch::to_string(line) + ")." ); } template <> Error SyntaxError<decimal_escape_sequence>(int code, int line) { return Error( std::string("SyntaxError: decimal escape sequence too large (") + patch::to_string(code) + " used at line " + patch::to_string(line) + ")." ); } template <> Error SyntaxError<unfinished_obj>(const char* type, int line) { return Error( std::string("SyntaxError: unfinished ") + type + "(starting at line " + patch::to_string(line) + " until End Of File)." ); } Path/include/lexer.hpp #ifndef LEXER_HPP_INCLUDED #define LEXER_HPP_INCLUDED #include <list> #include "types.hpp" #include <string> std::list<Token*> lexer(std::string); #endif Path/src/lexer.cpp #include "../include/lexer.hpp" #include <list> #include "../include/types.hpp" #include <string> #include "../include/syntax.hpp" #include <cstdlib> bool is_symbol(char x) { switch(x) { case '{': case '}': case '[': case ']': case '(': case ')': case '.': case ',': case ';': case ':': case '$': case '?': case '!': case '#': case '_': case '~': case '"': case '\'': return true; default: return false; } } char escape(char seq, int line) { switch (seq) { case '"': return '"'; case '\\': return '\\'; case '0': return '\0'; case 'a': return '\a'; case 'b': return '\b'; case 'f': return '\f'; case 'n': return '\n'; case 'r': return '\r'; case 't': return '\t'; case 'v': return '\v'; default: throw SyntaxError<escape_sequence>(seq, line); } } std::list<Token*> lexer(std::string source) { std::list<Token*> tokens; int 
line = 1; const char* i = source.c_str(); char end = '\0'; auto next = [&i, &line] () -> void { if (*(++i) == '\n') { ++line; } }; //these lambda functions are just here to keep organizated the main loop, but it wouldn't be difficult to manually inline them within the loop if necessary //builds a new Token of type symbol, assumes first character is a valid symbol auto buildsymbol = [&i, &line, &next] () -> Token* { std::string* symbolstr = new std::string(1, *i); Token* new_symbol = new Token((void*) symbolstr, symbol, line); next(); return new_symbol; }; //builds a new Token of type number, assumes first character is either +, -, ., or a digit auto buildnumber = [&line, &i] () -> Token* { Token* new_number = new Token(nullptr, number, line); char* after_number = nullptr; double* value = new double(std::strtod(i, &after_number)); if (*value == 0.0 && i == after_number) { delete value; delete new_number; return nullptr; } new_number->value = (void*) value; i = after_number; return new_number; }; //builds a new Token of type name, assumes the first character is printable but not a digit or symbol auto buildname = [&line, &i, &next, &end] () -> Token* { Token* new_name = new Token(nullptr, name, line); std::string* value = new std::string(""); while (*i != end) { if (isspace(*i) || is_symbol(*i)) { break; } *value += *i; next(); } new_name->value = (void*) value; return new_name; }; //builds a new Token of type string, assumes first character is the opening " auto buildstring = [&line, &next, &i, &end] () -> Token* { Token* new_string = new Token(nullptr, string, line); std::string* value = new std::string(""); next(); bool finished = false; while (*i != end) { char to_push = *i; if (to_push == '\\') { next(); if (!isdigit(*i)) { try { *value += escape(*i, line); } catch (Error err) { delete value; delete new_string; throw; } next(); } else { std::string digits = ""; for (int d = 0; *i != end && d < 3 && isdigit(*i); d++) { digits += *i; next(); } int code = 
atoi(digits.c_str()); if (code > 255) { delete new_string; delete value; throw SyntaxError<decimal_escape_sequence>(code, line); } *value += (char) code; if (*i == '"') { finished = true; next(); break; } } } else { *value += to_push; next(); if (*i == '"') { finished = true; next(); break; } } } if (!finished) { delete value; Error err = SyntaxError<unfinished_obj>("string", new_string->line); delete new_string; throw err; } new_string->value = (void*) value; return new_string; }; //skips all whitespace characters (\n, \t, ...), assumes first character is a whitespace character auto skipspaces = [&i, &end, &next] () -> void { do { next(); } while (*i != end && !isprint(*i)); }; //skips a comment (single or multi line), assumes first character is ~ auto skipcomment = [&next, &i, &end, &line] () -> void { next(); //single line comment if (*i != '{') { while (*i != end) { if (*i == '\n') { next(); break; } next(); } } //multi line comment else { int line_start = line; int nest = 1; while (*i != end) { if (*i == '~') { next(); if (*i == '{') { ++nest; } else if (*i == '}') { --nest; if (!nest) { next(); break; } } } next(); } if (nest) { throw SyntaxError<unfinished_obj>("block comment", line_start); } } }; try { //main loop while (*i != end) { if (*i == '"') { tokens.push_back(buildstring()); } else if (*i == '~') { skipcomment(); } else if (isspace(*i)) { skipspaces(); } else if (is_symbol(*i)) { tokens.push_back(buildsymbol()); } else { Token* try_number = buildnumber(); if (try_number) { tokens.push_back(try_number); } else { tokens.push_back(buildname()); } } } } catch (Error err) { for (std::list<Token*>::iterator i = tokens.begin(), e = tokens.end(); i != e; ++i) { delete *i; } throw; } return tokens; } I won't include main.cpp here because it's just a test which prompts the user for input, tokenizes it and outputs the type and value of the tokens it got. Nothing to review really. 
Answer: Issues with EBNF string ::= '"' {(any_character | string_escape)} '"'; Here for "any_character" you probably meant any character except " but that is not made explicit. number ::= [('+' | '-')] {digit} ('.' [digit] {digit}); ^^^A^^^ ^^^^^^^B^^^^^^^^^^^^ For A: This means zero or more digits. That's fine. For B: That's zero or one digit followed by zero or more digits. So the following is a valid number +. I believe you meant: number ::= ['+' | '-'] {digit} ['.' digit {digit}]; Because this still allows + and - to be parsed as numbers. You really need to break this up into a couple of expressions to fully parse numbers. number ::= ['+' | '-'] NumberPart NumberPart ::= NumberInteger | NumberFloat NumberInteger ::= digit {digit} NumberFloat ::= {digit} '.' digit {digit}; You can do it in one line if you really must. But I find it easier to read when you split it up a bit. Note: This is still not as comprehensive as those done by the C language as a decimal point must be followed by a digit but it's pretty good. An equivalent FLEX file %x BLOCKCOMMENT %x LINECOMMENT /* You probably meant any character except " */ AnyStringCharacter [^"] Digit [0-9] CEscape \\. 
StringEscape {CEscape}|\\{Digit}{Digit}{Digit} Character {AnyStringCharacter}|{StringEscape} LiteralString "{Character}*" Sign [+-] NumberInteger {Digit}+ NumberFloat {Digit}*\.{Digit}+ NumberPart {NumberInteger}|{NumberFloat} LiteralNumber {Sign}?{NumberPart} IdentifierChar_First [^]{}().,;:$?!#_\\[0123456789~"] IdentifierChar {IdentifierChar_First}|{Digit} Identifier {IdentifierChar_First}{IdentifierChar}* LineComment [^\n]* BlockComment [^~\n]* EndOfLine \n %% <INITIAL>\~ {BEGIN(LINECOMMENT);} <INITIAL>\~\{ {BEGIN(BLOCKCOMMENT);} <BLOCKCOMMENT>\~\} {BEGIN(INITIAL);} <BLOCKCOMMENT>{EndOfLine} {/*++line;*/} <LINECOMMENT>{EndOfLine} {BEGIN(INITIAL);/*++line;*/} <BLOCKCOMMENT>{BlockComment} {/* Ignore Comment */} <BLOCKCOMMENT>\~ {/* Ignore ~ not followed by { */} <LINECOMMENT>{LineComment} {/* Ignore Comment */} \{ {return '{';} \} {return '}';} \[ {return '[';} \] {return ']';} \( {return '(';} \) {return ')';} \. {return '.';} \, {return ',';} \; {return ';';} \: {return ':';} \$ {return '$';} \? {return '?';} \! {return '!';} \# {return '#';} \_ {return '_';} \\ {return '\\';} {LiteralString} {return yy::lex::literal_string;} {LiteralNumber} {return yy::lex::literal_number;} {Identifier} {return yy::lex::identifier;} . {/* ERROR */} That's 67 lines compared to the nearly 500 for writing it yourself. And I am being generous as I could collapse all the symbols into a single line. This code is basically readable BNF so any computer scientist should be able to maintain it. Code Review There are so many of these lying around. You could have picked up a nearly standard one from boost boost::lexical_cast<> namespace patch { template <typename T> std::string to_string(const T &n) { std::stringstream stm; stm << n; return stm.str(); } } If this is an exception you should probably inherit from one of the standard exceptions (like std::runtime_error). class Error { public: std::string message; Error(std::string); // Pass by const reference.
// If it needs building from a literal it works. // But if already a string it will prevent the copy. }; Seriously. That could have been inlined in the header file. Error::Error(std::string new_message): message(new_message) {} Rather than use a void* to store your data use a union. class Token { public: void* value = nullptr; }; It expresses intent more clearly and also will remove all the casting issues that you are going to have in the rest of your code. Never use C casts. Always use a C++ cast. They are easier to spot in the code and express intent much better. delete (double*) value; delete (std::string*) value; Being easy to spot is a good thing. Because I want to check more closely the dangerous casts but ignore the simpler casts.
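The corrected number grammar from the EBNF discussion above can be cross-checked with a quick regex sketch (my own illustration, not part of the review). The alternation mirrors NumberInteger | NumberFloat, with the sign optional and a decimal point required to be followed by at least one digit:

```python
import re

# number ::= ['+' | '-'] (digit {digit} | {digit} '.' digit {digit})
NUMBER = re.compile(r'^[+-]?(\d+|\d*\.\d+)$')

assert NUMBER.match('42')
assert NUMBER.match('-3.14')
assert NUMBER.match('+.5')           # NumberFloat allows zero leading digits
assert not NUMBER.match('+.')        # rejected by the fixed grammar
assert not NUMBER.match('+')         # a bare sign is not a number
assert not NUMBER.match('42.')       # the decimal point needs a digit after it
```

Anchoring with ^ and $ matters here: without the anchors, the integer alternative would happily match the '42' prefix of '42.' and wrongly accept it.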
{ "domain": "codereview.stackexchange", "id": 22043, "tags": "c++, c++11, language-design, lexical-analysis" }
Confusion with cosine similarity
Question: In information retrieval when we calculate the cosine similarity between the query features vector and the document features vector we penalize the unseen words in the query. Example: if we have two documents with features vectors d1 = [1,1,1,0,0] d2 = [0,1,1,1,0] We can see that the two documents have the second feature, so if we want to search for the second feature with query vector: q = [0,1,0,0,0] then the cosine similarity between q and d1,d2 will be $1/√3$, and not 1, because we penalize the other features that we have not mentioned in the query. From this discussion I don't understand why penalizing them is a good idea. Is penalizing unseen features good? Is there another similarity measure that does not penalize them? Answer: Of course cosine similarity is not the right tool for searching for specific features in documents! For that you can use the dot product, as it ignores the features that are zero in the query vector. Cosine similarity, in the current context, is for measuring similarity between two documents, so all features matter. If a feature is absent from the query vector but present in a document, or vice versa, the two are not 100 percent similar; hence the penalty makes sense.
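The numbers in the question are easy to reproduce; a short numpy sketch showing both measures side by side:

```python
import numpy as np

d1 = np.array([1, 1, 1, 0, 0])
d2 = np.array([0, 1, 1, 1, 0])
q  = np.array([0, 1, 0, 0, 0])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Cosine similarity penalizes the document's features the query did not mention:
assert np.isclose(cosine(q, d1), 1 / np.sqrt(3))
assert np.isclose(cosine(q, d2), 1 / np.sqrt(3))

# The raw dot product only counts the features the query asks for:
assert q @ d1 == 1
assert q @ d2 == 1
```

Both documents score 1/√3 ≈ 0.577 under cosine similarity but a full 1 under the dot product, which is exactly the distinction the answer draws.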
{ "domain": "datascience.stackexchange", "id": 2207, "tags": "information-retrieval, cosine-distance, vector-space-models" }
Regarding Pose Graph Slam using Lidar
Question: So I'm trying to incorporate pose graph optimization into a mapping framework using a Lidar. So I basically have all the relative transformations between the pointclouds and I have pairs of pointclouds which satisfy my place recognition algorithm so I know which poses to complete the loop with, now the question I have is given that I only have these relative transformations (1) how do I calculate the error where $\hat{z}$ is the ground truth since I only have one set of measurements which are my R,t estimate from consecutive pointclouds? (2) How do I loop close using g2o? (3) What will my information matrix be isn't it supposed to be a property of the sensor itself? Thank you. Answer: In the context of a pose graph / factor graph SLAM backend, $e\left(i,j\right)$ actually reduces to the difference between the current estimate of pose $x_j$ and the predicted pose $\hat{x}_j$ given the current estimate of pose $x_i$ and the constraint / pose graph edge $c_{ij}$: $$ e\left(i,j\right)=x_j-\hat{x}_j=x_j-x_i\cdot c_{ij}=x_j-\left(R_{ij}\cdot x_i+t_{ij}\right) $$ where $R_{ij},t_{ij}$ are the parametrization of the transformation between poses $x_i$ and $x_j$ as given by constraint / edge $c_{ij}$. If you are using g2o, then this is not something you need to calculate yourself, you only need to provide an initial estimate for every pose, as well as the edge measurements ($R,t$) and their information matrices $\mathcal{I}$. Depending on whether you are operating in SE3 or SE2, add an edge (EdgeSE2 / EdgeSE3) between the two vertices where the loop closure was determined, using the registration data (transformation and information matrix) of the two associated point clouds to specify the edge constraint. 
I would strongly advise you to look into robust methods like Dynamic Covariance Scaling (Agarwal et al., ICRA 2013), Switchable Constraints (Sünderhauf et al., IROS 2012) and RRR (Latif et al., IJRR 2013), as erroneous loop closure constraints will wreak havoc on the shape of your optimization problem / pose graph. If you are talking about the information matrix of the edges / constraints between poses, then they will come from your registration algorithm, odometry system, IMU, etc. Anything that measures the geometric offset between two poses should provide an estimate of the certainty or uncertainty of this measurement. In Pose Graph SLAM you will typically have constraints generated by Odometry or IMU, and / or from registration algorithms such as ICP. Depending on your parametrization of the transformation, this uncertainty is represented as a covariance matrix $\Sigma^{n\times n}$ where $n$ is the dimensionality of your transformation representation (e.g. in 2D we would have $n=3$). The Fisher information matrix $\mathcal{I}$ required by e.g. g2o is simply the inverse of the covariance matrix $\Sigma$, i.e. $$ \mathcal{I}_{ij}={\Sigma_{ij}}^{-1} $$ Hope this helps.
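To make the residual and the information matrix concrete, here is a hypothetical numpy sketch in 2D. All numbers are made up, and for brevity the poses are treated as 2D positions only (a full SE(2) pose would also carry an orientation):

```python
import numpy as np

# Edge measurement c_ij: rotation R_ij and translation t_ij (e.g. from ICP).
theta = np.deg2rad(10.0)
R_ij = np.array([[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]])
t_ij = np.array([1.0, 0.2])

x_i = np.array([0.0, 0.0])               # current estimate of pose i
x_j = np.array([1.05, 0.18])             # current estimate of pose j

# Residual e(i,j) = x_j - (R_ij x_i + t_ij), as in the answer above.
e = x_j - (R_ij @ x_i + t_ij)

Sigma = np.diag([0.01, 0.01])            # registration covariance (assumed)
I_info = np.linalg.inv(Sigma)            # information matrix passed to g2o
chi2 = e @ I_info @ e                    # this edge's contribution to the cost
```

The optimizer sums such chi-squared terms over all edges; a confident measurement (small Sigma, large information) makes its residual expensive, which is also why a wrong loop closure with high confidence is so damaging.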
{ "domain": "robotics.stackexchange", "id": 1335, "tags": "slam, mapping, lidar" }
Bug with colors loading from xacro in Gazebo 7
Question: In my model.xacro file there is: <gazebo reference="top_plate"> <material> Gazebo/Black </material> </gazebo> And I only get a white model. If I edit the model in Gazebo7 and write 'Gazebo/Black' in Visual/Material/Script/Name it works. I want it to be black while loading. Does anybody have an idea how to fix this bug? My launch file: <launch> <include file="$(find gazebo_ros)/launch/empty_world.launch"> <!-- Note: the world_name is with respect to GAZEBO_RESOURCE_PATH environmental variable --> <arg name="world_name" value="worlds/empty.world"/> <arg name="paused" value="false"/> <arg name="use_sim_time" value="true"/> <arg name="gui" value="true"/> <arg name="headless" value="false"/> <arg name="debug" value="false"/> <extra_gazebo_args value="verbose" /> </include> <include file="$(find project_gazebo)/urdf/pioneer3at_kinect.xml" /> <node name="spawn_p3dx" pkg="gazebo_ros" type="spawn_model" args="-urdf -param robot_description -model robot_description -x -0.0 -y -0.0 -z 0.051" respawn="false" output="screen" /> </launch> My model file: <?xml version="1.0"?> <robot xmlns:sensor="http://playerstage.sourceforge.net/gazebo/xmlschema/#sensor" xmlns:controller="http://playerstage.sourceforge.net/gazebo/xmlschema/#controller" xmlns:interface="http://playerstage.sourceforge.net/gazebo/xmlschema/#interface" xmlns:xacro="http://ros.org/wiki/xacro" > <!-- Chassis --> <link name="base_link"> <inertial> <mass value="15.0"/> <origin xyz="0 0 0.10"/> <inertia ixx="0.3338" ixy="0.0" ixz="0.0" iyy="0.4783" iyz="0.0" izz="0.3338"/> </inertial> <visual name="base_visual"> <origin xyz="0 0 0.177" rpy="0 0 0"/> <geometry name="pioneer_geom"> <mesh filename="package://project_gazebo/meshes/p3at_meshes/chassis.stl"/> </geometry> <material name="ChassisRed"> <color rgba="0.851 0.0 0.0 1.0"/> </material> </visual> <collision> <origin xyz="0 0 0.177" rpy="0 0 0"/> <geometry> <mesh filename="package://project_gazebo/meshes/p3at_meshes/chassis.stl"/> </geometry> </collision>
</link> <gazebo reference="base_link"> <visual> <material> <script> <name value="Gazebo/Red"/> </script> </material> </visual> </gazebo> </robot> And Gazebo looks like: Originally posted by eed on Gazebo Answers with karma: 3 on 2017-04-12 Post score: 0 Original comments Comment by eugene-katsevman on 2017-04-12: it works in my gazebo 7.0.0. Are you sure you wrote the link name correctly? Comment by sloretz on 2017-04-12: What's the output look like with the --verbose argument given to gazebo? If you're launching gazebo with roslaunch the arg attribute can be used to provide it http://wiki.ros.org/roslaunch/XML/node Comment by eed on 2017-04-13: the link name is correct. It worked in ros-indigo and gazebo 2 @sloretz what exactly you mean can I just add "<extra_gazebo_args value="verbose" />" to the launch file? I posted my launch file above Comment by sloretz on 2017-04-13: empty_world.launch in gazebo_ros allows enabling verbose with either or Comment by eugene-katsevman on 2017-04-13: It won't help him much, though Answer: Seems to be a bug in sdformat. When converting from urdf to sdf, the material property is omitted in some occasions due to name mangling and link reduction. I'm digging into it further. Remove the name attribute from your visual tag UPD: And here they are: https://bitbucket.org/osrf/sdformat/issues/132/parser-does-not-handle-urdf-material https://bitbucket.org/osrf/sdformat/issues/86/incorrect-urdf-to-sdf-parsing-materials UPD: remove the name attribute from your visual tag Originally posted by eugene-katsevman with karma: 163 on 2017-04-13 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by eed on 2017-04-19: That worked for me. Thanks!
{ "domain": "robotics.stackexchange", "id": 4079, "tags": "gazebo-7" }
Why would evolution favor blue as a color in bugs?
Question: This question arose when I saw a metallic blue-black bug a while ago. I can't remember the species name now, but a quick Google search will show that such bugs do exist. I'm down with green and black - camouflage, right? But what is the case for blue color? What kind of an advantage does it provide so that they have survived to this date? Answer: Similarly, you could ask: Why are bees yellow? Why are some frogs so colourful? Why do widowbirds have such long tails? Why do peacocks have such a tail? There are a number of reasons why such traits could evolve. The main reasons are: Sexual selection (esp. intra-sexual selection when it comes to colors) Mimicry (esp. Batesian mimicry) Aposematism I'll let you investigate further through the linked Wikipedia articles. For the specific beetle you have seen, it will be impossible to address the question unless we know the species.
{ "domain": "biology.stackexchange", "id": 7455, "tags": "evolution" }
how to omit the step of setup.bash every time?
Question: I am new to ROS. I have a question. When I use ROS, I have to do this every time I start a new terminal window: $ source ~/catkin_ws/devel/setup.bash This step enables me to use "roscd". Do I have to do this step every time? Or does anyone know how to skip this step? If there is a website about this, please let me know. Originally posted by Izumi Hayashi on ROS Answers with karma: 1 on 2014-10-21 Post score: 0 Answer: Put this line in your .bashrc and it will be executed automatically every time you open a terminal. Originally posted by dornhege with karma: 31395 on 2014-10-21 This answer was ACCEPTED on the original site Post score: 2
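In practice that means appending the source line to ~/.bashrc once. The sketch below runs against a temporary stand-in file so it is safe to copy-paste anywhere; on a real system you would target ~/.bashrc directly:

```shell
# Append the workspace setup line (idempotently) to an rc file.
RC="$(mktemp)"                                     # stand-in for ~/.bashrc
LINE='source ~/catkin_ws/devel/setup.bash'
grep -qxF "$LINE" "$RC" || echo "$LINE" >> "$RC"   # add only if missing
grep -qxF "$LINE" "$RC" && echo "line present"
```

The grep guard keeps the line from being added twice if you run the command again.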
{ "domain": "robotics.stackexchange", "id": 19799, "tags": "ros, setup.bash" }
why the RFE to select features is changing when the os or computer is changed
Question: I used the feature selection method RFE to select features. I want to select 20 features (out of about 50 or more) with an LGBM model, as in the following code. But I found that the selected features (sel.ranking_) differ when the OS, OS version, or computer changes. I don't know what causes the change. Moreover, how can I fix the feature selection so it is reproducible? Thanks!

gbm = lgb.LGBMClassifier(
    boosting_type='gbdt',
    objective='binary',
    learning_rate=0.01,
    colsample_bytree=0.9,
    subsample=0.8,
    random_state=21,
    n_estimators=200,
    num_leaves=18)
sel = RFE(gbm, step=1, n_features_to_select=20, verbose=1)
sel.fit(X_train, y_train)
print(sel.ranking_)

Answer: Since you do not provide any information about the creation of X_train and y_train I cannot be sure that this is the issue, but I would guess that it has to do with some change in the order of the features. Tree-based algorithms' results are affected when the order of the input features changes, so make sure that this order does not change every time you run your code (e.g. you can sort the features alphabetically before RFE). You might find this discussion interesting.
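For what it's worth, the alphabetical-sort fix can be sketched with scikit-learn alone (a RandomForest stands in for the LGBM model here purely to keep the sketch dependency-free; the principle is the same):

```python
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
df = pd.DataFrame(X, columns=[f"f{i}" for i in range(8)])

def rfe_ranking(frame, target):
    """Map feature name -> RFE rank for a fixed-seed estimator."""
    est = RandomForestClassifier(n_estimators=50, random_state=21)
    sel = RFE(est, n_features_to_select=4, step=1)
    sel.fit(frame, target)
    return dict(zip(frame.columns, sel.ranking_))

# Columns presented in a different order, as can silently happen when a
# dict/set-backed pipeline runs on another OS or Python version:
shuffled = df[list(np.random.RandomState(1).permutation(df.columns))]

# Sorting the column names before RFE makes the input order canonical,
# so both runs see identical data and return identical rankings:
rank_a = rfe_ranking(df[sorted(df.columns)], y)
rank_b = rfe_ranking(shuffled[sorted(shuffled.columns)], y)
```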
{ "domain": "datascience.stackexchange", "id": 2591, "tags": "machine-learning, python, scikit-learn, feature-selection" }
Raising and lowering the indices of a perturbed metric
Question: I am looking at a metric which is defined as (Eq 2.4 Glampedakis & Babak) $$ g_{\mu \nu} = g_{\mu \nu}^K + \epsilon h_{\mu \nu}$$ where $g_{\mu \nu}^K$ is the original unperturbed metric (Kerr) and $h_{\mu \nu}$ some perturbation. Now, I know that the indices of the perturbation are raised/lowered using the unperturbed metric i.e. $$ h_{\alpha \beta} = g_{\alpha \gamma}^K g_{\beta \delta}^K h^{\gamma \delta}$$ (see e.g. application in Eq 2. of Narzilloev et al. 2019) My question is how to get the contravariant form of $g^{\mu \nu}$? Option 1 is that simply, $$ g^{\mu \nu} = g^{\mu \nu}_K + \epsilon h^{\mu \nu}$$ Option 2 considers $g^{\mu \nu}$ as an independent matrix and we invert it the usual way. However, these two options do not seem to be equivalent. For example consider the $g^{03}$ term. Option 1 tells us that $$ g^{03} = g^{0 3}_K + \epsilon h^{0 3}$$ but $h^{0 3} = 0$ and so $g^{03} = g^{0 3}_K$. But Option 2 tells us that $$ g^{03} = - \frac{g_{03}}{ \tilde{g}} = \frac{-1}{\tilde{g}} (g_{03}^{K} + \epsilon h_{03}) \ne g^{0 3}_K$$ where $\tilde{g} = g_{00} g_{33} - g_{03}^2$ and we have exploited the symmetries of the matrix (e.g. Eq 19.13 of these notes) Can anyone provide some guidance on where I am going wrong? Thanks. Answer: Oooh you're in bad luck. This is quite possibly one of the worst aspects of the perturbation formalism in general relativity, and quite possibly the reason the full non-linear theory isn't commonly used. There is no finite expression for the inverse metric of $g$. This is due to the binomial inverse theorem. What we want is to find the inverse of the matrix $g + h$. From this theorem, this is $$(g + h)^{-1} = g^{-1} - g^{-1} (I + hg^{-1})^{-1} h g^{-1}$$ From the form of the expansion, you can immediately see the issue: the definition is recursive. Any expansion will require further expansion. There are many forms you can expand it into (here's another one), but in the end it is always an infinite sum.
As we usually deal with this formalism to linearize the theory, though, it is usually cut off at the linear term. This will give us $$(g + h)^{-1} = g^{-1} - g^{-1} h g^{-1} = g^{\mu\nu} - h^{\mu\nu}$$
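The truncation is easy to sanity-check numerically on a toy symmetric matrix standing in for the metric (the values below are arbitrary): the error of the linear-order inverse shrinks as $\epsilon^2$.

```python
import numpy as np

# Toy symmetric "metric" g and perturbation h (arbitrary numbers).
g = np.array([[2.0, 0.3], [0.3, 1.5]])
h = np.array([[0.1, 0.05], [0.05, 0.2]])
g_inv = np.linalg.inv(g)

def linear_inverse(eps):
    # (g + eps*h)^(-1) ~ g^(-1) - eps * g^(-1) h g^(-1), to first order
    return g_inv - eps * g_inv @ h @ g_inv

e1 = np.abs(np.linalg.inv(g + 1e-2 * h) - linear_inverse(1e-2)).max()
e2 = np.abs(np.linalg.inv(g + 1e-3 * h) - linear_inverse(1e-3)).max()
# shrinking eps tenfold shrinks the error roughly a hundredfold,
# confirming the truncation error is O(eps^2)
```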
{ "domain": "physics.stackexchange", "id": 60987, "tags": "general-relativity, metric-tensor, tensor-calculus" }
Joint Time-Frequency Scattering structure & implementation?
Question: How does JTFS differ from wavelet time scattering in its computation graph, and how does FDTS discriminability work, at a lower level? How is it implemented in practice, and how can one visualize involved ops? Can the coefficients be normalized (for e.g. ML applications)? Answer: JTFS overview provided in this post. Computational structure JTFS breaks the tree structure of time scattering by convolving along frequency, exploiting the joint time-frequency geometry: $$ \begin{align} S_{(J, J_{fr})}^{{(0)}} x(t) &= x \star \phi_T(t), \\ S_{(J, J_{fr})}^{{(1)}} x(t, \lambda) &= |x \star \psi_\lambda^{{(1)}}| \star \phi_T, \\ S_{(J, J_{fr})}^{{(2)}} x(t, \lambda, \mu, l, s) &= ||x \star \psi_\lambda^{{(1)}}| \star \Psi_{{\mu, l, s}}| \star \Phi_{(T, F)}. \end{align} $$ $\Psi_{{\mu, l, s}}$ comprises of five kinds of joint wavelets: $$ \begin{align} \Psi_{{\mu, l, +1}}(t, \lambda) &= \psi_\mu^{{(2)}}(t) \psi_{{l, s}}(+\lambda) && \text{spin up bandpass} \\ \Psi_{{\mu, l, -1}}(t, \lambda) &= \psi_\mu^{{(2)}}(t) \psi_{{l, s}}(-\lambda) && \text{spin down bandpass} \\ \Psi_{{\mu, -\infty, 0}}(t, \lambda) &= \psi_\mu^{{(2)}}(t) \phi_F(\lambda) && \text{tm bandpass, fr lowpass} \\ \Psi_{{-\infty, l, 0}}(t, \lambda) &= \phi_T(t) \psi_{{l, s}}(\lambda) && \text{tm lowpass, fr bandpass} \\ \Psi_{{-\infty, -\infty, 0}}(t, \lambda) &= \phi_T(t) \phi_F(\lambda) && \text{joint lowpass} \\ \end{align} $$ and $\Phi_{(T, F)}$ optionally does temporal and/or frequential averaging: $$ \Phi_{(T, F)}(t, \lambda) = \phi_T(t) \phi_F(\lambda) $$ where $t =$ time $\lambda =$ log-frequency (wavelet frequency along $t$ is $2^\lambda$). Rate of sinusoidal oscillation ($=$ frequency) of the signal. $\mu =$ log-frequency (wavelet frequency along $t$ is $2^\mu$). Frequency of amplitude envelopes of sinusoidal oscillations of the signal (frequency of AM), logscaled. $l =$ log-quefrency (wavelet quefrency along $\lambda$ is $2^l$). 
Frequency of FM bands - rate of variation of frequential bands (independent components/modes) of varying widths, decay factors, and temporal evolutions. Each $\Psi$ captures a distinct structure: Spinned: FDTS Time lowpass: vertical time-freq geometry (e.g. spikes) Freq lowpass: horizontal time-freq geometry (e.g. pure sines) Joint lowpass: time-freq mean (dc component) Higher orders are computed by iterating on unaveraged slices (see "Energy analysis"). Note: $S^{(2)}$ with $\Phi$ is the implementation variant; one consistent with time scattering's breakdown omits it. $S^{(1)}$ also has an alternate breakdown. See "Energy analysis". Units & quefrency: frequency has physical units $\text{cycles/second} = \text{Hz}$, discrete units $\text{cycles/sample}$ "quefrency" has units $\text{cycles/octave}$. It's a dimensionless, log unit that encodes the physical units in offset of the log axis (1mm vs 1m is a constant shift in log space), and encodes the base as 2 by definition (octave == base 2). It's the rate of variation along the log-frequency axis. Pseudocode Unoptimized implementation is fairly straightforward; showing only up to spin up, unaveraged:

# first order time scattering
U1 = []
for p1t in psi1_t:
    U1.append(abs(conv(x, p1t)))

# joint scattering
U2 = []
for p2t in psi2_t:
    # second order time scattering
    U2_tm = []
    for u1 in U1:
        U2_tm.append(conv(u1, p2t))
    U2_tm = array(U2_tm)  # shape == (freq, time)

    # frequential scattering
    for p1f in psi1_fr_up:
        U2.append(abs(conv(U2_tm, p1f, axis=-2)))

vs Wavelet Time Scattering High quefrency coefficients approximate second order time scattering; hence, little discriminative power is lost (without freq averaging). The catch is much greater output size for same time averaging configuration. Separable convolutions The joint filters are defined separably, i.e. as product of two 1D functions.
This enables inheriting time scattering's properties along log-frequency, and FDTS discriminability via independent scaling along time and log-frequency. 2D Morlets, as in image wavelet scattering, aren't appropriate, as they introduce rotation, and a rotation does not preserve the relationship between time and frequency (a rotated scalogram generally isn't a scalogram of some other signal). Further, this yields significantly faster convolutions, and a separable interpretation. To understand FDTS sensitivity, consider echirp after second order time scattering: Each row is formed by convolving psi2_t with a U1 coefficient: Since echirp is a straight line in a scalogram, Gaussian-scaled along time, the result is as shown in the first image. Frequential scattering will then convolve with columns of U1 * psi2_t; taking the central slice: It's a Morlet! Morlet convolved with Morlet is another Morlet with twice the time and half the frequency support (Gauss * Gauss in freq). Note that the opposite spinned wavelet convolves to zero: conv(analytic, anti_analytic) == 0. This Morlet, or more importantly the odd symmetry of the imaginary part along frequency, is formed only by a relative misalignment of rows in the same direction - i.e. FDTS. Spin resonance is strongest when the time-log-frequency geometry is a straight line, but other forms of FDTS also work (e.g. linear chirp). Different FDTS rates are sloped differently in time-frequency, which will resonate with joint wavelets of different relative scalings along time and frequency. This is akin to rotation, but isn't same, as "orientation" is generated by relative stretching of the wavelet. Invertibility Scalogram is invertible within a global phase shift $e^{j\phi}$ (see "Invertibility"). Time scattering is invertible within a global phase shift, and time shift $T$. JTFS is invertible within a global phase shift, time shift, and log-frequency shift $F$. 
Loss of information within a log-frequency shift is much more severe than within a phase or time shift. For a moderately large $F$, two orders (which is where most implementations stop) are insufficient to recover information lost to averaging, and time-frequency ridges on scales $<F$ are blended together. Further, a log frequency shift is more nonlinear than a constant shift, which can significantly recharacterize a signal. In most cases, such loss is undesired, and unaveraged JTFS is preferred; appropriate amount of averaging, or any further manipulation, can instead be learned. Energy analysis Recall that for standard scattering, the energy decomposition follows a lowpass-bandpass split. For JTFS, it's the same - just in 2D, and a mixture: Right is pure bandpass, left is mix of lowpass and bandpass that gets filed under "lowpass", with pure lowpass at very center. The breakdown is hence:

S0: x * phi_t
U1: |x * psi_t|
S1: U1 * phi_t * phi_f, |U1 * phi_t * psi_f|, |U1 * psi_t * phi_f|
U2: |U1 * psi_t * psi_f|
S2: U2 * phi_t * phi_f, |U2 * phi_t * psi_f|, |U2 * psi_t * phi_f|
...

An alternative breakdown is S1: U1 * phi_t, |U1 * psi_t * phi_f| replacing phi_t * phi_f with phi_t * psi_f. Note the combined energies and frequency tilings are identical (but first inherits JTFS's properties), as it's phi_t -> phi_t * phi_f, phi_t * psi_f, where * phi_f and * psi_f combined equate to * 1 in frequency response. Likewise * psi_t * psi_f and * psi_t * phi_f are * psi_t's (second order time scattering) energy equivalents. Lastly the LP sum of the joint filterbank: This is with simple L1 norm, no energy renormalization, which exceeds the intended upper bound of 1. Deviations from 1 (tight frame) amplify quintically for JTFS, a square up from time scattering.
Up vs down Wavelet transform can be interpreted in two ways: convolution: response of a physical system to a stimulus, with impulse response being the wavelet filterbank cross-correlation: similarities of wavelets with the input (inner product with conjugate of wavelet) Implementation follows the former per FFT convolution, but the latter is the true mathematical formulation and favored for understanding coefficients. Since the wavelets are purely real in Fourier, the two can be shown to be equivalent. As such, the time-domain wavelet is what directly correlates (conj + inner product) with its input (the scalogram). Observing them in time domain, spin down is crest up (2D maxima are sloped diagonally up). Hence, down resonates with up, as in rising chirp with crest up. Padding Consistent with separable treatment, we pad convolutions along time and frequency, treating frequential slices the same as a signal input to time scattering. JTFS padding, along time or frequency, is tricky; it can break FDTS discriminability. The standard is zero padding for both, but I made reflect work. Normalization Ref 2 proposes a channel log normalization: $$ \widetilde{S}x_n(p) = \log{\left( 1 + \frac{\int_\mathbb{R}Sx_n(t, p) dt}{\epsilon \mu(p)} \right)} $$ where $$ \mu(p) = \underset{1\leq n \leq N}{\text{median}} \int_{\mathbb{R}} Sx_n (t, p) dt $$ is the median of path/channel $p$ aggregated over $N$ samples. This has the effect of "Gaussianizing" the histogram of coefficients, and works well in the proposed ML framework, with coefficients fed as frequency vectors and time integrated out. Time integration can be omitted in the general case, leaving $$ \widetilde{S}x_n(t, p) = \log{\left( 1 + \frac{Sx_n(t, p)}{\epsilon \mu(p)} \right)} $$ where $\epsilon$ is a user-chosen constant which controls the extent of log scaling (bringing different orders of magnitude closer together). This approach bears two problems, however: Relative scaling of paths changes.
If dataset lacks content in high frequencies, then high frequency features will be uninformative or be noise, while channel norm will bring their energy to same level as informative ones, lowering SNR. Breaks spatial coherence necessary for higher-dimensional convs, like scale-shifting every individual timestep for 1D convs Blue = before, orange = after. Path 1 (max amplitude 20) is clearly dominant to path 0 (max 1), but post-norm they're of ~equal intensity. A solution is sample normalization, but it's not a win-all. Log also has drawbacks, which PCEN improves on -- see further discussion. Example: pure sine Joint slices: Why are all spinned coefficients zero? Wavelet convolved with a constant is zero, since wavelets are zero mean and a constant == DC. |conv(psi1, sine)| = U1 = const (first order yields constants along rows), and conv(psi2, U1) = 0 (second order convolves over those constants). Only the phi_t * pairs are sensitive to constant horizontal geometries, as they're non-zero mean along time. Conclusion JTFS is doper
{ "domain": "dsp.stackexchange", "id": 10593, "tags": "wavelet, time-frequency, visualization, cwt, scattering" }
Why collisions are necessary for electrical conductivity?
Question: I have a simple question that came to my mind studying semiclassical electron dynamics: why must there be collisions between particles to have conductivity? If it's not necessary, what's the fundamental mechanism driving it? Answer: The current $J=\sigma E$. But more fundamentally $j=-env$ where e is the charge, n is the number of electrons per unit volume and $v$ is the velocity of the charge. So one can have currents without collisions (ballistic transport), but in almost any case if you are talking about a material - unless looking at transport over very short distances - it is convenient to think about the electrons as a gas of particles with very high velocity (Fermi sphere), and without an applied field there is no net current. If you apply an electric field the electrons will have a directional force due to the field and accelerate for some mean time between the collisions, with some mean free path. This shifts the Fermi sphere by the drift velocity, with some net motion of the electrons giving a current. Consider a typical electron at time zero. Let t be the time elapsed since its last collision. Its velocity at time zero will be its velocity v0 immediately after that collision plus the additional velocity −eEt/m it has subsequently acquired. Since we assume that an electron emerges from a collision in a random direction, there will be no contribution from v0 to the average electronic velocity, which must therefore be given entirely by the average of −eEt/m. However, the average of t is the relaxation time τ. So by substituting into the expression for the velocity you get $\sigma = \frac{ne^2\tau}{m}$, which relates the current to the applied electric field, but now has a term for the mass of the electron and the mean time between collisions. So why do we care? We didn't have to say anything about the details of the scattering or why the collisions occurred.
If we get into the details there could be different times for different types of collisions, but we can wrap them up into a mean collision time: $\frac{1}{\tau_{total}}=\frac{1}{\tau_{1}}+\frac{1}{\tau_{2}}$. It also turns out that you can define the mobility $\mu$ of the electron in terms of the drift velocity, $v_d=\mu E$, and with the band structure gain an understanding of how the drift velocity, effective mass and scattering are related. Ultimately conductivity is not as fundamental as $j=-env$, but it is a very convenient way to label a material property.
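As a sanity check on the Drude result, plugging in rough copper-like numbers (textbook approximations, not exact values) gives the right order of magnitude:

```python
# Drude estimate: sigma = n e^2 tau / m, with copper-like numbers
e   = 1.602e-19   # elementary charge, C
m   = 9.109e-31   # electron mass, kg
n   = 8.5e28      # conduction electrons per m^3 (copper, approximate)
tau = 2.5e-14     # mean time between collisions, s (room-T, approximate)

sigma = n * e**2 * tau / m   # conductivity, S/m -- comes out ~6e7 S/m
mobility = e * tau / m       # drift mobility |v_d| / E, m^2 / (V s)
```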
{ "domain": "physics.stackexchange", "id": 89180, "tags": "electromagnetism, electrons, collision" }
Finding the shortest time to go from one stop to another stop in a train system with many train lines with connection stations
Question: So I was thinking, after discussing with my friend the other day, how would I use something like breadth-first or depth-first search to find the fastest time to go from one station to another station if there are a number of different train lines in a subway system. They may cross over each other, maybe multiple times even. Train line in this case means a train that goes from a starting station to an ending station, with many station stops on the way, where certain station stops may give the option for the rider to swap to another "line" that also has its starting and stopping station. A simple example is a two-line subway system like so:

o : station
- : track

        Line 2 START
                  |
                  o
                  |
Line 1 START o -- o -- o -- o -- o -- o END
                  |
                 END

I live in Taipei, Taiwan, so you may reference the MRT train map there for reference but it may look fairly complicated. I was looking for tips for a simpler start and ways to work out a solution to a smaller problem first, but the MRT system is my friend's and my target goal. What I have thought of so far Have an array that holds each stop name along with the time it takes to reach it from the previous stop, representing a single train line, with each connection to another train line being a nested array at that item. Figure out how to tell the algorithm to look through each route to find the destination stop, adding the time taken along the way and comparing to find the shortest time. Things to Consider The time between each station is known to us. But for the purpose of this question just assume it's 5 minutes between each station. A train line has a starting stop and an ending stop, but on the way it may stop at a stop that connects to any other train line. Any connection to another train line may connect to another line - but access to this third line may be reached by taking a further more complicated route by switching train lines. Meaning if I try out all possibilities it may take looping routes and test forever.
It's possible to take a transfer from one line to another, and go either left or right on the new line, i.e. the trains are not one-way. So how is it possible to avoid our algorithm making loops, e.g.: If line 1 connects to line 2 on its third stop, and that connects to a third line, and then back to line 1 later on down line 3. Then if we were to try to find the shortest way to go from a stop in line 1 to another stop in line 1, the algorithm may test going from line 1 to line 2, taking the connection, then to line 3 and back to line 1 and back around. And also, how do we make it decide where to make the turns, and what is a better way to represent this problem and find the shortest time? Answer: Transform your input to a graph in which each vertex represents a stop on a line. If a stop is shared by multiple lines, then add one copy of the vertex per line and a new super-vertex connected to all such copies with edges of weight 2.5 (so there is a path of length 5 between any two of these stops). Add edges between consecutive stops of the same line with weights matching the travel times. Use Dijkstra's algorithm to find the shortest path between the starting stop and the last stop. The time required is $O(n\log n)$ where $n$ is the overall number of (not necessarily distinct) stops across the different lines. This is because it is possible to assign edges to vertices so that: 1) each edge is assigned to a "stop" vertex, and 2) each "stop" vertex has at most $2$ edges assigned (i.e., the one towards the "next" stop w.r.t. an arbitrary direction of the line the vertex belongs to, if any, and possibly the one to a super-vertex), showing that the number of edges is at most $2n$. EDIT: Here is the graph obtained by applying the above transformation to the example in the question. The edges of the horizontal line are in red, those of the vertical line are in blue. All the vertices representing stops on a specific line are white, while the only super-vertex is in black.
Black edges have a weight of $2.5$ each (not shown). The three vertices in the gray circle together represent the stop at the intersection of the horizontal line and the vertical line.
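A minimal sketch of the whole recipe in Python — the toy network below assumes line 1 has six stops, line 2 has two, they share line 1's third stop, and every hop takes 5 minutes, per the question's setup:

```python
import heapq

def dijkstra(adj, src):
    """Shortest distances from src; adj maps node -> [(neighbor, weight)]."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

adj = {}
def edge(a, b, w):
    adj.setdefault(a, []).append((b, w))
    adj.setdefault(b, []).append((a, w))

for i in range(5):                       # line 1: six stops, 5 min per hop
    edge(("L1", i), ("L1", i + 1), 5.0)
edge(("L2", 0), ("L2", 1), 5.0)          # line 2: two stops
# the shared station: one copy per line plus a super-vertex "X",
# each copy linked to "X" with weight 2.5, so a transfer costs 5 total
edge(("L1", 2), "X", 2.5)
edge(("L2", 1), "X", 2.5)

dist = dijkstra(adj, ("L1", 0))
# e.g. start of line 1 -> far end of line 2:
# two hops on line 1 (10) + transfer (5) + one hop on line 2 (5) = 20
```

Because transfers are ordinary weighted edges, Dijkstra handles loops and turn decisions automatically: a route that circles back through a third line is simply never shorter than the direct one.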
{ "domain": "cs.stackexchange", "id": 14831, "tags": "search-algorithms, search-trees" }
Can my Less CSS code be improved?
Question: I just started getting into the Less CSS framework and I am wondering if I'm doing it right and how the code can be improved if possible. global.less

//variables
@themeRed: #cc1111;
@themeColor: @themeRed;
@marginBottom: 10px;
@contentRadius: 5px;

//mixins
.border-radius(@args) {
  -webkit-border-radius: @args;
  -moz-border-radius: @args;
  border-radius: @args;
}

//styles
body {
  font: 12px Arial;
  color: #333;
  background: #e9eaed;
}

a {
  &:link, &:visited {
    color: @themeColor;
    text-decoration: none;
  }
  &:hover {
    border-bottom: solid 1px;
  }
  &:active {
    //color: lighten(@themeColor, 10%);
  }
}

#topbar {
  background: @themeColor;
  width: 100%;
  height: 2px;
}

#container {
  width: 750px;
  margin: 0 auto;
  padding: 10px 0;
}

#header {
  display: table;
  height: 100px;
  width: 100%;
  margin-bottom: @marginBottom;
  .wrapper {
    vertical-align: middle;
    display: table-cell;
    text-align: center;
  }
}

#sitename {
  font-size: 50px;
  font-weight: bold;
  text-transform: uppercase;
  text-shadow: 2px 2px @themeColor;
  a {
    color: #fff;
    border: none;
    text-decoration: none;
  }
}

.section-heading {
  font-size: 1.1em;
  padding-left: 10px;
  margin-bottom: @marginBottom;
  text-transform: uppercase;
  font-weight: bold;
  color: @themeColor;
  border-bottom: 1px solid #e5e5e5;
  line-height: 29px;
}

p {
  line-height: 1.5em;
  margin-bottom: 1em;
}

.last {
  margin-bottom: 0;
}

#nav {
  position: absolute;
  width: 90px;
  text-align: right;
  margin-top: @contentRadius;
  li {
    border-right: 2px solid lighten(@themeColor, 30%);
    padding: 2px 5px 2px 0;
    margin-bottom: 2px;
    &:hover {
      border-color: @themeColor;
    }
    &.active {
      border-color: @themeColor;
      font-weight: bold;
    }
  }
}

.content {
  width: 518px;
  padding: 15px;
  margin: 0 auto;
  margin-bottom: @marginBottom;
  background: #fff;
  border: 1px solid;
  border-color: #e5e6e9 #dfe0e4 #d0d1d5;
  border-bottom: solid 2px @themeColor;
  .border-radius(@contentRadius);
  box-shadow: 0 1px 5px #d1d1d1;
  a {
    border-bottom: 0;
    &:hover {
      text-decoration: underline;
    }
  }
}

#advertisement {
  margin-bottom:
@marginBottom;
  .adsbygoogle {
    display: block;
    margin: 0 auto;
  }
}

#footer {
  text-align: center;
  p {
    margin-bottom: 0;
  }
}

Answer: Looks just fine, I think. Only some really minor things come to mind: For font-family, it's good to always end with a generic font face style; in this case it'd be sans-serif. Arial is of course as "web-safe" as it gets, but still. Why does #nav have a position: absolute rule, but no actual positioning (left, top, etc.) values? I imagine you're doing some JS positioning, but you should still have a fallback position (or just set position: absolute in the JS, so it's only there when needed). You could consider using :last-of-type instead of explicitly giving elements a last class. Depends on what browsers you're targeting. Remove the a:active rule; you've already commented it out of the code. You may want to do a box-shadow function that does the vendor-prefix magic, similar to your border-radius. You have one @themeColor, but also a bunch of different hard-coded colors (e.g. border colors on .content). So while it's easy to change the @themeColor, you'll probably still have to go through and change every other color to better match the new theme. If you can, use the color functions to derive the rest of the palette from @themeColor or make a more comprehensive list of color variables (even if they're only used once, it's still nice to be able to define the whole color scheme in 1 place).
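The box-shadow mixin suggested above could look like this (same shape as the existing .border-radius mixin; the prefix set is a guess at what the targeted browsers need):

```less
.box-shadow(@args) {
  -webkit-box-shadow: @args;
  -moz-box-shadow: @args;
  box-shadow: @args;
}

// usage, replacing the bare box-shadow declaration in .content:
.content {
  .box-shadow(0 1px 5px #d1d1d1);
}
```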
{ "domain": "codereview.stackexchange", "id": 7055, "tags": "css, less-css" }
NP-complete problems not "obviously" in NP
Question: It occurred to me that in all the $\textbf{NP}$-completeness proofs I've read (that I can remember), it's always trivial to show that a problem is in $\textbf{NP}$, and showing that it is $\textbf{NP}$-hard is the... hard part. What are some $\textbf{NP}$-complete problems whose polynomial-time verifiers are highly non-trivial? Answer: There are at least four such $NP$-complete problems listed in the appendix of Garey and Johnson's COMPUTERS AND INTRACTABILITY: A Guide to the Theory of NP-Completeness. [AN6] NON-DIVISIBILITY OF A PRODUCT POLYNOMIAL INSTANCE: Sequences $A_i = \langle (a_i[1],b_i[1]), ..., (a_i[k],b_i[k]) \rangle,\ 1 \leqslant i \leqslant m,$ of pairs of integers, with each $b_i[j] \geqslant 0,$ and an integer $N$. QUESTION: Is $\displaystyle \prod_{i=1}^m \left( \displaystyle\sum_{j=1}^k a_i[j] \cdot z^{b_i[j]} \right)$ not divisible by $z^N - 1$? Reference: [Plaisted, 1977a], [Plaisted, 1977b]. Transformation from 3SAT. Proof of membership in NP is non-trivial and appears in the second reference. The other three I found in the appendix are: [LO13] MODAL LOGIC S5-SATISFIABILITY [LO19] SECOND ORDER INSTANTIATION [MS3] NON-LIVENESS OF FREE CHOICE PETRI NETS
{ "domain": "cs.stackexchange", "id": 4439, "tags": "complexity-theory, np-complete, np" }
Derive a GTF containing protein coding genes from a GTF file with Exons and CDS
Question: Why I need a compatible file I’m trying to run velocyto with the R package to analyse RNA velocity (cell trajectories) with single cell RNASeq data. I have performed single cell analysis from 10x Genomics data using cellranger. I have successfully aligned the reads to get loom files and imported these into R. I can get the velocity from these files by following the vignettes. However, I cannot reproduce the RNA velocity analysis based on “gene structure”. I’m working with a different organism to the example (not human or mouse) so the annotation data provided in the example does not work. I have a GTF file for the latest annotation of this organism. However, it only contains “exon” and “CDS” as features. This appears to be the source of the problem. The “find.ip.sites” function requires a GTF with “features” = “gene” and one of the “attributes” to be “protein_coding”. These requirements are hard-coded into the velocyto.R function. I have the following GTF files from the AtRTD2 dataset. The chromosome labels match the loom files in R. Chr1 TAIR10 exon 3631 3913 . + . transcript_id "AT1G01010.1"; gene_id "AT1G01010"; gene_name "AT1G01010"; Chr1 TAIR10 exon 3996 4276 . + . transcript_id "AT1G01010.1"; gene_id "AT1G01010"; gene_name "AT1G01010"; Chr1 TAIR10 exon 4486 4605 . + . transcript_id "AT1G01010.1"; gene_id "AT1G01010"; gene_name "AT1G01010"; Chr1 TAIR10 exon 4706 5095 . + . transcript_id "AT1G01010.1"; gene_id "AT1G01010"; gene_name "AT1G01010"; Chr1 TAIR10 exon 5174 5326 . + . transcript_id "AT1G01010.1"; gene_id "AT1G01010"; gene_name "AT1G01010"; Chr1 TAIR10 exon 5439 5899 . + . transcript_id "AT1G01010.1"; gene_id "AT1G01010"; gene_name "AT1G01010"; Chr1 TAIR10 CDS 3760 3913 . + 0 transcript_id "AT1G01010.1"; gene_id "AT1G01010"; gene_name "AT1G01010"; Chr1 TAIR10 CDS 3996 4276 . + 2 transcript_id "AT1G01010.1"; gene_id "AT1G01010"; gene_name "AT1G01010"; Chr1 TAIR10 CDS 4486 4605 . 
+ 0 transcript_id "AT1G01010.1"; gene_id "AT1G01010"; gene_name "AT1G01010"; Chr1 TAIR10 CDS 4706 5095 . + 0 transcript_id "AT1G01010.1"; gene_id "AT1G01010"; gene_name "AT1G01010"; Chr1 TAIR10 CDS 5174 5326 . + 0 transcript_id "AT1G01010.1"; gene_id "AT1G01010"; gene_name "AT1G01010"; Chr1 TAIR10 CDS 5439 5630 . + 0 transcript_id "AT1G01010.1"; gene_id "AT1G01010"; gene_name "AT1G01010"; Chr1 Araport11 exon 6788 7069 . - . transcript_id "AT1G01020_P2"; gene_id "AT1G01020"; gene_name "AT1G01020"; Note "ARV1"; Chr1 Araport11 exon 7157 7450 . - . transcript_id "AT1G01020_P2"; gene_id "AT1G01020"; gene_name "AT1G01020"; Note "ARV1"; Chr1 Araport11 exon 7564 7649 . - . transcript_id "AT1G01020_P2"; gene_id "AT1G01020"; gene_name "AT1G01020"; Note "ARV1"; Chr1 Araport11 exon 7762 7835 . - . transcript_id "AT1G01020_P2"; gene_id "AT1G01020"; gene_name "AT1G01020"; Note "ARV1"; Chr1 Araport11 exon 7942 7987 . - . transcript_id "AT1G01020_P2"; gene_id "AT1G01020"; gene_name "AT1G01020"; Note "ARV1"; Chr1 Araport11 exon 8236 8325 . - . transcript_id "AT1G01020_P2"; gene_id "AT1G01020"; gene_name "AT1G01020"; Note "ARV1"; Chr1 Araport11 exon 8417 8464 . - . transcript_id "AT1G01020_P2"; gene_id "AT1G01020"; gene_name "AT1G01020"; Note "ARV1"; Chr1 Araport11 exon 8571 8737 . - . transcript_id "AT1G01020_P2"; gene_id "AT1G01020"; gene_name "AT1G01020"; Note "ARV1"; Chr1 Araport11 CDS 7315 7450 . - 1 transcript_id "AT1G01020_P2"; gene_id "AT1G01020"; gene_name "AT1G01020"; Note "ARV1"; Chr1 Araport11 CDS 7564 7649 . - 0 transcript_id "AT1G01020_P2"; gene_id "AT1G01020"; gene_name "AT1G01020"; Note "ARV1"; Chr1 Araport11 CDS 7762 7835 . - 2 transcript_id "AT1G01020_P2"; gene_id "AT1G01020"; gene_name "AT1G01020"; Note "ARV1"; Chr1 Araport11 CDS 7942 7987 . - 0 transcript_id "AT1G01020_P2"; gene_id "AT1G01020"; gene_name "AT1G01020"; Note "ARV1"; Chr1 Araport11 CDS 8236 8325 . 
- 0 transcript_id "AT1G01020_P2"; gene_id "AT1G01020"; gene_name "AT1G01020"; Note "ARV1"; The specifications for a GTF or GFF2 are: Fields Fields must be tab-separated. Also, all but the final field in each feature line must contain a value; "empty" columns should be denoted with a '.' seqname - name of the chromosome or scaffold; chromosome names can be given with or without the 'chr' prefix. Important note: the seqname must be one used within Ensembl, i.e. a standard chromosome name or an Ensembl identifier such as a scaffold ID, without any additional content such as species or assembly. See the example GFF output below. source - name of the program that generated this feature, or the data source (database or project name) feature - feature type name, e.g. Gene, Variation, Similarity start - Start position of the feature, with sequence numbering starting at 1. end - End position of the feature, with sequence numbering starting at 1. score - A floating point value. strand - defined as + (forward) or - (reverse). frame - One of '0', '1' or '2'. '0' indicates that the first base of the feature is the first base of a codon, '1' that the second base is the first base of a codon, and so on.. attribute - A semicolon-separated list of tag-value pairs, providing additional information about each feature. What I’m looking for Is there a way to derive a compatible GTF file containing protein coding genes from the exons and CDS? I want to produce a GTF which contains genes in the features and protein_coding in the attributes. Is it possible to do this with existing tools or scripts? What I’ve tried so far I can modify the source code of the “find.ip.sites” function to run on my GTF file with these features missing. However, this requires running internal functions from the package written in Rcpp and means my workflow override future updates to the package. 
Running the rest of the vignette returns errors as no introns long enough have been identified (setting the thresholds lower is also incompatible with the general linear models called). Therefore I think it is better to generate a compatible GTF or GFF3 file rather than altering the source code of the functions. While intended for GTF files, the package functions can import GFF3 files, despite being described for GTF in the documentation. I’ve tried generating a GFF3 file using gffread from Cufflinks and replacing “feature” = “mRNA” with “gene” and adding “protein_coding”. This also returns errors when running the velocity algorithm. It does not work for either the GTF file used as an input for cellranger or the file generated by it. Is there a way to annotate protein coding genes and intron/exon boundaries based on a GTF file containing only exon and CDS annotations? With cufflinks 2.2.1, a GFF3 was generated: gffread -E AtRTD2_19April2016.gtf -o- > AtRTD2_19April2016.gff sed -i '/mRNA/s/gene_name=AT/gene_type=protein_coding;gene_name=AT/g' AtRTD2_19April2016.gff sed -i 's/mRNA/gene/g' AtRTD2_19April2016.gff This is the GFF3 file that I've tried: # gffread -E AtRTD2_19April2016.gtf -o- ##gff-version 3 Chr1 TAIR10 gene 3631 5899 . + . ID=AT1G01010.1;geneID=AT1G01010;gene_type=protein_coding;gene_name=AT1G01010 Chr1 TAIR10 exon 3631 3913 . + . Parent=AT1G01010.1 Chr1 TAIR10 exon 3996 4276 . + . Parent=AT1G01010.1 Chr1 TAIR10 exon 4486 4605 . + . Parent=AT1G01010.1 Chr1 TAIR10 exon 4706 5095 . + . Parent=AT1G01010.1 Chr1 TAIR10 exon 5174 5326 . + . Parent=AT1G01010.1 Chr1 TAIR10 exon 5439 5899 . + . Parent=AT1G01010.1 Chr1 TAIR10 CDS 3760 3913 . + 0 Parent=AT1G01010.1 Chr1 TAIR10 CDS 3996 4276 . + 2 Parent=AT1G01010.1 Chr1 TAIR10 CDS 4486 4605 . + 0 Parent=AT1G01010.1 Chr1 TAIR10 CDS 4706 5095 . + 0 Parent=AT1G01010.1 Chr1 TAIR10 CDS 5174 5326 . + 0 Parent=AT1G01010.1 Chr1 TAIR10 CDS 5439 5630 . + 0 Parent=AT1G01010.1 Chr1 Araport11 gene 6788 8737 . + . 
ID=AT1G01020_P2;geneID=AT1G01020;gene_type=protein_coding;gene_name=AT1G01020 Chr1 Araport11 exon 6788 7069 . - . Parent=AT1G01020_P2 Chr1 Araport11 exon 7157 7450 . - . Parent=AT1G01020_P2 Chr1 Araport11 exon 7564 7649 . - . Parent=AT1G01020_P2 Chr1 Araport11 exon 7762 7835 . - . Parent=AT1G01020_P2 Chr1 Araport11 exon 7942 7987 . - . Parent=AT1G01020_P2 Chr1 Araport11 exon 8236 8325 . - . Parent=AT1G01020_P2 Chr1 Araport11 exon 8417 8464 . - . Parent=AT1G01020_P2 Chr1 Araport11 exon 8571 8737 . - . Parent=AT1G01020_P2 Chr1 Araport11 CDS 7315 7450 . - 1 Parent=AT1G01020_P2 Chr1 Araport11 CDS 7564 7649 . - 0 Parent=AT1G01020_P2 Chr1 Araport11 CDS 7762 7835 . - 2 Parent=AT1G01020_P2 Chr1 Araport11 CDS 7942 7987 . - 0 Parent=AT1G01020_P2 Chr1 Araport11 CDS 8236 8325 . - 0 Parent=AT1G01020_P2 The specifications for a GFF3 are: Fields Fields must be tab-separated. Also, all but the final field in each feature line must contain a value; "empty" columns should be denoted with a '.' seqid - name of the chromosome or scaffold; chromosome names can be given with or without the 'chr' prefix. Important note: the seq ID must be one used within Ensembl, i.e. a standard chromosome name or an Ensembl identifier such as a scaffold ID, without any additional content such as species or assembly. See the example GFF output below. source - name of the program that generated this feature, or the data source (database or project name) type - type of feature. Must be a term or accession from the SOFA sequence ontology start - Start position of the feature, with sequence numbering starting at 1. end - End position of the feature, with sequence numbering starting at 1. score - A floating point value. strand - defined as + (forward) or - (reverse). phase - One of '0', '1' or '2'. '0' indicates that the first base of the feature is the first base of a codon, '1' that the second base is the first base of a codon, and so on. 
attributes - A semicolon-separated list of tag-value pairs, providing additional information about each feature. Some of these tags are predefined, e.g. ID, Name, Alias, Parent - see the GFF documentation for more details. Answer: In your case, I would definitely suggest following @Emily_Ensembl's advice and using the Arabidopsis GTF from Ensembl. But for future reference, if an Ensembl GTF wasn't available, you could build something like this using the gtf class from cgat. The cgat module and dependencies can be installed by following the instructions here. Specifically, you can download and run their installation script. This will set up a dedicated conda environment to run the versions of dependencies needed. Further documentation can be found here. # download installation script: curl -O https://raw.githubusercontent.com/cgat-developers/cgat-apps/master/install.sh # install the development version (recommended, no production version yet): bash install.sh --devel This requires python3 and anaconda to install. You may need to run this script in the conda environment: # show available environments conda info --envs # activate by path source activate /home/user/local/bin/conda-install/envs/cgat-a Then run the following as a python3 script. Copy the contents into a file such as convert_gtf.py. 
Then run it in the terminal on your gtf file: python convert_gtf.py genes.gtf > genes_new.gtf

# python 3.6.4
import sys

from cgat import GTF
from cgatcore import iotools

for gene in GTF.flat_gene_iterator(
        GTF.iterator(iotools.open_file(sys.argv[1])), strict=False):

    gene_start = min(e.start for e in gene)
    gene_end = max(e.end for e in gene)

    if any(e.feature == "CDS" for e in gene):
        gene_type = "protein_coding"
    else:
        gene_type = "non_coding"

    for exon in gene:
        exon.setAttribute("gene_biotype", gene_type)

    gene_line = GTF.Entry().fromGTF(gene[0])
    gene_line.feature = "gene"
    gene_line.start = gene_start
    gene_line.end = gene_end

    # It's not clear what the transcript_id for a gene line is.
    # Technically speaking, Ensembl GTFs do not follow the GTF
    # standard as GTF entries must include transcript_ids, but
    # the ensembl ones don't.
    if hasattr(gene_line, "gene_name"):
        gene_line.attributes["gene_name"] = gene[0].gene_name
    else:
        gene_line.attributes["gene_name"] = gene[0].gene_id
    gene_line.attributes["gene_biotype"] = gene_type

    print(str(gene_line))

    for exon in gene:
        print(str(exon))
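If installing cgat is not an option, the same idea — group features by gene_id, take the min/max span, and call a gene protein_coding when any of its features is a CDS — can be sketched with the standard library alone. This is only a rough sketch: the attribute names (`gene_id`, `gene_biotype`) follow Ensembl-style conventions and may need adapting to whatever the downstream tool expects, and it assumes well-formed tab-separated GTF input.

```python
import re
from collections import defaultdict

def add_gene_lines(gtf_lines):
    """Group exon/CDS records by gene_id and emit a 'gene' line per gene."""
    genes = defaultdict(list)
    for line in gtf_lines:
        if line.startswith('#') or not line.strip():
            continue
        fields = line.rstrip('\n').split('\t')
        gene_id = re.search(r'gene_id "([^"]+)"', fields[8]).group(1)
        genes[gene_id].append(fields)

    out = []
    for gene_id, feats in genes.items():
        start = min(int(f[3]) for f in feats)
        end = max(int(f[4]) for f in feats)
        # a gene with at least one CDS feature is treated as protein coding
        biotype = 'protein_coding' if any(f[2] == 'CDS' for f in feats) else 'non_coding'
        first = feats[0]
        attrs = f'gene_id "{gene_id}"; gene_biotype "{biotype}";'
        out.append('\t'.join([first[0], first[1], 'gene', str(start), str(end),
                              '.', first[6], '.', attrs]))
        for f in feats:
            # tag each original feature with the derived biotype as well
            out.append('\t'.join(f[:8] + [f[8].rstrip() + f' gene_biotype "{biotype}";']))
    return out
```

Reading the input file line by line and writing `add_gene_lines(...)` out again would give a GTF with derived gene records; whether that satisfies a particular consumer (e.g. the velocity package) would still need testing.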
{ "domain": "bioinformatics.stackexchange", "id": 813, "tags": "scrnaseq, sequence-annotation, single-cell, gtf, 10x-genomics" }
Factor of 4 (or 2) in the gravitoelectromagnetic (GEM) Lorentz-force law. Which is correct? Why is it there?
Question: I realize that the Gravitoelectromagnetic equations (GEM) are derived from the Einstein field equation (EFE) in the degenerate case of reasonably flat spacetime, which is the case for the propagation of gravitational waves in free space reasonably far away from nasties such as black holes or neutron stars or the like. Now, I understand Maxwell's equations pretty well, and how to derive EM radiation (at the wave speed of $c$) from them. I also understand the electromagnetic force as a manifestation of the sole electrostatic force, but with the consequences of special relativity taken into consideration. So, in both cases, EM and GEM, we have, in the limiting case, an inverse-square static interaction and a dynamic interaction that propagates at the same speed of $c$. The inverse-square interaction leads to Gauss's law and the differential counterpart (divergence) in Maxwell's equations (or in the GEM equations). Okay, just for comparison, the static EM and gravitational inverse-square laws are: $$ F_\mathrm{e} = \frac{1}{4 \pi \epsilon_0} \frac{q_1 q_2}{r^2} $$ and $$ F_\mathrm{g} = -G \frac{m_1 m_2}{r^2} $$ In both cases, a positive force $F_\mathrm{e}$ or $F_\mathrm{g}$ is repulsive. This is why the minus sign needs to be attached to $G$ in the static gravitation law. 
Then Maxwell's equations for EM are: $$\begin{align} \nabla \cdot \mathbf{E} &= \frac {1}{\epsilon_0} \rho \\ \nabla \cdot \mathbf{B} &= 0 \\ \nabla \times \mathbf{E} &= -\frac{1}{c}\frac{\partial \mathbf{B}} {\partial t} \\ \nabla \times \mathbf{B} &= \frac{1}{c} \left( \frac{1}{\epsilon_0}\rho\mathbf{v}_\rho + \frac{\partial \mathbf{E}} {\partial t} \right) \end{align}$$ and the GEM counterparts: $$\begin{align} \nabla \cdot \mathbf{E}_\mathrm{g} &= -4\pi G \ \rho \\ \nabla \cdot \mathbf{B}_\mathrm{g} &= 0 \\ \nabla \times \mathbf{E}_\mathrm{g} &= -\frac{1}{c}\frac{\partial \mathbf{B}_\mathrm{g}} {\partial t} \\ \nabla \times \mathbf{B}_\mathrm{g} &= \frac{1}{c} \left( -4\pi G \ \rho\mathbf{v}_\rho + \frac{\partial \mathbf{E}_\mathrm{g}} {\partial t} \right) \end{align}$$ In the EM case, I have eliminated $\mu_0$ and expressed it in terms of $\epsilon_0$ and $c$. In both cases, I have expressed current density $\mathbf{J}$ as charge density (or mass density) times the velocity of the differential charge or mass. And, in both cases, I expressed in terms of Lorentz-Heaviside units which makes the $\mathbf{B}$ field have the same dimensions (and units) as the $\mathbf{E}$ field. This is consistent with most of the papers dealing with GEM. Both the inverse-square and EM/GEM expressions are totally consistent with replacing charge with mass, charge density with mass density, and $\frac{1}{4 \pi \epsilon_0}$ with $-G$. Both EM/GEM sets of equations degenerate to the inverse-square laws and to an interaction that propagates at a speed of $c$. So far, this agrees with the expressions for EM or GEM in the Wikipedia articles on either. 
The difference comes with the Lorentz force equations acting on a small test charge $q$ or small test mass $m$ (moving at a velocity independent of the charge current density or mass current density above): For EM it's: $$ \mathbf{F}_\mathrm{e} = q\mathbf{E} + \frac{q}{c}\mathbf{v}_q \times \mathbf{B} $$ For GEM, in the Wikipedia article it's: $$ \mathbf{F}_\mathrm{g} = m\mathbf{E}_\mathrm{g} + \frac{m}{c}\mathbf{v}_m \times (\color{red}{4}\mathbf{B}_\mathrm{g}) $$ The first term (right of the "$=$" sign) is the electrostatic or static gravitational force and the latter term is the electromagnetic or gravitomagnetic force. Now, for the GEM Lorentz force, where does that factor of $\color{red}{4}$ come from? And there are other papers that show the GEM equations as above, but have a factor of $2$ instead: $$ \mathbf{F}_\mathrm{g} = m\mathbf{E}_\mathrm{g} + \frac{m}{c}\mathbf{v}_m \times (2\mathbf{B}_\mathrm{g}) $$ or no fudge factor in the Lorentz force, but a $\tfrac12\mathbf{B}_\mathrm{g}$ in the GEM, which is equivalent. I don't know why either the $4$ or the $2$ would come into this, but I would like to know who is correct; the $4\mathbf{B}_\mathrm{g}$ advocates or the $2\mathbf{B}_\mathrm{g}$ advocates? Answer: TL;DR: The factor of $\color{red}{4}$ in the Lorentz force comes morally speaking from trying to mimic a spin-2 field as a spin-1 field. There is no unique/canonical/"correct" normalization convention: It is still possible to normalize/scale the fields $\phi$, ${\bf A}$, ${\bf E}$ & ${\bf B}$, as we please, but that only moves the factor of $\color{red}{4}$ around: It doesn't disappear everywhere! In detail: Consider the linearized EFE$^1$ $$ G^{\mu\nu}~=~-\frac{1}{2}\Box \bar{h}^{\mu\nu}~=~\kappa T^{\mu\nu},\qquad \kappa~\equiv~\frac{8\pi G}{c^4}, \tag{1}$$ in the Lorenz gauge $$ \partial_{\mu} \bar{h}^{\mu\nu} ~=~0. 
\tag{2}$$ Here $$\begin{align} g_{\mu\nu}~=~&\eta_{\mu\nu}+h_{\mu\nu}, \cr \bar{h}_{\mu\nu}~:=&~h_{\mu\nu}-\frac{1}{2}\eta_{\mu\nu}h \qquad\Leftrightarrow\qquad h_{\mu\nu}~:=~\bar{h}_{\mu\nu}-\frac{1}{2}\eta_{\mu\nu}\bar{h}. \end{align}\tag{3}$$ The matter is assumed to be dust: $$ T^{\mu 0}~=~cj^{\mu}, \qquad j^{\mu}~=~\begin{bmatrix} c\rho \cr {\bf J} \end{bmatrix}, \qquad T^{ij}~=~{\cal O}(c^0). \tag{4}$$ In our convention, the GEM ansatz reads $$\begin{align} A^{\mu}~=~&\begin{bmatrix} \phi/c \cr {\bf A} \end{bmatrix}, \qquad\bar{h}^{ij}~=~{\cal O}(c^{-4}),\cr -\frac{1}{4}\bar{h}^{\mu\nu} ~=~&\begin{bmatrix} \phi/c^2 & {\bf A}^T /c\cr {\bf A}/c & {\cal O}(c^{-4})\end{bmatrix}_{4\times 4}\cr ~\Updownarrow~& \cr -h^{\mu\nu} ~=~&\begin{bmatrix} 2\phi/c^2 & 4{\bf A}^T/c \cr 4{\bf A}/c & (2\phi/c^2){\bf 1}_{3\times 3}\end{bmatrix}_{4\times 4} \cr ~\Updownarrow~& \cr g_{\mu\nu} ~=~&\begin{bmatrix} -1-2\phi/c^2 & 4{\bf A}^T/c \cr 4{\bf A}/c & (1-2\phi/c^2){\bf 1}_{3\times 3}\end{bmatrix}_{4\times 4}. \end{align}\tag{5}$$ The gravitational Lorenz gauge (2) corresponds to the Lorenz gauge condition $$ c^{-2}\partial_t\phi + \nabla\cdot {\bf A}~\equiv~ \partial_{\mu}A^{\mu}~=~0 \tag{6}$$ and the "electrostatic limit" $$ \partial_t {\bf A}~=~{\cal O}(c^{-2}).\tag{7}$$ Next define the field strength $$\begin{align} F_{\mu\nu}~:=~&\partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}, \cr -{\bf E}~:=~&{\bf \nabla} \phi+\partial_t{\bf A}, \cr {\bf B}~:=~&{\bf \nabla}\times {\bf A}.\end{align} \tag{8} $$ Then the tempotemporal & the spatiotemporal sectors of the linearized EFE (1) become the gravitational Maxwell equations with sources $$ \partial_{\mu} F^{\mu\nu}~=~\frac{4\pi G}{c}j^{\mu}. \tag{9} $$ Note that the gravitational (electric) field ${\bf E}$ should be inwards (outwards) for a positive mass (charge), respectively. For this reason, in this answer/OP/Wikipedia, the GEM equations (9) and the Maxwell equations have opposite$^2$ signs. See also this related Phys.SE post. 
The Lagrangian for a massive point particle in curved space in the static gauge $x^0=ct$ is $$ \begin{align}L~=~&-m_0c\sqrt{ -g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}}\cr ~\stackrel{(5)}{=}~&-m_0c\sqrt{ c^2+2\phi -8{\bf A}\cdot{\bf v}-(1-2\phi/c^2){\bf v}^2 }\cr ~=~&-\frac{m_0c^2}{\gamma}\sqrt{ 1+\frac{2\gamma^2}{c^2}\left( (1+{\bf v}^2/c^2)\phi -\color{red}{4} {\bf v}\cdot{\bf A}\right)}\cr ~\stackrel{(12)}{=}~& -\frac{m_0c^2}{\gamma}~-~U ~+~{\cal O}(A^2), \end{align}\tag{10} $$ $$\gamma~:=~\frac{1}{\sqrt{ 1-{\bf v}^2/c^2}} .\tag{11} $$ Here the velocity-dependent potential for the gravitational Lorentz force is $$\begin{align} U~=~&m_0\gamma\left((1+{\bf v}^2/c^2)\phi -\color{red}{4} {\bf v}\cdot{\bf A}\right)\cr ~\stackrel{NR}{=}~& m_0\left(\phi -\color{red}{4} {\bf v}\cdot{\bf A}\right)~+~{\cal O}({\bf v}^2).\end{align} \tag{12}$$ The gravitational Lorentz force becomes in the non-relativistic (NR) limit $$\begin{align} {\bf F}~=~&\frac{d}{dt}\frac{\partial U}{\partial {\bf v}} - \frac{\partial U}{\partial {\bf r}}\cr ~\stackrel{NR}{\approx}~& m_0\left(-{\bf \nabla} \phi + \color{red}{4}({\bf v}\times {\bf B}-\partial_t{\bf A})\right) \cr ~\stackrel{(7)}{=}~& m_0({\bf E}+ \color{red}{4}{\bf v}\times {\bf B}) .\end{align}\tag{13}$$ References: B. Mashhoon, Gravitoelectromagnetism: A Brief Review, arXiv:gr-qc/0311030. -- $^1$ In this answer we use Minkowski sign convention $(-,+,+,+)$ and work in the SI-system. Space-indices $i,j,\ldots \in\{1,2,3\}$ are Roman letters, while spacetime indices $\mu,\nu,\ldots \in\{0,1,2,3\}$ are Greek letters. $^2$ Warning: In Mashhoon (Ref. 1) the GEM equations (9) and the Maxwell equations have the same sign. For comparison, in this Phys.SE answer $$\phi~=~-\phi^{\text{Mashhoon}}, \qquad {\bf E}~=~-{\bf E}^{\text{Mashhoon}}, $$ $${\bf A}~=~-\frac{1}{2c}{\bf A}^{\text{Mashhoon}}, \qquad {\bf B}~=~-\frac{1}{2c}{\bf B}^{\text{Mashhoon}}.\tag{14}$$
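As a quick numerical cross-check of the expansion in eqs. (10)–(12) — purely illustrative, with $c = m_0 = 1$ and a single spatial component — one can scale the potentials by a small bookkeeping parameter and differentiate the exact Lagrangian with respect to it. The first-order coefficient reproduces $-m_0\gamma\left((1+{\bf v}^2/c^2)\phi - 4\,{\bf v}\cdot{\bf A}\right)$, i.e. the factor of 4 that ends up in the force law:

```python
import math

def lagrangian(eps, phi, A, v, c=1.0, m0=1.0):
    """Exact point-particle Lagrangian of eq. (10), with the GEM potentials
    scaled by a small bookkeeping parameter eps."""
    gamma = 1.0 / math.sqrt(1.0 - v * v / (c * c))
    S = (1.0 + v * v / (c * c)) * phi - 4.0 * v * A
    return -m0 * c * c / gamma * math.sqrt(1.0 + 2.0 * gamma**2 / (c * c) * eps * S)

# arbitrary test values, well inside the weak-field regime
phi, A, v = 0.3, 0.2, 0.5
gamma = 1.0 / math.sqrt(1.0 - v * v)
S = (1.0 + v * v) * phi - 4.0 * v * A

# central finite difference for dL/d(eps) at eps = 0
h = 1e-6
first_order = (lagrangian(h, phi, A, v) - lagrangian(-h, phi, A, v)) / (2.0 * h)
print(first_order, -gamma * S)  # these agree: the O(eps) term is -m0*gamma*S
```

This is only a consistency check of the algebra, not a derivation; the physics content is in eqs. (5) and (10) above.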
{ "domain": "physics.stackexchange", "id": 50004, "tags": "general-relativity, gauss-law, gravitational-waves, maxwell-equations, linearized-theory" }
How exactly do you calculate the hidden layer gradients in the backpropagation algorithm?
Question: I have been going through the description of the backpropagation algorithm found here, and I am having a bit of trouble getting my head around some of the linear algebra. Say I have a final output layer $L$ consisting of two visible units, and layer $L-1$ consists of four hidden units. (This is just an example to illustrate my problem.) My understanding is that the weight matrix for this final layer ($w^L$) should be a 4 x 2 matrix. The reference says to calculate the output error $\delta^{x,L}$ given by: $\delta^{x,L} = \nabla_a C_x \odot \sigma'(z^{x,L})$ where: $z^{x,L} = w^La^{x,L-1} + b^L$, $a^{x,L} = \sigma(z^{x,L})$, and $\odot$ is the Hadamard product. Evaluating $\delta^{x,L}$ gives a 1 x 2 vector, as it should given there are two output units. My problem is when calculating the hidden layer gradients (eg. $L-1$) given by: $\delta^{x,L-1} = ((w^L)^T\delta^{x,L})\odot\sigma'(z^{x,L-1})$ Now if $w^L$ is a 4x2 matrix and $\delta^{x,L}$ is a 1x2 vector then wouldn't $(w^L)^T\delta^{x,L}$ be a multiplication of a 2x4 matrix and a 1x2 matrix, which is impossible? I feel like I have missed something vital in my understanding, but I can't work out what it is. Is it just as simple as making it $\delta^{x,L}(w^L)^T$? This would be a 1x2 matrix multiplied by a 2x4 matrix, which is perfectly legal. But the formula has it around the other way. Can anyone see where my understanding is flawed? Any help would be greatly appreciated. Answer: You've transposed the sizes of both $w^L$ and $\delta^{x,L}$. $w^L$ should be 2x4 and $\delta^{x,L}$ should be 2x1. $(w^L)^T$ is a 4x2 matrix that will be multiplied by a 2x1 matrix yielding a 4x1 matrix suitable for the next step of backpropagation. In general for neural nets the activation units are represented as column vectors and the weights are matrices of dimension |L+1| x |L|, where L is the current layer and L+1 is the next layer (in the forward direction).
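To make the shapes concrete, here is a small NumPy sketch of the two-output/four-hidden example (random values and a quadratic cost are assumed purely for illustration). With the conventional shapes — $w^L$ as a 2×4 matrix and $\delta^{x,L}$ as a 2×1 column vector — the product $(w^L)^T\delta^{x,L}$ comes out as 4×1, exactly the right shape for the Hadamard product with $\sigma'(z^{x,L-1})$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

a_hidden = rng.normal(size=(4, 1))   # activations of the 4 hidden units (column vector)
z_hidden = rng.normal(size=(4, 1))   # their weighted inputs z^{x,L-1}
W = rng.normal(size=(2, 4))          # w^L has shape |L| x |L-1| = 2 x 4
b = rng.normal(size=(2, 1))

z_out = W @ a_hidden + b             # shape (2, 1)
a_out = sigmoid(z_out)
y = np.array([[0.0], [1.0]])         # target for a quadratic cost

# output error: nabla_a C ⊙ sigma'(z), both (2, 1)
delta_out = (a_out - y) * sigmoid(z_out) * (1 - sigmoid(z_out))

# hidden error: (W^T delta) ⊙ sigma'(z_hidden) -> (4, 2) @ (2, 1) = (4, 1)
delta_hidden = (W.T @ delta_out) * sigmoid(z_hidden) * (1 - sigmoid(z_hidden))

print(delta_out.shape, delta_hidden.shape)  # (2, 1) (4, 1)
```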
{ "domain": "cs.stackexchange", "id": 3446, "tags": "machine-learning, neural-networks" }
Comparing the basicity of heterocyclic amines
Question: Compare the relative basicity of the following amines. I know that the higher the electron density on the nitrogen, the higher will be its basicity. So here's my take on each option. (a) The nitrogen lone pair undergoes resonance, which is not good for basic strength. (b) A doubly bonded nitrogen lone pair does not undergo resonance, so the nitrogen lone pair is localised. (c) Both nitrogen atoms are doubly bonded, so their lone pairs won't be delocalised and the electron density is localised here as well. (d) There's no question of resonance here, so the lone pairs are localised anyway. From the above points my answer is (c)>(b)>(d)>(a), but the answer given is (d)>(b)>(c)>(a). Is there any mistake in my reasoning? Answer: The correct order is in fact $(\bf{d}) \gt (\bf{b}) \gt (\bf{c}) \gt (\bf{a})$; the reason for this is as follows. $(\bf{a})$ is definitely last in this order of basicity since its lone pair is delocalised by the phenyl ring. Now we need to compare the other three. We can split this into two parts: a comparison between $(\bf{d})$ and $(\bf{b})$, and a comparison between $(\bf{b})$ and $(\bf{c})$. The basicity of $(\bf{d})$ would be greater than that of $(\bf{b})$ because in $(\bf{d})$ the nitrogen with the localised lone pair is $\mathrm{sp}^3$, whereas in case of $(\bf{b})$ the lone pair is seen on an $\mathrm{sp^2}$ nitrogen. Since an $\mathrm{sp^2}$ nitrogen is more electronegative than an $\mathrm{sp^3}$ nitrogen, it holds the electrons closer, thereby leading to less accessible electron density. Therefore $(\bf{d}) \gt (\bf{b})$. Now, moving on to $(\bf{b})$ and $(\bf{c})$: in case of $(\bf{c})$, one of the nitrogen atoms has a -I effect on the other nitrogen atom, thereby leading to lower electron density and hence lower basicity. Comparing the nitrogen atoms individually, the basicity of each one in $(\bf{c})$ is less than that seen in $(\bf{b})$. 
Therefore, $(\bf{b}) \gt (\bf{c})$. Hence, the order of decreasing basicity will be $(\bf{d}) \gt (\bf{b}) \gt (\bf{c}) \gt (\bf{a})$.
{ "domain": "chemistry.stackexchange", "id": 15636, "tags": "organic-chemistry, acid-base, aromatic-compounds, amines" }
Where to get rate of change for calculating ephemeris from JPL Horizons
Question: I have a working implementation of Schlyter's method for calculating the Sun's position. I would like to know how Schlyter got the 3.24587E-5_deg in the following calculation for Mercury. N = 48.3313_deg + 3.24587E-5_deg * d (Long of asc. node) I could not find this number in JPL Horizons. I'm guessing it's the difference between the N at two epochs divided by number of days between the two epochs, but I'm not sure. Answer: My email to Paul Schlyter and his reply, posted with permission: On http://www.stjarnhimlen.se/comp/tutorial.html you note that Mercury's ascending node is given as: N = 48.3313_deg + 3.24587E-5_deg * d According to https://ssd.jpl.nasa.gov/txt/p_elem_t1.txt Mercury's ascending node precesses in the negative direction. As Where to get rate of change for calculating ephemeris from JPL Horizons notes, NASA's value matches the negative of your value almost exactly. Are you using a different sign convention here, or is this an error? Reply: It's neither - instead I use a somewhat different reference system. I use the "epoch of the day" instead of the fixed epoch of, say, J2000.0. So add the rate of precession to the negative rate of change you found at NASA's site, and you should get my rate of change. Since I use the "epoch of the day", the positions I get are suitable to compute e.g. the rise, transit, and set times for the planets. Of course they are also good for computing the positions of several planets relative to one another. But if you need the positions to plot on a star map drawn for some fixed epoch, you need to apply a correction for precession to these positions. Yes, it can be confusing to distinguish those different astronomical coordinate systems from one another. But that's a consequence of living on the planet Earth which orbits the Sun, rotates, and wobbles.
{ "domain": "astronomy.stackexchange", "id": 2799, "tags": "ephemeris" }
Capacitor - would there be current?
Question: In an ideal charged capacitor (with infinitely large parallel plates), the electric field outside the area between the plates is zero. Will there be any current flowing through the red wire from plate 1 to plate 2 if I attached it just like on the image below? I think the answer is no, because the electric field at the points where I attached the wire to the plates is zero, just like it is all along the wire. What I'm a bit confused about is the fact that you can actually say that there IS a path connecting those two points (marked with red dots, connected by the wire) with non-zero voltage. Just look at the definition of voltage: as long as the integral over the path we take (where $E$ is the electric field) from point 1 to 2 isn't zero, the voltage won't be zero = there will be current. Like in this case (forget about the capacitor plates here, just assume the electric field exists in the area between the virtual plates as in this picture): the integral over the red path won't be zero, so the current should be able to flow between those two points once they are connected with a wire, right? But there's nothing to push the charge from the first to the second point, because the electric field is zero at those points. Could anyone explain that to me? Answer: Your question demonstrates the difference between electrostatics in "normal" Euclidean space and in periodic space. In all cases, however, a current will flow through the wire. In your problem you sketch how the wire goes from one side of the infinitely large capacitor to the other without passing through the field inside the capacitor. However, in our everyday Euclidean space this is not possible. This leaves three options: Option 1: The wire goes through a hole somewhere in the capacitor. In this case the wire goes through the electric field, which will make a current flow. Option 2 (inspired by John's answer): The capacitor is not truly infinitely large. 
In this case the wire can go around the capacitor. However, due to edge effects caused by the finite size, the field at the sides of the capacitor won't be zero. Therefore the wire will pass through an electric field and a current will flow. Option 3: The capacitor does not live in Euclidean space, but in a space with periodic boundary conditions. This means that when you move out of the image on the left side, you will enter the image again on the right side. It is similar to a rolled-up piece of paper, where you can move in circles around the cylinder. In such a space it is possible to connect the two plates without crossing the capacitor, and your problem seems to persist. However, in such a periodic space your assumption that there is no field outside of the capacitor fails. In fact, it is no longer trivial what to consider as the inside and outside of the capacitor. You can draw Gaussian surfaces to see that there will be an electric field everywhere, except for inside the conductors. The field will point from the positively charged plate to the negatively charged plate along two different routes. Any wire connecting the two plates will thus experience an electric field and a current will flow.
{ "domain": "physics.stackexchange", "id": 48859, "tags": "electrostatics, capacitance" }
How are hillsides farmed?
Question: I would assume hilly land is more labor-intensive to farm than flat land, and more exhausting to humans and animals if traditional farming techniques are used. How do farmers plant and plow on uneven terrain? And do the furrows on a hill run uphill-downhill, or “horizontally stacked” up the hill’s incline? Edit: I see why sloped land is more conducive to other kinds of agriculture. Any idea how steep a grade has to be before farming grain on it is more trouble than it’s worth? Answer: Terrace farming is widespread in the Orient but it is backbreakingly labour intensive. In UK we don't do it, partly because we don't grow our own rice. Depending on how steep the hill is, we either plough it or use it for grazing. A shallow incline can be ploughed, but you have to be very careful how you do it. Many accidents are caused each year by tractors overturning. This is most likely when you are contouring, i.e sideways to the slope. In a few areas, mainly in Scotland, hills where there is plenty of heather are sometimes used for game preservation and grouse shooting. Another use for hills is forestry, but there is not a lot of forestry in UK. If there are extensive steep hills on your land, the usual solution is to use them for grazing, which is why there are so many sheep farmers in Wales and other hilly areas. If it's just one steep hill on land which is otherwise fairly level, some farmers leave it to nature rather than risk a nasty accident. So the answer to your question is that hilly land is more labour intensive in the Orient and some similar areas of the world, but not in most of Europe. Furrows generally run up and down hill unless the farmer is confident he can plough transversely without risk of overturning.
{ "domain": "earthscience.stackexchange", "id": 1870, "tags": "agriculture" }
Alien Numbers - how Scala-ish is my solution?
Question: I'm trying to solve an old GCJ. It's a very simple puzzle, but I'm trying to sharpen my Scala-fu. Basically, you're getting a list of triple number srcLanguage dstLanguage, where number is an integer given in the numeral system of srcLanguage. You should translate it to the numeral system of dstLanguage. A numeral system is simply a string of all possible digits, in ascending order. The decimal numeral system is represented by 0123456789, the binary numeral system is 01, and the hexadecimal one 0123456789ABCDEF. For example: 3 0123456789 01 -> 11 3 0123 AB -> BB Here's how I implemented it in Scala: case class Langs(num:String,srcLang:String,dstLang:String) object Langs {def fromLine(line:String):Langs = { val ar = line.split(" ");return Langs(ar(0),ar(1),ar(2))}} object Translate { def lang2int(lang:String,num:String):Long = { var b = BigDecimal(0) val dmap = (lang.toList.zipWithIndex).toMap val digitsList = num map dmap val valueList = digitsList.reverse.zipWithIndex map ( x => x._1 -> math.pow(dmap.size,x._2)) return valueList.map(x=>x._1*x._2).sum.toLong } def int2lang(lang:String,_num:Long):String = { var num = _num val dmap = (lang zip (0.toLong to lang.size)).map(_.swap).toMap val sb = StringBuilder.newBuilder while (num > 0) { sb.append(dmap(num % dmap.size)) num = num/dmap.size } sb.reverse.toString } def lang2lang(l:Langs):String = int2lang(l.dstLang,lang2int(l.srcLang,l.num)) } object mymain { def main(args : Array[String]) : Unit = { val s = "A-large-practice" val basef = new java.io.FileInputStream("~/Downloads/"+s+".in") val f = new java.util.Scanner(basef) val out = new java.io.FileWriter(s+".out") val n = f.nextInt f.nextLine for (i <- 1 to n) { val nl = f.nextLine val l = Langs.fromLine(nl) out.write("Case #"+i+": "+Translate.lang2lang(l)+"\n") } out.close } } Answer: You should definitely use scala.io.Source for File-IO I wouldn't consider String splitting a responsibility of a general-purpose class. 
This should be done in the main loop. For tuples you can write map{ case (one,two) => ... }, which is often clearer than using x._1 and x._2. You don't need to write return if it's the last statement of the block. You can use pattern matching when defining vals: val Array(x, y, z) = line.split(" "). Here is my attempt:

case class Langs(num:String, srcLang:String, dstLang:String)

object Langs {
  def fromLine(line:String):Langs = {
    val Array(num, srcLang, dstLang) = line.split(" ")
    Langs(num, srcLang, dstLang)
  }
}

object Translate {
  def lang2int(lang:String, num:String):Long = {
    val dmap = lang.toList.zipWithIndex.toMap
    val digitsList = num map dmap
    val valueList = digitsList.reverse.zipWithIndex map { case (one, two) => one -> math.pow(dmap.size, two) }
    valueList.map{ case (one, two) => one*two }.sum.toLong
  }

  def int2lang(lang:String, num:Long):String = {
    val dmap = (0.toLong to lang.size zip lang).toMap
    Iterator.iterate(num)(_/dmap.size).takeWhile(_ > 0).map(n => dmap(n % dmap.size)).mkString.reverse
  }

  def lang2lang(l:Langs):String = int2lang(l.dstLang, lang2int(l.srcLang, l.num))
}

Eliminating the while loop isn't that straight-forward, maybe someone else has an idea how to avoid that Iterator train-wreck. [Edit] I asked in another forum for a better solution for int2lang, and got this answer:

def int2lang(lang: String, num: Long): String = {
  val dmap = (0L to lang.size) zip lang toMap
  val size = dmap.size
  def loop(num: Long, l: List[Char]): List[Char] =
    if (num == 0) l else loop(num/size, dmap(num%size) :: l)
  loop(num, Nil).mkString
}

The nice thing about this is that the reverse is gone.
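Independently of the Scala style questions, the underlying algorithm is just a positional-base read with one digit string followed by a positional-base write with another. For comparison only, a minimal language-neutral sketch in Python (function names are mine, not part of the reviewed code):

```python
def alien_to_int(num: str, lang: str) -> int:
    """Read `num` as a base-len(lang) integer, with digit order given by `lang`."""
    digits = {d: i for i, d in enumerate(lang)}
    value = 0
    for ch in num:
        value = value * len(lang) + digits[ch]
    return value

def int_to_alien(value: int, lang: str) -> str:
    """Write `value` using the digit string `lang` (lang[0] is the zero digit)."""
    if value == 0:
        return lang[0]
    base = len(lang)
    out = []
    while value > 0:
        out.append(lang[value % base])
        value //= base
    return ''.join(reversed(out))

print(int_to_alien(alien_to_int("3", "0123456789"), "01"))  # 11
print(int_to_alien(alien_to_int("3", "0123"), "AB"))        # BB
```

The accumulate-then-emit structure is the same as lang2int followed by int2lang; the Horner-style accumulation in alien_to_int avoids the explicit math.pow step.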
{ "domain": "codereview.stackexchange", "id": 2555, "tags": "scala, programming-challenge" }
calling ros::spin or ros::spinOnce from other function in QT
Question: Hello, I am very new to ROS and I have just managed to integrate the ROS libraries into my QT IDE. I am building a fairly complex application, however the ROS subscriber example I am attempting to implement is fairly simple. If I implement my code this way (per the examples), I get subscriber callbacks as expected: void MainWindow::rtsp1Callback(const sensor_msgs::Image::ConstPtr& msg) { qDebug() << "rtsp1 Image has data"; } With this initializer: void MainWindow::on_ROSSubscribeButton_clicked() { ros::init(m_argc, m_argv, "camera_check"); ros::NodeHandle n1; ros::Subscriber sub1 = n1.subscribe("/rtsp_1/image_raw", 100, &MainWindow::rtsp1Callback, this); ros::spin(); } But of course, this is an undesirable design, as ros::spin(); then blocks the rest of my program. So I execute a timer to call ros::spinOnce(); on demand: Callback unchanged: void MainWindow::rtsp1Callback(const sensor_msgs::Image::ConstPtr& msg) { qDebug() << "rtsp1 Image has data"; } Initializer with timer: void MainWindow::on_ROSSubscribeButton_clicked() { ros::init(m_argc, m_argv, "camera_check"); ros::NodeHandle n1; ros::Subscriber sub1 = n1.subscribe("/rtsp_1/image_raw", 100, &MainWindow::rtsp1Callback, this); CameraCheckTimer.start(100); //calls ros::spinOnce() every 1000ms; } And this spin call (on timer execution): void MainWindow::checkCameras() { ros::spinOnce(); qDebug() << "Spin!"; } However I do not get any callbacks. I also tested calling ros::spin() - which was working in my initial example, by calling a singleShot timer. When I do this, my program blocks as expected, however I do not get any callbacks. This makes me suspect that ros::spin() and ros::spinOnce() are somehow getting an implicit pointer to the GlobalCallbackQueue from the ros::init() function. Is this the case? If so, what is the data type of ros::GlobalCallbackQueue (so that I can pass a global pointer to it into my timer loop)? 
Or, if you have an example of how to simply fix my problem, I will accept that as well! Originally posted by riot on ROS Answers with karma: 28 on 2019-07-12 Post score: 0 Answer: Initializer with timer: void MainWindow::on_ROSSubscribeButton_clicked() { ros::init(m_argc, m_argv, "camera_check"); ros::NodeHandle n1; ros::Subscriber sub1 = n1.subscribe("/rtsp_1/image_raw", 100, &MainWindow::rtsp1Callback, this); CameraCheckTimer.start(100); // calls ros::spinOnce() every 100 ms } However I do not get any callbacks. The problem here is scope: sub1 goes out of scope as soon as on_ROSSubscribeButton_clicked() completes. So the Subscriber gets destroyed, cancelling the subscription. Additionally, the NodeHandle also vanishes. Calling ros::spinOnce() at this point doesn't really do much any more, as there are no subscriptions. Make sure to keep the ros::Subscriber around, and probably also the ros::NodeHandle (as a member variable, for instance). Things should start working. This makes me suspect that ros::spin() and ros::spinOnce() are somehow getting an implicit pointer to the GlobalCallbackQueue from the ros::init() function. Is this the case? If so, what is the data type of ros::GlobalCallbackQueue (so that I can pass a global pointer to it into my timer loop)? I don't believe this has anything to do with your problem. Originally posted by gvdhoorn with karma: 86574 on 2019-07-12 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by riot on 2019-07-12: Thank you for your very concise answer! I simply declared the handles as 'static' and they now persist outside of the creation function: void MainWindow::on_ROSSubscribeButton_clicked() { ros::init(m_argc, m_argv, "camera_check"); static ros::NodeHandle n1; static ros::Subscriber sub1 = n1.subscribe("/rtsp_1/image_raw", 100, &MainWindow::rtsp1Callback, this); CameraCheckTimer.start(100); // calls ros::spinOnce() every 100 ms } I get the callbacks as expected. 
Comment by gvdhoorn on 2019-07-12: I wouldn't declare them static, but if you are ok with doing it that way, that would work.
{ "domain": "robotics.stackexchange", "id": 33410, "tags": "ros-melodic" }
Filtering a data structure using regex predicates
Question: Assume we have a data structure like this: (def data (atom [{:id 1 :first-name "John1" :last-name "Dow1" :age 14} {:id 2 :first-name "John2" :last-name "Dow2" :age 54} {:id 3 :first-name "John3" :last-name "Dow3" :age 34} {:id 4 :first-name "John4" :last-name "Dow4" :age 12} {:id 5 :first-name "John5" :last-name "Dow5" :age 24}])) I want to filter it by specific keys using regex in predicates, so I ended up with this function, which works fine, but I think the code is ugly and duplicated. How can I get rid of the code duplication? (defn my-filter [str-input] (let [firstname (filter #(re-find (->> (str str-input) (upper-case) (re-pattern)) (upper-case (:first-name %))) @data) lastname (filter #(re-find (->> (str str-input) (upper-case) (re-pattern)) (upper-case (:last-name %))) @data) age (filter #(re-find (->> (str str-input) (upper-case) (re-pattern)) (upper-case (:age %))) @data)] (if-not (empty? firstname) firstname (if-not (empty? lastname) lastname (if-not (empty? age) age))))) Answer: First up, notice that you're creating three let-bound names using exactly the same expression, with only the key-lookup function changing. This is a prime candidate for deduplication by making a local function. (defn my-filter [str-input] (let [seek (fn [k](filter #(re-find (->> (str str-input) (upper-case) (re-pattern)) (upper-case (k %))) @data)) firstname (seek :first-name) lastname (seek :last-name) age (seek :age)] (if-not (empty? firstname) firstname (if-not (empty? lastname) lastname (if-not (empty? age) age))))) Next, we can notice that the remainder is just doing the same "return this if it's not empty" logic with each of the three keys in sequence. some returns the first non-falsy result of a function on a sequence, so by adding a seq to the seek function we can turn empty filter sequences into nils; combining the two gives the logic we want. I used letfn this time because we're only let-binding fns, might as well. 
(defn my-filter [str-input] (letfn[(seek [k] (seq (filter #(re-find (->> (str str-input) (upper-case) (re-pattern)) (upper-case (k %))) @data)))] (some seek [:first-name :last-name :age])))
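For comparison, the same some-over-seek shape can be sketched in Python (an illustration only; the field names and helper names are adapted, not part of the original code). Empty match lists become None, playing the role of seq turning empty sequences into nil, and next with a default plays the role of some:

```python
import re

# Sample data mirroring the atom's contents from the question.
data = [
    {"id": 1, "first_name": "John1", "last_name": "Dow1", "age": 14},
    {"id": 2, "first_name": "John2", "last_name": "Dow2", "age": 54},
    {"id": 3, "first_name": "John3", "last_name": "Dow3", "age": 34},
    {"id": 4, "first_name": "John4", "last_name": "Dow4", "age": 12},
    {"id": 5, "first_name": "John5", "last_name": "Dow5", "age": 24},
]

def my_filter(rows, s):
    pat = re.compile(s, re.IGNORECASE)  # plays the role of (re-pattern (upper-case ...))
    def seek(key):                      # the local fn from the refactor
        hits = [row for row in rows if pat.search(str(row[key]))]
        return hits or None             # like (seq (filter ...)): empty becomes None/nil
    # like (some seek [:first-name :last-name :age])
    return next(filter(None, map(seek, ("first_name", "last_name", "age"))), None)

print(my_filter(data, "john2"))  # matches on first_name
print(my_filter(data, "54"))     # falls through to the age key
```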
{ "domain": "codereview.stackexchange", "id": 24790, "tags": "regex, clojure, clojurescript" }
At what temporal scales do forest structures change?
Question: At which time scales do forest stand characteristics (i.e. stand density, species composition, & diameter distributions) change in a temperate European forest which is not managed? Ideally, I'd like to conclude if a certain quantitative description (based on the variables I've mentioned above) of a forest stand is valid for e.g. 10 years or even 20 years. A peer-reviewed reference would be ideal. Answer: This question is too broad. What type of temperate forest? The European Environment Agency recognized dozens of temperate forest types: Acidophilous oak and oak‑birch forest types Mesophytic deciduous forest types Beech forest types Mountainous beech forest types Broadleaved evergreen forest types Floodplain forest types Non‑riverine alder, birch or aspen forest types Community types will experience different patterns and rates of change based on numerous properties including climate, edaphic conditions, stand density, species pools, specific species characteristics, land use history, etc. See Wright & Fridley (2010) -Rate at which woody species colonize and dominate old fields decreases significantly with latitude. Rates of woody succession were highly correlated with both annual temperature...and measures of soil fertility See Fridley & Wright (2012) suggest that climate plays a relatively minor role in community dynamics at the onset of secondary succession, and that site edaphic conditions are a stronger determinant of the rate at which ecosystems develop to a woody-dominated state. See Peet & Christensen (1980) Examined thinning rates based on initial stand densities -- found rates to be higher in denser stands and later densities became similar regardless of initial density (after 40 years). Further compounding all of this are issues such as invasive species, shifts in herbivore pressure, plant pathogens/pests, etc. Also, it's hard to find European forests w/out some type of historical major human influence. 
From Parviainen 2005: Latest estimates show that there are about 0.3 million ha of virgin forest (0.4 % of the total forest area) left in strict forest reserves and other protection areas in the temperate zone of Europe. So given all of that, it's hard to definitively nail down specifics to answer your question. Each of these above characteristics/variables/issues will influence both individual growth rates (i.e., tree diameter and height) as well as population and community dynamics. To answer your question a little more directly: As shown in the paper linked above (Peet & Christensen 1980), stand density rates vary dramatically based on initial stand densities and successional stage; high-density stands can see HUGE loss of stems/ha in < 40 years, while low-density sites might see very little change. Diameter distributions can change drastically, but will likely take multiple decades to shift to the "late-successional reverse J". However, this is strongly dependent on frequency and magnitude of disturbances -- large canopy-clearing disturbances such as major wind events will accelerate this shift in distribution. Species composition can certainly shift fairly quickly (1 or 2 decades), but typically only in transitional stages of succession or due to major disturbance. Late-stage successional forests (i.e., "mature forests") will typically see very little change in species composition in this time frame, but again so many factors will influence that. Because of changing climates, mass introductions of invasive species (both competitors and pathogens), etc., forests appear to be changing more rapidly. [See Israel (2011)]. Overall, forests change continuously, with most processes occurring on the time-scale of decades. However, this depends on scope b/c understory plants can change from year to year, while canopy trees can live for centuries. 
Noticeable shifts in community composition and biomass are visible on decadal timescales, but only long-term evaluation will demonstrate the effects of past disturbances. The takeaway, then, is that forests change on multiple time scales and as a result of many variables; as a result, one can never consider a forest as a static community or as a climax type. However, we have more long-term data sets available and numerous advances in sampling methods that are aiding our ability to understand forest change across time scales. So we are just now beginning to truly understand some of the phenomena on a deeper level. I'd recommend you start with an overview of the topic: van der Maarel & Franklin (2013) should provide a number of citations for you to follow: covers the composition, structure, ecology, dynamics, diversity, biotic interactions and distribution of plant communities, with an emphasis on functional adaptations; reviews modern developments in vegetation ecology in a historical perspective; presents a coherent view on vegetation ecology while integrating population ecology, dispersal biology, soil biology, ecosystem ecology and global change studies; Citations - Fridley, Jason D., and Justin P. Wright. 2012. “Drivers of Secondary Succession Rates across Temperate Latitudes of the Eastern USA: Climate, Soils, and Species Pools.” Oecologia 168 (4): 1069–77. doi:10.1007/s00442-011-2152-4. - Israel, Kimberly A. 2011. Vegetation change in Duke Forest, 1977 – 2010. University of North Carolina at Chapel Hill Masters Thesis. 120 pp. - Parviainen, J. (2005). Virgin and natural forests in the temperate zone of Europe. Forest Snow and Landscape Research, 79(1-2), 9-18. - Peet, Robert K., and Norman L. Christensen. 1980. “Succession: A Population Process.” Vegetatio 43 (1): 131–40. - van der Maarel, E. & J. Franklin. 2013. Vegetation Ecology, 2nd Edition. Oxford University Press, New York, New York. Pages 28-70. - Wright, Justin P., and Jason D. Fridley. 2010. 
“Biogeographic Synthesis of Secondary Succession Rates in Eastern North America.” Journal of Biogeography 37 (8): 1584–96. doi:10.1111/j.1365-2699.2010.02298.x.
{ "domain": "biology.stackexchange", "id": 7095, "tags": "botany, ecology, trees, ecosystem, community-ecology" }
Too many parentheses to format a percentage in SELECT
Question: This query seems to have way too many parentheses. SELECT CAST(ISNULL(ROUND(CAST((SUM(product1) + SUM(product2)) AS FLOAT) / CAST(salescalls.visits AS FLOAT) * 100, 2), 0) AS VARCHAR) + '%' AS avrg FROM sales, salescalls WHERE sales.salescallId = salescalls.id Answer: There is something you can do about CAST((SUM(product1) + SUM(product2)) AS FLOAT) / CAST(salescalls.visits AS FLOAT) * 100 To perform floating-point division rather than integer division, either the dividend or the divisor needs to be a float. You don't need to make both operands floats. Furthermore, you can make the dividend a float by using floating-point multiplication. That means that you don't need to cast anything to a float. That simplifies it to 100.0 * (SUM(product1) + SUM(product2)) / salescalls.visits Next, you can rewrite the sums as one aggregate by adding the two columns in each row before summing the rows: 100.0 * SUM(product1 + product2) / salescalls.visits I don't think there's much that can be done about ROUND(), ISNULL(), and CAST(… AS VARCHAR), though. Consider doing that formatting in your application-layer code instead of SQL. You've written the query using an old-style join. It would be better to write it using a JOIN keyword: SELECT CAST( ISNULL( ROUND( 100.0 * SUM(product1 + product2) / salescalls.visits, 2 ), 0 ) AS VARCHAR ) + '%' AS avrg FROM sales INNER JOIN salescalls ON sales.salescallId = salescalls.id
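The integer-versus-float division point is easy to demonstrate. The snippet below uses SQLite through Python's standard library purely for illustration; SQL Server's / operator behaves the same way when both operands are integers:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# With two integer operands, / performs integer division.
(int_div,) = con.execute("SELECT 7 / 2").fetchone()
print(int_div)  # 3, not 3.5

# Multiplying by the float literal 100.0 first promotes the whole
# expression to floating point, so no CAST is needed.
(pct,) = con.execute("SELECT ROUND(100.0 * (3 + 2) / 8, 2)").fetchone()
print(pct)  # 62.5
```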
{ "domain": "codereview.stackexchange", "id": 9316, "tags": "sql, sql-server" }
What has $E = mc^2$ to do with nuclear powerplants?
Question: In life, when you talk about nuclear energy, there always happens to be a guy who quotes Einstein's famous equation: "Yeah, they just convert mass to energy, $E = mc^2$, ya know?" When I think about that, everything I learned about nuclear power resembles domino arrangements. You knock over a block and it falls. On its way, it knocks over other dominoes, and when it falls it releases energy (sound waves). It's much the same in nuclear physics. You send a slow neutron into a nucleus. The nucleus absorbs it, breaks, and sends out more neutrons and energy (electromagnetic waves). So in the end, I see no domino blocks disappearing in this game. All we do is knock over domino arrangements that were built by old stars a long time ago. So why is this equation related to nuclear power? What mass disappears in nuclear power plants? Answer: The answer is that there is some reduction in mass whenever energy is released, whether in nuclear fission or burning of coal or whatever. However, the amount of mass lost is very small, even compared to the masses of the constituent particles. A good overview is given in the Wikipedia article on mass excess. Basically, the mass of a nucleus will in general be a little bit off from the sum of the masses of the protons and neutrons inside it. This is because there is a binding energy holding the nucleus together, and your standard $E = mc^2$ gives the equivalent mass for this energy. In the fission of uranium-235, $$ {}^{235}_{\phantom{0}92}\mathrm{U} + {}^1_0\mathrm{n} \to {}^{236}_{\phantom{0}92}\mathrm{U} \to {}^{141}_{\phantom{0}56}\mathrm{Ba} + {}^{92}_{36}\mathrm{Kr} + 3\ {}^1_0\mathrm{n}, $$ the total rest mass of the products is slightly less than that of the reactants. This is true even though there are the same number of protons (92) and neutrons (144) before and after. So it is not as though an entire nucleus disappears, or even an entire proton or neutron. The lost mass comes from the binding energy. 
The take-away message is that we are not destroying particles to create energy. Even nuclear fusion conserves the total number of protons and neutrons. Instead, you should think about the mass-energy equivalence the other way around. The fact that there is potential energy capable of being released in nuclear fission implies that the reactants must be heavier than the products. In the same fashion, a typical battery weighs less after being discharged (though by an immeasurably small amount), even though the nuclei are unchanged and the number of electrons is the same. That is, potential energy in any form adds to the mass of the system as a whole, and is not attributable to any one component.
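To put numbers on "the amount of mass lost is very small", here is a quick back-of-the-envelope check (constants rounded; the 200 MeV per fission figure is a commonly quoted approximation, not taken from the answer above):

```python
c = 2.998e8  # speed of light, m/s (rounded)

# The battery remark from the answer: releasing 1 kJ of chemical energy
# corresponds to a mass loss of E / c^2, far below anything measurable.
dm_battery = 1.0e3 / c**2  # kg

# Fission scale: roughly 200 MeV is released per U-235 fission, out of
# about 235 * 931.5 MeV of nuclear rest energy, so only on the order of
# 0.1% of the mass is converted.
fraction = 200.0 / (235 * 931.5)

print(dm_battery)  # ~1.1e-14 kg
print(fraction)    # ~9e-4
```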
{ "domain": "physics.stackexchange", "id": 7995, "tags": "special-relativity, nuclear-physics, mass-energy" }
How to calculate the dissociation constant of a weak acid from the titration with a strong base?
Question: Problem: A solution of an unknown weak acid of unknown concentration was titrated with a solution of a strong base of unknown concentration. During the titration, the pH after adding $\pu{2.00 mL}$ of the base was 6.912. An additional $\pu{14.00 mL}$ of the base was required to reach the equivalence point. Calculate the Ka of the weak acid. I have decimated six sheets of paper trying different equations. Combining the two ICE tables you can make for each equation, I can still get no fewer than three unknowns. I'm sure I'm overlooking a super-simple solution but I cannot think of it. These images are what I have done so far. Answer: This is a strong base-weak acid titration. If the weak acid is of the form $\ce{HA}$, then at the equivalence point, all the acid has been converted to its conjugate base. The total volume of base required for the equivalence point is $\mathrm{16~ml}$. Let the molarity of the base used be $M$. Then, the number of millimoles of base at the equivalence point is $M\times 16$. This is also the number of millimoles of acid taken. If only $\mathrm{2~ml}$ of base was used, there will be some acid and some salt of acid and base (a buffer, since the acid is weak and the base is strong). $$\mathrm{pH}=\mathrm{p}K_a+\log\frac{M\times 2}{M\times 16-M\times 2}=\mathrm{p}K_a+\log\frac{2}{14}$$ From this you will get $$6.912=\mathrm{p}K_a-0.845$$ $$\mathrm{p}K_a=7.757$$
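The arithmetic can be double-checked in a couple of lines; this uses the same buffer relation as the answer, with the unknown base molarity M cancelling out of the ratio:

```python
import math

pH = 6.912   # measured after the first 2.00 mL of base
v1 = 2.0     # mL of base added at that point
v_eq = 16.0  # total mL of base at the equivalence point

# pH = pKa + log([A-]/[HA]); the unknown molarity M cancels,
# leaving only the volume ratio 2 : (16 - 2).
pKa = pH - math.log10(v1 / (v_eq - v1))
Ka = 10.0 ** (-pKa)

print(round(pKa, 3))  # 7.757
```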
{ "domain": "chemistry.stackexchange", "id": 4943, "tags": "acid-base, equilibrium, titration" }
Canonical quantisation harmonic oscillator
Question: I have a question on the canonical quantisation as described at the linked wiki page: https://en.wikipedia.org/wiki/Quantum_field_theory#Canonical_quantisation we take the displacement of a classical harmonic oscillator described as $$ x(t)={\frac {1}{\sqrt {2\omega }}}ae^{-i\omega t}+{\frac {1}{\sqrt {2\omega }}}a^{*}e^{i\omega t}, $$ and promote $x(t)$ to a linear operator ${\displaystyle {\hat {x}}(t)}$: $${\displaystyle {\hat {x}}(t)={\frac {1}{\sqrt {2\omega }}}{\hat {a}}e^{-i\omega t}+{\frac {1}{\sqrt {2\omega }}}{\hat {a}}^{\dagger }e^{i\omega t}.}$$ The coefficient $a$ and its complex conjugate $a^*$ are replaced by ${\hat a}$ and ${\hat a}^{\dagger }$. Question: why, following this prescription, does ${\hat a}$ become the annihilation operator and ${\hat a}^{\dagger }$ the creation operator, and not conversely? Is there any physical reason that justifies it based on the minus or plus sign of $e^{-i\omega t}$ resp. $e^{i\omega t}$? Or is it just a "random choice"? Answer: In quantum mechanics, $a$ and $a^\dagger$ are ladder operators, in that they raise and lower the number of excitations present in the harmonic oscillator: $$ a^\dagger |0\rangle \propto |1\rangle \quad \Rightarrow \quad a^\dagger |n\rangle = \sqrt{n+1}|n+1\rangle, \\ a|1\rangle \propto |0\rangle \quad \Rightarrow \quad a|n\rangle = \sqrt{n}|n-1\rangle.$$ Each application of $a$ on your number-eigenstate $|n\rangle$ removes a quantum of excitation ($\hbar \omega$ for the harmonic oscillator). The opposite for $a^\dagger$. $a^\ast$ is the specific case of $a^\dagger$ when $a$ is a scalar, not a matrix. In field theory now, the vacuum $|0\rangle$ has no field excitations (i.e. no particles, no waves on the string). I can create one excitation by applying $a^\dagger$, like before. Since in the field theory context I am now creating a particle, it's called the creation operator. Same reasoning for the annihilation operator $a$. 
In the Heisenberg picture, the time dependence of an operator is given by: $$ \dot{A} = \frac{i}{\hbar} [H,A],$$ so: $$ \dot{a} = \frac{i}{\hbar}[H, a] = \frac{i}{\hbar}[\hbar \omega \, a^\dagger a,a] = -\mathrm{i}\omega a \quad \Rightarrow \quad a(t) = a(0)e^{-\mathrm{i}\omega t},$$ and $$ \dot{a^\dagger} = \frac{i}{\hbar}[H, a^\dagger] = \frac{i}{\hbar}[\hbar \omega \, a^\dagger a,a^\dagger] = \mathrm{i}\omega a^\dagger \quad \Rightarrow \quad a^\dagger(t) = a^\dagger(0)e^{\mathrm{i}\omega t}.$$
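The ladder relations and the commutator $[a^\dagger a, a] = -a$ that drives the Heisenberg-picture solution can be verified numerically in a truncated number basis. This is a quick sketch using plain Python lists; N is an arbitrary truncation, not anything from the original answer:

```python
import math

N = 6  # truncate the oscillator to its lowest N number states

# Annihilation operator in the number basis: a|n> = sqrt(n)|n-1>,
# i.e. the only nonzero entries are a[n-1][n] = sqrt(n).
a = [[0.0] * N for _ in range(N)]
for n in range(1, N):
    a[n - 1][n] = math.sqrt(n)

adag = [[a[j][i] for j in range(N)] for i in range(N)]  # real entries: dagger = transpose

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

num = matmul(adag, a)  # number operator a†a = diag(0, 1, ..., N-1)

# Check [a†a, a] = -a entrywise; this commutator is what produces
# a(t) = a(0) e^{-iwt} in the Heisenberg picture.
lhs, rhs = matmul(num, a), matmul(a, num)
for i in range(N):
    for j in range(N):
        assert abs((lhs[i][j] - rhs[i][j]) + a[i][j]) < 1e-12

# Ladder action on a basis vector: a†|1> = sqrt(2)|2>.
e1 = [0.0] * N
e1[1] = 1.0
v = [sum(adag[i][k] * e1[k] for k in range(N)) for i in range(N)]
print(round(v[2], 6))  # 1.414214
```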
{ "domain": "physics.stackexchange", "id": 61391, "tags": "quantum-mechanics, hilbert-space, operators, harmonic-oscillator" }
why do hypergiants shed mass before death?
Question: I'm reading on this website here and will be using lots of quotes from it. Hypergiant Star Seen Shedding Mass Ahead Of Explosive Death As Supernova Astronomers using a telescope in Chile have observed a hypergiant star shedding massive amounts of mass, suggesting it is about to end its relatively short life in a massive supernova explosion. The red hypergiant star VY Canis Majoris, one of the largest stars ever found in the Milky Way, is losing enormous amounts of its mass as it deteriorates, they say. It is 30 to 40 times as massive as our sun and 300,000 times as bright. If it sat in the center of our solar system, it would encompass the orbit of Jupiter. This interested me, so I kept reading: Radiation pressure is the force exerted by starlight, and is very weak, which is why only large dust grains have enough surface area to be affected and cause the star to lose mass, the researchers say. VY Canis Majoris, around 3,800 light-years away from us, is expelling an amount of dust and gas every year equal to 30 times the mass of the Earth, they say. As the star dies, wouldn't the radiation pressure decrease? Less fusion = less radiation = less radiation pressure, right? Why does an increase in radiation pressure cause gas and dust to be pushed out of the star? Answer: Towards the ends of their lives, massive stars exhaust hydrogen from their cores and burn heavier nuclear fuels. The sequence of events is that hydrogen core burning is followed by hydrogen burning in a shell around the core, then He core burning, then H+He shell burning, then C/O core burning, and so on. During phases when shell burning dominates the luminosity of the star, it becomes a red (super)giant; both its radius and luminosity increase. The central misconception is that the luminosity decreases - that isn't true. What happens, in broad terms, is that as the mean mass per particle increases (because of fusion) in the center, the temperature must increase to provide the necessary pressure. 
But the nuclear reactions are very temperature dependent, so the star burns through the heavier fuels even quicker, leading to higher luminosity. In the extended outer layers of the star, it is possible for dust to condense. The dust is coupled to the gas but scatters radiation from the interior. The momentum absorbed by the dust may be sufficient to lift the outer layers away from the star, because the "surface" gravity is relatively low. In practice, dust alone may be insufficient. Dust cannot form until it is far enough away from the star to fall below the dust condensation temperature. Radiative driving of gas may also play a role closer in. This is where light is absorbed in atomic and molecular transitions, accelerating the gas. As it gains velocity, the transition is redshifted with respect to the radiation so that new photons are able to be absorbed and impart momentum. The net effect of these processes, which are not fully understood, is that luminous red supergiants appear to be losing mass at the rate of $10^{-4}$ solar masses every year. This rate is highly dependent on the metallicity of the star, since this controls how much dust can form and how opaque to radiation the outer atmosphere is. Higher metallicity means more mass loss.
{ "domain": "physics.stackexchange", "id": 33224, "tags": "sun, stars, supernova, gas, radiation-pressure" }
How do I create the search tree for DFS applied to a grid map?
Question: I have been working through some search tree problems and came across this one: Assume that the algorithm has a closed list and that nodes are added to the frontier in the following order: Up, Right, Down, Left. For example, if node J is expanded: there is no node up from J so nothing is added to the frontier for up. K is right from J so it is added to the frontier, H is down from J so it is added to the frontier, there is no node left from J, so nothing is added to the frontier. a) Assume that the start node is node F and the goal node is node M. Provide the entire search tree if Depth First Search is employed. b) Provide the frontier at the time the search terminates. Because I understand how a depth-first search works with regards to the frontier (it is a LIFO queue), I know that the last node added to the frontier would be the next node you need to expand. Using that knowledge, the frontier would be as follows after each expansion:

F
F I B E
E is expanded: F I B H A
A is expanded: F I B H
H is expanded: F I B J
J is expanded: F I B K
K is expanded: F I B L
L is expanded: F I B M

The solution has been found, as we have reached M. I thus seem to have answered part b of the question, but as for how to draw the search tree, I am stumped. Any ideas would be appreciated. Answer: To draw the search tree, you just need to add as children the nodes that you found (i.e. the nodes that you add to the queue and that you may expand next). So, in your case, the root node of the tree would be $F$, which would have the children $I$, $B$, and $E$. Then $E$ would have the children $H$, $F$ and $A$, and so on. So, here's a simple illustration of this partially constructed search tree.

      F
     /|\
    / | \
   I  B  E
         /|\
        / | \
       H |F| A
      /|\
     / | \

Note that I added $F$ again to the search tree, but you should not expand it again, otherwise, you end up looping forever. I denoted it by |F| to differentiate it from the others. 
Moreover, note that the creation of the search tree does not really depend on the actual problem, but on the search algorithm and how you expand nodes/states. Here you can find a nice step-by-step example of how to construct the search tree of DFS, in case my explanation above is not clear enough. You can also find more info about this topic in the book Artificial Intelligence: A Modern Approach by Russell and Norvig (you can also find freely downloadable pdfs of the 3rd edition on the web), specifically, chapter 3 "Solving Problems by Searching".
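The pop-the-last-added-node behaviour can also be sketched directly in code. The adjacency list below is hypothetical: it is reverse-engineered from the expansion trace in the question (neighbours listed in the push order Up, Right, Down, Left), since the actual grid map is not shown, so treat it as an illustration only:

```python
# Hypothetical adjacency list consistent with the trace in the question.
graph = {
    "F": ["I", "B", "E"],
    "E": ["H", "F", "A"],
    "A": ["E"],
    "H": ["E", "J"],
    "J": ["K", "H"],
    "K": ["J", "L"],
    "L": ["K", "M"],
}

def dfs(start, goal):
    frontier = [start]  # LIFO stack
    closed = set()      # the closed list from the problem statement
    expansion_order = []
    while frontier:
        node = frontier.pop()  # most recently added node first
        if node == goal:
            return expansion_order, frontier
        closed.add(node)
        expansion_order.append(node)
        for nb in graph.get(node, []):
            if nb not in closed and nb not in frontier:
                frontier.append(nb)
    return expansion_order, frontier

order, remaining = dfs("F", "M")
print(order)      # ['F', 'E', 'A', 'H', 'J', 'K', 'L']
print(remaining)  # ['I', 'B']: still on the frontier when M is found
```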
{ "domain": "ai.stackexchange", "id": 2979, "tags": "search, depth-first-search" }
Integral involving two energy Green's functions
Question: The problem I am attempting to evaluate this integral: \begin{equation} I(\vec{k}) = \lim_{\epsilon\to 0} \int d^3q \, \frac{1}{E-E_{\vec{q}}+i\epsilon} \frac{1}{E-E_{\vec{k}+\vec{q}}+i\epsilon} \end{equation} with $E_\vec{q}=q^2/2$ and $E_{\vec{k}+\vec{q}}=|\vec{k}+\vec{q}|^2/2$. If possible, it would be nice to see multiple methods of solution. My attempt I chose to work in a spherical coordinate system with $\vec{k}$ aligned along the $z$-axis. Letting $\phi$ represent the polar angle between $\vec{k}$ and $\vec{q}$, I wrote $E_{\vec{k}+\vec{q}}$ as \begin{equation} E_{\vec{k}+\vec{q}} = \frac{1}{2} \left( k^2 + q^2 - 2kq \cos (\pi-\phi) \right) = \frac{k^2}{2} + \frac{q^2}{2} + kq \cos \phi . \end{equation} It is then fairly straightforward to evaluate both angular integrals, leaving only the radial integral: \begin{equation} I(\vec{k}) = \lim_{\epsilon\to 0} \frac{2\pi}{k} \int_0^\infty dq \, \frac{q}{E-q^2/2+i\epsilon} \ln \left| \frac{E-(k-q)^2/2+i\epsilon}{E-(k+q)^2/2+i\epsilon}\right| . \end{equation} But this is where I get stuck. Answer: This is a modification of classic manipulations that are done when computing loop integrals. The tool to use is usually referred to as "Feynman-parameters''. 
Inserting the definitions of $ E _i $ and defining $ m ^2 \equiv 2 E $ to make connection with the usual computations, I find (I drop the $\epsilon$'s since they are irrelevant here; each factor $E - E_i = (m^2 - q_i^2)/2$ contributes a factor of $2$, hence the $\tfrac{1}{4}$): \begin{align} \frac{1}{4} I & = \int d^3q \frac{1}{ ( q ^2 - m ^2 ) ( ( {\mathbf{q}} + {\mathbf{k}} ) ^2 - m ^2 ) } \\ & = \int _0 ^1 dx \int d^3q \frac{1}{ \left[ ( ( {\mathbf{q}} + {\mathbf{k}} ) ^2 - m ^2 ) x + ( q ^2 - m ^2 ) ( 1 - x ) \right] ^2 } \end{align} Expanding and rewriting the expression a bit gives, \begin{equation} \frac{1}{4} I = \int _0 ^1 d x \int d^3q \frac{1}{ \left[ ( {\mathbf{q}} + {\mathbf{k}} x ) ^2 - {\mathbf{k}} ^2 x ^2 + {\mathbf{k}} ^2 x - m ^2 \right] ^2 } \end{equation} Now shifting the integral variable I find: \begin{equation} \frac{1}{4} I = \int _0 ^1 d x \int d^3q \frac{1}{ \left[ {\mathbf{q}} ^2 + \Delta \right] ^2 } \end{equation} where $ \Delta \equiv - {\mathbf{k}} ^2 x ^2 + {\mathbf{k}} ^2 x - m ^2 $. The angular part of the integral is now trivial and the radial integral is straightforward. I find, \begin{equation} \frac{1}{4} I = \pi ^2 \int _0 ^1 d x \Delta ^{ - 1/2} \end{equation} The $ x $ integral is usually left untouched, but I suspect in this case you could try carrying that out too.
{ "domain": "physics.stackexchange", "id": 42412, "tags": "quantum-mechanics, scattering, integration, propagator" }
UnicodeEncodeError while running roslaunch (kinetic)
Question: Hi there, I'm very new to ROS. I'm trying to connect my lidar (EAI ydlidar X4) with hector mapping, and downloaded from git clone https://github.com/tu-darmstadt-ros-pkg/hector_slam I've created a launch file (from tutorial online): <?xml version="1.0"?> <launch> <include file="$(find ydlidar)/launch/lidar_view.launch" /> <node pkg="tf" type="static_transform_publisher" name="map_to_odom" args="0.0 0.0 0.0 0.0 0.0 0.0 /odom /base_link 40" /> <node pkg="tf" type="static_transform_publisher" name="base_frame_to_laser" args="0 0 0 0 0 0 /base_link /laser_frame 40" /> <!--<node pkg="rviz" type="rviz" name="rviz" args="-d $(find hector_slam_launch)/rviz_cfg/mapping_demo.rviz"/>--> <include file="$(find hector_mapping)/launch/mapping_default.launch" /> <node pkg="rviz" type="rviz" name="rviz" args="-d $(find ydlidar)/launch/lidar.rviz" /> <include file="$(find hector_geotiff)/launch/geotiff_mapper.launch" /> </launch> however, when I roslaunch it, I got: Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. 'ascii' codec can't encode character u'\xa0' in position 0: ordinal not in range(128) The traceback for the exception was written to the log file and in the log: ....... File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/substitution_args.py", line 365, in resolve_args resolved = _resolve_args(arg_str, context, resolve_anon, commands) File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/substitution_args.py", line 376, in _resolve_args for a in _collect_args(arg_str): File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/substitution_args.py", line 434, in _collect_args buff.write(c) UnicodeEncodeError: 'ascii' codec can't encode character u'\xa0' in position 0: ordinal not in range(128) [rospy.core][INFO] 2018-10-03 12:28:17,248: signal_shutdown [atexit] Anyone knows how to solve this? 
Originally posted by IvyKK on ROS Answers with karma: 39 on 2018-10-03 Post score: 0 Answer: According to Python: Removing \xa0 from string?, ordinal \xa0 is a "non-breaking space". That is indeed a unicode character, not an ascii one. If you copied your example from a website, those can include non-breaking spaces (as those are often used on websites to make sure the browser doesn't affect code example rendering). I would advise you to rewrite your launch file. Suspicious characters are: whitespace (ie: spaces), (double) quotes (ie: ' and "), and question marks (ie: ?). The latter may be surprising, but I've been in situations where it was not a question mark from the ascii range (ie: the "normal" question mark), but a slightly different version, from a unicode table. Just replace all of those with a text editor and try again. If your editor supports encoding transformations (ie: wholesale conversion from UTF to ASCII) then you could try those. Originally posted by gvdhoorn with karma: 86574 on 2018-10-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by IvyKK on 2018-10-03: Thank you so much! I tried that out and the error is gone. Unfortunately, got another error when re-attempting... Skipped loading plugin with error: XML Document '/opt/ros/kinetic/share/hector_worldmodel_geotiff_plugins/hector_geotiff_plugins.xml' has no Root Element. Comment by gvdhoorn on 2018-10-03: Well, that looks like a different problem, so that deserves its own question (but do first search for your current problem, as we don't need more duplicates).
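The root cause can be reproduced outside roslaunch. The snippet below is a sketch in Python 3 (roslaunch on Kinetic runs Python 2, where the implicit str/unicode coercion triggers the same codec error); the cleanup helper and its replacement table are illustrative, not part of ROS:

```python
# U+00A0 is the non-breaking space that websites often paste into code samples.
nbsp = "\xa0"
try:
    nbsp.encode("ascii")
except UnicodeEncodeError as exc:
    print(exc)  # 'ascii' codec can't encode character '\xa0' in position 0: ...

# An illustrative cleanup pass for a pasted launch file: replace common
# typographic lookalikes with their plain-ASCII equivalents.
REPLACEMENTS = {"\xa0": " ", "\u201c": '"', "\u201d": '"', "\u2018": "'", "\u2019": "'"}

def to_ascii(text):
    for bad, good in REPLACEMENTS.items():
        text = text.replace(bad, good)
    return text

print(to_ascii("<node\xa0pkg=\u201ctf\u201d/>"))  # <node pkg="tf"/>
```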
{ "domain": "robotics.stackexchange", "id": 31853, "tags": "navigation, mapping, roslaunch, ros-kinetic, hector-mapping" }
What is the difference between propositions and judgments?
Question: I get confused by the subtle difference between propositions and judgments when exposed to intuitionistic type theory. Can anyone explain to me what the point of distinguishing them is, and what distinguishes them? Especially in view of the Curry-Howard Isomorphism. Answer: First, you should know that, in general, there is no consensus about these terms and their definitions depend on the system in which one is working. Since you asked about intuitionistic type theory, I'll quote Pfenning: A judgment is something we may know, that is, an object of knowledge. A judgment is evident if we in fact know it. Propositions, on the other hand, according to Martin-Löf, are sets of proofs. In this interpretation, if the set of proofs for a proposition is empty then it is false, and otherwise true. A proposition is interpreted as a set whose elements represent the proofs of the proposition, says Nordström et al. On the other hand, in classical logic and in general, propositions are objects expressed in a language which can be either "true" or "false". To give you some extra intuition: from my point of view, judgments are metalogical and propositions logical. I suggest "Constructive Logic" by Frank Pfenning, "Proofs and Types" by Jean-Yves Girard and "Programming in Martin-Löf's Type Theory" by Bengt Nordström et al. All three are freely available on the Internet. The last one is probably the closest to what you want as it is oriented to programming and goes into great detail, at length, about the meanings of these terms and many more.
{ "domain": "cstheory.stackexchange", "id": 1487, "tags": "lo.logic, pl.programming-languages, type-theory" }
How to calculate the complexity of a TM?
Question: I have the language $\{ w \mid w \text{ contains an equal number of symbols } a \text{ and } b\}$. I want to show that the language is in the class $\textsf{P}$. To do this, I must give an implementation-level description of the relevant Turing machine. I'm not very knowledgeable about Turing machines, but I think I would be able to write down a possible transition function; from this, however, how do I compute the complexity in order to show the language is in the class $\textsf{P}$? Answer: You only have to show that there exists a Turing machine that decides the language, and such that the number of steps taken by it is a polynomial in the size of the input, in this case a polynomial in $|w|$. An example of such a machine is the following. Scan the tape left to right until you find both an $a$ and a $b$, and replace both of them with a new symbol, let's say $x$. If you fail to find both an $a$ and a $b$, there are two cases: if every nonblank symbol on the tape is $x$, answer $\textsf{YES}$, otherwise answer $\textsf{NO}$. It should be easy to see that the number of steps is $O(|w|^2)$, and therefore the language is in $\textsf{P}$. Just a pedantic note: we usually talk about the complexity of languages or problems, not machines. It would be more appropriate to say "number of steps taken" or "amount of space used" rather than "time complexity" or "space complexity" when talking about a specific Turing machine.
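To make that implementation-level description concrete, here is a rough Python sketch of the same marking strategy (ordinary code rather than a transition table, so it only mirrors the machine's behaviour, not its exact step count):

```python
# Sketch of the answer's machine: repeatedly find one 'a' and one
# 'b', overwrite both with the marker 'x', and accept iff neither
# symbol is left over at the end.

def equal_ab(w):
    """Decide whether w contains equally many a's and b's."""
    tape = list(w)
    while True:
        try:
            i = tape.index('a')   # left-to-right scan for an 'a'
            j = tape.index('b')   # left-to-right scan for a 'b'
        except ValueError:
            # Couldn't find both: accept iff no 'a' or 'b' remains.
            return 'a' not in tape and 'b' not in tape
        tape[i] = tape[j] = 'x'   # mark the pair off

print(equal_ab('aabb'), equal_ab('aab'))   # True False
```

Each round scans the tape in O(|w|) moves and erases one $a$ and one $b$, so there are at most |w|/2 rounds — the $O(|w|^2)$ total from the answer.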
{ "domain": "cs.stackexchange", "id": 10495, "tags": "complexity-theory, turing-machines" }
Perform the following 8-bit two's complement arithmetic operation
Question: How am I supposed to perform this subtraction? For instance: $01110000_2−11101011_2$ $01110000_2$ has a positive sign, so there is no need to perform two's complement. $11101011_2$ has a negative sign, so I need to perform two's complement. Two's complement for $11101011_2$ is $00010101_2$ Finally, I must perform the subtraction $01110000_2 - 00010101_2 = 1011011_2$ Answer: Two's complement is a way of representing positive and negative numbers in binary. There is a very good explanation here: https://stackoverflow.com/questions/1049722/what-is-2s-complement Basically, the really clever part about two's complement is that arithmetic works without making any special changes. Let's look at that in action with an addition operation: $10000001 + 00010000 = 10010001$. Notice that no special actions were taken in finding the sum. Nevertheless, what I've written there is that $-127 + 16 = -111$. Two's complement is really just a clever way of encoding the numbers designed to make arithmetic straightforward. By the way, an easy way of reading two's complement numbers is just to reverse the sign on the highest bit. To illustrate what I mean by this, regularly encoded (non-complement) 8-bit integers like $10010011$ and $01000010$ could be found by adding $1 + 2 + 16 + 128 = 147$, and $2 + 64 = 66$ respectively. If we were to designate these same numbers as two's-complement, we would just reverse the sign on the 128s bit. Now our two numbers would be $1 + 2 + 16 - 128 = -109$, and $2 + 64 = 66$. The second number didn't change because the highest bit was a $0$ already, so it doesn't matter if we are adding or subtracting it. So, if you were to try to subtract $01110000−11101011$, let's get an answer first so that we can check ourselves. $64+32+16 = 112$, and $-128+64+32+8+2+1 = -21$. You are performing the subtraction $112 - -21$. This means that our final answer should be $133$. So, just as you can subtract by negating before you add (ie. 
$112 + 21$ instead of $112 - -21$), we can perform a similar operation here. If we negate the second number, we flip the bits and add one: $00010100 + 1 = 00010101$. But now that we've negated the number, our subtraction has become an addition problem. So, our original $01110000−11101011$ has become $01110000 + 00010101$ 01110000 + 00010101 -------- 10000101 $10000101$ is $-128 + 4 + 1$, which equals... $-123$. -123?? Wait, what happened? We've fallen into the classic two's complement trap! Because we only have 8 bits, we only have a limited range of possible numbers. 8 bits allows for 256 possible values. Two's complement divides these values nicely: half are positive and half are negative, giving us a range of $-128$ to $127$. Why one less in the positives? It's because Zero has to be somewhere, and since $00000000$ doesn't start with a leading $1$, it comes out of the positive space. If you're just doing pure math, simply add an extra digit, and you'll be out of trouble. $010000101$ is $128 + 4 + 1$, which equals $133$. If you're on a computer, however, you have to pay more attention to this problem. When you go outside of the range of your bits, you are said to have an overflow error. Modern processors and languages have evolved ways of dealing with this problem, typically through flagging operations that might have caused overflow.
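As an illustrative sketch (not part of the original answer), the whole worked example — decoding the operands, wrapping at 8 bits, and falling into the overflow trap — can be replayed in a few lines of Python:

```python
# Sketch: 8-bit two's-complement decode/encode, reproducing the
# worked example above (112 - (-21) overflows an 8-bit register).

def to_int(bits, width=8):
    """Decode a two's-complement bit string: top bit weighs -2^(w-1)."""
    v = int(bits, 2)
    return v - (1 << width) if bits[0] == '1' else v

def to_bits(n, width=8):
    """Encode n as a two's-complement bit string of the given width."""
    return format(n & ((1 << width) - 1), '0{}b'.format(width))

a, b = to_int('01110000'), to_int('11101011')   # 112 and -21
print(a, b)
print(to_bits(a - b))          # wraps: 133 is outside [-128, 127]
print(to_int(to_bits(a - b)))  # -123, the overflow trap in the answer
```

Masking with `(1 << width) - 1` is what a real 8-bit register does implicitly: results simply wrap around, which is exactly how 133 reappears as -123.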
{ "domain": "cs.stackexchange", "id": 8804, "tags": "computer-architecture" }
How to justify the dot product in the expression of force of relativistic mechanics
Question: To deduce the expression of force in relativistic mechanics we can use: $$\vec{F}=\frac{d\vec{p}}{dt}=\frac{d(\gamma m\vec{v})}{dt}=m\left(\frac{d\gamma}{dt}\vec{v}+\gamma\frac{d\vec{v}}{dt}\right) = m\left(\frac{|\vec{v}||\vec{a}|\gamma^3}{c^2}\vec{v} +\gamma\vec{a}\right)= m\gamma\left(\frac{|\vec{v}||\vec{a}|}{c^2-v^2}\vec{v} +\vec{a}\right)$$ But in one book I found: $$\vec{F}=m\gamma\left(\frac{\vec{v}\cdot\vec{a}}{c^2-v^2}\vec{v} +\vec{a}\right)$$ How could one justify the exchange of $|\vec{v}||\vec{a}|$ for $\vec{v}\cdot\vec{a}= |\vec{v}||\vec{a}|\cos\theta$? Answer: First, notice that: $\displaystyle \frac{d |\vec{v}|}{dt}$ stands for the time derivative of the magnitude of the velocity vector $\vec{v}$. $\displaystyle \left|\frac{d\vec{v}}{dt}\right|$ stands for the magnitude of the time derivative of the velocity vector. Generally, these quantities are different. Only the second is to be identified with the magnitude of the acceleration vector, $|\vec{a}|=|d\vec{v}/dt|$. Because of that, you have to correct your first formula using $$\frac{d\gamma}{dt} = \frac{\gamma^3}{c^2} |\vec{v}| \frac{d|\vec{v}|}{dt} \neq \frac{\gamma^3}{c^2} |\vec{v}| |\vec{a}|.$$ Finally, you get your last (and correct) formula noticing that $$ |\vec{v}| \frac{d|\vec{v}|}{dt} = \frac{1}{2} \frac{d}{dt}|\vec{v}|^2 = \frac{1}{2} \frac{d}{dt}(\vec{v}\cdot\vec{v}) = \vec{v} \cdot \frac{d\vec{v}}{dt} = \vec{v} \cdot \vec{a}.$$
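The key step, $|\vec{v}|\,\frac{d|\vec{v}|}{dt} = \vec{v}\cdot\vec{a}$, can also be sanity-checked numerically; the trajectory below is an arbitrary choice for illustration, not something from the answer:

```python
# Numerical sanity check of the identity |v| d|v|/dt = v . a,
# using a sample trajectory and central finite differences.

import numpy as np

def v(t):
    # arbitrary smooth velocity curve where v and a are not parallel
    return np.array([np.cos(t), np.sin(2 * t), t])

t, h = 0.7, 1e-6
a = (v(t + h) - v(t - h)) / (2 * h)          # acceleration ~ dv/dt
speed = np.linalg.norm(v(t))
dspeed = (np.linalg.norm(v(t + h)) - np.linalg.norm(v(t - h))) / (2 * h)

print(abs(speed * dspeed - v(t) @ a) < 1e-6)  # the two sides agree
```

The check also makes the original confusion visible: replacing `dspeed` by `np.linalg.norm(a)` (i.e. using $|\vec{a}|$ instead of $d|\vec{v}|/dt$) breaks the equality whenever $\vec{v}$ and $\vec{a}$ are not parallel.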
{ "domain": "physics.stackexchange", "id": 44901, "tags": "special-relativity" }
What exactly is a photon?
Question: Consider the question, "What is a photon?". The answers say, "an elementary particle" and not much else. They don't actually answer the question. Moreover, the question is flagged as a duplicate of, "What exactly is a quantum of light?" – the answers there don't tell me what a photon is either. Nor do any of the answers to this question mentioned in the comments. When I search on "photon", I can't find anything useful. Questions such as, "Wave function of a photon" look promising, but bear no fruit. Others say things like, "the photon is an excitation of the photon field." That tells me nothing. Nor does the tag description, which says: The photon is the quantum of the electromagnetic four-potential, and therefore the massless bosonic particle associated with the electromagnetic force, commonly also called the 'particle of light'... I'd say that's less than helpful because it gives the impression that photons are forever popping into existence and flying back and forth exerting force. This same concept is in the photon Wikipedia article too - but it isn't true. As anna said, "Virtual particles only exist in the mathematics of the model." So, who can tell me what a real photon is, or refer me to some kind of authoritative informative definition that is accepted and trusted by particle physicists? I say all this because I think it's of paramount importance. If we have no clear idea of what a photon actually is, we lack foundation. It's like what kotozna said: Photons seem to be one of the foundation ideas of quantum mechanics, so I am concerned that without a clear definition or set of concrete examples, the basis for understanding quantum experiments is a little fuzzy. I second that, only more so. How can we understand pair production if we don't understand what the photon is? Or the electron? Or the electromagnetic field? Or everything else? It all starts with the photon. I will give a 400-point bounty to the least-worst answer to the question. 
One answer will get the bounty, even if I don't like it. And the question is this: What exactly is a photon? Answer: The word photon is one of the most confusing and misused words in physics. Probably much more than other words in physics, it is being used with several different meanings and one can only try to find which one is meant based on the source and context of the message. The photon that spectroscopy experimenter uses to explain how spectra are connected to the atoms and molecules is a different concept from the photon quantum optics experimenters talk about when explaining their experiments. Those are different from the photon that the high energy experimenters talk about and there are still other photons the high energy theorists talk about. There are probably even more variants (and countless personal modifications) in use. The term was introduced by G. N. Lewis in 1926 for the concept of "atom of light": [...] one might have been tempted to adopt the hypothesis that we are dealing here with a new type of atom, an identifiable entity, uncreatable and indestructible, which acts as the carrier of radiant energy and, after absorption, persists as an essential constituent of the absorbing atom until it is later sent out again bearing a new amount of energy [...]–"The origin of the word "photon"" I therefore take the liberty of proposing for this hypothetical new atom, which is not light but plays an essential part in every process of radiation, the name photon.–"The Conservation of Photons" (1926-12-18) As far as I know, this original meaning of the word photon is not used anymore, because all the modern variants allow for creation and destruction of photons. The photon the experimenter in visible-UV spectroscopy usually talks about is an object that has definite frequency $\nu$ and definite energy $h\nu$; its size and position are unknown, perhaps undefined; yet it can be absorbed and emitted by a molecule. 
The photon the experimenter in quantum optics (detection correlation studies) usually talks about is a purposely mysterious "quantum object" that is more complicated: it has no definite frequency, has somewhat defined position and size, but can span whole experimental apparatus and only looks like a localized particle when it gets detected in a light detector. The photon the high energy experimenter talks about is a small particle that is not possible to see in photos of the particle tracks and their scattering events, but makes it easy to explain the curvature of tracks of matter particles with common point of origin within the framework of energy and momentum conservation (e. g. appearance of pair of oppositely charged particles, or the Compton scattering). This photon has usually definite momentum and energy (hence also definite frequency), and fairly definite position, since it participates in fairly localized scattering events. Theorists use the word photon with several meanings as well. The common denominator is the mathematics used to describe electromagnetic field and its interaction with matter. Certain special quantum states of EM field - so-called Fock states - behave mathematically in a way that allows one to use the language of "photons as countable things with definite energy". More precisely, there are states of the EM field that can be specified by stating an infinite set of non-negative whole numbers. When one of these numbers change by one, this is described by a figure of speech as "creation of photon" or "destruction of photon". This way of describing state allows one to easily calculate the total energy of the system and its frequency distribution. However, this kind of photon cannot be localized except to the whole system. In the general case, the state of the EM field is not of such a special kind, and the number of photons itself is not definite. 
This means the primary object of the mathematical theory of EM field is not a set of point particles with definite number of members, but a continuous EM field. Photons are merely a figure of speech useful when the field is of a special kind. Theorists still talk about photons a lot though, partially because: it is quite entrenched in the curriculum and textbooks for historical and inertia reasons; experimenters use it to describe their experiments; partially because it makes a good impression on people reading popular accounts of physics; it is hard to talk interestingly about $\psi$ function or the Fock space, but it is easy to talk about "particles of light"; partially because of how the Feynman diagram method is taught. (In the Feynman diagram, a wavy line in spacetime is often introduced as representing a photon. But these diagrams are a calculational aid for perturbation theory for complicated field equations; the wavy line in the Feynman diagram does not necessarily represent actual point particle moving through spacetime. The diagram, together with the photon it refers to, is just a useful graphical representation of certain complicated integrals.) Note on the necessity of the concept of photon Many famous experiments once regarded as evidence for photons were later explained qualitatively or semi-quantitatively based solely based on the theory of waves (classical EM theory of light, sometimes with Schroedinger's equation added). These are for example the photoelectric effect, Compton scattering, black-body radiation and perhaps others. There always was a minority group of physicists who avoided the concept of photon altogether for this kind of phenomena and preferred the idea that the possibilities of EM theory are not exhausted. Check out these papers for non-photon approaches to physics: R. Kidd, J. Ardini, A. Anton, Evolution of the modern photon, Am. J. Phys. 57, 27 (1989) http://www.optica.machorro.net/Lecturas/ModernPhoton_AJP000027.pdf C. V. 
Raman, A classical derivation of the Compton effect. Indian Journal of Physics, 3, 357-369. (1928) http://dspace.rri.res.in/jspui/bitstream/2289/2125/1/1928%20IJP%20V3%20p357-369.pdf Trevor W. Marshall, Emilio Santos: The myth of the photon, Arxiv (1997) https://arxiv.org/abs/quant-ph/9711046v1 Timothy H. Boyer, Derivation of the Blackbody Radiation Spectrum without Quantum Assumptions, Phys. Rev. 182, 1374 (1969) https://dx.doi.org/10.1103/PhysRev.182.1374
{ "domain": "physics.stackexchange", "id": 93911, "tags": "particle-physics, photons" }
Can I leave a planet without achieving escape velocity?
Question: I know that if you exceed escape velocity, you will never fall back to the planet. My question is not about orbits. It's about brute-force propulsion to achieve altitude. I'm using an intentionally slow velocity to help illustrate my point. Imagine I have a rocket with very efficient fuel storage. My rocket can store enough energy to accelerate to 100kph shortly after leaving the ground, and continue to maintain that speed (100kph) for a very long period of time. My rocket just goes straight up. It doesn't try to enter an orbit. As it leaves the atmosphere, it can throttle back because there's no air resistance. As it continues to gain altitude in interplanetary space, it can throttle back even more, because Earth's gravitational influence diminishes with distance. It just maintains enough throttle to continue moving away from Earth at 100kph. At some point, Earth's gravitational influence would be moot, as other bodies (Jupiter, the Sun) would gain relative influence. Eventually, far outside the solar system, even the Sun's influence would be insignificant. My rocket never achieved escape velocity, but it sure did escape. Assuming that my fuel supply could last long enough, and I wasn't concerned about travel time, could this method allow my rocket to "leave" without achieving escape velocity? Answer: Escape velocity is how fast you must go in order to keep moving away indefinitely without additional thrust. If the Earth were the only major body in the system, and you moved away from it at 100 km/h for 1180 years, you'd be 6.9 AU away. Since Earth escape velocity at that distance is only 100 km/h, you could then stop the engine and coast to infinity. If we add the Sun to this oversimplified system, you could escape it by maintaining 100 km/h until you were 36 light-years away, which would take 390 million years. In the real universe, of course, the Earth's and Sun's neighbors make their spheres of influence much smaller.
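The distances quoted here follow from $v_{\rm esc}=\sqrt{2GM/r}$, i.e. $r = 2GM/v^2$; the short script below (standard constants, purely a sanity check and not part of the original answer) reproduces them:

```python
# Sketch: find the radius at which a steady 100 km/h equals the
# local escape velocity, r = 2*G*M / v^2, plus the travel time
# needed to reach that radius at constant speed.

GM_EARTH = 3.986e14   # m^3/s^2 (gravitational parameter of Earth)
GM_SUN   = 1.327e20   # m^3/s^2 (gravitational parameter of the Sun)
AU, LY   = 1.496e11, 9.461e15   # metres per AU / light-year
YEAR     = 3.156e7    # seconds per year

v = 100 / 3.6                       # 100 km/h in m/s
r_earth = 2 * GM_EARTH / v**2
r_sun   = 2 * GM_SUN / v**2

print(round(r_earth / AU, 1), round(r_earth / v / YEAR))     # ~6.9 AU, ~1180 yr
print(round(r_sun / LY, 1), round(r_sun / v / YEAR / 1e6))   # ~36 ly, ~390 Myr
```

The same two lines of algebra give both figures in the answer; only the central mass changes.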
{ "domain": "astronomy.stackexchange", "id": 1598, "tags": "gravity, escape-velocity" }
On Aharonov–Bohm effect
Question: The Aharonov–Bohm effect, in brief, is due to some singularities in space. In books it's an infinite solenoid most of the time, which makes some regions of space not simply connected. What intrigues me is the fact that in a real experiment we can't use an infinite solenoid. So even if we use one and say that locally it's a good approximation, that doesn't change the fact that the whole space is still simply connected. But the fact is that the effect was experimentally observed. So the question arises: how should one describe this effect in a more rigorous manner (or maybe not rigorous, but possible in the real world)? Answer: The solenoid might not be infinite in space, but in general neither is the configuration space for electrons. For example, in experiments that test the A-B effect, electrons are typically confined to a wire or ribbon of metal, with a hole that a (finite) solenoid can be placed through. In this case the electrons' position space is not simply-connected by construction, but the A-B effect is that you get path-dependent phases from the gauge field, regardless of how the non-simply-connectedness of your space came about.
{ "domain": "physics.stackexchange", "id": 4400, "tags": "quantum-mechanics, electromagnetism, topology, topological-phase" }
How fast can I run in a vacuum?
Question: One factor that limits my top running speed is air resistance. Another, much smaller, factor is the drag caused by the partial vacuum I create in my wake. Suppose instead of running on the surface of the Earth, I was running in a complete vacuum under constant gravity. Then the two mentioned factors would no longer apply. What then prevents me from gradually accelerating to arbitrary speeds? What would change if I could replace parts of my anatomy by stronger materials (think robot legs)? Answer: Running on a treadmill will get rid of both factors as well without the need of a vacuum. When someone is running they apply a force backwards against the ground which gives the acceleration, but they also apply a forwards force against the ground when each leg lands, creating a braking force. The position of the feet in contact with the ground relative to the runner's center of mass dictate whether the force is being applied forwards or backwards. Theoretically you could minimize your braking force and maximize your accelerating force by always landing your feet as close to your center of mass as possible, but this would require an increasingly greater force in order to keep your body aloft since your force vector will be originating behind you most of the time and for shorter periods of time. See this video and accompanying paper for example force patterns of running and related info.
{ "domain": "physics.stackexchange", "id": 73459, "tags": "newtonian-mechanics, vacuum, biology" }
Yellow-amber debris above water filter: What is it?
Question: I have some weird, yellow-amberish pebbles that have been filtered out by my ZeroWater filter. Any idea what they are? Copper maybe? The black stuff is charcoal from the filter. The quality of the water in my area is fairly good, with a TDS of 40-60. With the filter, I have a TDS of 0, which is excellent. But I'm still curious as to what those things are. Answer: It looks like ion-exchange resin; it is used to capture specific ions from water and replace them with others (either $\ce{H^+}$ or alkali metal ions). As such, it is probably also part of the filter.
{ "domain": "chemistry.stackexchange", "id": 12158, "tags": "filtering" }
debug ros node python
Question: Hi, I am new to ROS. How do you debug your ROS node, say if you code it in Python? Is it possible to run a ROS node in an IDE and set breakpoints, etc.? I saw a link on setting up IDEs, but it seemed to be for C++ IDEs. Originally posted by Sentinal_Bias on ROS Answers with karma: 418 on 2013-02-07 Post score: 1 Answer: From the command line, you can run your script under a debugger by: $ roscd %YOUR_PKG_HOME% $ gdb python (gdb) run %YOUR_SCRIPT_PATH% If you'd prefer sticking to Eclipse, one crude workaround to utilize the make solution on the wiki is: 1 Install PyDev on Eclipse, create your Python project on Eclipse by "File" --> "New" --> "Pydev Project". When successful, close Eclipse. 2 Copy the .project file that was just generated by Eclipse, e.g. by doing: $ roscd %YOUR_PKG_HOME% $ cp .project .project.org 3 Run make eclipse-project following http://www.ros.org/wiki/IDEs#Eclipse. If you don't have a Makefile in the current dir, create one as the same wiki page suggests when an error occurs. 4 Swap .project and remove .cproject: $ mv .project.org .project $ rm .cproject This sets up the ROS Python environment on Eclipse. You should be able to run your ROS script in debug mode. Apparently this is not smart or ideal, but it works for me. This works even on Groovy, where make eclipse-project generates many unnecessary files & directories. In this case you might want to remove them & the Makefile you no longer need, e.g.: $ rm -fR catkin* CATKIN_IGNORE cmake_install.cmake devel/ gtest/ Makefile test_results/ Originally posted by 130s with karma: 10937 on 2013-02-07 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Sentinal_Bias on 2014-01-14: Hi, thanks for the reply. I also got this to work for Python debugging: http://winpdb.org/docs/ It's a graphical debugger for Python. All I have to do is go to the source directory and run it from the terminal there.
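For pure-Python nodes, the standard-library debugger pdb is another option not covered above; the callback below is a hypothetical sketch, not code from the thread:

```python
# Sketch: pausing a Python node with the standard-library debugger
# instead of gdb. The callback name and message are hypothetical.

import pdb

def callback(msg):
    # pdb.set_trace()   # uncomment to pause here; at the prompt,
    #                   # type `p msg` to inspect, `n` to step, `c` to continue
    return len(msg)

print(callback("hello"))

# Alternatively, run the whole script under the debugger from a terminal:
#   python -m pdb my_node.py
```

This gives line-level breakpoints and variable inspection with no IDE setup at all, which is often all that's needed for a misbehaving subscriber callback.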
{ "domain": "robotics.stackexchange", "id": 12785, "tags": "ros, python, ide" }
Inconsistencies in Transform Definition between tf2 and ROS?
Question: According to this definition of a homogeneous transformation matrix (pictured below), a transform consists of a rotation from source to destination and the location of the source origin relative to the destination origin. So, for example, if I want to transform a point from frame A to frame B, where B is 2 meters ahead of A, my transformation matrix would have an x translation of -2 (using ROS coordinate conventions). To add this transform to my transform tree I'd expect to have to publish a TFMessage, with parent A and child B with a translation of -2. However, when I do this and use tf2_ros::Buffer::lookupTransform with the target frame as B and the source frame as A, my transform contains a +2 x-translation. Additionally, this relationship between A & B, with B being 2 meters ahead of A, is only visualized correctly in RVIZ if my published transform contains a +2 translation. This leads me to believe that the homogeneous transform matrix is being represented in tf2 as I expect, with the negative translation, but in ROS it is being represented in the opposite way. What exactly is the difference in transformation representation between ROS and tf2? Do the standard built-in tools all interpret transformations in the same way? Answer: We believe we've figured this out. Here's what we've learned: What is a Transform? Consider the diagram below which shows two coordinate frames, source and destination, as well as a vector representing some hypothetical measurement. To get the source frame to align to the destination frame we'd have to rotate counterclockwise by 15 degrees. We'll define positive rotation as clockwise, so the pose of the destination frame in the source frame is negative 15 degrees. If our data vector is defined in the source frame and we want to transform it to the destination frame, we'd rotate it by a positive 15 degrees; the below diagram helps illustrate this transformation. 
When we refer to a transform we're talking about transforming data, and when we refer to a pose we're talking about the relationship between two coordinate frames. These concepts of transform vs. pose are diametrically opposed, in that the pose is the inverse of the transform. In more concise terms, the transform from the source frame to the destination frame is the inverse of the pose of the destination frame in the source frame. Now let's consider a more complicated example, as illustrated in the below figure, where the frames are not only rotated 15 degrees apart but also offset from each other. In this case, the destination frame is some positive x and y distance away from the source frame, so the pose of the destination frame in the source frame would be negative 15 degrees with a positive translation. When transforming our data vector, though, we're in fact subtracting some x and y offset. Below is a helpful illustration of how the transform is encoded in a 4x4 homogeneous transform matrix (source). In particular, we use a subset of homogeneous transforms called isometric transforms, which consist only of a rotation and translation (technically an isometry can also be a reflection rather than a rotation, but our implementation only allows them to be rotations), disallowing operations such as scaling, shearing, and reflection, and also preserving collinearity. More information on isometries can be found here. The C++ Eigen library is very useful for working with transforms and data, and there's already a library for converting between Eigen and the geometry_msgs/TransformStamped: eigen_conversions. When we look up a transform from a source frame to a destination frame, using the lookupTransform API of the tf2 library, we receive the transform needed to transform a point in our source frame to a point in our destination frame. 
You might've noticed previously that looking up the transform from source to destination gives the opposite relationship between the two frames compared to what was published to the transform tree. This is because the published transform tree actually contains poses, not transforms! The lookupTransform API then converts these poses into actual transforms by inverting them. How to Transform Data Below are examples of how to utilize 3D isometric transforms to modify data as well as poses: Construct a transform. Inputs $R_A^B$ ∴ 3x3 rotation matrix, specifying how to rotate data in frame A to frame B. $t_B^A$ ∴ 1x3 translation vector, specifying location of frame A origin relative to frame B origin. Output $T_A^B$ ∴ Transform from frame A to frame B. Expression $T_A^B = \begin{bmatrix} R_A^B & t_B^A \\\ \left<0,0,0\right> & 1 \end{bmatrix}$ Construct a pose. Inputs $R_A^B$ ∴ 3x3 rotation matrix, specifying orientation of frame A relative to frame B. $t_B^A$ ∴ 1x3 position vector, specifying location of frame A origin relative to frame B origin. Output $P_A^B$ ∴ Pose of frame A in frame B. Expression $P_A^B = \begin{bmatrix} R_A^B & t_B^A \\\ \left<0,0,0\right> & 1 \end{bmatrix}$ Rotate a 3D vector in frame A to frame B. Inputs $R_A^B$ ∴ 3x3 rotation matrix, specifying how to rotate data in frame A to frame B. $\vec{v}_A = \left<{v_x, v_y, v_z}\right>_A$ ∴ 3D vector in frame A. Output $\vec{v}_B = \left<{v_x, v_y, v_z}\right>_B$ ∴ 3D vector in frame B. Expression $\vec{v}_B = R_A^B * \vec{v}_A$ Transform a point in frame A to frame B. Inputs $T_A^B$ ∴ Transform from frame A to frame B. $\vec{p}_A = \left<{p_x, p_y, p_z, 1}\right>_A$ ∴ 3D point in frame A. One is added as the 4th element so the translation is applied when we multiply by the transformation matrix. Output $\vec{p}_B = \left<{p_x, p_y, p_z, 1}\right>_B$ ∴ 3D point in frame B. Expression $\vec{p}_B = T_A^B * \vec{p}_A$ Composing multiple transforms into a single transform. Inputs $T_A^B$ ∴ Transform from frame A to frame B. 
$T_B^C$ ∴ Transform from frame B to frame C. Output $T_A^C$ ∴ Transform from frame A to frame C. Expression $T_A^C = T_B^C * T_A^B$ Composing multiple poses into a single pose. Inputs $P_B^A$ ∴ Pose of frame B in frame A. $P_C^B$ ∴ Pose of frame C in frame B. Output $P_C^A$ ∴ Pose of frame C in frame A. Expression $P_C^A = P_B^A * P_C^B$ Inverting an Isometry Input $I = \begin{bmatrix} R & t \\\ \left<0,0,0\right> & 1 \end{bmatrix}$ ∴ Isometric transform or pose to invert. Output $I^{-1}$ ∴ Inverted isometric transform or pose. Expression $I^{-1} = \begin{bmatrix} R^{-1} & t^{-1} \\\ \left<0,0,0\right> & 1 \end{bmatrix}$, where $R^{-1} = transpose(R)$ $t^{-1} = -R^{-1} * t$ Identities $T_A^B = P_A^B = inverse(T_B^A) = inverse(P_B^A)$ $T_B^A = P_B^A = inverse(T_A^B) = inverse(P_A^B)$
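The operations above can be sanity-checked numerically; the following sketch uses plain numpy (not tf2 or Eigen), and the 2-metre example mirrors the frames from the question:

```python
# Numerical sketch: build a 4x4 isometry from R and t, invert it
# with R^-1 = R^T and t^-1 = -R^-1 * t, and check that the pose is
# the inverse of the transform.

import numpy as np

def make_iso(R, t):
    """Assemble [[R, t], [0 0 0, 1]] from a 3x3 rotation and 3-vector."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

def invert_iso(M):
    """Invert an isometry using the closed form given above."""
    R_inv = M[:3, :3].T
    return make_iso(R_inv, -R_inv @ M[:3, 3])

# Frame B is 2 m ahead of frame A along x, axes aligned (the example
# from the question): the A->B transform carries a -2 m translation.
T_A_to_B = make_iso(np.eye(3), [-2.0, 0.0, 0.0])

p_A = np.array([2.0, 0.0, 0.0, 1.0])       # B's origin, expressed in A
print(T_A_to_B @ p_A)                       # -> [0. 0. 0. 1.] in B

# The pose of B in A is the inverse of the A->B transform: +2 m.
P_B_in_A = invert_iso(T_A_to_B)
print(P_B_in_A[:3, 3])                      # -> [2. 0. 0.]
```

This reproduces the sign flip that prompted the question: the tree is published with the +2 pose, while the data-moving transform carries -2.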
{ "domain": "robotics.stackexchange", "id": 38743, "tags": "ros, transform, tf2" }
vision_opencv installation error on beagleboard
Question: Hey, I was trying to install vision_opencv from here: https://code.ros.org/svn/ros-pkg/stacks/vision_opencv/ on my BeagleBoard xM running Oneiric. Each time I perform rosdep install vision_opencv, I get an error saying: unable to find ros-fuerte-opencv2. I checked the Makefile and it says to look for ros-fuerte-opencv2 in the apt repository, but since we don't really have a ROS repository for the BeagleBoard and the installation has to be done from source, I don't really know how to install this package. Any help will be greatly appreciated. Thanks Originally posted by incognito on ROS Answers with karma: 5 on 2012-07-08 Post score: 0 Answer: The source installation is the only way to install something on an ARM platform, but after a while it is pretty straightforward: first check out the source code $ svn checkout https://code.ros.org/svn/ros-pkg/stacks/vision_opencv/trunk then create a build dir and go into it $ mkdir build && cd build then make it with cmake $ cmake .. then make and install $ make $ sudo make install Hope I could help. Originally posted by dinamex with karma: 447 on 2012-07-09 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 10092, "tags": "ros, ubuntu, ubuntu-oneiric, vision-opencv, beagleboard" }
Laser reading not between range_min and range_max!
Question: Hi, I'm trying to use the nav2d package on a TurtleBot3 for autonomous exploration (third tutorial). When I launch it, everything seems to work, except for the fact that rqt_console shows the warning "Laser reading not between range_min and range_max!". Looking at the messages published on the /scan topic, it says range_max is 3.5 meters, and actually some readings go beyond this distance. The point is: how can I modify that value? I've tried modifying the custom laser range in Sensor.cpp, but that doesn't work, and in the yaml files I don't find any parameter referring to my laser range. Thanks for the help! Originally posted by ryuzaki on ROS Answers with karma: 3 on 2018-06-08 Post score: 0 Answer: According to the specification of the laser scan message, any depth measurements outside of the min-max range should be treated as erroneous and ignored. As such, I would assume these warnings can be safely ignored. Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-06-08 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by ryuzaki on 2018-06-10: Thanks for your answer. That's what I've done in the end, and actually everything works. Just out of curiosity, is there any way to modify that, or is it a way to prevent possible reading inaccuracy beyond the laser's range_max?
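If you do want to drop the offending readings yourself before further processing, a sketch like the following works (the field names mirror sensor_msgs/LaserScan; the helper itself is hypothetical, not part of the answer):

```python
# Sketch: mask out laser readings that fall outside the sensor's
# valid [range_min, range_max] window, as the LaserScan spec advises.

import math

def valid_ranges(ranges, range_min, range_max):
    """Replace readings outside [range_min, range_max] with NaN."""
    return [r if range_min <= r <= range_max else math.nan
            for r in ranges]

# Example: a too-close and a too-far reading are both masked.
print(valid_ranges([0.05, 1.2, 3.7], range_min=0.12, range_max=3.5))
```

Using NaN (rather than deleting entries) keeps the list aligned with the scan's angular increments, so index-to-angle bookkeeping still works downstream.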
{ "domain": "robotics.stackexchange", "id": 30987, "tags": "ros, ros-kinetic, nav2d, turtlebot3, 2dlidar" }
Why would substitution of H with D alter chemical shift?
Question: The $\ce{^{119}Sn}$ chemical shift of the dimeric tin hydride shown below (R = terphenyl) changes on addition of $\ce{D2}$ from a 1:2:1 triplet at $\pu{657.9 ppm}$ (due to coupling between $\ce{Sn}$ and the bridging $\ce{H}$) to a doublet at $\pu{650.6 ppm}$. I assume that's because of reversible dissociation of one of the bridging $\ce{H}$, which allows formation of a mixed species where both $\ce{H}$ and $\ce{D}$ are present (the doublet being due to coupling with the remaining $\ce{H}$). However, I'm not quite certain why that alters the $\ce{^{119}Sn}$ chemical shift (I know the $\ce{Sn-D}$ bond will be shorter/stronger due to the lower ZPE). I would be interested in any thoughts/comments. Answer: OP's main question: Why would substitution of $\ce{H}$ with $\ce{D}$ alter the chemical shift? OP has correctly indicated that the $\ce{Sn-D}$ bond will be shorter/stronger than $\ce{Sn-H}$ due to its lower zero-point energy (ZPE), and this is indeed at the root of the chemical shift difference. Bond strength is also the major cause of the kinetic isotope effect (the shorter the bond distance, the stronger the bond). Here, however, the isotope substitution changes the chemical shift by increasing the electron density at the adjacent atom ($\ce{Sn}$). Since the $\ce{Sn-D}$ distance is shorter than $\ce{Sn-H}$, the bonding electron pair in the $\ce{Sn-D}$ bond sits closer to $\ce{Sn}$ than that in the $\ce{Sn-H}$ bond. Thus, it is safe to say that the $\ce{Sn}$ of $\ce{Sn-D}$ is more shielded than that of $\ce{Sn-H}$, so we can expect an upfield shift of the relevant $\ce{Sn}$ resonance. That is exactly what you observed $(\pu{650.6 ppm} - \pu{657.9 ppm} = -\pu{7.3 ppm})$ when $\ce{H}$ was exchanged for $\ce{D}$ in the dimer. 
This phenomenon is common among deuterated organic compounds (Ref. 1): $$ \begin{array}{llrrr} \hline \text{Protiated cpd} & \text{Deuterated cpd} & \delta \ \left(\ce{^{13}C-H}\right)/\pu{ppm} & \delta \ \left(\ce{^{13}C-D}\right)/\pu{ppm} & \Delta \delta/\pu{ppm}\\ \hline \ce{CHCl3} & \ce{CDCl3} & 77.36 & 77.16 \pm 0.06 & -0.20 \\ \ce{CH3C#N} & \ce{CD3C#N} & 1.79 & 1.32 \pm 0.02 & -0.47 \\ \ce{(CH3)2C=O} & \ce{(CD3)2C=O} & 30.60 & 29.84 \pm 0.01 & -0.76 \\ \ce{C6H6} & \ce{C6D6} & 128.62 & 128.06 \pm 0.02 & -0.56 \\ \ce{CH3-OH} & \ce{CD3-OH} & 49.86 & 49.00 \pm 0.01 & -0.86 \\ \hline \end{array} $$ These chemical shifts also depend on other factors such as temperature and solvent. For comparison purposes, all chemical shifts of the protiated compounds were recorded in the corresponding deuterated solvents (e.g., $\ce{CHCl3}$ in $\ce{CDCl3}$) at a constant temperature (Ref. 1). Note: I also realized that the $^1\!J_{\ce{Sn-D}}$ coupling constant must be zero or negligible according to your observation. I tried to find the value of $^1\!J_{\ce{Sn-D}}$, but failed. Note that in deuterated organic solvents, $^1\!J_{\ce{C-D}}$ varies between $\pu{20 Hz}$ and $\pu{30 Hz}$ (Ref. 1). References: Hugo E. Gottlieb, Vadim Kotlyar, Abraham Nudelman, "NMR Chemical Shifts of Common Laboratory Solvents as Trace Impurities," J. Org. Chem. 1997, 62(21), 7512-7515 (DOI: https://doi.org/10.1021/jo971176v).
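As a quick arithmetic check (added here as an illustration, not part of the original answer), the Δδ column of the table above and the ¹¹⁹Sn shift difference from the question can be reproduced in a few lines:

```python
# Sanity-check the Δδ column: (δ in deuterated cpd) − (δ in protiated cpd).
# All values are copied from the table/question text above.
shifts = {                  # (δ protiated, δ deuterated), ppm
    "CHCl3":    (77.36, 77.16),
    "CH3CN":    (1.79, 1.32),
    "acetone":  (30.60, 29.84),
    "benzene":  (128.62, 128.06),
    "methanol": (49.86, 49.00),
}
deltas = {name: round(d - h, 2) for name, (h, d) in shifts.items()}
print(deltas)  # every Δδ is negative, i.e. an upfield shift on deuteration

# The 119Sn case: doublet at 650.6 ppm vs. triplet at 657.9 ppm
sn_delta = round(650.6 - 657.9, 1)
print(f"Δδ(119Sn) = {sn_delta} ppm")  # -7.3 ppm
```

The uniformly negative differences make the trend in the answer explicit: deuteration shields the neighboring nucleus and shifts its resonance upfield.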
{ "domain": "chemistry.stackexchange", "id": 15323, "tags": "inorganic-chemistry, nmr-spectroscopy" }