Node.js put records to Kinesis with infinite retry strategy.
Question: My application is really simple. It adds records to AWS Kinesis 500 at a time, and if an error occurs the failed records are added back to the queue to be retried. It works, but I feel like there's a bug somewhere that I don't see.

'use strict';

const _ = require('lodash');
const moment = require('moment');
const config = require('../config');
const logger = require('../logger');
const numbers = require('../helpers/numbers');

const streamName = config.kinesis.streamName;
let records = [];

module.exports = (kinesis) => {
  let sendRecords = () => {
    let payloadRecords = { Records: records, StreamName: streamName };

    if (records.length >= 500) {
      const pushingToKinesis = records.splice(0, 500);
      payloadRecords.Records = pushingToKinesis;
    } else {
      records = [];
    }

    kinesis.putRecords(payloadRecords, (err, data) => {
      if (err) {
        logger.error(err);
      }

      const failedRecord = _.get(data, 'FailedRecordCount', 0);
      if (failedRecord > 0) {
        logger.warn(`There are ${data.FailedRecordCount} failed`);
        data.Records.forEach((record, index) => {
          if (_.has(record, 'ErrorCode') || _.has(record, 'ErrorMessage')) {
            logger.debug(record);
            logger.debug(payloadRecords.Records[index]);
            records.push(payloadRecords.Records[index]);
          }
        });
      }
    });
  };

  let putRecord = (record) => {
    let payload = {
      Data: JSON.stringify(record),
      PartitionKey: String(numbers.random() * 100000)
    };

    records.push(payload);

    if (records.length >= config.kinesis.maxRecords) {
      sendRecords();
    }
  };

  return {
    putRecord: putRecord,
    recordCount: () => records.length,
    clearRecord: () => { records = []; },
    init: () => {
      setInterval(() => {
        if (!_.isEmpty(records)) {
          sendRecords();
        }
      }, 200);
    }
  };
};

Answer: There is no obvious bug, but a few observations:

- You are defining functions both within the returned structure and outside of it, which makes the code hard to read and parse. Keep it clean and define everything outside of the return.
- You are sending records from a function called putRecord; that name no longer matches what the function does.
- 500 should probably be a constant retrieved from config, and the same goes for the 200 ms interval.
- Using numbers.random() sounds like a terrible idea; you should use a library that generates GUIDs (https://www.npmjs.com/package/guid).
- Why assign config.kinesis.streamName to streamName? Just use it directly in the structure. It is one less line of code, and the reader does not have to wonder where it comes from.
- I would have re-ordered the 500 check and the assignment:

  records = records.splice(0, 500);
  let payloadRecords = { Records: records, StreamName: streamName };

  instead of

  let payloadRecords = { Records: records, StreamName: streamName };
  if (records.length >= 500) {
    const pushingToKinesis = records.splice(0, 500);
    payloadRecords.Records = pushingToKinesis;
  } else {
    records = [];
  }

  Personally I would even go for:

  // Make sure we never send more than 500 records; splice keeps the rest in records
  let payloadRecords = { Records: records.splice(0, 500), StreamName: streamName };

Finally, I am not a big fan of perpetually re-sending the data. If records get rejected because of their content, then once you accumulate 500 entries that can never be stored, your pipe will be filled with junk and you will stop transmitting storable data.
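The splice-based batching the review recommends can be sketched independently of the AWS SDK. The `send` stub and the `MAX_BATCH` constant below are illustrative assumptions, not real Kinesis API; a real client would call putRecords and read FailedRecordCount instead:

```python
MAX_BATCH = 500  # Kinesis PutRecords accepts at most 500 records per call

def send(batch):
    """Stub for kinesis.putRecords: pretend every 100th record fails."""
    return [r for r in batch if r % 100 == 0]

def flush(queue):
    # Take at most MAX_BATCH records off the front; the rest stay queued.
    batch, rest = queue[:MAX_BATCH], queue[MAX_BATCH:]
    failed = send(batch)
    # Re-queue only the entries that actually failed, behind the backlog.
    return rest + failed

queue = list(range(1200))  # stand-ins for real payloads
queue = flush(queue)       # 500 taken, 5 simulated failures re-queued
```

As the answer warns, a record that fails for content reasons will loop forever under this scheme; a per-record retry counter is the usual escape hatch.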
{ "domain": "codereview.stackexchange", "id": 20845, "tags": "javascript, node.js, ecmascript-6, amazon-web-services" }
Installing turtlebot on ROS kinetic
Question: I am using a master computer to control my turtlebot. My master computer has Ubuntu 16.04 Xenial, so I was wondering whether turtlebot will work with it and, if it does, how to install turtlebot for Xenial. Most of the tutorials say turtlebot is supported only on ROS indigo, so what do I do? Originally posted by homagni on ROS Answers with karma: 11 on 2016-10-19 Post score: 1 Answer: The TurtleBot packages have been released for Kinetic. See the tutorials here: http://wiki.ros.org/Robots/TurtleBot Originally posted by tfoote with karma: 58457 on 2016-10-19 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by homagni on 2016-10-19: The tutorials all point to ROS indigo. Do I just blindly follow them for ROS kinetic too? Won't there be any changes for installing turtlebot on ROS kinetic? Can you please paste the link to the instructions? Thank you. Comment by tfoote on 2016-10-19: Generally everything that worked in indigo is expected to work in kinetic. Just replace the keywords and use Xenial instead of Trusty. Comment by homagni on 2016-10-19: Do I need to replace the keyword indigo with kinetic anywhere? Comment by Eric_ROS on 2016-11-21: Hello all, I also have problems installing the Turtlebot packages on ROS kinetic. I followed the instructions to install the turtlebot debs, replacing indigo with kinetic, but there are still some errors... Any obvious solution for this? Thank you! Comment by jwhendy on 2017-05-08: Replacing indigo with kinetic in the current tutorial install line fails due to ros-kinetic-rocon-remocon and ros-kinetic-rocon-qt-library not being found. Removing them works, though it might impact some functionality. Comment by srf on 2017-10-13: How did you get on with this? I have the same issue and I am worried I will run into problems down the line. Comment by jwhendy on 2017-11-08: @srf I don't know! I might have abandoned it, as I don't recall doing much with the turtlebot. 
That said, the other day I followed along with a tutorial and launched a turtlebot in Gazebo and it worked, so maybe the issue resolved itself with an update?
{ "domain": "robotics.stackexchange", "id": 26005, "tags": "ros, turtlebot, ros-kinetic, ros-indigo" }
In QFT, can field operators at different points in spacetime always be expressed as unitary transformations of each other?
Question: Given an operator-valued field $\Phi(x)$ and two points in spacetime, $x$ and $y$, can I always write down something like: $$ \Phi(y) = U_{x,y}^{-1} \Phi(x) U_{x,y} $$ with $U_{x,y}$ a unitary operator? Answer: It depends on what you call a QFT. If it is an object satisfying the definition commonly known as the Wightman axioms, then the property is built into it, i.e., it follows from axiom W2 together with the trivial transitivity of the action of the Poincaré group on spacetime.
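Concretely, for pure translations axiom W2 supplies the unitary explicitly: the representation $U(a,\Lambda)$ of the Poincaré group acts on the field as (sign conventions vary by textbook)

```latex
U(a,1)\,\Phi(x)\,U(a,1)^{-1} = \Phi(x+a),
\qquad
U(a,1) = e^{\,i P_\mu a^\mu},
```

so with $a = y - x$ one may take $U_{x,y} = U(a,1)^{-1}$ in the question's notation.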
{ "domain": "physics.stackexchange", "id": 42150, "tags": "quantum-field-theory, unitarity" }
Arduino with navigation stack
Question: Is it possible to launch the robot_configuration.launch file with an Arduino for Navigation Stack usage? If yes, how? Rosserial_arduino? My Arduino is used for all the UAV controls. I am unsure of what to add in the robot_configuration launch file. From my senior's report, they mentioned that there is no odom for the UAV, but the navigation stack requires it for the local_planner. So I hope to gather some guidance here, as I am quite confused. Originally posted by Edward Ramsay on ROS Answers with karma: 65 on 2013-06-20 Post score: 3 Answer: I think you got the spirit. For now, the best way to use the navigation stack (which needs a lot of computational resources) is to run it on a computer and just pass commands to the robot base (which is commanded by the Arduino). So you should run rosserial on the Arduino and make it subscribe to the cmd_vel messages; also, if you have wheel encoders or other sensors, you could connect them to the Arduino and publish their data to ROS. You should check http://wiki.ros.org/rosserial_arduino/Tutorials. Originally posted by Henrique with karma: 68 on 2016-11-13 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 14648, "tags": "ros, navigation, uav, obstacle-avoidance, rosserial" }
(Codewars) Linked Lists-Sorted Insert (2)
Question: Original Question. So I rewrote the code for the problem described in the original question, but someone had already answered that question, and my rewrite would have invalidated their answer, so I just posted a new question.

Updated Solution

def sorted_insert(head, data):
    tmp = Node(None)
    tmp.next = head
    node = Node(data)
    while tmp.next:
        if tmp.next.data > data:
            break
        tmp = tmp.next
    node.next = tmp.next
    tmp.next = node
    return head if tmp.data else node

Answer: To me it seems the purpose of this rewrite is to mitigate the special treatment for replacing the head. Like the previous solution, it also suffers from a lack of separation of distinct logical elements. In this example, tmp is used for two purposes: to traverse the list, and to act as the dummy for checking whether head was replaced. This is one step away from a clean dummy node prefixed to the list to eliminate the special treatment of the head, and I think this clean separation is simpler and easier to understand:

def sorted_insert(head, data):
    # a dummy node inserted in front of head
    dummy = Node(None)
    dummy.next = head
    # whatever happens in the rest, the new head will be dummy.next

    # loop until the insertion point
    node = dummy
    while node.next:
        if node.next.data > data:
            break
        node = node.next

    # create the new node and insert it
    new_node = Node(data)
    new_node.next = node.next
    node.next = new_node

    # return the head. it may or may not have been replaced
    return dummy.next
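To sanity-check the dummy-node version, a minimal Node class (assumed here; Codewars supplies its own) is enough:

```python
class Node:
    """Minimal stand-in for the Codewars-provided linked-list node."""
    def __init__(self, data):
        self.data = data
        self.next = None

def sorted_insert(head, data):
    # dummy node in front of head removes the head special case
    dummy = Node(None)
    dummy.next = head
    node = dummy
    while node.next:
        if node.next.data > data:
            break
        node = node.next
    new_node = Node(data)
    new_node.next = node.next
    node.next = new_node
    return dummy.next

def to_list(head):
    """Collect the list values for easy inspection."""
    out = []
    while head:
        out.append(head.data)
        head = head.next
    return out

head = None
for value in [3, 1, 2, 0]:
    head = sorted_insert(head, value)
# to_list(head) is now sorted, including the front-insertion case (0)
```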
{ "domain": "codereview.stackexchange", "id": 33822, "tags": "python, beginner, programming-challenge, linked-list" }
Can I uncut a piece of paper?
Question: I've inadvertently cut a piece of paper that I wasn't meant to. I've repaired it pretty well using single-sided sticky tape and I think, so long as no one looks at it too closely, I'll get away with it. However, if you look close enough you can still tell it has been cut. Given the importance of this document, I don't feel sticking the paper back together again is good enough. What I really need to do is 'uncut' the paper. In order to do this I need to know two things I thought Physics.SE would be able to help me with: What happens to the paper as I cut it? Is the process reversible (even if impractical)? If so, how so, and if not, why not? I'll remove this section in due course but just a friendly hello and a heads up that although this is my first question on Physics.SE it isn't my first on SE altogether, so Comments and Suggestions for Improvements are gratefully received (I was particularly uncertain about the tags). I'd rather you commented than Downvoted / VTC, but you are of course welcome to vote as you please! If you do choose to vote this way it would be appreciated if you could explain what caused it and what I can do to reverse it! Answer: Ordinarily, you cannot "uncut" paper because it consists of a matted mass of microscopic fibers pressed together with a little glue mixed in to hold them together. Cutting the paper causes the fibers themselves to be cut, and once cut, their ends cannot be butted back together without more glue. An exception exists for papers which have very little or no glue in them at all, like paper towel stock or tissue paper. In this case, if you put a very small amount of water on the cut, the fibers readily let go of one another and the cut can then be massaged back together. If pressed and allowed to dry, the fibers that cross the original cut will then hold fast... but it's really hard to accomplish this in practice.
{ "domain": "physics.stackexchange", "id": 51727, "tags": "everyday-life, material-science, reversibility" }
Behavior of the electric and magnetic fields under time reversal and parity
Question: The behavior of the electric field $\mathbf{E}$ and the magnetic field $\mathbf{B}$ under time reversal and parity can be calculated in different ways. My first approach is to study the transformation behavior of the field strength tensor $F^{\mu\nu}$ when acting on it with the Lorentz transformation for time reversal $$(\mathcal{T}_{\;\;\;\nu}^{\mu})=\text{diag}(-1,1,1,1)$$ and similarly for parity $$(\mathcal{P}_{\;\;\nu}^{\mu})=\text{diag}(1,-1,-1,-1).$$ The result is that for both time reversal and parity $\mathbf{E}$ and $\mathbf{B}$ behave as: $$\begin{align}\mathbf{E'} &= -\mathbf{E}\\ \mathbf{B'} &= \mathbf{B} \end{align}$$ On the other hand, if one follows the argumentation of Jackson and demands the invariance of the e.o.m., mathematically: $$\begin{align} \mathcal{T}&: \quad \mathbf{x'} = \mathbf{x} \quad \text{and} \quad t' = -t\\ \mathcal{P}&: \quad \mathbf{x'} = -\mathbf{x} \quad \text{and} \quad t' = t \end{align}$$ for the transformations and $$m_0 \gamma' \frac{\mathrm{d}u'^\mu}{\mathrm{d}t'} \stackrel{!}{=} \frac{q}{c} F'^{\mu\nu}u'_\nu $$ for the e.o.m., then the equation above implies a different transformation behavior for the field strength tensor, with the result: $$\begin{align} \mathcal{T}&: \quad \mathbf{E'} = \mathbf{E} \quad \text{and} \quad \mathbf{B'} = -\mathbf{B}\\ \mathcal{P}&: \quad \mathbf{E'} = -\mathbf{E} \quad \text{and} \quad \mathbf{B'} = \mathbf{B} \end{align}$$ My question: how is this ambiguity resolved, or what is my misconception? Answer: The transformation under time reversal of the forms in electrodynamics is subtle because the gauge field 1-form $A = A_\mu \mathrm{d}x^\mu$ and the field strength $F = F_{\mu\nu}\mathrm{d}x^\mu\wedge\mathrm{d}x^\nu$ are not the correct physical objects to transform. This may be seen by observing that the Maxwell equations are $\mathrm{d}F = 0$ and $\mathrm{d}\star F = \star J$, but the former is just a Bianchi identity following from $\mathrm{d}^2 = 0$. 
The actual equation of motion for the gauge theory is given in terms of the Hodge duals $\star F$ and $\star J$, and it is thus the Hodge duals whose transformation behaviour dictates the transformation behaviour under time reversal. In the field strength tensor, we have the terms $E_i \mathrm{d}t\wedge\mathrm{d}x^i$ and $B_i \epsilon^{ijk}\mathrm{d}x^j\wedge\mathrm{d}x^k$, and from this one would indeed conclude that it is the electric field that changes sign under time reversal. However, inspecting the Hodge dual that occurs in the equation of motion, we find the opposite behaviour, since the star of the terms with $\mathrm{d}t$ contains no $\mathrm{d}t$ terms anymore and vice versa. This highlights a general and important fact: The Hodge star does not commute with coordinate transformations that change the handedness of the underlying coordinate system, since its definition crucially relies on the ordering and handedness of the vectors in the system. Therefore, as soon as we consider transformations whose determinant is negative (since that is the abstract sign of changing handedness), care must be taken for all geometric objects whether the correct physical interpretation is to have the transformation act on them or on their duals.
{ "domain": "physics.stackexchange", "id": 40794, "tags": "special-relativity, symmetry, classical-electrodynamics, parity, time-reversal-symmetry" }
Electrostatic and gravitational forces combined
Question: If we have two bodies, both of considerable mass and charge, will we consider both the gravitational force and the electrostatic force to calculate the acceleration? Why (not)? Answer: The comparison which is often made when discussing the forces acting within a nucleus is that for two protons, where the ratio of the electrostatic repulsion to the gravitational attraction is found. $$\frac{1}{4\pi \epsilon_0}\frac {q_{\rm p}q_{\rm p}}{r^2} \quad{\Large{:}}\quad G \frac{m_{\rm p}m_{\rm p}}{r^2} \Rightarrow \frac{q_{\rm p}^2}{4\pi \epsilon_0 Gm_{\rm p}^2}\quad{\Large{:}}\quad 1 \Rightarrow 10^{36} \quad{\Large{:}}\quad 1$$ So in this context the electrostatic repulsion is obviously much stronger than the gravitational attraction. You can put your own numbers in to see if you can make the two forces comparable, but remember that as you increase the mass you then also have both positive (protons) and negative (electrons) charges in roughly equal number, and thus for large masses the gravitational attraction becomes the dominant force, as the electrostatic repulsive forces between charges with the same sign tend to cancel out the electrostatic attractive forces between charges with the opposite sign.
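The quoted $10^{36}$ proton-proton ratio is easy to verify numerically; the constants below are rounded CODATA values:

```python
import math

e    = 1.602176634e-19   # proton charge, C
m_p  = 1.67262192e-27    # proton mass, kg
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
G    = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2

# r^2 cancels in the ratio, so no separation is needed
coulomb = e**2 / (4 * math.pi * eps0)
gravity = G * m_p**2
ratio = coulomb / gravity  # comes out on the order of 10**36
```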
{ "domain": "physics.stackexchange", "id": 57243, "tags": "forces, electrostatics" }
MoveIt Setup Assistant ROS Jade
Question: What is the preferred method for installing the MoveIt Setup Assistant with ROS Jade? Do I need to build from source? Thanks. Originally posted by user12821821 on ROS Answers with karma: 45 on 2015-09-03 Post score: 2 Answer: Do I need to build from source? Looking at ros.org/debbuild/jade?q=moveit (and wiki/moveit_setup_assistant), it looks like it hasn't been released for Jade yet. Building from source would seem to be your only option. Originally posted by gvdhoorn with karma: 86574 on 2015-09-04 This answer was ACCEPTED on the original site Post score: 4
{ "domain": "robotics.stackexchange", "id": 22567, "tags": "ros, moveit, catkin, moveit-setup-assistant, ros-jade" }
Identification of an alpine blueberry type
Question: I'm trying to figure out what kind of berry this is. It can be found all over the Swiss, French, and Italian alps, generally right where ordinary blueberries are (the ones one would eat). However, they are distinctly different: The flower attachment spot (sorry, non-botanist here) at the tip is square-shaped (see photo). Their leaves turn red much later during the season. The berry is more elongated compared to ordinary blueberries, a bit of an ellipsoid. Answer: Possibly Vaccinium gaultherioides (sometimes considered a subspecies of Vaccinium uliginosum), which are present throughout the Alps. For example, see InfoFlora*: Vaccinium gaultherioides Bigelow, © Konrad Lauber – Flora Helvetica – 2012 Haupt Bern You can see the prevalence of this species throughout the Swiss alps in the below range map (also from InfoFlora): * the national data center for the Swiss flora I don't know the flora of the Alps personally, and I feel your post lacks enough detail for me to confidently pick and choose among many Vaccinium candidates. From those I've examined, this species seems most visually similar to yours.
{ "domain": "biology.stackexchange", "id": 10202, "tags": "species-identification, botany" }
Why are there only 14 types of Bravais lattices and not 28 when there are 7 types of unit cells and each can have four variations?
Question: As the title suggests, I can't understand why certain kinds of variations (like face-centred or body-centred) are restricted to certain types of unit cells. An orthorhombic unit cell has primitive, body-centred, face-centred and end-centred variations; why isn't the same applicable to other crystal systems too? Does this have something to do with symmetry? But in that case a cubic unit cell should have four variations (it has only three). There are some patterns, like all crystal systems with all angles $=90^\circ$ having body-centred variations, but beyond that it doesn't make sense. Answer: Essentially, certain combinations of the possible point-group symmetries (cubic, tetragonal, hexagonal, trigonal, orthorhombic, monoclinic, triclinic) and possible translational symmetries (simple, base-centered, face-centered, body-centered) end up having identical overall lattice symmetries, and thus you don't get $7×4$ unique lattices. For example, suppose you propose a base-centered cubic lattice. Base-centered means you get the same lattice back if you translate the corners of each unit cell to the centers of a pair of specified opposing faces (the "bases"). But "cubic" means you get the same unit cell back when you rotate each face to match an adjacent one (rotating around a body diagonal of the cube). So to have both the base-centered translational symmetry and the cubic point-group symmetry, you have to allow translation of the unit cell corners to the centers of all the faces, not just one opposing pair, and your intended base-centered cubic lattice is really face-centered cubic. Let's try a different example. Suppose you try to construct a base-centered tetragonal lattice by allowing translations of the corners onto the centers of the opposing square faces of the prism. 
Tetragonal point-group symmetry does not include a rotation around a body diagonal or any other operation that would shift the square faces onto another face, so you avoid the trap of turning "base-centered" into "face-centered" like what happened with cubic symmetry. You really do have a specific pair of "bases". But now there is a different trap: you can draw a smaller unit cell, with smaller square faces, that is a simple tetragonal lattice. So again your intended base-centered lattice is not unique; in this case it is just another simple tetragonal lattice. When we work through all the constraints with each of the seven point-group symmetries, we find that a unique base-centered lattice exists only for the orthorhombic point-group symmetry, a unique body-centered lattice exists only for the cubic, tetragonal and orthorhombic point-group symmetries, and so on. Thus only 14 out of the apparent 28 point-group/translational symmetry combinations actually form different Bravais lattices.
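The final tally can be written out explicitly. The centering labels below are the conventional symbols (P = primitive/simple, C = base-centered, I = body-centered, F = face-centered), with the trigonal system listed under its rhombohedral cell R:

```python
# Which centerings survive the uniqueness arguments in each crystal system.
bravais = {
    "triclinic":    ["P"],
    "monoclinic":   ["P", "C"],
    "orthorhombic": ["P", "C", "I", "F"],  # the only system keeping all four
    "tetragonal":   ["P", "I"],
    "trigonal":     ["R"],
    "hexagonal":    ["P"],
    "cubic":        ["P", "I", "F"],       # base-centered collapses to F
}

total = sum(len(centerings) for centerings in bravais.values())  # the 14 lattices
```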
{ "domain": "chemistry.stackexchange", "id": 15950, "tags": "symmetry, crystallography, lattices, bravais-lattices" }
Analyse audio/music frequencies without STFT, at 1/f temporal resolution, using probe phasors at logarithmically-spaced frequencies, O(N log N)?
Question: First, some background: The STFT is the best general-purpose tool I know of for analysing a (musical or other) signal into its component frequencies. For many purposes the STFT works fine, but there are two related problems that make it non-ideal for musical purposes -- 1) the frequency bins are evenly spaced, not logarithmically spaced like the human hearing system works, and 2) by the mathematical nature of the DFT/FFT, you need to capture a long period of audio to get decent frequency resolution at low frequencies. For example, to be able to distinguish semitones accurately in the bass range of a piano, you'd need approximately 1 Hz resolution, meaning you'd need to capture 1 second of audio. For music, this is a completely unacceptable time resolution. There are various ways to enhance the output of the FFT, including a) zero-padding to create a larger window, b) overlapping analysed segments, or c) magic with the phase to detect frequency change. These are useful, but still have problems -- more computational overhead, and while (a) can increase the apparent frequency resolution at a given temporal resolution, it doesn't help with a situation where two frequencies happen to fall into the same bin. So... I've been researching other possibilities, out of sheer interest. CWT and DWT looked very promising at first, but the CWT is very slow to compute, and the DWT only has octave frequency resolution. (Is that right?) I've also considered taking multiple FFTs at different time/frequency resolutions to cover the space better, or solutions like zoom-FFT. These all help solve different specific problems, but none of them really make the fundamental problem go away: we don't have a fast (N log N) way of analysing the frequencies in a signal at good temporal and frequency resolution. A lot of people point to the Heisenberg uncertainty principle at this point, implying that there's a mathematical impossibility of actually achieving this at all. 
I'm pretty sure this is simply wrong: sure, the uncertainty principle is valid, but we're not even close to running into it yet. The problem here is that the DFT/FFT can only analyse frequencies at integer multiples of the fundamental. However, the other frequencies are certainly there to be analysed, at good temporal resolution -- if we're willing to use a (computationally slow) sine sweep or continuous wavelet transform. The uncertainty principle doesn't cause problems until long after the FFT ceases to be useful. So, my solution. I'd like you to tell me if and where it's wrongheaded: Let's correlate the input signal with probe phasors at select frequencies. At first, this sounds like an N^2 algorithm (N operations for a correlation at each frequency, times the number of frequencies). However, the key point is that for equivalent log-frequency resolution to an FFT of the same frequency resolution (at low frequencies), you only need to analyse log N frequencies -- because the resolution at high frequencies is logarithmically less important in musical signals. So, we have an algorithm that works like this: Decide on your frequency resolution. Let's say octaves for now, though I'd want sub-semitones in most applications. Pre-load some buffers with probe phasors at frequencies spaced at that interval -- let's say 16, 32, 64, 128, 256, 512, 1024 Hz, [...]. Because higher frequencies can be detected in a shorter amount of time, you need fewer samples (= better temporal resolution, as well as fewer operations) at higher frequencies. For the sake of it, let's use a few periods at each frequency. Pre-window these buffers, if it makes sense. Correlate each of these with a long-enough segment of the input signal at time t (i.e. the sum of the component-wise multiplication of the segment and the probe). Probably normalize this value. The result of this correlation is the magnitude of the signal at that frequency, at time t. 
So, the table of results-vs-frequencies should be your frequency spectrum at the desired logarithmic resolution (octaves, in this case). From what I can work out, this would give you superb temporal resolution, proportional to frequency (a few cycles of each frequency), at whatever frequency resolution you like, using approximately N operations times log N frequencies, so long as your frequencies are logarithmically spaced -- which is exactly what you want for music, and exactly the problem with the STFT -- thus solving the problems with the STFT (for musical signals), with equivalent computational complexity. What am I missing? Would this work at all? Are there caveats or pitfalls? Is my complexity calculation mathematically sound? It feels a bit like inventing a perpetual-motion machine -- it looks like it should work, but my gut feeling is that I'm glossing over some important detail. Answer: True, the complexity would be $O(N \log N)$. But this $\log N$ captures a different reality from the $\log N$ in the FFT algorithm. In your case it comes from the fact that you're only interested in a smaller set of logarithmically spaced frequencies. In the FFT case, it comes from the "divide and conquer", recursive structure of the Cooley–Tukey algorithm. Let's say you're interested in quarter-tone resolution, from 40 Hz to 1.28 kHz. That's 5 octaves, or 120 quarter-tones. For a 1024-long buffer, your algorithm is doing exactly 1024 x 120 complex MACs (plus some book-keeping). A naive radix-2 FFT would do 1024 x 10 complex MACs (plus some book-keeping). Your algorithm is 12 times slower. Also, your transform doesn't sound invertible, which makes it useful only for analysis applications (not resynthesis, effects...).
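The probe-phasor correlation described above (essentially a constant-Q analysis) can be sketched in pure Python. The sample rate, the 4-cycle window, and the semitone probe spacing below are illustrative choices, not prescriptions:

```python
import cmath
import math

SR = 8000  # sample rate in Hz (an arbitrary choice for the sketch)

def probe_magnitude(signal, freq, cycles=4):
    """Correlate `signal` with a complex probe phasor at `freq`,
    windowed to `cycles` periods: shorter windows (better time
    resolution) at higher frequencies, as the question proposes."""
    n = int(cycles * SR / freq)  # window length in samples
    acc = sum(x * cmath.exp(-2j * math.pi * freq * k / SR)
              for k, x in enumerate(signal[:n]))
    return abs(acc) / n  # normalized magnitude

# A 440 Hz test tone, one second long.
signal = [math.sin(2 * math.pi * 440 * k / SR) for k in range(SR)]

# Probes at semitone spacing over three octaves: 110 Hz .. 880 Hz.
freqs = [110 * 2 ** (i / 12) for i in range(37)]
mags = {f: probe_magnitude(signal, f) for f in freqs}
best = max(mags, key=mags.get)  # should land on the 440 Hz probe
```

With a rectangular window of only a few cycles, neighbouring semitone probes still pick up noticeable leakage, which is one concrete form of the uncertainty trade-off the answer alludes to.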
{ "domain": "dsp.stackexchange", "id": 2668, "tags": "fourier-transform, frequency-spectrum, dft, time-frequency, stft" }
Strongest force in nature
Question: Possible Duplicate: What does it mean to say "Gravity is the weakest of the forces"? It is said the nuclear force is the strongest force in nature. But that is not true near a black hole, where the gravitational force exceeds the nuclear force. So which is the strongest force in nature? Answer: The gravitational force exerted by a black hole is strong because the force is proportional to the mass and the black hole has a large mass. In the same way, the gravitational field of the Earth may beat a weak enough magnet - because the Earth is large and the magnet is small, and we're comparing apples and oranges. But fundamentally speaking, among particles with masses that correspond to elementary particles, gravity is the weakest force. For example, the gravitational force between two electrons is $10^{43}$ times weaker than the electrostatic one. The weak nuclear force is as strong as the electromagnetic one at very short distances - but drops exponentially at distances much longer than the W-boson wavelength. The strong nuclear force is the strongest among the four forces. It seems that the weakness of gravity relative to all other forces, when evaluated at the level of elementary particles, is a general principle that has to hold in any consistent quantum theory of gravity, see http://arxiv.org/abs/hep-th/0601001
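The $10^{43}$ electron-electron figure can be checked directly with rounded CODATA constants; the result comes out near $4\times10^{42}$, i.e. of order $10^{43}$:

```python
import math

e    = 1.602176634e-19   # elementary charge, C
m_e  = 9.1093837015e-31  # electron mass, kg
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
G    = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2

# Both forces fall off as 1/r^2, so the separation cancels in the ratio.
ratio = (e**2 / (4 * math.pi * eps0)) / (G * m_e**2)
```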
{ "domain": "physics.stackexchange", "id": 827, "tags": "forces, gravity, black-holes, physical-constants, interactions" }
Calculation of power for a pipeline crawler
Question: I am an engineering student. I am working on a project where we need to use something called a vertical pipeline crawler. It is just a device that travels inside pipelines for inspection. I found a crawler online, and I wanted to calculate the amount of power it uses. In the specifications (there are more specs than the ones below):

Speeds up to 10 m/min
Mass: 10 kg
Maximum pull: 27 kg
Power: 600 W, 115 / 230 VAC

Initially what I did was to multiply the weight it can carry by the speed: mgv = 27*9.81*(10/60) = 44.145 Watts. But the specs also say 600 W for power. So how much power does this device need? Is the input 600 W and the output 44.145 W? Isn't that very inefficient for an electrical device? Is my thought process wrong, and what should I use? (I didn't want to share the link to the product, thinking it might be against the website rules; however, it's pretty easy to find online.) Answer: Your question "how much power does this device need?" is ambiguous. If you are concerned about how much electrical power it requires, that's 600 W, and the current it requires is given by dividing this by the line voltage. The diameter of the electrical power cable is determined by this. When you multiply speed by the downward force, you are neglecting how much energy the device has to expend operating the gripping mechanism. In addition, there is the electrical power requirement of the inspection head. You need to consider all of these.
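The asker's estimate is just the mechanical power delivered to the payload; comparing it with the electrical rating gives only an apparent efficiency, since grip actuation and the inspection head are unaccounted for:

```python
g = 9.81            # m/s^2
max_pull_kg = 27    # spec: maximum pull
speed = 10 / 60     # spec: 10 m/min, converted to m/s
rated_power = 600   # W, electrical spec

mech_power = max_pull_kg * g * speed      # lifted-load power, ~44 W
apparent_eff = mech_power / rated_power   # ~7%, before other loads
```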
{ "domain": "engineering.stackexchange", "id": 1184, "tags": "mechanical-engineering, motors, power" }
Applying a Filter Several Times to Data
Question: I have data from an experiment and apply a lowpass filter (digital Butterworth filter, in Matlab). My question is: what happens if I apply the same filter, e.g., twice to the data? What changes? Is it like having double the order? The question refers to the filtfilt function in Matlab: http://www.mathworks.de/de/help/signal/ref/filtfilt.html The source [1] says on page 336: In many filtering problems, we would prefer that the phase characteristics be zero or linear. For causal filters, it is impossible to have zero phase. However, for many filtering applications, it is not necessary that the impulse response of the filter be zero for n < 0 if the processing is not to be carried out in real time. One technique commonly used in discrete-time filtering when the data to be filtered are of finite duration and are stored, for example, in computer memory is to process the data forward and then backward through the same filter. [1] Oppenheim, Alan V., Ronald W. Schafer, and John R. Buck. Discrete-Time Signal Processing. 2nd Ed. Upper Saddle River, NJ: Prentice Hall, 1999. Answer: Silly me! The Matlab help includes the answer. filtfilt gives you:

- A filter transfer function which equals the squared magnitude of the original filter transfer function
- A filter order that is double the order of the filter specified by filter coefficients $b_0$, $b_1$, $b_2$, ... and $a_1$, $a_2$, ...
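The effect of running the same causal filter twice forward is easy to verify numerically: the cascade's frequency response is the square of the single-pass response. A minimal stdlib sketch with a one-pole lowpass (the coefficient a = 0.5 is an arbitrary choice):

```python
import cmath

a = 0.5  # pole of a one-pole lowpass: y[n] = (1 - a)*x[n] + a*y[n-1]

def one_pole(x):
    y, prev = [], 0.0
    for sample in x:
        prev = (1 - a) * sample + a * prev
        y.append(prev)
    return y

# Impulse response of the same filter applied twice (a cascade).
N = 64  # the response decays like a**n, so truncation error is negligible
impulse = [1.0] + [0.0] * (N - 1)
h2 = one_pole(one_pole(impulse))

def dtft(h, w):
    """Frequency response of a finite impulse response at w rad/sample."""
    return sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h))

def H(w):
    """Analytic single-pass response of the one-pole filter."""
    return (1 - a) / (1 - a * cmath.exp(-1j * w))

w = 0.3  # an arbitrary test frequency
err = abs(dtft(h2, w) - H(w) ** 2)  # cascade response equals H squared
```

Note the distinction the Oppenheim quote is after: filtfilt runs the filter forward and then backward, giving the zero-phase response $|H(\omega)|^2$, whereas two forward passes give $H(\omega)^2$, with the same squared magnitude but doubled phase delay.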
{ "domain": "dsp.stackexchange", "id": 2118, "tags": "matlab, filters, lowpass-filter" }
What triggers programmed cell death in humans (from outside the cell)?
Question: What triggers programmed cell death in humans? Is it decided by the brain (for the entire body)? Or is it a local decision of a cell by its environment? Something else? I realize that there might be different cases, but I'd like to get a general idea of where (and why, actually) this happens. EDIT As linked to by a comment below, one type of cell death (necrosis) just "happens" to a cell, and perhaps there are other types that are decided by the cell. What I'm asking about (and trying to understand more) is the idea that cell death might be initiated externally to the cell because it would be beneficial to the whole organism, such as "the separation of fingers and toes" mentioned in Wikipedia. "Who" would initiate it? Are there examples of the CNS initiating it? Notifying the cell by nerves? By hormones? Is it initiated by neighboring cells? (And if so, what cells have the "clout" to send such signals?) What I'm trying to understand is who decides when a cell dies in those cases where it's not the cell itself. Answer: The answer is, in part, it depends. Let's think of the PI3K/AKT pathway. Akt actively phosphorylates BAD, which abrogates the Bax/Bak apoptosis pathway. RTKs at the plasma membrane activate this pathway when bound with survival factors. In the absence of survival factors, Akt would become dephosphorylated and you'd have a net movement toward apoptosis. In a particular form of cell death called anoikis, this could be as simple as detaching from the extracellular matrix. So anything that halts the binding of survival factors could play a role. In the case of an immune response, activated Tc cells can induce apoptosis by secreting pore-forming enzymes as well as enzymes that directly activate caspases. The Tc cells also express Fas ligand on their membrane, which is involved in the extrinsic apoptosis pathway. TNF (tumor necrosis factor) may also bind cell surface receptors as a sort of death factor and push towards apoptosis. 
These are just examples; some generic searches about apoptosis, necroptosis, entosis, and the myriad of other programmed death mechanisms will yield a very comprehensive overview.
{ "domain": "biology.stackexchange", "id": 3379, "tags": "human-biology, cell-biology, neuroscience, apoptosis" }
Energy operator
Question: Does the Hamiltonian always translate to the energy of a system? What about in QM? So by the Schrödinger equation, is it true then that $i\hbar{\partial\over\partial t}|\psi\rangle=H|\psi\rangle$ means that $i\hbar{\partial\over\partial t}$ is also an energy operator? How can we interpret this? Answer: I will formulate the following in such a way that the language doesn't change too much within the answer. This also emphasizes the analogies of related concepts. Classically, you have a configuration/state $\Psi$, which is characterised by coordinates $x^i,v^i$ or $q^i,p_i$ and/or any other relevant parameters. Then an energy is a function or functional of this configuration $$H:\Psi\mapsto E_\Psi,\ \ \mbox{where}\ \ E_\Psi:=H[\Psi].$$ Here $E_\Psi$ is some real (energy-)value associated with the configuration $\Psi$. To name an example: Let $q$ and $p$ be the coordinates of your two-dimensional phase space; then every point $\Psi=(q,p)$ characterises a possible configuration. The configuration/state $\Psi$ here is really just the pair of coordinates. The scalar function $H(p,q)=\frac{1}{2m}p^2+\frac{\omega}{2}q^2$ clearly is a map which assigns a scalar energy value $E_\Psi$ to every possible configuration $\Psi$. The evolution of $\Psi$ in time is determined by $H$, see Hamilton's equations. This might be viewed as the point of coming up with the Hamiltonian in the first place, and it is typically done in such a way that the energy value $E_\Psi$ will not change with time. See also this thread for a related question. What you call "energy" is pretty much determined by this criterion. In the case of a time-independent Hamiltonian (as in the example), and if the time development of observables $f$ is governed by $\frac{\mathrm{d}f}{\mathrm{d}t} = \{f, H\} + \frac{\partial f}{\partial t}$, then you have $\frac{\mathrm{d}H}{\mathrm{d}t} = \{H, H\} = 0$ and the conservation of the quantity $E_\Psi:=H[\Psi]$ is evident. 
Of course, you might want to model friction processes and whatnot, and it then might be difficult to define all the relevant quantities. In quantum mechanics, your configuration $\Psi$ is given by a state vector $|\Psi\rangle$ (or an equivalence class of such vectors) in some Hilbert space. There are many vectors in this Hilbert space, but there are some vectors $|\Psi_n\rangle$, which also span the whole vector space and which are also special in the following sense: They are eigenvectors of the Hamiltonian operator: $H|\Psi_n\rangle = E_n|\Psi_n\rangle$. Here $E_n$ is just the real eigenvalue, and I assume that I can enumerate the eigenstates by a discrete index $n$. Now for every point in time, your state vector $\Psi$ is just a linear combination of the special states $\{\Psi_n\}$. (As a remark, notice that all the time dependencies of states are left implicit in this post.) Therefore, if you know how $H$ acts on all the $\Psi_n$'s, you know how $H$ acts on any $\Psi$. Since a Hilbert space naturally comes with an inner product, i.e. a map $$\omega:(|\Psi\rangle,|\Phi\rangle)\mapsto\langle\Psi|\Phi\rangle\in\mathbb{C},\ \ \mbox{satisfying}\ \ \langle\Psi|\Psi\rangle>0\ \ \forall\ \ |\Psi\rangle\ne 0,$$ you can define a new map $$\omega_H:\Psi\mapsto E_\Psi,\ \ \mbox{where}\ \ E_\Psi:=\omega_H[\Psi],$$ with $$\omega_H[\Psi]:=\omega(|\Psi\rangle,H|\Psi\rangle)\equiv\langle\Psi| H|\Psi\rangle.$$ Compare the lines above with the classical case. Here $E_\Psi=\ ...=\langle\Psi| H|\Psi\rangle$ is then called the expectation value of the Hamiltonian in the physical state. It is the energy value associated with $\Psi$, which is real due to Hermiticity of the Hamiltonian. Also, like in the classical case, the time evolution of any state $\Psi$ (resp. state vector $|\Psi\rangle$) is determined by the observable $H$, an operator in the QM-case. 
And as stated above, exactly this $H$, together with the state/configuration $\Psi$, gives you the energy values $E_\Psi$ associated with $\Psi$. This relation of time and energy is by construction: The Schrödinger equation is an axiom (but a natural one, see conservation of probability), which relates time evolution and Hamiltonian. Now, if the time dependency of the state is governed by the Hamiltonian (whatever it might look like in your scenario), then so is the time dependency of $\langle\Psi| H|\Psi\rangle$. And if $\ i\hbar\frac{\partial}{\partial t}|\Psi\rangle=H|\Psi\rangle\ $ is true for all vectors in your Hilbert space, i.e. if $i\hbar\frac{\partial}{\partial t}=H$ holds as an operator equation, then these two really are just the same operator. If you ask for an interpretation for this, then I'd suggest you hold on to the quantum mechanical relation between frequency and energy. Regarding the equation which determines time evolution, quantum mechanics is much easier than classical mechanics in a sense, especially if you come with some Lie group theory intuition in your backpack.
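The finite-dimensional story above is easy to check numerically. A minimal sketch (mine, not the answerer's; the 2×2 Hamiltonian and coefficients are invented) showing that $\langle\Psi|H|\Psi\rangle$ reduces to the eigenvalue-weighted sum $\sum_n |c_n|^2 E_n$:

```python
def matvec(H, v):
    # apply the operator H (a matrix here) to a state vector v
    return [sum(H[i][j] * v[j] for j in range(len(v))) for i in range(len(H))]

def inner(u, v):
    # Hilbert-space inner product <u|v>, conjugating the left argument
    return sum(u[i].conjugate() * v[i] for i in range(len(u)))

# H already diagonal in the basis {|Psi_0>, |Psi_1>}: eigenvalues E_0 = 1, E_1 = 3
H = [[1.0, 0.0], [0.0, 3.0]]

# |Psi> = c_0 |Psi_0> + c_1 |Psi_1> with |c_0|^2 = |c_1|^2 = 1/2
c = [complex(2 ** -0.5), complex(2 ** -0.5)]

E = inner(c, matvec(H, c)).real
print(E)  # 0.5 * 1 + 0.5 * 3 = 2.0
```

The expectation value is real, as the answer notes, because the matrix playing the role of $H$ is Hermitian.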
{ "domain": "physics.stackexchange", "id": 1730, "tags": "quantum-mechanics, energy, schroedinger-equation, hamiltonian" }
Center Point In Circular Motion
Question: Suppose a point initially located at (x,y) moves to (x',y') in a circular motion with angular velocity $w$. Then, the center of this circular motion (x*,y*) can be found by the following: where $\theta = w \Delta t$. I really do not understand how this relation holds. Do you have any ideas? Answer: This isn't a complete answer, but it may help show the formula is reasonable. First there is a line segment connecting $(x,y)$ and $(x^{'},y^{'})$. The center of the circle is somewhere on the line that bisects that segment and is perpendicular to it. The center of the segment is $$(x_0,y_0) = \left(\frac{x+x^{'}}{2},\frac{y+y^{'}}{2}\right)$$ The vector from $(x,y)$ to $(x^{'},y^{'})$ is $$\left(x^{'}-x, y^{'} - y\right)$$ A vector perpendicular to it is $$(X,Y) = (y^{'} - y, x-x^{'})$$ (you can check with a dot product that these two vectors are perpendicular). So the line containing the center of the circle is the set of points $$(x_0,y_0) + a (X,Y)$$ where $a$ is a real number. So you need to find $a_0$, the value of $a$ that matches the center of the circle. $$(x^{*},y^{*}) = \left(\frac{x+x^{'}}{2},\frac{y+y^{'}}{2}\right) + a_0 (y^{'} - y, x-x^{'})$$ To do that, you might think about lines from the center of the circle that pass through $(x,y)$ and $(x^{'},y^{'})$.
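The answer stops short of solving for $a_0$, but the centre can also be recovered directly from the rotation constraint $R(\theta)(p-c)=p'-c$, i.e. $(R-I)c = Rp - p'$. A sketch (function names and test numbers are mine, not from the question):

```python
import math

def circle_center(p, p2, theta):
    # rotating (p - c) by theta must give (p2 - c)  =>  (R - I) c = R p - p2
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    a, b = cos_t - 1.0, -sin_t            # first row of (R - I)
    c_, d = sin_t, cos_t - 1.0            # second row of (R - I)
    rx = cos_t * p[0] - sin_t * p[1] - p2[0]   # (R p - p2)_x
    ry = sin_t * p[0] + cos_t * p[1] - p2[1]   # (R p - p2)_y
    det = a * d - b * c_                  # = 2 (1 - cos theta), nonzero for theta != 0
    return ((d * rx - b * ry) / det, (a * ry - c_ * rx) / det)

# check: rotate (2, 1) by 0.7 rad about the known centre (5, -3)
theta, cx, cy, px, py = 0.7, 5.0, -3.0, 2.0, 1.0
qx = cx + math.cos(theta) * (px - cx) - math.sin(theta) * (py - cy)
qy = cy + math.sin(theta) * (px - cx) + math.cos(theta) * (py - cy)
center = circle_center((px, py), (qx, qy), theta)
print(center)  # ~ (5.0, -3.0)
```

Note the system degenerates as $\theta \to 0$ (the determinant $2(1-\cos\theta)$ vanishes), which matches the geometry: a single tiny step barely constrains the centre.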
{ "domain": "physics.stackexchange", "id": 73302, "tags": "homework-and-exercises, kinematics, geometry" }
Issue when using rosbag to record concurrent messages from multiple devices to a single topic
Question: Hi All, I've been struggling with this issue ever since upgrading our kinetic/indigo systems. Brief outline of our software/hardware: 7 laptops (3 on indigo and 4 on kinetic), 3 turtlebot2 platforms and 1 workstation (on indigo). Prior to Dec 2017 everything was working okay and ROS-indigo was used system-wide. However, since Jan 2018 we have been doing a lot of software updates and changes (updated 4 laptops to kinetic). The experiment: we have 3 robots and the workstation PC connected via multimaster-fkie (all on the same network and interacting). We send a bunch of messages from the workstation and the robots perform a bunch of tasks. This all works fine, so I doubt it is a multimaster-fkie problem. We run the following command on the workstation PC rosbag record -j -o Tasks.yaml_ /experiment /tasks/announce /tasks/award /tasks/status /tasks/new /debug /robot_1/amcl_pose /robot_2/amcl_pose /robot_3/amcl_pose The /tasks/status topic is the most crucial as it records (time stamp) when the robots perform any action; furthermore, all the devices subscribe to this topic. During an experiment, if I use rostopic echo /tasks/status (from the workstation) I see all the messages that I should be getting from all the robots. However, when I later play back the rosbag file, the messages sent to /tasks/status by one or more robots are entirely missing from this topic. When I check the rosbag for other topics there exist messages from all robots (e.g. amcl messages from robot 1, 2 etc.). I made a simple script that counts the number of messages from different topics in a rosbag file and prints it in the terminal. Below is the result from one bag file recording - tasks/status/robot1: 0 tasks/status/robot2: 5 experiment: 8 robot1_amcl: 27 robot2_amcl: 43 I hope the names make it easy enough to understand which topic is being referred to, but as can be seen tasks/status contains 0 messages for robot1. Could anyone please let me know their thoughts? 
Any further information I can supply to help with explaining the problem do let me know! Please note: The problem with rosbag recording occurs regardless whether only using indigo machines, kinetic machines or a mix of both. Thank you, Tiz Originally posted by Tiz on ROS Answers with karma: 13 on 2018-04-10 Post score: 0 Original comments Comment by jayess on 2018-04-10: Welcome! Is this any different from your other question #q288187? Comment by Tiz on 2018-04-11: Hi, there was an issue when I created my account and I thought the original question didn't post. But I deleted it now (hopefully). Thanks for notifying me! Answer: Hi All, I believe I figured out the issue I had with the binary (and API) rosbag record. I realised from here ROS-rosbag-API that the binary and API are not thread-safe. But I had the topic tasks/status which was concurrently being updated by the robot team and workstation. I wrote a python script using the python API, which "proved" to be thread-safe. I have already done tests using my script for recording and it works. Below is a code snippet example: # callback function that handles writing to the bag file # makes use of 'mutex = Lock()' to acquire and release when the file is being written to def tasks_cb(task_msg): mutex.acquire() rec_bag.write('/tasks/status',task_msg) mutex.release() # subscribe to the topic and call tasks_cb tasks_stat = rospy.Subscriber('/tasks/status', mrta.msg.TaskStatus, tasks_cb) Originally posted by Tiz with karma: 13 on 2018-04-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2018-04-24: Just trying to understand this: are you using rosbag (ie: the command line tool), or a custom script that makes use of the rosbag infrastructure (ie: the Python/C++ libraries)? Concurrent publishing to the same topic should not matter for rosbag (the tool). Comment by Tiz on 2018-04-24: The rosbag binary as I mention above is the command line tool. 
You're right that concurrent publishing to the same topic SHOULDN'T matter for rosbag. But it does and it did for my experiments. If you wish to replicate it I suggest using +3 machines connected via FKIE publishing to the same topic. Comment by gvdhoorn on 2018-04-24: Seeing as you report this same setup as working under Indigo this might be a regression. It would be good to report it over at ros/ros_comm/issues. Comment by Tiz on 2018-04-24: Thanks for the advice. I was considering this, but I wasn't sure it is a significant issue. I might report it later when I have time to sit down and explain the problem in a more concise way.
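The lock-around-the-writer idea in the accepted answer is independent of ROS. A ROS-free sketch (the class, topic string, and thread names here are all invented) showing that funnelling writes from several concurrent "callbacks" through one lock loses no messages:

```python
import threading

class SafeBag:
    """Stand-in for a rosbag.Bag whose write() is guarded by a Lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self.records = []

    def write(self, topic, msg):
        with self._lock:            # serialize all writers, as the answer does
            self.records.append((topic, msg))

bag = SafeBag()

def robot(name, n):
    # each "robot" publishes n messages to the same topic
    for i in range(n):
        bag.write('/tasks/status', '%s:%d' % (name, i))

threads = [threading.Thread(target=robot, args=('robot%d' % k, 100)) for k in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(bag.records))  # 300 -- no messages lost
```

The same pattern is what the answer's `mutex.acquire()` / `mutex.release()` around `rec_bag.write(...)` achieves; `with lock:` is just the tidier form.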
{ "domain": "robotics.stackexchange", "id": 30608, "tags": "ros, ros-kinetic, rosbag-record, multi-robot, ros-indigo" }
Thermal expansion stress and strain
Question: I'm having trouble with this question: https://i.stack.imgur.com/hhnot.png and this is the work I've done so far: https://i.stack.imgur.com/RA6xa.jpg Am I going along the right lines? It seems it's a three-variable simultaneous equation to solve for the stress, but that seems too long for the 15 marks, so am I missing something? Answer: I don't think you need to solve a simultaneous system of equations. You can do this serially. Instead of the process stated, imagine that first the tubes were heated, expanding to three different lengths, and then they were stretched/compressed as much as needed to bring them into alignment. The length of each component after heating is $L_n=L_0+L_0\alpha_n\Delta T$. When you bring all the components into alignment (at length $L$), there is no net force, so you have $\sum A_nE_n(L_n-L)/L_n=0$. This is a single equation in a single unknown ($L$). Once you have solved for $L$ you can find the stress in each component easily enough.
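The serial approach can be checked with numbers. A sketch with invented material properties (the question's actual values sit behind the linked images, so these are placeholders); since $\sum A_nE_n(L_n-L)/L_n=0$ is linear in $L$, it solves in one line:

```python
# Step 1: free thermal expansion L_n = L0 (1 + alpha_n * dT).
# Step 2: force balance sum A_n E_n (L_n - L)/L_n = 0, which rearranges to
#         L = sum(A_n E_n) / sum(A_n E_n / L_n).
L0, dT = 1.0, 100.0
parts = [  # (area m^2, Young's modulus Pa, expansion coeff 1/K) -- made-up values
    (1e-4, 200e9, 12e-6),    # "steel"
    (2e-4, 70e9, 23e-6),     # "aluminium"
    (1.5e-4, 110e9, 17e-6),  # "copper"
]
Ln = [L0 * (1 + a * dT) for (_, _, a) in parts]
L = sum(A * E for (A, E, _) in parts) / sum(A * E / l for (A, E, _), l in zip(parts, Ln))

# stress in each component after pulling it from L_n to the common length L
stresses = [E * (L - l) / l for (_, E, _), l in zip(parts, Ln)]
net = sum(A * s for (A, _, _), s in zip(parts, stresses))
print(L, stresses, net)  # net force ~ 0 by construction
```

The common length lands between the smallest and largest free expansions, and components that expanded more than $L$ end up in compression (negative stress), the others in tension.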
{ "domain": "physics.stackexchange", "id": 45883, "tags": "homework-and-exercises, stress-strain" }
Implementation of C Standard Library Function ntohl()
Question: This is an implementation of ntohl() that I wrote as an exercise. ntohl() takes a uint32_t value and returns it unchanged if the host architecture is network-byte-order (big-endian), otherwise the value is converted to host-byte-order. My version converts to little-endian; is it always the case that host-byte-order is taken to mean little-endian? This appears to be the case, from what I have read, but what if the host architecture is middle-endian? Do real implementations of ntohl() detect other byte-orders, or strictly big- and little-endian? I am also interested in any comments about the use of a union to detect endianness on the host machine, suggestions and comparison with other methods, and similarly, comments and suggestions relating to the use of bitwise operators to perform the conversion from big-endian to little-endian. #ifndef _STDINT_H #include <stdint.h> #endif uint32_t my_ntohl(uint32_t netlong) { union { uint16_t num; uint8_t bytes[2]; } endian_test = { .bytes = { 0x01, 0x00 }}; if (endian_test.num == 0x0001) { netlong = (netlong << 24) | ((netlong & 0xFF00ul) << 8) | ((netlong & 0xFF0000ul) >> 8) | (netlong >> 24); } return netlong; } Answer: Unless you wish to optimize the code, with specialized swappers for various host byte orders, you are doing it wrong. I invite you to check The Byte Order Fallacy by Rob Pike. The punch line: the byte order of the computer you are executing the code on doesn't matter, because the language abstracts it for you. Thus, only the byte order of the network matters, and the network is big-endian: #include <stdint.h> #include <string.h> uint32_t ntohl(uint32_t const net) { uint8_t data[4] = {0}; memcpy(&data, &net, sizeof(data)); return ((uint32_t) data[3] << 0) | ((uint32_t) data[2] << 8) | ((uint32_t) data[1] << 16) | ((uint32_t) data[0] << 24); } This function will work no matter the endianness of the host, even on crazy middle-endian ones. 
Oh, and it optimizes well in general, in case you were wondering: ntohl(unsigned int): mov eax, edi bswap eax ret bswap being the native CPU instruction to swap bytes on x86.
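As a quick cross-check of the answer's byte indexing, here is a sketch in Python (mine, not part of the original answer) that decodes a network-order byte string the same way and compares against the struct module's own big-endian format:

```python
import struct

def my_ntohl(data):
    # reassemble a 32-bit value from its big-endian (network order) bytes;
    # no host-endianness test needed, exactly as the answer argues
    return (data[3] << 0) | (data[2] << 8) | (data[1] << 16) | (data[0] << 24)

net = struct.pack('!I', 0x12345678)   # '!' = network (big-endian) byte order
print(hex(my_ntohl(net)))             # 0x12345678 on any host
```

Whatever machine runs this, `my_ntohl(net)` agrees with `struct.unpack('!I', net)[0]`, which is the point: the conversion is defined by the wire format, not by the host.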
{ "domain": "codereview.stackexchange", "id": 23389, "tags": "c, reinventing-the-wheel, integer, library, bitwise" }
I have a decision problem with $2^n$ bit sized certificates, how would I verify my decision problem efficiently if it is in $NP$?
Question: Decision Problem: Is $2^k$ + $M$ a prime? The inputs for both $K$ and $M$ are integers only. The solution is the sum of $2^k$+$M$. (Use AKS to decide prime) The powers of 2 have approximately $2^n$ digits. Consider $2^k$ where $K$ = 100000. Compare the number of digits in $K$ to the number of digits in its solution! Question Seeing that the decision problem's certificate can be $2^n$ sized, how would I verify the decision problem in polynomial time, considering that I can just look at the transition states as a certificate in itself? In other words, what would a polynomial time verifier look like for this decision problem? Answer: A decision problem has a yes/no answer, so it can't have "exponential size". You are asking about search problems; those can certainly have exponential size. And yes, if the size of the solution (written down in some suitably compact format, that is) is exponential in the size of the original problem, it is clearly impossible to even write down the answer in polynomial time. In any case, P and NP strictly apply only to decision problems. But take a look at Bellare's "Decision vs Search" for a relation between both.
{ "domain": "cs.stackexchange", "id": 16638, "tags": "decision-problem" }
Why we use transposed filter as the deconvolution operation instead of the pseudo inverse of filter?
Question: I am trying to visualize a CNN by the method in this paper "Visualizing and Understanding Convolutional Networks" According to this tutorial A guide to convolution arithmetic for deep learning (P.19) we can always rewrite convolution as a matrix multiplication. For example, $Y$ is a feature map, $X$ is the input, $W$ is a filter, and $\otimes$ is the convolution operation $$ \begin{bmatrix} y_{11} & y_{12} \\\\ y_{21} & y_{22} \end{bmatrix} = \begin{bmatrix} x_{11} & x_{12} & x_{13} & x_{14}\\\\ x_{21} & x_{22} & x_{23} & x_{24}\\\\ x_{31} & x_{32} & x_{33} & x_{34}\\\\ x_{41} & x_{42} & x_{43} & x_{44} \end{bmatrix} \otimes \begin{bmatrix} w_{11} & w_{12} & w_{13} \\\\ w_{21} & w_{22} & w_{23} \\\\ w_{31} & w_{32} & w_{33} \end{bmatrix} $$ In the matrix multiplication form $$ \begin{bmatrix} y_{11} \\\\ y_{12} \\\\ y_{21} \\\\ y_{22} \end{bmatrix} = C \begin{bmatrix} x_{11}\\\\ x_{12}\\\\ \vdots \\\\ x_{44} \end{bmatrix} $$ $$ C= \left[\begin{smallmatrix} w_{11} & w_{12} & w_{13} & 0 & w_{21} & w_{22} & w_{23} & 0 & w_{31} & w_{32} & w_{33} & 0 & 0 & 0 & 0 & 0\\\\ 0 & w_{11} & w_{12} & w_{13} & 0 & w_{21} & w_{22} & w_{23} & 0 & w_{31} & w_{32} & w_{33} & 0 & 0 & 0 & 0\\\\ 0 & 0 & 0 & 0 & w_{11} & w_{12} & w_{13} & 0 & w_{21} & w_{22} & w_{23} & 0 & w_{31} & w_{32} & w_{33} & 0\\\\ 0 & 0 & 0 & 0 & 0 & w_{11} & w_{12} & w_{13} & 0 & w_{21} & w_{22} & w_{23} & 0 & w_{31} & w_{32} & w_{33}\\\\ \end{smallmatrix} \right] $$ So we can visualize the filter by multiplying by $C^{-1}$. OK, my question is why we use $C^{T}$ instead of the pseudo-inverse $(C^{T}C)^{-1}C^{T}$? Answer: First, transposed convolution isn't the inverse operation of convolution. It only takes the output shape of the original convolution as its input shape and takes the input shape of the original convolution as its output shape. Second, transposed convolution can be seen as a learnable upsampling operation. It works just like bilinear or bicubic upsampling, but all of the parameters are learned by gradient descent. 
https://www.youtube.com/watch?v=nDPWywWRIRo&index=12&list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv&t=1632s
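The $C$ construction in the question is easy to reproduce for the 4×4 input / 3×3 filter case. A sketch (filter and input values invented) that builds $C$, confirms it matches direct sliding-window convolution, and shows that $C^T$ only restores the input *shape* (a length-16 vector), not the input values, which is the answer's point:

```python
W = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]                  # 3x3 filter (made up)
X = [[i * 4 + j + 1 for j in range(4)] for i in range(4)]  # 4x4 input 1..16

# direct sliding-window (cross-correlation) convolution, stride 1, "valid"
direct = [sum(W[a][b] * X[i + a][j + b] for a in range(3) for b in range(3))
          for i in range(2) for j in range(2)]

# build the 4x16 matrix C: one row per output pixel, one column per input pixel
C = [[0] * 16 for _ in range(4)]
for r, (i, j) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    for a in range(3):
        for b in range(3):
            C[r][(i + a) * 4 + (j + b)] = W[a][b]

x = [X[i][j] for i in range(4) for j in range(4)]       # vec(X), row-major
via_matrix = [sum(C[r][k] * x[k] for k in range(16)) for r in range(4)]
print(direct == via_matrix)  # True: C @ vec(X) reproduces the convolution

# transposed convolution: C^T maps the length-4 feature map back to a
# length-16 (input-shaped) vector -- shape is restored, values are not
y = via_matrix
upsampled = [sum(C[r][k] * y[r] for r in range(4)) for k in range(16)]
print(len(upsampled))  # 16
```

A pseudo-inverse of $C$ would try to invert the values too, but since $C$ is 4×16 the convolution discards information, so in a network one settles for the shape-restoring $C^T$ and lets training pick its weights.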
{ "domain": "datascience.stackexchange", "id": 3722, "tags": "deep-learning, visualization" }
Will a lighter car have a higher top speed than a heavier car with an equal power engine?
Question: If I have a car (with a particular engine) optimized (shape & weight distribution-wise) for attaining the top speeds possible, and I put that engine into a car which is heavier (but otherwise the same shape & design), will the heavier car have the same top speed in the real world? I'm guessing that the heavier car will accelerate at a slower rate, but I am not sure whether it would eventually hit the same top speed as the lighter car. Factors such as air resistance & the way racing cars are designed to hug the ground (as I understand it) might cause them to not have the same max speed? If they don't have the same top speed, would it be possible to re-design the heavier car (i.e. change its shape and weight distribution) so that it has the same max speed as (or a higher max speed than) the lighter car? My thinking is that if the heavier car doesn't need to use the air resistance to "hug the ground" then it might be able to be designed more aerodynamically? Update 1 Okay, $F_{ground}$ increases with $m$, which decreases $|v|_{max}$. That makes sense. But could the heavier car go as fast or faster with a different design? Here's my reasoning: Speed increases while the car's $F_{engine}$ is greater than friction's $F_{ground} + F_{air}$. $F_{air}$ increases as $|v|$ increases. "Upside down wings" are used to provide extra $F_{downwards}$ (lets call it $F_{wings}$). Having too little $F_{downwards}$ decreases $F_{engine}$. Bigger 'wings' in #3 increases $F_{wings}$ but also increases $F_{air}$ $F_{downwards} = F_{gravity} + F_{wings}$ Based on this logic, a lighter car will need bigger 'wings' (#6) to maintain traction (#3) in order to maintain speed (#4), but increasing $F_{wings}$ increases $F_{air}$ by #5, which decreases $|v|_{max}$ (#1 + #2). However, as $m$ increases, $F_{gravity}$ increases, therefore less $F_{wings}$ is needed (#6), and therefore less $F_{air}$ is experienced. 
So we have: the heavier car would have greater $F_{ground}$, which decreases $|v|_{max}$ by a constant amount; the lighter car would have greater $F_{air}$, which increases as $|v|$ increases. So following this reasoning, wouldn't it be possible to build a heavier car which has greater $|v|_{max}$ than a lighter car? Update 2 Clarification: #4 is supposed to mean "when there's too little force pushing the car down, the wheels will slip, which reduces the amount of force the engine can provide". Is that correct? Answer: The problem you've formulated is that these two cars are identical aside from the mass difference, so let's just limit this to two identical cars where one has an added weight in it. The heavier car will accelerate slower, based on simple $F=ma$, where $F$ is the same, so $a$ must be smaller for a larger $m$. Friction, which determines the max speed along with the force, is a little bit messier. The air resistance is basically only affected by the shape of the car, so it will be completely unchanged. The heavier car, however, will have greater ground friction from the contact between the wheels and road, and because of that will have a slower max speed. One way you can convince yourself that the tire friction depends on the weight of the car is to just consider that the tire deforms and creates heat, and greater mass will cause it to deform more every turn. For super fast cars tire friction is actually extremely significant. Update Due to increased formalism by the question, I can offer a little more detail. Here is a really basic force diagram I made for this. It's not perfect, but I think it's sufficient. And by the logic of the question, $F_{drag}$ is broken up into 2 parts where one is from the body and one is from the wing. Furthermore, the wing aerodynamic forces are really the wing-specific drag plus $F_{wing}$. So, we were talking about increasing the weight of the car. 
That increases $F_g$ and $F_{ground}$ because in real life the wheel friction force has a significant dependence on the weight. The question mentioned the need for the wing in order to maintain traction. So, traction is a tricky point, because it is related to the weight. $$F_g=M g$$ $$traction = \frac{M g+F_{wing}}{M g}$$ It is my belief that if a given turning radius without slipping was needed for a race car, this quantity is what would need to be kept constant. I think this gets to a little of what the question wanted, which is that if the mass, $M$, changed, some redesign would be necessary to maintain the same traction (see the above equation).
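The trade-off in the answer (ground friction grows with mass, aerodynamic drag does not) can be made concrete with toy numbers. A sketch where every value is invented and, as a crude simplification, the engine force is taken as constant in speed:

```python
import math

def v_max(mass, F_engine=4000.0, c_rr=0.015, rho=1.2, CdA=0.8, g=9.81):
    # top speed where engine force balances rolling friction plus drag:
    #   F_engine = c_rr * m * g + 0.5 * rho * CdA * v^2
    F_roll = c_rr * mass * g              # ground friction ~ weight
    return math.sqrt(2 * (F_engine - F_roll) / (rho * CdA))

light, heavy = v_max(1000), v_max(1500)
print(light, heavy, light > heavy)  # the heavier car has the lower top speed
```

Because the mass term enters only through the (comparatively small) rolling-resistance force while drag grows as $v^2$, the top-speed penalty for extra mass is real but modest, which matches the answer's "constant amount" versus "increases with $|v|$" framing.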
{ "domain": "physics.stackexchange", "id": 46431, "tags": "fluid-dynamics, forces, aerodynamics" }
Showing form on btn click - preventDefault of submit btn, then remove listener
Question: I've built out a section on a page that contains multiple instances (why I'm using querySelectorAll()) of this Request Brochure form. They are using Campaign Monitor so much of the form code has been removed for this post. Operation On page load an event handler that calls preventDefault on click events is added to elements with class download to prevent the submit action. A class of .is-open is added to the .download container. This triggers a keyframe animation that transitions the height, then the opacity of the forms. The event listener is removed to allow the button to perform its submit action. I did try using the method of setting the button to disabled initially, then removing this attribute onclick, but found this meant the hover states would not work. I'm looking to see if this is the most efficient way of adding, then removing preventDefault once it is no longer required. const packageCMCont = document.querySelectorAll('.download'); packageCMCont.forEach(function(item) { function handleClick(e) { e.preventDefault(); var btn = item.querySelector('.btn'); item.classList.add('is-open'); btn.classList.remove('is-style-outline'); btn.classList.add('is-style-default'); item.removeEventListener('click', handleClick); } item.addEventListener('click', handleClick); }) .download { width: 100%; max-width: 600px; margin: 0 auto; } .download.is-open .inputs { display: flex; height: 100px; animation: showForms 3s forwards; } .inputs { opacity: 0; display: none; flex-direction: column; height: 0; overflow: hidden; } input { height: 50px; } button { height: 50px; width: 100%; background-color: green; color: white; } button:hover { background-color: white; color: green; } @keyframes showForms { 0% { height: 0; } 50% { height: 100px; opacity: 0; } 100% { height: 100px; opacity: 1; } } <div class="download"> <form> <div class="inputs"> <input aria-label="Name" id="fieldName" maxlength="200" name="cm-name" placeholder="Name"> <input autocomplete="Email" 
aria-label="Email" id="fieldEmail" maxlength="200" required="" type="email" placeholder="Email"> </div> <div class="button-cont"> <button class="btn" type="submit">Request Brochure</button> </div> </form> </div> Answer: Event delegation can be used to improve efficiency. Instead of adding a click handler to every single element with the class download, an event handler could be added to the entire document or a sub-element that contains all elements with class download. document.addEventListener('click', e => { let node = e.target; do { if (node.classList.contains('download') && !node.classList.contains('is-open')) { e.preventDefault(); const btn = node.querySelector('.btn'); btn.classList.remove('is-style-outline'); btn.classList.add('is-style-default'); node.classList.add('is-open'); break; } node = node.parentNode; } while (node && node.classList !== undefined); }); In the example above, the click handler inspects the target element to see if it or a parent node has the class download and if such an element doesn't have class is-open before modifying class names for the elements. I noticed that I wasn't able to click on the div element with class download without clicking the button, despite the container element having space on the sides of the button, and in the original code the click event is bubbled up from the button to the div. Instead of needing to use the do while loop to go up the DOM chain, the click handler could check to see if the target element has class name btn and in that case if it has an ancestor up three levels that matches the specified class names then modify the class names. 
document.addEventListener('click', e => { const node = e.target; if (node.classList.contains('btn')) { if (node.parentNode && node.parentNode.parentNode && node.parentNode.parentNode.parentNode) { const ancestorDiv = node.parentNode.parentNode.parentNode; if (ancestorDiv.classList && ancestorDiv.classList.contains('download') && !ancestorDiv.classList.contains('is-open')) { e.preventDefault(); node.classList.remove('is-style-outline'); node.classList.add('is-style-default'); ancestorDiv.classList.add('is-open'); } } } }); Because the code above contains multiple 'if' statements that lead to multiple indentation levels, the logic could be switched to return early instead: document.addEventListener('click', e => { const node = e.target; if (!node.classList.contains('btn')) { return; } if (!(node.parentNode && node.parentNode.parentNode && node.parentNode.parentNode.parentNode)) { return; } const ancestorDiv = node.parentNode.parentNode.parentNode; if (ancestorDiv.classList && ancestorDiv.classList.contains('download') && !ancestorDiv.classList.contains('is-open')) { e.preventDefault(); node.classList.remove('is-style-outline'); node.classList.add('is-style-default'); ancestorDiv.classList.add('is-open'); } }); I presume the classes is-style-outline and is-style-default are styled by a library (e.g. wordpress) but the styles could be incorporated based on whether the container div (i.e. with class download) contains class is-open, in the same way the inputs are displayed depending on those class names. Indentation in this code is fairly uniform, though one CSS ruleset (i.e. .download.is-open .inputs) contains rules indented by four spaces instead of two
{ "domain": "codereview.stackexchange", "id": 37814, "tags": "javascript, html, css, ecmascript-6, event-handling" }
Incorrect links on the wiki give 404 errors
Question: Looks like some of the links on the wiki are obsolete. For instance: http://wiki.ros.org/imu_drivers links to http://docs.ros.org/api/sensor_msgs/html/msg/Imu.html but should link to http://docs.ros.org/fuerte/api/sensor_msgs/html/msg/Imu.html instead. It appears to be the case with all the API links on that page at least: http://docs.ros.org/api/std_msgs/html/msg/Bool.html is also a 404. I haven't checked the other wiki pages yet. Originally posted by FranciscoD on ROS Answers with karma: 128 on 2013-09-02 Post score: 0 Answer: I think you got unlucky when the job was updating. The links are fine now. Originally posted by tfoote with karma: 58457 on 2014-01-27 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 15406, "tags": "ros, wiki" }
Matlab filter design,through group delay
Question: I have the measurements of a filter’s group delay and S-parameters. The S-parameters are of the following form, presented in a touchstone file. !Agilent Technologies,N5242A,MY49421489,A.09.33.09 !Agilent N5242A: A.09.33.09 Date: Thursday, March 22, 2012 19:50:06 !Correction: S11(Full 2 Port(1,2)) !S21(Full 2 Port(1,2)) !S12(Full 2 Port(1,2)) !S22(Full 2 Port(1,2)) !S2P File: Measurements: S11, S21, S12, S22: Hz S dB R 50 1450000000 -0.44925556 132.79056 -43.664959 42.970737 -43.609634 45.291161 -0.41757283 131.60133 The filter’s group delay data are like those demonstrated below: !CSV A.01.01 !Date: Thursday March 22 !Source: Standard BEGIN CH1_DATA Freq(Hz) S21 Delay(s) 1.45E+09 -2.02E-08 1.45E+09 -1.32E-08 1.45E+09 -1.77E-08 1.45E+09 -1.70E-08 1.45E+09 -1.56E-08 1.45E+09 -1.36E-08 1.45E+09 -1.20E-08 What I want to do is simulate the impact that the group delay will have on a particular waveform using MATLAB. I tried to use fdesign.arbgrpdelay in order to insert my group delay data and somehow observe how that would impact a waveform, but I am getting the following errors: Error using fdesign.abstracttype/superdesign (line 96) Design options must be specified as a structure or as parameter-value pairs. Error in fdesign.abstracttype/design (line 11) varargout{1} = superdesign(this, varargin{:}); Error in allpassfilterarbitrarygrpdly (line 413) Hgd = design(hgd,'iirlpnorm','Weights','MaxPoleRadius',0.95); I also tried to use fdatool but I couldn't find a way of designing a filter by changing its group delay. The group delay was flat in all the available designs. Does anyone know how I can use the group delay measurements in MATLAB to simulate a filter through its group delay? 
Answer: The solution, in the end, was to match the lengths of the input vectors passed to MATLAB's group-delay simulation functions. That means the frequency and group-delay vectors given as input should be short, not as long as the raw measurement data I had been supplying, and they must have exactly the same length (which makes sense). The trickiest part was finding a length that would fit and let the simulation run.
{ "domain": "dsp.stackexchange", "id": 1513, "tags": "matlab, filter-design" }
Pumping Lemma Applied to 3 Variables
Question: Prove That the Language $L_1 = \{0^i1^j0^k | i < j\ or\ i > k\}$ is not regular using the pumping lemma. I am not sure how to begin with this I ended up using the string: $0^p1^{p+1}0^{p+1}$ S = 00001111100000 It holds for the conditions because the first set of 0s is less than the set of 1s, and the last set of zeros doesn't matter because I only have to meet one condition I Divided the string as follows, {000,0,1111100000} in the case: $xy^2z$ = 000001111100000 is not part of the language because i = j = k so neither condition is met. Please tell me if I did this right and give me some pointers because I have not found a single source that shows how to deal with 3 variables with an or statement. Answer: You seem to misunderstand a bit how the Pumping lemma works. You're not free to choose the value of $p$ or how to divide the string, but need to show that there's no factorization for some $w \in L$ with $|w| \geq p$ and $p$ arbitrary that would satisfy the pumping lemma. The trick here is to find values of $i, j, k$ such that after pumping $j \leq i \leq k$. We know that the pumped substring $y$ must have length $1 \leq |y| \leq p$ for the pumping length $p$, and can use this to clamp the value of $i$ between sufficient $j, k$. Assume $w = 0^p1^{p + 1}0^{2p} \in L$ such that $|w| \geq p$ for the pumping length $p \geq 1$. Then, if $L$ was regular, there must be $w = xyz$ with $|y| \geq 1$, $|xy| \leq p$, and $xy^iz \in L$ for all $i \in \mathbb{N}_0$. Now we must use only 1.-3. to derive a contradiction. Notice that from 2. it follows that $y \in 0^+$, and $$xy^2z = 0^{p + |y|}1^{p + 1}0^{2p}.$$ But then by 1. it follows that $p + |y| \geq p + 1$ and by 2. that $p + |y| \leq p + |xy| \leq 2p$. Therefore $xy^2z \notin L$ contradicting 3. ⭍
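The case analysis in the answer can be brute-forced for a small pumping length. A sanity-check script (mine, not part of the proof; it only tries pumps up to $i=3$) confirming that every admissible split of $w = 0^p1^{p+1}0^{2p}$ has some pump that leaves the language:

```python
import re

def in_L(s):
    # membership in L = {0^i 1^j 0^k : i < j or i > k}
    m = re.fullmatch(r'(0*)(1*)(0*)', s)
    if not m:
        return False
    i, j, k = (len(g) for g in m.groups())
    return i < j or i > k

def pumps_out(w, p):
    # does every split x y z with |xy| <= p, |y| >= 1 have a pump leaving L?
    for ly in range(1, p + 1):
        for lx in range(0, p - ly + 1):
            x, y, z = w[:lx], w[lx:lx + ly], w[lx + ly:]
            if all(in_L(x + y * i + z) for i in (0, 1, 2, 3)):
                return False   # this split survives all tried pumps
    return True

p = 4
w = '0' * p + '1' * (p + 1) + '0' * (2 * p)
print(in_L(w), pumps_out(w, p))  # True True
```

As in the answer, $i=2$ is the pump that always escapes: $y=0^{\ell}$ with $1 \le \ell \le p$, so the pumped word has $p+\ell$ leading zeros, which is neither $< p+1$ nor $> 2p$.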
{ "domain": "cs.stackexchange", "id": 21826, "tags": "finite-automata, pumping-lemma" }
Calculation of the money to be paid back to the employer
Question: I'm a beginner developer and I wrote a simple program. How could the code be improved to meet development best practices? I mean: are the names of classes and variables self-explanatory enough for others, and is it necessary to add setters and getters to the classes at this point? Any help to improve the code would be appreciated. Brief code explanation An employee signs a contract with an employer stating that he cannot terminate it before working there for 18 months. Otherwise he is obliged to pay back a certain amount of money, which decreases with each month worked. UPDATE Assuming a user of the program started a job on 1st April 2017, he enters in the Main class the date of starting a potential new job. The CalculateAmountOfMonthToGo class determines how many months lie between the entered date and the date after which he no longer needs to pay back any money. Then the class CalculateAmountOfMoneyToBePaid calculates how much he needs to repay. Later a kind of summary will be displayed with the dates of starting a new job and the amount of money to be repaid at each of them. 
Main.java package changeWork; import java.text.DecimalFormat; public class Main { public static void main(String[] args) { CalculateAmountOfMonthToGo monthsToGo = new CalculateAmountOfMonthToGo("01-10-2017"); monthsToGo.retriveAmountOfMonthToGo(); CalculateAmountOfMoneyToBePaid moneyToBePaidClass = new CalculateAmountOfMoneyToBePaid(); double money=moneyToBePaidClass.amountMoneyToPayBack(); DecimalFormat df = new DecimalFormat("#.##"); System.out.println("Do splaty pozostalo:"); System.out.println(df.format(money)); System.out.println(); System.out.println("Kwota do zaplaty po kolejnych miesiacach"); moneyToBePaidClass.printMoneyToBePaidEachMonth(); } } CalculateAmountOfMonthToGo.java package changeWork; import java.time.LocalDate; import java.time.format.DateTimeFormatter; import java.time.temporal.ChronoUnit; public class CalculateAmountOfMonthToGo { private final String END_PENALTY="01-10-2018"; private LocalDate newWorkStartDate; private String newWorkStart; private long monthsBetween=0; public String getNewWorkStart() { return newWorkStart; } public void setNewWorkStart(String newWorkStart) { this.newWorkStart = newWorkStart; } public long getMonthsBetween() { return monthsBetween; } public void setMonthsBetween(long monthsBetween) { this.monthsBetween = monthsBetween; } public CalculateAmountOfMonthToGo(String newWorkStart) { super(); this.newWorkStart = newWorkStart; } public CalculateAmountOfMonthToGo(){ } public long retriveAmountOfMonthToGo(){ DateTimeFormatter formatter = DateTimeFormatter.ofPattern("dd-MM-yyyy"); try{ newWorkStartDate =LocalDate.parse(newWorkStart, formatter);} catch (Exception e) { e.printStackTrace(); } LocalDate endPenaltyDate= LocalDate.parse(END_PENALTY,formatter); monthsBetween= ChronoUnit.MONTHS.between(newWorkStartDate, endPenaltyDate); if (monthsBetween<0){ throw new RuntimeException("Rozpoczecie pracy po wygasnieciu okresu splaty"); } else if (monthsBetween>18){ throw new RuntimeException("Bledna data rozpoczecia nowej 
pracy"); } return monthsBetween; } } CalculateAmountOfMoneyToBePaid.java package changeWork; import java.text.DecimalFormat; import java.util.Arrays; import java.util.HashMap; import java.util.List; public class CalculateAmountOfMoneyToBePaid { private final double PENALTY = 6000; private final double PENALTY_DURATION = 18; private String newWorkStart; private long monthsBetween = 0; private HashMap<String, Double> amountOfMoney = new HashMap<String, Double>(); public CalculateAmountOfMoneyToBePaid() { } public CalculateAmountOfMoneyToBePaid(long monthsBetween) { this.monthsBetween = monthsBetween; } public double amountMoneyToPayBack() { double ratio = monthsBetween / PENALTY_DURATION; return ratio * PENALTY; } public HashMap<String, Double> printMoneyToBePaidEachMonth() { CalculateAmountOfMonthToGo monthsToGo = new CalculateAmountOfMonthToGo(); System.out.println("Data: \t\t Kwota:"); List<String> keys = Arrays.asList("01-04-2017", "01-05-2017", "01-06-2017", "01-07-2017", "01-08-2017", "01-09-2017", "01-10-2017", "01-11-2017", "01-12-2017", "01-01-2018", "01-02-2018", "01-03-2018", "01-04-2018", "01-05-2018", "01-06-2018", "01-07-2018", "01-08-2018", "01-09-2018"); for (int i = 0; i < keys.size(); i++) { newWorkStart = keys.get(i); monthsToGo.setNewWorkStart(newWorkStart); monthsBetween = monthsToGo.retriveAmountOfMonthToGo(); double moneyToPay = amountMoneyToPayBack(); amountOfMoney.put(newWorkStart, moneyToPay); DecimalFormat df = new DecimalFormat("#.##"); System.out.println(newWorkStart + "\t\t" + df.format(moneyToPay)); } return amountOfMoney; } } Answer: Avoid strings where other types are more appropriate. You can use Date objects instead and eliminate some of the overhead of converting them. Rather than use a hard-coded list of dates (List<String> keys), generate them dynamically. I don't think you need three classes to achieve the functionality you want. Use a minimal Main class to launch "PenaltyCalculator" class to do all the work. 
"CalculateAmountOfMonthsToGo" sounds like the name of a method. It should be a method in the new "PenaltyCalculator" instead of its own class. As a method, it can take newWorkStart as a parameter. Likewise, "CalculateAmountOfMoneyToBePaid" should be a method in PenaltyCalculator. A separate "printPenalties()" method can iterate over the months and call the other two methods with each iteration. The constants like private final double PENALTY = 6000; and private final double PENALTY_DURATION = 18; can live in the printPenalties() method. This method is called from Main. Now for the stretch goal: Create a test harness so that you can quickly check that you haven't changed the answers as you refactor your program. Only preaching this because you said you were new to programming. The earlier you get used to test-driven development, the more productive you'll be. UPDATE: The following code reduces the lines of code from over 100 to about 30 by getting rid of the List called keys as well as the method or class called CalculateAmountOfMonthToGo (it is taken care of in the for loop): 
This eliminates the need to convert Strings to Dates and does not need the HashMap named amountOfMoney as this is calculated on the fly in each iteration of the for loop: import java.time.LocalDate; import java.text.DecimalFormat; public class PenaltyCalculator{ private final double PENALTY = 6000; private final double PENALTY_DURATION = 18; public void printPenaltiesByMonth(){ // start day is first day of current month: LocalDate today = LocalDate.now(); LocalDate monthStart = today.withDayOfMonth(1); System.out.println("Quit On | Owe This"); //duration is 18 months, so program should loop 18 times: for(int monthsOnTheJob = 0; monthsOnTheJob < PENALTY_DURATION; monthsOnTheJob++){ DecimalFormat df = new DecimalFormat("#.##"); System.out.println(monthStart.plusMonths(monthsOnTheJob) + " : " + df.format(calculatePenalty(PENALTY_DURATION-monthsOnTheJob))); } } public double calculatePenalty(double monthsRemaining){ double ratio = monthsRemaining / PENALTY_DURATION; return ratio * PENALTY; } } and this, then, is the Main.java class that just serves to call the above: public class Main { public static void main(String[] args) { PenaltyCalculator penaltyCalc = new PenaltyCalculator(); penaltyCalc.printPenaltiesByMonth(); } }
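As an aside, the test harness suggested earlier doesn't have to be in Java: the pro-rata arithmetic is simple enough to cross-check independently. A minimal Python sketch (the 6000 penalty and 18-month duration are taken from the original code; everything else is illustrative):

```python
# Constants copied from the original Java code.
PENALTY = 6000.0
PENALTY_DURATION = 18

def penalty(months_remaining):
    """Linear pro-rata repayment: full penalty with 18 months left, zero with 0."""
    return PENALTY * months_remaining / PENALTY_DURATION

# Expected value for each month already worked, to compare against the
# Java program's output while refactoring.
schedule = [round(penalty(PENALTY_DURATION - worked), 2)
            for worked in range(PENALTY_DURATION)]
print(schedule[0], schedule[-1])  # 6000.0 333.33
```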
{ "domain": "codereview.stackexchange", "id": 26384, "tags": "java, beginner" }
Lazy split and semi-lazy split
Question: Sometimes I need to be able to split data into chunks, and so something like str.split would be helpful. This comes with two downsides: Input has to be strings You consume all input when generating the output. I have a couple of requirements: It needs to work with any iterable / iterator. Where the items have the != comparator. I don't want to consume the chunk of data when returning it. Rather than returning a tuple, I need to return a generator. And so this left me with two ways to implement the code. A fully lazy version isplit. And one that is semi-lazy where it consumes some of the generator, when moving to the next chunk, without fully consuming it. And so I created: from __future__ import generator_stop import itertools def _takewhile(predicate, iterator, has_data): """ Return successive entries from an iterable as long as the predicate evaluates to true for each entry. has_data outputs if the iterator has been consumed in the process. """ for item in iterator: if predicate(item): yield item else: break else: has_data[0] = False def isplit(iterator, value): """Return a lazy generator of items in an iterator, separating by value.""" iterator = iter(iterator) has_data = [True] while has_data[0]: yield _takewhile(value.__ne__, iterator, has_data) def split(iterator, value): """Return a semi-lazy generator of items in an iterator, separating by value.""" iterator = iter(iterator) has_data = [True] while True: carry = [] d = _takewhile(value.__ne__, iterator, has_data) try: first = next(d) except StopIteration: if not has_data[0]: break yield iter([]) else: yield itertools.chain([first], d, carry) carry.extend(d) An example of these working is below. There is an edge case with isplit, which is as far as I know inherent in the code being fully lazy. This is shown below too. 
print('isplit') print([list(i) for i in isplit('abc def ghi', ' ')]) print([list(i) for i in isplit(' abc def ghi', ' ')]) s = isplit('abc def ghi', ' ') print(list(itertools.zip_longest(*itertools.islice(s, 4)))) print('\nsplit') print([list(i) for i in split('abc def ghi', ' ')]) print([list(i) for i in split(' abc def ghi', ' ')]) s = split('abc def ghi', ' ') print(list(itertools.zip_longest(*itertools.islice(s, 4)))) Which outputs: isplit [['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']] [[], ['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']] [('a', 'b', 'c', None), ('d', 'e', 'f', None), (None, 'g', 'h', None), (None, 'i', None, None)] split [['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']] [[], ['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']] [('a', 'd', 'g'), ('b', 'e', 'h'), ('c', 'f', 'i')] Answer: I would prefer the name iterable for the iterable argument (compare the documentation for the itertools module), and sep for the separator argument (compare the documentation for str.split). isplit has the unsatisfactory feature that you cannot ignore any of the returned iterators — you have to consume each one fully before moving on to the next, otherwise the iteration goes wrong. For example, suppose we want to select words starting with a capital letter. We might try: for word in isplit('Abc def Ghi', ' '): first = next(word) if first == first.upper(): print(first + ''.join(word)) But this produces the output: Abc Traceback (most recent call last): File "<stdin>", line 2, in <module> StopIteration Instead, we have to ensure that we consume each word iterator fully, even if we don't care about it: for word in isplit('Abc def Ghi', ' '): first = next(word) if first == first.upper(): print(first + ''.join(word)) else: for _ in word: pass The same issue arises with the standard library function itertools.groupby, where calling code might move on to the next group before it has finished iterating over the previous group. 
groupby solves this problem for us by fully consuming the previous group as soon as the caller moves on to the next group. It would be helpful for isplit to do the same. The similarity with itertools.groupby suggests that we could implement isplit very simply in terms of groupby, like this: from itertools import groupby def isplit(iterable, sep): """Generate the contiguous groups of items from the iterable that are not equal to sep. The returned groups are themselves iterators that share the underlying iterable with isplit(). Because the source is shared, when the isplit() object is advanced, the previous group is no longer visible. So, if that data is needed later, it should be stored as a list. """ for key, group in groupby(iterable, sep.__ne__): if key: yield group Note that this code behaves like plain str.split() in that it coalesces adjacent separators. If you need the behaviour to be more like str.split(' '), with empty groups when there are adjacent separators, then it should be straightforward to add an else: clause to generate the necessary empty iterators, like this: for key, group in groupby(chain((sep,), iterable, (sep,)), sep.__ne__): if key: yield group else: for _ in islice(group, 1, None): yield iter(()) This uses itertools.chain and itertools.islice. (There are a couple of minor optimizations you could make here: the 1-element tuple (sep,) could be stored in a variable and used twice, and iter(()) could be a global constant since you don't need a new empty iterator each time.)
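A short usage sketch of the groupby-based isplit (repeated here so the snippet is self-contained), showing that partially consumed groups can now be abandoned safely — the failing "capital letter" example from earlier works without draining each group by hand:

```python
from itertools import groupby

def isplit(iterable, sep):
    """Yield the contiguous groups of items not equal to sep."""
    for key, group in groupby(iterable, sep.__ne__):
        if key:
            yield group

# Groups can be abandoned mid-way: groupby finishes consuming the
# previous group as soon as the outer iterator is advanced.
words = []
for word in isplit('Abc def Ghi', ' '):
    first = next(word)
    if first == first.upper():
        words.append(first + ''.join(word))
print(words)  # ['Abc', 'Ghi']
```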
{ "domain": "codereview.stackexchange", "id": 29570, "tags": "python, python-3.x, iterator, lazy" }
Kinect depth data is saturated to 1 meter
Question: Hello everybody, I'm using the simulated kinect of gazebo with the gazebo_ros_openni_kinect plugin. It works fine when the kinect is looking at objects that are closer than 1 meter, but when the objects are at a distance bigger than 1 meter the depth value is always 1. It is not depending on the near/far clip set in the urdf of the kinect. I'm using ros groovy and gazebo 1.9, the image is taken from the topic /depth/image_raw, but even rviz shows a semi-flat point cloud (that is, flat when the depth is bigger than 1 meter). Does anyone know how to fix that problem? Thanks! EDIT: the rgb image works correctly. depth/image_raw: https://www.dropbox.com/s/xlycy5troq7g4v0/depth.jpg rgb/image_raw: https://www.dropbox.com/s/y0am9uxv7ivub6b/rgb.jpg This is how the kinect sensor is defined in the urdf: <!-- xtion sensor --> <gazebo reference="xtion_link"> <sensor type="depth" name="xtion_sensor"> <update_rate>20.0</update_rate> <camera> <horizontal_fov>1.047</horizontal_fov> <image> <width>640</width> <height>480</height> <format>R8G8B8</format> </image> <clip> <near>0.05</near> <far>3</far> </clip> <noise> <type>gaussian</type> <!-- Noise is sampled independently per pixel on each frame. That pixel's noise value is added to each of its color channels, which at that point lie in the range [0,1]. 
--> <mean>0.0</mean> <stddev>0.001</stddev> </noise> </camera> <always_on>1</always_on> <visualize>true</visualize> <plugin name="Xtion_frame_controller" filename="libgazebo_ros_openni_kinect.so"> <alwaysOn>true</alwaysOn> <updateRate>20.0</updateRate> <cameraName>xtion</cameraName> <imageTopicName>ir/image_raw</imageTopicName> <cameraInfoTopicName>ir/camera_info</cameraInfoTopicName> <depthImageTopicName>depth/image_raw</depthImageTopicName> <depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName> <pointCloudTopicName>depth/points</pointCloudTopicName> <frameName>xtion_depth_optical_frame</frameName> <pointCloudCutoff>0.05</pointCloudCutoff> <distortionK1>0.00000001</distortionK1> <distortionK2>0.00000001</distortionK2> <distortionK3>0.00000001</distortionK3> <distortionT1>0.00000001</distortionT1> <distortionT2>0.00000001</distortionT2> </plugin> </sensor> </gazebo> <!-- xtion rgb camera --> <gazebo reference="xtion_link"> <sensor type="camera" name="xtion_rgb_camera"> <update_rate>20.0</update_rate> <camera> <horizontal_fov>1.047</horizontal_fov> <image> <width>640</width> <height>480</height> <format>R8G8B8</format> </image> <clip> <near>0.05</near> <far>3</far> </clip> <noise> <type>gaussian</type> <!-- Noise is sampled independently per pixel on each frame. That pixel's noise value is added to each of its color channels, which at that point lie in the range [0,1]. 
--> <mean>0.0</mean> <stddev>0.001</stddev> </noise> </camera> <always_on>1</always_on> <visualize>true</visualize> <plugin name="Xtion_rgb_camera_controller" filename="libgazebo_ros_camera.so"> <alwaysOn>true</alwaysOn> <updateRate>20.0</updateRate> <cameraName>xtion</cameraName> <imageTopicName>rgb/image_raw</imageTopicName> <cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName> <frameName>xtion_rgb_optical_frame</frameName> <hackBaseline>0.07</hackBaseline> <distortionK1>0.0</distortionK1> <distortionK2>0.0</distortionK2> <distortionK3>0.0</distortionK3> <distortionT1>0.0</distortionT1> <distortionT2>0.0</distortionT2> </plugin> </sensor> </gazebo> Originally posted by Tirjen on Gazebo Answers with karma: 3 on 2013-09-10 Post score: 0 Answer: I am running Gazebo 1.9 with ROS Hydro and everything seems to work correctly. I did have to install the gazebo_plugins from source since there was a bug in the gazebo_ros_openni plugin (which has since been fixed). Originally posted by nkoenig with karma: 7676 on 2013-09-15 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Tirjen on 2013-09-18: I installed ROS Hydro and now it works correctly, thanks!
{ "domain": "robotics.stackexchange", "id": 3451, "tags": "ros, kinect, ros-groovy" }
Copy, Paste And Format
Question: I am currently working with this code to automate some tasks for senior staff members that are not very adept in Excel. Wondering if VBA is simply not a very quick code or if my code is clunky and slow. For clarity, I would think with how simple this code is it could run in under a second or two. Maybe this is overzealous? Sub Paste() '---Paste Macro '---2016-05-23 Dim sht1 As Worksheet Dim sht2 As Worksheet Dim LastRow As Long Dim LastRow2 As Long Dim LastColumn As Long Dim StartCell1 As Range Dim StartCell2 As Range Dim rng1 As Range Dim rng2 As Range Set sht1 = GetWSFromCodeName("Sheet10") Debug.Print sht1.Name Set sht2 = GetWSFromCodeName("Sheet8") Debug.Print sht2.Name Set StartCell1 = Range("A2") Set StartCell2 = Range("B2") 'Find Last Row and Column LastRow = sht1.Cells(sht1.Rows.Count, StartCell1.Column).End(xlUp).Row LastColumn = sht1.Cells(StartCell1.Row, sht1.Columns.Count).End(xlToLeft).Column LastRow2 = sht2.Cells(sht2.Rows.Count, StartCell1.Column).End(xlUp).Row 'Select Range And Copy into Final Formula Sheet sht1.Range(StartCell1, sht1.Cells(LastRow, LastColumn)).Copy Destination:=sht2.Cells(LastRow2 + 1, 2) 'Convert Text in Column C of Final Formula Sheet to Numbers to Allow Advisor Code to Apply Set rng1 = Range(sht2.Cells(LastRow2, 3), sht2.Cells(LastRow2 + LastRow - 1, 3)) With rng1 .NumberFormat = "0" .Value = .Value End With 'Copy Advisor Function down to meet with new Pasted in Data With sht2 Set rng2 = .Cells(LastRow2, 1) End With With rng2 .Copy Destination:=Range(sht2.Cells(LastRow2, 1), sht2.Cells(LastRow2 + LastRow - 1, 1)) End With End Sub '---This Function allows the worksheet name to change in the workbook as it allows the 'user to set Worksheets to codename variables. 
By using this function the user can input a 'codename for a worksheet and the function will call the worksheet name of the corresponding 'codename, allowing the user to set worksheet variables to codenames without losing 'functionality usually associated with such variables. '---2016-05-23 Public Function GetWSFromCodeName(CodeName As String) As Worksheet Dim WS As Worksheet For Each WS In ThisWorkbook.Worksheets If StrComp(WS.CodeName, CodeName, vbTextCompare) = 0 Then Set GetWSFromCodeName = WS Exit Function End If Next WS End Function Answer: The 3 lowest-hanging VBA performance fruit are: Application.ScreenUpdating = False Application.EnableEvents = False Application.Calculation = xlManual Just make sure to restore them at the end of your sub, and/or if your method encounters an error and stops, else your senior people won't be able to use Excel afterwards and will blame you for breaking it. Used like so: Sub/Function () Application.ScreenUpdating = False Application.EnableEvents = False Application.Calculation = xlManual < Code > Application.ScreenUpdating = True Application.EnableEvents = True Application.Calculation = xlAutomatic '/ Assuming it was set to automatic to begin with End Sub/Function And with some (very basic) error handling: Sub/Function () On Error Goto CleanFail Application.ScreenUpdating = False Application.EnableEvents = False Application.Calculation = xlManual < Code > Application.ScreenUpdating = True Application.EnableEvents = True Application.Calculation = xlAutomatic '/ Assuming it was set to automatic to begin with CleanExit: Exit Sub/Function CleanFail: '/ Resets the Application settings, *then* raises the error On Error Goto 0 Application.ScreenUpdating = True Application.EnableEvents = True Application.Calculation = xlAutomatic '/ Assuming it was set to automatic to begin with Err.Raise(Err.Number) '/ Or insert your own error handling here End Sub/Function
{ "domain": "codereview.stackexchange", "id": 20132, "tags": "performance, vba, excel" }
Why don't all objects bounce like rubber balls?
Question: Some things don't bounce like rubber balls do. For example, books don't bounce much when dropped. Why is it that some things bounce while others don't? Answer: Because bouncing requires the object to be elastic - shortly after it deforms, its shape should return to the one it had before deforming. In order for an object to bounce1, the sequence of events would be the following: 1. the object touches the surface with a kinetic energy $E$; 2. the object deforms (doesn't shatter, break, explode, catch fire, etc.) and its kinetic energy transforms into internal energy; 3. there's no (or insignificant) loss of the newly gained internal energy, i.e. no (or little) part of it is dissipated as heat, vibration, etc. (there could be other forms of dissipation too). If all the above steps are passed, the object "un-deforms" - the internal energy gained by deforming and not lost in step 3 turns back into kinetic energy. Now it has its kinetic energy back, and thus has the speed to go up again. In order to bounce, an object must "pass" all the steps above. In other words, the object bounces if the deformation is elastic, not plastic or viscous, and most of the elastic potential energy is released into acceleration of the whole object in the opposite direction. Let's consider three different objects - a rubber ball, a plasticine ball and a book - and see how they behave. Any of them can pass the first step, since they all have the speed. Now they fall on the ground. Both balls pass the second step; they deform to different degrees. The book primarily fails to bounce because its shape favours other modes of energy propagation - dissipation via vibration. Thus, the book is not a contender anymore. What about the third step? The plasticine ball fails it - its gained internal energy is mostly lost by being transformed into thermal energy. So, out of the three objects, only the rubber ball will bounce. 
As an additional example, you could consider a third ball, a steel one (not drawing it here :). It would certainly deform less than the rubber ball, but would still bounce pretty well.2 1 - This answer considers a system where the surface of the floor doesn't deform itself. If there's a trampoline instead of the floor and an object won't stick to it once it falls on it, it will bounce back. If there's sand instead of a hard surface, any object falling in sand would behave like an object which falls on the hard surface and fails the 3rd step. 2 - See "Clarifying the actual definition of elasticity. Is steel really more elastic than rubber?"
{ "domain": "physics.stackexchange", "id": 48230, "tags": "newtonian-mechanics, collision" }
Relativistic electromagnetism and electromagnetic forces on 2 protons
Question: The question I have is how we can get the same result for the net force acting on the individual protons if we judge the system from 2 different reference frames - one using more of the magnetic part of electromagnetism, the other more of the electric part, to calculate the forces. So I'll try to explain my understanding through these two examples, in which I get different results for the particle acceleration depending on the reference frames named in the picture as "observer reference frames". For simplicity I've decided to ignore the magnetic field created by the particles' spins; I think it does not matter for this mental exercise. In example A, both protons are traveling in the same direction and with the SAME speed. If the observer is stationary, he sees both of them moving away from him at the same speed. Because the protons have the same charge, they are repelled away from each other by the electrostatic force. But because they are moving, the upper proton creates a magnetic field around itself, and that field acts on the other proton in such a way as to push it up (towards the upper proton). The same is true for the lower proton, which magnetically attracts the upper one. The direction of the net force depends on the speeds of the protons and how far away from each other they are. So in one case they might attract each other (high speeds) and in another case repel (close together). Now let's look at example B. The observer is traveling at the same speed as the protons and in the same direction. According to this observer, the protons aren't moving, so no magnetic force appears between them. The only thing the protons feel, according to the observer, is the repulsive electrostatic force between them, so the protons, no matter how far away from each other they are or at what velocity they are traveling (according to the stationary observer from case A), will always repel each other. Now this entirely contradicts itself. 
Does this have something to do with time dilation and space contraction effects? And if so, how? I've read the examples of a current-conducting wire and its effect on a moving independent charge. That picture makes sense because of space contraction: there are more opposite (stationary) charges in the same volume of space (in the wire) than if the charges weren't moving relative to each other. That creates an electrostatic force which in a different reference frame can be considered a magnetic force. But here we do not have the "help" of those opposite charges, and I believe the same effect still occurs. How? Answer: Short answer: Force is not a Lorentz invariant and neither is acceleration. The protons always repel each other, with a force that combines the electric and magnetic components of the Lorentz force and depends on the frame of reference of the observer, but which is maximised in their rest frame and which approaches zero as the protons become ultra-relativistic. Details: Exactly your question is dealt with in Purcell & Morin "Electricity & Magnetism" 3rd ed. p.264. The problem you may be having is in thinking that force is a relativistic invariant - it is not. Electric and magnetic fields are transformed when looking at them from a different frame of reference. In the rest frame of the protons there is just the Coulomb repulsion between them, given by $$\vec{F}_{\rm rest} = e\vec{E}_{\rm rest} = \frac{e^2}{4\pi \epsilon_0 r^2}\hat{r} ,$$ where $\vec{E}_{\rm rest}$ is the E-field of a stationary proton. In the lab frame, the electric field in the direction between the two protons is increased by the Lorentz factor to $\vec{E}_{\rm lab} =\gamma \vec{E}_{\rm rest}$, where $\gamma = (1-v^2/c^2)^{-1/2}$ and where $\gamma \geq 1$. At the same time, there is a magnetic field caused by the motion of the protons and this contributes a force $e \vec{v} \times \vec{B}_{\rm lab}$, where $\vec{B}_{\rm lab}$ is the B-field measured in the lab frame. 
The lab B-field is found using the appropriate transform as $$ \vec{B}_{\rm lab} = -\frac{\gamma}{c^2} \vec{v} \times \vec{E}_{\rm rest}$$ Thus the force between the protons on the lab frame is $$F_{\rm lab} = e(\vec{E}_{\rm lab} + \vec{v}\times \vec{B}_{\rm lab}) = e (\gamma \vec{E}_{\rm rest} - \frac{\gamma v^2}{c^2} \vec{E}_{\rm rest}) = \frac{e \vec{E}_{\rm rest}}{\gamma} = \frac{F_{\rm rest}}{\gamma}. $$ This is exactly as required by the rules for transforming forces under special relativity. The force acting between the two protons is smaller in the lab frame and approaches zero as the protons become more and more relativistic. If you had arranged it so your proton beams travelled in parallel lines then you must also have arranged for some force to act in the direction opposing the proton's mutual repulsion. This force would transform in exactly the same way, so that if there was no net acceleration in the lab frame then there would be no net acceleration in the proton rest frame either.
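The algebra above is easy to verify numerically. A small Python sketch, in illustrative units chosen so that $c = 1$ and $eE_{\rm rest} = 1$ (only the ratio of forces matters):

```python
import math

# Natural-ish units: c = 1 and e * E_rest = 1, so F_rest = 1.
c = 1.0
E_rest = 1.0   # transverse E-field of one proton at the other, in their rest frame
F_rest = E_rest

for v in (0.1, 0.5, 0.9, 0.99):
    gamma = 1.0 / math.sqrt(1.0 - v * v / c ** 2)
    E_lab = gamma * E_rest                # transverse field is boosted by gamma
    B_lab = gamma * v * E_rest / c ** 2   # magnitude of -(gamma/c^2) v x E_rest
    F_lab = E_lab - v * B_lab             # electric repulsion minus magnetic attraction
    assert math.isclose(F_lab, F_rest / gamma)
print('F_lab = F_rest / gamma at every speed tested')
```

As $v \to c$ the net repulsion in the lab frame tends to zero, exactly as the ultra-relativistic limit described above.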
{ "domain": "physics.stackexchange", "id": 37418, "tags": "electromagnetism, special-relativity, forces, inertial-frames, observers" }
Strictly from a biological perspective, what is the functional expectation of a human being
Question: Just like the title says: Strictly from a biological perspective, what is the functional expectation of a human being? No religious or philosophy-based answers will be accepted; we are talking biology, not religion or philosophy. Answers involving excessive psychology will be frowned upon unless found to be exceedingly insightful. (The brain is a biological entity.) If you are considering claiming this to be too broad or unanswerable, consider the hypothetical question "what are the functional expectations of human feet", which can be answered fairly easily by a child with no formal training in biology. In my actual question I simply ask about the feet and the other parts attached to them. Edit: Originally this question was titled "Strictly from a biological perspective, what is the purpose of a human existence." To clarify the question I have changed it to "Strictly from a biological perspective, what is the functional expectation of a human being." Despite the concerns raised with the use of the word "purpose", the question was serving its purpose with the answers I am seeing. I did have some reservations about the word "purpose" as well, but I thought I would try it and see what others thought. So if a human stomach has the overall functional expectation of mechanically and chemically breaking down and digesting food, what would be the overall functional expectation of a human as a whole? I am looking not so much for psychologically specific functional expectations as for physical ones. For instance, a house cat's functional expectations are generally considered to be sleeping, eating, reproducing, socializing, hunting, etc. Answer: I will start with a quote from François Jacob, a French geneticist: The sole ambition of a bacterium is to make two bacteria. The same reasoning can be extended to all life on Earth, including humans. There is no purpose to human existence, in the same way that there is no purpose for life at all. 
Life is there because it can. It works, chemically, physically and thermodynamically, given the physical conditions present on Earth. Biology does not give any moral reason for our existence. It only describes what is there. Using experimental techniques we can infer the purpose of the human feet. All we need to do is to remove the feet of a human subject and see what happens. The subject will be unable to walk, and therefore we will be able to infer that the feet's purpose is to enable locomotion. In biological research, we usually remove or silence genes or proteins from an organism to infer their purpose. We then observe the phenotype. Most of the time however, such an experiment does not give any information on the purpose of the gene because its function is not obvious enough to be observed. In the same vein, one could remove humans from the Earth and see what happens. If something changes for the worse on Earth, then we would be able to infer that our purpose was to prevent such event from happening. Obviously, such an experiment is impossible to perform, therefore we will never know what is our purpose, neither will we know whether we actually have a purpose.
{ "domain": "biology.stackexchange", "id": 2103, "tags": "human-biology" }
Fields $E$, $D$ and polarization $P$
Question: I am studying Griffiths' Introduction to Electrodynamics and having trouble grasping a few things. Consider the problem of a dielectric placed between the plates of a parallel plate capacitor. The capacitor's field E acts as the external field, and I know that inside the dielectric there is a polarization P, the dipole moment per unit volume. With the story of induced charges and how the dipoles orient themselves, I believe I have that right in my mind. My first question: are the field E and the field P in opposite directions? I am asking this specifically for the example of a parallel plate capacitor with a dielectric between the plates. Now, there is a third field D, which we associate with the free charges, while P is associated with the bound charges, and I should consider E fundamental and related to the total charge (free + bound). Now: $ D=\epsilon_0E + P $ My persistent intuition is that this should be $-P$: that $D+P$ should equal $E$, also in terms of bound and free charges, since I picture $D$ and $E$ pointing in the same direction and $P$ in the opposite one. I know the question might be dumb, but I am failing to understand this properly; I have read some previous answers here and material on the Internet, but I still have the feeling "I am not exactly sure what is happening here"... Answer: Just considering the surface charge densities on the dielectric, this is a diagram of the situation described in Griffiths. The important idea is that there is an electric field in the dielectric $\vec E_{\rm local}$ which is the sum of the electric fields due to the free charges and the bound charges. $$\vec E_{\rm local} = \vec E_{\rm free} + \vec E_{\rm bound}\Rightarrow E_{\rm local} = E_{\rm free} - E_{\rm bound}$$ with the local field being less than the field due to the free charges. It is this local field which produces a polarization of the dielectric, and the density of electric dipoles depends (linearly) on the local value of the electric field.
$\vec P = \chi \epsilon_0 \vec E_{\rm local}$ where $\chi$ is the electric susceptibility of the dielectric. Note that since $E_{\rm free} > E_{\rm bound}$ the direction of the local field $\vec E_{\rm local} $ and hence the polarization $\vec P$ will be in the same direction as $\vec E_{\rm free}$. So defining the displacement $$\vec D = \epsilon_0 \vec E_{\rm local} + \vec P \Rightarrow D = \epsilon_0 E_{\rm local} + P$$ gives the relationship that you quoted in your question. Furthermore if $\vec P = \chi \epsilon_0 \vec E_{\rm local}$ then $D = \epsilon_0 E_{\rm local} + \chi \epsilon_0 E_{\rm local} \Rightarrow D = \epsilon_{\rm r} \epsilon _0 E_{\rm local}$ where $\epsilon_{\rm r}(= 1+\chi)$ is the relative permittivity of the dielectric.
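The sign convention is easy to sanity-check numerically. The values below are illustrative, not taken from the post; the point is only that with $\chi > 0$, $P$ is parallel to the local field and $D = \epsilon_0 E_{\rm local} + P$ reduces to $\epsilon_{\rm r}\epsilon_0 E_{\rm local}$:

```python
eps0 = 8.854e-12   # vacuum permittivity in F/m
chi = 2.0          # electric susceptibility (illustrative value)
E_local = 1.0e3    # local field magnitude in V/m, taken as positive

P = chi * eps0 * E_local   # polarization, same sign as E_local
D = eps0 * E_local + P     # displacement, with the + sign from the text

# D = eps_r * eps0 * E_local with eps_r = 1 + chi
assert abs(D - (1 + chi) * eps0 * E_local) < 1e-15
assert P > 0               # P is parallel to E_local, not opposite to it
```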
{ "domain": "physics.stackexchange", "id": 51785, "tags": "electrostatics, electric-fields, polarization" }
Implementing Do keyword in a compiler
Question: I'm writing a compiler implementation for a language that you can see defined here. I'm having problems designing how the do keyword should be handled. You can find the code I'm referring to in here. Let me delve a little bit into the problem. According to the documentation: The do keyword is used to call an expression (normally, a method call) just for its side-effect, and discards the result. Now according to the list of possible expressions in the file I referred to before: IntLit(value) StringLit(value) True() False() And(lhs, rhs) Or(lhs, rhs) Plus(lhs, rhs) Minus(lhs, rhs) Times(lhs, rhs) Div(lhs, rhs) LessThan(lhs, rhs) Not(expr) Equals(lhs, rhs) ArrayRead(arr, index) ArrayLength(arr) MethodCall(obj, meth, args) Variable(Identifier(name)) New(tpe) This() NewIntArray(size) I first want to figure out in which cases do should act. Clearly, it should act with MethodCall, as it produces a result, but also with expressions that contain operators (And, Or, Plus, Minus, Times, Div, LessThan, Not, Equals, ...). What would be the complete list of cases I should consider? My second question is how I can implement the do behaviour for MethodCall, that is, discarding the result while keeping the side effects. Answer: The general idea is that do ( expression ) is a statement that evaluates any expression and throws away the result. A first attempt could be def evalExpr(e: ExprTree)(implicit ectx: EvaluationContext): Value = // ... def evalStatement(stmt: StatTree)(implicit ectx: EvaluationContext): Unit = stmt match { // ... case DoExpr(expr) => evalExpr(expr) } but in this way, we return a Value and not a Unit, so the compiler raises a type error. We can fix that by using case DoExpr(expr) => evalExpr(expr) () // the Unit value
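The evaluate-then-discard idea is language-independent. Here is a minimal sketch in Python rather than the poster's Scala framework; the toy AST classes below (IntLit, Plus, DoExpr) stand in for the real node types and model only the structure needed to show the point:

```python
class IntLit:
    def __init__(self, value):
        self.value = value

class Plus:
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs

class DoExpr:
    def __init__(self, expr):
        self.expr = expr

def eval_expr(node):
    """Expressions always produce a value."""
    if isinstance(node, IntLit):
        return node.value
    if isinstance(node, Plus):
        return eval_expr(node.lhs) + eval_expr(node.rhs)
    raise TypeError(node)

def eval_statement(stmt):
    """Statements produce no value; `do` evaluates fully, then discards."""
    if isinstance(stmt, DoExpr):
        eval_expr(stmt.expr)   # side effects (if any) happen here
        return None            # the produced value is thrown away

assert eval_statement(DoExpr(Plus(IntLit(1), IntLit(2)))) is None
```

Note that the expression is still evaluated in full (so any side effects of a method call occur); only the final value is dropped, exactly as in the Scala fix above.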
{ "domain": "cs.stackexchange", "id": 7409, "tags": "programming-languages, compilers, syntax" }
Relationship between stellar mass loss and mass of stars
Question: I am following the semi-empirical relationship for the mass loss rate of a star with mass M, radius R and luminosity L: $log(\dot{M}v_{inf}R^{1/2})=−1.37+2.07log(L/10^{6})$ where $v_{inf}=\frac{\sqrt{(2GM/R)}}{2.6}$ and M, R and L are in solar units. I simplified the relationship for plotting purposes: $log(\dot{M})=−1.37+2.07log(L)−2.07log(10^{6})−\frac{1}{2}log(2*G*M)+log(2.6)$ I hope my simplification is correct; if not, please direct me to the right path. I need to plot a $log(\dot{M})−M$ relationship for the mass range $20−100M_{\odot}$ for a few different luminosities and compare it with the main-sequence $log(\dot{M})−M$ relationship. My first confusion came with the value of G. Since everything is in solar units, I thought I needed to set everything with respect to solar units, so I took $G=4 \pi^{2} M_{\odot}^{-1} AU^{3} yr^{-2}$. I am not quite sure if this is the correct way to set G and L. I plotted the data (plot omitted here), but I was expecting higher mass loss for more massive stars. Please help me understand what I am doing wrong here. Answer: There's nothing wrong with your plot. What's incorrect is your expectation that, as you hold the luminosity constant, higher-mass stars should have higher mass loss. Holding luminosity constant, higher-mass stars have higher escape velocities, which means it's more difficult for gas to escape their gravity well. As such, the expectation is that less mass loss should occur, which is exactly what your plot shows. In reality, more massive stars typically have higher mass loss because more massive stars are usually much more luminous than their less massive brethren. This increase in luminosity offsets the increased depth of their gravity well, and leads to a net increase in mass loss. For example, as you go up the main sequence in mass, luminosity also increases, at a rate faster than the mass.
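The simplified relation is straightforward to evaluate numerically. A sketch, taking the formula exactly as written in the question and using the poster's solar-unit choice for $G$ (the specific mass and luminosity values below are only illustrative):

```python
import math

G = 4 * math.pi ** 2   # AU^3 yr^-2 Msun^-1, the poster's solar-unit choice

def log_mdot(M, L):
    """log10(Mdot) from the simplified relation in the question."""
    return (-1.37 + 2.07 * math.log10(L) - 2.07 * 6
            - 0.5 * math.log10(2 * G * M) + math.log10(2.6))

# At fixed L, a larger M deepens the potential well and lowers the mass loss,
# which is exactly the trend the answer says the plot should show.
assert log_mdot(100, 1e6) < log_mdot(20, 1e6)
# Raising L at fixed M raises the mass loss.
assert log_mdot(20, 1e7) > log_mdot(20, 1e6)
```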
{ "domain": "physics.stackexchange", "id": 46945, "tags": "astrophysics, astronomy" }
How to determine the direction of a wave propagation?
Question: In the textbook, it is said that a wave of the form $y(x, t) = A\cos(\omega t + \beta x + \varphi)$ propagates along the negative $x$ direction and $y(x, t) = A\cos(\omega t - \beta x + \varphi)$ propagates along the positive $x$ direction. This statement looks really confusing because, when it says the wave is propagating along the $\pm x$ direction, to my understanding we can drop the time term and ignore the initial phase $\varphi$ while analyzing the direction, i.e. $y(x, 0) = A\cos(\pm\beta x)$. However, because of the symmetry of the cosine function, $\cos(\beta x)\equiv \cos(-\beta x)$, so how can we determine the direction of propagation from that? I know my reasoning must be incorrect, but I don't know how to determine the direction. So if we don't go over the math, how do we figure out the direction of propagation from the physical point of view? Why does $-\beta x$ correspond to propagation in the positive $x$ direction and not the opposite? Answer: For a particular section of the wave which is moving in any direction, the phase must be constant. So, if the equation says $y(x,t) = A\cos(\omega t + \beta x + \phi)$, the term inside the cosine must be constant. Hence, if time increases, $x$ must decrease to make that happen. That makes the location of the section of the wave under consideration, and hence the wave itself, move in the negative direction. The opposite happens when the equation says $y(x,t) = A\cos(\omega t - \beta x + \phi)$: if $t$ increases, $x$ must increase to make up for it. That makes a wave moving in the positive direction. The basic idea: for a moving wave, you consider a particular part of it, and it moves. This means that the same $y$ would be found at other $x$ for other $t$, and if you change $t$, you need to change $x$ accordingly. Hope that helps!
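The constant-phase argument can be checked directly: hold the phase at a fixed value and solve for $x(t)$. A sketch, with arbitrary illustrative values for $\omega$, $\beta$ and $\varphi$:

```python
omega, beta, phi = 2.0, 1.0, 0.5   # illustrative values

def tracked_x(t, sign):
    """x(t) that keeps the phase omega*t + sign*beta*x + phi equal to zero.

    sign=+1 corresponds to cos(wt + bx + phi), sign=-1 to cos(wt - bx + phi).
    """
    return -sign * (omega * t + phi) / beta

# The minus form moves toward +x as t grows ...
assert tracked_x(1.0, -1) > tracked_x(0.0, -1)
# ... and the plus form moves toward -x.
assert tracked_x(1.0, +1) < tracked_x(0.0, +1)
```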
{ "domain": "physics.stackexchange", "id": 69669, "tags": "waves" }
Finding examples of languages that are "anti-palindromic"
Question: Let $\Sigma = \{ 0, 1 \}$. A language $L \subseteq \Sigma^* $ is said to have the "anti-palindrome" property if for every string $w$ that is a palindrome, $w\notin L$. In addition, for every string $u$ that is not a palindrome, either $u\in L$ or $\mathrm{Reverse}(u) \in L$, but not both(!) (exclusive or). I understand the anti-palindrome property, but I could not find any languages that have this property. The closest one I could find is $\Sigma^* \setminus L$, but it does not have the exclusive-or part... that is, for example, both $01$ and $10$ are in $L$. Could anyone give me an example of a language that has this property? Or possibly even more than a single example, because I fail to see what kind of limitations this puts on a language. (Must it be non-regular? Context-free? Or not even in $R$? etc.) Answer: One example would be $L = \{ x\ \ |\ \ binary(x) < binary(x^R), x\in [0,1]^*\}$. And yet another example: $L' = \{ x\ \ |\ \ binary(x) > binary(x^R), x\in [0,1]^*\}$. The idea is, if $x \neq x^R$, you make a rule to choose only one of them. You need to choose the rule such that palindromes are rejected ($f(x) < f(x^R)$; for palindromes you must have $f(x)= f(x^R)$). You can also change the alphabet; I took the binary alphabet just to get a quick answer. $L$ and $L'$ above are not regular, and every anti-palindromic language will be non-regular and can be as bad as a non-RE language. Example of an undecidable language: $L=\{x\ \ |\ \ $such that $ binary(x)<binary(x^R)$ if both $x$ and $x^R$ $\not\in$ Halt or both $x$ and $x^R$ $\in$ Halt, otherwise if $x \in$ Halt$\}$ Klaus Draeger explained in a comment that an anti-palindromic language can even be context-free: $L=\{x0y1x^R\ |\ x,y\in\{0,1\}^*\}$
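The first example can be verified exhaustively for short strings. A sketch in Python, reading binary(x) as the integer value of the bit string (two strings of the same length have the same value only if they are identical, so equality of values on a string and its reverse happens exactly for palindromes):

```python
from itertools import product

def in_L(x: str) -> bool:
    """Membership in L = { x | binary(x) < binary(reverse(x)) }."""
    return int(x, 2) < int(x[::-1], 2)

# Check the anti-palindrome property on every string of length 1..8.
for n in range(1, 9):
    for bits in product("01", repeat=n):
        w = "".join(bits)
        r = w[::-1]
        if w == r:
            assert not in_L(w)           # palindromes are excluded
        else:
            assert in_L(w) != in_L(r)    # exactly one of w, reverse(w) is in L
```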
{ "domain": "cs.stackexchange", "id": 6048, "tags": "formal-languages" }
General Game Loop 3.0
Question: Follow-up from General Game Loop 2.0. It has been quite a while. Some major changes involve: Removed the dependency on Swing. The more I read up on Swing, the more I understood it was meant for handling forms. Quoting directly: "Swing is a GUI widget toolkit for Java". As games generally depend on a GUI but aren't solely based on it, I figured I would write that part myself. Mouse and keyboard input have been implemented. Event handling is implemented; reflection is used to call the right method or the active object's method. Scenes have been implemented; think of them as the "current room", like a page in an install wizard. Scenes are responsible for all active sprites and objects on the screen. You can ignore all Z-classes as of now; they might be in a future question. The idea is if the core class can't perform an event, it will give it to the scene; if the scene can't handle the event, it will be passed on to the active container. If that one can't handle it, then and only then will the event be trashed.
package scene; import java.awt.Graphics; import java.awt.Image; import java.lang.reflect.InvocationTargetException; import java.util.ArrayList; import java.util.List; import zlibrary.ZBackground; import zlibrary.ZDrawable; import zlibrary.ZObject; import core.Event; import core.EventAdder; public abstract class Scene { private ZBackground background; protected final List<ZDrawable> sprites = new ArrayList<>(); protected final List<ZObject> objects = new ArrayList<>(); protected EventAdder eventAdder; public Scene (Image image, EventAdder eventAdder) { background = new ZBackground (image, 0, 0); this.eventAdder = eventAdder; } /** * Rendering, called from render in Game.java * @param g - */ public void render (Graphics g) { background.render(g); for (ZDrawable sprite : sprites) { sprite.render(g); } } /** * Updating, called from tick in Game.java */ public void tick () { for (ZObject o : objects) { o.tick (); } } /** * Iterates through all objects existing in current scene and checks if that button was pressed by key input * @param keyCode - which key was pressed */ public void keyPressed(int keyCode) { for (ZObject o : objects) { if (o.isKeyHotkey (keyCode)) { o.pressed (); } } } /** * Iterates through all objects existing in current scene and checks if that button was pressed by mouse input * @param x - mouse position in x when left button was pressed * @param y - mouse position in y when left button was pressed */ public void buttonPressed(int x, int y) { for (ZObject o: objects) { if (o.isMouseInside (x, y)) { o.pressed (); } } } /** * Removes this scene, called from switchScene in Game.java when switching scene */ public void remove () { sprites.clear(); objects.clear(); } /** * Event handler for current scene; uses reflection to invoke method based on event * @param event - which event was triggered from Game.java */ public void eventHandler (Event event) { System.out.println(this.getClass().getName() + ": Event: " + event.toString()); try { 
this.getClass().getMethod(event.getMethod(), String.class).invoke(this, event.getArgument()); } catch (IllegalAccessException | IllegalArgumentException | InvocationTargetException | SecurityException | NoSuchMethodException e) { e.printStackTrace(); System.exit(-1); } } /** * Overridden toString * @return the name of the scene */ public String toString () { return this.getClass().getName(); } } For the main class. package core; import java.awt.Canvas; import java.awt.Dimension; import java.awt.Frame; import java.awt.Graphics; import java.awt.Image; import java.awt.event.KeyAdapter; import java.awt.event.KeyEvent; import java.awt.event.MouseAdapter; import java.awt.event.MouseEvent; import java.awt.event.WindowAdapter; import java.awt.event.WindowEvent; import java.awt.image.BufferStrategy; import java.lang.reflect.InvocationTargetException; import scene.Scene; class Game extends Canvas implements Runnable { // Run, Tick, Render, Entities and Drawables private static final long serialVersionUID = 7629246777343825696L; private boolean isRunning; private Thread thread; private long now = System.nanoTime (); private long nextTick = now; private long nextRender = now; // Window Settings, Event Queue and Current Scene private Frame frame; private final EventQueue eventQueue = new EventQueue (); private Scene currentScene; /** * Constructor, initialized by main () */ private Game () { // set up the window initWindow (); // off we go thread = new Thread(this); thread.start (); } /** * Game Loop, executed in thread */ public void run () { // off we go isRunning = true; // set up title screen eventQueue.getEventAdder().add(new Event ("switchScene,TitleScreen")); while (isRunning) { now = System.nanoTime (); // get all events and execute them Event currentEvent; while ((currentEvent = eventQueue.get()) != null ) { eventHandler (currentEvent); } // call tick for all entities if (now - nextTick >= 0) { update(); do { nextTick += Constants.NANOS_PER_TICK; } while (now - 
nextTick >= 0); } // call draw for all drawables if (now - nextRender >= 0) { render (); do { nextRender += Constants.NANOS_PER_RENDER; } while (now - nextRender >= 0); } // yield time to other processes final long workTime = System.nanoTime(); final long minDelay = Math.min(nextTick - workTime, nextRender - workTime); if (minDelay > 0) { sleep ((long)((minDelay + Constants.NANOS_PER_MILLISECOND) / Constants.NANOS_PER_MILLISECOND)); } } } /** * Switches from one scene to another, removes the old one as well * @param scene - which scene to switch to */ public void switchScene (String scene) { if (currentScene != null) { currentScene.remove (); } try { Class<?> sceneToLoad = Class.forName("scene."+scene); System.out.println("--"+sceneToLoad.toString()); Class[] arguments = new Class[2]; arguments[0] = Image.class; arguments[1] = EventAdder.class; currentScene = (Scene) sceneToLoad.getConstructor(arguments).newInstance(null, eventQueue.getEventAdder()); } catch (ClassNotFoundException | InstantiationException | IllegalAccessException | IllegalArgumentException | InvocationTargetException | NoSuchMethodException | SecurityException e) { e.printStackTrace(); eventQueue.getEventAdder().add(new Event ("exitGame,"+Constants.ERROR_NO_SUCH_SCENE_EXISTS)); } System.out.println("Switched scene to: " + currentScene.toString()); } /** * Exits game, saves the events. 
* @param code - code to exit with */ public void exitGame (String code) { System.exit(Integer.parseInt(code)); } /** * Event handler for Game; uses reflection to invoke method based on event * @param event - which event was triggered */ public void eventHandler (Event event) { System.out.println("Event: " + event.toString()); try { this.getClass().getMethod(event.getMethod(), String.class).invoke(this, event.getArgument()); } catch (IllegalAccessException | IllegalArgumentException | InvocationTargetException | SecurityException e) { e.printStackTrace(); eventQueue.getEventAdder().add(new Event ("exitGame,"+Constants.ERROR_INVALID_EVENT)); } catch (NoSuchMethodException e) { currentScene.eventHandler(event); } } /** * Game Updates for current scene */ private void update () { currentScene.tick (); } /** * Renders current scene */ private void render () { BufferStrategy bufferstrategy = getBufferStrategy (); if (bufferstrategy == null) { createBufferStrategy(3); return; } Graphics g = bufferstrategy.getDrawGraphics(); g.clearRect(0, 0, Constants.WIDTH, Constants.HEIGHT); currentScene.render(g); g.dispose(); bufferstrategy.show(); } /** * Helper for Game, initializes the frame, adds window, key and mouse listener */ private void initWindow () { setPreferredSize (new Dimension (Constants.WIDTH, Constants.HEIGHT)); setMaximumSize (new Dimension (Constants.WIDTH, Constants.HEIGHT)); setMinimumSize (new Dimension (Constants.WIDTH, Constants.HEIGHT)); addKeyListener(new KeyAdapter () { public void keyReleased (KeyEvent key) { currentScene.keyPressed(key.getKeyCode()); } }); addMouseListener(new MouseAdapter () { public void mousePressed(MouseEvent mb) { if (mb.getButton() == MouseEvent.BUTTON1) { currentScene.buttonPressed(mb.getX(), mb.getY()); } } }); frame = new Frame (Constants.TITLE); frame.add (this); frame.setResizable (false); frame.pack (); frame.setLocationRelativeTo (null); frame.setVisible (true); frame.addWindowListener (new WindowAdapter () { public void 
windowClosing(WindowEvent we){ eventQueue.getEventAdder().add(new Event ("exitGame,"+0)); } }); frame.addKeyListener(new KeyAdapter () { public void keyReleased (KeyEvent key) { currentScene.keyPressed(key.getKeyCode()); } }); } /** * Helper for Game, lets the current thread sleep * @param delayMS - time in milliseconds to sleep */ private void sleep (long delayMS) { try { Thread.sleep(delayMS); } catch (InterruptedException ie) {} } /** * Main Function, instantiating Game * @param args - system input, none used */ public static void main(String[] args) { new Game(); } } Some thoughts I have, and some direct problems I know exist. /** * Helper for Game, initializes the frame, adds window, key and mouse listener */ private void initWindow () { ... } Currently has double sets of keyListener and windowsListener. Both the frame and the window itself currently needs them. The ideal solution would be to only need for one of them. Messy Reflection My idea was to get it as dynamic as possible, to never have to touch the core files after a certain point in development. However, they didn't become as dynamic as I wanted, a lot of type casting and error checking is needed here and there for it work. Split Game.java I've been thinking about moving everything directly related to the Window and the Frame out of the Game.java class and instead let Game have a Window class, maybe even split up Window and Frame. Small addition: The package names are a bit temporary as well, do not worry about them not following the naming convention. They will later on when I have the structure more set in stone. Answer: First of all, it's a lot easier to review code if I can easily compile and run it on my PC. When reviewers can run your code, they can also test their theories, and often tell you more than theories, and include actual code that you can use. So if you want interesting answers, make it easier to review your code, ideally provide a GitHub link. 
The idea is if the core class can't perform an event, it will give it to the scene; if the scene can't handle the event, it will be passed on to the active container. If that one can't handle it, then and only then will the event be trashed. Sounds like the Chain of responsibility pattern. That's what you should be doing instead of the messy reflection stuff. This is one of the biggest problems in your code. The other big problem is that Game is doing too much: it manages a thread, manages the event queue and configures a frame. As you suspected, you need to split this up. Simple improvements Some simple improvements are possible: Make everything final that you can. Scene.background is never reassigned, so it can be final. Convert member variables to local variables when possible, for example isRunning, thread, frame. Common bad practices Some known bad practices that static analysis tools would point out about your code: Printing to the console: consider using a logger instead. Exiting in the middle of the code: find a more graceful way to shut down your program. Strange code The way you process the event queue is more complicated than it needs to be: Event currentEvent; while ((currentEvent = eventQueue.get()) != null) { eventHandler(currentEvent); } The natural way to process a queue would be more like this: while (!queue.isEmpty()) { eventHandler(queue.poll()); } I'm wondering why you write this kind of condition: if (now - nextTick >= 0) { Instead of the more natural: if (now >= nextTick) { And why do this kind of loop: do { nextTick += Constants.NANOS_PER_TICK; } while (now - nextTick >= 0); Instead of the faster math: nextTick += (1 + (now - nextTick) / Constants.NANOS_PER_TICK) * Constants.NANOS_PER_TICK; Other less obvious bad practices and code smell A class/interface called Constants smells. Is that really the best place to put WIDTH, HEIGHT, TITLE? I seriously doubt it. I suggest deleting that class and moving the constants out to more appropriate places.
Keep in mind that there's no need to keep all constants in the same place. They should be in the place where they are needed and make the most sense. The Game constructor immediately starts executing a thread. This is a bit odd. Take a hint from the JDK's Thread class: it doesn't start executing itself immediately upon construction; that's a separate action. About Scene.keyPressed, can you associate multiple objects with the same hot key? If yes, the method is fine. If not, then instead of iterating over the objects it would be better to use a map. remove is not a great name for a method that removes self. A method named "remove" usually takes an object to remove as a parameter. Maybe cleanup would be better.
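The do/while and the closed-form replacement suggested in the review compute the same next tick. A quick equivalence check, sketched in Python (assuming now >= nextTick, as the surrounding if guarantees, so Python's floor division agrees with Java's truncating division on these non-negative operands):

```python
NANOS_PER_TICK = 16_000_000

def advance_loop(next_tick, now):
    """The original: do { nextTick += T; } while (now - nextTick >= 0);"""
    while True:
        next_tick += NANOS_PER_TICK
        if now - next_tick < 0:
            return next_tick

def advance_math(next_tick, now):
    """The closed-form version from the review."""
    return next_tick + (1 + (now - next_tick) // NANOS_PER_TICK) * NANOS_PER_TICK

# Same result for on-boundary, off-boundary and multi-tick lag values.
for now in (0, 1, NANOS_PER_TICK - 1, NANOS_PER_TICK, 5 * NANOS_PER_TICK + 7):
    assert advance_loop(0, now) == advance_math(0, now)
```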
{ "domain": "codereview.stackexchange", "id": 14076, "tags": "java, performance, game" }
How $N$ qubits correspond to $2^N$ bits?
Question: I read everywhere that $N$ qubits correspond to $2^N$ bits. Let's start with 1 qubit, which is commonly represented by $\alpha |0\rangle + \beta |1\rangle$ where $\alpha$ and $\beta$ are complex numbers. This looks to me like infinitely many bits. How do they choose $\alpha$ and $\beta$ so that it becomes 2 bits? Edit: Why is it commonly said that $N$ qubits correspond to $2^N$ bits? Answer: Despite what badly written pop-science explanations of quantum computation may tell you, a qubit is not two classical bits and $N$ qubits are not $2^N$ bits. Qubits are fundamentally different from bits, pairs of bits or anything else (except for a representation of $\alpha$ and $\beta$ in binary to precision sufficient to classically simulate the computation to whatever precision is required). As for why people talk about $N$ qubits being equivalent to $2^N$ bits, it's hard to say. To paraphrase Tolstoy, all correct explanations are alike but all incorrect explanations are incorrect in their own particular way. Perhaps it comes from the widespread mistaken belief that quantum computers somehow try all the options in parallel; perhaps it's a mistranslation of the fact that quantum computers offer exponential speedup over some classical algorithms. Scott Aaronson has some thoughts along these lines: Fueling the belief that countless more quantum algorithms should exist (or that our not finding them is a failure) seems to be the idea that a quantum computer could just "try every possible answer in parallel". But we've understood since the early 90s that that's not how quantum algorithms work! You need to choreograph an interference pattern, where the unwanted paths cancel The miracle, I'd say, is that this trick yields a speedup for any classical problems, not that it doesn't work for more of them
{ "domain": "cs.stackexchange", "id": 20620, "tags": "computation-models, quantum-computing" }
What is the complexity of the problem of computing the cardinality of the union of many (finite) and small sets?
Question: What is the complexity of the problem of computing the cardinality of the union of many (finite) and small sets? What are both the time and space complexity of the naive algorithm that does this computation? An inefficient recursive algorithm for this task is given by a recurrence (shown only as an image in the original post). This results in $\Theta(2^n)$ exponential blow-up in time complexity, which is incredibly bad! Another, iterative algorithm to compute the cardinality (also shown only as an image) is a formula taken from a ProofWiki webpage, which gives no proof, only the formula itself. But I don't know how to analyze its time and space complexity. EDIT: It appears that the iterative algorithm, unlike the recursive algorithm, iterates over all the different subsets of $\{S_1, \dots, S_n\}$ in order to compute the cardinality of the union of all sets from $S_1$ to $S_n$. But the number of subsets of $\{S_1, \dots, S_n\}$ is $2^n$, so the time complexity of the iterative algorithm is the same as that of the recursive algorithm, i.e. the iterative algorithm also has a $\Theta(2^n)$ exponential blow-up in time. Does there exist an algorithm with both polynomial time and polynomial space for this computation? I tried to google an answer to this question for hours, but I didn't find it anywhere. Answer: Yes, there is a polynomial time algorithm. You simply iterate over all of the elements of each set and add each one to some data structure (e.g., a hashtable, a balanced binary tree) if it's not already present. Then, you count the number of items in that data structure. If there are $n$ input sets $S_1,\dots,S_n$, each of size $k$, then the running time of this is $O(nk \log(nk))$ if you use a self-balancing binary tree data structure. The expected running time is $O(nk)$ if you use a hash table. The size of the input is $\Theta(nk)$, so this yields an algorithm whose running time is polynomial in the size of the input.
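The answer's algorithm is a few lines in any language with a hash-based set; a sketch in Python:

```python
def union_cardinality(sets):
    """|S1 ∪ ... ∪ Sn| in expected time linear in the total input size."""
    seen = set()                  # hash table: expected O(1) insert/lookup
    for s in sets:
        for item in s:
            seen.add(item)        # duplicates are ignored automatically
    return len(seen)

assert union_cardinality([{1, 2}, {2, 3}, {3, 4}]) == 4
assert union_cardinality([]) == 0
```

Space is also polynomial: the auxiliary set never holds more elements than the input contains.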
{ "domain": "cs.stackexchange", "id": 9419, "tags": "algorithms, complexity-theory, algorithm-analysis, time-complexity, space-complexity" }
PropertyInfo GetValue and Expression Cache GetValue
Question: I have implemented PropertyInfo GetValue and Expression Cache GetValue by Logic from Reflection vs. compiled expressions vs. delegates - Performance comparison,but I'm sure that there are better ways to do every thing that I did. I would appreciate any feedback on this code, which does work according to spec: MyLogic Asynchronous nonblocking, which does not block thread when adding a Cache Dictionary Key use propertyInfo GetHash() if dictionary cache containskey then use compiler expression function else use Reflection GetValue My Class using System; using System.Collections.Generic; using System.Diagnostics; using System.Linq.Expressions; using System.Reflection; using System.Threading.Tasks; public class TestService { private static Dictionary<int, object> ExpressionCache = new Dictionary<int, object>(); public IEnumerable<string> Execute<T>(IEnumerable<T> enums) { var expressionCache = (enums); var props = typeof(T).GetProperties(); foreach (var e in enums) { foreach (var p in props) { var func = GetOrAddExpressionCache<T>(p); var value = string.Empty; if (func == null) { Debug.WriteLine("Use Reflection"); value = p.GetValue(e).ToString(); } else { Debug.WriteLine("Use Expression"); value = func(e); } yield return value; } } } private Func<T, string> GetOrAddExpressionCache<T>(PropertyInfo prop) { var key = prop.GetHashCode(); if (ExpressionCache.ContainsKey(key)) { var func = ExpressionCache[key] as Func<T, string>; return func; }else{ Task.Run(()=>AddExpressionCacheAsync<T>(prop)); return null; } } //Asynchronous nonblocking, which does not block a thread when adding a Cache private async Task AddExpressionCacheAsync<T>(PropertyInfo propertyInfo) { var key = propertyInfo.GetHashCode(); if (!ExpressionCache.ContainsKey(key)) { var func = GetValueGetter<T>(propertyInfo); ExpressionCache.Add(key, func); } await Task.Yield(); } private static Func<T, string> GetValueGetter<T>(PropertyInfo propertyInfo) { var instance = 
Expression.Parameter(propertyInfo.DeclaringType); var property = Expression.Property(instance, propertyInfo); var toString = Expression.Call(property, "ToString", Type.EmptyTypes); return Expression.Lambda<Func<T, string>>(toString, instance).Compile(); } } Demo class Program{ public static void Main(string[] args){ var data = new[] { new { Name = "Henry", Age = 25 } }; for (int i = 0; i < 2; i++) { var service = new TestService(); var result = service.Execute(data).ToArray(); } } } /* Console : Use Reflection Use Reflection Use Expression Use Expression */ Answer: Problems Both the reflection and expression-based approach fail to take null values into account (NullReferenceException). They also fail to take types with indexers into account (TargetParameterCountException). Hash-codes are not unique identifiers. Different properties (and objects in general) can have the same hash code - that just means that they might be equal. Use prop.MetadataToken in combination with the module they're declared in (prop.Module.ModuleVersionId). Compiling expressions asynchronously makes this a lot more complex than it needs to be, and that complexity is not properly taken care of: There's no guarantee that compilation will be finished when the next object is processed. This can result in multiple compilations for the same property. That's a waste of work. The results of these compilations are added to a Dictionary, which is not thread-safe. In the best case, adding a key that already exists will throw an exception. In the worst case, the dictionary's internal state will become corrupted. Those exceptions are not being caught. Before .NET Framework 4.5, that would've caused a crash, and in 4.5 and higher it may still do so, depending on certain settings. Other improvements Why does Execute take a sequence of items, instead of a single item? 
Flattening results is easy (data.SelectMany(service.Execute)), 'unflattening' is not - the caller would have to figure out the number of properties, and you'd have to write a method to split up a single sequence into sub-sequences. Instead of doing ExpressionCache.ContainsKey(key), followed by ExpressionCache[key], use TryGetValue. This lets you check the presence of a key and get its associated value with a single lookup. There's no point in making AddExpressionCacheAsync async. It's not doing any asynchronous work (that await Task.Yield() is useless). You're already calling this method from within a Task.Run call, so it'll be executed asynchronously anyway. In GetValueGetter, use nameof(object.ToString) instead of "ToString". Readability issues Some names are not very descriptive: Execute -> GetPropertyValues, enums -> items, func -> compiledExpression, GetValueGetter -> CompileGetValueExpression. There are several abbreviations that do not improve readability: e -> item, props -> properties, p -> property.
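The thread-safety problems called out above (racy ContainsKey/Add on a plain Dictionary, possible duplicate compilations) are easiest to see in a language-neutral sketch. Below is a Python analogue of a safe get-or-add cache, not the C# fix itself: the single-lookup fast path plays the role of TryGetValue, and the lock plus re-check ensures the factory runs at most once per key:

```python
import threading

class GetterCache:
    """Thread-safe memoized factory results (analogue of the C# expression cache)."""

    def __init__(self):
        self._cache = {}
        self._lock = threading.Lock()

    def get_or_add(self, key, factory):
        getter = self._cache.get(key)     # one lookup, like TryGetValue
        if getter is not None:
            return getter
        with self._lock:                  # serialize the slow path
            if key not in self._cache:    # re-check: another thread may have won
                self._cache[key] = factory()
            return self._cache[key]

cache = GetterCache()
calls = []

def factory():
    calls.append(1)
    return lambda obj: str(obj)

g1 = cache.get_or_add("k", factory)
g2 = cache.get_or_add("k", factory)
assert g1 is g2 and len(calls) == 1       # built exactly once, then reused
```

In C#, ConcurrentDictionary with GetOrAdd gives the same behavior without hand-rolled locking.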
{ "domain": "codereview.stackexchange", "id": 33307, "tags": "c#" }
Writing CSV file from huge JSON data
Question: I am writing a program that reads from DB and outputs to a CSV file. Besides the regular columnar data there are 2 JSON fields data as well. The table layout looks like this (other fields removed for brevity): +----+--------------+-------------+-----------------------+ | ID | Product_Type | Json_Data | Demographic_Questions | +----+--------------+-------------+-----------------------+ | 1 | DPI | {some_JSON} | {another_JSON} | +----+--------------+-------------+-----------------------+ | 2 | Travel | {some_JSON} | {another_JSON} | +----+--------------+-------------+-----------------------+ Program logic Read data from DB Store columnar data into a map Convert JSON data into CSV format and store into the map Write map into CSV file The main program public static void extractData(String lastRunDateTime, String extractionType) throws Exception { List<LinkedHashMap<String, String>> flatJson = new ArrayList<LinkedHashMap<String, String>>(); String result = ""; ResultSet rsData = null; List<String> productType = new ArrayList<>(); // Store Product Type name for SQL & CSV creation try { conn = dbUtil.dbConnect(); String sqlQuery = "SELECT DISTINCT Product_Type FROM Mapping WHERE Extraction_Type = '"+ extractionType +"'"; st = conn.prepareStatement(sqlQuery); rsData = st.executeQuery(); while(rsData.next()) { productType.add(rsData.getString("Product_Type")); } // Currently there are 4 product types in DB for(int i = 0; i < productType.size(); i++) { flatJson.clear(); isEmptyRS = true; String sProductType = productType.get(i); LOG.debug("Extraction Started for WebDB data " + sProductType + " V" + extractionType); sqlQuery = "SELECT b.Rate, b.Comments, a.* " + "FROM users_data a LEFT OUTER JOIN users_ratings b ON a.sys_policy_no = b.sys_policy_no " + "WHERE a.Product_Type = '" + sProductType + "' " + "AND CAST(a.Submitted_Date as date) BETWEEN '2016-08-01 00:00:00' AND '" + currentDateTime + "' " + "ORDER BY a.Submitted_Date DESC"; st = 
conn.prepareStatement(sqlQuery); rsData = st.executeQuery(); while(rsData.next()) { LinkedHashMap<String, String> map = new LinkedHashMap<String, String>(); //LOG.debug("Sys_Policy_No = " + rsData.getString("Sys_Policy_No")); map.put("ID", rsData.getString("ID")); map.put("Product_type", rsData.getString("Product_type")); // Read JSON data and convert to columns result = rsData.getString("Json_Data"); if(result != null && result.length() != 0) { addKeys("", new ObjectMapper().readTree(result), map); } // Read Demographic data and convert to columns result = rsData.getString("Demographic_Questions"); if(result != null && result.length() != 0) { addKeys("", new ObjectMapper().readTree(result), map); } flatJson.add(map); } String filepath = config.getPropValue("GetAllJsonDataFilePath"); String filename = filepath + sProductType + "_csv_json_all_V" + extractionType + ".csv"; LOG.debug("Writing " + filename + "..."); CSVWriter writer = new CSVWriter(); writer.writeAsCSV(flatJson , filename); LOG.debug("Extraction Completed for WebDB data " + sProductType + " V" + extractionType + "\n"); } } catch (Exception e) { LOG.error(e.getMessage(), e); } finally { st.close(); if(conn!=null) conn.close(); } } JSON conversion program private static void addKeys(String currentPath, JsonNode jsonNode, Map<String, String> map) { if (jsonNode.isObject()) { ObjectNode objectNode = (ObjectNode) jsonNode; Iterator<Map.Entry<String, JsonNode>> iter = objectNode.fields(); String pathPrefix = currentPath.isEmpty() ? "" : currentPath + "."; while (iter.hasNext()) { Map.Entry<String, JsonNode> entry = iter.next(); addKeys(pathPrefix + entry.getKey(), entry.getValue(), map); } } else if (jsonNode.isArray()) { ArrayNode arrayNode = (ArrayNode) jsonNode; for (int i = 0; i < arrayNode.size(); i++) { addKeys(currentPath + "_" + i, arrayNode.get(i), map); } } else if (jsonNode.isValueNode()) { ValueNode valueNode = (ValueNode) jsonNode; String value = valueNode.asText().replace("\n", ". 
").replace("\r", ""); map.put(currentPath, value); } } CSV Writer program public void writeAsCSV(List<LinkedHashMap<String, String>> flatJson, String fileName) throws IOException { LinkedHashSet<String> headers = collectHeadersOrdered(flatJson); String output = StringUtils.join(headers.toArray(), ",") + "\n"; for (LinkedHashMap<String, String> linkedMap : flatJson) { output = output + getCommaSeparatedRow(headers, linkedMap) + "\n"; } writeToFile(output, fileName); } private LinkedHashSet<String> collectHeadersOrdered(List<LinkedHashMap<String, String>> flatJson) { LinkedHashSet<String> headers = new LinkedHashSet<String>(); for (LinkedHashMap<String, String> linkedMap : flatJson) { headers.addAll(linkedMap.keySet()); } return headers; } private String getCommaSeparatedRow(Set<String> headers, Map<String, String> map) { List<String> items = new ArrayList<String>(); for (String header : headers) { String value = map.get(header) == null ? "" : map.get(header).replace(",", ""); items.add(value); } return StringUtils.join(items.toArray(), ","); } private void writeToFile(String output, String fileName) throws IOException { try (BufferedWriter bw = new BufferedWriter(new FileWriter(fileName))) { LOG.debug("Generating " + fileName + " ..."); bw.write(output); } catch (IOException e) { LOG.error(e.getMessage(), e); } } While the program runs fine without error it is taking way too much time to execute, roughly 3-4 hours. Currently, the biggest CSV filesize is at 40 MB (around 200k rows, 1300 columns). The 2 JSON fields are subjected to very frequent change and I've seen it growing by 30 data elements every other months. What can I do to increase the performance? Answer: JSON library Assuming ObjectMapper is from the Jackson library, I think you should be able to create only one instance of it as it's safe to do so. Pro-tip: on that link, the developer of Jackson also suggests using ObjectReader/ObjectWriter if you are using Jackson 2.x. 
try-with-resources Since Java 7, you can use try-with-resources to safely and efficiently manage I/O resources, such as your JDBC-related resources. More specifically, you can take a look at this WebLogic blog article to better understand how you can use it for the Connection, Statement and ResultSet objects together. Variable scope Your flatJson List is declared quite early on, necessitating you to keep clear()-ing it for each iteration. You can instead consider creating a new List each time. Modeling JSON as a domain object (...?) This is just a thought, how about modeling the JSON as a domain object, so that you do less of addKeys() yourself, and perhaps just need a nice toCsvMap() implementation on the domain object to get the Map output you need? Of course, this very much depends on what you mean by 'growing by 30 data elements every other month'... are these elements just part of an array that your JSON library can easily output as a List? Or do you really mean the JSON payload mutates in different ways even between rows, such that there's no one coherent structure to map it as an object? SQL Server 2016? If you are using SQL Server 2016, it looks like you may also rely on it to convert your JSON data to rows and columns... again, per the disclaimer above, this depends on how its structure changes over time. Optimizing bottlenecks Last but not least, have you already tried profiling - regardless of using precise instrumentation frameworks, or just informally with a stopwatch - your application from the time it queries the database to the time the CSV file is generated? Can the database query be further optimized? Is there some inherent network latency somewhere that is making the code appear to work slower? Is writer.writeAsCSV(flatJson , filename) reasonably efficient? See below. Writing output Instead of doing a sub-optimal String concatenation using + in each iteration, consider using the newer Files.write(Path, Iterable, Charset, OpenOption) method.
You just need to map each LinkedHashMap element of your List to a String, and the method will iterate through them for you. In addition, it uses the OS-specific line separator, which may be preferable depending on your use case. Since you have 1300 columns, performing a Map.get(Object) twice to retrieve the value for each column, per row, is not going to be the fastest way to do so. Just hold on to that thought for a moment... Think of the children consumers! Actually, why is there this requirement to write such a 'sparse' CSV file, where there is never a complete row, and instead you are going to have blocks of values, and then blocks of emptiness depending on the product? I suppose the output will resemble something like the following, if we can sort the rows by product type and there are no other overlapping columns other than the ID and product type: ID,Product_type,dpi_1,dpi_2,dpi_3,travel_1,travel_2,travel_3,other_1,other_2,other_3 1,dpi,a,b,c,,,,,, 2,dpi,d,e,f,,,,,, 3,dpi,g,h,i,,,,,, 4,travel,,,,j,k,l,,, 5,travel,,,,m,n,o,,, 6,travel,,,,p,q,r,,, 7,other,,,,,,,s,t,u 8,other,,,,,,,v,w,x 9,other,,,,,,,y,z,? Would it not be better to create one CSV file per product type, so that the consumers of these data can fully process the product-type-specific file they require, instead of having to cherry-pick columns from a 40 MB file, which will likely be slower as well? Writing output (cont'd) Resuming from the earlier section, Java 8 has a Map.getOrDefault(Object, V) method that simplifies your approach of calling Map.get(Object) twice: // String value = map.get(header) == null ? "" : map.get(header).replace(",", ""); String value = map.getOrDefault(header, "").replace(",", ""); The conversion of a List<LinkedHashMap<String, String>> to a List<String> is relatively straightforward when you think of the approach as such: Create a map of the total columns you have, with elements mapping to themselves, and treat this as the zeroth row, i.e.
a single-element List<Map<String, String>>. Create a stream out of the zeroth row and your actual payload (\$1...n\$ rows), so that you can apply the common step of mapping each column header against all the \$n + 1\$ Maps and concatenating them as a String. Putting it altogether: private static List<String> flattenAll(List<LinkedHashMap<String, String>> input) { Set<String> columns = input.stream() .flatMap(v -> v.keySet().stream()) .collect(Collectors.toCollection(LinkedHashSet::new)); Map<String, String> header = columns.stream() .collect(Collectors.toMap(k -> k, v -> v)); return Stream.concat(Stream.of(header), input.stream()) .map(m -> columns.stream() .map(k -> m.getOrDefault(k, "").replace(",", "")) .collect(Collectors.joining(","))) .collect(Collectors.toList()); } How we get our columns is similar to your original approach but done with a stream-based approach, i.e. to flatMap() each Map.keySet() into a Stream<String>, before collect()-ing them into a LinkedHashSet.
{ "domain": "codereview.stackexchange", "id": 23300, "tags": "java, performance, json, sql-server, csv" }
Synchrotron radiation and Hawking radiation
Question: According to one answer I came across: "There is a myth, for which Hawking himself is responsible, that Hawking radiation is primarily made up of matter-antimatter pair annihilation." The simplest matter-antimatter pair I can think of is the electron-positron pair. From this, it can be inferred that Hawking radiation comes from annihilation between electrons and positrons. But if electrons are present in the black hole's magnetic field, it looks like synchrotron radiation may be present. So what happens when two such radiations are emitted? Answer: It is true that Hawking both proposed the virtual particle interpretation in the original paper, and immediately noted: It should be emphasized that these pictures of the mechanism responsible for the thermal emission and area decrease are heuristic only and should not be taken too literally. What actually happens in this theory is an interaction between the black hole and surrounding quantum fields (whose excitations are particles). This process is essentially due to the observer-dependence of the number of particles in quantum fields under strong acceleration and happens even for uncharged black holes that do not have any magnetic field. Actual astrophysical black holes are expected to be nearly uncharged (since they would attract opposite charges really well), but if they have an accretion disk this will cause strong magnetic fields from the outside to impinge on them, and their rapid rotation will then power up these fields even more. The resulting system is very complex but it is commonly theorized that there will be pair production due to strong fields and the presence of very energetic photons, partially due to the plasma being hot, partially because of synchrotron acceleration of charged particles. This is normal pair production without the subtle (and perhaps non-literal) aspects of Hawking radiation.
{ "domain": "physics.stackexchange", "id": 93264, "tags": "black-holes, radiation, hawking-radiation, synchrotron-radiation" }
Inner Product Spaces
Question: I am trying to reconcile the definition of Inner Product Spaces that I encountered in Mathematics with the one I recently came across in Physics. In particular, if $(,)$ denotes an inner product in the vector space $V$ over $F$: $(u + v, w) = (u, w) + (v, w) \text{ for all } u, v, w \in V$, $(\alpha v, w) = \alpha(v, w) \text{ for all } v, w \in V$ and $\alpha \in F$, $(v, w) = (w, v)^* \text{ for all } v, w \in V$, (* denotes complex conjugation) were some of the properties listed in my mathematics course. In physics, however, the inner product was said to be linear in the second argument and $(v,\sum(\lambda_i w_i)) = \sum(\lambda_i (v,w_i))$ where $v$ and $w_i$ are kets in Hilbert space and $\lambda_i$ are complex numbers. To me, these properties of an Inner Product are not compatible. If the first definition of inner product is correct, then I think $(v,\sum(\lambda_i w_i)) = \sum((\lambda_i)^* (v,w_i))$ where $^*$ denotes complex conjugation. Answer: To formalize the comments as an answer: The difference between requiring $$(\alpha u,v)=\alpha(u,v)\quad\text{ (mathematician's definition)}$$ and $$\langle u, \alpha v\rangle=\alpha\langle u,v\rangle\qquad\quad\,\,\text{ (physicist's definition)}$$ is purely one of convention, and the two definitions are equivalent as $(u,v)=\langle v,u\rangle$. There's no intrinsic reason to choose either, though if you work exclusively with one for long enough, you might come to regard the other as an abomination. In general, it is always advisable to keep an eye to which convention is being used. The physicist's definition does have the advantage that it extends well to Dirac notation, in the sense that matrix elements such as $\langle \phi|\hat{A}|\psi\rangle$ are linear in $\psi$, so that the state $\hat{A}|\psi\rangle$ corresponds to the operator-acting-on-a-vector notation $Av$. If the bracket were linear in $\phi$ then we'd have to make operators act to their left. 
This is again an OK convention but no one uses it.
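As a quick numeric sanity check of the equivalence (a numpy sketch, not part of the original answer): numpy's vdot conjugates its first argument, i.e. it implements the physicist's bracket, and swapping its arguments recovers the mathematician's convention.

```python
# numpy.vdot(u, v) = sum(conj(u) * v) is the physicist's bracket <u, v>,
# linear in the second slot. The mathematician's (u, v), linear in the
# first slot, is then just vdot(v, u).
import numpy as np

u = np.array([1 + 2j, 3 - 1j])
v = np.array([2 - 1j, 0 + 1j])
alpha = 0.5 + 1.5j

# Physicist's convention: <u, alpha v> = alpha <u, v>
assert np.isclose(np.vdot(u, alpha * v), alpha * np.vdot(u, v))

# Mathematician's convention: (alpha u, v) = alpha (u, v), with (u, v) := <v, u>
math_uv = np.vdot(v, u)
assert np.isclose(np.vdot(v, alpha * u), alpha * math_uv)

# Conjugate symmetry ties the two together: (u, v) = <u, v>*
assert np.isclose(math_uv, np.conj(np.vdot(u, v)))
print("both conventions agree up to conjugation")
```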
{ "domain": "physics.stackexchange", "id": 7618, "tags": "hilbert-space, conventions, notation, complex-numbers" }
Attenuation of reconstruction filter for Digital-to-Analogue (DAC) converter
Question: I am using a 12-bit DAC to produce signals between 10 kHz and 400 kHz, produced at a rate of 3.33 MSamples/s (Nyquist frequency of 1.665 MHz). I would like to create a "reconstruction filter" that will be placed after the DAC. My question is, how much attenuation should the reconstruction filter achieve at the Nyquist frequency of 1.665 MHz to avoid alias images? I am aware of the ~6 dB per bit heuristic that is commonly used, but I was wondering if there is a better approach to calculating this. Answer: Well, you cannot completely avoid images, you can only suppress them sufficiently. How much suppression is "sufficient" is completely up to your application! So, we can't tell you this. However, thanks to the fact that you're comfortably oversampling your 400 kHz, the first aliases would appear at 2.93 MHz (=lowest frequency in signal, i.e. -400 kHz, plus sampling rate). How sensitive is whatever will get that signal to energy at 2.93 MHz or higher? Assuming 50 dB attenuation totally suffices at 2.9 MHz, a simple split-into-two-stages, fourth order Butterworth filter would do: Designed with the analog devices filter design wizard Implemented as Sallen-Key active filter, that'd amount to four capacitors, four resistors and two relatively cheap opamps. Plus one capacitor and a high-valued resistive voltage divider to AC-couple in the signal, which, since it's got no DC component, can be shifted to an arbitrary voltage, so that you can work with a single supply voltage here. So the answer here is: well, no matter what you need, it's not very hard, might as well go for a pretty good filter. Assuming you want your passband to be very flat, you want a Butterworth filter, and if you choose that, you'll want an even order one, and second-order isn't going to be steep enough, so fourth order it is. You can do better, but unless you know why, you won't need to.
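To make the numbers above concrete, here is a small back-of-the-envelope check (a sketch, not from the original answer; the 450 kHz cutoff is an assumed value placed just above the passband): the lowest image sits at fs − f, and the stopband attenuation of an n-th order Butterworth low-pass follows 10·log10(1 + (f/fc)^(2n)).

```python
import math

fs, f_sig = 3.33e6, 400e3      # sample rate and highest signal frequency
f_image = fs - f_sig           # lowest alias image: 2.93 MHz

fc = 450e3                     # assumed Butterworth cutoff (hypothetical)
n = 4                          # filter order
# Magnitude response of an n-th order Butterworth low-pass:
# |H(f)|^2 = 1 / (1 + (f/fc)^(2n)), so the attenuation in dB is:
atten_db = 10 * math.log10(1 + (f_image / fc) ** (2 * n))
print(f"image at {f_image / 1e6:.2f} MHz, about {atten_db:.1f} dB down")
```

With that assumed cutoff, the fourth-order filter lands comfortably past the 50 dB figure quoted above.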
{ "domain": "dsp.stackexchange", "id": 10512, "tags": "filter-design, reconstruction, digital-to-analog" }
topic_tools transform from launch file?
Question: Is topic_tools transform incapable of being run from a launch file? Maybe it has a non-standard parser for getting the python imports, and can't handle __ arguments properly? <?xml version="1.0"?> <launch> <node name="radius" pkg="rostopic" type="rostopic" args="pub /radius std_msgs/Float32 'data: 1.0' -r 1" /> <node name="radius_to_diameter" pkg="topic_tools" type="transform" args="/radius /diameter std_msgs/Float32 'm.data * 2'" /> </launch> Fails with: process[radius-1]: started with pid [26144] process[radius_to_diameter-2]: started with pid [26150] usage: transform [-h] [-i MODULES [MODULES ...]] input output_topic output_type expression transform: error: unrecognized arguments: __name:=radius_to_diameter __log:=/home/lucasw/.ros/log/.../radius_to_diameter-2.log [radius_to_diameter-2] process has died [pid 26150, exit code 2, cmd /opt/ros/indigo/lib/topic_tools/transform /radius /diameter std_msgs/Float32 m.data * 2 __name:=radius_to_diameter __log:=/home/lucasw/.ros/log/.../radius_to_diameter-2.log]. While rosrun topic_tools transform /radius /diameter std_msgs/Float32 'm.data * 2' works as expected. Originally posted by lucasw on ROS Answers with karma: 8729 on 2015-07-21 Post score: 1 Answer: It looks it's missing a call to rospy.myargv to strip all the extra stuff before trying to parse the args. File an issue against ros_comm (or better yet, a pull request). Originally posted by Dan Lazewatsky with karma: 9115 on 2015-07-22 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by lucasw on 2015-07-22: I'll do that. Comment by lucasw on 2015-07-24: https://github.com/ros/ros_comm/pull/645 Comment by Kansai on 2021-04-14: so what is the correct way now? Comment by lucasw on 2021-09-26: @Kansai the launch file example in the question now works
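The gist of the fix is to strip the __name:=/__log:= remapping arguments before argparse sees them, which is what rospy.myargv does in ROS. The sketch below (hypothetical; the real transform script lives in topic_tools) emulates that filter so it runs without a ROS install.

```python
import argparse

def parse_transform_args(argv):
    # rospy.myargv() removes ROS remapping arguments like __name:=...;
    # emulated here by dropping anything containing ':='.
    filtered = [a for a in argv if ':=' not in a]
    parser = argparse.ArgumentParser(prog='transform')
    parser.add_argument('input')
    parser.add_argument('output_topic')
    parser.add_argument('output_type')
    parser.add_argument('expression')
    return parser.parse_args(filtered)

args = parse_transform_args([
    '/radius', '/diameter', 'std_msgs/Float32', 'm.data * 2',
    '__name:=radius_to_diameter', '__log:=/tmp/node.log',
])
print(args.expression)  # m.data * 2
```

Without the filtering step, argparse sees the remappings as extra positionals and exits with the "unrecognized arguments" error shown in the question.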
{ "domain": "robotics.stackexchange", "id": 22249, "tags": "ros, topic-tools" }
How can I get the energy levels from a Hamiltonian?
Question: I'm helping out in a research project at my school and was tasked with plotting the magnetic field dependency (when B is parallel to each individual axis) of the energy levels of the molecule with which we're working. I have the Hamiltonian from which I got all the eigenvalues (the computer did, it's a 216*216 matrix), but I'm not sure where to go from there. (I have also calculated the free energy axis by axis at different temperatures, not sure if this will be useful, but I thought it might) I have to admit I'm pretty lost here. Do you guys have any insight? Answer: Your Hamiltonian should depend on the magnetic field, so its eigenvalues also depend on the magnetic field. The energy levels you are looking for are those eigenvalues. The number of eigenvalues, i.e. energy levels, is determined by the basis you use; in your case there are 216 because your basis contains 216 vectors. Probably you only care for the occupied ones, which are the n lowest ones. n is the number of electrons you include in your calculation. The more basis vectors you use the more accurate your solution will be. You now need to calculate the interesting eigenvalues for the magnetic field values you want to investigate and plot these. They should depend continuously on the magnetic field.
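In code, the sweep-and-plot step might look like the following sketch (numpy; the 6×6 Hamiltonian here is a made-up stand-in for the real 216×216 matrix, and the Zeeman-like term B·Hz is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 6  # stand-in for the real 216x216 basis

A = rng.standard_normal((dim, dim))
H0 = (A + A.T) / 2                         # Hermitian field-free part (hypothetical)
Hz = np.diag(np.arange(dim, dtype=float))  # hypothetical Zeeman-like coupling

fields = np.linspace(0.0, 2.0, 21)
# eigvalsh returns the eigenvalues of each H(B) in ascending order
levels = np.array([np.linalg.eigvalsh(H0 + B * Hz) for B in fields])

print(levels.shape)  # (21, 6): one row of energy levels per field value
# To plot: one curve per column, e.g. plt.plot(fields, levels)
```

Repeating the sweep for B along each axis (with the corresponding coupling term) gives the axis-by-axis plots asked for in the question.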
{ "domain": "physics.stackexchange", "id": 25963, "tags": "energy" }
Histidine protonation
Question: In the histidine structure, full protonation gives a structure in which only the double-bonded nitrogen in the ring is protonated, not the single-bonded nitrogen. Why is that? Answer: Textbooks tend to report structures as relatively static, whereas advanced biochemistry texts explore structural tautomerism. In essence, the histidine imidazole sidechain may move its protons and electrons around when pH is roughly equal to pKa. Thus, the sidechain exists as two tautomers where the two nitrogens sort of "share" the double-bond character and the proton, like so: And it's even more accurate to depict that the double-bond character is like so:
{ "domain": "biology.stackexchange", "id": 7845, "tags": "biochemistry" }
Is the given unit conversion possible?
Question: I have to convert from Joule to Newton, then to a specific unit, but I don't know which two unit factors I have to choose (since I am not allowed to change $kg$) to get the final result: $$1\,{\rm J} = 1\,{\rm N\,m} = 10^6 \cdot \frac{\rm kg \cdot \_\,m^2}{ \_\,^2}$$ My main problem is to get the $10^6$, I tried many possible combinations but I always failed. Answer: I don't exactly understand what the point of this is, but my guess for the conversion would be: $ 1N = 1 \frac{kg\cdot m}{s^2}$, thus $$ 1J=1Nm=1\frac{kg\cdot m^2}{s^2}=10^6\frac{kg\cdot m^2}{10^6\cdot s^2} $$
{ "domain": "physics.stackexchange", "id": 39925, "tags": "homework-and-exercises, units, si-units, unit-conversion" }
Temperature and Heat question
Question: So I have a problem about calculating the specific heat of a metal. I'm using the philosophy "Heat gained = Heat lost". The problem is: When 50g of a metal at 280C is put into a calorimeter containing 400g of water at 25C, the final temperature of the system is 30C. What is the specific heat of the metal in J/kg·C? (Specific heat of water is 4186 J/kg·C.) Known variables: Mass(metal) = 0.050kg, Temp(metal) = 280C. Mass(water) = 0.400kg, Temp(water) = 25C, C(water) = 4186 J/kg·C. Temp(final) = 30C. Heat Gained = Heat lost so..... C(metal) * Mass(metal) * DeltaTemp(metal) = C(water) * Mass(water) * DeltaTemp(water) So I solved for C(metal) to get -> C(metal) = (C(water) * Mass(water)* DeltaTemp(water))/(Mass(metal) * DeltaTemp(metal)) If I plug the numbers in, I get -> C(metal) = (4186 * 0.4 * (30-25))/(0.05 * (30-280)) = 8372/(-12.5) = -669.76 J/kg·C The answer seems wrong to me and I'm not sure what I did wrong. I followed the example in the book, so I'm assuming that the heat that was in the calorimeter of water is equal to the heat lost AFTER the 280C metal is put in it. And I solved for the unknown variable. I looked at the table of Specific heats and none of them have such a low negative number (or any negative number at all). Thank you all for your help. Answer: I don't see anything wrong except the extra minus sign. The idea that heat gained equals heat lost is spot on, but both quantities need to be positive in this case. One way I like to think about this would be that the change in heat of the metal plus the change in heat of the water must add to zero. In this equation, "H_metal + H_water = 0", the sign of H_metal is negative as it is losing heat and H_water will be positive.
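Numerically, a quick check of the arithmetic above with both temperature changes taken as positive magnitudes, so the spurious minus sign disappears:

```python
c_water = 4186.0                 # J/(kg*C)
m_water, m_metal = 0.400, 0.050  # kg
dT_water = 30 - 25               # water warms by 5 C
dT_metal = 280 - 30              # metal cools by 250 C (magnitude)

# Heat gained by water = heat lost by metal, both taken as positive:
c_metal = (c_water * m_water * dT_water) / (m_metal * dT_metal)
print(round(c_metal, 2))  # 669.76 J/(kg*C)
```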
{ "domain": "physics.stackexchange", "id": 23590, "tags": "homework-and-exercises, thermodynamics, temperature" }
Different hominid species or simply mutated humans?
Question: I am not a biologist, however, I am curious as to how evolutionary biologists classify a newly discovered hominid fossil as a separate species. If bone structures alone are the criteria for classifying a hominid specimen as a separate species, how do we know that the different bone structures are not the result of a genetic disorder or a terrible mutation in a human? From what I understand, DNA specimens do not survive that long for consideration. Can anyone provide me with a layman explanation of the process of classification which seemingly doesn't involve genetic data? Answer: Hey, quite tricky to correlate but good logic. When a fossil is being examined, carbon dating is one of the methods used to examine its age. Also, as time passes, organic components of bones are slowly decomposed and get absorbed into the surroundings (into soil), and the entire mass either collapses under pressure or inorganic materials from its surroundings replace the empty spaces (the rest of the body degrades very quickly). Thus if the fossil preserves the exact skeletal structure, then it must contain inorganic compounds which resemble the surroundings. If not (it has inorganic materials that are the same as the inorganic materials of an actual bone), then the fossil is unfit for determining organismal structure, because it had collapsed due to external pressure. Now coming to your question.... for a mutation to take place, there is a certain time period with a certain probability of occurrence. Which means that for such a change that causes a gross phenotypic difference, it requires a certain number of generations and fixed progeny-coincidences. Thus we can compare the age of the fossil and the approximate probability of a mutation which caused phenotypic changes in a normal human to look like the one determined by the fossil. By this we can calculate the probability of whether the fossil is a legit "non-sapien hominid" or just a mutated human. And often these probabilities are very low.
So we finally identify the fossil to be of a certain hominid with a certain level of similarity to the modern day human. Hope you understood the process. References: https://www.sciencedirect.com/topics/biochemistry-genetics-and-molecular-biology/fossil-hominid#:~:text=Most%20early%20fossil%20hominids%20appear,9.10). https://byjus.com/physics/types-of-fossils/ And for further information on fossilisation and chemical nature of fossils, you can check out this pdf: https://egyankosh.ac.in/bitstream/123456789/69611/1/Unit-9.pdf
{ "domain": "biology.stackexchange", "id": 12459, "tags": "human-genetics, taxonomy, human-evolution" }
Minimal temperature achievable by vanilla Peltier element?
Question: I wonder, are there any fundamental issues leading to reduced performance of Peltier elements at cryogenic temperatures (-100C and lower)? What is the theoretical/practical minimum temperature achievable by a cascade of Peltier elements, provided that each next element has about 3 times less power than the previous one in the cascade, so that they are not overwhelmed by self-heating? Let's say the first element is water-cooled down to 20C when dissipating 150W. Update: After extensive tests, I've found out that in any setup I cannot get below -19C using any number or combination of Chinese (r) Peltier elements (I've tried a lot of different ones in different combinations). Answer: I do not know of any fundamental minimum for Peltier element operation temperature. However, there are serious technical issues: as long as the Peltier effect relies on an interaction between electrons and phonons, there should be enough phonons to interact with. Decreasing temperature dramatically (though quantitatively, not qualitatively) changes the effectiveness of the elements. As long as heat pumping should overcome the heat produced by the electron current which produces this pumping, going to low temperatures is a challenging task. To sound more scientific, I've found this relatively new paper where the Peltier effect is discussed for a rather standard system and where curves at room and liquid nitrogen temperature (which is low by human standards but pretty warm from a cryogenic point of view) and its effectiveness may be found (see Fig. 3). As can be seen from the curves, a Peltier element is able to give a $\Delta T$ of around a few degrees at nitrogen temperature. So, it still works at 77K but is definitely far less effective than at room temperature. With the numbers given on the graph I can hardly imagine a cascade which will reach 77K starting from room temperature. In this paper the authors claim that the effect may be observed at 6K, but the numbers they give show that this effect can hardly be used in practice.
To conclude, it seems there is no definite theoretical limit, but the practical limit is around -100C.
{ "domain": "physics.stackexchange", "id": 8182, "tags": "thermodynamics" }
Understanding LSTM input shape for keras
Question: I am learning about the LSTM network. The input needs to be 3D. So I have a CSV file which has 9999 rows with one feature only. So it is only one file. So usually it is (9999,1); then I reshape with 20 time steps: timesteps = 20 dim = data.shape[1] data.reshape(len(data),timesteps,dim) but I am getting the following error ValueError: cannot reshape array of size 9999 into shape (9999,20,1) and the input in LSTM model.add(LSTM(50,input_shape=(timesteps,dim),return_sequences=True, activation="sigmoid")) Answer: (9999,1) has 9999*1 elements = 9999. However, (9999,20,1) would have 9999*20*1 elements, which are not available. Break your data into a batch/sequence length of, say, 99. Then reshape it into (101,99,1).
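A sketch of that reshape with numpy (the toy data here is a stand-in for the single CSV column):

```python
import numpy as np

data = np.arange(9999, dtype=float).reshape(9999, 1)  # stand-in for the CSV feature

seq_len = 99
n_seqs = len(data) // seq_len            # 9999 / 99 = 101, no remainder
X = data[: n_seqs * seq_len].reshape(n_seqs, seq_len, 1)
print(X.shape)  # (101, 99, 1)

# The matching Keras layer would then be, e.g.:
# model.add(LSTM(50, input_shape=(seq_len, 1), return_sequences=True))
```

The key point: the total element count must be preserved by reshape, which is why (9999, 1) can become (101, 99, 1) but not (9999, 20, 1).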
{ "domain": "datascience.stackexchange", "id": 4321, "tags": "keras, tensorflow, lstm" }
Why should the Standard Model be renormalizable?
Question: Effective theories like Little Higgs models or the Nambu-Jona-Lasinio model are non-renormalizable and there is no problem with it, since an effective theory does not need to be renormalizable. These theories are valid up to a scale $\Lambda$ (ultraviolet cutoff); beyond this scale an effective theory needs a UV completion, a more "general" theory. The Standard Model, as an effective theory of particle physics, needs a more general theory that addresses the phenomena beyond the Standard Model (a UV completion?). So, why should the Standard Model be renormalizable? Answer: The short answer is that it doesn't have to be, and it probably isn't. The modern way to understand any quantum field theory is as an effective field theory. The theory includes all renormalizable (relevant and marginal) operators, which give the largest contribution to any low energy process. When you are interested in either high precision or high energy processes, you have to systematically include non-renormalizable terms as well, which come from some more complete theory. Back in the days when the standard model was constructed, people did not have a good appreciation of effective field theories, and thus renormalizability was imposed as a deep and not completely understood principle. This is one of the difficulties in studying QFT: it has a long history including ideas that were superseded (plenty of other examples: relativistic wave equations, second quantization, and a whole bunch of misconceptions about the meaning of renormalization). But now we know that any QFT, including the standard model, is expected to have these higher dimensional operators. By measuring their effects you get some clue as to the high energy scale at which the standard model breaks down. So far, it looks like a really high scale.
{ "domain": "physics.stackexchange", "id": 1577, "tags": "quantum-field-theory, standard-model, renormalization, effective-field-theory, non-perturbative" }
Can you have a charge of 0 C?
Question: My textbook says that there are only two types of charges: positive and negative. Then would it be correct to say that the neutron (for instance) has charge $0\ \rm C$? Answer: You can say that an object has a velocity of $0$ m/s or you can say that it is stationary. You can say that an object has a mass of $0$ kg or you can say that it is massless. Similarly, you can say that an object has a charge of $0$ coulombs or that it has no charge - either choice is correct.
{ "domain": "physics.stackexchange", "id": 98199, "tags": "charge, terminology" }
What is the origin of dot notation?
Question: In object-oriented programming, dot notation is used when accessing the properties or methods of a class: Dog dog print dog.name >> "Fido" dog.walk() >> Walking the dog now... What is the origin of that syntax and why the . to notate it? It must date to some early object-oriented language, but I couldn't find mention of it anywhere. Answer: In [1] (authored by one of the co-creators of Simula), there is a suggestion that Simula 67 may have been the first to use this dot notation. Given that Simula is widely credited for being the first OO language, it may be tricky to find an earlier example specifically in an OO context. EDIT: On DiscreteLizard's suggestion in comments, I took a peek at the use of this dot notation for specifying fields in a record. As it turns out, according to this specification of the PL/I language for the IBM System/360 from July 1965, the dot notation was indeed used to identify fields within structures: A qualified name takes the form: identifier {. identifier} ... Examples: 1. A program may contain the structures: DECLARE 1 CARDIN, 2 PARTNO, 3 DESCRIPTION, 2 PRICE; DECLARE 1 CARDOUT, 2 PARTNO, 2 DESCRIPTION, 2 PRICE; Elements are then referred to as: CARDIN.PARTNO CARDOUT.PARTNO CARDIN.PRICE Dahl, Ole-Johan. "The Birth of Object Orientation: the Simula Languages." From Object-Orientation to Formal Methods. Springer, Berlin, Heidelberg, 2004. 15-25.
{ "domain": "cs.stackexchange", "id": 10952, "tags": "programming-languages, history, object-oriented" }
sorting a map by converting it to vector
Question: I have a map that I want to print out sorted by value, so I convert it to a vector and sort the vector. Is this code correct? #include <map> #include <iostream> #include <vector> #include <algorithm> #include <utility> int main() { std::map<char, int> freq; std::string text; std::getline(std::cin, text); std::vector<std::pair<char, int>> items; for(auto & ch: text) freq[ch]++; for(auto [key, value]: freq) items.push_back(std::make_pair(key, value)); std::sort(items.begin(), items.end(), [](auto a, auto b) { return a.second > b.second;}); for(auto [key, value]: items) std::cout << key << " " << value << std::endl; return 0; } Answer: The code is correct. However, I still have some recommendations: Sort the includes, so you can easily spot recurring/missing ones, and add the missing #include <string> for std::string and std::getline: #include <algorithm> #include <iostream> #include <map> #include <string> #include <utility> #include <vector> You do not need to run the loop; you can simply pass the map's iterator range to the std::vector constructor: std::vector<std::pair<char, int>> items(freq.begin(), freq.end()); (Note that class template argument deduction would deduce the element type as std::pair<const char, int>, whose elements std::sort cannot swap, so the element type has to be spelled out here.) The comparison function takes the arguments by copy, which is not optimal. Rather use const auto&: [](const auto& a, const auto& b) { return a.second > b.second; } Note that std::sort may change the ordering of elements with equal value. If you want those elements with equal frequency to appear in the same order as in the map you would need std::stable_sort. However, keep in mind that this requires additional resources in memory and compute time. You are using a std::map, which is an ordered container. If you are only interested in the frequencies then a std::unordered_map will generally offer better performance, although for a simple alphabet this will most likely be negligible.
{ "domain": "codereview.stackexchange", "id": 32745, "tags": "c++, sorting, hash-map" }
Construct URLs out of filenames of zero-byte images and open them in Firefox for re-downloading
Question: It's a problem with FlashGot: it sometimes downloads zero-byte files. I think it's because I didn't use something like DownThemAll! to limit the number of concurrent downloads, but I wrote this script anyway to solve my problem. The script basically sets up a RunspacePool and, in each of the spawned threads, checks whether either of the constructed Danbooru image URLs is valid and appends it to a list that becomes the input to Firefox. I guess the main problem I encountered was a scoping issue. I cannot use custom functions inside a script block unless the function is defined in there, which is what I did, or its definition is passed as a parameter to the script block and sourced, which is ugly. I would be interested in reviews that point out any cargo cult programming, performance quirks and/or violations of "best practices". I'm still a beginner so I'd appreciate it more if feedback comes with working code rather than hand-waving. <# .SYNOPSIS Opens the Danbooru URLs of zero-byte images in Firefox for re-downloading by FlashGot. 
#> #Requires -Version 3.0 #Requires -Modules Microsoft.PowerShell.Utility, Microsoft.PowerShell.Management, CimCmdlets Set-StrictMode -Version Latest $PrepareUrl = { param ( [string]$domain, [string]$file ) begin { function Get-UrlStatusCode([string]$Url) { try { [int](Invoke-WebRequest -Uri $Url -UseBasicParsing -DisableKeepAlive -Method Head).StatusCode } catch [Net.WebException] { [int]$_.Exception.Response.StatusCode } } } process { $url = "$domain/data/$file" if ((Get-UrlStatusCode -Url $url) -ne 200) { $url = "$domain/cached/data/$file" if ((Get-UrlStatusCode -Url $url) -ne 200) { $url = $null } } Write-Output $url } } #region Setting Up Jobs $domain = 'https://danbooru.donmai.us' $sourceDir = 'C:\fakepath' Write-Progress -Activity 'Getting the filenames' -Status "Searching under $sourceDir" -SecondsRemaining -1 filter isEmptyImage { if ($_.PSIsContainer -eq $false -and $_.Length -eq 0 -and $_.Name -match '__.+\.(jpg|png|gif)') { $_ } } $emptyFiles = Get-ChildItem -Path $sourceDir -Recurse | isEmptyImage | Select-Object -ExpandProperty Name $NumberOfLogicalProcessors = (Get-CimInstance Win32_Processor).NumberOfLogicalProcessors $rp = [runspacefactory]::CreateRunspacePool(1, 2 * $NumberOfLogicalProcessors) $rp.Open() $jobs = New-Object System.Collections.Generic.List[System.Object] foreach ($file in $emptyFiles) { $job = [powershell]::Create().AddScript($PrepareUrl).AddArgument($domain).AddArgument($file) $job.RunspacePool = $rp $jobs.Add( (New-Object PSObject -Property @{ Job = $job Result = $job.BeginInvoke() }) ) } #endregion #region Collecting Jobs $urlList = New-Object System.Collections.Generic.List[System.Object] $jobCount = $jobs.Count while ($jobs.Count -gt 0) { $WriteProgressParams = @{ Activity = 'Waiting for jobs to complete' Status = "$($jobs.Count) jobs remaining" PercentComplete = (($jobCount - $jobs.Count) / $jobCount * 100) } Write-Progress @WriteProgressParams # splatting Start-Sleep -Milliseconds 50 foreach ($job in $($jobs | Where-Object { 
$_.Result.IsCompleted -eq $true })) { $result = [string]$job.Job.EndInvoke($job.Result) if ($result) { $urlList.Add($result) } $job.Job.Dispose() [void]$jobs.Remove($job) } } #endregion if ($urlList) { Start-Process -FilePath 'C:\Program Files\Mozilla Firefox\firefox.exe' -ArgumentList $urlList } $rp.Close() $rp.Dispose() Set-StrictMode -Off Answer: I don't see anything really wrong with your script. It looks quite functional, and I don't see anything superfluous, though it is very verbose. If it weren't for the fact that I don't see a ; at the end of every line I'd assume you're more accustomed to C# than PowerShell from how structured your script is, not that there's anything wrong with that at all. All that said, if it were me I think I'd have done a couple of things differently. Nested If statements are hard to read in my opinion. I would prefer to use a Switch statement instead to define cases. process { Switch ($true){ {(Get-UrlStatusCode -Url "$domain/data/$file") -eq 200} {"$domain/data/$file";continue} {(Get-UrlStatusCode -Url "$domain/cached/data/$file") -eq 200} {"$domain/cached/data/$file";continue} default {$null} } } You may note that I output the strings directly in the case scriptblocks; that is because Write-Output is implied, and everything not otherwise directed (such as capturing it in a variable, or using Write-Host or Out-File which redirect the output to the screen or a file respectively) is returned from a function to the pipeline. For your Get-ChildItem command I would suggest using the -File parameter. That way the FileSystem provider only returns files to PowerShell and you don't have to filter out folders later. This will help speed things up in general, and also simplifies your filter. Kudos on using a filter by the way, almost nobody does it and it's really underappreciated IMHO. Along those same lines I would strongly suggest letting the provider filter files as well using -Include, then passing it to your filter. 
It would look something like this: filter isEmptyImage { if ($_.Length -eq 0 -and $_.Name -match '__.+\.(jpg|png|gif)') { $_ } } $emptyFiles = Get-ChildItem -Path $sourceDir -Recurse -Include '*.jpg','*.png','*.gif' -File | isEmptyImage | Select-Object -ExpandProperty Name This way the File System provider only returns files that end in .jpg, .png, and .gif, and PowerShell has less to sort through. Lastly, PowerShell is pretty bad about adding and removing things from arrays. It rebuilds its array each time, so I think I would avoid that by capturing the jobs all at once into a list, and then changing the While clause to be a little more intrusive. $jobs = [System.Collections.Generic.List[object]](foreach ($file in $emptyFiles) { $job = [powershell]::Create().AddScript($PrepareUrl).AddParameters(@{domain=$domain;file=$file}) $job.RunspacePool = $rp [PSCustomObject]@{ Job = $job Result = $job.BeginInvoke() } }) Or if you want to get fancy you could simply do: $jobs = [System.Collections.Generic.List[object]]($emptyFiles | %{ $ps = [powershell]::Create().AddScript($PrepareUrl).AddParameters(@{domain=$domain;file=$_}); $ps.RunspacePool = $rp; [PSCustomObject]@{ Job = $ps; Result = $ps.BeginInvoke() } }) Then we change over to a Do/While loop instead of just a While loop: $jobCount = $jobs.Count $urlList = Do { $finished = @($jobs | Where-Object { $_.Result.IsCompleted }) $WriteProgressParams = @{ Activity = 'Waiting for jobs to complete' Status = "$($jobs.Count) jobs remaining" PercentComplete = (($jobCount - $jobs.Count) / $jobCount * 100) } Write-Progress @WriteProgressParams # splatting Start-Sleep -Milliseconds 50 foreach ($job in $finished) { [string]$job.Job.EndInvoke($job.Result)|?{$_} [void]$job.Job.Dispose() [void]$jobs.Remove($job) } } while ($jobs.Count -gt 0) Those are my suggestions, take'em or leave'em.
{ "domain": "codereview.stackexchange", "id": 28263, "tags": "asynchronous, powershell, webdriver" }
How does faster-than-light travel create time travel, violating causality?
Question: Saw a question about faster than light travel... I still have the same question though none of the answers offered any resolution for me. It is so summarily assumed by all physicists and commentaries that exceeding the speed of light would turn back the clock. I can't see the relation. Doubling the amount of any speed halves the time taken to travel a given distance. Keep doubling the speed and that time is halved (or otherwise divided). Divide any quantity (time in this case) and you always end up with a fraction of it but never zero and certainly never a negative amount as would be the case for the causality conflict. So it seems to me that whatever speed one attains, there is always a positive time element in the travel no matter how tiny!! The speed of light is only unique to me in that it is the fastest observed speed but is otherwise just another speed quantity set by nature (just like the speed of sound etc) could it be that other elements in nature are travelling faster than light but we lack the means to detect or measure them (like the rebellious neutrino)? I also don't understand time as an independent element that can be slowed sped up etc. It seems to me that time is simply a relative measure of the ever-changing state of matter relative to other states of matter. If every thing in the universe stopped- that is all state of matter everywhere frozen, all electrons frozen in place etc wouldn't we observe that time had stopped? Isn't it therefore our observation of the changing state of matter around us that gives the perception (perhaps illusion) of time? I can therefore only understand time as a subjective sense of changing states relative to an observer! It should be the rate of change of these states that slow down or speed up (in relation to the observer or instrument) and not the universal rate of change or universal time that changes! 
It would also debunk any notion of time travel, as it would involve the manipulation of every particle in the universe to a previous of future state... Disclaimer.. I hate calculations, stink at them and have no idea what mathematical formulas are used to arrive at the accepted conclusions so I'm not trying to dispute any findings etc by the experts, just trying to align my lay understanding to their conclusions. Answer: Disclaimer.. I hate calculations, stink at them and have no idea what mathematical formulas are used to arrive at the accepted conclusions so I'm not trying to dispute any findings etc by the experts, just trying to align my lay understanding to their conclusions. Main thing is, all these strange theories come up only after scientists try to match experimental data with "mathematical formulas". Which is why plain intuition won't get you too far. I also don't understand time as an independent element that can be slowed sped up etc. It seems to me that time is simply a relative measure of the ever-changing state of matter relative to other states of matter. There are some phenomena which take fixed amounts of time in a given frame. You can always use these as benchmark "clocks". Time is a relative measure, but in more ways than one. Other than what you say, time is also relative to the observer, and there is no "absolute" time. Time isn't being "slowed down", really -- just that some events that take a certain amount of time appear to take a different amount of time when viewed from a different reference frame. If every thing in the universe stopped- that is all state of matter everywhere frozen, all electrons frozen in place etc wouldn't we observe that time had stopped? Isn't it therefore our observation of the changing state of matter around us that gives the perception (perhaps illusion) of time? Yes, completely true. That changes nothing.. I can therefore only understand time as a subjective sense of changing states relative to an observer! 
It should be the rate of change of these states that slow down or speed up (in relation to the observer or instrument) and not the universal rate of change or universal time that changes! It would also debunk any notion of time travel, as it would involve the manipulation of every particle in the universe to a previous of future state... Except that there is no "universal timeline". The minute we talk about relativity, we discard any notions of a "special" reference frame. Everything is derived assuming that all reference frames moving at constant velocities are equivalent--thus there is no frame which you can point out and say "this is the Universe's record of events, it is more correct than the others". It's the intuitive way of looking at it, but here intuition fails us. We just can't look at it this way because physics does not allow such a privileged viewpoint. With that in mind, we can address your final issue: Doubling the amount of any speed halves the time taken to travel a given distance. Keep doubling the speed and that time is halved (or otherwise divided). Divide any quantity (time in this case) and you always end up with a fraction of it but never zero and certainly never a negative amount as would be the case for the causality conflict. So it seems to me that whatever speed one attains, there is always a positive time element in the travel no matter how tiny!! You've got to remember that we have multiple reference frames here. The time we're talking about isn't the time in "speed=dist/time". It's a different time interval we're measuring. Let's look at a spaceship travelling at some speed. In the spaceship, we have a clock. Now, for a viewer outside the spaceship, he will perceive that the clock is going slow. How? He can send light pulses to the clock to measure the motion of the hands of the clock. Light travels at a finite speed, so he must recompensate for that. 
After doing that, he still gets the result that the clock is slower from his point of view. If the velocity were increased, it gets even slower -- with the limiting condition that if velocity reaches $c$, then the clock does not tick at all from the outside viewer's point of view. By this logic, if speed is increased further, the clock ought to start ticking backwards. Note that speed is still (distance travelled by spaceship)/(time taken by ship). The time we're dealing with is the time taken between events generated by a "clock".
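The "clock slows down, stops, then would have to tick backwards" argument can be made quantitative with the time-dilation formula of special relativity, $$\Delta t_{\text{observer}} = \frac{\Delta t_{\text{clock}}}{\sqrt{1 - v^2/c^2}}\ ,$$ where $\Delta t_{\text{clock}}$ is the interval between two ticks in the clock's own frame. As $v \to c$ the observed interval diverges (the clock appears frozen), and for $v > c$ the square root becomes imaginary rather than negative -- so the causality problems with superluminal travel really come from the relativity of simultaneity (the time ordering of spacelike-separated events is frame dependent), not from naively continuing this formula past $c$.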
{ "domain": "physics.stackexchange", "id": 51391, "tags": "special-relativity, time" }
Fuerte turtlebot simulator Ubuntu 12.04 precise
Question: Hi, I am having a hard time running the turtlebot simulator with ros-fuerte on Ubuntu precise 12.04. I used to run the simulator without any problems with Ubuntu Lucid and ros-electric. It seems to be an issue with the gazebo server and the boost libraries. More specifically, after issuing: roslaunch turtlebot_gazebo turtlebot_empty_world.launch I get the following error: -- waiting for service spawn_urdf_model gzserver: /usr/include/boost/smart_ptr/shared_ptr.hpp:418: T* boost::shared_ptr::operator->() const [with T = urdf::Inertial]: Assertion `px != 0' failed. Service call failed: transport error completing service call: unable to receive data from sender, check sender's logs for details Aborted (core dumped) I have observed the same behavior on 3 different 12.04 machines (Ububtu, lubuntu, Xubuntu). Is this a known problem for fuerte-precise turtlebot simulator or am I doing sth wrong? Note that stripping down the turtlebot urdf to just the create base seems to work - but that's definitely not a solution. Originally posted by Savvas on ROS Answers with karma: 43 on 2012-07-10 Post score: 4 Answer: I noticed that in the turtlebot_description/urdf/turtlebot_body.urdf.xacro file they had commented out this: <inertial> <mass value="0.0001" /> <origin xyz="0 0 0" /> <inertia ixx="0.0001" ixy="0.0" ixz="0.0" iyy="0.0001" iyz="0.0" izz="0.0001" /> </inertial> After removing the commented out part, the turtlebot was spawned correctly and all of the topics seem to be published as well. The only issue is that while running you will be bombarded by this warning: Warning [RaySensor.cc:206] ranges not constructed yet (zero sized) But other than that it seems to run correctly. Originally posted by mgrimson with karma: 36 on 2012-07-12 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Savvas on 2012-07-12: Thank you very much for this. Indeed this fixed the problem. 
Comment by bit-pirate on 2012-07-24: This seems to be a more general problem. I guess, this part was commented out, because otherwise KDL complains about not being able to handle inertia in the first link. Leaving the inertia out was working for Electric, but now it crashes with the new Gazebo.
{ "domain": "robotics.stackexchange", "id": 10139, "tags": "simulation, turtlebot, ubuntu, ros-fuerte, ubuntu-precise" }
Why can early embryos survive freezing?
Question: According to this Blastocyst & Embryo Freezing in IVF Embryos can be frozen at the pronuclear stage (one cell), or at any stage after that up to and including the blastocyst stage (5-7 days after fertilization). Given that freezing normally destroys mammalian cells1 why can early stage embryos survive being frozen? Reference: 1: Shier, W. T. (1988). Studies on the mechanisms of mammalian cell killing by a freeze-thaw cycle: conditions that prevent cell killing using nucleated freezing. Cryobiology, 25(2), 110-120. Answer: The reason why cells (and tissues) die during freezing is that ice crystals rupture the cell membranes1,2, so to me it seems the question is actually why early embryos can survive this process. The answer depends on the technique used, but it comes down to size. Small tissues (or cells) have a large surface area to volume ratio. In techniques that use cryoprotectants (compounds that suppress ice crystal formation), the small size allows efficient replacement of water by cryoprotectant prior to freezing (and the reverse after thawing) so that the cells don't have time to die1,2. In flash freezing (vitrification), the small size allows the entire tissue to be frozen instantaneously, which also suppresses ice crystal formation1,2. This Scientific American article is very relevant as well. References: 1: Konc, J., Kanyó, K., Kriston, R., Somoskői, B., & Cseh, S. (2014). Cryopreservation of embryos and oocytes in human assisted reproduction. BioMed research international, 2014. 2: Loutradi, K. E., Kolibianakis, E. M., Venetis, C. A., Papanikolaou, E. G., Pados, G., Bontis, I., & Tarlatzis, B. C. (2008). Cryopreservation of human embryos by vitrification or slow freezing: a systematic review and meta-analysis. Fertility and sterility, 90(1), 186-193.
{ "domain": "biology.stackexchange", "id": 9958, "tags": "human-biology, embryology" }
Sick TIM571 is transmitting too few data points
Question: Hello, I'm working with the Sick TIM571 lidar with ROS-Kinetic on Ubuntu and I'm using the Sick-Tim-ROS-Package. The connection via ethernet works fine and I'm getting distance data from the lidar. The TIM571 has an angular resolution of 0.333° with an opening angle of 270°. So the lidar should deliver 810 distances. The problem is that I'm only getting 270 distances from the lidar. In the Sopas Tool on Windows the angular resolution is set to 0.333° and I can't change that. I would be very pleased if someone could help me with this issue. Originally posted by Tobias36 on ROS Answers with karma: 1 on 2018-03-22 Post score: 0 Original comments Comment by mallain on 2019-08-27: I am experiencing the same issue with both the older sick_tim package and the newer SICK AG supported sick_scan driver package. Please see this GitHub issue. Update: see my post below for resolution. Answer: I experienced the same issue with the default configuration on a SICK TiM571, and discovered that I had the median filter option enabled in SOPAS. After disabling this feature and permanently saving to the device, I am now receiving full (0.333 deg) resolution samples from the sick_scan driver. Credit to SICK support for helping me diagnose and solve this. Mitchell Originally posted by mallain with karma: 16 on 2019-09-01 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 30413, "tags": "slam, navigation, ros-kinetic, ubuntu, laserscanner" }
Explicit construction for unitary extensions of completely positive and trace preserving (CPTP) maps?
Question: Given a completely positive and trace preserving map $\Phi : \textrm{L}(\mathcal{H})\to\textrm{L}(\mathcal{G})$, it is clear by the Kraus representation theorem that there exist $A_k \in \text{L}(\mathcal{H}, \mathcal{G})$ such that $\Phi(\rho) = \sum_k A_k \rho A_k^\dagger$ for all density operators $\rho$ on $\mathcal{H}$. (I'll consider the special case $\mathcal{H} = \mathcal{G}$ for simplicity.) If we then use the system+environment model to express this action as $\Phi(\rho)=\text{Tr}_{\mathcal{H}_E} (Y\rho Y^\dagger)$ for an isometry $Y$ from $\mathcal{H}$ to $\mathcal{H}\otimes\mathcal{H}_E$, where $\mathcal{H}_E$ is an ancilla modelling the environment, what is an explicit construction for a unitary $U$ that has the same action on inputs of the form $\rho\otimes\left|0\right>\left<0\right|_E$? That is, how can I construct an explicit dilation of the map to a unitary acting on a larger space? I understand that this is possible by Stinespring's dilation theorem, but I have had much less success actually constructing an explicit form for the dilated unitary. Answer: The isometry $Y:\mathcal H\rightarrow \mathcal H_E \otimes \mathcal H$ is $$ Y=\left(\begin{array}{c} A_1 \\ \vdots \\ A_K \end{array}\right) = \sum_k |k\rangle \otimes A_k\ . $$ Clearly, $$ \mathrm{tr}_E(Y\rho Y^\dagger) = \sum_{kl} \mathrm{tr}(|k\rangle\langle l|) A_k\rho A_l^\dagger = \sum_k A_k \rho A_k^\dagger \ , $$ as desired. Moreover, $Y$ is an isometry, $Y^\dagger Y=I$, i.e., its columns are orthonormal, which follows from the condition $\sum_k A_k^\dagger A_k=I$ (i.e., the map is trace preserving). Now if you want to obtain a unitary which acts on $|0\rangle\langle 0|\otimes \rho$ the same way $Y$ acts on $\rho$, you have to extend the matrix $Y$ to a unitary by adding orthogonal column vectors. For instance, you can pick linearly independent vectors from your favorite basis and orthonormalize. 
(Clearly, $U$ is highly non-unique, as its action on environment states other than $|0\rangle$ is not well defined.)
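A numerical sketch of this recipe (the amplitude-damping channel is an assumed example; NumPy's QR factorization does the orthonormalization, following the answer's convention of putting the environment in the first tensor factor):

```python
import numpy as np

# Example Kraus operators: amplitude-damping channel (an assumed example)
g = 0.3
A = [np.array([[1, 0], [0, np.sqrt(1 - g)]]),
     np.array([[0, np.sqrt(g)], [0, 0]])]

# Isometry Y = sum_k |k>_E (x) A_k, i.e. the Kraus operators stacked as blocks
Y = np.vstack(A)                                   # shape (4, 2)

# Extend Y to a unitary: QR-factorize [Y | random columns]. Since Y's columns
# are already orthonormal, Q's first columns equal them up to a sign, which
# the diagonal of R fixes.
rng = np.random.default_rng(0)
Q, R = np.linalg.qr(np.hstack([Y, rng.standard_normal((4, 2))]))
U = Q @ np.diag(np.sign(np.diag(R)))

# Check: U is unitary, agrees with Y on |0><0|_E (x) rho, and gives the channel
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
state = np.kron(np.diag([1.0, 0.0]), rho)          # |0><0|_E (x) rho, E first
out = U @ state @ U.conj().T
reduced = np.einsum("eset->st", out.reshape(2, 2, 2, 2))    # trace over E
channel = sum(Ak @ rho @ Ak.conj().T for Ak in A)
print(np.allclose(U.conj().T @ U, np.eye(4)), np.allclose(reduced, channel))
```

The random columns make the point about non-uniqueness concrete: any choice that keeps the stacked matrix full rank yields a valid extension after orthonormalization.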
{ "domain": "physics.stackexchange", "id": 49450, "tags": "quantum-mechanics, research-level, quantum-information" }
How would it be possible to measure the day's length for the past?
Question: According to this recent article [1], Earth spins faster now than in the past, switching its trend. Besides the main focus of the above-mentioned article, what captured my attention was the knowledge of the day's length 1.4 billions years ago. How were scientists able to measure it? [1] https://www.engadget.com/earth-rotation-speed-negative-leap-second-183324723.html Answer: How would be possible to measure days length for the past? Tidal rhythmites and rock formation dating. Some rock formations show banding caused by, for example, springtime floods that bring in mud, monthly neap and spring tides, and daily tides. Rock formations with multiple types of banding can show how many days were in a year when the formation formed. There are multiple techniques for dating a rock formation. For example, if the rock formation contains zircons they can be used for dating. There are many other dating techniques. The combination of rock dating and the estimates of length of day lets scientists determine how long days were hundreds of millions of years ago, or even further into the past.
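The band-counting argument turns into a day length with one line of arithmetic. As an illustrative sketch (the band count below is an assumed example, not a measured value), if a dated formation records about 465 daily laminae per annual cycle and the length of the year is taken as essentially unchanged:

```python
# Hours in a modern year; the year's duration is set by Earth's orbit,
# which tidal braking of the spin barely affects
hours_per_year = 365.25 * 24            # ~8766 h

days_per_ancient_year = 465             # assumed band count, for illustration
ancient_day_hours = hours_per_year / days_per_ancient_year
print(round(ancient_day_hours, 1))      # a day of roughly 19 hours
```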
{ "domain": "physics.stackexchange", "id": 90047, "tags": "time, earth" }
What is the energy`correction or amplitude correction for a Tukey window?
Question: This page: https://community.sw.siemens.com/s/article/window-correction-factors lists correction factors for popular windows. Is there a correction factor for the Tukey window, depending on the Tukey window's taper factor? I have not found one yet. Answer: $w$ is the window function, $N$ is the length of the window. Amplitude correction: $$\text{ACF} = \cfrac{N}{\sum_{n=0}^{N-1}w[n]} = \cfrac{1}{\text{mean(w)}} $$ Energy correction: $$\text{ECF} = \sqrt{\cfrac{N}{\sum_{n=0}^{N-1} w[n]^2}} = \cfrac{1}{\text{rms(w)}}$$ In matlab: % Tukey N = 2^16; acf = 1/mean(tukeywin(N)); % = 1.3334 ecf = 1/rms(tukeywin(N)); % = 1.2061
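The same check can be done without MATLAB; here is a sketch in Python that builds a Tukey window from its definition (taper fraction alpha, using the MATLAB default of 0.5) and evaluates both factors:

```python
import numpy as np

def tukey(N, alpha=0.5):
    """Tukey (tapered-cosine) window: flat top, with cosine tapers covering a
    total fraction alpha of the window, split between the two edges."""
    if alpha <= 0:
        return np.ones(N)
    n = np.arange(N)
    w = np.ones(N)
    edge = int(np.floor(alpha * (N - 1) / 2.0))
    ramp = 0.5 * (1 + np.cos(np.pi * (2.0 * n[:edge + 1] / (alpha * (N - 1)) - 1)))
    w[:edge + 1] = ramp          # rising taper
    w[N - edge - 1:] = ramp[::-1]  # falling taper (mirror image)
    return w

w = tukey(2**16)
acf = 1 / np.mean(w)               # amplitude correction, ~1.333 for alpha=0.5
ecf = 1 / np.sqrt(np.mean(w**2))   # energy correction, ~1.206 for alpha=0.5
```

In the large-$N$ limit the averages can be done analytically, giving the alpha dependence the question asks about: $\text{ACF} = 1/(1-\alpha/2)$ and $\text{ECF} = 1/\sqrt{1-5\alpha/8}$, which reduce to the rectangular window (no correction) at $\alpha=0$ and to the Hann values at $\alpha=1$.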
{ "domain": "dsp.stackexchange", "id": 11529, "tags": "fft, window-functions" }
The Polaris/Pole star and revolution of Earth
Question: How can we say that the Pole star is fixed just because it is positioned exactly above the North Pole? How does it revolve along with Earth around the Sun if it's fixed? Answer: We can't say anything is "fixed" - it is only ever "somewhat fixed relative to x", where x might be our frame of reference. Further, while the radius of the Earth's orbit is about 150 million km, the distance to Polaris is $4\cdot 10^{15}\ \rm{km}$. When you move Earth by 300 million km (the approximate diameter of its orbit), the apparent angle to Polaris would change by $7.5\cdot 10^{-8}~\rm{rad}$ which is about 0˚00'00.015" - really a very small shift that you won't normally be able to detect. For comparison, the sextant NASA developed for space flight had an rms error of 8.6 arcsec source - see page 8 - over 500 times greater than the shift in apparent position of Polaris. In fact, Earth's axis doesn't point directly at Polaris (see this article)- and the direction it's pointing changes all the time. This phenomenon is called precession - it is a result of the fact that the earth is not a perfect sphere, but slightly bulging at the equator. As a consequence, gravity of the Sun pulls a little harder on the near side than the far side of the equator, and this differential force creates a torque on the Earth. When a spinning object is subject to torque, it starts precessing (see my link). In the case of Earth, the period of precession is about 26,000 years. That means that Polaris is "moving" through the sky at a much higher rate than the 0.015 arcsec / 6 months that I calculated above.
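A quick numerical check of the small-angle arithmetic, using the same round figures for the orbital baseline and the distance to Polaris:

```python
import math

baseline_km = 300e6     # diameter of Earth's orbit, ~2 au
distance_km = 4e15      # approximate distance to Polaris
shift_rad = baseline_km / distance_km          # small-angle approximation
shift_arcsec = math.degrees(shift_rad) * 3600
print(f"{shift_rad:.1e} rad = {shift_arcsec:.3f} arcsec")
```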
{ "domain": "physics.stackexchange", "id": 39285, "tags": "astronomy, earth, rotational-kinematics, solar-system, stars" }
Is filtering necessary if I'm not downsampling?
Question: My data is sampled at $f_s$, and I'm interested in analyzing certain frequencies $0<f_1<\ldots<f_N<\frac{f_s}{4}$. I then take $T$ samples of data, $\{x_1,\ldots,x_T\}$ where $T\gg \frac{1}{f_s}$, FFT it after padding to the next power of 2 after $T$ and then pick the FFT bins closest to my $f_i$ of interest to proceed with the next step. My question is: Is it necessary to (or is there any advantage if I) bandpass filter (BPF) my data between $f_1$ and $f_N$? I figured that since there was no downsampling involved, there won't be any aliasing and I needn't BPF it. Besides, a well-designed filter should not modify the frequencies of interest, so it makes no difference. Am I missing something or is filtering a necessary step before FFT? Note: I won't be inverting it back to the time domain. I'm just interested in the specific bins (but enough of them that an FFT is much faster than Goertzel). Answer: Filtering is not necessary as it would only affect those frequency bins outside of the passband. However, depending on how small your passband is, you could potentially save yourself a lot of FFT processing by low-pass filtering and then downsampling before you perform your FFT. For example, if you downsample by 4, the resulting FFT could be 4 times shorter and still have the same frequency resolution.
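A small sketch of the downsampling point (the tone frequency, filter length, and decimation factor are arbitrary choices for the demo): after low-pass filtering and keeping every 4th sample, a 4x-shorter FFT puts the test tone in the same bin index, because both bin grids have spacing $f_s/N$.

```python
import numpy as np

fs, N = 8000.0, 4096
f0 = 440.0                               # test tone, well below the new Nyquist fs/8
x = np.sin(2 * np.pi * f0 * np.arange(N) / fs)

# Windowed-sinc low-pass FIR with cutoff fs/8, then decimate by 4
taps = np.arange(101) - 50
h = 2 * (1 / 8) * np.sinc(2 * (1 / 8) * taps) * np.hamming(101)
y = np.convolve(x, h, mode="same")[::4]  # length N/4, sample rate fs/4

bin_full = np.argmax(np.abs(np.fft.rfft(x)))
bin_dec = np.argmax(np.abs(np.fft.rfft(y)))
print(bin_full, bin_dec)  # same bin index in both spectra
```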
{ "domain": "dsp.stackexchange", "id": 1092, "tags": "fft, filters" }
How much mass will the Sun have when it becomes a white dwarf?
Question: In 4 billion years, when our Sun sheds all of its outer gas layers and turns into a white dwarf, how much mass will the white dwarf have compared to what the sun has today? Will the planets still orbit in the same way, or will the reduced mass cause the planets' trajectories to change, so that they eventually leave the solar system? Answer: Short answer: The Sun will lose about half of its mass on the way to becoming a white dwarf. Most of this mass loss will occur in the last few million years of its life, during the Asymptotic Giant Branch (AGB) phase. At the same time the orbital radius of the Earth around the Sun will grow by a factor of two (as will the outer planets). Unfortunately for the Earth, the radius of the Sun will also reach to about 2 au, so it will be toasted. There is the possibility that the decreased binding energy and increased eccentricity of the Earth and the outer planets will lead to dynamical instabilities that could lead to planetary ejection. This is highly dependent on the exact time dependence of the late, heavy mass loss and the alignment or otherwise of the planets at the time. Long answer: Stars with mass less than about 8 solar masses will end their lives as white dwarfs on a timescale which increases as their main sequence initial mass decreases. The white dwarfs that are formed are of lower mass than their progenitor main sequence stars, because much of the initial mass of a star is lost through stellar winds (particularly during the thermally pulsating asymptotic giant branch phase) and final ejection of a planetary nebula. Thus, the current distribution of white dwarf masses, that peaks between $0.6$ and $0.7 M_{\odot}$ and with a dispersion of $\sim 0.2 M_{\odot}$, reflects the final states of all main sequence stars with $0.9 <M/M_{\odot}<8 M_{\odot}$, that have had time to evolve and die during our Galaxy's lifetime. 
The most reliable information we have about the relationship between the initial main sequence mass and final white dwarf mass (the initial-final mass relation or IFMR) comes from measuring the properties of white dwarfs in star clusters of known age. Spectroscopy leads to a mass estimate for the white dwarf. The initial mass is estimated by calculating a main sequence plus giant branch lifetime from the difference between the age of the star cluster and the cooling age of the white dwarf. Stellar models then tell us the relationship between main sequence plus giant lifetime and the initial main sequence mass, hence leading to an IFMR. A recent compilation from Kalirai (2013) is shown below. This shows that a star like the Sun, born with an initial mass of $1M_{\odot}$ (or maybe a per cent or two more, since the Sun has already lost some mass), ends its life as a white dwarf with $M = 0.53 \pm 0.03\ M_{\odot}$. i.e. The Sun should lose approximately 50% of its initial mass in stellar winds and (possibly) planetary nebula ejection. A comprehensive treatment of what happens to solar systems when the central star loses mass in a time-dependent way is given in Adams et al. (2013). The simplest cases are initially circular orbits where the mass loss takes place on much longer timescales than the orbital period. As mass loss proceeds, the gravitational potential energy increases (becomes less negative) and thus the total orbital energy increases and the orbit gets wider. Roughly speaking, $aM$ is a constant, where $a$ is the orbital radius, which is a simple consequence of conservation of angular momentum: so the Earth would end up in a 2 au orbit. However, in the presence of a non-zero eccentricity in the initial orbit, or in the case of rapid mass loss, such as that which occurs towards the end of the AGB phase, then things become altogether more unpredictable, with the eccentricity also growing as mass loss proceeds. 
This has a knock-on effect when considering the dynamical stability of the whole (evolved) solar system and may result in planetary ejection. The faster the mass loss, the more unpredictable things get. The radius of an AGB star can be calculated using $ L = 4\pi R^2 \sigma T_{eff}^{4}$. Stars at the tip of the AGB have luminosities of $\sim 10^{4} L_{\odot}$ and $T_{eff} \simeq 2500\ K$, leading to likely radii of $\sim 2$ au. So it is quite likely that, unless the Earth is ejected or has its orbit significantly modified by some dynamical instability, it will, like the inner planets, end up engulfed in the outer envelope of the AGB star and spiral inwards... Even should it narrowly escape this immediate fate, it is then quite likely that tidal dissipation will rapidly extract energy out of the orbit and the Earth will spiral in towards the envelope of the giant Sun... with the same result.
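The Stefan–Boltzmann estimate quoted above is easy to check numerically; a minimal sketch (the constant values are standard SI figures, not from the original answer):

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # solar luminosity, W
AU = 1.495978707e11      # astronomical unit, m

def blackbody_radius_au(luminosity_lsun, t_eff_k):
    """Invert L = 4*pi*R^2*sigma*T_eff^4 for R, returned in au."""
    luminosity_w = luminosity_lsun * L_SUN
    radius_m = math.sqrt(luminosity_w / (4 * math.pi * SIGMA * t_eff_k ** 4))
    return radius_m / AU
```

With the tip-of-the-AGB values from the answer (1e4 L_sun, 2500 K) this gives roughly 2.5 au, consistent with the "~2 au" figure.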
{ "domain": "astronomy.stackexchange", "id": 908, "tags": "star, the-sun, white-dwarf" }
Why doesn't AlphaGo expect its opponent to play the best possible moves?
Question: In the game won by Lee Sedol, AlphaGo was apparently surprised by a brilliant and unexpected move from Lee Sedol. After analysing the logs, the DeepMind CEO said that AlphaGo had evaluated a 1/10000 probability for that specific move to be played by Lee Sedol. What I don't understand here is: whatever the probability of a good move being played, why take the risk? Why not instead expect the opponent to always play the best moves? Of course it's always possible that you miss the best move your opponent could play when using Monte Carlo to evaluate his possibilities, but here it seems that the move was found. If AlphaGo knew that its strategy could be countered by such a move, why not choose another strategy, where the worst-case scenario would be less bad? Answer: It appears that AlphaGo did not rate the move as a best possible move for Lee Sedol, just as one that was within its search space. To put this into context, the board is 19x19, so a 1 in 10000 chance of a move is much lower than the chance of the square being picked at random. That likely makes the move that it "found" not worth exploring much deeper. It is important to note too that the probabilities assigned to moves are equivalent to AlphaGo's rating for the quality of that move - i.e. AlphaGo predicted that this was a bad choice for its opponent. Another way of saying this is "there is a probability p that this move is the best possible one, and therefore worth investigating further". There is no separate quality rating - AlphaGo does not model "opponent's chance of making a move" separately from "opponent's chance of gaining the highest score from this position if he/she makes that move". There is just one probability covering both those meanings.[1] As I understand it, AlphaGo rates the probabilities of all possible moves at each game board state that it considers (starting with the current board), and employs the most search effort for deeper searches on the highest rated ones.
I don't know the ratios or how many nodes are visited in a typical search, but expect that a 1 in 10000 rating would not have been explored in much detail, if at all. It is not surprising to see the probability calculation in the system logs, as the logs likely contain the ratings for all legal next moves, as well as ratings for things that didn't actually happen in the game but AlphaGo considered in its deeper searches. It is also not surprising that AlphaGo failed to rate the move correctly. The neural network is not expected to be a perfect oracle that rates all moves perfectly (if it was, then there would be no need to search). In fact, the opposite could be said to be the case - it is surprising (and of course an amazing feat of engineering) just how good the predictions are, good enough to beat a world-class champion. This is not the same as solving the game though. Go remains "unsolved": even if machines can beat humans, there is an unknown amount of additional room for better and better players - and in the immediate future that could be human or machine. [1] There are in fact two networks evaluating two different things - the "policy network" evaluates potential moves, and the output of that affects the Monte Carlo search. There is also a "value network" which assesses board states to score the end point of the search. It is the policy network that predicted the low probability of the move, which meant that the search had little or no chance of exploring game states past Lee Sedol's move (if it had, maybe the value network would have detected a poor end result from playing that through). In reinforcement learning, a policy is a set of rules, based on known state, that decide between actions that an agent can take.
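As a toy illustration of the budget point (a hypothetical helper, not AlphaGo's actual PUCT search), spending a fixed search budget in proportion to the policy prior shows why a 1/10000 move is barely explored:

```python
def allocate_visits(priors, budget):
    """Toy stand-in for prior-guided tree search: spend the search
    budget on each move in proportion to its prior probability.
    (The real algorithm, PUCT, also folds in value estimates and
    exploration bonuses; this only illustrates the scale.)"""
    return {move: round(budget * p) for move, p in priors.items()}
```

With a 10,000-node budget, a move carrying a 0.0001 prior receives about one visit - effectively unexplored.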
{ "domain": "datascience.stackexchange", "id": 744, "tags": "machine-learning" }
A* search algorithm
Question: NodeData stores all information of the node needed by the AStar algorithm. This information includes the value of g, h, and f. However, the value of all 3 variables are dependent on source and destination, thus obtains at runtime. @param <T> I'm looking for reviews on optimization, accuracy and best practices. final class NodeData<T> { private final T nodeId; private final Map<T, Double> heuristic; private double g; // g is distance from the source private double h; // h is the heuristic of destination. private double f; // f = g + h public NodeData (T nodeId, Map<T, Double> heuristic) { this.nodeId = nodeId; this.g = Double.MAX_VALUE; this.heuristic = heuristic; } public T getNodeId() { return nodeId; } public double getG() { return g; } public void setG(double g) { this.g = g; } public void calcF(T destination) { this.h = heuristic.get(destination); this.f = g + h; } public double getH() { return h; } public double getF() { return f; } } /** * The graph represents an undirected graph. * * @author SERVICE-NOW\ameya.patil * * @param <T> */ final class GraphAStar<T> implements Iterable<T> { /* * A map from the nodeId to outgoing edge. * An outgoing edge is represented as a tuple of NodeData and the edge length */ private final Map<T, Map<NodeData<T>, Double>> graph; /* * A map of heuristic from a node to each other node in the graph. */ private final Map<T, Map<T, Double>> heuristicMap; /* * A map between nodeId and nodedata. */ private final Map<T, NodeData<T>> nodeIdNodeData; public GraphAStar(Map<T, Map<T, Double>> heuristicMap) { if (heuristicMap == null) throw new NullPointerException("The huerisic map should not be null"); graph = new HashMap<T, Map<NodeData<T>, Double>>(); nodeIdNodeData = new HashMap<T, NodeData<T>>(); this.heuristicMap = heuristicMap; } /** * Adds a new node to the graph. * Internally it creates the nodeData and populates the heuristic map concerning input node into node data. 
* * @param nodeId the node to be added */ public void addNode(T nodeId) { if (nodeId == null) throw new NullPointerException("The node cannot be null"); if (!heuristicMap.containsKey(nodeId)) throw new NoSuchElementException("This node is not a part of hueristic map"); graph.put(nodeId, new HashMap<NodeData<T>, Double>()); nodeIdNodeData.put(nodeId, new NodeData<T>(nodeId, heuristicMap.get(nodeId))); } /** * Adds an edge from source node to destination node. * There can only be a single edge from source to node. * Adding additional edge would overwrite the value * * @param nodeIdFirst the first node to be in the edge * @param nodeIdSecond the second node to be second node in the edge * @param length the length of the edge. */ public void addEdge(T nodeIdFirst, T nodeIdSecond, double length) { if (nodeIdFirst == null || nodeIdSecond == null) throw new NullPointerException("The first nor second node can be null."); if (!heuristicMap.containsKey(nodeIdFirst) || !heuristicMap.containsKey(nodeIdSecond)) { throw new NoSuchElementException("Source and Destination both should be part of the part of hueristic map"); } if (!graph.containsKey(nodeIdFirst) || !graph.containsKey(nodeIdSecond)) { throw new NoSuchElementException("Source and Destination both should be part of the part of graph"); } graph.get(nodeIdFirst).put(nodeIdNodeData.get(nodeIdSecond), length); graph.get(nodeIdSecond).put(nodeIdNodeData.get(nodeIdFirst), length); } /** * Returns immutable view of the edges * * @param nodeId the nodeId whose outgoing edge needs to be returned * @return An immutable view of edges leaving that node */ public Map<NodeData<T>, Double> edgesFrom (T nodeId) { if (nodeId == null) throw new NullPointerException("The input node should not be null."); if (!heuristicMap.containsKey(nodeId)) throw new NoSuchElementException("This node is not a part of hueristic map"); if (!graph.containsKey(nodeId)) throw new NoSuchElementException("The node should not be null."); return 
Collections.unmodifiableMap(graph.get(nodeId)); } /** * The nodedata corresponding to the current nodeId. * * @param nodeId the nodeId to be returned * @return the nodeData from the */ public NodeData<T> getNodeData (T nodeId) { if (nodeId == null) { throw new NullPointerException("The nodeid should not be empty"); } if (!nodeIdNodeData.containsKey(nodeId)) { throw new NoSuchElementException("The nodeId does not exist"); } return nodeIdNodeData.get(nodeId); } /** * Returns an iterator that can traverse the nodes of the graph * * @return an Iterator. */ @Override public Iterator<T> iterator() { return graph.keySet().iterator(); } } public class AStar<T> { private final GraphAStar<T> graph; public AStar (GraphAStar<T> graphAStar) { this.graph = graphAStar; } // extend comparator. public class NodeComparator implements Comparator<NodeData<T>> { public int compare(NodeData<T> nodeFirst, NodeData<T> nodeSecond) { if (nodeFirst.getF() > nodeSecond.getF()) return 1; if (nodeSecond.getF() > nodeFirst.getF()) return -1; return 0; } } /** * Implements the A-star algorithm and returns the path from source to destination * * @param source the source nodeid * @param destination the destination nodeid * @return the path from source to destination */ public List<T> astar(T source, T destination) { /** * http://stackoverflow.com/questions/20344041/why-does-priority-queue-has-default-initial-capacity-of-11 */ final Queue<NodeData<T>> openQueue = new PriorityQueue<NodeData<T>>(11, new NodeComparator()); NodeData<T> sourceNodeData = graph.getNodeData(source); sourceNodeData.setG(0); sourceNodeData.calcF(destination); openQueue.add(sourceNodeData); final Map<T, T> path = new HashMap<T, T>(); final Set<NodeData<T>> closedList = new HashSet<NodeData<T>>(); while (!openQueue.isEmpty()) { final NodeData<T> nodeData = openQueue.poll(); if (nodeData.getNodeId().equals(destination)) { return path(path, destination); } closedList.add(nodeData); for (Entry<NodeData<T>, Double> neighborEntry : 
graph.edgesFrom(nodeData.getNodeId()).entrySet()) { NodeData<T> neighbor = neighborEntry.getKey(); if (closedList.contains(neighbor)) continue; double distanceBetweenTwoNodes = neighborEntry.getValue(); double tentativeG = distanceBetweenTwoNodes + nodeData.getG(); if (tentativeG < neighbor.getG()) { neighbor.setG(tentativeG); neighbor.calcF(destination); path.put(neighbor.getNodeId(), nodeData.getNodeId()); if (!openQueue.contains(neighbor)) { openQueue.add(neighbor); } } } } return null; } private List<T> path(Map<T, T> path, T destination) { assert path != null; assert destination != null; final List<T> pathList = new ArrayList<T>(); pathList.add(destination); while (path.containsKey(destination)) { destination = path.get(destination); pathList.add(destination); } Collections.reverse(pathList); return pathList; } public static void main(String[] args) { Map<String, Map<String, Double>> hueristic = new HashMap<String, Map<String, Double>>(); // map for A Map<String, Double> mapA = new HashMap<String, Double>(); mapA.put("A", 0.0); mapA.put("B", 10.0); mapA.put("C", 20.0); mapA.put("E", 100.0); mapA.put("F", 110.0); // map for B Map<String, Double> mapB = new HashMap<String, Double>(); mapB.put("A", 10.0); mapB.put("B", 0.0); mapB.put("C", 10.0); mapB.put("E", 25.0); mapB.put("F", 40.0); // map for C Map<String, Double> mapC = new HashMap<String, Double>(); mapC.put("A", 20.0); mapC.put("B", 10.0); mapC.put("C", 0.0); mapC.put("E", 10.0); mapC.put("F", 30.0); // map for X Map<String, Double> mapX = new HashMap<String, Double>(); mapX.put("A", 100.0); mapX.put("B", 25.0); mapX.put("C", 10.0); mapX.put("E", 0.0); mapX.put("F", 10.0); // map for X Map<String, Double> mapZ = new HashMap<String, Double>(); mapZ.put("A", 110.0); mapZ.put("B", 40.0); mapZ.put("C", 30.0); mapZ.put("E", 10.0); mapZ.put("F", 0.0); hueristic.put("A", mapA); hueristic.put("B", mapB); hueristic.put("C", mapC); hueristic.put("E", mapX); hueristic.put("F", mapZ); GraphAStar<String> graph = new 
GraphAStar<String>(hueristic); graph.addNode("A"); graph.addNode("B"); graph.addNode("C"); graph.addNode("E"); graph.addNode("F"); graph.addEdge("A", "B", 10); graph.addEdge("A", "E", 100); graph.addEdge("B", "C", 10); graph.addEdge("C", "E", 10); graph.addEdge("C", "F", 30); graph.addEdge("E", "F", 10); AStar<String> aStar = new AStar<String>(graph); for (String path : aStar.astar("A", "F")) { System.out.println(path); } } } Answer: This is quite good and professional-looking code. There are many small aspects which I really like: Using Collections.unmodifiableMap instead of a shallow copy is brilliant. The use of a generic nodeId is clever and makes for elegant code. There are input checks in all public methods (well, there are some exceptions: only the AStar and NodeData constructors, NodeComparator#compare, and AStar#astar are not checked). You make perfect use of empty lines to separate unrelated blocks. But there were some aspects that made your code sometimes a bit harder to follow: Lack of encapsulation of some parts like heuristic: What is this Map<T, Map<T, Double>>? Can't I have nice self-documenting accessors for a Heuristic<T> instance? Sequential coupling in NodeData: whenever I setG I also have to call calcF. You could have made this easier by slapping a return this in there instead of void methods, but the real solution is to get rid of public calcF and make it private instead. The NodeData class is responsible on its own to maintain its invariants, so any call to setG should update dependent fields. Bad naming. The letters g, f and h have a specific meaning in the context of A* and are OK here. Of course it would have been better to include a link to this algorithm's Wikipedia page so that a future maintainer can understand why you used g instead of distance. But nodes don't have a distance, nodes or edges have weights. In the context of optimization problems it is also common to talk about a cost – a term which does not occur once in your code. 
It's called heuristic, not hueristic. Typos are easy to correct, and should be corrected while they're still young (The origin of the referer field in an HTTP request should be edutaining here). There are some formatting "errors" that can be easily rectified by an automatic formatter. E.g. don't use braces when a conditional is on a single line like if (cond) { single_statement; } – removing the braces reduces line noise. Otherwise, you could also put the statement on its own line. Some of your lines are excessively long and should be broken up (see also the next tip). As laudable as your input checks are, they do add visual clutter. Consider hiding them behind helper methods, e.g. heuristic.assertContains(nodeId) or preferably Assert.nonNull(nodeId, "node") (which assumes a whole class dedicated to input checking). Arguments against this are less useful stack traces and reduced performance (method call overhead, compiler optimizations are more difficult), but pro-arguments include more self-documenting, concise code. Other notes: Your documentation does not state what happens when there is no path from the start to the destination (e.g. if the nodes are in unconnected subgraphs). The implementation is rather clear here: A null is returned instead of a list. The pseudocode given on the A* Wikipedia page uses a slightly different condition for updating the path and possibly adding neighbours to the openQueue: if (!openQueue.contains(neighbor) || tentativeG < neighbor.getG()), whereas you use if (tentativeG < neighbor.getG()). It might be worth checking an authoritative source for what the correct condition is. You could translate between T instances and a continuous range of ints at your component's boundaries (in other words: internally, every nodeId would be an integer). Using integers allows for more efficient data structures like arrays. This would remove most Map lookups, but also the NodeData class.
The disadvantage is that your code would look like C afterwards… I would just try out the transformation and see (a) whether there is a noticeable increase in performance and (b) whether the increased ugliness is really worth it.
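For comparison, a compact sketch of the same algorithm in Python (illustrative only, not the original Java) using the "relax and re-push" idiom, which sidesteps the openQueue.contains question entirely: improved routes are simply pushed again, and stale heap entries are skipped when popped.

```python
import heapq

def astar(graph, heuristic, source, destination):
    """A* shortest path. graph[u] maps neighbor -> edge length;
    heuristic[u] is an admissible estimate of the distance from u
    to the destination."""
    g = {source: 0.0}
    came_from = {}
    open_heap = [(heuristic[source], source)]  # entries are (f, node)
    closed = set()
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node == destination:
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        if node in closed:
            continue  # stale entry; a better duplicate was handled earlier
        closed.add(node)
        for neighbor, length in graph[node].items():
            tentative_g = g[node] + length
            # Relax on a strictly better route; re-push instead of
            # mutating the heap entry (lazy deletion).
            if tentative_g < g.get(neighbor, float("inf")):
                g[neighbor] = tentative_g
                came_from[neighbor] = node
                heapq.heappush(
                    open_heap, (tentative_g + heuristic[neighbor], neighbor)
                )
    return None  # no path between source and destination
```

This keeps the invariant local: the priority queue may hold duplicates, but each node is expanded at most once.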
{ "domain": "codereview.stackexchange", "id": 20861, "tags": "java, algorithm, search, graph, pathfinding" }
How do cold conditions help organ transplants?
Question: Body organs are kept cold between explanting them from the donor and implanting them in the new host. How do these cold conditions help organs stay viable while they have no fresh blood, energy, or oxygen supply? Answer: This paper actually goes into the whole history of organ transplants. In short, cooling seems to have the following effects: preservation (usually with a specific solution to help); slows down extracorporeal ischaemic damage; slows down hypoxic damage; slows down the metabolism (energy consumption) and thus the need for the oxygen that blood provides. Remember that the cooling doesn't allow the organ to live indefinitely, and may only work for a few short hours depending on the organ. I highly recommend reading the paper if you'd like to know more about organ transplants. They went into great detail about the past and present, and the abstract also says they discuss new techniques about to be done in clinical trials.
{ "domain": "biology.stackexchange", "id": 4087, "tags": "physiology, temperature, decay" }
What is the best way to share links between models?
Question: I need to make a bunch of models each having an identical copy of a particular link. What's the best way of doing that without copying and pasting the entire XML for the link every time? Can I use an <include> element for this somehow? Originally posted by JasonMel on Gazebo Answers with karma: 35 on 2016-08-25 Post score: 0 Answer: I've never tried this, but it should be possible to <include> a model within another one. The SDF parser just expands the links of the included model within the parent model. You can see an example here. So in your case, you could make a base model which has the link that should be shared, and include that in each new model. Originally posted by chapulina with karma: 7504 on 2016-08-25 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by JasonMel on 2016-09-19: This definitely works. Thanks!
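A sketch of what that could look like in SDF (the model and link names here are made up for illustration; only the `<include>` structure follows the SDF format): put the shared link in its own model, then include it from each parent model.

```xml
<model name="robot_a">
  <!-- Pull in the shared link: the SDF parser expands the included
       model's links into this parent model. -->
  <include>
    <uri>model://shared_link</uri>
    <name>shared</name>
    <pose>0 0 0.1 0 0 0</pose>
  </include>
  <!-- ...links and joints specific to robot_a... -->
</model>
```

Each of the other models repeats only the short `<include>` block instead of the full link XML.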
{ "domain": "robotics.stackexchange", "id": 3975, "tags": "gazebo" }
What are the applications of the KMP algorithm?
Question: The KMP algorithm works best when there are self-matches in the pattern string that we want to search for. Usually that doesn't happen unless the pattern is long enough. So where is KMP applied in the real world? Answer: In the real world the KMP algorithm is used in applications where pattern matching is done in long strings whose symbols are taken from an alphabet of small cardinality. A relevant example is the DNA alphabet, which consists of only 4 symbols (A, C, G, T). Imagine how KMP can work in a "DNA pattern matching problem": it is really suitable because many repetitions of the same letter allow many jumps, and so less computation time is wasted. If you are interested in this research area, just google "DNA pattern matching algorithms"; there's a lot to say.
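To make the "jumps" concrete, here is a standard KMP sketch in Python (not part of the original answer): the failure function encodes the pattern's self-matches, and on a repetitive DNA-style pattern the search reuses partial matches instead of restarting from scratch.

```python
def failure_function(pattern):
    """fail[i] = length of the longest proper prefix of pattern[:i+1]
    that is also a suffix of it (the pattern's self-matches)."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_search(text, pattern):
    """Return the start indices of all (possibly overlapping) matches."""
    fail = failure_function(pattern)
    matches, k = [], 0
    for i, c in enumerate(text):
        while k > 0 and c != pattern[k]:
            k = fail[k - 1]   # jump: fall back along the self-matches
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)
            k = fail[k - 1]   # keep going to find overlapping matches
    return matches
```

On the repetitive pattern "ACGTACG" the failure function is [0, 0, 0, 0, 1, 2, 3], so after a mismatch the search resumes mid-pattern rather than at the beginning - exactly the behavior that pays off on low-cardinality alphabets like DNA.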
{ "domain": "cstheory.stackexchange", "id": 3140, "tags": "search-problem, string-search" }
Serial to Parallel Converter
Question: In many texts about modulators I have seen a component called "Serial to Parallel Converter". For instance, let's consider this scheme and let's focus on the second block (reference): The text about that block says: The input serial data stream is formatted into the word size required for transmission, e.g. 2 bits/word for QPSK, and shifted into a parallel format. The data is then transmitted in parallel by assigning each data word to one carrier in the transmission. I have also read somewhere that this device converts a signal from serial to parallel by halving the transmission rate. But I do not understand what serial to parallel conversion exactly means, and whether I have to see it from an electronic, circuit point of view (from one signal between a wire and GND we get two signals between two wires and GND) or from a signal theory point of view (but I do not know how to see it). What is the mathematical relationship of the output signals (between them, and between them and the input signal)? Answer: So, I myself have taught based on material that uses that scheme, "P/S" and "S/P" after and before the transforms. Personally, I think it's nonsense. What the author tries to say is: The IFFT is a mapping of sample vectors to vectors. So, you need a vector as input, not a stream of samples. What they instead say is: We use the terminology from very basic digital logic to deal with complex values.
That's not only a bit awkward, if you ask me, but also inaccurate: modern FFT implementations in hardware actually do take in stream data, so that operation isn't there in practical hardware implementations; in software implementations, you basically never even deal with samples coming in serially – they always appear en bloc, in some memory location, so nothing's "serial" to even begin with; and neither S/P nor P/S even says that you should be fully accumulating one vector of $N$ samples, then move on to the next vector, and have zero overlap between these – when applied to FIFOs (which is where the term comes from), that's usually not how they work. So, simply think of S/P as "get $N$ samples, present these $N$ to the next block as a vector". And P/S is "take this $N$-long vector, and give out one sample after the other". So, these blocks do exactly nothing to your signal - it's just a reinterpretation, if you will, between things that are logically "one after the other", and things that are logically "a vector of $N$ values".
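In software the reinterpretation really is trivial; a sketch with hypothetical helper names: S/P is just grouping N consecutive samples into a vector, and P/S is flattening the vectors back into a stream.

```python
def serial_to_parallel(stream, n):
    """Group a sample stream into consecutive length-n vectors
    (e.g. one OFDM symbol's worth of samples per vector).
    A trailing partial vector is dropped."""
    samples = list(stream)
    return [samples[i:i + n] for i in range(0, len(samples) - n + 1, n)]

def parallel_to_serial(vectors):
    """Flatten the vectors back into a single sample stream."""
    return [x for vec in vectors for x in vec]
```

Nothing about the samples changes; the round trip returns the original stream, which is the answer's point.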
{ "domain": "dsp.stackexchange", "id": 8493, "tags": "discrete-signals, signal-analysis, digital-communications, modulation, communication-standard" }
If the energy of photons at the bottom of the visible spectrum (~720 nm) is 1.72 eV, why does a red LED light up at only 1.48 V?
Question: I have a red LED (623 nm peak wavelength) which I am able to light up at 1.48 V. I thought the switch-on voltage should be determined by $hc/(e\lambda)$, but that would give 1.99 V for 623 nm. Even assuming the LED spectrum is really wide, and I'm only seeing the deepest visible red (~720 nm), the voltage required should still be at least 1.72 V. Does the missing energy come from heat? There is "thermal voltage" $(kT/e)$, but it's only 26 mV at room temperature — an order of magnitude smaller than the discrepancy between the prediction and the measurement. Answer: There is nothing strange in having a forward voltage drop lower than $E_{gap}/e$. According to the Shockley equation of the diode, any voltage bias $V_D \gt 0$ is able to induce forward conduction in the diode: $$ I(V_D) =I_0 \left(e^{eV_D/kT}-1 \right), $$ and as long as there is some current flowing across the LED, light emission is possible although possibly faint. However, from the point of view of energy conservation, every photon of energy $E_{gap}$ is created at the expense of an injected electron of energy $eV_D$. When, at low current injection, $E_{gap} \gt eV_D$, the missing energy has to come from heat. It is similar to the evaporation of a liquid: only the few electrons that happen to have enough energy to overcome the built-in potential recombine, but by doing that they subtract thermal energy. This doesn't necessarily mean that the LED cools down. Most of the electrons recombine non-radiatively, emitting heat instead of light. There is also Joule heating due to the flow of current. But in some special conditions cooling of the device has indeed been observed.
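The numbers in the question are easy to reproduce; a small sketch using standard physical constants (the constant values are not part of the original post):

```python
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
E = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23    # Boltzmann constant, J/K

def photon_energy_ev(wavelength_m):
    """Photon energy h*c/lambda expressed in eV, i.e. hc/(e*lambda)."""
    return H * C / (wavelength_m * E)

def thermal_voltage(temperature_k):
    """kT/e in volts."""
    return K_B * temperature_k / E
```

This gives about 1.99 eV at 623 nm, 1.72 eV at 720 nm, and kT/e of about 26 mV at 300 K - the three figures quoted in the question.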
{ "domain": "physics.stackexchange", "id": 47269, "tags": "semiconductor-physics, light-emitting-diodes, solar-cells" }
How do you make a spherical radio wave?
Question: A vertical rod, a usual dipole, produces radio waves in the horizontal plane, mostly in two opposite directions. If that is possible, how do you produce spherical EM radiation? Should the antenna be an (expanding and contracting) globe or a circle? How should the charges oscillate? And, lastly, would its energy decrease as $1/(4 \pi r^2)$, so that its range would be rather short? P.S. Someone said in a comment to How is a spherical electromagnetic wave emitted from an antenna described in terms of photons?: For some reason, my instinct is that a spherical electromagnetic wave cannot be emitted by an antenna. Instead, they can only be emitted by a charge. I guess that's cause I always think of an antenna as an object that has no net charge. – Carl Brannen Is this true? Can you explain how a charge, say an electron, can produce a spherical wave? Also, does the section (the area) of a charge carry any info about its force or anything else? Answer: A result known as Birkhoff's theorem forbids spherical electromagnetic radiation. The statement of the theorem is that any spherically symmetric vacuum solution to Maxwell's equations must be static. It is rather simple to prove. In a spherically symmetric solution $\mathbf E$ and $\mathbf B$ must be radial. Make an Ansatz, $$\mathbf E = E_0 \exp(i(\mathbf k\cdot\mathbf r-\omega t)) \hat r \quad \mathbf B = B_0 \exp(i(\mathbf k\cdot\mathbf r-\omega t)) \hat r $$ The wavevector $\mathbf k$ must be $\mathbf k = k\hat r$ for spherical symmetry. Now Ampere's law is $$\nabla\times \mathbf B = i\mathbf k \times \mathbf B = 0 = \partial_t \mathbf E = -i\omega \mathbf E$$ which implies $\omega = 0$, so that the field is static, or $E_0 = 0$. From Faraday's law $\nabla\times\mathbf E =- \partial_t \mathbf B$ you can see that if $E_0 = 0$ but $\omega \neq 0$, then also $B_0 = 0$.
The most general result for electromagnetic radiation is that in Coulomb gauge, in the radiation zone, the vector potential is $$\mathbf A(\mathbf x, t) = \frac{\mu_0}{4\pi }\frac{e^{i(kr-\omega t)}}{r} \int \mathbf J(\mathbf x') e^{-ik\hat{x} \cdot \mathbf x'} \, d^3x'$$ where $\mathbf J(\mathbf x')$ is the current in the source region, e.g., your antenna, and the current is assumed to have sinusoidal (harmonic) time dependence. [This is not a restriction because Maxwell's equations are linear and the Fourier transform exists.] The angular dependence is entirely in the integral over the source current. Thus to achieve some desired angular profile of the radiation, one needs to design $\mathbf J$ appropriately. Your particular case of an oscillating sphere of charge actually does not radiate because it has only a monopole moment and there is no monopole radiation. A spheroidal charge distribution is treated by Jackson, Classical Electrodynamics, Sec. 9.3. There Jackson shows that this arrangement leads to quadrupole radiation with a four-lobed distribution of radiated power. For a more in-depth discussion, read Ch. 9 in Jackson, which treats radiation in detail, including the angular distribution of radiated power from various sources.
{ "domain": "physics.stackexchange", "id": 37833, "tags": "electromagnetic-radiation, antennas, radio-frequency" }
How are J1407b's rings possible?
Question: If planetary rings are maintained by staying within the Roche limit, how is it that J1407b's rings extend so far? Surely the gravity and tidal forces should be low enough to allow a moon to form. Answer: There are a few ways that a gigantic ring system outside the Roche limit might be possible. Ring systems can extend well past the Roche limit if they are very faint or very young. A faint/light ring system doesn't have enough mass to coalesce and a young system might not have had enough time. Young, small moons with lots of internal heat of formation can be volcanically active too, feeding a ring system. Saturn's E ring is well outside its Roche limit. https://upload.wikimedia.org/wikipedia/commons/f/f7/Saturn%27s_Rings_PIA03550.jpg The most likely explanation for its enormous and very dense rings, however, is that J1407b is very young and a lot of its ring system is likely to coalesce into moons in time. The breaks in its ring system suggest that some moons are already clearing paths in its rings. Its mass is also very significant. While difficult to estimate, it's thought to be about 10-40 Jupiters in mass. Source. That would give it a mass in the range of 33 to 130 Saturns and a Roche limit some 3 to 5 times Saturn's (the cube root of the mass ratio) measured from the planet's center, perhaps 6-10 times measured from the surface. Because its ring system is about 200 times the size of Saturn's rings, the mass alone explains only part of it. Most likely it's a combination of a very massive planet (perhaps a brown dwarf) and a young planet/star with a satellite system still in formation. There might also be a very powerful magnetic field at play that helps prevent the ring debris from forming into satellites, but that's just speculation on my part.
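The cube-root scaling invoked above can be made explicit. A sketch, assuming the rigid-body Roche limit d = r_sat * (2M/m)^(1/3), so that for a fixed satellite the Roche distance scales with the cube root of the primary's mass M:

```python
def roche_distance_ratio(mass_in_saturns):
    """For the same satellite, the rigid-body Roche limit
    d = r_sat * (2 * M / m)**(1/3) scales as M**(1/3), so a primary
    of `mass_in_saturns` Saturn masses has a Roche limit this many
    times Saturn's (measured from the center)."""
    return mass_in_saturns ** (1.0 / 3.0)
```

Plugging in the 33-130 Saturn-mass range gives factors of roughly 3.2 to 5.1, matching the "3 to 5 times" quoted in the answer.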
{ "domain": "astronomy.stackexchange", "id": 2110, "tags": "planetary-ring, roche-limit" }
One-Dimensional Convolutional Neural Network
Question: Can someone explain how a 'One-Dimensional Convolutional Neural Network' works? I understand the 2-D case for images, but for 1-D how is the filter created? Is it a fixed 1-D filter within a specific time interval, or is the operation the same as convolving a signal with a filter in signal processing, y = f*x? Answer: Is it a fixed 1-D filter within a specific time interval? Yes. The same as filters in 2D. Adjacent filters may even have no overlap with each other. A 1D CNN is almost the same as a 2D CNN both mathematically and visually, obtained by setting the second dimension (either the horizontal or vertical one in visualizations) to 1. This way, 1D filters are placed (possibly with some overlap) in one dimension instead of 2D filters being spread in two dimensions. The image below shows a filter set with shared parameter $W$ covering the overlapping regions of the input. By shared parameter we mean $f_i=\mbox{ReLU}(\mbox{sum}(W \odot \mbox{region}_i))$, where $\odot$ is a point-wise product between a region of input and $W$.
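A minimal sketch of the 1-D operation in plain Python (a hypothetical helper; this is cross-correlation, as CNN layers actually compute it): the stride controls how much adjacent filter placements overlap.

```python
def conv1d(signal, kernel, stride=1):
    """Valid-mode 1-D convolution as used in CNN layers: slide the
    kernel along the signal and take a dot product at each placement.
    stride=len(kernel) gives non-overlapping placements."""
    n = len(kernel)
    return [
        sum(k * s for k, s in zip(kernel, signal[i:i + n]))
        for i in range(0, len(signal) - n + 1, stride)
    ]
```

In a CNN layer, each output of this helper would additionally pass through the nonlinearity, e.g. ReLU(sum(W ⊙ region_i)) with W playing the role of the kernel shared across all placements.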
{ "domain": "datascience.stackexchange", "id": 4812, "tags": "deep-learning, time-series, convolution" }
Positive Definiteness of Killing Form in Gauge Theory
Question: This question is related to the requirement that the gauge group of a gauge theory be a direct product of compact simple groups and $U(1)$ factors, but is not the same as, for example, this question (though related as described below). When looking to build the kinetic term of a gauge theory, the demand that the Lagrangian be real and Lorentz invariant implies the lowest order term we can write down using only the field strength $F^{\alpha}_{\mu\nu}$ (using $\alpha,\beta,\ldots$ for gauge group indices) is of the form $$ g_{\alpha\beta}F^{\alpha}_{\mu\nu}F^{\beta\mu\nu} $$ where $g_{\alpha\beta}$ is a real matrix which we may take to be symmetric. As described in this answer and Weinberg Vol. 2, in order to conclude that the gauge group must be a product of compact simple and $U(1)$ factors, we must argue both that $g_{\alpha\beta}$ satisfies the invariance condition $g_{\alpha\beta}C^\beta_{\gamma\delta}=-g_{\gamma\beta}C^\beta_{\alpha\delta}$ (so $g_{\alpha\beta}$ is proportional to the Killing form of the gauge group) and also that $g_{\alpha\beta}$ must be positive-definite. The former of these follows from gauge invariance and does not concern me here. My question: The claim is that the positive definiteness of $g_{\alpha\beta}$ follows from unitarity and the canonical quantization procedure. Can anyone make explicit how this follows? Answer: The shortest route is to Wick rotate. In the Euclidean setting, the integration measure is $$ e^{-S[A]}\mathrm dA,\qquad\text{with}\qquad S[A]=\frac{1}{g^2}\int \langle F,F\rangle $$ If the scalar product is not positive definite, then the action does not decay for large field configurations, and the path-integral does not converge. So the QFT does not even exist. In the Lorentzian setting the philosophy is really the same.
Recall that in the path-integral we send the time direction in a slightly imaginary direction (see this PSE post), and so you still need the imaginary part of the action to have the appropriate decay properties. This can all be traced back to the assumption that the Hamiltonian is hermitian and bounded from below, so if a QFT with a non-positive-definite metric exists, it cannot have a healthy Hilbert space with a well-defined Hamiltonian. There is an interesting loophole though. If the classical phase space is finite-dimensional, then the path-integral converges even if $S$ does not decay. This plays an important role in topological gravity, where the gauge group is the Poincaré group (which is not simple, so its Killing form is not positive definite). The theory is well-defined even though $\langle\cdot,\cdot\rangle$ is not a norm. See the seminal work of Witten for more details regarding this point. -- A different (and perhaps more convincing) argument is the following. Instead of looking at Lagrangians, you can construct the theory of spin-1 particles from the bottom up. We look at the most general $S$-matrix of such particles, and impose that it leads to a self-consistent theory. A very nice account of this construction is Scharf, Quantum Gauge Theories: A True Ghost Story. In section 3.2 the author proves that the coupling of these particles necessarily involves a set of constants $f_{ijk}$ which are completely anti-symmetric and satisfy the Jacobi identity. (This also follows from Weinberg's soft theorems; see e.g. Schwartz's Quantum Field Theory and the Standard Model, section 27.5.2.) It is a classic result of the theory of Lie algebras that a set $f_{ijk}$ satisfying the Jacobi identity and being anti-symmetric in its three indices defines a reductive Lie algebra, and vice versa. So this proves that self-consistent theories of interacting spin-1 particles are classified by positive-definite Killing forms.
Regarding the loophole above, topological theories have no particles, so this is how they evade the constraints from the $S$-matrix.
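As an aside to the Killing-form discussion above, one can check numerically that a compact simple algebra such as $\mathfrak{su}(2)$, with structure constants $f_{abc}=\epsilon_{abc}$, has a definite Killing form. This NumPy sketch is purely illustrative and not part of the original answer's argument:

```python
import numpy as np

# Killing form K_ab = tr(ad X_a ad X_b) for su(2), whose structure
# constants in a suitable basis are f_abc = epsilon_abc.
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c] = 1.0    # even permutations
    eps[a, c, b] = -1.0   # odd permutations

# (ad X_a)_{d c} = f_{a c}^{d} = eps[a, c, d] in this basis
ad = np.array([eps[a].T for a in range(3)])
K = np.einsum('aij,bji->ab', ad, ad)   # trace of ad(X_a) ad(X_b)
print(K)   # -2 * identity: definite, as expected for a compact simple algebra
```

The result $K_{ab}=-2\delta_{ab}$ is negative definite, so $-K$ supplies the positive-definite norm that the kinetic term needs, consistent with the compactness requirement discussed above.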
{ "domain": "physics.stackexchange", "id": 75235, "tags": "quantum-field-theory, gauge-theory, yang-mills, unitarity" }
What standards exist regarding diagnostic systems in automotive engineering?
Question: I'm designing a diagnostic system that simulates an environment and user input for a dummy vehicle. All possible critical situations are simulated in order to find out whether any errors are triggered, or whether the software contains any faults or bugs. It's a vehicle for state defense; it doesn't have OBD like normal cars, but a display with buttons with which you can seek out and troubleshoot errors. I have to design a procedure for testing each new software release. I was wondering if any standard exists for testing/validating software, just like driving cycles (NEDC, WLTP) exist for emission tests. Do any standards (SAE, DIN, etc.) exist for this kind of thing? Answer: Since your project is defense software related, I suggest taking a look at MIL-STD-498. The one most closely related to your question is the SOFTWARE TEST PLAN (STP).
{ "domain": "engineering.stackexchange", "id": 2362, "tags": "automotive-engineering, simulation, software, standards, car" }
Proof of forbidden free electron-photon absorption using spacetime diagram
Question: I'm trying to solve this exercise from the book "Modern Classical Physics" by Kip Thorne: Show, using spacetime diagrams and also using frame-independent calculations, that the law of conservation of 4-momentum forbids a photon to be absorbed by an electron I managed to prove this using coordinates: choose the Lorentz frame in which the electron is at rest; in this frame its 4-momentum is simply $\vec{p_e}=(m,0,0,0)$, and let the photon be moving from left to right, so its 4-momentum is $\vec{p_\gamma}=(\epsilon,\epsilon,0,0)$. Using conservation of 4-momentum, after the absorption the 4-momentum of the electron should be $\vec{p_e} = (m+\epsilon,\epsilon,0,0)$, but this generates a contradiction, because for any massive particle we have that $\vec{p}^2= -m^2$, while here $\vec{p_e}^2=-(m+\epsilon)^2+\epsilon^2=-m^2-2m\epsilon$. Setting this equal to $-m^2$, we conclude that $2m\epsilon=0$, so either the mass of the electron is zero or the energy of the photon is zero, contradiction! My problem is, how can I prove that this reaction is forbidden using a spacetime diagram? I tried to draw one, but I can't see where the contradiction will arise from it. Answer: If you draw some electron worldline inside the past light cone suddenly transforming into a photon line (on the light cone) and another electron worldline inside the future light cone, you should notice that it doesn't satisfy both energy and momentum conservation (bold symbols are four-vectors): $$\tag{1} \boldsymbol{p}_A = \boldsymbol{p}_B + \boldsymbol{k}. $$ This equation could be written as this: $$\tag{2} \boldsymbol{p}_A - \boldsymbol{p}_B = \boldsymbol{k}. $$ The four-vector $-\, \boldsymbol{p}_B$ could be interpreted as an electron moving to the past, in the past light cone. Now draw the worldlines in the total momentum frame. You should notice that there's a problem!
Also, take the invariant square of equation (2) ($\circ$ is the four-vector scalar product): $$\tag{3} p_A^2 + p_B^2 - 2 \,\boldsymbol{p}_A \circ \boldsymbol{p}_B = k^2 \equiv 0.$$ Then $p_A^2 = p_B^2 = m^2$ (the electron stays an electron!) so (3) reduces to $$\tag{4} m^2 = \boldsymbol{p}_A \circ \boldsymbol{p}_B. $$ This equation cannot be true, unless $\boldsymbol{p}_A \equiv \boldsymbol{p}_B$ (which implies $\boldsymbol{k} = 0$, i.e no photon!)
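The coordinate calculation in the question can also be checked symbolically. This SymPy sketch follows the question's $(-,+,+,+)$ signature (in which $\vec{p}^2=-m^2$ for a massive particle) and is only an illustration:

```python
import sympy as sp

# Symbolic version of the question's frame-independent argument, using
# the same (-,+,+,+) signature in which p^2 = -m^2 for a massive particle.
m, eps = sp.symbols('m epsilon', positive=True)

def minkowski_sq(E, px):
    """Invariant square of a four-vector (E, px, 0, 0) with signature (-,+,+,+)."""
    return -E**2 + px**2

# Electron at rest absorbs a photon of energy eps moving along +x,
# so the final electron four-momentum would be (m + eps, eps, 0, 0):
p_final_sq = minkowski_sq(m + eps, eps)

# The mass-shell condition demands p_final_sq == -m^2; the mismatch is:
mismatch = sp.expand(p_final_sq + m**2)
print(mismatch)   # -2*epsilon*m: vanishes only if m = 0 or eps = 0
```

Since `mismatch` equals $-2m\epsilon$, the mass-shell condition can only hold for a massless electron or a zero-energy photon, which is exactly the contradiction derived above.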
{ "domain": "physics.stackexchange", "id": 61272, "tags": "special-relativity" }
If you moved your hand continuously through fire, would it feel hot?
Question: If you quickly pass your finger or hand through a candle or fire, it doesn't feel very hot (great party trick). Is this because your hand spends such little time in the fire that there isn't enough time for conduction to occur? Or does the motion of your hand prevent the heat-transfer process from beginning (to a significant degree)? If you continuously moved your hand through fire, would it feel hot? Answer: Is this because your hand spends such little time in the fire that there isn't enough time for conduction to occur? Pain occurs when the skin temperature reaches the threshold of pain. The temperature of the skin due to exposure to the candle flame is the result of the combination of the skin's heat absorption rate and the duration of exposure of the skin to the flame. For example, one study showed the theoretical thresholds of pain and blistering for a one-second radiant heat exposure on thin skin to be about 2.5 watts/cm$^2$ and 5 watts/cm$^2$, respectively. According to a Wikipedia article, the heat release rate of a candle flame is about 80 watts. So clearly the heat is sufficient to cause pain as well as a burn. Therefore, if pain does not occur it's simply because the exposure time was too brief for the skin temperature to reach the threshold of pain. That said, it is foolish to perform such experiments due to the risk of burn injury. Hope this helps.
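The answer's exposure-time argument can be put into rough numbers. The ~2.5 W/cm² for ~1 s pain threshold is the figure the answer quotes; the flame flux, flame width, and hand speed below are assumptions made up for this back-of-envelope sketch:

```python
# Back-of-envelope check of the exposure-time argument. Only the
# ~2.5 W/cm^2-for-1-s pain threshold comes from the answer above;
# the other three numbers are rough assumptions for illustration.
pain_dose = 2.5          # J/cm^2, roughly 2.5 W/cm^2 sustained for 1 s
flame_flux = 6.0         # W/cm^2, assumed local heat flux at the skin
flame_width = 0.02       # m, assumed width of the flame region
hand_speed = 1.0         # m/s, a quick swipe

dwell_time = flame_width / hand_speed      # s spent inside the flame per pass
dose_per_pass = flame_flux * dwell_time    # J/cm^2 absorbed per pass
print(f"dwell: {dwell_time*1e3:.0f} ms, dose per pass: {dose_per_pass:.2f} J/cm^2")
```

With these assumed numbers, a single ~20 ms pass delivers only a small fraction of the pain dose, while holding the hand in the flame (continuous exposure) would cross the threshold in well under a second, consistent with the answer's conclusion.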
{ "domain": "physics.stackexchange", "id": 90645, "tags": "everyday-life, convection, heat-conduction" }
Why is Torque defined the way it is?
Question: According to my professor, torque is like the counterpart of force in the angular world. But we know that most quantities, such as angular acceleration, angular velocity, etc., are defined as something divided by $r$. So then why is torque defined as the product of $r$ and $F$, when intuitively it should be $F/r$? Answer: Tl;dr: how much a force spins an object around a point depends both on where the body is relative to that point and on how much force is exerted. For example, imagine you are on a circular race track; the amount of 'effort' you have to exert to increase your speed tangential to the race track increases as you increase the radius of the circle which you are moving around. Put another way, the amount of 'effort' you need to increase your spin speed around some axis depends on the perpendicular distance from that axis to you. To see this, consider the work done in moving a body along a small infinitesimal circular arc with the arc's center at some origin $O$. Say a force acts on it over an arc length $ds$; then(*) $$ \mathrm{dW} = \vec{F}\cdot \vec{ds}$$ Now since it's a circular path, $$ \vec{ds} = |r|\, d\theta\, \hat{\theta}$$ Hence, $$ \mathrm{dW} = |r||F_{\perp}|\, d\theta$$ Or, $$ \frac{dW}{d\theta} = |r||F_{\perp}|$$ So it's pretty easy to see that the rate of change of work as the angle changes depends on the product of $r$ and $F_{\perp}$ on the RHS. This quantity on the RHS is what we define as the torque, with $$ W_{angular} = \int \tau\, d\theta$$ More generally speaking, $\vec{ds}$ has two components, an arc component and a radial component: $$ \vec{ds} = r\, d\theta\, \hat{\theta} + dr\, \hat{r}$$ So, $$ \vec{F} \cdot \vec{ds} = rF_{tngt}\, d\theta + F_{radial}\, dr $$ The second term involves the radial force component while the first involves the angular/tangential force component, so the total work is: $$ W = \int F_{rdl}\, dr + \int rF_{tngnt}\, d\theta$$ Note (*): When you dot two vectors, the components they don't have in common go to zero. E.g., imagine a vector $q$ having components perpendicular and parallel to a vector $p$: $$ (q_{ \perp} + q_{\parallel} ) \cdot p = q_{ \parallel} \cdot p$$ So dotting the force with the circular arc element immediately extracts the component acting along the tangential direction. Refer here
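The relation $dW/d\theta = r\,F_\perp$ can be verified numerically for a constant tangential force on a circular arc. The radius, force, and angle here are arbitrary example values:

```python
import numpy as np

# Numerical check of dW/dtheta = r * F_perp on a circular arc:
# with a constant tangential force, W = integral(r * F_t dtheta) = tau * theta.
r = 2.0                                # m, radius of the arc (example value)
F_t = 3.0                              # N, tangential force component (example value)
theta = np.linspace(0.0, np.pi / 2, 1001)

dW_dtheta = np.full_like(theta, r * F_t)          # constant integrand r * F_t
W_line = np.sum(dW_dtheta[:-1] * np.diff(theta))  # Riemann sum of the line integral
W_torque = (r * F_t) * (np.pi / 2)                # tau times total angle swept

print(W_line, W_torque)   # both ~9.42478 J
```

The agreement between the summed line integral and $\tau\,\Delta\theta$ is the quantitative content of defining torque as $rF_\perp$ rather than $F/r$.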
{ "domain": "physics.stackexchange", "id": 73012, "tags": "newtonian-mechanics, reference-frames, definition, torque, rigid-body-dynamics" }
Remove Orbitals in FreezeCoreTransformer (qiskit)
Question: In Qiskit's VQE tutorial, the FreezeCoreTransformer is used and some orbitals are removed. In the particular case of LiH, they remove the [-2, -3] orbitals. Why are the orbitals indexed in this way? How can I choose the unoccupied orbitals for other molecules, such as NaH or BeH2? Sincerely, Maria Gabriela Answer: A negative index there is just like using a negative list index in Python. -1 means the highest index, -2 the one down from that, and so on. In this case the goal was to remove the couple of unoccupied orbitals just below the highest one, so that was a convenient way to specify them rather than having to know the total number of orbitals and compute a 0-based index (which would be valid too). The orbitals removed were known from the molecule itself to play only a very small part in the overall behavior. As to doing this for other molecules, you really need to take great care. You can see plots here for BeH2 orbital reduction which, though produced a long time ago with code that is now out of date, are still valid. Freezing the core orbitals is always safe to do, though.
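The negative-index convention the answer describes is exactly Python's list indexing. A plain-Python illustration (the orbital labels here are hypothetical placeholders, not real LiH data):

```python
# Plain-Python illustration of the negative-index convention described in
# the answer; the orbital labels are hypothetical placeholders, not LiH data.
orbitals = ["orb0", "orb1", "orb2", "orb3", "orb4", "orb5"]

print(orbitals[-1])   # orb5: -1 is the highest index
print(orbitals[-2])   # orb4: one down from the highest
print(orbitals[-3])   # orb3

# Removing [-2, -3] is therefore equivalent to removing 0-based indices [4, 3]:
zero_based = [len(orbitals) + i for i in (-2, -3)]
print(zero_based)     # [4, 3]
```

This is why the negative form is convenient: it picks out the orbitals just below the highest one without needing to know the total orbital count in advance.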
{ "domain": "quantumcomputing.stackexchange", "id": 4462, "tags": "qiskit, vqe, chemistry" }
Units of Larmor frequency
Question: I'm sorry if this is really basic, but what exactly is the unit of the Larmor frequency? According to Wikipedia, the formula for the Larmor frequency is: $$\omega = \frac{egB}{2m}$$ and with $g$ a dimensionless constant, we work out the units of $\omega$ to be: \begin{equation} [\omega] = \frac{(A\cdot{s})({kg\cdot s^{-2}\cdot A^{-1}})}{kg} =s^{-1} \end{equation} which is all good. But when we include Thomas precession, Wikipedia (it's on the same page) offers the formula: $$\omega_{s(g=2)} = \frac{eB}{mc\gamma}$$ which, upon comparison with the original formula above, is obviously different. More precisely, the unit of the Larmor frequency is now $m^{-1}$. What's going on here? Answer: The formula for Thomas precession is in CGS units, whereas the formula you provided for Larmor precession is in SI.
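The SI dimensional analysis in the question can be backed by a quick numerical evaluation. The constants below are CODATA values typed in by hand for illustration, not figures from the question:

```python
# SI sanity check of omega = e*g*B/(2*m) for an electron in a 1 T field.
# The numerical constants are typed in by hand (CODATA values),
# not taken from the question itself.
e = 1.602176634e-19      # C, elementary charge
m_e = 9.1093837015e-31   # kg, electron mass
g = 2.0                  # dimensionless g-factor (approximately, for the electron)
B = 1.0                  # T, example field strength

omega = e * g * B / (2 * m_e)   # C*T/kg reduces to s^-1, as in the question
print(f"omega = {omega:.4e} rad/s")
```

The result, about 1.76e11 rad/s, comes out in s^-1 as expected; the m^-1 puzzle only appears when the SI value of e is plugged into the CGS formula, which is exactly the unit-system mismatch the answer points out.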
{ "domain": "physics.stackexchange", "id": 36776, "tags": "electromagnetism, units, si-units, unit-conversion" }
Tornado Safety: Basement or first floor interior room?
Question: Building I live in a new (~10 years old) 2-story duplex home in the Eastern United States. We have a completely interior bathroom on the first floor. It is between 2 other (exterior) rooms, the garage, and the firewall. It has a shower (plastic covering, not tile) but no tub. We also have an unfinished basement. The foundation is a concrete slab with CMU walls, which extend about 18 inches above ground level. There is a small (I think it's called a "port") window in one corner. At the opposite corner is a steel door that opens to a poured-concrete stairwell topped with Bilco doors. It's an open space, I'd estimate a little under 1500 ft². We use about 2/3 of the space for general storage (decorations, unused furniture, food, the usual stuff) and 1/3 for my woodshop (hand tools, power tools, lumber, etc.). Risk According to http://www.tornadohistoryproject.com, my county has seen 31 recorded tornadoes since 1950, with the following breakdown by scale:

| Fujita Scale | Number |
| ------------ | ------ |
| 3 | 1 |
| 2 | 11 |
| 1 | 13 |
| 0 | 6 |

(The Tornado History Project gets its data directly from NOAA but uses the term "Fujita Scale" consistently; I don't know whether that means all their data after 2007 is actually on the EF scale or not, as they don't seem to specify.) Advice The general advice is always "get to a basement or an internal room on the lowest floor." This makes sense. Roofs and exterior walls are, for obvious reasons, the first surfaces to be damaged and destroyed. Upper floors are more susceptible to structural damage. Direct wind and debris from tornadoes are also much less likely to extend below ground level, making basements in general a good place to be. In a lot of cases (as in mine) the basement is used to store food and/or provisions, which is helpful in the event of being trapped. It's also clear why fully interior rooms are an acceptable alternative: they put more structure between you and the storm.
Since wind-induced structural failures happen from the outside in, they're also the rooms most likely to be left standing. Questions Are there any other factors to consider? What are some other risks/benefits that I've not considered? Speaking in the most general sense, if you have both options, does one tend to be ubiquitously safer overall? Speaking specifically to my home, given that A) the historical risk of major structural failure is low and B) the basement is wide open and filled with sharp would-be missiles, which would you recommend for my family, and why? Answer: The Wikipedia page on Tornado Intensity has example pictures of damage. Your location has mostly EF1 and EF2 tornadoes historically (I'm assuming the EF scale). (Example pictures for EF1, EF2, and EF3 damage) Result Just looking at those pictures, I wouldn't want to be above ground if the basement is an option. You don't want to make a decision based on how strong you think the tornado is. Just go to the basement if it is possible. It isn't likely that the wind will be strong enough to throw your tools around in the basement. And if it is, I certainly wouldn't want to be above ground.
{ "domain": "engineering.stackexchange", "id": 704, "tags": "structural-engineering, safety" }
Experimental Proof of Einstein-Rosen bridge
Question: Question: There are mathematical proofs available for the Einstein-Rosen bridge. But I was wondering whether there is experimental proof for such a thing. The general theory of relativity supports the fact and thus allows us to think of existence of such a thing.But I don't think a concept which cannot be mathematically proved must be universally accepted. Note:- I am not questioning Einstein's theory of general relativity but the existence of such a point in space where matter goes into a singularity is what questions me.As far as I know universe is continuously expanding and existence of such a hypothetical and supporting it shouldn't be done unless it is somehow proved mathematically. Answer: I think you got it exactly backwards. There are theoretical demonstrations of the Einstein-Rosen bridge, i.e. you can write down the Schwarzschild solution the Einstein field equations, do a couple changes of coordinates, and demonstrate the system contains a wormhole; see any general relativity textbook for this. There are of course no experimental demonstrations -- that would be sensational. The other objection you have is that singularities should be impossible because the universe is expanding. This doesn't make sense on the scale of a black hole because the expansion of the universe is very weak; it's totally undetectable for anything short of cosmological scales. On the other hand, it's a fair question why the universe has concentrated lumps of matter in it, such as galaxies and stars, when you would expect a uniform distribution. This is due to gravitational instability: matter attracts matter in a runaway process, so any inhomogeneities get amplified. Remarkably, this doesn't violate the second law of thermodynamics because lots of entropy is produced by gravitational potential energy going into kinetic energy during collapse. 
(I've heard some people go so far to say that's the fundamental reason life can exist at all, though it really depends on what you mean by 'fundamental'.) Mathematically, once you have these concentrated lumps of matter you can prove sufficiently dense matter must form black holes, e.g. the TOV limit for neutron stars. Then the Penrose-Hawking singularity theorems essentially say that if gravity in a region is strong enough, such as in a black hole, there must be a singularity.
{ "domain": "physics.stackexchange", "id": 46763, "tags": "general-relativity, black-holes, experimental-physics, wormholes" }
subscribe to image topic over WiFi
Question: Hi guys, I have an Asctec Pelican here with a uEye camera attached to it. We're trying to set it up such that we can subscribe to the topic from a remote PC and view the video feed in real time. We set up a local WiFi network and experienced tons of latency and a very low framerate. We verified that the images are published at the desired rate on the Asctec itself, so it seems like it's either the Asctec or WiFi bandwidth that's causing the issue. Has any of you tried this type of thing before? Thanks, Originally posted by yan on ROS Answers with karma: 31 on 2012-06-19 Post score: 3 Answer: You should try the following: rosrun image_view image_view image:=/camera/image _image_transport:=theora ...with /camera/image the image topic you want to display. Using image transport greatly decreases the bandwidth required by the streaming, but it requires CPU power to compress the data on your robot. The only other option is to stream different data: lower resolution, black and white only, etc. PS: yes, WiFi routers usually behave very badly when they reach their limits: freezes, bad performance, reboots, etc. So what you experience is probably an access point performance issue. Originally posted by Thomas with karma: 4478 on 2012-06-21 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 9850, "tags": "ros, wifi, image-transport" }
What happens in Young's double slit experiment and Fraunhofer diffraction actually?
Question: In Fraunhofer diffraction, is it the interference of light rays from two secondary sources on a wavefront of a primary source? And in Young's double slit experiment, is it the interference of light rays from two primary sources? I have read these theories of Fraunhofer diffraction and Young's double slit experiment from some lectures, and literally all of them were vague about whether they were speaking of interference of light rays from two primary sources or from two secondary sources. It's also unclear if they used light rays or wavefronts. If they used light rays only, I would like to know what it would look like to explain these using wavefronts only. But for now it would be alright to clarify where light rays of two primary sources interact and where light rays of two secondary sources interact, and also to let me know if I made a mistake in describing what I have understood, using wavefronts and light rays, in the description below and in pictures 1 and 2. What I have understood is: In Fraunhofer diffraction, the primary source of light is located at an infinite distance from the slit, and the screen is also located at an infinite distance from the slit, on the opposite side from the primary source. The light rays coming out of the primary source are parallel to one another. Since light rays are always perpendicular to the wavefront, a particular wavefront in them would be a straight line. Speaking of that particular wavefront only, it will be bent after it has passed through the slit and, according to Huygens' principle, all points of that wavefront will act like secondary sources and thus will emit light rays in all directions. Picking any two points of that wavefront, their parallel light-ray pairs would meet on the screen and, depending on the phase difference, they are more likely to form bright or dark fringe(s).
In Young's double slit experiment, two primary sources emit light rays in all directions and, depending on the phase difference, interacting light rays form bright or dark fringe(s). Answer: Light from the (primary) source travels by different routes (through different slits) to each point on the screen where you observe the effects of interference. We can regard wavefronts (loci of equal phase) travelling from the source as being divided and given curvature by the slits. But it's not wrong to treat the light as having come from the slits themselves, as long as it's realised that they won't necessarily be in-phase sources. [They will be if they are at equal optical path lengths from the primary source.] Provided that the light source is small and far enough away from the double slits (or at the principal focus of a converging lens), the slits will, though, be coherent sources. Note that the set-up in your lower diagram won't give you an interference pattern, because the sources will be incoherent (no constant phase relationship between them). "It's also unclear if they used light rays or wavefronts." Remember that rays are simply the directions of travel of (parts of) the wavefronts. As you say, rays are normal to wavefronts. If the words are used correctly they shouldn't give rise to different explanations of interference.
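For two coherent, in-phase slits, the standard small-angle result gives fringes spaced by $y = \lambda L/d$ on the screen. A quick numerical sketch (the wavelength, slit separation, and screen distance below are illustrative values, not from the question):

```python
import numpy as np

# Fringe spacing for two coherent slits, y = lambda * L / d
# (the numbers below are illustrative, not from the question):
lam = 633e-9     # m, He-Ne laser wavelength
d = 0.25e-3      # m, slit separation
L = 1.0          # m, slit-to-screen distance

fringe_spacing = lam * L / d
print(f"fringe spacing: {fringe_spacing * 1e3:.2f} mm")

# Screen intensity for two in-phase coherent sources (small-angle limit):
y = np.linspace(-5e-3, 5e-3, 2001)
I = np.cos(np.pi * d * y / (lam * L)) ** 2   # maxima repeat every fringe_spacing
```

Because the pattern depends on the phase difference of the two paths, it only appears when the slits are coherent, which is the condition stressed in the answer above.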
{ "domain": "physics.stackexchange", "id": 86583, "tags": "homework-and-exercises, waves, interference, double-slit-experiment, diffraction" }
Qt Number Generator v2
Question: Link to the old question. I tried to learn some new things from the answers and here's what I did: Used Qt Designer so that in Config.h, all member variables and widget positioning are gone User can't set the lower bound higher than the upper and vice versa Replaced obsolete qrand with generators from the C++11 <random> library Used the Qt5 version of connect Better UI with more options I like code which can be read as plain English text; that's why I made functions such as _removeLastChar or _correctInputParameters which are basically one-liners but which, for me, improve readability a lot. Code Review IMHO taught me the most about code quality; that is why I am asking for a review of this "new" version. generator.h #ifndef GENERATOR_H #define GENERATOR_H #include <QMainWindow> class QSpinBox; namespace Ui { class Generator; } class Generator : public QMainWindow { Q_OBJECT public: explicit Generator(QWidget *parent = nullptr); ~Generator(); public slots: void generateNumber(); void clear(); void saveToFile(); void setMinValue(int); void setMaxValue(int); private: Ui::Generator *ui; qint32 _generateNumber(); QString _getSeparator(); QString _nums; bool _correctInputParameters(); bool _oneLineOutput(); void _generateNumbers( int from, int to, bool random ); void _removeLastChar( QString& string ); }; #endif // GENERATOR_H generator.cpp #include "generator.h" #include "ui_generator.h" #include <random> #include <iostream> #include <QMessageBox> #include <QTextStream> #include <QFileDialog> Generator::Generator(QWidget *parent) : QMainWindow(parent) , ui(new Ui::Generator) { ui->setupUi(this); connect(ui->generateButton, &QPushButton::clicked, this, &Generator::generateNumber); connect(ui->clearButton, &QPushButton::clicked, this, &Generator::clear); connect(ui->saveButton, &QPushButton::clicked, this, &Generator::saveToFile); connect(ui->exitButton, &QPushButton::clicked, this, &QApplication::exit); connect(ui->minimumSpinBox, static_cast<void
(QSpinBox::*)(int)>(&QSpinBox::valueChanged), this, &Generator::setMinValue); connect(ui->maximumSpinBox, static_cast<void (QSpinBox::*)(int)>(&QSpinBox::valueChanged), this, &Generator::setMaxValue); } void Generator::generateNumber() { clear(); int numbersCount = ui->numbers->value (); _nums = ""; // random numbers if ( ui->random->isChecked () ) { _generateNumbers (0, numbersCount, true); } // sequential numbers else { int lower = ui->minimumSpinBox->value (); int upper = ui->maximumSpinBox->value (); _generateNumbers (lower, upper + 1, false); } ui->textEdit->setText (_nums); } void Generator::_generateNumbers( int low, int high, bool random ) { QString separator = _getSeparator(); for ( qint32 i = low; i < high; ++i ) { if ( random ) { // random _nums += QString::number ( _generateNumber () ); } else { // sequential _nums += QString::number( i ); } _nums += separator; // output into multiple lines if ( !_oneLineOutput () ) { _nums += "\n"; } } // get rid of the last separator char if ( _oneLineOutput () && separator != "" ) { _removeLastChar(_nums);} } void Generator::saveToFile () { QString filename = QFileDialog::getSaveFileName (this, tr("Save numbers"), "", tr("Text file (*.txt);;All Files(*)")); if ( filename.isEmpty () ) { return; } QFile output( filename ); if ( !output.open(QIODevice::WriteOnly | QIODevice::Text) ) { QMessageBox::information( this, tr("Unable to open file"), output.errorString() ); return; } QTextStream ts( &output ); ts << _nums.toUtf8 (); output.close(); } qint32 Generator::_generateNumber() { std::random_device rd; std::default_random_engine eng(rd()); std::uniform_int_distribution< qint32 > distr( ui->minimumSpinBox->value (), ui->maximumSpinBox->value () ); return distr(eng); } QString Generator::_getSeparator() { auto separator = ui->separator->currentText(); if ( separator == "(space)" ) return " "; if ( separator == "(nothing)" ) return ""; return separator; } void Generator::setMinValue( int newValue ) { auto maxValue = 
ui->maximumSpinBox->value (); if ( newValue > maxValue ) { ui->minimumSpinBox->setValue ( maxValue ); } } void Generator::setMaxValue ( int newValue ) { auto minValue = ui->minimumSpinBox->value (); if ( newValue < minValue ) { ui->maximumSpinBox->setValue (minValue); } } void Generator::clear (){ ui->textEdit->clear (); } void Generator::_removeLastChar( QString &string ) { string.remove ( string.size () - 1, 1 ); } bool Generator::_correctInputParameters() { return ui->minimumSpinBox->value () <= ui->maximumSpinBox->value (); } bool Generator::_oneLineOutput() { return ui->oneLine->isChecked (); } Generator::~Generator() { delete ui; } Answer: Seems like you've made a lot of improvements since your previous post! Let's get into the review! 1. General Overview a. Use the on_<objectName>_<signal> Naming Scheme for Slots This naming scheme tells the moc to automatically connect a slot with the corresponding <signal> of <objectName> from the UI. We then don't need to call connect(...), saving us a few lines of code. If we take a look at the clearButton UI object, we can get this auto-connect behaviour by renaming the clear method to on_clearButton_clicked. The implementation doesn't change, only the symbol. This process of pinpointing the correct slot name is automated from Design mode. First, right-click the object itself or the listing on the object-class tree. Then select the signal to connect and the slot to go to. Qt will automatically generate the on_clearButton_clicked slot in the header and source files (if it doesn't exist yet). Right-click on your Clear button to bring up the context menu and select Go to slot... Choose the clicked() signal and click OK. Now you no longer need to manually connect with connect(...). You can apply this to generateButton, clearButton, saveButton, minimumSpinBox, and maximumSpinBox. Yay, 5 fewer lines of code! 5 fewer worries!
(To be clear, static_cast<void (QSpinBox::*)(int)> isn't needed for minimumSpinBox and maximumSpinBox, as the correct overload can be automatically deduced.) Also note that this naming scheme doesn't have to be used for every slot – it is primarily used for those slots which have a corresponding signal from the UI. b. Consistency in Order of Methods in Header and Source Files In your header file, the first four function-like declarations are public: explicit Generator(QWidget *parent = nullptr); ~Generator(); public slots: void generateNumber(); void clear(); However, in your source file, the definition for the destructor comes last. This harms readability. Most readers may be expecting the same ordering of methods in both header and source files. Does this mean the header file should conform to the ordering of the source file? Something like below perhaps? public: explicit Generator(QWidget *parent = nullptr); public slots: void generateNumber(); void clear(); public: ~Generator(); Nawww, the source file should conform to the header file. Please, please, please; if you declare the destructor right after the constructor, define the destructor right after the constructor. Generator::Generator(QWidget *parent) : QMainWindow(parent) , ui(new Ui::Generator) { ui->setupUi(this); connect(ui->generateButton, &QPushButton::clicked, this, &Generator::generateNumber); connect(ui->clearButton, &QPushButton::clicked, this, &Generator::clear); connect(ui->saveButton, &QPushButton::clicked, this, &Generator::saveToFile); connect(ui->exitButton, &QPushButton::clicked, this, &QApplication::exit); connect(ui->minimumSpinBox, static_cast<void (QSpinBox::*)(int)>(&QSpinBox::valueChanged), this, &Generator::setMinValue); connect(ui->maximumSpinBox, static_cast<void (QSpinBox::*)(int)>(&QSpinBox::valueChanged), this, &Generator::setMaxValue); } Generator::~Generator() { delete ui; } // other methods // ... c. Naming i. _generateNumbers(int ?, int ?, bool random) A minor issue.
You have void _generateNumbers( int from, int to, bool random ); in your header file but void Generator::_generateNumbers( int low, int high, bool random ) { in your source code. Choose either from/to or low/high, but not both. ii. _correctInputParameters and _oneLineOutput For methods that return bool (also known as predicates), consider starting the method with is or has. bool _hasCorrectInputParameters(); bool _isOneLineOutput(); Helps with readability. We don't need any special guesswork to infer that these will return bool. 2. Logic The logic and program flow seem a tad messy, let's try cleaning it up! a. clear() What should this clear? Only the text-edit? I'd clear _nums as well. void Generator::clear() { ui->textEdit->clear(); _nums.clear(); } The last thing we want is to have the clear method clear only the gui and leave the variable sitting. Clear it all at once! Doing so allows us to pinpoint bugs more easily – we don't have to spend 30 minutes digging through the entire code to find a lone _nums = "" placed wrongly. b. generateNumber and _generateNumbers and _generateNumber First off, these methods could do with better naming. As soon as I type generate, the IDE completer will show these three methods and it all suddenly becomes ambiguous. Be specific with what each method does. _generateNumber only generates random numbers, so change it to _generateRandomNumber. generateNumber handles the button click, so follow the first section of this answer and change it to on_generateButton_clicked. _generateNumbers is a fine name as it is. Down to the logic. It doesn't really make sense to retrieve values of minimumSpinBox and maximumSpinBox in two places (one, in generateNumber, under the else branch; and two, in _generateNumber). Retrieve them once, then pass them accordingly. By the same principle, since only the random option needs int numbersCount = ui->numbers->value();, this should be placed in _generateNumbers instead.
void Generator::generateNumber() { clear(); // _nums = ""; // moved to clear(); same as _nums.clear() int low = ui->minimumSpinBox->value(); // retrieve values from spinboxes ONCE int high = ui->maximumSpinBox->value(); _generateNumbers(low, high+1, ui->random->isChecked()); // universal, no need for if-else ui->textEdit->setText(_nums); } This also means changing _generateNumber to accept parameters so that we can later pass the low and high in generateNumber: qint32 Generator::_generateNumber(int low, int high) { std::random_device rd; std::default_random_engine eng(rd()); std::uniform_int_distribution<qint32> distr(low, high); return distr(eng); } Currently, _generateNumbers serves two purposes: generating random numbers and generating sequential numbers. However, the arguments used are for completely different purposes, which is... meh... until their names contradict with their purpose which merits another meh. This seems like a big red sign to me: if ( ui->random->isChecked () ) { _generateNumbers (0, numbersCount, true); // low = 0, high = numbersCount ? } Does this use case not imply that the generated numbers should be between 0 and numbersCount? Apparently not... Apparently, low and high in that context means to generate high – low number of values. Since the purposes and use cases are different, it only makes sense to have different implementations, so branch your if-else well ahead of the for-loop(s). void Generator::_generateNumbers( int low, int high, bool random ) { QString separator = _getSeparator(); if (random) { int numbersCount = ui->numbers->value(); // generate random numbers between low and high // for (int i = 0; i < numbersCount; i++) // ... } else { // generate random numbers between low and high // ... } // get rid of the last separator char if ( _oneLineOutput () && separator != "" ) { _removeLastChar(_nums);} } 3. 
UI On the UI side, I'd consider removing some borders – they do get in the way, especially the ones around Generate Numbers and the ones around your four buttons. You should also consider disabling the How many numbers spinbox if the selected number pattern is Sequential as the two are mutually exclusive options. But then, you could also consider the case where the user selects Sequential and provides a from-value and how-many-numbers-value but doesn't provide a to-value. Sounds like you could make this a third Numbers option: Sequential-n. It's unfortunate that we don't get the .ui to play around with, but nonetheless, it looks stunning and functional from afar. :-)
{ "domain": "codereview.stackexchange", "id": 34218, "tags": "c++, c++11, random, gui, qt" }
How to define a custom resampling methodology
Question: I'm using an experimental design to test the robustness of different classification methods, and now I'm searching for the correct definition of such a design. I'm creating different subsets of the full dataset by cutting away some samples. Each subset is created independently with respect to the others. Then, I run each classification method on every subset. Finally, I estimate the accuracy of each method as how many classifications on subsets are in agreement with the classification on the full dataset. For example: Classification-full 1 2 3 2 1 1 2 Classification-subset1 1 2 2 3 1 Classification-subset2 2 3 1 1 2 ... Accuracy 1 1 1 1 0.5 1 1 Is there a correct name for this methodology? I thought it could fall under bootstrapping but I'm not sure about this. Answer: Random subsampling seems appropriate, bootstrapping is a bit more generic, but also correct. Here are some references and synonyms: http://www.frank-dieterle.com/phd/2_4_3.html
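The per-sample agreement score described in the question can be sketched in a few lines. This is a hypothetical helper (the function name and the index-to-label mapping representation are my own choices, not from the question), assuming each subset's predictions are given as a dict keyed by the original sample index:

```python
def agreement_per_sample(full_labels, subset_predictions):
    """For each sample, the fraction of subset classifications that agree
    with the full-dataset classification. Samples that appear in no
    subset are skipped."""
    scores = {}
    for i, label in enumerate(full_labels):
        votes = [preds[i] for preds in subset_predictions if i in preds]
        if votes:
            scores[i] = sum(v == label for v in votes) / len(votes)
    return scores

# Hypothetical mini-example: sample 0 appears in both subsets, sample 1 in one.
full = [1, 2]
subsets = [{0: 1, 1: 2}, {0: 2}]
print(agreement_per_sample(full, subsets))  # -> {0: 0.5, 1: 1.0}
```

The question's own table leaves the sample-to-subset alignment implicit, which is why explicit indices are used here.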
{ "domain": "datascience.stackexchange", "id": 65, "tags": "classification, definitions, accuracy, sampling" }
Model structure for Player Team and Match in football application
Question: I'm creating an application for foosball matches. I have models like below: class Player(models.Model): match_amount = models.IntegerField(default=0) wins = models.IntegerField(default=0) avatar = models.ImageField(blank=True) user = models.OneToOneField( on_delete=models.CASCADE, to=User, primary_key=True, related_name='player', verbose_name=_('user'), ) def __str__(self): return self.user.get_full_name() class Team(models.Model): players = models.ManyToManyField(Player) wins = models.IntegerField(default=0) def __str__(self): return "Team %s" % self.pk class Match(models.Model): date = models.DateField(default=now) def __str__(self): return "Match on %s" % self.date class TeamMatch(models.Model): team = models.ForeignKey(Team) match = models.ForeignKey(Match) points = models.IntegerField() def __str__(self): return "Match %s" % self.team Are these models well created? Thanks for any tips and corrections. Answer: It is difficult to say how good your models are, since it depends on your application requirements and demands, current and future/planned use cases. But, a few points off the top of my head: there are some important fields missing - for example, team names I don't particularly like the TeamMatch model. Instead, I'd expect the Match to have links to the home and away teams: class Match(models.Model): home_team = models.ForeignKey(Team, related_name='home_matches') away_team = models.ForeignKey(Team, related_name='away_matches') date = models.DateField(default=now) def __str__(self): return "Match between '%s' and '%s' on %s" % (self.home_team.name, self.away_team.name, self.date) you will also need some way of keeping the score of a match. A separate MatchResult model? how about a use case when a player moves from one team to another? 
If this is something you want to keep track of, you would probably need something like a PlayerContract model to relate players and teams for a certain period of time Also, please see these similar model design discussions: How should I design my django models for team, player and match objects? Models for a team
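A minimal sketch of the PlayerContract idea mentioned above (the field names and the null-end-date convention are my own guesses, not from the answer, and this is untested against a real Django project):

```python
from django.db import models

class PlayerContract(models.Model):
    # Relates a player to a team for a period of time; a null end_date
    # can mean the contract is still active (an assumed convention).
    player = models.ForeignKey('Player', related_name='contracts')
    team = models.ForeignKey('Team', related_name='contracts')
    start_date = models.DateField()
    end_date = models.DateField(null=True, blank=True)

    def __str__(self):
        return "%s at %s" % (self.player, self.team)
```

With this in place, Team.players could be derived from active contracts rather than stored in a ManyToManyField, avoiding the two sources of truth.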
{ "domain": "codereview.stackexchange", "id": 24521, "tags": "python, python-3.x, django" }
Symmetry of a tensor
Question: This is from my notes, which I don't fully understand: It is straightforward to check that (anti)symmetry is a coordinate-independent notion, e.g., if the components of a tensor are symmetric in some coordinate system, they are symmetric in all. note that it only makes sense to discuss symmetry of pairs of contra- or co-variant indices, but not a mix of the two. I'm guessing in 1 that the notes are referring to symmetry in all coordinates, not just two (or any subset) specific coordinates. I'm happy that this is the case for the symmetric case, but could someone explain this more in depth in the anti-symmetric case? I'm not sure why 2 is the case too, surely we can construct a tensor which is symmetric over a pair of contra- and covariant indices? i.e. $A^\mu_{\lambda} = A^\lambda_\mu$ Answer: I think you are confused by the meaning of the first statement. It does not say that (anti-)symmetry in two indices implies (anti-)symmetry in all indices; one is perfectly free to have tensors that are (anti-)symmetric in any number of indices, as long as these are of the same type. Instead, it refers to the fact that symmetries of tensors are untouched by coordinate transformations. A (p,q)-tensor is a linear map from p covectors and q vectors to (in a GR context) the real numbers. For example, let's look at a (1,2)-tensor $A^{\mu}_{\ \ \ \nu \rho}$. We can contract this object with a covector $W_{\mu}$ and two vectors $U^{\nu}$ and $V^{\rho}$ to form the real number $A^{\mu}_{\ \ \ \nu \rho} W_{\mu} U^{\nu} V^{\rho}$. $A$ being (anti-)symmetric in its lower indices means that switching $U$ and $V$ leaves the outcome unchanged, apart from a sign in the antisymmetric case: $A^{\mu}_{\ \ \ \nu \rho} W_{\mu} U^{\nu} V^{\rho} = \pm A^{\mu}_{\ \ \ \nu \rho} W_{\mu} V^{\nu} U^{\rho}$. This corresponds to the requirement that $A^{\mu}_{\ \ \ \nu \rho} = \pm A^{\mu}_{\ \ \ \rho \nu}$. 
Since (co)vector components are coordinate-dependent, the tensor components should be as well. In general, a tensor transforms by contraction with the coordinate changes $\partial x ^{\mu} / \partial y ^{\sigma}$. The notion of a tensor being (anti-)symmetric could well be coordinate-dependent: a tensor that is symmetric in some coordinate system could lose its symmetry in some other coordinate system. The content of your first statement is that this is not the case. This is rather easily shown. Let's focus on a (0,2)-tensor $A_{\mu \nu}$ for now. Under a change of coordinates $\partial x ^{\mu} / \partial y ^{\sigma}$ this tensor becomes: $$ A_{\rho \sigma } = \frac{\partial x^{\mu}}{\partial y ^{\rho}} \frac{\partial x^{\nu}}{\partial y ^{\sigma}} A_{\mu \nu}. $$ Now, if $A_{\mu \nu}$ is symmetric, that is, $A_{\mu \nu} = A_{\nu \mu}$, $$ A_{\rho \sigma } = \frac{\partial x^{\mu}}{\partial y ^{\rho}} \frac{\partial x^{\nu}}{\partial y ^{\sigma}} A_{\mu \nu} = \frac{\partial x^{\mu}}{\partial y ^{\rho}} \frac{\partial x^{\nu}}{\partial y ^{\sigma}} A_{\nu \mu}. $$ We can now switch the $\mu$ and $\nu$ indices, since these are dummy indices which we sum over, to obtain $$ A_{\rho \sigma } = \frac{\partial x^{\nu}}{\partial y ^{\rho}} \frac{\partial x^{\mu}}{\partial y ^{\sigma}} A_{\mu \nu} = A_{\sigma \rho }, $$ showing that $A$ is also symmetric in the new coordinates. If $B_{\mu\nu}$ is an antisymmetric (0,2)-tensor, we would get a minus sign; otherwise, the procedure to show antisymmetry of $B_{\rho \sigma}$ is identical. Your second statement means that it does not make sense to compare upper with lower indices, since this would correspond to comparing vectors with covectors. Sure, you can construct a list of numbers that (numerically) satisfies $A_{\mu}^{\nu} = A_{\nu}^{\mu}$, but this 'symmetry' would not have any sensible meaning, since the index types of the LHS and RHS do not match.
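The coordinate-independence argument can also be checked numerically. Here is a small sketch (the matrices below are arbitrary illustrative examples, not from the answer) contracting a (0,2)-tensor with the Jacobian on both slots, exactly as in the transformation law above:

```python
def transform(A, J):
    """A'_{rs} = J^m_r J^k_s A_{mk}: transform a (0,2)-tensor, given as a
    nested list, where J[m][r] plays the role of dx^m / dy^r."""
    n = len(A)
    return [[sum(J[m][r] * J[k][s] * A[m][k]
                 for m in range(n) for k in range(n))
             for s in range(n)] for r in range(n)]

J = [[1.0, 3.0], [0.0, 2.0]]   # an invertible Jacobian (arbitrary example)
S = [[1.0, 2.0], [2.0, 5.0]]   # symmetric (0,2)-tensor
B = [[0.0, 1.0], [-1.0, 0.0]]  # antisymmetric (0,2)-tensor

St, Bt = transform(S, J), transform(B, J)
assert St[0][1] == St[1][0]    # symmetry survives the transformation
assert Bt[0][1] == -Bt[1][0]   # antisymmetry survives, with the sign flip
```

This is just the index calculation of the answer in matrix form (St equals Jᵀ S J), so the preservation of (anti)symmetry is no surprise, but it makes the dummy-index relabelling step tangible.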
{ "domain": "physics.stackexchange", "id": 52764, "tags": "general-relativity, metric-tensor, tensor-calculus, relativity" }
Why are telescopes on top of Mauna Kea instead of Mauna Loa?
Question: Many large and important telescopes are located on top of Mauna Kea on Hawaii. This is a great location for many reasons: it's tall enough to be above the weather, an inversion layer at night keeps the atmosphere clear and dry, etc. But right next to Mauna Kea is an equally tall mountain Mauna Loa. Although there are some instruments on Mauna Loa (a NOAA station) there aren't major telescopes. Why? What makes Mauna Kea a better location? Answer: Mauna Loa is an active volcano. The last eruption was in 1984.
{ "domain": "physics.stackexchange", "id": 8603, "tags": "astronomy, telescopes" }
summit hokuyo data
Question: Hello Robotnik, I have some question about sensor hokuyo and the data received. I have some troubles to use the stack summit_sim in FUERTE. I rosmaked summit_description, summit_controller inside this stack and launch the description robot: $ roslaunch summit_description summit.launch It looks great but in the verbose of gazebo server I have this warning: ... ... Dbg skipping prefixed element [interface:audio] when copying plugins Dbg skipping prefixed element [interface:audio] when copying plugins Dbg skipping prefixed element [interface:position] when copying plugins spawn status: SpawnModel: successfully spawned model Dbg plugin parent sensor name: hokuyo_laser [ WARN] [1338277432.923449936]: Laser plugin missing <hokuyoMinIntensity>, defaults to 101 [ INFO] [1338277432.923541873]: INFO: gazebo_ros_laser plugin should set minimum intensity to 101.000000 due to cutoff in hokuyo filters. Error [Plugin.hh:100] Failed to load plugin libgazebo_ros_time.so: libgazebo_ros_time.so: cannot open shared object file: No such file or directory Dbg plugin model name: summit [ INFO] [1338277432.931752412]: starting gazebo_ros_controller_manager plugin in ns: / [ INFO] [1338277432.931997347]: Callback thread id=0x7f7bc00cedf0 Gtk-Message: Failed to load module "canberra-gtk-module" Gtk-Message: Failed to load module "canberra-gtk-module" [spawn_object-6] process has finished cleanly log file: /home/pablo/.ros/log/07bdad80-a962-11e1-ac58-5404a64baafd/spawn_object-6*.log [ WARN] [1338277433.242845571]: imu plugin missing <serviceName>, defaults to /default_imu [ WARN] [1338277433.242977926]: imu plugin missing <xyzOffset>, defaults to 0s [ WARN] [1338277433.243170950]: imu plugin missing <rpyOffset>, defaults to 0s [ INFO] [1338277433.328125825, 0.022000000]: waitForService: Service [/gazebo/set_physics_properties] is now available. [ INFO] [1338277433.364027285, 0.058000000]: Starting to spin physics dynamic reconfigure node... 
[INFO] [WallTime: 1338277433.501923] [0.195000] Loaded controllers: summit_controller [INFO] [WallTime: 1338277433.505892] [0.199000] Started controllers: summit_controller Ok this is very weird because I can launch other worlds without troubles. I don't know why but I can't see the fan laser rays in the world. I can get the information of the hokuyo sensor (very bad data because it isn't logical data --> I receive only values between 0.25 and 0.05, it looks like the sensor is inside the chassis of the robot.). In case it helps, I put the code here: <!-- HOKUYO SENSOR --> <joint name="hokuyo_laser_joint" type="fixed"> <axis xyz="0 1 0" /> <origin xyz="0.226 0 0.125"/> <!-- origin xyz="0.226 0 0.126"/ --> <parent link="base_link"/> <child link="hokuyo_laser_link"/> </joint> <link name="hokuyo_laser_link" type="laser"> <inertial> <mass value="0.001" /> <origin xyz="0 0 0" rpy="0 0 0" /> <inertia ixx="0.0001" ixy="0" ixz="0" iyy="0.000001" iyz="0" izz="0.0001" /> </inertial> </link> <!-- This adds a visual box to allow us to see the Hokuyo in rviz/gazebo --> <joint name="hokuyo_laser_box_joint" type="fixed"> <origin xyz="0 0 -0.02" rpy="0 0 0" /> <!-- sensor hokuyo down 2 cm --> <parent link="hokuyo_laser_link" /> <child link="hokuyo_laser_box_link"/> </joint> <link name="hokuyo_laser_box_link"> <inertial> <mass value="0.01" /> <origin xyz="0 0 0" /> <inertia ixx="0.001" ixy="0.0" ixz="0.0" iyy="0.001" iyz="0.0" izz="0.001" /> </inertial> <visual> <origin xyz="0 0 0" rpy="0 0 0"/> <geometry> <!--box size="0.05 0.05 0.1" /--> <mesh filename= "package://pr2_description/meshes/tilting_laser_v0/hok_tilt.stl" scale="0.5 0.5 0.5"/> </geometry> </visual> <collision> <origin xyz="0 0 0" rpy="0 0 0"/> <geometry> <box size="0.05 0.05 0.1" /> </geometry> </collision> </link> <gazebo reference="hokuyo_laser_box_link"> <material>Gazebo/Blue</material> <!-- missing magenta color in Gazebo.materials --> <turnGravityOff>false</turnGravityOff> </gazebo> <!-- Other controllers (HOKUYO SENSOR) 
--> <gazebo reference = "hokuyo_laser_link"> <sensor:ray name="hokuyo_laser"> <rayCount>640</rayCount> <rangeCount>640</rangeCount> <laserCount>1</laserCount> <origin>0.0 0.0 0.0</origin> <!--<displayRays>fan</displayRays>--> <minAngle>-120</minAngle> <maxAngle>120</maxAngle> <minRange>0.05</minRange> <maxRange>4.0</maxRange> <resRange>0.001</resRange> <updateRate>10.0</updateRate> <controller:gazebo_ros_laser name="gazebo_ros_hokuyo_laser_controller" plugin="libgazebo_ros_laser.so"> <gaussianNoise>0.005</gaussianNoise> <alwaysOn>true</alwaysOn> <updateRate>10.0</updateRate> <topicName>hokuyo_laser_topic</topicName> <frameName>hokuyo_laser_link</frameName> <interface:laser name="gazebo_ros_hokuyo_laser_iface" /> </controller:gazebo_ros_laser> </sensor:ray> </gazebo> I know this is a mix of formats (old deprecated format + new gazebo 1.0.1 format) but this stack is close enough to work in FUERTE (I can teleoperate with the keyboard and subscribe to hokuyo_laser_topic with my own node) Originally posted by pmarinplaza on ROS Answers with karma: 330 on 2012-05-28 Post score: 0 Answer: Hi, yes, you just need to modify the height of the laser link Originally posted by Robotnik with karma: 71 on 2012-11-16 This answer was ACCEPTED on the original site Post score: 0
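Concretely, the accepted fix amounts to raising the z component of hokuyo_laser_joint's origin until the scan plane clears the chassis. A sketch of the change (0.30 m is my own placeholder height, not a value from the answer; tune it to the actual robot geometry):

```xml
<!-- hypothetical values: lift the sensor clear of the chassis -->
<joint name="hokuyo_laser_joint" type="fixed">
  <axis xyz="0 1 0" />
  <origin xyz="0.226 0 0.30"/>  <!-- was z=0.125, which left the scan plane inside the body -->
  <parent link="base_link"/>
  <child link="hokuyo_laser_link"/>
</joint>
```

The symptom of ranges pinned between minRange (0.05) and ~0.25 is consistent with the rays hitting the robot's own collision geometry at point-blank distance.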
{ "domain": "robotics.stackexchange", "id": 9580, "tags": "ros, gazebo, laser, gazebo-plugin, hokuyo" }
ObservableQueue
Question: I'm looking for feedback on this. public sealed class ObservableQueue<T> : IObservable<T>, IDisposable { private readonly object _lock = new object(); private ImmutableQueue<T> _processQueue = ImmutableQueue<T>.Empty; private ImmutableList<IObserver<T>> _observers = ImmutableList<IObserver<T>>.Empty; private readonly ManualResetEvent _active = new ManualResetEvent(true); private readonly ManualResetEvent _itemEnqueued = new ManualResetEvent(false); private readonly ManualResetEvent _completed = new ManualResetEvent(false); private readonly Thread _thread; private bool _disposed; public ObservableQueue() { _thread = new Thread(Execute); _thread.IsBackground = true; _thread.Start(); } public void Enqueue(T value) { ImmutableInterlocked.Enqueue(ref _processQueue, value); _itemEnqueued.Set(); } public IDisposable Subscribe(IObserver<T> observer) { if (observer == null) throw new ArgumentNullException("observer"); Interlocked.Exchange(ref _observers, _observers.Add(observer)); return new Subscription(this, observer); } public void Dispose() { lock (_lock) { if (_disposed) return; _active.Reset(); _itemEnqueued.Set(); _completed.WaitOne(); _active.Dispose(); _itemEnqueued.Dispose(); _completed.Dispose(); _disposed = true; } } private void Execute() { try { while (_active.WaitOne(1)) { if (!_itemEnqueued.WaitOne()) continue; T value; while (ImmutableInterlocked.TryDequeue(ref _processQueue, out value)) OnNext(value); _itemEnqueued.Reset(); } OnCompleted(); _completed.Set(); } catch (Exception ex) { OnError(ex); } } private void OnCompleted() { foreach (var observer in _observers) { try { observer.OnCompleted(); } catch { } } } private void OnNext(T value) { foreach (var observer in _observers) { try { observer.OnNext(value); } catch (Exception ex) { OnError(ex); } } } private void OnError(Exception ex) { foreach (var observer in _observers) { try { observer.OnError(ex); } catch { } } } private sealed class Subscription : IDisposable { private readonly 
ObservableQueue<T> _instance; private readonly IObserver<T> _observer; public Subscription(ObservableQueue<T> instance, IObserver<T> observer) { _instance = instance; _observer = observer; } public void Dispose() { Interlocked.Exchange(ref _instance._observers, _instance._observers.Remove(_observer)); } } } Answer: The queue is dependent on the subscribers; throws errors based on subscriber state. And in order to be useful the subscribers must depend on it. There's significant afferent and efferent coupling in the implementation. Perhaps not a good thing. An alternative would be to write to a stream or bus or log and let clients read from the stream according to their needs. Adding a reader interface would let the subscribers know what the queue is committed to doing...and what it is not committed to (e.g. how does it flush? when full does it drop oldest events or refuse new ones? what does it do when it is empty? etc.).
{ "domain": "codereview.stackexchange", "id": 10321, "tags": "c#, .net, immutability, observer-pattern" }
Reverse Polish Notation in F#
Question: In my quest to learn F#, I've decided to get one step closer to creating a programming language, and implement this simple Reverse Polish Notation "interpreter" of sorts. It does not allow for parentheses in input ( ), and only accepts expressions containing the following valid tokens: 0 1 2 3 4 5 6 7 8 9 + - * / Here's a small list of sample inputs and outputs for reference: 2 2 + -> 4 10 10 + 5 * -> 100 2 2 * 4 + 2 * -> 16 I have a few concerns here: Is this written in a proper "functional" way? Is there any way to shorten evaluate_expr? Are there any glaring issues that I missed? open System open System.Collections.Generic open System.Text.RegularExpressions /// <summary> /// Evaluate an expression pair, like '2 2 +'. /// </summary> /// <param name="operand">The operand to use.</param> let evaluate_expr_pair (a: string) (b: string) (operand: string) = match operand with | "+" -> (Int64.Parse(a) + Int64.Parse(b)).ToString() | "-" -> (Int64.Parse(a) - Int64.Parse(b)).ToString() | "*" -> (Int64.Parse(a) * Int64.Parse(b)).ToString() | "/" -> (Int64.Parse(a) / Int64.Parse(b)).ToString() | _ -> "" /// <summary> /// Evaluate a tokenized expression, such as '[| "2"; "2"; "+" |]', /// and return a result, in this case, '4'. 
/// </summary> /// <param name="expr">The expression to evaluate.</param> let evaluate_expr (expr: string[]) = let program_stack = new Stack<string>() for token in expr do program_stack.Push(token) match token with | "+" -> let operand = program_stack.Pop() let b = program_stack.Pop() let a = program_stack.Pop() program_stack.Push(evaluate_expr_pair a b operand) | "-" -> let operand = program_stack.Pop() let b = program_stack.Pop() let a = program_stack.Pop() program_stack.Push(evaluate_expr_pair a b operand) | "*" -> let operand = program_stack.Pop() let b = program_stack.Pop() let a = program_stack.Pop() program_stack.Push(evaluate_expr_pair a b operand) | "/" -> let operand = program_stack.Pop() let b = program_stack.Pop() let a = program_stack.Pop() program_stack.Push(evaluate_expr_pair a b operand) | _ -> () program_stack.Pop() /// <summary> /// Tokenize an input expression, such as '2 2 + 5 *'. /// </summary> /// <param name="expr">The expression to tokenize.</param> let tokenize_expr (expr: string) = let split_pattern = "\s*(\+|\-|\*|\/)\s*|\s+" let split_regex = new Regex(split_pattern) let result = split_regex.Split(expr) result.[0..result.Length - 2] /// <summary> /// Check all the tokens of an expression to make sure /// that they are all legal. 
/// </summary> /// <param name="expr">The expression to check.</param> let check_expr (expr: string) = let valid_token_pattern = "[\s0-9\+\-\*\/]" let valid_token_regex = new Regex(valid_token_pattern) for token in expr do if valid_token_regex.Match(token.ToString()).Success then () else Console.WriteLine(String.Format("Invalid token \"{0}\".", token)) [<EntryPoint>] let main argv = while true do Console.Write(": ") let input_expr = Console.ReadLine() input_expr |> check_expr let tokenized_expr = input_expr |> tokenize_expr let result = tokenized_expr |> evaluate_expr Console.Write("\n ") Console.Write(result) Console.Write("\n\n") 0 Answer: evaluate_expr_pair Your calculator performs integer division, which is a bit surprising. To fix it to do floating-point or decimal arithmetic, though, you would have to do a search-and-replace of Int64.Parse, which has been written an unreasonable number of times in evaluate_expr_pair. evaluate_expr_pair also does a lot of Parse and ToString round trips. Not only is that inefficient, it could also cost you some loss of precision if you were doing floating-point arithmetic. operand should really be called operator. (The operands are a and b.) Ignoring unrecognized operators is a bad idea, even if you have validated the input in check_expr. If not doing exception handling, I'd rather leave out the default case and let it crash. I'd also prefer to detect errors while attempting to evaluate the expression than pre-validate, because there are some errors that you can't reasonably detect without evaluating the expression (such as stack underflow). evaluate_expr Here, with your use of the program_stack, you are venturing outside the realm of functional programming, because the .Push() and .Pop() operations cause mutation. A more FP approach would be to use a List as an immutable stack. Instead of blindly pushing tokens, some of which are operators, onto a stack of strings, you should only put numeric data in the stack. 
You would be better off treating the operators as functions that manipulate the entire stack directly, instead of as functions that take two operands and return one result. Otherwise, you would have to implement an operation like swap as a special case, and distinguish between binary operations and unary operations such as negate and exp. tokenize_expr The simple strategy would be to use new Regex("\s+") as the delimiter pattern. You're treating the operators as captured delimiters — presumably to make spaces optional in an expression like 1 2+ — and discarding the last element of the resulting array. That works, as long as the expression ends with an operator. If the expression is just 5, for example, you'll discard a number. The regex could be written more succinctly using a [-+*/] character class. I wouldn't bother naming split_pattern and split_regex as variables. main printf is preferred over System.Console.Write. If you're going to set up a |> pipeline, don't interrupt it by defining let tokenized_expr — that just gets in the way. Summary RPN calculators are much simpler when the operators work directly on the stack, rather than having a controller feed the operands to them. In fact, every function except main has traces of the operator definitions. Here's an implementation that I came up with: open System open System.Text.RegularExpressions exception InputError of string let tokenize (expr: string) = expr |> (new Regex("\s+|\s*([-+*/])\s*")).Split |> Array.toList |> List.filter(fun s -> s.Length > 0) let perform (op: string) (stack: decimal list) = match (op, stack) with | ("+", a :: b :: cs) -> (b + a) :: cs | ("-", a :: b :: cs) -> (b - a) :: cs | ("*", a :: b :: cs) -> (b * a) :: cs | ("/", a :: b :: cs) -> (b / a) :: cs | ("swap", a :: b :: cs) -> b :: a :: cs | ("drop", a :: cs) -> cs | ("roll", a :: cs) -> cs @ [a] | (n, cs) -> try decimal n :: cs with | :? 
System.FormatException -> raise (InputError(n)) let evaluate (expr: string list) = let rec evaluate' (expr: string list) (stack: decimal list) = match expr with | [] -> stack | op :: exp -> evaluate' exp (perform op stack) evaluate' expr [] [<EntryPoint>] let main argv = while true do printf ": " try match Console.ReadLine() |> tokenize |> evaluate with | num :: [] -> num |> printfn "%g" // Single answer | stack -> stack |> printfn "%O" // Junk left on stack with | InputError(str) -> printfn "Bad input: %s" str 0
{ "domain": "codereview.stackexchange", "id": 15119, "tags": "f#, calculator, math-expression-eval, interpreter" }