As a rugged and waterproof device, how does a GoPro dissipate heat?
Question: I am new to designing rugged + waterproof products. I am curious about how to dissipate heat when the electronics are running and the ambient environment is even hotter. It seems like the mode of heat transfer is limited to conduction through the shell and then convection with the ambient air. My product will be small (like a GoPro), needs to be outdoors in Dallas all day (37°F to 96°F), and must protect against dust and water submersion (IP67). I am concerned about where the heat will go, and about the thermal cycling as the device experiences a ~30°F change daily. From a design standpoint, what options (material, geometry...) do I have to solve this problem? Answer: Normally, you would mount the heat-generating components of the PCB to a heat spreader, and then mount the heat spreader to the inside of a thermally conductive (aka metal) case, possibly with a heatsink on the other side. If the ambient is even hotter than your device, you either need to do what a fridge does or use components rated for higher temperatures. Or a Peltier, I guess.
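The conduction-then-convection path described above can be sized with a quick series thermal-resistance estimate. The numbers below are illustrative assumptions (a 2 W load, a small aluminum case, still-air natural convection), not measured GoPro data:

```python
# Hypothetical numbers for illustration: a 2 W device in a small aluminum case
Q = 2.0           # heat dissipated by the electronics, W (assumed)
k_al = 205.0      # thermal conductivity of aluminum, W/(m*K)
t_wall = 0.002    # case wall thickness, m (assumed)
A = 0.004         # effective conduction/convection area, m^2 (assumed)
h_conv = 10.0     # natural-convection coefficient in still air, W/(m^2*K) (assumed)

R_cond = t_wall / (k_al * A)   # conduction through the case wall, K/W
R_conv = 1.0 / (h_conv * A)    # convection from case surface to ambient, K/W

# temperature rise of the case above ambient for the series path
delta_T = Q * (R_cond + R_conv)
```

The point the sketch makes: conduction through a thin metal wall is negligible next to natural convection from a small surface, so surface area and geometry (fins, case size) are where the design leverage is.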
{ "domain": "engineering.stackexchange", "id": 3445, "tags": "mechanical-engineering, design, heat-transfer, consumer-electronics, convection" }
Constructing a Free Body Diagram (Stacked Blocks)
Question: How can I construct the free body diagram of this figure? I am on the topic of equilibrium of particles, so the total force will be equal to zero. The two springs are indicated to give equal forces. Am I correct that I need to add the masses of the blocks and then make an FBD together with the two spring forces? Answer: What you need to do is isolate the masses from the environment, i.e. create a section. At the points where that section intersects other objects (i.e. the springs), place forces with the correct direction. Additionally, you need to add any external forces, such as the weights of the masses.
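The sectioning procedure above reduces to one scalar equation for this figure; assuming the two identical springs each carry force $F_s$ and the stacked blocks have masses $m_1$ and $m_2$:

```latex
\sum F_y = 2F_s - (m_1 + m_2)g = 0
\quad\Longrightarrow\quad
F_s = \tfrac{1}{2}(m_1 + m_2)g
```

i.e. each spring carries half the combined weight, which is consistent with the statement that the two springs give equal forces.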
{ "domain": "engineering.stackexchange", "id": 3974, "tags": "statics" }
Shouldn't Electric Potential energy be $U=2kQq/r$?
Question: Let's take 2 charges $Q$ and $q$ a meter apart. Say you want them to collide. You'll have to apply force $F$ on charge $q$ (which I know changes with distance). I know we integrate this force and get something like $\text{Work} = U_{0 \ \mathrm{m}} - U_{1 \ \mathrm{m}}$. But it's not just $Q$ applying force on $q$: $q$ is also applying the same force on $Q$. So you'll have to get $q$ close to $Q$, but also apply the same force on $Q$ so that it does not run away. Right? I have had this at the back of my head for days; please help me out. In simple words: we should apply force $F$ on both charges to keep them in place? Thus, while finding potential energy, we must integrate both forces $F$, making $U = 2kQq/r$. What am I missing? Answer: A lot of controversy, so here's an example... Say you have charges $Q$ and $q$ that are attracted to each other. What is the force you'll have to apply to maintain the system's potential energy? Say at distance $r$ they apply force $F$ on each other. You apply force $F$ on $q$ to hold it in place. But $Q$ is still moving towards $q$, so you have to apply another force $F$ on $Q$ to keep it in place too. Thus at each point, you have to apply a total force of $2F$ on the system. This is what the confusion was about. The change in potential energy would be the work done from there on. So say on the particle with charge $q$ you apply a force $F + dF$, just above $F$, so that you move it further apart. To change a particular configuration, the net force you apply is the force to maintain the configuration in the first place plus the extra $dF$ you apply (which is almost 0). That brings us to $2F + dF$, which is just $2F$ for all theoretical purposes. There you go: the work to be done is thus $2F\,dx$ for distance $dx$, so the work done to change the configuration must be $2kQq/x$. Bringing charges from infinity, the work IS the potential energy of the system. Now I myself don't know if that is right, which is why I put it up as a question. WHAT AM I MISSING?
Conclusion (EDIT): Say we apply force $F$ on both particles (total force being $2F$), which is just enough to move them apart. They both move distance $dx$, so the total work done is $2(F\,dx)$, and the total distance they move apart is $2\,dx$ (they each move a distance $dx$ in opposite directions). That means that for moving them $dx$ apart, the work done is $F\,dx$; integrating it implies that the change in potential energy to bring them a distance $r$ apart is $kQq/r$. So to bring them to distance $x$ from infinity, we have to move them half the distance each while applying equal force on both $Q$ and $q$, which mathematically happens to be the same as applying force $F$ on $q$ alone and moving it all the way to distance $x$ from $Q$. Thus, $U = kQq/r$ is correct. But we do have to apply force on both particles if we practically want to get them into a configuration; we just have to make each one move half the distance. I hope it makes sense for everyone now.
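The edit's bookkeeping can be condensed to one line (magnitudes only, with $F = kQq/s^2$ at separation $s$): when both charges move $dx$, the separation changes by $2\,dx$, so

```latex
dW = 2F\,dx = F\,(2\,dx) = F\,ds
\quad\Longrightarrow\quad
W = \int_r^{\infty} \frac{kQq}{s^2}\,ds = \frac{kQq}{r}
```

The factor of 2 in the applied force is exactly cancelled by each charge covering only half the change in separation, recovering the standard $U = kQq/r$.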
{ "domain": "physics.stackexchange", "id": 95651, "tags": "forces, electrostatics, work, potential-energy, coulombs-law" }
Settling of unevenly loaded storage tank
Question: Suppose I have a concrete tank, round, vertical, diameter in the range 16m-24m. One half of the floor is filled with gravel or concrete to a height of maybe 2 meters. When the tank is empty, that will mean I have about 5 tons per m² more weight on this half; when the tank is full (with water or a slurry that's mostly water) it's still 3 tons (assuming 2.5 t/m³ density, which is exact enough for my ballpark). Most of the time (>90%) it will be full. The tank will be above ground, 8-10 m high. I want to know if the tank will tilt during its lifetime, say 20 years. I'm not a civil engineer and I have no feeling for the numbers involved. My gut feeling is that my tank will tilt visibly in a matter of a few years and that my idea is not feasible as is. Can someone weigh in and comment on ... Will I have tilt/uneven settling problems? At what magnitude over the tank's lifetime? What's the easiest (=cheapest) remedy, leaving the tank interior alone? Clarifying points: The tank is not yet built or even planned. It's just an idea I'm thinking about that calls for half-filling the tank to create a sort of funnel. I wonder if this idea is worth pursuing, and uneven loading/settling is one issue I want to consider. I'm not in the "call a structural engineer and let him calculate the statics" phase; I'm in the tossing-around-harebrained-ideas-in-my-skull phase. I'm sure such a tank can be built to last for 20 or 200 years, but at what price? Answer: Here's a really (and I mean really!) quick and dirty set of calculations that might give you an idea of the magnitudes of settlement you could be dealing with. The settlement potential of the tank location can be determined a number of ways, but probably the best thing to do would be a plate load bearing test. The test can be run to simulate the range (though not the duration) of loads you are expecting.
A test like this will give you a spring constant $k$ that represents the "modulus of subgrade reaction" of the bearing soil (for the tested loading range). However, it's a short term test that doesn't take into account creep, so the long-term $k$ value will be lower. In general, a short-term $k$ will run from something like 80pci for a very soft clay to something like 250pci for a very dense sand (caveat: this is just from the top of my head without looking anything up). So let's use the worst case scenario here, and to take into account creep, let's do what geotech engineers do best and slap a 2.5 safety factor on it. So we have about a 30pci modulus of subgrade reaction. Let's also assume that most of the differential settlement will occur as a result of the uneven loading of the empty tank, and that the emptying/filling of the tank is going to have a negligible contribution to the differential settlement. This isn't too terrible of an assumption, since the difference in applied surface pressure (which determines differential settlement) is much greater in the empty state, and it's also conservative because it will only be empty 10% of the time anyway. So here we go (I'm American, so we're doing everything other than what you gave for dimensions in imperial-scum units first and then converting - sorry!): $k = 30 \frac{lbf}{in^3}$, $\gamma_{concrete}=150\frac{lbf}{ft^3}$, $H_{concrete}=2m$ Applied pressure under half the tank: $q_c=H_c\times\gamma_c=0.98ksf=5.3 \frac{tonf}{m^2}$ Settlement under loaded half of tank: $S=\frac{q_c}{k} = 0.23in = 5.8mm$ If we assume the other side of the tank does not settle at all, our differential settlement comes out to about 6mm. Now, this number assumes the loaded side of the tank is free to settle while the unloaded side remains static. This is not the case. 
Assuming the tank is nice and stiff, some of the applied pressure on the loaded side will be transferred to the unloaded side (which will reduce the settlement of the loaded side). I don't know what the application is for this tank, but the above is probably a pretty conservative analysis of the situation you described. I would be surprised if differential settlement potential turns out to be a problem for you. EDIT: One thing to note is that the tank will "wiggle" when it is being filled/drained. What I mean is, the entire thing will settle more when it is filled, but it will settle more in the unloaded side (thereby undoing some of the differential settlement in the empty condition). Then when drained, the soil will rebound and the tank will return to the more-tilted empty condition when the unloaded side rebounds more than the loaded side (though it is likely neither side would rebound fully). Assuming the 6mm of settlement from above, the deflection angle for the 24m diameter tank comes out to be $\arctan\frac{6mm}{24m}=0.014^{\circ}$. Pretty tiny.
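The quick-and-dirty numbers above can be reproduced directly; this sketch just re-runs the answer's arithmetic (30 pci subgrade modulus after the 2.5 safety factor, 2 m of concrete at 150 lbf/ft³) with the unit conversions made explicit:

```python
import math

k = 30.0                    # modulus of subgrade reaction after 2.5 SF, lbf/in^3
gamma_c = 150.0 / 12**3     # concrete unit weight, lbf/ft^3 converted to lbf/in^3
H_c = 2.0 * 1000 / 25.4     # 2 m of concrete expressed in inches

q_c = gamma_c * H_c         # applied pressure under the loaded half, psi
S_in = q_c / k              # settlement of the loaded half, inches
S_mm = S_in * 25.4          # same settlement in millimetres

# tilt angle across the 24 m diameter, assuming the unloaded side stays put
angle_deg = math.degrees(math.atan(S_mm / 1000.0 / 24.0))
```

This lands on the answer's ~6 mm differential settlement and ~0.014° tilt.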
{ "domain": "engineering.stackexchange", "id": 28, "tags": "civil-engineering, soil, geotechnical-engineering" }
Visibility of human activity on the moon
Question: In this video the host of Test Tube Plus states that you can go out and buy a laser, point it at the Moon, and see the retroreflector left by the astronauts. When you point at the right spot you'll see a reflection, elsewhere not. He reiterates that this is not just a hypothetical use of the 2nd person for effect, but that you, the viewer, can go out and try this yourself, and that anyone paying any attention to moon-landing conspiracy theories ought to try it. He also claims that tracks and artefacts at the landing site can be seen with amateur telescopes, and again that this is approachable to anyone who cares to look. Neither of these seems correct to me. Could someone with real numbers weigh in on this? I seem to recall that seeing artefacts is only possible in the latest high-res images taken from lunar orbit, and that the telescope that uses the reflector gets back single photons on some trials and uses a much more powerful laser than you could buy! But a continuous beam may be different? How big a laser would you need to see a reflection with a viewing instrument that doesn't cost more than the laser? Answer: The video is hilariously wrong. However, the principle of laser ranging is more or less right, and it does require the reflectors left behind by the astronauts on the Moon. It's just that the physics and technology involved are far beyond just pointing a toy laser at the Moon. Project APOLLO (Apache Point Observatory Lunar Laser-ranging Operation) is actually doing this. http://physics.ucsd.edu/~tmurphy/apollo/basics.html You need a fairly large telescope to start with - both for collimating the outgoing light pulse, and for receiving as much of the reflection as possible. APOLLO uses a 3.5 meter telescope at the Apache Point observatory. You need a laser that can generate a high-energy light pulse that is very short. The pulse is injected into the telescope's optical train and sent to the Moon.
This is not a laser pointer; it's a high-power Q-switched laser for research, a device the size of a refrigerator. On the receiving end, you need a very good detector also plugged into the telescope. Of the many, many photons sent to the Moon in the pulse, only between 1 and 5 photons make it back down to the detector. You need a detector that can tell those extra 1 to 5 photons from the background noise of light coming from the Moon. Using this technique, the distance from Earth to Moon can be measured with very high precision. This is the APOLLO system in action: Regarding observing human artifacts on the Moon with terrestrial telescopes, I wish it were doable, but it's not. Again the YouTube video is wrong. The biggest telescopes accessible to amateurs currently have an aperture (diameter) of around 1 meter or a little larger. The resolving power of a telescope (the size of the finest details discernible) depends on aperture - if aperture is measured in mm and the resolving power is measured in arcseconds, then the formula is: resolving power = 100 / aperture So a 1 meter telescope has a resolving power of 0.1 arcseconds. The distance from Earth to Moon is 384000 km (3.84 * 10^8 meters). With a resolving power of 0.1 arcseconds, the size of the smallest detail discernible on the Moon is: detail size = distance_from_Earth_to_Moon * arctan(resolving power) or detail size = 3.84 * 10^8 * arctan(0.1 arcsec) = 186 meters Anything smaller than 186 meters would be blurred into a single dot. There's nothing we've done on the Moon that's as big as that. It's not possible to see traces of human activity with amateur-level telescopes, even with extremely large meter-class Dobsonian telescopes. Even with professional telescopes, we just don't have the aperture yet to resolve that kind of detail. However, satellites in orbit around the Moon, such as the LRO, were able to image traces of the Apollo missions.
That's because they are a lot closer to the site. http://www.nasa.gov/mission_pages/LRO/news/apollo-sites.html
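The aperture arithmetic above is easy to check; this sketch just evaluates the answer's resolving-power formula for a 1 m amateur instrument:

```python
import math

aperture_mm = 1000.0                  # ~1 m telescope, about the amateur limit
res_arcsec = 100.0 / aperture_mm      # resolving power formula from the answer
moon_dist_m = 3.84e8                  # Earth-Moon distance in meters

res_rad = math.radians(res_arcsec / 3600.0)   # arcseconds -> radians
detail_m = moon_dist_m * math.tan(res_rad)    # smallest discernible detail
```

Nothing left on the Moon is anywhere near 186 m across, which is the answer's point.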
{ "domain": "astronomy.stackexchange", "id": 1198, "tags": "amateur-observing, moonlanding" }
Zeroth law of thermodynamics confusion
Question: I quote from Zemansky's "Heat & Thermodynamics": "Imagine two systems A and B separated from each other by an adiabatic wall but each in contact with a third system C through diathermic walls, the whole assembly being surrounded by an adiabatic wall as shown in Fig. 1-2a. Experiment shows that the two systems will come to thermal equilibrium with the third and that no further change will occur if the adiabatic wall separating A & B is then replaced by a diathermic wall (Fig. 1-2b). If, instead of allowing both systems A & B to come to equilibrium with C at the same time, we first have equilibrium between A & C and then equilibrium between B & C (the state of system C being the same in both cases), then, when A & B are brought into communication through a diathermic wall, they will be found to be in thermal equilibrium." My question is: what does he exactly mean by "the state of system C being the same in both cases"? Does C get connected to A first and then, after reaching thermal equilibrium with A, get connected to B? Or do we have two identical copies of system C and we connect A to one and B to the other? If it means that C is just one system and we connect it to A first and then to B (without C being in its initial condition before it was connected to A), then what I understand is that A & C will reach thermal equilibrium and will have the same "temperature" (I know we still didn't define temperature yet, but at least based on how it "feels"), so afterwards when B is connected to C, C being at the same temp as A now, the temp of C will change to the equilibrium temp with B. So A and B will have different temperatures, so how will they be in thermal equilibrium when connected? (No change will occur in either A or B.) This is the figure he refers to (https://i.stack.imgur.com/iiAe5.jpg) Answer: Your arguments are correct.
If $A$ and $C$ are brought into equilibrium initially, then for $A$ and $B$ to be in equilibrium, $B$ has to be in equilibrium with that state of $C$ which was initially in equilibrium with $A$, without any transfer of heat energy. In other words, $B$ and $C$ must be at the same temperature before they are brought into contact (here, the system $C$ is the one in equilibrium with $A$). Therefore, "the state of system $C$ being the same in both cases" refers to that state of $C$ which is in equilibrium with $A$ and with $B$ (without any heat transfer). Only then will $A$ and $B$ be in thermal equilibrium.
{ "domain": "physics.stackexchange", "id": 73943, "tags": "thermodynamics, temperature, equilibrium" }
Box filter with non integer length
Question: I'm trying to model a sensor system that has an averaging behaviour. The frequency response is almost identical to a box filter and looks roughly like this: Transferring this into a discrete-time model would require a box filter of non-integer length - e.g. $N=2.5$ samples. Now I am looking for ways to model this system. Here are my attempts and why they failed for me: 1. Ordinary Lowpass As the desired frequency response has a lowpass characteristic, it would seem logical to try a lowpass filter first. However, they fail to reproduce the zero found in the desired frequency response. Also, they end in a zero at Nyquist, which is not wanted. 2. Interpolated box filter Using the impulse response $h[i] = [1, 1, f]$ where $0 < f < 1$ allows me to approximate a box filter with $N$ somewhere between 2 and 3. Here are the frequency responses of these filters for $Fs = 24kHz$ and $f = 0, 0.1, 0.2, ... , 1$: The problem is that the notch only reaches a true zero for $N=2$ and $N=3$. For anything in between, the attenuation becomes far weaker, the worst case being $N=2.5$, where it is only about -16dB. 3. Downsampled Box Filter: I designed the desired box filter for a higher samplerate, e.g. oversampled by a factor of $S=32$. Then I lowpass-filtered it with a windowed sinc and got these impulse responses: I downsampled this to my original samplerate by keeping only the samples $S/2 + i*S$ and got these impulse responses: However, the frequency responses of this look very similar to the simple "interpolated" filters from attempt #2. They are so similar that it doesn't even make sense to add another picture here. The major difference is a significantly higher computational load and an additional processing delay. Increasing the size of the windowed-sinc lowpass kernel doesn't actually improve things much; it only adds additional delay due to the pre-ringing. 4.
Crude oversampling The idea was to interpolate $S$ samples for each actual sample and apply the box filter to these. I used 4-point interpolation that accounts for samples $i-1, i, i+1, i+2$ for each output sample at a position between $i$ and $i+1$. I can then re-arrange the formula to calculate the specific contribution of each input sample to the final output value like this:

    h = zeros(ceil(N) + 2)
    totalNumOversampledSamples = S * N
    for i = 0 .. totalNumOversampledSamples:
        samplePosition = i / S
        intSamplePosition = floor(samplePosition)
        fractional = samplePosition - floor(samplePosition)
        // get interpolation coefficients for a 4pt interpolation
        a, b, c, d = getInterpolationCoefficients(fractional)
        // add those to the impulse response
        h[intSamplePosition - 1] += a
        h[intSamplePosition]     += b
        h[intSamplePosition + 1] += c
        h[intSamplePosition + 2] += d
    // normalize
    h /= sum(h)

(I assumed the first $S$ samples to not be interpolated to avoid adding another coefficient to the front of my impulse response.) The resulting filter is quite efficient, but unfortunately the resulting frequency response is pretty bad - probably due to the poor interpolation scheme used: 5. Additional thoughts I thought of upsampling my input data, then applying an ordinary box filter to it before downsampling again. With this method, I could actually realise a "fractional length" box filter because in the upsampled domain, the box filter can be of integer length. However, this operation is entirely linear, so it should be possible to transform the same operation to an ordinary FIR filter and skip the upsampling step - which I did attempt in my 3rd approach. I am not sure why it didn't work. Here's the actual question: How could I model this system to fulfill these criteria: Keep the characteristic shape, especially the "zero" of the desired transfer function, or at least a high attenuation.
Be able to "sweep" the zero(s) across the frequency spectrum, much like it would be possible with a "moving average" filter in a continuous-time system. Keep computational load within reason (this must be able to run in real time). Phase response is not important. Answer: The problem might be solved already by the existing answers, but I thought I'd add my solution, which adds another degree of freedom resulting in a much closer match of the filters' magnitude responses. What I came up with is a simple system of four linear equations with the following conditions:

    - unity gain at DC
    - gain of the continuous-time (CT) filter at Nyquist
    - zero at the same frequency as the CT filter

This is similar to the existing answers, but with the additional condition that the responses at Nyquist are also identical. This makes the resulting magnitude responses match each other very closely (see figure below). As an example, I chose the width of the CT box filter as $T=6e-5$, and used a sampling frequency $f_s=48 \textrm{ kHz}$. The discrete-time (DT) filter has four samples because there are $4$ degrees of freedom (note that two degrees are taken by the zero at positive and negative frequencies). The result looks like this (top: magnitude responses, bottom: impulse response of DT filter): Note that there's virtually no difference between the magnitude responses of the CT and DT filters. EDIT: With this method one can incorporate an arbitrary number of zeros, which is necessary if the width of the CT impulse response becomes larger compared to the sampling period. In that case we naturally end up with a longer filter. Here is an example for the same sampling rate as before ($f_s=48 \textrm{ kHz}$), but with a longer CT impulse response with $T=15e-5$:
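A minimal sketch of that linear system, under the answer's example assumptions ($T = 6\cdot10^{-5}$ s, $f_s = 48$ kHz, a 4-tap DT filter; the CT box filter's response is $\mathrm{sinc}(fT)$ with its first zero at $f = 1/T$). The helper names are my own, not from the answer:

```python
import numpy as np

fs = 48_000.0               # sampling frequency (answer's example)
T = 6e-5                    # width of the continuous-time box filter
f_zero = 1.0 / T            # first zero of the CT response sinc(f*T)
f_nyq = fs / 2.0

def H_row(f, n_taps=4):
    # DT response of a 4-tap FIR at frequency f: sum h[k] * exp(-2j*pi*f*k/fs)
    return np.exp(-2j * np.pi * f * np.arange(n_taps) / fs)

def ct_gain(f):
    # magnitude response of the CT box filter (np.sinc is the normalized sinc)
    return np.sinc(f * T)

# four real equations: unity gain at DC, zero at f_zero (real and imaginary
# parts), and the CT filter's gain at Nyquist
A = np.vstack([
    H_row(0).real,
    H_row(f_zero).real,
    H_row(f_zero).imag,
    H_row(f_nyq).real,
])
b = np.array([1.0, 0.0, 0.0, ct_gain(f_nyq)])
h = np.linalg.solve(A, b)   # the 4 taps of the DT filter
```

Solving the 4x4 system pins the DT response to the CT response at DC, at the CT zero (both real and imaginary parts), and at Nyquist, which is exactly the set of conditions listed above.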
{ "domain": "dsp.stackexchange", "id": 8653, "tags": "filters, filter-design, finite-impulse-response, moving-average" }
ros2 rclpy list parameters from node
Question: Is there an easy way to list parameters from an rclpy Node? I can 'manually' set up a service client and do the list_parameters call, but it feels like this should be abstracted. Current solution:

    class Dummy(Node):
        def __init__(self):
            super().__init__('dummy',
                             allow_undeclared_parameters=True,
                             automatically_declare_parameters_from_overrides=True)
            service_name = self.get_name() + '/list_parameters'
            client = self.create_client(ListParameters, service_name)
            while not client.service_is_ready():
                time.sleep(0.1)
            request = ListParameters.Request()
            future = client.call_async(request)
            # wait for response
            rclpy.spin_until_future_complete(self, future)
            if future.result() is not None:
                response = future.result()
                for name in sorted(response.result.names):
                    print(' {name}'.format_map(locals()))

Desired:

    class Dummy(Node):
        def __init__(self):
            super().__init__('dummy',
                             allow_undeclared_parameters=True,
                             automatically_declare_parameters_from_overrides=True)
            param_list = self.list_parameters()

Originally posted by artivis on ROS Answers with karma: 65 on 2019-06-28. Post score: 0. Answer: Hi @artivis, the service call is the solution if you need to list the parameters of another node. If you want to do something with the parameters within the node (Dummy in your case), just use the "private" member self._parameters, which is a map with the parameter names as keys and Parameter objects as values: https://github.com/ros2/rclpy/blob/0.7.4/rclpy/rclpy/node.py#L133. If you just want a list:

    class Dummy(Node):
        def __init__(self):
            super().__init__('dummy',
                             allow_undeclared_parameters=True,
                             automatically_declare_parameters_from_overrides=True)
            param_list = [parameter for parameter in self._parameters.values()]

Hope it helps! Originally posted by jubeira with karma: 1054 on 2019-07-04. This answer was ACCEPTED on the original site. Post score: 1. Original comments: Comment by artivis on 2019-07-04: Hi, thanks for the tip! Is this self._parameters member the place where the parameters live?
The actual concern is to know whether this member is up to date at launch time, given that allow_undeclared_parameters=True and automatically_declare_parameters_from_overrides=True. A quick test suggests that yes. Notice that in my particular example the actual code would be sorted(list(self._parameters.keys())). Comment by jubeira on 2019-07-04: Yw! Yes, that's where parameters actually live. Getting them after calling super().__init__ should give you the up-to-date dictionary. And yes, if you just want a sorted list of names, that line should be just fine. Comment by artivis on 2019-07-04: Thanks for the precision! Cheers.
{ "domain": "robotics.stackexchange", "id": 33290, "tags": "ros, ros2, parameters, rclpy" }
Setting up a gazebo Application using RenderEngine and GUIOverlay in gazebo
Question: I am creating a game that uses the Gazebo simulator. I need help on how to set up an application and rendering scene using RenderEngine.hh. I would like to use Ogre3D functionality to render and set up the camera, and the GUIOverlay class for the GUI. I haven't found any tutorials on how I could use these two classes and I am stuck. Could anyone please provide examples of how you would go about using RenderEngine.hh to render your scene from Gazebo? What steps are involved to achieve this? Please take me from file creation to CMake files if needed. The end result of this should be an App. Originally posted by iamhero on Gazebo Answers with karma: 11 on 2013-02-14 Post score: 1 Answer: Unfortunately there are no tutorials on using the Gazebo rendering library and gui library in a separate application. Right now, you'd be best served by looking at the code. Check out gazebo/gui/GLWidget.cc to see how QT interfaces with OGRE. This will give you a good starting point. Originally posted by nkoenig with karma: 7676 on 2013-03-17 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 3041, "tags": "gazebo" }
Are the arrows in this diagram of ATP synthase correct?
Question: I have a question about this image below. Do you think that the pink arrow is actually going in the right direction? I would suggest an LTO sequence and not LOT, since it is in the T state that ADP + Pi is converted into ATP. Answer: Yes, the arrows are correct. However, it's not immediately clear what they're trying to show. ATP synthase: ATP synthase can act as a generator or a motor. Interactions cause rotation of the F0 subunit, and the subunit's axle has a kink in it which deforms the stationary F1 subunits (the axle can turn either way to act as a generator or a motor). Read more at PDB's ATP synthase 101. What the asker's diagram is trying to show: The black arrows in the asker's diagram imply the movement of ADP or ATP (generation) in and out of the catalytic site. The pink line implies the movement of the axle kink, not the transition ADP + Pi → ATP. If we focus on the left-hand subunit across the subfigures, it makes more sense (ADP and Pi go in, the protein "closes" and ATP is formed, ATP is released). Meanwhile, the "kink" completes a full rotation.
{ "domain": "biology.stackexchange", "id": 5962, "tags": "biochemistry, plant-physiology, photosynthesis, plant-anatomy, protein-binding" }
Why does climate change generate desertification?
Question: According to the IPCC: "Climate change can be a significant driver of desertification and land degradation and can affect food production, thereby influencing food security." Source: IPCC. My concern is: if greenhouse gases increase the temperature, there is more evaporation, so more clouds, and so more rainfall. Why does climate change generate desertification instead? Answer: Most arid and semi-arid regions are normally far inland. Often they also have terrain features, such as mountain ranges, that keep moist air from reaching them. So if global temperatures increase, evaporation in semi-arid regions increases as well, reducing the amount of water available for plants. At the same time, moisture coming from the ocean still precipitates as rain before reaching those regions.
{ "domain": "earthscience.stackexchange", "id": 1996, "tags": "climate-change, desertification" }
Magnetism and atoms
Question: I have a little question about magnetic fields. Suppose we have a uniform magnetic field $\vec{B}$ and a metal wire immersed in $\vec{B}$ carrying a stationary current $i$. I know, from the second elementary Laplace law, that the infinitesimal force acting on the wire is: $$d\vec{F}=i\,d\vec{s}\times\vec{B}$$ Microscopically speaking, electrons in the wire are affected by the magnetic field; they change their motion and exert shocks on the internal wire surface, producing a mechanical force $\vec{F}_1$, and so the wire moves. But protons are also affected by the magnetic field, and they should move in the opposite direction, producing a mechanical force $\vec{F}_2$ that is even bigger than $\vec{F}_1$ because $m_e < m_p$. So why does the wire go in the electrons' direction? Am I forgetting something about the behaviour of electrons and protons? Answer: In conventional (metallic) conductors, the charge carriers are the electrons in the conduction band. The protons in the nuclei form a static lattice which does not move, even if you apply an electric field: it might budge a bit, but it cannot continue moving the way that conduction electrons do. (Note that there are other types of conductors, listed in this great answer, where the charge carriers are different.) If you place a metallic wire, carrying a current, in a magnetic field, then the protons do feel the magnetic field, but they are stationary, so the magnetic Lorentz force on them is zero. That said, your question holds even if we change the conductor for something with equivalent positive and negative charge carriers.
(As an example, consider a plastic pipe full of water with dissolved table salt (NaCl), where the charge carriers are positive Na$^+$ and negative Cl$^-$ ions, whose masses are reasonably similar.*) There, the velocity of the two different carriers is opposite $-$ but it is important to keep in mind that they are still carrying current in the same direction, which means that the Lorentz force, $$\mathrm d\vec{F}=i\:\mathrm d\vec{s}\times\vec{B},$$ points in the same direction for both! * Or, even better, potassium chloride, so the mass ratio between the carriers is $m_\mathrm{K}/m_\mathrm{Cl} = 1.10$
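The "same direction" claim can be checked directly from the Lorentz force: for the positive carrier (charge $+q$, drift velocity $+\vec v$) and the negative carrier (charge $-q$, drift velocity $-\vec v$),

```latex
\vec F_{+} = (+q)(+\vec v)\times\vec B,
\qquad
\vec F_{-} = (-q)(-\vec v)\times\vec B = q\,\vec v\times\vec B = \vec F_{+}
```

so the two sign flips cancel and both carriers are pushed the same way, which is why the force on the wire does not depend on the sign of the carriers.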
{ "domain": "physics.stackexchange", "id": 74424, "tags": "electromagnetism, atomic-physics" }
Is there a general guideline for experience replay size, and how to store?
Question: I am trying to use deep Q-learning on color images of size (224 x 224 x 3). I have read that implementations of DQN use experience replay sizes around 50,000. If my calculation is correct, this is over 56 Gigabytes for my data (two images per tuple, one image for state and next_state, totaling 100,000 images). Is this correct? If yes, how can I go about this without that much RAM? Answer: If yes, how can I go about this without that much RAM? Your value of 56GB seems correct to me, assuming you include a multiplier of 4 for the "4 frames per state representation" used in the DQN/Atari paper. However you should note that in the original paper, the images were converted to greyscale and downsampled to 110×84 prior to use in state representation. This made them 16 times smaller than the frames in your problem, so the whole data set would fit into 4GB. If you must use larger images, you could store them on disk - maybe in a database - and load on demand. That may unfortunately create an I/O bottleneck that slows learning, although you should still be able to work with it. You can parallelise fetching from the database for mini-batches with the learning process, and this is similar to the mini-batch generators used for things like ImageNet training. You can also work on improving disk performance using optimisations such as parallel disk arrays or SSDs. You could also pre-process the frames using a hidden layer embedding from a generic computer vision network trained on e.g. ImageNet, and store that representation, not the raw pixel values. This may limit self-discovery of important low-level features by the learning agent, but then again it may still be worth a shot, as lower-level image features are often very similar across different problems, and transfer learning in e.g. image classification has been quite successful using that approach. 
More likely, the practical answers used by RL researchers, at least up to a certain scale, are one of: Store fewer states in replay memory. Base the size of replay on the memory you have available. Yes this may compromise the learning, but there is no special magic about the number 50,000, and if you are optimising resource use you may have to decide between how efficiently a system learns with a 10,000-entry fast replay memory or a 50,000-entry slower I/O-based replay memory. Buy more memory. The big-name research labs working in deep RL are funded well enough that they can afford to throw money at this problem. One estimate for how much the AlphaGo Zero hardware cost is $25 million, so you can imagine that loading a few machines with 128GB+ RAM, if they thought it necessary for any reason on any other flagship project, would not be a major blocker. If you look at what OpenAI are doing at the cutting edge of video game playing, you can see that their hardware setup is equally monstrous. It is not clear whether they have an issue with storing experience as they use a different algorithm, or with needing RAM in general, but if they did, it is also clear they could quite happily finance maximum RAM on their training rigs. Do note I am not an RL researcher myself (just a hobbyist) and have not asked anyone else what they would do when faced with this problem.
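To make the arithmetic in this answer concrete, here is a rough sketch of the size estimate plus the uint8 frame-storage idea. The FrameBuffer class and its names are purely illustrative, not taken from any particular DQN implementation:

```python
import numpy as np

# Memory estimate for a naive replay buffer storing raw RGB frames.
H, W, C = 224, 224, 3          # frame size from the question
frames_per_state = 4           # DQN-style frame stacking
capacity = 50_000              # transitions in the buffer

bytes_per_state = H * W * C * frames_per_state   # uint8 pixels
bytes_total = capacity * 2 * bytes_per_state     # state + next_state
print(f"{bytes_total / 2**30:.1f} GiB")          # ~56 GiB, matching the estimate

# One common trick: store each frame once as uint8 in a ring buffer and
# rebuild the stacked state/next_state pairs on demand, instead of
# duplicating them per transition.
class FrameBuffer:
    def __init__(self, capacity, shape):
        self.frames = np.zeros((capacity,) + shape, dtype=np.uint8)
        self.pos, self.full, self.capacity = 0, False, capacity

    def add(self, frame):
        self.frames[self.pos] = frame
        self.pos = (self.pos + 1) % self.capacity
        self.full = self.full or self.pos == 0
```

Storing single uint8 frames rather than float32 stacked states already saves a factor of 4 (dtype) times roughly 8 (deduplication of overlapping stacks), before any downsampling.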
{ "domain": "datascience.stackexchange", "id": 3515, "tags": "reinforcement-learning" }
Array Stutter Implementation
Question: I was assigned this homework assignment to complete. The question originates from CodeStepByStep. Below is the prompt for the question: Write a function stutter that takes an array of Strings as a parameter and that replaces every String with two of that String. For example, if an array stores the values ["how", "are", "you?"] before the function is called, it should store the values ["how", "how", "are", "are", "you?", "you?"] after the function finishes executing. Below is my implementation: function stutter(arr) { if(arr.length == 0) { return []; } if(arr.length == 1) { arr.push(arr[0]); return arr; } let size = arr.length; for(let i = 0; i < size + 2; i += 2) { arr.splice(i + 1, 0, arr[i]); } //If last two elements are not the same if(arr[arr.length - 2] != arr[arr.length - 1] && arr.length != 1) { arr.push(arr[arr.length - 1]); } return arr; } It would be really helpful if I could get some feedback on how my code is written. I am fairly new to JavaScript, and I don't know some of the functions that could have made this a lot easier. Feedback on code efficiency and the implementation itself is warmly invited! Answer: By reviewing your code, the first thing that comes to my mind is a lot of if statements. You should try and keep those minimal by writing solutions to be as general as they can be. Another thing that I don't like in your code is that you are doing manipulation of array passed to the function, instead of going out with a fresh one. It looks like this manipulation is what leads to a lot of ifs in the first place. 
So the main point of refactoring your code is that you initialize an empty array which will act as the result, and then build that up: function stutter(arr) { const result = [] for (let word of arr) { // for..of loop is a bit clearer to read result.push(word, word) // push can accept N arguments } return result } By initializing the resulting array as an empty one, you have already covered the case where the argument array passed in is empty, because the for..of loop won't do any looping at all. I've used a for..of loop here since it's less code, but you could also use the C-like for like you've written in your question. Note that that loop also wouldn't loop if the argument array was empty, therefore no need for the if(arr.length == 0). By the way, I'm a bit puzzled about what exactly the point of the last if in your code is, but I think that with the refactoring I've provided there is no need for it at all. Since you've asked for a more JS way of doing this, here are two ways: Using map and flat (this one won't run in Edge): function stutter(arr) { return arr.map(x => [x, x]).flat() } Using just reduce: function stutter(arr) { return arr.reduce((result, current) => [...result, current, current], []) } EDIT As @AndrewSavinykh pointed out, the assignment is worded in such a way that the original array should be mutated.
If that truly is the case, and both Andrew and I are not misreading it, then the solution would be to use splice (@JollyJoker pointed out that a previous version of this answer, which just reassigned the result to arr and returned arr, was not correct). The code is updated for the first example, but the same thing can be done for the other two: function stutter(arr) { const result = [] for (let word of arr) { // for..of loop is a bit clearer to read result.push(word, word) // push can accept N arguments } arr.splice(0, arr.length, ...result) // spread result so its elements (not the array itself) replace the contents return arr } On the other hand, I'd advise against mutating input arguments, because you can easily forget which function mutates its arguments and which doesn't, so it can create some additional cognitive load while reading the code. Also, it goes against the principles of functional programming, where you want your functions to be as pure as they can.
{ "domain": "codereview.stackexchange", "id": 34701, "tags": "javascript, array, homework" }
Why does wet skin sunburn faster?
Question: There is a popular belief that wet skin burns or tans faster. However, I've never heard a believable explanation of why this happens. The best explanation I've heard is that the water droplets on the skin act as a lens, focusing the sunlight onto your skin. I don't see how this would affect an overall burn, because the amount of sunlight reaching the skin is the same (ignoring reflection). Is this 'fact' true, and if so, what causes it? Answer: I don't know of any research to find out if skin sunburns faster when wet, though someone did a comparable experiment to find out if plants can be burnt by sunlight focussed through drops of water after the plants have been watered. You need to be clear what is being measured here. The total amount of sunlight hitting you, and a plant, is unaffected by whether you're wet or not. The question is whether water droplets can focus the sunlight onto intense patches causing small local burns. The answer is that under most circumstances water droplets do not cause burning because unless the contact angle is very high they do not focus the sunlight onto the skin. Burning (of the plants) could happen if the droplets were held above the leaf surface by hairs, or when the water droplets were replaced by glass spheres (with an effective contact angle of 180º). My observation of water droplets on my own skin is that the contact angles are less than 90º, so from the plant experiments these droplets would not cause local burning. The answer to your question is (probably) that wet skin does not burn faster. I would agree with Will that the cooling effect of water on the skin may make you unaware that you're being burnt, and this may lead to the common belief that wet skin accelerates burning.
{ "domain": "physics.stackexchange", "id": 31817, "tags": "heat, water, everyday-life, lenses" }
Hector Trajectory path is not flowing smoothly
Question: I used the hector mapping package in combination with an XV-11 Vacuum Cleaner. Although its laser has a relatively low range and update rate, the created maps turned out to be pretty accurate, and much better than GMapping. There is one small problem with displaying the trajectory though. The path it creates is correct for most of the time, but sometimes it determines the robot position wrongly, which results in a 'spiky' trajectory path. I attached an image for clarifying what I mean: http://imgur.com/npbAq1s Does anyone have an idea where these erroneous positions come from, and how to solve the problem? Originally posted by AeroR on ROS Answers with karma: 11 on 2013-05-16 Post score: 1 Answer: Hi, very interesting to see things working so well with the XV-11, thanks for raising awareness of that :) The fact that the map looks consistent clearly suggests that the internal state of the mapping system does not jump in as shown in your screenshots. It is thus very likely that your tf setup is messed up in some way (the trajectory server uses tf for generating the path displayed). You most likely have both hector_mapping generating a map->base_link transform as well as some other node doing the same (AMCL would be a candidate for example). When the trajectory server queries tf for transform data based on a timer, it more or less randomly gets one or the other, resulting in this jagged path. That at least is the most probable cause I see from looking at what you posted. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2013-05-16 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by AeroR on 2013-05-21: Thank you for your answer! I looked at my transformation frames, and found out that both hector_mapping and my rosbag were publishing on the /odom topic. Depending on which source had last published onto the topic, the trajectory was either updated correctly or wrongly. 
Comment by AeroR on 2013-05-21: By the way, I solved this by removing all /odom entries from the rosbag file using the method described here: http://answers.ros.org/question/56935/how-to-remove-a-tf-from-a-ros-bag/
{ "domain": "robotics.stackexchange", "id": 14197, "tags": "slam, navigation, hector, hector-slam, hector-mapping" }
“We can describe general relativity using either of two mathematically equivalent ideas: curved space-time, or metric field” What is the metric field?
Question: We can describe general relativity using either of two mathematically equivalent ideas: curved space-time, or metric field. The metric field is like the legend of a map, which allows a flat chart to represent a bumpy terrain. Mathematicians, mystics, and specialists in general relativity tend to like the geometric view because of its elegance. Physicists trained in the more empirical tradition of high-energy physics and quantum field theory tend to prefer the field view, because it corresponds better to how we (or our computers) do concrete calculations. Once it’s expressed in terms of the metric field, general relativity resembles the field theory of electromagnetism. In electromagnetism, electric and magnetic fields bend the trajectories of electrically charged bodies, or bodies containing electric currents. In general relativity, the metric field bends the trajectories of bodies that have energy and momentum. Frank Wilczek, What is space?, page 9, http://web.mit.edu/physics/news/physicsatmit/physicsatmit_09_whatisspace_wilczek.pdf Answer: I will try to answer what he may have meant, or what an arbitrary person saying that may have meant. I did go read the article. The bottom line is that there is not much content in the separation he made between a spacetime theory of gravity and a metric field theory of gravity. It is more than anything just a bit of a philosophical difference, and a bit a way of thinking, whether as a dynamic geometry or as dynamic fields in a fixed geometry. But since the results are the same, as he admits, it makes no physical difference. In Living Reviews in Relativity at http://relativity.livingreviews.org/Articles/lrr-2014-4/articlese3.html, they review the status of different theories of gravity. Interestingly, for metric theories, there is always a spacetime. In one case there is two metrics, like the bimetric theory, which has pretty much been ruled out by measurements. 
It's hard to think of a metric theory without it being GR (general relativity), or one of its competitors like Brans-Dicke scalar-tensor theory, which has also been ruled out. See the article in the link above; GR is by far the most supported. And there is no GR equivalent that is denoted or described as a different kind of metric field theory. So, what does Wilczek mean? He means a different way of thinking of it. In canonical quantization of gravity the approach was to take the metric components and separate space and time as in the GR ADM formalism (one takes a time coordinate as a starting hyper-surface and evolves the metric). This is basically a Hamiltonian treatment of the GR metric as fields. The quantization never worked out; see the wiki article on canonical quantization at https://en.m.wikipedia.org/wiki/Canonical_quantum_gravity But see a treatment of classical GR as a field theory in http://www.reed.edu/physics/faculty/wheeler/documents/Classical%20Field%20Theory/Class%20Notes/Field%20Theory%20Chapter%204.pdf. It is not hard to see it as a field theory, and in fact the theory can be set up as a field defined by an action with a Lagrangian, which, when the action's variation is set to zero, yields the GR equations. In fact that is how a scalar field can be added to the theory, leading to Brans-Dicke. But it all results in the same equations. Wilczek is too good a physicist, having gotten the Nobel Prize and done all his work in QCD, not to have meant something. It seems that he meant that thinking of it and treating it as a field can make the similarities with the electromagnetic and other fields in physics obvious, and that quantization always needed something different in going from QED to electroweak unification, and again in the different quantization of QCD; eventually it all led to the standard model (SM), all based on quantum fields. But reading his article it seems to be more than that. Just nothing really specific.
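For reference, the action principle alluded to here is the Einstein–Hilbert action (a standard result, added for illustration; the matter contribution is lumped into a single term):

```latex
S = \frac{c^{4}}{16\pi G} \int R \, \sqrt{-g} \, \mathrm{d}^{4}x \; + \; S_{\text{matter}}
```

Setting the variation of $S$ with respect to the inverse metric $g^{\mu\nu}$ to zero yields Einstein's field equations, $R_{\mu\nu} - \frac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^{4}} T_{\mu\nu}$, which is the sense in which GR can be set up as a field theory defined by an action with a Lagrangian.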
He makes a beautiful case, with good physical examples, though with no mathematics, for space as effervescent (bubbling with excitations), substantial (lots of quantum fields filling it), weighty (dark energy), and elastic (trajectories bend). There is nothing wrong with using geometry to describe the math for GR, and maybe if space (or spacetime) is all he says it is, some geometrical interpretations can make sense for quantum gravity. He is saying the field theory view of it can be equivalent. Keep in mind that AdS/CFT says that a string theory in AdS spacetime is equivalent to a conformal field theory on its boundary. But other than the beautiful thought of some unification of GR and quantum theory, I don't see any deeper implication, nor any prescription or thought as to how to arrive at it, from what Wilczek said in your reference. I am open to having missed something deeper or more specific in an answer to the questioner.
{ "domain": "physics.stackexchange", "id": 34517, "tags": "general-relativity, spacetime, metric-tensor" }
Doubt regarding grounding of a sphere and charge distribution
Question: An insulated sphere with dielectric constant $K$ (where $K>1$) and radius $R$ has a total charge $+Q$ uniformly distributed in its volume. It is kept inside a metallic sphere of inner radius $2R$ and outer radius $3R$. The whole system is kept inside a metallic shell of radius $4R$; the metallic sphere is earthed as shown in the figure. The spherical shell of radius $4R$ is given a charge $+Q$. Consider the $E$-$r$ graph for $r > 0$ only. I started working on this by considering a $-Q$ charge induced on the inner side of the metallic sphere (since by application of Gauss's theorem the electric field inside a conductor should be null) and proceeded by considering an unknown charge $q$ on the outer side of the sphere, where it is grounded (or so I understood). By taking the potential to be zero I calculated $q = -3Q/4$, but I believe I must have gone wrong somewhere. With these values of charge, will the electric field inside the conductor (metallic sphere) still be zero? Answer: (Edit: originally I neglected the shell at $r = 4R$ completely, thank you to Urb for pointing this out. Since fields superimpose, we can add the effect of this shell to what has already been calculated.) First, let's neglect the outer charged shell at $r = 4R$. If the inner metallic shell (the one that has inner radius $2R$ and outer radius $3R$) were not connected to ground, then the total charge on the outer surface (at $r = 3R$) would be $+Q$ since the conductor is net neutral and $-Q$ amount of charge has been displaced to the inner surface, so the outer surface is left with electron holes that give it an overall charge of $+Q$. The reason why $-Q$ charge must be displaced to the inner surface is so that there is no electric field within the conductor, so any Gaussian surface within the conductor must enclose net zero charge. Since the insulator carries $+Q$ charge, the inner surface of the conductor must carry $-Q$ charge.
Since the inner metallic shell is grounded, however, think of it like this: the Earth effectively provides an infinite supply of electrons that can fill up the holes left by the electrons in the conductor that got moved to the inner surface. Thus, the charge on the inner surface (at $r=2R$) is still $-Q$, but the charge on the outer surface (at $r=3R$) is now $q = 0$ since the electron holes have been filled. Now, what happens if we add the charged shell at $r = 4R$? The positive charge will draw negative charge from the Earth and it will collect on the outer surface. The amount of charge drawn will now depend on the radii of the spheres, in contrast to the situation at the inner surface, which will always carry charge $-Q$ no matter the radii of the insulator or the conductor. One way to see why this is, is that if the inner radius of the grounded sphere got bigger, the negative charge pulled to the inner surface would get smaller because it is now further away from the positive charge on the insulator, but the total surface area that collects this charge gets bigger, so overall the total charge that collects at the surface will be the same (field drops as $1/r^2$, but the surface area grows as $r^2$, so the radius dependence cancels). By contrast, at the outer surface of the grounded sphere, if its radius got smaller, now not only does the negative charge drawn by the positively charged outer sphere get smaller because they are now separated by a greater distance, the surface area it collects on gets smaller too, so the total charge that collects on the outer surface will depend on the radii of the spheres. So, how much charge collects at the outer surface of the grounded sphere? The insulator and the charge at the inner surface we can completely neglect from now on, because effectively the grounded insulator isolates us from whatever charge there was there. First let's consider only the outer charged shell. 
The potential at $r = 4R$, which I'll call $\phi_0$, is: $$ \phi_0 = \phi(r=4R) = \frac{Q}{4\pi \epsilon_0 \cdot 4R}. $$ For $r < 4R$, considering only the charged outer shell, the potential will satisfy Laplace's equation $\nabla^2 \phi = 0$ since there is no enclosed charge. By the uniqueness theorem, the potential due only to the outer charged shell will therefore be $\phi_0$ throughout the interior $r<4R$. So this is the contribution to the potential from the outer charged shell. Now putting back the grounded conductor, in the grounded conductor, we must have $\phi = 0$. The potential contribution at $r=3R$ from the outer shell is $\phi_0$, as we know from above. So, the potential from the negative charge that collects at this outer surface, which I'll call $q$, must contribute exactly $-\phi_0$ in order to cancel and be zero. Thus, at $r=3R$, we must have: $$ \phi(r=3R) = \frac{q}{4\pi\epsilon_0 \cdot 3R} + \phi_0 = 0. $$ So, $q = -3Q/4$. This is the charge that must collect at the outer surface of the grounded conductor so that the potential due to the insulator and the charge at the inner surface (these two by themselves cancel), and the charge itself at the outer surface and the charged outer sphere all cancel. Will the potential in the interior of the grounded sphere still be zero? Yes. The potential here will be the potential of the outer charged sphere plus the potential due to the charged outer surface of the grounded conductor. Remember that for $r>2R$, the effects of the charged insulator are shielded by the conductor by the collection of the $-Q$ charge boundary on the inner surface: for $r>2R$ these two will always cancel each other, so we need only consider the outer charged surface and the outer charged sphere for $r>2R$. The potential from the outer charged sphere is $\phi_0$ no matter where within the sphere we are ($r<4R$); we calculated this above. What about the charged outer surface?
Calculate this as we did for the outer charged sphere: at $r=3R$ we know that the total potential is zero, so the potential due only to the charged outer layer here must be $-\phi_0$. For any position $r<3R$, since the charge enclosed by a Gaussian surface is zero, and by the uniqueness theorem, the potential due only to the charged outer surface must also be $-\phi_0$, no matter where we are. So, we have $\phi = \phi_0$ throughout $r<4R$ from the outer shell, and we have $\phi = -\phi_0$ throughout $r<3R$ from the outer charged surface. So, for $2R<r<3R$, which includes the grounded conductor, we must have $\phi = -\phi_0 + \phi_0 = 0$. (When we go $r<2R$ we must also add the potential from the inner charged surface and the charged insulator too.) I hope this clarifies the issue, but I appreciate that it is hard to follow. The key points are this: The electric field in a conductor is always zero, and the potential is constant but does not have to be zero. The potential of a grounded conductor always additionally sets the potential to be zero (or more accurately, whatever the potential of the grounding element is, which we usually take to be zero). A grounded conductor will always shield the enclosed region from the outside region, and vice versa; so any calculation in the one region can neglect the charge content (and potential fields) of the other. The potential inside a shell of charge $q$ and radius $a$ is $\phi = q/4\pi\epsilon_0 a$, throughout the interior of the shell $r<a$. The total potential (or electric field) can be added from the potentials (or electric fields) of each charge or set of charges in isolation. These five points will, I believe, answer all of your questions/doubts. (In a previous edit I mentioned Paul Falstad's applets at https://www.falstad.com/emstatic/; I'll leave the reference here because they can be useful.)
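As a quick sanity check of the grounding condition at $r = 3R$, here is the algebra in exact rational arithmetic (an illustration only; units are chosen so the common prefactor $4\pi\epsilon_0 R$ drops out, and it cancels from the equation anyway):

```python
from fractions import Fraction

# Work in units where Q = 1 and 4*pi*eps0*R = 1.
Q = Fraction(1)
phi0 = Q / 4          # potential of the outer shell inside it: Q/(4*pi*eps0*4R)

# Grounding condition at r = 3R:  q/3 + phi0 = 0
q = -3 * phi0
print(q)              # -3/4, i.e. q = -3Q/4 on the outer surface
```

This reproduces the $q = -3Q/4$ found in the answer, and makes explicit that the result depends on the ratio of the two radii ($3R$ and $4R$), as argued above.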
{ "domain": "physics.stackexchange", "id": 68965, "tags": "electrostatics, electric-fields, charge, gauss-law" }
Entropy and data redundancy
Question: I have a quick question about the redundancy of data and the information that it represents. A means of evaluating the amount of information contained in the data is the notion of entropy. 1) For example, if in my data set I have only constant values (all more or less equal), then the entropy (Shannon's?) will be low, is that the case? In this case, we are talking about redundant data, are we not? 2) How do I prove it mathematically (that the entropy is low)? Can I use the following formula: $$ H_b(X) = -\mathbb{E}[\log_b P(X)] = \sum_{i=1}^{n} P_i \log_b\left(\frac{1}{P_i}\right) = -\sum_{i=1}^{n} P_i \log_b P_i $$ where $P_i$ is the probability that the random variable $X$ takes the value $X = x_i$ among $n$ possible values. In the case where I have only equal values ($x_i = \text{constant} \Rightarrow P_i = 1 \quad \text{for all } i$), I then have a zero Shannon entropy, that is, zero information. 3) On the contrary, if the data are really different and scattered, I would expect a large entropy: is this also correct? 4) In the universe, we say that entropy only increases with the homogenization of matter: we say this because there will be a lot of possible configurations with this homogenization, but which parameter do we take into account when we talk about this entropy (which is not the Shannon entropy)? At first sight, we could think that if the homogeneity is complete, the position of each galaxy is uniformly distributed and therefore represents redundant data. But the issue is that their positions are not constant values, contrary to my previous question: they will be evenly distributed but not equal to each other. Could you help me clarify this point? It seems that I am confused between the possible number of configurations and an eventual fixed distance between 2 galaxies (which is redundant from a Shannon-entropy point of view).
Should I maybe consider the distance between each pair of galaxies? 5) In conclusion, I hope these little problems are understandable. I'm interested in this because I'm working on Fisher's formalism, which is a kind of "entropy". Any help would be great. Regards Answer: I think that you have a confusion between redundant information and a uniform probability distribution. To answer 1) and 2): the error here is that the probability distribution is not the one you think. You only have one possible outcome (assuming that only microstate $j$ is observed), so you have a Dirac distribution \begin{equation} P_i = \left\{ \begin{array}{l} 1 \text{ for } i=j \\ 0 \text{ otherwise} \end{array} \right. \end{equation} In case 3), on the contrary, you have $P_i= 1/\Omega$ whatever $i$ (where $\Omega$ is your number of states), and this is a uniform probability distribution. If you compute the Shannon entropy in this case, you will get $H_b=\ln \Omega$, and you can show that this is the maximum of the Shannon entropy when you maximize over possible probability distributions (see Wikipedia https://en.wikipedia.org/wiki/Entropy_(information_theory)#Maximum for a demonstration). This uniform distribution is then the opposite of redundant data, since you cannot determine the rest of the data from a subset of them. 4) In this case, we are speaking of the Shannon entropy at time $t$: $-\sum_i P_i(t) \ln P_i(t)$, but with the current probability of observing microstate $i$, which is not necessarily equal to the microcanonical uniform probability. Now the evolution with time of this quantity can only grow according to the second law of thermodynamics, and the current probability of observing microstate $i$ will eventually converge towards the equilibrium uniform probability. Edit, to answer the question of the mutual distance of the galaxies: From your comment above, I now see why you want to consider a dataset of mutual distances as a redundant dataset.
However, it would be a redundant dataset only if all distances were exactly the same, i.e. if I give you the position of one of the galaxies then you can determine with no error the positions of all the others. But in the actual dataset, having the position of one of the galaxies only gives approximately the positions of the other ones, and this error is precisely quantified by the Shannon entropy. A configuration here is the actual position of the galaxies, so $(\vec{x}_{G1},\vec{x}_{G2},\ldots, \vec{x}_{GN})$, with discernible galaxies, so exchanging the positions of two galaxies will give you a different configuration. Assume that I give the position of one of the galaxies; from the set of mutual distances, you know that the average distance between galaxies is a certain number, so you can give the position of another galaxy with a certain uncertainty. But now if I try to compute the position of a third galaxy from the second one, I will again have an uncertainty that adds to the first uncertainty, so if I continue to add more galaxies by putting them at a constant distance from each other, the actual configuration (the set of positions of all galaxies) will be very far off from the computed one. That is very different from being able to give with full certainty the positions of all galaxies. So the number of configurations here would be the number of possible positions for the set of all galaxies, times the number of permutations of the galaxies, as they are discernible.
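The contrast between points 1)–3) above can be checked empirically on toy data; the shannon_entropy helper below is an illustrative sketch, not part of the original answer:

```python
import math
from collections import Counter

def shannon_entropy(data, base=2):
    """Empirical Shannon entropy of a dataset (bits by default)."""
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log(c / n, base) for c in counts.values())

constant = [5.0] * 1000        # redundant data: one outcome, P = 1 (Dirac distribution)
uniform = list(range(1000))    # scattered data: 1000 equiprobable outcomes

# Entropy of the constant dataset is 0 bits; of the uniform one, log2(1000) ≈ 9.97 bits,
# the maximum possible for 1000 outcomes.
```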
{ "domain": "physics.stackexchange", "id": 62763, "tags": "cosmology, entropy, data-analysis" }
Job queue system
Question: I'm doing a side project at work and I consider it as a learning opportunity more than work itself, but it does hold a purpose if I can complete it. Anyway, I'll post the code, what it actually does and hopefully people can critique/improve what I've done. os.system("bjobs -u all| awk ' NF>1' > file") lines = open('file', 'r').readlines() del lines[0] open('file', 'w').writelines(lines) source = open("file") out = open("file1", "w") for line in source: out.write(line.split(" ")[0] + "\n") out.close() source.close() This code does 2 parts. Firstly the bjobs -u all| awk ' NF>1' > file will print this to a file (plus however many jobs are running): JOBID USER STAT QUEUE FROM_HOST EXEC_HOST JOB_NAME SUBMIT_TIME 111111 ###### ### ### ###### comp113 ######## Mar 27 12:50 comp113 comp113 comp113 222222 ###### ### ### ###### comp114 ######## Mar 27 12:50 comp114 comp114 comp114 Then remove the first line of the file ("JOBID USER STAT" etc) as this isn't needed. Then strip the additional "comp113" lines away (this is done using the awk command to remove any lines that don't contain more than 1 field - maybe it's a bad idea mixing shell commands to format text within Python) leaving just: 111111 ###### ### ### ###### comp113 ######## Mar 27 12:50 222222 ###### ### ### ###### comp114 ######## Mar 27 12:50 From this, extract just the JOBID and store it within a separate file for later use (this is done by deleting any characters after the first whitespace in the line). Mainly I'm just looking at maybe how I can tidy the code up and improve on anything I've done. I do plan on commenting the code when I get around to it. Anything that I've left unexplained please do ask and I'll do my best to explain. There is a lot more to this project which I plan on asking questions about in the future! Just didn't want to get ahead of myself. Answer: I'm having a major doubt: should I give you the same advice I gave you the last time here? Would you follow them this time? 
Or was there something not clear? Well, I'll try my best. Don't use os.system Use the subprocess module. Quoting from the doc: This module intends to replace several other, older modules and functions, such as: os.system os.spawn* Use next() to skip the header File objects are iterable, so to skip the header usually one does: with open('example.txt') as f: next(f) for line in f: # start to work on the lines here But I'd say that you won't need that one if you don't really need to have the file file on your hard disk. Check the output It seems that you don't really need the file file to exist, so you could just check the output of your command with subprocess.check_output(). Notice that subprocess.check_output() will Run command with arguments and return its output as a byte string. So you'll have to call the .decode() method on it. If you don't have any idea of what I'm talking about take a look at the doc here. Use the with statement with open("file1", "w") as output: # ... .split() vs. .split(" ") Those two have different behaviours: >>> s = 'This is a test\tstring' >>> s.split() ['This', 'is', 'a', 'test', 'string'] >>> s.split(" ") ['This', 'is', '', '', '', '', '', '', '', 'a', 'test\tstring'] .split() will split multiple spaces and tabs too. You don't need awk After checking the output just skip the line that you don't want: cmd_output = subprocess.check_output(...).decode() with open('file1.txt', 'w') as output_file: for line in cmd_output.split('\n')[1:]: # [1:] will skip the header splitted_line = line.split() if len(splitted_line) > 1: output_file.write(splitted_line[0] + '\n') You could obviously do it without decoding and working with bytes, but this way it's more clear to start with.
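Pulling the pieces of this review together, a consolidated version might look like the sketch below. The sample text is invented to mimic the bjobs output shown in the question, and running_job_ids is an illustrative name; in the real script the text would come from subprocess.check_output(["bjobs", "-u", "all"]).decode():

```python
def running_job_ids(bjobs_output):
    """Extract the JOBID column from `bjobs -u all` output.

    Skips the header line and the continuation lines that only repeat
    the execution host (they have a single field).
    """
    ids = []
    for line in bjobs_output.splitlines()[1:]:   # [1:] skips the header
        fields = line.split()                    # bare split handles runs of spaces/tabs
        if len(fields) > 1:
            ids.append(fields[0])
    return ids

sample = (
    "JOBID USER STAT QUEUE FROM_HOST EXEC_HOST JOB_NAME SUBMIT_TIME\n"
    "111111 user1 RUN normal host1 comp113 job1 Mar 27 12:50\n"
    "comp113\n"
    "222222 user2 RUN normal host1 comp114 job2 Mar 27 12:50\n"
    "comp114\n"
)
print(running_job_ids(sample))   # ['111111', '222222']
```

Note that this replaces both the awk filter and the intermediate files: the header skip and the NF>1 test are done in Python, and the job IDs are returned as a list instead of being written to file1.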
{ "domain": "codereview.stackexchange", "id": 1530, "tags": "python, console" }
How is there no centre of symmetry in this case?
Question: The question was whether the following compound has a center of symmetry or not. The question was: And the answer is that there is no center of symmetry. How? According to me, there should be a centre of symmetry. I have drawn it like this: On asking my teacher how there is no center of symmetry, he told me that since it is a Fischer projection, the horizontal lines are above the plane, therefore there will be no center of symmetry. I understand that if the horizontal lines are out of the plane then there will be no COS, but what I don't understand is how the horizontal lines come to be above the plane/out of the plane. Please explain. Answer: One way to quickly get a visualization without building a 3D model (having a model is more instructive, though): Go to https://chemapps.stolaf.edu/jmol/jsmol/simple.htm Click on Load by name and type "(2S,3S)-Butane-2,3-diol" Click on console and type "rotate branch @2 @4 180" to get a conformation that is easy to compare with the Fischer projection.
{ "domain": "chemistry.stackexchange", "id": 12546, "tags": "isomers" }
Using rosbag with rosjava
Question: Hello everyone, I'm using the latest software for java and rosjava and after searching for a few hours, I couldn't find any api information on how to use rosbag from rosjava. Specifically, writing and reading to and from a bag file. Is this possible? Any help would be greatly appreciated. Thanks, Blair Originally posted by bgagnon on ROS Answers with karma: 51 on 2012-07-16 Post score: 0 Answer: rosjava doesn't provide that functionality. However, the rosbag file format is documented here so it shouldn't be too hard to implement a rosbag java library. Originally posted by Lorenz with karma: 22731 on 2012-07-16 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 10215, "tags": "rosbag, rosjava" }
Plot the frequency response of MTI delay line canceller using Octave
Question: I try to plot the frequency response of the delay line canceller (an FIR filter which is used for MTI). The delay line canceller has the following structure:

x[k] >---+------------------------>( + )---> y[k]
         |                           ^
         |    +-----+    +------+    |
         +--->|  T  |--->| x -1 |----+
              +-----+    +------+

Its frequency response is well known (Pg 20) and equals: $$H(\omega) = 2 \cdot \left|\sin (\frac{\omega \cdot T} {2})\right| \tag{1}$$ I also tried to obtain the frequency response of the canceller using the algorithm described in this question: $$y [k] = x [k] - x [k - 1] \tag{2}$$ $$Y (z) = X (z) - X (z) z ^{-1} \tag {3}$$ $$ H(z) = 1 - z^{-1} \tag{4}$$ Now I try to plot (using GNU Octave) both of the responses (1) and (4). w = linspace (0, 2 * pi, 100); z = exp (-j .* w); H_z = 1 - z .^ -1; H_w = 2.0 * abs (sin (w / 2) ); hold ("on"); plot (w, H_z, "1", "linewidth", 2); plot (w, H_w, "2", "linewidth", 2); title ("Frequency response of the delay line canceler."); set (gca, 'XTick', 0: pi / 2: 2 * pi) set (gca, 'XTickLabel',{'0', 'pi / 2', 'pi', '3 pi / 2','2 pi'}) xlabel ("Angular frequency."); ylabel ("Magnitude."); legend ("H (z)", 'H (\omega)'); I expect that they will be the same, but they are different. Where is my mistake? P.S. If I add the modulus for (4) (like: H_z = abs (1 - z .^ -1);) they become the same. Answer: The magnitude |H(z)| is always defined as the absolute value of your transfer function values H(z), which are complex. What you are plotting is a vector that contains complex numbers. You need to take their absolute value and that will give you the magnitude. If you call the angle function then you will get the phase. The transfer function H(z) is always complex. Try to print the values of real(H_z) and imag(H_z), then you will see the real and imaginary parts of $H(z)$. Now the magnitude |H(z)| is sqrt(real * real + imag * imag) for every sample...
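A direct Python/numpy transliteration of the Octave snippet (my addition, not from the original answer) confirms the point numerically: the identity |1 - e^{jω}| = 2|sin(ω/2)| is exactly what makes the two curves coincide once the modulus is taken.

```python
import numpy as np

w = np.linspace(0, 2 * np.pi, 100)
z = np.exp(-1j * w)                 # same z as in the Octave code
H_z = 1 - z ** -1                   # complex samples of H(z) on the unit circle
H_w = 2.0 * np.abs(np.sin(w / 2))   # closed-form magnitude response (1)

# plot() typically draws only the real part of complex data, which is why
# the two curves look different; after abs() they match to machine precision:
print(np.max(np.abs(np.abs(H_z) - H_w)))
```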
{ "domain": "dsp.stackexchange", "id": 2485, "tags": "filters, frequency-response, radar" }
Selecting a quantity from a list
Question: I want to implement a function take_upto_n(A, n) such that, e.g., take_upto_n([1,2,3,4], 8) returns [1,2,3,2] take_upto_n([1,2,4,3], 8) returns [1,2,4,1] take_upto_n([1,2,3,4], 100) returns [1,2,3,4] take_upto_n([1,2,3,4], 3) returns [1,2,0,0] Assume A contains non-negative values only. n is also non-negative. This is straightforward with a loop def take_upto_n(A, n): out = [] for x in A: out += [min(n, x)] n -= min(n, x) return out We can also do this with numpy, but this I.M.O. is fairly unreadable: def take_upto_n(A, n): A = np.array(A) return np.clip(n + A - np.cumsum(A), 0, A) Is there some standard function that does this - couldn't find it myself (or didn't know how to search)? If there isn't any such function, any advice on how to make it not look like "what is this?" (the n + A - np.cumsum(A) is the real killer)? Answer: Your regular Python implementation generally looks reasonable, and unless numpy offers a performance boost that you really need, I would not recommend it for this use case: the brevity/clarity tradeoff seems bad. My biggest suggestion is to consider clearer names. A function with a signature of take_upto_n(A, n) makes me think the function takes an iterable and returns a sequence of n values. Something like take_upto_sum(A, total) seems more communicative to me. And even "take" isn't quite right: take implies a filtering process (we'll take some and leave some). But we are limiting or capping. Perhaps you can think of something even better. This example presents a classic Python dilemma: you want to do a simple map over a list to generate a new list, so of course you would like to knock it out in a comprehension or something equally cool. But while iterating, you also need to modify some other variable to keep track of state, and Python mostly lacks the assignments-as-expression idiom that allows such trickery in other languages (for example, old-school Perl map/grep one-liners and their ilk).
If you are using a sufficiently modern Python, you can use the walrus operator to compress the code a bit, but it doesn't get you all the way there. Honestly, I wouldn't even bother with the walrus (but I would assign min() to a convenience variable to avoid repetition and thus help readability). In any case, the walrus approach: def take_upto_sum(A, total): out = [] for x in A: out.append(y := min(total, x)) total -= y return out I suppose it is possible to express this as a single list comprehension if we take advantage of the built-in itertools and operator modules. Like the numpy solution, this approach reminds me of the Spinal Tap quote: It's such a fine line between stupid, and uh ... clever. from itertools import accumulate from operator import sub def take_upto_sum(A, total): return [ max(min(x, acc), 0) for x, acc in zip(A, accumulate(A, sub, initial=total)) ]
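As a quick sanity check (my addition, not part of the original answer), both variants can be compared on the examples from the question plus a couple of edge cases. Note that accumulate(..., initial=...) needs Python 3.8+, the same "sufficiently modern Python" the walrus requires.

```python
from itertools import accumulate
from operator import sub

def take_upto_sum_loop(A, total):
    # The original loop, with the repeated min() factored out.
    out = []
    for x in A:
        y = min(total, x)
        out.append(y)
        total -= y
    return out

def take_upto_sum_acc(A, total):
    # accumulate(A, sub, initial=total) yields the remaining budget
    # *before* each element is taken: total, total-A[0], total-A[0]-A[1], ...
    return [
        max(min(x, acc), 0)
        for x, acc in zip(A, accumulate(A, sub, initial=total))
    ]

cases = [([1, 2, 3, 4], 8), ([1, 2, 4, 3], 8), ([1, 2, 3, 4], 100),
         ([1, 2, 3, 4], 3), ([], 5), ([0, 0, 7], 5)]
for A, total in cases:
    assert take_upto_sum_loop(A, total) == take_upto_sum_acc(A, total)
    # Invariant: the taken amounts always sum to min(sum(A), total).
    assert sum(take_upto_sum_acc(A, total)) == min(sum(A), total)
print("all cases agree")
```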
{ "domain": "codereview.stackexchange", "id": 41220, "tags": "python, numpy" }
Roller-spring system (Generalized mass)
Question: Just a few weeks into a physics class on waves/oscillating systems here and I'm a bit stumped. The system is straightforward: a cylinder resting on a floor, attached to a wall via a spring. The connection is at the cylinder's center of mass. The cylinder rolls without skidding, no air resistance. The cylinder has radius 'a', mass 'm' and the spring constant is k. I've already used the energy method to derive $\omega ^2 =\frac{2k}{3m}$ Where I'm stumped is the second part, where we're instructed to find the generalized mass. From my notes, this would take the initial form $\frac 1\mu=\frac1{m_1} + \frac 1{m_2}$ My initial attempt was to assume m1 was the translational mass and m2 was the rotational mass, I, giving: $\frac 1\mu=\frac1m + \frac 1I=\frac1m + \frac 2{ma^2}$ which leads to: $\mu=\frac{a^2}{a^2+2}m$ Given that the radius is irrelevant in the derivation for the first part, I'm assuming I've forgotten something or misapplied something else, seeing as this looks nothing like what I could use to plug into $\frac km$ to get something that resembles $\omega^2$ above... Any ideas what I've missed? Answer: I think I understand the problem. Do a free body diagram and see that there are two forces acting on the cylinder in the horizontal direction: The spring force $F_S = - k x$. The friction force $F_R$ keeping the cylinder under pure rolling. The equation of motion of the center of mass is $$m \ddot{x} = F_S + F_R$$ The contact point must not have velocity, so the motion of the center of mass must obey $$ \dot{x} + \dot{\theta} r = 0$$ or $$ \ddot{x} + \ddot{\theta} r = 0$$ or $$ \ddot{\theta} = -\frac{\ddot{x}}{r} $$ The rotational equation of motion is $$\left. I \ddot{\theta} = r\, F_R \right\} \ddot{x} =- \frac{r^2}{I} F_R $$ Combined with the linear motion: $$ \left. m \left(- \frac{r^2}{I} F_R \right) = -k x + F_R \right\} F_R = \frac{I}{I+m r^2} k x $$ So now the equation of motion is $$ \boxed{ m \ddot{x} =-\left( \frac{m r^2}{I+m r^2} k \right) x } $$ You can proceed from here. The effective mass $m^\star$ is defined by $$ \ddot{x} = -\frac{k}{m^\star} x $$ or $$ m^\star = m + \frac{I}{r^2} $$
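A quick numerical sanity check of the result (my addition, not part of the original answer; the parameter values are arbitrary): plugging $I = mr^2/2$ for a uniform cylinder into $m^\star = m + I/r^2$ gives $3m/2$, so the radius cancels, matching the question's observation, and integrating $m^\star \ddot{x} = -kx$ reproduces the period $2\pi/\omega$ with $\omega^2 = 2k/(3m)$.

```python
import math

# Arbitrary test values (my own, not from the problem statement).
m, r, k = 2.0, 0.35, 50.0
I = 0.5 * m * r**2            # moment of inertia of a uniform cylinder
m_eff = m + I / r**2          # = 3m/2 -- the radius cancels
omega = math.sqrt(k / m_eff)  # should equal sqrt(2k/(3m))

# Integrate m_eff * x'' = -k x (semi-implicit Euler) and measure the period
# from two successive downward zero crossings.
x, v, dt, t = 1.0, 0.0, 1e-5, 0.0
crossings = []
prev_x = x
while len(crossings) < 2:
    v += (-k * x / m_eff) * dt
    x += v * dt
    t += dt
    if prev_x > 0 >= x:       # downward zero crossing
        crossings.append(t)
    prev_x = x

period = crossings[1] - crossings[0]
print(period, 2 * math.pi / omega)
```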
{ "domain": "physics.stackexchange", "id": 37625, "tags": "homework-and-exercises, newtonian-mechanics, mass, rotational-dynamics, spring" }
A (comprehensive) URI parser for Python
Question: For a code challenge, I'm trying to write a comprehensive URI parser in Python that handles both URIs with authority paths (ex: URLs such as http://user:login@site.com/page?key=value#fragment) and other URI schemes (ex: mailto:mail@domain.com?subject=Blah). Here's my current code: import json import re class Uri(object): """ Utility class to handle URIs """ ESCAPE_CODES = {' ' : '%20', '<' : '%3C', '>' : '%3E', '#' : '%23', '%' : '%25', '{' : '%7B', '}' : '%7D', '|' : '%7C', '\\' : '%5C', '^' : '%5E', '~' : '%7E', '[' : '%5B', ']' : '%5D', '`' : '%60', ';' : '%3B', '/' : '%2F', '?' : '%3F', ':' : '%3A', '@' : '%40', '=' : '%3D', '&' : '%26', '$' : '%24'} @staticmethod def encode(string): """ "Percent-encodes" the given string """ return ''.join(c if not c in Uri.ESCAPE_CODES else Uri.ESCAPE_CODES[c] for c in string) # We could parse (most of) the URI using this regex given on the RFC 3986: # http://tools.ietf.org/html/rfc3986#appendix-B # We won't do it though because it spoils all the fun! \o/ # We're only going to use it detect broken URIs URI_REGEX = "^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?" def __init__(self, uri): """ Parses the given URI """ uri = uri.strip() if not re.match(Uri.URI_REGEX, uri): raise ValueError("The given URI isn't valid") # URI scheme is case-insensitive self.scheme = uri.split(':')[0].lower() self.path = uri[len(self.scheme) + 1:] # URI fragments self.fragment = None if '#' in self.path: self.path, self.fragment = self.path.split('#') # Query parameters (for instance: http://mysite.com/page?key=value&other_key=value2) self.parameters = dict() if '?' 
in self.path: separator = '&' if '&' in self.path else ';' query_params = self.path.split('?')[-1].split(separator) query_params = map(lambda p : p.split('='), query_params) self.parameters = {key : value for key, value in query_params} self.path = self.path.split('?')[0] # For URIs that have a path starting with '//', we try to fetch additional info: self.authority = None if self.path.startswith('//'): self.path = self.path.lstrip('//') uri_tokens = self.path.split('/') self.authority = uri_tokens[0] self.hostname = self.authority self.path = self.path[len(self.authority):] # Fetching authentication data. For instance: "http://login:password@site.com" self.authenticated = '@' in self.authority if self.authenticated: self.user_information, self.hostname = self.authority.split('@', 1) # Fetching port self.port = None if ':' in self.hostname: self.hostname, self.port = self.hostname.split(':') self.port = int(self.port) # Hostnames are case-insensitive self.hostname = self.hostname.lower() def serialize_parameters(self): """ Returns a serialied representation of the query parameters. """ return '&'.join('{}={}'.format(key, value) for key, value in sorted(self.parameters.iteritems())) def __str__(self): """ Outputs the URI as a string """ uri = '{}:'.format(Uri.encode(self.scheme)) if self.authority: uri += '//' if self.authenticated: uri += Uri.encode(self.user_information) + '@' uri += self.hostname if self.port: uri += ':{}'.format(self.port) uri += self.path if self.parameters: uri += '?' + self.serialize_parameters() if self.fragment: uri += '#' + Uri.encode(self.fragment) return uri def json(self): """ JSON serialization of the URI object """ return json.dumps(self.__dict__, sort_keys=True, indent=2) def summary(self): """ Summary of the URI object. Mostly for debug. 
""" uri_repr = '{}\n'.format(self) uri_repr += '\n' uri_repr += "* Schema name: '{}'\n".format(self.scheme) if self.authority: uri_repr += "* Authority path: '{}'\n".format(self.authority) uri_repr += " . Hostname: '{}'\n".format(self.hostname) if self.authenticated: uri_repr += " . User information = '{}'\n".format(self.user_information) if self.port: uri_repr += " . Port = '{}'\n".format(self.port) uri_repr += "* Path: '{}'\n".format(self.path) if self.parameters: uri_repr += "* Query parameters: '{}'\n".format(self.parameters) if self.fragment: uri_repr += "* Fragment: '{}'\n".format(self.fragment) return uri_repr Also hosted on github. All feedback, including failure to respect PEP8 or existence of more "pythonic" methods, is welcome! Answer: You referenced RFC 3986, but I don't think you've tried to follow it. In your constructor, you immediately lower-case everything. That is obviously wrong. RFC 3986 Sec. 6.2.2.1 says that only the scheme and host portions of URIs are case-insensitive. You have an escape() function, but oddly no unescape() function, which I expect would be needed for parsing URIs. Please be aware when implementing unescape() that query strings have special unescaping rules. The RFC uses the term "percent-encoding", so perhaps you should call it "encode" rather than "escape". Your escape() function only encodes specific characters, which is dangerous, considering that more characters exist that require encoding than that can be passed through. Be careful when calling split() where you expect at most one separator. You should use split(':', 1), split('@', 1) and split('#', 1) instead. Better yet, don't try to split at all. Instead, consistently use regular expression capturing for identifying all parts of the URI. You should be able to make one huge regular expression. All complex regular expressions, including the one you are using now, should have embedded comments.
{ "domain": "codereview.stackexchange", "id": 4527, "tags": "python, parsing, url" }
Determining the error in gradient through least fit of data
Question: I am trying to measure the error in the slope of the black dots. To do this I performed a linear regression on the data using from scipy.optimize import curve_fit in Python and plotted it as shown by the dashed red lines. The legend shows the equation of the line of this fit along with the $R^{2}$ value. I have also plotted some measurements of the same data but using a different technique, which is shown by the blue crosses. The reason for this was to determine the systematic error in the measurement technique used to get the black dots. Similarly I have also performed a linear regression of this data as shown in the legend by the blue dash. So my question is, what is the total error in the gradient of the data shown by the black dots? Answer: Without knowing the true slope there is no unique way of determining the error of the slope. So, all you can do is to select a method to determine the slope and then calculate the associated uncertainty. E.g. using the least-squares fit, the uncertainty of the slope estimate $\hat \beta$ is usually taken to be the standard deviation $$ Sd[\hat\beta_j] = \hat\sigma_\epsilon \sqrt{[(X^T X)^{-1}]_{jj}} $$ where $X$ is the so-called design matrix. I use the index $jj$ to indicate that we only take the diagonal elements. The point estimate of the "random error" $\hat\sigma_\epsilon$ is often taken to be $\sqrt{\sum_{i=1}^N (y - y_{fit})^2/(N-p)}$, where $N$ is the number of data points and $p$ is the number of factors (including the intercept). Often, I find it helpful to have an example to check my calculations.
Therefore, here is an example:

## Sample data:
x = 1:10
y = c(1.77954908388273, 2.44621066683337, -2.2018205040772, -0.764061711605929,
      -2.72481609322593, -3.51410342713565, -7.86973095833333, -8.78328718047611,
      -11.5217536084918, -6.80619245104001)
df = data.frame(x,y)

which looks like this

library(MASS) # for inverse ginv()
N = length(x) # number of data points

## design matrix
X = c(rep(1, N), x)
dim(X) = c(N,2) # reshape the design matrix, so that its dimension is (N,2)

## Fit data (formulas not included above):
betaHat = ginv(t(X) %*% X) %*% t(X) %*% y
yFit = X %*% betaHat

## Estimate of the uncertainty:
res = y - yFit # residuals
sigma = sqrt( sum(res^2) / (N-2) ) # use p=2, because we fit slope and intercept
betaSD = sigma * sqrt( diag(ginv(t(X) %*% X)) )

> Coefficients:
>             Estimate Std. Error t value Pr(>|t|)
> (Intercept)   3.6727     1.3757   2.670 0.028372 *
> beta         -1.3943     0.2217  -6.289 0.000236 ***

Looking at your data, there are two regions where the data does not appear to be linear. This is at approx $x=15$ and $x = 115$. Thus, you should decide whether or not these regions are important. If they are not important you should consider omitting these regions, so that your fit is improved. Note that the data points at the "$x$-edge" carry more leverage. Hence, they are particularly liable to distort your fit.
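The same computation in Python/numpy, for readers who do not use R (a direct translation of the snippet above; the expected numbers come from the R output):

```python
import numpy as np

x = np.arange(1, 11, dtype=float)
y = np.array([1.77954908388273, 2.44621066683337, -2.2018205040772,
              -0.764061711605929, -2.72481609322593, -3.51410342713565,
              -7.86973095833333, -8.78328718047611, -11.5217536084918,
              -6.80619245104001])
N, p = len(x), 2                              # p=2: intercept and slope

X = np.column_stack([np.ones(N), x])          # design matrix
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y                  # (intercept, slope)

res = y - X @ beta_hat                        # residuals
sigma = np.sqrt(res @ res / (N - p))          # estimate of the random error
beta_sd = sigma * np.sqrt(np.diag(XtX_inv))   # standard errors of the estimates

print(beta_hat, beta_sd)
```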
{ "domain": "physics.stackexchange", "id": 74200, "tags": "error-analysis, data-analysis" }
WARNING: ROS_MASTER_URI [http://192.168.0.109:11311] host is not set to this machine
Question: Hello, I am trying to use TurtleBot as the website describes (link text). When I have done (link text), I can check that the TurtleBot laptop can get data from a ROS node running on the workstation. The problem is that when I run "roscore" on my workstation, I get the warning: WARNING: ROS_MASTER_URI [http://192.168.0.109:11311] host is not set to this machine. And http://192.168.0.109 is my TurtleBot IP, while my workstation IP is 192.168.0.103. When I run $ rosrun turtlebot_dashboard turtlebot_dashboard& my dashboard buttons remain all grey. What's the matter? Thanks a lot. Originally posted by longzhixi123 on ROS Answers with karma: 78 on 2012-11-21 Post score: 0
{ "domain": "robotics.stackexchange", "id": 11831, "tags": "turtlebot, ros-electric" }
Random X-Inactivation and Duchenne
Question: I'm reading about X-inactivation and I can't reconcile some things with it being truly random. In only a small percentage of female carriers Duchenne's will be expressed. But if this was truly random, wouldn't 50% of female carriers express the disease? As one of the X-chromosomes is silenced at random, this seems sort of contradictory. Then, in females that do express the disease phenotype it's because of non-random X-inactivation. For me, this sounds like it's the other way around, so clearly I'm not up to speed and am in need of a bit of clarification on the subject. Answer: Duchenne muscular dystrophy (DMD) is caused by the body's inability to make the protein dystrophin, which is needed for proper muscle function (1, 2). In an individual carrier, half of the cells that would normally make dystrophin cannot, but the other half still can. This leads to two explanations (AFAIK) for why carriers do not express DMD: The cells that can make dystrophin make twice as much, so the total amount produced is the same as in a normal individual. Only half as much dystrophin is produced, but that is enough to prevent symptoms from manifesting. This is actually the stock explanation for why female carriers exist for X-linked recessive diseases. You can substitute 'DMD' and 'dystrophin' with another X-linked recessive disease and the associated protein (e.g. 'hemophilia' and 'factor 8 or 9') and it would still be mostly true. I say mostly true because DMD is actually unusual. Its carriers do sometimes exhibit symptoms (3, 4). Different studies give different rates for how often this happens, but the NIH says it is about 20%. EDIT: I am still a little unclear on what you're asking, but I think you are confused about why any carriers for Duchenne express the disease. The answer to that is a phenomenon called skewed X-inactivation, which is when either the paternal or maternal X chromosome is inactivated and turned into a Barr body more often than the other.
This can happen for 2 reasons: Primary: When Barr bodies are forming, the paternal or maternal chromosome is selected more than 50% of the time. Secondary: After Barr bodies form, cells with the paternal chromosome active reproduce less than cells with the maternal chromosome active, or vice versa. Both types of skewing can happen for genetic causes (e.g. the Xce locus in mice can cause primary inactivation) or stochastic causes (i.e. by pure chance some people will have unlikely things happen to them). I spent the last few hours reading up on which type happens specifically during Duchenne and I am still not 100% sure. What I am confident about is that carriers for Duchenne can have a translocation, which causes the skewed X-inactivation and therefore symptoms in carriers: A few cases of translocations involving DMD have been reported [22]; these translocations will cause DMD in both males and females, the latter owing to non-random X-inactivation of the unaffected X chromosome. In these cases, cells with inactivation of the mutated X chromosome (cells that could, in theory, produce dystrophin) are not viable owing to the inactivation effect of the chromosomal translocation on the autosome. Only the cells where the unaffected X chromosome is inactivated will be viable. However, these will not produce dystrophin owing to the chromosomal translocation affecting DMD. Therefore, females with these translocations are unable to produce any dystrophin. What I cannot figure out is if the translocation happens on the chromosome with the functioning DMD gene or the chromosome with the defective DMD gene.
{ "domain": "biology.stackexchange", "id": 12228, "tags": "gene-expression" }
Converting Collada to urdf
Question: When running rosrun urdf check_urdf my_collada.dae, I get this error. COLLADA error: Trying to load an invalid Collada version for this DOM build I have looked at the source in collada_parser and urdf and cannot seem to find which version is required (I have tried 1.4.0 and 1.5.0) I need to be publishing joints on /tf, so I want to convert my Collada file into an URDF, which robot_description requires as input. Will this method work on a full-robot model with kinematic information, or would I have to convert each part to an urdf? Here's a link to an example file that gives the error Thanks Originally posted by phil0stine on ROS Answers with karma: 682 on 2012-08-18 Post score: 1 Original comments Comment by jbohren on 2012-08-19: Can you post the model, or a similar one that gives you the same error? Answer: Perhaps I am misunderstanding something, but as I see it you are testing a COLLADA format to see if it is URDF. They are two different formats. You can include a reference to a .dae file within a URDF file as a link, but .dae themselves are not URDF. I would recommend you check out this excellent video on URDF from ROSCON http://www.youtube.com/watch?v=g9WHxOpAUns At 15:30 in the video you will see the format to include .dae into your URDF file. I've followed this, and it works perfectly. I have about 10 different mesh links in my URDF with no problems. Originally posted by dougbot01 with karma: 342 on 2012-08-19 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by jbohren on 2012-08-20: Except it's possible to load a COLLADA file and construct a C++ URDF object with this library: http://www.ros.org/wiki/collada_parser Comment by phil0stine on 2012-08-20: @jbohren Exactly, the ideal would be to support full kinematic chains, as mentioned here, but right now I am just going for a simple shape
{ "domain": "robotics.stackexchange", "id": 10666, "tags": "ros, urdf, collada, robot-description" }
Fastest recorded apparent motion of a comet or asteroid seen from Earth (degrees/day)?
Question: If I've done my maths correctly in this answer, comet 2I/Borisov (C/2019 Q4) was seen to move about 0.2 degrees in 7 hours, which is almost 0.7 degrees per day. I'm assuming this is pretty rapid motion for a comet, but I'm wondering if comets (or asteroids) have been observed to move even faster than this relative to the celestial sphere. Yes, I know the speed of apparent motion depends on incidentals like how close the comet happens to pass by the Earth. I'm just wondering if any particular comet stands out as being particularly fast, possibly even difficult to track because perhaps some software didn't allow such a large offset in tracking relative to the stars. Question: What is the fastest recorded apparent motion of a comet or asteroid, seen from Earth (degrees/day) relative to the celestial sphere? note: @JamesK's comment reminds me that I would like to exclude meteor tracks. For the purposes of this question the recorded tracks should be produced by reflected light, either from the Sun or from radar illumination, and not be something burning up in our atmosphere. Answer: This is more of a recent example than a record. On 2019-07-25, asteroid 2019 OK passed about 65000 km from the Earth at a relative speed of 24.5 km/s. The Minor Planet Center lists multiple observations from sites in Italy and Armenia an hour before closest approach. Using pairs of these (streak endpoints?) by the ISON-Castelgrande observatory, I compute motions of 29.8° per hour at 00:23 UT and 31.7° per hour at 00:26 UT. The MPC ephemeris service gives sky motions of 30.6°/hr and 32.5°/hr from site L28 at those times. If someone had observed the asteroid from (25°S, 85°E) in the Indian Ocean at 01:22 UT, they would have seen it move 76° per hour. Update: On 2020-08-16, tiny (H=29.8) asteroid 2020 QG passed by at 12.3 km/s, about 9320 km from the Earth's center or 2950 km from the surface. A NASA news article said this was a new record for close approach distance.
An observer in the right place at 04:09 UT would have seen a maximum sky motion of $$\mathrm{\frac{12.3~km/s}{2950~km} = 0.0042~radian/s = 860^\circ/hr}$$ From pairs of actual observations reported to MPC, ZTF Palomar (I41) saw a sky motion of 566°/hr at 10:23 UT, and ATLAS Mauna Loa (T08) recorded 439°/hr around 11:46 UT.
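The small-angle rate quoted above is easy to reproduce (a sketch of the same arithmetic: 12.3 km/s seen transversely from 2950 km away):

```python
import math

v_rel_km_s = 12.3   # relative speed of 2020 QG at closest approach
d_km = 2950.0       # distance from the Earth's surface

rate_rad_s = v_rel_km_s / d_km                 # small-angle rate in rad/s
rate_deg_hr = math.degrees(rate_rad_s) * 3600  # convert to degrees per hour

print(rate_deg_hr)
```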
{ "domain": "astronomy.stackexchange", "id": 4028, "tags": "comets, near-earth-object, apparent-motion" }
Simple calculator app made in Android Studio (Java)
Question: I'm a college student in my first year of my bachelor in IT. I recently decided to try some app development with Android Studio and Java. For my first project, I made a basic calculator app. Here you can see how it looks: The calculator app: can take multiple digits and floating point numbers as input has operator precedence can add, subtract, divide and multiply My main idea was to have an ArrayList in which every number and every operator gets stored. Then for the calculation, I searched the ArrayList for the operator and took the numbers that are to the left and to the right of it, and replaced them with the result of the operation. Here is the code of my mainActivity.java: package com.example.calculatormk2; public class MainActivity extends AppCompatActivity { private ArrayList input = new ArrayList(); private StringBuilder number = new StringBuilder(); private boolean calculationDone = false; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); } public void onNumberClick(View v) { number.append(getBtnTxt(v)); updateView(v); } public void onOperatorClick(View v) { addNumberToInput(number); addOperatorToInput(v); updateView(v); } public void onEqualsClick(View v) { addNumberToInput(number); calculate(input); clearInput(); } public void calculate(ArrayList i) { while (i.contains("x") || i.contains("÷")) { int indexOperatorTimes = i.indexOf("x"); int indexOperatorDivideBy = i.indexOf("÷"); int indexOperatorMain; if (indexOperatorTimes < indexOperatorDivideBy && indexOperatorTimes != -1) { indexOperatorMain = indexOperatorTimes; } else { indexOperatorMain = indexOperatorDivideBy; } if (indexOperatorDivideBy == -1) { indexOperatorMain = indexOperatorTimes; } double firstNumber; double secondNumber; String operator = i.get(indexOperatorMain).toString(); double answer; if (i.get(indexOperatorMain - 1) instanceof Integer) { firstNumber = (int) i.get(indexOperatorMain - 1); } else {
firstNumber = (double) i.get(indexOperatorMain - 1); } if (i.get(indexOperatorMain + 1) instanceof Integer) { secondNumber = (int) i.get(indexOperatorMain + 1); } else { secondNumber = (double) i.get(indexOperatorMain + 1); } if (operator.equals("x")) { answer = firstNumber * secondNumber; } else { answer = firstNumber / secondNumber; } i.remove(indexOperatorMain - 1); i.remove(indexOperatorMain - 1); i.remove(indexOperatorMain - 1); i.add(indexOperatorMain - 1, answer); System.out.println(i); } while (i.contains("+") || i.contains("-")) { int indexOperator1 = i.indexOf("+"); int indexOperator2 = i.indexOf("-"); int indexOperatorDef; if (indexOperator1 < indexOperator2 && indexOperator1 != -1) { indexOperatorDef = indexOperator1; } else { indexOperatorDef = indexOperator2; } if (indexOperator2 == -1) { indexOperatorDef = indexOperator1; } double firstNumber; double secondNumber; String operator = i.get(indexOperatorDef).toString(); double answer; if (i.get(indexOperatorDef - 1) instanceof Integer) { firstNumber = (int) i.get(indexOperatorDef - 1); } else { firstNumber = (double) i.get(indexOperatorDef - 1); } if (i.get(indexOperatorDef + 1) instanceof Integer) { secondNumber = (int) i.get(indexOperatorDef + 1); } else { secondNumber = (double) i.get(indexOperatorDef + 1); } if (operator.equals("+")) { answer = firstNumber + secondNumber; } else { answer = firstNumber - secondNumber; } i.remove(indexOperatorDef - 1); i.remove(indexOperatorDef - 1); i.remove(indexOperatorDef - 1); i.add(indexOperatorDef - 1, answer); } calculationDone = true; TextView t = findViewById(R.id.mainOutput); String s = i.get(0).toString(); t.setText(s); } public void addOperatorToInput(View v) { input.add(getBtnTxt(v)); } public void addNumberToInput(StringBuilder s) { if (s.toString().contains(".")) { input.add(Double.parseDouble(s.toString())); } else { input.add(Integer.parseInt(s.toString())); } clearNumber(); } public String getBtnTxt(View v) { return ((Button) 
v).getText().toString(); } public void clearNumber() { number.delete(0, number.length()); } public void clearInput() { input.clear(); } @SuppressLint("SetTextI18n") public void updateView(View v) { TextView mainOutput = findViewById(R.id.mainOutput); if (calculationDone) { mainOutput.setText(""); calculationDone = false; } mainOutput.setText(mainOutput.getText() + getBtnTxt(v)); } } In total it took me about 10 hours to make this and I'm pretty happy with it (because I didn't expect to get it to even work) even though there is a lot of duplicated code. There is also a lot of code which only exists to cast the numbers to the right datatype. I realised late when working on the app that the get() function of an ArrayList returns the item as an Object and it gave a lot of trouble retrieving something out of it. Looking back, I probably should have done this differently, but I went into this project without a proper plan and just winged it. I really would appreciate some feedback. You can be honest about it, even if it's garbage. If, by any chance, anyone wants to take a look at the full project, here is the GitHub page: https://github.com/PhilipNousPXL/Calculator-mk2.git Answer: nice to see some android code here - thanks for sharing violation of Single Responsibility Principle (SRP) Your MainActivity is responsible for building up the GUI and is also responsible for being a calculator! That is too much responsibility for your Activity. Solution: create a class Calculator that is responsible for taking the input and returning a result. violation of Integration Operation Segregation Principle (IOSP) this is the reason why your calculate method is so messy (sorry, no offense, really!) IOSP calls for a clear separation: Either a method contains exclusively logic, meaning transformations, control structures or API invocations. Then it’s called an Operation. Or a method does not contain any logic but exclusively calls other methods within its code base.
Then it's called Integration. Solution: create an integration method that handles the logical program flow, and several operation methods that do the bit shuffling! example for an integration method (this is just a method stub): public void calculate(ArrayList i) { handleMultiplier(i); handleAdditions(i); } example for an operation method (this is just an example, not real code!!!): public double getLeftOperand(...){ if (i.get(indexOperatorDef - 1) instanceof Integer) { return (int) i.get(indexOperatorDef - 1); } else { return (double) i.get(indexOperatorDef - 1); } } unnecessarily complex code it is overkill to parse the arguments to either double or int Keep It Simple Stupid... keep all your input in String and just parse ONLY to double (see getLeftOperand() up above)! pseudo concurrency You make your Button unclickable while the calculator is doing some cpu-intensive Task. Either make a real Async-Task (see Handler.post(Runnable)) and make a real separation from your GUI-Thread. Or just leave it be and skip the enabling/disabling (that's fine for me) Open Question what happens if you push + twice? That case is not handled properly; you cannot even remove your accidentally pushed input. (yes - that also counts for the other arithmetic operations) Naming well, that's just me being picky, but one-letter variables (i, v, etc.) are no longer used today since every IDE provides code completion! Android Activity Life Cycle have a look at the Android Activity Life Cycle and think about what happens to the input if your Activity is closed unexpectedly. Your input will be wiped. But you have left the requirements open for that, so feel free to just look a bit deeper into Android and take that point along for further projects :-)
{ "domain": "codereview.stackexchange", "id": 42923, "tags": "java, android" }
Is it possible for the Milky Way and Andromeda to get ejected upon colliding instead of merging?
Question: The Milky Way and Andromeda are destined to collide and merge within the next 4-5 billion years, but I feel like there could be a chance that instead of merging, they could just eject each other. I dunno, but is it possible, and if so, how likely will it be? Answer: A full merger may not occur on that timescale. The encounter in 4.5 billion years will take place but the galactic centers may be far enough apart that it is a "glancing blow". Regardless, the orbit will be diminished and the two will come together some billions of years later (e.g. Schiavi et al. 2021). The basic physics here is that the Milk-dromeda system(!) has a certain amount of energy (the sum of its kinetic energy and gravitational potential energy), and the total is negative. This indicates that the system is bound. As time goes on energy is lost from the system through radiation and also perhaps through the ejection of some gas and stars. This makes the system more bound. These processes are actually enhanced as the galaxies approach each other and their gas starts to interact (the stars are effectively point-like particles and won't collide). Thus the net effect will always be to bind the galaxies closer together and they won't escape from each other. That said, the fact that each of the galaxies contains billions of component stars, does mean the energy can be shared out in many different ways. It is quite likely that some (a small fraction) of the stars and gas will gain sufficient kinetic energy during the collision to escape from the system as a whole. Here is a simulation based on the Schiavi et al. calculations. You can see the initial glancing blow followed by the merger sometime later. You can also see that a fair number (difficult to be quantitative based on this) of stars do get thrown out of the system during the merger.
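The bound-system argument above can be written compactly. The following is a reader's sketch with generic symbols (the masses and separation are illustrative placeholders, not fitted Milky Way/M31 values):

```latex
E_{\mathrm{tot}}
  \;=\; \underbrace{\tfrac{1}{2}\,\mu v^{2}}_{\text{kinetic}}
  \;\underbrace{-\;\frac{G\,M_{\mathrm{MW}}\,M_{\mathrm{M31}}}{r}}_{\text{potential}}
  \;<\; 0 ,
```

where $\mu$ is the reduced mass and $r$ the separation. Since radiation and ejected material only remove energy from the pair, $E_{\mathrm{tot}}$ can only become more negative with time, which is why the two galaxies remain bound and eventually merge.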
{ "domain": "astronomy.stackexchange", "id": 6613, "tags": "galaxy, milky-way, collision, future" }
How can a glass rod become charged if it is an insulator?
Question: I was reading some of the other questions, and I found this one about a glass rod and how it gains a net charge when rubbed with a silk scarf. I learned from working in a shop one summer that most solids are insulators, because their electrons are tightly bound, so it is hard to knock them off. Why would such a simple motion (like a moving scarf) knock electrons from an insulator (I looked it up and glass is an insulator)? Answer: Conductivity is not just about how tightly bound electrons are, but equally about how easy it is for them to travel. Example: a bunch of islands in a shark-infested sea. You cannot swim from one island to the next although it is close. At low tide you can walk across no problem. The first example is an insulator, the second is a conductor. Rubbing (google triboelectricity) causes unlike atoms to stick and unstick frequently. Atoms "fight" over electrons, and the stronger one gets to take the electron home. It is like air lifting them from the island - shark infested waters or not. There are lists of materials (the triboelectric series) that tell you which material will give up its electrons when in contact with another material. Glass is high on the list - it loses electrons easily. They can't move sideways, but they can be picked off the surface.
{ "domain": "physics.stackexchange", "id": 15752, "tags": "electromagnetism, electricity, insulators" }
What calculations show that Comet Shoemaker-Levy 9 orbited Jupiter for several decades before its spectacular impact? (Chodas, Sekanina & Yeomans)
Question: This answer to A moon in eccentric orbit dipping below Roche limit includes the following about Comet Shoemaker–Levy 9 Here is a nice figure of its last passes: I looked up "fragment A" in JPL's Horizons and it provides a modeled trajectory only between 1992-07-15 and 1994-07-17 but the link cited above says Computations by Paul Chodas, Zdenek Sekanina, and Don Yeomans, suggest that the comet has been orbiting Jupiter for 20 years or more, but these backward extrapolations of motion are highly uncertain. See "elements." and "ephemeris." at SEDS.LPL.Arizona.EDU in /pub/astro/SL9/info for more information. If I paste some of that into a search engine I get http://spider.seds.org/sl9-list.html with lots and lots of links, but now I'm in way over my head. Question: What calculations show that Comet Shoemaker–Levy 9 was orbiting Jupiter for several decades before its spectacular impact? (Chodas, Sekanina & Yeomans) Where can I read about these calculations? Have there been further refinements since, based on any subsequently discovered historical precovery data? Answer: Chodas and Yeomans published a paper called The orbital motion and impact circumstances of Comet Shoemaker-Levy 9. In it they describe the algorithms and techniques they used to reconstruct the orbit backwards from the discovery of the comet. SL9 had an incredibly chaotic orbit in the mathematical sense in that small perturbations in the position would result in large effects in reverse-time orbital simulations. In the words of the paper authors: Unfortunately, SL9's orbit about Jupiter was among the most chaotic of any known solar system body, with an effective Lyapunov time on the order of 10 years (Benner and McKinnon 1995). As a result, a single backward numerical integration does not provide definitive answers on the orbital history of this object.
A better approach is to account for the uncertainties in the initial conditions of the backward integrations, and to investigate the motion in a statistical manner using a Monte Carlo analysis (Chodas and Yeomans 1995). For comparison, the Lyapunov time for the Solar System is on the order of millions of years. An orbit trajectory plot for SL9 is the most complicated I've ever seen (excluding human-built craft)! I don't know if precovery data was ever used to refine the orbit, though papers are still being published on SL9's orbit. Unfortunately, Invariant Manifolds and the Capture of Comet Shoemaker-Levy 9, published in 2019, is behind a paywall.
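The quoted point about chaos (a Lyapunov time of roughly 10 years) can be illustrated schematically. The sketch below uses a generic chaotic map, not real orbital dynamics, to show how a tiny uncertainty in the initial state gets amplified until a single backward integration stops being informative — which is why Chodas and Yeomans turned to a Monte Carlo ensemble:

```python
# Schematic illustration only: the logistic map at r = 4 is chaotic,
# so two trajectories starting 1e-10 apart diverge to O(1) separation.
def logistic_orbit(x0, n, r=4.0):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2, 60)
b = logistic_orbit(0.2 + 1e-10, 60)
gap = [abs(x - y) for x, y in zip(a, b)]
# gap[0] is ~1e-10; well before step 60 the two trajectories decorrelate.
```

A Monte Carlo treatment sidesteps this by integrating a whole cloud of initial conditions consistent with the observational uncertainty and reporting statistics over the cloud, rather than trusting any single trajectory.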
{ "domain": "astronomy.stackexchange", "id": 5368, "tags": "orbital-mechanics, jupiter, comets, impact, resource-request" }
Matrix Diagonal Difference
Question: Problem Given a square matrix of size N×N, calculate the absolute difference between the sums of its diagonals. Input Format The first line contains a single integer, N. The next N lines denote the matrix's rows, with each line containing N space-separated integers describing the columns. Output Format Print the absolute difference between the two sums of the matrix's diagonals as a single integer. Sample Input 3 11 2 4 4 5 6 10 8 -12 Sample Output 15 Code (ns hackerrank.core [:require [clojure.string :as s]]) (defn get-diagonal-sums-reducer [n] (fn [sums [line-number line]] (let [primary (nth line line-number) secondary (nth line (- n (+ 1 line-number)))] (assoc sums :primary (+ primary (:primary sums)) :secondary (+ secondary (:secondary sums)))))) (let [n (Integer/parseInt (read-line)) matrix (for [_ (range n)] (mapv #(Integer/parseInt %) (s/split (read-line) #"\s+"))) matrix-enumerated (map-indexed vector matrix) sums (reduce (get-diagonal-sums-reducer n) {:primary 0 :secondary 0} matrix-enumerated) difference (- (:primary sums) (:secondary sums))] (println (max difference (- difference)))) Couple questions in particular: can I avoid the get-diagonal-sums-reducer somehow? I'm only using it so that it closes over n. am I doing this as lazily as possible or is there any place I'm using vectors when seqs would have worked? is there a much less verbose way to do this? Doesn't feel like this would be as complicated in Python. Answer: Could you have avoided get-diagonal-sums-reducer? You probably could have used partial if you wanted, though I'm not sure it ends up that much better: (defn diagonal-sums-reducer [n sums [line-number line]] ...) (reduce (partial diagonal-sums-reducer n) ...) Is there a much less verbose way to do this? I think there is. I can see why you've tried to use reduce, but your reducer is doing way more than it should be. Essentially, there's 3 things it's doing: Calculating the coordinates of the diagonals for the line.
Getting the values of those coordinates. Summing the values together with the previous values. There might be some situations where you'd need to do all of those things in a single reduce, but I don't think this is one of those cases. It's also complicated by having to do this for both the primary and the secondary cases. It ends up much simpler if you split the whole process out into the individual steps: (defn sum [x] (apply + x)) (let [n 3 matrix [[11 2 4] [4 5 6] [10 8 -12]] ;; Build up a list of coordinates for the diagonals. primary-coords (for [i (range n)] [i i]) secondary-coords (for [i (range n)] [(- n i 1) i]) ;; Extract the values of those coordinates primaries (map #(get-in matrix %) primary-coords) secondaries (map #(get-in matrix %) secondary-coords)] ;; Sum them, take the absolute difference (Math/abs (- (sum primaries) (sum secondaries))))
{ "domain": "codereview.stackexchange", "id": 30351, "tags": "matrix, clojure" }
Recording audio in C
Question: Please note there are newer revisions of this code, one here, and one here for continuous audio recording. This is a program I wrote as a .wav audio recording library for Linux. It was developed on a Raspberry Pi, so that may affect the dependencies required.(1) wav.h #include <stdint.h> typedef struct { char RIFF_marker[4]; uint32_t file_size; char filetype_header[4]; char format_marker[4]; uint32_t data_header_length; uint16_t format_type; uint16_t number_of_channels; uint32_t sample_rate; uint32_t bytes_per_second; uint16_t bytes_per_frame; uint16_t bits_per_sample; } WaveHeader; WaveHeader *genericWAVHeader(uint32_t sample_rate, uint16_t bit_depth, uint16_t channels); WaveHeader *retrieveWAVHeader(const void *ptr); int writeWAVHeader(int fd, WaveHeader *hdr); int recordWAV(const char *fileName, WaveHeader *hdr, uint32_t duration); Here is the main program: #include <alsa/asoundlib.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include "wav.h" WaveHeader *genericWAVHeader(uint32_t sample_rate, uint16_t bit_depth, uint16_t channels) { WaveHeader *hdr; hdr = malloc(sizeof(*hdr)); if (!hdr) return NULL; memcpy(&hdr->RIFF_marker, "RIFF", 4); memcpy(&hdr->filetype_header, "WAVE", 4); memcpy(&hdr->format_marker, "fmt ", 4); hdr->data_header_length = 16; hdr->format_type = 1; hdr->number_of_channels = channels; hdr->sample_rate = sample_rate; hdr->bytes_per_second = sample_rate * channels * bit_depth / 8; hdr->bytes_per_frame = channels * bit_depth / 8; hdr->bits_per_sample = bit_depth; return hdr; } int writeWAVHeader(int fd, WaveHeader *hdr) { if (!hdr) return -1; write(fd, &hdr->RIFF_marker, 4); write(fd, &hdr->file_size, 4); write(fd, &hdr->filetype_header, 4); write(fd, &hdr->format_marker, 4); write(fd, &hdr->data_header_length, 4); write(fd, &hdr->format_type, 2); write(fd, &hdr->number_of_channels, 2); write(fd, &hdr->sample_rate, 4); write(fd, &hdr->bytes_per_second, 4); write(fd, &hdr->bytes_per_frame, 2); write(fd, 
&hdr->bits_per_sample, 2); write(fd, "data", 4); uint32_t data_size = hdr->file_size - 36; write(fd, &data_size, 4); return 0; } int recordWAV(const char *fileName, WaveHeader *hdr, uint32_t duration) { int err; int size; snd_pcm_t *handle; snd_pcm_hw_params_t *params; unsigned int sampleRate = hdr->sample_rate; int dir; snd_pcm_uframes_t frames = 32; const char *device = "plughw:1,0"; // USB microphone // const char *device = "default"; // Integrated system microphone char *buffer; int filedesc; /* Open PCM device for recording (capture). */ err = snd_pcm_open(&handle, device, SND_PCM_STREAM_CAPTURE, 0); if (err) { fprintf(stderr, "Unable to open PCM device: %s\n", snd_strerror(err)); return err; } /* Allocate a hardware parameters object. */ snd_pcm_hw_params_alloca(&params); /* Fill it in with default values. */ snd_pcm_hw_params_any(handle, params); /* ### Set the desired hardware parameters. ### */ /* Interleaved mode */ err = snd_pcm_hw_params_set_access(handle, params, SND_PCM_ACCESS_RW_INTERLEAVED); if (err) { fprintf(stderr, "Error setting interleaved mode: %s\n", snd_strerror(err)); snd_pcm_close(handle); return err; } /* Signed 16-bit little-endian format */ if (hdr->bits_per_sample == 16) err = snd_pcm_hw_params_set_format(handle, params, SND_PCM_FORMAT_S16_LE); else err = snd_pcm_hw_params_set_format(handle, params, SND_PCM_FORMAT_U8); if (err) { fprintf(stderr, "Error setting format: %s\n", snd_strerror(err)); snd_pcm_close(handle); return err; } /* Two channels (stereo) */ err = snd_pcm_hw_params_set_channels(handle, params, hdr->number_of_channels); if (err) { fprintf(stderr, "Error setting channels: %s\n", snd_strerror(err)); snd_pcm_close(handle); return err; } /* 44100 bits/second sampling rate (CD quality) */ sampleRate = hdr->sample_rate; err = snd_pcm_hw_params_set_rate_near(handle, params, &sampleRate, &dir); if (err) { fprintf(stderr, "Error setting sampling rate (%d): %s\n", sampleRate, snd_strerror(err)); snd_pcm_close(handle); return err; 
} hdr->sample_rate = sampleRate; /* Set period size*/ err = snd_pcm_hw_params_set_period_size_near(handle, params, &frames, &dir); if (err) { fprintf(stderr, "Error setting period size: %s\n", snd_strerror(err)); snd_pcm_close(handle); return err; } /* Write the parameters to the driver */ err = snd_pcm_hw_params(handle, params); if (err < 0) { fprintf(stderr, "Unable to set HW parameters: %s\n", snd_strerror(err)); snd_pcm_close(handle); return err; } /* Use a buffer large enough to hold one period */ err = snd_pcm_hw_params_get_period_size(params, &frames, &dir); if (err) { fprintf(stderr, "Error retrieving period size: %s\n", snd_strerror(err)); snd_pcm_close(handle); return err; } size = frames * hdr->bits_per_sample / 8 * hdr->number_of_channels; /* 2 bytes/sample, 2 channels */ buffer = (char *) malloc(size); if (!buffer) { fprintf(stdout, "Buffer error.\n"); snd_pcm_close(handle); return -1; } err = snd_pcm_hw_params_get_period_time(params, &sampleRate, &dir); if (err) { fprintf(stderr, "Error retrieving period time: %s\n", snd_strerror(err)); snd_pcm_close(handle); free(buffer); return err; } uint32_t pcm_data_size = hdr->sample_rate * hdr->bytes_per_frame * (duration / 1000); hdr->file_size = pcm_data_size + 36; filedesc = open(fileName, O_WRONLY | O_CREAT, 0644); err = writeWAVHeader(filedesc, hdr); if (err) { fprintf(stderr, "Error writing .wav header."); snd_pcm_close(handle); free(buffer); close(filedesc); return err; } int totalFrames = 0; for(int i = ((duration * 1000) / (hdr->sample_rate / frames)); i > 0; i--) { err = snd_pcm_readi(handle, buffer, frames); totalFrames += err; if (err == -EPIPE) fprintf(stderr, "Overrun occurred: %d\n", err); if (err < 0) err = snd_pcm_recover(handle, err, 0); // Still an error, need to exit. 
if (err < 0) { fprintf(stderr, "Error occured while recording: %s\n", snd_strerror(err)); snd_pcm_close(handle); free(buffer); close(filedesc); return err; } write(filedesc, buffer, size); } close(filedesc); snd_pcm_drain(handle); snd_pcm_close(handle); free(buffer); return 0; } (1): Right now the program needs a USB device for input. I left a comment where the input device is declared so that it can be changed as needed. Also, I'm not sure the program will work as intended if PulseAudio is installed. Answer: A few things jump out at me immediately: Use sizeof() whenever you need the size of something with non-dynamic allocation. In other words, all of your write calls on the WaveHeader struct members should be using sizeof, not hard coded sizes. Allocation and initialization should be separate concerns. There's no need to have the same method that initializes a struct allocate it (unless it's an opaque pointer, or you actually care where it's allocated). There's no reason your WaveHeader can't be on the stack. As a general rule, use automatic allocation ("the stack") first and only move to dynamic allocation when you have some compelling reason (the reason usually being size varying, too big to fit on the stack, multithreading concerns, etc). Separating out initialization and allocation makes it possible for the user of the API to decide what to do with memory. Unless you need to make this decision for them, don't. If you're feeling lazy, you can always have a genericWavHeaderInit and genericWaveHeaderCreate Library code should never output anything. Use error codes and allow the caller to handle error reporting. What if the error should be ignored for some reason? Well, too late. You already put out an error. Imagine if standard library functions output errors to stderr. It would be absurd :). Operate on resources, not what you need to create those resources RecordWAV should take a snd_pcm_t and allocate/initialize it. Summarize to yourself what RecordWAV does. 
Truly go through all the steps, explaining to yourself what each chunk of code does. It does a lot more than just record a WAV. As a happy side effect, it relieves you of your hard-coded USB port. That really should come from a command line argument or a config file or something. Wanting to use different USB ports should not require a recompile. Another happy side effect: your resource cleanup code doesn't have to be repeated a trillion times with every exit point (much less repetitive, but much more important: less error prone!) This is fairly subjective, but I don't like your naming scheme. subjectVerb has the nice effect of providing a "namespace" of sorts (every widely used C library has a standard prefix: curl_, qt_, glib_, apr_, etc.). A suffix technically accomplishes the same namespacing effect, but it's much, much rarer. Likewise, when a function's sole purpose is to operate on some object (i.e. struct) that struct should typically be the first parameter (writeWAVHeader). If you're going to go the camelCase route (which I don't know if I would in C, but that's subjective) I would stick with strict camel casing (writeWavHeader). It's less visually jarring, it's easier to type, and anyone familiar with WAV will understand. Same with RIFF_marker Include your own headers first, and then other headers. If your wav.h had a hidden dependency on a file, it could get hidden by the source file including it. It wouldn't be until you tried to include it without including that hidden dependency that you'd get a sudden mysterious undeclared symbol error. retrieveWAVHeader isn't defined. It also seems to be an unnecessary version of (WaveHeader*) ptr If the return of recordWAV is an ALSA constant (since it's one of the error codes), you should use whatever ALSA's success constant is. I can't imagine that it wouldn't be 0, but consistency with the other possible returns would be nice.
{ "domain": "codereview.stackexchange", "id": 7181, "tags": "performance, c, linux, audio, raspberry-pi" }
On sparse complete sets and P vs L
Question: Mahaney's Theorem tells us that if there is a sparse $NP$-complete set under polynomial-time many-one reductions, then $P = NP$. (See "Sparse complete sets for NP: Solution of a conjecture of Berman and Hartmanis") Are there known consequences of the existence of sparse complete sets for other complexity classes? In particular, if there is a sparse $P$-complete set under logspace many-one reductions, does that imply $P = L$? Answer: Yes, exactly what you suggested is true: if there is a sparse $\mathbf{P}$-complete set under log-space many-one reductions, then $\mathbf{P} = \mathbf{L}$. This was conjectured by Hartmanis in 1978 and proven by Cai and Sivakumar in 1995. See this paper. Hartmanis also conjectured that if there is a sparse $\mathbf{NL}$-complete set under log-space many-one reductions, then $\mathbf{NL} = \mathbf{L}$. This was also proven by Cai and Sivakumar in 1997; see this other paper.
{ "domain": "cstheory.stackexchange", "id": 4274, "tags": "cc.complexity-theory, complexity-classes, reductions, polynomial-time, logspace" }
Is the language of TMs that halt on some string recognizable?
Question: I would like to show that the following language is recognizable: $$L:= \{ \langle M \rangle \mid M \text{ is a TM that halts on some string}\}.$$ How do I go about showing that this language is recognizable? I know that all recognizable languages are reducible to $HALT_\epsilon$, so I figure if I can show that this language reduces to $HALT_\epsilon$, then I am all set. I am defining $HALT_\epsilon$ as follows: $$HALT_\epsilon:= \{ \langle M \rangle \mid M \text{ is a TM that halts on } \epsilon \},$$ where $\epsilon$ is the empty string. We can reduce $HALT$ on $x$ to $HALT_\epsilon$ by a reduction $F(\langle M, x\rangle) = \langle M' \rangle $, where $M'(y) = M(x)$. For this reduction, we just ignore the input string $y$, which we know will be $\epsilon$ and just run $M$ on $x$ instead. Here, $HALT$ is defined as $$HALT:= \{ \langle M,x \rangle \mid M \text{ is a TM that halts on } x \}.$$ I tried leveraging a similar technique to show that $L$ is recognizable, but I could not come up with anything better than this (somewhat crazy) TM that has a $HALT_\epsilon$ oracle: $ D^{HALT_\epsilon} =$ On input $\langle M \rangle:$ Construct $N = $ "On input $x:$ Run $M$ in parallel on all inputs $y\in \Sigma^*$. If $M$ halts on any $y$ then accept, otherwise loop." Query the oracle to determine whether $\langle N \rangle \in HALT_\epsilon$. If the oracle answers YES, accept; if NO, reject. Note: My notation for TM algorithms is based on "Theory of Computation" by Sipser. Step 2 for the definition of $N$ is a bit redundant, but in this type of context, is it okay to say something like "If $M$ halts on any $y$, then halt?" I think all I have shown here is that $L$ is decidable relative to $HALT_\epsilon$. I don't know if this implies that $L$ is recognizable. Can a Turing reduction be used in this manner to show that a language is recognizable? I'm confused as to what it means for a language to be recognizable. 
The task seems obvious if we go back to the definition: If some TM $R$ accepts strings in $L$, then $R$ recognizes $L$. So what if $R=D^{HALT_\epsilon}$, and in the body of $D^{HALT_\epsilon}$ we use some crazy reduction like $N$? In general, to show recognizability, can we just come up with a reduction like $N$ that may or may not halt? Is it a problem that $N$ will never halt if $\langle M \rangle \notin L$? Answer: You can construct a recognizer following the same principle used for the recognizer for HALT. The only extra bit is how you check "all" inputs without getting stuck in a non-terminating computation. An important technique you can use here is called dovetailing (expressed with inputs from $\mathbb{N}$): Simulate one step of $M$ on $1$. Simulate two steps of $M$ on $1$ and $2$ each. Simulate three steps on $1$, $2$ and $3$ each. $\quad\vdots$ Terminate and accept once any of the simulated computations terminates and accepts. If there is a halting input of $M$, this dovetailed simulation certainly finds it after finite time. If there is none, it loops and is correct in doing so. This is, in essence, your $N$ (with an explanation why it's actually a computable function). You don't need the rest of the reduction in order to show that $L$ is semi-decidable.
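The dovetailing schedule above can be sketched in code. This is a reader's illustration: `halts_within(x, k)` is a hypothetical stand-in for "simulating $M$ on input $x$ for $k$ steps halts", not anything from Sipser's notation, and a real recognizer would omit `max_rounds` and loop forever when no halting input exists:

```python
def dovetail(halts_within, max_rounds=None):
    """Round k simulates inputs 1..k for k steps each; return the first
    input on which the simulated machine halts, or None if the round
    budget runs out (the budget exists only so this sketch terminates)."""
    k = 1
    while max_rounds is None or k <= max_rounds:
        for x in range(1, k + 1):
            if halts_within(x, k):   # "M halts on input x within k steps"
                return x
        k += 1
    return None
```

For a machine that halts only on input 5 and only after 5 steps, the search finds it in round 5; for a machine that never halts, the search (correctly) never accepts, which is exactly the behaviour a recognizer is allowed to have.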
{ "domain": "cs.stackexchange", "id": 15513, "tags": "computability, turing-machines" }
Units in the multipole expansion
Question: In the Wikipedia page (https://en.wikipedia.org/wiki/Multipole_expansion#Expansion_in_Cartesian_coordinates) the expression for the multipole expansion gives the dipole term as $$V(r)=\frac{1}{4\pi\varepsilon_{0}}\frac{\sum_{\alpha}P_{\alpha}r_{\alpha}}{r^{3}}$$ where the dipole is given by $P_{\alpha}=\sum_{i}q_{i}r_{i\alpha}$. The units don't seem to cancel out. Wouldn't you need to multiply the dipole of one object by the dipole of another? Answer: The equation you're looking at just gives the electrostatic potential generated by one dipole. The $r^3$ on the bottom cancels with 2 powers of $r$ on top (one from $r_\alpha$, one inside the dipole moment $P_\alpha$) leaving $\frac{q}{4\pi\epsilon_0 r}$ which, just like the potential from a point charge, is in units of $\frac{J}{C}$.
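A quick dimensional bookkeeping of the dipole term, in SI units, makes the cancellation explicit:

```latex
[P_\alpha] = \mathrm{C\,m}, \qquad
\Bigl[\textstyle\sum_\alpha P_\alpha r_\alpha\Bigr] = \mathrm{C\,m^2}, \qquad
\left[\frac{\sum_\alpha P_\alpha r_\alpha}{r^3}\right] = \frac{\mathrm{C}}{\mathrm{m}}, \qquad
\left[\frac{1}{4\pi\varepsilon_0}\right] = \frac{\mathrm{V\,m}}{\mathrm{C}}
\;\Longrightarrow\; [V(r)] = \mathrm{V} = \mathrm{J/C},
```

consistent with the $\mathrm{J/C}$ the answer arrives at.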
{ "domain": "physics.stackexchange", "id": 83831, "tags": "electrostatics, dimensional-analysis, si-units, multipole-expansion" }
Encoding multiple observations from the same feature space
Question: My data contains multiple observations of a categorical feature. The feature space is medical symptoms, so the data for this feature looks like: ['fever','pain','yellow skin' .... ]. The number of symptom observations per sample is not fixed and I have around 50 different symptoms. How can I encode this feature into something that an ML model can deal with? The order of the symptoms in the array is not important. I tried one-hot encoding, but projecting a feature space of 50 category levels into 50 indicators means losing information (having a sparse matrix). Any ideas? Answer: What you need is encoding of categorical variables. This topic is discussed in tons of blogposts; it is definitely worth checking out this recent article that nicely and extensively goes through most of the methods, since I do not intend to rewrite what is already out there but rather give you my personal experience. I asked you earlier what algorithms you are after, simply because that could change your choice of encoding method. Some encoding methods like one-hot encoding make your feature space very sparse when your categorical variable has high cardinality (usually not recommended!), and it is best to go with sparse-aware algorithms. In a nutshell, I suggest starting with the following (classic and simple), explanations borrowed from that article: OneHot — one column for each value to compare vs. all other values. Binary — convert each integer to binary digits. Each binary digit gets one column. Hashing — Like OneHot but fewer dimensions, some info loss due to collisions. Backward Difference — the mean of the dependent variable for a level is compared with the mean of the dependent variable for the prior level. Target — use the mean of the dependent variable. The above-mentioned methods can be used for pretty much all algorithms. Each one has its pros and cons; some lose more information than others, and so on. The good news is that most of these methods are quite easy to use, e.g.
via this Python package. Another method that I found quite interesting and that is suitable for Neural Networks is: Entity Embeddings - a special encoding method borrowed from NLP, in which an embedding space is learned on the fly for each categorical variable. See these blogposts 1, 2, 3. I am writing an up-and-running example in Google Colab to fully demonstrate this, but you can still find pieces of code in those blogposts to give it a try. I find this method way better than, for example, OneHot for Neural Networks. Hope these help!
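Since each sample carries a variable-length list of symptoms, a multi-hot (set-of-labels) encoding is a natural starting point before trying the fancier methods above. Here is a minimal pure-Python sketch (the symptom names are just the asker's examples; with scikit-learn installed, `sklearn.preprocessing.MultiLabelBinarizer` does the same thing in two lines):

```python
def multi_hot(samples):
    """Encode variable-length symptom lists as fixed-width 0/1 vectors."""
    vocab = sorted({s for sample in samples for s in sample})  # ~50 symptoms -> ~50 columns
    index = {s: i for i, s in enumerate(vocab)}
    rows = []
    for sample in samples:
        row = [0] * len(vocab)
        for s in sample:
            row[index[s]] = 1        # order of symptoms is irrelevant, as required
        rows.append(row)
    return vocab, rows

vocab, X = multi_hot([["fever", "pain"], ["yellow skin"], ["fever"]])
# vocab -> ["fever", "pain", "yellow skin"]
# X     -> [[1, 1, 0], [0, 0, 1], [1, 0, 0]]
```

The resulting matrix is sparse, so pairing it with a sparse-aware algorithm (as the answer recommends) or feeding it into an embedding layer are both reasonable next steps.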
{ "domain": "datascience.stackexchange", "id": 4419, "tags": "machine-learning, preprocessing, encoding" }
unable to bar plot the data for all the columns
Question: I am trying to plot the data in a loop, using the code below to build the plot, but while it runs through the whole loop (I can see the column name getting printed for all the columns), it builds the plot for only the last one. Is there anything extra I have to add to generate a plot for each iteration? for (index, colname) in enumerate(df): print(colname) counts = df[colname].value_counts(dropna=False) counts.plot.bar(title=colname, grid=True) Answer: I think the thing is that by calling the drawing function repeatedly, you are constantly redrawing the same figure. Try creating a new figure every time: import matplotlib.pyplot as plt for (index, colname) in enumerate(df): print(colname) plt.figure() counts = df[colname].value_counts(dropna=False) counts.plot.bar(title=colname, grid=True) You can also divide one drawing into many parts and draw several graphs at a time using matplotlib subplots. matplotlib subplots doc
{ "domain": "datascience.stackexchange", "id": 11355, "tags": "pandas, matplotlib, plotting" }
Is there any literature that compares the candle-power to the candela?
Question: I am unable to find anything in the literature to back up the claim that appears all over the internet that: The modern candlepower now equates directly to the number of candelas. Can anyone please suggest something in the scientific literature to back up this statement? If this is not the correct stack exchange for this question, please suggest a better one. Answer: Perhaps this paper is what you are looking for? http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5323103&tag=1 A quote: The form of primary standard of light (Fig. 1) now in force internationally consists of a black-body (or cavity) radiator held at the temperature of freezing platinum (2042° K). The corresponding unit of luminous intensity is the 'candela' (cd). By definition, the brightness or luminance of the radiator is 60 cd/cm2. The figure 60 was chosen to ensure continuity, so that, within limits of experimental error at the various relevant dates, an intensity of 1 cd is the same as the older international candle, which in turn was approximately equal to the intensity of the earliest standard sperm candle.
{ "domain": "physics.stackexchange", "id": 2190, "tags": "electromagnetic-radiation, specific-reference, si-units" }
How would one implement a multi-agent environment with asynchronous action and rewards per agent?
Question: In a single-agent environment, the agent takes an action, then observes the next state and reward: for ep in num_episodes: action = dqn.select_action(state) next_state, reward = env.step(action) Implicitly, the logic for moving the simulation (env) forward is embedded inside the env.step() function. Now in the multi-agent scenario, agent 1 ($a_1$) has to make a decision at time $t_{1a}$, which will finish at time $t_{2a}$, and agent 2 ($a_2$) makes a decision at time $t_{1b} < t_{1a}$ which is finished at $t_{2b} > t_{2a}$. If both of their actions were to start and finish at the same time, then it could easily be implemented as: for ep in num_episodes: action1, action2 = dqn.select_action([state1, state2]) next_state_1, reward_1, next_state_2, reward_2 = env.step([action1, action2]) because the env can execute both in parallel, wait till they are done, and then return the next states and rewards. But in the scenario that I described previously, it is not clear (at least to me) how to implement this. Here, we need to explicitly track time and check at every timepoint whether an agent needs to make a decision. Just to be concrete:
The basic idea is that the things that you consider "actions" for your agents become "options", which are "large actions" that may take more than a single primitive time step. When an agent selects an option, it will go on "auto-pilot" and keep selecting primitive actions at the more primitive, fine-grained timescale as dictated by the last selected option, until that option has actually finished executing. In your case, you could: implement the first "primitive action" of an option to immediately apply all effects to the state, and append a sequence of "no-op" actions afterwards to make sure the option actually has a longer duration than a single primitive timestep, OR implement the very last primitive action of an option to actually apply changes to the state, and prepend a sequence of "no-op" actions in front of it to make the option take more time, OR something in between (i.e. actually make partial changes to the state visible during the execution of the option). Since all legal choices for agents in your scenario appear to be options, i.e. you do not allow agents to select primitive actions at the more fine-grained timescale, you would only have to implement "inter-option" learning in your RL algorithms; there would be no need for "intra-option" learning. In practice, if you only have a small number of agents and have options that take relatively large amounts of time, you don't have to actually loop through all primitive time-steps. You could, for example, compute the primitive timestamps at which "events" should be executed in advance, and insert these events to be processed into an event-handling queue based on these timestamps. Then you can always just skip through to the next timestamp at which an event needs handling. With "events" I basically mean all timesteps at which something should happen, e.g. timesteps where an option ends and a new option should be selected by one or more agents. 
Inter-option Reinforcement Learning techniques are basically oblivious to the existence of a more fine-grained timescale, and they only need to operate at precisely these decision points where one option ends and another begins.
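To make the event-queue idea above concrete, here is a minimal, hypothetical sketch (the function name and the fixed option durations are illustrative assumptions, not part of any RL library): each agent re-decides only when its current option finishes, and the loop skips directly to the next decision timestamp via a priority queue instead of stepping through every primitive timestep.

```python
import heapq

def run_episode(durations, horizon):
    """Toy event-queue scheduler. durations maps agent id -> fixed option
    duration (a stand-in for whatever option the agent's policy selects and
    the env dynamics dictate). Returns the (time, agent) decision points
    that were actually processed before the horizon."""
    events = [(0, agent) for agent in durations]  # every agent decides at t=0
    heapq.heapify(events)
    processed = []
    while events:
        t, agent = heapq.heappop(events)
        if t >= horizon:
            continue  # this decision point falls outside the episode
        processed.append((t, agent))
        # here the agent would select its next option; we reuse the fixed
        # duration and schedule that agent's next decision point
        heapq.heappush(events, (t + durations[agent], agent))
    return processed
```

In a real setup, the duration would come from the environment dynamics and the next option from each agent's policy; the point is that intermediate primitive timesteps never need to be simulated explicitly when nothing observable happens at them.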
{ "domain": "ai.stackexchange", "id": 1409, "tags": "reinforcement-learning, ai-design, multi-agent-systems" }
Java Navigation System for GUI in JavaFX
Question: I am currently working on a project in Java, and I use JavaFX for the GUI of the system. I am currently in the design phase, where I am working on the look of the system as well as setting up basic functionality like navigation. I would highly appreciate it if someone could review my simple navigation system and the code I have written so far. I would like to know: Any bad practices I am following, and how I could improve them. Any architectural issues in my program, and how I could rectify them. Any techniques I could follow to keep my software maintainable. Any inefficiencies that could be optimised. And any other points you have in mind that I could follow to improve my software. Anything you can add to the list above would also help greatly. I also have a preloader for this application; I have provided its code just for the sake of completeness. This preloader was generated by NetBeans. TeleMart_Preloader.java /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. 
* Website: http://hassanalthaf.com */ package telemart.preloader; import javafx.application.Preloader; import javafx.application.Preloader.ProgressNotification; import javafx.application.Preloader.StateChangeNotification; import javafx.scene.Scene; import javafx.scene.control.ProgressBar; import javafx.scene.layout.BorderPane; import javafx.stage.Stage; /** * Simple Preloader Using the ProgressBar Control * * @author hassan */ public class TeleMart_Preloader extends Preloader { ProgressBar bar; Stage stage; private Scene createPreloaderScene() { bar = new ProgressBar(); BorderPane p = new BorderPane(); p.setCenter(bar); return new Scene(p, 300, 150); } @Override public void start(Stage stage) throws Exception { this.stage = stage; stage.setScene(createPreloaderScene()); stage.show(); } @Override public void handleStateChangeNotification(StateChangeNotification scn) { if (scn.getType() == StateChangeNotification.Type.BEFORE_START) { stage.hide(); } } @Override public void handleProgressNotification(ProgressNotification pn) { bar.setProgress(pn.getProgress()); } } com.hassanalthaf.telemart.Main.java: /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. 
* Website: http://hassanalthaf.com */ package com.hassanalthaf.telemart; import javafx.application.Application; import javafx.fxml.FXMLLoader; import javafx.scene.Parent; import javafx.scene.Scene; import javafx.stage.Stage; /** * * @author hassan */ public class Main extends Application { public static final String APPLICATION_TITLE = "TeleMart - ERP System"; @Override public void start(Stage stage) throws Exception { Parent root = FXMLLoader.load(getClass().getResource("views/MainView.fxml")); Scene scene = new Scene(root); stage.setScene(scene); stage.setResizable(false); stage.setTitle(Main.APPLICATION_TITLE); stage.show(); } /** * @param args the command line arguments */ public static void main(String[] args) { launch(args); } } com.hassanalthaf.telemart.views.Dashboard.fxml: <?xml version="1.0" encoding="UTF-8"?> <?import javafx.scene.shape.*?> <?import java.lang.*?> <?import java.util.*?> <?import javafx.scene.*?> <?import javafx.scene.control.*?> <?import javafx.scene.layout.*?> <AnchorPane id="AnchorPane" fx:id="dashboard" prefHeight="400.0" prefWidth="600.0" stylesheets="@css/dashboard.css" xmlns="http://javafx.com/javafx/8" xmlns:fx="http://javafx.com/fxml/1" fx:controller="com.hassanalthaf.telemart.viewmodels.DashboardViewModel"> <children> <MenuBar maxWidth="600.0" minWidth="600.0" prefWidth="600.0"> <menus> <Menu mnemonicParsing="false" text="File"> <items> <MenuItem fx:id="homeMenuItem" mnemonicParsing="false" onAction="#menuItemClick" text="Home" /> <MenuItem fx:id="anotherPageMenuItem" mnemonicParsing="false" onAction="#menuItemClick" text="Another Page" /> <MenuItem fx:id="differentPageMenuItem" mnemonicParsing="false" onAction="#menuItemClick" text="Different Page" /> </items> </Menu> </menus> </MenuBar> <AnchorPane fx:id="differentPage" layoutY="29.0" maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity" minWidth="-Infinity" opacity="0.0" prefHeight="371.0" prefWidth="600.0"> <children> <TextArea layoutX="155.0" layoutY="62.0" 
prefHeight="200.0" prefWidth="200.0" promptText="Page three" /> </children> </AnchorPane> <AnchorPane fx:id="anotherPage" layoutY="29.0" maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity" minWidth="-Infinity" opacity="0.0" prefHeight="371.0" prefWidth="600.0"> <children> <TextField layoutX="180.0" layoutY="77.0" promptText="Page Two" /> </children> </AnchorPane> <AnchorPane fx:id="home" layoutY="29.0" maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity" minWidth="-Infinity" prefHeight="371.0" prefWidth="600.0"> <children> <TextField layoutX="133.0" layoutY="150.0" text="First Page" /> </children> </AnchorPane> </children> </AnchorPane> com.hassanalthaf.telemart.views.MainView.fxml: <?xml version="1.0" encoding="UTF-8"?> <?import java.lang.*?> <?import java.util.*?> <?import javafx.scene.*?> <?import javafx.scene.control.*?> <?import javafx.scene.layout.*?> <AnchorPane id="AnchorPane" fx:id="mainWindow" maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity" minWidth="-Infinity" prefHeight="350.0" prefWidth="600.0" stylesheets="@css/style.css" xmlns="http://javafx.com/javafx/8" xmlns:fx="http://javafx.com/fxml/1" fx:controller="com.hassanalthaf.telemart.viewmodels.MainViewModel"> <children> <Label layoutX="222.0" layoutY="35.0" styleClass="title" text="TeleMart" /> <TextField layoutX="380.0" layoutY="160.0" onKeyPressed="#loginEnter" promptText="Username"> <styleClass> <String fx:value="login-field" /> <String fx:value="login-field" /> </styleClass> </TextField> <PasswordField layoutX="380.0" layoutY="208.0" onKeyPressed="#loginEnter" promptText="Password" styleClass="login-field" /> <Button layoutX="495.0" layoutY="256.0" mnemonicParsing="false" onMouseClicked="#loginClick" text="Login" /> <Label layoutX="45.0" layoutY="113.0" styleClass="content" text="Welcome to TeleMart's Enterprise&#10;Resource Planning System.&#10;Please enter your login credentials&#10;on the right so that we could verify&#10;your identity. 
Also, please do not&#10;share your login credentials with&#10;anyone or use another person's&#10;login credentials." textAlignment="CENTER" wrapText="true" /> <Label layoutX="220.0" layoutY="309.0" styleClass="content" text="© 2015, Hassan Althaf." /> </children> </AnchorPane> com.hassanalthaf.telemart.views.css.style.css: /* Program developed by Hassan Althaf. Copyright © 2015, Hassan Althaf. Website: http://hassanalthaf.com */ /* Created on : Dec 19, 2015, 6:09:52 PM Author : hassan */ @font-face { font-family: 'Lato-Regular'; src: url('../fonts/Lato-Regular.ttf'); } @font-face { font-family: 'Lato-Hairline'; src: url('../fonts/Lato-Hairline.ttf'); } .root { -fx-background-color: #2C3E50; } .title { -fx-font-family: 'Lato-Hairline'; -fx-font-size: 30pt; -fx-text-fill: #FFFFFF; } .login-field { -fx-background-color: #FFFFFF; -fx-border-radius: 5pt; -fx-padding: 10px; -fx-border-width: 1pt; -fx-border-style: solid; -fx-border-color: #202D3A; -fx-background-insets: 2, 0, 0; -fx-font-family: 'Lato-Regular'; } .button { -fx-background-color: #3498DB; -fx-text-fill: #FFFFFF; -fx-font-size: 11pt; -fx-padding: 8px; -fx-border-radius: 10pt; } .button:hover { -fx-background-color: #51A7E0; } .content { -fx-font-family: 'Lato-Regular'; -fx-font-size: 11pt; -fx-text-fill: #FFFFFF; } com.hassanalthaf.telemart.views.css.dashboard.css: /* Program developed by Hassan Althaf. Copyright © 2015, Hassan Althaf. 
Website: http://hassanalthaf.com */ /* Created on : Dec 19, 2015, 11:29:50 PM Author : hassan */ @font-face { font-family: 'Lato-Regular'; src: url('../fonts/Lato-Regular.ttf'); } .menu-bar { -fx-background-color: #3498DB; } .menu:hover { -fx-background-color: #51A7E0; } .menu:showing { -fx-background-color: #2487C9; } .menu .label { -fx-text-fill: #FFFFFF; -fx-font-family: 'Lato-Regular'; -fx-font-size: 10pt; } .menu-item { -fx-background-color: #FFFFFF; } .menu-item:hover { -fx-background-color: #51A7E0; } .menu-item .label { -fx-text-fill: #333333; -fx-font-family: 'Lato-Regular'; -fx-font-size: 10pt; } .menu-item:hover .label { -fx-text-fill: #FFFFFF; } com.hassanalthaf.telemart.viewmodels.MainViewModel.java: /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. * Website: http://hassanalthaf.com */ package com.hassanalthaf.telemart.viewmodels; import java.io.IOException; import java.net.URL; import java.util.ResourceBundle; import javafx.fxml.FXML; import javafx.fxml.FXMLLoader; import javafx.fxml.Initializable; import javafx.scene.Parent; import javafx.scene.input.KeyCode; import javafx.scene.input.KeyEvent; import javafx.scene.input.MouseEvent; /** * * @author hassan */ public class MainViewModel implements Initializable { @FXML private Parent mainWindow; @FXML private void loginClick(MouseEvent event) { try { this.openDashboard(); } catch (IOException exception) { } } @FXML private void loginEnter(KeyEvent event) { if (event.getCode() == KeyCode.ENTER) { try { this.openDashboard(); } catch (IOException exception) { exception.printStackTrace(); } } } private void openDashboard() throws IOException { FXMLLoader fxmlLoader = new FXMLLoader(getClass().getResource("/com/hassanalthaf/telemart/views/Dashboard.fxml")); Parent dashboard = fxmlLoader.load(); DashboardViewModel dashboardViewModel = fxmlLoader.getController(); dashboardViewModel.show(this.mainWindow); } @Override public void initialize(URL url, ResourceBundle rb) { // TODO } } 
com.hassanalthaf.telemart.viewmodels.DashboardViewModel.java: /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. * Website: http://hassanalthaf.com */ package com.hassanalthaf.telemart.viewmodels; import com.hassanalthaf.telemart.Main; import java.net.URL; import java.util.ResourceBundle; import javafx.event.ActionEvent; import javafx.fxml.FXML; import javafx.fxml.Initializable; import javafx.scene.Parent; import javafx.scene.Scene; import javafx.scene.control.MenuItem; import javafx.scene.layout.AnchorPane; import javafx.stage.Stage; /** * FXML Controller class * * @author hassan */ public class DashboardViewModel implements Initializable { @FXML private Parent dashboard; @FXML private AnchorPane home; @FXML private AnchorPane anotherPage; @FXML private AnchorPane differentPage; private AnchorPane currentPage; private void changePage(AnchorPane page) { this.currentPage.setOpacity(0); this.currentPage = page; this.currentPage.toFront(); this.currentPage.setOpacity(1); } public void menuItemClick(ActionEvent event) { Object source = (MenuItem)event.getSource(); MenuItem clickedItem; if(source instanceof MenuItem) { clickedItem = (MenuItem)source; } else { return; } String id = clickedItem.getId(); switch (id) { case "homeMenuItem": this.changePage(this.home); break; case "anotherPageMenuItem": this.changePage(this.anotherPage); break; case "differentPageMenuItem": this.changePage(this.differentPage); break; default: this.changePage(this.home); break; } } public void show(Parent main) { Scene scene = new Scene(this.dashboard); Stage stage = new Stage(); stage.setScene(scene); stage.setTitle(Main.APPLICATION_TITLE); stage.setResizable(false); stage.show(); Stage mainStage = (Stage)main.getScene().getWindow(); mainStage.close(); } /** * Initializes the controller class. 
*/ @Override public void initialize(URL url, ResourceBundle rb) { // TODO this.currentPage = this.home; } } The entire source code with all the assets can be found here on GitHub. Answer: Since the preloader is autogenerated I won't nag about that one. I won't nag on Main either, since that's the default code to start any JavaFX application. I can start nagging on your FXMLs though. Any IDE worth its salt will show you unnecessary imports. Those should generally be removed. For Dashboard that's the first four, for MainView it's java.util.* and javafx.scene.* While we're on imports: the JavaFX loader can benefit from not having to search a whole package. Use your IDE's "optimize imports" to avoid wildcard imports. JavaFX has the wonderful idea of "default property tags". In general FXML gets a lot easier to read if you don't add the redundant default property tags and just continue as you know it. It's also a generally good idea to minimize the amount of vertical scrolling when reading XML files. I have configured my IDE to a 120-column width, which is 50% more than the "terminal standard" of 80, and generally enough for 99% of code to not require vertical scrolling. I wonder why you'd disable mnemonicParsing. It's generally accepted best practice to enable using the keyboard for menu navigation. You should not force users into using the mouse in your app if you get keyboard navigation for free. It's usually not a great idea to force linebreaks in standard paragraph text as you do on the login screen. This just means more work for translators and additional cognitive load for people reading the code, and it usually wastes tons of generally useful features of the layouting system. It also results in a generally bad user experience for users that have a larger system font (unless you fix the font size, which is also a bad experience). While we're on these "UX" issues: Don't tell users not to share their credentials with anyone. It's really condescending. 
And since I talked about users with larger system fonts: It's really bad style to define font sizes in pt or px. It's a horrendous system for both High-DPI users and people with difficulty reading. Usually the latter customize their system's fonts to be displayed larger. That will generally wreak havoc with any layouts that rely on text not taking more than a certain amount of space. Or it will not be properly applied for your app, which makes for a horrendous experience. Instead of fixing the font size in px/pt, you should (just like in web design) specify font sizes in em. Which brings me to my next point: Don't constrain the size of components in the FXML. The only thing you may constrain there is the ratios components have against one another. Read up on dynamically scaling designs. In general avoid absolute sizes for any and all components' width and height. Furthermore use em (or rem) for sizing fonts and dynamic distances that should scale with the user's system font size. Let's get on with the Java code for the ViewModels... As Stan already outlined in his answer, you're not handling the IOException in loginClick. That's bad. The actual problem is a different one: you're letting that exception bubble up, which is the actual cause of that inconsistency. @FXML private void loginClick(MouseEvent event) { openDashboard(); } @FXML private void loginEnter(KeyEvent event) { if (event.getCode() == KeyCode.ENTER) { openDashboard(); } } private void openDashboard() { try { // see the difference? } // ... } While we're in that class: Don't implement interfaces you don't use. Drop the implements Initializable and the public void initialize if you're not going to use it... Final Words: While it's often recommended, switching on the event target of an event is a terrible idea because it hides errors from you during development and leads to generally hard-to-debug issues. 
In addition to that I personally prefer to only define the layout of a view in the FXML, and the behaviour in the Controller. This generally entails using the initialize method to bind action handlers to MenuItems and similar changes. It also stops your IDE from complaining about the fx:ids not resolving to backing properties on your Controller... This is how the changed files look for me now: Dashboard.fxml: <?xml version="1.0" encoding="UTF-8"?> <?import javafx.scene.control.*?> <?import javafx.scene.layout.AnchorPane?> <AnchorPane id="AnchorPane" fx:id="dashboard" prefHeight="400.0" prefWidth="600.0" stylesheets="@css/dashboard.css" xmlns="http://javafx.com/javafx/8" xmlns:fx="http://javafx.com/fxml/1" fx:controller="com.hassanalthaf.telemart.viewmodels.DashboardViewModel"> <MenuBar maxWidth="600.0" minWidth="600.0" prefWidth="600.0"> <Menu mnemonicParsing="false" text="File"> <MenuItem fx:id="homeMenuItem" text="Home"/> <MenuItem fx:id="anotherPageMenuItem" text="Another Page"/> <MenuItem fx:id="differentPageMenuItem" text="Different Page"/> </Menu> </MenuBar> <AnchorPane fx:id="differentPage" layoutY="29.0" maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity" minWidth="-Infinity" opacity="0.0" prefHeight="371.0" prefWidth="600.0"> <TextArea layoutX="155.0" layoutY="62.0" prefHeight="200.0" prefWidth="200.0" promptText="Page three"/> </AnchorPane> <AnchorPane fx:id="anotherPage" layoutY="29.0" maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity" minWidth="-Infinity" opacity="0.0" prefHeight="371.0" prefWidth="600.0"> <TextField layoutX="180.0" layoutY="77.0" promptText="Page Two"/> </AnchorPane> <AnchorPane fx:id="home" layoutY="29.0" maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity" minWidth="-Infinity" prefHeight="371.0" prefWidth="600.0"> <TextField layoutX="133.0" layoutY="150.0" text="First Page"/> </AnchorPane> </AnchorPane> DashboardViewModel.java /* * Program developed by Hassan Althaf. 
* Copyright © 2015, Hassan Althaf. * Website: http://hassanalthaf.com */ package com.hassanalthaf.telemart.viewmodels; import com.hassanalthaf.telemart.Main; import java.net.URL; import java.util.ResourceBundle; import javafx.fxml.FXML; import javafx.fxml.Initializable; import javafx.scene.Parent; import javafx.scene.Scene; import javafx.scene.control.MenuItem; import javafx.scene.layout.AnchorPane; import javafx.stage.Stage; /** * FXML Controller class * * @author hassan */ public class DashboardViewModel implements Initializable { @FXML public MenuItem homeMenuItem; @FXML public MenuItem anotherPageMenuItem; @FXML public MenuItem differentPageMenuItem; @FXML private Parent dashboard; @FXML private AnchorPane home; @FXML private AnchorPane anotherPage; @FXML private AnchorPane differentPage; private AnchorPane currentPage; private void changePage(AnchorPane page) { this.currentPage.setOpacity(0); this.currentPage = page; this.currentPage.toFront(); this.currentPage.setOpacity(1); } public void show(Parent main) { Scene scene = new Scene(this.dashboard); Stage stage = new Stage(); stage.setScene(scene); stage.setTitle(Main.APPLICATION_TITLE); stage.setResizable(false); stage.show(); Stage mainStage = (Stage) main.getScene().getWindow(); mainStage.close(); } /** * Initializes the controller class. */ @Override public void initialize(URL url, ResourceBundle rb) { currentPage = home; homeMenuItem.setOnAction((e) -> changePage(home)); anotherPageMenuItem.setOnAction((e) -> changePage(anotherPage)); differentPageMenuItem.setOnAction((e) -> changePage(differentPage)); } } MainViewModel.java /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. 
* Website: http://hassanalthaf.com */ package com.hassanalthaf.telemart.viewmodels; import javafx.fxml.FXML; import javafx.fxml.FXMLLoader; import javafx.scene.Parent; import javafx.scene.input.KeyCode; import javafx.scene.input.KeyEvent; import javafx.scene.input.MouseEvent; import java.io.IOException; /** * * @author hassan */ public class MainViewModel { @FXML private Parent mainWindow; @FXML private void loginClick(MouseEvent event) { this.openDashboard(); } @FXML private void loginEnter(KeyEvent event) { if (event.getCode() == KeyCode.ENTER) { this.openDashboard(); } } private void openDashboard() { FXMLLoader fxmlLoader = new FXMLLoader(getClass().getResource("/com/hassanalthaf/telemart/views/Dashboard.fxml")); try { Parent dashboard = fxmlLoader.load(); } catch (IOException e) { e.printStackTrace(); return; } DashboardViewModel dashboardViewModel = fxmlLoader.getController(); dashboardViewModel.show(this.mainWindow); } } I have not changed anything about the css, Main or the Preloader. Finally I have one small nitpick... In the login view the button and the fields don't line up. That's terribly irritating ;)
{ "domain": "codereview.stackexchange", "id": 27689, "tags": "java, javafx" }
Looking for an understanding of Toroidal Moments
Question: The Wikipedia page is not so enlightening. Apparently the neutrino, if it is a Dirac particle, has a toroidal moment. Does this mean the Dirac neutrino would interact electromagnetically as well as weakly? Answer: Yes. Although neutrinos have no charge and therefore don’t interact directly with an electromagnetic field, in principle they do so indirectly through electrons and W bosons. See the relevant one-loop Feynman diagrams here: https://arxiv.org/pdf/hep-ph/0206083.pdf As far as I am aware, this electromagnetic interaction is too small to be measured.
{ "domain": "physics.stackexchange", "id": 56385, "tags": "neutrinos, magnetic-moment, dipole-moment, multipole-expansion" }
Work done by friction in a complicated path
Question: A block of mass $M$ is taken from point $A$ to point $B$ along a complex path by a force $F$ that is always tangential to the path. The coefficient of friction is $K$. What will be the work done by the force $F$ when the block reaches point $B$ from point $A$? Given that the vertical displacement from $A$ to $B$ is $h$ and the horizontal displacement from $A$ to $B$ is $l$. I tried solving the problem using conservation of energy: the total energy remains constant, so we have $$\Delta U_{gravity}+W_{friction}+W_{F} = 0$$ But how do you calculate the work done by friction in this case? Moreover, in the given answer, the work done by friction depends only on $l$! EDIT: The body is moved very slowly. Answer: Friction is not a force derived from a scalar potential. As such, the work it does is path dependent, so there is not enough information to answer the question.
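For reference, here is a hedged sketch of the textbook idealization behind the claim that the friction work depends only on $l$. It assumes the block moves quasi-statically (so forces normal to the path balance) and that the horizontal coordinate increases monotonically along the path; for a general path that backtracks horizontally, the friction work really is path dependent and this result fails.

```latex
% Let \theta be the local angle of the path with the horizontal.
% Quasi-static motion with F tangential => the normal force balances
% the normal component of gravity:
N = Mg\cos\theta
% Magnitude of the friction work along the path (ds = path element),
% using \cos\theta \, ds = dx:
|W_{friction}| = \int K N \, ds = \int K Mg \cos\theta \, ds
             = K Mg \int dx = K Mg\, l
% Energy balance with no net change in kinetic energy then gives:
W_F = Mgh + K Mg\, l
```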
{ "domain": "physics.stackexchange", "id": 61706, "tags": "newtonian-mechanics, forces, energy-conservation, friction, work" }
Autoware Auto - Base docker image build
Question: Hello, I have recently started working on the Autoware Auto open-source code, and I am having trouble understanding how the base docker image is built. As per the given documentation, we directly download the executable with wget from https://gitlab.com/ApexAI/ade-cli/uploads/f6c47dc34cffbe90ca197e00098bdd3f/ade+x86_64. I am interested to know which Dockerfile is used to build the base docker image and also where exactly the build process starts. Kindly excuse me if the question is too basic. Thank you, KK Originally posted by kk2105 on ROS Answers with karma: 262 on 2021-01-24 Post score: 1 Answer: The docker images are built during a job on GitLab. You can have a look at the gitlab-ci file here. Have a look at lines 24 and 36 for the docker build commands. The Dockerfiles used for the build are located in the AutowareAuto/tools/ade_image directory in the repo. You can also have a look at the container registry to see the built docker images. Originally posted by Mackou with karma: 196 on 2021-01-25 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by kk2105 on 2021-01-25: Thanks for the answer @Mackou. I will have a look and come back if I have any queries. Comment by rugved on 2021-05-26: How can we change the base image?
{ "domain": "robotics.stackexchange", "id": 36007, "tags": "ros2" }
A brainfuck interpreter in C
Question: I wrote a brainfuck interpreter in order to prepare myself for a C job. I try to write the code as clearly and as defensively as I can. Can somebody take a look at the code and give me some hints for improvement? // tape.h #pragma once #include <stdio.h> #include <stdlib.h> #include <assert.h> typedef struct Tape { long pointer; long capacity; unsigned short *data; } Tape; void initializeTape(Tape *tape); void growTape(Tape *tape); void incrementPointer(Tape *tape); void decrementPointer(Tape *tape); void incrementValue(Tape *tape); void decrementValue(Tape *tape); void read(Tape *tape); void get(Tape *tape); void freeTape(Tape *tape); long interpret(Tape *tape, const char *source_code, long source_code_size, long position); The implementation of tape.h // tape.c #include "tape.h" void initializeTape(Tape *tape) { tape->pointer = 0; tape->capacity = 8; tape->data = (unsigned short *) calloc( tape->capacity, sizeof(unsigned short)); if (tape->data == NULL) { fprintf(stderr, "Out of memory error.\n"); exit(1); } } void growTape(Tape *tape) { tape->capacity *= 2; tape->data = (unsigned short *) realloc(tape->data, tape->capacity); if (tape->data == NULL) { fprintf(stderr, "Out of memory error.\n"); exit(1); } } void incrementPointer(Tape *tape) { if (tape->pointer >= tape->capacity) { growTape(tape); } tape->pointer++; } void decrementPointer(Tape *tape) { if (tape->pointer == 0) { fprintf(stderr, "Syntax error. 
Negative pointer detected."); exit(1); } tape->pointer--; } void incrementValue(Tape *tape) { tape->data[tape->pointer]++; } void decrementValue(Tape *tape) { tape->data[tape->pointer]--; } void read(Tape *tape) { putchar(tape->data[tape->pointer]); } void get(Tape *tape) { tape->data[tape->pointer] = (char) getchar(); } void freeTape(Tape *tape) { free(tape->data); tape->pointer = 0; tape->capacity = 0; } long interpret(Tape *tape, const char *source_code, long source_code_size, long position) { char c = source_code[position]; switch (c) { case '>': incrementPointer(tape); break; case '<': decrementPointer(tape); break; case '+': incrementValue(tape); break; case '-': decrementValue(tape); break; case '.': read(tape); break; case ',': get(tape); break; case '[': if (tape->data[tape->pointer] == (char) 0) { int stack = 1; long j = position + 1; for (; j < source_code_size && stack > 0 && tape->pointer < source_code_size; j++) { char _c = source_code[j]; if (_c == '[') { ++stack; } else if (_c == ']') { --stack; } } if (stack != 0) { fprintf(stderr, "Syntax error. Missing closing ].\n"); exit(1); } else { position = j + 1; } } break; case ']': if (tape->data[tape->pointer] != (char) 0) { int stack = 1; long j = position - 1; for (; j >= 0 && stack > 0 && tape->pointer >= 0; j--) { char _c = source_code[j]; if (_c == '[') { --stack; } else if (_c == ']') { ++stack; } } if (stack != 0) { fprintf(stderr, "Syntax error. 
Missing opening [.\n"); exit(1); } else { position = j + 1; } } break; default: break; } return ++position; } And the main file: // main.c #include "tape.h" int main(int argc, char **argv) { FILE *file; if (argc < 2) { file = fopen("helloworld.bf", "r"); } else { file = fopen(argv[1], "r"); } if (file == NULL) { fprintf(stderr, "Can not open file %s\n", argv[1]); return 1; } if (fseek(file, 0L, SEEK_END) != 0) { fprintf(stderr, "Fail to fseek file %s\n", argv[1]); return 1; } long filesize = ftell(file); if (filesize < 0) { fprintf(stderr, "Fail to read file's size\n"); return 1; } rewind(file); char source_code[filesize]; size_t result = fread(source_code, 1, filesize, file); if (fclose(file) != 0) { fprintf(stderr, "Can not close file %s\n", argv[1]); return 1; } if (result != filesize) { fprintf(stderr, "Can not read file. Corrupt\n"); } Tape tape; initializeTape(&tape); long i = 0; while(i < filesize) { i = interpret(&tape, source_code, filesize, i); } freeTape(&tape); return 0; } Answer: Overall Observations An interpreter should be able to read its program from standard input as well as from a file; that would break the current model of reading the whole file into memory up front. The user could also redirect an input file to standard in. If you are going to program in C then you need to get comfortable with pointers. In the case of file input I would use an algorithm that reads a line at a time; that way the file doesn't need to be read twice and the memory used to store the whole file doesn't need to be allocated. Reading a line at a time will also work for console input. If you are using C in an embedded environment, allocating the space to store the whole file could seriously affect the amount of memory available for processing. For this reason you also need to be careful when using malloc(), calloc(), or realloc() in an embedded environment. Some embedded C compilers do not support memory allocation, and some companies will have coding standards that do not allow memory allocation for embedded applications. 
Only Include Headers Needed to Make the Code Compile The header file tape.h includes assert.h, but assert() is not used in the program. Since the C pre-processor implementation of include is generally to create a temporary source file and actually copy in the included header files, this increases the size of the temporary source files without need and increases compile time. Hiding #include statements within other include files can sometimes lead to problems; a header should only include the files that are necessary to make it compile, and tape.h doesn't need any header files to compile. An example of when it would be necessary to include a header file in tape.h is if there were functions that returned type bool; then the header file should contain the statement #include <stdbool.h>. Make it clear what each C source file needs to compile by including the headers in the C source file. As a side note, it is better not to use assert, since if the code is compiled with NDEBUG defined (as release builds typically are) all asserts will be compiled out of the code. Performance (speed) In the main loop of the program and in the function interpret(), execution time might be improved if you used character pointers rather than integer indexing. In addition to possibly improving the performance, this could also decrease the amount of code in the function interpret() by reducing the number of temporary variables. Note the following code has not been tested and may not work. 
In main(): char* current_source_code_ptr = source_code; char* end_file_ptr = &source_code[filesize - 1]; while (current_source_code_ptr < end_file_ptr) { current_source_code_ptr = interpret(current_source_code_ptr, end_file_ptr, source_code, &tape); } char* interpret(char* current_source_code_ptr, const char* end_file_ptr, const char *source_code, Tape* tape) { switch (*current_source_code_ptr) { case '>': incrementPointer(tape); break; case '<': decrementPointer(tape); break; case '+': incrementValue(tape); break; case '-': decrementValue(tape); break; case '.': read(tape); break; case ',': get(tape); break; case '[': if (tape->data[tape->pointer] == (char)0) { int stack = 1; for (; current_source_code_ptr < end_file_ptr && stack > 0 && tape->pointer < (end_file_ptr - source_code); current_source_code_ptr++) { if (*current_source_code_ptr == '[') { ++stack; } else if (*current_source_code_ptr == ']') { --stack; } } if (stack != 0) { fprintf(stderr, "Syntax error. Missing closing ].\n"); exit(EXIT_FAILURE); } else { current_source_code_ptr++; } } break; case ']': if (tape->data[tape->pointer] != (char)0) { int stack = 1; for (; current_source_code_ptr >= source_code && stack > 0 && tape->pointer >= 0; current_source_code_ptr--) { if (*current_source_code_ptr == '[') { --stack; } else if (*current_source_code_ptr == ']') { ++stack; } } if (stack != 0) { fprintf(stderr, "Syntax error. Missing opening [.\n"); exit(EXIT_FAILURE); } else { current_source_code_ptr++; } } break; default: break; } return ++current_source_code_ptr; } Complexity The switch/case statement in the function interpret() is too long, each case should be implemented by a function, so the code for case '[': and case ']': should be moved into separate functions. Use System Defined Constants The header file stdlib.h includes system specific definitions for the macros EXIT_SUCCESS and EXIT_FAILURE. This would make the code more readable and possibly more portable. 
// main.c #include <stdio.h> #include <stdlib.h> #include "tape.h" int main(int argc, char** argv) { FILE* file; if (argc < 2) { file = fopen("helloworld.bf", "r"); } else { file = fopen(argv[1], "r"); } if (file == NULL) { fprintf(stderr, "Can not open file %s\n", argv[1]); return EXIT_FAILURE; } if (fseek(file, 0L, SEEK_END) != 0) { fprintf(stderr, "Fail to fseek file %s\n", argv[1]); return EXIT_FAILURE; } long filesize = ftell(file); if (filesize < 0) { fprintf(stderr, "Fail to read file's size\n"); return EXIT_FAILURE; } rewind(file); char source_code[filesize]; size_t result = fread(source_code, 1, filesize, file); if (fclose(file) != 0) { fprintf(stderr, "Can not close file %s\n", argv[1]); return EXIT_FAILURE; } if (result != filesize) { fprintf(stderr, "Can not read file. Corrupt\n"); } Tape tape; initializeTape(&tape); long i = 0; while (i < filesize) { i = interpret(&tape, source_code, filesize, i); } freeTape(&tape); return EXIT_SUCCESS; }
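A further option the answer stops short of, sketched here in Python purely as an illustration (it is not part of the reviewed C code): instead of rescanning the source for the matching bracket every time a `[` or `]` is hit, the bracket pairs can be matched once up front with a stack, giving a jump table the main loop can consult in constant time.

```python
def build_jump_table(code):
    """Map each '[' index to its matching ']' index and vice versa.

    Raises ValueError on unbalanced brackets, before execution starts.
    """
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            if not stack:
                raise ValueError("Syntax error. Missing opening [.")
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    if stack:
        raise ValueError("Syntax error. Missing closing ].")
    return jumps


def run(code, tape_size=30000):
    """Minimal brainfuck evaluator (no ',' input) using the jump table."""
    jumps = build_jump_table(code)
    tape = [0] * tape_size
    out = []
    ptr = pc = 0
    while pc < len(code):
        c = code[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]   # land on the matching ']'; pc += 1 skips past it
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]   # land on the matching '['; pc += 1 re-enters the body
        pc += 1
    return ''.join(out)
```

The one-pass matching also reports unbalanced brackets before the program runs at all, instead of mid-run as in the reviewed interpret().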
{ "domain": "codereview.stackexchange", "id": 39897, "tags": "c, brainfuck" }
What is the mechanism for nitration of cinnamaldehyde?
Question: The group added on to the benzene ring (-CH=CH-CH=O) should be an electron withdrawing group, yet the sources I found online suggest an ortho-/para- product rather than the expected meta product. Is there an explanation for this and preferably a mechanism for the reaction? Answer: I regret that the OP did not provide a link to the o,p-nitration of cinnamaldehyde. HotLasagna's surmise that m-nitration should apply is well-founded. I did a reaction search on Chem. Absts. for the nitration of cinnamaldehyde without success. Because cinnamaldehyde is a vinylog of benzaldehyde, the latter should serve as a model for the former. The literature confirms the preferred m-nitration of benzaldehyde. Continuous flow nitration[1] ($\ce{HNO3/H2SO4}$) of benzaldehyde gives high yields of the o:m-isomers (1:4) without mention of p-nitrobenzaldehyde. Two additional methods employ trifluoromethanesulfonic anhydride (triflic anhydride) in conjunction with tetramethyl ammonium nitrate or ethyl ammonium nitrate. In the former case[2], the o:m:p ratio was 31/63/6 while in the latter study[3] the o:m:p ratio was 27:70:3. Contrary to the rationale suggested by @ M. L., the determining factor is (would be) the destabilization of the positive charge by the aldehyde and vinylogous aldehyde groups in 1 and 3, respectively, whereas the m-nitration resonance structures 2 and 4 avoid this problem. (Not all of the canonical resonance structures are shown.) A. A. Kulkarni, V. S. Kalyani, R. A. Joshi and R. R. Joshi, Org. Proc. Res. & Dev., 2009, 13, 999. S. A. Shackelford, et al., J. Org. Chem. 2003, 68, 267. G. Aridoss and K. K. Laali, J. Org. Chem. 2011, 76, 8088.
{ "domain": "chemistry.stackexchange", "id": 17032, "tags": "organic-chemistry, reaction-mechanism" }
ROS Answers SE migration: pcl bag_to_pcd
Question: I'm trying to convert a bagfile into pcd files using bag_to_pcd. But everything I get is [ERROR] [1297703460.328842013]: You requested a transform at time 1292497206.348, but the tf buffer only contains a single transform at time 1292497207.122. When trying to transform between /head_tof_link and /base_link. What's wrong? Replaying the bag file using rosbag play seems to work correctly. Originally posted by Tully on ROS Answers with karma: 1145 on 2011-02-14 Post score: 2 Answer: By default I believe that bag_to_pcd uses /base_link as the output TF transform. This should be exposed properly through a parameter. You can either modify it yourself (line 133) and submit a patch, or ticket me and I'll make sure it gets fixed. Originally posted by tfoote with karma: 58457 on 2011-02-14 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 4729, "tags": "ros, pcl, pcd, perception-pcl" }
Wrapper that accepts both scalar and collection
Question: Can I somehow refactor this into one class? Some things are common here, like the Metadata property. public class Resource<TEntity> { public TEntity Data { get; set; } public Dictionary<string, string> Metadata { get; set; } public Resource(TEntity data) { Data = data; } } public class ResourceCollection<TEntity> { public ICollection<TEntity> Data { get; set; } public Dictionary<string, string> Metadata { get; set; } public int Count { get; set; } public ResourceCollection(ICollection<TEntity> data) { Data = data; Count = data.Count; } } Is it possible to make an interface that expects either ICollection<TEntity> or TEntity alone? This wrapper class is used for serialization; it simply acts as a wrapper, but I still have some doubts that it could be done better. I also came up with this: public class Resource<T> { public T Data { get; set; } [JsonExtensionData] public Dictionary<string, object> Metadata { get; set; } = new Dictionary<string, object>(); public Resource(T data) { Data = data; if (data is ICollection collection) { Metadata["count"] = collection.Count; } } } Answer: I guess you could use inheritance if you really need that extra property: public class ResourceCollection<TEntity> : Resource<ICollection<TEntity>> { public ResourceCollection(ICollection<TEntity> data) : base(data) { //modify metadata if needed } public int Count => Data.Count; } While technically this is still two classes, you can now cast your collections to Resource<T> and process them in a generic manner with your other resources.
{ "domain": "codereview.stackexchange", "id": 30656, "tags": "c#" }
Name for a certain 1D lattice model
Question: I have encountered a physical system where the microstates are described by a vector $$k = \left[k_1, k_2, \ldots, k_n\right]$$ where all the $k_i$ are strictly positive integers smaller than some $k_{max}$. The associated energy for a microstate $k$ is given by $$a \sum_{i=1}^{n}k_i + b \sum_{i=1}^{n-1}\left|k_i-k_{i+1}\right|.$$ I am also interested in the case where the energy is given by $$a \sum_{i=1}^{n}k_i + b \sum_{i=1}^{n-1}\left(k_i-k_{i+1}\right)^2.$$ I was wondering whether such systems have been studied before, and in particular whether something is known about the partition function of the system where $k_{max}\rightarrow \infty$. For a finite $k_{max}$ the partition function can be calculated numerically by using the transfer matrix method. One can approximate the partition function of the $k_{max} \rightarrow \infty$ case by taking $k_{max}$ sufficiently large, but I am hoping to get something more concrete than this. I am primarily interested in the 1D open boundary case, but periodic boundary conditions and potential natural generalisations to higher dimensional lattices would also be of interest. One can also consider the case where the $k_i$ are now strictly positive real numbers instead. Answer: The limiting case $k_{\rm max}\to\infty$ is well known. When the interaction is $\sum_{i=1}^{n-1} |k_i - k_{i+1}|$, this is known as the (one-dimensional) SOS model; when the interaction is $\sum_{i=1}^{n-1} (k_i - k_{i+1})^2$, this is known as the (one-dimensional) discrete (or integer-valued) Gaussian free field (GFF). The versions with continuous spins are respectively known as the continuous SOS model and the Gaussian free field (or, sometimes, as the harmonic crystal). In all cases, the spins are usually not assumed to be positive (that is, they take values in $\mathbb{Z}$ or $\mathbb{R}$). However, taking them positive is a common variant, useful to model an interface above a wall.
In the variant with the positivity constraint, the term $a\sum_{i=1}^{n}k_i$ (with $a>0$) models a layer of unstable phase above a wall (the height of the layer above site $i$ being given by $k_i$). In dimension $1$, the fact that the spins take discrete or continuous values, and the particular form of the interaction potential (that is, SOS or GFF or any other reasonable one) play very little role. The behavior of this system (when $n\to\infty$) is well understood when $a$ is small: the width of the unstable layer is of order $a^{-1/3}$ and the correlation length of order $a^{-2/3}$. In fact, much more precise information is available: after a scaling by $a^{1/3}$ vertically and $a^{2/3}$ horizontally, the distribution of the (linear interpolation of) the function $i\mapsto k_i$ converges (as $n\to\infty$ and $a=a(n)\to 0$ not too fast) to the distribution of the trajectories of a stationary Ferrari-Spohn diffusion. This is proved in this paper. A substantially older paper relying on exact computation (via transfer matrix) rather than probabilistic tools, and obtaining weaker results, is this one. Of course, exact computations are more sensitive to details, so the latter paper applies only to the continuous SOS model, while the former applies to a very general class of interactions (and, actually, to a very general class of confining potentials, not necessarily linear in the $k_i$, although this in general changes the relevant scaling). For finite $k_{\rm max}$, the results would be essentially the same as long as $k_{\rm max} \gg a^{-1/3}$. When $k_{\rm max} \ll a^{-1/3}$, I'd expect that the confining potential plays no role. This should be easy to prove. The higher-dimensional analogues have also been studied, although the results are far from being as complete. As mentioned above, this is often used to model a layer of unstable phase above a hard wall. 
Interestingly, in the case where $a=\lambda/n$, where $n$ is the linear size of the system, the above-mentioned convergence of the rescaled interface to a Ferrari-Spohn diffusion was even proved in a much more complex setting, namely the 2-dimensional Ising model in an external magnetic field; see this paper.
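The transfer-matrix computation the question mentions is compact enough to sketch. Below is an illustrative Python version for the open-boundary chain with energy $a\sum_i k_i + b\sum_i |k_i-k_{i+1}|$ and $k_i\in\{1,\dots,k_{\max}\}$, cross-checked against brute-force enumeration on small chains; all parameter values here are arbitrary.

```python
import itertools
import math

def partition_brute(n, kmax, a, b, beta=1.0):
    """Sum exp(-beta*E) over all k in {1..kmax}^n by direct enumeration."""
    Z = 0.0
    for k in itertools.product(range(1, kmax + 1), repeat=n):
        E = a * sum(k) + b * sum(abs(k[i] - k[i + 1]) for i in range(n - 1))
        Z += math.exp(-beta * E)
    return Z

def partition_transfer(n, kmax, a, b, beta=1.0):
    """Open-boundary transfer-matrix evaluation of the same sum.

    u[k] carries the on-site weight of the first spin; each application
    of T adds one bond weight and the on-site weight of the next spin.
    """
    ks = range(1, kmax + 1)
    u = [math.exp(-beta * a * k) for k in ks]
    T = [[math.exp(-beta * (b * abs(k - kp) + a * kp)) for kp in ks] for k in ks]
    v = u
    for _ in range(n - 1):
        v = [sum(v[i] * T[i][j] for i in range(kmax)) for j in range(kmax)]
    return sum(v)
```

For modest $k_{\max}$ the transfer-matrix route costs $O(n\,k_{\max}^2)$ instead of the $O(k_{\max}^n)$ of enumeration, which is why one can push $k_{\max}$ large to approximate the $k_{\max}\to\infty$ limit.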
{ "domain": "physics.stackexchange", "id": 91167, "tags": "thermodynamics, statistical-mechanics, terminology, lattice-model" }
Calculate damping constant / coefficient
Question: I am trying to graphically simulate a series of springs in 2D. Now one of the forces I am stuck with calculating is the damping force. The given formula is $F = -k_d v$. I know that $v$ is the velocity of the vectors, but I can't seem to find how to calculate $k_d$. Answer: For a viscous damper, the decay in the free oscillation amplitude is exponential (it is geometric for hysteretic damping and linear for Coulomb damping). So if you have the time history of the amplitude of your decay and you know it is a viscous damper (which is the equation you gave) then you can measure the amplitude $A$ at two consecutive peaks and calculate: $$\gamma = \ln \left(\frac{A_{t_n}}{A_{t_{n+1}}}\right) $$ you can then find the damping coefficient to give this decay as: $$\zeta = \frac{\gamma}{\sqrt{4 \pi^2 + \gamma^2}}$$ where then of course $\zeta = k_d/(2\sqrt{k m})$. So given a spring with unknown damping coefficient but known stiffness, you can attach a known mass to it and measure its response to a disturbance and determine from that the damping coefficient. Since you are just going for aesthetics, you pick your damping constants arbitrarily. I would actually recommend that you play with it and see how it influences the solution; it's actually pretty cool to visualize. All you do is pick values for $\zeta \in [0.0, 2.0]$ where the upper bound is really limitless but not much will change when it is greater than $2.0$. Then you can compute your $k_d$ based on $k$ and $m$. Depending on your time integration, you may find that $\zeta = 0$ will be unstable. You might need something nominal to stabilize the scheme. When $\zeta = 1$, it is called critically damped and you should not see much oscillation at all (it will be driven to steady state without oscillation). I say you won't see much because as a system of springs and with numerical integration, it won't be exactly critically damped.
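The formulas in the answer translate directly into code; here is a small Python sketch (function names are my own, not from any physics library) that goes from two consecutive peak amplitudes to $\zeta$ and then to $k_d$:

```python
import math

def log_decrement(a_n, a_n1):
    """Logarithmic decrement gamma = ln(A_n / A_{n+1}) from two
    consecutive peak amplitudes of a free decay."""
    return math.log(a_n / a_n1)

def damping_ratio(gamma):
    """zeta = gamma / sqrt(4*pi^2 + gamma^2)."""
    return gamma / math.sqrt(4 * math.pi ** 2 + gamma ** 2)

def damping_coefficient(zeta, k, m):
    """Solve zeta = k_d / (2*sqrt(k*m)) for k_d, given stiffness k and mass m."""
    return zeta * 2.0 * math.sqrt(k * m)
```

For a viscous damper the exact ratio of successive peaks is $e^{\gamma}$ with $\gamma = 2\pi\zeta/\sqrt{1-\zeta^2}$, and the answer's formula inverts this exactly, so a round trip through these functions recovers the $\zeta$ you started from.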
{ "domain": "physics.stackexchange", "id": 19936, "tags": "newtonian-mechanics, friction, spring, drag, oscillators" }
Pandas-style "indexer proxy" descriptor
Question: The goal is to write a decorator that allows you to define "indexer proxies", which define properties with __getitem__ instead of normal method calls, like loc and iloc in Pandas dataframes. To that end I ended up with a descriptor constructing an inner class with the transformed __getitem__, an instance of which is then constructed in __get__: from functools import wraps class locator: def __init__(self, getter): class LocatorProxy: def __init__(self, obj): self.obj = obj @wraps(getter) def __getitem__(self, key): return getter(self.obj, key) self.proxy_class = LocatorProxy def __get__(self, obj, objtype=None): return self.proxy_class(obj) Example usage: In [41]: class Foo: ...: def __init__(self, **vals): ...: self.vals = vals ...: @locator ...: def loc(self, key): ...: """Return important stuff""" ...: return self.vals[key] ...: In [42]: Foo(x = 1, y = 10).loc["y"] Out[42]: 10 In [43]: Foo(x = 1, y = 10).loc["z"] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-43-d91ef97fd1fe> in <module> ----> 1 Foo(x = 1, y = 10).loc["z"] <ipython-input-40-ca29fed22d1f> in __getitem__(self, key) 9 @wraps(getter) 10 def __getitem__(self, key): ---> 11 return getter(self.obj, key) 12 13 self.proxy_class = LocatorProxy <ipython-input-41-fb4c4b7f731c> in loc(self, key) 5 def loc(self, key): 6 """Return important stuff""" ----> 7 return self.vals[key] 8 KeyError: 'z' In [44]: Foo(x = 1, y = 10).loc["x"] Out[44]: 1 I'd like to know whether this is an idiomatic usage of descriptors. Is this the right way to avoid unnecessary allocations/definitions of the inner class as much as possible? Does @wraps do the right thing? Can I do anything useful with __set_name__? Only the getters are needed in my case, so no .setter thing like property has is necessary for now, although it would be nice to see how it's done. 
Answer: Is this the right way to avoid unnecessary allocations/definitions of the inner class as much as possible? This smells like premature optimization. If you actually care about every microsecond, you shouldn't be doing any of this, and should just have regular class methods. That said, no, your approach doesn't avoid as many allocations as it could. Rather than one allocation per __get__, you can restructure your code to have one allocation per @locator decorator instance. Does @wraps do the right thing? Probably, but it isn't strictly needed. The following suggested code has no nested class definitions, no class definitions in closure scope, no @wraps calls, and attempts to brow-beat mypy into understanding your types. This last part is only partially successful. from typing import Any, Callable, Generic, Optional, Type, TypeVar InstanceType = Any # mypy is too stupid to infer this type as a generic KeyType = TypeVar('KeyType') ValueType = TypeVar('ValueType') LocatorCallback = Callable[[InstanceType, KeyType], ValueType] class LocatorProxy(Generic[KeyType, ValueType]): __slots__ = ('method', 'instance') def __init__(self, method: 'LocatorCallback'): self.method = method self.instance: Optional[InstanceType] = None def __getitem__(self, item: KeyType) -> ValueType: assert self.instance is not None return self.method(self.instance, item) class locator(Generic[KeyType, ValueType]): __slots__ = ('proxy_class',) def __init__(self, method: LocatorCallback) -> None: self.proxy_class = LocatorProxy[KeyType, ValueType](method) def __get__( self, instance: InstanceType, owner: Optional[Type[InstanceType]] = None, ) -> LocatorProxy: self.proxy_class.instance = instance return self.proxy_class class Foo: def __init__(self, **vals: int) -> None: self.vals = vals loc: LocatorProxy[str, int] @locator # type: ignore # you really need to shoehorn this thing in def loc(self, key: str) -> int: return self.vals[key] def test() -> None: foo = Foo(x=1, y=10) l = foo.loc assert 
l["y"] == 10 try: l["z"] raise AssertionError() except KeyError: pass if __name__ == '__main__': test()
{ "domain": "codereview.stackexchange", "id": 42353, "tags": "python, properties" }
Classification of Conversations in Text
Question: I am trying to pick a technique for classifying conversational text. I am concerned about treating the problem at a level of fidelity of each individual message because people often say things like, "ok" or short responses that have no inferable meaning. How does conversation classification typical handle these types of problems? To elaborate, a conversation might be: P1: Hi I want to buy a car? P2: Ok. Great! P1: What cars do you have? P2: A large variety! The topic is cars, but this can not be inferred by anything P2 says, nor should it be. So would you break a conversation into blocks of time, or is there a technique for partitioning? Answer: There are multiple techniques that help with the problem you sketch, the applicability of which usually depends on the classification technique and the corpus. But I get the idea you would be helped by some practical examples. So let's go over some of them. Feel free to comment if I tread over familiar grounds, or you want me to elaborate on some of them. Or where to start experimenting with them Stopping A simple technique is applying a stoplist: A list of common words that should be removed. There are pre-packaged lists, but most packages allow you to provide your own. TF/IDF A technique that transforms your features by weighing words by their term frequency (how often do they occur in the document) divided by the document frequency (how often does the word occur in other documents. This way frequent words are made less relevant to the document POS Many packages offer you a part-of-sentence-tagger, that will tag the words by their grammatical function (Verb for instance). You can leverage that in the tokenization step to filter out words (usually you'll look for verbs and nouns). Some vectorizers can do this straight out. (This could also be done with NER) Stemming transforms inflections of words (eg: train/trains) to a stem. 
This might make some of your words a little more relevant by upping the chance a pair of them collides. Restricting your vectorizer: Most packages sport a vectorizer that you can instruct to either look for a minimum/maximum document count (ignore words that occur either in too many different documents, or in too few different documents), or to cap the amount of features (words). Capping the amount of words usually selects for the most frequently used words. Encoding word/token based features to more semantic features: Word2Vec, but also older techniques such as LDA/LSI. Picking the classification algorithm: Some algorithms are very capable of handling large feature spaces (Naive Bayes for instance), and some algorithms learn to transform the feature space to find better ways to weigh the features. Packages such as Sklearn, NLTK and Gensim offer most of these techniques. Let me know if this was helpful
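Of the techniques listed, TF/IDF is small enough to sketch without any package. A dependency-free Python toy version (my own naming, not Sklearn's API) that makes the "frequent words are made less relevant" point concrete:

```python
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of token lists. Returns one {term: weight} dict per doc.

    tf  = raw count of the term in the document;
    idf = log(N / document frequency), so a term occurring in every
    document gets weight 0 regardless of how often it is used.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # each doc counts a term at most once
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return weights
```

With this weighting a filler word like "a" that appears in every document gets idf = log(N/N) = 0, while a rarer content word such as "car" keeps a positive weight; production vectorizers add smoothing and normalization on top of this basic scheme.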
{ "domain": "datascience.stackexchange", "id": 5790, "tags": "classification, nlp" }
Finding common ancestor in a binary tree
Question: Following is a Python program I wrote to find the common ancestor in a binary tree (not BST) with no link to parent node. Please review. def common_ancestor(N1, N2, head): if not head: return [None, 0, 0] left_status = common_ancestor(N1, N2, head.left) if left_status[0]: return left_status this_status = [None, 0, 0] if head == N1: this_status[1] = 1 if head == N2: this_status[2] = 1 right_status = common_ancestor(N1, N2, head.right) if right_status[0]: return right_status combined_status = [None, 0, 0] combined_status[1] = left_status[1] or this_status[1] or right_status[1] combined_status[2] = left_status[2] or this_status[2] or right_status[2] if combined_status[1] and combined_status[2]: print "Common Ancestor is %s" % head combined_status[0] = head return combined_status if __name__ == "__main__": head = some_node print common_ancestor(5, 32, head)[0] Answer: 1. Review You didn't give us the definition of the node objects, so I'm guessing you used something like this: from collections import namedtuple Node = namedtuple('Node', 'left right'.split()) There's no docstring. How do I call common_ancestor? What is the meaning of the arguments N1, N2, and head? (I'm guessing that N1 and N2 are the nodes to find the nearest common ancestor of, and that head has to be the root node of the tree.) The function common_ancestor does two things: it finds a node and prints a message about it. It would be better to separate these responsibilities. The print statement means that the code is not portable to Python 3. The function allocates a new list of three status elements for each node in the tree. But as we'll see below, we only need a total of two status elements. The test head == N2 is not quite right. Here we need the actual node N2, not just any node that happens to be equal to N2, so the test should use is rather than ==. 2. 
Simpler implementation The function common_ancestor traverses the tree in depth-first order, keeping track of which of N1 and N2 have been visited so far. So let's start with a simple depth-first traversal implementation: def traverse(node): """Visit all nodes in the tree starting at node, in depth order.""" if node is None: return traverse(node.left) traverse(node.right) Here's a binary tree with two shaded nodes that we want to find the common ancestor of. The thin line shows the order in which nodes are visited by a depth-first traversal. Suppose that we augment this traversal function so that it keeps track of how many of n1 and n2 have been visited so far: count = 0 def traverse(node): """Visit all nodes in the tree starting at node, in depth order.""" global count if node is None: return print("Entering {}, count={}".format(node, count)) if node is n1: count += 1 if node is n2: count += 1 traverse(node.left) traverse(node.right) print("Exiting {}, count={}".format(node, count)) This figure shows the value of count on entry to each node: And this figure shows the value of count on exit from each node: You'll see that all the common ancestors of the two nodes have count = 0 on entry and count = 2 on exit (and only common ancestors have this property). So that means that we can implement common_ancestor like this: def common_ancestor(n1, n2, head): """Return the nearest common ancestor of nodes n1 and n2 in the tree rooted at head. """ count = 0 # How many nodes in {n1, n2} have been visited so far? 
ancestor = None def traverse(node): nonlocal count, ancestor if node is None or ancestor is not None: return count_at_entry = count if node is n1: count += 1 if node is n2: count += 1 traverse(node.left) traverse(node.right) if count_at_entry == 0 and count == 2 and ancestor is None: ancestor = node traverse(head) return ancestor The nonlocal statement was new in Python 3, so if you're stuck on Python 2 then you'll have to find a workaround, for example using a status object: class Status(object): pass status = Status() status.count = 0 status.ancestor = None # etc. 3. Efficient implementation? The implementation discussed here has to traverse the whole tree and so takes time \$ O(n) \$. Whether this is important depends on how many common-ancestor queries you have. If there are many, then you'll probably want to change your tree data structure so that you can make efficient common-ancestor queries. Adding parent pointers allows you to bring the query cost down to \$ O(\log n) \$ on balanced trees, but if you have many common-ancestor queries then you may prefer to preprocess the tree into a data structure that supports \$O(1)\$ queries. See Bender and Colton (2000) for an approach based on range-minimum queries.
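To round this off, the reviewer's Python 3 version can be exercised on a small tree; Node here is a minimal hypothetical stand-in for the asker's unspecified node type:

```python
class Node:
    """Minimal binary tree node; stands in for the asker's node type."""
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right

def common_ancestor(n1, n2, head):
    """Nearest common ancestor of n1 and n2 in the tree rooted at head,
    using the single-pass entry/exit-count approach described above."""
    count = 0            # how many of {n1, n2} have been visited so far
    ancestor = None
    def traverse(node):
        nonlocal count, ancestor
        if node is None or ancestor is not None:
            return
        count_at_entry = count
        if node is n1:
            count += 1
        if node is n2:
            count += 1
        traverse(node.left)
        traverse(node.right)
        if count_at_entry == 0 and count == 2 and ancestor is None:
            ancestor = node
    traverse(head)
    return ancestor

# Example tree:    a
#                 / \
#                b   c
#               / \
#              d   e
d, e, c = Node(), Node(), Node()
b = Node(d, e)
a = Node(b, c)
```

Note the deepest node with entry count 0 and exit count 2 is the first to be reported in post-order, which is why the `ancestor is None` guard keeps the nearest ancestor rather than a higher one.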
{ "domain": "codereview.stackexchange", "id": 12699, "tags": "python, algorithm, tree" }
What is the ratio of the acceleration in the two cases (a) and (b)
Question: What is the ratio of the acceleration in the two cases (a) and (b)? I thought that the ratio would be 1:1 but my textbook says it's 1:3, so can someone explain to me how that's possible. Answer: $1)$ Let's draw all the forces acting on the blocks in the first case. Now we apply Newton's laws of motion assuming that the block of mass 2m accelerates downward with acceleration $a_1$ and the block of mass m accelerates upward with the same magnitude of acceleration $a_1$ (because they are constrained to have the same acceleration as long as the string is taut). So, \begin{align} &2mg-T=2ma_1\qquad\quad\cdots (1)\\ &T-mg=ma_1\qquad\qquad\cdots (2)\\ \end{align} adding (1) and (2), $$\implies(2mg-T)+(T-mg)=2ma_1+ma_1$$ $$\implies mg=3ma_1$$ $$\implies a_1=\frac{g}{3}\qquad\qquad\cdots(3)$$ Thus, the acceleration of the block of mass m would be $\frac{g}{3}$ in the upward direction. $2)$ Now drawing the forces for the second case: We assume that the string is ideal, and as we know an ideal string is massless and inextensible, which means that the tension at any point throughout the string should be constant, because if it were not constant then the net force at that point would not be zero and thus the string, being massless, would have infinite acceleration; therefore the downward force that you apply at the free end of the string should be equal to the tension at that point, which eventually gets transferred to the block and lifts it up. Thus, $$|\vec{T}|=|\vec{F}| = 2mg$$ Let's say that the block moves upward with acceleration $a_2$. $$T-mg = ma_2$$ $$\implies 2mg - mg = ma_2$$ $$\implies mg = ma_2$$ $$\implies a_2 = g$$ So you see that in this case the block of mass m has an upward acceleration equal to $g$, i.e. 3 times the acceleration in the first case (refer to equation 3). Thus $$\frac{a_1}{a_2} = \frac{\frac{g}{3}}{g} = \frac{1}{3}$$ $$\implies a_1 : a_2 = 1: 3$$
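The algebra is easy to double-check numerically; a short Python verification of both cases (the values of g and m are arbitrary):

```python
g = 9.81   # gravitational acceleration, any positive value works
m = 1.0    # unit block mass

# Case (a): masses 2m and m over an ideal pulley (an Atwood machine).
# Solving 2m*g - T = 2m*a1 and T - m*g = m*a1 simultaneously gives
# a1 = (2m - m)*g / (2m + m):
a1 = (2 * m - m) * g / (2 * m + m)      # = g/3

# Case (b): a constant pull F = 2m*g on the free end, so T = 2m*g
# and T - m*g = m*a2:
a2 = (2 * m * g - m * g) / m            # = g

ratio = a1 / a2
```

The difference between the cases is that in (a) the 2m block must itself be accelerated, so the tension is less than 2mg, whereas in (b) the full 2mg is available as tension.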
{ "domain": "physics.stackexchange", "id": 24315, "tags": "newtonian-mechanics" }
Multi-threading in ROS nodes and MultiThreadedSpinner and AsyncSpinner
Question: To put it simply: What is the application of multithreading in ROS nodes? I am not from a Computer Science background. I have been learning Operating System concepts like processes, threads, signals, etc., only very recently. I came to the conclusion that one would hardly have to use a fork() in a ROS program because ROS nodes are themselves like different processes. (Ques.1) Is this conclusion right? Or, are there instances where one might need to use fork() and exec()? I got the idea of using one thread to simply wait for callbacks all the time and another to do the useful work, from here. (Ques.2) Is this the standard way of doing it? Is it a good idea to follow the idea quoted above or is it better to include a ros::spinOnce() regularly in the code? The latter sounds like a buggy solution to me. Please correct me where I am wrong. If I do as it has been shown in that tutorial, then I see no point in spinning from multiple threads. Though, I am sure there is an application. I am not able to grasp the importance of it (especially MultiThreadedSpinner and AsyncSpinner) without applications and examples. (Ques.3) Please give examples and suggest applications where I might use a multithreaded node and in turn wait for callbacks (spin) from each separate thread. If any of the questions seems trivial, please bear with me and explain. Originally posted by McMurdo on ROS Answers with karma: 1247 on 2014-01-10 Post score: 6 Answer: I think it's safe to say that, in general, you'd not use fork/exec within a ROS node. roslaunch serves that purpose for most cases. You cited actionlib for your threading example. Yes, spinOnce is a proper (and non-buggy) way of handling callbacks. 
In this context, realize that ROS provides three types of inter-node interaction: publish/subscribe is non-blocking and stateless (see '*Note' below) service calls are blocking and stateful actionlib calls are non-blocking and stateful One uses an actionlib client when other 'work' needs to be done while the action is executing; that is, one thread will perform that 'work' while another keeps track of the action's progress. ROS's spin is a terminal call (i.e., non-returning until the node shuts down), with the effect that all message processing must be handled within the callbacks (and processing will only be done in the callbacks). Clearly that's inadequate for the other 'work'. As an example, let's take a GUI for...oh, anything long-running, say navigation or object recognition. The 'work' that needs to be done is keeping the user interface responsive -- blocking the UI thread is a non-starter. For navigation, you likely want to update some representation of the current state with the actionlib feedback as it's received while the user interacts with other parts of the UI. For object recognition, it's likely not the feedback that's important, but rather the final results -- while the scene is being analyzed, the user (or the system internally) can 'work on' other things. Some previously asked/answered spinOnce questions that you may want to look at: Is spinOnce needed? Significance of ros::spinOnce? ROS callbacks, threads and spinning *Note: in fact, it's exactly spinOnce that allows pub/sub to be non-blocking, as spin is terminal, and thus effectively blocks. Originally posted by kramer with karma: 1470 on 2014-07-17 This answer was ACCEPTED on the original site Post score: 7 Original comments Comment by McMurdo on 2014-07-18: I fully understand your answer. However, my question was not, "why would there be spinning in a multi-threaded program?" 
My question was, "why would there be multiple spins in a multi-threaded program, when there can be one thread that is perpetually spinning and the others waiting for a trigger?" Comment by kramer on 2014-07-19: Ah, OK. A single spinOnce will process all queued messages, but does so sequentially. Multiple and multi-threaded spinOnce calls provide concurrent message processing. That is, a long-running callback from a single spinOnce will hold up subsequent messages. More next comment... Comment by kramer on 2014-07-19: If callbacks are all very fast -- say, just data copying, with another thread actually processing -- you may never see any need for multiple spinOnce calls. In fact, that's what I've done for a large Qt project: a single spinOnce thread, quick callbacks, and additional threads for other processing. Comment by McMurdo on 2014-07-19: Great. If you have time, please also explain about callbacks and spinning in nodelets. Perhaps there is already a question about that? And also what happens when we declare a NodeHandle and associate a queue to it in a Nodelet. I can make a new question if needed. Thanks Comment by kramer on 2014-07-19: You'd be better off searching for prior questions and/or opening another question, just to keep separate topics separate. (Besides, my nodelet experience is minimal.)
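The pattern kramer describes in the comments — one spinning thread with quick callbacks that only copy data, plus a separate worker thread for the heavy processing — doesn't need ROS to illustrate. A plain-Python sketch with queue.Queue (all names are hypothetical stand-ins, not roscpp API):

```python
import queue
import threading

work_queue = queue.Queue()
results = []

def callback(msg):
    """Keep callbacks quick: just copy the data onto a queue and return,
    so the 'spinner' thread invoking them is never blocked."""
    work_queue.put(msg)

def worker():
    """Heavy processing happens here, off the spinner thread."""
    while True:
        msg = work_queue.get()
        if msg is None:             # sentinel: shut down cleanly
            break
        results.append(msg * msg)   # squaring stands in for expensive work
        work_queue.task_done()

t = threading.Thread(target=worker)
t.start()

# Simulate a burst of incoming messages handled by the fast callback.
for m in range(5):
    callback(m)

work_queue.put(None)                # tell the worker to stop
t.join()
```

Because the callback returns immediately, the spinner stays responsive even when processing is slow; the FIFO queue preserves message order for the single worker.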
{ "domain": "robotics.stackexchange", "id": 16629, "tags": "ros, multithreading, asyncspinner" }
How accurate is the current read by /svh_controller/channel_currents?
Question: Hello everyone! I'm working with a real SVH 5-Finger hand from Schunk and I have performed a lot of tests regarding the current. As was pointed out before, there is no force feedback on the hand, so the current is the only relevant variable for measuring how much "effort" is applied to an object. I'm working on developing a grasp controller based on the current of the hand, therefore I'm using the topic /svh_controller/channel_currents provided by the driver. My question is whether this topic is reliable enough and if it contains an accurate measurement of the current consumption. Or in other words, should I use another method to obtain the current? Best regards, Charlie Originally posted by CharlieMAC on ROS Answers with karma: 28 on 2018-09-05 Post score: 0 Answer: Hello Charlie, there is an error of about ±10 mA. We compared the current reading of the topic to the actual current between the hand control circuit and the individual motors to get this value. Originally posted by T.Meurer with karma: 26 on 2018-09-10 This answer was ACCEPTED on the original site Post score: 1
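A grasp controller built on these readings should budget for the reported ~±10 mA error. A hedged sketch of one way to do that (the function name and the 120 mA threshold are made up for illustration; only the ±10 mA figure comes from the answer):

```python
ERROR_MA = 10.0            # measurement error from the answer above
GRASP_THRESHOLD_MA = 120.0 # hypothetical current level indicating contact

def is_grasping(current_ma):
    # Require the reading to exceed the threshold by the sensor error,
    # so a worst-case reading that is 10 mA too high still implies a real grasp.
    return current_ma - ERROR_MA >= GRASP_THRESHOLD_MA

print(is_grasping(125.0))  # False: still inside the error band
print(is_grasping(135.0))  # True
```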
{ "domain": "robotics.stackexchange", "id": 31715, "tags": "ros, ros-kinetic, grasping" }
Snake game in C++ with SFML
Question: I started learning programming about a year ago, starting with Python. Half a year ago, I moved onto C++ and this is my first large project with that language. Have I understood the basics of the language? In Main.cpp: #pragma warning(disable : 4996) #pragma warning(disable : 4244) #include <memory> #include <stdlib.h> #include <stdio.h> #include <time.h> #include <SFML\Graphics.hpp> #include "Snake.h" #include "SnakeFood.h" #include "HighScoreFile.h" class Snake; class SnakeFood; class HighScoreFile; void displayScores(sf::RenderWindow& window, HighScoreFile& highScorefile, int score, const sf::Font& font); void displayNewBest(sf::RenderWindow& window, const sf::Font& font); bool playAgain(sf::RenderWindow& window); std::unique_ptr<sf::Font> newFont(std::string&& fileName); std::unique_ptr<sf::SoundBuffer> newSoundBuffer(std::string&& fileName); int main() { srand(time(NULL)); static auto scoredSoundBuffer = newSoundBuffer("Sound Effects\\Scored.wav"); static auto celebrationSoundbuffer = newSoundBuffer("Sound Effects\\Celebration.wav"); static auto defeatSoundBuffer = newSoundBuffer("Sound Effects\\Defeat.wav"); static auto startupSoundBuffer = newSoundBuffer("Sound Effects\\Startup.wav"); sf::Sound scoredSoundEffect{ *scoredSoundBuffer }; sf::Sound celebrationSoundEffect{ *celebrationSoundbuffer }; sf::Sound defeatSoundEffect{ *defeatSoundBuffer }; sf::Sound startupSoundEffect{ *startupSoundBuffer }; scoredSoundEffect.setVolume(30.f); celebrationSoundEffect.setVolume(30.f); defeatSoundEffect.setVolume(30.f); startupSoundEffect.setVolume(30.f); static auto gameTextFont = newFont("Arcade Classic.ttf"); sf::RenderWindow window(sf::VideoMode(665, 595), "Snake", sf::Style::Close | sf::Style::Titlebar); while (true) { Snake snake{}; SnakeFood food{ window, snake }; int score{ 0 }; HighScoreFile highScoreFile{ "high-score-file.txt" }; startupSoundEffect.play(); while (window.isOpen()) { sf::Event event; while (window.pollEvent(event)) { switch (event.type) { 
case sf::Event::KeyPressed: switch (event.key.code) { case sf::Keyboard::W: case sf::Keyboard::Up: snake.changeDirection(Direction::Up); break; case sf::Keyboard::S: case sf::Keyboard::Down: snake.changeDirection(Direction::Down); break; case sf::Keyboard::A: case sf::Keyboard::Left: snake.changeDirection(Direction::Left); break; case sf::Keyboard::D: case sf::Keyboard::Right: snake.changeDirection(Direction::Right); break; } break; case sf::Event::Closed: exit(0); break; default: //No need to handle unrecognised events break; } } snake.checkIfOutOfBounds(window); snake.move(); if (snake.isTouchingFood(food)) { scoredSoundEffect.play(); snake.grow(); score++; food.setToRandomPosition(window, snake); } window.clear(sf::Color::Black); snake.drawBody(window); window.draw(food); sf::Text scoreText{ std::to_string(score), *gameTextFont, 30 }; scoreText.setPosition(10.f, 5.f); window.draw(scoreText);; window.display(); if (snake.isTouchingSelf()) { if (score > highScoreFile.getHighScore()) { celebrationSoundEffect.play(); displayNewBest(window, *gameTextFont); highScoreFile.editHighScore(score); _sleep(1500); } else { defeatSoundEffect.play(); } _sleep(1000); displayScores(window, highScoreFile, score, *gameTextFont); if (!playAgain(window)) { exit(0); } break; } } } return 0; } void displayScores(sf::RenderWindow& window, HighScoreFile& highScoreFile, int score, const sf::Font& font) { window.clear(sf::Color::Black); sf::Text scoreText{ "SCORE: " + std::to_string(score), font, 90 }; if (score < 10) { scoreText.setPosition(85.f, 85.f); } else { scoreText.setPosition(55.f, 85.f); } //scoreText.setPosition(85.f, 85.f); scoreText.setFillColor(sf::Color::Green); sf::Text highScoreText{ "HI SCORE: " + std::to_string(highScoreFile.getHighScore()), font, 80 }; highScoreText.setFillColor(sf::Color::Green); if (highScoreFile.getHighScore() < 10) { highScoreText.setPosition(40.f, 375.f); } else { highScoreText.setPosition(10.f, 375.f); } //highScoreText.setPosition(40.f, 375.f); 
window.draw(scoreText); window.draw(highScoreText); window.display(); } void displayNewBest(sf::RenderWindow& window, const sf::Font& font) { sf::Text newBest{ "NEW BEST!", font, 75 }; newBest.setPosition(110.f, 250.f); newBest.setFillColor(sf::Color::Red); window.draw(newBest); window.display(); } bool playAgain(sf::RenderWindow& window) { while (true) { sf::Event event; while (window.pollEvent(event)) { if (event.type == sf::Event::Closed) { window.close(); } switch (event.key.code) { case sf::Keyboard::Q: return false; break; case sf::Keyboard::Z: return true; break; default: //No need to handle unrecognised events break; } } } } std::unique_ptr<sf::Font> newFont(std::string&& fileName) { auto font = std::make_unique<sf::Font>(); if (!font->loadFromFile(fileName)) { exit(0); } return font; } std::unique_ptr<sf::SoundBuffer> newSoundBuffer(std::string&& fileName) { auto buffer = std::make_unique<sf::SoundBuffer>(); if (!buffer->loadFromFile(fileName)) { exit(0); } return buffer; } In SnakeRect.h: #pragma once #include <SFML\Graphics.hpp> enum class Direction { Left, Right, Up, Down }; class SnakeRect : public sf::RectangleShape { using RectangleShape::RectangleShape; public: SnakeRect(Direction dir); Direction direction() const; Direction oppositeDirection() const; private: Direction direction_; }; In SnakeRect.cpp: #include "SnakeRect.h" SnakeRect::SnakeRect(Direction dir) : RectangleShape{}, direction_{ dir } { } Direction SnakeRect::direction() const { return direction_; } Direction SnakeRect::oppositeDirection() const { switch (direction_) { case Direction::Up: return Direction::Down; break; case Direction::Down: return Direction::Up; break; case Direction::Right: return Direction::Left; break; case Direction::Left: return Direction::Right; break; default: break; } } In Snake.h: #pragma once #include <vector> #include <SFML/Audio.hpp> #include "SnakeRect.h" #include "SnakeFood.h" class Snake { public: Snake(); Snake(sf::Vector2f startingPos, Direction 
startingDir); bool isTouchingFood(const SnakeFood& food); bool isTouchingSelf(); void move(); void changeDirection(Direction dir); void checkIfOutOfBounds(const sf::RenderWindow& window); void grow(); void drawBody(sf::RenderWindow& window); friend class SnakeFood; private: std::vector<SnakeRect> body_; static const float thickness; static const float speed; static const sf::Color color; static const float startingLength; static const sf::Vector2f defaultStartingPos; static const Direction defaultStartingDir; }; In Snake.cpp: #pragma warning(disable : 4996) #include <chrono> #include "Snake.h" const float Snake::thickness{ 35.f }; const float Snake::speed{ 35.f }; const sf::Color Snake::color{ sf::Color::Green }; const float Snake::startingLength{ 3.f }; const sf::Vector2f Snake::defaultStartingPos{280.f, 280.f}; const Direction Snake::defaultStartingDir{Direction::Right}; Snake::Snake() : Snake{defaultStartingPos, defaultStartingDir} { } Snake::Snake(sf::Vector2f startingPos, Direction startingDir) { SnakeRect newRect{ startingDir }; newRect.setSize(sf::Vector2f(startingLength*speed, (float)thickness)); newRect.setPosition(startingPos); newRect.setFillColor(color); body_.push_back(newRect); } bool Snake::isTouchingFood(const SnakeFood& food) { const SnakeRect& frontRect{ (body_.at(body_.size() - 1)) }; return (frontRect.getGlobalBounds().intersects(food.getGlobalBounds())); } bool Snake::isTouchingSelf() { SnakeRect& frontRect{ body_.at(body_.size() - 1) }; for (auto it = body_.begin(); it != std::prev(body_.end()); it++) { if (frontRect.getGlobalBounds().intersects(it->getGlobalBounds())) { return true; } } return false; } void Snake::move() { SnakeRect& backRect{ body_.at(0) }; SnakeRect& frontRect{ body_.at(body_.size() - 1) }; for (int i{ 0 }; i < 2; i++) { SnakeRect& currentRect{ (i == 0) ? backRect : frontRect }; float modifier{ (i == 0) ? 
-(float)speed : (float)speed }; switch (currentRect.direction()) { case Direction::Up: currentRect.setSize(sf::Vector2f(currentRect.getSize().x, (currentRect.getSize().y) + modifier)); currentRect.move(0, (i == 1) ? -modifier : 0); break; case Direction::Down: currentRect.setSize(sf::Vector2f(currentRect.getSize().x, (currentRect.getSize().y) + modifier)); currentRect.move(0, (i == 0) ? fabs(modifier) : 0); break; case Direction::Left: currentRect.setSize(sf::Vector2f((currentRect.getSize().x) + modifier, currentRect.getSize().y)); currentRect.move((i == 1) ? -modifier : 0, 0); break; case Direction::Right: currentRect.setSize(sf::Vector2f((currentRect.getSize().x) + modifier, currentRect.getSize().y)); currentRect.move((i == 0) ? fabs(modifier) : 0, 0); break; default: //Will never execute since Direction is an enum break; } } if (backRect.getSize().x <= 0 || backRect.getSize().y <= 0) { body_.erase(body_.begin() + 0); } _sleep(150); } void Snake::changeDirection(Direction dir) { SnakeRect frontRect{ body_.at(body_.size() - 1) }; float frontRectX{ frontRect.getPosition().x }; float frontRectY{ frontRect.getPosition().y }; if (dir != frontRect.direction() && dir != frontRect.oppositeDirection()) { float xPosition{}; float yPosition{}; switch (frontRect.direction()) //Can shorten this down, will look into it { case Direction::Up: xPosition = (dir == Direction::Left ? frontRectX : frontRectX + (float)thickness); yPosition = frontRectY; break; case Direction::Down: xPosition = (dir == Direction::Left ? frontRectX : frontRectX + float(thickness)); yPosition = frontRectY + frontRect.getSize().y - (float)thickness; break; case Direction::Right: xPosition = frontRectX + frontRect.getSize().x - (float)thickness; yPosition = (dir == Direction::Up ? frontRectY : frontRectY + (float)thickness); break; case Direction::Left: xPosition = frontRectX; yPosition = (dir == Direction::Up ? 
frontRectY : frontRectY + (float)thickness); break; default: break; //Will never execute } float xSize{ (dir == Direction::Up || dir == Direction::Down) ? (float)thickness : 0.f }; float ySize{ (dir == Direction::Up || dir == Direction::Down) ? 0.f : (float)thickness }; SnakeRect newRect{dir}; newRect.setSize(sf::Vector2f(xSize, ySize)); newRect.setPosition(xPosition, yPosition); newRect.setFillColor(sf::Color::Green); body_.push_back(newRect); } } void Snake::checkIfOutOfBounds(const sf::RenderWindow& window) { const SnakeRect& frontRect{ body_.at(body_.size() - 1) }; float xPositionWithSize{ frontRect.getPosition().x + frontRect.getSize().x }; float yPositionWithSize{ frontRect.getPosition().y + frontRect.getSize().y }; bool isLeft{ frontRect.direction() == Direction::Left }; bool isRight{ frontRect.direction() == Direction::Right }; bool isUp{ frontRect.direction() == Direction::Up }; bool isDown{ frontRect.direction() == Direction::Down }; bool xOutOfBounds{ (frontRect.getPosition().x - (isLeft ? (float)speed : 0.f)) < 0 || xPositionWithSize + (isRight ? (float)speed : 0.f) > window.getSize().x }; bool yOutOfBounds{ (frontRect.getPosition().y - (isUp ? (float)speed : 0.f)) < 0 || yPositionWithSize + (isDown ? 
(float)speed : 0.f) > window.getSize().y }; if (xOutOfBounds || yOutOfBounds) { SnakeRect newRect{frontRect.direction()}; newRect.setFillColor(sf::Color::Green); sf::Vector2f newRectSize{}; sf::Vector2f newRectPos{}; switch (frontRect.direction()) { case Direction::Up: newRectSize = sf::Vector2f((float)thickness, 0.f); newRectPos = sf::Vector2f(frontRect.getPosition().x, (float)window.getSize().y); break; case Direction::Down: newRectSize = sf::Vector2f((float)thickness, 0.f); newRectPos = sf::Vector2f(frontRect.getPosition().x, 0.f); break; case Direction::Right: newRectSize = sf::Vector2f(0.f, (float)thickness); newRectPos = sf::Vector2f(0.f, frontRect.getPosition().y); break; case Direction::Left: newRectSize = sf::Vector2f(0.f, (float)thickness); newRectPos = sf::Vector2f((float)window.getSize().x, frontRect.getPosition().y); break; default: break; } newRect.setSize(newRectSize); newRect.setPosition(newRectPos); body_.push_back(newRect); } } void Snake::grow() { SnakeRect& backRect{ body_.at(0) }; switch (backRect.direction()) { case Direction::Up: backRect.setSize(sf::Vector2f(backRect.getSize().x, (backRect.getSize().y) + (float)speed)); break; case Direction::Down: backRect.setSize(sf::Vector2f(backRect.getSize().x, (backRect.getSize().y) + (float)speed)); backRect.move(0, -(float)speed); break; case Direction::Left: backRect.setSize(sf::Vector2f((backRect.getSize().x) + (float)speed, backRect.getSize().y)); break; case Direction::Right: backRect.setSize(sf::Vector2f((backRect.getSize().x) + (float)speed, backRect.getSize().y)); backRect.move(-(float)speed, 0); break; default: //Will never execute since Direction is an enum break; } } void Snake::drawBody(sf::RenderWindow& window) { for (const SnakeRect& rect : body_) { window.draw(rect); } } In SnakeFood.h: #pragma once #include <SFML\Graphics.hpp> class Snake; class SnakeFood : public sf::RectangleShape { using RectangleShape::RectangleShape; public: SnakeFood(const sf::RenderWindow& window, const Snake& 
snake); bool isTouching(const Snake& snake); void setToRandomPosition(const sf::RenderWindow& window, const Snake& s); }; In SnakeFood.cpp: #include "SnakeFood.h" #include "Snake.h" #include <stdlib.h> #include <stdio.h> #include <time.h> SnakeFood::SnakeFood(const sf::RenderWindow& window, const Snake& snake) : RectangleShape{} { setSize(sf::Vector2f(15.f, 15.f)); setFillColor(sf::Color::Red); setToRandomPosition(window, snake); } void SnakeFood::setToRandomPosition(const sf::RenderWindow& window, const Snake& snake) { do { float xPosition, yPosition; xPosition = float(rand() % (window.getSize().x - int(getSize().x))); yPosition = float(rand() % (window.getSize().y - int(getSize().y))); setPosition(xPosition, yPosition); } while (isTouching(snake)); } bool SnakeFood::isTouching(const Snake& s) { for (const SnakeRect& rect : s.body_) { if (rect.getGlobalBounds().intersects(this->getGlobalBounds())) { return true; } } return false; } In HighScoreFile.h: #pragma once #include <fstream> class HighScoreFile { public: HighScoreFile(std::string fileName); int getHighScore(); void editHighScore(int score); private: const std::string highScoreFileName_; std::fstream highScoreFile_; }; In HighScoreFile.cpp: #include "HighScoreFile.h" HighScoreFile::HighScoreFile(const std::string fileName) : highScoreFileName_{fileName} { } int HighScoreFile::getHighScore() { highScoreFile_.open(highScoreFileName_, std::ios::in); if (!highScoreFile_){ exit(0); } int highScore{}; highScoreFile_ >> highScore; highScoreFile_.close(); return highScore; } void HighScoreFile::editHighScore(int score) { highScoreFile_.open(highScoreFileName_, std::ios::out, std::ios::trunc); if (!highScoreFile_) { exit(0); } highScoreFile_ << score; highScoreFile_.close(); } Answer: 1) Remove unused headers You don't need stdlib.h and stdio.h. These are C headers, and you'll rarely use them in C++ (and if you need to use them, use cstdlib and cstdio). 
Similarly, don't use time.h in C++; C++ provides much better functionality in the form of the chrono library. 2) Forward declarations You don't need to forward declare your classes, since you're already including them. 3) Random number generation Don't use srand and rand. These are C methods for random number generation, and truthfully aren't that random at all. Prefer using the random library provided by the STL. 4) Static Your use of static in the main method doesn't make sense, since main is not a function you will be calling repeatedly. 5) while(true) The while(true) doesn't make any sense; it's not doing anything. You can safely remove it from the code. 6) Don't use exit I suspect you're using exit because of the outer infinite loop; once you've removed the loop, you should use the window.close() method. This exits the game loop, and allows you to do any resource cleanup or post game-loop activity. 7) Separate simulation and render logic Your simulation and render logic are interspersed together. You first check if the snake is in contact with the food, then render the frame, and then check if the snake is biting itself. Ideally, you'd want the simulation and render logic grouped together, possibly as separate functions. 8) Use std::this_thread::sleep_for instead of _sleep. 9) Call sf::display only once per frame. You have multiple display calls per frame. You only want to call display once per frame, after you've sent all data to be displayed by using sf::draw. 10) playAgain playAgain can be consolidated into the main game loop, instead of running a separate infinite loop. Just something for you to look into. 11) Better error messages Suppose your newFont method cannot find the font. It just silently exits. The developer has no idea what happened. Instead, provide the developer with a complete error message explaining what failed. Something like "Unable to allocate font: <font_path>". This allows the developer to fix the issue. 
Better yet, have a backup font in case font allocation fails; this allows the game to run even if it can't find the font. 12) You don't need a break statement in the switch body if you're returning a value. 13) static data members in Snake The use of static data members in the Snake class binds all instances to a particular configuration for Snake. If I want to have multiple snakes (I don't know; maybe you're creating a local multiplayer version), each with different colors or thickness, I'm out of luck. Consider making them instance data members. 14) SnakeFood::isTouching() should be const. Similarly, Snake::isTouchingFood and Snake::isTouchingSelf should be const. 15) body.begin() + 0 is the same as body.begin(). 16) General advice One way you can improve your design is to have snake contain a simulate or update method, which simulate the snake i.e. moving, checking if out of bounds, check if eating the food or biting itself; then inside your game loop, you can simply do snake.simulate(), it's much cleaner code. Learn to use STL features, instead of C library features; the former is much more robust than the latter.
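The structure recommended in points 7 and 16 -- all game-state changes inside one simulate/update method, called once per frame before a single render pass -- can be sketched language-agnostically. An illustrative Python skeleton (all names and the toy scoring rule are hypothetical, not SFML code):

```python
class Snake:
    def __init__(self):
        self.length = 3
        self.score = 0

    def simulate(self, ate_food):
        # All game-state updates live here: moving, growing, collision
        # checks... (reduced to the scoring rule for this sketch).
        if ate_food:
            self.length += 1
            self.score += 1

def run(frames):
    snake = Snake()
    for ate_food in frames:
        snake.simulate(ate_food)   # 1) simulate the whole frame...
        # 2) ...then render: draw everything, one display() call.
    return snake.score

print(run([False, True, True]))    # 2
```

Keeping simulation and rendering in separate phases makes each frame's logic easier to follow and to test.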
{ "domain": "codereview.stackexchange", "id": 39112, "tags": "c++, game, snake-game, sfml" }
Is potential difference the difference in electric potential energy or electric potential?
Question: Referencing the book Physics for Scientist and Engineers, Ninth Edition, the book says that "Potential Difference should not be confused with Difference in Potential Energy." I also reviewed several internet sources that say "Potential Difference is the difference in Electric Potential Energy between two points." What is the difference between potential difference and a difference in potential energy? Answer: Technically "potential difference" is the difference in electrical potential, i.e. $\Delta V$, not the difference in electrical potential energy, $\Delta U$. Potential difference ($\Delta V$) is also called voltage, in certain contexts. However, many people and sources are sloppy about their terminology, and they will say just "potential" when they really mean potential energy. An expert could tell which is meant based on context - or, in some cases, that it doesn't matter. Since potential energy is related to potential by $U = qV$, if the charge $q$ is known and constant, you can usually say the same things about either quantity $U$ or $V$.
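The closing relation $U = qV$ can be checked with a toy calculation (all numbers are made up for illustration):

```python
q = 2.0                  # charge, in coulombs (hypothetical)
V_a, V_b = 12.0, 5.0     # electric potential at points a and b, in volts

delta_V = V_b - V_a      # potential difference ("voltage"), in volts
delta_U = q * delta_V    # potential-energy difference, via U = q*V, in joules

print(delta_V)           # -7.0
print(delta_U)           # -14.0
```

For a fixed, known charge, statements about $\Delta V$ and $\Delta U$ carry the same information, which is why the sloppy usage usually goes unnoticed.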
{ "domain": "physics.stackexchange", "id": 32136, "tags": "electrostatics, potential, potential-energy, voltage, definition" }
What's a trivial property?
Question: I have to show a property P is trivial. This problem has to do with Rice's Theorem, which I do not completely understand. Can someone explain the difference between trivial and non-trivial properties? Answer: A "trivial" property is one that holds either for all languages or for none.
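One way to see why trivial properties sit outside Rice's theorem: a trivial property is decided by a constant function, which never needs to examine the machine it is given. An illustrative Python sketch (the machine descriptions are just placeholder strings):

```python
def holds_for_all(machine_description):
    # e.g. the property "L(M) is a language" -- true of every machine
    return True

def holds_for_none(machine_description):
    # the complement: false of every machine
    return False

# Either constant function is a (trivially correct) decider,
# so Rice's theorem only concerns the non-trivial properties.
print(holds_for_all("M1"), holds_for_none("M1"))  # True False
```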
{ "domain": "cs.stackexchange", "id": 4688, "tags": "terminology, computability, rice-theorem" }
Printing every digit of a number
Question: Which of these three methods is better? import java.util.* ; public class anycode{ public static void print_digits3(int num){ //this method converts integer to string and parses it int digit = 0 ; String num_str = new Integer(num).toString() ; int counter = 0 ; while(counter != num_str.length()){ System.out.println("digit: " + num_str.charAt(counter) + "| real integer digit: " + Integer.parseInt(num_str.substring(counter,counter+1))) ; counter++ ; } } public static void print_digits4(int num){ //this method converts integer to array of chars and parses it char[]num_sequence_of_chars = new Integer(num).toString().toCharArray() ; for(int i = 0 ; i < num_sequence_of_chars.length ; i++){ System.out.println("digit :" + num_sequence_of_chars[i] ) ; } } public static void print_digits2(int num){ //this is the MATH method of parsing number - take every first digit div by // 1000,100,10,1 and update num - num% to get the rest //it s clunky, hard to read int digit = 0 ; Integer res_int = new Integer(num) ; int num_len = res_int.toString().length();// need to know if the number is 3-digit,4-digit etc //so we can know the power of 10 - 10^3, 10^2, 10^1 etc System.out.println("len of num : " + num_len) ; num_len-- ; double base = 0 ; int temp ; while( num_len >= 0){ temp = num ; base = Math.pow(10,num_len) ; num = num / (int)base ;// type recasting to int is a must, it does not work without it - no idea why System.out.println("num :" + num) ; num = (int) temp % (int)base ; num_len-- ; } } public static void print_digits(int num){ int digit = 0 ; while(num != 0 ){ digit = num%10 ; System.out.print(digit + "\t") ; num = num/ 10 ; } } public static void main(String args[]){ print_digits2(7658) ; System.out.println("--------------------------") ; print_digits3(7658) ; } } How can I rewrite the print_digits2? I want a more efficient MATH method of doing - no converting to string \ char[]. 
Answer: Admonition about sloppiness These functions all "work", but the workmanship is clearly sloppy in many ways. You have four functions, all printing slightly different output. If I extend main() to call all four functions… public static void main(String args[]){ print_digits(7658) ; System.out.println("--------------------------") ; print_digits2(7658) ; System.out.println("--------------------------") ; print_digits3(7658) ; System.out.println("--------------------------") ; print_digits4(7658) ; } The output is: 8 5 6 7 -------------------------- len of num : 4 num :7 num :6 num :5 num :8 -------------------------- digit: 7| real integer digit: 7 digit: 6| real integer digit: 6 digit: 5| real integer digit: 5 digit: 8| real integer digit: 8 -------------------------- digit :7 digit :6 digit :5 digit :8 The title of your question says you want to review "3 methods" — I assume you mean print_digits2(), print_digits3(), and print_digits4(), since they all print the most-significant digit first. It would have been nice to see all three functions print exactly the same output, and print_digits() print the same output in reverse. When the functionality isn't identical, it's hard to run a fair performance benchmark, for example. The functions appear in an unexpected order: print_digits3(), print_digits4(), print_digits2(), print_digits(). Logical code organization would help other programmers navigate the codebase. Code indentation is inconsistent. In print_digits2(), the closing brace of the while loop is misaligned. In print_digits3(), System.out.println() has an extra leading space. In print_digits(), the first two lines of the loop body are indented at the wrong level. In main(), everything is five levels too deep. Furthermore, two spaces of indentation per level is non-standard and insufficient for readability. Four spaces is the norm. Your naming violates capitalization conventions. 
The class should be capitalized as AnyCode, but would be better renamed DigitPrinterTest. Each function should be named like printDigits(). Variables such as num_str should be numStr instead. The unnecessary import java.util.* should be removed. print_digits3() int digit = 0 is superfluous, and should be removed. new Integer(num).toString() unnecessarily creates an Integer object. String.valueOf(num) or Integer.toString(num) would be better. You have a classic initialize–test–increment loop, which should be written as a for loop for compactness and recognizability. I don't understand why you do Integer.parseInt(num_str.substring(counter,counter+1))). Do you not trust that it produces the same result as .charAt(counter)? Code cleanup: public static void printDigits3(int num) { String numStr = Integer.toString(num); for (int i = 0; i < numStr.length(); i++) { System.out.println("digit: " + numStr.charAt(i)); } } print_digits4() Not too bad. I suggest renaming the array to digits — short and sweet. (Well, technically, it could also contain a negative sign.) public static void printDigits4(int num) { char[] digits = Integer.toString(num).toCharArray(); for (int i = 0; i < digits.length; i++) { System.out.println("digit: " + digits[i]); } } print_digits2() int digit = 0 is superfluous. You're still stringifying the number, which is, depending on your point of view, either cheating or silly (because you're discarding the work already done by .toString()). Assuming that num is positive, the length is 1 + (int)Math.log10(num). Since base and temp are only used inside the loop, they should be declared inside the loop. You might as well make base an int instead of a double. Promotion from int to double is done automatically as necessary (JLS Sec 5.1.2: Widening Primitive Conversion). However, when you want to treat a double as an int, you need an explicit cast to tell the compiler that you know what you're doing, since doubles can represent numbers that int cannot. 
First rewrite: public static void printDigits2(int num) { for (int place = (int)Math.log10(num); place >= 0; place--) { int temp = num; int base = (int)Math.pow(10, place); num /= base; System.out.println("digit: " + num); num = temp % base; } } If you stare at that for a while, you'll find that you can do away with temp. public static void printDigits2(int num) { for (int place = (int)Math.log10(num); place >= 0; place--) { int base = (int)Math.pow(10, place); System.out.println("digit: " + num / base); num %= base; } } This function fails for nonpositive numbers (as did your print_digits2()), so you should add either validation, JavaDoc to declare that limitation, or code to handle it gracefully. Another suggestion You should consider using recursion as well, since it lets you write a very compact implementation: public static void printDigitsRecursive(int num) { if (num >= 10) { printDigitsRecursive(num / 10); } System.out.println("digit: " + num % 10); } This also fails for negative inputs, so again there is some remaining work for you to do. Performance The recursive solution is comparable in speed to printDigits2(). (The recursive solution is faster for smaller numbers; the log-and-pow solution is faster for larger numbers.) Both of the stringification solutions have about the same performance. The mathematical solutions beat the stringification solutions by about 30%. That makes sense, since the strings need to be allocated and garbage-collected. To be fair, stringification solves the problem with complete generality, including support for negative numbers.
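The two arithmetic approaches above (the log-and-pow rewrite and the recursive one) translate directly to other languages. A sketch in Python, again assuming positive inputs, returning the digits most-significant-first:

```python
import math

def digits_math(num):
    # log-and-pow approach: peel off the leading digit at each place value
    result = []
    for place in range(int(math.log10(num)), -1, -1):
        base = 10 ** place
        result.append(num // base)
        num %= base
    return result

def digits_recursive(num):
    # recursive approach: emit the leading digits first, then num % 10
    if num >= 10:
        return digits_recursive(num // 10) + [num % 10]
    return [num]

print(digits_math(7658))       # [7, 6, 5, 8]
print(digits_recursive(7658))  # [7, 6, 5, 8]
```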
{ "domain": "codereview.stackexchange", "id": 7680, "tags": "java, strings, parsing, integer, comparative-review" }
Making a scroll function more efficient in jQuery
Question: I have the following code, though I'm not sure it is as efficient as it could be. $(window).scroll(function() { var scrollT = $(document).scrollTop(); if (scrollT >= 180) { $("#primary-nav-wrapper").addClass("scroll"); $("#primary-nav-wrapper li.front").addClass("active"); $("#primary-nav-wrapper .search-wrapper").removeClass("active"); $("#primary-nav-wrapper .search-field").blur(); } else { $("#primary-nav-wrapper").removeClass("scroll"); $("#primary-nav-wrapper .search-wrapper, #primary-nav-wrapper li.front").removeClass("active"); $("#primary-nav-wrapper .search-field").blur(); } if (scrollT >= 400) {$("a#to-top-link").addClass("active");} else {$("a#to-top-link").removeClass("active");} }); Basically, what I do here is check two if-clauses every time I scroll, but is it more resource-friendly to only check every few milliseconds? If so, how is this done? Or is it a better idea to re-write the if-else structure, e.g. if.. else if... else if... else? Answer: I don't know if you mean general performance, but one thing I've heard helps a lot is to put all of your different selectors in variables outside of the function as it is apparently pretty expensive to make the selector calls again and again. 
I mean something like this (and I do mean "something like this" because I haven't tested even the basics, but the general idea is described here: http://www.artzstudio.com/2009/04/jquery-performance-rules/#cache-jquery-objects) var scroll = { primary : $("#primary-nav-wrapper"), front : $("#primary-nav-wrapper li.front"), search_w : $("#primary-nav-wrapper .search-wrapper"), search_f : $("#primary-nav-wrapper .search-field"), to_top : $("a#to-top-link") } $(window).scroll(function() { var scrollT = $(document).scrollTop(); if (scrollT >= 180) { scroll.primary.addClass("scroll"); scroll.front.addClass("active"); scroll.search_w.removeClass("active"); scroll.search_f.blur(); } else { scroll.primary.removeClass("scroll"); scroll.search_w.removeClass("active"); scroll.front.removeClass("active"); scroll.search_f.blur(); } if (scrollT >= 400) {scroll.to_top.addClass("active");} else {scroll.to_top.removeClass("active");} });
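The asker's other idea -- only reacting every few milliseconds -- is commonly called throttling, and the mechanism is language-independent. An illustrative sketch in Python (not jQuery; in a browser you would wrap the scroll handler the same way):

```python
import time

def throttle(interval_s):
    """Decorator that drops calls arriving within interval_s of the
    last call that actually ran -- the 'only check every few ms' idea."""
    def decorate(fn):
        last = [float("-inf")]            # mutable cell for the closure
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            if now - last[0] >= interval_s:
                last[0] = now
                return fn(*args, **kwargs)
        return wrapper
    return decorate

calls = []

@throttle(0.05)
def on_scroll(pos):
    calls.append(pos)

for i in range(5):
    on_scroll(i)      # fired back-to-back: only the first one runs
time.sleep(0.06)
on_scroll(99)         # enough time has passed: runs again
print(calls)          # [0, 99]
```

Throttling trades a little responsiveness for far fewer handler executions during rapid scrolling.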
{ "domain": "codereview.stackexchange", "id": 11865, "tags": "javascript, jquery" }
How to form and minimise custom features for classification in supervised learning
Question: I am having trouble understanding how to form features based on a particular math formula, and how to adjust the corresponding weights. The aim is to draw an ellipse for each unique category of points. Each ellipse should keep inside it as many as possible of the selected type of dots, and as few as possible of the other types (in the attached picture, the selected yellow ones). Here is a formula, with 5 variables, based on which we can draw any ellipse: https://www.desmos.com/calculator/65hlgysuy7 I can build a model and calculate, based on the data set, whether a particular point is within the ellipse (score less than 0) or outside it, and create an accuracy-based loss function, or just use cross entropy as the loss function. Then I could find the derivative of the cross entropy, but what's next? How, basically, could I adjust the weights in such a case? The big question is how I could manipulate those variables (in the link: a, b, c, l and h) to form different models, and optimize them with gradient descent (if that's the appropriate technique for such a task)? Can someone please explain the logic of how custom features can be formed in such a case and the weights found with gradient descent? Answer: If you want to work with ellipses, you may indeed tackle the problem through an optimization procedure. The gradient-descent algorithm consists in computing gradients around the current solution (the set (a, b, c, l, h)) and changing these values depending on the derivatives. It could work, but could also get stuck in local minima; a classic solution is to run the algorithm multiple times, with different starting points. There are other, more efficient but also more complex, optimization algorithms (genetic algorithms, simulated annealing) that you may need to look at if gradient descent performs badly. Now, with this kind of problem, there may be more efficient techniques. As already stated in quester's answer, the most intuitive solution would be support vector machines. 
But the ellipse shape you're trying to draw makes me think of Gaussian Mixture Models. This is because the iso-probability contours of a Gaussian multivariate distribution are ellipsoids. See an explanation of how it works in the sklearn documentation. In a more general approach, you could replace the optimization algorithm by the Expectation-Maximization procedure (which is the one implemented in sklearn's Gaussian Mixture Models). Once a Gaussian bivariate distribution has been fitted on the data, you can retrieve the equation of the ellipses from the distribution means and covariances. I don't have the means to test the following code right now, but you should be able to adapt it and make it work. Starting from a sklearn GaussianMixture named gmm, the ellipse coefficients as in your example can be retrieved from each component with: # i would be the i-th cluster b = gmm.means_[i][0] c = gmm.means_[i][1] covs = gmm.covariances_[i] # cst will allow sizing the ellipse, depending on the targeted confidence interval l = np.sqrt(covs[0,0]) * cst h = np.sqrt(covs[1,1]) * cst eigval, eigvec = np.linalg.eig(covs) try: a = -np.arctan(eigvec[1, 0]/eigvec[0, 0]) except ZeroDivisionError: a = 0 The definition of a in your example is strange, so I'm not quite sure of the calculation of a, but adding an integer multiple of $\pi/2$ to that value and/or switching its sign should do the job.
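As a quick numerical sanity check of the extraction formulas above, here is a sketch using a hand-built mean and covariance in place of a fitted gmm's means_ and covariances_ attributes (the values and cst are made up, and np.arctan2 is used instead of the try/except around np.arctan):

```python
import numpy as np

# Stand-ins for gmm.means_[i] and gmm.covariances_[i] of one component
mean = np.array([2.0, -1.0])
cov = np.array([[4.0, 0.0],
                [0.0, 1.0]])  # axis-aligned, so no rotation is expected
cst = 2.0                     # scales the ellipse to a confidence region

b, c = mean                   # ellipse center
l = np.sqrt(cov[0, 0]) * cst  # semi-axis along x: 2 * 2 = 4
h = np.sqrt(cov[1, 1]) * cst  # semi-axis along y: 1 * 2 = 2
eigval, eigvec = np.linalg.eig(cov)
a = -np.arctan2(eigvec[1, 0], eigvec[0, 0])  # rotation angle, here 0
```

For this diagonal covariance, np.linalg.eig returns the standard basis vectors, so the recovered angle is zero and the semi-axes are just the scaled standard deviations.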
{ "domain": "datascience.stackexchange", "id": 6372, "tags": "machine-learning, classification, gradient-descent" }
Commutation relation under an arbitrary Lie algebra representation
Question: This is an exercise in Woit's book, B9, Problem 2: For the case of the Euclidean group $E(2)$, show that in any representation $\pi'$ of its Lie algebra, there is a Casimir operator $$ |\vec{P}|^2 = \pi'(p_1)\pi'(p_1) +\pi'(p_2)\pi'(p_2) $$ that commutes with all the Lie algebra operators $\pi'(p_1), \pi'(p_2), \pi'(l)$. I have a couple of doubts regarding this computation. Suppose we want to prove that it commutes with $\pi'(p_1)$. One has to compute $$ [\pi'(p_1)\pi'(p_1) +\pi'(p_2)\pi'(p_2),\pi'(p_1)] = 0, $$ but since $\pi'(\cdot)$ is arbitrary, I would like to "use" the definition of a Lie algebra representation: A Lie algebra representation $(\phi,V)$ of a Lie algebra $\mathfrak{g}$ on an $n$-dimensional complex vector space $V$ is given by a real-linear map $$\phi: X\in \mathfrak{g}\rightarrow \phi(X) \in \mathfrak{gl}(n,\mathbb{C})$$ satisfying $$\phi([X,Y]) = [\phi(X),\phi(Y)].$$ Of course this definition is for finite-dimensional vector spaces (it is the one in my book), but I can still use it in my case (e.g. the Schrödinger representation is still unitary and fulfills the commutator relation above), right? If this holds, then the bracket becomes $$ \pi'([p_1^2+p_2^2,p_1]), $$ where the Lie bracket is now the Poisson bracket, and thus the result is $\pi'(0)$. Is the process above valid? Is it $\pi'(0)=0$? Answer: Is the process above valid? Is it $\pi'(0)=0$? Yes, it is. The phase-space origin in your ill-met infinite-dimensional representations maps to the trivial zero operator in Hilbert space, just like the zero matrix for the 3×3 matrices provided. Specifically, you have the E(2) brackets, $$ [\pi'(p_1),\pi'(p_2)]=0, \qquad [\pi'(l),\pi'(p_1)]=\pi'(p_2), \qquad [\pi'(l),\pi'(p_2)]=-\pi'(p_1), $$ where your Casimir works; but it is not in the Lie algebra.
In point of fact, your target bracket can also be computed by inspection as a plain commutator of the Schroedinger realization, $$ \pi'(l)= -q_1\partial/\partial q_2 + q_2\partial/\partial q_1 ,\qquad \pi'(p_1)=-\partial/\partial q_1, \qquad \pi'(p_2)=-\partial/\partial q_2, \\ \pi^{'~ 2}(p_1)+ \pi^ {'~ 2}(p_2)= \frac{\partial^2 }{\partial q_1^2}+ \frac{\partial^2 }{\partial q_2^2}~~. $$ Note the last line is not in the Lie algebra, but it is still "represented" faithfully by this "crypto-quantization" realization.
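The Schroedinger-realization statement can be checked symbolically. A sketch with sympy (assuming sympy is available), verifying both the E(2) brackets above and that the Casimir commutes with all three operators when applied to an arbitrary function:

```python
import sympy as sp

q1, q2 = sp.symbols('q1 q2')
f = sp.Function('f')(q1, q2)  # arbitrary test function

# Schroedinger realization from the answer
P1 = lambda g: -sp.diff(g, q1)
P2 = lambda g: -sp.diff(g, q2)
L = lambda g: -q1*sp.diff(g, q2) + q2*sp.diff(g, q1)
Cas = lambda g: P1(P1(g)) + P2(P2(g))  # |P|^2, i.e. the Laplacian

comm = lambda A, B, g: sp.expand(A(B(g)) - B(A(g)))

# the E(2) brackets
assert comm(P1, P2, f) == 0
assert comm(L, P1, f) == P2(f)
assert comm(L, P2, f) == -P1(f)

# the Casimir commutes with every generator, hence with the whole algebra
for X in (P1, P2, L):
    assert comm(Cas, X, f) == 0
```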
{ "domain": "physics.stackexchange", "id": 84518, "tags": "group-theory, representation-theory, commutator, lie-algebra" }
Allocate Resource Effectively
Question: I have a module called printer allocator (PrinterAllocator) which will allocate the next available printer to the requester. import java.util.ArrayList; import java.util.Arrays; import java.util.BitSet; import java.util.List; public class PrinterAllocator { private static final int TOTAL_AVAILABLE_PRINTERS = 25; private BitSet bitSet = new BitSet( TOTAL_AVAILABLE_PRINTERS ); public PrinterAllocator() { List<Integer> unassignedPriters = new ArrayList<>( Arrays.asList( 1, 2, 3, 24 ) ); // but this array will be fetched from database unassignedPriters.forEach( printerId -> bitSet.set( printerId - 1 ) ); } public int getNextPrinter() { int index = bitSet.nextClearBit( 0 ) + 1; bitSet.set( index ); return index + 1; } public boolean hasEnoughPrinters(int printersNeeded){ /* point 1*/ return bitSet.cardinality()+printersNeeded<=TOTAL_AVAILABLE_PRINTERS; } } Point 1: I am unable to replace 'point 1' with the following code return bitSet.cardinality()+printersNeeded<=bitSet.size() (or) return bitSet.cardinality()+printersNeeded<=bitSet.length(); because bitSet.size() and bitSet.length() have different meanings, that is size() != length() != TOTAL_AVAILABLE_PRINTERS. Any better idea to implement PrinterAllocator with/without BitSet? Answer: Why do you use BitSet? You're not saving kilobytes of memory by representing your printers as a set of bits. Go for code clarity and represent each printer with a dedicated instance of a Printer class that holds the allocation status of the printer in question. Store them in a List or other data structure that suits your needs. You could even have two LinkedLists for allocated and free printers and just pop the first from the free list and push it to the end of the allocated list. You need to synchronize anyway. No need to count sizes to see if there are available printers. Just check isEmpty(). bitSet is a bad name for printer reservation status. Java is a strongly typed language so you don't need to repeat the type in the field name.
Use printerAllocationStatus if you really need to use a BitSet (see above). You're stacking on responsibilities by implementing hasEnoughPrinters in your allocator. Instead implement getNumberOfFreePrinters and let the caller decide what to do with the information. The method is of limited use anyway, since printer allocation status might change right after the caller has checked for availability, making the data the caller has completely useless. getNextPrinter does not communicate the side effect of changing printer allocation status in its name. Use allocatePrinter instead. Use consistent terminology. In a PrinterAllocator a printer is allocated or free, not assigned.
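The two-queue design suggested above can be sketched as follows (a Python illustration of the idea, not a rewrite of the reviewed Java; printer objects are plain ints here):

```python
from collections import deque

class PrinterAllocator:
    def __init__(self, printer_ids):
        self.free = deque(printer_ids)   # pop from the front ...
        self.allocated = deque()         # ... push to the back

    def allocate_printer(self):
        # name communicates the side effect, unlike getNextPrinter
        if not self.free:                # just check emptiness, no counting
            raise RuntimeError("no free printers")
        printer = self.free.popleft()
        self.allocated.append(printer)
        return printer

    def release_printer(self, printer):
        self.allocated.remove(printer)
        self.free.append(printer)

    def number_of_free_printers(self):
        # report the count; the *caller* decides what is "enough"
        return len(self.free)

alloc = PrinterAllocator([1, 2, 3, 24])
print(alloc.allocate_printer())          # 1
print(alloc.number_of_free_printers())   # 3
```

In real code the methods would also need synchronization, as the answer points out.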
{ "domain": "codereview.stackexchange", "id": 33884, "tags": "java, bitset" }
Magnitude of intermodulation distortion products?
Question: Assuming I have a set of frequencies $\mathbf F$, and I pick 2 frequencies $f_1$ and $f_2$ randomly from the set, generate sine waves for them, and pass the signals through a non-linear distortion process while adding them. Is there any way for the harmonics or intermodulation distortion products to have a magnitude larger than the magnitude of any of the two frequencies, when calculating an FFT? If yes, under what conditions, and how can I avoid that this happens? Answer: Certainly the amplitude of a harmonic can exceed the amplitude of the fundamental, or the amplitude of an intermodulation product can exceed the amplitudes of the individual signals. But this occurs when the system is so nonlinear that it would make no sense to use it as an approximation to a linear system. Indeed, the very fact that such a nonlinear system is being used (intentionally) means that it has some properties that are useful in the application at hand. Example: If the system is a squarer that produces output $x^2(t)$, then with $x(t) = \cos(2\pi f_c t)$, the output is entirely the second harmonic (and a DC term). Since the fundamental is entirely absent from the output, the harmonic amplitude is infinitely larger than the fundamental. If two sinusoidal inputs of frequencies $f_1$ and $f_2$ are added together before the squaring operation, then the output has second harmonics at frequencies $2f_1$ and $2f_2$, intermodulation distortion products at frequencies $f_1+f_2$ and $|f_1-f_2|$, and no fundamentals at frequencies $f_1$ and $f_2$. Moral: Don't use a squarer and pretend it is a linear addition circuit or linear amplifier. If you are knowingly using a squarer, it is for some other useful properties (e.g. frequency doubler) of the circuit that are important to you right now.
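The squarer example is easy to reproduce numerically. A sketch with numpy (the sample rate and frequencies are made up, chosen so every component lands on an exact FFT bin):

```python
import numpy as np

fs, n = 1000, 1000                 # 1 s of data -> 1 Hz bin spacing
t = np.arange(n) / fs
f1, f2 = 50, 120
x = np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t)
y = x**2                           # the nonlinear (squaring) system

mag = np.abs(np.fft.rfft(y)) / n   # normalized output spectrum

# harmonics and intermodulation products are present ...
for f in (2*f1, 2*f2, f1 + f2, f2 - f1):
    assert mag[f] > 0.1
# ... while the fundamentals are entirely absent
for f in (f1, f2):
    assert mag[f] < 1e-9
```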
{ "domain": "dsp.stackexchange", "id": 1350, "tags": "fft, frequency-spectrum, frequency, dft" }
How is this grammar LL(1)?
Question: This is a question from the Dragon Book. This is the grammar: $S \to AaAb \mid BbBa $ $A \to \varepsilon$ $B \to \varepsilon$ The question asks how to show that it is LL(1) but not SLR(1). To prove that it is LL(1), I tried constructing its parsing table, but I am getting multiple productions in a cell, which is a contradiction. Please explain how this is LL(1), and how to prove it. Answer: First, let's give your productions a number. 1 $S \to AaAb$ 2 $S \to BbBa$ 3 $A \to \varepsilon$ 4 $B \to \varepsilon$ Let's compute the first and follow sets first. For small examples such as these, using intuition about these sets is enough. $$\mathsf{FIRST}(S) = \{a, b\}\\ \mathsf{FIRST}(A) = \{\}\\ \mathsf{FIRST}(B) = \{\}\\ \mathsf{FOLLOW}(A) = \{a, b\}\\ \mathsf{FOLLOW}(B) = \{a, b\}$$ Now let's compute the $LL(1)$ table. By definition, if we don't get conflicts, the grammar is $LL(1)$. a | b | ----------- S | 1 | 2 | A | 3 | 3 | B | 4 | 4 | As there are no conflicts, the grammar is $LL(1)$. Now for the $SLR(1)$ table. First, the $LR(0)$ automaton. $$\mbox{state 0}\\ S \to \bullet AaAb\\ S \to \bullet BbBa\\ A \to \bullet\\ B \to \bullet\\ A \implies 1\\ B \implies 5\\ $$$$\mbox{state 1}\\ S \to A \bullet aAb\\ a \implies 2\\ $$$$\mbox{state 2}\\ S \to Aa \bullet Ab\\ A \to \bullet\\ A \implies 3\\ $$$$\mbox{state 3}\\ S \to AaA \bullet b\\ b \implies 4\\ $$$$\mbox{state 4}\\ S \to AaAb \bullet \\ $$$$\mbox{state 5}\\ S \to B \bullet bBa\\ b \implies 6\\ $$$$\mbox{state 6}\\ S \to Bb \bullet Ba\\ B \to \bullet\\ B \implies 7\\ $$$$\mbox{state 7}\\ S \to BbB \bullet a \\ a \implies 8\\ $$$$\mbox{state 8}\\ S \to BbBa \bullet \\ $$ And then the $SLR(1)$ table (I assume $S$ can be followed by anything). a | b | A | B | --------------------------- 0 | R3/R4 | R3/R4 | 1 | 5 | 1 | S2 | | | | 2 | R3 | R3 | 3 | | 3 | | S4 | | | 4 | R1 | R1 | | | 5 | | S6 | | | 6 | R4 | R4 | | 7 | 7 | S8 | | | | 8 | R2 | R2 | | | There are conflicts in state 0, so the grammar is not $SLR(1)$. 
Note that if $LALR(1)$ was used instead, then both conflicts would be resolved correctly: in state 0 on lookahead $a$, $LALR(1)$ would take R3, and on lookahead $b$ it would take R4. This gives rise to the interesting question of whether there is a grammar that is $LL(1)$ but not $LALR(1)$; such grammars do exist, but an example is not easy to find.
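The LL(1) table above can be driven directly by a standard predictive parser; a small Python sketch (stack-based, with '$' as an end marker):

```python
# LL(1) table from the answer: (nonterminal, lookahead) -> production body
TABLE = {
    ('S', 'a'): ['A', 'a', 'A', 'b'],   # production 1
    ('S', 'b'): ['B', 'b', 'B', 'a'],   # production 2
    ('A', 'a'): [], ('A', 'b'): [],     # production 3: A -> epsilon
    ('B', 'a'): [], ('B', 'b'): [],     # production 4: B -> epsilon
}

def parse(s):
    stack, i = ['S'], 0
    while stack:
        top = stack.pop()
        look = s[i] if i < len(s) else '$'
        if top in ('a', 'b'):            # terminal: must match the input
            if top != look:
                return False
            i += 1
        elif (top, look) in TABLE:       # nonterminal: expand via the table
            stack.extend(reversed(TABLE[top, look]))
        else:
            return False                 # empty table cell: reject
    return i == len(s)

print(parse('ab'), parse('ba'))   # True True
print(parse('aa'), parse(''))     # False False
```

The grammar generates exactly the two strings ab and ba, and the parser accepts precisely those.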
{ "domain": "cs.stackexchange", "id": 753, "tags": "formal-grammars, compilers, parsers" }
Expression for Potential energy of a hanging mass
Question: If the acceleration due to gravity is $g$ and a mass $m$ is hanging from a fixed support with a thread of length $l$, then its potential energy ($U$) is given by: $$U = -mgl;$$ This was stated by my instructor. However, the potential energy must be: $$U = mgh;$$ where $h$ represents the vertical height above ground. Why is the PE stated in terms of distance from the fixed support rather than in terms of height above the ground? How does that make sense? Answer: Potential energy is never an absolute value - it is always measured relative to some base configuration or point that is assigned zero potential energy. And the location of this base point is arbitrary. So if we take the base point as being at ground level then the mass is a distance $h$ above the base point and its potential energy relative to the ground is $mgh$. However, if we take the base point as being at the level of the support then the mass is a distance $l$ below this point and its potential energy relative to this base point is $-mgl$. When solving problems we are only ever interested in the difference in potential energy between two different positions or configurations, so the choice of a base point for PE makes no difference to the final result. A good choice of base point may, however, make calculations simpler - for example, if you chose a base point on the opposite side of the Earth then you would be making the problem more complicated than it needs to be.
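A quick numerical illustration of the base-point argument (all numbers made up):

```python
m, g, l = 2.0, 9.8, 1.5       # mass hanging 1.5 m below the support
support_height = 3.0          # support 3 m above the ground
h = support_height - l        # height of the mass above the ground

U_ground = m*g*h              # zero of PE chosen at ground level
U_support = -m*g*l            # zero of PE chosen at the support

# raise the mass by 0.5 m: both conventions give the same PE *difference*
dU_ground = m*g*(h + 0.5) - U_ground
dU_support = -m*g*(l - 0.5) - U_support
assert abs(dU_ground - dU_support) < 1e-9   # both equal m*g*0.5
```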
{ "domain": "physics.stackexchange", "id": 97688, "tags": "newtonian-mechanics, reference-frames, coordinate-systems, potential-energy, conventions" }
Why F-theory picks Calabi-Yau manifolds as backgrounds?
Question: Why does F-theory pick Calabi-Yau manifolds as backgrounds? Is there a similar argument like the one in heterotic/IIA,B which singles out Calabi-Yau manifolds based on the requirement of space-time supersymmetry? If there is no 12-dimensional supergravity (hence no 12-dim SUSY variations), then how can one show that the solution of the Killing-spinor equations chooses Calabi-Yau manifolds? Answer: One of the points of F-theory is that it may be imagined to be a 12-dimensional theory – however one in which two dimensions are compactified on a tiny, infinitesimal two-torus. But the supersymmetry generators are exactly those that are fully compatible with the 12-dimensional interpretation – after all, all "type IIB supercharges" in F-theory transform as a chiral spinor in 12 dimensions. So the logic of the proof that the background is Calabi-Yau is really the same. The "Calabi-Yau 4-folds" of F-theory don't really have all the moduli because the two directions among the 12 are infinitesimal. But one may show that the complex structure may be defined just like for generic Calabi-Yaus. Alternatively, you may just construct an analogous proof directly for F-theory. But the essence will be analogous. The holonomy group has to be restricted analogously.
{ "domain": "physics.stackexchange", "id": 37616, "tags": "string-theory, differential-geometry, supersymmetry, compactification" }
Will a black body placed somewhere around the Sun obtain (eventually) the same temperature as the Sun?
Question: Suppose we look (above the Earth's atmosphere) at the wavelength ($=\frac c f)$ spectrum emitted by the Sun: This shows that the Sun is approximately a black body with a temperature of about $5525(K).$ Now if we place a black body at a distance $l$ from the Sun, will the radiation coming from the Sun (after a while, depending on $l$) cause this black body to have a temperature of $5525(K)$ too (by means of the blackbody radiation corresponding to the Sun), or will this only happen when the black body is completely surrounded by a material at $5525(K)$ (implying that the black body we place somewhere around the Sun is in a state of dynamical equilibrium)? Somewhat like a thermometer put in interstellar space will show a temperature of $2.7(K)$ because it's surrounded on all sides by the CMBR. Answer: Assuming the black body doesn't produce heat and we don't focus radiation towards it, it needs to be completely surrounded by a material at 5525K and reach thermal equilibrium to have a temperature of 5525K. If it is instead somewhere around the Sun it will be in a dynamic equilibrium, with a temperature lower than 5525K. If it has a non-infinite thermal conductivity, its "dark" side will be even colder. The reason for this dynamic equilibrium is that idealized black bodies, in addition to being perfect absorbers, are also perfect emitters. This might raise some eyebrows at first, but it actually makes sense when you think about it, keeping in mind the second law of thermodynamics. If it was absorbing without emitting, it would just gather energy in one place, reducing the entropy. Of course I'm talking about reasonable time scales. You can go overboard and claim it will eventually reach the Sun's temperature some time during the heat death of the universe, but that is more about the Sun dying and cooling down than it is about the Sun heating up our black body.
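For scale, the dynamic-equilibrium temperature of a small, rapidly rotating (i.e., effectively isothermal) black sphere at distance $d$ follows from balancing absorption over its cross-section against emission over its full surface. This standard estimate is an addition of mine, not part of the answer above:

```python
import math

T_sun = 5778.0   # K, effective temperature of the Sun
R_sun = 6.96e8   # m, solar radius
d = 1.496e11     # m, example distance: 1 astronomical unit

# balance: pi r^2 * sigma T_sun^4 * (R_sun/d)^2 = 4 pi r^2 * sigma T_eq^4
T_eq = T_sun * math.sqrt(R_sun / (2*d))
print(round(T_eq))   # 279 K at 1 au -- far below the Sun's temperature
```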
{ "domain": "physics.stackexchange", "id": 55092, "tags": "radiation, thermal-radiation, sun" }
A way to express an LTL variant to enforce a stream of data to satisfy some linear time logic
Question: Linear Time Logic (LTL) is used for system verification. In my case, I am investing some time to see the feasibility of using LTL to enforce a constraint on a stream of data. Enough generalities; let's take a simple example: The operator UNTIL in the expression u Until v in LTL means event u until v. It is a general formula that an infinite number of signal traces could satisfy; see its definition here: page4. Like: u,u,u,v,v,v,... u,u,u,u,u,u,... u,v,v,v,v,v,... In my case, I want to enforce an LTL-like formula on a system receiving a stream of data. Again, let's take the same operator Until. Let's say we have two input signals, one for constant u, and one for constant v. u,u,u,u,u,u,... , , , ,v,v,... The stream processor taking these inputs, if it is an "UNTIL*" node, would output: u,u,u,u,v,v,... The reason I differentiate UNTIL with an asterisk is the whole point of the question: "u UNTIL* v" is only true when v is taken as output as soon as it appears in the second stream; it is one single trace satisfying "u UNTIL* v" given our input signals. How can this constraint be expressed? LTL seems very general for this "constraint enforcing mechanism". note: Please bear with me, I am no computer scientist, nor a mathematician; I am an average programmer who tries to learn new things. Answer: This is an interesting question. It is not directly an LTL (Linear Temporal Logic) question -- rather a question about whether there is an algorithm or tool that takes an input stream and modifies it in a somewhat minimal way to satisfy a given LTL property. Whether what you want can be done or has been done depends on what exactly the allowed modifications to the stream are. In your example, you are, in a sense, remixing two streams. Note that LTL is defined over alphabets in which every character can have multiple propositions that are TRUE at the same time. 
So in your example, you could have just mixed the u and v stream together to obtain: u,u,u,u,{u,v},{u,v},.... This would have satisfied the LTL formula as well. If the way in which you remix is that the proposition in the output stream is always a subset of the propositions set in the input stream, then you can use reactive synthesis to obtain a transducer that does the stream mixing/fixing. However, this will only work if for every output stream there actually is a way to perform the mix. If there is some quantitative notion of mixing/fixing, then quantitative synthesis may be a research area that may have some results. But you would need a description of the precise stream modification optimization criterion to see if any of the results from that area are applicable.
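The UNTIL*-style remixing from the question can be sketched as a tiny stream transducer; None marks the empty slots of the second stream (an encoding assumption of mine):

```python
def until_star(us, vs):
    """Emit u until the v stream starts producing values, then emit v."""
    switched = False
    for u, v in zip(us, vs):
        if v is not None:
            switched = True
        yield v if switched else u

us = ['u'] * 6
vs = [None] * 4 + ['v'] * 2
print(list(until_star(us, vs)))   # ['u', 'u', 'u', 'u', 'v', 'v']
```

This produces exactly the single trace the question calls "u UNTIL* v" for the given inputs.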
{ "domain": "cs.stackexchange", "id": 16529, "tags": "linear-temporal-logic, streaming-algorithm" }
Fine-grained complexity of 3-CNF formula evaluation
Question: It's well known that 3-SAT is in NP, which means that one can evaluate a 3-CNF formula in polynomial time. However, I was wondering what the tightest upper bound is for formula verification, expressed in big-O notation or some other way. My best guess is that 3-CNF formulas can be verified in linear time in the number of clauses. Each clause has two disjunctions and up to three negations, which should each require one step to evaluate. If you count replacing a literal with its assigned value as a step, that gives you at most 8 steps per clause. Then, once all the clauses are evaluated, you should need [# of clauses] - 1 steps to compute all the conjunctions between them. So with $x$ clauses, that's at most $8x + x - 1 = 9x - 1$ steps total, which is $O(x)$. Obviously, if I'm wrong about that, please let me know, and if there are even better time bounds (either theoretical or practical), I would love to hear about those as well. Answer: Yes, you are correct. Your analysis is fine. It takes linear time. It's also easy to see that it's not possible to do better than linear time: any algorithm will have to read the entire input (in the worst case), which itself takes linear time.
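The linear-time argument is easy to make concrete. A sketch in Python, using DIMACS-style clauses (positive integer = variable, negative = negated variable; one pass over the clauses, constant work per literal):

```python
def eval_cnf(clauses, assignment):
    """Evaluate a CNF formula in time linear in the total number of literals."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2 or x3) and (not x1 or x2 or not x3)
clauses = [(1, -2, 3), (-1, 2, -3)]
print(eval_cnf(clauses, {1: True, 2: True, 3: False}))   # True
print(eval_cnf(clauses, {1: True, 2: False, 3: True}))   # False
```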
{ "domain": "cs.stackexchange", "id": 13412, "tags": "time-complexity, satisfiability, decision-problem, 3-sat, sat-solvers" }
Effect of Charon on Pluto
Question: Even though Pluto is not known as a planet anymore, theoretically it has/had a moon, called Charon. I've heard that their sizes are so close to each other that while Charon rotates around Pluto, it also rotates around a central point. How does this work exactly? Answer: Mathematically, the motion of the Pluto-Charon system can be decomposed into two parts: the motion of Pluto-Charon about the Sun, and the motion of Pluto and Charon about one another. If one sets the reference point to be the center of Pluto, the path the Pluto-Charon system would appear to follow about the Sun would be an epicycle, which is a far more complicated means of describing the pair's trajectory through space. By instead setting this reference point to be the barycenter, the trajectory followed by the Pluto-Charon system is conveniently a Keplerian ellipse. Charon is 11.6% the mass of Pluto, and is (on average) 19,571 km away from Pluto (Source 1, 2). The barycenter is the point at which the masses of the two bodies "balance", and for the Pluto-Charon system it lies at a distance of $$\frac{m_{\text{Charon}}}{m_{\text{Pluto}}+m_{\text{Charon}}}\times\text{distance}=\frac{0.116}{1.116}\times\text{distance}=0.104\times 19,571\,\text{km}=2,034\,\text{km}$$ from the center of Pluto. As Pluto is only 1,153 km in radius, the barycenter lies ~900 km above its surface. Pluto is the only known (minor or major) planet for which this is the case, although the Earth-Moon system will likely satisfy this criterion billions of years in the future (see Is the Moon a Planet? on Physics SE). There are some instances of binary asteroids, however, for which the barycenter lies outside the surface of both bodies.
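The numbers in the answer can be reproduced directly:

```python
m_ratio = 0.116        # Charon's mass as a fraction of Pluto's
separation = 19571.0   # km, mean Pluto-Charon distance
r_pluto = 1153.0       # km, Pluto's radius

barycenter = m_ratio / (1 + m_ratio) * separation
print(round(barycenter))            # 2034 km from Pluto's center
print(round(barycenter - r_pluto))  # 881 km above the surface (the "~900 km")
```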
{ "domain": "astronomy.stackexchange", "id": 53, "tags": "solar-system, natural-satellites, pluto, charon" }
Dirt energy, possible?
Question: Doesn't $E=mc^2$ mean that mass can be converted to energy? But from what I have studied in high school nuclear physics, it seems that the only "$E=mc^2$" we can get is from binding energy between nucleons. This may sound really stupid, but is it possible to actually convert matter into energy using a machine on Earth? Like get 1 kilogram of dirt and convert it to $c^2$ joules? What would it take to make this happen? Answer: Like get 1 kilogram of dirt and convert it to $c^2$ joules? What would it take to make this happen? To make something close to this happen you would need one kilogram of antimatter dirt. Then all the quantum number additions would be satisfied and a lot of radiation would come out, but not completely as kinetic or useful radiative energy. You will get pairs of weakly interacting neutrinos and antineutrinos, and pairs of electrons and positrons that will have to meet each other to become pairs of photons. The resources you would have to spend to create 1 kilogram of antidirt would make the whole process economically unfeasible, considering that we have only managed to make a bit of antihydrogen up to now.
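For scale, the rest energy locked in one kilogram is enormous (and, as the answer notes, releasing it would take a second kilogram of antimatter, yielding $2mc^2$ in total):

```python
c = 2.998e8      # m/s, speed of light
m = 1.0          # kg
E = m * c**2     # rest energy of 1 kg, roughly 9.0e16 J
print(f"{E:.3e} J")   # on the order of 21 megatons of TNT equivalent
```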
{ "domain": "physics.stackexchange", "id": 8633, "tags": "mass-energy" }
Adaptive Optics?
Question: I get the general idea of adaptive optics. The light from an object is distorted by differences in the earth's atmosphere, and a telescope with AO tries to compensate for this distortion by various mechanisms. Is there a good resource for a more in-depth overview of the systems and the physics behind them? Answer: There are of course books about adaptive optics. For instance: Tyson, R. Principles of Adaptive Optics, (2010).
{ "domain": "astronomy.stackexchange", "id": 513, "tags": "observational-astronomy, optics" }
Maximum period of a vertically spinning ball
Question: Problem: A point mass with mass $m$, tethered by a string of length $R$, is in unforced circular motion in a gravitational field with strength $g$. The plane of motion is parallel to the gravitational field lines. The period of motion is $T$. What is the largest possible $T$? (ignore friction and any heat loss) This is a problem that I made up, but I'm sure it has been posed and answered before. I cannot find any reference. What I want to know is if my assumption for solving this problem is correct. Assumption: For the path to be circular, the string must be under tension. Minimum tension is when the mass is at the top of the loop. Maximum period corresponds to zero tension at this point. Added: Reason for the assumption: The radial acceleration $a=v^2/R$ is toward the center. $g$ is downward. The lower half of the circle is a pendulum and the path is circular. In the upper half of the circle, the tension is $m(v^2/R - g \sin(\theta))$, where $\theta$ is angular displacement measured counterclockwise from the positive x-axis. Minimum tension is reached at $\theta=\pi/2$. Since tension must be greater than or equal to zero, $v^2 \ge g R$. Maximum period corresponds to the lowest tangential speed, $v^2=gR$. Answer: You start out right - the tension is zero at the top, and that gives you the velocity at the top of the arc. The velocity at the bottom is right too. However, I don't think you can simply take the average velocity... instead you need to write down the integral equation. At a given angle $\theta$, you have height $h$ and velocity $v$. 
From conservation of energy we know $$\frac12 m v^2 + m g h = 2mgR + \frac12 m v_0^2 = \frac52 mgR$$ Now we can write $h$ in terms of $\theta$: $$h = R(1+\cos\theta)$$ We can do the same thing for $v$: $$v = R\dot\theta$$ Together, these make a differential equation: $$\begin{align}v^2 &= 5gR - 2gh\\ \left(\dot\theta R\right)^2 &=5gR - 2gR(1+\cos\theta)\\ &=gR(3-2\cos\theta)\end{align}$$ A bit of rearranging gives $$dt = \sqrt{\frac{R}{g}}\frac{d\theta}{\sqrt{3-2\cos\theta}}$$ This is a hard thing to integrate; so we turn to Wolfram Alpha, which tells us we should have paid more attention when we learnt about elliptic integrals... When the right hand side is integrated from $0$ to $\pi$, the left hand side gives us half the period: $$T = 2\sqrt{\frac{R}{g}}\left(2K(-4)\right) \\ \require{AMScd}\bbox[border:2px solid red]{T= 4.038 \sqrt{\frac{R}{g}}}$$ where K is the complete elliptic integral of the first kind. Because I don't trust my own math skills, I did the integration numerically - and am happy to say I got basically the same answer: As you can see, the line crosses the $2\pi$ line at just a touch over $4\sqrt{\frac{R}{g}}$ - this is consistent with the value 4.038 that I got from Wolfram Alpha. This is slightly longer than the value you calculated (which comes to 3.703), which is not surprising: even if you assume a linear change in velocity, you have to take account of the time spent at each velocity - your expression doesn't do that...
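The quoted coefficient can be reproduced without elliptic-integral tables; a numerical quadrature sketch with numpy:

```python
import numpy as np

# T = 2*sqrt(R/g) * Integral_0^pi dtheta / sqrt(3 - 2 cos(theta))
theta = np.linspace(0.0, np.pi, 200001)
f = 1.0 / np.sqrt(3.0 - 2.0*np.cos(theta))

# trapezoidal rule; the integrand is smooth, so this converges quickly
integral = np.sum((f[:-1] + f[1:]) / 2 * np.diff(theta))
coeff = 2 * integral
print(round(coeff, 3))   # 4.038, matching the boxed result above
```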
{ "domain": "physics.stackexchange", "id": 28846, "tags": "homework-and-exercises, newtonian-mechanics" }
Does ROS 2 work on any Debian-based distro or only Ubuntu?
Question: How well does ROS 2 work on Debian-based distros? Does it also have Tier 1 support, or only Ubuntu? Answer: The best reference for supported platforms is REP 2000. To directly answer your question, Debian platforms are not Tier 1 for any distribution at the moment. However, because they are so closely tied to Ubuntu distributions, they will generally work without any changes if you pick the closest analogous system. But you will have to compile from source.
{ "domain": "robotics.stackexchange", "id": 38665, "tags": "ubuntu, debian, linux" }
Why do we tap on cardboard to see magnetic field lines?
Question: If we sprinkle iron particles on a cardboard where a bar magnet is kept and tap the board gently, then the particles get arranged in a way that they look like field lines. But I am confused: why do we have to tap on the board? Why won't they get arranged like that normally? (Sorry for this stupid question; I have started studying proper magnetism recently.) Answer: It’s like shaking a measuring cup half full of sugar to make it level out—in both cases there’s an energetically favored configuration you’re trying to reach, but without agitation, friction prevents the grains from moving to that configuration. Each time you tap the cardboard or shake the cup, you give the grains a new opportunity to settle in a new position, and the magnetic/gravitational forces, though not strong enough to overcome friction on their own, determine the end configuration.
{ "domain": "physics.stackexchange", "id": 85605, "tags": "electromagnetism, magnetic-fields, friction" }
Prove the solution of von Neumann equation will never stabilize if Hamiltonian and initial density matrix commutes
Question: Given the von Neumann equation $$\frac{d}{dt} \rho(t) = -i [H, \rho(t)] = -i e^{-iHt}[H, \rho(0)]e^{iHt}.$$ If we know that $[H, \rho(0)] \neq 0$, how do we prove in detail that the solution of the von Neumann equation will never stabilize, i.e. $$\lim_{t\to\infty}\frac{d}{dt} \rho(t) \neq 0~?$$ Answer: Hints to the question (v2): First note that the operator norm $$\tag{1}||A||~=~||UA||~=~||AU||$$ of an operator $A$ is invariant if we compose with a unitary operator $U$ from either left or right. Therefore $\dot{\rho}(t)$ is not the zero operator: $$\tag{2}|| \dot{\rho}(t) || ~=~ || [H, \rho(t)] || ~\stackrel{(1)}{=}~ || [H, \rho(0)] || ~\neq~ 0. $$
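The hint is easy to confirm numerically: the norm of $[H,\rho(t)]$ stays constant in time (Frobenius norm here, which is also unitarily invariant). A sketch with numpy, using random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                  # Hermitian Hamiltonian

B = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
rho0 = B @ B.conj().T
rho0 /= np.trace(rho0).real               # positive, unit-trace density matrix

E, V = np.linalg.eigh(H)
def U(t):                                 # e^{-iHt} via spectral decomposition
    return V @ np.diag(np.exp(-1j*E*t)) @ V.conj().T

C0 = H @ rho0 - rho0 @ H                  # [H, rho(0)], nonzero generically
norms = [np.linalg.norm(U(t) @ C0 @ U(t).conj().T) for t in (0.0, 1.0, 7.3)]
print(norms)   # the three values coincide: the norm never decays
```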
{ "domain": "physics.stackexchange", "id": 15654, "tags": "quantum-mechanics, mathematical-physics, operators, hilbert-space, density-operator" }
volumetric density from linear densities in the three directions x,y,z
Question: I have a cube and I know the linear density of particles along each axis (I mean, the number of particles per unit length along each axis). How can I get the volumetric density? Answer: In Cartesian coordinates, just take a volume element as dV = dx dy dz. The volumetric density is determined by multiplying the linear densities by each other. This gives you the volumetric density in # of particles/m^3.
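As a tiny worked example (numbers made up):

```python
# 10 particles per metre along each axis of a cubic lattice
nx, ny, nz = 10, 10, 10     # linear densities, in 1/m
n_volume = nx * ny * nz     # volumetric density, in 1/m^3
print(n_volume)             # 1000
```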
{ "domain": "physics.stackexchange", "id": 59181, "tags": "geometry, density" }
Accessing the contents of a project's root directory in Python
Question: This morning I was trying to find a good way of using os.path to load the content of a text file into memory, which exists in the root directory of my current project. This approach strikes me as a bit hackneyed, but after some thought it was exactly what I was about to do, except with os.path.normpath(os.path.join(__file__, '..', '..')) These relative operations on the path to navigate to a static root directory are brittle. Now, if my project layout changes, these associations to the (former) root directory change with it. I wish I could assign the path to find a target, or a destination, instead of following a sequence of navigation operations. I was thinking of making a special __root__.py file. Does this look familiar to anyone, or know of a better implementation? my_project | | __init__.py | README.md | license.txt | __root__.py | special_resource.txt | ./module |-- load.py Here is how it is implemented: """__root__.py""" import os def path(): return os.path.dirname(__file__) And here is how it could be used: """load.py""" import __root__ def resource(): return open(os.path.join(__root__.path(), 'special_resource.txt')) Answer: I never heard of __root__.py and don't think that is a good idea. Instead create a files.py in a utils module: MAIN_DIRECTORY = dirname(dirname(__file__)) def get_full_path(*path): return join(MAIN_DIRECTORY, *path) You can then import this function from your main.py: from utils.files import get_full_path path_to_map = get_full_path('res', 'map.png') So my project directory tree looks like this: project_main_folder/ utils/ __init__.py (empty) files.py main.py When you import a module the Python interpreter searches if there's a built-in module with that name (that's why your module name should be different or you should use relative imports). If it hasn't found a module with that name it will (among others in sys.path) search in the directory containing your script. You can find further information in the documentation.
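A pathlib-based equivalent of the files.py helper (a sketch of mine; the explicit anchor argument stands in for __file__ so the functions are testable in isolation):

```python
from pathlib import Path

def project_root(anchor):
    """Directory two levels above the given file, as in utils/files.py."""
    return Path(anchor).resolve().parent.parent

def get_full_path(anchor, *parts):
    return project_root(anchor).joinpath(*parts)

# inside utils/files.py you would call:
#   path_to_map = get_full_path(__file__, 'res', 'map.png')
```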
{ "domain": "codereview.stackexchange", "id": 27233, "tags": "python" }
3D SLAM without IMU possible?
Question: Hi, I'm very new to ROS and working on the same problem of building a 3D map using lidar, but I don't have an IMU. Can you please tell me whether it is possible to build a 3D map without an IMU, and if so, what is the procedure? I have a Pioneer 3-AT robot with ROS Jade. Originally posted by ARCHANA on ROS Answers with karma: 3 on 2016-01-19 Post score: 0 Answer: This depends on your sensor setup. If it can provide 3D pointclouds at a high enough rate, you can do 3D SLAM by aligning the pointclouds with an ICP algorithm. You can also use the Pioneer's wheel odometry to replace the IMU to some extent. Edit: A good ICP variant is the GeneralizedIterativeClosestPoint from the pcl-registration library. To create a 3D map with a 2D sensor (like your Hokuyo), you have to mount the scanner on a tilt unit to periodically tilt it up and down. Then PCL can create a pointcloud out of all scans from one full sweep of the scanner.
Comment by ARCHANA on 2016-01-22: I am using PTU-E46 with flir_ptu_ros drivers. I installed this package from git source. Comment by ARCHANA on 2016-01-27: Hi, sorry to disturb you again. I installed loam_continuous package from git source. Also i download data sets provided i.e bagfile and that is running fine in rviz using command rosbag play. but when i connect hokuyo utm 30lx with my laptop and run that package then i am not able to see 3d pointcld Comment by ARCHANA on 2016-01-27: Also there is no error. Can you please tell me what changes i have to make in that package so as to use it for hokuyo utm-30lx. Comment by asd on 2017-10-13: ARCHANA I also installed loam but point cloud is displayed only with rviz not with loam. Can u guess a possible reason for that ? it will be of great help thank u
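The pointcloud alignment the answer refers to can be illustrated without PCL. Below is a minimal point-to-point ICP in NumPy (brute-force nearest neighbours, Kabsch/SVD alignment step) — a sketch of the principle only, not a replacement for PCL's GeneralizedIterativeClosestPoint:

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Kabsch/SVD: least-squares rotation R and translation t with dst ~ src @ R.T + t
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    # Classic point-to-point ICP: match each point to its nearest neighbour,
    # solve for the best rigid transform, apply it, repeat.
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]          # brute-force nearest neighbours
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Real scan matchers replace the O(n²) nearest-neighbour search with a k-d tree and add outlier rejection, which is exactly what the PCL registration classes provide.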
{ "domain": "robotics.stackexchange", "id": 23488, "tags": "ros" }
C++ game server
Question: I'm writing a server for an MMO game using boost::asio. I would like to know, are there any design or other issues in my code? And what should I improve in it? Thanks in advance. BaseServer.h: #ifndef BASE_SERVER_H #define BASE_SERVER_H #include <asio.hpp> #include "MessageProcessor.h" class BaseServer { public: BaseServer(asio::io_context& ioContext, unsigned short port) : socket(ioContext, asio::ip::udp::endpoint(asio::ip::udp::v4(), port)) { receivePacket(); } protected: asio::ip::udp::socket socket; virtual void handlePacket() = 0; private: void receivePacket() { socket.async_receive(asio::null_buffers(), [this](std::error_code ec, std::size_t bytes_recvd) { if (ec == asio::error::operation_aborted) return; handlePacket(); receivePacket(); }); } }; #endif GameServer.h: #ifndef GAME_SERVER_H #define GAME_SERVER_H #include <Net/BaseServer.h> #include <Net/MessageProcessor.h> #include <Utils/BitStream.h> class GameServer : public BaseServer { public: GameServer(asio::io_context& ioContext, unsigned short port); protected: MessageProcessor<BitStream&, asio::ip::udp::endpoint> messageProcessor; void asyncParsePacket(unsigned char* buffer, unsigned short packetSize, asio::ip::udp::endpoint senderEndpoint); virtual void handlePacket() override; }; #endif GameServer.cpp: #include "GameServer.h" #include <iostream> #include "Messages/Client/TestMessage.h" GameServer::GameServer(asio::io_context& ioContext, unsigned short port) : BaseServer(ioContext, port) { messageProcessor.registerHandler(0x01, [](BitStream& stream, asio::ip::udp::endpoint endpoint) { TestMessage mes; mes.deserialize(stream); std::cout << "Test message received! 
A = " << mes.a << ", B = " << mes.b << std::endl; }); } void GameServer::asyncParsePacket(unsigned char* buffer, unsigned short packetSize, asio::ip::udp::endpoint senderEndpoint) { BitStream stream(buffer, packetSize); delete[] buffer; unsigned char messageId; stream >> messageId; auto handler = messageProcessor.getHandler(messageId); if (handler) handler(stream, senderEndpoint); } void GameServer::handlePacket() { unsigned int available = socket.available(); unsigned char* buffer = new unsigned char[available]; asio::ip::udp::endpoint senderEndpoint; std::error_code ec; unsigned short packetSize = socket.receive_from(asio::buffer(buffer, available), senderEndpoint, 0, ec); socket.get_io_service().post(std::bind(&GameServer::asyncParsePacket, this, buffer, packetSize, senderEndpoint)); } BaseMessage.h: #ifndef BASE_MESSAGE_H #define BASE_MESSAGE_H #include "../Utils/BitStream.h" class BaseMessage { protected: unsigned short id; public: BaseMessage(unsigned short messageId) : id(messageId) {} virtual ~BaseMessage() = default; unsigned short getId() const { return this->id; } virtual void serialize(BitStream& stream) const = 0; virtual void deserialize(BitStream& stream) = 0; }; #endif MessageProcessor.h #ifndef MESSAGE_PROCESSOR_H #define MESSAGE_PROCESSOR_H #include <vector> #include <functional> class BitStream; template <typename ... HandlerArgs> class MessageProcessor { protected: using MessageHandler = std::function<void (HandlerArgs ...)>; std::vector<MessageHandler> messageHandlers; public: void registerHandler(unsigned short id, MessageHandler handler) { if (messageHandlers.size() <= id) messageHandlers.resize(id); messageHandlers.insert(messageHandlers.begin() + id, handler); } MessageHandler getHandler(unsigned short id) const { return id < messageHandlers.size() ? messageHandlers[id] : 0; } }; #endif Answer: You can make the call to receive_from async by creating a temp struct with the variables you need to keep alive and the buffer.
Then you can put it in a shared_ptr (to account for the potential copies) and capture that shared_ptr in the lambda: void GameServer::handlePacket() { unsigned int available = socket.available(); struct rec_data { std::vector<unsigned char> buffer; asio::ip::udp::endpoint senderEndpoint; }; std::shared_ptr<rec_data> data = std::make_shared<rec_data>(); data->buffer.resize(available); socket.async_receive_from(asio::buffer(data->buffer.data(), available), data->senderEndpoint, [this, data](const std::error_code& error, std::size_t bytes_transferred) { if (!error) asyncParsePacket(data->buffer.data(), bytes_transferred, data->senderEndpoint); }); } The int you use for registerHandler is a magic number. Make it an enum and give each message type a name. Make sure to share the header between the sender and receiver.
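A minimal sketch of the enum suggestion (the message names other than Test are hypothetical, invented for illustration):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of the "replace magic numbers" suggestion: name every
// message ID once, in a header shared by client and server.
enum class MessageId : std::uint8_t {
    Test   = 0x01,
    Login  = 0x02,  // hypothetical additional message types
    Logout = 0x03,
};

// Bridge back to APIs (like registerHandler) that take a plain integer.
constexpr std::uint8_t raw(MessageId id) {
    return static_cast<std::uint8_t>(id);
}
```

Call sites would then read messageProcessor.registerHandler(raw(MessageId::Test), ...) instead of a bare 0x01, and the same header keeps both ends of the protocol in sync.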
{ "domain": "codereview.stackexchange", "id": 33204, "tags": "c++, game, networking, server, boost" }
Applying KVL to a circuit with only a battery and a capacitor
Question: My textbook states that KVL can be applied to a circuit with only a battery and a charging capacitor. According to the textbook then, the voltage across a capacitor always equals that of the battery. How is this possible? Answer: If you model a battery as an ideal voltage source in series with an internal resistance then you have a standard RC circuit which charges the capacitor with a time constant of $\tau=RC$. If you wish to consider a theoretical ideal voltage source then you can simply take the limit as $R$ goes to 0. In that hypothetical case the voltage source provides an infinite current for an infinitesimal time and the capacitor is fully charged with a time constant of 0.
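The limiting argument can be written out explicitly. With battery EMF $V$, internal resistance $R$, and the capacitor initially uncharged, a sketch of the standard derivation:

```latex
% KVL around the loop: the EMF equals the resistor drop plus the capacitor voltage
V = R\,\frac{\mathrm{d}q}{\mathrm{d}t} + \frac{q}{C}
% Solving with q(0) = 0 gives the familiar charging curve
V_C(t) = V\left(1 - e^{-t/RC}\right)
% As R \to 0 the time constant \tau = RC \to 0, so for every t > 0
\lim_{R \to 0} V\left(1 - e^{-t/RC}\right) = V
```

So with an ideal source the capacitor voltage "always equals" the battery voltage only in this limiting sense: the charging transient collapses to an instant.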
{ "domain": "physics.stackexchange", "id": 90070, "tags": "electric-circuits, voltage" }
Can asteroids contain atmosphere?
Question: Gas is abundant in the universe; can a massive asteroid draw in this gas, forming a thin atmosphere? Answer: At a certain size, huge asteroids get classified as dwarf planets. Pluto has an atmosphere 100,000 times thinner than Earth's, and Pluto is already one of the two largest dwarf planets known. Asteroids (like everything) do have gravity, so nearby gas would be drawn to them. But it would take just very tiny disturbances for that gas to drift away, so what little there is would probably be close to undetectable.
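A rough way to quantify "tiny disturbances" is the Jeans-escape comparison: set the body's escape velocity against the typical thermal speed of a gas molecule. The figures below for a Ceres-sized body are approximate and for illustration only:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23  # Boltzmann constant, J/K

def escape_velocity(mass_kg, radius_m):
    # Speed needed to leave the body's gravity well from its surface.
    return math.sqrt(2.0 * G * mass_kg / radius_m)

def thermal_speed(temp_k, molecule_mass_kg):
    # RMS speed of a gas molecule at temperature T.
    return math.sqrt(3.0 * K_B * temp_k / molecule_mass_kg)

# Approximate values for Ceres, the largest main-belt body:
v_esc = escape_velocity(9.4e20, 4.7e5)      # roughly 0.5 km/s
# Molecular hydrogen at a modest 200 K:
v_th = thermal_speed(200.0, 2.0 * 1.67e-27)
# v_th comfortably exceeds v_esc, so a light gas simply leaks away.
```

Even for the biggest asteroid, the thermal speed of light gases exceeds the escape velocity outright, which is the quantitative version of "tiny disturbances let the gas drift away".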
{ "domain": "astronomy.stackexchange", "id": 873, "tags": "gravity, asteroids, gas" }
need to understand ground orientation between gazebo and rviz
Question: Update : added URDF and SDF files being used. I have looked these over but I can not see how they would cause the model to run in different directions. When I send -1,-1 to my joints via the ros gazebo plugin joint/command my model moves backwards but when I process those same commands in rviz via joint/state messages rviz base link moves forward. What would case this. How does ground orientation map between gazebo and rivz? <sdf version="1.4"> <model name='rrbot'> <static>0</static> <link name='base_link'> <pose>0 0 0.1 0 0 0</pose> <inertial> <mass>10</mass> <inertia> <ixx>1</ixx> <ixy>0</ixy> <ixz>0</ixz> <iyy>1</iyy> <iyz>0</iyz> <izz>1</izz> </inertia> </inertial> <visual name='base_visual'> <geometry> <box> <size>0.4 0.2 0.1</size> </box> </geometry> <material> <script> <name>Gazebo/Red</name> <uri>__default__</uri> </script> </material> </visual> <collision name='base_collision'> <geometry> <box> <size>0.4 0.2 0.1</size> </box> </geometry> <surface> <contact> <ode/> </contact> <friction> <ode> <mu>0.2</mu> <mu2>0.2</mu2> </ode> </friction> <bounce/> </surface> <max_contacts>10</max_contacts> </collision> <visual name='caster_visual'> <pose>-0.15 0 -.05 0 0 0</pose> <geometry> <sphere> <radius>0.05</radius> </sphere> </geometry> <material> <script> <name>Gazebo/Blue</name> <uri>__default__</uri> </script> </material> </visual> <collision name='caster_collision'> <pose>-0.15 0 -.05 0 0 0</pose> <geometry> <sphere> <radius>0.05</radius> </sphere> </geometry> <surface> <contact> <ode/> </contact> <friction> <ode> <mu>0</mu> <mu2>0</mu2> <slip1>1.0</slip1> <slip2>1.0</slip2> </ode> </friction> <bounce/> </surface> <max_contacts>10</max_contacts> </collision> <visual name='tower_visual'> <pose>0.05 0 0.1 0 0 0</pose> <geometry> <cylinder> <length>0.2</length> <radius>0.025</radius> </cylinder> </geometry> <material> <script> <name>Gazebo/White</name> <uri>__default__</uri> </script> </material> </visual> <collision name='tower_collision'> <pose>0.05 0 
0.1 0 0 0</pose> <geometry> <cylinder> <length>0.2</length> <radius>0.025</radius> </cylinder> </geometry> <surface> <friction> <ode> <mu>0</mu> <mu2>0</mu2> <slip1>1</slip1> <slip2>1</slip2> </ode> </friction> <bounce/> <contact> <ode/> </contact> </surface> <max_contacts>10</max_contacts> </collision> <sensor name='camera1' type='camera'> <visualize>1</visualize> <update_rate>30</update_rate> <camera name='head'> <horizontal_fov>1.39626</horizontal_fov> <image> <width>800</width> <height>800</height> <format>R8G8B8</format> </image> <clip> <near>0.02</near> <far>300</far> </clip> <noise> <type>gaussian</type> <mean>0</mean> <stddev>0.007</stddev> </noise> </camera> <plugin name='camera_controller' filename='libgazebo_ros_camera.so'> <alwaysOn>true</alwaysOn> <updateRate>1</updateRate> <cameraName>camera1</cameraName> <imageTopicName>image_raw</imageTopicName> <cameraInfoTopicName>camera_info</cameraInfoTopicName> <frameName>camera_frame</frameName> <robotNamespace>/rrbot</robotNamespace> <hackBaseline>0.07</hackBaseline> <distortionK1>0.0</distortionK1> <distortionK2>0.0</distortionK2> <distortionK3>0.0</distortionK3> <distortionT1>0.0</distortionT1> <distortionT2>0.0</distortionT2> </plugin> <pose>0.05 0 0.2 0 0 0</pose> </sensor> <sensor name='head_hokuyo_sensor' type='ray'> <visualize>1</visualize> <update_rate>40</update_rate> <ray> <scan> <horizontal> <samples>720</samples> <resolution>1</resolution> <min_angle>-1.5708</min_angle> <max_angle>1.5708</max_angle> </horizontal> </scan> <range> <min>0.1</min> <max>30</max> <resolution>0.01</resolution> </range> <noise> <type>gaussian</type> <mean>0</mean> <stddev>0.01</stddev> </noise> </ray> <plugin name='gazebo_ros_head_hokuyo_controller' filename='libgazebo_ros_laser.so'> <topicName>/scan</topicName> <frameName>hokuyo_frame</frameName> <robotNamespace>/rrbot</robotNamespace> </plugin> <pose>0.05 0 0.2 0 0 0</pose> </sensor> </link> <link name='left_wheel'> <pose>0.15 -0.13 0.1 0 1.5707 1.5707</pose> <inertial> 
<mass>1</mass> <inertia> <ixx>0.1</ixx> <ixy>0</ixy> <ixz>0</ixz> <iyy>0.1</iyy> <iyz>0</iyz> <izz>0.1</izz> </inertia> </inertial> <collision name='left_wheel_collision'> <geometry> <cylinder> <length>0.05</length> <radius>0.1</radius> </cylinder> </geometry> <surface> <contact> <ode/> </contact> <friction> <ode> <mu>1</mu> <mu2>1</mu2> </ode> </friction> <bounce/> </surface> <max_contacts>10</max_contacts> </collision> <visual name='left_wheel_visual'> <geometry> <cylinder> <length>0.05</length> <radius>0.1</radius> </cylinder> </geometry> <material> <script> <name>Gazebo/Red</name> <uri>__default__</uri> </script> </material> </visual> <gravity>1</gravity> <velocity_decay> <linear>0</linear> <angular>0</angular> </velocity_decay> <self_collide>0</self_collide> <kinematic>0</kinematic> </link> <joint name='joint1' type='revolute'> <pose>0 0 -0.03 0 1.5707 1.5707</pose> <child>left_wheel</child> <parent>base_link</parent> <axis> <xyz>0 1 0</xyz> </axis> </joint> <link name='right_wheel'> <pose>0.15 0.13 0.1 0 1.5707 1.5707</pose> <inertial> <mass>1</mass> <inertia> <ixx>0.1</ixx> <ixy>0</ixy> <ixz>0</ixz> <iyy>0.1</iyy> <iyz>0</iyz> <izz>0.1</izz> </inertia> </inertial> <collision name='right_wheel_collision'> <geometry> <cylinder> <length>0.05</length> <radius>0.1</radius> </cylinder> </geometry> <surface> <contact> <ode/> </contact> <friction> <ode> <mu>1</mu> <mu2>1</mu2> </ode> </friction> <bounce/> </surface> <max_contacts>10</max_contacts> </collision> <visual name='right_wheel_visual'> <geometry> <cylinder> <length>0.05</length> <radius>0.1</radius> </cylinder> </geometry> <material> <script> <name>Gazebo/Red</name> <uri>__default__</uri> </script> </material> </visual> <gravity>1</gravity> <velocity_decay> <linear>0</linear> <angular>0</angular> </velocity_decay> <self_collide>0</self_collide> <kinematic>0</kinematic> </link> <joint name='joint2' type='revolute'> <pose>0 0 0.03 0 1.5707 1.5707</pose> <child>right_wheel</child> <parent>base_link</parent> 
<axis> <xyz>0 1 0</xyz> </axis> </joint> <plugin name='ros_control' filename='libgazebo_ros_control.so'> <robotNamespace>/rrbot</robotNamespace> <robotSimType>gazebo_ros_control/DefaultRobotHWSim</robotSimType> </plugin> </model> </sdf> <?xml version="1.0"?> <!-- Revolute-Revolute Manipulator --> <robot name="rrbot" xmlns:xacro="http://www.ros.org/wiki/xacro"> <xacro:include filename="$(find rrbot_description)/urdf/materials.xacro" /> <xacro:property name="PI" value="3.1415926535897931"/> <!-- ${PI} --> <link name='base_link'> <collision name='collision'> <origin xyz="0 0 .1" rpy="0 0 0"/> <geometry> <box size=".4 .2 .1"/> </geometry> </collision> <visual name='visual'> <origin xyz="0 0 .1 " rpy="0 0 0"/> <geometry> <box size=".4 .2 .1"/> </geometry> </visual> <visual name='caster_visual'> <origin xyz="0.15 0 -0.05" rpy="0 0 0"/> <geometry> <sphere radius=".05"/> </geometry> </visual> <collision name='caster_collision'> <origin xyz="0.15 0 -0.05" rpy="0 0 0"/> <geometry> <sphere radius=".05"/> </geometry> <surface> <friction> <ode> <mu>0</mu> <mu2>0</mu2> <slip1>1.0</slip1> <slip2>1.0</slip2> </ode> </friction> </surface> </collision> <inertial> <mass value="10" /> <origin xyz="0 0 .1 " rpy="0 0 0"/> <inertia ixx=".1" ixy="0" ixz="0" iyy=".1" iyz="0" izz=".1" /> </inertial> </link> <link name="left_wheel"> <collision name="collision"> <origin xyz="0.15 -0.13 0.1" rpy="0 1.5707 1.5707"/> <geometry> <cylinder length="0.05" radius="0.1"/> </geometry> </collision> <visual name="visual"> <origin xyz="0.15 -0.13 0.1" rpy="0 1.5707 1.5707"/> <geometry> <cylinder length="0.05" radius="0.1"/> </geometry> </visual> <inertial> <mass value="1" /> <inertia ixx=".1" ixy="0" ixz="0" iyy=".1" iyz="0" izz=".1" /> </inertial> </link> <joint type="continuous" name="joint1"> <origin xyz="0.15 -0.13 0.1" rpy="0 1.5707 1.5707"/> <child link="left_wheel"/> <parent link="base_link"/> <axis> <xyz>0 1 0</xyz> </axis> <limit effort="3" velocity="3"/> </joint> <link name="right_wheel"> 
<collision name="collision"> <origin xyz="0.15 0.13 0.1" rpy="0 1.5707 1.5707"/> <geometry> <cylinder length="0.05" radius="0.1"/> </geometry> </collision> <visual name="visual"> <origin xyz="0.15 0.13 0.1" rpy="0 1.5707 1.5707"/> <geometry> <cylinder length="0.05" radius="0.1"/> </geometry> </visual> <inertial> <mass value="1" /> <inertia ixx=".1" ixy="0" ixz="0" iyy=".1" iyz="0" izz=".1" /> </inertial> </link> <joint type="continuous" name="joint2"> <origin xyz="0.15 0.13 0.1" rpy="0 1.5707 1.5707"/> <child link="right_wheel"/> <parent link="base_link"/> <axis> <xyz>0 1 0</xyz> </axis> <limit effort="3" velocity="3"/> </joint> <link name="tower_link"> <visual name="tower_visual"> <origin xyz="0.1 0 0.1" rpy="0 0 0"/> <geometry> <cylinder length="0.2" radius="0.025"/> </geometry> </visual> <collision name="tower_collision"> <origin xyz="0.1 0 0.1" rpy="0 0 0"/> <geometry> <cylinder length="0.2" radius="0.025"/> </geometry> </collision> </link> <joint name="camera_joint" type="fixed"> <origin xyz="0 0.0 0.1" rpy="0 0 0" /> <axis xyz="0 0 1"/>> <parent link="base_link" /> <child link="tower_link" /> </joint> <link name="hokuyo_frame"/> <joint name="hokuyo_frame_joint" type="fixed"> <child link="hokuyo_frame">hokuyo_frame</child> <parent link="tower_link">tower_link</parent> </joint> <link name="camera_frame"/> <joint name="camera_frame_joint" type="fixed"> <child link="camera_frame">camera_frame</child> <parent link="tower_link">tower_link</parent> </joint> <transmission name="tran1"> <type>transmission_interface/SimpleTransmission</type> <joint name="joint1"/> <actuator name="motor1"> <hardwareInterface>EffortJointInterface</hardwareInterface> <mechanicalReduction>1</mechanicalReduction> <motorTorqueConstant>10</motorTorqueConstant> </actuator> </transmission> <transmission name="tran2"> <type>transmission_interface/SimpleTransmission</type> <joint name="joint2"/> <actuator name="motor2"> <hardwareInterface>EffortJointInterface</hardwareInterface> 
<mechanicalReduction>1</mechanicalReduction> <motorTorqueConstant>10</motorTorqueConstant> </actuator> </transmission> <plugin name="ros_control" filename="libgazebo_ros_control.so"> <robotNamespace>/rrbot</robotNamespace> <controlPeriod>0.001</controlPeriod> <robotSimType>gazebo_ros_control/DefaultRobotHWSim</robotSimType> </plugin> </robot> Originally posted by rnunziata on Gazebo Answers with karma: 107 on 2013-10-10 Post score: 0 Answer: I had an inverted sign in the odometry on x. Originally posted by rnunziata with karma: 107 on 2013-10-11 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 3489, "tags": "gazebo" }
Min Heap Priority Queue
Question: This was part of an assignment I found online and I tried to solve it myself. The objective was to implement a Priority Queue using a Min Heap, with an array as the underlying data structure to store the data. public class ArrayHeapMinPQ<T> { private PriorityNode<T>[] items; private int INITIALCAPACITY = 4; private int capacity = INITIALCAPACITY; private int size = 0; private Set<T> itemSet; // Declaring a construtor to intialize items as an array of PriorityNodes public ArrayHeapMinPQ() { itemSet = new HashSet<>(); items = new PriorityNode[INITIALCAPACITY]; items[0] = new PriorityNode(null, -1); } /* * Adds an item with the given priority value. Throws an * IllegalArgumentException if item is already present */ @Override public void add(T item, double priority) { ensureCapacity(); // To ensure that duplicate keys are not being used in the queue if (itemSet.contains(item)) { throw new IllegalArgumentException(); } items[size + 1] = new PriorityNode(item, priority); size++; itemSet.add(item); upwardHeapify(items[size]); } /* * Returns true if the PQ contains the given item */ @Override public boolean contains(T item) { return itemSet.contains(item); } /* * Returns the minimum item. Throws NoSuchElementException if the PQ is * empty */ @Override public T getSmallest() { if (this.size == 0) throw new NoSuchElementException(); return items[1].getItem(); } @Override public T removeSmallest() { if (this.size == 0) throw new NoSuchElementException(); T toReturn = items[1].getItem(); items[1] = items[size]; items[size] = null; size -= 1; itemSet.remove(toReturn); downwardHeapify(); ensureCapacity(); return toReturn; } // TODO: Implementation of changePriority is pending /** * Changes the priority of the given item.
Throws * NoSuchElementException if the element does not exists * @param item Item for which the priority would be changed * @param priority New priority for the item */ @Override public void changePriority(T item, double priority) { if (!itemSet.contains(item)) throw new NoSuchElementException(); for (int i = 1; i <= this.size; i += 1) { if (item.equals(items[i].getItem())) { PriorityNode currentNode = items[i]; double oldPriority = currentNode.getPriority(); currentNode.setPriority(priority); if (priority < oldPriority) { upwardHeapify(currentNode); } else { downwardHeapify(); } break; } } } /* Returns the number of items in the PQ */ @Override public int size() { return this.size; } /* * Helper function to retrieve left child index of the parent */ private int getLeftChildIndex(int parentIndex) { return 2 * parentIndex; } /* * Helper function to retrieve right child index of the parent */ private int getRightChildIndex(int parentIndex) { return 2 * parentIndex + 1; } /* * Helper function retrieve the parent index */ private int getParentIndex(int childIndex) { return childIndex / 2; } /* * Helper method to heapify the queue upwards */ private void upwardHeapify(PriorityNode last) { PriorityNode smallestNode = items[1]; // the last node which was inserted in the array PriorityNode lastNode = last; int latestNodeIndex = size; // The max could be that last node will need to switch the smallest node while (!lastNode.equals(smallestNode)) { // Get the parent node int parentNodeIndex = getParentIndex(latestNodeIndex); PriorityNode parentNode = items[parentNodeIndex]; // The function is working because the compareTo method is // comparing the priority and not the data in the item if (parentNode.compareTo(lastNode) > 0) { // Swap the last node with its parent node swap(parentNodeIndex, latestNodeIndex); // Update the method variables latestNodeIndex = parentNodeIndex; lastNode = items[latestNodeIndex]; } // The priority of the parent is less than or equal to the parent 
else if (parentNode.compareTo(lastNode) <= 0) { break; } } } private void downwardHeapify() { // assumption is that the top node is the largest node int currentIndex = 1; while(hasLeftChild(currentIndex)) { int leftChildIndex = getLeftChildIndex(currentIndex); int smallerChildIndex = leftChildIndex; if (hasRightChild(currentIndex)) { int rightChildIndex = getRightChildIndex(currentIndex); double leftChildPriority = items[leftChildIndex].getPriority(); double rightChildPriority = items[rightChildIndex].getPriority(); if (leftChildPriority > rightChildPriority) { smallerChildIndex = rightChildIndex; } } if (items[currentIndex].getPriority() < items[smallerChildIndex].getPriority()) { break; } else { swap(currentIndex, smallerChildIndex); } currentIndex = smallerChildIndex; } } private boolean hasLeftChild(int index) { return getLeftChildIndex(index) < this.size + 1; } private boolean hasRightChild(int index) { return getRightChildIndex(index) < this.size + 1; } /* * Helper function to the class to make sure that there is enough capacity * in the array for more elements */ private void ensureCapacity() { // there are two conditions to take care of // 1. Double the size // 2. 
Make the size half if the array is 3/4 empty double currentLoad = (double) this.size / (double) this.capacity; int newCapacity = capacity; if(this.size > 1 && currentLoad < 0.25) { // Array is being downSized newCapacity = capacity / 2; items = Arrays.copyOf(items, newCapacity); } else if (currentLoad >= 0.5 ) { // Doubling the size of the array newCapacity = capacity * 2; items = Arrays.copyOf(items, newCapacity); } capacity = newCapacity; } /* * Helper method to swap two nodes */ private void swap(int parentNodeIndex, int latestNodeIndex) { PriorityNode temp = items[parentNodeIndex]; items[parentNodeIndex] = items[latestNodeIndex]; items[latestNodeIndex] = temp; } public Integer[] toArray() { Integer[] toReturn = new Integer[items.length]; for (int i = 1; i < items.length - 1; i++) { toReturn[i] = ((Double) items[i].getPriority()).intValue(); } return toReturn; } } PriorityNode public class PriorityNode<T> implements Comparable<PriorityNode> { private T item; private double priority; PriorityNode(T item, double priority) { this.item = item; this.priority = priority; } protected T getItem() { return this.item; } protected double getPriority() { return this.priority; } protected void setPriority(double priority) { this.priority = priority; } @Override public int compareTo(PriorityNode other) { if (other == null) { return -1; } return Double.compare(this.getPriority(), other.getPriority()); } @Override @SuppressWarnings("unchecked") public boolean equals(Object o) { if (o == null || o.getClass() != this.getClass()) { return false; } else { return ((PriorityNode) o).getItem().equals(this.getItem()); } } @Override public int hashCode() { return item.hashCode(); } } I had to implement an interface and I have omitted that part in the code. In my opinion, the change priority function is running at \$O(n)\$ and I am not sure how can I improve the performance of it. I am looking for a discussion on the code in general and the performance of changePriority function. 
Answer: General Review Make all instance fields that are never reassigned final. From the comments you suggest some of these fields get reassigned later in the code. In this case, those fields should not be declared final. private final PriorityNode<T>[] items; private final Set<T> itemSet; Make constants static and final, and use underscores for readability. private static final int INITIAL_CAPACITY = 4; private int capacity = INITIAL_CAPACITY; Don't introduce unnecessary new lines. For instance, between class definition and instance variables. Zero or one new line would suffice. public class ArrayHeapMinPQ<T> { private PriorityNode<T>[] items; public class ArrayHeapMinPQ<T> { private PriorityNode<T>[] items; Don't write comments that state the obvious. It's polluting the source code. Write comments where they really make sense. // Declaring a construtor to intialize items as an array of PriorityNodes public ArrayHeapMinPQ() { Like public API comments (this is a good thing): /* * Adds an item with the given priority value. Throws an * IllegalArgumentException if item is already present */ @Override public void add(T item, double priority) { Perform argument checks before changing the state of the instance. (And remove these comments that have zero added value) public void add(T item, double priority) { ensureCapacity(); // To ensure that duplicate keys are not being used in the queue if (itemSet.contains(item)) { throw new IllegalArgumentException(); } public void add(T item, double priority) { if (itemSet.contains(item)) { throw new IllegalArgumentException(); } ensureCapacity(); It is customary in Java to provide not just add, but also offer methods. add throws an exception, while offer returns a boolean. To accommodate multiple entry points, you should put the actual insertion of data in a private method.
private void insert(T item, double priority) { ensureCapacity(); items[size + 1] = new PriorityNode(item, priority); size++; itemSet.add(item); upwardHeapify(items[size]); } And then refactor add: /* * Adds an item with the given priority value. Throws an * IllegalArgumentException if item is already present */ @Override public void add(T item, double priority) { if (itemSet.contains(item)) { throw new IllegalArgumentException(); } insert(item, priority); } And introduce offer: /* * Adds an item with the given priority value. Returns * False if item is already present */ public boolean offer(T item, double priority) { if (itemSet.contains(item)) { return false; } insert(item, priority); return true; }
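On the O(n) changePriority the question asks about: the linear scan is only there to find the item's index. A hedged sketch of the standard fix, written as a separate class with illustrative names (not a drop-in patch for the reviewed code): keep a HashMap from item to heap index, updated inside swap, so the lookup is O(1) and changePriority costs only the O(log n) re-heapify.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.NoSuchElementException;

// Hypothetical sketch: an item -> index map, kept in sync inside swap(),
// replaces the O(n) scan in changePriority with an O(1) lookup.
class IndexedMinHeap<T> {
    private final List<T> items = new ArrayList<>();          // 0-based heap
    private final List<Double> priorities = new ArrayList<>();
    private final Map<T, Integer> indexOf = new HashMap<>();

    public void add(T item, double priority) {
        if (indexOf.containsKey(item)) throw new IllegalArgumentException();
        items.add(item);
        priorities.add(priority);
        indexOf.put(item, items.size() - 1);
        siftUp(items.size() - 1);
    }

    public T getSmallest() {
        if (items.isEmpty()) throw new NoSuchElementException();
        return items.get(0);
    }

    public void changePriority(T item, double priority) {
        Integer i = indexOf.get(item);                        // O(1), no scan
        if (i == null) throw new NoSuchElementException();
        double old = priorities.get(i);
        priorities.set(i, priority);
        if (priority < old) siftUp(i); else siftDown(i);
    }

    private void siftUp(int i) {
        while (i > 0 && priorities.get((i - 1) / 2) > priorities.get(i)) {
            swap((i - 1) / 2, i);
            i = (i - 1) / 2;
        }
    }

    private void siftDown(int i) {
        while (2 * i + 1 < items.size()) {
            int child = 2 * i + 1;
            if (child + 1 < items.size()
                    && priorities.get(child + 1) < priorities.get(child)) child++;
            if (priorities.get(i) <= priorities.get(child)) break;
            swap(i, child);
            i = child;
        }
    }

    private void swap(int a, int b) {
        T ta = items.get(a), tb = items.get(b);
        items.set(a, tb);
        items.set(b, ta);
        double pa = priorities.get(a);
        priorities.set(a, priorities.get(b));
        priorities.set(b, pa);
        indexOf.put(tb, a);                                   // keep the map in sync
        indexOf.put(ta, b);
    }
}
```

The same idea could be retrofitted onto the reviewed array-backed class by widening its itemSet into a Map from item to index.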
{ "domain": "codereview.stackexchange", "id": 35686, "tags": "java, complexity, heap" }
How can we create a fake gps publisher node in ROS2?
Question: Hello! I want to create a fake GPS ROS node to publish GPS data so that the robot_localization package can subscribe to it for localization. What information would I need for this task? Any suggestion would be helpful. Thanks, Originally posted by petal on ROS Answers with karma: 47 on 2020-04-15 Post score: 2 Answer: I have put a node below that publishes static GPS data in a sensor_msgs/NavSatFix message to the topic 'gps/fix' since the navsat_transform_node of robot_localization subscribes to gps/fix. If you want the GPS data to update you can change the values on the attributes you need e.g. latitude/longitude over time in the pattern you want. import rclpy from rclpy.node import Node from sensor_msgs.msg import NavSatFix from sensor_msgs.msg import NavSatStatus from std_msgs.msg import Header class GpsNode(Node): def __init__(self): super().__init__('gps_node') self.publisher_ = self.create_publisher(NavSatFix, 'gps/fix', 10) timer_period = 0.5 # seconds self.timer = self.create_timer(timer_period, self.timer_callback) def timer_callback(self): msg = NavSatFix() msg.header = Header() msg.header.stamp = self.get_clock().now().to_msg() msg.header.frame_id = "gps" msg.status.status = NavSatStatus.STATUS_FIX msg.status.service = NavSatStatus.SERVICE_GPS # Position in degrees. msg.latitude = 57.047218 msg.longitude = 9.920100 # Altitude in metres.
msg.altitude = 1.15 msg.position_covariance[0] = 0 msg.position_covariance[4] = 0 msg.position_covariance[8] = 0 msg.position_covariance_type = NavSatFix.COVARIANCE_TYPE_DIAGONAL_KNOWN self.publisher_.publish(msg) self.best_pos_a = None def main(args=None): rclpy.init(args=args) gps_node = GpsNode() rclpy.spin(gps_node) # Destroy the node explicitly # (optional - otherwise it will be done automatically # when the garbage collector destroys the node object) gps_node.destroy_node() rclpy.shutdown() if __name__ == '__main__': main() Originally posted by DanielRobotics with karma: 42 on 2020-04-16 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by tfoote on 2020-04-16: @DanielRobotics thanks for catching the duplicate post. Recently spammers are getting more sophisticated and will duplicate posts like this to make their accounts look more active. Comment by Rika on 2021-08-07: Hi, thanks a lot for this but do you happen to know if this also works in ROS1?
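As a small extension of the answer above: if the fake fix should move over time rather than stay static, a simulated motion in metres has to be converted into degrees. A ROS-free sketch using the equirectangular approximation (the 111,320 m-per-degree constant is approximate, which is fine at robot scales):

```python
import math

M_PER_DEG_LAT = 111_320.0  # metres per degree of latitude (approximate)

def offset_to_latlon(lat0, lon0, east_m, north_m):
    """Shift a start coordinate by a local east/north offset in metres."""
    lat = lat0 + north_m / M_PER_DEG_LAT
    # One degree of longitude shrinks with cos(latitude).
    lon = lon0 + east_m / (M_PER_DEG_LAT * math.cos(math.radians(lat0)))
    return lat, lon
```

Inside timer_callback you could then set msg.latitude and msg.longitude from offset_to_latlon(57.047218, 9.920100, x, y), with x and y taken from whatever trajectory you want to fake.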
{ "domain": "robotics.stackexchange", "id": 34773, "tags": "ros2, gps" }
Band Limited Impulse Train Synthesis
Question: I'm currently working on a piece of DSP/Synthesis software and I have run into the issue of generating band limited waveforms. I have been doing some research to try and find a good solution but I haven't had much luck. I have found some things about a BLIT to generate complex waveforms but I have not found a good example of an implementation. I am not very good with Math or Mathematical notation but I am great with dealing with c/c++ code. If anyone has a good explanation of how to generate a BLIT using c/c++ or resources about that I would much appreciate any help that you could give. Thanks P.S. I know this could also be a Stack Overflow question because of the programming nature but I thought I would try the DSP community first. Thanks again. Answer: the main idea behind BLIT is that these analog synth waveforms that we are trying to generate digitally can be thought of as the integral (over $t$) of impulse trains. a sawtooth can be thought of as the integral of the sum of a little bit of DC and an impulse train. a square wave is the integral of impulses of alternating signs. the triangle wave is the integral of the square wave. so, to create bandlimited waveforms of the above, the impulse trains are bandlimited which means that each impulse $\delta(t-t_n)$ is replaced by a $\operatorname{sinc}(t-t_n)$ function, which is that impulse bandlimited through a Nyquist brick-wall LPF. that sequence of bandlimited impulses is a BLIT. then, since integration is a filter with s-plane transfer function of $H(s)=\frac{1}{s}$ and is LTI (Linear, Time-Invariant), integrating the BLITs will introduce no new frequency components. if your BLITs are bandlimited, so are the other waveforms that are derived from filtering the BLITs.
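In practice the sum of shifted sinc functions is usually generated in closed form with a Dirichlet kernel (the "digital sinc"), following Stilson & Smith. A hedged NumPy sketch — the variable names and the default harmonic count are choices, not a canonical implementation:

```python
import numpy as np

def blit(num_samples, period, num_harmonics=None):
    """Bandlimited impulse train via the Dirichlet kernel (digital sinc).

    period P is the impulse spacing in samples; M is the (odd) number of
    harmonics kept, defaulting to the most that fit below Nyquist.
    """
    P = float(period)
    M = num_harmonics if num_harmonics is not None else 2 * int((P - 1) / 2) + 1
    n = np.arange(num_samples)
    x = np.pi * n / P
    denom = np.sin(x)
    near_zero = np.abs(denom) < 1e-9
    safe = np.where(near_zero, 1.0, denom)          # dodge the 0/0 points
    # sin(M x) / (P sin(x)), with the limit M/P at the impulse centres
    return np.where(near_zero, M / P, np.sin(M * x) / (P * safe))
```

Running this train through a (leaky) integrator then yields the bandlimited sawtooth, and sign-alternated or differenced variants give square and triangle waves, exactly as described above.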
{ "domain": "dsp.stackexchange", "id": 1721, "tags": "audio, dsp-core, signal-synthesis" }
Spectrometer vs. Spectrophotometer
Question: I have been researching the difference between a spectrometer and a spectrophotometer. They both sound the same. What is the difference? Answer: A spectrometer tells you which wavelengths of light are absorbed and which wavelengths are reflected. A spectrophotometer measures the relative intensity of the light absorbed or reflected at a particular wavelength of light.
{ "domain": "physics.stackexchange", "id": 75850, "tags": "optics, spectroscopy" }
Acceleration vector - deceleration vs direction
Question: If the acceleration of something $= -10 \text{ m s}^{-2}$, and forwards is defined as north, does that mean the object is getting slower (decelerating) or accelerating in the reverse direction (south)? How can you tell the difference? Answer: Does that mean the object is getting slower (decelerating) or accelerating in the reverse direction (south)? It really doesn't matter. Basic kinematic formulas are designed to work just as well in either case, which is why physicists don't generally use the word "decelerating." It's just another kind of acceleration. That being said, if you want to determine whether the object's speed is increasing or decreasing (which correspond to the popular meanings of "accelerating" and "decelerating" respectively), you can just look at the orientation of the acceleration with respect to the velocity. If the acceleration is parallel to the velocity, the object will be speeding up. If it's antiparallel, the object will be slowing down. You can see this mathematically by taking the derivative of the kinetic energy: $$\frac{\mathrm{d}}{\mathrm{d}t}\biggl[\frac{1}{2}mv^2\biggr] = m\vec{v}\cdot\frac{\mathrm{d}\vec{v}}{\mathrm{d}t} = m\vec{v}\cdot\vec{a}$$ So the sign of the dot product $\vec{v}\cdot\vec{a}$ tells you whether the speed is increasing or decreasing. Do note that velocity is reference frame-dependent. So two different inertial observers looking at the same object at the same time could have differing conclusions as to whether it is speeding up or slowing down. That's one big reason why the distinction is not important in physics.
{ "domain": "physics.stackexchange", "id": 1580, "tags": "vectors" }
Tracking employee data
Question: We have hundreds of employees, and to track each employee's data I am using this code: <?php $sqldelivery9 = "SELECT COUNT(*) as count FROM orders WHERE employeename = 'nawaz' AND DATE(reattemptdate) = DATE(NOW() - INTERVAL 2 DAY)"; $resultdeliverys9 = $db_handle->runSelectQuery($sqldelivery9); $numrowsresultdelivery9 = $resultdeliverys9[0]['count']; echo $numrowsresultdelivery9; Also, I am tracking the last 7 days of records for each employee, so it's going to be 100*7 = 700 SQL queries. Is this acceptable? Is it going to affect performance? Is there a better way to do it with fewer SQL queries? Answer: Definitely not. You can do that all in a single query. SELECT employeename, DATE(reattemptdate) as date, COUNT(*) as count FROM orders WHERE employeename in ('name1', 'name2', 'name3', '...') AND DATE(reattemptdate) > DATE(NOW() - INTERVAL 7 DAY) GROUP BY employeename, date This will give you data in the form of name1 | 2018-09-05 | 5 name1 | 2018-09-06 | 7 name1 | 2018-09-07 | 12 name1 | 2018-09-08 | 9 name1 | 2018-09-09 | 22 name2 | 2018-09-05 | 3 name2 | 2018-09-06 | 5 name2 | 2018-09-07 | 9 name2 | 2018-09-08 | 11 name2 | 2018-09-09 | 16 ... All you need is to iterate over the data. Of course you can drop the employeename in ('name1', 'name2', 'name3', '...') condition if you want to search over ALL employees and not just "specific n" ones.
{ "domain": "codereview.stackexchange", "id": 31950, "tags": "performance, php, mysql, mysqli" }
python rosmsg/rospack in ROS2
Question: I have some python code using rosmsg and rospkg, i.e. rospack = rospkg.RosPack() lines = rosmsg.get_msg_text(msg_name, False, rospack) how to do the same in ROS2 with rclpy? Originally posted by federico.ferri on ROS Answers with karma: 41 on 2019-04-14 Post score: 2 Answer: Here's what I do: from rosidl_runtime_py import get_interface_path from rosidl_adapter.parser import parse_message_file m = parse_message_file('std_msgs', get_interface_path('std_msgs/msg/Bool')) it's actually better than what you do with RosPack as parse_message_file parses the lines for you and returns a nice object with all the info in it. Originally posted by fferri with karma: 46 on 2020-06-05 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 32868, "tags": "ros, ros2, msg, rospkg, rclpy" }
Can there be 'dead states' in a context-free grammar?
Question: Can a context-free grammar include "dead states" from an automaton, such as $$G = \big(\{a, b, c\}, \{A, B, C\}, \{A\to aB, B\to b, B\to C, C\to cC\}, A\big)\,?$$ The production rules $B\to C$ and $C\to cC$ will loop forever and never generate a word. Is this allowed, or MUST production rules end with a terminal at some point? Answer: Context-free grammars are allowed to contain unproductive rules. This is accepted, because every CFG generates the same language as some proper CFG which contains no unproductive rules, no empty string productions, and no cycles; so it is safe to assume that a CFG is proper without loss of generality.
{ "domain": "cs.stackexchange", "id": 8174, "tags": "context-free, automata, formal-grammars" }
More accurate version of Newton's Second Law?
Question: Since force is a one-form (covariant vector), is it more accurate to assert that $F = m a^\mu g_{\mu \nu}$, where $a^\mu$ is the acceleration vector, which is contravariant, and $g_{\mu \nu}$ is the metric tensor? Answer: The equation $F = m a^\mu g_{\mu \nu}$ is notationally unclear. You're right to note that tensor equations have to match types of tensors on both sides, but if we're being really careful about notation, then we have to note exactly what sort of tensor $F$ is. If you mean $F_\nu$ when you write $F$, then it is true that $F_\nu = m a^\mu g_{\mu \nu}$, but it would be equally correct to write $F_\nu = m a_\nu$ or $F^\nu = m a^\nu$. This is because in Einstein summation notation it is understood that $F^\mu = g^{\mu \nu} F_\nu$ and $F_\mu = g_{\mu \nu} F^\nu$.
{ "domain": "physics.stackexchange", "id": 18993, "tags": "newtonian-mechanics, tensor-calculus" }
Convert state Vectors to Bloch Sphere angles
Question: I think this question is a bit lowbrow for the forum. I want to take a state vector $ \alpha |0\rangle + \beta |1\rangle $ to the two Bloch angles. What's the best way? I tried to just factor out the phase from $\alpha$, but then ended up with a divide by zero when trying to compute $\phi$ from $\beta$. Answer: You are probably dividing by $\alpha$ at some point to eliminate a global phase, leading to your divide by zero in some cases. It would be better to get the phase angles of $\alpha$ and $\beta$ with $\arg$, and set the relative phase $\phi=\arg(\beta)-\arg(\alpha)$. Angle $\theta$ is now simply extracted as $\theta = 2\cos^{-1}(|\alpha|)$ (note that the absolute value of $\alpha$ is used). This is all assuming that you want to get to $$|\psi\rangle = \cos(\theta/2)|0\rangle + \mathrm{e}^{i\phi}\sin(\theta/2)|1\rangle\,,$$ which neglects global phase.
{ "domain": "physics.stackexchange", "id": 58825, "tags": "homework-and-exercises, quantum-information" }
Counting the number of thick regions which overlap a square
Question: Let $S$ be a unit square. As a function of $\beta$, what is the maximum number of $\beta$-fat pairwise-disjoint regions with diameter at least 1 which can intersect $S$? Below, we give a figure showing that for $\beta=1$, the maximum number is 7. What about for $\beta = 2, 3, \ldots, n$? Recall the definition of fat for regions in the plane. Given a region $R$, let circle $C_1$ of radius $r_1$ be the largest circle contained in $R$, and let circle $C_2$ of radius $r_2$ be the smallest circle that contains $R$. The fatness of $R$ is given by $\frac{r_2}{r_1}$, and we say that $R$ is $\beta$-fat, for $\beta = \frac{r_2}{r_1}$. For example, if $r_2 = r_1=\frac{1}{2}$, then the regions are circles of diameter 1, and at most 7 such circles can overlap $S$ without overlapping each other. In the figure below, we have depicted a unit square and 7 such circles which overlap the square. Answer: I think that the maximum number of pairwise disjoint fat regions which overlap the square should be strongly related to circle packing. The worst-case shape for a region is something like a "ball & chain". Below I have depicted such a region for $\beta=2$ with diameter 1. These can obviously pack within distance 1 of the unit square much more tightly than I've depicted them. Note that the actual ball & chain region is defined by the green area, and the outer circle is just a guide to depict the fact that these regions have fatness 2. In fact, the chain part of the region can "bend" to allow more regions to be packed.
{ "domain": "cstheory.stackexchange", "id": 1485, "tags": "cg.comp-geom" }