Displacement of a body in free fall
Question: Suppose a ball is thrown up from the ground with some velocity. Let vertically up be the positive direction and vertically down be the negative direction, and let the thrower be at the origin. What will be the sign of the displacement after the ball has reached its maximum height and is still descending? I think it will be negative, since its displacement will be in the negative direction, although its position will always be positive. Am I correct? Answer: Displacement is a vector quantity. In "our world" it usually has three components (x, y and z) and a length. Of course, the length of a vector is always positive. So how could displacement be negative? If you mean "the Z component of displacement", then by your own definition Z is positive above the origin. The only thing that goes negative is the velocity after the maximum height has been reached, i.e. $dZ/dt$ for $t>v_{init}/g$. When the ball has returned to the thrower, the displacement is once again zero.
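A quick numerical check of the answer (a sketch with assumed values not in the question: initial speed 10 m/s, g = 9.8 m/s²): while the ball is descending, the Z component of displacement is still positive even though the velocity has turned negative, and the displacement returns to zero when the ball reaches the thrower again.

```python
# Vertical motion of a ball thrown up from the origin.
# Assumed values (not from the question): v0 = 10 m/s, g = 9.8 m/s^2.
v0 = 10.0   # initial speed, m/s
g = 9.8     # gravitational acceleration, m/s^2

def z(t):
    """Height above the origin (Z component of displacement) at time t."""
    return v0 * t - 0.5 * g * t**2

def v(t):
    """Vertical velocity at time t."""
    return v0 - g * t

t_peak = v0 / g           # time of maximum height
t = 1.5 * t_peak          # some instant while the ball is descending

print(z(t) > 0)   # displacement still positive while descending
print(v(t) < 0)   # velocity negative while descending
```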
{ "domain": "physics.stackexchange", "id": 14019, "tags": "kinematics" }
Access case class parameter inside a map with default
Question: I have a map whose values are case classes, and I want to access one parameter of that case class, with a default value if the map doesn't contain the provided key. myMap.get(myKey).map(_.valueParam).getOrElse(defaultParam) intellij-scala suggests that this is a "simplifiable operation on collection" -- is there a more idiomatic way to write this? * Edit/Answer: The alternative approach would be to use myMap.get(myKey).fold(defaultParam)(_.valueParam) Answer: Map has a getOrElse method: myMap.getOrElse(myKey, defaultInstance).valueParam (note that here the default must be a whole case-class instance whose valueParam holds the desired default, not the parameter's default value itself).
{ "domain": "codereview.stackexchange", "id": 10494, "tags": "scala" }
How to use pca results for linear regression
Question: I have a data set of 11 variables with a lot of observations for each one. I want to run a linear regression on the variables with the observed $\vec{y}=\alpha +\beta\vec{X}$, where $X$ is a matrix. I'm trying to reduce my parameters, so I ran the PCA algorithm on $X$. I get the "loadings" data, but I don't understand how to use it to keep only four (for example) variables to estimate instead of 11. Can somebody help? Answer: Welcome to the site! The components you get from PCA explain most of the variance of your original dataset. You need to name them based on your business understanding (assuming that you know the data, since you mentioned that you want to apply linear regression); otherwise you might need a subject-matter expert's help. Of course, the features won't be the same as the original ones, or else what would be the point of performing PCA. To decide on the number of features, look at the scree plot. PCA is a dimensionality-reduction algorithm that derives new features from the existing ones. It is an unsupervised learning method, used when the data has many features, when you don't understand much about the data, when there is no data dictionary, etc. For a better understanding of PCA you can go through link-1, link-2. Now, before performing linear regression, check whether these new features explain the target variable by applying a predictor-importance test (PI test); see the feature-selection tools in Python or R. Based on the outcome of the PI test, go ahead and use the important features for modeling, discarding the features which do not explain the target variable well. Let me know if you get stuck somewhere.
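To make the workflow concrete, here is a sketch with made-up data (the 200 observations, the random target, and the choice of four components are all assumptions, not from the question): PCA via a plain SVD on the centered matrix, then least squares on the reduced features.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 11))          # 200 observations, 11 variables
y = X @ rng.normal(size=11) + rng.normal(scale=0.1, size=200)

# PCA via SVD on the centered data
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 4                                   # keep 4 components (pick k from the scree plot)
scores = Xc @ Vt[:k].T                  # the new 4 features (PC scores)

# Linear regression on the reduced features, with an intercept column
A = np.column_stack([np.ones(len(y)), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Fraction of variance captured by the kept components
explained = (s[:k]**2).sum() / (s**2).sum()
print(f"variance explained by {k} components: {explained:.2f}")
```

The scree plot mentioned in the answer is simply `s**2` plotted against the component index.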
{ "domain": "datascience.stackexchange", "id": 2743, "tags": "linear-regression, dimensionality-reduction, pca" }
Texture of object loading as black with dots and text
Question: For some reason the texture on my mesh file loads as black for Gazebo 11 with ROS Noetic (gazebo fortress). The texture setup I have will load and configure perfectly fine for Gazebo Garden on a different system with ROS2 Humble. For some reason, when I load the exact same object into the predecessor version, the object renders with a black layer that sometimes has text and dots in it around the object. When I zoom in close on the object, I can see through the black shielding layer and observe the true colors of the object. Does anyone know why this could be happening? Originally posted by mmDrone on Gazebo Answers with karma: 3 on 2023-03-14 Post score: 0 Original comments Comment by azeey on 2023-03-16: Can you clarify if you're using Gazebo-classic (Gazebo 11) or Gazebo Fortress? Also, can you provide a minimal example that shows the problem? Answer: The way the new Gazebo Garden loads models with textures is totally different to how it used to be. See "Textures" towards the bottom of this page: https://gazebosim.org/api/gazebo/3.7/migrationsdf.html It took some creative Google searching to find this information. But the TL;DR is: instead of importing a model texture the old OGRE way, where you define a .material file that OGRE reads, you just import an MTL + OBJ or DAE file. If you know how to use Blender it isn't too bad. You can create your 3D model with Blender and then export it to MTL + OBJ (make sure you use PBR + relative paths) or to the DAE (easier) file format. Then include the model as <geometry> <mesh> <uri>model://PATH_TO_MODEL</uri> </mesh> </geometry> ^ where PATH_TO_MODEL is the folder name of your model with the appropriate model.sdf, model.config and other files. Originally posted by chutsu with karma: 36 on 2023-03-16 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by mmDrone on 2023-03-27: The documentation shared worked great for me. 
I used: <include> <uri> model://<model_name> </uri> </include> The texture is now loaded in properly. Thank you!
{ "domain": "robotics.stackexchange", "id": 4693, "tags": "gazebo-11, ignition-fortress" }
How to add decimals to formal grammar?
Question: I have a formal language that describes digit production like <digit> ::= 0|1|2|...|9 and I need to introduce fractions to write decimals like 3.14 and so on. What I did: <frac> ::= <digit><frac> | .<digit><frac> | <digit> I feel that the .<digit><frac> part is wrong: every time we take this production we can substitute this part again and get more than a single dot in the decimal part, which is incorrect. What is the proper way to handle this? Answer: To properly produce a formal grammar, you should designate a non-terminal for each different part of a decimal. Here is a sample grammar in Backus–Naur form that also uses square brackets for optional items. <decimal> ::= [<sign>]<nonnegative_decimal> <sign> ::= - | + <nonnegative_decimal> ::= <nonnegative_integer>[<fractional_part>] <nonnegative_integer> ::= <positive_integer> | 0 <positive_integer> ::= <nonzero_digit>[<digits>] <fractional_part> ::= .<digits> <digits> ::= <digit>[<digits>] <digit> ::= <nonzero_digit> | 0 <nonzero_digit> ::= 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 It can be made more compact as follows. <decimal> ::= [ - | + ]<nonnegative_decimal> <nonnegative_decimal> ::= <nonnegative_integer>[.<digits>] <nonnegative_integer> ::= <nonzero_digit>[<digits>] | 0 <digits> ::= <digit>[<digits>] <digit> ::= <nonzero_digit> | 0 <nonzero_digit> ::= 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 Please note that the above grammar defines a decimal as a string that starts with an optional sign followed by an integer, possibly followed by a dot and some digits, where only 0,1,2,3,4,5,6,7,8 and 9 are allowed in an integer or digits and where an integer must not start with 0. Many other kinds of decimals are used, too.
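The grammar above can be checked mechanically. Here is a sketch (in Python, with a regex that mirrors the compact grammar) validating a few strings against it:

```python
import re

# Regex transcription of the grammar:
#   optional sign, then a nonnegative integer with no leading zero (or "0"),
#   then an optional fractional part ".<digits>"
decimal = re.compile(r'[-+]?(0|[1-9][0-9]*)(\.[0-9]+)?')

for s in ["3.14", "-0.5", "+42", "0", "03.1", "3.", ".5", "3.1.4"]:
    print(s, bool(decimal.fullmatch(s)))
```

Note that the grammar (and hence the regex) rejects forms like `3.` and `.5`, which some other decimal conventions allow.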
{ "domain": "cs.stackexchange", "id": 12554, "tags": "formal-languages, formal-grammars" }
Modular multiplicative inverse in Ruby
Question: I implemented an algorithm to find the modular multiplicative inverse of an integer. The code works, but it is too slow and I don't know why. I compared it with an algorithm I found in Rosetta Code, which is longer but way faster. My implementation: def modinv1(a, c) raise "#{a} and #{c} are not coprime" unless a.gcd(c) == 1 0.upto(c - 1).map { |b| (a * b) % c }.index(1) end Rosetta Code's implementation: def modinv2(a, m) # compute a^-1 mod m if possible raise "NO INVERSE - #{a} and #{m} not coprime" unless a.gcd(m) == 1 return m if m == 1 m0, inv, x0 = m, 1, 0 while a > 1 inv -= (a / m) * x0 a, m = m, a % m inv, x0 = x0, inv end inv += m0 if inv < 0 inv end Benchmark results (used benchmark-ips): Warming up -------------------------------------- Rosetta Code 141.248k i/100ms Mine 462.000 i/100ms Calculating ------------------------------------- Rosetta Code 2.179M (± 6.5%) i/s - 10.876M in 5.022459s Mine 4.667k (± 3.7%) i/s - 23.562k in 5.055259s Comparison: Rosetta Code: 2179237.4 i/s Mine: 4667.4 i/s - 466.90x slower Why is mine so slow? Should I use the one I found in Rosetta Code? Answer: As the comment says on the less-obfuscated version on Rosetta Code, their implementation is based on the Extended Euclidean Algorithm. Your implementation works by brute force to test every element in the field, so of course it's going to be slow. Brute force performs O(c) modular multiplications, whereas the extended Euclidean algorithm needs only O(log c) steps, which is consistent with the benchmark gap.
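For reference, here is the same extended-Euclidean idea in a short Python sketch (not from the post), checked against the brute-force definition:

```python
def modinv(a, m):
    """Modular inverse of a mod m via the extended Euclidean algorithm.

    Assumes gcd(a, m) == 1.
    """
    if m == 1:
        return 0
    m0, inv, x0 = m, 1, 0
    while a > 1:
        inv -= (a // m) * x0
        a, m = m, a % m
        inv, x0 = x0, inv
    return inv % m0

def modinv_brute(a, m):
    """O(m) brute force, mirroring the slow Ruby version."""
    return next(b for b in range(m) if (a * b) % m == 1)

print(modinv(42, 2017))
print(modinv_brute(42, 2017))
```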
{ "domain": "codereview.stackexchange", "id": 34539, "tags": "performance, algorithm, ruby, mathematics" }
Interpretation of permanganate-stained TLC spots
Question: When I stain TLC spots with $\ce{KMnO4}$, I find that some spots change their appearance as time goes by. Here is an example: the leftmost plate is right after staining, and the others follow sequentially in time. Question 1: As time goes by, some white stains appeared and widened. Does this mean some material exists in that area, or is this just a result of diffusion? Are the white stains due to the original yellow spots, or do they indicate the existence of different materials? Question 2: Some white stains (numbered 1, 2, 3) appeared in the same manner, but then disappeared. Should I take the existence of this white stain into account? And should I record the staining on a time scale whenever I stain a TLC plate with $\ce{KMnO4}$? Answer: As you have seen, potassium permanganate is very reactive. It can react with almost anything organic. Question 1: Yes, diffusion of the individual components is causing the broadening of the spots. Question 2: The spots turn white because either you have acid in your eluent or the compounds themselves are relatively acidic. When the manganese dioxide is reduced in the presence of acid, it is converted to manganese(II) ions (which are very light pink, essentially colorless). Record the stain immediately; that reflects the correct retention.
{ "domain": "chemistry.stackexchange", "id": 16467, "tags": "organic-chemistry, experimental-chemistry, chromatography" }
Why Does a Resistor's Resistance Vary?
Question: Whenever we try to measure a resistance with a multimeter, the value is not the same for all measurements. Slight variations are observed over several measurements. But why does this happen? And can this be considered 'random' fluctuation? Answer: A multimeter measures resistance by injecting a known current and measuring the voltage across the device. As John Rennie mentioned in a comment, the largest uncertainty in this simple measurement is due to the contact resistance of the probes. The error occurs since the same probes are used to both apply the current and measure the voltage. There is also an unknown resistance of the wires and probes themselves, along with the contact resistance of the probes, in series with the device being measured. The resistance of the wires and probes can be subtracted by a differential measurement, but the uncertainty due to contact resistances is increased. A more accurate and repeatable technique for resistance measurements is the 4-point probe method. Two pairs of probes are used, one pair to inject current and another to measure the voltage. This method is less susceptible to error from contact resistance, since with a high-impedance meter the voltage probes carry negligible current. Thus 4-point measurements are almost always used when measuring small resistances.
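A toy calculation (with made-up resistance values, just to illustrate the point) of how lead and contact resistance corrupt a two-wire measurement but not a four-wire one:

```python
# Assumed example values, not from the answer
R_true = 0.5        # device under test, ohms
R_lead = 0.05       # each lead's wire resistance, ohms
R_contact = 0.2     # each probe's contact resistance, ohms
I = 0.01            # injected test current, amps

# Two-wire: the voltmeter sees the drop across everything in the loop
V_2wire = I * (R_true + 2 * (R_lead + R_contact))
R_2wire = V_2wire / I

# Four-wire: the separate voltage probes carry ~no current, so only the
# device's own voltage drop is measured
V_4wire = I * R_true
R_4wire = V_4wire / I

print(f"two-wire reading:  {R_2wire:.2f} ohm")   # overestimates R_true
print(f"four-wire reading: {R_4wire:.2f} ohm")   # matches R_true
```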
{ "domain": "physics.stackexchange", "id": 60924, "tags": "electrical-resistance, measurements, error-analysis" }
Frequency of an open air column
Question: Given only the length of an organ pipe to be $2.14 m$, is it possible to find what frequency it vibrates at? If I use the equation $f=\frac{v}{\lambda}$, does the $v$ apply to the speed of sound in the organ pipe or in air? Answer: The speed of sound in air applies to $v$, because the standing wave in the pipe is a sound wave in air. The speed of sound is approximated by the following formula: $$ v = 331.3 + 0.606T $$ Where $T$ is the temperature in degrees Celsius, and $v$ is the velocity in meters per second. In your case, suppose you're at room temperature (~25 degrees Celsius); then the speed of sound would be: $$ \begin{align} v &= 331.3 + 0.606(25) \\&=346.45m/s \end{align} $$ For a column open at both ends, the fundamental wavelength is twice the pipe length, so $\lambda = 2L = 2(2.14m) = 4.28m$. Now, to solve for the frequency: $$ \begin{align} f &= \frac{v}{\lambda}\\\\ &=\frac{346.45ms^{-1}}{4.28m}\\\\ &=80.95s^{-1}\\\\ &\approx 8.09\times10^1 \ Hertz \end{align} $$
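The arithmetic can be double-checked in a couple of lines (assuming, as above, an open-open pipe at 25 °C):

```python
L = 2.14                   # pipe length in meters (given)
T = 25                     # assumed room temperature, deg C
v = 331.3 + 0.606 * T      # speed of sound in air, m/s
wavelength = 2 * L         # fundamental of a pipe open at both ends
f = v / wavelength         # fundamental frequency, Hz
print(f"v = {v:.2f} m/s, f = {f:.1f} Hz")
```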
{ "domain": "physics.stackexchange", "id": 63083, "tags": "homework-and-exercises, waves, acoustics, frequency, harmonics" }
Am I including these JavaScript files correctly?
Question: I just want to know if this is reasonably good implementation of including local/remote JavaScript files. The function takes a dictionary with (string) .src whitespace-separated list of urls to include/execute in global context, and (func) .done/.error/.load callbacks to run in corresponding cases, resolves URLs to absolute, temporarily inserts <script> blocks in page header, caches loaded addresses, and attaches few static properties: (object) .defaults and (func) .ls/.reload to main includejs() function. // #includejs ;((function (name, def) { this[name] = def(document); }).call( this, "includejs", function (doc) { // will be used to reference private includejs() version var _include; // holds cached script urls var imported = {}; // no-op callback default var pass = function () {}; // helpers var _ = { // <a> helper element to resolve urls to absolute anchor: doc.createElement("a"), // calculates difference of two arrays // used by includejs() to filter new urls to load arrdiff: function (a1, a2) { return a1.filter(_.cbuniq).filter(_.cbdiff, a2); }, // calculates intersection of two arrays // used by includejs.reload() to filter cached urls arrinters: function (a1, a2) { return a1.filter(_.cbuniq).filter(_.cbinters, a2); }, cbdiff: function (node) { //this: a2[] return this.indexOf(node) == -1; }, cbinters: function (node) { //this: a2[] return this.indexOf(node) != -1; }, cbpropcp: function (name) { //this: {src{}, target{}} this.target[name] = this.src[name]; }, cbrmprop: function (name) { //this: target{} try { delete this[name]; } catch (e) {} }, cbuniq: function (node, idx, arr) { return idx <= arr.indexOf(node); }, // shallow copies an (object) node // used by includejs.ls() to list cached urls cp: function (node) { var nodecp = {}; _.keys(node).forEach(_.cbpropcp, {src: node, target: nodecp}); return nodecp; }, // default settup defs: { src : "", done : pass, error : pass, load : pass }, // removes passed properties from (object) node // used in 
includejs.reload() to filter cached urls del: function (node) { //, ...props _.arrinters(_.keys(imported), _.slc(arguments, 1)) .forEach(_.cbrmprop, node); return node; }, // helper for wssplit() for filter out empty strings fnid: function (node) { return node; }, // <head> reference for temporarily injecting <script>-s h: doc.getElementsByTagName("head")[0], // attaches properties of (object) items to (object) target // used in few places to assign object properties init: function (target, items) { for (var name in items) { if (items.hasOwnProperty(name)) { target[name] = items[name]; } } return target; }, isfn: function (node) { return "function" == typeof node; }, isplainobj: function (node) { return "[object Object]" == Object.prototype.toString.call(node); }, isstr: function (node) { return "[object String]" == Object.prototype.toString.call(node); }, keys: function (node) { return Object.keys(Object(node)); }, now: Date.now, // calculates absolute url coresponding to given (string) url // not sure if this works on old ies path2abs: function (url) { _.anchor.href = ""+ url; return _.anchor.href; }, // matches whitespace globaly // copy-pasted from es5-shim.js // https://github.com/es-shims/es5-shim.git rews: /[\x09\x0A\x0B\x0C\x0D\x20\xA0\u1680\u180E\u2000\u2001\u2002\u2003\u2004\u2005\u2006\u2007\u2008\u2009\u200A\u202F\u205F\u3000\u2028\u2029\uFEFF]+/g, // used for converting dynamic arguments object to static array slc: function () { return Array.prototype.slice.apply(Array.prototype.shift.call(arguments), arguments); }, // empties an object // used in includejs.reload() to empty private url cache // forcing reload vacate: function (node) { for (var name in node) { if (!Object.prototype.hasOwnProperty(name)) { delete node[name]; } } return node; }, // splits whitespace-separated string to it's components // returns array of uniqe (string) urls // used by includejs() to turn (string) .src urls to array wssplit: function (str) { return (""+ 
str).split(_.rews).filter(_.cbuniq).filter(_.fnid); } }; // main function _include = function (settup) { var opts = {}; _.isplainobj(settup) || (settup = {}); // take only uncached absolute script urls opts.src = _.arrdiff( _.wssplit(_.isstr(settup.src) ? settup.src : "").map(_.path2abs), _.keys(imported) ); opts.done = _.isfn(settup.done) ? settup.done : _.defs.done; opts.error = _.isfn(settup.error) ? settup.error : _.defs.error; opts.load = _.isfn(settup.load) ? settup.load : _.defs.load; // acts as counter to track download progress opts.i = 0; // holds <script> nodes for caching, and for callback clean-up afterward opts.s = []; // for tracking download progress, when opts.i == opts.len download is done opts.len = opts.src.length; if (opts.len) { opts.src.forEach(_requireloadcb, opts); } else { // asyncs _requireloadcompletecb() setTimeout(function () { _requireloadcompletecb(opts); }); } return opts.src.slice(0); }; // export // adds 'includejs' function identifier to global scope return _.init(_include, { defaults: _.defs, // query cached urls ls: function () { return _.cp(imported); }, // reloads cached .src urls // takes same-format-object as includejs() reload: function (settup) { if (_.isstr(settup)) settup = {src: settup}; _.isplainobj(settup) || (settup = {}); if (!settup.hasOwnProperty("src")) { settup.src = _.keys(imported).join(" "); _.vacate(imported); } else { _.del.apply(null, [imported] .concat(_.wssplit(_.isstr(settup.src) ? 
settup.src : "").map(_.path2abs)) ); } _include(settup); } }); // helpers // nulls .onload/.onerror handlers // detaches loaded <script> node // used by _requireloadcompletecb() to perform cleanup function _requiregcloadcb (nodescript) { nodescript.onload = null; nodescript.onerror = null; _.h.removeChild(nodescript); } // generates new <script> and appends it to <head> // executing <script>.src file in global context // used by includejs() to download/execute script files function _requireloadcb (fileurl) { //this: {src, done, error, load, i, len, s} var opts = this; var nodescript = _.init(doc.createElement("script"), { onerror : function () { // ...e //this: <script> opts.i += 1; opts.error.apply(this, arguments); _requireloadcompletecb(opts); }, onload : function () { // ...e //this: <script> opts.i += 1; imported[fileurl] = _.now(); opts.load.apply(this, arguments); _requireloadcompletecb(opts); }, defer : false, src : fileurl, type : "application/javascript" }); //opts.s.push(_.h.removeChild(_.h.appendChild(nodescript))); opts.s.push(_.h.appendChild(nodescript)); } // cleans up after scripts load/fail-to-load // nulls .onload/.onerror handlers // empties settup (object) opts function _requireloadcompletecb (opts) { if (opts.i == opts.len) { opts.done.apply(doc, opts.src); opts.s.forEach(_requiregcloadcb); // opts.s.splice(0, 1/0); opts.s.splice(0); _.vacate(opts); } } } )); // // use: // // includejs({ // src: "lib/_.js //cdnjs.cloudflare.com/ajax/libs/modernizr/2.7.1/modernizr.min.js", // done: function (scripturls) { // // _doStuff(scripturls); // console.log("done", this, arguments); // }, // load: function (e) { // // _srcLoaded(this); // console.log(e, this); // }, // error: function (e) { // // _srcFailed(this); // console.log(e, this); // } // }); // // // console: // load <script src="http://localhost/sites/xsite/lib/_.js" type="application/javascript"> // load <script src="http://cdnjs.cloudflare.com/ajax/libs/modernizr/2.7.1/modernizr.min.js" 
type="application/javascript"> // done Document index.htm ["http://localhost/sites/xsite/lib/_.js", "http://cdnjs.cloudflare.../2.7.1/modernizr.min.js"] // // console.log(includejs.ls()); // // console: // Object { // http://localhost/sites/xsite/lib/_.js = 1394306633258, // http://cdnjs.cloudflare.com/ajax/libs/modernizr/2.7.1/modernizr.min.js = 1394306633369 // } // // includejs.reload({ // done: function (urls) { // // _doStuffOnReload(urls); // console.log("reloaded", this, arguments); // } // }); // // // console: // reloaded Document index.htm ["http://localhost/sites/xsite/lib/_.js", "http://cdnjs.cloudflare.../2.7.1/modernizr.min.js"] // Answer: Your code is interesting, you are clearly smart and know JavaScript, but this is a maintenance nightmare. I am assuming you will not take my advice to heart but here goes: Do not abbreviate: isfn -> isFunction, isstr -> isString, cbpropcp -> ? ,slc -> slice, _requiregcloadcb -> ? etc. etc. etc. etc. Also, use lowerCamelCase Also, avoid _xx for private variables, as per Crockford Name functions for what they do, not how they are used: fnid: function (node) { return node; }, fnid is a terrible name, if possible I would refactor the code so that I would not need this function. A better name might be value ? JSHint could not find anything wrong, except that your event handlers do not use e, so you do not need to declare e as a parameter You use cbrmprop and similar functions only once, consider in-lining them settup -> setup ;) opts.s.splice(0, 1/0); -> opts.s.splice(); defer : false, -> I think this should have been an option
{ "domain": "codereview.stackexchange", "id": 6500, "tags": "javascript, reinventing-the-wheel" }
Introduction to Special Relativity Question - Momentum Conservation
Question: I'm currently reading a text for self-study on special relativity, Introduction to Special Relativity by James H. Smith, and I came across a question that I can't seem to grasp at the moment. "Figure 1-6 shows a diagram of the successive positions of two objects colliding and bouncing apart. The figure can be considered a photograph taken by repetitive flashes of light spaced at $\frac{1}{10}$ sec intervals. Figures 1-6*a* and 1-6*b* show the same collision, but one was taken with a camera which was moving uniformly. The scale of distance is shown in the figure. The object marked with a small circle has a mass of 1 kg. a. Using Figure 1-6*a* and the conservation of momentum, determine the mass of the object marked with the cross. b. Show by direct measurement, and using the mass determined in a, that momentum is also conserved when measurements are made in the frame of reference of Figure 1-6*b*." My answers: a. Using the conservation of momentum ($u$ being velocity before collision and $U$ being velocity after collision): $$ m_ou_o + m_xu_x = m_oU_o + m_xU_x $$ Since the 'x' object stops after the collision in figure (a) [top], $U_x = 0m/s$: $$ m_ou_o + m_xu_x = m_oU_o + 0 $$ $$ m_x = \frac{m_o(U_o - u_o)}{u_x} $$ According to the diagram, we see that $u_o = U_o = u_x = 5m/s$: $$ m_x = \frac{1kg(5m/s - 5m/s)}{5m/s} = 0??? $$ Something is not right here. The only way this would seem to me to make sense would be where the frame suddenly begins to move immediately after the collision, and we can account for some of the velocity with the moving frame, but the question clearly states that one of the frames, either (a) or (b), is continuously moving with a uniform speed. Momentum is conserved for ALL inertial frames. b. 
Again, plugging values into the conservation of momentum equation, using the velocities found in figure (b) [bottom], things don't add up: $$ 0.72m_o + 0.38m_x = 0.51m_o + 0.38m_x $$ Is there something I'm doing wrong or not understanding, or are the diagrams not accurate? Thanks in advance :) Answer: Momentum & velocity are vectors, not scalars. This means that you can't just set up one equation for "momentum conservation" like you can do with energy conservation, but instead have to look at each coordinate direction independently. In part a of your problem, the equation for momentum conservation in the $x$-direction (horizontally along the page) would be $$ m_0 (5 \text{ m/s}) + m_x (5 \text{ m/s})(- \cos 45^\circ) = m_0 (0 \text{ m/s}) + m_x (0 \text{ m/s}) $$ (note that neither puck has momentum in the $x$-direction after the collision.) Similarly, for the $y$-direction, you would have $$ m_0 (0 \text{ m/s}) + m_x (5 \text{ m/s})(- \sin 45^\circ) = m_0 (-5 \text{ m/s}) + m_x (0 \text{ m/s}) $$ These equations can be solved for $m_x$ given $m_0$ (and, thankfully, are consistent with each other.) Your error is pretty much the same in part b of your problem.
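A quick check of the answer's component equations (taking $m_0 = 1$ kg as in the problem; this verifies the algebra, not the figure itself — and since the angle is 45°, $\sin$ and $\cos$ are numerically equal):

```python
import math

m0 = 1.0                              # kg, given in the problem
c45 = math.cos(math.radians(45))      # cos 45 = sin 45

# x-direction, before = after:  m0*5 - m_x*5*cos45 = 0
m_x = m0 / c45                        # => m_x = m0 / cos45 = sqrt(2) kg

# verify both component equations balance with this m_x
x_before = m0 * 5 + m_x * 5 * (-c45)  # should equal the x-momentum after (0)
y_before = m_x * 5 * (-c45)           # should equal the y-momentum after (m0 * -5)

print(f"m_x = {m_x:.3f} kg")
```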
{ "domain": "physics.stackexchange", "id": 32946, "tags": "homework-and-exercises, special-relativity, galilean-relativity" }
Coulomb field from QED
Question: It is well-known how to obtain Coulomb's law in perturbative QED (e.g. the answers to this question on this site). I am trying to understand if there is any reasonable way to give a meaning, within QED, to the concept of a static electric field. In the case of radiation fields, we know that they have a meaning, within QED, as coherent superpositions of photons. But what about static fields? Is there any QED observable that under proper conditions becomes the classical Coulomb field? I am interested in a formal derivation, not in hand-waving arguments. Answer: When you consider QED, you need to consider the electron field as well. You cannot recover the Coulomb field exactly in QED, because the electron field will shield the EM field. However, it appears in many different ways in a quantized, pure Maxwell theory. One way to see this is to look at the generating functional: $$ Z(J)=\int DA\exp\left(iS\right)\\ S = \int \mathcal L d^4 x\\ \mathcal L =-\frac{1}{4}F^2+JA $$ Specifying the classical source 4-current to be electrostatic: $$ J=(\rho(\vec x),0) $$ Now, the natural step is to integrate out the magnetic part. For this, you need to fix a gauge. A simple choice would be to use the analogue of the $R_\xi$ gauge for the Coulomb gauge. Essentially, this amounts to adding one final term to the Lagrangian: $$ \mathcal L = \frac{1}{2}(\partial_t A+\nabla V)^2-\frac{1}{2}(\nabla\times A)^2+\rho V+\frac{1}{2\xi}(\nabla \cdot\vec A)^2 $$ By integrating out the magnetic part, you obtain in the Fourier basis: $$ \mathcal L = \frac{\vec k{}^2}{\xi\omega^2 +\vec k{}^2}\frac{\vec k{}^2}{2}V^2 $$ and choosing the quantum analogue of the Coulomb gauge with $\xi = 0$, the propagator is given by the Coulomb potential. In fact you can check the gauge invariance since $\rho\propto \delta(\omega)$ in Fourier space so $\omega$ is effectively set to $0$ when integrating $V$ for the generating function. 
The result is independent of $\xi$: $$ Z = \exp\left(\frac{i}{2}\int d^3\vec xd^3\vec y \frac{\rho(\vec x)\rho(\vec y)}{4\pi|\vec x-\vec y|}\right) $$ You can therefore obtain a static field in a quantised EM theory using classical sources. To specifically get the Coulomb potential, you just need to take a single charge, and the expected value of $V$ gives the Coulomb field in the Coulomb gauge. Mathematically, you set $\rho=\delta$ and take the functional, logarithmic derivative of $Z$ with respect to $\rho$ at this specific value. This is what you get from classical Maxwell theory, but now, using the quantised version, you can calculate the quantum fluctuations about this mean value, such as higher moments (which is tractable since the theory is Gaussian). More generally, static charges give fields whose expected values are given by electrostatics in the Coulomb gauge. The true novelty is the quantum fluctuations, which are still present even without the classical charges. In particular, the Coulomb potential can alternatively be interpreted as the correlator of $V$. The link between the two interpretations is the fluctuation-dissipation theorem. In a similar spirit, Jeanbaptiste Roux outlined another standard method to get the Coulomb potential from a pure, quantised Maxwell theory using Wilson loops. Once again, this is not QED since there are no electrons. You can find the approach in Peskin and Schroeder's Introduction to Quantum Field Theory, exercise 15.3 "Coulomb potential." Instead of calculating the generating functional, you calculate the Wilson loop: $$ \begin{align} W &= \left\langle \exp\left(-i\oint A dx\right)\right\rangle \\ &= \exp\left(-i\oint dx^\mu \oint dy_\mu \frac{1}{8\pi^2(x-y)^2}\right) \end{align} $$ Using the appropriate loop, you recover the Coulomb potential in the argument. Actually, you can get this from the more general approach of the generating functional by choosing an appropriate source localised on the chosen path. Hope this helps.
{ "domain": "physics.stackexchange", "id": 97266, "tags": "electrostatics, electric-fields, quantum-electrodynamics, coulombs-law" }
compressing rarely used space
Question: So an idea I've had bouncing around for a while goes like this: suppose we have some TM that runs in possibly exponential time, and thus can use possibly exponential space. But let's say that it doesn't use its available space very often: say, it's oblivious and makes a constant number of passes back and forth across the input. It seems to me that such a machine can have its space usage improved, but I'm not sure of any relevant results. Can we simulate such a machine in some smaller space bound (ideally polynomial or close to it...), if we allow the simulator to use that smaller amount of space however it wants? Anyone know of such a result? Answer: I think it can be compressed to linear space. I assume the machine $M$ has one tape. Initially the first $n$ cells contain the input, which is followed by $2^{O(n)}$ zeroes. $M$ is oblivious and always makes $k$ passes through the tape (from left end to right and back), and then accepts or rejects. Let $p_i$ denote the state the machine enters after reading the $n$-th cell on the $i$-th left-to-right pass, and let $q_i$ denote the state the machine enters after reading the $(n+1)$-th cell on the $i$-th right-to-left pass. Note that $q_1$ is a function of $p_1$ alone, and generally $q_i$ depends only on the sequence $p_1, p_2, \dots, p_i$ (this sequence has length at most $k$, which is fixed). Therefore we can always compute the state $q_i$ without actually continuing the pass through the extra scratch space - the transitions can be hardcoded into the description of the machine.
{ "domain": "cstheory.stackexchange", "id": 334, "tags": "cc.complexity-theory, space-bounded" }
How to optimize the significance for my neural network with the purpose of classifying detector events?
Question: For my Bachelor's thesis, I've created a neural network with the task of classifying FCNC tz-production events. It was trained on data from a Monte-Carlo simulation, and tries to output 0 when encountering a background event, and 1 when encountering a signal event. Obviously, its output is continuous between 0 and 1, so some kind of classification threshold is needed above which the event is classified as "signal". During training, this was set to 0.5, but in the validation phase, it makes sense to adjust this parameter in order to ensure maximum significance before feeding the real data into the network. I tried doing this by looking at the network's response to all events in the validation dataset, and creating a histogram for them. Here is said histogram: Now, I continued by trying to maximize $S/\sqrt{B}$. I did this by starting at the right-most bin and adding the bins together, for signal and background separately. With each iteration, I then calculate $S/\sqrt{B}$, where $S$ is the current sum of the signal bins and $B$ is the sum of the background bins. The idea is that the highest $S/\sqrt{B}$ obtained in this procedure gives you the optimal classification threshold, which would then be the left edge of the bins added together. However, doing this, I only ever get a supposedly optimal threshold directly on the left edge of the right-most bin. I've tried various different histogram step sizes, all leading to the same result of the right-most bin alone having the maximum $S/\sqrt{B}$. This seems to make sense, because the right-most bin holds very many signal events (in the case of 100 total bins, the last bin holds more than half of all signal events) while there are very few background events in it. But setting my threshold at 0.99 seems ridiculous, and I'm certain this is not the correct way to go about this. 
I also tried to use the improved "Asimov Z" instead of $S/\sqrt{B}$, which looks like this: $$ Z=\sqrt{2((S+B)\ln(1+S/B)-S)} $$ But the results were the same. Answer: It turns out that I simply made a mistake with how I handled my network's input data that made it perform artificially well. So if you ever encounter this problem, you probably did something wrong, since your network shouldn't perform this well to begin with.
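For reference, the right-to-left accumulation procedure the question describes can be sketched in a few lines. The bin counts below are made-up stand-ins, not the histogram from the question:

```python
import math

# Hypothetical per-bin event counts (10 score bins over [0, 1], left to right);
# these numbers are illustrative only.
signal     = [  1,   2,   3,  5,  8, 12, 20, 35, 60, 150]
background = [300, 200, 120, 70, 40, 20, 10,  5,  2,   1]

best = (None, -1.0)
S = B = 0.0
for i in reversed(range(len(signal))):   # accumulate from the right-most bin
    S += signal[i]
    B += background[i]
    z = S / math.sqrt(B)
    if z > best[1]:
        best = (i / 10.0, z)             # left edge of the accumulated bins
print(best)  # (threshold, significance) -> (0.9, 150.0) for these toy counts
```

Note that with a sharply peaked signal like this toy example, the scan does pick the right-most bin alone, reproducing exactly the behavior the question describes.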
{ "domain": "physics.stackexchange", "id": 60697, "tags": "particle-physics, statistics" }
Turtlebot 2/Kobuki installation problems
Question: I just received the Turtlebot2/Kobuki. I installed ROS using the included memory stick. Now I am trying to install all of the remaining software indicated on the Kobuki Installation ROS page. The first problem occurs because of a dead link referenced for a rosinstall under 2. RosInstall I got around this by finding a cached version of the page online and doing a local file rosinstall. The second problem occurred when I attempted a rosdep install kobuki_node under 3. Rosdep. This returns the following: ERROR: the following packages/stacks could not have their rosdep keys resolved to system dependencies: kobuki_node: Cannot locate rosdep definition for [kobuki_safety_controller]. I ignored the error and compiled. However, I am unable to run any of the launch files to get the Kobuki up and running. I get various errors that seem to be related to the fact that rosdep install kobuki_node failed. E.g.: When I try to bring up the Kobuki with the minimal.launch file I get the following: [ERROR] [1360192717.868409930]: Failed to load nodelet [/mobile_base] of type [kobuki_node/KobukiNodelet]: Could not find library corresponding to plugin kobuki_node/KobukiNodelet. Make sure the plugin description XML file has the correct name of the library and that the library actually exists. [FATAL] [1360192717.869479490]: Service call failed! [mobile_base-2] process has died [pid 22945, exit code 255, cmd /opt/ros/groovy/lib/nodelet/nodelet load kobuki_node/KobukiNodelet kobuki mobile_base/odom:=odom mobile_base/joint_states:=joint_states __name:=mobile_base __log:=/home/turtlebot/.ros/log/5a31d5b2-70ab-11e2-aa29-485d60f58202/mobile_base-2.log]. log file: /home/turtlebot/.ros/log/5a31d5b2-70ab-11e2-aa29-485d60f58202/mobile_base-2*.log Any suggestions will be appreciated. Thanks! 
Originally posted by guzza100 on ROS Answers with karma: 11 on 2013-02-06 Post score: 1 Original comments Comment by bit-pirate on 2013-02-06: "This page is full of lies" was my colleague's statement about the Kobuki installation page. :-) Please be patient, he is working on it right now and will come back to you ASAP. Comment by jorge on 2013-02-06: Some things are still a little in flux due to the late arrival of groovy - the live usb's and the 2.0 release are still on their way. Do you actually need to install via source? You can probably get away with just installing turtlebot groovy debs. Comment by jorge on 2013-02-06: Btw, the Kobuki install tutorial that used to be broken is fixed again. It explains how to install all of Kobuki from deb packages. Comment by guzza100 on 2013-02-08: Thanks bit-pirate and jorge. The new installation page with the deb packages worked and I was able to start up the Turtlebot 2 and run teleop and autodocking! Comment by bit-pirate on 2013-02-11: Great! Looks like all your problems have been solved. Are you able to mark your question as answered even though it has no answer? Comment by jorge on 2013-02-11: Good to hear that! Just in case you have extra time to expend, we have added a second tutorial with instructions to install Kobuki from source code, using catkin. Answer: I'll just summarize the previous comments to provide a clear answer and close the question: This tutorial explains how to install Kobuki from turtlebot groovy debs, and this other one explains how to install Kobuki from source code, using catkin. Originally posted by jorge with karma: 2284 on 2013-02-12 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 12771, "tags": "turtlebot2, turtlebot, kobuki" }
What is the physical interpretation of the "energies" I am getting from the self-consistent field Hartree-Fock (SCF HF) method?
Question: The SCF HF method is a variational method that calculates the minimal energy of a many-body ground state (actually, it converges to a number which I take as the ground state energy). But what are the second-minimum, third-minimum, and further energies? Do they correspond to the excited states? Answer: The Hamiltonian describing the electrons in a static crystal is given by $\mathcal{H} = T + U_{ions} + U_{ees}$ where $T=-\frac{\hbar^2}{2m}\sum_i\nabla_i^2$ is the kinetic energy operator, $U_{ions} = \sum_iU_{ion}(\vec{r}_i)$ is the potential energy between the protons and the electrons and $U_{ees} = \sum_{i<j} \frac{e^2}{|\vec{r}_i-\vec{r}_j|}$ are the electron-electron interactions. To tackle this problem, the Hartree-Fock approach assumes a collection of orthonormal one-particle spatial wave-functions $\{\phi_i(\vec{r})\}$ and applies the variational method, namely that $\langle \Psi| \mathcal{H} |\Psi \rangle \geq E_0$ where $\Psi$ is a Slater determinant constructed with $\{\psi_i(\vec{r},\sigma)\} = \{\phi_i(\vec{r})\chi_i(\sigma)\}$, $\{\chi_i(\sigma)\}$ being the spin parts. Since $\langle \Psi | \mathcal{H} | \Psi\rangle$ can be computed, you then minimize it with respect to $\{\phi_i\}$, add a set of Lagrange multipliers $\{\mathcal{E}_i\}$ to make sure the wave-functions stay orthonormal, play a little with the solution and you get the Hartree-Fock equation $$ \mathcal{E}_i\phi_i(\vec{r}) = -\frac{\hbar^2\nabla^2}{2m}\phi_i(\vec{r}) + U(\vec{r})\phi_i(\vec{r}) +\phi_i(\vec{r})\int d\vec{r}' \sum_{j=1}^N \frac{e^2|\phi_j(\vec{r}')|^2}{|\vec{r}'-\vec{r}|} - \sum_{j=1}^N \delta_{\chi_i\chi_j}\phi_j(\vec{r}) \int d\vec{r}' \frac{e^2\phi^*_j(\vec{r}')\phi_i(\vec{r}')}{|\vec{r}'-\vec{r}|}.
$$ The set $\{\mathcal{E}_i\}$ is a collection of Lagrange multipliers; its elements do not technically represent the one-particle energies of the orbitals $\{\phi_i\}$, since they are not eigenvalues of the Hamiltonian. They are nevertheless sometimes referred to as energies, even though strictly speaking they are not.
{ "domain": "physics.stackexchange", "id": 41757, "tags": "energy, schroedinger-equation, variational-principle, many-body" }
How do we measure one complete orbit?
Question: This is being asked from a lay-person point of view. However, please do not hesitate to give more detailed answers that require prior knowledge - if these answer the question more accurately. My confusion currently comes from the fact that everything in the universe is moving in some way or another - so any reference point we use would also be moving/not completely reliable. As such, how can we accurately determine that we have completed an exact 360 degree rotation around the sun (and as such, measure exactly how long it took)? To be clear, this is asking about how we can determine that we have completed a single 360 orbit of our sun, and not the timing aspect of how we know how long a second is. If this situation is different for measuring the orbits of other bodies around their star - it would be great to expand on those differences as well. Answer: When one asks the question, "How is a year defined?", there are actually a few different answers one can provide. What you're implicitly asking, even if you don't know, is how one defines a Sidereal Year. In effect, the Sidereal Year is the time necessary to complete one, full, 360 degree revolution around the Sun. This is opposed to the Tropical Year (or else the Solar Year) which is the time necessary to travel just slightly less than a full 360 degrees (due to things like precession and the definition of the Tropical Year). So your question boils down to, how is a Sidereal year determined? In practice, this is often done by choosing a specific starting point and noting where the Sun is with respect to the stars in the Sky. Then, it will be an entire sidereal year (and a 360 degree revolution) before the Sun returns to the same position as previously observed. Once you've observed the Sun in the exact same position you know a full revolution has occurred of the Earth around the Sun. It should be noted that the motion of the stars over a year is almost completely inconsequential. 
The stars are so immensely far away compared to their own motion, that their apparent motion (otherwise known as the proper motion) is basically nothing over a year. In this way, we consider the stars "fixed".
{ "domain": "astronomy.stackexchange", "id": 2887, "tags": "orbit" }
8bit kinect data
Question: I am looking for 11-bit disparity and depth data, but the Kinect seems to publish only 8-bit data. Why is there a difference? Originally posted by rosmaker on ROS Answers with karma: 51 on 2012-08-25 Post score: 0 Answer: Looking at the openni_camera documentation on the ROS wiki, you can see that the openni_node broadcasts several topics. I think for the data that you are looking for, the topics of interest will be the depth/image_raw messages, or the depth_registered/image_raw if you have the OpenNI registration turned on. The */image_raw messages are in the sensor_msgs/Image message format, which stores the data in a byte array that can then be reassembled in the subscriber. This is accomplished using cv_bridge. cv_bridge allows you to write code like this for your callback:

void imageCb(const sensor_msgs::ImageConstPtr& msg)
{
  cv_bridge::CvImagePtr cv_ptr;
  try
  {
    cv_ptr = cv_bridge::toCvCopy(msg, enc::TYPE_32FC1);
  }
  catch (cv_bridge::Exception& e)
  {
    ROS_ERROR("cv_bridge exception: %s", e.what());
    return;
  }
  cv::imshow(WINDOW, cv_ptr->image);
  image_pub_.publish(cv_ptr->toImageMsg());
}

cv_ptr->image will contain an OpenCV cv::Mat, in this case containing floating point values, which represent the depth in millimeters for that pixel. Originally posted by mjcarroll with karma: 6414 on 2012-08-29 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by phil0stine on 2012-08-29: They might also be interested in the rectified images, depth_registered/image_rect_raw
{ "domain": "robotics.stackexchange", "id": 10762, "tags": "ros, data" }
Why is it necessary to supply constant electricity to make a laser work?
Question: As we know, lasers contain excited atoms. When energy is provided to the atoms in the form of light, heat or electricity, an excited atom after some time drops to a lower energy state and releases a photon; when this photon hits another excited atom, that atom also drops to a lower energy state and we end up with two daughter photons, so every such hit doubles the photon count. So when all the atoms have released their photons and are in the ground state, why can't those photons provide the energy to bring the atoms back to an excited state and repeat the whole process, without the use of any extra electric current? Answer: Even if the laser had perfectly reflecting, i.e. lossless, mirrors at either end of the cavity, and both ends were sealed so no light could escape, it would still require a continual power input. That's because excited atoms/molecules can decay by mechanisms that don't involve a photon, e.g. collisional de-excitation. The lost energy goes into heating up the laser, so you'd need to supply energy to replace the energy lost as heat. In the real world the mirrors aren't perfectly reflecting, so you need to supply energy to make up for the energy lost by absorption at the mirrors (this also ends up heating up the laser). And of course a laser isn't much use unless you make a hole in one of the mirrors for the beam to shine out of. That removes energy from the system, so you need continual power to replace the lost energy.
{ "domain": "physics.stackexchange", "id": 21407, "tags": "photons, energy-conservation, electrons, atomic-physics, laser" }
Time slows down with speed compared to what reference point?
Question: As far as I understand time dilation, it means that time slows down as an object approaches light speed. This is an issue even with, for example, satellites around Earth compared to people on Earth (GPS). Now I am wondering: compared to what reference point is this speed measured? Is it absolute speed with respect to the fabric of space (is that even possible to measure?)? If that's true, then how come the speed of the satellites around Earth has an impact at all? I'd imagine it to be similar to me running back and forth on a plane. In the end my average speed will be the same (if I sit back down in my place) compared to me sitting the whole time. Does that mean that the time dilation would be the same? Answer: Now I am wondering: compared to what reference point is this speed measured? The reference “point” is a system of clocks, all of which are at rest in the chosen reference frame and synchronized. The time on the moving clock is compared to the time on the co-located stationary clock at each moment, and the time dilation is calculated from that. I'd imagine it to be similar to me running back and forth on a plane. In the end my average speed will be the same (if I sit back down in my place) compared to me sitting the whole time. Does that mean that the time dilation would be the same?
{ "domain": "physics.stackexchange", "id": 99708, "tags": "general-relativity, reference-frames, time-dilation, observers" }
Why don't those developing AI Deepfake detectors use two detectors at once so as to catch deepfakes in one or the other?
Question: Why don't those developing AI Deepfake detectors use two differently trained detectors at once that way if the Deepfake was trained to fool one of the detectors the other would catch it and vice-versa? To be clear this is really a question of can deepfakes be made to fool multiple high-accuracy detectors at the same time. And if so then how many can they fool before they become human detectable from noticeable noise? I've heard of papers where they injected a certain noise into their deepfake videos which allows them to fool a given detector (https://arxiv.org/abs/2009.09213, https://delaat.net/rp/2019-2020/p74/report.pdf), so I thought well if they simply used two high-accuracy detectors then any pattern of noise used to fool one detector would interfere with the pattern of noise used to fool the other detector. Answer: Because it is possible to fool many different models at once. See table 2 in this paper, for an example using adversarial perturbations: https://arxiv.org/pdf/1610.08401.pdf That being said, there is no reason to think that using two detectors at once will not increase chance to detect deepfakes. It will just not resolve the problem completely.
{ "domain": "ai.stackexchange", "id": 2630, "tags": "machine-learning, generative-adversarial-networks, deepfakes, video-classification" }
Are the orders of reactants with respect to a reaction different for different starting concentrations?
Question: If the reaction is repeated with $\pu{2 M}$ ethyl iodide the pyridine concentration decreases as shown below. Give the rate law of the reaction in terms of pyridine and ethyl iodide. Explain your reasoning. $$ \begin{array}{l|rrrrr} \hline t/\pu{min} & 0 & 20 & 40 & 80 & 120\\ \hline [\ce{C5H5N}]/\pu{M} & 0.1 & 0.0746 & 0.0559 & 0.0312 & 0.0175\\ \hline \end{array} $$ I have found the rate constant and found that the orders of ethyl iodide and pyridine are both $1$ with respect to the reaction when their initial concentrations were both $\pu{0.1 M}.$ I would have thought this would be the case for any set of initial concentrations. I would like to confirm this. In that case, I am thinking the only thing different would be the rate constant. But how would I go about finding that? Could I plot the data, interpret the half life and go from there? Or is my approach totally wrong? Answer: A reaction's rate does depend upon the consumption of its reactants, and the manner in which the reactants interact. Let's consider the following reaction: $$\ce{A + B -> final products}$$ The rate of consumption of reactants $$\dfrac{-d[A]}{dt}= k[A][B]$$ and $$\dfrac{-d[B]}{dt}= k[A][B]$$ It seems that this reaction is a second order reaction since the rate order appears to be:$$\text{Rate} = k[A][B] $$ As @Andrew astutely pointed out in the comments, a reaction can be of a lower order if one of the reactant's concentration is much higher than the other's. Let's suppose, that one reactant is in great excess (say $10 \text{M of B vs } 0.01\text{M A}$) So what we're observing now is that the reactant B is barely consumed, yet the reaction terminates since reactant A, the limiting reagent is exhausted. In other words, the rate equation looks something like this: $$\text{Rate}=k'[A]$$, where $k^{'}=k[B]_o$ .Now, the reaction appears to be a first order reaction. 
This is known as a pseudo-first-order reaction, since the rate of the reaction ultimately depends only on one reactant. An example that I found here: $$ \rm{CH_3Br + OH^- \rightarrow CH_3OH + Br^-}$$ [...] "Imagine we had an initial concentration of CH3Br of 100 μM and an initial concentration of OH- of 10 mM. Now even if all of the CH3Br has reacted, the concentration of OH- will be essentially unchanged. Therefore during the course of the reaction, the concentration of OH- will be essentially constant. This makes the reaction "like a first order reaction", thus the name pseudo-first order." Effectively, the rate now is: $$\ce{rate= k'[CH_3Br]}$$ This can also be mathematically represented in the integrated rate law: $$\dfrac{1}{\ce{[OH-]_0 - [CH3Br]_0}}\ln \dfrac{\ce{[OH-][CH3Br]_0}}{\ce{[CH3Br][OH-]_0}} = kt \quad \text{where } \ce{[OH-] \neq [CH3Br]}$$ Now, since $\ce{[OH-]_0 \gg [CH3Br]_0}$, we can write the equation as: $$\dfrac{1}{\ce{[OH-]_0 - [CH3Br]_0}}\ln \dfrac{\ce{[OH-][CH3Br]_0}}{\ce{[CH3Br][OH-]_0}} \approx \dfrac{1}{\ce{[OH-]_0}}\ln \dfrac{\ce{[CH3Br]_0}}{\ce{[CH3Br]}}=kt $$ Remember, however, that the rate and order of a reaction usually depend upon its mechanism. For further reading, check here and here as well.
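As a sanity check on data like the question's table, the pseudo-first-order hypothesis can be tested numerically: if the reaction is first order in pyridine, ln([pyridine]0/[pyridine])/t should come out constant. A rough sketch using the question's numbers (the final division by 2 assumes the ~2 M ethyl iodide stays essentially constant):

```python
import math

# Pyridine concentrations (M) from the question's table, with ethyl iodide
# in large excess (~2 M), so it is treated as constant.
t = [20, 40, 80, 120]          # minutes
c0 = 0.1
c = [0.0746, 0.0559, 0.0312, 0.0175]

# First order in pyridine would mean ln(c0/c) = k' t, i.e. ln(c0/c)/t constant.
k_obs = [math.log(c0 / ci) / ti for ci, ti in zip(c, t)]
print(k_obs)  # all close to 0.0146 min^-1 -> consistent with first order

k_prime = sum(k_obs) / len(k_obs)   # pseudo-first-order constant k'
k = k_prime / 2.0                   # k = k'/[EtI], assuming [EtI] = 2 M
print(f"k' = {k_prime:.4f} min^-1, k = {k:.4f} M^-1 min^-1")
```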
{ "domain": "chemistry.stackexchange", "id": 15380, "tags": "physical-chemistry, kinetics, reaction-order, rate-equation" }
Corrosion of iron: no electron pushing mechanism?
Question: Why does the oxidation-reduction reaction between iron, water, and $\ce{O2}$ gas to form rust lack an electron pushing mechanism and a specific sequence of steps? $$\ce{4Fe^0(s) + 3 O2(g) + 2n H2O(l) -> 2 Fe2O3·nH2O(s)}$$ I've noticed that most of the reactions described in undergraduate organic chemistry textbooks have detailed and widely accepted electron pushing mechanisms, whereas most inorganic reactions (such as the corrosion of iron) lack electron pushing mechanisms. Why? Also, which form of $\ce{O2}$ is involved in the corrosion of iron? The singlet excited state or the triplet ground state, or doesn't it matter? $$ \underset{\text{triplet oxygen}~\ce{^3O2}}{ \ce{ ·\overset{\Large .\!\!.}{\underset{\Large .\!\!.}{O}}-\overset{\Large .\!\!.}{\underset{\Large .\!\!.}{O}}· } } \qquad \underset{\text{singlet oxygen}~\ce{^1O2}}{ \ce{ \overset{\Large .\!\!.}{\underset{\Large .\!\!.}{O}}-\overset{\Large .\!\!.}{\underset{\Large .\!\!.}{O}} } } $$ Answer: The arrow-pushing mechanism is just a formal way of thinking about how a reaction might be occurring. Unfortunately, organic chemistry textbooks present it as if all of these reactions were occurring in real time, step by step, and we were able to watch them like a movie. The reality must be far more complex. To truly investigate a reaction mechanism, one has to design very clever experiments (there are books on this topic). It is experimental data that determines the true outcome of an organic reaction, not the arrow-pushing formalism. Usually, when chemists find out the experimental products, they then postulate arrow-pushing mechanisms. Coming to the rust question: rust has puzzled chemists and engineers for decades because it causes millions of dollars of losses every year. Your equation is a very simplistic way of thinking, implying that rust forms in a single step. $$\ce{4Fe^0(s) + 3 O2(g) + 2n H2O(l) -> 2 Fe2O3·nH2O(s)}$$ It does not proceed in a single step.
Our atmospheric chemistry is very complex: you have carbon dioxide, sulfur dioxide, nitrogen oxides, ozone, water vapor, sunlight, plenty of free radicals and plenty of minor trace components. Then there are different phases of rust too. You can have a quick look at Google Scholar; one such representative abstract is [1]. In short, people still do PhDs in this field, and one can only imagine how complex corrosion science is. References Misawa, T.; Hashimoto, K.; Shimodaira, S. On the Mechanism of Atmospheric Rusting of Iron and Protective Rust Layer on Low Alloy Steels. Corrosion Engineering 1974, 23 (1), 17–27. https://doi.org/10.3323/jcorr1974.23.1_17.
{ "domain": "chemistry.stackexchange", "id": 11974, "tags": "inorganic-chemistry, redox" }
Friction of a rolling cylinder
Question: I was wondering why friction vectors are drawn differently for a cylinder rolling on a flat surface and a cylinder rolling down an inclined surface. Since friction is responsible for the rotational motion, shouldn't it always point in the same direction (given that the cylinder is rotating to the right)? Here are two pics I googled so you can see what I mean: Answer: This is because static friction, in the second case, tries to oppose the force which would otherwise result in the sliding of the cylinder, which is the component of gravitational force along the inclined plane. In the first case, I will assume that rolling friction is being referred to. Then, the shown direction of friction actually does oppose the direction of rolling at that point, which is the point of contact. Edited diagrams to help you understand better: (Sorry for the small text)
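The incline case can be checked with the standard rolling-without-slipping equations; the numeric values below are arbitrary and a solid cylinder is assumed:

```python
import math

# Solid cylinder rolling without slipping down an incline; values arbitrary.
m, r, g = 1.0, 0.1, 9.81
theta = math.radians(30)
I = 0.5 * m * r**2            # moment of inertia of a solid cylinder

# Newton along the incline:  m*a = m*g*sin(theta) - f
# Torque about the center:   f*r = I*alpha = I*a/r   (rolling constraint)
a = g * math.sin(theta) / (1 + I / (m * r**2))   # = (2/3) g sin(theta)
f = I * a / r**2                                  # = (1/3) m g sin(theta)
print(a, f)  # f > 0: static friction points UP the incline
```

Solving the two equations together gives a positive f, confirming that on the incline the friction arrow is drawn up the slope, opposite to the would-be sliding.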
{ "domain": "physics.stackexchange", "id": 25931, "tags": "classical-mechanics" }
Haskell 'n' layered ANN forward pass
Question: I'm trying to write a simple 'n' layered ANN in haskell for supervised learning, it will eventually have back prop and you'll be able to use it in a step by step fashion through a GUI which will graph the error function. This is for learning (so I'm not looking for suggestions on libraries that already solve this problem). In this review I'm mainly looking for feedback on how I've arranged my code and if there are better ways to represent a neural network in Haskell. The approach I'm trying to take is to separate the forward pass from the backward pass, so that the training can be controlled by the consumer (rather than having a network that is just a one shot IO which does forward -> back recursively until error < x). The reason for this is that I can then separate the rendering of the networks state at any given pass (i.e. after each forward pass I can easily just render the current loss of the whole network by applying the cost function across the output vector). Here is the current code for a forward pass of the network. It takes a list of double (which is the input vector) and a list of matrix of double (where each element in the list is a weight matrix representing the weights of each connection on a given layer in the network). The activation function then creates an output vector by recursively applying a forwardPass function through each layer until the final layer 'n' has been evaluated. 
Here is the code for that:

module Training (randomWeights, activate, cost) where

import Data.Matrix
import System.Random

activate :: [Double] -> [Matrix Double] -> Matrix Double
activate i weights = forwardPass inputs weights
  where
    inputSize = length i
    inputs = fromList 1 inputSize i
    forwardPass inputs weights
      | length weights == 1 = squashedOutputs
      | otherwise = forwardPass squashedOutputs (tail weights)
      where
        squashedOutputs = mapPos (\(row, col) a -> leakyRelu a) layerOutputs
          where
            layerOutputs = multStd2 inputs (head weights)
        leakyRelu a
          | a > 0.0 = a
          | otherwise = 0.01 * a

randomWeights :: (RandomGen g) => g -> Int -> Int -> Matrix Double
randomWeights generator inputSize outputSize = weights
  where
    weights = matrix inputSize outputSize
      (\(col, row) -> (take 10000 $ randomRs (-1.0, 1.0) generator)!!(col * row))

The randomWeights function is consumed in the main function of my program and is used to generate the list of matrix of double to be passed to the forwardPass (i.e. each layer's weights). The main function looks like this:

main :: IO ()
main = do
  generator <- newStdGen
  let expected = 0.0
  let inputs = [0, 1]
  let inputWeights = randomWeights generator (length inputs) 3
  let hiddenWeights = randomWeights generator 3 1
  let outputWeights = randomWeights generator 1 1
  let outputs = activate inputs [inputWeights, hiddenWeights, outputWeights]
  print inputs
  print outputs

So it's a bit like a unit test (eventually I will wrap the activate/backprop loop into a structure that a user can control via a 'next' button on a GUI, but for now I simply want a solid forwardPass foundation to build on). Does all of this look like reasonable Haskell, or are there some obvious improvements I can make? Answer:

\(col, row) -> (take 10000 $ randomRs (-1.0, 1.0) generator)!!(col * row)

Oh man you got me going "no no no no no no" like I long haven't :D.

take 10000 does nothing here.
col * row is going to come out the same when you switch row and col; perhaps you want col + inputSize * row?
randomRs is going to be recalculated for each (col, row) pair - fromList fixes that.
Calling the line's result weights is little more than a comment.
MonadRandom can avert the generator passery, and also stop you generating the same random values for each call to randomWeights.

activate :: [Double] -> [Matrix Double] -> Matrix Double
activate i = foldl squash (fromLists [i])
  where
    squash inputs weights = fmap leakyRelu $ multStd2 inputs weights
    leakyRelu a
      | a > 0.0 = a
      | otherwise = 0.01 * a

randomWeights :: Int -> Int -> IO (Matrix Double)
randomWeights rows cols = fromList rows cols <$> getRandomRs (-1.0, 1.0)

main :: IO ()
main = do
  let inputs = [0, 1]
  inputWeights <- randomWeights (length inputs) 3
  hiddenWeights <- randomWeights 3 1
  outputWeights <- randomWeights 1 1
  let outputs = activate inputs [inputWeights, hiddenWeights, outputWeights]
  print inputs
  print outputs
{ "domain": "codereview.stackexchange", "id": 31893, "tags": "haskell, reinventing-the-wheel, neural-network" }
M-clique covers in complete graphs
Question: Let us consider a complete weighted graph, with $NM$ nodes. Our objective is to find, among all possible combinations of $N$ disjoint $M$-cliques (each clique consisting of $M$ nodes), the configuration that maximizes/minimizes the sum of the $N$ $M$-cliques weights. Here the weight of a $M$-clique is the sum of the edge weights between all the $M$ nodes composing the clique. It sounds like a classical mathematical problem, but I have been spending hours without finding anything. The special case where $M=2$ consists of a maximal weighted matching problem in a complete graph and can be solved using Edmonds Matching Algorithm, but I can't find anything for $M>2$. Is there an efficient algorithm for this problem? Answer: This problem is NP-hard. As proof, the maximal clique problem (or rather the decision variant find-a-K-clique) can be reduced to this problem as follows. Start with a problem on a graph with N vertices where we wish to find a clique of size K. The set containing these original vertices we'll call S. Add a clique (we'll call it C) of size (N - K)*K which is joined to every vertex in S. Additionally, adjoin one more vertex V which is joined only to the vertices in S (not the ones in C). Now we have an instance of your problem (never mind the edge weights) where we want to divide the resulting graph up into N - K + 1 cliques of size K+1. I claim that there is a solution to this problem if and only if there is a clique of size K in the original graph. only-if follows from the fact that V must belong to some clique of size K+1 which is only the case if there are K vertices which form a clique in S. Furthermore, there will be enough leftover nodes in C that every S-vertex not in the solution clique can be assigned to a separate set of K vertices from C. So once we've managed to find a clique for V, finding the other N-K (K+1)-cliques is always possible (and indeed trivial). 
So, taking "efficient" to mean "polynomial time", the answer to your question is "no" - or, more precisely, "only if P = NP".
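For what it's worth, small instances can still be solved exactly by exhaustive search. A sketch (exponential time, illustrative only):

```python
from itertools import combinations

def best_partition(weights, n_groups, m):
    """Exhaustive search: split n_groups*m nodes into n_groups groups of m,
    maximizing the total intra-group edge weight. Tiny instances only."""
    nodes = list(range(n_groups * m))

    def clique_weight(group):
        return sum(weights[i][j] for i, j in combinations(group, 2))

    def search(remaining):
        if not remaining:
            return 0, []
        first, best = remaining[0], (float("-inf"), None)
        # the first remaining node must join some group: pick its m-1 partners
        for partners in combinations(remaining[1:], m - 1):
            group = (first,) + partners
            rest = [v for v in remaining if v not in group]
            w, groups = search(rest)
            if w + clique_weight(group) > best[0]:
                best = (w + clique_weight(group), [group] + groups)
        return best

    return search(nodes)

# 4 nodes split into 2 "cliques" of size 2 -- i.e. maximum-weight matching
W = [[0, 5, 1, 1],
     [5, 0, 1, 1],
     [1, 1, 0, 7],
     [1, 1, 7, 0]]
print(best_partition(W, 2, 2))  # -> (12, [(0, 1), (2, 3)])
```

For M = 2 this reduces to the matching case mentioned in the question; for larger inputs one would need heuristics or integer programming, given the hardness result above.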
{ "domain": "cstheory.stackexchange", "id": 2462, "tags": "ds.algorithms, graph-theory, graph-algorithms, clique" }
Euler 12: counting divisors of a triangle number
Question: This is the question: The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ... Let us list the factors of the first seven triangle numbers: 1: 1 3: 1,3 6: 1,2,3,6 10: 1,2,5,10 15: 1,3,5,15 21: 1,3,7,21 28: 1,2,4,7,14,28 We can see that 28 is the first triangle number to have over five divisors. What is the value of the first triangle number to have over five hundred divisors?

import math

def triangulated(num):
    x = 0
    for num in range(1, num + 1):
        x = x + num
    return x

l = []

def factors(g):
    for n in range(1, triangulated(g) + 1):
        if triangulated(g) % n == 0:
            l.append(n)
    if len(l) > 500:
        print(triangulated(g))
        print(l)
    l.clear()

for k in range(1, 10000000000):
    factors(k)
    print(k)

Help optimise this problem. Answer: This is not a review, but an extended comment. Project Euler problems are about math, not programming. To optimize, you need to do the math homework first: The $n$'th triangular number is $\dfrac{n (n+1)}{2}$. A number of divisors, aka $\sigma_0$, is a multiplicative function. The link to divisor function may also be interesting. $n$ and $n+1$ are coprime. That should be enough to get you started with optimization.
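Putting those hints together, one possible faster version (a sketch, not the only approach) factors the divisor count of T(n) = n(n+1)/2 across the two coprime parts, halving whichever of n and n+1 is even:

```python
def count_divisors(n):
    # number of divisors via trial division over the prime factorization
    count, d = 1, 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        count *= e + 1
        d += 1
    if n > 1:
        count *= 2
    return count

def first_triangle_over(limit):
    n = 1
    while True:
        # T(n) = n(n+1)/2; n and n+1 are coprime, and the divisor count is
        # multiplicative, so d(T(n)) is the product over the two halves.
        if n % 2 == 0:
            d = count_divisors(n // 2) * count_divisors(n + 1)
        else:
            d = count_divisors(n) * count_divisors((n + 1) // 2)
        if d > limit:
            return n * (n + 1) // 2
        n += 1

print(first_triangle_over(5))    # -> 28, matching the problem statement
print(first_triangle_over(500))  # -> 76576500
```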
{ "domain": "codereview.stackexchange", "id": 38103, "tags": "python, programming-challenge, factors" }
Vibrations with Different Frequencies Beat
Question: Hello, here I have a problem, which is the following. Say there are two waves, $x_1=A\cos(\omega_1 t)$ and $x_2=A\cos(\omega_2 t)$. In my book it is written that $$x=x_1+x_2=2A\cos\frac{(\omega_1-\omega_2)t}{2}\cos\frac{(\omega_1+\omega_2)t}{2},$$ and we know that $f_b=f_2-f_1$, where $f_b$ is the beat frequency. Because the factor $\cos\frac{(\omega_1+\omega_2)t}{2}$ fluctuates a lot, we do not take it as the beat frequency, while $\cos\frac{(\omega_1-\omega_2)t}{2}$ fluctuates much less, so we can say it sets the frequency of the whole motion. But from $f_b=(\omega_1-\omega_2)/2\pi$ we would get a factor $\cos(f_b\pi t)$, whereas angular frequency is $\omega=2\pi f$. I am confused here. Thank you for all help! Answer: Here are your two individual waves and the waves combined. Now the frequency of the envelope is indeed $\dfrac{\omega_1-\omega_2}{2}$, but you will note that for every period of that envelope (i to v) there are two maxima, and so the beat frequency (number of maxima per second) is $\dfrac{\omega_1-\omega_2}{2}\times 2 = \omega_1-\omega_2$
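The "two maxima per envelope period" point can be checked numerically; the frequencies below are arbitrary:

```python
import math

f1, f2 = 10.0, 12.0  # Hz, arbitrary
n = 200000
# sample one full second, starting away from t = 0 so the first maximum
# falls in the interior of the sampled interval
ts = [0.1 + i / n for i in range(n)]

# envelope of cos(2 pi f1 t) + cos(2 pi f2 t) is |2 cos(pi (f1 - f2) t)|,
# which oscillates at |f1 - f2| / 2 = 1 Hz but has two maxima per period
env = [abs(2 * math.cos(math.pi * (f1 - f2) * t)) for t in ts]

beats = sum(1 for i in range(1, n - 1) if env[i - 1] < env[i] >= env[i + 1])
print(beats)  # -> 2 maxima per second = |f1 - f2|
```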
{ "domain": "physics.stackexchange", "id": 35132, "tags": "vibrations" }
Processing an ECG signal with a median filter
Question: I have read in a couple of papers that the noise from an ECG signal can be removed via a median filter. One such example I found was on stackoverflow, where multiple methods were suggested, one of them being the median filter. The following image is taken from the post on stackoverflow. The main idea of the median filter is to run through the signal entry by entry, replacing each entry with the median of neighboring entries. The pattern of neighbors is called the "window", which slides, entry by entry, over the entire signal. What I do not understand is: why doesn't the QRS complex get removed as well? When I tried it using matlab, the bigger I set the window, the better the signal got filtered; shouldn't it be the opposite? Answer: You are certainly doing something very wrong. You should upload your data in order to get better responses; you can upload it to any upload site you wish and provide its link here. The median filter is a highly nonlinear filter (it re-orders the sample positions!). The output of the median filter at a position $n$ is the median of the values that reside in the window scope; i.e., it's the value that resides in the middle when the samples are sorted in order. Hence median filtering requires sorting for each computation. This makes it quite slow as well (a detailed answer actually depends on the architecture...) The median filter is mainly used for speckle or salt-and-pepper noise removal; in essence these are local noise samples whose frequency-domain filtering is not possible without degrading the whole signal. Such local (in time) peaks have wide-band frequency spectrums which inhibit frequency-domain attacks to remove them, therefore leaving only the time-domain (or time-frequency-domain) approaches possible. The median filter has a tendency to preserve edges, and is therefore quite preferred in certain image enhancement operations.
However it also has the side effect of washed out results (texture details are lost and only strong edges remain) which indicates that their use should be performed with care. In principle, the longer the window size, the stronger will be the washed out effect. So it's customary to use as short as possible window sizes (unless otherwise dictated by the particular application) Coming to your example plot. Using a window size larger than 50 samples will wash out local details that might be important to you, so you should use a window size less than 50, I guess (based on the plot you provided).
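The window-size trade-off is easy to see with a minimal 1-D median filter (a Python/NumPy sketch of mine; the window length and edge padding are illustrative choices, not taken from the original post):

```python
import numpy as np

def median_filter_1d(signal, window=5):
    """Replace each sample with the median of its `window` neighbours.

    `window` must be odd; the edges are padded by repeating the end
    samples so the output has the same length as the input.
    """
    assert window % 2 == 1, "window must be odd"
    half = window // 2
    padded = np.pad(np.asarray(signal, dtype=float), half, mode="edge")
    return np.array([np.median(padded[i:i + window])
                     for i in range(len(signal))])
```

An isolated spike is removed while a clean step edge passes through unchanged — and the larger the window, the more genuine narrow features (such as a QRS complex) risk being treated like spikes.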
{ "domain": "dsp.stackexchange", "id": 5471, "tags": "filters" }
Function for splitting an integer into smaller values
Question: This is the function I've implemented in an answer to this question. I've tried to make this as simple and idiomatic as possible, using C++11. Could this still be improved in any way? #include <algorithm> #include <cstdint> #include <iostream> #include <iterator> #include <vector> typedef std::vector<int> SplitValues; SplitValues splitter(std::int32_t number) { SplitValues values(8); for (auto& iter : values) { iter = static_cast<int>(number & 0xF); number >>= 4; } return values; } int main() { std::int32_t number = 432214123; SplitValues values = splitter(number); std::reverse_copy(values.begin(), values.end(), std::ostream_iterator<int>(std::cout, " ")); } Test run Questions: Is SplitValues an appropriate name for a typedef, or does it sound more like a variable? If I pass the hard-coded number instead of the number variable, will it still be treated as the same type as the parameter? Answer: The first problem I see is that you're doing a right shift on a signed integer type (std::int32_t). If a negative value is passed, the result will be implementation defined. When you're going to treat something as a collection of bits instead of treating the whole thing together as a number, you usually want to use an unsigned type, not signed. I'd also prefer to get rid of the magic (and inter-related) numbers sprinkled throughout the code, such as the 8, 4 and the 0xf. These three are inter-related--the 8 being the number of 4-bit fields contained in a single 32-bit item, and 0xf being a mask with 4 bits set. I'd prefer to specify only one of these values in one place, and compute the others from that one. As to the specific questions you've asked: I don't think SplitValues does a good job of indicating a type. This has been a problem for a long time, and I don't think I have a silver bullet to finally put it to rest. In this case, I'd rather just avoid it completely (about which, more below). 
If you pass a literal to a function instead of passing a variable, that literal will start with some type--in this case, since it's digits and can fit in an int, its type will be int. That will then be converted to the function's parameter type (std::int32_t). In this case, at least on a typical compiler where int is a 32-bit type, that "conversion" will really be a NOP because the source and target types are basically identical. As to how to avoid the problem with having to define names, C++ has two mechanisms that can help quite a bit: templates (especially function templates) and auto. Both of these can deduce a result type based on input types, to avoid having to explicitly specify types throughout much of the code. I'd also generally prefer to write splitter as a generic algorithm. While we're at it, its name should be changed from a noun to a verb. Non-functor classes should have nouns for names, but functors and functions do things, so their names should normally be verbs. template <class T, class OutIt> void split(T number, OutIt result) { unsigned bits = 4; unsigned mask = (1<<bits) - 1; unsigned iter_count = sizeof(T) * CHAR_BIT / bits; for (unsigned i=0; i<iter_count; i++) { *result++ = number & mask; number >>= bits; } } This part is a little longer than the code in your answer, but most of that extra length stems from specifying the number of bits, then computing the mask and iteration count from that shift value instead of having interrelated magic numbers throughout. With this code, we can change the type of the input (e.g., to a 64-bit unsigned long long) or the shift count (e.g., to 8 bits) independently of each other, without having to patch the rest of the code to compensate. Although it's a rather minor point, all else being equal, I prefer to avoid creating variables with default values, then filling in the real values later. 
This is fairly innocuous when the values involved are at least initialized as they are in a vector, but it's still better to just initialize values when possible. That leads us to the calling code. Since we changed the function to look like a standard algorithm, we need to change the calling code to suit. int main() { std::deque<int> values; split(432214123, std::front_inserter(values)); std::copy(values.begin(), values.end(), std::ostream_iterator<int>(std::cout, " ")); } Of course, there are a lot of other ways this job could be handled, but I guess that's enough for the moment anyway.
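As a side-by-side illustration of the same idea in Python (a sketch of mine, not from the review): the mask and field count are derived from two parameters, as in the reviewed C++ version, while Python's arbitrary-precision integers sidestep the signed-shift issue entirely:

```python
def split_fields(number, width_bits=32, field_bits=4):
    """Split `number` into `field_bits`-wide fields, least significant first.

    Only the mask and iteration count are computed from the two
    parameters -- no interrelated magic numbers.
    """
    mask = (1 << field_bits) - 1
    count = width_bits // field_bits
    fields = []
    for _ in range(count):
        fields.append(number & mask)
        number >>= field_bits
    return fields
```

Changing the field width or the total width is then a one-argument change, mirroring the flexibility argued for in the review.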
{ "domain": "codereview.stackexchange", "id": 8080, "tags": "c++, c++11, bitwise, integer" }
Thermodynamic equilibrium - will the piston move?
Question: A friend asked me this question and I didn't manage to solve it with basic thermodynamic reasoning. I believe this problem is easily solvable via numeric methods by choosing specific systems, though I prefer an analytic, more general and more intuitive solution. Two different and isolated systems (specified by $S_1(E_1,V_1,N_1)$ and $S_2(E_2,V_2,N_2)$) are separately prepared to satisfy particular $(P,T)$ requirements, so that $P_1=P_2=P$ but $T_1 \ne T_2$. Afterwards the two systems are brought near each other, with a single piston (unmovable at first) separating them. The piston doesn't allow transfer of heat or particles at any stage. After the two systems were properly juxtaposed the restriction on the movement of the piston is removed. Will the piston move from its original position? One way of treatment suggests that since $P_1=P_2$ and since only mechanical work (exchange) is allowed - the piston will not move. The other way suggests that by forcing maximum entropy (thermodynamic equilibrium) for the combined system, we will get $dS_{tot}=dS_1+dS_2=0$, and in particular (since there is only one degree of freedom here) $\frac{\partial S_1}{\partial V_1}=\frac{\partial S_2}{\partial V_1}$ so at equilibrium $\frac{P_1}{T_1}=\frac{P_2}{T_2}$, hence the piston will move. Answer: You've discovered a famous problem in thermodynamics. In our case the piston will not move. The mechanical argument is right, while the maximum entropy argument is inconclusive. To see that $P_1=P_2$ is an equilibrium position you can also apply conservation of energy. Since there is no heat exchange, $$dU_{1,2} = -P_{1,2} dV_{1,2}$$ We require that $dU=0$ since our system is isolated from the environment, hence $$dU_1 + dU_2 = 0 \to P_1 d V_1 + P_2 dV_2 = 0$$ But $V=V_1+V_2$ and $V$ is fixed, so that $dV_1 = - dV_2$ and we obtain $$P_1=P_2$$ Now let's see the entropy maximum principle. 
The problem is that you forgot that $S$ is a function of energy too: $$S(U,V)= S_1 (U_1, V_1)+ S_2 (U_2, V_2)$$ $$d S = dS_1 + dS_2 = \frac{dU_1}{T_1} + \frac{P_1}{T_1} dV_1 + \frac{dU_2}{T_2} + \frac{P_2}{T_2} dV_2$$ Since $dU_{1,2} = -P_{1,2} dV_{1,2}$, we see that $dS$ vanishes identically, so that we can say nothing about $P_{1,2}$ and $T_{1,2}$: the entropy maximum principle is thus inconclusive. Update Your question actually inspired a lot of thought on my part over the past days, and I found out that...I was wrong. I basically followed the argument given by Callen in his book Thermodynamics (Appendix C), but it looks like either there are some issues with the argument itself, or I misinterpreted the argument. My error was really silly: I only showed that $P_1=P_2$ is a necessary condition for equilibrium, not that it is a sufficient condition, i.e. (if the argument is correct and) if the system is at equilibrium, then $P_1=P_2$, but if $P_1=P_2$ the system could still be out of equilibrium...which it is! I am still not really able to explain why the whole argument is wrong: some authors have said that equilibrium considerations should follow from the second law and not from the first and that the second law is not inconclusive. You can read for example this article and this article. They both use only thermodynamics considerations, but I warn you that the second tries to contradict the first. So the problem, from a purely thermodynamic point of view, is really difficult to solve without making mistakes, and I have found no argument that persuaded me completely and for good. This article takes into consideration exactly your problem and shows that the piston will move, making the additional assumption that the gases are ideal gases. We take the initial temperatures, T1 and T2, to be different, and the initial pressures, p1 and p2, to be equal. 
Once unblocked, the piston gains a translational energy to the right of order 1/2KT1 from a collision with a side 1 molecule, and a translational energy to the left of order 1/2KT2 from a collision with a side 2 molecule . In this way energy passes mainly from side 2 to side 1 if T2>T1. [...] In this process just considered, the pressures on the two sides of the piston are equal at all times, which means no "work" is done. However, the energy transfer occurs through the agency of the moving piston, and if one considers "work" to be the energy transferred via macroscopic, non-random motion, then it appears that "work" is done. This is really similar to the argument given by Feynman in his lectures (39-4). Feynman basically uses kinetic theory arguments to show that if we start with $P_1 \neq P_2$ the piston will at first "slosh back and forth" (cit.) until $P_1 = P_2$, and then, due to random pressure fluctuations, slowly converge towards thermodynamic equilibrium ($T_1=T_2$). The argument is really tricky because we assume that if the pressure is the same on both side the piston will not move, forgetting that pressure is just $2/3$ of the density times the average kinetic energy per particle $$P = \frac 2 3 \rho \langle \epsilon_K \rangle$$ just like temperature is basically the average kinetic energy (without the density multiplicative factor). So we are dealing with statistical quantities, which are not "constant" from a microscopic point of view. So while thermodynamically we say that if $P_1=P_2$ the piston won't move, from a microscopic point of view it will actually slightly jiggle back and forth because of the different collision it experiences from particles in the left and right sides. There have been also simulations of your problem which show that if we start with $P_1=P_2$ and $T_1\neq T_2$ the piston will oscillate until we reach thermodynamic equilibrium ($T_1=T_2$). See the pictures below, which I took from the article.
{ "domain": "physics.stackexchange", "id": 43702, "tags": "thermodynamics" }
Grammar generating odd number of 1s
Question: I have the language $L = \{w ∈ \{0, 1, 2\}^∗ \mid \text{the number of 1s in $w$ is odd}\}$. I am stuck halfway through this process. I have $$ \begin{align*} &A \to 0A \mid \epsilon \\ &B \to 2B \mid \epsilon \\ &C \to AB \mid BA \end{align*} $$ This should take care of 0s and 2s. Now I have this, but I'm not sure if this is right $$ \begin{align*} &X \to 1Y \mid Y1 \\ &Y \to YC1C1 \mid C \mid \epsilon \end{align*} $$ Answer: I would split the language into a part that assures that the number of 1s is odd (like, one 1), and another part of the language that maintains that. Similar to induction. So the first step is to write $$S \rightarrow A1A$$ This makes sure there is at least one $1$ in our grammar, and $A$ must maintain that. Any number of $0$s or $2$s have no effect, as well as the empty string: $$A \rightarrow 0A \mid A0$$ $$A \rightarrow 2A \mid A2$$ $$A \rightarrow \lambda$$ However, if $A$ contains a $1$, it must contain at least another one. How can we enforce this? Simple! We re-use $S$, which already enforces this: $$A \rightarrow 1S\mid S1$$ Note that I made every rule symmetrical regarding order. This is not a good practice in general when writing grammars for actual parsing, as it leads to highly ambiguous grammars. However here the focus is on getting the requirements right, so not having to worry about order at all makes it a bit easier.
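One way to gain confidence in such a grammar is to expand it randomly and count the 1s in every derived string (a quick Python sketch; the rule encoding and depth cutoff are mine, not part of the original answer):

```python
import random

# Encoding of the answer's grammar:  S -> A1A
# A -> 0A | A0 | 2A | A2 | 1S | S1 | (empty)
RULES = {
    "S": [["A", "1", "A"]],
    "A": [["0", "A"], ["A", "0"], ["2", "A"], ["A", "2"],
          ["1", "S"], ["S", "1"], []],
}

def derive(symbol="S", depth=8, rng=random):
    """Randomly expand `symbol` into a terminal string.  Past `depth`,
    A is forced to vanish (and S to its only rule) so expansion stops."""
    if symbol not in RULES:
        return symbol  # terminal: '0', '1' or '2'
    if depth <= 0:
        rule = [] if symbol == "A" else ["A", "1", "A"]
    else:
        rule = rng.choice(RULES[symbol])
    return "".join(derive(s, depth - 1, rng) for s in rule)
```

Every derivation from $S$ should contain an odd number of 1s, which matches the inductive argument: $A$ always produces an even count, and $S = A1A$ adds exactly one more.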
{ "domain": "cs.stackexchange", "id": 8888, "tags": "formal-grammars" }
Mars Rover challenge in Python - general feedback
Question: I have given the Mars Rover challenge a go in Python as DSA practice. Here is the challenge: A rover’s position and location is represented by a combination of x and y co-ordinates and a letter representing one of the four cardinal compass points. The plateau is divided up into a grid to simplify navigation. An example position might be 0, 0, N, which means the rover is in the bottom left corner and facing North. In order to control a rover , NASA sends a simple string of letters. The possible letters are ‘L’, ‘R’ and ‘M’. ‘L’ and ‘R’ makes the rover spin 90 degrees left or right respectively, without moving from its current spot. ‘M’ means move forward one grid point, and maintain the same heading._ Test Input: 5 5 1 2 N LMLMLMLMM 3 3 E MMRMMRMRRM Expected Output: 1 3 N 5 1 E I am still relatively new to Python so know this is a bit basic - but I wondered if I could get some general feedback on my code for best coding practice? RIGHT_ROTATE = { 'N':'E', 'E':'S', 'S':'W', 'W':'N' } LEFT_ROTATE = { 'N':'W', 'W':'S', 'S':'E', 'E':'N' } class MarsRover(): def __init__(self, X, Y, direction): self.X = X self.Y = Y self.direction = direction def rotate_right(self): self.direction = RIGHT_ROTATE[self.direction] def rotate_left(self): self.direction = LEFT_ROTATE[self.direction] def move(self): if self.direction == 'N': self.Y += 1 elif self.direction == 'E': self.X += 1 elif self.direction == 'S': self.Y -= 1 elif self.direction == 'W': self.X -= 1 def __str__(self): return str(self.X) + " " + str(self.Y) + " " + self.direction @classmethod def get_rover_position(self): position = input("Position:") X = int(position[0]) Y = int(position[2]) direction = position[4] return self(X, Y, direction) class Plateau(): def __init__(self, height, width): self.height = height self.width = width @classmethod def get_plateau_size(self): plateau_size = input("Plateau size:") height = int(plateau_size[0]) width = int(plateau_size[2]) return self(height, width) def main(): 
plateau = Plateau.get_plateau_size() while True: rover = MarsRover.get_rover_position() current_command = 0 command = input("Please input directions for rover.") command_length = len(command) while current_command <= command_length - 1: if command[current_command] == 'L': rover.rotate_left() current_command += 1 elif command[current_command] == 'R': rover.rotate_right() current_command += 1 elif command[current_command] == 'M': rover.move() current_command += 1 result = str(rover) print(result) if __name__ == '__main__': main() Answer: Classes Typically, a class encapsulates the data and functionality of the thing the class is modeling. The class might have various attributes that describe the thing (position, heading), and some methods for actions the thing does (move). Here, the problem says that NASA sends a command string to the rover. The rover then processes the commands in the command string. So, it makes sense to put the command processing code in a method of the Rover class. Plateau In a more complicated simulation, a Plateau (or Terrain) class might make sense for simulating the environment. For example, it could model ground slope or wheel traction, so the rover would require more energy to go uphill or need to go slow on loose soil. In this simulator, it is not needed. Looping When processing the command string, it would be more pythonic to iterate over the string directly, rather than using an index into the command string. Instead of while current_command <= command_length - 1: if command[current_command] == 'L': rover.rotate_left() current_command += 1 ... use for letter in command: if letter == 'L': ... elif letter == 'M': ... I/O It is generally a good idea to separate I/O from model code. For example, if you wanted to change the current code so the rover is controlled via a web interface, a RESTful API, or via intergalactic WiFi, the Rover class would need to be revised. f-strings f-strings makes is easy to format strings. 
Rather than str(self.X) + " " + str(self.Y) + " " + self.direction use f"{self.X} {self.Y} {self.direction}" All together, something like this: RIGHT_ROTATE = { 'N':'E', 'E':'S', 'S':'W', 'W':'N' } LEFT_ROTATE = { 'N':'W', 'W':'S', 'S':'E', 'E':'N' } class MarsRover(): """ class to simulate a Mars rover. """ def __init__(self, x, y, heading): self.x = x self.y = y self.heading = heading def rotate_right(self): """rotate rover 90 degrees clockwise.""" self.heading = RIGHT_ROTATE[self.heading] def rotate_left(self): """rotate rover 90 degrees counter clockwise.""" self.heading = LEFT_ROTATE[self.heading] def move(self): """ moves the rover 1 grid square along current heading.""" if self.heading == 'N': self.y += 1 elif self.heading == 'E': self.x += 1 elif self.heading == 'S': self.y -= 1 elif self.heading == 'W': self.x -= 1 def execute(self, command_string): """parse and execute single letter commands in a command string. L/R - turn 90 degrees left/right M - move one grid square in the current heading. """ for command in command_string: if command == 'L': self.rotate_left() elif command == 'R': self.rotate_right() elif command == 'M': self.move() else: raise ValueError(f"Unrecognized command '{command}'.") def __str__(self): return f"{self.x} {self.y} {self.heading}" def main(): # this should have some error checking coords = input("Enter x and y coordinate (e.g., 3 11): ") x, y = (int(s) for s in coords.strip().split()) heading = input("Enter initial heading: ") rover = MarsRover(x, y, heading) while True: command_string = input("Please input directions for rover.") if command_string == '': break rover.execute(command_string) print(str(rover)) if __name__ == '__main__': main() One more thing: enums Instead of letters 'N', 'S', etc. consider using enums.
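To illustrate the closing suggestion about enums, here is one possible sketch of a Heading enum that replaces the two rotation dictionaries with modular index arithmetic (my own illustration, not part of the original review):

```python
from enum import Enum

class Heading(Enum):
    # Values are (dx, dy) unit steps, listed clockwise from North.
    N = (0, 1)
    E = (1, 0)
    S = (0, -1)
    W = (-1, 0)

    def turned(self, quarter_turns):
        """Heading after `quarter_turns` clockwise 90-degree turns
        (negative for counter-clockwise)."""
        order = list(Heading)  # definition order: N, E, S, W
        i = (order.index(self) + quarter_turns) % len(order)
        return order[i]
```

The move step then becomes a single line (`dx, dy = rover.heading.value`), and invalid headings are impossible to construct.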
{ "domain": "codereview.stackexchange", "id": 37340, "tags": "python, algorithm" }
Street and road data
Question: Where do companies like Google and Yahoo and Mapquest get their street and road data? Is this a data source that the public has access to? Answer: Google Maps and MapQuest have gathered data and information through subcontractors and their own internal efforts. Their information is proprietary and is very hard to gain access to (and most methods of scraping this data would violate their Terms of Service). It's generally not considered a reliable source for getting anything beyond geocoded addresses. An alternative method for getting more specific geographic data (e.g. building footprints, forest extents, roads/highways, POIs, etc.) is through the OpenStreetMap Project. This information is Open and crowd-sourced with very few limitations. Specific information can be grabbed through their Overpass API or can be downloaded as a whole in their pbf format which can then be converted to shapefile/GML.
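As a small illustration of the Overpass route (hedged: the tag filter and the idea of POSTing to a public endpoint such as overpass-api.de are my assumptions, and no network request is made here), a helper that only builds an Overpass QL query string for roads in a bounding box might look like:

```python
def overpass_road_query(south, west, north, east, timeout=25):
    """Build an Overpass QL query for all highway-tagged ways in a bbox.

    The returned string is meant to be POSTed to an Overpass endpoint
    (e.g. https://overpass-api.de/api/interpreter), which is not done here.
    """
    bbox = f"{south},{west},{north},{east}"
    return (
        f"[out:json][timeout:{timeout}];"
        f'way["highway"]({bbox});'
        "out geom;"
    )
```

Swapping the `"highway"` tag for `"building"` or `"landuse"` retrieves the other feature classes mentioned above.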
{ "domain": "earthscience.stackexchange", "id": 909, "tags": "geography, open-data, mapping" }
Motorbike with parachute vs normal skydive
Question: Had a strange discussion in the coffeeroom today. The background was a story about a guy that took his motorcycle to transport himself to the airstrip to do a skydive. Since there is no place to put a parachute on the motorbike, he put the parachute on his back (since it is like a backpack). And apparently the parachute almost opened during the drive, but he was lucky and nothing happened. Now we come to the actual question, what would happen if you ride a motorbike and the parachute opens? Is it really that different from a skydive? Let's compare the two cases. The first case, the skydive, I guess free-fall speed is around 150-200km/h, and then you open the parachute and the speed drops and eventually speed is so low that you can survive the touchdown. And all the motion is vertical. In the second case, the motorbike, we can have the driver going at the same speed 150-200km/h. And we assume that he travels on a big open space, with flat tarmac, so there is nothing that can come in his way (like trees, or other cars). Also he is wearing full protective clothing, like a proper race driver. In this case all the motion is horizontal. What happens when the parachute opens? Will the biker die? or will he glide on the ground until friction and the parachute stops him? Is there really any difference if you are going vertically or horizontally? Except for the proximity to the ground? Answer: Air friction is the only thing really acting on the chute. Sure there is some gravity, but it should act basically the same vertically / horizontally. Think dragster at the end of the strip. The chute will put the brakes on the rider, not the bike. So chute opens, biker ripped from machine, biker stops with a few bruises, bike keeps going...
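To put a rough number on the answer's claim that the rider "stops with a few bruises": once the canopy is fully open, drag balances weight at a modest speed. A back-of-the-envelope Python sketch (canopy area, drag coefficient, and rider mass are all my assumptions, not from the question):

```python
import math

def canopy_terminal_speed(mass_kg=80.0, chute_area_m2=25.0,
                          cd=1.5, rho=1.225):
    """Speed at which drag balances weight: m*g = 0.5*rho*Cd*A*v^2.
    All parameter values here are rough guesses for illustration."""
    return math.sqrt(2.0 * mass_kg * 9.81 / (rho * cd * chute_area_m2))
```

With these guesses the balance speed comes out around 6 m/s (roughly 21 km/h) — comparable to a normal skydive landing — and whether the initial motion is vertical or horizontal only changes how that deceleration is oriented.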
{ "domain": "physics.stackexchange", "id": 25246, "tags": "newtonian-mechanics, soft-question, drag" }
How are the optical encoders used in platforms like Rover 5?
Question: I just got my rover 5 chassis with 4 motors and 4 quadrature encoders and I am trying to utilize the optical encoders. I know the encoders generate pulse signals which can be used to measure speed and direction of the motor. I want to know how 4 separate optical encoders add value for the controller of a Rover 5-like platform. The controller normally uses PWM to control the speed of the motor. If two motors are running at the same speed then the encoder output will be the same. So, why should the controller monitor all 4 encoders? Answer: No two motors will ever turn with the same angular velocity given the same voltage. If you power each of your Rover 5 motors with 12V (I don't know what they're rated for), you'll see that each motor will spin at slightly different speeds. If you want to guarantee you're traveling in a straight line, you need to implement velocity control on both wheels. One method of doing this is implementing a PID controller on the drive wheels to ensure their velocity is the same, based on encoder ticks per unit time. Otherwise (let's assume you have two wheels that are driving the vehicle) one wheel will turn faster than the other, and you'll slowly drift in one direction. However, you may want to turn a vehicle with no steering control! That means you want to turn your drive wheels at different velocities (albeit this will cause your wheels to slip horizontally and thus cause you to lose some traction/power), and so you need two different encoders that will be the input to two different velocity controllers. For a tank-like system, if the front left wheel encoder is ticking and the rear left wheel encoder is NOT ticking, then maybe your tread has fallen off! It's really very useful to create a robust system. Edit: Man I keep editing and adding more stuff...having multiple encoders will also allow you to identify failures. If you see that one wheel has stopped moving, it could be stuck on something and/or broken! 
This could allow you to halt the system and tell the operator that a mechanical failure has occurred with, for example, the front left wheel. This can only be done if you have more than one encoder. As a side note, it's always good to have a redundant system in case one breaks!
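A minimal sketch of the per-wheel velocity control described above (the first-order motor model, time constant, and gains are all invented for illustration; a real Rover 5 would feed in encoder ticks per control period):

```python
class WheelPI:
    """PI velocity controller for one wheel (gains are illustrative)."""
    def __init__(self, kp=0.8, ki=2.0):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def update(self, target, measured, dt):
        error = target - measured
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral


def simulate_wheel(motor_gain, target=1.0, dt=0.01, t_end=20.0, tau=0.2):
    """Drive a first-order motor model dv/dt = (gain*u - v)/tau toward
    the target speed under PI control; returns the final speed."""
    pid, v = WheelPI(), 0.0
    for _ in range(int(t_end / dt)):
        u = pid.update(target, v, dt)
        v += (motor_gain * u - v) * dt / tau
    return v
```

Both wheels settle at the same speed despite different motor gains — the property that keeps a differential-drive rover tracking straight.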
{ "domain": "robotics.stackexchange", "id": 239, "tags": "mobile-robot, motor, control" }
Driven oscillator with constant velocity
Question: I am trying to simulate a driven oscillator of sorts on the computer. I have a 1D spring-mass system, attached to a point in space. To make the math easier, I'm assuming the attachment point is right at the spring's equilibrium position. The total force of the system is: $F = k(P_a-P_m)$ where $P_m$ is the mass's position and $P_a$ is the attachment point. Later, I'd like to add damping, but for now I'm keeping it simple. So, I want to move $P_a$ around with constant velocity. How do I calculate where $P_m$ will be in $t$ seconds, given an initial $P_a$, $P_m$ and velocity for $P_a$? My attempts: It seems like I'd need to solve it as a differential equation. I haven't learned how to do differential equations though, so I'm not sure how to proceed. I plugged a few numbers in, and it looks like the motion of $P_a$ only affects the amplitude and phase of a normal spring's motion. Though, it seems like it might even shift the sin wave vertically, in some cases. Any help would be much appreciated. Answer: Newton's second law says $F=ma=m\frac{d^2P_m}{dt^2}$. Substituting yields $$m\frac{d^2P_m}{dt^2} = k(P_a-P_m)$$ Since $P_a$ is moving at constant velocity, we can substitute $vt$ for $P_a$. Doing this and rearranging, we find $$m\frac{d^2P_m}{dt^2} + kP_m = kvt$$ This is indeed a differential equation, a second order inhomogeneous linear ordinary differential equation in fact. I recommend khan academy to help you there.
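If you would rather step the system numerically than solve the ODE by hand, a semi-implicit Euler loop is enough; assuming the mass starts at rest at the origin and the attachment point moves as $vt$ (my choice of initial conditions), the closed-form solution is $P_m(t) = vt - (v/\omega)\sin(\omega t)$ with $\omega = \sqrt{k/m}$, and the sketch below checks itself against it:

```python
import math

def simulate_mass(m=1.0, k=1.0, v=1.0, dt=1e-3, steps=10_000):
    """Semi-implicit Euler for m * x'' = k * (v*t - x), with the mass
    starting at rest at the origin (assumed initial conditions)."""
    x, vel, t = 0.0, 0.0, 0.0
    for _ in range(steps):
        vel += (k / m) * (v * t - x) * dt
        x += vel * dt
        t += dt
    return x

def analytic(t, m=1.0, k=1.0, v=1.0):
    """Closed-form solution for the same initial conditions."""
    w = math.sqrt(k / m)
    return v * t - (v / w) * math.sin(w * t)
```

The oscillation simply rides on top of the drifting equilibrium $vt$ — which is why moving the attachment point only shifts and rephases an ordinary spring's motion.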
{ "domain": "physics.stackexchange", "id": 22730, "tags": "newtonian-mechanics, spring" }
Given a list of coordinates check if 4 points make a line
Question: I was doing some preparation for coding interviews and I came across the following question: Given a list of coordinates "x y" return True if there exists a line containing at least 4 of the points (ints), otherwise return False. The only solution I can think of is in $O(n^2)$. It consists of calculating the slope between every coordinate: Given that if $\frac{y_2-y_1}{x_2-x_1} = \frac{y_3-y_2}{x_3-x_2}$ then these three points are on the same line. I have been trying to think about a DP solution but haven't been able to think of one. Here is the current code: def points_in_lines(l): length = len(l) for i in range(0,length -1): gradients = {} for j in range(i+1, length): x1, y1 = l[i] x2, y2 = l[j] if x1 == x2: m = float("inf") else: m = float((y2 -y1) / (x2 - x1)) if m in gradients: gradients[m] += 1 if gradients[m] == 3: return(True) else: gradients[m] = 1 return(False) Answer: A different approach could be using the Hough transform. This does require that the points are in a bounded space, but would lead to an algorithm \$O(n)\$. This does not necessarily make it faster, but I think it will be significantly faster for large sets of points. It works this way: Parameterize the potential lines in the input space using a distance from origin and an angle (typically, the angle is that of the normal line that goes through the origin). Set up a "parameter space", a discretized space using distance and angle as its two axes. You need to choose a sampling here. Each bin in this space represents a potential line in the input space (or rather, a collection of lines, within a small range of angles and distances determined by the discretization of the parameter space). For each point in the set, add 1 to each bin in the parameter space that represents a line going through this point. There is an infinite number of potential lines going through one point. 
These lines form a sinusoid in the parameter space, and it is quick to compute the set of bins covered by this sinusoid. Each bin in the parameter space that has a value of 4 (or larger) represents the parameters of a line that covers 4 (or more) of the points. However, due to rounding of the parameters, it is possible these points are actually not collinear. To disambiguate, visit each point and determine which ones contributed to the bin in question, then verify they actually form a straight line. Under a worst case scenario, all points contribute to the same bin, but are not actually on the same line. However, if this happens, the discretization of the parameter space was chosen incorrectly. Note that this algorithm was invented to detect straight lines in an image, meaning that all input points have discrete coordinates. However, this is not a necessary requirement to apply the algorithm.
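Here is a hedged Python sketch of that procedure — accumulate votes in a discretized (angle, distance) space, then confirm any bin with at least four votes using an exact cross-product test, as the answer's disambiguation step suggests (the bin sizes are arbitrary choices of mine, and distinct input points are assumed):

```python
import math
from itertools import combinations
from collections import defaultdict

def has_line_of_four(points, n_angles=180, r_step=0.5):
    """Hough-vote distinct points into (angle, distance) bins, then
    verify any bin with >= 4 votes with an exact collinearity test."""
    bins = defaultdict(list)
    for x, y in points:
        for i in range(n_angles):
            theta = i * math.pi / n_angles
            r = x * math.cos(theta) + y * math.sin(theta)
            bins[(i, round(r / r_step))].append((x, y))

    def cross(o, p, q):
        # Twice the signed triangle area; zero iff o, p, q are collinear.
        return (p[0]-o[0]) * (q[1]-o[1]) - (p[1]-o[1]) * (q[0]-o[0])

    for pts in bins.values():
        if len(pts) < 4:
            continue
        for a, b, c, d in combinations(pts, 4):
            if cross(a, b, c) == 0 and cross(a, b, d) == 0:
                return True
    return False
```

The voting pass is linear in the number of points (times the fixed angle resolution); the exact check only runs on the rare candidate bins, exactly as described above.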
{ "domain": "codereview.stackexchange", "id": 31432, "tags": "python, algorithm, python-3.x, computational-geometry" }
autostart a launchfile after/while boot
Question: Hi, I looking for a solution to just turn on my system (ubuntu 11.04 server/headless) and launch a launchfile automatically. I tried the solution posted here with a boot startup script but it didn't work. Would be great to get your thoughts. here is my script in /etc/init.d/: start_ros () { #Get ROS bash commands source /opt/ros/electric/setup.sh #Get custom packages (this script will launch with 'sudo', so it'll be the root user export ROS_PACKAGE_PATH=/home/panda:$ROS_PACKAGE_PATH #Home directory of your user for logging export HOME="/home/panda" #Home directory for your user for logging export ROS_HOME="/home/panda/.ros" #I run ROS on a headless machine, so I want to be able to connect to it remotely export ROS_IP="192.168.9.123" #I like to log things, so this creates a new log for this script LOG="/var/log/panda.log" #Time/Date stamp the log echo -e "\n$(date +%Y:%m:%d-%T) - Starting ROS daemon at system startup" >> $LOG echo "This launch will export ROS's IP as $ip" >> $LOG #For bootup time calculations START=$(date +%s.%N) #This is important. You must wait until the IP address of the machine is actually configured by all of the Ubuntu process. Otherwise, you will get an error and launch will fail. This loop just loops until the IP comes up. while true; do IP="`ifconfig | grep 'inet addr:'192.168.9.123''| cut -d: -f2 | awk '{ print $1}'`" if [ "$IP" ] ; then echo "Found" break fi done #For bootup time calculations END=$(date +%s.%N) echo "It took $(echo "$END - $START"|bc ) seconds for an IP to come up" >> $LOG echo "Launching default_package default_launch into a screen with name 'ros'." 
>> $LOG screen -dmS ros roslaunch panda_cam high.launch } case "$1" in start) start_ros esac exit 0 After that I ran $ sudo update-rc.d panda_cam_startup defaults 99 and rebooted. During bootup I get the following output: /etc/rc2.d/S99panda_cam_startup: 45: source: not found Found /etc/rc2.d/S99panda_cam_startup: 45: bc: not found /etc/rc2.d/S99panda_cam_startup: 45: screen: not found But the source path is definitely right. No idea what's wrong. Originally posted by dinamex on ROS Answers with karma: 447 on 2013-01-11 Post score: 5 Original comments Comment by dbworth on 2013-01-11: Maybe you can get some inspiration from here: https://github.com/turtlebot/turtlebot/tree/master/turtlebot_bringup/upstart Answer: Take a look at ros-system-daemon-groovy. That is a recent package that should do what you are looking for. Originally posted by Eric Perko with karma: 8406 on 2013-01-12 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by dinamex on 2013-01-13: Is the name indicating the ROS version compatibility? I'm using electric... Comment by Eric Perko on 2013-01-13: It looks like a lot of the files reference groovy specifically, but, if you changed that, after a quick glance it looks like it would work.
{ "domain": "robotics.stackexchange", "id": 12373, "tags": "roslaunch" }
Does the new finding on "reversing a quantum jump mid-flight" rule out any interpretations of QM?
Question: This new finding by Minev et al. seems to suggest that transitions between atomic states are not instantaneous, but continuous processes wherein a superposition smoothly adjusts from favoring one state to another (if I understand it correctly). The authors also claim to be able to catch a system "mid-jump" and reverse it. Popular articles are here and *here. I am curious if this finding rules out any interpretations of QM. It seems to generally go against the Copenhagen attitude, which describes measurements as collapsing physical systems into a definite classical state. The popular articles indeed claim that the founders of QM would have been surprised by the new finding. The link with the asterisk mentions that something called "quantum trajectories theory" predicts what was observed. Is this an interpretation, or a theory? And are they implying that other interpretations/theories don't work? Answer: No. All news stories about this result are extremely misleading. The "quantum jump" paper demonstrates an interesting and novel experimental technique. However, it says absolutely nothing about the interpretation of quantum mechanics. It agrees with all proper interpretations, including the Copenhagen interpretation. What the researchers actually did When a quantum system transitions between two states, say $|0 \rangle$ to $|1 \rangle$, the full time-dependence of the quantum state looks like $$|\psi(t) \rangle = c_0(t) |0 \rangle + c_1(t) |1 \rangle.$$ The amplitude $c_0(t)$ to be in $|0 \rangle$ smoothly and gradually decreases, while the amplitude $c_1(t)$ to be in $|1 \rangle$ smoothly and gradually increases. You can read this off right from the Schrodinger equation, and it has been known for a hundred years. It is completely standard textbook material. The researchers essentially observed this amplitude changing in the middle of a transition, in a context where nobody had done so before. 
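The smooth amplitudes are easy to see in a toy model (my own illustration, not from the answer): for a two-level system driven with $H = (\Omega/2)\sigma_x$, the Schrodinger equation gives $|c_1(t)|^2 = \sin^2(\Omega t/2)$ — a perfectly continuous Rabi oscillation, with no jump anywhere:

```python
import numpy as np

def rabi_final_state(omega=1.0, t_end=2.0, steps=2000):
    """RK4-integrate the two-level Schrodinger equation i dc/dt = H c
    with H = (omega/2) * sigma_x, starting from c = (1, 0)."""
    dt = t_end / steps
    H = 0.5 * omega * np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

    def deriv(c):
        return -1j * (H @ c)

    c = np.array([1.0 + 0j, 0.0 + 0j])
    for _ in range(steps):
        k1 = deriv(c)
        k2 = deriv(c + 0.5 * dt * k1)
        k3 = deriv(c + 0.5 * dt * k2)
        k4 = deriv(c + dt * k3)
        c = c + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return c
```

The population of $|1\rangle$ matches $\sin^2(\Omega t/2)$ and the norm stays at 1 throughout — the transition is continuous at every instant.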
The authors themselves emphasize in their paper that what they found is in complete agreement with standard quantum mechanics. Yet countless news articles are describing the paper as a refutation of "quantum jumps", which proves the Copenhagen interpretation wrong and Bohmian mechanics right. Absolutely nothing about this is true. Why all news articles got it wrong The core problem is that popsci starts from a notion of "quantum jumps", which itself is wrong. As the popular articles and books would have it, quantum mechanics is just like classical mechanics, but particles can mysteriously, randomly, and instantly teleport around. Quantum mechanics says no such thing. This story is just a crutch to help explain how quantum particles can behave differently from classical ones, and a rather poor one at that. (I try to give some better intuition here.) No physicist actually believes that quantum jumps in this sense are a thing. The experiment indeed shows this picture is wrong, but so do thousands of existing experiments. The reason that even good popsci outlets used this crutch is two-fold. First off, the founders of quantum mechanics really did have a notion of quantum jumps. However, they were talking about something different: the fact that there is no quantum state "in between" $|0 \rangle$ and $|1 \rangle$ (which, e.g. could be atomic energy levels) such as $|1/2 \rangle$. The interpolating states are just superpositions of $|0 \rangle$ and $|1 \rangle$. This is standard textbook material: the states are discrete, but the time evolution is continuous because the coefficients $c_0(t)/c_1(t)$ can vary continuously. But the distinction is rarely made in popsci. (To be fair, there was an incredibly short period in the tumultuous beginning of "old quantum theory" where some people did think of quantum transitions as discontinuous. However, that view has been irrelevant for a century. 
Not every early quote from the founders of QM should be taken seriously; we know better now.) Second off, the original press release from the research group had the same language about quantum jumps. Now, I understand what they were trying to do. They wanted to give their paper, about a rather technical aspect of experimental measurement, a compelling narrative. And they didn't say anything technically wrong in their press release. But they should've known that their framing was basically begging to be misinterpreted to make their work look more revolutionary than it actually is. Interpretations of quantum mechanics There's a very naive interpretation of quantum mechanics, which I'll call "dumb Copenhagen". In dumb Copenhagen, everything evolves nicely by the Schrodinger equation, but when any atomic-scale system interacts with any larger system, its state instantly "collapses". This experiment indeed contradicts dumb Copenhagen, but it's far from the first to; physicists have known that dumb Copenhagen doesn't work for 50 years. (To be fair, it is used as a crutch in introductory textbooks to avoid having to say too much about the measurement process.) We know the process of measurement is intimately tied to decoherence, which is perfectly continuous. Copenhagen and, say, many worlds just differ on how to treat branches of a superposition that have completely decohered. Another issue is that proponents of Bohmian mechanics seem to latch onto every new experimental result and call it a proof that their interpretation alone is right, even when it's perfectly compatible with standard QM. To physicists, Bohmian mechanics is a series of ugly and complicated hacks, about ten times as bad as the ether, which is why it took last place in a poll of researchers working in quantum foundations. But many others really like it. 
For instance, philosophers who prefer realist interpretations of quantum mechanics love it because it lets them say that quantum mechanics is "really" classical mechanics underneath (which actually isn't true even in Bohmian mechanics), and hence avoid grappling with the implications of QM proper. (I rant about this a little more here.) Quantum mechanics is one of the most robust and successful frameworks we have ever devised. If you hear any news article saying that something fundamental about our understanding of it has changed, there is a 99.9% chance it's wrong. Don't believe everything you read!
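The textbook statement above is easy to verify numerically. Below is a minimal sketch (plain NumPy; resonant Rabi driving is my own choice of example, not the experiment's actual Hamiltonian) showing the amplitudes varying smoothly, with no instantaneous jump:

```python
import numpy as np

# Resonant Rabi driving of a two-level system, H = (omega/2) * sigma_x.
# Starting from |0>, the Schrodinger equation gives the closed-form
# amplitudes c0(t) = cos(omega*t/2) and c1(t) = -i*sin(omega*t/2).
def amplitudes(omega, t):
    return np.cos(omega * t / 2), -1j * np.sin(omega * t / 2)

omega = 2 * np.pi                  # one full Rabi cycle per unit time
ts = np.linspace(0.0, 0.5, 1001)   # half a cycle: |0> evolves into |1>
c0, c1 = amplitudes(omega, ts)
p1 = np.abs(c1) ** 2               # population of |1> at each instant

# The state stays normalized at every instant...
assert np.allclose(np.abs(c0) ** 2 + p1, 1.0)

# ...and the population of |1> climbs smoothly from 0 to 1:
max_step = np.max(np.abs(np.diff(p1)))
print(f"largest change in P(|1>) between adjacent time steps: {max_step:.4f}")
```

The largest step between adjacent samples shrinks as the time grid is refined, which is exactly what "continuous amplitude evolution" means.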
{ "domain": "physics.stackexchange", "id": 59018, "tags": "quantum-mechanics" }
CRAP index (56) in Web Scraper Engine
Question: I am working on a Web Scraper for the first time using Test Driven Development, however I have caught myself with a huge CRAP (Change Risk Anti-Patterns) index (56) and I cannot seem to find a solution to this.

    <?php
    /**
     * Code Snippet MIT licensed taken from proprietary source.
     */

    namespace WebScraper;

    use DateTime;
    use WebScraper\Contracts\Scraper;
    use WebScraper\Collections\BaseCollection;
    use WebScraper\Exceptions\EngineIgnitionException;

    class Engine
    {
        protected $scraper;
        protected $collection;
        protected $targetAdAge;
        protected $currentAdAge = 1;
        protected $targetPage = 999;
        protected $currentPage = 1;

        public function __construct(Scraper $scraper, BaseCollection $collection)
        {
            $this->scraper = $scraper;
            $this->collection = $collection;
        }

        public function start($targetAdAge = null)
        {
            if ($this->collection->count() >= 1) {
                throw new EngineIgnitionException("Engine can not be ran twice in the same instance.");
            }

            if (array_key_exists('adage', $requestData = $this->scraper->queryBuilder()->buildArray())) {
                $this->targetAdAge = $requestData['adage'];
            }

            if (! $this->targetAdAge) {
                throw new EngineIgnitionException("targetAdAge value is a core part of the Engine and it is missing.");
            }

            for ($this->currentAdAge = 1; $this->currentAdAge <= $this->targetAdAge; ++$this->currentAdAge) {
                for ($this->currentPage = 1; $this->currentPage <= $this->targetPage; ++$this->currentPage) {
                    $this->scraper->queryBuilder()->adage($this->currentAdAge)->page($this->currentPage);

                    $parser = $this->scraper->extract()->parse();
                    $items = $parser->items();
                    $this->targetPage = $parser->maxPages();

                    foreach ($items as &$item) {
                        $item->age = (new DateTime('now'))->modify("-{$this->currentAdAge} day");
                        $this->collection->push($item);
                    }
                    unset($item);
                }
            }

            return $this->collection;
        }
    }

The main issue is in the start() function. Basically, this class builds an Engine to web scrape a website that uses pagination to show its data, and because of that I am forced to use iterators and scrape every single page to gather all the data.
Line 48 is the function that does the dirty job and scrapes the website. The thing is, this function is likely to take even minutes to complete if there's a lot of data to extract or if scraping filters are very broad. This is the only function where I really do not know how to refactor it in a better way. The good thing, however, is that every other single class has a low CRAP index (< 5) and Code Coverage is above 85%. How do I refactor the iterator of page and adAge? Answer: The code looks mostly fine to me. If you feel that the code isn't as clean as it should be, or just want it to pass the metric, what you could do is split it into functions:

    $this->scraper->queryBuilder()->adage($this->currentAdAge)->page($this->currentPage);
    $parser = $this->scraper->extract()->parse();
    $items = $parser->items();
    $this->targetPage = $parser->maxPages();

That goes into a private function which sets targetPage and returns $items... you could call it parsePageToItems. And this

    foreach ($items as &$item) {
        $item->age = (new DateTime('now'))->modify("-{$this->currentAdAge} day");
        $this->collection->push($item);
    }
    unset($item);

Could go into a separate function mapItems. You could then write a test for mapItems in particular, increasing coverage and further reducing your CRAP score.
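Since the goal is to drive the metric down, it may help to look at the formula itself. Here is a short sketch in Python (the formula is the published CRAP definition, with comp = cyclomatic complexity; I use fractional coverage rather than a percentage) of why both adding tests and splitting the method reduce the score:

```python
def crap(complexity, coverage):
    """CRAP(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m),
    with coverage given as a fraction in [0, 1]."""
    return complexity ** 2 * (1.0 - coverage) ** 3 + complexity

big_uncovered = crap(15, 0.0)                   # no tests: 15^2 + 15 = 240
big_covered = crap(15, 0.85)                    # coverage alone helps a lot
split_covered = crap(8, 0.85) + crap(7, 0.85)   # splitting shrinks the comp^2 term too

print(round(big_uncovered), round(big_covered, 2), round(split_covered, 2))
```

Because complexity enters squared and coverage is cubed, extracting parsePageToItems and mapItems and then covering them with tests attacks both terms at once.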
{ "domain": "codereview.stackexchange", "id": 18750, "tags": "php, unit-testing, web-scraping" }
Sick LMS Series Sensors
Question: All, I see on the ros.org website that the Sick LMS1xx & LMS2xx sensors are supported in ROS: Does anyone know if the new Sick LMS5xx sensor is supported? Does anyone know of a 360 degree Sick sensor supported in ROS? Thank you in advance! Andy Originally posted by Andrius on ROS Answers with karma: 41 on 2013-01-28 Post score: 2 Answer: We use the RCPRG package to work with the LMS1xx series sensors via Ethernet. I have played with using it with the LMS5xx, but ran into issues which I think were mainly related to the LMS5xx not being as easy to set up for continuous output as the LMS1xx. Originally posted by Ryan with karma: 3248 on 2013-02-01 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by ZdenekM on 2013-02-04: I just want to notice you, that LMS1xx driver is a bit buggy. Comment by Ryan on 2013-02-04: How so? Comment by ZdenekM on 2013-02-04: At least length of the array for ranges is shorter for higher angular resolution and longer for lower one. You need to swap two lines. And there are probably more bugs - it seems that min and max angles are not set properly etc. Comment by rohan on 2013-02-25: Could you please attach the code where the lines needs to be swapped? Comment by tfoote on 2013-03-21: @ZdenekM please file tickets against software you know about bugs in. Comment by ZdenekM on 2013-04-08: @rohan: In LMS1xx_node.cpp, find line 55 ("num_values = 541;") and line 59 ("num_values = 1081;"). Swap numeric values. @tfoote: This code is not released as far as I know. Where and how I can post bug report? I can't find even author's email... Comment by ZdenekM on 2013-04-08: Now I see it's on github (I thought it's on some svn)... Hm, I will try to fork it and make pull request.
{ "domain": "robotics.stackexchange", "id": 12600, "tags": "ros, laser, sicklms, sensor, sick" }
Duality and 1 forms
Question: How is a dual map defined if we are talking about partial derivatives and 1 forms? Answer: The dual is defined by the map $$\frac{\partial}{\partial x^\mu} \mapsto g_{\mu\nu}\mathrm{d}x^\nu$$ and hence the dual of $a \partial_t + b \partial_1$ is only $a \mathrm{d}x^t + b \mathrm{d}x^1$ iff the metric is Euclidean flat in the $t,1$-direction. Note: Do not write "$X=$" if you mean the dual of $X$ is equal to something. Duality/equivalence is not equality, a 1-form is still a different object from a vector field.
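Numerically, the lowering map is just a matrix-vector product. A small sketch (NumPy; the 2D metrics and the sign convention are my own choice of example):

```python
import numpy as np

def lower_index(g, v):
    """Map a vector v^mu to the 1-form components v_mu = g_{mu nu} v^nu."""
    return g @ v

g_euclidean = np.eye(2)             # flat Euclidean metric in the (t, x) plane
g_minkowski = np.diag([-1.0, 1.0])  # signature (-, +): one common sign choice

v = np.array([2.0, 3.0])            # the vector field a*d/dt + b*d/dx with a=2, b=3
w_euclidean = lower_index(g_euclidean, v)   # components of 2 dt + 3 dx
w_minkowski = lower_index(g_minkowski, v)   # components of -2 dt + 3 dx
print(w_euclidean, w_minkowski)
```

With the Euclidean metric the components pass through unchanged, which is the "iff the metric is Euclidean flat" condition in the answer; with a Minkowski signature the time component picks up a sign.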
{ "domain": "physics.stackexchange", "id": 19480, "tags": "homework-and-exercises, general-relativity, differential-geometry, vector-fields" }
Is there a way to decompose a quantum circuit into layers?
Question: For example, if I take the following circuit as the input (either QASM or Qiskit):

    qreg q[2];
    creg c[2];
    x q[1];
    h q[0];
    h q[1];
    cx q[0],q[1];
    h q[0];
    measure q[0] -> c[0];
    measure q[1] -> c[1];

The expected output will be:

    layer[0] = [H [q0], X [q1]]
    layer[1] = [H [q1]]
    layer[2] = [cnot [q0] [q1]]
    layer[3] = [H [q0]]
    layer[4] = [measure [q0]]
    layer[5] = [measure [q1]]

Is there a Qiskit function to achieve this? If not, suggestions to implement this task are also welcomed. Thanks! Answer: You can decompose a quantum circuit into layers using the DAGCircuit.layers() method:

    from qiskit.converters import circuit_to_dag, dag_to_circuit
    from IPython.display import display

    dag = circuit_to_dag(circ)
    for layer in dag.layers():
        layer_as_circuit = dag_to_circuit(layer['graph'])
        display(layer_as_circuit.draw('mpl'))

where circ is a QuantumCircuit. The output will be a drawing of each layer as its own one-layer circuit. The DAGCircuit.layers() method constructs the layers using a greedy algorithm. You can also break down your circuit into layers based on some scheduling policy. In the following example we apply an "as late as possible" (ALAP) scheduling policy:

    from qiskit.transpiler import PassManager, InstructionDurations
    from qiskit.transpiler.passes import ALAPScheduleAnalysis, PadDelay

    # Apply the scheduling policy:
    instruction_durations = InstructionDurations(
        [
            ("h", None, 160),
            ("x", None, 160),
            ("cx", None, 800),
            ("measure", None, 1600),
        ]
    )
    pass_manager = PassManager(
        [
            ALAPScheduleAnalysis(instruction_durations),
            PadDelay(),
        ]
    )
    transpiled_circ = pass_manager.run(circ)

    # Use DAGCircuit.layers() method with the transpiled circuit:
    dag = circuit_to_dag(transpiled_circ)
    for layer in dag.layers():
        layer_as_circuit = dag_to_circuit(layer['graph'])
        # Remove the Delay instructions (filter into a new list rather than
        # removing elements while iterating over them):
        layer_as_circuit.data = [
            inst for inst in layer_as_circuit.data
            if inst.operation.name != 'delay'
        ]
        display(layer_as_circuit.draw('mpl'))

The result is again one drawn circuit per layer. Similarly, you can apply an "as soon as possible" (ASAP) scheduling policy.
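If you just want the layering logic without Qiskit, the greedy idea behind DAGCircuit.layers() can be sketched in a few lines of plain Python (my own illustration, not Qiskit's implementation): put each operation into the earliest layer in which all of its qubits are free. Note that greedy packing may place the final H and the second measurement in the same layer, unlike the strictly sequential listing in the question.

```python
def greedy_layers(ops, n_qubits):
    """ops: list of (name, qubit_tuple) in program order.
    Each op goes into the earliest layer where all of its qubits are free."""
    next_free = [0] * n_qubits   # index of the first layer each qubit is free in
    layers = []
    for name, qubits in ops:
        layer = max(next_free[q] for q in qubits)
        while len(layers) <= layer:
            layers.append([])
        layers[layer].append((name, qubits))
        for q in qubits:
            next_free[q] = layer + 1
    return layers

circuit = [("x", (1,)), ("h", (0,)), ("h", (1,)), ("cx", (0, 1)),
           ("h", (0,)), ("measure", (0,)), ("measure", (1,))]

for i, layer in enumerate(greedy_layers(circuit, 2)):
    print(f"layer[{i}] =", layer)
```

On the question's circuit this produces five layers, with the CNOT alone in layer 2.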
{ "domain": "quantumcomputing.stackexchange", "id": 4284, "tags": "qiskit, quantum-circuit, quantum-volume" }
Does a fan rotating with a uniform angular velocity consume electrical energy?
Question: Work done on a rotating body is equal to the change in its kinetic energy. When an electric fan rotates with a constant angular velocity, then its kinetic energy doesn't change. Does it mean that it doesn't consume electrical energy? Answer: It definitely does consume electrical energy. Why? Because there's some opposing force faced by it while it rotates, and this force is often known as air drag/air resistance. You can see the effect of air drag once you switch off the fan. The fan decelerates from its original angular velocity until it stops completely. This deceleration is due to the motion-opposing air drag. And thus while rotating, the fan continually loses energy to the air drag, and this lost energy is primarily converted to heat. Thus you don't need electricity to change the kinetic energy; rather, you need electrical energy to compensate for the energy lost due to the air drag acting on the fan. Also the air drag is the most common, most general and easiest to understand among all the losses experienced by a fan. However there are many other factors which also increase the loss of energy in a fan. Here's an extremely nice flowchart/Sankey diagram showing this: Source (PDF)
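To put rough numbers on the energy balance (the torque and inertia values below are illustrative guesses, not measurements): at constant speed dKE/dt = 0, so essentially all of the electrical input goes into working against the drag torque, P = τω.

```python
import math

def drag_power(torque_drag, omega):
    """Steady state: dKE/dt = 0, so the electrical input must supply
    roughly P = tau * omega (motor losses only add to the bill)."""
    return torque_drag * omega

def kinetic_energy(moment_of_inertia, omega):
    return 0.5 * moment_of_inertia * omega ** 2

omega = 2 * math.pi * 5.0   # 5 revolutions per second, in rad/s
I_blades = 0.05             # kg m^2, a guess for a ceiling fan rotor
tau_drag = 0.15             # N m of aerodynamic drag torque (assumed)

ke = kinetic_energy(I_blades, omega)   # fixed as long as omega is constant
p = drag_power(tau_drag, omega)        # what the mains must keep supplying
print(f"stored KE: {ke:.1f} J (constant), drag power: {p:.1f} W")
```

The stored kinetic energy never changes, yet a few watts flow through the fan continuously and end up as heat in the air.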
{ "domain": "physics.stackexchange", "id": 67536, "tags": "energy, energy-conservation, work, dissipation" }
Replacing part of a string with filler characters
Question: I came up with this function rangedReplace as an exercise to use recursion and to try different ways to make it work (I made at least 3 completely different versions). I think this version is the best, but I have a feeling I can improve the way the functions/arguments are handled, especially in the censor and main functions. How can I make these functions more idiomatic Haskell?

    -- Replace the letters in the range [from:to[ ("to" not included) with chr
    rangedReplace :: Int -> Int -> Char -> String -> String
    rangedReplace from to chr str
      | from == to = str
      | otherwise = (take from' str) ++ rangedReplace' chr n (drop from' str)
      where from' = min from to
            to' = max from to
            n = to' - from'

    -- Helper function
    rangedReplace' :: Char -> Int -> String -> String
    rangedReplace' chr n str
      | n == 0 = str
      | str == "" = ""
      | otherwise = chr : (rangedReplace' chr (n-1) (tail str))

Usage example:

    censor (from,to) = rangedReplace from to '*'

    main = do
      putStrLn $ foldr id "Fork bombs" (map censor [(1,3),(6,8)])

Answer: Parameter design To facilitate currying, you should arrange a function's parameters starting with the one that is the least likely to vary. In this case, I would consider the fill character the parameter that is most likely to be fixed. The next parameter, I think, should be the range, and it should be specified as a pair rather than as two separate parameters. That would make rangedReplace fill (from, to) a filter that takes a string and produces a string. Using inclusive bounds for from and exclusive bounds for to is the right way to go. I don't think it's beneficial to automatically swap from and to. That just encourages your caller to be sloppy. Rather, I would expect a rangedReplace from 3 to 1 to act as a no-op. With rangedReplace fill (from, to) string, your main function no longer needs id, map, and the censor helper function.
    main = do
      putStrLn $ foldr (rangedReplace '*') "Fork bombs" [(1, 3), (6, 8)]

Implementation The use of ++ should be considered a yellow flag, as it indicates a full list traversal. You'll be doing three O(from') operations: take, drop, and ++. I'd rather go with a recursive solution that does everything in one pass, and is fully lazy. The implementation is shorter, too.

    rangedReplace :: Char -> (Int, Int) -> String -> String
    rangedReplace _ _ [] = []
    rangedReplace fill (from, to) s@(c:cs)
      | to <= 0 = s
      | from <= 0 = fill:cs'
      | otherwise = c:cs'
      where cs' = rangedReplace fill ((from - 1), (to - 1)) cs
{ "domain": "codereview.stackexchange", "id": 8348, "tags": "strings, haskell" }
Binaries slam toolbox galactic
Question: Does anyone know if there are binaries released for the galactic version of the slam-toolbox and, if not, when it will be released? Originally posted by charlie92 on ROS Answers with karma: 87 on 2021-06-03 Post score: 0 Answer: https://build.ros2.org/job/Gbin_uF64__slam_toolbox__ubuntu_focal_amd64__binary/ It will be available in the first Galactic sync Originally posted by stevemacenski with karma: 8272 on 2021-06-03 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 36489, "tags": "ros2" }
ROS callbacks' scope and objects' lifecycles
Question: I am quite new to C++. I know that callbacks with return type of ConstPtr& are of boost::shared_ptr<const MsgType> type. Now, when dealing with callbacks, does one copy or initialize objects by reference? For example, I have a ROS msg with fields int x and int y[]. Within the callback, would I copy x and y or initialize references for both of them? Another thing, how can I know which objects are being copied and which ones are being referenced? Let's say msg->x and msg->y are they copied or referenced? I am not familiar with object lifecycle and RAII at this point, could someone please explain to me in Layman's terms what's going on within ROS callbacks when ConstPtr& is used? Originally posted by Hypomania on ROS Answers with karma: 120 on 2019-01-19 Post score: 0 Answer: Just to clarify something, callbacks don't have a return value they are void. You may mean they accept a parameter of type ConstPtr&. This parameter that is passed to the callback function is a reference to the actual message data stored by ROS message passing system. Creating references to this data is a bad idea since after the callback completes that data will probably stop existing so you'll have invalid pointers, bugs, segfaults and all the things we hate. For this reason your callback function should always make copies of data that you intend to use after the callback has finished. Your other questions are more general C++ issues, but your question doesn't make any sense without the context of the code it's within. A direct copy, or pointer copy (reference) are always done with an assignment = statement, without an assignment statement we can't answer your question. I would like to point out that C++ ROS is not a great place to be if you're new to C++. We use some very complex features of the language, I would recommend working through some dedicated C++ tutorials or courses outside of ROS to learn the language itself. 
Originally posted by PeteBlackerThe3rd with karma: 9529 on 2019-01-19 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Hypomania on 2019-01-20: Thank you again for your excellent answer. I understand that C++ ROS is complex, I am coming from embedded C background so I do know some C style concepts. Comment by Hypomania on 2019-01-20: As to my second question, all I want to know for now is when objects within callbacks are copied. I remember you telling me that copying a vector is as simple as doing msg->vec, however with arrays it was completely different, there is no simple arrow operator assignment, why? Comment by PeteBlackerThe3rd on 2019-01-20: msg->vec doesn't copy anything. It's a statement which has the value of the vec member of a struct/class which is pointed to by a variable called msg. The -> operator isn't an assignment; it simply refers to a member of a structure identified by a pointer. Comment by Hypomania on 2019-01-21: @PeteBlackerThe3rd, so I am assuming the way vec would be copied is by one of the operator overloads? I thought equating two vectors together is considered as a copy. msg->vec would mean you are equating a vector to another vector. For e.g. x = (*msg).vec, where vec and x are both vecs. Comment by PeteBlackerThe3rd on 2019-01-21: You're right, x = (*msg).vec will copy the vector using the overloaded = operator. Similarly std::vector<type> *x = &((*msg).vec); will copy a pointer to the vector. The first makes a copy of the data while the second makes a reference to the original. Comment by Hypomania on 2019-01-24: @PeteBlackerThe3rd, crystal clear, thank you!
{ "domain": "robotics.stackexchange", "id": 32298, "tags": "ros, callback, c++11, ros-kinetic" }
Understanding the meaning of the integral of energy of an $\rm H$ atom
Question: This is probably very basic but my notes are confusing and not clearly written so I would appreciate some help in trying to clarify the following points: If we consider the expression $$\left\langle \phi_{1s}(r_A)\left|-\frac{\nabla^2}{2}-\frac{1}{r_A}\right|\phi_{1s} (r_A)\right\rangle$$ where $\phi_{1s}(r_A)$ is the $1s$ orbital of the hydrogen atom centered on proton $A$ and $r_A$ denotes the position of the electron relative to the position of proton $A$. My question regards the above expression. I know that this is equal to the energy of the hydrogen atom. Why do some sources write the solution of the expression as $-\frac{1}{2}$? My second question is whether this is equal to the ionisation energy of a hydrogen atom in its ground electronic state. Answer: When you find the ground state energy of the hydrogen atom expressed as $-\frac12$, it implies that the so-called Hartree atomic units (a.u.) have been used. In such units $\frac{e^2}{4\pi\epsilon_0}=1$, $\hbar=1$, and $m_e=1$. Therefore the ground state energy of the hydrogen atom, $E_{GS}=-\frac{m_ee^4}{2(4\pi\epsilon_0)^2\hbar^2}=-\frac12 a.u.$. And, yes, $E_{GS}$ is also equal to the ionization energy since the underlying convention for the potential energy is to have it zero at an infinite distance from the nucleus.
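A quick numerical check of that statement (standard constant values typed in by hand here, so treat the exact digits as approximate):

```python
# Ground-state energy of hydrogen from SI constants, then converted
# to hartree (atomic units) and to eV.
m_e = 9.1093837015e-31    # electron mass, kg
e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J s
pi = 3.141592653589793

E_SI = -m_e * e**4 / (2 * (4 * pi * eps0)**2 * hbar**2)   # joules
hartree = m_e * e**4 / ((4 * pi * eps0)**2 * hbar**2)     # one E_h, in joules

print(f"E_1s = {E_SI:.3e} J = {E_SI / hartree:+.4f} hartree = {E_SI / e:.2f} eV")
```

The result is -1/2 hartree by construction, which is also (up to sign) the familiar 13.6 eV ionization energy of ground-state hydrogen.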
{ "domain": "physics.stackexchange", "id": 76035, "tags": "quantum-mechanics, energy, atomic-physics, units, hydrogen" }
Pseudo Game of Cups in Python
Question: DESCRIPTION: [Inspired by Chandler's GameOfCups with Joey in "Friends"]. Program gets 5-digit zipcode from user. (Assume user won't make a mistake, and will enter exactly 5 digits). Program awards points based on a series of rules, and reports the total points earned at the end. The 8 rules are embedded as comments in the code. For each rule, besides adding points (or not) to the total, the rule displays "Rule _ got _ points, so total is now _" (It prints this even if the rule added 0 points to the total).

    """
    RULES
    +5 when first and last digit match
    +6 when second digit is twice the first AND third digit is greater than second or fourth digit
    +7 if any 7 is in the zipcode
    +8 when there's no "13" in the MIDDLE of the zipcode
    +9 when all three middle digits match
    +10 when third and fourth digits match
    +11 when zipcode is palindrome (12121 == 12121, while 12345 != 54321)
    """

Here is my solution to the challenge above:

    zipcode = input("Enter your zipcode: ")
    total_points = 0

    #Rule 1
    points = 5 if zipcode[0] == zipcode[-1] else 0
    total_points += points
    print(f"Rule 1 got {points} points, so total is now {total_points}")

    #Rule 2
    points = 6 if (int(zipcode[1]) * 2) > int(zipcode[0]) and (int(zipcode[2]) > int(zipcode[1]) or int(zipcode[2]) > int(zipcode[3])) else 0
    total_points += points
    print(f"Rule 2 got {points} points, so total is now {total_points}")

    #Rule 3
    points = 7 if "7" in zipcode else 0
    total_points += points
    print(f"Rule 3 got {points} points, so total is now {total_points}")

    #Rule 4
    points = 8 if "13" not in zipcode[1:-1] else 0
    total_points += points
    print(f"Rule 4 got {points} points, so total is now {total_points}")

    #Rule 5
    points = 9 if zipcode[1] == zipcode[2] == zipcode[3] else 0
    total_points += points
    print(f"Rule 5 got {points} points, so total is now {total_points}")

    #Rule 6
    points = 10 if zipcode[2] == zipcode[3] else 0
    total_points += points
    print(f"Rule 6 got {points} points, so total is now {total_points}")

    #Rule 7
    points = 11 if zipcode == zipcode[::-1] else 0
    total_points += points
    print(f"Rule 7 got {points} points, so total is now {total_points}")

    print(f"{zipcode} got {total_points} points!")

I feel like there is a much better way to do this. The print statements are repetitious, and reassigning points each time I check the zip code feels weird. Any suggestions are helpful and appreciated. Answer: Your code can be simplified using a simple loop, eliminating most of the duplicated code:

    def game_of_cups(zipcode, rules):
        total_points = 0
        for num, rule in enumerate(rules, 1):
            rule_passes = rule(zipcode)
            points = num + 4 if rule_passes else 0
            total_points += points
            print(f"Rule {num} got {points} points, so total is now {total_points}")
        print(f"{zipcode} got {total_points} points!")

You just need the appropriate rule functions, like:

    def rule1(zipcode):
        return zipcode[0] == zipcode[-1]

    def rule2(zipcode):
        a, b, c, d, e = map(int, zipcode)
        return b * 2 > a and c > min(b, d)

    ... etc ...

And then a list of rules to pass to the game:

    rules = [
        rule1, rule2, rule3, rule4, rule5, rule6, rule7
    ]

Feel free to name the functions more appropriately; they don't need to be named rule#. Are you missing a rule? You said there were 8 rules. Your implementation of rule #2 doesn't match the comment description of rule #2. I think it should be b == a * 2, not b * 2 > a.
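Putting the answer's pieces together, a complete runnable version might look like the sketch below (my own assembly; rule 2 uses the b == 2*a reading suggested at the end of the answer, and the names are illustrative):

```python
def digits_of(z):
    return [int(ch) for ch in z]

def rule2(z):
    a, b, c, d, _ = digits_of(z)
    return b == 2 * a and c > min(b, d)

RULES = [
    lambda z: z[0] == z[-1],          # +5: first and last digit match
    rule2,                            # +6: b is twice a, c beats b or d
    lambda z: "7" in z,               # +7: any 7 in the zipcode
    lambda z: "13" not in z[1:-1],    # +8: no "13" in the middle
    lambda z: z[1] == z[2] == z[3],   # +9: all three middle digits match
    lambda z: z[2] == z[3],           # +10: third and fourth digits match
    lambda z: z == z[::-1],           # +11: palindrome
]

def game_of_cups(zipcode, rules=RULES):
    total = 0
    for num, rule in enumerate(rules, 1):
        points = num + 4 if rule(zipcode) else 0
        total += points
        print(f"Rule {num} got {points} points, so total is now {total}")
    print(f"{zipcode} got {total} points!")
    return total
```

For example, game_of_cups("12721") passes rules 1, 2, 3, 4 and 7 and returns 5 + 6 + 7 + 8 + 11 = 37.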
{ "domain": "codereview.stackexchange", "id": 36026, "tags": "python, python-3.x, programming-challenge, game" }
Computing probability of sentence using N-grams
Question: I have implemented N-grams by constructing a tree (or a trie, technically) that stores frequencies of each N-gram. Each path in the tree represents an N-gram and its frequency: the path consists of N nodes (each node containing a word), followed by a leaf node containing the frequency. So, each path in the tree is of length N + 1. I'm now trying to compute the probability of observing a given sentence, and am having some trouble, particularly when N > 2. For the sentence <s> Hello world </s> using N = 1, the probability is P(<s>) * P(Hello) * P(world) * P(</s>). Using N = 2, the probability is P(Hello | <s>) * P(world | Hello) * P(</s> | world). But for N = 3, I'm not sure what to do. If I compute P(Hello | <s>) * P(world | <s> Hello), then my tree will give an error since <s> Hello is a bigram and the tree is only defined for trigrams. I considered maybe wrapping each sentence an additional time, e.g. <s> <s> Hello world </s> </s> then computing P(Hello | <s> <s>) * P(world | <s> Hello) * P(</s> | Hello world) * P(</s> | world </s>), but this seems non-intuitive and involves mutating the corpus in an ugly way. What is the proper way to compute this probability? Answer: The $N$-gram model assumes a generative model in which the next word generated depends only on the preceding $N-1$ words. Using the chain rule of probability, we get that the probability of a sentence $w_1,\ldots,w_n$ is $$ P(w_1\cdots w_n) = P(w_1) P(w_2|w_1) P(w_3|w_1w_2) \cdots P(w_N|w_1\ldots w_{N-1}) P(w_{N+1}|w_2\ldots w_N) \cdots P(w_n | w_{n-N+1}\ldots w_{n-1}). $$ In the particular cases $N=1,2,3$, this gives $$ P(w_1\cdots w_n) = P(w_1) P(w_2) \cdots P(w_n), \\ P(w_1\cdots w_n) = P(w_1) P(w_2|w_1) P(w_3|w_2) \cdots P(w_n|w_{n-1}), \\ P(w_1\cdots w_n) = P(w_1) P(w_2|w_1) P(w_3|w_1w_2) P(w_4|w_2w_3) \cdots P(w_n|w_{n-2}w_{n-1}). $$ This means that your expression for $N=2$ is wrong. In practice, we don't want to store special tables for the first $N-1$ words. There are two ways around this.
First, we can ignore entirely the first $N-1$ words. This makes sense when the text is very long, and we don't expect the first few words to make a big difference in the resulting probability. Second, we can include a special "blank" symbol and add it to our tables, so that (for example) $P(w_1)$ is stored as $P(w_1|\not{b}\ldots \not{b})$. Usually one stores not the actual probabilities but rather their logarithm. The reason is that adding numbers is faster than multiplying them. As an added benefit, you don't have to worry about the dynamic range of the floating point data type you use (the actual probability could be rather close to 0 and could cause underflow, but this is unlikely for its logarithm). Another thing to keep in mind is that the results are more meaningful as the text becomes longer. Your example is very short and doesn't serve as a good test case.
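The second workaround (a padding "blank" symbol) together with log-probabilities can be sketched in a few lines (the toy corpus and the names are my own, for illustration):

```python
import math
from collections import defaultdict

PAD = "<b>"   # the "blank" symbol standing in for missing history

def train(corpus, n):
    """Count n-grams over sentences padded with n-1 blanks on the left."""
    ngram_counts = defaultdict(int)
    context_counts = defaultdict(int)
    for sentence in corpus:
        words = [PAD] * (n - 1) + sentence
        for i in range(n - 1, len(words)):
            ctx = tuple(words[i - n + 1 : i])
            ngram_counts[ctx + (words[i],)] += 1
            context_counts[ctx] += 1
    return ngram_counts, context_counts

def log_prob(sentence, ngram_counts, context_counts, n):
    """Sum of log P(w_i | previous n-1 words); -inf for unseen n-grams."""
    words = [PAD] * (n - 1) + sentence
    total = 0.0
    for i in range(n - 1, len(words)):
        ctx = tuple(words[i - n + 1 : i])
        c = ngram_counts.get(ctx + (words[i],), 0)
        if c == 0:
            return float("-inf")
        total += math.log(c / context_counts[ctx])
    return total

corpus = [["<s>", "Hello", "world", "</s>"],
          ["<s>", "Hello", "there", "</s>"]]
counts = train(corpus, 3)
lp = log_prob(["<s>", "Hello", "world", "</s>"], *counts, 3)
print(f"P = {math.exp(lp):.3f}")
```

On this two-sentence corpus the only uncertain step is P(world | <s> Hello) = 1/2, so the sentence probability comes out to 0.5. Summing logs instead of multiplying raw probabilities also avoids the underflow mentioned above.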
{ "domain": "cs.stackexchange", "id": 5265, "tags": "natural-language-processing" }
Where/when did Stephen Kleene first define the Kleene closure/star?
Question: I'm working on a paper and would like to review the origins of Kleene's closure. I am unable to find any article of Kleene's that has the original definition of the Kleene closure. Is there a paper by Kleene in which he first defines the Kleene closure? Answer: Kleene's classic paper on finite automata and regular expressions is Kleene, Stephen C.: "Representation of Events in Nerve Nets and Finite Automata". In Shannon, Claude E.; McCarthy, John. Automata Studies, Princeton University Press. pp. 3–42., 1956. There seems to be a scan or recreation of that version of the paper at: http://www.dlsi.ua.es/~mlf/nnafmc/papers/kleene56representation.pdf. But, as pointed out by @HendrickJan, the work seems to have been done about 5 years earlier. The article starts with a note that says that "the material ... is drawn from Project RAND Research Memorandum RM-704 (15 Dec 1951, 101 pages) ... used by permission of the RAND Corporation ... supported by the RAND Corporation during the summer of 1951." A scan of the RAND research memorandum is available for free from the RAND website: http://www.rand.org/pubs/research_memoranda/RM704.html. "Regular events" are defined in Section 7 of both papers. (page 46 of the 1951 memorandum and page 23 of the 1956 paper). Interestingly, Kleene defines $*$, the closure operator, as a binary operator, rather than a unary operator as we do today. This enables Kleene to avoid dealing with empty strings. $E*F$ means the same thing it does today: "0 or more instances of E followed by F" but there is no way to say $E^*$ and have it include the empty string.
{ "domain": "cs.stackexchange", "id": 2776, "tags": "formal-languages, reference-request, automata" }
Creating a Custom Hardware Interface for a Two-Wheeled Mobile Robot for ros2_control?
Question: I am in the process of developing a custom two-wheeled mobile robot with differential control, and I want to integrate it with the ROS 2 ecosystem, specifically leveraging ros2_control. I understand that ros2_control provides a framework to connect any hardware to ROS 2, but I am having some challenges with where and how to start for specific/custom robot configuration. Robot Details: Two drive wheels with individual motor controllers. The robot uses differential control for maneuvering. Velocity control mechanism for each wheel. Encoders on each wheel for feedback. Questions: What are the fundamental steps to create a custom hardware interface for a differential-controlled robot to use with ros2_control? How can I expose the readings from the wheel encoders to the joint_states topic within the ROS 2 ecosystem? I've already gone through the official ros2_control documentation, and understood that I need to use ros2_control tags in my robot's URDF to set up the hardware interfaces and that I need to write YAML file to configure controllers. I'd greatly appreciate insights or experiences from those who have tackled the creation of the hardware interface, especially in the context of wheeled robots and encoder data integration. Thank you in advance for your guidance! Answer: Have a look at the diff_drive example, this should answer question 2. About how to write a hardware_component this video could help you with the first steps, or have a look at this step-by-step guide.
{ "domain": "robotics.stackexchange", "id": 38699, "tags": "ros2, control, mobile-robot, ros2-control, ros2-controllers" }
What really are perturbation expansions?
Question: I'm unsure if this question belongs here or at Math.SE, but since I got to it by reading some articles about Physics I'm going to post it here anyway. In this particular article (Theoretical models in low-Reynolds-number locomotion) about fluid mechanics I've found the following situation: one gets to solve Stokes' equations. The equations themselves are linear, but there is still a problem with the boundary conditions, which may be evaluated on some weird surface. In that particular article the author solved the problem with a perturbation expansion. Basically he chose a parameter $\epsilon$ and wrote the solution $\psi$ as $$\psi = \epsilon \psi_1 + \epsilon^2 \psi_2 + \cdots$$ and as I understood, $\psi_n$ is the solution to the problem with $O(\epsilon^n)$ boundary conditions. This seems to be something that is quite frequently done. One expands some function in a perturbation series like that using a parameter. The problem is that I can't get what this really is. This seems quite different from expanding the general solution of a differential equation in a certain basis of functions. There's also this parameter $\epsilon$, whose role in all of this I can't grasp. So what really is this perturbation expansion? From a rigorous point of view what is that series? And why is it useful anyway? Answer: Think of this not as an extremely rigorous way of solving the differential equation, but rather as using your intuition to guess a solution. Often when you are given a differential equation, the solution is not at all obvious, and perhaps the equation isn't even solvable analytically. Instead of giving up, though, sometimes you can identify a parameter (the $\epsilon$ in your above expansion) such that for $\epsilon=0$, the equation is easy to solve. You can then guess that as long as $\epsilon$ is "small," you can Taylor expand about the $\epsilon=0$ solution to get a perturbative series solution to the true problem.
You then hope that this series will converge for the actual value of $\epsilon$, and this will give you your actual solution. Physically, this sort of thing happens often when you have a system that is "close" to some special system that is easy to solve. Maybe you have some sort of oscillator with energy $E$ that is "pretty close" to being a simple harmonic oscillator (which is very well understood), but whose potential differs from a true harmonic potential by some factor $\epsilon V$ where $V$ is of the same order of magnitude as $E$ and $\epsilon$ is small (so $\epsilon V \ll E$). Then it is reasonable to expect that the behavior of this system should be "pretty close" to the behavior of the simple harmonic oscillator, and that as you vary $\epsilon$ in some neighborhood of $0$, the system's behavior should change smoothly. But this means that you should be able to expand the general system's solution as a Taylor series in $\epsilon$ with the zeroth-order term simply being the solution for a simple harmonic oscillator and higher-order terms giving corrections proportional to powers of $\epsilon$.
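The same idea can be seen in a self-contained toy problem (my example, not from the article): find the root of $x^2 + \epsilon x - 1 = 0$ near $x = 1$. Substituting the ansatz $x = x_0 + \epsilon x_1 + \epsilon^2 x_2 + \cdots$ and collecting equal powers of $\epsilon$ gives $$ O(1):\; x_0^2 - 1 = 0 \;\Rightarrow\; x_0 = 1, \qquad O(\epsilon):\; 2x_0 x_1 + x_0 = 0 \;\Rightarrow\; x_1 = -\tfrac{1}{2}, \qquad O(\epsilon^2):\; 2x_0 x_2 + x_1^2 + x_1 = 0 \;\Rightarrow\; x_2 = \tfrac{1}{8}, $$ so $x \approx 1 - \epsilon/2 + \epsilon^2/8$, which is exactly the Taylor expansion of the exact root $x = -\epsilon/2 + \sqrt{1 + \epsilon^2/4}$. Each order is fixed by the orders before it, just as each $\psi_n$ above is fixed by the lower-order boundary conditions.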
{ "domain": "physics.stackexchange", "id": 22223, "tags": "mathematical-physics, mathematics, perturbation-theory" }
How are the relative distances of celestial objects from the Earth calculated using observations at a single time instant?
Question: How does one find the distances of celestial objects in the night sky, such as the Moon and the stars, from the Earth using a snapshot of information (including, say, the intensity and wavelength of light received from the various observed objects and their relative positions) observed in the night sky at a single time instant? Most methods (especially those taught in orbital mechanics classes) are inspired by Gauss's method for determining orbits (and hence, distances of the observed objects from the earth), thus requiring observations at several time instants, or equivalently, position and velocity information at a single time instant. Answer: For very distant objects, their distance from us can be estimated in one snapshot by measuring the redshift in their spectra, knowing the so-called Hubble Constant. This method can be refined somewhat if the type of the object (star, quasar, galaxy, etc.) is known and its spectrum can be accurately gathered. For a much closer object whose diameter is known, its distance can be estimated trigonometrically by measuring its angular size with a telescope, in one "snapshot". If two cameras are allowed instead of one, then two photos of the same object in the sky shot at the same instant from different locations on earth will yield the distance via a parallax measurement, for objects within our local spot in space.
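The single-snapshot trigonometric estimate mentioned above is just distance = diameter / angular size for small angles; a quick sketch with rough Moon figures (approximate values, not precise ephemeris data):

```python
import math

def distance_from_angular_size(diameter, angular_size_deg):
    """Small-angle estimate: distance = physical diameter / angular size (rad)."""
    return diameter / math.radians(angular_size_deg)

# rough figures: lunar diameter ~3474 km, apparent diameter ~0.52 degrees
moon_distance_km = distance_from_angular_size(3474, 0.52)
```

This gives roughly 3.8e5 km, close to the Moon's actual mean distance of about 384,400 km.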
{ "domain": "physics.stackexchange", "id": 65668, "tags": "astronomy, earth, planets, stars" }
catkin build configuration options
Question: Is there a way to define the default behaviour of catkin_make with a .cmake-file or such? For example, I would need to disable some platform-specific ROS-packages included in a repository. Or, toggle CUDA or some other optional library. The problem I'm facing is that catkin includes all subdirectories in the workspace, so I can't make my own CMake macro which would set(ENABLE_CUDA) or set(BUILD_ANALYSIS_TOOLS), and then add_subdirectory() if this is set. To give some idea what would be nice to have, OpenCV does its build options very nicely (from line 155): https://github.com/Itseez/opencv/blob/master/CMakeLists.txt Originally posted by Tommi on ROS Answers with karma: 111 on 2015-12-03 Post score: 0 Answer: OpenCV is a single project so it's not really comparable to a system which builds groups of projects. You can certainly add options to individual CMake projects, but you'd have to pass options to each of them. As for controlling which packages get built in a catkin workspace, you can use CATKIN_IGNORE files to prevent certain packages from getting processed by catkin_make. Or you can use the CATKIN_BLACKLIST_PACKAGES variable to blacklist certain packages, see: http://answers.ros.org/question/54181/how-to-exclude-one-package-from-the-catkin_make-build/ You get more control with the upcoming tools provided by the catkin_tools project, but it's got some problems that we're still working out before it's ready for prime time: http://lists.ros.org/lurker/message/20151110.195101.767ad75f.en.html https://github.com/catkin/catkin_tools/issues/90 Originally posted by William with karma: 17335 on 2015-12-03 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 23144, "tags": "catkin, cmake" }
Why doesn't the black hole in the center of the Milky Way glow similarly to the famous M87 image?
Question: The M87 image made some astronomers famous recently as the first image of a black hole. In the Milky Way, it has been concluded that there must be a black hole due to the movement of stars near the center. But why is there no such light effect in our black hole, as there is in M87? Or is it there, but we cannot see it for some reason? Answer: News was released today and a new image has been published which has similarities to the M87 image in the question. https://eventhorizontelescope.org/blog/astronomers-reveal-first-image-black-hole-heart-our-galaxy Although we cannot see the event horizon itself, because it cannot emit light, glowing gas orbiting around the black hole reveals a telltale signature: a dark central region (called a “shadow”) surrounded by a bright ring-like structure. The new view captures light bent by the powerful gravity of the black hole, which is four million times more massive than our Sun. The image of the Sgr A* black hole is an average of the different images the EHT Collaboration has extracted from its 2017 observations. Credit: EHT Collaboration
{ "domain": "astronomy.stackexchange", "id": 6349, "tags": "observational-astronomy, black-hole, radio-astronomy, milky-way, m87" }
Is there a way to write the Lorentz force in terms of one field, $L$, and one charge, $X$?
Question: I have heard that physicists like to write electromagnetism as one force (the Lorentz force) and define it as $\vec{F_L}\left(q, \vec{v}, \vec{E}, \vec{B}\right) = q\left(\vec{E} + \vec{v} \times \vec{B}\right)$. They also talk about electricity and magnetism as if they are one force. However, this doesn't look much prettier, easier to use, or more unified to me. Is there a way to write the Lorentz force in terms of one field, $L$, and one charge, $X$? Obviously, one could just make $L$ and $X$ tuples and write $\vec{F_L}\left(X, L\right) = X_q \left(\vec{L_E} + \vec{X_v} \times \vec{L_B} \right)$ but that doesn't seem nice enough to me. Answer: Let us fix a reference frame $S$, where a particle of charge $q$ and velocity $v$ lies. It can be experimentally proven that, if another such particle $q'$ is present elsewhere in the universe, the initial one is subject to a force $\textbf{F}=q\textbf{E}$, where $\textbf{E}$ can be measured and addressed to the other body $q'$. Likewise, if a current $i$ (or, equivalently, a magnet) exists somewhere in the universe, the initial particle is subject to a force $\textbf{F}=q\textbf{v}\times\textbf{B}$, where $\textbf{B}$ can be addressed to the current $i$. If both are present together, then the force is obviously the sum of the two pieces, thus $\textbf{F}=q(\textbf{E} + \textbf{v}\times\textbf{B})$. Since, in principle, $\textbf{E}$ and $\textbf{B}$ seem to come from two different sources (the former being a charge $q'$ and the latter being a current $i$) and since they are measured in different ways, one is led to believe that they are indeed two different things, therefore one gives them two different names. But then we realise that if, instead of choosing $S$ as reference frame, we choose $S'$ having the same velocity as the current $i$ with respect to $S$, then the two previous contributions $\textbf{E}$ and $\textbf{B}$ replace each other.
Hence we understand that they are not really two different things; rather, they are the same thing that only appears to be different according to what reference frame we choose. This is indeed supported by the additional experimental results showing that whenever a variation in time of either of the two fields is present, the other gets automatically created. Again, this leads one to think that they must somehow be the same underlying field with different faces. At this point we are quite sure that there is only one field, which we call $F$, whose representation in any reference frame of choice depends on the coordinate basis and can be described by a rank $2$ tensor (for reasons we will not go into here). Doing a little re-ordering of the previous equations, together with the general Maxwell's equations for the field, one narrows things down to the following formulae for the fields: $$ \partial_{\mu}F^{\mu\nu} = \frac{4\pi}{c}\,j^{\nu},\qquad \partial_{\lambda}F_{\mu\nu} + \textrm{permutations} = 0 $$ together with the equation of motion of a charged particle in such an environment: $$ \frac{d}{ds}p_{\mu} = qF_{\mu\nu}u^{\nu}. $$
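For concreteness, in one common convention (Gaussian units; signs and index placement vary between textbooks) the components of this single field are $$ F^{\mu\nu} = \begin{pmatrix} 0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0 \end{pmatrix}, $$ and a Lorentz boost, acting on $F$ as on any rank $2$ tensor, mixes the $\textbf{E}$ and $\textbf{B}$ entries into one another, which is the precise version of the frame-change argument above.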
{ "domain": "physics.stackexchange", "id": 26621, "tags": "electromagnetism, general-relativity" }
Why the wave function decays exponentially when it crosses the potential barrier?
Question: That may be an obvious question, but I would like a physical answer, not math. Answer: The physics is similar to the physics of evanescent waves in optics or more generally in EM propagation. The idea is simpler to understand mathematically: in the classically forbidden region, the wave vector $k$ becomes imaginary: $k\to i\kappa$ with $\kappa$ real, so the solutions $e^{\pm i kx}$ go to exponentials $e^{\pm \kappa x}$. The boundary conditions eliminate the growing exponential, as the amplitude must eventually decay based on energy considerations. Indeed, in optics the amplitude of the wave transmitted at a boundary similarly decays in certain conditions because no solution of the sine or cosine form for the transmitted wave is compatible with the boundary conditions. This webpage gives additional mathematical details. Evanescent waves also occur in acoustics, although the physics is a little different since acoustic waves are longitudinal. Nevertheless, the physics here also involves reflection and transmission at a boundary. Mathematically, if $E<V$ then the coefficient of $\psi$ on the right hand side of $$ \psi^{\prime\prime}(x)=\frac{2m}{\hbar^2}(V-E)\psi(x) $$ is positive, so the solutions are exponentials rather than sines or cosines. There is no "loss of energy" as the energy of the solution is fixed. Roughly speaking, as in optics, the wave "doesn't propagate" in the forbidden region. In the optics case the lack of propagation of the wave implies lack of energy, but this explanation does not work well for QM as the energy is fixed (as mentioned above).
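To attach numbers to $\kappa = \sqrt{2m(V-E)}/\hbar$, here is a quick estimate for an electron facing a barrier 1 eV above its energy (illustrative values I picked, not from the question):

```python
import math

HBAR = 1.0545718e-34   # J*s
M_E = 9.1093837e-31    # kg, electron mass
EV = 1.602176634e-19   # J per electronvolt

def decay_constant(barrier_excess_j, mass=M_E):
    """kappa = sqrt(2m(V - E)) / hbar, from the equation above."""
    return math.sqrt(2 * mass * barrier_excess_j) / HBAR

kappa = decay_constant(1.0 * EV)    # per metre
decay_length_nm = 1e9 / kappa       # distance over which the amplitude falls by 1/e
```

The decay length comes out to about 0.2 nm, which is why appreciable tunneling requires atomically thin barriers.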
{ "domain": "physics.stackexchange", "id": 96115, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, quantum-tunneling" }
Gazebo 1.0.x and CUDA in Fuerte
Question: We are currently running a relatively intense simulation for some work we are doing, and we noticed that the physics engine seems to be slowing us down the most. We don't want to decrease the precision of the physics engine too much (i.e., changing parameters in the tag in the .world file) as this can cause models to "explode" -- as discussed here and here. I know in previous versions of ROS/Gazebo, parallel quick step could be used in conjunction with CUDA to help the physics engine. However, I cannot seem to find out how to get this to work in Fuerte. Previous posts (e.g., here) have discussed how to configure this in Electric. With Gazebo 1.0.x used in Fuerte and the new sdf world model syntax, I see no place to force Gazebo to use CUDA to speed itself up. Even the documentation for the physics parameters shows that "world" and "quick" are the only two valid solver types (i.e., no parallel_quick/cuda). Furthermore, the example launch files that use CUDA in the Ubuntu Fuerte parallel_quickstep package use out-of-date world-file syntax/don't run. Is it no longer possible to use CUDA to speed up simulations? We have a relatively powerful Nvidia GPU in our Ubuntu 12.04 machine and it would be a shame not to utilize it in the simulations. Originally posted by rtoris288 on ROS Answers with karma: 1173 on 2012-07-17 Post score: 4 Answer: I'm going to hazard a guess that it is no longer supported/maintained. In $(rosfind gazebo)/build/gazebo-r22f33a2ed71a/deps/parallel_quickstep/ there is some code that references CUDA, but I can't find anywhere where it connects to the main gazebo program (renaming the folder seems to have no ill effects on compilation). The patch files in the main directory don't look like they have been updated since before fuerte. Originally posted by dearl with karma: 96 on 2012-09-05 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 10236, "tags": "ros, gazebo, ubuntu, ros-fuerte, cuda" }
Sensor messages not working properly in rviz for multiple turtlebots simulated in gazebo
Question: I have spawned 2 turtlebots in gazebo. All the topics seem to be working fine. Even the move base seems to be working when given a goal to accomplish. If I check with a topic echo, all the concerned topics seem to be working fine. But I could not visualize all the topics in rviz. I have a feeling that this has something to do with the tf being broadcasted, as when I try to observe the laser scans and point clouds in rviz I get some errors like these. Error for pointcloud in rviz Transform [sender=unknown_publisher] For frame [robot1/robot1/camera_depth_optical_frame]: Frame [robot1/robot1/camera_depth_optical_frame] does not exist Error for laser scan: Transform [sender=unknown_publisher] For frame [camera_depth_frame]: Frame [camera_depth_frame] does not exist The robot and the camera raw image appear perfect in rviz. My launch file for getting the robots in: <launch> <arg name="world_file" default="$(env TURTLEBOT_GAZEBO_WORLD_FILE)"/> <arg name="base" value="$(optenv TURTLEBOT_BASE kobuki)"/> <!-- create, roomba --> <arg name="battery" value="$(optenv TURTLEBOT_BATTERY /proc/acpi/battery/BAT0)"/> <!-- /proc/acpi/battery/BAT0 --> <arg name="stacks" value="$(optenv TURTLEBOT_STACKS hexagons)"/> <!-- circles, hexagons --> <arg name="3d_sensor" value="$(optenv TURTLEBOT_3D_SENSOR kinect)"/> <!-- kinect, asus_xtion_pro --> <include file="/opt/ros/indigo/share/gazebo_ros/launch/empty_world.launch"> <!-- $(find gazebo_ros) --> <arg name="use_sim_time" value="true"/> <arg name="debug" value="false"/> <arg name="world_name" value="$(arg world_file)"/> </include> <group ns="robot1"> <param name="tf_prefix" value="robot1"/> <include file=".../launch/turtlebot.launch"> <arg name="robot_name" value="robot1"/> <arg name="init_pose" value="-z 3 -x 3"/> </include> <node pkg="tf" type="static_transform_publisher" name="$(anon odom_map_broadcaster)" args="3 0 0 0 0 0 map robot1/odom 100"/> </group> <group ns="robot2"> <param name="tf_prefix" value="robot2"/> <include
file=".../turtlebot.launch"> <arg name="robot_name" value="robot2"/> <arg name="init_pose" value="-z 3 -x -3"/> </include> <node pkg="tf" type="static_transform_publisher" name="$(anon odom_map_broadcaster)" args="-3 0 0 0 0 0 map robot2/odom 100"/> </group> </launch> My navigation launch file <launch> <param name="/use_sim_time" value="true"/> <!-- Map server --> <arg name="map_file" default="$(env TURTLEBOT_GAZEBO_MAP_FILE)"/> <node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)" > <param name="frame_id" value="/map" /> </node> <group ns="robot1"> <param name="tf_prefix" value="robot1" /> <param name="amcl/initial_pose_x" value="3" /> <param name="amcl/initial_pose_y" value="1" /> <include file="..navigation/launch/amcl_0.1.launch" /> </group> <group ns="robot2"> <param name="tf_prefix" value="robot2" /> <param name="amcl/initial_pose_x" value="-3" /> <param name="amcl/initial_pose_y" value="1" /> <include file="..navigation/launch/amcl_0.1.launch" /> </group> </launch> Originally posted by Usman Arif on ROS Answers with karma: 58 on 2015-11-20 Post score: 1 Original comments Comment by Usman Arif on 2015-11-20: One more thing. The tf tree seems perfect. map is the root node with two branches robot1/odom & robot2/odom and then the rest of the transforms. No unconnected ones Answer: Problem solved :) Made a few changes. 
Posting the updated files <launch> <arg name="world_file" default="$(env TURTLEBOT_GAZEBO_WORLD_FILE)"/> <arg name="base" value="$(optenv TURTLEBOT_BASE kobuki)"/> <!-- create, roomba --> <arg name="battery" value="$(optenv TURTLEBOT_BATTERY /proc/acpi/battery/BAT0)"/> <!-- /proc/acpi/battery/BAT0 --> <arg name="stacks" value="$(optenv TURTLEBOT_STACKS hexagons)"/> <!-- circles, hexagons --> <arg name="3d_sensor" value="$(optenv TURTLEBOT_3D_SENSOR kinect)"/> <!-- kinect, asus_xtion_pro --> <include file="/opt/ros/indigo/share/gazebo_ros/launch/empty_world.launch"> <!-- $(find gazebo_ros) --> <arg name="use_sim_time" value="true"/> <arg name="debug" value="false"/> <arg name="world_name" value="$(arg world_file)"/> </include> <!-- Robot Description, Global (one description for all robots) --> <arg name="urdf_file" default="$(find xacro)/xacro.py '$(find turtlebot_description)/robots/kobuki_hexagons_kinect.urdf.xacro'" /> <param name="robot_description" command="$(arg urdf_file)" /> <group ns="robot1"> <param name="tf_prefix" value="robot1_tf"/> <include file="/home/.../launch/turtlebot.launch"> <arg name="robot_name" value="robot1"/> <arg name="robot_prefix" value="robot1_tf"/> <arg name="init_pose" value="-z 3 -x 3"/> </include> <node pkg="tf" type="static_transform_publisher" name="$(anon odom_map_broadcaster)" args="3 0 0 0 0 0 map robot1_tf/odom 100"/> </group> </launch> So publishing the static map transform in the above launch file. And this should be done for every robot you have in your environment (multiple robots), notice that the "map" is global as it doesn't have a "/" before it. 
Next file is the turtlebot.launch which I am using <launch> <arg name="robot_name"/> <arg name="init_pose"/> <arg name="robot_prefix"/> <!-- Gazebo model spawner --> <node name="spawn_turtlebot_model" pkg="gazebo_ros" type="spawn_model" args="$(arg init_pose) -unpause -urdf -param /robot_description -model $(arg robot_name)"/> <!-- Velocity muxer --> <node pkg="nodelet" type="nodelet" name="mobile_base_nodelet_manager" args="manager"/> <node pkg="nodelet" type="nodelet" name="cmd_vel_mux" args="load yocs_cmd_vel_mux/CmdVelMuxNodelet mobile_base_nodelet_manager"> <param name="yaml_cfg_file" value="$(find turtlebot_bringup)/param/mux.yaml" /> <remap from="cmd_vel_mux/output" to="mobile_base/commands/velocity"/> </node> <!--Bumper/cliff to pointcloud (not working, as it needs sensors/core messages) --> <include file="$(find turtlebot_bringup)/launch/includes/kobuki/bumper2pc.launch.xml"/> <node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher"> <param name="publish_frequency" type="double" value="30.0" /> </node> <!-- Fake laser --> <node pkg="nodelet" type="nodelet" name="laserscan_nodelet_manager" args="manager"/> <node pkg="nodelet" type="nodelet" name="depthimage_to_laserscan" args="load depthimage_to_laserscan/DepthImageToLaserScanNodelet laserscan_nodelet_manager"> <param name="scan_height" value="10"/> <!-- publishing my output frame id with the robot tf prefix, this would end any errors in rviz saying no transform exist from /camera_depth_frame to /camera_depth_frame --> <param name="output_frame_id" value="$(arg robot_prefix)/camera_depth_frame"/> <param name="range_min" value="0.45"/> <remap from="image" to="camera/depth/image_raw"/> <remap from="scan" to="scan"/> </node> </launch> The one important thing here to note is the output_frame_id of the fake laser. 
Please refer to the comment above this line Next inline is my move_base and amcl launch file (combined) <launch> <param name="/use_sim_time" value="true"/> <!-- Map server --> <arg name="map_file" default="$(env TURTLEBOT_GAZEBO_MAP_FILE)"/> <node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)" > <param name="frame_id" value="/map" /> </node> <group ns="robot1"> <param name="tf_prefix" value="robot1_tf" /> <param name="amcl/initial_pose_x" value="3" /> <param name="amcl/initial_pose_y" value="1" /> <include file="/home/..../launch/amcl_0.1.launch" /> </group> </launch> Keeping the amcl call inside the robot1 namespace makes sure that for every robot an amcl and move base is launched which lives within the tf_prefix of that robot. Next my amcl_0.1.launch <launch> <!-- Localization --> <arg name="robot_name" default="robo"/> <arg name="initial_pose_x" default="0.0"/> <arg name="initial_pose_y" default="0.0"/> <arg name="initial_pose_a" default="0.0"/> <include file="/home/..../amcl.launch.xml"> <arg name="initial_pose_x" value="$(arg initial_pose_x)"/> <arg name="initial_pose_y" value="$(arg initial_pose_y)"/> <arg name="initial_pose_a" value="$(arg initial_pose_a)"/> <arg name="use_map_topic" value="false"/> <arg name="scan_topic" value="scan"/> <arg name="odom_frame_id" value="odom"/> <arg name="base_frame_id" value="base_footprint"/> <arg name="global_frame_id" value="/map"/> </include> <!-- Move base --> <include file="/home/..../launch/includes/move_base_altered.launch.xml"> <arg name="odom_frame_id" value="odom"/> <arg name="base_frame_id" value="base_footprint"/> <arg name="global_frame_id" value="/map"/> <arg name="odom_topic" value="odom" /> <arg name="laser_topic" value="scan" /> </include> </launch> <!-- in both the cases keeping the map topic as /map because it is only one topic for all the robots and must be an absolute one hence / which would not let it change with changing namespaces for the remaining topics --> My amcl xml
referred to in the code above is the same as the original. The xml for move_base only points to the parameters I have set up for my environment (global costmap, local costmap, move_base_params, etc.). Hope this helps anyone who is stuck with multiple-robot navigation in gazebo using ros. Originally posted by Usman Arif with karma: 58 on 2015-12-10 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 23034, "tags": "navigation, rviz, turtlebot, amcl, laserscan" }
How to calculate the components of the displacement of unit magnitude of an accelerating particle in 2D?
Question: A particle is moving in 2D having a constant acceleration $ \vec a $. Given initial velocity $ \vec u $, after time $ t $ the magnitude of its displacement $ \vec S $ is 1. I need to calculate $ S_x $ and $ S_y $ (components of the displacement in the x and y directions) such that $ | \vec S | = 1$, i.e. after time $t$. I know $ \vec a $ and $ \vec u $, but I don't know $t$. Is it possible to calculate the time $t$ so that I can calculate $ S_x $ and $ S_y $? Please note: I tried doing this to find $t$: $$ \sqrt{ (u_x t + 0.5 a_xt^2)^2 + (u_yt + 0.5a_yt^2)^2} = 1 \\ \Longrightarrow\quad \left(\frac{a_x^2 + a_y^2}{4}\right)t^4 + (u_xa_x + u_ya_y)t^3 + (u_x^2 + u_y^2)t^2 = 1 $$ Is this the right approach? Is this equation really solvable for $t$? Answer: The equation you got is correct. The solution is the time at which the displacement magnitude is 1. However, without some numerical values for $\vec a$ and $\vec u$, this will be a monster to solve analytically (see the quartic formula to see what I mean: https://en.wikipedia.org/wiki/Quartic_function#/media/File:Quartic_Formula.svg). I would just paste that expression into the software of your choice and use the answer it gives.
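To illustrate the "software of your choice" route without any quartic machinery, here is a bisection sketch (the sample values $\vec u = (1, 0)$ and $\vec a = (0, 1)$ are mine; for them $|\vec S|$ grows monotonically from 0, so a single root bracket works):

```python
def displacement_magnitude(t, ux, uy, ax, ay):
    # |S(t)| for constant acceleration starting at the origin
    sx = ux * t + 0.5 * ax * t * t
    sy = uy * t + 0.5 * ay * t * t
    return (sx * sx + sy * sy) ** 0.5

def time_for_unit_displacement(ux, uy, ax, ay, t_hi=10.0):
    """Bisection on |S(t)| - 1; assumes |S| is increasing on [0, t_hi]."""
    lo, hi = 0.0, t_hi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if displacement_magnitude(mid, ux, uy, ax, ay) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_star = time_for_unit_displacement(1.0, 0.0, 0.0, 1.0)   # about 0.910
```

Once $t$ is known, $S_x$ and $S_y$ follow immediately from the kinematic formulas inside displacement_magnitude.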
{ "domain": "physics.stackexchange", "id": 75835, "tags": "homework-and-exercises, newtonian-mechanics, kinematics, time, displacement" }
Which human cell lines do not express the GLP-1 receptor?
Question: I need a human cell line that does not express the GLP-1 (glucagon-like peptide-1) receptor. I'm working with HeLa cells; do those express the GLP-1 receptor? Which other cell lines exist that don't express this specific receptor? Are there any general resources where I could find this kind of information? Answer: Abcam suggests HeLa cells as positive controls for their antibody to GLP1R. They provide the following pictures of HeLa cells labeled with their antibody: (The image on the right is treated with the synthesized peptide.) According to Wikipedia, GLP1R is also expressed in pancreatic beta cells and the brain.
{ "domain": "biology.stackexchange", "id": 64, "tags": "cell-culture" }
Develop the context free grammar to match this language (puzzle)
Question: This is a puzzle-type question which asks me to create a context-free grammar to match this language: { x#w | x,w are in {a,b}*, and w contains the reversal of x as a substring } So some example strings to try: #, a#a, b#b, ab#ba, ab#aaabbba Does anyone have any advice on how to get better at these types of problems? I am generally a good problem solver, but have trouble developing grammars for languages for some reason. I am completely stuck on this question. Here is my attempt: S --> TR T --> aTa | bTb | #R R --> RR | a | b | empty My guess is that we want to define the left side of the string in terms of the right side of the string. Edit: As far as I can tell, the above answer seems to be correct now. Only took me an hour to figure out! Answer: Consider the following grammar: $$ T \to aTa | bTb $$ It is not hard to check that $T \to^* wTw^R$ for all $w \in \{a,b\}^*$, where $w^R$ is the reverse of $w$. The language we are aiming at is $\{w\#xw^Ry : w,x,y \in \{a,b\}^*\}$. We can take care of the $x$ part by providing a "leaf case" for $T$: $$ \begin{align*} &T \to \#R \\ &R \to aR|bR|\epsilon \end{align*} $$ Similarly, to take care of the $y$ part, we can create a new start symbol $S$, and add the production $$ S \to TR $$ In total, we obtain the grammar $$ \begin{align*} &S \to TR \\ &T \to aTa|bTb|\#R \\ &R \to aR|bR|\epsilon \end{align*} $$
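One way to build confidence in a grammar for this language is to test candidate strings against a direct implementation of the language's definition (a checker written from the set definition itself, independent of any grammar):

```python
def in_language(s):
    """x#w with x and w over {a, b}, where reverse(x) is a substring of w."""
    if s.count("#") != 1:
        return False
    x, w = s.split("#")
    if any(c not in "ab" for c in x + w):
        return False
    return x[::-1] in w
```

All the sample strings from the question (#, a#a, b#b, ab#ba, ab#aaabbba) are accepted, while e.g. ab#ab is rejected because ba is not a substring of ab.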
{ "domain": "cs.stackexchange", "id": 2559, "tags": "context-free" }
A question about thiamine
Question: How does the nitrogen of the thiazole in thiamine acquire a positive charge without being stabilized by another negative charge or by being a salt of an anion? Answer: Thiamine (Vitamin B1) has a positive charge on the nitrogen of the thiazole ring, because that nitrogen is tetravalent. Thus, it should be neutralized by a counter ion. Usually, the counter ion is the chloride ($\ce{Cl-}$) ion. Over-the-counter Vitamin B1 is usually supplied as the hydrochloride salt of thiamine chloride (simply called thiamine hydrochloride), which is very soluble in water. According to Wikipedia: The salt thiamine mononitrate, rather than thiamine hydrochloride, is used for food fortification, as the mononitrate is more stable, and does not absorb water from natural humidity (is non-hygroscopic), whereas thiamine hydrochloride is hygroscopic. When thiamine mononitrate dissolves in water, it releases nitrate (about 19% of its weight) and is thereafter absorbed as the thiamine cation. The structures of the two compounds are depicted in the following diagram:
{ "domain": "chemistry.stackexchange", "id": 13049, "tags": "organic-chemistry, biochemistry, chemical-biology, nitro-compounds, organosulfur-compounds" }
Tick module for the game
Question: I tried to look up and suck in most of the information about optimizing this operation and this is what I came up with. As it's pretty much the core of the game, I really would like to have it as performant as I can. I would appreciate it if someone could take a look at this and possibly find weak spots. Note: I am using Browserify, hence that module.exports. Don't get confused, it is supposed to run in the browser. module.exports = (tickModule, app) -> # Function to retrieve current timestamp, hopefully using window.performance object getTime = if (perf = window.performance)? then -> perf.now() else Date.now # Store the reference so there is no need for scope lookup in every tick raf = window.requestAnimationFrame # Indicates if module is running running = false # Holds identifier for cancelAnimationFrame call requestId = null # Timestamp of the last run of the tick previous = 0 # Run the tick loop tick = -> return unless running # Request frame and store identifier requestId = raf tick # Retrieve current timestamp timestamp = getTime() # Calculate number of seconds from last tick delta = (timestamp - previous) * 0.001 # Store the timestamp for the next round previous = timestamp # Emit event with delta app.land.emit 'tick', delta, timestamp # Start ticking when module starts tickModule.addInitializer -> previous = getTime() running = true tick() # Stop ticking when module stops tickModule.addFinalizer -> running = false window.cancelAnimationFrame requestId I am thinking about removing that requestId and running, since I am not really planning to stop the ticking once it starts. It was added merely as a nice gesture, but it's not that useful for the game, I suppose. Answer: Do you really need a comment above every line? There are a few good comments like # Function to retrieve current timestamp, hopefully using window.performance object, but comments like # Indicates if module is running are really not useful.
They add no value to the code itself and can be removed. Preferably, you should only be using comments when parts of the code are unclear. Secondly, your function alias raf is fairly unclear. I'd recommend renaming it to something like requestFrame. Also, if you're creating aliases like this, I'd recommend creating one for window.cancelAnimationFrame and similar functions as well. Finally, I'd recommend renaming tick to something like tickLoop, as described in the comment above it.
{ "domain": "codereview.stackexchange", "id": 14662, "tags": "performance, game, animation, coffeescript" }
How to change the pivot point of a link in URDF joint?
Question: I have built a simple robotic arm, where link1 is joined to base_link through a revolute joint. When I try to move link1, it revolves about its own center instead of pivoting at its bottom, where it attaches to base_link. How can I change the pivot point from the center of link1 to its bottom and fix it to base_link? My urdf: <?xml version="1.0"?> <robot name="myfirst" xmlns:xacro="http://www.ros.org/wiki/xacro"> <link name="base_link"> <visual> <geometry> <box size="0.5 0.5 0.5"/> </geometry> </visual> </link> <joint name="joint1" type="revolute"> <axis xyz="0 1 0"/> <limit effort="1000.0" lower="-0.548" upper="0.548" velocity="0.5"/> <parent link="base_link"/> <child link="arm2"/> <origin xyz="0 0 0.5" rpy="0 0 0" /> </joint> <link name="arm2"> <origin xyz="0 0 -0.3" rpy="0 0 0" /> <visual> <geometry> <cylinder length="0.6" radius="0.04"/> </geometry> </visual> </link> </robot> Result initial pose: Result end pose: Originally posted by Kishore Kumar on ROS Answers with karma: 173 on 2018-08-19 Post score: 1 Answer: A cylinder has its origin in the centre of the shape (all primitive geometry actually). That is why you're seeing the rotation as it is. If you want the cylinder to rotate at one of its ends, you'll have to translate it half its own length up. Something like: <origin xyz="0 0 0.3" rpy="0 0 0" /> Originally posted by gvdhoorn with karma: 86574 on 2018-08-19 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Kishore Kumar on 2018-08-19: Thank you so much that helped!
{ "domain": "robotics.stackexchange", "id": 31572, "tags": "gazebo, simulation, ros-kinetic, ubuntu, ubuntu-precise" }
How to find the standard error of calculated results in comparison with experimental data
Question: Given two sets of data, experimental frequencies and calculated frequencies from some theoretical model: In an article underneath that table, one can find the statement that the "standard error for all calculated values from experimental values is: $\Delta = 2.74$". The author obviously wanted to test their theoretical model against experimental data, but how do they get $\Delta$? Generally, the question is: if I discover a new theoretical model, and calculate frequencies according to that new theory, how do I find the (standard) error/deviation $\Delta$ of my calculated results? Answer: In this case they are referring to the root-mean-square deviation, that is, if you have a set of $N$ measurements $y_i$ and a model $y(x)$: $$\Delta = \sqrt{\frac{\sum\limits_{i=1}^N \left(y_i - y(x_i) \right)^2}{N}}.$$ If I evaluate that for the data set you posted I get your value of $\Delta$. This is a fairly common method to assess the quality of a model if you don't have error bars on the data. After all, we do not know how accurate the data in the table is. In the limit of infinitely large error bars any model would be good. In general it is best to look at the reduced $\chi$-squared, since this accounts for both the error bar and the number of parameters used in the model. (Obviously, a model that uses more parameters and is as good as a model that uses fewer, is worse.)
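The RMS deviation above is easy to compute yourself; here is a minimal Python sketch (the sample values below are made up for illustration, not the frequencies from the article's table):

```python
import math

def rms_deviation(observed, calculated):
    """Root-mean-square deviation between measurements and model predictions."""
    n = len(observed)
    return math.sqrt(sum((y - yc) ** 2 for y, yc in zip(observed, calculated)) / n)

# Hypothetical example values, purely for illustration.
observed = [10.0, 20.0, 30.0]
calculated = [11.0, 19.0, 32.0]
delta = rms_deviation(observed, calculated)  # sqrt((1 + 1 + 4) / 3) = sqrt(2)
```

Running the same function on the article's columns of experimental and calculated frequencies should reproduce the quoted value of Δ.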
{ "domain": "physics.stackexchange", "id": 57423, "tags": "error-analysis" }
Idiomatic loop and break condition
Question: I am calling a C library via P/Invoke. That library needs to be called in succession with an increasing unsigned int, and returns a pointer. That pointer is then mapped to a managed type, until it returns a NULL pointer. In C, the idiomatic way to write it is probably for(i=0;;i++), but what is the most idiomatic way to write it in C#? Currently it is using a do {} while loop, as in my opinion, this is the clearest way to show that this loop will repeat until newPort is null. static IEnumerable<Port> GetPorts () { List<Port> ports = new List<Port>(); uint i = 0; Port newPort; do { // This calls a C library, and maps the returned pointer // to the Port class newPort = GetPortData (i); if (newPort != null){ ports.Add (newPort); } i++; } while(newPort != null); return ports; } On the other hand, the variables i and newPort are only used inside the loop, so using for would be another solution, but it does not clearly show the breaking condition. for (uint i = 0; ; i++) { Port newPort = GetPortData (i); if (newPort == null) { break; } ports.Add (newPort); } Which version should I use? Answer: I think the for loop can show the "breaking condition" like this: for (uint i = 0; (newPort = GetPortData(i)) != null; i++) { ports.Add (newPort); }
{ "domain": "codereview.stackexchange", "id": 4408, "tags": "c#" }
Fixing TF between base_link and odom
Question: Hey, I am in the process of porting my robot from simulation to the real world. Finished URDF (robot_description), which publishes the tf Finished with the most essential stacks such as robot_nav, robot_viz, robot_movebase My real robot can now listen to cmd_vel and respond correctly to it. I have written a script which accepts cmd_vel and publishes odom. As you can see above, the tf between odom and base_link is not yet published; what could be the reason for this? "header: seq: 40 stamp: secs: 1517994494 nsecs: 929254055 frame_id: odom child_frame_id: base_link ...." As you can see, the child_frame_id of the odom topic is base_link. Originally posted by chris_sunny on ROS Answers with karma: 47 on 2018-02-07 Post score: 0 Answer: It is not sufficient to publish this as an odometry topic. You need to use a tf broadcaster to send the actual transformation via the tf tree. Originally posted by mgruhler with karma: 12390 on 2018-02-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by chris_sunny on 2018-02-07: https://answers.ros.org/question/281781/ros-python-script-for-kangaroo-x2-sabertooth-2x32-configuration-accepts-cmd_vel-and-publishes-odom/ The python script which publishes odom is listed here; please check it out. I believe I am using the tf_broadcaster; let me know if I missed anything. Comment by mgruhler on 2018-02-07: Looks okay on a first glance. Do you see this transform in the tf message?
{ "domain": "robotics.stackexchange", "id": 29971, "tags": "ros, navigation, odometry, base-link, map-to-odom" }
Finding all solutions by Grover search(not superposition)
Question: When there are multiple marked elements, Grover search provides only a superposition of them. If I want to find all the marked elements, not a superposition, I could try this: 1) Do Grover search, get a superposition of the t marked elements, 2) observe the element space, get one marked element, 3) remove that element, 4) go to 1) This takes time $O(\sqrt{\frac{N}{t}}+\sqrt{\frac{N-1}{t-1}}+\dots+\sqrt{\frac{N-t+1}{1}})$. My question is, can I do better? Answer: First note that the sum $O\left(\sqrt{\frac{N}{t}}+\sqrt{\frac{N-1}{t-1}}+\dots+\sqrt{\frac{N-t+1}{1}}\right) = O(\sqrt{Nt})$. The quantum query complexity of this problem is indeed $\Theta(\sqrt{Nt})$. The lower bound can be shown by reduction from the problem of deciding whether the input has $t$ marked elements or $t+1$ marked elements. This problem is very similar to $t$-threshold, and has a lower bound of $\Omega(\sqrt{Nt})$. This can be shown using the polynomial method or the adversary method.
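The claim that the sum above is $\Theta(\sqrt{Nt})$ can be sanity-checked numerically; a small sketch (an illustration of the bound, not a proof):

```python
import math

def repeated_grover_cost(N, t):
    """Sum of sqrt((N - k) / (t - k)) for k = 0..t-1: the cost of t successive Grover searches."""
    return sum(math.sqrt((N - k) / (t - k)) for k in range(t))

# Lower bound: (N - k) / (t - k) >= N / t whenever N >= t, so every term is >= sqrt(N / t).
# Upper bound: each term is <= sqrt(N / (t - k)), and sum over j of 1/sqrt(j) <= 2 * sqrt(t).
for N, t in [(1000, 10), (10**6, 100), (500, 500)]:
    cost = repeated_grover_cost(N, t)
    assert math.sqrt(N * t) <= cost <= 2 * math.sqrt(N * t)
```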
{ "domain": "cstheory.stackexchange", "id": 1943, "tags": "quantum-computing, quantum-information" }
Java practise exam question
Question: Doing practice questions for a Java exam which have no answers (useful). I have been asked to do the following: Write a class called Person with the following features: a private int called age; a private String called name; a constructor with a String argument, which initialises name to the received value and age to 0; a public method toString(boolean full). If full is true, then the method returns a string consisting of the person’s name followed by their age in brackets; if full is false, the method returns a string consisting of just the name. For example, a returned string could be Sue (21) or Sue; a public method incrementAge(), which increments age and returns the new age; a public method canVote() which returns true if age is 18 or more and false otherwise. My code is as follows; can someone point out any errors? public class Person { private int age; private String name; public Person(String st) { name = st; age = 0; } public String toString(boolean full) { if(full == true) { return name + "(" + age + ")"; } else { return name; } } public int incrementAge() { age++; return age; } public boolean canVote() { if(age >= 18) { return true; } else { return false; } } } Answer: instead of if(full == true) use if (full) and instead of if(age >= 18) { return true; } else { return false; } use: return age >= 18;
{ "domain": "codereview.stackexchange", "id": 29124, "tags": "java" }
How does negative gravitational work result in an increase in potential energy?
Question: I'm trying to untangle some confusion when it comes to understanding work. Suppose a rocket is moving upwards (in the opposite direction to the force of gravity), with a uniform velocity. For simplicity, let's assume the mass of the rocket is tiny relative to that of the earth, and define the system as comprising only the rocket. The combined total work done on the rocket, by engines and gravity, is zero, since there is no change in kinetic energy. This result makes sense given the definition of the work energy theorem, which states that the work done on a system is equal to the change in kinetic energy of that system. If we analyze the work done on the rocket by the engines alone, it is equal to the force of the engines (mass * gravity) multiplied by the displacement. The work done by gravity is the exact negative of the work done by the engines. We can derive this work done by gravity in two ways: The total work is 0, and therefore the work of gravity must be equal and opposite to the work done by the engines. Work is defined as the force multiplied by the displacement in the direction of the force, and the force of gravity is $\left[\text{mass}\right]{\times}\left[\text{gravity}\right]$, i.e. $mg$, and the displacement is opposite to the direction of this force, so work done by gravity is $-mg \, {\Delta x}.$ This is where I now get confused: I've often seen it stated that the negative work done by gravity, in this situation, is precisely what is causing the rocket to gain potential energy. In here, for example: The gravitational force that did negative work on the ball and decreased its KE has in the process increased the PE of the ball. Thus negative work (W1) has resulted in positive change in PE. Or, from here: The fact that these two cancel out (Wnet=Wyou+Wgrav=0) means that the kinetic energy of the object after being lifted is 0. 
So the work done by gravity went to sucking energy out of the object that you were adding, thereby converting it to gravitational potential energy. This immediately strikes me as bizarre. I associate an increase in height with an increase in gravitational potential energy (as something goes higher, its potential energy also goes higher). Yet the force of gravity is acting downwards, and a downwards force will reduce the rate at which an object attains height. So if anything, isn't the work done by gravity contributing to a decrease in the rate at which the potential energy is increasing? I understand that there's a bit of an irony here - the very thing that gives an object potential energy is gravity, and if you increase the gravitational force, you increase the gravitational potential energy. But on the other hand, the force of gravity reduces the rate at which an object attains height, and height is proportional to gravitational potential energy. I'm very confused here - to me, it makes intuitive sense that, while yes, the gravitational field is what allows gravitational potential energy to exist, it is the force of the engines that is driving the upward motion, and therefore causally implicated in the rise in potential energy of the rocket. So why is it said that the work done by gravity (which is negative!) is what results in the increase in potential energy? Indeed, from Wikipedia: The amount of gravitational potential energy possessed by an elevated object is equal to the work done against gravity in lifting it. While this doesn't logically imply that the work done against gravity (i.e. by the engine) is causally implicated in increasing the potential energy of the object, it surely suggests as much. And I can't help but come to that conclusion: without the engines, the potential energy of the rocket would remain at 0. With the engines, the potential energy increases! 
Answer: Let's consider a simpler example with a mass at the end of an anchored weightless spring. As the mass oscillates at the free end of the spring, there is a continuous conversion between kinetic and potential energies. The kinetic energy here belongs to the mass, while the potential energy belongs to the spring. As the spring stretches, it performs negative work on the mass, since the mass is moving in the direction opposite to the force applied by the spring. We can say that the kinetic energy of the mass (the only energy it may have) is decreasing because the spring performs negative work on it, while the potential energy of the spring is increasing because it performs negative work on the mass. We can also turn it around and say that the mass performs positive work on the spring (since the direction of the force the mass applies to the spring coincides with the direction of stretching) and, as a result, the potential energy of the spring increases, while the kinetic energy of the mass is decreasing (is being spent). All the above seems consistent: in all cases the energy flows in the direction of the work. Alternatively, we could treat the spring and the mass as one entity and claim that, since no external forces perform any work on it (assuming that the fixed end of the spring is not moving), its total energy is not changing and all energy transitions are internal. In the case of a ball moving up and down in the gravitational field of the Earth, we may decide that the potential, as well as the kinetic, energy belongs to the ball, in which case any work, positive or negative, done by the Earth on the ball won't make any changes in the total energy of the ball, which is a contradiction. On the other hand, we can decide that the potential energy belongs to the gravitational field, in which case the energy transitions would be similar to the case of the spring and the mass and there would be no contradictions.
I'll let the experts decide whether this approach is justifiable, but it appears to be helpful. We could also treat the ball and the Earth as one entity and consider all energy transitions internal to that entity.
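The bookkeeping for the constant-velocity rocket in the question can be written out numerically (a toy sketch with made-up numbers; g treated as constant and the rocket as a point mass):

```python
g = 9.81    # m/s^2, gravitational acceleration (assumed constant)
m = 2.0     # kg, toy rocket mass
dx = 100.0  # m, height gained at constant velocity

w_engine = m * g * dx     # engine thrust mg acting along the displacement
w_gravity = -m * g * dx   # gravity acts opposite to the displacement
delta_ke = w_engine + w_gravity  # work-energy theorem: net work equals the change in KE
delta_pe = -w_gravity            # by definition, delta PE is minus the work done by gravity

# Zero net work, zero change in KE, while PE rises by exactly m*g*dx
```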
{ "domain": "physics.stackexchange", "id": 49403, "tags": "forces, newtonian-gravity, work, potential-energy" }
At what point, when connected, do DNA strands become a helix?
Question: When synthesizing DNA strands and beginning to "connect" them, how quickly does it become a helix? In this answer, canadianer says The helical structure is more stable than the "straight" form (because of base stacking interactions), and so it forms spontaneously. This then makes me think of the following scenario: say one is constructing two strands of DNA out of deoxyribonucleotides. Then, one begins connecting them with hydrogen bonds. Before the hydrogen bonds are added, my understanding is the two strands would be straight and not helical (if this is wrong, please correct me on this) so at some point as the two strands are connected, it would "spontaneously" become helical. The question, then, is at which point. After one "connection"? Two? All of the connections necessary to complete the piece of DNA? Answer: In general, a single stranded nucleic acid is helical in the absence of other secondary structure. The base stacking that drives helix formation does occur between adjacent bases in the same strand. Note, however, that such structures are dynamic and dependent on properties such as temperature and salt concentration. You can see an example of a single-stranded RNA helix in the following crystal structure (1MHK): Note the blue helix at the bottom of the image is actually a duplex in the crystal.
{ "domain": "biology.stackexchange", "id": 6902, "tags": "biochemistry, dna, structural-biology, 3d-structure, dna-helix" }
Derivatives of Continuously Parameterized Operators on Hilbert Spaces
Question: I'm working through Ballentine's Quantum Mechanics - A Modern Development and I've reached a section of the book where he is looking at infinitesimal transformations and generators. To do this, he assumes a family of operators continuously parameterized by a variable $s$, $U(s)$, and expands around $s=0$ as $$U=I+s\left.\frac{dU}{ds}\right|_{s=0} +O(s^2).$$ I don't quite understand how these derivatives are defined. I understand that we can take limits in the Hilbert space, so my first thought is something like the derivative being defined by $$\frac{dU}{ds}|\psi\rangle = \lim_{\epsilon\to0}\left(\frac{U(s+\epsilon) - U(s-\epsilon)}{\epsilon}\right)|\psi\rangle$$ for all kets in the space. Then, if this converges with respect to the norm of the space, the derivative is defined. Is this the right way to think about the derivative of an operator, with higher derivatives defined similarly? From here, I would assume we can talk about convergent series of operators to define the exponential. This seems to be the purview of functional analysis. Are there any recommended sources from which I could learn more about analysis with operators? Answer: Yes, this is the right way to look at this, except you need $U(s)$ instead of $U(s-\epsilon)$ in your formula. If you have a map $s\mapsto U(s)$ into the set of bounded operators on a Hilbert space (e.g., unitary operators), then you have two main approaches for defining the derivative $$ \frac{d}{ds}U(s)=\lim_{\epsilon\rightarrow 0} \frac{1}{\epsilon}(U(s+\epsilon)-U(s))\ . $$ This depends on the topology. 1) The operator norm topology: meaning the convergence in this limit is with respect to the operator norm. This is too restrictive for QM applications. 2) The strong operator topology: that's exactly what you said, i.e., the limit means that for every ket $|\psi\rangle$, you apply $\frac{1}{\epsilon}(U(s+\epsilon)-U(s))$ and see if this converges in the Hilbert space norm. In general it does not converge for every ket.
So the derivative operator $\frac{d}{ds}U(s)$ typically is an unbounded operator which is only defined on a dense subspace of the Hilbert space. Look up Stone's Theorem. Also, a good book to go more in depth into this is "Quantum Mechanics and Quantum Field Theory A Mathematical Primer" by Jon Dimock.
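A standard concrete example (my illustration, not taken from the answer above) is the translation group on $L^2(\mathbb{R})$:

```latex
(U(s)\psi)(x) = \psi(x - s), \qquad
\left.\frac{d}{ds}\right|_{s=0} U(s)\,\psi = -\psi', \qquad
U(s) = e^{-s\,d/dx} = e^{-is\hat{p}/\hbar}, \quad \hat{p} = -i\hbar\,\frac{d}{dx}.
```

The strong limit defining $-\psi'$ exists only for those $\psi$ whose (weak) derivative is again in $L^2$, a dense subspace, which illustrates why the generator is typically unbounded, exactly the situation covered by Stone's theorem.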
{ "domain": "physics.stackexchange", "id": 64330, "tags": "quantum-mechanics, hilbert-space, operators, mathematical-physics" }
If acceleration causes relative time dilation does the eventual deceleration reverse it?
Question: If acceleration causes relative time dilation, does the eventual deceleration reverse it? For example: traveling to Alpha Centauri. (I'm basing this on my reading of this site: http://www.convertalot.com/relativistic_star_ship_calculator.html) Answer: You don't say how much you know about special relativity, and the calculations involved in handling acceleration are a bit involved unless you are already fairly familiar with the subject. The calculation is described in chapter 6 of Gravitation by Misner, Thorne and Wheeler, or if you just want the results see John Baez's article on the relativistic rocket. The simple answer is that no, the deceleration does not reverse the effects of acceleration. You can see why this is because, as dmckee and cb3 have said, it is the velocity that causes the time dilation, not the acceleration. The acceleration is symmetric about zero because the positive is balanced out by the negative, so you'd expect its effects to cancel, and indeed they do because you start at rest and end at rest. However, the velocity is not symmetric about zero because it starts at zero, rises to a maximum and falls back to zero. So there's no reason to expect the effects of the velocity to cancel. This means that the time dilation caused by the velocity wouldn't cancel either.
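The asymmetry between the acceleration profile and the velocity profile can be checked numerically; a toy sketch (a symmetric triangular speed profile integrated in coordinate time, not the full relativistic-rocket treatment):

```python
import math

def proper_time(total_t, v_max, steps=100_000):
    """Proper time for a trip that accelerates linearly to v_max (in units of c)
    and decelerates symmetrically back to rest; d(tau) = sqrt(1 - v^2) dt."""
    dt = total_t / steps
    tau = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        v = v_max * (1.0 - abs(2.0 * t / total_t - 1.0))  # triangle: 0 -> v_max -> 0
        tau += math.sqrt(1.0 - v * v) * dt
    return tau

# The acceleration cancels (+a then -a), but the elapsed proper time does not recover:
coordinate_time = 10.0
tau = proper_time(coordinate_time, 0.8)  # strictly less than 10.0
```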
{ "domain": "physics.stackexchange", "id": 5520, "tags": "special-relativity, space-travel" }
Is this PHP code snippet safe?
Question: Mostly asking for critiques of vulnerability. Am I using any functions or methods that are unsafe? <?php $menu = array( "page1","page2","page3" ); $defpage = "page1"; $section = $defpage; if ( isset( $_GET['section'] ) ) $section = $_GET['section']; if ( !in_array( $section, $menu ) ) $section = $defpage; ?> This is code that checks if the section is in the array and then sets it as such, but hardwires it back to default if it's not valid. Answer: Is it safe? Yes, it will currently do the right thing. One of the features that plays into best practice, though, is how future-proof it is. Over time, code gets edited, changed, etc. What you want is to make the code 'fail safe' in the future too. If someone comments out the second line, you end up with a problem. A better way to write your code would be to set the default, and only change it if the input is valid (checking isset on $_GET['section'] directly also avoids an undefined-index notice when the parameter is absent): <?php $menu = array( "page1","page2","page3" ); $section = "page1"; if ( isset( $_GET['section'] ) && in_array( $_GET['section'], $menu ) ) { $section = $_GET['section']; } ?>
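The same "default first, then override only on whitelisted input" pattern translates directly to other languages; a Python sketch for comparison (hypothetical function and names, not part of the original snippet):

```python
def resolve_section(params, menu=("page1", "page2", "page3"), default="page1"):
    """Return the requested section only if it is on the whitelist, else the default."""
    requested = params.get("section")  # None when the key is absent
    return requested if requested in menu else default

# Missing, invalid, and valid inputs all fail safe:
resolve_section({})                    # -> "page1"
resolve_section({"section": "evil"})   # -> "page1"
resolve_section({"section": "page2"})  # -> "page2"
```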
{ "domain": "codereview.stackexchange", "id": 8223, "tags": "php, security" }
Hydrofluoric acid
Question: At work we have a large tank filled mostly with hydrofluoric acid. I don't know the dilution factor as I'm not the one making the bath, but I have a question: when emptying the bath we tried to check if acid really is heavier than water. When we insert the pH stick, it comes up "acid", but the stick itself only goes about 2 cm into the bath, where theoretically the water would be. We emptied half of the bath so the acid would be emptied and tried again, same result. Is the acid mixed completely with the water, or is it just stronger near the bottom of the tank? Answer: Liquids with unlimited miscibility with water, like ethanol, sulfuric acid and hydrofluoric acid, do have different densities than water. This may lead to a layered state if one liquid is intentionally and carefully layered over or under water, or if there was just insufficient mixing. If mixed properly, which takes quite a short time with adequate stirring, the solution becomes homogeneous, with no tendency to separate. Therefore, there is no reason to expect variation of the acid concentration, unless it was locally and significantly consumed by the bath's purpose and not mixed afterwards. If there is such local acid consumption, then the result depends on how the liquid's density changes in the process. If it decreases, it would tend to trigger a convective rising stream, and vice versa. BTW, people without sufficient knowledge about dangerous chemicals should not be around them. A sudden splash of hydrofluoric acid on skin can painfully kill a person within hours. Part of the danger is that the acid takes its time penetrating the skin before the person starts to observe the dooming symptoms. The chemistry of fluorine has already claimed too many victims, especially in the first decades after the element's discovery.
{ "domain": "chemistry.stackexchange", "id": 16692, "tags": "acid-base, aqueous-solution" }
How to preprocess Acoustic Data
Question: I am dealing with acoustic data with a very high sampling frequency of 2 MHz and want to build a classifier. I was wondering if there are any rules of thumb for preprocessing acoustic data. Is it better to use the raw data (time signal) directly, or to first construct spectrograms and use these? There are papers which say raw is better, and there are papers saying spectrograms are better. It somehow seems to me that the authors already had a preferred method even before writing the paper. I think a real comparison is difficult. I read the paper "Deep Learning and Its Applications to Machine Health Monitoring: A survey", in which a study of different methods was done. I looked up its references, but the authors seemed to just pick raw or spectrograms without explaining. For example, in the paper "End-to-end learning for music audio" from Dieleman, spectrograms are preferred. In "Sample-Level deep convolutional neural networks for music auto-tagging using raw waveforms", they claim their 1D structure is better or at least comparable to 2D architectures. Personally I had better experience with spectrograms. Answer: As for the paper "Sample-Level deep convolutional neural networks for music auto-tagging using raw waveforms", I can give you some of my intuitions about the question, since my colleague and I performed the experiments. To summarize, I suggest you use spectrogram-based approaches in your situation. There are two reasons I would like to point out. First, training a raw-waveform-based architecture takes about 4 times longer than a spectrogram-based model when the sampling rate ranges from 16 kHz to 22 kHz. In your case, the sampling rate is 2 MHz, so I think it will take far more time than a spectrogram-based model for similar performance. Second, to obtain a well-trained raw-waveform-based model, we need more than 50 hours of audio, since the model has more parameters and deeper layers.
In my opinion, the benefit of using a raw-waveform-based model is not the performance improvement, but generative modeling. If we use a well-performing raw-waveform-based model, we do not need to reconstruct the audio signal from a spectrogram in the generative case. This is the main reason why we performed the reported experiments. If computing power and memory improve with current trends, we expect that raw-waveform-based models will become mainstream in the near future. But for now I think the spectrogram-based model is more convenient, especially for industrial applications.
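If you go the spectrogram route, a minimal SciPy sketch (the window parameters and the synthetic 50 kHz tone are illustrative, not tuned for real 2 MHz ultrasound data):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 2_000_000                       # 2 MHz sampling rate
t = np.arange(0, 0.01, 1.0 / fs)     # 10 ms of signal (20000 samples)
x = np.sin(2 * np.pi * 50_000 * t)   # synthetic 50 kHz tone as a stand-in

# nperseg/noverlap set the time-frequency trade-off; tune them for your signals
f, frames, Sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=512)

# Sxx has shape (len(f), len(frames)) and can be fed to a 2D CNN;
# log compression is a common preprocessing step before training
log_spec = np.log(Sxx + 1e-10)
```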
{ "domain": "datascience.stackexchange", "id": 2006, "tags": "preprocessing" }
Install ROS lunar on fedora
Question: I am trying to install ROS on Fedora: rosdep install --from-paths src --ignore-src --rosdistro lunar -y but I got this error: ERROR: the following packages/stacks could not have their rosdep keys resolved to system dependencies: gazebo_dev: No definition of [libgazebo7-dev] for OS version [26] I tried this command [ahmedadel@192 ros_catkin_ws]$ rosdep install -ay --os=fedora:26 and got this result: #All required rosdeps installed successfully Originally posted by ahmedadelhekal on ROS Answers with karma: 1 on 2017-09-24 Post score: 0 Answer: sudo dnf install --skip-broken python-empy console-bridge console-bridge-devel poco-devel boost boost-devel eigen3-devel pyqt4 qt-devel gcc gcc-c++ python-devel sip sip-devel tinyxml tinyxml-devel qt-devel qt5-devel python-qt5-devel sip sip-devel python3-sip python3-sip-devel qconf curl curl-devel gtest gtest-devel lz4-devel urdfdom-devel assimp-devel qhull-devel qhull uuid uuid-devel uuid-c++ uuid-c++-devel libuuid libuuid-devel gazebo gazebo-devel collada-dom collada-dom-devel yaml-cpp yaml-cpp-devel python2-defusedxml python-netifaces pyparsing pydot python-pyqtgraph python2-matplotlib This tends to install everything needed to build ROS on Fedora. Originally posted by Rufus with karma: 1083 on 2018-04-20 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2018-04-20: It would be really great if you could contribute some rules to the database so we can use rosdep to manage dependencies on Fedora again.
{ "domain": "robotics.stackexchange", "id": 28910, "tags": "ros-lunar" }
Name of this rearranging/sorting problem?
Question: You are given an array of length $n$. Each element of the array belongs to one of $K$ classes. You are supposed to rearrange the array using the minimum number of swap operations so that all elements from the same class are always grouped together, that is, they form a contiguous subarray. For example: $$ \begin{align*} &[2, 1, 3, 3, 2, 2] \longrightarrow [2, 2, 2, 1, 3, 3], \text{ or} \\ &[2, 1, 3, 3, 2, 2] \longrightarrow [1, 2, 2, 2, 3, 3], \text{ or} \\ &[2, 1, 3, 3, 2, 2] \longrightarrow [3, 3, 2, 2, 2, 1]. \end{align*} $$ Three other valid arrangements remain. What is this problem called in the literature? Is there an efficient algorithm for it? Answer: Note: This is a hardness proof; in practice there are approaches like integer programming, etc. Given a BIN_PACKING instance where you want to pack $K$ numbers $n_1,\ldots,n_K$ into $L$ bins of size $m_1,\ldots,m_L$, and it is ensured that $\sum n_i=\sum m_j=N$, then we could design an instance of your problem as follows: There are $K+(N+1)(L-1)$ classes; The first $K$ classes have size $n_1,\ldots,n_K$ respectively, and each of the remaining classes has size $N+1$; The array is partitioned into slots of size: $$m_1,(N+1)^2,m_2,(N+1)^2,m_3,\ldots,(N+1)^2,m_L$$ where each slot of size $(N+1)^2$ is packed with $N+1$ classes, arranged contiguously, and the rest are arbitrarily arranged. Now a key observation is that it is pointless to keep at least one class in a $(N+1)^2$ slot unmoved and move the other ones (because it won't change the size of a 'bin'). So the original bin packing is feasible if and only if the minimum number of swaps is no larger than $N$. Since BIN-PACKING is known to be strongly NP-complete, your problem is NP-hard.
{ "domain": "cs.stackexchange", "id": 9936, "tags": "terminology, reference-request, sorting, arrays" }
actionlib blocking subscription callbacks
Question: Hi, I'm using Python, ROS Fuerte and Ubuntu 12.04. I have an actionlib server as part of a node that subscribes to a certain topic. Subscription callbacks are serviced normally, but when the actionlib server executes its callback to dispatch a goal, no subscription callbacks are called, although data is continuously published on the topics this node is subscribed to. I tried putting the actionlib server in a separate thread but nothing changes. Any ideas? Thank you, Originally posted by Enric Galceran on ROS Answers with karma: 13 on 2013-01-22 Post score: 0 Answer: Callbacks are called during spin() or spinOnce() (see the overview on spinning for details). Callbacks are also blocking, so while your action callback is executing, your node won't spin, and no subscription callbacks will be performed during that time. If possible, you need to change the way your action server is implemented. Have a look at the goal callback method tutorial. It's written in cpp, but the concept is the same. Basically, you subscribe to goal messages and store the necessary data in the node. Then, in the main loop, you call a function to do a little bit of whatever action you're performing and possibly publish some feedback. That way, your node can keep spinning, and subscription callbacks can still occur. Originally posted by thebyohazard with karma: 3562 on 2013-01-22 This answer was ACCEPTED on the original site Post score: 4
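The blocking behaviour described above can be reproduced with a toy single-threaded dispatcher (a deliberately simplified model of spinning, not rospy's actual implementation):

```python
from collections import deque

class ToySpinner:
    """Single-threaded callback queue: one callback runs at a time, in order."""
    def __init__(self):
        self.queue = deque()
        self.handled = []

    def post(self, name, callback):
        self.queue.append((name, callback))

    def spin(self):
        while self.queue:
            name, callback = self.queue.popleft()
            callback(self)  # blocks the loop until it returns
            self.handled.append(name)

def action_callback(spinner):
    # messages arriving while the action executes only get queued, not handled
    spinner.post("sensor_msg_1", lambda s: None)
    spinner.post("sensor_msg_2", lambda s: None)

spinner = ToySpinner()
spinner.post("action_goal", action_callback)
spinner.spin()
# the whole action finishes before either sensor message is serviced:
# spinner.handled == ["action_goal", "sensor_msg_1", "sensor_msg_2"]
```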
{ "domain": "robotics.stackexchange", "id": 12532, "tags": "ros, action, actionlib, callbacks" }
What is membrane-partitioning free energy? Can it be simulated?
Question: Firstly, is there a strict definition of the "membrane-partitioning free energy"? It is bandied about in membrane biology, but I have never seen it strictly defined. The only non-scholarly site that Google shows is this Q&A on Quora, and the answer there even has ambiguity in what is defined as partitioning in the membrane. This seems wholly unclear. Furthermore, is it possible to study free energy changes upon partitioning in GROMACS, or with any other molecular dynamics simulations? If not, what are the methods I would need to use to determine the free energy of partitioning? Links and citations for further reading on this topic are encouraged. Answer: In the biological context, membrane partitioning usually refers to the stage in which the transmembrane-destined region of a protein moves from interacting with the water to interacting with the interface of the membrane. In the diagram below, showing a four-step thermodynamic cycle, the partitioning free energy is denoted ΔGwiu, where w is water, i is the interface, and u is unfolded. The image comes from the same paper that introduced the famous Wimley and White octanol-interface scale from 1999. Note that in a more recent 2015 article, they comment that the partitioning step ΔGwiu is generally the only experimentally accessible step. With that in mind, simulating ΔGwif, where f is the folded helix, becomes necessary. This 2014 Nature Comms paper uses folding-partitioning molecular dynamics simulations to estimate the free energies that are experimentally inaccessible. They used Gromacs 4.5. In fact, a study using a simulation from 2005 suggests that folding isn't necessarily required for insertion of the helices. Note that all the information here is behind a paywall. Feel free to ask me for clarification or expansion in the comments.
{ "domain": "biology.stackexchange", "id": 5714, "tags": "biochemistry, bioinformatics, proteins, biophysics, cell-membrane" }
What's the difference between getEulerYPR, getEulerZYX and getRPY?
Question: Hello everyone, I noticed tf::Matrix3x3 provides 3 distinct methods (getEulerYPR, getEulerZYX and getRPY) and they all seem to give the very same results. The output of the following snippet is given below: double roll, pitch, yaw; tf::Matrix3x3(transform.getRotation()).getRPY(roll, pitch, yaw); std::cout<<"getRPY(): "<<roll<<" "<<pitch<<" "<<yaw<<"\n"; tf::Matrix3x3(transform.getRotation()).getEulerYPR(yaw, pitch, roll); std::cout<<"getEulerYPR(): "<<roll<<" "<<pitch<<" "<<yaw<<"\n"; tf::Matrix3x3(transform.getRotation()).getEulerZYX(yaw, pitch, roll); std::cout<<"getEulerZYX(): "<<roll<<" "<<pitch<<" "<<yaw<<"\n"; outputs: getRPY(): 0.000254904 -0.00770209 -0.0312527 getEulerYPR(): 0.000254904 -0.00770209 -0.0312527 getEulerZYX(): 0.000254904 -0.00770209 -0.0312527 Why do we have duplicate methods that seem to do exactly the same thing? Can somebody please shed some light on this? Thanks a lot in advance Originally posted by Rika on ROS Answers with karma: 72 on 2021-07-03 Post score: 1 Answer: __attribute__((deprecated)) void getEulerZYX(tfScalar& yaw, tfScalar& pitch, tfScalar& roll, unsigned int solution_number = 1) const { getEulerYPR(yaw, pitch, roll, solution_number); }; void getRPY(tfScalar& roll, tfScalar& pitch, tfScalar& yaw, unsigned int solution_number = 1) const { getEulerYPR(yaw, pitch, roll, solution_number); } No difference according to the code. There is no real maintenance cost in allowing differently-named functions, with different argument names, that forward to the same underlying function. Originally posted by bob-ROS with karma: 525 on 2021-07-03 This answer was ACCEPTED on the original site Post score: 1
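As a cross-check of the "same numbers, different argument order" point, here is a small illustration in Python (not the actual tf code; the ZYX yaw-pitch-roll convention used by tf is assumed, and the angles are taken from the question's output):

```python
import math

def rot_zyx(yaw, pitch, roll):
    """Build a rotation matrix R = Rz(yaw) * Ry(pitch) * Rx(roll)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
        [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
        [-sp,   cp*sr,            cp*cr],
    ]

def get_euler_ypr(m):
    """Recover (yaw, pitch, roll) from a ZYX rotation matrix."""
    pitch = math.asin(-m[2][0])
    yaw = math.atan2(m[1][0], m[0][0])
    roll = math.atan2(m[2][1], m[2][2])
    return yaw, pitch, roll

def get_rpy(m):
    """Same computation; only the order of the returned values differs."""
    yaw, pitch, roll = get_euler_ypr(m)
    return roll, pitch, yaw

m = rot_zyx(-0.0312527, -0.00770209, 0.000254904)
yaw, pitch, roll = get_euler_ypr(m)
r2, p2, y2 = get_rpy(m)
assert (r2, p2, y2) == (roll, pitch, yaw)  # identical values, reordered names
```

Whatever the output arguments are called, the three values extracted from the matrix are identical, which matches the three identical output lines shown in the question.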
{ "domain": "robotics.stackexchange", "id": 36637, "tags": "ros, c++, ros-kinetic, ubuntu, ubuntu-xenial" }
Can't Link to liborocos-kdl.so.1.1
Question: I can't run robot_state_publisher, installed on Hydro from debians. I get the following error message. /opt/ros/hydro/lib/robot_state_publisher/robot_state_publisher: error while loading shared libraries: liborocos-kdl.so.1.1: cannot open shared object file: No such file or directory Versions: > dpkg -s ros-hydro-robot-state-publisher | grep 'Version' Version: 1.9.10-0precise-20131228-0321-+0000 > dpkg -s ros-hydro-orocos-kdl | grep 'Version' Version: 1.2.1-0precise-20131209-0906-+0000 Originally posted by David Lu on ROS Answers with karma: 10932 on 2014-01-09 Post score: 2 Original comments Comment by Athoesen on 2014-03-21: It appears 1.1 is for Groovy while 1.2 is for Hydro. I'm currently getting the same problem. Although both Groovy and Hydro are installed on this computer (shared lab) I know Hydro was sourced during my entire time doing this. https://groups.google.com/forum/#!topic/moveit-users/KS6fCrztt_0 Answer: Clean remove and then reinstall. Bah. Originally posted by David Lu with karma: 10932 on 2014-01-11 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Athoesen on 2014-03-21: Is this how you solved it?
{ "domain": "robotics.stackexchange", "id": 16621, "tags": "ros, orocos, robot-state-publisher" }
Why is using a lexer/parser on binary data so wrong?
Question: I often work with lexer/parsers, as opposed to parser combinators, and see people who never took a class in parsing ask about parsing binary data. Typically the data is not only binary but also context sensitive. This basically leads to having only one type of token, a token for each byte. Can someone explain why parsing binary data with a lexer/parser is so wrong with enough clarity for a CS student who hasn't taken a parsing class, but with a footing on theory? Answer: In principle, there is nothing wrong. In practice, most non-textual data formats I know are not context-free and are therefore not suitable for common parser generators. The most common reason is that they have length fields giving the number of times a production has to be present. Obviously, having a non-context-free language has never prevented the use of parser generators: we parse a superset of the language and then use semantic rules to reduce it to what we want. That approach could be used for non-textual formats if the result would be deterministic. The problem is to find something other than counts to synchronize on, as most binary formats allow arbitrary data to be embedded; length fields tell you how much it is. You can then start playing tricks like having a manually written lexer able to handle that with feedback from the parser (the lex/yacc handling of C uses that kind of trick to handle typedef, for instance). But then we come to the second point: most non-textual data formats are quite simple (even if they are not context-free). When the counts mentioned above are ignored, the languages are regular, LL(1) at worst, and are thus well suited for manual parsing techniques. And handling counts is easy for manual parsing techniques like recursive descent.
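To make the count problem concrete, here is a hand-written parser in Python for a made-up tag-length-value format (one byte of tag, one byte of payload length, then that many payload bytes — the format itself is invented for illustration). The length byte drives how much raw payload is consumed, which is exactly the part that a token-per-byte context-free grammar cannot express on its own:

```python
def parse_records(data: bytes):
    """Parse a stream of [tag][length][payload...] records."""
    records = []
    pos = 0
    while pos < len(data):
        if pos + 2 > len(data):
            raise ValueError("truncated header at offset %d" % pos)
        tag = data[pos]
        length = data[pos + 1]  # the count: it controls how much we consume
        payload = data[pos + 2 : pos + 2 + length]
        if len(payload) != length:
            raise ValueError("truncated payload at offset %d" % pos)
        records.append((tag, payload))
        pos += 2 + length
    return records

stream = bytes([0x01, 3, 65, 66, 67,   # tag 1, 3-byte payload b"ABC"
                0x02, 0])              # tag 2, empty payload
assert parse_records(stream) == [(1, b"ABC"), (2, b"")]
```

Nothing inside the payload is tokenized at all; the count alone decides where the next record starts, which is trivial in manual recursive-descent style but awkward for a grammar-driven tool.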
{ "domain": "cs.stackexchange", "id": 107, "tags": "programming-languages, compilers, parsers" }
Simplifying a web service method
Question: I have the following method in a web service class. I'm a little unhappy about the big block of new JProperty(...) calls in the for loop. Is there a way to simplify that? public string UserCatalog(string numericSessionId, JObject incomingRequestJson) { JObject json = new JObject(); JObject returningJson = new JObject(); JArray userCatalogArray = new JArray(); string deviceId = incomingRequestJson.SelectToken("deviceId", true).ToString(); string version = getOptionalData(incomingRequestJson, "version", "1.0.0"); requireMinVersion(incomingRequestJson); IEnumerable<Bookcard> catalog = readerBLL.UserCatalog(numericSessionId, deviceId); var publicKeyRSA = readerTools.GetPublicKey(HttpUtility.HtmlDecode(numericSessionId), HttpUtility.HtmlDecode(version)); //Build the userCatalog json array foreach (Bookcard card in catalog) { JObject arrayEntry = new JObject( new JProperty("bookThumbnailUrl", card.bookThumbnailUrl), new JProperty("bookId", card.bookId), new JProperty("bookTitle", card.bookTitle), new JProperty("titlePrefix", card.titlePrefix), new JProperty("author", card.author), new JProperty("annotation", card.annotation), new JProperty("publisher", card.publisher), new JProperty("numPages", card.numPages), new JProperty("returnDate", card.returnDate), new JProperty("downloaded", card.downloaded), new JProperty("deviceId", card.deviceId), new JProperty("currentPageLabel", card.currentPageLabel), new JProperty("furthestPageLabel", card.furthestPageLabel), new JProperty("currentReadPosition", card.currentReadPosition), new JProperty("furthestReadPosition", card.furthestReadPosition), new JProperty("lastReadTimestamp", card.lastReadTimestamp), new JProperty("bookLength", card.bookLength), new JProperty("ttsEnabled", card.ttsEnabled), new JProperty("mackinCheckoutID", card.mackinCheckoutID), new JProperty("runtime", card.runtime), new JProperty("readerType", card.readerType), new JProperty("dop", card.dop), new JProperty("externalId", card.externalId), new 
JProperty("externalSessionKey", card.externalSessionKey), new JProperty("externalCheckoutSessionKey", card.externalBookSessionKey), new JProperty("externalCheckoutId", card.externalCheckoutId), new JProperty("externalAccountId", card.externalAccountId + findawayBLL.AccountSuffix) ); if (card.printAllowed && !readerTools.isOldVersion(1, 3, 0, version)) { arrayEntry.Add(new JProperty("printAllowed", readerTools.EncryptRSA("print_OK_" + card.bookId, publicKeyRSA))); } userCatalogArray.Add(arrayEntry); } returningJson.Add("userCatalog", userCatalogArray); return returningJson.ToString(); } Answer: As @WillNewton commented, you could always move all that newing up elsewhere (perhaps in an extension method that extends the Bookcard type and adds a ToJObject method), but the length would remain the same - there's not much repetition in here. If all properties of BookCard (that's not a typo - I really think it should be BookCard) are mapped to a JProperty, then you could use some reflection to get each property's name and value: public static IEnumerable<JProperty> GetJProperties(this BookCard card) { var create = new Func<string, object,JProperty>((name, value) => new JProperty(name, value)); foreach (var property in card.GetType().GetProperties()) { yield return create(property.Name, property.GetValue(card)); } } and then I presume (haven't tested) this would work: public static JObject ToJObject(this BookCard card) { return new JObject(card.GetJProperties()); } This extension method would allow you to rewrite your foreach loop like this: foreach(BookCard card in catalog) { var arrayEntry = card.ToJObject(); // ... } Alternatively, JSON.net gives you the JsonConvert class; you might be interested in the Newtonsoft.Json.JsonConvert.SerializeObject and Newtonsoft.Json.JsonConvert.DeserializeObject static methods, each with overloads that let you customize the process.
{ "domain": "codereview.stackexchange", "id": 6938, "tags": "c#, json" }
First Hangman game
Question: This is my first ever program created after reading a book on Python. Do you have any suggestions for me? Anything that are considered bad habits that I should correct for my new project? #HangMan - 2014 import random import time #TODO: add word support #TODO: add already guessed letters secret = "" dash = "" HANGMANPICS = [''' +---+ | | | | | | =========''', ''' +---+ | | O | | | | =========''', ''' +---+ | | O | | | | | =========''', ''' +---+ | | O | /| | | | =========''', ''' +---+ | | O | /|\ | | | =========''', ''' +---+ | | O | /|\ | / | | =========''', ''' +---+ | | O | /|\ | / \ | | ========='''] def create_hangman(): create_hangman.guessess = create_hangman.guessess = 0 create_hangman.already_guessed = "" #List of words, pick a word, then set it to a var words = ["soccer", "summer", "windows", "lights", "nighttime", "desktop", "walk"] d = random.randint(0, 6) #Tell the compiler we want the global secret var global secret #Change the global secret v to a string while we choose the word secret = str(words[d]) #The blank spaces. 
Find how many letters the word is and replace it with underscores create_hangman.dash = ['_' for x in range(len(secret))] #Print the hangman print(HANGMANPICS[0], "\n",' '.join(create_hangman.dash)) def add_letter(letter): create_hangman.already_guessessed = create_hangman.already_guessed, letter def guess(): while True: think = input("Pick a letter: ") letter = think alreadyGuessed = "" if(len(letter) != 1): print("Please enter only one letter.") elif(letter not in 'abcdefghijklmnopqrstuvwxyz'): print("Please guess a letter.") elif(letter not in secret): wrong_word(create_hangman.guessess) add_letter(letter) elif(letter in secret): print("Congratulations!", letter, " was found!") remove_dash(letter) print_hangman() check() def wrong_word(hmpic): create_hangman.guessess = create_hangman.guessess + 1 hmpic = create_hangman.guessess if(create_hangman.guessess == 7): you_loose() else: print(HANGMANPICS[hmpic], "\n", ' '.join(create_hangman.dash), "\n", "That letter is not in the word.") def print_hangman(): print(HANGMANPICS[create_hangman.guessess] + "\n") print(' '.join(create_hangman.dash)) def you_loose(): print("Sorry you lost! The correct word was", secret) play_again = input("Would you like to play again: "); if(play_again == "Y" or play_again == "y"): create_hangman() print("Creating a new game...") elif(play_again == "N" or play_again == "n"): print("Thanks for playing, bye!") quit() else: print("Error: Please choose either 'Y' or 'N'") return you_loose() def you_win(): print("Congratulations! 
You won and got the word", secret) play_again = input("Would you like to play again: ") if(play_again == "Y" or play_again == "y"): create_hangman() print("Creating a new game...") elif(play_again == "N" or play_again == "n"): print("Thanks for playing, bye!") quit() else: print("Error: Please choose either 'Y' or 'N'") return you_loose() def check(): if(''.join(create_hangman.dash) == secret): you_win() else: guess() def remove_dash(letter): for i in range(len(secret)): if secret[i] == letter: create_hangman.dash = list(create_hangman.dash) create_hangman.dash[i] = letter name = input("Whats your name? ") print("Hey", name, "welcome to HangMan 1.6") create_hangman() guess() Answer: You are making a common beginner mistake of misusing functions as if they were goto labels. For example, from the last line of the program, you call guess(), which calls check(), which calls guess(), which calls check(), which calls guess(), …, which calls check(), which calls you_win(), which can call you_loose() (?!) At some point, you can hit ControlC to see the deep call stack that results from this weird mutual recursion. A properly structured program should have a nice, simple stack trace. See other examples of code with this problem: Hangman Number-guessing game Rock-paper-scissors Mileage calculator Here is an implementation restructured to use functions properly. Note the use of while loops. The state of a game, at any point in a game, is entirely summarized by secret and guesses. Therefore, those two variables are frequently passed from calls within play_hangman(). An object-oriented solution would avoid such parameter passing, but I opted to stay somewhat close to the original design instead. 
import random HANGMANPICS = … def pick_word(): """Return a random word from the word bank.""" words = ["soccer", "summer", "windows", "lights", "nighttime", "desktop", "walk"] return random.choice(words) def print_hangman(secret, guesses): """Print the gallows, the man, and the blanked-out secret.""" wrong_guesses = [guess for guess in guesses if not guess in secret] word_display = ' '.join(letter if letter in guesses else '_' for letter in secret) print(HANGMANPICS[len(wrong_guesses)]) print() print(word_display) def guess(secret, guesses): """Prompt for a single letter, append it to guesses, and return the guess.""" while True: letter = input("Pick a letter: ") if len(letter) != 1: print("Please enter only one letter.") elif letter not in 'abcdefghijklmnopqrstuvwxyz': print("Please guess a letter.") else: guesses.append(letter) return letter def won(secret, guesses): """Check whether the secret has been guessed.""" right_guesses = [letter for letter in secret if letter in guesses] return len(right_guesses) >= len(secret) def hanged(secret, guesses): """Check whether too many guesses have been made.""" wrong_guesses = [guess for guess in guesses if not guess in secret] return len(wrong_guesses) >= len(HANGMANPICS) def play_hangman(): """Play one game of hangman. Return True if the player won.""" secret = pick_word() guesses = [] message = None while not hanged(secret, guesses): print_hangman(secret, guesses) if message is not None: print() print(message) new_guess = guess(secret, guesses) if won(secret, guesses): print("Congratulations! You won and got the word", secret) return True elif new_guess in secret: message = "Congratulations! {0} was found!".format(new_guess) else: message = "That letter is not in the word." print("Sorry you lost! 
The correct word was", secret) return False def play_again(): while True: play_again = input("Would you like to play again: "); if play_again == "Y" or play_again == "y": print("Creating a new game...") return True elif play_again == "N" or play_again == "n": print("Thanks for playing, bye!") return False else: print("Error: Please choose either 'Y' or 'N'") while True: play_hangman() if not play_again(): break
{ "domain": "codereview.stackexchange", "id": 11044, "tags": "python, game, python-3.x, hangman" }
Installing a custom dependency with rosdep
Question: Hello, I have a Ros package (let's call it myRosPkg) which depends on a custom version of a python module (that we call myCustomPkg). This python module is on a forked github repository. I would like rosdep to automatically install this dependency. To do so, here is what I have done (following the instructions given here and here. 1. Created a custom_deps.yaml with the following content: myCustomPkg: ubuntu: | pip install git+https://github.com/user/repo.git@master 2. Append this file to /etc/ros/rosdep/sources.list.d/20-default.list echo file://$(readlink -f custom_deps.yaml) >> /etc/ros/rosdep/sources.list.d/20-default.list 3. Update rosdep and check install (this is where it fails) > rosdep update ... > rosdep resolve myCustomPkg --os=ubuntu:xenial rosdep detected OS: [elementary] aliasing it to: [ubuntu] #apt pip install git+https://github.com/user/repo.git@master > rosdep check myRosPkg --os=ubuntu:xenial -i rosdep detected OS: [elementary] aliasing it to: [ubuntu] System dependencies have not been satisified: apt pip apt install apt git+https://github.com/user/repo.git@master Is there something wrong I did ? (maybe rosdep is not parsing the command as a multiline string?) Thanks in advance :) Originally posted by ejalaa12 on ROS Answers with karma: 81 on 2018-06-27 Post score: 0 Original comments Comment by gvdhoorn on 2018-06-27: Did you copy-paste the yaml content correctly? The following seems to be missing one level of indentation: myCustomPkg: ubuntu: | ... Comment by ejalaa12 on 2018-06-27: Yes i just forgot to put it correcly on the question Answer: The bash syntax you linked to is no longer supported as of rosdep 0.12 several years ago. See the top level header. 
http://wiki.ros.org/rosdep/rosdep.yaml Originally posted by tfoote with karma: 58457 on 2018-06-27 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by tfoote on 2018-06-27: Duplicate: https://github.com/ros-infrastructure/rosdep/issues/614 Comment by ejalaa12 on 2018-06-27: Thank you, and sorry again for the double post. Is there no other way of doing so? Comment by tfoote on 2018-06-27: Since you're using pip I'd suggest using the pip installer. The best approach for this would be to use the full url in a pip rule. I don't think that's currently fully supported but a patch to extend the pip installer to support that would be appreciated. Comment by ejalaa12 on 2018-06-28: See my answer for using a pip rule. What do you mean by not fully supported ?
{ "domain": "robotics.stackexchange", "id": 31101, "tags": "rosdep, ros-kinetic" }
C++ Improved ThreadGroup Implementation
Question: After the amazing feedback from these questions; I have prepared a third version of the original posted code. The Idea is the same: An std::size_t variable threads_ready is increased to threads.size() until all threads are finished with the payload, and then back to 0 when all threads are ready to execute again. I eliminated all busy waiting that I could find made the class more generic by making the class use of variadic templates. The Best solution would be to have a lambda capture the required context, but unfortunately I couldn't find a way to deter thread related parameters comfortably from a generic lambda. The best I could manage was to reduce the used parameters to "thread index", so each thread would have an idea about the relevant regions in the inputs. I made a better use-case to better test the implementation #include <iostream> #include <functional> #include <tuple> #include <vector> #include <thread> #include <mutex> #include <iomanip> #include <numeric> #include <atomic> #include <condition_variable> #include <algorithm> #include <cassert> #include <chrono> #include <cmath> using std::atomic; using std::vector; using std::function; using std::tuple; using std::thread; using std::mutex; using std::unique_lock; using std::lock_guard; using std::condition_variable; using std::size_t; template<typename First, typename ...T> class ThreadGroup{ public: ThreadGroup(int number_of_threads, function<void(tuple<First, T...>&, int)> function) : worker_function(function) , state(Idle) { for(int i = 0; i < number_of_threads; ++i) threads.emplace_back(thread(&ThreadGroup::worker, this, i)); } ~ThreadGroup(){ { /* Signal to the worker threads that the show is over */ lock_guard<mutex> my_lock(state_mutex); state.store(End); } synchroniser.notify_all(); for(thread& thread : threads) thread.join(); } void start_and_block(tuple<First, T...>& buffer){ { /* initialize, start.. 
*/ unique_lock<mutex> my_lock(state_mutex); target_buffers = &buffer; state.store(Start); } synchroniser.notify_all(); /* Whip the peons */ { /* wait until the work is done */ unique_lock<mutex> my_lock(state_mutex); synchroniser.wait(my_lock,[this](){ return (threads.size() <= threads_ready); }); } { /* set appropriate state */ unique_lock<mutex> my_lock(state_mutex); state.store(Idle); } synchroniser.notify_all(); /* Notify worker threads that the main thread is finished */ { /* wait until all threads are notified */ unique_lock<mutex> my_lock(state_mutex); synchroniser.wait(my_lock,[this](){ return (0 >= threads_ready); /* All threads are notified once the @threads_ready variable is zero again */ }); } } private: enum state_t{Idle, Start, End}; tuple<First, T...>* target_buffers = nullptr; function<void(tuple<First, T...>&, int)> worker_function; /* start, length */ vector<thread> threads; size_t threads_ready = 0; atomic<state_t> state; mutex state_mutex; condition_variable synchroniser; void worker(int thread_index){ while(End != state.load()){ /* Until the pool is stopped */ { /* Wait until main thread triggers a task */ unique_lock<mutex> my_lock(state_mutex); synchroniser.wait(my_lock,[this](){ return (Idle != state.load()); }); } if(End != state.load()){ worker_function((*target_buffers), thread_index);/* do the work */ { /* signal that work is done! */ unique_lock<mutex> my_lock(state_mutex); ++threads_ready; /* increase "done counter" */ } synchroniser.notify_all(); /* Notify main thread that this thread is finsished */ { /* Wait until main thread is closing the iteration */ unique_lock<mutex> my_lock(state_mutex); synchroniser.wait(my_lock,[this](){ return (Start != state.load()); }); } { /* signal that this thread is notified! 
*/ unique_lock<mutex> my_lock(state_mutex); --threads_ready; /* decrease the "done counter" to do so */ } synchroniser.notify_all(); /* Notify main thread that this thread is finsished */ } /* Avoid segfault at destruction */ } /*while(END_VALUE != state)*/ } }; int main(int argc, char** agrs){ const int number_of_threads = 5; vector<double> test_buffer; double expected; double result = 0; mutex cout_mutex; ThreadGroup<vector<double>&> pool(number_of_threads,[&](tuple<vector<double>&>& inputs, int thread_index){ double sum = 0; vector<double>& used_buffer = std::get<vector<double>&>(inputs); size_t length = (used_buffer.size() / number_of_threads) + 1u; size_t start = length * thread_index; length = std::min(length, (used_buffer.size() - start)); if(start < used_buffer.size()) /* More threads could be available, than needed */ for(size_t i = 0; i < length; ++i) sum += used_buffer[start + i]; //std::this_thread::sleep_for(std::chrono::milliseconds(200)); //to test with some payload { /* Print partial results and accumulate the full results */ lock_guard<mutex> my_lock(cout_mutex); std::cout << "Partial sum[" << thread_index << "]: " << std::setw(4) << sum << " \t\t \r"; result += sum; } }); for(int i = 0; i< 1000; ++i){ test_buffer = vector<double>(rand()%500); std::for_each(test_buffer.begin(),test_buffer.end(),[](double& element){ element = rand()%10; }); expected = std::accumulate(test_buffer.begin(),test_buffer.end(), 0.0); result = 0; auto tpl = std::forward_as_tuple(test_buffer); pool.start_and_block(tpl); std::cout << "result["<< i << "]: " << std::setw(4) << result << "\t\t \r"; assert(expected == result); } std::cout << "All assertions passed! "<< std::endl; return 0; } Is there anything else that could be optimized/improved with this implementation? Answer: Lambdas are not inefficient. You need to pack up those values somehow, and the way lambda captures work are just constructor arguments. 
That is, it copies the desired values into storage locations, which is exactly the work needed when passing a parameter (storing it into a local variable in the called function). Your tuple packs up values into the tuple, which is morally the same as a structure with unnamed members; again, assuming no extra copies are being made, that is the same amount of work again. The most straightforward solution is to declare your function to just take the thread ID, and pass in a lambda that captures whatever it needs for the actual function to be performed. We hope that (in a release build anyway) that the constructor argument gets optimized out. But in the case where you are capturing something like vector or string, you want to use move semantics. To this end, the constructor argument should be a "sink" parameter. That is, declare it to take by-value, and use std::move in the member initialization. In any case, you can play around in Compiler Explorer to make sure that the extra copy is actually optimized away. In practice, copying a few bytes is nothing compared to the cost of starting a thread or synchronizing anything! As long as you are not deep-copying something expensive like vector it's not inefficient. To clarify: ThreadGroup x { 5, [=](int id){ } }; will initialize the lambda's body (the captures) directly in the function parameter of the constructor. Then, : worker_function{std::move(function)} in that constructor will initialize the class member. That extra copying is what we hope to optimize out: store the captures directly into the final resting place inside the lambda inside the std::function inside the ThreadGroup. Using move semantics ensures that even if copies are not entirely eliminated, it will not do expensive deep copies.
{ "domain": "codereview.stackexchange", "id": 41652, "tags": "c++, multithreading" }
Find the molality
Question: Find the molality of a mixture formed by mixing $200cm^3$ of $HNO_3$ that has a $69\%$ richness (by mass) and density $1.41g/mL$, with $1L$ of the same acid with $1.2M$ and density $1.06g/mL$. Attempt. The question asked the molarity too, and I think I've found it, but the issue comes with the molality. Pretty stuck here, I'm having trouble finding which is the solvent and which is the solute; is there any water involved here that is assumed to be known? I know molality, let that be $m$, is defined as $m=\frac{n_s}{m_{solvent}}$, where $n_s$ and $m_{solvent}$ are the moles of the solute and the mass of the solvent respectively. But I want to be sure I know what the mixture is about. Are we mixing some mixture that has water and acid, with another mixture that has water and acid? Or is this just acid with acid, and the solvent is the acid with more proportion? Continuation. Now that I know it is a mixture of 2 mixtures with both water and acid, let me post what I've tried: I've used plain letters to denote the first mixture, the letters with ' to denote the second mixture, and the letters with '' to denote the final mixture of both mixtures. $m''=\frac{n_s''}{m_{solvent}''}$, where $n_s''$ and $m_{solvent}''$ are the moles of the solute and the mass of the solvent of the new mixture, denoted by the ''. I've found $n_s''$ by doing this: $$n''_s=n_s'+n_s$$, not sure if that thing above is true, but assuming it is, I've separated both mixtures with the information we are given about them: $$\text{Mixture }1: \begin{cases}m_{mixture}=V_{mixture}d_{mixture}\\ 0.69m_{mixture}=m_s\end{cases}$$ $$\text{Mixture }2: \begin{cases} 1.2=\frac{n_s'}{V_{mixture}'}\end{cases}$$, by simply plugging in the information we know and some unit conversions, with the equation above $$n''_s=n_s'+n_s$$ we get $$n_s''=1.2+0.69\cdot 0.2L\cdot 1410g/L\cdot \frac{1mole\ HNO_3}{63g\ HNO_3}\approx 4.289 moles$$ and we would have one part of the $m=\frac{n_s''}{m_{mixture}}$ done.
Not sure on how to get the $m_{mixture}$ and not sure if what I've done is correct either. Answer: Ok, you correctly calculated the number of moles in the final solution with: $$n''=n'+n$$ and $$n''=\pu{1.2 M}\times \pu{1 L} + \dfrac{0.69\cdot \pu{0.2 L}\cdot \pu{1410 g/L}}{\pu{63.012 g/mol \ce{HNO_3}}} =\pu{4.2880 mol}$$ However I used 63.012 for the molecular mass of nitric acid instead of 63. Now to calculate molarity you divide thus $$\dfrac{4.2880}{\mathrm{liters\ of\ solution}} = \mathrm{molarity\ of\ \ce{HNO3}}$$ and to calculate molality you divide thus: $$\dfrac{4.2880}{\mathrm{kg\ of\ solvent}} = \mathrm{molality\ of\ \ce{HNO3}}$$ At this point you have to assume that the volumes are additive, which isn't quite true..., so you end up with 1.2 L of solution. So for molarity: $$\dfrac{4.2880}{\pu{1.2 L}} = \pu{3.57 M}\ \ce{HNO3}$$ Now the kg of solvent is a bit more convoluted. For the concentrated solution the mass of water is: $$0.31\cdot \pu{0.2 L}\cdot \pu{1.410 kg/L} = \pu{0.08742 kg}$$ for the more dilute acid solution the mass of water is: $$\pu{1.00 L} \cdot (\pu{1.06 kg/L} - \pu{1.2 mol/L}\cdot \pu{0.063012 kg/mol}) = \pu{0.98439 kg}$$ so the total mass of water is $$ \pu{0.08742 kg} + \pu{0.98439 kg} = \pu{1.07181 kg}$$ Now to calculate the molality $$\dfrac{4.2880}{\mathrm{kg\ of\ solvent}} = \dfrac{4.2880}{\pu{1.07181 kg}} = 4.00\ \mathrm{molal}$$
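The arithmetic above is easy to double-check programmatically. A short Python sketch, using the same molar mass (63.012 g/mol) and the same additive-volumes assumption, reproduces the hand calculation:

```python
M_HNO3 = 63.012  # g/mol

# Concentrated acid: 200 mL of 69% (by mass) HNO3, density 1.41 g/mL
mass_conc  = 0.200 * 1410.0               # g of solution
n_conc     = 0.69 * mass_conc / M_HNO3    # mol HNO3, ~3.088
water_conc = 0.31 * mass_conc / 1000.0    # kg of water, ~0.08742

# Dilute acid: 1.00 L of 1.2 M HNO3, density 1.06 g/mL
n_dil     = 1.2 * 1.00                                 # mol HNO3
water_dil = (1.00 * 1060.0 - n_dil * M_HNO3) / 1000.0  # kg of water, ~0.98439

n_total  = n_conc + n_dil                      # ~4.288 mol
molarity = n_total / (0.200 + 1.00)            # assumes additive volumes, ~3.57 M
molality = n_total / (water_conc + water_dil)  # ~4.00 molal
```

The three computed values match the answer's 4.2880 mol, 3.57 M and 4.00 molal to the precision quoted.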
{ "domain": "chemistry.stackexchange", "id": 15030, "tags": "mixtures" }
How to approach model reporting task
Question: I have been tasked to report on an ensemble model that was created in h2o which includes several model subtypes such as Random Forest, GBM, linear models etc. The end goal is to predict churn rates for products in a large telco company, but the approach we use could apply to any similar problem. The models produced in this way contain a few potentially useful performance measures such as variable importance, precision, recall and some others. Each model has roughly 150 input variables. The model scores have been used to group the customers by decile and measure the churn rate of each group. The present situation is that the scores appear to be too good which suggests we may have a data leakage problem. For instance, for one of the models the 1st decile captures 84% of the churn, with 99% of the churn captured by the 4th decile. My task is to understand and report on potential issues with the model performance so we can improve the models and recommend action to the business. What I would like to know is: What are some basic analyses that I can perform to address the data leakage issue. How can I leverage the model metadata to better understand model performance? What other important questions should I know to ask in order to fully address this task? Answer: Remove input data to test for leakage This is very generalized question, so without knowing the types and provenance of the input data, this can be hard to answer. But, in general, to check for leakage, you can use the model on some subsets of the input variables while removing other input variables. If you get data from multiple sources, then try removing all input variables from a single source, then re-run your models. You may be able to identify the source of the data leakage. Alternately, if computational power allows, you can brute force it by running the model with each of the 150 input variables removed, or all sets of two variables, etc. 
Use customer-centered time data Regarding model meta-data, again I would investigate data provenance. Are you predicting churn using the complete patterns of customers who stopped using the service? What I mean to say is, instead of looking backwards from a fixed real-time period, like today, to all customers who did or did not stop using the service, try looking from a fixed customer-time. Use only data from the first year that each customer used the service, and attempt to predict whether each customer will remain with the service for another year. The warning signs of a customer dropping the service may be obvious in the months leading up to that customer dropping the service, but by then, the predictive power of your model may be too late to stop that customer from leaving. Instead, index the time component of each customer's history to zero when the first start using the service, and run your model on this data.
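The source-removal idea in the first paragraph can be automated. Below is a minimal, library-free Python sketch; the source names, feature names, and the evaluate callback are hypothetical placeholders for whatever modelling pipeline (h2o or otherwise) is actually in use:

```python
def leakage_screen(feature_sources, evaluate):
    """Score the model with each data source's features removed.

    feature_sources: dict mapping source name -> list of feature names
    evaluate: callable taking a feature-name list and returning a score
              on held-out data (e.g. AUC)

    A suspiciously high baseline that collapses only when one source
    is dropped points at that source as the likely leak.
    """
    all_feats = [f for feats in feature_sources.values() for f in feats]
    baseline = evaluate(all_feats)
    drops = {}
    for source, feats in feature_sources.items():
        kept = [f for f in all_feats if f not in set(feats)]
        drops[source] = baseline - evaluate(kept)
    return baseline, drops

# Toy stand-in for "retrain and score": pretend one CRM field leaks the label.
sources = {"billing": ["monthly_spend", "tenure_months"],
           "crm": ["cancellation_flag", "support_calls"]}

def fake_auc(feats):
    return 0.99 if "cancellation_flag" in feats else 0.75

baseline, drops = leakage_screen(sources, fake_auc)
assert drops["crm"] > drops["billing"]  # the leaky source shows the big drop
```

In practice, `evaluate` would retrain the ensemble on the reduced feature set and return a held-out metric; the per-source score drops then give a compact table for the report.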
{ "domain": "datascience.stackexchange", "id": 2648, "tags": "machine-learning, ensemble-modeling, scoring" }
Possible Outcomes from Measuring a Hydrogen Atom
Question: A hydrogen atom is characterized by the wavefunction $$\mid \psi \rangle =\sqrt{\frac{2}{7}}\mid 4\,2\,1\rangle +\sqrt{\frac{1}{7}}\mid 2 \,1\,\bar{1}\rangle+\sqrt{\frac{4}{7}}\mid 3\,2\,0\rangle$$ I want to know the possible outcomes and the probability of obtaining each outcome from measuring $E$, $L^2$, and $L_z$, while also calculating the expectation value of each observable, expressed as some number times $E_1$ (for $E$), $\hbar^2$ (for $L^2$), and $\hbar$ (for $L_z$). I know the possible outcomes are the eigenvalues of the observable, but we didn't really go over any examples, and there are few examples of this sort in the textbook, so I don't know where to begin. For example, I know that when we act with $L^2$ on the system, we will get $$ L^2\mid \psi\rangle=6\hbar^2\sqrt{\frac{2}{7}}\mid 4\,2\,1\rangle +2\hbar^2\sqrt{\frac{1}{7}}\mid 2 \,1\,\bar{1}\rangle+6\hbar^2\sqrt{\frac{4}{7}}\mid 3\,2\,0\rangle$$ However, what does this tell me about the possible outcomes? Would the probability of each outcome be the square of the coefficient multiplying each term, divided by the sum of the squares of all three coefficients? How do I find the expectation value? Any help would be appreciated. Answer: The state ket of a system encodes all the information that can be known about that system. In which way? Well, in its linear decomposition on the eigenstates that form a basis of the Hilbert space you are working in. Your particular ket is represented as a linear combination of 3 kets that have nonzero coefficients. Each of those kets represents a particular basis state (which is characterized using a complete set of commuting observables, in your case $E$, $L^2$ and $L_z$). The 3 numbers "inside" the ket give the possible values of $E$, $L^2$ and $L_z$ (via the eigenvalue equations). With what probability? Well, that is just the squared modulus of the coefficient in the linear decomposition. 
So you see, the possible values are the ones with nonzero coefficients, that is, the ones with nonzero probability. Now, if you wanted information about another observable apart from $E$, $L^2$ or $L_z$, you would have to express your current basis kets in the eigenkets (belonging to another complete set of commuting observables) of the observable whose value you want to know. Always check normalization (your ket is already normalized). To get the expectation value you just sandwich your operator between the bra and the ket, $\langle \psi \mid \hat{A} \mid \psi \rangle$ (if you think carefully you will realize that this is just the sum of all the possible values multiplied by their probabilities).
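This bookkeeping can be checked numerically. Assuming the standard hydrogen results $E_n = E_1/n^2$, $L^2\mid n\,l\,m\rangle = l(l+1)\hbar^2\mid n\,l\,m\rangle$ and $L_z\mid n\,l\,m\rangle = m\hbar\mid n\,l\,m\rangle$, each probability is the squared coefficient and every expectation value is the probability-weighted sum of eigenvalues:

```python
from fractions import Fraction as F

# Each entry: (probability |c|^2, n, l, m) read off the given ket
# |psi> = sqrt(2/7)|4 2 1> + sqrt(1/7)|2 1 -1> + sqrt(4/7)|3 2 0>
states = [(F(2, 7), 4, 2, 1), (F(1, 7), 2, 1, -1), (F(4, 7), 3, 2, 0)]

assert sum(p for p, n, l, m in states) == 1      # the ket is normalised

E_avg = sum(p * F(1, n * n) for p, n, l, m in states)   # = 59/504, in units of E_1
L2_avg = sum(p * l * (l + 1) for p, n, l, m in states)  # = 38/7, in units of hbar^2
Lz_avg = sum(p * m for p, n, l, m in states)            # = 1/7, in units of hbar
```

Note that for $L^2$ the outcome $6\hbar^2$ occurs with total probability $2/7 + 4/7 = 6/7$, since two of the basis kets share $l = 2$; the distinct outcomes are what matter, not the individual terms.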
{ "domain": "physics.stackexchange", "id": 10110, "tags": "quantum-mechanics, homework-and-exercises, wavefunction, hydrogen" }
Transcriptionally-mediated DNA damage
Question: I'm researching the genetics of brain cancer, and finding a huge number of mutations in voltage-gated channels. It stands to reason that some of this DNA damage is due to the DNA being transcribed heavily, or in an open chromatin conformation more often, leading to more breakage and damage due to environmental stress. Of course, those are just guesses. Does anyone know of any research papers in the area? Answer: There seems to be some solid evidence that transcription promotes mutation because the untranscribed strand is able to form secondary structures which expose bases to chemical mutagenesis. Here is a recent paper about transcription-associated mutagenesis: Kim H et al.(2010) Transcription-associated mutagenesis increases protein sequence diversity more effectively than does random mutagenesis in Escherichia coli. PLoS One 5(5):e10567. doi: 10.1371/journal.pone.0010567. From the abstract: During transcription, the nontranscribed DNA strand becomes single-stranded DNA (ssDNA), which can form secondary structures. Unpaired bases in the ssDNA are less protected from mutagens and hence experience more mutations than do paired bases. These mutations are called transcription-associated mutations. Transcription-associated mutagenesis is increased under stress and depends on the DNA sequence. Therefore, selection might significantly influence protein-coding sequences in terms of the transcription-associated mutability per transcription event under stress to improve the survival of Escherichia coli. The authors cite a number of papers in their introduction which document the phenomenon that you could follow up. Just in case the focus on a bacterial system puts you off, the Kim et al. paper has in turn been cited in: Wright et al. (2011) The roles of transcription and genotoxins underlying p53 mutagenesis in vivo. 
CARCINOGENESIS 32:1559-1567 Abstract in full: Transcription drives supercoiling which forms and stabilizes single-stranded (ss) DNA secondary structures with loops exposing G and C bases that are intrinsically mutable and vulnerable to non-enzymatic hydrolytic reactions. Since many studies in prokaryotes have shown direct correlations between the frequencies of transcription and mutation, we conducted in silico analyses using the computer program, mfg, which simulates transcription and predicts the location of known mutable bases in loops of high-stability secondary structures. Mfg analyses of the p53 tumor suppressor gene predicted the location of mutable bases and mutation frequencies correlated with the extent to which these mutable bases were exposed in secondary structures. In vitro analyses have now confirmed that the 12 most mutable bases in p53 are in fact located in predicted ssDNA loops of these structures. Data show that genotoxins have two independent effects on mutagenesis and the incidence of cancer: Firstly, they activate p53 transcription, which increases the number of exposed mutable bases and also increases mutation frequency. Secondly, genotoxins increase the frequency of G-to-T transversions resulting in a decrease in G-to-A and C mutations. This precise compensatory shift in the 'fate' of G mutations has no impact on mutation frequency. Moreover, it is consistent with our proposed mechanism of mutagenesis in which the frequency of G exposure in ssDNA via transcription is rate limiting for mutation frequency in vivo.
{ "domain": "biology.stackexchange", "id": 1113, "tags": "genetics, cancer, transcription, mutations, chromatin" }
Bending behavior of built-up C-Channel
Question: I am analyzing a built-up C-channel for bending purposes (2 point loads of 1200 N each @ 30 cm symmetrically from the CL). The flanges are made of a different material than the web, and are also of a different thickness. The two materials have very different elastic and strength properties. Assume a perfect bond between the flanges and web. So far I have used the "equivalent section" trick to convert the flanges' dimensions as if they were made of material 2 and to find the stresses in both the web and flanges; after this I ran an FEA on the beam and the stresses are very close to what the theory predicts. The next and hardest step in the analysis is to figure out the deflection of the beam due to the bending load. While the "equivalent section" trick is helpful to find the maximum stresses, I'm not sure it can be used to find the actual deflection. Additionally, I need to figure out the shear center of the section, and all formulas I have seen so far assume a uniform thickness in the web and flanges, which is not my case. Furthermore, since the shear center will be located somewhere to the left of the section, there will be a resultant torque on the section which will induce a shear stress in both the flanges and web. While Roark's formulas for stress and strain provide guidance on how to calculate torsional shear stresses in C-channels, they generally assume uniform material properties, which is not the case for this channel. The materials are far weaker in shear than they are in tension/compression, so this is actually my main concern. Can anybody recommend any resources, tools or tricks to solve this problem? I have access to my college's mechanics of materials books and even in the advanced edition I couldn't find anything that might help me. In theory I could simply use FEA software for this section but I will have nothing to verify the results against, and I always like to at least have an analytical ballpark number to double check. 
Any help would be greatly appreciated. Thank you fellas. Answer: Assuming your sketch has been drawn to scale, it won't be easy to make a hand calculation of the location of the shear center. The challenge is not the use of two different materials but that the usual assumption of a thin-walled cross section won't be very accurate. If you need the accurate location of the shear center, you will pretty much have to use a FEM with 3D elements. The shear flow in a thick-walled cross section is too complicated a topic to be worth the bother. To calculate the approximate location using the approximation of a thin-walled cross section, you can use the following approach: Calculate any cross section parameters you need (area, first moment of area and second moment of area) for the transformed cross section, which is very similar to using an equivalent section, but not quite. That is, you pick one material as the reference material (e.g. number 2) and multiply the contribution of the other material (1 in that case) by the ratio of E for the two materials. The difference is that you don't adjust the width or thickness of material 1. The transformed cross section does not have a geometric representation as a cross section of a single material. Assume the cross section is loaded in pure shear, i.e. a vertical shear load placed in the shear center. Use Zhuravskii's shear stress formula to calculate the sum of horizontal shear in each flange in material 1. Clearly there will also be some horizontal shear in material 2, but based on the approximation of a thin-walled cross section, we're assuming it is a small contribution. There will also be some vertical shear in the flanges in material 1, but we'll assume it's a small contribution. Then the centroid of vertical shear will be located at the centroid of the web. The horizontal shear forces and the vertical one will both contribute to a torsional equilibrium about the shear center but with opposite signs. 
The only unknown in that equation is the location of the shear center, so solve for that.
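The transformed-section bookkeeping described above can be sketched as follows, with invented dimensions and moduli standing in for the asker's actual channel: scale every material-1 rectangle by the modular ratio n = E1/E2 when accumulating area, centroid and second moment. Since the bending stiffness of a composite beam is the sum of E·I over its parts, the resulting transformed I used with the reference modulus E2 also serves for deflection calculations, which addresses the asker's doubt about using the trick for deflection.

```python
# Transformed-section sketch; dimensions (m) and moduli (Pa) are invented.
E1, E2 = 70e9, 200e9   # material 1 (flanges) and material 2 (web, the reference)
n = E1 / E2            # modular ratio applied to every material-1 part

# Each rectangle: (width b, height h, centroid y from bottom, modular ratio)
parts = [
    (0.050, 0.005, 0.0975, n),    # top flange, material 1
    (0.005, 0.100, 0.0500, 1.0),  # web, material 2
    (0.050, 0.005, 0.0025, n),    # bottom flange, material 1
]

A_t = sum(r * b * h for b, h, y, r in parts)                       # transformed area
y_bar = sum(r * b * h * y for b, h, y, r in parts) / A_t           # transformed centroid
I_t = sum(r * (b * h**3 / 12 + b * h * (y - y_bar)**2)             # parallel-axis theorem
          for b, h, y, r in parts)
EI = E2 * I_t   # effective bending stiffness for standard deflection formulas
```

With EI in hand, the midspan deflection for the stated two-point loading follows from any standard beam table, giving the analytical ballpark number to check the FEA against.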
{ "domain": "engineering.stackexchange", "id": 3308, "tags": "mechanical-engineering, structural-analysis, stresses, solid-mechanics" }
tf tree is invalid because it contains a loop
Question: I am trying to display my laserscan on rviz but its status flickers between Status:Ok and Status:Error because of Transform. Transform [sender=/depthimage_to_laserscan] For frame [camera_depth_frame]: No transform to fixed frame [base_footprint]. TF error: [The tf tree is invalid because it contains a loop. Frame camera_rgb_optical_frame exists with parent camera_rgb_frame. Frame camera_rgb_frame exists with parent base_link. Frame base_footprint exists with parent odom. Frame base_link exists with parent base_footprint. Frame left_cliff_sensor_link exists with parent base_link. Frame leftfront_cliff_sensor_link exists with parent base_link. Frame right_cliff_sensor_link exists with parent base_link. Frame rightfront_cliff_sensor_link exists with parent base_link. Frame wall_sensor_link exists with parent base_link. Frame camera_depth_frame exists with parent camera_rgb_frame. Frame camera_depth_optical_frame exists with parent camera_depth_frame. Frame camera_link exists with parent camera_rgb_frame. Frame front_wheel_link exists with parent base_link. Frame gyro_link exists with parent base_link. Frame base_laser_link exists with parent base_link. Frame laser exists with parent base_link. Frame plate_0_link exists with parent base_link. Frame plate_1_link exists with parent plate_0_link. Frame plate_2_link exists with parent plate_1_link. Frame plate_3_link exists with parent plate_2_link. Frame rear_wheel_link exists with parent base_link. Frame spacer_0_link exists with parent base_link. Frame spacer_1_link exists with parent base_link. Frame spacer_2_link exists with parent base_link. Frame spacer_3_link exists with parent base_link. Frame standoff_2in_0_link exists with parent base_link. Frame standoff_2in_1_link exists with parent base_link. Frame standoff_2in_2_link exists with parent base_link. Frame standoff_2in_3_link exists with parent base_link. Frame standoff_2in_4_link exists with parent standoff_2in_0_link. 
Frame standoff_2in_5_link exists with parent standoff_2in_1_link. Frame standoff_2in_6_link exists with parent standoff_2in_2_link. Frame standoff_2in_7_link exists with parent standoff_2in_3_link. Frame standoff_8in_0_link exists with parent standoff_2in_4_link. Frame standoff_8in_1_link exists with parent standoff_2in_5_link. Frame standoff_8in_2_link exists with parent standoff_2in_6_link. Frame standoff_8in_3_link exists with parent standoff_2in_7_link. Frame standoff_kinect_0_link exists with parent plate_2_link. Frame standoff_kinect_1_link exists with parent plate_2_link. Frame left_wheel_link exists with parent base_link. Frame right_wheel_link exists with parent base_link. Frame scan1 exists with parent cart_frame. Frame scan2 exists with parent cart_frame. ] How do I get rid of the error? EDIT: tf frames Originally posted by charkoteow on ROS Answers with karma: 121 on 2014-09-25 Post score: 1 Original comments Comment by bvbdort on 2014-09-25: Please share frame.pdf screenshot from rosrun tf view_frames Comment by charkoteow on 2014-09-26: I've updated the question with my frames. Ignore the cart_frame, that's something else that I'm trying to figure out. Answer: It can also be caused by the openni_launch node if you leave the default settings, which publish tf on top of your URDF model. Set arg publish_tf to false in your launch file. That helped in my case. <include file="$(find openni_launch)/launch/openni.launch" > <arg name="publish_tf" value="false" /> </include> Originally posted by Frantisek.Durovsky with karma: 58 on 2014-12-18 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by charkoteow on 2015-03-03: worked for me too! thank you :) Comment by luc on 2016-12-07: me too! thanks!
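The loop itself is easy to find by hand once you have the frame → parent map the error prints: walk each frame's ancestor chain and stop as soon as a frame repeats. A minimal sketch (the conflicting odom → base_link edge below is hypothetical, standing in for whatever edge a second tf publisher such as openni's publish_tf adds on top of the URDF's tree):

```python
# Frame -> parent map, as tf reports it. The last entry is a hypothetical
# conflicting edge from a second publisher, which closes a cycle: in a valid
# tf tree every frame has exactly one parent and the graph has no loops.
parent = {
    "base_link": "base_footprint",
    "base_footprint": "odom",
    "camera_rgb_frame": "base_link",
    "odom": "base_link",  # hypothetical conflicting edge for illustration
}

def find_loop(parent):
    """Return a frame that lies on a parent-chain cycle, or None."""
    for start in parent:
        seen = set()
        frame = start
        while frame in parent:
            if frame in seen:
                return frame       # revisited a frame: the chain loops
            seen.add(frame)
            frame = parent[frame]  # climb toward the root
    return None
```

Running this over the map dumped by `rosrun tf view_frames` points straight at the offending edge; removing the second publisher (as the accepted answer does with publish_tf:=false) makes `find_loop` return None.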
{ "domain": "robotics.stackexchange", "id": 19525, "tags": "kinect, rviz, turtlebot, transform" }
LeetCode 839: Similar String Groups III
Question: I'm posting my code for a LeetCode problem. If you'd like to review, please do so. Thank you! Problem Two strings X and Y are similar if we can swap two letters (in different positions) of X, so that it equals Y. Also two strings X and Y are similar if they are equal. For example, "tars" and "rats" are similar (swapping at positions 0 and 2), and "rats" and "arts" are similar, but "star" is not similar to "tars", "rats", or "arts". Together, these form two connected groups by similarity: {"tars", "rats", "arts"} and {"star"}. Notice that "tars" and "arts" are in the same group even though they are not similar. Formally, each group is such that a word is in the group if and only if it is similar to at least one other word in the group. We are given a list A of strings. Every string in A is an anagram of every other string in A. How many groups are there? Example 1: Input: A = ["tars","rats","arts","star"] Output: 2 Constraints: \$1 <= A.\text{length} <= 2000\$ \$1 <= A[i].\text{length} <= 1000\$ \$A.\text{length} * A[i].\text{length} <= 20000\$ All words in A consist of lowercase letters only. All words in A have the same length and are anagrams of each other. The judging time limit has been increased for this question. 
Inputs ["tars","rats","arts","star"] ["vklldovi","lvdiklov","dlkvoilv","likolvvd","ldlvviko","kvdivlol","vlidklov","iovlkdvl","kvlvdiol","dvkillov","dvoklliv","kilvvold","ldliovvk","vldokvil","loikvvdl","illvodvk","vovlkidl","iklvlodv","vdvlkilo","llvkivdo","vklvdilo","oivkvldl","odlvikvl","vlokivld","vvkloild","vlkdlovi","klolvdiv","viovklld","klivdlov","odvlkliv","loidlkvv","llvodkvi","klvlivod","iokvlldv","oidkvlvl","llodkvvi","vldolivk","lvolvdki","ldoklvvi","lvokvild","lvilkdvo","vdovklil","ivkldvol","dikvvoll","ikovvdll","kvdliolv","odkllivv","lvvldkoi","dkilovvl","viodkllv","ldkvovli","illokvvd","dlvkliov","klivoldv","lvlvkido","kvlviodl","klvlvdio","ovlldivk","lkdviovl","dilkvlvo","lovlkvdi","ovkdlivl","olkilvvd","okvlivld","lidvvlok","iokvldlv","vlolvdik","ivlvdklo","lvvlkodi","kovdilvl","lilvvodk","ldvolkiv","vlkidolv","vidkllov","dlvoilvk","vklidolv","kdolivvl","ldivvlko","vvdlliok","vlviokld","dlvvloki","vlivkdol","vlildvko","kllivovd","dlilovkv","lvdilkvo","ildvlokv","odvllivk","odlvkilv","ldvkvoil","dvillkov","illkovvd","llodvikv","ivkollvd","kvildvlo","loldikvv","dvolivlk","ovilvlkd","dolilvvk","llovkvid","ildvkovl","idlvvlok","llvoikdv","dvlilkvo","lkdlivvo","vlovdikl","kiollvdv","lkildvvo","lvkovlid","dkoillvv","liolvvkd","vklvloid","ivlvokld","ldvloivk","klldvovi","dviolvkl","ikollvvd","lvlkodiv","kvdvilol","lolkidvv","llkdvoiv","ldvkvlio","dlvolvik","dlolivkv","vodlvlik","okvvdlli","lklviovd","vovdllki","dovlklvi","lvkioldv","vidvllok","vviklodl","klvliovd","olkvlidv","ovdlvlik","vkldvilo","kovdivll","lklidovv","ikdovlvl","kvvlldio","llvoidkv","dvviollk","dvolvikl","ilvdlkvo","diklvvol","lvkldvio","kidolvlv","volivldk","llkdoivv","idovlvlk","kvlliodv","vlkodlvi","dklliovv","odliklvv","dlioklvv","ldvlikvo","dvloivlk","kvdvillo","ikvovdll","kodlivlv","llkviodv","odlvivlk","vdlkolvi","ldkvvlio","liovdvlk","olkvdvli","lviovdkl","lldivovk","iklldvov","ildvvlko","odkvivll","llodikvv","dovllkvi","odlvlvik","lvvodlki","okvidllv","dovilklv","vodlivlk","klv
idovl","vildvolk","ldvvkilo","lvlokvid","vokilvld","dolvkilv","vvolkdil","vlvlkodi","vkvodlli","lvdoklvi","llkodviv","vdkllvio","dloklvvi","ldvvkoli","liovvkld","kidvovll","ldokilvv","lvdolvki","lloidkvv","ldloikvv","ikovvlld","dvoillvk","klvvilod","lkilvvdo","lvvlidko","livkldvo","dlolvvki","viklodvl","vdollkiv","vdolkvil","lvlvoikd","lviovkdl","ldlikvvo","kildovvl","idlklvvo","volidklv","okdlilvv","kvoldilv","voillvkd","vldiolvk","ovdvllki","kdvvoill","illvovkd","dokillvv","ovikvldl","vvollidk","lovilkdv","vklvldio","lvkvodil","llvvdkoi","vlvdolki","vkvoldli","llovvidk","villkdov","kdilvlov","dvlokivl","llvidvko","vlvokidl","klvdvoli","llkvdoiv","divlklvo","vldlkvio","vldlokvi","ilvkdolv","vlkivold","kvvdliol","lildvovk","olvlidkv","vvdiollk","lvlivodk","vliodvkl","lkolivvd","ldvkivol","lvoilvkd","vdvlokli","lvivdkol","oivlvkdl","vlkdovil","vvklliod","dvolvkli","ilkvdvol","volilkdv","lvvkldoi","ikllodvv","oildvlvk","vkilvold","kvivdllo","dlklviov","ildkolvv","ioldlvvk","vvikdoll","vloilkdv","klvlviod","dvlokvli","kvidolvl","kvliovld","ldlvkiov","olivlvdk","ivdkovll","oldlkvvi","dlilvvok","ovvkdlil","ovllvdki","lvlkoidv","vvkoildl","dklvvloi","okvldvli","lkvivold","kodlvvil","dvlvolki","vilkvold","kvivodll","dvlklivo","lolivvkd","idlvklov","llvdviko","vdoillvk","ldvilvko","oldlvkvi","dlkvvlio","vlkoilvd","ovldklvi","dlkiovvl","llvviodk","lvvikldo","dvklivlo","ilokvdvl","ildklovv","dilvvolk","olklivvd","vlvkidlo","olvlikdv","ovikllvd","dkvoivll","lvdlkiov","ilvkvdol","ollikvvd","diokllvv","vvkoilld","ivdlvlko","lvikldvo","villkodv","vvdlkiol","likvlovd","odvvilkl","kolidlvv","ivkdvllo","lildvkvo","kidvlolv","kodilvvl","odvklliv","dklovivl","dvoilkvl","liodvklv","okvilvdl","vklovild","lkoldvvi","ovilvldk","lloikvdv","ldvlivko","vildlkvo","vikldlvo","lvikvodl","vivolldk","lidovlkv","kldovvil","vikvodll","vlilovkd","lidlvvko","kiovdvll","vollvdik","ikllvodv","ivldoklv","ldvivkol","ldvvkoil","livdvolk","odvvlkli","koldvliv","kvvoildl","vdliklov","vdoikvll","odlvvikl","v
ilvdlok","lvlkidov","vliodvlk","llovvikd","oldivkvl","llvdikvo","vdllivko","lvolivdk","odkvlilv","dlkvliov","kvlovldi","lvoidlkv","olvvkidl","lvkoidlv","ovkdilvl","ovivlkld","iollvdkv","lildkvvo","lvdvkiol","ldiovlkv","kdiolvlv","dolvkvli","iodlvvkl","vvolildk","volvlidk","dvlkliov","ilvkdvlo","lklvvido","idlkvovl","idllvokv","lodvvlki","dvkvioll","ivlovkdl","vvodklil","kilvldvo","odilkvvl","ikdlovvl","vdkvilol","vvlliodk","kdlovvil","ldivkvol","idvlklvo","lvolvikd","vkdoivll","vvdkolil","ivdlklvo","lvidklov","vlovkild","kvdlolvi","kvovldil","likvdolv","ovlidlvk","lviovldk","lvlkvido","ikvdllov","llovdikv","ovdvlikl","vvlklido","vlkodvil","lkdvvloi","vlklvdio","odlvilvk","ldkvliov","llkivovd","ivkvlold","lidklvov","idlkvvol","klivovld","kvloilvd","llovkidv","llivvkod","oivllkdv","odkllviv","ovldlivk","ivdoklvl","volivkdl","ivllvkdo","klovvild","diovlvlk","vkliovld","ivodklvl","olkvdilv","olvklivd","vvlilokd","ldvkivlo","livvdklo","kovvlild","vdlklivo","vldlkovi","dolvlivk","vklvdoli","iklvvdlo","lodilkvv","voiklvld","lildvvko","lviodvlk","vlilvdko","vkioldvl","odivlklv","kildvovl","kvvlildo","odlvklvi","dvllovik","lvklidvo","lvoldvik","lkvovlid","vklovlid","vlokidvl","llokdvvi","olvvlkid","odikllvv","lvvliodk","olivlvkd","kviodlvl","lvldvoik","kviodvll","kilvodvl","vkivlold","kldiovvl","lvklodiv","kliovvdl","dlliovkv","odvilvkl","lvldviko","dokilvvl","lvliovkd","ivklodlv","vlkoivld","vdkloivl","doilvvkl","oidllvvk","oidvllkv","dollikvv","kvillovd","odvlivkl","olkvdivl","voilldvk","lldokvvi","lvvikold","ivvloldk","ovdivlkl","idvvlokl","kvioldvl","vvkidlol","lvdkovli","oillvvkd","lodvlvki","vodllkvi","dvovlkil","kliovvld","dvlvloik","vklodilv","vlolidkv","vlidklvo","ivvodlkl","okivdllv","lodklvvi","kviolvdl","ovkivldl","ldkvlvoi","kdlvlvoi","ikovvldl","vvldloki","vlkvolid","ikvvoldl","divkllov","villvdko","diovkvll","vvdoilkl","vlidvlko","vvklldio","lkvidolv","ldivlvko","idkvlolv","odlikvvl","idovklvl","dlklvoiv","dvlloivk","dovvlilk","divkollv","vvikllod","vilvdkol",
"vvodlikl","vlvdlkio","oklvldvi","lkdivolv","vlldkivo","killodvv","iolvkvdl","ovllivkd","kodvivll","vkvioldl","vklvdiol","vllviodk","ivvldklo","dlvolvki","ldoivvkl","lodvkliv","iolvvldk","ikdllvvo","ilvlvdok","vkilovld","lkolvdvi","ilokvldv","vlkvilod","ovvdlikl","ikldovvl","llkovivd","lvkdilov","lovikdlv","dvlolvki","voldilvk","lolivvdk","okvilvld","vdvollki","dlivolkv","vvdkliol","kvdovlil","odivkllv","vldikvol","kdlolvvi","vkdilovl","livdlovk","olvlivdk","voikllvd","vllokivd","vvdkilol","iklovdlv","vdolvikl","idvvollk","kdovvlil","ovdvlkil","dolklvvi","ldvvlkoi","lvkolvdi","vlkvoidl","vlivokdl","ivlvdlok","ldivlokv","divvollk","dvllkoiv","klvoilvd","ikdovvll","vldvoilk","lkdlvvio","kollvdiv","ldliovkv","lodvvkli","livkodlv","viodvlkl","illdovvk","lviokvld","kdlvlovi","kdoivlvl","iodvvkll","ovlvkdil","okvdilvl","vkodillv","vlvoilkd","vdovlkli","oilklvvd","vioklvdl","klvdlvoi","vkvloldi","okldilvv","lvoivkld","lkvovild","lkldivvo","ovdlvlki","oklvvidl","liokldvv","kldvlivo","vovdlkli","ikdvlolv","ivklldov","ildvlkov","kdivolvl","lvkoivld","lkvvoild","vkdloilv","ivlldkvo","vvodillk","lvklovdi","lovilvkd","lvloikvd","lovlvidk","kivodvll","lvovkidl","lvovlkid","klvvidlo","kldvolvi","vdllkvio","vivkldlo","lvviklod","ldvivlok","iolldvkv","dkiovllv","lidvkvol","olidlvkv","vklvildo","llvidovk","oildlvkv","dovlvlik","ilvdlovk","vdlokliv","okillvvd","voldkvli","vdvkoill","lklovivd","liklvodv","klvlodvi","dlkvovil","vldloivk","dvlioklv","lodviklv","dvlkoivl","lovklidv","dllivovk","vdlvoikl","lovvdkli","dilvolkv","lkldvvio","lkvolvdi","kvoilvdl","dvlkiovl","kllivvdo","ldlvikov","lvikdvol","dllkivvo","llkvoidv","lkvlovdi","koidlvvl","ovdllvki","dklovvil","livkdvlo","dlvilvok","ilvovldk","lkvvolid","ivvldlko","dlvolkvi","kolilvvd","llkovdiv","vkviolld","vdollvki","dvllviok","vlodklvi","ollikvdv","kdvlilvo","dollvkiv","ldkolvvi","kdillovv","lvvklido","vklilvdo","vlioldkv","lioldvvk","vldilovk","dklvlivo","lovkvild","oildlkvv","vllovkdi","vdvkolil","ivdovlkl","vlldikvo","ilvkvdlo
","idovlkvl","lvokldvi","lvivkdol","lvvidkol","vlldviko","ovlvikld","oivvllkd","vdlivlko","vdiklvlo","vliolvdk","oklivvld","vovdllik","kolvivld","lkvivlod","llvkiodv","vlkdvilo","vkdolvil","klovdvli","ldklviov","ldlikvov","lvvlkido","vidlkovl","kolivvld","vlolivkd","dvllivok","lolikdvv","lvdvikol","vdvilklo","vilvokdl","oivkllvd","lkivldvo","vvldkiol","loidlvvk","vlkildov","lkivldov","dloivlkv","lvkovldi","divkllvo","vllvokid","lvidovkl","kvllodiv","vdvolkil","vokdvill","iovvkldl","oilvldkv","lvvodkil","lkvdiolv","lkvlviod","kdlolviv","vllikvod","kdlovvli","voilkdvl","kvdovlli","volvidlk","vlidlvok","llvdkivo","okidvlvl","lvloivkd","vlovlidk","iovdlklv","koldivlv","lvdklivo","kvvloild","lidkvvlo","likvvldo","vovdklli","okdvlvli","odklivlv","ivlvokdl","vivkllod","vlvoikdl","livlodkv","kvdvolli","vkdvlloi","dovkvlil","lkdlvivo","klvivldo","dvivolkl","kdolvilv","vvolkidl","kvdvloil","lodvkvli","dlkviolv","vkvoidll","vokilldv","lkivodvl","liokvvld","lvlkdivo","vdllikvo","kldilvvo","vlviklod","ldvkolvi","lkvildvo","odllivkv","ldkiovlv","idvklvlo","kvlvldio","lkodlvvi","lvkdvoil","ldokvlvi","ldlkvvio","kdlvoliv","lvivlokd","llvvdoik","likvdvlo","ildvovlk","oivkvdll","dlolvvik","iolkvdlv","ovklvdil","lvdovkil","lolvidvk","ilvdlvko","dlkvviol","diklvlov","kvdlvloi","olkdvvil","kildvlov","vlivldko","kdilvvlo","lvivoldk","vvkilold","kldovvli","idklvolv","ldlivovk","lilkvdov","vdllikov","ivlokvld","lvkdlvoi","ivdokvll","ldvkilov","dlvlkoiv","kiovlvld","dvlolvik","kiolvvld","klviodvl","okilldvv","kdovlliv","lvokivld","kldlviov","kvlvoldi","lvvkdilo","ldlovikv","vllvoikd","vlkidlvo","ildklvov","lvdlvoik","oivdvkll","ldikvlov","vlkdvoli","dilklvvo","oklvvdil","lviovlkd","ilklvodv","ikldlvvo","divlvklo","lovvlikd","dovvllki","ovldkivl","vodivlkl","killovvd","ivlvkdlo","ldokvivl","vikodvll","dvlvkoli","ilvolkvd","llkodvvi","kvllovid","vkdivoll","idolvklv","ilokdlvv","oidllvkv","idllkvov","ldovkivl","vollkdvi","vlldkovi","illvkvdo","lilvvdok","lolvvkdi","vlvolikd","vklivldo","ildvko
lv","dlvovlik","vidvkllo","ldovlvik","dvlikolv","viodklvl","oivvldkl","kilolvvd","iovvlldk","kvlodilv","livodlvk","voillvdk","ilvdvokl","lidvvlko","ivoldklv","dillvokv","kvilvodl","odlilvkv","ivoldlvk","kilovldv","kodvllvi","kvlliovd","kodvlvil","vllvdiko","iovkllvd","loklvvdi","dvkovlli","dvvillok","doilvlvk","ilvldvko","lkvliovd","ovdllkiv","vdoklliv","kildovlv","livlvkod","kldviolv","llokvdvi","vlkiovdl","lvkiodlv","dlviovlk","vldilvok","dvllokvi","ilvodlvk","dlvlikvo","lvdkilvo","kvdllovi","kivlvdlo","kllvodiv","idllovkv","kvliodlv","ovlkivld","dlvovlki","oilvkvdl","lokvivdl","ovlkivdl","dvlkloiv","llviovkd","vkolvdil","ovkvllid","vlivdolk","lidkvlov","ovkidlvl","vdokvill","olvvlikd","dvlviokl","dvolkvli","ovidlklv","dlvovkil","lodlikvv","ovkvidll","dkivvlol","dolvlvik","llkoidvv","lvkilovd","vvildolk","lidvlovk","vkvdillo","kovlivld","vvliodkl","klvovild","dllkovvi","vvkldiol","idllvvok","iklovvld","vilovdkl","ldolvivk","vvllikdo","ovldlvik","lokdvvli","dvvollki","iovlvdkl","lvklvdoi","idvvklol","lovdklvi","dvlovlik","lvdokivl","divlklov","lilkovvd","llkivvdo","vkivdlol","kodvillv","livkodvl","ollvvkdi","ilovldvk","lvldvoki","kllvvdio","klldvoiv","lvlodkvi","lvidolkv","ilkodlvv","olkidvlv","lkdvvoil","vdlvlkio","idvkllvo","kvdovill","vllkodvi","dilovlvk","ilvlkdvo","ovvllkid","kilvdolv","kvdilvol","vkvloidl","ovkildlv","volkvlid","olikdvlv","dlvklovi","llidvkov","livlvokd","idovvkll","dvlilvko","vviokldl","lkviolvd","ovdklilv","ivvkldlo","vdlliokv","ovldkilv","lilkvvdo","vvikdlol","vvlkldoi","oviklvld","diolkvlv","divlkvlo","llivvdok","lvldkvio","lvdolivk","lvioldkv","vvodklli","lovikdvl","odvklivl","vkvolldi","oilkvvdl","dilkvvlo","llkiodvv","lokldvvi","lklvdovi","klvdlvio","lilvkdov","vvodlkil","lvilokdv","lvkdliov","lovlivkd","dlvkviol","kvlioldv","llkiovvd","okdivvll","kdlovliv","oiklvvdl","ldvlkvoi","oivvkldl","ldkvoivl","lliovdvk","vkodlvil","dlvlivko","idvlvlko","lidlvvok","ilvvdkol","kilodvlv","lvdlikvo","vdilkvlo","kvloldiv","divllkvo","ldklvovi","odvi
kvll","ldlvkvio","vlivkdlo","vklvidlo","klivvldo","loilvkdv","ovivdllk","odivllkv","lvokvidl","kvllvdio","ovldlkvi","oikdvvll","vvoldilk","dklovilv","ivllodkv","vlvliodk","lkldviov","vollkdiv","ovildlvk","vilkvodl","vovilkld","oldvvkli","vvdlkilo","okvidlvl","ldokvilv","ollvkdiv","vllidkvo","livkdlvo","ovklldiv","vldvliok","vivllkod","vllkoidv","kdoillvv","dvklilvo","dklvlovi","lidklvvo","vkdolvli","lviklodv","dlolvivk","klilodvv","klivvlod","vkvdllio","lvkidovl","vvdolkil","ldokvvil","ldivvolk","ldiklovv","ovildvlk","llvvokdi","vlliodkv","voldlivk","ildvvkol","odkvlliv","ilvvkdlo","klvvoild","dlolvkiv","vidlolkv","lodivvkl","vdovikll","loldvivk","lvvkodil","ivdlvolk","llvkivod","oldkvivl","vokivdll","lkovidvl","kdvvloli","lkodvlvi","ikvlvlod","lvdkivol","vllkoivd","kvlvdoli","kdolvvil","vlodilvk","ldlvviok","vkldvoli","odlikvlv","iklvvlod","dvviklol","kdovvill","dklvivol","llvkdvio","vloilvdk","vdoilklv","ikvldlov","lkvloidv","lldvvkoi","ilvkodlv","vokvldli","liolkvdv","kovildvl","oikvlldv","ilvvolkd","vokvlldi","llvivdok","ivvklold","olvdlivk","llvikdvo","vlivdokl","llkvvdoi","kvvdolli","iodkvvll","idklvvlo","voidkllv","ilvdvlko","vlikdvol","odllvvik","lilovvdk","vviokdll","ilkvlvdo","kloilvvd","voiklvdl","dlkovvli","vvklildo","kdivvllo","vdlkloiv","vvllodik","vilkldvo","villdkvo","lokivdlv","lkidvolv","lvdkvloi","kvldvoil","vkvoildl","oikvlvld","kdivlovl","ikdllvov","lkidvvlo","lvdlviok","lioklvvd","llodvkiv","olvkivld","odvlilkv","lvikoldv","vdlilovk","ivldkolv","vldlovki","kvvilodl","lvkdoivl","iodvlklv","lidvlvko","kdvliolv","okvvlild","ldkvvloi","ilokldvv","vlvloikd","violdklv","lvlvkoid","dvkiollv","ldkvovil","olvvdikl","vollivkd","dlklvovi","ovikvdll","lodlkvvi","vidokvll","lilvkdvo","lvdvkoil","olvkdilv","vkliovdl","vldivolk","vlidolkv","volkdvli","ilvodlkv","lldviovk","vdklovil","vdlkvloi","lodlkivv","kdvovlil","klviolvd","ovdkillv","dlvlovik","llodivvk","dovvilkl","lvkiolvd","ivdvlolk","odivvkll","vlovdkli","ivdklvol","livodlkv","vidlkvol","vodlvikl","ko
idvlvl","ovdlkivl","dvvklilo","klvldovi","vloldvik","ilvolvdk","volkdlvi","vloldkvi","vvldkloi","lvkovild","lvlovkid","llvvdoki","lkvidvol","vildklvo","idlvlkov","volvdkli","kvlolidv","dokivlvl","lvdkiovl","kivlolvd","okivldlv","vidlvolk","olvkvdli","lvkvilod","dilvlvko","vodvlkli","vkvdlilo","lvdlovki","ovdklliv","ldvvkloi","vkovidll","illvdvko","lkdvvoli","lldkivvo","lkdlviov","vodilklv","vdklilvo","ldvikvol","liokvlvd","ivllvdko","dlvikolv","lvlkiovd","lovdiklv","kvdllvio","lvioklvd","ilvvdolk","dkvvlilo","lkvivdol","iklvvdol","ildvovkl","klvolidv","vvdolilk","kvdlvoil","kolvidvl","vkdoilvl","vdikolvl","ldioklvv","ovvlldki","vlkiodlv","okllivvd","lvikvldo","ovikldlv","lvdkvlio","lvdkivlo","kliovlvd","illkodvv","llvoidvk","loklivdv","okdllviv","dvlvoikl","llokidvv","lvldvkoi","kdvolvli","ldolvvki","vkiolvdl","klvdolvi","livklvod","olvvidkl","ovidvlkl","vldkolvi","lovvkldi","vokdilvl","likdvlvo","ovlvilkd","lkoildvv","vllovkid","kidovlvl","vvlkldio","ildvokvl","vvkdloli","lvoidvkl","vokvidll","vkdvilol","lkvdvoli","dkillvvo","kdvillvo","ivdklvlo","dlkvilov","vodvklli","vkvilold","ldvvloki","likdlovv","likvdvol","vldilvko","llvovdki","llvvikdo","dvolvkil","dikolvvl","ovkldivl","iovllvkd","vlikolvd","vvdollik","lokivvdl","odivklvl","ldvolivk","lvvidlok","lkovldvi","kvllvdoi","vdvolkli","llkovvid","vloivdkl","vlvkoidl","ldvvolik","idokvlvl","iovlklvd","vlkvidlo","ivvdokll","lklvidov","llvidokv","ovidllvk","olikldvv","viovdllk","vvldoilk","dllivkvo","lkovvidl","dlivkolv","lodlvivk","lildvvok","idvkvoll","ilvlovdk","oikvlvdl","vdllviko","vllkivdo","kvloivdl","lvlovidk","vkvldoil","lvkovdli","vlloivkd","vdllovik","dovvikll","iodlkvvl","kvvilold","lvlovikd","vivkldol","vlivolkd","oikvdvll","kvoidllv","klolvdvi","ldkvilvo","ildvlvko","ollvivkd","iokvvdll","lilvdovk","vviolkld","olkivvdl","dlvlviok","ovllikvd","vlolivdk","lvlkvdio","kllodvvi","loldvikv","vkvdliol","kivldvlo","vkvlildo","klldvvio","dvoilvlk","kodlivvl","lvdvkilo","vlkvliod","lvovkdil","lokdvilv","vlidkvlo","
ovklidlv","vvklodil","vvilklod","kdvoilvl","lilovvkd","vdkvlloi","dolkvlvi","vollvikd","ivvldokl","kivlodvl","okllvivd","ldovlvki","iklldovv","dovivllk","vlivlokd","odklvliv","kdvlovli","ikvldlvo","illvvkdo","idvlolkv","lvvidolk","vvkldilo","ovivdkll","iklolvdv","idokllvv","vvdilolk","lkvodvli","kivlovld","liovlvkd","vvioldkl","diovvlkl","ldlkvvoi","vidllvok","vkdllovi","kiodllvv","vlvlkdoi","ovllvdik","vdklvilo","lklivdvo","kdovlilv","lvldkovi","lviolvkd","llvvkodi","kovvdlli","ilvlodvk","ivlkovld","kvldlovi","vdklvlio","olkvilvd","voklidvl","dollkviv","voikdlvl","dllkvovi","voilldkv","ilvokvld","oviklvdl","vdvoklil","kdiovlvl","vdkoivll","iloklvvd","lliodvvk","kvoilvld","dlklvvoi","oivlkldv","kovdlliv","vdlvokil","ivllokdv","oklvilvd","lovvldik","illvkvod","doviklvl","lvokdliv","kivvldlo","lkvivldo","dillvovk","dllkoivv","klivlvdo","dkvliolv","kvllvoid","oklidvlv","kdivllvo","dvlvloki","ldlivokv","dvkllvio","iovldlkv","koivldlv","vldlkoiv","ovlidklv","lokdivlv","olvvikdl","oklvildv","vkdvloli","iklvodvl","kidvlvlo","vliokldv","vdokllvi","okilvvdl","oikvvdll","llkovvdi","okvildvl","lvilvkdo","dlkvvoil","vodikvll","lvlvdkoi","vkdlvlio","lkldvoiv","lodlvvki","dovkllvi","vkvldiol","kivlvodl","dvlkoilv","lvkodlvi","dlklvivo","oilvvldk","vklvodil","lkldvovi","dvliokvl","loilkvdv","kdvloivl","illovdkv","dkliolvv","vlldokvi","kolvvlid","kllvdvoi","vikvldol","ovidlvlk","lvvoidkl","klivldvo","odlvkivl","iloldkvv","lolkvvid","lklvidvo","divvkllo","oilvdlvk","vdvliokl","kovivldl","lvviodlk","lvkdvloi","olidkvlv","vidvolkl","ilvdokvl","lvldkoiv","vlkvildo","kvvodlil","vliokdlv","lvdvlkio","kdvilolv","ikvodvll","llkivdov","vkldivol","vlkvlido","ldlvkvoi","kvodlvli","kvilldov","vlioklvd","dvllvoik","klidvlvo","koldvvil","odkvillv","liovlkvd","vdkilovl","ldikvvol","lvivkldo","ivkovdll","livlkovd","klovvdil","kilolvdv","ldkvolvi","lvkodliv","olvlikvd","vlvolkdi","vlkoivdl","lvdlvkoi","vodklvli","illkvvod","idoklvvl","lodvkilv","dvlvkoil","ovdillkv","ilkvodvl","dovvlkli","lovdvlki"
,"illvdvok","llkidvvo","kovdvill","lildovkv","ivlovlkd","ivolvlkd","viovdkll","lvdkilov","vlliovdk","vdvolikl","lvviolkd","lilkdovv","lvolkdiv","ikdlvovl","kvvolild","kvliovdl","lvlodvik","diovlvkl","oilvlvkd","lodkvvil","dvollvik","odkillvv","ldklvoiv","ivklvdlo","dovklivl","vldoilvk","livokldv","ldkvlovi","llikovvd","kvlloidv","vilklvod","ldviolkv","klvidlvo","kovdlivl","ovdllikv","illovkdv","odvvllki","ilvvokdl","lvlivdko","idvllovk","vollvidk","iovvkdll","lilvovkd","dollvvik","vdkvoill","lvkidvlo","lvdvilok","vdovkill","vvolilkd","ldlivkvo","vdlviklo","ldoilvvk","klvviold","odlvvkil","vlolkivd","lovkilvd","volildkv","dvollivk","lvkloivd","llovdvik","kldvilvo","ivloklvd","ldkvviol","okdvlilv","villdvko","lidlvkov","vkdilvol","ovillkdv","ikvolvdl","vlokdvli","likvdlvo","dlvolivk","kvlvodli","kdvolivl","lkiovvdl","loilvvdk","ovlvkild","dlkvoivl","ldovvlki","vviloldk","dvlklvio","kidlvlov","idlvkovl","ilvdklov","dvvlkiol","vdvilkol","vlviolkd","vlidvolk","dklvviol","lvikdlov","okvlvldi","kivllvod","vldvoikl","kvovlild","ovvdikll","kdivvlol","dlkiovlv","lviodvkl","vvdklilo","lviokdvl","kdlvoilv","dolvvkli","kdvloilv","lkdolvvi","kdovilvl","dioklvvl","ldvovlki","dlvkvlio","dvoivlkl","dvvolilk","ldlivkov","vlkidlov","lilkovdv","vdvliklo","ilvklovd","lvokidlv","dilvvlko","ivllvodk","dilvovlk","koilvldv","kvdolvli","kldvviol","ildkvlov","ovlidkvl","vlvokild"]

Outputs: 2, 7

Code - Python 2

import itertools


class Solution:
    def numSimilarGroups(self, A):
        def find(x):
            while x != uf[x]:
                uf[x] = uf[uf[x]]
                x = uf[x]
            return x

        def union(x, y):
            count = 0
            for i, j in zip(x, y):
                if i != j:
                    if count < 2:
                        count = -~count
                    else:
                        return False
            return count == 2

        row_len = len(A)
        col_len = len(A[0])
        A = set(A)
        uf = {word: word for word in A}
        groups = len(A)
        if col_len > row_len:
            for x, y in itertools.combinations(A, 2):
                if union(x, y):
                    x_root = find(x)
                    y_root = find(y)
                    if x_root != y_root:
                        groups -= 1
                        uf[x_root] = y_root
        else:
            for x in A:
                for i, j in itertools.combinations(range(col_len), 2):
                    y = x[:i] + x[j] + x[-~i:j] + x[i] + x[-~j:]
                    if y in A:
                        x_root = find(x)
                        y_root = find(y)
                        if x_root != y_root:
                            groups -= 1
                            uf[x_root] = y_root
        return groups

References: Problem, Discuss, Solution

Answer: Make your code easier to read

I don't see anything wrong with the code, except that it is sometimes a bit hard to see what is going on. You could add some comments here and there explaining what you are doing, and improve some variable names. For example, what does uf stand for? Also avoid using bit tricks like count = -~count when count += 1 does the same and is much clearer in its intent.
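To make that advice concrete, here is one possible cleanup of the solution. The names (`parent`, `is_similar`) and the decision to keep only the pairwise-comparison branch are my own illustrative choices, not part of the reviewed code:

```python
import itertools


class Solution:
    def numSimilarGroups(self, A):
        # parent maps each word to its union-find representative;
        # a clearer name than the original `uf`.
        def find(x):
            while x != parent[x]:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        def is_similar(x, y):
            # Two equal-length words are "similar" when they differ
            # in exactly two positions (i.e. one swap apart).
            diff = 0
            for a, b in zip(x, y):
                if a != b:
                    diff += 1  # plainer than the `-~diff` bit trick
                    if diff > 2:
                        return False
            return diff == 2

        words = set(A)
        parent = {w: w for w in words}
        groups = len(words)
        for x, y in itertools.combinations(words, 2):
            if is_similar(x, y):
                root_x, root_y = find(x), find(y)
                if root_x != root_y:
                    groups -= 1
                    parent[root_x] = root_y
        return groups
```

Dropping the `col_len > row_len` branch costs some speed on long words but keeps the control flow readable, which is the point of the review.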
{ "domain": "codereview.stackexchange", "id": 39413, "tags": "python, beginner, algorithm, programming-challenge, python-2.x" }
MODIS Surface Reflectance Data State QA Aerosol Quantity field meaning is unclear
Question: I am trying to compare NDVI values computed from MODIS Surface Reflectance (MOD09) with the NDVI product (MOD13). MODIS Surface Reflectance Collection 5 products include a Data State QA field that I am using to estimate which pixels are best suited for computing NDVI. A part of this field is labeled Aerosol Quantity. The values in this field seem to significantly affect the result of NDVI computation. The explanation of the possible values of this field in the MOD09 Users' Guide seems not entirely clear to me:

00 - climatology aerosol
01 - low quantity
10 - average
11 - high

The last three values are clear, while the meaning of the climatology value is somewhat non-intuitive. I have selected a crop field and calculated NDVI for all pixels clear of clouds or cloud shadows during one vegetation period and observed the following: results with low aerosol quantity (green points) match well with the results obtained with the complex rating algorithm used in the standard NDVI MOD13Q1 product, while the data with climatology state (orange points) seem far off. What does the climatology attribute mean and how should it be treated? Answer: Most likely, climatology means there was no retrieval at all. Bayesian retrievals combine information from an a priori with information from measurements. When the retrieval fails for whatever reason, or the measurement contains insufficient information, instead of reporting no measurement at all, they copy over information from the a priori and use the flag to indicate that the reported "measurement" is in fact simply the climatology. For details, see optimal estimation, in particular the book by Rodgers: Clive D. Rodgers (2000). Inverse Methods for Atmospheric Sounding: Theory and Practice. World Scientific.
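As a practical aside, the two aerosol-quantity bits first have to be unpacked from the 16-bit State QA word before they can be interpreted at all. A minimal sketch in plain Python, assuming (per the Collection 5 MOD09 Users' Guide) that aerosol quantity occupies bits 6-7 of the state word, and using made-up QA values rather than a real granule:

```python
# Made-up State QA words; real ones come from the MOD09 "state" band.
state_qa = [0b0000000000000000,  # aerosol bits 6-7 = 00
            0b0000000001000000,  # 01
            0b0000000010000000,  # 10
            0b0000000011000000]  # 11

AEROSOL_SHIFT = 6   # assumption: aerosol quantity sits in bits 6-7
AEROSOL_MASK = 0b11
LABELS = ["climatology", "low", "average", "high"]


def aerosol_quantity(word):
    """Extract the two aerosol-quantity bits from a 16-bit State QA word."""
    return (word >> AEROSOL_SHIFT) & AEROSOL_MASK


for word in state_qa:
    print(format(word, "016b"), "->", LABELS[aerosol_quantity(word)])
```

With such a helper, filtering pixels to the "low" class before computing NDVI is a simple comparison against the decoded value.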
{ "domain": "earthscience.stackexchange", "id": 1059, "tags": "remote-sensing, modis, ndvi" }
How to use Anderson's rule to construct band discontinuities in heterojunctions
Question: I'm having some trouble applying Anderson's rule to get a crude approximation for the band diagram of heterostructures. To make it more specific, I'm considering a donor-doped AlGaAs layer on top of an undoped GaAs layer. This is a type I (straddling gap) heterostructure. One region (AlGaAs) will be N type, with a Fermi energy $E_F$ close to the conduction band, while in the other region it will simply lie in the middle of the band gap. Now, I know that the rule typically works with electron affinities $\chi$. AlGaAs has a lower affinity and a higher band gap, and thus we get that the conduction band of GaAs will be lower by a conduction band offset $\Delta E_c$, while its valence band will be higher by an offset $\Delta E_v$. However, simply putting them next to each other like in the diagram below (which is for the same type of heterostructure, except that both sides are now N type, but it is prettier than the one I tried to draw myself) is obviously wrong, as the Fermi level is supposed to be constant throughout the heterostructure.
Answer: It might help to just look at the vacuum level, and think of how the structure reaches equilibrium. In your first picture, electrons will start to flow from right to left, towards the lower fermi level. This will charge the left side negatively, and leave positive charge on the right side. Therefore the vacuum level will curve on both sides, with a U curvature on the right and a ∩ curvature on the left. The bands are just carried along for the ride, always a fixed offset away from the vacuum level. Likewise, the Fermi energy stays at a fixed offset from the conduction band. This charge redistribution + level bending goes on until the Fermi energies match.
{ "domain": "physics.stackexchange", "id": 27790, "tags": "solid-state-physics, semiconductor-physics" }
Existence of time in some other universe
Question: Is it necessary that time exist in another universe, if such a universe exists? How do we perceive timelessness? Answer: Time is just a construct of man. It is simply the length of observation. Even in a static situation, it can still be observed. Time exists anywhere there is observation, just like the empty set. Something which is timeless is said to be unaffected by time, unchanging. Thus, timelessness would just be the embodiment of that. A good example of timelessness is a single length of time, as it is unaffected by time, and will be - for all time, any time, or no time.
{ "domain": "physics.stackexchange", "id": 5846, "tags": "time, multiverse" }