Double poles in propagators
Question: I'm curious as to how to interpret double poles in the propagator. In general, the poles of a propagator tell us the mass. For example, for a free, massive scalar $$\mathcal{L}=\frac{1}{2}\phi(\Box-m^2)\phi,$$ with equations of motion $$(\Box-m^2)\phi=0,$$ the propagator goes like $$G(k)\sim \frac{1}{k^2+m^2},$$ and has a pole at the mass, $k^2=-m^2$. However, if one considers a higher-derivative theory, say $$\mathcal{L}=\frac{1}{2}\phi(\Box-m^2)^2\phi,$$ with equations of motion $$(\Box-m^2)^2\phi=0,$$ the propagator goes like $$\tilde{G}(k)\sim \frac{1}{(k^2+m^2)^2}.$$ This is a double pole at $k^2=-m^2$, and I was wondering what the interpretation of this is. Naively, one might say that the mass is still $m$; however, it is a double pole (and hence has zero residue), and so it feels like this interpretation should fail, but is there something that saves this viewpoint? Alternatively, a strong stance would be to claim that the theory breaks down, but if that is the case, then why is there a breakdown? Answer: If we start with OP's higher-derivative Lagrangian $$ {\cal L} ~=~ \frac{1}{2}[(\Box-m^2)\phi]^2, \tag{A} $$ one idea is to try to lower the number of derivatives by introducing more fields, e.g. $$ {\cal L}_1 ~=~ \chi (\Box-m^2)\phi - \frac{1}{2}\chi^2. \tag{B} $$ If we integrate out the $\chi$ field, $$ {\cal L}_1\quad\stackrel{\chi}{\longrightarrow}\quad{\cal L} \tag{C}$$ we return to OP's higher-derivative theory. However, when we diagonalize the kinetic terms $$ {\cal L}_1 ~\stackrel{(B)+(E)}{=}~ \frac{1}{2}\sum_{\pm}\pm\phi_{\pm} (\Box-m^2)\phi_{\pm} - \frac{1}{4}(\phi_+-\phi_-)^2, \tag{D} $$ with $$\phi_{\pm} ~=~\frac{\phi\pm\chi}{\sqrt{2}},\tag{E}$$ it becomes apparent that the $\phi_-$ field has the wrong sign in front of its kinetic term, i.e. it is a bad ghost. This is typical for higher-derivative theories, cf. the Ostrogradsky instability.
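The "integrate out" step (C) can be made explicit. Since $\chi$ enters ${\cal L}_1$ algebraically (no derivatives act on it), its equation of motion is exact and can be substituted back into the Lagrangian:

```latex
% chi appears algebraically in L_1, so its equation of motion is exact:
\frac{\partial {\cal L}_1}{\partial \chi}
  ~=~ (\Box - m^2)\phi - \chi ~=~ 0
\qquad\Longrightarrow\qquad
\chi ~=~ (\Box - m^2)\phi.

% Substituting back reproduces the higher-derivative Lagrangian (A):
{\cal L}_1\Big|_{\chi=(\Box-m^2)\phi}
  ~=~ [(\Box - m^2)\phi]^2 - \frac{1}{2}[(\Box - m^2)\phi]^2
  ~=~ \frac{1}{2}[(\Box - m^2)\phi]^2 ~=~ {\cal L}.
```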
{ "domain": "physics.stackexchange", "id": 92498, "tags": "quantum-field-theory, field-theory, greens-functions, propagator, analyticity" }
Handling exceptions for providing log information while debugging
Question: I am catching exceptions to provide better information for the logs to make debugging easier in production. Does this code follow best practices, since I'm not really handling the error but just logging extra information? I was also considering creating a custom RazorException, and throwing that instead. What do you think? public string ParseFile(string templatePath, object model = null) { //some code removed for clarity try { ITemplate template = _templateService.Resolve(templatePath, model); ExecuteContext context = new ExecuteContext(); content = template.Run(context); } catch (TemplateCompilationException tcex) { _logger.Fatal("Razor parse failed: check for syntax error", tcex); throw; } catch (InvalidOperationException ex) { _logger.Fatal("Razor parse failed: trying to compile layout on its own", ex); throw; } catch (Exception ex) { _logger.Fatal("Razor parse failed", ex); throw; } return content; } Answer: I would avoid those throw statements entirely. You should only be catching exceptions if you plan on doing something with those exceptions. What you're doing is just logging and redefining the exception message. IMO a better approach is to put your logging code in the global exception handler. You'll have the complete stack trace, the exception type, and if you're deploying PDBs with your production code (I hope you do), then you'll have line numbers as well. Search the web for these global handlers: protected void Application_Error(object sender, EventArgs e) in Global.asax AppDomain.CurrentDomain.UnhandledException TaskScheduler.UnobservedTaskException Update: Following up on a comment from @nikita-brizhak, if you're developing a library then you might want to log those exceptions depending on your use case. Another case I can think of is wrapper-type classes.
For example, if your application connects to both a MySQL and an Oracle database, you might have an abstraction over those connection methods which catches MySqlConnectionException and OracleConnectionException and rethrows a generic DbConnectionException with generic information.
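That wrap-and-rethrow pattern can be sketched language-agnostically; here is a minimal Python version (the exception class names and the connect function below are hypothetical stand-ins for the MySQL/Oracle example, not real driver APIs):

```python
class MySqlConnectionError(Exception):
    """Hypothetical MySQL driver error."""

class OracleConnectionError(Exception):
    """Hypothetical Oracle driver error."""

class DbConnectionError(Exception):
    """Generic error exposed by the abstraction layer."""

def connect(backend: str):
    # Hypothetical drivers: each backend raises its own exception type.
    try:
        if backend == "mysql":
            raise MySqlConnectionError("access denied")
        raise OracleConnectionError("listener is down")
    except (MySqlConnectionError, OracleConnectionError) as e:
        # Rethrow as the generic type; 'from e' keeps the original
        # backend-specific exception attached as the cause.
        raise DbConnectionError(f"could not connect to {backend}") from e
```

Callers then only ever handle DbConnectionError, while the full backend-specific detail stays available on `__cause__` for logging.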
{ "domain": "codereview.stackexchange", "id": 8856, "tags": "c#, error-handling" }
SICP exercise 1.3 - sum of squares of two largest of three numbers
Question: From SICP Exercise 1.3: Define a procedure that takes three numbers as arguments and returns the sum of the squares of the two larger numbers. Square is: (define (square x) (* x x)) I know the code could be written using language features like lambda or let, but to remain true to the exercise, I won't use features that haven't been discussed yet. I took advantage of the fact that we only have three numbers, which is why I was able to reduce my code to two if statements. All of the tests I've done produced correct output. Please review my code. (define (sum-of-squares a b c) (if (> a b) (+ (square a) (square (if (> b c) b c))) (+ (square b) (square (if (> a c) a c))))) Here's an implementation with let. If possible I would like this reviewed as well. (define (sum-of-squares a b c) (let ((bc (if (> b c) b c)) (ac (if (> a c) a c))) (if (> a b) (+ (square a) (square bc)) (+ (square b) (square ac))))) Now, I find this second implementation no shorter than the first, but I think it's easier to read, because deeply nested expressions, as in the first implementation, can be difficult to read. On the other hand, I realized it's worthless to do this because the variables aren't used more than once. For expressions more complicated than the ones in this procedure, would it be a good idea to put them inside let even if they are only used once? How can I improve this code and make it faster? Answer: First of all, the name sum-of-squares is misleading. Longer names-separated-with-hyphens are quite acceptable in Scheme, so why not tell the whole truth? I suggest sum-of-squares-except-min or sum-of-squares-of-greatest-two. The first solution is a bit of a brute-force approach. It works, but it is not obvious at a glance how it works unless you trace all paths of the decision tree. The second solution, if the indentation and naming were improved, would be a bit clearer.
(define (sum-of-squares-except-min a b c) (let ((max-of-bc (if (> b c) b c)) (max-of-ac (if (> a c) a c))) (if (> a b) (+ (square a) (square max-of-bc)) (+ (square b) (square max-of-ac))))) Personally, I would adapt the second approach and go "all the way" with it. Then, all the possibilities would be laid out logically, one per line. (define (sum-of-squares-except-min a b c) (let ((least (min a b c))) (cond ((= least a) (+ (square b) (square c))) ((= least b) (+ (square a) (square c))) (else (+ (square a) (square b)))))) min should be built in, but you can implement it as a helper function yourself if you need to. Alternatively, sum all the squares, then subtract the square of the least. Which approach is fastest? It's hard to say, without benchmarking, whether it's better to define fewer intermediate variables, do less branching, or do less arithmetic. I wouldn't worry about performance for such a simple problem.
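The alternative mentioned at the end, sum all the squares and then subtract the square of the least, is arguably the simplest of all. A quick sketch of that idea (in Python rather than Scheme, just for illustration):

```python
def square(x):
    return x * x

def sum_of_squares_except_min(a, b, c):
    # Sum all three squares, then remove the square of the smallest.
    return square(a) + square(b) + square(c) - square(min(a, b, c))

print(sum_of_squares_except_min(1, 2, 3))  # 13 = 2^2 + 3^2
```

Note that ties are handled correctly for free: only one copy of the minimum is subtracted.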
{ "domain": "codereview.stackexchange", "id": 18350, "tags": "performance, comparative-review, lisp, scheme, sicp" }
How to know if classification model is predicting 1 or 0
Question: I have used logistic regression to predict whether a customer is good (1) or bad (0). I got an accuracy of 0.80. How do I know whether the model predicted 1 or 0? Is it related to the parameter model1.predict_proba(X_test)[:,1] (the 1 at the end, in the square brackets)? Answer: You can find the model predictions here: model1.predict(X_test) array([0, 0, 0, ..., 0, 0, 0], dtype=int64) In a binary (good/bad or 1/0) model, predict_proba gives the probability of 0 or 1, the value calculated by your model, which is used along with a cutoff (0.5 in the sklearn implementation, and it cannot be changed) to determine whether the final prediction is 0 or 1. model1.predict_proba(X_test)[:,0] # probability the answer is 0 array([0.94009529, 0.96378774, 0.98951049, ..., 0.67607543, 0.97599932, 0.82838031]) model1.predict_proba(X_test)[:,1] # probability the answer is 1 array([0.05990471, 0.03621226, 0.01048951, ..., 0.32392457, 0.02400068, 0.17161969]) Since this is a binary model, the two probabilities add up to one. The accuracy score is the fraction of predictions that were correct: (correct_pred_0 + correct_pred_1)/total_predictions If you remove the [:,1], you get the entire array: model1.predict_proba(X_test) # col 1 col 2 array([[0.94009529, 0.05990471], [0.96378774, 0.03621226], [0.98951049, 0.01048951], ..., [0.67607543, 0.32392457], [0.97599932, 0.02400068], [0.82838031, 0.17161969]]) The first column is the probability for 0 and the second column is the probability for 1. You can check the order of the classes using model1.classes_; for a binomial model, the default is: array([0, 1], dtype=int64)
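The relationship between predict_proba and predict can be demonstrated with a small hand-made probability array (the numbers below are made up for illustration, not taken from the question's model):

```python
import numpy as np

# Hypothetical predict_proba output:
# column 0 = P(class 0), column 1 = P(class 1)
proba = np.array([[0.94, 0.06],
                  [0.32, 0.68],
                  [0.97, 0.03]])

# In a binary model each row sums to one.
row_sums = proba.sum(axis=1)

# predict() picks the most probable class, which for a binary model
# is the same as thresholding column 1 at 0.5 (ties aside).
pred = proba.argmax(axis=1)
print(pred)  # [0 1 0]
```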
{ "domain": "datascience.stackexchange", "id": 7848, "tags": "logistic-regression" }
A terminology question: What are active neutrinos and why?
Question: The mass term for the type-I seesaw is given by $$\mathcal{L}_{mass}=-m_D\overline{\nu_L}N_R+M_R\overline{(N_R)^c}N_R+h.c.$$ where the right-chiral fields $N_R$ are electroweak singlets. Since they do not have any Standard Model interactions, they are called sterile. On the other hand, $\nu_L$ fields are not sterile. Now I've found two more terms in the literature: light neutrinos and active neutrinos. When we diagonalize the $6\times 6$ neutrino mass matrix $M_\nu=\begin{pmatrix}0 & m_D\\m_D^T & M_R\end{pmatrix}$ resulting from the Lagrangian above, we get 3 light Majorana neutrinos and 3 heavy Majorana neutrinos (both are linear combinations of $\nu_L$ and $N_R$). My question is: among $\nu_L$, the 3 light Majorana neutrinos, and the 3 heavy Majorana neutrinos, which are referred to as active neutrinos? And why? On Wikipedia, the $\nu_L$ of the Standard Model are referred to as active, which I think is in contradiction with the link above. Answer: A sterile neutrino is any neutrino which does not interact with Standard Model gauge bosons, and an active neutrino is any neutrino which does interact with Standard Model gauge bosons. The easiest way to distinguish them would be to ask whether there is a 3-point Feynman diagram you can draw for that neutrino where one leg is a gauge boson. I think this definition is consistent both with Wikipedia and with the way it's used in the paper you link to. Note that even in the title they refer to the "mixing of sterile and active neutrinos", which agrees with the Wikipedia definition. The sterile $N_R$ neutrinos mix with the active $\nu_L$ neutrinos to form mass eigenstates which are also active. However, if the heavy mass eigenstates have very weak gauge interactions, they will be approximately sterile. In the paper you link to, they do choose to refer to these heavy mass eigenstates as sterile, even though they are technically very weakly interacting.
They state: We shall call $N_I$ as sterile neutrinos since they possess very suppressed gauge interactions. To summarize, $N_R$ are the sterile neutrinos, and any other linear combination between that and $\nu_L$, including $\nu_L$ itself, is active. But if you're talking about an eigenstate that is very nearly sterile, it may make sense to refer to it as sterile as well.
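For concreteness, the standard seesaw block-diagonalization (a textbook result, quoted here without derivation) makes the "mostly active" vs "mostly sterile" split explicit:

```latex
% In the seesaw limit M_R >> m_D, block-diagonalizing
%   M_nu = ( 0      m_D )
%          ( m_D^T  M_R )
% gives, to leading order in m_D M_R^{-1}:
m_{\rm light} ~\simeq~ -\,m_D\,M_R^{-1}\,m_D^T,
\qquad
m_{\rm heavy} ~\simeq~ M_R,
\qquad
\theta_{\rm active\text{-}sterile} ~\sim~ m_D\,M_R^{-1} ~\ll~ 1.
```

So the light mass eigenstates are mostly $\nu_L$ (active) with a small sterile admixture, while the heavy ones are mostly $N_R$, which is why the paper calls them sterile despite their suppressed gauge interactions.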
{ "domain": "physics.stackexchange", "id": 36832, "tags": "particle-physics, terminology, neutrinos, beyond-the-standard-model, majorana-fermions" }
ROSJOY settings
Question: Hi, I'm running on Ubuntu 12.04 and Hydro. When using a joystick with my computer I see it assigned as js1. With nothing plugged into my computer, when I run the command ls /dev/input/ there is a js0 listed; I don't know what it is. Anyway, when I use rosrun joy joy_node I get an error [ERROR] [1407853001.741326266]: [registerPublisher] Failed to contact master at [localhost:11311]. Retrying... unless I first do rosparam set joy_node/dev "/dev/input/js1" but I need to do this every time I use ROS. Is there a way to save the setting or something like that? Originally posted by dshimano on ROS Answers with karma: 129 on 2014-08-12 Post score: 0 Answer: You can make a launch file and set that as the port you want to use. Originally posted by tonybaltovski with karma: 2549 on 2014-08-12 This answer was ACCEPTED on the original site Post score: 1
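For reference, a minimal launch file implementing the accepted answer might look like the following (the file name and node name are arbitrary; the dev parameter matches the rosparam command from the question):

```xml
<!-- joy.launch: start joy_node with the joystick device preset -->
<launch>
  <node pkg="joy" type="joy_node" name="joy_node">
    <param name="dev" value="/dev/input/js1" />
  </node>
</launch>
```

Running roslaunch with this file then sets the parameter and starts the node in one step, so the rosparam command no longer has to be repeated.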
{ "domain": "robotics.stackexchange", "id": 19017, "tags": "ros" }
Accuracy of physics laws
Question: How accurate are the laws of physics? For example, for Newton's second law, $F=ma$: if we can measure force, mass, and acceleration with a relative uncertainty close to $1\times 10^{-9}\%$, will the formula match the values we determined? If not, how large a percentage error could we tolerate and still believe the law holds? Answer: Accuracy can mean different things. While the question asks about statistical accuracy, what immediately comes to mind when talking about Newton's laws is that they are non-relativistic, i.e., they are valid up to small corrections of order $v/c$. Physics laws are based on empirical observations, the symmetries of the universe, and approximations appropriate for a given situation. Symmetries For example, we have reasons to think that conservation of momentum or energy are exact laws, since they follow from the symmetry of spacetime with respect to translations in space and time (Noether's theorem). Testing these laws in practice will necessarily involve statistical errors, but improving the precision of measurement is unlikely to uncover any discrepancies. Approximations Newton's laws are valid only in the non-relativistic limit. Thus, they will hold only up to small corrections of order $v/c$, where $v$ is the speed of the object and $c$ is the speed of light. If our relative statistical precision (in measuring the force, acceleration, etc.) is of order $v/c$, we will observe deviations. Empirical observations The laws of thermodynamics are a good example of laws that were deduced phenomenologically, as a result of many observations. Yet, statistical physics shows that they hold up to very high precision ($\sim 1/\sqrt{N_A}$, where $N_A$ is the Avogadro constant). If the precision could be made that high, or when dealing with systems where the number of particles is small, we will observe deviations from these laws. Remark I recommend the answer by @AdamLatosiński, which is technically probably more correct than mine.
What I tried to explain in my answer is how the laws of physics differ from, e.g., biological laws (since the subject was recently debated on this site): the latter are generalizations of many statistical observations, but are not grounded in reasoning about fundamental properties of the universe. They are therefore statistical laws, which are bound to be non-exact. Indeed, even the so-called central dogma of molecular biology ($DNA \rightarrow RNA \rightarrow Protein$) is broken by some viruses, which perform reverse transcription ($RNA \rightarrow DNA$).
{ "domain": "physics.stackexchange", "id": 77773, "tags": "newtonian-mechanics, error-analysis, approximations, models, laws-of-physics" }
Find out whether string A can be shifted to get string B
Question: The task: Given two strings A and B, return whether or not A can be shifted some number of times to get B. For example, if A is abcde and B is cdeab, return true. If A is abc and B is acb, return false. Solution 1: const haveSameLength = (a, b) => a.length === b.length; const isSame = (a, b) => a === b; const isFullyShifted = a => a === 0; const shiftStringBy = i => a => `${a.substring(i)}${a.substring(0, i)}`; const isSameAfterShifting = (strA, strB, items) => { if (strA.length === 0 || strB.length === 0) { return false } if (!haveSameLength(strA, strB)) { return false } if (isSame(strA, strB)) { return true } if (isFullyShifted(items)) { return false } return isSameAfterShifting(strA, shiftStringBy(1)(strB), --items) } const str1 = 'abcde'; const str2 = 'cdeab'; console.log(isSameAfterShifting(str1, str2, str2.length)); Solution 2: const isSameAfterShifting2 = (strA, strB) => { if (strA.length === 0 || strB.length === 0) { return false } if (!haveSameLength(strA, strB)) { return false } const arrB = strB.split(''); const firstLetterA = strA.substring(0, 1); let shiftIndex = arrB.indexOf(firstLetterA); if (shiftIndex === -1) { return false } while (shiftIndex < arrB.length) { const strBShifted = `${strB.substring(shiftIndex)}${strB.substring(0, shiftIndex)}`; if (strA === strBShifted) { return true } shiftIndex++; } return false; } console.log(isSameAfterShifting2('abc', 'acb')); Which one is more readable and easier to understand for you? Answer: You can check if a string is empty with !str instead of strA.length === 0, since an empty string is falsy: console.log(Boolean('')); // false I think haveSameLength and isSame are extras; you can write strA.length === strB.length and it would still be readable. You can get the first letter with the simpler strA[0] instead of strA.substring(0, 1). Which one is more readable and easier to understand for you?
A loop is easier to read and understand than a recursive function, but the whole approach can be made simpler using a for loop and Array.some(). Here's what I would suggest: generate an array of the combinations obtained by moving the letters one index at a time; for a string abc you would have ['abc', 'bca', 'cab']. Then see if one of the resulting array entries equals the second string: const isSameAfterShifting = (str1, str2) => { // check if the strings are empty or have different lengths if (!str1 || !str2 || str1.length !== str2.length) return false; // check if the strings are the same if (str1 === str2) return true; // generate the array let combos = []; for (let i = 0; i < str1.length; i++) { let c = str1.slice(i) + str1.slice(0, i); combos.push(c); } // for a string 'abc' // combos = ['abc', 'bca', 'cab'] // check if the array has one of its entries equal to the second string return combos.some(s => s === str2); } console.log( isSameAfterShifting('abc', 'cab') ); console.log( isSameAfterShifting('abc', 'cabaaa') ); console.log( isSameAfterShifting('abc', 'bac') ); You can replace the for loop with Array.from(): const isSameAfterShifting = (str1, str2) => { // check if the strings are empty or have different lengths if (!str1 || !str2 || str1.length !== str2.length) return false; // check if the strings are the same if (str1 === str2) return true; // generate the array let combos = Array.from({ length: str1.length }, (_, i) => str1.slice(i) + str1.slice(0, i)); // for a string 'abc' // combos = ['abc', 'bca', 'cab'] // check if the array has one of its entries equal to the second string return combos.some(s => s === str2); }; console.log(isSameAfterShifting("abc", "cab")); console.log(isSameAfterShifting("abc", "cabaaa")); console.log(isSameAfterShifting("abc", "bac"));
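One further simplification, not mentioned in the answer but worth knowing: A is a rotation of B exactly when the two strings have equal length and B occurs as a substring of A concatenated with itself. Sketched in Python:

```python
def is_same_after_shifting(a: str, b: str) -> bool:
    # Every rotation of 'a' appears as a substring of 'a' doubled,
    # e.g. 'abcde' + 'abcde' = 'abcdeabcde' contains 'cdeab'.
    return len(a) > 0 and len(a) == len(b) and b in a + a

print(is_same_after_shifting('abcde', 'cdeab'))  # True
print(is_same_after_shifting('abc', 'acb'))      # False
```

This avoids generating the rotations entirely and delegates the search to the substring check.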
{ "domain": "codereview.stackexchange", "id": 33535, "tags": "javascript, algorithm, strings, functional-programming, comparative-review" }
Are scipy second-order Gaussian derivatives correct?
Question: For an edge detection algorithm, I need to compute second-order derivatives of an image, and I do this with use of Gaussian derivatives. I assumed that the scipy.ndimage.gaussian_filter implementation of this principle would work well, but I get a confusing result. My question probably revolves around whether this is erroneous behaviour (i.e., a bug) on the side of scipy, or whether there is a fundamental reason for it. The test array is import numpy as np import matplotlib.pyplot as plt A = np.zeros((30,30)) A[:,15] = 1 plt.imshow(A) plt.show() This has a non-zero second-order derivative in the 'row' direction (which the scipy code appears to compute appropriately). However, its second-order derivative in the 'column' direction should be zero, as the values are constant in the column direction. However, this is not what I find when computing it as follows: from scipy.ndimage import gaussian_filter B = gaussian_filter(A, sigma=1, order=[2, 0], mode='reflect') plt.imshow(B) plt.colorbar() plt.show() As you can see, values on the order of 1e-5 are somehow present. This doesn't quite make sense, as (1) the solution should be just 0 everywhere, (2) the second derivative shouldn't be varying in the row direction. Interestingly, we /do/ get the correct result if we apply two successive first-derivative operations. To make sure we use the appropriate $\sigma$ value, we can use $\sigma_\text{total} = \sqrt{\sigma_\text{operation 1}^2 + \sigma_\text{operation 2}^2}$, so I'll choose $\sigma_\text{operation 1}=\sigma_\text{operation 2}=\sqrt{1/2}$. That gives the following code: im = gaussian_filter(A, sigma=np.sqrt(1/2), order=[1, 0], mode='reflect') im = gaussian_filter(im, sigma=np.sqrt(1/2), order=[1, 0], mode='reflect') plt.imshow(im) plt.colorbar() plt.show() This result simply has zeroes everywhere, which is what I would expect. My question: am I right in assuming the scipy operation contains a bug somehow? What would be the likely error? 
It is not the same as https://stackoverflow.com/questions/45255265/unexpected-behavior-of-gaussian-filtering-with-scipy, which is specifically related to using a cval incompatible with the array. I use mode='reflect' to avoid this issue. Answer: Ndimage generates a Gaussian kernel by sampling a Gaussian and normalizing it to 1. The derivative of this kernel is generated by modifying that normalized kernel according to the chain rule to compute the derivative, and this modification is applied repeatedly to obtain higher-order derivatives. Here is the relevant source code. This indeed leads to a kernel that produces imprecise second-order derivatives, as described by the OP in the question and the very nice answer. Indeed, one cannot just sample a derivative of a Gaussian to obtain a convolution kernel, because the Gaussian function is not band-limited, and so sampling causes aliasing. The Gaussian function is nearly band-limited: sampling with a sample spacing of $\sigma$ leads to less than 1% of the energy being aliased. But as the order of the derivative increases, so does the band limit, meaning that the higher the derivative order, the more sampling error we get. [Note that if we had no sampling error, convolution with the sampled kernel would yield the same result as a convolution in the continuous domain.] So we need some tricks to make the Gaussian derivatives more precise. In DIPlib (disclosure: I'm an author) the second order derivative is computed as follows (see the relevant bit of source code): Sample the second order derivative of the Gaussian function. Subtract the mean, to ensure that the response to a constant function is 0. Normalize such that the kernel, multiplied by a unit parabola ($x^2$), sums to 2, to ensure that the response to a parabolic function has the right magnitude.
As we can see, the error in this case is within numerical precision of the double-precision floating-point values: import diplib as dip import numpy as np import matplotlib.pyplot as plt A = np.zeros((30,30)) A[:,15] = 1 B = dip.Gauss(A, [1], [0,2]) # note dimension order is always [x,y] in DIPlib plt.imshow(B) plt.colorbar() plt.show() The DIPlib code to generate a 1D second order derivative of the Gaussian is equivalent to the following Python code: import numpy as np sigma = 2.0 radius = np.ceil(4.0 * sigma) x = np.arange(-radius, radius+1) g = np.exp(-(x**2) / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma) g2 = g * (x**2 / sigma**2 - 1) / sigma**2 g2 -= np.mean(g2) g2 /= np.sum(g2 * x**2) / 2.0
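The effect of the correction steps can be checked numerically with plain NumPy: a naively sampled second-derivative-of-Gaussian kernel does not sum to zero (so a constant image produces a small spurious response, of roughly the 1e-5 magnitude seen in the question for $\sigma = 1$), while the corrected kernel does.

```python
import numpy as np

sigma = 1.0
radius = int(np.ceil(4.0 * sigma))
x = np.arange(-radius, radius + 1)
g = np.exp(-(x**2) / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)

# Naively sampled second derivative of the Gaussian: its sum is NOT zero,
# due to sampling (aliasing) and truncation error.
g2_naive = g * (x**2 / sigma**2 - 1) / sigma**2
print(np.sum(g2_naive))  # about -7e-5, not 0

# Corrected kernel: subtract the mean, then normalize against a unit
# parabola, as described above.
g2 = g2_naive - np.mean(g2_naive)
g2 /= np.sum(g2 * x**2) / 2.0
print(np.sum(g2))         # 0 to machine precision
print(np.sum(g2 * x**2))  # 2 by construction
```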
{ "domain": "dsp.stackexchange", "id": 10559, "tags": "image-processing, python, gaussian, scipy, derivative" }
Molecular structure of iodine nonoxide
Question: A question in an exam was as follows: Iodine reacts with ozone gas to form a dark yellow solid $\ce{X}.$ Let the number of lone pairs of electrons in the un-ionised form of $\ce{X}$ be $m,$ the number of lone pairs of electrons in the anionic moiety of $\ce{X}$ be $n$ and the positive charge on the cationic moiety of $\ce{X}$ be $p$ units. Then what is the value of $\displaystyle\frac{m - p}{n}?$ This reaction of iodine with ozone is: $$\ce{I2 + O3 -> I4O9 <=> I^3+(IO3^-)3}$$ Therefore the anionic moiety is $\ce{IO3-}$ and the cationic moiety is $\ce{I^3+}$. However, the first part of the question states un-ionised form. My assumption is that this is $\ce{I4O9}$ and they ask for the number of lone pairs on the molecular form. The answer to this question takes into consideration that $\ce{I4O9}$ is an equimolar mixture of $\ce{I2O4}$ and $\ce{I2O5},$ which gives the answer as $2.5.$ However, from the abstract of J. Raman Spectrosc. 1985, 16 (6), 424–426: The Raman spectrum of $\ce{I4O9},$ formed by the gas-phase reaction of $\ce{I2}$ with $\ce{O3}$, has been measured. Freshly prepared samples of $\ce{I4O9}$ gave broad band spectra characteristic of an amorphous solid. Vibration bands at $780,$ $740,$ $620$ and $\pu{450 cm−1}$ were observed. It was established conclusively that $\ce{I4O9}$ is a distinct molecular species and not a mixture of $\ce{I2O5}$ and $\ce{I2O4}.$ If $\ce{I4O9}$ is a distinct molecular species, what is the molecular structure of $\ce{I4O9}?$ Answer: Regarding its ionic form, I don't think it is as simple as $\ce{I^3+(IO3^-)3}$. It is much more complex than that. From Ref.1: This compound is likewise to be considered as an $\ce{I(III,V)}$ oxide and reacts with alkali hydroxide to give $\ce{I-}$ and $\ce{IO3-}$.
$$\ce{3I4O9 + 12OH- -> I- + 11IO3- + 6H2O}$$ Structurally, $\ce{I4O9}$ is possibly an iodate $\ce{I3O6+IO3-}$ (more precisely $\ce{(I3O6+)_n.nIO3-}$) in which the isopolycation $\ce{I3O6+}$ has a polymeric structure and is formulated as $\ce{I^{III}(I^{V}O3)2^+},$ consisting of twice as many pyramidal $\ce{I^{V}O3}$ groups as square-planar $\ce{I^{III}O4}$ groups; thus $\ce{I4O9}$ would correspond to $\ce{I(IO3)2^+IO3-},$ which would become $\ce{I(IO3)3}$. Reference: Inorganic Chemistry by Egon Wiberg, A. F. Holleman, Nils Wiberg, Academic Press, 2001
{ "domain": "chemistry.stackexchange", "id": 14519, "tags": "inorganic-chemistry, molecular-structure, oxides" }
Java: Worries about my implementation of MVC in a GUI Application
Question: This is a question I have in regards to my implementation of the MVC design paradigm that I came up with. The MVC type I am using is where everything must go through the controller. No communication happens between the model and the View. This is what I saw apple doing when I played with some iPhone stuff so I wanted to work through it, even though I had to put my iPhone stuff on hold. So the issue I am having is I am seeing a lot of what I see as dirty, unnecessary looking code due to me trying to maintain this paradigm and because of this I have a feeling I am going about this quite incorrectly. So I thought I would seek some advice from you guys, who have much more experience in this sort of thing. Here is an example of code that just gets handed from class to class just so that it passes through the controller: I have a ViewController JFrame that holds all of the panels for my GUI. It also contains the next and previous buttons required for navigating through them. So.. This is going to look scary but it is just a LOT of repeated code, I am going to put these in order of how they are called. NextButton is pressed: (ViewManager.java): private void nextButtonActionPerformed(java.awt.event.ActionEvent evt) { controller.nextPanelRequested(); } (Controller.java): public void nextPanelRequested() { model.readPanel(); // Only following this chain ... } (Model.java): public void readPanel() { ... //LOGIC TO DETERMINE WHICH PANEL WE ARE ON, WHICH DETERMINES WHICH 'FETCH' TO USE: case PANELX: controller.fetchPanelInfo(panelList[PANELX]); break; .... } (Controller.java [Again]): public void fetchPanelInfo(Panel currentPanel) { ... else if (currentPanel.equals(Panel.PANELX)) { viewManager.getPANELXInfo(); } } (ViewManager.java [Again]): public void getPANELXInfo() { // This calls down to a specific JPanel and gets it to collect input and send. 
panelX.collectAndSendPanelInfo(); } (PanelX.java): public void collectAndSendPanelInfo() { viewManager.sendPanelXData(double1, double2, double3, ..., double15); } (ViewManager.java [Again x2]): public void sendPanelXData(double double1In, ..., double double15In) { controller.sendPanelXData(double1In, ..., double15In); } (Controller.java [Again x2]): public void sendPanelXData(double double1In, ..., double double15In) { model.receivePanelXData(double1In, ..., double15In); } (Model.java [Again x2]): public void receivePanelXData(double double1In, ..., double double15In) { instanceVariable = new AppropriateObject(double1In, ..., double15In); } Okay... I hope I don't get too much flak for the length of this question, I rewrote everything to simplify it and hide my specific code. I want you to see exactly what I am seeing, and this redundancy is what is making me uneasy. *Sigh*. It works. The issue is that each panel has a different number and set of input types, so I cannot simply make one method that could transport them all. So I have about 8 calls that appear in EVERY one of those 4 classes. If there is some solution to my disgusting implementation of MVC I could delete around 30 methods and make future programmers' lives about 30x easier.. tracing through that line of code is terrible. And I CODED IT. :( So Question: Should my model be asking the view directly for the panel info rather than going through the controller? And: Should the view be transmitting its collected info in a better manner that goes directly to the model? In the past I have seen a similar construct transmit the user input with beans, I was uncomfortable with this approach because they needed them for transmitting over a network, and I am just on one screen with one application. All the handoffs I am doing here are not expensive.. They don't even factor in when I profile the performance of each method.
I know the post is insane, but I would have killed for a post like this with an answer when I was programming this beast. Answer: If you are sending everything through the controller, it's not exactly MVC. The beauty of MVC is that most of your views "listen" to changes in the model, which means the controller doesn't need to "update" the views for regular data changes. That limits the controller's updating of views to cases where the view needs to be laid out again (such as changing screens, layout, etc). Models that model changing elements (like clocks) don't even involve the controller. The model calls a (typically) private "changed(...)" method which signals all of the listening views. The views then fetch the data they are interested in (might be different depending on the view), and the view then invalidates itself (to schedule a redraw of itself). One clock "view" might be a digital clock, another might be a changing non-editable text label, a third a "solar" representation of the sun / moon in different phases (dawn, morning, noon, etc). Your solution could probably be fixed pretty easily. Initialize the views with a reference to their viewed model and cut out the controller "updating" the view for any data-related stuff. Normally the controller will still have to update the view for certain items, mostly for presentation (which view is visible, etc). Have the assignment of a model to a view cause the view to "unlisten" to the old model and "listen" to the new model. In the model, have any data change notify all the "listeners" of that model object's data change. The "listeners" then need to re-pull whatever data they might be interested in.
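The listening arrangement the answer describes can be sketched language-agnostically; here is a minimal Python version of the idea (the class and method names are illustrative, not from the original Java code):

```python
class Model:
    """Holds data and notifies listeners when it changes."""
    def __init__(self):
        self._listeners = []
        self.value = 0

    def add_listener(self, listener):
        self._listeners.append(listener)

    def set_value(self, value):
        self.value = value
        self._changed()

    def _changed(self):
        # Signal every listening view; each re-pulls what it needs.
        for listener in self._listeners:
            listener.model_changed(self)

class LabelView:
    """A view that listens to the model directly -- no controller relay."""
    def __init__(self, model):
        self.text = ""
        model.add_listener(self)

    def model_changed(self, model):
        # Re-pull the data this view is interested in, then "redraw".
        self.text = f"value = {model.value}"

model = Model()
view = LabelView(model)
model.set_value(42)
print(view.text)  # value = 42
```

This removes the long hand-off chains for data updates: the controller is only involved when the presentation itself changes (which panel is visible, layout, and so on).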
{ "domain": "codereview.stackexchange", "id": 1156, "tags": "java, mvc" }
sensor package for audio driver
Question: Hi All, I am looking for a package for a microphone (ekuilt) sensor. Could you suggest a good ROS package for audio purposes? Regards, Originally posted by can-43811 on ROS Answers with karma: 101 on 2017-12-27 Post score: 0 Answer: If the proper drivers for your microphone are installed you can try: http://wiki.ros.org/audio_common It'll let you publish your audio stream and such. Never had a big issue with those. Originally posted by bpinaya with karma: 700 on 2017-12-27 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 29617, "tags": "ros, audio" }
Why is suicide inhibition considered a catalytic reaction when the catalyst is irreversibly modified because of the reaction?
Question: I understand that this might be meaningless semantics, but I'm confused and would appreciate clarification. I've always been taught that a catalyst is, by definition, a substance that increases the rate of a reaction but is not consumed by the reaction. Enzymes are a subset of catalysts, so all enzymes must share catalyst properties. From my understanding, suicide inhibition irreversibly modifies the enzyme and consumes it during its normal catalysis reaction. How can an enzyme that is consumed during its normal catalysis reaction still be an enzyme? Wouldn't it make much more sense to call it a reactant? Answer: It's not clear what the confusion stems from, but perhaps it was from the first sentence of the Wikipedia article, which says: Suicide inhibition ... is an irreversible form of enzyme inhibition that occurs when an enzyme binds a substrate analogue and forms an irreversible complex with it through a covalent bond during the normal catalysis reaction. This might mislead someone by using the phrase "during the normal catalysis reaction", which I suppose could be interpreted as saying the enzyme only catalyzes reactions involving the inhibitor. That is very much not the case. The normal action of the enzyme is with some other molecule, the normal substrate of the enzyme, and the catalytic reaction produces and releases the product, leaving the enzyme in its original form. In contrast to this, the inhibitor is an analogue of the normal substrate molecule, that is to say, a molecule which is very similar (in the target region at least) to the normal substrate molecule. Because of the similarity, the enzyme will cause the inhibitor to undergo the same chemical reaction as the normal substrate. However, the inhibitor will thereby become active in a way which causes it to irreversibly bind to the enzyme and thus, unlike the normal substrate, will never be released.
{ "domain": "biology.stackexchange", "id": 8327, "tags": "biochemistry, molecular-biology, enzymes" }
Testing a sodium aluminate solution
Question: I prepared some sodium aluminate by reacting a strong $NaOH$ solution with aluminium foil and filtering the result. I now have a clear solution and I'm not sure what it is. I stopped adding aluminium when the reaction rate slowed down dramatically in a hot water bath. Basically what I'm asking is: what are the properties of sodium aluminate, since Wikipedia is severely lacking in information on sodium aluminate? Answer: There are multiple reactions going on simultaneously. Aluminum metal reacts with the oxygen in the surrounding air to create a coating of aluminum(III) oxide. $$\ce{4Al(s) + 3O2(g) -> 2Al2O3(s)}$$ Upon addition of sodium hydroxide, the oxide layer reacts with the base, leaving the aluminum open to attack by water, for instance. $$\ce{2 Al + 6 H2O -> 2 Al(OH)3 + 3 H2}$$ $$\ce{Al(OH)3 + NaOH -> Na+ + [Al(OH)4]-}$$ In summary, we can write the sum of the reactions as: $$\ce{2Al(s) + 2NaOH(aq) + 6H2O(l) -> 2Na+(aq) + 2[Al(OH)4]- + 3H2(g)}$$ The reason why the reaction proceeded slowly at first was the aluminum(III) oxide layer covering the aluminum metal. From this link, http://www2.uni-siegen.de/~pci/versuche/english/v44-10.html: This reaction is used in drain cleaners. They are mostly made out of strong alkalis, to which aluminum or zinc has been added. The alkalis break down organic residues chemically. In addition, the formation of hydrogen leads to a bubbling effect which adds an additional mechanical cleaning mechanism. Considering the properties of aluminates, one can go to PubChem hosted by the NIH, which has a lot of information for almost every compound necessary as well as literature linked to the specific compound in question: http://pubchem.ncbi.nlm.nih.gov/summary/summary.cgi?cid=14766&loc=ec_rcs
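As a small numerical aside grounded in the summed equation (2 mol Al releases 3 mol H2), here is a stdlib-Python sketch estimating the hydrogen evolved per gram of foil; the STP molar volume is the usual ideal-gas approximation, and the foil mass is an arbitrary input:

```python
# Stoichiometry of: 2Al + 2NaOH + 6H2O -> 2Na+ + 2[Al(OH)4]- + 3H2
M_AL = 26.98          # g/mol, molar mass of aluminium
H2_PER_AL = 3 / 2     # mol H2 per mol Al, from the balanced equation
MOLAR_VOLUME = 22.4   # L/mol at STP (ideal-gas approximation)

def hydrogen_volume_l(foil_mass_g):
    """Litres of H2 (at STP) from dissolving foil_mass_g of Al in excess NaOH."""
    mol_al = foil_mass_g / M_AL
    return mol_al * H2_PER_AL * MOLAR_VOLUME

print(round(hydrogen_volume_l(1.0), 2))  # -> 1.25 litres of H2 per gram of foil
```

The bubbling the question describes (and that drain cleaners exploit) corresponds to this hydrogen evolution.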
{ "domain": "chemistry.stackexchange", "id": 1052, "tags": "physical-chemistry, inorganic-chemistry" }
Conservation of momentum: slippery freight car
Question: In Kleppner and Kolenkow's 1st edition book, there is an example question: Sand falls from a stationary hopper into a freight car moving with uniform velocity v. The sand falls at the rate dm/dt. Find the force required to keep the car moving. Since it's an example, the solution is given: a force of $v\,dm/dt$ is required. However, at the end it also writes, "We can understand why this force is required considering in detail just what happens to the sand grains when it lands on the surface of the freight car. What would happen if the surface of the freight car were slippery?" But I really couldn't think of the force acting on the sand or what would happen if the sand grains fell on a slippery surface. Answer: The freight car has a horizontal velocity; the grains of sand have zero horizontal velocity (momentum). To accelerate the grains of sand to the horizontal velocity of the freight car, a horizontal force must act on the grains of sand. The origin of the horizontal force is kinetic friction between surfaces (falling sand on sand already moving with the freight car, falling sand on the freight car) which are moving relative to one another. I think the quote is asking you what happens if there is no frictional force; the answer being that a heap of stationary sand builds up under the hopper.
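The required force $F = v\,dm/dt$ can be checked with a toy discrete model: each timestep a small mass of sand lands with zero horizontal momentum and must be given impulse $dm \cdot v$ to reach the car's speed. All numbers below are arbitrary illustrations, not from the book:

```python
# Toy check of F = v * dm/dt for sand falling into a moving freight car.
v = 5.0        # m/s, car speed (arbitrary)
dm_dt = 2.0    # kg/s, sand rate (arbitrary)
dt = 1e-3      # s, timestep

# Each step, mass dm = dm_dt*dt lands at zero horizontal velocity; the car
# must supply impulse dm * v to bring it up to speed.  Average force:
steps = 1000
total_impulse = sum(dm_dt * dt * v for _ in range(steps))
force = total_impulse / (steps * dt)

print(round(force, 6))  # -> 10.0, matching v * dm/dt = 5.0 * 2.0
```

With a slippery (frictionless) surface no such impulse can be transmitted, so no extra force is needed and the sand simply stays put under the hopper, as the answer says.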
{ "domain": "physics.stackexchange", "id": 48040, "tags": "newtonian-mechanics, forces, momentum, conservation-laws" }
How to concatenate FASTAs with respect to chromosome?
Question: I have the following two small FASTAs, dna1.fa and dna2.fa: # dna1.fa >chr1 ACTGACTAGCTAGCTAACTG >chr2 GCATCGTAGCTAGCTACGAT >chr3 CATCGATCGTACGTACGTAG >chr4 ATCGATCGATCGTACGATCG # dna2.fa >chr1 GCATCGTAGCTAGCTACGAT >chr3 CATCGATCGTACGTACGTAG >chr4 ATCGATCGATCGTACGATCG >chr5 CATCGATCGTACGTACGTAG If I concatenate these two files using cat, the piped result is as follows: $ cat dna1.fa dna2.fa > total1.fa $ cat total1.fa >chr1 ACTGACTAGCTAGCTAACTG >chr2 GCATCGTAGCTAGCTACGAT >chr3 CATCGATCGTACGTACGTAG >chr4 ATCGATCGATCGTACGATCG >chr1 GCATCGTAGCTAGCTACGAT >chr3 CATCGATCGTACGTACGTAG >chr4 ATCGATCGATCGTACGATCG >chr5 CATCGATCGTACGTACGTAG Here, as expected, these are two files which have concatenated together. It seems this is usually what is done when one concatenates several FASTAs together in bioinformatics. Question1: Is it possible to concatenate these "based on the chromosome"? Here is the desired result: >chr1 ACTGACTAGCTAGCTAACTG GCATCGTAGCTAGCTACGAT >chr2 GCATCGTAGCTAGCTACGAT >chr3 CATCGATCGTACGTACGTAG CATCGATCGTACGTACGTAG >chr4 ATCGATCGATCGTACGATCG ATCGATCGATCGTACGATCG >chr5 CATCGATCGTACGTACGTAG Note that these are exceptionally small FASTAs---usually one could be dealing with files of several GB in size. Question2: The meta question here is, are there any standard bioinformatic analyses whereby the "concatenated by chromosome" FASTA would be more useful than the straightforward concatenated FASTA via cat alone? 
Answer: Using the TblToFasta and FastaToTbl scripts I have posted before, combined with classic *nix utils, you can do: $ join -a1 -a2 <(FastaToTbl dna1.fa | sort) <(FastaToTbl dna2.fa | sort) | sed 's/ /=/2' | TblToFasta | tr = '\n' >chr1 ACTGACTAGCTAGCTAACTG GCATCGTAGCTAGCTACGAT >chr2 GCATCGTAGCTAGCTACGAT >chr3 CATCGATCGTACGTACGTAG CATCGATCGTACGTACGTAG >chr4 ATCGATCGATCGTACGATCG ATCGATCGATCGTACGATCG >chr5 CATCGATCGTACGTACGTAG Explanation The FastaToTbl will put everything on the same line, with the sequence's ID at the beginning of the line, then a tab and then the sequence (no matter how many lines of sequence you have). So, we do this to both files, sort them and pass that as input to join, which will join its input on lines where the 1st field is the same (by default). The -a1 -a2 means "also print non-matching lines from both files" so we don't miss cases where a sequence is in only one of the two files. Since the join will add a space between the two joined lines, we deal with that by replacing the second space on each line with a = (sed 's/ /=/2') and then pass the whole thing to TblToFasta to revert to fasta format. Finally, we use tr to replace the = with a newline so you can see where each sequence came from as you show in your example. For longer sequences where you want to avoid sorting the input, you can do: $ FastaToTbl dna1.fa dna2.fa | awk '{ seq[$1] ? seq[$1]=seq[$1]"\n"$2 : seq[$1]=$2 } END{ for(chr in seq){ printf ">%s\n%s\n", chr, seq[chr] } }' >chr1 ACTGACTAGCTAGCTAACTG GCATCGTAGCTAGCTACGAT >chr2 GCATCGTAGCTAGCTACGAT >chr3 CATCGATCGTACGTACGTAG CATCGATCGTACGTACGTAG >chr4 ATCGATCGATCGTACGATCG ATCGATCGATCGTACGATCG >chr5 CATCGATCGTACGTACGTAG Explanation Here, the tbl format sequences (see above) are passed to an awk script which does the work: seq[$1] ? 
seq[$1]=seq[$1]"\n"$2 : seq[$1]=$2 : if the 1st field ($1, the sequence identifier) is already in the array seq, then set the value of seq[$1] to the current value (so whatever sequence has been already seen for this ID), a newline and the 2nd field ($2, the sequence). If it hasn't been seen before, then set the value of seq[$1] to the 2nd field, the sequence. END{ for(chr in seq){ printf ">%s\n%s\n", chr, seq[chr]} : after processing everything, go through each element in the array seq and print out the data in fasta format. However, that will mean keeping both files in memory which may be a problem for huge sequences. You can avoid that by using temp files: $ FastaToTbl dna1.fa dna2.fa | awk '{ if(!seen[$1]++){ print ">"$1 > $1".tmp" } print $2 >> $1".tmp" }' && cat *.tmp > out.fa && rm *.tmp $ cat out.fa >chr1 ACTGACTAGCTAGCTAACTG GCATCGTAGCTAGCTACGAT >chr2 GCATCGTAGCTAGCTACGAT >chr3 CATCGATCGTACGTACGTAG CATCGATCGTACGTACGTAG >chr4 ATCGATCGATCGTACGATCG ATCGATCGATCGTACGATCG >chr5 CATCGATCGTACGTACGTAG Explanation This is the same basic idea as the script above, only instead of storing the information in an array, we print it as it comes into temp files. if(!seen[$1]++){print ">"$1 > $1".tmp"} : if this is the first time we've seen this ID, print it as a fasta header into the file $1".tmp" (e.g. chr1.tmp). print $2 >> $1".tmp" append the current sequence to the corresponding temp file. Once the temp files have been created, we concatenate them (cat *.tmp > out.fa) and delete them (rm *tmp). As for why you would want to do this, an obvious example is reconstructing a reference genome from short sequences. You would want all the sequences of the same schromosome together. Of course, you would also need to assemble them in the right order, which you aren't doing here. Alternatively, you might want to concatenate them this way so as to avoid having the extra, unneeded lines (>chrN) which would make your file larger. 
Or you would want to be able to retrieve all sequences with the same ID at once. But I don't really know why you would want to do it. You're the one asking for it, after all! :)
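For readers who prefer Python to awk, the same by-chromosome merge can be sketched with a dict keyed on the header line. Like the in-memory awk variant above, it holds everything in memory, so the same caveat about huge files applies; the demo data is a subset of the question's files:

```python
def merge_fastas(*fastas):
    """Merge FASTA inputs (each an iterable of lines), grouping sequence
    lines under a shared header, emitted in sorted header order."""
    seqs = {}                          # header -> list of sequence lines
    for lines in fastas:
        header = None
        for line in lines:
            line = line.rstrip()
            if line.startswith(">"):
                header = line
                seqs.setdefault(header, [])
            elif header is not None and line:
                seqs[header].append(line)
    return "\n".join(h + "\n" + "\n".join(seqs[h]) for h in sorted(seqs))

# Demo with a subset of the question's data:
dna1 = [">chr1", "ACTGACTAGCTAGCTAACTG", ">chr2", "GCATCGTAGCTAGCTACGAT"]
dna2 = [">chr1", "GCATCGTAGCTAGCTACGAT", ">chr3", "CATCGATCGTACGTACGTAG"]
print(merge_fastas(dna1, dna2))
```

Note that plain lexicographic sorting would put chr10 before chr2; for real genomes you would want a natural sort of the headers (or preserve first-seen order, as the awk version does).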
{ "domain": "bioinformatics.stackexchange", "id": 620, "tags": "fasta" }
Please help me analyse spectrogram
Question: I am working on noise reduction and I need to learn how to analyze spectrograms. I mixed a pure speech file with violet noise in Audacity and got the following spectrograms: Matlab version Sonogram version: Spectrogram view in Audacity: What do all the colours mean and why do they look different? Shouldn't everything have the same colours as in the Matlab version? Or are the exact colours not that important, since the darker pattern at the bottom of each figure still looks similar in all of them? --- EDIT :: Further additions: --- Here is a screenshot of the spectrogram of noisy speech as well as the output of my noise reduced sound file. The one on the left is the noisy signal while the one on the right is the one passed through the noise reduction algorithm. Both figures are plotted through the spectrogram function of Matlab, called as follows: spectrogram(data, hanning(128), 64, 128, 16000) The number 16000 is the Fs value (sampling frequency) returned by the wavread function used to read the original noisy speech file into Matlab. The sound file is a male adult speaking, mixed with violet noise in Audacity. --- EDIT 2: --- Also, here is the same thing in gray-scale, if it helps. Answer: The spectrograms look different because they use: Different parameters for the FFT size and hop size (window overlap ratio). They simply use different palettes. The spectrogram gives you an array of numbers, which are scaled and mapped to a color palette to produce a color image. Matlab's colormaps are listed here - by default it uses "jet"; so the time-frequency bins of lowest energy are colored blue, and those of highest energy are colored red.
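To make the palette point concrete: a spectrogram is just a 2-D array of energies, and each renderer scales those numbers and maps them onto its own colour table, so two tools can draw the same array in very different colours. Here is a minimal stdlib-Python sketch of that mapping; the palette names are a made-up stand-in for something like MATLAB's "jet":

```python
def to_palette(values, palette=("blue", "cyan", "green", "yellow", "red")):
    """Linearly scale values into [0, 1] and map each onto a discrete palette.

    Two programs using different palettes (or different scaling) will render
    the same spectrogram array in visibly different colours.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    out = []
    for v in values:
        idx = int((v - lo) / span * (len(palette) - 1) + 0.5)  # nearest bucket
        out.append(palette[idx])
    return out

# One "column" of spectrogram energies in dB (made-up numbers):
column = [-80.0, -60.0, -40.0, -20.0, 0.0]
print(to_palette(column))  # -> ['blue', 'cyan', 'green', 'yellow', 'red']
```

This is why the darker (low-energy) patterns still line up across the Matlab, Sonogram, and Audacity views even though the absolute colours differ.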
{ "domain": "dsp.stackexchange", "id": 10471, "tags": "matlab, spectrogram" }
Would neutron average life expectancy be shorten by gaining (relativistic) mass?
Question: A free neutron survives on average 887.7 s ±3 (beam measurement) or 878.5 s ±1 (ultracold neutrons stored in a bottle). Could the difference of 9 s be due to its total energy at the moment the observation is made? Answer: I believe you're referring to the embarrassing problem that measurements of the neutron lifetime made with neutron beams have in recent decades given systematically longer lifetimes than measurements made with ultracold neutrons trapped in bottles: [source] Your idea is reasonable: since the beam neutrons are moving faster, perhaps time dilation plays a role. But the math doesn't work out. The modern beam measurements are made with thermal or cold neutron beams. A thermal neutron is drawn from an ensemble with temperature $T\approx 300\rm\,K$, and the equipartition theorem tells us that such neutrons will typically have $kT \approx \frac12 mv^2$ --- that corresponds to a speed of about $v\approx 2000\,\mathrm{m/s} \approx 10^{-5}c$. But that gives a relativistic factor of $$ \gamma = \frac1{\sqrt{1-{v^2}/{c^2}}} \approx \left( 1-10^{-10} \right)^{-1/2} \approx 1+\frac12 10^{-10} $$ The ratio of the lifetimes from the two types of measurements is $$ \frac{887\rm\,s}{880\rm\,s} \approx 1.008 $$ which is different from one in the fourth significant figure, rather than the tenth significant figure. The relativistic correction isn't large enough to explain the discrepancy. (This argument is still sound if you're not sloppy with factors of two.) You mention in a comment that you learned about this discrepancy from this Scientific American news story. If you'd like more technical information you might read this feature article in the same magazine, or this academic review article which I think probably precipitated both of the Scientific American stories.
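The answer's arithmetic is easy to reproduce. Note that the exact speed of 2000 m/s gives a slightly smaller $\gamma - 1$ than the rounded $10^{-5}c$ figure used in the answer, which only strengthens the conclusion:

```python
import math

c = 2.998e8        # m/s, speed of light
v = 2000.0         # m/s, typical thermal-neutron speed from the answer

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
ratio = 887.0 / 880.0        # beam vs bottle lifetimes, as in the answer

print(gamma - 1.0)   # ~2e-11: a tenth-significant-figure effect
print(ratio - 1.0)   # ~8e-3: the discrepancy sits in the fourth digit
```

So time dilation falls short of explaining the discrepancy by roughly eight orders of magnitude.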
{ "domain": "physics.stackexchange", "id": 43924, "tags": "particle-physics, relativity, neutrons" }
LeetCode: Shortest Common Supersequence C#
Question: https://leetcode.com/problems/shortest-common-supersequence/ Given two strings str1 and str2, return the shortest string that has both str1 and str2 as subsequences. If multiple answers exist, you may return any of them. (A string S is a subsequence of string T if deleting some number of characters from T (possibly 0, and the characters are chosen anywhere from T) results in the string S.) Example 1: Input: str1 = "abac", str2 = "cab" Output: "cabac" Explanation: str1 = "abac" is a subsequence of "cabac" because we can delete the first "c". str2 = "cab" is a subsequence of "cabac" because we can delete the last "ac". The answer provided is the shortest such string that satisfies these properties. Note: 1 <= str1.length, str2.length <= 1000 str1 and str2 consist of lowercase English letters. Please review for performance, I am especially interested about the string concatenation part using System.Text; using Microsoft.VisualStudio.TestTools.UnitTesting; namespace StringQuestions { /// <summary> /// https://leetcode.com/problems/shortest-common-supersequence/ /// </summary> [TestClass] public class ShortestCommonSuperSequenceTest { [TestMethod] public void TestMethod1() { string str1 = "abac"; string str2 = "cab"; string expected = "cabac"; Assert.AreEqual(expected, ShortestCommonSuperSequenceClass.ShortestCommonSupersequence(str1, str2)); } [TestMethod] public void TestMethod2() { string str1 = "xxx"; string str2 = "xx"; string expected = "xxx"; Assert.AreEqual(expected, ShortestCommonSuperSequenceClass.ShortestCommonSupersequence(str1, str2)); } [TestMethod] public void TestMethod3() { string str1 = "xxxc"; string str2 = "xxa"; string expected = "xxxca"; Assert.AreEqual(expected, ShortestCommonSuperSequenceClass.ShortestCommonSupersequence(str1, str2)); } } public class ShortestCommonSuperSequenceClass { public static string ShortestCommonSupersequence(string str1, string str2) { if (string.IsNullOrEmpty(str1) || string.IsNullOrEmpty(str2)) { return 
string.Empty; } int i = 0; int j = 0; string lcs = FindLCS(str1, str2); StringBuilder res = new StringBuilder(); foreach (var letter in lcs) { while (str1[i] != letter) { res.Append(str1[i]); i++; } while (str2[j] != letter) { res.Append(str2[j]); j++; } res.Append(letter); i++; j++; } return res + str1.Substring(i) + str2.Substring(j); } private static string FindLCS(string str1, string str2) { string[,] dp = new string[str1.Length + 1, str2.Length + 1]; for (int i = 0; i < str1.Length+1; i++) { for (int j = 0; j < str2.Length+1; j++) { dp[i, j] = string.Empty; } } for (int i = 0; i < str1.Length; i++) { for (int j = 0; j < str2.Length; j++) { //remember 0 is 0 always if (str1[i] == str2[j]) { dp[i+1, j+1] = dp[i, j] + str1[i]; } else { if (dp[i + 1, j].Length > dp[i, j + 1].Length) { dp[i + 1, j + 1] = dp[i + 1, j]; } else { dp[i + 1, j + 1] = dp[i, j + 1]; } } } } return dp[str1.Length, str2.Length]; } } } Answer: Review of your code The code is written clearly, I have only a few remarks. The LeetCode problem description states that both str1 and str2 are non-empty strings, so that this if (string.IsNullOrEmpty(str1) || string.IsNullOrEmpty(str2)) { return string.Empty; } is not necessary. On the other hand, if you want to handle empty strings then the above is not correct: The shortest common supersequence of the empty string and "abc" is "abc", not the empty string. So that should be if (string.IsNullOrEmpty(str1)) { return str2; } if (string.IsNullOrEmpty(str2)) { return str1; } Here int i = 0; int j = 0; string lcs = FindLCS(str1, str2); StringBuilder res = new StringBuilder(); foreach (var letter in lcs) { // ... I would move the declarations of i and j down to where the variables are needed, i.e. directly before the foreach loop. 
The attentive reader of your code will of course quickly figure out that private static string FindLCS(string str1, string str2) determines the "longest common subsequence", but I would use a more verbose function name (or add an explaining comment). Briefly explaining the algorithm used would also be helpful to understand the code, something like /* The longest common subsequence (LCS) of str1 and str2 is computed with dynamic programming. dp[i, j] is determined as the LCS of the initial i characters of str1 and the initial j characters of str2. dp[str1.Length, str2.Length] is then the final result. */ On the other hand, this comment is mysterious to me: //remember 0 is 0 always if (str1[i] == str2[j]) Performance improvements str1.Length * str2.Length strings are computed in FindLCS(), and that can be avoided. As explained in Wikipedia: Longest common subsequence problem, it is sufficient to store in dp[i, j] the length of the corresponding longest common subsequence, and not the subsequence itself. When the dp array is filled, the longest common subsequence can be determined by deducing the characters in a "traceback" procedure, starting at dp[str1.Length, str2.Length]. This saves both memory and the time for the string operations. And this approach can easily be modified to collect the shortest common supersequence instead of the longest common subsequence. That makes the "post processing" in your ShortestCommonSupersequence() function obsolete. The maximum possible length of the shortest common supersequence is known. Therefore the characters can be collected in an array first, so that string operations and a final string reversal are avoided.
Putting it all together, an implementation could look like this: public static string ShortestCommonSupersequence(string str1, string str2) { // Handle empty strings: if (string.IsNullOrEmpty(str1)) { return str2; } if (string.IsNullOrEmpty(str2)) { return str1; } // Dynamic programming: dp[i, j] is computed as the length of the // longest common subsequence of str1.Substring(0, i) and // str2.SubString(0, j). int[,] dp = new int[str1.Length + 1, str2.Length + 1]; for (int i = 0; i < str1.Length; i++) { for (int j = 0; j < str2.Length; j++) { if (str1[i] == str2[j]) { dp[i+1, j+1] = dp[i, j] + 1; } else { dp[i + 1, j + 1] = Math.Max(dp[i + 1, j], dp[i, j + 1]); } } } // Traceback: Collect shortest common supersequence. Since the // characters are found in reverse order we put them into an array // first. char [] resultBuffer = new char[str1.Length + str2.Length]; int resultIndex = resultBuffer.Length; { int i = str1.Length; int j = str2.Length; while (i > 0 && j > 0) { if (str1[i - 1] == str2[j - 1]) { // Common character: resultBuffer[--resultIndex] = str1[i - 1]; i--; j--; } else if (dp[i - 1, j] > dp[i, j - 1]) { // Character from str1: resultBuffer[--resultIndex] = str1[i - 1]; i--; } else { // Character from str2: resultBuffer[--resultIndex] = str2[j - 1]; j--; } } // Prepend remaining characters from str1: while (i > 0) { resultBuffer[--resultIndex] = str1[i - 1]; i--; } // Prepend remaining characters from str2: while (j > 0) { resultBuffer[--resultIndex] = str2[j - 1]; j--; } } // Create and return result string from buffer. return new string(resultBuffer, resultIndex, resultBuffer.Length - resultIndex); } Comparison: I ran both implementations on LeetCode Original code: Runtime 372 ms, Memory 48.9 MB. Improved code: Runtime 92 ms, Memory 26.1 MB.
{ "domain": "codereview.stackexchange", "id": 36382, "tags": "c#, programming-challenge, dynamic-programming" }
Improving player button control code
Question: Would somebody be kind enough to show me the best way to loop through this so it much more efficient that just repeating everything? const playButton = document.querySelector("#btnPlay"); const playButton1 = document.querySelector("#btnPlay1"); const playButton2 = document.querySelector("#btnPlay2"); const playButton3 = document.querySelector("#btnPlay3"); const playButton4 = document.querySelector("#btnPlay4"); const playButton5 = document.querySelector("#btnPlay5"); const playButton6 = document.querySelector("#btnPlay6"); const playButton7 = document.querySelector("#btnPlay7"); const playButton8 = document.querySelector("#btnPlay8"); const playButton9 = document.querySelector("#btnPlay9"); const playButton10 = document.querySelector("#btnPlay10"); const pauseButton = document.querySelector("#btnPause"); const pauseButton1 = document.querySelector("#btnPause1"); const pauseButton2 = document.querySelector("#btnPause2"); const pauseButton3 = document.querySelector("#btnPause3"); const pauseButton4 = document.querySelector("#btnPause4"); const pauseButton5 = document.querySelector("#btnPause5"); const pauseButton6 = document.querySelector("#btnPause6"); const pauseButton7 = document.querySelector("#btnPause7"); const pauseButton8 = document.querySelector("#btnPause8"); const pauseButton9 = document.querySelector("#btnPause9"); const pauseButton10 = document.querySelector("#btnPause10"); const iframe = document.querySelector("#player"); const iframe1 = document.querySelector("#player1"); const iframe2 = document.querySelector("#player2"); const iframe3 = document.querySelector("#player3"); const iframe4 = document.querySelector("#player4"); const iframe5 = document.querySelector("#player5"); const iframe6 = document.querySelector("#player6"); const iframe7 = document.querySelector("#player7"); const iframe8 = document.querySelector("#player8"); const iframe9 = document.querySelector("#player9"); const iframe10 = document.querySelector("#player10"); const 
player = new Vimeo.Player(iframe); const player1 = new Vimeo.Player(iframe1); const player2 = new Vimeo.Player(iframe2); const player3 = new Vimeo.Player(iframe3); const player4 = new Vimeo.Player(iframe4); const player5 = new Vimeo.Player(iframe5); const player6 = new Vimeo.Player(iframe6); const player7 = new Vimeo.Player(iframe7); const player8 = new Vimeo.Player(iframe8); const player9 = new Vimeo.Player(iframe9); const player10 = new Vimeo.Player(iframe10); playButton.addEventListener("click", playVideo); playButton1.addEventListener("click", playVideo1); playButton2.addEventListener("click", playVideo2); playButton3.addEventListener("click", playVideo3); playButton4.addEventListener("click", playVideo4); playButton5.addEventListener("click", playVideo5); playButton6.addEventListener("click", playVideo6); playButton7.addEventListener("click", playVideo7); playButton8.addEventListener("click", playVideo8); playButton9.addEventListener("click", playVideo9); playButton10.addEventListener("click", playVideo10); pauseButton.addEventListener("click", pauseVideo); pauseButton1.addEventListener("click", pauseVideo1); pauseButton2.addEventListener("click", pauseVideo2); pauseButton3.addEventListener("click", pauseVideo3); pauseButton4.addEventListener("click", pauseVideo4); pauseButton5.addEventListener("click", pauseVideo5); pauseButton6.addEventListener("click", pauseVideo6); pauseButton7.addEventListener("click", pauseVideo7); pauseButton8.addEventListener("click", pauseVideo8); pauseButton9.addEventListener("click", pauseVideo9); pauseButton10.addEventListener("click", pauseVideo10); function playVideo() { player.play(); } function playVideo1() { player1.play(); } function playVideo2() { player2.play(); } function playVideo3() { player3.play(); } function playVideo4() { player4.play(); } function playVideo5() { player5.play(); } function playVideo6() { player6.play(); } function playVideo7() { player7.play(); } function playVideo8() { player8.play(); } function 
playVideo9() { player9.play(); } function playVideo10() { player10.play(); } function pauseVideo() { player.pause(); player.setCurrentTime(0); } function pauseVideo1() { player1.pause(); player1.setCurrentTime(0); } function pauseVideo2() { player2.pause(); player2.setCurrentTime(0); } function pauseVideo3() { player3.pause(); player3.setCurrentTime(0); } function pauseVideo4() { player4.pause(); player4.setCurrentTime(0); } function pauseVideo5() { player5.pause(); player5.setCurrentTime(0); } function pauseVideo6() { player6.pause(); player6.setCurrentTime(0); } function pauseVideo7() { player7.pause(); player7.setCurrentTime(0); } function pauseVideo8() { player8.pause(); player8.setCurrentTime(0); } function pauseVideo9() { player9.pause(); player9.setCurrentTime(0); } function pauseVideo10() { player10.pause(); player10.setCurrentTime(0); } Answer: Instead of creating a variable for each of these, you can use arrays for the buttons, iframes, and players, and you can use anonymous functions for the click event listeners. const playerCount = 11; const playButtons = new Array(playerCount) .fill(0) .map((_, i) => document.querySelector(`#btnPlay${i || ""}`)); const pauseButtons = new Array(playerCount) .fill(0) .map((_, i) => document.querySelector(`#btnPause${i || ""}`)); const iframes = new Array(playerCount) .fill(0) .map((_, i) => document.querySelector(`#player${i || ""}`)); const players = iframes.map((iframe) => new Vimeo.Player(iframe)); players.forEach((player, index) => { playButtons[index].addEventListener("click", () => player.play()); pauseButtons[index].addEventListener("click", () => { player.pause(); player.setCurrentTime(0); }); }); .fill(0) is needed because new Array(size) has "empty" elements, which are skipped over by .map. ${i || ""} is part of a template literal; this makes it so that if the value is 0 it's an empty string. Another approach is a traditional for-loop. 
If you don't need to store a reference to each of the buttons, etc., that you can access later, this option can be considered "cleaner" since it doesn't have extra variables leak out into the surrounding scope. const playerCount = 11; // if you need this // const players = []; // you can also repeat this for other variables you want outside of this loop for (let i = 0; i < playerCount; i++) { const playButton = document.querySelector(`#btnPlay${i || ""}`); const pauseButton = document.querySelector(`#btnPause${i || ""}`); const iframe = document.querySelector(`#player${i || ""}`); const player = new Vimeo.Player(iframe); playButton.addEventListener("click", () => player.play()); pauseButton.addEventListener("click", () => { player.pause(); player.setCurrentTime(0); }); // if you need this // players.push(player); } Speed isn't an issue for something this small (only 11 players). The choice is mostly just stylistic and whatever you find to be the most readable.
{ "domain": "codereview.stackexchange", "id": 44529, "tags": "javascript" }
gravitational wave detection using interferometer detectors
Question: I understand that the basic idea of interferometric detectors is the Michelson interferometer experiment, in which a change in the position of the mirrors will cause interference (constructive or destructive), which changes the detected wave's intensity. The question is: how do the gravitational waves cause the change in the arm length? Here is the sentence I can't understand: "Gravitational waves lead to rhythmic distortions of space. This distortion influences the time it takes a light signal to travel back and forth between two freely falling test masses. As the distance the light signal has to travel is stretched or squeezed, it takes the light a little more or a little less time to travel from one test mass to the other." from the EINSTEIN ONLINE site, Catching the wave with light. I think the reason I can't understand this is that I can't imagine how these distortions in space take place and how the distance is stretched or squeezed. I hope someone can help! Answer: I hope the following makes sense to you. Gravitational waves are thought to distort spacetime, but trying to get a mental picture of it will make you go, well, mental. So, difficult as it may be, try to stop thinking of a 4D space; it won't work, for you, me or anybody. We can only imagine 3D space, but we can (fairly!) easily calculate the distortion due to gravity on spacetime using math. Imagine waves on water making the surface of the ocean distort periodically. Now, if you want to measure a certain length, it will differ, roughly speaking, depending on whether a wave is passing by at the time of measurement or not. Now extend that idea to spacetime being distorted, so that our measurement of it, using light waves, also changes with time as the gravitational waves pass by. So if the light rays are out of phase, because one of them travels a different distance compared to the other, this will show up in the interference pattern of the interferometer.
You can see from the picture below how the arms of the LIGO detector are set 90 degrees apart, to maximise our chances of detecting the distortion, as the math, not just our mental pictures, tells us this is the best way to line them up for possible detection of gravitational waves.
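For a sense of scale, the standard back-of-envelope estimate relates the dimensionless strain $h$ of a passing wave to the arm-length change via $\Delta L = hL/2$. The numbers below are the typical orders of magnitude quoted for LIGO (4 km arms, strain around $10^{-21}$):

```python
h = 1e-21      # typical strain amplitude of a detected gravitational wave
L = 4000.0     # m, length of one LIGO interferometer arm

delta_L = h * L / 2.0       # arm-length change, Delta L = h * L / 2
print(delta_L)              # -> 2e-18 m, far smaller than a proton
```

This is why the light's phase, compared between two long perpendicular arms, is used rather than any direct length measurement: no ruler can resolve a displacement that small.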
{ "domain": "physics.stackexchange", "id": 22807, "tags": "gravity, gravitational-waves" }
Servlet WebClient: How do I best solve for multiple servlet requests?
Question: This is working code presently visible here. Basically, this is still a work in process, but I do not want to go much further down the road without peer review. I've limited the source to the servlet. I am managing multiple servlet requests with parameters. I'm looking for feedback on that pattern--and anything else that stands out. public class MyServlet extends HttpServlet { // use this for usda reservoir station values later public final SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd"); // manage form requests and add web service reference object AwdbWebService m_webService = null; String countyName = null; String action = null; public final Logger logger = Logger.getLogger(getClass().getName()); @Resource(name = "jdbc/mydb", lookup = "jdbc/mydb") private DataSource dataSource; @Override public void init(ServletConfig config) throws ServletException { super.init(config); logger.info("Init"); System.out.println(getClass().getName() + ".init"); } public void service(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { action = req.getParameter("ACTION"); if("READ-FindStation" .equals(action)){ countyName = req.getParameter("countyName"); req.setAttribute("theCounty",countyName); RequestDispatcher dispatcher = req.getRequestDispatcher("/awdbRetrieveStationInventory-simple.jsp"); dispatcher.forward(req,resp); } else { PrintWriter writer = resp.getWriter(); writer.println("<html>"); writer.println("<head><title>MyServlet</title></head>"); writer.println("<body><h1>MyServlet</h1>"); writer.println("<h2>DataSource</h2>"); Connection conn = null; try { writer.println("Datasource: " + dataSource + "<br/><br/>"); conn = dataSource.getConnection(); Statement stmt = conn.createStatement(); ResultSet rst = stmt.executeQuery("select 1"); while (rst.next()) { writer.println("Resultset result: " + rst.getString(1) + "<br/><br/>"); } rst.close(); stmt.close(); conn.close(); writer.println("SUCCESS to access the 
datasource"); } catch (Exception e) { e.printStackTrace(writer); e.printStackTrace(); } finally { if (conn != null) { try { conn.close(); } catch (Exception e) { e.printStackTrace(); } } } writer.println("</body></html>"); } //end-else } public void initializeUSDAWebService() { try { URL wsURL = new URL("http://www.wcc.nrcs.usda.gov/awdbWebService/services?wsdl"); AwdbWebService_Service lookup = new AwdbWebService_Service(wsURL,new QName("http://www.wcc.nrcs.usda.gov/ns/awdbWebService","AwdbWebService")); m_webService = lookup.getAwdbWebServiceImplPort(); } catch (Exception e) { System.out.println("Problem creating usda web service client: " + e.getMessage()); e.printStackTrace(); } } /** * Use Case 1: Get Inventory of Stations * This example will get an inventory of all stations in Oregon * for SNOTEL stations that have Snow Water Equivalent * and return list of stations */ public List<StationMetaData> getStations(String countyName) { initializeUSDAWebService(); // TODO: create another jsp page to fill in networkCds and // element codes. Leave as static retrieval for first // pass of development. List<String> stationIds = null; List<String> stateCds = null; List<String> networkCds = Arrays.asList("SNTL"); List<String> hucs = null; List<String> countyNames = Arrays.asList(countyName); BigDecimal minLatitude = null; BigDecimal maxLatitude = null; BigDecimal minLongitude = null; BigDecimal maxLongitude = null; BigDecimal minElevation = null; BigDecimal maxElevation = null; List<String> elementCodes = Arrays.asList("WTEQ"); List<Integer> ordinals = Arrays.asList(1); List<HeightDepth> heightDepths = null; /* * If (logicalAnd) is true, the getStations() call will return only * stations that match ALL of the parameters passed in, otherwise it’ll * return stations that match ANY of the parameters passed in. 
*/ boolean logicalAnd = true; List<String> stationTriplets = m_webService.getStations(stationIds, stateCds, networkCds, hucs, countyNames, minLatitude, maxLatitude, minLongitude, maxLongitude, minElevation, maxElevation, elementCodes, ordinals, heightDepths, logicalAnd); // plditallo - avoid null object returns to the calling jsp // List<StationMetaData> stations = null; List<StationMetaData> stations = Collections.emptyList(); try { stations = m_webService.getStationMetadataMultiple(stationTriplets); } catch (Exception e) { System.out.println("Problem retrieving usda stations from usda method: " + e.getMessage()); e.printStackTrace(); } return stations; } /** * Use Case 2: Get period of record for one station. * This will return period of Data that are SNOW WATER EQUIVALENT (element * code = WTEQ) for a given station and date range. * Note: Always use an ordinal of 1, and heightDepth of null * (height depth is only used for soil sensors) * @param p_stationTriplet The station to get data for, ex: "471:ID:SNTL" * @param p_beginDate The begin date - a String format "yyyy-MM-dd" * @param p_endDate The end date - a String format "yyyy-MM-dd" * @return An Array of Data Objects */ public Data[] getPeriodOfRecord(String p_stationTriplet, String p_beginDate, String p_endDate){ Data[] values = m_webService.getData(Arrays.asList(p_stationTriplet), "WTEQ", 1, null, Duration.DAILY, true, p_beginDate, p_endDate, true) .toArray(new Data[0]); return values; } /** * Use Case 3: Get past seven days' data. * This will return the last seven days of SNOW WATER EQUIVALENT (element * code = WTEQ)Data, relative to today, for a given station. 
* Note: Always use an ordinal of 1, and heightDepth of null * (height depth is only used for soil sensors) * @param p_stationTriplet The station to get data for, ex: "471:ID:SNTL" * @return An Array of Data Objects */ public Data[] getLastSevenDaysData(String p_stationTriplet){ String today = dateFormat.format(new Date()); Calendar lastWeek = GregorianCalendar.getInstance(); lastWeek.add(Calendar.DAY_OF_YEAR, -7); String sevenDaysAgo = dateFormat.format(lastWeek.getTime()); Data[] values = m_webService.getData(Arrays.asList(p_stationTriplet), "WTEQ", 1, null, Duration.DAILY, true, sevenDaysAgo, today, true) .toArray(new Data[0]); return values; } } Answer: public final SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd"); This stupid class is not thread-safe. So better forget it or use a ThreadLocal wrapper. e.printStackTrace(writer); Are you sure the user should see it? If it's just temporary, write a method like handle(Exception e) doing this (and add a TODO there). e.printStackTrace(); What is then your public final Logger logger = Logger.getLogger(getClass().getName()); good for? Additionally, reduce visibility as much as possible. List<String> stationIds = null; List<String> stateCds = null; ... Avoid nulls, especially for collections. An empty collection is much easier to work with. You seem to be avoiding a class like class Station { String stationId; String stateCd; ... } Why? If (logicalAnd) is true, the getStations() call will return only stations that match ALL of the parameters passed in, otherwise it’ll return stations that match ANY of the parameters passed in. Boolean arguments are better replaced by enum MatchType {ANY, ALL} Isn't this a bit too long a parameter list?
List<String> stationTriplets = m_webService.getStations(stationIds, stateCds, networkCds, hucs, countyNames, minLatitude, maxLatitude, minLongitude, maxLongitude, minElevation, maxElevation, elementCodes, ordinals, heightDepths, logicalAnd); If you don't want to pass List<Station>, then consider grouping related parameters together. String p_stationTriplet Hungarian notation doesn't get used in Java.
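One way to act on the SimpleDateFormat advice is a ThreadLocal wrapper; the sketch below is illustrative (the class name, the method and the pinned UTC zone are choices made here, not part of the original servlet):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class DateFormats {
    // SimpleDateFormat is not thread-safe, but a servlet's service() runs on
    // many threads at once. ThreadLocal gives each thread its own instance.
    private static final ThreadLocal<SimpleDateFormat> DATE_FORMAT =
            ThreadLocal.withInitial(() -> {
                SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd");
                f.setTimeZone(TimeZone.getTimeZone("UTC")); // pinned only for reproducible output
                return f;
            });

    public static String format(Date d) {
        return DATE_FORMAT.get().format(d);
    }
}
```

In the servlet, `dateFormat.format(...)` calls become `DateFormats.format(...)`; since Java 8 one could also switch to the immutable `java.time.format.DateTimeFormatter`, which is thread-safe by design.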
{ "domain": "codereview.stackexchange", "id": 9755, "tags": "java, servlets" }
RESTful APIs (Create, Read) with flask-restful
Question: I wrote this simple API to be able to send logs from the client to MongoDB. Even though this works and fits my need, I would like to point out some concerns that I have regarding the following: Is it efficient to be calling the instance of MongoClient for every request? Do I need to close the instance of MongoClient after the request is successful? A more Pythonic way to return responses? from flask import Flask, jsonify, request, Response from flask_restful import Resource, Api from pymongo import MongoClient import json app = Flask(__name__) api = Api(app) USER = "user" PASS = "pw" MONGO_URI = 'mongodb://%s:%s@test.mlab.com/test' % (USER, PASS) PORT = 19788 def db_conn(): client = MongoClient(MONGO_URI, PORT) return client def insert_record(args): client = db_conn() replycode = 0 try: db = client['test'] posts = db.users posts.insert(args) except: replycode = 1 return replycode def select_record(args={}): client = db_conn() db = client['test'] result = db.users.find(args) return result class CreatUser(Resource): def post(self): try: content = request.get_json() if "Vehicle" not in content: return jsonify({"Result": "Vehicle number not in passed arguments"}), 400 else: vehicle = content['Vehicle'] reply = insert_record(content) if reply == 0: return jsonify({"Result" : "Successfully inserted user: " + vehicle}), 201 else: return jsonify({"Result" : "Failed to insert data. Check logs for more details"}), 400 except Exception as e: return jsonify({'Error' : str(e)}), 500 class ViewUser(Resource): def get(self): from bson import json_util results = select_record() final = [] for result in results: result.pop("_id") final.append(result) return jsonify(results=final), 200 api.add_resource(CreatUser, "/api/create") api.add_resource(ViewUser, "/api/view") if __name__ == "__main__": app.run(debug=True) Answer: Ad. 1 & 2: There is no need to open fresh connection for, and close it after request. 
MongoClient has connection pooling built in, so the only thing you need to do is create an instance of MongoClient with connection parameters and an optional pool size. It will open connections on first use and, in case they get closed or time out, reopen them when needed. A good way to do this would be to build a plugin wrapping MongoClient, like they did with SQLite in this example. Such a plugin can then be used as a persistent connection: all the logic regarding connecting/keeping/reconnecting happens inside it, thus simplifying the resource methods. But it is not necessary; an instance of MongoClient can be created in global scope and imported into functions in the same way as you did with db_conn(). Ad. 3: IMHO there is nothing wrong with jsonify, since Flask-RESTful recognises Flask's response objects. But, as they show here, you don't need it and you can return dicts directly: class Todo3(Resource): def get(self): # Set the response code to 201 and return custom headers return {'task': 'Hello world'}, 201, {'Etag': 'some-opaque-string'} I would also recommend putting USER, PASS, MONGO_URI and PORT into a configuration file, so they can be easily overridden by environment variables in production. Also, Artyom24 pointed out nice improvements regarding the API's RESTfulness.
{ "domain": "codereview.stackexchange", "id": 23905, "tags": "python, api, rest, mongodb, flask" }
$gazebo fails to work after installing gazebo_ros_pkgs
Question: Hi. I'm in a dilemma. After installing gazebo_ros_pkgs, the 'rosrun/roslaunch gazebo_ros' commands work well and bring up a Gazebo GUI. However, when I run the command $gazebo alone in a terminal, nothing happens. Just nothing! Neither the Gazebo GUI, which is expected, nor any warning/error information in the terminal. I think something might be wrong with the environment variables or something else, but I can't figure it out. It's really awkward, for Gazebo is there but not available to me. I really need some help. Thanks. Originally posted by BenWashburn on Gazebo Answers with karma: 7 on 2014-11-05 Post score: 0 Answer: Please run: export LIBGL_ALWAYS_SOFTWARE=1 then run: gazebo Originally posted by mudassar_ej with karma: 16 on 2014-11-06 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by BenWashburn on 2014-11-12: Thank you very much. Gazebo works! May I ask what the environment variable LIBGL_ALWAYS_SOFTWARE means and how it affects the way Gazebo works?
{ "domain": "robotics.stackexchange", "id": 3665, "tags": "ros" }
If a regularization procedure respects a symmetry, is this symmetry unbroken in perturbation theory?
Question: I read in this paper the statement that a proof that SUSY is preserved in perturbation theory would be the existence of a regularization procedure which respects SUSY (for a particular theory). Is there a clear proof that the existence of a regularization procedure which respects a symmetry implies the absence of its anomaly? This makes sense at some level; step by step in my calculation, my formula for $\partial_\mu j^\mu$ respects this symmetry. The contrapositive to this statement is that if there is an anomaly, there is no regularization procedure which respects the symmetry. This can be made plausible through examples, of course, but I'd like to be sure of it. Answer: Here is a sketch of the philosophy. A regularisation is a deformation of the theory that renders it finite. Let us assume for simplicity that there is a single deformation parameter, say $x\in\mathbb R$, and let us choose the origin (the undeformed theory) at $x=0$. Some typical examples for $x$ are $x=\mu/M$, where $\mu$ is some fixed mass scale, and $M$ a Pauli-Villars mass. $x=d-n$, where $d$ is the dim-reg dimension, and $n$ the physical dimension (e.g., $n=4$). $x=\mu\epsilon$, where $\mu$ is some fixed mass scale, and $\epsilon$ is a small distance used in point-splitting. As the undeformed theory typically contain divergences, observables are usually $\mathcal O(1/x^a)$ for some $a\ge0$. In particular, this is so for the Noether current, which is a composite operator (and thus ambiguous, i.e., divergent). The regulator may spoil some symmetry, but the same is formally recovered in the $x\to0$ limit, and so the divergence of the Noether current is formally $\mathcal O(x^b)$, for some $b\ge0$. Combining these remarks, we obtain that the divergence of the regulated current is actually $\mathcal O(x^{b-a})$, for some parameters $a,b$. If $b>a$, the current is conserved in the physical limit, and the symmetry is non-anomalous. 
If $a=b$, the regulator leaves a finite piece in the physical limit, and the symmetry is anomalous. See e.g. this PSE post for an explicit example of this approach to anomalies. Now comes the key point. If the regulator respects the symmetry, the divergence of the current is identically zero, even for finite $x$. Therefore, $dj=0$ for all $x$, and $dj=0$ in the $x\to0$ limit. No finite piece may survive the physical limit, because the current is exactly conserved in the deformed theory. Note that this philosophy also holds for non-perturbative anomalies. If there is some regulator (defined non-perturbatively, e.g. lattice) that respects the symmetry, the latter is necessarily non-anomalous. This is why anomalies are typically induced by massless fermions only: if a mass term is allowed by the symmetry, then a Pauli-Villars regulator is allowed too, and the corresponding fermion cannot contribute to the anomaly. See e.g. 1909.08775 for more details.
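The scaling argument can be condensed into one display (a schematic summary of the reasoning above, not a new result):

```latex
% j^\mu is a divergent composite operator, so j^\mu = O(x^{-a});
% the symmetry is formally restored as x -> 0, so dj = O(x^{b}).
% Combining the two estimates for the regulated current:
\partial_\mu j^\mu_{\rm reg} \;=\; \mathcal{O}\!\left(x^{\,b-a}\right)
\;\xrightarrow{\;x\to 0\;}\;
\begin{cases}
0, & b>a \quad \text{(symmetry non-anomalous)},\\[2pt]
\text{finite}, & b=a \quad \text{(anomaly)}.
\end{cases}
```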
{ "domain": "physics.stackexchange", "id": 64083, "tags": "quantum-field-theory, symmetry, regularization, quantum-anomalies" }
How do you calculate the intensity of light around the focal point from a focused collimated beam of light?
Question: Problem/purpose of me asking this question to you people who know more than me: I'm doing a science project where I'm focusing a collimated beam of light to a point in a light-scattering medium (water vapor or fog), and I want to calculate the intensity near that point. I can't seem to find an equation that describes this problem. I want to know two things; if you know anything that can help in solving this problem, it is much appreciated! :) Issues/things I need help figuring out: (1) If I focus a collimated beam of light with a lens (say a hand-held magnifying glass) into a relatively uniformly dispersed light medium (water vapor or theatrical fog), can this focal point be seen in ANY direction (say, 5 feet away from the focal point)? Doesn't light scatter isotropically in this case? If not, what is the preferred direction of scattering of the light? (2) If an equation exists (and light scatters isotropically in the medium used), can this equation, given the parameters (light frequency used, index of refraction of the medium, density of the medium, size of the collimated beam, lens dimensions used, etc.), give the intensity of light in terms of the distance a person is from the focal point? I am aware of the inverse square law, but in my case it's a bit different, isn't it? Wouldn't my situation involve some type of directionality? How do you find the viewpoints from which the focal point appears brightest? More relevant or related questions that need to be addressed: Does the particle size matter (the particles that make up the light medium)? How do I determine the correct density of the given medium to produce the most profound effect (having the focal point illuminate as brightly as possible)? How do I determine the right intensity of the initial column of light that is focused?
Combining the previous two bullets: how do I determine the right combination of medium density and light intensity to illuminate the focal point as brightly as possible? The gist: what I'm trying to do is create a "point of light" inside a suspended light medium (ideally viewable in all directions), and I'm trying to figure this out with equations before buying a whole ensemble of things (fog machine, light source/laser, magnifying glass or multiple lenses, etc.) to test it. (If it is not viewable in all directions and has directionality, then I'm just going to combine multiple systems, pointing multiple columns of focused light from different directions at the same point in space, to obtain an acceptable-looking "point of light" in the medium.) Answer: The scatter direction depends on the size of the particle and the wavelength. Very small particles (e.g. the nitrogen molecules of the atmosphere) scatter nearly isotropically. There is still an effect on the polarisation of the scattered light (bees use that to locate the sun if they can't see it directly). Often Gaussian beams are used to describe how the intensity propagates in an optical train (a system of lenses and mirrors): http://en.wikipedia.org/wiki/Gaussian_beam. Note that this describes the intensity of a laser with a Gaussian intensity profile (a good approximation for many lasers, especially if you focus them through a pinhole). If you have an extended light source, you will have to add the intensities of several such beams. Once you have a numeric intensity profile (I guess 2D is enough for your usage), you can try an exponential decay law to estimate the effects of scatter. Your focus spot will look more intense the higher the numerical aperture of your lens is. If you use a lens that has a 2.5 cm diameter and f = 10 cm, the spot won't look as intense as with a lens that has f = 3 cm. Have you thought of using fluorescence?
You could dissolve some colour in water and use laser protection goggles to see the fluorescent light. Then you don't have to cope with scattering. You can get polygonal mirrors out of old laser printers. That way you can scan the beam in one direction. If you use a laser diode, you can modulate the intensity very fast. I recently purchased a 405nm laser with 120mW for $120 from lasever.com. 120mW is very dangerous. If you don't have protection goggles or you share your space with other people, don't use lasers > 0.5mW!
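The Gaussian-beam formulas the answer points to fit in a few lines (the wavelength, waist and power below are illustrative assumptions): the spot radius grows as $w(z) = w_0\sqrt{1 + (z/z_R)^2}$, and the on-axis intensity of a beam of power $P$ is $I = 2P/(\pi w^2)$, so the spot is brightest exactly at the focus.

```python
import math

# Gaussian-beam sketch: spot size and peak intensity near the focus.
# The wavelength, waist and power are illustrative assumptions.
wavelength = 532e-9   # metres (green laser)
w0 = 10e-6            # beam waist at the focus, metres
P = 5e-3              # beam power, watts

z_R = math.pi * w0 ** 2 / wavelength   # Rayleigh range: depth of the "point"

def w(z):
    """1/e^2 beam radius at distance z from the focus."""
    return w0 * math.sqrt(1 + (z / z_R) ** 2)

def peak_intensity(z):
    """On-axis intensity (W/m^2); largest at z = 0."""
    return 2 * P / (math.pi * w(z) ** 2)
```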
{ "domain": "physics.stackexchange", "id": 1242, "tags": "optics, electromagnetic-radiation, experimental-physics, scattering, reflection" }
Why do a circle and a square of the same perimeter occupy different areas?
Question: Maybe this question does not meet the standards of this site, but I was just calculating things and I got stuck. Actually, we were playing a game in which we had to fit students into a square and a circle of almost the same perimeter, and it was amazing to see that we fitted fewer into the square than into the circle. So if a square and a circle have the same perimeter, I think they should also occupy the same area, but they don't. Why? Answer: Imagine a rope enclosing a two-dimensional gas, with vacuum outside the rope. The gas will expand, pushing the rope to enclose a maximal area at equilibrium. When the system is at equilibrium, the tension in the rope must be constant, because if there were a tension gradient at some point, there would be a non-zero net force at that point in the direction of the rope, but at equilibrium the net force must be zero in all directions. The gas exerts a force outward on the rope, so tension must cancel this force. Take a small section of rope, so that it can be thought of as a part of some circle, called the osculating circle. The force on this rope segment due to pressure is $Pl$, with $P$ pressure and $l$ the length. The net force due to tension is $2T\sin(l/2R)$, with $T$ tension and $R$ the radius of the osculating circle. Because the pressure is the same everywhere, and the force from pressure must be canceled by the force from tension, the net tension force must be the same for any rope segment of the same length. That means the radius of the osculating circle is the same everywhere, so the rope must be a circle.
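The size of the gap is easy to quantify: for a fixed perimeter $P$, the square encloses $(P/4)^2$ while the circle encloses $P^2/4\pi$, a factor of $4/\pi \approx 1.27$ more. A quick check:

```python
import math

# Compare the areas enclosed by a square and a circle of equal perimeter.
P = 4.0                               # any perimeter works; 4 gives a unit square
square_area = (P / 4) ** 2            # side length P/4
circle_area = P ** 2 / (4 * math.pi)  # radius P/(2*pi), area pi*r^2

ratio = circle_area / square_area     # = 4/pi, independent of P
```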
{ "domain": "physics.stackexchange", "id": 24576, "tags": "geometry" }
How do scientists know that the distant parts of the universe obey the physical laws exactly as we observe around us?
Question: How do scientists know that distant parts of the universe obey the physical laws exactly as we observe around us? The question might look a bit odd, but I am really stuck on it. Scientists (with tools) have physically explored only our solar system and some parts of our galaxy, which is really a tiny part of the observable universe. And now they are constantly using this knowledge base, along with 'tested physical laws', to measure the properties of distant parts of our universe. For example, we have tested and found many times that the speed of light is constant within our local periphery (on Earth and in the space around Earth). Yet we presume that the speed of light is constant even in the farthest parts of our universe. Certainly we did not test it in those distant parts of the universe, because until now we have had no way to. Not only light but also the other physical properties, like luminosity, gravity, etc., and their related physical laws are agreed upon based on tests within our solar system. And based on these laws we deduce the properties of other parts of the universe (i.e. age, distance, mass and luminosity of stars millions or billions of light years away). My question is: how do we know that these physical laws, which we tested within a tiny area of the universe, work consistently in the distant parts of it? Is there any probability that the distant parts of our universe obey physical laws differently, and that our predictions based on the applied physical laws give us an unreal, yet consistent, illusion of the actual reality? Answer: We don't know in general, but to the extent we can measure, the laws seem to be the same, even if conditions are not. For example, radioactive decay: we know how fast various elements decay, and we can observe the results of radioactive decay in distant supernovae. The conclusion is that, for at least some elements, the rate of radioactive decay is the same on Earth as it is in distant supernovae.
After accounting for redshift, spectral emission lines remain unchanged by distance. This implies that the fine-structure constant is indeed constant. Distant galaxies have gravitational fields, and interactions between galaxies proceed in the same way in distant galaxies as they do in local ones. Ultimately, the justification is philosophical: there is no observational reason to believe gravity behaves differently in distant parts of the universe, and so we believe that it does not. In the extreme conditions of the early universe, some physical laws were different. For example, instead of distinct electromagnetic and weak fields, there was a single electroweak field. But this can be described as a single "law", with the electromagnetic and weak interactions being just the low-energy approximation of the electroweak interaction. So if it were discovered that gravity (for example) worked differently in distant parts of the universe, but that there was a consistent pattern or rule for how it varied, then that would simply become the new theory of gravity (with general relativity becoming only the local approximation to this new law). There is a more fundamental assumption: that the behaviour of matter and energy in the universe can be modelled by "laws". There are no angels dancing on pinheads. The justification for this is strictly in the realm of philosophy.
{ "domain": "astronomy.stackexchange", "id": 5485, "tags": "astrophysics, universe, space-time, observable-universe" }
Digital, continuous and the state term
Question: If a digital system has two or more modes (states), does a continuous system have just one mode (state)? Answer: Your first statement a digital system has two or more modes (states) is a bit confusing or misleading; it misses the point! The point is that a digital system can only be in one of a countable set of possible states (and that these are acquired in a countable set of times). The state space of an analog system can be uncountable. Now the problem with your statement is that two or more doesn't say much about the countability of states of digital systems. Uncountably many states, as an analog system might have, is always more than countably many. does a continuous system have just one mode (state)? um, no, only a constant system (a very boring system) would have one state. Such as system could be both analog or digital. (so, your definition "two or more" is actually wrong...) In generally, as mentioned above, an analog system can take one of a uncountable infinite set of states, whereas a digital system can only take one of a countable infinite or finite number of states. That's the usual definition of the difference between digital and analog, together with the time-discreteness of digital systems.
{ "domain": "dsp.stackexchange", "id": 10612, "tags": "terminology, state" }
What am I getting wrong about polarizing filters & quantum mechanics?
Question: I saw a video where they used polarizing filters. At first they took two and put them so that one is perpendicular to the other. They shone light through them and nothing passed. They then inserted a third filter in between, at 45 degree angle, and some light passed through. They claimed this shows how weird quantum mechanics is since there is no classic explanation as to how a filter increases light. Now, I understand the explanation of quantum mechanics. But why can't it be explained classically (without collapsing and probabilities)? If I understand correctly, we can think of a filter as projecting the light wave onto its axis. So the horizontal filter will take the y portion of the wave, and the vertical filter will take the x portion. If placed on on top the other, then after the wave passes the first, it has no component of the axis of the second filter, so nothing passes through. But, if we insert a 45 angle filter in between, then it'll project the wave to its axis and then the wave has a portion that can be projected to the third filter's axis. It's thinking of filters as slits that cause the wave to align as it passes through them. The amount of light that passes will be different in the two theories (I think 1/8 in quantum and ~1/2 in my classic explanation (if it is correct)), but the fact that inserting a filter causes light to pass through can be explained in both cases, no? UPDATE: The video is https://www.youtube.com/watch?v=zcqZHYo7ONs&vl=en Answer: It does work as well, the sentence "there is no classical explanation" is just wrong... Of course you can explain what is going on using quantum mechanics, but why would you? A photon is a VERY complicated thing, and I think that trying to explain classical phenomena using photons is just pedagogically wrong but that is my opinion . Some people don't think that way... Anyway, you're correct.
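The classical reading can be checked with Malus's law directly (ideal, lossless polarisers assumed): a crossed pair passes nothing, while inserting a 45-degree filter restores 1/8 of the incident unpolarised light, the same fraction the quantum calculation gives.

```python
import math

def transmitted_fraction(angles_deg):
    """Fraction of unpolarised light passing a stack of ideal polarisers.

    The first filter transmits 1/2; each subsequent filter applies
    Malus's law, I = I0 * cos^2(relative angle). Ideal, lossless
    filters are assumed throughout.
    """
    frac = 0.5
    for a, b in zip(angles_deg, angles_deg[1:]):
        frac *= math.cos(math.radians(b - a)) ** 2
    return frac

crossed = transmitted_fraction([0, 90])          # perpendicular pair: ~0
with_middle = transmitted_fraction([0, 45, 90])  # inserting 45 deg gives 1/8
```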
{ "domain": "physics.stackexchange", "id": 53212, "tags": "quantum-mechanics, polarization" }
Question about FFT analysis of a signal
Question: In labs we had to measure the oscillations of a pendulum using an electric sensor. So basically, my data consists of time and voltage (which represents amplitude) pairs. The task was then to perform FFT analysis on this data to obtain the frequency of the pendulum. After going through the motions using QtiPlot (removing mistakes, fixing the offset, smoothing and interpolating), I get the major peak at 2.61Hz; however, if I perform a sinusoidal fit (using SigmaPlot), I obtain the period to be 1.94s. Both of these values have a very high precision and are completely irreconcilable. I am at my wits' end; I've been at this for days. Please help me. $T=1.9312\pm2.6044E-005$ according to this fit. The green is the smoothed and interpolated data; right is the FFT; left is the same FFT zoomed in. My procedure with the data regarding the FFT is to first find the offset by calculating the average value. Then I smooth the data. Then I cut the data so that I have 2^n points, and finally I interpolate points with the same number of starting points. After that, I ask QtiPlot to perform the FFT. The algorithm used to smooth is an FFT filter, but I honestly haven't noticed a major difference when using a different algorithm. Answer: $T=1.94$ sec is certainly compatible with the big, but noisy, peak at .5Hz. Why are you worrying about the tiny peak at about 2.6Hz? Your data looks very noisy, so random but meaningless peaks are to be expected. I still don't understand why your FFT is so noisy given the smooth data of black points in the first plot. You say that you interpolate? Why do you do this? FFT works with discrete data sets. You presumably measured your amplitude against time at $2^N$ points and these are your black dots. (FFT works best with data sets that are powers of 2.) You should then apply FFT to this discrete data set to get the Fourier transform at $2^N$ values of the frequency. You then plot these with PlotPoints set equal to $2^N$. Did you do this?
It looks as though you are plotting your frequency with a resolution in excess of $2^{-N}$. If so, you probably introduced all sorts of aliasing artifacts. One more thing! You said that you "smoothed". I thought that by this you meant "apodized". Looking at your first and second plots, however, it looks as if you cut off the data suddenly at $t=0$ and $t=40$. If you kept those violent endings in the FT, then your FT plot will be full of rapid oscillation artifacts ("diffraction rings") at the frequency of (1/Time Range). These will completely swamp the data you want. Further, if you plot these high-frequency oscillations without carefully choosing your plotting preferences, this can give the kind of cruddy FT plot that you have. Look up apodization on Wikipedia and implement it to slowly turn off your data at the starting and ending times.
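The apodization advice is easy to try on synthetic data (the sample rate below is an assumption; the 40 s record length matches the plots): a Hann window applied to a clean 1.94 s sine puts the dominant FFT peak at the pendulum frequency, near 1/1.94 s, about 0.52 Hz, not 2.6 Hz.

```python
import numpy as np

# Synthetic pendulum record: period 1.94 s, 40 s long (as in the plots).
# The sample rate is an assumption; any reasonable value behaves the same.
T = 1.94                       # pendulum period, seconds
fs = 32.0                      # sample rate, Hz (assumed)
t = np.arange(0, 40, 1 / fs)
x = np.sin(2 * np.pi * t / T)

# Hann window ("apodization"): turns the record off smoothly at both ends,
# suppressing the ringing an abrupt cutoff puts into the spectrum.
window = np.hanning(len(x))
spectrum = np.abs(np.fft.rfft(x * window))
freqs = np.fft.rfftfreq(len(x), 1 / fs)

f_peak = freqs[np.argmax(spectrum)]   # lands near 1/T, not near 2.6 Hz
```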
{ "domain": "physics.stackexchange", "id": 62710, "tags": "harmonic-oscillator, computational-physics, fourier-transform, software, data-analysis" }
Can you measure how non-inertial a frame is at a time $t$?
Question: Consider a frame R. It is my understanding that the "non-inertiality" of R at time $t$ is characterized by: The acceleration of the origin of R in some inertial frame (which doesn't depend on the chosen inertial frame) The instantaneous rotation (which again doesn't depend on the chosen inertial frame, if I'm right) Can an observer in R measure these two, for example by dropping a ball in an endless well and comparing the measured acceleration vector with the one expected by the second law of dynamics? Answer: Sure. Get an accelerometer, a rock, a clock, and a sextant. Hold the accelerometer at the origin and measure your proper acceleration. Drop the rock at the origin and measure its (and hence your) angular velocity with the clock and the sextant. If you want to make very precise measurements of angular velocities that are very small compared to your proper acceleration, you will need to be a bit cleverer in your experimental apparatus, for instance, by measuring the change of the plane of oscillation of a large pendulum. See: Foucault's pendulum experiment.
{ "domain": "physics.stackexchange", "id": 95773, "tags": "newtonian-mechanics, reference-frames, inertial-frames" }
Is dynamics only concerned with systems that are accelerating?
Question: I'd just like to check my understanding of the branches of mechanics. I suspect dynamics is not just concerned with accelerating systems. Doesn't dynamics just generally deal with systems in motion? Those systems can either be accelerating or in dynamic equilibrium (constant velocity), right? Or am I thinking of kinematics? Answer: "Dynamics" comes from the Ancient Greek δύναμις which meant power, ability. Dynamics deals with forces, the power behind forces, and their effects on motion. The basis of dynamics is Newton's second law of motion, which deals with acceleration. Since the initiation of motion and changes in motion always involve acceleration, you generally will find that acceleration is a part of most issues covered by dynamics. Kinematics deals with motion alone rather than with the forces that cause motion.
{ "domain": "physics.stackexchange", "id": 38665, "tags": "newtonian-mechanics, forces, kinematics, acceleration, terminology" }
Remove elements compared with other elements in a Set based on a condition
Question: Simplified the actual problem to focus on comparing/removing performance. Given a set S with n elements, find the optimal way to compare each element to all others until a condition is met that decides if and which item to remove. I experience long runtimes when n is high (e.g. n=3000 has a runtime of 2500ms on an Intel i5-6500 CPU). An element is a Tuple object (itv, t) where itv holds an integer interval value (between 0 and 4, inclusive) and t holds an integer total benefit value. The condition: Given Tuple s and another Tuple s', if itv == itv', then if t <= t' remove s, else remove s'. A naïve example. S={(1,20),(0,40),(1,35)} Compare (1,20) with (0,40): itv != itv', continue. Compare (1,20) with (1,35): itv == itv', t < t', remove (1,20) from S. Compare (0,40) with (1,35): itv != itv', return S. Due to Java's constraints on removing elements while iterating, for-loops are wildly inefficient and I currently use a nested foreach-loop inside an Iterator loop. However, I still feel like it underperforms. This thought is mostly because the code looks terrible, in my opinion. Below is my implementation. It tries to reduce the number of unnecessary comparisons as much as possible. My code uses an outer Iterator for comparing the main Tuple to the others. In the case that the interval matches and the main Tuple's total benefit is worse, the inner loop is stopped, the main Tuple is removed and we continue with the next Iterator item. If the inner Tuple is worse, however, then the inner Tuple is marked for deletion and will be deleted at the end of the outer loop iteration. Does anyone see any glaring mistakes, have any comments about performance (expensive calls) or refactoring for code clarity? 
public static HashSet<Tuple> compareAndRemove(HashSet<Tuple> set) { HashSet<Tuple> bestTuples = new HashSet<>(set.size()); //Return set Collection<Tuple> tupleRemovals = new HashSet<>(); //Tuples to remove this iteration Iterator<Tuple> s_iter; do { set.removeAll(tupleRemovals); removedElements += tupleRemovals.size(); tupleRemovals = new HashSet<>(); s_iter = set.iterator(); if (s_iter.hasNext()) { Tuple s = s_iter.next(); if (!s_iter.hasNext()) { //Last element; nothing to compare anymore bestTuples.add(s); s_iter.remove(); break; } //Compare with other boolean removeMainTuple = false; for (Tuple s_prime : set) { boolean equalIntervals = true; if (s.equals(s_prime)) continue; //Skip same element //Check interval if (s.itv == s_prime.itv) { //Decide which Tuple to discard if (s.t <= s_prime.t) { removeMainTuple = true; //Remove Iterator element break; } else { tupleRemovals.add(s_prime); //Remove foreach element } } } //Move current item to return set if it was the best if (!removeMainTuple) { bestTuples.add(s); } s_iter.remove(); } } while (s_iter.hasNext()); bestTuples.addAll(set); return bestTuples; } Answer: In the case where you just want to find the best tuples, the code can be much simpler: find the Tuple with the best t for each value of itv (sounds like a job for Map) put the best bunch of Tuple into a Set, and return it public static Set<Tuple> bestTuples(Set<Tuple> set) { Map<Integer, Tuple> bestTuplesByItv = new HashMap<Integer, Tuple>(); Tuple bestTupleSoFar; for (Tuple t: set) { if ((bestTupleSoFar = bestTuplesByItv.get(t.itv)) == null || t.t > bestTupleSoFar.t) { bestTuplesByItv.put(t.itv, t); } } Set<Tuple> bestTuples = new HashSet<Tuple>(); bestTuples.addAll(bestTuplesByItv.values()); return bestTuples; } This should also be faster - your solution has some nested loops, O(n^2), but the suggestion above just iterates through the entire set once. 
My gut tells me that removing things from a Set is slower than just making a new set with the things we want, but I'm not sure about HashSet. We also don't need a LinkedHashMap here - a plain HashMap will work just fine, since LinkedHashMap only adds a linked list to preserve insertion order during iteration, and we don't care about iteration order. Your code seems to be mutating the Set that is passed in as a parameter though - if that's desired behaviour, it's a trivial modification from the code above - just clear the values in set and add the values we want before returning: public static Set<Tuple> compareAndRemove(Set<Tuple> set) { Map<Integer, Tuple> bestTuplesByItv = new HashMap<Integer, Tuple>(); Tuple bestTupleSoFar; for (Tuple t: set) { if ((bestTupleSoFar = bestTuplesByItv.get(t.itv)) == null || t.t > bestTupleSoFar.t) { bestTuplesByItv.put(t.itv, t); } } set.clear(); set.addAll(bestTuplesByItv.values()); return set; }
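For readers who want the shape of the algorithm without the Java boilerplate, here is the same keep-the-best-tuple-per-interval idea as a Python sketch (my own addition, not part of the original review):

```python
def best_tuples(tuples):
    """One pass: keep, for each interval value, the tuple with the largest t."""
    best = {}
    for itv, t in tuples:
        if itv not in best or t > best[itv][1]:
            best[itv] = (itv, t)
    return set(best.values())

# The naive example from the question: (1, 20) loses to (1, 35).
print(best_tuples({(1, 20), (0, 40), (1, 35)}))
```

A single dictionary pass replaces the quadratic pairwise comparison entirely.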
{ "domain": "codereview.stackexchange", "id": 36165, "tags": "java, performance, algorithm, set" }
Why $dE + dW = C_pdT$?
Question: $dE$ is Internal Energy $dW$ is Work Done $C_p = $ Specific heat at Constant Pressure In Mayer's relation, they have given the relation $dE + dW = C_p\,dT$, but I don't know how they have derived this relation. More information in the image below. Answer: This is only valid for constant-pressure systems, as the enthalpy change reduces to the heat: $$ \Delta H = \Delta E + \Delta(PV) = \Delta E + P\Delta V = \Delta E + W = Q $$ or rather in differential form $$ dH = dE + dW = dQ $$ Since we know that enthalpy, and hence heat, can be written as $C_P\,dT$ for reversible processes (which is true for all isobaric processes, a class that in fact includes conditions on Earth), we get $$ dE + dW = C_P\, dT $$ By the way, this is just a way of writing the first law of thermodynamics for isobaric systems. Hope I could be of help.
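A quick numeric sanity check of the relation (my own illustration, using a monatomic ideal gas with $C_V = \tfrac{3}{2}R$ per mole):

```python
R = 8.314          # gas constant, J/(mol K)
Cv = 1.5 * R       # monatomic ideal gas, per mole
Cp = Cv + R        # Mayer's relation

dT = 1.0           # heat 1 mol by 1 K at constant pressure
dE = Cv * dT       # change in internal energy
dW = R * dT        # work done: P dV = n R dT for an ideal gas at constant P

# dE + dW equals Cp dT, i.e. the heat absorbed at constant pressure.
print(dE + dW, Cp * dT)
```

The work term $P\,dV = nR\,dT$ is exactly the gap between $C_V$ and $C_P$.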
{ "domain": "physics.stackexchange", "id": 70843, "tags": "thermodynamics, kinetic-theory" }
CMB homogeneity: Inflation vs. common initial conditions?
Question: As I understand it, inflation was proposed to provide a mechanism by which now-causally disconnected regions of the universe could have once been in causal contact, and thus explain correlations in CMB temperatures between regions that shouldn’t have had the ability to exchange information. My question is simple: Why not instead propose that the initial conditions were sufficiently the same in distant regions to result in similar temperatures in areas not causally connected? Forgive my ugly analogy: If I find 2 coffee mugs in a house, one on the first floor and one on the second floor, and they are both exactly 200 degrees F, I can propose either that: A) These mugs have once in the past been in causal contact, and thus had time to equalize their temperatures Or B) Both mugs were exposed to sufficiently similar conditions to result in similar temperatures. In this case, my microwave. The latter seems by far the better choice; why have we settled on Choice A to explain CMB temperature correlations instead? Answer: Well, either you think that homogeneity is a fact in need of explanation, or you think it doesn't need to be explained at all. If you don't think it needs to be explained at all, and it just is that way, then that's a fair position to take, but you can see the other answers for why most people disagree. But if you do think that homogeneity has to be explained, your solution (B) is not a solution at all. It is true that if the situation is homogeneous at time $t$, it could be explained by having homogeneous initial conditions at time $t-1$. But that's not a real explanation, it's just kicking the can down the road, because then you have to explain why the situation was homogeneous at time $t - 1$. (It's turtles all the way down.) Or you might say homogeneity is because the visible universe came into thermal equilibrium with something else which was homogeneous. 
Again, that just moves the problem one step back because you have to explain why that other material was homogeneous. Once you start trying to propose any explanation at all, nothing is going to work unless you change cosmological history, e.g. by adding inflation.
{ "domain": "physics.stackexchange", "id": 53583, "tags": "cosmological-inflation, cosmic-microwave-background" }
What does "tol" mean in a gear box?
Question: In this NASA document, it is mentioned on page 221 (239 of the pdf) that a "23 tol gear box" was used. What does that mean? A google search for "23 tol gear box" (with quotes) comes up completely empty, and "tol gear box" shows only irrelevant results. A search for just tol doesn't come up with any convincing results either. I can't rule out that it is a typo of some sort. From the context I presume it to mean that the gearbox output runs 23 times as fast as the powered side, but I have to be certain. Answer: Oh, I get it. The typo is a missing space. It's supposed to mean "23 to 1".
{ "domain": "engineering.stackexchange", "id": 197, "tags": "mechanical-engineering, gears" }
Expression of $\not{p}$ in Dirac equation
Question: In scattering amplitudes, page 9, equation (2.6), (2.7), $\not{p}$ (in the Dirac equation (2.4)) is as follows: \begin{align} \not{p} = \left( \begin{matrix} 0 & p_{a\dot{b}} \\ p^{\dot{a}b} & 0 \end{matrix} \right), \end{align} where $p_{a\dot{b}} = \left( \begin{matrix} -p_0+p_3 & p_1-ip_2 \\ p_1+ip_2 & -p_0-p_3 \end{matrix} \right)$. But on the other hand, in the article, \begin{align} \not{p} = \left( \begin{matrix} E & \sigma \cdot \vec p \\ -\sigma \cdot \vec p & -E \end{matrix} \right), \end{align} where $E$ is the $2 \times 2$ identity matrix. Why these two expression of $\not{p}$ are different? Answer: In the first reference you are quoting the chiral basis with $$\gamma^0=\begin{pmatrix} 0 & \mathbf{1}_{2\times 2} \\ \mathbf{1}_{2\times 2} & 0 \end{pmatrix} $$ is used while the second reference uses the (standard) Dirac basis $$\gamma^0=\begin{pmatrix} \mathbf{1}_{2\times 2} & 0 \\ 0 & -\mathbf{1}_{2\times 2} \end{pmatrix} $$ see for example the corresponding Wikipedia article.
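One way to convince yourself that both expressions are legitimate is to check numerically that each basis satisfies the Clifford algebra $\{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu}$ (signature $(+,-,-,-)$ here; this check is my own addition, not from either reference):

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sigmas = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

# Chiral (Weyl) basis: gamma^0 is off-diagonal.
g_chiral = [np.block([[Z2, I2], [I2, Z2]])] + \
           [np.block([[Z2, s], [-s, Z2]]) for s in sigmas]
# Dirac (standard) basis: gamma^0 is diagonal.
g_dirac = [np.block([[I2, Z2], [Z2, -I2]])] + \
          [np.block([[Z2, s], [-s, Z2]]) for s in sigmas]

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def satisfies_clifford(g):
    return all(np.allclose(g[m] @ g[n] + g[n] @ g[m], 2 * eta[m, n] * np.eye(4))
               for m in range(4) for n in range(4))

print(satisfies_clifford(g_chiral), satisfies_clifford(g_dirac))
```

Since both sets satisfy the same algebra, they are related by a change of basis, and $\not{p}$ simply takes a different matrix form in each.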
{ "domain": "physics.stackexchange", "id": 78821, "tags": "momentum, conventions, dirac-matrices" }
Do two stars on exact opposite sides of a black hole pull on each other gravitationally?
Question: If so, how do their gravitational fields go through the black hole? Can information be sent this way (via gravitational waves)? Answer: Do two stars on exact opposite sides of a black hole pull on each other gravitationally? I assume you are imagining these stars to be outside the event horizon? The answer is yes. Let's say it's a black hole with a Schwarzschild radius of 10 km. Then at distances greater than 10 km, it acts exactly like any other object of the same mass. The motion of your entire system is exactly the same as if it was some other object of the same mass. If the distances are large compared to 10 km, then in addition, Newtonian gravity is a good approximation. If so, how do their gravitational fields go through the black hole? Physics doesn't work according to Newtonian instantaneous action at a distance. Forces don't reach out along a straight line through space. The gravitational fields in this system are determined by the Einstein field equations, which at distances large compared to 10 km are well approximated by Newtonian gravity. Can information be sent this way (via gravitational waves)? Sure. For example, if one of your stars has a planet, then the system will emit gravitational waves (probably very faint ones), and those waves will be detectable at the other star's position. They propagate around the black hole rather than through it. These waves and their propagation are a non-Newtonian effect, and the details of the wave pattern as it arrives at the other star will, I imagine, be different than if you had replaced the black hole with some other type of object.
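For scale, the 10 km example corresponds to a black hole of a few solar masses; a quick back-of-the-envelope computation (my own, with standard constants):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def mass_from_schwarzschild_radius(r_s):
    """Invert r_s = 2 G M / c^2."""
    return r_s * c**2 / (2 * G)

m = mass_from_schwarzschild_radius(10e3)  # 10 km horizon
print(m / M_SUN)  # roughly 3.4 solar masses
```

At star-to-star distances of millions of kilometres, this mass acts Newtonianly, exactly as the answer states.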
{ "domain": "physics.stackexchange", "id": 72821, "tags": "general-relativity, gravity, black-holes" }
Validity of Maxwellian distribution for interacting particles?
Question: I have read in a few (relatively credible) sources (e.g. Cambridge Tripos exam) that the Maxwell-Boltzmann speed distribution can be valid for interacting particles, but I have not been able to find a satisfactory proof/intuitive explanation of this, hence I seek advisement in two areas: 1) What is the intuitive explanation here? Is it possible to argue that the intermolecular forces are independent of velocity (and hence speed), maintaining the isotropy of the distributions? I have also read a brief comment (in another stackexchange post which for some reason I cannot find again) that this is only valid for short-range forces. What distinguishes the long-range forces from the short-range ones (assuming the argument I have mentioned is correct, since it would only be coordinate-dependent not velocity-dependent)? 2) Mathematically, how do we model the exponential term? What does the potential energy look like, would it be quadratic in $r_i$ or $v_i$ of the $i^{\text{th}}$ particle? Thanks! Answer: Intuitive explanations may be misleading and, more importantly, they heavily depend on prior knowledge of the subject matter. Without good training and a formal proof I would hardly believe that, even in a gas, the velocity distribution is a Maxwellian. Certainly, as you noticed, the fact that interaction potentials do not depend on velocities plays an important role in ensuring that interacting and non-interacting systems behave similarly. However, I would observe that the consequence of this fact for the velocity distribution is not trivially obvious. In particular, I do not see a really convincing way to prove the isotropy of the distribution without using formulae. Within statistical mechanics the result can be obtained in a particularly simple way in the canonical ensemble. 
There, the probability distribution in phase space can be directly written as the Boltzmann factor of the Hamiltonian $H$: $$ \rho( \{ {\bf r_i,p_i} \})= \frac{e^{-\beta H( \{ {\bf r_i,p_i} \})}}{Z} $$ which factorizes into a function of the positions times a function of the momenta as soon as the Hamiltonian can be written in the separated form $H( \{ {\bf r_i,p_i} \}) = K( \{ {\bf p_i} \})+P( \{ {\bf r_i} \})$. Since the one-particle velocity distribution can be obtained as a marginal distribution, by integrating $\rho$ over all coordinates and all momenta but one, it is clear, in this case, that the presence of a potential energy $P$ does not modify the velocity distribution. The Maxwell distribution is recovered for classical systems, since the kinetic energy term corresponding to the motion of the centers of mass of the molecules is a sum of quadratic terms. However, already for classical particles, the general validity of such a result can be established only for macroscopic systems. Indeed, for finite, small-size systems, this nice factorization does not hold in the microcanonical or in the grand canonical ensembles. Therefore the one-particle velocity distribution is not exactly a Maxwellian, even for a perfect gas. One needs to take the limit of very large systems to recover both the equivalence of results in different ensembles and the Maxwell distribution. It is worth noticing that since the simple result in the canonical ensemble depends only on the separation of the Hamiltonian into a (quadratic) kinetic and a potential energy term, its validity cannot depend on the range of the potential, provided that the thermodynamic limit exists.
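The factorization argument can be illustrated with a tiny Monte Carlo experiment (my own sketch, in units with $k_BT/m = 1$): sampling velocity components from the Gaussian momentum marginal reproduces the Maxwell mean speed $\sqrt{8/\pi}$, independently of whatever the potential term happens to be.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each Cartesian velocity component is Gaussian with variance kT/m = 1,
# regardless of the (velocity-independent) potential energy term.
v = rng.normal(0.0, 1.0, size=(200_000, 3))
speed = np.linalg.norm(v, axis=1)

mean_speed = speed.mean()
maxwell_mean = np.sqrt(8 / np.pi)   # analytic Maxwell mean speed
print(mean_speed, maxwell_mean)
```

The potential never enters because it integrates out of the momentum marginal.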
{ "domain": "physics.stackexchange", "id": 58399, "tags": "statistical-mechanics, kinetic-theory" }
Get list of running nodes
Question: Hi guys, is there a way to retrieve a list of all running nodes from within a node (rospy preferably)? I've already taken a look at rosgraph.masterapi. It seems appropriate but pretty complex. Is there an easier way than to dismantle the getSystemState call result? Thanks a lot! Cheers, Hendrik Originally posted by Hendrik Wiese on ROS Answers with karma: 1145 on 2014-05-14 Post score: 7 Answer: import rosnode rosnode.get_node_names() More detail in the rosnode API reference. Originally posted by Dan Lazewatsky with karma: 9115 on 2014-05-14 This answer was ACCEPTED on the original site Post score: 13 Original comments Comment by Hendrik Wiese on 2014-05-14: Ah, missing the forest for the trees... Thanks, pal! Comment by james P M on 2021-10-12: anything for c++
{ "domain": "robotics.stackexchange", "id": 17941, "tags": "ros, rosnode, rosgraph, rospy, nodes" }
How to prove that a particle or a body moves in a circular trajectory with respect to centre of mass?
Question: In some situations in mechanics, observing the motion of bodies with respect to the centre of mass often gives useful insights to visualize the situation and obtain many results. In some cases the motion of particles/rigid bodies with respect to the centre of mass (COM) is circular. I would like to illustrate with the help of an example: A small ball of mass $m$ is placed in a circular tube of mass $M$ and radius $R$ (cross-section radius of the tube << $R$) which is kept on a horizontal plane in gravity-free space. Friction is absent between tube and ball. The ball is given a velocity $v$. Then, the path of the ball with respect to the COM will be circular. Now, sometimes it becomes intuitively difficult to realize the nature of the path. I tried to assume velocities of the ball and tube at a general time $t$ and conserve momentum and kinetic energy of the system, but it did not prove the required result. I would like to know a general method which can deal with this example and other similar questions where we need to prove that the path of a body with respect to the COM is circular. Answer: A circle is the set of all points in a plane at a given non-zero distance from a reference point called the "centre". By making a diagram of the tube and ball at any given moment, it is clear that the centre of mass of the system is at a distance of $MR/(m+M)$ from the small ball, on the radius joining the centre of the loop and the small ball. Therefore at any given moment the ball is at a constant distance of $MR/(m+M)$ from the centre of mass of the system. Therefore it lies on a circle. This situation was relatively simple and the path of the small ball turned out to be circular wrt the COM. However the system is non-rigid, and one can't say for certain if the path of each point of the system is circular about the COM. However for rigid bodies, every point moves in a circle in the reference frame of the centre of mass, no matter how complex the motion is. 
(the system is non rigid because the small ball isn't fixed in position wrt the loop)
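The geometric step in the argument, that the ball is always a distance $MR/(m+M)$ from the system's centre of mass, is easy to verify numerically (a sketch with made-up masses):

```python
import numpy as np

m, M, R = 1.0, 4.0, 2.0   # example ball mass, tube mass, loop radius

# Wherever the loop centre happens to be, and wherever on the loop the ball
# sits, the ball-to-COM distance is M/(m+M) of the ball-to-centre distance R.
for centre in [np.array([0.0, 0.0]), np.array([3.0, -1.0])]:
    for theta in np.linspace(0.0, 2 * np.pi, 9):
        ball = centre + R * np.array([np.cos(theta), np.sin(theta)])
        com = (m * ball + M * centre) / (m + M)
        assert np.isclose(np.linalg.norm(ball - com), M * R / (m + M))
print("ball stays at distance", M * R / (m + M), "from the COM")
```

Since the COM is fixed in the COM frame, a constant ball-to-COM distance is exactly the statement that the path is a circle.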
{ "domain": "physics.stackexchange", "id": 69677, "tags": "newtonian-mechanics, reference-frames, inertial-frames" }
PrettyPrint a Binary Tree
Question: I was going through this tutorial for Pretty Printing a binary search tree and decided to implement my own version. Here is my implementation. This version works only for trees that are complete binary tree. I would like to know what all optimisations can be done (errors present can be removed) in the code. public class PrettyPrintTree { private List<Integer> listOfNodes; public TreeNode root; public PrettyPrintTree(List<Integer> list) { listOfNodes = list; root = createTree(listOfNodes); } public static class TreeNode { TreeNode left; TreeNode right; int value; public TreeNode(int value) { this.value = value; } } public static TreeNode createTree(List<Integer> list) { TreeNode root = null; TreeNode temp, temp2; for (Integer integer : list) { if (root == null) { root = new TreeNode(integer); root.left = null; root.right = null; continue; } temp = root; temp2 = root; while (temp != null) { temp2 = temp; temp = (temp.value < integer) ? temp.right : temp.left; } if (temp2.value < integer) { temp2.right = new TreeNode(integer); } else { temp2.left = new TreeNode(integer); } } return root; } private static int getMaximumHeight(TreeNode node) { if (node == null) return 0; int leftHeight = getMaximumHeight(node.left); int rightHeight = getMaximumHeight(node.right); return (leftHeight > rightHeight) ? leftHeight + 1 : rightHeight + 1; } private static String getStartingSpace(int height) { int noOfSpaces = ((int) Math.pow(2, height - 1)) / 2; StringBuilder startSpaceStringBuilder = new StringBuilder(); for (int i = 0; i < noOfSpaces; i++) { // No. 
of spaces added everytime is the width of every node value startSpaceStringBuilder.append(" "); } return startSpaceStringBuilder.toString(); } private static String getUnderScores(int height) { int noOfElementsToLeft = ((int) Math.pow(2, height) - 1) / 2; int noOfUnderScores = noOfElementsToLeft - ((int) Math.pow(2, height - 1) / 2); StringBuilder underScoreStringBuilder = new StringBuilder(); for (int i = 0; i < noOfUnderScores; i++) { // No. of underscores added everytime is the width of every node // value underScoreStringBuilder.append("__"); } return underScoreStringBuilder.toString(); } private static String getSpaceBetweenTwoNodes(int height) { int noOfNodesInSubTreeOfNode = ((int) Math.pow(2, height - 1)) / 2; /** Sum of spaces of the subtrees of nodes + the parent node */ int noOfSpacesBetweenTwoNodes = noOfNodesInSubTreeOfNode * 2 + 1; StringBuilder spaceBetweenNodesStringBuilder = new StringBuilder(); for (int i = 0; i < noOfSpacesBetweenTwoNodes; i++) { spaceBetweenNodesStringBuilder.append(" "); } return spaceBetweenNodesStringBuilder.toString(); } private static void printNodes(LinkedList<TreeNode> queueOfNodes, int noOfNodesAtCurrentHeight, int height) { StringBuilder nodesAtHeight = new StringBuilder(); String startSpace = getStartingSpace(height); String spaceBetweenTwoNodes = getSpaceBetweenTwoNodes(height); String underScore = getUnderScores(height); nodesAtHeight.append(startSpace); for (int i = 0; i < noOfNodesAtCurrentHeight; i++) { TreeNode node = (TreeNode) queueOfNodes.get(i); if (node == null) continue; queueOfNodes.add(node.left); queueOfNodes.add(node.right); nodesAtHeight.append(underScore); nodesAtHeight.append(String.format("%2d", node.value)); nodesAtHeight.append(underScore); nodesAtHeight.append(spaceBetweenTwoNodes); } nodesAtHeight.substring(0, nodesAtHeight.length() - spaceBetweenTwoNodes.length()); System.out.println(nodesAtHeight.toString()); } private static String getSpaceBetweenLeftRightBranch(int height) { int 
noOfNodesBetweenLeftRightBranch = ((int) Math.pow(2, height - 1) - 1); StringBuilder spaceBetweenLeftRightStringBuilder = new StringBuilder(); for (int i = 0; i < noOfNodesBetweenLeftRightBranch; i++) { spaceBetweenLeftRightStringBuilder.append(" "); } return spaceBetweenLeftRightStringBuilder.toString(); } private static String getSpaceBetweenRightLeftBranch(int height) { int noOfNodesBetweenLeftRightBranch = (int) Math.pow(2, height - 1); StringBuilder spaceBetweenLeftRightStringBuilder = new StringBuilder(); for (int i = 0; i < noOfNodesBetweenLeftRightBranch; i++) { spaceBetweenLeftRightStringBuilder.append(" "); } return spaceBetweenLeftRightStringBuilder.toString(); } private static void printBranches(LinkedList<TreeNode> queueOfNodes, int noOfNodesAtCurrentHeight, int height) { if (height <= 1) return; StringBuilder brachesAtHeight = new StringBuilder(); String startSpace = getStartingSpace(height); // startSpace.substring(0, startSpace.length()); String leftRightSpace = getSpaceBetweenLeftRightBranch(height); String rightLeftSpace = getSpaceBetweenRightLeftBranch(height); brachesAtHeight .append(startSpace.substring(0, startSpace.length() - 1)); for (int i = 0; i < noOfNodesAtCurrentHeight; i++) { brachesAtHeight.append("/").append(leftRightSpace).append("\\") .append(rightLeftSpace); } brachesAtHeight.substring(0, brachesAtHeight.length() - rightLeftSpace.length()); System.out.println(brachesAtHeight.toString()); } public static void prettyPrintTree(TreeNode root) { LinkedList<TreeNode> queueOfNodes = new LinkedList<>(); int height = getMaximumHeight(root); int level = 0; int noOfNodesAtCurrentHeight = 0; queueOfNodes.add(root); while (!queueOfNodes.isEmpty() && level < height) { noOfNodesAtCurrentHeight = ((int) Math.pow(2, level)); printNodes(queueOfNodes, noOfNodesAtCurrentHeight, height - level); printBranches(queueOfNodes, noOfNodesAtCurrentHeight, height - level); for (int i = 0; i < noOfNodesAtCurrentHeight; i++) queueOfNodes.remove(); level++; } } 
public static void main(String[] args) { PrettyPrintTree lcs = new PrettyPrintTree(Arrays.asList(30, 20, 40, 10, 25, 35, 50, 5, 15, 23, 28, 33, 38, 41, 55)); PrettyPrintTree.prettyPrintTree(lcs.root); } } Answer: The private field listOfNodes can be converted to a local variable, and actually, it can also be inlined in the constructor: public PrettyPrintTree(List<Integer> list) { root = createTree(list); } Whenever possible, you should declare variable types with interfaces instead of implementation, for example instead of this: private static void printNodes(LinkedList<TreeNode> queueOfNodes, int noOfNodesAtCurrentHeight, int height) { It would be better this way: private static void printNodes(List<TreeNode> queueOfNodes, int noOfNodesAtCurrentHeight, int height) { In the same method, you have this loop: for (int i = 0; i < noOfNodesAtCurrentHeight; i++) { TreeNode node = (TreeNode) queueOfNodes.get(i); The cast to TreeNode is unnecessary, as the type of queueOfNodes is guaranteed by the method signature, so you can simply write as: TreeNode node = queueOfNodes.get(i); In the printBranches method, the queueOfNodes parameter is unused, so you should remove it, changing the method signature to this: private static void printBranches(int noOfNodesAtCurrentHeight, int height) { You have this code duplicated in two methods: StringBuilder spaceBetweenLeftRightStringBuilder = new StringBuilder(); for (int i = 0; i < noOfNodesBetweenLeftRightBranch; i++) { spaceBetweenLeftRightStringBuilder.append(" "); } return spaceBetweenLeftRightStringBuilder.toString(); It would be better to move this logic to its own method and reuse it. Actually you have this kind of duplication in multiple other places as well, when building strings. You could reduce duplicated segments more aggressively by creating a parameterized string builder that concatenates a string \$N\$ times (the duplicated operation). 
for example: public static String multiplyString(String string, int times) { StringBuilder builder = new StringBuilder(string.length() * times); for (int i = 0; i < times; ++i) { builder.append(string); } return builder.toString(); } In the prettyPrintTree method, the variable initialization int noOfNodesAtCurrentHeight = 0; is unnecessary, as the variable is always reassigned inside the while loop. In fact it would be best to declare the variable inside the loop. Why the line break in the middle of the statement here: printBranches(noOfNodesAtCurrentHeight, height - level); The statement is short enough (69 characters) without the line break: printBranches(noOfNodesAtCurrentHeight, height - level);
{ "domain": "codereview.stackexchange", "id": 11758, "tags": "java, algorithm, tree, formatting" }
Applying yaw_offset and mag_declination in IMU driver
Question: Hello, I am writing an IMU driver (for an IMU that will be fused with GPS using robot_localization) and want to make sure I am applying yaw_offset and mag_declination the same way robot_localization does it so there is no need to set those parameters in robot_localization. The data from the IMU (called raw_imu) has been set to the right ROS convention but forward is North (therefore the need for yaw offset). Is the right way to do this to create a yaw rotation quaternion from the two parameters and then apply it to the raw IMU data, as follows? YAW_OFFSET = 1.570796327 MAG_DECL = -0.1455605 # at my location declination is 8° 17' E yaw_transform = tf.transformations.quaternion_from_euler(0, 0, YAW_OFFSET+MAG_DECL) imu_transformed = tf.transformations.quaternion_multiply(yaw_transform, raw_imu) Originally posted by boost on ROS Answers with karma: 7 on 2018-02-12 Post score: 0 Answer: Should be pretty easy to test using bpython: >>> from tf import transformations >>> raw = transformations.quaternion_from_euler(1.0, 1.0, 1.0) >>> offset = transformations.quaternion_from_euler(0.0, 0.0, 0.3) >>> print(transformations.euler_from_quaternion(transformations.quaternion_multiply(raw, offset ))) (1.1668867925697146, 0.7335759265998189, 1.2166525164145243) >>> print(transformations.euler_from_quaternion(transformations.quaternion_multiply(offset, raw ))) (1.0000000000000002, 0.9999999999999999, 1.3000000000000003) So the rotation you want is indeed yaw_transform * raw_imu. This is verified via Wikipedia. Originally posted by Tom Moore with karma: 13689 on 2018-02-19 This answer was ACCEPTED on the original site Post score: 1
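If tf is unavailable, the composition can be checked with plain Python as well. Below is a self-contained sketch (my own; quaternion order (x, y, z, w), matching tf's convention, and the offsets mirror the question's values; the raw heading 0.2 rad is made up):

```python
import math

def quat_from_yaw(yaw):
    # (x, y, z, w) for a rotation of `yaw` about the z axis
    return (0.0, 0.0, math.sin(yaw / 2), math.cos(yaw / 2))

def quat_multiply(q1, q0):
    # Hamilton product, same argument order as tf's quaternion_multiply
    x0, y0, z0, w0 = q0
    x1, y1, z1, w1 = q1
    return (w1 * x0 + x1 * w0 + y1 * z0 - z1 * y0,
            w1 * y0 - x1 * z0 + y1 * w0 + z1 * x0,
            w1 * z0 + x1 * y0 - y1 * x0 + z1 * w0,
            w1 * w0 - x1 * x0 - y1 * y0 - z1 * z0)

def yaw_from_quat(q):
    x, y, z, w = q
    return math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))

YAW_OFFSET, MAG_DECL = 1.570796327, -0.1455605
raw = quat_from_yaw(0.2)  # made-up raw IMU heading
corrected = yaw_from_quat(
    quat_multiply(quat_from_yaw(YAW_OFFSET + MAG_DECL), raw))
print(corrected)
```

For pure yaw rotations the two corrections simply add to the raw heading, which is what the bpython session above demonstrates for the general case.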
{ "domain": "robotics.stackexchange", "id": 30019, "tags": "imu, navigation, ros-kinetic, robot-localization" }
Why are waterspouts less powerful than tornadoes?
Question: It's often said that waterspouts are weaker than tornadoes, but after some quick searches on the internet, I haven't been able to figure out why that is. Two relevant sources that I've found: This claims that "A difference between the two is that a waterspout tends to be weaker. The force of friction is weaker over water thus there is less air available to be drawn into the circulation." — perhaps this is clear to someone with a stronger background in the sciences than me, but I'm not sure why a weaker force of friction would lead there to be more air (if anything it seems like weaker friction would allow for more air to flow?) This source matches my intuition and claims the opposite reason: "In general, waterspouts are not as strong as tornadoes, in spite of the large moisture source and the reduced friction." but then doesn't explain why tornadoes are stronger So what makes tornadoes stronger? Thanks in advance! Answer: Part of it is due to friction. You can see in this video of a landfalling waterspout the influence that enhanced friction has on tightening the circulation. So, like the ice skater analogy, enhancing the inflow tightens the circulation through conservation of angular momentum. Another thing to consider is that fair weather (non-tornadic) waterspouts are fairly common when it comes to waterspouts, whereas landspouts are uncommon in comparison, as mesocyclones generate preferentially over land. This means that tornadoes (not landspouts) have access to extra buoyancy over land (more convection = more severe storms = more violent tornadoes).
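The ice-skater step can be made quantitative: for a parcel of air spiralling inward, angular momentum $L = mvr$ about the axis is approximately conserved, so shrinking the radius by a factor of ten raises the tangential speed tenfold. A toy sketch with invented numbers:

```python
m = 1.0                # air parcel mass (arbitrary units)
r1, v1 = 100.0, 5.0    # tangential speed 5 m/s at 100 m from the axis
L = m * v1 * r1        # angular momentum, conserved as friction-enhanced
                       # inflow drags the parcel inward

r2 = 10.0              # the circulation tightens to 10 m
v2 = L / (m * r2)      # resulting tangential speed
print(v2)              # 50.0 m/s
```

This is why the stronger inflow over land (more friction) can spin the vortex up rather than slow it down.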
{ "domain": "earthscience.stackexchange", "id": 2322, "tags": "meteorology, water, tornado" }
Does a solvent with the following properties exist?
Question: I am looking for a solvent with these properties: It's aprotic. At 1 atm, it has a liquid range of at least 0 to 50 degrees Celsius. Water can dissolve in it, with a solubility of at least 10g/100g. It has a high flash point, preferably 100 degrees Celsius or higher. (Or better, be non-combustible.) Does a solvent with these properties exist? If it does not exist, what's the reason? Answer: There are many examples which clear the requirements. Here is a non-exhaustive list. I do not claim these data are absolutely accurate - you should cross-check everything if it matters to you. A couple of notes: Data for solubility of water in solvents are significantly sparser than for solvents in water. I have collected the latter values, but most of these are miscible with water in any proportion, and if not, they should still clear the requirement. Few organic solvents are non-flammable, and the ones that are typically have poor miscibility with water. To get high flash points, you'll need high-boiling solvents. I am not advertising Sigma-Aldrich/Merck, they simply have most of the examples and the necessary data. 
N,N′-Dimethylpropylene urea (DMPU) Melting point at 1 atm: -20 °C Boiling point at 1 atm: 246 °C Flash point: 121 °C Solubility in water at 25 °C: Fully miscible in any proportion Trimethyl phosphate Melting point at 1 atm: -46 °C Boiling point at 1 atm: 197 °C Flash point: 150 °C Solubility in water at 25 °C: 100 g solvent per 100 g water Triethyl phosphate Melting point at 1 atm: -56 °C Boiling point at 1 atm: 215 °C Flash point: 130 °C Solubility in water at 25 °C: 50 g solvent per 100 g water Dihydrolevoglucosenone (Cyrene™) Melting point at 1 atm: -18 °C Boiling point at 1 atm: 227 °C Flash point: 108 °C Solubility in water at 25 °C: Fully miscible in any proportion Triethylene glycol dimethyl ether (triglyme) Melting point at 1 atm: -45 °C Boiling point at 1 atm: 216 °C Flash point: 113 °C Solubility in water at 25 °C: "Very soluble" (likely fully miscible in any proportion) Some near misses of interest: Propylene carbonate Melting point at 1 atm: -55 °C Boiling point at 1 atm: 240 °C Flash point: 132 °C Solubility in water at 25 °C: 7.5 g water in 100 g solvent Note: Fully miscible with water in any proportion above 61 °C Hexamethylphosphoramide (HMPA) Melting point at 1 atm: 7 °C Boiling point at 1 atm: 232 °C Flash point: 144 °C Solubility in water at 25 °C: Fully miscible in any proportion NOTE: Carcinogenic gamma-Butyrolactone Melting point at 1 atm: -45 °C Boiling point at 1 atm: 204 °C Flash point: 98 °C Solubility in water at 25 °C: Fully miscible in any proportion N-Methyl-2-pyrrolidinone (NMP) Melting point at 1 atm: -24 °C Boiling point at 1 atm: 202 °C Flash point: 91 °C Solubility in water at 25 °C: Fully miscible in any proportion
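The selection criteria in the question are mechanical enough to check programmatically. Below is an illustrative Python sketch; the numeric values are simply transcribed from the answer above (not independently verified) and the boolean "water" column condenses the miscibility notes:

```python
# Candidate solvents from the answer, encoded as
# (name, mp in degC, bp in degC, flash point in degC, water-miscible or >= 10 g/100 g).
solvents = [
    ("DMPU",                -20, 246, 121, True),
    ("Trimethyl phosphate", -46, 197, 150, True),
    ("Triethyl phosphate",  -56, 215, 130, True),
    ("Cyrene",              -18, 227, 108, True),
    ("Triglyme",            -45, 216, 113, True),
    ("gamma-Butyrolactone", -45, 204,  98, True),  # near miss: flash point below 100
]

def meets_criteria(mp, bp, flash, water_ok):
    liquid_range_ok = mp <= 0 and bp >= 50  # liquid over at least 0-50 degC at 1 atm
    return liquid_range_ok and flash >= 100 and water_ok

hits = [name for name, mp, bp, fp, w in solvents if meets_criteria(mp, bp, fp, w)]
print(hits)
```

gamma-Butyrolactone drops out on the flash-point requirement, matching its "near miss" classification above.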
{ "domain": "chemistry.stackexchange", "id": 16774, "tags": "water, solubility, solvents" }
Number format in JavaScript
Question: What's the best way to format a number? If decimal is less than 2 digits, add zero to make it 2 decimals If more than 4, truncate to 4 decimals If 3 decimals, keep it to 3 decimals 14 => 14.00 14.1 => 14.10 14.123 => 14.123 14.12347 => 14.1234 I don't want rounding to happen if (number !== 0) { nParts = number.toString().split('.'); if (nParts[1]) { if (nParts[1].length > 4) { nParts = 4; } else if (nParts[1].length < 3) { nParts = 2; } else { nParts = 3; } } else { nParts = 2; } } number = number.toFixed(nParts); Help me improve on this. Answer: since you don't want rounding, treat as string rather than number... function strange(number){ var n=0; if (number !== 0) { nParts = number.toString().split(/\.|,/); if (nParts[1]){ n=nParts[1].length; n = n <=2 ? 2 : n==3 ? 3 : 4; nParts[1] = nParts[1] + '0' }else{ n=2; nParts[1]='00'; } } return nParts[0] + '.' + nParts[1].slice(0,n); } ;strange(14); ;strange(14.1); ;strange(14.123); ;strange(14.12347); /* 14 => 14.00 14.1 => 14.10 14.123 => 14.123 14.12347 => 14.1234 */
{ "domain": "codereview.stackexchange", "id": 5831, "tags": "javascript, formatting, floating-point" }
What lies at the center of a neutron star if any?
Question: What does the current theory tell us about what is going on at the core of a neutron star? I'm expecting a black hole; hopefully I won't be disappointed! Answer: Certainly not a black hole! That would not be a stable situation at all. The content of the core of a neutron star is the subject of much speculation. The possibilities fall into a number of categories. (i) An increasingly hard neutron equation of state, such that neutrons retain their identities as they are squeezed closer, but an increasingly repulsive many-body strong nuclear force provides support. (ii) Additional hadronic degrees of freedom, such that the neutrons (and protons) transform into other heavy hadrons such as lambda or sigma particles. (iii) Some sort of quark plasma. (iv) Boson condensations involving the neutrons decaying into pions or kaons with zero momentum. There are a number of diagnostics of these possibilities: primarily, the maximum possible mass of a neutron star should decrease from about 3 solar masses for (i) down to about 1.5 solar masses for (iv). Secure measurements of a 2 solar mass neutron star would seem to rule out (iv), but even that seems not completely agreed. Another diagnostic is how quickly neutron stars can cool. The presence of quark matter or boson condensations should lead to much more rapid cooling by neutrino emission. Again, nothing conclusive has emerged yet.
{ "domain": "astronomy.stackexchange", "id": 1303, "tags": "neutron-star" }
Can we use potassium hydroxide to decarboxylate potassium benzoate to benzene?
Question: I know the famous reaction: I was thinking of extending this reaction to other salts of benzoic acid, such as: $$\ce{Potassium Benzoate + KOH ->[CaO,Δ] Benzene + K2CO3}$$ Is this reaction actually carried out in the lab, or is there an unfavorable outcome, a different product, or an unfeasible reaction? (Compare Kolbe's reaction, where we obtain different products, basically isomers, for the Na-containing compound and for the K-containing compound.) Answer: The reaction is famously known as soda-lime decarboxylation because of its use of soda lime, which is a mixture of $\ce{CaO}$ (75%) and $\ce{NaOH}$ (3%). It also contains a trace amount of $\ce{KOH}$ (0.1%). You can, however, alter the composition so that $\ce{KOH}$ is in the majority while $\ce{NaOH}$ is present only in trace amounts, and the reaction will work in a similar manner, since the chemistry is not altered.
{ "domain": "chemistry.stackexchange", "id": 17139, "tags": "organic-chemistry, synthesis" }
Radio triangulation with Raspberry Pi
Question: Currently, I am thinking that since electromagnetic waves lose energy over distance, you might be able to figure out the relative distance to each of three radio "beacons" and use this information to triangulate the position of a Raspberry Pi robot with a radio sensor on-board. What kind of radio receiver may I need to use, and what information about the radio signal could I use to calculate the distance to the beacon? Amplitude, perhaps? The entire system would be running indoors as an Indoor Positioning System. In short, my question is what specific type of radio receiver would be adequate for using radio signal strength for triangulation. If amplitude contains the information of the strength of the signal, then I need a sensor that is sensitive enough to pick that up, and can provide amplitude readings in the first place. Answer: Unfortunately, there is too much interference for this to work properly, as Olin Lathrop said here on the Engineering SE: While your concept is theoretically correct, it doesn't work well in practice except for rough locations. The problem is interference. Radio waves get absorbed, reflected, diffracted, and in some cases even refracted. All those mean that there isn't just a single point source from the receiver's point of view. The various components of the signal bouncing around the environment can interfere constructively and destructively, making received amplitude only a very rough estimate of distance.
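To make the idea (and its limits) concrete, here is a hedged Python sketch using the log-distance path-loss model, a standard textbook model for RSSI-based ranging; the beacon positions, reference RSSI, and path-loss exponent below are made-up illustration values, and in a real indoor environment multipath noise corrupts the distance estimates, which is exactly the answer's objection:

```python
import math

def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model:
    RSSI(d) = RSSI(1 m) - 10 * n * log10(d).
    Both the reference RSSI and the exponent n must be calibrated
    per environment; the defaults here are illustrative."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(beacons, distances):
    """Linearized least-squares position fix from 3 beacons at known (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = distances
    # Subtracting the circle equations pairwise gives two linear equations A p = b.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, b) for b in beacons]
print(trilaterate(beacons, dists))  # recovers (3.0, 4.0) with exact distances
```

With noiseless distances the fix is exact; with realistic indoor RSSI readings, the inverted distances can easily be off by a factor of two or more, and the fix degrades accordingly.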
{ "domain": "robotics.stackexchange", "id": 1664, "tags": "raspberry-pi, precise-positioning" }
Shear Strain and no change in volume of the element
Question: Consider that at a particular point of the body the state of stress was that of a pure shear one, with $\tau_{xy}$ and $\gamma_{xy}$ as the shear stress and strain. I read two statements in two different books in this regard: In 1) it is stated that there will be no change in volume of the element. In 2) it is stated that the z face will be a rhombus. I'm thinking of these statements as corollaries of each other. So if I start off by saying that the volume of the element doesn't change, how does that let me conclude that the x, y and z dimensions won't change as well? OR If the x, y, z dimensions do not change, how can I conclude that the volume doesn't change? Answer: Let's say that a cube under horizontal shear $\tau$ deforms to a parallelogram as shown. As we see, a small triangular wedge with base $W = \text{shear strain}$ is subtracted from the left side, but the same volume is added to the right side. So under small-angle (linear) shear strain, the volume remains constant while the x and y coordinates change.
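The answer's geometric argument can be checked numerically: for a simple shear $x' = x + \gamma y$, $y' = y$, the deformation gradient has unit determinant, so area (and hence volume, with $z$ untouched) is preserved exactly, while edge lengths change only at second order in $\gamma$. A small Python sketch:

```python
import math

# Simple shear of the unit square: x' = x + g*y, y' = y (z unchanged).
g = 0.05  # engineering shear strain (small angle)

# Deformation gradient F and its determinant.
F = [[1.0, g],
     [0.0, 1.0]]
det_F = F[0][0] * F[1][1] - F[0][1] * F[1][0]
print(det_F)  # 1.0 for any g: the removed wedge equals the added wedge

# Edge lengths: the x-edge (1, 0) is unchanged; the y-edge (0, 1) maps to
# (g, 1), whose length sqrt(1 + g^2) differs from 1 only at second order in g.
print(math.sqrt(1 + g**2) - 1.0)  # approximately g^2 / 2, negligible for small strain
```

So "no volume change" and "dimensions unchanged" are both correct to first order in the strain, which is why the two books' statements are consistent in linear elasticity.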
{ "domain": "engineering.stackexchange", "id": 4818, "tags": "mechanical-engineering, structural-engineering, civil-engineering, stresses" }
Implementation of a Markov Chain
Question: I read about how Markov chains were handy at creating text generators and wanted to give it a try in Python. I'm not sure if this is the proper way to make a Markov chain. I've left comments in the code. Any feedback would be appreciated. import random def Markov(text_file): with open(text_file) as f: # provide a text-file to parse data = f.read() data = [i for i in data.split(' ') if i != ''] # create a list of all words data = [i.lower() for i in data if i.isalpha()] # i've been removing punctuation markov = {i:[] for i in data} # i create a dict with the words as keys and empty lists as values pos = 0 while pos < len(data) - 1: # add a word to the word-key's list if it immediately follows that word markov[data[pos]].append(data[pos+1]) pos += 1 new = {k:v for k,v in zip(range(len(markov)), [i for i in markov])} # create another dict for the seed to match up with length_sentence = random.randint(15, 20) # create a random length for a sentence stopping point seed = random.randint(0, len(new) - 1) # randomly pick a starting point sentence_data = [new[seed]] # use that word as the first word and starting point current_word = new[seed] while len(sentence_data) < length_sentence: next_index = random.randint(0, len(markov[current_word]) - 1) # randomly pick a word from the last words list. next_word = markov[current_word][next_index] sentence_data.append(next_word) current_word = next_word return ' '.join([i for i in sentence_data]) Answer: import random def Markov(text_file): Python convention is to name functions lowercase_with_underscores. I'd also probably have this function take a string as input rather than a filename. That way this function doesn't make assumptions about where the data is coming from with open(text_file) as f: # provide a text-file to parse data = f.read() data is a bit too generic. I'd call it text.
data = [i for i in data.split(' ') if i != ''] # create a list of all words data = [i.lower() for i in data if i.isalpha()] # i've been removing punctuation Since ''.isalpha() == False, you could easily combine these two lines markov = {i:[] for i in data} # i create a dict with the words as keys and empty lists as values pos = 0 while pos < len(data) - 1: # add a word to the word-key's list if it immediately follows that word markov[data[pos]].append(data[pos+1]) pos += 1 Whenever possible, avoid iterating over indexes. In this case I'd use for before, after in zip(data, data[1:]): markov[before].append(after) I think that's much clearer. new = {k:v for k,v in zip(range(len(markov)), [i for i in markov])} # create another dict for the seed to match up with [i for i in markov] can be written list(markov) and it produces a copy of the markov list. But there is no reason to make a copy here, so just pass markov directly. zip(range(len(x)), x) can be written as enumerate(x) {k:v for k,v in x} is the same as dict(x) So that whole line can be written as new = dict(enumerate(markov)) But that's a strange construct to build. Since you are indexing with numbers, it'd make more sense to have a list. An equivalent list would be new = markov.keys() Which gives you a list of the keys length_sentence = random.randint(15, 20) # create a random length for a sentence stopping point seed = random.randint(0, len(new) - 1) # randomly pick a starting point Python has a function random.randrange such that random.randrange(x) = random.randint(0, x - 1) It's good to use that when selecting from a range of indexes like this sentence_data = [new[seed]] # use that word as the first word and starting point current_word = new[seed] To select a random item from a list, use random.choice, so in this case I'd use current_word = random.choice(markov.keys()) while len(sentence_data) < length_sentence: Since you know how many iterations you'll need I'd use a for loop here.
next_index = random.randint(0, len(markov[current_word]) - 1) # randomly pick a word from the last words list. next_word = markov[current_word][next_index] Instead do next_word = random.choice(markov[current_word]) sentence_data.append(next_word) current_word = next_word return ' '.join([i for i in sentence_data]) Again, no reason to be doing this i for i dance. Just use ' '.join(sentence_data)
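Putting the review's suggestions together, the whole function might look like the sketch below (Python 3 is assumed, so `markov.keys()` is replaced by `list(markov)`; a guard for a word with no recorded followers, a case the original would crash on, is also added):

```python
import random

def markov_sentence(text):
    """Generate a sentence from `text` using a first-order Markov chain.
    Sketch combining the review's suggestions; Python 3 assumed."""
    words = [w.lower() for w in text.split() if w.isalpha()]
    markov = {w: [] for w in words}
    for before, after in zip(words, words[1:]):
        markov[before].append(after)

    current = random.choice(list(markov))
    sentence = [current]
    for _ in range(random.randint(15, 20) - 1):
        followers = markov[current]
        if not followers:  # dead end: the last word of the text has no follower
            break
        current = random.choice(followers)
        sentence.append(current)
    return ' '.join(sentence)

print(markov_sentence("the cat sat on the mat the cat ran"))
```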
{ "domain": "codereview.stackexchange", "id": 3470, "tags": "python, algorithm, markov-chain" }
C++ Not-So-Simple Entire HTML Generating Code v0.3
Question: As a follow-up to my code's v0.1 revision, we now introduce V0.3, with capability for storage of attributes and a struct for each HTML tag. Just start the struct, call its functions, change its variables then use <fstream> to make the complete HTML file using generateHTML()! #include <iostream> #include <fstream> #include <vector> struct HtmlTag { std::string tagname; std::string tagcontent; std::vector<std::string> tagattributes; std::vector<std::string> tagattrcontent; void addAttribute(std::string attrName, std::string attrCont) { tagattributes.push_back(attrName); tagattrcontent.push_back(attrCont); } int removeAttribute(int pos) { tagattributes.erase(tagattributes.begin() + pos); tagattrcontent.erase(tagattrcontent.begin() + pos); return tagattributes.size(); } std::string returnTag() { std::string attributes = ""; if (tagattributes.size() != tagattrcontent.size()) { return "ERR:UNMATCHINGLENGTH"; } unsigned short i; if(tagattributes.size() > 0 && tagattrcontent.size() > 0) { for(i = 0; i < tagattributes.size(); i++) { attributes = attributes + " " + tagattributes[i] + "=\"" + tagattrcontent[i] + "\""; } } else { return "<" + tagname + ">" + tagcontent + "</" + tagname + ">"; } return "<" + tagname + attributes + ">" + tagcontent + "</" + tagname + ">"; } }; std::string generateHTML(std::string doctype, std::string head, std::string body) { return "<!DOCTYPE " + doctype + " />\n\n<html>\n <head>\n " + head + "\n </head>\n <body>\n " + body + "\n </body>\n</html>"; } int main() { HtmlTag myHTML; myHTML.addAttribute("spam", "eggs"); myHTML.tagcontent = "bar"; myHTML.tagname = "foo"; unsigned int i2; for(i2 = 0; i2 < myHTML.tagattrcontent.size(); i2++) { std::cout << myHTML.tagattrcontent[i2] << ":" << myHTML.tagattributes[i2] << std::endl; } std::cout << myHTML.returnTag() << std::endl; std::cout << generateHTML("html", "<title>Hello World!</title>", myHTML.returnTag()) << std::endl; } Answer: I see some things that may help you improve your code. 
Prefer class to struct The way it's currently defined, any code can modify the contents of the structure. Better would be to make the data members private and provide necessary accessors to assure that the class is always complete and coherent. Don't use parallel structures Right now, the tag attributes and tag contents are stored in parallel vectors. The only association between an attribute and its value is that they have the same position within both vectors. Instead, use a std::pair and create a vector of those. That way, each attribute is a single entity. We can replace the two vectors with this one: std::vector<std::pair<std::string,std::string>> attributes; Use "range-for" to simplify code C++11 and newer allow the use of "range-for" which can really simplify code. For instance, the current code has this loop: unsigned int i2; for(i2 = 0; i2 < myHTML.tagattrcontent.size(); i2++) { std::cout << myHTML.tagattrcontent[i2] << ":" << myHTML.tagattributes[i2] << std::endl; } If we follow the previous point and have a single attributes vector instead, we can rewrite this as: for (const auto &attr : myHTML.attributes) { std::cout << attr.first << ":" << attr.second << std::endl; } Use const where practical In the returnTag routine, the underlying HtmlTag is not altered. Make this explicit by declaring that method const: std::string returnTag() const { Think of the user As a user of this code, I think I'd prefer to remove an attribute by name rather than by position. It's also not clear that returning the new count of attributes would be useful if the caller already has to keep track of the indexing. Provide a constructor It seems to me that it would be nice to be able to write code like this: HtmlTag myHTML{"foo","bar",{{"spam","eggs"}}}; We can do that by providing the appropriate constructor.
In this case, it's actually quite simple: HtmlTag(std::string name, std::string content, std::vector<std::pair<std::string,std::string>> attr) : tagname{name}, tagcontent{content}, attributes{attr} {} Note that I'm assuming the attributes are now pairs as previously suggested. Use your own code Why doesn't the generateHTML code use any HtmlTags? Here's a way it could be rewritten to do so: std::string generateHTML(std::string doctype, std::string headtext, std::string bodytext) { HtmlTag body{"body",bodytext}; HtmlTag head{"head",headtext}; HtmlTag html{"html",head.returnTag()+body.returnTag()}; return "<!DOCTYPE " + doctype + " />\n\n" + html.returnTag(); } If that's not the way you'd like to write the code, it may suggest some improvements that might be made to the HtmlTag class.
{ "domain": "codereview.stackexchange", "id": 17789, "tags": "c++, html" }
Is there an algorithm to achieve optimal compression in a "streamed" manner, assuming equal probability of each possibility?
Question: (Sorry for the question title; edits are welcome.) Let's say that you have a set of data made of repeating units, consisting of a value with $2$ possibilities, a value with $3$ possibilities, $5$ possibilities, $4$ possibilities, repeat. The total number of possibilities per unit is $2\times3\times5\times4=120\text{ possibilities}$. Assuming that these are all equally probable, that's $\log_2120\approx6.9\text{ bits}$. This means that the minimum space required to represent $1$ unit is $1\text{ Byte}$, but the minimum space required to represent $1000$ units is $864\text{B}$. The naïve way of producing this compression would be to put them all into a base-120 number, then convert this to binary. However, this requires having access to all of the data at once in order to decompress it. There must[citation needed] be some algorithm that can take in a stream of compressed data, output a stream of decompressed data, and only use finite memory, but I can't find one. Please describe and explain such an algorithm, if one exists. Answer: However, this requires having access to all of the data at once in order to decompress it. Not true. The trick is to compress backwards, from the last symbol, to the first. You do need all the data at once to compress though. However, what you are looking for can be done with arithmetic coding, without even needing access to all of the data at once during compression.
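The arithmetic in the question checks out, as a quick Python sanity check shows:

```python
import math

radices = [2, 3, 5, 4]  # possibilities for each value in one unit
bits_per_unit = sum(math.log2(r) for r in radices)
print(bits_per_unit)  # log2(120), about 6.907 bits

# Rounding every unit up to whole bytes costs 1 byte per unit, but coding a
# block of 1000 units jointly needs only ceil(1000 * log2(120) / 8) bytes.
print(math.ceil(1000 * bits_per_unit / 8))  # 864 bytes, as in the question
```

This gap between per-unit rounding (1000 B) and block coding (864 B) is exactly what arithmetic coding recovers while still emitting output incrementally.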
{ "domain": "cs.stackexchange", "id": 12959, "tags": "algorithms, data-compression, entropy" }
Can non-free forces change the rest mass?
Question: While reading Hobson et al.'s "General Relativity: An Introduction for Physicists", I came across a somewhat confusing derivation. Multiplying the 4-force and 4-velocity, the following derivation can be made $ \boldsymbol{u} \cdot \boldsymbol{f} = \boldsymbol{u} \cdot {d\boldsymbol{p} \over d\tau} = \boldsymbol{u} \cdot ({dm_0 \over d\tau}\boldsymbol{u} + m_0{d\boldsymbol{u} \over d\tau}) = c^2 {dm_0 \over d\tau} + m_0 \boldsymbol{u} \cdot {d\boldsymbol{u} \over d\tau} = c^2 {dm_0 \over d\tau} $ After this derivation, the authors make the following conclusion: where we have (twice) used the fact that $\boldsymbol{u} \cdot \boldsymbol{u} = c^2$. Thus, we see that in special relativity the action of a force can alter the rest mass of a particle! A force that preserves the rest mass is called a pure force and must satisfy $\boldsymbol{u} \cdot \boldsymbol{f} = 0$. But I have the following objections and questions about this derivation: The rest mass is by definition a constant, so it should have been considered a constant while differentiating. If we go back to Newton's second law, which is still valid under the special theory of relativity (though with some correction), the mass is the resistance of a body to changes in velocity, i.e. the larger the mass is, the stronger the force we need to change its velocity. But a non-pure force seems to contradict this basic concept when $dm_0 \over d\tau$ is negative, because this means that the force is reducing the resistance of the body towards the force. As a funny comparison, imagine that the harder you push a heavy box, the lighter it becomes (which is obviously not the case even in Newtonian mechanics, not to mention that special relativity predicts the opposite, i.e. the faster the body is, the harder it becomes to increase its velocity)!!
Unless the mass is being converted to energy or transferred somewhere else (which is not implied by the derivation, as it follows straightforwardly from the force equation without depending on any other equation), where is the mass going?! Isn't this contradictory to the law of conservation of mass and energy? If we assumed in this derivation that the rest mass is variable, why didn't we do so in many other derivations in the special theory of relativity? Do we have examples of such forces anyway? :-) Answer: I second the suggestion of @ChrisGerig in the comment above about reading the wiki article. This is the relevant paragraph: If the system consists of more than one particle, the particles may be moving relative to each other in the center of momentum frame, and they will generally interact through one or more of the fundamental forces. The kinetic energy of the particles and the potential energy of the force fields increase the total energy above the sum of the particle rest masses, and contribute to the invariant mass of the system. The sum of the particle kinetic energies as calculated by an observer is smallest in the center of momentum frame (or rest frame if the system is bound). What they call a particle is not an elementary particle, i.e. one which is point-like and whose invariant mass is constant in all frames. Once there is a system of particles, even two photons, their invariant mass is variable.
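As a concrete example of a pure force (not taken from the book excerpt, but standard): the Lorentz 4-force satisfies $\boldsymbol{u}\cdot\boldsymbol{f}=0$ automatically, because the electromagnetic field tensor is antisymmetric:

```latex
f^{\mu} = q\,F^{\mu\nu} u_{\nu}
\quad\Longrightarrow\quad
u_{\mu} f^{\mu} = q\,F^{\mu\nu} u_{\mu} u_{\nu} = 0,
\qquad\text{since } F^{\mu\nu} = -F^{\nu\mu}
\text{ while } u_{\mu} u_{\nu} \text{ is symmetric.}
```

By the identity in the question, $\boldsymbol{u}\cdot\boldsymbol{f} = c^2\,dm_0/d\tau$, so $dm_0/d\tau = 0$: the Lorentz force preserves rest mass. A non-pure "force" would be one that also models energy exchange with the body's internal state, e.g. heating, where the absorbed energy shows up as increased rest mass.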
{ "domain": "physics.stackexchange", "id": 2169, "tags": "special-relativity, forces, mass, mass-energy" }
Is there a well-defined notion of an “R/poly” complexity class?
Question: This would be the complexity class of all problems that are decidable in finite time with a polynomial length advice string that can be arbitrarily hard to compute. But potentially undecidable without this advice string. I think you might be able to just iterate through all possible advice strings, but it could be undecidable if a given advice string is the correct one, as a TM could act unpredictably and undecidably with the wrong one. Also, I presume that if R/poly is well defined and distinct, we could also define an “R/subexp” class. And, more broadly, the space complexity of the advice string could perhaps be a good measure of “how undecidable” an undecidable problem is. If R/poly is well defined and distinct, what is known about it if anything? Does it contain RE? Answer: There is nothing stopping you from defining the class, though I don’t recall seeing it studied. Actually, I can see two reasonable definitions for this class. The first one, which follows more literally the notation, is that $L\in\mathrm{R/poly}$ iff there is a recursive predicate $P(x,y)$ and an advice function $a\colon\mathbb N\to\{0,1\}^*$ such that $|a(n)|\le n^{O(1)}$ and $x\in L\iff P(x,a(|x|))$. I can’t say anything about this class (I don’t even know if it includes RE). The second possibility is that $L\in\mathrm{R/poly}$ iff there is a TM $M(x,y)$ and an advice function $a\colon\mathbb N\to\{0,1\}^*$ such that $|a(n)|\le n^{O(1)}$, and for each $x$, $M(x,a(|x|))$ halts and decides whether $x\in L$. (However, $M(x,y)$ may not necessarily halt for other choices of $y$.) Strictly speaking, this should be called $\mathrm{RE/poly\cap coRE/poly}$, I guess; but anyway, this class seems to be more robust, and I can say something about it. Under the second definition, R/poly includes RE; moreover, it includes all languages computable with polynomially many RE oracle queries, and languages computable with exponentially many parallel RE oracle queries. 
To see this, let $L\in\mathrm{RE}$, and let $M$ be a Turing machine that accepts $L$. We define the advice to be $a(n)=\#L_n$ (written in binary so that it has $O(n)$ bits), where $L_n=L\cap\Sigma^n$. Given an input $x$ of length $n$, and knowing $a(n)$, we can compute $L_n$ (and then check whether $x\in L_n$) by running in parallel (using dovetailing) $M$ on all inputs $w$ of length $n$, until $a(n)$ many of the instances halt and accept; then we know that the remaining instances cannot accept, and we can stop the search. If $L$ is itself not in RE, but it is computable with an RE oracle $L'$ to which it only makes queries of length bounded by a polynomial $p(n)$, let the advice be $\sum_{m\le p(n)}\#L'_m$, using a similar argument. More generally, this shows that R/poly is closed under polynomially bounded Turing reductions. If $L$ is computable by a TM $M$ with polynomially many queries to an RE oracle, which we may assume w.l.o.g. to be the halting problem $H$, then $L$ is also computable with polynomially bounded queries to $H$, hence it is in R/poly by the previous paragraph: we successively determine answers to the oracle queries by asking $H$ polynomially many questions of the form “what is the answer to the $i$th oracle query made by $M$ on input $x$, assuming the previous oracle queries were answered by $a_0,\dots,a_{i-1}\in\{0,1\}$” (this can be expressed as a polynomially long query to $H$). A similar argument applies if $L$ is computable with exponentially many parallel (= non-adaptive) oracle queries to $H$. To place an upper bound on R/poly, for any $L\in\mathrm{R/poly}$, the Kolmogorov complexity of $L_n$ (as a $2^n$-bit string) is $n^{O(1)}$: to compute $L_n$, we only need to specify $n$, $a(n)$, and a (constant-size) description of the TM $M$ from the definition. In contrast, a random string has Kolmogorov complexity about $2^n$. Thus, most languages are not in R/poly, or even in R/subexp. 
This also shows that R/poly does not contain $\Delta_2$, or even $\mathrm{EXP}^H$, as we can compute in this class the lexicographically first string of length $2^n$ and Kolmogorov complexity $\ge2^n$ by binary search (using queries expressing the RE predicate “every $2^n$-bit string extending $a_0\dots a_{i-1}$ is computed by some program of length $<2^n$”). Furthermore, a generalization of the Kolmogorov complexity argument shows that there is a strict hierarchy: if $\alpha,\beta\colon\mathbb N\to\mathbb N$ are functions such that $\alpha(n)\le\beta(n)\le2^n$ and $\beta(n)-\alpha(n)\ge2\log n$ or so, then $\mathrm R/\alpha(n)\subsetneq\mathrm R/\beta(n)$, as all languages $L\in\mathrm R/\alpha(n)$ have Kolmogorov complexity $K(L_n)\le\alpha(n)+O(\log n)$, whereas $\mathrm R/\beta(n)$ contains a language $L$ with $K(L_n)\ge\beta(n)$. Come to think of it, Kolmogorov complexity provides an exact characterization of R/poly: $$L\in\mathrm{R/poly}\iff\exists\text{ a polynomial $p$ }\forall n\in\mathbb N\:K(L_n)\le p(n).$$ We have already seen the left-to-right implication; for the converse, we can use as advice $a(n)$ a description of an algorithm that computes $L_n$.
{ "domain": "cstheory.stackexchange", "id": 5747, "tags": "complexity-classes, computability, decidability, advice-and-nonuniformity" }
Upstream Repository vs Release Repository
Question: I could not figure this out from the tutorials (http://wiki.ros.org/bloom/Tutorials/ReleaseCatkinPackage & http://wiki.ros.org/bloom/Tutorials/FirstTimeRelease), and I wanted to clarify this: if I commit something to the upstream repository, does this also update the release repository? Or do I have to release a new version each time I make a change? Originally posted by atp on ROS Answers with karma: 529 on 2015-05-16 Post score: 0 Answer: if I commit something to the upstream repository, does this also update the release repository? No. The upstream and release repository are separate repositories that have no direct (physical) relationship. do I have to release a new version each time I make a change? Well, no. You don't have to. It depends on what you find 'release worthy'. Releases typically group a bunch of changes. I guess you could say that releases are somewhat of a convenience for your users: if every change was followed by an immediate release, your users could just as well clone your development repository (I'm obviously ignoring things like packaging and installation, compatibility guarantees, etc). Originally posted by gvdhoorn with karma: 86574 on 2015-05-16 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by atp on 2015-05-16: So, that means that the release repository is only updated when you do a bloom-release, right? Comment by gvdhoorn on 2015-05-16: yes. Obviously you can also manually modify your release repository, but in typical ROS-ified interaction, only bloom and its scripts do something with it.
{ "domain": "robotics.stackexchange", "id": 21708, "tags": "bloom-release" }
Roman numerals to Arabic
Question: I've been learning to code for quite some time and my impression from the programming book I have is: "Keep it as simple as possible, but no simpler." I had an assignment to write a program that converts roman numerals to ints and vice versa, and I've been wondering how bad is the code that I worked on. Is it as terrible as I think it is? Is it because I'm a beginner? The code I wrote works fine, but I would very much appreciate it if you guys could show me a better and shorter way of reaching the same goal with simpler code (something that is definitely possible). #include "Header.h" class Roman { public: string roman = ""; int number; int as_int() { return number; } string int_to_roman(int number); void think(const char& a, const char& b, const char& c, int& divided, const int & dividee); //meant to ensure the roman permissible void roman_to_int(Roman a); }; void Roman::think(const char& a, const char& b, const char& c, int& divided, const int & dividee) { string name = ""; int number = divided; number /= dividee; if (number < 1) return; if (number == 1) name += a; if (number == 2) name = name+a+a; if (number == 3) name = name+a+a+a; if (number == 4) name = name + a+b; if (number == 5) name = b; if (number == 6) name = name + b+a; if(number==7) name = name + b+a+a; if(number==8) name = name + b+a+a+a; if(number==9) name = name + a+c; if(number== 10) name += c; int z = divided; divided = (z - (number * dividee)); roman += name; } string Roman::int_to_roman(int number) { if (number > 4999) error("Doesn't support numbers higher than 5000"); think('C', 'D', 'M', number, 100); think('X', 'L', 'C', number, 10); think('I', 'V', 'X', number, 1); return roman; } void Roman::roman_to_int(Roman a){ } ostream& operator << (ostream& os, Roman a) { return os << a.roman; } int test(char ch, int how_much) { if (cin.get() == ch) return how_much; cin.unget(); return 0; } void dis(char const& a, char const& b, int& sum, int times, string& numeral) { sum += (1*times); int 
number = sum; sum += test(a, (3*times)); if (number != sum) { numeral += a; return; } sum += test(b, (8*times)); if (number != sum) numeral += b; } int get_number(Roman &a) { int sum = 0; string numeral = ""; while (cin.get()!='\n') { cin.unget(); char ch = cin.get(); numeral += ch; switch (ch) { case'I': dis('V', 'X', sum, 1, numeral); break; case'V': sum += 5; break; case'X': dis('L', 'C', sum, 10, numeral); break; case'L': sum += 50; break; case'C': dis('D', 'M', sum, 100, numeral); break; case'D': sum += 500; break; case'M': sum += 1000; break; } } if (numeral != a.int_to_roman(sum)) error("Impermissible roman, try again.\n"); return sum; } int main() { while (true) { try { while (true) { Roman a; cout << "Please write a roman numeral and press enter.\n"; a.number = get_number(a); cout << a.number << ", " << a << endl; } system("pause"); return 0; } catch (exception& e) { cerr << "error: \n" << e.what() << '\n '; //return 1; // 1 - indicates failure } } } As a neophyte programmer it would really mean a lot to me to know if I'm doing things well enough or whether I can do any better regarding the code's brevity and efficiency. Edit: To the requests of some people, this is what Header.h contains: #include <algorithm> #include <cmath> #include <vector> #include <iostream> #include <sstream> #include <fstream> #include <string> using namespace std; void error(const string& a) { throw runtime_error(a); } Answer: There's lots of places to improve. You'll grow accustomed to good coding habits with time. if (number > 4999) error("Doesn't support numbers higher than 5000"); Use {} curly braces. Always. Even when there's just one statement. It prevents a whole class of bugs during later maintenance, and it's easy to do. Also, bug: the boolean condition is off-by-one from the English diagnostic. think('C', 'D', 'M', number, 100); think('X', 'L', 'C', number, 10); think('I', 'V', 'X', number, 1); First, kudos for representing the problem domain reasonably compactly. 
But the repetition of 'C' and 'X' is weird. Consider replacing this with a lookup table that maps letter to value. void Roman::think(const char& a, const char& b, const char& c, int& divided, const int & dividee) { Honestly, that's just a horrible signature. The a, b, c names are meaningless. And the next two names ignore the convention followed by math text books: https://en.wikipedia.org/wiki/Division_(mathematics) the dividend is divided by the divisor to get a quotient. int number = divided; Yes, we know it's a number, the int declaration told us that. But what is its meaning? (Hint, the name starts with "q".) Please strive to use meaningful identifiers. And merge that 2nd /= line into the 1st line. if (number == 1) name += a; if (number == 2) name = name+a+a; if (number == 3) name = name+a+a+a; These statements are quite similar. (In python this would simply be a * number.) Consider expressing the notion with a loop: if (number <= 3) { for (int i = 0; i < number; i++) { name += a; } } At the end you have: if(number== 10) This is probably not a case you should even be worrying about, as it suggests the block of code is trying to solve two problems (like dealing with 'C' and 'M') rather than solving one problem well. If you adjust how your code tackles the problem, you can guarantee that number will be within a certain range by the time it is seen by a given line of code. We call this an invariant, and it is a powerful technique for reasoning about correctness of your code. int z = divided; Introducing z is a bit bizarre, compared to the straightforward approach of incorporating divided in the RHS (right hand side) of the following assignment. Definitely drop the outer parentheses, which do nothing. Consider dropping the inner parentheses as well, unless you're concerned the reader might not be familiar with C's order of operations, which will do a multiply before a subtraction. roman += name; Clearly this works, side-effecting a member variable. 
But consider including roman in your method signature, to emphasize the return value to the reader. int as_int() { return number; } This works, but stylistically it's weird that you grouped this implementation in the middle of a bunch of declarations. Roman a; ... a.number = get_number(a); That is an odd API, which superficially looks like OOP yet you're essentially writing procedural Fortran code in C++. A more usual API would be to send the input into a's constructor, which is responsible for putting the object into a consistent state, perhaps one that has the number already translated. For an outside caller to reach into the object and mutate its .number attribute is definitely not good OOP style. There's more to say, but that's enough. It is good that you are working through exercises, since that is how you will learn. It is very good that you solicit criticism of your code, and that you want to improve what you write in future. Keep at it, take pride in your work, and your writing should progress nicely.
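The lookup-table suggestion above can be sketched in Python (the review itself hints at this register with its `a * number` remark); the table below, its ordering, and the function name are illustrative, not part of the original C++ code:

```python
# value/symbol pairs ordered from largest to smallest; the two-character
# entries (CM, XC, IX, ...) encode the subtractive forms directly
PAIRS = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
         (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
         (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]

def int_to_roman(n):
    out = []
    for value, symbol in PAIRS:
        count, n = divmod(n, value)  # how many copies of this symbol fit
        out.append(symbol * count)
    return ''.join(out)
```

With a table like this there is no need for ten separate `if` branches per digit, and the same table can be walked in the other direction to parse a numeral back into an integer.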
{ "domain": "codereview.stackexchange", "id": 28965, "tags": "c++, beginner, number-systems" }
Prove that this family of hash functions is $3$-wise independent, but not $4$-wise independent
Question: Consider the hash function mapping $w$-bit keys to hash values in $\{0,...,m-1\}$. Suppose $w=cr$. Interpret a $w$-bit key $x$ as a vector $(x_1,...,x_c)$ of $c$ $r$-bit keys. Consider the hash family: $$H = \{h_{T_1,...,T_c}:T_i \in \{0,...,m-1\}^{2^r}\}$$ where $$h_{T_1,...,T_c}(x) = \sum\limits_{1\le i \le c}T_i[x_i]\mod m$$ Prove that $H$ is $3$-wise independent, but not $4$-wise independent. $H$ is $k$-wise independent if for distinct inputs $x_1,...,x_k$ and outputs $v_1,...,v_k$, $\Pr[h(x_1) = v_1 \wedge ... \wedge h(x_k)=v_k] = \frac{1}{m^k}$ So I can certainly see why $H$ is $1$-wise and $2$-wise independent if we're choosing inputs to the hash function that must be different from each other. However, I'm having a lot of trouble seeing why the family is not $4$-wise independent, which I think contributes greatly to my difficulty in proving that the family is $3$-wise independent. So for the family to not be $4$-wise independent, we need inputs $x_1,...,x_4$ and outputs $v_1,...,v_4$ with $\Pr[h(x_1)=v_1\wedge...\wedge h(x_4)=v_4]\not = \frac{1}{m^4}$. I am trying to think of a counterexample where this will not be the case, but it seems like we need, for example, $x_4$ to be constructed in such a way that $v_4$ can be attained by some combination of $v_1,v_2,v_3$. However, all of $x_1,...,x_4$ are $w$-bit keys, so even if $x_4$ had bits taken from each of $x_1,x_2,x_3$, the output $v_4$ would still be some random sum corresponding to the $c$ $r$-bit blocks accessing each $T_i$. Is there something I'm missing here? Perhaps it's not possible for $v_4$ to be some combination of $v_1,...,v_3$, but how else would we show that $H$ is not $4$-wise independent? Answer: Hint for refuting 4-wise independence: Suppose for simplicity that $c = 2$, pick some $\alpha \neq \beta$ of length $r$, and consider the four inputs $(\alpha,\alpha),(\alpha,\beta),(\beta,\alpha),(\beta,\beta)$. Hint for proving 3-wise independence: Let the three inputs be $x,y,z$.
If there is some coordinate $i$ such that $x_i,y_i,z_i$ are all different then we are done. Otherwise, without loss of generality there are coordinates $i,j$ such that $x_i = y_i \neq z_i$ and $x_j = z_j \neq y_j$, and we are again done.
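The 4-wise hint can also be checked numerically: for every choice of tables, the four outputs satisfy a fixed linear relation mod $m$, so they cannot be jointly uniform. A small sketch (the parameters r=2, m=5 and the helper names are arbitrary choices, not from the original question):

```python
import random

def make_hash(c, r, m, rng):
    # one random table of size 2**r per coordinate, as in the family H
    tables = [[rng.randrange(m) for _ in range(2 ** r)] for _ in range(c)]
    def h(x):  # x is a tuple of c r-bit coordinates
        return sum(t[xi] for t, xi in zip(tables, x)) % m
    return h

rng = random.Random(0)
m, r = 5, 2
alpha, beta = 0, 1
for _ in range(1000):
    h = make_hash(2, r, m, rng)
    v = [h((alpha, alpha)), h((alpha, beta)), h((beta, alpha)), h((beta, beta))]
    # h(a,a) + h(b,b) = T1[a] + T2[a] + T1[b] + T2[b] = h(a,b) + h(b,a),
    # so this relation holds for every member of the family
    assert (v[0] + v[3]) % m == (v[1] + v[2]) % m
```

Since four jointly uniform values would violate this relation with positive probability, $H$ cannot be 4-wise independent.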
{ "domain": "cs.stackexchange", "id": 3723, "tags": "algorithms, algorithm-analysis, data-structures, probability-theory, hash-tables" }
Why is it OH- and not HO-?
Question: I am told that in a chemical equation the metal comes first and then the non-metal, for example MgO, ZnSO4, etc. But when both the elements are non-metals or metals, the one with the lower atomic number comes first, for example H2O, CO2, HF, etc. But why is it that hydroxide is OH- and not HO-? Also, why is methane CH4 and not H4C? Answer: OH- and HO- are both acceptable. The charge is on the oxygen, of course. One can argue cation then anion (salts) versus alphabetical (e.g., AuBe, which is brittle and weak rather than ductile, suggesting ionicity). Organic formulas are C, H, then alphabetical. Order may be altered to conform to structure (e.g., propane as $\ce{H3CCH2CH3}$) or convenience $\ce{H3C^{-}Li^{+}}$ (which is nowhere near the real structure of methyllithium, but is how it reacts).
{ "domain": "chemistry.stackexchange", "id": 1061, "tags": "nomenclature" }
Retrieve a number in a webpage and store in a SQLite3 db
Question: I'm beginning Python. I wrote a program which is supposed to get the number of connected people on a forum (like this one: http://www.jeuxvideo.com/forums/0-51-0-1-0-1-0-blabla-18-25-ans.htm) and store the number with the datetime in a database file (SQLite3). Every forum has its own table name. My code is supposed to do this: Create an object for each forum we want to retrieve with the Forum class. Store these objects in a list to use in a for loop. Get a web page .htm (with requests) where the number of connected people is written in a span tag with the class "nb-connect-fofo", which looks like this: <span class="nb-connect-fofo">1799 connecté(s)</span>. I'm using BeautifulSoup to get the string and a REGEX to get the number. This is done for every forum. Execute a SQLite3 request to store the datetime and number in the database file, in the table with the same name as the forum which is retrieved. Here's my code: #!/usr/bin/python3 from bs4 import BeautifulSoup from time import sleep import sqlite3 import datetime import requests import re class Forum: def __init__(self, forum, url_forum): #initialize each object with its name and URL self.forum = forum self.url_forum = url_forum pattern = '([0-9]{1,5})' self.pattern = re.compile(pattern) def add_to_database(self): #Add to the SQLite3 database the number of connected people and the datetime to their own table connection = sqlite3.connect("database.db") c = connection.cursor() now = datetime.datetime.today() nb_co = self.recup_co() text = "INSERT INTO {0}(datetime, nb_co) VALUES('{1}', '{2}')".format(self.forum, now, nb_co) c.execute(text) connection.commit() connection.close() print(now, self.forum, str(nb_co)) sleep(1) def recup_co(self): #Retrieve the page and the number of people connected by using the REGEX r = requests.get(self.url_forum) page_html = str(r.text) page = BeautifulSoup(page_html, 'html.parser') resultat = page.select(".nb-connect-fofo") nb_co = re.search(self.pattern, str(resultat)) return
nb_co.group(0) def main(): # All forums which are scanned are here dixhuit_vingtcinq = Forum("dixhuit_vingtcinq", "http://www.jeuxvideo.com/forums/0-51-0-1-0-1-0-blabla-18-25-ans.htm") moins_quinze = Forum("moins_quinze", "http://www.jeuxvideo.com/forums/0-15-0-1-0-1-0-blabla-moins-de-15-ans.htm") quinze_dixhuit = Forum("quinze_dixhuit", "http://www.jeuxvideo.com/forums/0-50-0-1-0-1-0-blabla-15-18-ans.htm") overwatch = Forum("overwatch", "http://www.jeuxvideo.com/forums/0-33972-0-1-0-1-0-overwatch.htm") #All forum names are stored here to use them in a list forums = [dixhuit_vingtcinq, moins_quinze, quinze_dixhuit, overwatch] while(True): for forum in forums: try: forum.add_to_database() except: print("An error occured with the forum '{0}' at {1}".format(forum.forum, datetime.datetime.today())) sleep(5) sleep(60) main() I will use it later to make graphics and little statistics to improve my skill with Python. Maybe I will retrieve more forums and expand my program to scrape the website and get every post on these forums (if I do, it will be much later). So I'm asking you for some improvements/ideas. As a beginner, there are obviously some errors that can be very annoying, and I really want to improve. Also, my code is running on one of my own servers. Isn't it better to buy a cheap VPS for 2€ instead? Thanks for reading, and thank you in advance. PS: If there are some mistakes related to my post about the website, tell me. Answer: Code smells: your code is vulnerable to SQL injection attacks because you are using string formatting to put query parameters into a query. You need to properly parameterize your query with the help of the database driver: query = """ INSERT INTO {table} (datetime, nb_co) VALUES(?, ?) """.format(table=self.forum) c.execute(query, (now, nb_co)) Note that this way you also don't need to worry about Python to database type conversions and quotes inside parameters - it will all be handled by the database driver.
Performance: instead of re-connecting to the database multiple times, think about connecting to the database once, processing all the data and then closing the connection afterwards. The same idea applies to the use of requests - you may initialize a Session() and reuse it. Use lxml instead of html.parser as the underlying parser used by BeautifulSoup. You can use the SoupStrainer class to parse only the desired element, which will allow you to then simply get the text and split by space instead of applying a regular expression: parse_only = SoupStrainer(class_="nb-connect-fofo") page = BeautifulSoup(page_html, 'lxml', parse_only=parse_only) return page.get_text().split()[0]
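Putting the two database suggestions together, a minimal sketch of a reused connection plus a parameterized insert might look like this (the whitelist check is an illustrative way to handle the table name, which cannot be bound as a ? parameter):

```python
import sqlite3

ALLOWED_TABLES = {"dixhuit_vingtcinq", "moins_quinze", "quinze_dixhuit", "overwatch"}

def record_count(connection, table, when, nb_co):
    # table names cannot be parameterized, so validate against a whitelist
    if table not in ALLOWED_TABLES:
        raise ValueError("unknown forum table: " + table)
    query = "INSERT INTO {table}(datetime, nb_co) VALUES (?, ?)".format(table=table)
    connection.execute(query, (when, nb_co))

# one connection reused for all inserts instead of reconnecting each time
connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE overwatch (datetime TEXT, nb_co INTEGER)")
record_count(connection, "overwatch", "2016-01-01 12:00:00", 1799)
connection.commit()
count = connection.execute("SELECT nb_co FROM overwatch").fetchone()[0]
```

The same pattern applies to the HTTP side: create one requests.Session() up front and call session.get(url) inside recup_co.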
{ "domain": "codereview.stackexchange", "id": 26291, "tags": "python, beginner, algorithm, regex, sqlite" }
On a trick to derive the Noether current
Question: Suppose, in whatever dimension and theory, the action $S$ is invariant for a global symmetry with a continuous parameter $\epsilon$. The trick to get the Noether current consists in making the variation local: the standard argument, which doesn't convince me and for which I'd like a more formal explanation, is that, since the global symmetry is in force, the only term appearing in the variation will be proportional to derivatives of $\epsilon,$ and thus the involved current $J^\mu$ will be conserved on-shell: $$ \delta S = \int \mathrm{d}^n x \ J^\mu \partial_\mu \epsilon .\tag{*}$$ This is stated, e.g., in Superstring Theory: Volume 1 by Green Schwarz Witten on page 69 and The Quantum Theory of Fields, Volume 1 by Weinberg on page 307. In other words, why a term $$ \int \mathrm{d}^n x \ K(x) \ \epsilon(x)$$ is forbidden? Taking from the answer below, I believe two nice references are theorem 4.1 example 2.2.5 Answer: I) Let there be given a local action functional $$ S[\phi]~=~\int_V \mathrm{d}^nx ~{\cal L}, \tag{1}$$ with the Lagrangian density $$ {\cal L}(\phi(x),\partial\phi(x),x). \tag{2}$$ [We leave it to the reader to extend to higher-derivative theories. See also e.g. Ref. 1.] II) We want to study an infinitesimal variation$^1$ $$ \delta x^{\mu}~=~\epsilon X^{\mu} \qquad\text{and}\qquad \delta\phi^{\alpha}~=~\epsilon Y^{\alpha}\tag{3}$$ of spacetime coordinates $x^{\mu}$ and fields $\phi^{\alpha}$, with arbitrary $x$-dependent infinitesimal $\epsilon(x)$, and with some given fixed generating functions $$ X^{\mu}(x)\qquad\text{and}\qquad Y^{\alpha}(\phi(x),\partial\phi(x),x).\tag{4}$$ It is implicitly assumed that under a variation the integration region $V$ changes according to the vector field $X^{\mu}$. 
Then the corresponding infinitesimal variation of the action $S$ takes the form$^2$ $$ \delta S ~\sim~ \int_V \mathrm{d}^n x \left(\epsilon ~ k + j^{\mu} ~ d_{\mu} \epsilon \right) \tag{5}$$ for some structure functions $$ k(\phi(x),\partial\phi(x),\partial^2\phi(x),x)\tag{6}$$ and $$ j^\mu(\phi(x),\partial\phi(x),x).\tag{7}$$ [One may show that some terms in the $k$ structure function (6) are proportional to eoms, which are typically of second order, and therefore the $k$ structure function (6) may depend on second-order spacetime derivatives.] III) Next we assume that the action $S$ has a quasisymmetry$^3$ for $x$-independent infinitesimal $\epsilon$. Then eq. (5) reduces to $$ 0~\sim~\epsilon\int_V \mathrm{d}^n x~ k. \tag{8}$$ IV) Now let us return to OP's question. Due to the fact that eq. (8) holds for all off-shell field configurations, we may show that eq. (8) is only possible if $$ k ~=~ d_{\mu}k^{\mu} \tag{9}$$ is a total divergence. (Here the words on-shell and off-shell refer to whether the eoms are satisfied or not.) In more detail, there are two possibilities: If we know that eq. (8) holds for every integration region $V$, we can deduce eq. (9) by localization. If we only know that eq. (8) holds for a single fixed integration region $V$, then the reason for eq. (9) is that the Euler-Lagrange derivatives of the functional $K[\phi]:=\int_V \mathrm{d}^n x~ k$ must be identically zero. Therefore $k$ itself must be a total divergence, due to an algebraic Poincare lemma of the so-called bi-variational complex, see e.g. Ref. 2. [Note that there could in principle be topological obstructions in field configuration space which ruin this proof of eq. (9).] See also this related Phys.SE answer by me. V) One may show that the $j^\mu$ structure functions (7) are precisely the bare Noether currents. Next define the full Noether currents $$ J^{\mu}~:=~j^{\mu}-k^{\mu}.\tag{10}$$ On-shell, after an integration by parts, eq. 
(5) becomes $$ \begin{align} 0~\sim~~~~~&\text{(boundary terms)}~\approx~ \delta S \cr ~\stackrel{(5)+(9)+(10)}{\sim}& \int_V \mathrm{d}^n x ~ J^{\mu}~ d_{\mu}\epsilon \cr ~\sim~~~~~& -\int_V \mathrm{d}^n x ~ \epsilon~ d_{\mu} J^{\mu} \end{align}\tag{11}$$ for arbitrary $x$-dependent infinitesimal $\epsilon(x)$. Equation (11) is precisely OP's sought-for eq. (*). VI) Equation (11) implies (via the fundamental lemma of calculus of variations) the conservation law $$ d_{\mu}J^{\mu}~\approx~0, \tag{12}$$ in agreement with Noether's theorem. References: P.K. Townsend, Noether theorems and higher derivatives, arXiv:1605.07128. G. Barnich, F. Brandt and M. Henneaux, Local BRST cohomology in gauge theories, Phys. Rep. 338 (2000) 439, arXiv:hep-th/0002245. -- $^1$ Since the $x$-dependence of $\epsilon(x)$ is supposed to be just an artificial trick imposed by us, we may assume that there do not appear any derivatives of $\epsilon(x)$ in the transformation law (3), as such terms would vanish anyway when $\epsilon$ is $x$-independent. $^2$ Notation: The $\sim$ symbol means equality modulo boundary terms. The $\approx$ symbol means equality modulo eqs. of motion. $^3$ A quasisymmetry of a local action $S=\int_V d^dx ~{\cal L}$ means that the infinitesimal change $\delta S\sim 0$ is a boundary term under the quasisymmetry transformation.
{ "domain": "physics.stackexchange", "id": 11997, "tags": "mathematical-physics, lagrangian-formalism, variational-principle, noethers-theorem, classical-field-theory" }
What is the product when Lithium solid is dropped in water?
Question: I am working on reaction predictions for AP Chemistry and I need some help with this problem. On a practice quiz I tried to use the rule of "Metal + Water = Metal Oxide", so I answered with $\ce{2Li + O^{2-} -> Li2OH}$. I got zero points for this answer, meaning both reactants and products were wrong. What is the correct answer and how would you get to it? Answer: I think the major error you made is applying what you call the "Metal + Water = Metal Oxide" rule. This may apply to some metals, but not all. Specifically, lithium is a Group 1 metal, one of the so-called alkali metals. Group 1 metals tend not to form oxides with water, but rather form hydroxides with water. That is, the metal ions bond with the hydroxide ion, $\ce{OH-}$, which has a -1 charge. If you have ever seen this reaction (there are lots of YouTube videos if you have not done the reaction yourself), you will see that lithium in water produces lots of gas. And in some videos, you will see that this gas can be set on fire! Given the elements involved, that's a good indication that the gas is hydrogen, a very flammable gas. So the most likely equation for lithium in water forms a hydroxide of lithium along with hydrogen gas. Is that enough information for you to propose a different equation than what you answered?
{ "domain": "chemistry.stackexchange", "id": 347, "tags": "reaction-mechanism, water" }
P2os and rosaria not connecting to Pioneer P3-AT
Question: I have pioneer p3-at robot and a separate laptop with Ubuntu 14.04 and Ros Indigo properly installed in it. I downloaded p2os from https://github.com/allenh1/p2os and rosaria from https://github.com/amor-ros-pkg/rosaria.git I am connecting robot to laptop via serial(robot) to usb(laptop) converter with RS-232 cable But every time I run command sudo usermod -a -G dialout $USER Then I logged out and again logged in sudo chmod 777 -R /dev/ttyUSB0 rosrun rosaria RosAria _port:=/dev/ttyUSB0 But I get this error shivam@shivam-Inspiron-3542:~$ : rosrun rosaria RosAria [ INFO] [1422444061.117081691]: RosAria: using port: [/dev/ttyS0] Could not connect to simulator, connecting to robot through serial port /dev/ttyS0. Syncing 0 No packet. Syncing 0 No packet. Trying to close possible old connection Syncing 0 No packet. Syncing 0 No packet. Robot may be connected but not open, trying to dislodge. Syncing 0 No packet. Robot may be connected but not open, trying to dislodge. Syncing 0 No packet. Could not connect, no robot responding. Failed to connect to robot. [ERROR] [1422444067.613784823]: RosAria: ARIA could not connect to robot! (Check ~port parameter is correct, and permissions on port device.) [FATAL] [1422444067.613882512]: RosAria: ROS node setup failed... And when I run p2os it shows this [ERROR] [1349087058.764888277]: Error reading packet header from robot connection: P2OSPacket():Receive():read(): . . [ERROR] [1349087114.446749752]: Error reading packet header from robot connection: P2OSPacket():Receive():read(): [ERROR] [1349087114.447846009]: p2os setup failed... [p2os-1] process has died [pid 12089, exit code 255] Robot is working properly. I tried it with Aria Library which comes with p3-at. Actually robot has windows-7 installed in its onboard pc which I think should not be a problem because I am connecting it using the serial port of the robot. 
And nowhere does it mention that the robot's OS is important. Originally posted by shivam-kumar on ROS Answers with karma: 1 on 2016-06-08 Post score: 0 Answer: When you run RosAria or p2os you can specify the port via a port parameter, e.g. rosrun rosaria RosAria _port:=/dev/ttyUSB0 or rosrun p2os p2os_driver _port:=/dev/ttyUSB0. When you tested it with ARIA, did you also specify /dev/ttyUSB0 as the port (e.g. demo -rp /dev/ttyUSB0)? You can turn off the internal onboard computer with the switch on the side. This will give you extra battery life and prevent any possible interference from anything on that computer also using the serial port. Originally posted by ReedHedges with karma: 821 on 2016-06-10 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by shivam-kumar on 2016-06-10: I am using the port /dev/ttyUSB0, I am supplying it as a parameter in both RosAria and p2os. "Turn Off the on-board computer" - the serial port is on the on-board computer; I think if I turn it off then the serial port won't work. Or is there any other serial port inside the robot? Comment by ReedHedges on 2016-06-10: Yes, you should use the HOST/SERIAL port on the left side of the robot if you are using an added laptop; that is the connection to the robot's controller. (Which is also connected to the internal computer as well.) Comment by shivam-kumar on 2016-06-10: So you mean I need to open the robot's top plate to access the Host Serial Port, because I don't see it on the side of the P3-AT. Comment by ReedHedges on 2016-06-10: Sorry, on the P3AT it's on the top, next to the TX, RX, STATUS leds, reset button, battery led, etc. STATUS should show if the controller is ready, communicating with software, etc. TX and RX will show if there is any communication through the serial connection. Comment by shivam-kumar on 2016-06-10: Could you please tell me if I need to give any commands on the robot to connect via serial port.
I know you said turn off the onboard computer and I did, but it still could not connect. So we don't need to give any permissions, just connect the serial cable to the robot and run the command on laptop Comment by shivam-kumar on 2016-06-10: Sir I Know its too many questions? but Sir please, its urgent for me, I have to complete my work in 1 month at my college
{ "domain": "robotics.stackexchange", "id": 24868, "tags": "ros, p2os-driver, rosaria, p2os" }
Is there a relativistic version of Navier-Stokes equations?
Question: Just as the title says, is there a relativistic version of Navier-Stokes equations? In electromagnetic hydrodynamics it would be very useful to have relativistic version of Navier-Stokes equations, although I couldn't find one Answer: I expand my previous comment. Yes, there are relativistic versions of the Navier-Stokes equation: They extend to special (or general) relativity the usual Navier-Stokes equations coupled to heat conduction (i.e., energy diffusion, see this answer). You can find them in several famous books, in particular: Landau, Fluid Mechanics (volume 6 of the theoretical physics course). In the chapter dedicated to relativistic hydrodynamics, you find the famous relativistic version of the Navier-Stokes equation in the so-called "Landau frame". See, e.g., this answer. Weinberg, Gravitation and Cosmology: here you can find the relativistic generalization of Navier-Stokes in the so-called "Eckart frame" (chapter 11). The problem is that both these "naive" relativistic generalizations of Navier-Stokes do not work: the partial differential equations, despite being written in a covariant fashion, display instabilities and lead to non-causal propagation of signals (they display instabilities both at the computer simulation level, i.e. once discretized, as well as at the exact mathematical level!). This is a theorem based on the analysis of Hiscock and Lindblom (1985). Beyond relativistic Navier-Stokes: To fix the stability and causality problems of relativistic versions of Navier-Stokes hydrodynamics, we have to look for a more general framework for dissipative relativistic hydrodynamics. The various possibilities, their underlying assumptions and motivations are summarized in this review. 
A possible (and widely used) alternative to the Eckart or Landau versions of the relativistic Navier-Stokes is the so-called "Israel-Stewart hydrodynamics" (later revised and upgraded in Derivation of fluid dynamics from kinetic theory with the 14-moment approximation). You can find an introduction to Israel-Stewart hydrodynamics in the recent book by Rezzolla and Zanotti: this formulation overcomes the instability problem and is causal (signals, like sound waves, propagate subluminally, namely with a speed less than that of light), as shown in a seminal work by Hiscock and Lindblom (1983). Why Eckart & Landau's approaches fail: A simple explanation of why Navier-Stokes does not work in special and/or general relativity is given in "When the entropy has no maximum: A new perspective on the instability of the first-order theories of dissipation": in the formulations of Landau and Eckart, the entropy function turns out not to have a maximum (the homogeneous equilibrium state is an unstable equilibrium point). Therefore, since the fluid wants to maximise entropy, the fluid explodes because entropy "wants" to grow indefinitely.
{ "domain": "physics.stackexchange", "id": 91236, "tags": "electromagnetism, special-relativity, fluid-dynamics, navier-stokes, magnetohydrodynamics" }
Is our actual weight $mg$?
Question: As we are moving in a circle with uniform speed, the centripetal force acting on us should be $$ F_{net}= \frac {mv^2}{R} =\frac {4\pi^2mR}{T^2}. $$ There are only two forces acting on us: the normal force and the gravitational force. So $$ mg-F_N = \frac {4\pi^2mR}{T^2}.$$ Does that mean our actual weight is not $mg$ but $ F_N = mg - \frac {4\pi^2mR}{T^2} $? Answer: That depends on how you define the phrase "actual weight." If actual weight means the gravitational force of Earth on an object, then $mg$ is correct. If actual weight means what a scale measures, then your formula that takes into account Earth's rotation is correct. In fact, this would mean actual weight varies with latitude, because $R$ gets smaller as you get closer to the poles. Careful definition of what is being measured is important for any scientific experiment. For another example, there are two lengths of time that are called a day: a synodic day (the length of time it takes the sun to return to the same position in the sky) and a sidereal day (the time it takes distant stars to return to the same position in the sky). Because Earth moves around the Sun as it rotates, these two days are slightly different.
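The size of the rotational correction in the formula above is easy to estimate; a rough sketch with nominal values for the equatorial radius and the sidereal day (these numbers are approximate and are not from the original answer):

```python
import math

R = 6.378e6   # equatorial radius in meters (approximate)
T = 86164.0   # sidereal day in seconds (approximate)
g = 9.81      # nominal gravitational acceleration, m/s^2

# centripetal acceleration at the equator, 4*pi^2*R/T^2
a_c = 4 * math.pi ** 2 * R / T ** 2
# what a scale reads per unit mass at the equator
scale_reading = g - a_c
```

This gives a_c on the order of 0.03 m/s², i.e. a scale at the equator reads roughly 0.3% less than $mg$, and the correction vanishes at the poles.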
{ "domain": "physics.stackexchange", "id": 94060, "tags": "newtonian-mechanics, newtonian-gravity, centripetal-force" }
rosrun controller_manager controller_manager list
Question: I am attempting to spawn controllers, but interactions with controller_manager seem to hang. When I attempt to launch my controllers I see the following after a timeout: [INFO] [WallTime: 1398097365.560001] Controller Spawner: Waiting for service controller_manager/load_controller [WARN] [WallTime: 1398097395.567543] Controller Spawner couldn't find the expected controller_manager ROS interface. [gripper/controller_spawner-1] process has finished cleanly log file: /home/user/.ros/log/57c14baa-c96f-11e3-b22a-c81f6621927d/gripper-controller_spawner-1*.log Subsequently, if I attempt to merely list controllers/types the call hangs until I kill the process: rosrun controller_manager controller_manager list Anyone have any tips on what I am doing wrong? I have confirmed that ros_control and ros_controllers are installed. Thanks in advance. Originally posted by b3l33 on ROS Answers with karma: 113 on 2014-04-21 Post score: 0 Original comments Comment by ahendrix on 2014-04-21: Do you have a control process running? Can you see the controller_manager service with rosservice list? Comment by b3l33 on 2014-04-21: No, it does not appear in the list of services.... Answer: I found my mistake. I had a typo in my URDF: I was using an incorrect tag in the gazebo plugin namespace element. Namely, I had <gazebo> <plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so"> <robot_namespace>/gripper</robot_namespace> </plugin> </gazebo> When I should have had <gazebo> <plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so"> <robotNamespace>/gripper</robotNamespace> </plugin> </gazebo> Originally posted by b3l33 with karma: 113 on 2014-04-21 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 17722, "tags": "ros, controller-manager" }
Integral of a loop that doesn't encircle a current
Question: In Griffiths' Electrodynamics, if we calculate $\oint \mathbf{B} \cdot d\mathbf{l}$, where the loop doesn't enclose the wire at all, then $\int d\phi=0$. I thought that $\int d\phi=\phi_2-\phi_1$, but why does it equal zero? Answer: The integration runs over the whole loop, so the integral splits into two parts: $$\int_{\text{loop}} d\phi=\int_{I}d\phi+\int_{II}d\phi,$$ where $\int_{I}d\phi=\phi_2-\phi_1$, since part I goes counterclockwise from the lower tangent to the upper tangent, and $\int_{II}d\phi=\phi_1-\phi_2$, since part II goes counterclockwise from the upper tangent back to the lower tangent. Therefore $$\int_{\text{loop}} d\phi=\phi_2-\phi_1 +\phi_1-\phi_2 =0.$$
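The cancellation can also be checked numerically. Below is a sketch (helper names are illustrative) that integrates the field of an infinite straight wire, with the prefactor mu0*I/(2*pi) set to 1 so that an enclosing loop should give 2*pi, around a circular loop that either encloses the wire or does not:

```python
import math

def B(x, y):
    # field of an infinite wire along z through the origin,
    # with mu0*I/(2*pi) = 1: B = (-y, x) / (x^2 + y^2)
    r2 = x * x + y * y
    return (-y / r2, x / r2)

def loop_integral(cx, cy, radius, n=2000):
    # piecewise-linear approximation of the loop; B is sampled
    # at the midpoint of each small chord
    total = 0.0
    for i in range(n):
        t0 = 2 * math.pi * i / n
        t1 = 2 * math.pi * (i + 1) / n
        tm = 0.5 * (t0 + t1)
        bx, by = B(cx + radius * math.cos(tm), cy + radius * math.sin(tm))
        dx = radius * (math.cos(t1) - math.cos(t0))
        dy = radius * (math.sin(t1) - math.sin(t0))
        total += bx * dx + by * dy
    return total

enclosing = loop_integral(0.0, 0.0, 1.0)      # wire inside: expect ~2*pi
non_enclosing = loop_integral(3.0, 0.0, 1.0)  # wire outside: expect ~0
```

The enclosing loop returns 2*pi (i.e. mu0*I after restoring the prefactor), while the non-enclosing loop returns 0 to good numerical precision, exactly as the $\int d\phi$ argument predicts.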
{ "domain": "physics.stackexchange", "id": 69083, "tags": "electromagnetism, magnetic-fields" }
Differentiation between zinc, aluminium, and magnesium ions in solution
Question: If I have three aqueous ionic solutions in which I know that the cation is $\ce{Al^3+}$, $\ce{Mg^2+}$, or $\ce{Zn^2+}$, how do I find out which is which? I was thinking of adding $\ce{OH-}$ in the form of $\ce{NaOH}$, or something else to produce a precipitate, and testing these precipitates in other liquids, but I'm not sure what other liquids to try. Answer: First, you add sodium hydroxide to the three different solutions. (Remember to save some of each solution, because we need it for the next test.) $\ce{Zn^2+}\text{ ion}$ The zinc ion will react with the hydroxide ion in the sodium hydroxide solution to form zinc hydroxide, which is white. So, you see a white precipitate. $$\ce{Zn^2+ +2 OH^- -> Zn(OH)2}$$ Adding more sodium hydroxide, the zinc hydroxide will form the zincate ion, which is colourless. So, you will see a clear solution. $$\ce{Zn(OH)2 + 2 OH^- -> Zn(OH)4^2-}$$ $\ce{Al^3+}\text{ ion}$ The aluminium ion will react with the hydroxide ion in the sodium hydroxide solution to form aluminium hydroxide, which is white. So, you see a white precipitate. $$\ce{Al^3+ +3 OH^- -> Al(OH)3}$$ Adding more sodium hydroxide, the aluminium hydroxide will form the aluminate ion, which is colourless. So, you will see a clear solution. $$\ce{Al(OH)3 + OH^- -> Al(OH)4^-}$$ $\ce{Mg^2+}\text{ ion}$ The magnesium ion will react with the hydroxide ion in the sodium hydroxide solution to form magnesium hydroxide, which is white. So, you see a white precipitate. $$\ce{Mg^2+ +2 OH^- -> Mg(OH)2}$$ However, in this case, it won't form anything else on adding more sodium hydroxide. So, you will still see the white precipitate even though you have continued to add sodium hydroxide. We can thus distinguish the $\ce{Mg^2+}$ solution from the three solutions, and now we are left with $\ce{Zn^2+}$ and $\ce{Al^3+}$. Take another sample from each of the remaining two solutions and add aqueous ammonia solution ($\ce{NH3}$) to the samples.
$\ce{Zn^2+}\text{ ion}$ The zinc ion will react with the ammonia solution to form zinc hydroxide, which is white. So, you see a white precipitate. $$\ce{Zn^2+ + 2NH3 + 2H2O <=> Zn(OH)2 + 2NH4^+}$$ Adding more ammonia solution, the zinc hydroxide will form the tetraamminezinc(II) ion, which is colourless. So, you will see a clear solution. $$\ce{Zn(OH)2 + 4NH3 -> [Zn(NH3)4]^2+ + 2OH^-}$$ $\ce{Al^3+}\text{ ion}$ The aluminium ion will react with the ammonia solution to form aluminium hydroxide, which is white. So, you see a white precipitate. $$\ce{Al^3+ + 3NH3 + 3H2O <=> Al(OH)3 + 3NH4^+}$$ However, the aluminium hydroxide will not react further with ammonia. So, you will still see the white precipitate even though you have continued to add ammonia. In conclusion: first put sodium hydroxide in samples of the three solutions and keep adding it until only one of the solutions still has a white precipitate. That solution is $\ce{Mg^2+}$. Next, take samples of the remaining two solutions and keep adding aqueous ammonia until only one of them still has a white precipitate. That solution is $\ce{Al^3+}$. And finally, the last solution is $\ce{Zn^2+}$.
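The procedure in this answer is essentially a two-question flowchart, which can be written down compactly (a hypothetical encoding for illustration only; the booleans record whether the white precipitate redissolves in excess reagent):

```python
def identify_cation(redissolves_in_excess_naoh, redissolves_in_excess_ammonia):
    if not redissolves_in_excess_naoh:
        return "Mg2+"  # Mg(OH)2 stays as a white precipitate in excess NaOH
    if redissolves_in_excess_ammonia:
        return "Zn2+"  # Zn(OH)2 dissolves as [Zn(NH3)4]2+
    return "Al3+"      # Al(OH)3 does not dissolve in excess ammonia
```

Reading the two observations off the test tubes and feeding them into this function reproduces the conclusion of the answer.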
{ "domain": "chemistry.stackexchange", "id": 3435, "tags": "experimental-chemistry, aqueous-solution, solubility, solutions, ions" }
Counting pairs that have a given difference in Java
Question: You will be given an integer k and a list of integers. Count the number of distinct valid pairs of integers (a,b) in the list for which a+k=b. For example, the array [1,1,1,2] has two different valid pairs: (1,1) and (1,2). Note that the three possible instances of the pair (1,1) count as a single valid pair, as do the three possible instances of the pair (1,2). If k=1, then this means we have a total of one valid pair, which satisfies a+k=b => 1+1=2, the pair (1,2). My code: public class PairArrayStream { public static void main(String[] args) { int k =1; List<Integer> input = Arrays.asList(1,1,1,2); HashSet<HashSet> hs = new HashSet<HashSet>(); IntStream.range(0, input.size()) .forEach(i -> IntStream.range(0, input.size()) .filter(j -> i != j && input.get(i) - input.get(j) == k) .forEach(j -> { HashSet inner = new HashSet<>(); inner.add(input.get(j)); inner.add(input.get(i)); hs.add(inner); }) ); System.out.println("OutPut "+hs.size()); } } Without Java 8 features: int k =1; List<Integer> input = Arrays.asList(1,1,1,2); HashSet<HashSet> hs = new HashSet<HashSet>(); for(int i =0 ; i<input.size();i++){ for(int j = i; j<input.size();j++){ if(Math.abs(input.get(j)-input.get(i)) == k){ HashSet inner = new HashSet<>(); inner.add(input.get(j)); inner.add(input.get(i)); hs.add(inner); } } } Well, I am getting the correct output, but 40% of the test cases give me a timeout. Opinions and tactics are welcomed to make the code better and faster. Answer: From a review of your code, I see that you loop over your input in a nested for loop. This is actually not required to solve the problem. Instead, create one hashset containing every number in your numbers array, and another hashset containing every number plus k. Then we only have to check for intersections. This should take your \$O(n^2)\$ algorithm and turn it into a nicer \$O(n)\$.
Here's my suggested code (note that I included an inner class to store the pairs, but you could use builtins to solve it too): private static class Pair { public int a; public int b; public Pair(int a, int b) { this.a = a; this.b = b; } public String toString() { return "("+a+","+b+")"; } } public static List<Pair> getPairsFast(int k, List<Integer> numbers) { HashSet<Integer> hLow = new HashSet<>(); HashSet<Integer> hHigh = new HashSet<>(); List<Pair> ret = new ArrayList<>(); for (int i : numbers) { hLow.add(i); hHigh.add(i+k); } for (int i : hHigh) { if (hLow.contains(i)) { ret.add(new Pair(i-k, i)); } } return ret; } From some testing, it indeed seems to perform better the larger the input is, and it is >400 times faster for an input of size \$10^5\$.
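The hashset-intersection idea from the answer, sketched in Python for reference (the function name and tuple output are my own choices):

```python
def count_pairs(k, numbers):
    """Distinct pairs (a, b) with a + k == b, via one set intersection.

    The valid b values are exactly the intersection of {numbers} and
    {n + k for n in numbers}, so the whole job is O(n).
    """
    low = set(numbers)
    high = {n + k for n in numbers}
    return [(b - k, b) for b in sorted(low & high)]

print(count_pairs(1, [1, 1, 1, 2]))  # [(1, 2)]
```

Like the Java version above, this assumes k > 0; k = 0 would need an extra duplicate check, since a pair (a, a) only exists when a occurs at least twice.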
{ "domain": "codereview.stackexchange", "id": 31629, "tags": "java, algorithm, time-limit-exceeded, k-sum" }
What are the roundish objects in this sketch of a tracheid?
Question: I got this picture from the Wikipedia entry for Tracheid. What are the roundish objects in the sketch? Also are tracheids sclerenchyma cells? Answer: The 'roundish objects' you are seeing are called 'pits'. The Dictionary of Botany defines a pit as '[a] cavity in the secondary cell wall, allowing exchange of substances between adjacent cells'. The pit itself is composed of an aperture, named the pit cavity, and a surrounding membrane called the 'pit membrane'. The pit is somewhat analogous to the plasmodesmata connecting protoplasts by thin channels. The difference is that pits occur within the xylem, while plasmodesmata also arise between other, non-vascular cells. To answer your second question: sclerenchyma cells are strengthened, lignified cells, and tracheid cells are indeed the 'tracheary', or vascular, specialisation of sclerenchyma tissue. For information regarding the Dictionary of Botany, the website is available from: http://botanydictionary.org/pit.html Information concerning ground tissue is available from: https://www.sciencedirect.com/topics/agricultural-and-biological-sciences/ground-tissue
{ "domain": "biology.stackexchange", "id": 8750, "tags": "plant-anatomy, tissue" }
language translation to Spanish?
Question: I am English, and living in Spain where I am starting to work with kids teaching Lego robotics with an established group of eager learners. I would also like to move them onto ROS. I am more than happy to help with translating your docs into Spanish (it's also good for me!), but is there a driver to do this already? Thanks. Originally posted by chris2far on ROS Answers with karma: 1 on 2017-04-11 Post score: 0 Answer: the translation for the Spanish language is already there :) just visit http://wiki.ros.org/es Originally posted by ΦXocę 웃 Пepeúpa ツ with karma: 424 on 2017-04-11 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by chris2far on 2017-04-11: thanks...I noticed in the end :-)
{ "domain": "robotics.stackexchange", "id": 27581, "tags": "ros" }
I cannot convert xacro file to urdf
Question: Hello, I have an issue about broadcasting some links at tf for which I have asked here. Someone commented that he solved my issue by converting the xacro file to a urdf one. How can I do that? I have searched for the issue in questions like here and here and when I type: rosrun xacro xacro.py rover_ws/src/labrob/labrob_description/urdf/labrob.urdf or rosrun xacro xacro.py 'rover_ws/src/labrob/labrob_description/urdf/labrob.urdf' I get an error like the following: xacro: Traditional processing is deprecated. Switch to --inorder processing! To check for compatibility of your document, use option --check-order. For more infos, see http://wiki.ros.org/xacro#Processing_Order No such file or directory: rover_ws/src/labrob/labrob_description/urdf/labrob.urdf XacroException('No such file or directory: rover_ws/src/labrob/labrob_description/urdf/labrob.urdf',) The directory is the correct one I am writing here, I don't know what I am doing wrong. Could you please help me? Thanks for your answers and time in advance, Chris PS: I am in ROS Kinetic Originally posted by patrchri on ROS Answers with karma: 354 on 2017-01-01 Post score: 0 Answer: did you try it with an absolute path? Originally posted by NEngelhard with karma: 3519 on 2017-01-01 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by patrchri on 2017-01-01: Thank you the command worked with the absolute path...if you would like add this as an answer to accept it...Your suggestion at my other question didn't work though with the converted urdf...I still get a wrong tree.
{ "domain": "robotics.stackexchange", "id": 26616, "tags": "ros-kinetic, xacro" }
Why can a multi-particle state $|\psi\rangle$ be uniquely characterized by the occupation numbers of the single particle states?
Question: Assume we have a system with N particles, each particle in one of $\gamma$ different single particle states $|k^\gamma\rangle$. The state space of the multi-particle system is spanned by the basis $$\mathcal{B}=\{|k^{p_1},k^{p_2}...k^{p_\gamma}\rangle\}_{\textrm{All permutations } p_i\in\{1..\gamma\}}$$ where the following notation is used $$|k^{p_1},k^{p_2}...k^{p_\gamma}\rangle:=|k^{p_1}\rangle\otimes...\otimes|k^{p_\gamma}\rangle$$ Let us consider Bosonic systems. Take as an axiom that the multi-particle states of a Bosonic system have to be symmetric under any exchange of the states of two particles. How to prove in general that there is only one multi-particle state $|\psi\rangle$ with $n_i$ particles in state $|k_i\rangle$? One can easily show this for 3 particles in a 3 level system, but the general case requires a bit more combinatorics and I'm struggling. For now I would even be happy if someone can just assert that this is in fact true. Motivation: I need to know that this is actually true in order to properly understand the derivation of the grand-canonical partition sum of the Bose gas. There one replaces the trace over all Bosonic multi-particle states with arbitrary particle number $N$ by a sum over all single-particle state occupation numbers. Answer: This follows because the fully symmetric representation of the permutation group $S_m$ of $m$ states is 1-dimensional, and because the symmetric state \begin{align} \sum_{\sigma\in S_\gamma} P(\sigma) |k^{p_1},k^{p_2}...k^{p_\gamma}\rangle\, , \end{align} where $$ P(\sigma)|k^{p_1},k^{p_2}...k^{p_\gamma}\rangle = |k^{p_{\sigma(1)}},k^{p_{\sigma(2)}}...k^{p_{\sigma(\gamma)}}\rangle $$ carries this representation of the permutation group. Here this group would be $S_\gamma$.
For instance the combination $$ \frac{1}{\sqrt{6}} \left(|210\rangle + |201\rangle+ \vert 120\rangle + \vert 102\rangle + \vert 021\rangle+\vert 012\rangle \right) \tag{1} $$ is an example of such a fully symmetric state of $S_3$. The dimension of the representation is the dimension of the set $\{|210\rangle, |201\rangle, \vert 120\rangle, \vert 102\rangle ,\vert 021\rangle,\vert 012\rangle\}$ and is obviously the number of permutations of three distinct numbers $\{0,1,2\}$, i.e. the three possible states of your system. It is easy to see, by construction, that the symmetric irrep occurs at least once and is spanned precisely by the combination (1). But can the symmetric representation appear more than once? The answer is no, by orthogonality: any state orthogonal to (1) or its generalization must be ``traceless'', i.e. of the type $$ \frac{1}{\sqrt{6}} \left(a|210\rangle + b |201\rangle+ c \vert 120\rangle + d \vert 102\rangle + e \vert 021\rangle+f \vert 012\rangle \right)\, , \tag{2} $$ with $a+b+c+d+e+f=0$. For a state like (2) to be symmetric under any permutation, one needs $a=b=c=d=e=f$, which is incompatible with the traceless condition. Thus (up to an overall phase and normalization), symmetrization necessarily puts you in a 1-dimensional subspace of your full Hilbert space, and this representation occurs only once.
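The 3-particle, 3-state case can also be checked numerically; a small sketch (the construction is mine, not from the answer), which builds the action of $S_3$ on the six orderings of $\{0,1,2\}$ and verifies that the group-averaged symmetrizer has rank 1, i.e. the symmetric state is unique up to normalization:

```python
import itertools
import numpy as np

# Basis: the six orderings of the distinct labels {0, 1, 2},
# i.e. the kets |210>, |201>, |120>, |102>, |021>, |012>.
basis = list(itertools.permutations(range(3)))
index = {b: i for i, b in enumerate(basis)}
n = len(basis)  # 6

def perm_matrix(sigma):
    """Matrix of P(sigma): permute the three tensor slots by sigma."""
    P = np.zeros((n, n))
    for b, i in index.items():
        permuted = tuple(b[sigma[j]] for j in range(3))
        P[index[permuted], i] = 1.0
    return P

# Symmetrizer: average of P(sigma) over the whole group S_3.
S = sum(perm_matrix(s) for s in itertools.permutations(range(3))) / 6

# Its rank equals the multiplicity of the fully symmetric irrep here.
rank = np.linalg.matrix_rank(S)
print(rank)  # 1
```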
{ "domain": "physics.stackexchange", "id": 68897, "tags": "quantum-mechanics, statistical-mechanics, second-quantization" }
How to Subscribe vector type topic?
Question: hello :) i wanna publish and subscribe vector<geometry_msgs::PoseStamped> type topic, how can i do? void exceptionPointsCallback(const std::vector<geometry_msgs::PoseStamped::ConstPtr&> msg???????!!!!) { exceptions_points.clear(); exceptions_points.swap(msg); } Originally posted by Mohsen Hk on ROS Answers with karma: 139 on 2013-06-07 Post score: 0 Original comments Comment by ctguell on 2013-12-12: @Mohsen Hk did you manage to make this work? Im having problem accomplishing it, and would really appreciate some help Answer: Create msg file "geometry_msgs_vector.msg" in "msg" directory of ur package. In msg file write geometry_msgs/PoseStamped[] msgs_vector In ur code write the subscriber like this void callback(const package_name::geometry_msgs_vector::ConstPtr &msg) { } To access the vector use "msg->msgs_vector" (msg is a ConstPtr, i.e. a shared pointer) Hope this works PS: Dont forget to include this msg header file "package_name/geometry_msgs_vector.h" and there might be some syntactical errors in the code i have mentioned but the logic is correct Originally posted by ayush_dewan with karma: 1610 on 2013-06-07 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by ctguell on 2013-12-12: @ayush_dewan i cant manage to make it work could you be more specific in how can i make this work? I really appreciate your help
{ "domain": "robotics.stackexchange", "id": 14469, "tags": "ros, subscribe, topic" }
Demodulating a LoRa data symbol
Question: LoRa integrates data into symbols by selecting the starting frequency of an up chirp. The resulting up chirp traverses across BW and arrives at the same place it started. At the demodulator, this chirp is multiplied by a down chirp followed by the fft to extract the data bits. I tried to do this myself on paper but I can't help but find two different fft bins. I understand that if both the -ve & +ve bins are the same, the demodulation is successful; however, they aren't. I illustrate this as best I can below. As per DSP principles, the multiplication of received and locally generated base band chirps is the addition of the two instantaneous frequencies of the chirps in red and purple. Then, I have drawn a line which represents their summation in yellow. As we can see, the multiplication produces two products (in yellow), one positive and one negative frequency component that are not equal. But according to the literature, multiplication with a down chirp should produce only a single frequency bin, meaning the positive and negative frequency products resulting from the multiplication must be of the same magnitude. But, I see two different products here. How is that possible? It would be very helpful to me if someone could point out where I am wrong. Answer: The resulting up chirp traverses across BW and arrives at the same place it started. not quite exactly right, but very close: it ends up in the DFT bin before the bin it started from. This makes the chirps (in the frequency domain) cyclically shifted versions of the straight-up "prototype chirp". (my own coinage) Maybe this way of looking at it helps intuition: Let's assume the system is synchronized in time¹ and let's look at what happens when you multiply the unshifted downchirp with the upchirp. You're multiplying a complex sinusoid with a frequency slope that is the exact inverse of the slope of the other sinusoid. In other words, you're multiplying two sinusoids which only differ in the sign of their exponents.
Now, multiplying two exponential functions with the same base ($e^\cdot$) will lead to a new exponential function with the sum of the exponents of the factors as exponent. Yay! That means up- and down-chirp cancel out, you get a constant $e^{j0}=1$. Now, if the chirp is shifted, the sum of the exponents doesn't cancel, but becomes a linear function as the argument of the complex exponential: a tone! A tone that on top of that falls in the raster of the DFT. So, a single peak after FFT. As per DSP principles, the multiplication of received and locally generated base band chirps is the addition of the two instantaneous frequencies of the chirps in red and purple yep. Then, I have drawn a line which represents their summation in yellow. And here you simply forgot to realize that your positive frequency is outside the Nyquist bandwidth. You need to subtract the sampling rate, and you'll see that it aliases to the same frequency as the rest of the yellow line. Don't forget that you're in the discrete time and frequency domains: the spectrum must be imagined to repeat infinitely every multiple of the sampling rate. ¹ Unsolicited words of advice: if you find literature that proposes cool signalling schemes and uses the sentence Assume the system is synchronized, don't invest too much time into that scheme until you find information on how that can be synchronized... synchronization is one of the hard parts, and it has broken many a system's promises.
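The aliasing argument can be illustrated numerically; a quick sketch (the parameters are my own choice, not from the question): build a cyclically shifted upchirp, dechirp with the conjugate, and watch the FFT peak land in a single bin.

```python
import numpy as np

# Minimal LoRa-style dechirp sketch (assumed parameters, not a full modem):
# N chips per symbol, critically sampled so BW == sample rate == N bins.
N = 128            # chips per symbol (2**SF with SF = 7)
k = np.arange(N)
symbol = 37        # data symbol = cyclic shift of the upchirp

# Base upchirp: instantaneous frequency sweeps linearly across the band.
up = np.exp(1j * np.pi * k * k / N)
# Data chirp: the same upchirp cyclically shifted by `symbol` chips.
tx = np.roll(up, -symbol)

# Dechirp: multiply by the conjugate (down) chirp, leaving a single tone.
tone = tx * np.conj(up)

# The FFT peak lands in the bin given by the symbol: the "positive" and
# "negative" frequency segments of the drawing alias onto the same bin.
detected = int(np.argmax(np.abs(np.fft.fft(tone))))
print(detected)  # 37
```

The two yellow segments of the drawing are just the pre- and post-wrap parts of the shifted chirp; once the sampling rate is subtracted from the out-of-band part, both fall on the same FFT bin, which is why a single peak appears here.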
{ "domain": "dsp.stackexchange", "id": 9166, "tags": "demodulation, linear-chirp" }
C hack: replace printf to collect output and return complete string by using a line buffer
Question: I have this great C program I'd like to embed into an iOS app. One passes command line arguments to it and the results are printed to stdout via printf and fputs - like with all the good old Unix programs. Now I'd like to just edit main and the print functions to use my own printf function which collects all the output that normally goes to stdout and return it at the end. I implemented a solution by using a line buffer to collect all the printfs until the newline. And a dynamic char array whereto I copy when an output line is finished. The charm of this solution is - it's kind of tcl'ish: just throw everything into a text line, and if it's complete, store it. Now do that as long as necessary and return the whole bunch at the end. C code: #include <stdio.h> #include <stdlib.h> #include <stdarg.h> #include <string.h> // outLineBuffer collects one output line by several calls to tprntf #define initialSizeOfReturnBuffer 10 // reduced for testing (would be 16*1024) #define incrSizeOfReturnBuffer 5 // reduced for testing (would be 1024*1024) #define outLineBufferMaxSize 4095 char outLineBuffer[sizeof(char)*outLineBufferMaxSize] = ""; char *tReturnString; size_t sizeOfReturnBuffer, curPosOutBuffer = 0, lenOutLine = 0; With the replacement tprntf for all the original printf and fputs: // replace printf with this to collect the parts of one output line. static int tprntf(const char *format, ...) { const size_t maxLen = sizeof(char)*outLineBufferMaxSize; va_list arg; int done; va_start (arg, format); done = vsnprintf (&outLineBuffer[lenOutLine], maxLen-lenOutLine, format, arg); va_end (arg); lenOutLine = strlen(outLineBuffer); return done; } And the function when we complete one output line (everywhere \n is printed): // Output line is now complete: copy to return buffer and reset line buffer. 
static void tprntNewLine() { size_t newSize; long remainingLenOutBuffer; char *newOutBuffer; remainingLenOutBuffer = sizeOfReturnBuffer-curPosOutBuffer-1; lenOutLine = strlen(outLineBuffer)+1; // + newline character (\n) remainingLenOutBuffer -= lenOutLine; if (remainingLenOutBuffer < 0) { newSize = sizeOfReturnBuffer + sizeof(char)*incrSizeOfReturnBuffer; if ((newOutBuffer = realloc(tReturnString, newSize)) != 0) { tReturnString = newOutBuffer; sizeOfReturnBuffer = newSize; } else { lenOutLine += remainingLenOutBuffer; //just write part that is still available remainingLenOutBuffer = 0; } } snprintf(&tReturnString[curPosOutBuffer], lenOutLine+1, "%s\n", outLineBuffer); curPosOutBuffer += lenOutLine; outLineBuffer[0] = 0; lenOutLine = 0; } And a little main to test it (without Swift - e.g. plain gcc): int main(int argc, char *argv[]) { int i; sizeOfReturnBuffer = initialSizeOfReturnBuffer*sizeof(char); if ((tReturnString = malloc(sizeOfReturnBuffer)) == 0) { return 1; // "Sorry we are out of memory. Please close other apps and try again!"; } tReturnString[0] = 0; for (i = 1; i < argc; i++) { tprntf("%s ", argv[i]); } tprntNewLine(); tprntf("%s", "ABC\t"); tprntf("%d", 12); tprntNewLine(); // enough space for that ;-) tprntf("%s", "DEF\t"); tprntf("%d", 34); tprntNewLine(); // realloc necessary ... tprntf("%s", "GHI\t"); tprntf("%d", 56); tprntNewLine(); // again realloc for testing purposes ... printf("tReturnString at the end:\n>%s<\n", tReturnString); // contains trailing newline return 0; } The call from swift will then be as follows (using CStringArray.swift): let myArgs = CStringArray(["computeIt", "par1", "par2"]) let returnString = mymain(myArgs.numberOfElements, &myArgs.pointers[0]) if let itReturns = String.fromCString(returnString) { print(itReturns) } freeMemory() Answer: Revised Version Please find the revised version of my code below. The following things were improved: Buffer size problem and style issues for #defines fixed (see JS1s answer). 
Added a long string in main to test buffer realloc. Return codes 'unix style': 12 (ENOMEM) or 0 (OK) are returned Added a fputs replacement (tPuts). Used #define preprocessor statements to use tprntf instead of printf and tPuts instead of fputs. Added tFreeMemory to free allocated memory. strlen performance improvement: just parse new part of outLineBuffer - thanks Paul Ogilvie. Uploaded a complete Xcode 7 project on github swift-C-string-passing. The gcc standalone version can be found there too. C Code // C hack: replace printf to collect output and return complete string by using // a line buffer. // Beware of calling tprntNewLine so the line is added to the return string! #include <stdio.h> #include <stdlib.h> #include <stdarg.h> #include <string.h> #define OK 0 // linux return value: 0 = successful #define ENOMEM 12 // linux return value: 12 = out of memory // outLineBuffer collects one output line by several calls to tprntf #define INITIAL_SIZE_OF_RETURNBUFFER 10 // reduced for tests (would be 16*1024) #define INCR_SIZE_OF_RETURNBUFFER 5 // reduced for testing (would be 1024*1024) #define OUTLINE_BUFFER_MAXSIZE 4095 char outLineBuffer[sizeof(char)*OUTLINE_BUFFER_MAXSIZE] = ""; char *tReturnString; size_t sizeOfReturnBuffer, curPosOutBuffer = 0, lenOutLine = 0; With the replacements tprntf and tPuts for all the original printf and fputs: // fputs replacement to collect the parts of one output line in outLineBuffer. static int tPuts(const char *s, FILE *stream) { const size_t maxLen = sizeof(char)*OUTLINE_BUFFER_MAXSIZE; int rVal; if (stream == stdout) { rVal = snprintf (&outLineBuffer[lenOutLine], maxLen-lenOutLine, "%s",s); lenOutLine += strlen(&outLineBuffer[lenOutLine]); if (rVal < 0) { return EOF; } else { return rVal; } } else { return fputs(s, stream); } }
And the function when we complete one output line (everywhere \n is printed, don't forget to call this, otherwise the line won't show up): // Output line is now complete: copy to return buffer and reset line buffer. // Don't forget to call this (especially for the last prints) so the line // is added to the return string! static void tprntNewLine() { size_t newSize; long remainingLenOutBuffer, neededSize; char *newOutBuffer; remainingLenOutBuffer = sizeOfReturnBuffer-curPosOutBuffer-1; lenOutLine++; // + newline character (\n) remainingLenOutBuffer -= lenOutLine; if (remainingLenOutBuffer < 0) { //newSize = sizeOfReturnBuffer + sizeof(char)*INCR_SIZE_OF_RETURNBUFFER; neededSize = -remainingLenOutBuffer; if (neededSize < sizeof(char)*INCR_SIZE_OF_RETURNBUFFER) neededSize = sizeof(char)*INCR_SIZE_OF_RETURNBUFFER; newSize = sizeOfReturnBuffer + neededSize; if ((newOutBuffer = realloc(tReturnString, newSize)) != 0) { tReturnString = newOutBuffer; sizeOfReturnBuffer = newSize; } else { // just write part that is still available: lenOutLine += remainingLenOutBuffer; //remainingLenOutBuffer = 0; } } snprintf(&tReturnString[curPosOutBuffer],lenOutLine+1,"%s\n",outLineBuffer); curPosOutBuffer += lenOutLine; outLineBuffer[0] = 0; lenOutLine = 0; } Free allocated memory: void tFreeMemory () { free(tReturnString); } And a little main to test it (without Swift - e.g. plain gcc): #ifndef COLLECT_STDOUT_IN_BUFFER #define COLLECT_STDOUT_IN_BUFFER #define printf tprntf #define fputs tPuts #endif // For testing with C compiler. Rename when used in Xcode project e.g.
to mymain int main(int argc, char *argv[]) { int i; sizeOfReturnBuffer = INITIAL_SIZE_OF_RETURNBUFFER*sizeof(char); if ((tReturnString = malloc(sizeOfReturnBuffer)) == 0) { // "Sorry we are out of memory. Please close other apps and try again!" return ENOMEM; } tReturnString[0] = 0; curPosOutBuffer = 0; for (i = 0; i < argc; i++) printf("%s ", argv[i]); tprntNewLine(); printf("%s", "ABC\t"); printf("%d", 12); tprntNewLine(); // enough space for that ;-) fputs("DEF\t", stdout); printf("%d", 34); tprntNewLine(); // realloc necessary ... printf("%s", "xxxxxxxxx 80 chars are way more than the current buffer " "could handle! xxxxxxxxxx\t"); printf("%d", 56); tprntNewLine(); // again realloc (test: too small INCR_SIZE_OF_RETURNBUFFER) #ifdef COLLECT_STDOUT_IN_BUFFER //undo rename to view results: #undef printf #endif printf("tReturnString at the end:\n>%s<\n", tReturnString); tFreeMemory(); return OK; } For the Swift interaction please have a look at github swift-C-string-passing
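For comparison, POSIX.1-2008 already provides this pattern via open_memstream(): a FILE* backed by a growing malloc'd buffer, so plain fprintf/fputs can be redirected without a hand-rolled line buffer. A sketch (the demo strings mirror the test main above; error handling kept minimal):

```c
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>

/* Collect formatted output into a malloc'd string via open_memstream().
 * The caller owns (and must free) the returned buffer. */
char *collect_output(void) {
    char *buf = NULL;
    size_t len = 0;
    FILE *out = open_memstream(&buf, &len);
    if (out == NULL)
        return NULL;
    fprintf(out, "ABC\t%d\n", 12);   /* ordinary stdio calls, just on `out` */
    fputs("DEF\t", out);
    fprintf(out, "%d\n", 34);
    fclose(out);                     /* flushes and finalizes buf and len */
    return buf;
}
```

This sidesteps the realloc bookkeeping entirely; the trade-off is that open_memstream() is POSIX, not ISO C, so it is available on glibc and recent Apple platforms but not guaranteed everywhere.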
{ "domain": "codereview.stackexchange", "id": 17421, "tags": "c, strings, swift" }
Designing Rayleigh Fading Wireless Channel with particular properties
Question: I have some information on a particular wireless channel between an Access Point (AP) and K users as described in the picture below: How can I create this channel in Matlab? Answer: theta0 = 6.25*10^-4; di = 10; %whatever it is h = theta0.*di^(-3).*(1/sqrt(2)).*(randn(1,500) + 1i.*randn(1,500)); g = theta0.*di^(-3).*(1/sqrt(2)).*(randn(1,500) + 1i.*randn(1,500));
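The same construction in a NumPy sketch for reference (constants copied from the Matlab snippet; note that here I put the path-loss factor theta0*d^-3 on the power, i.e. take its square root on the amplitude, which is one common convention and an assumption on my part, while the Matlab snippet multiplies the factor directly):

```python
import numpy as np

rng = np.random.default_rng(0)

theta0 = 6.25e-4   # large-scale constant (placeholder, as in the snippet)
d = 10.0           # AP-to-user distance; path-loss exponent 3
n = 500            # number of channel samples

# Small-scale Rayleigh fading: unit-variance circularly symmetric
# complex Gaussian, i.e. (randn + 1j*randn)/sqrt(2).
fading = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Channel gains with average power E[|h|^2] = theta0 * d**-3.
h = np.sqrt(theta0 * d ** -3) * fading
print(np.mean(np.abs(h) ** 2))  # close to theta0 * d**-3
```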
{ "domain": "dsp.stackexchange", "id": 7210, "tags": "matlab, fading-channel" }
What is the difference between a linear and non-linear solution in the bending of beams?
Question: I have been working on a simulator for bending of beams and came now to a tricky doubt: What should be the difference between a linear and non-linear solution in this case (graphic at bottom)? The solution of the following ODE gives us the non-linear curvature: and for very small angles (dy/dx)^2 will tend to zero, so we can linearize (Sorry, $dy=dv$ in this image): So we integrate the following equation (I used the bvp4c function on Matlab), that includes the curvature, to obtain the deflection of the beam: In red is the non-linear solution and in blue the linear solution. My doubt is: at the middle of the $x$-axis $dy/dx=0$ in both curves; should I then also expect to have the same value of $y$, since at that precise point I could also cancel $dy/dx$ in the formula and both equations would look the same? In other words, what should I expect as a difference between a linear and a non-linear solution in such equations? Answer: I don't think that the deflection of the beam in the centre should be the same. You use two different relations to compute curvature from the deflection. Since you use the same load, the two different relations have to result in different solutions. In general it's hard if not impossible to say in advance what the differences in the solution will be. This strongly depends on the simplifications you make on the way to the linear model.
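The gap between the two solutions can be seen in a toy computation (my own constant-curvature setup, not the OP's exact beam or boundary conditions): with constant curvature kappa, the nonlinear relation y'' = kappa*(1 + y'^2)^{3/2} describes a circular arc, while the linearized y'' = kappa gives a parabola.

```python
import math

# Toy comparison: integrate the nonlinear ODE y'' = kappa*(1 + y'^2)**1.5
# from y(0) = y'(0) = 0 and compare against the linearized parabola and
# the exact circular arc y(x) = (1 - sqrt(1 - (kappa*x)^2)) / kappa.
kappa, L, n = 0.5, 1.0, 200_000
dx = L / n

y = yp = 0.0
for _ in range(n):  # simple forward-Euler integration of the nonlinear ODE
    y += yp * dx
    yp += kappa * (1 + yp ** 2) ** 1.5 * dx

y_lin = kappa * L ** 2 / 2                                 # linear: parabola
y_arc = (1 - math.sqrt(1 - (kappa * L) ** 2)) / kappa      # exact circular arc
print(y_lin, y, y_arc)
```

Even though both curves share the same slope and curvature at the starting point, the integrated deflections differ (0.25 vs about 0.268 here), which mirrors why matching dy/dx = 0 at one point does not force the linear and nonlinear y to coincide: the two ODEs differ everywhere the slope is nonzero.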
{ "domain": "physics.stackexchange", "id": 13328, "tags": "homework-and-exercises, continuum-mechanics, stress-strain, non-linear-systems, structural-beam" }
Algorithm movie ticket booking
Question: Just for fun and practice purpose, I have written a short program in C# console with few classes which will create 30 seats for a cinema during program initialization with these options. Print all seats Seed 20 random taken seats Print all empty seats Print all taken seats I think my algorithm for generate 20 random taken seats doesn't look that efficient. I thought of extra space, which is the taken list. For that, while generated empty seats not equal with required empty seats, I will remove the random seat and push that random seat to taken list. But the problem with this is to print all the complete 30 seats, I have merge taken and empty list. Anyway is my code for review. /// <summary> /// One cinema will have many seats /// </summary> public class Cinema { private readonly List<Seat> seats; public Cinema() { // Maximum number of seat of any cinema is 30. seats = new List<Seat>(30); GenerateCompleteSeats(); } private void GenerateCompleteSeats() { int totalRow = 3; int totalCol = 10; for (int i = 0; i < totalRow; i++) { for (int y = 0; y < totalCol; y++) { seats.Add(new Seat(i+1, y+1)); } } } private void PrintSeats(List<Seat> _seats) { foreach (var item in _seats) { Console.WriteLine($"Row: {item.Row}. Col: {item.Col}. 
Status: {item.Status}"); } } internal void PrintCompleteSeats() { Console.WriteLine("Total seats: " + seats.Count); PrintSeats(seats); } internal void SeedCinemaSeats() { const int totalTakenSeatsNeeded = 20; int totalGeneratedEmptySeats = 0; // totalSeats will keep increasing until equal to total empty seats needed while (totalGeneratedEmptySeats != totalTakenSeatsNeeded) { // Generate a random number var randomNo = new Random(); // Get a random seat var randomSeat = seats[randomNo.Next(seats.Count)]; // Remove random seat //seats.Remove(randomSeat); // Update random seat status if(randomSeat.Status == Seat.EnumStatus.Empty) { randomSeat.Status = Seat.EnumStatus.Taken; totalGeneratedEmptySeats++; } } } internal void PrintEmptySeats() { var empty = seats.Where(s => s.Status == Seat.EnumStatus.Empty); Console.WriteLine("Total Seats: " + empty.Count()); PrintSeats(empty.ToList()); } internal void PrintTakenSeats() { var taken = seats.Where(s => s.Status == Seat.EnumStatus.Taken); Console.WriteLine("Total Seats: " + taken.Count()); PrintSeats(taken.ToList()); } } public class Seat { public int Row { get; } public int Col { get; } public EnumStatus Status { get; set; } /// <summary> /// A valid seat object will always need row and col. /// </summary> /// <param name="row"></param> /// <param name="col"></param> public Seat(int row, int col) { this.Row = row; this.Col = col; this.Status = EnumStatus.Empty; } public enum EnumStatus { Empty, Taken } } static void Main(string[] args) { // Initialize Cinema var cinema = new Cinema(); while (true) { Console.Clear(); Console.WriteLine("1. Print all Seats in Cinema"); Console.WriteLine("2. Seed some sample booked seats"); Console.WriteLine("3. Print empty/available seats"); Console.WriteLine("4. 
Print taken seats"); var opt = Console.ReadLine(); switch (opt) { case "1": cinema.PrintCompleteSeats(); break; case "2": cinema.SeedCinemaSeats(); break; case "3": cinema.PrintEmptySeats(); break; case "4": cinema.PrintTakenSeats(); break; default: break; } Console.ReadKey(); } } Answer: Picking random seats I think my algorithm for generate 20 random taken seats doesn't look that efficient. while (totalGeneratedEmptySeats != totalTakenSeatsNeeded) { // Generate a random number var randomNo = new Random(); // Get a random seat var randomSeat = seats[randomNo.Next(seats.Count)]; // Remove random seat //seats.Remove(randomSeat); // Update random seat status if(randomSeat.Status == Seat.EnumStatus.Empty) { randomSeat.Status = Seat.EnumStatus.Taken; totalGeneratedEmptySeats++; } } You shouldn't recreate a new Random (= random number generator, not a number by itself) for every new number, you should be using the same Random and requesting numbers multiple times. To that effect, put your initialization outside of the loop: var random = new Random(); while(...) { // ... } The randomization process can also be optimized, as you're currently running into possible retries when you randomly select a seat you had already selected before. That's inefficient, and it can be avoided by changing to a "shuffle and draw" approach. Using the example of a deck of cards, if you want to draw 10 random cards, you don't need to draw these cards separately, shuffling the deck again each time (and putting the drawn card back in the deck on top of that). You can simply shuffle the deck and take the top 10 cards. Since the deck is in random order, the top 10 cards are as random as any other group of 10 cards would be. This also avoids having to retry draws, as the top 10 cards of the deck are guaranteed to not overlap with one another.
Using LINQ, this can be done quite tersely: var shuffledSeats = seats.OrderBy(seat => random.Next()); Usually, you pick an ordering method that relates to the seat (e.g. OrderBy(seat => seat.Price)), but in this case, we tell LINQ to order it by a random number, which effectively means that LINQ will randomly order our list. We then take the first 20 seats: var twentyRandomSeats = shuffledSeats.Take(20); and then we register these seats as taken: foreach(var seat in twentyRandomSeats) { seat.Status = Seat.EnumStatus.Taken; } These operations can be chained together: foreach(var seat in seats.OrderBy(seat => random.Next()).Take(20)) { seat.Status = Seat.EnumStatus.Taken; } Whether you chain them or not is up to you. It's a readability argument. It's definitely not wrong to keep the steps separate if you find it clearer. Separating taken seats from empty seats I will remove the random seat and push that random seat to taken list. But the problem with this is to print all the complete 30 seats, I have merge taken and empty list. This can indeed be an issue when you want to handle the complete list too. And you don't want to store three separate lists (all, taken, empty) as they may become desynchronized and it's a generally cumbersome juggling act. Since each seat carries its own status which indicates whether it's taken or not, we can simply keep all seats together in a single list, and then filter that list when we need to.
LINQ allows for a terse, easy-to-read syntax: var emptySeats = seats.Where(seat => seat.Status == Seat.EnumStatus.Empty); var takenSeats = seats.Where(seat => seat.Status == Seat.EnumStatus.Taken); Since your seats list is a class field, you can define the other collections as computed properties (note the IEnumerable<Seat> type, since Where returns a lazy sequence, not a List): class Cinema { private readonly List<Seat> seats = new List<Seat>(); private IEnumerable<Seat> takenSeats => seats.Where(seat => seat.Status == Seat.EnumStatus.Taken); private IEnumerable<Seat> emptySeats => seats.Where(seat => seat.Status == Seat.EnumStatus.Empty); } You can add null-checking here if you need it, but I would generally advise avoiding nulls rather than continually having to check for them. To that effect, I've given seats a default value. As long as you don't explicitly make it null, you don't need to continually null check.
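For readers outside C#: the same two ideas, shuffle-and-take and filter-by-status, in a short Python sketch (the Seat class here is a hypothetical stand-in for the one under review):

```python
import random

# Minimal stand-in for the Seat class from the review (hypothetical names).
class Seat:
    EMPTY, TAKEN = "Empty", "Taken"

    def __init__(self, number):
        self.number = number
        self.status = Seat.EMPTY

rng = random.Random(42)          # one generator, reused for every draw
seats = [Seat(n) for n in range(30)]

# "Shuffle and draw": sample 20 distinct seats in one step, no retries.
for seat in rng.sample(seats, 20):
    seat.status = Seat.TAKEN

# One master list, filtered on demand instead of keeping three lists in sync.
taken = [s for s in seats if s.status == Seat.TAKEN]
empty = [s for s in seats if s.status == Seat.EMPTY]
print(len(taken), len(empty))    # 20 10
```

Because `sample` draws without replacement, the retry loop from the original code disappears entirely.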
{ "domain": "codereview.stackexchange", "id": 37530, "tags": "c#" }
How many satellites does the Milky Way have?
Question: The small and large Magellanic clouds are galaxies that are orbiting our own galaxy. How many such "galactic satellites" do we know of? Answer: Table 2 from Drlica-Wagner et al., 2020 contains a list of 61 confirmed and candidate Milky Way satellites. Two of these are unconfirmed, and two are probable star clusters. 39 are confirmed satellite galaxies with known kinematics and 18 are probable satellite galaxies. These are all in a radius of about 300 kpc from the Sun. Deeper surveys in the coming years will possibly find another 150 satellites, since Nadler et al., 2020 finds that there are likely $220\pm 50$ satellite galaxies of the Milky Way.
{ "domain": "astronomy.stackexchange", "id": 4584, "tags": "milky-way, satellite, dwarf-galaxy" }
Is polarization charge real?
Question: Polarization charge is given by the negative of the divergence of the polarization. Is it real? Are there real charges there? Answer: Polarisation involves displacement of real charges. If you place a slab of glass in a constant electric field perpendicular to it, there will be a negative surface charge density on one side and a positive surface charge density on the other side. There is neutrality away from the surface. At the surface the divergence of the polarisation density gives the value of the surface charge density. The entire negative charge density shifts with respect to the positive one so that they no longer overlap and neutralise one another at the surface.
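In symbols (standard textbook notation, added for reference), the bound charge densities for the slab are

```latex
\rho_b = -\nabla \cdot \vec{P}, \qquad \sigma_b = \vec{P}\cdot\hat{n}
```

Inside the slab $\vec{P}$ is uniform, so $\rho_b = 0$; at the two faces $\vec{P}$ drops abruptly to zero, and that step in $\vec{P}$ is exactly the surface charge the answer describes.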
{ "domain": "physics.stackexchange", "id": 65917, "tags": "polarization" }
What does "bedding-in" mean in casting technology?
Question: I've heard this term used a lot in literature but I don't understand what the technique is they are referring to. Please help me out here. Answer: According to p.33 of this presentation 'bedding in' is a process of packing the molding sand by ramming the sand around and under the pattern until the sand is tightly packed and even with the parting line. There is also a glossary with foundry and casting terms defined here. This is used when the parts to be cast are quite large, often as a step in pit molding or floor molding. A description of pit molding is found at coursehero.com: 12.20.3 Pit Molding Usually large castings are made in pits instead of drag flasks because of their huge size. In pit molding, the sand under the pattern is rammed by bedding-in process. The walls and the bottom of the pit are usually reinforced with concrete and a layer of coke is laid on the bottom of the pit to enable easy escape of gas. The coke bed is connected to atmosphere through vent pipes which provide an outlet to the gases. One box is generally required to complete the mold, runner, sprue, pouring basin and gates are cut in it. Floor bedding is discussed in MANUFACTURING PROCESSES By J. P. KAUSHISH. An excerpt from Google Books (also found here)
{ "domain": "engineering.stackexchange", "id": 3138, "tags": "mechanical-engineering, manufacturing-engineering, casting, production-technology" }
Difference between fermions and bosons in Statistical Mechanics
Question: I am an undergraduate student in Physics and Mathematics. I am now preaparing for my final exam in Statistical Mechanics and I would like some help in a particular point. So here it goes: In the introductory course of Quantum Mechanics we have discussed how a global phase does not affect the physical state of the system (to be specific $|\psi\rangle$ and $-|\psi\rangle$ are the same state). However, in the course of Statistical Mechanics, we predicted the existence of fermions and bosons by applying an elementary permutation of two particles and seeing that the only possibilities are $\hat{P}|\psi\rangle = \pm |\psi\rangle$ What I have trouble understanding is that if the state is the same after the permutation, why do the particles have such a different behaviour. Attempt at solution: In my head, the way this could work is that indeed $|\psi\rangle$ and $-|\psi\rangle$ are the same state but the behaviour of the wave function (symmetric or antisymmetric with respect to particle interchange) is what determines if the system is a fermionic or a bosonic. I am aware that the question is a bit wishy-washy, but I have been looking around in different sources and no one seems to adress this particular concern. Any help or comments are appreciated (PS: This is my first question ever, so any tips in question formulation are also welcome) Answer: The main difference is more visible if you apply the permutation to some coordinates, say $x_1, x_2$ of the particles. These might be positions or spins or whatever you want, a property of the two particles where $x_1$ is the property of the particle $1$ and $x_2$ of the particle $2$. 
Let's work with the wave function $\langle x_1, x_2|\psi\rangle$ where we know, $\hat{P}$ being self adjoint $$\pm\langle x_1, x_2|\psi\rangle=\langle x_1, x_2|\hat{P}|\psi\rangle=\langle x_2, x_1|\psi\rangle$$ Now consider the fermonic case, the one with a minus sign$$\langle x_1, x_2|\psi\rangle=-\langle x_2, x_1|\psi\rangle$$ and look at the case $x_1=x_2=x$ $$\langle x, x|\psi\rangle=-\langle x, x|\psi\rangle$$ This clearly implies $\langle x, x|\psi\rangle=0$, lo and behold, two fermions can't be in the same quantum state! Bosons have no such problem, as there is no minus sign. This is the Pauli exclusion principle and it's the main "tangible" difference between bosons and fermions.
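You can see the same cancellation numerically. Below is a tiny sketch (made-up single-particle wave functions, not from the original) showing that an antisymmetrized two-particle amplitude flips sign under exchange and vanishes when both particles sit at the same point:

```python
import math

# Two hypothetical single-particle states; any two distinct functions work.
def phi_a(x):
    return math.exp(-x**2)

def phi_b(x):
    return x * math.exp(-x**2)

# Fermionic (antisymmetrized) two-particle amplitude:
#   psi(x1, x2) = phi_a(x1) phi_b(x2) - phi_b(x1) phi_a(x2)
def psi_fermion(x1, x2):
    return phi_a(x1) * phi_b(x2) - phi_b(x1) * phi_a(x2)

# Swapping the particles flips the sign ...
print(psi_fermion(0.3, 1.1), -psi_fermion(1.1, 0.3))
# ... and the amplitude for both particles at the same point vanishes:
print(psi_fermion(0.7, 0.7))  # 0.0
```

The bosonic (symmetrized) combination, with a plus sign instead, has no such zero, which is the algebraic content of the Pauli exclusion principle.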
{ "domain": "physics.stackexchange", "id": 58990, "tags": "quantum-mechanics, statistical-mechanics, fermions, bosons" }
Variation of the fermion Fock state under small gauge transformation
Question: Suppose a quantum state of fermions $\Psi_{f}$ in the presence of an external gauge field $A_{i}^{a}$ in the temporal gauge $A_{0}^{a} = 0$: $$ |\Psi\rangle = \int dA^{a}_{i}|A_{i}^{a}\rangle \otimes|\Psi_{f}(A_{i}^{a})\rangle, \quad i = 1,2,3 $$ Here $a$ is a color index, $|A_{i}^{a}\rangle$ is a coherent gauge field state with VEV $A_{i}^{a}$, and $|\Psi_{f}(A_{i}^{a})\rangle$ is the Fock state of fermions in the presence of the external field $A_{i}^{a}$. The Gauss law is satisfied: $$ G|\Psi\rangle = 0, \quad G = D_{i}A^{i} - J^{0}, $$ where $J^{0}$ is the charge density. How does one prove that under a small gauge transformation $$ \Omega = 1 +\omega, \quad A_{i}^{a} \to (A_{i}^{a})^{\Omega} = A_{i}^{a} + (D_{i}\omega )^{a} $$ the fermion state $|\Psi_{f}(A_{i}^{a})\rangle$ changes as $$ |\Psi_{f}(A_{i}^{a}+(D_{i}\omega)^{a})\rangle = \left(1+\int d^{3}\mathbf r \,\text{tr}(\omega J^{0})\right)|\Psi_{f}(A_{i}^{a})\rangle ? $$ Answer: First, let me correct the Gauss law constraint, whose first term should include the electric fields rather than the vector potentials: $$G = D_iE^i_a - J^0_a=0$$ (written explicitly in Lie algebra components).
In temporal gauge, the electric fields are the conjugate momenta of the vector potentials, thus at the quantum level: $$[E^i_a(x), A_j^b(y)] = \delta^i_j \delta^b_a \delta^3(x-y)$$ Thus they must be represented on the wave functionals by the functional derivatives with respect to the gauge potentials $$E^i_a(x) = \frac{\delta}{\delta A^{ia}(x)}$$ Therefore: $$|\Psi_{f}(A_{i}^{a}+(D_{i}\omega)^{a})\rangle = |\Psi_{f}(A_{i}^{a})\rangle + \int d^3x(D_{i}\omega)^{a} \frac{\delta |\Psi_{f}(A_{i}^{a})\rangle}{\delta A^{ia}(x)}$$ $$ = |\Psi_{f}(A_{i}^{a})\rangle + \int d^3x(D_{i}\omega)^{a} E_i^a(x)|\Psi_{f}(A_{i}^{a})\rangle$$ Performing an integration by parts: $$ = |\Psi_{f}(A_{i}^{a})\rangle - \int d^3x(D_{i}E_i)^{a} \omega^a(x)|\Psi_{f}(A_{i}^{a})\rangle$$ Then using the Gauss law: $$ = |\Psi_{f}(A_{i}^{a})\rangle - \int d^3x J^{0a} \omega^a(x)|\Psi_{f}(A_{i}^{a})\rangle$$
{ "domain": "physics.stackexchange", "id": 33231, "tags": "hilbert-space, gauge-theory, gauss-law, perturbation-theory" }
Could a human feel the black body radiation of another human standing behind her?
Question: I've been thinking about infrared radiation and noticing more and more how the human skin actually seems pretty sensitive to it. You can easily feel a bonfire from several meters away, far away from where any convection would heat your skin. When you open the hood of your car you can feel the heat from the engine even standing back a step or two (away from the updraft of hot air). Now try this: hold the palms of your hands facing each other a couple of inches apart and keep them like that for a couple of seconds. Then slowly (to avoid wind cooling) move one palm away so they no longer face each other. Do you feel it? For me there's a noticeable difference in warmth. Is that the skin detecting black body radiation from other skin? This could be easily blind-tested with a friend; you hold your palm out and look the other way, then see if you can correctly tell when your friend's palm is near your palm and when it's not. Maybe the human skin is even able to detect black body radiation from another human standing behind her? Kind of like a sixth sense. It could explain the sensation of "I knew someone was there". I've noticed also that when you stand close to a concrete wall that was heated by the sun, but the sun has just set, you can tell which direction the wall is in just from the heat on your body. Is this all placebo or does it actually work that way?
In practice, it's going to depend on all sorts of factors: how cold is the air in the middle, is the wind blowing, how close are they, etc? Any experiment to measure this would need to be properly blinded with the person doing the sensing properly shielded from detecting the other person by other means (for example: hearing them, seeing their shadow, smelling them).
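To put numbers on "in principle, yes": a rough Stefan-Boltzmann estimate (all figures below are illustrative assumptions, not measurements) of how much radiative heat loss a palm-sized patch of skin is spared when another warm palm replaces the cooler room background:

```python
# Rough Stefan-Boltzmann estimate of the radiative heat a palm loses to the
# room, versus to another palm at the same temperature facing it.
# All numbers are illustrative assumptions.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
EMISSIVITY = 0.98       # human skin is nearly a black body in the infrared
T_SKIN = 306.0          # ~33 C skin temperature, K
T_ROOM = 293.0          # ~20 C walls/background, K
AREA = 0.01             # ~palm-sized facing area, m^2

def net_radiative_power(t_hot, t_cold, area):
    """Net radiant power (W) flowing from a surface at t_hot to one at t_cold."""
    return EMISSIVITY * SIGMA * area * (t_hot**4 - t_cold**4)

p_to_room = net_radiative_power(T_SKIN, T_ROOM, AREA)   # heat you lose normally
p_to_palm = net_radiative_power(T_SKIN, T_SKIN, AREA)   # loss fully suppressed
print(f"lost to room: {p_to_room:.2f} W, lost to facing palm: {p_to_palm:.2f} W")
```

The difference comes out below a watt over a palm-sized area, small but plausibly within the sensitivity of warmth receptors, consistent with the "it depends on the neurons" caveat above.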
{ "domain": "physics.stackexchange", "id": 33440, "tags": "thermal-radiation" }
How is magnetic field created in an atom according to quantum model
Question: How is the magnetic field created in an atom according to the quantum model? I mean, we are taught about the magnetic field via Bohr's model, which assumes that the electron revolves around the nucleus in a circular path. But in reality it doesn't do so. So the field will keep changing its orientation. And it would be hard for them to align in an external magnetic field if they are ferromagnetic or paramagnetic. Answer: Observables like the quantised energy levels and quantised angular momentum of an atom are obtained by finding eigensolutions of the Schrödinger Equation (here for the Hydrogen atom). Separating the equation into three parts yields the colatitude and azimuthal equations, which allow one to calculate the quantised angular momentum of the hydrogen atom, giving rise to the electron's orbital magnetic moment. The electron itself also has an intrinsic so-called spin magnetic moment which can only take on two values. The net magnetic moment of an atom is the vector sum of its orbital and spin magnetic moments.
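For scale (numbers added here, not from the original answer): the natural unit for these moments is the Bohr magneton $\mu_B = e\hbar/2m_e$, and the orbital moment for quantum number $l$ has magnitude $\mu_B\sqrt{l(l+1)}$:

```python
import math

# Scale of atomic magnetic moments: the Bohr magneton mu_B = e*hbar / (2*m_e),
# using CODATA values in SI units.
E_CHARGE = 1.602176634e-19     # elementary charge, C
HBAR = 1.054571817e-34         # reduced Planck constant, J s
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

mu_B = E_CHARGE * HBAR / (2 * M_ELECTRON)
print(f"Bohr magneton: {mu_B:.4e} J/T")   # ~9.274e-24 J/T

# Orbital magnetic moment magnitude for the lowest few l values:
for l in (0, 1, 2):
    print(f"l={l}: |mu_orbital| = {mu_B * math.sqrt(l * (l + 1)):.3e} J/T")
```

Note that $l=0$ states carry no orbital moment at all, so for them only the spin moment contributes to the atom's net moment.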
{ "domain": "physics.stackexchange", "id": 26233, "tags": "magnetic-fields, electrons, atomic-physics, electric-fields, atoms" }
Do I need to correct predict_proba by training fraction?
Question: Many algorithms provide a predict_proba function indicating probability of a case to belong to that class (e.g. https://scikit-learn.org/stable/modules/generated/sklearn.svm.libsvm.predict_proba.html ). Quoting from the answer by @Media at Explain Binary Classification with output 0.5 (True) Suppose that you have a car classifier for distinguishing between white and blue cars. during training you had 100 images of blue car and 20 images of white car. During recall phase, if for an arbitrary image you have 50 percent for each class... If blue cars accounted for 83% of training cases, and I get predict_proba for a car to be blue to be 0.5, do I take the probability to be 0.5 or do I need to correct it by a factor of 0.83? If I do need to correct, do I multiply the factor (0.5*0.83) or divide it (0.5/0.83) to get the correct probability? Answer: If blue cars accounted for 83% of training cases, and I get predict_proba for a car to be blue to be 0.5, do I take the probability to be 0.5 or do I need to correct it by a factor of 0.83? First of all, the fact that your training dataset consists of 83% blue cars is not the same thing as the probability of the label being blue being 83%. This is the case only if the classifier is calibrated. sklearn team actually document the uncalibrated performance of their popular classifiers: They state that: Well calibrated classifiers are probabilistic classifiers for which the output of the predict_proba method can be directly interpreted as a confidence level. For instance a well calibrated (binary) classifier should classify the samples such that among the samples to which it gave a predict_proba value close to 0.8, approx. 80% actually belong to the positive class. If you would like to be able to interpret the probability of the label as a confidence interval, you should calibrate your classifier using CalibratedClassifierCV. 
You can do that after you have trained your classifier, by setting cv="prefit" in CalibratedClassifierCV. Finally, if you are dealing with unbalanced classes and you want to take that into account during training, make sure to use the class_weight parameter, so that under-represented classes are given a higher weight.
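To make the quoted definition of "well calibrated" concrete, here is a small simulation (plain Python, no sklearn; the data is synthetic) showing that for a calibrated scorer, the samples scored near 0.8 really are positive about 80% of the time:

```python
import random

# What "well calibrated" means, illustrated by construction: draw scores,
# then draw each label with probability equal to its score. Such a scorer
# is perfectly calibrated by definition, regardless of the class balance.
rng = random.Random(0)

scores = [rng.random() for _ in range(200_000)]
labels = [1 if rng.random() < p else 0 for p in scores]

# Empirical positive rate among samples scored close to 0.8:
bucket = [y for p, y in zip(scores, labels) if 0.78 <= p <= 0.82]
frac = sum(bucket) / len(bucket)
print(f"{frac:.3f}")   # close to 0.800
```

CalibratedClassifierCV is essentially a tool for remapping a model's raw scores until they pass this kind of check on held-out data.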
{ "domain": "datascience.stackexchange", "id": 5536, "tags": "classification, probability, probability-calibration" }
In-proc event dispatching through IoC container
Question: Here are the sender and handler interfaces: public interface ISender { Task SendAsync(object e); } public interface IHandler<in TEvent> { Task HandleAsync(TEvent e); } So I register in the IoC container a sender service implementation, which dispatches events to all the compatible IHandler<in T> implementations. I use Autofac with a contravariance source, but it could be something else: [Service] public class Sender : ISender { public Sender(IServiceProvider provider) => Provider = provider; IServiceProvider Provider { get; } public async Task SendAsync(object e) { var eventType = e.GetType(); var handlerType = typeof(IHandler<>).MakeGenericType(eventType); var handlerListType = typeof(IEnumerable<>).MakeGenericType(handlerType); var method = handlerType.GetMethod("HandleAsync", new[] { eventType }); var handlers = ((IEnumerable)Provider.GetService(handlerListType)).OfType<object>(); await Task.WhenAll( handlers.Select(h => Task.Run(() => (Task)method.Invoke(h, new[] { e })) .ContinueWith(_ => { }))); } }
public class Sender : ISender { private readonly MethodInfo handlerMethodInfo; private readonly IServiceProvider provider; public Sender(IServiceProvider provider) { this.provider = provider; Func<object, Task> handlerMethod = HandleAsync<object>; handlerMethodInfo = handlerMethod.Method.GetGenericMethodDefinition(); } private async Task HandleAsync<TEvent>(TEvent e) { var handlers = provider.GetService<IEnumerable<IHandler<TEvent>>>(); await Task.WhenAll(handlers.Select(h => h.HandleAsync(e))); } public async Task SendAsync(object e) { // Call the HandleAsync method var eventType = e.GetType(); var method = handlerMethodInfo.MakeGenericMethod(eventType); await (Task)method.Invoke(this, new object[] { e }); } } In the constructor we just grab a reference to the method we need to call, in a way that's type safe. Then in SendAsync we use reflection to invoke that private method, which in turn calls the registered handlers. Now the reflection code is a lot smaller, but the main logic is still in "normal" code in the HandleAsync method. We could turn SendAsync into expression trees so that reflection is only needed once, but they come with a high price: typically the code would need to be called ~500 times to make up for the speed hit of compiling the expression vs. plain reflection.
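The shape of this pattern carries over to other languages. Here is a deliberately simplified Python analogue (no container, no contravariance; the event and handler names are made up) of the same "resolve every handler for the event's type and await them all" idea:

```python
import asyncio
from collections import defaultdict

# Minimal in-process dispatcher: fan an event out to every handler
# registered for its exact type, awaiting all of them concurrently.
class Sender:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    async def send(self, event):
        handlers = self._handlers[type(event)]
        await asyncio.gather(*(h(event) for h in handlers))

class UserCreated:
    def __init__(self, name):
        self.name = name

received = []

async def audit(e):
    received.append(f"audit:{e.name}")

async def notify(e):
    received.append(f"notify:{e.name}")

sender = Sender()
sender.subscribe(UserCreated, audit)
sender.subscribe(UserCreated, notify)
asyncio.run(sender.send(UserCreated("ada")))
print(sorted(received))   # ['audit:ada', 'notify:ada']
```

The dynamic dispatch that costs reflection in C# is a plain dictionary lookup on `type(event)` here, which is why the reflection-heavy part of the C# version is worth isolating in one small method.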
{ "domain": "codereview.stackexchange", "id": 39895, "tags": "c#, reflection, task-parallel-library" }
rosmake rgbdslam_freiburg
Question: Hello, I have problem with compiling rgbdslam_freiburg in ROS Fuerte, I have installed rgbdslam_freiburg and g2o in fuerte_workspace using svn co http://alufr-ros-pkg.googlecode.com/svn/trunk/rgbdslam_freiburg svn co https://code.ros.org/svn/ros-pkg/stacks/vslam/trunk/g2o also installed ros-fuerte-vision-opencv , ros-fuerte-octomap , ros-fuerte-octomap-mapping and modified CMakeLists.txt of rgbdslam to adapt with g2o package dependent and remarked SiftGPU compiled first g2o without failure, but got error compiling rgbdslam as following /home/reza/fuerte_workspace/g2o/include/g2o/solvers/cholmod/linear_solver_cholmod.h:273: undefined reference to `g2o::get_monotonic_time()' CMakeFiles/rgbdslam.dir/src/transformation_estimation.o: In function `g2o::G2OBatchStatistics::globalStats()': /home/reza/fuerte_workspace/g2o/include/g2o/core/batch_stats.h:73: undefined reference to `g2o::G2OBatchStatistics::_globalStats' CMakeFiles/rgbdslam.dir/src/transformation_estimation.o: In function `g2o::LinearSolverCholmod<Eigen::Matrix<double, -1, -1, 0, -1, -1> >::computeSymbolicDecomposition(g2o::SparseBlockMatrix<Eigen::Matrix<double, -1, -1, 0, -1, -1> > const&)': /home/reza/fuerte_workspace/g2o/include/g2o/solvers/cholmod/linear_solver_cholmod.h:332: undefined reference to `g2o::get_monotonic_time()' CMakeFiles/rgbdslam.dir/src/transformation_estimation.o: In function `g2o::LinearSolverCholmod<Eigen::Matrix<double, -1, -1, 0, -1, -1> >::solvePattern(g2o::SparseBlockMatrix<Eigen::Matrix<double, -1, -1, 0, -1, -1> >&, std::vector<std::pair<int, int>, std::allocator<std::pair<int, int> > > const&, g2o::SparseBlockMatrix<Eigen::Matrix<double, -1, -1, 0, -1, -1> > const&)': /home/reza/fuerte_workspace/g2o/include/g2o/solvers/cholmod/linear_solver_cholmod.h:235: undefined reference to `g2o::MarginalCovarianceCholesky::MarginalCovarianceCholesky()' /home/reza/fuerte_workspace/g2o/include/g2o/solvers/cholmod/linear_solver_cholmod.h:236: undefined reference to 
`g2o::MarginalCovarianceCholesky::setCholeskyFactor(int, int*, int*, double*, int*)' /home/reza/fuerte_workspace/g2o/include/g2o/solvers/cholmod/linear_solver_cholmod.h:238: undefined reference to `g2o::MarginalCovarianceCholesky::computeCovariance(g2o::SparseBlockMatrix<Eigen::Matrix<double, -1, -1, 0, -1, -1> >&, std::vector<int, std::allocator<int> > const&, std::vector<std::pair<int, int>, std::allocator<std::pair<int, int> > > const&)' CMakeFiles/rgbdslam.dir/src/transformation_estimation.o: In function `g2o::G2OBatchStatistics::globalStats()': /home/reza/fuerte_workspace/g2o/include/g2o/core/batch_stats.h:73: undefined reference to `g2o::G2OBatchStatistics::_globalStats' CMakeFiles/rgbdslam.dir/src/transformation_estimation.o: In function `g2o::LinearSolverCholmod<Eigen::Matrix<double, -1, -1, 0, -1, -1> >::solvePattern(g2o::SparseBlockMatrix<Eigen::Matrix<double, -1, -1, 0, -1, -1> >&, std::vector<std::pair<int, int>, std::allocator<std::pair<int, int> > > const&, g2o::SparseBlockMatrix<Eigen::Matrix<double, -1, -1, 0, -1, -1> > const&)': /home/reza/fuerte_workspace/g2o/include/g2o/solvers/cholmod/linear_solver_cholmod.h:245: undefined reference to `g2o::MarginalCovarianceCholesky::~MarginalCovarianceCholesky()' /home/reza/fuerte_workspace/g2o/include/g2o/solvers/cholmod/linear_solver_cholmod.h:245: undefined reference to `g2o::MarginalCovarianceCholesky::~MarginalCovarianceCholesky()' CMakeFiles/rgbdslam.dir/src/transformation_estimation.o: In function `g2o::LinearSolverCholmod<Eigen::Matrix<double, -1, -1, 0, -1, -1> >::solveBlocks(double**&, g2o::SparseBlockMatrix<Eigen::Matrix<double, -1, -1, 0, -1, -1> > const&)': /home/reza/fuerte_workspace/g2o/include/g2o/solvers/cholmod/linear_solver_cholmod.h:194: undefined reference to `g2o::MarginalCovarianceCholesky::MarginalCovarianceCholesky()' /home/reza/fuerte_workspace/g2o/include/g2o/solvers/cholmod/linear_solver_cholmod.h:195: undefined reference to `g2o::MarginalCovarianceCholesky::setCholeskyFactor(int, 
int*, int*, double*, int*)' /home/reza/fuerte_workspace/g2o/include/g2o/solvers/cholmod/linear_solver_cholmod.h:197: undefined reference to `g2o::MarginalCovarianceCholesky::computeCovariance(double**, std::vector<int, std::allocator<int> > const&)' CMakeFiles/rgbdslam.dir/src/transformation_estimation.o: In function `g2o::G2OBatchStatistics::globalStats()': /home/reza/fuerte_workspace/g2o/include/g2o/core/batch_stats.h:73: undefined reference to `g2o::G2OBatchStatistics::_globalStats' CMakeFiles/rgbdslam.dir/src/transformation_estimation.o: In function `g2o::LinearSolverCholmod<Eigen::Matrix<double, -1, -1, 0, -1, -1> >::solveBlocks(double**&, g2o::SparseBlockMatrix<Eigen::Matrix<double, -1, -1, 0, -1, -1> > const&)': /home/reza/fuerte_workspace/g2o/include/g2o/solvers/cholmod/linear_solver_cholmod.h:204: undefined reference to `g2o::MarginalCovarianceCholesky::~MarginalCovarianceCholesky()' /home/reza/fuerte_workspace/g2o/include/g2o/solvers/cholmod/linear_solver_cholmod.h:204: undefined reference to `g2o::MarginalCovarianceCholesky::~MarginalCovarianceCholesky()' CMakeFiles/rgbdslam.dir/src/transformation_estimation.o: In function `g2o::LinearSolverCholmod<Eigen::Matrix<double, -1, -1, 0, -1, -1> >::solve(g2o::SparseBlockMatrix<Eigen::Matrix<double, -1, -1, 0, -1, -1> > const&, double*, double*)': /home/reza/fuerte_workspace/g2o/include/g2o/solvers/cholmod/linear_solver_cholmod.h:124: undefined reference to `g2o::get_monotonic_time()' CMakeFiles/rgbdslam.dir/src/transformation_estimation.o: In function `g2o::G2OBatchStatistics::globalStats()': /home/reza/fuerte_workspace/g2o/include/g2o/core/batch_stats.h:73: undefined reference to `g2o::G2OBatchStatistics::_globalStats' CMakeFiles/rgbdslam.dir/src/transformation_estimation.o: In function `g2o::LinearSolverCholmod<Eigen::Matrix<double, -1, -1, 0, -1, -1> >::solve(g2o::SparseBlockMatrix<Eigen::Matrix<double, -1, -1, 0, -1, -1> > const&, double*, double*)': 
/home/reza/fuerte_workspace/g2o/include/g2o/solvers/cholmod/linear_solver_cholmod.h:149: undefined reference to `g2o::get_monotonic_time()' CMakeFiles/rgbdslam.dir/src/graph_manager2.o: In function `~RobustKernel': /home/reza/fuerte_workspace/g2o/include/g2o/core/robust_kernel.h:57: undefined reference to `vtable for g2o::RobustKernel' /home/reza/fuerte_workspace/g2o/include/g2o/core/robust_kernel.h:57: undefined reference to `vtable for g2o::RobustKernel' collect2: ld returned 1 exit status make[3]: *** [../bin/rgbdslam] Error 1 make[3]: Leaving directory `/home/reza/fuerte_workspace/rgbdslam_freiburg/rgbdslam/build' make[2]: *** [CMakeFiles/rgbdslam.dir/all] Error 2 make[2]: Leaving directory `/home/reza/fuerte_workspace/rgbdslam_freiburg/rgbdslam/build' make[1]: *** [all] Error 2 make[1]: Leaving directory `/home/reza/fuerte_workspace/rgbdslam_freiburg/rgbdslam/build' -------------------------------------------------------------------------------} [ rosmake ] Output from build of package rgbdslam written to:ve 48/49 Complete ] [ rosmake ] /home/reza/.ros/rosmake/rosmake_output-20130928-160820/rgbdslam/build_output.log [rosmake-0] Finished <<< rgbdslam [FAIL] [ 644.51 seconds ] [ rosmake ] Halting due to failure in package rgbdslam. Active 48/49 Complete ] [ rosmake ] Waiting for other threads to complete. [ rosmake ] Results: [ rosmake ] Cleaned 49 packages. [ rosmake ] Built 49 packages with 1 failures. [ rosmake ] Summary output to directory [ rosmake ] /home/reza/.ros/rosmake/rosmake_output-20130928-160820 any help? , please Reza, Originally posted by Reza on ROS Answers with karma: 116 on 2013-09-28 Post score: 0 Original comments Comment by Felix Endres on 2013-10-21: Rgbdslam depends on the fuerte package of g2o not the latest version from git Answer: sorry, it was my fault, I should not remove from rgbdslam manifest.xml, it is now compiled. Originally posted by Reza with karma: 116 on 2013-09-29 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 15692, "tags": "ros, rgbdslam-freiburg" }
Potential energy of Hydroelectric plant before the dam was built
Question: So the energy potential of a hydroelectric dam is the difference between the head and the base. Before the dam was built, where did all that energy go? I can understand that for a long valley that is dammed up, the energy was released from the headwaters down to where the dam was built, over a large area. However, in the possibly simpler case of a waterfall, you have the "instantaneous" power of the difference between the bottom and the head. Where is all that energy going? Is it just dissipated through vibration and sound? Answer: The energy is converted to heat. The friction of the water with the river bed, and with itself, converts the energy to heat.
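A quick order-of-magnitude check of this (round illustrative numbers): if all of the potential energy $mgh$ ends up as heat in the water, the temperature rise is $\Delta T = gh/c$, independent of the mass, and it is tiny because water's specific heat is so large:

```python
# Temperature rise of falling water if all potential energy becomes heat:
#   m*g*h = m*c*dT  =>  dT = g*h / c
G = 9.81          # gravitational acceleration, m/s^2
C_WATER = 4186.0  # specific heat of water, J/(kg K)

def temperature_rise(drop_height_m):
    return G * drop_height_m / C_WATER

for h in (50, 100, 500):
    print(f"{h:4d} m drop -> {temperature_rise(h):.3f} K warmer")
```

A 100 m drop warms the water by only about a quarter of a kelvin, which is why the dissipation along a river or at a waterfall goes unnoticed.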
{ "domain": "physics.stackexchange", "id": 19522, "tags": "fluid-dynamics, renewable-energy" }
Poynting's theorem ambiguity
Question: So on Wikipedia, for Poynting's theorem we have: $-\frac{\partial u}{\partial t}= \nabla \cdot\vec S + \vec j \cdot\vec E$ where: $-\frac{\partial u}{\partial t}$ is the rate of change of the energy density in the volume. $\nabla\cdot \vec S$ is the energy flow out of the volume, given by the divergence of the Poynting vector $\vec S$. $\vec j \cdot\vec E$ is the rate at which the fields do work on the charges in the volume ($\vec j$ is the current density corresponding to the motion of charge, $\vec E$ is the electric field, and $\cdot$ is the dot product). I don't understand this 3rd component of the equation. How is the rate of work connected with the dot product of the current density and the electric field? Since Poynting's theorem considers electromagnetic fields, why do we see only the relationship between the current density and the electric field $\vec E$, but not the relationship between $\vec j$ and $\vec B$? What is the physical meaning of this scalar product? Answer: Poynting's theorem can be derived without assuming energy conservation. It can be derived entirely from Maxwell's equations and work $= \vec f \cdot d\vec r$. Consider a charge distribution.
Then the force element on a volume $dv$ is given by $d\vec{f}= (\vec{E}+\vec{v}\times\vec{B})\rho dv$ $d\vec{f}= \vec{E}\rho dv+(\vec{v}\times\vec{B})\rho dv$ An infinitesimal work element is then given by $d\vec{f} \cdot \vec{dr} = \vec{E} \cdot \vec{dr}\, \rho dv+(\vec{v}\times\vec{B})\cdot \vec{dr}\, \rho dv$ $dw = \vec{E} \cdot \vec{dr}\, \rho dv+(\vec{v}\times\vec{B})\cdot \vec{dr}\, \rho dv$ Here $\vec{dr}$ isn't a line element of the form $r(t)$; it is instead a position-vector-field line element representing the path that each charge element $\rho dv$ takes, which is equivalent to $\vec{dr} = \vec{V} dt$. Thus $dw = \rho\vec{E} \cdot \vec{V}dt\, dv+\rho(\vec{v}\times\vec{B})\cdot \vec{V}dt\, dv$ The second term is zero, since a vector that is by definition perpendicular to $\vec{V}$, dotted with $\vec{V}$, is zero (aka, magnetic fields do no work). So the quantity $dw = \rho\vec{E} \cdot \vec{V}dt\, dv$ represents the infinitesimal amount of work done on a single charge element $dq$ moving a tiny infinitesimal distance $\vec{dr}$ (or the distance moved by that charge with velocity $\vec{V}$ in time $dt$). That means the rate at which work is done on a charge $dq$ (divide by $dt$) must be given by $\frac{dw}{dt} = \rho\vec{E} \cdot \vec{V} dv$ which, substituting in the definition of $\vec{J}$, is equivalent to $\frac{dw}{dt} = \vec{E} \cdot \vec{J} dv$ This is the rate at which work is being done on an infinitesimal charge $dq$. Lastly, to find the TOTAL rate at which work is being done on the charges in a volume, you simply integrate this over the volume: $\frac{dW}{dt} = \iiint \vec{E} \cdot \vec{J} dv$ I have chosen to work with the rate of work on a single $dq$ and then integrate last; you could have done the same integrating first. From here you can then derive Poynting's theorem by eliminating $\vec{J}$ in favour of the fields using Ampère's law. To answer those last questions: $\vec{E} \cdot \vec{J}$ intuitively picks out the component of the charge's velocity in the direction of the field, which is exactly what appears in the work expression. There is also no relationship between $\vec{J}$ and $\vec{B}$ because magnetic fields do no work.
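For completeness, the final step the answer alludes to ("eliminating J in favour of the fields using Ampère's law") goes as follows in SI units, using the Ampère-Maxwell law, the identity $\nabla\cdot(\vec E\times\vec B)=\vec B\cdot(\nabla\times\vec E)-\vec E\cdot(\nabla\times\vec B)$, and Faraday's law:

```latex
\vec{E}\cdot\vec{J}
  = \frac{1}{\mu_0}\,\vec{E}\cdot(\nabla\times\vec{B})
    - \varepsilon_0\,\vec{E}\cdot\frac{\partial\vec{E}}{\partial t}
  = -\nabla\cdot\left(\frac{\vec{E}\times\vec{B}}{\mu_0}\right)
    - \frac{\partial}{\partial t}\left(\frac{\varepsilon_0 E^2}{2}
      + \frac{B^2}{2\mu_0}\right)
```

which is $\vec E\cdot\vec J = -\nabla\cdot\vec S - \partial u/\partial t$, i.e. exactly the statement the question started from, with $\vec S = \vec E\times\vec B/\mu_0$ and $u = \varepsilon_0 E^2/2 + B^2/2\mu_0$.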
{ "domain": "physics.stackexchange", "id": 84949, "tags": "electromagnetism, poynting-vector" }
Is there a total binary computable function that specifies Turing machines with nonempty domain?
Question: I am working through Bridge's computability book and I came across this problem that does not have an answer. I don't know how to proceed; any help is much appreciated. Answer: My tip for you: check out the second recursion theorem and plug $\Psi$ in for $Q$. If you can't continue on your own, here is my solution: The second recursion theorem implies that there is an $m$ such that $$\varphi_m \simeq \lambda n.\Psi(m,n)$$ ($\simeq$ means that the left and right expressions are equal for every input $n$, and if one expression is undefined for a certain $n$, the same holds for the other expression.) Case 1: $\Psi(m,n) = 1$. This implies $f(m) = 0$ and thus $\text{domain}(\varphi_m) = \emptyset$. But notice that $\varphi_m \simeq 1$ is implied by our assumption, so $\varphi_m$ is total and its domain is all of $\mathbb{N}$, in particular nonempty. That's a contradiction. Case 2: $\Psi(m,n) = \text{undefined}$. This implies $f(m) = 1$ and thus $\text{domain}(\varphi_m)\neq\emptyset$. But once again $\varphi_m \simeq \lambda n.\Psi(m,n) \Rightarrow \varphi_m \simeq \text{undefined} \Rightarrow \text{domain}(\varphi_m) = \emptyset$, which is a contradiction.
{ "domain": "cs.stackexchange", "id": 16577, "tags": "computability" }
unable to increase speed of the bot using dwa_local_parameter
Question: I am trying to increase the speed of a skid-steering bot using the dwa local planner, but the speed doesn't increase above 0.5. I have tried manipulating the acceleration, velocity and translational limits in the parameter file, but the result is still the same. I don't get what is limiting the bot from increasing its speed above 0.5 m/s.

Update: I checked the situation using rqt_reconfigure and the bot is able to react to any value of max_vel_x throughout the slider. But when I mention the same values in the yaml file, the bot sticks to 0.5 m/s for vel > 0.5.

Below is the content of my dwa_local_planner_params.yaml file:

```yaml
DWAPlannerROS:
  max_vel_x: 1.5
  min_vel_x: 0.0
  max_vel_y: 0.0
  min_vel_y: 0.0
  max_trans_vel: 1.5
  min_trans_vel: 0.0
  trans_stopped_vel: 0.1
  max_rot_vel: 5.0
  min_rot_vel: 0.4
  rot_stopped_vel: 0.4
  acc_lim_x: 2.5
  acc_lim_theta: 2.0
  acc_lim_y: 2.5
  yaw_goal_tolerance: 0.3
  xy_goal_tolerance: 0.5
  sim_time: 1.0
  vx_samples: 6
  vy_samples: 1
  vtheta_samples: 20
  path_distance_bias: 90.0
  goal_distance_bias: 24.0
  occdist_scale: 0.50
  forward_point_distance: 0.325
  stop_time_buffer: 0.2
  scaling_speed: 0.25
  max_scaling_factor: 0.2
  oscillation_reset_dist: 0.05
  publish_traj_pc: true
  publish_cost_grid_pc: true
  global_frame_id: odom
```

Launch File:

```xml
<node pkg="agv_navigation" name="controller" type="goal_sequence.py"></node>
<node name="robot_state_publisher" pkg="robot_state_publisher" type="state_publisher"/>
<param name="publish_frequency" type="double" value="50.0"/>
<arg name="map_file" value="$(find agv_navigation)/map/world.yaml"/>
<node pkg="map_server" type="map_server" name="map_server" args="$(arg map_file)" output="screen" />
<include file="$(find agv_navigation)/launch/amcl.launch" />
<include file="$(find agv_description)/launch/agv_visualize.launch"/>
<node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
  <param name="base_local_planner" value="dwa_local_planner/DWAPlannerROS"/>
  <rosparam file="$(find agv_navigation)/config/costmap_common_params.yaml" command="load" ns="global_costmap" />
  <rosparam file="$(find agv_navigation)/config/costmap_common_params.yaml" command="load" ns="local_costmap" />
  <rosparam file="$(find agv_navigation)/config/local_costmap_params.yaml" command="load" />
  <rosparam file="$(find agv_navigation)/config/global_costmap_params.yaml" command="load" />
  <rosparam file="$(find agv_navigation)/config/base_local_planner_params.yaml" command="load" />
  <rosparam file="$(find agv_navigation)/config/move_base_params.yaml" command="load"/>
  <rosparam file="$(find agv_navigation)/config/dwa_local_planner_params.yaml" command="load"/>
  <rosparam file="$(find agv_navigation)/config/global_planner.yaml" command="load"/>
  <remap from="cmd_vel" to="/cmd_vel"/>
  <remap from="odom" to="odom"/>
</node>
```

Originally posted by arjunchatterg on ROS Answers with karma: 36 on 2020-11-17

Post score: 0

Original comments

Comment by mgruhler on 2020-11-18: Please use proper tags. gmapping and djikstra have nothing to do with your question. Thanks. Did you observe move_base actually publishing only a maximum velocity of 0.5 m/s? Might be something else is limiting the robot's velocity as well. Also, please check that the parameters are actually used, i.e. they are loaded in the correct namespace.

Comment by arjunchatterg on 2020-11-18: /cmd_vel topic echoes the mentioned speed. What parameter could limit the speed? The local planner should only be responsible for the robot's speed, right? How do I check if they are correctly loaded? rqt_graph seems fine to me.

Comment by mgruhler on 2020-11-18: max_vel_x and max_vel_trans should be the relevant ones. Does changing any parameter have any effect on the robot? Try setting it to 0.1 m/s and see if it is observing this limit. If not, you probably have loaded the parameters in the wrong namespace. Edit your question and show the launch file with which you upload the parameters.
Comment by arjunchatterg on 2020-11-18: Bot is reacting to speed less than 0.5 m/s. But it stays at 0.5-0.55 even if I provide 1.0 m/s.

Comment by mgruhler on 2020-11-18: Is this stuff on a repo that we can check? I don't see anything wrong with the configuration, but there are so many things to check that having this on a repo would really help...

Comment by arjunchatterg on 2020-11-18: Config files are slight modifications of this repo. The Python script contains a simple addition of a loop of waypoints to this code.

Answer: I figured out the solution. The parameter is actually max_vel_trans instead of max_trans_vel. Thanks for your concern.

Originally posted by arjunchatterg with karma: 36 on 2020-11-18

This answer was ACCEPTED on the original site

Post score: 2

Original comments

Comment by mgruhler on 2020-11-19: Great you figured it out. Actually, the parameter name has been changed in melodic. Also, the planner outputs a warning (see also this) which you should have seen. But I also missed the name change :-D

Comment by miura on 2020-11-19: Congratulations.

Comment by arjunchatterg on 2020-11-19: Yes, I figured out after I looked at the warning. Thank you so much for your time and concern @mgruhler and @miura
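Since the accepted answer says the parameter names changed in melodic, a sketch of how the velocity-limit lines of the yaml above would look with the newer names (values carried over from the question; please double-check the exact names against the dwa_local_planner version you are running, as this list is from memory):

```yaml
DWAPlannerROS:
  max_vel_trans: 1.5   # replaces max_trans_vel
  min_vel_trans: 0.0   # replaces min_trans_vel
  max_vel_theta: 5.0   # replaces max_rot_vel
  min_vel_theta: 0.4   # replaces min_rot_vel
```

The planner logs a warning when it sees the old names, which is how the asker eventually spotted the problem.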
{ "domain": "robotics.stackexchange", "id": 35773, "tags": "ros, navigation, ros-melodic, dwa-local-planner" }
Direction of deflection due to Coriolis Effect
Question: I have a massive confusion regarding the direction of deflection due to the Coriolis effect. Suppose a ball is shot from the equator to the north pole. Initially, the ball has an eastward velocity equal to that of the spinning earth at the equator. However, the eastward (tangential) velocity of the ground decreases as we move towards the pole, while the eastward velocity of the ball remains constant. Thus, the ball seems to travel faster than the ground and is deflected to the right side, i.e. towards the east. If we take a different scenario, a ball thrown from the pole towards the equator, the exact opposite thing happens. The ground seems to move faster than the ball, and so it is deflected towards the west, which again is the right side relative to its direction of travel. So, we have established that objects get deflected to the right side of their trajectories in the northern hemisphere due to the Coriolis effect. What about the case when the ball is launched vertically upwards? Is there going to be a deflection then, and if so, why? According to my teacher, when the ball is launched upwards and loses contact with me, it moves straight up, but the Earth has rotated towards the east by then. Hence the ball 'appears' to be deflected to the west. However, when we launch the ball, shouldn't it also have an eastward velocity equal to the ground at that point, just like in the previous two examples? In those cases, the eastward velocity of the projectile was faster or slower than the eastward velocity of the ground. However, in this case, we are assuming that the ball doesn't have an eastward velocity at all. The first two cases are like throwing a ball inside a car: it should fall back into your palm, as it has the same velocity as you and the car. In the last example, why are we suddenly comparing the Coriolis force to a ball thrown outside a car, with the car then racing forward, making the ball appear to fall behind it?
This seems to be different from the Coriolis force. The concept of being deflected to the right doesn't make sense here: if you throw a ball upwards in the northern hemisphere, the notions of right and left are ambiguous. However, we know that the 'right' side on the way up is opposite to the 'right' side on the way down, so the two effects should cancel each other. However, in my book's example, the ball falls slightly towards the west, which would be true if the ball didn't have an eastward velocity. What am I missing here? Can someone explain intuitively? Thanks. I've read a few answers on Stack Exchange that claim this happens because when we throw the ball upwards vertically, its angular velocity, which was initially the same as that of the surface, keeps decreasing. Hence the ball lags behind because of this constant decrease, and so it falls to the west of us. However, if we drop a ball from a tower, it initially has an angular velocity equal to Earth's, but as it falls down, its angular velocity increases and becomes greater than the ground's. Hence, it falls to the east. This happens because, once we release the ball, throwing it up or down, it enters a Kepler orbit around Earth. As long as the ball is in my hand, it has the same angular velocity as the Earth. However, as soon as I release it, its angular velocity decreases or increases depending on whether it goes up or down. How is this true? Can someone show me the math? Here is a link to the answer that I mentioned. link Answer: A ball thrown vertically up (in the surface-bound, rotating frame of reference) moves away from the axis (except when doing this at one of the poles), and hence its eastward velocity is too small to keep up with the rotation at the greater radius. As a consequence, the ball lags behind a bit and lands "behind" the throwing position, i.e., slightly west.
If you look at the situation from the "top", i.e., from a far away position above the north pole, this is the Coriolis effect doing its thing. And indeed, in the rotating frame, the ball is accelerated to the right while going up, thus gaining a westward velocity component. It is also accelerated to the right while falling down, thus gaining an eastward velocity component that cancels the previously gained westward component. The net effect of these changes in horizontal velocity components is still a slightly westward landing position. If you look at the situation from the "bottom", i.e., from a far away position above the south pole, you have to switch left and right, but the effect in terms of east/west is the same.
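The westward landing offset can be checked with a small numerical sketch (not from the answer; the launch speed and latitude below are illustrative). It integrates the horizontal Coriolis acceleration $a = 2\omega\cos\varphi\, v_z$ over the flight and compares the result with the standard closed-form deflection $\tfrac{4}{3}\,\omega\cos\varphi\, v_0^3/g^2$:

```python
import numpy as np

omega = 7.292e-5       # Earth's rotation rate, rad/s
phi = np.radians(45)   # latitude (illustrative)
g = 9.81               # m/s^2
v0 = 100.0             # launch speed, m/s (illustrative)

# Integrate the horizontal Coriolis acceleration over the flight time T = 2*v0/g.
dt = 1e-3
t = np.arange(0, 2 * v0 / g, dt)
vz = v0 - g * t                         # vertical velocity during flight
a_west = 2 * omega * np.cos(phi) * vz   # westward while rising (vz > 0),
                                        # eastward while falling (vz < 0)
v_west = np.cumsum(a_west) * dt         # horizontal velocity picked up so far
x_west = np.sum(v_west) * dt            # net westward displacement at landing

analytic = (4 / 3) * omega * np.cos(phi) * v0**3 / g**2
print(x_west, analytic)
```

Note that the sign of $v_z$ flips at the apex, which is exactly the cancellation of the horizontal *velocity* components described in the answer; the horizontal *displacement* accumulated on the way up is not undone, so the ball still lands slightly west.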
{ "domain": "physics.stackexchange", "id": 81784, "tags": "newtonian-mechanics, reference-frames, projectile, rotational-kinematics, coriolis-effect" }
Compare Hidden Markov Model's sample with ground truth data
Question: I have a time series, and I fit different HMMs on it, each with a different number of hidden states. Now, after sampling from the models, I'd like to compare the results with the ground-truth data and find the model that gets closest to the real-world data in the original time series. For now I simply compared visually the distribution of the values generated by the HMMs and the distribution of values in the time series, but I'd like to compute a number indicating which model generates better samples. Answer: Compute the likelihood of the observed data for each model. The higher the likelihood, the better the fit. The likelihood is just the probability that the model assigns to the observed data, which for HMMs can be computed using dynamic programming. Be prepared that the more complex the model, the higher the likelihood will be, but that doesn't necessarily mean the model is "better" -- you run the risk of overfitting. Larger HMMs may fit the training data better, but if the HMM is too large, it might perform poorly on new data because of overfitting.
{ "domain": "cs.stackexchange", "id": 18289, "tags": "machine-learning, hidden-markov-models, time-series-analysis" }
Nodes of robotic system and actions
Question: Maybe I do not fully understand how actionlib works, but I have a pretty basic question. Suppose you have a robotic system with an arm, a hand attached to the arm, a video camera etc. that we want to work together. Scenario: the camera sees the object, the arm moves towards the object, and the hand touches the object (we are getting sensor feedback from the hand). Finally, the hand grasps the object. What is the model for this using actions? 3 clients-1 server? Or 1 client-3 servers, one per component? In the 3 clients-1 server schema, can we control the arm and the hand simultaneously, or do we have to cancel a goal to the arm in order to send another to the hand? As far as I know an action_server represents a node, so I suppose we need 3 servers. But how can we coordinate the actions between the servers?

Originally posted by iSaran on ROS Answers with karma: 3 on 2015-02-03

Post score: 0

Answer: It would be 3 servers and 3 clients (one for each server, but they could be in the same node). 3 servers, because there are 3 different actions that will need 3 different clients to call them. As for simultaneous control: that is fully supported by actionlib. However, it is limited by what you actually want to do. For example, object detection probably needs to run before moving the arm. Technically, grasping and moving the arm are independent if you can synchronize the control so they do what you want. These are more algorithmic/robotics limitations than actionlib's.

Originally posted by dornhege with karma: 31395 on 2015-02-03

This answer was ACCEPTED on the original site

Post score: 2

Original comments

Comment by iSaran on 2015-02-03: Thanks for your nice and clear response. So, I need to implement a communication channel between the clients so that they know what each other is doing (i.e. the hand client knows what feedback the arm client has received from the arm). Perhaps a topic?

Comment by dornhege on 2015-02-03: That depends on what your software does. There is no problem at all in having multiple clients in the same node. So you don't need external communication.

Comment by iSaran on 2015-02-03: Thank you, you made it very clear.
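One illustrative way to picture the "three clients in one coordinating node" pattern the answer describes. This is plain Python with thread-pool futures standing in for action clients, not real actionlib code, and every function name here is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the three action servers; in a real system each
# of these would be a goal sent through its own SimpleActionClient.
def detect_object():
    return {"x": 0.4, "y": 0.1}            # pretend camera result

def move_arm(pose):
    return f"arm at ({pose['x']}, {pose['y']})"

def preshape_hand():
    return "hand open"

def grasp():
    return "grasped"

# One coordinating "node" owning all three clients:
with ThreadPoolExecutor() as pool:
    pose = detect_object()                  # detection must finish first
    arm = pool.submit(move_arm, pose)       # arm and hand goals can be
    hand = pool.submit(preshape_hand)       # active at the same time
    arm_result, hand_result = arm.result(), hand.result()
result = grasp()                            # grasp once both goals succeeded
print(arm_result, hand_result, result)
```

In a real node, each `submit` would be a `send_goal` on the corresponding action client, and the coordinator would wait on both results before sending the grasp goal, so no extra inter-client communication channel is needed.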
{ "domain": "robotics.stackexchange", "id": 20778, "tags": "actionlib" }
How is Stage related to Gazebo
Question: How is Stage related to Gazebo strategically? Is there any integration between Stage and Gazebo, such as plugin integration for simulation of controllers and sensors?

Originally posted by rnunziata on ROS Answers with karma: 713 on 2013-11-02

Post score: 1

Original comments

Comment by Hansg91 on 2013-11-02: I'm not very familiar with Gazebo's uses, but I believe they are both simulators, one more advanced than the other. I doubt there is any integration between the two.
{ "domain": "robotics.stackexchange", "id": 16040, "tags": "ros" }
Gaussian signal generation
Question: Edit: Could the following be the answer? Generate a WGN-like signal which is centered around a set dBm value. Treat that signal like it was a frequency-domain representation of an unknown time-domain signal X. Do some math to get the unit back into Volts/Hz from dBm/Hz (using the reference impedance). Run an inverse FFT.

Original Question: I am looking for some advice on how to generate an array of samples pulled from a normal (or Gaussian) distribution. But, and here's the deal, I want to control its PSD or ASD "directly", assuming I am using a reference impedance of 50 Ohms. I would like to know Ahead-Of-Time (AOT), before the signal gets generated, what std dev should be applied to achieve a set PSD, or at least a set average spectral level in a given bandwidth/channel. This value should be set by a user. The code below should prove I know (more or less) what PSD or ASD is; I am just looking for algorithm advice, maybe someone did something similar? Here's how I calculate the PSD or ASD for a generated white-noise-like signal in a standard way using numpy.

```python
import numpy as np
import matplotlib.pyplot as plt
import logging

if __name__ == "__main__":
    logger = logging.getLogger("Main")
    logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)
    DRAW_PLOTS = False

    mu, sigma = 0, 0.001  # mean and std deviation
    N = 1000
    timestamp = 1  # sample spacing in seconds

    # This is in Volts and has no timestamp
    s = np.random.normal(mu, sigma, N)

    # Because the signal is in Volts, the result of the fft is in Volts/Hz --
    # or would be if we say our timestamp for signal `s` is equal to 1 second.
    # This noise_spectrum can be called the Amplitude Spectral Density (ASD).
    # If we want the PSD we have to go from ASD to PSD: square the voltage and
    # maybe apply some reference impedance... or don't, if you want to end up
    # with Volts^2 / Hz.
    noise_spectrum = np.fft.fft(s)

    # The following is in dBV^2/Hz (magnitude taken before the log)
    noise_spectrum_db = 20 * np.log10(np.abs(noise_spectrum))

    # The following is in dBW/Hz (divided by 50 Ohm)
    noise_spectrum_dbw = 20 * np.log10(np.abs(noise_spectrum) / 50)

    # dBm is dBW + 30 (this definition was missing in the original listing,
    # which logged noise_spectrum_dbm without defining it)
    noise_spectrum_dbm = noise_spectrum_dbw + 30

    freq = np.fft.fftfreq(s.size, timestamp)

    logger.info(f"Avg channel power in dBW: {np.mean(noise_spectrum_dbw)}")
    logger.info(f"Avg channel power in dBm: {np.mean(noise_spectrum_dbm)}")
```

Answer: Below is a function which I wrote long back, when I needed to generate AWGN time-domain samples given a noise PSD in dBm/Hz.

awgn_noise(): generates Additive White Gaussian Noise with a PSD of power dBm/Hz. AWGN has a Gaussian PDF with zero mean and $\sigma^{2} = N_{o}/2$, and

$$NoisePSD_{dBm/Hz} = 10\log_{10}\left(\frac{N_o}{2\,BW}\right)$$

Why? Because the output noise power (in dBm) is

$$10\log_{10}\left(\frac{N_o}{2}\right) = NoisePSD_{dBm/Hz} + 10\log_{10}(BW)$$

Hence,

$$\sigma = \sqrt{\frac{N_o}{2}} = \sqrt{BW \cdot 10^{\frac{NoisePSD_{dBm/Hz}}{10}}}$$

So, the python function would be as follows:

```python
def awgn_noise(length, power, Bandwidth):
    sigma = np.sqrt(Bandwidth * 10**(power/10))
    noise = np.random.normal(0, sigma, length)
    return noise
```

Hope this answers the question.
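A quick numerical sanity check of the answer's formula (assuming numpy; the dB reference of `power` is whatever the derivation above uses, e.g. dBm). The total power of the generated samples over a bandwidth BW should come out at roughly PSD + 10·log10(BW):

```python
import numpy as np

def awgn_noise(length, power, Bandwidth):
    # function from the answer above
    sigma = np.sqrt(Bandwidth * 10**(power/10))
    return np.random.normal(0, sigma, length)

np.random.seed(0)
psd, bw = -30.0, 1000.0        # PSD in dB/Hz (same reference as above), BW in Hz
n = awgn_noise(200_000, psd, bw)

measured = 10 * np.log10(np.var(n))   # total power in the band, in dB
expected = psd + 10 * np.log10(bw)    # PSD + 10*log10(BW), per the derivation
print(measured, expected)
```

With 200k samples the measured band power agrees with the target to within a small fraction of a dB, which is the ahead-of-time control over the spectral level the question asks for.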
{ "domain": "dsp.stackexchange", "id": 8542, "tags": "noise, power-spectral-density, gaussian, numpy" }
Density of moist air
Question: Why does moist air have a lower density than dry air? When the amount of vapor inside increases, the mass increases, so shouldn't the density increase as well?

Answer: The composition of dry air is about $78\%$ $\ce{N2}$, $21\%$ $\ce{O2}$ and $1\%$ $\ce{Ar}$. The molecular weights of these compounds are:

$\ce{N2} = \pu{28 g/mol}$

$\ce{O2} = \pu{32 g/mol}$

$\ce{Ar} = \pu{40 g/mol}$

So, the average molecular weight of dry air is:

$(0.78 \times \pu{28 g/mol}) + (0.21 \times \pu{32 g/mol}) + (0.01 \times \pu{40 g/mol}) \approx \pu{29 g/mol}$

The molecular weight of water is only $\pu{18 g/mol}$. So each air molecule that is replaced by a water molecule swaps an average molecular weight of $\pu{29 g/mol}$ for only $\pu{18 g/mol}$. Since, by Avogadro's principle, equal volumes at the same temperature and pressure contain the same number of molecules, the moist mixture weighs less per unit volume. This scenario assumes a constant temperature and pressure when humidifying the air.
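The arithmetic generalises to any humidity level; a small illustrative script (the water-vapour mole fractions are made-up example values, the composition numbers come from the answer) shows the average molar mass, and hence the ideal-gas density at fixed $T$ and $p$, dropping as dry air is replaced by water vapour:

```python
# Molar masses in g/mol, composition values as given in the answer.
M = {"N2": 28.0, "O2": 32.0, "Ar": 40.0, "H2O": 18.0}

M_dry = 0.78 * M["N2"] + 0.21 * M["O2"] + 0.01 * M["Ar"]  # ~29 g/mol

def moist_molar_mass(x_water):
    """Average molar mass when a mole fraction x_water of air is water vapour."""
    return (1 - x_water) * M_dry + x_water * M["H2O"]

# At fixed T and p the ideal-gas density is rho = p*M/(R*T), i.e. proportional
# to the average molar mass, so the moist/dry density ratio is M_moist/M_dry.
for x in (0.00, 0.01, 0.02, 0.04):
    print(f"x_H2O = {x:.2f}:  M = {moist_molar_mass(x):.2f} g/mol, "
          f"density ratio = {moist_molar_mass(x) / M_dry:.4f}")
```

Every percent of air replaced by water vapour lowers the average molar mass by about 0.11 g/mol, so the density ratio dips slightly below 1 and moist air is lighter.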
{ "domain": "chemistry.stackexchange", "id": 7594, "tags": "density" }