Sales prediction of an Item
Question: So, I've been trying to implement my first algorithm to predict the sales/month of a single product, and I've been using linear regression since that was what was recommended to me. I'm using data from the past 42 months, with the first 34 months as the training set and the remaining 8 as the validation set. I've been trying to use 4 features to start: the month number (1~12), the average price the product was sold at during that month, the number of devolutions the previous month, and the number of units sold the previous month. Here are images with graphs comparing the Real Data x Predicted Data and an Error x number of elements graph: So far the results are not good at all (as shown in the images above); the algorithm can't even get the training set right. I tried using higher-degree polynomials and a regularization parameter, and it seems to make things worse. So I would like to know if there is a better approach for this problem, or what I could do to improve the performance. Thanks a lot in advance! Answer: Based on the information you've given, I'm assuming you have performed multiple linear regression, i.e., multiple features and one response variable to be predicted. First, apply PCA to all of your features except the response variable you want to predict; in your case, the four features you mentioned. Then transform them into a 2-component matrix using PCA. Once you are done with that, plot the new matrix you formed against the response variable as a scatter plot, so effectively a 3D scatter plot. When you generate this scatter plot you will be able to visualize much better which regression you have to use. You can decide for yourself whether it is linear or not, depending on how many outliers you are comfortable with.
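The PCA-then-plot step the answer describes can be sketched as follows. This is a minimal illustration with randomly generated stand-in data; the feature values, their dimensions, and the commented-out plotting calls are assumptions, not the questioner's actual dataset:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical stand-in for the four features (month number, average price,
# devolutions last month, units sold last month) over the 34 training months.
X = rng.normal(size=(34, 4))
y = rng.normal(size=34)          # response variable: sales/month

pca = PCA(n_components=2)
X2 = pca.fit_transform(X)        # 34 x 2 matrix of principal components

# 3D scatter of the two components against the response:
# import matplotlib.pyplot as plt
# ax = plt.figure().add_subplot(projection='3d')
# ax.scatter(X2[:, 0], X2[:, 1], y)
# plt.show()
print(X2.shape)
```

Because PCA centers the data before projecting, the component columns have (numerically) zero mean, which makes the scatter plot easy to read around the origin.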
{ "domain": "datascience.stackexchange", "id": 1887, "tags": "machine-learning, predictive-modeling, regression, linear-regression, prediction" }
nlp: phonetic edit distance between a word and the closest of a set of words
Question: Let's say someone is using Dragon Dictation, Google Speech, or some other free-form dictation software (it will recognize anything they say to the best of its ability). I have some reasonably large set of words, and I'm certain the speaker is trying to say one of these words. However, the voice recognition system is not perfect, so sometimes the voice recognition engine will spit out a word that is phonetically similar to what the user intended, but that is not in the set. In other words, I have a word X and a set of words Y, and I want the member of Y which sounds the most like X. I know that phonemes are the most basic units of sound, so I've attempted to use CMU's Pronouncing Dictionary to break X into phonemes, and to use something similar to an edit distance algorithm to alter these phonemes until they match one of the members of Y. However, there are some huge issues: There are at least 40 phonemes in CMU's system, which makes for a huge branching factor when you consider adding or replacing at any given position. I can't thoroughly search beyond three changes or so, which isn't sufficient. Phonemes are perhaps too 'low-level', in the sense that there are many combinations of phonemes which are not words at all. Building on complaint #2, I was thinking it would be nice to have a weighted graph of at least the most common words to other reasonably similar words, and I could run a search algorithm on this graph to find words in Y more quickly and with a less explosive branching factor. Precomputing such a graph is on the table, as it could just be loaded at runtime. Are there any existing methods for doing this? Or should I be thinking in a different direction? Answer: Building on the comments, there's something called the Needleman–Wunsch algorithm, with which you can find the lowest-'cost' alignment of two sequences (of phonemes).
You can get a good 'confusion cost matrix' by taking a normal confusion matrix and taking the log-odds of each confusion pair. As long as your set of words is less than 33k or so, you should be able to find the lowest cost match in about a second or so. Multiply that by N if you multithread across N cores... Full disclosure: not my ideas here, I got them from Phonemic Similarity Metrics to Compare Pronunciation Methods (Hixon, Schneider and Epstein, in Interspeech 2011).
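A minimal sketch of the Needleman–Wunsch idea applied to phoneme sequences. Uniform costs are used here for simplicity; in practice you would plug in the log-odds confusion costs the answer describes. The phoneme transcriptions below are illustrative stand-ins, not verified CMU dictionary entries:

```python
def nw_distance(a, b, sub_cost=lambda p, q: 0 if p == q else 1, gap=1):
    """Needleman-Wunsch alignment cost between two phoneme sequences."""
    m, n = len(a), len(b)
    # dp[i][j] = cheapest alignment of a[:i] with b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap
    for j in range(1, n + 1):
        dp[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(dp[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]),
                           dp[i - 1][j] + gap,    # delete a phoneme from a
                           dp[i][j - 1] + gap)    # insert a phoneme from b
    return dp[m][n]

# CMU-style phoneme sequences (hypothetical entries for illustration)
x = ["K", "AE", "T"]                                   # recognized word
candidates = {"bat": ["B", "AE", "T"],
              "cart": ["K", "AA", "R", "T"]}
best = min(candidates, key=lambda w: nw_distance(x, candidates[w]))
print(best)  # bat
```

Note that this sidesteps the branching-factor problem entirely: instead of searching the space of edits, you score each candidate in O(mn) and take the minimum over Y, which is what makes the answer's one-second figure for ~33k words plausible.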
{ "domain": "cs.stackexchange", "id": 10031, "tags": "algorithms, graphs, search-algorithms, natural-language-processing" }
What happens if we give a single electron charge to a hollow metal sphere?
Question: I found this related question: What happens to 5 electrons on a sphere? But that question describes the case when there can only be 5 electrons on the sphere at all times. The answer linked to the Thomson Problem, which gives a solution for the stable configuration of $N$ charges placed on a sphere. So my question is: will it not repel the extra electrons inside the metal to produce a state of non-uniformly distributed + and - charges? If not, it would mean that there is a net non-zero field inside the bulk of the sphere, so that configuration cannot be a steady state, i.e., it will cause the sea of electrons to flow and achieve another steady state, which follows from the electrostatic shielding effect. So, what steady state will be achieved? What if it is a solid sphere and not a hollow one? Will the extra electron still be on the surface of the sphere? Any help is appreciated! P.S. You can leave the last question for me to attempt, if it's easily deducible from the answers to the first two! Answer: When considering a big enough chunk of matter, as in your experiment, you are dealing with its macroscopic observed effects, where the granular effects of single electrons are smoothed out. So if you add $n$ electrons to the electron sea of the metal, the observed effect will be a uniform increase of $-ne\over S$ in the surface charge density of the metal.
{ "domain": "physics.stackexchange", "id": 8754, "tags": "electrostatics, electrons, steady-state" }
Simple DateTime abstraction
Question: Some of my tests require that I test date-time results (like timestamps etc.). In order to be able to test the date-time string, I created a simple DateTime abstraction that I'm going to use in other projects later. It should replace direct calls to DateTime.Now / DateTime.UtcNow. Or should I rather call it TestValue instead of ConstValue?

public abstract class DateTimeProvider
{
    public abstract DateTime Value { get; }
    public DateTime? ConstValue { get; set; }

    public static implicit operator DateTime(DateTimeProvider dateTimeProvider)
    {
        return dateTimeProvider.Value;
    }
}

public class NowDateTimeProvider : DateTimeProvider
{
    public override DateTime Value
    {
        get { return ConstValue ?? DateTime.Now; }
    }
}

public class UtcNowDateTimeProvider : DateTimeProvider
{
    public override DateTime Value
    {
        get { return ConstValue ?? DateTime.UtcNow; }
    }
}

Answer: The idea of abstracting DateTime.Now and DateTime.UtcNow for testability is definitely a good one. There are some improvements I'd suggest: Naming: FooProvider is a name I usually pick when I absolutely can't think of a better name for something that provides Foo. In this case, a much more natural name than DateTimeProvider would be Clock. I also don't really like the name ConstValue. For one thing it's an unnecessary shortening of ConstantValue, but Override would probably be more descriptive anyway. Implicit operator: I tend to lean towards being pretty permissive with the use of the implicit operator. It's often a good way to write more expressive code without cluttering things up with constructor calls. But in this case I think the benefit is very marginal. The difference in readability between calling e.g. AddTimestamp(clock) and AddTimestamp(clock.Value) is very minor, and the latter is actually more expressive of the fact that you're stamping it with the time at the point the AddTimestamp method was called (rather than at some arbitrary point during its procedure).
Design: Splitting Now and UtcNow into different classes seems weird, for a couple of reasons: It implies that you need both in your system, but never in the same place. If you did need them in the same place, you'd need to handle two different clocks, which would be cumbersome. Similarly, it's quite possible you'll have a class which should make its own decision about whether to use Now or UtcNow. It may be a natural part of a class's responsibility to make that decision, not something that should be chosen in bootstrapping. In that case, you'd need to give the class some kind of DateTimeProviderProvider so it could get the right one! So with that in mind I'd suggest an intermediary step of changing to:

public abstract class Clock
{
    public abstract DateTime Now { get; }
    public DateTime? NowOverride { get; set; }

    public abstract DateTime UtcNow { get; }
    public DateTime? UtcNowOverride { get; set; }
}

Override logic: Admittedly, the above actually looks worse than what we started with, but there's one more problem to tackle: the use of override-type logic. The issues with this: It's hard to test. Your NowDateTimeProvider exists so tests don't have to deal with the real current date-time, but that class itself can't be tested without dealing with the real time. Although it's a simple class, it still has logic, which means not being able to test it is a problem. Your production code has to deal with testing concerns. Every clock you write can have the time overridden, even though there's no scenario in real code where this should be allowed. It's unsafe, and even if you're careful, it degrades the expressiveness of your code to have "Don't call me!" methods. It's more code. Instead of a simple Now, you need a Now, a NowOverride, and logic to glue them together.
Fortunately, none of these are hard to fix:

public abstract class Clock
{
    public abstract DateTime Now { get; }
    public abstract DateTime UtcNow { get; }
}

public class RealClock : Clock
{
    public override DateTime Now
    {
        get { return DateTime.Now; }
    }

    public override DateTime UtcNow
    {
        get { return DateTime.UtcNow; }
    }
}

public class TestClock : Clock
{
    // An override can't add a setter in C#, so expose settable values explicitly.
    public DateTime NowValue { get; set; }
    public DateTime UtcNowValue { get; set; }

    public override DateTime Now
    {
        get { return NowValue; }
    }

    public override DateTime UtcNow
    {
        get { return UtcNowValue; }
    }
}

And finally... From that last version, you can see that once we've made these changes, there's no need to have an abstract class anymore. We're left with just signatures, so we can change to an interface. This also means we can consider not writing TestClock at all and using a mocking library instead.
{ "domain": "codereview.stackexchange", "id": 28488, "tags": "c#, datetime, unit-testing" }
Raychaudhuri scalar
Question: In Carroll's 'Space-time and Geometry', appendix F on congruences, the Raychaudhuri equation is derived. However, in the process, I seem to miss a calculation step that changes the sign of the Raychaudhuri scalar. Page 461, Carroll writes: \begin{align} U^\sigma \nabla_\sigma B_{\mu \nu} &=U^\sigma \nabla_\nu \nabla_\sigma U_\mu + U^\sigma R^\lambda_{\, \, \, \mu \nu \sigma} U_\lambda \\ &= \nabla_\nu (U^\sigma \nabla_\sigma U_\mu) - (\nabla_\nu U^\sigma)(\nabla_\sigma U_\mu) - R_{\lambda \mu \nu \sigma}U^\sigma U^\lambda \end{align} I can't wrap my head around it. To me, lowering the index on the Riemann tensor and elevating the same dummy index on U would have no influence on the sign whatsoever. Thanks for your insight! Answer: I think it's a typographical error. He should have changed the sign of the Riemann tensor in the last two lines of his calculation (F.10). Then to obtain the Ricci tensor with the correct sign in the Raychaudhuri equation, you must interchange $\mu$ and $\lambda$, getting the required negative sign. This is because the Ricci tensor is $R^k_{ikj}$ and thus before contracting $\mu$ and $\nu$ you must exchange the positions of $\mu$ and $\lambda$ to get the correct sign.
{ "domain": "physics.stackexchange", "id": 59125, "tags": "homework-and-exercises, general-relativity, differential-geometry, tensor-calculus" }
Clarification on vapor pressure
Question: If I have a glass of water, and the vapor pressure is the point of equilibrium where the liquid molecules are becoming gas and the gas molecules are colliding and becoming liquid at the same rate, does that mean none of the liquid is lost? I feel as if I am misunderstanding this concept, because when I leave my cup of water on the table and come back to it later on, I usually notice that there's less water in the cup because it evaporated. Answer: Yes, the vapor pressure is the point of equilibrium between the liquid and the vapor. But in an open cup the air moves about, so the air above the water never saturates and equilibrium is never reached. So more and more water evaporates until it is all gone.
{ "domain": "chemistry.stackexchange", "id": 1281, "tags": "equilibrium, vapor-pressure" }
Why no ROSCON 2013 videos on youtube?
Question: YouTube works more reliably in more places for me, especially on mobile devices. I've found the links to Vimeo on http://roscon.ros.org/2013/?page_id=14 - couldn't there be an OSRF YouTube channel with the videos duplicated from Vimeo? Originally posted by lucasw on ROS Answers with karma: 8729 on 2013-11-01 Post score: 6 Original comments Comment by 130s on 2013-11-13: While I +1ed, the Vimeo Android app works fine for me, btw. Answer: Vimeo was chosen for the cleaner interface and the ability to host the videos without ads everywhere. Dual-hosting ends up being just twice as much work. Originally posted by tfoote with karma: 58457 on 2014-01-27 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by bvbdort on 2014-01-28: But most people search on YouTube for ROS-related videos. Many would have watched them if they were on YouTube.
{ "domain": "robotics.stackexchange", "id": 16031, "tags": "ros" }
EKF SLAM : SLAM specific Jacobians for new landmarks
Question: I am currently trying to understand the books SLAM for Dummies and Simultaneous Localization and Mapping with the Extended Kalman Filter in order to implement SLAM. I have understood steps 1 and 2 of SLAM for Dummies. However, I am having difficulty understanding Step 3: Add new landmarks to the current state (page 40). The corresponding section in Simultaneous Localization and Mapping with the Extended Kalman Filter is 2.3.4, Landmark initialization for full observations. Specifically, I do not understand how the SLAM-specific Jacobian $J_{xr}$ (SLAM for Dummies) defined below is actually derived: I am confused about what transformation function the Jacobian $J_{xr}$ is linearizing. I notice that $J_{xr}$ is the same as the Jacobian $G_R$ that linearizes the inverse observation model as described in section 2.3.4 of Simultaneous Localization and Mapping with the Extended Kalman Filter. Why does $J_{xr}$ contain $\Delta t$, which is the thrust applied to the robot? $\Delta t$, which belongs to the state prediction model, shouldn't be in the inverse observation model, no? Am I right to say that the inverse observation model is a function of the robot pose [$x_{r}$, $y_{r}$, $\theta_{r}$] and the new landmark observation [$d_{l1}$, $\phi_{l1}$], and that the output is the updated state vector [$x_{r}$, $y_{r}$, $\theta_{r}$, $x_{l1}$, $y_{l1}$]? Answer: Updating the covariance matrix for a new landmark always confused me; the equation is often given, but I've never seen it derived. I had a go at deriving it here: http://petercorke.com/wordpress/ekf-covariance-matrix-update-for-a-new-landmark I'd be interested to know if you find this useful.
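A sketch of the standard range-bearing inverse observation model and its Jacobian with respect to the robot pose, checked numerically. This is the textbook form; the exact conventions (angle signs, state ordering) may differ from either book, and the sample pose and observation values are made up:

```python
import math

def inverse_observation(xr, yr, th, d, phi):
    """Map a range-bearing observation (d, phi) into a world-frame landmark."""
    return (xr + d * math.cos(th + phi),
            yr + d * math.sin(th + phi))

def jacobian_pose(xr, yr, th, d, phi):
    """Analytic Jacobian of the landmark position w.r.t. the pose (xr, yr, th)."""
    return [[1.0, 0.0, -d * math.sin(th + phi)],
            [0.0, 1.0,  d * math.cos(th + phi)]]

# Finite-difference check of the analytic Jacobian at an arbitrary state
xr, yr, th, d, phi = 1.0, 2.0, 0.3, 5.0, 0.7
J = jacobian_pose(xr, yr, th, d, phi)
eps = 1e-6
base = inverse_observation(xr, yr, th, d, phi)
for k, bumped in enumerate([(xr + eps, yr, th), (xr, yr + eps, th), (xr, yr, th + eps)]):
    pert = inverse_observation(*bumped, d, phi)
    for row in range(2):
        num = (pert[row] - base[row]) / eps
        assert abs(num - J[row][k]) < 1e-4
print("Jacobian matches finite differences")
```

Under this standard model the Jacobian contains no $\Delta t$ at all, which matches the questioner's intuition that $\Delta t$ belongs to the state prediction step, not to landmark initialization.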
{ "domain": "robotics.stackexchange", "id": 1937, "tags": "mobile-robot, slam, kalman-filter, ekf, jacobian" }
Decomposition reaction of tin(II) nitrate
Question: What are the products formed when $\ce{Sn(NO3)2}$ is subjected to high temperatures? I searched over the internet but I didn't get any satisfactory answer. Answer: No wonder you cannot find any literature on $\ce{Sn(NO3)2}$: I don't think it exists in solid form. This is supported by the fact that I can't even find its CAS number online. This book (Ref. 1) and a relevant paper (Ref. 2) support my suggestion; both state that: Attempts to prepare a covalently bound tin(II) nitrate, by the reaction of tin(IV) tetranitrate with anhydrous nitric oxide, have produced only a white solid of formula $\ce{SnN2O6}$, which gives a tin(IV) Mössbauer resonance ($\delta = \pu{0.29 mm\:s-1}$, $\Delta = \pu{0.96 mm\:s-1}$)... Nonetheless, a study on the thermal decomposition of metal nitrates (Ref. 3) states that: Due to a back-donation of electronic cloud from the nitrate to an unfilled $\mathrm{d}$-orbital of transition and noble metals, their nitrates generally exhibited lower decomposition temperatures ($T_d \lt \pu{700 K}$) than those of the base metals ($\gt \pu{850 K}$). They suggest those metal nitrates with lower decomposition temperatures decompose to the corresponding oxides together with $\ce{NO2}$ and $\ce{O2}$. Thus, it is safe to assume that if $\ce{Sn(NO3)2}$ exists, it would decompose as the following reaction suggests: $$\ce{2 Sn(NO3)2 (s) + heat -> 2 SnO (s) + 4 NO2 (g) + O2 (g)}$$ However, keep in mind that this is pure speculation. References: R. Greatrex, "Chapter 8: Mössbauer Spectroscopy," In Specialist Periodical Reports: Spectroscopic Properties of Inorganic and Organometallic Compounds, Volume 6; N. N. Greenwood, Ed.; The Chemical Society, Burlington House: London, United Kingdom, 1973, pp. 494-622. P. G. Harrison, M. I. Khalil, N. Logan, "Concerning anhydrous tin(II) nitrate," Inorganic and Nuclear Chemistry Letters 1972, 8(6), 551-553 (https://doi.org/10.1016/0020-1650(72)80139-9).
Shanmugam Yuvaraj, Lin Fan-Yuan, Chang Tsong-Huei, Yeh Chuin-Tih, “Thermal Decomposition of Metal Nitrates in Air and Hydrogen Environments,” J. Phys. Chem. B 2003, 107(4), 1044-1047 (https://doi.org/10.1021/jp026961c).
{ "domain": "chemistry.stackexchange", "id": 13291, "tags": "experimental-chemistry" }
How to retrieve images from a url in a pandas dataframe and store them as PIL object in a new column
Question: I'm trying to store, as PIL objects in a new column of a dataframe, pictures that are located in another column of the same dataframe in the form of URLs. I've tried the following code:

import pandas as pd
from PIL import Image
import requests
from io import BytesIO

pictures = [None] * 2
df = pd.DataFrame({'project_id': ["1", "2"],
                   'image_url': ['http://www.personal.psu.edu/dqc5255/gl-29.jpg',
                                 'https://www.iprotego.com/wp-content/uploads/google.jpg']})
df.insert(2, "pictures", pictures, True)
for i in range(2):
    r = requests.get(df.iloc[i, 1])
    df.iloc[i, 2] = Image.open(BytesIO(r.content))
df

I expected to get a dataframe with this structure, but including both training examples:

project_id image_url pictures
0 1 http://www.personal.psu.edu/dqc5255/gl-29.jpg <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=400x300 at 0x116EF9AC8>

But instead got the following error:

OSError: cannot identify image file <_io.BytesIO object at 0x116ec2f10>

Answer: I just changed the User-Agent in the for loop, so that the request line in the loop is now:

r = requests.get(df.iloc[i, 1], headers=headers)

with

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/XXX.XX (KHTML, like Gecko) Chrome/XX.X.XXXX.XXX Safari/XXX.XX'}

and this solved the error. I also added an r.raise_for_status() to check the status before using r.content. Final code:

import pandas as pd
from PIL import Image
import requests
from io import BytesIO

df = pd.DataFrame({'project_id': ["1", "2"],
                   'image_url': ['http://www.personal.psu.edu/dqc5255/gl-29.jpg',
                                 'https://www.iprotego.com/wp-content/uploads/google.jpg']})
pictures = [None] * 2
df.insert(2, "pictures", pictures, True)
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/xxx.xx (KHTML, like Gecko) Chrome/xx.x.xxxx.xxx Safari/xxx.xx'}
for i in range(2):
    r = requests.get(df.iloc[i, 1], headers=headers)
    r.raise_for_status()
    df.iloc[i, 2] = Image.open(BytesIO(r.content))
df
{ "domain": "datascience.stackexchange", "id": 5936, "tags": "python, pandas, image-preprocessing" }
Labeling an image mask as data for object detection?
Question: I am new to machine learning but had a question about a labeling method. If I had the following two images: Is there a way to use the second image as the label for the first one (i.e. anything in white is the object I want to detect)? As I understand it, labels are typically defined by vector points, but in my case, a sort of a raster clipping mask like shown above is much easier for me to generate, as I may have varying shapes of varying complexity. Is there software that can take in an image as the label in this sort of way, or is there no other choice than to turn the raster image into a vector? If there are any generalizations I'm making then please correct me! I'm not sure if there's an existing tool/method for this out there that I'm completely missing, but any help or direction would be appreciated. Answer: This task is called image segmentation. I suggest reading up on existing methods for image segmentation, and then try applying their approach to your task. For instance, you might take a look at the DeepLab architecture and train a DeepLab model; there are many others as well. Labels in image segmentation are normally described by a mask, not by vector points.
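For the mask-as-label workflow the answer describes, the usual preprocessing is simply turning the black-and-white mask image into a per-pixel label array. A tiny sketch using a synthetic array in place of a real mask file (in practice you would load the image with PIL and convert to a NumPy array):

```python
import numpy as np

# Stand-in for a grayscale mask loaded from disk: white = object, black = background
mask = np.array([[0,   0, 255],
                 [0, 255, 255],
                 [0,   0,   0]], dtype=np.uint8)

# Per-pixel class labels for segmentation: 1 where the object is, 0 elsewhere
labels = (mask > 127).astype(np.int64)
print(labels.tolist())  # [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
print(int(labels.sum()))  # 3 object pixels
```

This label array is exactly the target format most segmentation frameworks (including DeepLab-style models) train against, so no conversion to vector points is needed.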
{ "domain": "cs.stackexchange", "id": 19663, "tags": "machine-learning" }
which version of Opencv should I use?
Question: Hi everyone, I'm a beginner in ROS. Currently, I'm trying to use a camera to locate and identify colored balls with the OpenCV package, but in my research I found that there are two versions of OpenCV: OpenCV 2 and OpenCV 3. So what is the difference between these two versions of OpenCV? And which one will be more suitable for me? By the way, I'm using Ubuntu 14.04 LTS 64-bit with Indigo. Thank you so much. Originally posted by Zero on ROS Answers with karma: 104 on 2016-05-23 Post score: 0 Answer: Hello, both of them are great. If you have not learned OpenCV yet, just learn OpenCV 3, since it has some different definitions of static parameters and a better framework than OpenCV 2. However, when you want to use OpenCV 3 you need to be really careful, since some algorithms like SURF are not built in directly in the traditional way, which means you may need to spend several hours building the development environment with the help of the documentation. Besides, lots of existing code is written with OpenCV 2, so be careful if you want to build an old package; there may be many problems. :) If you want to know more about the difference between OpenCV 2 and OpenCV 3, I think you should go to OpenCV's website. Originally posted by wsAndy with karma: 224 on 2016-05-24 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Zero on 2016-05-24: Thank you so much. I think I will choose OpenCV 2, since there would be many problems if I did something wrong.
{ "domain": "robotics.stackexchange", "id": 24723, "tags": "opencv" }
What are the benefits of using zero capillary voltage for discarding fractions in LC–ESI–MS/MS?
Question: In the second episode of The Association for Mass Spectrometry & Advances in the Clinical Lab (MSACL) podcast, namely "Getting going with mass spectrometry: Josh learns chromatography" (aired 2021-01-07), Dr. Russell Grant mentioned the following tip at 01:03:11 (I apologize in advance for any inaccuracies in the transcript): The other thing that is often forgotten there is that you can drop the capillary voltage to zero. It's not as good as the diverter valve, but you are not going to be pulling the stuff towards the mass spectrometer. If you only have a couple of windows and a bunch of junk, you can just keep the capillary voltage on during these desired windows and save your diverter valves' maintenance time, because they can become maintenance-limiting. I would think this would lead to more frequent cleaning of the ionization chamber. Is the gain in postponed maintenance of a diverter valve really worth it? Are there any other use cases where disabling the capillary voltage could be a superior alternative to using a diverter valve? Answer: I think there is no perfect answer to this situation. In general, when the flow to any detector is stopped in an HPLC experiment, there is a change in baseline, and when the flow is re-started, ghost peaks appear in the chromatogram. I am talking about optical detectors such as UV, fluorescence, refractive index, etc. The same thing happens in MS, and LC-ESI-MS is perhaps not an exception, in the sense that diverting the flow can lead to the appearance of so-called "ghost" peaks or a raised baseline. I have seen that in GC-MS experiments, where a change in flow causes weird baseline shifts. This is an interesting phenomenon worth investigating. So the suggestion is: let the flow continue, but stop the ionization process so that the junk does not go inside the quadrupoles and stay there forever. It is always easy to clean the external ESI system, but the internal system needs professional service.
Someone discusses these problems in this forum thread: Diverter valve issues
{ "domain": "chemistry.stackexchange", "id": 15106, "tags": "experimental-chemistry, analytical-chemistry, chromatography, mass-spectrometry" }
Problems with accuracy.score sklearn
Question: I am learning Python and trying my hand at machine learning. I am reproducing a super simple example based on the famous iris dataset. Here it goes:

from sklearn import datasets
iris = datasets.load_iris()
X = iris.data
y = iris.target

from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5)

from sklearn import tree
nordan_tree = tree.DecisionTreeClassifier()
nordan_tree.fit(X_train, y_train)

from sklearn.metrics import accuracy_score

I get the following error message:

Traceback (most recent call last):
  File "tree3.py", line 17, in <module>
    print(accuracy_score(y_test, predictions))
NameError: name 'predictions' is not defined

I don't get it. As far as I understand, predictions is the vector containing all the predictions produced with DecisionTreeClassifier? What am I doing wrong? Answer: You have not defined the variable predictions anywhere; you will need to get it from your classifier. You have fit your nordan_tree on your training data, and now you can use the fitted nordan_tree to generate the predictions, for example like this:

predictions = nordan_tree.predict(X_test)

Then your line:

print(accuracy_score(y_test, predictions))

should work.
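Putting the fix together, a complete runnable version looks like this. Note that in current scikit-learn the import lives in sklearn.model_selection; the sklearn.cross_validation module used in the question existed in older releases and has since been removed. The random_state values here are arbitrary, added only to make the run reproducible:

```python
from sklearn import datasets, tree
from sklearn.model_selection import train_test_split  # was sklearn.cross_validation
from sklearn.metrics import accuracy_score

iris = datasets.load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

nordan_tree = tree.DecisionTreeClassifier(random_state=0)
nordan_tree.fit(X_train, y_train)

# The missing step: actually generate the predictions from the fitted tree
predictions = nordan_tree.predict(X_test)
print(accuracy_score(y_test, predictions))
```

On the iris dataset a decision tree with a 50/50 split typically scores around 0.9 or better.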
{ "domain": "datascience.stackexchange", "id": 2769, "tags": "python, scikit-learn, error-handling" }
Period of a pendulum
Question: In the book 'Calculus: Early Transcendentals' on page 776 (7th edition), it is given that the period of a pendulum with length $\text{L}$ that makes a maximum angle $\theta_0$ with the vertical is: $$\text{T}_{\left[\text{s}\right]}=4\sqrt{\frac{\text{L}}{\text{g}}}\int_{0}^{\frac{\pi}{2}}\frac{1}{\sqrt{1-\sin^2\left(\frac{\theta_0}{2}\right)\sin^2(x)}}\space\text{d}x$$ Questions: Does this formula work for any pendulum? How did they get this formula (a hint to derive the given formula)? Answer: That formula holds for a simple pendulum of length $L$ in a gravitational field $g$, released from rest with an amplitude $\theta_0$. Since this system is conservative, its mechanical energy is constant and equals the gravitational potential energy when it is released. Setting the zero of potential energy at the fixed point of the pendulum, the mechanical energy is $$E=-mgL\cos\theta_0=\frac 12 mL^2\dot\theta^2-mgL\cos\theta.$$ Solving for $\frac{d\theta}{dt}$, separating the variables and integrating, you should get $$\int_0^t dt=\sqrt{\frac{L}{2g}}\int_{\theta_0}^\theta \frac{d\theta}{\sqrt{\cos\theta-\cos\theta_0}}.$$ The idea now is to make a couple of variable changes in order to arrive at a known integral. What you have to do is write $\cos a=1-2\sin^2(a/2)$ and then define $\sin(\theta/2)=\sin(\theta_0/2)\sin x$, such that $$d\theta=\frac{2\sin(\theta_0/2)\sqrt{1-\sin^2x}}{\sqrt{1-\sin^2(\theta_0/2)\sin^2x}}dx.$$ The integral becomes $$t=\sqrt{\frac{L}{g}}\int_{x_0}^x\frac{dx}{\sqrt{1-\sin^2(\theta_0/2)\sin^2x}}.$$ The period equals four times the time the pendulum takes going from $\theta=0$ to $\theta=\theta_0$. Therefore $$T=4\sqrt{\frac{L}{g}}\int_{0}^{\pi/2}\frac{dx}{\sqrt{1-\sin^2(\theta_0/2)\sin^2x}}.$$ This is a complete elliptic integral of the first kind. A nice description of the pendulum, including some nice plots, can be found on Wikipedia.
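The formula is easy to check numerically: for small $\theta_0$ the integral tends to $\pi/2$, recovering the familiar small-angle result $T = 2\pi\sqrt{L/g}$, while larger amplitudes give a longer period. A quick sketch using plain midpoint-rule integration (no special-function library needed):

```python
import math

def period(L, g, theta0, n=10000):
    """Pendulum period via the complete elliptic integral, midpoint rule."""
    k2 = math.sin(theta0 / 2) ** 2
    h = (math.pi / 2) / n
    integral = sum(h / math.sqrt(1 - k2 * math.sin((i + 0.5) * h) ** 2)
                   for i in range(n))
    return 4 * math.sqrt(L / g) * integral

L, g = 1.0, 9.81
small = period(L, g, 0.01)
print(small, 2 * math.pi * math.sqrt(L / g))   # nearly equal
print(period(L, g, math.radians(60)))          # noticeably longer
```

So the formula works for any amplitude of a simple (point-mass, rigid-rod, frictionless) pendulum; it is only the small-angle approximation $T \approx 2\pi\sqrt{L/g}$ that breaks down as $\theta_0$ grows.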
{ "domain": "physics.stackexchange", "id": 31053, "tags": "newtonian-mechanics, newtonian-gravity, time, integration, oscillators" }
In current flow, do electrons propagate simultaneously or one after another?
Question: If we have 2 atoms (atom 'A', atom 'B'), each with its own electron (A: 'Ea', B: 'Eb'): when Ea jumps from A -> B, is Eb simultaneously in the process of jumping from B -> 'C', or does Eb only jump to C after Ea has arrived at B? Answer: The electrons in a conductor (metal) are highly mobile, i.e., they are not tightly bound to the nuclei of their atoms, so they are essentially shared by all the neighboring nuclei of the atoms in the conductor, forming an electron "cloud". When an electric field is applied to a conductor, all of the electrons essentially simultaneously experience a force that causes them to collectively "drift", or move, in a direction opposite the direction of the field (the direction of a field, by convention, being the direction of the force that a positive charge would experience if placed in the field). There is no specific sequence, or "jumping", of individual electrons. In other words, the sequence of movements of individual electrons is random from electron to electron, but the overall movement of the electrons, collectively, is in the same direction and results in the current. Hope this helps.
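The collective drift the answer describes is quantitatively tiny, which can be seen from the standard relation $v = I/(nAe)$. A back-of-the-envelope sketch; the free-electron density of copper is a standard textbook figure, while the current and wire dimensions are assumed values for illustration:

```python
import math

# Drift velocity v = I / (n * A * e) for a current-carrying wire
I = 1.0            # current in amperes (assumed)
n = 8.5e28         # free-electron density of copper, m^-3 (textbook value)
e = 1.602e-19      # elementary charge, C
r = 0.5e-3         # wire radius in m (assumed 1 mm diameter wire)
A = math.pi * r ** 2

v_drift = I / (n * A * e)
print(f"{v_drift:.2e} m/s")   # on the order of 1e-4 m/s
```

So while the field (and thus the "start" signal for the drift) propagates through the wire at a sizable fraction of the speed of light, each individual electron drifts at only a fraction of a millimeter per second.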
{ "domain": "physics.stackexchange", "id": 62442, "tags": "electric-current" }
Package not compiling after update
Question: Hi, Ubuntu 12.04 and Fuerte. I just updated the ros-fuerte-related things from the Ubuntu Software Center, and the packages which were working and compiling before are showing this error:

mkdir -p bin
cd build && cmake -Wdev -DCMAKE_TOOLCHAIN_FILE=`rospack find rosbuild`/rostoolchain.cmake ..
-- The C compiler identification is GNU
-- The CXX compiler identification is GNU
-- Check for working C compiler: /usr/bin/gcc
-- Check for working C compiler: /usr/bin/gcc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Found PythonInterp: /usr/bin/python (found version "2.7.3")
[rosbuild] Building package depth_odometry
[rosbuild] Cached build flags older than manifests; calling rospack to get flags
Failed to invoke /opt/ros/fuerte/bin/rospack cflags-only-I;--deps-only depth_odometry
CMake Error at /usr/lib/vtk-5.8/VTKTargets.cmake:16 (ADD_EXECUTABLE):
  Command add_executable() is not scriptable
Call Stack (most recent call first):
  /usr/lib/vtk-5.8/VTKConfig.cmake:231 (INCLUDE)
  /usr/share/cmake-2.8/Modules/FindVTK.cmake:73 (FIND_PACKAGE)
  /opt/ros/fuerte/stacks/perception_pcl/pcl/vtk_include.cmake:1 (find_package)
CMake Error at /opt/ros/fuerte/share/ros/core/rosbuild/public.cmake:129 (message):
  Failed to invoke rospack to get compile flags for package 'depth_odometry'.
  Look above for errors from rospack itself. Aborting. Please fix the broken dependency!
Call Stack (most recent call first):
  /opt/ros/fuerte/share/ros/core/rosbuild/public.cmake:227 (rosbuild_invoke_rospack)
  CMakeLists.txt:16 (rosbuild_init)
-- Configuring incomplete, errors occurred!
make: *** [all] Error 1

Any help? Originally posted by sai on ROS Answers with karma: 1935 on 2013-06-06 Post score: 1 Answer: That was caused by this bug in PCL.
The next update should solve this issue; in the meanwhile, just do this: sudo rm -r /opt/ros/fuerte/stacks/perception_pcl/pcl Originally posted by Martin Günther with karma: 11816 on 2013-06-17 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 14463, "tags": "ros, rosdep, rosmake, update" }
Phase and group velocity - deriving phase changes into animation
Question: I have a question about the phase and group velocity topic. I am making an animation in Python showing both group and phase velocity changes. Let's assume we have 2 waves: $y_1 = A\cos(w_{1}t-k_{1}x)$ and $y_2 = A\cos(w_{2}t-k_{2}x)$. The superposition of $y_1$ and $y_2$ is $y_3 = 2 A\cos\left(\frac{\left(t\left(w_{1}+w_{2}\right)-x\left(k_{1}+k_{2}\right)\right)}{2}\right)\cdot \cos\left(\frac{\left(t\left(w_{1}-w_{2}\right)-x\left(k_{1}-k_{2}\right)\right)}{2}\right)$. Is there any way to derive a "phase changes" formula to accurately animate the green dot moving along $y_3$ like in this video? (the green moving dot starting from 0:31). Answer: @Paddy's comment is the correct method. In full it is: $$y_3 = 2 A\underbrace{\cos\left(\frac{\left(t\left(w_{1}+w_{2}\right)-x\left(k_{1}+k_{2}\right)\right)}{2}\right)}_\text{waves within wavepacket}\cdot \underbrace{\cos\left(\frac{\left(t\left(w_{1}-w_{2}\right)-x\left(k_{1}-k_{2}\right)\right)}{2}\right)}_\text{envelope function}$$ As the first term describes the waves within the wavepacket (it has a smaller wavelength than the envelope, i.e. a larger wavenumber, $k_1+k_2>\left|k_1-k_2\right|$), we find $x$ by setting the argument of $\cos$ to a constant. $$\frac{\left(t\left(w_{1}+w_{2}\right)-x\left(k_{1}+k_{2}\right)\right)}{2}=\text{const}$$ For ease I will let the constant be zero: $$\implies x=\frac{w_{1}+w_{2}}{k_{1}+k_{2}}t$$ Thus, the coordinates of the green point are: $$\left(\underbrace{\frac{w_{1}+w_{2}}{k_{1}+k_{2}}t}_x,\quad \underbrace{2 A\cos\left(\frac{\left(t\left(w_{1}-w_{2}\right)-\frac{w_{1}+w_{2}}{k_{1}+k_{2}}t\left(k_{1}-k_{2}\right)\right)}{2}\right)}_{y_3}\right)$$
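The dot-coordinate result above is easy to turn into Python. A minimal sketch — the parameter values here are illustrative assumptions, not taken from the question:

```python
import math

# Illustrative wave parameters (assumed values, not from the question)
A, w1, w2, k1, k2 = 1.0, 10.0, 9.0, 10.5, 8.5

def y3(x, t):
    """Superposition of the two waves: carrier times envelope."""
    carrier = math.cos((t * (w1 + w2) - x * (k1 + k2)) / 2)
    envelope = math.cos((t * (w1 - w2) - x * (k1 - k2)) / 2)
    return 2 * A * carrier * envelope

def green_dot(t):
    """Dot riding a carrier crest: hold the carrier phase at zero,
    so x = (w1 + w2) / (k1 + k2) * t (the carrier's phase velocity)."""
    x = (w1 + w2) / (k1 + k2) * t
    return x, y3(x, t)
```

Stepping `green_dot(t)` inside a frame loop (e.g. with matplotlib's `FuncAnimation`) moves the dot at the phase velocity $(w_1+w_2)/(k_1+k_2)$ while the envelope itself travels at the group velocity $(w_1-w_2)/(k_1-k_2)$.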
{ "domain": "physics.stackexchange", "id": 80607, "tags": "waves, velocity, superposition, phase-velocity" }
Why does our visible range not include infrared or UV radiation?
Question: As the radiation peak of the sun is in the UV region, and since at around room temperature materials emit radiation in the IR, I wonder why our eyes are not capable of using these wavelengths. I guess there is a reason why we see exactly the region between these intensity peaks? Answer: Usually we don't ask why in biology because the explanation is always the same: it was good enough for survival. But here are a couple of explanations. The radiation peak from the sun is in the visible range of the spectrum, between 400 nm and 700 nm with the highest point around 550 nm, as can be seen here or calculated from Wien's law and the sun's temperature. That's why photosynthetic pigments use the visible spectrum, and following them the rest of the ecological system. Our retina blocks most of the UV light and water absorbs the lower part of the IR, illustrated very nicely in this article, figure 1. It is true that materials on earth emit more radiation in the IR than in the UV-visible spectrum due to their temperature, but: (1) most objects have approximately the same temperature; (2) the soil emits a lot of radiation; (3) the radiation from the sun in the IR is high compared to the radiation of earth (this is the best graph I found, emphasizing the great difference between sun & earth intensities in the IR range); (4) there isn't a lot of variance in the wavelength intensity and therefore it is less useful (notice how uniform the IR reflection is in this leaf spectral signature). To summarize: UV light is dangerous, so we filter it out and don't use it; IR light is uniform and not useful; visible light is the peak of sun emission and therefore the most efficient range to use.
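The claim that the solar peak falls in the visible band is a one-line calculation via Wien's displacement law; a quick sketch (5778 K is the standard value for the Sun's effective photosphere temperature):

```python
# Wien's displacement law: lambda_max = b / T
b = 2.897771955e-3   # Wien displacement constant, m*K
T_sun = 5778.0       # effective photosphere temperature of the Sun, K

lam_max = b / T_sun  # peak wavelength of the solar blackbody spectrum, m
```

This gives roughly 500 nm, squarely inside the 400–700 nm visible range quoted in the answer.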
{ "domain": "biology.stackexchange", "id": 10816, "tags": "vision, eyes, human-eye, radiation" }
Evolution of a position state in an infinite well potential
Question: Let the potential be $$V = \infty \hspace{3cm}(0>x, x>L)$$ $$V = 0 \hspace{3.7cm}(L>x>0).$$ Now, we measure the position of a particle and discover it is located at $L/4$. What is the probability of finding the particle in each eigenstate of the energy? So, I guess the wavefunction is $$\psi = \delta(x-L/4)$$ We know the eigenbasis is given by $$\sqrt{\frac{2}{L}} \sin(n \pi x/L).$$ Then $$\delta(x-L/4) = \sum_n c_n \sqrt{\frac{2}{L}} \sin(n \pi x/L) \\ c_n = \sqrt{\frac{2}{L}} \int \delta(x-L/4) \sin(n \pi x /L)dx = \sqrt{\frac{2}{L}} \sin(\frac{\pi n}{4})$$ But $\sum |c_n|^2$ diverges! So how do I define the probability (generally given by $P_n = \frac{|c_n|^2}{\sum |c_n|^2}$)? Answer: Lots of things are wrong with $\psi=\delta(x-L/4)$. First, you cannot normalize it since $$ \int_0^L \vert \psi\vert^2 dx = \int_0^L dx \delta(x-L/4)^2 = \delta(0) $$ is technically infinite. Next, on dimensional grounds $\psi=\delta(x-L/4)$ does not work. Since $$ \int dx \vert \psi\vert^2 $$ is a probability and thus a dimensionless number, $\psi$ should have dimension of 1/(length)$^{1/2}$, so your $\psi$ does not have the correct dimension. Another way to see the same thing is that your expansion coefficients $c_n$ do actually have the dimension of (length)$^{-1/2}$ so their modulus square cannot be interpreted as a (dimensionless) probability of finding your initial state in an energy eigenstate. Sooooo… Your wavefunction $\psi$ is incorrect. You may want to try instead a normalized Gaussian initial state that is very strongly peaked at $x=L/4$, and then take the limit where the width of the Gaussian goes to $0$. Note that such a Gaussian would only approximately satisfy the boundary condition of the problem since the tail of the Gaussian would extend beyond the well, but you might be able to recover something meaningful in the limit of zero width.
Edit: Following the comments of @MichaelSeifert and others, I did some additional work, using $$ \psi(x)=\left\{\begin{array}{ll}1/\sqrt{\epsilon}& \text{if } 1/4-\epsilon/2 < x< 1/4+\epsilon/2\, ,\\ 0&\text{otherwise} \end{array}\right. $$ with $L=1$ for simplicity, and hoping to recover something in the limit where $\epsilon\to 0$ and sharply peaked $\psi$. It is possible to obtain the expansion coefficients $$ c_n(\epsilon)=\frac{2 \sqrt{2} \sin \left(\frac{\pi n}{4}\right) \sin \left(\frac{\pi n \epsilon }{2}\right)}{\pi n \sqrt{\epsilon }}\, . $$ Expanding the $\sin(n\pi \epsilon/2)$ term yields $$ \vert c_n(\epsilon)\vert^2 = \frac{1}{180} \epsilon \left( 360-30 \pi ^2 n^2 \epsilon ^2+\pi ^4 n^4 \epsilon ^4+\ldots \right) \sin ^2\left(\frac{\pi n}{4}\right)\, . $$ You can easily see how we get in trouble: for any given $\epsilon>0$, there is $n_0$ so that, for any $n>n_0$, $n^2\epsilon^2\pi^2>1$ and the series for $\vert c_n(\epsilon)\vert^2$ stops to make sense as an expansion. The interesting part is that for finite $\epsilon$, the values of $\vert c_n(\epsilon)\vert^2$ for large $n$, which apparently cause trouble in the expansion of $\vert c_n(\epsilon)\vert^2$, become numerically very small. This is clear from the expression for $c_n(\epsilon)$: for finite $\epsilon$ $c_n(\epsilon)$ scales like $1/n$, and since $\vert\sin(n \pi\epsilon/2)\vert$ is bounded by one, eventually this must decrease. For instance, for $\epsilon=1/25$ we have the following plots for the values of $\vert c_n(\epsilon)\vert^2$: Summing the probabilities from $n=1$ to $n=500$ gives $0.989873$, so the first $500$ terms capture 99% of the initial state. For $\epsilon=1/50$, the plot is qualitatively the same, except that the vertical scale is divided by $2$, reflecting the scaling with $\epsilon$ of the leading term $2\epsilon\sin\left(n\pi /4\right)^2$. One needs to now sum to $n=1000$ to get $\sim 99$% of the initial state.
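The numbers quoted in the edit are straightforward to reproduce numerically; this sketch sums $|c_n(\epsilon)|^2$ for $\epsilon = 1/25$ over the first 500 eigenstates (with $L = 1$, as in the edit):

```python
import math

def c_n(n, eps):
    """Expansion coefficient of the width-eps box state centred at x = L/4 (L = 1)."""
    return (2 * math.sqrt(2) * math.sin(math.pi * n / 4)
            * math.sin(math.pi * n * eps / 2)) / (math.pi * n * math.sqrt(eps))

eps = 1 / 25
captured = sum(c_n(n, eps) ** 2 for n in range(1, 501))
```

`captured` comes out to about 0.9899, matching the ~99% figure above; Parseval's theorem guarantees the full (infinite) sum equals 1.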
{ "domain": "physics.stackexchange", "id": 91138, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, probability, dirac-delta-distributions" }
rosdep can't find cyclonedds during foxy install on raspbian
Question: I have created a fresh raspbian system on as Pi3b. I have created a new user with sudo privileges, rebooted to new user and installed latest updates follow by a reboot. I am now trying to install ros2 following these instructions https://docs.ros.org/en/foxy/Installation/Linux-Install-Binary.html and using this image ros2-foxy-20201211-linux-focal-arm64.tar.bz2. When trying to install the dependencies I get : robot@robot:~/ros2_foxy $ rosdep install --from-paths ros2-linux/share --ignore-src --rosdistro foxy -y --skip-keys "console_bridge fastcdr fastrtps osrf_testing_tools_cpp poco_vendor rmw_connext_cpp rosidl_typesupport_connext_c rosidl_typesupport_connext_cpp rti-connext-dds-5.3.1 tinyxml_vendor tinyxml2_vendor urdfdom urdfdom_headers" ERROR: the following packages/stacks could not have their rosdep keys resolved to system dependencies: rmw_cyclonedds_cpp: No definition of [cyclonedds] for OS [debian] Following a previous question I then proceed with the -r option and all seems to go OK until I try to run the demos when I get the following errors: talker robot@robot:~ $ ros2 run demo_nodes_cpp talker Traceback (most recent call last): File "/home/robot/ros2_foxy/ros2-linux/bin/ros2", line 33, in <module> sys.exit(load_entry_point('ros2cli==0.9.8', 'console_scripts', 'ros2')()) File "/home/robot/ros2_foxy/ros2-linux/lib/python3.8/site-packages/ros2cli/cli.py", line 67, in main rc = extension.main(parser=parser, args=args) File "/home/robot/ros2_foxy/ros2-linux/lib/python3.8/site-packages/ros2run/command/run.py", line 70, in main return run_executable(path=path, argv=args.argv, prefix=prefix) File "/home/robot/ros2_foxy/ros2-linux/lib/python3.8/site-packages/ros2run/api/__init__.py", line 61, in run_executable process = subprocess.Popen(cmd) File "/usr/lib/python3.7/subprocess.py", line 775, in __init__ restore_signals, start_new_session) File "/usr/lib/python3.7/subprocess.py", line 1522, in _execute_child raise child_exception_type(errno_num, 
err_msg, err_filename) OSError: [Errno 8] Exec format error: '/home/robot/ros2_foxy/ros2-linux/lib/demo_nodes_cpp/talker' listener robot@robot:~/ros2_foxy $ ros2 run demo_nodes_py listener Traceback (most recent call last): File "/home/robot/ros2_foxy/ros2-linux/lib/demo_nodes_py/listener", line 33, in <module> sys.exit(load_entry_point('demo-nodes-py==0.9.3', 'console_scripts', 'listener')()) File "/home/robot/ros2_foxy/ros2-linux/lib/demo_nodes_py/listener", line 25, in importlib_load_entry_point return next(matches).load() File "/home/robot/.local/lib/python3.7/site-packages/importlib_metadata/__init__.py", line 100, in load module = import_module(match.group('module')) File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/robot/ros2_foxy/ros2-linux/lib/python3.8/site-packages/demo_nodes_py/topics/listener.py", line 16, in <module> from rclpy.node import Node File "/home/robot/ros2_foxy/ros2-linux/lib/python3.8/site-packages/rclpy/node.py", line 41, in <module> from rclpy.client import Client File "/home/robot/ros2_foxy/ros2-linux/lib/python3.8/site-packages/rclpy/client.py", line 22, in <module> from rclpy.impl.implementation_singleton import rclpy_implementation as _rclpy File "/home/robot/ros2_foxy/ros2-linux/lib/python3.8/site-packages/rclpy/impl/implementation_singleton.py", line 31, in <module> rclpy_implementation = _import('._rclpy') File "/home/robot/ros2_foxy/ros2-linux/lib/python3.8/site-packages/rclpy/impl/__init__.py", line 28, in _import return 
importlib.import_module(name, package='rclpy') File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) ModuleNotFoundError: No module named 'rclpy._rclpy' The C extension '/home/robot/ros2_foxy/ros2-linux/lib/python3.8/site-packages/rclpy/_rclpy.cpython-37m-arm-linux-gnueabihf.so' isn't present on the system. Please refer to 'https://index.ros.org/doc/ros2/Troubleshooting/Installation-Troubleshooting/#import-failing-without-library-present-on-the-system' for possible solutions Following the trouble shooting guide the ros run receive / send work as expected. using mlocate on "rclpy._rclpy" I find nothing, searching for rclpy finds a lot in the home/robot/ros2_foxy/ros2-linux subdirectories but nothing in /usr/bin or anywhere else I might expect executables. Is there another specific file I should be looking for or any more information I could provide to assist. Further info : I now understand why locate can't find the _rclpy private subroutine within rclpy. just to confirm I have sourced setup.bash before runningg the demos. I have also installed Python3.8.0 from source as that showed as missing in the error messages. I have ordered another microSD card so I can try installing ROS2 under Ubunut 20.04. I have used ubuntu since Dapper Drake and so will be more au fait with trouble shooting. The disadvantage is the lack of raspi-config in the ubunut install and I could not detect my camera even with the start_x=1 hack. Originally posted by bassline on ROS Answers with karma: 28 on 2021-03-04 Post score: 0 Original comments Comment by TheLegendaryJedi on 2021-03-19: Hi @bassline. I'm having the same problem. Any updates on this? Comment by bassline on 2021-03-20: I tried everything again and using "top" to monitor CPU and memory usage I could see it was swapping out memory all the time in the final build. I bought a new Pi4B and installed under ubuntu with no problems. 
I suggest you try and close everything that is not essential, reduce memory swapping (sorry can't remember where I found that hack) and then try again. PS If you decide to upgrade to a 4B be aware that the mains supply has a different connector so you will need a new power supply as well. Comment by bassline on 2021-04-07: I have tried again pulling the most recent code but still get problems compiling (e.g. /usr/bin/ld: ../../lib/libOgreMain.so.1.12.1: undefined reference to `__atomic_fetch_add_8') my command line input (after working around several problems) is: MAKEFLAGS="-j1 -l1" colcon build --symlink-install --cmake-args -DCMAKE_EXTRA_LINKER_FLAGS='-latomic' -DBUILD_TESTING=OFF --cmake-force-configure --packages-skip-build-finished --continue-on-error Has anyone actually succeeded in compiling ROS2 on a Pi3B or do I just transfer my efforts to mi Pi4b? Answer: Success!! I eventually managed to get ROS2 to build on a Pi3B. It took me 8 hours of trial and error yesterday and may not be the most efficient way but it worked for me. I based it on the approach here https://medium.com/swlh/raspberry-pi-ros-2-camera-eef8f8b94304 up to the camera install. The previous link depends heavily on https://docs.ros.org/en/foxy/Installation/Linux-Development-Setup.html The main issue is kswapd0 keeps kicking in to manage the swap. Following are the extra steps / changes I made. Edit /etc/sysctl.conf as root to add " vm.swappiness = 0". This stops swapping except when completely out of memory. Reboot to activate the changes I got a warning about missing paths when installing the build pre-requisites so I added these to my PATH. In my case this was achieved by "export PATH=$PATH:/home/brian/.local/bin" I amended ~/.colcon/defaults.yaml to remove the last line selecting the compiler and added " - -DBUILD_TESTING=OFF" to reduce the load. I still had kswapd0 cutting in regularly so I limited the C compiler to one instance per thread and only used 2 threads. 
This made my command line :- MAKEFLAGS="-j1 -l1" colcon build --symlink-install --packages-skip-build-finished --continue-on-error --parallel-workers 2 This had the additional benefit that I could run "top -i" in another terminal to see what was going on. 5 Even so I still got kswapd0 grabbing a large chunk of processor every so often so I would abort the build (using Ctrl C or in extremis Ctrl Z), reboot the pi and set the build to continue. Eventually I got it all compiled with only one error - concerning ROS1 bridge which I don't need. Query : I am not sure if I am allowed to close this or whether I need a moderator to do that. I will take action if this is still open at the end of the month. Comment : I am now moving on the the camera where Raspbian seems to set up /dev/video10 onwards but ROS2 wants to use /dev/video0 - Issue now fixed, just needed the camera activating and the cable connecting the right way round (doh!) UPDATE - added missing PATH information. INFO UPDATE: I had not come across the alternative mentioned below Originally posted by bassline with karma: 28 on 2021-04-08 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by sgvandijk on 2021-04-09: Hi, I'm very happy to see my article has helped you! I will amend it with some comments about the MAKEFLAGS and --parallel-workers, others have commented about running out of resources during build as well. The -DBUILD_TESTING=OFF option is a good one to add, too! It would be interesting to know which missing paths you saw. Regarding the camera: /dev/video10 and up are devices for the onboard hardware decoder, encoder and GPU image signal processor. If you have a camera module connected but you don't see a /dev/video0, maybe you have to enable it first: https://projects.raspberrypi.org/en/projects/getting-started-with-picamera/2 Comment by gvdhoorn on 2021-04-09: I'm ignorant here, but can you not use ros-tooling/cross_compile? Or is building on-device faster? 
Comment by sgvandijk on 2021-04-10: ros_cross_compile indeed is horribly slow, because it uses emulation rather than a real cross-compilation toolchain. I haven't done proper timing vs on-device, but the latter was fast enough for me and doesn't require any additional setup on a separate machine and any copying over of build artifacts. However it'll be very different if this issue gets resolved.
{ "domain": "robotics.stackexchange", "id": 36166, "tags": "ros, ros2, raspbian" }
Charge distribution on a conducting solid disc
Question: Imagine that you have a disc of radius $R$ with charge $Q$ on it. It is a conducting disc. What would be the charge distribution? Is there a uniform distribution over the whole area? $\sigma=\textrm{constant}$ Or is there a distribution that depends on $r$? $\sigma=\sigma(r)=\frac{Q}{\pi r^2}$ Or is there no charge on the area, with all of the charge placed at the edge of the disc? $\lambda=\frac{Q}{2\pi R}$ Answer: The charge density would not be uniform. It is highest at points and sharp edges, where (in theory) it tends towards infinity. For a disc the highest charge density would be at the rim. See Charge distribution on conductors and Why does charge accumulate at points? Regarding this problem, Andrew Zangwill in Application 5.1 of his book on Modern Electrodynamics states that There is no truly simple way to calculate the surface charge density for a charged, conducting disk. In this Application we use a method which regards the disk as the limiting case of a squashed ellipsoid... He proceeds to obtain the result $$\sigma = \frac{Q}{4\pi R \sqrt{R^2-r^2}}$$
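As a numerical sanity check on Zangwill's result: $\sigma$ here is the density on each face of the disc, so integrating it over both faces should recover $Q$ (i.e. $Q/2$ per face). A sketch using a simple midpoint rule, with the substitution $r = R\sin\theta$ to tame the integrable edge singularity:

```python
import math

def charge_per_face(Q, R, n=20000):
    """Integrate sigma(r) = Q / (4*pi*R*sqrt(R^2 - r^2)) over one face.
    Substituting r = R*sin(theta) makes sqrt(R^2 - r^2) = R*cos(theta),
    so the integrand is smooth in theta on [0, pi/2]."""
    total = 0.0
    dth = (math.pi / 2) / n
    for i in range(n):
        th = (i + 0.5) * dth           # midpoint rule
        r = R * math.sin(th)
        dr = R * math.cos(th) * dth
        sigma = Q / (4 * math.pi * R * math.sqrt(R * R - r * r))
        total += sigma * 2 * math.pi * r * dr
    return total
```

`charge_per_face(1.0, 1.0)` returns very nearly 0.5, i.e. $Q/2$ per face and $Q$ over both, consistent with the formula despite the divergence of $\sigma$ at the rim.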
{ "domain": "physics.stackexchange", "id": 48494, "tags": "homework-and-exercises, charge, conductors" }
Is there a preferred way of naming the resonance hybrid in keto-enol tautomerism?
Question: While answering a question about keto-enol tautomerism, the question arose of how to refer to the resonance hybrid instead of one of the resonance forms. In the case of the deprotonated butan-2-one, I can name 1 3-oxobutan-2-ide and 2 but-2-en-olate (ignoring stereochemistry). I cannot come up with a name for the hybrid 3. While this might be quite a simple case, there are many resonance-stabilised structures out there which have contributions from more than one form. It gets especially complicated for ionic substructures. I wondered if there is either already an official recommendation (preferred IUPAC name), or any other suggestion at all for handling those cases. After all, we are treating the same bonding situation with two different names; there should be something that covers both (multiple) cases. Answer: Nomenclature of Organic Chemistry, IUPAC Recommendations and Preferred Names 2013, in the section P-76 DELOCALIZED RADICALS AND IONS, mentions only "totally delocalized" ions like the cyclopentadiene-derived cyclopentadienyl radical, cyclopentadienylium cation, and cyclopentadienide anion; benzo[7]annulenylium (azulenium?) and the pentadienyl radical. However, there is a draft that expands this chapter, allowing more systematic naming of delocalized ions. P-76.1.2 Partial delocalization is denoted by the descriptor deloc preceded by the locants indicating the extent of delocalization. This descriptor and its locants are cited immediately before the appropriate suffix and enclosed in parentheses. cyclopenta-2,4-dien(1,2,3-deloc)ide ... But I am not 100% sure how exactly that would apply to your 3 structure.. let's be brave and creative: (2Z)-but-2-en-2-(2,3,O-deloc)olate or (2Z)-but-2-en-2-ol(2,3,O-deloc)ate or, if you don't need stereochemistry, 3-oxobutan-2-(2,3,O-deloc)ide or maybe even just butanon(2,3,O-deloc)ide ? .. or butanon(O,2,3,-deloc)ide ?
However, that draft is from 2004, but this rule did not make it into the current official nomenclature (2013), so for now we can just speculate, or contribute to the upcoming version.
{ "domain": "chemistry.stackexchange", "id": 8052, "tags": "nomenclature, ions" }
Can momentum never be zero in quantum mechanics?
Question: I have seen that Zettili's QM book deals with $E>V$ and $E<V$ (tunnelling) in the case of potential wells, deliberately avoiding the $E=V$ case, so I thought maybe something is intriguing about this and made this up. Suppose the total energy of the particle is equal to its potential energy. Then its kinetic energy should be zero (speaking non-relativistically). But the kinetic energy operator is $\hat{T}=\hat{p}^2/2m$ (where $\hat{p}=-i\hbar\frac{\partial}{\partial x}$), so clearly, since the kinetic energy is 0 here, the momentum eigenvalue will also vanish. Now, putting $E=V$ in the time-independent Schrödinger equation (1D) we get, $$\frac{\partial^2\psi}{\partial x^2}=-\frac{2m(E-V)}{\hbar^2}\psi\implies\frac{d^2\psi}{d x^2}=0\implies\psi=Ax+B$$ where $A$ and $B$ are arbitrary constants. Since the wave function must vanish at $\pm\infty$, $A=0$; hence the wave function equals a constant $B$ and is not normalizable. So, a particle with no momentum (or kinetic energy) gives a physically unrealizable wave function! Does this imply $E=V$ is a restricted critical case, or that momentum can't be zero in quantum mechanics, or did I just go wrong somewhere? Answer: You are not wrong, but it is worth noting that the same thing is true of any momentum eigenstate (or closely related unbound eigenstate of a Hamiltonian with a potential well in it). Explicitly $$ -i\hbar\frac{\partial}{\partial x} \psi(x) = p\,\psi(x) $$ then $$ \psi = A e^{i \frac{p}{\hbar}x} $$ which is not normalisable either. This means that we can never truly realise a momentum eigenstate, but we can still use them as a basis for physically realisable states using the rigged Hilbert space formalism. So yes, we cannot realise a state with exactly zero momentum, but this is not a special property of the zero momentum state.
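The eigenvalue relation in the answer can be checked numerically with a central-difference derivative. A sketch with $\hbar = 1$ for convenience (the momentum values are arbitrary illustrations):

```python
import cmath

HBAR = 1.0

def momentum_eigenvalue(p, x0=0.3, h=1e-6):
    """Apply -i*hbar*d/dx to psi(x) = exp(i*p*x/hbar) numerically
    (central difference) and read off the eigenvalue at the point x0."""
    psi = lambda x: cmath.exp(1j * p * x / HBAR)
    dpsi = (psi(x0 + h) - psi(x0 - h)) / (2 * h)
    return (-1j * HBAR * dpsi / psi(x0)).real
```

`momentum_eigenvalue(2.5)` returns approximately 2.5, while `momentum_eigenvalue(0.0)` returns 0: the constant wavefunction is the (non-normalisable) zero-momentum eigenstate, no more and no less pathological than any other plane wave.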
{ "domain": "physics.stackexchange", "id": 65536, "tags": "quantum-mechanics, momentum, wavefunction, schroedinger-equation, normalization" }
Build Farm Output is Difficult to Read
Question: Why do I need to look through over 1000 lines of output to figure out what went wrong with the build? Originally posted by David Lu on ROS Answers with karma: 10932 on 2013-12-26 Post score: 1 Answer: If you don't provide the full output, you may miss important things that happen during the build. If the information is not logged it is gone, because you don't have access to the build environment for introspection afterwards. When viewing a failed build I usually start by searching for "error" case-insensitively, find what the error is, and work back from there. Using this technique, especially searching from the bottom, it usually takes only a couple of seconds to find the failure in a log. Originally posted by tfoote with karma: 58457 on 2013-12-26 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by 130s on 2013-12-26: Great tip that I should've asked long ago. I also search for "not found". Comment by David Lu on 2013-12-27: That is a good tip, but that doesn't alleviate the essential usability problem. I don't argue having access to the log. It's that it doesn't provide any high-level information to give a hint other than reading the log. This answer seems to just be RTFM. Comment by tfoote on 2013-12-27: At the highest level you get pass or fail. If you have any suggestions for libraries or algorithms we can apply to them to give a good summary of failures we'd be happy to add them. Comment by 130s on 2013-12-27: Sounds more like a good enhancement for jenkins (or there might already exist plugins)?
{ "domain": "robotics.stackexchange", "id": 16529, "tags": "ros, buildfarm, build, jenkins" }
Costmap2d inflation radius
Question: I've been trying to increase the inflation radius of obstacles in the local_costmap for move_base. The problem that I've noticed is that the inflation radius as viewed by Costmap2dPublisher never changes. Using rosparam get, I verified that the inflation_radius parameter was correctly set, but even setting this value as high as 10m has no effect. It seems that the inflation radius of the obstacles is directly linked to the circumscribed radius of the robot footprint. If I set the robot footprint to a 0.5m x 0.5m box, the inflation radius is 0.25m. If I set the robot footprint to a 0.8m x 0.8m box, the inflation radius is 0.4m. From the documentation on costmap_2d, it seemed that I could set my own inflation radius, but I've been unable to do so. I would like to be able to set two different values for the footprint and the inflation_radius. How can I go about doing so? Originally posted by DimitriProsser on ROS Answers with karma: 11163 on 2012-02-03 Post score: 10 Answer: I had similar confusion when I first went to configure the costmap to inflate differently. The inflation_radius is actually the radius to which the cost scaling function is applied, not a parameter of the cost scaling function. Inside the inflation radius, the cost scaling function is applied, but outside the inflation radius, the cost of a cell is not inflated using the cost function. Take a look at the documentation for the cost_scaling_factor. What you have to do is basically solve the equation there ( exp(-1.0 * cost_scaling_factor * (distance_from_obstacle - inscribed_radius)) * (costmap_2d::INSCRIBED_INFLATED_OBSTACLE - 1)) for the correct cost_scaling_factor, using your distance from obstacle and the cost value you want that cell to have. You'll have to make sure to set the inflation radius large enough that it includes the distance you need the cost function to be applied out to, as anything outside the inflation_radius will not have the cost function applied. 
Originally posted by Eric Perko with karma: 8406 on 2012-02-03 This answer was ACCEPTED on the original site Post score: 13 Original comments Comment by Eric Perko on 2012-02-06: I'm not sure where you can get the inscribed_radius directly, since it's calculated from the robot footprint. It might get printed as a debug statement somewhere. I think I just calculated it by hand. Might be useful to open a ticket to print out debugs with this if costmap_2d doesn't already. Comment by DimitriProsser on 2012-02-03: That makes sense. Where can I obtain the value of "inscribed_radius"? Also, is there a way to get Rviz to account for this change? While I can see the cost values changing (by accessing the costmap directly), Rviz still limits inflation to the size of the robot footprint.
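The cost equation quoted in the accepted answer can be prototyped directly. A sketch of the arithmetic — note the value 253 for `costmap_2d::INSCRIBED_INFLATED_OBSTACLE` is the usual costmap_2d convention and is an assumption here, not stated in the answer:

```python
import math

INSCRIBED_INFLATED_OBSTACLE = 253  # costmap_2d's cost inside the inscribed radius (assumed value)

def inflated_cost(distance, inscribed_radius, cost_scaling_factor):
    """Cost assigned to a cell at `distance` from the nearest obstacle,
    per the inflation formula quoted in the answer."""
    return (math.exp(-cost_scaling_factor * (distance - inscribed_radius))
            * (INSCRIBED_INFLATED_OBSTACLE - 1))

def scaling_factor_for(target_cost, distance, inscribed_radius):
    """Invert the formula: the cost_scaling_factor that yields
    `target_cost` at `distance` from an obstacle."""
    return (-math.log(target_cost / (INSCRIBED_INFLATED_OBSTACLE - 1))
            / (distance - inscribed_radius))
```

Remember the answer's caveat: `inflation_radius` must still be set at least as large as the distance you solved for, or the cost function simply is not applied out there.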
{ "domain": "robotics.stackexchange", "id": 8099, "tags": "navigation, move-base, costmap-2d-ros, costmap-2d" }
Lunar angular frequency
Question: I have this question from my teacher which quite confuses me. It asks to calculate the lunar angular speed, when we know that the Moon circles Earth in 27.3 days. Why is the number (27.3) significant here? Is this a trick question? To my understanding the only needed number is 27 days, in which the Moon rotates once around its axis. So, ω = 2π / T = 2π / (27 × 24 × 3600) = (whatever number ends up here). Am I missing something? Answer: Yes: you are missing something. $0.3$, for one thing. The Moon's sidereal rotation period is $27.321661 \approx 27.3$ days, so why drop it in the formula? As hinted in the comments: the lunar rotation period determines its angular speed, but we don't know its rotation period (in the problem statement). Nevertheless, the moon is in a 1:1 tidal resonance with the Earth, so its sidereal rotation period is the same as its orbital period.
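Plugging the full sidereal period into the formula is a one-liner; a quick sketch:

```python
import math

T = 27.321661 * 24 * 3600  # sidereal period of the Moon, s
omega = 2 * math.pi / T    # angular speed, rad/s
```

This gives ω ≈ 2.66e-6 rad/s; using 27 days instead of 27.3 would be off by about 1%.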
{ "domain": "physics.stackexchange", "id": 49427, "tags": "angular-velocity, moon" }
The equivalent electric field of a magnetic field
Question: I know that the Lorentz force for a charge $q$ with velocity $\vec{v}$ in a magnetic field $\vec{B}$ is given by $$\vec{F} =q \vec{v} \times \vec{B}$$ but there will exist a frame of reference in which the observer moves at the same velocity as the charge $q$, so that in this frame $v=0$; hence this observer sees no magnetic force exerted on the charge $q$. I have worked on this problem for a while and found that special relativity predicts an equivalent electric force acting upon the charge instead. I want to know the relationship between this equivalent electric force and the magnetic force. Thanks in advance Answer: I haven't read them, but this, this, this and this thread (I thank a diligent Qmechanic) are related and clear up the but why-questions you might have. The transformations of the quantities in electrodynamics with respect to boosts are $$ \begin{alignat}{7} \mathbf{E}'&~=~ \gamma \left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right) &&+ \left(1 - \gamma\right) \frac{\mathbf{E} \cdot \mathbf{v}}{v^2} \mathbf{v} \\[5px] \mathbf{B}'&~=~\gamma\left(\mathbf{B}-\frac{1}{c^2}\mathbf{v} \times \mathbf{E}\right)&&+\left(1-\gamma\right)\frac{\mathbf{B} \cdot \mathbf{v}}{v^2} \mathbf{v} \\[5px] \mathbf{D}'&~=~ \gamma \left(\mathbf{D}+\frac{1}{c^2} \mathbf{v} \times \mathbf{H} \right) && + \left( 1 - \gamma \right) \frac{\mathbf{D} \cdot \mathbf{v}}{v^2} \mathbf{v} \\[5px] \mathbf{H}'&~=~ \gamma \left(\mathbf{H} - \mathbf{v} \times \mathbf{D}\right) && +\left(1 - \gamma\right) \frac{\mathbf{H} \cdot \mathbf{v}}{v^2}\mathbf{v} \\[5px] \mathbf{j}' & ~=~ \mathbf{j} - \gamma \rho \mathbf{v} && + \left(\gamma - 1 \right) \frac{\mathbf{j} \cdot \mathbf{v}}{v^2} \mathbf{v} \\[5px] \mathbf{\rho}' & ~=~ \gamma \left(\rho - \frac{1}{c^2} \mathbf{j} \cdot \mathbf{v}\right) \end{alignat} $$ where $\gamma\left(v\right)$ and the derivation of the transformation are presented on this Wikipedia page; the result is most transparent in a space-time geometrical picture, see for example here.
Namely, the electromagnetic field strength tensor $F_{\mu\nu}$ incorporates both the electric and magnetic fields $E,B$, and the transformation is the canonical one of a tensor and therefore not as all over the place as the six lines posted above. In the non-relativistic limit $v\ll c$, i.e. when physical boosts are not associated with Lorentz transformations, you have $$\mathbf{E}' \approx \mathbf{E} + \mathbf{v} \times \mathbf{B}, \qquad \mathbf{B}' \approx \mathbf{B} - \frac{1}{c^2}\mathbf{v} \times \mathbf{E}.$$ For the traditional force law, the first formula confirms the prediction that the new $E$ magnitude is $vB$. Also, beware and always write down the full Lorentz law when doing transformations. Lastly, I'm not sure that "special relativity predicts an equivalent electric force acting upon the charge instead" is the right formulation to use, because while the relation is convincingly natural in a special relativistic formulation, the statement itself is more a consistency requirement for the theory of electrodynamics. I'd almost say the argument goes in the other direction: The terrible transformation law of $E$ and $B$ with respect to Galilean transformations was known before 1905, and upgrading the status of the Maxwell equations to be form invariant when translating between inertial frames suggests that the Lorentz transformation (and then special relativity as a whole) is physically sensible.
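A small numeric illustration of the non-relativistic statement (all values arbitrary): for $v \ll c$ the boosted frame sees $\mathbf{E}' \approx \mathbf{v}\times\mathbf{B}$ (taking $\mathbf{E}=0$ in the lab), so the electric force there equals the lab-frame magnetic force.

```python
def cross(a, b):
    """3D cross product on plain tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

q = 1.6e-19            # charge magnitude, C (illustrative)
v = (1.0e3, 0.0, 0.0)  # lab-frame velocity, m/s (v << c)
B = (0.0, 0.0, 2.0)    # lab-frame magnetic field, T

F_lab = tuple(q * c for c in cross(v, B))  # magnetic force q v x B in the lab
E_prime = cross(v, B)                      # E' ~ v x B in the charge's rest frame
F_rest = tuple(q * c for c in E_prime)     # electric force q E' in the rest frame
```

Both force tuples come out identical by construction at this order; the $\gamma$ factors in the full transformation only matter at relativistic speeds.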
{ "domain": "physics.stackexchange", "id": 2817, "tags": "electromagnetism, special-relativity" }
Printing words starting with a given letter with the least amount of appearances with the print itself being an another appearance
Question: I've had problems with this one and was unable to solve it on my Algorithms 1 exam: Input: -n words -k letters Problem: For each letter print out the one word which starts with that letter and has the least number of appearances. If there are more than one, write the lexicographically smallest one. The printing of a word counts as an appearance. Complexity: Time: O(n log n + k) Space: O(n + k) This is the first algorithm I am unable to solve in the given time complexity and it is driving me crazy. Appreciate the help! Example: n=6 words: dog bark dog woof doggy doggy k=2 letters and corresponding outputs: d - 2 is the least number of appearances of a word starting with d, dog and doggy - dog is the lexicographically smaller one so we print dog if we say d again, since we printed out dog, that counts as another appearance so the only word appearing twice is doggy, and we print that one Answer: The following assumes that you can store words in $O(1)$ space and compare them in $O(1)$ time. For each letter $\ell$ keep: The number of appearances $n_\ell$ of the word starting with $\ell$ that appears the least amount of times. A list $L_\ell$ of words starting with $\ell$ that appeared $n_\ell$ times, in lexicographic order. A list $R_\ell$ of words starting with $\ell$ that appeared $n_\ell + 1$ times and are smaller than the first word in $L_\ell$, in lexicographic order. A sorted list $S_\ell$ of pairs $(n_w, w)$, where $w$ is a word starting with $\ell$ that appears $n_w \ge n_\ell+1$ times and is not in $R_\ell$. The pairs are sorted in lexicographic order. Notice that you can construct everything above in $O(n \log n)$ overall time for all the letters $\ell$. When you have to return a word starting with letter $\ell$, do the following: Remove the first word $w^*$ from $L_\ell$ and output it. Insert $w^*$ into $R_\ell$.
Insert all the words $w$ from a pair $(n_w, w)$ of $S_\ell$ such that (i) $n_w = n_\ell + 1$, and (ii) $w$ precedes the smallest word of $L_\ell$, into $R_\ell$. Delete the corresponding pairs from $S_\ell$. Notice that these are always the first elements in $S_\ell$. If $L_\ell$ is empty, then ignore constraint (ii). If $L_\ell$ is empty, increase $n_\ell$ by 1, swap $R_\ell$ and $L_\ell$, and repeat step 3. Steps 1, 2, and 4 can be performed in constant time. Step 3 requires a constant amount of time, plus a time proportional to the number of words $w$ moved from $S_\ell$ into $R_\ell$. This second term is upper bounded by $O(n)$ across all letters as this is an upper bound on the number of pairs contained in all $S_\ell$.
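The data-structure solution above is intricate; as a cross-check, here is a brute-force reference implementation of the problem statement itself (an editorial sketch, $O(n)$ per query rather than the required bound; the `make_query` helper name is made up). It reproduces the dog/doggy example from the question:

```python
from collections import Counter

def make_query(words):
    """Brute-force oracle for the problem: O(n) per query, not the required bound."""
    counts = Counter(words)

    def query(letter):
        # Least-frequent word starting with `letter`; ties broken lexicographically.
        candidates = [w for w in counts if w.startswith(letter)]
        best = min(candidates, key=lambda w: (counts[w], w))
        counts[best] += 1  # printing the word counts as an appearance
        return best

    return query

query = make_query(["dog", "bark", "dog", "woof", "doggy", "doggy"])
print(query("d"))  # dog   (dog and doggy both appear twice; dog is smaller)
print(query("d"))  # doggy (dog now has three appearances, doggy only two)
```

An oracle like this is handy for randomized testing of the $O(n \log n + k)$ structure: feed both the same query sequence and compare outputs.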
{ "domain": "cs.stackexchange", "id": 14267, "tags": "algorithms" }
What is the theoretical range of temperature the air must be in order to reflect/refract light (for a volumetric display)?
Question: Well, the only question I found on this website about volumetric displays in air was this one, but it specifically suggests making air denser in order to make it work, while my question is specifically about changing the temperature of air to show the voxels. In other words, the intention is to heat a portion of air (a cube-shaped area of any size) with a simple jet of hot air or using a lot of lasers for it, and then pointing a projector or RGB lasers to converge at specific points, (supposedly) making voxels. Like pointing a laser at a mirage that appeared above a hot area, you would (supposedly) see the laser path. How hot would this cube of air need to be in order to refract the lasers'/projector's light? Or would it be better to heat small points and then throw the lasers/projector at them, since it would be unsafe/uncomfortable to be around a continuous jet of hot air? The closest thing I could find was that volumetric display that ionises air with lasers to create images with points of light (but I think it is too dangerous to mess with it, maybe it can blind someone if you're not careful?) and the Heliodisplay, which throws a jet of condensed air/steam to project an image on it. Answer: To do what you are suggesting would require changing the index of refraction of the air significantly by heating and cooling localized points in space. The temperature dependence of the index of refraction of air has been addressed in another physics stackexchange post: Refractive Index of Air Depending on Temperature. It gives an equation that may be useful for your query.
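The linked equation is not reproduced in the text; as a rough stand-in (an editorial sketch, not from the answer), note that at constant pressure the refractivity $(n - 1)$ of air scales with density and hence inversely with absolute temperature, taking the sea-level reference value $2.77\times10^{-4}$ near 288 K as an assumed input:

```python
def air_refractivity(T_kelvin, n_minus_1_ref=2.77e-4, T_ref=288.0):
    """(n - 1) of air at constant pressure, assuming ideal-gas density scaling."""
    return n_minus_1_ref * T_ref / T_kelvin

# Refractivity drops as the air heats up and thins out:
for T in (288.0, 400.0, 600.0, 1000.0):
    print(T, 1.0 + air_refractivity(T))
```

On this rough model, even heating air to 1000 K changes $n$ by only about $2\times10^{-4}$, which illustrates why the answer stresses that the index would need to change significantly for a clearly visible effect.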
{ "domain": "physics.stackexchange", "id": 88401, "tags": "reflection, refraction, air, laser-interaction" }
inverse signs of imu data
Question: I am using orientation data from imu as an absolute orientation for robot_pose_ekf. However, the data published by imu and ekf are same value but in inverse signs. Like from imu, x is 0.64, but from ekf, x is -0.64. All x, y ,z ,w are inverse signs. The transformation is like below. I am really confused with this. Can anybody help. Thx. tf::Quaternion orientation; quaternionMsgToTF(imu->orientation, orientation); imu_meas_ = tf::Transform(orientation, tf::Vector3(0,0,0)); tf::TransformBroadcaster bc; bc.sendTransform(tf::StampedTransform(tf::Transform(tf::Quaternion(0,0,0,1),tf::Vector3(0,0,0)),imu_stamp_,base_footprint_frame_, imu->header.frame_id)); // Transforms imu data to base_footprint frame robot_state_.waitForTransform(base_footprint_frame_, imu->header.frame_id, imu_stamp_, ros::Duration(0.5)); StampedTransform base_imu_offset; robot_state_.lookupTransform(base_footprint_frame_, imu->header.frame_id, imu_stamp_, base_imu_offset); imu_meas_ = imu_meas_ * base_imu_offset; imu_broadcaster_.sendTransform(StampedTransform(imu_meas_.inverse(), imu_stamp_, base_footprint_frame_, imu->header.frame_id)); Originally posted by fyxbird on ROS Answers with karma: 28 on 2014-06-26 Post score: 0 Original comments Comment by Tom Moore on 2014-06-28: Any chance you can use the preformatted text markup for your code so that it's a bit easier to read? Just select all the code and hit the little icon with ones and zeros. Comment by fyxbird on 2014-06-30: Hi, Tom, sorry for the delay. I tried to edit the code again but it is still not formatted. Where is the little icon with ones and zeros. Thanks. Comment by Tom Moore on 2014-07-02: Fixed the formatting for you. Comment by fyxbird on 2014-07-02: Thanks, Tom. Answer: I'm a bit confused as to what you're trying to do. I believe robot_pose_ekf lets you specify an IMU topic to fuse with the state estimate. 
It looks like you've written a new node that is (a) creating a static (identity) transform and then publishing it, (b) using that same transform to transform the measurement into base_footprint_frame_, and then (c) re-broadcasting the IMU data as a transform. If I'm correct, then: (a) can be accomplished by static_transform_publisher, I think (b) should be imu_meas_ = base_imu_offset * imu_meas_; You're broadcasting the inverse of the imu_meas_ for (c). Since the base_imu_offset is the identity, you're effectively broadcasting the inverse of the IMU measurement. That might account for the negated values. I'm not sure that any of this is necessary, but again, I'm not sure what you're trying to accomplish. I'm willing to bet that robot_pose_ekf transforms all the incoming measurements into the target frame before integrating them. Originally posted by Tom Moore with karma: 13689 on 2014-07-02 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by fyxbird on 2014-07-05: Hi, Tom. Thanks for the answer and sorry for the delay. My intention of creating a static transform and broadcast it is to establish a transformation between the two frames for waitfortransform(). As the imu broadcaster is publishing transformation after waitfortransform(), I will get an error without broadcaster bc. The transformation that bc broadcast is actually tf::Transform(tf::Quaternion(x,y,z,1),tf::Vector3(0,0,0), but as I haven't got the exact value for x,y,z, I write it to be (0,0,0,1). Sorry for this. I have tried imu_meas_ = base_imu_offset * imu_meas_; but this is no difference. The result is the same as imu_meas_ = imu_meas_ * base_imu_offset; Most of these codes are from robot_pose_ekf package. The inverse is from the package. I have checked through the code and I think inverse is necessary. Comment by fyxbird on 2014-07-05: When I compare orientation from imu and ekf, I find that when I rotate imu around x axis. 
The data from imu keeps growing smoothly, like from 0.22 to 0.24. But data from ekf will change, it keeps the same growing from 0.22 to 0.23, and suddenly becomes -0.23x. The value is the same but with the opposite signs. Comment by Tom Moore on 2014-07-15: Ok, after some investigation, I think I see what you're trying to do. You actually modified the source for robot_pose_ekf by adding the first block of code, right? The second block (after the "// Transforms imu data to base_footprint frame") is code from robot_pose_ekf. Comment by Tom Moore on 2014-07-15: If I'm correct, then part of my previous answer stands: you need to get rid of the first block wherein you broadcast the transform. Revert robot_pose_ekf back to the way it was, and instead use a static_transform_publisher in the launch file. Comment by Tom Moore on 2014-07-15: Put this in your launch file: Replace "base_footprint" and "imu" with the names of your frames for your robot and IMU. Let me know if it works. Add your tf tree to the question if needed Comment by fyxbird on 2014-07-16: Hi, Tom. It works fine now. Thanks.
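A side note on the symptom reported in the question (an editorial addition, not from the original thread): flipping the sign of all four components (x, y, z, w) is quaternion negation, and q and −q represent exactly the same rotation, whereas the inverse of a unit quaternion negates only x, y, z. A quick check in plain Python:

```python
import math

def quat_to_matrix(x, y, z, w):
    """Rotation matrix of a unit quaternion (x, y, z, w) - tf component order."""
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ]

x = 0.64  # the x value mentioned in the question
q = (x, 0.0, 0.0, math.sqrt(1.0 - x * x))   # a rotation about the x axis
neg_q = tuple(-c for c in q)                # all four signs flipped

M1 = quat_to_matrix(*q)
M2 = quat_to_matrix(*neg_q)
same = all(abs(a - b) < 1e-12 for r1, r2 in zip(M1, M2) for a, b in zip(r1, r2))
print(same)  # True: every matrix entry is quadratic in the components
```

So an all-signs flip on its own is harmless to the represented orientation; a conjugate/inverse (only x, y, z flipped), as produced by `imu_meas_.inverse()`, is the transformation that actually reverses the rotation.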
{ "domain": "robotics.stackexchange", "id": 18397, "tags": "imu, navigation, robot-pose-ekf" }
How is synthesized Vacuum different from the one in the Universe?
Question: This question is with reference to Torricelli's vacuum barometer, which was one of the first experiments involving vacuum creation in laboratories. The atmospheric pressure causes a rise in the mercury in the column. By manipulating the length of the column, Torricelli first synthesized vacuum. However I'm not very certain about the difference between this synthetically created vacuum and the natural void of space. Several researchers and reputable sites claim that we on Earth cannot create a better vacuum (in terms of particles per unit volume) than what naturally exists in space. But the vacuum in the Universe contains dark energy which seems to be the cause of its expansion. So in a vacuum that is created in any experiment in laboratories, such as the one done by Torricelli, can there be the presence of dark energy along with the empty space? If that is the case, shouldn't there be quantum fluctuations or expansion driven by the dark energy in that vacuum, which would make a mercury barometer highly unstable as the vacuum will press against the mercury and will try to expand to infinity? If the vacuum that we created possesses no dark matter or energy, then haven't we been able to produce a more efficient void than the one in the Universe? Doesn't that contradict the fact that the spatial vacuum is the "purest of all"? Answer: Ultra High Vacuum goes down to pressures of perhaps $10^{-12}$mbar, which corresponds to a particle density of some $10^4$ molecules per cm$^3$. The interstellar medium has densities of perhaps 1 particle per cm$^3$ (Wikipedia agrees). So indeed the number density of particles in outer space can be much lower than that of a very good vacuum in a lab. Unless you are in a molecular cloud or such, in which case it might as well be much higher of course.
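The quoted pressure-to-density conversion can be sanity-checked with the ideal gas law, $n = P/(k_B T)$ (an editorial check; room temperature assumed):

```python
k_B = 1.380649e-23      # Boltzmann constant, J/K
P = 1e-12 * 100.0       # 1e-12 mbar in pascals (1 mbar = 100 Pa)
T = 293.0               # room temperature, K

n_per_m3 = P / (k_B * T)      # ideal gas law: n = P / (k_B * T)
n_per_cm3 = n_per_m3 * 1e-6   # 1 m^3 = 1e6 cm^3
print(n_per_cm3)              # a few times 1e4, matching "some 10^4 molecules per cm^3"
```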
The mass-energy density of dark energy is tiny in comparison to, well, in comparison to anything really: About $10^{-29}$g/cm$^3$, which is another factor $10^6$ or $10^7$ lower than the mass density corresponding to the interstellar particle density quoted above. Any quantum fluctuations in the vacuum of a mercury barometer are utterly negligible. One can however measure the Casimir effect in the lab, but this is the infamously "worst model in physics" for dark energy, so again, no information about dark energy from that at all. So then there is the question whether the vacuum chamber in your lab would contain dark energy or not. Since dark energy is believed to be a property of spacetime itself, the answer is expected to be "yes". So, no, you would not expect the vacuum we create to be "purer" than the vacuum in outer space. However, given the above orders of magnitude, it should be clear why there can be no experimental evidence to support that. Finally, you mention dark matter. Dark matter interacts only very weakly, so we would expect that it simply passes through any vacuum vessel you build. In other words, just like with dark energy but for other reasons, the density of dark matter is also expected to be the same, whether you are in outer space, in your lab breathing air, or in some vacuum vessel. Specifically, that density is of order $10^{-25}$g/cm$^3$ in the vicinity of our Sun.
{ "domain": "physics.stackexchange", "id": 86040, "tags": "quantum-mechanics, vacuum, dark-energy" }
Find acronyms in Haskell
Question: Related to this code golf challenge, I tried to find acronyms with Haskell without using regular expressions. The idea is to split the input string at every space or dash before finally gluing the heads of these parts together, if they are uppercase. This is my code: import System.Environment import Data.Char main :: IO () main = do [inp] <- getArgs -- get input from the command line putStrLn $ getAcronym inp getAcronym :: String -> String getAcronym [] = [] getAcronym s = foldr step [] parts where parts = split isWordSep s -- split into words step x acc = if isUpper . head $ x then head x : acc else acc -- glue uppercase heads together split :: (a -> Bool) -> [a] -> [[a]] split p [] = [] split p s@(x:xs) | p x = split p xs -- discard trailing white spaces | otherwise = w : split p r -- continue with the rest where (w, r) = break p s -- separate prefix isWordSep :: Char -> Bool isWordSep x = x == ' ' || x == '-' As this really seems like a very simple problem, my code looks way too complex. Do you have any helpful improvements to slim down my code? Answer: With the help of Gurkenglas, I have found a good solution for this problem: First, the getAcronym function can be dramatically reduced by using higher order functions and function composition: getAcronym :: String -> String getAcronym = filter isUpper . map head . split isWordSep Second, the split function can be replaced with Data.List.Split's wordsBy function, reducing the whole code to the following: import System.Environment import Data.Char import Data.List.Split (wordsBy) main :: IO () main = do [inp] <- getArgs -- get input from the command line putStrLn $ getAcronym inp getAcronym :: String -> String getAcronym = filter isUpper . map head . wordsBy isWordSep isWordSep :: Char -> Bool isWordSep x = x == ' ' || x == '-'
{ "domain": "codereview.stackexchange", "id": 27295, "tags": "beginner, strings, haskell" }
Promises with NodeJS and BlueBird
Question: I'm using bluebird for promises in my Node/Express application and wrote an API call in which the user passes in a JSON Web Token that contains their user information, and then I decode the token, and pull up the events that user should see based off their userId. If anyone could give me advice on how I can clean this code up while also using the proper promise practices, that would be great. /routes.js app.post('/api/events/', require('./views/api/index').events); /views/api/index.js var B_Promise = require('bluebird'); var jwt = require('jsonwebtoken'); exports.events = function(req, res) { var results = {}; var errors = []; var validateEmptyFields = function() { return new B_Promise(function(resolve, reject) { var token = req.body.token || req.param('token') || req.headers['x-access-token']; if (!token) { return reject('Please provide the token parameter'); } resolve(token); }); }; var getUser = function(token) { return new B_Promise(function(resolve, reject) { jwt.verify(token, req.app.config.api.secret, function(err, decoded) { if (err) { return reject(err); } resolve(decoded); }); }); }; var getEvents = function(user) { return new B_Promise(function(resolve, reject) { req.app.db.getConnection(function(err, connection){ if (err) { return reject(err); } /* jshint multistr: true */ connection.query('SELECT e.* FROM events e \ INNER JOIN event_to_groups etg ON e.id=etg.event_id \ INNER JOIN user_to_groups utg ON utg.group_id=etg.group_id \ WHERE utg.user_id=?', user.id, function(err, events) { if (err) { return reject(err); } results.events = events; resolve(); }); connection.release(); }); }); }; validateEmptyFields() .then(getUser) .then(getEvents) .catch(function(err) { errors.push(err); }) .finally(function() { res.json({results: results, errors: errors}); }); }; Answer: I would just name the variable for Bluebird promises as Promise as it mostly acts like the standard Promise object in ES6. 
That way, if you happen to run an ES6-compatible Node.js, you can just remove the import, and you'll be set. In the promise constructor, you need not do a return. The code will also feel awkward to a new developer, thinking Promise needs a return. You can simply do: jwt.verify(token, req.app.config.api.secret, function(err, decoded) { if (err) reject(err); else resolve(decoded); }); Bluebird has a static method called Promise.promisify which converts callback-style APIs into promise-returning ones. Just make sure you follow it's guidelines in that: function should conform to node.js convention of accepting a callback as last argument and calling that callback with error as the first argument and success value on the second argument. If your operation needs to return a promise, but it's not async, then you can simply return a resolved or rejected promise instantly using Promise.resolve and Promise.reject static methods, respectively. var validateEmptyFields = function() { var token = req.body.token || req.param('token') || req.headers['x-access-token']; return token ? Promise.resolve(token) : Promise.reject('Please provide the token parameter') }; In getEvents, I suggest you resolve with events. Then assemble results in the same way you assembled errors in catch. The idea is that your promise-generating functions should not be causing side-effects. It should only be aware of the fact that you called it with some arguments, and it resolves or rejects and nothing more. Move out that SQL query from the logic. It's messy. 
And so, without further ado: var Promise = require('bluebird'); var jwt = require('jsonwebtoken'); var EVENT_QUERY = '\ SELECT e.* FROM events e \ INNER JOIN event_to_groups etg ON e.id=etg.event_id \ INNER JOIN user_to_groups utg ON utg.group_id=etg.group_id \ WHERE utg.user_id=?'; exports.events = function(req, res) { var results = {}; var errors = []; var validateEmptyFields = function() { var token = req.body.token || req.param('token') || req.headers['x-access-token']; if (!token) return Promise.reject('Please provide the token parameter'); else return Promise.resolve(token); }; var getUser = function(token) { return new Promise(function(resolve, reject) { jwt.verify(token, req.app.config.api.secret, function(err, decoded) { if (err) return reject(err); else resolve(decoded); }); }); }; var getEvents = function(user) { return new Promise(function(resolve, reject) { req.app.db.getConnection(function(err, connection) { if (err) return reject(err); connection.query(EVENT_QUERY, user.id, function(err, events) { if (err) return reject(err); else resolve(events); }); connection.release(); }); }); }; validateEmptyFields() .then(getUser) .then(getEvents) .then(function(events) { results.events = events; }, function(err) { errors.push(err); }) .finally(function() { res.json({ results: results, errors: errors }); }); };
{ "domain": "codereview.stackexchange", "id": 18164, "tags": "javascript, node.js, promise, express.js" }
cache with least accessed items eviction
Question: I wrote this code, in order to implement a cache decorator which handles least accessed eviction. The role of this decorator is to memoize decorated functions calls with args and kwargs and return the previously computed value if still in cache. Warning, this handles only serializable args and kwargs, for the sake of simplicity. Please review it and tell me if this makes sense. The code: import json import hashlib from collections import OrderedDict class EvictionCache(OrderedDict): def __init__(self, *args, **kwargs): max_size = kwargs.pop('max_size', None) super(EvictionCache, self).__init__(*args, **kwargs) self._max_size = max_size @property def full(self): return len(self) >= self._max_size def set(self, hash, value): if self.full: self.evict() super(EvictionCache, self).__setitem__(hash, value) def get(self, hash): value = super(EvictionCache, self).pop(hash, None) if value: super(EvictionCache, self).__setitem__(hash, value) return value def evict(self): print 'evicting oldest cached item' super(EvictionCache, self).popitem(last=False) def cache(max_size=None): def make_md5(args, kwargs): data = dict(args=args, kwargs=kwargs) md5 = hashlib.md5(json.dumps(data, sort_keys=True)).hexdigest() return md5 def wrapper(f): # make this cache belong to the decorated function to prevent the cache # to be shared between different decorated functions f._cache = EvictionCache(max_size=max_size) def inner(*args, **kwargs): md5 = make_md5(args, kwargs) res = f._cache.get(md5) if res: print 'from cache' return res res = f(*args, **kwargs) print 'to cache' f._cache.set(md5, res) return res return inner return wrapper @cache(max_size=2) def func(x): return x * 2 if __name__ == '__main__': print func(1) print func(2) print func(2) print func(1) print func(2) print func(3) print func(4) print func(5) output to cache 2 to cache 4 from cache 4 from cache 2 from cache 4 to cache evicting oldest cached item 6 to cache evicting oldest cached item 8 to cache evicting oldest 
cached item 10 Answer: I don't really feel subclassing a built-in data structure and defining your custom get() and set() methods is a good idea. One of the downsides of the current implementation is that if I would do self.cache[key] = value - this will not go through the cache's set() method and, hence, will not go through the "eviction" check at all. If you are subclassing a dictionary or an ordered dictionary, properly define the __getitem__() and __setitem__() magic methods instead of writing your custom get() and set() on top of it. Also check this implementation that has a simpler wrapper, but uses a "proxy" "cached function" helper class. And, here is the source code of the Python's functools.lru_cache for the reference. Some other minor notes: use print() as a function for Python-3.x compatibility
{ "domain": "codereview.stackexchange", "id": 24755, "tags": "python, cache" }
/joint_states from rviz with gazebo running
Question: Hi there, I would like to have the same robot running in gazebo and in rviz. I need the joint_states from rviz but I have the problem that gazebo is publishing them too. Is there a way to filter the rviz joint_states? Thanks Originally posted by javatar on ROS Answers with karma: 1 on 2011-12-15 Post score: 0 Answer: Rviz neither publishes nor subscribes to joint_states. It is just a visualization tool and the robot is visualized based on TF. Can you please provide more information on the display type you are using? Originally posted by Lorenz with karma: 22731 on 2011-12-15 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 7652, "tags": "gazebo, rviz" }
What is the explanation of the changes in stability going down a group for carbonates, bicarbonates, fluorides, and chlorides?
Question: For carbonates and bicarbonates, I know that stability increases down the group, and for chlorides and fluorides, stability decreases down the group. Why does this happen? Can someone explain this in detail? (I am talking about S block alkali metals) Answer: Carbonates The quote from your text: Carbonates of alkaline earth metals are insoluble in water and can be precipitated by addition of a sodium or ammonium carbonate solution to a solution of a soluble salt of these metals. The solubility of carbonates in water decreases as the atomic number of the metal ion increases. All the carbonates decompose on heating to give carbon dioxide and the oxide. Beryllium carbonate is unstable and can be kept only in the atmosphere of CO2. The thermal stability increases with increasing cationic size. So the stability that you are referring to is thermal stability. This is an important detail. So what is thermal stability? It's how resistant a molecule is to decomposition at higher temperatures. What's happening to cause thermal instability? So, lets look at the carbonate ion here: This is just an illustration, and in reality the negative charge we see on the two $\ce{O}$ atoms is localized due to resonance. Below the illustration shows where the negative charge is likely to be concentrated (colored in red). So, when we create a carbonate complex like the example below, the negative charge will be attracted to the positive ion. Because of this polarization, the carbon dioxide will become more stable and energetically favorable. How does going down a group play into this? Well as you go down the group, the charged ion becomes larger. The larger the ion, we see a lower charge density. Charge density is basically the amount of charge in a given volume. So, if a small ion has the same charge as a larger ion, the charge density will be greater for that small ion. 
Greater charge density means a greater pull on the carbonate ion's charge; this stronger polarization distorts the ion and makes a stable $\ce{CO2}$ molecule energetically favorable. So, the larger the ion, the lower the charge density, the weaker the polarizing effect, and the less stable the resulting $\ce{CO2}$ molecule, favoring the $\ce{CO3}$. Chloride and fluoride stability The stability of fluorides, chlorides, and other halides is likewise related to their size. The halogens, specifically fluoride, are known for their electronegativity. Electronegativity is the tendency to attract electrons to itself. As you move up the group, you see an increase in electronegativity. This results in the creation of polar bonds. Illustrated below, you see that as the charge of the positive ions increases, polarizability increases (left), and as the halogen ion increases in size, polarizability and electronegativity decrease (right). When the ion's electron cloud is less polarized, the bond is less strong, leading to a less stable molecule. Information and illustrations on carbonate ions were sourced from here.
{ "domain": "chemistry.stackexchange", "id": 3100, "tags": "stability, polarity, electronegativity" }
How does the length of the minor semi-axis of the moon's disc's inner terminator vary through the cycle?
Question: The curve that separates the bright and dark parts of the moon's disc as viewed from the earth is called the inner lunar terminator, which is semi-elliptical in shape. We can model the disc as a unit circle centred on the origin, in which the inner terminator runs from (0,1) to (0,-1) and its major semi-axis always has length 1. How does its minor semi-axis vary during the lunar cycle? Please assume that orbits are circular. In other words, if the point halfway along the inner terminator is X, what is the equation for the motion of X back and forth along the line-segment from (-1,0) to (1,0)? Note that the shape of the visible disc is not a lune, except in the two trivial cases. Answer: The moon is rotating wrt the sun. As the moon rotates, the longitude of the terminator increases linearly. Treating the moon as being at a great distance, so the moon appears as an orthographic projection, the position of the middle of the terminator varies sinusoidally; it moves fastest when the moon is at first or third quarter. In the orthographic projection, the longitude and latitude $(\lambda,\varphi)$ are mapped to (x,y) by $$ \begin{align} x &= R\,\cos\varphi \sin\left(\lambda - \lambda_0\right) \\ y &= R\big(\cos\varphi_0 \sin\varphi - \sin\varphi_0 \cos\varphi \cos\left(\lambda - \lambda_0\right)\big) \end{align}$$ For points on the equator, and treating the central longitude $\lambda_0$ as the lunar prime meridian, and a scale of R=1, that simplifies to $x = \sin(\lambda)$. As the terminator moves, its longitude increases linearly, so x varies sinusoidally. Correcting for elliptical orbits, librations of the moon and a general perspective view would only marginally alter this.
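The conclusion can be written out numerically (an editorial sketch; conventions assumed here: t = 0 at full moon, T the synodic month, and the sign of x fixed arbitrarily by viewing orientation):

```python
import math

T = 29.53  # synodic month in days (assumed value)

def terminator_x(t):
    """Signed minor semi-axis of the inner terminator; t = 0 at full moon."""
    return math.cos(2.0 * math.pi * t / T)

def terminator_speed(t):
    """|dx/dt|: how fast the terminator midpoint sweeps across the disc."""
    return abs(2.0 * math.pi / T * math.sin(2.0 * math.pi * t / T))

# Full moon -> quarter -> new moon: x goes 1 -> 0 -> -1.
print(terminator_x(0.0), terminator_x(T / 4.0), terminator_x(T / 2.0))
# The midpoint moves fastest at the quarters (x = 0), slowest at the syzygies.
print(terminator_speed(T / 4.0) > terminator_speed(0.0))
```

So under these conventions the answer to the question's "equation of motion" is simply $x(t) = \cos(2\pi t/T)$, one full sinusoid per synodic month.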
{ "domain": "astronomy.stackexchange", "id": 2054, "tags": "the-moon, moon-phases" }
Why isnt node checked for nil value in start when transplanting binary tree
Question: Whilst I was reading CLRS I came across this: When TRANSPLANT replaces the subtree rooted at node u with the subtree rooted at node v, node u’s parent becomes node v’s parent, and u’s parent ends up having v as its appropriate child. TRANSPLANT(T, u, v) 1 if u.p == NIL 2 T.root = v 3 elseif u == u.p.left 4 u.p.left = v 5 else u.p.right = v 6 if v != NIL 7 v.p = u.p Lines 1–2 handle the case in which u is the root of T. Otherwise, u is either a left child or a right child of its parent. Lines 3–4 take care of updating u.p.left if u is a left child, and line 5 updates u.p.right if u is a right child. We allow v to be NIL, and lines 6–7 update v.p if v is non-NIL. Why wasn't v checked for a nil value at the start of the procedure, and if it was not checked there, then why was it checked in line 6? If v is nil then its parent will be its original parent and u's parent will reference v - wouldn't this cause an inconsistency in the tree? Context Transplant is the sub-procedure used in deletion of a node from a binary search tree. It is used in binary search trees, not in RB-Trees or B-Trees. Details about the book Edition - 3rd Print - 2nd PageNo - 296 Answer: If $v$ is NIL then transplanting $v$ just transplants an empty tree. This is fine. It is handled in the exact same way as transplanting an actual tree, the only difference being that we don't need to update $v$'s parent pointer. Could the code be structured differently? Probably. Why, then, did the authors choose this particular implementation? No deep reason. They programmed the algorithm, and that's the code they came up with. You might have come up with a different code. There's no one right answer in programming.
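A runnable transcription of the pseudocode (an editorial Python version) makes it easy to see that passing v = None, i.e. NIL, leaves the tree consistent: u is simply cut out, its parent's child pointer becomes NIL, and there is no v.p to update — which is exactly why line 6 guards the assignment.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.p = None
        self.left = None
        self.right = None

class Tree:
    def __init__(self):
        self.root = None

def transplant(T, u, v):
    """Replace the subtree rooted at u with the subtree rooted at v (CLRS)."""
    if u.p is None:                 # lines 1-2: u is the root
        T.root = v
    elif u is u.p.left:             # lines 3-4: u is a left child
        u.p.left = v
    else:                           # line 5: u is a right child
        u.p.right = v
    if v is not None:               # lines 6-7: NIL has no parent pointer,
        v.p = u.p                   # so there is nothing to update when v is NIL

# Delete a leaf by transplanting the empty tree (v = None) in its place:
T = Tree()
root, leaf = Node(5), Node(3)
T.root, root.left, leaf.p = root, leaf, root
transplant(T, leaf, None)
print(T.root.left)  # None: the leaf is gone and no dangling pointer remains
```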
{ "domain": "cs.stackexchange", "id": 6709, "tags": "algorithms, data-structures, binary-trees, algorithm-design" }
Nuclear bomb mushroom cloud with trumpet formation
Question: I have found this specific image here (Loong found out that it is the Soviet Joe 4 test of the 400 kiloton RDS-6 warhead at the Semipalatinsk test site on August 12, 1953): Also an impressive Youtube Video of the same explosion As you see, an inverted trumpet cloud is forming around the rising mushroom stem. I have never seen this strange picture before. My educated guess is that the mushroom is rising through a layer of supersaturated air (which cannot form clouds normally because of missing seeds provided now by the explosion), causing fog which is pushed evenly outside by the high air pressure inside the stem. Is there a name for the phenomenon of the trumpet cloud? Is my explanation correct and if not, what is the cause of the trumpet? Answer: This is called a 'skirt' or 'bell' and it is indeed a condensation effect: humid air is entrained by the rising column, and water then condenses out as the pressure falls. These droplets, if they get big enough fast enough, then fall with respect to the rising air, resulting in these skirts. Condensation phenomena are fairly common with nuclear (and other large) explosions. This is covered in the Wikipedia article.
{ "domain": "physics.stackexchange", "id": 36584, "tags": "thermodynamics, nuclear-physics, atmospheric-science, explosions, meteorology" }
Video lectures on graduate level Classical Electrodynamics
Question: This is a rather broad question. Does anyone know of good video lectures for graduate level classical electrodynamics? Answer: My understanding of graduate level is an overlap between J.D. Jackson's Classical Electrodynamics, Landau's Electrodynamics of Continuous Media and Landau's Classical Theory of Fields. Unfortunately, there isn't much video material out there, which is justifiable because there is no great pedagogical need here. If you understand Griffiths-level electrodynamics, then Jackson's book is a book of advanced methods for solving sophisticated problems, which is best learnt by doing problems. Although I am not greatly impressed, this is a set of video lectures that treats Landau and Jackson as textbooks. http://vubeam.pa.msu.edu/lectures/phy962/962d/electrodynamics/ It might be worthwhile to have a look at Leonard Susskind's lectures on classical electrodynamics and the classical theory of fields in the special relativity module. http://www.cosmolearning.com/video-lectures/electrodynamics/ If you're looking for companion notes, then these lecture slides would help you a lot, more specifically with understanding the material presented in Jackson; I found them really helpful. http://physics.gmu.edu/~joe/PHYS685/
{ "domain": "physics.stackexchange", "id": 66079, "tags": "resource-recommendations, classical-electrodynamics" }
Counting Metrics
Question: Say that I have a set of $n$ points $N$, and am interested in metrics $d:N\times N \rightarrow \mathbb{R}$ over $N$. Let $M$ denote the set of all metrics over $N$. Now let me define the distance between two metrics $d_1$ and $d_2$ in $M$ to be: $$\partial(d_1, d_2) = \left|\sum_{i,j \in N}d_1(i,j)-d_2(i,j)\right|$$ It isn't hard to see that $(M, \partial)$ itself forms a metric space. I am interested in the size of the smallest $\epsilon$-net of $(M, \partial)$. (i.e. the smallest subset $S \subset M$ such that for all $d \in M$ there is some $d' \in S$ such that $\partial(d, d') \leq \epsilon$.) Are bounds on this quantity known, and/or are there standard techniques for estimating quantities like this? EDIT: As Suresh points out, there is no finite $\epsilon$-net if we are talking about unbounded metrics. Let us consider normalized metrics $M$ such that for all $d \in M$, and for all $i,j \in N$, $d(i,j) \leq 1$. Of course now for all $d_1,d_2 \in M$, $\partial(d_1,d_2) < n^2$. Answer: A standard technique is a volume argument. For example, you can look at Chapter 13 of Lectures on Discrete Geometry by Jiri Matousek (Springer, 2002). Then, you need to know (a bound for) the volume of your polytope (the set of all metrics under consideration). If you restrict your cone by the inequalities $d(x,y)+d(y,z)+d(z,x) \leq 2$ for all $x,y,z\in N$, then the space becomes bounded, which is often called a "metric polytope" or "metric polyhedron" in the literature. The following paper gives the volumes when $|N|\leq 6$, but I couldn't find any study for asymptotics. A. Deza, M. Deza, K. Fukuda, "On skeletons, diameters and volumes of metric polyhedra", Lecture Notes in Computer Science 1120 (1996) 112–128. http://dx.doi.org/10.1007/3-540-61576-8_78
{ "domain": "cstheory.stackexchange", "id": 595, "tags": "co.combinatorics, metrics, epsilon-nets" }
How to optimize the player movement code like in Tomb Of The Musk game?
Question: I've written some code that performs movement similar to the Tomb Of The Musk game. The idea is that the object should move in a certain direction until it encounters an obstacle. After that, the player can choose which way the object should move. The code I wrote seems to me to be rather unoptimized. I think it could have been done quite a bit more rationally than I did. Object movement code:

[SerializeField] float speed = .9f;
[SerializeField] LayerMask obstacleMask;
Vector3 targetPosition;
Vector3 moving_direction;
bool movingHorizontally;
bool canCheck;

void Start()
{
    targetPosition = transform.position;
}

void Update()
{
    if (movingHorizontally)
        canCheck = Physics.Raycast(transform.position, Vector3.left, .6f, obstacleMask) || Physics.Raycast(transform.position, Vector3.right, .6f, obstacleMask);
    else
        canCheck = Physics.Raycast(transform.position, Vector3.forward, .6f, obstacleMask) || Physics.Raycast(transform.position, Vector3.back, .6f, obstacleMask);

    if (canCheck)
    {
        if (Input.GetAxisRaw("Horizontal") != 0)
        {
            if (Input.GetAxisRaw("Horizontal") > 0)
            {
                moving_direction = Vector3.right;
            }
            else
            {
                moving_direction = Vector3.left;
            }
            targetPosition = GetTargetPosition();
            movingHorizontally = true;
        }
        else if (Input.GetAxisRaw("Vertical") != 0)
        {
            if (Input.GetAxisRaw("Vertical") > 0)
            {
                moving_direction = Vector3.forward;
            }
            else
            {
                moving_direction = Vector3.back;
            }
            targetPosition = GetTargetPosition();
            movingHorizontally = false;
        }
    }
}

void FixedUpdate()
{
    transform.position = Vector3.MoveTowards(transform.position, targetPosition, speed);
}

Vector3 GetTargetPosition()
{
    float ray_lenght = 1;
    while (true)
    {
        if (Physics.Raycast(transform.position, moving_direction, ray_lenght, obstacleMask))
            break;
        ray_lenght += 1;
    }
    return transform.position + moving_direction * (ray_lenght - 1);
}

Do you have any ideas regarding the improvement of my code?
Answer: In terms of performance what I have done is removed unnecessary method calls and helped out your branch predictor by removing if/else statements. What I have mainly done though is to enhance code readability, which will make it much easier to find performance bottlenecks. I have left your logic intact as I don't have enough information to change it. Also I noticed you have a call to Vector3.left. Have you implemented your own Vector3 class?

[SerializeField] float speed = .9f;
[SerializeField] LayerMask obstacleMask;
Vector3 targetPosition;
Vector3 moving_direction;
bool movingHorizontally;
bool canCheck;

void Start()
{
    targetPosition = transform.position;
}

void Update()
{
    if (!CanMove()) return;

    var horizontalRawAxis = Input.GetAxisRaw("Horizontal");
    var verticalRawAxis = Input.GetAxisRaw("Vertical");

    (moving_direction, movingHorizontally) =
        horizontalRawAxis > 0 ? (Vector3.right, true) :
        horizontalRawAxis < 0 ? (Vector3.left, true) :
        verticalRawAxis > 0 ? (Vector3.forward, false) :
        (Vector3.back, false);

    targetPosition = GetTargetPosition();
}

void FixedUpdate()
{
    transform.position = Vector3.MoveTowards(transform.position, targetPosition, speed);
}

Vector3 GetTargetPosition()
{
    float ray_lenght = 1;
    while (!PhysicsRaycast(moving_direction, ray_lenght))
        ray_lenght += 1;
    return transform.position + moving_direction * (ray_lenght - 1);
}

bool CanMove(MovementType movementType) => movementType switch
{
    MovementType.Left => PhysicsRaycast(Vector3.left, .6f),
    MovementType.Right => PhysicsRaycast(Vector3.right, .6f),
    MovementType.Forward => PhysicsRaycast(Vector3.forward, .6f),
    MovementType.BackWard => PhysicsRaycast(Vector3.back, .6f),
    _ => throw new Exception("Unrecognised movement type"),
};

bool CanMove()
{
    return movingHorizontally
        ? CanMove(MovementType.Left) || CanMove(MovementType.Right)
        : CanMove(MovementType.Forward) || CanMove(MovementType.BackWard);
}

bool PhysicsRaycast(Vector3 moving_direction, float ray_length) =>
    Physics.Raycast(transform.position, moving_direction, ray_length, obstacleMask);

enum MovementType
{
    Left,
    Right,
    Forward,
    BackWard,
}
{ "domain": "codereview.stackexchange", "id": 45208, "tags": "c#, performance, unity3d" }
Formal Connection Between Symmetry and Gauss's Law
Question: In the standard undergraduate treatment of E&M, Gauss's Law is loosely stated as "the electric flux through a closed surface is proportional to the enclosed charge". Equivalently, in differential form, and in terms of the potential (in the static case): $$\nabla^2 \phi = -\frac{\rho}{\epsilon_0}$$ Now, when using the integral form, one typically uses the symmetries of a known charge distribution to deduce related symmetries in the electric field, allowing the magnitude of the field to be factored out of the integral. To do so, one usually relies on intuitive, heuristic arguments about how the field in question "ought to" behave$^1$. I'm wondering how one goes about formalizing this notion in precise mathematical terms. In particular, it seems that there ought to be an equivalent statement for Gauss's law in differential form, along the lines of "symmetries in $\rho$ induce related symmetries in $\phi$". Is there a way to formally state this claim? In particular: How would one formulate a proof of the conditions (necessary and sufficient) under which a symmetry of $\rho$ induces a symmetry in $\phi$? When it exists, how does one explicitly state the induced symmetry in terms of the known symmetry? Can such a result be generalized for arbitrary linear PDEs whose source terms exhibit some symmetry? It seems to me like there must exist a concise, elegant, and general way to state and prove the above, but I can't quite seem to connect all the dots right now. $^1$ See, for example, in Griffiths, Example 2.3, p. 72: "Suppose, say, that it points due east, at the 'equator.' But the orientation of the equator is perfectly arbitrary—nothing is spinning here, so there is no natural "north-south" axis—any argument purporting to show that $\mathbf{E}$ points east could just as well be used to show it points west, or north, or any other direction. The only unique direction on a sphere is radial." 
Answer: Let $\mathcal{D}$ be the operator corresponding to your equation ($-\epsilon_0 \nabla^2$ in this case). Let $U$ be some operator corresponding to the symmetry. It might be a rotation or parity transformation, etc. If $U f = f$, we say the function $f$ is symmetric. If $U\mathcal{D}=\mathcal{D} U$ as operators, then we say your equation is symmetric. Let $\mathcal{D} f = g$. If $\mathcal{D}$ is symmetric and $g$ is symmetric, then we can easily show $\mathcal{D}f = \mathcal{D} U f$. If we can take an inverse of $\mathcal{D}$, we've proven $f$ is symmetric. Taking an inverse of $\mathcal{D}$ is the same thing as being able to solve the equation uniquely. In your particular case we can solve the equation uniquely if we restrict our function space to have some boundary conditions, say vanishing at infinity. So this is the example you had in mind, and this is a formalization of the argument that $\phi$ must be symmetric. Now if we can't solve the equation uniquely then there may be a loophole in the argument. A particular case I have in mind is a magnetic monopole which is rotationally symmetric, but the vector potential solution has a Dirac string and is not. But any two solutions $f$ and $Uf$ in this case are connected by a gauge transformation.
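As a concrete illustration (my addition, not from the original answer), the whole argument can be replayed in a finite-dimensional setting, where $\mathcal{D}$ is a discrete 1D Laplacian with vanishing boundary conditions and $U$ is the mirror reflection of the grid:

```python
import numpy as np

n = 51  # odd, so the grid x_k in [-1, 1] is mirror-symmetric about 0
x = np.linspace(-1.0, 1.0, n)

# D: discrete Laplacian with Dirichlet (vanishing) boundary conditions.
D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

# U: reflection operator, (U f)(x) = f(-x), i.e. reversing the grid.
U = np.eye(n)[::-1]

# The equation is symmetric in the answer's sense: U D = D U.
assert np.allclose(U @ D, D @ U)

# A reflection-symmetric source: U rho = rho.
rho = np.exp(-10.0 * x**2)
assert np.allclose(U @ rho, rho)

# D is invertible (the Dirichlet Laplacian is negative definite), so the
# solution of D phi = -rho is unique -- and therefore symmetric: U phi = phi.
phi = np.linalg.solve(D, -rho)
assert np.allclose(U @ phi, phi)
```

The boundary conditions play the same role here as "vanishing at infinity" in the answer: they are what makes $\mathcal{D}$ invertible, closing the loophole.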
{ "domain": "physics.stackexchange", "id": 53492, "tags": "electromagnetism, mathematical-physics, symmetry, gauss-law, differential-equations" }
Map, which contains biomes, which contain landforms, which contain tiles
Question: I am trying to model this:

- a Map can have multiple children of type Biome and no parent
- a Biome can have multiple children of type Landform and a Map as its parent
- a Landform can have multiple children of type Tile and a Biome as its parent
- a Tile has no children and a Landform as its parent

I want it to be generic so I can easily add new links to the chain (like adding a new kind of section between Biome and Landform, for example). Here is the least ugly solution I have for now:

public class RootSection<T, TChild> : Section<T>
    where T : Section<T>
    where TChild : Section<TChild>
{
    public List<TChild> ChildSection { get; } // duplicate
}

public class MiddleSection<T, TChild, TParent> : Section<T>
    where T : Section<T>
    where TChild : Section<TChild>
    where TParent : Section<TParent>
{
    public List<TChild> ChildSection { get; } // duplicate
    public TParent Parent { get; } // duplicate
}

public class BottomSection<T, TParent> : Section<T>
    where T : Section<T>
    where TParent : Section<TParent>
{
    public TParent Parent { get; } // duplicate
}

public class Section<T> where T : Section<T>
{
    List<T> AdjacentSections { get; }
}

public class Map : RootSection<Map, Biome> { } // (T, TChild)
public class Biome : MiddleSection<Biome, Landform, Map> { } // (T, TChild, TParent)
public class Landform : MiddleSection<Landform, Tile, Biome> { } // (T, TChild, TParent)
public class Tile : BottomSection<Tile, Landform> { } // (T, TParent)

As you can see, there is already duplicate code and I can't think of a solution to get rid of this issue. I feel like I am either missing something obvious or over-complexifying the problem. I also feel like this is close to a classic data structure whose name I don't know, which prevents me from searching for inspiration on the net. How can I rewrite this code to look cleaner? Am I right to think it's close to a well-known data structure?
Answer: I want it to be generic so I can easily add new links to the chain (like adding a new kind of section between Biome and Landform for example) Generics work fine for simple generic data structures (like a List). Your "generic" data structure is actually a very special one which will not be used outside of your model. It is more a try to extract the common parts of your model to a generic structure which may be useful, but based on your question I can not see the value in your case. In my experience, data structures with multiple generic types, which have constraints on other generic types, are hard to understand and make the code more complicated. In your case, I would just give generics up and write the data structure down as it is:

public class Map
{
    public List<Biome> Biomes { get; } = new List<Biome>();
    public List<Map> AdjacentMaps { get; } = new List<Map>();
}

public class Biome
{
    public Map Map { get; }
    public List<Landform> Landforms { get; } = new List<Landform>();
    public List<Biome> AdjacentBiomes { get; } = new List<Biome>();
}

public class Landform
{
    public Biome Biome { get; }
    public List<Tile> Tiles { get; } = new List<Tile>();
    public List<Landform> AdjacentLandforms { get; } = new List<Landform>();
}

public class Tile
{
    public Landform Landform { get; }
    public List<Tile> AdjacentTiles { get; } = new List<Tile>();
}

Much more readable! The properties have more descriptive names, and it takes a minute to extend this hierarchical data structure with other types.
{ "domain": "codereview.stackexchange", "id": 33664, "tags": "c#, object-oriented, generics" }
Is $y(t) = y(t-4)+x(t-4)$ time invariant or not?
Question: I want to check the time invariance of this recursively defined system $$y(t) = y(t-4)+x(t-4)$$ We can check the time invariance of systems expressed directly in terms of x(t), but I couldn't find anything for such a recursively defined system. In the book "Signals and Systems" by A. Nagoorkani, we can test for time invariance as follows: Delay the input signal by m units of time and determine the response of the system for this delayed input signal. Let this response be y1(t). Delay the response of the system for the unshifted input by m units of time. Let this delayed response be y2(t). Check whether y1(t) = y2(t). If they are equal then the system is time invariant. Otherwise the system is time variant. Following these steps, the delayed version of the input signal is $$x((t-m)-4)$$ But what is its response (y1)? Is it $$y(t-4) + x((t-m)-4)$$ $$\text{or}$$ $$y((t-m)-4) + x((t-m)-4)?$$ In other cases, we just shift the function $x(t)$ and leave all other terms as they are, but the problem here is that the shifting of $x(t)$ may affect $y(t)$, and so $y(t-4)$ as well. So, should I shift that term as well or keep it as it is? Answer: Let's re-write your difference equation: $$y(t) - y(t-4) = x(t-4)$$ Delaying the input gives the following difference equation for the output $y_1(t)$: $$y_1(t) - y_1(t-4) = x(t-m-4)$$ A delayed response to an input $x(t)$ gives $$y(t-m) - y(t-m-4) = x(t-m-4)$$ From there you can see that $y_1(t)$ and $y(t-m)$ satisfy the same difference equation. The system is therefore indeed time-invariant.
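The argument can also be sanity-checked numerically; the sketch below (my own, not from the answer) runs a discrete version y[n] = y[n-4] + x[n-4] from initial rest and verifies that delaying the input by m samples just delays the output by m samples:

```python
import random

def simulate(x, n_out):
    """Run y[n] = y[n-4] + x[n-4] from rest (y = 0 and x = 0 for n < 0)."""
    y = []
    for n in range(n_out):
        y_prev = y[n - 4] if n - 4 >= 0 else 0.0
        x_prev = x[n - 4] if n - 4 >= 0 else 0.0
        y.append(y_prev + x_prev)
    return y

random.seed(0)
m = 3                                  # delay to test with
N = 40
x = [random.random() for _ in range(N)]
x_delayed = [0.0] * m + x[:N - m]      # x[n - m], zero-padded at the front

y = simulate(x, N)
y1 = simulate(x_delayed, N)            # response to the delayed input
y2 = [0.0] * m + y[:N - m]             # delayed response to the original input

# Same difference equation, same initial rest => identical signals.
assert all(abs(a - b) < 1e-12 for a, b in zip(y1, y2))
```

The "initial rest" assumption matters: it is the discrete analogue of both recursions starting from the same state, which is what lets the identical difference equations force y1 and y2 to coincide.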
{ "domain": "dsp.stackexchange", "id": 12566, "tags": "signal-analysis, linear-systems" }
If a purely inductive circuit is started in the presence of gravity, will the power source ever run out?
Question: I have read that purely inductive circuits do no work. So if an inductor with an AC source coming from an inverter is set up, will the source of power ever run out due to work against gravity? This operates under the assumption that the coil and connecting wires have 0 resistance. You might say that the work done in transferring electrons up the wire will be regained as potential when they descend. But what happens in a non-uniform gravitational field? Answer: Newtonian gravity is a conservative force, so there is no EMF around the loop due to gravity. Similarly for an electrostatic force. Suppose that current is delivered by mobile electrons. Just as a surface charge distribution on the surface of the conducting wires could arrange itself to oppose a constant electric field, so a proportional surface charge distribution can arrange itself to produce an electric field sufficient to suspend a mobile electron in a gravitational field. That's why if you lift up a piece of wire the mobile electrons don't all fall to the bottom of the wire (basically, some of them do until their field keeps the rest in check). Now when you hook the wires up there will still be that electric field supporting the mobile electrons. The other parts of the wire are supported by a stress caused by a strain of the lattice making up the solid. However, as the current starts to flow, additional surface charge (and a Hall voltage) will have to develop to counter any magnetic forces (to enforce that the flowing current remains in the wire). Note that when the gravitational field is non-uniform, it can still be countered by electrostatic forces as long as it is conservative. And Newtonian gravity is a conservative force. Since the electrostatic charge distribution counters the force of gravity on the mobile electrons, you can ignore both together.
Issues would arise if the circuit moved: the surface charge would then have to move as well, and since the gravitational field along the circuit would change, you would need changing surface charges to counter it.
{ "domain": "physics.stackexchange", "id": 29266, "tags": "electromagnetism, newtonian-gravity, electric-circuits, electric-current, induction" }
Check if an integer equals the sum of its digits to the fifth power
Question: I can do this in one line in Python:

sum([i for i in range(9,9999999) if i == sum([int(x)**5 for x in str(i)])])

Not very fast, but works nicely. I thought it might be quicker to do it in Haskell, a language I'm quite new to:

-- helper func
toDigits :: Int -> [Int]
toDigits 0 = []
toDigits x = toDigits (x `div` 10) ++ [x `mod` 10]

isSumofFifth n = n == sum (map (^5) (toDigits n))

sum (filter isSumofFifth [9..9999999])

But it seems as slow, or even slower (I haven't done exact profiling). I realise I could optimise it with a more refined upper bound, but aside from that, is there a better way to write this in Haskell? Answer: I agree with Glorfindel that the best result is achieved by thinking of the problem in a different way. Still, improvements can be made to the code that speed it up by about a factor of 3:

toDigits :: Int -> [Int]
toDigits 0 = []
toDigits x = let (d, m) = x `divMod` 10 in d `seq` m : toDigits d

isSumofFifth n = n == sum (map (^5) (toDigits n))

main :: IO ()
main = do
    let result = sum (filter isSumofFifth [9..9999999])
    putStrLn $ "Result is: " ++ show result

First the divMod function is used to compute the quotient and modulus in a single step rather than separately, which saves time, as they are expensive operations. More importantly, the toDigits function can be changed to generate the digits in reverse order, which is fine for this problem, and thereby avoid a series of concatenations. In this code, each digit is generated as needed, while in the original, the first digit can't be read until all of the others are generated and then concatenated together from a series of single-element lists. This causes a lot of copying. Another small speed-up is achieved by the seq operator, which ensures that d is fully calculated when m is returned, avoiding extra processing.
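As a cross-check on both versions (my addition, not part of the original answer): the upper bound can be tightened analytically. A number with d digits is at least 10^(d-1), while the sum of the fifth powers of its digits is at most d * 9^5 = 59049*d; for d = 7 that maximum is 413343 < 10^6, so no solution has seven or more digits and 6 * 9^5 = 354294 suffices as a limit. Starting from 10 excludes the trivial single-digit cases, matching the results of the programs above:

```python
limit = 6 * 9**5  # = 354294; no 7-digit (or longer) solutions exist, see above

hits = [i for i in range(10, limit + 1)
        if i == sum(int(c)**5 for c in str(i))]

print(hits)       # [4150, 4151, 54748, 92727, 93084, 194979]
print(sum(hits))  # 443839
```

Only six such numbers exist, so both the Python one-liner and the Haskell program should report 443839.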
{ "domain": "codereview.stackexchange", "id": 35118, "tags": "haskell, mathematics" }
Running Gazebo and ROS on a remote instance
Question: First off, let me say that I know that similar questions have been asked before, but I have been unable to find any answers that actually solve my problem. I am trying to run Gazebo and the associated gazebo_ros package on a headless OpenStack instance, and using the gzweb software to access it through a browser. I do not need gzclient to be running on this instance at all. When I run the following command: roslaunch gazebo_ros empty_world.launch it gives me the following error: Error [RenderEngine.cc:680] Unable to create glx visual Warning [RenderEngine.cc:88] Unable to create X window. Rendering will be disabled Other similar questions (such as here and here) suggest using DISPLAY=:0, but when I add this into the command as follows: DISPLAY=:0 roslaunch gazebo_ros empty_world.launch it gives me a different error: Error [RenderEngine.cc:665] Can't open display: :0 Warning [RenderEngine.cc:88] Unable to create X window. Rendering will be disabled I have also tried setting <arg name="gui" default="false"/> <arg name="headless" default="true"/> in the empty_world.launch file, but no luck. I'm pulling my hair out here trying to get this thing working. Any advice would be hugely appreciated. Originally posted by Jordan9 on ROS Answers with karma: 113 on 2015-04-21 Post score: 0 Answer: I am not an expert on Gazebo, or openstack, but I have seen some similar stuff before. Perhaps the X11 configuration is not set up properly. You are running openstack headless. Does it have an X window server installed/properly configured? Ultimately that is what will enable you to access the GUI over the network. Originally posted by aak2166 with karma: 593 on 2015-04-21 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Jordan9 on 2015-04-23: Yeah, I think it's an X11 server problem, not a Gazebo problem. Will move to a more appropriate question forum.
{ "domain": "robotics.stackexchange", "id": 21487, "tags": "ros, gazebo, gazebo-ros" }
ROS-I training_unit demo_manipulation fails to build
Question: Hi, This exercise is failing to build on my system due to the following error: /home/work/ros_industrial_training/training/work/demo_manipulation/src/robot_io/src/nodes/simulated_grasp_action_server.cpp:35:70: fatal error: object_manipulation_msgs/GraspHandPostureExecutionAction.h: No such file or directory #include <object_manipulation_msgs/GraspHandPostureExecutionAction.h> I was going to build the object_manipulation_msgs from sources but it is a rosbuild project not catkin. Is there a way to easily convert this to catkin? There is a branch on the github with a couple of folders 'catkinized' but I don't think it's enough as catkin doesn't seem to detect anything. Thanks for your help. {UPDATE #1] Adding the dependency didn't work but copying the object_manipulation_msgs src folder from the supplements directory did the trick. Not sure if that is how it is supposed to be done or if catkin should be automatically detecting the folder in the supplements directory. [UPDATE #2] There are a host of other errors as well when launching the roslaunch file after building: The files ur5.transmission.xacro and ur5.gazebo.xacro are missing from the ur_description folder and need to be copied over from the supplements folder. The ur5.urdf.xacro file has errors in the <mesh> and <collision> filename tags that also need to be corrected. [UPDATE #3] Opened 2 separate issues Originally posted by atoz on ROS Answers with karma: 58 on 2016-02-27 Post score: 1 Original comments Comment by gvdhoorn on 2016-02-28: Looking at your last two edits, I'm getting the feeling that something is not working correctly wrt how you overlay on the supplements directories. The errors you mention are a result of the non-supplement pkgs having been updated outside of the tutorials. The same for object_manipulation_msgs. Answer: I suggest to report this to the industrial_training issue tracker. There might be some add_dependencies(..) 
missing for the simulated_grasp_action_server, causing you to run into this error. I was going to build the object_manipulation_msgs from sources but it is a rosbuild project not catkin. Is there a way to easily convert this to catkin? There is a branch on the github with a couple of folders 'catkinized' but I don't think it's enough as catkin doesn't seem to detect anything. The object_manipulation_msgs version that is in the industrial_training repository is actually a Catkin package. See here. Could you try to add an add_dependencies(simulated_grasp_action_server ${catkin_EXPORTED_TARGETS}) to the CMakeLists.txt here? Edit: Adding the dependency didn't work but copying the object_manipulation_msgs src folder from the supplements directory did the trick. Not sure if that is how it is supposed to be done or if catkin should be automatically detecting the folder in the supplements directory. No, that is definitely not how that is supposed to work. This is at best a work-around. I can't select your comment as the answer, should I just close this question? Thanks for your help Please keep this open until we've actually fixed it. I can't find your issue at the issue tracker. Have you reported it? That would give the maintainers a chance to fix this at the source. Originally posted by gvdhoorn with karma: 86574 on 2016-02-28 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 23933, "tags": "ros, catkin, tutorials, ros-industrial, rosbuild" }
Project Euler #2 Efficiency
Question: Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1,2,3,5,8,13,21,34,55,89,... By considering the terms in the Fibonacci sequence whose values do not exceed N, find the sum of the even-valued terms. I'd like to reduce the complexity of my code.

import java.math.BigInteger;
import java.util.Scanner;

public class Solution {

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int testCases = sc.nextInt();
        for (int i = 0; i < testCases; i++) {
            BigInteger input = sc.nextBigInteger();
            System.out.println(calculate(input));
        }
    }

    public static BigInteger calculate(BigInteger input) {
        BigInteger fib1 = new BigInteger("0");
        BigInteger fib2 = new BigInteger("1");
        BigInteger sum = new BigInteger("0");
        BigInteger fib = new BigInteger("0");
        while (input.compareTo(fib) > 0) {
            if (fib.doubleValue() % 2 == 0)
                sum = sum.add(fib);
            fib = fib1.add(fib2);
            fib1 = fib2;
            fib2 = fib;
        }
        return sum;
    }
}

Answer: The test if (fib.doubleValue() % 2 == 0) does not produce the correct result for numbers with more than 17 digits, because that exceeds the precision of a double. Actually it returns true for 8944394323791464 and all subsequent Fibonacci numbers. That is not relevant for the concrete problem here (see below), but if you want to work with BigInteger then you should replace this with if (fib.remainder(bigTwo).equals(BigInteger.ZERO)) where bigTwo is defined as BigInteger bigTwo = new BigInteger("2"); Project Euler Problem #2 asks for the sum of all even-valued Fibonacci values not exceeding 4 million, so using BigInteger is not really necessary. All numbers fit into the range of int, and the above test simplifies to if (fib % 2 == 0) (I also planned to tell that using int instead of BigInteger makes the program more efficient, but it turned out that the difference in running time is not significant.)
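A further efficiency note, not in the original answer: every third Fibonacci number is even, and the even ones obey their own recurrence E(n) = 4*E(n-1) + E(n-2) with E(1) = 2 and E(2) = 8, so the parity test can be dropped entirely. A quick sketch in Python (the idea carries over to the Java version directly):

```python
def sum_even_fib(limit):
    """Sum of the even Fibonacci numbers not exceeding limit.

    Uses E(n) = 4*E(n-1) + E(n-2) with E(1) = 2, E(2) = 8, which follows
    from the fact that every third Fibonacci number is even.
    """
    a, b = 2, 8
    total = 0
    while a <= limit:
        total += a
        a, b = b, 4 * b + a
    return total

print(sum_even_fib(4_000_000))  # 4613732, the classic Project Euler #2 answer
```

This visits only a third as many terms as the straightforward loop, though for a limit of 4 million either approach is effectively instantaneous.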
{ "domain": "codereview.stackexchange", "id": 10149, "tags": "java, programming-challenge, fibonacci-sequence" }
Appending a codepoint to a UTF-8 std::string using icu4c
Question: My code is

void utf8_append(UChar32 cp, std::string& str) {
    size_t offset = str.size();
    str.resize(offset + U8_LENGTH(cp));
    auto ptr = reinterpret_cast<uint8_t*>(&str[0]);
    U8_APPEND_UNSAFE(ptr, offset, static_cast<uint32_t>(cp));
}

This works but seems ugly. Maybe I am overlooking a simpler approach? Relevant documentation: https://unicode-org.github.io/icu/userguide/strings/utf-8.html and https://unicode-org.github.io/icu-docs/apidoc/released/icu4c/utf8_8h.html. Answer: Beauty is in the eye of the beholder. I say it is perfectly valid and correct code! The only thing you might get rid of is the static_cast<uint32_t>, as a UChar32, which is an alias for int32_t, will implicitly cast to uint32_t without warnings. You could also use append() instead of resize(), avoiding the addition, and remove the temporary ptr, to finally get:

void utf8_append(UChar32 cp, std::string& str) {
    auto offset = str.size();
    str.append(U8_LENGTH(cp), {});
    U8_APPEND_UNSAFE(reinterpret_cast<uint8_t *>(&str[0]), offset, cp);
}

If you can use C++17, str.data() is slightly nicer than &str[0] in my opinion. Or you could write &str.front().
{ "domain": "codereview.stackexchange", "id": 39720, "tags": "c++, unicode" }
A Combinatorial Problem on Extremal Set Theory
Question: Given a ground set $[n]$, under what conditions on the parameters $a,b,c$ does a family of subsets $\mathcal{F}\subseteq 2^{[n]}$ with the following properties exist? (i) $\forall S\in \mathcal{F}$, $|S|=a$. (ii) $\forall S_1,S_2\in \mathcal{F}$, $|S_1\triangle S_2|\ge b$, where $S_1\triangle S_2$ denotes the symmetric difference of the two sets. (iii) $\forall T \subseteq [n], |T|\ge c$, $\exists S\in \mathcal{F}$, such that $|S\cap T|\ge 0.5a$. As mentioned below by Aryeh, a trivial case can be $\mathcal{F}=\{S_1,S_2\}$ with $S_1=[n/2]$ and $S_2=[n]-S_1$, where we have $a=c=n/2$ and $b=n$. Basically the problem asks under what conditions on $a,b$ we can get $c=o(n)$. Any known results or related ideas about nontrivial sufficient (no need to be necessary, of course) conditions on $a,b,c$ are appreciated. This combinatorial problem arises from a communication game, and may have a concrete connection with error-correcting codes. Answer: This is arguably trivial, but maybe good enough for you: For any $a,b,c$ with $b \le a \le c$ (in particular $a=b=c$), if $\mathcal{F}$ is maximal subject to (i) and (ii), then it satisfies (iii) for free. So if you're okay with the $b \le a \le c$ condition, then you can otherwise pick the parameters however you want, and then greedily build $\mathcal{F}$. The argument is very easy: In light of (i), (ii) can be equivalently restated as requiring $|S_1 \cap S_2| \le a - b/2$. (In general: $|S_1 \triangle S_2| = |S_1| + |S_2| - 2|S_1 \cap S_2|$.) Then, when $b \le a \le c$, if (iii) fails for some set, then you can add any size-$a$ subset of it to $\mathcal{F}$ while preserving (i) and (ii). Semi-related note: I'm reminded of the combinatorial designs used in the Nisan-Wigderson generator. There, they want the sets in $\mathcal{F}$ to have only $O(\log(n))$ overlap (so $b = 2a - O(\log(n))$), but, instead of (iii), they only want that $\mathcal{F}$ is large.
So if you want a larger range of parameters than $b \le a \le c$, or have some flexibility in the properties required of your objects, then you might try looking at the related literature for more ideas.
{ "domain": "cstheory.stackexchange", "id": 3891, "tags": "cc.complexity-theory, set-theory, extremal-combinatorics" }
How to calculate $\phi_{i,j}$ in VGG19 network?
Question: In the paper Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network by Christian Ledig et al., the distance between images (used in the loss function) is calculated from feature maps $\phi_{i,j}$ extracted from the VGG19 network, where $\phi_{i,j}$ is defined as the "feature map obtained by the j-th convolution (after activation) before the i-th maxpooling layer". Can you elaborate on how to calculate this feature map, maybe for the VGG54 loss mentioned in the paper? $\phi_{5,4}$ means the 4th convolutional layer before the 5th max-pooling layer, right? But that layer has 512 filters, so we would have 512 feature maps. Which one should we choose? Also, what does "after activation" mean? I found this answer related to the same issue, but the answer didn't explain much. Answer: In section 2.2.1 of the paper, they state that they use euclidean distance. I'm going to take your word that there are 512 filter activations in that layer; if I'm reading this right, there aren't 512 feature spaces, there is a 512-dimensional feature space that they are calculating euclidean distance in. So your distance function between two images $p$ and $q$ is just the standard Euclidean distance formula: $$ d(\mathbf{p},\mathbf{q}) = \sqrt{\sum_{i=1}^{512}(p_i - q_i)^2}$$ where $\mathbf{p}$ and $\mathbf{q}$ are vectors holding the corresponding filter activations of $p$ and $q$. Edit: Above the horizontal rule is my original answer, which is wrong (or incomplete). What I think is happening is that the authors are taking the euclidean distance as above for each position in the feature maps at the $i,j$ layer, and averaging those distances to generate a scalar loss value. So for a 7x7 feature map, they'd be taking 49 512-dimensional euclidean distances and averaging them to get the VGG19 5,4 loss. This is how I read equation (5) in section 2.2.1 in their paper.
I think the missing piece is that the authors don't bother with the square root in the euclidean distance formula. As discussed below, I think the notation is unclear.
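As a concrete illustration of that reading of Eq. (5), here is a small NumPy sketch. This is an assumption-laden illustration, not the authors' code: the function name and the (H, W, C) array layout are my own choices.

```python
import numpy as np

def vgg_content_loss(phi_p, phi_q):
    """Hypothetical sketch of the VGG feature loss as read above.

    phi_p, phi_q: feature maps of shape (H, W, C), e.g. C = 512 for phi_{5,4}.
    For each of the H*W spatial positions, take the squared C-dimensional
    euclidean distance (no square root), then average over positions.
    """
    sq_dists = np.sum((phi_p - phi_q) ** 2, axis=-1)  # shape (H, W)
    return sq_dists.mean()

# Toy check: all-zeros vs all-ones maps with C = 3 give squared distance 3
# at every spatial position, so the mean is 3 as well.
loss = vgg_content_loss(np.zeros((7, 7, 3)), np.ones((7, 7, 3)))
```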
{ "domain": "datascience.stackexchange", "id": 4146, "tags": "deep-learning, cnn, feature-extraction, gan, vgg16" }
Homotopy type theory and Gödel's incompleteness theorems
Question: Kurt Gödel's incompleteness theorems establish the "inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic". Homotopy Type Theory provides an alternative foundation for mathematics, a univalent foundation based on higher inductive types and the univalence axiom. The HoTT book explains that types are higher groupoids, functions are functors, type families are fibrations, etc. The recent article "Formally Verified Mathematics" in CACM by Jeremy Avigad and John Harrison discusses HoTT with respect to formally verified mathematics and automatic theorem proving. Do Gödel's incompleteness theorems apply to HoTT? And if they do, is homotopy type theory impaired by Gödel's incompleteness theorem (within the context of formally verified mathematics)? Answer: HoTT "suffers" from Gödel incompleteness, of course, since it has a computably enumerable language and rules of inference, and we can formalize arithmetic in it. The authors of the HoTT book were perfectly aware of its incompleteness. (In fact, this is quite obvious, especially when half of the authors are logicians of some sort). But does incompleteness "impair" HoTT? No more than it does any other formal system, and I think the whole issue is a bit misguided. Let me try an analogy. Suppose you have a car which can't take you everywhere on the planet. For instance, it can't climb vertically up a wall. Is the car "impaired"? Of course, it can't get you to the top of the Empire State building. Is the car useless? Far from it, it can take you to many other interesting places. Not to mention that the Empire State building has elevators.
{ "domain": "cstheory.stackexchange", "id": 2662, "tags": "lo.logic, type-systems, homotopy-type-theory" }
Working around a Segmentation Fault for reading Files in C
Question: I am getting a segmentation fault on the below snippet, only when I go above a text file in the range of 80-100kb. It will read smaller text files, but otherwise I get segmentation fault 11. int main(int argc, char* argv[]) { FILE* file; if (argc != 2 || (file = fopen(argv[1], "r")) == NULL) { printf("Invalid command\n"); exit(EXIT_FAILURE); } int capacity = 5; char* buffer = malloc(capacity); int length = 0; char ch; while ((ch = getc(file)) != EOF) { if (length >= capacity) capacity *= 2; buffer = realloc(buffer, capacity); buffer[length++] = ch; } buffer[capacity] = '\0'; printf("%d\n", length); printf("%s", buffer); free(buffer); fclose(file); return EXIT_SUCCESS; } What type of problems could I be overlooking and how can I improve this code to more efficiently achieve what I am trying to get out of it (which is to read an unknown size of text from a file and output it as a string, also dynamically allocating memory by doubling array size)? Answer: Besides not checking the return code as @CiaPan described, there's another problem: char ch; is incorrect, as EOF is not representable by char; it's being converted from 0xffffffff to 0xff and might cause an early exit if the file happens to contain the byte 0xff. Here's the fixed code (also ensuring there is room for the terminating '\0', since the original buffer[capacity] = '\0'; writes one byte past the end of the allocation): #include <stdio.h> #include <stdlib.h> int main(int argc, char* argv[]) { FILE* file; if (argc != 2 || (file = fopen(argv[1], "r")) == NULL) { printf("Invalid command\n"); exit(EXIT_FAILURE); } int capacity = 5; char* buffer = malloc(capacity); if (!buffer) { perror("malloc()"); exit(EXIT_FAILURE); } int length = 0; int ch; while ((ch = getc(file)) != EOF) { if (length >= capacity) capacity *= 2; buffer = realloc(buffer, capacity); if (!buffer) { perror("realloc()"); exit(EXIT_FAILURE); } buffer[length++] = ch; } buffer = realloc(buffer, length + 1); if (!buffer) { perror("realloc()"); exit(EXIT_FAILURE); } buffer[length] = '\0'; printf("%d\n", length); printf("%s", buffer); free(buffer); fclose(file); return EXIT_SUCCESS; }
{ "domain": "codereview.stackexchange", "id": 27850, "tags": "c, file, console, memory-management" }
Error when trying to connect with robot controller
Question: I have installed MotoROS into my Motoman robot controller, but when I try to connect it with doing: Warehouse Host: 192.168.255.1 (controller ip) and port: 50240 I get this error DBClientCursor::init call() failed [ERROR] [1427203508.542909786]: Exception caught while processing action 'connect to database' And Warehouse Host: 192.168.255.1 (controller ip) and port: 50241 I get: Tue Mar 24 08:33:16.723 ERROR: MessagingPort::call() wrong id got:1 expect:3 toSend op: 2004 response msgid:15 response len: 144 response op: 0 remote: 192.168.255.1:50241 Tue Mar 24 08:33:16.723 Assertion failure false src/mongo/util/net/message_port.cpp 246 0xa29b5540 0xa29b6918 0xa29ab744 0xa29bc5df 0xa29bc8a1 0xa296675d 0xa29873a5 0xa29654c4 0xa296d97f 0xa29656b2 0xa2965bf2 0xa296a9da 0xa296aaeb 0xa29624f4 0xa2ea0cce 0xa2ea7a4c 0xa2e922b7 0xa2e925f2 0xa3190dc3 0xa3164e91 /opt/ros/indigo/lib/libwarehouse_ros.so(_ZN5mongo15printStackTraceERSo+0x30) [0xa29b5540] /opt/ros/indigo/lib/libwarehouse_ros.so(_ZN5mongo10logContextEPKc+0x58) [0xa29b6918] /opt/ros/indigo/lib/libwarehouse_ros.so(_ZN5mongo12verifyFailedEPKcS1_j+0xd4) [0xa29ab744] /opt/ros/indigo/lib/libwarehouse_ros.so(_ZN5mongo13MessagingPort4recvERKNS_7MessageERS1_+0x2af) [0xa29bc5df] /opt/ros/indigo/lib/libwarehouse_ros.so(_ZN5mongo13MessagingPort4callERNS_7MessageES2_+0x41) [0xa29bc8a1] /opt/ros/indigo/lib/libwarehouse_ros.so(_ZN5mongo18DBClientConnection4callERNS_7MessageES2_bPSs+0x3d) [0xa296675d] /opt/ros/indigo/lib/libwarehouse_ros.so(_ZN5mongo14DBClientCursor4initEv+0xb5) [0xa29873a5] /opt/ros/indigo/lib/libwarehouse_ros.so(_ZN5mongo12DBClientBase5queryERKSsNS_5QueryEiiPKNS_7BSONObjEii+0x244) [0xa29654c4] /opt/ros/indigo/lib/libwarehouse_ros.so(_ZN5mongo18DBClientConnection5queryERKSsNS_5QueryEiiPKNS_7BSONObjEii+0x7f) [0xa296d97f] /opt/ros/indigo/lib/libwarehouse_ros.so(_ZN5mongo17DBClientInterface5findNERSt6vectorINS_7BSONObjESaIS2_EERKSsNS_5QueryEiiPKS2_i+0x92) [0xa29656b2] 
/opt/ros/indigo/lib/libwarehouse_ros.so(_ZN5mongo17DBClientInterface7findOneERKSsRKNS_5QueryEPKNS_7BSONObjEi+0x92) [0xa2965bf2] /opt/ros/indigo/lib/libwarehouse_ros.so(_ZN5mongo20DBClientWithCommands10runCommandERKSsRKNS_7BSONObjERS3_i+0x9a) [0xa296a9da] /opt/ros/indigo/lib/libwarehouse_ros.so(_ZN5mongo18DBClientConnection10runCommandERKSsRKNS_7BSONObjERS3_i+0x3b) [0xa296aaeb] /opt/ros/indigo/lib/libwarehouse_ros.so(_ZN5mongo20DBClientWithCommands5countERKSsRKNS_7BSONObjEiii+0x134) [0xa29624f4] /opt/ros/indigo/lib/libmoveit_warehouse.so(_ZN9mongo_ros17MessageCollectionIN11moveit_msgs14PlanningScene_ISaIvEEEE10initializeERKSsS7_S7_jf+0x29e) [0xa2ea0cce] /opt/ros/indigo/lib/libmoveit_warehouse.so(_ZN9mongo_ros17MessageCollectionIN11moveit_msgs14PlanningScene_ISaIvEEEEC2ERKSsS7_S7_jf+0x20c) [0xa2ea7a4c] /opt/ros/indigo/lib/libmoveit_warehouse.so(_ZN16moveit_warehouse20PlanningSceneStorage17createCollectionsEv+0x77) [0xa2e922b7] /opt/ros/indigo/lib/libmoveit_warehouse.so(_ZN16moveit_warehouse20PlanningSceneStorageC1ERKSsjd+0x72) [0xa2e925f2] /opt/ros/indigo/lib/libmoveit_motion_planning_rviz_plugin_core.so(_ZN18moveit_rviz_plugin19MotionPlanningFrame35computeDatabaseConnectButtonClickedEv+0x203) [0xa3190dc3] /opt/ros/indigo/lib/libmoveit_motion_planning_rviz_plugin_core.so(_ZN5boost6detail8function26void_function_obj_invoker0INS_3_bi6bind_tIvNS_4_mfi3mf0IvN18moveit_rviz_plugin19MotionPlanningFrameEEENS3_5list1INS3_5valueIPS8_EEEEEEvE6invokeERNS1_15function_bufferE+0x21) [0xa3164e91] [ERROR] [1427203996.729533547]: Exception caught while processing action 'connect to database' I don't know how to fix it. Thank you. Originally posted by jcgarciaca on ROS Answers with karma: 67 on 2015-03-24 Post score: 0 Answer: The reason you get that error is because the MoveIt RViz plugin is expecting a MongoDB instance at the other end of that TCP connection, which is obviously not the case. 
You don't use the MoveIt RViz plugin to connect to the MotoROS server running on your motoman controller. You use the nodes in the motoman_driver package. You should take a look at the motoman_driver/Tutorials page for how to set this up, and use it properly. I think especially the Using the Motoman FS/DX ROS Interface (Hydro) tutorial should help. Originally posted by gvdhoorn with karma: 86574 on 2015-03-24 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by jcgarciaca on 2015-04-06: Thank you. I tried the tutorial you suggested (Using the Motoman FS/DX ROS Interface), and everything works well. Now I want to work on on-the-fly path planning, so I created my own URDF and used RViz. I'm interested in moving my robot using this tool (and achieving on-the-fly path planning). Comment by jcgarciaca on 2015-04-06: But I am not quite sure how to do it. When I execute the planning, the following message appears. [ INFO] [1428354041.453863537]: Fake execution of trajectory [ INFO] [1428354042.454572147]: Execution completed: SUCCEEDED It is a fake execution, but how can I do it on the real controller? Thanks Comment by gvdhoorn on 2015-04-07: This is probably better asked in a separate question. Do please search ROS Answers and see if existing answers help you. If not, pose a new question.
{ "domain": "robotics.stackexchange", "id": 21221, "tags": "ros, rviz, moveit, motoman, ros-industrial" }
Is trajectory the same as an orbit?
Question: Is trajectory the same as an orbit? I wanted to know about gravity assists, but most books I find are talking about different types of orbits and such. Are they related? Answer: The terms trajectory and orbit both refer to the path of a body in space. Trajectory is commonly used in connection with projectiles and is often associated with paths of limited extent, i. e., paths having clearly identified initial and end points. Orbit is commonly used in connection with natural bodies (planets, moons, etc.) and is often associated with paths that are more or less indefinitely extended or of a repetitive character, like the orbit of the Moon around the Earth. I had the exact same question a few months back, and this page from NASA provided a good amount of information regarding trajectories and orbits.
{ "domain": "physics.stackexchange", "id": 18590, "tags": "classical-mechanics, orbital-motion, terminology" }
$\rho_{SE}(0)=\rho_S(0)\otimes\rho_E(0)$: No coupling or no entanglement?
Question: We know that entangled states cannot be expressed as a product state, e.g. $|\omega\rangle = |\psi\rangle\otimes|\phi\rangle$. In the density matrix describing the correlations between system $S$ and environment $E$ we sometimes assume that there are no correlations between $E$ and $S$ at $t=0$: $\rho_{SE}(0)=\rho_S(0)\otimes\rho_E(0)$. I'm wondering what 'correlation' means here? Is this equation implying there's 'no coupling' or 'no entanglement' between $S$ and $E$? (I'm still a bit confused about the difference) Also, in a quantum circuit, can we say that the qubits are coupled as long as there's some gate(s) acting on each qubit, but they are entangled only if some states in the final simulation result cannot be decomposed as a product state? Answer: Coupling is a dynamic concept that characterizes the evolution of a composite system. It means that the evolution involves an interaction between subsystems. In a quantum circuit, coupling corresponds to multi-qubit gates. Entanglement is a static concept that characterizes the state of a system. It is related to coupling in that it arises due to coupled evolution. Correlation is an overloaded term. In statistics and probability, it usually refers to a linear dependence between two random variables, but in the broadest sense it denotes non-linear dependence also. In quantum physics, the term also encompasses entanglement, because like classical correlation entanglement gives rise to dependence between observables. Quantum states can exhibit either type of correlation. For example, two qubits $A$ and $B$ in the joint state $$ \rho_{AB} = \frac{1}{2}\begin{pmatrix} 0 & & & \\ & 1 & & \\ & & 1 & \\ & & & 0 \end{pmatrix} $$ are correlated, because the knowledge that the qubit $A$ is in the state $|0\rangle$ reveals that qubit $B$ is in the state $|1\rangle$.
However, the correlation is classical since the state describes a probability distribution over product states $$\rho_{AB} = \frac{|01\rangle\langle 01| + |10\rangle\langle 10|}{2},$$ not a superposition. It contains no entanglement, in contrast to a state such as $\beta_{AB} = (|01\rangle + |10\rangle)/\sqrt{2}$ which is a superposition in the joint Hilbert space of the qubits. Product states like $\rho_S \otimes \rho_E$ contain neither classical correlations (like those in $\rho_{AB}$) nor entanglement (like that in $\beta_{AB}$). Consequently, outcome distributions of any pair of measurements on $S$ and $E$ are independent. Therefore, people often say that these states have no correlations.
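For two qubits, this distinction can even be checked mechanically with the Peres-Horodecki (PPT) criterion: a two-qubit state is separable exactly when its partial transpose has no negative eigenvalue. Below is a small NumPy sketch, assuming the standard $|00\rangle,|01\rangle,|10\rangle,|11\rangle$ basis ordering; the helper name is mine.

```python
import numpy as np

def partial_transpose_B(rho):
    """Transpose the second qubit's indices of a 4x4 two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)                   # r[a, b, c, d] = rho[2a + b, 2c + d]
    return r.transpose(0, 3, 2, 1).reshape(4, 4)  # swap B's row/column indices

# Classically correlated mixture (|01><01| + |10><10|) / 2
rho_cl = np.diag([0.0, 0.5, 0.5, 0.0])

# Entangled state (|01> + |10>) / sqrt(2)
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
rho_bell = np.outer(psi, psi)

min_cl = np.linalg.eigvalsh(partial_transpose_B(rho_cl)).min()
min_bell = np.linalg.eigvalsh(partial_transpose_B(rho_bell)).min()
```

The classically correlated mixture stays positive under partial transposition (min_cl is 0, so it is separable), while the entangled state picks up a negative eigenvalue of -1/2.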
{ "domain": "quantumcomputing.stackexchange", "id": 2370, "tags": "quantum-state, entanglement, density-matrix, correlations" }
C++ SQL wrapper/Connection
Question: Working on my SQL project at last. The concept is to make it easy to use and integrate SQL into C++. ThorsSQL::Connection mysql("mysql://host", "username", "password", "databaseName"); ThorsSQL::Statement bigEarnerStat(mysql, "SELECT ID, Name, Salary FROM Employee WHERE Salary > % and Age < %" ThorsAnvil::SQL::Prepare); // Bind variables to '%' in statement // Then execute the SQL statement. // Call function for every row returned. bigEarnerStat.execute(Bind(1000000, 32), // parameter bound to % in statement. // Function executed for each row returned. // Parameters are matched against the SELECT in the statement. // A bad type conversion will throw an exception. [](u64 id, std::string const& name, int salary){ std::cout << name << " is a fat cat earning $" << salary/100 << "." << salary%100 << "\n"; } ); Connection: Represents a single connection to the DB. The thing to note above is mysql://host. The concept being that these classes provide the framework that DB-specific code can be plugged into (a MySQL variant will be coming to Code Review soon). So the "Schema" part of the URL string specifies the type of DB, and thus which specific plugin the code below uses (see ConnectionCreatorRegister). So the Connection object will defer all the DB-specific work to the proxy member. This class handles all the generic code. ConnectionProxy: The DB-specific code for a connection. ConnectionCreatorRegister: This allows a DB-specific implementation to register itself as a viable alternative. Note: If you want to try compiling the code, I suggest you check it out of the git repo and compile using the instructions there. That said, you can potentially compile it using only the source here; just add a main().
Connection.h #ifndef THORS_ANVIL_SQL_CONNECTION_H #define THORS_ANVIL_SQL_CONNECTION_H #include "SQLUtil.h" #include <string> #include <map> #include <memory> namespace ThorsAnvil { namespace SQL { class Statement; class StatementProxy; class ConnectionProxy { public: virtual ~ConnectionProxy() = 0; virtual std::unique_ptr<StatementProxy> createStatementProxy(std::string const& statement, StatementType type) = 0; }; inline ConnectionProxy::~ConnectionProxy() {} using ConnectionCreator= std::function<std::unique_ptr<ConnectionProxy>(std::string const& host, int port, std::string const& username, std::string const& password, std::string const& database, Options const& options)>; class Connection { private: static std::map<std::string, ConnectionCreator>& getCreators(); friend class Statement; std::unique_ptr<StatementProxy> createStatementProxy(std::string const& statement, StatementType type); std::unique_ptr<ConnectionProxy> proxy; public: Connection(std::string const& connection, std::string const& username, std::string const& password, std::string const& database, Options const& options = Options{}); static void registerConnectionType(std::string const& schema, ConnectionCreator creator); }; template<typename T> class ConnectionCreatorRegister { public: ConnectionCreatorRegister(std::string const& schema) { Connection::registerConnectionType(schema, [](std::string const& host, int port, std::string const& username, std::string const& password, std::string const& database, Options const& options) { return std::unique_ptr<ConnectionProxy>(new T(host, port , username, password, database, options)); }); } }; } } #endif Connection.cpp #include "Connection.h" #include "Statement.h" #include <cerrno> #include <cstdlib> using namespace ThorsAnvil::SQL; Connection::Connection(std::string const& connection, std::string const& username, std::string const& password, std::string const& database, Options const& options) { std::size_t schemaEnd = connection.find(':'); if 
(schemaEnd == std::string::npos || connection[schemaEnd + 1] != '/' || connection[schemaEnd + 2] != '/') { throw std::runtime_error("Connection::Connection: Failed to find schema: " + connection); } bool hasPort = true; std::size_t hostEnd = connection.find(':', schemaEnd + 3); if (hostEnd == std::string::npos) { hasPort = false; hostEnd = connection.size(); } std::string schema = connection.substr(0, schemaEnd); std::string host = connection.substr(schemaEnd + 3, hostEnd - schemaEnd - 3); std::string port = hasPort ? connection.substr(hostEnd + 1) : "0"; errno = 0; char* endPtr; int portNumber = std::strtol(port.c_str(), &endPtr, 10); auto creator = getCreators().find(schema); if (host == "" || errno != 0 || *endPtr != '\0') { throw std::runtime_error("Connection::Connection: Failed to parse connection: " + connection); } if (creator == getCreators().end()) { throw std::runtime_error("Connection::Conection: Schema for unregister DB type: " + schema + " From: " + connection); } proxy = creator->second(host, portNumber, username, password, database, options); } std::map<std::string, ConnectionCreator>& Connection::getCreators() { static std::map<std::string, ConnectionCreator> creators; return creators; } void Connection::registerConnectionType(std::string const& schema, ConnectionCreator creator) { getCreators().emplace(schema, creator); } std::unique_ptr<StatementProxy> Connection::createStatementProxy(std::string const& statement, StatementType type) { return proxy->createStatementProxy(statement, type); } test/ConnectionTest.cpp #include "Connection.h" #include "Statement.h" #include "gtest/gtest.h" #include "test/MockMysql.h" ThorsAnvil::SQL::ConnectionCreatorRegister<MockMySQLConnection> registerFakeMysql("mysql"); TEST(ConnectionTest, Create) { using ThorsAnvil::SQL::Connection; Connection connection("mysql://127.0.0.1:69", "root", "testPassword", "test"); } TEST(ConnectionTest, CreateDefaultPort) { using ThorsAnvil::SQL::Connection; Connection 
connection("mysql://127.0.0.1", "root", "testPassword", "test"); } TEST(ConnectionTest, BadSchema) { using ThorsAnvil::SQL::Connection; ASSERT_THROW( Connection connection("badschema://127.0.0.1:69", "root", "testPassword", "test"), std::runtime_error ); } TEST(ConnectionTest, NoSchema) { using ThorsAnvil::SQL::Connection; ASSERT_THROW( Connection connection("127.0.0.1:69", "root", "testPassword", "test"), std::runtime_error ); } TEST(ConnectionTest, BadHost) { using ThorsAnvil::SQL::Connection; ASSERT_THROW( Connection connection("mysql://:69", "root", "testPassword", "test"), std::runtime_error ); } TEST(ConnectionTest, BadPort) { using ThorsAnvil::SQL::Connection; ASSERT_THROW( Connection connection("mysql://127.0.0.1:XY", "root", "testPassword", "test"), std::runtime_error ); } Answer: GCC now has SSO (the small string optimization) for std::string, with the ABI break in backwards compatibility and all that, which probably means you can pass them by value.
{ "domain": "codereview.stackexchange", "id": 15761, "tags": "c++, sql, template, c++14" }
How many steps does this recurrence take to get to 2 (or 1)?
Question: $T(2) = T(1) = 1$ $T(n) = T(\frac{n}{\log n}) + \Theta(1)$ Basically, I wanted to know how many steps before the recursion stops? I tried various approaches, but am not getting anywhere. I know for sure that this is $O((\log \log n)^2)$, but I wanted a $\Theta$ bound (tighter bound). (this analysis is incorrect as shown in the comments below). I've also run a simple program to compare $n, T(n)$, and $\log \log n$. n T(n) log log n 2, 1, 0 4, 2, 1 8, 3, 1 16, 3, 2 32, 4, 2 64, 4, 2 128, 5, 2 256, 5, 3 512, 5, 3 1024, 5, 3 2048, 6, 3 . . . 134217728, 9, 4 . . . 2199023255552, 12, 5 4398046511104, 12, 5 8796093022208, 12, 5 17592186044416, 13, 5 (This is not homework) Answer: The solution is $T(n) = \Theta(\log n /\log\log n)$. (I'm assuming the $O(1)$ term in the recurrence is really $\Theta(1)$, since otherwise $T(n)$ has no lower bound.) As you and Tsuyoshi already observed, you can derive a lower bound by overestimating the denominator in the recursive argument. Consider the function $L_k(n)$ defined by the recurrence $$ L_k(n) = L_k\left(\frac{n}{\log k}\right) + \Theta(1). $$ The solution $L_k(n) = \Theta(\log n / \log \log k)$ follows from standard methods. An easy inductive argument implies that $T(n) \ge L_k(n)$ for any $k\ge n$. Thus, setting $k=n^2$ (for example) immediately gives us $T(n) = \Omega(\log n/\log\log n)$. For the upper bound, the symmetric trick is to under-estimate the denominator in the recursive argument. Consider the function $U_k(n)$ defined by the recurrence $$ U_k(n) = \begin{cases} U_k\left(\frac{n}{\log k}\right) + O(1) & \text{if } n \ge k, \\ T(n) & \text{otherwise.}\\ \end{cases} $$ An easy inductive argument implies that $T(n) \le U_k(n)$ for any $k\le n$. Assuming inductively that $T(n)$ is a nondecreasing function of $n$, the solution $$ U_k(n) \le T(k) + O\left( \frac{\log(n/k)}{\log \log k} \right) $$ follows from standard methods. Setting $k=\sqrt{n}$ gives us the simpler(?) 
recurrence $$ T(n) \le T(\sqrt{n}) + O\left( \frac{\log n}{\log \log n} \right), $$ which is easy to solve by standard methods. I'll walk through that one, too. Multiply both sides by $\lg\lg n$: $$ T(n)\lg\lg n ~\le~ T(\sqrt{n})\lg\lg n + O(\log n) ~=~ T(\sqrt{n})(\lg\lg\sqrt{n} + 1) + O(\log n). $$ Setting $LT(n) = T(n)\lg\lg n$ gives us $$ LT(n) \le LT(\sqrt{n}) + T(\sqrt{n}) + O(\log n). $$ Assuming inductively that $T(n) \le 10^{10^{100}}\lg n$ for large enough $n$, the recurrence simplifies further to $LT(n) \le LT(\sqrt{n}) + O(\log n)$. This recurrence expands into a geometric series, implying the solution $LT(n) = O(\log n)$, which implies $T(n) = O(\log n / \log\log n)$. Hey look, all our inductive assumptions worked out! Short version: Logarithmic factors usually act like constants in recurrences, because they change so slowly. The details are often grungy, but straightforward with a little practice. (A glaring exception is the recurrence $T(n) = T(n/2) + \Theta(n/\log n)$, whose solution is $T(n) = \Theta(n\log\log n)$.)
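As a rough numeric sanity check of the $\Theta(\log n/\log\log n)$ solution, one can iterate the recurrence over the reals (base-2 logs; the step-counting helper is my own approximation of the asker's table, not part of the proof):

```python
import math

def steps(n):
    """Iterate n -> n / log2(n) until n <= 2; approximates T(n) over the reals."""
    count = 1                      # base case: T(2) = T(1) = 1
    while n > 2:
        n /= math.log2(n)
        count += 1
    return count

# The ratio T(n) / (log2(n) / log2(log2(n))) should stay bounded as n grows.
ratios = []
for k in (10, 20, 40, 80):
    predicted = k / math.log2(k)   # log2(n) / log2(log2(n)) with n = 2^k
    ratios.append(steps(2.0 ** k) / predicted)
```

In this range the ratio hovers around 1.5-1.7 rather than drifting off to infinity or zero, consistent with the $\Theta$ bound.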
{ "domain": "cstheory.stackexchange", "id": 1362, "tags": "ds.algorithms, time-complexity" }
rosbag topics with same MD5sum
Question: Hi, I've recorded a bag file in which I've detected that two types have the same MD5sum, according to the output of rosbag info types: navigation_g500/PNIPrimeImu [98c511aab3d6095235f8db7d430feee5] navigation_g500/XsensMTiImu [98c511aab3d6095235f8db7d430feee5] I've checked that the _s_getMD5Sum values were the same on the PC where I recorded the bag; probably because of some old bug of mine. Could somebody help me to find out how to fix the bag file? I want to read the msgs with rosbag::View, and I have all the code done and working; I've been regularly using it with other types/topics inside bag files. Thanks in advance. UPDATE For the particular problem I have, with rosbag fix I end up with the msgs of type PNIPrimeImu turned into XsensMTiImu, which is wrong. I forgot to say that the Raw view of rxbag shows the PNIPrimeImu msgs correctly. Originally posted by Enrique on ROS Answers with karma: 834 on 2012-04-30 Post score: 0 Original comments Comment by Eric Perko on 2012-04-30: What version of ROS and Ubuntu were you using when those MD5Sums were generated? And just to verify... the messages did not contain the same body at that time, right? Comment by Enrique on 2012-05-02: ros-electric and ubuntu oneiric. The messages are different. For some reason the MD5sum (and the name) were the same on the PC where I record the bags. I'll try Chad Rockey's steps to see if I can recover/fix the messages. Anyway, I've also repeated the experiments. Answer: This problem means that your message definitions are the same. Likely you have the same message with two different filenames. You're having problems with 'rosbag fix' only, right? I'm assuming that your bags are properly indexed, and if they aren't you can run rosbag reindex on them. In order to migrate, you'll likely have to first split the topics into two separate bags. The tool to use for this is rosbag filter.
For example try: rosbag filter original.bag PNI.bag "topic == '/pni_imu'" Now that you have bags with only one of the problematic types, you should be able to set up your environment so that you can migrate. This is pretty complicated, so I can't give you a complete list of steps to take, but here are some things to check: Make sure your new messages don't have this same problem (identical datatypes in the definition). Make sure you don't have the old messages built currently. Make sure you only have one migration rule exported at a time. You probably don't have to make sure all of these are true, but if you keep having problems, checking all of them (assuming I didn't forget anything) should be all you need to resolve this migration. Good luck! :D Originally posted by Chad Rockey with karma: 4541 on 2012-04-30 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Enrique on 2012-05-02: I'm going to try it, thanks! Not immediately though, since I've repeated the experiment, and I can live without the "corrupted" bag file.
{ "domain": "robotics.stackexchange", "id": 9196, "tags": "rosbag" }
Is this predicate valid to delete single shared_ptr's?
Question: I wrote a predicate used in a remove_if call that deletes shared_ptr's of type StemmedSentence from a vector of sentences. The predicate: class EraseSentenceIf { ArrayStemmedSnippet * m_ass; public: EraseSentenceIf(ArrayStemmedSnippet *ass) : m_ass(ass) { } bool operator()(const std::shared_ptr< ArrayStemmedSnippet::StemmedSentence>& s) { std::shared_ptr<ArrayStemmedSnippet::StemmedSentence> tmp = s; // --- set StemmedSentnce object in ArrayStemmedSnippet class s->setParent(m_ass); // --- if true delete this sentence) if (s->trimStopWords()) { tmp.reset(); return true; } return false; } }; The remove_if call: EraseSentenceIf esi(this); sentences.erase( std::remove_if( sentences.begin(), sentences.end(), esi), sentences.end() ); Declaration: std::vector<shared_ptr<StemmedSentence> > sentences; The construction of the sentences objects looks like this: sentences.push_back(shared_ptr<StemmedSentence>( new StemmedSentence(index, i - 1 ))); The code seems to run fine, valgrind / gdb does not moan. I just want to make sure that I handle the deletion (or release) of the shared_ptr correctly. Can somebody please confirm this? Maybe I can improve something, or I have overlooked an important point. Thanks for your comments!
Answer: Within the predicate you make a copy of the shared_ptr hence incrementing the reference count: std::shared_ptr<ArrayStemmedSnippet::StemmedSentence> tmp = s; A few lines later you explicitly reset this copy (note that this does not release any memory unless it's the last living shared_ptr referring to the pointee): // --- if true delete this sentence) if (s->trimStopWords()) { // NOT NECESSARY -- reference count will be decremented when tmp falls out of scope tmp.reset(); return true; } The actual deletion occurs when the shared_ptr residing inside the vector is destroyed (assuming it's the last remaining copy): sentences.erase( std::remove_if( sentences.begin(), sentences.end(), esi), sentences.end() ); So, everything will work fine as it is but the tmp variable in the predicate is unnecessary.
{ "domain": "codereview.stackexchange", "id": 2164, "tags": "c++, c++11" }
Project Euler #17: Number Letter Counts
Question: If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total. If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used? NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage. Here's my implementation in Python: def number_to_word(n): """Assumes is an integer from 1 - 1000. Returns number in words ex: 122 --> one hundred and twenty-two.""" # num_to_alpha contains the unique values for numbers that will be returned according to repetitive patterns num_to_alpha =\ {1: 'one', 2: 'two', 3: 'three', 4: 'four', 5: 'five', 6: 'six', 7: 'seven', 8: 'eight', 9: 'nine', 10: 'ten', 11: 'eleven', 12: 'twelve', 13: 'thirteen', 14: 'fourteen', 15: 'fifteen', 16: 'sixteen', 17: 'seventeen', 18: 'eighteen', 19: 'nineteen', 20: 'twenty', 30: 'thirty', 40: 'forty', 50: 'fifty', 60: 'sixty', 70: 'seventy', 80: 'eighty', 90: 'ninety', 100: 'one hundred', 1000: 'one thousand'} # Numbers below 21 , 100, 1000 are unique words (cannot be formed using a repetitive rule) if 0 < n < 21 or n == 100 or n == 1000: return num_to_alpha[n] mod = n % 10 # Numbers in range (21 - 99) have a single rule except the multiples of 10 (formed by a single word) if 20 < n < 100: if n % 10 != 0: return f'{num_to_alpha[n // 10 * 10]}-{num_to_alpha[mod]}' return num_to_alpha[n] # Numbers above 100 have a single rule except the following: if 100 < n < 1000: # a) multiples of 100 if n % 100 == 0: return f'{num_to_alpha[n // 100]} hundred' # b) numbers whose last 2 digits are above 20 and are also multiples of 10 if not n % 100 == 0 and n % 100 > 20 and n % 10 == 0: return f'{num_to_alpha[n // 100]} hundred and {num_to_alpha[n % 100]}' # c) numbers whose last 2 digits are below 20 and not 
multiples of 10 if n % 100 < 21: second_part = num_to_alpha[n % 100] return f'{num_to_alpha[n // 100]} hundred and {second_part}' # d) numbers whose last 2 digits are above 20 and not multiples of 10 if n % 100 > 20: return f'{num_to_alpha[n // 100]} hundred and {num_to_alpha[((n % 100) // 10) * 10]}-' \ f'{num_to_alpha[(n % 100) % 10]}' # To prevent counting False values if n <= 0 or n > 1000: return '' def count(): """Cleans numbers from spaces and hyphens and returns count.""" all_numbers = [number_to_word(x) for x in range(1, 1001)] numbers_without_spaces = [number.replace(' ', '') for number in all_numbers] clean_numbers = [number.replace('-', '') for number in numbers_without_spaces] total = 0 for clean_number in clean_numbers: total += len(clean_number) return total if __name__ == '__main__': print(count()) Answer: num_to_alpha =\ is not Pythonic. If a statement is "incomplete", there is no need to add a trailing \ to indicate the line is continued. An "incomplete" statement is one which contains unclosed {, [, or (, so this line could be written without the trailing \ simply by moving the { up to the previous line: num_to_alpha = { 1: 'one', ... The if 0 < n < 21 or n == 100 or n == 1000: test could be written more simply as if n in num_to_alpha:. This would also handle many other simple cases where the number exists in the num_to_alpha dictionary. You compute mod = n % 10, and then go on to test if n % 10 != 0:. You could simply test the already computed value if mod != 0:, or since non-zero values are Truthy, if mod:. You are testing for values which cannot possibly be true due to early tests. For instance, once a number is above 100, you test: if n % 100 == 0: return ... This is followed by: if not n % 100 == 0 and ...: If n % 100 == 0 had evaluated to True, the first return statement would have been executed. Testing not n % 100 == 0 doesn't add any value. At the end of the function, you explicitly check if n <= 0 or n > 1000: and return the empty string.
And if THAT test fails ... what happens? None will be returned? If the number passed any of the above tests, an explicit return statement would have already returned the desired string, so if the end of the function is reached, does it matter what the return value is? Could you not unconditionally return ''? Or better: raise ValueError() DRY: Don’t Repeat Yourself Your code is repeating the same tests. Is the number between 1 and 20? Are the last 2 digits of a 3 digit number between 1 and 20? If you structured your function in the following manner, you’d repeat yourself less, and find it easy to extend into ten thousands, hundred thousands, millions and beyond: Start with an empty string If n >= 1000 add num_to_alpha[n // 1000] and “thousand” to the string, n = n % 1000 If n >= 100 add num_to_alpha[n // 100] and “hundred” to the string, n = n % 100 If n > 0 and the string is not empty, add “and” to the string If n > 20, add num_to_alpha[n // 10 * 10] to the string n = n % 10 If n > 0 add num_to_alpha[n] to the string Return the resulting string Since you aren’t really using the returned strings, but eventually just counting the number of letters in the resulting string, you could optimize the function to not use strings at all. Just store the number of letters of each number in the dictionary: num_to_length = { 1: 3, 2: 3, 3: 5, 4: 4, ... } and add 8, 7, and 3 instead of the strings “thousand”, “hundred” & “and”.
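The count-only structure the answer outlines can be written out as follows (a hypothetical Python sketch of that outline, not code from the question or answer; it keeps the strings for readability rather than precomputing letter counts):

```python
ONES = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five", 6: "six",
        7: "seven", 8: "eight", 9: "nine", 10: "ten", 11: "eleven",
        12: "twelve", 13: "thirteen", 14: "fourteen", 15: "fifteen",
        16: "sixteen", 17: "seventeen", 18: "eighteen", 19: "nineteen"}
TENS = {2: "twenty", 3: "thirty", 4: "forty", 5: "fifty", 6: "sixty",
        7: "seventy", 8: "eighty", 9: "ninety"}

def letters(n):
    """Letters used writing n (1..1000) in British English, ignoring spaces/hyphens."""
    count = 0
    if n >= 1000:                                   # "one thousand"
        count += letters(n // 1000) + len("thousand")
        n %= 1000
    if n >= 100:                                    # "X hundred"
        count += letters(n // 100) + len("hundred")
        n %= 100
        if n:                                       # British "and"
            count += len("and")
    if n >= 20:                                     # "twenty" .. "ninety"
        count += len(TENS[n // 10])
        n %= 10
    if n:                                           # "one" .. "nineteen"
        count += len(ONES[n])
    return count

print(sum(letters(i) for i in range(1, 6)))  # 19, matching the worked example
```

The same structure extends to millions by adding one more branch at the top, which is the point of the "don't repeat yourself" restructuring.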
{ "domain": "codereview.stackexchange", "id": 35283, "tags": "python, python-3.x, programming-challenge, numbers-to-words" }
Is $H_0$ reducible to $\overline H_0$?
Question: Let $H_0$ be the special halting problem with $$H_0 = \lbrace \langle M \rangle \in \lbrace 0,1 \rbrace^* \mid \varepsilon \in L(M)\rbrace$$ and $\overline{H_0}$ its complement. Is $H_0$ reducible to $\overline{H_0}$? $$H_0 \leq \overline{H_0}$$ I guess that this is not possible, as $\overline{H_0}$ is not semi-decidable, but I am not sure how to approach such questions in general, and how I would really prove it in this case in particular. Are there any properties of problems that forbid the reduction? Answer: Such a reduction does not exist. If $H_0 \le \overline{H_0}$, then you can simply decide $H_0$: given input $\left<M\right>$, compute the reduction $f(\left<M\right>)$ and simultaneously run $\left<M\right>$ and $f(\left<M\right>)$ on the empty input. If $\left<M\right>$ halts, return the same answer; if $f(\left<M\right>)$ halts, flip the answer and halt. We have $\epsilon\in L\left(\left<M\right>\right) \iff \epsilon\notin L\left(f\left(\left<M\right>\right)\right)$, hence one of the machines has to halt, and our newly constructed machine for deciding $H_0$ always halts.
{ "domain": "cs.stackexchange", "id": 5049, "tags": "computability, turing-machines, reductions" }
Simple file downloader
Question: On my website I want to offer some PDFs, ZIP files and images for download. They are all in the same folder and I could simply link to them like <a href="download/file.pdf">download file</a>. But instead I want to invoke a download dialog by sending an attachment header via php. The link would change into <a href="download/load.php?f=file.pdf">download file</a>: <?php /*** * download/load.php * Simple file downloader including mime type * * @param string $f name of file in download folder * @return file with corresponding HTTP header */ // check whether parameter was sent if ( !isset($_GET['f']) || // check whether it is not `load.php` (this file) $_GET['f'] === "load.php" || // check whether file exists !is_readable($_GET['f']) || // make sure file is in the current directory strpos($_GET['f'], '/') ) { header("HTTP/1.0 404 Not Found"); die(); } $file = $_GET['f']; // check mime type $finfo = finfo_open(FILEINFO_MIME_TYPE); $mime = finfo_file($finfo, $file); finfo_close($finfo); // send mime type header('Content-type: ' . $mime); // invoke download dialog header('Content-Disposition: attachment; filename=' . $file); // send file readfile($file); ?> Is this script okay or does it have security vulnerabilities? If so, how could I improve it? Answer: Take care to call strpos() correctly. If the f parameter contains a leading /, it sneaks through.
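The pitfall the answer points at: strpos() returns the match position, and position 0 is falsy, so a filename with a leading / slips through the check (in the PHP original the fix is to compare strpos($_GET['f'], '/') !== false rather than relying on truthiness). A filename-safety check along these lines, sketched here in Python purely for illustration, should reject separators anywhere, including position 0:

```python
import os.path

def is_safe_filename(name):
    # A plain filename only: no empty or dot names, no path separators
    # anywhere (a leading "/" must fail too), no parent references,
    # and never the downloader script itself.
    return (name not in ("", ".", "..")
            and "/" not in name and "\\" not in name
            and os.path.basename(name) == name
            and name != "load.php")

for f in ("file.pdf", "/etc/passwd", "../secret", "load.php"):
    print(f, is_safe_filename(f))
```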
{ "domain": "codereview.stackexchange", "id": 8144, "tags": "php, security" }
In theory, could gravitational waves be used to make a "gravity laser"?
Question: The sources I've read compare gravitational waves to electromagnetic waves. I'm curious to what extent this is. In theory, could gravity be harnessed in similar ways to how we've used electromagnetic radiation such as in lasers? If so: what differences would this have to a regular laser? If not: What differentiates gravitational waves from electromagnetic waves to make this theoretically impossible? Answer: Laser light generation is intimately related to processes that generate single photons. To date, gravitational waves have not been detected, and there are no known processes that produce single gravitons (not to mention there is no direct evidence that the gravitational field is quantized at all -- just logical arguments based on the structure of general relativity and quantum mechanics extrapolated to the relevant regime). Since there aren't any processes known to produce single gravitons, there is no known means by which one could produce a gravitational wave laser. EDIT: I agree with anna v's answer and John Rennie's comment, and I hadn't thought about free electron lasers when I wrote this. It would take relativistic planets or something like that, but it wouldn't be impossible.
{ "domain": "physics.stackexchange", "id": 20521, "tags": "general-relativity, gravity, gravitational-waves" }
Inefficient Stopwatch
Question: I have just finished a simple GUI stopwatch, but some of its code looks like it needs replacing. This is the code: Clock class (extends Thread): static int hr = 0; static int min = 0; static int sec = 0; static double milisec = 0; static int rotation = 0; static long l = 0; static long m = 0; public void run() { while (true) { if (Stopwatch.started()) { while (Stopwatch.started()) { if(rotation == 0) { l = System.currentTimeMillis(); try { Thread.sleep(5); } catch (InterruptedException e) { } m = System.currentTimeMillis(); plus((int) m - (int) l); rotation++; } else if(rotation == 1) { try { Thread.sleep(5); } catch (InterruptedException e) { } l = System.currentTimeMillis(); plus((int) l - (int) m); rotation--; } } } if (Stopwatch.resets()) { hr = 0; min = 0; sec = 0; milisec = 0; } System.out.print(""); //For some reason this program won't work if this line isn't here } } public static void plus(int i) { milisec += i; if (milisec >= 1000) { milisec -= 1000; sec++; } if (sec >= 60) { sec -= 60; min++; } if (min >= 60) { min -= 60; hr++; } } public static String getHr() { return hms(hr); } public static String getMin() { return hms(min); } public static String getSec() { return hms(sec); } public static String getMilisec() { return m(milisec); } public static String hms(Integer i) { String s = i.toString(); if(s.length() == 1) { s = "0" + s; } return s; } public static String m(Double d) { Integer i = (int) Math.round(d); String s = i.toString(); if(s.length() == 1) { s = "00" + s; } else if(s.length() == 2) { s = "0" + s; } return s; } Stopwatch class (extends JFrame) private JPanel contentPane; private static JTextField hr; private static JTextField min; private static JTextField sec; private static JLabel milisec; private static JButton start; static boolean bstart = false; static boolean breset = false; /** * Launch the application. 
*/ public static void main(String[] args) { EventQueue.invokeLater(new Runnable() { public void run() { try { Stopwatch frame = new Stopwatch(); frame.setVisible(true); } catch (Exception e) { e.printStackTrace(); } } }); Thread t = new Clock(); t.setDaemon(true); t.start(); while (true) { if (bstart) { try { start.setText("Stop"); } catch (Exception e) { } } else { try { start.setText("Start"); } catch (Exception e) { } } try { hr.setText(Clock.getHr()); min.setText(Clock.getMin()); sec.setText(Clock.getSec()); milisec.setText(Clock.getMilisec()); } catch(Exception e) { } } } /** * Create the frame. */ public Stopwatch() { setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setBounds(100, 100, 200, 110); contentPane = new JPanel(); contentPane.setBorder(new EmptyBorder(5, 5, 5, 5)); setContentPane(contentPane); contentPane.setLayout(new FlowLayout(FlowLayout.CENTER, 5, 5)); hr = new JTextField(); hr.setEditable(false); hr.setText("00"); contentPane.add(hr); hr.setColumns(3); JLabel colona = new JLabel(":"); contentPane.add(colona); min = new JTextField(); min.setEditable(false); min.setText("00"); contentPane.add(min); min.setColumns(3); JLabel colonb = new JLabel(":"); contentPane.add(colonb); sec = new JTextField(); sec.setEditable(false); sec.setText("00"); contentPane.add(sec); sec.setColumns(3); milisec = new JLabel("000"); milisec.setFont(new Font("Tahoma", Font.PLAIN, 8)); contentPane.add(milisec); JButton reset = new JButton("Reset"); reset.addMouseListener(new MouseAdapter() { @Override public void mouseReleased(MouseEvent arg0) { if (!bstart) { breset = true; } } }); contentPane.add(reset); start = new JButton("Start"); start.addMouseListener(new MouseAdapter() { @Override public void mouseReleased(MouseEvent e) { bstart = !bstart; } }); contentPane.add(start); } public static boolean started() { return bstart; } public static boolean resets() { if(breset) { breset = false; return true; } else { return false; } } It would be great if someone would point out 
some bad code and explain how to make it better. Answer: class Clock extends Thread In most cases, this is not a good idea -- there's no particular reason for Clock to be its own thread, rather than running in a thread managed by something else (like an ExecutorService). The more common approach would be class Clock implements Runnable promising that Clock will implement a run() method, which will allow you to hand it off to a Thread, or an ExecutorService or whatever. static int hr = 0; static int min = 0; static int sec = 0; static double milisec = 0; static int rotation = 0; static long l = 0; static long m = 0; Not a good idea -- you should normally prefer member variables to static variables. It will be much easier to test your code, and reason about what is going on, if Clock holds onto its own data (imagine, for a moment, the headache of two different clocks trying to run at the same time -- disaster). if(rotation == 0) { l = System.currentTimeMillis(); try { Thread.sleep(5); } catch (InterruptedException e) { } m = System.currentTimeMillis(); plus((int) m - (int) l); rotation++; } else if(rotation == 1) { try { Thread.sleep(5); } catch (InterruptedException e) { } l = System.currentTimeMillis(); plus((int) l - (int) m); rotation--; } Here's one of the problems with shared variables -- it makes it very hard to come back later and understand the context of what's going on. Your else block modifies l, without modifying m. Is that deliberate? It might be, it might not be. Better variable names here would make clearer the intent of the code. For instance, I'd like to suggest locally scoped variables long before = System.currentTimeMillis(); try { Thread.sleep(5); } catch (InterruptedException e) { } long after = System.currentTimeMillis(); plus(after - before); But you could be using l and m anywhere, so maybe this works, maybe it doesn't. Encapsulation is an important clue for people trying to understand your code. 
This particular construction looks suspicious: if(rotation == 0) { ...; rotation++; } else if(rotation == 1) { ...; rotation--; } 0 and 1 are not very good for expressing intent - defining a constant that explains what each of these numbers means would be a big help for those reading the code. Also, this looks like you are trying to implement a state machine - "if in this state, verb this way, then transition to that state" - and if that's what it's supposed to look like, then you should actually implement States and transitions so that it is obvious. try { start.setText("Start"); } catch (Exception e) { } The empty exception block is a bad sign - it suggests that you don't know how JButton works. A checked exception being thrown is an indication that there is a legitimate error condition that your code is supposed to recover from. Dropping it on the floor without leaving any evidence behind is very poor form. Even if you are absolutely certain that the Exception should not impact the behavior of your application in any way, at a minimum there should be a comment explaining why this is the case. try { Thread.sleep(5); } catch (InterruptedException e) { } A very bad sign - this indicates that you don't understand how cancellation and shutdown work. InterruptedExceptions are an important part of communication, and should not be dismissed without comment. (You might reasonably dismiss InterruptedExceptions with comment - not all operations should be interruptable -- but you will usually reset the interrupted flag in case your caller cares about interruption). In this case, where you are the Thread, and never delegate anything, the interrupted flag isn't so important. System.out.print(""); //For some reason this program won't work if this line isn't here Not surprising. It probably doesn't work with the line there either. My guess is that the print call is introducing a memory barrier, that flushes the caches, but I won't swear to it.
The very hand waving explanation being that the two threads you have created have no code in them indicating that any other thread needs to share the data, so the two threads are each happy spinning in tight loops, updating values in their local registers, and never checking to see if the common values of those variables have changed. However, the System.out.print call does have in it some code that knows about shared data, and your code appears to work when the print call forces local data to refresh. No promises, but maybe. Welcome to the horror show that is multi threaded programming. The good news, is that your data is relatively simple, in the sense that each shared piece of data has only one thread that writes it. For example, the Running/NotRunning state of the stopwatch needs to be visible to the Stopwatch and the Clock, but only the Stopwatch needs to change it (which happens when the button event handlers are called in the UI thread). Similarly, both pieces need to read the current clock time, but only the Clock thread needs to modify it. You should refactor the code so that these bits of data are separate from the classes that share them separate from each other For example: // This bit is used by all the threads interface StopwatchState { State getCurrentState(); } // This bit is used only by the thread that changes the state interface StopwatchController { void stop(); void start(); // not my choice, but similar to the implementation in your example; void toggle(); } class MultiThreadedStopwatchController implements StopwatchState, StopwatchController { private volatile State state; private static final EnumMap<State,State> transitions = ...; MultiThreadedStopwatchController () { state = State.STOPPED; } State getCurrentState () { return state; } void stop() { state = State.STOPPED; } void start() { state = State.STARTED; } void toggle() { state = transitions.get(state); } } ... 
class Clock { private final StopwatchState stopwatch; Clock (StopwatchState stopwatch) { this.stopwatch = stopwatch; } void run() { if (State.STARTED.equals(stopwatch.getCurrentState())) { .... } } } ... class Stopwatch { private final StopwatchController controller; Stopwatch(StopwatchController controller) { this.controller = controller; ... start = new JButton("Start"); start.addMouseListener(new MouseAdapter() { public void mouseReleased(MouseEvent e) { controller.toggle(); }}); MultiThreadedStopwatchController stopwatchController = new (...); Clock clock = new Clock(stopwatchController); Stopwatch stopwatch = new Stopwatch(stopwatchController); And then you can start to worry whether Clock should know that there's a state machine under the covers, of if instead the interface should look more like.... void run() { if (stopwatch.isRunning()) { .... } } (Which it probably should - the interface should specify what, not how). StateMachines There are degrees of how complicated to get with state machines. What happens most often is that programmers don't notice that they are implementing one, and the logic gets scattered all over. The stop watch here is a pretty simple example of a state machine -- we've got a mouseReleased event, that is supposed to change a stopped watch to a running watch, or a running watch to a stopped watch. Describing that more generally, the watch is in one of two states (running, stopped); and the mouseRelease event should change which state the watch is in. A common pattern, on recognizing that there is a state machine in place, is to define an Enum that describes the states. public enum State { STARTED, STOPPED; } And then an object to hold a particular instance of the machine that is transitioning between states public class FSM { volatile State currentState; FSM(State initialState) { this.currentState = initialState; } ... 
} We use the volatile keyword here because we know that FSM is going to be read by a thread other than the one that writes to it, and we need that value to be visible across all the threads. At a very hand-waving level, the volatile keyword tells the JVM that when this value is written, the value written needs to be pushed all the way out to shared memory right away. Part of describing the state machine is defining which state changes are legal. It would be straightforward to write this out, long hand: if (State.STARTED.equals(currentState)) { currentState = State.STOPPED; } else { currentState = State.STARTED; } Which is fine... when you are dealing with only two states. But as you discover more states (STOPPED, STARTED, RESET...), you start needing to write out more transitions by hand. But, if we look carefully, we're really just doing a lookup here, and we can implement a lookup with a Map // Note: this map is shared by all instances of this kind of state machine static final Map<State, State> stateTransitions = new HashMap(); static { // when we initialize the class, we load the transition map stateTransitions.put(State.STARTED, State.STOPPED); stateTransitions.put(State.STOPPED, State.STARTED); } // here, we're changing the state of a specific instance public void toggle() { currentState = stateTransitions.get(currentState); } Now, because our State tokens are implemented as an enum (meaning that each of the States are really singletons), we can use an EnumMap - which is a Map that is optimized for the case where all of the keys are required to be from a specific enumeration. Having written the code this way, we can change our two mode button to a three mode button just by adding a new value into the enumeration, and updating the transition table.
static { // when we initialize the class, we load the transition map stateTransitions.put(State.STARTED, State.STOPPED); stateTransitions.put(State.STOPPED, State.RESET); stateTransitions.put(State.RESET, State.STARTED); } And presto - everything works. Using a single Map works here because this is a toy problem, each state always goes to the "next" state. In a slightly more complicated problem, you can have more than one kind of transition - a STARTED watch can be STOPPED or PAUSED.... The trivial Map is no longer appropriate - we need to map State + Trigger = new State. // Note: this map is shared by all instances of this kind of state machine static final EnumMap<State, EnumMap<Trigger,State>> stateTransitions = ...; It's really this map that defines your state machine - you could use the same States and the same Triggers, organized in a different way, to produce a different class of machines. static { // when we initialize the FSM, we load the transition map // first, we create empty Trigger maps for each known state for (State s : State.values()) { stateTransitions.put(s, new EnumMap<Trigger,State>()); } // now fill the trigger maps with the supported transitions stateTransitions.get(State.STARTED).put(Trigger.TOGGLE, State.STOPPED); stateTransitions.get(State.STOPPED).put(Trigger.TOGGLE, State.STARTED); // here, we add pause support. stateTransitions.get(State.STARTED).put(Trigger.PAUSE, State.PAUSED); stateTransitions.get(State.PAUSED).put(Trigger.PAUSE, State.STARTED); } Stateless4j puts a reasonable fluent interface in front of the state machine creation idioms. Using that library, your state machine creation might look like... 
StateMachine<State, Trigger> stopwatch = new StateMachine<State, Trigger> (); stopwatch.configure(State.STARTED) .permit(Trigger.TOGGLE, State.STOPPED) .permit(Trigger.PAUSE, State.PAUSED); stopwatch.configure(State.STOPPED) .permit(Trigger.TOGGLE, State.STARTED); stopwatch.configure(State.PAUSED) .permit(Trigger.TOGGLE, State.STARTED);
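The same transition-table idea, minus the Java boilerplate, can be sketched in a few lines (illustrative Python, with states and triggers as plain strings rather than enums; the table mirrors the stateless4j configuration above):

```python
# (state, trigger) -> next state; this table alone defines the machine.
TRANSITIONS = {
    ("STOPPED", "TOGGLE"): "STARTED",
    ("STARTED", "TOGGLE"): "STOPPED",
    ("STARTED", "PAUSE"):  "PAUSED",
    ("PAUSED",  "PAUSE"):  "STARTED",
    ("PAUSED",  "TOGGLE"): "STARTED",
}

class StateMachine:
    def __init__(self, initial):
        self.state = initial

    def fire(self, trigger):
        key = (self.state, trigger)
        if key not in TRANSITIONS:          # transition not permitted
            raise ValueError(f"no transition for {key}")
        self.state = TRANSITIONS[key]

watch = StateMachine("STOPPED")
watch.fire("TOGGLE")   # STOPPED -> STARTED
watch.fire("PAUSE")    # STARTED -> PAUSED
print(watch.state)     # PAUSED
```

Adding a new state or trigger is then just a matter of extending the table, exactly as in the EnumMap version.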
{ "domain": "codereview.stackexchange", "id": 8567, "tags": "java, multithreading, datetime, swing, timer" }
Is there a thermodynamic heuristic argument on why a redshifted blackbody spectrum is a blackbody at a new temperature?
Question: Without calculating it, it isn't obvious to me that if you take Planck's Law for the spectral radiance as a function of temperature of a black body and shift all the frequencies by the same factor, you will get a curve that is also a blackbody curve, but at lower temperature. But because the blackbody curve is entropy maximizing, it seems there might be some sort of thermodynamic argument for this, e.g. a thought experiment which shows that if the redshifted spectrum were not a blackbody spectrum, it would be possible to build a perpetual motion machine with some sort of blackbody oscillating on a spring that sees different $z$-shifts at different times. Is there such an argument, or something roughly similar? Answer: This is a neat fact! I think the first time it's usually encountered is in a cosmology course, where the expansion of the universe keeps the CMB temperature well-defined. Here's an argument why. Consider adiabatic expansion of a photon gas at temperature $T$ from the standpoint of kinetic theory. In this point of view, each photon is a particle rapidly bouncing back and forth. Unlike for a classical ideal gas, every photon has exactly the same speed, so every photon must lose the same fraction of its energy. (This is because each one collides with the walls an equal number of times, picking up the same relativistic redshift factor every time.) Thus adiabatic expansion causes the frequency shift you're talking about. Now we just have to show that adiabatic expansion also keeps the temperature well-defined. But this is immediate by thermodynamics: we can already run a Carnot cycle with a photon gas. If, after each adiabat, the gas were not in thermal equilibrium, we could do additional work by using this temperature difference, contradicting the Second Law.
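The kinetic-theory argument can also be checked directly against Planck's law: in $B_\nu \propto \nu^3/(e^{h\nu/kT}-1)$, scaling every frequency by $1/(1+z)$ and the temperature by the same factor leaves $h\nu/kT$ unchanged, so the redshifted spectrum is the Planck curve at the lower temperature, up to an overall $(1+z)^3$ factor in spectral radiance. A quick numerical sketch (rounded SI constants, arbitrary example temperature):

```python
import math

h, k, c = 6.626e-34, 1.381e-23, 2.998e8   # rounded SI values

def planck(nu, T):
    """Spectral radiance B_nu of a blackbody at temperature T."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

zfac = 3.0        # 1 + z
T = 8.0           # example source temperature in kelvin
for nu in (1e11, 5e11, 2e12):
    cooler = planck(nu / zfac, T / zfac)   # Planck curve at T/(1+z)
    shifted = planck(nu, T) / zfac**3      # redshifted original curve
    assert abs(cooler - shifted) < 1e-9 * cooler
print("redshifted Planck spectrum = Planck spectrum at T/(1+z)")
```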
{ "domain": "physics.stackexchange", "id": 42945, "tags": "statistical-mechanics, relativity, inertial-frames, thermal-radiation, redshift" }
Very simple interpreter in C
Question: I made very simple interpreter in C and now, I want to make my code better. Here is my code: #define size_t unsigned long long int printf(const char *format, ...); int scanf(const char *format, ...); void *memmove(void *str1, const void *str2, size_t n); size_t strlen(const char *str); int main() { char ch[50]; do { printf(">>> "); scanf("%s", ch); if (ch[0] == '"' & ch[strlen(ch) - 1] == '"') { memmove(ch, ch + 1, strlen(ch)); ch[strlen(ch) - 1] = '\0'; printf("%s\n", ch); } } while(1); } GitHub Link (I don't want to include standard libraries) Answer: There are a number of issues in this very short program. Unless there are coding standards that require otherwise, it is better to use typedef to define types rather than #define. You are calling standard C libraries when you call printf(), scanf(), memmove() and strlen() unless you have written all of these functions on your own and linking in some sort of special manner. It is not a good idea to redefine size_t at any point since it is defined by the system header files and may be different on different platforms (the Windows version of size_t is different than the Fedora Linux version of size_t for example). The code contains a possible buffer overflow when performing the scanf(). The array ch is only 50 characters. This can lead to undefined behavior. You don't need a long long for an array that is a maximum of 50 characters.
{ "domain": "codereview.stackexchange", "id": 40781, "tags": "c, interpreter" }
How is a Galaxy formed?
Question: How is a galaxy formed? I know that the center of our galaxy is considered to be Sagittarius A* and it's surrounded by a lot of stars (also a lot of neutron stars). But what makes this happen? I mean, why are all those stars in that right formation, like other galaxies, rather than just being random clouds of stars in the universe? Answer: Well, this is not a question that can be answered in a few sentences! In short: we still do not know exactly how the galaxies formed and acquired the shapes they possess. The large-scale structures (LSS) in the Universe as we see them today are a consequence of tiny primordial density fluctuations that arose right after the Big Bang. The reason for these tiny density fluctuations is believed to be quantum in nature. So, these tiny fluctuations in the early Universe led to the agglomeration of gas and dust clouds, leading to certain areas becoming denser. These denser areas locally slowed down the expansion of the Universe, allowing the gas to accumulate into small protogalactic clouds. Gravity in these clouds caused the gas and dust to collapse, and in turn form stars. These stars burned quickly and became globular clusters (while gravity was still collapsing the dust and gas). Also, according to the $\Lambda$CDM model, the structures form in a "bottom-up" fashion, i.e. small structures forming first (stars and galaxies) followed by large structures (galaxy clusters). This is exactly what we are observing today thanks to surveys that are probing high redshift ranges.
{ "domain": "astronomy.stackexchange", "id": 2025, "tags": "galaxy, universe, milky-way, space" }
How to Construct a Lattice from Program Statements
Question: In order to optimize a program, I am trying to figure out how the idea of a lattice applies to data-flow graphs, as introduced by this presentation (first diagram below). The lattice seems to take a program, order its inputs and outputs, and allow for easily determining flow of variables and state, so you can perform verification and optimization. But I am not sure how to construct the lattice from the program statements. In this question I would like to know how to construct the lattice from the program. Specifically, given a program such as this: var x = 10 var y = 20 var z = 0 var i = z while (i < x) { y = y + x * i i = i + 1 z = y + i } We can construct a Control-Flow Graph (CFG) where each node is a program statement such as var x = 10. From that we can construct a Data-Flow Graph (DFG) where we are keeping track of how a variable is used. This creates an def-use ordering, along the lines of: x, y, i at the same time z comes after i ... That's where I start getting lost. But once we have a partial ordering, somehow a lattice is created. Finally from the lattice, we can do things such as doing pointer analysis (Figure 2) and computing fix-points (Figure 3). That will help in optimization and verification of the program. I'm wondering only the first part, how to construct the lattice from the example program above. Wondering what is needed to construct the lattice at a high-level. I understand the upper and lower bounds of a lattice, which seems to correspond to CFG inputs and outputs being joined, but I don't quite see how to actually do the join (meet), what the nodes/vertices are in the lattice (not sure if it's a single program statement, a variable, or what), and what the edges are in the lattice. Once that is defined, then doing the rest of the stuff should be straightforward. Thank you for your help. ^- Figure (1). ^- Figure (2). ^- Figure (3). Answer: There is no such thing as a "DataFlow Graph lattice". Those are two separate things. 
A data flow graph is a graph that represents how data flows in a program, which can be helpful in data flow analysis. A lattice is a mathematical object that can be helpful for data flow analysis. Data flow analysis is a broad subject. There are many techniques for data flow analysis (many use lattices, but it's not a hard requirement). They don't all use lattices in the same way. So, there's no one way to do the conversion you are referring to. To learn more about the subject, I suggest reading a good textbook on program analysis.
{ "domain": "cs.stackexchange", "id": 11329, "tags": "lattices, data-flow-analysis" }
Is there a deep connection between the Heisenberg uncertainty principle and entropy?
Question: (Just so you know my background) I have taken a graduate course in quantum mechanics. I have also learned about information entropy in various places (statistical mechanics, information theory, dynamical systems). It recently occurred to me that the HUP might have an interpretation purely in terms of entropy. I pondered this for a bit, but didn't get very far. My intuition for why this seems plausible is that the HUP acts as a bound on the amount of information one can retrieve from a system in a single measurement, and entropy is the instrument we use to measure information (or lack thereof) in a system. Is there, in fact, a deep connection there? Or am I incorrectly identifying two completely distinct ideas since they both can be described loosely using the word "information"? Answer: Congratulations! You stumbled upon entropic uncertainty relations (a good review here). They are a reformulation of the usual uncertainty principles using entropy instead of variance. The simplest and most famous one is probably the Maassen-Uffink relation: let $\rho$ be a quantum state, let $A$ and $B$ be two observables and $$ \mathcal M_O:\rho\mapsto\sum_i \langle o_i|\rho |o_i\rangle|o_i\rangle\langle o_i|$$ be the measurement channel for an observable $O$ where $|o_i\rangle$ denotes the eigenstates of $O$, then $$ S(\mathcal M_A(\rho))+S(\mathcal M_B(\rho))\geq \log\frac 1c$$ where $S(\tau)=-\mathrm{Tr}(\tau\log\tau)$ is the Von Neumann entropy and $$ c=\max_{i,j}|\langle a_i|b_j\rangle|^2.$$ I think this corresponds to the intuition you're talking about: if you're able to guess really well the outcome of an $A$ measurement, then the classical probability distribution $\mathcal M_A(\rho)$ is very peaked around one value and hence has low entropy; this means that to satisfy the bound $\mathcal M_B(\rho)$ must have a high entropy, hence this distribution is closer to a uniform distribution and it's hard to guess the outcome before measuring.
Notice also that if $[A,B]=0$, then $A$ and $B$ share eigenvectors, hence $c=1$ and the bound is trivial, as you'd expect, i.e. both entropies can vanish at the same time. Many interesting generalizations have been put forward (multipartite, continuous variable, with quantum memory and others) and you can find a lot of them in the review I mentioned, which also makes good arguments as to why you should care about entropy instead of variance, and also to what extent you can recover one kind of uncertainty relation from the other.
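A concrete check: for a qubit in $|0\rangle$ with $A = Z$ and $B = X$, the $Z$ outcome is certain (zero entropy) while the $X$ outcomes are uniform, and $c = 1/2$ for mutually unbiased bases, so the bound $\log(1/c) = \log 2$ is exactly saturated. A sketch (the measurement probabilities are written down by hand rather than computed from the state):

```python
import math

def shannon(probs):
    """Shannon entropy in nats of a classical outcome distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Qubit |0>: outcome distributions in the Z and X eigenbases.
p_Z = [1.0, 0.0]     # Z outcome is certain
p_X = [0.5, 0.5]     # X outcomes are uniform
c = 0.5              # max_ij |<z_i|x_j>|^2 for these two bases

lhs = shannon(p_Z) + shannon(p_X)
rhs = math.log(1 / c)
print(lhs, rhs)      # both log 2 ~ 0.693: the Maassen-Uffink bound is saturated
```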
{ "domain": "physics.stackexchange", "id": 73326, "tags": "quantum-mechanics, quantum-information, entropy, heisenberg-uncertainty-principle" }
How do I conduct an experimental modal analysis with a three-axis accelerometer?
Question: How can I compute the Frequency Response Function (FRF) if I use two three-axis MEMS accelerometer to measure the input excitation and output response? I have read from some books that the FRF can be computed as the ratio of response to excitation, e.g., $ H(s) = x(s)/F(s)$, where $s$ is the Laplace variable. or a similar way via Fourier transform. However, those books have not mentioned how to compute FRF when it comes to three-axis accelerometer. As there are three channels of data, i.e., X, Y, Z, should I compute the FRF for each channel separately, or I need to perform some kind of combination of these three axes before computing the FRF? Answer: The body of your question (the title is somewhat different) asks about computing the frequency response function (FRF) between a multi-input, multi-output (MIMO) system. First off, the numerical transfer function/FRF between two time domain signals is usually calculated by calculating the cross power spectrum between the two signals and dividing it by the power spectrum of the input $$ FRF_{A\rightarrow B}=\frac{S_{AB}(f)}{S_{AA}(f)}. $$ National Instruments Application Note 41 has a helpful introduction to this type of analysis. It is important to also calculate the coherence of the input and output signals so that you have an idea of how accurate the estimated transfer function is. To answer your question about how to deal with a MIMO system; usually people think about it as a matrix of transfer functions. In your case there will be 9 independent transfer functions: $$ \begin{pmatrix} FRF_{x\rightarrow x} & FRF_{x\rightarrow y} & FRF_{x\rightarrow z} \\ FRF_{y\rightarrow x} & FRF_{y\rightarrow y} & FRF_{y\rightarrow z} \\ FRF_{z\rightarrow x} & FRF_{z\rightarrow y} & FRF_{z\rightarrow z} \end{pmatrix} $$ Understanding how any excitation shows up at the output is then just a matter of multiplying the x, y, and z inputs (expressed in the frequency domain) by the transfer function matrix.
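Each entry of that matrix is estimated exactly as in the single-channel case, using one input axis and one output axis at a time. A minimal illustrative sketch (synthetic single-tone signals, plain DFT at one bin, no windowing or averaging, so nothing like a production estimator):

```python
import cmath, math

def dft_bin(x, k):
    """DFT of the sequence x evaluated at bin k."""
    N = len(x)
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))

N, k = 256, 8
w = 2 * math.pi * k / N
excitation = [math.sin(w * n) for n in range(N)]                    # one input axis
response = [0.5 * math.sin(w * n + math.pi / 3) for n in range(N)]  # one output axis

A, B = dft_bin(excitation, k), dft_bin(response, k)
frf = (B * A.conjugate()) / (A * A.conjugate())   # S_AB / S_AA at this frequency
print(abs(frf), cmath.phase(frf))                 # gain ~ 0.5, phase ~ pi/3
```

Repeating this for every (input axis, output axis) pair fills in the 3x3 FRF matrix, and coherence for each pair indicates how trustworthy the estimate is.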
{ "domain": "engineering.stackexchange", "id": 434, "tags": "civil-engineering, structural-engineering, vibration, modal-analysis" }
Which plant's part is this?
Question: I found this plant in Rajasthan, India. This sample is 4-5 cm long. This is an ornamental plant. It was a potted plant. I put my fingertip in the photo to give an idea of size of the specimen. I can't figure out which plant this part belongs to. Please give the species name if you can. I would be very thankful to anyone out there who will help me. Answer: This is a bit hard to say exactly, since there are a number of possibilities. We can be pretty sure that this sample is from a Cypress, belonging to the family of Cupressaceae, but exactly which subfamily or species this is can't be identified here. Therefore a wider image would be necessary, but since this is a potted plant, even this might not be enough, as a lot of these plants are used in gardening.
{ "domain": "biology.stackexchange", "id": 10789, "tags": "species-identification, botany, biodiversity" }
Neutron star: free fall acceleration
Question: The textbook from which I teach physics at the end of secondary school, has a question about a neutron star: $M_{star}=1.4\cdot M_{sun}$, radius 15km. "Calculate the free fall acceleration at the surface of the neutron star". Pupils are supposed to use $a=F_g/m=G*M_{star}/R^2$ Is the free fall acceleration the same as the coordinate acceleration for a hypothetical observer at rest on the star surface? Is the free fall acceleration the same as the coordinate acceleration for an observer at rest at a great distance from the star? Does the free fall acceleration at the surface have the same value according to both observers? Is the Newtonian approach $a=F_g/m=G*M_{star}/R^2$ correct, considering the strong gravity at the surface? Answer: I'm guessing your questions all amount to whether general relativistic effects become important at the surface of a neutron star. To answer this we can compare the flat space metric (in polar coordinates): $$ ds^2 = -c^2dt^2 + dr^2 + r^2 d\Omega^2 \tag{1} $$ with the Schwarzschild metric that describes the geometry outside a spherically symmetric mass: $$ ds^2 = -\left(1-\frac{2GM}{c^2r}\right)c^2dt^2 + \frac{dr^2}{\left(1-\frac{2GM}{c^2r}\right)} + r^2 d\Omega^2 \tag{2} $$ The difference is that factor of $1-2GM/c^2r$, which we can also write as $1-r_s/r$ where $r_s$ is the Schwarzschild radius - $r_s = 2GM/c^2$. Feeding in the mass and radius of the neutron star we find this factor is about $0.72$, so general relativistic effects are indeed important. Your question (1) is answered in What is the weight equation through general relativity?. The coordinate acceleration measured by an observer at the surface is: $$ a = \frac{GM}{r^2}\frac{1}{\sqrt{1-\frac{2GM}{c^2r}}} \tag{3} $$ so it differs from the Newtonian prediction by (in this case) a factor of about $\sqrt{0.72}$. 
Re your questions (2) and (3), offhand I don't know the expression for the coordinate acceleration measured far from the star, but it will not be the same as equation (3). A distant observer sees falling objects slow as they approach the event horizon and asymptotically approach zero speed at the horizon. So the coordinate acceleration is obviously different from the coordinate acceleration measured near the horizon.
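As a rough numerical check of the figures quoted in the answer (constants rounded, so treat the outputs as approximate):

```python
# Back-of-the-envelope check of the neutron-star numbers above.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M = 1.4 * 1.989e30     # 1.4 solar masses, kg
r = 15e3               # 15 km radius, m

metric_factor = 1 - 2 * G * M / (c**2 * r)     # the 1 - r_s/r factor
g_newton = G * M / r**2                        # Newtonian surface acceleration
g_shell = g_newton / math.sqrt(metric_factor)  # eq. (3): observer at the surface

print(round(metric_factor, 2))   # 0.72 -> GR corrections are significant
print(f"{g_newton:.1e}")         # ~8.3e11 m/s^2
print(f"{g_shell:.1e}")          # ~9.7e11 m/s^2
```

So the surface observer's acceleration exceeds the Newtonian value by the factor $1/\sqrt{0.72}$ mentioned in the answer.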
{ "domain": "physics.stackexchange", "id": 21352, "tags": "homework-and-exercises, general-relativity, gravity, acceleration, neutron-stars" }
Why are nuts 6-sided?
Question: Why are nuts 6-sided? Why not 4 or 8? Answer: There are 3, 4, 8, and 12-sided nuts, and most exotically, 5-sided nuts all in use for specific applications, but the 6-sided hex nut is most popular because it offers a good trade off between a bunch of factors: Ease of tightening/loosening, especially in tight spaces: There are six distinct orientations you can grip a hex nut in, each 60 degrees apart. This means that if you can swing your wrench through a 60 degree arc, you can tighten the nut enough to remove the wrench and place it on the next pair of flats. In fact most open end wrenches have the jaws offset 15 degrees from the handle so that by flipping the wrench you can tighten the nut with as little as 30 degrees of access. (Box wrenches often have 12 points, allowing the same principle without the flipping.) A four sided nut would have a significant drawback that even with an offset wrench head, you would need 45 degrees of access to tighten the nut. Additionally, it's much easier to put an open end wrench on a hex nut as the two adjacent faces on the approach side guide the wrench towards the flats. By contrast, if you approach a square nut with the wrench flats not quite parallel to the nut flats, the wrench can very easily jam. Of course when there is space for access with a ratchet these considerations don't make much difference, but as people try to make machines smaller and lighter, designers often leave only the minimum access required to reach fasteners. Torque transmitted The more points the nut has, the more likely it is that the bearing surface of the point will fail in compression and 'round over.' This problem is exacerbated by the fact that some gap has to be left between the wrench and the nut to allow for manufacturing and alignment tolerances, which significantly decreases the contact area between the wrench and the nut.
In practice, it's pretty hard to round over a hex nut unless you are using an adjustable wrench (where slop in the mechanism can lead to a bigger gap between the nut and the wrench and less parallel wrench faces.) A square nut is even harder to round over, and in this aspect is more desirable than a hex nut. 12-point nuts do exist, but if the faces were flat they would round over very easily, so instead they are manufactured with points that increase the size of the torque-transmitting face. Joint Properties Standard square nuts have a larger bearing area than their standard hex counterparts. This sometimes makes them preferable for a connection to a soft substrate like wood as they are less likely to pull through the material. Because square nuts can accommodate a greater gap between the wrench and the nut, they can be used in captive situations like weld-on cage nuts, or self-aligning server rack nuts. One common use for square nuts is on blind, low strength connections to soft wood. The bolt can be tightened from one side and the nut will embed itself in the wood, with the large faces preventing it from cutting a counter-bore into the wood. By contrast a hex nut would typically cut a circle in the wood and spin. Oversized hex nuts, thick washers, and tension control bolts all allow alternate solutions to these problems using hex nuts, but generally at a higher manufacturing cost. Material Efficiency In order to maintain the minimum amount of material between the threads and the edge of the nut, square nuts have a fair amount of 'wasted' volume in the corners that increases the amount of metal per nut for a given screw size. As you increase the number of flats, the shape of the nut becomes closer to a circle and more material efficient. (By the same logic, when laying out a connection hex nuts take up less space whereas square nuts have to be placed further apart.) 
For a few nuts, this difference is negligible, but for a company that buys or produces nuts by the truckload, it would add up very quickly. Ubiquity Even if these trade-offs changed, the simple fact that most people who work on machinery and structures have tools designed around 6-sided nuts would provide a big dis-incentive to change. Auto mechanics, for example have to buy a whole special set of tools when a car manufacturer decides to use an unusual fastener. In the case of the 5-sided nut, the very reason it is used is to be tamper resistant because very few people own wrenches that fit it, and adjustable tools designed for even-numbered polygons won't work. It is mainly used for fire hydrant fittings and valves.
{ "domain": "engineering.stackexchange", "id": 918, "tags": "mechanical-engineering" }
How to derive the answer to this convolution problem?
Question: I came across the question below (which was a homework assignment question for a Signal Processing class, which my friend mailed me for help solving), mulled it over for an hour, and had no idea how to proceed with solving it. Let $C(x) = A(x)B(x)$ where: $$A(x)=\sum_{n=0}^{N_1}a(n)x^n$$ $$B(x)=\sum_{n=0}^{N_2}b(n)x^{2n}$$ $$C(x)=\sum_{n=0}^{N_3}c(n)x^n$$ Find expressions for $N_3$ and $c(n)$ as functions of $N_1$, $N_2$, the $a(n)$ and $b(n)$. Apparently, it has something to do with convolution, as in $c(n)$ is the convolution result of $a(n)$ & $b(n)$ or something like that. But I still can't figure out how that is. Can anybody please explain the answer for $N_3$ and $c(n)$? Answer: HINT: what is the highest power of $x$ after multiplying $A(x)$ and $B(x)$? This gives you directly the value of $N_3$. Then rewrite $B(x)$ as $$B(x)=\sum_{n=0}^{2N_2}\hat{b}_nx^n$$ and you can use normal convolution of $a(n)$ and $\hat{b}(n)$ to derive $c(n)$. Now you just need to express $\hat{b}(n)$ in terms of $b(n)$. EDIT: OK, so here's the solution: We have $$C(x)=\sum_{n=0}^{N_3}c_nx^n$$ Since $C(x)=A(x)B(x)$, the highest power of $C(x)$ must be $N_3=N_1+2N_2$.
You can rewrite $B(x)$ as $$B(x)=\sum_{n=0}^{2N_2}\hat{b}_nx^n$$ with $$\hat{b}_n=\begin{cases}b_{n/2},&n \text{ even}\\ 0,&n \text{ odd}\end{cases}$$ Now the coefficients $c_n$ can be written as the convolution of $a_n$ and $\hat{b}_n$: $$c_n=\sum_k\hat{b}_ka_{n-k}$$ For the index $k$ in the above sum we have the following constraints: $$0\le k\le 2N_2\quad\text{and}\quad 0\le n-k\le N_1$$ which results in the summation limits $$c_n=\sum_{k=\max\{0,n-N_1\}}^{\min\{n,2N_2\}}\hat{b}_ka_{n-k},\quad 0\le n\le N_3$$ If you sum only over even $k$, you can replace $\hat{b}_k$ by the coefficients $b_{k/2}$: $$c_n=\sum_{k=\max\{0,n-N_1\},k\text{ even}}^{\min\{n,2N_2\}}b_{k/2}a_{n-k},\quad 0\le n\le N_3$$ which can be rewritten once more as $$c_n=\sum_{k=\lceil{\max\{0,n-N_1\}/2}\rceil}^{\lfloor{\min\{n,2N_2\}/2}\rfloor}b_{k}a_{n-2k},\quad 0\le n\le N_3$$
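The derivation can be sanity-checked numerically. This is a short sketch of my own (the coefficients are arbitrary example values): build $\hat{b}(n)$ by zero-stuffing $b(n)$, convolve with $a(n)$, and compare against multiplying $A(x)B(x)$ directly.

```python
# Numerical check: conv(a, b_hat) must equal the coefficients of A(x)*B(x).
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # a(n), so N1 = 2
b = np.array([4.0, 5.0])        # b(n), so N2 = 1

# b_hat(n) = b(n/2) for even n, 0 for odd n
b_hat = np.zeros(2 * len(b) - 1)
b_hat[::2] = b                  # b_hat = [4, 0, 5], i.e. B(x) = 4 + 5x^2

c = np.convolve(a, b_hat)       # c(n); highest power N3 = N1 + 2*N2 = 4

# Cross-check against direct polynomial multiplication A(x) * B(x)
A = np.polynomial.Polynomial(a)
B = np.polynomial.Polynomial(b_hat)
assert np.allclose((A * B).coef, c)

print(c)   # [ 4.  8. 17. 10. 15.]
```

Here $A(x) = 1 + 2x + 3x^2$ and $B(x) = 4 + 5x^2$, so $C(x) = 4 + 8x + 17x^2 + 10x^3 + 15x^4$, matching the convolution output and the predicted degree $N_3 = N_1 + 2N_2 = 4$.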
{ "domain": "dsp.stackexchange", "id": 2099, "tags": "convolution, homework" }
Stack using two queues
Question: I have implemented the Stack using two Queues q1 and q2 respectively, as shown below in the code. I would like to get your reviews on whether it's an efficient way of implementing it or not. For Pop operation: I will keep on removing elements from the Queue q1 and adding it to Queue q2 until only one element remains in Queue q1, then I will be removing that particular element from the Queue q1 to accomplish the Stack Pop operation. For Push Operation: It's simple, I am just checking if Queue q1 is empty then I will start adding the elements in Queue q2, otherwise in Queue q1.

import java.util.LinkedList;
import java.util.Queue;

public class StackUsingQueues {
    Queue<Integer> q1 = new LinkedList<>();
    Queue<Integer> q2 = new LinkedList<>();

    public void push(int data){
        if(q1.isEmpty()){
            q2.add(data);
        }
        else{
            q1.add(data);
        }
    }

    public int pop(){
        int x;
        if(q1.isEmpty()){
            if(q2.isEmpty()){
                System.out.println("Stack Underflow");
                System.exit(0);
            }
            else {
                /* I will keep on removing elements from the Queue q1 and adding
                   it to Queue 2 until only one element remains in Queue q1, then
                   I will be removing that particular element from the Queue q1
                   to accomplish the Stack Pop operation */
                while(q1.size()!=1){
                    x = q1.remove();
                    q2.add(x);
                }
                return q1.remove();
            }
        }
        else {
            while(q2.size()!=1){
                x = q2.remove();
                q1.add(x);
            }
            return q2.remove();
        }
        return 0;
    }

    public static void main(String[] args) {
        StackUsingQueues st = new StackUsingQueues();
        st.push(1);
        st.push(2);
    }
}

Answer: Broken
Your implementation doesn't actually work. (The question should have been closed, but it already has an answer, so... too late.) The two st.push calls will append to q2. If you call pop after that, the algorithm tries to remove elements from q1 until its size becomes 1, but its size is 0 now, so that will throw a NoSuchElementException => You cannot pop from this stack, it's broken.
Missing methods
A sorely missing method is isEmpty (or empty).
Without such a method, there's no really easy way to explore the elements of a stack. Basically keep popping until the program crashes? Not very ergonomic.
Avoid System.exit
System.exit doesn't belong in the middle of an algorithm. In fact it's best to avoid it altogether. It's extremely rare for it to be the right way to go in Java.
Popping an empty stack
Java has an exception dedicated to popping an empty stack, called EmptyStackException. How could you know that? When reinventing the wheel, it's good to look at the SDK (Stack in particular, in this case), to learn from what exists.
Implementation
Suggested implementation, using your algorithm idea:

import java.util.EmptyStackException;
import java.util.LinkedList;
import java.util.Queue;

public class StackUsingQueues<T> {
    private Queue<T> q1 = new LinkedList<>();
    private Queue<T> q2 = new LinkedList<>();

    public void push(T data) {
        q1.add(data);
    }

    public T pop() {
        if (isEmpty()) {
            throw new EmptyStackException();
        }
        while (q1.size() > 1) {
            q2.add(q1.poll());
        }
        T top = q1.poll();
        Queue<T> temp = q1;
        q1 = q2;
        q2 = temp;
        return top;
    }

    public boolean isEmpty() {
        return q1.isEmpty();
    }
}

Note that this algorithm is fast for pushes but slow for pops. The opposite is also possible: fast for pops but slow for pushes. For that alternative approach, see this answer on Stack Overflow.
{ "domain": "codereview.stackexchange", "id": 16146, "tags": "java, stack, queue" }
Multiple labels for the same rectbox?
Question: My goal is to identify the horse in a photo. I'm dealing with about 500 unique horses. My feeling is that the best way to distinguish one horse from another is by its face. So I trained Yolov5 successfully to find faces at reasonable angles. I'd like to take this a step further, and teach it to identify which horse's face it sees. I'm new to this sort of thing (though not programming in general), so the way I assume I should approach this is to add an additional label like face_horsename, with the unique name for the horse (or really, a unique reference to a database of horses). Is that the right approach? It seems like the Yolo file format doesn't allow for multiple labels for the same box, so my guess is I should just make 2 rectboxes that are identical, but both point to different labels. Frankly, I'd like to take it even further and label the same thing with the type of "blaze" of the horse's face, and its proper name for the horse's color. So now I'm talking about 4 labels. Is that the right approach (duplicate boxes with unique labels)? Answer: Duplicate boxes with unique labels make the problem too complex for the model. What I suggest is you use the horse face detection model to get a bounding box of the horse's face, crop the face image and use that image as a training sample for a separate classification model. I have seen this method used often in human identification, and dividing the tasks/models seems much more reasonable than trying to solve it in one model. P.S. Just out of curiosity, you said that the best way to distinguish one horse from another is by its face Is this really true? Aren't there better features to use from the body?
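The suggested detector-then-classifier pipeline might be wired together roughly as follows. This is an illustrative sketch only: detect_faces and classify_face are hypothetical stand-ins for the trained YOLOv5 detector and a separate classification model, and their names and return formats are invented, not a real API.

```python
# Hypothetical two-stage pipeline: detect face boxes -> crop -> classify.

def detect_faces(image):
    """Stub detector: returns (x1, y1, x2, y2) pixel boxes for each face.
    In the real pipeline, the trained YOLOv5 model would go here."""
    return [(10, 10, 50, 50)]

def crop(image, box):
    """Crop a row-major 2D image to the given (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

def classify_face(face_crop):
    """Stub classifier: would map a face crop to one of ~500 horse IDs."""
    return "horse_042"

def identify_horses(image):
    return [classify_face(crop(image, box)) for box in detect_faces(image)]

image = [[0] * 100 for _ in range(100)]   # dummy 100x100 "photo"
print(identify_horses(image))             # ['horse_042']
```

The same cropped faces, labeled with horse IDs (and, if desired, blaze type and coat color as extra classifier heads or separate classifiers), become the training set for the second-stage model.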
{ "domain": "ai.stackexchange", "id": 3106, "tags": "machine-learning, image-recognition, object-detection, object-recognition, yolo" }
How to solve for moment with uniform distributed load
Question: How do we get the moment when there's a distributed load? For example, the picture below has 9[(60)(18)]. I get that 60 is the distributed load and 18 is the total length of the load; my question is, how did it get the 9? Answer: The resultant of distributed loads always acts on the centroid of the distributed load geometry; here the distributed load is uniform, so its centroid lies halfway along the loaded length (18/2 = 9). If the distributed load varies linearly from zero at one end to a maximum value at the other end, then its centroid would lie at $\frac{1}{3} L$ from the "max load" end and $\frac{2}{3}L$ from the "zero load" end, with $L$ the length of the loaded span.
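The arithmetic behind the 9[(60)(18)] term, plus the triangular-load case, can be written out directly:

```python
# Numbers from the example above: uniform load w = 60 over length L = 18.
w, L = 60.0, 18.0

resultant = w * L                 # equivalent point load: 60 * 18 = 1080
lever_arm = L / 2                 # uniform load acts at its centroid: 18 / 2 = 9
moment = lever_arm * resultant    # the 9[(60)(18)] term

print(resultant, lever_arm, moment)   # 1080.0 9.0 9720.0

# Linearly varying (triangular) load, zero up to a max w over the same length:
tri_resultant = w * L / 2             # area of the load triangle
arm_from_max_end = L / 3              # centroid at L/3 from the "max load" end
print(tri_resultant, arm_from_max_end)   # 540.0 6.0
```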
{ "domain": "engineering.stackexchange", "id": 2381, "tags": "moments" }
Tower of Hanoi without recursion
Question: Hi, I am pretty new to programming and I would like you to give me some feedback about my code: how it looks and what could be better. Thank you.

A = []
B = []
C = []
PegDict = {'A': A,'B': B,'C': C} #Would it be better to use a two dimensional array?

discs = int(input("Podaj ilość dysków: "))
for i in range(discs, 0, -1):
    A.append(i)
movesNeeded = pow(2, discs) - 1
StartingPeg = A.copy()

def move(fromm, to):
    to.append(fromm[-1])
    fromm.pop()

Moves the smallest disc one peg to the left. This part could be done better I think.

def moveLeft():
    if A and A[-1] == 1:
        move(A, C)
        return
    if B and B[-1] == 1:
        move(B, A)
        return
    if C and C[-1] == 1:
        move(C, B)
        return

Moves the smallest disc one peg to the right

def moveRight():
    if A and A[-1] == 1:
        move(A, B)
        return
    if B and B[-1] == 1:
        move(B, C)
        return
    if C and C[-1] == 1:
        move(C, A)
        return

Returns the key of a peg that is the only valid move target for a certain peg

def PossibleMove(Peg):
    if Peg:
        if Peg[-1] != 1:
            for i in PegDict:
                x = PegDict[i]
                if not x:
                    return i
                elif Peg[-1] < x[-1]:
                    return i

Main part

moves = 0
while not C == StartingPeg:
    if discs%2 == 0:
        moveRight()
        moves += 1
    else:
        moveLeft()
        moves += 1
    print(A)
    print(B)
    print(C)
    print()
    for key in PegDict:
        if PossibleMove(PegDict[key]) != None:
            fromPeg = PegDict[key]
            onePossibleMove = PossibleMove(PegDict[key])
            if fromPeg:
                moves += 1
                move(fromPeg, PegDict[onePossibleMove])
    print(A)
    print(B)
    print(C)
    print()
print()
print('Moves: '+ str(moves))
print('Minimal number of moves: '+ str(movesNeeded))

Answer: PEP-8
The Style Guide for Python Code has many stylistic guidelines that all Python programs should follow. Naming functions, methods, and variables should all be snake_case. CapitalWords are reserved for Types and ClassNames. So movesNeeded should be moves_needed and PegDict should be peg_dict, and so on.
Commas
All commas should be followed by exactly one space. {'A': A,'B': B,'C': C} violates this.
Binary operators
Binary operators should be surrounded by one space. You mostly follow this, except for the print('Moves: '+ str(moves)) statements at the end.
Exponentiation

movesNeeded = pow(2, discs) - 1

Python has the ** operator, for exponentiation. Thus, this could be written slightly more compactly:

moves_needed = 2 ** discs - 1

Initial list generation

A = []
for i in range(discs, 0, -1):
    A.append(i)

This is a little verbose. You are already using the range() method to generate the disc numbers; you could simply create a list directly from the result:

a = list(range(discs, 0, -1))

Moving a Disc

def move(fromm, to):
    to.append(fromm[-1])
    fromm.pop()

I'm going to assume fromm is not a spelling error, but rather avoiding the from keyword. The PEP-8 recommendation is a trailing underscore: from_. My personal preference is to use synonyms. .pop() returns the item removed from the list, which is the value you used fromm[-1] to retrieve. Therefore, these operations could easily be combined into one statement:

def move(source, destination):
    destination.append(source.pop())

Repeated Code

print(A)
print(B)
print(C)
print()

You've repeated this code twice. Once moving the small disc, once moving a larger disc. Instead of repeating the code, you should move this into a function. Then, if you change how the discs are shown (curses, GUI, ...), you only have to alter the code once.

def print_pegs(a, b, c):
    print(a)
    print(b)
    print(c)
    print()

Iterating over a container

for key in PegDict:
    if PossibleMove(PegDict[key]) != None:
        fromPeg = PegDict[key]
        onePossibleMove = PossibleMove(PegDict[key])

In this code, you are iterating over the PegDict, fetching the keys, and using the key to look up the dictionary value. In fact, you never use the key for anything else.
You do not need the key at all, and could simply iterate over the contents of the dictionary:

for peg in peg_dict.values():
    if possible_move(peg) != None:
        from_peg = peg
        one_possible_move = possible_move(peg)

But notice we are computing possible_move(peg) twice. This is inefficient. You should compute the result once, save it in a temporary, and use the temporary variable for further tests and assignments:

for peg in peg_dict.values():
    move = possible_move(peg)
    if move != None:
        from_peg = peg
        one_possible_move = move

More Advanced Changes
Left or Right?
Each iteration, you check if the number of discs was even or odd, and call the moveLeft() or moveRight() function. Since the number of discs is constant, you always make the same choice. You could move this decision out of the loop.

move_smallest_disc = move_left if discs % 2 != 0 else move_right
while len(c) != discs:   # A simpler termination condition
    move_smallest_disc()
    print_pegs(a, b, c)
    moves += 1
    ...

But I've a different option...
Cyclic Peg Order
You always move the smallest disc either:

a -> b -> c -> a -> b -> c
a -> c -> b -> a -> c -> b

You can keep track of which order you need with a list:

if discs % 2 == 1:
    peg = [a, c, b]
else:
    peg = [a, b, c]

And move the smallest disc from peg[0] to peg[1], without having to hunt for which peg the smallest disc is on:

move(peg[0], peg[1])

And later rotate the peg list:

peg = peg[1:] + peg[:1]   # [a, b, c] -> [b, c, a] -> [c, a, b] -> [a, b, c]

After moving the smallest disc onto peg[1], the only possible moves for the larger disc will be peg[0] -> peg[2] or peg[2] -> peg[0], so you can greatly simplify the possible move determination, by just looking at those two pegs:

source, destination = possible_move(peg[0], peg[2])
move(source, destination)

Refactored Code

from pathlib import Path
import gettext

gettext.install('hanoi', Path(__file__).parent)

def move(source, destination):
    destination.append(source.pop())

def possible_move(peg1, peg2):
    if peg1 and (not peg2 or peg1[-1] < peg2[-1]):
        return peg1, peg2
    else:
        return peg2, peg1

def print_pegs(a, b, c):
    print(a)
    print(b)
    print(c)
    print()

def tower_of_hanoi(discs):
    a = list(range(discs, 0, -1))
    b = []
    c = []
    minimum_moves = 2 ** discs - 1
    if discs % 2 == 1:
        peg = [a, c, b]
    else:
        peg = [a, b, c]
    moves = 0
    while len(c) != discs:
        if moves % 2 == 0:
            move(peg[0], peg[1])      # Smallest disc now on peg[1]
        else:
            source, destination = possible_move(peg[0], peg[2])
            move(source, destination)
            peg = peg[1:] + peg[:1]   # Rotate the peg ordering
        print_pegs(a, b, c)
        moves += 1
    print()
    print(_('Moves:'), moves)
    print(_('Minimal moves:'), minimum_moves)

if __name__ == '__main__':
    discs = int(input(_('Enter the number of disks: ')))
    tower_of_hanoi(discs)

If you run pygettext on this, you can make a hanoi.pot template file, copy it to hanoi.po and put translations into it:

msgid "Moves:"
msgstr "Liczba ruchów:"

msgid "Minimal moves:"
msgstr "Minimalna liczba ruchów:"

msgid "Enter the number of disks: "
msgstr "Podaj ilość dysków: "

Run msgfmt on that to generate a hanoi.mo file, and store it in the subdirectory: pl/LC_MESSAGES. Running LANG="pl" ./hanoi.py on my machine, gives:

Podaj ilość dysków: 2
[2]
[1]
[]

[]
[1]
[2]

[]
[]
[2, 1]


Liczba ruchów: 3
Minimalna liczba ruchów: 3

With luck, I haven't butchered the translated strings too badly.
{ "domain": "codereview.stackexchange", "id": 38168, "tags": "python, beginner, python-3.x, tower-of-hanoi" }
Let's suppose there is only one object in this universe. May that be a quark, an atom, etc. What will be the consequences?
Question: I'm not a physics expert, but this question really intrigued me, so I thought "why not ask those who are"! Let's suppose there is only one object in this universe. May that be a quark, an atom, etc. What will be the consequences? In terms relating to gravitational force, electrical force, etc. individually, and in terms of Grand Unified Theory. And also in terms of Quantum Mechanics. I'm assuming there will be no Quantum Mechanics, but correct me if I'm wrong. Answer: An atom and a quark are very different objects. A quark is a fundamental particle, whereas an atom is made up of many fundamental particles (quarks and electrons) which are continually exchanging other fundamental particles such as photons and gluons. To keep things simple, let's suppose your hypothetical universe just contains one quark. As far as we know, quantum physics will still apply in your one-particle universe, and in particular the uncertainty principle will apply. This means that your single fundamental particle will be surrounded by a sea of virtual particles, and the more precisely you try to describe it, the more complicated it will appear. In simple terms, quantum physics tells us that the model of a single, isolated particle is only a very rough approximation to reality.
{ "domain": "physics.stackexchange", "id": 68778, "tags": "quantum-mechanics, forces, universe, thought-experiment" }
Random Forest VS LightGBM
Question: Random Forest VS LightGBM Can somebody explain in detail the differences between Random Forest and LightGBM? And how do the algorithms work under the hood? As per my understanding from the documentation: LightGBM and RF differ in the way the trees are built: the order and the way the results are combined. It has been shown that GBM performs better than RF if its parameters are tuned carefully. Random Forest: RFs train each tree independently, using a random sample of the data. This randomness helps to make the model more robust than a single decision tree, and less likely to overfit on the training data. My questions are: When would one use Random Forests over Gradient Boosted Machines? What are the advantages/disadvantages of using Gradient Boosting over Random Forests? Answer: RandomForest's advantage compared to newer GBM models is that it is easy to tune and robust to parameter changes. It is robust for most use cases although the peak performance might not be as good as a properly-tuned GBM. Another advantage is that you do not need to care a lot about parameters. You can compare the number of parameters for a random forest model and LightGBM from their documentation. In the sklearn documentation the number of parameters might seem large, but actually the only parameters you need to care about (ordered by importance) are max_depth, n_estimators, and class_weight, and the other parameters are better left as is. So for me, I would most likely use random forest to make a baseline model. GBM is often shown to perform better, especially when compared with random forest. Especially when comparing it with LightGBM. A properly-tuned LightGBM will most likely win in terms of performance and speed compared with random forest. GBM advantages : More developed. A lot of new features are developed for modern GBM models (xgboost, lightgbm, catboost) which affect their performance, speed, and scalability.
GBM disadvantages : Number of parameters to tune Tendency to overfit easily Please bear in mind that increasing the number of estimators for random forest and GBM implies different behaviour. A high value of n_estimators for random forest will affect its robustness, whereas for a GBM model it will improve the fit with your training data (which, if too high, will cause your model to overfit).
{ "domain": "datascience.stackexchange", "id": 6376, "tags": "machine-learning, random-forest, lightgbm" }
Sign of change in enthalpy and change in entropy
Question: Why is it wrong to assert that the change in entropy and the change in enthalpy must always have the same sign? What makes me think that they must have the same sign is the fact that every reaction invariably comes to equilibrium under suitable conditions; and so we have the corresponding temperature equal to $ΔH/ΔS$ (setting $∆G = 0$ in the equation $ΔG = ΔH - TΔS).$ Answer: TL;DR In general the entropy of reaction can be written as $$ T\Delta _r S= \Delta_r H + RT\log \left(\frac{Q_e}{Q}\right) $$ At equilibrium $Q_e=Q$ and $$ T\Delta _r S_e= \Delta_r H $$ Consider a simple reaction that behaves ideally (occurs under ideal solution conditions). If it is carried out at constant T and p we can write $$\Delta_r G = \Delta_r G^\circ + RT \log Q \tag{1}$$ where Q is the reaction quotient. But we can also write that $$\Delta_r G = \Delta_r H - T\Delta_r S\tag{2a}$$ and $$\Delta_r G^\circ = \Delta_r H^\circ - T\Delta_r S^\circ\tag{2b}$$ Equation (1) can then be written as $$\Delta_r G = \Delta_r H^\circ - T(\Delta_r S^\circ-R \log Q) \tag{3}$$ Matching terms in equations (2a) and (3) we have that $$\Delta_r H = \Delta_r H^\circ \tag{4a}$$ and $$\Delta_r S = \Delta_r S^\circ - R\log Q \tag{4b}$$ When the reaction is at equilibrium $Q=Q_e$ (the reaction quotient is then equal to the equilibrium constant, here written $Q_e$) and $\Delta_r G = 0$ which means, combining equations (2a) and (4a) that $$ T\Delta _r S_e = \Delta _r H^ \circ \tag{5} $$ and $$ T\Delta _r S^ \circ = T\Delta_r S_e + RT\log Q_e \tag{6}$$ so that $$ T\Delta _r S= T\Delta_r S_e + RT\log Q_e - RT\log Q \tag{7a} $$ or $$ T\Delta _r S= \Delta_r H ^\circ + RT\log\left(\frac{Q_e}{Q}\right) \tag{7b} $$ Now compare equations (5) and (7b). Equation (5) holds at equilibrium and says, sure enough, that the reaction entropy and enthalpy are equal in sign at this point in the reaction coordinate. 
However, equation (7b) - which is the more general expression - says that $\Delta_r S$ can in fact differ in sign from $\Delta_r H^\circ$, depending on the magnitude of the reaction quotient Q. It turns out that while the enthalpy of a reaction in an ideal solution is a constant, the entropy of reaction can be tuned by modifying Q.
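A quick numerical illustration of equation (7b), with invented example values for $\Delta_r H^\circ$ and the equilibrium constant (the numbers are assumptions for illustration only), shows the sign flip:

```python
# Numerical illustration of eq. (7b): an endothermic reaction (dH > 0) whose
# reaction entropy changes sign as the reaction quotient Q moves past Q_e.
import math

R = 8.314        # gas constant, J mol^-1 K^-1
T = 298.15       # K
dH = 10_000.0    # J/mol, assumed reaction enthalpy (= dH standard, by eq. 4a)
Q_e = 100.0      # assumed equilibrium constant

def reaction_entropy(Q):
    """dS from eq. (7b): T*dS = dH + R*T*ln(Q_e / Q), solved for dS."""
    return (dH + R * T * math.log(Q_e / Q)) / T

print(round(reaction_entropy(Q_e), 1))   # 33.5  (= dH/T: same sign as dH)
print(round(reaction_entropy(1e6), 1))   # negative: opposite sign to dH
```

At $Q = Q_e$ the entropy and enthalpy of reaction necessarily agree in sign (equation (5)), but pushing $Q$ well past $Q_e$ makes the logarithmic term dominate and flips the sign of $\Delta_r S$ while $\Delta_r H$ stays fixed.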
{ "domain": "chemistry.stackexchange", "id": 14241, "tags": "physical-chemistry, thermodynamics, enthalpy, entropy, free-energy" }
Intuition behind Relativization
Question: I am taking a course on Computational Complexity. My problem is that I don't understand the relativization method. I tried to find a bit of intuition in many textbooks, unfortunately so far with no success. I would appreciate it if someone could shed light on this topic so that I will be able to continue by myself. The following few sentences are questions and my thoughts about relativization; they will help to navigate the discussion. Very often relativization comes up in comparison with diagonalization, which is a method that helps distinguish between a countable set and an uncountable set. It somehow follows from relativization that the $P$ versus $NP$ question cannot be solved by diagonalization. I don't really see why relativization shows that diagonalization is useless here, or why it is actually useless. The idea behind an oracle Turing machine $M^A$ is at first very clear. However, when it comes to $NP^A$ and $P^A$ the intuition disappears. An oracle is a black box that is designed for a special language and answers the question of whether the string on the input of the oracle is in the language in time 1. As I understand it, a TM that contains an oracle just performs some auxiliary operations and asks the oracle. So the core of the TM is the oracle; everything else is less important. What's the difference between $P^A$ and $NP^A$, even though the oracle in both of them works in time 1? The last thing is proving the existence of an oracle $B$ such that $P^B \neq NP^B$. I found the proof in several textbooks and in all of them the proof seems very vague. I tried to use "Introduction to complexity" by Sipser, Chapter 9, Intractability, and didn't get the idea of the construction of a list of all polynomial time oracle TMs $M_i$. This is more or less everything I know about relativization; I would appreciate it if someone would decide to share their thoughts on the topic.
Addendum: in one of the textbooks I found an example of an $NP^B$ language (Computational Complexity: A Modern Approach by Boaz Barak and Sanjeev Arora. Theorem 3.7. Page 74). $U_B=\left \{ 1^n:some \space string \space of \space length \space n \space is \space in \space B\right \} $ It's a unary language. I believe that (1, 11, 111, 1111, ...) are all in $U_B$. The author affirms that such a language is in $NP^B$, which I cannot understand, since the oracle for $B$ can resolve everything in time 1. Why do we need a nondeterministic TM with an oracle? If it's not a good example of $NP^B$, please give one of yours to demonstrate the existence of such a language. Answer: You haven't really asked any question, but it seems like you don't know what $\rm{P}^A$ means and what $\rm{NP}^A$ means for a language $A$. The class $\rm{NP}^A$ is simply all languages that are decidable in "NP time", given a Turing machine with $A$ as an oracle. This means a non-deterministic Turing machine with access to $A$ which runs in polynomial time. $\rm{P}^A$ is the deterministic version.
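One way to make the $U_B$ example from the addendum concrete is to model the oracle as a membership-test callback that answers in one step. This is my own illustrative sketch (the toy language $B$ is invented): a nondeterministic $NP^B$ machine would guess a single length-$n$ string and make one oracle query, while a deterministic simulation has to try all $2^n$ candidates, i.e. exponentially many queries.

```python
# Illustrative sketch: an oracle machine is just an algorithm handed a
# membership-test callback. The oracle language B here is a toy stand-in.
from itertools import product

def oracle_B(s):
    """Toy oracle for a language B: here, B = strings of even length."""
    return len(s) % 2 == 0

def decide_U_B(w, oracle):
    """Decide U_B = {1^n : some string of length n is in B}.
    An NP^B machine guesses one length-n string and asks the oracle once;
    this deterministic simulation has no guess, so it tries all 2^n
    candidates -- exponentially many oracle queries."""
    if w == "" or set(w) != {"1"}:
        return False
    n = len(w)
    return any(oracle("".join(bits)) for bits in product("01", repeat=n))

print(decide_U_B("11", oracle_B))    # True: length-2 strings are in B
print(decide_U_B("111", oracle_B))   # False: no odd-length string is in B
```

The point is that oracle access being "time 1" per query doesn't collapse the classes: what differs between $P^B$ and $NP^B$ is how many queries a polynomial-time machine can effectively explore, and nondeterminism lets the machine check one guessed query per computation branch.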
{ "domain": "cs.stackexchange", "id": 668, "tags": "complexity-theory, np-complete, complexity-classes, relativization, np" }
Where does amcl_demo.launch load its plugins from?
Question: When I run roslaunch turtlebot_navigation amcl_demo.launch map_file:=<path_of_my_map> I get, as part of the output:

[ INFO] [1396554684.603955092]: Using plugin "static_layer"
[ INFO] [1396554684.956445219]: Requesting the map...
[ INFO] [1396554685.162530181]: Resizing costmap to 480 X 288 at 0.050000 m/pix
[ INFO] [1396554685.262220473]: Received a 480 X 288 map at 0.050000 m/pix
[ INFO] [1396554685.288558669]: Using plugin "obstacle_layer"
[ INFO] [1396554685.355236864]: Subscribed to Topics: scan bump
[ INFO] [1396554685.659024142]: Using plugin "inflation_layer"
[ INFO] [1396554686.518542435]: Loading from pre-hydro parameter style
[ INFO] [1396554686.738804760]: Using plugin "obstacle_layer"
[ INFO] [1396554687.077918589]: Subscribed to Topics: scan bump
[ INFO] [1396554687.373924581]: Using plugin "inflation_layer"
[ INFO] [1396554687.994371819]: Created local_planner base_local_planner/TrajectoryPlannerROS
[ INFO] [1396554688.069237432]: Sim period is set to 0.20
[ INFO] [1396554690.131045226]: odom received!

Where does amcl_demo.launch load its plugins from? (ROS Distro is Hydro and Ubuntu is 13.04). Originally posted by oswinium on ROS Answers with karma: 105 on 2014-04-03 Post score: 2 Answer: This is actually a really good question, and given the state of the documentation not trivial to find out. I haven't used costmaps heavily myself, only move_base without changing the default config much, so I had to do some digging in the code to see what's going on. The output Loading from pre-hydro parameter style gives some hint. So in the amcl_demo.launch file, through includes, eventually the move_base node is started. This node includes two cost maps (local and global), which are run inside the node (not as separate nodes). This could be the first confusion. The corresponding objects are created here (move_base.cpp#L111) and here (move_base.cpp#L139). You can see that the costmaps are invoked with the names global_costmap and local_costmap. 
This is important for specifying the parameters correctly. In the launch include file where move base is specified, you can see that there are 4 lines to load parameters for the 2 costmaps (move_base.launch.xml#L11-L14). Notice the extra ns="global_costmap" and ns="local_costmap" tags when loading costmap_common_params.yaml. The two other parameter files specifying parameters specific to local and global costmap have this namespace inside the .yaml file. So now we know where the parameters for the two costmaps are specified. But why is there nothing about plugins? We have to dig further and have a look inside the Costmap2DROS node. During initialization of the costmap, it checks (costmap_2d_ros.cpp#L107-L110) if the parameter plugins is set (in the private namespace). If not, it calls the function resetOldParameters, which sets up the plugin parameters from within the node to mimic some default behaviour. So in order to change the plugins, for example for the local costmap, first check the resetOldParameters code to see how the parameters are set up (or use rosparam to inspect the parameters after the node was started). Then modify the yaml file for the local costmap (local_costmap_params.yaml in the turtlebot amcl demo) to specify the default plugins to get back to the default behaviour. If that works as expected, you can now add your additional custom plugins to this file. @David Lu, @tfoote: As maintainers of the navigation stack / turtlebot_navigation package, it would probably be very helpful if the example launch files in move_base and/or turtlebot_navigation and/or costmap_2d would actually use the post-electric style parameterization of costmaps, explicitly listing the layers in the plugins parameter. Otherwise there is a discrepancy between the costmap tutorial and the examples. 
Originally posted by demmeln with karma: 4306 on 2014-04-03 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by oswinium on 2014-04-07: Thanks a lot for the detailed response! Comment by oswinium on 2014-04-07: What I don't understand is why I would be making changes to the local costmap with static_map set to false and rolling_window set to true, considering that my aim is to load a (static?) layer with a lethal obstacle one meter ahead of the robot... Comment by demmeln on 2014-04-07: Since you want to place something relative to the robot, e.g. for obstacle avoidance, it seems the local costmap is the right choice. I'm not sure how the rolling_window option will be interacting with your plugin. I suggest trying it or checking the source code for more details about rolling_window. Comment by demmeln on 2014-04-07: Also, it depends a bit on what exactly you want to achieve with your custom layer. Maybe you can share a bit more on the actual goal of all this. Comment by oswinium on 2014-04-08: OK you were right; adding the plugins (the same stuff in the minimal.yaml file here: http://bit.ly/1qggQa7) to the local_costmap_params.yaml file does the trick. My main aim is to do this - http://bit.ly/1swomQj - and load it as a separate layer on my costmap for social navigation of my robot. Comment by demmeln on 2014-04-08: Good to know :) Comment by oswinium on 2014-04-08: So just another follow-up question: when this new layer is loaded, does the robot actually consider the 'fake' obstacle as a 'real' obstacle? I ask because it seems to have no problem 'walking through' it :S Comment by demmeln on 2014-04-08: I believe the local costmap is used for the local planner for doing obstacle avoidance. The global path planner is using only the global costmap I believe, so this will not consider your obstacle unless you also add something to the global costmap. 
Comment by demmeln on 2014-04-08: So I expect the robot to ignore the additional obstacles when planning a path, but still try to avoid it when trying to follow the path.
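To make the fix described above concrete: below is a sketch of what the plugins parameter in local_costmap_params.yaml might look like to reproduce the default layer setup before adding a custom layer. The exact layer names and types here are an assumption based on the defaults that resetOldParameters sets up; verify them against your own rosparam dump before relying on them.

```yaml
local_costmap:
  # Explicit post-Hydro plugin list; without this, Costmap2DROS falls back
  # to resetOldParameters and loads an equivalent default set.
  plugins:
    - {name: obstacle_layer,  type: "costmap_2d::ObstacleLayer"}
    - {name: inflation_layer, type: "costmap_2d::InflationLayer"}
    # Add your custom layer here once the defaults behave as before, e.g.:
    # - {name: social_layer, type: "my_pkg::SocialLayer"}   # hypothetical
```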
{ "domain": "robotics.stackexchange", "id": 17525, "tags": "navigation, amcl-demo.launch, plugin, amcl, pluginlib" }
Statistical Treatment of Constrained Systems in the Microcanonical Ensemble
Question: Consider a constrained classical Lagrangian $ \mathcal{L}' = \mathcal{L}(q, \dot{q}) + \lambda f(q) $ where $ \lambda $ is the Lagrange multiplier for the constraint. We can get a Hamiltonian for this system by the standard Legendre transform, \begin{align} \mathcal{H}' = \dot{q} \frac{\partial}{\partial \dot{q}} \mathcal{L}' - \mathcal{L'} = \mathcal{H} - \lambda f(q) \end{align} If I were trying to get the equations of motion, I would normally treat $ \lambda $ as a normal variable and get the equations of motion as usual. Treating it like a normal variable, the microcanonical ensemble is, \begin{align} \Omega(E) &= \int dq dp d\lambda~\delta \big( E - \mathcal{H} + \lambda f(q) \big) \end{align} But if I inject the constraint by hand into the microcanonical ensemble, I would expect an integral like, \begin{align} \Omega(E) &= \int dq dp~\delta \big( E - \mathcal{H} \big) ~\delta \big( f(q) \big) \end{align} What is the connection? I assume the second integral is correct as it makes more sense to me, but how do I derive it? What about KKT-type constrained dynamics like a ball falling onto a floor? Answer: In the only field where I have seen this issue treated, polymers, the approach is your second take at it, i.e. introducing delta functions in the partition function [1]. Just to give you a feel for it, the simplest model of a polymer is a freely jointed chain. If $r_i$ is the position of the $i$-th joint, the first constraint is to require that the distance between two consecutive joints is a constant, $\delta\big((r_{i+1}-r_i)^2 - a^2\big)$. The second one is that the motion of each joint shall be perpendicular to the link in the frame of the opposite joint, $\delta\big((p_{i+1}-p_i)\cdot(r_{i+1}-r_i)\big)$. 
Then the partition function is written $$Z = \int \Pi_i dr_i dp_i \delta\big((r_{i+1}-r_i)^2 - a^2\big) \delta\big((p_{i+1}-p_i)\cdot(r_{i+1}-r_i)\big)\exp\left(-\beta\sum_i\frac{p_i^2}{2m}\right)$$ Caveat: I do not make the claim that the problem is always treated that way, as I am not a specialist in this area, but working in the field of crystallography, I have some exposure to protein physics, and this is the only approach I have been exposed to. [1] Martial Mazars. Statistical physics of the freely jointed chain. Phys. Rev. E, 53:6297–6319, Jun 1996.
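As a sanity check on the constrained picture, one can sample freely jointed chains directly (fixed bond length $a$, independent random link orientations) and verify the textbook result $\langle R^2\rangle = N a^2$ for the end-to-end distance. This little Monte Carlo sketch is my own illustration, not from the answer, and it ignores the momentum constraints since it only probes configurational statistics:

```python
import numpy as np

def sample_chain_R2(n_links, a, rng):
    """End-to-end squared distance of one freely jointed chain."""
    # Uniform random directions on the unit sphere, one per link.
    v = rng.normal(size=(n_links, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    end_to_end = a * v.sum(axis=0)
    return end_to_end @ end_to_end

rng = np.random.default_rng(0)
n_links, a, n_samples = 50, 1.0, 4000
r2 = np.mean([sample_chain_R2(n_links, a, rng) for _ in range(n_samples)])
print(r2 / (n_links * a**2))  # should be close to 1
```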
{ "domain": "physics.stackexchange", "id": 44226, "tags": "statistical-mechanics, lagrangian-formalism, constrained-dynamics" }
rosjava error: Unknown CMake command "add_java_source_dir"
Question: Hello, I am trying to compile a package containing ROSJAVA related code. I am getting an error while compiling the package as below:

mkdir -p bin
cd build && cmake -Wdev -DCMAKE_TOOLCHAIN_FILE=`rospack find rosbuild`/rostoolchain.cmake ..
[rosbuild] Building package eskorta_wifi
[rosbuild] Including /opt/ros/diamondback/stacks/ros_comm/clients/rospy/cmake/rospy.cmake
[rosbuild] Including /opt/ros/diamondback/stacks/ros_comm/clients/cpp/roscpp/cmake/roscpp.cmake
[rosbuild] Including /opt/ros/diamondback/stacks/ros_comm/clients/roslisp/cmake/roslisp.cmake
CMake Error at CMakeLists.txt:23 (add_java_source_dir):
Unknown CMake command "add_java_source_dir".

I have checked the CMakeLists.txt, which indicates the right directory. It is pointing to the /bin directory of the package. Please suggest if we are doing something wrong. Prasad Originally posted by Prasad on ROS Answers with karma: 79 on 2011-06-18 Post score: 0 Answer: rosjava doesn't seem to be installed correctly. CMake is unable to resolve the macro add_java_source_dir which is part of rosjava's cmake integration. Can you roscd into it? If not, how did you install it? Did you install the ros-diamondback-client-rosjava debian package? Originally posted by Lorenz with karma: 22731 on 2011-06-19 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Prasad on 2011-06-24: Yes I did. roscd is working. Yes, I have installed the debian package.
{ "domain": "robotics.stackexchange", "id": 5885, "tags": "rosjava" }
FizzBuzz with user input variables
Question: Yes, another FizzBuzz! I know you guys may be tired of these, but I think it's a traditional stepping stone for beginners on this site. Here are some notes on my thinking: I tried to keep it as flexible as possible, where the logic is deduced from the variables as much as possible. So one could potentially make a user input form on a website and let the user select all the values (numbers and words), and the code just does it. I tried to avoid magic numbers as much as I know how. Not sure if 0 counts as a magic number. I did not feel the need to add any comments as I feel it is self-explanatory. If you feel otherwise, please let me know. Here is a link to PhpFiddle for your convenience. <?php $counter = 1; $stopper = 100; $fizzWord = 'Fizz'; $fizzNumber = 3; $buzzWord = 'Buzz'; $buzzNumber = 5; for ($counter = 1; $counter <= $stopper; $counter++) { if ((($counter %($fizzNumber * $buzzNumber) == 0))) { echo $fizzWord . $buzzWord; } elseif (($counter % $fizzNumber) == 0) { echo $fizzWord; } elseif (($counter % $buzzNumber) == 0) { echo $buzzWord; } else { echo $counter; } echo "<br>\n"; } ?> Answer: I know you've been seeing a lot of php, but I think this is the first time I've seen you ask a question with the sole intention of learning php! Great :) I think you'd be able to handle any critique thrown at you, so I'll do my best, but it's a simple script which leaves little room for huge critiques! The variable name $counter is ambiguous, I feel. I keep wanting to assume it's some Counter object! A more suitable name in my opinion may be $index or a synonym to that! $stopper is also a strange name. It seems too friendly for the code. What about $lastIndex or $endPoint or even $stoppingIndex. It's just my opinion, but I would find it easier to handle the trigger words in arrays. 
Such that we may have: $triggerWords = [ ["divisor" => 3, "word" => "Fizz"], ["divisor" => 5, "word" => "Buzz"] ] This would open up the possibility of expanding the amount of trigger words we want, plus it's more modularized, plus it's easier to change values in the future. You won't be restricted to two triggers, specifically "fizz" and "buzz", either. This will take a loop inside your main loop so it can iterate the array, but since you didn't mention optimization, this shouldn't be an issue. Setting $counter as the first argument in the for loop overwrites what you set on the first line. This could just be a mistake, and that's fine, just making you aware is all. Regarding your algorithm: it looks very clunky and over-complicated. Take into consideration rolfl's answer, and perhaps look up other implementations of PHP FizzBuzz (or Java, it's similar looking to PHP and is easy to read if you don't know the language!). It does look like you have one or two syntax errors (extra parentheses), and you may want to run this on multiple sites to make sure you are given the same result. Lastly, you need spaces/indentation before your last echo! Best to keep things leveled on their scope. Very nice, good work.
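To illustrate the reviewer's trigger-words idea in a runnable form, here is a quick sketch in Python rather than PHP (purely for illustration; the function and variable names are mine) of the generalized loop the review describes:

```python
# Generalized FizzBuzz driven by a list of {divisor, word} triggers,
# mirroring the $triggerWords array suggested in the review.
trigger_words = [
    {"divisor": 3, "word": "Fizz"},
    {"divisor": 5, "word": "Buzz"},
]

def fizzbuzz(last_index, triggers):
    lines = []
    for index in range(1, last_index + 1):
        # Concatenate the word of every trigger whose divisor matches;
        # a multiple of 3*5 automatically yields "FizzBuzz", no special case.
        words = "".join(t["word"] for t in triggers if index % t["divisor"] == 0)
        lines.append(words or str(index))
    return lines

print("\n".join(fizzbuzz(15, trigger_words)))
```

Adding a third trigger is then just one more dictionary in the list, with no change to the loop itself.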
{ "domain": "codereview.stackexchange", "id": 9720, "tags": "php, beginner, fizzbuzz" }
If gravitons are 'real' and analogous to photons are they also being 'stretched' by the universe's expansion?
Question: Since photon wavelengths are stretched by our expanding universe, appearing to us as a redshift, would graviton wavelengths similarly be stretched? For that matter, do gravitons even have a wavelength like photons? Answer: What has just been proven is the existence of gravitational waves, not gravitons. Besides, if gravitons exist, they are likely to be a "pseudo particle" like the photon, i.e., mostly a quantized emission of a wave packet. As a wave, by construction the downstream part is late compared to the upstream part, and because of expansion, it will have slightly more length to cross than the upstream part at any time, which accumulates with distance, resulting in the increase of the wavelength. All kinds of waves thus "red"shift.
{ "domain": "physics.stackexchange", "id": 29565, "tags": "cosmology, space-expansion, quantum-gravity, gravitational-waves" }
Numerical solution to Mukhanov-Sasaki equation
Question: I am trying to figure out how to solve the Mukhanov-Sasaki equation to compute the power spectrum of an inflation potential that exhibits an ultra-slow-roll phase, which gives rise to an enhancement in the power spectrum suitable for primordial black hole formation. In terms of the e-fold variable, the MS equation is $$ \frac{d^{2}\zeta_{k}}{dN^{2}}+\left( 3-\epsilon_{1}+\epsilon_{2} \right)\frac{d\zeta_{k}}{dN}+\left( \frac{k}{aH} \right)^{2}\zeta_{k}=0 $$ with the Bunch-Davies vacuum as the initial condition, $$ u_{k}\to\frac{e^{-ik\tau}}{\sqrt{2k}} \quad (k\gg aH). $$ I have computed the Hubble flow parameters, but now I am unsure how to solve this ODE for all the modes of interest. I know that I need to start them deep inside the horizon and evolve them to horizon exit, where they freeze out; however, I am unsure how to proceed. Answer: You need to solve the MS equation separately for each $k$, initialized far enough in the past so that each mode is approximately in the Bunch-Davies limit. I've found that choosing $N_i$ so that $k = \mathcal{O}(100)\,a(N_i)H(N_i)$ is sufficient for this. Note that there is a different $N_i$ for each $k$. Here, $a(N) = a_0 \exp(N-N_0)$, where you need to pick a value for $a_0$. For example, you could set $a_0$ such that a particular scale, like the quadrupole, is at horizon crossing at $N_0$, i.e. $k_{\ell = 2} = a_0 H_0$. Initializing further back in time will help with accuracy, but there are diminishing returns. Also, if you go back too far, you'll begin to see trans-Planckian modulations in your power spectra. Then, you just evolve each mode forward in time until inflation ends. The largest-scale modes will have frozen out long before the end of inflation, but this way you can compute $P(k)$ by evaluating each individual mode $u_k$ on an equal-time slice. Alternatively, you could evolve each mode until they are well outside the horizon, say, when $k < aH/100$ or something. 
The modes freeze out quite soon after horizon exit, so applying a generous cut-off like this should work well.
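To make the recipe concrete, here is a minimal sketch of the procedure for a single mode, using a pure de Sitter toy background ($\epsilon_1=\epsilon_2=0$, $H=1$, $a=e^N$) rather than an ultra-slow-roll potential. The initial condition imposes the sub-horizon oscillation $\zeta\propto e^{-ik\tau}$, i.e. $d\zeta/dN=-i(k/aH)\zeta$, with an arbitrary normalization. This is my own illustration, not code from the answer:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy de Sitter background: H = 1, a(N) = exp(N), so k/aH = k * exp(-N).
k = np.exp(5.0)                    # mode crossing the horizon at N = 5
N_i = np.log(k / 100.0)            # start when k = 100 aH (deep inside horizon)
N_f = 12.0                         # stop well after horizon exit

def rhs(N, y):
    # y = (Re zeta, Im zeta, Re dzeta/dN, Im dzeta/dN); eps1 = eps2 = 0.
    zr, zi, dr, di = y
    w2 = (k * np.exp(-N))**2       # (k/aH)^2
    return [dr, di, -3.0*dr - w2*zr, -3.0*di - w2*zi]

# Sub-horizon (Bunch-Davies-like) initial data: dzeta/dN = -i (k/aH) zeta.
kaH = k * np.exp(-N_i)
y0 = [1.0, 0.0, 0.0, -kaH]
sol = solve_ivp(rhs, (N_i, N_f), y0, rtol=1e-8, atol=1e-10)

zeta = sol.y[0, -1] + 1j * sol.y[1, -1]
dzeta = sol.y[2, -1] + 1j * sol.y[3, -1]
print(abs(dzeta) / abs(zeta))      # tiny: the mode has frozen out
```

For a real ultra-slow-roll background you would replace the constant flow parameters with your computed $\epsilon_1(N)$, $\epsilon_2(N)$, $H(N)$, and loop this over a grid of $k$, normalizing with the proper Bunch-Davies amplitude to build $P(k)$.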
{ "domain": "physics.stackexchange", "id": 53792, "tags": "cosmology, cosmological-inflation" }
Magnetic field lines can be entirely confined within the core of a toroid, but not within a straight solenoid. Why?
Question: I need a full explanation of this concept: magnetic field lines can be entirely confined within the core of a toroid, but not within a straight solenoid. Answer: This is a solenoid and its magnetic field lines. This is a toroid and its magnetic field lines. A solenoid by construction has two magnetic poles at the edges when current is flowing through its windings. One can think of a toroid as a solenoid that has been curved and joined so no poles are open. A toroid can have magnetic fields outside its geometrical boundary, depending on the way the currents are flowing, if there is a circumferential current that has not been neutralized. Once neutralized, there is a magnetic field only inside (as described in the link given above). A neutralizing design is shown below.
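The confinement claim can be checked numerically by modeling the toroid as a ring of discrete circular windings (so there is no net circumferential current) and summing the Biot-Savart law over all current segments. The geometry, sample points, and tolerances below are my own illustration; the units absorb the constant $\mu_0 I/4\pi$:

```python
import numpy as np

def toroid_field(point, R=1.0, a=0.3, n_loops=48, n_seg=60):
    """Biot-Savart sum (up to mu0*I/4pi) for n_loops circular windings of
    radius a whose centers sit on a circle of radius R in the xy-plane.
    Each winding lies in the plane containing the z-axis, so there is no
    net circumferential current component."""
    B = np.zeros(3)
    theta = np.linspace(0.0, 2 * np.pi, n_seg, endpoint=False)
    e_z = np.array([0.0, 0.0, 1.0])
    for phi in np.linspace(0.0, 2 * np.pi, n_loops, endpoint=False):
        e_r = np.array([np.cos(phi), np.sin(phi), 0.0])
        center = R * e_r
        # Discretized points along this winding; dl points along the current.
        pts = center + a * np.outer(np.cos(theta), e_r) + a * np.outer(np.sin(theta), e_z)
        dl = np.roll(pts, -1, axis=0) - pts
        mid = 0.5 * (pts + np.roll(pts, -1, axis=0))
        r = point - mid
        rn = np.linalg.norm(r, axis=1)
        B += np.sum(np.cross(dl, r) / rn[:, None]**3, axis=0)
    return B

B_in = toroid_field(np.array([1.0, 0.0, 0.0]))    # inside the core
B_hole = toroid_field(np.array([0.0, 0.0, 0.0]))  # center of the hole
B_out = toroid_field(np.array([2.2, 0.0, 0.0]))   # outside the torus
print(np.linalg.norm(B_hole) / np.linalg.norm(B_in),
      np.linalg.norm(B_out) / np.linalg.norm(B_in))  # both ratios are small
```

With a residual circumferential current (e.g. a helical winding that is not compensated), the exterior field would no longer vanish, which is what the "neutralizing design" in the answer addresses.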
{ "domain": "physics.stackexchange", "id": 4711, "tags": "electromagnetism" }
How do you find the position at which three particles obey $m_1 a_1 = m_2 a_2$ if two of the particles form a composite body?
Question: From Classical Mechanics by Kibble: Consider a system of three particles, each of mass m, whose motion is described by (1.9). If particles 2 and 3, even though not rigidly bound together, are regarded as forming a composite body of mass 2m located at the mid-point $r=\frac{1}{2}(r_2 +r_3)$, find the equations describing the motion of the two-body system comprising particle 1 and the composite body (2+3). What is the force on the composite body due to particle 1? Show that the equations agree with (1.7). When the masses are unequal, what is the correct definition of the position of the composite (2 + 3) that will make (1.7) still hold? (1.9) is $$ m_1 a_1 = F_{12} + F_{13}, \\ m_2 a_2 = F_{21} + F_{23}, \\ m_3 a_3 = F_{32} + F_{31}.$$ (1.7) is $$ m_1 a_1 = -m_2 a_2 $$ So I've done the first part, however I don't know how to do the bit in italics. Apparently the answer is $$ r = \frac{m_2 r_2 + m_3 r_3}{m_2 + m_3}, $$ but I don't understand where this answer is coming from. Any help would be appreciated. Thank you. Answer: First, take a step back and note that this result should be intuitive. The formula given is the weighted average of $r_2,r_3$ with each position contributing proportionally according to its mass. I.e. the total mass is $m_{23}=m_2+m_3,$ and then the position $r_2$ makes up $\frac{m_2}{m_{23}}$ of $r$ and $r_3$ makes up $\frac{m_3}{m_{23}}$ of $r$. The resulting $r$ (from now on I will call it $r_{23}$) is called the center of mass of objects 2 and 3. You can find this algebraically by positing that the combined mass of the objects should be $m_{23}=m_2+m_3$ and then trying to find the acceleration that multiplies it in the net force. 
Add (1.9.ii) and (1.9.iii) and you get $$m_2a_2+m_3a_3=-m_1a_1.$$ We want to write the LHS as $m_{23}a_{23}$, so just forcibly factor out $m_2+m_3$ from the existing expression and call what's left $a_{23}.$ $$m_2a_2+m_3a_3=\underbrace{(m_2+m_3)}_{m_{23}}\underbrace{\left(\frac{m_2a_2+m_3a_3}{m_2+m_3}\right)}_{a_{23}}.$$ Integrate $a_{23}$ and you have $r_{23}$ as given.
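A quick numerical check of the identity (my own illustration): pick random masses and random internal forces obeying Newton's third law, and verify both that $m_1 a_1 = -(m_2 a_2 + m_3 a_3)$ and that the mass-weighted acceleration $a_{23}$ makes (1.7) hold, while the unweighted midpoint fails for unequal masses:

```python
import numpy as np

rng = np.random.default_rng(1)
m1, m2, m3 = rng.uniform(0.5, 3.0, size=3)
F12, F13, F23 = rng.normal(size=(3, 3))      # internal forces, arbitrary
F21, F31, F32 = -F12, -F13, -F23             # Newton's third law

a1 = (F12 + F13) / m1                        # equations (1.9)
a2 = (F21 + F23) / m2
a3 = (F31 + F32) / m3

# (1.7) for particle 1 and the composite body, with the weighted average:
a23 = (m2 * a2 + m3 * a3) / (m2 + m3)
print(np.allclose(m1 * a1, -(m2 + m3) * a23))    # True

# The plain midpoint acceleration generally does NOT satisfy (1.7):
a_mid = 0.5 * (a2 + a3)
print(np.allclose(m1 * a1, -(m2 + m3) * a_mid))  # False for unequal masses
```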
{ "domain": "physics.stackexchange", "id": 81233, "tags": "classical-mechanics, newtonian-gravity, orbital-motion" }
How to make a launch file?
Question: The tutorial says: "Now let's create a launch file called turtlemimic.launch and paste the following:". I pull up gedit, copy/paste

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

into gedit with sudo, and save it at /home/pc/catkin_ws/src/beginner_tutorials/launch. I then run

$ cd ~/catkin_ws/
$ source devel/setup.bash

and try to run roslaunch beginner_tutorials turtlemimic.launch, but I get the error:

pc@pc:~$ cd ~/catkin_ws/
pc@pc:~/catkin_ws$ source devel/setup.bash
pc@pc:~/catkin_ws$ roslaunch beginner_tutorials turtlemimic.launch
... logging to /home/pc/.ros/log/93f61492-23a7-11e4-bf9f-9439e5ec40e7/roslaunch-pc-3090.log
Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
Invalid roslaunch XML syntax: syntax error: line 1, column 3

I don't know what I'm doing wrong; the file will only save as plain text and not XML, and the tutorial does not say how to save files in XML. Originally posted by HelpwithUbuntu on ROS Answers with karma: 15 on 2014-08-14 Post score: 0 Original comments Comment by BennyRe on 2014-08-14: Please use the code block formatting tool. It's the button with the 101010 in it. I always have to edit your questions to avoid eye cancer. Comment by HelpwithUbuntu on 2014-08-14: Thanks for formatting my question, it looks ok when I put it in the question box but changes when it's posted. I have searched and I can't find out what a CODE BLOCK tool is. What is the button with 101010, and if all of this is important why is it not in the tutorial? It says beginner level. I am not playing dumb, I really don't know what this stuff is. Comment by BennyRe on 2014-08-14: Do you really want to do robotics? The ROS tutorials are very easy and compared to real robotics a child's play. Comment by HelpwithUbuntu on 2014-08-14: YES, I will get the hang of it just like anything else hard in life: the more you do it, the better you get. 
How was I supposed to know not to include the line numbers? The tutorial said copy and paste. I find it funny you asked about spam. Answer: Don't copy-paste the line numbers from the tutorial page. You can always disable the line numbers in ROS wiki code examples. Originally posted by BennyRe with karma: 2949 on 2014-08-14 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by HelpwithUbuntu on 2014-08-14: Thank you so much, it works now. I don't understand why the tutorial does not tell me that.
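For anyone landing here with the same error: a roslaunch file is just an XML document saved as plain text, and the "1 2 3 ... 16" above are the wiki's line numbers, not content. A minimal valid skeleton looks something like this (the node and package names are illustrative, not the tutorial's exact file):

```xml
<launch>
  <!-- A roslaunch file starts with <launch> and contains node tags. -->
  <node pkg="turtlesim" type="turtlesim_node" name="sim" />
  <!-- Save as plain text with a .launch extension; no special
       "XML save mode" is needed in gedit. -->
</launch>
```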
{ "domain": "robotics.stackexchange", "id": 19056, "tags": "ros" }
How much cheaper is it to mount oceanographic sensors on elephant seals and other animals, compared using Argo floats and other abiotic sensors?
Question: This article says that mounting sensors on animals is relatively cheap compared with the alternatives. Seals, in other words, were filling in the blind spots on oceanographer's maps. And they were doing it relatively cheaply—at least compared to the cost of ships or Argo floats, the international network of probes that transmit real-time measurements of the Earth's oceans. But how much cheaper? Answer: Each float costs around US$15,000 and communication, deployment and calibrations costs approximately double the through-life cost of each float. (from http://www.argo.ucsd.edu/Argoflyer_final.pdf) One can roughly estimate from this source that, given a life-span of four years, an Argo float costs slightly over \$3.5k/yr (float only) or \$7k/yr (including deployment and calibration etc.). ...by June 2007, over 95 percent of the goal of 3,000 floats had been deployed... ...With a four-year life span per float ... ...The annual cost of the worldwide Argo program is about $20 million... (from http://celebrating200years.noaa.gov/magazine/argo/welcome.html#float) Based on this source, the all-inclusive number is in the ballpark of slightly less than \$7k/yr. So, without getting into details of whether one can technically be an alternative to the other, a primitive cost comparison based on the yearly cost derived from the mentioned article and the links below it is ~\$7k/yr (including deployment and calibration etc.) or ~\$3.5k/yr (only the float) versus \$4k per tag with Tarpon fish. So if one can assume the deployment/retrieval/calibration costs are similar, they will cost about the same by the end of the first year.
{ "domain": "earthscience.stackexchange", "id": 58, "tags": "ocean, in-situ-measurements" }
Why does $ O = |\phi\rangle \langle\psi|$ equal $O =\lambda |\phi\rangle \langle\phi|\psi\rangle\langle\psi|$ for the 2 vectors of Hilbert space
Question: If we take the operator $$\hat{O} = |\phi\rangle \langle\psi| \space \space(1)$$ whereby $|\phi\rangle$ and $|\psi\rangle$ are two vectors of the hilbert space. My notes also state that $\hat{O}$ can be written as $$\hat{O} = \lambda P{\phi}P_{\psi} \space \space (2)$$ where $\lambda$ is a constant. $P{\phi}$ and $P_{\psi}$ are projector operators associated with $|\phi\rangle$ and $|\psi\rangle$. My question is how can this be. Surely if I write it in terms of state I see the following: $$ \hat{O} = \lambda |\phi\rangle \langle\phi|\psi\rangle\langle\psi|$$ I do not see how (1) and (2) equal one another. Answer: It can be possible if $\langle\phi|\psi\rangle\neq0$, and $\lambda$ would be $$\lambda=\frac{1}{\langle\phi|\psi\rangle}$$
{ "domain": "physics.stackexchange", "id": 78041, "tags": "quantum-mechanics, hilbert-space, operators, quantum-states" }
Why do different pain killers have different effects on people?
Question: I've noticed some pain killers working great for me, while others have no effect.

Works for me:
- Aspirin
- APC †
- Naproxen

Doesn't work for me:
- Paracetamol
- Diclofenac
- Tramadol

I doubt there is much of a placebo effect at work, since most of these either did or did not work when I first took them, without my having expectations either way. Whenever I have a headache, I take an APC. I suspect it's actually the aspirin in there that does the job, since when I take just paracetamol, it doesn't do squat. As a kid I got children's aspirin, which worked. I once had a severe back ache. I was prescribed diclofenac (a heavier variant than the over-the-counter one), which didn't work. I was then prescribed tramadol, with the same results. I then tried naproxen, which worked right away. Why do some pain killers work while others don't? Is there an underlying mechanism that explains why some of these work while others don't? Does that predict whether pain killers that I haven't used yet will work? Please note that I'm not looking for medical advice on which pain killers to take; I'm just curious about how my body interacts with the various ones. †: the one consisting of aspirin, paracetamol, and caffeine, not the one containing phenacetin. Think Excedrin. Answer: I don't know of any interesting mechanism that is specific to pain killers, so I will instead answer for drugs in general. Drug action is a complex process consisting of many steps. Let's take a simple example: a systemic direct inhibitor of a kinase. This drug would need to*:

1. Be absorbed into your bloodstream
2. Remain in your bloodstream for sufficient time
3. Be absorbed into the tissue
4. Be able to bind the target protein

1 can fail due to interaction with other concurrently taken drugs or food, or simply genetic factors affecting the particular functioning of the gut mucosa.
2 can fail because the kidneys are too good at eliminating it, or the liver is metabolizing it too aggressively (both also subject to modulation by other drugs, foods and genetic factors). 3 can fail because the transporters in the cells aren't working as rapidly, or represent an allele less likely to take in the drug, or are modulated by other drugs/foods. The tissue can also have efflux pumps or enzymes that break down the drug. 4 can fail because the drug was designed for a specific allele of that kinase, but you happen to have a different allele, which has a slightly different structure that is no longer targeted by this drug. Then you have a host of physiological variables, and addiction/tolerance. Apparently the most common genetic reason by far for variable drug sensitivity is the specific set of CYP genes you have. CYP enzymes are abundant in the liver and chemically process various molecules (including drugs). Besides this, an interesting set of specific examples used to be available from 23andme. I'm not sure if they still provide this after the FDA ban on health information. 
- Clopidogrel sensitivity: CYP2C19 variation
- Proton pump inhibitors (stomach acid reduction): CYP2C19 variation
- Abacavir (HIV drug): HLA-B*5701 SNP
- Acetaldehyde (alcohol flush): ALDH2 mutation
- 5-fluorouracil (chemotherapy): DPYD mutation
- PEG-IFN-alpha/RBV combination (Hepatitis C medicine): IL28B SNP
- Phenytoin (epilepsy drug): CYP2C9 variants
- Choline esters (class of muscle relaxants): BCHE (CE degrader) variants
- Sulfonylurea (used for type 2 diabetes): CYP2C9 variants
- Thiopurine (immune suppressant): TPMT (enzyme that degrades thiopurine) variation
- Warfarin (anticoagulant): CYP2C9 variants
- Caffeine: CYP1A2 SNP
- Metformin (diabetes drug): SNP rs11212617, near the ATM gene
- Antidepressants: SNPs in ABCB1 affect likelihood of sexual dysfunction (common side effect)
- Beta-blockers (heart disease): mutations in ADRB1, which is normally blocked by the drug
- Floxacillin (drug for staphylococcal infections): SNPs in the MHC region affect liver toxicity of this drug
- Heroin: OPRM1 receptor (target of heroin) SNPs affect efficacy
- Lumiracoxib (used to treat pain and symptoms of osteoarthritis): SNPs in the MHC region affect liver toxicity
- Naltrexone (alcohol and narcotic addiction drug): SNPs in OPRM1 affect how much it can reduce the pleasurable feeling from narcotics
- Statins (cardiovascular disease): SNPs in COQ2 (mitochondrial component) affect risk of myopathy

As you can see, our friends the CYP family enzymes come up frequently, and some are even repeat offenders, like CYP2C9. Besides that, there is a fair number of cases where variation in the specific target of the drug is relevant. Note that this list is not comprehensive: many drugs have not been studied in sufficient detail, and some may have complicated mechanisms instead of just "bind and inhibit protein X". I have omitted many details and links to literature; I am sure you can easily find them by searching on Google Scholar with the keywords I already gave. Let me know if that doesn't work, though. 
*: Note that these aren't necessarily required for all drugs. For example, some drugs can be applied directly to the skin and hence do not need to pass through blood.
{ "domain": "biology.stackexchange", "id": 2897, "tags": "pharmacology, pharmacodynamics, pharmacokinetics, analgesia, treatment" }
Showing that 4D rank-2 anti-symmetric tensor always contains a polar and axial vector
Question: In my special relativity course the lecture notes say that in four dimensions a rank-2 anti-symmetric tensor has six independent non-zero elements which can always be written as components of 2 3-dimensional vectors, one polar and one axial. For instance in the angular momentum tensor $L^{ab} = X^aP^b -X^bP^a$ the top row $L^{0i}=ct\vec{p}-(E/c)\vec{x}$ which is obviously polar (as $\vec{x}$ and $\vec{p}$ are polar vectors) while the spatial-spatial section contains the usual 3D angular momentum components which obviously represent the axial angular momentum $\vec{L}$ vector. (And the first column is just -1 times the polar vector due to the anti symmetry of the tensor). The notes only explain this as ‘these components transforming in identical ways to polar and axial vectors’. I would like to know how to show this, possibly from the co-ordinate transformation rule for a 4D rank 2 contravariant tensor by showing it has equivalent effects of transforming these vector components. Specifically the notes say ‘it works because those elements do transform as a vector under rotations’. I’m also confused as to why rotations specifically as a transformation are mentioned here. Answer: Qmechanic's answer is beautiful. I'll clarify one non-obvious detail, namely why the $\textbf{3}\wedge \textbf{3}$ transforms as a vector under the identity component of the rotation group. (It doesn't transform as a vector under reflections, which is why we call it an axial vector.) Let $F_{ab}$ be an antisymmetric tensor in 4d spacetime, and use $0$ for the "time" index and $\{1,2,3\}$ for the "space" indices. When Lorentz transformations are restricted to rotations, the components $F_{jk}$ with $j,k\in\{1,2,3\}$ do not mix with the component $F_{0k}=-F_{k0}$, so we can consider only the components $F_{jk}$. These are the components of the $\textbf{3}\wedge \textbf{3}$ in Qmechanic's answer. 
For the rest of this answer, all indices (including $a,b,c$) are restricted to the spatial values $\{1,2,3\}$. The antisymmetry condition, $F_{jk}=-F_{kj}$, implies that this has only $3$ independent components, which is the correct number of components for a vector, but something doesn't seem quite right: Under rotations, the transformation rule for a vector only uses one rotation matrix, but the transformation rule for $F_{jk}$ uses two rotation matrices, one for each index. How can these possibly be equivalent to each other? Of course they're not equivalent to each other for rotations with determinant $-1$, which is why we call it an axial vector, but they are equivalent to each other for rotations with determinant $+1$, and the purpose of this answer is to explain why that's true. Let $R_{jk}$ be the components of a rotation matrix whose determinant is $+1$. This condition means $$ \sum_{j,k,m}\epsilon_{jkm}R_{1j}R_{2k}R_{3m} = 1, \tag{1} $$ which can also be written $$ \epsilon_{abc} = \sum_{j,k,m}\epsilon_{jkm}R_{aj}R_{bk}R_{cm}. \tag{2} $$ The fact that $R$ is a rotation matrix also implies $$ \sum_c R_{cm}R_{cn}=\delta_{mn}, \tag{3} $$ which is the component version of the matrix equation $R^TR=1$. Contract (2) with $R_{cn}$ and then use (3) to get $$ \sum_c\epsilon_{abc}R_{cn} = \sum_{j,k}\epsilon_{jkn}R_{aj}R_{bk}. \tag{4} $$ Equation (4) is the key. The effect of a rotation on $F_{jk}$ is $$ F_{jk}\to \sum_{a,b}R_{aj}R_{bk}F_{ab}, \tag{5} $$ with one rotation matrix for each index. Since $F_{ab}$ is antisymmetric, we can represent it using only three components like this: $$ v_m\equiv\sum_{j,k}\epsilon_{jkm}F_{jk} \tag{6} $$ The question is, how does $v$ transform under a rotation whose determinant is $+1$? To answer this, use (5) to get $$ v_m\to v_m'=\sum_{j,k}\epsilon_{jkm}\sum_{a,b}R_{aj}R_{bk}F_{ab} \tag{7} $$ and then use (4) to get $$ v_m' =\sum_{a,b,c}\epsilon_{abc}R_{cm}F_{ab} =\sum_c R_{cm} v_c. 
\tag{8} $$ This shows that $v$ transforms like a vector under rotations whose determinant is $+1$. For rotations whose determinant is $-1$ (reflections), the right-hand side of equation (1) is replaced by $-1$, which introduces a minus sign in equation (4), which ends up putting a minus sign in equation (8). That's why we call $v$ an axial vector instead of just a vector. More generally, in $N$-dimensional space: Pseudovector and axial vector are synonymous with "completely antisymmetric tensor of rank $N-1$." Intuitively, an ordinary (polar) vector has only one index, and a pseudovector/axial vector is missing only one index. As a result, they both transform the same way under rotations, but only under rotations. They transform differently in other respects, including reflections and dilations. Under an arbitrary coordinate transform, a (polar) vector transforms as $v_{j}\to \Lambda^a_j v_{a}$. Under an arbitrary coordinate transform, a rank-2 tensor transforms as $F_{jk}\to \Lambda^a_j\Lambda^b_k F_{ab}$. (The components of $\Lambda$ are the partial derivatives of one coordinate system's coordinates with respect to the other's. Sums over repeated indices are implied.) If $N\neq 3$, then angular momentum is an antisymmetric rank-2 tensor (also called a bivector), not an axial vector. A bivector has 2 indices, but an axial vector has $N-1$ indices. To illustrate the different transformation laws for (polar) vectors and bivectors, consider a dilation (also called dilatation) that multiplies the spatial coordinates by a constant factor $\kappa$. Then each factor of $\Lambda$ contributes one factor of $\kappa$, so $F_{jk}\to\kappa^2 F_{jk}$, but a vector goes like $v_j\to \kappa v_j$. 
If we only consider rotations (with determinant $+1$), then they might as well be vectors, but even that's only true in 3d space, not in other-dimensional spaces.
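The transformation laws above are easy to verify numerically. Here is a minimal NumPy sketch (my addition, not part of the original answer) that builds a random antisymmetric $F$, transforms it with two rotation matrices as in equation (5), and checks that its dual $v_m=\sum_{j,k}\epsilon_{jkm}F_{jk}$ obeys $v'=R^Tv$ for a proper rotation and picks up an extra sign under a reflection:

```python
import numpy as np

# Levi-Civita symbol in 3d
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[i, k, j] = -1.0  # odd permutations

def dual(F):
    # v_m = sum_{j,k} eps_{jkm} F_{jk}, cf. equation (6)
    return np.einsum('jkm,jk->m', eps, F)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
F = A - A.T                       # a generic antisymmetric tensor

# Build a proper rotation: QR of a random matrix, with det forced to +1
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1

# Transform F with one rotation matrix per index, cf. equation (5)
F_rot = np.einsum('aj,bk,ab->jk', Q, Q, F)
v = dual(F)
# Equation (8): v'_m = sum_c R_{cm} v_c, i.e. v' = R^T v
assert np.allclose(dual(F_rot), Q.T @ v)

# Under a reflection (det = -1) the dual picks up an extra minus sign
P = Q.copy()
P[:, 0] *= -1                     # det(P) = -1
F_ref = np.einsum('aj,bk,ab->jk', P, P, F)
assert np.allclose(dual(F_ref), -(P.T @ v))
print("axial-vector transformation law verified")
```

The two asserts are exactly the "vector under proper rotations, extra sign under reflections" statement of the answer.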
{ "domain": "physics.stackexchange", "id": 74918, "tags": "special-relativity, tensor-calculus, vectors, group-theory, representation-theory" }
Grover's algorithm for 3SAT problem gives unexpected results
Question: Based on the SAT problem and Grover's algorithm, I've done some experiments. For the below example, I've received unexpected results: Input: boolean function: c example 4 p cnf 3 4 1 -2 -3 0 1 -2 3 0 1 2 -3 0 -1 -2 -3 0 Truth table of boolean function: Histogram of results According to the truth table, the results should be ['000', '001', '011', '101']. Why does the algorithm not return the expected solutions? EDIT: Regarding the first comment. I've noticed that for boolean function: c example 3 p cnf 3 3 1 -2 -3 0 1 -2 3 0 1 2 -3 0 We get the correct results. In this example $M>N/2$. Additionally, I've noticed that the first example is a balanced function and the second one is not. Is it relevant? Answer: This issue happens because $M \geq \frac{N}{2}$, where $M$ is the number of solutions and $N$ is the search domain size. For more details see this answer. A workaround is to double the search domain by adding a dummy variable: c example 4 c Add one to <#vars> and <#clauses> p cnf 4 5 1 -2 -3 0 1 -2 3 0 1 2 -3 0 -1 -2 -3 0 c Add a clause for the dummy variable -4 0 For your second example, 3 out of 5 solutions will be returned if the optimal number of iterations is used ($\Big\lfloor \frac{\pi}{4} \sqrt{N/M}\Big\rfloor$). You can get all the solutions by changing the number of iterations: grover = Grover(iterations = 2, quantum_instance=quantum_instance) Doing the same for the first example will not work (see the figure in this answer). However, doubling the search domain should always work.
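The effect of $M \geq N/2$ can be checked with plain arithmetic, using the standard Grover success probability $\sin^2((2r+1)\theta)$ after $r$ iterations, where $\theta=\arcsin\sqrt{M/N}$. The sketch below (my addition, no qiskit required) shows that for the first example ($N=8$, $M=4$, so $\theta=\pi/4$) the success probability is stuck at $1/2$ for every $r$, while the dummy-variable workaround ($N=16$, $M=4$) reaches probability $1$ at the optimal iteration count:

```python
import math

def grover_success(M, N, r):
    """Probability of measuring a marked state after r Grover iterations
    on a search space of size N with M marked states."""
    theta = math.asin(math.sqrt(M / N))
    return math.sin((2 * r + 1) * theta) ** 2

def optimal_iterations(M, N):
    # floor((pi/4) * sqrt(N/M)), the formula quoted in the answer
    return math.floor((math.pi / 4) * math.sqrt(N / M))

# First example: 3 variables -> N = 8, truth table has M = 4 solutions.
# theta = pi/4, so amplification never helps: probability is 1/2 for all r.
for r in range(4):
    assert abs(grover_success(4, 8, r) - 0.5) < 1e-9

# Workaround: a dummy variable doubles the space to N = 16 while M stays 4.
r = optimal_iterations(4, 16)          # -> 1
assert abs(grover_success(4, 16, r) - 1.0) < 1e-9

# Second example: N = 8, M = 5 (M > N/2). The optimal-r formula gives r = 0,
# i.e. a bare uniform measurement with success probability M/N = 0.625,
# while r = 2 iterations push the probability above 0.97.
assert optimal_iterations(5, 8) == 0
assert grover_success(5, 8, 2) > 0.95
```

This matches the answer's observation that `iterations = 2` recovers the solutions in the second example, while no iteration count helps the first example until the search domain is doubled.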
{ "domain": "quantumcomputing.stackexchange", "id": 3553, "tags": "qiskit, programming, grovers-algorithm, 3sat-problem" }