Nomenclature of complex benzene-based substituents
Question: For a complex alkyl substituent (at least the ones I have come into contact with thus far), you name it as a chain, numbering from the first non-parent carbon, and place the name in square brackets: e.g. isopentyl can also be named [3-methylbutyl]. What is the convention for more complex benzene-based substituents? Specifically, can you still use the more common names like phenol and toluene as a parent for your name if, say, you bond C5 of 2,3-dimethylphenol to your parent, and what suffix will you use? Answer: Intro Your statement that complex substituent names are placed in square brackets is incorrect. These are to be placed in parentheses (round brackets). Correct preferred IUPAC name (PIN)[1] example: 2-(3-methylbutyl)phenol (PIN). Square brackets are used when the substituent name already contains parentheses, e.g.: 2-[(2R)-2-methylbutyl]phenol (PIN). Retained aromatic compounds allowed for substitution You can see that the names above are based on phenol, which is one of the retained aromatic functional parent compounds with allowed substitution in preferred names. The others are aniline and benzoic acid (see IUPAC rule P-34.1.2 [1]). E.g.: 2-hydroxybenzoic acid (PIN) (not 2-carboxyphenol, as the $\ce{-COOH}$ group is preferred over $\ce{-OH}$). Retained alkyl-substituted aromatic hydrocarbons (not allowed for substitution in PINs) There are also retained alkyl-substituted aromatic parent hydrocarbons (P-22.1.3): toluene (PIN) for methylbenzene, and 1,2-xylene (PIN) / o-xylene for 1,2-dimethylbenzene (and the two other xylenes). But none of them is allowed for substitution in preferred names, e.g.: 1-chloro-4-(chloromethyl)benzene (PIN), not α,4-dichlorotoluene. Xylenes are not allowed for substitution even in general, non-preferred names: 1,2-bis(bromomethyl)benzene (PIN) or α-bromo-2-(bromomethyl)toluene, but not α,α′-dibromo-o-xylene. (Yet another retained name, mesitylene for 1,3,5-trimethylbenzene (PIN), is allowed only in general names and is not allowed for substitution.)
Complex aromatic substituents Now if a complex substituent is made from substituted phenol, aniline or benzoic acid, the substituent name is no longer based on these names. They can be thought of as suffixes, but now we have another, higher-priority "suffix", -…yl. E.g.: 3-hydroxy-4,5-dimethylphenyl acetate (PIN) (not 2,3-dimethylphenol-5-yl acetate). (In case you haven't come across 'phenyl' yet: it is not to be confused with 'phenol'; it is the only name in use for "benzenyl", $\ce{-C6H5}$.) References: Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book)
{ "domain": "chemistry.stackexchange", "id": 10792, "tags": "nomenclature, aromatic-compounds" }
How to compute the mass and max speed of a ball moving on a circular track under the influence of gravity and a spring
Question: How can I finish parts (a) and (b) of this dynamics problem? There is no solution or answer in the book... My professor says that this problem is very tricky and told us not even to try it by ourselves. Answer: Start by equating the total energies at the starting point and at the point of maximum velocity. The energy at the starting point comes from the potential energies of the mass and the spring. The energy at the other point also has the kinetic energy component of the mass. $$ m g (R \tan (\alpha )+R)+\frac{1}{2} k \left(\sqrt{R^2+(R \tan (\alpha )+R)^2}-\ell \right)^2 = \\ m g (R-R \sin (\beta ))+\frac{1}{2} k \left(\sqrt{(R-R \sin (\beta ))^2+(2 R-R \cos (\beta ))^2}-\ell \right)^2+\frac{1}{2}m v^2 $$ ($\ell$ is the unstretched spring length) Now we can solve for $v^2$ (or $v$). $$ \frac{1}{m}(-4. \sin ^2(\beta )-4. \cos ^2(\beta )+16. \cos (\beta )+40. \sqrt{-0.08 \sin (\beta )-0.16 \cos (\beta )+0.24}+(3.92 m+8.) \sin (\beta )-12 +\\ 4. \tan ^2(\alpha )-40. \sqrt{0.04 \tan ^2(\alpha )+0.08 \tan (\alpha )+0.08}+8. \tan (\alpha )+3.92 m \tan (\alpha )) $$ Looking at this expression, we see that part of it depends on $\beta$ and the other part on $\alpha$, with no cross terms. The only other unknown is the mass $m$. Given the information that the maximum velocity occurs at $\beta=34 {}^{\circ}$, we can set the derivative of $v^2$ with respect to $\beta$ equal to 0 and solve for $m$. This gives $m = 0.143\ \mathrm{kg}$. I think this is as far as we can get with the information provided. The actual velocity is a function of $\alpha$ and, as expected, higher values produce larger velocities. $$ 2.64447 \sqrt{4. \tan ^2(\alpha )-40. \sqrt{0.04 \tan ^2(\alpha )+0.08 \tan (\alpha )+0.08}+8.56054 \tan (\alpha )+12.0611} $$ [Update] If $\alpha=45{}^{\circ}$ then the velocity is $6.86192\ \mathrm{m/s}$. (All the computations were done in Mathematica and I am attaching the screenshot.)
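The same computation can be reproduced numerically without Mathematica. The constants in the numeric expression above are consistent with $R = 0.2\ \mathrm{m}$, $k = 100\ \mathrm{N/m}$, unstretched length $\ell = 0.2\ \mathrm{m}$ and $g = 9.8\ \mathrm{m/s^2}$; since the problem figure is not included here, treat those values as inferred assumptions. Because only the gravity terms scale with $m$, one can write $v^2 = A(\beta) + B(\beta)/m$ and solve $\mathrm{d}v^2/\mathrm{d}\beta = 0$ at $\beta = 34^\circ$ in closed form, $m = -B'(\beta^*)/A'(\beta^*)$:

```python
import numpy as np

# Values inferred from the numeric expression in the answer (the problem
# figure is not reproduced here, so treat these as assumptions):
R, k, l0, g = 0.2, 100.0, 0.2, 9.8   # m, N/m, m, m/s^2

def heights_and_spring(alpha, beta):
    h0 = R*np.tan(alpha) + R                  # start height
    h  = R - R*np.sin(beta)                   # height at angle beta
    s0 = np.hypot(R, R*np.tan(alpha) + R)     # spring length at start
    s  = np.hypot(R - R*np.sin(beta), 2*R - R*np.cos(beta))  # at beta
    return h0, h, s0, s

# v^2 = A(beta) + B(beta)/m: only the gravity terms scale with m.
def A(alpha, beta):
    h0, h, _, _ = heights_and_spring(alpha, beta)
    return 2*g*(h0 - h)

def B(alpha, beta):
    _, _, s0, s = heights_and_spring(alpha, beta)
    return k*((s0 - l0)**2 - (s - l0)**2)

# d(v^2)/dbeta = 0 at beta* = 34 deg  =>  m = -B'(beta*) / A'(beta*)
alpha, beta_star = np.radians(45), np.radians(34)
eps = 1e-6
Ap = (A(alpha, beta_star + eps) - A(alpha, beta_star - eps)) / (2*eps)
Bp = (B(alpha, beta_star + eps) - B(alpha, beta_star - eps)) / (2*eps)
m = -Bp / Ap
v = np.sqrt(A(alpha, beta_star) + B(alpha, beta_star)/m)
print(m, v)   # ~0.143 kg and ~6.86 m/s, matching the answer's values
```

With these inferred constants the sketch reproduces both $m \approx 0.143\ \mathrm{kg}$ and, at $\alpha = 45^\circ$, $v \approx 6.86\ \mathrm{m/s}$.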
{ "domain": "engineering.stackexchange", "id": 1318, "tags": "mechanical-engineering, dynamics" }
Performance of microkernel vs monolithic kernel
Question: A microkernel implements all drivers as user-space programs and implements core features like IPC in the kernel itself. A monolithic kernel, however, implements the drivers as part of the kernel (i.e., they run in kernel mode). I have read some claims that microkernels are slower than monolithic kernels, since they need to handle message passing between the drivers in user space. Is this true? For a long time, most kernels were monolithic because the hardware was too slow to run microkernels quickly. However, there are now many microkernels and hybrid kernels, like GNU/Hurd, Mac OS X, the Windows NT line, etc. So, has anything changed about the performance of microkernels? Is this criticism of microkernels still valid today? Answer: As is always the answer (or at least the preface) to performance-related questions: know your problem domain, run comparative benchmarks, and remember what premature optimization is. First, no comprehensive benchmarking trials have compared monolithic kernels to current-generation microkernel systems that operate in an equivalent manner. So, while there may be trials that compare specific elements of those kernels, they're not going to be representative of the "big picture" that your question is looking to paint. With that being said, there are wildly divergent observations of kernel performance across microkernels; for example, the L4 microkernel family can be said to have IPC performance an order of magnitude higher than the Mach kernel. But every Apple device from this decade is running Mach, and they seem to work plenty fast, right? The moral of the story is that anybody who's deciding which kernel architecture to use needs to first decide what their ultimate goal is. Microkernel systems are (when properly implemented, of course) more secure, maintainable, and modular. However, they can be tough to architect properly, and may have performance overhead over a monolithic implementation.
A monolithic kernel will be faster, but security will be harder to implement and it will be less modular and less easy to customize. Your kernel architecture should be based on what your end goal is. (And when all else fails, try it both ways and see what happens.)
{ "domain": "cs.stackexchange", "id": 5623, "tags": "operating-systems, performance, os-kernel" }
CFO limit for correct LTE Zadoff-Chu Synchronization
Question: I am working with Zadoff-Chu sequences in a Python simulation of an LTE-like communication system and need to understand the effects of carrier frequency offset (CFO) on these sequences. Specifically, I'm looking to derive an analytical expression for the cross-correlation between the original IDFT-transformed Zadoff-Chu sequence and its version with applied CFO. The goal is to estimate the peak of the correlation as a function of the CFO and to find out at which CFO the correlation receiver will fail. With the simulation I have already figured out that the peak is shifted when the CFO is too big. I am trying to derive a mathematical expression for how big the CFO can be without getting a shifted peak. Does anyone have any idea how to find or derive such an expression? Here is a plot which shows the correlation with a CFO of 1/T (where T is the symbol time). It shows that the CFO caused a shifted peak. (Usually, without CFO, the peak is in the middle.) Here's the setup: I generate the Zadoff-Chu sequence, apply a CFO, and then transform the sequence back to the time domain using an IDFT (Inverse Discrete Fourier Transform). I need to analytically calculate the cross-correlation of the resulting sequence with the original sequence to estimate the CFO.
import numpy as np
import matplotlib.pyplot as plt

def zadoff_chu(u):
    n = np.arange(63)
    x = np.exp(-1j*np.pi*u*n*(n+1)/63)
    x[31] = 0  # this corresponds to the DC subcarrier
    return x

N = 80  # modulation similar to LTE but with N = 80 instead of 2048
fs = N*15e3
carriers = slice(N//2 - 31, N//2 + 31 + 1)  # OFDM carriers
symbol_time = N/fs

zc_freq = np.zeros(N, 'complex')
zc_freq[carriers] = zadoff_chu(25)  # the root is one of those specified in LTE (25, 29, 34)
zc_time = np.fft.fftshift(np.fft.ifft(zc_freq))

cfo = 1/symbol_time  # example CFO
zc_time_cfo = zc_time*np.exp(1j*2*np.pi*cfo*np.arange(N)/fs)

corr = np.correlate(zc_time_cfo, zc_time, 'full')
plt.plot(np.abs(corr))  # with the current CFO the peak is shifted and not at N//2

Answer: The CFO estimation range is defined by the length of your correlation sequence, such that $f_{est} = \pm 1/T$, where $T$ is the length of your sequence in time. Rewritten another way: $f_{est} = \pm F_s/N$, where $F_s$ is the sample frequency of your signal and $N$ is the number of samples used for correlation. You can plot the correlation as a function of frequency offset; it will follow a $\sin(x)/x$ shape whose nulls are at $\pm f_{est}$. To obtain this $\frac{\sin(x)}{x}$-like function (with $x = \pi f_{est} T$), fix the cross-correlation at a lag of zero and evaluate it as a function of frequency offset; the result is the desired function of the offset.
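That zero-lag curve is easy to generate numerically. The sketch below reuses the toy parameters from the question's setup and sweeps the offset; it shows that the response peaks at zero offset and is strongly attenuated around the first null near $1/T = F_s/N$ (NumPy only, no plotting):

```python
import numpy as np

def zadoff_chu(u):
    n = np.arange(63)
    x = np.exp(-1j*np.pi*u*n*(n+1)/63)
    x[31] = 0  # DC subcarrier
    return x

N = 80
fs = N*15e3
T = N/fs  # symbol time

zc_freq = np.zeros(N, complex)
zc_freq[N//2 - 31:N//2 + 32] = zadoff_chu(25)
zc_time = np.fft.fftshift(np.fft.ifft(zc_freq))

# Zero-lag cross-correlation magnitude vs. frequency offset:
# |sum_n |x_n|^2 exp(j 2 pi f n / fs)|, a sin(x)/x-like curve
# with its first nulls near +/- 1/T.
offsets = np.linspace(-2/T, 2/T, 201)
peak = np.array([abs(np.vdot(zc_time,
                             zc_time*np.exp(1j*2*np.pi*f*np.arange(N)/fs)))
                 for f in offsets])
peak /= peak.max()
```

Because the occupied band is only 62 of the 80 bins, the null at $1/T$ is not exactly zero, but the attenuation there is already severe, which is consistent with the shifted peak seen in the simulation at a CFO of $1/T$.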
{ "domain": "dsp.stackexchange", "id": 12340, "tags": "cross-correlation, synchronization, lte" }
Two atoms exist in the same coordinate position in the lattice?
Question: I am trying to simulate the properties of FeF3(H2O)3, so I downloaded its crystal structure file from the Crystallography Open Database, but it seems that in the lattice the O atom and F atom exist at the same coordinate position. The original paper, which determined the structure of FeF3(H2O)3, also shows the F atom and H2O at the same coordinates. Here is the chemical formula information of FeF3(H2O)3; it seems the H atoms are lost when forming the solid crystal: Chemical name: Fe F3 (H2 O)3; Formula: F3 Fe H6 O3; Calculated formula: F3 Fe O3. My question is: Is it possible for two atoms to exist at the same coordinates in the lattice? Answer: The paper states that four ligands of the iron are occupied by a 50:50 statistical mixture of water and fluoride (the double circles). They could have added the partial occupancy to the coordinates if the file format had allowed it. Because fluoride and water have distinct net charges, the stoichiometry is fixed, and there is probably some local order to maintain local charge neutrality as well, but not at the atom-by-atom level. The hydrogen atoms might not be resolved by the method employed here, but they certainly are part of the crystal.
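A quick occupancy tally shows how the 50:50 mixed sites reproduce the FeF3(H2O)3 stoichiometry. The sketch below assumes a six-coordinate (octahedral) iron site, which is my assumption rather than something stated above; four of the sites are the mixed F/H2O positions described in the paper, so the remaining two must then be one ordered fluoride and one ordered water:

```python
# Occupancy bookkeeping for FeF3(H2O)3 (assumes octahedral, 6-coordinate Fe).
mixed_sites = 4                   # 50:50 F / H2O sites, per the paper
f_from_mixed = mixed_sites * 0.5  # fluoride contributed by mixed sites
w_from_mixed = mixed_sites * 0.5  # water contributed by mixed sites
f_total = f_from_mixed + 1        # plus one fully occupied fluoride site
w_total = w_from_mixed + 1        # plus one fully occupied water site
assert (f_total, w_total) == (3.0, 3.0)  # matches FeF3(H2O)3
```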
{ "domain": "chemistry.stackexchange", "id": 16624, "tags": "crystal-structure, solid-state-chemistry, crystallography" }
Is anyone aware of a counter-example to the Dharwadker-Tevet Graph Isomorphism algorithm?
Question: At http://www.dharwadker.org/tevet/isomorphism/, there is a presentation of an algorithm for determining whether two graphs are isomorphic. Given a number of, shall we say, "interesting" claims by A. Dharwadker, I am not inclined to believe it. In my investigation, I find that when the algorithm reports that two graphs are not isomorphic, that answer is definitely correct. However, it is not clear that the algorithm will consistently tell you that two graphs are isomorphic when they actually are. The "proof" of their result leaves something to be desired. However, I am not aware of a counter-example. Before I start writing software to test out the algorithm, I thought I would see if anyone was already aware of a counter-example. Someone requested a synopsis of the algorithm. I will do what I can here, but to really understand it, you should visit http://www.dharwadker.org/tevet/isomorphism/. There are two phases to the algorithm: a "signature" phase and a sorting phase. The first, "signature" phase (this is my term for their process; they call it generating the "sign matrix") effectively sorts vertices into different equivalence classes. The second phase first orders vertices according to their equivalence class, and then applies a sort procedure within equivalence classes to establish an isomorphism between the two graphs. Interestingly, they do not claim to establish a canonical form for the graphs; instead, one graph is used as a kind of template for the second. The signature phase is actually quite interesting, and I would not do it justice here by attempting to paraphrase it. If you want further details, I recommend following the link to examine their signature phase. The generated "sign matrix" certainly retains all information about the original graph and then establishes a bit more information.
After collecting the signatures, they ignore the original matrix, since the signatures contain the entire information about the original matrix. Suffice it to say that the signature performs some operation that applies to each edge related to the vertex, and then they collect the multiset of elements for a vertex to establish an equivalence class for the vertex. The second phase, the sort phase, is the part that is dubious. In particular, I would expect that if their process worked, then the algorithm developed by Anna Lubiw for providing a "Doubly Lexical Ordering of Matrices" (see: http://dl.acm.org/citation.cfm?id=22189) would also work to define a canonical form for a graph. To be fair, I do not entirely understand their sort process, though I think they do a reasonable job of describing it. (I just have not worked through all the details.) In other words, I may be missing something. However, it is unclear how this process can do much more than accidentally find an isomorphism. Sure, it will probably find one with high probability, but without a guarantee. If the two graphs are non-isomorphic, the sort process will never find one, and the process correctly rejects the graphs.
Answer: For graphA.txt: 25 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 1 1 1 0 0 0 1 1 1 0 0 0 1 1 1 0 0 0 1 1 1 0 0 0 0 1 0 0 1 1 0 1 0 0 1 1 0 1 0 0 1 1 0 1 1 1 0 0 0 0 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 1 1 1 0 0 0 0 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 1 0 0 1 1 0 0 0 1 1 1 0 0 1 0 1 0 0 1 0 0 0 1 1 1 1 0 0 1 0 1 0 1 0 1 0 0 1 0 0 0 1 1 1 1 0 1 1 0 0 1 0 0 1 0 0 1 1 1 0 0 1 0 0 1 0 1 1 0 0 1 0 0 1 1 1 0 0 0 1 1 0 1 0 0 0 1 1 0 1 1 0 0 1 1 1 0 0 1 0 1 0 0 0 1 0 1 0 0 1 1 0 1 1 1 0 0 1 0 0 1 1 1 0 0 1 0 0 0 0 1 1 0 1 0 1 1 0 1 0 1 1 0 0 1 0 1 0 0 1 0 1 0 1 1 0 0 1 0 0 0 1 1 0 1 1 1 0 0 0 0 1 1 0 1 0 1 0 1 0 1 0 0 0 1 1 1 0 1 0 0 1 0 1 0 1 1 0 1 0 0 1 0 1 0 0 1 1 0 0 1 0 1 1 0 0 0 1 1 1 1 0 0 0 1 0 1 0 0 1 1 0 0 1 1 0 0 1 1 1 0 0 1 0 1 0 0 0 1 1 0 1 0 0 1 0 1 0 1 1 0 1 0 0 0 1 1 0 1 1 1 0 1 0 0 0 1 0 0 0 1 1 1 1 0 1 0 0 0 1 1 0 1 0 0 0 1 1 1 0 0 0 1 1 1 0 0 0 1 0 1 0 1 0 0 1 1 1 0 0 1 1 0 1 0 0 0 1 1 0 1 0 0 0 1 1 1 0 0 1 1 0 1 0 1 0 0 1 0 1 0 0 1 1 0 0 1 0 1 0 0 1 1 1 1 0 0 0 1 1 0 0 1 1 0 0 0 1 0 1 1 0 1 1 0 0 1 0 1 0 0 0 1 1 0 1 1 0 0 1 0 0 1 0 1 0 1 1 0 1 1 0 0 0 1 0 1 0 1 1 0 1 0 0 1 0 0 1 0 0 1 1 1 0 1 0 0 1 1 0 1 1 0 0 0 1 0 1 1 0 and graphB.txt: 25 0 0 0 1 1 0 0 0 0 1 1 1 1 0 0 1 0 1 1 0 1 1 0 1 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 1 1 0 0 1 1 1 1 0 0 1 0 1 0 1 0 0 1 1 1 0 1 0 0 0 0 1 0 1 1 1 0 0 1 1 0 1 1 1 0 1 0 1 1 1 1 1 1 0 0 1 0 0 0 0 0 0 1 0 0 0 1 1 0 1 0 0 0 0 1 1 0 0 0 1 1 1 1 0 0 1 1 0 0 1 0 0 1 0 0 0 0 1 0 0 1 1 0 0 0 1 0 1 1 0 0 1 1 1 1 1 0 1 1 1 0 1 0 0 1 1 0 0 1 1 0 0 1 1 1 0 0 1 0 0 0 0 0 1 1 0 0 0 0 1 0 1 1 1 0 1 0 1 1 0 1 1 0 0 0 1 0 0 1 1 1 0 1 1 0 1 0 0 1 1 0 0 0 0 0 0 1 0 1 1 1 1 0 0 1 1 1 1 0 1 0 0 1 0 0 0 1 1 1 0 0 0 0 1 0 1 1 0 1 1 0 1 0 1 0 0 0 0 1 0 1 1 1 0 0 0 0 1 1 1 0 1 0 0 1 0 0 0 1 0 1 0 0 0 1 1 0 0 1 1 1 0 1 1 0 1 1 0 0 0 0 0 1 1 1 0 1 0 0 1 0 1 1 0 1 0 1 1 0 0 1 0 0 0 0 1 0 1 0 1 0 0 1 1 0 1 0 1 0 1 1 0 1 1 1 0 0 1 
0 1 1 1 0 1 0 0 1 1 0 1 0 0 1 0 1 0 1 0 1 0 0 1 1 1 0 1 0 0 0 0 1 1 0 1 0 0 0 1 0 1 1 0 0 1 0 1 0 0 0 0 1 1 1 1 0 1 1 0 1 1 1 1 0 1 0 1 0 0 0 0 0 1 0 1 0 0 1 1 1 0 1 0 1 0 0 0 0 1 0 1 1 1 0 0 1 0 1 1 1 0 0 0 1 0 0 0 0 1 1 1 1 1 0 1 0 0 1 0 1 0 0 0 1 1 0 1 0 0 1 0 0 0 1 0 1 0 1 1 1 0 0 0 1 0 1 1 1 1 0 0 1 1 0 1 1 0 0 0 1 0 1 0 0 1 1 0 0 0 0 1 1 1 1 0 1 0 1 1 0 0 0 1 1 1 1 0 0 0 0 0 1 0 0 0 1 1 0 0 1 0 0 1 0 0 1 1 1 1 0 1 1 1 0 0 1 0 0 0 0 1 1 1 0 1 0 1 1 0 0 1 0 1 0 0 1 0 0 0 1 0 1 1 1 1 0 0 0 1 0 0 0 1 0 1 1 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 0 which is obtained from graphA.txt by applying the (random) permutation 22 9 24 11 15 8 5 18 13 14 2 10 23 0 3 17 4 16 6 19 7 21 12 1 20 the C++ program isororphism.cpp from Figure 6.3. A C++ program for the graph isomorphism algorithm in http://www.dharwadker.org/tevet/isomorphism/ delivers the following output: The Graph Isomorphism Algorithm by Ashay Dharwadker and John-Tagore Tevet http://www.dharwadker.org/tevet/isomorphism/ Copyright (c) 2009 Computing the Sign Matrix of Graph A... Computing the Sign Matrix of Graph B... Graph A and Graph B have the same sign frequency vectors in lexicographic order but cannot be isomorphic. See result.txt for details. So we may assume that this is a counter-example to the Dharwadker-Tevet Graph Isomorphism algorithm. As suggested by Bill Province, the problem is 4.1. Proposition. If graphs $G_A$ and $G_B$ are isomorphic, then the algorithm finds an isomorphism. Bill Province's objection is that the proof of Proposition 4.1. doesn't use any special property of the sign matrix that wouldn't also apply to the adjacency matrix. More precisely, the following step in the proof is wrong: For the induction hypothesis, assume that rows $1, ..., t$ of $A$ and $B$ have been perfectly matched by Procedure 3.4 such that the vertex labels for the rows $1, ..., t$ of $A$ are $v_1, ..., v_t$ and the vertex labels for the rows $1, ..., t$ of $B$ are $φ(v_1) = v'_1, ..., φ(v_t) = v'_t$ respectively. 
because even if the rows have been perfectly matched, it does not follow that the vertex labels match the labels given by any isomorphism $φ$. Because a hole in the correctness proof was identified, the above counter-example should be sufficient for refuting the claimed correctness of the proposed algorithm. Acknowledgments The counter-example is the first of the 8th set of graph pairs from http://funkybee.narod.ru/graphs.htm To manipulate graphs, I used and modified source code from ScrewBoxR1160.tar found at https://people.mpi-inf.mpg.de/~pascal/software/ To understand the hole in the correctness proof, András Salamon's comment about Weisfeiler-Lehman was very helpful, as were the explanations from http://users.cecs.anu.edu.au/~pascal/docs/thesis_pascal_schweitzer.pdf Motivation to use this question as an opportunity to get familiar with nauty/Traces and the practical aspects of graph isomorphism was provided by vzn. The benefit of learning how to use state-of-the-art programs for graph isomorphism made it worthwhile to sink some time into finding a counter-example (which I strongly believed to exist).
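Whatever the algorithm under test outputs, a claimed isomorphism is cheap to verify directly from the adjacency matrices. A generic sketch (illustrated on a toy graph, not on the 25-vertex matrices above), using the convention that `perm[i]` gives the label in A of the vertex placed at position i in B:

```python
import numpy as np

def is_isomorphism(A, B, perm):
    """Check that B[i][j] == A[perm[i]][perm[j]] for all i, j,
    i.e. that perm maps graph B's labelling onto graph A's."""
    A, B = np.asarray(A), np.asarray(B)
    return np.array_equal(A[np.ix_(perm, perm)], B)

# Toy example: the path 0-1-2 relabelled by perm = [2, 0, 1].
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
perm = [2, 0, 1]
B = A[np.ix_(perm, perm)]
assert is_isomorphism(A, B, perm)
assert not is_isomorphism(A, np.zeros_like(A), perm)
```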
{ "domain": "cstheory.stackexchange", "id": 3386, "tags": "graph-isomorphism" }
Why do organisms accumulate potential energy?
Question: I can understand that animals need some batteries to run. But we learn that plants serve as batteries for animals, because they accumulate the Sun's energy in the first place! You can eat them or burn them to extract the accumulated energy. This means that they consist of structures that "want" to blow up and disintegrate, falling apart to a lower energy level. But why? This is strange to me, since the most fundamental law of nature says that batteries spontaneously leak and discharge themselves. This means that the carbohydrate structures that all living organisms on Earth are made of are not stable. Photosynthesis, $\mathrm{H_2O + CO_2 + h\nu \rightarrow O_2 + CH_2O}$, is costly. It not only needs (photo) energy but uses it to produce carbohydrates, the $\mathrm{CH_2O}$-like spontaneously degrading structures! Why would you want to consist of something unstable that has a tendency to blow up as soon as possible (why not immediately, BTW?) if your purpose is survival, as for any living organism on Earth? I looked at the Gibbs free energy but could not understand anything. Is it because of the minimum-energy law that everything on Earth is already at minimum energy, so that if you want to produce some structures (fortunately, there is a Sun), you have no choice but to make them unstable? Answer: There are many things on Earth that do persist over time by being made of thermodynamically stable substances. A common example is a good solid lump of granite. These can "survive" unchanged for billions of years (much longer were it not for plate tectonics). However, we do not say they are alive. There are a much smaller number of other things that persist over time in a different way: by using energy to build up a thermodynamically unstable structure, and then later breaking down that same structure to get the energy back, which is then used to capture more energy and build more structure, etc.
Living organisms are a good example of this, although it also applies to other things, such as fire, thunderstorms and rivers. The general term for these is "dissipative structures", because they persist by downgrading, or "dissipating" energy. Of course it would be nice to have the best of both worlds, where you're made of a thermodynamically stable substrate and you're also able to grow by extracting energy from your environment. I guess a growing crystal would be an example of this in a sense - but the problem is that thermodynamically stable structures tend not to be very interesting. It would be impossible to find a thermodynamically stable structure that could extract energy from sunlight to build more of itself, because as soon as it's absorbed a photon and converted that energy into a chemical bond, it's out of thermodynamic equilibrium and therefore unstable. In general the more complex a structure is on the molecular level, the lower its entropy tends to be. If you want to be more complicated than, say, a rock, you don't really have much choice other than to be a far from equilibrium structure, which means you contain stored energy. You're right that all batteries leak. If you've decided to persist by being a living organism, that's just something you have to put up with. As a human, most of the energy you extract from the food you eat will go into maintaining your cells against the continual forces of decay that degrade them, with only a much smaller proportion going into useful things like growth and movement. A more prosaic answer is that life doesn't seem to have much of a choice about what it's made of. All life on Earth is made of water, proteins, and smaller amounts of lipids and other organic compounds. At the origins of life this probably wouldn't have been so far from equilibrium, since in a reducing atmosphere amino acids can form spontaneously. 
However, the combined result of photosynthesis and limestone formation is that the atmosphere is now highly oxidising, which means that on today's Earth protein burns rather nicely. In summary, the reasons that living organisms are made of unstable matter are that (i) if they didn't extract energy from their environment, we wouldn't consider them alive; (ii) if you extract energy from your environment, you become thermodynamically unstable; (iii) it's difficult if not impossible to have a complicated, yet organised, molecular structure without being far from equilibrium; and (iv) the decision to be made of proteins, which are particularly unstable, was made a long time ago, when they weren't anywhere near as flammable as they are today.
{ "domain": "physics.stackexchange", "id": 9632, "tags": "entropy, potential-energy" }
Is Fourier's law of conduction a consequence of the second principle?
Question: In classical thermodynamics courses, entropy is often motivated by the need to justify that heat flows from high-temperature zones to lower-temperature zones: this is seen as a consequence of maximizing entropy. However, it could also be seen as a consequence of the law of heat conduction. Looking in the non-equilibrium thermodynamics literature, I can see a connection with Onsager's reciprocal relations, but from what I can tell they are introduced as phenomenological laws. Can the law of conduction be derived from the assumptions of classical thermodynamics? Answer: One cannot "derive" any non-equilibrium rate law from thermodynamics, simply because such laws are beyond the scope of the theory. Thermodynamics simply does not deal with such phenomena and hence cannot tell you how such processes occur (in this case, heat conduction). All that thermodynamics does is relate mean values of certain properties of systems to each other (which also implies that second-order quantities like the specific heat, which depend on property fluctuations, cannot be derived within thermodynamics and are instead an external input to the theory). Instead, a so-called derivation of Fourier's law is still primarily phenomenological. It can be found in Chaikin and Lubensky's book Principles of Condensed Matter Physics, under the topic of hydrodynamics. I shall just briefly mention some of the points as given in the book. Fourier's law can be thought of as the lowest-order contribution in a gradient expansion, thereby considering only small thermal gradients. There is no reason for linear response theory to be valid over a large temperature range; if it is, that is a peculiarity of the material. Considering the slow spatial modulation in temperature as a long-wavelength thermal excitation, together with the conservation of energy, we can systematically write down the linearized equation for the energy current, i.e. heat.
This directly gives Fourier's law in the process. The associated transport coefficient is called the thermal conductivity, and its sign is fixed by the second law of thermodynamics, for stability reasons. A detailed procedure is provided in the book mentioned above.
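Schematically, the linearized argument can be written out as follows (a standard textbook sketch, not a derivation from equilibrium thermodynamics):

```latex
% Conservation of energy for the energy density \varepsilon:
\partial_t \varepsilon + \nabla \cdot \mathbf{j} = 0
% Lowest-order term in a gradient expansion of the heat current
% (this is Fourier's law, valid for small thermal gradients):
\mathbf{j} = -\kappa \nabla T + \mathcal{O}(\nabla^2 T)
% Entropy production rate; the second law requires it to be non-negative:
\sigma = \mathbf{j} \cdot \nabla \frac{1}{T}
       = \frac{\kappa\, (\nabla T)^2}{T^2} \ge 0
\quad \Longrightarrow \quad \kappa \ge 0
```

Note that the second law enters only at the last step, fixing the sign of $\kappa$; the form of the law itself is the phenomenological gradient-expansion input.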
{ "domain": "physics.stackexchange", "id": 48438, "tags": "thermodynamics, entropy, non-equilibrium" }
What is the science behind the inaccurate perception of colors?
Question: If I go into a green room (all walls are semi-transparent and green) and spend some time there (around 10+ minutes), when I come out, my eyes see white as pink. I see no (or very few) other colors because of this for a while (around 2 minutes). What is the science behind this? Why do eyes lose color perception in this case? Answer: To add the neurophysiological background to the existing answers: The effect you are describing (the pinkish appearance of white) is generally referred to as a negative afterimage, and it is a direct reflection of the color opponency in the retina. The effect is caused by adaptation of the (in this case green) cones in the retina. This adaptation occurs through photobleaching, which means that the chromophore (cis-retinal) in the visual pigment of the cones is converted to the inactive state (trans-retinal). The conversion of retinal forms the basis of the detection of photons by the cones. When you look at a green stimulus for a long time, the photopigment in the green cones is progressively converted to the inactive trans-retinal state, and the green cones stop responding to the green light stimulus. Why then does this lead to a pinkish (magenta) perception of white? This is because color vision is based on color opponency. The visual system in the retina is based on three cone types: red, green and blue cones. These cones feed into three channels: a red-green, a yellow-blue and an achromatic (luminance) channel. Note that the yellow signal is formed by adding the red and green cone signals; the achromatic channel is likewise formed by adding the red and green cone outputs, but only light-intensity information is extracted from it. The red-green and yellow-blue channels are opponent channels. For your specific example, the red-green channel is the most important. Red-green opponency means that, at the neural level, red responses cancel green responses and vice versa. Therefore, we are not able to perceive a "reddish green".
White is perceived when light activates all cones, such that red cancels green, green cancels red, and yellow cancels blue. However, if the green cone system is adapted because the green pigment has been bleached, the opponency of green is diminished and the red response will dominate. In addition, as the yellow channel is reduced as well, a bit of blue is added too, due to reduced blue suppression, explaining the pinkish (i.e., magenta) appearance you describe. A nice example of the negative afterimage is the following: stare at the colored dots on the girl's nose in the negative photo image below for 30 seconds. Then look at a white surface and start blinking. You should see a positive (normal-color) image of the girl (see here for image source). Reference on the opponency model: Mather, Foundations of Perception, chapter 12, Colour Vision.
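The opponent-channel arithmetic above can be caricatured in a few lines. Everything here is a toy model with made-up gains (not physiological values): a white stimulus drives all three cone types equally, and bleaching is modeled as a reduced gain on the green cone signal:

```python
# Toy opponent-channel model; all gains are made up for illustration only.
L_cone, M_cone, S_cone = 1.0, 1.0, 1.0   # cone responses to a white stimulus

def opponent_channels(green_gain):
    rg = L_cone - green_gain * M_cone                 # red-green channel
    yb = (L_cone + green_gain * M_cone) / 2 - S_cone  # yellow-blue channel
    return rg, yb

# Unadapted viewing: both opponent channels balance, so white looks white.
assert opponent_channels(1.0) == (0.0, 0.0)

# After green bleaching (gain < 1): the red-green channel tips toward red
# and the yellow-blue channel toward blue, i.e. white looks magenta/pink.
rg, yb = opponent_channels(0.5)
assert rg > 0 and yb < 0
```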
{ "domain": "biology.stackexchange", "id": 3232, "tags": "eyes, perception, neurophysiology, senses" }
Simplifying a boolean expression
Question: I need to prove: XY + ~XZ + YZ = XY + ~XZ. I cannot think how to do this. I have tried factorising, but I just don't know of any rule that removes one of the terms like above. I start with the LHS, naturally. All I can do is get to: Y(X+Z) + ~XZ Answer: OK, the answer is as follows; I'm unsure if there's an easier method: $XY+X'Z+YZ \\ XY+X'Z+YZ(1) \\ XY+X'Z+YZ(X+X') \\ XY+X'Z+XYZ+X'YZ \\ (XY+XYZ)+(X'Z+X'YZ) \\ XY(1+Z)+X'Z(1+Y) \\ XY(1)+X'Z(1) \\ XY+X'Z$
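The identity proved above is the well-known consensus theorem: the term $YZ$ is absorbed by $XY + X'Z$. A quick brute-force check over all eight truth assignments (a throwaway sketch):

```python
from itertools import product

def lhs(x, y, z):
    # XY + X'Z + YZ
    return (x and y) or ((not x) and z) or (y and z)

def rhs(x, y, z):
    # XY + X'Z
    return (x and y) or ((not x) and z)

# The consensus term YZ is redundant: both sides agree on all 8 inputs.
assert all(lhs(*v) == rhs(*v) for v in product([False, True], repeat=3))
```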
{ "domain": "cs.stackexchange", "id": 12714, "tags": "logic" }
Setting class method callback from within object (C++)
Question: I have some object-oriented C++ code where I'm trying to create a subscriber in the constructor of the class, and that subscriber's callback function should be one of the class's functions. I have been following the relevant tutorial, but using this instead of a variable pointing to the class object, and am stuck on an issue. When I use the code below, it compiles properly, but whenever the callback is called, it uses default values for the private class variables instead of the ones that the object should have.

class TestObject
{
private:
    std::string _objectName;
    ros::Subscriber _testSubscriber;
public:
    testCallback(const geometry_msgs::TransformStamped::ConstPtr& msg)
    {
        ROS_INFO("Callback made from object with name: -- %s --", _objectName.c_str());
    }

    TestObject(std::string objectName, ros::NodeHandle n)
    {
        _objectName = objectName;
        _testSubscriber = n.subscribe("topicName", 1, &TestObject::testCallback, this);
    }
}

static std::map<std::string, TestObject> testObjects;

int main(int argc, char **argv)
{
    ros::init(argc, argv, "test_node");
    ros::NodeHandle n;

    TestObject newObject("objname");
    testObjects["firstObject"] = newObject;

    ros::spin();
    return 0;
}

Now when a message is published on that topic, this node outputs the following:

[INFO] [Timethingy]: Callback made from object with name: -- --

i.e. it has an empty string, even though the string should have been set when the object was constructed. In summary: the tutorial linked above assumes that a subscription callback function inside a class is set from outside that class; how can I set it from inside an instance of the same class, while retaining access to all of that class instance's set variables? Originally posted by Jordan9 on ROS Answers with karma: 113 on 2015-04-16 Post score: 0 Original comments Comment by ReneGaertner on 2015-04-17: Does your object still exist at the time the callback is triggered? If you have already left the object, ROS will generate a new one.
Did you try it with a static std::string? Comment by dornhege on 2015-04-17: This seems to be a bug in your C++ code unrelated to ROS. Comment by georg l on 2015-04-17: Please post the calling code as well. Comment by Jordan9 on 2015-04-17: @ReneGaertner the object should still exist, because it exists in a global static map (see my edit to the code in the OP). @georg, I have updated the OP to show the calling code as well. Answer: Your code does not compile because of some typos and issues (no return type on testCallback, missing ; after the class, etc.). If you want to remove irrelevant parts of your code before posting it, please make sure your code still compiles and the issue persists! I have compiled your given code, commenting out all the map stuff and fixing the typos, and it worked as you expected. Passing the this pointer to a subscriber is perfectly valid! Another point where your code does not compile is testObjects["firstObject"] = newObject; This call requires a TestObject()-like, i.e. argument-less, constructor for your class, which it does not have in the above version. You could replace it by testObjects.insert( std::pair<std::string, TestObject>( "firstObject", newObject ) ); which would have the same effect but would compile with the above version, as it only requires a copy constructor (TestObject( const TestObject& rhs )), which is implicitly created by the compiler. I am not sure what you are doing in your real code, but storing the class in a std::map like this can cause the above issue. If you copy an object of a class holding a subscriber using the default copy constructor (all STL containers internally do a lot of copying), it creates a shallow copy of your object. All contained class members are copied. The subscriber, however, contains a pointer to its implementation; the pointer is copied but not the implementation. 
This means that if you copy a ros::Subscriber object you will just have two handles for the same subscription, which calls into the first callback bound to it. In your case that is the first object you created, which set up the subscriber and gave it its own this pointer. I.e., if you create a subscriber in a class giving it the this pointer and then copy the created object, the subscriber handle in the copied object will still call the callback with the this pointer of the object it was copied from. If the object it was copied from then goes out of scope, this will cause undefined behaviour or segfaults; or, if there happens to be a 0 at the point where it believes _objectName.c_str() should be, it may result in the behaviour above. In short: if you have a class that holds a subscriber and gives it "this", and you want to pass the created class object around or store it in STL containers, use boost::shared_ptr, like: std::map<std::string, boost::shared_ptr< TestObject > > testObjects; and then: boost::shared_ptr< TestObject > newObject( new TestObject("objname", n) ); testObjects["firstObject"] = newObject; Originally posted by Wolf with karma: 7555 on 2015-04-18 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by Jordan9 on 2015-04-23: Using the boost::shared_ptr was the answer. Thanks!
{ "domain": "robotics.stackexchange", "id": 21457, "tags": "ros, callback, roscpp" }
Prove that regular languages and context-free languages aren't closed under $Perm(L)$
Question: Let the operation $$Perm(L) = \{ w | \exists u \in L \text{ such that } u \text{ is a permutation of } w \}$$ Prove that both regular languages and CFLs aren't closed under $Perm(L)$. I've tried to use several well-known languages (like $\{0^n1^n\}$) and applying $Perm(L)$ and afterward manipulate them or using the pumping lemma in order to get a contradiction, but nothing worked out. Answer: Hints: For regular languages, consider $Perm((01)^*) \cap 0^* 1^*$. For context-free languages, consider $Perm(0^n 1^n 2^m 3^m) \cap 0^* 2^* 1^* 3^*$.
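The hinted intersections can be sanity-checked by brute force on short strings. A sketch for the first hint: every word with equally many 0s and 1s is a permutation of some word in $(01)^*$, so intersecting with $0^*1^*$ leaves exactly $\{0^n1^n\}$, the classic non-regular language.

```python
from itertools import product

def in_perm_01star(w):
    # w ∈ Perm((01)*)  ⇔  w has equally many 0s and 1s
    return w.count("0") == w.count("1")

def in_0star1star(w):
    # w ∈ 0*1*  ⇔  no '0' appears after a '1'
    return "10" not in w

# The intersection is exactly { 0^n 1^n }.
intersection = set()
for n in range(9):
    for bits in product("01", repeat=n):
        w = "".join(bits)
        if in_perm_01star(w) and in_0star1star(w):
            intersection.add(w)

expected = {"0" * k + "1" * k for k in range(5)}  # all lengths up to 8
assert intersection == expected
```

Since regular languages are closed under intersection with a regular language, $Perm((01)^*)$ being regular would force $\{0^n1^n\}$ to be regular as well.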
{ "domain": "cs.stackexchange", "id": 4703, "tags": "formal-languages, regular-languages, context-free, closure-properties" }
From escape velocity to gravitational acceleration
Question: If all you are given is the escape velocity from the surface of Earth can you back calculate to Earths 9.8 m/s^2 gravitational acceleration? In the end what I am looking for is a formula with Escape velocity on one side solving for 9.8 m/s^2 Answer: No, to translate escape velocity $v_{esc}$ at the surface of a planet into gravitational acceleration $g$ at the same location, you also need the radius $R$ of the planet. The equation that applies is: $2 g R = v_{esc}^2$.
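Rearranging gives $g = v_{esc}^2 / (2R)$. A quick numeric check, using rough textbook figures for Earth (the values below are assumptions, not taken from the question):

```python
import math

v_esc = 11_186.0   # m/s, Earth's surface escape speed (approximate)
R = 6.371e6        # m, mean Earth radius (approximate)

g = v_esc**2 / (2 * R)   # from 2 g R = v_esc^2
print(round(g, 2))       # close to 9.8 m/s^2
```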
{ "domain": "physics.stackexchange", "id": 18220, "tags": "homework-and-exercises, newtonian-gravity, acceleration, escape-velocity" }
Where Can I Find DFA Practice?
Question: Where can I find a web site or book where I can practice drawing DFAs from a language description like "{w | w has at least three a's and at least two b's}"? It will be important to have access to the answers so I can check myself. I need the practice. Answer: Well, you have a little problem on your hands. The problem is that constructing a DFA that recognizes some language $L$ has more than one solution. In fact, if $L$ is regular then it has infinitely many solutions. If you are searching for a book with answers, it may not be useful, because your answer can be correct even though it is different from the one in the book. There are methods to prove the correctness of a DFA using proof by induction, but it is fairly difficult. If you are having problems constructing DFAs, I wouldn't recommend trying to prove correctness until you are more experienced with this topic. My teacher told me that the best you can do to be sure that your DFA is correct (when skipping the proof of correctness) is to build it in a systematic way that is understandable to anyone who reads it.
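One practical way to check yourself without an answer key is to simulate your DFA and compare it against a brute-force membership test on all short words. A sketch for the quoted language, using the product construction (count a's capped at 3 and b's capped at 2, so the state space is finite):

```python
from itertools import product

def run_dfa(w):
    # state = (min(#a, 3), min(#b, 2)); accept when both caps are reached
    a, b = 0, 0
    for c in w:
        if c == "a":
            a = min(a + 1, 3)
        elif c == "b":
            b = min(b + 1, 2)
    return a == 3 and b == 2

def spec(w):
    # the language description, checked directly
    return w.count("a") >= 3 and w.count("b") >= 2

# exhaustively compare the DFA against the specification on all short words
for n in range(8):
    for w in ("".join(t) for t in product("ab", repeat=n)):
        assert run_dfa(w) == spec(w)
```

Agreement on all short words is not a proof of correctness, but it catches most drawing mistakes quickly.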
{ "domain": "cs.stackexchange", "id": 6619, "tags": "finite-automata" }
Question on Galilean transformation
Question: Let $a$ be a scalar, $D$ a rotation matrix and $b$ and $v$ be $1\times 3$-vectors. We had the following Galilean transformation: $(t, x(t)) \to (t + a, Dx + b + v\cdot t)$ But why is it not $(t, x(t)) \to (t + a, Dx(t+a) + b + v\cdot (t+a))$ Or are both equivalent? Answer: It seems you are confusing how the Galilean transformations act. There are three main subsets of the Galilean transformations: Translations: ${g_1(a,b);g_1(a,b)(t,x)=(t+a,x+b)}$, Rotations: ${g_2(D);g_2(D)(t,x)=(t,Dx)}$, Boosts: ${g_3(v);g_3(v)(t,x)=(t,x+vt)}$. Note that we do not write $x$ as a function of time; it is a completely independent coordinate. You are doing transformations in the following way, which is not correct, $$g_3(v)g_1(a,b)g_2(D)(t,x)=g_3(v)g_1(a,b)(t,Dx(t))=g_3(v)(t+a,Dx(t+a)+b)=(t+a,Dx(t+a)+b+vt),$$ $$g_1(a,b)g_3(v)g_2(D)(t,x)=g_1(a,b)g_3(v)(t,Dx(t))=g_1(a,b)(t,Dx(t)+vt)=(t+a,Dx(t+a)+v(t+a)+b).$$ The correct way is $$g_3(v)g_1(a,b)g_2(D)(t,x)=g_3(v)g_1(a,b)(t,Dx)=g_3(v)(t+a,Dx+b)=(t+a,Dx+b+vt),$$ $$g_1(a,b)g_3(v)g_2(D)(t,x)=g_1(a,b)g_3(v)(t,Dx)=g_1(a,b)(t,Dx+vt)=(t+a,Dx+vt+b).$$ Boosts and translations commute. The order only matters with respect to rotations.
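The commutation claims are easy to probe numerically by treating each transformation as a plain function on $(t, x)$. A minimal sketch (2D spatial part for brevity, all numbers made up; it checks that a boost commutes with a purely spatial translation, while a rotation and a boost do not commute):

```python
import math

def translate(b):          # spatial translation: (t, x) -> (t, x + b)
    return lambda t, x: (t, [xi + bi for xi, bi in zip(x, b)])

def boost(v):              # boost: (t, x) -> (t, x + v t)
    return lambda t, x: (t, [xi + vi * t for xi, vi in zip(x, v)])

def rotate(theta):         # rotation about the origin in the x-y plane
    c, s = math.cos(theta), math.sin(theta)
    return lambda t, x: (t, [c * x[0] - s * x[1], s * x[0] + c * x[1]])

def compose(f, g):         # (f ∘ g)(t, x) = f(g(t, x))
    return lambda t, x: f(*g(t, x))

def approx_equal(p, q):
    return p[0] == q[0] and all(abs(a - b) < 1e-12 for a, b in zip(p[1], q[1]))

t0, x0 = 2.0, [1.0, -3.0]
T, B, R = translate([0.5, 1.5]), boost([0.2, -0.4]), rotate(0.7)

# boost and spatial translation commute ...
assert approx_equal(compose(T, B)(t0, x0), compose(B, T)(t0, x0))
# ... but rotation and boost do not
assert not approx_equal(compose(R, B)(t0, x0), compose(B, R)(t0, x0))
```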
{ "domain": "physics.stackexchange", "id": 40013, "tags": "newtonian-mechanics, classical-mechanics, symmetry, group-theory, galilean-relativity" }
Formal language without grammar
Question: Definitions: Alphabet $Σ$: finite, non-empty set. Language: subset of $Σ^*$. Grammar: unrestricted grammar (Chomsky type 0). Language of a grammar: all words that can be produced by applying $P$ multiple times, starting from $S$. Grammars are finite objects, therefore there are only countably many of them. But there are uncountably many languages. Each grammar can only describe one language. Therefore, there are languages without grammars. Can you give an example of such a language without a grammar? I searched the internet, but strangely, I could not even find the question in the context of formal languages. Answer: I assume that by "grammar" you mean type-0 grammars. One can probably extend those to capture more languages. Type-0 grammars are equivalent to Turing machines in expressive power. So, in order to show that a given language does not have a grammar, we can proceed as follows with a suitable $L$: Assume there was a grammar for $L$. Then, we could semi-decide the word problem of $L$. That contradicts the known fact that $L$ is not recursively enumerable. Note the semi here; assuming a grammar does not give us decidability, so the halting problem is not a suitable candidate -- we need a problem/language that is not even semi-decidable/recursively enumerable. From your computability background you should know that the complement of the (special) halting language, namely $\qquad \overline{H} = \{ \langle M \rangle \mid M \text{ loops on } \langle M \rangle \}$, is not semi-decidable. Thus, by the reduction outlined above, there is no (Chomsky) grammar for $\overline{H}$.
{ "domain": "cs.stackexchange", "id": 1695, "tags": "formal-languages, formal-grammars" }
pH of a solution of acetic acid and ammonium acetate
Question: After studying ionic equilibrium, I was just making up some wild questions and I was unable to calculate the $\ce{pH}$ of this solution. Assume 1 mole of each was added to a $1\ \mathrm{L}$ solution of water. My attempt: $\ce{CH3COOH}+\ce{NH4OH<=>CH3COONH4}+\ce{H2O}$ This reaction will happen till all $\ce{CH3COOH}$ is consumed by $\ce{NH4OH}$. More $\ce{NH4OH}$ will be formed by Le Chatelier's principle. So, the answer must be the same as that of a salt of a weak acid and a weak base: $pH=\dfrac{1}{2}(pK_w+pK_a-pK_b)$. But how would we calculate it if there were 2 moles of acetic acid? Answer: You have some mistaken assumptions. $\ce{NH4OH}$ and $\ce{CH3COONH4}$ are not molecules that exist. The assumption "all $\ce{CH3COOH}$ is consumed" is incorrect. Why do you say more $\ce{NH4OH}$ will be formed? How is this possible? What can it be formed from? Review the Henderson–Hasselbalch equation.
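As a hedged illustration of the direction the answer points to: for a plain acetic acid/acetate buffer (ignoring the ammonium chemistry entirely), Henderson–Hasselbalch gives $pH = pK_a + \log_{10}([\mathrm{A^-}]/[\mathrm{HA}])$, so an equimolar mixture sits at $pH \approx pK_a \approx 4.76$, and doubling the acid lowers the pH. The pKa below is a textbook value, not derived from the question:

```python
import math

pKa_acetic = 4.76          # textbook pKa of acetic acid (assumed value)

def buffer_pH(acid_mol, base_mol, pKa):
    # Henderson–Hasselbalch: pH = pKa + log10([A-]/[HA]); same volume cancels
    return pKa + math.log10(base_mol / acid_mol)

print(buffer_pH(1.0, 1.0, pKa_acetic))   # equimolar -> pH equals pKa
print(buffer_pH(2.0, 1.0, pKa_acetic))   # doubling the acid lowers the pH
```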
{ "domain": "chemistry.stackexchange", "id": 1250, "tags": "homework, physical-chemistry, equilibrium, ph" }
Function for calculating Archimedean density functions
Question: I am not sure whether this is the right place to ask this, so please correct me if it's not. I am following this paper (e.g., Corollary 3.3) for some derived functional representations of the densities and writing code for those. The code for one of the families: (x will be a 1×d vector with entries in [0,1]; tau a scalar in [1, infty).) 'joe_density' <- function(x, tau){ require(gmp) require(Rmpfr) require(magrittr) d <- length(x) P_d <- function(x, tau, d){ alpha <- 1/tau lapply(1:d, function(k){ as.numeric(gmp::Stirling2(n = d, k = k)) * gamma(k - alpha) / gamma(1 - alpha) * x^(k-1) }) %>% do.call(sum, .) } h <- prod((1 - (1 - x)^tau)) mpfr( tau^(d-1), 16) * mpfr( prod((1 - x)^(tau - 1)), 16) / mpfr( (1 - h)^(1 - 1/tau), 16) * mpfr( P_d( h/(1-h), tau, d), 16) } Example call: # Simple case x <- pnorm(rnorm(100)) joe_density(x, tau = 2) # Longer performance time when looping, d=2 expand.grid(x = seq(-5,5,by=0.1), y = seq(-5,5,by=0.1)) %>% apply(.,1,function(z){ x <- pnorm(z) joe_density(x, tau = 2) }) In essence, the typical calculations require tons of looping and gamma/Stirling numbers, and I have noticed that the default precision is usually not enough, hence the Rmpfr usage. A one-time evaluation of the code is not a problem; however, the calculations become very lengthy when the function is called thousands of times (e.g., passing it to optim and the like). Are there simpler ways to program this? This particular function is just an example, but most of the other functions follow a similar approach, with heavy usage of Stirling/gamma numbers, lengthy products, etc. Could, for example, rewriting parts of the code in Rcpp be helpful, or is this not the type of task where we could expect gains? I have no experience in writing C++ code, so I am not sure what to expect. 
Answer: Some suggestions: split your code up more, line by line, so it is easier to profile and see bottlenecks; use profvis; use %>% less often (it has some overhead if that part of the code is called frequently); separate out function definitions. This should run a little bit faster (15s vs 27s on your example): P_d <- function(x, tau, d){ alpha <- 1/tau l <- sapply(1:d, function(k){ p1 <- Stirling2(n = d, k = k) p1 <- as.numeric(p1) p1 * gamma(k - alpha) / gamma(1 - alpha) * x^(k-1) }) sum(l) } joe_density <- function(x, tau){ d <- length(x) h <- prod((1 - (1 - x)^tau)) v1 <- tau^(d-1) v2 <- prod((1 - x)^(tau - 1)) v3 <- (1 - h)^(1 - 1/tau) v4 <- P_d( h/(1-h), tau, d) p1 <- mpfr(v1, 16) # slow p1 * v2 / v3 * v4 # slow } The slowest part is mpfr and the last line, because each value is converted to mpfr. Maybe you can deal with the precision loss in some different way?
{ "domain": "codereview.stackexchange", "id": 37902, "tags": "performance, r, rcpp" }
Calculating SNR of one N-point DFT over another M-point DFT in frequency domain
Question: Consider an N-point sine signal mixed with random noise (implemented in Python below): import numpy as np %matplotlib inline import matplotlib.pyplot as plt N = 128 # num of points in signal t = np.arange(0,N,1) sample_num = 5 freq = sample_num/N sigma = 1.5 # random noise noise = np.random.normal(0, sigma, len(t)) noise = noise-np.mean(noise) signal = np.sin(2*np.pi*freq*t)+noise plt.plot(signal) and let's see its power spectrum in dB (normalized to the maximum power): FFT_power_mag = abs(np.fft.fft(signal))**2 FFT_dB = FFT_power_mag/max(FFT_power_mag) FFT_dB = 10*np.log10(FFT_dB) plt.plot(FFT_dB[1:len(FFT_power_mag)//2]) plt.ylabel('Mag. in dB') plt.xlabel('Sample number') plt.axvline(sample_num-1, color='k', linestyle='--') plt.axhline(np.mean(FFT_dB[1:]), color='r', linestyle='--') print(np.mean(FFT_dB[1:])) $-15.367730381221198$ Defining SNR as the ratio of the signal power to the average noise power, we get ~$15$ dB. Next, consider the same setup (frequency, sigma for the noise, etc.), but an M-point (M>N) noisy signal; for simplicity I just show its power spectrum: For this signal SNR ~$18$ dB. Here (page 95, formula (3-33)) I found a definition for calculating one SNR over another as the following: $$SNR_M = SNR_N+20\cdot log_{10}\left(\sqrt{\frac{M}{N}}\right),$$ which I don't completely understand. My first question is: where does it come from that the standard deviation of the DFT bin's output noise is proportional to $\sqrt{N}$ (or $\sqrt{M}$)? The second question is: how can I derive $20\cdot log_{10}\left(\sqrt{\frac{M}{N}}\right)$ from such a definition? Answer: The power level of each bin is the power within the resolution BW of the DFT (think of each bin as the average power within the bandwidth of a bandpass filter centered on each bin, which is what the DFT is: a bank of filters). The resolution BW is the sampling rate divided by the total number of bins, and your noise power is spread evenly over the sampling rate. 
So if you increase the total number of samples, the noise power indicated by each bin will go down, exactly as you observe. From this we have $P_{noise}$, which is the total power of the white noise spread evenly from $-f_s/2$ to $f_s/2$, where $f_s$ is the sampling rate. If we increase the number of points in the DFT from $N$ to $M$ without changing the sampling rate (thus we have changed the time duration), there will be no change in $P_{noise}$, but we will change the power that we measure in each bin as follows: $$P_{N} = \frac{P_{noise}}{N}$$ $$P_{M} = \frac{P_{noise}}{M}$$ where $P_{N}$ and $P_{M}$ represent the noise power in each bin for an $N$-point DFT and an $M$-point DFT respectively (assuming white noise). From the second equation we can express $P_{noise}$ in terms of $P_{M}$ and $M$ and substitute this into the first equation as follows: $$ P_{noise} = M P_{M}$$ $$P_{N} = \frac{P_{noise}}{N} = \frac{M }{N}P_{M}$$ So for example, if we doubled the number of points from $N$ to $2N$, it's intuitive that the noise power in each bin will go down by a factor of 2; thus the power in each bin with $N$ points will be double the power in each bin with $2N$ points. To derive the equation: the noise bandwidth of each bin is $f_s/N$ for $N$ total bins, so the noise power per bin scales as $1/N$. Thus the power ratio for two different total numbers of bins is simply $M/N$. For a power ratio, to get dB we use $10Log()$ as follows: $$10Log_{10}(M/N)$$ This is the same as $$10Log_{10}(\sqrt{M/N}^2)$$ which is then $$20Log_{10}(\sqrt{M/N})$$ since $log(x^n)=n log(x)$. The formula given for SNR as $$SNR_M = SNR_N + 20 log_{10}\bigg(\sqrt{\frac{M}{N}}\bigg)$$ is referring to the achievable SNR for a narrowband signal that in either case occupies just one bin (such as a single tone). 
So this formula says that the SNR for a single tone relative to the noise in one bin of an M-point DFT would be larger by $10log_{10}(M/N)$ than the SNR for the same tone with the same total white noise power, sampled at the same rate, with a smaller N-point DFT. (I find referring to this as SNR misleading, since we are often interested in the SNR of a modulated signal that would occupy many bins, in which case the SNR is independent of $M$ and $N$.) An additional word of caution: averaging the dB values of a power measurement results in a significant error in the estimate of actual power (for Gaussian white noise the average of the dB values comes out close to 2.5 dB below the dB of the true average power, as nicely explained in this old HP app note #1303). Assuming the DFT was not windowed (or the losses due to windowing were properly accounted for, as explained in more detail here), the power estimate is best found from the mean of the sum of the squares (as a complex conjugate product) of the DFT bins, and then taking the dB of that quantity. Case in point with OP's example: the standard deviation was 1.5 and there were 128 samples, so the true power in each bin would be given as: $$10log_{10}(1.5^2/128)= -17.55 \text{ dB}$$
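The per-bin scaling is straightforward to verify numerically: with the sampling rate (and hence the total noise power) fixed, the mean per-bin noise power of an M-point DFT is N/M times that of an N-point DFT. A rough sketch (one random draw, so expect a fraction of a dB of scatter; the $|X_k|^2/n^2$ normalization makes the mean bin power come out near $\sigma^2/n$ by Parseval):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, N, M = 1.5, 128, 512
x = rng.normal(0, sigma, M)          # one long white-noise record

def mean_bin_power(x, n):
    X = np.fft.fft(x[:n])
    return np.mean(np.abs(X) ** 2) / n**2   # ~ var(x)/n for white noise

p_N = mean_bin_power(x, N)
p_M = mean_bin_power(x, M)
ratio_db = 10 * np.log10(p_N / p_M)   # expect ~ 10*log10(M/N) ≈ 6 dB
print(ratio_db)
```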
{ "domain": "dsp.stackexchange", "id": 11700, "tags": "frequency-spectrum, dft, frequency-domain, snr, signal-power" }
pass python objects in ROS messages and services
Question: Is there a way to define a ROS message or service with arbitrary python objects? Is the only option to use a string and use json.dumps / json.loads? Originally posted by waspinator on ROS Answers with karma: 122 on 2018-01-19 Post score: 0 Answer: Is there a way to define a ROS message or service with arbitrary python objects? Assuming you mean: can I have fields with types of 'arbitrary Python types', then: no. Is the only option to use a string and use json.dumps / json.loads? Don't know whether it's the only way, but it is a way. Can you clarify why you'd want to do this? It's a really heavy and invasive form of coupling, which limits reusability of your components. Originally posted by gvdhoorn with karma: 86574 on 2018-01-19 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by waspinator on 2018-01-20: I use a python ORM to read/write to a database, and I was considering having a ROS node with services for each database operation. I could a) send the python object to the service to save, b) send just the properties and have the service create a new object, or c) use the ORM directly. not sure. Comment by gvdhoorn on 2018-01-21: In any case: ROS 1 msg IDL only supports primitive types or composites of those types. See wiki/msg.
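If you do go the JSON-in-a-String route, keep the payload to plain dicts/lists rather than arbitrary objects. A minimal hedged sketch of the serialization half of the pattern (the ROS plumbing is omitted; `Detection` and its fields are made up for illustration, and the string produced by `to_msg_data` is what you would put in a std_msgs/String `data` field):

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical payload a node might want to ship inside a String message.
@dataclass
class Detection:
    label: str
    score: float

def to_msg_data(obj):
    # what the publisher would place in String.data
    return json.dumps(asdict(obj))

def from_msg_data(data):
    # what the subscriber callback would do on receipt
    return Detection(**json.loads(data))

d = Detection("cup", 0.87)
assert from_msg_data(to_msg_data(d)) == d
```

As the answer notes, this couples both ends to the Python-side schema, so a proper .msg definition with primitive fields is usually the more reusable choice.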
{ "domain": "robotics.stackexchange", "id": 29801, "tags": "rospy" }
Without using pumping lemma, can we determine if $A =\{ww \mid w \in \{0,1\}^* \}$ is non regular?
Question: Without using pumping lemma, can we prove $A =\{ww \mid w \in \{0,1\}^* \}$ is non regular? Is $L= \{w \mid w \in \{0,1\}^* \}$ non regular? I'm thinking of using concatenation to prove the former isn't regular. If L is non regular then so is LL Answer: Your idea (while interesting) is unlikely to work for two reasons: How would you express $A$ as a concatenation of a language with itself? If $L$ is non-regular, it may still be the case that $LL$ is regular. See this post: Is $A$ regular if $A^{2}$ is regular? You could prove it using the Myhill-Nerode theorem - show that there are infinitely many equivalence classes. Or you could simply use a pumping argument without the lemma. That is, follow the proof of the lemma for this particular language.
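The Myhill–Nerode route can be made concrete with a brute-force distinguishability check: the prefixes $0^i1$ are pairwise inequivalent for $A$, because appending the suffix $0^i1$ yields a member of $A$ exactly when the exponents match. A small sketch:

```python
def in_A(w):
    # membership in { ww : w in {0,1}* }
    n = len(w)
    return n % 2 == 0 and w[: n // 2] == w[n // 2 :]

# 0^i 1 and 0^j 1 (i != j) are distinguished by the suffix 0^i 1
for i in range(6):
    for j in range(6):
        p, q, z = "0" * i + "1", "0" * j + "1", "0" * i + "1"
        assert in_A(p + z)              # 0^i 1 0^i 1 ∈ A
        if i != j:
            assert not in_A(q + z)      # 0^j 1 0^i 1 ∉ A
# infinitely many pairwise-inequivalent prefixes ⇒ A is not regular
```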
{ "domain": "cs.stackexchange", "id": 2410, "tags": "formal-languages, regular-languages" }
Running VSLAM on stereo data
Question: I'm trying to run VSLAM on Stereo Data. When trying to roslaunch stereo_vslam.launch, I'm getting this error message cannot launch node of type [vslam_system/stereo_vslam_node] I suspect it has to do with files: find holidays.tree, holidays.weights, data/calonder.rtc on the line .....find vocabulary_tree)/holidays.tree $(find vocabulary_tree)/holidays.weights $(find vslam_system)/data/calonder.rtc ...... I don't find this file on the server ( holidays.tree, holidays.weights, data/calonder.rtc). Do I need them? If yes, how do I get them? Where Do I start? Thanks Originally posted by maz on ROS Answers with karma: 11 on 2011-05-22 Post score: 1 Answer: You actually need these files to launch stereo_vslam_node. They should be at /opt/ros/diamondback/stacks/vslam/vocabulary_tree folder. If not, try to upgrade the vslam package. Then, you can download the bag file mentioned in the tutorial: roscd vslam_system wget http://pr.willowgarage.com/data/vslam_system/vslam_tutorial.bag but according to my experience you have to fix the bag file before using it: rosbag fix vslam_tutorial.bag vslam_tutorial_fixed.bag and then you can play it before launching the vslam node: rosbag play --clock vslam_tutorial_fixed.bag Regards Jordi Originally posted by Jordi Pages with karma: 245 on 2011-06-23 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 5628, "tags": "ros, vslam, vslam-system" }
Why is the part of a sphere's area directly proportional to the square of its radius?
Question: Solid angle in the book is explained in this way: "...Let $dA$ be a small area element of the surface of the sphere. If the points situated on the boundary of this area be joined to $O$ (the center of the sphere), then the lines so drawn will subtend a solid angle $dw$ at $O$. Since the spherical area $dA$ is directly proportional to the square of the radius ($r^2$), the ratio $\frac{dA}{r^2}$ is a constant. This ratio is called the solid angle $dw$ subtended by the area $dA$ at the center." The sphere's area is $4πr^2$. Is that why it's said that $dA$ is directly proportional to $r^2$? Also, is it just for definition's sake that $dw$ is the ratio, or is there any underlying concept to grasp? Answer: This is really more of a mathematical question. Recall how an angle in the plane is defined: it is the length of the arc of the circle delimited by the angle, divided by the radius, so $d\phi=\frac{ds}{r}$. The same idea applies to the "3-dimensional" angle: you define it as the area of the part of the sphere's surface delimited by the angle, which you can best visualize as a cone from the center of the sphere cutting out a patch of the sphere's surface, just like two lines cut out a 2D angle in a circle. And yes, you have to divide by $r^2$, since the area increases with $r^2$.
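The radius independence is easy to check numerically: tile a sphere of radius $r$ with area elements $dA = r^2\sin\theta\,d\theta\,d\phi$, sum $dA/r^2$, and the total comes out $4\pi$ for any $r$. A sketch:

```python
import math

def total_solid_angle(r, n_theta=400, n_phi=400):
    # sum dA / r^2 over a (theta, phi) grid; dA = r^2 sin(theta) dtheta dphi
    dtheta = math.pi / n_theta
    dphi = 2 * math.pi / n_phi
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * dtheta
        dA = r**2 * math.sin(theta) * dtheta * dphi   # area element on the sphere
        total += n_phi * dA / r**2                    # same contribution at every phi
    return total

# the r^2 in dA cancels the r^2 in the denominator, leaving 4*pi for any radius
for r in (1.0, 2.5, 6.371e6):
    assert abs(total_solid_angle(r) - 4 * math.pi) < 1e-3
```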
{ "domain": "physics.stackexchange", "id": 87802, "tags": "electric-fields, dimensional-analysis, geometry" }
In pilot wave theory where is the wave?
Question: As a non specialist, for a single particle system it's easy to appreciate the concept of a pilot wave extending through all Euclidean space, guiding a particle which ends up at a location determined by the pilot wave and its initial location. For multiple particles however the wave would presumably need more dimensions to reflect the configuration space of the system. Is this correct, and if so where does the pilot wave reside? A related question may be, if quantum computers give an exponential speedup for factorization, then according to pilot wave theory where does the computation take place? Answer: The "pilot wave" is just the usual wave function of quantum mechanics. If you have $N$ spinless particles, it is a map: $\psi: \mathbb{R}^{3N} \to \mathbb{C}$. This means it lives in the 3N-dimensional configuration space of the particles. I should emphasize that this is not a specialty of pilot wave theory but just the usual framework of quantum mechanics. The special thing in pilot wave theory are the additional actual particles that actually have positions in $\mathbb{R}^3$ and thus link the abstract object $\psi$ with objects that can be thought of as moving in our physical world, as the objects that tables, chairs and people are made of. I don't totally understand the intent of your last question, but let's try: The obvious answer is that if a computer computes something, this computation takes place inside of the computer. You know, configuration space is not really a "place" in any reasonable sense. It's an abstract mathematical way of describing the things that happen when, e.g. in a quantum computer, particles move back and forth.
{ "domain": "physics.stackexchange", "id": 45936, "tags": "quantum-interpretations, bohmian-mechanics" }
Vacuum in string theory
Question: How is the vacuum described in string theory? The same as in QFT? Does string theory have quantum fluctuations of the vacuum? Are processes of creation and destruction of virtual strings considered, or is there nothing in the vacuum state? Answer: String fields exist and they have a vacuum, as in any other quantum field theory. There is an entire subject known as string field theory that studies perturbative string theory via string fields. In string field theory you can define off-shell string states, study the creation of a string, or compute amplitudes for a finite-time string scattering process. An interesting application of the second-quantized perspective, concerning perturbative string finiteness, is the UV/IR connection. I strongly recommend Ultraviolet and Infrared Divergences in Superstring Theory to gain an intuition for this connection. After the identification of UV divergences as IR effects, soft theorems are needed to demonstrate that the IR divergences can be cured. Of course, the latter is subtle in perturbative string theory (where adjectives like "soft" and "off-shell" are a little bit mysterious). It is convenient to highlight the outstanding String Field Theory as World-sheet UV Regulator; I am not aware of any other application of string field theory to ordinary perturbative string vacua of this type so beautiful. A truly lovely paper that rigorously exhibits the perturbative health of string theory. References: If you are interested in string field theory, then you will love the truly wonderful paper Four Lectures on Closed String Field Theory. For a spectacularly marvelous introduction to closed string field theory see String Field Theory – A Modern Introduction. For an overview see developments in perturbative string theory. What does second quantization mean in the context of string theory?
{ "domain": "physics.stackexchange", "id": 74913, "tags": "quantum-field-theory, string-theory, vacuum, virtual-particles" }
Can someone explain what is the force the ball will exert?
Question: If a ball is falling under free fall, then the force exerted by the ball on the ground would be $mg$. But that's not the case in real life; the ball hits with more force. Yet when I draw the free-body diagram there is only one force acting on it, $mg$. Can someone explain what force the ball will exert? Answer: Interesting question. The average force the ball exerts on the ground during impact is $$ F_c = \frac{\Delta p}{t_i} = \frac{mv}{t_i} $$ because the end speed is zero (here $t_i$ is the duration of the impact). Now let's assume the ball started to fall from some altitude, so it reached its "contact" speed $v$ according to: $$ v = v_0 + a \space t = g \space t_f $$ Plugging this into the formula above, we get: $$ F_c = \frac{mg \space t_f}{t_i} = W \frac{t_f}{t_i} $$ So the force the ball exerts on the ground is its weight scaled by the ratio of falling time to interaction time with the ground (neglecting the weight itself during the brief contact). By Newton's third law the ground exerts a normal force of the same magnitude on the ball. That's why eggs break when falling down :-) Only in the case when $ t_f \approx t_i $, i.e. when the falling time approaches the interaction time, does the contact force equal the weight. This principle is exploited in various kinds of trampolines and in the life nets used by firefighters.
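Plugging in numbers makes the point vivid. The drop height and impact duration below are assumed for illustration (a drop from 1 m onto a stiff surface, with a millisecond-scale impact):

```python
import math

g = 9.81      # m/s^2
h = 1.0       # drop height in m (assumed)
t_i = 1e-3    # impact duration in s (assumed; stiff impacts last ~ms or less)

t_f = math.sqrt(2 * h / g)   # fall time from rest
ratio = t_f / t_i            # F_c / W = t_f / t_i from the answer's formula
print(ratio)                 # hundreds: the contact force dwarfs the weight
```

A soft landing (trampoline, life net) stretches $t_i$ toward $t_f$, driving this ratio toward 1.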
{ "domain": "physics.stackexchange", "id": 62002, "tags": "newtonian-mechanics, forces, collision, projectile, free-body-diagram" }
Is there some other measurement that describe the ability to absorb some spicified range of frequencies of sound?
Question: There are a variety of materials that can be used to absorb sound for soundproofing. The absorption coefficient is used to describe a material's ability to absorb sound. Is the following guess true: some materials are good at absorbing one specified range of sound frequencies (such as a little girl screaming), while other materials are good at absorbing some other specified range of frequencies, such as a car's engine? If yes, is there some other measurement that describes the ability to absorb a specified range of sound frequencies, like a little girl screaming? Answer: The absorption coefficient is usually specified by the manufacturer over some frequency range, in dB of loss. High frequencies are commonly absorbed by light, fluffy materials with a lot of air in them. Low frequencies are commonly absorbed by heavy materials with little or no elasticity.
{ "domain": "physics.stackexchange", "id": 64646, "tags": "acoustics" }
Why can no task have a utilization rate greater than one?
Question: Let $C_i$ be the execution time for task $i$, $T_i$ be the task period, and the utilization rate $U = \frac{C_i}{T_i}$. Then $U$ must be less than or equal to $1$ for the task to be schedulable. Proof: Let $\hat T$ be the lcm of the task periods $T_1,...,T_n$, i.e. $\hat T = \prod_i T_i$. Define $\hat L_i = \hat T/T_i$, the number of times task $i$ is run. Then the total execution time is $\sum_i C_i \hat L_i = \sum_i \frac{C_i \hat T} {T_i}$. Suppose the utilization rate $U >1$; then $\sum_i \frac{C_i \hat T} {T_i} > \hat T$, which is impossible. End of proof. I am confused about several things in the proof. First, what is the physical quantity representing the product or lcm of all the task periods $T_i$? It would make more sense if it were the sum of all the $T_i$, which represents the total task period. Second, $\sum_i C_i \hat L_i$ represents the total time spent on execution; I do not get how, if this time is larger than $\hat T$, the tasks would not be schedulable. This goes back to my misunderstanding of what $\hat T$ is. Can anyone help? Answer: It appears that you have a periodic scheduling system on a single processor where task $i$ must run for $C_i$ seconds out of every $T_i$. If this is the case, then the requirement that $\tfrac{C_i}{T_i}\le1$ is just saying that $C_i\le T_i$, i.e., you're not saying something like "Task 3 must run for 15 seconds out of every 10 seconds", which is physically impossible. It looks like there's actually a mistake in the statement of the theorem. I think it should say that the utilization of task $i$ is given by $U_i=\tfrac{C_i}{T_i}$, then define the total utilization to be $U = \sum_{i=1}^n U_i$ and then require the stronger condition that this $U$ satisfies $U\leq 1$. That is, you don't just require each task individually to use at most 100% of your resources: you require the whole ensemble to use at most 100%. 
You can think of it this way: if I'm supposed to spend $p_1\%$ of my time on task 1, ..., and $p_n\%$ of my time on task $n$, I'd better not be committing more than 100% of my time! The physical significance of defining $\hat{T} = \mathrm{lcm}(T_1, \dots, T_n)$ is that it's the shortest period of time in which every task is supposed to run an integer number of times. It's not essential that you consider the shortest such period: the proof would work fine if you used any period in which every task needs to run an integer number of times. Perhaps it would be conceptually easier to set $\hat{T} = \prod_{i=1}^n T_i$ instead.
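The hyperperiod argument can be sketched directly: over any interval $\hat T$ in which every task runs a whole number of times, the total demand is $\sum_i C_i \cdot (\hat T/T_i) = U\hat T$, which fits into $\hat T$ only if $U \le 1$. A sketch with a made-up task set:

```python
from math import lcm

def total_utilization(tasks):
    # tasks: list of (C_i, T_i) pairs
    return sum(c / t for c, t in tasks)

def demand_fits_hyperperiod(tasks):
    H = lcm(*(t for _, t in tasks))             # shortest interval with whole runs
    demand = sum(c * (H // t) for c, t in tasks)  # total execution time needed in H
    return demand <= H

ok = [(1, 4), (2, 6), (1, 8)]   # U = 1/4 + 2/6 + 1/8 ≈ 0.71
bad = [(3, 4), (2, 6)]          # U = 3/4 + 2/6 > 1

assert demand_fits_hyperperiod(ok) and total_utilization(ok) <= 1
assert not demand_fits_hyperperiod(bad) and total_utilization(bad) > 1
```

Note this is only the necessary condition the proof establishes; whether $U \le 1$ is also sufficient depends on the scheduling policy.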
{ "domain": "cs.stackexchange", "id": 4329, "tags": "optimization, scheduling" }
Extracting items from comma/semicolon-delimited strings, discarding parts after a hyphen
Question: New Scala dev here. Is there a more idiomatic or efficient way to accomplish this in Scala? Given a list of the following strings, I need to obtain the unique 'main parts' once any parts including and after a "-" have been removed. The output should be a list of sorted strings. Also note both ";" and "," have been used as separators.

Input:

val data: List[String] = List(
  "W22; O21; B112-WX00BK; G211; B112-WI00BK; G11",
  "W22; K122l; B112-WI00BK; O21; B112-WX00BK; G211",
  "W21, V32",
  "W21, N722",
  "S133-VU3150; S133-SU3150; R22-VK3150; R123-VH3"
)

Desired Output:

List(
  B112 G11 G211 O21 W22,
  B112 G211 K122l O21 W22,
  V32 W21,
  N722 W21,
  R123 R22 S133
)

My solution:

def process(input: String): String = input.split(" ").map(word => {
  if (word contains "-") word.take(word.indexOf("-"))
  else word
  .replace(";", "").replace(",","")}).toSet.toList.sorted.mkString(" ")

val result: List[String] = data.map(process(_))

Answer: First of all, the formatting could be a bit better. It's odd to have the first .replace at the same indentation level as else, and you can call map using curly braces if you want a function with multiple statements/expressions instead of making the function result a block. You don't actually need curly braces, but I'd say they're used for multi-line functions more often than are parentheses.

def process(input: String): String =
  input
    .split(" ")
    .map { word =>
      if (word contains "-") word.take(word.indexOf("-"))
      else word
        .replace(";", "")
        .replace(",", "")
    }
    .toSet
    .toList
    .sorted
    .mkString(" ")

Using .toSet.toList is unnecessary - you can use .distinct instead. It might be a good idea to make a View after splitting the string to avoid making too many intermediate collections. You'll have to convert it to a Seq before sorting it, though, since sorted isn't defined for views. You can also replace ; and , using the regex [;,]. You'll need to use replaceAll instead of replace to use regex, though.
However, why not include these two when you're splitting? If you do that, each word would only have to be mapped by word => word.takeWhile(_ != '-'), or _.takeWhile(_ != '-'). Doing that, we get the following:

def process(input: String): String =
  input
    .split("[;,] ")
    .view
    .map(_.takeWhile(_ != '-'))
    .distinct
    .toSeq
    .sorted
    .mkString(" ")

This feels like it should do fewer operations than your original, although I'm not sure if _.takeWhile(_ != '-') will be faster than the regex from vvotan's answer. If your data really is that small, though, efficiency shouldn't matter much. Could you provide some more information on what you're trying to do? Your description doesn't help much, and names such as data, result, input, and process are very generic and won't help readers.
{ "domain": "codereview.stackexchange", "id": 41831, "tags": "beginner, strings, parsing, scala" }
Is there a natural problem in quasi-polynomial time, but not in polynomial time?
Question: László Babai recently proved that the Graph Isomorphism problem is in quasipolynomial time. See also his talk at the University of Chicago, notes from the talks by Jeremy Kun, and GLL posts 1, 2, and 3. According to Ladner’s theorem, if $P \neq NP$, then $NPI$ is not empty, i.e. $NP$ contains problems that are neither in $P$ nor $NP$-complete. However, the language constructed by Ladner is artificial and not a natural problem. No natural problem is known to be in $NPI$, even conditionally under $P \neq NP$. But some problems are believed to be good candidates for $NPI$, such as factoring integers and GI. With Babai's result, we may think that there might even be a polynomial time algorithm for GI. Many experts believe that $NP \not\subseteq QP = DTIME(n^{poly\log n})$. There are some problems for which we know quasi-polynomial time algorithms, but no polynomial time algorithm is known. Such problems arise in approximation algorithms; a famous example is the directed Steiner tree problem, for which there is a quasi-polynomial time approximation algorithm achieving an approximation ratio of $O(\log^3 n)$ ($n$ being the number of vertices). However, showing the existence of such a polynomial time algorithm is an open problem. My question: Do we know any natural problems which are in $QP$ but not in $P$? Answer: There has, in fact, been quite a lot of recent work on proving quasi-polynomial running time lower bounds for computational problems, mostly based on the exponential time hypothesis. Here are some results for problems that I consider quite natural (all results below are conditional on ETH): Aaronson, Impagliazzo and Moshkovitz [1] show a quasi-polynomial time lower bound for dense constraint satisfaction problems (CSPs). Note that the way CSP is defined in this paper allows the domain to be polynomially large, as the case where the domain is small is known to have a PTAS.
Braverman, Ko and Weinstein [2] prove a quasi-polynomial time lower bound for finding an $\epsilon$-best $\epsilon$-approximate Nash equilibrium, which matches Lipton et al.'s algorithm [3]. Braverman, Ko, Rubinstein and Weinstein [4] show a quasi-polynomial time lower bound for approximating densest $k$-subgraph with perfect completeness (i.e. given a graph that contains a $k$-clique, find a subgraph of size $k$ that is $(1 - \epsilon)$-dense for some small constant $\epsilon$). Again, there is a quasi-polynomial time algorithm for the problem (Feige and Seltser [5]).

References

[1] Scott Aaronson, Russell Impagliazzo, and Dana Moshkovitz. AM with multiple Merlins. In Computational Complexity (CCC), 2014 IEEE 29th Conference on, pages 44–55, June 2014.
[2] Mark Braverman, Young Kun Ko, and Omri Weinstein. Approximating the best Nash equilibrium in $n^{o(\log n)}$-time breaks the exponential time hypothesis. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’15, pages 970–982. SIAM, 2015.
[3] Richard J. Lipton, Evangelos Markakis, and Aranyak Mehta. Playing large games using simple strategies. In Proceedings of the 4th ACM Conference on Electronic Commerce, EC ’03, pages 36–41, New York, NY, USA, 2003. ACM.
[4] Mark Braverman, Young Kun-Ko, Aviad Rubinstein, and Omri Weinstein. ETH hardness for Densest-$k$-Subgraph with perfect completeness. Electronic Colloquium on Computational Complexity (ECCC), 22:74, 2015.
[5] U. Feige and M. Seltser. On the densest $k$-subgraph problems. Technical report, 1997.
{ "domain": "cstheory.stackexchange", "id": 5098, "tags": "cc.complexity-theory, complexity-classes, np-complete, polynomial-time, np-intermediate" }
Fully random PN sequence
Question: Can a CDMA PN sequence be fully random, as in encryption-grade randomness, or must it be deterministic? I have tried the first bytes of SHA-256 hashes as PN sequences, but somehow they don't coexist too well. I know a deterministic code can be generated on the fly so no storage is needed, but let's say I have 500 fully random PN sequences stored on a device. Is that good, or do you really need deterministic codes like the Gold codes? Answer: A typical type of "pseudo-random" sequence used in CDMA is the "m-sequence", which also provides good properties in wireless channels with multipath. Have a look at this post for more information about these: http://www.behindthesciences.com/towards-5g/directsequencespreadspectrumandrakereceiver Hope this helps :)
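For illustration, m-sequences are easy to generate with a linear-feedback shift register. The sketch below is my own, not from the answer; the default tap positions assume the primitive polynomial $x^4 + x^3 + 1$, which gives the maximal period $2^4 - 1 = 15$:

```python
def lfsr_msequence(length, taps=(3, 2), nbits=4):
    """Fibonacci LFSR over GF(2); taps are 0-based indices into the state.

    With a primitive feedback polynomial (here x^4 + x^3 + 1), the output
    repeats with maximal period 2**nbits - 1.
    """
    state = [1] * nbits
    out = []
    for _ in range(length):
        out.append(state[-1])            # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t]               # XOR the tapped stages
        state = [fb] + state[:-1]        # shift right, feed back on the left
    return out
```

One period of this register contains eight 1s and seven 0s, the near-balance property that (together with the two-valued autocorrelation) makes m-sequences attractive as spreading codes.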
{ "domain": "dsp.stackexchange", "id": 7903, "tags": "digital-communications, radio" }
Dynamic convolution vs Volterra series
Question: I'm trying to understand how a dynamic convolution model relates to something like a Volterra series, and what kinds of effects the latter can capture that the former can't (and vice versa). By dynamic convolution, I mean a system where the impulse response varies with the amplitude of an impulse. I care only about dynamic convolutions that are well-behaved and "smooth" in some sense, i.e. where the impulse response varies continuously or smoothly as a function of the amplitude. I have this intuition that a Volterra series with nonzero coefficients only on the diagonals are somehow related to dynamic convolutions. In other words, a series that reduces to the following: $y = h_0 + h_1 \ast x + h_2 \ast x^2 + h_3 \ast x^3 + ...$ where there are never any terms like $x[t] \cdot x[t-1]$ and so on. I'm curious if I'm on the right track with that, and in general, whether any dynamic convolution can be transformed into a Volterra series in that form or vice versa. Answer: A flavor of dynamic convolution (is that a trademark by the way?) has a different impulse response $g_i$ associated with each range of instantaneous input. A number of ranges can be defined by fuzzy membership functions $f_i(x)$ (Fig. 1). Figure 1. Amplitude ranges that each use a different convolution kernel. Omitting time indices, the input $x$ and output $y$ are related by: $$y = \sum_{i=1}^Ng_i\ast \left(f_i\small(x\small)\times x\right).$$ In the notation used, convolution has higher precedence than summation. Reading the signal flow from the above, $f_i(x)\times x$ is a memoryless waveshaper. It is followed by regular convolution with $g_i$, and finally, the outputs of the convolutions are summed. For a practical, limited absolute input amplitude, well-behaved waveshapers, such as the ones needed here, can be approximated to arbitrary precision by polynomials. 
With the appropriate polynomial series with coefficients $a_{i,j}$: $$ = \sum_{i=1}^N g_i\ast\left(\sum_{j=0}^\infty a_{i,j}\times x^j\right),$$ and reorganizing: $$= \sum_{j=0}^\infty\left(\sum_{i=1}^N a_{i,j}\times g_i\right) \ast x^j.$$ Recognizing $h_j = \sum_{i=1}^N a_{i,j}\times g_i$ we have your diagonal Volterra series: $$= \sum_{j=0}^\infty h_j \ast x^j.$$ Unfortunately, polynomials are not very good at approximating functions of compact support, like those needed here (at least before combining the ranges), so for a reasonable level of error a diagonal Volterra series approximation of dynamic convolution may require taking quite large powers of the input signal, which is computationally expensive and can lead to numerical problems.
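As a sanity check on the first equation above, here is a minimal NumPy sketch (my own illustration, not from the original answer; for simplicity all kernels are assumed to share one length). With a single range whose membership function is identically 1, it collapses to ordinary convolution:

```python
import numpy as np

def dynamic_convolution(x, kernels, memberships):
    """y = sum_i g_i * (f_i(x) . x): waveshape by f_i, convolve with g_i, sum.

    kernels: list of 1-D arrays (all the same length here, for simplicity)
    memberships: list of callables f_i mapping a signal to per-sample weights
    """
    y = np.zeros(len(x) + len(kernels[0]) - 1)
    for g, f in zip(kernels, memberships):
        y += np.convolve(g, f(x) * x)  # full convolution, length len(x)+len(g)-1
    return y
```

With one kernel g and f ≡ 1, dynamic_convolution(x, [g], [f]) is just np.convolve(g, x).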
{ "domain": "dsp.stackexchange", "id": 4266, "tags": "signal-analysis, convolution, non-linear" }
Construct a grammar over {$a,b$} whose language is {${a^mb^ia^n | i = m+n}$}
Question: I am a little confused on how to approach this problem. I am unsure how to construct both parts of the language using a context-free grammar. This is as far as I got, but this will end up producing a's inside the group of b's and a's. I'm not sure how to add the second group of a's to the right of the b's.

$S \rightarrow aSb \ | \ A$
$A \rightarrow bAa \ | \ λ$

This language should produce strings like $abba, aabbba, aaaabbbbbbaa$.

Answer: You must handle the aXb and bYa sequences separately.

$S \rightarrow XY \ | \ \epsilon$
$X \rightarrow aXb \ | \ \epsilon$
$Y \rightarrow bYa \ | \ \epsilon$
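To see why the decomposition works: X generates $a^mb^m$ and Y generates $b^na^n$, so a string is in the language exactly when it has the shape a*b*a* with as many b's as a's in total. A small membership checker (my own sketch, not part of the answer) makes this concrete:

```python
import re

def in_language(s):
    """Accept a^m b^i a^n with i == m + n (the regex shape-check is mine)."""
    m = re.fullmatch(r"(a*)(b*)(a*)", s)
    return m is not None and len(m.group(2)) == len(m.group(1)) + len(m.group(3))
```

This accepts the examples from the question (abba, aabbba, aaaabbbbbbaa) and the empty string, and rejects strings where the b-count does not match the total a-count.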
{ "domain": "cs.stackexchange", "id": 11291, "tags": "context-free" }
General approach to extract key text from sentence (nlp)
Question: Given a sentence like:

Complimentary gym access for two for the length of stay ($12 value per person per day)

What general approach can I take to identify the word gym or gym access?

Answer: A shallow Natural Language Processing technique can be used to extract concepts from a sentence.

-------------------------------------------

Shallow NLP technique steps:

1. Convert the sentence to lowercase
2. Remove stopwords (these are common words found in a language; words like for, very, and, of, are, etc., are common stop words)
3. Extract n-grams, i.e., contiguous sequences of n items from a given sequence of text (simply by increasing n, the model can store more context)
4. Assign a syntactic label (noun, verb, etc.)
5. Extract knowledge from the text through semantic/syntactic analysis, i.e., try to retain words that hold higher weight in a sentence, like nouns/verbs

-------------------------------------------

Let's examine the results of applying the above steps to your given sentence: Complimentary gym access for two for the length of stay ($12 value per person per day).

1-gram Results: gym, access, length, stay, value, person, day

Summary of steps 1 through 4 of shallow NLP:

1-gram          PoS_Tag   Stopword (Yes/No)?
PoS Tag Description
-------------------------------------------------------------------
Complimentary   NNP         Proper noun, singular
gym             NN          Noun, singular or mass
access          NN          Noun, singular or mass
for             IN    Yes   Preposition or subordinating conjunction
two             CD          Cardinal number
for             IN    Yes   Preposition or subordinating conjunction
the             DT    Yes   Determiner
length          NN          Noun, singular or mass
of              IN    Yes   Preposition or subordinating conjunction
stay            NN          Noun, singular or mass
($12            CD          Cardinal number
value           NN          Noun, singular or mass
per             IN          Preposition or subordinating conjunction
person          NN          Noun, singular or mass
per             IN          Preposition or subordinating conjunction
day)            NN          Noun, singular or mass

Step 4: Retaining only the nouns/verbs, we end up with: gym, access, length, stay, value, person, day

Let's increase n to store more context and remove stopwords.

2-gram Results: complimentary gym, gym access, length stay, stay value

Summary of steps 1 through 4 of shallow NLP:

2-gram              Pos Tag
---------------------------
access two          NN CD
complimentary gym   NNP NN
gym access          NN NN
length stay         NN NN
per day             IN NN
per person          IN NN
person per          NN IN
stay value          NN NN
two length          CD NN
value per           NN IN

Step 5: Retaining only the noun/verb combinations, we end up with: complimentary gym, gym access, length stay, stay value

3-gram Results: complimentary gym access, length stay value, person per day

Summary of steps 1 through 4 of shallow NLP:

3-gram                     Pos Tag
-------------------------------------
access two length          NN CD NN
complimentary gym access   NNP NN NN
gym access two             NN NN CD
length stay value          NN NN NN
per person per             IN NN IN
person per day             NN IN NN
stay value per             NN NN IN
two length stay            CD NN NN
value per person           NN IN NN

Step 5: Retaining only the noun/verb combinations, we end up with: complimentary gym access, length stay value, person per day

Things to remember:

- Refer to the Penn treebank to understand each PoS tag's description
- Depending on your data and the business context, you can decide the n value used to extract n-grams from a sentence
- Adding domain-specific stop words would increase the quality of concept/theme extraction
- A deep NLP technique will give better results, i.e., rather than n-grams, detect relationships within the sentences and represent/express them as complex constructions to retain the context. For additional info, see this

Tools: You can consider using OpenNLP / StanfordNLP for Part of Speech tagging. Most programming languages have supporting libraries for OpenNLP/StanfordNLP. You can choose the language based on your comfort. Below is the sample R code I used for PoS tagging.

Sample R code:

Sys.setenv(JAVA_HOME='C:\\Program Files\\Java\\jre7') # for 32-bit version
library(rJava)
require("openNLP")
require("NLP")

s <- paste("Complimentary gym access for two for the length of stay $12 value per person per day")

tagPOS <- function(x, ...) {
  s <- as.String(x)
  word_token_annotator <- Maxent_Word_Token_Annotator()
  a2 <- Annotation(1L, "sentence", 1L, nchar(s))
  a2 <- annotate(s, word_token_annotator, a2)
  a3 <- annotate(s, Maxent_POS_Tag_Annotator(), a2)
  a3w <- a3[a3$type == "word"]
  POStags <- unlist(lapply(a3w$features, `[[`, "POS"))
  POStagged <- paste(sprintf("%s/%s", s[a3w], POStags), collapse = " ")
  list(POStagged = POStagged, POStags = POStags)
}

tagged_str <- tagPOS(s)
tagged_str
#$POStagged
#[1] "Complimentary/NNP gym/NN access/NN for/IN two/CD for/IN the/DT length/NN of/IN stay/NN $/$ 12/CD value/NN per/IN person/NN per/IN day/NN"
#
#$POStags
#[1] "NNP" "NN"  "NN"  "IN"  "CD"  "IN"  "DT"  "NN"  "IN"  "NN"  "$"   "CD"
#[13] "NN"  "IN"  "NN"  "IN"  "NN"

Additional readings on Shallow & Deep NLP: Integrating Shallow and Deep NLP for Information Extraction
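Steps 1 through 3 of the pipeline (lowercase, stopword removal, n-gram extraction) can also be sketched in plain Python. This is my own simplified illustration, without the PoS-tagging step, and the stopword list is only an example:

```python
# Example stopword list; a real one would be larger and domain-tuned.
STOPWORDS = {"for", "the", "of", "a", "an", "and", "are", "very"}

def ngrams(sentence, n):
    """Lowercase, drop stopwords and stray punctuation, return n-grams."""
    words = [w.strip("()$").lower() for w in sentence.split()
             if w.lower() not in STOPWORDS]
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
```

Applied to the example sentence, the 2-grams include "complimentary gym" and "gym access", matching the results above (minus the PoS filtering).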
{ "domain": "datascience.stackexchange", "id": 245, "tags": "machine-learning, nlp, text-mining, data-cleaning" }
Discriminate between bell states
Question: Is it possible to discriminate between 2 of the four Bell states (phi, psi) and does it interfere with the entanglement? Answer: The four Bell states are orthogonal, so it is possible to distinguish all of them deterministically. Of course, this requires joint measurements on both qubits.
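The orthogonality claim is easy to verify numerically; the small NumPy check below is my own addition, not part of the original answer:

```python
import numpy as np

# The four Bell states written in the computational basis |00>, |01>, |10>, |11>.
s = 1 / np.sqrt(2)
bell = np.array([
    [s, 0, 0,  s],   # |Phi+> = (|00> + |11>)/sqrt(2)
    [s, 0, 0, -s],   # |Phi-> = (|00> - |11>)/sqrt(2)
    [0, s,  s, 0],   # |Psi+> = (|01> + |10>)/sqrt(2)
    [0, s, -s, 0],   # |Psi-> = (|01> - |10>)/sqrt(2)
])

# Amplitudes are real, so the Gram matrix of pairwise inner products is B B^T.
gram = bell @ bell.T
```

The Gram matrix is the 4x4 identity: each state has unit norm and is orthogonal to the other three, which is what makes a deterministic (joint) measurement possible.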
{ "domain": "physics.stackexchange", "id": 25784, "tags": "quantum-mechanics, quantum-entanglement" }
Why prefer a discrete-time system because of a practical implementation instead of continuous one
Question: I previously asked this question on the math stack, but I think it fits better here. I am studying the theory of discrete-time systems, comparing them with continuous ones in the control theory field. In particular, I have focused on the advantages and disadvantages of sometimes preferring the former over the latter. Some papers say that nowadays most control systems are implemented through digital devices (DSPs, etc.), so it is sometimes preferable to design a discrete-time controller directly rather than a continuous one; indeed, engineers tend to prefer this choice. I have also read that designing a continuous-time controller is not entirely avoided; indeed, many mathematicians prefer that choice. In that case, the idea is that a continuous-time system is approximately a discrete-time system with an arbitrarily small sampling time. My question is: what are the disadvantages of choosing a too-small sampling time, above all for the control effort? Note: the control effort is the amount of energy or power necessary for the controller to perform its duty. Answer: I see it as a sort of "degree of freedom" for information. You can reconstruct a signal without knowing its full set of points over time; in the frequency domain it is sufficient to sample at or above the Nyquist rate. But in discrete models you need even less information: only the information sufficient to calculate the control reaction. On the other hand, the main limitation of digital technologies is that they have finite information capacity, both when storing it and when processing it. So the preference is to use the least information you can work with that is still sufficient for the job. Actually, when you simulate a continuous system in a program like MATLAB, it really works with a discrete system, with a tolerance small enough that you don't notice. No computer (a discrete system) could handle the information in a continuous model, since it is actually infinite (but redundant).
Also, a discrete formulation lets you use finite matrix descriptions and numeric approximations instead of analytic solutions (sometimes handling convolutions in continuous time is not easy, while in discrete time it is an ordinary matrix multiplication), and there are a lot of linear algebra tools for this sort of formulation. So the main reason is the need to get good results within the limitations of actual devices and their information limits.

what are the disadvantages of choosing a too small sampling time above all on the control effort?

Choosing a too-small sampling time means using additional resources in physical devices for "nothing": using more information than is sufficient is a waste of energy. There are exceptions, when your system has external perturbations or you wish to use the "redundant information" to make an optimal guess, but in general it is not needed.
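One concrete disadvantage of a very small sampling time is numerical: the discrete-time poles crowd toward z = 1, so finite word length can wash out the dynamics. A sketch of my own for the first-order plant dx/dt = -a*x under zero-order hold:

```python
import math

def discrete_pole(a, T):
    """ZOH discretization of dx/dt = -a*x gives x[k+1] = exp(-a*T) * x[k];
    the discrete pole is exp(-a*T)."""
    return math.exp(-a * T)
```

For a = 1, a one-second sample time puts the pole comfortably inside the unit circle (about 0.37), while T = 1 microsecond gives a pole of about 0.999999: with finite coefficient word length such a system becomes hard to distinguish from a pure integrator.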
{ "domain": "engineering.stackexchange", "id": 4218, "tags": "control-engineering, control-theory" }
How to compute normalization of one-particle states for Klein-Gordon field quantization
Question: I am reading through Dr. Schwartz's book on quantum field theory; in section 2.3.1, he writes the following relation: $$\langle\mathbf{p}|\mathbf{k}\rangle=2\omega_p(2\pi)^3\delta^3(\mathbf{p}-\mathbf{k})$$ where $\omega_p=|\mathbf{p}|$. Do note his conventions: $[\hat{a}_p,\hat{a}_k^\dagger]=(2\pi)^3\delta(\mathbf{p}-\mathbf{k})$ and $\sqrt{2\omega_p}\hat{a}_p^\dagger|0\rangle=|\mathbf{p}\rangle$. However, when I do the (fairly trivial) calculation myself, I get $$\begin{align} \langle\mathbf{p}|\mathbf{k}\rangle&=2\sqrt{\omega_p\omega_k}\langle0|\hat{a}_p\hat{a}_k^\dagger|0\rangle \\ &=2\sqrt{\omega_p\omega_k}\left((2\pi)^3\delta(\mathbf{p}-\mathbf{k})\langle0|0\rangle+\langle 0|\hat{a}_k^\dagger\hat{a}_p|0\rangle\right) \\ &=2\sqrt{\omega_k\omega_p}(2\pi)^3\delta(\mathbf{p}-\mathbf{k}) \end{align}$$ I fail to see how $\sqrt{\omega_p\omega_k}=\omega_p$! Do I have some fundamental misunderstanding of the situation? There doesn't seem to be anything wrong with the calculation. Any help is much appreciated. Answer: Since $$\omega_p=\sqrt{\mathbf{p}^2+m^2}$$ and $$f(x)\delta(x-y)=f(y)\delta(x-y) ,$$ in the sense of integration, you have $$\sqrt{\omega_p}\delta(\mathbf{p}-\mathbf{k})=\sqrt{\omega_k}\delta(\mathbf{p}-\mathbf{k}) .$$ As pointed out by John Dumancic and kaylimekay in the comments below, the identities for the $\delta$ function are only meaningful when they are utilized in integration. To be specific, one can perform substitution in the following expression $$ \int {d\mathbf{p}}\rho(\mathbf{k})\sqrt{\omega_p}\delta(\mathbf{p}-\mathbf{k})$$ to get $$ \int {d\mathbf{p}}\rho(\mathbf{k})\sqrt{\omega_k}\delta(\mathbf{p}-\mathbf{k})=\rho(\mathbf{k})\sqrt{\omega_k}\int {d\mathbf{p}}\delta(\mathbf{p}-\mathbf{k})=\rho(\mathbf{k})\sqrt{\omega_k} .$$ But without the integral $\int d\mathbf{p}$, the identity/equality is not rigorously defined. You may try to verify in your favorite textbook whether the identity is always utilized in the above context.
{ "domain": "physics.stackexchange", "id": 75244, "tags": "quantum-field-theory, hilbert-space, klein-gordon-equation, normalization" }
Failed to Get Silence in Audio Streaming
Question: I want to detect silence in audio streaming. I've been following a lot of answers from various websites and I feel like I know how to do it, but I doubt I'm on the right path because the results don't seem right. The stream is in MP3 format, 16 bit, 22050 Hz, and the data stream is divided into small packets. So for every packet received I'll check the data for a header in order to build a complete MP3 chunk. Every complete packet (MP3 chunk) is decoded into PCM. The PCM data consist of signed values between 0-255. I split the PCM data into 1.16 seconds of audio to detect silence in that duration. I've read some answers suggesting to scale the signals to between -1 and 1 for further processing. My first try was calculating RMS. The PCM data are split into 20-millisecond blocks and the RMS is calculated for each number of samples (samples = PCM.length * 0.02). Then I convert the RMS result to dB with 20*log(rms)/log(10). The result is not as I expected: the more silence in the audio, the higher the dB level I get (-20 dB to -30 dB), while I get -50 dB to -70 dB for audio with more noise. Then I tried comparing the values in a chart, but it doesn't really help at all. I think the result is somehow wrong. My second try was using FFT. The scaled signals from the previous step get padding added to make their length a power of two, so they can be processed with the FFT. After I get the FFT result, I apply A-weighting. I can see some similarity between noisy audio clips, but silent audio gives a random-looking chart that can't be compared with noisy audio. So I decided to use a noise reference to compare with every A-weighting result. First I tried comparing by calculating the DTW of those signals, with a threshold of no more than 5k in differences, but sometimes I get audio errors (the accuracy is around 90%). Then I tried comparing with another method using correlation, ifft(fft(noise) * fft(test)), but I don't know how to use it.
I've been comparing the same 4 signals and the result doesn't show any 1 in that list (the highest is 0.625). Could you tell me what my mistake is? I'm really confused about where my mistake is. Actually I want a result like ffmpeg's RMS level, but I didn't get the same result as ffmpeg in my first try. I can't use the ffmpeg library because my boss asked me to calculate it from memory, piping in and piping out; I've tried that and it seems ffmpeg can't be used for that scenario. Tell me if you need some samples of the error and I'll provide them. Thanks. Answer: "The PCM data consist of signed values between 0-255." That seems to be your problem. A 16 bit MP3 file decodes into signed integer values of -32768 to +32767. Perhaps you are looking at bytes and not at samples?
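For reference, a correctly scaled RMS-in-dBFS computation looks like the sketch below (my own illustration, assuming signed 16-bit samples). Silence should come out more negative than loud audio, the opposite of the numbers reported in the question:

```python
import math

def rms_dbfs(samples):
    """RMS level in dB relative to full scale, for signed 16-bit samples."""
    scaled = [s / 32768.0 for s in samples]               # map to [-1, 1)
    rms = math.sqrt(sum(x * x for x in scaled) / len(scaled))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")
```

A full-scale square wave comes out near 0 dBFS and a 100x quieter one near -40 dBFS; if your numbers run the other way, the input scaling (bytes vs 16-bit samples) is the first thing to check.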
{ "domain": "dsp.stackexchange", "id": 7179, "tags": "fft, audio, audio-processing, waveform-similarity" }
Zero gravity area
Question: How can I find the radius of the circle (on average) centered at the center of the Earth where the gravitational force of the Earth cancels the gravitational force of the other celestial bodies? Will the velocity required to reach the circumference of the circle be the escape velocity of Earth? I think so, because the net acceleration on an object outside the circle will be directed away from the Earth, so that the body has truly escaped the Earth's gravity..? Sorry for my non-scientific language; I'm not even a layman. Answer: Your question is a bit vague. There are a lot of objects moving around the solar system, and they are not always in the same place. That said, there are two objects that affect the Earth in a more or less constant way: the Moon and the Sun. The gravity of two massive bodies orbiting each other, provided that one of them is quite a bit bigger than the other (such as the Earth-Moon or the Sun-Earth systems), cancels at several points: those are the Lagrangian points. The nearest one, the Earth-Moon L1, is about 326,000 km up. About your question: Will the velocity which is required to reach the circumference of the circle be the escape velocity of earth? No, the escape velocity of Earth is (by definition) the velocity needed to reach an infinite distance from the surface of the Earth. To reach the Earth-Moon L1 you need less speed because: it is nearer than infinity (obviously), and you have help from the gravity of the Moon. That said, there is another point where the Earth and Moon gravity cancel out, deep inside the Earth. Since gravity decreases as you go deeper inside the planet (proportional to the distance from the center) while the Moon's gravity changes little (350,000 km vs 356,000 km is not a big deal), there is a point half-way down (I didn't do the calculations) where the gravity of both the Moon and the Earth would cancel out.
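As a rough check on those numbers, the static point where the two pulls cancel (ignoring the orbital motion that defines the actual L1) follows from GM_E/r^2 = GM_M/(d-r)^2. The sketch below is my own, with standard approximate values for the constants:

```python
import math

# Standard approximate values.
M_EARTH = 5.972e24      # kg
M_MOON = 7.342e22       # kg
D = 3.844e8             # mean Earth-Moon distance, m

# GM_E / r^2 = GM_M / (d - r)^2  =>  r = d / (1 + sqrt(M_M / M_E))
r_balance = D / (1 + math.sqrt(M_MOON / M_EARTH))
```

This gives roughly 3.46e8 m (about 346,000 km from Earth), somewhat farther out than the true L1 at about 326,000 km, because the centrifugal term of the rotating frame is ignored here.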
{ "domain": "physics.stackexchange", "id": 27298, "tags": "escape-velocity" }
php one-time prepared statement execution function
Question: I use prepared statements often even when I only need to execute the statement once (for security), so I implemented a function to abstract away all the function calls on the mysqli_stmt object, as well as bind_param()'s first argument, since as far as my tests show it works identically even when int parameters are marked as strings.

<?php
$conn = new mysqli('localhost', 'name', 'password', 'db');
if ($conn->connect_error) die('Connection to database failed: ' . $conn->connect_error);

function stmt($query, $params){
    array_unshift($params, str_repeat('s', sizeof($params)));
    for($i = 1; $i < sizeof($params); $i++){
        $params[$i] = &$params[$i];
    }
    $stmt = $GLOBALS['conn']->stmt_init();
    $stmt->prepare($query);
    $method = new ReflectionMethod('mysqli_stmt', 'bind_param');
    $method->invokeArgs($stmt, $params);
    $stmt->execute();
    if($stmt->error){
        $result = ['error' => $stmt->error];
    } else {
        $result = $stmt->get_result();
    }
    $stmt->close();
    return $result;
}
?>

Usage example:

<?php
$result = stmt('SELECT * FROM table_name WHERE id IN(?,?,?)', [1,2,3]);
?>

Answer: I tested your code and it does work, so far, so good. This type of code has been written many times, so by searching online you can find a lot of good examples. The main problem I have with your code is that it is rather difficult to understand. I can work it out, but it is not obvious. Starting with prepending the $params with something, then the very weird $params[$i] = &$params[$i] loop, followed by the usage of ReflectionMethod, normally used for reverse-engineering code. I prefer more down-to-earth code for a simple function like this. Something like:

function executeQuery($mysqli, $query, $parameters)
{
    $stmt = $mysqli->stmt_init();
    if ($stmt->prepare($query)) {
        $types = str_repeat("s", count($parameters));
        if ($stmt->bind_param($types, ...$parameters)) {
            if ($stmt->execute()) {
                return $stmt->get_result();
            }
        }
    }
    return ['error' => $stmt->error];
}

Short and sweet.
Some notes:

- I try to use a function name that actually reflects what the function does.
- I supply the database connection as an argument, for more flexibility. You can use multiple database connections and they don't need to be in the global scope.
- I check whether the query could be properly prepared.
- My code differs quite a bit from your code when it comes to binding the parameters. As you can see this is quite straightforward. Using ... to access variable arguments has been available since PHP 5.6, which came out in 2014.
- By directly returning the results when the execution was successful I know that an error must have occurred when the last line of the function is executed. This therefore also catches other problems.

Personally I would not have expected to get a MySQLi result object out of this function, because it will always have to be processed. Why not do this processing inside this function? Like this:

function executeQuery($database, $query, $parameters)
{
    $stmt = $database->stmt_init();
    if ($stmt->prepare($query)) {
        $types = str_repeat("s", count($parameters));
        if ($stmt->bind_param($types, ...$parameters)) {
            if ($stmt->execute()) {
                if ($result = $stmt->get_result()) {
                    $rows = [];
                    while ($row = $result->fetch_assoc()) {
                        $rows[] = $row;
                    }
                    return $rows;
                }
            }
        }
    }
    return ['error' => $stmt->error];
}

Now you simply get an array back. I agree that is not much different from returning a MySQLi result, but I am thinking ahead. Suppose you decide to change over from MySQLi to PDO in the future. You can easily recode the function above to work with PDO, but recoding the handling of MySQLi results everywhere in your code will be a lot harder. So I am using the function to abstract away from a particular database interface. Some people don't like the deep nesting of if () {} blocks.
To prevent this you could instead write something like the code below for all these blocks:

if (!$stmt->prepare($query)) {
    return ['error' => $stmt->error];
}

I have to repeat that there are lots of ways of doing this. The answer I gave is based on the code you presented. It is, for instance, not hard to find out the type of the parameters, and adjust the $types string accordingly.
{ "domain": "codereview.stackexchange", "id": 35665, "tags": "php, mysqli" }
Diamondback SVN build; roscore won't start, ImportError: No module named rosgraph_msgs.msg
Question: I've just built the ROS Diamondback desktop full install from SVN using the instructions here. The installation process ran smoothly. When trying to run roscore the following error appears:

Traceback (most recent call last):
  File "/home/user/ros/ros/bin/roscore", line 34, in <module>
    from ros import roslaunch
  File "/home/user/ros/ros/core/roslib/src/ros/__init__.py", line 57, in __getattr__
    return __import__(name)
  File "/home/user/ros/ros_comm/tools/roslaunch/src/roslaunch/__init__.py", line 53, in <module>
    from roslaunch.scriptapi import ROSLaunch
  File "/home/user/ros/ros_comm/tools/roslaunch/src/roslaunch/scriptapi.py", line 42, in <module>
    import roslaunch.parent
  File "/home/user/ros/ros_comm/tools/roslaunch/src/roslaunch/parent.py", line 55, in <module>
    import roslaunch.server
  File "/home/user/ros/ros_comm/tools/roslaunch/src/roslaunch/server.py", line 70, in <module>
    from rosgraph_msgs.msg import Log
ImportError: No module named rosgraph_msgs.msg

roswtf gives the following output:

ERROR: ROS has not been built. To fix:
cd $ROS_ROOT
make

Of course, doing as roswtf suggests or running rosmake ros doesn't fix the issue.

EDIT: The thing that seems suspicious to me is that there are only 18 steps in the process of building ROS via rosmake ros. This seems too few in comparison with the 72 dependencies built when building cturtle. Any clues?

Originally posted by tom on ROS Answers with karma: 1079 on 2011-02-19
Post score: 0

Answer: Can you try running "rosmake ros_comm" and re-running?

Originally posted by kwc with karma: 12244 on 2011-02-20
This answer was ACCEPTED on the original site
Post score: 1

Original comments

Comment by tfoote on 2011-02-28: rosinstall version 0.5.16 and greater now will bootstrap ros_comm if present.

Comment by kwc on 2011-02-26: I've patched roswtf r13416. Unfortunately, it won't make it in the final Diamondback RC, but we are also working to address the underlying cause better in rosinstall.
Comment by joq on 2011-02-25: Sounds like there is still a bug in roswtf to fix that message. Comment by tom on 2011-02-20: Thank you, that solved the problem. I move on to reading REP108 :). Comment by fergs on 2011-02-20: See also REP108 for details of how "ros" stack was broken up for diamondback: http://www.ros.org/reps/rep-0108.html
{ "domain": "robotics.stackexchange", "id": 4809, "tags": "roscore, ros-diamondback" }
Relation between variation of lagrangian and vacuum expectation value
Question: Recently, I have been struggling to review some fundamental things in quantum field theory. If $\delta L=0$, then the expectation value of the variation of an operator vanishes, i.e., $\langle T(\delta O)\rangle=0$, where $\langle\,\cdot\,\rangle$ means the vacuum expectation value and $T$ stands for time ordering. How can this be true? How can one prove it? Any ideas or references? I am trying to find a reference; if you know any, please let me know. Thanks in advance. Answer: It is essentially due to Schwinger's quantum action principle, $$ \delta \langle A|B\rangle=i\langle A|\delta S|B\rangle $$ If $\delta L=0$ then $\delta S=0$, and so $$ \delta \langle A|B\rangle=0 $$ Now take for example $\langle A|=\langle 0|$ and $|B\rangle=O|0\rangle$ so that $$ 0=\delta \langle 0|O|0\rangle=\langle 0|\delta O|0\rangle $$ The modern interpretation of Schwinger's principle is through functional integrals. I leave it to you to repeat the analysis above using them. The starting point is of course $$ \langle O \rangle=\int\mathrm d\phi\ O(\phi)\,\mathrm e^{iS[\phi]} $$ so that $$ \langle \delta O \rangle=i\int\mathrm d\phi\ O(\phi)\,\delta S[\phi]\, \mathrm e^{iS[\phi]} $$ from which your result readily follows.
{ "domain": "physics.stackexchange", "id": 37054, "tags": "quantum-field-theory, variational-principle, action" }
What causes the Balmer Jump?
Question: To quote Wikipedia: Balmer Jump is caused by electrons being completely ionized directly from the second energy level of a hydrogen atom (bound-free absorption), which creates a continuum absorption at wavelengths shorter than 364.5 nm. Based on the graphs below, which we used in class, the non-grey energy distribution of the energy flux is continuously lower than the grey case for wavelengths shorter than 364.5 nm, which according to Wikipedia is due to the bound-free absorptions of electrons in the $n=2$ energy state. I understand why the energy flux gets lower for those wavelengths, but I still cannot grasp why the discontinuity occurs. Is it because, for wavelengths longer than 364.5 nm, the bound-free absorption of electrons in the $n=3$ state starts? And do bound-bound absorptions not occur at all for stars with these discontinuities? I would be very thankful for any clarification on this subject. Answer: The Balmer break comes from a combination of two main things: the ability of photons with high enough energies (wavelengths shorter than 364.5 nm) to ionize hydrogen atoms that are in the $n = 2$ energy level, and the inability of any photons with lower energies (wavelengths longer than 364.5 nm) to ionize the atoms. This leads to an abrupt change in absorption of outgoing photons, from high absorption at wavelengths shorter than 364.5 nm, to no absorption at longer wavelengths. Another important factor: Any photon with a wavelength shorter than 364.5 nm can ionize an $n = 2$ hydrogen atom -- but the probability (due to the cross-section of the photon-atom interaction) goes down as the wavelength gets shorter. That's why you get the sawtooth pattern: very-short-wavelength photons can ionize the atoms, but are unlikely to. As the wavelengths get longer, the probability of ionization increases, up to the point of 364.5 nm. Once the wavelength is longer than that, ionization is simply impossible, so the absorption goes to zero.
In practice, the total absorption doesn't go to zero, because there are other sources of absorption, including hydrogen atoms in the higher states, like the $n = 3$ state (which has its own "Paschen jump"), and things like H$^{-}$ hydrogen ions. You also have to think about things like how many atoms are in different energy levels. In a very hot atmosphere, most of the hydrogen atoms will be in higher energy levels (or even ionized), so you won't get much of a Balmer break. do bound-bound absorptions not occur at all for stars with these discontinuities? They absolutely do occur -- they produce absorption lines, such as those of the Balmer series (which are not shown in the simplified, cartoon spectra in your figure). If you have enough hydrogen atoms in the $n = 2$ state to produce a noticeable Balmer break, then you'll automatically have Balmer absorption lines (H$\alpha$, H$\beta$, etc.) due to longer-wavelength photons being absorbed by the same atoms. These lines do get closer together as you approach 364.5 nm from the long-wavelength side, which has the effect of making the peak of the sawtooth a bit rounded, but they don't produce the jump -- they make it a little bit more gradual. (Your figures do not show the effects of the absorption lines.)
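As a quick numerical cross-check of the 364.5 nm figure (a back-of-the-envelope Python sketch, not part of the original answer; rounded standard constants assumed): ionizing hydrogen from $n = 2$ costs $13.6/2^2 \approx 3.4$ eV, and the matching photon wavelength is $hc/E$:

```python
# Back-of-the-envelope check of the Balmer-edge wavelength
# (rounded standard constants; illustrative estimate only).
E_ionize_n2_eV = 13.6057 / 2**2           # energy to ionize hydrogen from n = 2, in eV
hc_eV_nm = 1239.84                        # h*c in eV.nm
lambda_edge_nm = hc_eV_nm / E_ionize_n2_eV
print(f"Balmer edge at {lambda_edge_nm:.1f} nm")  # Balmer edge at 364.5 nm
```

Any photon shortward of this wavelength carries more than the 3.4 eV binding energy of the $n = 2$ level, which is exactly the bound-free threshold the answer describes.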
{ "domain": "astronomy.stackexchange", "id": 6799, "tags": "star, spectra, stellar-atmospheres" }
Possibility of SET mechanism of Wurtz reaction
Question: I was studying the Wurtz reaction, and I wanted to figure out the mechanism. When I searched the web, I came across the single electron transfer (SET) mechanism explanation. How can the alkyl group and the halogen form free radicals when the halogen is clearly much more electronegative than the alkyl group? Answer: How can the alkyl group and the halogen form free radicals when the halogen is clearly much more electronegative than the alkyl group? To answer your first question, it is important to note that the halogen does not form free radicals in this mechanism. Instead, the driving force is the net formation of sodium halides, as illustrated in the mechanism I drew below. In the following sections, I will try to compare the Wurtz mechanism to a perhaps more familiar one, the Grignard mechanism. This should help elucidate some general aspects of organometallic mechanisms. If you are familiar with the Grignard reaction, the following comparison may help clear up the involvement of radical species in the mechanism. The Wurtz reaction is in fact very similar to the Grignard reaction. Both involve the 'insertion' of a metal species into the carbon-halogen bond. In the case of the Grignard reaction, this metal is magnesium. This allows for the formation of a simple alkyl magnesium halide; however, with the Wurtz reaction, the addition of a sodium atom into the carbon-halogen bond results in the formation of neutral sodium halide and a carbon radical. This carbon radical will rapidly react with another equivalent of sodium metal to form a carbanionic species, which will in turn displace another halogen in an S$_\mathrm{N}$2 reaction. The Wurtz reaction is not a super useful reaction due to the formation of this reactive radical. Most chemists would opt to use the much gentler Grignard reagent. Below, I have drawn out the mechanism of the Wurtz reaction of 1-bromo-3-chlorocyclobutane to form bicyclo[1.1.0]butane.
In this, you can see that a carbon radical is formed. This radical must come in contact with another equivalent of sodium before it is neutralized to form a carbanionic species. For this reason, the radical will also have plenty of time to react with other parts of the molecule in unwanted ways. The Grignard reaction, on the other hand, results in the formation of a radical in the 'reduction' step; however, this is neutralized very quickly in the recombination step and will thus not react unnecessarily with other parts of the molecule. I have included a diagram of the Grignard reagent formation below.

Mechanism of the General Wurtz Reaction

Mechanism for the Formation of a Grignard Reagent
{ "domain": "chemistry.stackexchange", "id": 13846, "tags": "organic-chemistry, reaction-mechanism" }
Dagger 2 test application
Question: I have a complicated Dagger 2 use case on Android, where I have a lot of dependencies, but some of them are really slow to construct. Like 2-3 seconds slow. My solution was to create an RX Observable that will be provided through Dagger's injection mechanism to other components, instead of waiting for the slow component to construct on the main thread. The app's Application object would create the Observable, and provide access to it via one public getter I place on it. Application instance is a singleton by default, so I thought I could hold the reference there. Another Observable would be the one constructing the slow component and pushing it into the publicly-accessible stream. My code for demonstrating that scenario:

public class Dagger2TestApplication extends Application {

    private Subject<SlowComponent> mSlowComponent;

    @Override
    public void onCreate() {
        super.onCreate();

        // behavior subject saves the last value inside (great for singleton access)
        mSlowComponent = BehaviorSubject.create();

        // async slow component loading
        Completable
            .create(emitter -> {
                // prepare an artificial blocking delay
                final long delay = 2000L + Math.round(Math.random() * 4000d);
                final long start = System.currentTimeMillis();
                // noinspection StatementWithEmptyBody - just actively wait here
                do {} while (System.currentTimeMillis() - start < delay);

                // now create the instance
                final SlowComponent instance = DaggerSlowComponent.builder()
                    .contextModule(new ContextModule(this))
                    .build();

                // finally send the item to all observers, finalize this stream
                mSlowComponent.onNext(instance);
                emitter.onComplete();
            })
            .subscribeOn(Schedulers.io())
            .subscribe();
    }

    public Observable<SlowComponent> getSlowComponentObservable() {
        return mSlowComponent;
    }
}

I decided to use a BehaviorSubject because it would immediately emit anything that was previously pushed into the stream, making it a holder/wrapper around my singleton dependency graph.
Basically, my main conundrum is - do you think this is the way to go for my use case? Or do you have any other ideas on how to achieve this type of behavior? Answer: Not sure why you complicated it, a simple set of operators could do the same:

Observable<SlowComponent> mSlowComponent;
SerialDisposable mDisposable;

@Override
public void onCreate() {
    super.onCreate();
    mDisposable = new SerialDisposable();
    mSlowComponent = Observable.fromCallable(() ->
            DaggerSlowComponent.builder()
                .contextModule(new ContextModule(this))
                .build()
        )
        .subscribeOn(Schedulers.io())
        .replay(1)
        .autoConnect(0, mDisposable::set);
}

@Override
public void onDestroy() {
    super.onDestroy();
    mDisposable.dispose();
    mSlowComponent = null;
}

Main points:

Use fromCallable for a single-element result.
replay(1) will keep replaying that single result.
autoConnect(0) will start the sequence immediately, even without subscribers yet.
mDisposable will allow the sequence to be cancelled if the activity gets disposed while the slow component is still being created.

Edit: I almost forgot, SequentialDisposable is internal to RxJava, the public version is SerialDisposable.
{ "domain": "codereview.stackexchange", "id": 27638, "tags": "java, android, asynchronous, dependency-injection, rx-java" }
Aircraft - static take off - how is this possible?
Question: From this Aviation SE answer, and personal experience, aircraft routinely are held on their brakes while the engines are run up to full power, prior to take-off. Intuitively, to me at least, this is hard to explain. Given the formidable power of jet engines, especially military aircraft, why is this possible? Why doesn't something break, or why doesn't the aircraft just skid down the runway, wheels locked, tyres smoking? For instance, it's not that hard to wheel-spin a stationary car, or vice versa slam on the brakes in a moving car and break the adhesion between the tyre and road surface. Answer: Why doesn't something break, or why doesn't the aircraft just skid down the runway, wheels locked, tyres smoking? Basically, the aircraft you're accustomed to simply don't have all that much thrust, particularly compared to the force required for hard braking. An example: a commercial airliner will typically have a rollout time of about 30 to 35 seconds, and a liftoff speed of 120 to 140 knots (70 m/sec). Then acceleration a is given by $$a = \frac{\Delta v}{\Delta t} = \frac{70}{35} = 2 \text{m/sec}^2 = 0.2 \text{ gs}$$ While this is respectable, it implies that, for a static takeoff, the brake coefficient of friction must be greater than 0.2, and that is not hard to do at all. Military aircraft (fighters, especially) need greater performance, of course, and a thrust-to-weight ratio greater than one is possible. A static takeoff for such frisky aircraft would indeed be problematic, requiring tires with friction coefficients greater than one. However, land-based fighters don't normally do static takeoffs, while carrier-based fighters do. But carrier aircraft are hooked to catapults, and the brakes are not the dominant factor.
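The answer's arithmetic can be checked in a couple of lines (a quick Python sketch using the answer's round numbers; not part of the original post):

```python
# Quick check of the answer's numbers (illustrative round values).
g = 9.81                    # m/s^2
v_liftoff = 70.0            # m/s, roughly 120-140 knots
t_rollout = 35.0            # s
a = v_liftoff / t_rollout   # mean acceleration during the take-off roll
mu_needed = a / g           # minimum brake friction coefficient for a static run-up
print(f"a = {a:.1f} m/s^2 -> need mu > {mu_needed:.2f}")  # a = 2.0 m/s^2 -> need mu > 0.20
```

Brakes and tyres comfortably provide friction coefficients well above 0.2, which is why, as the answer says, holding an airliner static at full thrust "is not hard to do at all."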
{ "domain": "physics.stackexchange", "id": 27581, "tags": "newtonian-mechanics, friction" }
The "CPS" approach has done great harm to performance in SML/NJ; reasoning desired
Question: In a comment to Learning F#: What books using other programming languages can be translated to F# to learn functional concepts? Makarius stated: Note that the "CPS" approach has done great harm to performance in SML/NJ. Its physical evaluation model violates too many assumptions that are built into the hardware. If you take big symbolic applications of SML like Isabelle/HOL, SML/NJ with CPS comes out approx. 100 times slower than Poly/ML with its conventional stack. Can someone explain the reasons for this? (Preferably with some examples.) Is there an impedance mismatch here? Answer: At first approximation, there is a difference in "locality" of memory access, when a program just runs forward on the heap in CPS style, instead of the traditional growing and shrinking of the stack. Also note that CPS will always need GC to recover your seemingly local data placed on the heap. These observations alone would have been adequate 10 or 20 years ago, when hardware was much simpler than today. I am myself neither a hardware nor a compiler guru, so as a second approximation, here are some concrete reasons for the approx. factor 100 seen in Isabelle/HOL:

Basic performance loss according to the "first approximation" above.
SML/NJ heap management and GC have severe problems scaling beyond several tens of MB; Isabelle now uses 100-1000 MB routinely, sometimes several GB.
SML/NJ compilation is very slow -- this might be totally unrelated (note that Isabelle/HOL alternates runtime compilation and running code).
SML/NJ lacks native multithreading -- not fully unrelated, since CPS was advertised as "roll your own threads in user space without separate stacks". The correlation of heap and threads is also discussed in the paper by Morriset/Tolmach, PPOPP 1993, "Procs and Locks: A Portable Multiprocessing Platform for Standard ML of New Jersey" (CiteSeerX).

Note: the PDF at CiteSeerX is backward, pages run from 10-1 instead of 1-10.
{ "domain": "cs.stackexchange", "id": 1115, "tags": "compilers, functional-programming, proof-assistants, continuations" }
What is the difference between photons and electromagnetic waves?
Question: Electromagnetic waves are generated by accelerating electric charges. Photons on the other hand, tend to describe something different, specifically the particle nature of electromagnetic waves as detected experimentally. Is it something like EM waves are like the ripples in water, but photons are like the individual molecules of water in a wavefront? Or is it like the amplitude of the wavefront? According to this answer, phonons can be thought of as molecules of the medium, but photons don't move in a medium. If we were talking about current, we would have electrons acting as a waveguide, but EM waves are different. (Is it too far off to think of the electromagnetic field as a medium? e.g. like a 2d matrix full of 0s, and if I place a +5 value in a corner, it spreads out to the three adjacent cells in the next step, and so on.) Also, if there is a difference between the two, why are they always mentioned as synonymous? Is it a common mistake? Answer: There are three main concepts here, not two: the photon, the quantum field mode (somewhat like a quantum wavefunction), and the classical electromagnetic wave. The relationship is as follows. A quantum field mode is a continuous function of position and time, quite like a wave. It is also quite like a quantum wavefunction though not exactly the same. We are usually interested in quantum field modes having a well-defined frequency. A photon is a way of talking about how much energy is in some quantum field mode. The more energy, the more photons. A classical wave is an approximation to what happens when you have a large number of photons in a group of quantum field modes all of similar frequency and spatial shape. All this can be made precise via the mathematics.
The main thing to note is that the electromagnetic wave, which can be thought of as oscillating electric and magnetic fields, corresponds to a stream of a large number of photons, and the quantum field modes that are involved have a shape in space and time similar to the shape of the resulting electromagnetic wave.
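To get a feel for why a classical wave corresponds to a very large number of photons, here is a rough Python estimate (the 1 mW, 633 nm laser is an assumed, illustrative source; constants are rounded):

```python
# Illustrative estimate: how many photons per second make up an ordinary light beam.
# Assumed source: a 1 mW helium-neon-like laser at 633 nm.
h = 6.626e-34          # Planck's constant, J.s
c = 2.998e8            # speed of light, m/s
wavelength = 633e-9    # m
power = 1e-3           # W
E_photon = h * c / wavelength           # energy per photon, ~3.1e-19 J
photons_per_second = power / E_photon   # ~3e15 photons every second
```

With of order 10^15 photons arriving each second, the granularity is utterly invisible, which is why the smooth classical-wave description works so well in this limit.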
{ "domain": "physics.stackexchange", "id": 99765, "tags": "electromagnetic-radiation, photons" }
1D application of the differential form of Gauss' Law for the electric field from a point charge
Question: (This might be somewhat related to a previous question I posted here, however it seemed different enough to warrant a separate post.) I'm trying to see how, for a point charge at the origin, I might apply the differential form of Gauss' Law to integrate and find the electric field, for example along the $x$-axis: $$\nabla \cdot E= \frac{dE}{dx}=\frac{\rho}{\epsilon}$$ Essentially I'm trying to follow the procedure depicted on slide 13 of this lecture except for the case of a point charge at the origin. My understanding is that such a point charge would need to be represented by a Dirac delta charge density at the origin, which would make sense to me. However, integrating this Dirac delta would seem to produce a constant electric field rather than an electric field that approaches infinity at the origin and falls off as $\frac{1}{x^2}$. I'm confused overall how this decreasing electric field would be predicted as Gauss' law seems to suggest there would need to be negative charge density at those locations, whereas the charge density is zero almost everywhere. In my previous question linked above my error came from trying to use a noncontinuous expression for the electric field which did not have accurate partial derivatives, however here it seems that the charge density should definitely be 0 everywhere except the origin so I'm not sure what the fix might be. Answer: $\rho = \sigma \delta(x)$ describes an infinite charged sheet with charge density $\sigma$, not a point charge. As is well known and as you have found out, this produces an electric field that is constant on either side of the sheet. To describe a point charge $q$ at the origin, you would write the charge density as $\rho = q\delta(x)\delta(y)\delta(z)$ in Cartesian coordinates, or $\rho = \frac{q}{2\pi r^2}\delta(r)$ in spherical coordinates. The problem is not one dimensional in Cartesian coordinates. 
It is one dimensional in spherical coordinates, and you will obtain the inverse square law if you use the divergence operator in spherical coordinates.
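As a small numerical sketch of the spherical case (illustrative only; the 1 nC charge is an assumed value): for the inverse-square field $E(r) = q/(4\pi\epsilon_0 r^2)$, the flux through a sphere of any radius is $q/\epsilon_0$, so Gauss's law recovers the same enclosed point charge at every radius:

```python
# Sketch: the flux of an inverse-square field through a sphere is q/eps0
# for every radius, i.e. Gauss's law sees the same enclosed point charge.
import math

eps0 = 8.854e-12   # F/m
q = 1e-9           # 1 nC test charge (assumed, illustrative value)
for r in (0.1, 1.0, 10.0):
    E = q / (4 * math.pi * eps0 * r**2)   # radial field magnitude at radius r
    flux = E * 4 * math.pi * r**2         # E is uniform over the sphere's surface
    assert abs(flux - q / eps0) / (q / eps0) < 1e-12
```

The same check with a constant field (the $\rho = \sigma\delta(x)$ sheet solution) would fail, since a constant $E$ gives a flux growing as $r^2$.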
{ "domain": "physics.stackexchange", "id": 94567, "tags": "electrostatics, electric-fields, gauss-law" }
Can heat be transformed into matter?
Question: Heat is a form of energy. Heat gets created by mechanical or chemical processes that act, most of them, on matter. Considering those processes that act on matter and create energy, can heat be transformed back into substance again? Or does part of it simply get wasted into nothingness? Or is energy (heat) a result of matter getting transformed that cannot be captured? Answer: An endothermic reaction is one that absorbs heat, and that means the total mass of the products will be slightly greater than the total mass of the reagents. The increase in mass will be equal to the heat absorbed divided by $c^2$, i.e. the mass change will be given by Einstein's famous equation $E = mc^2$. This is an example of heat being converted to mass. Note however that endothermic reactions are generally only possible when there is a large and positive entropy change. Though they offer a way for mass to be converted to heat and then back to mass again, this can only occur when an entropy gradient permits it, so it can't occur indefinitely. An example of an endothermic process is dissolving most ionic solids in water. The heat of solution is negative for most ionic solids, i.e. when you dissolve them in water the solution decreases in temperature. The ionic solid only dissolves because the entropy of solution is large and positive.
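The size of the effect is easy to estimate from $E = mc^2$; here is a quick Python sketch (the 10 kJ of absorbed heat is an assumed, illustrative figure, not from the original answer):

```python
# Sketch: mass gained by an endothermic process, via E = m c^2.
# Assumed, illustrative figure: a reaction that absorbs 10 kJ of heat.
c = 2.998e8                      # speed of light, m/s
heat_absorbed = 10e3             # J
delta_m = heat_absorbed / c**2   # kg; about 1e-13 kg, far too small to weigh
```

A tenth of a picogram per 10 kJ is why this mass change is never noticed in chemistry, even though it is always there in principle.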
{ "domain": "physics.stackexchange", "id": 65626, "tags": "thermodynamics" }
Why use two LSTM layers one after another?
Question: In the example on the Keras site, seq2seq_translate.py on line 189, there is one LSTM layer after another (the first with return_sequences=True), but another example, lstm_seq2seq.py, which does the same thing but letter-by-letter, uses only one LSTM in the encoder. My code looks like:

encoder = LSTM(latent_dim, return_sequences=True)(encoder_inputs)
encoder_outputs, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder)

My question is: why does the word-by-word version use two LSTM layers? And why is return_sequences used?
{ "domain": "datascience.stackexchange", "id": 3612, "tags": "keras, lstm" }
reset CATKIN_WHITELIST_PACKAGES
Question: I am building a subset of the packages in my catkin workspace using CATKIN_WHITELIST_PACKAGES. However, when I want to rebuild the whole workspace (without CATKIN_WHITELIST_PACKAGES set), it still only builds the whitelisted packages. I assumed that the whitelist is only temporary for each build, but it seems like it is globally stored. I couldn't find any documentation on it. If I want to change the whitelist, the only thing I can do is to remove build and devel. Can anyone provide me with some more information about CATKIN_WHITELIST_PACKAGES, i.e. what exactly it does and where it is stored? Originally posted by takahashi on ROS Answers with karma: 185 on 2017-03-15 Post score: 0 Answer: Search for CATKIN_WHITELIST_PACKAGES in the ROS wiki returned the following page. It describes the usage of that variable and how to revert back to building all packages: http://wiki.ros.org/catkin/commands/catkin_make Originally posted by Dirk Thomas with karma: 16276 on 2017-03-15 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by takahashi on 2017-03-15: what the h***, I am pretty sure I was on this page, but somehow I must have overlooked it. Thanks a lot anyway! Comment by Dirk Thomas on 2017-03-15: Please mark the answer as correct then so that other's searching for the same problem can find it easier.
{ "domain": "robotics.stackexchange", "id": 27322, "tags": "catkin" }
Adding links to tags on StackExchange sites
Question: This Greasemonkey script adds links to tags on StackExchange sites which allow for activating, deactivating and ignoring favorite and related tags by using UI components. Since JavaScript is not my favourite language, I'd be glad to hear what can be done more elegantly, more beautifully, more performantly, etc. (also by using jQuery, for instance).

// ==UserScript==
// @name SEQTAIL - StackExchange Questions' TAgs Inline Links
// @author Gerold 'Geri' Broser
// @license GNU GPLv3 <http://www.gnu.org/licenses/gpl-3.0.html>
// @namespace igb
// @description Adds links to all tags that allow for activating, deactivating and ignoring favorite and related tags by UI components rather than by editing the search field.
// @description:de Fügt zu allen Tags Links hinzu, die es erlauben Favorite und Related Tags über UI-Konponenten zu aktivieren oder zu deaktivieren, anstelle das Suchfeld zu bearbeiten.
// @include http://stackoverflow.com/questions*
// @include http://stackoverflow.com/unanswered*
// @include http://codereview.stackexchange.com/questions*
// @version 16.4.7
// @icon http://cdn.sstatic.net/Sites/stackoverflow/img/favicon.ico
// @run-at document-idle
// @tested-with Firefox 45.0.1, Greasemonkey 3.7; Chrome 49.0.2623.110, TamperMonkey 4.0.10
// ==/UserScript==

(function() {
    'use strict';
    console.debug("BEGIN SEQTAIL...");

    function qouteRegexSpecialCharsIn(string) {
        return string.replace(/([\[\]\(\)\{\}\|\?\+\-\*\^\$\\\.\!\=])/g, "\\$1");
    }

    // Adds links to all tags that allow to active, deactivate and ignore favorite and related tags
    // by selecting rather than by editing the search field.
    function addLinksTo(tags) {
        for (var n = 0; n < tags.snapshotLength; n++) {
            var tag = tags.snapshotItem(n);
            var tagName = tag.innerHTML.replace(/<img.+>/g, "");

            // 'ignore' tag
            if (search !== "" // if search field is empty (an ignored tag cannot exist on its own in search field)...
                && search != "[" + tagName + "]" // ...and tag not the only active (see above)...
                && search.indexOf("-[" + tagName + "]") < 0) { // ...and not already ignored
                var i = document.createElement('a');
                var rgx = new RegExp("\\[" + qouteRegexSpecialCharsIn(tagName) + "\\]\\+*", "g"); // to remove tag from link below if it is active
                i.href = "/questions/tagged/" + search.replace(rgx, "").trim().replace(/ +/g, "+") + (search == "" ? "" : "+") + "-[" + tagName + "]";
                i.title = "ignore questions tagged '" + tagName + "'";
                i.style = "background-color: #e6e6e6; padding: 0 0.4em;";
                i.innerHTML = '!';
                tag.parentNode.insertBefore(i, tag.nextSibling);
            }

            // 'activate' and 'deactivate' tag
            var ad = document.createElement('a');
            if (search.indexOf("[" + tagName + "]") < 0 // if tag not active...
                || search.indexOf("-[" + tagName + "]") >= 0) { // ...or ignored
                rgx = new RegExp("-\\[" + qouteRegexSpecialCharsIn(tagName) + "\\]", "g"); // to remove tag from link below if it is ignored
                ad.href = "/questions/tagged/" + search.replace(rgx, "").trim().replace(/ +/g, "+") + (search == "" ? "" : "+") + "[" + tagName + "]";
                ad.title = "add '" + tagName + "' to active tags";
                ad.style = "background-color: #e6ffe6; padding: 0 0.3em;";
                //ad.innerHTML = '+'; // if using the regular '+' sign it is added to the tag name by SO after editing Favorite Tags,
                // since '+' is a valid character in tag names
                //const AND = "&#2227;" // ∧ ... logical AND is not displayed properly in FF
                ad.innerHTML = "&uarr;";
            } else {
                rgx = new RegExp("\\[" + qouteRegexSpecialCharsIn(tagName) + "\\]", "g"); // to remove tag from link below if it is active
                ad.href = "/questions/tagged/" + search.trim().replace(rgx, "").trim().replace(/ +/g, "+");
                ad.title = "remove '" + tagName + "' from active tags";
                ad.style = "background-color: #ffe6e6; padding: 0 0.3em;";
                //ad.innerHTML = '&minus;';
                ad.innerHTML = "&darr;"; // since up arrow is used at 'add' above, use down arrow here for the sake of consistency
            }
            tag.parentNode.insertBefore(ad, tag.nextSibling);
        } // for(tags)
    } // addLinksTo(...)
    // Adds link to tags section that allows to deactivate all tags.
    function addAllLinkTo(tagsSection) {
        if (search === "") // if search field is empty
            return;
        var da = document.createElement('a');
        da.href = "/questions";
        da.title = "remove all from active tags";
        da.style = "background-color: #ffe6e6; padding: 0 0.3em;";
        da.innerHTML = '&times;';
        tagsSection.appendChild(da);
    } // addAllLinkTo(...)

    var search = document.getElementById('search').firstElementChild.value;

    // active tags
    var aTag = document.evaluate("//div[@class='tagged']", document, null,
            XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null)
        .snapshotItem(0);
    if (aTag !== null) {
        var aTags = document.evaluate("//div[@class='tagged']/a[not(starts-with(., 'about'))]", aTag, null,
            XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null);
        addLinksTo(aTags);
        addAllLinkTo(aTag);
    }

    // interesting tags
    var iTag = document.getElementById('interestingTags');
    if (iTag !== null) {
        var iTags = document.evaluate("//div[@id='interestingTags']/a", iTag, null,
            XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null);
        addLinksTo(iTags);
        addAllLinkTo(iTag);
    }

    // related tags
    var rTag = document.evaluate("//div[contains(@class, 'js-gps-related-tags')]", document, null,
            XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null)
        .snapshotItem(0);
    if (rTag !== null) {
        var rTags = document.evaluate("//div[contains(@class, 'js-gps-related-tags')]/div/a", rTag, null,
            XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null);
        addLinksTo(rTags);
    }

    // unanswered tags
    var uTag = document.evaluate("//div[./h4[@id='h-related-tags']]", document, null,
            XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null)
        .snapshotItem(0);
    if (uTag !== null) {
        var uTags = document.evaluate("//div[./h4[@id='h-related-tags']]/div/a", uTag, null,
            XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null);
        addLinksTo(uTags);
    }

    // Adds a section with links primarily for removing related tags which are no more in the Related Tags list once active.
    function addDeactivateSection(parentElement) {
        var active = search.split(" ");
        if (active[0] === "") // if search field is empty (first element is an empty string after split())
            return;
        var deactivate = document.createElement('div');
        deactivate.style = "margin-bottom: 1.5em;";
        var h3 = document.createElement('h3');
        h3.style = "margin-top: 1em; font-weight: normal;";
        h3.innerHTML = "Remove from active tags";
        deactivate.appendChild(h3);
        for (var i = 0; i < active.length; i++) {
            var r = document.createElement('a');
            var rgx = "\-*" + qouteRegexSpecialCharsIn(active[i])/*.replace(/\[/g, "\\\[").replace(/\]/g, "\\\]")*/;
            r.href = "/questions/tagged/" + search.replace(new RegExp(rgx, "g"), "").trim().replace(/ +/g, "+");
            r.title = "remove '" + active[i].replace(/[\[\]]/g, "") + "' from active tags";
            r.style = "font-size: 12px; background-color: #ffe6e6; margin: 0.2em; padding: 0.2em 0.4em;";
            r.innerHTML = active[i].replace(/[\-\[\]]/g, "");
            if (active[i].startsWith("-")) {
                var n = document.createElement('span');
                n.innerHTML = "not";
                deactivate.appendChild(n);
            }
            deactivate.appendChild(r);
        } // for(active tags)
        parentElement.parentNode.parentNode.insertBefore(deactivate, parentElement.parentNode.nextSibling);
        addAllLinkTo(deactivate);
    } // addDeactivateSection(...)

    addDeactivateSection(iTag);

    console.debug("END SEQTAIL.");
})(); // use strict

Answer: Since Stack Exchange already uses jQuery, I think it's a good idea to write userscripts using jQuery too. However, rewriting the whole code to use jQuery instead of native APIs would require a lot of work, so I will just comment on the code you wrote.

(function() {
    'use strict';

I would put the 'use strict' statement in the next line.

You're using many semicolons. JavaScript has a nice feature called automatic semicolon insertion, which means you almost don't need to use semicolons at all, with just a few exceptions. From the npm style guide:

Don't use them [semicolons] except in four situations:

for (;;) loops.
They're actually required.

null loops like: while (something) ; (But you'd better have a good reason for doing that.)

case 'foo': doSomething(); break

In front of a leading ( or [ at the start of the line. This prevents the expression from being interpreted as a function call or property access, respectively.

Note that it's only a matter of style. You don't have to follow it if you don't like it, but personally I think that if something is not required, there's no reason to use it.

You don't use quotes or apostrophes for string literals consistently. For example:

'use strict'
console.debug("BEGIN StackExchange Questions...")
document.getElementById('search')

Decide if you want to write strings using either apostrophes or double quotes and use it consistently in the whole code. Personally, I prefer apostrophes, because they require pressing only one key, whereas to insert a double quote character you have to hold Shift too. And for example if you choose apostrophes and you want to make a string which contains apostrophes, feel free to use double quotes in this specific situation, so you don't have to escape the apostrophes with a backslash.

You're using document.evaluate() to select DOM elements. I don't really see any reason why it could be better than document.querySelector(), which uses CSS-style selectors. And unfortunately, it's not supported by IE at all. I think you should support at least IE 11, because still many people use it.

ECMAScript 6 has a lot of great features, and most of them are already supported in major browsers, but you aren't using any of them. For example, in a for loop, you could use let instead of var, to limit the scope of the variable to the loop. This loop:

for (var n = 0; n < tags.snapshotLength; n++)

could be changed to:

for (let n = 0; n < tags.snapshotLength; n++)

Read more about the let keyword on MDN docs.
I see you're using strict comparison operators (=== and !==) almost everywhere, but in one place I think you forgot one equal sign: && search != "[" + tagName + "]" var i = document.createElement('a'); Try to use some more meaningful identifiers, for example ignoreTagLink. i.style = "background-color: #e6e6e6; padding: 0 0.4em;"; Instead of using inline styles, I recommend adding a class to the element and defining CSS rules for that class, like this:

var $style = document.createElement("style")
$style.textContent = `
.someClass {
    background-color: #e6e6e6;
    padding: 0 0.4em;
}
`
document.head.appendChild($style)

You're repeating some code — don't do this, it's a bad practice. Try to follow the DRY (don't repeat yourself) principle. Some lines of code are too lengthy, like this one: var iTags = document.evaluate("//div[@id='interestingTags']/a", iTag, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null); You could split it into multiple lines like this:

var iTags = document.evaluate(
    "//div[@id='interestingTags']/a",
    iTag,
    null,
    XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE,
    null
);

Those are all the issues I see for now; if I find some more, I'll edit this answer.
{ "domain": "codereview.stackexchange", "id": 19450, "tags": "javascript, stackexchange, userscript" }
Did Newton estimate the gravitational constant $G$?
Question: Did Newton estimate the gravitational constant $\mathrm{G}$? In my head, he did this by comparing: the acceleration of an object on Earth (let's say, an apple), $9.81 \,\mathrm{m\cdot s^{-2}}$ at $6400 \,\mathrm{km}$ from the centre of the Earth; and the acceleration of the Moon, $384,000 \,\mathrm{km}$ from the centre of the Earth. As explained here. But did he actually take the next step and calculate what G must be to explain both accelerations? If so, what value did he get? Answer: Without knowing the mass of the Earth, calculating the gravitational constant from $g$ and the acceleration of the Moon is impossible. The best you can do is calculate the product of the gravitational constant and the Earth's mass (GM). This is why Cavendish's experiment with the gravity of lead weights was important, since the mass of the body providing the gravitational force was known. Once $G$ was calculated from this experiment, the Earth could then be weighed using either $g$ or the Moon's acceleration (both hopefully yielding the same answer). The suggestion in the previous paragraph that Cavendish's experiment resulted in a value for $G$ is still not quite right. While a value for $G$ could have been determined from the experiment, Cavendish only reported the specific gravity (the ratio of a density to water's density) of Earth. According to Wikipedia, the first reference in the scientific literature to the gravitational constant is in 1873, 75 years after Cavendish's experiment and 186 years after Newton's Principia was first published: Cornu, A.; Baille, J. B. (1873). "Détermination nouvelle de la constante de l'attraction et de la densité moyenne de la Terre" [New Determination of the Constant of Attraction and the Average Density of Earth]. C. R. Acad. Sci. (in French). Paris. 76: 954–958. Click on the link if you read French or can find a translator. Also, the symbol $f$ is used instead of $G$.
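To make the answer's point concrete, here is a quick numeric sketch using the round figures from the question (the 27.32-day sidereal month is my own addition, not stated in the question): the falling apple and the orbiting Moon each pin down the same product $GM$, but never $G$ on its own.

```python
import math

# Round figures from the question; the sidereal month is an assumption.
g = 9.81           # surface gravity, m/s^2
R = 6.4e6          # Earth's radius, m
r = 3.84e8         # Earth-Moon distance, m
T = 27.32 * 86400  # sidereal month, s

GM_surface = g * R**2                # from the apple: g = GM / R^2
a_moon = 4 * math.pi**2 * r / T**2   # Moon's centripetal acceleration
GM_orbit = a_moon * r**2             # from the Moon: a = GM / r^2

print(GM_surface, GM_orbit)  # both come out near 4.0e14 m^3/s^2
```

The two estimates of $GM$ agree to a fraction of a percent, which is exactly the consistency Newton could check; separating $GM$ into $G$ and $M$ had to wait for Cavendish-style laboratory experiments.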
Newton's Principia can be downloaded here: https://archive.org/stream/newtonspmathema00newtrich#page/n0/mode/2up Follow-up questions copied from the comments (in case the comment-deletion strike force shows up): So how exactly did Newton express his universal gravitational law? Was it like this: "$F_g$ is equal to $GMm/r^2,$ but I must avow that I doth not know neither $G$ nor big $M$". Or did he just assign some number "$X$" to the gravitational effect due to the Earth, which ended up being $GM$? Philip Wood: I'm pretty sure that Newton never wrote his law of gravitation in algebraic form, nor thought in terms of a gravitational constant. In fact the Principia looks more like geometry than algebra. Algebra was not the trusted universal tool that it is today. Even as late as the 1790s, Cavendish's lead balls experiment was described as 'weighing [finding the mass of] the Earth', rather than as determining the gravitational constant. Interestingly, Newton estimated the mean density of the Earth pretty accurately (how, I don't know), so he could have given a value for G if he'd thought algebraically. Mark H: Philip Wood is correct. Newton wrote Principia in sentences, not equations. The laws of gravity were described in two parts (quoting from a translation): "In two spheres mutually gravitating each towards the other, ... the weight of either sphere towards the other will be reciprocally as the square of the distance between their centres." And, "That there is a power of gravity tending to all bodies, proportional to the several quantities of matter which they contain." This is the full statement of the behavior of gravity. No equations or constants used. Who first measured the standard gravitational acceleration 9.80 m/s/s? I assume that was well known by the time of Newton? After a quick search, I can't find who first measured $g = 9.8\,\mathrm{m/s^2}$. It's not a difficult measurement, but it would require accurate clocks with subsecond accuracy.
This is an interesting article: https://en.wikipedia.org/wiki/Standard_gravity Actually, on page 520, Newton lists the acceleration due to gravity at Earth's surface like so: "the same body, ... falling by the impulse of the same centripetal force as before [Earth's gravity], would, in one second of time, describe 15 1/12 Paris feet." So, the value was first measured sometime between Galileo's experiments and Newton's Principia. Was Newton (and therefore all of us!) just a tiny bit lucky that the ratios worked out so nicely? I'm not putting down Sir Isaac (perhaps the smartest bloke who's ever drawn breath in tights), but even I might notice that $\frac{g_\text{Earth}}{a_{c,\text{Moon}}}=3600=\left(\frac{r_\text{Earth-to-Moon}}{r_\text{Earth}}\right)^2$. If the ratio had been a little messier, say one to 47½, it might have been a little harder to spot the connection. Newton knew that the moon was not exactly 60 earth-radii distant. He quotes a number of measurements in Principia: "The mean distance of the moon from the centre of the earth, is, in semi-diameters of the earth, according to Ptolemy, Kepler in his Ephemerides, Bullialdus, Hevelius, and Riccioli, 59; according to Flamsted, 59 1/3; according to Tycho, 56 1/2; to Vendelin, 60; to Copernicus, 60 1/3; to Kircher, 62 1/2 (p. 391, 392, 393)." He used 60 as an average, which results in an easily calculable square, but squaring isn't a difficult calculation anyway. The inverse square law was already being talked about by many scientists at the time, including Robert Hooke. Newton used the Moon as a confirmation of the inverse square law, not to discover it. He already knew what the answer should be if the inverse square law was true. In fact, it was the orbital laws discovered by Johannes Kepler--especially the constant ratio of the cube of the average distance from the central body and the square of the orbital period--that provided the best evidence for the inverse square law.
In "The System of the World" part of Newton's Principia, he uses astronomical data to show that gravity is a universal phenomenon: the planets around the Sun, the moons around Jupiter, the moons around Saturn, and the Moon around Earth. For the last, in order to establish the ratio of forces and accelerations, you need at least two bodies. Since Earth only has one moon, he made the comparison with terrestrial acceleration. I would love to read a proof (requiring less mathematical nous than Sir Isaac had at his disposal) for the connection from Kepler's 3rd law to Newton's inverse square. Do you know of one? A simple derivation of the inverse square law from Kepler's Third Law can be given for circular orbits pretty easily. Define $r$ as the constant radius of the orbit, $T$ as the time period of the orbit, $v$ as the planet's velocity, $m$ as the mass of the orbiting planet, $F$ as the gravitational force, and $k$ as some constant. \begin{align} \frac{r^3}{T^2} = k &\iff r^3 = kT^2 \\ &\iff r^3 = k\left(\frac{2\pi r}{v}\right)^2 \\ &\iff r = \frac{4\pi^2k}{v^2} \\ &\iff \frac{v^2}{r} = \frac{4\pi^2k}{r^2} \\ &\iff \frac{mv^2}{r} = \frac{4\pi^2km}{r^2} \\ &\iff F = \frac{4\pi^2km}{r^2} \end{align} The quantity $v^2/r$ is the centripetal acceleration necessary for constant speed circular motion.
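The chain of equivalences above can also be sanity-checked numerically: for any family of circular orbits sharing one Kepler constant $k = r^3/T^2$, the centripetal acceleration computed from the orbit always equals $4\pi^2 k/r^2$. A minimal sketch (the value of $k$ is arbitrary):

```python
import math

k = 1.0e13  # arbitrary Kepler constant r^3/T^2, SI units

def accel(r):
    """Centripetal acceleration v^2/r of a circular orbit with r^3/T^2 = k."""
    T = math.sqrt(r**3 / k)   # period from Kepler's Third Law
    v = 2 * math.pi * r / T   # orbital speed for a circular orbit
    return v**2 / r

for r in (1e8, 2e8, 4e8):
    # Ratio against the predicted 4*pi^2*k / r^2 -> 1.0 for every radius
    print(accel(r) / (4 * math.pi**2 * k / r**2))
```

Doubling the radius quarters the acceleration, which is the inverse square law falling straight out of $r^3/T^2 = \text{const}$.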
{ "domain": "physics.stackexchange", "id": 61143, "tags": "newtonian-gravity, experimental-physics, history, physical-constants" }
How do engineers deal with sinkholes?
Question: Sinkholes have been known to occur in the middle of cities or other locations where they affect buildings: Some areas are more prone to sinkholes than other areas because of the presence of old mines or limestone bedrock. Even in these areas where sinkholes are more common, humans continue to build buildings and even airports. How do engineers prevent or mitigate sinkholes? Is it as low-tech as dumping in rock until no more fits? Answer: Whether it's natural subsidence, like sinkholes, or human-induced subsidence, like the collapse of underground engineered chambers or mining subsidence, the two ways of dealing with it are backfilling, or leaving it alone and enforcing an exclusion zone. Where backfill is used for the remediation of subsidence it's generally loose rock fill, because it's the cheapest form of backfill and excavations aren't going to be established against the backfill, particularly in civil situations. In mining, the subsidence backfill can be loose rock, sand or tailings, or cemented rock, sand or tailings; depending on circumstance, what materials are available and how much the company is prepared to pay. Edit: 26 March 2015 I came across this picture of a sinkhole being backfilled with concrete on the MSN news website. Additional Information 26 March 2015 One group of geotechnical contractors in the US advocates excavation and backfilling of sinkholes where the bedrock is no deeper than 4.5 m (15 ft). For deeper holes it recommends grouting, and for very deep holes it recommends cap grouting. The Karst Sinkhole Treatment document by the Natural Resources Conservation Service recommends establishing a buffer zone around the hole and backfilling. Depending on circumstances the backfill will include loose rock, concrete and, if necessary, geotextile. When floods hit Calgary, in Canada, in June 2013, creating numerous sinkholes, the holes were backfilled.
The Sinkhole Guide recommends backfilling sinkholes with “native earth materials or concrete. Broken limestone rip-rap or a concrete plug in the bottom of the sinkhole often helps create a stable foundation for the fill. Above that, add clayey sand to form a barrier that will help to prevent water from seeping downward through the hole and enlarging it further. Lastly, add sand and top soil, and landscape to surrounding conditions. Additional fill may be necessary over time, but most holes eventually stabilize.” According to the US Department of Transportation, fly ash in grout has been used to backfill sinkhole/subsidence holes in abandoned mines. In North Dakota, the Public Service Commission backfills subsidence holes in abandoned mines in that State.
{ "domain": "engineering.stackexchange", "id": 98, "tags": "geotechnical-engineering" }
ROS Init Exception
Question: Hi all! I wrote two nodes in python and get the following exception when I start the nodes. Can somebody tell me what's the problem?

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
    self.run()
  File "/opt/ros/electric/stacks/geometry/tf/src/tf/listener.py", line 238, in run
    rospy.spin()
  File "/opt/ros/electric/stacks/ros_comm/clients/rospy/src/rospy/client.py", line 98, in spin
    raise rospy.exceptions.ROSInitException("client code must call rospy.init_node() first")
ROSInitException: client code must call rospy.init_node() first

Thanks for help! Originally posted by JaRu on ROS Answers with karma: 153 on 2013-03-10 Post score: 0 Answer: You did not call rospy.init_node before calling rospy.spin. Originally posted by dornhege with karma: 31395 on 2013-03-10 This answer was ACCEPTED on the original site Post score: 0
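To make the fix concrete, a minimal corrected node might look like the sketch below. The node name "tf_listener_node" and the use of tf.TransformListener are assumptions inferred from the traceback (tf/listener.py calls rospy.spin() from a background thread, which is where the missing rospy.init_node() surfaced):

```python
def main():
    # Imports live inside main() only so this sketch stays importable
    # outside a ROS environment; in a real node they'd be at the top.
    import rospy
    import tf

    rospy.init_node("tf_listener_node")  # the missing call: must come first
    listener = tf.TransformListener()    # safe now that the node is initialised
    rospy.spin()                         # keep the node alive until shutdown

# In a ROS environment you would finish the script with:
#   if __name__ == "__main__":
#       main()
```

The key point is ordering: rospy.init_node() has to run before anything that talks to the ROS graph, including constructing a tf.TransformListener, because that constructor starts the listening thread seen in the traceback.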
{ "domain": "robotics.stackexchange", "id": 13285, "tags": "ros" }
What recent positive developments in LENR were identified in this US House report?
Question: In this Low Energy Nuclear Reactions (LENR) briefing document prepared for the US House of Representatives, we can read (page 8): The committee is aware of recent positive developments in developing low-energy nuclear reactions (LENR), which produce ultra-clean, low-cost renewable energy that have strong national security implications. For example, according to the Defense Intelligence Agency (DIA), if LENR works it will be a "disruptive technology that could revolutionize energy production and storage." The committee is also aware of the Defense Advanced Research Project Agency's (DARPA) findings that other countries including China and India are moving forward with LENR programs of their own and that Japan has actually created its own investment fund to promote such technology. The above document was linked from this New Scientist article. What kind of recent positive LENR developments is this document talking about? Answer: Perhaps the following report was one significant piece of information that stimulated the US House report: http://lenr-canr.org/acrobat/MosierBossinvestigat.pdf INVESTIGATION OF NANO-NUCLEAR REACTIONS IN CONDENSED MATTER FINAL REPORT Dr. Pamela A. Mosier-Boss, Mr. Lawrence P. Forsley, Dr. Patrick J. McDaniel Performed for the Defense Threat Reduction Agency. This summary of research funded by the US Navy over many years, coupled with a large number of positive reports from various other agencies and groups around the world, may have helped to stimulate renewed interest within the US Gov't. I think that it is fair to say that the probability that LENR is a real effect is almost surely between zero and one. If it turns out to be true, then the impact on humanity will be immeasurable. If it turns out to be false, then the impact of having wasted research money to explore it would be essentially negligible from a global perspective.
It makes sense therefore for humanity to spend some money studying it, as it has in fact been doing in isolated investments internationally. In my personal opinion, if LENR is true, then it means that our current understanding of nuclear and condensed matter physics are deficient, as they seem to preclude the possibility that most of the LENR experimental reports are possible. But at some point the number of experimental claims by legitimate researchers can no longer be dismissed based on theoretical bias, no matter how well accepted it is. There is a need for more physicists to become active in this field, but the reputation trap that most would face if they did, not to mention the lack of support, are extreme negative inducements. There may also be negative effects of LENR that aren't apparent to us now, so that it should be carefully monitored. Because of the upside potential, it is impossible to prevent overly optimistic and eager inventors from trying to capitalize on the prospect of unlimited nearly free energy. There have been a number of unsuccessful inventors and startups trying to build LENR or cold fusion reactors over the years. The first one that I know of was John Tandberg, at Electrolux in Sweden, in the 1920s (http://newenergytimes.com/v2/books/Reviews/SoederbergByBritz.shtml). Someday one of them might succeed.
{ "domain": "physics.stackexchange", "id": 33974, "tags": "nuclear-physics, cold-fusion" }
"Twelve people have stood on the moon, but only one person has been to the Gates of Hell." What Guinness World Record did George Kourounis receive? Question: In NPR's podcast and transcript For 50 years, 'The Gates Of Hell' crater has burned. Now officials want to put it out NPR's Scott Simon speaks to George Kourounis, Royal Canadian Geographical Society's explorer-in-residence, about the possible closing of "The Gates of Hell," a natural gas field in Turkmenistan. SIMON: You sound like you like this place. KOUROUNIS: Twelve people have stood on the surface of the moon, but only one person has been to the Gates of Hell. And I was very fortunate to be able to go there. And I even have the Guinness World Record certificate above my desk to help commemorate that. So I get a kick out of that every time I see it. Question: Exactly what "Gates of Hell" Guinness World Record did Royal Canadian Geographical Society's explorer-in-residence George Kourounis receive? Answer: According to the Guinness Book of World Records site, George Kourounis holds the record as the first person to reach the bottom: The first person to reach the bottom of Darvaza gas crater is George Kourounis (Canada), and was achieved at the Darvaza gas crater in Darvaza, Turkmenistan on 6 October 2013. More details from the same website. In November 2013, explorer and storm-chaser George Kourounis (Canada) became the first known person to venture into the blazing Darvaza Crater located in a natural gas field in the Karakum Desert, Turkmenistan. Also known as the “Door to Hell”, the fiery feature has been ablaze since 1971, when it’s widely believed that the ground caved in as a result of drilling and the pit was intentionally set alight to burn off the leaking gas. Wearing an insulated aluminium suit and using a custom-made Kevlar climbing harness, Kourounis descended to the crater’s base to collect rock samples.
Question: In NPR's podcast and transcript For 50 years, 'The Gates Of Hell' crater has burned. Now officials want to put it out NPR's Scott Simon speaks to George Kourounis, Royal Canadian Geographical Society's explorer-in-residence, about the possible closing of "The Gates of Hell," a natural gas field in Turkmenistan. SIMON: You sound like you like this place. KOUROUNIS: Twelve people have stood on the surface of the moon, but only one person has been to the Gates of Hell. And I was very fortunate to be able to go there. And I even have the Guinness World Record certificate above my desk to help commemorate that. So I get a kick out of that every time I see it. Question: Exactly what "Gates of Hell" Guinness World Record did Royal Canadian Geographical Society's explorer-in-residence George Kourounis receive? "Twelve people have stood on the surface of the moon, but only one person has been to the Gates of Hell." Exactly what Guinness World Record did Royal Canadian Geographical Society's explorer-in-residence George Kourounis receive? Answer: According to the Guinness Book of World Records site, George Kourounis holds the record as the first person to reach the bottom: The first person to reach the bottom of Darvaza gas crater is George Kourounis (Canada), and was achieved at the Darvaza gas crater in Darvaza, Turkmenistan on 6 October 2013. More details from the same website. In November 2013, explorer and storm-chaser George Kourounis (Canada) became the first known person to venture into the blazing Darvaza Crater located in a natural gas field in the Karakum Desert, Turkmenistan. Also known as the “Door to Hell”, the fiery feature has been ablaze since 1971, when it’s widely believed that the ground caved in as a result of drilling and the pit was intentionally set alight to burn off the leaking gas. Wearing an insulated aluminium suit and using a custom-made Kevlar climbing harness, Kourounis descended to the crater’s base to collect rock samples. 
Later lab tests revealed bacteria living on the rocks, proving that life can survive the extreme temperatures which reach in excess of 1,000°C (1,830°F). The crater is 69 m (225 ft) wide and 30 m (99 ft) deep. Kourounis' expedition was financially backed by National Geographic and travel company Kensington Tours.
{ "domain": "earthscience.stackexchange", "id": 2423, "tags": "methane, fire" }
How to Determine Specific Activation Function from keras' .summary()
Question: I'm following a tutorial where a particular model is provided in .h5 format. Of course, I can call model.summary() on this model after loading it with load_model(); however, the output looks like this:

Layer (type)                 Output Shape              Param #
=================================================================
conv1d_1 (Conv1D)            (None, 400, 32)           1568
_________________________________________________________________
batch_normalization_1 (Batch (None, 400, 32)           128
_________________________________________________________________
activation_1 (Activation)    (None, 400, 32)           0
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 100, 32)           0
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 100, 64)           32832
_________________________________________________________________
batch_normalization_2 (Batch (None, 100, 64)           256
_________________________________________________________________
activation_2 (Activation)    (None, 100, 64)           0
_________________________________________________________________
max_pooling1d_2 (MaxPooling1 (None, 25, 64)            0
_________________________________________________________________
conv1d_3 (Conv1D)            (None, 25, 128)           131200
_________________________________________________________________
batch_normalization_3 (Batch (None, 25, 128)           512
_________________________________________________________________
activation_3 (Activation)    (None, 25, 128)           0
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, 6, 128)            0
_________________________________________________________________
flatten_1 (Flatten)          (None, 768)               0
_________________________________________________________________
dense_1 (Dense)              (None, 80)                61520
_________________________________________________________________
batch_normalization_4 (Batch (None, 80)                320
_________________________________________________________________
activation_4 (Activation)    (None, 80)                0
_________________________________________________________________
dense_2 (Dense)              (None, 80)                6480
_________________________________________________________________
batch_normalization_5 (Batch (None, 80)                320
_________________________________________________________________
activation_5 (Activation)    (None, 80)                0
_________________________________________________________________
dense_3 (Dense)              (None, 2)                 162
_________________________________________________________________
activation_6 (Activation)    (None, 2)                 0

My question: is there a way to find out which specific activation function was used, e.g. relu, softmax, sigmoid, etc., using existing keras methods? Answer: You can check the type of activation in the layer config: model.layers[idx].get_config(), where idx is the index of the desired layer.
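As a sketch of how that looks in practice: with TensorFlow/Keras installed you would first call load_model("model.h5"), then walk model.layers and read the "activation" entry out of each layer's config. The stub classes below only stand in for real Keras layers so the helper can be demonstrated end to end; real Conv1D/Dense configs also carry an "activation" key (it reads "linear" when separate Activation layers are used, as in the summary above), while BatchNormalization and pooling layers have none.

```python
def activations(model):
    """Return (index, layer name, activation) for every layer whose config records one."""
    found = []
    for idx, layer in enumerate(model.layers):
        config = layer.get_config()
        if "activation" in config:
            found.append((idx, config.get("name"), config["activation"]))
    return found

# --- tiny stand-ins so the helper can be run without Keras installed ---
class StubLayer:
    def __init__(self, config):
        self._config = config
    def get_config(self):
        return dict(self._config)

class StubModel:
    def __init__(self, layers):
        self.layers = layers

model = StubModel([
    StubLayer({"name": "conv1d_1", "activation": "linear"}),
    StubLayer({"name": "batch_normalization_1"}),  # no activation key at all
    StubLayer({"name": "activation_1", "activation": "relu"}),
    StubLayer({"name": "activation_6", "activation": "softmax"}),
])

print(activations(model))
# [(0, 'conv1d_1', 'linear'), (2, 'activation_1', 'relu'), (3, 'activation_6', 'softmax')]
```

On the real model, replacing the stubs with `model = tensorflow.keras.models.load_model("model.h5")` and calling `activations(model)` lists each activation in order.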
{ "domain": "datascience.stackexchange", "id": 5338, "tags": "classification, keras" }
AsyncDictionary - Can you break thread safety?
Question: This class is an Async/Await wrapped Dictionary. Of course it doesn't technically implement IDictionary, but the functionality is basically the same as an IDictionary. It achieves similar functionality to ConcurrentDictionary but with async/await, and is non-blocking. Note: please pay attention to the challenge. The challenge is to see if thread safety can be broken. There may not be a strong justification for the class's existence, but this is not the question. The question is: does it stand up to testing?

Code Is Here

public class AsyncDictionary<TKey, TValue> : IAsyncDictionary<TKey, TValue>, IDisposable
{
    #region Fields
    private readonly IDictionary<TKey, TValue> _dictionary;
    private readonly SemaphoreSlim _semaphoreSlim = new SemaphoreSlim(1, 1);
    private bool disposedValue = false;
    #endregion

    #region Func
    private static readonly Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<bool>> ContainsKeyFunc = new Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<bool>>((dictionary, keyValuePair) =>
    {
        return Task.FromResult(dictionary.ContainsKey(keyValuePair.Key));
    });

    private static readonly Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<bool>> ClearFunc = new Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<bool>>((dictionary, keyValuePair) =>
    {
        dictionary.Clear();
        return Task.FromResult(true);
    });

    private static readonly Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<int>> GetCountFunc = new Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<int>>((dictionary, keyValuePair) =>
    {
        return Task.FromResult(dictionary.Count);
    });

    private static readonly Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<ICollection<TValue>>> GetValuesFunc = new Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<ICollection<TValue>>>((dictionary, keyValuePair) =>
    {
        return Task.FromResult(dictionary.Values);
    });

    private static readonly Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<ICollection<TKey>>> GetKeysFunc = new Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<ICollection<TKey>>>((dictionary, keyValuePair) =>
    {
        return Task.FromResult(dictionary.Keys);
    });

    private static readonly Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<bool>> AddFunc = new Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<bool>>((dictionary, keyValuePair) =>
    {
        dictionary.Add(keyValuePair);
        return Task.FromResult(true);
    });

    private static readonly Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<bool>> AddOrReplaceFunc = new Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<bool>>((dictionary, keyValuePair) =>
    {
        if (dictionary.ContainsKey(keyValuePair.Key))
        {
            dictionary[keyValuePair.Key] = keyValuePair.Value;
        }
        else
        {
            dictionary.Add(keyValuePair.Key, keyValuePair.Value);
        }
        return Task.FromResult(true);
    });

    private static readonly Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<bool>> ContainsItemFunc = new Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<bool>>((dictionary, keyValuePair) =>
    {
        return Task.FromResult(dictionary.Contains(keyValuePair));
    });

    private static readonly Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<bool>> RemoveFunc = new Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<bool>>((dictionary, keyValuePair) =>
    {
        return Task.FromResult(dictionary.Remove(keyValuePair));
    });

    private static readonly Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<bool>> RemoveByKeyFunc = new Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<bool>>((dictionary, keyValuePair) =>
    {
        return Task.FromResult(dictionary.Remove(keyValuePair.Key));
    });

    private static readonly Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<TValue>> GetValueFunc = new Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<TValue>>((dictionary, keyValuePair) =>
    {
        return Task.FromResult(dictionary[keyValuePair.Key]);
    });
    #endregion

    #region Constructor
    public AsyncDictionary()
    {
        //Note: the constructor overload to allow passing in a different Dictionary type has been removed to disallow unsynchronized access. It can be added if you're careful.
        _dictionary = new Dictionary<TKey, TValue>();
    }

    /// <summary>
    /// This overload is used in cases where a standard Dictionary isn't the right choice. Warning: accessing the Dictionary outside this class will break synchronization
    /// </summary>
    //public AsyncDictionary(IDictionary<TKey, TValue> dictionary)
    //{
    //    _dictionary = dictionary;
    //}
    #endregion

    #region Implementation
    //Only when C# 8 comes!
    //TODO: IEnumerator<KeyValuePair<T1, T2>> GetEnumerator()
    //TODO: IEnumerator IEnumerable.GetEnumerator()

    public Task<ICollection<TKey>> GetKeysAsync()
    {
        return CallSynchronizedAsync(GetKeysFunc, default);
    }

    public Task<ICollection<TValue>> GetValuesAsync()
    {
        return CallSynchronizedAsync(GetValuesFunc, default);
    }

    public Task<int> GetCountAsync()
    {
        return CallSynchronizedAsync(GetCountFunc, default);
    }

    public Task AddAsync(TKey key, TValue value)
    {
        return CallSynchronizedAsync(AddFunc, new KeyValuePair<TKey, TValue>(key, value));
    }

    public Task AddAsync(KeyValuePair<TKey, TValue> item)
    {
        return CallSynchronizedAsync(AddFunc, item);
    }

    public Task AddOrReplaceAsync(TKey key, TValue value)
    {
        return CallSynchronizedAsync(AddOrReplaceFunc, new KeyValuePair<TKey, TValue>(key, value));
    }

    public Task ClearAsync()
    {
        return CallSynchronizedAsync(ClearFunc, default);
    }

    public Task<bool> GetContainsAsync(KeyValuePair<TKey, TValue> item)
    {
        return CallSynchronizedAsync(ContainsItemFunc, item);
    }

    public Task<bool> GetContainsKeyAsync(TKey key)
    {
        return CallSynchronizedAsync(ContainsKeyFunc, new KeyValuePair<TKey, TValue>(key, default));
    }

    public Task<bool> RemoveAsync(TKey key)
    {
        return CallSynchronizedAsync(RemoveByKeyFunc, new KeyValuePair<TKey, TValue>(key, default));
    }

    public Task<bool> RemoveAsync(KeyValuePair<TKey, TValue> item)
    {
        return CallSynchronizedAsync(RemoveFunc, item);
    }

    public Task<TValue> GetValueAsync(TKey key)
    {
        return CallSynchronizedAsync(GetValueFunc, new KeyValuePair<TKey, TValue>(key, default));
    }
    #endregion

    #region Private Methods
    private async Task<TReturn> CallSynchronizedAsync<TReturn>(Func<IDictionary<TKey, TValue>, KeyValuePair<TKey, TValue>, Task<TReturn>> func, KeyValuePair<TKey, TValue> keyValuePair)
    {
        try
        {
            await _semaphoreSlim.WaitAsync();
            return await Task.Run(async () =>
            {
                return await func(_dictionary, keyValuePair);
            });
        }
        finally
        {
            _semaphoreSlim.Release();
        }
    }
    #endregion

    #region IDisposable Support
    protected virtual void Dispose(bool disposing)
    {
        if (!disposedValue)
        {
            if (disposing)
            {
                _semaphoreSlim.Dispose();
            }
            disposedValue = true;
        }
    }

    public void Dispose()
    {
        Dispose(true);
    }
    #endregion
}

I set up some unit tests here, but I'd like to see if anyone can break the thread safety of this. Can you add to the unit tests? Can you make the dictionary return the wrong results? Can you cause an exception that shouldn't come up from normal use of this class? Can you detect any other concurrency issues? Can you find any other bugs? Note: PRs are more than welcome, and the more unit tests, the better!
Code Is Here public class AsyncDictionaryTests { #region Fields private const int max = 800; #endregion #region Tests [Test] public async Task TestAddAndRetrieveKeys() { var asyncDictionary = new AsyncDictionary<int, string>(); const int key = 1; await asyncDictionary.AddAsync(key, key.ToString()); var keys = (await asyncDictionary.GetKeysAsync()).ToList(); Assert.AreEqual(key, keys[0]); } [Test] public async Task TestAddAndRetrieveValues() { var asyncDictionary = new AsyncDictionary<int, string>(); const int key = 1; var value = key.ToString(); await asyncDictionary.AddAsync(key, value); var values = (await asyncDictionary.GetValuesAsync()).ToList(); Assert.AreEqual(value, values[0].ToString()); } [Test] public async Task TestContainsKey() { var asyncDictionary = new AsyncDictionary<int, string>(); const int key = 1; await asyncDictionary.AddAsync(key, key.ToString()); var contains = await asyncDictionary.GetContainsKeyAsync(key); Assert.True(contains); } [Test] public async Task TestContains() { var asyncDictionary = new AsyncDictionary<int, string>(); const int key = 1; var value = key.ToString(); var kvp = new KeyValuePair<int, string>(key, value); await asyncDictionary.AddAsync(kvp); var contains = await asyncDictionary.GetContainsAsync(kvp); Assert.True(contains); } [Test] public async Task TestRemoveByKey() { var asyncDictionary = new AsyncDictionary<int, string>(); const int key = 1; await asyncDictionary.AddAsync(key, key.ToString()); var contains = await asyncDictionary.GetContainsKeyAsync(key); Assert.True(contains); await asyncDictionary.RemoveAsync(key); contains = await asyncDictionary.GetContainsKeyAsync(key); Assert.False(contains); } [Test] public async Task TestRemove() { var asyncDictionary = new AsyncDictionary<int, string>(); const int key = 1; var kvp = new KeyValuePair<int, string>(key, key.ToString()); await asyncDictionary.AddAsync(kvp); var contains = await asyncDictionary.GetContainsKeyAsync(key); Assert.True(contains); await 
asyncDictionary.RemoveAsync(kvp); contains = await asyncDictionary.GetContainsKeyAsync(key); Assert.False(contains); } [Test] public async Task TestGetValue() { var asyncDictionary = new AsyncDictionary<int, string>(); const int key = 1; await asyncDictionary.AddAsync(key, key.ToString()); var value = await asyncDictionary.GetValueAsync(key); Assert.AreEqual(key.ToString(), value); } [Test] public async Task TestClear() { var asyncDictionary = new AsyncDictionary<int, string>(); const int key = 1; var value = key.ToString(); await asyncDictionary.AddAsync(key, value); await asyncDictionary.ClearAsync(); var values = (await asyncDictionary.GetValuesAsync()).ToList(); Assert.IsEmpty(values); } [Test] public async Task TestAnotherType() { var asyncDictionary = new AsyncDictionary<string, Thing>(); var thing = new Thing { Name="test", Size=100 }; await asyncDictionary.AddAsync(thing.Name, thing); var newthing = await asyncDictionary.GetValueAsync(thing.Name); Assert.True(ReferenceEquals(thing, newthing)); } [Test] public async Task TestThreadSafety() { var asyncDictionary = new AsyncDictionary<int, string>(); var tasks = new List<Task> { AddKeyValuePairsAsync(asyncDictionary), asyncDictionary.ClearAsync(), AddKeyValuePairsAsync(asyncDictionary) }; await Task.WhenAll(tasks); tasks = new List<Task> { AddKeyValuePairsAsync(asyncDictionary), AddKeyValuePairsAsync(asyncDictionary), AddKeyValuePairsAsync(asyncDictionary) }; await Task.WhenAll(tasks); tasks = new List<Task> { DoTestEquality(asyncDictionary), DoTestEquality(asyncDictionary), DoTestEquality(asyncDictionary), DoTestEquality(asyncDictionary), AddKeyValuePairsAsync(asyncDictionary) }; await Task.WhenAll(tasks); } #endregion #region Helpers private static async Task DoTestEquality(AsyncDictionary<int, string> asyncDictionary) { var tasks = new List<Task>(); for (var i = 0; i < max; i++) { tasks.Add(TestEquality(asyncDictionary, i)); } await Task.WhenAll(tasks); } private static async Task 
TestEquality(AsyncDictionary<int, string> asyncDictionary, int i) { var expected = i.ToString(); var actual = await asyncDictionary.GetValueAsync(i); Console.WriteLine($"Test Equality Expected: {expected} Actual: {actual}"); Assert.AreEqual(expected, actual); } private static async Task AddKeyValuePairsAsync(AsyncDictionary<int, string> asyncDictionary) { var tasks = AddSome(asyncDictionary); await Task.WhenAll(tasks); } private static List<Task> AddSome(AsyncDictionary<int, string> asyncDictionary) { var tasks = new List<Task>(); for (var i = 0; i < max; i++) { tasks.Add(AddByNumber(asyncDictionary, i)); } return tasks; } private static Task AddByNumber(AsyncDictionary<int, string> asyncDictionary, int i) { return asyncDictionary.AddOrReplaceAsync(i, i.ToString()); } #endregion } To see a UWP sample application, please clone the repo and run the sample there. Notes: this class is designed for: maintainability first, concurrency, and flexibility. It is modeled after IDictionary but embraces the async-await paradigm. It comes after years of frustration in trying to synchronise cache in async-await C# apps while trying to avoid blocking calls. It is heavily based on SemaphoreSlim with a maximum request concurrency of 1. Experience seems to indicate that this class behaves in a FIFO manner. However, the notes on SemaphoreSlim are a little worrying: If multiple threads are blocked, there is no guaranteed order, such as FIFO or LIFO, that controls when threads enter the semaphore. Is this an Achilles heel? The SemaphoreSlim code can be found here. Can you create a scenario where the FIFO is not honored in a way that breaks the functionality of the class? Conclusion: the marked answer exploits a mistake in the original code to break thread safety. However, the exercise was informative, as the larger question arose: what would the point of this class be?
From my naive perspective, it's designed in such a way that beginner programmers could use it and likely achieve success with thread safety; it is a little less complex than using ConcurrentDictionary, and it uses the async-await pattern. Would this approach be recommended? Certainly not for performance reasons. But the question would need to be asked in a different way to determine whether this class is useful or not. Answer: If you modify the AsyncDictionary while enumerating its keys/values it throws InvalidOperationException (if the backing dictionary is a Dictionary).

var numbers = new AsyncDictionary<int, int>();
foreach (var number in Enumerable.Range(1, 1000))
{
    await numbers.AddAsync(number, number);
}
foreach (var number in await numbers.GetKeysAsync())
{
    await numbers.RemoveAsync(number);
}

A ConcurrentDictionary handles this scenario just fine.
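The failure mode the answer exploits is not specific to C#: a plain hash map's iterator is fail-fast, while a concurrent map's iterator is weakly consistent and tolerates modification. As a rough analogue (sketched in Java rather than C#, since the point is the iterator contract, not the AsyncDictionary API):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IterateWhileRemoving {
    public static void main(String[] args) {
        // Plain HashMap: the fail-fast iterator throws once the map is
        // structurally modified mid-iteration.
        Map<Integer, Integer> plain = new HashMap<>();
        for (int i = 1; i <= 1000; i++) plain.put(i, i);
        boolean threw = false;
        try {
            for (Integer key : plain.keySet()) plain.remove(key);
        } catch (ConcurrentModificationException e) {
            threw = true;
        }
        System.out.println("HashMap threw: " + threw);

        // ConcurrentHashMap: the weakly consistent iterator tolerates
        // removal during iteration, mirroring ConcurrentDictionary's behavior.
        Map<Integer, Integer> concurrent = new ConcurrentHashMap<>();
        for (int i = 1; i <= 1000; i++) concurrent.put(i, i);
        for (Integer key : concurrent.keySet()) concurrent.remove(key);
        System.out.println("ConcurrentHashMap size after: " + concurrent.size());
    }
}
```

This is why wrapping a plain Dictionary behind a semaphore is not enough: the snapshot returned by GetKeysAsync would need to be a defensive copy for the removal loop above to be safe.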
{ "domain": "codereview.stackexchange", "id": 35596, "tags": "c#, unit-testing, thread-safety, concurrency, async-await" }
RVIZ unable to load libdefault_plugin.so
Question: Hi, after I tried running the motion planning example for the PR2 robot, there is a problem with my RVIZ: it is unable to load libdefault_plugin.so, and thus when I click Add, an empty panel appears and I am unable to add any display type, e.g. camera. The exact error at the terminal when I launch RVIZ is as follows: " [ERROR] [1349764334.800506677]: wxWidgets Error [/opt/ros/electric/stacks/robot_model/colladadom/lib/libcollada15dom.so: undefined symbol: _ZN7pcrecpp2RE4InitEPKcPKNS_10RE_OptionsE] [ERROR] [1349764334.800632354]: Unable to load library [/opt/ros/electric/stacks/visualization/rviz/lib/libdefault_plugin.so] " I shall be thankful if anyone could guide me out of this. Originally posted by BUTT on ROS Answers with karma: 31 on 2012-09-09 Post score: 0 Original comments Comment by jbohren on 2012-09-10: What versions of ROS / OS are you using? Comment by BUTT on 2012-09-10: I am using ROS Electric and Ubuntu 11.10. Comment by BUTT on 2012-10-08: this didn't help. I was onto some other work, but now I need rviz again and am stuck with the same error when I run rviz. Have tried reinstalling the ros-electric-visualization package and robot_model too. Any ideas on the above error: [ERROR] [1349764334.800506677]: wxWidgets Error [/opt/ros/electric/stack Answer: Check if you have compiled version of colladadom. roscd colladadom ls lib/ if the lib/ directory is empty try to recompile the package rosmake colladadom Originally posted by Jakub with karma: 1821 on 2012-09-13 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 10964, "tags": "ros" }
robot_localization publishing to tf incorrectly
Question: I have left the robot_localization odom_frame parameter at its default value "odom" and base_link_frame at its default "base_link". The tf tree, while running robot_localization and the robot driver ca_driver with publish_tf set to "false", looks like this (apparently I can't upload a file with fewer than 5 points):

odom (floating)
base_footprint ---> base_link (broadcaster /ekf_localization)
base_link --> right_wheel link (broadcaster /robot_state_publisher)
base_link --> left_wheel link (broadcaster /robot_state_publisher)
....

Why is ekf_localization broadcasting base_footprint --> base_link and leaving odom floating (not connected to anything)? I can't even find base_footprint anywhere in the robot_localization parameter list. It is part of the robot description. Any help clarifying this would be appreciated. Edit Adding image of original tf tree Originally posted by ras_cal on ROS Answers with karma: 40 on 2016-08-10 Post score: 0 Original comments Comment by ahendrix on 2016-08-10: I've bumped your karma; you should be able to post images now. Comment by ras_cal on 2016-08-11: Thank you. I've posted image. Answer: By changing the base_link_frame parameter value from base_link to base_footprint, I was able to connect the odom frame to base_footprint. I can now run the robot and visualize the ekf_localization output in RViz. Interestingly, the base_footprint to base_link transform is now broadcast by /robot_state_publisher Originally posted by ras_cal with karma: 40 on 2016-08-11 This answer was ACCEPTED on the original site Post score: 0
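For reference, a minimal sketch of the frame parameters involved (a hypothetical parameter file, not the poster's actual configuration; only the frame-related keys are shown). The fix amounts to making base_link_frame match the root link of the robot description:

```yaml
# ekf_localization.yaml (sketch, not a complete configuration)
odom_frame: odom
base_link_frame: base_footprint   # was base_link; must be the root of the robot's TF tree
world_frame: odom                 # ekf then publishes the odom -> base_footprint transform
```

With this, ekf_localization broadcasts odom -> base_footprint, and robot_state_publisher keeps publishing base_footprint -> base_link from the URDF, so the tree is connected.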
{ "domain": "robotics.stackexchange", "id": 25495, "tags": "ros, navigation, odometry, base-link, robot-localization" }
Doing wire 3D cube without libraries
Question: I was having fun implementing some really basic stuff for 3D graphics. See what I came up with: Matrix.java: package net.coderodde.lib3d; /** * This class represents a matrix. * * @author Rodion "rodde" Efremov * @version 1.6 */ public class Matrix { /** * The actual elements. */ private final double[][] matrix; /** * Constructs a zero matrix with <code>rows</code> rows and * <code>columns</code> columns. * * @param rows the amount of rows in the matrix. * @param columns the amount of columns in the new matrix. */ public Matrix(int rows, int columns) { this.matrix = new double[rows][]; for (int i = 0; i < rows; ++i) { this.matrix[i] = new double[columns]; } } /** * Constructs a zero square matrix with <code>dimension</code> rows and * columns. * * @param dimension the dimension of this square matrix. */ public Matrix(int dimension) { this(dimension, dimension); } /** * Multiplies this matrix by vertex <code>v</code> and returns the result, * which is a vertex too. * * @param v the vertex to multiply by this matrix. * @return the result vertex. */ public Vertex product(Vertex v) { double[] vector = new double[]{ v.x, v.y, v.z }; double[] vec = new double[vector.length]; for (int row = 0; row < matrix.length; ++row) { for (int col = 0; col < matrix[row].length; ++col) { vec[row] += vector[col] * matrix[row][col]; } } return new Vertex(vec[0], vec[1], vec[2]); } /** * Returns the matrix for rotating around <tt>x</tt>-axis. * * @param angle the rotation angle in radians. * @return the rotation matrix. */ public static Matrix getXRotationMatrix(double angle) { final Matrix ret = new Matrix(3); ret.matrix[0][0] = 1; ret.matrix[0][1] = 0; ret.matrix[0][2] = 0; ret.matrix[1][0] = 0; ret.matrix[2][0] = 0; ret.matrix[1][1] = Math.cos(angle); ret.matrix[1][2] = -Math.sin(angle); ret.matrix[2][1] = Math.sin(angle); ret.matrix[2][2] = Math.cos(angle); return ret; } /** * Returns the matrix for rotating around <tt>y</tt>-axis. 
* * @param angle the rotation angle in radians. * @return the rotation matrix. */ public static Matrix getYRotationMatrix(double angle) { final Matrix ret = new Matrix(3); ret.matrix[0][1] = 0; ret.matrix[1][0] = 0; ret.matrix[1][2] = 0; ret.matrix[2][1] = 0; ret.matrix[1][1] = 1; ret.matrix[0][0] = +Math.cos(angle); ret.matrix[0][2] = +Math.sin(angle); ret.matrix[2][0] = -Math.sin(angle); ret.matrix[2][2] = +Math.cos(angle); return ret; } /** * Returns the matrix for rotating around <tt>z</tt>-axis. * * @param angle the rotation angle in radians. * @return the rotation matrix. */ public static Matrix getZRotationMatrix(double angle) { final Matrix ret = new Matrix(3); ret.matrix[0][2] = 0; ret.matrix[1][2] = 0; ret.matrix[2][2] = 1; ret.matrix[2][1] = 0; ret.matrix[2][0] = 0; ret.matrix[0][0] = Math.cos(angle); ret.matrix[0][1] = -Math.sin(angle); ret.matrix[1][0] = Math.sin(angle); ret.matrix[1][1] = Math.cos(angle); return ret; } } Vertex.java: package net.coderodde.lib3d; import java.awt.Color; import java.util.ArrayList; import java.util.Collections; import java.util.Iterator; import java.util.List; /** * This class implements a vertex that can be thought of as a vector. * * @author Rodion "rodde" Efremov * @version 1.6 */ public class Vertex implements Iterable<Vertex> { public double x; public double y; public double z; /** * The list of neighbor vertices. */ private final List<Vertex> neighborVertexList; /** * The list of colors. */ private final List<Color> neighborColorList; /** * Constructs a new vertex. * * @param x the initial <tt>x</tt>-coordinate. * @param y the initial <tt>y</tt>-coordinate. * @param z the initial <tt>z</tt>-coordinate. */ public Vertex(double x, double y, double z) { this.x = x; this.y = y; this.z = z; this.neighborVertexList = new ArrayList<>(); this.neighborColorList = new ArrayList<>(); } /** * Constructs a new vertex with the same coordinates as <code>other</code>. * * @param other the other vertex. 
*/ public Vertex(Vertex other) { this(other.x, other.y, other.z); } /** * Constructs a new vertex located at origo. */ public Vertex() { this(0.0, 0.0, 0.0); } /** * Returns the view of colors of this vertex. * * @return the color view. */ public List<Color> getColorList() { return Collections.<Color>unmodifiableList(neighborColorList); } /** * Returns the view of neighbor vertices of this vertex. * * @return the neighbor view. */ public List<Vertex> getNeighborList() { return Collections.<Vertex>unmodifiableList(neighborVertexList); } /** * Adds a neighbor vertex to this vertex. * * @param neighbor the neighbor to add. * @param color the color of the edge from this vertex to * <code>neighbor</code>. */ public void addNeighbor(Vertex neighbor, Color color) { neighborVertexList.add(neighbor); neighborColorList.add(color); } /** * Returns an iterator over neighbor vertices of this vertex. * * @return an iterator. */ @Override public Iterator<Vertex> iterator() { return this.neighborVertexList.iterator(); } /** * Returns a string representation of this vertex. * * @return a string. */ @Override public String toString() { return "[Vertex (" + x + ", " + y + ", " + z + ")]"; } } SceneObject.java: package net.coderodde.lib3d; import java.util.ArrayList; import java.util.Collections; import java.util.Iterator; import java.util.List; /** * This class models an object in the scene. * * @author Rodion "rodde" Efremov * @version 1.6 */ public class SceneObject implements Iterable<Vertex> { /** * The list of vertices this object consists of. These are vector pointing * from <code>location</code>. */ private final List<Vertex> vertexList; /** * The location of this object. */ private final Vertex location; /** * Constructs a new scene object with given location. * * @param x the <tt>x</tt>-coordinate of the location. * @param y the <tt>y</tt>-coordinate of the location. * @param z the <tt>z</tt>-coordinate of the location. 
*/ public SceneObject(double x, double y, double z) { this.location = new Vertex(x, y, z); this.vertexList = new ArrayList<>(); } /** * Constructs a new scene object with given location. * * @param vertex the initial location. */ public SceneObject(Vertex vertex) { this(vertex.x, vertex.y, vertex.z); } /** * Constructs a new scene object with location at origo. */ public SceneObject() { this(0.0, 0.0, 0.0); } /** * Returns the view of vertices belonging to this object. * * @return the vertex view. */ public List<Vertex> getVertexList() { return Collections.<Vertex>unmodifiableList(vertexList); } /** * Returns the location of this scene object. * * @return the location. */ public Vertex getLocation() { return new Vertex(location); } /** * Sets the location of this object. * * @param v the new location. */ public void setLocation(Vertex v) { this.location.x = v.x; this.location.y = v.y; this.location.z = v.z; } /** * Adds a vertex to this object. * * @param vertex the vertex to add. */ public void add(Vertex vertex) { this.vertexList.add(vertex); } /** * Rotates this geometric object relative to the point * <code>relative</code>. * * @param relative the relative point. * @param angleAroundX the angle around the <tt>x</tt>-axis. * @param angleAroundY the angle around the <tt>y</tt>-axis. * @param angleAroundZ the angle around the <tt>z</tt>-axis. */ public void rotate(Vertex relative, double angleAroundX, double angleAroundY, double angleAroundZ) { rotateImpl(relative, location, angleAroundX, angleAroundY, angleAroundZ); // Rotate the location vector of this geometric object. final Vertex zero = new Vertex(); // Rotate the vertex vectors. for (final Vertex vertex : vertexList) { rotateImpl(zero, vertex, -angleAroundX, -angleAroundY, -angleAroundZ); } } /** * Implements the rotation routine. * * @param relative the relative location. * @param target the vertex to rotate. * @param angleAroundXAxis the angle around the <tt>x</tt>-axis.
* @param angleAroundY the angle around the <tt>y</tt>-axis. * @param angleAroundZ the angle around the <tt>z</tt>-axis. */ private void rotateImpl(Vertex relative, Vertex target, double angleAroundXAxis, double angleAroundYAxis, double angleAroundZAxis) { final Matrix x = Matrix.getXRotationMatrix(angleAroundXAxis); final Matrix y = Matrix.getYRotationMatrix(angleAroundYAxis); final Matrix z = Matrix.getZRotationMatrix(angleAroundZAxis); Vertex tmp = new Vertex(target.x - relative.x, target.y - relative.y, target.z - relative.z); tmp = x.product(tmp); tmp = y.product(tmp); tmp = z.product(tmp); target.x = relative.x + tmp.x; target.y = relative.y + tmp.y; target.z = relative.z + tmp.z; } @Override public Iterator<Vertex> iterator() { return this.vertexList.iterator(); } } SceneView.java: package net.coderodde.lib3d; import java.awt.Canvas; import java.awt.Color; import java.awt.Dimension; import java.awt.Graphics; import java.awt.event.KeyEvent; import java.awt.event.KeyListener; import java.util.ArrayList; import java.util.Arrays; import java.util.List; /** * This class implements a scene view displaying three-dimensional objects. * * @author Rodion "rodde" Efremov */ public class SceneView extends Canvas implements KeyListener { /** * The list of scene objects in this view. */ private final List<SceneObject> sceneObjectList; /** * The rotation source. All objects are rotated with respect to this point. */ private final Vertex rotationSource; /** * Constructs this scene canvas. * * @param width the width of this canvas in pixels. * @param height the height of this canvas in pixels. */ public SceneView(int width, int height) { setPreferredSize(new Dimension(width, height)); this.sceneObjectList = new ArrayList<>(); this.rotationSource = new Vertex(width / 2, height / 2, 0); this.addKeyListener(this); this.setBackground(Color.BLACK); } /** * Draws this view. * * @param g the graphics device handle. 
*/ @Override public void update(Graphics g) { g.clearRect(0, 0, getWidth(), getHeight()); g.setColor(Color.red); for (final SceneObject object : sceneObjectList) { final Vertex objectOrigin = object.getLocation(); final List<Vertex> vertexList = object.getVertexList(); for (int i = 0; i < vertexList.size(); ++i) { final Vertex v = vertexList.get(i); final List<Color> colorList = v.getColorList(); final List<Vertex> neighborList = v.getNeighborList(); for (int j = 0; j < neighborList.size(); ++j) { final Vertex neighbor = neighborList.get(j); g.setColor(colorList.get(j)); g.drawLine((int) Math.round(objectOrigin.x + v.x), (int) Math.round(objectOrigin.y + v.y), (int) Math.round(objectOrigin.x + neighbor.x), (int) Math.round(objectOrigin.y + neighbor.y)); } } } } /** * Draws this view. * * @param g the graphics device handle. */ @Override public void paint(Graphics g) { update(g); } /** * Adds all the vectors in <code>vectors</code> to this scene view. * * @param objects the world objects to add to this view. */ public void addWorldObject(SceneObject... objects) { sceneObjectList.addAll(Arrays.asList(objects)); } /** * Responds to the event of a key being typed. * * @param e the key event. 
*/ @Override public void keyTyped(KeyEvent e) { switch (e.getExtendedKeyCode()) { case KeyEvent.VK_A: sceneObjectList.stream().forEach((o) -> { o.rotate(rotationSource, 0.0, -0.1, 0.0); }); break; case KeyEvent.VK_D: sceneObjectList.stream().forEach((o) -> { o.rotate(rotationSource, 0.0, 0.1, 0.0); }); break; case KeyEvent.VK_W: sceneObjectList.stream().forEach((o) -> { o.rotate(rotationSource, 0.1, 0.0, 0.0); }); break; case KeyEvent.VK_S: sceneObjectList.stream().forEach((o) -> { o.rotate(rotationSource, -0.1, -0.0, 0.0); }); break; case KeyEvent.VK_Q: sceneObjectList.stream().forEach((o) -> { o.rotate(rotationSource, 0.0, 0.0, -0.1); }); break; case KeyEvent.VK_E: sceneObjectList.stream().forEach((o) -> { o.rotate(rotationSource, 0.0, 0.0, 0.1); }); break; } repaint(); } @Override public void keyPressed(KeyEvent e) {} @Override public void keyReleased(KeyEvent e) {} } SceneFrame.java: package net.coderodde.lib3d; import java.awt.Dimension; import java.awt.Toolkit; import javax.swing.JFrame; /** * This class implements the frame containing a view. * * @author Rodion "rodde" Efremov * @version 1.6 */ public class SceneFrame extends JFrame { /** * The actual view component. */ private final SceneView view; /** * Constructs a frame containing the view. * * @param width the width of the frame in pixels. * @param height the height of the frame in pixels. */ SceneFrame(int width, int height) { super("3D Cube"); add(view = new SceneView(width, height)); pack(); setResizable(false); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); final Dimension screenDimension = Toolkit.getDefaultToolkit().getScreenSize(); // Center out the frame. setLocation((screenDimension.width - getWidth()) / 2, (screenDimension.height - getHeight()) / 2); setVisible(true); } /** * Returns the scene view. * * @return the scene view. 
*/ public SceneView getSceneView() { return view; } } Demo.java: package net.coderodde.lib3d; import java.awt.Color; import javax.swing.SwingUtilities; public class Demo { /** * The entry point into a program. * @param args the command line arguments. */ public static void main(final String... args) { SwingUtilities.invokeLater(() -> { SceneFrame frame = new SceneFrame(600, 600); // Let's build a wire cube. SceneObject cube = new SceneObject(300, 300, 0); // The vertices. Vertex v000 = new Vertex(-100, -100, -100); Vertex v001 = new Vertex(-100, -100, 100); Vertex v010 = new Vertex(-100, 100, -100); Vertex v011 = new Vertex(-100, 100, 100); Vertex v100 = new Vertex(100, -100, -100); Vertex v101 = new Vertex(100, -100, 100); Vertex v110 = new Vertex(100, 100, -100); Vertex v111 = new Vertex(100, 100, 100); Color red = Color.red; Color green = Color.green; Color yellow = Color.yellow; // Each vertex must know what other vertices it is linked to. v000.addNeighbor(v001, red); v000.addNeighbor(v010, red); v000.addNeighbor(v100, yellow); v001.addNeighbor(v101, yellow); v001.addNeighbor(v011, red); v001.addNeighbor(v000, red); v010.addNeighbor(v110, yellow); v010.addNeighbor(v000, red); v010.addNeighbor(v011, red); v011.addNeighbor(v111, yellow); v011.addNeighbor(v001, red); v011.addNeighbor(v010, red); // v100.addNeighbor(v000, yellow); v100.addNeighbor(v110, green); v100.addNeighbor(v101, green); v101.addNeighbor(v001, yellow); v101.addNeighbor(v111, green); v101.addNeighbor(v100, green); v110.addNeighbor(v010, yellow); v110.addNeighbor(v100, green); v110.addNeighbor(v111, green); v111.addNeighbor(v011, yellow); v111.addNeighbor(v101, green); v111.addNeighbor(v110, green); // Load the vertices to the cube. cube.add(v000); cube.add(v001); cube.add(v010); cube.add(v011); cube.add(v100); cube.add(v101); cube.add(v110); cube.add(v111); cube.rotate(cube.getLocation(), 0.0, Math.PI / 2.0, 0.0); // Add to the scene. 
frame.getSceneView().addWorldObject(cube); }); } } Any improvements I could do here? (For demonstration, click the window and use the keys Q, W, E, A, S, D to rotate the cube.) Answer: I'll edit this answer again later when I have more time. I have a lot to say. Since I've only had a chance to play with SceneView.java that is what I'm going to mention. the update method is too large. It needs to get broken up into smaller pieces. Typically I personally like to grab everything inside a loop (be it a while loop, or a for loop) and put it into a new method. With your update method I made 2 more methods: updateSceneObject and updateVertex. I notice too that you switch back and forth with how you iterate through things. Be consistent. Here is a small look at my solution. Note that there is a small bug introduced with this method because your current mode of drawing relies on each matrix being drawn at a specific order. /** * Draws this view. * * @param g the graphics device handle. */ @Override public void update(Graphics g) { g.clearRect(0, 0, getWidth(), getHeight()); g.setColor(Color.red); sceneObjectList.parallelStream().forEach(o->updateSceneObject(g,o)); } private void updateSceneObject(Graphics g, SceneObject object) { final Vertex objectOrigin = object.getLocation(); object.getVertexList().parallelStream().forEach(v->updateVertex(g,objectOrigin, v)); } private void updateVertex(Graphics g, Vertex objectOrigin, Vertex v) { final List<Color> colorList = v.getColorList(); final List<Vertex> neighborList = v.getNeighborList(); for (int j = 0; j < neighborList.size(); ++j) { final Vertex neighbor = neighborList.get(j); g.setColor(colorList.get(j)); g.drawLine((int) Math.round(objectOrigin.x + v.x), (int) Math.round(objectOrigin.y + v.y), (int) Math.round(objectOrigin.x + neighbor.x), (int) Math.round(objectOrigin.y + neighbor.y)); } } Vertex.java having a list of neighbors and their colors (as a parallel list) is 2 parts wrong. 
One reason it is wrong is the parallel list itself; in other words, you have an array that depends on a very specific order. Should that order get disturbed, some problems can arise. Usually you will want to encapsulate those values in another class and make a single list of those. The second part of why it is wrong is that I believe Vertex should simply be a data structure and nothing more. I can see your reasoning behind it, but "neighbor" doesn't describe well enough that your line segment is a specific color. (Took me a little bit to figure that out.) So to fix points one and two: I believe if you make a LineSegment class that has 2 vertices and a single color, your code will clean up nicely. Then you would just change Cube to have a list of LineSegments and update those when you rotate. Tests I'm a huge advocate of writing tests. First, they act as a form of documentation; they also give you a safety net to change things around with less fear of breaking something. A good thing that should have tests around it is the rotate method for SceneObject.
So for instance public class SceneObjectTest { @Test public void testRotate() throws Exception { SceneObject object = new SceneObject(0,0,0); final Vertex CENTER = new Vertex(0,0,0); Vertex vertex0 = new Vertex(-10, 0, 0); Vertex vertex1 = new Vertex(10, 0, 0); object.add(vertex0); object.add(vertex1); object.rotate(CENTER, 1, 0, 0 );//rotate 1 degree on x System.out.println(vertex0); System.out.println(vertex1); assertXYZ(vertex0, -10, 0, 0);//0 assertXYZ(vertex1, 10, 0, 0);//0 } private void assertXYZ(Vertex vertex, double x, double y, double z) { assertEquals(vertex.x, x, String.format("expected Vertex to have x coordinate of %f but found %f\r\nActual:%s", x, vertex.x, vertex)); assertEquals(vertex.y, y, String.format("expected Vertex to have y coordinate of %f but found %f\r\nActual:%s", y, vertex.y, vertex)); assertEquals(vertex.z, z, String.format("expected Vertex to have z coordinate of %f but found %f\r\nActual:%s", z, vertex.z, vertex)); } } this test takes 0.394 seconds to run and tells me that rotating a horizontal line on the x axis does nothing. Which is correct. So what happens if I rotate 1 degree on the z axis. 
(Which I can visualize easily in my head) I would expect the following test to pass (look at testRotateOnZAxis()) public class SceneObjectTest { final Vertex CENTER = new Vertex(0, 0, 0); SceneObject object; private Vertex vertex0; private Vertex vertex1; @BeforeTest public void SetupBeforeTest() { object = new SceneObject(0, 0, 0); vertex0 = new Vertex(-10, 0, 0); vertex1 = new Vertex(10, 0, 0); object.add(vertex0); object.add(vertex1); } @Test public void testRotateOnXAxis() throws Exception { object.rotate(CENTER, 1, 0, 0);//rotate 1 degree on x System.out.println(vertex0); System.out.println(vertex1); assertXYZ(vertex0, -10, 0, 0);//0 assertXYZ(vertex1, 10, 0, 0);//0 } @Test public void testRotateOnZAxis() throws Exception { object.rotate(CENTER, 0, 0, 1);//rotate 1 degree on z System.out.println(vertex0); System.out.println(vertex1); //x' = x * cos(1); (-10 * 0.99984769515639123915701155881391), (10 * 0.99984769515639123915701155881391) //y' = y * sin(1); (-10 * 0.01745240643728351281941897851632), (10 * 0.01745240643728351281941897851632) assertXYZ(vertex0,-9.9984769515639123915701155881391,-0.1745240643728351281941897851632, 0); assertXYZ(vertex1, 9.9984769515639123915701155881391, 0.1745240643728351281941897851632, 0); } private void assertXYZ(Vertex vertex, double x, double y, double z) { assertEquals(vertex.x, x, String.format("expected Vertex to have x coordinate of %f but found %f\r\nActual:%s", x, vertex.x, vertex)); assertEquals(vertex.y, y, String.format("expected Vertex to have y coordinate of %f but found %f\r\nActual:%s", y, vertex.y, vertex)); assertEquals(vertex.z, z, String.format("expected Vertex to have z coordinate of %f but found %f\r\nActual:%s", z, vertex.z, vertex)); } } That test fails [Vertex (-5.403023058681398, 8.414709848078965, 0.0)] [Vertex (5.403023058681398, -8.414709848078965, 0.0)] java.lang.AssertionError: expected Vertex to have x coordinate of -9.998477 but found -5.403023 Actual:[Vertex (-5.403023058681398, 
8.414709848078965, 0.0)] Expected :-9.998476951563912 Actual :-5.403023058681398 Now it is possible that I'm asserting incorrectly, but that is what the tests do. They show us what you expect to happen given certain circumstances. Hope this helps some.
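The failing assertion above is most likely a units mix-up rather than a rotation bug: the rotation matrices call Math.cos and Math.sin, which take radians, so rotate(CENTER, 0, 0, 1) rotates by one radian, not one degree. A quick standalone check (a hypothetical snippet reusing only the math, not the classes under review):

```java
public class RotationUnitsCheck {
    public static void main(String[] args) {
        // Rotating the point (-10, 0, 0) about z by angle t gives x' = -10 * cos(t).
        double oneRadian = 1.0;
        double oneDegree = Math.toRadians(1.0);

        // cos(1 radian) explains the "failing" actual value of -5.4030...
        System.out.println(-10 * Math.cos(oneRadian));   // -5.403023058681398
        // cos(1 degree) gives the value the test expected, -9.99847...
        System.out.println(-10 * Math.cos(oneDegree));   // -9.998476951563913
    }
}
```

So either pass Math.toRadians(1) in the test, or change the rotate API to accept degrees; the asserted values themselves are consistent with degree semantics.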
{ "domain": "codereview.stackexchange", "id": 13689, "tags": "java, matrix, reinventing-the-wheel, graphics" }
Making kexec reboots less painful
Question: kexec is a way for a Linux kernel to directly boot another Linux kernel without going through the usual BIOS startup sequence, which can take several minutes on enterprise servers. The big problem with kexec is that Linux distributions don't set it up for you when you install a kernel, and setting it up yourself is manual and error-prone, so not many people actually use it when rebooting their servers. I wanted to learn some Ruby (I've worked through this Rails tutorial, but I haven't previously written anything substantial in Ruby), and so I wrote a Ruby script to help automate kexec somewhat. It can simply stage the latest installed kernel for kexec, or it has an interactive mode where you can choose a kernel from a list. It does both of these by searching for the GRUB configuration file, parsing it to get the kernel, initrd and kernel command line, and then calling kexec with these arguments. My big concern here is with the obvious duplicate code in process_grub_config and the functions it calls, load_kernels_grub and load_kernels_grub2. I know these bits need to be refactored, but I'm not familiar enough with the language to know the best way to go about it. In particular, it's necessary to parse GRUB 1 and GRUB 2 style configuration files differently, and these files can be in different locations depending on Linux distribution. I also literally wrote this yesterday evening, and I might not have had enough coffee while writing it, so I'm open to suggestions on any other part of the code that might need improvement. (Note: Because this code is meant to be part of the process of rebooting, I suggest testing in a virtual machine. Run the script with no arguments for a usage statement. I've personally tested it on EL6, EL7, Ubuntu 10.04, 12.04, and Debian wheezy and it should work properly on any Linux distribution that uses GRUB 1 or GRUB 2.)
#!/usr/bin/env ruby
# kexec-reboot - Easily choose a kernel to kexec

require 'optparse'

# Find a mount point given the device special
def device_to_mount_point(device)
  if File.ftype(device) != "blockSpecial" then
    STDERR.puts("Device #{device} isn't a block device\n")
    return nil
  end
  mount_point = nil
  mounts = open("/proc/mounts").each_line do |mount|
    line = mount.split
    if line[0] == device then
      mount_point = line[1]
      break
    end
  end
  mount_point = "" if mount_point == "/" # Eliminate double /
  if mount_point.nil? then
    STDERR.puts "Can't find the mount point for device #{device}\n"
    return nil
  end
  mount_point
end

# Find a mount point given the GRUB device and device map
def device_map_to_mount_point(device, device_map)
  dev = device.match(/(hd\d+)/)
  part = device.match(/hd\d+,(\d+)/)
  mount_point = device_map.match(/\(#{dev[1]}\)\s+(.+)$/)
  mount_point_part = 1 + Integer(part[1]) if !part.nil?
  device_path = "#{mount_point[1]}#{mount_point_part}"
  if !File.exists?(device_path) then
    STDERR.puts("Can't find the device #{device_path} from #{device}\n")
    return nil
  end
  device_to_mount_point("#{mount_point[1]}#{mount_point_part}")
end

# Find a mount point given the device UUID
def uuid_to_mount_point(uuid)
  begin
    device = File.realpath("/dev/disk/by-uuid/#{uuid}")
  rescue Errno::ENOENT
    STDERR.puts "No such file or directory, uuid #{uuid}\n"
    return nil
  end
  device_to_mount_point(device)
end

# Load the available kernels from the given GRUB 1 configuration file
def load_kernels_grub(config)
  device_map = open("/boot/grub/device.map").read
  entries = Array.new
  config.scan(/title (.+?$).+?root \(([^\)]+)\).+?kernel ([^ ]+) (.+?)$.+?initrd (.+?$)/m).each do |entry|
    mount_point = device_map_to_mount_point(entry[1], device_map)
    name = entry[0].strip
    kernel = "#{mount_point}#{entry[2]}"
    initrd = "#{mount_point}#{entry[4]}"
    cmdline = entry[3].strip
    # Sanity check the kernel and initrd; they must be present
    if !File.readable?(kernel) then
      STDERR.puts "Kernel #{kernel} is not readable\n"
      next
    end
    if !File.readable?(initrd) then
      STDERR.puts "Initrd #{initrd} is not readable\n"
      next
    end
    entries.push({
      "name" => name,
      "kernel" => kernel,
      "initrd" => initrd,
      "cmdline" => cmdline,
    })
  end
  entries
end

# Load the available kernels from the given GRUB 2 configuration file
def load_kernels_grub2(config)
  entries = Array.new
  config.scan(/menuentry '([^']+)'.+?\{.+?search.+?([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}).+?linux(16)?\s+([^ ]+) (.+?)$.+?initrd(16)?\s+(.+?)$.+?\}/m).each do |entry|
    mount_point = uuid_to_mount_point(entry[1])
    name = entry[0].strip
    kernel = "#{mount_point}#{entry[3]}"
    initrd = "#{mount_point}#{entry[6]}"
    cmdline = entry[4].strip
    # Sanity check the kernel and initrd; they must be present
    if !File.readable?(kernel) then
      STDERR.puts "Kernel #{kernel} is not readable\n"
      next
    end
    if !File.readable?(initrd) then
      STDERR.puts "Initrd #{initrd} is not readable\n"
      next
    end
    entries.push({
      "name" => name,
      "kernel" => kernel,
      "initrd" => initrd,
      "cmdline" => cmdline,
    })
  end
  entries
end

# Load a grub configuration file and process it
def process_grub_config
  # TODO: Duplicate code smells, refactor this
  # First, locate the grub configuration file
  # We try GRUB 1 files first
  ["/boot/grub/menu.lst"].each do |file|
    begin
      entries = load_kernels_grub(open(file).read)
      if !entries.empty? then
        if $verbose then
          puts "Read GRUB configuration from #{file}\n"
        end
        return entries
      end
    rescue Errno::EACCES
      STDERR.puts("#{$!}\nYou must be root to run this utility.\n")
      exit 1
    rescue Errno::ENOENT
      next
    end
  end
  # Then we try GRUB 2 files
  ["/boot/grub2/grub.cfg", "/boot/grub/grub.cfg"].each do |file|
    begin
      entries = load_kernels_grub2(open(file).read)
      if !entries.empty? then
        if $verbose then
          puts "Read GRUB configuration from #{file}\n"
        end
        return entries
      end
    rescue Errno::EACCES
      STDERR.puts("#{$!}\nYou must be root to run this utility.\n")
      exit 1
    rescue Errno::ENOENT
      next
    end
  end
  STDERR.puts("Couldn't find a grub configuration anywhere!\n")
  exit 1
end

def kexec(entry)
  if $verbose then
    print "Staging kernel #{entry['name']}\n"
  end
  fork do
    exec "/sbin/kexec", "-l", "#{entry['kernel']}",
      "--append=#{entry['cmdline']}",
      "--initrd=#{entry['initrd']}"
  end
end

def interactive_select_kernel
  entries = process_grub_config
  selection = nil
  loop do
    puts "\nSelect a kernel to stage:\n\n"
    entries.each_with_index do |entry, index|
      selection_number = index + 1
      puts "#{selection_number}: #{entry['name']}\n"
    end
    print "\nYour selection: "
    selection = gets.chomp
    begin
      selection = Integer(selection)
    rescue ArgumentError
      return nil
    end
    break if selection.between?(0, entries.count)
  end
  return nil if selection == 0
  entries[selection - 1]
end

def select_latest_kernel
  entries = process_grub_config
  entries.first
end

options = {}
opts = OptionParser.new do |opts|
  opts.banner = "Usage: kexec-reboot [options]"
  opts.on("-i", "--interactive", "Choose the kernel to stage from a list") do |i|
    options[:interactive] = i
  end
  opts.on("-l", "--latest", "Stage the latest kernel") do |l|
    options[:latest] = l
  end
  opts.on("-r", "--reboot", "Reboot immediately after staging the kernel") do |r|
    options[:reboot] = r
  end
  opts.on("-v", "--[no-]verbose", "Extra verbosity.") do |v|
    $verbose = v
  end
end
opts.parse!

if (options[:interactive]) then
  entry = interactive_select_kernel
  if (entry.nil?) then
    STDERR.puts "Canceled.\n"
    exit 1
  end
elsif (options[:latest]) then
  entry = select_latest_kernel
else
  STDERR.puts opts.help
  exit 1
end

if !entry.nil? then
  entry = kexec(entry)
  if options[:reboot] then
    `shutdown -r now`
  end
end

This code is now available on github and future changes will be published there.
After this was posted, these changes have been made (which can be seen in the github version):

- A bug which caused kexec to fail if a previous kernel had already been staged (e.g. via kexec or a kdump crash kernel) has been fixed.
- A bug which caused the script to fail to find the boot partition on certain older HP ProLiant servers has been fixed.
- Ruby hashes have been changed to use symbols as keys, rather than strings.
- Support was added for systems that boot with UEFI.

A number of further changes have been made and are now in github, including most of the suggestions given by 200_success. In addition, after more extensive testing on a variety of servers (thanks to ewwhite) the following change was made: When processing a grub 1 configuration, first assume the kernel can be reached in either / or /boot, before trying to read the device.map file, because device.map is very frequently wrong due to post-installation hardware changes. This issue doesn't affect systems which boot with grub 2. I'll be doing some more cleanup, and after I've incorporated the rest of the suggestions I'll post the new version for review.

Answer: The Ruby code looks quite good. You have a couple of filehandle leaks. A typical way to process a file is open(…) { |file| … }. If you call open without a block, then you should also close the resulting filehandle. An even simpler approach would be to call static methods such as IO::readlines. For example, in device_to_mount_point, the following code

mounts = open("/proc/mounts").each_line do |mount|
  line = mount.split
  if line[0] == device then
    mount_point = line[1]
    break
  end
end

could be simplified with

proc_mounts = Hash[IO.readlines('/proc/mounts').collect { |line| line.split[0..1] }]
mount_point = proc_mounts[device]

You should avoid returning nil to indicate an error. That just burdens the caller with the responsibility to handle a nil result properly. If it's not actually an error, then return an empty string.
If it is an error, you should raise an exception instead:

raise ArgumentError.new("Device #{device} isn't a block device")

It is unusual to see string-to-number conversions written as Integer(part[1]) in Ruby. A more common expression would be part[1].to_i.

Here is one way to eliminate the code duplication in process_grub_config:

def process_grub_config
  possible_grub_configs = [
    ["/boot/grub/menu.lst", :load_kernels_grub],
    ["/boot/grub2/grub.cfg", :load_kernels_grub2],
    ["/boot/grub/grub.cfg", :load_kernels_grub2],
  ]
  possible_grub_configs.each do |filename, handler|
    begin
      entries = method(handler).call(IO::read(filename))
      if !entries.empty? then
        if $verbose then
          puts "Read GRUB configuration from #{filename}\n"
        end
        return entries
      end
    rescue Errno::EACCES
      STDERR.puts("#{$!}\nYou must be root to run this utility.\n")
      exit 1
    rescue Errno::ENOENT
      next
    end
  end
end

I consider load_kernels_grub and load_kernels_grub2 to be misnamed, as they aren't actually loading anything, at least not in the kexec sense. I suggest a name like grub1_cfg_kernel_entries instead.

In kexec, fork and exec could just be a system call:

system "/sbin/kexec", "-l", entry['kernel'],
  "--append=#{entry['cmdline']}",
  "--initrd=#{entry['initrd']}"

The entry['kernel'] parameter does not need string interpolation. In accordance with the suggestion in the kexec(8) man page, you could just call kexec with no option parameter, which loads the specified kernel and calls shutdown.
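The table-to-dictionary idea in the review is language-agnostic; here is a minimal Python sketch of the same pattern (the sample mount table and the function name are invented for illustration):

```python
# Sketch of the reviewer's suggestion: read the whole mount table once,
# build a device -> mount-point map, and answer lookups with a plain
# dictionary access instead of a manual scan-and-break loop.
# The sample text stands in for /proc/mounts.
SAMPLE_MOUNTS = """\
/dev/sda1 / ext4 rw,relatime 0 0
/dev/sda2 /boot ext4 rw,relatime 0 0
proc /proc proc rw 0 0
"""

def mount_points(mounts_text):
    # The first two whitespace-separated fields are device and mount point.
    return {fields[0]: fields[1]
            for line in mounts_text.splitlines()
            if (fields := line.split())}

table = mount_points(SAMPLE_MOUNTS)
print(table["/dev/sda2"])  # -> /boot
```

As in the Ruby suggestion, the file is read once and every subsequent device lookup is a plain dictionary access.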
{ "domain": "codereview.stackexchange", "id": 38835, "tags": "beginner, ruby, parsing, linux" }
Asymmetry between space and time in special relativity
Question: Consider 2D spacetime with two inertial reference frames S and $S'$, where $S'$ is moving in the $S$ positive spatial direction at velocity $v$, along with the usual graphical representation with $t$ on the vertical axis and $x$ on the horizontal axis. Use units such that $c = 1$. Consider the space-time displacement vector which points forward in time in the coordinates of $S'$ at $(0,0)$. I.e., it is the displacement vector that connects the clock at point $(0,0)$ in $S'$ to the clock at point $(1,0)$ in $S'$. Is it true that it follows that the clock at $(1,0)$ in $S'$ is at $(t,vt)$ in $S$, for some value of $t$? If so, does it follow that, viewing the time-axis of $S'$ in the $S$ coordinates, it is rotated to the right from vertical by an angle $\theta$ such that $v = \tan(\theta)$? Is there any analogous statement that can be made about the direction in $S$ of the $S'$ space-displacement vector, i.e. the vector with components $(0,1)$ in $S'$? I can't come up with one, and it seems that the answer has to be "no" because I haven't yet used anything about Lorentz invariance of the interval, and the rotation of the space and time axes is equivalent to Lorentz invariance. But I would not have expected an asymmetry here between time and space.

Answer: If you choose to draw the frames so that the $S$ axes are horizontal and vertical on your paper, then they look like this (with the $S$ frame in black and the $S'$ frame in blue): The marked points are $(1,0)$ and $(0,1)$ in the $S'$ frame.
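The symmetric statement the asker is looking for does exist: under a standard Lorentz boost (with $c = 1$), the $S'$ time axis tilts from the vertical by $\tan\theta = v$ and the $S'$ space axis tilts from the horizontal by the same angle. A small numerical sketch (the helper function is mine, not from the post):

```python
from math import sqrt, isclose

def boost_to_S(t_prime, x_prime, v):
    """Lorentz transform of an event from S' to S (c = 1); S' moves at v in S."""
    gamma = 1.0 / sqrt(1.0 - v * v)
    return gamma * (t_prime + v * x_prime), gamma * (x_prime + v * t_prime)

v = 0.6
t_axis = boost_to_S(1.0, 0.0, v)  # S' time-displacement (1,0) seen in S
x_axis = boost_to_S(0.0, 1.0, v)  # S' space-displacement (0,1) seen in S

# Time axis: tilted from the vertical with slope x/t = v.
assert isclose(t_axis[1] / t_axis[0], v)
# Space axis: tilted from the horizontal with slope t/x = v (the symmetric statement).
assert isclose(x_axis[0] / x_axis[1], v)
```

The two axes "scissor" toward the light cone by the same angle, which is exactly what the answer's figure depicts.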
{ "domain": "physics.stackexchange", "id": 82207, "tags": "special-relativity, spacetime, coordinate-systems, inertial-frames" }
Rust Pig Latin Translator - Idiomatic suggestions
Question: This is the code I came up with for the Rust pig latin exercise. I am looking for suggestions on how to make it more idiomatic. I think working with iterators instead of chars and Strings would be a step in the right direction? I was unable to figure out how to use .map to reach the same result. I would have liked to iterate over a SplitWhitespace and apply the manipulations to each slice in one pass instead of using a for loop. Rust playground link

fn main() {
    let phrase = "Test sentence for pig latin f नर र स्का स्कास्का ".to_lowercase();
    let split = phrase.split_whitespace();
    let mut pigifyed: String = String::new();
    for word in split {
        let mut chars = word.chars();
        let firstchar = chars.next().unwrap();
        if chars.next() == None {
            pigifyed = format!("{} {}ay", pigifyed, firstchar)
        } else if is_vowel(&firstchar) {
            pigifyed = format!("{} {}-hay", pigifyed, word);
        } else {
            let end = &word[firstchar.len_utf8()..];
            pigifyed = format!("{} {}-{}ay", pigifyed, end, firstchar);
        }
    }
    println!("{}", pigifyed)
}

fn is_vowel(char: &char) -> bool {
    let vowels = ['a', 'e', 'i', 'o', 'u'];
    vowels.contains(char)
}

Output: est-tay entence-say or-fay ig-pay atin-lay fay -ay र-नay रay ्का-सay ्कास्का-सay ay

Answer: I can't see how you got stuck. You just do like you said: use map instead of Strings, then collect into a Vec, and finally join into one String. Here is the code.

fn main() {
    let phrase = "Test sentence for pig latin f नर र स्का स्कास्का ".to_lowercase();
    let split = phrase.split_whitespace();
    let pigifyed = split.map(|word| {
        let mut chars = word.chars();
        let firstchar = chars.next().unwrap();
        if chars.next() == None {
            format!("{}ay", firstchar)
        } else if is_vowel(&firstchar) {
            format!("{}-hay", word)
        } else {
            let end = &word[firstchar.len_utf8()..];
            format!("{}-{}ay", end, firstchar)
        }
    }).collect::<Vec<String>>().join(" ");
    println!("{}", pigifyed)
}

fn is_vowel(char: &char) -> bool {
    let vowels = ['a', 'e', 'i', 'o', 'u'];
    vowels.contains(char)
}
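For comparison, the same map-then-join shape can be sketched in Python (hypothetical helper names; single-character words get a bare "ay" suffix, as in the Rust version):

```python
VOWELS = set("aeiou")

def pigify(word):
    # Mirror of the Rust closure: split off the first character and
    # apply the three cases (single char, vowel start, consonant start).
    first, rest = word[0], word[1:]
    if not rest:
        return f"{first}ay"
    if first in VOWELS:
        return f"{word}-hay"
    return f"{rest}-{first}ay"

def pigify_phrase(phrase):
    # Map over the words, then join once -- same shape as map/collect/join.
    return " ".join(pigify(w) for w in phrase.lower().split())

print(pigify_phrase("Test sentence for pig latin f"))
# -> est-tay entence-say or-fay ig-pay atin-lay fay
```

The point of the refactor is the same in both languages: build each transformed word independently, then pay the concatenation cost once at the end instead of re-allocating the accumulator on every iteration.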
{ "domain": "codereview.stackexchange", "id": 42533, "tags": "beginner, rust, pig-latin" }
How is the potential infinitude of the universe compatible with the Big Bang?
Question: I know no physics, but I read that while the observable universe is finite, for all physicists know the universe is infinite. How is this compatible with the Big Bang hypothesis? Does it mean that it's possible that at some point space expanded at literally infinite speed? Or something else? Edit: This question is answered by the comment referring to how the Big Bang did not occur at a point. Answer: As ACuriousMind comments, Big Bang wasn't an explosion at a point, dispersing matter in all directions. It was the creation of space, and its subsequent expansion. The Universe may or may not be infinite. If it is finite, is was born finite, and will always stay finite, although dark energy seems to expand it to an arbitrarily large size. If it is infinite — and currently, observational evidence suggest it is — is was born infinite, and always will be. It still expands, i.e. the distance between galaxies (that are not gravitationally bound to each other) will increase indefinitely. There is no way that the Universe can transit between being finite and infinite.
{ "domain": "physics.stackexchange", "id": 28543, "tags": "big-bang" }
PHP / Laravel - Using Python for image manipulation
Question: I have a simple job class in Laravel that will do the following:

1. Get all the images in a folder /original
2. Perform some image manipulation on each image
3. Save each image in a new folder /preprocessed

All of these steps are added to a queue. However, I have taken an alternative approach and it is using Python to do the actual image manipulation. This is my code:

$images = Storage::allFiles($this->document->path('original'));

foreach ($images as $key => $image) {
    $file = storage_path() . '/app/' . $image; //token/unique_id/original/1.jpg
    $savePath = storage_path() . '/app/' . $this->document->path('preprocessed');
    $filename = $key . '.jpg';

    //Scale and save the image in "/preprocessed"
    $process = new Process("python3 /Python/ScaleImage.py {$file} {$savePath} {$filename}");
    $process->run();

    // executes after the command finishes
    if (!$process->isSuccessful()) {
        throw new ProcessFailedException($process);
        return false;
    }
}

In my python file ScaleImage.py, it simply performs some image manipulation and saves the image to the /preprocessed folder.

def set_image_dpi(file):
    # Image manipulation - removed from example
    # save image in /preprocessed
    im.save(SAVE_PATH + FILE_NAME, dpi=(300, 300))
    return SAVE_PATH + FILE_NAME

print(set_image_dpi(FILE_PATH))

The above code works. In the future, I might need to do even more image manipulation such as noise removal, skew correction, etc. My final goal is to use Tesseract OCR on the preprocessed image to grab the text content of the image. Now I am sure I could achieve similar results, for example using ImageMagick in PHP. However, I've read that for image processing and OCR, Python's performance is a lot better. Is the above approach a good idea? Is there anything that can be improved?

Answer: Anything that can be improved? This line uses interpolation with complex (curly) syntax:

$process = new Process("python3 /Python/ScaleImage.py {$file} {$savePath} {$filename}");

The simple syntax can be used - i.e.
curly braces can be removed:

$process = new Process("python3 /Python/ScaleImage.py $file $savePath $filename");

Though there is an argument to maintain the habit of including curly braces for the times when it is required and also for clarity.

Now I am sure I could achieve similar results, for example using ImageMagick in PHP.

Yes, the PHP Image Processing and GD library has functions like imageresolution() to set the resolution of an image. It can be used with functions like imagecreatefromjpeg() to create the image object and imagejpeg() to output the image to a file. And yes, there is an ImageMagick function Imagick::setImageResolution() which could be used instead.
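One further point worth hedging about the Process line: interpolating raw paths into a shell string breaks as soon as a path contains a space or a shell metacharacter. The escaping idea, sketched here in Python with invented paths (newer versions of Symfony's Process component also accept an array of arguments, which side-steps quoting entirely):

```python
import shlex

# Invented example paths; note the space in the directory name.
file = "/app/token/unique id/original/1.jpg"
save_path = "/app/token/unique id/preprocessed"
filename = "1.jpg"

# Quote each argument individually so the command survives spaces
# and shell metacharacters intact.
cmd = "python3 /Python/ScaleImage.py " + " ".join(
    shlex.quote(arg) for arg in (file, save_path, filename))
print(cmd)
```

Splitting the command back with shlex.split recovers the original arguments exactly, which is the property the unquoted interpolation loses.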
{ "domain": "codereview.stackexchange", "id": 43365, "tags": "python, php, image, laravel" }
Balancing Redox Equations - Half Reactions
Question: How do I balance this using the half-reaction method: $$\ce{Zn(s) + HCl(aq) -> Zn^{2+} (aq) + H_2 (g)}$$ I would first split it up in its ionic components: $\ce{Zn(s) + H^+ + Cl^- -> Zn^{2+} (aq) + H_2 (g)}$. Then we have first the oxidation reaction: $\ce{Zn(s) -> Zn^{2+} (aq)}$. Balancing this with electrons gives $\ce{Zn(s) -> Zn^{2+} + 2e^-}$. The reduction reaction is $\ce{H^+ + Cl^- -> H_2 (g)}$. I need to balance this with $\ce{Cl}$? So that would get me $\ce{H^+ + Cl^- -> H_2 (g) + Cl}$. Then adding hydrogen to the left side with $\ce{H^+}$ gives $\ce{2H^+ + Cl^- -> H_2 (g) + Cl}$. However, I feel like I made a mistake somewhere. Any help please?

Answer: You are right: the half reaction for oxidation is $$\ce{Zn(s) \rightarrow Zn^{2+} + 2e^-}$$ Now, you have to write properly the half reaction for reduction: $$\ce{2H^+(\mathrm{aq}) + 2e^- \rightarrow H_2(g)}$$ This means that the electrons given by zinc are taken by the hydrogen ions. So, if we add the two half reactions: $$\ce{Zn(s) + 2H^+(\mathrm{aq}) \rightarrow Zn^{2+}(\mathrm{aq}) + H_2 (g)}$$ Notice that the chloride ion doesn't change its oxidation state. Its role is to keep the medium electrically neutral. So, we'll add two chloride ions to the two members of the equation: $$\ce{Zn(s) + 2(H^+_{\mathrm{aq}} + Cl^{-}_{\mathrm{aq}}) \rightarrow (Zn^{2+}_{\mathrm{aq}} +2 Cl^{-}_{\mathrm{aq}}) + H_2(g)}$$ Remember, redox reactions are reactions of electron exchange between an oxidant and a reducer.
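The electron and charge bookkeeping in the answer can be checked mechanically; a trivial Python sketch (numbers transcribed from the half reactions above):

```python
# Bookkeeping check for the two half-reactions: the electrons lost by
# zinc must equal the electrons gained by hydrogen, and the net charge
# must balance on each side of the summed equation.
oxidation_electrons_released = 2      # Zn -> Zn^2+ + 2e-
reduction_electrons_consumed = 2      # 2H+ + 2e- -> H2

charge_left = 0 + 2 * (+1)            # Zn(s) + 2 H+
charge_right = +2 + 0                 # Zn^2+ + H2(g)

assert oxidation_electrons_released == reduction_electrons_consumed
assert charge_left == charge_right
print("electron and charge balance OK")
```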
{ "domain": "chemistry.stackexchange", "id": 3148, "tags": "redox, aqueous-solution, stoichiometry" }
Relationships between different measure of opacity
Question: I'm reading some papers that compare different values for a material's opacity to a particular particle. The first is given as $\frac{dE}{dX}$, a single particle's energy loss per unit column depth ($X = x\rho$) to a continuous process. Makes sense. So then the author goes on to compare his value to that of other authors, who have their opacities for the process expressed in $\kappa$ given in $cm^{2}/g$. They're both presented as measuring the same thing, but I usually think of $\kappa$ as being an opacity for a large group of particles that are removed discretely from a beam, so that the total energy in the beam dies off exponentially (since energy loss per distance is proportional to the number of particles left in the beam). $\frac{dE}{dX}$ is due to a continuous process acting on all the particles, and so would have a roughly linear effect per distance. So my question is how does one compare the two values. What does $\kappa$ mean in a context of continuous energy loss. Thanks

Answer:

What does $\kappa$ mean in a context of continuous energy loss

In reference to energy $E$ (reaching "[normalized] column depth" $X$), opacity $\kappa_E$ may simply be defined as $\kappa_E := \frac{1}{E} \frac{dE}{dX}$. (If the so defined "opacity" is constant wrt. "[normalized] column depth $X$", this describes "exponential loss" (as a function of $X$) instead of "linear dependence on $X$".)

but I usually think of $\kappa$ as being an opacity for a large group of particles that are removed discretely from a beam

That's rather opacity $\kappa_N$, in reference to the (discrete, natural) number $N$ of particles (reaching "[normalized] column depth" $X$): $\kappa_N := \frac{1}{N} \frac{dN}{dX}$; or perhaps rather $\kappa_N := \left\langle\frac{1}{N} \frac{\Delta N}{\Delta X}\right\rangle_{\text{average}}$. Of course, the relation between $E$ and $N$, or correspondingly between $\kappa_E$ and $\kappa_N$, may be complicated ...
Also, instead of being defined in reference to extensive quantities $E$ or $N$, opacity may be defined in reference to the corresponding (average) intensities: $I := \frac{\Delta E}{\Delta t \Delta A}$ or $I := \frac{\Delta N}{\Delta t \Delta A}$, as $\kappa_I := \frac{1}{I} \frac{dI}{dX}$. [Note on edited version: Consistent with the "unit of opacity $\kappa$" given as "$cm^2/g$" is the definition, in reference to "energy" $E$, as $\kappa_E := \frac{1}{E} \frac{dE}{dX}$, and not (as stated in the initial version) $\kappa_E := \frac{dE}{dX}$. The same consideration applies to any definition in reference to intensity. In reference to particle number $N$ I correspondingly changed the explicit definition of "opacity $\kappa_N$" as well, even though its "unit" (or "dimension") is not affected since the number $N$ is "dimensionless".]
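The remark that a constant $\kappa_E$ describes exponential loss can be verified numerically. A sketch (with an explicit minus sign so that $\kappa > 0$ means attenuation; the numbers are illustrative):

```python
from math import exp, isclose

# With kappa constant, the definition kappa = -(1/E) dE/dX integrates to
# exponential attenuation E(X) = E0 * exp(-kappa * X).  Check the
# derivative numerically with a central difference.
kappa, E0 = 0.35, 1.0

def E(X):
    return E0 * exp(-kappa * X)

X, h = 2.0, 1e-6
dE_dX = (E(X + h) - E(X - h)) / (2 * h)        # central difference
assert isclose(-dE_dX / E(X), kappa, rel_tol=1e-6)
print("(1/E) dE/dX matches -kappa")
```

This is the sense in which a single-particle continuous loss rate and a beam-attenuation coefficient can share the same units: both are fractional loss per unit column depth.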
{ "domain": "physics.stackexchange", "id": 8214, "tags": "particle-physics, astrophysics" }
Higher Spectral Efficiency with Frequency Domain Equivalent of Partial Response Signaling?
Question: In other recent posts such as this one, I detailed partial response signaling common to GMSK where a known inter-symbol interference (ISI) is intentionally introduced in the time domain at the benefit of increasing the data rate for a given spectral occupancy (spectral efficiency), with the only down-side being receiver complexity: we can completely demodulate the received waveform without added distortion by correlating to every possible combination of patterns over the finite memory of the ISI (done efficiently with a Viterbi decoder). "Faster than Nyquist" signaling follows similar thoughts in allowing for intentional ISI. This then makes me think a similar operation can be done in the frequency domain, for the same purpose: by intentionally introducing inter-carrier interference (ICI) and then following the same logic in terms of cost and benefit as we do in partial response signaling (swapping time and frequency domains). Does this already exist, or has it been pursued in any similar form? Or is there a fundamental flaw in my thought process that keeps this from being a viable solution? Certainly it may not fit well with OFDM as currently done with standard FFT and IFFT processing, so I don't want to discount it on that thought alone, which is limited to the specific algorithm rather than the result.

Answer: Lacking other responses, I did come across several proposed solutions and prior investigations on implementing non-orthogonal FDM with various names, but all under the idea of increasing spectral efficiency by spacing the subcarriers in OFDM closer than $1/T$. Many of these are covered in the book "5G Mobile Communications" edited by Wei Xiang et al. They fall under various names, such as:

N-OFDM: Non-orthogonal FDM
FOFDM: Fast OFDM
MASK: M-ary Amplitude Shift Keying OFDM
SEFDM: Spectrally Efficient FDM
PC/HC-MCM: Parallel Combinatory / High Compaction Multicarrier Modulation
Ov-FDM: Overlapped FDM
FOMS: Frequency Overlapped Multi-carrier System
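The common thread of the schemes listed above is subcarrier spacing tighter than $1/T$; the resulting ICI can be seen in a short numerical sketch of the inner product between two subcarriers over one symbol period (pure Python, illustrative parameters):

```python
import cmath

def correlation(delta_f, T=1.0, n=4096):
    # Discretized inner product of two complex subcarriers separated by
    # delta_f (in units of 1/T), integrated over one symbol period T.
    dt = T / n
    acc = 0j
    for k in range(n):
        t = k * dt
        acc += cmath.exp(2j * cmath.pi * delta_f * t) * dt
    return abs(acc) / T

# Spaced at exactly 1/T the carriers are orthogonal...
assert correlation(1.0) < 1e-3
# ...but at 0.75/T (an SEFDM-style compressed spacing) they interfere,
# and that residual correlation is the ICI the receiver must equalize.
assert correlation(0.75) > 0.2
```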
{ "domain": "dsp.stackexchange", "id": 11119, "tags": "digital-communications, interference, spectral-efficiency" }
How to get a local path from teb local planner without topic?
Question: I would like to use teb local planner separately from the general navigation stack with my own navigation algorithms. I can initialize the teb planner, set a costmap, a global plan and compute the velocity commands. The teb planner calculates the local path that is seen via Rviz and the path is valid and feasible. However, I am interested not in getting the velocity commands but only in the local plan (that is published to the topics /local_plan (nav_msgs/Path) or /teb_poses (geometry_msgs/PoseArray)). I would like to get the plan not by subscribing to these topics but through code. Is there a way to do this? I am using ROS Kinetic with Ubuntu 16.04.

Update: To be more clear, I want to get the local path calculated by teb. The local plan is published to the topic /local_plan (nav_msgs/Path). So basically I can write a subscriber to this topic. I am asking whether there is another way to get this path. Like there is a way to get the calculated command velocities through the function bool TebLocalPlannerROS::computeVelocityCommands(geometry_msgs::Twist& cmd_vel).

Originally posted by inaba on ROS Answers with karma: 3 on 2019-03-11
Post score: 0

Original comments
Comment by mgruhler on 2019-03-12: This sounds like a strange use case. To avoid an xy-problem, could you please update your question by editing it and explain what you're actually trying to achieve with this?
Comment by inaba on 2019-03-12: @mgruhler I added an update. Hope now it is more clear.
Comment by mgruhler on 2019-03-12: @inaba, thanks, but I still don't understand your use case. Are you using move_base with teb_local_planner as the local planner therein? Or are you using this in your own node? How do you get to call the computeVelocityCommands function? So basically: how do you plan to tap into the API of teb_local_planner?
Comment by inaba on 2019-03-12: @mgruhler I am not using move_base and other navigation stack. I have my own nodes for global planner, move base and so on.
I created an instance of TebLocalPlannerROS in my separate node (that basically operates like a local planner), and can call the class functions like setPlan and computeVelocityCommands from this node. The teb computes the path and publishes it to the topic for visualization purposes only, as it is written in the teb tutorial. Everything works just perfectly. But the problem is that I am not interested in the velocity commands, I am interested in the local path. In order to get it I have to subscribe to this topic, which I am trying to avoid.
Comment by mgruhler on 2019-03-12: @inaba great. This is what was missing for me to understand your ultimate goal :-)
Comment by vishal@leotechsa on 2019-10-15: hi @mgruhler, can you please guide me for writing the separate node (that basically operates like a local planner)? I have written my own global planner node; for obstacle avoidance we need to use the teb local planner. So please can you guide me?
Comment by NMICHAELB on 2021-06-01: @inaba I would also like to use the teb local planner without the general navigation stack (I already created the global planner and the local map). Could you create a repo uploading your solution? Or explain how best to proceed? I'm still relatively new to Ros, so it would be a good support for me to see how others have approached the problem.
Comment by Jude Nwadiuto on 2021-07-13: @inaba I see that you have solved this problem. I would also like to do the same. Use the teb local planner without the navigation stack since I already have my own global planner. Please could you tell me how to go about this, a github repo would be awesome. Thanks in advance!

Answer: Looking at the interfaces, there is no such function directly available within teb_local_planner_ros.h. You might consider contacting the maintainer @croesmann either here or on the GitHub repo. So you might have to implement an interface yourself...
Originally posted by mgruhler with karma: 12390 on 2019-03-12
This answer was ACCEPTED on the original site
Post score: 0

Original comments
Comment by inaba on 2019-03-12: Thank you!
Comment by croesmann on 2019-03-12: Indeed, even though it would be straightforward to add an accessor method to the teb_local_planner_ros.h interface, I think you are better off simply creating your own interface to omit all navigation stack stuff (like costmaps, ...). TebLocalPlannerROS just wraps some simple calls to the PlannerInterface (either TebLocalPlanner or HomotopyClassPlanner, depending on what you want). You can also have a look at the source file test_optim_node.cpp for some ideas. Btw. I'll be right back soon to support the planner ;-)
Comment by mgruhler on 2019-03-12: THUMBS UP @croesmann. Appreciate it. Really great project you have there!
Comment by inaba on 2019-03-14: @croesmann Thank you! I did this - wrote my own wrapper with an additional function to get the teb optimal path - and it works))
{ "domain": "robotics.stackexchange", "id": 32633, "tags": "ros, teb-local-planner, ros-kinetic" }
Does a force independently affect velocity and angular velocity, or is it split between the two?
Question: I'm implementing physics for a computer game, and came across something that looks unintuitive to me. Consider two bodies at rest: Let's say we momentarily apply the same amount of force to them, but in different locations: at the center of mass for one body, and off-center for the other. The second body will gain angular velocity, while the first one won't. Both will gain rightward velocity, in the same direction. But will the second body gain less velocity than the first? Or the same velocity? Naively I would expect less velocity, since some of the force was "expended" on giving it angular velocity, but I googled around and experimented in Algodoo, and it seems the velocity ends up the same. Is that correct? Is there an intuitive explanation for that?

Answer: The velocities will be the same. Inspired by @NulliusinVerba's dumbbell example, I came up with an intuitive proof: Let's say we split the ball in two, and momentarily apply the same force to the center of mass of one of the halves: The upper half gets twice the velocity (compared to the first ball from the question, because it has half its mass), and no angular velocity. Then we immediately attach the two halves together. The velocity of the resulting body is the same as that of the first ball, because of the conservation of momentum. And the angular velocity of the resulting body is obviously non-zero (which can be thought of in terms of conservation of angular momentum relative to the center of the ball).
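The conservation argument can be made quantitative: for an impulse $J$, linear momentum gives $\Delta v = J/m$ regardless of where the impulse is applied, while angular momentum about the centre of mass gives $\Delta\omega = Jd/I$ for lever arm $d$. A sketch with illustrative numbers:

```python
# Impulse J applied to a free rigid body: the linear velocity change J/m
# is the same whether or not the line of action passes through the
# centre of mass; only the angular velocity differs.
m = 2.0        # mass
I = 0.5        # moment of inertia about the centre of mass
J = 3.0        # impulse (force integrated over the short push)

d_centered, d_offset = 0.0, 0.4   # lever arms of the two pushes

for d in (d_centered, d_offset):
    v = J / m                 # from linear momentum: J = m * dv
    omega = J * d / I         # from angular momentum about the c.o.m.
    print(f"lever arm {d}: v = {v}, omega = {omega}")
```

Both pushes yield v = 1.5; only the off-centre push yields a nonzero omega, matching the Algodoo experiment in the question.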
{ "domain": "physics.stackexchange", "id": 97400, "tags": "rotational-dynamics, rigid-body-dynamics" }
Asymmetry in trigonal bipyramidal geometry
Question: I teach an MCAT course in chemistry. I like to explain VSEPR by saying: first, imagine arranging electron pairs around the central atom so they are maximally distant from each other, and uniformly arranged. If there are two pairs, they'll just be across from each other. Three? You get a triangle, and so on. This rhetorical device fails with five pairs. Five is strange because the electron pairs are most definitely not equidistant from each other, with the equatorial pairs being 120 degrees from each other, but 90 degrees from the axial electron pairs. My question: is this a simplifying geometry for high school / college classes, and the geometry is actually more equally arranged in 3D space among the pairs? Alternatively, if 'trigonal bipyramidal' is actually the geometry, why? What induces the asymmetry? And does that imply that molecules with the same formula and connectivity, but with an atom equatorially versus axially located, correspond to distinct stereoisomers?

Answer: Yes, 5-coordinate compounds are always a bit confusing. The best way I've answered this question about VSEPR for students is to do a little demo with balloons. Blow up balloons (i.e., electron pairs) and tie them around a central knot. It will definitely give a trigonal bipyramidal shape - it has nothing to do with chemistry, and everything to do with "fitting 5 things around a central point." I disagree with @Martin that there's no theoretical basis to the shapes. The theory is purely geometric. There are multiple possible shapes, including square pyramidal, but these are higher in energy because you have more ~90-degree angles. As discussed in the comments, if all 5 atoms are in a plane, the angles are 72 degrees, which is clearly bad. You could in principle have the top of an octahedron, in which case all angles are 90 degrees, but that's also clearly worse.
(In practice, square pyramidal structures have the central atom slightly above the plane, increasing some angles slightly beyond 90.) So I do the demo with the balloons, talk about square pyramidal and how this alternative is clearly slightly higher in energy. The Berry pseudorotation, as mentioned above, does convert between the shapes, and it's known that at room temperature the axial and equatorial positions scramble quickly.

Update: I decided to do some quick calculations using Avogadro and MOPAC on $\ce{PF5}$ in square pyramidal versus trigonal bipyramidal using the PM7 semiempirical method. This took longer to write than to run the calculations (seconds). Here's the optimized square pyramidal geometry: The phosphorus isn't in the plane, it's a bit above, to increase four of the bond angles and minimize electron-pair repulsion. Note that the largest angle is ~103 degrees, but the F-P-F angles in the basal plane are all ~87 degrees. Here's the trigonal bipyramidal geometry: Now the angles are exactly what we expect from VSEPR. We see 90 degree angles between the equatorial plane and the axial F atoms. And the F-P-F in the plane are all 120 degrees. Just from the angles we can guess that trigonal bipyramidal should be lower in energy. For reference, the difference in energy between these two geometries using PM7 is ~2.76 kcal/mol. So there's not a huge difference in energy, but it's there.
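The ideal VSEPR angles quoted in the answer follow directly from the trigonal bipyramidal coordinates; a quick check (idealized unit vectors, not the computed PF5 geometry):

```python
from math import acos, degrees, cos, sin, pi

def angle(u, v):
    # Angle in degrees between two unit vectors via the dot product.
    dot = sum(a * b for a, b in zip(u, v))
    return degrees(acos(dot))

# Ideal trigonal bipyramid: two axial pairs along +/-z, three equatorial
# pairs in the xy-plane at 120-degree intervals.
axial = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
equatorial = [(cos(2 * pi * k / 3), sin(2 * pi * k / 3), 0.0) for k in range(3)]

print(angle(axial[0], equatorial[0]))       # axial-equatorial: 90
print(angle(equatorial[0], equatorial[1]))  # equatorial-equatorial: 120
print(angle(axial[0], axial[1]))            # axial-axial: 180
```

The asymmetry the question asks about is visible directly in the coordinates: no arrangement of five points on a sphere makes all pairwise angles equal, so 90/120/180 is simply what the geometry of "five things around a point" permits.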
{ "domain": "chemistry.stackexchange", "id": 10814, "tags": "orbitals, molecular-orbital-theory, vsepr-theory" }
Carbocation Rearrangement in SNi
Question: In Peter Sykes' A Guidebook to Mechanism in Organic Chemistry regarding the retention of configuration of reaction of alcohols with $\ce{SOCl2}$: The rate at which the alkyl chlorosulphite intermediate breaks down to $\ce{RCl}$ is found to increase with increasing polarity of the solvent and also with the increasing stability of the carbocation $\ce{R+}$: an ion pair $\ce{R+SOClO-}$ is almost certainly involved. But it is nowhere mentioned whether a carbocation rearrangement is possible or not. In "NCERT Chemistry Class 12" in the section of Preparation of Alkyl halides from Alcohols: The hydroxyl group of an alcohol is replaced by halogen on reaction with concentrated halogen acids, phosphorous halides or thionyl chloride. Thionyl chloride is preferred because the other two products are escapable gases. If that method is preferred, then I assume rearrangement does not occur. So, why does a carbocation rearrangement (like the Wagner–Meerwein rearrangement) not occur in the $\mathrm{S_Ni}$ pathway even though the intermediate is involved? Is it because the intermediate breaks down too quickly before any rearrangement is possible?

Answer:

The Mechanism of the reaction

The reaction of an alcohol with thionyl chloride under base-free conditions is one of the most significant examples of the SNi mechanism (DN+ANDe to give it its correct IUPAC designation, which is perhaps more instructive in this case). There are two steps to the SNi mechanism, as shown below (figure taken from Bruckner, Organic Mechanisms [1]):

Step 1: The alcohol reacts with thionyl chloride to afford an alkyl chlorosulfite, which can often be isolated.[2] At this stage, the oxymethine stereocentre hasn't been touched – it's the second step that will determine the stereochemistry.

Step 2: The chlorine is delivered intra-molecularly, with extrusion of sulfur dioxide gas. Since the sulfur is being delivered from the same face that the hydroxyl was on, the reaction occurs with retention overall.
This can be seen in the dotted lines in the figure above. Two things are worth noting at this stage: If pyridine is added to the reaction mixture, we see overall inversion via an SN2 mechanism.[2] This occurs because the pyridine interacts with the intermediate alkyl chlorosulfite, liberating a free (nucleophilic) chloride and hence allowing the substitution to take place inter-molecularly. This can be seen in the solid lines in the figure above. The second step, which I described as happening intra-molecularly, doesn't quite happen that way. In reality, the C-O bond begins to fragment in an SN1 fashion before the chloride attacks. The reason we still see retention (rather than racemisation, as is common with SN1) is a phenomenon called contact ion pairs.[#] Carbocation rearrangement The simple answer to your question is that at no stage of the reaction is there thought to be any free carbocation concentration, making rearrangement unlikely on kinetic grounds (along with elimination, which could equally be argued by your logic). The ion-pair picture is that the carbocation remains closely associated with the leaving group, allowing the nucleophilic attack to occur rapidly without the need for an additional collision to take place. [1]: Bruckner, R. Organic Mechanisms- Reactions, Stereochemistry and Synthesis; Springer: Berlin, 2007 [2]: J. Am. Chem. Soc. 1952, 74, 308 [#]: You can read about this in any advanced organic chemistry text such as Carey or March, and it is a bit too long of an explanation for me to give satisfactorily here
{ "domain": "chemistry.stackexchange", "id": 8788, "tags": "organic-chemistry, reaction-mechanism" }
What's the relation between acceleration, position and angular velocity?
Question: I just encountered a problem involving lift and oscillations where I found the following differential equation: $$\ddot y = -\frac{\rho g A}{m}y = -\omega^2 y$$ What's the relation between $\ddot y$, $y$ and $\omega$? Does $\ddot y = -\omega^2 y$ or $\ddot y = \omega^2 y$ apply in all situations? If yes, what's the reasoning behind it? I know it makes sense in terms of the units but what's the physical reason behind it? Answer: The equation for a simple harmonic oscillator is $\ddot{x}=-\omega^2x$, so by comparison in your equation you can state that in your system there is oscillatory motion with $\omega^2=\rho g A /m$. The minus sign is essential: $\ddot{x}=-\omega^2x$ describes a restoring acceleration and has the bounded, periodic solutions $x(t)=A\cos(\omega t+\varphi)$, whereas $\ddot{x}=+\omega^2x$ has exponentially growing and decaying solutions, not oscillations. So the relation holds only when the net force is restoring and proportional to the displacement.
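A quick numerical check of the sign's role (an illustrative sketch I added, not part of the original answer; the integrator and parameter values are my own choices): integrating $\ddot y = -\omega^2 y$ keeps the motion bounded and periodic, while flipping the sign produces runaway exponential growth.

```python
def integrate(sign, omega=2.0, y0=1.0, v0=0.0, dt=1e-4, steps=100_000):
    """Semi-implicit Euler for y'' = sign * omega^2 * y; returns max |y| seen."""
    y, v = y0, v0
    max_abs = abs(y)
    for _ in range(steps):
        v += sign * omega**2 * y * dt  # acceleration term
        y += v * dt
        max_abs = max(max_abs, abs(y))
    return max_abs

print(integrate(-1) < 1.01)  # restoring sign: amplitude stays near y0
print(integrate(+1) > 1e3)   # wrong sign: the solution blows up
```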
{ "domain": "physics.stackexchange", "id": 55833, "tags": "harmonic-oscillator, spring, angular-velocity, displacement" }
How to initialize a state of the form $\frac{1}{\sqrt{2}}(|\texttt{++}\rangle + |\texttt{--}\rangle)$ in the circuit model?
Question: I wonder how to initialise a Bell-like state, in the circuit model, where instead of the standard $|\Phi^{\texttt{+}}\rangle$, the entanglement is in the x-basis: hence a state $\frac{1}{\sqrt{2}}(|\texttt{++}\rangle + |\texttt{--}\rangle)$. I thought I could just apply an $H^{\otimes 2}$ after the standard protocol: $CX(H\otimes I)$. But by applying a simple circuit equivalence, this turns out to create the standard $|\Phi^{\texttt{+}}\rangle$. Answer: Your state is $|\Phi^+\rangle$. One way to see this is by expanding each qubit into the computational basis first, then doing the FOIL, and grouping the bases after. Indeed, to answer the specific question about preparing your state, there's no need to perform the two Hadamard gates after the CNOT gate. Remember, entanglement is basis-independent.
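The expansion described in the answer can be checked numerically (a sketch I added; plain-Python state vectors, nothing library-specific): $\frac{1}{\sqrt 2}(|{+}{+}\rangle + |{-}{-}\rangle)$ turns out to be exactly the same vector as $|\Phi^+\rangle = \frac{1}{\sqrt 2}(|00\rangle + |11\rangle)$.

```python
from math import isclose, sqrt

s = 1 / sqrt(2)
zero, one = [1.0, 0.0], [0.0, 1.0]
plus, minus = [s, s], [s, -s]  # |±⟩ in the computational basis

def kron(a, b):
    """Tensor product of two single-qubit state vectors."""
    return [x * y for x in a for y in b]

# |Φ+⟩ = (|00⟩ + |11⟩)/√2 and the state from the question, (|++⟩ + |--⟩)/√2
phi_plus = [(p + q) / sqrt(2) for p, q in zip(kron(zero, zero), kron(one, one))]
state = [(p + q) / sqrt(2) for p, q in zip(kron(plus, plus), kron(minus, minus))]

print(all(isclose(a, b, abs_tol=1e-12) for a, b in zip(state, phi_plus)))  # same vector
```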
{ "domain": "quantumcomputing.stackexchange", "id": 3896, "tags": "gate-synthesis, bell-basis" }
How to find the amount of ions from the dissociation of a weak acid?
Question: Someone asked me this question, "How many ions are formed during the dissociation of 500 molecules of carbonic acid, if it dissociates in the first degree by 20%, and in the second degree by 1%?" and I don't understand this question. First degree? Second? Answer: The word "degree" is not well chosen when it is not attached to the words "of dissociation". The "first degree" means that $20$% of the $\ce{H2CO3}$ molecules are dissociated into $\ce{H+ + HCO3^-}$ ions. The "second degree" means that $1$% are dissociated into $\ce{2 H+ + CO3^{2-}}$ ions. Now use this information to calculate the amounts of $\ce{H^+, HCO3^-}$ and $\ce{CO3^{2-}}$ ions in this solution.
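One way to turn those percentages into ion counts (a worked sketch I added; it reads both percentages as fractions of the original 500 molecules and counts the 1% separately from the 20%, which the problem statement leaves ambiguous):

```python
total = 500
first = int(total * 0.20)   # 100 molecules -> H+ + HCO3-   (2 ions each)
second = int(total * 0.01)  # 5 molecules  -> 2 H+ + CO3^2- (3 ions each)

h_plus = first + 2 * second  # 110
hco3 = first                 # 100
co3 = second                 # 5
print(h_plus, hco3, co3, h_plus + hco3 + co3)  # 110 100 5 215
```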
{ "domain": "chemistry.stackexchange", "id": 16955, "tags": "acid-base, aqueous-solution" }
Introductory resources for learning about quantum Hamiltonians
Question: I am seeking introductory resources which will enable me to answer these questions (textbooks, lecture series, etc.): Given a simple quantum system, how do I derive its Hamiltonian? Given a Hamiltonian, what questions can I answer about the system it describes (and how)? I am approaching this topic primarily from the computer science side. I am familiar with Newtonian classical mechanics as presented in first-year undergraduate physics courses, but never learned classical Hamiltonian mechanics. Answer: Given a simple quantum system, how do I derive its Hamiltonian? For quantum systems of continuous variables, the most common way to construct the Hamiltonian is to add the kinetic energy and potential energy, as described in this resource. The kinetic energy part is explained here, and various potential energy models are given here, which unfortunately I could only get to from this page! For quantum systems on discrete variables, you can construct any $2^n \times 2^n$ Hamiltonian using the Pauli matrices, and any Hamiltonian of any dimension using generalizations of the Gell-Mann matrices. In terms of other resources: There is an open-source Hamiltonian Zoo on GitHub, but it is very incomplete. So far it tells you how to derive the Hamiltonian for two charges interacting with each other (Coulomb), for a spin system interacting with a magnetic field (Zeeman), and for a 2D p-wave Fermi superfluid, but not much else. However, since this is a resource request, I think the Hamiltonian Zoo is a good starting point, because it lists the names of almost every mainstream Hamiltonian imaginable, and the best resource for learning about each of those listed Hamiltonians is Wikipedia. For example: the Molecular Hamiltonian, the Hubbard Hamiltonian, the Tight-Binding Hamiltonian, and the BCS Hamiltonian. In every case, the Hamiltonian is in the article, just look for the equation containing the big $H$!
Given a Hamiltonian, what questions can I answer about the system it describes (and how?). There is no "resource" I know that teaches people what can be learned about a system based on looking at its Hamiltonian, and I'd be quite surprised if such a resource existed. What I can tell you is that there are things about the system that can be learned by looking at the Hamiltonian (such as number of particles or number of degrees of freedom, by looking at the number of terms in the kinetic and potential energy operators in the case of continuous variables, or the size of the matrix in discrete variable Hamiltonians). However you may want to ask this part as a separate question (and not a resource request) in case other people want to suggest other things that can be learned about a system by looking at the Hamiltonian apart from what I've already told you.
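As a small concrete illustration of the claim that discrete-variable Hamiltonians can be built from Pauli matrices (a sketch I added; the transverse-field Ising form and the coupling values are just an arbitrary example, kept real-valued so plain lists suffice):

```python
def kron(A, B):
    """Kronecker product of two square matrices given as lists of lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def add(A, B, ca=1.0, cb=1.0):
    """Entrywise ca*A + cb*B."""
    return [[ca * a + cb * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

I2 = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]   # Pauli X
sz = [[1, 0], [0, -1]]  # Pauli Z

# A two-qubit Hamiltonian from Pauli tensor products:
# H = -J sz⊗sz - h (sx⊗I + I⊗sx)  (a transverse-field Ising term)
J, h = 1.0, 0.5
H = add(kron(sz, sz), add(kron(sx, I2), kron(I2, sx)), ca=-J, cb=-h)

# Real and symmetric, hence Hermitian: a valid 2^n x 2^n Hamiltonian
symmetric = all(H[i][j] == H[j][i] for i in range(4) for j in range(4))
print(len(H), symmetric)  # 4 True
```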
{ "domain": "quantumcomputing.stackexchange", "id": 398, "tags": "resource-request, hamiltonian-simulation" }
How to add a camera to turtlebot?
Question: I am very new to the world of hardware, ports, pins and stuff. We bought a turtlebot3, and I wonder how we can add a camera to it. Can I just buy a camera for the Raspberry Pi like this one: https://www.amazon.com/Raspberry-Pi-Camera-Module-Megapixel/dp/B01ER2SKFS/ref=sr_1_3?s=electronics&ie=UTF8&qid=1502212966&sr=1-3&keywords=raspberry+pi+camera and write a node that basically listens to the image data sent from the camera port and publishes it? Is this the right (easy) way to achieve my goal? Originally posted by rozoalex on ROS Answers with karma: 113 on 2017-08-08 Post score: 0 Answer: It really depends on what you want to do with your camera. It's for example very likely that you will want to add your camera to the tf tree of your robot. Why? Because to use your camera data alongside other sensors you will need to position each of the sensors in one common coordinate frame. (example: rtabmap_ros can build a 3D map of an environment using RGBD camera, Odometry and 2D laser data, but in order to combine this data, you have to position each of the sensors in a common framework: this framework is often completed with what you will find in the wiki articles, in the "Required tf Transforms" and "Provided tf Transforms" sections) How to do that? You can for example broadcast a tf, or edit the urdf file of your robot. For the turtlebot 2, here is some information about how to find the urdf files. On the other hand, if you plan on doing image processing using OpenCV, you don't need to do that to use your camera. If the camera is not already supported by a ROS package (understand: not launched with a ROS node and its data seen through a ROS message topic being published), you'll need to connect it yourself with a new package to the ROS network. Originally posted by Blupon with karma: 127 on 2017-08-08 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 28557, "tags": "turtlebot, camera" }
Coredata delete all data in an entity
Question: I have this code that I am using to delete all records from an entity. The question is: can it be done better, or is it already the best? func deleteIncidents() { let appDel: AppDelegate = UIApplication.sharedApplication().delegate as! AppDelegate let context: NSManagedObjectContext = appDel.managedObjectContext! let request = NSFetchRequest(entityName: "Incidents") request.returnsObjectsAsFaults = false do { let incidents = try context.executeFetchRequest(request) if incidents.count > 0 { for result: AnyObject in incidents{ context.deleteObject(result as! NSManagedObject) print("NSManagedObject has been Deleted") } try context.save() } } catch {} } Answer: let appDel: AppDelegate = UIApplication.sharedApplication().delegate as! AppDelegate let context: NSManagedObjectContext = appDel.managedObjectContext! The type annotations are not necessary; the Swift compiler can infer the type automatically: let appDel = UIApplication.sharedApplication().delegate as! AppDelegate let context = appDel.managedObjectContext! request.returnsObjectsAsFaults = false This makes no sense here because you don't want to access any properties of the returned objects. On the contrary, only the managed object ID is needed to delete objects and no properties need to be fetched at all, so this should be replaced by request.includesPropertyValues = false to increase the performance. I prefer to cast the return value from the fetch request immediately let incidents = try context.executeFetchRequest(request) as! [NSManagedObject] this makes both the type annotation : AnyObject and the cast as! NSManagedObject obsolete. You have to decide how an error should be handled, but I would at least print some message when running in debug mode } catch let error as NSError { debugPrint(error) } to detect possible problems. Starting with iOS 9, objects can be deleted directly in the store without loading them into memory.
This would look like this: func deleteIncidents() { let appDel = UIApplication.sharedApplication().delegate as! AppDelegate let context = appDel.managedObjectContext! let coord = appDel.persistentStoreCoordinator let fetchRequest = NSFetchRequest(entityName: "Incidents") let deleteRequest = NSBatchDeleteRequest(fetchRequest: fetchRequest) do { try coord.executeRequest(deleteRequest, withContext: context) } catch let error as NSError { debugPrint(error) } } There is a catch (!) however: Existing objects already loaded into the managed object context are not automatically removed, and doing so is a bit tricky. See https://www.bignerdranch.com/blog/new-in-core-data-and-ios-8-batch-updating/ for more information.
{ "domain": "codereview.stackexchange", "id": 15642, "tags": "swift, core-data" }
Why does lambda decay violate parity?
Question: When a lambda particle decays into proton and a pion, I am told it does not conserve parity. Why? Answer: First, an assignment of the parities. The parity of fermions is a bit ambiguous because one may always redefine parity by $$ P \to P (-1)^{2J}, P(-1)^L, P(-1)^{3B}, P(-1)^{3Q} $$ or one may add the product of several factors of this kind because the second factor is a multiplicatively conserved sign. By this definition, one gets another parity that is still conserved (at least in low-energy processes that also conserve $B,L$). However, there's a convention that assigns a particular parity to fermions. Note that Weinberg has proved that $$ P^2 = (-1)^{2J} $$ so parity behaves much like the rotation by 180 degrees: its square changes the sign of states with an odd number of fermions. In the standard convention, electron, proton, and neutron are set to have $P=+1$: one is allowed to make three choices like that. The parity of the pion is then determined to be negative, $P=-1$, because a deuteron-pion ground state may decay into two neutrons with $L=1$ - the odd orbital momentum changes the sign of the parity, too. Strong and electromagnetic interactions preserve parity, so one may assign parity to all hadrons, as determined from various strong and electromagnetic interactions. Lambda then turns out to have a positive parity $P=+1$ much like protons and neutrons because it's just another bound state of three quarks and the change of the quarks' identity doesn't change parity. The neutral $\Lambda^0$ decays to a nucleon and a pion, $p+\pi^-$ or $n+\pi^0$, so a $P=+1$ state decays into one particle (nucleon) with $P=+1$ and one (pion) with $P=-1$. That violates the parity because $(+1)\neq (+1)(-1)$. It's because the decay is due to the weak interactions. Weak interactions don't preserve the parity because even the spectrum doesn't: for example, a parity-transformed partner of a left-handed neutrino - the right-handed neutrino - doesn't even exist. 
This asymmetry is confirmed by the weak interactions that contain two-component (fundamentally left-right asymmetric) spinors or, in the four-component spinor formalism, combinations $(1\pm \gamma_5)$ - scalars and pseudoscalars - that maximally violate parity. Because of these interactions, any parity-violating process that respects all the other laws is allowed, although it may be slow because it must rely on the weak interactions, which are weak. The decays of the neutral Lambda baryon - and similar decays of the other Lambda particles - belong among the parity-violating ones. The violation of parity conservation, discovered half a century ago, was a shock. But once you appreciate the simple fact that fields may be described by a 2-component spinor that prefers one chirality over the other - may describe left-handed particles without the right-handed ones - it's not so shocking. The two-component spinors work because $SL(2,C)=SO(3,1)$, locally. The fundamental representation of $SL(2,C)$ has a spin-1/2, and it also has a small enough number of components that there's no oppositely spinning particle (with the same other charges). The theory is still Lorentz-invariant. Not only is parity violated in some processes: at very high energies, those processes become so common that it isn't even possible to define parity accurately. After all, the parity transformation of the neutrino states is ill-defined. That's why it didn't hurt that I could have redefined parity by the signs coming from the lepton and baryon numbers (which are ultimately violated at very high energies, perhaps with the exception of $B-L$) at the beginning: none of the parity operators is actually fully well-defined on the spectrum of particle states.
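The sign bookkeeping behind "$(+1)\neq(+1)(-1)$" can be written out explicitly (a trivial arithmetic sketch I added, for the lowest orbital momentum $L=0$):

```python
# Intrinsic parities from the answer: P(Lambda) = +1, P(p) = +1, P(pi) = -1
P_lambda, P_proton, P_pion = +1, +1, -1

# For decay into an s-wave (L = 0) final state, the final parity is just the
# product of the intrinsic parities, since (-1)^L = +1:
P_final = P_proton * P_pion
print(P_final, P_final == P_lambda)  # -1 False: parity is not conserved
```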
{ "domain": "physics.stackexchange", "id": 533, "tags": "quantum-mechanics, particle-physics, parity" }
Integer to Alphabet string ("A", "B", ...."Z", "AA", "AB"...)
Question: So this question is prompted by two things. I found some code in our source control doing this sort of thing. These SO questions: https://stackoverflow.com/questions/297213/translate-an-index-into-an-excel-column-name https://stackoverflow.com/questions/837155/fastest-function-to-generate-excel-column-letters-in-c-sharp https://stackoverflow.com/questions/4075656/how-to-get-continuous-characters-in-c/4077835#4077835 https://stackoverflow.com/questions/1011732/iterating-through-the-alphabet-c-sharp-a-caz So when I thought about this problem this popped into my head almost immediately. class Util { private static string[] alphabetArray = { string.Empty, "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z" }; public static IEnumerable<string> alphaList = alphabetArray.Cast<string>(); public static string IntToAA(int value) { while (Util.alphaList.Count() -1 < value) { Util.IncreaseList(); } return Util.alphaList.ElementAt(value); } private static void IncreaseList() { Util.alphaList = Util.alphabetArray.Take(1).Union( Util.alphaList.SelectMany(currentLetter => Util.alphabetArray.Skip(1).Select(innerLetter => currentLetter + innerLetter) ) ); } } My question is this: Is this approach a better solution (performance-wise)? Or is a recursive / computed value better (e.g. this answer)? Answer: Math! Simple math is certainly the nicest way: no lists to deal with, just old-fashioned ASCII and math. If you want to be able to toggle the capitalization of this method, simply use a ternary operator like this: isCapital ? 'A' : 'a'. I just left it capital as that is how the OP seemed to want it. Jeff Mercado's answer explained well enough the differences between the calculated, recursive and other approaches... I mostly wanted to provide a simplistic calculated answer that did not involve using lists.
public static string IntToLetters(int value) { string result = string.Empty; while (--value >= 0) { result = (char)('A' + value % 26) + result; value /= 26; } return result; } Edit: To meet the requirement of A being 1 instead of 0, I've added -- to the while loop condition, and removed the value-- from the end of the loop. If anyone wants this to be 0 for their own purposes, you can reverse the changes, or simply add value++; at the beginning of the entire method.
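For comparison, here is a Python port of the same bijective base-26 arithmetic (my own translation, not from the original answer), exercising the boundary cases that usually trip this conversion up:

```python
def int_to_letters(value: int) -> str:
    """1 -> 'A', 26 -> 'Z', 27 -> 'AA', mirroring the C# IntToLetters above."""
    result = ""
    while value > 0:
        value -= 1  # shift to 0-based for this digit (the C# --value trick)
        result = chr(ord('A') + value % 26) + result
        value //= 26
    return result

for n in (1, 26, 27, 52, 702, 703):
    print(n, int_to_letters(n))  # A, Z, AA, AZ, ZZ, AAA
```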
{ "domain": "codereview.stackexchange", "id": 6529, "tags": "c#, performance, strings, converting" }
Pattern matching to check for a non-empty argument list
Question: I'm learning Elixir. While building a trivial CLI application (as seen here http://asquera.de/blog/2015-04-10/writing-a-commandline-app-in-elixir/) I'm defining a module that implements a main/1 function that accepts a List as an argument. My question is: What is the best way to pattern match a method against a non-empty list? This is what I did and it seems to work, but I was wondering if the Elixir community has better suggestions (maybe def main(args) when is_list(args) and length(args) > 0 do is considered better?) defmodule Cli do def main([]) do IO.puts "arguments are needed" end def main([_|_] = args) do options = parse_args(args) input = options[:name] size = options[:size] output(input, size) end def parse_args(args) do {options, _, _} = OptionParser.parse args, switches: [name: :string, size: :integer] options end def output() do IO.puts "Missing required --name parameter" end def output(input) do # defaulting size to 50 output(input, 50) end def output(input, block_size) do IO.puts "you entered #{input} and #{block_size}" end end Answer: You've implemented it by pattern matching against empty lists with def main([]). If an empty list is passed to main, it will be caught here. In the second def main(args), args should never be an empty list. If you want to make sure that args is a list, you could use a guard clause: def main(args) when is_list(args). (You should then write a catch-all third definition: def main(_)).
{ "domain": "codereview.stackexchange", "id": 23901, "tags": "beginner, validation, elixir" }
Deep Learning for non-continuous dataset
Question: I am working with this dataset, which is a record of student academic details, and I want to predict the student's performance. Since the dataset is non-continuous I cannot apply a CNN on this dataset. How can I apply deep learning on this kind (non-continuous) of dataset? I searched online but could not find anything relevant. Thank you!! Answer: Deep Learning excels in problems where the data is relatively unstructured. Stacked layers help find conceptual features that can be used to infer rules. Your dataset seems very structured at first glance. And, as you pose, it doesn't look like it needs specialised layers that exploit sequential or spatial relations. Neural-network-wise, this would warrant one or two fully connected layers, connected to an output layer (shaped to your wishes). However, typically, problems like these are tackled with more biased approaches (e.g. decision tree learners).
{ "domain": "datascience.stackexchange", "id": 6268, "tags": "machine-learning, neural-network, deep-learning, cnn, machine-learning-model" }
What will be the path of this particle in combined electric and magnetic fields?
Question: Say a particle is moving with constant velocity along the positive y-axis. Both electric and magnetic fields are applied along the positive x-axis. What will be the path of this particle? I am confused between these two solutions: (1) Because the particle is moving along the y-axis and magnetic field is along x-axis, the magnetic force must be directed along the negative z-axis while electric force will be along positive x-axis. The resultant force will be in x-z plane which is perpendicular to the direction of velocity and hence the particle will execute circular motion with constant speed. (2) The magnetic force will be in negative z-axis and electric force will be along positive x-axis. The magnetic force will make the particle execute circular motion in the y-z plane and the electric force will take the particle in the positive x direction. The combined motion will create a helix. Answer: It's Option 2. To see this, write down the differential equations for the motion. We have Newton's Second Law as $m \dot{\vec{v}} = q (\vec{E} + \vec{v} \times \vec{B})$; assuming that both fields point in the $x$-direction ($\vec{E} = E \hat{\imath}$ and $\vec{B} = B \hat{\imath}$), the components of the Newton's Second Law are \begin{align} m \dot{v}_x &= q E\\ m \dot{v}_y &= q B v_z \\ m \dot{v}_z &= - q B v_y \end{align} You may or may not be able to solve these equations immediately. But what we can see is that the motion in the $x$-direction is completely independent of the motion in the $yz$-plane; the equation for $v_x$ doesn't include $v_y$ or $v_z$, and the equations for $v_y$ and $v_z$ don't involve $v_x$. Moreover, the equation for $v_x$ is just what it would be for a particle in an electric field (without a magnetic field), and the equations for $v_y$ and $v_z$ are just what they would be for a charged particle in a magnetic field (without an electric field.) 
So the particle executes uniformly accelerated motion along the $x$-axis, and circular motion parallel to the $yz$-plane. The result is a helix whose pitch increases along the path of the particle. Footnote: This all assumes that the charged particle is non-relativistic. If the speed of the particle becomes comparable to the speed of light, then the analysis becomes more complicated; but the basic conclusion, that the path is a helix of increasing pitch, remains essentially the same.
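A numerical sketch of Option 2 (my own illustration with unit charge, mass and field strengths; not part of the original answer): integrating the three coupled equations shows the $x$ velocity growing linearly while the speed in the $y$-$z$ plane stays constant, i.e. uniform acceleration stacked on circular motion.

```python
import math

q = m = E = B = 1.0          # illustrative unit values
dt, steps = 1e-4, 100_000    # integrate up to t = 10

vx, vy, vz = 0.0, 1.0, 0.0   # initial velocity along +y, as in the question
for _ in range(steps):
    ax = q * E / m           # m dvx/dt = qE
    ay = q * B * vz / m      # m dvy/dt = qB vz
    az = -q * B * vy / m     # m dvz/dt = -qB vy
    vx += ax * dt
    vy += ay * dt
    vz += az * dt

speed_yz = math.hypot(vy, vz)
print(round(vx, 2), round(speed_yz, 2))  # vx = qEt/m grows; |v| in y-z stays ~1
```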
{ "domain": "physics.stackexchange", "id": 97740, "tags": "electromagnetism, magnetic-fields, electric-fields" }
Abstraction of a proton in cycloheptatriene
Question: So, whenever there is abstraction of H+ by an acid-base reaction, I see that the proton on the sp3-hybridized carbon is taken. Why is this if you take into account that vinylic hydrogens are more acidic? Also, on a sidenote, what is the hybridization state of a carbon in a C=C double bond where the vinylic proton was removed by a base? Answer: The pKa for a vinylic proton is ~ 43, the pKa for propene is ~ 40, and the pKa for 1,4-pentadiene is ~ 35. Consequently it's not the vinylic protons that are the most acidic, it's the allylic ones. Now, the question of whether the resulting cycloheptatrienyl anion is aromatic or antiaromatic is complicated.
{ "domain": "chemistry.stackexchange", "id": 2110, "tags": "organic-chemistry, aromatic-compounds" }
Convert pcap to bag file
Question: Dear everyone, I got some trouble when trying to convert a *.pcap file into a *.bag file. I tried the method explained here: https://answers.ros.org/question/213080/convert-raw-velodyne-vlp16-pcap-to-bagfile/ In a first terminal I launch: roscore Then in a second one, I launch: rosrun rosbag record -o your_vlp16_070815.bag /velodyne_packets to open the recording file, and finally in the third one, I launch: rosrun velodyne_driver velodyne_node _model:=VLP16 _pcap:=bureau.pcap _read_once:=true to play the pcap file. I wait until the third command finishes ("done" message), then CTRL+C the second command. I get a *.bag file. After this, I try to view it using RViz following this method: https://answers.ros.org/question/227205/how-to-display-bag-file-in-rviz/ rosrun rviz rviz -f velodyne rosrun velodyne_pointcloud cloud_node rosbag play your_vlp16_070815.bag I work with ROS Kinetic on Ubuntu 16.04. Questions: I don't have a calibration file (*.yaml) because I got no file from the VLP16 distributor. Can you provide me with a sample file even if it doesn't correspond to my sensor? When I run the viewer, I get "No tf data. Actual error: Fixed Frame [velodyne] does not exist". I get no points on the screen. Any idea? Best regards, Boris Leroux Originally posted by BorisLerouxFox on ROS Answers with karma: 1 on 2017-08-09 Post score: 0 Original comments Comment by sohel on 2017-11-08: How much time does the third terminal require to finish? It's already been more than an hour Comment by gvdhoorn on 2017-11-08: That will depend on the size of the pcap file. Are you not seeing the done message? Answer: When I run the viewer, I get "No tf data. Actual error: Fixed Frame [velodyne] does not exist". I get no points on the screen. Any idea? You can try rosrun tf static_transform_publisher 0 0 0 0 0 0 base_link velodyne 10. This worked for me. If you have extrinsic data you may enter it here. Furthermore, select velodyne as the fixed frame in RViz.
Originally posted by SoVa with karma: 16 on 2018-04-14 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 28569, "tags": "velodyne" }
How can I remap an entire roslaunch file?
Question: Hello, I have 2 real turtlebots (for now... I'll have more in the near future) and I want them to run on one ros-master. Since they have identical topics, if I launch something on one turtlebot the other gets it too. For example, if I launch turtlebot_bringup minimal.launch on one and afterwards on the other, it will shut down the first and start on the second. Because of that I've thought of creating groups and remapping the topics, but since all the topics are the same it seems like a lot of work. Can I remap an entire launch file without remapping every topic one by one? Originally posted by tal_eldar on ROS Answers with karma: 33 on 2015-09-03 Post score: 0 Answer: You should be able to just export ROS_NAMESPACE to the namespace you want and launch the same file. It might also be possible to nest the whole roslaunch in one more group, but I'm not sure how well that deals with nesting. The main thing to look out for is that all nodes/launch parts are written properly to support nesting, i.e., no topics with / when not needed, etc. Originally posted by dornhege with karma: 31395 on 2015-09-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by cyborg-x1 on 2015-09-03: damned, you were faster ;-) Comment by cyborg-x1 on 2015-09-03: For multiple turtlebots you should probably check out http://wiki.ros.org/turtlebot_concert/Tutorials/indigo/Concert%20Bringup (because of the APP stuff and so on ...) Comment by tal_eldar on 2015-09-04: Hi, thanks for the fast response! I'm new to ROS and the tutorials I've encountered so far regarded nodes; can you give me an example? I'm launching a lot of topics with: roslaunch turtlebot_bringup minimal.launch How can I remap this argument? And concert is no solution for me (joint computer issues) Comment by tal_eldar on 2015-09-08: I was finally able to test the answer and it is working fine with exporting ROS_NAMESPACE before launching the launch file. Is there a way to permanently set the ROS_NAMESPACE?
Comment by dornhege on 2015-09-08: You could put that into your .bashrc. However setting that permanently is usually unnecessary as then you could also just not set it. It is mostly an advantage if you want to start launches in multiple different namespaces.
{ "domain": "robotics.stackexchange", "id": 22563, "tags": "ros, roslaunch, turtlebot, remapping" }
How does Higgs field transform when moving between inertial frames?
Question: My naive understanding of the Higgs field is it's a bit like molasses and that's how particles "get" mass. In fact without the Higgs field, particles could travel at the speed of light. However, the problem I have with this is doesn't this imply a special reference frame? Given that objects travelling at the speed of light relative to a "stationary object" (in the rest frame of the stationary object) gain mass - my understanding is that is partially due to the massively increased flux of Higgs boson as they move through the Higgs field. However, in the rest frame of the moving particle there is no mass increase. How does this work? If the analogy with molasses is correct then if the Higgs field transforms as per Lorentz then I can't see how you would get the flux increase which would lead to the increase in mass. So I'm guessing a different transformation is appropriate. What is this transformation? Answer: Frankly, you must specify what you mean a bit more clearly with your flakey molasses analogy: it hardly makes sense to me. In non-fantasy physics terms, the Higgs field is a scalar, and its non-vanishing vacuum value v is Lorentz invariant and looks identical in every frame. The Yukawa term giving fermions a mass by coupling left/right chiral modes is Lorentz invariant, and the Higgs mechanism framework giving gauge bosons masses is also Lorentz covariant/invariant. The term "gives mass" in physics means that setting the couplings to the Higgs, or its vacuum expectation v, equal to zero ("turning them off") would yield zero masses for the affected particles. So zero Yukawa g or zero v for fermions would decouple left/right modes and the respective fermions would be massless. Zero v would yield massless Ws and Z (messing with its coupling to them is trickier). Can you specify perhaps what this odd business of luxons in molasses is about? Ben Zimmer thought of it as the splinter on the banister of slippery metaphors.
{ "domain": "physics.stackexchange", "id": 80646, "tags": "special-relativity, higgs" }
Is there a good chance that gravitational waves will be detected in the next years?
Question: Is there a good chance that gravitational waves will be detected in the next years? Theoretical estimates on the size of the effect and the sensitivity of the newest detectors should permit a forecast on this. Answer: Yes, most likely, unless there is something fundamentally wrong with our understanding of gravity. The most promising candidate for detection is Advanced LIGO, which is currently in the process of being designed and built. The website has some really interesting information listed, including the construction schedule (PDF), and the upgrades, such as upgrading from a 10W to a 200W laser. According to Wikipedia, they are expecting to start operations sometime in 2014, which will be after they have completed construction and calibrated the instrument. Of particular note is that the higher power laser will make calibrating the mirrors more challenging, so right now they still have one interferometer (the shorter one) in operation and are performing a squeeze test. Once Advanced LIGO is complete, they are expecting a sensitivity increase by a factor of 10, pushing the detection rate to possibly daily. It may also be good to note that they are still processing the data from the old data runs (by means of Einstein@Home), so it is still possible that a detection will turn up within the data, although it will be of a different type.
{ "domain": "physics.stackexchange", "id": 1039, "tags": "experimental-physics, gravitational-waves, ligo" }
Pellet of lithium in a vacuum
Question: What would happen to a grain-of-sand-sized pellet of lithium in a vacuum? Because there is no pressure, would it become more like a sticky liquid? Answer: The melting point of lithium is 180 °C, so it wouldn't turn into a liquid unless you heated it to above this temperature. In principle all solids have a non-zero vapour pressure, so in principle solid lithium would sublime in a vacuum. However according to Wikipedia the vapour pressure of solid lithium is well approximated by: $$ \log P \approx 10.673 - \frac{8310}{T} $$ and at 298K (room temperature) this gives a vapour pressure of about $6 \times 10^{-18}$ Pa, which is effectively zero. So unless you're prepared to wait a very, very long time nothing is going to happen to your grain of lithium.
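Plugging room temperature into that fit is a one-liner (a quick numerical check, using the formula exactly as quoted; its prefactor and units are taken on trust from the answer):

```python
def li_vapour_pressure(T):
    """Vapour pressure of solid lithium from the quoted fit:
    log10(P) = 10.673 - 8310 / T, with T in kelvin."""
    return 10.0 ** (10.673 - 8310.0 / T)

# At 298 K the exponent is 10.673 - 8310/298 ≈ -17.2, i.e. on the order
# of 1e-17 in the fit's pressure units -- vanishingly small either way.
P_room = li_vapour_pressure(298.0)
```

The fit also shows why heating matters: the pressure rises by many orders of magnitude between room temperature and the melting point.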
{ "domain": "physics.stackexchange", "id": 15135, "tags": "states-of-matter" }
Errors during rosmake
Question: I am trying to perform a rosmake on my package and I keep getting the same errors no matter what I have tried to do to fix them. I started out creating the package with: roscreate-pkg movbotGPS2 geometry_msgs roscpp and in my CMakeLists.txt I have uncommented the following: rosbuild_init() set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin) set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib) rosbuild_genmsg() rosbuild_add_library(${PROJECT_NAME} src/movesquare_modified.cpp) rosbuild_add_executable(movesquare src/movesquare_modified.cpp) My manifest.xml file contains: movbotGPS2 BSD http://ros.org/wiki/movbotGPS2 and my 'make' errors are: [ rosmake ] Packages requested are: ['movbotGPS2'] [ rosmake ] Logging to directory/home/turtlebot/.ros/rosmake/rosmake_output-20120411-134134 [ rosmake ] Expanded args ['movbotGPS2'] to: ['movbotGPS2'] [ rosmake ] Checking rosdeps compliance for packages movbotGPS2. This may take a few seconds. [ rosmake ] rosdep check passed all system dependencies in packages [rosmake-0] Starting >>> rosbuild [ make ] [rosmake-0] Finished <<< rosbuild ROS_NOBUILD in package rosbuild No Makefile in package rosbuild [rosmake-1] Starting >>> cpp_common [ make ] [rosmake-1] Finished <<< cpp_common ROS_NOBUILD in package cpp_common [rosmake-2] Starting >>> roslib [ make ] [rosmake-2] Finished <<< roslib ROS_NOBUILD in package roslib [rosmake-1] Starting >>> roscpp_traits [ make ] [rosmake-2] Starting >>> rostime [ make ] [rosmake-3] Starting >>> xmlrpcpp [ make ] [rosmake-0] Starting >>> roslang [ make ] [rosmake-2] Finished <<< rostime ROS_NOBUILD in package rostime [rosmake-3] Finished <<< xmlrpcpp ROS_NOBUILD in package xmlrpcpp [rosmake-2] Starting >>> rosconsole [ make ] [rosmake-0] Finished <<< roslang ROS_NOBUILD in package roslang No Makefile in package roslang [rosmake-3] Starting >>> std_msgs [ make ] [rosmake-0] Starting >>> rosclean [ make ] [rosmake-1] Finished <<< roscpp_traits ROS_NOBUILD in package roscpp_traits [rosmake-1] 
Starting >>> roscpp_serialization [ make ] [rosmake-3] Finished <<< std_msgs ROS_NOBUILD in package std_msgs [rosmake-2] Finished <<< rosconsole ROS_NOBUILD in package rosconsole [rosmake-3] Starting >>> rosgraph_msgs [ make ] [rosmake-0] Finished <<< rosclean ROS_NOBUILD in package rosclean [rosmake-2] Starting >>> rosgraph [ make ] [rosmake-0] Starting >>> rosparam [ make ] [rosmake-1] Finished <<< roscpp_serialization ROS_NOBUILD in package roscpp_serialization [rosmake-1] Starting >>> rosmaster [ make ] [rosmake-3] Finished <<< rosgraph_msgs ROS_NOBUILD in package rosgraph_msgs [rosmake-2] Finished <<< rosgraph ROS_NOBUILD in package rosgraph [rosmake-2] Starting >>> rospy [ make ] [rosmake-3] Starting >>> roscpp [ make ] [rosmake-0] Finished <<< rosparam ROS_NOBUILD in package rosparam [rosmake-0] Starting >>> rosunit [ make ] [rosmake-1] Finished <<< rosmaster ROS_NOBUILD in package rosmaster [rosmake-3] Finished <<< roscpp ROS_NOBUILD in package roscpp [rosmake-3] Starting >>> rosout [ make ] [rosmake-2] Finished <<< rospy ROS_NOBUILD in package rospy [rosmake-0] Finished <<< rosunit ROS_NOBUILD in package rosunit [rosmake-3] Finished <<< rosout ROS_NOBUILD in package rosout [rosmake-3] Starting >>> roslaunch [ make ] [rosmake-3] Finished <<< roslaunch ROS_NOBUILD in package roslaunch No Makefile in package roslaunch [rosmake-3] Starting >>> rostest [ make ] [rosmake-3] Finished <<< rostest ROS_NOBUILD in package rostest [rosmake-3] Starting >>> topic_tools [ make ] [rosmake-3] Finished <<< topic_tools ROS_NOBUILD in package topic_tools [rosmake-3] Starting >>> rosbag [ make ] [rosmake-3] Finished <<< rosbag ROS_NOBUILD in package rosbag [rosmake-3] Starting >>> rosbagmigration [ make ] [rosmake-3] Finished <<< rosbagmigration ROS_NOBUILD in package rosbagmigration No Makefile in package rosbagmigration [rosmake-3] Starting >>> geometry_msgs [ make ] [rosmake-3] Finished <<< geometry_msgs ROS_NOBUILD in package geometry_msgs [rosmake-3] Starting >>> 
movbotGPS2 [ make ] [ rosmake ] Last 40 linesvbotGPS2: 6.0 sec ] [ 1 Active 26/27 Complete ] {------------------------------------------------------------------------------- make[3]: Leaving directory /home/turtlebot/ros_workspace/movbotGPS2/build' [ 0%] Built target rospack_genmsg make[3]: Entering directory /home/turtlebot/ros_workspace/movbotGPS2/build' make[3]: Leaving directory /home/turtlebot/ros_workspace/movbotGPS2/build' [ 0%] Built target rosbuild_precompile make[3]: Entering directory /home/turtlebot/ros_workspace/movbotGPS2/build' make[3]: Leaving directory /home/turtlebot/ros_workspace/movbotGPS2/build' make[3]: Entering directory /home/turtlebot/ros_workspace/movbotGPS2/build' [ 50%] Building CXX object CMakeFiles/movbotGPS2.dir/src/movesquare_modified.o /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:12:45: error: turtlebot_node/SetTurtlebotMode.h: No such file or directory /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:133:19: warning: character constant too long for its type /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp: In function ‘int main(int, char**)’: /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:39: error: ‘Vector3’ has not been declared /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:39: error: expected ‘;’ before ‘tw’ /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:40: error: ‘tw’ was not declared in this scope /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:55: warning: unused variable ‘rndm2’ /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:71: error: ‘pubvel’ was not declared in this scope /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:74: error: ‘MAX_RUNTIME_SECONDS’ was not declared in this scope /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:79: error: ‘roscpp’ was not declared in this scope 
/home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:79: error: ‘self’ was not declared in this scope /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:84: error: ‘rate’ was not declared in this scope /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:103: error: ‘nx’ was not declared in this scope /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:103: error: ‘turtlebot_node’ has not been declared /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:104: error: ‘turtlebot_node’ has not been declared /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:104: error: expected ‘;’ before ‘srvMsg’ /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:105: error: ‘srvMsg’ was not declared in this scope /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:113: error: ‘twist’ was not declared in this scope /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:113: error: ‘Twist’ was not declared in this scope /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:121: error: expected unqualified-id before ‘<’ token /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:121: error: expected ‘;’ before ‘)’ token /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:131: error: expected primary-expression before ‘>’ token /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:131: warning: left-hand operand of comma has no effect /home/turtlebot/ros_workspace/movbotGPS2/src/movesquare_modified.cpp:134: error: ‘square’ was not declared in this scope make[3]: *** [CMakeFiles/movbotGPS2.dir/src/movesquare_modified.o] Error 1 make[3]: Leaving directory /home/turtlebot/ros_workspace/movbotGPS2/build' make[2]: *** [CMakeFiles/movbotGPS2.dir/all] Error 2 make[2]: Leaving directory /home/turtlebot/ros_workspace/movbotGPS2/build' make[1]: *** [all] Error 2 make[1]: Leaving directory 
`/home/turtlebot/ros_workspace/movbotGPS2/build' [ rosmake ] Output from build of package movbotGPS2 written to: [ rosmake ] /home/turtlebot/.ros/rosmake/rosmake_output-20120411-134134/movbotGPS2/build_output.log [rosmake-3] Finished <<< movbotGPS2 [FAIL] [ 6.12 seconds ] [ rosmake ] Halting due to failure in package movbotGPS2. [ rosmake ] Waiting for other threads to complete. [ rosmake ] Results: [ rosmake ] Built 27 packages with 1 failures. [ rosmake ] Summary output to directory [ rosmake ] /home/turtlebot/.ros/rosmake/rosmake_output-20120411-134134 So can anyone help me by telling what I'm doing wrong? Thanks, Gene Originally posted by SmithGeneP on ROS Answers with karma: 21 on 2012-04-12 Post score: 0 Original comments Comment by SmithGeneP on 2012-04-16: I commented out the rosbuild_add_library line, added turtlebot_node in manifest.xml and I checked and made sure ros/ros.h was included. So I ran rosmake and the errors disappeared. Thank you Lorenz for the help. Answer: In your case, I think you shouldn't build a library since you have a main function. Get rid of the rosbuild_add_library line. From the error message, I see that you are also depending on turtlebot_node, make sure that you depend on it in the manifest file. In addition, it seems like the compiler cannot find some roscpp stuff. Did you include ros/ros.h? Vector3 is provided by bullet. Add either tf or bullet to your dependencies in the manifest file and make sure that you have the corresponding include line in your cpp file. Originally posted by Lorenz with karma: 22731 on 2012-04-12 This answer was ACCEPTED on the original site Post score: 3
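A minimal sketch of what the accepted fix amounts to in the build file (assuming the only target is the movesquare executable; this mirrors the macros already shown in the question rather than a verified final file):

```cmake
# CMakeLists.txt (rosbuild era): build the node as an executable only.
rosbuild_init()
set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)
rosbuild_genmsg()
# No rosbuild_add_library() here -- a file with main() belongs in an executable:
rosbuild_add_executable(movesquare src/movesquare_modified.cpp)
```

Alongside this, the manifest.xml needs `<depend package="turtlebot_node"/>` (for the missing SetTurtlebotMode.h header) and `<depend package="tf"/>` or `<depend package="bullet"/>` (so Vector3 resolves), in addition to the existing geometry_msgs and roscpp dependencies.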
{ "domain": "robotics.stackexchange", "id": 8944, "tags": "rosmake" }
Pair annihilation - can annihilation be moderated?
Question: I recently asked this question: How close does a particle-antiparticle pair need to be for annihilation to happen? And that received a good answer. But there was a second part to my question that was not addressed, and so I'm posting it here for more direct attention. When an annihilation event occurs, how fast does it occur, and can the release of energy somehow be moderated (in a similar manner in which a fission reaction is moderated)? Or is it all or nothing? With charged pairs I would think the task of moderating the reaction might be difficult since the two particles would be strongly attracted to one another. But then for a neutral particle pair (e.g. neutron and anti-neutron) there might be hope? Answer: Within current quantum field theory, it does not make sense to ask "how long" a particular process takes to occur. There is a certain probability that a particle and its antiparticle annihilate. But there is no concept of a "process of annihilation". There's the in-state (particle and anti-particle) and the out-state (products of the annihilation, usually photons), which are assumed to lie in the infinite past and future where no interaction is possible (since there is no notion of "particles" in the interacting case), and there's the probability to get the out-state from the in-state. Quantum field theory offers no description of "how" the in-state is converted into the out-state, except that it's unitary time evolution. You can expand the amplitude in Feynman graphs and think of the individual graphs as possible "processes" producing the out-state, but this is not rigorously meaningful. In particular, you can't tell which one of those processes produced the out-state, so the notion that a specific one of them did is not meaningful.
{ "domain": "physics.stackexchange", "id": 25991, "tags": "standard-model, probability, antimatter, pair-production" }
How to run ROS on a System on Chip (SoC)?
Question: Hi all, I'm hoping you can help me understand something. I have a robotics application where size and cost are potentially a big constraint. For lots of robots, single board computers can be very useful because we can install Linux on them and then ROS on top of that as the basis for the robot's functionality. I'm wondering if it is possible to do the same using a system on chip? If so, how might I go about getting started with something like this? I'm basically imagining that I would use some kind of development kit similar to the Nvidia Jetson Nano (https://www.nvidia.com/en-gb/autonomous-machines/embedded-systems/jetson-nano/) to initially get into a chip, install Linux on it, install ROS on it and launch some ROS packages. With the chip working through the development kit, it would then be moved to a custom PCB on the robot for final usage. I suppose I'm thinking of the way the Nvidia Jetson Nano development kit is used to develop software that is deployed onto a System on Module for production but something relevant to a small, inexpensive System on Chip. Is this situation remotely realistic? I'm wondering if this rough approach might open up possibilities for making the electronics/computing hardware side of the robot really cheap and really small in size compared to using something like a Raspberry Pi or any other SBC. As you may guess, I cannot currently claim any level of expertise in this area so any guidance would be appreciated! Have a great day :) Originally posted by Py on ROS Answers with karma: 501 on 2020-03-19 Post score: 0 Answer: Yes, it's realistic, and it's also how roughly 20% of my robots are built. Jetson even comes with a custom flavor of Ubuntu installed that you're working with. You can just install ROS via the usual routes and be on your way. For something like an RPi, it depends on the OS, but if you're using a light Ubuntu or Ubuntu, you should be good to go.
There are things like Raspbian, and there are some images out there with ROS pre-installed. Otherwise, you can still use it but will have to do a source build. tl;dr: yes! Originally posted by stevemacenski with karma: 8272 on 2020-03-19 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 34610, "tags": "ros, microcontroller, ros2, raspberrypi, module" }
What is 'antenna-chain permutation' and its advantages?
Question: I am working with Wi-Fi channel state information (CSI, a sort of channel matrix with OFDM subcarriers). I am using an Intel 5300 Wi-Fi card, and this tool provides a way to play around with extracted CSI information via MATLAB. The interesting thing is, there is perm, a permutation vector in a figure below: It says a permutation vector is [2, 1, 3]. On the FAQ page of the tool, it says: perm tells us how the NIC permuted the signals from the 3 receive antennas into the 3 RF chains that process the measurements. The sample value of [3 2 1] implies that Antenna C was sent to RF Chain A, Antenna B to Chain B, and Antenna A to Chain C. This operation is performed by an antenna selection module in the NIC and generally corresponds to ordering the antennas in decreasing order of RSSI. What I am curious about is, why does this Intel 5300 Wi-Fi card do antenna-chain permutation and what are the advantages of performing it? Answer: I'll highlight terms that are worth googling in italics below: What you're looking at is a diversity receiver. In this case, the idea is that by combining the signals of multiple antennas, the SNR can be maximized, minimizing the BER. There are multiple methods of doing so – simply selecting the antenna with the highest RSSI (being probably the one with the best SNR) is called selection combining. When thinking about outage probabilities, this leads to a Bernoulli distribution, i.e. the more antennas you have, the less likely it is that your communication fails totally (but the gain of having more antennas quickly decreases after the second). Another method is just adding up the signals – you can do that, and you'd get what's called equal gain combining. You'd get some sort of "oversampling" gain, suppressing noise that is independent in each receiver branch, but a strong interferer might distort the result. Then, you can add the receive signals up, but weigh them by some function that is proportional to their "quality". 
That way, you get maximum ratio combining, at least if the weights are basically proportional to SNR. In a real-world receiver with a limited amount of resources dedicated to assessing the signals, a simple ranking in three tiers of signal quality does sound like a feasible approach. For example, spending much power on synchronizing to the strongest signal (i.e. estimating the phase of the channel impulse response) makes sense, since an error here has the gravest effect, whereas simpler approaches for the signals that are used to "push down" uncorrelated noise power (maybe aided by the strongest chain's info) might still make sense. Notice that modern WiFi receivers are pretty mighty beasts – not what you'd need to feel bad about if you don't understand them at once. Another explanation might be much more in the analog than in the digital domain: For the strongest signal, you might want to use a different receive amplifier than for the weakest one – amplifiers are either a) low-noise, or b) high dynamic range, or c) power-efficient (not to mention d) through z) cheap).
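To make the three combining schemes concrete, here is a small numerical sketch (branch SNR values are made up for illustration; with equal noise power per branch, selection combining yields the maximum branch SNR, equal gain combining yields (Σ√γᵢ)²/N, and maximum ratio combining yields Σγᵢ):

```python
import math

# Hypothetical per-branch SNRs (linear scale), listed in decreasing order --
# the same ranking the NIC's perm vector produces from per-antenna RSSI:
branch_snr = [4.0, 1.0, 0.25]

# Selection combining: use only the strongest branch.
sc = max(branch_snr)

# Equal gain combining: co-phase all branches and add with unit weights.
egc = sum(math.sqrt(g) for g in branch_snr) ** 2 / len(branch_snr)

# Maximum ratio combining: weight each branch proportionally to its quality.
mrc = sum(branch_snr)

# MRC is always the best of the three; EGC beats SC here because no branch
# is negligible (a dead branch would drag EGC below SC).
print(sc, egc, mrc)
```

The ordering MRC ≥ EGC, MRC ≥ SC holds for any branch SNRs, which is exactly why a receiver with enough signal-processing budget prefers MRC-style weighting, and why ranking the chains by RSSI is a cheap first step toward it.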
{ "domain": "dsp.stackexchange", "id": 4336, "tags": "digital-communications" }
Solving System of Equations
Question: (Re-posted from StackOverflow as suggested) I have the following problem. The functions $f(x),g(x)$ are defined as $$ f(x) = \begin{cases} f_1(x) & 0 \leq x \leq 10, \\ f_2(x) & 10 < x \leq 20, \\ 0 & \text{otherwise}, \end{cases} \qquad g(x) = \begin{cases} g_1(x) & 0 \leq x \leq 5, \\ g_2(x) & 5 < x \leq 20, \\ 0 & \text{otherwise}, \end{cases} $$ In addition, we require the constraints $$ \int_0^{20} f(x) dx \geq K, \quad \int_0^{20} g(x) dx \geq Q, \quad f(x)+g(x) \leq R \text{ for all $x$}. $$ where $K,Q,R$ are parameters. I assume there is quite some elaborate theory behind it, and was wondering if anybody could point me in the right direction to devise an algorithm that can generate $f_1(x), f_2(x), g_1(x), g_2(x)$? I would like to add that for a given $K$ and $Q$, the interest is to keep $R$ as low as possible. Answer: Let $X$ be a random variable distributed uniformly over $[0,20]$. Your constraints imply $$ \mathbb{E}[f(X)] \geq \frac{K}{20}, \qquad \mathbb{E}[g(X)] \geq \frac{Q}{20}. $$ We conclude that $$ \mathbb{E}[f(X)+g(X)] \geq \frac{K+Q}{20}, $$ and so there is some point $x \in [0,20]$ such that $f(x) + g(x) \geq (K+Q)/20$. In particular, $R \geq (K+Q)/20$. This bound is tight, as shown by the functions $$ f(x) = \frac{K}{20} \mathbf{1}_{x \in [0,20]}, \qquad g(x) = \frac{Q}{20} \mathbf{1}_{x \in [0,20]}. $$
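The tightness claim is easy to check numerically — a quick sketch with arbitrary K and Q (values chosen for illustration), using the constant functions from the answer:

```python
K, Q = 12.0, 8.0
R = (K + Q) / 20.0  # the lower bound on R derived in the answer

# The extremal choice: spread K and Q uniformly over [0, 20].
f = lambda x: K / 20.0 if 0 <= x <= 20 else 0.0
g = lambda x: Q / 20.0 if 0 <= x <= 20 else 0.0

# Midpoint Riemann sums for the integrals over [0, 20]:
n = 20_000
h = 20.0 / n
int_f = sum(f((i + 0.5) * h) for i in range(n)) * h
int_g = sum(g((i + 0.5) * h) for i in range(n)) * h

print(int_f, int_g, R)  # both integral constraints are met exactly at R = (K+Q)/20
```

Both integral constraints are satisfied with equality, and f(x)+g(x) equals (K+Q)/20 everywhere on [0, 20], so no smaller R is achievable — matching the probabilistic argument above.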
{ "domain": "cs.stackexchange", "id": 2649, "tags": "algorithms" }
Is there an updated version of the ROS command cheat sheet?
Question: I came across this while reading the ROS wiki: http://download.ros.org/downloads/ROScheatsheet.pdf I know at least the rxgraph has been replaced by rqt_graph but does anyone know if there's a more updated sheet like this? Originally posted by Athoesen on ROS Answers with karma: 429 on 2014-01-06 Post score: 4 Original comments Comment by gustavo.velascoh on 2014-01-06: Feel free to do it and share it to the community :D Answer: Meanwhile there is: announcement on ros-users ros news citing the announcement direct link: http://www.clearpathrobotics.com/ros-cheat-sheet Originally posted by felix k with karma: 1650 on 2014-03-03 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by tfoote on 2015-05-11: There's now a copy of the cheatsheet on github at: https://github.com/ros/cheatsheet
{ "domain": "robotics.stackexchange", "id": 16582, "tags": "ros" }
Converting units and sig figs (finding molarity)?
Question: This may seem very basic, but I'm a little confused. "What is the molarity of a solution 500 mL of which contains 0.500 moles of HCl?" Being that 500 has one significant figure, would it be 500 mL = 0.5 L as opposed to 500 mL = 0.500 L? In which case the answer would be 1 mol/L instead of 1.00 mol/L? Answer: In terms of significant figures, you always use the number with least amount of sig figs to determine how many sig figs your answer has. In this case, one number has 3 sig figs, and the other has 1. Therefore your answer should have 1 sig fig and be 1 mol/L, as you said. To make the zeroes in the 500 mL become significant it would need to have a period afterwards like this: 500. mL. Perhaps the writer forgot to do that, however as it is written the answer would be 1 sig fig: 1 mol/L
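The arithmetic, with a small helper for rounding to significant figures (the helper is my own illustration, not part of the answer):

```python
from math import floor, log10

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

moles = 0.500          # 3 significant figures
volume_l = 500 / 1000  # "500 mL" as written: 1 significant figure
molarity = moles / volume_l

# Report with the fewest sig figs among the inputs, i.e. 1:
print(round_sig(molarity, 1))   # → 1.0, reported as "1 mol/L"
```

Had the problem been written "500. mL" (3 sig figs), the same computation would be reported as 1.00 mol/L instead.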
{ "domain": "chemistry.stackexchange", "id": 8378, "tags": "significant-figures" }
Is the yolk of an egg a cell?
Question: I am a little confused: I find sites on the internet that say the yolk of an egg is a macrocell, and others that say it is not. So... is it a macrocell or not? Answer: No, the yolk is not part of the egg cell in chicken and ostrich eggs. Although I found many websites claiming that the yolk is part of the egg cell, none of them were reputable. I think that it is more of a myth than a fact. Although in some animals the yolk is enclosed within the cytoplasm of the cell, in larger species like chickens and ostriches the yolk is separate from the egg cell. Wikipedia quotes: The yolk is mostly extracellular to the oolemma [cell membrane], being not accumulated inside the cytoplasm of the egg cell (as occurs in frogs), contrary to the claim that the avian ovum (in strict sense) and its yolk are a single giant cell. [...] The yolk mass, together with the ovum proper (after fertilization, the embryo) are enclosed by the vitelline membrane, whose structure is different from a cell membrane. This means that in the chicken egg, the egg yolk is mainly held outside of the cell membrane (of the egg cell/embryo) and hence is not considered part of the cell. However, [in many small species] such as some fish and invertebrates, the yolk material is not in a special organ, but inside the egg cell (ovum). So although the yolk is part of the cytoplasm of the egg cell in some species, this is not true for chickens or for ostriches.
{ "domain": "biology.stackexchange", "id": 11815, "tags": "cell-biology, eggs" }