Proof techniques for showing that dependent type checking is decidable
Question: I'm in a situation where I need to show that typechecking is decidable for a dependently-typed calculus I'm working on. So far, I've been able to prove that the system is strongly normalizing, and thus that definitional equality is decidable. In many references I read, the decidability of typechecking is listed as a corollary of strong normalization, and I believe it in those cases, but I'm wondering how one goes about actually showing this. In particular, I'm stuck on the following:

- Just because well-typed terms are strongly normalizing doesn't mean that the algorithm won't loop forever on non-well-typed inputs.
- Since logical relations are usually used to show strong normalization, there's not a convenient decreasing metric as we progress typechecking terms. So even if my type rules are syntax directed, there's no guarantee that applying the rules will eventually terminate.

I'm wondering, does anyone have a good reference to a proof of decidability of typechecking for a dependently typed language? If it's a small core calculus, that's fine. Anything that discusses proof techniques for showing decidability would be great.

Answer: There is indeed a subtlety here, though things work out nicely in the case of type checking. I'll write down the issue here, since it seems to come up in many related threads, and try to explain why things work out all right when type-checking in a "standard" dependent type theory (I'll be deliberately vague, since these issues tend to crop up regardless):

Fact 1: If ${\cal D}$ is a derivation of $\Gamma\vdash t:A$, then there is a derivation ${\cal D}'$ of $\Gamma \vdash A:s$ for some kind $s$, and for every subterm $u\leq t$, there is some type $B$, a context $\Delta$ and a derivation $\cal D''$ of $\Delta\vdash u:B$.

This nice fact is somewhat hard to prove, and offset by a pretty nasty counter-fact:

Fact 2: In general, $\cal D'$ and $\cal D''$ are not sub-derivations of $\cal D$!
This depends a bit on the precise formulation of your type system, but most "operational" systems implemented in practice do satisfy Fact 2. This means that you cannot "pass to sub-terms" when reasoning by induction on derivations, or conclude that the inductive statement is true about the type of the term you're trying to prove something about. This fact bites you quite harshly when trying to prove seemingly innocent statements, e.g. that systems with typed conversion are equivalent to those with untyped conversion. However, in the case of type inference, you can give a simultaneous type and sort (the type of the type) inference algorithm by induction on the structure of the term, which may involve a type-directed algorithm as Andrej suggests. For a given term $t$ (and context $\Gamma$), you either fail or find $A, s$ such that $\Gamma\vdash t:A$ and $\Gamma\vdash A : s$. You do not need to use the inductive hypothesis to find the latter derivation, and so in particular you avoid the problem explained above. The crucial case (and the only case which really requires conversion) is application:

    infer(t u):
        type_t, sort_t <- infer(t)
        type_t' <- normalize(type_t)
        type_u, sort_u <- infer(u)
        type_u' <- normalize(type_u)
        if type_t' = Pi(A, B) and alpha_equal(A, type_u') then
            return B, sort_t (or the appropriate sort)
        else
            fail

Every call to normalize was done on well-typed terms, as this is the invariant for infer's success. By the way, as it is implemented, Coq does not have decidable type checking, as it normalizes the body of fix statements before attempting to type check them. At any rate, the bounds on the normal forms of well-typed terms are so astronomical that the decidability theorem is mostly academic at this point anyway. In practice, you run the type-checking algorithm for as long as you have patience for, and try a different route if it hasn't finished by then.
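To make the shape of this algorithm concrete, here is a minimal Python sketch (my own illustration, not code from any reference): a toy type-in-type calculus where terms are tuples, `normalize` performs beta-reduction (terminating only because it is ever invoked on well-typed terms, per the invariant above), and plain structural equality stands in for alpha-equivalence (adequate here only because the examples are closed and use distinct binder names).

```python
# Toy terms: ('type',) | ('var', x) | ('pi', x, A, B) | ('lam', x, A, body) | ('app', f, a)

def subst(t, x, v):
    """Capture-naive substitution t[v/x] (assumes no accidental capture)."""
    tag = t[0]
    if tag == 'var':
        return v if t[1] == x else t
    if tag == 'type':
        return t
    if tag in ('pi', 'lam'):
        _, y, a, b = t
        if y == x:  # x is shadowed: substitute in the domain only
            return (tag, y, subst(a, x, v), b)
        return (tag, y, subst(a, x, v), subst(b, x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def normalize(t):
    """Beta-normalize; only ever called on well-typed terms, so it terminates."""
    if t[0] == 'app':
        f, a = normalize(t[1]), normalize(t[2])
        if f[0] == 'lam':
            return normalize(subst(f[3], f[1], a))
        return ('app', f, a)
    if t[0] in ('pi', 'lam'):
        return (t[0], t[1], normalize(t[2]), normalize(t[3]))
    return t

def infer(ctx, t):
    tag = t[0]
    if tag == 'type':
        return ('type',)                  # type-in-type, for brevity
    if tag == 'var':
        return ctx[t[1]]
    if tag == 'pi':
        _, x, a, b = t
        assert infer(ctx, a) == ('type',)
        assert infer({**ctx, x: a}, b) == ('type',)
        return ('type',)
    if tag == 'lam':
        _, x, a, body = t
        assert infer(ctx, a) == ('type',)
        return ('pi', x, a, infer({**ctx, x: a}, body))
    # application: normalize is applied only to types returned by
    # *successful* recursive calls to infer, hence to well-typed terms
    f_ty = normalize(infer(ctx, t[1]))
    a_ty = normalize(infer(ctx, t[2]))
    if f_ty[0] == 'pi' and f_ty[2] == a_ty:   # naive equality, not full alpha
        return subst(f_ty[3], f_ty[1], t[2])
    raise TypeError('application mismatch')

id_fun = ('lam', 'x', ('type',), ('var', 'x'))
result = infer({}, ('app', id_fun, ('type',)))   # ('type',)
```

Note how the application case calls `normalize` only on types produced by successful recursive calls to `infer`, which is exactly the invariant that makes the decidability argument go through.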
{ "domain": "cstheory.stackexchange", "id": 4532, "tags": "reference-request, pl.programming-languages, type-theory, dependent-type, typed-lambda-calculus" }
How to pad a windowed sinc filter in the frequency domain
Question: I'm creating windowed sinc filters to apply to certain signals that I'm dealing with. To design the filters, I'm using the approach described in the book "The Scientist and Engineer's Guide to DSP". Here's a brief summary: \[h[i] = \left\{ \begin{matrix} Kw(i)\frac{\sin(2\pi f_c(i - \frac{M}{2}))}{i - M/2} & 0\le i \le (M+1)\\ \\ 0 & \text{otherwise} \end{matrix} \right. \] \[ w(i): \text{A window function}\\ K: \text{DC gain}\\ f_c: \text{The cutoff frequency expressed as a fraction of the signal sampling rate} \] After calculating the filter kernel, I obtain the frequency response of the filter by taking the FFT of h[i] using an algorithm that rounds the filter length up to the nearest power of two (to get better performance). The same procedure is also applied to the input signal in order to get its frequency response. To my understanding, the next step would be to multiply the two frequency responses to get the filtered signal in the frequency domain, but the problem is that the two signals are not the same size, and I haven't found an explanation of how to deal with this problem. So, how should I pad the filter to make this point-wise multiplication possible? Should I fill the end of the filter with zeros to match the signal size, or should I do it symmetrically, putting zeros to the left and right of the filter response? Thanks in advance! Answer: You don't pad it in the frequency domain, you pad it in the time domain (i.e. before you calculate the FFT). You can put the zeros either symmetrically or after the filter. Either way works; it just results in a shift in the output.
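For what it's worth, here is a NumPy sketch of what the answer describes (the signal and filter parameters are made-up illustrative values): the kernel is zero-padded in the time domain, simply by asking the FFT for a common length N that covers the full linear convolution, and the product of the two spectra then matches direct convolution. Note that `np.sinc` is the normalized sinc, which matches the book's kernel up to a constant absorbed by the DC normalization.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)           # input signal (placeholder data)
M = 100                                  # filter order (example value)
fc = 0.1                                 # cutoff as a fraction of the sample rate
i = np.arange(M + 1)
h = np.sinc(2 * fc * (i - M / 2)) * np.hamming(M + 1)   # windowed sinc kernel
h /= h.sum()                             # normalize for unity DC gain

# N must be >= len(x) + len(h) - 1 so the circular convolution implied by
# the FFT equals the linear convolution; round up to a power of two.
N = int(2 ** np.ceil(np.log2(len(x) + len(h) - 1)))
X = np.fft.rfft(x, N)                    # rfft's n argument zero-pads to length N,
H = np.fft.rfft(h, N)                    # i.e. zeros are appended after the kernel
y = np.fft.irfft(X * H, N)[:len(x) + len(h) - 1]
```

Appending the zeros after the kernel (as `rfft`'s `n` argument does) gives the same magnitude response as symmetric padding; the two differ only by a linear phase term, i.e. a shift in the output, as the answer says.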
{ "domain": "dsp.stackexchange", "id": 2159, "tags": "filter-design, lowpass-filter" }
Bell-Kochen-Specker theorem impact on realistic hidden-variable theories
Question: I've read the paper 'Generalizations of Kochen and Specker’s theorem and the effectiveness of Gleason’s theorem', where it says that non-contextual hidden-variable theories are ruled out by a theorem of Bell (1966), which is stated as 'There does not exist a bi-valued probability function on the rays (one-dimensional subspaces) of a Hilbert space of dimension greater than 2.' No explanation is given of how exactly non-contextual hidden-variable theories are ruled out by it. It is clear to me that it has to do with the fact that hidden-variable theories would lead to deterministic outcomes, i.e. bi-valued probability functions. Somewhere else (unfortunately I've forgotten the source) I've read that the reasoning is that the mapping $u \rightarrow (\rho u, u)$ is continuous on the unit sphere of the Hilbert space for any density operator $\rho$, so due to Bell's 1966 theorem it cannot be deterministic. So my first question is how to interpret the mapping $u \rightarrow (\rho u, u)$. Is $u$ a state? How do the specific observable and a specific outcome come into play in the formula? (I guess $(\rho u, u)$ is the probability of a specific outcome of a specific observable if the system is in state $u$?) My second question is why non-contextual hidden-variable theories are ruled out, but not contextual ones. If anything is unclear about my question, please feel free to ask. I'm looking forward to any answer or input. Answer: The Bell-Kochen-Specker theorem is one of the various no-go theorems against the existence of any "classical" theory capable of better explaining the phenomenology of quantum mechanics and restoring realism and/or determinism. These alternative theories are based on the assumption that a deeper explanation than QM exists. The deeper description is constructed out of a set of quantum hidden variables, usually denoted by $\lambda$, whose nature is unknown.
The theorem under consideration assumes certain quite general hypotheses on such theories, without entering into their details, and proves that this class of theories cannot exist. Note. The so-called Kochen-Specker (without Bell!) theorem was established after the one discussed here. The one discussed here can be found in one of the first papers by Bell (1966). It was already known to at least one of the other two authors. The later KS theorem has more or less the same statement, but it has a proof of a much more elementary level, based on a direct and lengthy computation on a specific system of elementary Yes/No propositions. It is apparently independent of the Gleason theorem. The theoretical significance is however identical in my view. The hypotheses of the Bell-Kochen-Specker theorem are that the hidden variable $\lambda$ defines a map associating each observable $A$ (any bounded selfadjoint operator in the Hilbert space of the system) with its value $v_\lambda(A)$ (a real number). This is the realism hypothesis: the hidden variable fixes the true values of all observables, including the incompatible ones! The other explicit hypothesis is that, restricting ourselves to any pair of compatible observables $A$ and $A'$, the above map satisfies some functional requirements, usually additivity $$v_\lambda(A+A')= v_\lambda(A)+ v_\lambda(A')\quad \mbox{if $A$, $A'$ are compatible}$$ and multiplicativity $$v_\lambda(AA')= v_\lambda(A) v_\lambda(A')\quad \mbox{if $A$, $A'$ are compatible.}$$ The thesis is that, if the Hilbert space has finite dimension >2, this map does not exist (as a topological consequence of the Gleason theorem, first discovered by Bell), or it is the trivial one associating everything to $0$. I stress that $\lambda$ has nothing to do, at least directly, with any quantum state that can be defined on the physical system. In principle, a quantum state should correspond to some ensemble of values of $\lambda$, i.e., a more approximate description.
Quantum randomness should be explained in terms of classical randomness, similarly to statistical mechanics. However, all those details are irrelevant in the theorem, and here lies its power as a no-go theorem. Coming back to the thesis of the theorem, a way out is that the map $v_\lambda$ is not a function only of the observable $A$ one measures, but also depends on the other (compatible) observables one measures simultaneously with $A$: $$v_\lambda(A| A_1, A_2, \ldots)$$ Here $A_1, A_2, \ldots$ is the context of $A$. Each observable $A$ simultaneously has different values depending on its context when the hidden state $\lambda$ is given. In principle, maps of this form are not forbidden by the BKS theorem. From this perspective, the original maps that do not depend on the context are called non-contextual. Therefore, the BKS theorem rules out hidden-variable theories which are (a) realistic, (b) non-contextual, and (c) satisfy some natural functional relations referring only to pairs of compatible observables. ADDENDUM Sketch of proof of the BKS theorem. This is not the original proof by Bell, but it similarly uses the continuity argument due to the Gleason theorem, in a more straightforward way. THEOREM Let ${\cal H}$ be a finite-dimensional real or complex Hilbert space with dimension $>2$. Let $B_{sa}({\cal H})$ be the real linear space of everywhere-defined selfadjoint operators on ${\cal H}$. There is no map $v: B_{sa}({\cal H}) \to \mathbb{R}$, different from the zero map, such that (i) $v(A+B)= v(A)+v(B)$ if $AB=BA$, (ii) $v(AB) = v(A)v(B)$ if $AB=BA$. SKETCH OF PROOF Notice that orthogonal projectors are selfadjoint operators $P:{\cal H}\to {\cal H}$ such that $PP=P$. We can restrict $v$ to the space (lattice) of orthogonal projectors.
From the hypotheses of additivity and multiplicativity, taking $PP=P$ into account, it is easy to conclude that (a) $v(P) \in \{0,1\}$ for every orthogonal projector $P$, and (b) $v(P_1+\cdots+P_k)= v(P_1)+\cdots+v(P_k)$ if $P_kP_h=0$ when $h\neq k$. Furthermore, $v(I)=1$, otherwise $v$ is the trivial map $v(P)=0$ for all orthogonal projectors: the spectral theorem would then imply that $v(A)=0$ for every $A\in B_{sa}({\cal H})$, and this is not permitted. $\dim{\cal H}>2$, (a), and (b), through the Gleason theorem (a part of the proof), imply that there exists a unique mixed state $\rho$ such that $v(P) = tr (P\rho)$ for every orthogonal projector $P\in B_{sa}({\cal H})$. Let us restrict this map to the set $S$ of one-dimensional orthogonal projectors (that is, the rays $p= |\psi\rangle \langle \psi|$): $$S \ni |\psi\rangle \langle \psi| \mapsto \langle \psi, \rho\psi \rangle \in \{0,1\}$$ This map -- viewed as a map to $\mathbb{R}$ -- is trivially continuous, $S$ is connected, and thus the image must be connected as well. However, $\{0,1\}$ is a disconnected subset of $\mathbb{R}$ whose connected components are $\{0\}$ and $\{1\}$. Hence either $\langle \psi, \rho\psi \rangle =0$ for all unit vectors $\psi \in {\cal H}$ or $\langle \psi, \rho\psi \rangle =1$ for all unit vectors $\psi \in {\cal H}$. Notice that it must be $tr(\rho) =1$ since $\rho$ represents a mixed state. However, in the first case $tr(\rho) =0$, and in the second case (for $\dim({\cal H}) >2$) $tr(\rho) >1$. In both cases $tr(\rho) \neq 1$, as instead required by the Gleason theorem. In summary, $\rho$ -- and thus $v$ -- does not exist. QED For some further discussion see chapter 5 of this book of mine.
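As a purely numerical illustration of the connectedness argument (my own sketch, not part of the proof): for a generic density matrix $\rho$, sampling $\langle \psi, \rho\psi \rangle$ along a continuous path of unit vectors shows the values sweeping an interval inside $[0,1]$; a continuous, non-constant map cannot be confined to the two-point set $\{0,1\}$.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3                                       # dim H > 2
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = A @ A.conj().T                        # positive semidefinite
rho /= np.trace(rho).real                   # generic density matrix, tr(rho) = 1

# continuous path of unit vectors psi(t) = cos(t) e0 + sin(t) e1
e0, e1 = np.eye(d)[:, 0], np.eye(d)[:, 1]
ts = np.linspace(0.0, np.pi / 2, 1001)
f = np.array([np.real(np.vdot(np.cos(t) * e0 + np.sin(t) * e1,
                              rho @ (np.cos(t) * e0 + np.sin(t) * e1)))
              for t in ts])
# f stays in [0, 1], varies continuously, and is generically non-constant,
# so it cannot take values only in the disconnected set {0, 1}.
```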
{ "domain": "physics.stackexchange", "id": 97541, "tags": "quantum-mechanics, contextuality, quantum-foundations" }
cartesian product of two vector spaces
Question: A few days back I found an interesting problem which reads as follows: Given two vector spaces, generate the resulting set of their cartesian product. \begin{gather} \text{Let: } \mathcal{V}, \mathcal{W} \text{ be vector spaces}\\ \mathcal{V} \times \mathcal{W} = \{ (v,w) \mid v \in \mathcal{V} \land w \in \mathcal{W} \} \end{gather} Hint 1: A vector space is a set of elements called vectors which satisfies certain properties. Hint 2: Design the solution for finite vector spaces. Tip 1: It is recommended to use structures. Constraint: You are forbidden to use any STL class. I solved this problem with the following approach:

struct vector_pair {
    double *vector_a;
    double *vector_b;
    size_t a_dimension;
    size_t b_dimension;
};

struct cartesian_product_set {
    vector_pair *pairs;
    size_t pairs_number;
};

cartesian_product_set vector_spaces_cartesian_product(
    double **space_v, size_t v_vectors, size_t v_dimension,
    double **space_w, size_t w_vectors, size_t w_dimension)
{
    cartesian_product_set product_set{new vector_pair[v_vectors * w_vectors],
                                      v_vectors * w_vectors};
    for (size_t i = 0, j, k = 0; i < v_vectors; i++)
        for (j = 0; j < w_vectors; j++)
            product_set.pairs[k++] = vector_pair{space_v[i], space_w[j],
                                                 v_dimension, w_dimension};
    return product_set;
}

How could I improve this code, if possible? Thank you.

Answer:

- const-correctness
- use references in favor of pointers where possible
- The fact that you leave the obligation to free the memory that you allocate to the caller is generally not a good practice
- a common pattern in your code is that you have pointers to arrays and their length - why not make a structure to bundle them up?
- try to make use of iterators and range-based for loops when you don't really need the index (which you don't in your example)
- since we don't really care about the type of the elements in a vector space, you could use templates to generalize your algorithm

And just to see if it would be possible, I tried to come up with a compile-time version of the algorithm:

template<typename T>
struct pair {
    T first;
    T second;
};

template<std::size_t N, typename T>
struct cvp {
    pair<T> pairs[N];
};

template <typename T, size_t NV, size_t NW>
auto get_cvp(const T (&vs)[NV], const T (&ws)[NW])
{
    cvp<NV*NW, T> result;
    auto it_pairs = std::begin(result.pairs);
    for (const auto v : vs) {
        for (const auto w : ws) {
            *(it_pairs++) = {v, w};
        }
    }
    return result;
}

you can try the code here: https://godbolt.org/z/e8GvEf
{ "domain": "codereview.stackexchange", "id": 39176, "tags": "c++, performance" }
Understanding the equation $r_1 r_2 e^{i2kL + (G - \alpha)L} = 1$
Question: I am currently studying the textbook Physics of Photonic Devices, Second Edition, by Shun Lien Chuang. In a section discussing The Invention of Semiconductor Lasers, the author says the following: However, if we are able to inject enough electrons and holes into the semiconductor to reach the so-called population inversion condition, which means that there are more downward than upward stimulated transitions, there will be a net gain of the photon number or optical intensity. Gain is not the only requirement for a laser. It requires a resonator, which can be a one-, two-, or three-dimensional structure. The most common one is the Fabry-Perot resonator formed by two parallel mirrors with a cavity length $L$. The light is reflected back and forth between the two mirrors, thus a standing wave pattern can be formed for certain resonant wavelengths (Fig. 1.5a). When the round-trip gain of the optical intensity is large enough to balance the loss due to waveguide absorption and mirror transmission, a threshold condition can be reached. It means that the optical field after the round-trip propagation reaches a resonance condition with a constructive phase and an amplitude of $1$, $$r_1 r_2 e^{i2kL + (G - \alpha)L} = 1 \tag{1.2.1}$$ where $r_1$ and $r_2$ are the reflection coefficients of the optical fields from the two end facets, $k$ is the propagation constant, $$k = 2 \pi n / \lambda = 2 \pi v n / c, \tag{1.2.2}$$ and $n$ is the refractive index of the semiconductor. $G$ is the modal gain coefficient of the guided optical mode in the semiconductor waveguide, and $\alpha$ is the absorption coefficient. Equation (1.2.1) leads to the phase and magnitude conditions for lasing, $$2kL = 2m \pi \tag{1.2.3}$$ $$G = \alpha + \dfrac{1}{2L} \ln\left( \dfrac{1}{R_1 R_2} \right) \tag{1.2.4}$$ where $R_1 = \vert r_1 \vert^2$ and $R_2 = \vert r_2 \vert^2$ are the power reflectivities.
The phase condition (1.2.3) leads to the Fabry-Perot resonance spectrum $$v_m = \dfrac{mc}{2nL} \ \ \ m = \text{integer.} \tag{1.2.5}$$ I'm trying to understand (1.2.1). The equation takes the form $$\begin{align} r_1 r_2 e^{i2kL + (G - \alpha)L} &= 1 \\ \Rightarrow r_1 r_2 e^{i2kL} e^{(G - \alpha)L} &= 1 \end{align}$$ If I'm not mistaken, this is the phasor/analytic representation of the (harmonic) wave. Is my understanding here correct? Assuming this is the phasor/analytic representation of a (harmonic) wave, I now want to understand the structure of this wave, in terms of its equation. According to (2.37) of Optics, fifth edition, by Hecht, the phasor/analytic representation of a harmonic wave is $\psi(x, t) = Ae^{i(\omega t - kx + \epsilon)} = Ae^{i \varphi}$, where $\varphi$ is the phase. We are told that (1.2.3) is the phase. But then what is the $e^{(G - \alpha)L}$? Given what I've just stated about the equation of a simple harmonic wave, I'm struggling to reconcile these two equations. And where is the "constructive phase" part of this equation? I wonder if that's what the $i2kL + (G - \alpha)L$ in $r_1 r_2 e^{i2kL + (G - \alpha)L} = r_1 r_2 e^{i2kL} e^{(G - \alpha)L}$ is, where $e^{i2kL}$ and $e^{(G - \alpha)L}$ are two waves? But there is no $i$ multiplying the $(G - \alpha)L$, so I don't see how this is a wave? So how does $e^{(G - \alpha)L}$ fit into this wave equation? I also wonder if there are any errors here on the author's part, since that's always a possibility. I would greatly appreciate it if people would please take the time to clarify this. Answer: The first part of the exponent, $i2kL$, is the oscillatory part of the expression. Basically, the requirement that $2kL = 2m\pi$ says that an integral number of waves needs to fit into the round trip of the laser cavity. The other part of the exponent contributes to the amplitude, or absolute value, of the expression. The round-trip gain is equal to that amplitude: $r_1 r_2 e^{(G-\alpha) L}$.
If the amplitude is less than 1, then the power of the light decreases with each round trip between the mirrors and there is no net gain and therefore no lasing. If the amplitude is greater than 1, then there is net gain and the system lases.
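As a quick numeric sanity check of equations (1.2.1) and (1.2.4), here is a short sketch (the cavity parameters are made-up illustrative values, roughly typical of a cleaved-facet laser diode): at the threshold gain computed from (1.2.4), the round-trip amplitude in (1.2.1) indeed comes out to exactly 1.

```python
import math

L = 300e-4          # cavity length: 300 um, expressed in cm (example value)
alpha = 10.0        # internal loss coefficient, 1/cm (example value)
n = 3.5             # refractive index of the semiconductor (example value)
r = (n - 1) / (n + 1)    # field reflection coefficient of a cleaved facet
R1 = R2 = r**2           # power reflectivities, R = |r|^2

# magnitude (threshold) condition, eq. (1.2.4)
G_th = alpha + (1.0 / (2 * L)) * math.log(1.0 / (R1 * R2))

# at threshold, the round-trip amplitude |r1 r2 e^{(G - alpha) L}| from
# eq. (1.2.1) must equal 1
round_trip = r * r * math.exp((G_th - alpha) * L)
```

With these numbers the mirror-loss term $\frac{1}{2L}\ln\frac{1}{R_1 R_2}$ dominates the internal loss, which is why short cavities need substantially higher threshold gain.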
{ "domain": "physics.stackexchange", "id": 64921, "tags": "optics, waves, laser, photonics" }
threading a compression clamp
Question: I'm not an engineer, just a novice hobbyist, so I'm not sure my terminology "compression clamp" is correct, but I'm attaching a drawing showing what I'm trying to accomplish. This plate will be placed over two rods and then "screwed (partly) shut" so that the plate will "squeeze" the rods and not budge under normal operational stresses. The machine screw is inserted into the hole the pink arrow points at and threads into the hole the yellow arrow points at. My question: It seems to me that the threads will align properly if the gap between those two sections is cut in advance of the threading operation. Is that right? Or should the pink-arrow hole be left smooth-walled and only the yellow-arrow hole be threaded? Answer: The pink-arrow hole should be left smooth. If both holes have threads, then when you turn the screw into the pink hole, as soon as it passes the gap and reaches the yellow hole, it will keep the gap space constant. Turning the screw more will just drive it into both holes while keeping the gap at the same spacing until the head of the screw jams onto the lower surface of the pink hole. Any further attempts at tightening the screw will either damage the threads or the screw. The pink hole should be left smooth and a right-sized washer used to allow the screw to tighten the bracket safely. Or alternatively you can use a half-threaded screw similar to this photo.
{ "domain": "engineering.stackexchange", "id": 2995, "tags": "threads" }
Surprisingly uniform magnetic field inside a rotating charged sphere
Question: Consider this: A spherical shell of radius R, carrying a uniform surface charge $σ$, is set spinning at an angular velocity $ω$. What can be said about the magnetic field inside the sphere? I found the magnetic vector potential* at an arbitrary point in space, and used $B = curl(A)$ to find the magnetic field. To my surprise, the magnetic field inside the sphere is uniform! $$\mathbf{B}=\mathbf{\nabla}\times\mathbf{A} =\frac{2}{3}\mu_0\sigma R\mathbf{\omega}.$$ Why is this so? Is there a way to predict the same (uniformity of B inside the sphere) without going through the mathematical derivation? I'd love to get some more insight into this problem, after all, there is more to physics than just mathematics! P.S. *The calculation, if you want to go through it, can be found in the Electrodynamics text by Griffiths. Answer: The electric field inside a charged sphere is uniform. (It’s zero.) Coulomb’s Law and the Biot-Savart Law both have inverse-square spatial dependence, so it shouldn’t be too surprising that the magnetic field is uniform.
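One can also confirm the result numerically (my own brute-force check, not from Griffiths): discretize the surface current $\mathbf{K} = \sigma \mathbf{v} = \sigma\,\boldsymbol{\omega}\times\mathbf{r}'$ on the shell and sum the Biot-Savart integrand; the field at two distinct interior points agrees with $\frac{2}{3}\mu_0\sigma R\,\boldsymbol{\omega}$, in units where $\mu_0 = \sigma = R = \omega = 1$.

```python
import numpy as np

mu0 = sigma = R = omega = 1.0            # work in units where these are all 1

# midpoint-rule grid over the sphere in (theta, phi)
nth, nph = 200, 400
th = (np.arange(nth) + 0.5) * np.pi / nth
ph = (np.arange(nph) + 0.5) * 2 * np.pi / nph
TH, PH = np.meshgrid(th, ph, indexing='ij')
rp = np.stack([R * np.sin(TH) * np.cos(PH),          # source points r' on shell
               R * np.sin(TH) * np.sin(PH),
               R * np.cos(TH)], axis=-1)
K = sigma * omega * np.stack([-rp[..., 1], rp[..., 0],
                              np.zeros_like(TH)], axis=-1)  # K = sigma (w x r')
dA = R**2 * np.sin(TH) * (np.pi / nth) * (2 * np.pi / nph)  # area elements

def B_at(x):
    """Biot-Savart field mu0/4pi * sum K x (x - r') / |x - r'|^3 dA."""
    d = x - rp
    r3 = np.linalg.norm(d, axis=-1) ** 3
    integrand = np.cross(K, d) / r3[..., None]
    return mu0 / (4 * np.pi) * np.sum(integrand * dA[..., None], axis=(0, 1))

B_center = B_at(np.zeros(3))
B_offset = B_at(np.array([0.3, 0.0, 0.2]))   # a second, off-center interior point
```

Both evaluations come out close to $(0, 0, 2/3)$, i.e. the same vector at the center and off-center, which is the surprising uniformity the question asks about.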
{ "domain": "physics.stackexchange", "id": 61662, "tags": "electromagnetism, magnetic-fields" }
Tracking Eye Movements
Question: The following class (my first class ever in Python) was used in an eye-tracking context. Imagine a context where you always have a single gaze point and some closed contours. How can I return all possible relations between them? For example, for two contours, the point can be: Relation 1: inside contour A and outside contour B; Relation 2: inside contour B and outside contour A; Relation 3: inside contours A and B; Relation 4: outside contours A and B. In my case, I am using each relation as a tag for colors in a context where the number of contours changes. You can imagine the contours as sets as well, but let's avoid the complexities of advanced set theory. This problem gives us 2 to the power c, where c = number of contours. An abacus gives us the number of columns to the power i = number of lines. So, an abacus seems to be a good metaphor. For example:

contours = [c1, c2]
Abacus = ClassAbacus(len(contours))
color_tags = Abacus.Enumerate()

gives us a list with all possible tags, where + is inside, - is outside, and numbers are contours:

['+1+2', '+1-2', '-1+2', '-1-2']

So, now we have references for 4 colors. We could give a name to each point as such:

import cv2
...

def PolygonTestEx(contours, pt, contours_counter = 0, counter_code = ''):
    for x in xrange(1, len(contours) + 1):
        Inside = cv2.pointPolygonTest(contours[x - 1], pt, False)
        # Inside = contours[x]
        if Inside > -1:
            counter_code = counter_code + '+' + str(x + contours_counter)
        else:
            counter_code = counter_code + '-' + str(x + contours_counter)
    contours_counter = contours_counter + len(contours)
    return contours_counter, counter_code

And it should give us the 4 possible states in this context. Could I write this class with better readability? Please feel free to suggest a faster way to do the same, or common practices in OOP that I missed.
CURRENT_LINE = 0
SIGNS = 1

class BaseError(Exception):
    def __init__(self, value):
        self.value = value
    def __str__(self):
        return repr(self.value)

class ClassAbacus(object):
    """
    Recombining tool. Imagine each char in char_set as digits, or beads,
    and count them accordingly to a base reference. Imagine the base as
    the abacus lines. 'Counter' returns an integer equal to len(char_set)
    raised to the power x = base, giving the number of all possible bead
    states of the "abacus". 'Enumerate' prints each or all states.
    """
    def __init__(self, char_set = ['+', '-'], base = 2):
        super(ClassAbacus, self).__init__()
        self.Chars = char_set
        self.Counter = 0
        self.States = []
        self.Abacus = []
        self.Base = base
        if (base + 1) > 2:
            self.__Base = base + 1
            self.Instantiate()
        else:
            try:
                raise BaseError(1)
            except BaseError as e:
                print 'Use argument "base" higher then one.', e.value

    def doSaveState(self):
        state = ''
        for column in xrange(0, len(self.Abacus)):
            # def addSignTo(state, column, item):
            state = state + self.Abacus[column][SIGNS][self.Abacus[column][CURRENT_LINE]]
        self.States.append(state)

    def doLineReset(self, from_column):
        to_column = len(self.Abacus)
        for column in xrange(from_column, to_column):
            self.Abacus[column][CURRENT_LINE] = 0

    def doLineIncrement(self, column):
        self.Counter += 1
        self.doSaveState()
        n = self.Abacus[column][CURRENT_LINE]
        self.Abacus[column][CURRENT_LINE] = n + 1

    def doCount(self, Column):
        KeepsCounting = True
        HighColumn = len(self.Abacus) - 1
        LowColumn = 0
        if Column < LowColumn:
            self.Counter += 1
            self.doSaveState()
            KeepsCounting = False
        else:
            ColumnMaxLine = len(self.Abacus[Column][SIGNS]) - 1
            ColumnCurLine = self.Abacus[Column][CURRENT_LINE]
            if ColumnCurLine == ColumnMaxLine:
                Column -= 1
                KeepsCounting = self.doCount(Column)
                return KeepsCounting
            if (Column < HighColumn) and (ColumnCurLine < ColumnMaxLine):
                self.doLineIncrement(Column)
                self.doLineReset(Column + 1)
                self.doCount(HighColumn)
                return KeepsCounting
            if (Column == HighColumn) and (ColumnCurLine < ColumnMaxLine):
                self.doLineIncrement(Column)
                self.doCount(Column)
                return KeepsCounting

    def Instantiate(self):
        # set the grid of the abacus
        for x in xrange(1, self.__Base):
            container = []
            for char in self.Chars:
                # concatenate each char with its base reference
                container.append(char + str(x))
            self.Abacus.append([0, container])
        max_column = max(self.Abacus)
        max_index = self.Abacus.index(max_column)
        self.doCount(max_index)

    def Enumerate(self, index = -1):
        container = []
        if (index < 0) or (index > self.Counter - 1):
            for state in self.States:
                container.append(state)
            return container
        else:
            return self.States[index]

For completeness' sake, this class will be called here.

Answer: I don't have cv2 installed (yet) so I'm going blind here, but I have a few comments. First, the problem you described can be solved very easily with itertools.product:

import itertools

def get_set_intersections(chars="+-", base=2):
    numbers = range(1, base+1)
    for signs in itertools.product(chars, repeat=base):
        yield "".join("{}{}".format(sign, n) for sign, n in zip(signs, numbers))

list(get_set_intersections())
#>>> ['+1+2', '+1-2', '-1+2', '-1-2']

Secondly, some comments about the code. You use CURRENT_LINE and SIGNS as constant indexes into a list of length 2. You should use a class instead. Here's one:

class AbacusColumn(object):
    def __init__(self, current_line, signs):
        self.current_line = current_line
        self.signs = signs

You have made a new class, BaseError. You shouldn't create new errors where your old ones suffice, though; just use ValueError. Even if you do want a new class, it should be a subclass of ValueError. The only time you use BaseError you do:

try:
    raise BaseError(1)
except BaseError as e:
    print 'Use argument "base" higher then one.', e.value

...which is equivalent to just

print 'Use argument "base" higher then one.', 1

Instead, do

raise ValueError('Use argument "base" higher then one.')

Also, switch the conditional, which becomes:

if (base + 1) <= 2

Erm, do you mean

if base <= 1

? Don't use attributes with two double-underscores (self.__Base). Use a single underscore. Two gives you name mangling, which you don't want. The function definition line should be formatted as

def __init__(self, char_set=['+', '-'], base=2):

since it's too short to need line wrapping. Attributes should be in snake_case. You never use self.Base. Remove it. Actually, remove _Base instead since you can just add one at point of use. You don't need to call super; your superclass is object. You do (after the move to class AbacusColumn)

state = ''
for column in xrange(0, len(self.abacus)):
    state = state + self.abacus[column].signs[self.abacus[column].current_line]

Firstly, this should be touched-up to

state = ''
for column in self.abacus:
    state += column.signs[column.current_line]

Secondly, adding immutable containers in loops is bad. Use str.join instead:

state = ''.join(column.signs[column.current_line] for column in self.abacus)

In do_line_reset you should similarly do

for column in self.abacus[from_column:]:
    column.current_line = 0

do_line_increment should use += for

self.abacus[column].current_line += 1

I have a feeling keeps_counting is broken in do_count. The logic is:

keeps_counting = True
if ...:
    keeps_counting = False
else:
    if ...:
        keeps_counting = self.do_count(column)
        return keeps_counting
    if ...:
        return keeps_counting
    if ...:
        return keeps_counting
[function ends]

This can be simplified into:

if ...:
    ...
else:
    if ...:
        return self.do_count(column)
    if ...:
        return True
    if ...:
        return True

which means that you're returning None instead of False in the first branch or if all three ifs fail. However, it's OK as you never use the result. Remove it.
In instantiate, you can simplify generating self.abacus with a list comprehension:

self.abacus = [
    AbacusColumn(0, [char + str(x+1) for char in self.chars])
    for x in range(self.base)
]

Your max(self.abacus) will have to change since AbacusColumn is not comparable, but it was flawed anyway since they were all initialized as [0, list of strings]. Since the strings are all of the same form bar the numbers you added to them, you're actually just finding the largest index (although since you're doing string comparisons it breaks for bases larger than 9). Just do

self.do_count(self.base - 1)

In enumerate you do:

for state in self.states:
    container.append(state)
return container

This is just

return list(self.states)

Your comparison would be better written as

if not (0 < index < self.counter):

although I suggest you change this to

if not (0 <= index < self.counter):

In fact, returning different types based on whether index is out of bounds is silly and you should just return list(self.states) all the time. If you want a second way to access only one, make a new method. Your name ClassAbacus is odd: surely just Abacus is better. You only call instantiate from __init__, so I'd suggest moving it there. If not in the function, then below the function. Doing so allows you to remove self.base and self.chars. Move the error checking in __init__ to the top. Don't give self.abacus a default; it only hides bugs. Most of your methods start with do_. Personally, I would remove it. Spend the characters on better names (eg. what does count do?). Currently, this gives:

class AbacusColumn(object):
    def __init__(self, current_line, signs):
        self.current_line = current_line
        self.signs = signs

class Abacus(object):
    """
    Recombining tool. Imagine each char in chars as digits, or beads,
    and count them accordingly to a base reference. Imagine the base as
    the abacus lines. 'Counter' returns an integer equal to len(chars)
    raised to the power x = base, giving the number of all possible bead
    states of the "abacus". 'Enumerate' prints each or all states.
    """
    def __init__(self, chars=['+', '-'], base=2):
        if base <= 1:
            raise ValueError('Use argument "base" higher then one.')
        self.counter = 0
        self.states = []
        self.abacus = [
            AbacusColumn(0, [char + str(x+1) for char in chars])
            for x in range(base)
        ]
        self.count(base - 1)

    def save_state(self):
        state = ''.join(column.signs[column.current_line] for column in self.abacus)
        self.states.append(state)

    def line_reset(self, from_column):
        for column in self.abacus[from_column:]:
            column.current_line = 0

    def line_increment(self, column):
        self.counter += 1
        self.save_state()
        self.abacus[column].current_line += 1

    def count(self, column):
        high_column = len(self.abacus) - 1
        low_column = 0
        if column < low_column:
            self.counter += 1
            self.save_state()
        else:
            column_max_line = len(self.abacus[column].signs) - 1
            column_cur_line = self.abacus[column].current_line
            if column_cur_line == column_max_line:
                column -= 1
                self.count(column)
            if (column < high_column) and (column_cur_line < column_max_line):
                self.line_increment(column)
                self.line_reset(column + 1)
                self.count(high_column)
            if (column == high_column) and (column_cur_line < column_max_line):
                self.line_increment(column)
                self.count(column)

    def enumerate(self):
        return list(self.states)

counter just tracks len(self.states). Remove it. count is recursive but it doesn't need to be.
Try making it a while, and make it yield its results instead of saving them: class Abacus(object): def __init__(self, chars=['+', '-'], base=2): if base <= 1: raise ValueError('Use argument "base" higher then one.') self.base = base self.abacus = [ AbacusColumn(0, [char + str(x+1) for char in chars]) for x in range(base) ] def gen_state(self): return ''.join(column.signs[column.current_line] for column in self.abacus) def line_reset(self, from_column): for column in self.abacus[from_column:]: column.current_line = 0 def count(self, column): high_column = len(self.abacus) -1 while column >= 0: column_max_line = len(self.abacus[column].signs) - 1 column_cur_line = self.abacus[column].current_line if column_cur_line == column_max_line: column -= 1 else: yield self.gen_state() self.abacus[column].current_line += 1 if column < high_column: self.line_reset( column + 1 ) column = high_column yield self.gen_state() def enumerate(self): return list(self.count(self.base - 1)) This highlights that the class isn't doing anything useful; break it into methods. 
class AbacusColumn(object): def __init__(self, current_line, signs): self.current_line = current_line self.signs = signs def gen_state(abacus): return ''.join(column.signs[column.current_line] for column in abacus) def line_reset(abacus, from_column): for column in abacus[from_column:]: column.current_line = 0 def abacus(chars=['+', '-'], base=2): if base <= 1: raise ValueError('Use argument "base" higher then one.') column = high_column = base - 1 abacus = [ AbacusColumn(0, [char + str(x+1) for char in chars]) for x in range(base) ] while column >= 0: column_max_line = len(abacus[column].signs) - 1 column_cur_line = abacus[column].current_line if column_cur_line == column_max_line: column -= 1 else: yield gen_state(abacus) abacus[column].current_line += 1 if column < high_column: line_reset(abacus, column + 1) column = high_column yield gen_state(abacus) print(list(abacus(base=5))) This is far simpler than it originally was. I'd be tempted to remove AbacusColumn and split abacus into two parallel lists and flip the order they are searched, allowing several more simplifications: def abacus(chars=['+', '-'], base=2): if base <= 1: raise ValueError('Use argument "base" higher then one.') digit = [0] * base while True: yield ''.join(chars[current_line] + str(i) for i, current_line in enumerate(digit, 1)) for column in range(base): if digit[column] == len(chars) - 1: digit[column] = 0 else: digit[column] += 1 break else: return print(list(abacus(base=5)))
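A side note on the for/else used in that last version, since it trips people up: the else suite runs only when the loop finishes without hitting break, which is exactly what lets the generator stop after the final carry ripples off the end. A minimal sketch of the same pattern in isolation (the helper name increment is made up for illustration):

```python
def increment(digits, base):
    """Add one to a little-endian digit counter in place.

    Returns False once every column has wrapped, i.e. when the for
    loop runs to completion without hitting break -- that is the
    case the else clause catches.
    """
    for i in range(len(digits)):
        if digits[i] == base - 1:
            digits[i] = 0       # this column overflows; carry left
        else:
            digits[i] += 1
            break               # no carry needed, stop early
    else:
        return False            # the loop never broke: we wrapped around
    return True

d = [0, 0]
states = [tuple(d)]
while increment(d, 2):
    states.append(tuple(d))

print(states)   # [(0, 0), (1, 0), (0, 1), (1, 1)]
```

This is the same carry/terminate logic the final abacus generator relies on, just without the sign strings.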
{ "domain": "codereview.stackexchange", "id": 11478, "tags": "python, combinatorics, opencv" }
Inclined plane problem
Question: This is the problem question from my textbook. My questions are concerning some formulas related to this problem. The force of friction is obviously down the plane. My initial question is: $F_{friction}={\mu}{F_{N}}$. However, the man is applying a force in the same direction as $F_{N}$. Does this affect the initial formula? Does this become $F_{friction}={\mu}{F_{off}}$ where $F_{off}=F_{friction}+F_{A}{\sin20^{\circ}}$ ($F_A$ is the force applied by the man)? My next question is related to my initial question. $F_p=F_w\sin35^\circ$ and $F_N=F_w\cos35^\circ$. In this case would $F_w\sin35^\circ=F_p+F_{friction}$ or just $F_w\sin35^\circ=F_p$ ($F_p$ is the force the block exerts down the plane)? Similarly, would $F_w\cos35^\circ=F_N+F_{A}{\sin20^{\circ}}$ or just $F_w\cos35^\circ=F_N$? Finally, what is the actual solution to the problem? The book says $464\:\mathrm{N}$ but an online worked solution says $953\:\mathrm{N}$. Thanks in advance. Answer: Does this affect the initial formula? Does this become $F_{friction}=\mu F_{off}$ where $F_{off}=F_{friction}+F_{A}\sin20^{\circ}$ ($F_A$ is the force applied by the man)? For that case you are talking about the normal force, so friction does not enter there; along the plane you have $F_w\sin35^{\circ}$ instead. Your second question essentially answers the first: $F_w\cos35^{\circ}=F_N+F_{A}\sin20^{\circ}$ is correct. I would stick with $953\:\mathrm{N}$, and even that is not exact, as it depends on what value of $g$ the solution manual used: taking $g = 9.81\:\mathrm{m/s^2}$ I got $934\:\mathrm{N}$, whereas with $g = 10\:\mathrm{m/s^2}$ I got $952\:\mathrm{N}$.
{ "domain": "physics.stackexchange", "id": 27097, "tags": "homework-and-exercises, newtonian-mechanics, forces, kinematics, friction" }
Conceptual question on volume current density and derivation of the continuity equation
Question: The expression for the volume integral of the volume charge density is $\int_{V} (\nabla \cdot\vec{J}) d\tau = -\frac{d}{dt} \int_{V} \rho d\tau = -\int_{V} (\frac{\partial \rho}{\partial t}) d\tau$ I understand physically that as charge flows out of a differential volume, the divergence of the volume current density is positive and that the volume charge density would decrease. But I need a clearer explanation for these equations: how did we go from $\int_{V} (\nabla \cdot\vec{J}) d\tau$ to $-\frac{d}{dt} \int_{V} \rho d\tau$ mathematically? When you think about it conceptually it is intuitive, but I'm talking about the mathematics here. Also, how did we get from $-\frac{d}{dt} \int_{V} \rho d\tau$ to $-\int_{V} (\frac{\partial \rho}{\partial t}) d\tau$? Is it because in this case $\rho$ might depend on the position? Maybe my questions are trivial but I don't have a strong background in multivariable calculus. I'll try my best to understand, so any insight would be appreciated. Answer: Consider a surface $S$ enclosing a volume $V$; the net current moving into the volume $V$ is $$ I = -\int_S {\rm d}^2{\bf S} \cdot {\bf J} \tag{1} $$ And your intuition here works: since ${\rm d}{\bf S}$ points outwards, if the flow of current is into the volume, the inner product is negative, so you need the minus sign. And you know that the current is just $$ I = \frac{{\rm d}q(t)}{{\rm d}t} = \frac{{\rm d}}{{\rm d}t}\int_{V}{\rm d}^3{\bf r} ~\rho({\bf r}, t) \tag{2} $$ where $\rho$ is the charge density. Now note that you are integrating w.r.t. the coordinates, and taking the derivative w.r.t. time, so these two operations commute, but if you interchange them you need to take into account that the quantity you are taking the time-derivative of ($\rho$) now depends on both the coordinates and time.
In other words $$ I = \frac{{\rm d}}{{\rm d}t}\int_{V}{\rm d}^3{\bf r} ~\rho({\bf r}, t) = \int_{V}{\rm d}^3{\bf r} ~\frac{\partial}{\partial t}\rho({\bf r}, t)\tag{3} $$ Replace (3) in (1) $$ \int_{V}{\rm d}^3{\bf r} ~\frac{\partial}{\partial t}\rho({\bf r}, t) = -\int_S {\rm d}^2{\bf S} \cdot {\bf J} \tag{4} $$ Now apply the divergence theorem $$ \int_{V}{\rm d}^3{\bf r} ~\frac{\partial}{\partial t}\rho({\bf r}, t) = -\int_V {\rm d}^3{\bf r} \nabla \cdot {\bf J} \tag{5} $$
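To complement the algebra, the interchange in (3) can be checked mechanically for a concrete density. The sketch below uses sympy with an arbitrary, made-up $\rho(x,t) = x^2 e^{-t}$ on a fixed interval standing in for the volume; the variable names are my own, not from the text:

```python
import sympy as sp

x, t = sp.symbols('x t')
rho = x**2 * sp.exp(-t)   # an arbitrary illustrative charge density

# d/dt of the integral over a fixed region...
lhs = sp.diff(sp.integrate(rho, (x, 0, 1)), t)

# ...equals the integral of the partial time-derivative (Leibniz rule)
rhs = sp.integrate(sp.diff(rho, t), (x, 0, 1))

print(sp.simplify(lhs - rhs))   # 0
```

Both sides evaluate to $-e^{-t}/3$, which is all the commutation in equation (3) claims for a time-independent region of integration.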
{ "domain": "physics.stackexchange", "id": 54044, "tags": "electromagnetism, conservation-laws, electric-current, classical-electrodynamics" }
Binary Search Tree Using Templates in C++
Question: I have made this BST using templates Node.h #ifndef NODE_H_INCLUDED #define NODE_H_INCLUDED template<typename T> class Node { public: Node<T> *pLeft; Node<T> *pRight; T val; Node<T>(T val) { this->val = val; pLeft = pRight = nullptr; } // Node<T>(const Node<T>& src); -> to be implemented // Node& operator=(const Node&); -> to be implemented }; #endif // NODE_H_INCLUDED Tree.h #ifndef TREE_H_INCLUDED #define TREE_H_INCLUDED #include <iostream> #include "Node.h" template<typename T> class Tree { Node<T>* root; Node<T>* insert_at_sub(T i, Node<T>*); Node<T>* delete_at_sub(T i, Node<T>*); int countNodes(Node<T> *p); void print_sub(Node<T> *p); Node<T>* minValue(Node<T>*); Node<T>* maxValue(Node<T>*); Node<T>* get_last(Node<T>*); Node<T>* get_first(Node<T>*); int t_size = 0; public: Tree () { root = nullptr; } ~Tree() { delete root; } void add(T i) { ++t_size; root = insert_at_sub(i, root); } void print() { print_sub(root); }; bool contain(T i) { return contain_sub(i, root); } bool contain_sub(T i, Node<T> *p); void destroy(T i) { if(contain(i)) root = delete_at_sub(i, root); else return; } void showFirst(); void showLast(); int get_size() { return t_size; } int getNumberLeftNodes() { return countNodes(root->pLeft); } int getNumberRightNodes() { return countNodes(root->pRight); } }; template<typename T> int Tree<T>::countNodes(Node<T> *p) { static int nodes; if(!p) return 0; if (p->pLeft) { ++nodes; countNodes(p->pLeft); } if (p->pRight) { ++nodes; countNodes(p->pRight); } return nodes + 1; } template<typename T> Node<T>* Tree<T>::insert_at_sub(T i, Node<T> *p) { if( ! 
p ) return new Node<T>(i); else if (i <= p->val) p->pLeft = insert_at_sub(i, p->pLeft); else if (i > p->val) p->pRight = insert_at_sub(i, p->pRight); return p; } template<typename T> void Tree<T>::print_sub(Node<T> *p) { if(p) { print_sub(p->pLeft); std::cout << p->val << std::endl; print_sub(p->pRight); } } template<typename T> bool Tree<T>::contain_sub(T i, Node<T> *p) { if (!p) return false; else if(i == p->val) return true; else if (i <= p->val) contain_sub(i, p->pLeft); else contain_sub(i, p->pRight); } template<typename T> Node<T> *Tree<T>::minValue(Node<T> *p) { Node<T> *current = p; while(current && current->pLeft) current = current->pLeft; return current; } template<typename T> Node<T> *Tree<T>::maxValue(Node<T> *p) { Node<T> *current = p; while(current && current->pRight) current = current->pRight; return current; } template<typename T> void Tree<T>::showLast() { Node<T> *last = maxValue(root); if(last) std::cout << last->val; else std::cout << ""; } template<typename T> void Tree<T>::showFirst() { Node<T> *first = minValue(root); if(first) std::cout << first->val; else std::cout << ""; } template<typename T> Node<T>* Tree<T>::delete_at_sub(T i, Node<T>* p) { if (i < p->val) p->pLeft = delete_at_sub(i, p->pLeft); else if (i > p->val) p->pRight = delete_at_sub(i, p->pRight); else if(i == p->val) { if ( ! p->pLeft) { Node<T> *temp = p->pRight; delete p; return temp; } else if ( ! 
p->pRight) { Node<T> *temp = p->pLeft; delete p; return temp; } Node<T> *temp = minValue(p->pRight); p->val = temp->val; p->pRight = delete_at_sub(p->val, p->pRight); } return p; } #endif // TREE_H_INCLUDED main.cpp #include <iostream> #include "Tree.h" using namespace std; class Test { string name; public: Test () {} Test(string name_) : name(name_) {} friend ostream& operator<<(ostream& os, Test& t) { os << t.name; return os; } bool operator<(Test t); bool operator<=(Test t); bool operator>(Test t); }; bool Test::operator<(Test t) { return (name < t.name); } bool Test::operator<=(Test t) { return (name <= t.name); } bool Test::operator>(Test t) { return (name > t.name); } int main() { Tree<int> tr; /* 4 / \ 1 6 / \ / \ 0 2 5 9 \ 89 / \ 12 222 \ 32 / 22 */ tr.add(4); tr.add(6); tr.add(1); tr.add(9); tr.add(2); tr.add(0); tr.add(89); tr.add(12); tr.add(32); tr.add(5); tr.add(22); tr.add(222); // tr.test(); tr.showFirst(); cout << endl; tr.showLast(); cout << endl; Tree<string> bst; bst.add("Zanildo"); bst.add("Helder"); bst.add("Wilson"); bst.add("Ady"); bst.add("Adilson"); bst.add("Patrick"); bst.showFirst(); cout << endl; bst.showLast(); cout << endl; Tree<Test> test; test.add({"Jhonny"}); test.add({"Bruno"}); test.add({"Garry"}); test.add({"Henry"}); test.add({"Amber"}); test.add({"Brandy"}); test.add({"Danny"}); test.add({"Cameron"}); test.add({"Edla"}); test.add({"Zenalda"}); test.showFirst(); cout << endl; test.showLast(); cout << endl; //test.print(); return 0; } I put all the Tree class function implementations inside the header file because I didn't want to put them in a cpp file and have to add the explicit instantiation at the end of the file like this for example: template class Tree<int>; // explicit instantiation I also know I have to provide a copy constructor and an assignment operator for the Node class but I am still studying how to implement them correctly as this is my first time having the need to provide either of them.
I have studied them before but while googling around came across the copy-swap idiom which is said to be the best approach, so I have a few tabs open to study it and try to implement it. If anyone wishes to give me a head start, that would be more than welcome. An iterative BST is better for performance when the tree gets quite large, but I have made the main functions for this BST recursive on purpose. I know how to implement those functions iteratively but not all of them yet. Afterwards I want to make this BST entirely iterative. Besides all this, what can be said about my BST with templates? Answer: Make it explicit that it is a binary tree Your class name is Tree, but there are many different types of trees. Make it clear that this is a binary tree, and name it BinaryTree. Move class Node inside class Tree Move the declaration of class Node inside that of class Tree, in the public section. This makes it clear that this Node is specific to the tree you are implementing, and avoids a potential conflict with other classes that might have nodes. Consider for example that you might also have an implementation of a linked list, which also consists of nodes named Node. If you would want to use both your binary tree and your linked list in the same program, you would get a conflict. Try to make your Tree look like other STL container classes Have a look at what member functions STL containers define. The closest STL container to your binary tree is std::set. You don't have to add all the functionality of an STL container right away, just first consider renaming some of your member functions to match that of the STL. For example, instead of add() and destroy(), use insert() and erase(). Instead of get_size(), use size(). There are several benefits to this. First, for someone who is already familiar with other STL containers, it makes working with your Tree more intuitive.
But that's not all: if you make it look enough like an STL container, then some of the STL algorithms might actually start to work on your Tree as well! Move printing out of class Tree Instead of having a print_sub() function that only prints to std::cout, consider instead writing a function that walks the tree and takes a function as one of its arguments, so that it allows the caller to decide what to do with each visited node. For example: template<typename T> void Tree<T>::visit_subtree(const Node<T> *p, std::function<void(const T&)> func) { if (p) { visit_subtree(p->pLeft, func); func(p->val); visit_subtree(p->pRight, func); } } template<typename T> void Tree<T>::visit(std::function<void(const T&)> func) { return visit_subtree(root, func); } Then you could call it like: Tree<Test> test; ... test.visit([](Test &val){std::cout << val << '\n';}); The advantage is that you can call it with any other function you like, so if you wanted to print it to std::cerr instead, or if you wanted to do something completely different with each element of the tree, you don't have to change your Tree's visit() function. However, another approach is: Implement iterators for your Tree Try to implement an iterator class for your Tree, and provide begin() and end() member functions that return the appropriate iterators, to allow someone to loop over all the elements of the tree with a simple for-statement, like: Tree<Test> test; ... for (const auto &val: test) std::cout << val << '\n'; Read this question for some good references on how to implement an iterator yourself. It is a bit of work, but it makes using your class much easier. Once you have it, you also get many things for free. For example, instead of having to write your own minValue() function, once you have iterators you can just use std::min_element on an instance of a Tree class to get the smallest element. Fix the memory leak in the destructor Your destructor only deletes the root node, not any of its children.
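A minimal sketch of one possible fix, restating Node with a destructor added (this assumes each node exclusively owns its children; note it also strengthens the case for the copy constructor and assignment operator you already plan to write, per the rule of three):

```cpp
template<typename T>
class Node {
public:
    Node<T> *pLeft;
    Node<T> *pRight;
    T val;

    Node(T v) : pLeft(nullptr), pRight(nullptr), val(v) {}

    // Deleting a node now frees its whole subtree, so the existing
    // "delete root;" in ~Tree() releases every node.
    // (delete on a null pointer is a harmless no-op.)
    ~Node() { delete pLeft; delete pRight; }
};
```

Be aware that this teardown is recursive, so a very deep, unbalanced tree could in principle overflow the stack; an iterative teardown loop avoids that, but the above is the smallest change.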
Use const where appropriate You should make arguments, variables, return values and whole member functions const wherever appropriate. For example, countNodes() does not modify the Node<T> that you give a pointer to as an argument, and it also doesn't change anything in class Tree itself. Therefore, you should declare it as: int countNodes(const Node<T> *p) const; The same goes for many other functions. Apart from catching potential errors and helping the compiler produce better optimized code, doing this will also allow these member functions to be called on const instances of class Tree.
{ "domain": "codereview.stackexchange", "id": 37073, "tags": "c++, recursion, tree, template, binary-search" }
Formal Definition/counter part in mathematics for “Objects” of Object Oriented Models
Question: This is a question I asked in mathematics SE forum, and I was referred here. So here is the question- I'm a newbie in both formal mathematics and theoretical computer science, so please bear with me if you find my question is not properly framed. Object Oriented Modeling seems very useful in defining complex interactions when simulating the real world. But it's mostly used in programming. I was wondering if we have a similar concept in mathematics. When we're doing programming, we can understand the concept of "Objects" and "Object Oriented Programming" and just implement it. But do we have a formal definition of "Objects" in terms of Set Theory? Or for that matter, any other formal mathematical theory? Can we implement/formally define three primary object oriented modeling concepts- 1. Encapsulation 2. Inheritance 3. Polymorphism I know the question is too broad, but I would really appreciate it if you can provide some pointers as well so that I can understand these concepts better. Answer: The answer is complicated, for two reasons. Different people in Computer Science interpret the term "object" differently. One is that an object consists of some data and operations packaged together. The other is that an object is all that but also has "state," i.e., it is some form of a changeable entity. There are deep philosophical issues to do with what "change" means (and what "entity" means, as it is constantly changing), and whether mathematical descriptions actually capture changeable entities. Object in the sense of data + operations: That is pretty standard in mathematics. Take any group theory textbook. It will have somewhere a definition such as $h_g(x) = g x g^{-1}$. (It is a conjugation operator.) The $h_g$ is an "object" in this terminology. It has some data ($g$) and an operation $x \mapsto g x g^{-1}$. Or you can make it more object-y by taking the pair $\langle g, x \mapsto gxg^{-1}\rangle$ or the triple $\langle g, x \mapsto gxg^{-1}, x \mapsto g^{-1}xg\rangle$.
You can construct these kinds of objects in any functional programming language that has lambda abstraction and some way to form tuples. Abadi and Cardelli's "Theory of Objects" deals with objects of this kind extensively. Objects with state (or objects that change): Does mathematics have such things? I don't think so. I haven't seen a mathematician talk about anything that changes, not in his/her professional life. Newton used to write $x$ for the position of a particle, which is supposedly changing, and $\dot{x}$ for its rate of change. Mathematicians eventually figured out that what Newton was talking about was a function $x(t)$ from real numbers into a vector space, and $\dot{x}$ was another such function which was the first derivative of $x(t)$ with respect to $t$. From this, many deep-thinking mathematicians have concluded that change doesn't really exist and all you have are functions of time. But what was changing in Newtonian mechanics wasn't the position, but the particle. The position is its instantaneous state. No mathematician or physicist would pretend that a particle is a mathematical idea. It is a physical thing. So it is with objects. They are "physical" things, and the states are their mathematical attributes. For a nice discussion of this aspect, see Chapter 3 of Abelson and Sussman's Structure and Interpretation of Computer Programs. This is a textbook at MIT and they teach it to all scientists and engineers, who I think understand "physical" things perfectly fine. The fact that particles aren't mathematical doesn't mean that we can't deal with them mathematically. If you ask a mathematician to model a two-particle system, he will immediately make up two functions and call them $x_1(t)$ and $x_2(t)$. So, the two particles reduce to two meaningless indices (1 and 2). This is the mathematician's way of saying we don't know what those particles are and we don't care.
All we need to know is that their positions evolve independently (or separately). So, we will model them by two separate functions. Similarly the standard mathematical way to model object-oriented programs is to treat each object as an index into the state space. The only difference is that since objects come and go, and the structure of the system is dynamic, we need to extend it to a "possible world" model where each world is basically a collection of indices. Allocation and deallocation of objects would involve moving from one world to another. There is a problem though. Unlike in mechanics, we want the state of our objects to be encapsulated. But the mathematical descriptions of objects put states all over the place, completely destroying encapsulation. There is a mathematical trick called "relational parametricity" which can be used to cut things back to size. I won't go into it now, except to emphasize that it is a mathematical trick, not a very conceptual explanation of encapsulation. A second way of modelling objects mathematically, with encapsulation, is to finesse the states and describe the object behaviour in terms of observable events. For a good discussion of both of these models, I can refer you to my paper titled Objects and classes in Algol-like Languages. [Note added:] A nice analysis of the mathematical underpinnings of objects can be found in William Cook's article "On Understanding Data Abstraction, Revisited".
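As a concrete rendering of the "data + operations packaged together" view from the beginning of this answer, here is the conjugation example packaged as a first-class value. This is just an illustrative sketch in Python with made-up names; any language with lambda abstraction and tuples or records would do, as the Abadi and Cardelli reference develops formally.

```python
def conjugation_object(g, mul, inv):
    """The pair <g, x |-> g x g^-1> as data plus an operation.

    mul and inv supply the ambient group structure, so the same
    'object' constructor works for any representation of a group.
    """
    return {
        'data': g,                                   # the captured datum
        'apply': lambda x: mul(mul(g, x), inv(g)),   # the packaged operation
    }

# Integers under addition (an abelian group), where conjugation
# collapses to the identity: g + x + (-g) == x.
h = conjugation_object(5, mul=lambda a, b: a + b, inv=lambda a: -a)
print(h['apply'](7))   # 7
```

The lambda closes over g, so the datum is accessible only through the bundled operation unless it is exported explicitly, which is a small-scale picture of the encapsulation discussed above.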
{ "domain": "cstheory.stackexchange", "id": 1518, "tags": "pl.programming-languages, semantics, formal-modeling, set-theory, object-oriented" }
Benefit of behaviors & benefit of hybrid systems
Question: I have two questions that I think are closely related: Why is it useful to introduce behaviors, e.g., goal-to-goal + obstacle avoidance? (The alternative would have been to just use a single behavior with the more complex cost function that rewards getting to the goal without hitting stuff)? Why is it useful to analyze hybrid systems (instead of purely continuous systems that approximate switches and resets as continuous phenomena)? I think I know a naive answer: this makes system design & analysis more tractable, by splitting a very complex problem into smaller pieces that can be, to some extent, studied separately. But is there a more in-depth intuition behind these choices? Answer: The answer you have found out yourself is not naive but rather at the core of science and technology. In fact, when we are not good at tackling the problem as a whole, because it is too difficult or even intractable for us, or when we do not want to put too much effort into dealing with lots of details we are not interested in (think of your example of approximating switches and resets as continuous phenomena), then we split it into pieces we are able to better manipulate, in the hope (sometimes we can get stronger guarantees though) that this new arrangement will help us solve the original problem. A secondary reason underlying this methodology is to implement the composition of tools, which makes our lives much easier because it allows us to reuse components that have been designed and realized for problems different from the one at hand. Therefore, by breaking down (analysis) a big goal into a list of smaller ingredients that can be combined together, we will be able later on to aggregate (synthesis) those units to solve other quests, maybe only loosely related to our starting point.
{ "domain": "robotics.stackexchange", "id": 1776, "tags": "control" }
Names and abbreviations in biology
Question: After one year in college, I am quite surprised with the number of different abbreviations used in biology for the same thing. I wonder if there are any rules for naming something new in biology, for example new genes, or if it is just up to the researchers without restriction? If there are rules, how did the confusion arise? Answer: Elsewhere on SE Biology I have answered a related question regarding enzyme names. Here I will restrict myself to genes, as it seems that this is the main concern of the poster. (In talking about ‘abbreviations’ I think he is referring to what are generally termed gene ‘symbols’.) The answer from @KarlKjer addressed current recommendations regarding gene nomenclature; mine addresses the causes of the current confusion. Summary Rules (actually recommendations) did not appear until a late stage in the development of the field of genetics, when the problem was already there and there was a need to do something about it. Historically in animal systems the naming of new genes tended to relate to mutant phenotypes as this was how they were discovered. The function of the gene that had been mutated was generally unknown. Sometimes different mutants of the same gene caused different phenotypes, either because of severity of damage to the protein, use of animals from a different development stage, or different growth conditions for bacteria. These mutants appeared to be of different genes, which were given different names. Sometimes the same gene was discovered at about the same time by different workers and given different names in the laboratory while work was in progress and, hence, on publication. Later it became possible in some cases to identify the products of genes for which there was no clear mutant phenotype — perhaps just lethality — and one method of naming was on the basis of the size of the protein, preceded by the letter ‘p’ (e.g. 
p63) The current trend is to try to rename genes according to the function of their products, where these are known, i.e. to replace the name of a mutant phenotype by the name of an enzyme or structural protein. The implementation of such rules or recommendations is imperfect, depending on the cooperation of authors and editors. Illustration using some Drosophila genes I shall use Drosophila melanogaster (the fruit fly) to provide some illustrations of the history of genetic nomenclature, because this was the organism used by Thomas Hunt Morgan (he of the centimorgan) to perform genetic studies after the rediscovery of Mendel's work in 1900. His laboratory was responsible for the discovery in 1910 of the first Drosophila mutant, a spontaneous sex-linked mutant that resulted in males having eyes that were white, rather than the normal red. The gene responsible for this mutant was named white, initiating the custom of naming genes after their mutant phenotype. (Drosophila gene names became italicized by convention.) What else could they have done? Not only was the protein product of this gene unknown, the whole idea of genes encoding proteins did not yet exist. The abbreviation (symbol) for this gene was merely w. Clearly, nobody anticipated that another 17,000 genes would follow, or suggested that a committee on gene nomenclature should be set up. It is interesting to note that the product of the white gene was not identified until almost 90 years later. In 1999 it was established that this was a member of the ABC transporter family, responsible for bringing into cells the guanine and tryptophan needed to make the red pigment so characteristic of the eye of the wild-type Drosophila. The human version of white was identified roughly contemporaneously by sequence homology, and although it was initially referred to as hW (human homologue of white), it is now named according to its gene product: ATP-binding cassette sub-family G member 1 (ABCG1).
Its function in humans — who only have red eyes in flash photographs — is in lipid transport. The era of molecular genetics in the late twentieth century brought an avalanche of new genes, many not associated with a particular phenotype. The culture — accepted by scientific journals — was that if you discovered a new gene you had the right to name it as you pleased. A generation of young scientists were pleased to choose names that reflected twentieth century, rather than classical, culture. Flip through the names of Drosophila genes (try the autocomplete here) and you will find alien, bazooka, cactus, Dorothy, ether a go-go all the way to zucchini. Nor was it only the Drosophila geneticists that were responsible for this. The hedgehog segmentation gene, discovered in Drosophila, has three human homologues. One of these was named 'sonic hedgehog', which I am informed is the protagonist of a children's video game. O tempora, o mores! A couple of final points will be made to illustrate the problems of nomenclature of Drosophila genes. Several other mutations of the white gene have been observed which have different phenotypes (e.g. result in different eye colour). This is because they involve insertions into the gene rather than its complete deletion. This also illustrates one historic difficulty that could result in several names for the same gene. The other point is that many of the genes identified by genome sequencing, at least initially, were associated with no mutant phenotype and had no known function, and were named simply by an accession number (CG1234 etc.). As functions emerged they would be renamed. And an example of the nomenclature situation can be seen in the NCBI entry for Drosophila gene TfIIS (RNA polymerase II elongation factor): Also known as: BG:DS00929.12; br52; CG3710; Dmel\CG3710; DmS-II; DmSII; l(2)35cF; l(2)35Cf; l(2)br52; l35Cf; RnpSII; TFIIS; TFIISA; TFS-II
{ "domain": "biology.stackexchange", "id": 8720, "tags": "genetics, nomenclature" }
Semantic HTML5 and proper use of tags
Question: I'm trying to develop a base for a blog using some of the new tags introduced in HTML5 and I want to not only make sure I'm using them correctly, but my code is also semantic. Here is just the 'sample' document. <!DOCTYPE html> <html> <head> <meta http-equiv="content-type" content="text/html;charset=utf-8"/> <title>page title</title> </head> <body> <div class="container"> <header> <hgroup> <h1>Blog title</h1> <h2>Blog tagline goes here</h2> </hgroup> <nav> <ul> <li><a href="#top-level-link">Top Link</a></li> <li><a href="#top-level-link">Top Link</a></li> <li><a href="#top-level-link">Top Link</a></li> </ul> </nav> </header> <!-- <hr> commenting out so the answers still make sense --> <aside> <nav> <ul> <li><a href="#">sidebar link</a></li> <li><a href="#">sidebar link</a></li> <li><a href="#">sidebar link</a></li> </ul> </nav> </aside> <div class="content"> <article> <h3>Article Header</h3> <section> <h4>Section header</h4> <p>Article section content</p> </section> <section> <h4>Section header</h4> <p>Article section content</p> </section> <section> <h4>Section header</h4> <p>Article section content</p> </section> <footer> posted by <a rel="author" href="#">user</a> on <time datetime="2012-01-01T00:00+00:00">January 1st, 2012. 12:00pm</time> </footer> </article> <article> <h3>Article Header</h3> <section> <h4>Section header</h4> <p>Article section content</p> </section> <!-- more sections may exist per article --> <footer> posted by <a rel="author" href="#">user</a> on <time datetime="2012-01-01T00:00+00:00">January 1st, 2012. 12:00pm</time> </footer> </article> <article> <h3>Article Header</h3> <section> <h4>Section header</h4> <p>Article section content</p> </section> <!-- more sections may exist per article --> <footer> posted by <a rel="author" href="#">user</a> on <time datetime="2012-01-01T00:00+00:00">January 1st, 2012. 12:00pm</time> </footer> </article> </div> <!-- <hr> Don't wanna make these guys look crazy --> <footer> Page footer. 
</footer> </div> </body> </html> Can anyone see any misuse of the new tags or any better, cleaner way to write this? Answer: Quick review: Remove this damn <hr>, styling is for CSS. In each <section>, you better have an <h2> :-) You can add rel="author" to your <a> link on the author. Even better, you can use <a href="https://url.to.google.plus/user?rel=author" rel="author">The author's name</a> (Google will recognize this as a "rich snippet" and show your face in Google results. Take a look at http://schema.org for more information). Rich snippets are a whole other topic, so I'll stop talking about it :) Edit after your comment: Each article should contain an <h2> for its article sections. Each article should have its title in an <h1>. Yes, even if there are multiple <h1> in the main page. If there are just the titles on the main page (no article's text), feel free to use <h2>. The <h1> on the frontpage is tolerated, not on the blog pages. Feel free to use it or not on the frontpage, but make sure that each blog page has the title of the post as an <h1>, not the site's name.
{ "domain": "codereview.stackexchange", "id": 1967, "tags": "html5" }
Find the missing number in an array
Question: I'm working on a personal project to practice my programming skills and I came to this problem. I tried to create a program that will find the missing number in an array. I'm looking forward to upgrading this program and am trying to find the missing numbers in an array. Yes, with an 's'. /******************************** *A sample program that will solve *for a missing integer in a given sorted array. ********************************/ public class MissingInteger{ public static void PrintArray(){ int []fArrayDuplicate = {1,2,3,4,5,7,8,9,10}; System.out.println("\nGiven the incomplete array:"); System.out.print("["); for(int a = 0 ; a<fArrayDuplicate.length ; a++) System.out.print(" " + fArrayDuplicate[a] + " "); System.out.print("]"); } public static void FindMissing(){ int []finalArray = {1,2,3,4,5,7,8,9,10}; int len = finalArray.length; //First, I took the length of the array. int sum = 0; for(int x=0;x<len;x++) sum+=finalArray[x]; //Second, I took the sum of the given array. int totalNumber = ((len + 1) * (len + 2)) / 2; //Third, I used this formula to get the total number. int missingInt = totalNumber - sum; // Fourth, I subtract the sum from the total number to get the missing integer. System.out.println("\nTherefore, the missing integer is " + missingInt + "."); } public static void main(String[] args){ PrintArray(); FindMissing(); } } Answer: In Java, the convention is that methods are written in camelCase. Which means PrintArray should be printArray and so on. The indentation is a little off with the declaration of the int[]. Instead of this : int []fArrayDuplicate It should be like this int[] fArrayDuplicate It might look like nitpicking (and it might be!) but it improves readability by a ton. Talking about nitpicking, I think your for loop could use some spacing : for(int x=0;x<len;x++) could be for (int x = 0; x < len; x++) When you comment your code, you should explain why you wrote what you wrote, not simply restate what it does.
I mean, int len = finalArray.length; //First, I took the length of the array. is pretty obvious, so the comment isn't necessary. I think it is the same for all your comments, except the third. But what I need to understand, as a developer who reads your code, isn't that you used a formula to get the total, but how that formula works to produce the total. Both your methods use the same int[], which makes me think it should be a parameter of both your methods, this way you can be sure they will never be different. public static void main(String[] args){ int[] fArrayDuplicate = {1,2,3,4,5,7,8,9,10}; PrintArray(fArrayDuplicate); FindMissing(fArrayDuplicate); } I also think you should pull out the main method of this class and put it in a class that is used only to start the application with the right parameters. Doing so, you could use your MissingInteger class only to find the missing integer. This way, if you ever want to reuse it, you will be able to without bringing the main method with it. Also, you could make your MissingInteger class follow OOP principles more closely. You could input the int[] in the constructor like so : public class MissingInteger { private int[] fArrayDuplicate; public MissingInteger(int[] fArrayDuplicate){ this.fArrayDuplicate = fArrayDuplicate; } public void PrintArray(){ System.out.println("\nGiven the incomplete array:"); System.out.print("["); for(int a = 0 ; a<fArrayDuplicate.length ; a++) System.out.print(" " + fArrayDuplicate[a] + " "); System.out.print("]"); } public void FindMissing(){ int len = fArrayDuplicate.length; //First, I took the length of the array. int sum = 0; for(int x=0;x<len;x++) sum+=fArrayDuplicate[x]; //Second, I took the sum of the given array. int totalNumber = ((len + 1) * (len + 2)) / 2; //Third, I used this formula to get the total number. int missingInt = totalNumber - sum; // Fourth, I subtract the sum from the total number to get the missing integer.
System.out.println("\nTherefore, the missing integer is " + missingInt + "."); } } I think you might want to rename your class to MissingIntegerFinder since, well, the class itself isn't about a missing integer, but about finding a missing integer. And the name fArrayDuplicate doesn't mean much to me. There's nothing talking about duplicates in your code, so why is it prefixed with an f? I have a hard time finding a better name; someone else might. But for now I'd name it integersWithOneMissing or... something like that.
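The asker mentions wanting to upgrade this to multiple missing numbers ("Yes, with an 's'"), which neither the sum formula nor the review above addresses: the sum trick only recovers exactly one missing value. That generalization is easiest with a set difference; a quick sketch, in Python for brevity (the function name is my own):

```python
def missing_numbers(arr):
    """All integers absent from `arr` between its minimum and maximum.

    Unlike the sum-formula trick, which works only when exactly one
    value is missing, a set difference recovers any number of gaps.
    """
    present = set(arr)
    return [n for n in range(min(arr), max(arr) + 1) if n not in present]

print(missing_numbers([1, 2, 3, 4, 5, 7, 8, 9, 10]))  # [6]
print(missing_numbers([1, 2, 5, 9]))                  # [3, 4, 6, 7, 8]
```

The same idea translates directly to Java with a HashSet; the point is the algorithm, not the language.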
{ "domain": "codereview.stackexchange", "id": 11205, "tags": "java, algorithm, array" }
How to Check Separability of 2D Filter / Signal / Matrix
Question: Given: x(n1,n2) = {1 ,n1=0,n2=0 ; 2 ,n1=1,n2=0 ; 3 ,n1=0,n2=1 ; 6 ,n1=1,n2=1 } How could one prove it is separable? Answer: Nilesh Padhi, Welcome to the DSP Community. The classic definition of separable means the data (2D) given by $ X \in \mathbb{R}^{m \times n} $ can be written as: $$ X = \sigma u {v}^{T} $$ Where $ \sigma \in \mathbb{R} $, $ u \in \mathbb{R}^{m} $ and $ v \in \mathbb{R}^{n} $. This is called a rank-1 matrix. How can you get those parameters and vectors given $ X $? Well, the Singular Value Decomposition (SVD) is here to save the day. The SVD of $ X $ is given by: $$ X = U \Sigma {V}^{T} = \sum {\sigma}_{i} {u}_{i} {v}_{i}^{T} $$ You can see those match when $ {\sigma}_{j} = 0 $ for $ j \geq 2 $. So what you should do is the following: epsThr = 1e-7; [mU, mD, mV] = svd(mX); vD = diag(mD); if(all(vD(2:end) < epsThr)) vU = mU(:, 1); vV = mV(:, 1); end We check whether the singular values from the second one onward are small. If they are (you can decide what counts as small via epsThr), then the matrix is separable and the vectors are vU and vV. In your case: mX = [1, 3; 2, 6]; [mU, mD, mV] = svd(mX); vD = diag(mD); disp(vD); The result is: vD = 7.0711 0.0000 Since all values of vD are zero besides the first element (only a single non-vanishing singular value), it is separable. Indeed you can see that: mD(1) * mU(:, 1) * mV(:, 1).' ans = 1.0000 3.0000 2.0000 6.0000 As expected. This method is really useful in Image Processing when we want to convolve with a 2D kernel that turns out to be separable, since we can then apply the 2D convolution as two 1D convolutions (along columns / rows). In that case we define $ \hat{u} = \sqrt{{\sigma}_{1}} u $ and $ \hat{v} = \sqrt{{\sigma}_{1}} v $ where $ {\sigma}_{1} $ is the singular value. Then we convolve $ \hat{u} $ along columns and $ \hat{v}^{T} $ along rows.
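For readers working outside MATLAB, the same rank-1 test is easy to reproduce with NumPy; this is a rough translation of the snippet above (the function name and default tolerance are my own choices):

```python
import numpy as np

def separable(X, eps=1e-7):
    """Return (u, v) with X == u @ v.T if X is rank 1, else None.

    Mirrors the MATLAB check: keep the first singular triplet and require
    all remaining singular values to be numerically zero.
    """
    U, s, Vt = np.linalg.svd(X)
    if not np.all(s[1:] < eps):
        return None
    u = np.sqrt(s[0]) * U[:, :1]     # column vector, scaled by sqrt(sigma_1)
    v = np.sqrt(s[0]) * Vt[:1, :].T
    return u, v

X = np.array([[1.0, 3.0],
              [2.0, 6.0]])
uv = separable(X)   # sigma_1 = 7.0711, the second singular value vanishes
```

As in the answer, u and v scaled by the square root of the first singular value are exactly the 1D kernels you would convolve along columns and rows.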
{ "domain": "dsp.stackexchange", "id": 6648, "tags": "matlab, image-processing, linear-algebra, svd, separability" }
Predicting popular times
Question: I have a MySQL database that contains a datetime for every check-in with an RFID card. I have millions of records in it. It screams machine learning to me. So I'd like to predict popular times to see when most people use the terminal. I'd like to represent it the same way Google does for places: This is the rare case when I do not ask for code, but keywords. I assume I have to predict the number of check-ins between time ranges. Also I am sure I have to make a dataset like this: (These are the number of check-ins in hour ranges) month | day | day-name | 0-1 | 1-2 | ... | 6-7 etc. jan | 1 | Monday | 0 | 1 | | 12 jan | 2 | Tuesday | 1 | 3 | | 15 So it can predict today's popular times based on what day it is, and also on the day of a given month (as Saturdays and Sundays will be dead, as will the 25th of December. A good prediction should learn these from the dataset). As I wrote this question, it seems I solved most of it. The only thing I need is a keyword, as I have little experience in this. What model fits this best? Answer: The keyword is "regression": estimating a continuous value as a function of some other parameters. There are different forms of regression, such as linear regression and logistic regression, that differ in the assumptions they make about the data. You could try looking up those for a start.
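To make the keyword concrete: before fitting any regression model, it is worth building the baseline that Google-style popular-times charts essentially are, the historical average number of check-ins per (weekday, hour) slot. A sketch (function and variable names are my own, not from the answer):

```python
from collections import defaultdict
from datetime import datetime

def popularity_profile(checkins):
    """Average check-in count per (weekday, hour) slot across observed days.

    `checkins` is a list of datetime objects pulled from the database.
    Any learned regression model then has to beat this simple baseline.
    """
    per_day = defaultdict(int)      # (date, weekday, hour) -> count
    for t in checkins:
        per_day[(t.date(), t.weekday(), t.hour)] += 1
    slots = defaultdict(list)       # (weekday, hour) -> counts per day
    for (_, wd, h), c in per_day.items():
        slots[(wd, h)].append(c)
    return {slot: sum(v) / len(v) for slot, v in slots.items()}

# Two Mondays: two check-ins in the 9-10h slot, then one.
checkins = [datetime(2023, 1, 2, 9, 0), datetime(2023, 1, 2, 9, 30),
            datetime(2023, 1, 9, 9, 15)]
profile = popularity_profile(checkins)   # {(0, 9): 1.5}
```

Extending the slot key with month or a holiday flag gives the regression features the asker sketched in the table.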
{ "domain": "ai.stackexchange", "id": 373, "tags": "machine-learning, prediction" }
Relative molecular mass of an unknown gas in a mixture
Question: A gaseous mixture contains oxygen and another unknown gas in the molar ratio of 4:1. The mixture diffuses through a porous plug in 245 seconds; under similar conditions, the same volume of oxygen takes 220 seconds to diffuse. What is the molecular mass of the unknown gas? I used Graham's law of diffusion to find the molecular mass of the mixture, i.e. $$\frac{r_\text{mix}}{r_{\ce{O2}}}=\frac{220}{245}=\sqrt{\frac{M_{\ce{O2}}}{M_\text{mix}}}$$ $$M_\text{mix}=39.6$$ But what's next? How do I calculate the $M_\text{gas}$? Answer: After getting the relative molecular mass of the mixture, i.e. $39.6$, use $$\mathrm{M_{net} = \frac{(n_1\times M_1 + n_2\times M_2)}{(n_1 + n_2)}} \tag{i}$$ Given that the molar ratio of the gases is $4:1$, take the moles of oxygen as $\mathrm{4x}$ and those of the unknown gas as $\mathrm x$. Plug $\mathrm{n_1 = 4x,\ n_2 = x,\ M_1= 32,\ M_2 = M,\ M_{net} = 39.6}$ in equation (i) and get $\mathrm M$.
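Putting the two steps together numerically (a quick sanity check of the algebra, not part of the original answer):

```python
# Step 1: Graham's law. The same volume diffused, so rate is inversely
# proportional to time: r_mix / r_O2 = t_O2 / t_mix = sqrt(M_O2 / M_mix).
M_O2 = 32.0
t_O2, t_mix = 220.0, 245.0
M_mix = M_O2 * (t_mix / t_O2) ** 2      # ~39.7 g/mol

# Step 2: mole-weighted mean with a 4:1 ratio of O2 to the unknown gas:
# (4*M_O2 + 1*M) / 5 = M_mix  =>  M = 5*M_mix - 4*M_O2
M_unknown = 5 * M_mix - 4 * M_O2        # ~70 g/mol

print(round(M_mix, 1), round(M_unknown, 1))  # prints: 39.7 70.4
```

So the unknown gas has a relative molecular mass of roughly 70.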
{ "domain": "chemistry.stackexchange", "id": 10628, "tags": "physical-chemistry" }
What are the possible star fuels?
Question: I always thought the only fuel for a star was hydrogen, which is fused into helium. But while reading some questions and answers here in ASO, I saw phrases like "This balance stays relatively stable until the star runs out of whatever its current fuel is". Besides hydrogen, what are the possible fuels in a star’s core? Answer: As a star begins to age and exhaust its hydrogen supply, pressure will build in its interior until temperatures rise enough to begin fusing helium into carbon. If the star is massive enough, this process will continue, using heavier elements each time. Referring to this link on stellar fusion, depending on their mass, stars use a variety of elements from hydrogen to silicon as 'fuel'. Stars towards the end of their giant stages will have cores of iron, but iron doesn't fuse into anything heavier in a stellar core.
{ "domain": "astronomy.stackexchange", "id": 3026, "tags": "star, core" }
Golang unbuffered channel - Correct Usage
Question: Here is an example where I'm trying to understand the concepts of buffered channels. I have three functions and create a buffered channel of length 3. I also passed a waitgroup to notify when all the goroutines are done. Finally, I collect the values through range. Could you please help me in reviewing this code? Where could I improve? package main import ( "fmt" "sync" ) type f func(int, int, chan int, *sync.WaitGroup) func add(x, y int, r chan int, wg *sync.WaitGroup) { fmt.Println("Started Adding function....") r <- (x + y) wg.Done() } func sub(x, y int, r chan int, wg *sync.WaitGroup) { fmt.Println("Started Difference function") r <- (x - y) wg.Done() } func prod(x, y int, r chan int, wg *sync.WaitGroup) { fmt.Println("Started Prod function") r <- (x * y) wg.Done() } func main() { var operations []f = []f{add, sub, prod} ch := make(chan int, len(operations)) wg := sync.WaitGroup{} x, y := 10, 20 wg.Add(len(operations)) for _, i := range operations { go i(x, y, ch, &wg) } wg.Wait() close(ch) for val := range ch { fmt.Println(val) } } Answer: The code in the question cannot determine which result corresponds to which operation. Otherwise, the code is correct. Here are two alternatives for improving the code: 1. Eliminate the channel Change the operations to simple functions that return an int. This makes it easier to test and reason about the implementation of the operations. Collect the results in a slice instead of in a channel. With this change, we know that the result of operations[0] is at slice index 0, operations[1] is at slice index 1 and so on. Move all the waitgroup and goroutine related code together in main. This makes the concurrency aspect of the program easier to understand.
Here's the code: package main import ( "fmt" "sync" ) type f func(int, int) int func add(x, y int) int { return (x + y) } func sub(x, y int) int { return (x - y) } func prod(x, y int) int { return (x * y) } func main() { var operations []f = []f{add, sub, prod} x, y := 10, 20 results := make([]int, len(operations)) var wg sync.WaitGroup wg.Add(len(operations)) for i, fn := range operations { go func(i int, fn f) { defer wg.Done() results[i] = fn(x, y) }(i, fn) } wg.Wait() fmt.Println(results) } 2. Eliminate the waitgroup The application can receive the known number of values sent to the channel instead of receiving until the channel is closed. If the channel is not closed, then there's no need for the waitgroup. What's more, there's no need to buffer the channel. package main import ( "fmt" ) type f func(int, int, chan int) func add(x, y int, r chan int) { r <- (x + y) } func sub(x, y int, r chan int) { r <- (x - y) } func prod(x, y int, r chan int) { r <- (x * y) } func main() { var operations []f = []f{add, sub, prod} ch := make(chan int) x, y := 10, 20 for _, i := range operations { go i(x, y, ch) } for range operations { val := <-ch fmt.Println(val) } }
{ "domain": "codereview.stackexchange", "id": 40363, "tags": "go" }
Status of new telescopes on Mauna Kea?
Question: I know that there have been conflicts between scientists and the native population. Astronomers want to build new telescopes on Mauna Kea; however, several sacred sites would have to be built on. My question is, are there any current plans to move forward with the observatories, or have they reached gridlock? Answer: You're talking about the construction of the 'Thirty Meter Telescope'. Currently the construction has been halted by the State Supreme Court of Hawaii as a reaction to the violation of native sacred land. Negotiations are going on about moving the construction site to the Canary Islands, which has been deemed an acceptable alternative in the northern hemisphere, even if only barely.
{ "domain": "astronomy.stackexchange", "id": 2225, "tags": "telescope, observational-astronomy, observatory" }
When using resolution variable elimination to simplify a CNF, does that change the truth values of the other variables?
Question: When you use resolution variable elimination to preprocess/simplify a formula in CNF form, the resulting formula is equisatisfiable. What I wonder is whether I can use this technique to remove variables I don't care about while leaving the truth values of the variables I care about untouched in the final model I get from the SAT solver. Answer: Yes, you can. Any satisfying assignment for the simplified formula can be extended to a satisfying assignment for the original formula, by setting appropriate values for the eliminated variables (but not changing the values of the remaining variables), and vice versa.
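As a toy illustration of why this works, here is a sketch of Davis–Putnam style elimination together with the model-extension step the answer describes (the clause representation, DIMACS-style integer literals in frozensets, is my own choice):

```python
from itertools import product

def eliminate(clauses, v):
    """Remove variable v by resolution: replace all clauses mentioning v
    with the non-tautological resolvents of its positive/negative pairs."""
    pos = [c for c in clauses if v in c]
    neg = [c for c in clauses if -v in c]
    rest = [c for c in clauses if v not in c and -v not in c]
    resolvents = []
    for c, d in product(pos, neg):
        r = (c - {v}) | (d - {-v})
        if not any(-lit in r for lit in r):   # skip tautologies
            resolvents.append(frozenset(r))
    return rest + resolvents

def extend_model(model, clauses, v):
    """Choose a value for the eliminated v without touching any other
    variable: v=True is safe unless some clause with -v would be falsified."""
    def true_lit(lit):
        return model.get(abs(lit)) == (lit > 0)
    out = dict(model)
    out[v] = all(any(true_lit(l) for l in c if l != -v)
                 for c in clauses if -v in c)
    return out

original = [frozenset({1, 2}), frozenset({-1, 3})]
simplified = eliminate(original, 1)         # [{2, 3}]
model = {2: False, 3: True}                 # satisfies the resolvent
full = extend_model(model, original, 1)     # adds a value for variable 1
```

Note that the values of variables 2 and 3 are copied into the extended model unchanged, which is exactly the property the asker wants.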
{ "domain": "cs.stackexchange", "id": 11378, "tags": "satisfiability, boolean-algebra" }
Is there a simple way to find ROS headers in CMake without catkin?
Question: I am building an external library that will be linked in as part of a ROS package, but is built outside of Catkin. I'd like to use some of the ROS logging macros (e.g. ROS_INFO, ROS_DEBUG) inside this library for debugging. I have manually modified the CMake to include_directories( /opt/ros/kinetic/include/) and the code works as I want. I'd like to do something like: find_package( roscpp ) if (ROSCPP_FOUND) include_directories(${ROSCPP_INCLUDE_DIRS}) ... Unfortunately, CMake can't find roscpp without all of the catkin magic. I don't want to find_package(catkin) as it messes with the normal CMake build setup for the library. One hack I can think of is to check the ROS_ROOT environment variable and, if it exists, make the include relative to that. But that is a hack that I'd prefer to avoid. Is there a clean, minimal setup that can discover the ROS include directory with pure CMake? Originally posted by dcconner on ROS Answers with karma: 476 on 2016-07-06 Post score: 1 Answer: Your variable names just have the wrong case. You must use roscpp_FOUND and roscpp_INCLUDE_DIRS. Originally posted by Dirk Thomas with karma: 16276 on 2016-07-06 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by dcconner on 2016-07-07: Gah! Thanks. I thought the example I was following made the tags all upper case.
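Applied to the snippet from the question, the case-corrected version would look like this (an untested sketch; the result variables set by a package's CMake config follow the package's own casing, here lowercase roscpp):

```cmake
find_package(roscpp)
if(roscpp_FOUND)
  include_directories(${roscpp_INCLUDE_DIRS})
  # link the library against ${roscpp_LIBRARIES} as well
endif()
```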
{ "domain": "robotics.stackexchange", "id": 25169, "tags": "catkin, cmake" }
set_workspace moveit not working
Question: Hi friends, I am working with MoveIt and Panda from Franka Emika. I am trying to set the workspace for the robot in MoveIt to limit joint movement and avoid crashes, but it does not seem to have any effect on the robot. The function is available for move_group in the Python API: https://github.com/ros-planning/moveit/blob/master/moveit_commander/src/moveit_commander/move_group.py My script looks like the following: group = moveit_commander.MoveGroupCommander("panda_arm") ## [minX, minY, minZ, maxX, maxY, maxZ] ws = [0.2, -0.3, 0.3, 0.7, 0.3, 0.7] group.set_workspace(ws) The script will fail due to an error if ws isn't well formed (e.g. ws with 7 entries), which tells me that I am making the function call correctly. If there are other ways to limit joint movements (in space), I appreciate any ideas or hints. Thanks a lot! Originally posted by Qualityland on ROS Answers with karma: 3 on 2021-12-09 Post score: 0 Answer: Hello Qualityland, Have a look at this answer, #q273485; in short, you cannot add limits for revolute joints this way. Instead, you can add a collision object to the region you want to avoid. You can have a look at moveit collision object adding, or you can follow this answer #q209030. Originally posted by Ranjit Kathiriya with karma: 1622 on 2021-12-09 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 37234, "tags": "ros, moveit, ros-melodic" }
Unable to visualise point cloud in RViz even with header.frame_id selected
Question: Hi all, I am very new to ROS, so please bear with me. I am trying to render a sample point cloud in RViz, but for some reason it just won't show. I have set the fixed frame to map, which is the value of header.frame_id. rostopic echo on the point cloud topic returns data, so my take is that the point cloud is correctly generated but cannot be properly displayed in rviz. Here's the code for publishing the point cloud: void Node::PublishMapPoints (std::vector<ORB_SLAM2::MapPoint*> map_points) { sensor_msgs::PointCloud2 cloud = MapPointsToPointCloud (map_points); map_points_publisher_.publish (cloud); } sensor_msgs::PointCloud2 Node::MapPointsToPointCloud (std::vector<ORB_SLAM2::MapPoint*> map_points) { if (map_points.size() == 0) { std::cout << "Map point vector is empty!" << std::endl; } sensor_msgs::PointCloud2 cloud; const int num_channels = 3; // x y z cloud.header.stamp = current_frame_time_; cloud.header.frame_id = "map"; cloud.height = 1; cloud.width = map_points.size(); cloud.is_bigendian = false; cloud.is_dense = true; cloud.point_step = num_channels * sizeof(float); cloud.row_step = cloud.point_step * cloud.width; cloud.fields.resize(num_channels); std::string channel_id[] = { "x", "y", "z"}; for (int i = 0; i<num_channels; i++) { cloud.fields[i].name = channel_id[i]; cloud.fields[i].offset = i * sizeof(float); cloud.fields[i].count = 1; cloud.fields[i].datatype = sensor_msgs::PointField::FLOAT32; } cloud.data.resize(cloud.row_step * cloud.height); unsigned char *cloud_data_ptr = &(cloud.data[0]); float data_array[num_channels]; for (unsigned int i=0; i<cloud.width; i++) { if (map_points.at(i)->nObs >= min_observations_per_point_) { data_array[0] = map_points.at(i)->GetWorldPos().at<float> (2); //x. Do the transformation by just reading at the position of z instead of x data_array[1] = -1.0* map_points.at(i)->GetWorldPos().at<float> (0); //y.
Do the transformation by just reading at the position of x instead of y data_array[2] = -1.0* map_points.at(i)->GetWorldPos().at<float> (1); //z. Do the transformation by just reading at the position of y instead of z //TODO dont hack the transformation but have a central conversion function for MapPointsToPointCloud and TransformFromMat memcpy(cloud_data_ptr+(i*cloud.point_step), data_array, num_channels*sizeof(float)); } } return cloud; } Here's my RViz: Any help would be much appreciated. Originally posted by mun on ROS Answers with karma: 32 on 2019-09-03 Post score: 0 Original comments Comment by gvdhoorn on 2019-09-03: I'm not sure it really matters, but you seem to be setting width and height, but not depth of your pointcloud. A cloud with a depth of 0 would perhaps not work. Comment by mun on 2019-09-03: @gvdhoorn I thought I had that here: cloud.data.resize(cloud.row_step * cloud.height); where cloud.row_step = cloud.point_step * cloud.width;. Am I missing something? Comment by gvdhoorn on 2019-09-03: No. I hadn't had my coffee yet. Ignore my comment. Comment by mun on 2019-09-03: @gvdhoorn All good. I did hope it could be as straightforward as that though... :( Comment by gvdhoorn on 2019-09-03: Btw: I rarely create PointCloud2 messages by hand. Is there any particular reason you're not creating a regular pcl::PointCloud<> and then converting it using pcl::toROSMsg(..) (from pcl_conversions)? Also documented here. Comment by mun on 2019-09-03: @gvdhoorn Oh, I'll try that. I should also add that rostopic echo on the map points topic gives data but it's all zeros. Comment by SamsAutonomy on 2019-09-03: I definitely agree with @gvdhoorn, it's way easier that way. Also, when you say it's all zeros, what do you mean? If all of your points are honestly 0,0,0, your point cloud is rather boring and all the points are going to be at the origin. Take a look at your Rviz screenshot, there is one white dot at the origin.
Try increasing the point size to 0.1m or something in Rviz and if the white dot at the origin is larger, all of your points are probably sitting at the origin. Comment by mun on 2019-09-03: I've fixed my code, so now they aren't all zeros. I'm getting some variations in the point cloud, but it's clearly not a point cloud. Here's a video: https://youtu.be/Yr9Qsx_Bpns Comment by mun on 2019-09-03: @gvdhoorn Can I ask why PointCloud is preferred over PointCloud2? Comment by mun on 2019-09-03: @gvdhoorn rostopic echo can return nonzero data, so the point cloud message is correct, right? The header looks like this: header: seq: 3658 stamp: secs: 1567488578 nsecs: 26397067 frame_id: "map" height: 1 width: 1194 fields: - name: "x" offset: 0 datatype: 7 count: 1 - name: "y" offset: 4 datatype: 7 count: 1 - name: "z" offset: 8 datatype: 7 count: 1 is_bigendian: False point_step: 12 row_step: 14328 Could it be that I've got the scale wrong? Comment by gvdhoorn on 2019-09-03: "Can I ask why PointCloud is preferred over PointCloud2?" The former is a C++ type in the pcl library, the latter is a ROS message in the sensor_msgs package. These are two different things. The pcl_conversions package knows how to convert between these two. "Could it be that I've got the scale wrong?" How could we know? You don't give any information about that. But just for you to check: everything in ROS uses metres for distances. RViz will assume metres in your PointCloud2 as well. And again: I would really recommend not creating a sensor_msgs/PointCloud2 message by hand.
{ "domain": "robotics.stackexchange", "id": 33724, "tags": "rviz, ros-melodic, pointcloud" }
Remove duplicates from string without using additional buffer
Question: I wrote a simple program to remove duplicates from a String without using an additional buffer. The program should filter the duplicates and return just the unique string. Example: Input: FOOOOOOOOLLLLLOWWWWWWWWWW UUUUP Output: FOLW UP I just want to know if the below solution is a good solution for my problem statement. public class compareString { public static void main(String args[]) { removeDuplicateString("FOOOOOOOOLLLLLOWWWWWWWWWW UUUUP"); } public static void removeDuplicateString(String input) { String value1 = input; String value2 = input; String finalValue = ""; int count = 0; char char1; char char2 = 0; for (int i = 0; i < value1.length(); i++) { flag = 0; char1 = value1.charAt(i); for (int j = 0; j < value2.length(); j++) { char2 = value2.charAt(j); if (char1 == char2) { count++; } } if (count > 1) { finalValue=finalValue+char1; i=i+(count-1); } else { finalValue = finalValue + char1; } count = 0; } System.out.println(finalValue); } } Answer: That's not working. Try the input: abaa (result: aa) I do not know what you mean by "without using additional buffer", because finalValue is at least an additional buffer (which is kind of necessary. You could create a view, but this would be a bit complex). An easier approach would be to create a LinkedHashSet. Insert all characters, output the set, done: public static void removeDuplicateString2(final String input) { final Set<Character> set = new LinkedHashSet<>(); for (int i = 0; i < input.length(); i++) set.add(input.charAt(i)); final StringBuilder stringBuilder = new StringBuilder(set.size()); for (final Character character : set) stringBuilder.append(character); System.out.println(stringBuilder); } About your code: removeDuplicateString I would call it something like removeMultipleOccurrence String finalValue = ""; You may use a StringBuilder to reduce string concatenations.
String value1 = input; String value2 = input; You do not need these variables, just use input instead of value1/2 int count = 0; char char1; char char2 = 0; You do not need char2, and you can move the rest inside the loop to where they are initialized. flag = 0; This variable is not defined. if (count > 1) { finalValue=finalValue+char1; i=i+(count-1); } else { finalValue = finalValue + char1; } You may move the finalValue line out of the if/else. Then you can skip the else part, because it is empty. If you combine everything, you could have: public static void removeMultipleOccurrence(final String input) { StringBuilder result = new StringBuilder(); for (int i = 0; i < input.length(); i++) { int count = 0; final char char1 = input.charAt(i); for (int j = 0; j < input.length(); j++) { if (char1 == input.charAt(j)) { count++; } } if (count > 1) i = i + (count - 1); result.append(char1); } System.out.println(result); } Hint: This is still not working, just doing the same in a clearer way. If you want to do it your way, you have to search the part of the string before the current position to check whether you have already seen the current char.
This could be done in this way: public static void removeMultipleOccurrence(final String input) { String finalValue = ""; for (int i = 0; i < input.length(); i++) { int count = 0; final char currentChar = input.charAt(i); for (int j = 0; j < i; j++) { if (currentChar == input.charAt(j)) { ++count; } } if (!(count > 0)) finalValue = finalValue + currentChar; } System.out.println(finalValue); } If we look at the count, it is just a boolean, so we end up with: public static void removeMultipleOccurrence(final String input) { final StringBuilder result = new StringBuilder(); for (int i = 0; i < input.length(); i++) { boolean alreadySeen = false; final char currentChar = input.charAt(i); for (int j = 0; j < i; j++) { if (currentChar == input.charAt(j)) { alreadySeen = true; break; } } if (!alreadySeen) result.append(currentChar); } System.out.println(result); } We could get rid of the boolean now: public static void removeMultipleOccurrence(final String input) { final StringBuilder result = new StringBuilder(); withNextChar: for (int i = 0; i < input.length(); i++) { final char currentChar = input.charAt(i); for (int j = 0; j < i; j++) { if (currentChar == input.charAt(j)) continue withNextChar; } result.append(currentChar); } System.out.println(result); } Or, we could look in the result, if we have it already: public static void removeMultipleOccurrence(final String input) { final StringBuilder result = new StringBuilder(); for (int i = 0; i < input.length(); i++) { String currentChar = input.substring(i, i + 1); if (result.indexOf(currentChar) < 0) //if not contained result.append(currentChar); } System.out.println(result); }
{ "domain": "codereview.stackexchange", "id": 3532, "tags": "java, strings" }
Prevalent large (>=90kDa) maintenance protein/loading control
Question: I was wondering if anyone had recommendations for good, large (hopefully 100kDa+) control proteins that would be present in most mammalian cells. I'm working mostly with tissue samples from humans and mice, and airway epithelium in particular, if that matters. So far the best I've thought of is HSP90, but that is far from ideal for a number of reasons. My target proteins are between 125 and 210 kDa. To improve resolution, we've run pretty much everything under 90kDa off the gel. That removes the old standbys like actin or GAPDH. I understand that the protein levels could vary between cell types for a good loading control target. Bonus points if you know a good antibody clone to it. Answer: Vinculin! I love our grad-students, I can't believe I didn't think of it last night. Also, awesome chart, even if it's from a company.
{ "domain": "biology.stackexchange", "id": 1830, "tags": "lab-techniques, western-blot" }
Trying to understand lowest configurations of carbon
Question: My study group is debating which are the three lowest configurations of carbon. I've been arguing that the electron has to jump to the 3s level for the configuration to be different. Others have suggested that the two valence electrons just have to change their $m$ and $s$ numbers on the 2p level. We are using Morrison's Modern Physics and having trouble settling this issue within the text. We are aware of Hund's rule, so some of the problem is about exactly what is meant by "configuration." We want to understand this problem and do the work ourselves, but we are instilling doubt in one another. Can someone clarify "configuration" and maybe suggest the general approach appropriate here? Answer: By 'electron configuration' we mean the way an atom's electrons are arranged in atomic orbitals, in accordance with Pauli's Exclusion Principle, the Aufbau Principle and Hund's Rule, so as to give the lowest possible total energy (known as the ground state). For carbon (Z=6), six electrons have to be placed in the correct atomic orbitals. The first 2 occupy the lowest energy atomic orbital possible, that is 1s, so we have $1s^2$ for the first term. For the remaining four electrons, the next two lowest available atomic orbitals are 2s and 2p and following the above rules that gives us $2s^2$ and $2p^2$. Bearing in mind that to satisfy Hund's Rule the latter two 2p electrons are divided over one $p_x$ and one $p_y$ sub-orbital, each with one electron of the same spin quantum number ($m_s=-\frac{1}{2}\ \text{or}\ +\frac{1}{2}$). Overall we can write the electron configuration of carbon as: $1s^2 2s^2 2p^2$ or with some added detail $1s^22s^22p_x^12p_y^1$ and because $[He]=1s^2$, carbon's electron configuration (ground state) can be written as: $[C]=[He]2s^22p^2$ = $[He]2s^22p_x^12p_y^1$.
The first excited state of carbon $C^*$, and the one that explains the existence of $C(+4)$ chemical compounds, is $[He]2s^12p_x^12p_y^12p_z^1$ where all three lone 2p electrons have the same $m_s$ value.
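The aufbau bookkeeping above is mechanical enough to script. Here is a minimal sketch (the helper name and the truncated orbital list are mine, and it ignores the well-known exceptions to the aufbau order in heavier atoms):

```python
# Minimal aufbau-filling sketch: place Z electrons into subshells in the
# standard filling order and print the ground-state configuration label.

AUFBAU_ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d"]  # truncated for light atoms
CAPACITY = {"s": 2, "p": 6, "d": 10}

def ground_state_configuration(z):
    """Return the ground-state electron configuration string for atomic number z."""
    remaining = z
    parts = []
    for subshell in AUFBAU_ORDER:
        if remaining == 0:
            break
        cap = CAPACITY[subshell[-1]]   # capacity from the orbital letter
        n = min(cap, remaining)
        parts.append(f"{subshell}{n}")
        remaining -= n
    return " ".join(parts)

print(ground_state_configuration(6))   # carbon -> 1s2 2s2 2p2
```

Note that this only captures the subshell occupancies; the division of the two 2p electrons over $p_x$ and $p_y$ (Hund's Rule) is a statement about terms within the $2p^2$ configuration, not about the configuration label itself.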
{ "domain": "physics.stackexchange", "id": 25204, "tags": "homework-and-exercises, atomic-physics, quantum-chemistry" }
Filter recognition from transfer function
Question: I have a filter with this transfer function in the z-plane: $H(z)= z^{-k}$. Which type of filter is this (BP, LP, BS) and why? I have a delay filter with poles/zeros at zero and a constant magnitude equal to 1, so can I characterize it as an all-pass filter? Answer: From one of your comments it appears that you've actually already answered the question. Here are a few remarks and questions that should help you gain some more understanding: If you have a transfer function $H(z)$ (and if you assume that the corresponding system is stable), then the frequency response is obtained by choosing $z=e^{j\omega}$. For $H(z)=z^{-k}$, figure out the magnitude and the phase of $H(e^{j\omega})$. If the magnitude of $H(e^{j\omega})$ does not depend on frequency, what kind of filter have you got? What is the input-output relation of the given system?
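A quick numerical check of the hints above (my sketch, not part of the original answer): sample $H(e^{j\omega})=e^{-jk\omega}$ on a frequency grid and look at its magnitude and phase.

```python
# Evaluate H(e^{jw}) = e^{-jkw} on a grid and confirm the magnitude is 1 at
# every frequency (the hallmark of an all-pass filter), with linear phase -k*w.
import numpy as np

k = 3
w = np.linspace(-np.pi, np.pi, 1001)
H = np.exp(-1j * k * w)          # H(e^{jw}) for H(z) = z^{-k}

magnitude = np.abs(H)            # should be identically 1
phase = np.unwrap(np.angle(H))   # linear phase with slope -k (group delay k)

print(np.allclose(magnitude, 1.0))  # True
```

The flat magnitude plus linear phase is exactly what you expect from a pure delay of $k$ samples.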
{ "domain": "dsp.stackexchange", "id": 9853, "tags": "lowpass-filter, bandpass" }
What vegetation would thrive in the Martian atmosphere?
Question: Most plants require carbon dioxide for their photosynthesis, which Mars has in overabundance. Would the atmospheric composition of Mars (let's ignore temperatures for the purpose of this question) allow vegetation to grow? Answer: This is not my field by a long shot, so take what I say with a grain of salt. However, this question is very hard to answer because whether or not a plant will grow depends on a great variety of factors. Even if we ignore the temperature as you say, there are other considerations. These include, but are not limited to:
- Soil composition: I doubt that Martian soil can support earth vegetation even if its atmosphere could. Plants need various nutrients, and specific pH ranges, among other things.
- Atmospheric pressure: I am not at all sure that the Martian atmosphere (though it is, indeed, rich in CO2) would be enough to drive an earth plant's photosynthesis. Bear in mind that the atmospheric pressure on Mars averages 600 pascals (0.087 psi), about 0.6% of Earth's mean sea level pressure (source). This makes it highly unlikely that unmodified earth plants would be able to thrive there.
- Water, water, water...
- Pollinating species: Many plants depend on other species (e.g. bees or hummingbirds) for their propagation. These would be hard to find on Mars.
- Sunlight: I don't know if Mars receives enough sunlight at its distance from the sun to drive an unmodified plant's photosynthesis.

Now, that said, it should theoretically be possible to start with some extremophile archaea or bacteria that would, over the course of many years (at least hundreds, thousands more probably), terraform Mars to make it suitable for human habitation. Specially engineered plants could play a role then, but I find it very hard to believe that any existing, unmodified, multicellular plant life of earth origin could survive on Mars.
{ "domain": "biology.stackexchange", "id": 1180, "tags": "photosynthesis" }
The transversality axiom in Lieb's Thermodynamics paper
Question: I'm reading The Physics and Mathematics of the Second Law of Thermodynamics and have a question about the T4 transversality axiom, which is written on page 54. T4) Transversality. If $\Gamma$ is the state space of a simple system and if $X \in \Gamma$, then there exist states $X_0 \overset{T}{\sim}X_1$ with $X_0 \prec\prec X \prec\prec X_1.$ The relation $\prec$ is adiabatic accessibility: $X\prec Y$ means that there is an adiabatic transition from $X$ to $Y$, and $X \prec \prec Y$ means $X\prec Y$ and $Y \not\prec X$. The relation $\overset{T}{\sim}$ is thermal equilibrium. Why is this axiom plausible? I don't know orthodox thermodynamics and started studying from this paper. Answer: It is not really plausible, at least not to me, and no more plausible than what it purports to replace, namely Carathéodory's "axiom of adiabatic inaccessibility". The latter states that in an arbitrarily small neighborhood of equilibrium there are states that cannot be reached by an adiabatic process. The answer to your question is in Lieb's paper right after the definition T4 of transversality: "To put this in words, the axiom requires that for every adiabat there exists at least one isotherm (i.e., an equivalence class w.r.t. ∼T), containing points on both sides of the adiabat." Recall the standard two-variable (p,V) plots of an ideal gas where you plot the isotherms and the adiabats. They create a net covering the state space and, crucially, you can define Carnot cycles, for example. You can define them because the isotherms cross the adiabats; when you have more than two variables, these adiabatic and isothermal surfaces fill ("foliate") the state space. The entropy function is a strictly monotonic function from one adiabatic surface to another, and Carathéodory's axiom is thereby satisfied, for a state lying on a lower-entropy surface cannot be reached by adiabatic means from a higher-entropy surface. 
The transversality axiom is satisfied by the impossibility of a cycle in which one part is isothermal, during which energy and entropy are exchanged, while the other part is adiabatic, with work exchanged. That such a cycle cannot exist is consistent with Kelvin's axiom denying the existence of a cycle with a single heat sink.
{ "domain": "physics.stackexchange", "id": 60780, "tags": "thermodynamics, statistical-mechanics, equilibrium, adiabatic" }
How to factorize the Matrix in TensorFlow? (Recommender System)
Question: Given a user ratings matrix which is $n \times p$, where $n$ users rate $p$ movies, I already have a row matrix $n \times 10$ which characterises the users. I ideally wanted to use the TF WALS method for optimisation, https://www.tensorflow.org/versions/master/api_docs/python/tf/contrib/factorization/WALSMatrixFactorization but it looks like it creates the row matrix itself. What I need is to create the column matrix (which is $10 \times p$, not both), containing the relationship between the hidden characteristics (10) and the movies (p). How can I do this in TF? Answer: If R is the rating matrix, U is the user matrix and M is the movie matrix, then note that there is almost certainly no matrix M that satisfies $R = UM$: U and M are of too low rank. However, you should be able to find the matrix M that minimizes $|R - UM|$. There is no need to use an optimizer (though you could, I guess) because this is a convex problem; you're just solving a large linear system. This is indeed just what ALS does repeatedly. You've found an ALS solver, and if you just need to solve one step and have the user matrix already, I think you just supply it as row_init and run one iteration? I haven't used it, but conceptually that's all you are doing. You don't need weights either.
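The "just solve a linear system" suggestion can be sketched with NumPy on synthetic data (the names and sizes below are mine); `np.linalg.lstsq` minimizes $\|UM - R\|$ over all movie columns at once:

```python
# Given ratings R (n x p) and a fixed user matrix U (n x 10), find the movie
# matrix M (10 x p) minimizing ||R - U M||_F with a single least-squares solve,
# i.e. no iterative optimizer is needed.
import numpy as np

rng = np.random.default_rng(0)
n, p, d = 50, 30, 10
U = rng.normal(size=(n, d))          # known user factors
M_true = rng.normal(size=(d, p))     # hidden movie factors (for the demo only)
R = U @ M_true                       # synthetic noiseless ratings

# lstsq solves min_M ||U M - R|| column-by-column in one call
M, *_ = np.linalg.lstsq(U, R, rcond=None)

print(np.allclose(M, M_true))        # True: exact recovery in the noiseless case
```

With real (noisy, incomplete) ratings the recovery is of course only approximate, and a weighted solve over observed entries is what WALS itself does.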
{ "domain": "datascience.stackexchange", "id": 3397, "tags": "tensorflow, recommender-system, matrix-factorisation" }
Why does the machine learning algorithm need to learn a set of functions in the case of missing data?
Question: I am currently studying the textbook Deep Learning by Goodfellow, Bengio, and Courville. Chapter 5.1 Learning Algorithms says the following: Classification with missing inputs: Classification becomes more challenging if the computer program is not guaranteed that every measurement in its input vector will always be provided. To solve the classification task, the learning algorithm only has to define a single function mapping from a vector input to a categorical output. When some of the inputs may be missing, rather than providing a single classification function, the learning algorithm must learn a set of functions. Each function corresponds to classifying $\mathbf{x}$ with a different subset of its inputs missing. This kind of situation arises frequently in medical diagnosis, because many kinds of medical tests are expensive or invasive. One way to efficiently define such a large set of functions is to learn a probability distribution over all the relevant variables, then solve the classification task by marginalizing out the missing variables. With $n$ input variables, we can now obtain all $2^n$ different classification functions needed for each possible set of missing inputs, but the computer program needs to learn only a single function describing the joint probability distribution. See Goodfellow et al. (2013b) for an example of a deep probabilistic model applied to such a task in this way. Many of the other tasks described in this section can also be generalized to work with missing inputs; classification with missing inputs is just one example of what machine learning can do. I was wondering if people would please help me better understand this explanation. Why is it that, when some of the inputs are missing, rather than providing a single classification function, the learning algorithm must learn a set of functions? And what is meant by "each function corresponds to classifying $\mathbf{x}$ with a different subset of its inputs missing."? 
I would greatly appreciate it if people would please take the time to clarify this. Answer: Intuitively, this is similar to the case when you are making predictions but you don't have all the necessary information to make the most accurate prediction, or maybe there isn't a single accurate prediction, so you have a set of possible predictions (rather than a single prediction). For example, if you hadn't seen the last Liverpool game (in the Champions League) against Atlético Madrid, you would have probably said that Liverpool was the most likely team to win the CL this year (2020) too. However, after having seen their last game, you noticed that they are not unbeatable and they are not perfect, so, although they have shown you (during this and the previous season) that they are a very good team, they may also not be the best until the end of the season. So, at this point, you may have a set of two possible hypotheses: Liverpool will win the CL or Liverpool will not win the CL. In general, if you had a dataset that is representative of your whole population, then the dataset alone should be sufficient to make accurate predictions (i.e. it contains all the information sufficient to make accurate predictions). If that's not the case (which is often true), then you will have to account for all possible values of the missing data or you will have to make assumptions (or introduce an inductive bias). The authors also mention the concept of marginalization, which is used in probability theory to calculate marginal probabilities, e.g. $p(X=x)$ (or for short $p(x)$), when there's another random variable $Y$, by accounting for all possible values of $Y$. In other words, if you're interested only in $p(x)$ and you have the joint probability distribution $p(x, y)$, then marginalization allows you to compute $p(x)$ using $p(x, y)$ and all possible values that the random variable $Y$ can take. 
In any case, I think their description is a little bit vague and using the concept of marginalization to convey the idea behind the "multiple hypotheses" isn't the most appropriate approach, IMHO. If you are interested in these concepts in the context of neural networks, I suggest you read something about Bayesian machine learning or Bayesian neural networks.
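To make the marginalization idea concrete, here is a toy sketch with two binary inputs and a binary label (the joint-table values are invented for the demo): one learned joint distribution answers all $2^2$ missing-input patterns.

```python
# joint[y, x1, x2] = p(y, x1, x2); entries sum to 1. Classifying with a
# missing input just sums (marginalizes) that axis out of the joint table.
import numpy as np

joint = np.array([
    [[0.20, 0.05],   # y=0: (x1=0,x2=0), (x1=0,x2=1)
     [0.05, 0.10]],  # y=0: (x1=1,x2=0), (x1=1,x2=1)
    [[0.05, 0.10],   # y=1
     [0.15, 0.30]],
])

def posterior(x1=None, x2=None):
    """p(y | observed inputs): marginalize out whichever inputs are missing."""
    table = joint
    table = table[:, [x1], :] if x1 is not None else table
    table = table[:, :, [x2]] if x2 is not None else table
    py = table.sum(axis=(1, 2))        # sum over the missing axes
    return py / py.sum()               # normalize by p(observed inputs)

print(posterior(x1=1, x2=1))   # both inputs observed
print(posterior(x1=1))         # x2 missing: marginalized out
print(posterior())             # everything missing: the prior p(y)
```

The point of the textbook passage is visible here: one object (`joint`) plays the role of all four classification functions, one per subset of missing inputs.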
{ "domain": "ai.stackexchange", "id": 2615, "tags": "machine-learning, classification, learning-algorithms, probability-distribution" }
Rotational Kinematics - Newton's Second Law
Question: Consider the following system: Where we have:
$M_a$ = Mass of A
$M_b$ = Mass of B
$R$ = Radius of the Pulley
$I$ = Moment of Inertia of the Pulley
$\mu$ = Coefficient of friction between A and table
Now, using Newton's Second Law, calculate the acceleration of the block B. I was able to calculate it using conservation of energy. But when it comes to applying net force and resulting torque, I'm kinda lost. Here is what I've done: I've considered that the weight of block B applies a force at one end of the string, and the friction force on block A applies another force at the other end of the string, hence: $$ F_b = M_b g\\ F_a = - \mu M_a g $$ Then, I thought that the string is just transferring the forces to the pulley (correct me if I'm wrong to assume that), where these forces result in some net torque: $$ T_{F_b} = M_b g r\\ T_{F_a} = -\mu M_a g r\\ T_{net} = gr(M_b - \mu M_a) $$ But we know, by Newton's second law for rotational motion, that: $$ T_{net} = I \alpha $$ Then: $$ I \alpha = gr(M_b - \mu M_a)\\ \alpha = \frac{gr(M_b - \mu M_a)}{I} $$ But what I've found is the angular acceleration $\alpha$. Is it going to be the same as the acceleration of block B? Can someone please check if what I've done is indeed correct, and if it's not, correct and explain my mistakes? Thanks! Answer: You are on the right path. You want to assume an acceleration $a$ (to be determined), then write the forces in terms of that. This means that the tension between A and the pulley is now sufficient to accelerate A and overcome the friction, $$T_A = M_a a + \mu M_a g$$ The tension between B and the pulley is the force of gravity minus the force needed to accelerate B: $$T_B = M_b g - M_b a$$ and the difference between these forces is just enough to accelerate the pulley. Note that angular acceleration $\alpha$ is related to linear acceleration $a$ with $a = \alpha R$ where $R$ is the radius of the pulley. Can you do it now?
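Completing the answer's outline numerically (the sample masses, radius, inertia and friction coefficient below are assumptions, not from the problem): eliminating $T_A$ and $T_B$ via the pulley equation $(T_B - T_A)R = I\alpha$ with $\alpha = a/R$ gives $a = g(M_b - \mu M_a)/(M_a + M_b + I/R^2)$, which we can check against the torque balance.

```python
# Combine T_A = Ma*a + mu*Ma*g, T_B = Mb*g - Mb*a and (T_B - T_A)*R = I*(a/R),
# then solve for the linear acceleration a of block B.

g = 9.81
Ma, Mb = 2.0, 1.0      # kg, assumed
mu = 0.1               # assumed friction coefficient
R, I = 0.05, 1.25e-3   # assumed pulley radius (m) and moment of inertia (kg m^2)

a = g * (Mb - mu * Ma) / (Ma + Mb + I / R**2)
alpha = a / R           # angular acceleration of the pulley

# consistency check: the net torque on the pulley equals I*alpha
T_A = Ma * a + mu * Ma * g
T_B = Mb * g - Mb * a
print(abs((T_B - T_A) * R - I * alpha) < 1e-9)  # True
```

Note how the $I/R^2$ term acts like an extra mass: the pulley's inertia slows the whole system, which the question's original formula for $\alpha$ misses.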
{ "domain": "physics.stackexchange", "id": 37459, "tags": "homework-and-exercises, newtonian-mechanics, rotational-kinematics" }
Given a set of numbers (negative or positive), and a maximum weight w, find a subset that is maximal whose sum is less than w
Question: The aim of this problem is to find a subset (need not be consecutive) of a given set such that the sum is maximal and less than some given number $w$. (Note, we are trying to find a subset whose sum is less than or equal to $w$, not closest to $w$.) For example, given a set $\{1, 3, 5, 9, 10\}$ and maximum weight 17, the maximal subset is $\{3, 5, 9\}$ since its sum is exactly 17. Another example: given a set $\{1, 3, 4, 9\}$ and maximum weight 15, the maximal subset is $\{1, 4, 9\}$ since its sum is 14, and no subset has a sum of exactly 15. Example with both positive and negative numbers: given a set $\{-3, 2, 4\}$ and maximum weight 3, the subset is the set itself since -3 + 2 + 4 = 3. I know how to solve it with only positive numbers, but I am struggling to find an algorithm to solve this problem for the general case with both positive and negative numbers. Obviously, my goal is not to use the brute force approach and check every possible subset, since the complexity would be $O(n2^n)$. I stumbled upon an idea on another post that suggested adding a sufficiently large number to every element in the set and subsequently changing the maximum weight. That is, given a set $R = \{ a_1, a_2, ... , a_n \}$, we add some number $X$ (we can pick some number greater than or equal to the absolute value of the smallest negative number) to get a set that looks like $\{ a_1 + X, a_2 + X, ... , a_n + X \}$ and change the maximum weight to $nX + w$ where $w$ was the original weight. Now, we have reduced the problem to only non-negative numbers. However, I could not see a way to actually find the subset that was closest to the original weight, but only whether any elements add up exactly to the original weight (i.e., there is no way to actually find the subset, but only to determine that some subset exists). Is there any other clever trick like this one to solve the problem for both positive and negative numbers? Any help would be thoroughly appreciated. 
Answer: This problem can be solved by a rather simple dynamic program: Given a set of numbers $\{a_1,\dots,a_n\}$, let $best(i, w)$ be the best solution using only a subset of the elements $a_i,\dots,a_n$ with a maximum value $w$; then $$best(i,w) = \max(a_i + best(i+1,w-a_i), best(i+1,w))$$ with boundary conditions $best(n + 1,w) = -\infty$ for any $w < 0$, and $best(n + 1,w) = 0$ otherwise. Note that this problem is NP-hard, because it solves the partition problem. In particular, the running time of the above program depends on the given numbers and can be exponential in the size of the input in the worst case.
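A direct transcription of this recurrence into memoized Python (a sketch; the function name is mine). The key to handling negatives is that the $w < 0$ check happens only at the base case $i = n+1$: a running budget may dip below zero mid-way and be restored by a later negative element.

```python
# best(i, budget): largest achievable sum using a subset of a[i:] such that
# the overall chosen total stays <= the original w; infeasible branches
# evaluate to -infinity.
from functools import lru_cache

def max_subset_sum(a, w):
    n = len(a)

    @lru_cache(maxsize=None)
    def best(i, budget):
        if i == n:
            return float("-inf") if budget < 0 else 0
        # either take a[i] or skip it
        return max(a[i] + best(i + 1, budget - a[i]), best(i + 1, budget))

    return best(0, w)

print(max_subset_sum([1, 3, 5, 9, 10], 17))  # 17 (e.g. {3, 5, 9})
print(max_subset_sum([1, 3, 4, 9], 15))      # 14 ({1, 4, 9})
print(max_subset_sum([-3, 2, 4], 3))         # 3  (the whole set)
```

Recovering the actual subset is the usual DP trick: re-run the recurrence and at each $i$ record whether the "take" branch achieved the maximum.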
{ "domain": "cs.stackexchange", "id": 7109, "tags": "dynamic-programming" }
Can we say that the radiator of our home is a black body?
Question: I would like to understand the concept of a black body through this question: can we say that the radiator of our home is a black body? Answer: So as said in the comments, I'm adding something long I wrote some time ago to explain better what black body radiation is. I tried to translate it from Hebrew so I hope the English makes sense. If you have any more questions regarding how this is applicable to your radiator, you are more than welcome to comment. How did the equilibrium problem with radiation give rise to quantum theory? What is a thermodynamic equilibrium? Let's say we finished cooking ourselves a gourmet meal (an ability we acquired during the quarantine days) and left the hot pan on the counter after we finished. What will happen to it? After a while - it will cool down and reach room temperature. Why does this happen? Because the atoms that make up the pan are fast and constantly collide with the atoms of the air near the pan, which are slower, causing the surrounding air to heat up and the pan atoms to cool. After long enough, the "extra" energy that was only in the pan atoms has transferred almost entirely to the air atoms - the room warmed up a bit, and the pan cooled a bit - until they reached the same temperature. Similarly, for an ice block placed on the table - in the collisions between the particles in the ice and the atoms of the air, more energy will pass from the air to the ice - meaning the ice will heat and the air will cool until they are at the same temperature. In thermodynamics, everything that can happen, will happen (for example, energy transfer from the ice/pan to the air and vice versa). In thermal equilibrium, you just demand that nothing changes - which tells you that the net flux of certain quantities will be zero, meaning temperatures, pressures, etc. will be equal (if possible). Thermal equilibrium with radiation Now imagine a closed, black room with no particles in it at all. 
If the room is not at absolute zero (0 kelvin), at any given moment there are particles in the walls that move, accelerate and collide (with each other). The thing is, when particles collide and accelerate, they emit radiation and lose some of their energy. (This comes from classical EM theory.) That is, EM theory tells us that there is a mechanism by which a material loses energy to the EM field. Similarly, if the room were at zero temperature and there were radiation in the room, it would be absorbed, heating the atoms - so here too one can see that energy will stop flowing from the room to the radiation, or from the radiation to the room, when the two systems are in equilibrium. This is exactly the point - there is nothing "mystical" about hot black bodies radiating; this is known from the classical and quantum mechanical descriptions of how matter and light interact. The point in black body radiation is that demanding thermal equilibrium (no net energy flux between the material and the radiation field) can predict the spectrum in thermal equilibrium. That is black body radiation! The radiation must also be in equilibrium with the material so that there is no energy leakage from one to the other! (That is, on average, the amount of energy that the substance emits will be the same as the amount of energy the substance absorbs.) In other words, no room at a temperature different from zero can be dark - the room is always lit by the walls! In equilibrium, this light is emitted from the walls and absorbed in such a way that the total light in the room is always unchanged (this is the definition of equilibrium). Note that if the room is transparent, and does not interact with radiation (it does not absorb), there is also no reason for it to emit. In general, the more a material absorbs radiation at a certain frequency, the more it will emit at that frequency. 
This can be generalized and explained in further detail by Kirchhoff's law relating emissivity to absorption, and this is the basis of the answer I gave you in the comment - your radiator is a black body only in the frequencies it absorbs well. This is also why hot gases, which absorb and emit in specific frequencies, don't give black body curves. Therefore, the maximum radiation that can be emitted from a material in equilibrium is indeed that of a black body, and for this situation one can calculate from equilibrium principles how much energy each radiation frequency should carry. When the result was analyzed, a problem emerged - the power per unit frequency was proportional to the square of the frequency, which means that as the frequency increases there is more and more energy, and it goes on and on! It can't be that equilibrium has infinite energy in radiation, so obviously something was incomplete in the laws of physics of the time ... This mystery became known as the "ultraviolet catastrophe" (because ultraviolet is in the high frequency range). Planck's solution One bright day in the early 20th century, Max Planck thought of a rather strange idea and tried to analyze what would happen under a strange assumption: that the absorption and emission of radiation cannot occur with arbitrary energy, but only in whole multiples of some basic unit that increases with frequency. In other words, Planck "manually" put a condition on the emission of radiation to "block the problem" at high frequencies: a wall particle cannot emit any radiation at too high a frequency if it does not have enough energy for it (as opposed to classical theory, which did not have this limit). Strangely, this assumption predicted exactly the radiation from a black body: the amount of radiation emitted was finite, and each temperature has a frequency at which it has maximum energy! 
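The low- versus high-frequency behaviour just described is easy to see numerically. A small sketch (my own; the temperature and frequencies are chosen arbitrarily) comparing the classical Rayleigh-Jeans energy density with Planck's formula:

```python
# Rayleigh-Jeans grows without bound as nu^2, while Planck agrees with it
# at low frequency and is exponentially suppressed at high frequency.
import numpy as np

h = 6.626e-34    # J s
c = 2.998e8      # m/s
kB = 1.381e-23   # J/K
T = 300.0        # K (roughly a warm radiator)

def rayleigh_jeans(nu):
    return 8 * np.pi * nu**2 * kB * T / c**3

def planck(nu):
    return (8 * np.pi * h * nu**3 / c**3) / np.expm1(h * nu / (kB * T))

nu_low = 1e9     # radio: h*nu << kB*T, the two formulas should agree
print(planck(nu_low) / rayleigh_jeans(nu_low))    # close to 1

nu_high = 3e15   # ultraviolet: the classical formula vastly overshoots
print(rayleigh_jeans(nu_high) / planck(nu_high))  # astronomically large
```

Integrating the Planck curve over all frequencies gives a finite total, which is exactly what resolves the catastrophe.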
These understandings later helped us understand how to measure the temperature of things only by the radiation they emit (like 'seeing' a star's temperature, or measuring someone's temperature without touch). But the solution hinted at something strange to physicists: did the interaction between light and matter really come in discrete energy packets (a discrete dose = a quantum)? Is the smallest amount of energy that can be emitted or absorbed really one that increases with frequency? This interpretation seemed like a curiosity until a patent office clerk used it to solve another mystery a few years later (the photoelectric effect). This clerk was named Albert Einstein, and he won the Nobel Prize 16 years after he proposed his solution to the mystery. And all because something didn't add up in the radiation of hot things. And then quantum theory was born.
{ "domain": "physics.stackexchange", "id": 67490, "tags": "thermal-radiation" }
Electromagnetic Unruh/Hawking effect? (Improved argument)
Question: This is an improved version of the argument in Electromagnetic Unruh effect? In the quantum vacuum particle pairs, with total energy $E_x$, can come into existence provided they annihilate within a time $t$ according to the uncertainty principle $$E_x\ t \sim \hbar.$$ If we let $t=x/c$ then we have $$E_x \sim \frac{\hbar c}{x}$$ where $x$ is the Compton wavelength of the particle pair. Let us assume that there is a force field present that immediately gives the particles an acceleration $a$ as soon as they appear out of the vacuum. Approximately, the extra distance, $\Delta x$, that a particle travels before it is annihilated is $$\Delta x \sim a t^2 \sim \frac{ax^2}{c^2}.$$ Therefore the particle pairs have a new Compton wavelength, $X$, given by $$X \sim x + \Delta x \sim x + \frac{ax^2}{c^2}.$$ Accordingly the energy $E_X$ of the particle pairs, after time $t$, is related to their new Compton wavelength $X$ by \begin{eqnarray} E_X &\sim& \frac{\hbar c}{X}\\ &\sim& \frac{\hbar c}{x(1+ax/c^2)}\\ &\sim& \frac{\hbar c}{x}(1-ax/c^2)\\ &\sim& E_x - \frac{\hbar a}{c}. \end{eqnarray} Thus the particle pair energy $E_X$ needed to satisfy the uncertainty principle after time $t$ is less than the energy $E_x$ that was borrowed from the vacuum in the first place. When the particle pair annihilates the excess energy $\Delta E=\hbar a/c$ produces a photon of electromagnetic radiation with temperature $T$ given by $$T \sim \frac{\hbar a}{c k_B}.$$ Thus we have derived an Unruh radiation-like formula for a vacuum that is being accelerated by a field. If the field is the gravitational field then we have derived the Hawking temperature. By the equivalence principle this is the same as the vacuum temperature observed by an accelerating observer. But this formula should be valid for any force field. Let us assume that the force field is a static electric field $\vec{E}$ and that the particle pair is an electron-positron pair, each with charge $e$ and mass $m_e$. 
The classical equation of motion for each particle is then $$e\ \vec{E}=m_e\ \vec{a}.$$ Substituting the magnitudes of the electric field and acceleration into the Unruh formula gives $$T \sim \frac{\hbar}{c k_B}\frac{e|\vec{E}|}{m_e}.$$ If we take the electric field strength $|\vec{E}|=1$ MV/m then the electromagnetic Unruh/Hawking temperature is $$T\approx 10^{-2}\ \hbox{K}.$$ If this temperature could be measured then one could experimentally confirm the general Unruh/Hawking effect. Is there any merit to this admittedly non-rigorous argument or can the Unruh/Hawking effect only be analyzed using quantum field theory? Answer: You aren't really asking a question but here is my assessment of your argument. The Unruh effect states that if one were to couple a detector to a quantum field, the detector would detect a thermal excitation as it is being accelerated. More generally, however, this excitation has to do with the thermal character of the vacuum and not necessarily the coupling of a detector. So the acceleration argument is not exactly necessary. In fact (due to an argument by Sciama), the necessary and sufficient condition is that the vacuum be stable and stationary from the perspective of a uniformly accelerated frame. Your argument is very hand-wavy. There is a confusion of frames, there is no reference to a thermal density matrix, you have not constructed a boost Hamiltonian, you have not addressed the subtleties of the "quantum" equivalence principle, I don't know what metric you're talking about and so on.
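The question's numeric estimate can be checked directly with standard SI constants (this sketch is mine, not part of the original exchange):

```python
# T ~ hbar * e * |E| / (m_e * c * k_B) for |E| = 1 MV/m
hbar = 1.0546e-34   # J s
e = 1.6022e-19      # C
m_e = 9.109e-31     # kg
c = 2.998e8         # m/s
kB = 1.381e-23      # J/K

E_field = 1e6       # V/m
a = e * E_field / m_e            # classical acceleration of the electron
T = hbar * a / (c * kB)          # Unruh-like temperature

print(f"a = {a:.3e} m/s^2, T = {T:.3e} K")  # a few times 1e-3 K, the ~1e-2 K order quoted
```
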
{ "domain": "physics.stackexchange", "id": 44849, "tags": "quantum-field-theory, heisenberg-uncertainty-principle, hawking-radiation, virtual-particles, unruh-effect" }
$k$-Multiset intersection efficient algorithm
Question: Given a collection of sets $C= \{S_1,S_2,\cdots,S_n\}$ such that each set $S_i \in C$ is sorted and has at least $k$ elements. What is the most efficient algorithm for finding the intersection of these sets: $\bigcap_{S_i \in C}{S_i}$ Answer: If the least elements of all the $S_i$ are equal, pick that value for the final set; otherwise, pick the least element among all the $S_i$ and remove it (remove the single minimum among all sets, not the minimum of every set). Repeat until some set is exhausted. This works in $O(n \cdot |C| \cdot \log |C|)$, where $|C|$ is the total number of elements in $C$ (not its cardinality). This works since all your $S_i$ are sorted, so removal of an element maintains the sorted order.
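The procedure in the answer can be sketched in Python with one cursor per sorted list (the function name is mine):

```python
# Keep one cursor per sorted list. When all cursors point at equal values,
# emit that value and advance every cursor; otherwise advance only the
# cursors sitting at the global minimum (discarding one copy of it).

def sorted_intersection(lists):
    pos = [0] * len(lists)
    out = []
    while all(p < len(s) for p, s in zip(pos, lists)):
        heads = [s[p] for p, s in zip(pos, lists)]
        lo = min(heads)
        if all(h == lo for h in heads):
            out.append(lo)                # present in every list
            pos = [p + 1 for p in pos]
        else:
            pos = [p + (1 if h == lo else 0) for p, h in zip(pos, heads)]
    return out

print(sorted_intersection([[1, 2, 2, 3], [2, 2, 4], [0, 2, 2, 5]]))  # [2, 2]
```

Scanning all $n$ heads each step matches the answer's analysis; keeping the current heads in a min-heap would bring the per-step cost down to $O(\log n)$.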
{ "domain": "cs.stackexchange", "id": 4823, "tags": "algorithms, sets" }
Message with array of data
Question: Hello everyone, I am trying to create a custom message containing an array of int8. Therefore I use a .msg file containing `int8 len` and `int8[] data`, and when I run colcon everything is fine. The resulting node is supposed to publish messages as shown above, but when I run it, it immediately crashes without any warnings or errors. The node itself works fine when I use sensor_msgs::msg::Imu. Does anyone know how to fix this? Regards s0nc Originally posted by s0nc on ROS Answers with karma: 35 on 2022-03-17 Post score: 0 Original comments Comment by ljaniec on 2022-03-20: Just to be sure - did you add this new message to your CMakeLists in rosidl_generate_interfaces(${PROJECT_NAME} ...) and rosidl_default_generators, rosidl_default_runtime in the package.xml in build and exec dependencies? Comment by s0nc on 2022-03-21: I just included them as build_depend. It finally works. Thank you! Comment by ljaniec on 2022-03-21: Awesome, I just copied my comment to the answer, please upvote and accept it so your question will be marked as solved :) Answer: Just to be sure - you have to add this new message to your CMakeLists.txt in rosidl_generate_interfaces(${PROJECT_NAME} ...) and rosidl_default_generators, rosidl_default_runtime in the package.xml in build and exec dependencies. Originally posted by ljaniec with karma: 3064 on 2022-03-21 This answer was ACCEPTED on the original site Post score: 3
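For reference, the fix in the accepted answer corresponds to declarations like the following (the message filename `ByteArray.msg` is a placeholder; adapt it to your package):

```cmake
# CMakeLists.txt (excerpt): declare the custom message so rosidl generates code for it
find_package(rosidl_default_generators REQUIRED)

rosidl_generate_interfaces(${PROJECT_NAME}
  "msg/ByteArray.msg"
)
```

```xml
<!-- package.xml (excerpt): generator needed at build time, runtime support when running -->
<build_depend>rosidl_default_generators</build_depend>
<exec_depend>rosidl_default_runtime</exec_depend>
<member_of_group>rosidl_interface_packages</member_of_group>
```

After rebuilding with colcon and re-sourcing the workspace, the generated type should be usable like any built-in message.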
{ "domain": "robotics.stackexchange", "id": 37511, "tags": "ros, message" }
Finding a '1' cell with a '0' to its right in a binary array
Question: Given an array of size n that holds ones and zeros, I need to find an index of a 1 cell that has a 0 to its right (in the next cell); there could be more than one such pair in a given array, and any one of them is fine. The array is not sorted, but we do know that the first element is 1 and the last element is 0. The search should be in $O(\log n)$ time. I'm thinking that a binary search variation is the answer but I'm not sure how. Answer: After viewing the original question, it seems that you have an additional piece of information: you know that the first element is 1 and that the last element is 0. This is crucial! So indeed you can solve it with binary search: first observe that if the first element is 1 and the last is 0, there must be a subsequence of the form $1,0$ in the array. Denote the array by $A[1,...,n]$. If $A[n/2]=1$, then there is a $1,0$-subsequence in the subarray $A[n/2,...,n]$. If $A[n/2]=0$, then there is a $1,0$-subsequence in the subarray $A[1,...,n/2]$. So you can use binary search, which takes $O(\log n)$ time.
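The accepted approach translates to a short function (a sketch; the name is mine). The invariant is that `A[lo] == 1` and `A[hi] == 0`, so an adjacent $(1, 0)$ pair always lies between them, and halving the interval preserves the invariant:

```python
# 0-indexed binary search for an index i with A[i] == 1 and A[i+1] == 0,
# assuming A[0] == 1 and A[-1] == 0 as the question guarantees.

def find_one_zero(A):
    lo, hi = 0, len(A) - 1          # A[lo] == 1 and A[hi] == 0 by assumption
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if A[mid] == 1:
            lo = mid                # a (1, 0) pair exists in A[mid..hi]
        else:
            hi = mid                # a (1, 0) pair exists in A[lo..mid]
    return lo                       # A[lo] == 1 and A[lo + 1] == 0

print(find_one_zero([1, 1, 0, 1, 0]))  # 1 (A[1] == 1, A[2] == 0)
```
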
{ "domain": "cs.stackexchange", "id": 1072, "tags": "algorithms, arrays, search-algorithms, binary-search" }
What is this raptor on a building in California?
Question: So I took this picture of a bird that was sitting just outside my workplace a few weeks ago: At first I searched the internet for raptors in California and found the Cooper's hawk. It looked so much like this bird so I thought that must be it and left it at that. However I just recently came across this bird and also this recent hawk identification question and they both look very similar to my bird as well. So I realized I don't know what to look for when trying to identify this bird. Does anyone know what this raptor could be? If it helps, the specific location was the city of Tustin in Orange County, California (Southern California). It is about 10 miles from the coast. Answer: It is a Red-shouldered Hawk (Buteo lineatus). Its breeding range spans eastern North America, the coast of California, and northern to northeastern-central Mexico. Identifying features: dark wings with white spots; dark-brownish head and orangish-brown chest.
{ "domain": "biology.stackexchange", "id": 6499, "tags": "species-identification, ornithology" }
Optical Waveguide's "Base Bandwidth"
Question: Consider a dielectric slab waveguide (lossless, isotropic) illuminated transversally from the vacuum (with coherent, monochromatic light). We define the base bandwidth of a waveguide (or optical fiber), $AB$, to be the inverse of the time retardation, $\Delta t$, at 1 km of the waveguide between the energy of a guided mode (transmitted following the zig-zag model) with a $\theta_{c}$ critical angle and the energy transmitted without total internal reflections. Let $n_{f}$ and $n_{s}$ be the refractive indices of the film and the substrate respectively. Prove that if $\Delta n=n_{f}-n_{s}<<1$, then: $$ {\rm{AB}} = {\left( {\Delta t} \right)^{ - 1}} \simeq \frac{{2c{n_s}}}{{{{\left( {{\rm{AN}}} \right)}^2}}} = \frac{c}{{{n_f} - {n_s}}} $$ where $AN$ is the numerical aperture of the guide. This problem has been on my mind for 2 days now and it seems I can't find a method to calculate that time difference... Any ideas? My thinking so far: We have to look at the Ray Optics picture of Dielectric Waveguide Theory (see e.g. Tamir et al.: Integrated Optics (chapter 2)). A guided mode is propagated through the waveguide following a series of total internal reflections at an angle $\theta_{c}$ with the normal to the film-substrate or film-cover surfaces, and therefore its energy is "trapped" in the film. A mode which is not a guided mode will travel through the waveguide suffering reflections and refractions and therefore radiating some energy to the cover and substrate. The rays travel through the film at the same speed $c/n_{f}$ but follow different paths, so it will take different times for them to advance 1 km in the waveguide. We could then try to find the components of these velocities in the direction of propagation ($z$ axis), $v_{i}$, and use the simple relation $t_{i}=d/v_{i}$. The trouble with this is that the problem doesn't specify the angle at which the non-guided mode is incident on the surfaces of the film. Am I missing something here? 
On the other hand, the numerical aperture of the waveguide is: $$ n\sin \left( {{\theta _{\max }}} \right) = {n_f}\sin \left( {90 - {\theta _{\rm{c}}}} \right) = {n_f}\cos \left( {{\theta _{\rm{c}}}} \right) $$ where n=1, and $\theta_{\max}$ is the angle of incidence of the illumination beam with the normal to the $x-y$ plane so that doing some work we find: $$ {\rm{AN}} = \sin \left( {{\theta _{\max }}} \right) = \sqrt {{n_f}^2 - {n_s}^2} $$ UPDATE: Some calculations: The effective refractive indexes of the 3 rays (ray 1: guided mode, ray 2: radiation mode, ray 3: no reflection at all) are: $$ {N_1} = \frac{{{\beta _1}}}{{{k_1}}} = \frac{c}{{{V_1}}} = {n_f}\sin \left( {{\theta _{\rm{c}}}} \right) $$ $$ {N_2} = \frac{{{\beta _2}}}{{{k_2}}} = \frac{c}{{{V_2}}} = {n_f}\sin \left( \theta \right) $$ $$ {N_3} = {n_f} = \frac{c}{{{v_3}}} $$ So that the retardations would be: ($d=1km$) Between rays 1 and 2: $$ \Delta {t_{1 - 2}} = {t_2} - {t_1} = d\left( {\frac{1}{{{V_2}}} - \frac{1}{{{V_1}}}} \right) = \left[ {...} \right] = \left( {d\frac{{{n_f}}}{c}} \right)\left( {\sin {\theta _c} - \sin \theta } \right) $$ Between rays 1 and 3: $$ \Delta {t_{1 - 3}} = {t_3} - {t_1} = d\left( {\frac{1}{{{v_3}}} - \frac{1}{{{V_1}}}} \right) = \left[ {...} \right] = \left( {d\frac{{{n_f}}}{c}} \right)\left( {\sin {\theta _c} - 1} \right) $$ And the respective base bandwidths: $$ {\rm{A}}{{\rm{B}}_{1 - 2}} = \frac{{\frac{c}{{d{n_f}}}}}{{{\rm{AN}} - \sin \theta }} $$ $$ {\rm{A}}{{\rm{B}}_{1 - 3}} = \frac{{\frac{c}{{d{n_f}}}}}{{{\rm{AN}} - 1}} $$ Are any of these results equal to the equation given at the beginning for $AB$? How would one use the approximation $n_{f}-n_{s}<<1$? 
Answer: What you should be comparing is the time it takes for direct propagation (which I would guess is the "energy transmitted without total internal reflection") versus the time it takes for guided propagation at the critical angle, which is the longest delay/broadening you will get out of the fibre at the other end. Modes at angles higher than $\theta_c$ will leak energy into the substrate and will not make it to the other end, so you don't need to consider them. Your error is in the calculation of the times each beam travels. For each length $l$ that the direct beam travels, the critical-angle beam travels a length $l'$ given by $$ \frac l{l'}=\sin(\theta_c). $$ Thus, if the direct beam travels a total length $d$, the critical-angle beam will travel a length $d'=\frac{d}{\sin(\theta_c)}>d$. Since they are both travelling in the same medium, the real index of refraction is the same, and hence their travel times are $$ t_\text{direct}=\frac{d}{v}=\frac {dn_f}{ c}\text{ and } t_\text{c.a.}=\frac{d'}{v}=\frac{dn_f}{c}\frac1{\sin(\theta_c)}. \tag0 $$ The critical angle will be given by the total internal reflection limit at the boundary with either the substrate or the cover, whichever has a larger index of refraction. Assuming w.l.o.g. that $n_f>n_s>n_c$, the critical angle is given by $\sin(\theta_c)=n_s/n_f$. This means that the time delay is $$ \Delta t=t_\text{c.a.}-t_\text{direct}=\frac{dn_f}{c}\left(\frac{n_f}{n_s}-1\right)=d\frac{n_f}{n_s}\frac{n_f-n_s}{c}. \tag1 $$ The inverse of this is the bandwidth of the fibre, given by $$ \frac{d}{\Delta t}=\frac{n_s}{n_f}\frac{c}{n_f-n_s}. \tag2 $$ This is pretty close to the result you were asked for, $\frac{c}{n_f-n_s}$. For one, it has a factor of $d$, which is eliminated in your result, effectively, by calculating the 'bandwidth per unit length' of the fibre, $1\text{ km}/\Delta t$, in the understanding that the actual bandwidth will vary inversely with the actual length. 
This makes a lot of sense: longer fibres make for longer distances travelled by the different beams and therefore longer delays. This is to be expected and should be factored out. Other than that, some of the prefactors don't quite match up. For one, I must note that one of the equalities that you write as exact isn't really so: $$ \frac{{2c{n_s}}}{{{{\left( {{\rm{AN}}} \right)}^2}}}=\frac{2cn_s}{n_f^2-n_s^2}=\frac{2n_s}{n_f+n_s}\frac{c}{n_f-n_s}, $$ and this only equals $\cfrac{c}{n_f-n_s}$ in the limit where $n_f$ and $n_s$ are really quite close together. Similarly, in that limit, $\cfrac{n_s}{n_f}\approx 1$, so in that sense all three answers match. A bit further along those lines, the factor of $\cfrac{2n_s}{n_f+n_s}=\cfrac{2 n_s/n_f}{1+\frac{n_s}{n_f}}$ from the $1/\rm{AN}^2$ answer sits kind of "in between" the exact answer, $n_s/n_f$, so it is not so bad an approximation. I would therefore sum up the situation as saying that $$ \frac{d}{\Delta t} =\frac{n_s}{n_f}\frac{c}{n_f-n_s} \approx \frac{{2c{n_s}}}{{{{\left( {{\rm{AN}}} \right)}^2}}} =\frac{2n_s}{n_f+n_s}\frac{c}{n_f-n_s} \approx \frac{c}{n_f-n_s}, $$ where each approximation accumulates a slight loss of accuracy, from left to right, though of course everything tends to equality as $n_s/n_f\to1^-$. Thus, if it is convenient for some reason to include the numerical aperture in the formula for the bandwidth, then it makes some sense to put it in the picture.
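The chain of approximations above is easy to check numerically. A minimal sketch (the index values are made up for illustration, chosen so that $n_f-n_s\ll 1$, as in a weakly guiding fibre):

```python
# Numerical check of Eqs. (0)-(2) and the numerical-aperture approximation.
c = 299_792_458.0        # speed of light in vacuum, m/s
d = 1_000.0              # fibre length, m (1 km)
n_f, n_s = 1.465, 1.460  # film and substrate indices (hypothetical values)

t_direct = d * n_f / c                    # axial ray, Eq. (0)
sin_theta_c = n_s / n_f                   # critical angle at the film-substrate boundary
t_critical = d * n_f / (c * sin_theta_c)  # zig-zag ray at the critical angle, Eq. (0)
delta_t = t_critical - t_direct           # Eq. (1)

exact_bw = d / delta_t                    # Eq. (2): (n_s/n_f) * c/(n_f - n_s)
an_sq = n_f**2 - n_s**2                   # (numerical aperture)^2
na_bw = 2 * c * n_s / an_sq               # 2 c n_s / AN^2
simple_bw = c / (n_f - n_s)               # the target approximation

print(delta_t, exact_bw, na_bw, simple_bw)
```

For these values the delay is about 17 ns per km, and the three bandwidth expressions agree to well under 1%, as the answer argues.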
{ "domain": "physics.stackexchange", "id": 10266, "tags": "optics, geometric-optics, waveguide, optical-materials" }
Identification of a purple flower
Question: What is this purple flower? Picture taken from a garden in India. Answer: It looks very much like Tibouchina urvilleana* or some other Tibouchina species. You can have a look here. [Source: Wikimedia Commons] Tibouchina urvilleana is a species of flowering plant in the family Melastomataceae, native to Brazil. Growing to 3–6 m (10–20 ft) tall by 2–3 m (7–10 ft) wide, it is a sprawling evergreen shrub with longitudinally veined, dark green hairy leaves. Clusters of brilliant purple flowers up to 10 cm (4 in) in diameter, with black stamens, are borne throughout summer and autumn.[Source] *Credits to @RHA for suggesting the right species.
{ "domain": "biology.stackexchange", "id": 7039, "tags": "botany, species-identification, flowers" }
What makes quantum computers so good at computing prime factors?
Question: One of the common claims about quantum computers is their ability to "break" conventional cryptography. This is because conventional cryptography is based on the hardness of finding prime factors, something which is computationally expensive for conventional computers to calculate, but which is a supposedly trivial problem for a quantum computer. What property of quantum computers makes them so capable of this task where conventional computers fail, and how are qubits applied to the problem of calculating prime factors? Answer: The short answer $\newcommand{\modN}[1]{#1\,\operatorname{mod}\,N}\newcommand{\on}[1]{\operatorname{#1}}$Quantum Computers are able to run subroutines of an algorithm for factoring exponentially faster than any known classical counterpart. This doesn't mean classical computers CAN'T do it fast too; we just don't know, as of today, a way for classical algorithms to run as efficiently as quantum algorithms. The long answer Quantum Computers are good at Discrete Fourier Transforms. There's a lot at play here that isn't captured by just "it's parallel" or "it's quick", so let's get into the blood of the beast. The factoring problem is the following: Given a number $N = pq$ where $p,q$ are primes, how do you recover $p$ and $q$? One approach is to note the following: If I look at a number $\modN{x}$, then either $x$ shares a common factor with $N$, or it doesn't. If $x$ shares a common factor, and isn't a multiple of $N$ itself, then we can easily ask for what the common factors of $x$ and $N$ are (through the Euclidean algorithm for greatest common factors). Now a not so obvious fact: the set of all $x$ that don't share a common factor with $N$ forms a multiplicative group $\on{mod} N$. What does that mean? You can look at the definition of a group in Wikipedia here. 
Let the group operation be multiplication to fill in the details, but all we really care about here is the following consequence of that theory which is: the sequence $$ \modN{x^0}, \quad\modN{x^1}, \quad\modN{x^2}, ... $$ is periodic, when $x,N$ don't share common factors (try $x = 2$, $N = 5$) to see it first hand as: $$\newcommand{\mod}[1]{#1\,\operatorname{mod}\,5} \mod1 = 1,\quad \mod4 = 4,\quad \mod8 = 3,\quad \mod{16} = 1. $$ Now how many natural numbers $x$ less than $N$ don't share any common factors with $N$? That is answered by Euler's totient function, it's $(p-1)(q-1)$. Lastly, tapping on the subject of group theory, the length of the repeating chains $$ \modN{x^0}, \quad\modN{x^1}, \quad\modN{x^2}, ... $$ divides that number $(p-1)(q-1)$. So if you know the period of sequences of powers of $x \mod N$ then you can start to put together a guess for what $(p-1)(q-1)$ is. Moreover, If you know what $(p-1)(q-1)$ is, and what $pq$ is (that's N don't forget!), then you have 2 equations with 2 unknowns, which can be solved through elementary algebra to separate $p,q$. Where do quantum computers come in? The period finding. There's an operation called a Fourier transform, which takes a function $g$ written as a sum of periodic functions $a_1 e_1 + a_2 e_2 ... $ where $a_i$ are numbers, $e_i$ are periodic functions with period $p_i$ and maps it to a new function $\hat{f}$ such that $ \hat{f}(p_i) = a_i$. Computing the Fourier transform is usually introduced as an integral, but when you want to just apply it to an array of data (the Ith element of the array is $f(I)$) you can use this tool called a Discrete Fourier Transform which amounts to multiplying your "array" as if it were a vector, by a very big unitary matrix. Emphasis on the word unitary: it's a really arbitrary property described here. But the key takeaway is the following: In the world of physics, all operators obey the same general mathematical principle: unitarity. 
So that means it's not unreasonable to replicate that DFT matrix operation as a quantum operator. Now here is where it gets deep: an $n$-qubit array can represent $2^n$ possible array elements (consult anywhere online for an explanation of that or drop a comment). And similarly an $n$-qubit quantum operator can act on that entire $2^n$-dimensional quantum space, and produce an answer that we can interpret. See this Wikipedia article for more detail. If we can do this Fourier transform on an exponentially large data set, using only $n$ qubits, then we can find the period very quickly. If we can find the period very quickly, we can rapidly assemble an estimate for $(p-1)(q-1)$. If we can do that fast, then given our knowledge of $N=pq$ we can take a stab at recovering $p,q$. That's what's going on here, at a very high level.
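The classical scaffolding around the quantum subroutine can be sketched in a few lines. Here the period finding is done by brute force, which is exactly the step Shor's algorithm replaces with a quantum Fourier transform; the helper names are mine, not from any library:

```python
from math import gcd

def find_period(x, N):
    """Brute-force the order r of x mod N: the least r > 0 with x^r = 1 (mod N).
    This is the step a quantum computer speeds up via the Fourier transform."""
    r, acc = 1, x % N
    while acc != 1:
        acc = (acc * x) % N
        r += 1
    return r

def shor_classical_postprocess(N, x):
    """Given the period r of x mod N, with r even and x^(r/2) != -1 (mod N),
    recover a nontrivial factor of N via gcd(x^(r/2) +/- 1, N)."""
    if gcd(x, N) != 1:
        return gcd(x, N)   # lucky guess: x already shares a factor with N
    r = find_period(x, N)
    if r % 2 == 1:
        return None        # bad base x, retry with another
    y = pow(x, r // 2, N)
    if y == N - 1:
        return None        # another bad case, retry
    f = gcd(y - 1, N)
    return f if 1 < f < N else gcd(y + 1, N)

# Example: N = 15 = 3 * 5 with base x = 7:
# 7^1 = 7, 7^2 = 4, 7^3 = 13, 7^4 = 1 (mod 15), so the period is r = 4.
print(find_period(7, 15), shor_classical_postprocess(15, 7))
```

Note also the text's own example: the powers of $2 \bmod 5$ repeat with period 4, and `find_period(2, 5)` returns exactly that.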
{ "domain": "quantumcomputing.stackexchange", "id": 38, "tags": "speedup, classical-computing" }
What was the vertical beam of light in Chernobyl?
Question: In the HBO miniseries Chernobyl after the initial explosion we see a clear bright light shooting vertically up from the plant. I presume this was a thing that actually happened and not some creative license they took. What was the cause of this light and what are the mechanisms by which it works? Answer: There are 2 sources: Ionized-air glow, caused by gamma radiation from the core (more bluish color). While gamma radiation is emitted in all directions, it is shielded on the sides, and escapes to the air directly only in the vertical direction. Just light scattering (like in regular projectors), where the core is a bright light source (more reddish color, due to the high temperature and fire). Light scattering is enhanced by dust in the air. Again, light can only escape upwards. It seems to me that the effect was somewhat exaggerated in the movie. I'm not sure there is anything which could make the light so well collimated; I would expect a much less "focused" beam of light.
{ "domain": "physics.stackexchange", "id": 59031, "tags": "nuclear-physics" }
Grammar for all words other than $wq,qw$
Question: I want to generate a grammar that can't generate the words $qw$ and $wq$ but can generate the word $qwwq$. In other words, $L(G)=\{m ∈ \{q,w\}^* \mid m \neq wq,qw \}$. My grammar: \begin{align} &S \to qSw \mid wSq \mid qXq \mid wXw\\ &S \to qYw \mid wYq \mid q \mid w\\ &X \to qX \mid wX \mid qXw \mid wXq \mid ε \\ &Y \to qw \mid wq \\ \end{align} Answer: How about \begin{align*} S&\to q\mid w\mid qqB \mid wwB \mid qwA\mid wqA\mid \varepsilon \\ A&\to qB\mid wB \\ B&\to qB\mid wB\mid \varepsilon \end{align*} We can just explicitly include the allowed strings of length 0, 1 or 2, and for the two disallowed length-two strings $qw$ and $wq$ add a non-terminal $A$ after them, which forces us to add at least one more terminal; after that, $B$ lets us add any letters we want in any order.
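A brute-force check of the proposed grammar (a sketch of mine, not part of the answer): enumerate every terminal string the grammar derives up to a length bound and compare against $\{q,w\}^*\setminus\{qw,wq\}$:

```python
from itertools import product

# Productions of the proposed grammar; "" stands for epsilon.
rules = {
    "S": ["q", "w", "qqB", "wwB", "qwA", "wqA", ""],
    "A": ["qB", "wB"],
    "B": ["qB", "wB", ""],
}

def generate(max_len):
    """All terminal strings of length <= max_len derivable from S."""
    out, seen, frontier = set(), set(), ["S"]
    while frontier:
        form = frontier.pop()
        i = next((k for k, ch in enumerate(form) if ch in rules), None)
        if i is None:          # no non-terminals left: a terminal string
            out.add(form)
            continue
        for rhs in rules[form[i]]:
            new = form[:i] + rhs + form[i + 1:]
            # terminals are never erased, so prune forms already too long
            if sum(ch not in rules for ch in new) <= max_len and new not in seen:
                seen.add(new)
                frontier.append(new)
    return out

MAX = 5
lang = generate(MAX)
target = {"".join(p) for n in range(MAX + 1)
          for p in product("qw", repeat=n)} - {"qw", "wq"}
print(lang == target)  # the grammar matches the target language up to length MAX
```

In particular `qw` and `wq` are never derived, while `qwwq` (and every other string up to the bound) is.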
{ "domain": "cs.stackexchange", "id": 18048, "tags": "context-free, formal-grammars" }
What's the difference between the north and the south pole on a magnet?
Question: I was testing out objects with magnets and I didn't understand what made the north pole different from the south pole. I thought the poles were made of different materials, but I realized it wasn't. Can someone tell me what factors differentiate the north from the south? Answer: The North pole is where field lines come out and the South Pole is where they go into the magnet. That is all there is to it.
{ "domain": "physics.stackexchange", "id": 59318, "tags": "magnetic-fields" }
Calculation of the standard symplectic space matrix
Question: I am learning there is an important connection between Hamiltonian formalisms and Symplectic Geometry. It seems that Newtonian mechanics is described on what is called the standard symplectic space, whose symplectic form is given by the matrix: $$\Omega=\bigg(\matrix{0&I_n\\-I_n&0}\bigg)$$ This is a $2n\times2n$ matrix, where the $I_n$'s are $n\times n$ identity matrices. So the dimension of this space is proportional to the number of degrees of freedom of the system (in phase space there are $2n$ d.o.f.). My question is: how do they construct this matrix? Can there exist new mechanical systems (relativistic, quantum) such that this matrix is modified? In such a case, how are these constructed? I would appreciate good sources to study this kind of thing, because I only found Wikipedia pages, and also the book Mathematical Methods of Classical Mechanics by V. I. Arnold, but I find it too formal, and hard to follow. Answer: The geometric setting for a Hamiltonian theory is often taken to be a $2n$-dimensional real symplectic manifold $(M,\omega)$, where $\omega$ is a closed non-degenerate real 2-form. In a coordinate neighborhood $U$, the 2-form $\omega$ is given as $$\left.\omega\right|_{U}~=~\frac{1}{2}\sum_{I,J=1}^{2n}\omega_{IJ}(z)~ \mathrm{d}z^I \wedge\mathrm{d}z^J,\tag{1}$$ where $\omega_{IJ}=-\omega_{JI}$ is a non-degenerate antisymmetric real matrix. The inverse matrix $$ (\omega^{-1})^{IJ}~=~\{z^I,z^J\}_{PB}\tag{2} $$ gives rise to a Poisson bracket. 
The Darboux theorem states that there locally (in a sufficiently small open neighborhood $U$) exist Darboux/canonical coordinates $$z^I~=~\left(q^1,\ldots, q^n,p_1, \ldots, p_n\right),\tag{3}$$ where $\omega_{IJ}$ is on the form $$ \omega ~=~\begin{bmatrix} {\bf 0}_{n \times n} & -{\bf 1}_{n \times n} \cr {\bf 1}_{n \times n} & {\bf 0}_{n \times n} \end{bmatrix}, \tag{4}$$ or equivalently, $$\left.\omega\right|_{U}~=~\sum_{i=1}^n\mathrm{d}p_i \wedge\mathrm{d}q^i.\tag{5}$$ It may be helpful to note that one cannot diagonalize a non-degenerate antisymmetric real matrix in a real vector space (because the eigenvalues are imaginary), so the canonical form (4) is in some sense the best one could hope for, up to sign conventions and coordinate permutations.
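The structural facts about the canonical matrix are easy to verify directly. A small sketch (plain Python, building the question's sign convention $\Omega=\big(\begin{smallmatrix}0&I_n\\-I_n&0\end{smallmatrix}\big)$) checking antisymmetry and $\Omega^2=-\mathbb{1}_{2n}$, which in particular gives $\Omega^{-1}=-\Omega$ for the Poisson bracket of Eq. (2):

```python
def standard_symplectic(n):
    """Build Omega = [[0, I_n], [-I_n, 0]] as a plain list of lists."""
    size = 2 * n
    O = [[0] * size for _ in range(size)]
    for i in range(n):
        O[i][n + i] = 1    # upper-right block:  I_n
        O[n + i][i] = -1   # lower-left block:  -I_n
    return O

def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

n = 3
O = standard_symplectic(n)
antisymmetric = all(O[i][j] == -O[j][i]
                    for i in range(2 * n) for j in range(2 * n))
minus_identity = matmul(O, O) == [[-1 if i == j else 0 for j in range(2 * n)]
                                  for i in range(2 * n)]
print(antisymmetric, minus_identity)
```

Since $\Omega^2=-\mathbb{1}$, the eigenvalues satisfy $\lambda^2=-1$, i.e. $\lambda=\pm i$, which is the remark above about why $\Omega$ cannot be diagonalized over the reals.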
{ "domain": "physics.stackexchange", "id": 34926, "tags": "classical-mechanics, differential-geometry, hamiltonian-formalism, phase-space, poisson-brackets" }
CNOT gate in Bernstein-Vazirani Algorithm explanation
Question: I am studying the Bernstein-Vazirani Algorithm from the Qiskit Textbook and I don't understand why specifically the CNOT gate is applied when s[q]=1 (see the code at the end of the linked page). I know that it somehow checks whether it should flip the probability amplitude of that particular state or not, but I can't precisely explain how. Answer: As you have identified yourself the CNOT is used to flip the sign of the coefficient of that particular bitstring. The reason this works is because $|+\rangle$ and $|-\rangle$ are the eigenvectors of the $X$ gate with eigenvalues 1 and -1 respectively: $$ |\psi\rangle \otimes X|-\rangle = 1/\sqrt{2}(|\psi\rangle \otimes X(|0\rangle-|1\rangle)) $$ $$ = 1/\sqrt{2}(|\psi\rangle \otimes (-|0\rangle+|1\rangle)) $$ $$ =-|\psi\rangle \otimes |-\rangle $$ So the CNOT will flip the sign of the state if the control qubit is 1.
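The phase-kickback identity in the answer can be checked with a four-amplitude state vector; no quantum library needed, just lists. The sketch below shows that a CNOT whose target is in $|-\rangle$ flips the sign of the control's $|1\rangle$ amplitude, which is exactly why the Bernstein-Vazirani circuit applies a CNOT for every bit where $s[q]=1$:

```python
from math import sqrt

def kron2(a, b):
    """Tensor product of two single-qubit state vectors (length-2 lists)."""
    return [x * y for x in a for y in b]

def cnot(state):
    """CNOT with qubit 0 as control and qubit 1 as target:
    swaps the |10> and |11> amplitudes (indices 2 and 3)."""
    s = list(state)
    s[2], s[3] = s[3], s[2]
    return s

minus = [1 / sqrt(2), -1 / sqrt(2)]   # |->, the X eigenstate with eigenvalue -1
alpha, beta = 0.6, 0.8                # arbitrary control state alpha|0> + beta|1>

out = cnot(kron2([alpha, beta], minus))
kicked = kron2([alpha, -beta], minus)  # (alpha|0> - beta|1>) tensor |->
print(max(abs(x - y) for x, y in zip(out, kicked)) < 1e-12)
```

The target stays in $|-\rangle$; only the control's relative phase changes, matching the derivation above.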
{ "domain": "quantumcomputing.stackexchange", "id": 4463, "tags": "quantum-gate, quantum-algorithms" }
HTML5 Elements - First markup
Question: This is my first website. I would like to make sure that: I am using HTML5 sectioning elements correctly. I didn't forgot anything important for HTML5 compatibility like html5shiv, for example. Any other advice? <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>DEdesigns</title> <script scr="html5shiv.js"></script> <!-- allows html 5 styling --> <link rel="stylesheet" href="style.css"> </head> <body> <div id="container"> <header> <h1>DEdesigns</h1> <nav> <ul> <li><a href="#">Home</a></li> <li><a href="#">About</a></li> <li><a href="#">Services</a></li> <li><a href="#">Contact</a></li> </ul> </nav> </header> <!-- end header --> <article id="about-me"> <header> <h2>About Me</h2> </header> <section> <p>Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged</p> </section> <aside> <figure> <img src="#" alt="#" width="#" height="#"> </figure> </aside> </article> <!-- end #about-me --> <div id="gallery"> <header> <h2>My Work</h2> </header> <div id="gallery-conatiner"> <figure> <img src="#" alt="#" width="#" height="#"> </figure> <section> <aside> <p>rem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500</p> </aside> </section> <!-- ends first row --> <section> <aside> <p>rem Ipsum is simply dummy text of the printing and typesetting industry. 
Lorem Ipsum has been the industry's standard dummy text ever since the 1500</p> </aside> </section> <figure> <img src="#" alt="#" width="#" height="#"> </figure> <!-- ends second row --> <figure> <img src="#" alt="#" width="#" height="#"> </figure> <aside> <p>rem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500</p> </aside> </section> <!-- ends third row --> </div> </div> <!-- end #gallery --> <article id="services"> <header> <h2>Services</h2> </header> <section> <ol> <li>one</li> <li>two</li> <li>three</li> </ol> </section> </article> <!-- end #services --> <article id="contact-me"> <header> <h2>Contact Me</h2> </header> <p>some contact me stuff goes here</p> </article> <!-- end #contact-me --> <footer> <p>This is my fotter</p> </footer> </div> <!-- end #container --> </body> </html> Answer: About your use of sections The body sectioning root is fine: it has a header containing the site-wide heading (h1) and the site-wite navigation (nav) (good!), a footer, and several sections representing the document’s main content. If you want, you could use the main element as a container for the section(s) that represent the main content. Your first section, the article (#about-me), contains a section and an aside. Without knowing what content these have, it can’t be said if it’s correct or not. To make sure: You should only use such a sub-section if you are dividing your content into, well, sections, typically (but not necessarily) introduced by headings. If the section contains the actual "About me" content, then it’s wrong -- you should simply omit it. Only use the aside for the figure if the figure content is not directly related to the parent article. If, for example, you’d show your portrait in this figure, you should not use aside -- simply omit it. The next section is created implicitly by the h2 ("My Work"). 
You should make it explicit, otherwise the following sections, which should be sub-sections, are on the same hierarchy level as "My Work". So simply use section (or article, if the content matches its definition) instead of div. The (now) third section, article (#services) contains a sub-section where it’s not clear what its purpose is. If it, same as before, contains the actual content, omit this section. The last section, article (#contact-me") seems to be fine. So removing anything not related to sections, the document would look like: <body> <h1>DEdesigns</h1> <nav></nav> <article id="about-me"> <h2>About Me</h2> </article> <article id="gallery"> <h2>My Work</h2> <!-- snip, see below --> </article> <article id="services"> <h2>Services</h2> </article> <article id="contact-me"> <h2>Contact Me</h2> </article> </body> (Personally, I would use section instead of article for the four top-level sections "About Me", "My Work", "Services", and "Contact Me". I wouldn’t say article is necessarily wrong here, as it would also depend on the actual content, but these are not the best candidates for article.) Looks fine, except for the content of the gallery: (The last section is missing an opening <section> tag.) Unless you have content not included in the code, it doesn’t make sense to have a section whose only child is an aside. Either use aside (if the content matches its definition), or section. But, depending on your content, you might not need any sub-sections at all here. If it’s some kind of portfolio, you could use article for each entry. Always check your document outline, for example with the online tool HTML5 Outliner.
{ "domain": "codereview.stackexchange", "id": 10939, "tags": "beginner, html, html5" }
How do biologists discover information from fossils?
Question: I have a query about the study of fossils (palaeontology). How do biologists discover "DNA" information from old fossils, such as those of dinosaurs? (answer this question in a paragraph or two) Answer: How do biologists discover "DNA" information from old fossils, such as those of dinosaurs? They don't. In general fossils contain very little organic material. It's all been replaced by stone (silicates). On top of that DNA degrades over time. There have been a handful of cases where researchers have claimed to recover small fragments of DNA from dinosaur fossils, but these are disputed, and are generally thought to be the result of contamination with human or other modern DNA. DNA has been recovered from some more recent fossils like Neanderthal. For such a general question a better starting place would be to read the Wikipedia article on Ancient DNA.
{ "domain": "biology.stackexchange", "id": 9597, "tags": "genetics, molecular-biology, dna, palaeontology" }
rosservice call in bash causing issues
Question: Hi, I am trying to use bash to execute a bunch of rosservice calls, one after the other. Some work, but the ones that ask for a position don't: rosservice call /wam/cart_move "position: 0 0 0.5" They work if I put them in the terminal no problem, but I cannot call them from bash.. Any suggestions as to why this may be the case? Update: 21 Jan - @Wolf saved the day - had to include a sleep command to allow the service to run.. Thank you ros answers! Originally posted by kleinsplash on ROS Answers with karma: 13 on 2014-01-17 Post score: 1 Original comments Comment by Wolf on 2014-01-17: How does your script file look like? What is the error message? I can call rosservice from a bash shell script without problems. Did you select bash (# !/bin/bash ) at top of your script? Comment by kleinsplash on 2014-01-17: It's a typical .sh script: #! /bin/bash rosservice call /bhand/open_spread rosservice call /wam/cart_move "position: 0 0 0.5" rosservice call /wam/cart_move "position: 0.87 -0.15 -0.45" Comment by dornhege on 2014-01-20: What type of service is that? Comment by kleinsplash on 2014-01-20: wam/cart_move - type wam_srvs/CartPosMove; float32[3] position ---; (http://code.google.com/p/gwam-ros-pkg/wiki/GWAMPackagewam_node); (http://code.google.com/p/gwam-ros-pkg/source/browse/trunk/wam/wam_common/wam_srvs/srv/CartPosMove.srv?spec=svn203&r=86) Comment by BennyRe on 2014-01-20: What kind of error do you get? Comment by kleinsplash on 2014-01-20: I don't get an error - this runs without a problem - but it does not execute all the commands (specifically the service call to /wam/cart_move) Answer: Did you try to add a sleep for a short time between your service calls (or call with --wait ...)? Maybe your node is just busy if you call the services immediately one after another... Originally posted by Wolf with karma: 7555 on 2014-01-20 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by kleinsplash on 2014-01-20: That's it :) thank you.. 
I tried converting your comment to an answer - but wasn't able to.. perhaps an admin did the rest.. Comment by dornhege on 2014-01-20: If this answer solves the problem, please click the checkmark item next to the question to mark it answered.
{ "domain": "robotics.stackexchange", "id": 16676, "tags": "ros, rosservice, bash" }
A small timer in C
Question: I needed to make a timer to profile small programs a while back. My use wasn't for a major project or mission critical code or anything even remotely close to that, just for personal learning purposes. As I reviewed my old timer I found I was using clock which measures cpu time. Now being aware of the difference between wall time and cpu time, I made this timer using clock_gettime and was hoping for help finding any mistakes or possible improvements that could be made. Thanks. #include <stdio.h> #include <time.h> #include <unistd.h> #define NANO 1000000000 int main(int argc, char *argv[]) { struct timespec start; struct timespec stop; clock_gettime(CLOCK_REALTIME, &start); double start_time = ((float) start.tv_sec) + ((float) start.tv_nsec) / NANO; printf("%.4f\n", start_time); sleep(3); for (int i=1; i < 10000; i++) { for (int j=i; j < 10000; j++) { int l = j % i; } } clock_gettime(CLOCK_REALTIME, &stop); double stop_time = ((float) stop.tv_sec) + ((float) stop.tv_nsec) / NANO; printf("%.4f\n", stop_time - start_time); return 0; } Answer: A few things: Why are you mixing float and double? Use one of them and do so consistently. Your for loop doesn't do anything, it has no side effects. So any half-decent compiler will just remove the whole loop when the optimizer is enabled. To prevent this from happening, all variables inside the loop must be declared as volatile. Note that a call to sleep will cause your process to yield its time slice and let the OS context switch and execute code from other processes, before returning to your process. This will cause timing inaccuracies. It is likely that this is the reason why your code seems to work: the loop gets removed and instead you measure some random time when other processes are executing.
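As an aside, the wall-time vs CPU-time distinction the question started from is easy to see with Python's standard library, which exposes both clocks (`time.perf_counter` for wall time, `time.process_time` for CPU time). This is an illustrative sketch, not a fix for the C code; sleeping advances the wall clock while burning almost no CPU, and a busy loop advances both:

```python
import time

wall0, cpu0 = time.perf_counter(), time.process_time()
time.sleep(0.2)                  # yields the CPU, like sleep(3) in the C code
wall1, cpu1 = time.perf_counter(), time.process_time()

total = 0
for i in range(1, 2000):
    for j in range(i, 2000):
        total += j % i           # result kept live via `total`, so the work is real
wall2, cpu2 = time.perf_counter(), time.process_time()

sleep_wall, sleep_cpu = wall1 - wall0, cpu1 - cpu0
busy_wall, busy_cpu = wall2 - wall1, cpu2 - cpu1
print(sleep_wall, sleep_cpu, busy_wall, busy_cpu)
```

On a typical run the sleep shows roughly 0.2 s of wall time but essentially zero CPU time, while the busy loop shows comparable values on both clocks.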
{ "domain": "codereview.stackexchange", "id": 19509, "tags": "c, timer" }
Does entropy increase with heat flow?
Question: In an exam, I had a scenario where 2 bodies with different temperatures were put together and over time their temperatures mixed and eventually became uniform. My intuition tells me that as a result, the entropy of this system increased, however the exam asked me to explain why entropy remained the same. Could someone tell me whether the exam is correct, and if so, why? Edit: The problem was posted on our website (with no official answer unfortunately), so here it is in full: Consider an isolated system composed of two bodies at slightly different temperatures T1 and T2 (T1 = T2 + dT) that have been put in contact. a) What does the second law of thermodynamics say about the direction of heat flow between them? (1pt) b) Explain how the entropy of both bodies change. Show that the total entropy of that system is constant. (6pts) (there's a picture showing the 2 bodies and a barrier around them labeled "perfect insulation") Answer: As I describe here, for two bodies with a finite temperature difference, the total entropy is constant only if the connection is made through a Carnot heat engine. However, the exam explicitly states that the temperature difference is infinitesimal. This isn't uncommon in thermodynamics thought experiments; for example, if we consider the cooling of a hot object surrounded by a large thermal reservoir (e.g., a cup of hot coffee in your kitchen), then it's routine to assume that the environment is isothermal even though we know that the thermal energy lost by the coffee must heat up the reservoir to some degree. It's just that the amount can be assumed negligible. In this way, we can obtain that that environment gains $Q/T$ entropy, for example, where $Q$ is the thermal energy lost by the relatively small object and $T$ is the approximately constant temperature of the large reservoir. 
If this is the strategy that your exam is aiming for, then I expect the desired answer is that the slightly hotter body spontaneously heats the slightly cooler body because that arrangement increases the total entropy; however, because the temperature difference is infinitesimal, that entropy increase is negligible.
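The $Q/T$ accounting is easy to make concrete. A small numeric sketch (illustrative values) showing that the total entropy change for heat $dQ$ flowing from a body at $T+dT$ to one at $T$ is positive, but shrinks linearly as $dT\to 0$, which is why the exam can treat it as negligible:

```python
def total_entropy_change(T, dT, dQ):
    """Total dS when heat dQ flows from a body at T + dT to a body at T:
    dS = dQ/T - dQ/(T + dT) = dQ * dT / (T * (T + dT)) > 0."""
    return dQ / T - dQ / (T + dT)

T, dQ = 300.0, 1.0          # kelvin, joules (made-up values for illustration)
for dT in (10.0, 1.0, 0.1, 0.01):
    dS = total_entropy_change(T, dT, dQ)
    print(dT, dS, dS / dT)  # dS > 0, and dS/dT -> dQ/T^2, so dS -> 0 with dT
```

The ratio `dS / dT` settles near $dQ/T^2$, confirming the leading-order behaviour $dS \approx dQ\,dT/T^2$.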
{ "domain": "physics.stackexchange", "id": 55528, "tags": "homework-and-exercises, thermodynamics, entropy, equilibrium" }
ROS integration with motes
Question: Hi, Suppose I'm running ROS on a Arduino Galileo controlled robot (since Galileo can run Linux I suppose it can run ROS)... And the Arduino platform is also connected to a ZigBee radio... How do I communicate with other such Galileo+ZigBee motes but not running ROS? Is it possible or do I have to install ROS on all the motes? Originally posted by vreg on ROS Answers with karma: 3 on 2014-08-02 Post score: 0 Answer: The Galileo can run ROS, but I've seen a fair number of questions and problems here on the discussion boards, and the ROS Hydro on Galileo wiki page also indicates that it is "Under construction and not yet ready for use." That said, I'm sure you can get ROS to run the Galileo, but it will take some perseverance and some elbow grease. As for communicating with other "Motes" on a ZigBee network, ROS doesn't provide many tools there. If you have a specific application and protocol in mind already, you're probably best off writing your own translation node, which would run on the ROS Galileo and translate between ROS and your ZigBee network. There is also the rosserial_xbee package, which provides some basic ROS messaging over XBee devices. I haven't used it myself, so I'm not sure how well-polished or stable it is, or how suitable it would be for your application. Originally posted by ahendrix with karma: 47576 on 2014-08-03 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 18868, "tags": "ros, arduino" }
Eliminating clauses from a CNF formula based on their unsatisfying truth assignments being covered by some other clause
Question: We are given a Boolean formula in Conjunctive Normal Form (CNF) with $n$ variables and $m$ clauses, where we do not allow repetition of clauses in a given formula and we do not allow repetition of variables in a given clause. Then it is well known that we can have up to $3^n-1$ distinct clauses. What I would like to know is the complexity of eliminating clauses whose unsatisfying assignments are already covered by some other clause. For example, given the following CNF formula $ \{ (a,b,c,d), (a,b,c,\bar{d}), (a,\bar{b},c,d), (a,b,c), (a,b,d), (b,c,d), (c,d) \}.$ After we eliminate the clauses whose unsatisfying truth assignments are already covered by some other clause we get the following CNF formula $\{ (a,b,c), (a,b,d), (c,d) \}.$ How fast can this be done for an arbitrary formula? Could it be done in polynomial time in terms of the length of the input? Answer: What you're describing is called subsumption; it is a standard CNF simplification technique for SAT solvers. None of the operations involved is worse than quadratic in either the number of variables or the number of clauses in the formula.
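A subsumption pass like the one the answer names fits in a few lines. In this sketch (my own notation: a clause is a set of literals, negation written with a leading `~`), clause $C$ subsumes $D$ when $C \subsetneq D$, since every assignment that falsifies $D$ sets all of $D$'s literals false, hence all of $C$'s too. The example formula from the question reduces exactly as claimed:

```python
def subsume(clauses):
    """Quadratic-time subsumption: drop clause D if some other clause C
    is a proper subset of D (C's unsatisfying assignments cover D's)."""
    cs = [frozenset(c) for c in clauses]
    return [d for i, d in enumerate(cs)
            if not any(i != j and c < d for j, c in enumerate(cs))]

# The formula from the question, with ~d for the negated literal d-bar.
f = [("a", "b", "c", "d"), ("a", "b", "c", "~d"), ("a", "~b", "c", "d"),
     ("a", "b", "c"), ("a", "b", "d"), ("b", "c", "d"), ("c", "d")]
reduced = subsume(f)
print(sorted(sorted(c) for c in reduced))
# reduces to the three clauses (a,b,c), (a,b,d), (c,d)
```

Both loops range over the $m$ clauses, and each subset test is linear in the clause width, matching the "no worse than quadratic" bound in the answer.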
{ "domain": "cstheory.stackexchange", "id": 1972, "tags": "cc.complexity-theory" }
Collective measurements: importance and realization
Question: I am reading the paper Polar codes for classical-quantum channels by Wilde and Guha, and it is stated that collective measurements are necessary in order to achieve the Holevo symmetric information, as can be seen from the HSW theorem. I am wondering what such collective measurements are and why they are different from normal quantum measurements. Additionally, I was expecting to gain some insight into the physical realization of such measurements, as it seems that if applied in an optical-fiber link, the usage of such codes would be a way to obtain a transmission rate equal to the Holevo symmetric information, which would be the ultimate capacity of such a channel. Answer: Collective measurements are normal measurements. You just need to be clear on the setting under which they are operating. I haven't delved deeply into the specific paper you mention (so it's always possible they make marginally different assumptions), but I expect it goes like this: You are looking at using many copies of the same channel. Encoding will, generically, be an entangled state across the inputs for a quantum-quantum channel. In this case of a classical-quantum channel, the inputs are classical bits, so the inputs can be correlated but not entangled. Decoding will, generically, involve measurement of all outputs of the channel simultaneously, in an entangled basis. It is these measurements across multiple outputs in an entangled basis that are referred to as collective measurements, in comparison to single-system measurements on the outputs of individual channels. In comparison to measurements on the outputs of the individual channels, these collective measurements first involve an entangling unitary between all the outputs. Now, I said "generically" in the sense that this is the most general case that you should consider. One might hope that the optimum measurement might be simpler than that, e.g.
measurements performed on individual channel outputs. Presumably one of the points this paper is making is that this is not the case in their specific setting.
{ "domain": "quantumcomputing.stackexchange", "id": 382, "tags": "error-correction, communication" }
How do I handle elastic contacts in a simulation with friction
Question: I'm trying to simulate a wheel as it hits the ground. Problem 1 Suppose a disc is dropped from a height. It has an initial velocity of $-x,-y$ caused by throwing and gravity. It has no initial angular velocity. When it hits the ground it should have some rotation resulting from the collision. How do I calculate the force that causes the torque for that rotation? Problem 2 The same disc is dropped from a height. It has no velocity in the sideways direction. It is already spinning fast. When it hits the ground the spinning should cause some translation. How do I calculate the force that causes this linear acceleration? Some rotation must be lost in the collision; how do I calculate that? To simplify, assume static $friction = 1$ and $g = -10$. Answer: First consider a rough surface (infinite friction). At the moment of impact there is a momentum transfer from the ground to the disk. This is called impulse and it is a vector passing through the contact point. With a rough surface the impulse in the horizontal direction (along the contact) is independent of the impulse in the vertical direction (contact normal). The effect the two impulses $J_x$ and $J_y$ have on the motion of the disk can be analyzed using the 2D inertia matrix at the contact point A. $$\begin{pmatrix}J_x \\J_y \\0\end{pmatrix} = \begin{bmatrix} m & 0 & -m R \\ 0 & m & 0 \\ -m R & 0 & I+m R^2 \end{bmatrix} \begin{pmatrix} \Delta \dot{x}_A \\ \Delta \dot{y}_A \\ \Delta \omega \end{pmatrix} $$ NOTE: This is a direct consequence of the equations of motion at the center of mass, expressed in terms of the linear motion (change) at A $(\Delta \dot{x}_A, \Delta \dot{y}_A)$ and the angular velocity (change) $\Delta \omega$. From the above we get the impulse required for a specific change in linear velocity (as well as the change in angular velocity).
$$ \begin{align} J_x & = \left( \frac{1}{m} + \frac{R^2}{I} \right)^{-1} \Delta \dot{x}_A \\ J_y & = \left( m \right) \Delta \dot{y}_A \\ \Delta \omega &= \frac{R}{I} J_x \end{align} $$ The elastic collision law states that the change in motion is such that the final velocity at the contact is a fraction $\epsilon$ of the initial velocity, but in the opposite direction. For the impact with an immovable floor this is $$ \begin{pmatrix} \dot{x}_A \\ \dot{y}_A \end{pmatrix} + \begin{pmatrix} \Delta \dot{x}_A \\ \Delta \dot{y}_A \end{pmatrix} = -\epsilon \begin{pmatrix} \dot{x}_A \\ \dot{y}_A \end{pmatrix} $$ So the change in linear velocity is given by $$ \begin{pmatrix} \Delta \dot{x}_A \\ \Delta \dot{y}_A \end{pmatrix} = -(1+\epsilon) \begin{pmatrix} \dot{x}_A \\ \dot{y}_A \end{pmatrix} $$ and the change in angular velocity $$\begin{align} \Delta \omega &= \frac{R}{I} \left( \frac{1}{m} + \frac{R^2}{I} \right)^{-1} \Delta \dot{x}_A \\ & = -(1+\epsilon) \frac{R}{I} \left( \frac{1}{m} + \frac{R^2}{I} \right)^{-1} \dot{x}_A\\ & = -(1+\epsilon) \frac{R}{I} \left( \frac{1}{m} + \frac{R^2}{I} \right)^{-1} \left(\dot{x}+R \omega \right) \end{align} $$ where $\dot{x}$ and $\omega$ are the initial horizontal and rotational velocities of the center of mass.
The final velocities at the center are found by transforming the (change in) motion from the contact point to the center of mass $$\begin{align} \dot{x}^\star &= \dot{x} + (\Delta \dot{x}_A-R \Delta \omega) \\ \dot{y}^\star &= \dot{y} + (\Delta \dot{y}_A) \\ \omega^\star & = \omega + \Delta \omega \end{align}$$ After some simplifications I get $$\begin{align} \dot{x}^\star &= - \frac{(1+\epsilon)I R \omega + (\epsilon I-m R^2) \dot{x}}{I+m R^2}\\ \dot{y}^\star &= -\epsilon \dot{y}\\ \omega^\star & = \omega - \frac{(1+\epsilon)m R (\dot{x}+R \omega)}{I+m R^2} \end{align}$$ Impulses are back-calculated as $$\begin{align} J_x &= -(1+\epsilon) \left( \frac{1}{m} + \frac{R^2}{I} \right)^{-1} (\dot{x}+R \omega) \\ J_y &=-(1+\epsilon) m \dot{y} \end{align}$$ NOTE: $\left( \frac{1}{m} + \frac{R^2}{I} \right)^{-1}$ is the effective mass in the horizontal direction at the contact point. Finally, to handle finite friction you must limit $|J_x| \leq \mu | J_y |$ while retaining the direction (sign) it would have with infinite friction. Since $J_x$ is then prescribed, the resulting changes in horizontal and rotational motion will differ accordingly.
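These closed-form results are easy to check numerically. A sketch with illustrative values (a uniform disc, an arbitrarily chosen restitution $\epsilon$), which also reproduces the Problem 2 scenario of pure spin converting into translation:

```python
import math

def impact(m, R, I, eps, xdot, ydot, omega):
    """Post-impact state of a disc hitting a rough floor (infinite friction),
    using the closed-form expressions derived above."""
    denom = I + m * R**2
    xdot_new = -((1 + eps) * I * R * omega + (eps * I - m * R**2) * xdot) / denom
    ydot_new = -eps * ydot
    omega_new = omega - (1 + eps) * m * R * (xdot + R * omega) / denom
    return xdot_new, ydot_new, omega_new

# Uniform disc: I = m R^2 / 2 (illustrative values).
m, R = 1.0, 0.1
I = 0.5 * m * R**2
# Problem 2: no sideways velocity, spinning fast, falling at 2 m/s.
xd, yd, w = impact(m, R, I, eps=0.5, xdot=0.0, ydot=-2.0, omega=50.0)
```

For these numbers the spin is entirely converted to translation: the disc rebounds at $\dot y^\star = 1$ m/s, slides off at $\dot x^\star = -2.5$ m/s, and stops rotating, and the post-impact contact velocity is $-\epsilon$ times the pre-impact one, as the collision law requires.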
{ "domain": "physics.stackexchange", "id": 27514, "tags": "homework-and-exercises, rotational-dynamics, collision, torque, rigid-body-dynamics" }
how to convert SPL in dBA to dB
Question: I wonder how I can convert dBA (A-weighted) to dB and vice versa. Can anybody provide any kind of table or online conversion app? Thanks. The following formula comes from https://en.wikipedia.org/wiki/A-weighting :\begin{align} R_A(f) &= {12194^2 f^4 \over \left(f^2 + 20.6^2\right)\ \sqrt{\left(f^2 + 107.7^2\right)\left(f^2 + 737.9^2\right)}\ \left(f^2 + 12194^2\right)}\ ,\\[3pt] A(f) &= 20\log_{10}\left(R_A(f)\right) - 20\log_{10}\left(R_A(1000)\right) \\ &\approx 20\log_{10}\left(R_A(f)\right) + 2.00 \end{align} how can I invert this formula? I should also mention that I have a specific frequency and I just want to convert the SPL at that specific frequency Answer: You can't convert these directly. You need to start with either a calibrated time-domain pressure waveform or a calibrated power spectrum. To calculate unweighted dB SPL you simply sum the total energy. To calculate dBA, you need to apply an A-weighting filter to either one of those first.
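That said, for the single-frequency case the asker mentions, the quoted curve can be applied directly: for a pure tone at frequency $f$, the A-weighted level is the unweighted level plus $A(f)$, and subtracting $A(f)$ inverts it. A sketch:

```python
import math

def a_weight_db(f):
    """A-weighting offset A(f) in dB, from the formula quoted above."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    # The +2.00 term normalizes the curve so that A(1000 Hz) is ~0 dB.
    return 20 * math.log10(ra) + 2.00

def spl_to_dba(spl_db, f):
    """Pure tone only: dBA = dB SPL + A(f); subtract A(f) to go the other way."""
    return spl_db + a_weight_db(f)
```

For example, `a_weight_db(1000.0)` is essentially 0 (by construction), while low frequencies are strongly attenuated, e.g. roughly -19 dB at 100 Hz. For broadband signals the answer above stands: you must weight the spectrum before summing energy.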
{ "domain": "dsp.stackexchange", "id": 9890, "tags": "filters, sound, digital, conversion" }
Balancing a pencil
Question: I came across this equation for balancing a pencil while solving some problems: $$ml\ddot { \theta } =mg\theta $$ Where $l=$ the length of the pencil, and $\theta$ is the angle it makes with the vertical. What I cannot understand is, why is the acceleration $a=l\ddot { \theta }$ and not $\displaystyle \frac { l\ddot { \theta } }{ 2 } $? Is the center of mass located at its top and not the center? Or is there something else I am missing? Answer: What I cannot understand is, why is the acceleration $a=l\ddot{\theta}$ and not $l\ddot{\theta}/2$? The equation you wrote doesn't mention anything about the linear acceleration. Is the center of mass located at its top and not the center? Or is there something else I am missing? The center of mass of the pencil is in the middle, not the top. There is likely something else you are missing. Or, rather, maybe there is something else the person who wrote what you are reading is missing. The tipping pencil has many forces acting on it: the force of gravity acting at the center of mass; the normal force acting at the bottom; the frictional force acting at the bottom. As far as I can tell, the easiest way to solve this problem is by applying a sum-of-torques analysis with the axis of rotation chosen as the bottom of the pencil. Then the sum of torques gives (the gravitational force acts at the center of mass): $$ \frac{mg l \sin(\theta)}{2} $$ and this is equal to $$ I\frac{d^2\theta}{dt^2} $$ where $I$ is the moment of inertia of a pencil about its bottom (not its middle, because the bottom was our choice for the axis of rotation) $$ I=\frac{ml^2}{3} $$ which, for small angles, gives $$ \frac{d^2\theta}{dt^2}=\frac{3}{2}\frac{g \theta}{l} $$ which differs from your equation by a factor of 3/2. If you instead treat the pencil not as a solid rod but as a point mass located at its top (at $l$), then you get $I=ml^2$, the torque becomes $mgl\sin(\theta)$, and you recover the equation you had written.
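As a numeric sanity check of the 3/2 factor, one can integrate the linearized equation and compare against its closed-form solution $\theta(t)=\theta_0\cosh\left(t\sqrt{3g/2l}\right)$ for release from rest (the pencil length and $g=10\ \mathrm{m/s^2}$ below are illustrative choices):

```python
import math

def tip_angle(theta0, g=10.0, l=0.175, t_end=0.5, dt=1e-5):
    """Integrate the small-angle tipping equation  theta'' = (3g / 2l) theta
    with semi-implicit Euler, starting from rest at angle theta0."""
    k = 3 * g / (2 * l)
    theta, thetadot = theta0, 0.0
    for _ in range(int(round(t_end / dt))):
        thetadot += k * theta * dt
        theta += thetadot * dt
    return theta

# Released from rest, the linearized solution grows like cosh(t * sqrt(3g/2l)).
theta0, g, l, t = 0.01, 10.0, 0.175, 0.5
predicted = theta0 * math.cosh(t * math.sqrt(3 * g / (2 * l)))
simulated = tip_angle(theta0, g=g, l=l, t_end=t)
```

The simulated and predicted angles agree to well under a percent, confirming the $3g/2l$ coefficient (with the OP's $g/l$ coefficient the growth rate would be visibly slower).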
{ "domain": "physics.stackexchange", "id": 20122, "tags": "homework-and-exercises, newtonian-mechanics, rotational-dynamics" }
Talk to PC from ROSJava/Android using USB?
Question: Is it possible to connect an Android phone using ROSJava and a PC running the ROS Core via USB? I'm hoping not to use WiFi. Or alternatively, has anyone seen any projects that attempted this? Originally posted by zmoratto on ROS Answers with karma: 11 on 2012-05-09 Post score: 0 Original comments Comment by zmoratto on 2012-05-11: Follow up: if your phone supports tethering this appears to be really easy. My phone was locked down by Verizon; however, using a custom ROM was an acceptable solution. Answer: Yes, this is possible if your phone supports USB tethering. See my question here. Originally posted by Chad Rockey with karma: 4541 on 2012-05-09 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 9325, "tags": "usb, rosjava, android" }
2-team fraction quiz/test in Python 3 with Tkinter
Question: I created a math quiz/game called "Factions". In the game, there are 2 teams. Red team and Blue team. They take turns answering questions relating to fractions. If they get it correct, they gain points equal to the round number * 100. If not, then they lose points equal to the round number * 100. Each team starts off with 100 points. When a team's points reaches 0, they lose the game. Here is the code for the game. from tkinter import * import random import time class Game(object): def __init__(self): global root self.round = 1 self.operators = ["+", "+", "+", "+", "-", "-", "-", "*", "*", "/"] self.operator = "" # Scores and team labels self.blue_team_label = Label(root, text="BLUE TEAM", bg="blue") self.blue_team_label.config(font=("Courier", 50)) self.blue_team_label.grid(row=0, column=0, columnspan=10) self.red_team_label = Label(root, text="RED TEAM", bg="red") self.red_team_label.config(font=("Courier", 50)) self.red_team_label.grid(row=0, column=23, columnspan=10) self.blue_team_points = 1000 self.blue_team_points_label = Label(root, text="Points: " + str(self.blue_team_points)) self.blue_team_points_label.config(font=("Courier", 30)) self.blue_team_points_label.grid(row=5, column=0, columnspan=10) self.red_team_points = 1000 self.red_team_points_label = Label(root, text="Points: " + str(self.red_team_points)) self.red_team_points_label.config(font=("Courier", 30)) self.red_team_points_label.grid(row=5, column=23, columnspan=10) self.question = "(self.first_numerator / self.first_denominator) " + self.operator + \ "(self.second_numerator / self.second_denominator) " self.turn = "BLUE TURN" self.round_label = Label(root, text="ROUND " + str(self.round)) self.round_label.config(font=("courier", 20)) self.round_label.grid(row=26, column=23, columnspan=10) # Questions self.generate_question() def generate_question(self): """ Generate the questions for the game. 
""" self.first_numerator = random.randint(1, 5 * self.round) self.operator = random.choice(self.operators) self.second_numerator = random.randint(1, 5 * self.round) self.first_denominator = random.randint(1, 5 * self.round) self.second_denominator = random.randint(1, 5 * self.round) self.row_1_question = Label(root, text=str(self.first_numerator) + " " + " " + " " + str(self.second_numerator)) self.row_2_question = Label(root, text=" " + "/" + " " + self.operator + " " + "/" + " = ") self.row_3_question = Label(root, text="{0} {1}".format(str(self.first_denominator), str(self.second_denominator))) self.row_1_question.grid(row=25, column=10, columnspan=5) self.row_2_question.grid(row=26, column=10, columnspan=5) self.row_3_question.grid(row=27, column=10, columnspan=5) self.row_1_question.config(font=("courier", 12)) self.row_2_question.config(font=("courier", 12)) self.row_3_question.config(font=("courier", 12)) self.question = "(self.first_numerator / self.first_denominator) " + self.operator + "(self.second_numerator " \ "/ self.second_denominator) " self.question_entry_box = Entry(root) self.question_entry_box.grid(row=26, pady=12, column=16, columnspan=3) self.question_check_button = Button(root, text="ENTER", command=self.check_answer) self.question_check_button.grid(row=26, column=20) self.turn_label = Label(root, text=self.turn) self.turn_label.config(font=("courier", 20)) self.turn_label.grid(row=26, column=0, columnspan=9) def check_answer(self): self.answer = eval(self.question) self.attempted_answer = self.question_entry_box.get() if self.turn == "BLUE TURN": if self.answer == float(self.attempted_answer): self.blue_team_points += self.round * 100 else: self.blue_team_points -= self.round * 100 else: if self.answer == float(self.attempted_answer): self.red_team_points += self.round * 100 else: self.red_team_points -= self.round * 100 self.update() def update(self): self.blue_team_label = Label(root, text="BLUE TEAM", bg="blue") 
self.blue_team_label.config(font=("Courier", 50)) self.blue_team_label.grid(row=0, column=0, columnspan=10) self.red_team_label = Label(root, text="RED TEAM", bg="red") self.red_team_label.config(font=("Courier", 50)) self.red_team_label.grid(row=0, column=23, columnspan=10) self.blue_team_points_label = Label(root, text="Points: " + str(self.blue_team_points)) self.blue_team_points_label.config(font=("Courier", 30)) self.blue_team_points_label.grid(row=5, column=0, columnspan=10) self.red_team_points_label = Label(root, text="Points: " + str(self.red_team_points)) self.red_team_points_label.config(font=("Courier", 30)) self.red_team_points_label.grid(row=5, column=23, columnspan=10) if self.turn == "BLUE TURN": self.turn = "RED TURN" else: self.turn = "BLUE TURN" self.round += 1 if self.blue_team_points < 1: game_over_label = Label(root, text="BLUE TEAM LOSES") game_over_label.config(font=("courier", 20)) game_over_label.grid(row=50, column=0, columnspan=10) time.sleep(3) sys.exit() if self.red_team_points < 1: game_over_label = Label(root, text="RED TEAM LOSES") game_over_label.config(font=("courier", 20)) game_over_label.grid(row=50, column=0, columnspan=10) time.sleep(3) sys.exit() self.round_label = Label(root, text="ROUND " + str(self.round)) self.round_label.config(font=("courier", 20)) self.round_label.grid(row=26, column=23, columnspan=10) self.generate_question() root = Tk() root.title("Factions") game = Game() mainloop() This is my first project with tkinter, and one of my first in Python. I want feedback on how to make the game better, and any glitches you find. Answer: Don't mix presentation and business logic Let's take a look at self.turn. You're using it for two purposes - to talk to humans (that's why it's a string), and for the computer to track which turn it is. These concerns should be separated. If there will always be two players, the turn could be represented by a boolean, or maybe as an integer that's the player ID. 
It should only be converted to a string when you want to display whose turn it is on the screen. Your entire game is baked into one Game class, but a bunch of separation needs to be done. A great example of a method that should only appear in the business logic layer is generate_question. It shouldn't interact with the UI at all. Solving this issue will dramatically clean up your code, make debugging and maintenance easier, and generally decrease headaches. Use modern formatting Rather than this: str(self.first_numerator) + " " + " " + " " + str(self.second_numerator) you can do: f'{self.first_numerator} {self.second_numerator}' Be careful about rounding This: == float( is a great way to create a nasty bug. Sometimes this will evaluate to false even if the numbers seem like they should match -- they're just infinitesimally different. Either track integers as a member of fractions, or if you really need to compare floats, do so with some small tolerance, i.e. epsilon = 1e-12 if abs(self.answer - self.attempted_answer) < epsilon: # ... Create an upper main function ...to house the code that's currently in global scope. Don't repeat yourself This: if self.turn == "BLUE TURN": if self.answer == float(self.attempted_answer): self.blue_team_points += self.round * 100 else: self.blue_team_points -= self.round * 100 else: if self.answer == float(self.attempted_answer): self.red_team_points += self.round * 100 else: self.red_team_points -= self.round * 100 can be compressed - make a variable to hold the result of your multiplication: award = self.round * 100 And you don't need to repeat the entire block based on turn if you make a Player class with an award method.
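On the rounding point specifically, Python's fractions module lets the game compare answers exactly instead of via eval() and float(). A sketch (function names here are hypothetical, not from the original code):

```python
from fractions import Fraction

def exact_answer(n1, d1, op, n2, d2):
    """Evaluate the question exactly, instead of eval() on a float expression.

    Denominators are nonzero by construction in the game (randint(1, ...)).
    """
    a, b = Fraction(n1, d1), Fraction(n2, d2)
    ops = {'+': a + b, '-': a - b, '*': a * b, '/': a / b}
    return ops[op]

def check(attempt_text, n1, d1, op, n2, d2):
    """Accept answers given either as a fraction '3/4' or a decimal '0.75'.

    Fraction parses both forms exactly, so 0.75 == 3/4 but 0.7 != 3/4.
    """
    expected = exact_answer(n1, d1, op, n2, d2)
    try:
        return Fraction(attempt_text) == expected
    except ValueError:
        return False  # unparseable input counts as a wrong answer
```

This also removes the eval() of a string built from instance attributes, which is fragile and a bad habit even when the inputs are trusted.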
{ "domain": "codereview.stackexchange", "id": 34003, "tags": "python, python-3.x, game, tkinter, quiz" }
Circular orbits
Question: First of all, I'm studying orbits for a hobby: world building. Unfortunately, my mathematical abilities approach a ridiculously low threshold, which means I am stuck with reading the simplest explanations, which in turn leave me asking tons of fairly basic questions. Allow me to start with a simple point. I know that Kepler's Laws state that planetary orbits must always be elliptical. I also know that Earth's orbit varies from more elliptical to less elliptical, and that its less elliptical stage is nearly circular. So... what would happen if Earth did have a circular orbit? Why is it impossible for any planet (or moon, by the way) to orbit another body in a perfectly circular path? Answer: You've been given an answer, and it's perfectly valid, but here's something from a different perspective (less strict). A circle is really just a particular case of an ellipse. Take an ellipse, and change it, by moving its focal points closer together. When those two points coincide, what you get is a circle. It's still an ellipse, technically - one that happens to have both focal points in the same place, is all. So yes, you can actually have planetary orbits, or any orbits, circular. There's nothing forbidding that. It's just pretty unlikely that this will occur via a natural process. As indicated elsewhere, in the real world, all orbits and trajectories are a bit imperfect due to perturbations - whether they be elliptical, circular, parabolic or hyperbolic, they are always a bit perturbed by external factors. In many cases, perturbations are so tiny that you can ignore them. When a planet is orbiting the Sun, and the orbit is elliptical, the Sun will be in one of those two focal points; the other point has no particular significance. If you could circularize that orbit, then the Sun would be in the center of the circle, of course. Kepler's laws remain valid for a circular orbit: The orbit of every planet is an ellipse with the Sun at one of the two foci. Still true.
A circle is an ellipse where the foci coincide. A line joining a planet and the Sun sweeps out equal areas during equal intervals of time. Still true. On a circular orbit, the planet moves at constant speed, so the area swept per unit time is constant. The square of the orbital period of a planet is directly proportional to the cube of the semi-major axis of its orbit. Still true. The semi-major axis becomes the radius of the circle. You must understand that Kepler's laws are now more of historical interest. They are not exactly at the bleeding edge of science anymore. During Kepler's time, it seemed reasonable to state that all orbits must be elliptical (in the strict sense of the term), but now we know that trajectories (including orbits, or closed trajectories) can be circular, elliptical, parabolic or hyperbolic, depending on a few factors. We also know that perturbations actually deflect all these trajectories a little bit from ideal shapes (but it's usually a very tiny effect). We also know that relativity makes all "elliptical" orbits more complex - they remain close to elliptic, but the whole ellipse keeps turning around the central star very slowly. All this stuff was not known during Kepler's time, so take his laws for what they are - a snapshot of the development of our understanding in time.
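For a circular orbit the third law gives the period directly as $T = 2\pi\sqrt{r^3/GM}$. A quick check with Earth's mean distance (the constants below are standard approximate values):

```python
import math

GM_SUN = 1.32712440018e20  # m^3/s^2, standard gravitational parameter of the Sun
r = 1.495978707e11         # m, one astronomical unit (circle radius = semi-major axis)

# Kepler's third law, circular case: T^2 = 4 pi^2 r^3 / (G M)
T = 2 * math.pi * math.sqrt(r**3 / GM_SUN)
days = T / 86400
```

The result comes out at about 365.26 days, i.e. a circularized Earth orbit at 1 AU would keep the length of the year essentially unchanged.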
{ "domain": "astronomy.stackexchange", "id": 4937, "tags": "orbit" }
octomap_server globally referenced pointcloud and transform
Question: Hi, I am trying to pass globally referenced point clouds and trajectories to octomap_server, but it's not producing the desired results? The point cloud and ray tracing are being inserted from the single origin of the first scan. Originally posted by anonymousSnowMan on ROS Answers with karma: 38 on 2013-01-09 Post score: 2 Answer: Found the issue: // directly transform to map frame: pcl::transformPointCloud(pc, pc, sensorToWorld); I believe this was performing a transform on already transformed trajectories. By commenting this entry out, rebuilding and trying again, the insertion and ray casting are working correctly. Originally posted by anonymousSnowMan with karma: 38 on 2013-01-13 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 12342, "tags": "octomap, octomap-server" }
Stress-energy tensor explicitly in terms of the metric tensor
Question: I am trying to write the Einstein field equations $$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu} R=\frac{8\pi G}{c^4}T_{\mu\nu}$$ in such a way that the Ricci curvature tensor $R_{\mu\nu}$ and scalar curvature $R$ are replaced with an explicit expression involving the metric tensor $g_{\mu\nu}$. Using the equations $R=g^{\mu\nu}R_{\mu\nu}$ (relating the scalar curvature to the trace of the Ricci curvature tensor) and $R_{\mu\nu}=R^\lambda_{\mu\lambda\nu}$ (relating the Ricci curvature tensor to the trace of the Riemann curvature tensor), would anyone be willing to give recommendations on how to proceed, or already know the equation? Answer: The Einstein equations are some of the most complicated PDE's people study. There is no shortcut for this, you just have to do all the horrific algebra. Start with the trace-reversed Einstein equation \begin{equation} R_{\mu \nu}=8\pi G(T_{\mu \nu}-\frac{1}{2}Tg_{\mu \nu}) \end{equation} Use the equation for Ricci in terms of the Christoffel Connection \begin{equation} R_{\mu \nu}=2\Gamma^\alpha_{\mu [\nu,\alpha]}+2\Gamma^{\alpha}_{\lambda[\alpha}\Gamma^\lambda_{\nu]\mu} \end{equation} Plug in \begin{equation} \Gamma^i_{kl}=\frac{1}{2}g^{im}(g_{mk,l}+g_{ml,k}-g_{kl,m}) \end{equation} In the end you will just have lots of terms with up to two derivatives of the metric.
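The "plug in" step can at least be sanity-checked numerically. Below is a sketch (not from the answer) that evaluates the quoted Christoffel formula by finite differences for a simple test metric, the unit 2-sphere, and recovers the textbook symbols $\Gamma^\theta_{\phi\phi}=-\sin\theta\cos\theta$ and $\Gamma^\phi_{\theta\phi}=\cot\theta$:

```python
import math

def metric(x):
    """Unit 2-sphere, coordinates x = (theta, phi): g = diag(1, sin^2 theta)."""
    th, _ = x
    return [[1.0, 0.0], [0.0, math.sin(th) ** 2]]

def christoffel(x, h=1e-6):
    """Gamma^i_{kl} = 1/2 g^{im} (g_{mk,l} + g_{ml,k} - g_{kl,m}),
    with the metric derivatives taken by central differences."""
    n = 2

    def dg(m, k, l):  # partial_l g_{mk}
        xp, xm = list(x), list(x)
        xp[l] += h
        xm[l] -= h
        return (metric(xp)[m][k] - metric(xm)[m][k]) / (2 * h)

    g = metric(x)
    ginv = [[1.0 / g[0][0], 0.0], [0.0, 1.0 / g[1][1]]]  # diagonal metric
    Gamma = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for k in range(n):
            for l in range(n):
                Gamma[i][k][l] = 0.5 * sum(
                    ginv[i][m] * (dg(m, k, l) + dg(m, l, k) - dg(k, l, m))
                    for m in range(n))
    return Gamma

G = christoffel((0.7, 0.3))
```

The same mechanical expansion, iterated once more for the Ricci tensor, is exactly the "horrific algebra" the answer refers to; in practice one hands it to a computer algebra system.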
{ "domain": "physics.stackexchange", "id": 12159, "tags": "general-relativity, metric-tensor, tensor-calculus, stress-energy-momentum-tensor" }
Are correlations stronger than those allowed by quantum mechanics possible?
Question: We know how a quantum correlation setup can help us with a better probability of winning games like the CHSH. But what is the upper bound that physics can allow? Is it the quantum correlation setup? Or can we exceed them in general sense to get much stronger correlations? Answer: Yes, it is possible to conceive theories with "stronger correlations" than those given by quantum mechanics. One way to make this statement precise is to consider some kind of "measurement apparatus" (you can think of it as simply a black box with some buttons that you can push and different LEDs that correspond to different possible outputs), and analyse the set of correlations between inputs and outputs that different physical theories allow for. For example, if you have two possible inputs and two possible outputs then the set of possible theories, often referred to as behaviours in this context, is the set of probabilities $\{p(a|x)\}_{x,a\in\{0,1\}}$. This is the set of vectors $\boldsymbol p$ in $[0,1]^{2^2}\subset\mathbb R^{2^2}$ normalised to one. In other words, it's a section of the $2^2$-dimensional hyperplane. More generally, in this context, it is common to consider a Bell-like scenario in which two parties are involved and each has its own box. Then, if each party can choose between $m$ possible inputs and can get $\Delta$ possible outputs, the set of possible behaviours is a hyperplane $\mathcal P\subset\mathbb R^{\Delta^2 m^2}$, which therefore has dimension $(\Delta^2-1)m^2$. The behaviours allowed by quantum mechanics are a strict subset of $\mathcal P$. One can consider different restrictions imposed on a physical theory and study the corresponding set of possible behaviours. A first natural assumption is to require a theory to be no-signalling, which means that it doesn't allow for faster-than-light communication. The set $\mathcal{NS}$ of no-signalling behaviours is strictly larger than the set $\mathcal Q$ of quantum behaviours.
In the context of CHSH inequalities, the boundary between $\mathcal{NS}$ and $\mathcal Q$ would be observable via Tsirelson's bound, which tells us that quantum mechanics cannot produce correlations such that $S>2\sqrt 2$ (where $S$ is the usual operator defined for CHSH inequalities). Similarly, the boundary between $\mathcal Q$ and the set $\mathcal L$ of local (classical) behaviours can be witnessed via the standard Bell's inequalities. See the following picture representing the relations between these different sets (taken from Brunner et al., p. 7): Have a look at Brunner et al.'s review (1303.2849) for more details. If you don't want to assume anything about a theory, then no, there are no restrictions at all. Any correlation between present and future is in principle possible.
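Tsirelson's bound is easy to verify for the standard optimal quantum strategy (state $|\Phi^+\rangle$, Alice measuring $Z$ and $X$, Bob measuring $(Z\pm X)/\sqrt 2$). A self-contained sketch, kept to pure Python since all the matrices involved are real:

```python
import math

def kron(A, B):
    """Kronecker product of two square real matrices given as nested lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def expval(state, M):
    """<psi| M |psi> for a real state vector and real matrix."""
    d = len(state)
    return sum(state[i] * M[i][j] * state[j] for i in range(d) for j in range(d))

def obs(angle):
    """Observable cos(a) Z + sin(a) X, measured in the X-Z plane."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, s], [s, -c]]

phi_plus = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]  # (|00> + |11>)/sqrt(2)

A0, A1 = obs(0.0), obs(math.pi / 2)          # Z and X for Alice
B0, B1 = obs(math.pi / 4), obs(-math.pi / 4)  # (Z+X)/sqrt(2), (Z-X)/sqrt(2) for Bob

def E(A, B):
    return expval(phi_plus, kron(A, B))

S = E(A0, B0) + E(A0, B1) + E(A1, B0) - E(A1, B1)
```

Here $S$ comes out at $2\sqrt 2 \approx 2.828$, above the classical bound of 2 but exactly at the quantum maximum; a PR-box behaviour in $\mathcal{NS}$ would reach $S=4$.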
{ "domain": "quantumcomputing.stackexchange", "id": 4368, "tags": "non-locality, foundations, correlations, nonlocal-games" }
How to infer about the volume of the body from measuring the mass of the object in 2 different liquids?
Question: While solving a problem on hydrostatics I saw a statement arguing that if I know the weight of an object in air and its weight in water, then by subtracting the two weights I can deduce the volume of the object. I have a numeric example: weight in air is $740 gm$, weight in water is $690 gm$, so the difference is $$m = 740- 690= 50 gm$$ and somehow they concluded that the volume is $50 cm^3$. What is the explanation for that? Is it about Archimedes' law? Answer: There can be two possible cases. Case 1 The statement in your book might be wrong (though the more likely possibility is that the statement in the book is correct but what you read/inferred from it is wrong). In that case you are right. We can only subtract two quantities with the same dimensions; in your case it is mass. We can't get a quantity of different dimensions by subtracting two quantities of the same dimensions. Case 2 What I think it is: they meant that the volume of the object submerged in the water is $50cm^3$. This works only because the density of water is 1 $gm \ cm^{-3}$, so in the case of water 50 gm of displaced mass corresponds to 50 $cm^3$; but that is still not necessarily the full volume of the object, only the volume of the part submerged in the liquid. If the object is fully submerged then the book is correct. In fact, this is the actual statement of Archimedes' principle: the volume of liquid displaced is equal to the volume of the object submerged in the liquid.
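The arithmetic, spelled out (assuming, per Case 2, a fully submerged object and water density 1 gm/cm³):

```python
RHO_WATER = 1.0  # g/cm^3

def submerged_volume_cm3(weight_air_g, weight_water_g, rho=RHO_WATER):
    """Archimedes: the buoyant force equals the weight of displaced water,
    so the apparent mass loss divided by the water density gives the
    displaced (i.e. submerged) volume."""
    return (weight_air_g - weight_water_g) / rho

v = submerged_volume_cm3(740, 690)  # the numbers from the question
```

The density division is what silently happened in the book's "50 gm = 50 cm³" step; in any liquid other than water (rho != 1) the numbers would no longer coincide.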
{ "domain": "physics.stackexchange", "id": 69306, "tags": "homework-and-exercises, mass, fluid-statics, density, buoyancy" }
How to derive the Hamilton-Jacobi equation for the area of a minimal surface on a Riemannian manifold?
Question: The action for a string in this background $$G_{IJ}\tag{1}$$ can be written as the Nambu-Goto action $$S_{NG}=\int d\sigma^1d\sigma^2\sqrt{g}\quad\quad\Rightarrow\quad\mathcal{L}=\sqrt{g}\tag{2}$$ where the induced two-dimensional metric is $$g_{ab}=G_{IJ}\partial_aX^I\partial_bX^J.\tag{3}$$ This action represents the worldsheet area of the string, a two-dimensional Riemannian manifold. This area is minimal if the Euler-Lagrange equations are satisfied, but also if an equivalent equation is satisfied: the Hamilton-Jacobi equation, which has this form (see footnote at page 13 in Drukker) $$G^{IJ}\left(\frac{\delta S}{\delta X^I}\right)\left(\frac{\delta S}{\delta X^J}\right)=G_{MN}\partial_1X^{M}\partial_1X^{N}\tag{4}$$ (in this form), where $$\partial_a=\frac{\partial}{\partial\sigma^a}\,,\quad a=1,2.\tag{5}$$ I know that the Hamilton-Jacobi equation is $$\frac{\partial S}{\partial t}+H\left(\frac{\partial S}{\partial x},x\right)=0.\tag{6}$$ How does this expression translate into the previous one? EDIT: Let me show you what I have. From (2) and the expression of the determinant $$g=\frac{1}{2}\varepsilon^{ab}\varepsilon^{cd}g_{ac}g_{bd}\tag{7}$$ $$ P_I^a=\frac{\partial\mathcal{L}}{\partial\partial_aX^I}=\frac{1}{\sqrt{g}}\varepsilon^{ab}\varepsilon^{cd}\partial_cX^JG_{IJ}g_{bd}\tag{8}$$ right? Then $$\mathcal{H}=P_I^a\partial_aX^I-\mathcal{L}=\sqrt{g}.\tag{9}$$ Why is it not zero?
EDIT 2 Let us start with an equivalent action, Polyakov $$S_P=\frac{1}{2}\int d^2\sigma\sqrt{-h}h^{ab}\partial_aX^I\partial_bX^JG_{IJ}.\tag{10}$$ The momentum is $$P_I^a=\frac{\partial\mathcal{L}_P}{\partial\partial_aX^I}=\sqrt{-h}h^{ab}\partial_bX^JG_{IJ}.\tag{11}$$ Let us choose $$h_{ab}=\begin{pmatrix} -1 & 0\\ 0 & 1 \end{pmatrix}\,\quad\quad\Rightarrow\sqrt{-h}=1.\tag{12}$$ The Hamiltonian is then, $$\mathcal{H}_P=\frac{1}{2}\int d^2\sigma h_{ab}P^a_IP^b_JG^{IJ}.\tag{13}$$ Due to reparametrization invariance $$h_{ab}P^a_IP^b_JG^{IJ}=0,\tag{14}$$ or $$G^{IJ}P_I^\sigma P_J^\sigma=\partial_\tau X^I\partial_\tau X^JG_{IJ}.\tag{15}$$ Is this correct? Answer: Since we assume that the target-space (TS) metric $G_{IJ}$ does not depend explicitly on the world-sheet (WS) coordinates $(\tau,\sigma)$, the relevant Hamilton-Jacobi (HJ) equation is the time-independent formulation $$H(x, \frac{\partial W}{\partial x})~=~E\tag{A}$$ in terms of Hamilton's characteristic function $W$ rather than Hamilton's principal function $S$. Because of WS reparametrization invariance, the rhs. $E=0$ of the HJ equation (A) vanishes, cf. e.g. this Phys.SE post. In fact due to WS reparametrization invariance, the Legendre transformation of the Nambu-Goto (NG) action is singular. We encounter 2 primary constraints $$ \frac{1}{2T_0}P^2\mp \frac{T_0}{2}(X^{\prime})^2~=~0 \qquad\text{and}\qquad P\cdot X^{\prime} ~=~0, \tag{B}$$ cf. e.g. this Phys.SE post. [Here the $\mp$ sign corresponds to Euclidean (Minkowskian) TS signature, respectively. Note that the TS metric induces a WS metric of the same$^1$ signature. The constraints (B) can alternatively be deduced from the equivalent Polyakov action, cf. my Phys.SE answer here and links therein.] 
The HJ theory is usually not developed systematically for constrained systems, but we can view $$ \frac{1}{2T_0}\left(\frac{\delta W}{\delta X}\right)^2\mp\frac{T_0}{2}(X^{\prime})^2~=~0 \qquad\text{and}\qquad \frac{\delta W}{\delta X}\cdot X^{\prime} ~=~0 \tag{C}$$ as the appropriate analog of the HJ equation/eikonal equation. The first equality in eq. (C) corresponds to OP's eq. (4). In Ref. 1 the boundary of the WS is a Wilson-loop parametrized by $\tau$. Concerning OP's Hamiltonian density (9): Note that OP's $a$-index should by definition only be a temporal WS index, not a spatial WS index. Then the Hamiltonian density (9) indeed vanishes. In particular one does not sum over the $a$-index here. In eq. (11) OP is introducing polymomenta a la De Donder & Weyl. There is a similar issue with OP's eq. (13). References: N. Drukker, D.J. Gross & H. Ooguri, Wilson Loops and Minimal Surfaces, arXiv:hep-th/9904191. -- $^1$ Eq. (12) is inconsistent with OP's Riemannian TS signature.
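For the Euclidean-signature case, the constraints (B) follow from the NG momenta by a short computation; here is a sketch (my notation, with $\dot X=\partial_\tau X$ and $X'=\partial_\sigma X$; this derivation is not part of the original answer):

```latex
% Euclidean NG Lagrangian density and canonical momenta
\mathcal{L} = T_0\sqrt{\dot{X}^2\,(X')^2-(\dot{X}\cdot X')^2},
\qquad
P_I = \frac{\partial\mathcal{L}}{\partial\dot{X}^I}
    = \frac{T_0^2}{\mathcal{L}}\Big[(X')^2\,\dot{X}_I-(\dot{X}\cdot X')\,X'_I\Big].
% Contracting with X' kills both terms:
P\cdot X' = \frac{T_0^2}{\mathcal{L}}\Big[(X')^2(\dot{X}\cdot X')-(\dot{X}\cdot X')(X')^2\Big]=0.
% Squaring reproduces the first constraint (upper, Euclidean sign of (B)):
P^2 = \frac{T_0^4\,(X')^2}{\mathcal{L}^2}\Big[\dot{X}^2\,(X')^2-(\dot{X}\cdot X')^2\Big]
    = T_0^2\,(X')^2
\;\Longrightarrow\;
\frac{1}{2T_0}P^2-\frac{T_0}{2}(X')^2=0.
```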
{ "domain": "physics.stackexchange", "id": 59037, "tags": "differential-geometry, string-theory, hamiltonian-formalism, variational-calculus, constrained-dynamics" }
Analyzing genetic tags
Question: I've gone back and forth a few times recently on my Perl coding style when it comes to module subroutines. If you have an object and you want to call the method bar with no arguments, then you can either do $foo->bar() or $foo->bar. At one point I started favoring the latter because I felt it cleaned up the code and made it more readable. However, sometimes I question whether it would be better to be fully explicit, especially considering the possibility that someone else will have to look at my code later, someone who almost certainly will not be an expert Perl programmer. For example, consider this block of code. For the method calls that require arguments (get_tag_values and has_tag), there is no question about the parentheses. But what about next_feature and primary_tag? Is the readability I gain from dropping the parens worth losing the explicit syntax? Is one better than the other for long term maintainability? Or is this simply a subjective judgment call?

while( my $feature = $gff3->next_feature ) {
    if( $type eq "cds" ) {
        if( $feature->primary_tag eq "mRNA" ) {
            my($gene_id) = $feature->get_tag_values("Parent");
            my($mRNA_id) = $feature->get_tag_values("ID");
            next unless( $list eq '' or $genes_to_extract->{$gene_id} );
            $subseq_locations->{ $feature->seq_id }->{ $mRNA_id } = Bio::Location::Split->new();
        }
        elsif( $feature->primary_tag eq "CDS" ) {
            my($mRNA_id) = $feature->get_tag_values("Parent");
            if( $subseq_locations->{ $feature->seq_id }->{ $mRNA_id } ) {
                $subseq_locations->{ $feature->seq_id }->{ $mRNA_id }->add_sub_Location( $feature->location );
            }
        }
    }
    else {
        if( $feature->primary_tag eq $type ) {
            my $feat_id;
            if( $list ne '' ) {
                ($feat_id) = $feature->get_tag_values("ID") if( $feature->has_tag("ID") );
                next unless( $feature->has_tag("ID") and $genes_to_extract->{$feat_id} );
            }
            $subseq_locations->{ $feature->seq_id }->{ $feat_id } = Bio::Location::Split->new();
            $subseq_locations->{ $feature->seq_id }->{ $feat_id }->add_sub_Location( $feature->location );
        }
    }
}

Answer: Because either is technically acceptable, you are right that it is a style issue and simply a case of choosing a coding convention. However, I think that you have hit a very important point. So few other languages that use parentheses for functions allow a parameterless function call without parentheses that it can be very surprising for developers unfamiliar with Perl. This point would sway me in favour of always using them and, indeed, I always do so out of habit because I use other languages a lot and it just comes naturally.
{ "domain": "codereview.stackexchange", "id": 1001, "tags": "perl, bioinformatics" }
Re-factorize a program using single responsibility principle - SOLID- SRP
Question: The class WalkingData stores a "date" and a "walked distance". The class also reads the stored data.

public class WalkingData
{
    public DateTime Date { get; set; }
    public int WalkedDistance { get; set; }

    private string _filePath = @"c:\Data\Json.txt";

    // Read data from the JSON file
    public List<WalkingData> GetAll()
    {
        // If the file does not exist, return an empty list
        if (!File.Exists(_filePath))
            return new List<WalkingData>();

        string jsonData;

        // Read the existing JSON file
        using (StreamReader readtext = new StreamReader(_filePath))
        {
            jsonData = readtext.ReadToEnd();
        }

        // Deserialize the JSON and return a list of WalkingData
        return JsonConvert.DeserializeObject<List<WalkingData>>(jsonData);
    }

    // Save an instance of WalkingData in the JSON file
    public void Save()
    {
        List<WalkingData> lstExistingWalkingData = new List<WalkingData>();

        // If there is existing data, load it into lstExistingWalkingData
        if (File.Exists(_filePath))
            lstExistingWalkingData = GetAll();

        // Add the current instance to lstExistingWalkingData
        lstExistingWalkingData.Add(this);

        // Serialize lstExistingWalkingData
        string output = JsonConvert.SerializeObject(lstExistingWalkingData);

        // Save the JSON file
        using (StreamWriter w = new StreamWriter(_filePath))
        {
            w.WriteLine(output);
        }
    }
}

After I applied the Single Responsibility Principle I have the new code, and I would like to confirm whether I applied the principle in a reasonable way:

// This class is located in a library called BOL and has a reference to the DAL library
public class WalkingData
{
    public DateTime Date { get; set; }
    public int WalkedDistance { get; set; }
}

// This class is located in a library called BOL and has a reference to the DAL library
public class WalkingDataManager
{
    WalkingDataRepository walkingDataRepository = new WalkingDataRepository();

    public List<WalkingData> GetAll()
    {
        return walkingDataRepository.GetAll();
    }

    public void Save(WalkingData walkingData)
    {
        walkingDataRepository.Save(walkingData);
    }
}

// This class is located in a library called DAL
internal class WalkingDataRepository
{
    private string _filePath = @"c:\Data\Json.txt";

    // Read data from the JSON file
    internal List<WalkingData> GetAll()
    {
        // If the file does not exist, return an empty list
        if (!File.Exists(_filePath))
            return new List<WalkingData>();

        string jsonData;

        // Read the existing JSON file
        using (StreamReader readtext = new StreamReader(_filePath))
        {
            jsonData = readtext.ReadToEnd();
        }

        // Deserialize the JSON and return a list of WalkingData
        return JsonConvert.DeserializeObject<List<WalkingData>>(jsonData);
    }

    // Save an instance of WalkingData in the JSON file
    internal void Save(WalkingData walkingData)
    {
        List<WalkingData> lstExistingWalkingData = new List<WalkingData>();

        // If there is existing data, load it into lstExistingWalkingData
        if (File.Exists(_filePath))
            lstExistingWalkingData = GetAll();

        // Add the current instance to lstExistingWalkingData
        lstExistingWalkingData.Add(walkingData);

        // Serialize lstExistingWalkingData
        string output = JsonConvert.SerializeObject(lstExistingWalkingData);

        // Save the JSON file
        using (StreamWriter w = new StreamWriter(_filePath))
        {
            w.WriteLine(output);
        }
    }
}

Answer: The three classes now follow the SRP. You may want to allow the file path to be updated by a parameter passed into the WalkingDataRepository constructor. This would allow a user to set the file path by command-line arguments, an interactive question-and-answer prompt at runtime, or an environment variable.
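The reviewer's constructor-injection suggestion can be sketched as follows. This is a hypothetical Python analogue of the repository, not the original C#; the class and method names simply mirror the question:

```python
import json
import os
from dataclasses import dataclass


@dataclass
class WalkingData:
    day: str
    walked_distance: int


class WalkingDataRepository:
    """File-backed store; the path is injected instead of hard-coded."""

    def __init__(self, file_path):
        # Callers decide where the data lives: a CLI flag, an environment
        # variable, a runtime prompt, or a temporary file in tests.
        self._file_path = file_path

    def get_all(self):
        # A missing file simply means no data yet
        if not os.path.exists(self._file_path):
            return []
        with open(self._file_path) as f:
            return [WalkingData(**d) for d in json.load(f)]

    def save(self, item):
        existing = self.get_all()
        existing.append(item)
        with open(self._file_path, "w") as f:
            json.dump([vars(d) for d in existing], f)
```

A WalkingDataManager-style facade would then construct the repository with whatever path the application configures, keeping persistence details out of the data class entirely.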
{ "domain": "codereview.stackexchange", "id": 38105, "tags": "c#, object-oriented" }
Attempt at Conway's Game of life
Question: I tried my hand at Conway's Game of Life. Well, it works as I wanted it to, but I want to improve my design and coding practice. I will be glad to get feedback and ideas to improve it and how to code it better.

#include<stdio.h>
#include<windows.h>
#include<conio.h>

#define ALIVE true
#define DEAD false
#define T 100

const int GRIDSIZE = 50;

using namespace std;

class Grid {
    bool grid[GRIDSIZE][GRIDSIZE], stable;
    int generation;
public:
    Grid();
    void printFrame();
    void nextFrame();
    void play();
    bool isExtinct();
    bool isStable();
};

Grid::Grid() {
    for( int i=0; i < GRIDSIZE; i++)
        for( int j=0; j < GRIDSIZE; j++){
            grid[i][j] = DEAD;
        }
    stable = false;
    generation = 1;
    // Manually setting the initial population
    // grid[24][20] = ALIVE;
    grid[2][2] = ALIVE;
    grid[3][3] = ALIVE;
    grid[3][6] = ALIVE;
    grid[4][4] = ALIVE;
    grid[2][6] = ALIVE;
    //grid[4][8] = ALIVE;
    //grid[4][9] = ALIVE;
    //grid[4][7] = ALIVE;
    //grid[5][6] = ALIVE;
}

void Grid::printFrame() {
    printf("\n ** GENERATION : %d ** \n\n", this -> generation);
    for( int i = 0; i < GRIDSIZE; i++) {
        for( int j = 0; j < GRIDSIZE; j++) {
            if( this -> grid[i][j]){
                printf("*");
            } else {
                printf(" ");
            }
        }
        printf("\n");
    }
}

void Grid::nextFrame() {
    int numSurrounding = 0, tmpcounter = 0;
    bool tempGrid[GRIDSIZE][GRIDSIZE];
    for ( int i = 0; i < GRIDSIZE ; i++) {
        for ( int j = 0; j < GRIDSIZE ; j++) {
            if ( (i+1) < GRIDSIZE && this -> grid[i+1][j] == true ) { numSurrounding++; }
            if ( (i-1) >= 0 && this -> grid[i-1][j] == true ) { numSurrounding++; }
            if ( (j+1) < GRIDSIZE && this -> grid[i][j+1] == true ) { numSurrounding++; }
            if ( (j-1) >= 0 && this -> grid[i][j-1] == true ) { numSurrounding++; }
            if ( (i+1) < GRIDSIZE && (j+1) < GRIDSIZE && this -> grid[i+1][j+1] == true ) { numSurrounding++; }
            if ( (i+1) < GRIDSIZE && (j-1) >= 0 && this -> grid[i+1][j-1] == true ) { numSurrounding++; }
            if ( (i-1) >= 0 && (j+1) < GRIDSIZE && this -> grid[i-1][j+1] == true ) { numSurrounding++; }
            if ( (i-1) >= 0 && (j-1) >= 0 && this -> grid[i-1][j-1] == true ) { numSurrounding++; }
            if (numSurrounding < 2 || numSurrounding > 3) {
                tempGrid[i][j] = false;
            } else if ( numSurrounding == 2) {
                tempGrid[i][j] = this -> grid[i][j];
            } else if ( numSurrounding == 3) {
                tempGrid[i][j] = true;
            }
            numSurrounding = 0;
        }
    }
    for ( int i = 0 ; i < GRIDSIZE ; i++ ) {
        for ( int j = 0 ; j < GRIDSIZE ; j++ ) {
            if (this -> grid[i][j] != tempGrid[i][j]) { tmpcounter = 1; }
            this -> grid[i][j] = tempGrid[i][j];
        }
    }
    this -> generation++;
    if(tmpcounter == 0) this -> stable = true;
}

bool Grid::isExtinct() {
    for(int i = 0; i < GRIDSIZE; i++)
        for( int j = 0; j < GRIDSIZE; j++)
            if( this -> grid[i][j]) return false;
    return true;
}

bool Grid::isStable() {
    return this->stable;
}

void Grid::play(){
    int generation;
    printf( " \n\n FIRST GENERATION :- (Press ENTER to let the life begin)\n\n");
    this -> printFrame();
    getch();
    while(1) {
        system("CLS");
        this -> printFrame();
        Sleep(T);
        this -> nextFrame();
        if( this -> isExtinct()) {
            system("CLS");
            printf("\n\n\n\n *******************\n ALL THINGS CREATED MUST END SOMEDAY!\n *******************\n\n\n\n\n\n");
            printf("\n\n\n Total generations till extinction : %d \n\n\n\n\n\n", (this -> generation)-1);
            break;
        }
        if(this -> isStable()) {
            system("CLS");
            printf("\n\n\n\n *******************\n AND THE LIFE IS STAGNANT AFTER %d GENERATIONS! \n *******************\n\n\n\n\n\n", this -> generation);
            this -> printFrame();
            getch();
            break;
        }
    }
}

int main() {
    Grid life;
    life.play();
    return 0;
}

Answer: Prefer declaring variables where you need them. This makes it clearer where they are expected to be used. Don't worry about optimizing stack allocations, because the compiler will move the increment of the stack pointer to the beginning of the function anyway. Only when there is a cost to initializing the object should it be pulled out of loops.
void Grid::nextFrame() {
    bool tempGrid[GRIDSIZE][GRIDSIZE];
    for ( int i = 0; i < GRIDSIZE ; i++) {
        for ( int j = 0; j < GRIDSIZE ; j++) {
            int numSurrounding = 0;
            if ( (i+1) < GRIDSIZE && this -> grid[i+1][j] == true ) { numSurrounding++; }
            //...
        }
    }
    int gridChanged = 0;
    for ( int i = 0 ; i < GRIDSIZE ; i++ ) {
        for ( int j = 0 ; j < GRIDSIZE ; j++ ) {
            //...
        }
    }
}

You #define DEAD and ALIVE but then don't use them in the nextFrame function. Also, an enum beats a macro for a group of related constants.
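The eight hand-written bounds checks can be collapsed further by looping over the eight neighbor offsets. Here is a hypothetical Python sketch of the same update rule (not part of the review):

```python
def count_neighbors(grid, i, j):
    """Count live neighbors of cell (i, j), treating out-of-bounds as dead."""
    rows, cols = len(grid), len(grid[0])
    count = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue  # the cell itself is not a neighbor
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols and grid[ni][nj]:
                count += 1
    return count


def next_frame(grid):
    """One Game of Life step: birth on exactly 3, survival on 2 or 3."""
    return [
        [
            count_neighbors(grid, i, j) == 3
            or (cell and count_neighbors(grid, i, j) == 2)
            for j, cell in enumerate(row)
        ]
        for i, row in enumerate(grid)
    ]
```

The double update (build the whole next grid, then swap) mirrors the tempGrid approach in the C++ code; a vertical "blinker" of three cells oscillates with period two under this rule.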
{ "domain": "codereview.stackexchange", "id": 18343, "tags": "c++, beginner, game-of-life" }
How to factor the output of a CNOT acting on the input $|-,+\rangle$
Question: I am trying to implement the Deutsch oracle on a classical computer, using directions from this talk. There is this slide where they show how the CNOT gate modifies two Hadamard-transformed qubits: While I understand the math, I'm having trouble implementing the last part, where the resulting tensor product is factored into two qubits: $ \frac{1}{2} \begin{pmatrix} 1\\ -1\\ 1\\ -1 \end{pmatrix} = \begin{pmatrix} \frac{1}{\sqrt2}\\ \frac{1}{\sqrt2}\\ \end{pmatrix} \otimes \begin{pmatrix} \frac{1}{\sqrt2}\\ \frac{-1}{\sqrt2}\\ \end{pmatrix} $ In the talk, they say the control qubit is supposed to stay the same, so it is simple to derive the target qubit. However, in this case, the control qubit is modified, while the target qubit is not. So should I implement this by using two different calculations for each case (control/target qubit staying the same)? If so, how do I choose which calculation to use? Or is there a better way to do this, using just a single calculation? Answer: There are really two different questions here. How can you figure out that a given output can be written as a tensor product of two vectors? This is equivalent to asking: how do you figure out whether an output is separable? For pure states, which is what you are considering, this is rather easy. In your specific case (two qubits), you might simply notice that if $\psi=\psi^A\otimes \psi^B$ then there must be some specific relations between its elements. More specifically, in your notation, you should have $$\psi_2/\psi_1=\psi_4/\psi_3=\psi^B_2/\psi^B_1,\tag A$$ assuming $\psi_1,\psi_3,\psi^B_1\neq0$ (you should be able to work out the special cases with zeros without much difficulty). In this way you get the value of $\psi^B_2/\psi^B_1$, which is enough to know the full $\psi^B$, remembering that it must be a normalised vector. You can similarly work out the $\psi^A$ vector. If condition (A) is not satisfied, then you know that the output cannot be written as a product state, i.e.
does not admit this kind of tensor product decomposition. A more general technique to check for separability of pure states is to compute the entanglement entropy, which is the Von Neumann entropy of the reduced states. Given a pure bipartite state $\psi_{ij}$ (I'm using the notation $|\psi\rangle\equiv\sum_{ij}\psi_{ij}|i,j\rangle$ and then identifying $|\psi\rangle$ with $\psi_{ij}$), the associated density matrix is $\rho_{ijk\ell}\equiv\psi_{ij}\bar\psi_{k\ell}$, and the reduced density matrix is $\rho_{ik}=\sum_j \rho_{ijkj}$, which then reads $\rho_{ik}=\sum_j \psi_{ij}\bar \psi_{kj}.$ In the case of the output being separable, you have $\psi_{ij}=a_i b_j$ for some (normalised) vectors $a_i,b_j$, and thus $\rho_{ik}=a_i \bar a_k$, whose entropy is zero. As it turns out, the Von Neumann entropy is zero if and only if the (pure) state is separable, and therefore this method gives you a definitive answer about the separability. Why is the first qubit changed if the CNOT changes only the second one? The simple answer is that the statement "with the CNOT the control qubit is supposed to stay the same" is only true in the computational basis. Indeed, as an example, by simply applying local Hadamard operations on the two qubits you can convert a CNOT into a CNOT in which control and target qubits are inverted. How to do this is shown for example in the Wikipedia page.
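Both tests in the answer (the ratio condition (A) and the zero-entropy/Schmidt-rank view) amount to a rank condition on the reshaped amplitude matrix; a hypothetical numpy sketch:

```python
import numpy as np


def factor_two_qubit(psi, tol=1e-9):
    """Try to factor a two-qubit pure state (length-4 vector) as kron(a, b).

    Reshaping psi into a 2x2 matrix, the state is separable iff that matrix
    has rank 1 (equivalently, only one nonzero Schmidt/singular value); the
    leading singular vectors then give the two single-qubit factors.
    Returns (a, b) with psi == kron(a, b), or None if the state is entangled.
    """
    m = np.asarray(psi, dtype=complex).reshape(2, 2)
    u, s, vh = np.linalg.svd(m)
    if s[1] > tol:  # nonzero second Schmidt coefficient -> entangled
        return None
    return s[0] * u[:, 0], vh[0, :]
```

For the slide's state $(1,-1,1,-1)/2$ this recovers the two factors shown (up to how an overall phase is split between them), and it returns None for a Bell state.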
{ "domain": "quantumcomputing.stackexchange", "id": 986, "tags": "quantum-gate, linear-algebra, deutsch-jozsa-algorithm" }
How to calculate the charge distribution at the ortho/para or meta sites of a substituted benzene ring
Question: I've modelled a few substituted benzene rings in Avogadro and retrieved the partial charges of each atom; I've tried to sum the partial charges of the carbon and hydrogen atoms at the ortho/para and meta sites to see the overall partial charge at that site, but the results don't make sense, as the para site always ends up being 0. How can I calculate the charge at the sites? Answer: Summary: the method used does not handle π-conjugation. The partial charges in Avogadro (and many other software programs) are assigned by default by the Gasteiger-Marsili scheme.† Importantly, while the Gasteiger charges are fairly good for atomic partial charges of organic-ish molecules, they're unlikely to reproduce effects in conjugated systems very well. A method is presented for the rapid calculation of atomic charges in σ-bonded and nonconjugated π-systems. Atoms are characterized by their orbital electronegativities. In the calculation only the connectivities of the atoms are considered. Thus only the topology of a molecule is of importance. My suggestion is to use some type of quantum chemical calculation, e.g.: NWChem, OpenMOPAC, ORCA, etc. Even semiempirical quantum chemical methods (e.g. PM7) will allow you to assign charges based on the quantum electrostatic potential around the molecule. They'll be much more accurate for the trends you want. † "Iterative Partial Equalization of Orbital Electronegativity - A Rapid Access to Atomic Charges," Tetrahedron, Vol. 36, pp. 3219-3228, 1980.
{ "domain": "chemistry.stackexchange", "id": 6712, "tags": "aromatic-compounds, software, cheminformatics" }
Infinitesimal time intervals use
Question: I have a question, one that may sound obvious, on the use of infinitesimal quantities. Consider the expression for the acceleration in non-inertial frames: $\frac{d\vec{v}}{dt}=\frac{d\vec{v'}}{dt}+2\vec{\Omega}\times\vec{v'}+\frac{d\vec{\Omega}}{dt}\times\vec{r'}+\vec{\Omega}\times(\vec{\Omega}\times\vec{r'})$ The expression itself is not important (it is just used as an example). Here both $\vec{\Omega}$ and $\frac{d\vec{\Omega}}{dt}$ appear (a vector and its derivative). Now here we are trying to find the variation of $\vec{v}$ in an infinitesimal interval $dt$. Nevertheless, when we write $\vec{\Omega}$ we mean $\vec{\Omega}(t)$, i.e. the angular velocity at the time instant $t$. How can we talk about the time interval $dt$ while considering $\vec{\Omega}(t)$ (at a particular instant $t$)? Is this justified by the fact that $dt$ is an infinitesimal quantity? Answer: The derivative is defined at a point, not for an infinitesimal interval. $$\text{If $y=f(x)$} \Longrightarrow \; y'(x)|_{x=a}=\frac{\mathrm dy}{\mathrm dx}\bigg|_{x=a}=f'(a)=\lim_{\Delta x\to 0}{\frac{f(a+\Delta x)-f(a)}{\Delta x}}$$ $\large{\frac{\mathrm d\vec \Omega}{\mathrm dt}}$ doesn't represent a fraction. It represents the derivative of $\vec \Omega$ with respect to $t$ at a time instant like $t$ or $t_1$, etc. In other words, when we write $\frac{\mathrm d\vec \Omega}{\mathrm dt}$, this means $\frac{\mathrm d\vec \Omega}{\mathrm dt}\bigg|_{t=t}$, and this doesn't mean "variation of $\vec \Omega$" per "variation of $t$". It just means the derivative of $\vec \Omega$ with respect to $t$. "Now here we are trying to find the variation of $\vec v$ in an infinitesimal interval $\mathrm dt$." We don't need to find the variation of $\vec v$ in an infinitesimal interval $dt$. We need to find the derivative of $\vec v$ with respect to $t$ at each time instant $t$.
The acceleration vector of a particle at each time instant is defined as the derivative of the velocity vector of that particle at that time instant, not anything else.
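The "defined at a point, not over an interval" reading can be made concrete numerically; a small illustrative sketch (my own, not from the answer), where only values of $f$ near the single instant $a$ enter and $h$ plays the role of the vanishing increment:

```python
def derivative_at(f, a, h=1e-6):
    """Central-difference estimate of f'(a).

    The derivative is a pointwise limit: the estimate uses the function
    only in an arbitrarily small neighborhood of the single point a.
    """
    return (f(a + h) - f(a - h)) / (2 * h)
```

Shrinking h improves the estimate toward the limiting value, which is exactly the quantity $\frac{\mathrm d\vec\Omega}{\mathrm dt}\big|_{t=t}$ denotes.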
{ "domain": "physics.stackexchange", "id": 31073, "tags": "time, acceleration, differentiation" }
Flipping all incoming/outgoing edges from a vertex in a DAG
Question: I'm working on a problem where I have a directed acyclic graph and I need to repeatedly flip all incoming (or outgoing, or both incoming and outgoing) edges from a single vertex. I think that the resulting graph is still a DAG. Am I correct? Answer: This doesn't hold if we flip both outgoing and incoming edges, as shown by @Yuval Filmus. Here is my try at a proof by contradiction for flipping only outgoing or incoming edges (sorry if it's too informal):

1. Suppose that after flipping all incoming (outgoing) edges from vertex v in a DAG G, we get another graph that is not a DAG (it has a cycle).
2. Since we only changed the direction of the edges incident to v, the only way we lose the acyclic property is if there is a new cycle going through vertex v.
3. Since we flipped the directions of all incoming (outgoing) edges from vertex v, all edges incident to vertex v are now outgoing (incoming), so it is not possible for any cycle to go through vertex v, which contradicts 2.
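The claim for flipping only incoming (or, symmetrically, only outgoing) edges can be sanity-checked mechanically; a hypothetical Python sketch using a topological-sort acyclicity test:

```python
from collections import deque


def is_dag(n, edges):
    """Kahn's algorithm: True iff the digraph on vertices 0..n-1 is acyclic."""
    indeg = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    q = deque(i for i in range(n) if indeg[i] == 0)
    seen = 0
    while q:
        u = q.popleft()
        seen += 1
        for w in adj[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                q.append(w)
    return seen == n  # every vertex was popped <=> no cycle


def flip_incoming(edges, v):
    """Reverse every edge that points into v; all other edges are untouched."""
    return [(v, u) if w == v else (u, w) for u, w in edges]
```

Running the check over every vertex of a sample DAG never produces a cycle, consistent with the proof above.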
{ "domain": "cs.stackexchange", "id": 8989, "tags": "graphs, dag" }
Is a Convolutional Neural Net a special case of DNN? If so, how can the convolutional layer be modelled?
Question: In literature, Convolutional Neural Nets (CNNs) are presented as a special case of Deep Neural Nets (DNNs) (e.g., here). I do not understand how the convolutional layer can be implemented through a layer of neurons though. As far as I understand, a $n \times m$ kernel is used to calculate a single output feature over a combination of $n \cdot m$ input features in a certain stride. I see how this could be implemented as a layer in a Deep Neural Network, where basically $n \cdot m$ input features are the inputs for one output neuron, respectively. However, how would we ensure during training, that all the output neurons corresponding to the same kernel have the same weights for their inputs? Is it not the case that CNNs are a generalization of DNNs rather than the other way around? Answer: You're reading too much into the phrasing 'special case'. You already accurately understand the sense in which a CNN can be considered as a special case of a DNN; it is a DNN where many of the weights are repeated. Here people are probably referring to the model that is produced by the training procedure: every CNN could be expressed as a DNN with a particular pattern of repeated weights. This means that any function that can be computed by a CNN, can also be computed by a DNN. Don't worry about the rest, no one means anything more than that.
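The "repeated weights" phrasing can be made concrete in one dimension; a hypothetical numpy sketch expressing a convolutional layer as an ordinary dense (fully connected) layer with tied weights:

```python
import numpy as np


def conv_as_matrix(kernel, n):
    """Dense weight matrix of a stride-1 'valid' 1-D correlation over n inputs.

    Each row is one output neuron of a fully connected layer; the same
    kernel weights repeat in every row, shifted by one position. This is
    the sense in which a convolutional layer is a DNN layer whose weights
    are tied (repeated).
    """
    k = len(kernel)
    w = np.zeros((n - k + 1, n))
    for r in range(n - k + 1):
        w[r, r:r + k] = kernel
    return w
```

As for training: frameworks keep the rows equal by updating the tied entries together, i.e. the gradient for a shared weight is accumulated over every position where it appears, so the constraint is maintained by construction rather than enforced afterwards.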
{ "domain": "cs.stackexchange", "id": 21800, "tags": "machine-learning, neural-networks" }
Can gene co-expression networks be used to help identify differentially expressed genes?
Question: In my RNAseq dataset of differentiated stem-cell lines, some samples have far fewer significantly differentially expressed genes than others. QC shows that this is because there are way fewer reads for one sample than the others. Can gene co-expression networks be used to infer differential expression? For example, if certain genes just miss the q-value cutoff. If so, what papers have done so? Answer: While you can use networks to find differentially expressed genes (see the WGCNA package, which does this), in my experience this ends up largely matching what you'd get using a traditional package with a looser threshold for significance. Given the time savings of traditional packages, there's rarely any gain to using networks (it won't hurt to try, just note that it'll take some time). If some very interesting genes are just above your significance threshold then change your threshold. You need to validate your findings in some way anyway, so your p-value threshold is partly just a way to protect you from wasting your time (but if background knowledge suggests that your time wouldn't be wasted...).
{ "domain": "bioinformatics.stackexchange", "id": 207, "tags": "rna-seq, differential-expression, networks" }
Finding max of two elements in linear time with restriction
Question: I have a matrix in the following form:

ID | # Counts for ID | IDs Within Range
1  | 5               | 1,2,3
2  | 3               | 1,2
3  | 2               | 1,3
...

The idea is that I want to find the two IDs that, when you sum them, have the highest count. However, you must first exclude the pairs that overlap. For instance, IDs 1 and 2 have the highest count, but you can't use them since 1 has 2 within range and 2 has 1 within range. Thus from the table above you would use 2,3 as the sum since they don't overlap. I would like to do this in O(n) or O(n log n) time and O(n) storage. It's trivial to do if I first make all ID pairs, but that gives me O(n^2). I can also do two for-loops after sorting the list, but that would also be at worst O(n^2), although usually much faster... Any help would be greatly appreciated! Thanks!

Answer: Ok, so as I understand it, you are given as input an undirected graph (representing the "within range" restrictions) with weights on its vertices, and you want to find a non-adjacent pair of vertices that has maximum total weight. I'm going to interpret linear time as being linear in the number of edges in the input graph, not just the number of vertices, because otherwise it doesn't make sense: you need to have enough time to be able to look at the whole input. Here's an $O(m+n)$ time solution (simplifying an earlier solution I posted here):

1. Assume for simplicity that $n$ is a power of two (if not, round up). Find the median of the vertex weights, the median of the highest $n/2$ vertex weights, the median of the highest $n/4$ vertex weights, etc. These median computations take a total of $O(n)$ time, and once completed they give you the set of the $2^i$ largest weights for every choice of an integer $i$.
2. For each vertex $v$ with degree $d_v$, let $i_v=\lceil\log_2(d_v+1)\rceil$. Form the pairs $(v,w)$ where $w$ is in the set of $2^{i_v}$ largest vertex weights found during the median calculations.
3. From the sets of pairs found in the previous step, remove the pairs that correspond to adjacent pairs of vertices (either by using a hash table of edges for fast adjacency lookup, or by using two passes of bucket sort to make a sorted list of both pairs and input edges).
4. The remaining pairs for each vertex $v$ necessarily include the heaviest nonadjacent partner of $v$. In particular the heaviest nonadjacent pair in the whole graph is somewhere in this list of pairs. Compute the weight for each pair and choose the max.
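A simpler variant of the same idea (roughly $O(n\log n + m)$ rather than the strictly linear scheme above) can be sketched as follows; this is an illustration, not the answer's exact algorithm:

```python
def heaviest_nonadjacent_pair(weights, adj):
    """Find a non-adjacent pair (u, v), u != v, maximizing weights[u] + weights[v].

    adj maps each vertex to its set of neighbors. For each vertex v we scan
    the weight-sorted vertex list; the first non-neighbor encountered is
    v's best possible partner, and it appears within the first deg(v) + 2
    entries, so the scans cost O(m + n) after the O(n log n) sort.
    Returns None if every pair of vertices is adjacent.
    """
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    best = None
    for v in range(len(weights)):
        for u in order:
            if u != v and u not in adj[v]:
                cand = (weights[u] + weights[v], u, v)
                if best is None or cand[0] > best[0]:
                    best = cand
                break  # first valid u is the heaviest partner for v
    return None if best is None else (best[1], best[2])
```

On the question's example (counts 5, 3, 2 with 1 in range of both 2 and 3), this returns the pair {2, 3}, matching the expected answer.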
{ "domain": "cstheory.stackexchange", "id": 2050, "tags": "ds.algorithms, time-complexity, sorting" }
How does Logical Block Addressing affect OS disk scheduling optimizations such as SCAN (Elevator Algorithm)?
Question: If the OS is scanning from one edge of the disk to the other, doing so from behind the Logical Block Addressing (LBA) abstraction, although it may aim to service requests in an elevator-like way, what guarantee do we have that the physical mapping correlates with what we see as the OS? I.e., could it be that servicing requests in logical order would end up seeking to random physical sectors, or do logical addresses still approximately reflect physical locations? Answer: You are right, this could be a problem. In older disks, the physical block number corresponds to the actual location of the block on the disk, so that algorithm makes sense. Newer disks do fancier stuff. They have firmware that might remap blocks around to different locations, so that the physical block number no longer corresponds to the actual location of the block on disk in all cases (two blocks with similar physical block numbers might be far apart in actuality). Because this can screw up OS scheduling algorithms, disk firmware generally tries to avoid this situation where possible, but it can happen. It gets especially tricky with SSDs, which have very different performance properties than hard disks, but which must (for backwards compatibility) use an API originally designed for magnetic disks. So, modern SSDs do clever things to try to ensure that typical OS scheduling algorithms will still yield reasonable results, even though the physical block number doesn't necessarily correspond to the actual physical location of where the data is stored on the SSD.
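The elevator ordering the question refers to is easy to state over logical block numbers; here is a hypothetical one-direction SCAN sketch (the answer's point being that this order is only physically near-optimal to the extent the firmware keeps the LBA-to-physical mapping roughly monotone):

```python
def scan_order(requests, head):
    """Order pending block requests elevator-style (SCAN), sweeping upward.

    Requests at or above the current head position are serviced in
    increasing block order; the remaining requests are picked up in
    decreasing order on the return sweep.
    """
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down
```

With head at block 53 and requests {98, 183, 37, 122, 14, 124, 65, 67}, the service order is 65, 67, 98, 122, 124, 183, then 37 and 14 on the way back.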
{ "domain": "cs.stackexchange", "id": 6699, "tags": "operating-systems, scheduling" }
Couldn't find executable named rgbdslam
Question: I was following rgbdslam tutorial. I'm using ros groovy and ubuntu 12.04. I have installed rgbdslam using the given steps in http://www.ros.org/wiki/rgbdslam. When I run 'rosrun rgbdslam rgbdslam' it gives the following error. Can somebody help? [rosrun] Couldn't find executable named rgbdslam below /home/test/ros_workspace/rgbdslam_freiburg/rgbdslam [rosrun] Found the following, but they're either not files, [rosrun] or not executable: [rosrun] /home/test/ros_workspace/rgbdslam_freiburg/rgbdslam Originally posted by Cham on ROS Answers with karma: 11 on 2013-07-29 Post score: 0 Answer: Did you do rosmake rgbdslam_freiburg? And if so, what was the output? Originally posted by Felix Endres with karma: 6468 on 2013-08-13 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Cham on 2013-08-18: Hello,, Thank you for your response. Yes I did the step that you have mentioned above. It didn't give any error that time. I reinstalled ubuntu and everything is working fine now. Thank you very much.
{ "domain": "robotics.stackexchange", "id": 15090, "tags": "slam, navigation, rosrun" }
Bead on a rotating hoop
Question: This is problem 10.13 from Fowles and Cassiday, 7e. A bead of constant mass m is constrained to slide along a thin, circular hoop of radius $l$ that rotates with constant angular velocity $\omega$ in a horizontal plane about a point on its rim as shown. I need to figure out the kinetic energy of the bead to write down the Lagrangian. I want to define a vector $\vec{r}_{0}$ from the origin of the x-y system to the center of the hoop using polar coordinates and a vector $\vec{r}_{1}$ from the center of the hoop to the bead of mass m. Then $$ \vec{r}_{0} = l\hat{e}_{r_{0}} $$ $$ \vec{r}_{1} = l\hat{e}_{r_{1}} $$ where $\hat{e}_{r_{1}}$ points radially outward from the center of the hoop and $\hat{e}_{r_{0}}$ points radially outward from the center of the x-y plane. I want to use polar coordinates, with one set emanating from the x-y plane and a second set from the center of the hoop. Then the position of the bead is $\vec{r}_{m} = \vec{r}_{0} + \vec{r}_{1}$. I get then that $\dot{\vec{r}}_{0} = l\omega\hat{e}_{\theta_{0}}$ and $\dot{\vec{r}}_{1} = l\dot{\theta}\hat{e}_{\theta_{1}}$ where I've defined $\theta_{0} = \omega t$. So $$ \dot{\vec{r}}_{m} \cdot \dot{\vec{r}}_{m} = l^{2}\omega^{2} + l^{2}\dot{\theta}^{2} + 2(l^{2}\omega\dot{\theta}\hat{e}_{\theta_{1}} \cdot \hat{e}_{\theta_{0}}) = l^{2}\omega^{2} + l^{2}\dot{\theta}^{2} + 2l^{2}\dot{\theta}\omega \cos(\theta)$$ but this isn't right. I suspect that I'm ignoring some consequences of the origin at the center of the hoop rotating with respect to the fixed x-y reference frame, but I'm not sure exactly what consequence. I thought I was taking it into account by adding the rotating vector, but I guess not. Answer: The problem is that you are thinking of defining two coordinate systems, and then trying to combine them. This is incredibly dicey and requires extremely precise thinking to make sure all terms are accounted for. I would go so far as to call it a nightmare.
Instead, just think of the position of the particle as the sum of two separate position vectors using the same polar coordinates. Associate your first position vector $\vec{r_0}$ with $\omega t$ and associate your second position vector $\vec{r_1}$ with $\omega t + \theta$. Then in your equations, everywhere there is a $\dot\theta$, it's replaced with $ \omega + \dot\theta$. Because your position vectors use the same polar coordinate system the dot product and time derivatives won't pick up any strange extra terms. A quick test you can use is to test the equation in edge cases where $\theta$ does not vary. If $\dot\theta = 0$, then radius is fixed and the total kinetic energy is easy to calculate. Your more general formula should reduce to the proper expression.
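Carrying out the answer's prescription explicitly (a sketch, not part of the original answer: one polar coordinate system for both vectors, the second vector at angle $\omega t+\theta$) gives

```latex
\vec{r}_m = l\,\hat{e}_r(\omega t) + l\,\hat{e}_r(\omega t+\theta),
\qquad
\dot{\vec{r}}_m = l\omega\,\hat{e}_\theta(\omega t) + l(\omega+\dot\theta)\,\hat{e}_\theta(\omega t+\theta),
% and since \hat{e}_\theta(\omega t)\cdot\hat{e}_\theta(\omega t+\theta)=\cos\theta,
\dot{\vec{r}}_m\cdot\dot{\vec{r}}_m
 = l^2\omega^2 + l^2(\omega+\dot\theta)^2 + 2l^2\omega(\omega+\dot\theta)\cos\theta,
\qquad
T=\tfrac{m}{2}\,\dot{\vec{r}}_m\cdot\dot{\vec{r}}_m .
```

The edge-case check mentioned in the answer works out: setting $\dot\theta=0$ collapses this to $2l^2\omega^2(1+\cos\theta)=4l^2\omega^2\cos^2(\theta/2)$, i.e. rigid rotation at the fixed distance $2l\cos(\theta/2)$ from the pivot.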
{ "domain": "physics.stackexchange", "id": 19941, "tags": "homework-and-exercises, lagrangian-formalism" }
Approximating sums as integrals and divergent terms
Question: I have the following sum (notice that the sum starts from 2, i.e. there's no divergence): $$\sum_{i=2}^{N}C_i\dfrac{\exp{\left(-k| \mathbf{R}_i-\mathbf{R}_1| \right) }}{| \mathbf{R}_i-\mathbf{R}_1|}$$ Where $\mathbf{R}_i$ are vectors belonging to $\mathbb{R}^3$ and are enclosed in some volume $V$ (they represent the positions of some atoms). $C_i$ is some well behaved function (we might as well take it to be 1). Now suppose I want to approximate this sum as an integral, in the limit where $N \rightarrow \infty$ and the atoms at positions $\mathbf{R}_i$ are densely close to each other. My tentative answer would be to write: $$\lim_{N \rightarrow \infty} \sum_{i=2}^{N}C_i \dfrac{\exp{\left(-k| \mathbf{R}_i-\mathbf{R}_1| \right) }}{| \mathbf{R}_i-\mathbf{R}_1|} = \int_V d^3\mathbf{R} \dfrac{\exp{\left(-k| \mathbf{R}-\mathbf{R_1}| \right) }}{| \mathbf{R}-\mathbf{R_1}|} \rho(\mathbf{R}) C(\mathbf{R}) $$ where in this limit: $\mathbf{R}:=\mathbf{R}_i$, and $\rho(\mathbf{R})=\dfrac{N}{V}$. Is this in some way rigorous? I think it makes sense, as I often saw a similar procedure in Statistical Mechanics. Now, what about the term $\mathbf{R}_i=\mathbf{R}_1$? In the sum that term is divergent and is not included. But in the integral it is somewhat impossible to exclude it, and it doesn't give any problem, as its divergence seems to be cancelled by the integration in 3 variables. Is there a way to convince myself that the error I'm making is negligible?
It might now be quite hard to convert $\rho$ and $C$ to polar coordinates in this frame of reference, depending on the symmetries of your problem. If, as I suspect, $\rho$ is unknown and will be found using this integral, then you shouldn't have a problem. But I don't know, and I hope this helps anyway. $$ ... = \int r e^{-kr}\sin\theta\,\rho(r,\theta,\phi)\, C(r, \theta, \phi) \,dr\, d\theta\, d\phi.$$ Notice that changing coordinates introduced an $r^2\sin\theta$ Jacobian, one power of $r$ cancelling the $1/r$. This shows (unless I'm missing something!) that your integral doesn't diverge, if $\rho$ and $C$ are well-behaved.
{ "domain": "physics.stackexchange", "id": 49239, "tags": "statistical-mechanics, integration, many-body, approximations" }
How to check for free cells in a line in the costmap?
Question: Hello, What I want to do is something similar to a raycast. From a specific position, draw a line and get the list of all the free cells in that line. The line has a maximum length, but it has to stop if it reaches an obstacle. Is there an existing API for that? Or do I need to implement it myself? Thanks! Originally posted by g.bardaro on ROS Answers with karma: 110 on 2018-07-11 Post score: 0 Answer: In the end I solved it by implementing a new layer on the costmap and using this: http://docs.ros.org/kinetic/api/costmap_2d/html/classcostmap__2d_1_1Costmap2D.html#af1d619d9b49b6851cb0a33de4e39ec78 Originally posted by g.bardaro with karma: 110 on 2018-08-10 This answer was ACCEPTED on the original site Post score: 2
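The linked Costmap2D method does the grid traversal internally; if you need to roll your own, the core is a Bresenham line walk that collects free cells until the first obstacle or the map edge. A minimal Python sketch (the grid layout, cost values, and function name are illustrative assumptions, not the `costmap_2d` API):

```python
def raycast(grid, x0, y0, x1, y1):
    """Collect free cells on the line from (x0, y0) toward (x1, y1),
    stopping at the first obstacle cell, the grid edge, or the endpoint.
    grid[y][x] == 0 means free; anything else is an obstacle."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 >= x0 else -1
    sy = 1 if y1 >= y0 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        if not (0 <= y < len(grid) and 0 <= x < len(grid[0])):
            break                     # walked off the map
        if grid[y][x] != 0:
            break                     # hit an obstacle
        cells.append((x, y))
        if (x, y) == (x1, y1):
            break                     # reached maximum range
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells
```

The maximum-length requirement is handled by choosing the endpoint `(x1, y1)` at the desired range before calling.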
{ "domain": "robotics.stackexchange", "id": 31248, "tags": "ros, navigation, ros-kinetic, costmap, 2dcostmap" }
libg2o installation problem on ROS Hydro Raspbian Wheezy
Question: I have completed RGBDSLAM v2's prerequisite of installing ROS Hydro according to Installing ROS Hydro on Raspberry Pi and put felixendres-rgbdslam_v2-7450b20 in catkin_ws/src. When I run rosdep install --from-paths src --ignore-src --rosdistro hydro -y --os=debian:wheezy I got ERROR: the following packages/stacks could not have their rosdep keys resolved to system dependencies: rgbdslam: No definition of [libg2o] for OS [debian] then I run roslocate info libg2o I got Using ROS_DISTRO: hydro WARNING: Package "libg2o" does not follow the version conventions. It should not contain leading zeros (unless the number is 0). Not found via rosdistro - falling back to information provided by rosdoc. Missing VCS control information for package libg2o, requires vcs[] and vcs_uri[] Can anyone help with this problem? I am trying to get RGBDSLAM running on a Raspberry Pi B+ with Wheezy (2014-12-24) and a Kinect. Thank you! Originally posted by enyen on ROS Answers with karma: 11 on 2015-01-12 Post score: 0 Answer: Not all packages are available on Raspbian. Originally posted by tfoote with karma: 58457 on 2015-03-03 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 20546, "tags": "ros, slam, navigation, rgbdslamv2, raspbian" }
Commutator of the Pauli-Lubanski vector operator and the generator of translations $P^\alpha$
Question: I'm trying to obtain the commutation relation between the Pauli-Lubanski vector operator and the generators of the Lorentz Group: $$[W^\mu,P_\sigma]=[\frac{1}{2}\epsilon^{\mu\nu\lambda\rho} P_\nu M_{\lambda\rho},P_\sigma]\\ \hspace{2.3cm}= \frac{1}{2}\epsilon^{\mu\nu\lambda\rho}[ P_\nu M_{\lambda\rho},P_\sigma]\\ \hspace{3.9cm}=\frac{1}{2}\epsilon^{\mu\nu\lambda\rho}P_\nu[ -\eta_{\lambda\sigma}P_\rho+\eta_{\rho\sigma}P_\lambda]\\ \hspace{5cm}=\frac{1}{2}\epsilon^{\mu\nu\lambda\rho}\eta_{\rho\sigma}P_\nu P_\lambda-\frac{1}{2}\epsilon^{\mu\nu\lambda\rho}\eta_{\lambda\sigma}P_\nu P_\rho $$ Now, I know a priori that this commutator is zero and I'm trying to change indices accordingly. For instance, in the first term I want to rename the dummy indices $\lambda$ to $\rho$ and vice-versa. This will permute the respective last indices in the Levi-Civita tensor. Since I'm only renaming indices, I assume that I don't have to put a minus sign when doing so: $$=\frac{1}{2}\epsilon^{\mu\nu\rho\lambda}\eta_{\lambda\sigma}P_\nu P_\rho-\frac{1}{2}\epsilon^{\mu\nu\lambda\rho}\eta_{\lambda\sigma}P_\nu P_\rho$$ But now, to get the same form for the Levi-Civita tensor in both terms I permute the last two indices of that tensor in the first term, taking into account that it is an antisymmetric tensor: $$=-\frac{1}{2}\epsilon^{\mu\nu\lambda\rho}\eta_{\lambda\sigma}P_\nu P_\rho-\frac{1}{2}\epsilon^{\mu\nu\lambda\rho}\eta_{\lambda\sigma}P_\nu P_\rho=-\epsilon^{\mu\nu\lambda\rho}\eta_{\lambda\sigma}P_\nu P_\rho$$ which is not $0$. Have I gone wrong somewhere? If I do the same for the second term instead of the first I get the same result but positive. Since the results must be equal, the only possibility for something of the form $+\text{final result}=-\text{final result}$ is for $\text{final result}=0$. Does it make sense? 
Answer: Every term of the form $$\varepsilon^{\mu \nu \rho \sigma} P_{\rho} P_{\sigma}$$ in your calculation is zero, including the thing you said was not zero, because e.g. in $\varepsilon^{\mu \nu \rho \sigma} P_{\rho} P_{\sigma}$ the $\varepsilon^{\mu \nu \rho \sigma}$ is anti-symmetric in $\rho$ and $\sigma$, $\varepsilon^{\mu \nu \rho \sigma} = - \varepsilon^{\mu \nu \sigma \rho}$, while $P_{\rho} P_{\sigma}$ is symmetric in $\rho$ and $\sigma$, $P_{\rho} P_{\sigma} = P_{\sigma} P_{\rho}$. The simplest way to show that this is zero is to prove that $A = - A$ so that $2A = 0$. For simplicity we consider the two-dimensional analogue $$\varepsilon^{\mu \nu} P_{\mu} P_{\nu}$$ where $\mu, \nu = 0,1$. I want to show that $\varepsilon^{\mu \nu} P_{\mu} P_{\nu} = - \varepsilon^{\mu \nu} P_{\mu} P_{\nu}$. The calculation is as follows: \begin{align} \varepsilon^{\mu \nu} P_{\mu} P_{\nu} &= + \varepsilon^{\nu \mu} P_{\nu} P_{\mu} \ \ (1) \\ &= - \varepsilon^{\mu \nu} P_{\nu} P_{\mu} \ \ (2) \\ &= - \varepsilon^{\mu \nu} P_{\mu} P_{\nu} \ \ (3) \end{align} In line $(1)$ I used the fact that I can just re-label dummy indices whatever way I want, since they are dummy indices, and here I want to have them written in reverse order so I can later invoke anti-symmetry on $\varepsilon^{\mu \nu}$ and symmetry on $P_{\mu} P_{\nu}$. To see very explicitly why I can re-label dummy indices, just write it out: \begin{align} \varepsilon^{\mu \nu} P_{\mu} P_{\nu} &= \varepsilon^{0 \nu} P_{0} P_{\nu} + \varepsilon^{1 \nu} P_{1} P_{\nu} \\ &= (\varepsilon^{00} P_{0} P_{0} + \varepsilon^{0 1} P_{0} P_{1}) + (\varepsilon^{1 0} P_{1} P_{0} + \varepsilon^{1 1} P_{1} P_{1}) \\ &= (\varepsilon^{0\mu} P_{0} P_{\mu}) + (\varepsilon^{1 \mu} P_{1} P_{\mu}) \\ &= \varepsilon^{\nu \mu} P_{\nu} P_{\mu}. \end{align} Note I have done absolutely nothing but write it out so that there are no dummy indices, then collect the terms up with dummy indices again, but now using a different labelling. 
In going from $(1)$ to $(2)$ I used the anti-symmetry of $\varepsilon^{\mu \nu}$ and in going from $(2)$ to $(3)$ I used the symmetry of $P_{\mu} P_{\nu}$. Now I have $A = - A$ so that $2A = 0$. Another way to prove this result is to write it in the form $A = \frac{1}{2} A + \frac{1}{2} A = \frac{1}{2}A - \frac{1}{2} A = 0$ which is just a longer way of doing the above calculation, and clearly uses the above calculation in going from the $+$ to the $-$, but it is often used (e.g. to derive the angular momentum operators in special relativity/classical mechanics etc...) so it's good to be aware: \begin{align} \varepsilon^{\mu \nu \rho \sigma} P_{\nu} P_{\rho} &= \frac{1}{2} \varepsilon^{\mu \nu \rho \sigma} P_{\nu} P_{\rho} + \frac{1}{2} \varepsilon^{\mu \nu \rho \sigma} P_{\nu} P_{\rho} \ \ (1) \\ &= \frac{1}{2} \varepsilon^{\mu \nu \rho \sigma} P_{\nu} P_{\rho} - \frac{1}{2} \varepsilon^{\mu \rho \nu \sigma} P_{\nu} P_{\rho} \ \ (2) \\ &= \frac{1}{2} \varepsilon^{\mu \nu \rho \sigma} P_{\nu} P_{\rho} - \frac{1}{2} \varepsilon^{\mu \nu \rho \sigma} P_{\rho} P_{\nu} \ \ (3) \\ &= \frac{1}{2} \varepsilon^{\mu \nu \rho \sigma} P_{\nu} P_{\rho} - \frac{1}{2} \varepsilon^{\mu \nu \rho \sigma} P_{\nu} P_{\rho} \ \ (4) \\ &= 0. \end{align} In line $(1)$ I know that $\varepsilon^{\mu \nu \rho \sigma}$ is anti-symmetric while $P_{\nu} P_{\rho}$ is symmetric so that the whole thing is immediately zero, and I want to show this explicitly by turning it into something like $A = \frac{1}{2} A + \frac{1}{2} A = \frac{1}{2} A - \frac{1}{2} A = 0$, so I introduce the $1/2$ just to get two copies of it which I expect will cancel one another. In going $(1)$ to to $(2)$ I just used the anti-symmetry of $\varepsilon^{\mu \nu \rho \sigma}$ to write one of them with a $-$ sign. 
In going from $(2)$ to $(3)$ I then re-labelled the dummy indices so that I would have $\varepsilon^{\mu \nu \rho \sigma}$ in both terms, In going from $(3)$ to $(4)$ I then used commutativity of $P_{\mu}$ and $P_{\nu}$. Note $(4)$ is now in the form $A = \frac{1}{2}A - \frac{1}{2}A = 0$.
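As a quick numerical sanity check of the identity (separate from the index gymnastics above), one can build the 4-index Levi-Civita symbol and contract its last two indices against the symmetric product $P_\rho P_\sigma$; every component comes out zero. A Python sketch with arbitrary commuting "momenta":

```python
import itertools
import random

def perm_sign(p):
    """Sign of a permutation of (0, ..., n-1), via transposition sorting."""
    q, sign = list(p), 1
    for i in range(len(q)):
        while q[i] != i:
            j = q[i]
            q[i], q[j] = q[j], q[i]
            sign = -sign
    return sign

# eps[(mu, nu, rho, sigma)] = +1/-1 on permutations of (0,1,2,3); 0 otherwise
eps = {p: perm_sign(p) for p in itertools.permutations(range(4))}

random.seed(0)
P = [random.random() for _ in range(4)]  # commuting components, P[r]*P[s] symmetric

# contract the last two indices with P[rho] * P[sigma]
contracted = [
    [sum(eps.get((mu, nu, r, s), 0) * P[r] * P[s]
         for r in range(4) for s in range(4))
     for nu in range(4)]
    for mu in range(4)
]
# antisymmetric (in rho, sigma) times symmetric: every entry vanishes
```

For quantum operators the same argument needs $[P_\rho, P_\sigma] = 0$, which is exactly the commutativity used in step $(4)$.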
{ "domain": "physics.stackexchange", "id": 65723, "tags": "quantum-field-theory, commutator, poincare-symmetry" }
Index matching algorithm without hash-based data structures?
Question: I am programming in C, so I do not want to implement a hash-based data structure such as a hashset or hashmap/dictionary. However, I need to solve the following task in linear time. Given two arrays $a$ and $b$ which contain the same set of distinct integers, determine for every element of $a$ the index of the same element in $b$. For example, if $a=[9,4,3,7]$ and $b=[3,4,7,9]$, then the output should be $[3,1,0,2]$. Note that this becomes a very easy task when you have a hashset, because you can simply store for every element in $b$ the index, and then query the hashmap for every element of $a$. So my question is whether there is a linear algorithm for this task that does not use any hashsets. Answer: If the only operation allowed between any two (possibly the same) elements in the two arrays is to determine which one is the smaller one, then it will take $\Theta(n\log n)$ time in the worst case for any algorithm. This can be seen from the situation when array $a$ is sorted while array $b$ is arbitrary before we apply the algorithm. Knowing the index $I(k)$ of the element in $b$ which is the same as the $k$-th element of $a$ for all $k$, we can sort $b$ in $O(n)$ time by simply putting $b_{I(k)}$ in the $k$-th position (using one temporary working space or a new result array of length $n$). However, it is well known that it takes at least $\Theta(n\log n)$ time (comparisons) to sort $b$ in the worst case for any algorithm. So obtaining that knowledge, i.e. the index $I(k)$ for all $k$, must take at least $$\Theta(n\log n)- O(n)=\Theta(n\log n)$$ time as well in the worst case. The following is a formal formulation of the conclusion above in the comparison computation model. Let $\mathcal O$ be an oracle that can tell a fixed strict linear ordering on $E$, a set of $n$ elements. That is, on input $e,f\in E$, $\mathcal O$ outputs -1 if $e\prec f$, 0 if $e$ is $f$ and 1 otherwise. Let $a$ and $b$ be two bijections from $\{0, 1,\cdots, n-1\}$ to $E$. 
To output $I(0), I(1), \cdots, I(n-1)$ in that order such that $a(k)=b(I(k))$ for all $0\le k\le n-1$, it will take $\Theta(n\log n)$ queries against $\mathcal O$ in the worst case. As for "whether there is a linear algorithm for this task that does not use any hashsets": a computation model that is defined by no usage of hashsets is not a well-defined computation model. How can you check there is no usage of a hashset? There are literally hundreds of ways to implement a data structure that is a hashset, looks like a hashset, or partially looks like one. In general, a well-defined computation model must be defined by what can be done formally.
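A comparison-only algorithm that meets this $\Theta(n\log n)$ bound is short to write down: sort the indices of both arrays by value, then pair them up rank by rank. A Python sketch (the function name is mine):

```python
def match_indices(a, b):
    """For each a[i], return the index j with b[j] == a[i].
    Uses only comparisons (sorting), no hashing: O(n log n)."""
    sa = sorted(range(len(a)), key=lambda i: a[i])  # a's indices by value
    sb = sorted(range(len(b)), key=lambda j: b[j])  # b's indices by value
    out = [0] * len(a)
    for ia, ib in zip(sa, sb):
        # the k-th smallest value of a sits at b's k-th smallest position
        out[ia] = ib
    return out
```

For the question's example, `match_indices([9, 4, 3, 7], [3, 4, 7, 9])` gives `[3, 1, 0, 2]`. If the integers are additionally known to lie in a small range $[0, U)$, a plain direct-address array of size $U$ recovers linear time with no hashing at all.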
{ "domain": "cs.stackexchange", "id": 13540, "tags": "search-algorithms, hash-tables, permutations" }
How to write arbitrary line elements in isotropic form?
Question: I have the following metric $$(ds)^2 = A(r) dt^2 + 2B(r) drdt - C(r)dr^2 - r^2d\Omega^2,$$ where $d\Omega^2 = d\theta^2 + \sin^2 \theta \;d\phi^2$. Is it possible to write this metric in isotropic form without performing a coordinate transformation in the time variable to remove the non-orthogonal components of the metric tensor? I have seen this answer. However, by following the method I get stuck with the $drdt$ term. I notice in Cheng, Relativity, Gravitation and Cosmology they remove the off-diagonal term also. Any suggestions? Answer: You need to get rid of the $dr dt$ term first before you get a chance to make the coordinates isotropic. The most general method is just the reduction of a quadratic form, here $2\times 2$. Let's forget about the angular part: $$(ds)^2 = \begin{pmatrix}dt & dr\end{pmatrix} \begin{pmatrix}A(r) & B(r) \\ B(r) & -C(r)\end{pmatrix}\begin{pmatrix}dt\\dr\end{pmatrix}.$$ You just need to diagonalise that matrix. That will give you a linear transformation $(dt,dr)\to(dt',dr')$ such that $$ds^2 = \mathcal{A}dt'^2 - \mathcal{B}dr'^2$$ Then you can further try to change variables to get an isotropic form.
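Concretely, the reduction can be done by completing the square: with $dt' = dt + \frac{B}{A}\,dr$ one gets $\mathcal{A} = A$ and $\mathcal{B} = C + B^2/A$ (assuming $A \neq 0$). A small numerical spot-check of that identity in Python:

```python
import random

def line_element(A, B, C, dt, dr):
    """ds^2 = A dt^2 + 2 B dr dt - C dr^2 (angular part omitted)."""
    return A * dt**2 + 2 * B * dr * dt - C * dr**2

def diagonalised(A, B, C, dt, dr):
    """Completed square: ds^2 = A dt'^2 - (C + B^2/A) dr^2
    with dt' = dt + (B/A) dr. Assumes A != 0."""
    dtp = dt + (B / A) * dr
    return A * dtp**2 - (C + B**2 / A) * dr**2

# check that the two forms agree for many random coefficients
random.seed(0)
for _ in range(1000):
    A = random.uniform(0.5, 2.0)
    B = random.uniform(-2.0, 2.0)
    C = random.uniform(0.5, 2.0)
    dt = random.uniform(-1.0, 1.0)
    dr = random.uniform(-1.0, 1.0)
    assert abs(line_element(A, B, C, dt, dr)
               - diagonalised(A, B, C, dt, dr)) < 1e-9
```

This mirrors the matrix picture above: the same row operation that completes the square zeroes the off-diagonal $B$ entries.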
{ "domain": "physics.stackexchange", "id": 41854, "tags": "general-relativity, gravity, differential-geometry, metric-tensor, relativity" }
Functional Groups Identification
Question: Identify functional groups of the following organic molecule: A. ketone, alkene, carboxylic acid, ester B. alkyne, ester, carboxylic acid, aldehyde C. carboxylic acid, alkene, ketone, ester D. ester, aldehyde, carboxylic acid, alkene I am not sure what the answer is. I know it can't be B because there is no alkyne triple bond. However, shouldn't an alkene only exist in a straight chain? I know there is a COOH carboxylic acid, R-O-R' ether and a CHO aldehyde, and a double bond O for a ketone. I was wondering if someone could tell me the right answer. Answer: The only right choice is D, because there is an ester, even though this ester is intramolecular, being made of the bridge -CO-O- in the lower part of the molecule. The aldehyde group is -CHO in the R.H.S. The carboxylic acid is in the upper part of the middle. And there are two alkene groups in the cycle at L.H.S.
{ "domain": "chemistry.stackexchange", "id": 13096, "tags": "organic-chemistry" }
What does it mean to divide by the degeneracy of the state in this textbook excerpt?
Question: This section of Griffiths Introduction to Quantum Mechanics deals with Boltzmann, Fermi-Dirac, and Bose-Einstein distributions. I don't understand this line (highlighted in yellow): Let's talk only of Maxwell-Boltzmann here to keep it simple. Originally, we had $$N_n=d_ne^{-(\alpha+\beta E_n)}$$ This was explained in the book to be the equation for the most probable occupation number for distinguishable particles. Then, in the image above, the author divides by $d_n$ to result in "the number of particles in a particular state with that energy", but I don't quite understand this. Could someone explain this bit in simpler terms? Or with a simple example? Answer: The formulas in Griffiths are correct, but the explanation is pretty clumsy, because he's basically done the derivation 'in reverse'. For simplicity I'll just talk about the distinguishable particle case, but the others are similar. The derivation in the forward direction looks like this: the Maxwell-Boltzmann distribution is the distribution that maximizes the entropy given fixed energy. Here, the entropy is defined as $$S \sim \sum p_i \log p_i$$ and the $p_i$ are the probabilities of occupancies of each state (not each energy level!). If you carry out the constrained optimization, using a similar method to Griffiths, you'll arrive at equation 5.103. Now, the probability of occupancy of a state only depends on its energy. Let's say that the probability of occupancy of a state at some energy is $p_n = 1/2$, and the degeneracy is $d_n = 10^6$. Then by the law of large numbers, the total occupancy $N_n$ of this entire energy level will be very close to $p_n d_n = (1/2) 10^6$. The occupancy could certainly be more or less, but the probability distribution will be peaked about this central value. The only problem with this approach is that the definition of $S$ is a little unintuitive. 
So instead, Griffiths works only with occupancy numbers $N_n$, so he can just "count the number of ways" to achieve those numbers instead of dealing with the probabilities $p_n$. Then, he implicitly takes the high $d_n$ limit, so that $N_n \approx p_n d_n$, and calculates $p_n = N_n / d_n$. The high $d_n$ limit is necessary so that the probability estimated by this ratio is accurate. For example, if $p_n = 2/3$ but $d_n = 10$, the most likely occupancy number could be $N_n = 7$. Then dividing would give the approximation $p_n \approx 0.7$. For our calculated value of $p_n$ to be good, we must take $d_n$ to infinity. A final muddy point is that Griffiths accidentally calls the probabilities $p_n$ "the most likely occupancy numbers of a state", even though this makes no sense because $p_n$ isn't even an integer, it's a probability between $0$ and $1$. This clumsy wording is because Griffiths has swept all of the probability language under the rug in favor of occupancy numbers, but it's just not right.
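The "peaked about $p_n d_n$" claim is easy to see numerically: occupying each of $d_n$ states independently with probability $p_n$ gives a total $N_n$ whose relative fluctuation shrinks like $1/\sqrt{d_n}$, so $N_n/d_n$ recovers $p_n$ only in the large-degeneracy limit. A quick Python sketch (the numbers are illustrative, not from Griffiths):

```python
import random

random.seed(42)
p_n = 0.5      # occupancy probability of each state at this energy
d_n = 10**6    # degeneracy of the energy level (taken large)

# occupy each of the d_n states independently with probability p_n
N_n = sum(1 for _ in range(d_n) if random.random() < p_n)

# the ratio N_n / d_n estimates p_n to ~ 1/sqrt(d_n) accuracy
estimate = N_n / d_n
```

Rerunning with `d_n = 10` shows estimates that routinely miss $p_n$ by $0.1$ or more, which is exactly why the high-$d_n$ limit is needed for $p_n \approx N_n/d_n$ to be a good approximation.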
{ "domain": "physics.stackexchange", "id": 37250, "tags": "quantum-mechanics, energy, statistical-mechanics, density-of-states" }
First attempt at a pure JavaScript slider
Question: I'm working on this pure JS slider. It's quite a basic one, but it's what I need. I would like your feedback on it and any improvements/features that I'm missing. HTML <div class="container"> <ul id="slides"> <li class="slide showing">Slide 1</li> <li class="slide">Slide 2</li> <li class="slide">Slide 3</li> <li class="slide">Slide 4</li> <li class="slide">Slide 5</li> </ul> <div class="buttons"> <button class="controls" id="previous">&lt;</button> <button class="controls" id="next">&gt;</button> </div> JS var currentSlide = 0; var slides = document.querySelectorAll("#slides .slide"); var controls = document.querySelectorAll(".controls"); var next = document.getElementById("next"); var previous = document.getElementById("previous"); for (var i = 0; i < controls.length; i++) { controls[i].style.display = "inline-block"; } function goToSlide(n) { slides[currentSlide].className = "slide"; currentSlide = (n + slides.length) % slides.length; slides[currentSlide].className = "slide showing"; } function nextSlide() { goToSlide(currentSlide + 1); } function previousSlide() { goToSlide(currentSlide - 1); } next.onclick = function() { nextSlide(); }; previous.onclick = function() { previousSlide(); }; CSS /* Essential - Core */ #slides { position: relative; height: 300px; padding: 0px; margin: 0px; list-style-type: none; } .slide { position: absolute; left: 0px; top: 0px; width: 100%; height: 100%; opacity: 0; z-index: 1; -webkit-transition: opacity 1s; -moz-transition: opacity 1s; -o-transition: opacity 1s; transition: opacity 1s; } .showing { opacity: 1; z-index: 2; } /* Non-essential - Styles */ .slide { font-size: 40px; padding: 40px; box-sizing: border-box; background: #333; color: #fff; } .slide:nth-of-type(1) { background: red; } .slide:nth-of-type(2) { background: orange; } .slide:nth-of-type(3) { background: green; } .slide:nth-of-type(4) { background: blue; } .slide:nth-of-type(5) { background: purple; } .controls { color: #fff; font-size: 40px; cursor: 
pointer; } .controls:hover { color: #333; } .container { position: relative; } .container button:nth-of-type(1) { position: absolute; left: 0; top: 50%; z-index: 10; } .container button:nth-of-type(2) { position: absolute; right: 0; top: 50%; z-index: 10; } The slider must be responsive. However, I didn't add any "responsiveness" code to it, because I believe that can be handled by CSS. Am I wrong? Here's the JSFiddle Answer: I would suggest wrapping this whole thing in an IIFE such that you can modularize all of your slide show functionality into its own scope. That might look like this: (function() { // your javascript code }()) This would prevent you from defining a bunch of variables in global scope which could interact negatively with other javascript on a page where you want to insert the slide show. I agree with other answer about removing styling for .controls into CSS. You don't use this DOM collection anywhere else in your code, so why have it in JS at all? You might consider separating the logic for hiding/showing slides into a separate method from goToSlide(). This allows goToSlide() to just focus on setting current slide index and calling appropriate hide/show methods. I worry about writing to className as you could clobber any other classes that had been applied to a slide element. Since it seems like your code is really only worried about adding/removing showing, consider accessing classList on the element to add/remove this class only. I would probably place the logic related to "wrapping" index values into the next/previous convenience function and let goToSlide() do one thing only - show the slide index as given. Consider throwing an error for an illegal slide index passed to goToSlide(), since this function is exposed publicly. If you truly want to make this code re-usable, you may want to consider making this functionality into a proper "class" and internalize all references to state and DOM bindings. 
This would allow you to place multiple instances of these slide shows onto a page at once. Putting it all together might yield something like: // you could have this class defined in an externally included file function Slider(slideSelector, nextId, prevId, toggleClassName) { var self = this; this.currentIndex = 0; this.slides = document.querySelectorAll(slideSelector); this.next = document.getElementById(nextId); this.prev = document.getElementById(prevId); this.toggleClassName = toggleClassName; this.next.onclick = function() { self.nextSlide(); }; this.prev.onclick = function() { self.prevSlide(); }; } Slider.prototype.goToSlide = function(idx) { this.validateIndex(idx); this.currentIndex = idx; this.hideAllSlides(); this.showSlide(idx); }; Slider.prototype.nextSlide = function() { var idx = this.currentIndex + 1; if (idx === this.slides.length) { idx = 0; } this.goToSlide(idx); }; Slider.prototype.prevSlide = function() { var idx = this.currentIndex - 1; if (idx < 0) { idx = this.slides.length - 1; } this.goToSlide(idx); }; Slider.prototype.hideAllSlides = function() { Array.prototype.forEach.call(this.slides, function(slide) { slide.classList.remove(this.toggleClassName); }, this); }; Slider.prototype.showSlide = function(idx) { this.validateIndex(idx); this.slides.item(idx).classList.add(this.toggleClassName); }; Slider.prototype.validateIndex = function(idx) { if (!Number.isInteger(idx)) { throw new TypeError('Non-integer value passed'); } if ((idx > this.slides.length - 1) || idx < 0) { throw new RangeError('Out of range index value: ' + idx); } }; // following code could then be in your document ready handler // you could instantiate any number of sliders // each which could operate independently of each other var slider1 = new Slider('#slider1 .slides', 'next1', 'prev1', 'showing'); var slider2 = new Slider('#slider2 .slides', 'next2', 'prev2', 'showing'); Notes: You could make this an ES6 class as well, but I did not show this, as I see nothing in your original code to make me think you are working in ES6. 
Taking the class approach should eliminate the need for an IIFE around the slider code, in that you have already encapsulated your logic/state.
{ "domain": "codereview.stackexchange", "id": 25199, "tags": "javascript" }
robot_pose_ekf visual odometry
Question: Hi all, I am working on a project which runs SLAM (RTAB-Map) on an autonomous land rover. The rover is equipped with a RealSense D415 camera, a wheel odometry system and a Pixhawk. We have been using robot_pose_ekf to fuse the wheel odometry and IMU measurements. Since we have the RealSense camera and robot_pose_ekf, which takes a visual odometry message, I am wondering if it would be worth feeding the visual odometry into robot_pose_ekf from a computation point of view? (The RTAB mapping is already using the topics from the RealSense camera.) If it is worth it to incorporate the camera into robot_pose_ekf, how do I publish the /vo topic using the D415 camera? My robot_pose_ekf launch file is shown below. Thanks <launch> <node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_ekf"> <param name="output_frame" value="odom_combined"/> <param name="base_footprint_frame" value="base_footprint"/> <param name="freq" value="30.0"/> <param name="sensor_timeout" value="1.0"/> <param name="odom_used" value="true"/> <param name="imu_used" value="true"/> <param name="vo_used" value="false"/> <remap from="mavros/imu/data" to="imu_data" /> </node> </launch> Originally posted by buckbuck on ROS Answers with karma: 3 on 2018-08-22 Post score: 0 Answer: AFAIK, the visual odometry portion of robot_pose_ekf is not really used by anyone, and is probably poorly tested. You might also want to look at the robot_localization package, which is way better documented, and also since we have dropped support for robot_pose_ekf in Melodic (it is not released into debians). Originally posted by fergs with karma: 13902 on 2018-08-23 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 31621, "tags": "slam, navigation, visual-odometry, ros-kinetic, robot-pose-ekf" }
What is the gain (advantage) of oversampling and noise shaping in D/A conversion?
Question: It is clear that oversampling and noise shaping in A/D conversion can help in shaping quantization noise. But for D/A conversion, normally there is no quantization, and hence I do not understand its use. What is the gain (advantage) of oversampling in D/A conversion? Answer: If you play 16-bit audio at 48kHz, you need the DAC analog reconstruction filter to pass 20kHz and attenuate 96dB at 24kHz, which is quite steep and requires a complex multistage analog filter. The advantage of using oversampling is moving the sampling rate much higher: for example, oversampling by 4x means the DAC runs at 192kHz, and the analog filter only needs to pass 20kHz and block 96dB at 96kHz, which allows for a much simpler filter. The noise shaping is much like dithering after signal processing, but instead of adding white noise uniformly to the whole band, it weights the noise toward high frequencies above the audio band, which can then be filtered away by the DAC reconstruction filter, so less noise remains in the audio band, thus increasing the apparent signal-to-noise ratio of the audio band.
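The noise-shaping idea can be sketched with a first-order error-feedback quantizer: the previous sample's quantization error is added back in before quantizing, so the error spectrum is filtered by $(1 - z^{-1})$, which has a null at DC and rises toward Nyquist. A minimal Python illustration (a toy model, not the modulator a real DAC uses):

```python
def plain_quantize(x):
    """Round each sample to the nearest integer step (no shaping)."""
    return [float(round(s)) for s in x]

def noise_shaped_quantize(x):
    """First-order error feedback: y[n] = round(x[n] + e[n-1]),
    e[n] = (x[n] + e[n-1]) - y[n]. The error is shaped by (1 - z^-1)."""
    y, e = [], 0.0
    for s in x:
        v = s + e                 # add back last sample's error
        q = float(round(v))       # one-step quantizer
        e = v - q                 # error carried to the next sample
        y.append(q)
    return y

x = [0.4] * 100                   # a DC level between quantizer steps
plain = plain_quantize(x)         # every sample rounds to 0: DC is lost
shaped = noise_shaped_quantize(x) # toggles 0/1 so the average tracks 0.4
```

Plain rounding loses the 0.4 level entirely (total error 40 steps over 100 samples), while the shaped output's accumulated error stays within half a step; the residual toggling error sits at high frequencies, where the reconstruction filter removes it.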
{ "domain": "dsp.stackexchange", "id": 9665, "tags": "digital-to-analog" }
Photon Escape Angle From Black Hole
Question: Consider a photon source emitting photons near the surface of a Schwarzschild black hole. What angle, as a function of the source's radius from the event horizon, must the photons be emitted at such that they can escape to an observer at infinity? Answer: At the Schwarzschild radius, a photon must be emitted exactly normal to the surface in order to escape. As you travel outwards, the angle of emission decreases such that just above 1.5 times the Schwarzschild radius (i.e. the Photon Sphere) the photon can be emitted parallel to the tangent of the horizon and still escape. According to this source (which is also a good source for all fun things regarding Schwarzschild black holes), there is a critical emission angle for photons from a stationary source some radius, $R$, from the black hole. Note that this equation technically should work for radii less than the Schwarzschild radius (the event horizon radius, $r_s=\frac{2GM}{c^2}$), but it'll give you negative angles because photons can't escape. Also note that all angles are given relative to the radial direction. $\theta=0$ means directed radially outwards and $\theta=\pi$ is radially inwards. Inside the photon sphere, $R\le{3\over2}r_s$, the angles at which photons can escape are given by: $$\theta\le\arcsin\left[\frac{\sqrt{27}r_s}{2R}\sqrt{1-\frac{r_s}{R}}\right]$$ Outside the photon sphere, $R\ge{3\over2}r_s$, the escape angles are: $$\theta\le\pi-\arcsin\left[\frac{\sqrt{27}r_s}{2R}\sqrt{1-\frac{r_s}{R}}\right]$$ To get correct angles, just assume that arcsin always results in values between $-\pi/2$ and $\pi/2$. You'll note that for $R=r_s$, you find that $\theta=0$, which means only photons directed radially outwards escape. For $R=\frac{3}{2}r_s$, $\theta=\frac{\pi}{2}$ as my first paragraph stated. And that a radially inward photon ($\theta=\pi$) is always absorbed (this angle is asymptotically approached as $R\to\infty$).
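The two branches can be wired into one function for numerical use. A Python sketch of the formulas above (the function name and the $r_s = 1$ units are my choices; the `min` only guards `asin` against the argument rounding just above 1 at the photon sphere):

```python
import math

def escape_half_angle(R, rs=1.0):
    """Critical emission angle (from the outward radial direction) for a
    photon emitted by a static source at radius R >= rs to reach infinity.
    Photons with theta <= this angle escape."""
    arg = (math.sqrt(27.0) * rs / (2.0 * R)) * math.sqrt(1.0 - rs / R)
    arg = min(arg, 1.0)              # floating-point guard near R = 1.5 rs
    if R <= 1.5 * rs:                # inside the photon sphere
        return math.asin(arg)
    return math.pi - math.asin(arg)  # outside the photon sphere
```

Evaluating it reproduces the limits stated in the answer: 0 at the horizon (only radially outward), $\pi/2$ at the photon sphere, and approaching $\pi$ as $R \to \infty$.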
{ "domain": "physics.stackexchange", "id": 99792, "tags": "homework-and-exercises, general-relativity, black-holes, photons, escape-velocity" }
OpenCV, drawing depth map
Question: Hi! I have a little project in C++. In this project, I try to find a sample image in a depth map received from a Kinect. I check the results by drawing both images (the sample and the current scene) in one window, with keypoints linked by lines. But I can't see the current scene in the window. Where could the problem be? Something is wrong in src/Classifier.cpp, in the method void Detector::findObject(cv::Mat scene). And one more question: what method should I use to copy data from one cv::Mat to another? I use cvtColor for this. I think the class method copyTo() should do the same work, but it doesn't. Thanks for the help! Here is the project - http://dl.dropbox.com/u/16807894/robot-vision.zip Originally posted by CaptainTrunky on ROS Answers with karma: 546 on 2011-10-31 Post score: 0 Answer: I think I found the answer. OpenCV can't deal with images of different types. When it tries to draw them in one window, only one image will actually be shown (tested only with images from the Kinect). Originally posted by CaptainTrunky with karma: 546 on 2011-11-25 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 7143, "tags": "ros, c++, opencv, image, depth" }
ImportError: No module named cv_bridge
Question: Hello all, I am trying to subscribe to a kinect publisher, grab an rgb image and convert it to an ipl image so I can manipulate it with opencv. I am using Fuerte and Ubuntu 12.04 and I am writing the code in Python. I am bit unsure how to make sure my package paths are all set up correctly. I have found some example code that I am posting below and I get an error stating "ImportError: No module named cv_bridge" I have added <run_depend>cv_bridge</run_depend> and <build_depend>cv_bridge</build_depend> to my package.xml My package will hopefully control a sphero at some point, and so my package is called 'sphero_controller' and it is in a catkin workspace below my home folder. I have added this to the end of my .bashrc export ROS_PACKAGE_PATH=/home/gideon:/opt/ros/fuerte/stacks:$ROS_PACKAGE_PATH Does anyone know what I may be doing wrong? Thanks Gideon #!/usr/bin/env python import roslib roslib.load_manifest('sphero_controller') import sys import rospy import cv2 from std_msgs.msg import String from sensor_msgs.msg import Image from cv_bridge import CvBridge, CvBridgeError import cv2.cv as cv from std_msgs.msg import ColorRGBA class image_converter: def __init__(self): self.image_pub = rospy.Publisher("image_topic_2",Image) cv2.namedWindow("Image window", 1) self.bridge = CvBridge() self.image_sub = rospy.Subscriber("/camera/rgb/image_color",Image,self.callback) def callback(self,data): try: cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8") except CvBridgeError, e: print e (rows,cols,channels) = cv_image.shape if cols > 60 and rows > 60 : cv2.circle(cv_image, (50,50), 10, 255) cv2.imshow("Image window", cv_image) cv2.waitKey(3) try: self.image_pub.publish(self.bridge.cv2_to_imgmsg(cv_image, "bgr8")) except CvBridgeError, e: print e def main(args): ic = image_converter() rospy.init_node('image_converter', anonymous=True) try: rospy.spin() except KeyboardInterrupt: print "Shutting down" cv2.destroyAllWindows() if __name__ == '__main__': main(sys.argv) 
Originally posted by Gideon on ROS Answers with karma: 239 on 2014-08-19 Post score: 0 Answer: I STRONGLY recommend that you upgrade to a newer version of ROS. ROS Fuerte is no longer supported, and does not support the catkin package.xml that you're trying to use. If for some reason you can't upgrade to a newer version of ROS, you'll need to:

- switch your package to the older rosbuild build system
- add a dependency on cv_bridge to your manifest.xml
- make sure that the roslib.load_manifest() line in your Python script is loading the manifest for your package. This is where the dependencies in your manifest.xml are added to your Python path, so it's important that it loads the correct manifest.

Originally posted by ahendrix with karma: 47576 on 2014-08-19 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Gideon on 2014-08-21: Thank you for your help. I had Indigo and Ubuntu 14.04 installed initially, but there is no package available for the Sphero, so I downgraded. I will try to build the Sphero package from source. Comment by Cerin on 2016-08-23: This doesn't really answer his question. I just did a fresh install of Kinetic, and I also got this error...
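For the third point, it may help to see what load_manifest does conceptually. The sketch below is a stdlib-only illustration, not the real roslib code, and the dependency path shown is hypothetical: load_manifest reads the package's manifest.xml, resolves its declared dependencies, and prepends each dependency's Python source directory to sys.path, which is what makes `from cv_bridge import CvBridge` resolvable afterwards.

```python
import sys

# Stdlib-only sketch of what roslib.load_manifest effectively does (NOT the
# real roslib implementation): prepend each dependency's Python source
# directory to sys.path so that e.g. `import cv_bridge` can succeed.
def load_manifest_sketch(dependency_src_dirs):
    added = []
    for d in dependency_src_dirs:
        if d not in sys.path:
            sys.path.insert(0, d)
            added.append(d)
    return added

# Hypothetical path -- the real one depends on where cv_bridge is installed.
added = load_manifest_sketch(["/opt/ros/fuerte/stacks/vision_opencv/cv_bridge/src"])
```

If the wrong manifest is loaded (e.g. one without a cv_bridge dependency), nothing adds cv_bridge's directory to sys.path, and the import fails exactly as in the question.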
{ "domain": "robotics.stackexchange", "id": 19111, "tags": "kinect, opencv, ros-fuerte, cv-bridge, ros-package-path" }
Getting 600 n/kg when calculating the gravity for Earth
Question: Basic Problem: So, I'm trying to figure out how to calculate the gravitational force of the Earth. I am using Desmos to graph the equation, which explains where $x$ and $y$ come from. Whenever I put in the information for Earth, I got around 600 N/kg for the gravitational force (which should be 9.8 N/kg). Equation Details: I'm using the following equation: $$y=0.000000000066743\frac{\left(\left(\frac{4}{3}\pi x^{3}\right)\cdot5520\right)62}{\left(x+0.8\right)^{2}}$$ I'm basing everything on the following equation for gravity: $$F=G{\frac{m_1m_2}{r^2}}$$ The part that says $\left(\frac{4}{3}\pi x^{3}\right)\cdot5520$ calculates the mass of the Earth from $x$, the radius: $\frac{4}{3}\pi x^{3}$ gives the volume in $m^{3}$, which is then multiplied by $5520$, the mass in kilograms of 1 cubic meter of Earth. I then put in 62 for $m_2$, since that is the average mass in kg of a human. I did some research and found that $G$, the gravitational constant, is $0.000000000066743$. Finally, for $r^{2}$ I used $(x+0.8)^2$, since $x$ is the radius (the distance from the center of the Earth to the crust), and I added 0.8 since that is half the average height of a human. What Have I Tried: I checked that my density is correct by multiplying it by the volume of the Earth, and it was. I can't find the gravitational constant from another source, so that could possibly be incorrect. I double-checked that my volume equation is correct, and I also checked that I'm using the right units of measurement. Thanks for any help, and feel free to ask questions about any of the equations or anything in general. Answer: The gravitational acceleration can be calculated from the formula below: $$g=-\frac{GM}{r^2}$$ where $G$ is the gravitational constant, $6.6743 \times 10^{-11}\ m^3\, kg^{-1}\, s^{-2}$ (it is better to express really small or large quantities in scientific notation), $M$ is the mass of the planet, and $r$ is the radius of the planet.
Thus, $$g=-\frac{(6.6743 \times 10^{-11}\ m^3\, kg^{-1}\, s^{-2})(5.97219 \times 10^{24}\ kg)}{(6.3781 \times 10^{6}\ m)^2}\approx-9.8\ m/s^2$$ So the gravitational acceleration on Earth has a magnitude of about $9.8\ m/s^2$. To find the gravitational force on an object on the surface of the Earth, multiply this number by the mass of the object (in kilograms). For instance, an object with a mass of $2\ kg$ feels a gravitational pull of about $19.6$ newtons (of force).
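The same computation can be checked with a short Python sketch (standard constant values). Note, incidentally, that the asker's "600" is actually the force in newtons on the 62 kg person, since the question's formula includes $m_2 = 62$; dividing by 62 kg recovers about 9.8 N/kg.

```python
# Sketch of the computation above (standard constant values).
G = 6.6743e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.97219e24        # mass of the Earth, kg
r = 6.3781e6          # radius of the Earth, m

g = G * M / r**2      # magnitude of the gravitational acceleration, ~9.8 m/s^2

# The question's y-value: the force in newtons on a 62 kg person.
F = 62 * g            # ~607 N -- the "600" the asker was seeing
```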
{ "domain": "physics.stackexchange", "id": 93561, "tags": "newtonian-mechanics, newtonian-gravity, earth" }
BFL - bind variables
Question: We're trying to use a Bayesian filter to do some sensor fusion for estimating the joint positions on a joint-based robot. For this we're using the BFL library, as is done in the robot-pose-ekf package. We now have two versions of the same filter (one using ExtendedKalmanFilter, the other a particle filter: BootstrapFilter) that converge, but we haven't yet found a way of bounding the BFL state representation variables. In our case, we'd like to make sure that each variable of the state stays within the joint limits. Feel free to ask for more details, as I'm not exactly sure what is most relevant. Originally posted by Ugo on ROS Answers with karma: 1620 on 2014-08-11 Post score: 0 Answer: From Enrico on the BFL mailing list: This is a constrained optimization problem. I am afraid there is no "clean" way of doing this. If the state variables are defined by Gaussian pdfs, then by definition they span an infinite support. You could try to make variables outside of the joint limits "highly unlikely" by filtering your measurements and rejecting those outside a validation area, or by carefully defining the process noise and measurement noise covariance matrices, but if you really want to use constraints, then you're actually solving an estimation + constrained optimization problem, and you'll need different tools for that. Originally posted by Ugo with karma: 1620 on 2014-08-12 This answer was ACCEPTED on the original site Post score: 0
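To make the "validation area" idea concrete, here is a hypothetical Python sketch (BFL itself is C++, and neither helper below is part of its API): gate each measurement against the filter's prediction, and, as a pragmatic last resort, clamp the estimate back into the joint range after each update step.

```python
# Hypothetical sketch (not BFL API): two pragmatic workarounds for joint
# limits in a filter that cannot express hard constraints.
def gate_measurement(z, z_pred, sigma, n_sigmas=3.0):
    """Accept a measurement only if it lies within n_sigmas of the prediction."""
    return abs(z - z_pred) <= n_sigmas * sigma

def clamp_to_limits(x, lower, upper):
    """Project an estimate back into [lower, upper] after the update step."""
    return max(lower, min(upper, x))
```

Neither approach gives the statistically "clean" constrained estimate the answer alludes to, but both are cheap and keep the published joint state physically plausible.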
{ "domain": "robotics.stackexchange", "id": 18989, "tags": "ros" }
Conceptual Doubts regarding centripetal acceleration of rolling objects
Question: The centripetal acceleration of a point moving with velocity $v$ on a path of radius of curvature $r$ is given by $v^2/r$. If $v = r\omega$, where $\omega$ is the angular velocity of a body, then the centripetal acceleration is $\omega^2 r$. So, my doubts are as follows: Doubt 1: In pure rolling motion the velocity of the lowermost point is zero, so its centripetal acceleration should be zero; but it is traversing a path of radius of curvature $r$ about the centre of the object, which seems contradictory to me. Also, I was taught that the acceleration of the lowermost point is $\omega^2 r$, and hence that it has an acceleration towards the centre of the object, which also confuses me. Doubt 2: Also, if the lowermost point has a centripetal acceleration and we solve a problem from the frame of reference of the lowermost point of a purely rolling object, is a pseudo force $m\omega^2 r$ applied on the centre of mass towards the lowermost point? Or does the centre of mass have a centripetal acceleration towards the lowermost point, which would mean a centrifugal force away from it? Any help would be greatly appreciated! Answer: Suppose we have a nice rolling wheel, with its center of mass (CoM) at its geometric center. The wheel is rolling on flat ground (so the CoM is moving at constant velocity). For doubt 1: there are two types of motion going on. The wheel is rolling, and also translating. The speed given by $v=\omega r$ is for rolling, and is measured in the center-of-mass frame (i.e. the frame moving with the CoM). If you want the velocity with respect to an inertial frame (e.g. you standing still, observing the motion), you also have to account for the velocity of the center of mass. The acceleration vector in the CoM frame will be the centripetal acceleration, $\omega^2 r$. However, if $\alpha\neq0$, within the CoM frame you will also have a tangential acceleration, $r\alpha$ (such as when rolling down an incline).
If you want the acceleration with respect to an inertial frame, you'd have to account for the acceleration of the CoM. For doubt 2: it depends which frame you're considering. If you're in an inertial frame (e.g. you, observing a rolling wheel), then there is no pseudo force. If your frame is moving with a point on the rim (i.e. the frame follows a point on the rim), then there will be an outward pseudo force. If you're considering a frame that tracks the point of contact on the ground (I think this is the one you're talking about), then there may or may not be a pseudo force. Case 1: the wheel is rolling on flat ground and the CoM has constant velocity. In this case there are no pseudo forces, since the frame of reference is inertial. There will be a net centripetal force of $m\omega^2 r$ applied to the CoM, directed towards the point on the ground. Case 2: the wheel is rolling down an incline. In this case you're in an accelerating frame of reference (since $\vec a_{\rm CoM}\neq 0$), so there will be a pseudo force (it will actually be a force that opposes static friction such that $\sum \vec F = 0$). Hope this clears your doubts.
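A quick numeric sketch of doubt 1 (arbitrary illustrative values): in the ground frame the contact point's velocity is zero, because the translational and rotational contributions cancel, yet its acceleration is $\omega^2 r$ toward the wheel's centre.

```python
# Rolling wheel at constant speed on flat ground (illustrative values).
w = 2.0          # angular velocity, rad/s
r = 0.5          # wheel radius, m
v_cm = w * r     # rolling without slipping

# Velocity of the lowermost point: translation of the CoM minus the
# rotational speed of the rim at that point -> exactly zero.
v_contact = v_cm - w * r

# Acceleration of the lowermost point (v_cm constant, alpha = 0):
# purely centripetal, directed toward the wheel's centre.
a_contact = w**2 * r
```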
{ "domain": "physics.stackexchange", "id": 79192, "tags": "acceleration, rotation, centrifugal-force" }
Is there a guide to cross-compiling ROS for ARM (BBB)?
Question: The eROS project appears to be graveyarded, and I'm looking for a place to start. My requirements:

- Build ROS packages on an amd64 machine with a cross compiler
- "Deploy" the binaries on the target - a BeagleBone Black - which already has ROS installed on it
- Run ROS nodes on the target

I can build the ROS packages on the BBB itself, but that workflow gets out of hand for distributed systems - a cluster of 10-20 BBBs that may all run an instance of the built ROS package. Thanks! Originally posted by Pranav on ROS Answers with karma: 13 on 2015-02-28 Post score: 0 Answer: Did you consider meta-ros or beagle-ros? Additional relevant links are: http://wiki.ros.org/BeagleBone http://wiki.ros.org/hydro/Installation/OpenEmbedded Originally posted by slivingston with karma: 254 on 2015-02-28 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Pranav on 2015-02-28: Hey, thanks for pointing me in this direction! I've got it working now thanks to the pages you've given me. The beagle-ros environment setup + meta-ros enabled me to bitbake user-defined recipes for ROS packages that I was able to deploy and successfully run on the BBB. Thanks!
{ "domain": "robotics.stackexchange", "id": 21017, "tags": "ros, source" }