Generate meta tags for paging on views, refactoring needed for 'if else'
Question: I know this is a long code and hard to read but when i started working on this it was just a few if-s, and as time passed i added more and more and came to this. Tried refactoring it by my self and anything i tried to do had no effect on code. Switch statements don't work here( doesn't lower amount of lines needed for it to work, or i couldn't find a way for switch to work, which is more likely). If any one can help I would be grateful. if (isset($request)) { $page = $request->get('page'); } if ($pages->pageCount == 1 || $pages->pageCount == 0) { if (\Yii::$app->controller->action->id == 'category') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::to(['blog/category/' . $category], true)]); } else if (\Yii::$app->controller->action->id == 'index') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Yii::$app->urlManager->hostInfo . Yii::$app->urlManager->baseUrl . '/blog']); } else if (\Yii::$app->controller->action->id == 'archive') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::to(['blog/' . $year . '/' . $month], true)]); } else if (\Yii::$app->controller->action->id == 'author') { if ($page != null) { $this->title = $user_name->username . ' - Travel blog | Page ' . $page . ' | Clickstay'; } else { $this->title = $user_name->username . ' - Travel blog | Clickstay'; } $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::to(['blog/author/' . $name], true)]); } else if (\Yii::$app->controller->action->id == 'search') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::current([], true)]); } }else { if ($currentPage == 1) { if (\Yii::$app->controller->action->id == 'category') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::to(['blog/category/' . $category], true)]); $this->registerLinkTag(['rel' => 'next', 'href' => Url::to(['blog/category/' . $category . '/' . 
$next], true)]); } else if (\Yii::$app->controller->action->id == 'index') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Yii::$app->urlManager->hostInfo . Yii::$app->urlManager->baseUrl . '/blog']); $this->registerLinkTag(['rel' => 'next', 'href' => Url::to(['blog/' . $next], true)]); } else if (\Yii::$app->controller->action->id == 'archive') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::to(['blog/' . $year . '/' . $month], true)]); $this->registerLinkTag(['rel' => 'next', 'href' => Url::to(['blog/' . $year . '/' . $month . '/' . $next], true)]); } else if (\Yii::$app->controller->action->id == 'author') { if ($page != null) { $this->title = $user_name->username . ' - Travel blog | Page ' . $page . ' | Clickstay'; } else { $this->title = $user_name->username . ' - Travel blog | Clickstay'; } $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::to(['blog/author/' . $name], true)]); $this->registerLinkTag(['rel' => 'next', 'href' => Url::to(['blog/author/' . $name . '/' . $next], true)]); } else if (\Yii::$app->controller->action->id == 'search') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::current([], true)]); $this->registerLinkTag(['rel' => 'next', 'href' => Url::to(['blog/search/' . $next . '?q=' . $search . '/'], true)]); } } else if ($currentPage > 1 && $currentPage != $pages->pageCount) { if (\Yii::$app->controller->action->id == 'category') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::to(['blog/category/' . $category . '/' . $currentPage], true)]); if ($page != 2) { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/category/' . $category . '/' . $prev], true)]); } else { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/category/' . $category], true)]); } $this->registerLinkTag(['rel' => 'next', 'href' => Url::to(['blog/category/' . $category . '/' . 
$next], true)]); } else if (\Yii::$app->controller->action->id == 'index') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::to(['blog/' . $currentPage], true)]); if ($page != 2) { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/' . $prev], true)]); } else { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/'], true)]); } $this->registerLinkTag(['rel' => 'next', 'href' => Url::to(['blog/' . $next], true)]); } else if (\Yii::$app->controller->action->id == 'archive') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::to(['blog/' . $year . '/' . $month . '/' . $currentPage], true)]); if ($page != 2) { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/' . $year . '/' . $month . '/' . $prev], true)]); } else { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/' . $year . '/' . $month], true)]); } $this->registerLinkTag(['rel' => 'next', 'href' => Url::to(['blog/' . $year . '/' . $month . '/' . $next], true)]); } else if (\Yii::$app->controller->action->id == 'author') { if ($page != null) { $this->title = $user_name->username . ' - Travel blog | Page ' . $page . ' | Clickstay'; } else { $this->title = $user_name->username . ' - Travel blog | Clickstay'; } $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::to(['blog/author/' . $name . '/' . $currentPage], true)]); if ($page != 2) { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/author/' . $name . '/' . $prev], true)]); } else { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/author/' . $name], true)]); } $this->registerLinkTag(['rel' => 'next', 'href' => Url::to(['blog/author/' . $name . '/' . $next], true)]); } else if (\Yii::$app->controller->action->id == 'search') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::current([], true)]); if ($page != 2) { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/search/' . $prev . '?q=' . 
$search], true)]); } else { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/search/?q=' . $search], true)]); } $this->registerLinkTag(['rel' => 'next', 'href' => Url::to(['blog/search/' . $next . '?q=' . $search], true)]); } } else if ($currentPage == $pages->pageCount) { if (\Yii::$app->controller->action->id == 'category') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::to(['blog/category/' . $category . '/' . $currentPage], true)]); if ($page != 2) { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/category/' . $category . '/' . $prev], true)]); } else { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/category/' . $category], true)]); } } else if (\Yii::$app->controller->action->id == 'index') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::to(['blog/' . $currentPage], true)]); if ($page != 2) { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/' . $prev], true)]); } else { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/'], true)]); } } else if (\Yii::$app->controller->action->id == 'archive') { if ($page != null) { $this->title = $user_name . ' - Travel blog | Page ' . $page . ' | Clickstay'; } else { $this->title = $user_name . ' - Travel blog | Clickstay'; } $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::to(['blog/' . $year . '/' . $month . '/' . $currentPage], true)]); if ($page != 2) { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/' . $year . '/' . $month . '/' . $prev], true)]); } else { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/' . $year . '/' . $month . '/'], true)]); } } else if (\Yii::$app->controller->action->id == 'author') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::to(['blog/author/' . $name . '/' . $currentPage], true)]); if ($page != 2) { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/author/' . $name . '/' . 
$prev], true)]); } else { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/author/' . $name], true)]); } } else if (\Yii::$app->controller->action->id == 'search') { $this->registerLinkTag(['rel' => 'canonical', 'href' => Url::current([], true)]); if ($page != 2) { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/search/' .$prev. '?q=' . $search], true)]); } else { $this->registerLinkTag(['rel' => 'prev', 'href' => Url::to(['blog/search' . '?q=' . $search], true)]); } } } } Answer: You could try to solve this using polymorphism but it probably would get worse before it got better. There is an excellent site I just found with Martin Fowlers catalogue of refactorings: https://refactoring.guru First I can see that you have 5 types of actions (category, index, archive, author, search). I would try to separate those actions to a single conditional, something like this: $action = \Yii::$app->controller->action->id; if($action == "category") { if ($pages->pageCount == 1 || $pages->pageCount == 0) { ... } else { if ($currentPage == 1) { ... } else if ($currentPage > 1 && $currentPage != $pages->pageCount) { ... } else if ($currentPage == $pages->pageCount) { ... } } } if($action == "index") { ... I would do that for each of the actions. There will be more code for sure but it will still be a little bit more simple. Next you could try to refactor it out to different classes, that might be a little bit cumbersome and you would probably have to pass in alot of variables to the class. This depends a little bit how far you want to take it and if you feel it is worth it. You would want to have a parent class so you can pull up methods from your subclasses. 
Personally I would probably use the factory pattern for creating the new objects and introduce a parameter object (https://refactoring.guru/introduce-parameter-object). But ideally, your code in the end would probably look something like this: $actionLink = ActionLinkFactory::create(\Yii::$app->controller->action->id); $actionLink->setParameters(new ActionLinkParameters([ "pageCount" => $pages->pageCount, "currentPage" => $currentPage, ... ])); $actionLink->registerTag(); There is no silver bullet unfortunately, but you don't have to do everything at once either.
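To illustrate the shape of the suggested refactoring without committing to PHP specifics, here is a hypothetical sketch in Python: one lookup per action id for the base URL, and the canonical/prev/next logic written exactly once. All names are illustrative, not the original code:

```python
# Hypothetical sketch of the "one conditional per concern" refactor the
# answer suggests: each action id maps to a small base-URL builder, so the
# page-count/current-page logic lives in one place instead of once per action.

def build_base(action, category=None, year=None, month=None, name=None):
    # One builder per action id (illustrative paths mirroring the question).
    builders = {
        "category": lambda: "blog/category/%s" % category,
        "index":    lambda: "blog",
        "archive":  lambda: "blog/%s/%s" % (year, month),
        "author":   lambda: "blog/author/%s" % name,
    }
    return builders[action]()

def link_tags(base, current_page, page_count):
    # Canonical / prev / next computed once, identically for every action.
    tags = {}
    if page_count <= 1:
        tags["canonical"] = base
        return tags
    tags["canonical"] = base if current_page == 1 else "%s/%d" % (base, current_page)
    if current_page > 1:
        tags["prev"] = base if current_page == 2 else "%s/%d" % (base, current_page - 1)
    if current_page < page_count:
        tags["next"] = "%s/%d" % (base, current_page + 1)
    return tags
```

The caller then registers one link tag per dictionary entry, so adding a sixth action means adding one builder, not another copy of the pagination logic.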
{ "domain": "codereview.stackexchange", "id": 20231, "tags": "php, yii" }
Goldstone theorem in Schwartz
Question: On page 566, Schwartz’s QFT book, to see the $\pi$ is the Goldstone boson, it reads: $$J^\mu=\frac{\partial L}{\partial(\partial_\mu \pi)} \frac{\delta \pi}{\delta \theta}=F_\pi \partial_\mu \pi \tag{28.15}$$ $$\langle\Omega|J^\mu(x)|\pi(p)\rangle=ip^\mu F_\pi e^{-ipx} \tag{28.16}$$ My question is: in the first equation, how is $\frac{\delta \pi}{\delta \theta}=F_\pi$ derived from the symmetry translation $\pi(x) \rightarrow \pi(x)+F_\pi \theta$ ? how to derive the second equation? My attempt to the second equation: $$\langle \Omega|J^\mu(x)|\pi(p)\rangle= F_\pi \langle \Omega|\partial_\mu\pi \pi|\Omega\rangle$$ Substitute $\pi=\int \frac{d^3 p}{(2\pi)^3\sqrt{2\omega_p}}[a_p e^{-ipx}+a_p^\dagger e^{ipx}]$ into it, I get $$F_\pi \langle \Omega| \int \frac{d^3 p}{(2\pi)^3\sqrt{2\omega_p}}[a_p (-ip^\mu)e^{-ipx}+a_p^\dagger (ip^\mu)e^{ipx}] \int \frac{d^3 k}{(2\pi)^3\sqrt{2\omega_k}}[a_k e^{-ikx}+a_k^\dagger e^{ikx}] |\Omega\rangle$$ $$=F_\pi \langle \Omega| \int \frac{d^3 p}{(2\pi)^3\sqrt{2\omega_p}}[a_p (-ip^\mu)e^{-ipx}] \int \frac{d^3 k}{(2\pi)^3\sqrt{2\omega_k}}[a_k^\dagger e^{ikx}] |\Omega\rangle$$ $$=F_\pi \int \frac{d^3 p}{(2\pi)^3\sqrt{2\omega_p}}\int \frac{d^3 k}{(2\pi)^3\sqrt{2\omega_k}}e^{-i(p-k)x}\langle \Omega|a_p(-ip^\mu)a_k^\dagger |\Omega\rangle$$ $$=F_\pi( -ip^\mu) \int \frac{d^3 p}{(2\pi)^3\sqrt{2\omega_p}}\int \frac{d^3 k}{(2\pi)^3\sqrt{2\omega_k}}e^{-i(p-k)x}(2\pi)^3\delta^3(p-k)$$ $$=F_\pi( -ip^\mu) \int \frac{d^3 p}{(2\pi)^3 2\omega_p}$$ Answer: For the first equation, consider an infinitesimal transformation, $\pi(x) \rightarrow \pi(x)+F_\pi \delta \theta$. We have $\delta \pi(x) = F_\pi \delta \theta$, so $\frac{\delta \pi(x)}{\delta \theta}= F_\pi$. For the second equation, your first mistake is on equating $|\pi(p)\rangle$ with $\pi|\Omega\rangle$. $\pi$ is a field, not a single creation operator. 
To derive that result, you just need to show that since $|\pi(p)\rangle$ is defined to be the state created by the $\pi$ field, $\langle \Omega |\pi(x)|\pi(p)\rangle=e^{-ipx}$. Then: $$ \begin{align} \langle \Omega |J^\mu(x)|\pi(p)\rangle= & F_\pi\langle \Omega |\partial^\mu\pi(x)|\pi(p)\rangle \\ =&F_\pi \partial^\mu e^{-ipx} \\ =&-ip^\mu F_\pi e^{-ipx}. \end{align} $$
{ "domain": "physics.stackexchange", "id": 65565, "tags": "homework-and-exercises, quantum-field-theory, lagrangian-formalism, field-theory, symmetry-breaking" }
How much data should I allocate for my training and test sets? (in R)
Question: I have a matrix of 358,367 rows of data. Each row is a DNA sequence from the human genome. I want to build a classification model in R, using the XGBoost algorithm and 83 features (dinucleotides, trinucleotides, etc.). How should I split the data into training and test sets? For example, 70% for the training set and 30% for the test set? 30% for the training set and 70% for the test set? Answer: There is no "golden rule" here. Your data set is very handy - neither too large nor too small. Sounds like a very exciting project! Here is how I often proceed in comparable settings. Do all splits stratified by response or, if the rows are not independent but rather clustered by some grouping variable (e.g. family), use grouped sampling. Important rule: avoid any leakage across splits. Set aside 10%-15% of the rows for testing. Don't touch them until the analysis is complete. Act as if this test set did not exist. Select a loss function and a relevant performance measure. Fit a random forest without tuning and use its OOB error as a benchmark. Choose the parameters of XGBoost by 5-fold cross-validation with an iterative grid search, first starting with very wide parameter ranges and then making those ranges smaller and smaller. The number of boosting rounds is automatically optimized by early stopping. Choose the model and present its cross-validation performance. Only at the very end, reveal the test performance.
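The stratified hold-out split described above can be sketched in plain Python (function name illustrative; in practice one would use e.g. rsample/caret in R or scikit-learn's train_test_split with its stratify argument): group row indices by label, shuffle within each group, and set roughly 15% of each class aside as the untouched test set.

```python
# Minimal stratified train/test split: proportions of each class are
# preserved in both halves, and the test indices never overlap the train ones.
import random
from collections import defaultdict

def stratified_split(labels, test_frac=0.15, seed=0):
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    train, test = [], []
    for y, idxs in by_class.items():
        rng.shuffle(idxs)                 # shuffle within each class
        n_test = max(1, int(round(test_frac * len(idxs))))
        test.extend(idxs[:n_test])        # ~15% of this class held out
        train.extend(idxs[n_test:])
    return sorted(train), sorted(test)
```

For grouped/clustered rows the same idea applies, except entire groups (not individual rows) are assigned to one side of the split to avoid leakage.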
{ "domain": "datascience.stackexchange", "id": 7626, "tags": "machine-learning, classification, r, dataset, training" }
How does radioactive decay affect material properties?
Question: If I leave a bar of a radioactive material (e.g. uranium-235) for its half-life time, how will the bar look after halving its mass? Will it: stay the same size, but be lighter? shrink in size as to keep the same density? be filled with small holes (like cheese or bread)? have turned into a small pile of uranium dust as material holding the bar together has decayed? or maybe something different? Answer: The title question cannot be answered generally, unless the natural decay chain to the final stable particles is given, along with the time. Within the question, the example of uranium-235 at its half-life can be answered by looking at the natural decay chain: it ends up in stable lead-207, having lost 28 nucleons through decays (235 − 207 = 28, carried away as alpha particles). As the alpha particles turn into helium gas, the decayed material will be lighter by the ratio 28/235. Radon is a noble gas, and it decays very fast into polonium, a metal. I do not think there will be time to create noticeable holes in the lattice by the radon leaving, as it decays very fast. Possibly the shape of the potential of the lattice may be affected. The stable end nucleus is lead, which is a metal and will also occupy lattice locations. Maybe a specialist will answer with more details. Other nuclei will behave differently, depending on their natural decay chain.
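The mass-loss figure can be checked with a couple of lines of arithmetic. Note that the U-235 (A = 235) chain ending at Pb-207 (A = 207) removes 235 − 207 = 28 nucleons, all in alpha particles; the sketch below assumes the resulting helium escapes the bar:

```python
# Back-of-the-envelope check of the mass loss of a U-235 bar after one half-life.
mass_u235 = 235
mass_pb207 = 207
nucleons_lost = mass_u235 - mass_pb207    # 28 nucleons per decayed nucleus
alphas = nucleons_lost // 4               # 7 alpha decays in the chain
fraction_lost = nucleons_lost / mass_u235 # ~12% of the decayed mass

# After one half-life only half the U-235 has decayed, so the bar as a
# whole is only about half that fraction lighter (assuming He escapes):
bar_mass_fraction_lost = 0.5 * fraction_lost  # ~6%
```

So the bar stays essentially the same size but ends up roughly 6% lighter after one half-life, under these assumptions.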
{ "domain": "physics.stackexchange", "id": 70181, "tags": "nuclear-physics, radioactivity" }
Are indistinguishable bosons and fermions computationally equivalent to distinguishable qubits?
Question: From the middle-late decades of the last century, many researchers such as Bennett, Benoif, Deutsch, Feynman, Manin, Wiesner, among others, had some intuition that qubits are computationally more powerful or more interesting than classical bits. Physically embodied qubits as we understand and intuit nowadays are fundamentally distinguishable "particles" or "atoms". For example, at a high level we need to separate and index the wires in our circuit diagrams, but also more concretely we can point to individual ions in a trap or individual transmons and identify and label them as separate and distinguished. But, quantum mechanics is also concerned with two classes of indistinguishable particles - namely fermions and bosons. Going back all the way to the 1920's, Jordan and Wigner provided a mapping between the wavefunction in a Hilbert space spanned by a number of qubits and the wavefunction in a Fock space spanned by fermions. Feynman also hinted that a qubit could be defined by the presence or absence of such a particle. But, for fermions at least the sign problem is a separate issue that needs to be carefully addressed - e.g. whenever two fermions are swapped, the wavefunction picks up a negative phase. Furthermore regarding bosons, Aaronson and Arkhipov noted that sampling bosons such as photons from a network of mirrors and beam splitters is related to calculation of the permanent, while sampling the same for fermions is related to the determinant. It follows from Valiant's theorem that boson sampling is likely much (much) more difficult than fermion sampling. On the one hand we have fermions, which naively have the sign problem making simulation more difficult; however, fermion sampling is in P. On the other hand boson sampling is most likely not in P - nor is it likely to contain P. But, can we translate between fermions, bosons, and qubits efficiently? 
For example could we have created a quantum computer out of indistinguishable bosons instead of distinguishable qubits? Because of the Pauli exclusion principle, a mode can either be unoccupied or occupied by precisely one fermion, while bosonic occupancy is unlimited. BosonSampling experiments try to mitigate this by having at least quadratically more modes than bosons. Answer: Indeed, there are some works in this direction that show the interrelation between fermions, bosons and qubits. Of course, you can consider the seminal work of Jordan and Wigner that maps operators acting on fermions to operators acting on "qubits" (more precisely spins, since in 1928 qubits didn't exist yet). However, the JW mapping has the problem that when you consider nontrivial fermionic lattices or interactions (e.g. a 2D lattice of fermions interacting with nearest neighbours), your local fermionic operators will be mapped to nonlocal qubit operators of the size of your system. Thus, people started considering this problem again in the early 2000s. In this paper, Bravyi and Kitaev showed that you can efficiently simulate (indistinguishable) fermions with qubits without these nasty nonlocal interactions, at the cost of including more qubits than the number of fermionic degrees of freedom you originally had. Indeed, they also showed that qubit operators can be expressed in terms of fermionic operators, meaning that fermionic quantum computers and standard quantum computers should be equivalent in some sense. Others, like Verstraete and Cirac, and also Ball, found new mappings showing that fermionic statistics are not an outcome arising solely in fermionic systems: depending on the interactions in the underlying spin system, you can reproduce these statistics. There are some subtleties, of course.
The Fock space is of course a "restricted Hilbert space", since you are taking only the symmetrized or antisymmetrized tensors built from the single-particle Hilbert spaces, and thus you cannot trivially recover all the dynamics of the whole Hilbert space from a given set of fermions and one of these local mappings unless you use ancilla fermions. This means that quantum computing using qubits and fermions is equivalent up to a constant overhead in resources. I guess the bosonic case should be equivalent, but I am not an expert there.
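The Jordan-Wigner mapping mentioned at the start of the answer can be verified concretely on a toy example. A self-contained sketch in plain Python, using the standard two-mode convention a_0 = a (x) I, a_1 = Z (x) a, and checking the canonical anticommutation relations:

```python
# Two fermionic modes built from qubit operators (4x4 matrices as lists),
# verifying {a_i, a_j} = 0 and {a_i, a_j^dag} = delta_ij * I.

def kron(A, B):
    n, m, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(m) for l in range(q)]
            for i in range(n) for k in range(p)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

def anticomm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] + BA[i][j] for j in range(len(AB))]
            for i in range(len(AB))]

I2 = [[1, 0], [0, 1]]
Z  = [[1, 0], [0, -1]]
a  = [[0, 1], [0, 0]]   # qubit lowering operator: a|1> = |0>

# Jordan-Wigner for two modes: the Z string makes the operators anticommute.
a0 = kron(a, I2)        # a_0 = a (x) I
a1 = kron(Z, a)         # a_1 = Z (x) a
```

Without the Z factor in a_1, the two mode operators would commute rather than anticommute; the growing Z strings on larger lattices are exactly the nonlocality the answer describes.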
{ "domain": "quantumcomputing.stackexchange", "id": 4609, "tags": "architecture, history, boson-sampling" }
How do you remember parameters?
Question: I'm currently doing research in pseudorandomness, which involves a zoo of wonderful objects such as pseudorandom generators, randomness extractors, expander graphs, etc. I find it a fascinating topic, but one thing that drives me crazy is the glut of parameters that are involved. I understand that these objects are very complex, but I cannot help but break out into a sweat when I see "Let $G$ be a standard $(\alpha,V,\epsilon^2,k,\delta)$-pseudorandom widget...". Then I have to flip back in the paper or find another paper (which probably uses a different parameterization) and try to remember what $\alpha,V,\epsilon,k$ and $\delta$ all meant. It takes me quite a while to acquire a feeling for "good" parameter settings versus "bad" parameter settings, versus "natural" settings versus "easy" settings. There's probably no magic bullet for this issue - but I was wondering if other people had some method of managing the "parameter explosion" so that it's easier to retain in memory for a longer period of time? Thanks! Answer: When I go into a new research domain, my research practices combine memory management, mnemonics, notes and other techniques. I have no single recipe, because each depends on the nature of the given domain. For some inspiration and for the sake of discussion, here is an example that comes to mind. Go through the papers in a few iterations: first, to get adjusted to the domain and gain a first intuition about approaches and notations. The next step is preparing to "cluster" the papers: prepare a list of tags representing approaches, notations, features and other properties of interest. Before I start tagging, I go through the papers, evaluate my tags and correct them appropriately. Finally, I tag the papers. Then I process the papers in groups according to tags: those with similar notation, approach, and so on. By working with papers that share common properties, you can concentrate on the differences. I prepare notes in the form of a MindMap with FreeMind.
This makes it easy to look up "what was what". After processing the papers grouped into tagged sets, look at the whole area across all groups to get a broader view; this is the step of looking through the whole domain to see connections and differences. When I can't remember details, I check my MindMap to remind myself what was what. Try to avoid overloading your memory when it's unneeded: use mnemonics and Art of Memory techniques, and use mind mapping.
{ "domain": "cstheory.stackexchange", "id": 1339, "tags": "soft-question, research-practice" }
Standardization and Normalization
Question: Which machine learning algorithms need the data to be standardised/normalised before it is fed into the model? How do we determine whether a particular model/dataset needs standardisation/normalisation? Thank you. Answer: Whenever you have features on different scales, and scale is significant for some of them, you should standardize your features. Take a look here.
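As a minimal illustration (plain Python, function name illustrative), standardization means z-scoring each feature: subtract its mean and divide by its standard deviation, so every feature ends up with mean 0 and variance 1.

```python
# Z-score standardization of a single feature column.
import math

def standardize(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(var)
    return [(v - mean) / std for v in values]
```

As a rule of thumb, scale-sensitive algorithms (k-NN, SVMs, k-means, anything trained by gradient descent such as linear models or neural networks) benefit from this, while tree-based models (decision trees, random forests, gradient boosting) are largely insensitive to monotone rescaling of features.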
{ "domain": "datascience.stackexchange", "id": 2930, "tags": "classification, regression, normalization" }
Quantum Mechanics Griffiths Problem
Question: I was doing problem no. $4.4$ from Griffiths Third Edition. I cannot understand one thing related to the solution of the normalization of $Y_2^1$. $$Y_2^1 = -\sqrt{\frac{15}{8\pi}}e^{i\phi}\sin\theta \cos\theta $$ The solution manual shows: Where did the $e^{i\phi}$ term go? Answer: The norm of a vector in a complex Hilbert space is $\|v\|=+\sqrt{\langle v \vert v\rangle }$, and in this particular Hilbert space the inner product of two vectors is $\langle v|g\rangle =\int d\mu \, v^{*}g$. So in this case, the squared norm would be $\int d\Omega\, |Y_{2}^{1}|^{2}=\int d\Omega\, \frac{15}{8\pi}\sin^{2}\theta \cos^{2}\theta\, e^{i\phi} e^{-i\phi}$, where I intentionally did not simplify the last two exponentials for you to see why the $e^{i\phi}$ "disappears" in the integral.
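A quick numerical check of this: since $e^{i\phi}e^{-i\phi}=1$, integrating $|Y_2^1|^2$ over the sphere (with the $\sin\theta$ measure) should give exactly 1. A sketch in plain Python (midpoint rule; function name illustrative):

```python
# Numerically integrate |Y_2^1|^2 = (15/8pi) sin^2(theta) cos^2(theta)
# over the sphere; the phi integral contributes a factor 2*pi because
# the phase e^{i phi} cancels against its conjugate.
import math

def norm_sq_Y21(n=2000):
    total = 0.0
    dtheta = math.pi / n
    for i in range(n):
        theta = (i + 0.5) * dtheta
        integrand = (15 / (8 * math.pi)) * math.sin(theta) ** 2 * math.cos(theta) ** 2
        total += 2 * math.pi * integrand * math.sin(theta) * dtheta
    return total
```

Analytically, $(15/8\pi)\cdot 2\pi \int_0^\pi \sin^3\theta\cos^2\theta\, d\theta = (15/4)\cdot(4/15) = 1$, and the numerical sum agrees.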
{ "domain": "physics.stackexchange", "id": 76982, "tags": "quantum-mechanics, homework-and-exercises, wavefunction, complex-numbers, spherical-harmonics" }
How to test the quality of a word embedding?
Question: I have trained a word2vec model using GenSim 4. The problem is that my corpus is quite small. How can I test the quality of the word embeddings I have obtained? Are there some standard measures to do that? Answer: One way to test your embedding is to see how often your model agrees with the common consensus of how other embeddings complete word analogies. A collection of established word-embedding analogies is available here.
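The analogy test works by answering a : b :: c : ? with the word whose vector is nearest to b - a + c under cosine similarity (gensim ships a helper for this on its KeyedVectors; the toy version below uses made-up 2-d vectors, so all names and numbers are illustrative):

```python
# Toy analogy evaluation: nearest neighbour of (b - a + c) by cosine similarity.
import math

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def analogy(vectors, a, b, c):
    target = [vectors[b][i] - vectors[a][i] + vectors[c][i]
              for i in range(len(vectors[a]))]
    # Exclude the query words themselves, as standard evaluations do.
    candidates = [w for w in vectors if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vectors[w], target))
```

Scoring the fraction of analogies a model answers correctly over an established collection gives a single comparable quality number, though with a small corpus intrinsic scores should be read with caution.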
{ "domain": "datascience.stackexchange", "id": 9422, "tags": "word-embeddings, word2vec, gensim" }
How to model airborne sound channel
Question: I'm making a soft demodulator/decoder to communicate between a microphone and a speaker at short range. I have to calculate the LLR using my constellation and the airborne sound channel probability density function (pdf). I have searched a lot about airborne sound or acoustic channel models, and I just found some considerations about the attenuation depending on the frequency. My questions are: Which book can I read so I can learn more about soft demodulation examples: BPSK, QPSK, QAM, FSK? If I use a pdf estimator of the channel, how can I use the estimation to find the LLR of my received symbols? Can I use another model to approximate the airborne acoustic channel (it could be related to AWGN), or which book can I read to learn about how to model a communication channel? Answer: Well, your channel model might give you an explicit PDF, but: You need to realize that a channel is not a scalar, i.e. your PDF doesn't describe a random variable $\in \mathbb C$ or $\in \mathbb R$, but usually something like an impulse response, that is, something like a function of delay (so, your random variable is $\in \mathbb R \times \mathbb R$ or $\in \mathbb C \times \mathbb R$). I'm not an acoustics guy myself, but if I've learned one thing it's that at least for indoor acoustics, the "flat channel assumption" (that lets us wireless communication folks model channels as a simple complex number) doesn't hold, and that your channel is very frequency-selective. Often, it's time-variant, and sometimes even non-linear. So, I'm not quite sure what to recommend here: If you're looking for soft decision in wireless communications, I personally think that Proakis is quite OK. That's one of the standard digital communications textbooks. That's up to your channel model, and you'll have to specify that. An AWGN channel is certainly not an adequate approximation of an audio channel.
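To make the LLR part of the question concrete: for BPSK over a plain AWGN channel (which, as the answer stresses, is only a crude stand-in for an airborne acoustic channel), the LLR has the closed form 2y/sigma^2, and the same likelihood-ratio recipe applies when the Gaussian pdf is replaced by an estimated channel pdf. A sketch in plain Python (function names illustrative):

```python
# BPSK soft demodulation over AWGN: y = x + n, x in {+1, -1}, n ~ N(0, sigma2).
import math

def bpsk_llr(y, sigma2):
    # Closed form: log p(y|x=+1) - log p(y|x=-1) = 2*y / sigma2.
    return 2.0 * y / sigma2

def bpsk_llr_from_pdfs(y, sigma2):
    # Same quantity computed directly from the two conditional likelihoods,
    # which is how one would proceed with an estimated (non-Gaussian) pdf.
    p_plus = math.exp(-((y - 1.0) ** 2) / (2 * sigma2))
    p_minus = math.exp(-((y + 1.0) ** 2) / (2 * sigma2))
    return math.log(p_plus / p_minus)
```

For a frequency-selective acoustic channel, the pdf-ratio version is the one that generalizes: feed each equalized received sample through the estimated conditional densities of the candidate symbols.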
{ "domain": "dsp.stackexchange", "id": 6000, "tags": "sound, demodulation, acoustics" }
How many oxygen molecules touch you in your lifetime?
Question: Following up on a comment by BlueRaja to this beautiful answer of Ilmari Karonen, I would like to phrase this follow up question: How many air molecules hit your average human's skin during their lifetime? Actually, make that how many oxygen molecules do you breathe in your lifetime? (In case you want some motivation: Ilmari calculates that the chance for a random oxygen molecule in the atmosphere (though I note that local effects could actually play a role for this one) to have existed in that form since the time of Homo erectus is about one in 10^14. However, there's a lot of them molecules, so maybe you do brush up with quite a lot of them often.) Any takers? Answer: Analysis for Jesus's molecule usage: Our breathing rate changes a lot, but on average it's about 1 breath every five seconds, or 12 breaths a minute, or 720 breaths an hour, or 17,280 breaths a day, or 6,307,200 breaths a year, and if we live for 32 years that gives us 201,830,400 breaths in his lifetime. How many molecules? Multiply 2.02e8 total breaths x 1.61e23 molecules per breath to get a total of 3.25e31 total molecules. ...That means for Jesus, there were 32,500,000,000,000,000,000,000,000,000,000 molecules that came into contact with his lungs during his lifetime. I didn't see anything wrong with the author's approach, so say 6,307,200 breaths a year at 1.61e23 molecules per breath (based on 6 liters of air/breath, not oxygen). Since you want $O_2$ only, scale that down. At STP (Standard Temperature and Pressure: 1 atmosphere of pressure, 0C), one mole of any gas will occupy 22.4 liters of space. One mole of any substance contains 6.02 x 10^23 particles. Air is about 21% oxygen, making 1 liter of air about 0.21 liters of oxygen. To find out how many moles that represents: 0.21 / 22.4. Then use Avogadro's number, 6.02 x 10^23 molecules/mole (a mole of gas measured by volume is a mole of molecules). 1 L of air => 5.64e21 molecules of $O_2$. 6 L/breath => 3.39e22 molecules/breath; 6,307,200 breaths a year => 2.1e29 molecules of $O_2$/year. 
How long are you going to live * 2.1e29 = $O_2$ passing into your lungs. The only two caveats to this answer: some of these molecules won't be unique, and 6 L represents gas going into the lungs, not what is utilized. Say you live 80 years, that's ~1.7e31 molecules of $O_2$.
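The back-of-the-envelope arithmetic above can be re-run in a few lines. One detail worth noting: 0.21/22.4 mol per litre of air is already moles of O2 molecules, since molar gas volume counts molecules, so no extra factor of two is needed:

```python
# Order-of-magnitude estimate of O2 molecules breathed in a lifetime,
# assuming 6 L of air per breath and 12 breaths per minute throughout.
AVOGADRO = 6.022e23
molecules_per_litre_air = 0.21 / 22.4 * AVOGADRO    # O2 molecules per L of air, ~5.6e21
breaths_per_year = 12 * 60 * 24 * 365               # 12 breaths/min, all year
molecules_per_breath = 6 * molecules_per_litre_air  # 6 L of air per breath
per_year = breaths_per_year * molecules_per_breath  # ~2.1e29 O2 molecules/year
per_80_years = 80 * per_year                        # ~1.7e31 in an 80-year life
```

All inputs here (breath volume, breathing rate, 21% oxygen, 22.4 L/mol at STP) are the same round numbers used in the answer, so the result is an order-of-magnitude estimate only.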
{ "domain": "physics.stackexchange", "id": 9398, "tags": "soft-question, atmospheric-science" }
MessageFilter ApproximateTime doesn't call callback function
Question: Hello, I'm trying to implement MessageFilter in a class. I'm using ROS1 Melodic on Ubuntu 18.04. The code compiles without any errors but when I play my bag file, the callback function aren't called. I checked some other questions but I couldn't find a solution. Here is my code : Main: #include <ros/ros.h> #include "object_localization/CloudPainter.hpp" int main(int argc, char** argv) { ros::init(argc, argv, "cloud_painter"); ros::NodeHandle node_handle("~"); object_localization::CloudPainter cloud_painter(node_handle); ros::spin(); return 0; } CloudPainter.hpp: #pragma once #include <ros/ros.h> #include <message_filters/subscriber.h> #include <message_filters/synchronizer.h> #include <message_filters/sync_policies/approximate_time.h> #include <sensor_msgs/PointCloud2.h> #include <sensor_msgs/Image.h> typedef message_filters::Subscriber<sensor_msgs::PointCloud2> PointCloudSubscriber; typedef message_filters::Subscriber<sensor_msgs::Image> ImageSubscriber; typedef message_filters::sync_policies::ApproximateTime<sensor_msgs::PointCloud2, sensor_msgs::Image> MySyncPolicy; namespace object_localization { class CloudPainter { public: CloudPainter(ros::NodeHandle& t_node_handle); private: bool readParameters(); void messageFilterCallback(const sensor_msgs::PointCloud2ConstPtr& t_point_cloud, const sensor_msgs::ImageConstPtr& t_image); ros::NodeHandle& m_node_handle; std::string m_lidar_topic; std::string m_camera_topic; PointCloudSubscriber m_point_cloud_subscriber; ImageSubscriber m_image_subscriber; message_filters::Synchronizer<MySyncPolicy> m_synchronizer; }; } // namespace object_localization CloudPainter.cpp: #include "object_localization/CloudPainter.hpp" namespace object_localization { CloudPainter::CloudPainter(ros::NodeHandle& t_node_handle) : m_node_handle(t_node_handle) , m_synchronizer(MySyncPolicy(10), m_point_cloud_subscriber, m_image_subscriber) { if (!readParameters()) { ROS_ERROR("Could not read parameters."); ros::requestShutdown(); } 
ROS_INFO("Lidar topic: : %s", m_lidar_topic.c_str()); ROS_INFO("Camera topic: : %s", m_camera_topic.c_str()); m_point_cloud_subscriber.subscribe(m_node_handle, m_lidar_topic, 1); m_image_subscriber.subscribe(m_node_handle, m_camera_topic, 1); m_synchronizer.registerCallback( boost::bind(&CloudPainter::messageFilterCallback, this, _1, _2)); ROS_INFO("Successfully launched node."); } bool CloudPainter::readParameters() { if (!m_node_handle.getParam("lidar_topic", m_lidar_topic)) { return false; } if (!m_node_handle.getParam("camera_topic", m_camera_topic)) { return false; } return true; } void CloudPainter::messageFilterCallback(const sensor_msgs::PointCloud2ConstPtr& t_point_cloud, const sensor_msgs::ImageConstPtr& t_image) { ROS_INFO("Callback."); } } // namespace object_localization Output: [ INFO] [1625585829.773481527]: Lidar topic: : velodyne_points [ INFO] [1625585829.774161339]: Camera topic: : pylon_camera_node/image_rect [ INFO] [1625585829.777213271]: Successfully launched node. Originally posted by mebasoglu on ROS Answers with karma: 38 on 2021-07-06 Post score: 0 Answer: You appear to be only passing a private NodeHandle to your CloudPainter ctor: ros::NodeHandle node_handle("~"); object_localization::CloudPainter cloud_painter(node_handle); That's why you need to prefix your topics with /, otherwise CloudPainter will try to subscribe to /cloud_painter/velodyone_points and /cloud_painter/pylon_camera_node/image_rect which don't exist. With the / prefix, the ros::Subscriber will consider the topic names already fully resolved, and thus will not start searching for them in the private namespace of the node. Ok, I found my problem. I didn't use "/" before topic names. So I changed "velodyne_points" to "/velodyne_points" the better solution would be to either: use a regular NodeHandle for your subscriptions (ie: not a private one), or use remapping (and #q303611) to remap default topic names to the topics you wish to subscribe to. 
In your case that could be a points topic and an image topic, which you then remap to /velodyne_points and /pylon_camera_node/image_rect during rosrun or roslaunch of your node. With roslaunch, that could look something like this: <launch> <node name="cloud_painter" type="cloud_painter" pkg="..."> <remap from="image" to="/pylon_camera_node/image_rect" /> <remap from="points" to="/velodyne_points" /> </node> </launch> Configuring topic names via ROS parameters is a bit of an anti-pattern (although opinions differ on that: #q342777). Originally posted by gvdhoorn with karma: 86574 on 2021-07-06 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by mebasoglu on 2021-07-06: Thank you.
{ "domain": "robotics.stackexchange", "id": 36656, "tags": "ros, c++, ros-melodic, callback, message-filter" }
MySQL-to-PostgreSQL migration script
Question: I'm working on a Python script to migrate a MySQL database into a PostgreSQL database with a different Schema (different table structures, different datatypes and so on). I'm a sysadmin and unfortunately I don't code very often. So I'm having some doubts about this initial programming phase. I begin with the tables that are easy to migrate (almost the same structure), but very soon I will have to transfer tables that need more operations to be converted for compatibility. My code actually looks like this: #!/usr/bin/python # Script Name: database-migration_msql-psql.py # Description: Migrate mysql database a2 # into postgresql database a3. # Created By: phphil. # Date: 7 Oct 2015. # # ------------------------------ # Import standard libraries | # ------------------------------ # import os import sys import mysql.connector import psycopg2 from pprint import pprint import MySQLdb # ------------------------------ # Import internal snippets | # ------------------------------ # from include.db_config import * #from include.MySQLCursorDict import * # ------------------------------ # Open database connections | # ------------------------------ # # Mysql connection try: cnx_msql = mysql.connector.connect( host=host_mysql, user=user_mysql, passwd=pswd_mysql, db=dbna_mysql ) except mysql.connector.Error as e: print "MYSQL: Unable to connect!", e.msg sys.exit(1) # Postgresql connection try: cnx_psql = psycopg2.connect(conn_string_psql) except psycopg2.Error as e: print('PSQL: Unable to connect!\n{0}').format(e) sys.exit(1) # Cursors initializations cur_msql = cnx_msql.cursor(dictionary=True) cur_psql = cnx_psql.cursor() # --------------------------------- # A2.msql-table1 > A3.psql-table1 | # --------------------------------- # cur_msql.execute("SELECT field1, field2, field3, field4, field5 FROM msql-table1") for row in cur_msql: ### transformation/conversion of mysql data OR in other cases type casting if row['user_id'] == 0: row['user_id'] = row['group_id']
else: pass try: cur_psql.execute("INSERT INTO psql-table1 (field1, field2, field3, field4, field5) \ VALUES (%(field1)s, %(field2)s, %(field3)s, %(field4)s, %(field5)s)", row) except psycopg2.Error as e: print "cannot execute that query!!", e.pgerror sys.exit("Some problem occured with that query! leaving early this lollapalooza script") # --------------------------------- # A2.msql-table2 > A3.psql-table2 | # --------------------------------- # cur_msql.execute("SELECT field1, field2, field3, field4, field5, field6 FROM msql-table2") for row in cur_msql: try: cur_psql.execute("INSERT INTO psql-table2 (field1, field2, field3, field4, field5, field6) \ VALUES (%(field1)s, %(field2)s, %(field3)s, %(field4)s, %(field5)s, %(field6)s)", row) except psycopg2.Error as e: print "cannot execute that query!!", e.pgerror sys.exit("Some problem occured with that query! leaving early this lollapalooza script") # --------------------------------- # A2.msql-table3 > A3.psql-table3 | # --------------------------------- # cur_msql.execute("SELECT field1, field2 FROM msql-table3") for row in cur_msql: try: cur_psql.execute("INSERT INTO psql-table3 (field1, field2) VALUES (%(field1)s, %(field2)s)", row) except psycopg2.Error as e: print "cannot execute that query!!", e.pgerror sys.exit("Some problem occured with that query! leaving early this lollapalooza script") # --------------------------------- # A2.msql-table4 > A3.psql-table4 | # --------------------------------- # cur_msql.execute("SELECT field1, field2, field3 FROM msql-table4") for row in cur_msql: try: cur_psql.execute("INSERT INTO psql-table4 (field1, field2, field3) \ VALUES (%(field1)s, %(field2)s, %(field3)s)", row) except psycopg2.Error as e: print "cannot execute that query!!", e.pgerror sys.exit("Some problem occured with that query! 
leaving early this lollapalooza script") # --------------------------------- # A2.msql-table4 > A3.psql-table4 | # --------------------------------- # cur_msql.execute("SELECT l.field1, r.field2, l.field3, l.field4, l.field5, l.field6, l.field7, l.field8 \ FROM msql-table4 l, msql-table0 r \ WHERE l.field2=r.field2") for row in cur_msql: try: cur_psql.execute("INSERT INTO psql-table4(field1, field2, field3, field4, field5, field6, field7, field8, field9) \ VALUES(%(field1)s, %(field2)s, %(field3)s, %(field4)s, %(field5)s, %(field6)s, %(field7)s, %(field8)s, %(field9)s, NULL, DEFAULT)", row) except psycopg2.Error as e: print "cannot execute that query!!", e.pgerror sys.exit("Some problem occured with that query! leaving early this lollapalooza script") # --------------------------------- # A2.msql-table5 > A3.psql-table5 | # --------------------------------- # cur_msql.execute("SELECT field1, field2, field3, field4, field5, field6, field7, field8, field9, field10 FROM msql-table5") for row in cur_msql: try: cur_psql.execute("INSERT INTO psql-table5 (field1, field2, field3, field4, field5, field6, field7, field8, field9, field10) \ VALUES (%(field1)s, %(field2)s, %(field3)s, %(field4)s, %(field5)s, %(field6)s, %(field7)s, %(field8)s, %(field9)s, %(field10)s)", row) except psycopg2.Error as e: print "cannot execute that query!!", e.pgerror sys.exit("Some problem occured with that query! leaving early this lollapalooza script") # --------------------------------- # A2.msql-table6 > A3.psql-table6 | # --------------------------------- # cur_msql.execute("SELECT field1, field2 FROM msql-table6") for row in cur_msql: try: cur_psql.execute("INSERT INTO psql-table6 (field1, field2) VALUES (%(field1)s, %(field2)s)", row) except psycopg2.Error as e: print "cannot execute that query!!", e.pgerror sys.exit("Some problem occured with that query! 
leaving early this lollapalooza script") ################ END OF SCRIPT ################ # --------------------------------------------- # Finalizing stuff & closing db connections | # --------------------------------------------- # ## Closing cursors cur_msql.close() cur_psql.close() ## Committing cnx_psql.commit() ## Closing database connections cnx_msql.close() cnx_psql.close() As you will notice, in each section of the script the structure is almost the same: select data from a table of the source database (MySQL), with the result handled by a cursor with the dictionary flag (a Python dictionary). After this, the dictionary is iterated within a for loop where, if needed, fields are cast or the table structure is adapted (see section: A2.right > A3.permission). And still inside the for loop, each record is inserted into the destination database. Questions/Doubts: Do I need to create a class in order to abstract the redundant code? Or maybe it's better to just create a function? Can someone post a short example? I have no idea how to proceed. In both cases I see some problems with abstracting it, because the redundant code is inside a loop where I will have to do different operations depending on which table I'm iterating over. I used to open and close cursors at each operation (script section), then I decided to open both cursors at the beginning of the script, use them until the end and then close them. But now I've read this and I'm confused. What's better in your opinion? One cursor for each operation, or one cursor for the whole script? Answer: This code is fine. As far as scripts go, I'm assuming this is something that is run once and then forgotten about. But of course, in the interest of code review... Don't Repeat Yourself You have seven blocks of code that look something like: cur_msql.execute("SELECT ...") for row in cur_msql: try: cur_psql.execute("INSERT INTO ...", row) except psycopg2.Error as e: print "cannot execute that query!!", e.pgerror sys.exit(...)
That's just crying out for a function: def select_and_insert(select_query, insert_query): cur_msql.execute(select_query) try: for row in cur_msql: cur_psql.execute(insert_query, row) except psycopg2.Error as e: print 'failed to execute query', e.pgerror sys.exit(...) That way, you just have 7 calls to select_and_insert, and all you're writing is the various queries, without having to repeat all the extra logic. If statements In Python, unlike some functional languages, not every statement needs to evaluate to something. So in this check: if row['user_id'] == 0: row['user_id'] = row['group_id'] else: pass The else: pass is unnecessary and redundant. You could simply drop the else branch. Of course, this looks like it breaks the don't-repeat-yourself idea of the function - but we can simply move this if statement into the SQL query: select ..., case when user_id = 0 then group_id else user_id end as user_id, ... No if necessary. Efficiency Running one insert statement per row is very inefficient, since you can run one insert statement for all the rows. Now that we've refactored this, we only have to change it in one function. Check out this answer for how we might go about doing such a thing and what the performance implication is. We're talking an order of magnitude.
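To make the refactor concrete, here is a hypothetical sketch of `select_and_insert` extended with the other suggestions: a `transform` hook for per-table fix-ups and `executemany` for batched inserts. It uses the stdlib `sqlite3` module for both ends purely so the example is self-contained and runnable; with the real databases you would keep the `mysql.connector` and `psycopg2` cursors and the same function body:

```python
import sqlite3

def select_and_insert(src_cur, dst_cur, select_query, insert_query, transform=None):
    """Copy rows from source to destination in one batch.

    transform, if given, is applied to each row dict before insertion --
    this is where per-table casting/fix-ups go.
    """
    src_cur.execute(select_query)
    rows = [dict(r) for r in src_cur.fetchall()]
    if transform:
        rows = [transform(r) for r in rows]
    # One executemany call for all rows instead of one INSERT per row.
    dst_cur.executemany(insert_query, rows)
    return len(rows)

# -- demo with two in-memory databases standing in for MySQL and PostgreSQL
src = sqlite3.connect(":memory:")
src.row_factory = sqlite3.Row
dst = sqlite3.connect(":memory:")

src.execute("CREATE TABLE t1 (user_id INTEGER, group_id INTEGER)")
src.executemany("INSERT INTO t1 VALUES (?, ?)", [(0, 7), (3, 9)])
dst.execute("CREATE TABLE t1 (user_id INTEGER, group_id INTEGER)")

def fix_user_id(row):
    # the per-table tweak from the question, kept out of the generic helper
    if row["user_id"] == 0:
        row["user_id"] = row["group_id"]
    return row

n = select_and_insert(
    src.cursor(), dst.cursor(),
    "SELECT user_id, group_id FROM t1",
    "INSERT INTO t1 (user_id, group_id) VALUES (:user_id, :group_id)",
    transform=fix_user_id,
)
print(n, [tuple(r) for r in dst.execute("SELECT user_id, group_id FROM t1")])
```

The table and field names are invented for the demo; the point is only that the loop body from the question collapses into one parameterised helper.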
{ "domain": "codereview.stackexchange", "id": 16186, "tags": "python, beginner, postgresql, cursor" }
Why does using deuterium mass for hydrogen improve convergence of molecular dynamics simulations of water?
Question: I've come across many papers performing ab initio MD (Car-Parrinello or Born-Oppenheimer) simulations of water that use the deuterium mass for hydrogen. And they state that this helps improve convergence. I haven't seen a derivation for it and I'm not able to come up with an intuitive reason for why this might be. I would greatly appreciate some insights. Here's an example of a paper that does so (I'm linking this because it doesn't seem to need institutional credentials to access): https://www.spiedigitallibrary.org/journals/Journal-of-Photonics-for-Energy/volume-1/issue-1/016002/iAb-initio-i-modeling-of-water-semiconductor-interfaces-for-photocatalytic/10.1117/1.3625563.full?SSO=1 Thanks! Answer: My intuitive reason is based on the need for time-scale separation between the evolution of the electronic degrees of freedom and the nuclear motion. The electrons are given fictitious "masses" for the evolution of the quantum degrees of freedom, and are typically maintained in their ground state by a low-temperature thermostat. The nuclei move around at speeds consistent with the physically desired temperature (e.g. ambient temperature). This means that, technically, the system is not at equilibrium. However, energy flow between the two subsystems is kept under control if their natural timescales are sufficiently different: usually called "adiabaticity". Giving the lightest nuclei an inflated mass is a simple way of slowing them down. It is used to allow a longer timestep in purely classical MD; but here I think that it also makes it easier to give the electrons a correspondingly higher mass, without compromising the adiabaticity, and hence use a longer timestep in the quantum part of the calculation (which is the expensive part).
{ "domain": "physics.stackexchange", "id": 52480, "tags": "water, molecular-dynamics, density-functional-theory" }
How should species density be calculated for a clumped distribution?
Question: Let's imagine 5 plots of different size are sampled for a target species: plot# count area(m^2) plot_density 1 1 5 0.2 2 3 2 1.5 3 0 10 0.0 4 5 1 5.0 5 2 6 0.33 What is this species' density? I see two ways to calculate the density that give completely different values. The first way averages the density at each plot: $$ \frac{\Sigma(\frac{count_i}{area_i})}{5} = 1.41/m^2 $$ This seems OK but it doesn't do much to control for changing plot sizes. For example, if plot 3 above was 10 times larger the density would still be the same. In a situation where plot sizes are determined by the environment (say, under natural cover objects) this seems less than ideal. The second way totals the counts and the search area and divides them: $$ \frac{\Sigma(count_i)}{\Sigma(area_i)}=0.46/m^2 $$ I prefer the second method because it seems to describe the animals' density more accurately. And I would be happy to use the second method, but my issue is that I am unsure how to calculate a summary statistic for method two since a mean was never calculated. Method one gives a standard deviation of 2.1. What is the standard deviation for method two? One possible solution I've come up with involves breaking the larger plots into 1m^2 plots and dividing the number of animals across those smaller plots. So now I have 24 1m^2 plots with the following "counts": plot# count area(m^2) plot_density 1-5 0.2 1 0.2 6-7 1.5 1 1.5 8-17 0.0 1 0.0 18 5.0 1 5.0 19-24 0.33 1 0.33 Now, using the first equation above I get: $$ \frac{\Sigma(\frac{count_i}{area_i})}{24} = 0.73/m^2 $$ With a standard deviation of 1.26. Is this a reasonable approach? Is there an established solution to this problem? Answer: To me, there are two issues that are mixed up here (if I understand you correctly). First, do you want to estimate the mean and variance for a statistical population (i.e.
to characterize a larger population by independent samples), or do you want to calculate the actual density for a particular area, where you have counted all occurrences in that entire area (but maybe divided the area into subareas out of convenience when counting)? This is not clear from your question. In the second case, your second option of pooling counts and areas is suitable. However, then you have only calculated the actual density in that particular area (ignoring issues of detectability of the organism when counting), and you cannot draw any inferences about a larger statistical population. If your aim is to draw inferences for a larger statistical population, a start is to calculate the mean and standard deviation (sd) of your sample. In that case, I assume that your samples are chosen randomly and independently from a larger statistical population. Your first option is then the right approach. However, since your samples have different sizes, you might want to attach more weight to larger samples, since they can be assumed to better describe the population average than smaller samples. This is called a weighted mean. Generally, the weighted arithmetic mean is defined as: $$\bar{x}_w = \dfrac{\sum_{i=1}^nw_ix_i}{\sum_{i=1}^nw_i}$$ where $w_i$ are the weights for each sample ($x_i$) and $n$ is the number of samples. Using this formula, you will arrive at exactly the same value for the weighted mean as you calculated in your second attempt (0.458), if you use the areas as weights. The weighted standard deviation is a bit more problematic, in the sense that there doesn't exist one single standard way of calculating this. However, a commonly used formula is: $$ \sigma_{w_1} = \sqrt{ \frac{ \sum_{i=1}^n w_i (x_i - \bar{x}_w)^2 }{ \frac{(M-1)}{M} \sum_{i=1}^n w_i } },$$ where $M$ is the number of nonzero weights. Other definitions of the weighted standard deviation can be found at Wikipedia: weighted sample variance.
Another version is defined as (called "reliability weights" on the wiki page): $$ \sigma_{w_2} = \sqrt{ \frac{ \sum_{i=1}^n w_i (x_i - \bar{x}_w)^2 }{ \sum_{i=1}^n w_i - \frac{\sum_{i=1}^n w_i^2}{\sum_{i=1}^n w_i}}},$$ If I haven't made a mistake, these will give the standard deviations 1.15 and 1.22, using your data. As for the biological interpretation, all calculations of average densities indicate a clumped distribution, since the coefficient of variation (CV) is larger than 1 (CV = sd/mean), but more so if you use a weighted mean. The CV using the arithmetic mean is 1.5, while it is 2.5-2.65 for the weighted mean, which is reasonable since you are giving more weight to the large-area sample with a zero count. However, I should also note that you should be cautious about using a weighted mean when you have a strongly clumped distribution, since you run the risk that e.g. a large sample plot lands on an area with no occurrences, which might bias your density estimate. Generally, when you have a clumped distribution, you need to sample more intensively to get a good estimate of average density, and many small samples are often better than a few large ones.
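For anyone wanting to check these figures, here is a short numpy sketch (not part of the original answer) that reproduces the weighted mean and both weighted standard deviations from the plot data in the question:

```python
import numpy as np

counts = np.array([1, 3, 0, 5, 2], dtype=float)
areas = np.array([5, 2, 10, 1, 6], dtype=float)

dens = counts / areas  # per-plot densities x_i
w = areas              # weights = plot areas

# Weighted mean: identical to pooling counts over pooled area (11/24).
xbar_w = np.sum(w * dens) / np.sum(w)

# First formula (M = number of nonzero weights).
M = np.count_nonzero(w)
ss = np.sum(w * (dens - xbar_w) ** 2)
sd1 = np.sqrt(ss / ((M - 1) / M * np.sum(w)))

# "Reliability weights" version.
sd2 = np.sqrt(ss / (np.sum(w) - np.sum(w**2) / np.sum(w)))

print(round(xbar_w, 3), round(sd1, 2), round(sd2, 2))  # 0.458 1.15 1.22
```

So the answer's 0.458, 1.15 and 1.22 all check out numerically.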
{ "domain": "biology.stackexchange", "id": 4099, "tags": "ecology, population-biology, statistics, landscape-ecology" }
Calculating wavelength in photoelectric effect
Question: How do I know what wavelength should radiate on a material with $W = 2.46 eV$ so that electrons are emitted with a maximum velocity of $1.0$x$10^{-6}$? Answer: By Einstein's photoelectric equation, $h\nu - h\nu_0 = 1/2mv^2$, where $h\nu_0 = W$ is the work function, $h\nu$ is the energy of the incident photon, $\nu$ is the photon frequency and $v$ is the electron speed. So $h\nu = 1/2mv^2 + h\nu_0$. By $c = \nu \lambda$, where $\lambda$ is the wavelength: $hc/\lambda = 1/2mv^2 + h\nu_0$ $\lambda/hc = 1 / (1/2mv^2 + h\nu_0)$ $\lambda = hc / (1/2mv^2 + h\nu_0)$
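As a numerical sketch (not part of the original answer): assuming the intended maximum speed was $1.0 \times 10^{6}$ m/s (the exponent in the question looks garbled), plugging into $\lambda = hc/(\tfrac{1}{2}mv^2 + W)$ gives roughly 234 nm:

```python
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
me = 9.109e-31   # electron mass, kg
eV = 1.602e-19   # J per eV

W = 2.46 * eV    # work function
v = 1.0e6        # assumed maximum electron speed, m/s

E_photon = 0.5 * me * v**2 + W   # h*nu = KE_max + W
lam_nm = h * c / E_photon * 1e9
print(round(lam_nm, 1))          # ~233.8 nm, in the ultraviolet
```

If the intended speed were different, only `v` changes; the algebra is the same.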
{ "domain": "physics.stackexchange", "id": 18300, "tags": "homework-and-exercises, photoelectric-effect" }
Neural Network learning project based on 8 wave signals over 1 second at 1 sample every 10 ms ( hence 100Hz )
Question: I'm currently trying to train a neural network that can decide whether a pattern produced by the movement of a hand near capacitive sensors is as expected, or random. I have an MPR121 microchip linked with an Arduino, providing me 8 signals (ranging from 0 to 255) that stay at baseline (around 135 for my current conditions) when nothing is near or perturbing the conductances. The value lowers when something conductive approaches. Here are some pictures to help you picture the thing. The pattern of the eight channels together gives information on the position of the hand along one axis. The neural network is supposed to learn by itself how the different channels react, and in which order, so I don't have to tell the program anything about the physical distance between two electrodes or whatever. The thing is, I would like to learn how to do neural networks, but I have no idea how to feed in the data (especially data over time, which differs from the typical image-analysis tutorials often seen for neural networks on the internet), considering that there are 100*8 values to analyse for each frame (100 values for a 1 s frame, for each of the eight channels), and I don't know if I should have a network with 800 input nodes... it seems to me like I'm missing something? Can I pass a list as one input to a neural network, hence reducing the number of input nodes to 8? And another question I had: once trained, will the algorithm be exportable to an Arduino? I read somewhere that once trained, the weights of the nodes can be gathered, allowing one to build an equation mapping inputs to the output, here yes or no. The Arduino should be able to evaluate this fairly complex equation in a short amount of time, I guess? (If you want to know, the labelling of the data will be performed afterwards, with a button connected to the Arduino telling it to record the past 1 s of the 8 channels to an SD card and label it as positive.
Every time I perform the hand pattern to be learnt, I will push this button. The random signals which are labelled negative are 1 s recordings, picked randomly over the day, to catch changes in baseline and hopefully some false-positive noise due to people wandering around.) Thank you in advance for your help and advice on my project. =) And by the way, sorry for my English; I hope I didn't shock anyone. Answer: ...a neural network that can decide whether a pattern produced by the movement of a hand near capacitive sensors is as expected, or random. The neural network is supposed to learn by itself how the different channels react, in which order, so I don't have to tell the program anything about the physical distance between two electrodes or whatever. At this point, I am not sure if you wanted to create a neural network that can learn how to interpret the readings from the 8 capacitive sensors to infer the position of the hand over the sensor overall, or a classifier that can understand a specific gesture. I think that what you are trying to do is the latter. That is, you want to move your hand in a particular gesture above the sensor "bar" and have a classifier recognising whether this was one of the "known" gestures or someone just passing nearby. If this is indeed what you are trying to achieve, then you would need to look more closely into types of neural networks that can take time into account, such as recurrent neural networks. In this case, the input layer of the network would have 8 nodes, the hidden layer would have a number of neurons/layers that would depend on how long the gesture to be learned is, and the output layer could well be a single node with a 0-1 output. And another question I had: once trained, will the algorithm be exportable to an Arduino? You can train this "offline" and then try to run the network on an Arduino, but I am not sure if you will get the same performance as you would get on a desktop computer.
For two reasons: You might run out of memory quickly on the Arduino, depending on the number of nodes your network ends up having; and the Arduino does not do floating-point arithmetic. Yes, float does exist as a data type, but on most boards it is emulated in software. If you have a board with a CPU that has a Floating Point Unit and the compiler can "see" it, then you might be in luck. But without an FPU, without a 32 (or at least 16) bit architecture and without much memory, the Arduino does not do floating-point arithmetic. There is, however, another way to do gesture recognition with less computational complexity, so that it is easier for an Arduino to run: sum the outputs of all 8 capacitive sensors to produce one output and then run a "traditional" pattern recognition technique, which would basically try to recognise a single waveform. In this way, you still do not have to make any additional specifications regarding the geometry of the bar. Suppose that you configure your sensor "bar" in this way. If you run your hand from left to right, you will get an output with 8 "bumps" as your hand passes close to each one of the sensing elements. If you move your hand from left to right and also do a "wave" in mid-air, then you will get an output with 8 "bumps" whose amplitude will vary depending on the distance of your hand from the sensors. Obviously, the problem with this method is that it cannot discriminate symmetrical gestures, for example moving horizontally left to right versus right to left. But it can still discriminate between non-symmetric gestures (for example, moving left to right horizontally versus moving left to right in an increasing "line", or ramp). These two gestures would produce 2 distinct waveforms that are fairly easy to classify. You can do this in a number of ways, from very simple to very complex, with varying degrees of accuracy (and of course computational complexity).
One of the easiest (for Arduino too) would be to use a k-NN classifier. By the way, a resource you might find useful is the Gesture Recognition Toolkit. In addition to the software itself, the wiki contains a lot of information on materials and methods it uses. Hope this helps.
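To illustrate the "sum the channels, then classify the single waveform" idea, here is a hypothetical Python sketch of a k-NN classifier working on synthetic 1 s waveforms (100 samples, matching the 100 Hz framing in the question). The gesture shape and noise levels are invented purely for the demo; on the real device the waveform would be the sum of the 8 MPR121 channels:

```python
import numpy as np

rng = np.random.default_rng(0)

def gesture(kind, n=100):
    """Synthetic 1-D waveform standing in for the summed 8-channel signal."""
    t = np.linspace(0, 1, n)
    base = np.sin(8 * np.pi * t) if kind == "swipe" else rng.normal(0, 0.3, n)
    return base + rng.normal(0, 0.05, n)  # small sensor noise on top

# Tiny labelled training set (label 1 = known gesture, 0 = random noise),
# mimicking the button-press labelling described in the question.
train = [(gesture("swipe"), 1) for _ in range(10)] + \
        [(gesture("noise"), 0) for _ in range(10)]

def knn_classify(x, train, k=3):
    # Euclidean distance to every stored example, majority vote among k nearest.
    dists = [(np.linalg.norm(x - xi), yi) for xi, yi in train]
    dists.sort(key=lambda d: d[0])
    votes = [yi for _, yi in dists[:k]]
    return max(set(votes), key=votes.count)

print(knn_classify(gesture("swipe"), train),
      knn_classify(gesture("noise"), train))  # known gesture -> 1, noise -> 0
```

The whole classifier is distances plus a vote, which is the kind of integer-friendly, low-memory computation an Arduino can realistically handle (store the training waveforms as bytes, compare with sums of squared differences).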
{ "domain": "dsp.stackexchange", "id": 5785, "tags": "neural-network, waveform-similarity, pattern, hardware" }
How do you make light?
Question: We make light by making a charge oscillate, or by heating a body. Are there differences between the two processes? But, above all, are there other ways in which we can produce em radiation? Answer: Quanta of em radiation called photons are produced in many reactions involving fundamental particles, atomic nuclei or whole atoms. These include thermal vibration of atoms and molecules; transition of a molecule, an atom or an orbital electron from an excited state to a lower energy state; particle-antiparticle annihilation; nuclear fusion; nuclear fission; and decay of unstable particles. Not all of these processes produce photons in the visible light range.
{ "domain": "physics.stackexchange", "id": 88384, "tags": "electromagnetism, thermodynamics, electromagnetic-radiation, visible-light, temperature" }
How is 1 g on a planet different from 1 g in a space ship when we look at aging?
Question: This question has been asked before in the form of the 'Twin paradox', and there are 42 pages of questions on this site alone when I search for 'twin paradox'. For example Does the twin paradox require both twins to be far away from any gravity field? Clarification regarding Special Relativity The counterargument has been 'that the traveling twin has to accelerate and therefore is not in an inertial frame of reference.' OK, I believe that; it is consistent with the lay explanation of general relativity. This effect of gravitational time dilation was dramatically portrayed in the film 'Interstellar' when the crew goes to a planet near a black hole and everyone else gets a lot older. So what happens if we use the 'equivalence principle' and provide a gravitational acceleration to the non-traveling twin? I have read many scenarios described on this site where the observer will not be able to tell if he is on Earth or on the accelerating spaceship, yet many of the questions here on Physics Stack Exchange are answered with the statement 'the spaceship traveler will age more slowly'. If I can slow aging by 'dialing up gravity', or I can slow aging by traveling fast, why is there always the statement that 'the spaceship traveler will age more slowly'? Why does 1 g on a spaceship age you more slowly than 1 g on earth? Answer: Suppose that the traveller twin at $t = -t_0$ is passing by the Earth with $v = 0.9 c$, and turns its engines on, in order to meet his brother at Earth once again. To achieve this, the ship keeps the same acceleration $g$ until the moment of return.
At the moment $t_0$: $$\frac{dt}{d\tau} = \frac{1}{{(1-0.9^2)}^{1/2}} = 2.294$$ For a frame at uniform acceleration $g$: $$t = \frac{c}{g}\sinh\left(\frac{g\tau}{c}\right)$$ $$\frac{dt}{d\tau} = \cosh\left(\frac{g\tau}{c}\right)$$ When $\tau = \pm 45067942.31\,$s => $t = \pm 63206372.8\,$s and $\frac{dt}{d\tau} = 2.294$. So, the total time until the twins meet again is: $t' = 2 \times 45067942.31\,$s $= 2.86$ years for the traveller twin and $t = 2 \times 63206372.8\,$s $= 4$ years for the Earth twin. While both are under the same acceleration $g$, the gravitational potentials are very different. On Earth, it is enough to send a test mass faster than 11.2 km/s for it to escape our well and never return. In a uniformly accelerated frame, there is no escape. If at any moment during the trip a test mass is sent in the Earth's direction, with any velocity, one day the ship will meet it again, if the acceleration $g$ is never turned off. It is almost a one-directional black hole, in the sense that there is no escape for any mass in the direction of the acceleration.
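A quick numerical check of the figures above (a sketch, using $c = 3 \times 10^8$ m/s and $g = 9.81$ m/s$^2$, which the answer's numbers appear to assume):

```python
import math

c = 3.0e8               # m/s
g = 9.81                # m/s^2
year = 365.25 * 86400   # s

# Proper time at turn-around such that dt/dtau = cosh(g*tau/c) equals
# the gamma factor at v = 0.9c.
gamma = 1 / math.sqrt(1 - 0.9**2)        # ~2.294
tau0 = (c / g) * math.acosh(gamma)       # ~4.50e7 s
t0 = (c / g) * math.sinh(g * tau0 / c)   # ~6.31e7 s

# Round trip: ~2.85 years of proper time vs ~4.0 Earth years.
print(round(gamma, 3), round(2 * tau0 / year, 2), round(2 * t0 / year, 2))
```

The small differences from the quoted 45067942.31 s and 63206372.8 s come down to the precise values chosen for $c$, $g$ and the length of a year; the 2.86-vs-4-year conclusion is unchanged.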
{ "domain": "physics.stackexchange", "id": 69194, "tags": "general-relativity, special-relativity, time-dilation, equivalence-principle" }
Algorithm for solving electromagnetic problems using only forces
Question: Is there any fundamental issue with solving electromagnetic problems with the following algorithm? (practicality aside) i) Set position, velocity, mass and charge for a set of particles. ii) Compute the electric field at the position of every particle produced by all the other particles with Coulomb's law. iii) Compute the magnetic field at the position of every particle produced by all the other particles with the Biot-Savart law. iv) Move all the particles a differential amount using Newton's second law with the Lorentz force: for every particle i compute: $m \vec a = q(\vec E + \vec v \times \vec B)$ v) Go to step ii. Answer: Yes. At least two that I can see offhand: Coulomb's law only holds in electrostatics, meaning it does not hold true for moving charges, even those moving with a uniform velocity with respect to each other. This is because the electric field for a moving charge is no longer the "usual" $1/r^2$ electric field, as you can see in Chapter 26 of the Feynman Lectures (see Fig 26-4). The Biot-Savart law similarly only holds for magnetostatics, where you deal with steady currents. A single moving point charge is certainly not a steady current! Furthermore, since these fields aren't constant, you should also remember that changes in the electromagnetic field travel at the speed of light $c$. In other words, the charges will not sense an instantaneous force as you describe, but a retarded one, evaluated at the retarded time $t - r/c$, where $r$ is the distance between the charges. Now, you could do a little bit better by actually using the exact electric and magnetic fields of moving charges (these are derived in the chapter of the Feynman Lectures I linked above), taking into account the retardation, and then use the formula: $$\mathbf{F} = q (\mathbf{E + v \times B}),$$ but I also see a fourth problem: Accelerated charges radiate energy in the form of electromagnetic waves.
This emission causes a recoil force on the charged particle called the Abraham-Lorentz (or radiation reaction) force. You'd need to take this into account as well for a complete description. However, this too is only valid at speeds that are small compared to the speed of light $c$. Its relativistic version is the Abraham-Lorentz-Dirac force, I believe. But this sounds like a very complicated problem without making some assumptions first (taking the non-relativistic limit, etc.).
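For reference, the scheme described in the question (steps ii and iv only, i.e. instantaneous Coulomb forces plus Newtonian motion) can be sketched in a few lines. This is the naive algorithm whose limitations the answer lists, shown here only to make the setup concrete; the particle values are arbitrary:

```python
import numpy as np

K = 8.9875e9  # Coulomb constant, N*m^2/C^2

def step(pos, vel, q, m, dt):
    """One step of the question's naive scheme: instantaneous Coulomb
    forces only (no retardation, no magnetic field, no radiation reaction)."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = pos[i] - pos[j]
            acc[i] += K * q[i] * q[j] * r / (m[i] * np.linalg.norm(r) ** 3)
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# Two equal positive charges initially at rest, 1 m apart.
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.zeros_like(pos)
q = np.array([1e-6, 1e-6])   # C
m = np.array([1e-3, 1e-3])   # kg

d0 = np.linalg.norm(pos[1] - pos[0])
for _ in range(1000):
    pos, vel = step(pos, vel, q, m, dt=1e-4)
d1 = np.linalg.norm(pos[1] - pos[0])
print(d0, round(d1, 3))  # like charges drift apart: d1 > d0
```

This reproduces electrostatic behaviour correctly at low speeds, but, per the answer, it breaks down as soon as retardation, relativistic fields or radiation reaction matter.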
{ "domain": "physics.stackexchange", "id": 70734, "tags": "electromagnetism, computational-physics, simulations, coulombs-law" }
How deep is the region near an event horizon where Hawking radiation is generated?
Question: In other words, how strong does gravity have to be to cause Hawking radiation to occur? Answer: Hawking radiation can occur in conditions influenced by more than just gravity; temperature can also influence this (see Bogoliubov Theory of acoustic Hawking radiation in Bose-Einstein Condensates, A. Recati, N. Pavloff, I. Carusotto and Hawking radiation in a two-component Bose-Einstein condensate, P.-É. Larré and N. Pavloff). This means that gravity is but a single factor to consider when trying to establish the initial conditions for Hawking radiation. It's not clear this answers your question, so let's assume we're only looking at gravity. The event horizon typically sits at the boundary within which the gravitational object's escape velocity is equal to or greater than the speed of light. There are other definitions involving the pathways of light, but let's go with this one for now. This means that at the event horizon nothing with mass can escape, since nothing with mass can travel at the speed of light, and particles with 0 rest mass, such as photons, can at best remain in orbit, or exhibit some other odd behaviour such as red-shifting to match or exceed the circumference of the event horizon itself. In any event, they effectively disappear to an outside observer. Notwithstanding suggestions such as light having fractal properties, or debates about the fractal dimension of event horizons and hairy black holes: as you move away from the event horizon, let's assume, for the sake of illustration, that a simple decaying exponential gradient function describes how the black hole's influence is felt, with it being greatest near the event horizon and dying off exponentially the further you travel away (meaning as space-time bends less). Let's also assume the black hole is not spinning (which isn't likely, but it simplifies the illustration).
The particle pairs responsible for Hawking radiation are thought to arise anywhere in space, but for this radiation to actually occur (meaning be detectable), the black hole's influence must be felt on the pair. So the likelihood of this radiation occurring diminishes according to some function as you move further away from the event horizon, but is absolutely guaranteed on the event horizon boundary itself. This means you could, in theory, have Hawking radiation occurring infinitely far away, except that the likelihood of the black hole being able to wield influence on the particle pairs is so small at that range that the probability of this occurring or being detectable is effectively 0. Nevertheless, even from some great distance, as you approach your black hole the probability will slowly rise enough that at some point, and given sufficient time, you should be able to measure an occurrence. So, to answer your question, the strength of gravity sufficient for Hawking radiation to occur, given a particular black hole, is inversely proportional to the degree of patience you possess. The greater your patience, the weaker the gravity needs to be. The less your patience, the stronger it needs to be.
{ "domain": "physics.stackexchange", "id": 10565, "tags": "quantum-mechanics, cosmology, hawking-radiation, event-horizon" }
Improvement on data normalization
Question: I have a part of code that is loading a dataset and normalizing property values to [0, 1]. My implementation is: import pickle import numpy as np # -- load data prop_1 = list(np.random.rand(10)*20) prop_2 = list(np.random.rand(10)*10) prop_3 = list(np.random.rand(10)*30) # -- normalize l_bound = [] l_bound.append(min(prop_1)) l_bound.append(min(prop_2)) l_bound.append(min(prop_3)) u_bound = [] u_bound.append(max(prop_1)) u_bound.append(max(prop_2)) u_bound.append(max(prop_3)) prop_1 = (np.array(prop_1) - l_bound[0]) / (u_bound[0] - l_bound[0]) prop_2 = (np.array(prop_2) - l_bound[1]) / (u_bound[1] - l_bound[1]) prop_3 = (np.array(prop_3) - l_bound[2]) / (u_bound[2] - l_bound[2]) However, the normalizing part of the code does not look graceful. Any suggestions on how to improve it? Can do this using a loop? Answer: Since you have numpy arrays, you should use their vectorized methods wherever possible. This can make your code a lot faster: In [1]: x = np.arange(10000000) In [2]: %timeit max(x) 988 ms ± 42.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [3]: %timeit x.max() 9.67 ms ± 114 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) This includes not casting your arrays to list. 
I would also make this a function that normalizes a single array: import pickle import numpy as np from typing import Iterable, Any def normalize_one(x: Iterable[Any]) -> np.ndarray: if not isinstance(x, np.ndarray): x = np.array(list(x)) low, diff = x.min(), x.ptp() return (x - low) / diff # -- load data prop_1 = np.random.rand(10)*20 prop_2 = np.random.rand(10)*10 prop_3 = np.random.rand(10)*30 # -- normalize prop_1 = normalize_one(prop_1) prop_2 = normalize_one(prop_2) prop_3 = normalize_one(prop_3) If you do have many arrays that need to be normalized, you can always do it in a list comprehension: properties = [prop_1, prop_2, prop_3] properties = [normalize_one(prop) for prop in properties] If you have many of them and they all have the same structure, I would use something like this (now limited to numpy arrays as input): def normalize(x: np.ndarray, axis: int = 1) -> np.ndarray: """Normalize the array to lie between 0 and 1. By default, normalizes each row of the 2D array separately. """ low, diff = x.min(axis=axis), x.ptp(axis=axis) # Indexing needed to help numpy broadcasting return (x - low[:,None]) / diff[:,None] properties = np.random.rand(3, 10) properties[0] *= 20 properties[1] *= 10 properties[2] *= 30 properties = normalize(properties) For props = np.random.rand(10000, 10) I get the following timings: Author Timed function call Time Blade* list(normalize_blade(props)) 68.7 ms ± 749 µs Linny list(normalize_linny(*props)) 127 ms ± 1.42 ms Graipher [normalize_one(prop) for prop in props] 119 ms ± 7.4 ms Graipher normalize(props) 2.32 ms ± 113 µs The code I used for the test with the code in the OP is this one, which is just the generalization to many properties: def normalize_blade(properties): l_bound, u_bound = [], [] properties = [list(prop) for prop in properties] for prop in properties: l_bound.append(min(prop)) u_bound.append(max(prop)) for i, prop in enumerate(properties): yield (np.array(prop) - l_bound[i]) / (u_bound[i] - l_bound[i])
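One edge case worth flagging (my addition, not from the original thread): if an input array is constant, diff is zero and the division yields NaN. A small guard avoids that; the sketch below uses the np.ptp function form, since the ndarray .ptp() method was removed in NumPy 2.0:

```python
import numpy as np

def normalize_safe(x: np.ndarray) -> np.ndarray:
    """Scale x to [0, 1]; map a constant array to all zeros
    instead of dividing by zero."""
    low, diff = x.min(), np.ptp(x)
    if diff == 0:
        return np.zeros_like(x, dtype=float)
    return (x - low) / diff

print(normalize_safe(np.array([3.0, 3.0, 3.0])))   # no NaNs: [0. 0. 0.]
print(normalize_safe(np.array([0.0, 5.0, 10.0])))  # [0.  0.5 1. ]
```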
{ "domain": "codereview.stackexchange", "id": 37826, "tags": "python, numpy" }
What is the entropy change of a hatching egg?
Question: Treating the whole egg as the system. On one hand: The egg was only one cell, but when it becomes a full-grown chicken, it becomes a large number of cells, highly specialized to do their own jobs. So the system becomes more ordered. Thus, $\mathrm{d}S<0$. However, when we hatch the egg, we have to keep it at a certain temperature, so that the egg can absorb heat from the outside world, or $\mathrm{d}H>0$. If $\mathrm{d}S<0$, we get $\mathrm{d}G=\mathrm{d}H-T\mathrm{d}S>0$, which should not be possible. I'm just getting started with thermodynamics, so I want to know what I did wrong here. Answer: Thermodynamic entropy has nothing to do with purposefulness or number of cells. A million introductions to entropy notwithstanding, it is correlated with even well-defined "orderliness" only in systems in which we can pretend that the only energy in the system is the energy associated with thermal motion - a pretty good approximation for a heat engine, but a really bad approximation for most of the universe, which is full of dissipative processes. A dissipative process is a process that increases total system entropy faster than total system entropy would increase in the absence of the process, while at the same time making part of the system have less entropy than it would otherwise have. A system with dissipative processes (note again: defined by increasing entropy faster) is self-ordering - giving the lie to the idea that order and entropy are necessarily opposites. If you want to apply the 2nd Law to systems that include things other than heat sources, heat sumps, and the gases and idealized rigid walls of heat engines, I think it's easiest to define entropy in terms of the change of Gibbs free energy. This works well for anything from galactic superclusters to eggs. 
The egg is quietly dissipating chemical potential energy and in the process turning egg chemicals into chicken chemicals - some of which chicken chemicals will in turn have their own chemical potential energy dissipated and in the process turn stationary chicken parts into moving chicken parts capable of breaking out of the shell.
{ "domain": "physics.stackexchange", "id": 99219, "tags": "thermodynamics, entropy" }
Why $U(1)_Y$ hypercharge rather than $U(1)_\text{em}$ electromagnetism?
Question: In the Standard Model we have $SU(2)_I\times U(1)_Y$, where $U(1)_Y$ is weak hypercharge and $SU(2)_I$ is the symmetry group of weak isospin. Why do we introduce $U(1)_Y$ of weak hypercharge rather than $U(1)_\text{em}$ of electromagnetism? Answer: Short answer: to accurately model reality. Long answer: The weak interaction has several peculiar properties: (1) The $W$ bosons are vector bosons (so the weak theory is likely a gauge theory). (2) The $W$ bosons have electric charge. (3) The $W$ bosons have mass. (The $Z$ boson hadn't been observed experimentally; it was a prediction of the SM.) (4) The $W$ bosons couple chirally, meaning left-handed and right-handed fermions belong to different representations of the gauge group. In particular, left-handed fermions transform in the doublet representation, right-handed in the singlet (trivial) rep. (5) EM is a gauge interaction that couples vectorially. This leads to two problems that weren't understood before the 60s. The first problem is how to have massive gauge bosons (items 1 and 3). Naively a vector boson mass term violates gauge symmetry and unitarity. The second problem is how to get massive fermions (item 4). A fermion mass term couples left- and right-handed fermions, but those belong to different weak representations, so the fermion mass term also breaks gauge invariance. The realization of Higgs et al. and then later Glashow and Weinberg was that both of these problems can be solved by spontaneous symmetry breaking. The Higgs mechanism says that spontaneous symmetry breaking of a gauge symmetry leads to massive gauge bosons. And if the operator that gets a vacuum expectation value has the right quantum numbers, it can couple to the fermions in just the right way to make an effective fermion mass term. The questions are: what pattern of spontaneous symmetry breaking corresponds to reality, and what kind of operator should get a vev? Because of item 5, we need an SSB that leaves a $U(1)_{em}$ unbroken. 
Since there is only one Lie group with doublet irreps we also know the original gauge group should include an $SU(2)$ factor. Moreover, since the $W$ bosons have charge (item 2), the generator of $U(1)_{em}$ must not commute with some $SU(2)$ generators! To fix item 4 and make a gauge-invariant mass term we need an operator to get a vev which transforms in the doublet of $SU(2)$. This rules out the symmetry breaking pattern $SU(2)\rightarrow U(1)_{em}$ (there is no generator of $SU(2)$ that leaves a non-trivial doublet invariant, meaning a doublet always breaks $SU(2)\rightarrow {1}$ which doesn't leave room for a photon). The next simplest pattern to try is $SU(2)\times U(1)_Y\rightarrow U(1)_{em}$. We could achieve this with an uncharged doublet and $U(1)_Y = U(1)_{em}$, but then $[U(1)_{em},SU(2)]=0$ and the $W$ bosons wouldn't be charged. The only other way is to have the Standard Model pattern where the $U(1)_{em}$ generator is a linear combination of $U(1)_Y$ and some generator of $SU(2)$, which is indeed a possible pattern. So, the symmetry breaking pattern $SU(2)\times U(1)_Y \rightarrow U(1)_{em}$ with $Q = Y + T_3$ is the minimal SSB pattern that matches general weak phenomenology. It's not the only possible mechanism, so it's really very nice that nature chose this one. It's also nice because it predicts the $Z$ boson.
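As a quick sanity check on the relation $Q = Y + T_3$ used above (note: many texts instead write $Q = T_3 + Y/2$, which is just a rescaling of $Y$), the first-generation left-handed fermions come out with the right electric charges:

```python
# Charge check in the convention Q = T3 + Y used in the answer.
# name: (T3, Y, expected electric charge Q)
fields = {
    "nu_L": (+1/2, -1/2,  0),
    "e_L":  (-1/2, -1/2, -1),
    "u_L":  (+1/2, +1/6, +2/3),
    "d_L":  (-1/2, +1/6, -1/3),
    "W+":   (+1,    0,   +1),   # charged because [U(1)_em, SU(2)] != 0
}
for name, (t3, y, q) in fields.items():
    assert abs((t3 + y) - q) < 1e-12, name
    print(f"{name:4s}: Q = T3 + Y = {t3 + y:+.3f}")
```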
{ "domain": "physics.stackexchange", "id": 31133, "tags": "standard-model, symmetry-breaking, higgs, electroweak, isospin-symmetry" }
Sql query slow on Chembl postgres database
Question: Doing a complex sql query on the Chembl database takes 40 minutes on the machine from the lab. I was thinking of using dask, extracting each table and exporting to parquet, then doing the data processing. Is this a good approach or is there a better alternative? Answer: I suspect the issue is you want to parallelise (multi-threading really) the job to reduce run time. parquet is recommended. It is simply an archive format that allows compression and is often used with a pandas dataframe. Use parquet to store data by all means, but that's easy via pyarrow and the pandas command df.to_parquet('/User/OP/location/file.parquet', engine='pyarrow', compression='gzip') You will need to install pyarrow via pip or conda; there is a faster version (I forget its name, something like fastparquet, used by default). Yeah, it's a very good archiving format, in fact it's an industrial standard and I should use it more. Python has loads of ways to parallelise, of which dask is one. dask is an entire framework. To leverage dask to parallelise postgres I would refer to this post. I don't use dask, but I am aware it is a solution in high performance Python. Personally I would use this approach directly via multiprocessing: from psycopg2 import connect import multiprocessing ... etc ... I would suggest it's horses for courses; I simply know how to multiprocess in Python so I don't bother with dask. To answer the question in summary: parquet is the best, if SPEED is not important. If speed is important, then feather. dask is mainstream within high performance Python and it's doable. There's no reason not to. There are other multithreaded Postgres approaches and that's what I would do, but if you are just getting into parallelisation you may as well use dask. It will take some learning though. If you're not into parallelisation and you're not writing a data pipeline for publication, I would simply consider getting a machine with more RAM. 
It shouldn't be taking that length of time; parallelisation will solve the run time - no question - but it's the time it will take to get to grips with a parallel solution. What is likely occurring is that the job and database size exceed the RAM capacity of the machine. At a wild random guess, 16 or 32 GB of RAM would collapse the runtime to a few minutes, and that's what you are looking for. From the comments. parquet is really for data compression and archiving. The issue is I/O is reasonably fast but not lightning fast. It is probably slower than pickle (it will be slower if you're compressing via gzip as well). For a 'temporary' save (a couple of months) and really fast I/O, it's feather. There is no dispute about that. If you are keeping your data a long time and need to save space, it's parquet. Performance-wise feather is way faster than parquet. I've never used feather BTW; I use pickle or parquet. To give some idea, a 500 MB flat file compressed and archived via parquet resulted in a 50 MB file, and took several minutes to write/import. That's because it's doing loads of stuff. My suspicion is that this is too slow for you.
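To make the memory point concrete, here is a minimal sketch (my addition) of streaming a query in fixed-size chunks so the full result set never has to sit in RAM. The table and column names are made up, and an in-memory SQLite database stands in for Postgres so the example runs anywhere; with psycopg2 the same idea is a server-side cursor, and with pandas it is pd.read_sql(..., chunksize=...), writing each chunk to parquet:

```python
import sqlite3

# Hypothetical stand-in for a Chembl table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE activities (molregno INTEGER, standard_value REAL)")
conn.executemany(
    "INSERT INTO activities VALUES (?, ?)",
    [(i, i * 0.5) for i in range(10_000)],
)

cur = conn.execute("SELECT molregno, standard_value FROM activities")
total_rows = 0
while True:
    chunk = cur.fetchmany(1000)   # bounded memory: at most 1000 rows at a time
    if not chunk:
        break
    total_rows += len(chunk)      # ...process / write the chunk to parquet here

print(total_rows)  # 10000
```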
{ "domain": "bioinformatics.stackexchange", "id": 2531, "tags": "python, database, data-retrieval, cheminformatics" }
Moment of Inertia of an L-shaped object
Question: A uniform thin bar is formed into an L-shaped object of mass $m=2.5\ \mathrm{kg}$ with a longer side of length $l=0.8\ \mathrm{m}$ and a shorter side of length $l/2$. Initially the object is positioned with one end at the origin and the longer side along the $x$ axis. The centre of mass of the object has coordinates $r_{cm}=\frac{2l}{3} \hat i -\frac{l}{12} \hat j$. I should also add that the object is held in position by a massless wire that makes an angle $\phi=50^{\circ}$ with the longer side of the object. The object is attached to the wall by a pivot (at the origin). Compute the moment of inertia of the object about an axis through the pivot perpendicular to the plane of the object. I know that moment of inertia is equal to $I=r^2 m$. I broke down the moment of inertia into two components, one calculating $I_1$ over the longer side of the L (length $l$), and the other calculating $I_2$ over the shorter side of the L (length $\frac{l}{2}$). However, the correct answer provided clearly states that $I_1=\frac{2}{9}ml^2$ and $I_2= \frac{13}{12}\frac{1}{3} ml^2$ (using the parallel axis theorem). I don't know how to achieve these results; my reasoning behind calculating $I_1$ is: Since $I=mr^2$, $I_1=\frac{2}{3}m l^2$ (since the longer side is twice as long as the shorter side), which clearly gives me an incorrect result. If someone could explain the logic behind calculating the total moment of inertia of this type of object that would be great! Answer: A diagram of the situation: The moment of inertia of the entire system can be written as the sum of the moment of inertia of each element about its own centre of mass, plus a component due to the fact that each element is rotating about a point that is not its centre of mass. This gives us for $I_1$: $$\begin{align}I_1 &= \frac{1}{12} \frac{2m}{3} L^2 + \frac{2m}{3} \left(\frac{L}{2}\right)^2\\ &= \frac29 m L^2\end{align}$$ as given in your solution. A similar approach can be used for $I_2$
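The parallel-axis bookkeeping is easy to verify numerically. The sketch below (my addition) discretises each bar into point masses and sums $r^2\,dm$ about the pivot; it reproduces $I_1=\frac{2}{9}ml^2$ and $I_2=\frac{13}{36}ml^2$ (i.e. $\frac{13}{12}\cdot\frac{1}{3}ml^2$). The geometry is assumed from the given centre of mass: longer bar along $+x$ from the pivot, shorter bar hanging in $-y$ from the far end.

```python
import numpy as np

m, l = 2.5, 0.8
n = 200_000                 # point masses per bar
lam = m / (1.5 * l)         # uniform linear density: total length 3l/2 carries mass m

# Longer bar: points (x, 0) with x in [0, l]; I = sum of r^2 dm about the origin.
x = np.linspace(0.0, l, n)
I1 = np.sum(lam * (l / n) * x**2)

# Shorter bar: points (l, y) with y in [-l/2, 0]; r^2 = l^2 + y^2.
y = np.linspace(-l / 2, 0.0, n)
I2 = np.sum(lam * (l / 2 / n) * (l**2 + y**2))

print(I1, 2 / 9 * m * l**2)     # both ≈ 0.3556
print(I2, 13 / 36 * m * l**2)   # both ≈ 0.5778
```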
{ "domain": "physics.stackexchange", "id": 24288, "tags": "homework-and-exercises, moment-of-inertia" }
catkin: multiple "undefined reference to ros::xyz'"
Question: I had to convert a pure CMAKE-Project to catkin, as I needed to use ROS. Therefore I tried to alter the CMakeLists to fit catkin. Unfortunately there are lots of ROS errors now, which leads me to believe there is an error, or a couple of errors, still within. The order of find_package/include_directories/catkin_package before add_executable seems to be right and there are lots of target_link_libraries. Currently there is no exact point for me to start from; if you find any, please show me, I would really like to understand this. Deleted the build and devel folders to start a clean catkin_make, with no visible change. Additional Info: I am using a custom CUDA kernel, that's why there needs to be CUDA support and .cu files. Also the camera "Zed 2" comes with its own package, which I tried to include, but am not sure how well that is working. After re-reading everything I noticed that I use "image_transport" in my code and didn't see it in the CMakeLists; added it with no current change in the error result, just FYI. 
Build output: [ 85%] Linking CXX executable /home/xavier/catkin_ws/devel/lib/fire_monitoring/fire_monitoring CMakeFiles/fire_monitoring.dir/src/main.cpp.o: In function `main': main.cpp:(.text+0x138): undefined reference to `ros::init(int&, char**, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int)' main.cpp:(.text+0x14c): undefined reference to `ros::console::g_initialized' main.cpp:(.text+0x150): undefined reference to `ros::console::g_initialized' main.cpp:(.text+0x16c): undefined reference to `ros::console::initialize()' main.cpp:(.text+0x1bc): undefined reference to `ros::console::initializeLogLocation(ros::console::LogLocation*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, ros::console::levels::Level)' main.cpp:(.text+0x200): undefined reference to `ros::console::setLogLocationLevel(ros::console::LogLocation*, ros::console::levels::Level)' main.cpp:(.text+0x20c): undefined reference to `ros::console::checkLogLocationEnabled(ros::console::LogLocation*)' main.cpp:(.text+0x274): undefined reference to `ros::console::print(ros::console::FilterBase*, void*, ros::console::levels::Level, char const*, int, char const*, char const*, ...)' main.cpp:(.text+0x280): undefined reference to `ros::Rate::Rate(double)' main.cpp:(.text+0x2b0): undefined reference to `ros::ok()' main.cpp:(.text+0x510): undefined reference to `ros::spinOnce()' main.cpp:(.text+0x518): undefined reference to `ros::Rate::sleep()' main.cpp:(.text+0x528): undefined reference to `ros::spin()' CMakeFiles/fire_monitoring.dir/src/main.cpp.o: In function `Functions::Functions()': main.cpp:(.text._ZN9FunctionsC2Ev[_ZN9FunctionsC5Ev]+0x268): undefined reference to `ros::NodeHandle::NodeHandle(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, 
std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > const&)' main.cpp:(.text._ZN9FunctionsC2Ev[_ZN9FunctionsC5Ev]+0x294): undefined reference to `image_transport::ImageTransport::ImageTransport(ros::NodeHandle const&)' main.cpp:(.text._ZN9FunctionsC2Ev[_ZN9FunctionsC5Ev]+0x5d8): undefined reference to `image_transport::ImageTransport::advertise(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int, bool)' main.cpp:(.text._ZN9FunctionsC2Ev[_ZN9FunctionsC5Ev]+0x664): undefined reference to `ros::Publisher::~Publisher()' main.cpp:(.text._ZN9FunctionsC2Ev[_ZN9FunctionsC5Ev]+0x814): undefined reference to `ros::Publisher::~Publisher()' main.cpp:(.text._ZN9FunctionsC2Ev[_ZN9FunctionsC5Ev]+0x820): undefined reference to `ros::Subscriber::~Subscriber()' main.cpp:(.text._ZN9FunctionsC2Ev[_ZN9FunctionsC5Ev]+0x838): undefined reference to `image_transport::ImageTransport::~ImageTransport()' main.cpp:(.text._ZN9FunctionsC2Ev[_ZN9FunctionsC5Ev]+0x84c): undefined reference to `ros::NodeHandle::~NodeHandle()' CMakeFiles/fire_monitoring.dir/src/main.cpp.o: In function `Functions::~Functions()': main.cpp:(.text._ZN9FunctionsD2Ev[_ZN9FunctionsD5Ev]+0x98): undefined reference to `ros::Publisher::~Publisher()' main.cpp:(.text._ZN9FunctionsD2Ev[_ZN9FunctionsD5Ev]+0xa4): undefined reference to `ros::Subscriber::~Subscriber()' main.cpp:(.text._ZN9FunctionsD2Ev[_ZN9FunctionsD5Ev]+0xbc): undefined reference to `image_transport::ImageTransport::~ImageTransport()' main.cpp:(.text._ZN9FunctionsD2Ev[_ZN9FunctionsD5Ev]+0xc8): undefined reference to `ros::NodeHandle::~NodeHandle()' CMakeFiles/fire_monitoring.dir/src/main.cpp.o: In function `ros::Publisher 
ros::NodeHandle::advertise<std_msgs::Int32_<std::allocator<void> > >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int, bool)': main.cpp:(.text._ZN3ros10NodeHandle9advertiseIN8std_msgs6Int32_ISaIvEEEEENS_9PublisherERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEjb[_ZN3ros10NodeHandle9advertiseIN8std_msgs6Int32_ISaIvEEEEENS_9PublisherERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEjb]+0x94): undefined reference to `ros::NodeHandle::advertise(ros::AdvertiseOptions&)' /home/xavier/catkin_ws/devel/lib/libCudaLib.a(CudaLib_generated_firekernel.cu.o): In function `__sti____cudaRegisterAll()': /tmp/tmpxft_00005152_00000000-5_firekernel.cudafe1.stub.c:2: undefined reference to `__cudaRegisterLinkedBinary_45_tmpxft_00005152_00000000_6_firekernel_cpp1_ii_924cc440' collect2: error: ld returned 1 exit status fire_monitor/CMakeFiles/fire_monitoring.dir/build.make:226: recipe for target '/home/xavier/catkin_ws/devel/lib/fire_monitoring/fire_monitoring' failed make[2]: *** [/home/xavier/catkin_ws/devel/lib/fire_monitoring/fire_monitoring] Error 1 CMakeFiles/Makefile2:457: recipe for target 'fire_monitor/CMakeFiles/fire_monitoring.dir/all' failed make[1]: *** [fire_monitor/CMakeFiles/fire_monitoring.dir/all] Error 2 Makefile:140: recipe for target 'all' failed make: *** [all] Error 2 Invoking "make -j6 -l6" failed CMakeLists.txt: cmake_minimum_required(VERSION 3.0.2) project(fire_monitoring) list(APPEND SAMPLE_LIST ${PROJECT_NAME}) option(LINK_SHARED_ZED "Link with the ZED SDK shared executable" ON) if (NOT LINK_SHARED_ZED AND MSVC) message(FATAL_ERROR "LINK_SHARED_ZED OFF : ZED SDK static libraries not available on Windows") endif() find_package(catkin REQUIRED COMPONENTS roscpp rosconsole nodelet std_msgs ) find_package(ZED 3 REQUIRED) find_package(OpenCV REQUIRED) find_package(GLUT REQUIRED) find_package(GLEW REQUIRED) find_package(OpenGL REQUIRED) find_package(CUDA REQUIRED) IF(NOT WIN32) SET(SPECIAL_OS_LIBS 
"pthread" "X11") add_definitions(-Wno-write-strings) ENDIF() catkin_package( CATKIN_DEPENDS roscpp rosconsole std_msgs ) ########### ## Build ## ########### ## Specify additional locations of header files ## Your package locations should be listed before other locations include_directories(${ZED_INCLUDE_DIRS}) include_directories(${OpenCV_INCLUDE_DIRS}) include_directories(${GLUT_INCLUDE_DIR}) include_directories(${GLEW_INCLUDE_DIRS}) include_directories(${CUDA_INCLUDE_DIRS}) include_directories(${CMAKE_CURRENT_SOURCE_DIR}/include) include_directories(${catkin_INCLUDE_DIRS}) link_directories(${ZED_LIBRARY_DIR}) link_directories(${OpenCV_LIBRARY_DIRS}) link_directories(${GLUT_LIBRARY_DIRS}) link_directories(${GLEW_LIBRARY_DIRS}) link_directories(${OpenGL_LIBRARY_DIRS}) link_directories(${CUDA_LIBRARY_DIRS}) FILE(GLOB_RECURSE CUDA_FILES cu/*.cu) FILE(GLOB_RECURSE SRC_FILES src/*.c*) FILE(GLOB_RECURSE HDR_FILES include/*.h* include/sl/*.h*) message("SRC files: ${SRC_FILES}") message("HDR files: ${HDR_FILES}") message("Cuda files: ${CUDA_FILES}") set(CUDA_ARCH_BIN " 30 " CACHE STRING "Specify 'real' GPU arch to build binaries for, BIN(PTX) format is supported. Example: 1.3 2.1(1.3) or 13 21(13)") set(CUDA_ARCH_PTX "" CACHE STRING "Specify 'virtual' PTX arch to build PTX intermediate code for. 
Example: 1.0 1.2 or 10 12") set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} "-Xcompiler;-fPIC;-std=c++11") set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} "--ftz=true;--prec-div=false;--prec-sqrt=false; -rdc=true") SET(CUDA_VERBOSE_BUILD ON CACHE BOOL "nvcc verbose" FORCE) SET(CUDA_LIB_TYPE STATIC) CUDA_ADD_LIBRARY(CudaLib ${CUDA_LIB_TYPE} cu/firekernel.cu) CUDA_COMPILE(cuda_objs ${CUDA_FILES}) #cuda_add_executable(${PROJECT_NAME} ${CUDA_FILES}) add_executable(${PROJECT_NAME} ${HDR_FILES} ${SRC_FILES} ${CUDA_FILES}) add_dependencies(${PROJECT_NAME} CudaLib) add_definitions(-std=c++14) if (LINK_SHARED_ZED) SET(ZED_LIBS ${ZED_LIBRARIES} ${CUDA_CUDA_LIBRARY} ${CUDA_CUDART_LIBRARY} ${CUDA_DEP_LIBRARIES_ZED}) else() SET(ZED_LIBS ${ZED_STATIC_LIBRARIES} ${CUDA_CUDA_LIBRARY} ${CUDA_LIBRARY}) endif() target_link_libraries(${PROJECT_NAME} ${catkin_LIBARIES} ${SPECIAL_OS_LIBS} ${ZED_LIBS} ${OpenCV_LIBRARIES} ${OPENGL_LIBRARIES} ${GLUT_LIBRARY} ${GLEW_LIBRARIES} CudaLib) Originally posted by kremerf on ROS Answers with karma: 16 on 2021-01-11 Post score: 0 Original comments Comment by gvdhoorn on 2021-01-11: just a comment: I had to convert a pure CMAKE-Project to catkin, as I needed to use ROS. you write this as if it's an unavoidable consequence or requirement, but that's not necessarily the case. It's perfectly possible for ROS packages to depend on pure CMake packages, as long as those packages export appropriate library targets and install their headers in the expected locations (this is no different from what they'd need to do to be able to use them as dependencies of any other CMake project). That would be no different from depending on any other system dependencies with a ROS package. So an alternative approach would be to make use of whatever functionality is provided by your "pure CMAKE-Project" as-if it were a regular system dependency, and not convert it to a Catkin project at all. I believe keywords would be "ros wrapper" then. 
Comment by kremerf on 2021-01-11: You are absolutely right. I figured this way might be easier, as I haven't written a ros wrapper yet - that was what I meant. But I will look into it, hopefully getting a result in both. Comment by gvdhoorn on 2021-01-11: A ROS wrapper is nothing special or complicated. It just means you don't embed ROS in your business logic, and link against the libraries which provide your business logic, instead of directly building a set of source files. Comment by JackB on 2021-01-11: @kremerf see this repo. This has been previously discussed as a resource for people looking to learn how to use ROS outside of a workspace. For people (like myself) not too familiar with build systems outside of ROS I found it especially helpful. A little more discussion on this can be found here Incorporating ROS without using catkin for C++ projects Comment by gvdhoorn on 2021-01-11: Please note: while potentially helpful, that repository (and approach) was not what I referred to in my comment as a "ros wrapper". Comment by kremerf on 2021-01-12: I am currently looking at this ros wrapper tutorial to see whether it works or not. Will come back with more info. Comment by gvdhoorn on 2021-01-12: I don't believe that page / tutorial shows you how to depend on a system dependency. It seems to assume "the motor driver" is contained in a single .cpp and .h pair, which are compiled from-source as part of the ROS node. That's OK, if you have something that simple. For anything else, you're probably going to have to link against whichever libraries your dependency exports, add its headers to your include_path(..) and #include them in your .cpp files. But just to make sure: a "ROS wrapper" is really nothing special. There are no special tricks. Just consider your plain-CMake package similar to Boost or OpenCV: do you go and copy-paste parts of ROS into Boost or OpenCV? No. 
You make use of whatever Boost functionality you need by #include-ing Boost headers and by target_link_library(..)-ing against Boost's libraries. That's all. Comment by kremerf on 2021-01-12: That felt a little short to me as well. I have a couple .h .cpp and .cu files and other stuff that expands from there. It is more than just a couple files. Answer: After re-examining my code I noticed that the project I built around the ZED SDK was basically already some sort of wrapper itself. Noticing that, I tried to make it work with catkin itself, rather than writing something else instead. How? Well, it turns out that when you work with a CMakeLists you should really get to know that file. I remade it several times without success, then googled every single line and thought about it with a colleague (I cannot stress enough how much of a help another mind in this is, when you are not that familiar with catkin). Today I revisited my CMakeLists and commented in and out every single package and line to see whether I had included some non-crucial stuff. Re-ordered it (as there are lots of people on the internet who tell you to include or find_package at the end of the file... turns out that was unnecessary for me). So now I am ready to go with a fresh looking file. Just because I'd like to see additional code fragments myself if I had this kind of error: project(name LANGUAGES CXX CUDA) #Didn't find mine, so here goes set(OpenCV_DIR __path__) # included some ROS-Packages here, if you do not need any of them, just throw them out find_package(catkin REQUIRED COMPONENTS roscpp rosconsole nodelet image_transport sensor_msgs stereo_msgs std_msgs ) # Other important stuff, like ZED, CUDA, OpenCV, ... find_package(__other stuff__) # almost the same as with the catkin components catkin_package( INCLUDE_DIRS ${OpenCV_INCLUDE_DIRS} CATKIN_DEPENDS roscpp rosconsole sensor_msgs stereo_msgs image_transport ) # Not necessary, but I do not like to have paths to files in my lists.. 
set(NODE_SRC ${CMAKE_CURRENT_SOURCE_DIR}/src/main.cpp) #All packages you need again.. include_directories( ${catkin_INCLUDE_DIRS} ${CUDA_INCLUDE_DIRS} ${ZED_INCLUDE_DIRS} ${OpenCV_INCLUDE_DIRS} ${GLUT_INCLUDE_DIR} ${GLEW_INCLUDE_DIRS} ${OPENGL_INCLUDE_DIRS} ) # Again, no paths for me please FILE(GLOB CUDA_FILES cu/*.cu) # If you use cuda, just put everything here, works just fine. You probably could split it, but the computer seems to know which actual compiler to use I guess, the files have explicit endings.. cuda_add_executable(name ${NODE_SRC} ${CUDA_FILES}) #Important again TARGET_LINK_LIBRARIES(name ${catkin_LIBRARIES} ${OpenCV_LIBRARIES} ${GLUT_LIBRARIES} ${GLEW_LIBRARIES} ${OPENGL_LIBRARIES} ${ZED_LIBRARIES} ) I omitted these: #set(CUDA_ARCH_BIN " 30 " CACHE STRING "Specify 'real' GPU arch to build binaries for, BIN(PTX) format is supported. Example: 1.3 2.1(1.3) or 13 21(13)") #set(CUDA_ARCH_PTX "" CACHE STRING "Specify 'virtual' PTX arch to build PTX intermediate code for. Example: 1.0 1.2 or 10 12") #set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} "-Xcompiler;-fPIC;-std=c++11") #set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} "--ftz=true;--prec-div=false;--prec-sqrt=false; -rdc=true") as they are used by CUDA_COMPILE - if I am not mistaken - and the compiling "just works". This may be the case for me, but it might not for you, so just letting you know. With my beautiful lack of knowledge I figured this now very clean file (and the according package.xml) should be able to be automatically generated by a smart piece of software, but maybe not for more complex ones - if someone knows, please let me know. It is working for me now, so I'll close this one. Originally posted by kremerf with karma: 16 on 2021-01-18 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by kremerf on 2021-01-18: Well, I would accept it, but I do not have >10 Points.
{ "domain": "robotics.stackexchange", "id": 35950, "tags": "ros-melodic, ubuntu, ubuntu-bionic" }
Writing a date checker in C
Question: I'm currently working on writing a date checker function in C and wondering how it can be improved upon. Q: Write a function that is passed a month, day, and year and will determine if that date is valid. You can assume each parameter is passed in as an integer. Remember to check for leap year. I currently have this and want to see if there are any beginner mistakes, if pointers could improve efficiency, or redundancies in my code.

#include <stdio.h>
#include <stdbool.h>

bool is_valid_date(int month, int day, int year)
{
    //Validate year
    int days_on_month[] = {31,28,31,30,31,30,31,31,30,31,30};
    if (year < 0 || year > 2017)
    {
        //year not valid , stop validation , date is invalid
        return false;
    }
    bool isLeapYear;
    //Check for leap year , every 4 years there is a leap year
    if (year % 4 == 0)
    {
        //if year is evenly divisble by 100 it must also be by 400
        if (year % 100 == 0)
        {
            if (year % 400 == 0)
            {
                isLeapYear = true;
            } else {
                isLeapYear = false;
            }
        } else {
            isLeapYear = true;
        }
    }
    //Validate months;
    if (month < 1 || month > 12)
    {
        // not valid , stop validation , date is invalid
        return false;
    }
    //validate day
    if (month != 2)
    {
        //Valid Date
        if (day >= 1 && day <= days_on_month[month - 1]) { return true; }
    }
    //February (changes no of days on leap year)
    else
    {
        if (isLeapYear)
        {
            if (day >= 1 && day <= 29) { return true; }
        } else {
            if (day >= 1 && day <= 28) { return false; }
        }
    }
    return false;
}

// test if function works
int main(void)
{
    int day = 28;
    int month = 12;
    int year = 2012;
    printf("Is it true? Yes = 1 No = 0! = %d\n", is_valid_date(day, month, year));
    return 0;
}

Answer: Some quick formatting tips: You use whitespace sparingly. In the example code, there is a lot of room for more whitespace that makes the code more readable. Something like:

if (year % 400 == 0) {
    isLeapYear = true;
}

It makes the code easier to read. Also, and this is not a recommendation but just an option you may want to consider, you can move the if-else to a new line.
I've always found that easier to read, and I've noticed that some (not all) teachers of mine tend to find that good formatting. So for example:

if(year%400==0)
{
    isLeapYear=true;
}else{
    isLeapYear=false;
}

This code can be rewritten as:

if(year%400==0)
{
    isLeapYear=true;
}
else{
    isLeapYear=false;
}

As an aside, upon receiving further advice from co-users on CodeReview in the comments, it is better to use curly braces, even if they are solely for one command in an if-statement, as it will help in detecting the scope of the if-statement with more ease.
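Beyond formatting, a few substantive bugs in the posted code are worth flagging in review: the days_on_month array has only eleven entries (December's 31 is missing), main() passes the arguments as (day, month, year) into a (month, day, year) signature, isLeapYear is read uninitialized for years not divisible by 4, and the non-leap February branch returns false even for valid days 1-28. A compact sketch of the corrected logic (my illustration, written in Python for brevity; the structure carries straight back to C):

```python
def is_leap_year(year):
    # Leap if divisible by 4, except centuries, which must also be divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def is_valid_date(month, day, year):
    # Twelve entries this time, December included.
    days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    if not (0 <= year <= 2017 and 1 <= month <= 12):
        return False
    limit = days_in_month[month - 1]
    if month == 2 and is_leap_year(year):
        limit = 29
    return 1 <= day <= limit
```

Collapsing the nested leap-year if-else into a single boolean expression also removes the uninitialized-variable hazard entirely.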
{ "domain": "codereview.stackexchange", "id": 25020, "tags": "c, datetime, validation" }
Do biological brains compute using quantum mechanics?
Question: Someone on ResearchGate said the answer was no: Our brain is a neural network with a very complex connectome. Any system is in a sense a computer adhering to quantum mechanics, but what is known about the human brain doesn't tell us it uses quantum-mechanical effects such as entangled states and superposition states as an essential element of computation. So the human brain is more like a classical computer. Or is the brain not like a Turing machine? Answer: As far as we know - and I know, so correct me anyone if there's research to the contrary - the neuron interactions in the brain are well within the classical regime. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5681944/
{ "domain": "quantumcomputing.stackexchange", "id": 1327, "tags": "quantum-turing-machine, quantum-biology, biocomputing" }
Why don't we just say that the Klein Gordon equation describes a two component complex function?
Question: These vectors form the basis vectors of the field that the KG equation describes: (for each $\vec{p}$ in $R^3$): $$|e^{i\vec{p} \cdot \vec{x}} , E=+\sqrt {p^2+m^2}\rangle$$ $$|e^{i\vec{p} \cdot \vec{x}} , E=-\sqrt {p^2+m^2}\rangle$$ Clearly, any single component function $\psi (x)$ can be expressed as a linear combination of only half of these basis vectors. If we utilise all of these basis vectors in a linear combination, we will end up describing a two component function ($\psi _1 (x), \psi _2 (x)$), not one! The energy $E$ serves as a label to double the number of basis vectors!! This equation isn't about a single component function. To put it another way, given a single component initial state $\psi (x)$, you will never know how to evolve it in time using the KG equation! This is because there will be ambiguity about whether to attach $e^{-iEt}$ or $e^{+iEt}$ in the evolution, because each momentum eigenstate is degenerate. This ambiguity goes away with the two component KG equation: $$\frac {d \psi _1 (x)}{dt}= +\sqrt {P^2+m^2} \psi _1 (x)$$ $$\frac {d \psi _2 (x)}{dt}= -\sqrt {P^2+m^2} \psi _2 (x)$$ Answer: You seem confused about what's going on because you do not clearly distinguish between spatial $\vec x$ and total 4-position $x$. It is true that any function $f(\vec x)$ can be expressed as a "superposition" of either $\mathrm{e}^{\mathrm{i}\vec p\cdot \vec x}$ or its conjugate - that's just the Fourier transform. It is false that this has anything to do with the Klein-Gordon equation - solutions to the KG equation are functions $f(x)$ on spacetime, and you cannot express such a function as a superposition of $\mathrm{e}^{\mathrm{i}\vec p\cdot \vec x}$s. 
You could do a 4d Fourier transform and express arbitrary functions on spacetime as "superpositions" of $\mathrm{e}^{\mathrm{i} p\cdot x}$, but note that in this case $p^0$ would be wholly independent from the other components of momentum, since the Fourier transform doesn't know anything about the mass shell. Instead, the claim about the solution to the Klein-Gordon equation is the following: Any solution $f(x)$ can be expressed as $$ f(x) = \int \left(A(\vec p)\mathrm{e}^{\mathrm{i}(\vec x\cdot \vec p - tE_p)} + B(\vec p)\mathrm{e}^{-\mathrm{i}(\vec x\cdot \vec p - tE_p)}\right)\frac{1}{2 E_p}\frac{\mathrm{d}^3p}{(2\pi)^3}$$ for arbitrary functions $A(\vec p)$ and $B(\vec p)$. Here we have written the exponentials explicitly in terms of $E_p$, since the shorthand $\mathrm{e}^{\mathrm{i}px}$ seems to be core to the confusion in the question. This is not a Fourier transform, it really is superposing the specific "basic" solutions $\mathrm{e}^{\mathrm{i}(\vec x\cdot \vec p - tE_p)}$ and $\mathrm{e}^{-\mathrm{i}(\vec x\cdot \vec p - tE_p)}$, and there are lots of functions $f(x)$ that cannot be expressed in this form - if a function has this form, then it is a solution to the Klein-Gordon equation. Additionally, both summands are needed - there is no way to claim we would only need "half" of this. As for turning the Klein-Gordon equation into a first-order equation: Your "two-component" solution at the end doesn't really make a lot of sense - what is "$\sqrt{P^2+m^2}$" supposed to be, after all? For an arbitrary function $\psi(x)$, there is no such thing as "$P^2$" - all you can take there is the differential operator $\partial_i\partial^i$, and what even is the square root of a differential operator? You can try to make this work but it's really not something that would be computationally tractable even if you manage to make it well-defined. 
Nevertheless, this line of thought ("how do I turn the KG equation into a first-order equation?") is precisely what led Dirac to the Dirac equation. It turns out that in order to have an operator that squares nicely to the KG equation in every component, you need to have four components, not just two, but otherwise, that's exactly what Dirac did - every solution of the Dirac equation (where $\psi$ is a 4-component object whose components are coupled through this equation via the action of the $\gamma$-matrices) $$ (\mathrm{i}\partial_\mu\gamma^\mu - m) \psi(x) = 0$$ also has the property that every component of $\psi(x)$ individually fulfills the KG equation.
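For completeness, the "squaring" step alluded to here is a one-line Clifford-algebra computation (standard material, added for the reader): acting with the conjugate operator on the Dirac equation and using $\{\gamma^\mu,\gamma^\nu\}=2\eta^{\mu\nu}$ gives $$(-\mathrm{i}\gamma^\nu\partial_\nu - m)(\mathrm{i}\gamma^\mu\partial_\mu - m)\psi = \left(\gamma^\nu\gamma^\mu\partial_\nu\partial_\mu + m^2\right)\psi = \left(\tfrac{1}{2}\{\gamma^\nu,\gamma^\mu\}\partial_\nu\partial_\mu + m^2\right)\psi = \left(\partial_\mu\partial^\mu + m^2\right)\psi = 0,$$ which is exactly the Klein-Gordon equation holding componentwise.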
{ "domain": "physics.stackexchange", "id": 87430, "tags": "quantum-mechanics, antimatter, klein-gordon-equation" }
Webrtc_ros fatal error: adapted_video_track_source.h: No such file or directory
Question: Can anyone help me solve this problem? I tried using the error message to search on Google but found no solution. Thanks! My system: Windows 10 / VMware / Ubuntu 18.04 / ROS Melodic. My webrtc install steps:

cd ~
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
sudo vim ~/.bashrc
export PATH="$PATH:~/depot_tools"
source ~/.bashrc
mkdir webrtc-checkout
cd webrtc-checkout
fetch --nohooks webrtc
gclient sync

My webrtc_ros install steps:

cd catkin_ws/src
git clone https://github.com/RobotWebTools/webrtc_ros.git
cd ..
catkin_make -DCATKIN_WHITELIST_PACKAGES="webrtc_ros"

Below is my error log:

[ 64%] Building CXX object webrtc_ros/webrtc_ros/CMakeFiles/webrtc_ros_server.dir/src/ros_video_capturer.cpp.o
In file included from /home/jetbot/catkin_ws/src/webrtc_ros/webrtc_ros/src/ros_video_capturer.cpp:1:0:
/home/jetbot/catkin_ws/src/webrtc_ros/webrtc_ros/include/webrtc_ros/ros_video_capturer.h:6:10: fatal error: webrtc/media/base/adapted_video_track_source.h: No such file or directory
 #include <webrtc/media/base/adapted_video_track_source.h>
compilation terminated.
webrtc_ros/webrtc_ros/CMakeFiles/webrtc_ros_server.dir/build.make:134: recipe for target 'webrtc_ros/webrtc_ros/CMakeFiles/webrtc_ros_server.dir/src/ros_video_capturer.cpp.o' failed
make[2]: *** [webrtc_ros/webrtc_ros/CMakeFiles/webrtc_ros_server.dir/src/ros_video_capturer.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
In file included from /opt/ros/melodic/include/async_web_server_cpp/websocket_connection.hpp:5:0, from /opt/ros/melodic/include/async_web_server_cpp/websocket_request_handler.hpp:5, from /home/jetbot/catkin_ws/src/webrtc_ros/webrtc_ros/src/webrtc_web_server.cpp:11: /opt/ros/melodic/include/async_web_server_cpp/websocket_message.hpp:15:19: warning: missing terminating ' character # warning I don't know how to create a packed struct with your compiler ^ CMakeFiles/Makefile2:560: recipe for target 'webrtc_ros/webrtc_ros/CMakeFiles/webrtc_ros_server.dir/all' failed make[1]: *** [webrtc_ros/webrtc_ros/CMakeFiles/webrtc_ros_server.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... Originally posted by asps946701 on ROS Answers with karma: 3 on 2022-10-03 Post score: 0 Original comments Comment by ravijoshi on 2022-10-03: The webrtc_ros depends on webrtc. Make sure webrtc is installed properly. Comment by asps946701 on 2022-10-03: I will install webrtc. Thanks you !! Comment by asps946701 on 2022-10-06: Do you have webrtc_ros insatll guide after ros was installed. I was installed webrtc but it got many error. 
I tried some solutions but they also failed. Thanks. Part of the error:

- /opt/ros/melodic/include/webrtc/media/base/adapted_video_track_source.h:16:10: fatal error: absl/types/optional.h: No such file or directory
  #include "absl/types/optional.h"
- /home/jetbot/catkin_ws/src/webrtc_ros/src/ros_video_renderer.cpp:19:36: error: ‘I420BufferInterface’ is not a member of ‘webrtc’
  const rtc::scoped_refptr<webrtc::I420BufferInterface>& buffer = frame.video_frame_buffer()->ToI420();
- /home/jetbot/catkin_ws/src/webrtc_ros/src/ros_video_renderer.cpp:21:22: error: base operand of ‘->’ is not a pointer
  cv::Mat bgra(buffer->height(), buffer->width(), CV_8UC4);

Answer: Please see the following error:

/home/jetbot/catkin_ws/src/webrtc_ros/webrtc_ros/include/webrtc_ros/ros_video_capturer.h:6:10: fatal error: webrtc/media/base/adapted_video_track_source.h: No such file or directory
 #include <webrtc/media/base/adapted_video_track_source.h>
compilation terminated.

Basically, webrtc_ros depends on webrtc. Therefore, please make sure webrtc is installed before compilation. On the other hand, I suggest installing pre-built binaries using apt because it automatically takes care of installing dependencies. Please see below:

$ sudo apt install ros-melodic-webrtc
$ sudo apt install ros-melodic-webrtc-ros

Originally posted by ravijoshi with karma: 1744 on 2022-10-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by asps946701 on 2022-10-13: Hi, I used apt install "ros-melodic-webrtc" and "ros-melodic-webrtc-ros" but I got the same error. When I go to the path "/opt/ros/melodic/include/webrtc/media/base/", I found the .h file name is "adaptedvideotracksource.h". Should I modify "ros_video_capturer.h" from "#include <webrtc/media/base/adapted_video_track_source.h>" to "#include <webrtc/media/base/adaptedvideotracksource.h>"? Thanks!
Comment by ravijoshi on 2022-10-14: In the original question, you are trying to install webrtc_ros from source. I advised installing pre-built binaries using apt. It should work well. However, you are reporting the same error. Therefore, I request you enumerate all of the commands you are using. Comment by asps946701 on 2022-10-14: Sorry, I may have misunderstood. If I use apt to install webrtc_ros, I will not need to build webrtc_ros from source? Installing webrtc_ros with the apt command now works with no problem. Thank you very much. Comment by ravijoshi on 2022-10-14: "If I use apt to install webrtc_ros, I will not need to build webrtc_ros from source?" Yes. You are right. You do not need to build from source. "Installing webrtc_ros with the apt command now works with no problem." I am glad you made it work. May I request you to click on the check mark symbol, the ✔ icon, located at the top left side of this answer? It will mark this question as answered. Comment by asps946701 on 2022-10-14: OK. Thank you very much!
{ "domain": "robotics.stackexchange", "id": 38012, "tags": "ros, ros-melodic, catkin-make" }
Lesser qubit computer doing the parts of Shor's against e.g., RSA-2048 sized prime
Question: After posting this question to Physics, it became pretty clear I should have posted here. So: how might a (e.g.) 72-qubit crypto-relevant quantum computer attack RSA-2048? Bonus: how might that be characterized? (e.g., an n-qubit machine requires xxx passes, run time ~yyy) Shor's algorithm appears to allow for parallel execution or iterative runs with a combination step. The assumption is that a smaller-qubit QC might be able to perform those pieces. However, it is suggested that a 4000-qubit/100m-gate quantum computer would be necessary. As the quantum piece of Shor's is a large transform, I assume that sets the constraint on qubit size. Side note: there also appear to be possible speedups that may reduce the run time, such as qubit recycling, or the 4-8 passes vs. the 20-30 passes (by David McAnally). Answer: Even with qubit recycling, 72 qubits will not be enough to do RSA-2048. Table 1 of the paper https://www.nature.com/articles/nature12290 tells you that 1154 qubits are needed to do RSA-768 (which is much smaller than RSA-2048). This is without error correction. Sure, you can use your 72-qubit quantum computer to do a little sub-routine of Shor's algorithm, but this will not help if you have to do the rest of the algorithm on a classical computer. For any benefit, the quantum computer has to be doing the "rate-limiting step".
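For scale (my addition, not part of the original answer): Beauregard's well-known circuit factors an $n$-bit number using $2n+3$ logical qubits, which is where the question's figure comes from, since $$2n + 3 \Big|_{n=2048} = 4099 \approx 4000 \text{ qubits}.$$ Even the most qubit-frugal known circuits therefore sit far above 72 qubits, before any error-correction overhead is counted.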
{ "domain": "quantumcomputing.stackexchange", "id": 465, "tags": "quantum-algorithms, cryptography, speedup, shors-algorithm" }
Non-symplectic Hamiltonian systems
Question: I'm wondering when the phase space of a Hamiltonian system loses its symplectic structure. I think it happens when the Hamiltonian $H$ depends on a set of other variables $S_1,...,S_k$ as well as on the set of canonical variables $q_i, p_i$. So, besides having $$\begin{gather} \dot{q}_i = \bigl\{ q_i, H \bigr\} = \frac{\partial H}{\partial p_i}\\ \dot{p}_i= \bigl\{ p_i, H \bigr\} = - \frac{\partial H}{\partial q_i} \end{gather}$$ we also have some $\dot{S}_i = \bigl\{ S_i, H \bigr\}$, which is not a Hamilton equation. Is that right? Could anyone make this clearer? Answer: Well, more generally, a Hamiltonian system $\dot{z}^I=\{z^I,H\}$ with a Hamiltonian function $H:M\times \mathbb{R}\to \mathbb{R}$ is defined on a (not necessarily invertible) Poisson manifold rather than a symplectic manifold. A (not necessarily invertible) Poisson manifold might not have local canonical/Darboux coordinates, cf. OP's example. An important example of a non-invertible Poisson bracket is the Dirac bracket for constrained systems.
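A concrete illustration of such a degenerate Poisson structure (my example, not part of the original answer) is the angular-momentum bracket $$\{S_i, S_j\} = \epsilon_{ijk} S_k, \qquad \{S_i, \mathbf{S}^2\} = 0,$$ which is exactly the kind of extra-variable dynamics $\dot S_i = \{S_i, H\}$ the question describes. The phase space is three-dimensional, so no symplectic form and no Darboux coordinates exist; $\mathbf{S}^2$ is a Casimir that Poisson-commutes with every Hamiltonian, and the dynamics is confined to the symplectic leaves $\mathbf{S}^2 = \text{const}$, which are spheres.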
{ "domain": "physics.stackexchange", "id": 96744, "tags": "coordinate-systems, hamiltonian-formalism, hamiltonian, phase-space, poisson-brackets" }
Which side of the DNA helix is used for describing SNPs?
Question: In genetic research I often come across references to single-nucleotide polymorphisms (SNPs). An example is rs3184504(C;T). As far as I understand it: In this case rs3184504 refers to a specific point in the DNA genome, and (because this one occurs in a non-sex human chromosome) it indicates the nucleotides at that point in each copy of that chromosome. But how does one determine which strand of the DNA helix it references? Is there a convention? Or is the strand indicated in the SNP reference (in this case rs3184504)? If one strand is C;T then the opposing strand will be G;A ... but in the context of the gene in question I assume it matters which strand is referenced in the SNP. E.g., if the code following this SNP on the chromosome in which rs3184504 has C is: CTCGA... GAGCT... this is a different genotype from the SNP indicating a C on the other strand: GTCGA... CAGCT... So the SNP description is only meaningful if it also indicates the strand, right? If so, how does the SNP descriptor do that? Answer: @WYSIWYG is correct, but in view of the brevity of his answer (and a second answer contradicting him) I provide chapter and verse. The definition of RefSNP reference number is given by NCBI as: "A reference SNP ID number, or "rs" ID, is an identification tag assigned by NCBI to a group (or cluster) of SNPs that map to an identical location." It is important to realize that this number bears no relationship to the numbering of the DNA sequence: "The rs ID number, or rs tag, is assigned after submission." To find information about the position of an SNP from its rs number one can search the NCBI dbSNP. In the case of rs3184504 the first few lines of the first entry in the search results are:

Variant type: SNV
Alleles: T>A,C,G
Chromosome: 12:111446804 (GRCh38)

This indicates that the SNP is at position 111446804 on Chromosome 12, which has a T in the reference genome, and this lies in a gene designated SH2B3. (You can find this from the links on this page.)
It can be seen on this page that the orientation of this gene happens to coincide with that of the reference chromosome. However, a nearby gene, PPP1CC, is in the opposite orientation and contains a SNP rs367543175 C/T. The start of the entry for this is:

Variant type: SNV
Alleles: C>T
Chromosome: 12:110720185 (GRCh38)

…indicating that the SNP is at position 110720185 on Chromosome 12, which has a C in the reference genome, i.e. on the anti-sense strand of this gene. And, of course, most human SNPs lie outside genes, so the concept of sense and anti-sense strand does not apply. Conclusion: To find out whether the reference base in a designation for a SNP lying in the coding region of a gene corresponds to the sense or anti-sense strand of that gene, one has to access the documentation for that SNP. This information is not embodied in the allele data appended to the reference SNP ID number.
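The strand ambiguity the question raises can be made mechanical: a base reported on one strand corresponds to its Watson-Crick complement at the same position on the other strand. A small sketch (my illustration):

```python
# Watson-Crick base pairing
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(seq):
    # Base-by-base complement: the opposing strand, written left-to-right
    # in the same orientation (antiparallel alignment, as in the question).
    return "".join(PAIRS[b] for b in seq)

def reverse_complement(seq):
    # The opposing strand read in its own 5'->3' direction.
    return complement(seq)[::-1]
```

So the question's CTCGA... aligns against GAGCT..., and a C allele at a SNP position reads as G on the opposite strand, which is exactly why the dbSNP documentation for each rs number must be consulted to know which strand the listed alleles refer to.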
{ "domain": "biology.stackexchange", "id": 10574, "tags": "genomics, snp" }
Semantic distinction between "Partition Function" and "Generating Functional" in QFT?
Question: I am just now learning about these, and I have seen them defined as follows: The generating functional for a set of fields $\phi_i$ is defined by: $$Z[J_i]=\int\mathcal{D}\phi_i e^{i(S[\phi_i]+\int J_i\phi_i)}$$ and the partition function is $Z[0]$, generating the vacuum bubbles. However, as I read more and more about this, I find the semantic distinction between "partition function" and "generating functional" upheld less and less. Many people, such as Wikipedia, equate the two and just call everything "the partition function". Does anybody have any particular knowledge about how important the semantic distinction between the two is at higher levels (i.e. further down the line in QFT, or in certain lines of research, or...)? Answer: As I bet you already know, these two names have perfectly good reasons. The quantity $Z[J]$ is called the generating functional because functionally differentiating it yields correlation functions. This is just like how the partition function $Z[\beta]$ in thermodynamics is a generating function for the cumulants of energy. The quantity $Z[0]$ is called the partition function because upon Wick rotation to imaginary time it becomes equal to the integral of $e^{- S_E/\hbar}$, so it can be interpreted as the partition function of a system with energy $S_E$. Similarly, you can think of $Z[J]$ as a partition function for a thermodynamic system in a fixed external field, so $Z[J]$ can be called both names. Your last question is primarily opinion-based, but my experience has been that in hep-th people generally get less consistent with terminology as they progress from undergrad to grad to postdoc to professor. (I'm sure it would be different in, e.g. mathematical physics.) All the intuition from all the different words gets melted into this big soup that they dip into freely. Which word they use just depends on how they're thinking about $Z[J]$ in the moment.
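To make the "generating" statement concrete (a standard formula, spelled out here): time-ordered correlators come from functional derivatives at $J=0$, $$\langle 0|\,T\,\phi(x_1)\phi(x_2)\,|0\rangle = \frac{1}{Z[0]}\,\frac{1}{\mathrm{i}}\frac{\delta}{\delta J(x_1)}\,\frac{1}{\mathrm{i}}\frac{\delta}{\delta J(x_2)}\,Z[J]\bigg|_{J=0},$$ in exact analogy with the thermal partition function generating energy moments, $\langle E\rangle = -\partial_\beta \ln Z(\beta)$.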
{ "domain": "physics.stackexchange", "id": 49184, "tags": "quantum-field-theory, statistical-mechanics, terminology, soft-question, partition-function" }
Loading map using a service call
Question: Hi, I am having trouble understanding or creating service calls for loading maps using map_server. This link provides idea of how to call it from the terminal. But unable to understand how can I call it from the python node. What should be my import like nav_msgs.srv or map_msgs.srv. Thanks in advance Originally posted by Flash on ROS Answers with karma: 126 on 2022-03-29 Post score: 0 Answer: Updating solution just in case anyone has the same problem. To import, we have to use. from nav2_msgs.srv import LoadMap Sample code for the service call Class Service(Node): def__init__(self): self.map_client = self.create_client(LoadMap,'/map_server/load_map') self.map_request = LoadMap.Request() while not self.map_client.wait_for_service(timeout_sec=10.0): self.get_logger().info('Waiting for service') self.send_request() def send_request(self): self.map_request.map_url = "../maps/my_map.yaml" wait = self.map_client.call_async(self.map_request) rclpy.spin_until_future_complete(self, wait) if wait.result() is not None: self.get_logger().info('Request was responded') else: self.get_logger().info('Request Failed') This will send map data to /map topic that can be used to get the map data Originally posted by Flash with karma: 126 on 2022-03-29 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 37537, "tags": "navigation, ros2, mapping, map-server" }
Calculating work to move a particle
Question: I have this question : There is a negatively charged spherical shell. Within this shell is a positive charge q which is at the center of the shell. How much work is required to move the charge from the middle to a position where the particle is right next to the inside wall of the shell. The answer is 0 but I don't know how to get that. I understand work is the change in potential energy and in this case that would just be $ {kqq\over r^2} $ so wouldn't work just be the potential energy at the center minus the potential energy after the change in position? Answer: The electric field due to the negatively charged shell is zero inside due to symmetry. There is no field inside ,so force on the particle will be zero and since force is zero you don't need to push the particle against electric field to bring the particle from the middle to a position where the particle is right next to the inside wall of the shell. So work done on the particle must be zero.
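Spelled out via Gauss's law (a standard argument, added for completeness): for the field produced by the shell alone, any concentric Gaussian surface inside the shell encloses none of the shell's charge, so by spherical symmetry $$\oint \vec E_{\text{shell}}\cdot d\vec A = \frac{q_{\text{shell,enc}}}{\varepsilon_0} = 0 \quad\Rightarrow\quad \vec E_{\text{shell}} = 0 \text{ inside},$$ and hence the work done against the shell's field vanishes along any interior path: $$W = -q\int_{\text{center}}^{\text{wall}} \vec E_{\text{shell}}\cdot d\vec \ell = 0.$$ (The charge's own field does no net work on it, and note that the potential energy of two point charges is $kq_1q_2/r$, not $kq_1q_2/r^2$, which is the force; here it is simply constant inside the shell.)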
{ "domain": "physics.stackexchange", "id": 37736, "tags": "energy, particle-physics" }
Simple object pool on Android
Question: Assume the server is sending data to our app at regular intervals, at least once per second. The received data is parsed and stored in memory as a POJO. A blocking queue is processing data in a worker thread. Suppose I want to use a very simple object pool, so I don’t have to create a new POJO for each server response. I don’t want to use e.g. Apache Commons Pool. Would the following implementation suffice?

public abstract class ObjectPool<T> {
    private final Stack<T> mPool;

    public ObjectPool() {
        mPool = new Stack<T>();
    }

    public T obtain() {
        if (mPool.isEmpty()) {
            return create();
        } else {
            return mPool.pop();
        }
    }

    public void recycle(T object) {
        if (null != object) {
            destroy(object);
            mPool.push(object);
        }
    }

    public void clear() {
        mPool.clear();
    }

    public int getSize() {
        return mPool.size();
    }

    protected abstract T create();
    protected abstract void destroy(T object);
}

Can I shoot myself in the foot using this? Any caveats I’m not seeing? Thanks! Answer: I have to question whether this makes any sense in your application: Assume the server is sending data to our app at regular intervals, at least once per second Have you done any testing/metrics that indicate that the current system you have is actually a problem? I would be very surprised if it is. If the server is sending data as infrequently as that, there does not seem to be any reason to 'pool' the objects at all. The overhead of 'cleaning' an instance, storing it, recycling it, repopulating it, and repeating with it is probably just as overwhelming as simply creating a new instance and GC'ing the old one. Unless you can already identify 'specifically' that object creation is a significant part of an existing performance problem, I would suggest that you are looking in the wrong places for performance improvements. EDIT - hypothetically... Answering hypothetically: no, I don't believe your solution is enough...
by the time you get to the frequency where object creation/reuse is a problem, you will need multiple concurrent threads to do the work anyway, and your solution is not synchronized (by the way, new and garbage collection are thread-safe). You have created an artificial latency in your program flow - see Amdahl's law - by adding to the critical path instead of taking work off the path (parallelizing things). As your system gets busier, you are making the problem worse and not better. The bottom line is that at low volumes your current solution is not going to make a difference, and at high frequencies your solution is only going to make things worse. One other thing to consider, and Android performance is not my strength, but on 'full' computers (I often work with systems with more than 128 cores in highly-threaded systems), the cost of having to bring the stack into your system cache, and then replace it with the cache of your object, is probably going to be more than just creating a new object would... Again, the busier your system, the more this sort of issue 'hurts'.
{ "domain": "codereview.stackexchange", "id": 5289, "tags": "java, android" }
Converting an infix expression to postfix
Question: So the question is to convert the following expression to postfix: (a+b)^(p+q)^(r*s*t) The answer I get when I calculate is: ab+pq+^rs*t*^ But the answer is given to be ab+pq+rs*t*^^ I assume that the step when you need to push second '^' into stack when there is already a '^' in the stack is where I went wrong (I pop out the '^' before pushing). Shouldn't we pop out the first '^' as they are of equal precedence ? Or is it an exception to '^' operator ? Answer: There is a sort of an exception to '^', the exponentiation operator since it is right associative. That is, $a^{b^c}$ means $a^{\left({b^c}\right)}$ instead of $\left(a^b\right)^c$. That is, (a+b)^(p+q)^(r*s*t) means (a+b)^((p+q)^(r*s*t)) instead of ((a+b)^(p+q))^(r*s*t). When you reach the first ^ while evaluating postfix expression ab+pq+rs*t*^^, you will pop out the result of rs*t* and the result of pq+. You get the expected exponentiation as expected.
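The precedence rule can be checked mechanically with a minimal shunting-yard converter (my sketch, single-character tokens only). The only special case is exactly the one the question asks about: an incoming '^' does not pop an equal-precedence '^' off the stack, because '^' is right-associative, while equal-precedence left-associative operators do pop each other.

```python
PREC = {"+": 1, "-": 1, "*": 2, "/": 2, "^": 3}
RIGHT_ASSOC = {"^"}

def to_postfix(expr):
    out, stack = [], []
    for tok in expr:
        if tok.isalnum():                     # operand
            out.append(tok)
        elif tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack[-1] != "(":
                out.append(stack.pop())
            stack.pop()                       # discard "("
        else:                                 # operator
            # Pop strictly-higher precedence, or equal precedence
            # only when the incoming operator is left-associative.
            while (stack and stack[-1] != "(" and
                   (PREC[stack[-1]] > PREC[tok] or
                    (PREC[stack[-1]] == PREC[tok] and tok not in RIGHT_ASSOC))):
                out.append(stack.pop())
            stack.append(tok)
    while stack:
        out.append(stack.pop())
    return "".join(out)
```

Running it on the expression from the question reproduces the book's answer, ab+pq+rs*t*^^, while left-associative operators still pop their equals (a-b-c gives ab-c-).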
{ "domain": "cs.stackexchange", "id": 12511, "tags": "stacks" }
If we execute kernel programs in the user mode and user programs in kernel mode what are some consequences
Question: A question on operating systems: if we execute kernel programs in user mode and user programs in kernel mode, what are some consequences? Answer: Assuming a monolithic kernel like Linux, there just are no "kernel programs"; every in-kernel thread is just a function running in the kernel's address space. If you copy that and place it in a userland program (given the correct environment in terms of available data structures and functions to call), it will work fine as long as it doesn't invoke any privileged operations. Any of those will give some sort of exception. Placing a userland program into the kernel would mean adding a lot of infrastructure provided by the operating system: in C's terms, standard input and output, the contents of the standard C library and any others in use. Using a snippet of userland code in the kernel will work just fine, as long as any infrastructure/environment called upon is provided. Kernel code and userland code are the exact same instructions (but the kernel has access to some privileged operations and registers that userland can't use).
{ "domain": "cs.stackexchange", "id": 18035, "tags": "operating-systems" }
Minimum separation from the spacetime interval
Question: I've been working through invariant spacetime interval questions recently, and I came across a question in my lecture notes where $$\Delta s^2=\Delta x^2 -(c\Delta t)^2 > 0 $$ Now it is clear to me that there is no frame where $\Delta x' = 0$, which I have already proven as the question required. Now, out of curiosity, I'm wondering if there is a way to determine the minimum value that $(\Delta x')^2$ can take? I am assuming that the spacetime interval is the same in every frame, so $$\Delta s'^2=\Delta x'^2 -(c\Delta t')^2 > 0$$ which would give $$\Delta x'^2 > (c\Delta t')^2$$ But since $t'$ can be equal to 0, I'm not sure where to go from here. Is there anybody that can either show me how, or point me in the right direction? Any help is much appreciated. Answer: Let me restate the problem the way I understand it: we have 2 events A and B separated by a space-like interval $$\Delta s^2=\Delta x^2 -(c\Delta t)^2 > 0 $$ now, different observers will measure these 2 events A and B and come up with different $\Delta x$ and $\Delta t$, but what will be the minimum possible $\Delta x$ (if it exists) that one of these observers might measure? Quick answer: from $$ 0\geq-(c\Delta t)^2$$ we obtain $$ \Delta x^2\geq\Delta x^2-(c\Delta t)^2=\Delta s^2$$ So $\Delta x^2\geq\Delta s^2$ always in a spacelike interval, and therefore the minimum value that it can attain is $\Delta s^2$, for an observer that sees A and B happen simultaneously, i.e. with $\Delta t=0$. Now let's try a different, more physically insightful, approach. First a little trick which will help better visualize the situation: let's agree that all observers reset their clocks, meters, etc. such that event A has coordinates (0,0) for every observer. This does not change the motion of an observer and in general the physics of any problem. 
So A=(0,0) for everybody, while B=(t,x) has different coordinates for different observers, but for everybody $\Delta s^2=x^2-t^2$ is the same, say $\Delta s^2=9$ ($c=1$ from now on). Every observer will draw a space-time diagram with event A at the origin (not shown) and event B appearing somewhere. If we overlay all the diagrams we get the following where each observer has drawn B as a different colored dot at different positions. All these dots belong to the locus $\Delta s^2=x^2-t^2=9$ so it is clear that the green observer will measure the smallest $\Delta x^2$ and $\Delta t^2=0$ PS: the light cone in the diagram is a bonus, I could not resist putting it in ;-)
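The boost that realizes this minimum can also be written down explicitly (my addition): under a Lorentz transformation $$\Delta x' = \gamma\left(\Delta x - v\,\Delta t\right), \qquad c\,\Delta t' = \gamma\left(c\,\Delta t - \frac{v}{c}\,\Delta x\right),$$ choosing $v = c^2\,\Delta t/\Delta x$ makes $\Delta t' = 0$. This is a legitimate boost ($|v| < c$) precisely because the interval is spacelike, $|c\,\Delta t| < |\Delta x|$, and invariance of the interval then gives $\Delta x'^2 = \Delta s^2$, the minimum.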
{ "domain": "physics.stackexchange", "id": 53783, "tags": "special-relativity, metric-tensor, coordinate-systems, inertial-frames, observers" }
Spring oscillator and time dilatation
Question: Consider the mass M suspended on an (ideal) spring with stiffness D, whose suspension point is at rest in inertial frame K'. I understand how the principle of relativity requires that this harmonic oscillator oscillate at a slower frequency by a factor of gamma ($\gamma = 1/\sqrt{1-v^2/c^2}$) when measured in inertial frame K (moving with v relative to K'). But I would like to see how the slowing down of this specific type of clock actually happens: Its period is T = 2π√(M/D), so if the oscillation is perpendicular to v (so that length contraction does not complicate the thing), naively it seems that it slows only by √(gamma), due to M being larger in K. Where is my error? Does a Lorentz boost change the stiffness of springs? Answer: An idealized harmonic oscillator is a clock with period $T$ defined as $T = 2 \pi \sqrt{m/k}$, where $m$ is the rest mass of the material point and $k$ the spring constant. An inertial reference frame in relative motion vs. the rest frame of the oscillator measures a period dilated by the $\gamma$ Lorentz factor. If you want to read the time dilation off the oscillator period definition as measured by the moving frame, you have to consider a first $\gamma$ factor attached to the rest mass of the oscillator and a second $\gamma$ factor inversely attached to the spring constant. The latter is due to the definition of the four-force in SR (special relativity). Thus a $\gamma^2$ under a $\sqrt{}$ yields the $\gamma$ you are looking for.
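The answer's bookkeeping can be checked with made-up numbers (the mass, stiffness, and speed below are arbitrary): the mass picks up a factor $\gamma$, the spring constant a factor $1/\gamma$, and the period comes out dilated by exactly $\gamma$, not $\sqrt{\gamma}$.

```python
import math

def gamma(v):
    """Lorentz factor for speed v in units of c."""
    return 1.0 / math.sqrt(1.0 - v * v)

m, k = 2.0, 8.0                          # rest mass and spring constant (arbitrary units)
T_rest = 2 * math.pi * math.sqrt(m / k)  # period in the oscillator's rest frame

v = 0.6
g = gamma(v)
# In the moving frame the mass scales as gamma*m and (via the four-force)
# the spring constant as k/gamma, so a gamma^2 sits under the square root.
T_moving = 2 * math.pi * math.sqrt((g * m) / (k / g))
```

With these numbers $\gamma = 1.25$ and the computed period is $\gamma$ times the rest period.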
{ "domain": "physics.stackexchange", "id": 49269, "tags": "special-relativity, reference-frames, harmonic-oscillator, spring" }
Harmonic oscillator propagator in Euclidean time
Question: I'm following Nastase's book on Quantum Field Theory but this question is just about quantum mechanics in the path integral formalism. In chapter 8 he considers the propagator equation for a harmonic oscillator $$\left(\frac{d^2}{dt^2}+\omega^2\right)\Delta(t-t')=\delta(t-t'),$$ which under Wick rotation $t\to -i\tau$ turns into $$\left(-\frac{d^2}{d\tau^2}+\omega^2\right)\Delta_{E}(\tau-\tau')=\delta(\tau-\tau'),$$ where the subscript $E$ stands for Euclidean. Now, I'm having trouble checking that the unique periodic solution $$\Delta_{E}(\tau-\tau'\pm\beta)=\Delta_{E}(\tau-\tau')$$ for the propagator equation in Euclidean time turns out to be $$\Delta_E(\tau-\tau')=\frac{1}{2\omega}\left[\left(1+\frac{1}{e^{\beta|\omega|}-1}\right)e^{-\omega(\tau-\tau')}+\frac{1}{e^{\beta|\omega|}-1}e^{\omega(\tau-\tau')}\right].$$ When I treat this as an ansatz and plug it back into the propagator equation I don't get the $\delta(\tau-\tau')$, so I would appreciate any insight on this computation. Answer: This is the way I'd do the Bosonic problem for the case $\beta=2\pi$, $\omega=M$.
(I am cut and pasting from old notes, which is why I have chosen these parameters.) Start with $$ \Delta(\tau-\tau')= \frac 1 {2\pi} \sum_{n=-\infty}^\infty \frac{e^{in(\tau-\tau')}}{n^2+M^2} $$ which gives the delta via $$\sum_{n=-\infty}^\infty e^{in\tau}= \sum_{m=-\infty}^\infty 2\pi \delta(\tau+2\pi m).$$ Now evaluate the sum as follows: $$ \frac 1{2\pi} \sum_{n=-\infty}^\infty \frac{e^{in\tau}}{n^2+M^2}= \sum_{n=-\infty}^\infty \frac 1{2|M|} e^{-|M||\tau+2\pi n|}, \quad \hbox{(Poisson Summation)}\nonumber\\ = \frac 1 {2M} \frac{\cosh(\pi -\tau)M}{\sinh \pi M}, \quad 0 <\tau<2\pi,\nonumber\\ = \frac 1{2M} e^{-M\tau} +\frac 1 M\frac{ \cosh M\tau}{(e^{2\pi M}-1)}, \quad 0 <\tau<2\pi.\nonumber $$ The first line comes from applying Poisson summation to the zero-temperature expression $$ \int_{-\infty}^{\infty} \frac{dk}{2\pi}\frac{e^{ik\tau}} {k^2+M^2}=\frac 1 {2|M|}e^{-|\tau||M|} $$ and has the physical interpretation as the method-of-images sum over the $n$-fold winding of the particle trajectory around the periodic imaginary-time direction. The passage from the first to the second line is just summing the two geometric series from $n=0$ to $\infty$ and from $n=-\infty$ to $-1$. In this "$\cosh(\pi -\tau)$" version the delta function comes from the restriction on $\tau$, which leads to a discontinuity in the slope of $\Delta(\tau-\tau')$ at $\tau-\tau'=2\pi m$.
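The closed forms in the answer can be spot-checked numerically; a sketch with arbitrary values of $M$ and $\tau$, truncating the frequency sum at a large cutoff:

```python
import math

M, tau = 1.3, 2.0            # arbitrary test values, with 0 < tau < 2*pi

# Truncated version of (1/2pi) * sum_n e^{i n tau} / (n^2 + M^2); the imaginary
# parts cancel pairwise between +n and -n, leaving a cosine sum.
N = 2000
s = sum(math.cos(n * tau) / (n * n + M * M) for n in range(-N, N + 1)) / (2 * math.pi)

# The two closed forms quoted in the answer, valid for 0 < tau < 2*pi.
cosh_form = math.cosh((math.pi - tau) * M) / (2 * M * math.sinh(math.pi * M))
image_form = (math.exp(-M * tau) / (2 * M)
              + math.cosh(M * tau) / (M * (math.exp(2 * math.pi * M) - 1)))
```

For $N = 2000$ the truncated sum agrees with the closed forms to better than $10^{-3}$, and the two closed forms agree with each other to machine precision.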
{ "domain": "physics.stackexchange", "id": 86659, "tags": "harmonic-oscillator, greens-functions, propagator, wick-rotation, thermal-field-theory" }
Can polarization occur if both charges are neutral?
Question: If I keep neutral conductive pieces of some metal close to a neutral conductive sheet, what will happen? Will any of them get polarized, or will nothing happen? My guess is that nothing will happen, as for polarization at least one of the objects should be charged. Answer: On the face of it, the answer is "nothing will happen". However, if you bring the surfaces close enough together, you may find that the electron affinity of the two is different, in which case electrons may move by a very small amount - in the same way that when atoms react, the resulting molecule may have a dipole moment. The effect would be restricted to the surface layer only - and therefore be tiny.
{ "domain": "physics.stackexchange", "id": 27745, "tags": "electrostatics, charge, polarization" }
Phenomenology application of quantum anomaly
Question: An anomaly means that the system has a symmetry at the classical level (either discrete or continuous), but when we quantize the theory, the system no longer retains the symmetry. I'm wondering, for every anomaly, whether we can design an experiment to check it. For example, for the global chiral anomaly, we can measure the lifetime of the neutral pion through its decay into two photons. And for the parity anomaly, the corresponding experiment is the quantum Hall effect. However, currently I don't know how to check the conformal anomaly and the gravitational anomaly. And by the way, I'm not sure if the gauge anomaly will cause any physically observable effect. Answer: The main distinction here is between global and gauge anomalies. As you described, a symmetry is anomalous if it is realized by our classical description of the system but is broken quantum mechanically (i.e. the Lagrangian is invariant under the symmetry but the path integral is not). Generally, it depends on the system you're considering and what type of symmetry is anomalous, but I'll focus on anomalies in particle physics. Roughly speaking, we should distinguish between anomalies in global symmetries and in gauge symmetries. As you suggested, we can measure global anomalies in particle physics by observing some decay process not allowed by that symmetry (e.g. proton decay and baryon number conservation). These are distinct from gauge anomalies. Gauge symmetries are not real symmetries of the physical system but are redundancies in our description of the theory (which we put in to make, for instance, Lorentz invariance manifest). Because gauge symmetries are not real, they shouldn't be broken by quantum effects. This is why gauge and gravitational anomalies must all cancel in the theory, otherwise the quantum theory would be inconsistent. Gravitational anomalies are simply another kind of gauge anomaly, as local diffeomorphism invariance is a gauge symmetry of gravity.
So global anomalies are fine, the quantum theory just didn’t have some symmetry you thought it might, and we can measure these in experiments. Gauge anomalies (and gravitational anomalies) are a sign that the theory doesn’t make sense and should be zero in a physical system. A few other things to note: There are a number of other interesting anomalies not mentioned above, e.g. 't Hooft anomalies. Conformal anomalies indicate the breaking of the scale invariance of a theory. QCD (with massless quarks and gluons) is classically scale invariant, but we know that this must be broken as QCD confines (and the quarks and gluons around us sit inside protons and neutrons). Technically speaking, a gauge anomaly sometimes means that you need to add more degrees of freedom to the theory.
{ "domain": "physics.stackexchange", "id": 68920, "tags": "quantum-field-theory, particle-physics, condensed-matter, quantum-anomalies" }
Green's function for Dirac operator on $S^4$
Question: Let $S^4$ be a round sphere of radius 1 (with the standard Riemannian round metric), and let $D_\text{F} \equiv \gamma^\mu \nabla_\mu$ be the Dirac operator on $S^4$, acting on the usual spinors for the Riemannian metric. I wonder what is its Green's function $G_\text{F}$ on $S^4$, given by the equation $ D_\text{F} G_\text{F}(x)_{\alpha \beta} = \delta_{S^4}(x) \delta_{\alpha \beta}?$ I happen to know the Green's function for the conformally coupled scalar, namely Green's function $G_\text{B}$ for $$\Delta_\text{scalar} \equiv - \nabla^\mu \nabla_\mu + 2$$ acting on scalar functions. But I think $\gamma^\mu \nabla_\mu \gamma^\nu \nabla_\nu$ slightly differs from $\Delta_\text{scalar}$ by a constant proportional to the $S^4$ scalar curvature, so it's not obvious to me how to get $G_\text{F}$ from $G_\text{B}$ (which we do all the time on flat space). So more generally, I'd like to know the Green's function for the Dirac operator on $S^n$. References and suggestions will be great! Answer: The spinor bundle on the standard sphere $S^n$ can be trivialized by Killing spinors as explained e.g. in an article by Baer. This means we have spinors $\psi_1,...,\psi_N$ such that $$ \nabla_X\psi_j=\mu X\cdot\psi_j $$ for all tangent vectors $X$ on $S^n$, where $\mu\in\{\pm\frac{1}{2}\}$, and every spinor $\Phi$ can be globally written as $\Phi=\sum_{j}f_j\psi_j$ with complex scalar functions $f_j$. Let $D$ be the Dirac operator. Analogously to Lemma 3 in the article mentioned above one gets $$ (D+\mu)^2\Phi=\sum_{j=1}^{N}\Big(\Delta_g f_j+\frac{(n-1)^2}{4}f_j\Big)\psi_j $$ and thus $$ D(D+2\mu)\Phi=\sum_{j=1}^{N}\Big(\Delta_g f_j+\frac{n(n-2)}{4}f_j\Big)\psi_j. $$ On the right hand side we have the conformal Laplace operator acting on the functions $f_j$. Therefore, if $f_j$ are multiples of the Green function for the conformal Laplace operator at a point $y\in S^n$, then $(D+2\mu)\Phi$ is a Green function for the Dirac operator at $y$.
{ "domain": "physics.stackexchange", "id": 56821, "tags": "differential-geometry, field-theory, dirac-equation, greens-functions, dirac-matrices" }
Basic Tic-Tac-Toe
Question: I am starting a software engineering degree soon and have been practicing the last couple months. I have posted some code I wrote for a basic Tic Tac Toe game below (C++). I have read the worst thing to do is to get into bad habits early on, so please can you read over the code and tell me what I should be doing differently? Be as brutally honest as possible, I care more about my future than feelings. Bad practices, bad code, bad formatting, anything. #include <iostream> #include <string> void playGame(); void printBoard(char board[3][3]); int nextMove(char symbol, int plays, int playedMoves[9]); bool checkWinner(char board[3][3], char symbol, int plays); int main() { using namespace std; cout << "Tic Tac Toe by Kevin\n" << "____________________\n\n"; playGame(); char yn = 'a'; //Check if players want another game while (true) { while (tolower(yn) != 'y' && tolower(yn) != 'n') { cout << "Do you want to play again? (y/n): "; cin >> yn; //following 2 lines are to handle incorrect inputs cin.clear(); cin.ignore(numeric_limits<streamsize>::max(), '\n'); } if (tolower(yn) == 'y') playGame(); else break; } return 0; } //Where the game starts. 
Sets up board and runs 9 plays void playGame() { using namespace std; char board[3][3] = { { ' ', ' ', ' ', }, { ' ', ' ', ' ', }, { ' ', ' ', ' ' } }; printBoard(board); char symbol = ' '; int move = 0; int playedMoves[9] = { 0 }; for (int plays = 1; plays < 10; plays++) { if (!(plays % 2)) symbol = 'O'; else symbol = 'X'; move = nextMove(symbol, plays, playedMoves); playedMoves[plays - 1] = move; int x = 0, y = 0; x = (move + 2) % 3; y = (move - 1) / 3; //math to convert 1-9 to [][] board[x][y] = symbol; printBoard(board); if (plays > 4) if (checkWinner(board, symbol, plays)) break; } } //Prints the board void printBoard (char board[3][3]) { using namespace std; string sLine = ""; cout << endl; for (int k = 0; k < 3; k++) { sLine = ""; for (int l = 0; l < 3; l++) { sLine += board[l][k]; if (l < 2) sLine += "|"; } cout << sLine << endl; if (k < 2) cout << "-----" << endl; } cout << endl; } //Takes next move and checks for errors int nextMove (char symbol, int plays, int playedMoves[9]) { using namespace std; bool validPlay = false; int move = 0; while (!validPlay) { while (move < 1 || move > 9) { cout << "Player " << symbol << " enter your move: "; cin >> move; if (!cin) { cin.clear(); cin.ignore(numeric_limits<streamsize>::max(), '\n'); } } validPlay = true; for (int k = 0; k < plays; k++) { if (move == playedMoves[k]) validPlay = false; } } return move; } //Check to see if a player has won or if there is a tie bool checkWinner(char board[3][3], char symbol, int plays) { using namespace std; bool winner = false; //Check vertical and horizontal for (int k = 0; k < 3; k++) { if (board[k][0] == symbol && board[k][1] == symbol && board[k][2] == symbol) winner = true; if (board[0][k] == symbol && board[1][k] == symbol && board[2][k] == symbol) winner = true; } //Check diagonal if (board[0][0] == symbol && board[1][1] == symbol && board[2][2] == symbol) winner = true; if (board[2][0] == symbol && board[1][1] == symbol && board[0][2] == symbol) winner = true; if 
(winner) cout << "Congratulations! The winner is " << symbol << "'s!\n\n"; if (!(winner) && (plays == 9)) cout << "Unfortunately it was a tie!\n\n"; return winner; } Answer: Here are some observations that may help you improve your code. Use the appropriate #includes This code uses numeric_limits which is actually defined in <limits>. Even if your code compiles that way, it's probably only because one of the other files happened to include it. You can't and shouldn't rely on that, though. Use objects You have a board structure and then separate functions printBoard and checkWinner that operate on board data. With only a slight syntax change, you would have a real object instead of C-style code written in C++. You could declare a TicTacToe object and then play, print and checkWinner could all be member functions. Separate I/O from program logic Right now every individual function has both game logic and I/O. It's often better design to separate the two so that the game logic is independent of the I/O with the user. Use const where practical I would not expect the checkWinner or printBoard routines to alter the underlying board on which they operate, and indeed they do not. You should make this expectation explicit by using the const keyword: void printBoard(const char board[3][3]); This declares that printBoard will not modify the passed board, making it clear to both the compiler and to the human reader of your code. Create a function rather than repeating code In several places in the code, a prompt is printed, then an answer read, then the input cleared. Instead of repeating it, make it into a function. Don't abuse using namespace std Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid. Having it inside every function is only slightly better. For instance, it's entirely superfluous in the playGame function and should be omitted.
For this program, I'd advocate removing it everywhere and using the std:: prefix where needed. Think of the user It's not intuitively obvious how to enter a move when the game first starts. It would be better to consider the user and to offer a prompt (at least at the beginning) showing how the squares are numbered. Fix the bug If a player enters a square that is already filled, the program simply hangs. That's not good behavior and must be considered a bug. Consider altering your algorithm Right now, the checkWinner routine is only called if more than four moves have been made. Since this code is not particularly performance sensitive, why not simply call it every time? It would make the code a little simpler to read and the extra time taken will almost certainly never be noticed. Declare the loop exit condition at the top The for loop inside playGame currently says this: for (int plays = 1; plays < 10; plays++) Reading that line, we would conclude that the play continues until plays >= 10. However, way down at the bottom of the loop is a break that occurs if one player has won. Rather than forcing the reader of the code to examine every line, it's better if you simply declare loop exit conditions completely and honestly at the top. Put conditional statements on a separate line The code currently includes a number of places where something like this is done: if (l < 2) sLine += "|"; It makes things a little bit harder to read than if you put them on separate lines like this: if (l < 2) { sLine += "|"; } Further, especially when you are beginning, it's useful to always put the curly braces there. 
Doing so will make your intentions clear to both readers of the code and the compiler and can reduce the possibility for certain kinds of subtle bugs like this: for (int i = 0; i < 3; ++i) f[i] = i*2; g[i] = f[i]*7; By the indentation, one would expect that both statements are executed every loop iteration, but the subtle lack of braces means that the compiler will do something else entirely. Use meaningful variable names Your function names are descriptive and good enough, but the variable names are not so good. In particular the printBoard routine uses a loop counter named l which is a particularly bad choice of variable name. It's too easily mistaken for the digit 1 or the letter i. Eliminate "magic numbers" This code has a number of "magic numbers," that is, unnamed constants such as 2, 9, 10, etc. Generally it's better to avoid that and give such constants meaningful names. That way, if anything ever needs to be changed, you won't have to go hunting through the code for all instances of "3" and then trying to determine if this particular 3 is relevant to the desired change or if it is some other constant that happens to have the same value. Eliminate return 0 at the end of main When a C++ program reaches the end of main the compiler will automatically generate code to return 0, so there is no reason to put return 0; explicitly at the end of main.
{ "domain": "codereview.stackexchange", "id": 14098, "tags": "c++, beginner, tic-tac-toe" }
rosbag keyboard controls unresponsive
Question: I have been using rosbag play to test some CPU-intensive computer vision algorithms on recorded data. Periodically during playback the command-line interface to rosbag play becomes completely unresponsive: the timer stops counting and it is no longer responsive to keyboard controls. Messages are still being published while it is unresponsive (verified with rostopic hz and image_view). If I play the same data back without the CPU-intensive nodes running, everything is fine. Stepping through the data is important for evaluating my nodes, so this has been very frustrating. Has anyone found a workaround for this problem? Originally posted by mkoval on ROS Answers with karma: 524 on 2011-05-27 Post score: 1 Original comments Comment by phil0stine on 2012-01-25: I'm getting the same behavior, but I have noticed that (for me at least), /clock is no longer being published, even though the actual data messages are. Since /clock is so important, I am a bit surprised at this. Answer: Hi, I will offer some observations I made when I was playing back and processing bags with Kinect data, which may or may not fix your issue. First, try tweaking the --rate and --queue parameters. Second, try loading the bag using rxbag, and playing it from there, instead of the command line. Using rxbag, I was able to get better system performance than rosbag play. Also I think it gives you better control over the playback - like starting at specific times or stepping through frame by frame. Hope this helps, Ivan Originally posted by Ivan Dryanovski with karma: 4954 on 2011-05-28 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by mkoval on 2011-05-28: Using the --rate options with a fractional rate fixed the problem. Thanks!
{ "domain": "robotics.stackexchange", "id": 5689, "tags": "rosbag" }
Using industrial_robot_client / simple_message with a single port
Question: I'm trying to interface a Thermo CRS F3 robot arm, which runs the RAPL-3 language and has a serial port for communication. I initially started with rosserial, but then found out that the simple_message protocol is better suited for my purpose. Because I'm already confused about all the new stuff, I'd like to get started with the default streaming client in industrial_robot_client before trying to customize it. I think I can connect a TCP port to the serial port using socat/ncat, and then implement the simple_message protocol on the robot side. However, I see that the default clients use up to 4 TCP ports: MOTION = 11000, SYSTEM = 11001, STATE = 11002, IO = 11003. Of these, at least MOTION and STATE are needed. Could I just specify the same port for both the robot_state and joint_trajectory_action nodes? Or would the replies to different commands get mixed up? Or is there some other reason such as buffering and latency to keep the ports separate? Originally posted by jpa on ROS Answers with karma: 3 on 2019-10-03 Post score: 0 Answer: However, I see that the default clients use up to 4 TCP ports: MOTION = 11000, SYSTEM = 11001, STATE = 11002, IO = 11003. Of these, at least MOTION and STATE are needed. Could I just specify the same port for both the robot_state and joint_trajectory_action nodes? I don't think so: while those nodes are clients (and in theory could connect to the same TCP port), there is currently no support for interleaving request-reply pairs (ie: there is no matching of incoming replies to outstanding requests). Or would the replies to different commands get mixed up? That would probably be what would happen, yes. Or is there some other reason such as buffering and latency to keep the ports separate?
the intent of Simple Message was to make writing server programs as simple as possible, as they typically have to be implemented using the very limited runtime systems of (industrial) robot controllers (ie: proprietary languages, limited memory and cpu, primitive support for socket communication). (De)multiplexing message streams is certainly possible, but a non-trivial task in such constrained environments. Originally posted by gvdhoorn with karma: 86574 on 2019-10-04 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by jpa on 2019-10-04: Ok, thanks for the info! I'll see whether I'll just use two serial ports or if I'll add some custom framing on top for interleaving.
{ "domain": "robotics.stackexchange", "id": 33845, "tags": "ros-melodic" }
Hexadecimal converter using a switch statement
Question: I need some help improving my code and am wondering if there is a better way to do the following: StringBuilder buf = new StringBuilder(node.length() + 8); for (int i=0, n=node.length(); i<n; i++) { char c = node.charAt(i); String hexadecimal = ""; switch (c) { case '"': hexadecimal = "\\22"; break; case '&': hexadecimal = "\\26"; break; case '\'': hexadecimal = "\\27"; break; case '/': hexadecimal = "\\2f"; break; case ':': hexadecimal = "\\3a"; break; case '<': hexadecimal = "\\3c"; break; case '>': hexadecimal = "\\3e"; break; case '@': hexadecimal ="\\40"; break; case '\\':hexadecimal = "\\5c"; break; default: { if (Character.isWhitespace(c)) { hexadecimal = "\\20"; }else if(c > 127){ hexadecimal = "\\" + Integer.toHexString(c) + "\\"; } else { hexadecimal = String.valueOf(c); } } } buf.append(hexadecimal); } return buf.toString(); I don't like how there is a switch statement which just sets a variable (hexadecimal) based on what the case is. I feel there is a better way to do this, such as by using a hash-map. I need the code to be flexible so that adding new characters and setting the hexadecimal is easy. Answer: A hash map is often a good idea for avoiding switch statements. It can be used in combination with the Strategy and Abstract Factory patterns, such as in these examples. However, the Strategy Pattern is overkill for your problem. A simple hashmap would look like this Map<Character, String> hexMap = new HashMap<Character, String>(); hexMap.put('\"', "\\22"); hexMap.put('\n', "\\20"); // puts more stuff in map... for (int i=0, n=node.length(); i<n; i++) { char c = node.charAt(i); String hex = hexMap.get(c); buf.append(hex != null ? hex : String.valueOf(c)); // fall back to the literal character, otherwise "null" is appended } But a quick googling of "java character to hexadecimal" produces these results. 
My favorite is StringBuilder buf = new StringBuilder(node.length() + 8); for (int i = 0; i < node.length(); ++i) { char ch = node.charAt(i); buf.append(String.format("\\%1$x", (ch & 0xFFFF))); } return buf.toString(); EDIT: Forgot to say, don't use magic numbers. I have no clue what the frack that 8 is doing in your program, nor do I have the patience to try to deduce it.
{ "domain": "codereview.stackexchange", "id": 2020, "tags": "java" }
Is $UP\not=NP$ with respect to random oracle?
Question: It is shown in An average-case depth hierarchy theorem for Boolean circuits that a random oracle makes $PH$ infinite. Is it possible to also show $UP\not=NP\not=\Sigma_2^P\not=\Sigma_3^P\not=\Sigma_4^P\not=\dots\not=PH\not=PSPACE$ with respect to a random oracle, or does a random oracle give $UP=NP$? Answer: Yes. Beigel CCC '89 showed $\mathsf{P} \neq \mathsf{UP} \neq \mathsf{NP}$ with probability 1. Combined with Rossman-Servedio-Tan, this gives the result you want. You should always try the Complexity Zoo for questions like this...
{ "domain": "cstheory.stackexchange", "id": 4258, "tags": "cc.complexity-theory, reference-request, oracles, polynomial-hierarchy, random-oracles" }
Multiclass classification with Neural Networks
Question: Let’s suppose I wanted to classify some input as one of three categories using a simple neural network. The output of my network is three columns (one for each possible category, I assume) with values between 0 and 1. Moreover, each row adds up to precisely one when the three columns are summed. Is it possible to interpret the output as the probability of my input belonging to each single category? Answer: Indeed, this is the standard interpretation of continuous classifier outputs, not only for neural networks, but for the more general case called Softmax Regression.
{ "domain": "datascience.stackexchange", "id": 2374, "tags": "machine-learning, neural-network, deep-learning, r, multiclass-classification" }
Quick questions about second quantization?
Question: What were the historical problems that second quantization solved? My current understanding is that in renormalisation one splits the result into a finite and a divergent part and only keeps the finite part of the answer. But these infinities only seem to have cropped up because one second quantized the field (not sure how to prove this, but it seems intuitively true)... I guess I'm also asking: Is there any indisputable evidence the field is second quantized? Answer: The Schrodinger equation describes particles, and right back at the beginnings of quantum mechanics it was an obvious question whether a field could be quantised in the same way. The first steps I know of in this direction were Born, Heisenberg and Jordan's paper Zur Quantenmechanik II in 1926, so it really does go back almost to the beginnings of quantum mechanics. It only took a few more years for physicists to realise that a field theory could describe both fields and particles, that it could do so in a relativistic way and that particle creation and annihilation emerged naturally from a field theory. So quantum field theory was a no-brainer. It was an elegant idea that described a wide range of physical phenomena. The early problems with field theory weren't with the basic ideas. The non-interacting scalar field theory is elegant and exactly soluble. The problems were that the equations describing interacting fields were very complicated and the perturbative approaches used at the time didn't work. They produced the infinities that you allude to in your question. Renormalisation hasn't changed the basic ideas. It just means we now know how to do the calculations correctly. You ask: Is there any indisputable evidence the field is second quantized? and the answer is that quantum field theory is tested every day in colliders across the world and has so far proved effective at describing the behaviour of fundamental particles. So that would be a yes then.
One last comment: the term second quantisation is an unfortunate one. Its original meaning was a second method of quantisation, i.e. an alternative approach to first quantisation, and not that anything is being quantised twice. Few physicists I know would use the term second quantisation because of the potential for confusion. However you will still find it being used disappointingly frequently.
{ "domain": "physics.stackexchange", "id": 56516, "tags": "quantum-field-theory, history, second-quantization" }
Waveguide cut off frequency derivation - Wave equation to Helmholtz equation
Question: I'm trying to derive the cut-off frequency for a waveguide. I found a derivation on Wikipedia, but I don't understand the first step where we go from the wave equation to the Helmholtz equation. Why does only considering $\psi(x,y,z,t)=\psi(x,y,z)e^{i\omega t}$ mean you can go from the wave equation to the Helmholtz equation? Answer: By guessing a time dependence of the form $e^{i\omega t}$ you remove the time dependence from the equation; specifically, it transforms into the Helmholtz equation. I recommend you set $\psi(x,y,z,t)=u(x,y,z)e^{i\omega t}$ and substitute into the wave equation and do the algebra and see for yourself. You'll get the Helmholtz equation on the spatial part $u(x,y,z)$. I suggest you read up on separation of variables
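The substitution can also be spot-checked numerically. A one-dimensional sketch with made-up parameter values: for the separable ansatz $\psi(x,t) = u(x)e^{i\omega t}$ with $u(x) = e^{ikx}$ and $k = \omega/c$, finite differences confirm that $\psi$ satisfies the wave equation while its spatial part satisfies the Helmholtz equation $u'' + k^2 u = 0$.

```python
import cmath

c, omega = 1.0, 2.0
k = omega / c                    # wavenumber fixed by the separation constant

def psi(x, t):
    # Separable ansatz psi(x, t) = u(x) * e^{i omega t}, with u(x) = e^{i k x}.
    return cmath.exp(1j * k * x) * cmath.exp(1j * omega * t)

h = 1e-4                         # finite-difference step
x0, t0 = 0.3, 0.7                # arbitrary sample point

d2x = (psi(x0 + h, t0) - 2 * psi(x0, t0) + psi(x0 - h, t0)) / h ** 2
d2t = (psi(x0, t0 + h) - 2 * psi(x0, t0) + psi(x0, t0 - h)) / h ** 2

wave_residual = d2x - d2t / c ** 2               # psi_xx - psi_tt / c^2 = 0
helmholtz_residual = d2x + k ** 2 * psi(x0, t0)  # u'' + k^2 u = 0 carries over
```

The time derivative just pulls down $-\omega^2$, which is why the time dependence drops out of the equation entirely.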
{ "domain": "physics.stackexchange", "id": 77236, "tags": "electromagnetism, waves, maxwell-equations, vector-fields, waveguide" }
Can excitons be understood in terms of classical quantum physics?
Question: From what I understand, an exciton is an electron-hole pair in a semiconductor that exists in a bound state (through the electrostatic potential). I have seen it stated that this pair behaves in a way analogous to a hydrogen atom, i.e. as two particles bound through some potential, and having a radius, etc. It is not at all clear to me how that could be made concrete. Can the hole somehow be treated as a particle? If so, is it possible to write down the Schrödinger equation (or Hamiltonian) for the system, or does the statement belong to quantum field theory or other more general theories? In particular it's not at all obvious to me what would be the mass of a hole. Answer: Hole as a particle First, a hole really can be treated as a particle. For electrons there is the Pauli exclusion principle, so there is only one electron per state (a state can be described by momentum $\vec p$, band index and spin). In semiconductors there are a valence band and a conduction band. In the ground state, the valence band is completely occupied by electrons, so the bulk momentum is zero. For convenience, we can shift the energy such that $E = 0$ for the ground state. Removing one electron (with momentum $\vec p_1$) from the valence band will then leave the system with a positive energy (for the valence band we have $\varepsilon \approx - \frac{p^2}{2 m_v^*}$, therefore the resulting energy will be $E \approx \frac{p_1^2}{2 m_v^*}$). In addition, the bulk momentum would be $\vec P = - \vec p_1$. It is convenient to use the hole formalism, where such a perturbation of the ground state is treated as the creation of a hole, a particle with charge $+\vert e \vert$ and energy spectrum $\varepsilon(\vec p) = \frac{p^2}{2 m_v^*}$. The $m_v^*$ appearing here, which comes from the valence-band structure, is the mass of the hole. What is an exciton? Now suppose we have a system with one hole (one valence electron absent) and one conduction electron. They have charges $+1$ and $-1$ respectively, so they can interact electromagnetically. 
In quantum mechanics, the energy of two interacting particles cannot be written simply as $E_{1-2} = \frac{p_1^2}{2 m_1} + \frac{p_2^2}{2 m_2}$. Rather, it takes the form $E_{1-2} = \frac{P^2}{2(m_1 + m_2)} + E_{inter}$, where the first term describes the center-of-mass motion and the second describes the interaction. For an attractive force, $E_{inter}$ can be negative, with a wave function in which the two particles are localized near each other. This is exactly what is called a bound state. In a semiconductor, a bound state of an electron-hole pair is called an exciton. What actually happens in the semiconductor? How can one picture an electron attracting such a void? The absence of one valence electron means there is a net charge of $+1$. Recall that a semiconductor consists of positively charged background ions with electrons "flying" over that background. It is difficult to picture $10^{23}$ particles constantly scattering off each other, but one can imagine that a conduction electron is attracted to the places where a valence electron is absent, since there the net charge distribution is positive. Quantum mechanically, the electron can be localized near such places, with an energy gain described approximately by hydrogen-like states (but with effective $Z$ and $m$). What formalism should be used to obtain this picture? Strictly speaking, the exciton is a collective excitation and must be considered from a field-theoretic point of view. However, with some physical intuition, we can approximate the system by simple solvable models: for example, by neglecting the interaction of the conduction electron with the valence electrons. This permits us to use a two-particle wave-function formalism and write a Schrödinger equation for it.
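The hydrogen analogy can be made quantitative with a scaled Rydberg formula: replace the electron mass by the electron-hole reduced mass and screen the Coulomb interaction by the static dielectric constant. A minimal sketch; the effective masses and dielectric constant below are illustrative GaAs-like textbook numbers, not values taken from this answer:

```python
# Hydrogen-like (Wannier) exciton: reduced mass + screened Coulomb attraction.
RYDBERG_EV = 13.605693   # hydrogen Rydberg energy, eV
BOHR_NM = 0.0529177      # hydrogen Bohr radius, nm

def exciton_params(m_e_eff, m_h_eff, eps_r, n=1):
    """Binding energy (eV) and radius (nm) of the n-th exciton level."""
    mu = m_e_eff * m_h_eff / (m_e_eff + m_h_eff)  # reduced mass, in units of m_e
    binding = RYDBERG_EV * mu / eps_r**2 / n**2
    radius = BOHR_NM * eps_r / mu * n**2
    return binding, radius

# Illustrative GaAs-like parameters (effective masses in units of m_e)
E_b, a_x = exciton_params(m_e_eff=0.067, m_h_eff=0.45, eps_r=12.9)
print(f"binding energy ~ {E_b * 1000:.1f} meV, radius ~ {a_x:.1f} nm")
```

The meV-scale binding energy and a radius of order 10 nm show why such excitons extend over many lattice sites, which is what justifies the effective-mass, two-particle treatment in the first place.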
{ "domain": "physics.stackexchange", "id": 24532, "tags": "condensed-matter, electronic-band-theory, quasiparticles" }
Why are equations of state for a non-ideal gas so elusive?
Question: The ideal gas equation (daresay "law") is a fascinating combination of the work of dozens of scientists over a long period of time. I encountered Van der Waals interpretation for non-ideal gases early on, and it was always somewhat in a "closed-form" $$\left( p + \frac{n^2a}{V^2} \right)(V - nb) = nRT$$ with $a$ being a measure of the charge interactions between the particles and $b$ being a measure of the volume interactions. Understandably, this equation is only still around for historical purposes, as it is largely inaccurate. Fast-forwarding to the 1990s, Wikipedia has a listing of one of the more current manifestations (of Elliott, Suresh, and Donohue): $$\frac{p V_\mathrm{m}}{RT} = Z = 1 + Z^{\mathrm{rep}} + Z^{\mathrm{att}}$$ where the repulsive and attractive forces between the molecules are proportional to a shape number ($c = 1$ for spherical molecules, a quadratic for others) and reduced number density, which is a function of Boltzmann's constant, etc (point being, a lot of "fudge factors" and approximations are getting thrown into the mix). Rather than seeking an explanation of all of this, I am wondering whether a more "closed form" solution lies at the end of the tunnel, or whether the approximations brought forth in the more modern models will have to suffice? Answer: At the end of the tunnel, you're still trying to approximate the statistical average of interactions between individual molecules using macroscopic quantities. The refinements add more parameters because you're trying to parametrise the overall effect of those individual interactions for every property that is involved for each molecule. You're never going to get a unified "parameter-free" solution for those without going down to the scale of the individual molecules (e.g. ab initio molecular dynamics), as far as I can tell.
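To see the two corrections at work, one can evaluate the pressure form of the van der Waals equation, $p = RT/(V_m - b) - a/V_m^2$, and compare it with the ideal-gas value. A quick sketch; the $a$ and $b$ constants below are standard tabulated values for CO2, used only for illustration:

```python
R = 8.314  # gas constant, J/(mol K)

def vdw_pressure(V_m, T, a, b):
    """van der Waals pressure for molar volume V_m (m^3/mol) at temperature T (K)."""
    return R * T / (V_m - b) - a / V_m**2

a_co2, b_co2 = 0.3640, 4.267e-5   # Pa m^6/mol^2 and m^3/mol
T, V_m = 300.0, 1.0e-3            # 300 K, one litre per mole
p_vdw = vdw_pressure(V_m, T, a_co2, b_co2)
p_ideal = R * T / V_m
Z = p_vdw * V_m / (R * T)         # compressibility factor, 1 for an ideal gas
print(p_vdw, p_ideal, Z)
```

At this state the attractive $a/V_m^2$ term dominates, so $Z < 1$; the more modern equations quoted above are, in effect, more elaborate parametrisations of this same $Z$.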
{ "domain": "chemistry.stackexchange", "id": 12594, "tags": "physical-chemistry, gas-laws, intermolecular-forces, equation-of-state" }
What's the proper way to use gzerr?
Question: I'm trying to use gzerr to output error messages to the console in which gazebo is running from. If I put the following code in my model plugin's load method, the gzerr text never shows in the console. gzerr << "this message never gets displayed\n"; std::cout << "this message gets displayed just fine" << std::endl; I'm starting gazebo using the "gazebo" command, and I have verified with an std::cout that the plugin's load method is getting called. I've also made sure that gazebo/common/Console.h is included. I've also tried using sdferr the same way as gzerr, and sdferr works exactly as expected. What am I doing wrong? Originally posted by pcdangio on Gazebo Answers with karma: 207 on 2016-04-13 Post score: 0 Answer: Run Gazebo in verbose mode: gazebo --verbose Originally posted by chapulina with karma: 7504 on 2016-04-13 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by pcdangio on 2016-04-14: Thanks! Interesting that by default error messages won't show up. What's the proper way then, to notify the user of an error if not running in verbose mode? Comment by chapulina on 2016-04-14: It depends on the "user". If it's someone who will be using the command line, you'd use the verbose mode. If it's a graphical interface user, you could make a GUI plugin which prints messages to the screen.
{ "domain": "robotics.stackexchange", "id": 3905, "tags": "gazebo" }
Derivative of operator with respect to parameters
Question: From Shankar's QM book pg. 56: For an operator $\theta(\lambda)$ that depends on a parameter $\lambda$ defined by $$\theta(\lambda)=e^{\lambda\Omega}$$ where $\Omega$ is also a constant operator, we can show that $$\frac{d}{d\lambda}\theta(\lambda)= e^{\lambda\Omega}\Omega=\theta(\lambda)\Omega .\tag{1.9.7}$$ Hence if we are confronted with the above differential equation, its solution is given by $$\theta(\lambda)=Ce^{\lambda\Omega}$$ where $C$ is a constant operator. My question is why does the constant operator $C$ appear? Answer: For essentially the same reason that it appears in differential equations of functions. The differential equation $$\frac{\text{d}\theta(\lambda)}{\text{d}\lambda} = \theta(\lambda) \Omega$$ defines a family of operators, given by $$\theta(\lambda) = C e^{\lambda \Omega}.$$ Different choices of the constant operator $C$ lead to different operators $\theta(\lambda)$, all of which satisfy the same differential equation. In other words, the choice of $C=\mathbb{I}$ is just one of the possibilities. This mirrors the case of ordinary functions: the solution to a differential equation of the form $f'(t) = a\times f(t)$ is the family of functions $f_c(t) = c \exp(at)$, where $c$ is a constant that is set by the value of $f(t)$ at $t=0$. Another way to see explicitly that any choice of the operator $C$ satisfies this equation is by writing out the operator in its power-series form, i.e.: \begin{align} \theta(\lambda) = C e^{\lambda \Omega} &= C + \lambda C \Omega + \frac{\lambda^2}{2!} C \Omega^2 + \frac{\lambda^3}{3!} C \Omega^3 + ... \\ \implies \frac{\text{d}\theta}{\text{d}\lambda} &= 0\,\,\mathbb{I} + C \Omega + \lambda C \Omega^2 + \frac{\lambda^2}{2!} C \Omega^3 + ...\\ &= C \left( \mathbb{I} + \lambda \Omega + \frac{\lambda^2}{2!} \Omega^2 + ...\right) \Omega \\ &= C e^{\lambda\Omega} \Omega,\\ \text{i.e. }\quad \frac{\text{d}\theta}{\text{d}\lambda} &= \theta(\lambda) \Omega.
\end{align} Note that since $C$ and $\Omega$ need not commute, I pulled $C$ out to the left and $\Omega$ out to the right. Thus $\theta(\lambda) = C e^{\lambda\Omega}$ satisfies the differential equation for any constant operator $C$.
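The series manipulation can also be checked numerically: build $\theta(\lambda) = C e^{\lambda\Omega}$ for random matrices and compare a finite-difference derivative against $\theta(\lambda)\,\Omega$. A small sketch; the truncated power series for the matrix exponential is only there to keep the example self-contained:

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via its defining power series (fine for small norms)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k      # term is now M^k / k!
        result = result + term
    return result

rng = np.random.default_rng(0)
Omega = rng.normal(size=(3, 3))
C = rng.normal(size=(3, 3))      # an arbitrary constant operator

def theta(lam):
    return C @ expm(lam * Omega)

lam, h = 0.7, 1e-6
deriv_fd = (theta(lam + h) - theta(lam - h)) / (2 * h)  # central difference
deriv_exact = theta(lam) @ Omega
err = np.max(np.abs(deriv_fd - deriv_exact))
print(err)
```

Note the ordering: because $C$ and $\Omega$ need not commute, $\Omega$ multiplies from the right, exactly as in the series above.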
{ "domain": "physics.stackexchange", "id": 86512, "tags": "quantum-mechanics, differentiation, differential-equations" }
start and stop rosbag within a python script
Question: Hi, I wrote an Action Server which gets all the topics I want to record via the goal. Everything works fine until I want to stop rosbag. From my script I can start rosbag, and I can see the bag file which is generated. But when I want to kill the rosbag process from my script, rosbag doesn't stop. It doesn't stop until I stop my whole ActionServer. But the action server should still run after stopping rosbag. Here is the code I wrote (the topic_logger.msg includes the goal, an id and an array of topics):

#! /usr/bin/env python
import roslib; roslib.load_manifest('topic_logger')
import rospy
import subprocess
import signal
import actionlib
import topic_logger.msg

class TopicLoggerAction(object):
    # create messages that are used to publish feedback/result
    _feedback = topic_logger.msg.topicLoggerFeedback()
    _result = topic_logger.msg.topicLoggerResult()

    def __init__(self, name):
        self._action_name = name
        self._as = actionlib.SimpleActionServer(self._action_name, topic_logger.msg.TopicLoggerAction, execute_cb = self.execute_cb)
        self._as.start()
        rospy.loginfo('Server is up')

    def execute_cb(self, goal):
        if goal.ID == "topicLog":
            # decide whether recording should be started or stopped
            if goal.command == "start":
                # start to record the topics
                rospy.loginfo('now the topic recording should start')
                args = ""
                for i in goal.selectedTopics:
                    args = args + " " + i
                command = "rosbag record" + args
                self.p = subprocess.Popen(command, stdin=subprocess.PIPE, shell=True, cwd=dir_save_bagfile)
                rospy.loginfo(self.p.pid)
                # process = p
                # print p.stdout.read()
                # check if the goal is preempted
                while 1:
                    if self._as.is_preempt_requested():
                        rospy.loginfo('Logging is preempted')
                        self._as.set_preempted()
                        break
            elif goal.command == "stop":
                # stop to record the topics
                rospy.loginfo('now the topic recording should stop')
                # self.p.send_signal(signal.SIGTERM)
                rospy.loginfo(self.p.pid)
                killcommand = "kill -9 " + str(self.p.pid)
                rospy.loginfo(killcommand)
                self.k = subprocess.Popen(killcommand, shell=True)
                rospy.loginfo("I'm done")
                # while 1:
                #     if self._as.is_preempt_requested():
                #         rospy.loginfo('Logging is preempted')
                #         self._as.set_preempted()
                #         break
            else:
                rospy.loginfo('goal.command is not valid')
        else:
            rospy.loginfo('goal.ID is not valid')

if __name__ == '__main__':
    rospy.init_node('topicLogger')
    dir_save_bagfile = '/home/ker1pal/'
    TopicLoggerAction(rospy.get_name())
    rospy.spin()

Are there any ideas why I can't stop rosbag from my script without killing my whole ActionServer? Thanks Ralf Originally posted by r_kempf on ROS Answers with karma: 133 on 2011-07-25 Post score: 11 Answer: Ralf, I'm just now playing around with actionlib, so I don't know if there are any other interactions, but I also recently wrote a node which starts recording data via rosbag and subprocess.Popen(). I've been ending the rosbag session with:

rosbag_proc = subprocess.Popen(...)
...
rosbag_proc.send_signal(subprocess.signal.SIGINT)

SIGINT is the same as "Ctrl-C" for Unix. It seems to end the rosbag process cleanly without adversely affecting my ROS node. Originally posted by heyneman with karma: 46 on 2011-07-26 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by r_kempf on 2011-07-27: I tried it before by sending such a signal but it doesn't help. What are the parameters you give to Popen? Comment by rahvee on 2018-06-01: When I end rosbag record by pressing Ctrl-C, it ends cleanly. But when I use subprocess.Popen and send_signal(signal.SIGINT), nothing happens (the child process doesn't stop recording). If I use SIGTERM, I end up with a filename.bag.active file, which is not good. :-( Comment by Tones on 2019-04-15: sending Ctrl-C to rosbag record seems to stop the main process, but the child processes seem to keep running. I could finish the recording properly by sending one more Ctrl-C to one of the subprocesses. An alternative is sending SIGINT to the complete process group.
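The accepted answer's send_signal approach can be reproduced with any long-running child process. One detail worth noting for the question's code: with shell=True, self.p.pid is the pid of the shell, not of rosbag itself, so signals aimed at that pid never reach the recorder. A minimal stand-in sketch, using a dummy child instead of rosbag record so it runs anywhere:

```python
import signal
import subprocess
import sys
import time

# Stand-in for "rosbag record": a child that exits cleanly on SIGINT.
child_src = (
    "import signal, sys, time\n"
    "signal.signal(signal.SIGINT, lambda *args: sys.exit(0))\n"
    "time.sleep(30)\n"
)

# No shell=True, so the pid (and the signal) belongs to the child itself.
proc = subprocess.Popen([sys.executable, "-c", child_src])
time.sleep(1.0)                   # give the child time to install its handler
proc.send_signal(signal.SIGINT)   # the programmatic equivalent of Ctrl-C
proc.wait(timeout=10)
print(proc.returncode)
```

A return code of 0 means the child shut down cleanly. As the later comments point out, rosbag record spawns its own children, so in stubborn cases the SIGINT may need to go to the whole process group (os.killpg) rather than to a single pid.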
{ "domain": "robotics.stackexchange", "id": 6248, "tags": "python, rosbag" }
rosserial fails when using sensor_msgs::Imu
Question: I am attempting to send IMU messages to ROS from an Arduino and was making good progress until last night... I can send all sorts of primitive messages, and messages which contain other primitives, but for some reason the IMU message doesn't make it to rostopic echo /imu. Code attached, please advise: $ rosrun rosserial_python serial_node.py _port:=/dev/ttyACM0 [INFO] [WallTime: 1381881252.718444] ROS Serial Python Node [INFO] [WallTime: 1381881252.726693] Connecting to /dev/ttyACM0 at 57600 baud [INFO] [WallTime: 1381881255.329399] Note: publish buffer size is 512 bytes [INFO] [WallTime: 1381881255.329896] Setup publisher on imu [sensor_msgs/Imu] [ERROR] [WallTime: 1381881272.931531] Mismatched protocol version in packet: lost sync or rosserial_python is from different ros release than the rosserial client [INFO] [WallTime: 1381881272.931971] Protocol version of client is Rev 0 (rosserial 0.4 and earlier), expected Rev 1 (rosserial 0.5+) ... genpy.message.DeserializationError: unpack requires a string argument of length 32 Which is thrown from this line of _Imu.py: (_x.orientation.x, _x.orientation.y, _x.orientation.z, _x.orientation.w,) = _struct_4d.unpack(str[start:end]) Here's my sketch code: #include <ros.h> #include <sensor_msgs/Imu.h> ros::NodeHandle nh; sensor_msgs::Imu imu_msg; ros::Publisher pub("imu", &imu_msg); char frame_id[] = "imu"; void setup(){ nh.initNode(); nh.advertise(pub); imu_msg.header.seq = 0; imu_msg.header.stamp = nh.now(); imu_msg.header.frame_id = frame_id; imu_msg.orientation.x = 0; imu_msg.orientation.y = 0; imu_msg.orientation.z = 0; imu_msg.orientation.w = 0; imu_msg.orientation_covariance[0] = 0; imu_msg.orientation_covariance[1] = 0; imu_msg.orientation_covariance[2] = 0; imu_msg.orientation_covariance[3] = 0; imu_msg.orientation_covariance[4] = 0; imu_msg.orientation_covariance[5] = 0; imu_msg.orientation_covariance[6] = 0; imu_msg.orientation_covariance[7] = 0; imu_msg.orientation_covariance[8] = 0; 
imu_msg.angular_velocity.x = 0; imu_msg.angular_velocity.y = 0; imu_msg.angular_velocity.z = 0; imu_msg.angular_velocity_covariance[0] = 0; imu_msg.angular_velocity_covariance[1] = 0; imu_msg.angular_velocity_covariance[2] = 0; imu_msg.angular_velocity_covariance[3] = 0; imu_msg.angular_velocity_covariance[4] = 0; imu_msg.angular_velocity_covariance[5] = 0; imu_msg.angular_velocity_covariance[6] = 0; imu_msg.angular_velocity_covariance[7] = 0; imu_msg.angular_velocity_covariance[8] = 0; imu_msg.linear_acceleration.x = 0; imu_msg.linear_acceleration.y = 0; imu_msg.linear_acceleration.z = 0; imu_msg.linear_acceleration_covariance[0] = 0; imu_msg.linear_acceleration_covariance[1] = 0; imu_msg.linear_acceleration_covariance[2] = 0; imu_msg.linear_acceleration_covariance[3] = 0; imu_msg.linear_acceleration_covariance[4] = 0; imu_msg.linear_acceleration_covariance[5] = 0; imu_msg.linear_acceleration_covariance[6] = 0; imu_msg.linear_acceleration_covariance[7] = 0; imu_msg.linear_acceleration_covariance[8] = 0; } void loop(){ imu_msg.header.seq++; imu_msg.header.stamp = nh.now(); // Doesn't work: pub.publish(&imu_msg); // These work: // pub.publish(&imu_msg.header); // pub.publish(&imu_msg.orientation); // pub.publish(&imu_msg.angular_velocity); // This doesn't nh.spinOnce(); delay(500); } Originally posted by Dereck on ROS Answers with karma: 1070 on 2013-10-15 Post score: 0 Original comments Comment by colidar on 2019-07-23: thanks for sharing your sketch code, it helped me a lot. But why is it working with setting every value to zero? Because of that the imu_message is literally empty. Answer: The IMU message is too large for the Arduino serial port buffer: http://answers.ros.org/question/12782/rosserial_arduino-cant-send-a-sensor_msgsimu-msg/ The solution is to send a float array via serial and then convert that to an IMU message in another node. 
Originally posted by Dereck with karma: 1070 on 2013-10-24 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by marwa eldiwiny on 2016-11-13: got the same error while trying to receive from arduino Comment by Markus Bader on 2016-11-13: Depending on your board (UNO/M0/...) you can search for the define #define SERIAL_BUFFER_SIZE 64" in the arduino framework and increase it :-) Look for a file named RingBuffer.h
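A quick size check makes the diagnosis concrete: sensor_msgs/Imu carries a quaternion, two Vector3s and three 9-element covariance arrays, all float64, so even without the header and rosserial framing it dwarfs the 64-byte serial ring buffer mentioned in the comments. A sketch:

```python
import struct

# Fixed-size float64 fields of sensor_msgs/Imu (header excluded):
# orientation (4) + orientation_covariance (9)
# + angular_velocity (3) + angular_velocity_covariance (9)
# + linear_acceleration (3) + linear_acceleration_covariance (9)
n_doubles = 4 + 9 + 3 + 9 + 3 + 9
imu_payload = struct.calcsize("<" + "d" * n_doubles)   # packed, no padding

AVR_SERIAL_BUFFER = 64   # default SERIAL_BUFFER_SIZE on classic AVR boards
print(imu_payload, imu_payload > AVR_SERIAL_BUFFER)
```

That is 296 bytes of doubles alone, more than four full buffers, which is why sending a compact float array (or raising the buffer size, as suggested above) works where the full message does not.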
{ "domain": "robotics.stackexchange", "id": 15879, "tags": "arduino, rosserial" }
Is my code fine in the areas of read and write?
Question: What: This is code that, when executed, simply writes packets to certain file types. Packet is very simple and contains things like: description, partCount, price... Please: Ignore the usage of namespace std; as this is for educational purposes, mostly. It will most likely not be reproduced in a real program. Focus on reviewing the read and write portions as I would love feedback on those portions. I used some other things for simplicity, but in general I am mainly looking for feedback on the read/write portions of the code. Intent: I'm specifically looking for feedback on the read and write portions of the code. Could I get feedback on it, on whether it is feasible as a piece of code, and on what I can improve? Main.cpp: #include <iostream> #include <fstream> #include <vector> #include "Packet.h" using namespace std; int main() { cout << "----------------------------------------------------" << endl; cout << "Input and Output Filing System" << endl; cout << "----------------------------------------------------" << endl; ifstream inFile; ofstream outFile; fstream bOutFile; vector<Packet> package; Packet one(0, "A packet of cheese.", 21.99, 4); package.push_back(one); Packet two(1, "A packet of super-cheese.", 31.99, 2); package.push_back(two); cout << "Starting to write packets to file..." << endl; outFile.open("Insert Your Destination of File Here.Txt"); for (int i = 0; i < package.size(); ++i) { outFile << "{" << endl; outFile << package[i].getPartId() << "," << endl; outFile << package[i].getDescription() << "," << endl; outFile << package[i].getPartCount() << "," << endl; outFile << package[i].getPrice() << "," << endl; outFile << "}" << endl << endl; } outFile.close(); cout << "Starting to read packets from file..."
<< endl; inFile.open("Insert Your Destination of File Here.Txt"); string line; while (getline(inFile, line)) { if (line[0] != '{' && line[0] != '}') { cout << line.substr(0, line.size() - 1) << endl; } } inFile.close(); cout << "Starting to write packets to binary..." << endl; bOutFile.open("Insert Your Destination of File Here.bin", ios::out | ios::binary); for (int i = 0; i < package.size(); ++i) { bOutFile.write((char*)&package[i], sizeof(package[i])); } bOutFile.close(); system("pause"); } Packet.h: #pragma once #include <string> using namespace std; class Packet { public: Packet(int partId, string description, double price, int partCount) : partId(partId), description(description), price(price), partCount(partCount) {} int getPartId() const { return partId; } string getDescription() const { return description; } double getPrice() const { return price; } int getPartCount() const { return partCount; } private: int partId; string description; double price; int partCount; }; Answer: for (int i = 0; i < package.size(); ++i) I'd use for (auto& p : package) instead if the index (i) is not used for anything. outFile << package[i].getPartId() << "," << endl; Remember that std::endl flushes the stream. This is very minor, but I'd use a manual '\n' and then flush when the whole object is written. bOutFile.write((char*)&package[i], sizeof(package[i])); This I do not like at all. Copying objects like this is problematic when you have objects with more complex data, i.e. pointers or dynamically allocated memory. Take a look at serialization/deserialization; I found this read quite good on the topic: https://rubentorresbonet.wordpress.com/2014/08/25/an-overview-of-data-serialization-techniques-in-c/ Packet(int partId, string description, double price, int partCount) Make the passed arguments const references instead! (Otherwise an extra copy will be done)
{ "domain": "codereview.stackexchange", "id": 35089, "tags": "c++, performance, beginner" }
Bullet Cluster and MOND
Question: Apparently the Bullet Cluster is some slam-dunk proof of ΛCDM. The argument seems to be that most (>90%) of the baryonic mass in these clusters is in the form of X-ray emitting gas. Therefore the gravity lensing should follow the gas. However, I can't find any references for the basic assumption about the gas to total baryonic mass ratio (that didn't already assume a ΛCDM model). Can anyone provide the background? Answer: The first paper I looked at (Paraficz et al. 2012) explains that the hot gas mass is determined from X-ray observations. The X-ray flux from an optically thin gas depends on the square of the gas density multiplied by its volume [Specifically: $f_x = A(T) n_{e}^2 V/4\pi d^2$, where $A(T)$ is the known radiative cooling function and $T$ comes from the X-ray spectrum, $V$ the volume, $n_e$ the electron number density and $d$ the distance.] - if you can measure $f_x$ and estimate the volume, you get the density and hence the gas mass. Some details for the analysis of the Chandra X-ray observations of the Bullet cluster are found in Close et al. (2006), including how they model the geometry of the various components. They conclude that their gas mass estimate is good to 10 per cent. The masses of individual galaxies are estimated by modelling their luminosities through Faber-Jackson or (for spirals) Tully-Fisher scaling relations (see also here). These give the total galaxy mass, which would include dark matter. To estimate just the baryonic mass one uses the mass-to-luminosity ratio for stellar material under the assumption that most of the baryonic matter is stars (a small correction could be made for gas, dust, etc.). It is on this basis that it is claimed that the X-ray emitting gas contains a similar amount of mass to that associated with individual galaxies.
If those galaxies have non-baryonic dark matter halos that dominate their total mass (which seems likely unless they have extraordinarily low luminosity-to-mass ratios) then I think this leads to the claim that about 90 per cent of the baryonic mass is in the X-ray emitting gas. If one is sceptical of dark matter and doesn't trust the FJ and TF scaling relations, then I guess you just take the luminosity of the individual galaxies, convert that to a stellar mass, and you would arrive at more-or-less the same number. For the Bullet Cluster, gravitational lensing then reveals that the galaxies plus hot gas represent only 20 per cent of the total cluster mass (9 per cent in hot gas, 11 per cent in galaxies) and thus that 89 per cent of the total mass is not in galaxies and that only a small fraction of this is in the form of a hot baryonic gas.
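The gas-mass estimate described above amounts to inverting $f_x = A(T) n_e^2 V / 4\pi d^2$ for $n_e$ and then multiplying by the emitting volume. A sketch of that inversion; every input number below is a hypothetical placeholder chosen only to make the round trip checkable, not a measured value:

```python
import math

M_P = 1.6726e-27   # proton mass, kg
MU_E = 1.17        # assumed mean mass per electron, in proton masses

def gas_mass_from_flux(f_x, A_T, volume, distance):
    """Invert f_x = A(T) n_e^2 V / (4 pi d^2), then M_gas ~ mu_e m_p n_e V."""
    n_e = math.sqrt(4 * math.pi * distance**2 * f_x / (A_T * volume))
    return MU_E * M_P * n_e * volume, n_e

# Round trip with hypothetical inputs: pick a density, compute the flux it
# would produce, then check that the inversion recovers it.
n_e_true = 1.0e3   # electrons per m^3 (hypothetical)
A_T = 1.0e-40      # cooling function at the fitted T (hypothetical)
V = 1.0e67         # emitting volume, m^3 (hypothetical)
d = 3.0e25         # distance, m (hypothetical)

f_x = A_T * n_e_true**2 * V / (4 * math.pi * d**2)
mass, n_e = gas_mass_from_flux(f_x, A_T, V, d)
print(mass, n_e)
```

Because the flux scales as $n_e^2$ while the mass scales only as $n_e$, flux errors are roughly halved on the way to the mass, which helps explain how a 10 per cent gas-mass accuracy is achievable.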
{ "domain": "physics.stackexchange", "id": 25372, "tags": "dark-matter, gravitational-lensing, galaxy-clusters, modified-gravity" }
Bra space and adjoint vectors
Question: If I'm not wrong, a bra, $ \langle \phi_n | $, can be thought of as a linear functional that, when applied to a ket vector $| \phi_m \rangle$, returns a complex number; that is, the inner product is a linear mapping $\xi \rightarrow \mathbb{C} $. Moreover, there exists a bra for each ket, and in a discrete basis the reverse holds too. So, working in a discrete basis, my question is: when we take the adjoint of a vector, do we go from a vector space to a "linear functional" space? That is, when we want to calculate the inner product of $|\phi_n \rangle$ with itself, are we in fact applying the bra associated with the vector to the same vector? Answer: 1) Whenever one has a topological vector space (TVS) $V$ over some field $\mathbb{F}$, one can construct a dual vector space $V^*$ consisting of continuous linear functionals $f:V\to\mathbb{F}$. 2) Under relatively mild conditions on the topology of $V$, it is possible to turn the dual vector space $V^*$ into a TVS. One may iterate the construction of dual vector spaces, so that more generally, one may consider the double-dual vector space $V^{**}$, the triple-dual vector space $V^{***}$, etc. 3) There is a natural/canonical injective linear map $i :V\to V^{**}$. It is defined as $$i(v)(f):=f(v),\qquad\qquad v\in V, \qquad\qquad f\in V^*. $$ 4) If the map $i$ is bijective, $V\cong V^{**}$, one says that $V$ is a reflexive TVS. 5) If $V$ is an inner product space (which is a particularly nice example of a TVS), then there is a natural/canonical injective conjugate-linear map $j :V\to V^*$. It is defined as $$j(v)(w):=\langle v, w \rangle ,\qquad\qquad v,w\in V. $$ Here we follow the Dirac convention that the "bracket" $\langle\cdot, \cdot \rangle$ is conjugate-linear in the first entry (as opposed to a lot of the math literature). 6) The Riesz representation theorem (RRT) shows that $j$ is a bijection if $V$ is a Hilbert space. In other words, a Hilbert space is self-dual, $V\cong V^*$.
If one identifies $V$ with the set of kets, and $V^*$ with the set of bras, one may interpret RRT as saying that there is a natural/canonical one-to-one correspondence between bras and kets.
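Points 5) and 6) are easy to make concrete in a finite-dimensional Hilbert space, where kets are column vectors and $j(v)$ is the functional $w \mapsto \langle v, w\rangle$. A small numerical sketch; np.vdot conjugates its first argument, matching the Dirac convention used above:

```python
import numpy as np

def j(v):
    """Map a ket v to the bra functional w -> <v, w>."""
    return lambda w: np.vdot(v, w)   # conjugate-linear in v, linear in w

v = np.array([1 + 2j, 3 - 1j])
w = np.array([0.5j, 2.0 + 0j])
a = 2 - 1j

linear_in_ket = np.isclose(j(v)(a * w), a * j(v)(w))
conj_linear_in_bra = np.isclose(j(a * v)(w), np.conj(a) * j(v)(w))
norm_sq = np.isclose(j(v)(v), np.linalg.norm(v) ** 2)
print(linear_in_ket, conj_linear_in_bra, norm_sq)
```

So taking the adjoint of a ket really does land in the dual space of functionals, and $\langle\phi_n|\phi_n\rangle$ is exactly the bra $j(|\phi_n\rangle)$ applied to the ket itself, answering the question in the affirmative.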
{ "domain": "physics.stackexchange", "id": 2493, "tags": "quantum-mechanics, mathematical-physics, mathematics, vectors, hilbert-space" }
Line of Best Fit with or Without Constant Term
Question: Some other physics teachers and I were discussing an AP problem about a potential experiment for measuring $g$ and disagreed on the best way to use a line of best fit to analyze the data. The experiment measures the acceleration of an Atwood machine and uses the theoretical relation $a= \frac{m_1-m_2}{m_1+m_2}g$. The AP problem wants students to take some sample data, plot $a$ versus $\frac{m_1-m_2}{m_1+m_2}$, and interpret the slope of the line as an experimental value of $g$. The question is whether the line of best fit should be made to pass through the origin or not. That is, should we fit the form $a = mz+b$ to the data or just $a=mz$? My argument is that the model we have does not have a constant term and adding one would be overfitting, so we should not fit to $a = mz+b$, just like we shouldn't fit to $a = mz^2 +bz+c$. Another teacher argued that we should treat the data as the data and fit its true line of best fit, independent of what we think the model might be. Obviously I think I am right, but am I mistaken? Answer: You should almost always include the intercept. Not including the intercept can lead to bias in your estimate of the slope in your model as well as other problems. it is generally a safe practice not to use the regression-through-the-origin model and instead use the intercept regression model. If the regression line does go through the origin, b0 with the intercept model will differ from 0 only by a small sampling error, and unless the sample size is very small use of the intercept regression model has no disadvantages of any consequence. If the regression line does not go through the origin, use of the intercept regression model will avoid potentially serious difficulties resulting from forcing the regression line through the origin when this is not appropriate. (Kutner, et al. Applied Linear Statistical Models. 2005. McGraw-Hill Irwin). This I think summarizes my view on the topic completely.
Other cautionary notes include: Even if the response variable is theoretically zero when the predictor variable is, this does not necessarily mean that the no-intercept model is appropriate (Gunst. Regression Analysis and its Application: A Data-Oriented Approach. 2018. Routledge) It is relatively easy to misuse the no intercept model (Montgomery, et al. Introduction to Linear Regression. 2015. Wiley) regression through the origin will bias the results (Lefkovitch. The study of population growth in organisms grouped by stages. 1965. Biometrics) in the no-intercept model the sum of the residuals is not necessarily zero (Rawlings. Applied Regression Analysis: A Research Tool. 2001. Springer). Caution in the use of the model is advised (Hahn. Fitting Regression Models with No Intercept Term. 1977. J. Qual. Tech.) To explore this in a little more depth let's suppose that our data follows the equation $$y=\beta_1 x + \beta_0 + \mathcal{N}(0,\sigma)$$ where for concreteness $\beta_0=6$ and $\sigma=5$. Suppose also that we have a good scientific theoretical model that says $\beta_0 = 0$. Let's see what happens if we fit our data to 3 different models: An "overfitted" quadratic model: $y=\beta_2 x^2 + \beta_1 x + \beta_0 $ The recommended "intercept" model: $y=\beta_1 x + \beta_0$ The theoretical "no-intercept" model: $y=\beta_1 x$ Let's sample 21 data points as follows: Now, visually it seems that for $\beta_1=1$ and $\beta_0=6$ and $\sigma=5$ the small intercept is negligible, and the theoretical no-intercept model should be fine to use. The no-intercept model has an estimated $\beta_1 = 1.100 \ [1.044,1.157]$ which confidently excludes the true value of $1$. In contrast, the intercept model has an estimated $\beta_1 = 0.944 \ [0.874, 1.014]$ and the quadratic model has $\beta_1 = 1.091 \ [0.825, 1.358]$, both of which include the true value in the 95% confidence interval. 
If we repeat this 1000 times we obtain the following histogram: The intercept model is the best of these three, with the no-intercept model missing the true parameter in its confidence interval and the quadratic model having an overly broad confidence interval. This is confirmed by the Bayesian information criterion (BIC) which is lowest for the intercept model. So one danger of the no-intercept model is the tendency to artificially introduce bias into the slope. Another issue is the tendency to produce a statistically significant result even when there is no trend. To investigate this we will generate data with $\beta_1=0$ and $\beta_0=6$. In this case the no-intercept model hallucinates a slope of $\beta_1=0.974 \ [0.521,1.428]$. Not only does this model invent a non-existent effect, it is quite confident, with a highly significant p-value of $p<0.001$, that the effect is non-zero. In contrast the intercept model obtains a non-significant ($p=0.792$) slope of $\beta_1 = 0.097 \ [-0.658,0.852]$, and the quadratic model obtains $\beta_1 = 1.892 \ [-0.951,4.735]$. Again, repeating 1000 times we obtain Again, the intercept model is the best of these three, with the no-intercept model missing the true parameter in its confidence interval and the quadratic model having an overly broad confidence interval. This is confirmed by the BIC which is again lowest for the intercept model. So another danger of the no-intercept model is the tendency to artificially invent effects that do not exist, and to falsely produce such effects with a high degree of confidence. Finally, let's examine the behavior of these models in the situation where the no-intercept model is actually appropriate. Here we will set $\beta_1=1$ and $\beta_0=0$ so the data actually matches the theoretical no-intercept model. In this case all three models include the true slope of $1$ in the confidence interval. 
The no-intercept model estimates $\beta_1 = 1.021 \ [0.972,1.069]$ while the intercept model estimates $\beta_1 = 1.043 \ [0.947,1.139]$ and the quadratic model estimates $\beta_1 = 0.768 \ [0.415, 1.120]$. Repeating this 1000 times we obtain the histogram: This time, the no-intercept model is slightly better. All models provide an unbiased estimate of the $\beta_1$ parameter, but the no-intercept model has a slightly more narrow confidence interval. This is reflected in the fact that the no-intercept model has the lowest BIC of the three. So, if a no-intercept model is desired, then an appropriate procedure would be to fit an intercept model, check the intercept, if it is not significant then fit the no-intercept model, and use some model-selection criterion to choose. But the first step will necessarily be to fit an intercept model. And often the extra steps are not worth the small improvement in precision gained with the no-intercept model.
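The intercept-bias mechanism described above is easy to reproduce: with the generating model $y = x + 6 + \mathcal{N}(0,5)$, forcing the fit through the origin tilts the slope upward to absorb the missing intercept. A sketch with 21 points on $[0,20]$, as in the simulations above; the seed and replication count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
beta1, beta0, sigma = 1.0, 6.0, 5.0
x = np.linspace(0, 20, 21)

slopes_origin, slopes_int = [], []
for _ in range(500):
    y = beta1 * x + beta0 + rng.normal(0, sigma, x.size)
    slopes_origin.append(np.sum(x * y) / np.sum(x * x))  # forced through origin
    slopes_int.append(np.polyfit(x, y, 1)[0])            # with intercept

print(np.mean(slopes_origin))   # biased above the true slope of 1
print(np.mean(slopes_int))      # close to the true slope
```

The through-origin estimator has expectation $\beta_1 + \beta_0 \sum x_i / \sum x_i^2 \approx 1.44$ for this design, so the simulated bias is not a small-sample fluke but a structural consequence of dropping the intercept.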
{ "domain": "physics.stackexchange", "id": 96116, "tags": "kinematics, measurements, statistics, data-analysis" }
Summing all keys in the leaves of a tree
Question: My program takes 35 sec to run example 2. How can I make my program faster?

def sum_keys(K, inputs, count=1):
    A, B, M, L1, L2, L3, D, R = list(map(int, inputs))
    x = ((A*K)+B)%M
    y = ((A*K+2*B)%M)
    if K < L1 or count == D:
        my_list.append(K)
    elif L1 <= K < L2:
        sum_keys(x, inputs, count + 1)
    elif L2 <= K < L3:
        sum_keys(y, inputs, count + 1)
    elif L3 <= K:
        sum_keys(x, inputs, count + 1)
        sum_keys(y, inputs, count + 1)
    return sum(my_list)

def read_input(inputstring):
    inputs = inputstring.split()
    A, B, M, L1, L2, L3, D, R = list(map(int, inputs))
    x = ((A*R)+B)%M
    y = ((A*R+2*B)%M)
    if L1 <= R < L2:
        return sum_keys(x, inputs)
    elif L2 <= R < L3:
        return sum_keys(y, inputs)
    elif L3 <= R:
        sum_keys(x, inputs)
        return sum_keys(y, inputs)

my_list = []

if __name__ == '__main__':
    print(read_input(input()))

Answer: I dislike your duplicate, near copy-and-paste, functions. The differences between the functions are:
One splits the string.
One doesn't append to the list.
One uses R, the other K, to calculate x and y.
This means you can reduce the amount of code: make an outer function that splits the string and passes R in as the initial key, and change the inner function so that it does not add to the list on the first call. You also want A, B, etc. and your my_list to be in this outer function's scope, which makes a closure perfect here! You should also generate all the keys before calling sum; when I profiled your code, a significant amount of time was spent in sum. You can also reduce your ifs: if you test K < L1 first, you only need to test against the upper limit in each of the other branches.
This should result in something like:

def read_input(input_string):
    A, B, M, L1, L2, L3, D, R = map(int, input_string.split())
    keys = []

    def inner(K, count):
        x = (A * K + B) % M
        y = (A * K + 2 * B) % M
        if count == D:
            keys.append(K)
        elif K < L1:
            if count != 0:
                keys.append(K)
        elif K < L2:
            inner(x, count + 1)
        elif K < L3:
            inner(y, count + 1)
        else:
            inner(x, count + 1)
            inner(y, count + 1)

    inner(R, 0)
    return sum(keys)

import cProfile
cProfile.run('read_input("717 244 2069 280 300 450 20 699")')
print(read_input("717 244 2069 280 300 450 20 699"))
cProfile.run('read_input("31 17 43 5 15 23 5 30")')
print(read_input("31 17 43 5 15 23 5 30"))
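One further speed-up worth noting: different branches of the recursion can reach the same (K, count) state, and the subtree below a given state always contributes the same sum. If the function returns subtree sums instead of appending to a shared list, it becomes a pure function of (K, count) and can be memoized, which caps the work at one call per distinct state instead of one call per leaf. A sketch (the root dispatch mirrors the original read_input; the function name is my own):

```python
from functools import lru_cache

def tree_sum(input_string):
    A, B, M, L1, L2, L3, D, R = map(int, input_string.split())

    @lru_cache(maxsize=None)
    def subtree(K, count):
        # Sum of all leaf keys in the subtree rooted at state (K, count).
        if count == D or K < L1:
            return K
        x = (A * K + B) % M
        y = (A * K + 2 * B) % M
        if K < L2:
            return subtree(x, count + 1)
        if K < L3:
            return subtree(y, count + 1)
        return subtree(x, count + 1) + subtree(y, count + 1)

    # Root dispatch, mirroring the original: the root key itself is never a leaf.
    if R < L1:
        return 0
    if R < L2:
        return subtree((A * R + B) % M, 1)
    if R < L3:
        return subtree((A * R + 2 * B) % M, 1)
    return subtree((A * R + B) % M, 1) + subtree((A * R + 2 * B) % M, 1)
```

There are at most M * D distinct states, so even inputs whose tree has exponentially many leaves finish quickly.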
{ "domain": "codereview.stackexchange", "id": 19616, "tags": "python, performance, algorithm, tree, complexity" }
"Stationary" vs. moving wave packet
Question: I am working through a quantum mechanics problem involving the time evolution of a free particle (the particle is a proton) given that the initial state is a Gaussian wave packet of the form: $$ \psi(x,0)=(2\pi\sigma^2)^{-1/4}e^{-x^2/4\sigma^2}\,, $$ where $\sigma$ is the width of the Gaussian. I worked through it and got that the evolution, $\psi(x,t)$, is $$ \psi(x,t)=\frac{\sqrt{\frac{1}{\sigma^2}}(\sigma^2)^{3/4}}{2^{1/4}\pi^{3/4}}\int_{-\infty}^{\infty}e^{i(kx-\hbar k^2t/2m_p)-k^2\sigma^2}dk\,. $$ To derive this, I used the Fourier transform to expand $\psi(x,0)$ in terms of the eigenfunctions of a free particle, which are the plane waves. Then, I computed $\phi(k)$ using the inverse Fourier transform and substituted this back into the Fourier integral for $\psi(x,0)$. To compute $\psi(x,t)$, I realized that the component waves of the wave packet must propagate independently from one another, and the time evolution of a general plane wave is given by $\psi(\vec{r},t)=Ae^{i(\vec{k}\cdot\vec{r}-\omega t)}$. I multiplied this by the integrand of $\psi(x,0)$ to obtain $\psi(x,t)$, where I substituted $\hbar k^2/2m_p$ for $\omega$ (I used the Planck relation and the energy eigenvalues of a free proton). Performing the integration and substituting the necessary constants, I plotted the probability density $|\psi(x,t)|^2$ and got a stationary Gaussian centered about $x=0$ that spreads out in time. By stationary, I mean that it does not move along the $x$-axis. It stays symmetric about the origin. I have also seen Gaussian wave packets that move along the $x$-axis as they spread. So, what is the difference? What makes a Gaussian stay centered while some appear to move? Answer: If you look at your distribution in momentum space, you can see that it is an even function about $p=0$. For this reason, the average value of the momentum is zero, and so the center of the wave packet will remain stationary. 
However, you can "imprint" a momentum on the original position-space wave function by multiplying by a plane wave. That is, if the initial state is given by $$ \psi(x,0)=(2\pi\sigma^2)^{-1/4}e^{ik_0x}e^{-x^2/4\sigma^2}\,, $$ then the center of the momentum-space wave function is at $p = \hbar k_0$, indicating that the average value of the momentum is $\hbar k_0$. Then, the center of the wave packet in position space will be at $x=\frac{\hbar k_0}{m}t$, i.e., the wave packet will move with a speed given by $p_0/m$.
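This is easy to check numerically: expand the packet in plane waves with an FFT and evolve each component with the free-particle dispersion $\omega=\hbar k^2/2m$. A sketch in units $\hbar=m=1$, with assumed values $\sigma=1$, $k_0=2$, $t=3$:

```python
import numpy as np

N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
sigma, k0, t = 1.0, 2.0, 3.0

# Gaussian packet with an imprinted momentum hbar*k0
psi0 = (2 * np.pi * sigma**2) ** -0.25 * np.exp(1j * k0 * x - x**2 / (4 * sigma**2))

# Evolve each plane-wave component by exp(-i k^2 t / 2)  (hbar = m = 1)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k**2 * t / 2))

prob = np.abs(psi_t) ** 2
center = np.sum(x * prob) / np.sum(prob)
print(center)   # close to k0 * t = 6: the packet center moves at speed hbar*k0/m
```

With $k_0=0$ the same computation leaves the center pinned at $x=0$, reproducing the "stationary" spreading packet of the question.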
{ "domain": "physics.stackexchange", "id": 94048, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, fourier-transform, time-evolution" }
Mapping of categorical features into binary indicator features
Question: I am currently reading an introductory machine learning book by Daumé (ch. 03, p. 30) and when discussing the mapping of categorical features with "n" possible values into "n" binary indicator features, the following question is proposed: Is it a good idea to map a categorical feature with n values to log2(n) binary features? Why wouldn't that be the case, seeing how many resources could be spared by working with fewer features? Does this approach depend on the model that is being used? Answer: Compactifying the data like this saves space in memory, but it adds false relationships that one-hot encoding doesn't. Let's consider a categorical feature with levels A, B, C, D, that you decide to encode as 00, 01, 10, 11 respectively. In a linear model, you only get three parameters (a constant and one for each new feature); you can fit "the right" parameters to hit A, B, and C, but then fixing those parameters determines what happens on D, and may be a very poor fit there. (You're now assuming essentially that (B-A)+(C-A)=D-A.) In k-NN, the distance between any two distinct levels in the one-hot encoding is the same, $\sqrt{2}$. In this encoding, the distance between A and B is 1, but the distance between A and D is $\sqrt{2}$. In SVM, the set {A,D} is not separable from {B,C}. With one-hot encoding, everything is separable. In a tree model, using the raw categorical allows the tree to split any subset of the levels against the rest; using this binary encoding forces the tree to split (AB|CD) or (AC|BD) [missing (AD|BC)]; using one-hot encoding forces the tree to split (A|BCD) or (B|ACD) or (C|ABD) or (D|ABC). The tree can split differently subsequently and eventually recover an arbitrary split, but a greedily built tree might never accomplish that. Notice in particular that in these last three models, we've made A and D "more different" from each other than we might have reason to believe.
And in this small example, the catches were fairly small/few, but as the dimension increases these tend to become more exaggerated. Now, it may still be the case that it's worth doing, but these are some things to consider.
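The distance claims above are easy to check directly as Euclidean distances between the encoded vectors:

```python
import numpy as np

one_hot = {"A": [1, 0, 0, 0], "B": [0, 1, 0, 0], "C": [0, 0, 1, 0], "D": [0, 0, 0, 1]}
binary  = {"A": [0, 0], "B": [0, 1], "C": [1, 0], "D": [1, 1]}

def dist(enc, u, v):
    return np.linalg.norm(np.array(enc[u]) - np.array(enc[v]))

# One-hot: every pair of distinct levels is equally far apart (sqrt(2)).
print(dist(one_hot, "A", "B"), dist(one_hot, "A", "D"))
# Binary 00/01/10/11: A and B are neighbors, A and D sit on the diagonal.
print(dist(binary, "A", "B"), dist(binary, "A", "D"))
```

So a distance-based model such as k-NN treats all one-hot levels symmetrically, while the compact encoding silently makes A and D "more different" than A and B.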
{ "domain": "datascience.stackexchange", "id": 5473, "tags": "machine-learning, categorical-data, beginner, encoding, k-nn" }
What are unitary representations used for in physics?
Question: Specifically in quantum mechanics I have seen unitary representations crop up a few times. I understand what they are and how they work mathematically, but I'm unsure what use they have in physics. My understanding is that they are linear representations which are also unitary operators, which are applied to Hilbert spaces and preserve the inner product. But why not just use unitary operators instead of unitary representations? Is there something about the groups being represented that makes them important to the Hilbert space too? Answer: Observables are (in the simplest cases) Hermitian operators, not unitary operators. Exponentiation of Hermitian operators gives unitary operators, e.g. the time evolution operator $U(t)=e^{-it\hat H}$ when the Hamiltonian is time independent. Unitary operations often encapsulate fundamental physical symmetries of the system at the global level, without affecting the norm of the states, i.e. guaranteeing probability is not lost through symmetry operations. For instance, by rotational invariance, one can always make a (unitary) rotation of the system and choose the quantization axis for the angular momentum to be $\hat z$. Of course a unitary transformation will also take you from a basis where $\hat L_z$ is diagonal to a basis where (say) $\hat L_x$ is diagonal, and this gives you insight into the possible outcomes of measuring $\hat L_x$: since the physics does not depend on the orientation of axes, it must be that the possible outcomes of measuring $\hat L_x$ are identical to those of $\hat L_z$. (The probabilities, which depend on the basis, can be different for a given state.) Beyond angular momentum one can also think of various relations between cross-sections in theories where operators are connected by symmetries, which must in turn be implemented by unitary transformations.
There are some applications - for instance in quantum optics - where the actual group representations are very useful, as this paper on SU(2) and SU(1,1) interferometers shows. There are generalizations of this to more modes. They are also useful in constructing coherent states and can thus be used as starting points of phase-space methods (e.g. $P$-functions, $Q$-functions or Wigner functions). The wave-functions of rigid rotors are properly symmetrized functions of group representations. There are other applications of course, but the ones above are directly applicable to SU(2), for which the representations are well-known. Finally, there is some work done on non-unitary representations of states. This was done in the context of particle decay, for instance as was done here by Barut and Raczka, but this never really "caught on".
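The first point of the answer — that exponentiating a Hermitian operator yields a unitary one — can be checked numerically. A sketch with a random 4x4 Hermitian matrix standing in for a Hamiltonian, using the spectral decomposition to build $U = e^{-itH}$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                  # Hermitian "Hamiltonian"

# U = exp(-i t H) via H = V diag(w) V^dagger
t = 0.7
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * t * w)) @ V.conj().T

# Unitarity: U^dagger U = I, so norms (total probability) are preserved
print(np.linalg.norm(U.conj().T @ U - np.eye(4)))           # ~ 0
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
print(abs(np.linalg.norm(U @ psi) - np.linalg.norm(psi)))   # ~ 0
```

The same construction works for any Hermitian generator, which is exactly why symmetry operations built from Hermitian generators act unitarily on the Hilbert space.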
{ "domain": "physics.stackexchange", "id": 38888, "tags": "hilbert-space, group-representations, representation-theory" }
People perception modules: detectors and trackers
Question: Hello, I am working on human-aware navigation and human-robot spatial interaction; therefore, I am looking into different people detectors. In our project we use a ROSyfied version of this detector: www.vision.rwth-aachen.de/publications/pdf/jafari-realtimergbdtracking-icra14.pdf which is based on RGB-D data, gives quite reasonable results, and will be made public soon. However, to enhance this detection I am currently looking for other people detection methods (like laser-based leg detection) which I can combine with our tracker. Since the detection and tracking are not really part of my work, I am looking for existing ROS packages that can be easily installed and trained. Our set-up comprises ROS Hydro on Ubuntu 12.04, using a SICK S300 and an ASUS Xtion. So far I have looked at David Lu's fork of the people_experimental stack: github.com/DLu/people, which, apart from some compilation errors that are easy to fix, works out of the box but gives very bad results (it only detects legs at distances < 2 m), which I think is because of the bad resolution of our laser (3 cm in distance). Due to the non-existent or very well hidden documentation I have no idea how to retrain it with data collected from our robot. Any help on this would be greatly appreciated. Almost all of the other perception algorithms I found or which are mentioned on this site are either not catkinized or are not available as a hydro package. My main question to the community would therefore be: What other people detectors are available for hydro? Any hints and suggestions would be greatly appreciated. I am sorry if this question has been asked already. I could only find one similar question, which exclusively listed things that do not seem to exist for hydro. If there is a similar thread I would appreciate it if you could refer me to it. Cheers, Christian P.S.: Apparently my karma is insufficient to publish links. Sorry for the workaround.
Originally posted by Chrissi on ROS Answers with karma: 1642 on 2014-08-05 Post score: 1 Original comments Comment by Dan Lazewatsky on 2014-08-05: FYI, the leg detector from that fork has been merged into the main people repo, and is now available in the debs in hydro. Comment by Chrissi on 2014-08-06: Thanks Dan, I tried that as well but some of the launch files don't work (wrong paths to config files and included launch files) so I decided to check it out from github because that makes it easier to mend imo. Do you know if there is a more detailed documentation on the usage of the leg_detector than this: http://wiki.ros.org/leg_detector?distro=hydro ? Comment by Dan Lazewatsky on 2014-08-06: If something isn't working, please submit a ticket in the issue tracker so we can get it fixed. I'm not aware of any more detailed documentation, but @David Lu might be able to help. Comment by Chrissi on 2014-08-06: Don't get me wrong, the leg_detector works fine. It is just some of the other components as discussed here: http://answers.ros.org/question/78026/problem-with-leg_detector-and-people_tracking_filter/ Comment by Dan Lazewatsky on 2014-08-06: You said some of the launch files don't work - I was referring to that. Comment by Chrissi on 2014-08-06: Ah, OK. Sorry for the confusion. Never used the issue tracker. A link would be nice. Thanks. Comment by Dan Lazewatsky on 2014-08-06: https://github.com/wg-perception/people/issues/new Answer: https://github.com/spencer-project/spencer_people_tracking This Github repository contains people and group detection and tracking components developed during the EU FP7 project SPENCER for 2D laser, camera and RGB-D data. As the project is still going on, new detection and tracking modules and documentation will still be added during the next 12 months. 
Our laser detector (reimplementation of the boosted classifier using laser segment features from Arras et al., ICRA'07) is trained on data from an LMS 200 and LMS 500 at around 70 cm height above ground, and it works at ranges up to 15-20 meters, though precision drops at larger distances. In general, detection results are of course much better when visual data is available (esp. in complex environments). We have also integrated the upper-body RGB-D detector and groundHOG detectors by RWTH Aachen by Jafari et al. mentioned in the original question, as well as the RGB-D people detector from PCL. All components are tested on ROS Hydro and Indigo. There is also a set of reusable RViz plugins for visualizing the outputs of the perception pipeline. Originally posted by timm with karma: 376 on 2015-04-10 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 18911, "tags": "ros, leg-detector" }
Why do we give so much importance to energy, i.e., the conserved quantity under time symmetry?
Question: In almost all equations—from GR to QFT—energy conservation is a tool for solving those equations, but we know that energy on large scales is not conserved. Why do we still use this (not) conserved quantity in our fundamental laws of physics? Answer: As a complement to Nickolas's great answer, note that throughout physics, conservation laws are so useful that even approximate conservation laws can be of enormous interest. Conservation laws let you deduce aspects of the behavior of a physical system, without needing to actually solve equations. This is tremendously useful for (a) building intuition, (b) quickly solving problems, (c) providing ways to check if there are mistakes in the solutions to the equations. Some examples of approximate conservation laws include: Mass (which is conserved to a very good approximation in non-relativistic physics, but is not conserved in special relativity) Kinetic energy is approximately conserved in collisions which are approximately-but-not-exactly elastic. "Elastic" collisions in a freshman college physics lab will often be analyzed as if kinetic energy were exactly conserved. Baryon number (this is used in particle physics) Energy is not an exact conservation law in general in GR (although in some cases it is). However, even in GR, energy is approximately conserved locally -- meaning, on time scales short compared to the scale on which the gravitational field is changing in time. In fact, in ordinary circumstances in a lab, this conservation law holds to extremely high precision. That's why energy conservation is still useful, even though it isn't exact.
{ "domain": "physics.stackexchange", "id": 89795, "tags": "energy, particle-physics, cosmology, energy-conservation" }
What robots are based on ROS?
Question: Please give a sample, thanks! Originally posted by roschina on ROS Answers with karma: 3 on 2014-12-20 Post score: 0 Answer: You can find a list here: Robots Using ROS Originally posted by bvbdort with karma: 3034 on 2014-12-20 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 20398, "tags": "ros" }
Reduction graph to planar bounded treewidth and bounded diameter graph
Question: We have a reduction from a general graph to a planar bounded-treewidth graph, but it is unlikely to be correct. Let $H$, the planarizing gadget, be a planar graph with four distinguished vertices $u,u',v,v'$ on the outer face. Take a graph $G$ drawn in the plane. Add a new vertex $S$, adjacent to all vertices of $G$. So far the diameter is at most two. Replace each pair of crossing edges $(u,u'),(v,v')$ by a new copy of the gadget $H$. The resulting graph $G'$ is planar with diameter $D = 2\max(d(u,u'),d(v,v'))$ where $d$ is the distance in $H$. The treewidth of $G'$ is $O(D)$, which is constant for fixed $H$; for a reference see here. A similar reduction with a specially chosen $H$ is used to show NP-hardness of problems for planar graphs. What is wrong with this reduction? Correctness of the reduction is unlikely, because for bounded-treewidth graphs a lot of graph invariants are computable in polynomial time, and choosing a suitable gadget $H$ might give a relation between invariants of $G$ and $G'$, implying $P=NP$. Another reference claims "bounded genus graphs of bounded diameter have bounded treewidth".
{ "domain": "cstheory.stackexchange", "id": 4882, "tags": "graph-theory, graph-algorithms, planar-graphs" }
Does the human organism know what we have to eat?
Question: My mother has told me that if your organism needs a vitamin, you crave fruits that include that vitamin. For example, if you have a lack of vitamin C, you would like to eat lemons. Is there any explanation for this, or is it just a myth? Does the human organism know what we have to eat? Answer: Assuming you mean organism, not organisation, it's not entirely false that cravings can be driven by deficiencies. The eating disorder pica, in which a person eats non-food items like dirt and clay, can be caused by iron deficiency (Source). However, that article also suggests the possibility that pica causes iron deficiency by causing the person to eat items that lower iron absorption. So, my answer to your question is, maybe.
{ "domain": "biology.stackexchange", "id": 3529, "tags": "human-biology, vitamins" }
What does it mean for something to be a ket?
Question: Ok so I will provide the following example, which I am choosing at random from Sabio et al. (2010): $$\psi(r,\phi)~=~\left[ \begin{array}{c} A_1r\sin(\theta-\phi)\\ A_2\frac{K}{2}r^3\sin^3(\theta-\phi)\\ A_2r^2\sin^2(\theta-\phi)\\ -A_1r\sin(\theta-\phi)\\ \end{array} \right].$$ Obviously, this is a 4-component vector, but the author calls it a wavefunction. In fact, calling such things wavefunctions is fairly common in my little corner of condensed matter theory, and I'm still unsure whether this is sloppy language, or I am missing something. Perhaps the thing on the LHS should in fact be the ket $|\psi\rangle$, because my understanding of a wavefunction is that it cannot be a vector. I.e., it is defined: $$\psi(x)~=~\langle x|\psi\rangle.$$ This should just be a number, correct? I.e. at any point (let's say $x=1$ for a 1-dimensional system) that wavefunction should just be a number, not a vector? Answer: An inner product doesn't necessarily have to produce a scalar. For example, consider matrix multiplication. When you take the inner product of an $M\times N$ matrix and an $N\times P$ matrix, you get another matrix which is $M\times P$. The inner product only "collapses" one dimension of the first thing with one dimension of the second thing. You can even consider "generalized matrices" which can have numbers of dimensions other than 2. Say you have a matrix of dimensions $A\times B \times C$, and another of dimensions $B\times C\times D$. There are actually two different ways you can take the inner product of these two matrices; you can combine the second dimension of the first one with the first dimension of the second one, and get a result with dimensions $A\times C\times C\times D$. Or you can combine the third dimension of the first one with the second dimension of the second one, and get a result that is $A\times B\times B\times D$. Or you could do both (that would be two inner products), and get an $A\times D$ result.
In the case of wavefunctions, you know that a wavefunction is defined from the inner product $\langle \vec{x}\vert\psi\rangle$, where $\lvert\psi\rangle$ and $\lvert \vec{x}\rangle$ are quantum states. These quantum states are abstract objects, and in particular, they're not constrained to having only one dimension. In your example, the state $\lvert\psi\rangle$ is some abstract object that probably has four dimensions: three of the dimensions correspond to spatial position, and are labeled by the index $\vec{x}$ (or $r$, $\theta$, $\phi$, given how it's written), but there is another dimension that has four components. You can think of $\lvert\psi\rangle$ as a matrix which has dimensions $A\times B\times C\times D$, where $A$, $B$, and $C$ just happen to be infinity (and $D = 4$). When you take the three inner products of a matrix of dimension $A\times B\times C$ with a matrix of dimension $A\times B\times C\times D$, you get a vector of dimension $D$, and that's exactly what's happening here. If you want to get a single component out of this state $\lvert\psi\rangle$, you will need to take four inner products: the three that are involved in $\lvert \vec{x}\rangle$, and one with a basis vector along the other dimension, let's call it $e_i$. These basis vectors would be of the form $$e_0 = \begin{bmatrix}1 \\ 0 \\ 0 \\ 0\end{bmatrix}$$ or similar. A single component of the state might be written $\psi_i(x)$, and it would be defined as the quadruple inner product $$\psi_i(\vec{x}) = e_i\cdot \psi(\vec{x}) = e_i\cdot\langle\vec{x}\vert\psi\rangle$$ If you already have a representation of $\psi$ in terms of its components, you could also make use of this identity: $$\psi_i(\vec{x}) = \sum_j \iiint \mathrm{d}^3\vec{x}' \underbrace{(e_i)_j}_{\text{component of basis vector}}\overbrace{\delta(\vec{x}' - \vec{x})}^{\text{component of basis function}}\psi_j(\vec{x}')$$ which really just tells you how to express the same object in a different basis.
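Concretely, on a discretized grid the state from the question is just an array with spatial indices ($r$, $\phi$ here) plus one 4-component index, and "taking a component" is an ordinary contraction with a basis vector. A sketch with assumed values $A_1 = A_2 = K = 1$ and $\theta = 0$ (the paper's actual constants are not given):

```python
import numpy as np

r = np.linspace(0.0, 1.0, 50)
phi = np.linspace(0.0, 2 * np.pi, 60)
R, P = np.meshgrid(r, phi, indexing="ij")
s = np.sin(0.0 - P)            # sin(theta - phi) with theta = 0 (assumed)

# The 4-component "wavefunction": an array of shape (50, 60, 4)
psi = np.stack([R * s, 0.5 * R**3 * s**3, R**2 * s**2, -R * s], axis=-1)

# Extracting one component = contracting with a basis vector e_i
e0 = np.array([1, 0, 0, 0])
psi_0 = psi @ e0               # psi_0(r, phi): an ordinary scalar wavefunction
```

Each such contraction collapses the 4-component index and leaves a plain complex-valued (here real-valued) function of position, which is the "number at each point" the questioner expects.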
{ "domain": "physics.stackexchange", "id": 6687, "tags": "quantum-mechanics, wavefunction, hilbert-space" }
On work done by motion of objects in elliptic orbits
Question: In uniform circular motion, the force is perpendicular to the instantaneous direction of motion. So work done is zero. But if an object is in elliptical orbit such as a planet, I find it hard to understand how the force is perpendicular to direction of motion. And if it is not, some work is done which also doesn't seem to make sense. Answer: When the planet is getting closer to the Sun, the Sun is doing positive work on it. When it is getting farther from the Sun, the Sun is doing negative work on it. Over a complete orbit, the net work done is zero.
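A quick numerical check of the statement above: integrate an elliptical orbit (leapfrog scheme, units $GM = 1$) and watch the power $\vec{F}\cdot\vec{v}$ — it is nonzero along the orbit, yet the kinetic energy (and hence the net work) returns to its initial value after a full period. The initial conditions are assumptions chosen to give a clearly non-circular orbit:

```python
import numpy as np

GM = 1.0
r = np.array([1.0, 0.0])
v = np.array([0.0, 0.8])          # slower than circular speed -> ellipse, starting at apoapsis
dt = 1e-4

def acc(r):
    return -GM * r / np.linalg.norm(r) ** 3

ke0 = 0.5 * (v @ v)
energy = ke0 - GM / np.linalg.norm(r)
a = -GM / (2 * energy)            # semi-major axis from vis-viva
T = 2 * np.pi * a ** 1.5          # Kepler's third law with GM = 1

max_power = 0.0
for _ in range(int(T / dt)):      # leapfrog (kick-drift-kick)
    v += 0.5 * dt * acc(r)
    r += dt * v
    v += 0.5 * dt * acc(r)
    max_power = max(max_power, abs(acc(r) @ v))

print(max_power)                  # clearly nonzero: gravity does work mid-orbit
print(abs(0.5 * (v @ v) - ke0))   # ~ 0: net work over the closed orbit vanishes
```

The positive work done while falling inward exactly cancels the negative work done while climbing back out.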
{ "domain": "physics.stackexchange", "id": 77809, "tags": "newtonian-mechanics, newtonian-gravity, energy-conservation, work, orbital-motion" }
RadixSort implementation design and performance
Question: Credits for the original implementation I based my code on: Quora - What is the most efficient way to sort a million 32-bit integers?

Implementation

// RadixSort - works for values up to unsigned_int_max (32-bit)
template<typename Iter>
void radix_sort(Iter __first, Iter __last){
    typedef typename iterator_traits<Iter>::value_type value_type;
    vector<value_type> out(__last - __first);

    // calculate most-significant-digit (256-base)
    value_type __mx = *max_element(__first, __last); //O(n)
    int __msb = 0;
    do {
        __msb += 8;
        __mx = __mx >> 8;
    } while(__mx);

    Iter __i, __j, __s;
    bool __swapped = false;
    for (int __shift = 0; __shift < __msb; __shift += 8) {
        // cycle input/auxiliar vectors
        if (__swapped) {
            __i = out.begin();
            __j = out.end();
            __s = __first;
        } else {
            __i = __first;
            __j = __last;
            __s = out.begin();
        }

        // counting_sort
        size_t count[0x100] = {};
        for (Iter __p = __i; __p != __j; __p++)
            count[(*__p >> __shift) & 0xFF]++;

        // prefix-sum
        size_t __m, __q = 0;
        for (int i = 0; i < 0x100; i++) {
            __m = count[i];
            count[i] = __q;
            __q += __m;
        }

        // filling result
        Iter __v;
        for (Iter __p = __i; __p != __j; __p++) {
            __v = __s + count[(*__p >> __shift) & 0xFF]++;
            *__v = *__p;
        }

        __swapped = !__swapped;
    }

    // if ended on auxiliar vector, copy to input vector
    if (__swapped)
        copy(out.begin(), out.end(), __first); //O(n)
}

Discussion

In order to implement a more general algorithm than the one used as reference, I tried to refactor the code using templates and iterators. This is the main difference between my code and the original. I can't simply swap references when using iterators, so I approached this problem by cycling the iterators used in each counting-sort loop. Doing so prevents the need to copy the output array on every loop. Another different aspect of my code is that I find the max element and calculate the most significant digit in base 256.
I use this information to determine how many counting-sort loops are necessary, instead of hardcoding 32 (unsigned_max) and always running the loop 4 times even if all values are less than 255. This actually adds unnecessary overhead if 4 loops are necessary, but it should reduce execution time otherwise. The temporary container used is a vector<value_type> and this is something I'm not quite sure about; I feel like this is a point where I could improve my code. I'd like to hear opinions about it.

Questions
What are possible improvements that can be made to my implementation?
How would you re-factor the iterator cycle I used? (I hate it)
Which container should I use for the temp-array?
Should I use std::copy or std::move to move the data from the temp-array to the input container?

Relevant Information

I decided to run some benchmarks to test if calculating the maximum digit makes a huge difference. element_values defines the range of values each element can have. The maximum digit only affects the number of loops if element_values can be represented with 24 bits or less. Here are the results for n = 1E7:

element_values < 256 (8 bits):
radix_sort_msd - Average time: 51.59 ms
radix_sort_32 - Average time: 109.03 ms

element_values < UINT_MAX (32 bits):
radix_sort_msd - Average time: 107.38 ms
radix_sort_32 - Average time: 89.75 ms

While radix_sort_msd works really well when element_values is small, it really just depends on the dataset. Therefore implementing it is a matter of preference.

Answer: Improvement to radix_sort_32

One of the things that you were wondering about was whether radix_sort_msd or radix_sort_32 was better. The former called max_element() in order to determine the max width of the values, and the latter always did 4 passes even when they weren't necessary. I took radix_sort_32 and added this code after generating the counts:

// If this byte is zero for every value, skip the byte entirely.
if (count[0] == (size_t)(__last - __first))
    continue;

This is a quick check to find out if the entire input was full of zeroes for the current byte. If so, it moves on to the next byte instead of wasting time making an exact copy of the input to the temp area. It still uses more time than radix_sort_msd in the "values < 256" case, but it is much faster, because for 3 passes it can skip more than 50% of the work. On my machine, radix_sort_32 got 40% faster in the "values < 256" case compared to before the change. This might make you reconsider which version is better.

Hybrid radix sort

Another thing you could do to improve your radix sort is to use a so-called "hybrid" radix sort, which is a hybrid of MSD and LSD radix sorts. The idea is that you use one pass of radix sort on the most significant byte. This breaks the input into 256 "bins". Then you run LSD radix sort (your current sort) on each of the 256 bins. The advantage of this hybrid sort over a plain LSD radix sort is that the hybrid version is more cache friendly. In the plain LSD sort, if your input doesn't fit in cache, then the second counting pass can't benefit from caching because by the end of the first counting pass, the start of the array will already be evicted from the cache. With the hybrid sort, the input is first broken up into 256 parts (which hopefully do fit in the cache). So sorting each bin should be faster due to caching. However, there is extra overhead, because each of the 256 smaller sorts needs to do some fixed amount of work to deal with the counting bucket. So we should only do the hybrid sort if the input is larger than our cache size, otherwise we will only have more overhead and no benefit. I modified your code to do the hybrid sort and it ran faster than the original version on randomized 32-bit input.
I did not attempt to optimize it for the "values < 256" case, although there are definitely possibilities for improvements there, similar to what I mentioned in the previous section. Here is the code:

// LSD RadixSort helper function for radix_sort() below.
template<typename Iter>
static void radix_sort_lsd(Iter __first, Iter __last, Iter __out, Iter __outEnd,
                           int __msb, bool needsSwap)
{
    Iter __i, __j, __s;
    bool __swapped = false;
    for (int __shift = 0; __shift < __msb; __shift += 8) {
        // cycle input/auxiliar vectors
        if (__swapped) {
            __i = __out;
            __j = __outEnd;
            __s = __first;
        } else {
            __i = __first;
            __j = __last;
            __s = __out;
        }

        // counting_sort
        size_t count[0x100] = {};
        for (Iter __p = __i; __p != __j; __p++)
            count[(*__p >> __shift) & 0xFF]++;

        if (count[0] == (size_t)(__last - __first))
            continue;

        // prefix-sum
        size_t __m, __q = 0;
        for (int i = 0; i < 0x100; i++) {
            __m = count[i];
            count[i] = __q;
            __q += __m;
        }

        // filling result
        for (Iter __p = __i; __p != __j; __p++) {
            *(__s + count[(*__p >> __shift) & 0xFF]++) = *__p;
        }

        __swapped = !__swapped;
    }

    // if ended on auxiliar vector, copy to input vector
    if (__swapped != needsSwap)
        copy(__out, __outEnd, __first); //O(n)
}

// If the input exceeds this threshold (in bytes), we do one pass of MSD
// followed by the rest of the passes done by LSD. This number should be
// an estimate of the cache size.
#define THRESHOLD 8000000

// RadixSort - works for values up to unsigned_int_max (32-bit)
template<typename Iter>
void radix_sort(Iter __first, Iter __last)
{
    typedef typename iterator_traits<Iter>::value_type value_type;
    size_t len = (size_t) (__last - __first);
    vector<value_type> out(len);

    // First, test if the input exceeds the caching threshold. If the input
    // is smaller than the threshold, just do a straight LSD radix sort.
    if (len * sizeof(value_type) < THRESHOLD) {
        radix_sort_lsd(__first, __last, out.begin(), out.end(),
                       sizeof(value_type) * 8, false);
        return;
    }

    // Set __shift to the most significant byte.
    int __shift = (sizeof(value_type)-1) * 8;
    Iter __s = out.begin();
    Iter __p = __first;

    // counting_sort
    size_t count[0x100] = {};
    for (size_t i = 0; i < len; i++) {
        count[(*__p++ >> __shift) & 0xFF]++;
    }

    // prefix-sum
    size_t __m, __q = 0;
    for (int i = 0; i < 0x100; i++) {
        __m = count[i];
        count[i] = __q;
        __q += __m;
    }

    // filling result
    __p = __first;
    for (size_t i = 0; i < len; i++) {
        *(__s + count[(*__p >> __shift) & 0xFF]++) = *__p++;
    }

    // For each of the 256 bins, do a LSD radix sort on the bin. The input
    // and auxiliary vectors have been swapped, so we pass needsSwap = true to
    // indicate that the LSD sort should end on the auxiliary vector instead.
    int startIndex = 0;
    for (int i = 0; i < 0x100; i++) {
        int endIndex = count[i];
        radix_sort_lsd(__s + startIndex, __s + endIndex,
                       __first + startIndex, __first + endIndex,
                       __shift, true);
        startIndex = endIndex;
    }
}

Other things

I think that vector<value_type> is an appropriate container for your auxiliary array. I can't think of anything better than that. As far as copy vs move, they should be equivalent for numeric types. See this Stackoverflow question and its answers for some good explanations on why that is.
{ "domain": "codereview.stackexchange", "id": 24564, "tags": "c++, performance, algorithm, sorting" }
Largest rectangle problem
Question: Recently while solving programming challenges for the fun of it, I encountered a challenge which left me kind of puzzled and yearning for a proper solution, other than brute forcing. The problem (stated in my own words) is as follows: In each testcase you are given N (6 <= N <= 10) sticks. Each stick has a length L (1 <= L <= 20) provided. The lengths are represented by space separated integers. Print the maximum area that can be created by forming a rectangle from these sticks.

Example input:

1            // testcase count
6            // number of sticks
1 1 2 2 3 1  // lengths

I have managed to solve this problem using a brute-force method (whose complexity I didn't dare estimate), stacking multiple for loops and lots of if conditional statements. However, I believe there must be some algorithm that would be suitable for this kind of problem - there usually is one smart solution in those algorithm puzzles. I tried to think of a solution for the past few days and didn't come to any meaningful conclusion. I would greatly appreciate any suggestions - I don't need code or ready solutions, only a hint about whether there is some algorithm that could be used here to make the solution more elegant, and with better computational complexity than simple brute force.

EDIT

You are supposed to form the 4 edges of a rectangle. Sticks can be stacked on top of each other to form a longer stick. E.g.

a1 = 2+2
a2 = 3+1
b1 = 1
b2 = 1
a*b = 4

The problem is to maximise the value of a*b.

Answer: Unfortunately this problem is NP-hard, so no computer scientist knows (or is admitting that they know! :-P) a way to solve it in time polynomial in the number of sticks. Don't feel bad about using brute force here -- it's essentially the best you can do.
First, let's reframe the rectangle problem as a decision problem, that is, a problem to which the answer is either YES or NO:

Rectangle: Given a multiset $Y = (y_1, \dots, y_m)$ of stick lengths and an area threshold $k$, is it possible to build a rectangle from the sticks in $Y$ having area at least $k$?

Squares are optimal solutions when they exist

Before getting into the reduction, we will need a property about what a maximal solution to the Rectangle problem looks like. Under the constraint that $a+b=c$, $ab$ is maximised precisely when $a=b$, i.e., squares have uniquely maximum area among all rectangles of fixed perimeter. You can verify this by substituting $b=c-a$ into $ab$ and then differentiating w.r.t. $a$: the derivative becomes zero at $a=c/2=b$, and this is clearly a maximum since the coefficient of the $a^2$ term is negative. The practical significance of this fact is: If and only if it is possible to partition the stick lengths into 4 equal-sum subsets, the maximum area achievable is $s^2/16$, where $s$ is the sum of all stick lengths.

Reduction from Partition

I'll show this is NP-hard by reducing the NP-hard problem Partition to it. In Partition, we are given a multiset of $n$ integers $X=(x_1, \dots, x_n)$, and our task is to partition this multiset into two parts having equal sum. Given an instance $X=(x_1, \dots, x_n)$ of Partition, with total sum $t=\sum x_i$, we construct an instance $Y = (y_1, \dots, y_{n+2})$ of Rectangle by taking each $x_i$ as a stick length $y_i$, and adding two more "big" sticks, $y_{n+1}$ and $y_{n+2}$, both of length $t/2$. (That is, $m=n+2$.) Finally, we set $k$ to $t^2/4$. If the answer to the constructed Rectangle problem instance is YES, then by the maximal-square property, it must be possible to partition $Y$ into four equal-sum parts, each having sum $t/2$. Let $(A, B, C, D)$ be such a partition.
Two of these four multisets must each contain a "big" stick and nothing else, since the big sticks have to go somewhere, and combining one of them with any other sticks would mean that one side exceeds $t/2$, meaning that some other side must fall short of $t/2$, meaning that the rectangle is not square and thus of size strictly less than $t^2/4$ -- a contradiction. Remove these two big sticks, leaving two remaining multisets of sticks, which must also be of equal sum (that is, also $t/2$): these two multisets represent a valid partition of the original $X$ into two equal-sum parts, showing that the answer to the original Partition instance is also YES in this case. In the other direction, if the answer to the original Partition problem is YES, meaning that there is a way to partition $X$ into two equal-sum parts $U$ and $V$, then clearly the answer to the constructed Rectangle problem instance is also YES: We could simply use the four multisets $U$, $V$, $\{y_{n+1}\}$ and $\{y_{n+2}\}$. Since a YES to either problem instance implies a YES to the other, it must also be that a NO to either instance implies a NO to the other: in other words, the answers to both problems are always the same. And since the Rectangle problem instance can be constructed in polynomial time from the Partition instance, any algorithm that can solve Rectangle in polynomial time could also be used as a subroutine to solve Partition (and, by extension, every other problem in NP) in polynomial time. Since Partition is an NP-hard problem, this implies that Rectangle is too.
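Since $6 \le N \le 10$, the brute force the answer endorses is tiny in practice. Here is a minimal sketch (my own illustration, not from the original post; the function name is invented): assign every stick to one of five bins -- side $a$, its opposite, side $b$, its opposite, or "unused" -- which is at most $5^{10} \approx 9.8$ million assignments, and keep the best area among assignments where opposite sides have equal, positive length.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Brute force for the stick-rectangle puzzle: enumerate all assignments of
// sticks to 5 bins (4 rectangle sides or "unused") and keep the best area
// where opposite sides have equal, positive length. Feasible because N <= 10.
int maxRectangleArea(const std::vector<int>& sticks) {
    const int n = static_cast<int>(sticks.size());
    long long total = 1;
    for (int i = 0; i < n; ++i) total *= 5;       // 5^n assignments
    int best = 0;
    for (long long code = 0; code < total; ++code) {
        int side[4] = {0, 0, 0, 0};               // a, a', b, b'
        long long c = code;
        for (int i = 0; i < n; ++i) {
            int bin = static_cast<int>(c % 5);
            c /= 5;
            if (bin < 4) side[bin] += sticks[i];  // bin 4 = stick unused
        }
        if (side[0] == side[1] && side[2] == side[3] && side[0] > 0 && side[2] > 0)
            best = std::max(best, side[0] * side[2]);
    }
    return best;
}
```

For the example input 1 1 2 2 3 1 this search finds sides 3 (= 3) opposite 3 (= 2+1) and 2 (= 2) opposite 2 (= 1+1): a 3x2 rectangle of area 6, using all six sticks.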
{ "domain": "cs.stackexchange", "id": 12585, "tags": "algorithms, data-structures" }
roscd cannot find the package
Question: Every time I compile a package I'm working on, or try to use a new package, ROS cannot find the package. I've even restarted from scratch. For example, this afternoon I'm trying to run a publisher for an IMU. So I started from scratch because nothing is working, again. After 30 minutes of looking for a mistake, googling, etc. it just starts to work. This is what I'm doing:

myuser@mypc:$ cd ros
myuser@mypc:$ rm -rf *
myuser@mypc:$ mkdir src
myuser@mypc:$ cd src
myuser@mypc:$ catkin_init_workspace
myuser@mypc:$ cp -r ~/development/imu .
myuser@mypc:$ cd ..
myuser@mypc:$ catkin_make

Base path: /home/myuser/development/ros Source space: /home/myuser/development/ros/src Build space: /home/myuser/development/ros/build Devel space: /home/myuser/development/ros/devel Install space: /home/myuser/development/ros/install #### #### Running command: "cmake /home/myuser/development/ros/src -DCATKIN_DEVEL_PREFIX=/home/myuser/development/ros/devel -DCMAKE_INSTALL_PREFIX=/home/myuser/development/ros/install -G Unix Makefiles" in "/home/myuser/development/ros/build" #### -- The C compiler identification is GNU 4.8.4 -- The CXX compiler identification is GNU 4.8.4 -- Check for working C compiler: /usr/bin/cc -- Check for working C compiler: /usr/bin/cc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Using CATKIN_DEVEL_PREFIX: /home/myuser/development/ros/devel -- Using CMAKE_PREFIX_PATH: /home/myuser/development/ros/devel;/opt/ros/indigo -- This workspace overlays: /home/myuser/development/ros/devel;/opt/ros/indigo -- Found PythonInterp: /usr/bin/python (found version "2.7.6") -- Using PYTHON_EXECUTABLE: /usr/bin/python -- Using Debian Python package layout -- Using empy: /usr/bin/empy -- Using CATKIN_ENABLE_TESTING: ON -- Call enable_testing() -- Using
CATKIN_TEST_RESULTS_DIR: /home/myuser/development/ros/build/test_results -- Looking for include file pthread.h -- Looking for include file pthread.h - found -- Looking for pthread_create -- Looking for pthread_create - not found -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - found -- Found Threads: TRUE -- Found gtest sources under '/usr/src/gtest': gtests will be built -- Using Python nosetests: /usr/bin/nosetests-2.7 -- catkin 0.6.14 -- BUILD_SHARED_LIBS is on -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- ~~ traversing 1 packages in topological order: -- ~~ - imu_3dm_gx4 -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- +++ processing catkin package: 'imu_3dm_gx4' -- ==> add_subdirectory(imu_3dm_gx4) -- Using these message generators: gencpp;genlisp;genpy -- imu_3dm_gx4: 1 messages, 0 services -- Boost version: 1.54.0 -- Configuring done -- Generating done -- Build files have been written to: /home/myuser/development/ros/build #### #### Running command: "make -j4 -l4" in "/home/myuser/development/ros/build" #### Scanning dependencies of target _imu_3dm_gx4_generate_messages_check_deps_FilterOutput Scanning dependencies of target geometry_msgs_generate_messages_cpp Scanning dependencies of target geometry_msgs_generate_messages_py Scanning dependencies of target std_msgs_generate_messages_cpp [ 0%] [ 0%] [ 0%] Built target geometry_msgs_generate_messages_cpp Built target geometry_msgs_generate_messages_py Built target std_msgs_generate_messages_cpp Scanning dependencies of target roscpp_generate_messages_cpp Scanning dependencies of target rosgraph_msgs_generate_messages_py Scanning dependencies of target std_msgs_generate_messages_lisp [ 0%] Built target roscpp_generate_messages_cpp [ 0%] [ 0%] Built target rosgraph_msgs_generate_messages_py [ 0%] Built target std_msgs_generate_messages_lisp Scanning dependencies of 
target sensor_msgs_generate_messages_py Built target _imu_3dm_gx4_generate_messages_check_deps_FilterOutput Scanning dependencies of target geometry_msgs_generate_messages_lisp [ 0%] Scanning dependencies of target rosgraph_msgs_generate_messages_cpp Scanning dependencies of target sensor_msgs_generate_messages_lisp Built target geometry_msgs_generate_messages_lisp [ 0%] Built target sensor_msgs_generate_messages_py [ 0%] [ 0%] Built target sensor_msgs_generate_messages_lisp Built target rosgraph_msgs_generate_messages_cpp Scanning dependencies of target rosgraph_msgs_generate_messages_lisp Scanning dependencies of target roscpp_generate_messages_py Scanning dependencies of target roscpp_generate_messages_lisp Scanning dependencies of target diagnostic_msgs_generate_messages_cpp [ 0%] [ 0%] Built target rosgraph_msgs_generate_messages_lisp Built target roscpp_generate_messages_lisp [ 0%] [ 0%] Built target roscpp_generate_messages_py Built target diagnostic_msgs_generate_messages_cpp Scanning dependencies of target std_msgs_generate_messages_py Scanning dependencies of target diagnostic_msgs_generate_messages_lisp Scanning dependencies of target diagnostic_msgs_generate_messages_py Scanning dependencies of target sensor_msgs_generate_messages_cpp [ 0%] Built target std_msgs_generate_messages_py [ 0%] [ 0%] [ 0%] Built target diagnostic_msgs_generate_messages_lisp Built target diagnostic_msgs_generate_messages_py Built target sensor_msgs_generate_messages_cpp Scanning dependencies of target imu_3dm_gx4_generate_messages_lisp Scanning dependencies of target imu_3dm_gx4_generate_messages_cpp Scanning dependencies of target imu_3dm_gx4_generate_messages_py [ 16%] [ 50%] [ 50%] Generating Lisp code from imu_3dm_gx4/FilterOutput.msg Generating C++ code from imu_3dm_gx4/FilterOutput.msg Generating Python from MSG imu_3dm_gx4/FilterOutput [ 50%] Built target imu_3dm_gx4_generate_messages_lisp [ 66%] Generating Python msg __init__.py for imu_3dm_gx4 [ 66%] Built target 
imu_3dm_gx4_generate_messages_py [ 66%] Built target imu_3dm_gx4_generate_messages_cpp Scanning dependencies of target imu_3dm_gx4 Scanning dependencies of target imu_3dm_gx4_generate_messages [ 66%] Built target imu_3dm_gx4_generate_messages [100%] [100%] Building CXX object imu_3dm_gx4/CMakeFiles/imu_3dm_gx4.dir/src/imu_3dm_gx4.cpp.o Building CXX object imu_3dm_gx4/CMakeFiles/imu_3dm_gx4.dir/src/imu.cpp.o Linking CXX executable /home/myuser/development/ros/devel/lib/imu_3dm_gx4/imu_3dm_gx4 [100%] Built target imu_3dm_gx4

So the package is found, compiled and linked. Then I do:

myuser@mypc:$ source devel/setup.bash
myuser@mypc:$ roscd imu_3dm_gx4
roscd: No such package/stack 'imu_3dm_gx4'

Now I keep trying to make it work, searching on the web, running it again, until it suddenly works. This has happened all the time this past week, and I cannot figure out what I'm doing wrong or what system variable I've misconfigured. I'm running Ubuntu 14.04, ROS Indigo. Any light on this matter would be greatly appreciated. As crazy as it may sound, it looks as if the changes take time to appear.

Originally posted by jcolmena on ROS Answers with karma: 3 on 2015-09-21
Post score: 0

Original comments
Comment by jarvisschultz on 2015-09-25: I see you edited your terminal log to use catkin_init_workspace. Did my answer fix your issue?

Answer: First, I assume your fifth line should read catkin_init_workspace instead of catkin_make. I believe your issue is likely related to the cache used by rospack. Quoting from the rospack docs:

rospack re-parses the manifest.xml files and rebuilds the dependency tree on each execution. However, it maintains a cache of package directories in ROS_ROOT/.rospack_cache. This cache is updated whenever there is a cache miss, or when the cache is 60 seconds old. You can change this timeout by setting the environment variable ROS_CACHE_TIMEOUT, in seconds. Set it to 0.0 to force a cache rebuild on every invocation of rospack.
I have encountered exactly the same sort of behaviour that you are reporting in the past, and I've always understood this behavior to simply mean that the rospack cache is out of date. Running the command rospack profile will force rospack to re-crawl your active workspaces and refresh the cache. This page also supports my theory. Hope this info helps. Originally posted by jarvisschultz with karma: 9031 on 2015-09-21 This answer was ACCEPTED on the original site Post score: 3
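As concrete commands, the fix amounts to lowering the cache timeout and/or refreshing the cache manually (both taken from the rospack docs quoted in the answer); the `command -v` guard is only there so the snippet is harmless on a machine without ROS installed:

```shell
# Force the rospack cache to be rebuilt on every invocation.
export ROS_CACHE_TIMEOUT=0.0

# Re-crawl all active workspaces and refresh the cache right now.
if command -v rospack >/dev/null 2>&1; then
    rospack profile
fi
```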
{ "domain": "robotics.stackexchange", "id": 22679, "tags": "ros, roscd, rosrun" }
Predicting distribution of integral of random process from power spectral density?
Question: Suppose I have a random process $X(t)$ and I know the power spectral density of $X(t)$, $S_{XX}(f)$. What can be said about the distribution of $Y(t) = \int_{t'=0}^{t} X(t') dt'$? Bear in mind I have a physicist's background and little formal knowledge of stochastic integration. As a crude physicist example, if $X(t)$ is white noise with a flat power spectral density $S_{XX}(f) = \sigma^2$ then $Y(t)$ is like a Wiener process with $Y(t) \sim \mathcal{N}(0, \sigma^2t)$. I'm curious if this observation can be generalized. If $S_{XX}(f)$ is not enough information to determine how $Y(t)$ is distributed, then what additional information is needed?

Answer: Given the power spectral density (PSD) of a white noise process, you cannot infer anything about the distribution of that waveform in time. For all white noise processes, the power spectral density is a constant; this follows from the autocorrelation function being an impulse, or equivalently the unit sample for discrete-time waveforms: any zero-mean waveform with independent, identically distributed samples (regardless of the distribution of the magnitude and phase of those samples) will have an autocorrelation function that is a unit sample scaled by the variance of the samples. The Fourier Transform of an impulse is 1 for all frequencies, and the Discrete Fourier Transform of the unit sample function ($x[n] = 1$ for $n=0$, and 0 for all other $n$) is similarly 1 for all frequencies; and as further derived in other posts here and elsewhere, the Fourier Transform of the autocorrelation function is the PSD.
As an example, consider the samples $x[n]$ as samples of either a Gaussian-distributed white noise process or a uniformly distributed white noise process, each with the same variance and with samples independent and identically distributed (I.I.D.): as long as the samples are uncorrelated, the autocorrelation function will be a unit sample scaled by the variance, and therefore, given that the PSD is the Fourier Transform of the autocorrelation function, both processes will have the same PSD regardless of the distribution. This is very similar to the fact that we are not able to infer anything about the distribution if we are only given the variance of the waveform. Note too, interestingly, how the Discrete Fourier Transform $X(k)$ of the samples of an I.I.D. random variable in time will tend toward a Gaussian distribution in frequency regardless of the distribution in time, as each of the samples $X(k)$ is created by a sum of I.I.D. samples, as given by the formula for the Discrete Fourier Transform. Regardless of the distribution of the magnitude of those samples in time, as long as they are uncorrelated and have a constant mean and standard deviation, then according to the Central Limit Theorem, as a sum of samples from an I.I.D. random process, the result will tend toward Gaussian. In general, for a continuous-time white noise waveform extending to $t=\infty$, the Fourier Transform will equivalently be a white noise waveform, since all samples of the Fourier Transform will be independent regardless of how closely spaced any two frequency samples are (which is not the case when the time domain waveform is windowed, which creates dependence on adjacent samples in the frequency domain). This is also discussed in these other related posts:

Fourier Transform of a PSD and response of a PSD input
Effect of windowing on noise
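The Gaussian-vs-uniform example can be checked numerically. The sketch below (my own illustration, not from the original answer; the helper names are invented) draws unit-variance white noise from both distributions and estimates the autocorrelation: both give roughly 1 at lag 0 and roughly 0 at nonzero lags, i.e. the same impulse-shaped autocorrelation and hence the same flat PSD.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Biased sample autocorrelation estimate at the given lag.
inline double autocorr(const std::vector<double>& x, std::size_t lag) {
    double r = 0.0;
    for (std::size_t n = 0; n + lag < x.size(); ++n) r += x[n] * x[n + lag];
    return r / static_cast<double>(x.size());
}

// N i.i.d. unit-variance samples: Gaussian, or uniform on
// [-sqrt(3), sqrt(3)] (which also has variance 1).
inline std::vector<double> white_noise(std::size_t N, bool gaussian, unsigned seed) {
    std::mt19937 rng(seed);
    std::normal_distribution<double> g(0.0, 1.0);
    std::uniform_real_distribution<double> u(-std::sqrt(3.0), std::sqrt(3.0));
    std::vector<double> x(N);
    for (double& v : x) v = gaussian ? g(rng) : u(rng);
    return x;
}
```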
{ "domain": "dsp.stackexchange", "id": 10917, "tags": "power-spectral-density, random-process, white-noise" }
Does a tertiary carbocation rearrange to another tertiary carbocation?
Question: I strongly believe that a carbocation should not rearrange to another if there are no immediate benefits (like a greedy algorithm). The doubt hit me while solving this question. The book's answer is (b), but my answer comes out to be (a). Here is the mechanism:

Answer: Option B is the correct one. The dehydration of alcohols follows the E1 elimination mechanism, which involves two steps: the first is slow ionisation of the C-X bond (where X is any heteroatom) and the second is fast removal of an $H^+$ ion. Since the first step is slow, it provides enough time for the carbocation to rearrange and form a more stable carbocation. The mechanism itself is self-explanatory. Firstly, as oxygen possesses lone pairs of electrons, the $H^+$ ion attacks the OH group; as a result, the $H_2O$ molecule gets eliminated, leaving a positive charge there. Now a hydride shift takes place, as the $\textbf{carbocation so formed is more stable: it has 7 alpha hydrogens compared to the 5 alpha hydrogens of the previous carbocation}$. Now the C-C bond undergoes ionisation to form a tertiary carbocation and a carbanion. This carbanion forms a bond with the carbocation formed earlier to form a six-membered ring. The resulting carbocation undergoes a methyl shift, which produces a more stable tertiary carbocation. Finally, an $H^+$ ion from the neighbouring carbon atom is ejected, and as a result the pi bond is formed.
{ "domain": "chemistry.stackexchange", "id": 11752, "tags": "organic-chemistry, carbocation" }
Closure property of recursively enumerable language
Question: I read that recursively enumerable languages are closed under intersection but not under set difference. We know that $A \cap B = A - (A - B)$. Now for the LHS (left-hand side) to be closed under intersection, the RHS (right-hand side) should be closed under set difference. But we know that the RHS is not closed under set difference, so the LHS is also not closed under intersection. Suppose we assume that r.e. languages are closed under intersection; then $A \cap B = \overline{(\overline{A} \cup \overline{B})}$. Now the LHS can be closed only if the RHS is closed under complement (as r.e. languages are already closed under union). But we know r.e. languages are not closed under complement, so again a contradiction. So, r.e. languages should not be closed under intersection, right?

Answer: Basically, "r.e. sets are closed under intersection" means that for any two r.e. sets, $A \cap B$ is again r.e.; but when we say that r.e. sets are not closed under some set-theoretic operation, it means there is at least one pair of r.e. sets whose result under that operation is not r.e., whether that operation is set-theoretic difference or complement. In other words: R.e. sets being closed under a set-theoretic operation $\star$ means that for any pair $A$ and $B$, $A\star B$ is r.e. R.e. sets being NOT closed under a set-theoretic operation $\star$ means that NOT for every pair $A$ and $B$ is $A\star B$ r.e. In simple words, there may be sets $A$ and $B$ such that $A\star B$ is r.e., and there may exist other sets $C$ and $D$ such that $C \star D$ is not r.e. For example, the set $N - 2N$ (the set of all odd positive integers) is r.e. (even recursive), but the set $N - A_{TM}$ is clearly not r.e. ($A_{TM}$ denotes the language of the Halting problem).
{ "domain": "cs.stackexchange", "id": 10023, "tags": "computability, closure-properties" }
What it sounds like when I'm travelling at the speed of sound
Question: Totally hypothetical here: let's say a man is playing a song on a guitar and I begin travelling quickly away from the guitar. If I were to reach the speed of sound, what would I hear? (My assumption is that I would hear a single note humming in a constant state... like pressing a key on a synth.) Assume I'm not in a vehicle and the sound of air whizzing past me isn't involved... not a practical situation, just hypothetical. Total noob here, my apologies. And to take it a step further... if I can speed up or slow down (move forward or backward) ever so slightly from the current note "I'm in", then return to the speed of sound at another note, would this be possible... to move from one note of the song to another?

Answer: My assumption is that I would hear a single note humming in a constant state.

A sound wave is not a thing that you can hear. Assume for a moment that you are just standing in the coffee shop, enjoying the music. What you are hearing is not the waves. What you are hearing is the guitar. The waves carry acoustic energy from the guitar to your ear. The guitar causes fluctuations in the pressure of the air that immediately surrounds it. "Wave" is our word for how those fluctuations propagate through the air. Your eardrum experiences the same fluctuations as the wave passes by, and you hear the sound. If you could somehow magically keep pace with the waves and not feel the supersonic blast of wind in your face, then you would hear nothing, because the wave is not passing you by. You would experience only the steady-state pressure of one peak of the wave or one trough. As far as your ears are concerned, a steady-state pressure equals silence.
{ "domain": "physics.stackexchange", "id": 26002, "tags": "waves, acoustics, shock-waves" }
Where does a normal force come from?
Question: Being more specific, let's say I place an object on top of a table; this will result in the table applying a normal force on the object. My question is: why does this force exist? Is it because of the existence of electrical forces between the table and the object that makes a "repulsion", or is it because the object "deforms" the structure of the table and the intramolecular forces are trying to "fix it" (make the table, which is a solid, go back to its normal structure, thereby applying a force)?

Answer: It's not exactly electrical or intra/inter-molecular forces as you conjecture in your question. Rather, it's ultimately exchange forces, e.g., https://en.wikipedia.org/wiki/Exchange_interaction. As two macroscopic objects get close (really close) together, the electron shells surrounding their respective atoms begin to affect each other. And two electrons (because they're fermions) can't simultaneously occupy the same state (be in "the same place at the same time", colloquially), better known as the Pauli Exclusion Principle, https://en.wikipedia.org/wiki/Pauli_exclusion_principle (link added after I noticed @Qmechanic edited that tag into the original question). So as you try to push the macroscopic objects together, thus forcing too many electrons into the available atomic shell states, the overall multi-particle state describing that collection of electrons (determined by the Slater determinant, e.g., https://en.wikipedia.org/wiki/Slater_determinant) necessarily gives zero probability for finding any two electrons in the same state. And that gives rise to the macroscopic effect/semblance of a "force", preventing the macroscopic objects from being "in the same place at the same time".
Edit$\mathbf{\mbox{--------}}$ Another effect involving exchange forces (unrelated to the op's question about normal forces, per se, but perhaps more generally physically interesting) is the Bose-Einstein condensate, https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_condensate Here, a gas of bosons is supercooled so that most of the constituent "particles" all fall into the lowest-energy state. And that's possible because bosons aren't subject to the Pauli exclusion principle, so a large collection of them can all occupy that same state. And then this macroscopic collection exhibits some remarkable quantum properties that you'd expect to only be observable at the microscopic level. But, now, you couldn't prepare such a remarkable condensate comprised of fermions, like electrons, for exactly the same reason discussed above --- except for https://en.wikipedia.org/wiki/Fermionic_condensate#Fermionic_superfluids where fermions are paired together so that each pair of fermions acts like a boson. An interesting video discussing all this is at http://learner.org/resources/series213.html Click the [vod] link along the right-hand side of Program 6. Macroscopic Quantum Mechanics The second half of this video interviews Deborah Jin (and some of her grad students), who produced the first-ever fermionic condensate, discussing the physics involved. (Unfortunately, the video's from 2010, and a more recent issue of Physics Today carried Jin's obit, also discussing her accomplishments.)
{ "domain": "physics.stackexchange", "id": 49862, "tags": "newtonian-mechanics, electromagnetism, forces, pauli-exclusion-principle" }
ROSRUN IMAGE_VIEW problem
Question: Hi everybody, I am trying to remotely see the image from a USB camera installed on a robot. I run the command

rosrun image_view image_view image:=/logitech_usb_cam/image_raw

and I get this feedback:

(image_raw:3417): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",

What does this mean? Thank you all

Originally posted by agrirobot-George on ROS Answers with karma: 1 on 2012-09-20
Post score: 0

Answer: It seems to be a general bug in Ubuntu 11.10. Take a look at http://askubuntu.com/questions/66356/gdk-gtk-warnings-and-errors-from-the-command-line

Originally posted by Poseidonius with karma: 427 on 2012-09-20
This answer was ACCEPTED on the original site
Post score: 1
{ "domain": "robotics.stackexchange", "id": 11087, "tags": "ros" }
Correcting Impulse Invariance Method
Question: I'm trying to find out if the correction (Jackson, Nelatury, Mecklenbräuker) could improve the (IIM-based) filter response near Nyquist. Here's my C++ routine (which is used to calculate various RIAA and non-RIAA filters by just changing the time constant values and sample rate):

double a0, a1, a2, b0, b1, b2;
double fs = 44100;

// time constants (case RIAA):
// frequency -> time conversion 1/(2*pi*fc) (= R*C)
// poles
double p1 = 3180e-6; // 1/(2*pi*50.05Hz)
double p2 = 75e-6;   // 2212Hz
// zeros
double z1 = 318e-6;  // 500.5Hz
double z2 = 0.0;     // 3.18e-6 for Neumann pole (50kHz); z2 == 0 gives zero2 = exp(-inf) = 0

double pole1 = exp(-1.0/(fs*p1));
double pole2 = exp(-1.0/(fs*p2));
double zero1 = exp(-1.0/(fs*z1));
double zero2 = exp(-1.0/(fs*z2));

a0 = 1.0;            // = 1.0
a1 = -pole1 - pole2; // = -1.731986
a2 = pole1 * pole2;  // = 0.733838
b0 = 1.0;            // = 1.0
b1 = -zero1 - zero2; // = -0.931176
b2 = zero1 * zero2;  // = 0

I tried to google a "ready to use" solution for this, but all I found were a few papers I could use for the correction. Papers: Jackson Mecklenbräuker Nelatury

EDIT: Bypassed the above mentioned papers. Improved the method so that there is now no need for an additional correction biquad; instead the (otherwise unused) z2 is used for the correction. One can decide where the error lies. Result:
If we take as an example the RIAA de-emphasis transfer function $$H(s)=\frac{sT_2+1}{(sT_1+1)(sT_3+1)}\tag{2}$$ with $T_1=3.18\,\text{ms}$, $T_2=75\,\mu\text{s}$, and $T_3=318\,\mu\text{s}$, we first have to do a partial fraction expansion of $(2)$ to get $$H(s)=\frac{A_1}{s-p_1}+\frac{A_2}{s-p_2}\tag{3}$$ with $p_1=-1/T_1$, $p_2=-1/T_3$, and $$A_1=\frac{1}{T_1}\frac{T_2-T_1}{T_3-T_1},\qquad A_2=\frac{1}{T_3}\frac{T_3-T_2}{T_3-T_1}$$ With the constants used in $(3)$, the transfer function of the transformed system can directly be written as $$H(z)=\frac{TA_1}{1-e^{p_1T}z^{-1}}+\frac{TA_2}{1-e^{p_2T}z^{-1}}\tag{4}$$ From $(4)$ it can be seen that the poles are the same as for the matched Z-transform, but the zeros are different. Note that for this example there is only one zero for the matched Z-transform as well as for the conventional IIM. The corrected IIM according to Mecklenbräuker and Jackson subtracts a constant term from the transfer function $(4)$ of the conventional IIM: $$H(z)=\frac{TA_1}{1-e^{p_1T}z^{-1}}+\frac{TA_2}{1-e^{p_2T}z^{-1}}-\frac{T}{2}(A_1+A_2)\tag{5}$$ Note that this solution has one more zero than the other two methods. The poles of all three methods are identical. The figure below shows the approximation of the RIAA de-emphasis filter given by $(2)$ according to the three aforementioned transformations (IIM in red, IIM corrected in green, matched Z in magenta). IIM performs worst, matched Z-transform and the corrected IIM are similar but their errors have different signs. This fact could be used to come up with a filter that combines the corrected IIM and the matched Z-transform. Since their poles are identical, we only need to combine their numerator coefficients. The result of the combined filter is shown in black. Its approximation error is less than 1 dB over the whole frequency range.
{ "domain": "dsp.stackexchange", "id": 3639, "tags": "filters, filter-design, infinite-impulse-response, digital-filters" }
wrong diff_drive_controller pose calculation?
Question: Hello, I am using diff_drive_controller in my project. I wrote the hardware interface, which calculates the distance based on the ticks coming from the left and right wheels. I am sending absolute ticks (increased when moving forward, decreased when moving backwards). The ticks come from the robot's (slave) left_wheel_ticks and right_wheel_ticks topics, while the hardware interface runs on a master PC subscribed to these topics. Clocks are in sync. The calculated linear travel distance for each wheel seems to be calculated correctly, but not the robot's travel distance (position in the odometry):

void read(const ros::Duration &period)
{
    double distance_left = (_wheel_ticks[0] * ((_wheel_diameter * M_PI) / _wheel_encoder_ticks));
    double distance_right = (_wheel_ticks[1] * ((_wheel_diameter * M_PI) / _wheel_encoder_ticks));

    pos[0] += linearToAngular(distance_left - last_dist_left);
    vel[0] += linearToAngular((distance_left - last_dist_left) / period.toSec());
    pos[1] += linearToAngular(distance_right - last_dist_right);
    vel[1] += linearToAngular((distance_right - last_dist_right) / period.toSec());

    last_dist_left = distance_left;
    last_dist_right = distance_right;
}

double linearToAngular(const double &travel) const
{
    return travel / _wheel_diameter;
}

_wheel_encoder_ticks = 20
_wheel_diameter = 0.065

If I push the robot by hand for one wheel rotation (20 ticks), I can see that the left and right wheel (linear) distance is 20 cm with my 6.5 cm diameter wheels. My wheel radius is 3.25 cm; if I set the wheel_radius_multiplier to 2.0, it seems to be OK... but I don't have 13 cm wheels, I have 6.5 cm ones. I would expect the position in the published odometry topic to increase by 20 cm per wheel rotation, but it is around half of that. Why? What am I doing wrong?
Here is the params for the diff_drive_controller: mobile_base_controller: type: "diff_drive_controller/DiffDriveController" left_wheel: ['base_lt_wheel_shaft_joint'] right_wheel: ['base_rt_wheel_shaft_joint'] publish_rate: 50 #extra_joints: # - name: <name_of_caster_wheel_joint> # position: 0.01 # velocity: 0.0 # effort: 0.0 # - name: <name_of_caster_wheel_joint> # position: 0.01 # velocity: 0.0 # effort: 0.0 # Odometry covariances for the encoder output of the robot. These values should # be tuned to your robot's sample odometry data, but these values are a good place # to start pose_covariance_diagonal : [0.001, 0.001, 1000000.0, 1000000.0, 1000000.0, 1000.0] twist_covariance_diagonal : [0.001, 0.001, 1000000.0, 1000000.0, 1000000.0, 1000.0] estimate_velocity_from_position: false # pose_covariance_diagonal: [0.001, 0.001, 0.001, 0.001, 0.001, 0.03] # twist_covariance_diagonal: [0.001, 0.001, 0.001, 0.001, 0.001, 0.03] cmd_vel_timeout: 0.25 velocity_rolling_window_size: 2 # Top level frame (link) of the robot description base_frame_id: base_footprint # Odometry fused with IMU is published by robot_localization, so # no need to publish a TF based on encoders alone. enable_odom_tf: true # Jetbot hardware does not provides wheel velocities # estimate_velocity_from_position: true # Wheel separation and radius multipliers wheel_separation: 0.12 wheel_radius: 0.0325 # wheel_separation_multiplier: 1.0 # default: 1.0 # wheel_radius_multiplier : 1.0 # default: 1.0 # Velocity and acceleration limits for the robot linear: x: has_velocity_limits : true max_velocity : 0.1 # m/s has_acceleration_limits: true max_acceleration : 0.05 # m/s^2 angular: z: has_velocity_limits : true max_velocity : 0.1 # rad/s has_acceleration_limits: true max_acceleration : 0.6 # rad/s^2 Thanks! Originally posted by balint.tahi on ROS Answers with karma: 50 on 2021-06-16 Post score: 0 Original comments Comment by fjp on 2021-10-07: The answer/project here might help you too. 
Answer: Update Apparently, my understanding of linear velocity is wrong. The correct term seems to be angular velocity. I did some research and came across this. https://answers.gazebosim.org/question/21825/what-is-the-velocity-assigned-to-the-gazebo-ros-conrtol-controller/?answer=22000#post-id-22000 From what I learned there, I think the following should be changed. // Divide the linear velocity by the radius, not the diameter. double linearToAngular(const double &travel) const { return travel / ( _wheel_diameter / 2 ); } I think it is consistent with the case where setting wheel_radius_multiplier to 2.0 works correctly. OLD pos[0] += linearToAngular(distance_left - last_dist_left); vel[0] += linearToAngular((distance_left - last_dist_left) / period.toSec()); pos[1] += linearToAngular(distance_right - last_dist_right); vel[1] += linearToAngular((distance_right - last_dist_right) / period.toSec()); You probably don't need linearToAngular. Before doing this operation, the unit is already meters. (This is the distance the robot has traveled.) pos[0] +=(distance_left - last_dist_left); vel[0] +=(distance_left - last_dist_left) / period.toSec(); pos[1] +=(distance_right - last_dist_right); vel[1] +=(distance_right - last_dist_right) / period.toSec(); Also, since vel is supposed to be the velocity at that point, there is no need to accumulate it. pos[0] +=(distance_left - last_dist_left); vel[0] = (distance_left - last_dist_left) / period.toSec(); // removed + pos[1] +=(distance_right - last_dist_right); vel[1] = (distance_right - last_dist_right) / period.toSec(); // removed + Originally posted by miura with karma: 1908 on 2021-06-19 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by balint.tahi on 2021-06-21: Okay, so as you said, it is in meters (linear). The other commenter said it is in radians (angular) ... where can I find documentation to figure out which is used by the controller? 
The problem is that this (your suggestion) is not working either. I am not doing anything else, just driving my robot forward, watching the linear distance for each wheel (let's say 20cm, because that is 1 rotation for the wheels), and checking the odom topic pose/pose/position/x basically, which should match the linear wheel distance (because of the forward movement, the linear distance is the same for each wheel). It is not the same with your calculation, or mine, or real life basically. Comment by balint.tahi on 2021-06-21: Just before your answer I found exactly the same issue :) But to be honest, it has to be angular velocity and position, so linearToAngular is also needed in the pos and vel calculations. Now at least the odom is accurate. The navigation is still not working correctly ... but I am one step further. Thanks for the help. Comment by miura on 2021-06-21: Thank you too. It was a learning experience.
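The factor of two discussed in the comments can be reproduced with plain numbers. A small Python sketch using the wheel dimensions from the question, and assuming (as the comments conclude) that the controller multiplies the joint angles by the configured wheel radius when integrating odometry:

```python
import math

wheel_diameter = 0.065              # m, from the question
wheel_radius = wheel_diameter / 2   # 0.0325 m, as configured
ticks_per_rev = 20

# Linear distance for one full wheel rotation (20 ticks) = one circumference
travel = ticks_per_rev * (wheel_diameter * math.pi / ticks_per_rev)

theta_wrong = travel / wheel_diameter    # pi rad   (divides by diameter)
theta_correct = travel / wheel_radius    # 2*pi rad (divides by radius)

# Forward odometry for straight driving: x += wheel_radius * avg(d_theta)
x_wrong = wheel_radius * theta_wrong      # ~0.102 m: half the real distance
x_correct = wheel_radius * theta_correct  # ~0.204 m: one circumference
```

`x_wrong` is exactly half of `x_correct`, which matches the observed behaviour and explains why `wheel_radius_multiplier: 2.0` appeared to fix it.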
{ "domain": "robotics.stackexchange", "id": 36539, "tags": "ros, ros-melodic, diff-drive-controller" }
WKB for $E > V(x)$
Question: When we use the WKB method, at least when I learned it, all of our examples had $V(x) > E$ at some point, allowing for turning points. Say we have some $V(x) < E$ for all $x$. How would we apply WKB to get a solution for our wave functions? Answer: If all of space is classically accessible $\forall x\in\mathbb{R}:E>V(x)$, then we are looking at (non-normalizable) scattering states in the continuum part of the energy spectrum, where there is no quantization condition, and where the WKB wave function for the 1D TISE is purely oscillatory $$ \psi(x)~=~\frac{1}{\sqrt{p(x)}}\sum_{\pm}C_{\pm} \exp\left(\pm \frac{i}{\hbar}\int_{x_0}^x \! p(x^{\prime})~\mathrm{d}x^{\prime}\right),\qquad p(x)~:= \sqrt{2m (E-V(x))}.$$ The two complex constants $C_{\pm}\in \mathbb{C}$ can in principle be determined from asymptotic scattering data as $x\to \pm \infty$.
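As a numerical illustration (my own example in natural units, with a hypothetical Gaussian bump $V(x)=V_0 e^{-x^2}$ and $E>V_0$ so that $E>V(x)$ everywhere), the momentum $p(x)$, the phase integral, and the $1/\sqrt{p(x)}$ envelope can be tabulated directly:

```python
import math

hbar, m = 1.0, 1.0      # natural units (assumption)
E, V0 = 2.0, 1.0        # E > max V(x): all of space is classically allowed

def V(x):
    return V0 * math.exp(-x * x)

def p(x):
    """Classical momentum sqrt(2m(E - V(x))); always real here."""
    return math.sqrt(2.0 * m * (E - V(x)))

def phase(x, x0=-5.0, n=2000):
    """(1/hbar) * integral of p from x0 to x, trapezoidal rule."""
    h = (x - x0) / n
    s = 0.5 * (p(x0) + p(x)) + sum(p(x0 + k * h) for k in range(1, n))
    return h * s / hbar

def envelope(x):
    """WKB amplitude 1/sqrt(p): largest where the particle moves slowest."""
    return 1.0 / math.sqrt(p(x))
```

The WKB wave function is then `envelope(x) * (C_plus * exp(1j*phase(x)) + C_minus * exp(-1j*phase(x)))`: purely oscillatory, with no turning points anywhere.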
{ "domain": "physics.stackexchange", "id": 62342, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, scattering, semiclassical" }
Are there galaxies with 2 or more super massive black holes orbiting each other?
Question: We now know that most stellar systems have 2 or more stars orbiting each other. Do we know of any galaxies which have 2 or more super massive black holes orbiting each other? Is it possible? Answer: Yes, there are galaxies with two supermassive black holes in the center, see for instance 4C +37.11 Most likely such galaxies are formed by collision and merger of two galaxies, and their cores have not yet merged. Source
{ "domain": "astronomy.stackexchange", "id": 3886, "tags": "galaxy, supermassive-black-hole" }
Reverse the combustion engine
Question: You all know what a combustion engine does: it transforms chemical energy into kinetic energy. What about the opposite? I know that when I park a car on a slope and lose the brakes, it will roll backwards and downwards. And it will not produce petrol because of that :) But where in the process is that critical point? In other words, why, in detail, will it not produce petrol? Are there machines that can do such a process? Answer: There are ways to do the opposite process. One way would be using an endothermic chemical reaction. Once equilibrium is reached, you could use friction to heat the reaction medium (simply by rubbing the beaker). Then, since the reaction is endothermic, it would start again. This would convert mechanical energy into chemical energy. Another way would be to perform electrolysis using a dynamo. The dynamo would convert mechanical energy into electric energy, which would then be converted into chemical energy through electrolysis.
{ "domain": "physics.stackexchange", "id": 42391, "tags": "thermodynamics, heat-engine, carnot-cycle" }
Is it possible to apply deep dream technique for the audio streams?
Question: What happens if you apply the same deep dream technique that produces "dream" visuals, but to media streams such as audio files? Would changing the image functions to audio ones and adapting the logic work, or would it no longer work / make no sense? My goal is to create "dream"-like audio based on the two samples. Answer: As far as I can see, there's no reason why you couldn't (for example) take the convolutional inputs to deepdream from adjacent sample points, rather than adjacent spatial positions, as is the case with image input. Given the 'self similar' nature of deep dream images, listening to this fractal granular synthesis technique might be of interest/inspiration.
{ "domain": "ai.stackexchange", "id": 17, "tags": "convolutional-neural-networks, deepdreaming" }
Can particles be in positional eigenstate in reality?
Question: Do I understand quantum mechanics correctly in the following description? What we observe as a particle is just a phenomenon that has some quality corresponding to macroscopic stones in experiments like the photoelectric effect, and the position of these so-called particles can never be observed precisely, even after the collapse of the wavefunction. Answer: Many eigenfunctions are idealised states that cannot occur in practice. For example when you learn about the hydrogen atom you learn that the orbitals $1s$, $2s$, $2p$, etc are the energy eigenstates i.e. the eigenstates of the Hamiltonian operator. However these states are time independent so they must have existed for an infinite time and continue to exist for an infinite time into the future. Obviously this can't be true (the universe hasn't existed for an infinite time) so these states cannot exist in practice. What we observe are very close approximations to them. A similar argument applies to position eigenstates. These are Dirac delta distributions and have infinite uncertainty in momentum, along with other pathological features like infinite density. The best we would ever do is have a close approximation to a position eigenstate i.e. a particle localised to within some finite region of space. Incidentally this answers your previous question Are the particles we see in a cloud chamber position eigenstates? since the tracks in a cloud chamber are, as you say in that question, just states localised to a small region of space and not position eigenstates.
{ "domain": "physics.stackexchange", "id": 41735, "tags": "quantum-mechanics, hilbert-space, observables" }
Why centrifugal force (not centripetal force) is considered while deriving the effect of rotation of Earth on "g"?
Question: As I was trying to work out the expression for the apparent "g" in the case of the rotation of the Earth, I was considering the centripetal force ($m\omega^2 r$) and the weight ($mg$) all the time, thinking these were the forces that were acting on the body. But later I found out that I should have used the centrifugal force instead of the centripetal force. But why? We should be considering the forces acting on the body, right? And it's the centripetal force that is acting on the body (directed towards the center of the small circle). Answer: You're right, but you've forgotten one thing: the reference frame. When you are in an inertial frame of reference, the only real force in a circular movement is the centripetal force $\vec F_c = - \frac{mv^2}{r} \hat u_r$ But what happens when you are in a non-inertial reference frame? Remember that Newton's second law works perfectly in inertial reference frames, but it doesn't work in non-inertial reference frames. So, if you want the law to remain valid, you need to add some fictitious or inertial forces. That is, any observer located in a non-inertial reference system will need fictitious forces to explain the movement correctly (this is the "trick" that makes the second law work). In circular movement (on Earth) you are not in an inertial reference frame: the Earth is a non-inertial reference frame. For this reason, for you (who are in this reference frame) there is no centripetal force; you just feel a force (which doesn't really exist) pushing you outward. If you are on the Earth, this is your non-inertial reference frame, so you need to consider inertial/fictitious forces, like the centrifugal force and of course the Coriolis force. Now get off the Earth and you won't have to include these forces, naturally. Final comment: as you can see in the second image you've put, the centrifugal force causes a little change in the direction of the weight $mg$, and it does not point exactly to the center of the Earth. Hope that helps, J.
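To put rough numbers on this (my own back-of-the-envelope sketch, not part of the original answer): in the Earth-fixed frame the centrifugal acceleration at latitude $\lambda$ is $\omega^2 R\cos\lambda$, and its component along the local vertical is what reduces the apparent $g$:

```python
import math

omega = 7.2921159e-5   # Earth's sidereal rotation rate, rad/s
R = 6.371e6            # mean Earth radius, m
g = 9.81               # m/s^2, gravity without rotation (rounded)

def centrifugal(lat_deg):
    """Centrifugal acceleration, directed away from the spin axis."""
    return omega ** 2 * R * math.cos(math.radians(lat_deg))

def g_apparent(lat_deg):
    """Radial (local-vertical) component of the apparent gravity."""
    return g - centrifugal(lat_deg) * math.cos(math.radians(lat_deg))
```

At the equator the reduction is about 0.034 m/s² (roughly 0.3% of g); at the poles the centrifugal term vanishes. The tangential component of the same term is what tilts the apparent weight slightly away from the center of the Earth.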
{ "domain": "physics.stackexchange", "id": 45168, "tags": "newtonian-gravity, reference-frames, centripetal-force, centrifugal-force" }
What is the difference between a neutron star and a white dwarf?
Question: What is the difference between a neutron star and a white dwarf? I know that both are very dense, even though they go through different phases.
{ "domain": "astronomy.stackexchange", "id": 5048, "tags": "neutron-star, white-dwarf" }
Heat transfer calculated from the specific heat formula, converting to Kelvin
Question: Say I have $10$ g of silver, whose specific heat is $0.235\ \mathrm{J/(g\cdot K)}$. I've heated it up from $50.0 ^\circ C$ to $60.0 ^\circ C$. How much heat has been transferred? Using the equation $$ Q = C_p m\Delta T $$ where $C_p$ is the specific heat, $m$ is the mass of the object, and $\Delta T$ is the change of temperature in Kelvin, I found $$ Q = (0.235)(10)(60.0-50.0) = 23.5\, \text{J}.$$ My teacher said that we have to add $273$ to the temperature difference to convert to Kelvin, so $$ Q = (0.235)(10)((60.0-50.0)+273) = 665.05\, \text{J}.$$ I don't see her reasoning since the difference in Kelvin is the same as the difference in Celsius. But the book also had the same answer as my teacher, and I got this question wrong in a test, where the official answer was $665.06 \, \text{J}$. So now I know this is the right answer and my friends agree, but why is it right? Answer: dmckee already said this but I figure it's worth repeating because we're really really sure. $$60.0^\circ\mathrm{C} - 50.0^\circ\mathrm{C} = 10\text{ K}$$ You're exactly correct that you should get the same answer by converting to Kelvins before subtracting: $$60.0^\circ\mathrm{C} - 50.0^\circ\mathrm{C} = 333.2\text{ K} - 323.2\text{ K} = 10\text{ K}$$ So you do not add 273 K to this result; your teacher and the book are wrong. About Kelvins Degrees Celsius (and Fahrenheit) are funny things, actually. They are only useful for subtraction. The reason is that these temperature systems are defined relative to a fixed point, the triple point of water, at which the temperature is defined to be $T_3 = 0.01^{\circ}\mathrm{C}$. So when you say something is at a temperature of $60.0^\circ\mathrm{C}$, you're really saying that $t - T_3 = 59.9^{\circ}\mathrm{C}$. This means that every temperature expressed in degrees Celsius implicitly depends on the triple point of water. Obviously, not everything in nature depends on the triple point of water. 
So we would like to have some way of eliminating that dependence before using temperatures in calculations. You can do this by taking a difference between two temperatures. Suppose you had two temperatures, $t_i$ and $t_f$ (for example, $t_i - T_3 = 49.9^\circ\mathrm{C}$ and $t_f - T_3 = 59.9^\circ\mathrm{C}$). $$t_f - t_i = (t_f - T_3) - (t_i - T_3) = 59.9^\circ\mathrm{C} - 49.9^\circ\mathrm{C} = 10\;\Delta^{\circ}\mathrm{C}$$ Here I've "invented" the unit $\Delta^{\circ}\mathrm{C}$ for a temperature difference, because temperature differences and "relative" temperatures don't work the same way. Notice that a temperature difference doesn't depend on $T_3$ at all. In fact, if we used an entirely different reference value in place of $T_3$, the difference would still be the same. Once you have a temperature difference, you can multiply it or divide it by other things. You can also add or subtract other temperature differences. This is very similar to things like potential energy, where only the difference between two energies is meaningful, not the actual amounts of energy. Now, it turns out that there are several important formulas in thermodynamics that involve differences between the actual temperature and a particular reference temperature $T_0$; for example, the thermal energy of noninteracting particles, $$\overline{E} = \frac{3}{2}k_B (T - T_0) = \frac{1}{N}\sum_{i=1}^N\frac{1}{2}m_iv_i^2$$ Based on experiments, you can calculate that $$T_0 = -273.15^\circ\mathrm{C}$$ So evidently, nature assigns some special significance to temperature differences relative to $T_0$: the difference $t - T_0$ is important in some way that no other temperature difference (such as $t - T_3$) is. Based on this result, physicists thought it would make sense to develop a temperature scale which set $T_0 = 0$, so that we wouldn't have to keep subtracting it all the time. 
The first person to reach this conclusion was Lord Kelvin, thus the thermodynamic temperature scale and its unit were named after him. This is the origin of the Kelvin. So to summarize, when you have a temperature (not a temperature difference) in degrees Celsius, what you really have is $t - T_3$, and when you have a temperature in Kelvin, what you really have is $t - T_0$. In order to convert a temperature from Celsius to Kelvin, you do this: $$\underbrace{t - T_0}_{\text{in K}} = \underbrace{t - T_3}_{\text{in }^\circ\text{C}} + \underbrace{T_3 - T_0}_{273.15\Delta^\circ\mathrm{C}}$$ i.e. you add 273.15 to the numeric value. On the other hand, when you have a temperature difference, what you really have is $t_f - t_i$, which doesn't depend on any reference point. So to convert from Celsius to Kelvin, you don't need to do anything. Application Here's how this applies to your example. You have a formula $$Q = C_p m\Delta t = C_p m (t_f - t_i)$$ But you can't plug in for $t_f$ and $t_i$ directly. The only information you have is relative to $T_3$: $$t_i - T_3 = 49.9^\circ\mathrm{C}$$ $$t_f - T_3 = 59.9^\circ\mathrm{C}$$ so you have to stick a couple extra terms into that formula: $$Q = C_p m \bigl[(t_f - T_3) - (t_i - T_3)\bigr]$$ Now you can substitute in your numerical values, $$Q = C_p m \bigl[59.9^\circ\mathrm{C} - 49.9^\circ\mathrm{C}\bigr] = C_p m (10\Delta^\circ\mathrm{C})$$ There's no need to add or subtract anything else. Alternatively, you could convert the temperatures to Kelvins before plugging them in. 
Converting to Kelvins means that you now have $$t_i - T_0 = 323.2\text{ K}$$ $$t_f - T_0 = 333.2\text{ K}$$ Again, you have to stick a couple extra terms into the formula: $$Q = C_p m \bigl[(t_f - T_0) - (t_i - T_0)\bigr] = C_p m \bigl[333.2\text{ K} - 323.2\text{ K}\bigr] = C_p m (10\Delta\mathrm{K})$$ By definition, the Kelvin and Celsius scales have degrees of the same size, so $\Delta^\circ\mathrm{C} = \Delta\mathrm{K}$, so these two results are the same. But because of the special properties of the temperature $T_0$, you can also show that $\Delta\mathrm{K} = 1\text{ K}$; in other words, when you're dealing with Kelvins, it's safe to leave off the deltas and not worry too much about when $t$ is a temperature and when it's a temperature difference. That only works for Kelvins, though, not degrees Celsius.
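The arithmetic above can be checked directly; here is a tiny Python sketch of the correct computation next to the erroneous one:

```python
# Specific heat of silver in J/(g*K), mass in grams, temperatures in Celsius
c_p, m = 0.235, 10.0
t_i, t_f = 50.0, 60.0

# A temperature difference is the same in degrees Celsius and in Kelvin
dT_celsius = t_f - t_i
dT_kelvin = (t_f + 273.15) - (t_i + 273.15)

Q = c_p * m * dT_celsius                   # 23.5 J, the correct result
Q_wrong = c_p * m * (dT_celsius + 273.0)   # 665.05 J, the erroneous result
```

`dT_celsius` and `dT_kelvin` agree (up to floating-point rounding), so nothing should be added before multiplying.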
{ "domain": "physics.stackexchange", "id": 6296, "tags": "homework-and-exercises, thermodynamics, absolute-units" }
Are the Venusian "continents" likely to have existed before the global resurfacing event?
Question: A topographic map of the surface of Venus shows the large highland areas Ishtar Terra, Aphrodite Terra and Lada Terra that have occasionally been described as "continents", plus various other smaller regions such as Beta Regio. Are these "continents" likely to have existed before the global resurfacing event, or is the topography of the planet likely to have been completely altered? Answer: Ishtar Terra belongs to the tessera type of terrain, "one of the most tectonically deformed types of terrain on Venus", representing ~7.3 % of Venus' surface (Ivanov & Head 2011, Global geological map of Venus). Tessera are "dated" (relatively, from stratigraphic relationships) from the Fortunian, the oldest period: The most important features of the Fortunian are large tessera-bearing crustal plateaus (e.g., Fortuna, Ovda, etc.) that form a specific class of first-order (a few thousands of kilometers across) highs on Venus [...] The lowest stratigraphic position of tessera and its relative abundance show that this unit is of extreme importance as a probable ‘window’ into the geological past of Venus. Stratigraphy of defined and mapped geologic units on Venus. The visible geologic history of Venus consists of the Fortunian, Guineverian, and Atlian Periods. The major portion of the surface of Venus (about 70%) was resurfaced during the Guineverian Period. See text for discussion. T is the mean model absolute age of the surface Further works by Ivanov & Head detail the history of volcanism (2013) and tectonism (2015) on Venus. They consider tessera themselves as a resurfacing process: The globally observed stratigraphic relationships among the tectonic and volcanic units that make up the absolute majority (~96%) of the surface of Venus divide the observable portion of its geologic history into three different episodes, each with a specific style of resurfacing. These are as follows: (1) Global tectonic regime, when tectonic resurfacing dominated. 
Exposed occurrences of these units comprise about 20% of the surface of Venus. (2) Global volcanic regime, when volcanism was the most important process of resurfacing and resurfaced about 60% of Venus. (3) Network rifting-volcanism regime, when both tectonic and volcanic activities were about equally important. During this regime, about 14% of the surface of Venus was modified. So, to summarize, "continent-like" (tessera) terrains like Ishtar Terra have existed for a long (yet unknown) time, and have survived the resurfacing from volcanism that covers most of the surface today. But they are themselves considered to represent a (tectonic) resurfacing process that has affected older (and unknown) pre-Fortunian terrains. A global correlation chart that shows the three major regimes of resurfacing on Venus: global tectonic regime, global volcanic regime, and Network-rifting-volcanism regime.
{ "domain": "astronomy.stackexchange", "id": 4874, "tags": "venus, geology, volcanism" }
Can doppler shift be used to find the MH370 black boxes?
Question: The Australian ship Ocean Shield has detected multiple pings from the black boxes onboard the missing Malaysia Airlines Flight 370, specifically on 4 lines of bearing according to this article. The same article also states that they need a few more lines of bearing in order to further narrow down the search area. So my question is pretty much as in the title: can they use Doppler shift to help figure out where the pings are coming from? IIRC, they used the Doppler shift from the satellite data to figure out that the aircraft was most likely on the southern of the two arcs that were proposed several weeks ago. Answer: Doppler shift occurs only when the sender, the receiver, or both are moving relative to each other. As the black boxes rest at the bottom of the ocean and the search ships move relatively slowly, there won't be any significant Doppler shift. However, if the Ocean Shield receives several signals at different locations (locations of the Ocean Shield itself), the position of the black box can be triangulated. The Doppler shift of the signals the satellites received was most probably used to determine the speed of MH370 and to extrapolate the most likely area in which the plane might have gone down.
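The triangulation mentioned at the end can be sketched numerically. Assuming (hypothetically) that each detected ping yields a range estimate from its travel time, three ship positions are enough to trilaterate the pinger in two dimensions:

```python
import math

# Hypothetical ship positions (km) and the pinger location used to fake ranges
ships = [(0.0, 0.0), (8.0, 1.0), (3.0, 7.0)]
true_pinger = (4.0, 3.0)
ranges = [math.dist(s, true_pinger) for s in ships]

# Subtracting the first range equation from the others linearises the problem:
#   2(x_i - x_0) x + 2(y_i - y_0) y
#     = r_0^2 - r_i^2 + x_i^2 - x_0^2 + y_i^2 - y_0^2
(x0, y0), r0 = ships[0], ranges[0]
rows, rhs = [], []
for (xi, yi), ri in zip(ships[1:], ranges[1:]):
    rows.append((2 * (xi - x0), 2 * (yi - y0)))
    rhs.append(r0 ** 2 - ri ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)

# Two equations, two unknowns: solve by Cramer's rule
det = rows[0][0] * rows[1][1] - rows[0][1] * rows[1][0]
x = (rhs[0] * rows[1][1] - rhs[1] * rows[0][1]) / det
y = (rows[0][0] * rhs[1] - rows[1][0] * rhs[0]) / det
```

With noisy real-world ranges one would use more ship positions and a least-squares fit, but the geometry is the same.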
{ "domain": "physics.stackexchange", "id": 13042, "tags": "experimental-physics, everyday-life, doppler-effect" }
Choose betwen a few message values depending on 3 variables
Question: This code writes out the title and the body of a message depending basically on the combination of 3 variables. How could I refactor this nicely? A factory pattern? How, more precisely? A mapping array? But how? Defer the conditions to another function WriteOutMessage($moderation, $moderation_state, $Authorship)? Try to somehow have the same variable names as the possible combination values? switch ($moderation_state) { case 'draft': if ($moderation_state_original == 'draft') { if ($current_user_name != $author_name) { //Envoi à l'auteur $send_to = $author_email; $params['node_title'] = '[DUD] Modifications de ' . $current_user_name . ' pour l\'article ' . $article->label() . ' de ' . $author_name; $params['message'] = $current_user_name . " vient de modifier l'article " . $article->get('title') ->getString() . " sans changer son état. Allez vite voir...\n"; $params['message'] .= "C'est là: " . $base_url . "/node/" . $article->get('nid') ->getString() . "\n"; $params['message'] .= "A Bientôt, \n"; $params['message'] .= "Le Dioude."; break; } else { //pas d'envoi d'e-mails $params['node_title'] = ''; $params['message'] = ''; $send = FALSE; break; } } else { //pas d'envoi d'e-mails $params['node_title'] = ''; $params['message'] = ''; $send = FALSE; break; } break; case 'propose_a_la_relecture': if ($moderation_state_original == 'propose_a_la_relecture') { if ($current_user_name != $author_name) { //Envoi à l'auteur $send_to = $author_email; $params['node_title'] = '[DUD] Modifications de ' . $current_user_name . ' pour l\'article ' . $article->label() . ' de ' . $author_name; $params['message'] = $current_user_name . " vient de modifier l'article " . $article->get('title') ->getString() . " sans changer son état. Allez vite voir...\n"; $params['message'] .= "C'est là: " . $base_url . "/node/" . $article->get('nid') ->getString() . 
"\n"; $params['message'] .= "A Bientôt, \n"; $params['message'] .= "Le Dioude."; break; } else { //pas d'envoi d'e-mails $params['node_title'] = ''; $params['message'] = ''; $send = FALSE; break; } } else { //TODO replace with dosi list //retrieve all dosi members $send_to = "dd@ue"; $params['node_title'] = '[DUD] Nouvel article proposé à la publication par ' . $author_name . ': ' . $article->label(); $params['message'] = "L'article " . $article->get('title') ->getString(); $params['message'] .= " vient d'être proposé à la publication par " . $author_name . ".\n"; $params['message'] .= "Qu'il en soit remercié pour le temps qu'il contribue ainsi à faire gagner à ses collègues et à tous les AMUsagers.\n"; $params['message'] .= "N'hésitez pas à faire avancer le workflow en assurant une relecture.\n"; $params['message'] .= "Un petit pas pour la DOSI, mais à coup sûr un grand pas pour la qualité du service public français.\n"; $params['message'] .= "C'est là: " . $base_url . "/node/" . $article->get('nid') ->getString() . "\n"; $params['message'] .= "Le Dioude."; break; } case 'relecture_1_ok': if ($moderation_state_original == 'relecture_1_ok') { if ($current_user_name != $author_name) { //Envoi à l'auteur $send_to = $author_email; $params['node_title'] = '[DUD] Modifications de ' . $current_user_name . ' pour l\'article ' . $article->label() . ' de ' . $author_name; $params['message'] = $current_user_name . " vient de modifier l'article " . $article->get('title') ->getString() . " sans changer son état. Allez vite voir...\n"; $params['message'] .= "C'est là: " . $base_url . "/node/" . $article->get('nid') ->getString() . "\n"; $params['message'] .= "A Bientôt, \n"; $params['message'] .= "Le Dioude."; break; } else { //pas d'envoi d'e-mails $params['node_title'] = ''; $params['message'] = ''; $send = FALSE; break; } } else { //TODO replace with dosi list $send_to = "gg@gg.fr"; $params['node_title'] = '[DUD] Relecture 1 validée pour l\'article ' . $article->label() . 
" de " . $author_name; $params['message'] = "L'article " . $article->get('title') ->getString() . " de " . $author_name . " vient d'être passé dans l'état Relecture 1 ok par le relecteur " . $current_user_name; $params['message'] .= "\nC'est là: " . $base_url . "/node/" . $article->get('nid') ->getString() . "\n"; $params['message'] .= "Merci de poursuivre l'effort en assurant la 2eme relecture, dernière étape avant publication.\n"; $params['message'] .= "Le Dioude."; break; } case 'relecture_2_ok': if ($moderation_state_original == 'relecture_2_ok') { if ($current_user_name != $author_name) { //Envoi à l'auteur $send_to = $author_email; $params['node_title'] = '[DUD] Modifications de ' . $current_user_name . ' pour l\'article ' . $article->label() . ' de ' . $author_name; $params['message'] = $current_user_name . " vient de modifier l'article " . $article->get('title') ->getString() . " sans changer son état. Allez vite voir...\n"; $params['message'] .= "C'est là: " . $base_url . "/node/" . $article->get('nid') ->getString() . "\n"; $params['message'] .= "A Bientôt, \n"; $params['message'] .= "Le Dioude."; break; } else { //pas d'envoi d'e-mails $params['node_title'] = ''; $params['message'] = ''; $send = FALSE; break; } } else { //retrieve all rédacteur en chef $query = \Drupal::entityQuery('user'); $nids = $query->execute(); foreach ($nids as $nid) { $user = \Drupal\user\Entity\User::load($nid); if ($user->hasRole('redacteur_en_chef')) { $emails_redacteur_en_chef[] = $user->getEmail(); } } $send_to = implode(',', $emails_redacteur_en_chef); $params['node_title'] = '[DUD] Relecture 2 ok pour l\'article ' . $article->label() . " de " . $author_name; $params['message'] = "L'article " . $article->get('title') ->getString() . " de " . $author_name . " vient d'être passé dans l'état Relecture 2 ok par le relecteur " . $current_user_name; $params['message'] .= "\nC'est là: " . $base_url . "/node/" . $article->get('nid') ->getString() . 
"\n"; $params['message'] .= "El Dioudolo."; break; } case 'published': if ($moderation_state_original == 'published') { if ($current_user_name != $author_name) { //retrieve all rédacteur en chef $query = \Drupal::entityQuery('user'); $nids = $query->execute(); foreach ($nids as $nid) { $user = \Drupal\user\Entity\User::load($nid); if ($user->hasRole('redacteur_en_chef')) { $emails_redacteur_en_chef[] = $user->getEmail(); } } $send_to = implode(',', $emails_redacteur_en_chef); $send_to .= ','; $send_to .= $author_email; $params['node_title'] = '[DUD] Modifications de ' . $current_user_name . ' pour l\'article ' . $article->label() . ' de ' . $author_name; $params['message'] = $current_user_name . " vient de modifier l'article " . $article->get('title') ->getString() . " sans changer son état. Allez vite voir...\n"; $params['message'] .= "C'est là: " . $base_url . "/node/" . $article->get('nid') ->getString() . "\n"; $params['message'] .= "A Bientôt, \n"; $params['message'] .= "Le Dioude."; break; } else { //retrieve all rédacteur en chef $query = \Drupal::entityQuery('user'); $nids = $query->execute(); foreach ($nids as $nid) { $user = \Drupal\user\Entity\User::load($nid); if ($user->hasRole('redacteur_en_chef')) { $emails_redacteur_en_chef[] = $user->getEmail(); } } $send_to = implode(',', $emails_redacteur_en_chef); $params['node_title'] = '[DUD] Modifications de ' . $current_user_name . ' pour l\'article ' . $article->label() . ' de ' . $author_name; $params['message'] = $current_user_name . " vient de modifier l'article " . $article->get('title') ->getString() . " sans changer son état. Allez vite voir...\n"; $params['message'] .= "C'est là: " . $base_url . "/node/" . $article->get('nid') ->getString() . "\n"; $params['message'] .= "A Bientôt, \n"; $params['message'] .= "Le Dioude."; break; } } else { // $send_to=$article->getOwner()->getEmail(); //TODO replace with dosi list $send_to = "zz@zz"; $params['node_title'] = '[DUD] Nouvel article publié par ' . 
$author_name . ": " . $article->label(); $params['message'] = "L'article " . $article->get('title') ->getString() . " de" . $author_name . " vient d'être publié.\n"; $params['message'] = "Un grand merci collectif à l'auteur: " . $author_name . ". La DOSI l'aime. Ses colllègues l'aiment. AMU entière l'aime. Gloire à toi " . $author_name . " !!!"; $params['message'] .= "\nPrécipitez-vous: " . $base_url . "/node/" . $article->get('nid') ->getString() . "\n"; $params['message'] .= "Merci à tous pour votre participation. \n"; $params['message'] .= "Le Dioude."; break; } case 'mise_a_jour_necessaire': //TODO replace with dosi list $send_to = "dd@dd.fr"; $params['node_title'] = '[DUD] Mise à jour souhaitée pour: ' . $article->label(); $params['message'] = "L'article " . $article->get('title')->getString(); $params['message'] .= " crée par " . $author_name . " "; $params['message'] .= "nécessite une mise à jour. \nUn cycle de relecture est nécessaire avant sa remise en publication:\n"; $params['message'] .= "C'est là: " . $base_url . "/node/" . $article->get('nid') ->getString() . "\n"; $params['message'] .= "A Bientôt, \n"; $params['message'] .= "Le Dioude."; break; default: //pas d'envoi d'e-mails $params['node_title'] = ''; $params['message'] = ''; $send = FALSE; break; } Answer: This is how I would rewrite the code to be a lot more read- and maintainable. The most important point was the D.R.Y. (Don't Repeat Yourself) which was already pointed out by @mickmackusa When you have larger texts with variables make use of the heredoc syntax. 
If you have a return/break/continue/exit/die in an if construct, you don't need the else. Example:

<?php
if ($somevalue === true) {
    if ($someOtherValue === true) {
        foo();
        return 3;
    } else {
        bar();
        return 2;
    }
} else {
    baz();
    return 3;
}

can be replaced with:

<?php
if ($somevalue === true) {
    if ($someOtherValue === true) {
        foo();
        return 3;
    }
    bar();
    return 2;
}
baz();
return 3;

Also try to use === and !== instead of == and != whenever possible.

Here is the refactored code (~70 lines less):

<?php
function getChiefRedactorEmailAddresses(): array
{
    $emails_redacteur_en_chef = [];
    //retrieve all rédacteur en chef
    $query = \Drupal::entityQuery('user');
    $nids = $query->execute();
    foreach ($nids as $nid) {
        $user = \Drupal\user\Entity\User::load($nid);
        if ($user->hasRole('redacteur_en_chef')) {
            $emails_redacteur_en_chef[] = $user->getEmail();
        }
    }
    return $emails_redacteur_en_chef;
}

// convenience variables to make code/messages more readable
$article_detail_url = $base_url . '/node/' . $article->get('nid')->getString();
$article_title = $article->get('title')->getString();
$article_label = $article->label();

// since the exact same title is used 6 times, we can safely assign it only once to make the code more readable
$node_title_mod = "[DUD] Modifications de $current_user_name pour l'article $article_label de $author_name";

// this message is used twice
$message_mod = <<<EOF
$current_user_name vient de modifier l'article $article_title sans changer son état. Allez vite voir...
C'est là: $article_detail_url
A Bientôt,
Le Dioude.
EOF;

// set the default values here so we can remove this assignment from all the else statements
$params['node_title'] = '';
$params['message'] = '';
$send = false;

switch ($moderation_state) {
    case 'draft':
        if ($moderation_state_original === 'draft' && $current_user_name !== $author_name) {
            //Envoi à l'auteur
            $send_to = $author_email;
            $params['node_title'] = $node_title_mod;
            $params['message'] = $message_mod;
            break;
        }
        break;
    case 'propose_a_la_relecture':
        if ($moderation_state_original === 'propose_a_la_relecture') {
            if ($current_user_name !== $author_name) {
                //Envoi à l'auteur
                $send_to = $author_email;
                $params['node_title'] = $node_title_mod;
                $params['message'] = $message_mod;
                break;
            }
            //pas d'envoi d'e-mails
            break;
        }
        //TODO replace with dosi list
        //retrieve all dosi members
        $send_to = "dd@ue";
        $params['node_title'] = "[DUD] Nouvel article proposé à la publication par $author_name: $article_label";
        $params['message'] = <<<EOF
L'article $article_title vient d'être proposé à la publication par $author_name
Qu'il en soit remercié pour le temps qu'il contribue ainsi à faire gagner à ses collègues et à tous les AMUsagers.
N'hésitez pas à faire avancer le workflow en assurant une relecture.
Un petit pas pour la DOSI, mais à coup sûr un grand pas pour la qualité du service public français.
C'est là: $article_detail_url
Le Dioude.
EOF;
        break;
    case 'relecture_1_ok':
        if ($moderation_state_original === 'relecture_1_ok') {
            if ($current_user_name !== $author_name) {
                //Envoi à l'auteur
                $send_to = $author_email;
                $params['node_title'] = $node_title_mod;
                $params['message'] = <<<EOF
$current_user_name vient de modifier l'article $article_title sans changer son état. Allez vite voir...
C'est là: $article_detail_url
A Bientôt,
Le Dioude.
EOF;
                break;
            }
            //pas d'envoi d'e-mails
            break;
        }
        //TODO replace with dosi list
        $send_to = "gg@gg.fr";
        $params['node_title'] = "[DUD] Relecture 1 validée pour l'article $article_label de $author_name";
        $params['message'] = <<<EOF
L'article $article_title de $author_name vient d'être passé dans l'état Relecture 1 ok par le relecteur $current_user_name
C'est là: $article_detail_url
Merci de poursuivre l'effort en assurant la 2eme relecture, dernière étape avant publication.
Le Dioude.
EOF;
        break;
    case 'relecture_2_ok':
        if ($moderation_state_original === 'relecture_2_ok') {
            if ($current_user_name !== $author_name) {
                //Envoi à l'auteur
                $send_to = $author_email;
                $params['node_title'] = $node_title_mod;
                $params['message'] = <<<EOF
$current_user_name vient de modifier l'article $article_title sans changer son état. Allez vite voir...
C'est là: $article_detail_url
A Bientôt,
Le Dioude.
EOF;
                break;
            }
            //pas d'envoi d'e-mails
            break;
        }
        $send_to = implode(',', getChiefRedactorEmailAddresses());
        $params['node_title'] = "[DUD] Relecture 2 ok pour l'article $article_label de $author_name";
        $params['message'] = <<<EOF
L'article $article_title de $author_name vient d'être passé dans l'état Relecture 2 ok par le relecteur $current_user_name
C'est là: $article_detail_url
El Dioudolo.
EOF;
        break;
    case 'published':
        if ($moderation_state_original === 'published') {
            $send_to = getChiefRedactorEmailAddresses();
            if ($current_user_name !== $author_name) {
                // since it's the same message we don't need the extra if here, we can just add the author's email
                // to the send_to array
                $send_to[] = $author_email;
            }
            $send_to = implode(',', $send_to);
            $params['node_title'] = $node_title_mod;
            $params['message'] = <<<EOF
$current_user_name vient de modifier l'article $article_title sans changer son état. Allez vite voir...
C'est là: $article_detail_url
A Bientôt,
Le Dioude.
EOF;
            break;
        }
        // $send_to=$article->getOwner()->getEmail();
        //TODO replace with dosi list
        $send_to = "zz@zz";
        $params['node_title'] = "[DUD] Nouvel article publié par $author_name: $article_label";
        $params['message'] = <<<EOF
L'article $article_title de $author_name vient d'être publié.
Un grand merci collectif à l'auteur: $author_name. La DOSI l'aime. Ses collègues l'aiment. AMU entière l'aime. Gloire à toi $author_name !!!
Précipitez-vous: $article_detail_url
Merci à tous pour votre participation.
Le Dioude.
EOF;
        break;
    case 'mise_a_jour_necessaire':
        //TODO replace with dosi list
        $send_to = "dd@dd.fr";
        $params['node_title'] = "[DUD] Mise à jour souhaitée pour: $article_label";
        $params['message'] = <<<EOF
L'article $article_title créé par $author_name nécessite une mise à jour.
Un cycle de relecture est nécessaire avant sa remise en publication:
C'est là: $article_detail_url
A Bientôt,
Le Dioude.
EOF;
        break;
    // no need for a "default:", since it doesn't add anything
}

How could you improve this code further? You could use a simple replacement template engine for the email messages and put the texts into individual files.

Example: a very simple "template engine"

<?php
$article_title = 'Best template engine ever';
$author_name = 'Donald Trump';
$article_detail_url = '/some/path/to/article';
$message = include 'includes/messages/some_message.php';

some_message.php

<?php
return <<<EOF
L'article $article_title vient d'être proposé à la publication par $author_name
Qu'il en soit remercié pour le temps qu'il contribue ainsi à faire gagner à ses collègues et à tous les AMUsagers.
N'hésitez pas à faire avancer le workflow en assurant une relecture.
Un petit pas pour la DOSI, mais à coup sûr un grand pas pour la qualité du service public français.
C'est là: $article_detail_url
Le Dioude.
EOF;
{ "domain": "codereview.stackexchange", "id": 28824, "tags": "php, factory-method" }
Physical Interpretation of a Scalar Quantity Related to Currents/Conservation Laws
Question: Let $Q_{ab} = (\psi_{;a})(\psi_{;b}) - (1/2)g_{ab}|\nabla \psi|^2$ be the energy-momentum tensor of the wave equation in some spacetime. I will use semicolons to refer to covariant differentiation and $\partial$s to refer to coordinate differentiation. Let $\pi_{ab}$ be the deformation tensor for some fixed vector field $X$. In deriving "almost conservation laws" one uses the identity $$(Q_{ab}X^b)^{;a} = (\psi^{;a}_{;a})(X^a\partial_a\psi) + (1/2)Q^{ab}\pi_{ab}.$$ Does there exist a physical interpretation of the scalar $Q^{ab}\pi_{ab}$ that is not based on the above formula? I am most interested in how one should think about this quantity in relativistic contexts, e.g. black hole geometries. P.S. Just in case anyone is tempted, I am looking for something more than "$Q^{ab}\pi_{ab}$ vanishes if the flows of $X$ are isometries." Answer: Observe the following: the Einstein-Hilbert stress-energy tensor is formally given by the first variation of the matter Lagrangian density relative to the inverse metric: $$ Q = \frac{\delta L_\phi \mathrm{dvol}_g}{\delta g^{-1}} $$ where $L_\phi = g^{-1}(\nabla\phi,\nabla\phi)$ is the Lagrangian function for the scalar field. Further observe that $\mathcal{L}_X\mathrm{Id} = 0$, where $\mathrm{Id}:TM\to TM$ is the identity map and $\mathcal{L}_X$ is the Lie derivative. So $\mathcal{L}_X (g^{-1} g) = (\mathcal{L}_Xg^{-1})g + g^{-1}\pi = 0$. Hence $\mathcal{L}_X g^{-1} = -g^{-1}\pi g^{-1}$. This means that $$ c(Q g^{-1} \pi g^{-1}) = -c(Q \mathcal{L}_Xg^{-1}) $$ where $c(\cdot)$, the contraction operation, maps $(1,1)$-tensors to scalars. Next, recall that the Lie derivative can be defined through the one-parameter family of diffeomorphisms generated by a vector field. More precisely, let $\Psi_t$ be the one-parameter family of diffeomorphisms generated by $X$. Then we have a one-parameter family of tensor fields $G_t := \Psi_t^* g^{-1}$ defined by the pullback of the inverse metric.
For this family we have that $\mathcal{L}_Xg^{-1} = \frac{d}{dt}G_t |_{t=0}$. Holding the scalar field $\phi$ fixed, we write $\hat{L}_t = G_t(\nabla\phi,\nabla\phi) \mathrm{dvol}_{(G_t)^{-1}}$ for the corresponding one-parameter family of Lagrangian densities; then the chain rule gives $$ \frac{d}{dt} \hat{L}_t |_{t = 0} = c(Q\mathcal{L}_Xg^{-1}) $$ Note that we can in fact split $$\mathcal{L}_X (L_\phi\mathrm{dvol}_g) = \frac{d}{dt}\hat{L}_t |_{t=0} + 2 g^{-1}(d\phi, d\mathcal{L}_X\phi) \mathrm{dvol}_g$$ into the "geometric/gravity" part and the "matter" part, using that the exterior derivative commutes with the Lie derivative. Therefore the scalar quantity you wrote down is the portion of the change of the matter Lagrangian density in the direction of the vector field $X$ that is caused by the change of the (inverse) metric in that direction. If you are willing to consider the Lagrangian density as a physical quantity, this gives a "physical" meaning to your scalar quantity as the "geometric/gravity" portion of the flow of the Lagrangian density under the vector field $X$. In a not very precise sense, if the vector field $X$ is time-like, you can associate to it a time-like congruence. So this scalar measures the infinitesimal (fictitious) work "done by gravity/acceleration of frame". (I use the word fictitious here to indicate that the $X$-dependence of this scalar is somewhat analogous to the frame-dependence of the "centrifugal force" in classical mechanics.)
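For completeness, the step $\mathcal{L}_X g^{-1} = -g^{-1}\pi g^{-1}$ can be spelled out in index notation, using nothing beyond $\pi_{ab} = \mathcal{L}_X g_{ab}$ and $g^{ac}g_{cb} = \delta^a_{\ b}$:

```latex
0 = \mathcal{L}_X \delta^a_{\ b}
  = \mathcal{L}_X\!\left(g^{ac} g_{cb}\right)
  = \left(\mathcal{L}_X g^{ac}\right) g_{cb} + g^{ac}\,\pi_{cb}
\;\Longrightarrow\;
\mathcal{L}_X g^{ab} = -\,g^{ac} g^{bd}\,\pi_{cd},
\qquad\text{hence}\qquad
Q^{ab}\pi_{ab} = -\,Q_{ab}\,\mathcal{L}_X g^{ab}.
```

The last equality is exactly the contraction identity $c(Q g^{-1}\pi g^{-1}) = -c(Q\mathcal{L}_X g^{-1})$ written with explicit indices.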
{ "domain": "physics.stackexchange", "id": 1426, "tags": "general-relativity, classical-mechanics" }
Bayes-consistent cost-sensitive classification
Question: In cost-sensitive classification, we have a confusion (or cost) matrix $C$, where $C(i,j)$ is the cost incurred for predicting label $i$ when nature specifies $j$. The costs are non-negative, but no other restriction (such as symmetry) need be imposed. In the classic setting (PAC and its multiclass generalization), $C(i,j)=1[i\neq j]$. The notion of Bayes-consistency carries over naturally to the cost-sensitive setting. For any joint distribution $P$ over the instances $\mathcal{X}$ and labels $\mathcal{Y}$, we define the risk of a predictor $f:\mathcal{X}\to \mathcal{Y}$ as $$ R(f)= \mathbb{E}_{(X,Y)\sim P} C(f(X),Y). $$ Letting $f^*$ be a minimizer of $R(\cdot)$ over all measurable $f$, we define the Bayes-optimal risk as $R^*:=R(f^*)$. Question: What is known about Bayes-consistent classification in the cost-sensitive setting? For example, when $\mathcal{X}$ is a metric space and $C(i,j)=1[i\neq j]$, various nearest-neighbor methods are known to be strongly Bayes-consistent. Is anything known about other cost matrices? Answer: I'm not sure if this is what you're looking for, but people have studied consistency of surrogate risk minimization. There, we define a surrogate loss function $L$ and a link $\psi$. We first minimize surrogate loss on our dataset, yielding some surrogate hypothesis $h$. Then we define $f(x) = \psi(h(x))$. This procedure is roughly consistent if, as data $\to \infty$, we have $f \to f^*$. The question is, given e.g. $C$, what are some nice, consistent surrogate losses? Example: for binary $0-1$ loss, one can use hinge loss or logistic loss as a surrogate, and one can show this is consistent as long as the hypothesis class is rich enough. Tewari and Bartlett (2007) study multiclass classification, but I think not cost-sensitive, and relate consistency to calibration. A more recent work is Agarwal and Agarwal (2015), which has some more references. [1] Tewari, Bartlett. On the Consistency of Multiclass Classification Methods. 
JMLR 2007. https://www.jmlr.org/papers/volume8/tewari07a/tewari07a.pdf [2] A. Agarwal, S. Agarwal. On consistent surrogate risk minimization and property elicitation. COLT 2015. http://proceedings.mlr.press/v40/Agarwal15.pdf
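As a concrete toy illustration of the two-step recipe described above — minimize a convex surrogate to obtain real-valued scores $h$, then apply the link $\psi$ to get labels — here is a small NumPy sketch using the logistic surrogate and the sign link. The function names and the data are illustrative, not taken from the cited papers:

```python
import numpy as np

def fit_logistic_surrogate(X, y, lr=0.1, steps=2000):
    """Minimize the logistic surrogate loss mean(log(1 + exp(-y * <w, x>)))
    over linear scores h(x) = <w, x> by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        margins = y * (X @ w)
        # gradient of the mean logistic loss: -mean(y * x / (1 + exp(margin)))
        grad = -(X * (y * (1.0 / (1.0 + np.exp(margins))))[:, None]).mean(axis=0)
        w -= lr * grad
    return w

def link(scores):
    """The link psi: real-valued score -> hard label in {-1, +1}."""
    return np.where(scores >= 0, 1, -1)

# toy linearly separable data
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])

w = fit_logistic_surrogate(X, y)
f = link(X @ w)  # the final classifier is f = psi o h
```

Consistency results of the kind discussed in [1] and [2] concern exactly this composition: conditions on the surrogate and link under which the risk of $f = \psi \circ h$ approaches the Bayes risk as data grows.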
{ "domain": "cstheory.stackexchange", "id": 5235, "tags": "machine-learning, lg.learning, online-learning" }
What is a good interpretation of this 'learning curve' plot?
Question: I read about the validation_curve and how to interpret it to detect over-fitting or under-fitting, but how can I interpret the plot when the quantity shown is the error, like this: The X-axis is the number of training examples. The red line is the training error. The green line is the validation error. Thanks Answer: The X axis is the number of instances in the training set, so this plot is a data ablation study: it shows what happens for different amounts of training data. The Y axis is an error score, so a lower value means better performance. In the leftmost part of the graph, the fact that the error is zero on the training set until around 6000 instances points to overfitting, and the very large difference in error between the training and validation sets confirms this. In the right half of the graph the difference in performance starts to decrease and the performance on the validation set seems to become stable. The fact that the training error becomes higher than zero is good: it means that the model starts generalizing instead of just recording every detail of the data. Yet the difference is still important, so there is still a high amount of overfitting.
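For reference, a curve like this can be produced directly with scikit-learn's `learning_curve`, converting accuracy scores into errors. The digits dataset and the decision tree below are stand-ins, since the original model isn't specified:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
sizes, train_scores, val_scores = learning_curve(
    DecisionTreeClassifier(random_state=0), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5, scoring="accuracy")

train_error = 1.0 - train_scores.mean(axis=1)  # red line: error on the training folds
val_error = 1.0 - val_scores.mean(axis=1)      # green line: error on the validation folds

# An unpruned tree memorizes its training set, so the training error stays near
# zero while the validation error is higher; the gap between the two curves is
# the overfitting described above, and it should shrink as the training size grows.
```

Plotting `train_error` and `val_error` against `sizes` reproduces the two curves in the question.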
{ "domain": "datascience.stackexchange", "id": 7784, "tags": "machine-learning, classification, boosting" }