Columns: anchor (string, 0-150 chars), positive (string, 0-96k chars), source (dict)
Rate of Change of Extensive Property Across Control Volume Term in Reynolds Transport Theorem
Question: The basic form of Reynold's Transport Theorem can be written as: $${DB_{sys}\over Dt}={\partial B_{CV} \over \partial t}−\dot B_{in}+\dot B_{out}$$ Now my question is, shouldn't ${\partial B_{CV} \over \partial t}=0$, since there is no way that a property inside the control volume will just randomly disappear or appear? For example when we deal with the classic three inlet-one outlet problem, we always equate ${\partial B_{CV} \over \partial t}=0$, when we try to find the velocity of fluid at the outlet. If this is the case, then why do we include the ${\partial B_{CV} \over \partial t}$ term at all? Answer: $\partial B_{CV}/\partial t$ is a generation term. For a classic fluid flow problem such as you describe it's equal to zero, but there are many scenarios where it might not be. Consider a model of the level of CO$_2$ in a room. The ventilation would give a flux in and out of the room, but the occupants are generating CO$_2$ by breathing, so the generation term is positive. A control volume in which components of the flow are reacting, such as combustion, would have a generation term, either positive or negative, in the equations for energy, oxygen, fuel and combustion products.
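To make this concrete with the CO$_2$ example (an added illustration with made-up numbers, not part of the original answer): if the system's $B$ changes only because CO$_2$ is being generated, we can identify $DB_{sys}/Dt$ with a generation rate $\dot B_{gen}$, and the theorem above rearranges to $$\frac{\partial B_{CV}}{\partial t} = \dot B_{gen} + \dot B_{in} - \dot B_{out}.$$ With ventilation bringing in $\dot B_{in} = 1\ \mathrm{g/min}$, removing $\dot B_{out} = 3\ \mathrm{g/min}$, and the occupants exhaling $\dot B_{gen} = 2.5\ \mathrm{g/min}$, the CO$_2$ in the room accumulates at $2.5 + 1 - 3 = 0.5\ \mathrm{g/min}$; $\partial B_{CV}/\partial t$ only vanishes when generation exactly balances the net outflow, which is why the term cannot be dropped in general.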
{ "domain": "physics.stackexchange", "id": 79943, "tags": "fluid-dynamics, flow" }
Sharpen reflected image
Question: I'm currently working on a project which is a kind of hologram. As an object I have a display and with a plexiglass I use the reflection so that I can see the display inside the plexiglass. The experimental setup looks something like this: https://i.stack.imgur.com/8js7N.jpg But the problem is that the picture in the plexiglass is blurred: https://i.stack.imgur.com/ZKA1T.jpg (The blur is also visible in reality, so it has nothing to do with the photo) My idea is to sharpen the image with a plano-convex lens. However, I cannot find a plano-convex lens large enough to cover the whole display. Besides, I would need very high diopters to keep the focal length as short as possible. Do you have an idea how I can sharpen the image in the plexiglas best? Answer: It's very likely that the problem is that you're getting reflections off of both faces of the plexiglas plate, so you're getting two images on top of each other, slightly offset. You can fix this by putting a linear polarizer (e.g., from Edmund) in front of your image source, and tilting the plexiglas at about 57 degrees (Brewster's angle) to the image source. Light of one polarization will reflect off the front surface only. Effectively,there won't be any light of the other polarization.
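For reference, the quoted tilt follows from Brewster's relation; assuming a typical acrylic refractive index of roughly $n \approx 1.49$ (an assumption, check your material's datasheet): $$\theta_B = \arctan\!\left(\frac{n_2}{n_1}\right) = \arctan(1.49) \approx 56^\circ,$$ measured from the surface normal. For $n$ between about 1.49 and 1.55 this gives roughly 56-57 degrees, consistent with the figure quoted above, and at this incidence the p-polarized component is not reflected by the front face, which is what removes one of the two overlapping images.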
{ "domain": "physics.stackexchange", "id": 65826, "tags": "optics, lenses" }
Why are inert gas (especially Xenon) compounds powerful oxidizing agents?
Question: I am curious as to why compounds with inert gases, such as $\ce{XeF4}$, $\ce{XeF2}$, and $\ce{XeO3}$ are considered powerful oxidizing agents. I would attribute the phenomenon to the highly oxidized state of the noble gas which would be unstable but I am not too sure about it. Answer: Let's be clear here. Xenon by itself is not an oxidizing agent. Instead, certain compounds of xenon tend to be strong oxidizing agents. First, let's consider an analogous case - the nitro group. Molecules containing nitro groups ($\ce{-NO2}$) tend to be explosive - the nitro groups contain oxygen which facilitates combustion. Remember, combustion is just a form of oxidation. Back to xenon! Xenon, being a noble gas, shouldn't like bonding too much. It already has a complete octet of outer shell (valence) electrons. So it is likely the case that compounds of xenon are only weakly held together. Compounds of xenon can therefore be said to be relatively thermodynamically unstable. Therefore, it is also likely that compounds of xenon tend to react in the presence of other molecules - especially ones containing atoms to which fluorine can form stronger bonds. We verify the above by examining bond enthalpy data: the Xe-F bond has a bond energy of about 130 kJ/mol. Compare this to a typical C-F bond with a bond energy of 485 kJ/mol. It is no wonder therefore that compounds such as xenon difluoride are used in fluorinating (oxidizing) organic molecules. Here are some examples from Wikipedia. Why the nitro group in the second photo has been reduced to an amino group - I have no clue - and I suspect it's just an error. Further updates soon ...
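To make the bond-enthalpy argument quantitative using only the numbers quoted above (a rough added estimate that ignores the other bonds broken and formed in a real reaction): transferring one fluorine atom from xenon to carbon trades a ~130 kJ/mol Xe-F bond for a ~485 kJ/mol C-F bond, $$\Delta H \approx 130 - 485 = -355\ \mathrm{kJ\,mol^{-1}}$$ per fluorine transferred, which is why fluorination by $\ce{XeF2}$ is so strongly favoured.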
{ "domain": "chemistry.stackexchange", "id": 3166, "tags": "stability, periodic-trends, oxidation-state, noble-gases" }
How to get taxonomic tree data
Question: I am looking for taxonomy data of individual species and their classes something like the following format: { "type": "family", "parentType": "order", "parentScientificName": "hymenoptera", "commonName": "ant", "scientificName": "formicidae" } { "type": "species", "parentType": "genus", "parentScientificName": "...", "commonName": "black ant", "scientificName": "lasius niger" } Basically, from this, you can construct the full taxonomic tree. When looking at the pubmed taxonomy data I don't see how to reconstruct the tree. Please show me how. Answer: I can't leave a comment at the time of writing this (since I don't have enough reputation right now), so I'll leave this as an answer. Your question is a bit confusing. It's titled "How to get taxonomic data", but at the end of the post, you mention "Basically, from this, you can construct the full taxonomic tree. When looking at the pubmed taxonomy data I don't see how to reconstruct the tree. Please show me how." I'll try to address both your questions here. Acquiring the lineage data: Are you perhaps looking for something like this? NCBI taxonomy itself is also accessible programmatically via E-Utilities. Constructing an evolutionary tree: You could perhaps use something like the data.tree R package to construct your tree using the lineage data. Your question is a bit too vague, so I apologize if my answer is of no assistance here.
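To illustrate the E-utilities route mentioned above, here is a minimal Python sketch (added here, not part of the original answer). It assumes the public NCBI efetch/esearch endpoints and that the taxonomy XML exposes ancestors under LineageEx/Taxon elements; for heavy use NCBI asks you to supply an email or API key, which is omitted here.

```python
import requests
import xml.etree.ElementTree as ET

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def lineage(name):
    # Look up the taxonomy ID for a scientific name.
    r = requests.get(f"{BASE}/esearch.fcgi",
                     params={"db": "taxonomy", "term": name})
    taxid = ET.fromstring(r.text).findtext(".//Id")
    # Fetch the full taxonomy record, including the lineage.
    r = requests.get(f"{BASE}/efetch.fcgi",
                     params={"db": "taxonomy", "id": taxid, "retmode": "xml"})
    root = ET.fromstring(r.text)
    # Each ancestor appears as a Taxon element under LineageEx.
    return [(t.findtext("Rank"), t.findtext("ScientificName"))
            for t in root.findall(".//LineageEx/Taxon")]

# e.g. [..., ('order', 'Hymenoptera'), ('family', 'Formicidae'), ('genus', 'Lasius')]
print(lineage("Lasius niger"))
```

Each (rank, name) pair gives one edge of the taxonomic tree, so repeating this over your species list and merging the lineages reconstructs the tree in the parent/child format shown in the question.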
{ "domain": "biology.stackexchange", "id": 9766, "tags": "literature, data" }
Completeness in tensor product basis
Question: I have a (probably simple) question about the completeness relation in dirac notation. Mostly I just want to check if I am understanding this correctly, because I can't actually find this mentioned anywhere.$\newcommand{\ket}[1]{\left|#1\right>}$ $\newcommand{\bra}[1]{\left<#1\right|}$ So for a complete set of states $\ket{\alpha}$ we often use the completeness relation to rewrite operator products. $$\sum_{\alpha} \ket{\alpha} \bra{\alpha} = \mathbb{I}.$$ Here $\mathbb{I}$ is the identity operator. In many situations we use a basis of tensor product states, i.e. a basis $\ket{\alpha} \otimes \ket{\beta}$, where $\ket{\alpha}$ should be understood to be in a different Hilbert space than $\ket{\beta}$ (maybe they refer to different particles). I am wondering what the completeness relation would look like in this case. Let's say I have an operator $A$ that acts only on the $\ket{\alpha}$ space. Are all the following expressions valid? $$(1) \qquad A = \sum_{\alpha} \sum_{\alpha'} \ket{\alpha'}\bra{\alpha'} A \ket{\alpha} \bra{\alpha}$$ $$(2)\qquad A = \sum_{\alpha} \sum_{\alpha'} \sum_{\beta} \ket{\alpha'}\bra{\alpha'} A \ket{\alpha} \ket{\beta} \bra{\beta} \bra{\alpha} $$ $$(2)\qquad A = \sum_{\alpha} \sum_{\beta} A \ket{\alpha} \ket{\beta} \bra{\beta} \bra{\alpha}. $$ I realise these examples are pretty arbitrary, but I hope they illustrate my question. Do the completeness relations still hold separately, also when the total Hilbert space $\mathcal{H} = \mathcal{H}_{\alpha} + \mathcal{H}_{\beta}$ now has the complete basis $\ket{\alpha} \otimes \ket{\beta}$. Am I still allowed to just insert complete outer products of either $\ket{\alpha}$ or $\ket{\beta}$ where convenient in my equations, or should I be more careful. I think what I am doing is not wrong, since in essence I am just inserting identity operators of the two different spaces, but I also realise that what I am doing is far from rigorous. Any help would be appreciated! Answer: Let's say I have an operator A that acts only on the $|\alpha\rangle$ space. If the Hilbert space under consideration is the tensor product space $\mathcal H = \mathcal H_\alpha \otimes \mathcal H_\beta$, then you can't act on the elements of $\mathcal H$ with an operator $A$ which acts only on $\mathcal H_\alpha$ alone. You must consider the operator $A \otimes \mathbf 1_\beta$, with $\mathbf 1_\beta$ being the identity operator on $\mathcal H_\beta$. The completeness relation on $\mathcal H_\alpha\otimes \mathcal H_\beta$ takes the form $$\mathbf 1= \sum_{\alpha,\beta} \bigg(|\alpha\rangle\otimes |\beta\rangle\bigg)\bigg(\langle \alpha|\otimes \langle \beta|\bigg)$$ You could use this relation to yield (dropping the tensor product symbols for brevity, and to match your notation) $$A \otimes \mathbf 1_\beta = \sum_{\alpha,\alpha',\beta,\beta'} |\alpha\rangle|\beta\rangle A_{\alpha\alpha'}\delta_{\beta\beta'} \langle \alpha'|\langle\beta'| = \sum_{\alpha,\alpha',\beta} |\alpha\rangle|\beta\rangle A_{\alpha\alpha'}\langle\alpha'|\langle \beta|$$ because $$\langle\alpha|\langle\beta| (A\otimes \mathbf 1_\beta) |\alpha'\rangle|\beta'\rangle = \langle\alpha|\langle\beta|\bigg(|A\alpha\rangle|\mathbf 1_\beta\beta'\rangle\bigg) \equiv \langle\alpha|A\alpha\rangle \cdot \langle \beta|\beta'\rangle = A_{\alpha\alpha'} \delta_{\beta \beta'}$$
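As a concrete numerical sanity check of the product-basis completeness relation and the $A\otimes\mathbf 1_\beta$ construction (an added illustration, not part of the original answer), small random matrices make the point quickly:

```python
import numpy as np

da, db = 3, 2                      # dims of H_alpha and H_beta
A = np.random.rand(da, da)         # operator acting on H_alpha only
A_full = np.kron(A, np.eye(db))    # A tensor 1_beta on the product space

# Product basis |alpha>|beta> as Kronecker products of standard basis vectors
basis = [np.kron(np.eye(da)[:, i], np.eye(db)[:, j])
         for i in range(da) for j in range(db)]

# Completeness: the sum of |ab><ab| is the identity on the product space
identity = sum(np.outer(v, v) for v in basis)
assert np.allclose(identity, np.eye(da * db))

# Inserting the identity on both sides leaves A tensor 1 unchanged
assert np.allclose(identity @ A_full @ identity, A_full)
```

So inserting the full product-basis identity wherever convenient is safe, as long as the operator is understood as $A\otimes\mathbf 1_\beta$ rather than $A$ alone.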
{ "domain": "physics.stackexchange", "id": 73791, "tags": "quantum-mechanics, hilbert-space, operators" }
Why was Uranus named what it was?
Question: Why was Uranus named what it was, and who came up with the name? Answer: Herschel, the discoverer of the planet named it Georgium Sidus, "George's star" after his patron, King George III of Great Britain. The name was not popular outside of Great Britain, and there were various other proposals. It was Johann Elert Bode who proposed Uranus, the Latin form of the Greek god of the sky. It fits with the existing planets having the names of Roman Gods, and just as Saturn was the father of Zeus, so Uranus was the father of Saturn.
{ "domain": "astronomy.stackexchange", "id": 3293, "tags": "naming, uranus" }
One Hot Encoding for any kind of dataset
Question: How can I make a one hot encoding for a unknown dataset which can iterate and check the dytype of the dataset and do one hot encoding by checking the number of unique values of the columns, also how to keep track of the new one hot encoded data with the original dataset? Answer: I would recommend to use the one hot encoding package from category encoders and select the columns you want to using pandas select dtypes. import numpy as np import pandas as pd from category_encoders.one_hot import OneHotEncoder pd.options.display.float_format = '{:.2f}'.format # to make legible # make some data df = pd.DataFrame({'a': ['aa','bb','cc']*2, 'b': [True, False] * 3, 'c': [1.0, 2.0] * 3}) cols_encoding = df.select_dtypes(include='object').columns ohe = OneHotEncoder(cols=cols_encoding) encoded = ohe.fit_transform(df) Note that you can change the way you handle unseen data with handle_unknown: str options are ‘error’, ‘return_nan’, ‘value’, and ‘indicator’. The default is ‘value’. Warning: if indicator is used, an extra column will be added in if the transform matrix has unknown categories. This can cause unexpected changes in dimension in some cases.
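The question also asks how to keep track of the encoded columns relative to the original dataset, which the answer above doesn't address directly. Here is a hedged sketch using pandas' built-in get_dummies (an alternative to category_encoders, added for illustration): the generated column names follow the pattern original_category, which is what lets you map each dummy column back to its source column.

```python
import pandas as pd

df = pd.DataFrame({'a': ['aa', 'bb', 'cc'] * 2,
                   'b': [True, False] * 3,
                   'c': [1.0, 2.0] * 3})

cols = df.select_dtypes(include='object').columns
# Encoded columns are named "<original>_<category>", e.g. a_aa, a_bb, a_cc,
# so the link back to the original column lives in the column names.
encoded = pd.get_dummies(df, columns=cols)
mapping = {c: c.split('_', 1)[0] for c in encoded.columns
           if c.split('_', 1)[0] in cols}
print(mapping)   # {'a_aa': 'a', 'a_bb': 'a', 'a_cc': 'a'}
```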
{ "domain": "datascience.stackexchange", "id": 7885, "tags": "data, python-3.x, one-hot-encoding" }
DAO function using Hibernate
Question: I'm just wondering if the following is the best way to write a DAO function. Should I get the entity manager before the transaction and close it after the transaction every time? Should I write transactions inside a DAO? public void sendBack(Long requestId,String comments){ EntityManager em = getEntityManager(); em.getTransaction().begin(); String update = "update CsRequestReceivers set activeInd = :activeInd,sendBackComments=:comments where requestId = :requestId and activeInd = :oldActiveInd"; em.createQuery(update).setParameter("activeInd", 0l) .setParameter("comments", comments) .setParameter("requestId", requestId) .setParameter("oldActiveInd", 1l) .executeUpdate(); em.getTransaction().commit(); em.close(); } Answer: It depends on your app context. If I'm writing a web app, I'd create a filter which would getEntityManager(), begin() the transaction, then add it to the request scope, .doFilter() and pass it to the DAO in the constructor (at the controller, of course). When it returns to the filter I'd commit(), or even rollback() if any exception happens.
{ "domain": "codereview.stackexchange", "id": 900, "tags": "java, hibernate" }
What is wrong with my calculations of Venus' orbital period?
Question: I am trying to use Kepler's second law to find the duration of Venus's orbit. I am assuming circular orbits (using Earth and Venus, so low eccentricity). Here is my process: Assuming that the radius of Earth's orbit is 150 million km, then the swept area in one day is $\frac{1}{365.25}\times\pi\times 150^2 \approx 194 \text{ million km}^2$. Venus must sweep the same area in the same time. Assuming a orbital radius of 108 million km for Venus, and using $A = \frac{\theta}{360}\pi r^2$, we can find the central angle for the swept sector, that is, the angle traveled in one Earth day: $194 = \frac{\theta}{360}\pi \times108^2 \implies \theta = 1.90 ^{\circ}$ per Earth day. Hence the orbital period should be $\frac{360}{1.90}\approx 189$ Earth days. Of course, the orbital period of Venus is $224.7$ Earth days. The difference between 189 and 224.7 appears to be well beyond the error introduced by my assumption of circular orbits. What am I doing wrong? I know this is perhaps a circuitous way of doing this calculation. My goal is to write a mathematics exercise that uses the area of sectors in a meaningful way. Answer: Kepler's laws state that a planet sweeps equal areas in equal times as it moves in its elliptical orbit. It doesn't state that different planets will sweep the same area. The "equal areas" law can be derived from the "conservation of angular momentum". In fact dA/dt = L/(2m) (where A is the area, L is the angular momentum and m is the (reduced) mass). Different planets will sweep out different areas. To calculate the period you used Kepler's third Law: $T^2 = k a^3$ (T= orbital period,a = semi-major axis). If, for convenience you take a in AU and T in Earth Years, then the constant $k=1$. For Venus, a = 0.72. so $T=\sqrt{0.72^3}=0.61$ or about 223 days. Hyperphysics has a section on Kepler's Laws
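To make the comparison explicit, here is a short Python check (added for illustration, using the same numbers as the question and the answer): the first part reproduces the flawed equal-area estimate of about 189 days, the second applies Kepler's third law and recovers about 223 days.

```python
import math

r_earth, r_venus = 150.0, 108.0                  # orbital radii, million km
area_per_day = math.pi * r_earth**2 / 365.25     # area Earth sweeps in one day

# Flawed assumption from the question: Venus sweeps the *same* area per day
theta = area_per_day / (math.pi * r_venus**2) * 360.0
print(360.0 / theta)               # ~189 days

# Kepler's third law, with a in AU and T in years (so k = 1)
a = 0.72
print(math.sqrt(a**3) * 365.25)    # ~223 days
```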
{ "domain": "astronomy.stackexchange", "id": 3261, "tags": "orbit, solar-system" }
[build] Error: Unable to find source space `/home/usr/Desktop/src`
Question: I am trying to build a workspace that is stored in: /home/usr/Desktop/ROS/workspace/, the workspace directory is clean and does not contain any hidden files hence .catkin_tools is not present. It only contains one directory src. I have tried catkin init with empty src directory and by putting a package in src. Here is the output: Catkin workspace `/home/usr/Desktop` is already initialized. No action taken. ----------------------------------------------------------- Profile: alternate Extending: [explicit] /opt/ros/melodic Workspace: /home/usr/Desktop ----------------------------------------------------------- Build Space: [missing] /home/usr/Desktop/build Devel Space: [missing] /home/usr/Desktop/devel Install Space: [unused] /home/usr/Desktop/install Log Space: [missing] /home/usr/Desktop/logs Source Space: [missing] /home/usr/Desktop/src DESTDIR: [unused] None ----------------------------------------------------------- Devel Space Layout: linked Install Space Layout: None ----------------------------------------------------------- Additional CMake Args: None Additional Make Args: None Additional catkin Make Args: None Internal Make Job Server: True Cache Job Environments: False ----------------------------------------------------------- Whitelisted Packages: None Blacklisted Packages: None ----------------------------------------------------------- ----------------------------------------------------------- WARNING: Source space `/home/usr/Desktop/src` does not yet exist. ----------------------------------------------------------- What I don't understand is why is catkin looking for workspace in /home/usr/Desktop and not the current directory? The same package is easily built with catkin_make though and I am able to run all the nodes. Also If I put src in /home/usr/Desktop I am able to build successfully. I am using ROS Melodic on Ubuntu 18.04. I am not sure about the catkin tools version but i installed it from here: https://jbohren-ct.readthedocs.io/en/pre-0.4.0-docs/installing.html Originally posted by khansaadbinhasan on ROS Answers with karma: 94 on 2019-10-13 Post score: 1 Original comments Comment by gvdhoorn on 2019-10-13:\ I am not sure about the catkin tools version but i installed it from here: https://jbohren-ct.readthedocs.io/en/pre-0.4.0-docs/installing.html Please try to always use the main documentation when using any tools. In this case that would be here: https://catkin-tools.readthedocs.io/en/latest/index.html The fact that the link you posted (eventually) leads to an account called: jbohren-forks seems to suggest it is not the main version of the documentation. Answer: the workspace directory is clean and does not contain any hidden files hence .catkin_tools is not present. No, but: Workspace: /home/usr/Desktop it's likely there is a .catkin_tools directory in /home/usr/Desktop. What is the output of ls -al $HOME/Desktop/.catkin_tools? The rest of the warnings/errors you are getting follow from this. If $HOME/Desktop is the workspace according to catkin_tools, there is no src space (as that would be in $HOME/Desktop/workspace/src). Edit: I didn't knew i had it on my desktop, but why should that matter? I can have multiple workspaces, what if desktop is a workspace, I cannot have any other workspace inside it? Having multiple workspaces is completely supported. But you just can't nest workspaces (ie: workspaces inside other workspaces). If you want to have multiple workspaces, make them siblings or place them in entirely different directories. 
It might also be informative to take a look at wiki/catkin/workspaces. Specifically: workspace overlaying. Originally posted by gvdhoorn with karma: 86574 on 2019-10-13 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by khansaadbinhasan on 2019-10-13: Thanks for the answer. It worked, I didn't knew i had it on my desktop, but why should that matter? I can have multiple workspaces, what if desktop is a workspace, I cannot have any other workspace inside it? Comment by Thimaya on 2020-11-13: Hello so i am a beginner and have a similar issue, how exactly did u work around it? catkin build returns an error saying source at home/thimaya/src doesnt exist while im in ~/ws_moveit/ . My home directory contains catkin_ws , ws_moveit and .catkin_tools . I probbed into profiles in .catkin_tools and it said source : src is it not possible to make a the folder in the home directory cuz of catkin_ws ? I tried making one in another partition of mine but i had the same error. Thanks in advance ! Comment by khansaadbinhasan on 2020-11-13: @thimaya It has been a long time since I asked that question. From what I remember, I had made my desktop into a workspace and then deleted all files except the hidden .catkin_tools. When I tried to build my workspace nested inside Desktop it gave error. You should not make your Home into a workspace. What you should do. Press Ctrl+H delete .catkin_tools. Also delete other files and folders created by ROS. Now make one folder say catkin_ws then follow tutorials to build this workspace and create packages. Also, whenever you run build or make, make sure you are in catkin_ws folder.
{ "domain": "robotics.stackexchange", "id": 33883, "tags": "ros-melodic, catkin" }
Why does a group that describes rotations always have the $su(2)$ Lie algebra?
Question: I'm reading the book Physics from Symmetry by Jakob Schwichtenberg. In part II the author explain the Lie group theory and in particular he treat the $SU(2)$ group. At a certain point the author tells us that $SU(2)$ is the covering group of the Lie algebra $su(2)$ and this involve that every group with that Lie algebra can be described using $SU(2)$. After that the author seems to take for granted that every group that describe 3D rotations must have the $su(2)$ Lie algebra and for this reason he study the $SU(2)$ representations. My question is, why every group that describe 3D rotations must have the $su(2)$ Lie algebra? Is it because $su(2$) Lie algebra encode some sort of behavior of the rotations that we assume to be true? Answer: From the comments, Yes is this the question, how do you know that there isn't another description of the rotations with another Lie algebra? because to me it looks like we have found 2 description of rotations, SU(2) and SO(3), we have seen that they have same Lie algebra and so we have assumed all the descriptions will have that Lie algebra. I believe there is a fundamental misunderstanding here, so I'll try to quickly review the story. I: Representations of SO(3) The group $SO(3)$ is the rotation group in 3D space. It is defined by its fundamental representation as the $3\times 3$ real, orthogonal matrices with determinant $1$. This representation allows us to act on elements of $\mathbb R^3$ with elements of $SO(3)$ via standard matrix multiplication. From here, it is natural to ask what effect a rotation has on something which isn't an element of $\mathbb R^3$. This leads us to the representation theory of Lie groups. Given some vector space $V$, we seek a map $\rho: SO(3) \rightarrow GL(V)$ (where $GL(V)$ is the set of invertible linear maps from $V\rightarrow V$) which has the following property: $$\forall R_1,R_2 \in SO(3), \qquad \rho(R_1\circ R_2)= \rho(R_1)\circ \rho(R_2)$$ Such a map is called a representation of $SO(3)$ on the representation space $V$, and for every rotation $R\in SO(3)$, $\rho(R)$ provides the corresponding action on elements of $V$. II: Representations of $\frak{so}(3)$ As it turns out, it is quite cumbersome to work directly with representations of groups. Fortunately, we know that at least in some connected neighborhood of the identity element, we can form a one-to-one correspondence between a Lie group $G$ and its associated Lie algebra $\frak g$, with the elements of $G$ obtained from the elements of $\frak g$ by exponentiation. Therefore, rather than seeking representations of $SO(3)$ on vector spaces, we seek representations of $\frak{so}(3)$ on them, which we can then (maybe) exponentiate to obtain representations of the original group. The fundamental representation of $\frak{so}(3)$ is the $3\times 3$ real, antisymmetric matrices. The standard basis $L_i$, $i=1,2,3$ has commutation relations $$[L_i,L_j]=\epsilon_{ijk}L_k$$ A representation of the Lie algebra $\frak{so}(3)$ on a vector space $V$ is a linear map $\varphi:{\frak{so}(3)}\rightarrow {\frak{gl}}(V)$ ( where ${\frak{gl}}(V)$ is the set of linear maps from $V\rightarrow V$) subject to the condition $$\forall g,h \in {\frak{so}}(3), \qquad \varphi\big([g,h]\big) = \big[\varphi(g),\varphi(h)\big]$$ The linearity makes this much nicer to work with. We can simply search for sets of three matrices (or operators, in the infinite-dimensional case) which obey the right commutation relations, and this will constitute a representation of $\frak{so}(3)$. 
III: Projective Representations and the Universal Cover It turns out that a representation of a Lie algebra $\frak g$ does not automatically yield a representation of the corresponding Lie group $G$ upon exponentiation if $G$ is not simply-connected (and $SO(3)$ is not). In this case, we encounter representations of $\frak{so}(3)$ which, when exponentiated, yield projective representations of $SO(3)$ instead; that is, we will have that $$\forall R_1,R_2\in SO(3), \qquad \rho(R_1\circ R_2)=c(R_1,R_2) \rho(R_1)\circ\rho(R_2)$$ where $c(R_1,R_2)$ is some constant. Physically, this is actually desirable in the sense that Wigner's theorem tells us that we can represent symmetry transformations as unitary operators up to a phase. If we restricted our attention only to properly unitary representations of our symmetry group, then we would miss some. Unfortunately, projective representations can be a pain to work with because of these extra factors which we would need to keep track of. This leads us to the concept of the universal cover. Given a Lie group $G$, its universal cover $U(G)$ is the unique simply connected Lie group which shares the same Lie algebra. Since it is simply connected, every representation of $\frak{g}$ gives rise to a genuine, non-projective representation of $U(G)$. In other words, rather than considering projective representations of $SO(3)$, we can consider genuine representations of $U\big(SO(3)\big)$. To summarize, our goal was to find representations of $SO(3)$ which can act on vector spaces other than $\mathbb R^3$. It is much more convenient to consider representations of the Lie algebra $\frak{so}(3)$, but because $SO(3)$ is not simply-connected, some representations of $\frak{so}(3)$ give rise to projective representations of $SO(3)$ rather than genuine ones. In the context of quantum mechanics, this is actually a good thing, but projective representations are annoying to work with. Because every projective representation of $SO(3)$ is a genuine representation of the universal cover $U\big(SO(3)\big) \simeq SU(2)$, we can study the latter without worrying about those pesky factors we'd need to keep track of by studying the former.
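For concreteness, here is a standard explicit example (added for illustration, not taken from the book). The fundamental representation of $\frak{so}(3)$ mentioned above can be realized by the basis $$L_1=\begin{pmatrix}0&0&0\\0&0&-1\\0&1&0\end{pmatrix},\qquad L_2=\begin{pmatrix}0&0&1\\0&0&0\\-1&0&0\end{pmatrix},\qquad L_3=\begin{pmatrix}0&-1&0\\1&0&0\\0&0&0\end{pmatrix},$$ which indeed satisfy $[L_i,L_j]=\epsilon_{ijk}L_k$, and $e^{\theta L_3}$ is the rotation by $\theta$ about the $z$-axis. The spin-$1/2$ representation is $\varphi(L_j)=-\tfrac{i}{2}\sigma_j$ with $\sigma_j$ the Pauli matrices; it obeys the same commutation relations but exponentiates to $SU(2)$ rather than $SO(3)$, which is exactly the projective-representation phenomenon described above.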
{ "domain": "physics.stackexchange", "id": 71399, "tags": "group-theory, representation-theory, rotation, lie-algebra" }
Is KF the most ionic compound?
Question: I saw somewhere (can't recall where) that KF is the most ionic compound. I expected CsF. Does the greater polarizability of Cs allow it to more easily form covalent bonds compared to K? Does this overcome the fact that K being in n = 4 should bond better with F in n = 2? Answer: Yes, this is a subtle thing. Using the Pauling electronegativities, one would expect CsF to have the larger electronegativity difference (3.2). So in principle, it should be "more ionic." Unfortunately, an ionic bond requires separating charge, so $\ce{Cs+F-}$. The problem is that $\ce{Cs+}$ is much larger than $\ce{K+}$ and so the dipole moment for a fully ionic $\ce{Cs+F-}$ would be much larger. We find that the charge isn't fully separated, likely because of the need to stabilize the large dipole moment. So some people plot a curve of "% ionic character" as determined by the actual dipole moment vs. the expected dipole moment (i.e., full charge separation). We find that this curve reaches a limit ~75-85% ionic character. IIRC, I think on that basis LiF is more ionic than KF. For more, I really like Bruce Robinson's lecture notes: http://courses.washington.edu/bhrchem/c152/Lec23.pdf via the Internet Archive
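For reference, the "% ionic character" used above is conventionally defined (a standard textbook definition, added here for clarity) as $$\%\,\text{ionic} = \frac{\mu_\text{observed}}{\mu_\text{ionic}}\times 100 = \frac{\mu_\text{observed}}{e\,d}\times 100,$$ where $d$ is the equilibrium bond length and $e\,d$ is the dipole moment a fully separated $+1/{-1}$ charge pair at that distance would have. The larger cation-anion separation in CsF inflates the denominator, which is exactly the effect the answer describes.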
{ "domain": "chemistry.stackexchange", "id": 1985, "tags": "inorganic-chemistry, bond, ionic-compounds, electronegativity" }
What are fluids or "goos" that increase friction called?
Question: I have a shaft inside a bushing, the shaft has a gear on one end that is supposed to provide resistance to other gears it is driven with. I'm having a hard time finding the correct term for the sort of grease or fluid, or in more lay-man's terms a "goo", that will provide increased friction between the shaft and the bushing, rather than reducing friction, as it is with regular oil or grease. Essentially a tar-like substance, that isn't as messy as actual tar. Perhaps also similar to tree-sap, except more manageable in terms of putting it into a mechanical device. Also tree-sap gets hard over time, I'd like to keep its mechanical properties, like viscosity, etc. In case you're wondering, this is going to be part of a soft-open gearing, such that a small door doesn't fling open when the lock is released, instead it provides some resistance. Answer: They are called haptic greases. Used on knobs and levers to provide stiction but very smooth movement when actuated. Here's an explanation from Nye Technologies - https://www.nyelubricants.com/motion-control
{ "domain": "engineering.stackexchange", "id": 4000, "tags": "mechanical, friction, fluid" }
Brainfuck interpreter (with emphasis on robustness)
Question: While writing a review of @MotokoKusanagi's Brainfuck interpreter, I decided to write my own implementation to illustrate a few points. In particular, I'd like it to be robust to malformed programs, and reasonably efficient (short of JITting the Brainfuck code). Please point out any deficiencies you might find in this implementation. #include <fcntl.h> #include <libgen.h> #include <stddef.h> #include <stdio.h> #include <stdlib.h> #include <sys/stat.h> #include <unistd.h> typedef struct { char *begin, // Pointer to beginning of program *end, // Pointer just beyond the end of the program **jumps; // Pointer to jump table } program; /** * Fills in p.jumps, the jump table. For each '[' or ']' character * in the code, the pointer at the corresponding offset in p.jumps * is set to the matching ']' or '[' (or NULL if it is mismatched). * This function calls itself recursively to handle nesting. */ static char *build_jump_table(program p, char *begin) { char *c = begin; while (c < p.end) { ptrdiff_t i = c - p.begin; switch (*c) { case '[': p.jumps[i] = NULL; // In case no matching ']' is ever found p.jumps[i] = c = build_jump_table(p, c + 1); if (!c) return NULL; // Error: no closing bracket break; case ']': p.jumps[i] = (begin > p.begin) ? begin - 1 : // Normal case NULL; // Error: no opening bracket return c; } c++; } return NULL; } /** * Loads the program from the specified path. On failure, all members of * the returned program will be NULL, and errno is set. */ program load_program(const char *path) { const program FAILURE = { .begin = NULL, .end = NULL, .jumps = NULL }; int fd; struct stat stat_buf; if ( -1 == (fd = open(path, O_RDONLY)) || -1 == fstat(fd, &stat_buf) ) { return FAILURE; } int size = stat_buf.st_size; char *text = malloc(size); if ( (text == NULL) || (size != read(fd, text, size)) ) { free(text); close(fd); return FAILURE; } close(fd); program p = { .begin = text, .end = text + size, .jumps = malloc(size) }; if (p.jumps == NULL) { free(text); return FAILURE; } build_jump_table(p, p.begin); return p; } void free_program(program p) { free(p.jumps); free(p.begin); } int execute_program(program p) { unsigned char mem[30000] = { 0 }; unsigned char *ptr = mem; char *ip = p.begin; while (ip < p.end) { ptrdiff_t i = ip - p.begin; switch (*ip) { case '<': if (--ptr < mem) return i + 1; break; case '>': if (++ptr >= mem + sizeof mem) return i + 1; break; case '-': (*ptr)--; break; case '+': (*ptr)++; break; case '.': putchar(*ptr); fflush(stdout); break; case ',': *ptr = getchar(); break; case '[': if (!*ptr) ip = p.jumps[i]; break; case ']': if ( *ptr) ip = p.jumps[i]; break; } if (!ip++) { return i + 1; } } return 0; } void usage(FILE *f, char *argv[]) { fprintf(f, "Usage: %s PROGRAM.bf\n", basename(argv[0])); } int main(int argc, char *argv[]) { if (argc <= 1) { usage(stderr, argv); return 1; } program prog = load_program(argv[1]); if (!prog.begin) { perror("Error: Could not load program"); return 2; } int error_offset = execute_program(prog); if (error_offset > 0) { fprintf(stderr, "Error at offset %d\n", error_offset); } free_program(prog); } Answer: Deficiencies? I don't really see any. Improvements? Maybe :) You don't handle at all the return values of putchar, getchar and fflush. The Wikipedia article has some hints about how different implementations handle an EOF from the user input. Yours works as well, but is this really what you want? In build_jump_table, the switch used to check whether *c is a [ or a ] seems a bit overkill. 
It would probably be easier to get away with a simple if .. else statement. That's highly speculative, but I guess that if you replace the characters <, >, -, +, ., ,, [ and ] by values ranging from 0 to 7, or at least by contiguous values during the program load, it might help your compiler to generate a better jump table for your switch, which might help increasing the speed of the programs. The assumption here is that your care more about the speed during the execution and that a little slowdown during the program load isn't a problem if it can otherwise improve performance. I guess that you want your programs to be fast while they run and that a small slowdown when it terminates isn't one of your concerns if you can otherwise speed your program up. Therefore, if your compiler supports it (I guess it does), you could use __builtin_expect to influence branch prediction while the program runs: switch (*ip) { case '<': if (__builtin_expect(--ptr < mem, false)) { return i + 1; } break; case '>': if (__builtin_expect(++ptr >= mem + sizeof mem, false)) { return i + 1; } break; // ... } It would however be a bad idea to influence branch prediction for the conditions in the cases of [ and ]. The results are too prone to change to risk a pessimization. I would have some other small remarks but I will keep them for myself since they would be considered a matter of style and you know what you're doing anyway :)
{ "domain": "codereview.stackexchange", "id": 14261, "tags": "c, interpreter, brainfuck" }
Asynchronous database query system using futures-rs in Rust
Question: Background I've been working on an algorithmic trading platform in Rust and utilizing the newly released futures-rs library to keep everything asynchronous and non-blocking. One issue I've come across with this is that the majority of database interface libraries currently available for Rust are blocking, which poses a large issue for my application since it's very heavily focused on speed and efficiency. Description To overcome this, I devised a system that my application can use to fire off database queries asynchronously. It works by starting up multiple threads on which Postgres connections are created. The system distributes new queries to available connections, handles to which are stored in a VecDeque as Receiver objects. If all the connections are busy, the query is stored in a different queue and popped out by the worker threads as they complete their previous queries. Both the connection handle queue and the query queue are held in Arc<Mutex<VecDeque>> objects to allow them to be accessed by different threads. I've tested the system to ensure that it's indeed asynchronous, but I wanted to ask around and see if there was anything I could do to make it better either syntactically or performance-wise. use std::collections::VecDeque; use std::thread; use std::sync::{Arc, Mutex}; use postgres; use futures::stream::{Stream, channel, Sender, Receiver}; use futures::{Future, oneshot, Complete}; use transport::postgres::get_client; // helper types to keep function declarations clean type QueryError = postgres::error::Error; type SenderQueue = Arc<Mutex<VecDeque<Sender<(String, Complete<()>), ()>>>>; type QueryQueue = Arc<Mutex<VecDeque<String>>>; pub struct QueryServer { conn_count: usize, // how many connections to open query_queue: QueryQueue, // internal query queue conn_queue: SenderQueue, // senders for idle query threads } // locks the QueryQueue and returns a queued query, if there are any. 
fn try_get_new_query(query_queue: QueryQueue) -> Option<String> { let mut qq_inner = query_queue.lock().unwrap(); // there is a queued query if !qq_inner.is_empty() { return Some(qq_inner.pop_front().unwrap()) }else{ // No new queries return None } } // executes the query and blocks the calling thread until it completes #[allow(unused_must_use)] fn execute_query(query: String, client: &postgres::Connection) { client.execute(query.as_str(), &[]) /*.map_err(|err| println!("Error saving tick: {:?}", err) )*/; } // Creates a query processor that awaits requests fn init_query_processor(rx: Receiver<(String, Complete<()>), ()>, query_queue: QueryQueue){ // get a connection to the postgres database let client = get_client().expect("Couldn't create postgres connection."); // Handler for new queries from main thread // This blocks the worker thread until a new message is received // .wait() consumes the stream immediately, so the main thread has to wait // for the worker to push a message saying it's done before sending more messages for tup in rx.wait() { let (query, done_tx) = tup.unwrap(); execute_query(query, &client); // keep trying to get queued queries to exeucte until the queue is empty while let Some(new_query) = try_get_new_query(query_queue.clone()) { execute_query(new_query, &client); } // Let the main thread know it's safe to use the sender again // This essentially indicates that the worker thread is idle done_tx.complete(()); } } impl QueryServer { pub fn new(conn_count: usize) -> QueryServer { let mut conn_queue = VecDeque::with_capacity(conn_count); let query_queue = Arc::new(Mutex::new(VecDeque::new())); for _ in 0..conn_count { // channel for getting the Sender back from the worker thread let (tx, rx) = channel::<(String, Complete<()>), ()>(); let qq_copy = query_queue.clone(); thread::spawn(move || { init_query_processor(rx, qq_copy) }); // store the sender which can be used to send queries // to the worker in the connection queue conn_queue.push_back(tx); } QueryServer { conn_count: conn_count, query_queue: query_queue, conn_queue: Arc::new(Mutex::new(conn_queue)) } } // queues up a query to execute that doesn't return a result pub fn execute(&mut self, query: String) { // no connections available let temp_lock_res = self.conn_queue.lock().unwrap().is_empty(); // Force the guard locking conn_queue to go out of scope // this prevents the lock from being held through the entire if/else let copy_res = temp_lock_res.clone(); if copy_res { // push query to the query queue self.query_queue.lock().unwrap().push_back(query); }else{ let tx = self.conn_queue.lock().unwrap().pop_front().unwrap(); let cq_clone = self.conn_queue.clone(); // future for notifying main thread when query is done and worker is idle let (c, o) = oneshot::<()>(); tx.send(Ok((query, c))).and_then(|new_tx| { // Wait until the worker thread signals that it is idle o.and_then(move |_| { // Put the Sender for the newly idle // worker into the connection queue cq_clone.lock().unwrap().push_back(new_tx); Ok(()) }).forget(); Ok(()) }).forget(); } } } Answer: Here are a few shallow comments on the code. 
This code let mut qq_inner = query_queue.lock().unwrap(); // there is a queued query if !qq_inner.is_empty() { return Some(qq_inner.pop_front().unwrap()) }else{ // No new queries return None } looks like a verbose way of writing query_queue.lock().unwrap().pop_front() The signature fn try_get_new_query(query_queue: QueryQueue) -> Option<String> means that it takes an Arc, but the code has no interest in ownership so should really be fn try_get_new_query(query_queue: &Mutex<VecDeque<String>>) -> Option<String> instead. The inner type is better labelled QueryQueue and the outer types are Arc<QueryQueue> and &QueryQueue. This gives your API more flexibility, and since Arc dereferences to a normal &-reference, this does not prevent any prior use-cases. This code #[allow(unused_must_use)] fn execute_query(query: String, client: &postgres::Connection) { client.execute(query.as_str(), &[]) /*.map_err(|err| println!("Error saving tick: {:?}", err) )*/; } firstly has a very strange comment, since map_err should almost never involve println!. I could understand logging, but not if it throws away the error. Secondly, it'd be nicer to write as fn execute_query(query: String, client: &postgres::Connection) { let _ = client.execute(query.as_str(), &[]); } I'd also write it as fn execute_query(query: &str, client: &postgres::Connection) { let _ = client.execute(query, &[]); } since it also doesn't really make calling any harder and increases flexibility. This is a totally optional style point, but IMO, thread::spawn(move || { init_query_processor(rx, qq_copy) }); is nicer as thread::spawn(move || init_query_processor(rx, qq_copy)); In execute, there's another minor style point which is that }else{ should be } else {. Personally, given that let (c, o) = oneshot::<()>(); only uses c and o once, there's no great need to shorten them so much. YMMV.
{ "domain": "codereview.stackexchange", "id": 21713, "tags": "asynchronous, rust, postgresql" }
Computing features over the same dataframe
Question: The following code computes three different features over the same dataset. I'm not sure if the filter_by_day_segment function can be made tidy or there's a more efficient/short but still readable way of refactor my code. library(dplyr) filter_by_day_segment <- function(data, day_segment){ if(day_segment == "daily"){ return(data %>% group_by(local_date)) } else { return(data %>% filter(day_segment == local_day_segment) %>% group_by(local_date)) } } compute_metric <- function(data, metric, day_segment){ if(metric == "countscans"){ data <- filter_by_day_segment(data, day_segment) return(data %>% summarise(!!paste("sensor", day_segment, metric, sep = "_") := n())) }else if(metric == "uniquedevices"){ data <- filter_by_day_segment(data, day_segment) return(data %>% summarise(!!paste("sensor", day_segment, metric, sep = "_") := n_distinct(value))) } else if(metric == "countscansmostuniquedevice"){ data <- data %>% group_by(value) %>% mutate(N=n()) %>% ungroup() %>% filter(N == max(N)) data <- filter_by_day_segment(data, day_segment) return(data %>% summarise(!!paste("sensor", day_segment, metric, sep = "_") := n())) } } data <- read.csv("test.csv") day_segment <- "daily" metrics <- c("countscans", "uniquedevices", "countscansmostuniquedevice") features = data.frame() for(metric in metrics){ feature <- compute_metric(data, metric, day_segment) if(nrow(features) == 0){ features <- feature } else{ features <- merge(features, feature, by="local_date", all = TRUE) } } print(features) A test CSV file "local_date","value" "2018-05-21","FC:44" "2018-05-21","FC:58" "2018-05-21","FF:7E" "2018-05-21","F8:77" "2018-05-21","F8:77" "2018-05-22","FB:F1" "2018-05-22","FC:62" "2018-05-22","FE:D4" "2018-05-22","FE:D4" "2018-05-22","FC:F1" "2018-05-23","F8:77" "2018-05-23","F8:77" "2018-05-23","FF:13" "2018-05-23","F8:3F" "2018-05-23","F8:3F" "2018-05-23","F8:3F" "2018-05-23","FC:B6" "2018-05-24","FC:0D" "2018-05-24","F8:3F" "2018-05-24","F7:B6" "2018-05-24","F6:96" "2018-05-24","F6:96" "2018-05-24","F6:96" "2018-05-24","F6:96" "2018-05-24","F6:96" "2018-05-24","F6:96" "2018-05-24","F6:96" "2018-05-25","FC:A8" "2018-05-25","FC:44" "2018-05-25","FC:44" "2018-05-25","FC:44" "2018-05-25","FC:44" "2018-05-25","FC:44" "2018-05-25","FC:44" "2018-05-25","FC:44" "2018-05-25","FC:44" "2018-05-26","FC:F1" "2018-05-26","FC:A8" "2018-05-26","FF:89" "2018-05-26","FF:89" "2018-05-26","FF:89" Answer: If I was reviewing this code professionally, my first comment would be that you should stick to a style guide. This will govern things like spacing between statements, brackets, operators etc. It can be seen as a kind of nitpicky remark (I certainly did at first) but having a consistent style massively aids readability for you and for others. The second (style) comment would be that your code is halfway between very pipe-based code and "standard" R style (lots of assignment). This makes it difficult to read. If you're going to go with pipes, stick with it. Also, when using an ifelse with more than 2 conditions, it's often clearer to use a switch block. This reduces the potential for massive nested ifs, and should also discourage you from doing much branching within the if. I don't really like how similar the summarise calls are in each case. Ideally I would refactor that, but I think that is somewhat tricky due to how dplyr handles n and n_distinct. 
This is how I would rewrite your code, though I would also consult the tidyverse style guide to see their recommendations -- I generally try to avoid programming with dplyr so I'm not that familiar with the style. filter_by_day_segment_refactor <- function(data, day_segment) { ## Minimise the amount done within the if/else clause (also reduces duplication) if (day_segment == "daily") { fun <- identity } else { fun <- function(x) filter(x, day_segment == local_day_segment) } data %>% fun() %>% group_by(local_date) } compute_metric_refactor <- function(data, metric, day_segment) { ## 3 case switch block with just 1 pipe each, ## rather than 3 conditionals with a mix of ## pipes and assignment switch(metric, "countscans" = { data %>% filter_by_day_segment(day_segment) %>% summarise(!!paste("sensor", day_segment, metric, sep = "_") := n()) }, "uniquedevices" = { data %>% filter_by_day_segment(day_segment) %>% summarise(!!paste("sensor", day_segment, metric, sep = "_") := n_distinct(value)) }, "countscansmostuniquedevice" = { data %>% group_by(value) %>% mutate(N = n()) %>% ungroup() %>% filter(N == max(N)) %>% filter_by_day_segment(day_segment) %>% summarise(!!paste("sensor", day_segment, metric, sep = "_") := n()) } ) }
{ "domain": "codereview.stackexchange", "id": 36567, "tags": "r" }
Can a conductor be charged?
Question: I have a copper conductor. For a while, I apply a voltage of $12kV$ DC from a source. After removing the source, will the conductor stay charged from the source if it is not earthed? Will it discharge when I connect the conductor to earth? Answer: A voltage source acts as an electron pump. Suppose we take a battery as an example, then in the battery a chemical reaction pumps electrons from the cathode to the anode. The cathode becomes depleted in electrons and becomes positive while the anode acquires an excess of electrons and becomes negative. As soon as enough electrons have been pumped for the resulting potential difference to equal the reaction potential the reaction stops. The point of all this is that suppose we attach some conductors to the battery in the way you describe. We'll make them spheres for simplicity: If a charge $Q$ is transferred then the voltage on conductor A is: $$ V_A = \frac{Q}{C_A}$$ where $C_A$ is the capacitance of conductor A, and likewise: $$ V_B = -\frac{Q}{C_B}$$ At equilibrium the potential difference $V_A-V_B$ will be equal to the battery voltage so: $$ V_\text{batt} = Q\left(\frac{1}{C_A} + \frac{1}{C_B}\right) $$ So given the voltage of the battery and the capacitance of the two conductors we can calculate how much charge is transferred. This will be non-zero, so the conductors will become charged and if you disconnect them from the battery they will keep that charge. So the answer to your question: After removing the source, will the conductor stay charged from the source if it is not earthed? is yes. But you need the capacitances on both sides of the voltage source to calculate how much charge is transferred, and you only say what is connected to one side of your 12kV voltage source. If the other side is connected to earth the capacitance is effectively infinite i.e. $1/C_\text{earth}=0$ in our equation above. However if the other side of your 12kV source is not connected to anything the capacitance is close to zero so $1/C\approx\infty$ and plugging this into our equation we find $Q\approx 0$ i.e. a negligible charge is transferred. So while it's generally true that in the situation you describe the conductor will end up charged, you haven't given us enough information to say what that charge is.
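To attach rough numbers (an added illustration, assuming the conductor behaves like an isolated sphere of radius 10 cm and the other terminal of the source is earthed): $$C = 4\pi\varepsilon_0 R \approx 1.1\times10^{-11}\ \mathrm{F},\qquad Q = CV \approx 1.1\times10^{-11}\times 1.2\times10^{4} \approx 1.3\times10^{-7}\ \mathrm{C},$$ i.e. only about $0.13\ \mu\mathrm{C}$, which may still produce a small spark when the conductor is finally earthed.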
{ "domain": "physics.stackexchange", "id": 38077, "tags": "electricity, electric-current, charge, potential, voltage" }
Calculating the apparent magnitude of a satellite
Question: I'm writing a program that involves calculating the apparent magnitude of satellites from a ground location. I currently have the intrinsic magnitude of the satellites and the solar phase angle in degrees. I can't seem to find a formula that works. I tried magnitude = intrinsicMagnitude - 15 + 5 * Math.Log(distanceToSatellite) - 2.5 * Math.Log(Math.Sin(B) + (Math.PI - B) * Math.Cos(B)); (B is the phase angle) ...but it does not work (it's returning numbers like +30). I know it's wrong because I'm comparing it to heavens-above.com satellite passes. intrinsicMagnitude = Visual magnitude at 1000km away (Use -1.3) distanceToSatellite = Observer distance to the satellite in km (Use 483) B = This is what I am trying to figure out. In the paper it says what this is but it says some other things that I do not understand. The phase angle you use to get this should be 113. The target output of this equation should be around -3. Answer: This is for satellites with unknown size and orientation but known standard magnitude (Standard magnitude can be found on the satellite info page of heavens above, the number is called intrinsic magnitude) The proper formula is double distanceToSatellite = 485; //This is in KM double phaseAngleDegrees = 113.1; //Angle from sun->satellite->observer double pa = phaseAngleDegrees * 0.0174533; //Convert the phase angle to radians double intrinsicMagnitude = -1.8; //-1.8 is std. mag for iss double term_1 = intrinsicMagnitude; double term_2 = 5.0 * Math.Log10(distanceToSatellite / 1000.0); double arg = Math.Sin(pa) + (Math.PI - pa) * Math.Cos(pa); double term_3 = -2.5 * Math.Log10(arg); double apparentMagnitude = term_1 + term_2 + term_3; This will give the apparent magnitude of the satellite. Note: I gave the formula in C#
{ "domain": "astronomy.stackexchange", "id": 3369, "tags": "satellite, artificial-satellite, mathematics, iss" }
What happens to a radioactive material's atom when it disintegrates?
Question: Suppose you initially had $2^n$ radioactive atoms (where $n$ is an integer). After a number of half-lives the number of remaining atoms becomes 1. What will happen to it? Will it disintegrate and leave behind half an atom? And if the decay simply stops, then the statement "radioactive decay never ends" would be wrong. Answer: Radioactive decay is a stochastic process. This means that there is random chance involved, so the exponential model used to represent radioactive decay does not say exactly how many atoms of the original substance will be left at a given time, rather it tells you the expected value of atoms remaining. If you begin with n=1 atom, after some time the exponential model gives you n=0.5. This does not mean there are 0.5 atoms remaining, it rather means that there is a 0.5 chance that the atom has not decayed yet.
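A small Monte Carlo sketch (added for illustration) makes the point concrete: each surviving atom independently decays with probability 1/2 per half-life, so a single remaining atom either decays or it doesn't, and there is never half an atom.

```python
import random

def survivors(n_atoms, n_half_lives):
    # Each atom independently has a 50% chance of surviving each half-life.
    n = n_atoms
    for _ in range(n_half_lives):
        n = sum(1 for _ in range(n) if random.random() < 0.5)
    return n

# Starting from a single atom, repeat the experiment many times:
# on average ~0.5 atoms remain after one half-life, but every individual
# run ends with exactly 0 or exactly 1 atom.
runs = [survivors(1, 1) for _ in range(10_000)]
print(sum(runs) / len(runs))   # close to 0.5
print(set(runs))               # {0, 1}
```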
{ "domain": "physics.stackexchange", "id": 52225, "tags": "radioactivity, statistics, randomness, half-life" }
Making sheet metal parts in Autodesk Fusion 360
Question: I'm trying to do some sheet metal work in Autodesk Fusion 360 and it has proved frustrating in the extreme. Maneuvering sketches and bodies to where I want them has proved extremely difficult. I understand that Autodesk Inventor has far better support for sheet metal parts, which would be great if I could afford Inventor. What is a workflow for making sheet metal parts in Fusion 360? Answer: This video about Faux Sheet Metal parts was incredibly helpful. Start with a set of sketches that describe your sheet metal project. Create patches based on those sketches. Each patch can then be moved to its proper location. After the patches are where they should be, they can be extruded into bodies with the appropriate thickness. While this lets a user create parts that look like sheet metal parts, I believe there are significant differences between this process and a process that uses tools provided by Inventor or SolidWorks.
{ "domain": "engineering.stackexchange", "id": 733, "tags": "cad, autodesk-inventor" }
Is there a gate that puts a qubit into superposition with an outcome that is not purely 50/50 probabilistic?
Question: I know that a Hadamard state is a purely probabilistic one; e.g. $$H\vert 0\rangle=a\vert 0\rangle+b\vert 1\rangle$$ where $a^2=0.5$ and $b^2=0.5$. Are there any states in which the probabilities differ, and if there are, how are they important? Answer: Welcome to QCSE. You already know that $a^2=b^2=0.5$. For a single qubit gate akin to the Hadamard gate you can achieve any two probabilities you want, as long as they add to $1$. For example, one trick that I learned was that you could choose ratios of Pythagorean triples, i.e. numbers $a$, $b$, $c$ such that $a^2+b^2=c^2$. Let's have a gate called $\mathrm{YOUSEF}$ defined as: $$\mathrm{YOUSEF}\vert 0\rangle=\frac{3}{5}\vert 0\rangle+\frac{4}{5}\vert 1\rangle.$$ Such a gate may be useful in biasing your transition probabilities in a manner your algorithm dictates.
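One way to see that such a gate exists is to write it as a rotation. The matrix below is my own construction (one of many unitaries whose first column is $(3/5, 4/5)$, added for illustration); it sends $\vert 0\rangle$ to $\tfrac{3}{5}\vert 0\rangle+\tfrac{4}{5}\vert 1\rangle$, giving measurement probabilities $9/25$ and $16/25$.

```python
import numpy as np

# A real rotation whose first column is (3/5, 4/5): one valid "YOUSEF" gate.
U = np.array([[3/5, -4/5],
              [4/5,  3/5]])

assert np.allclose(U.conj().T @ U, np.eye(2))   # unitary

ket0 = np.array([1.0, 0.0])
state = U @ ket0
print(np.abs(state)**2)   # [0.36 0.64] -> probabilities 9/25 and 16/25
```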
{ "domain": "quantumcomputing.stackexchange", "id": 1612, "tags": "quantum-gate, hadamard" }
Concatenate two vectors and store the result
Question: I code in Rust, but one doesn't need to be familiar with Rust to understand the question. We are given the two vectors, first and res. Our goal is to append res to first, and then to assign the result to res. Let's consider the following two approaches. The first one is first.extend(res); res = first; The second one is res = first.into_iter().chain(res.into_iter()).collect(); The former seems to be more readable, but the latter is somewhat more straightforward: "take first, turn it into the iterator, chain it with the iterator obtained from consuming res, then collect the result". Also, there is only one line of code instead of two. I would prefer the first one. What do you think? Answer: Actually, you want: res = [first, res].concat(); // requires your element type to be Clone Or: res.splice(0..0, first); Or: res = vec![first, res].into_iter().flatten().collect(); But it seems odd that you are assembling the Vec this way. Usually we try to assemble it from beginning to end. I'd look to see if there is a better way to structure the code to avoid this.
{ "domain": "codereview.stackexchange", "id": 42181, "tags": "rust" }
Node publishes at only half the expected frequency
Question: I'm writing a ROS2 C++ node for a camera. In the code we create a callback from the camera driver which then publishes the image. When we measure the topic frequency we only get half the actual callback frequency. If we set the camera to 10fps we get around 5fps in ROS and when we double the camera rate to 20fps we get around 10fps in ROS. This leads us to believe that we are not limited in processing. The topic frequency is measured with ros2 topic hz and for the callback measuring we use this: void BufferReceived(void *callBackOwner, BGAPI2::Buffer *pBufferFilled) { count++; // Pack the OpenCV image into the ROS image. sensor_msgs::msg::Image::UniquePtr msg(new sensor_msgs::msg::Image()); // Read from buffer and convert to ros msg pub_->publish(std::move(msg)); if (count == 60){ count = 0; auto duration = std::chrono::system_clock::now() - start_time; auto fps = 60.0 / (duration.count()); std::cout << "fps: " << fps * 1000000000 << std::endl; start_time = std::chrono::system_clock::now(); } } We have written something similar in Python and there we had no problems for it to publish at the correct frequency. We work in ROS galactic with the Fast DDS middleware. I don't really have an idea about how I should start to debug this, because everything is working as expected with the exception of losing some frames. This is how we create the publisher: pub_ = this->create_publisher<sensor_msgs::msg::Image>("/output", rclcpp::SystemDefaultsQoS()); Whereby we modified the default QOS via the XML to be reliable and to have sufficiently big buffers. <?xml version="1.0" encoding="UTF-8" ?> <profiles xmlns="http://www.eprosima.com/XMLSchemas/fastRTPS_Profiles"> <transport_descriptors> <!-- Create a descriptor for the new transport --> <transport_descriptor> <transport_id>shm_transport</transport_id> <type>SHM</type> <!-- <sendBufferSize>100000000</sendBufferSize> <receiveBufferSize>100000000</receiveBufferSize> --> <segment_size>1000000000</segment_size> <maxMessageSize>10000000</maxMessageSize> </transport_descriptor> </transport_descriptors> <participant profile_name="SHMParticipant" is_default_profile="true"> <rtps> <!-- Link the Transport Layer to the Participant --> <userTransports> <transport_id>shm_transport</transport_id> </userTransports> </rtps> </participant> <data_writer profile_name="default publisher profile" is_default_profile="true"> <qos> <publishMode> <kind>SYNCHRONOUS</kind> </publishMode> <durability> <kind>VOLATILE</kind> </durability> <reliability> <kind>RELIABLE</kind> </reliability> </qos> <topic> <historyQos> <kind>KEEP_LAST</kind> <depth>20</depth> </historyQos> </topic> <historyMemoryPolicy>DYNAMIC</historyMemoryPolicy> </data_writer> <data_reader profile_name="default subscriber profile" is_default_profile="true"> <qos> <durability> <kind>VOLATILE</kind> </durability> <reliability> <kind>RELIABLE</kind> </reliability> </qos> <historyMemoryPolicy>DYNAMIC</historyMemoryPolicy> <topic> <historyQos> <kind>KEEP_LAST</kind> <depth>20</depth> </historyQos> </topic> </data_reader> Originally posted by Gintecc on ROS Answers with karma: 23 on 2022-02-15 Post score: 2 Answer: Apparently ros2 topic hz is super unreliable, if I write a simple python script to record the frequency I get the expected result. 
Originally posted by Gintecc with karma: 23 on 2022-02-15 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2022-02-15: If there isn't already an issue open, it would be good to report this, as the sole purpose of ros2 topic hz is to give an accurate estimate of the rate of messages published on a specific topic. If that doesn't work, or the reported number isn't reliable, it would seem ros2 topic hz does not function according to specifications. Please consider taking the time to report it and describe your experiences. Unknown problems cannot be fixed. Comment by Bernd Pfrommer on 2022-08-11: The fact that ROS2 is slow to deserialize messages, and that for this reason "ros2 topic hz" can give gross underestimates of the actual frequency is apparently well known but not well documented and not fixed, and for this reason re-learned by every single ROS2 user who for the first time uses "ros2 topic hz" on hardware that is too slow to deserialize (I believe in Python!!!) the message. Example here.
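A rough sketch of the kind of simple frequency-measuring script the accepted answer mentions (this is not the author's code; the topic name /output is taken from the question, while the node name and QoS depth are assumptions):
import time
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image

class HzCheck(Node):
    def __init__(self):
        super().__init__('hz_check')
        self.count = 0
        self.start = time.monotonic()
        # Subscribe to the image topic published by the camera node.
        self.create_subscription(Image, '/output', self.callback, 10)

    def callback(self, msg):
        # Count messages and report the average rate every 60 frames.
        self.count += 1
        if self.count == 60:
            elapsed = time.monotonic() - self.start
            self.get_logger().info('fps: {:.2f}'.format(60.0 / elapsed))
            self.count = 0
            self.start = time.monotonic()

def main():
    rclpy.init()
    rclpy.spin(HzCheck())

if __name__ == '__main__':
    main()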
{ "domain": "robotics.stackexchange", "id": 37440, "tags": "ros, ros2, rclcpp, sensor-msgs" }
What angle does the incident ray from point A on the ground make with the ground so reflected ray off curved mirror reaches the eye above the ground?
Question: In this illustration: E is the eye of the observer, projecting E straight to the ground gives E'. A is a red dot on the ground. P is the point of incidence on the mirror. P' is the projection of P on the ground. AE makes an angle α (angle EAE') with the ground. The incident ray AP makes an angle β with the ground (angle PAP'). Thinking of the mirror ball as a globe-like object, the circle on the ground whose center is O is the projection of the "latitude" of the ball that goes through P, and that projection goes through P'. Here are my questions: Is it possible to know angle β? Is it possible that β equals α? Does line OP' bisect angle AP'E'? Here is another illustration, with circle O simply being the bottom of the cylinder. The same questions apply to this case. Answer: Perhaps the two figures above answer part of your questions. Note: double-clicking on the images will give you larger and clearer views.
{ "domain": "physics.stackexchange", "id": 80762, "tags": "reflection, geometric-optics" }
Reddit Challenge #383 Python
Question: I've completed the Reddit challenge #383. Please find a a summary of the challenge and the test cases below: Reddit Challenge #383 Necklace Matching challenge: In this challenge, we should imagine a necklace with many beads. Each bead has a letter engraved onto it. If you push the letters around the string, you can rearrange the letters. So 'word' can become 'ordw' or 'dwor' and continue that movement until it turns back into the original word: 'word' 'ordw' 'rdwo' 'dwor' 'word' The challenge is to detect if the original word can turn into the changed word. For example: 'letters' can turn into 'ttersle' but can't turn into 'terstle' because the two 't' should be stringed > together. Here is the test cases for the challenge: (original, changed) => expected response ("nicole", "icolen") => true ("nicole", "lenico") => true ("nicole", "coneli") => false ("aabaaaaabaab", "aabaabaabaaa") => true ("abc", "cba") => false ("xxyyy", "xxxyy") => false ("xyxxz", "xxyxz") => false ("x", "x") => true ("x", "xx") => false ("x", "") => false ("", "") => true challenge 2: Some words such as 'mama' can turn into the original word without looping all the way through the necklace. 'mama' 'amam' 'mama' 'amam' 'mama' In this challenge, you need to count how many times the original word appears when cycling the necklace. This includes the final form of the necklace. so in 'mama', the word is seen twice while 'read' has the word only appear once. Here is the test cases for this challenge: (original) => expected response ("abc") => 1 ("abcabcabc") => 3 ("abcabcabcx") => 1 ("aaaaaa") => 6 ("a") => 1 ("") => 1 challenge 3: This challenge involves the enable1 word list found here:https://raw.githubusercontent.com/dolph/dictionary/master/enable1.txt In this challenge, we need to find the words that form other words when cycled through. The results should be one set of four words. The correct result should be: ['estop', 'pesto', 'stope', 'topes'] Code explanation In my code I created a class called necklace that manages a necklace. In this case, a necklace is a string with a particular order. This class is used to get the results for challenge 1 and 2. Two external functions exists that are used to solve challenge 3 which uses the necklace class to determine if they are the same necklace. Please find my code below: ''' This modules contains code that solves the Reddit exercise #383 ''' class Necklace(): ''' Necklace class Create a theoretical string necklace which you can manipulate ''' def __init__(self, word): ''' Sets up the necklace using a word. Stores the word as original. Stores all possible positions. ''' self.original = word self.cycles_list = self.get_cycle() @staticmethod def cycle_left(word): ''' move the word one letter to the left. 
Example: 'word' => 'ordw' ''' return word[1:]+word[0] def get_cycle(self): ''' Get every possible combination of the necklace ''' word = self.original cycles = [word] i = 0 while i < len(word): word = self.cycle_left(word) cycles.append(word) i = i + 1 return cycles def is_same_necklace(self, changed_word): ''' Check if the changedWord is the same necklace but in a different position returns false if it is another combination of words ''' return changed_word in self.cycles_list def count_repeats(self): ''' Counts how many times the original position of the necklace is repeated when cycling through all possible positions of the necklace ''' count = self.cycles_list.count(self.original) if count > 1: return count-1 return count def print_cycle(self): ''' Prints all the possible positions the necklace has ''' print(self.cycles_list) def get_words_from_file(file_name): ''' Reads all the words from a specific file ''' file = open(file_name, "r") return file.read().split("\n") def find_similar_words(): ''' Get all the words from the enable1.txt file and finds all words that are the same necklace Returns the first set of words that has four combination but are all the same necklace ''' all_words = get_words_from_file("enable1.txt") similar = {} words_dict = {} for word in all_words: if len(word) in words_dict: words_dict[len(word)].append(word) else: words_dict[len(word)] = [word] for key in words_dict: print('Finding all similar words with a length of '+str(key)) words = words_dict[key] for word in words: necklace = Necklace(word) index = words.index(word)+1 for word2 in words[index:]: if necklace.is_same_necklace(word2): if word in similar: similar[word].append(word2) else: similar[word] = [word2] for key in similar: if len(similar[key]) >= 3: return [key]+similar[key] return None if __name__ == "__main__": #challenge 1 test cases print("Challenge 1 Test Cases Results:") NECKLACE = Necklace("nicole") print(NECKLACE.is_same_necklace("icolen")) print(NECKLACE.is_same_necklace("lenico")) print(NECKLACE.is_same_necklace("coneli")) print(Necklace("aabaaaaabaab").is_same_necklace("aabaabaabaaa")) print(Necklace("abc").is_same_necklace("cba")) print(Necklace("xxyyy").is_same_necklace("xxxyy")) print(Necklace("xyxxz").is_same_necklace("xxyxz")) print(Necklace("x").is_same_necklace("x")) print(Necklace("x").is_same_necklace("xx")) print(Necklace("x").is_same_necklace("")) print(Necklace("").is_same_necklace("")) #challenge 2 test cases print("Challenge 2 Test Cases Results:") print(Necklace("abc").count_repeats()) print(Necklace("abcabcabc").count_repeats()) print(Necklace("abcabcabcx").count_repeats()) print(Necklace("aaaaaa").count_repeats()) print(Necklace("a").count_repeats()) print(Necklace("").count_repeats()) #challenge 3 result print("Challenge 3 is starting:") result = find_similar_words() print("Challenge 3 Test Cases Results:") print(result) Things that could be done to improve code quality Storing the cycles (self.cycles) Not sure if it should have been stored or retrieved when needed. I changed it to store the cycles so it could speed up challenge three however I'm not sure it makes much of a difference. count_repeats function I tried to use self.cycle.counts however for results with no combination, it came out as '0' instead of '1' however I don't know if there was a better way find_similar_words function I tried to optimise it as best as I can however it still seems really slow. A lot of redditors put restriction on the word length (>=5) however it should include a search on all words. 
Answer: Your docstrings are not compliant, they should use """ not '''. You currently have discouraged unless multiline string literals. If you are inheriting from nothing (), then you can just remove the brackets and make your code cleaner. The Necklace class is largely over complicated. The Necklace class is even more redundant when you do self.cycles_list = self.get_cycle(). This also signals to me that the Necklace is actually performing two jobs, and not the one i would intuitively think. It's un-Pythonic to use while loops when you can easily use a for i in range loop. You can change self.get_cycle() to a simple list comprehension and merge cycle_left into it. To handle the solution_1("", "") is True test case, we can default the range to 1 if the length is 0. is_same_necklace is a really poor name, even more so that a simple in is more readable and better understood. You can allow a class to utilize this operator by defining the __contains__ dunder method. You can simplify the logic of count_repeats by removing the last value. Your tests would better be described as pytest tests. So far this would get: def necklace_cycle(beads): return [ beads[i:] + beads[:i] for i in range(len(beads) or 1) ] def solution_1(original, changed): return changed in necklace_cycle(original) def solution_2(original): return necklace_cycle(original).count(original) def test_solution_1(): for original, changed, expected in [ ("nicole", "icolen", True), ("nicole", "lenico", True), ("nicole", "coneli", False), ("aabaaaaabaab", "aabaabaabaaa", True), ("abc", "cba", False), ("xxyyy", "xxxyy", False), ("xyxxz", "xxyxz", False), ("x", "x", True), ("x", "xx", False), ("x", "", False), ("", "", True), ]: assert solution_1(original, changed) is expected def test_solution_2(): for original, expected in [ ("abc", 1), ("abcabcabc", 3), ("abcabcabcx", 1), ("aaaaaa", 6), ("a", 1), ("", 1), ]: assert solution_2(original) == expected You should always wrap open in a with. This is so the file is closed correctly. Currently you're not closing the file, which can lead to problems. You can just use file.read_lines() and strip the newlines. You can use setdefault to set a dictionaries key to a list and then append to that list. for word in all_words: words_dict.setdefault(len(word), []).append(word) Personally I would move this grouping code into its own function. This will allow us to use it twice if we need to. You can use for key, words in words_dict.items(): rather than: for key in words_dict: words = words_dict[key] If you remove the print then you can use for words in words_dict.values(): instead. You can use for index, word in enumerate(words, start=1): rather than: for word in words: index = words.index(word)+1 Note: These are technically different operations if there are duplicates in words. However enumerate is the solution that you want to use. You can update the similar to be appended to using dict.setdefault. I would change the default to [word] rather than []. 
def get_words(file_name): with open(file_name) as f: return map(str.rstrip, f.readlines()) def grouper(values, transformation): output = {} for value in values: output.setdefault(transformation(value), []).append(value) return output def find_similar_words(words): similar = {} for grouped_words in grouper(words, len).values(): for index, word in enumerate(grouped_words, start=1): necklace = necklace_cycle(word) for word2 in grouped_words[index:]: if word2 in necklace: similar.setdefault(word, [word]).append(word2) return similar.values() def solution_3(): four_words = ( words for words in find_similar_words(get_words("enable1.txt")) if len(words) >= 4 ) return next(four_words, None) def test_solution_3(): assert solution_3() == ["estop", "pesto", "stope", "topes"] It's much easier to read, and also runs in 5:20 rather than 10:30. This includes the time it takes to run all tests. But since the other tests take 0.05s I'm fine with this. Now we can focus on improving performance. Anagrams You should group by anagrams. This is because you're currently checking if four and cats are similar. And I think them not sharing a single character in common might just indicate that they are not. To do this you can just sort the value and group by that. This makes your \$O(n^2)\$ code perform better because \$n\$ is now much smaller than it was before. This is because \$a^2 + b^2 <= (a + b)^2\$ when a and b are natural numbers - which is what we're working with. def by_anagram(word): return tuple(sorted(word)) def find_similar_words(words): similar = {} for grouped_words in grouper(words, by_anagram).values(): ... This runs in 1.04s rather than 5:20. Sets You can further improve the performance and readability of the code by using sets. We know that the updated find_similar_words should return the intersection of the necklace and the grouped words. Since you are already returning duplicates this means we can just use set.intersection. The performance increase from this is likely due to the fact that we're returning early, and so don't consume the entire of the grouper. def find_similar_words(words): for words_ in grouper(words, by_anagram).values(): words_ = set(words_) for word in words_: yield words_ & set(necklace_cycle(word)) This runs in 0.62s rather than 1.04s.
{ "domain": "codereview.stackexchange", "id": 37662, "tags": "python, performance, python-3.x, reddit" }
Is there any physical difference between a receding body and a moving body?
Question: Edited version. A body $A$ is receding at acceleration $\vec {a}$ with respect to a point $P$ because of the expansion of the universe. Another body $B$ is accelerating at the rate of $\vec{a}$ with respect to a point $P$ through spacetime. Its movement is not because of the expansion of the universe. For $P$, body $A$ and $B$ are accelerating in the same way but the cause is different in each case. So, is there any way of differentiating between the states of motion of the two bodies for $P$ within the laws of physics? Answer: In general relativity acceleration is more complicated than it is in Newtonian mechanics. We normally think of acceleration as $d^2x/dt^2$ but in GR it is possible to have accelerating rest frames. For example if you are in an accelerating car then in your rest frame $d^2x/dt^2 = 0$ because, well, it's your rest frame so your $x$ coordinate is constant at zero. But even if all the car windows were blacked out, so could not check your surroundings, you would be able to tell you were accelerating because you could feel the g force pushing you back into your seat. And if you dropped an object you would see it accelerate away from you - it would be you accelerating, not the object, but it would look to you as if it was accelerating away from you. And in fact this is key to defining a measure of acceleration that all observers will agree on regardless of their rest frame. Suppose you drop an object and it accelerates away from you with some acceleration $a$ that must mean your acceleration is $-a$, and we call the magnitude of this your proper acceleration. This proper acceleration is a Lorentz scalar meaning that all observers will agree on it. The proper acceleration can be a bit counter intuitive. For example if I drop my pen it will accelerate away from me at $-g$, and that must mean my proper acceleration is $g$ even though in my coordinates I am stationary in my chair. Conversely if you jump out of an airplane above me (ignoring wind resistance) your proper acceleration is zero until you open your parachute, even though I would measure you to be accelerating towards me. In GR terms I would be the one accelerating upwards towards you. For more on this see How can you accelerate without moving? and If gravity isn't a force, then how are forces balanced in the real world? Anyhow, the objects that we see accelerating away from us due to the expansion of the universe have a proper acceleration of zero. Their acceleration is like your acceleration when you've just jumped out of the plane. Even though we see the objects accelerating those objects would feel themselves to be weightless just as you feel weightless when you're free falling. By contrast one of Elon Musk's rockets accelerating away from the surface of the Earth has a non-zero proper acceleration and if you were in that rocket you'd feel high g forces not weightlessness. So this difference in the proper acceleration is how we tell the difference between the two types of motion.
{ "domain": "physics.stackexchange", "id": 95357, "tags": "special-relativity, kinematics, reference-frames, terminology" }
Why can you see virtual images?
Question: In optics it is widely mentioned that real images are projectable onto screens whereas virtual ones can only be seen by a person. Isn't that contradictory? I mean, in order to see a virtual image it has to be projected onto the retina (ultimately acting as a screen). So, why can you see virtual images in the first place? Answer: Your eye is a second optical system. It re-focuses the diverging rays to produce a real image on the retina. This is exactly what the eye does when looking at a nearby (i.e. not at effective infinity) object.
{ "domain": "physics.stackexchange", "id": 622, "tags": "optics, vision, lenses" }
Magpi Magazine Downloader
Question: I have created a simple program in Python which downloades all issues of the MagPi magazine by parsing this web page for links ending in .pdf, then downloading them using the urllib module. It also supports command line options, using the argparse module: $ ./magpi_downloader.py -h usage: magpi_downloader.py [-h] [-q] [-r] [--view] [-t FILETYPE] [-a] [-i] [-e] DIR [REMOTE_DIR] Download issues of the MagPi magazine positional arguments: DIR The directory to install into. REMOTE_DIR The directory to fetch from. Files must be links on that page. Works best with Apache servers with the default directory listing. Default: http://www.raspberrypi.org /magpi-issues/ optional arguments: -h, --help show this help message and exit -q, --quiet Silence progress output -r, --reinstall Reinstall all issues --view List the issues available for install -t FILETYPE The extension of the files to download. Default: pdf -a, --all Install all files with the right extension. -i, --issues Install the regular issues (default behavior) -e, --essentials Install the 'Essentials' collection I am looking for ways to improve the user interface, the speed, and making it generally cleaner and more pythonic. Source code: import urllib, os, re, time, sys, argparse from bs4 import BeautifulSoup ISSUES_REGEX = u'MagPi[0-9]+' ESSENTIALS_REGEX = u'Essentials_.*' def get_installed(directory): if not os.path.exists(directory): if not quiet: print("Directory doesn't exist\nCreating new directory with the name {}".format(directory)) os.mkdir(directory) for filename in os.listdir(directory): if re.match(regex, filename): yield filename def download(filename, localfilename): localfile = get_full_path(download_dir, localfilename) try: open(localfile, 'w').close() urllib.request.urlretrieve(filename, localfile) except KeyboardInterrupt: print('Cleaning up...') os.remove(localfile) sys.exit() def webopen(page): return urllib.request.urlopen(page) def get_links(soup): for link in soup.find_all('a'): yield link.get('href') def get_issues(soup): links = list(get_links(soup)) for link in links: if re.match(regex, link): yield link def get_missing(installed, all_issues): for issue in all_issues: if not issue in installed: yield issue def get_full_path(directory, filename): return os.path.join(directory, filename) def to_install_info(directory): page = webopen(remote_dir) soup = BeautifulSoup(page) issues = list(get_issues(soup)) installed = list(get_installed(directory)) missing = list(get_missing(installed, issues)) return issues, installed, missing def install(missing): for issue in missing: print('Downloading {} '.format(issue)) download(get_full_path(remote_dir, issue), issue) print('Done') def install_quiet(missing): for issue in missing: download(get_full_path(remote_dir, issue), issue) def print_to_install_info(issues, installed, missing): print('{} Released:\n\n{}\n\n{} Installed:\n\n{}\n\n{} To install:\n\n{}'.format(len(issues), issues, len(installed), installed, len(missing), missing)) def install_all(missing): install(missing) def install_all_quiet(missing): install_quiet(missing) if __name__ == '__main__': parser = argparse.ArgumentParser(description="Download issues of the MagPi magazine") parser.add_argument('-q', '--quiet', action='store_true', help="Silence progress output") parser.add_argument('-r', '--reinstall', action='store_true', help="Reinstall all issues") parser.add_argument('--view', action='store_true', help="List the issues available for install") parser.add_argument('-t', dest='filetype', 
metavar='FILETYPE', type=str, action='store', default=u'pdf', help="The extension of the files to download. Default: %(default)s") parser.add_argument('-a', '--all', dest='types', action='store_const', const=u'.*', help="Install all files with the right extension.") parser.add_argument('-i', '--issues', dest='types', action='store_const', const=ISSUES_REGEX, help='Install the regular issues (default behavior)') parser.add_argument('-e', '--essentials', dest='types', action='store_const', const=ESSENTIALS_REGEX, help="Install the 'Essentials' collection") parser.add_argument('directory', metavar='DIR', type=str, help="The directory to install into.") parser.add_argument('remote_dir', metavar='REMOTE_DIR', type=str, nargs='?', default='http://www.raspberrypi.org/magpi-issues/', help="The directory to fetch from. Files must be links on that page. Works \ best with Apache servers with the default directory listing. Default: %(default)s") args = parser.parse_args() download_dir = args.directory remote_dir = args.remote_dir types = args.types if not types: types = ISSUES_REGEX regex = u"^{}\.{}$".format(types, args.filetype) print(regex) quiet = args.quiet reinstall = args.reinstall view = args.view issues, installed, missing = to_install_info(download_dir) if reinstall: missing = issues msg = 'Overwriting issues' else: msg = 'Installing Issues' if not quiet: print_to_install_info(issues, installed, missing) if view: print('To install, run without the view flag.') else: print(msg) install_all(missing) else: install_all_quiet(missing) I am sorry for the extreme lack of comments and docstrings, so that is what I will focus on. Usage cases: To download everything: $ ./magpi_downloader.py -a download_dir To download all regular issues: $ ./magpi_downloader.py download_dir To download the MagPi essentials: $ ./magpi_downloader.py -e download_dir To view but not install available issues: $ ./magpi_downloader.py download_dir Other options are as documented above. Answer: The code overall is not readable. It's quite lengthy, there is no modular structure, no meaningful comments and docstrings. The first thing I would do is to split it into multiple modules grouped logically, define the docstrings for each of the functions (which may actually lead to some functions being joined together or even removed). Also, some of the code blocks can be extracted into separate functions. For instance, apply the "Extract Method" refactoring method and extract the "parsing arguments with argparse" part into a separate function. Now, let's go over the issues one by one: import organization - avoid importing multiple built-in modules on a single line. Third-party imports need to have a newline before them. 
Put 2 newlines after all the imports (PEP8 import guidelines). Since you are issuing multiple requests to the same domain, I would use the requests module, maintaining a web-scraping session via requests.Session - this might result in a significant performance boost: So if you're making several requests to the same host, the underlying TCP connection will be reused, which can result in a significant performance increase. When instantiating a "soup" object, it's highly recommended to explicitly specify the underlying parser to avoid letting BeautifulSoup do this automatically - it might choose html.parser on your machine, but on another machine it may pick lxml or html5lib, which may produce different parsing results (see Differences between parsers): soup = BeautifulSoup(page, "html.parser") # soup = BeautifulSoup(page, "html5lib") # soup = BeautifulSoup(page, "lxml") You can improve the way you locate the elements on a page. Currently, you are finding all the links via find_all("a") and then applying a regular expression to the href attribute values. BeautifulSoup can do both in a single "find" command: soup.find_all("a", href=re.compile(r"expression here")) The print_to_install_info() can benefit from using a multi-line string (to avoid multiple newline characters in the format template string). Except for printing, the install() and install_quiet() functions really duplicate each other. Either have a single function with a quiet argument, or, even better, use the logging module with a configurable/controllable log level. See if you can replace listdir() and an inner regex check with a single glob.glob() (or glob.iglob()) call.
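A rough sketch of how the session, explicit-parser, and href-regex suggestions could fit together (the function name and the exact regular expression are assumptions, not tested against the MagPi site):
import re
import requests
from bs4 import BeautifulSoup

# One session so the TCP connection to the host is reused across requests.
session = requests.Session()

def get_issue_links(remote_dir, pattern):
    page = session.get(remote_dir).text
    # Explicit parser so results do not depend on what happens to be installed.
    soup = BeautifulSoup(page, "html.parser")
    # Filter on the href attribute while finding the links, in one call.
    return [a.get("href") for a in soup.find_all("a", href=re.compile(pattern))]

links = get_issue_links("http://www.raspberrypi.org/magpi-issues/",
                        r"^MagPi[0-9]+\.pdf$")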
{ "domain": "codereview.stackexchange", "id": 24486, "tags": "python, python-3.x, web-scraping" }
Keras reuse trained weights on CNN with different number of channels
Question: Related to TrackNet, a CNN for tracking tennis balls on TV tennis matches, the Arxiv paper mentions it is scalable, ie. the input can be any number of frames concatenated rather than the three they used. So I tried to concatenate 11 frames and adjusted the input layer dimension: #changed from 9 to 33 for 11 frames input imgs_input = Input(shape=(33,input_height,input_width)) But now when I try to load a weights file that comes with the open source code, I am getting an error: Traceback (most recent call last): File "predict_video.py", line 55, in <module> m.load_weights( save_weights_path ) File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 1166, in load_weights f, self.layers, reshape=reshape) File "/usr/local/lib/python2.7/dist-packages/keras/engine/saving.py", line 1058, in load_weights_from_hdf5_group K.batch_set_value(weight_value_tuples) File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 2465, in batch_set_value assign_op = x.assign(assign_placeholder) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 1952, in assign name=name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/state_ops.py", line 227, in assign validate_shape=validate_shape) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_state_ops.py", line 66, in assign use_locking=use_locking, name=name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/deprecation.py", line 507, in new_func return func(*args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3616, in create_op op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2027, in __init__ control_input_ops) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1867, in _create_c_op raise ValueError(str(e)) ValueError: Dimension 0 in both shapes must be equal, but are 3 and 64. Shapes are [3,3,33,64] and [64,9,3,3]. for 'Assign' (op: 'Assign') with input shapes: [3,3,33,64], [64,9,3,3]. 
The actual input for the original CNN is 3 video frames of height 360, width 640 and the code looks like this: imgs_input = Input(shape=(9,input_height,input_width)) And the model is instantiated like this: m = modelFN( n_classes , input_height=height, input_width=width ) where n_classes is a command line argument with default value of 256 For 11 frames, I tried instantiating the 3 frames model, loading the weights and then instantiating the 11 frames model and tried to used old_model.get_weights() specified in this answer: Stackoverflow answer So the model and weights loading snippet looks like this: #load TrackNet model modelFN = Models.TrackNet.TrackNet m = modelFN( n_classes , input_height=height, input_width=width ) m.compile(loss='categorical_crossentropy', optimizer= 'adadelta' , metrics=['accuracy']) #load and save from same path m.set_weights( save_weights_path ) #load TrackNet 11 frames model and transfer weights model11 = Models.TrackNet11.TrackNet11 m11 = model11(n_classes, input_height=height, input_width=width) m11.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy']) m11.load_weights(m.get_weights) The full code is available at the link below TrackNet repo I tried the Stackoverflow answer and tried to used None for the channel dimension because using 33 gave me an error saying dim2 is different ie. [3,3,33,64] vs [3,3,9,64] but now I am getting: ValueError: The channel dimension of the inputs should be defined. Found `None`. So the channel dimension has to be defined. I am going to try this: datasciencestackexchange answer But this means that the weights from inputs to first conv2d layer will not be the pretrained ones? Anyways, I did try it but was unable to get any output, ie. it did not track the tennis ball at all and I am pretty sure there are no other errors in the code but will double check. If anyone has a easy solution that would be appreciated. My attempt at converting from 3 frames concatenated input to 11 frames can be seen at the following link in files predict_video.py and predict_video11.py. In the Models folder you will see TrackNet.py for 3 frames and TrackNet11.py for 11. There is also a python 3 version that I converted to from the original python 2 version using py2to3 that works and comes with requirementspy3.txt assuming you have the correct version of tensorflow installed for your machine (cpu or gpu with cuda, cudnn). TrackNet on Gitlab link Arxiv paper link: Arxiv TrackNet Answer: It's impossible to change the number of channels. The weights of the model depend on the number of channels. Changing channels is changing weights. Changing weights is having a completely new model. You can only change the image size (in purely convolutional networks - without Flatten - the image size does not affect the number of weights). But: Frames are not channels. Take care with this. Frames are entire images, not channels of images. But it's impossible to help further without knowing the code of the original CNN. I don't know if the net is purely convolutional, if it uses the frames as samples, if it uses TimeDistributed frames, or if it uses recursive layers.
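A small illustration of the answer's point, not taken from the original post: a Conv2D kernel's shape includes the number of input channels, so weights saved for a 9-channel first layer cannot fit a 33-channel one. The sketch below uses TF2-style Keras with channels-last input purely for demonstration; the commented-out loop is one hypothetical workaround (copy weights layer by layer and leave mismatched layers, such as the new first conv, randomly initialised), assuming the two models have the same layer ordering:
from tensorflow import keras

conv9 = keras.layers.Conv2D(64, 3, padding='same')
conv33 = keras.layers.Conv2D(64, 3, padding='same')
conv9.build((None, 360, 640, 9))    # kernel shape: (3, 3, 9, 64)
conv33.build((None, 360, 640, 33))  # kernel shape: (3, 3, 33, 64)
print(conv9.kernel.shape, conv33.kernel.shape)  # the shapes that clash on load

# Hypothetical partial transfer between a trained model m and a new model m11:
# for old, new in zip(m.layers, m11.layers):
#     w_old, w_new = old.get_weights(), new.get_weights()
#     if [w.shape for w in w_old] == [w.shape for w in w_new]:
#         new.set_weights(w_old)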
{ "domain": "datascience.stackexchange", "id": 5665, "tags": "python, keras, cnn, object-detection" }
Which of the following pair of molecules have identical bond dissociation energies (approx)?
Question: Which of the following pair of molecules have identical bond dissociation energies (approx)? $\ce{H2}$, $\ce{F2}$ $\ce{F2}$, $\ce{I2}$ $\ce{N2}$, $\ce{CO}$ $\ce{HF}$, $\ce{O2}$ I think it's either (2) or (3). (Single correct type). Both $\ce{F2}$ and $\ce{I2}$ bonds are weak because of inter-electronic repulsions and due to large sizes respectively. In case (3), both molecules have bond order 3. Answer: The answer here has nothing to do with how the molecules relate to each other, e.g. similar bond order or length. This is actually a question that highlights an unexpected break in a trend. The bond enthalpies for the halogens are: $$ \begin{array}{|c|c|}\hline \text{Halogen}&\text{Bond energy (kJ/mol)}\\\hline \ce{F-F}&\pu{156}\\\hline \ce{Cl-Cl}&\pu{243}\\\hline \ce{Br-Br}&\pu{193}\\\hline \ce{I-I}&\pu{151}\\\hline \end{array} $$ Plotting these data, we would expect the bond enthalpy of $\ce{F2}$ to be the strongest of them all, something around $\pu{300 kJ/mol}$, but in fact it is much weaker than for chlorine. Indeed it is so weak, it is almost as weak as for iodine. The reason for this is that fluorine is so small that it breaks the trend. The atomic radius is so small that the electrostatic repulsion between the nuclei is significant, and the non-bonding electrons surrounding each atom also repel each other due to the small amount of space available. These forces counteract the strong attraction between the nuclei and the bonding pair of electrons and weaken the bond compared to what might be expected.
{ "domain": "chemistry.stackexchange", "id": 5180, "tags": "inorganic-chemistry, bond" }
Derivation of the Cartan Field equation
Question: Please help me understand how, in this introduction to spacetime and fields, the Einstein Cartan equation: $$C^k_{\hspace{2mm} [ji]}-\delta_{[i}^{k}C^l_{\hspace{2mm} j]l}=\frac{\kappa}{2}s_{ij}^{\hspace{2mm}k}, $$ when the starting variation to the Field equation with respect to the contortion tensor ($C$ being the contortion tensor, $s$ being the spin tensor) is derived to be $$\frac{-1}{\kappa c}\int (C^{kj}_{\hspace{2mm} i}-C^{lj}_{\hspace{2mm} l}\delta_i^k) \sqrt{-\mathfrak{g}}\delta C^i_{\hspace{2mm}jk}\delta \Omega \hspace{3mm} + \frac{1}{2c}\int s_j^{\hspace{2mm} ik}\sqrt{-\mathfrak{g}}\delta C^j_{\hspace{2mm}ik}\delta\Omega=0,$$ which leads directly to the expression $$C^{kj}_{\hspace{2mm} i}-C^{lj}_{\hspace{2mm} l}\delta_i^k = \frac{\kappa}{2}s_i^{\hspace{2mm}jk}. $$ Is there some form of symmetrisation that can reduce this equation into the familiar form above? Answer: Note that the spin tensor is skew-symmetric in its lower indices, $$ s_{ij}{}^k=-s_{ji}{}^k $$ Therefore, we have $s_{ij}{}^k=s_{[ij]}{}^k$. From this, its easy to see that $$ A_{ij}{}^k=s_{ij}{}^k \quad \Leftrightarrow \quad A_{[ij]}{}^k=s_{ij}{}^k $$ as required.
{ "domain": "physics.stackexchange", "id": 29043, "tags": "homework-and-exercises, general-relativity, differential-geometry, tensor-calculus, variational-calculus" }
A* algorithm how to choose $h(n)$, a heuristic estimation of the cost of node n
Question: I'm trying to understand the A$^*$ search algorithm. I read in wikipedia that A$^*$ chooses nodes based on their cost, calculated as $f(n)=g(n)+h(n)$, where $h(n)$ is a heuristic estimation of the cost to reach the target from node $n$. But I don't understand how to determine its value. Thanks for your help. Answer: Let's try to walk together through the $f$ function! The intuition here is that we want to choose nodes in our exploration frontier which are: Close to the start state Close to a goal state The shortest distance to the start state can be calculated recursively for every node in the search space pretty much as we do in uniform cost search. However, the distance of a node to a goal state cannot be precisely calculated (for that we would need to know the shortest path to a goal, which would mean we have already solved the problem). Thus the best we can do is try estimating it, using a heuristic function $h$. Obviously, we want this estimation to be as precise as possible, but we also want some other properties to hold: (admissibility) we want $h$ to never overestimate the cost to the goal from any given node. (consistency) we want $h(n)$ to not be greater than $h(n')+c(n,n')$ for any successor $n'$ of $n$, where $c(n,n')$ is the cost of the path that joins both states. In other words, the true cost of going from any given node to another must be less than the difference between the heuristic value of both nodes. If we impose the restriction that $h(n)=0$ if $n$ is a goal state, then it is easy to see that consistency implies admissibility. These properties guarantee that the algorithm is sound, complete and optimal for tree-search (admissibility) and graph-search (consistency). Now, there are many heuristics out there for each problem. One example is the trivial heuristic which assigns $h(n)=0$ for every $n$. This heuristic is indeed consistent. In this case, $A^*$ behaves as uniform cost search. We can find heuristics better suited to solve a problem. One way to do so is by relaxing the conditions to solve the problem, and calculating the exact solution of the relaxed problem from the state we are in. For example, we can take the $n$-queens problem, where each step consists in moving one queen, and take as a heuristic the number of queens under attack minus one (or zero if no queen is under attack).
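A minimal sketch (not part of the original answer) of how a heuristic plugs into A*; the neighbour function and the Manhattan-distance heuristic for a unit-cost grid are just illustrative assumptions:
import heapq

def a_star(start, goal, neighbors, h):
    # f(n) = g(n) + h(n): g is the exact cost from the start, h the estimate to the goal.
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float('inf')):
            continue  # stale heap entry
        for cost, nxt in neighbors(node):
            new_g = g + cost
            if new_g < best_g.get(nxt, float('inf')):
                best_g[nxt] = new_g
                heapq.heappush(open_heap, (new_g + h(nxt), new_g, nxt))
    return None

# On a grid with unit step costs, Manhattan distance never overestimates,
# so it is an admissible (and consistent) heuristic.
def manhattan(goal):
    gx, gy = goal
    return lambda node: abs(node[0] - gx) + abs(node[1] - gy)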
{ "domain": "cs.stackexchange", "id": 7300, "tags": "algorithms, graphs, search-algorithms" }
Dining Philosophers variation in C
Question: This is a variation of the Dining Philosophers Problem. The task is to coordinate several students inside a gym. All students try to obtain their desired training weights from a shared weight rack. During runtime, the user can issue commands to: block a student (b + student id) unblock a student (u + student id) proceed (p + student id) (ends a student's workout/rest loop) end the program (q or Q) I'm a beginner in C and would appreciate opinions and pointers on how to improve my code. main.c #include <pthread.h> #include "main.h" #include "gym_monitor.h" /* * * Main module of the Thread-Coordination Exercise. This modul contains * the main function which creates the threads. After creation all threads * enter the gym_routine function. * */ #define REST_LOOP 1000000000 #define WORKOUT_LOOP 500000000 #define WEIGHTS_ANNA 6 #define WEIGHTS_BERND 8 #define WEIGHTS_CLARA_DIRK 12 #define WEIGHTS_EMMA 14 #define MAX_INPUT_SIZE 3 #define WEIGHT_RACK_DEF {4,4,5} static pthread_barrier_t gym_routine_barrier; static void workout(Student* student) { for( int i = 0; i < WORKOUT_LOOP; i++ ) { if(student->status == BLOCKED) { rest_student(student); }else if(student->status == PROCEED) { student->status = NORMAL; break; } } } static void rest(Student* student) { for( int i = 0; i < REST_LOOP; i++ ) { if(student->status == BLOCKED) { rest_student(student); }else if(student->status == PROCEED) { student->status = NORMAL; break; } } } static void* gym_routine(void* stud) { pthread_barrier_wait(&gym_routine_barrier); Student* student = (Student*) stud; while(student->status != QUIT) { get_weights(student); workout(student); put_weights(student); rest(student); } return NULL; } int main(void) { char available_weights[] = WEIGHT_RACK_DEF; int students_weights[] = {WEIGHTS_ANNA,WEIGHTS_BERND,WEIGHTS_CLARA_DIRK, WEIGHTS_CLARA_DIRK,WEIGHTS_EMMA}; Student students[NR_STUDENTS]; monitor_vars* monitor = init_monitor(); pthread_barrier_init(&gym_routine_barrier, NULL, NR_STUDENTS); int res; for( int i = 0; i < NR_STUDENTS; i++ ) { students[i].thread_id = i; students[i].weight_plan = students_weights[i]; students[i].status = NORMAL; for(int j = 0; j < NR_WEIGHTS; j++) { students[i].current_weight[j] = 0; } students[i].mon = monitor; students[i].sem_student = init_sem_student(); students[i].other_students = students; students[i].weight_rack = available_weights; res = pthread_create(&students[i].thread, NULL, gym_routine, (void*) &students[i]); if(res != 0) { perror("Thread creation failed"); exit(EXIT_FAILURE); } } /*Handling user input*/ char input[MAX_INPUT_SIZE] = {0}; while(strncasecmp(fgets(input, MAX_INPUT_SIZE, stdin),"q", 1)) { /* trying to get rid of newline from input this is the only 'solution' that works so far*/ if(input[0] == '\n' || input[0] == '\0') { continue; } fflush(stdout); if((input[0] - '0') >= 0 && (input[0] - '0') < NR_STUDENTS && strchr("bpu", input[1]) && strlen(input) == 2 ) { int student_id = input[0] - '0'; students[student_id].status = input[1]; if(students[student_id].status == UNBLOCK) { wake_student(&(students[student_id])); students[student_id].status = NORMAL; } }else { printf("Not a valid instruction\n"); fflush(stdout); } } /*updating student status*/ for(int i = 0; i < NR_STUDENTS; i++) { wake_student(&students[i]); //students can only quit if they are not asleep students[i].status = QUIT; } for(int i = 0; i < NR_STUDENTS; i++) { pthread_join(students[i].thread,NULL); destroy_sem_student(&students[i]); free(students[i].sem_student); } destroy_monitor(monitor); 
exit(EXIT_SUCCESS); } main.h #ifndef MAIN_H #define MAIN_H #include <stdlib.h> #include <pthread.h> #include <string.h> #include <strings.h> #include "gym_monitor.h" enum weight_names{ KG_2, KG_3, KG_5 }; #define BLOCKED 'b' #define PROCEED 'p' #define NORMAL 'n' #define UNBLOCK 'u' #define QUIT 'q' #define NR_WEIGHTS 3 #define NR_STUDENTS 5 typedef struct Student Student; struct Student { pthread_t thread; int thread_id; int weight_plan; char status; char training_state; char current_weight[NR_WEIGHTS]; monitor_vars* mon; sem_t* sem_student; Student* other_students; char* weight_rack; }; #endif gym_Monitor.c #include "gym_monitor.h" #include "main.h" /* * Monitor module - Encapsulates all objects and functions that manage * thread-coordination. * */ #define MAX_2KG_3KG 4 #define MAX_5KG 5 #define RED "\x1B[31m" #define RESET "\x1B[0m" const int weight_arr[] = { [KG_2] = 2, [KG_3] = 3, [KG_5] = 5 }; static int calculate_weight(Student* student, int weight) { if(weight == 0) { return 1; } if(weight >= weight_arr[KG_2] && student->weight_rack[KG_2] > 0) { student->weight_rack[KG_2] -= 1; student->current_weight[KG_2] += 1; if(!calculate_weight(student, weight - weight_arr[KG_2])) { student->weight_rack[KG_2] += 1; student->current_weight[KG_2] -= 1; }else { return 1; } if(weight >= weight_arr[KG_3] && student->weight_rack[KG_3] > 0) { student->weight_rack[KG_3] -= 1; student->current_weight[KG_3] += 1; if(!calculate_weight(student, weight - weight_arr[KG_3])) { student->weight_rack[KG_3] += 1; student->current_weight[KG_3] -= 1; }else { return 1; } } if(weight >= weight_arr[KG_5] && student->weight_rack[KG_5] > 0) { student->weight_rack[KG_5] -= 1; student->current_weight[KG_5] += 1; if(!calculate_weight(student, weight - weight_arr[KG_5])) { student->weight_rack[KG_5] += 1; student->current_weight[KG_5] -= 1; }else { return 1; } } } return 0; } static void display__status(Student* student) { int consistency_check[] = {0,0,0}; for(int i = 0; i < NR_STUDENTS; i++) { printf("%d(%d)%c:%c:[%d, %d, %d] ",student->other_students[i].thread_id, student->other_students[i].weight_plan, student->other_students[i].status, student->other_students[i].training_state, student->other_students[i].current_weight[KG_2], student->other_students[i].current_weight[KG_3], student->other_students[i].current_weight[KG_5]); consistency_check[KG_2] += student->other_students[i].current_weight[KG_2]; consistency_check[KG_3] += student->other_students[i].current_weight[KG_3]; consistency_check[KG_5] += student->other_students[i].current_weight[KG_5]; } if( consistency_check[KG_2] > MAX_2KG_3KG || consistency_check[KG_3] > MAX_2KG_3KG || consistency_check[KG_5] > MAX_5KG ) { printf(RED "Inconsistent State\n" ); printf("[%d, %d, %d]\n" RESET,consistency_check[KG_2], consistency_check[KG_3], consistency_check[KG_5]); }else { printf("Supply: [%d, %d, %d]\n", student->weight_rack[KG_2], student->weight_rack[KG_3], student->weight_rack[KG_5]); } fflush(stdout); } void get_weights(Student* student) { pthread_mutex_lock(&student->mon->lock); student->training_state = GET_WEIGHTS; while(!calculate_weight(student, student->weight_plan)) { student->training_state = BLOCKED; display__status(student); pthread_cond_wait(&student->mon->insufficient_weight, &student->mon->lock); if(student->status != QUIT) { student->status = NORMAL; } } display__status(student); student->training_state = WORKOUT; pthread_mutex_unlock(&student->mon->lock); } void put_weights(Student* student) { pthread_mutex_lock(&student->mon->lock); 
student->training_state = PUT_WEIGHTS; student->weight_rack[KG_2] += student->current_weight[KG_2]; student->weight_rack[KG_3] += student->current_weight[KG_3]; student->weight_rack[KG_5] += student->current_weight[KG_5]; for(int i = 0; i < NR_WEIGHTS; i++) { student->current_weight[i] = 0; } display__status(student); pthread_cond_signal(&student->mon->insufficient_weight); student->training_state = REST; pthread_mutex_unlock(&student->mon->lock); } void rest_student(Student* student) { sem_wait(student->sem_student); } void wake_student(Student* student) { sem_post(student->sem_student); } sem_t* init_sem_student() { int res = 0; sem_t* sem_student = malloc(sizeof(sem_t)); if(sem_student == NULL) { perror("malloc failed, exiting..."); exit(EXIT_FAILURE); } res = sem_init(sem_student,0,1); if(res != 0) { perror("Semaphore creation failed"); exit(EXIT_FAILURE); } return sem_student; } void destroy_sem_student(Student* student) { int res = 0; res = sem_destroy(student->sem_student); if(res != 0) { perror("Destroying semaphore failed"); exit(EXIT_FAILURE); } } monitor_vars* init_monitor() { int res = 0; monitor_vars* monitor_vars_ptr = malloc(sizeof(monitor_vars)); if(monitor_vars_ptr == NULL) { perror("malloc failed, exiting..."); exit(EXIT_FAILURE); } res = pthread_mutex_init(&monitor_vars_ptr->lock, NULL); if(res != 0) { perror("Mutex creation failed"); exit(EXIT_FAILURE); } res = pthread_cond_init(&monitor_vars_ptr->insufficient_weight, NULL); if(res != 0) { perror("Condition Variable creation failed"); exit(EXIT_FAILURE); } return monitor_vars_ptr; } void destroy_monitor(monitor_vars* monitor_vars_ptr) { int res = 0; res = pthread_mutex_destroy(&monitor_vars_ptr->lock); if(res != 0) { perror("Destroying mutex failed"); exit(EXIT_FAILURE); } res = pthread_cond_destroy(&monitor_vars_ptr->insufficient_weight); if(res != 0) { perror("Destroying condition variable failed"); exit(EXIT_FAILURE); } free(monitor_vars_ptr); } gym_monitor.h #ifndef MONITOR_H #define MONITOR_H #include <pthread.h> #include <stdio.h> #include <semaphore.h> #define REST 'R' #define WORKOUT 'W' #define GET_WEIGHTS 'G' #define PUT_WEIGHTS 'P' struct Student; typedef struct { pthread_mutex_t lock; pthread_cond_t insufficient_weight; }monitor_vars; /* * Manages the weight distribution among the students. * Students enter the function, lock the mutex and try to obtain * their corresponding weight. If the weight is not obtainable * the student is waiting on a condition variable. Otherwise the * student picks the weight and unlocks the mutex. * * parameter - the student * return - none */ void get_weights(struct Student* stud); /* * Coordinates the transfer of weights from students back to the * weight rack inside a monitor. * * parameter - the student * return - none */ void put_weights(struct Student* stud); /* * Student waits on a semaphore * * parameter - the student * return - none */ void rest_student(struct Student* stud); /* * Student waiting on a semaphore is unblocked * * parameter - the student * return - none */ void wake_student(struct Student* stud); /* * Creates, initializes and returns a struct of monitor_vars. * The struct contains a mutex and a condition variable. * Exits the program if either memory allocation or initializing fails. * * parameter - none * return - the mutex and condition variable inside a monitor_vars struct */ monitor_vars* init_monitor(); /* * Destroys the supplied monitor struct. * Exits the program if destroying the monitor fails. 
* * parameter - the monitor struct to be destroyed * return - none */ void destroy_monitor(monitor_vars* mon); /* * Creates, initializes and returns a semaphore. * Exits the program if either memory allocation or initializing fails. * * parameter - none * return - the initialized semaphore */ sem_t* init_sem_student(); /* * Destroys the semaphore of the supplied student. * Exits the program if destroying the semaphore fails. * * parameter - the student * return - none */ void destroy_sem_student(struct Student* student); #endif Answer: Before I go into details, I feel obliged to say that your project is fairly ambitious for a C beginner. I have several criticisms, which I hope you will receive as constructive, but overall, your code makes me suspect that although you are new to C, you are not new to programming in general or to multi-threaded programming in particular. If I am mistaken about that then please take it as a complement. Data races Your code contains some data races involving the manipulation and testing of students' status variables. The main thread modifies these as a result of command input and at shutdown, and each student's thread both reads and writes that variable for its own student. Some of the student threads' accesses are performed under protection of the mutex (those in functions from gym_routine.c), but others and the main thread's are not. Since these variables are written to by at least one thread each and read by multiple threads, every access must be appropriately protected once the per-student threads are started. You've apparently chosen to use a mutex for that, which is fine; you just need to be sure to protect all accesses. Busy loops You use high-iteration-count busy loops for making the workout() and rest() functions consume non-trivial time. At minimum, you'll need to greatly reduce the iteration counts when you correct the data races there, as locking and unlocking mutexes is costly. Really, however, you ought to choose a delay mechanism that doesn't consume CPU. pthread_cond_timedwait() provides one such mechanism, with the advantage that another thread (e.g. the main one) can interrupt the wait if needed. That could be made to work in concert with resolving your data races, by giving each student its own mutex and condition variable to protect access to the student status. That would also allow you to set the durations of the workout and rest times in terms of machine-independent time units. Unnecessary dynamic allocation There are many good uses for dynamic allocation, but it's complicated enough and easy enough to mismanage that you should not use it where you don't actually need it. In particular, just because you need a pointer to something does not necessarily mean that that thing needs to be dynamically allocated. It's not uncommon to use a pointer to an ordinary local or file-scope variable, obtained via the & operator. In your code, this applies to most (but, oddly, not all) of your synchronization objects. For example, I recommend changing the sem_student member of struct Student from a pointer to a plain sem_t. (It will then need to be handled slightly differently, but mainly the dynamic allocation will go away.) Similarly, there is no need to dynamically allocate your monitor_vars object. Just declare an instance. Input handling In your main input loop, you should account for the possibility that fgets() returns NULL (indicating end-of-file before any input is read, or error). 
On the other hand, you probably do not have to account for input[0] == '\0' because fgets() always copies at least one character from input to buffer upon success (provided you specify a buffer size of at least 2). Literal '\0' characters in the input could conceivably trip you up, but if you need to accommodate those then you need to handle input altogether differently. If you mean to accept only one command per input line, then I'd recommend consuming the balance of the line, up to the next newline, at the bottom of each iteration of the input loop. The one thing to watch out for there would be input lines containing a newline at index 1, which your current code will reject as invalid, but for which the trailing newline will already have been read. Issues with headers and #includes As a matter mostly of style, each of your headers should #include those headers defining constants and identifiers used directly by that header, if any, but not any other headers. (Each C source file should do the same.) Do not otherwise have your headers include other headers; it is unnecessary, and under some circumstances it can be harmful. For example, given it's current contents, your gym_monitor.h is right to include pthread.h and semaphore.h, but there appears to be no reason for it to include stdio.h. On the other hand, I would encourage having it include main.h for the definition of struct Student, or else to combine those two headers into one. As a separate matter, it is a good idea to ensure that each source file and header that #includes headers includes them in the same relative order. This is less important for standard library headers, but there's no good reason to distinguish. It can be the case that changing the order of headers changes their interpretation (which would be a weakness of one or more of the headers involved, but sometimes that happens). In your case, your files differ on the order of gym_monitor.h and main.h.
{ "domain": "codereview.stackexchange", "id": 23223, "tags": "beginner, c, multithreading, pthreads, dining-philosophers" }
Merkle tree sorting leaves and pairs
Question: I am implementing a Merkle tree and am considering using either of the two options. The first one is sorting only by leaves. This one makes sense to me since you would like to have the same input every time you are constructing a tree from the data, that might not arrive sorted by default. CAB / \ CA \ / \ \ C A B / \ / \ / \ 1 2 3 4 5 6 The second one is sorting by leaves and pairs, which means that after sorting the leaves, you also sort all the pairs after hashing them, however I'm not entirely sure about the benefits of this implementation (if any). ACB / \ AC \ / \ \ C A B / \ / \ / \ 1 2 3 4 5 6 I have seen these implementations of Merkle trees in the past but am not sure about their benefits. So why choose one over the other? Answer: Further expanding on @Paul Etscheit, sorting the hash pairs simplifies the verification of merkle proofs. Example: The open zeppelin merkle proof verification smart contract, requires the hash pairs to be sorted. This is made even clearer when you look at their tests. Merkle inclusion proofs can be verified through a function function verify(bytes[] proof, bytes root, bytes leaf). The function will return true if root is equal to the root computed from the leaf and proof nodes. Using sorted hash pairs means your merkle proof does not needs to contain information about the order in which the child hashes should be combined in. i.e. Your proof can simply be an array of hashes. For example: Using merkletreejs and the following merkle tree: └─ 7075152d03a5cd92104887b476862778ec0c87be5c2fa1c0a90f87c49fad6eff ├─ e5a01fee14e0ed5c48714f22180f25ad8365b53f9779f79dc4a3d7e93963f94a │ ├─ ca978112ca1bbdcafac231b39a23dc4da786eff8147c4e72b9807785afee48bb │ └─ 3e23e8160039594a33894f6564e1b1348bbd7a0088d42c4acb73eeaed59c009d └─ 2e7d2c03a9507ae265ecf5b5356885a53393a2029d241394997265a1a25aefc6 └─ 2e7d2c03a9507ae265ecf5b5356885a53393a2029d241394997265a1a25aefc6 Without sorting the hashpairs you need to use .getProof(<node>), which returns a proof of the form: [ { position: 'right', data: <Buffer 3e 23 e8 16 00 39 59 4a 33 89 4f 65 64 e1 b1 34 8b bd 7a 00 88 d4 2c 4a cb 73 ee ae d5 9c 00 9d> }, { position: 'right', data: <Buffer 2e 7d 2c 03 a9 50 7a e2 65 ec f5 b5 35 68 85 a5 33 93 a2 02 9d 24 13 94 99 72 65 a1 a2 5a ef c6> } ] When you sort the hashpairs, you can use .getHexProof(<node>), which returns a proof of the form: [ '0x3e23e8160039594a33894f6564e1b1348bbd7a0088d42c4acb73eeaed59c009d', '0x2e7d2c03a9507ae265ecf5b5356885a53393a2029d241394997265a1a25aefc6' ]
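A rough Python sketch of the sorted-pair idea (the original answer uses merkletreejs and Solidity; the hash function and names here are assumptions): because each pair is sorted before hashing, the proof is just a list of sibling hashes with no left/right positions.
import hashlib

def hash_pair(a, b):
    # Sort the two child hashes so the combined hash does not depend on order.
    left, right = sorted((a, b))
    return hashlib.sha256(left + right).digest()

def verify(proof, root, leaf):
    node = leaf
    for sibling in proof:
        node = hash_pair(node, sibling)
    return node == root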
{ "domain": "cs.stackexchange", "id": 18699, "tags": "algorithms, trees, hash" }
K alpha and K beta, which one has more energy?
Question: I have a question regarding K-alpha and K-beta in X-rays. I examined the intensity vs. wavelength diagram and concluded that K-beta has more energy than K-alpha but K-beta is more intense. Am I correct? if so, is that always the case? Answer: The $K_\alpha$ is produced by the $2p \to 1s$ transition and the $K_\beta$ is produced by the $3p \to 1s$ transition. So the $K_\beta$ radiation has a higher energy than the $K_\alpha$ transition. If you look at a typical spectrum: (picture from the Arizona State University web site) then you can see the $K_\alpha$ has a longer wavelength than the $K_\beta$, and a longer wavelength means a lower energy since the energy is given by: $$ E = h\nu = \frac{hc}{\lambda} $$ The $K_\alpha$ normally has the higher intensity since the probability that the X-rays will cause a $n=1 \to N$ transition generally falls with increasing $N$.
{ "domain": "physics.stackexchange", "id": 48691, "tags": "quantum-mechanics, radiation, x-rays" }
Speed up two for-loops?
Question: The code below is slow. Any thoughts on speeding it up? dict1 = {} dict2 = {} list_needed = [] for val in dict1.itervalues(): for k,v in dict2.iteritems(): if val == k: list_needed.append([val,v]) Answer: Perhaps, for val in dict1.itervalues(): if val in dict2: list_needed.append([val,dict2[val]]) This is similar to list_needed = [[val,dict2[val]] for val in dict1.itervalues() if val in dict2] My preference would be to make [val,v] a tuple - i.e. (val,v) which is an immutable data structure and hence would be faster than a list. But it may not count for much.
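On Python 3 (where itervalues()/iteritems() no longer exist), a rough equivalent of the same idea would be:

# Python 3 sketch: one dictionary lookup per value instead of a nested loop
list_needed = [(val, dict2[val]) for val in dict1.values() if val in dict2]

# If dict1 contains many duplicate values, computing the overlap once can help,
# but note that this variant drops duplicates:
common = set(dict1.values()) & dict2.keys()
list_needed = [(val, dict2[val]) for val in common]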
{ "domain": "codereview.stackexchange", "id": 1933, "tags": "python" }
$2k$ number assignment
Question: Given $k$ numbers $A_1 \leq A_2 \leq ... \leq A_k$ such that $\sum\limits_{i=1}^k A_i = k(2k + 1)$, is there an assignment of numbers $i_1, i_2, ... , i_{2k}$ which is a permutation of $1, 2, ... , 2k$ such that $i_1 + i_2 \leq A_1\\ i_3 + i_4 \leq A_2\\ \vdots\\ i_{2k-1} + i_{2k} \leq A_k$ ? I cannot find an efficient algorithm that solves this problem. It seems to be a combinatorial problem. I was unable to find a similar NP-Complete problem. Does this problem look like a known NP-Complete problem, or can it be solved with a polynomial algorithm? Answer: This problem is strongly NP-complete. Suppose all the $A_j$ are odd. Note that $\sum_j (i_{2j-1}+i_{2j}) = 1 + 2 + \cdots + 2k = k(2k+1) = \sum_j A_j$, so every inequality must in fact hold with equality. Then we know that since $i_{2j-1} + i_{2j} = A_j$ is odd, one of $i_{2j-1}$ and $i_{2j}$ is even and the other is odd. We can assume that $i_{2j-1}$ is odd and $i_{2j}$ is even. By letting $\pi_j = \frac{1}{2}(i_{2j-1}+1)$ and $\sigma_j = \frac{1}{2}(i_{2j})$, we can show that this is equivalent to asking for two permutations, $\pi$ and $\sigma$, of the numbers $1 \ldots k$ such that $\pi_j + \sigma_j = \frac{1}{2}(A_j+1)$. This problem is known to be NP-complete; see this cstheory.se problem and this paper of W. Yu, H. Hoogeveen, and J. K. Lenstra referenced in the answer.
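Hardness aside, a brute-force checker is handy for building intuition on tiny instances; here is a rough Python sketch (exponential in $k$, so only for very small inputs):

from itertools import permutations

def has_assignment(A):
    # A = [A_1, ..., A_k]; try every permutation of 1..2k (feasible only for small k)
    k = len(A)
    for perm in permutations(range(1, 2 * k + 1)):
        if all(perm[2 * j] + perm[2 * j + 1] <= A[j] for j in range(k)):
            return True
    return False

# Example: A = [3, 7, 11] sums to 21 = k(2k+1) for k = 3
print(has_assignment([3, 7, 11]))   # True: pairs (1,2), (3,4), (5,6)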
{ "domain": "cs.stackexchange", "id": 1508, "tags": "np-complete, decision-problem" }
How was the cama's life expectancy computed?
Question: A cama is a hybrid between a male dromedary camel and a female llama. The first cama was born on January 14, 1998, yet on the Wikipedia page it is said that a cama's life span is 30–40 years. How was that number determined? Is it simply the average between a camel's and a llama's life spans? Answer: Since no source is given for the 30 - 40 years estimate in Wikipedia, we can't find out how the authors of the Wiki page reached that estimate, but 'someone made an educated guess' seems likely. There are a few reasonable ways that one might educatedly guess the longevity of the cama, but there are good reasons to treat those educated guesses with caution. First, as you suggested, you could just estimate the cama's longevity from the life expectancy of camels and llamas. This is likely to fall in the right ball-park, but should be interpreted with caution: ligers (lion - tiger hybrids) are reputed to have high rates of premature death, and the same may well apply to camas. Second, you could plug the animal's body measurements, metabolic-rate measurements, or similar measurements into a model which relates species' attributes to a measure of their longevity. Many different models of lifespan have been built (e.g. here), and if your goal is to get a rough estimate of how long a member of a particular species will live, those models are not a bad way of making a first guess. The 'this is a weird hybrid and might have issues with premature death' caveat still applies, though.
{ "domain": "biology.stackexchange", "id": 4571, "tags": "genetics, zoology, lifespan" }
If a predicate is not computable, what can be said about its negation?
Question: Doing the following exercise: Let $\overline{HALT(x,y)}$ be defined as $\overline {HALT(x,y)} \iff \text{program number y never halts on input x}$ Show that it is not computable. Just want to make sure I have understood the concept correctly. We had in a theorem that HALT(x,y) is not computable which means that we cannot determine whether program number y eventually halts on input x. I realized that $\overline {HALT(x,y)}$ is the negation of HALT(x,y). Is it true (I cannot find it in my book or on the internet) that if a function is (not) computable, its negation is also (not) computable? A function being computable means there is a program p which computes it, we cannot say there is a program Q that computes its negation. Or can we draw such conclusion? Answer: To answer the literal question that you asked, $\mathsf{HALT}$ is a boolean function, i.e. a function whose values are in the set $\{\mathsf{false}, \mathsf{true}\}$. The negation of $\mathsf{HALT}$ is the composition of this function with the negation operator $\mathsf{not}$. Since $\mathsf{not}$ is bijective, any algorithm that computes $\mathsf{HALT}(x,y)$ terminates if and only if $\mathsf{not}(\mathsf{HALT}(x,y))$ terminates (because if $\mathsf{not}(\mathsf{HALT}(x,y))$ terminates then $\mathsf{not}(\mathsf{not}(\mathsf{HALT}(x,y))) = \mathsf{HALT}(x,y)$ terminates). Framing the question in a more intuitive manner, $\mathsf{HALT}$ is a predicate: it is the characteristic function of a set. The predicate is computable if and only if the set is decidable, i.e. recursive. The complement of a recursive set is a recursive set (the reasoning above is one way to prove it), which amounts to saying that the negation of the corresponding predicate is computable iff the predicate is. A related property that is not conserved by negation is semi-decidability, as in recursively enumerable sets. The complement of a recursively enumerable set is not r.e. in general (in fact, an r.e. set has an r.e. complement iff it is recursive). Whereas a recursive set's characteristic function uses two values to distinguish between the elements that are in the set and the ones that aren't, a recursively enumerable set can be given as the domain of a partial recursive function: the r.e. set is the domain on which the function has a value. If the function is given as an algorithm, the domain is the part where the computation of the function halts and returns a value; the complement of the domain is the part where the computation does not halt.
{ "domain": "cs.stackexchange", "id": 588, "tags": "computability, proof-techniques" }
What is "strobe"?
Question: For advancing the address input of the RAM, we provide the signal “strobe”. Can anyone explain this in a bit more detail? Thanks. Answer: While the strobe is a hardware concept, it is sometimes directly referred to in software. A strobe is generally a voltage line in the computer, sometimes on the CPU or another chip on the motherboard, for example. The voltage of a strobe line will need to go high or low at certain times for certain things to happen. How's that for vague? On an older computer like an Apple ][, you could directly read the strobe signal of the keyboard controller from a particular memory location. This would tell you whether the user had pressed a key or not and allow you to either handle it or continue on with other processing.
{ "domain": "cs.stackexchange", "id": 9141, "tags": "cpu" }
Rosserial_arduino: URDF library fail to generate (missing dependencies)
Question: Hi, I am trying to setup the Arduino IDE according to the tutorial -> http://wiki.ros.org/rosserial_arduino/Tutorials/Arduino%20IDE%20Setup However, I ran into this problem at the last step where i run the following command. rosrun rosserial_arduino make_libraries.py . At the terminal, I received the result below. Any insight on what caused this and how to fix this? Thank you. *** Warning, failed to generate libraries for the following packages: *** urdf_tutorial (missing dependency: convex_decomposition) Traceback (most recent call last): File "/home/user/catkin_ws/install/share/rosserial_arduino/make_libraries.py", line 91, in <module> rosserial_generate(rospack, path+"/ros_lib", ROS_TO_EMBEDDED_TYPES) File "/home/user/catkin_ws/install/lib/python2.7/dist-packages/rosserial_client/make_library.py", line 584, in rosserial_generate raise Exception("Failed to generate libraries for: " + str(failed)) Exception: Failed to generate libraries for: ['urdf_tutorial (missing dependency: convex_decomposition)'] Originally posted by Kiroja on ROS Answers with karma: 1 on 2015-10-04 Post score: 0 Answer: under ubuntu: sudo apt-get install ros-jade-convex-decomposition Originally posted by mimooh with karma: 36 on 2015-10-12 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 22738, "tags": "arduino, rosserial" }
How to show $\rho > 0$ when $\rho$ is the minimum attainable value of $y_n(W^{*T}X_n)$, where $W^*$ is the vector that separates the data?
Question: In the book Learning from Data (by Abu-Mostafa), we have the following exercise: Let $\rho$ be the minimum attainable value of $y_n(W^{*T}X_n)$, where $W^*$ is the vector that separates the data. Show $\rho > 0$. Also assume the Perceptron Learning Algorithm is initialized with the 0 vector. How do I prove the above statement? I thought that it could be negative, since a Perceptron function returns either +/-1. I even wonder whether I comprehend this proof question correctly. Answer: We have a vector $w^*$ that correctly separates all of the data points. This means that for every point, $y_n$ and $w^{*T}x_n$ have the same sign: if the correct classification $y_n$ is -1, then $w^{*T}x_n$ is also negative, and if $y_n$ is +1, then $w^{*T}x_n$ is positive. Hence every product $y_n(w^{*T}x_n)$ is strictly positive, and since $\rho$ is the minimum over finitely many strictly positive numbers, $\rho>0$.
{ "domain": "ai.stackexchange", "id": 3036, "tags": "machine-learning, proofs, perceptron" }
Find longest Palindrome substring
Question: I am preparing for technical interview and I found this question on geeksforgeeks, but I did not understand their solution. So, I have written my own solution. I want to optimize this code. #include <iostream> #include <string> bool isPalindrome(std::string& str) { for (int i = 0, j = str.length()-1; i<j; ++i, --j) { if (str[i] != str[j]) { return false; } } return true; } int longestPalindrome(std::string& str, std::string& palindromeStr) { int max = 0, start = 0, end = 0; for (int i = 0; i < str.length(); ++i) { for (int j = i+1; j < str.length(); ++j) { std::string sub = str.substr(i, j); if (isPalindrome(sub) && max < sub.length()) { max = sub.length(); start = i; end = j; } } } palindromeStr = str.substr(start, end); return max; } int main() { std::string str = "forgeekskeegfor"; std::string palindromeStr; std::cout << longestPalindrome(str, palindromeStr) << '\n'; std::cout << palindromeStr << '\n'; } Answer: The code is reasonably clear and obvious, but has some severe inefficiencies. First, let's pick up some simple oversights. Both isPalindrome() and longestPalindrome() ought to have internal linkage (using either the static keyword or the anonymous namespace), and the str arguments should be reference to const: namespace { bool isPalindrome(const std::string& str); int longestPalindrome(const std::string& str, std::string& palindromeStr); } In passing, we can simplify the interface of longestPalindrome(). It doesn't need to return the string and its length; if we simply return the longest palindrome, then obtaining the length is trivial: std::string longestPalindrome(const std::string& str); // main() can now look like: // std::string palindromeStr = longestPalindrome(str); // std::cout << palindromeStr.size() << '\n'; // std::cout << palindromeStr << '\n'; The next oversight is that std::string::length() returns a std::size_t, so don't compare it with (signed) int: for (std::size_t i = 0, j = str.length()-1; i<j; ++i, --j) // ^^^^^^^^^^^ Note that I've left a bug there (that's neatly missed because we always call this with a non-empty string): if str.length() is zero, then j starts at a very large positive value (because the subtraction is unsigned, and wraps). BTW, there's a neat way to test a string for symmetry (at the expense of repeating the initial comparisons), using <algorithm>: static bool isPalindrome(const std::string& str) { return std::equal(str.begin(), str.end(), str.rbegin()); } Now to the matter of efficiency. We're creating new string objects for every possible substring of the input. That's a lot of copying. We could reduce that by using std::string_view. That's only part of the way towards an efficient solution, though. We really need to change the algorithm. My recommendation is to iterate over each character as a possible mid-point of an embedded palindrome, and at each position, determine what's the longest palindrome possible from there (in most cases, it will be 1 or 2 chars). There's no need to consider longer substrings centred on that position once you have a failing case, so that eliminates much of the unnecessary work we're doing here. Hint: for this we can use std::make_reverse_iterator() and std::mismatch(). Finally, the single test we have in main() isn't really enough. At a minimum, we want examples of odd- and even-length palindromes, and also check that we handle the trivial case of empty string as input. 
Update - using iterators I've developed the idea I hinted at in the second section; there's probably a little more scope for reducing duplication: #include <algorithm> #include <iostream> #include <iterator> #include <string> #include <string_view> namespace { template<typename Iter> // requires BidirectionalIterator(Iter) void updateBest(Iter forward_start, Iter forward_end, std::reverse_iterator<Iter> backward_start, std::reverse_iterator<Iter> backward_end, std::string_view& best_so_far) { auto span = std::mismatch(forward_start, forward_end, backward_start, backward_end); auto start = span.second.base(); auto end = span.first; std::string_view candidate{ &*start, static_cast<std::size_t>(std::distance(start, end)) }; if (candidate.size() > best_so_far.size()) { best_so_far = candidate; } } std::string_view longestPalindrome(const std::string& str) { std::string_view best_so_far; // Work out from the middle of the string auto const halfway = (str.size() + 1) / 2; // first, loop from midpont to end of string (but we can stop // when there's no room for a bigger palindrome) for (auto i = str.begin() + halfway; i + best_so_far.length()/2 < str.end(); ++i) { // test for odd-length palindrome updateBest(i, str.end(), std::make_reverse_iterator(i), str.rend(), best_so_far); // test for even-length palindrome updateBest(i + 1, str.end(), std::make_reverse_iterator(i), str.rend(), best_so_far); } // then, loop from midpont to beginning of string (but stop // when there's no room for a bigger palindrome) for (auto i = str.rbegin() + halfway; i + best_so_far.length()/2 < str.rend(); ++i) { // test for odd-length palindrome updateBest(i.base(), str.end(), i, str.rend(), best_so_far); // test for even-length palindrome updateBest(i.base(), str.end(), i + 1, str.rend(), best_so_far); } return best_so_far; } } int main() { for (std::string s: { "", "forgeekskeegfor", "abc abc", "forgeeksskeeg", "geeksskeegfor" }) { auto palindromeStr = longestPalindrome(s); std::cout << "Found palindrome of length " << palindromeStr.size() << " in " << s << ": " << palindromeStr << '\n'; } }
{ "domain": "codereview.stackexchange", "id": 32361, "tags": "c++, algorithm, strings, interview-questions, palindrome" }
Use the commutation relation to show that the conjugate momentum acts on eigenstates of $\hat{\Phi}$ as $ - i \delta / \delta\phi_a(\mathbf{x})$
Question: This is part (b) of Schwartz's Problem 14.3 in his Quantum Field Theory and the Standard Model textbook. Suppose that we have a real scalar field operator $\hat{\Phi}(x^0,\mathbf{x})$ with conjugate momentum field $\hat{\Pi}(x^0,\mathbf{x}) := \partial_0 \hat{\Phi}(x^0,\mathbf{x})$. These operators satisfy the equal-time commutation relations $$ [ \hat{\Phi}(x^0,\mathbf{x}), \hat{\Phi}(x^0,\mathbf{y}) ] = 0 \\ [ \hat{\Pi}(x^0,\mathbf{x}), \hat{\Pi}(x^0,\mathbf{y}) ] = 0 \\ [ \hat{\Phi}(x^0,\mathbf{x}), \hat{\Pi}(x^0,\mathbf{y}) ] = i \delta^{(3)}(\mathbf{x} - \mathbf{y}) $$ At some initial time $t=0$ we can define simultaneous (orthonormal) eigenstates of the Schrodinger-picture operator $\hat{\Phi}(0,\mathbf{x})$ and $\hat{\Pi}(0,\mathbf{x})$ which satisfy $$ \hat{\Phi}(0,\mathbf{x}) | \phi_a \rangle = \phi_a(\mathbf{x}) | \phi_a \rangle \ \ \ \ \mathrm{and} \ \ \ \ \hat{\Pi}(0,\mathbf{x}) | \pi_a \rangle = \pi_a(\mathbf{x}) | \pi_a \rangle $$ Question: The task of Schwartz's problem 14.3(b) here is to use the (third) commutation relation to show that $\hat{\Pi}(0,\mathbf{x})$ acts on eigenstates of $\hat{\Phi}(0,\mathbf{x})$ as the variational derivative $- i \delta/ \delta\phi_{a}(\mathbf{x})$. My Attempt: I think that this means to show that $\langle \phi_a | \hat{\Pi}(0,\mathbf{x}) | \zeta \rangle = - i \dfrac{\delta}{\delta \phi_a(\mathbf{x})} \langle \phi_a | \zeta \rangle$ for any state $|\zeta \rangle$ in the Fock space. So far what I have shown that the third commutation relation implies that $[ \hat{\Phi}(x^0,\mathbf{x}), \hat{\Pi}(x^0,\mathbf{y})^n ] = i n \delta^{(3)}(\mathbf{x} - \mathbf{y})\hat{\Pi}(x^0,\mathbf{y})^{n-1}$ for any $n \geq 1$. From this it follows that for any number $\epsilon$ we have $[ \hat{\Phi}(x^0,\mathbf{x}), e^{ - i \epsilon \hat{\Pi}(x^0,\mathbf{y}) } ] = \epsilon e^{ - i \epsilon \hat{\Pi}(x^0,\mathbf{y}) } \delta^{(3)}(\mathbf{x} - \mathbf{y})$. From here, applying this commutator to a field eigenstate $|\phi_a \rangle$ yields $$ \hat{\Pi}(x^0, \mathbf{x}) \big( e^{ - i \epsilon \hat{\Phi}(x^0,\mathbf{y}) } | \phi_a \rangle \big) = \big( \phi_a(\mathbf{x}) + \epsilon \delta^{(3)}(\mathbf{x} - \mathbf{y} ) \big) \big( e^{ - i \epsilon \hat{\Phi}(x^0,\mathbf{y}) } | \phi_a \rangle \big) $$ This is where I get stuck though. I was hoping to use the above to show that $\big( e^{ - i \epsilon \hat{\Phi}(x^0,\mathbf{y}) } | \phi_a \rangle \big) \propto | \phi_a + \epsilon \rangle$, but the extra $\delta$-function doesn't seem to make this work (even without the $\delta$-function problem, this would still just be a proportionality, up to some phase). From there my idea was to consider the inner product $\langle \phi_a | e^{- i \epsilon \hat{\Pi}(0,\mathbf{x}) } | \zeta \rangle$ and take the limit $\epsilon$ to prove the result. Answer: We consider the action of the conjugate momentum operator $\hat{\pi}(t, \vec{x})$ on a field eigenstate $\left|\phi_t\right\rangle$ of a field operator $\hat{\phi}(x)=\hat{\phi}(t, \vec{x})$. We want to answer why this is equal to the the action of the variation of $\phi$ on the same eigenstate: $$ \hat{\pi}(t, \vec{x})\left|\phi_t\right\rangle=-i \frac{\delta}{\delta \phi(x)}\left|\phi_t\right\rangle $$ The key is to consider the action of $$ \hat{U}=e^{i \lambda \int \mathrm{d}^3 \vec{y} \hat{\pi}(t, \vec{y}) f(\vec{y})}, $$ on an eigenstate of $\hat{\phi}$, for a test function $f$. 
Since $\hat{\pi}$ and $\hat{\phi}$ are canonical conjugates, we can use their canonical commutation relations to see that \begin{equation}\label{expr} e^{-i \lambda \int \mathrm{d}^3 \vec{y} \hat{\pi}(t, \vec{y}) f(\vec{y})} \hat{\phi}(x) e^{i \lambda \int \mathrm{d}^3 \vec{y} \hat{\pi}(t, \vec{y}) f(\vec{y})}=\hat{\phi}(t, \vec{x})+\lambda f(\vec{x})+o(\lambda) \end{equation} where $f$ is a test function. From this follows $$ \hat{\phi}(x)\left(e^{i \lambda \int \mathrm{d}^3 \vec{y} \hat{\pi}(t, \vec{y}) f(\vec{y})}\left|\phi_t\right\rangle\right)=(\phi(x)+\lambda f(\vec{x}))\left(e^{i \lambda \int \mathrm{d}^3 \vec{y} \hat{\pi}(t, \vec{y}) f(\vec{y})}\left|\phi_t\right\rangle\right), $$ which in turn tells us that applying $e^{i \lambda \int \mathrm{d}^3 \vec{y} \hat{\pi}(t, \vec{y}) f(\vec{y})}$ to an eigenstate of $\hat{\phi}$ yields $$ e^{i \lambda \int \mathrm{d}^3 \vec{y} \hat{\pi}(t, \vec{y}) f(\vec{y})}\left|\phi_t\right\rangle=\left|\phi_t+\lambda f\right\rangle $$ Finally, let us consider $$ \left\langle\phi_t\left|e^{i \lambda \int \mathrm{d}^3 \vec{y} \hat{\pi}(t, \vec{y}) f(\vec{y})}\right| \Psi\right\rangle=\Psi\left[\phi_t-\lambda f\right] . $$ Evaluating the right-hand side and left-hand side for small $\lambda$, we find $$ \hat{\pi}(t, \vec{x})\left|\phi_t\right\rangle=-i \frac{\delta}{\delta \phi(x)}\left|\phi_t\right\rangle $$
{ "domain": "physics.stackexchange", "id": 99078, "tags": "quantum-field-theory, momentum, commutator, quantization" }
Polynomial time algorithm - Matching pairs of vertices
Question: Let T be a tree with root r and S a set with an even number of vertices of T. Design a polynomial time algorithm that finds |S|/2 simple and disjoint paths in edges matching pairs of vertices in S. The following figure shows an example in which S consists of the six vertices enclosed in circles and three paths (denoted by three different types of dotted lines) that match those vertices. Hint: Gain intuition by making instances like the one in the figure and observing what happens when you walk the entire border of the tree almost touching it, but not really doing that; then think about how to express the idea of such walk with a recursive algorithm. I tried to analyze the problem but I don't really understand how to express it as a recursive algorithm, could you help me, please? Answer: The idea is to use recursion. Let us suppose that the children of the root $r$ are $v_1,\ldots,v_d$, and let $S_1,\ldots,S_d$ consist of those elements of $S$ in the subtree rooted at $v_1,\ldots,v_d$, respectively. One easy case is when $r \notin S$ and $|S_i|$ is even for all $i$. In this case, we can simply recurse on the subtrees. There are two complications that arise in the general case: Some $|S_i|$ might be odd. The root might belong to $S$. The two complications are related, but let us tackle them one by one. Suppose first that $r \notin S$. If $|S_i|$ is odd for some $i$, then we would like to fix that. The only reasonable way is to add $v_i$ to $S_i$ if $v_i \notin S_i$, and to remove $v_i$ from $S_i$ is $v_i \in S_i$. We now solve the modified problem recursively, and have to somehow derive a solution for the original problem. Let us introduce some notation: $O$ is the set of $i$ such that $|S_i|$ is odd, and $S'_i$ is the set $S_i$ after modification (adding or removing $v_i$). Consider a solution for the new instance. We will derive a solution for the original instance by pairing up the indices in $O$. Suppose that we chose to pair $i,j$, and consider the solutions for $S'_i,S'_j$. There are three cases to consider: $v_i \in S_i$ and $v_j \in S_j$. In this case we add a path from $v_i$ to $v_j$ via $r$. $v_i \notin S_i$ and $v_j \notin S_j$. In this case we add a path from $v_i$ to $v_j$ and "erase" $v_i,v_j$. That is, in the solution to the new problem, $v_i$ is connected to some $w_i$ in its subtree, and $v_j$ is connected to some $w_j$ in its subtree. We connect $w_i$ and $w_j$ via the path $w_i-v_i-r-v_j-w_j$. $v_i \in S_i$ and $v_j \notin S_j$. In this case we add a path from $v_i$ to $v_j$ and "erase" $v_j$. That is, in the solution to the new problem, $v_j$ is connected to some $w_j$ in the subtree. We connect $v_i$ and $w_j$ via the path $v_i-r-v_j-w_j$. When $r \in S$, we only have to modify the above strategy a little. We arbitrarily pair $r$ with some $i \in O$, and then proceed much as above. This time there are only two cases: $v_i \in S_i$. In this case we simply connect $r$ and $v_i$. $v_i \notin S_i$. In this case we add the edge between $r$ and $v_i$ and "erase" $v_i$. That is, in the solution to the new problem, $v_i$ is connected to some $w_i$ in its subtree. We connect $r$ and $w_i$ via the path $r-v_i-w_i$.
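A closely related (though not identical) recursive construction passes at most one still-unmatched vertex of $S$ upward from each subtree and pairs two unmatched vertices as soon as they meet at a common ancestor; since at most one such path ever crosses any given edge, the resulting paths are edge-disjoint. The Python sketch below illustrates that simpler idea, not the exact case analysis above, and the tree representation and names are purely illustrative:

def pair_terminals(children, root, S):
    # children: dict mapping each node to a list of its children
    # S: set of terminal vertices, |S| even
    # returns a list of (u, v, path) triples, where path is a vertex list from u to v
    pairs = []

    def dfs(v):
        # returns (dangling_vertex, path_from_it_up_to_v), or None if all terminals
        # in this subtree have already been paired
        dangling = []
        for c in children.get(v, []):
            got = dfs(c)
            if got is not None:
                vert, path = got
                dangling.append((vert, path + [v]))
        if v in S:
            dangling.append((v, [v]))
        while len(dangling) >= 2:
            (a, pa), (b, pb) = dangling.pop(), dangling.pop()
            pairs.append((a, b, pa + list(reversed(pb))[1:]))
        return dangling[0] if dangling else None

    leftover = dfs(root)
    assert leftover is None, "|S| must be even"
    return pairs

# Example on a small tree rooted at 'r'
children = {'r': ['a', 'b'], 'a': ['c', 'd'], 'b': ['e']}
print(pair_terminals(children, 'r', {'c', 'd', 'e', 'b'}))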
{ "domain": "cs.stackexchange", "id": 12924, "tags": "algorithms, graphs, algorithm-analysis, trees, recursion" }
Does a source emitting visible light also emit infrared, microwave and radio waves?
Question: I have a bulb which is hot enough to emit visible light, and it is obviously hot enough to emit radiation of lower energy than visible light, i.e. radio waves, microwaves, and infrared light. So is the bulb emitting radio waves, microwaves, infrared and visible light at the same time? (I think this is true, but I am not sure, since astronomers see stars at almost all wavelengths, i.e. infrared, UV, gamma rays, visible, etc.) Thanks in advance! Answer: Not all light bulbs are thermal emitters. Fluorescent lights do not use incandescence, hence they would not emit the same spectrum as an incandescent source with an identical maximal light frequency. But in general, yes, objects do concurrently emit a whole spectrum of waves based on their temperature, regardless of whether their light is visible to us.
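For the incandescent (thermal) case, the emitted spectrum is well approximated by Planck's law; here is a rough Python sketch evaluating it at a few wavelengths for an illustrative filament temperature of about 2800 K, showing non-zero emission from the visible out into the far infrared:

import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants
T = 2800.0                                 # illustrative filament temperature in kelvin

def planck(wavelength, T):
    # Spectral radiance B(lambda, T) = 2*h*c^2 / lambda^5 / (exp(h*c/(lambda*k*T)) - 1)
    return (2 * h * c**2 / wavelength**5) / math.expm1(h * c / (wavelength * k * T))

for label, wl in [("green light (500 nm)", 500e-9),
                  ("near infrared (2 um)", 2e-6),
                  ("far infrared (20 um)", 20e-6)]:
    print(label, planck(wl, T))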
{ "domain": "physics.stackexchange", "id": 21061, "tags": "visible-light, electromagnetic-radiation, radio, microwaves, infrared-radiation" }
Generating JSON with Jinja2
Question: I needed quickly to represent a list in json format. I wrote the following front-end in jinja2 with google appengine. { "title": "The Basics - Networking", "description": "Your app fetched this from a remote endpoint!", "movies": [ {% if results %} {% for scored_document in results %} { "title": "{{ scored_document.fields.0.value|safe | replace('"','') }}", "releaseYear": "2014", {% set testing = scored_document.fields.8.value|displayimg %} {% if testing and testing != 'False' %} "img": "{{ testing }}", {% endif %} "url": "https://www.koolbusiness.com/newvi/{{ scored_document.fields.8.value|int|safe }}.html"}{% if not loop.last %} , {% endif %} {% endfor %}{% endif %} ] } There are several unclean errors with wrong variable names because I copied the code from an example but the output is good: { "title": "The Basics - Networking", "description": "Your app fetched this from a remote endpoint!", "movies": [ { "title": "1200 SQFT SITES available for sale Nelamangala for sale--7lacs-30*40 sqft ", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/wEuEmcPNEXfA37EATiWaQ0odK09nAjcaiL5HR_WBCr5RTyU6LeKkfSHH9kQkcchfRToRE4z7UKsfrOtlZnwTxmMI2Xc=s150", "url": "https://www.koolbusiness.com/newvi/6371047123189760.html"} , { "title": "Buy prestige glass top 3 burner gas stove from nbhomeshop", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/_rL9pl62uSq4g9csH1aBK1cniie_JUEAZwUIKQAxlhjIH875O1D4aVVgV6blxGECgsOvSRcL15pdk3-JAmEI_zltcw=s150", "url": "https://www.koolbusiness.com/newvi/6469401068961792.html"} , { "title": "angular js online training,software courses,android course,seo course,Corporate training institutes in Hyderabad.", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/mcih7gjnhgF-R6--Kj45X16U7j2GjpQhI5_JaU3XOS_cIBIrQbhHb14rseIlyqSUZALDj3g-ofXKhm_JgvMnP0ns8Q=s150", "url": "https://www.koolbusiness.com/newvi/6735288535613440.html"} , { "title": "angular js online training,software courses,android course,seo course,Corporate training institutes in Hyderabad.", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/Gpv6aHVU3rIRGv5vvT2SCkPJYB5gtEdhKnn3yBf_Gurmz87uUlN_kEGISoGCFriDm21ZFRFCSqGPH2fsuXACjTbipCk=s150", "url": "https://www.koolbusiness.com/newvi/5561960915533824.html"} , { "title": "Improve Memory and Brain Power with Branole X", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/TX3EGNSULaNOJWsxgvOXP6JHne24nnYFgWEcrHrNLKgb5OKFaH6WoI6853CQDyOgRYDFMF9VCSmLAp1rfE2T4gjduw=s150", "url": "https://www.koolbusiness.com/newvi/6742589980016640.html"} , { "title": "Aqua Grand +water purifier For Best Price in Megashope", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/mXzD6eG-xMr_cGN9GCinVjBCw_MdwfoY0clQ9vS-MEjJWumg4IhdZRSxhxTh17r1MJTalwRikyAX3HOCz_5DUyolFQ=s150", "url": "https://www.koolbusiness.com/newvi/5527058937544704.html"} , { "title": "Dell Honored at 2016 International Design Excellence Awards", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/ftXusdPY_UZlDxRFv3xq-sc_DPDO9pCG5GwhJGeAsC9fUqTi2Grz9RNlWwY4xel5NHoOcAOURLNOQBuCe-gJCvgO=s150", "url": "https://www.koolbusiness.com/newvi/5592851125633024.html"} , { "title": "Dell Honored at 2016 International Design Excellence Awards", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/D5kF6KJ0YXMB-kFYropQdRbNusngKg1DjEvKheo_aPrd1wn1vqRjvRfMC2Up0S8KHDlIsFoSKBuvC8VHj5ktUEqCrZA=s150", "url": "https://www.koolbusiness.com/newvi/5035817758097408.html"} , { "title": "Dell Honored at 2016 International Design Excellence Awards", 
"releaseYear": "2014", "img": "https://lh3.googleusercontent.com/TV2YoFBoQA3ATreDpqibPf_rps7H__MA-ctgZHtOJoIUMjo6sh8IIDmkaSkakjWIwhoR4EpIj-say4JXmXsVgMs-2vs=s150", "url": "https://www.koolbusiness.com/newvi/5556571268448256.html"} , { "title": "Buy gold necklace online at Amethystbyrahulpopli", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/kabOt5lt7EoRVxU-YkAvw8J4uLpoOfjFOXplJjQUYmxR_Jj4liL59fSQub3qNNFxXl-A4P45PglKHWcFrC0V3zh2lQ=s150", "url": "https://www.koolbusiness.com/newvi/6155801079054336.html"} , { "title": "Buy MLM software & Build MLM business online ", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/464yjWWhP16YDMJzzn_JN3GCW0KEO1xaOMAtLDEUzAhfX_xPNxu5qZIWPg8vQ81LpVluLzfNPANKLXYLCQxCG_Hhug=s150", "url": "https://www.koolbusiness.com/newvi/5363524836524032.html"} , { "title": "Dell Honored at 2016 International Design Excellence Awards", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/4nQU7IEgS4-5D_MDkxOzKR_76acQYY1PhWIEaJpCzbc3BgD7V7f1AVDeT6w5suXQOvmfbXobcqV7IMNSU9WfxHiy9rw=s150", "url": "https://www.koolbusiness.com/newvi/5518504436432896.html"} , { "title": "Famous Vastu Consultant Services in Gurgaon", "releaseYear": "2014", "url": "https://www.koolbusiness.com/newvi/6151819375935488.html"} , { "title": "Fully furnished Boys PG in sohna road gurgaon", "releaseYear": "2014", "url": "https://www.koolbusiness.com/newvi/6430480075325440.html"} , { "title": "Dell Honored at 2016 International Design Excellence Awards", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/Hl7GSSCQQJA_KY-33FB6uqVvOqgNPj8AKdXAj0jsJaFSlo2_6KPvJfMY2NKUUu6MkmhIeZm384EwOBT3Y4Y6qfGqz0w=s150", "url": "https://www.koolbusiness.com/newvi/5616690073174016.html"} , { "title": "Renting Leasing Residential Property in Sohna Road Gurgaon", "releaseYear": "2014", "url": "https://www.koolbusiness.com/newvi/6124910868955136.html"} , { "title": "Shivagiri Township Phase I., International SchoolSingle plot - (60 x 40) Rs.2400 Sq Ft", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/5Yf1wdA8FwjUQJCxFUarVxVqQEBVw1X-rfzOccwiWsxHDFdFSs4NgBhYBOnDEjKwlR7boQS6Qe1U6DC4cXCSYD7l6g=s150", "url": "https://www.koolbusiness.com/newvi/6351520859684864.html"} , { "title": "Buy Fully Furnished Commercial Property in Gurgaon", "releaseYear": "2014", "url": "https://www.koolbusiness.com/newvi/5025919469092864.html"} , { "title": "Dell Honored at 2016 International Design Excellence Awards", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/t1KEzWjVqxYFQAiLkxmOl1xveiyUeQ8MhrAuH_KYodsUwtidNha7T0LyzdzIkD70FKo2mGHyHuPwWd35BYbGpIv_Pso=s150", "url": "https://www.koolbusiness.com/newvi/4912444554084352.html"} , { "title": "Aqua Grand +water purifier For Best Price in Megashope", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/H9QBDOXFyN6jvLF-7qg9Mm6bfvgHccfWt6VGs53oq-6Lynh-7EFAycGC9ysanmqZww4OPfNo3gFhfjjgT6WcH_YexKA=s150", "url": "https://www.koolbusiness.com/newvi/6186115495100416.html"} , { "title": "ANSYS COURSES IN CHENNAI | BEST ANSYS TRAINING - 9884433249", "releaseYear": "2014", "url": "https://www.koolbusiness.com/newvi/5609388628770816.html"} , { "title": "AutoCAD Courses for Civil Engineers in Chennai | Civil AutoCAD Training", "releaseYear": "2014", "url": "https://www.koolbusiness.com/newvi/6120900711677952.html"} , { "title": "AutoCAD Training Institute in Chennai | Best AutoCAD Institute", "releaseYear": "2014", "url": "https://www.koolbusiness.com/newvi/6319819437637632.html"} , { "title": "Aqua Grand +water purifier 
For Best Price in Megashope", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/dBEZi48stLzoSa935dSwrG6XG9rHWauVibD6hL-qNMyPSyl9NQ3r-RDL31__Y1z5x0J9ypv2406WA5EWlQN15_yQcw=s150", "url": "https://www.koolbusiness.com/newvi/5028809445212160.html"} , { "title": "SEBS PROVIDING HOME BASED WORKS ..", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/oz6lXftn2ZAo72l94ZIZDt8k5zma8Qi_Sj590HD0arbqlZYw5Q9TbHGeDMaOw4DPC4wXj3sgV354Uq_q1ceOFiI9Ag=s150", "url": "https://www.koolbusiness.com/newvi/5042552803688448.html"} , { "title": "India's most successful online examination software - Tutoreal", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/OU9YwSGw0ipdKrmhbrvj4zgQf1zB_q4vHQExrkq3Qhlz1Em29QkpFLEIQGuRtW5R34eU06vfEMXMBMzlgIb3LmpgYw=s150", "url": "https://www.koolbusiness.com/newvi/6090008890966016.html"} , { "title": "DO THE PART TIME EARN REGULARLY", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/eOPNfb9X0Bww0hWgmkDRWCMDVF1UbD6FxC7dUa7vObSakSe-0ry6_1gzfBqlhqohbePu-G3YQCuNEjcG9O2oiJA_=s150", "url": "https://www.koolbusiness.com/newvi/6449927687241728.html"} , { "title": "WHO WANT EXTRA INCOME DO PART TIME JOB", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/rAk2Tigz2eq9ObmQial0dGqhunIDmoOslLo-mWjtoY_XCX8Zdt3WLGLDH4N73dTstCiuGKGRhPzmTYtFOeuP2jXkyn4=s150", "url": "https://www.koolbusiness.com/newvi/5756869484216320.html"} , { "title": "FEELING BOAR THEN FILLUP THE FORMS", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/qg58D3wCeFW6dGqlbIwocJ21HT3-1jv1Yu8mKBPLLAgNo0w9Zuo1ddRvbVnXkCrHG5takW6Qu90JG44axHByiF05=s150", "url": "https://www.koolbusiness.com/newvi/6081454389854208.html"} , { "title": "ONCE REGISTER IN SEBS TAKE ONE YEAR INCOME", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/LFUJB7ftdt01IcURnDPp1sp11luU_I16lETvCZJy86ndhEPw3t8YUBuf8Mwylktj3RucJX62vlTrMJZME23cOfQMJg=s150", "url": "https://www.koolbusiness.com/newvi/5029901172211712.html"} , { "title": "Pay Per Click (PPC) Services Company Delhi & Across India", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/nkfVvC8BWZaOCQKKQxhWngEXX1zUTQVhgfWLQnGoFIn3JD7xg5CmdrZ0JFmC8PxUm10l-hZ1dBRGYwvMjfcO_QllRQ=s150", "url": "https://www.koolbusiness.com/newvi/5046438675349504.html"} , { "title": "SEBS PROVIDING WORKS FOR STUDENTS", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/pS_UVwB-a_GUMBqEHOlyX4oubrbCrDOBz41KW_DWkMGSyFcbjAtVr9hbO_rg4YhE-XN5wTAVF7yCLBgLWL5J-vJWS2o=s150", "url": "https://www.koolbusiness.com/newvi/6334093358792704.html"} , { "title": "PART TIME HOME BASED WORKS ARE AVAILABLE ", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/DxEGyB6jwYQNIC95qcTa1Ty_mR8UOt6ac9hI3SiSxALUe-2pxsZpZMyu2E4SQKYQrDREP1T-tPGie3C2-fmwKqSM5Gw=s150", "url": "https://www.koolbusiness.com/newvi/6360722558681088.html"} , { "title": "JUST FILLUP THE FORMS WE WILL PROVIDE MONEY", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/4afOgWiVS9Mp1CqaBWRKOJciIeKtlx6BcZtje6olRNTeTrIXGLYsa0_UURL9MNjurD1txcJJFmKaWmQVQRB4G7kl3q8=s150", "url": "https://www.koolbusiness.com/newvi/5335997048946688.html"} , { "title": "NOW U GET WEEKLY PAYMENT THROUGH LAPTAP", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/l8lSMOQAlwFjaFfGJYlHHxZauN3FNyKoKWCR_15JEqtZFdU0Fl-acBmNovThWdxaCvByeNxYTdEMs6RSsVlYqUNTQwU=s150", "url": "https://www.koolbusiness.com/newvi/5271450132938752.html"} , { "title": "HURRY UP VACANCIES ARE START", "releaseYear": "2014", "img": 
"https://lh3.googleusercontent.com/rE5RiFhsinGtWeBEnXhB-txkj5A5SBPzaQ1gOdjozreykKxo4D367WXvYItUV5tfzuq3IoregaRLahk3OO0OJqXw6rU=s150", "url": "https://www.koolbusiness.com/newvi/5208193451950080.html"} , { "title": "web designing company in visakhapatnam ", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/7HmV0NElSz8vHAtJmN-FUJF0PqzYtey-Yu9KzAxOWExsNsmZYmYaohfl8UbsRrBj0VfP8ga0VfX9vBqycPTiVtMG9sw=s150", "url": "https://www.koolbusiness.com/newvi/4964108984123392.html"} , { "title": "CALORIZED OXYGEN LANCING PIPES", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/AZTmI7wTK9i7IdeGLNefH312nJPjgSeMg7YLXg9ikmXc1_IpmJyJWf6vR0BKhYplYXwHqY9Y0TmZj9tOOmlzl8XWjw=s150", "url": "https://www.koolbusiness.com/newvi/5063230118428672.html"} , { "title": "Best restaurants in Delhi , Satya Niketan", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/FwCr3e3EaikXod8Wp6PELnISvU92_ArD79YqRt_q3yb8YjZmwXtqS5B884sCMJaPKi_ma3m7uNuVbbrAUxVgNXbaYg=s150", "url": "https://www.koolbusiness.com/newvi/5225620952842240.html"} , { "title": "Artificial Lawn Grass Manufacturer and Supplier in delhi- Dspaze", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/hDtSO48SIFa_IEojY7V1rc8f3JHjh7iH6GmrLhRH2SNfX6vgrYUk8IMFuuoJDTajyIcxGBoxccprIvZojuEkqULX_g=s150", "url": "https://www.koolbusiness.com/newvi/6114783336071168.html"} , { "title": "Big data training in bangalore", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/vaV5q_cW9G6V850gtf4j9OMxMFBn2wXi4b-h4O0sJib_-fNpChVi2rnN2qJAEsMkXKsNOhpW-Fty0hchLptr9wSNR48=s150", "url": "https://www.koolbusiness.com/newvi/5313711805825024.html"} , { "title": "Contact Verdure - Best Swimming Pool Vendor in Delhi ", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/JCbNplL03XfL-r7DAiNdmN9YUM0KbhpCxtxijieq2xe57C2lAznwCZWozD6277nFM1yuUkVjKa-cPeXpGfp5OhvebQ=s150", "url": "https://www.koolbusiness.com/newvi/6364646816612352.html"} , { "title": "Aqua Grand +water purifier For Best Price in Megashope", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/Qw_cEOu8ECK4RBXlRhCCc_Pa0ks0tI8EOp-8xFsCpKx4RJYY1_7XXDc3HqV1c2PfN_hHrie8AjbkwqNzMsreA_PRtA=s150", "url": "https://www.koolbusiness.com/newvi/5876661759246336.html"} , { "title": "Scanner Tenders, Tenders By Scanner, Tenders For Scanner", "releaseYear": "2014", "url": "https://www.koolbusiness.com/newvi/5238746909769728.html"} , { "title": "Dr. 
Arun Kumar Singh - Best Endocrinologist in Faridabad, NCR", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/8mdHMf2xTufAPasbX6ButMBRTBQf4WvnU6bUAquxLcKfLoHykrQ88LH5niPoIPqdKiicZXnELCS90nVZDa_jxeCX=s150", "url": "https://www.koolbusiness.com/newvi/5286396216475648.html"} , { "title": "Wholesale Women Clothing Suppliers", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/lD5xaOsaBAT4xchvXzQNHm81wnNdTG-M-w8iq2djhRRp4CZH8NyOxjz9XWlxUwB2N7zf7onYLVlsN0f9eSUD7ZT85A=s150", "url": "https://www.koolbusiness.com/newvi/5857946976124928.html"} , { "title": "Earn Money Online RS 24000/Month F/P/T", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/AdK3MLruGvx66rRO7567EZivvSOI8I-6RDrS_Bo59Az6-PhRHzu1YW6ECYDzdcsT374W-xCEPIy2gJx-xuSV1LtTAQ=s150", "url": "https://www.koolbusiness.com/newvi/5053740119752704.html"} , { "title": "Job problem solution in Mumbai", "releaseYear": "2014", "url": "https://www.koolbusiness.com/newvi/4977441099481088.html"} , { "title": "Best HR Policy Classes in Delhi NCR", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/ONkJrdtozkk6E1_0ZFjNubFCJUhr2tGEd0HaC1U8ZwCdVemPOQfeDWPPih6QJogRwPQo3PfVMHvFg9G6wnNJgQeXpQ=s150", "url": "https://www.koolbusiness.com/newvi/5224881413160960.html"} , { "title": "A Dedicated Web Portal/Website Of Solan District.", "releaseYear": "2014", "img": "https://lh3.googleusercontent.com/lLSF35XwnQ7ld4cBuTSs3lZdeMSG8PyKGb6qTHFcqBlGH0NxOxI9K8MqjT9vZ3vYeT0Lr9N3akcEWfvVaGFpzl9quQ=s150", "url": "https://www.koolbusiness.com/newvi/6343054942273536.html"} ] } Answer: Instead of having the .json output being served from a template file, make use of the json module function of webapp2_extras. The current response headers for the /in.json page has: Content-Type: text/html; charset=utf-8 whereas, it should be ideally: application/json. Why you should avoid creating a json in text is because if you later want to add/remove a key, or iterate over a creating a certain key, you might run into syntax issues. Debugging those syntax errors would be a pain since you run 3 level deep in the existing code already. Instead, create a python dict in the handler code itself, and send a response as follows: from webapp2_extras import json ... self.response.headers['Content-Type'] = 'application/json' self.response.write(json.encode(your_python_dict)) This way, a client will not receive a malformed JSON response ever, and the server would simply issue a 5xx error code.
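A rough sketch of what that could look like in a handler is below. The get_results() helper is a hypothetical stand-in for whatever search query feeds the template above, the field indices mirror the template's fields.0.value and fields.8.value, and the img handling is omitted for brevity:

import webapp2
from webapp2_extras import json

class MoviesJsonHandler(webapp2.RequestHandler):
    def get(self):
        movies = []
        for scored_document in get_results():  # hypothetical: the same search results the template iterates over
            movie = {
                "title": scored_document.fields[0].value,
                "releaseYear": "2014",
                "url": "https://www.koolbusiness.com/newvi/%d.html" % int(scored_document.fields[8].value),
            }
            movies.append(movie)
        payload = {
            "title": "The Basics - Networking",
            "description": "Your app fetched this from a remote endpoint!",
            "movies": movies,
        }
        self.response.headers['Content-Type'] = 'application/json'
        self.response.write(json.encode(payload))

Quoting and escaping issues (like the replace('"','') workaround in the template) disappear, because the encoder handles them.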
{ "domain": "codereview.stackexchange", "id": 29843, "tags": "python, python-2.x, google-app-engine" }
How do I include a statistical significance index in a bar chart in Excel?
Question: I want to have a significance star above each bar in a bar chart, but I don't know how to present that marker. Can anyone help me resolve this problem? Answer: Quite seriously: Friends Don't Let Friends Use Excel. Learn to use Matlab/Octave, or Python, or R, or just plain Gnuplot. No matter what tool you use, since "significance" is a separate variable, you need to handle your collection of "*", "**", etc. as its own series and plot the desired symbols at the matching category locations.
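For example, in Python with matplotlib (a rough sketch with made-up group names, heights and significance labels), you can place a text marker just above each bar:

import matplotlib.pyplot as plt

categories = ["A", "B", "C"]      # hypothetical groups
means = [3.1, 4.6, 2.2]           # hypothetical bar heights
errors = [0.4, 0.5, 0.3]          # hypothetical error bars
stars = ["*", "**", ""]           # significance label per bar ("" = not significant)

fig, ax = plt.subplots()
bars = ax.bar(categories, means, yerr=errors, capsize=4)

for bar, err, label in zip(bars, errors, stars):
    if label:
        x = bar.get_x() + bar.get_width() / 2   # centre of the bar
        y = bar.get_height() + err              # just above the error bar
        ax.text(x, y + 0.05, label, ha="center", va="bottom")

ax.set_ylabel("Measured value")
plt.show()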
{ "domain": "engineering.stackexchange", "id": 1922, "tags": "statistics" }
Knapsack dynamic programming complexity issue for $W=1$, is it $O(n)$?
Question: The 0-1 knapsack problem is given by $$\begin{align}&\text{maximize } \sum_{i=1}^n v_ix_i,\tag{P1}\\& \text{subject to } \sum_{i=1}^n w_i x_i \leq W,\\&\text{and } x_i \in \{0,1\}.\end{align}$$ Running the dynamic programming algorithm for this problem would give an optimal solution in $O(nW)$. If I define $p_i=w_i/W$, I get the following knapsack problem: $$\begin{align}&\text{maximize } \sum_{i=1}^n v_ix_i,\tag{P2}\\& \text{subject to } \sum_{i=1}^n p_i x_i \leq 1,\\&\text{and } x_i \in \{0,1\}.\end{align}$$ Running the dynamic programming algorithm for this problem would give an optimal solution in $O(n)$. This cannot be true, can it? Answer: The running time is $O(nW)$ if all weights are integers. This is because the dynamic programming table is indexed by possible weights of subsets. If all weights are integers, then since we are only interested in subsets whose sum of weights is at most $W$, there are only $W+1$ weights of subsets that could possibly interest us. This is where the $W$ factor in the running time is coming from. Dividing all weights by $W$ accomplishes exactly nothing. You haven't changed the problem, so the required time for solving it remains exactly the same.
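For reference, here is a rough Python sketch of the standard $O(nW)$ dynamic program; the table is indexed by the integer capacities $0, 1, \dots, W$, which is exactly what stops working once the weights are rescaled to non-integers:

def knapsack(values, weights, W):
    # 0-1 knapsack over integer capacities 0..W; runs in O(n * W)
    dp = [0] * (W + 1)
    for v, w in zip(values, weights):
        for c in range(W, w - 1, -1):   # go downward so each item is used at most once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[W]

print(knapsack([60, 100, 120], [10, 20, 30], 50))   # 220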
{ "domain": "cs.stackexchange", "id": 15903, "tags": "time-complexity, knapsack-problems" }
How many genes per 23 chromosomes in human genome?
Question: It is estimated that humans have between 20,000 and 25,000 genes. Every person has two copies of each gene, one inherited from each parent. There are 46 chromosomes, half from the mother, half from the father. Does this mean that there are a) approximately 20-25,000 genes on the 23 chromosomes inherited from the mother, and 20-25,000 genes on the 23 chromosomes inherited from the father, or b) approximately 10-12,500 genes on the 23 chromosomes inherited from the mother, and 10-12,500 genes on the 23 chromosomes inherited from the father? Answer: Answer A is correct! The human genome has around 20,000 genes. In both the diploid and the haploid chromosome count, the gene count stays the same; grown humans have 2 alleles per gene in most of their cells, which is considered 'diploid'. Gametes are haploid. Willyard, C., Expanded human gene tally reignites debate. Nature, 2018. 558(7710): p. 354-355.
{ "domain": "biology.stackexchange", "id": 11425, "tags": "human-genetics, chromosome, human-genome" }
Green functions in Quantum Mechanics
Question: How are Green's functions and quantum mechanics related? Can they be used to solve the Schrödinger equation of a particle subject to some potential that is not a Dirac delta? And do the properties of some Green's functions that are symmetrical, i.e. $ G(x|\xi) = G(\xi|x)^{\ast} $, have some relation to the property of the inner product $ \langle \alpha \vert \beta \rangle = \langle \beta \vert \alpha \rangle^{\ast} $? Answer: The Schrödinger equation is a linear partial differential equation, so sure, you can use the usual formalism of Green's functions to solve it. First let's recall how the stuff works. Suppose $L$ is the linear operator and $D$ are the boundary conditions and we want to solve equations $Lu = f$ and $Du = 0$ for $u$. Using the identity property of the convolution $g*\delta = g$ one is motivated to solve the simpler equation $LG = \delta$ and then one finds $u = G*f$ because $$L(G*f) = (LG)*f = \delta*f = f$$. Now, for the time-independent Schrödinger equation the following should be useful. If the operator (understood also with the given boundary conditions) also has a complete basis of eigenvectors $\left\{\left|\phi_n\right>\right\}$ corresponding to eigenvalues $\left\{\lambda_n\right\}$ then the Green's function can easily be seen to be $$G(x, x') = \sum_n {\phi_n(x)^* \phi_n(x') \over \lambda_n}$$ (just apply the operator $L$ to it and use that $L \left|\phi_n\right> = \lambda_n \left|\phi_n\right>$). So again we can see that $G$ is in a sense an inverse of $L$ (and indeed it is often written simply as $L^{-1}$). Now, it turns out there is a deeper connection between Green's functions and quantum mechanics via Feynman's path integral if we pass to the time dependent Schrödinger equation. I am not going to derive all the stuff here but suffice it to say that Green's function takes on the meaning of a propagator of the particle. Namely, the probability amplitude that the particle gets from the event (t, x) to the event (t', x') is a Green's function of the time-dependent Schrödinger equation $G(x,t;x',t') = \left<x\right| U(t,t') \left|x'\right>$. So yes, the fact that the Green's function is symmetric is precisely because it can be interpreted as an inner product. This stuff generalizes further to quantum field theory and Green's functions are among the basic objects of study there.
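As a concrete, standard example (a textbook result, not derived in the answer above), the free-particle propagator in one dimension is $$ G(x,t;x',t') = \left<x\right| U(t,t') \left|x'\right> = \sqrt{\frac{m}{2\pi i\hbar\,(t-t')}}\,\exp\!\left[\frac{i m (x-x')^2}{2\hbar\,(t-t')}\right], \qquad t>t', $$ which makes both the symmetry under exchanging $x$ and $x'$ and its reading as an inner product (the amplitude to go from $x'$ at $t'$ to $x$ at $t$) explicit.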
{ "domain": "physics.stackexchange", "id": 428, "tags": "quantum-mechanics" }
Is there a freely usable map of Earth cave systems and/or density?
Question: I am considering creating some sort of model of the Earth using real world collected data. Things like worldwide height maps certainly exist, as do many maps of minerals, rainfall, etc. Are there worldwide data available on underground cave systems? By density, I mean the likelihood of there being a cave in a certain place, given coordinates. Hence it does not have to be exact (although this would be better), but a mathematical model or approximation would be useful if the former does not exist. Answer: You might like to check World Cave List which has a pretty extensive list of caves, their depths, and lengths. For example: This list has been automatically produced from our World Caves Database. Total depth and length of all caves currently collected in the database: Number of caves = 2424 Caves deeper than 300m = 1075 Caves longer than 3kms = 1628 Cumulated depth = 648 932 m Cumulated length = 17 930 192 m
{ "domain": "earthscience.stackexchange", "id": 478, "tags": "geography, open-data, database" }
AC compressor adds less heat than is removed during condensation?
Question: The compression phase in an air conditioner is approximately adiabatic, so that heat is added to the refrigerant. It then naturally transfers heat to the outside. My question is: since heat was added both during the evaporation phase (from inside the house) and during compression, why should we expect that more heat is lost during condensation than was added by compression? Secondarily, it seems that in refrigerators, the compression phase turns the gas to a liquid. I assume that air conditioners generally do the same, but then calling the next phase "condensation" doesn't make a lot of sense (since it's already a liquid). What gives? Edit: http://physics.bu.edu/~duffy/ns549_fall07_notes06/cooling_fridge.html: The gas is transferred to a compressor, where most of the work is done. The gas is compressed adiabatically, heating it and turning it back to a liquid. The liquid passes through cooling coils on the outside of the fridge. Because the liquid is now warmer than room temperature, heat is transferred naturally to the room. This is an isobaric compression process. So it seems that sometimes condensation happens before the "condenser" stage (4). Answer: Question 1: Work is done on the refrigerant to increase its pressure enough to condense against ambient air. This work does have to exit the system as heat, but the amount of heat added by compression work is not going to be as much as the amount of heat absorbed by boiling refrigerant inside the evaporator, as this would lead to a VERY low efficiency for the refrigeration system. This can be verified by looking at the coefficient of performance for the particular system that you are dealing with, as shown here. Question 2: the compressor increases the pressure of the refrigerant enough to allow it to condense at ambient conditions, but compression alone does NOT condense the refrigerant. The high pressure and high temperature refrigerant turns into liquid at the condenser as it releases heat to the environment.
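In symbols, the steady-state energy balance over the refrigerant loop (writing $Q_{evap}$ for the heat absorbed in the evaporator, $W_{comp}$ for the compressor work and $Q_{cond}$ for the heat rejected in the condenser) is $$ Q_{cond} = Q_{evap} + W_{comp}, \qquad \mathrm{COP} = \frac{Q_{evap}}{W_{comp}}, $$ so the condenser always rejects the evaporator heat plus the compressor work; whenever the COP exceeds 1 (typical air conditioners run at roughly 2-4), the heat picked up indoors is already several times the compression work.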
{ "domain": "physics.stackexchange", "id": 90956, "tags": "thermodynamics" }
Why does light bend towards the normal when passing from a rarer to a denser medium?
Question: Whenever light rays are entering a denser medium they bend towards the normal. Why do rays choose the path towards the normal? Why cannot they choose the path away form the normal? Answer: See in Wikipedia the topic "Phase velocity" http://en.wikipedia.org/wiki/Phase_velocity and also Snellius law. Let $M_1$ be a less refractive medium, and $M_2$ a more refractive one. Let $n_1$ respectively $n_2$ be their refraction indexes. It is known that in any medium the light wave has a phase velocity specific to that medium, and this velocity and the refraction index are interdependent. Note, it's not the velocity of the light that changes from a medium to another medium. The velocity of light is c. It's the phase velocity that changes, and here below are some explanations. The light propagation is described, in the simplest case, by $Asin(\vec k \vec r - 2\pi\nu t)$. The quantity $\phi = \vec k \vec r - 2\pi\nu t \ $ is called phase, and a surface on which at a given time $t$ the phase is constant, is called wave-front. The phase-velocity is the ratio of the distance between two neighbor wave-fronts of the same phase $\phi$, and the time $T$ needed for the light to travel a distance equal to the distance $d$ between two neighbor wave-fronts. This time is given by $$ T = 1/\nu . \tag{i}$$ Note that the light frequency $\nu$ doesn't change from medium to medium, therefore $T$ doesn't change. For illustration of the situation imagine the following scenario: Consider a front wave of phase $\phi$ (red line) touching at the time $t_0$ the point $A$ on the separation surface between the two media, then at a time $t_1$ the point $B$ of the front wave touches a point $B_1$ of the separation surface, and at a time $t_2$ the point $C$ of the front wave touches the point $C_2$ of the separation surface. Let the points $A, B, C$ be chosen so as $t_1 = t_0 + T$, and $t_2 = t_1 + T$. As the velocity of the wave is smaller in $M_2$, $$\frac {v_2}{v_1} = \frac {n_1}{n_2} \tag{ii}$$ the distance that the wave can travel during the time $T$ is smaller, $$ d_1 = v_1 T, \ \ \ d_2 = v_2 T ==> d_2 < d_1 \tag{iii}$$ (hence the distance between two consecutive wave-fronts is smaller.) In consequence, the angle $\theta$ of the wave-front with the separation surface is smaller in $M_2$ than in $M_1$. Finally, note that the angle between the front wave in $M_2$ and the separation surface is exactly equal to the refraction angle (i.e. between the normal to the separation surface and the normal to the wave-front).
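Written out, the construction above gives Snell's law: during one period $T$, the point where a wave-front meets the separation surface advances the same distance $L$ along the surface on both sides, while the wave-front spacings satisfy $\sin\theta_1 = d_1/L$ and $\sin\theta_2 = d_2/L$, so $$ \frac{\sin\theta_1}{\sin\theta_2} = \frac{d_1}{d_2} = \frac{v_1}{v_2} = \frac{n_2}{n_1} \quad\Longrightarrow\quad n_1\sin\theta_1 = n_2\sin\theta_2, $$ and since $v_2 < v_1$ in the denser medium, $\sin\theta_2 < \sin\theta_1$: the refracted ray lies closer to the normal.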
{ "domain": "physics.stackexchange", "id": 25295, "tags": "optics, refraction" }
Benchmarking inline vs normal functions in C++
Question: I am currently learning C++ and have learnt about inline functions and how they can give a performance benefit. How could this be improved? #include <iostream> #include <chrono> #include <ctime> long fibonacci(unsigned n) { if (n < 2) return n; return fibonacci(n - 1) + fibonacci(n - 2); } inline long fib(unsigned n) { if (n < 2) return n; return fibonacci(n - 1) + fibonacci(n - 2); } void timedFibonacci(unsigned n) { // Time vars std::chrono::time_point<std::chrono::system_clock> start, end; // Start time. start = std::chrono::system_clock::now(); //std::cout << "Functional Fibonacci of " << n << " = " << fibonacci(n) << '\n'; fibonacci(n); // End time. end = std::chrono::system_clock::now(); // Time elapsed. std::chrono::duration<double> functionalTimeElapsed = end - start; // Start time. start = std::chrono::system_clock::now(); //std::cout << "Inline Fibonacci of " << n << " = " << fib(n) << '\n'; fib(n); // End time. end = std::chrono::system_clock::now(); // Time elapsed. std::chrono::duration<double> inlineTimeElapsed = end - start; if (functionalTimeElapsed > inlineTimeElapsed) { std::cout << "n = " << n << "\tFunctional method was faster by " << ((functionalTimeElapsed - inlineTimeElapsed) / inlineTimeElapsed) * 100 << "%\n"; } else if (functionalTimeElapsed < inlineTimeElapsed) { std::cout << "n = " << n << "\tInline method was faster by " << ((inlineTimeElapsed - functionalTimeElapsed) / functionalTimeElapsed) * 100 << "%\n"; } else { std::cout << "n = " << n << "\tFunctional method and inline method took the same time.\n"; } } int main() { for (int i = 5; i <=40 ; i++) { timedFibonacci(i); } cin.get(); } Answer: Inlining First of all, your inline function calls the non-inline function, so if the compiler is really paying attention to the inline specification, it's only affecting a single "layer" of invocation. inline long fib(unsigned n) { if (n < 2) return n; return fibonacci(n - 1) + fibonacci(n - 2); } To stand a chance of getting some good out of the inline specification, you probably want to have this call itself instead of the other function: inline long fib(unsigned n) { if (n < 2) return n; return fib(n - 1) + fib(n - 2); } In general, recursive functions are a problem for inlining in any case, so many compilers completely disable inline code generation for recursive functions. Add in the fact that most reasonably current compilers basically ignore the inline specifier anyway, and you get a high likelihood that you won't see any difference between these functions at all. Timing I'd do the timing somewhat differently. The single responsibility principle applies here, just like everything else. That means the timer should deal only with timing. I usually use something on this general order: template <typename F, typename ...Args> auto timer(F f, std::string const &label, Args && ...args) { using namespace std::chrono; auto start = high_resolution_clock::now(); auto holder = f(std::forward<Args>(args)...); auto stop = high_resolution_clock::now(); std::cout << label << " time: " << duration_cast<microseconds>(stop - start).count() << "\n"; return holder; } Arguably this already does more than it should (printing out results in addition to the actual timing) but at least it's quite a bit closer to a single responsibility. 
With that, we invoke each Fibonacci generator separately, something on this order: auto foo = timer(fib, "inline", max); The result will then look something like this: inline time: 1062 Just a minor note: this is a somewhat more general timing function that most people usually need. In particular, it doesn't require the function being timed to take a specific number of arguments--the number and types of arguments you pass after the label have to match those needed by the function you're timing, but as far as the timer itself cares, it's essentially wide open. Efficiency Ignoring, for the moment, the issue of inlining (which is probably pretty much a red herring anyway), I'd consider other ways of computing Fibonacci numbers. In the absence of memoization, a recursive function is horribly inefficient. An iterative function is many times faster. That can be written with a for loop, something on this general order: unsigned long long fib(unsigned limit) { unsigned long long a = 1; unsigned long long b = 1; unsigned long long c; if (limit < 2) return 1; for (auto i=1ULL; i<limit; i++) { c = a + b; a = b; b = c; } return c; } This is somewhat longer, but the difference in speed is...fairly substantial. Computing fib(43) with each, I get timings like this: out of line time: 2807407170 inline time: 2794785991 iterative time: 321 Sum (ignore): 1568397607 I had to switch to showing the time in nanoseconds to get a non-zero time for the iterative solution. Even so, for fib(43) and highest resolution-timing I can get, it still shows a time of 0 about one run out of every 3 or 4. The recursive versions are approximately 7 orders of magnitude slower (and although inline code generation did seem to help a tiny bit, it's still minuscule compared to the improvement from a better algorithm. Oh, and for what it's worth, it looks like the inline code generation really did make a difference. It is pretty small, but at least in my testing, the inline version does finish just a tiny bit faster every time. Final Code For what it's worth, here's the code I ran to get the timing comparison: #include <iostream> #include <string> #include <chrono> template <typename F, typename ...Args> auto timer(F f, std::string const &label, Args && ...args) { using namespace std::chrono; auto start = high_resolution_clock::now(); auto holder = f(std::forward<Args>(args)...); auto stop = high_resolution_clock::now(); std::cout << label << " time: " << duration_cast<nanoseconds>(stop - start).count() << "\n"; return holder; } long fibonacci(unsigned n) { if (n < 2) return n; return fibonacci(n - 1) + fibonacci(n - 2); } inline long fib(unsigned n) { if (n < 2) return n; return fib(n - 1) + fib(n - 2); } unsigned long long iter_fib(unsigned limit) { unsigned long long a = 1; unsigned long long b = 1; unsigned long long c; if (limit < 2) return 1; for (auto i = 1ULL; i < limit; i++) { c = a + b; a = b; b = c; } return c; } int main() { static const int max = 43; auto a = timer(fibonacci, "out of line", max); auto b = timer(fib, " inline", max); auto c = timer(iter_fib, " iterative", max); std::cout << "Sum (ignore): " << a + b + c; }
{ "domain": "codereview.stackexchange", "id": 22354, "tags": "c++, c++11, fibonacci-sequence, benchmarking" }
What are the advantages of $\frac{dp}{dt}$ over $\frac{dp}{ds}$ as the definition of force?
Question: Suppose we want to have a formula which measures how hard we push or pull. Now, momentum is a measure of quantity of motion. And, our external push is something which changes it, so our formula must have something to do with change in momentum. Now, if we push a box continuously with large effort, then its momentum changes by a large amount in a short time interval. And, if we push the same box with a feeble effort, then to produce the same change in momentum, we have to push it for a longer time interval. So, the formula $\frac{dp}{dt}$ should make sense as a measure of force. But, we can also think of the problem like this: If we push a box with large effort even through a small distance, then its momentum changes by a large amount. But if we push the same box with a feeble effort continuously, then we have to push it through a larger distance to produce the same change in momentum. So, $\frac{dp}{ds}$ can also be a measure of how hard we push or pull. Then, what are the advantages of choosing $\frac{dp}{dt}$ as force? Answer: The reason for defining force (like many other quantities) in the way that we do is that the laws of physics take simple mathematical forms if we use these quantities. In the case of force, if we use $F=\frac{dp}{dt}$, then Hooke's law, Newton's third law, Newton's law of gravitation, the Lorentz force law (and no doubt many other laws) take simple forms – which they would not do if we used your proposed alternative.
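A one-line comparison (added for illustration, not part of the original answer) makes the contrast concrete: along a trajectory with speed $v = ds/dt$,
$$\frac{dp}{ds}=\frac{dp}{dt}\,\frac{dt}{ds}=\frac{1}{v}\,\frac{dp}{dt}=\frac{F}{v},$$
so a "force" defined as $dp/ds$ would attach a factor of $1/v$ to every familiar force law and blow up as the body comes to rest, which is why the $dp/dt$ definition yields the simpler laws.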
{ "domain": "physics.stackexchange", "id": 39493, "tags": "newtonian-mechanics, classical-mechanics, forces, momentum" }
Connect Orange 3.20 to postgresql database
Question: I installed Orange 3.20 on Windows 7. It works so far; the problem is connecting it to a server-based Postgres database. While the connection can be made at the moment, when you try to load a table the message "missing extension quantile" comes up. A few problems come up with this message. It seems like it is not possible to install this extension on a Windows server without a lot of stress. The extension does not appear to be maintained (last release 2015) and is not compatible with current PostgreSQL versions (which will cause trouble for future updates). Does anyone know if Orange will support this in the future, and a convenient way to use Orange on Windows 7? Answer: To summarize my linked answer. While you may be able to connect Orange to PostgreSQL, Orange does not handle large datasets very well (it will cause it to crash). Generally, it does not support datasets of the size that you would be storing in SQL-sized databases. Personally, I think Orange works, and is most effective, with smaller datasets for illustrative tasks and exploratory data analysis.
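A hedged workaround in the spirit of the answer (not from the original post): instead of a live connection, pull a manageable sample out of PostgreSQL and hand Orange a flat file. The connection details and table name are placeholders.

import pandas as pd
import psycopg2

# Hypothetical credentials and table -- replace with your own.
conn = psycopg2.connect(host="dbserver", dbname="mydb", user="orange", password="secret")
df = pd.read_sql("SELECT * FROM measurements ORDER BY random() LIMIT 10000", conn)
conn.close()

# Orange's File widget can then load the CSV without needing the quantile extension.
df.to_csv("sample.csv", index=False)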
{ "domain": "datascience.stackexchange", "id": 4929, "tags": "orange, databases" }
rxtools fails under osx Lion and HomeBrew
Question: Hi all, I'm following the tutorial reported here in order to install ROS on my MacBook pro with Lion. I succeed in the installation of the basic stacks but now I'm blocked with Rviz. When I try to rosmake it the compilation of rxtools fails with the following error: -- Build files have been written to: /Users/luca/Software/ros/electric/rx/rxtools/build cd build && make -l8 Scanning dependencies of target rospack_genmsg_libexe [ 0%] Built target rospack_genmsg_libexe Scanning dependencies of target rosbuild_precompile [ 0%] Built target rosbuild_precompile Scanning dependencies of target rxtools [ 5%] Building CXX object CMakeFiles/rxtools.dir/src/rxtools/topic_display.o [ 11%] Building CXX object CMakeFiles/rxtools.dir/src/rxtools/topic_display_generated.o [ 16%] Building CXX object CMakeFiles/rxtools.dir/src/rxtools/topic_display_dialog.o [ 22%] Building CXX object CMakeFiles/rxtools.dir/src/rxtools/rosout_generated.o [ 27%] Building CXX object CMakeFiles/rxtools.dir/src/rxtools/rosout_panel.o [ 33%] Building CXX object CMakeFiles/rxtools.dir/src/rxtools/rosout_filter.o [ 38%] Building CXX object CMakeFiles/rxtools.dir/src/rxtools/rosout_text_filter.o [ 44%] Building CXX object CMakeFiles/rxtools.dir/src/rxtools/rosout_text_filter_control.o [ 50%] Building CXX object CMakeFiles/rxtools.dir/src/rxtools/rosout_severity_filter.o [ 55%] Building CXX object CMakeFiles/rxtools.dir/src/rxtools/rosout_severity_filter_control.o [ 61%] Building CXX object CMakeFiles/rxtools.dir/src/rxtools/rosout_list_control.o [ 66%] Building CXX object CMakeFiles/rxtools.dir/src/rxtools/rosout_setup_dialog.o /Users/luca/Software/ros/electric/rx/rxtools/src/rxtools/rosout_setup_dialog.cpp: In member function ‘virtual void rxtools::RosoutSetupDialog::onTopicBrowse(wxCommandEvent&)’: /Users/luca/Software/ros/electric/rx/rxtools/src/rxtools/rosout_setup_dialog.cpp:74: warning: ‘__s_getDataType’ is deprecated (declared at /Users/luca/Software/ros/electric/ros_comm/messages/rosgraph_msgs/msg_gen/cpp/include/rosgraph_msgs/Log.h:86) [ 72%] Building CXX object CMakeFiles/rxtools.dir/src/rxtools/logger_level_panel.o [ 77%] Building CXX object CMakeFiles/rxtools.dir/src/rxtools/init_roscpp.o Linking CXX shared library ../lib/librxtools.dylib ld: warning: ignoring file /Developer/SDKs/MacOSX10.7.sdk/System/Library/Frameworks//QuickTime.framework/QuickTime, file was built for unsupported file format which is not the architecture being linked (x86_64) Undefined symbols for architecture x86_64: "wxRichTextCtrl::wxRichTextCtrl(wxWindow*, int, wxString const&, wxPoint const&, wxSize const&, long, wxValidator const&, wxString const&)", referenced from: rxtools::TextboxDialog::TextboxDialog(wxWindow*, int, wxString const&, wxPoint const&, wxSize const&, long)in rosout_generated.o "wxRichTextBuffer::BeginBold()", referenced from: rxtools::RosoutListControl::onItemActivated(wxListEvent&) in rosout_list_control.o "wxRichTextBuffer::BeginTextColour(wxColour const&)", referenced from: rxtools::RosoutListControl::onItemActivated(wxListEvent&) in rosout_list_control.o "wxRichTextCtrl::ms_classInfo", referenced from: rxtools::TextboxDialog::onChar(wxKeyEvent&) in rosout_list_control.o ld: symbol(s) not found for architecture x86_64 collect2: ld returned 1 exit status make[3]: *** [../lib/librxtools.dylib] Error 1 make[2]: *** [CMakeFiles/rxtools.dir/all] Error 2 make[1]: *** [all] Error 2 How can I fix it? Thanks! 
Originally posted by LucaGhera on ROS Answers with karma: 128 on 2011-12-13 Post score: 0 Answer: This is probably due to you using the incorrect version of wxWindows and wxPython. There is a note in the tutorial on how to fix this: rosmake --rosdep-install rviz This does a binary install of wxWindows and wxPython to be used with both rxtools and rviz. Originally posted by mjcarroll with karma: 6414 on 2011-12-16 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by LucaGhera on 2011-12-19: @mjcarroll Actually it worked. I removed the file '$PREFIX/share/ros/wxpython-version.installed' and then executed 'rosdep install rviz'. Thanks! Comment by mjcarroll on 2011-12-19: It may also be worthwhile to rebuild all of ROS, as it looks like you have an architecture mismatch somewhere. Especially after removing the files that macports left behind. Comment by mjcarroll on 2011-12-19: You won't see it in brew list due to the way that it is installed. However, you should see it in $PREFIX/share/ros/wxpython-version.installed, where prefix is your brew prefix (brew --prefix). Also, you can test the wxPython install with the python shell using: import wx; wx.version() Comment by LucaGhera on 2011-12-18: @mjcarroll Thanks. Hi did it before but it doesn't work. Should I see wxPython and wxWindows in the list of the brew packages (brew list)? Cause I don't see them. Is it possible that I have some old files from macPorts?
{ "domain": "robotics.stackexchange", "id": 7622, "tags": "rviz, rxtools, macos, homebrew, macos-lion" }
Distribution of Quantum Measurement Uncertainties (what Exactly does the Heisenberg Uncertainty Principle tell us)?
Question: In quantum mechanics, one of the first results students (and random people with an interest in physics) are exposed to is the Heisenberg Uncertainty Principle. Informally, this principle expresses that the product of the uncertainties of certain pairs of properties must be greater than some fixed value. Essentially, the "knowabilities" of certain pairs of properties of a quantum particle are linked such that certainty in one gives rise to uncertainty in another. Typically, the properties position and momentum are given as an example, so I will use those for the remainder of this question. I will use $x$ and $\Delta x$ for position and uncertainty in position and $p$ and $\Delta p$ for momentum and uncertainty in momentum, all respectively. My question is this: If I make a measurement, and say I know the measurement has uncertainty $\Delta x$, what distribution should I expect the "actual" position to follow? I know $\Delta x$ and let's say I know $\Delta p$ as well. These are numbers; they don't describe a physical state without some context. Are $\Delta x$ and $\Delta p$ the bounds of a uniform distribution describing the possible values of $x$ and $p$? Are they the variance of a normal distribution? Are they the variance (or some other property) of a distribution whose shape is determined by the quantum state of the particles interacting during the observation? I have found some resources (like this one https://opentextbc.ca/universityphysicsv3openstax/chapter/the-heisenberg-uncertainty-principle/) that reference this but don't ever give an explicit answer. I also know that the Heisenberg Uncertainty Principle is a special case of the Cauchy-Schwarz inequality, which I plan to explore as an avenue to answering this question for myself. Answer: Are they the variance of a distribution whose shape is determined by the quantum state of the particles interacting during the observation? Yes.
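To spell out the "Yes" (my addition, not in the original answer): in the standard formulation $\Delta x$ and $\Delta p$ are the standard deviations of the outcome distributions determined by the state $\psi$,
$$\Delta x=\sqrt{\langle x^2\rangle-\langle x\rangle^2},\qquad \Delta p=\sqrt{\langle p^2\rangle-\langle p\rangle^2},\qquad \Delta x\,\Delta p\ge\frac{\hbar}{2}.$$
The shape of the position distribution is $|\psi(x)|^2$ and need not be uniform or normal; a Gaussian wave packet is the special case that saturates the bound with $\Delta x\,\Delta p=\hbar/2$.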
{ "domain": "physics.stackexchange", "id": 81559, "tags": "quantum-mechanics, heisenberg-uncertainty-principle" }
cannot find octovis for viewing the built map
Question: I am using ros-fuerte. I used rgbdslam package to built a map. I saved the map as 'name.bt'. When I am trying to view it using rosrun octovis octovis <filename.bt> I got the error octovis stack/package not found. But I checked the installed location using '$ dpkg -L ros-fuerte-octovis' I got output as: /. /usr /usr/share /usr/share/doc /usr/share/doc/ros-fuerte-octovis /usr/share/doc/ros-fuerte-octovis/changelog.Debian.gz /usr/share/doc/ros-fuerte-octovis/copyright /opt /opt/ros /opt/ros/fuerte /opt/ros/fuerte/bin /opt/ros/fuerte/bin/octovis /opt/ros/fuerte/include /opt/ros/fuerte/include/octovis /opt/ros/fuerte/include/octovis/TrajectoryDrawer.h /opt/ros/fuerte/include/octovis/ColorOcTreeDrawer.h /opt/ros/fuerte/include/octovis/SelectionBox.h /opt/ros/fuerte/include/octovis/OcTreeDrawer.h /opt/ros/fuerte/include/octovis/OcTreeRecord.h /opt/ros/fuerte/include/octovis/PointcloudDrawer.h /opt/ros/fuerte/include/octovis/SceneObject.h /opt/ros/fuerte/lib /opt/ros/fuerte/lib/liboctovis.a /opt/ros/fuerte/lib/cmake /opt/ros/fuerte/lib/cmake/octovis /opt/ros/fuerte/lib/cmake/octovis/octovis-config.cmake /opt/ros/fuerte/lib/cmake/octovis/octovis-config-version.cmake /opt/ros/fuerte/lib/liboctovis.so Is there any possible solutions? Originally posted by Sudhan on ROS Answers with karma: 171 on 2012-07-20 Post score: 0 Answer: You no longer start octovis as a ROS package in fuerte (i.e. with "rosrun"). It is now installed as a regular system-wide library, so you start it just like any other program: octovis <filename.bt> Originally posted by AHornung with karma: 5904 on 2012-07-20 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Sudhan on 2012-07-20: Initially, I tried that too and also now. But, I got the same error, ERROR: Filestream to 1stmap.bt not open, nothing read. Comment by Sudhan on 2012-07-20: Sorry. In the mapping process instead of using openni_node.launch, I used openni.launch from other package. So, I guess the map was not built. But, I cannot find the file openni_node.launch in openni_camera package. Comment by AHornung on 2012-07-20: That is a completely different problem, I guess. What happens if you open octovis (without filename), then go the File / Open and pick your file? Comment by Sudhan on 2012-07-20: I solved the problem. I cannot find openni_node.launch because its located in the package openni_camera_deprecated. But, in older versions its located in openni_camera. So, try for $ roslaunch openni_camera_deprecated openni_node.launch
{ "domain": "robotics.stackexchange", "id": 10291, "tags": "ros, octomap, ros-fuerte, octovis" }
VSlam error while running tutorial.bag
Question: Hi all, I got this error while running vslam-tutorial.bag and of course it can't run [ INFO] [1298885315.414216976]: Opening vslam_tutorial.bag Waiting 0.2 seconds after advertising topics... done. Hit space to toggle paused, or 's' to step. Client [/wide_stereo/stereo_vslam_node] wants topic /wide_stereo/right/camera_info to have datatype/md5sum [sensor_msgs/CameraInfo/c9a58c1b0b154e0e6da7578cb991d214], but our version has [sensor_msgs/CameraInfo/1b5cf7f984c229b6141ceb3a955aa18f]. Dropping connection. Client [/wide_stereo/stereo_vslam_node] wants topic /wide_stereo/left/camera_info to have datatype/md5sum [sensor_msgs/CameraInfo/c9a58c1b0b154e0e6da7578cb991d214], but our version has [sensor_msgs/CameraInfo/1b5cf7f984c229b6141ceb3a955aa18f]. Dropping connection. How can I solve this problem? Originally posted by Tien Thanh on ROS Answers with karma: 231 on 2011-02-27 Post score: 1 Original comments Comment by Homer Manalo on 2011-03-02: Ooops, now I can also see this error on my system! This works in cturtle (actually not fully cause I still have some problems (http://answers.ros.org/question/112/vslam-tutorial) but not on recently installed diamondback (just updated to final release). Comment by Tien Thanh on 2011-02-28: Yes, it still have the same error Comment by Homer Manalo on 2011-02-27: Can you download the bag again and see if it is still the same? Answer: Just copy and paste answer from Homer, Thanks! Ooops, now I can also see this error on my system! This works in cturtle (actually not fully cause I still have some problems (http://answers.ros.org/question/112/vslam-tutorial) but not on recently installed diamondback (just updated to final release). - Homer Manalo (19 hours ago) Originally posted by Tien Thanh with karma: 231 on 2011-03-03 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 4893, "tags": "ros, vslam" }
Comparing automata sizes given Myhill-Nerode equivalence under a function
Question: Consider two finite languages, $L_A$ over alphabet $A$ and $L_B$ over alphabet $B$. $A$ might be the same as $B$. Since $L_A$ and $L_B$ are finite languages, there exist minimal acyclic deterministic finite-state automata to decide them: $M_A$ and $M_B$ respectively. So $x \in L_A$ iff $M_A$ accepts $x$, and $y \in L_B$ iff $M_B$ accepts $y$. We are also given that $L_A$ is bigger than $L_B$: $|L_A| > |L_B|$. We are given a function $f:A^*\to B^*$. We also have the constraint that acceptance is preserved under $f$: $\ \ x \in L_A$ iff $f(x) \in L_B$ [modeled below by formula $\eqref{eq1}$]. It was established by the answer to my previous question that if two strings $u$ and $v$ map to the same Myhill-Nerode equivalence class in $B^*$ under $f$ [modeled by formula $\eqref{eq2}$], they map to the same equivalence class in $A^*$ [modeled by formula $\eqref{eq4}$]. This was done by showing that formula $\eqref{eq3}$ follows from $\eqref{eq1}$ and $\eqref{eq2}$, and $\eqref{eq4}$ follows from $\eqref{eq3}$. $\forall x\in A^*\ \ ((x \in L_A) \leftrightarrow (f(x) \in L_B)) \tag{1} \label{eq1}$ $\forall z\in A^*\ \ ((f(uz) \in L_B) \leftrightarrow (f(vz) \in L_B)) \tag{2} \label{eq2}$ $\forall z\in A^*\ \ ((uz \in L_A) \leftrightarrow (f(uz)\in L_B) \leftrightarrow (f(vz)\in L_B) \leftrightarrow (vz\in L_A)) \tag{3} \label{eq3}$ $\forall z\in A^*\ \ ((uz \in L_A) \leftrightarrow (vz \in L_A)) \tag{4} \label{eq4}$ Questions: Given the constraints and results above, can we conclude that $|M_A| = |M_B|$? $|M|$ is the number of states in the finite automaton $M$. Is the reasoning below correct? I think this would be true, because if it were not, there would be a counterexample where two strings $u$ and $v$ would map to the same state in one automaton but two different states in the other automaton. Case 1. There are two strings $u$ and $v$ which map to one state in $M_A$ but two states in $M_B$. So $uz \in L_A$ and $vz \in L_A$ and $f(uz) \in L_B$, but $f(uz) \notin L_B$. This violates formula $\eqref{eq1}$ under the substitution $x \mapsto uz$. Case 2. There are two strings which map to one state in $M_B$ but two states in $M_A$. So under this assumption, there would exist $u$ and $v$ such that $f(uz) \in L_B$ and $f(vz) \in L_B$ and $\ uz \in L_A$ but $vz \notin L_A$. This also violates formula $\eqref{eq1}$, under the substitution $x \mapsto vz$. Answer: It was established by the answer to my previous question that if two strings $u$ and $v$ map to the same Myhill-Nerode equivalence class in $B^*$ under $f$ [modeled by formula (2)], they map to the same equivalence class in $A^*$ [modeled by formula (4)] ... No, formula (2) does not describe the condition that two strings $u$ and $v$ in $A^*$ are mapped to the elements that belong to the same Myhill-Nerode equivalence class in $B^*$. The correct formula that describes that condition should be $$\forall z\in B^*~((f(u)z \in L_B) \leftrightarrow (f(v)z \in L_B)) $$ Hence I am afraid this question does not make much sense.
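A small illustration of the gap the answer is pointing at (my example, not from the original post): condition (2) applies $f$ to the whole word $uz$, whereas membership of $f(u)$ and $f(v)$ in the same Myhill–Nerode class of $L_B$ quantifies over continuations $z\in B^*$ appended after $f$ has been applied, i.e. $f(u)z\in L_B \leftrightarrow f(v)z\in L_B$. Unless $f$ behaves like a monoid homomorphism with $f(uz)=f(u)f(z)$, these are different statements; for instance, if $f$ reverses its input then $f(uz)=f(z)f(u)$, and varying $z$ inside the argument of $f$ says nothing about strings appended to the right of $f(u)$.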
{ "domain": "cs.stackexchange", "id": 20323, "tags": "regular-languages, finite-automata" }
Taylor approximation in physics
Question: In a textbook about semiconductor physics, I came across a passage about deriving the carrier concentration at thermal equilibrium in semiconductors that I didn't quite grasp: The recombination rate $R(T,n,p)$ depends on the ionization energy, temperature $T$, and carrier concentrations $n$, $p$. As the recombination rate must be zero if one of the carrier concentrations $n$, $p$ is zero, it follows for the first non-vanishing term of a Taylor expansion: $$R(T,n,p) = r(T)np$$ The Taylor expansion, as I know it from mathematics class, is defined only for functions of one variable. Besides that, there is always a point involved about which the expansion is carried out. Strictly speaking, $R$ is a function of three variables. And no point is given. I guess this is the kind of inaccuracy practiced among physicists in order not to bloat their argumentation, and generally understood -- among physicists. So, how does an expansion of a function of three variables work and which point should I assume here? Answer: The argument is not a mathematical Taylor expansion; there is an implicit physical argument here which is very well known but not given. The rate of a reaction (like recombination) at low density should be the product of the densities, with a coefficient that is independent of the density. This happens to be a mathematical Taylor expansion in the densities. But zero is a special point for an expansion, because there are natural power-law changes of variables, and you can't determine which power is right without knowing the physics. For example, the period of a pendulum goes to zero as the length of the pendulum goes to zero; does this mean that the period must be proportional to L? Of course not. It's the square root of L. The correct argument When you have two objects which are moving randomly and have to find each other, the probability of each A object finding a B object per unit time is proportional to the density of B objects. If you have twice as many B objects per unit volume, it takes half as long to find one, just from independent search statistics. Similarly, the rate is doubled if you double the concentration of A objects. So the leading term is the so-called mass-action term, the product of the concentrations. At higher concentrations, the A's and B's start to have excluded volume effects--- it is slightly easier for A to find B because the mass-action assumes independent motion of the different B's while the real B's can only travel in the volume not excluded by other B's. There are also corrections to the diffusion rate from interactions. The corrections are negligible when the gas is dilute compared to the physical cross-section of interaction, and this limit defines the domain of applicability of mass-action dynamics. The argument that the molecules are in the mass-action limit is what is used here, and the Taylor expansion phrasing is suboptimal. But the result is that the physics of recombination is given by the first term in a Taylor expansion in the two densities, so the calculation that follows is correct, even though the physical argument is wrong.
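For completeness, the multivariable expansion the textbook is gesturing at (my reconstruction): expanding about $(n,p)=(0,0)$ at fixed $T$,
$$R(T,n,p)=R_0(T)+a(T)\,n+b(T)\,p+c(T)\,n^2+d(T)\,p^2+r(T)\,np+\dots$$
Requiring $R(T,n,0)=0$ for every $n$ forces $R_0=a=c=0$, and $R(T,0,p)=0$ for every $p$ forces $b=d=0$, so the first term that can survive is $r(T)\,np$ — exactly the mass-action form the answer derives physically.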
{ "domain": "physics.stackexchange", "id": 1937, "tags": "mathematical-physics, semiconductor-physics" }
Magnetic field of a Herzian dipole antenna
Question: If I am given the dipole moment of very short dipole antenna as $P = P_0 sin (\omega t)$, what will be the magnetic field and polarization of far field radiation? Do I need to consider the time variation when calculating near and far fields, as in how many half or full cycles will exists along the dipole ? Thanks in advance. Answer: You can use phasors instead of dealing with time domain. For a Hertzian dipole oriented along the $z$ axis, the magnetic field will have only one ($\phi$) component, both in near-field and in far-field, which will be equal to \begin{equation} H_{\phi}=\frac{i\omega P}{4\pi}\left[\frac{ik}{r}+\frac{1}{r^2}\right]\sin\theta e^{-i k r} \end{equation} where $(r,\theta,\phi)$ are the spherical coordinates, $i$ is the imaginary unit, $\omega$ is the angular frequency, $k$ is the wavenumber, $P$ is the dipole moment and the $\exp(i\omega t)$ time convention has been adopted. The far magnetic field can be easily obtained by dropping the $1/r^2$ term, i.e. \begin{equation} H_{\phi}\simeq\frac{i\omega P}{4\pi}\frac{ik}{r}\sin\theta e^{-i k r} \end{equation} Opposite to that, the electric field will have two components (i.e., $r$ and $\theta$) in the near field equal to \begin{equation} E_{\theta}=\zeta\frac{i\omega P}{4\pi}\left[\frac{ik}{r}+\frac{1}{r^2}+\frac{1}{i k r^3}\right]\sin\theta e^{-i k r} \end{equation} and \begin{equation} E_{r}=\zeta\frac{i\omega P}{4\pi}\left[\frac{1}{r^2}+\frac{1}{i k r^3}\right]\cos\theta e^{-i k r} \end{equation} The far field can be obtained by dropping the $1/r^2$ and $1/r^3$ terms. Accordingly, \begin{equation} E_{\theta}\simeq\zeta\frac{i\omega P}{4\pi}\frac{ik}{r}\sin\theta e^{-i k r} \end{equation} and \begin{equation} E_{r}\simeq0 \end{equation} The electric far field is then linearly polarized along $\hat{i}_\theta$, while the magnetic far field is linearly polarized along $\hat{i}_\phi$.
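A quick numerical check of where the far-field approximation takes over (my addition, not part of the original answer): in $H_\phi$ the radiating $ik/r$ term and the induction $1/r^2$ term have equal magnitude at $kr=1$, i.e. $r=\lambda/2\pi$. The operating frequency below is an assumed example.

import numpy as np

c = 3.0e8              # speed of light, m/s
f = 100e6              # assumed frequency: 100 MHz
lam = c / f
k = 2 * np.pi / lam

for r in (0.1, lam / (2 * np.pi), 1.0, 10.0):
    ratio = k * r      # |ik/r| / |1/r**2| = k*r
    regime = "radiation term dominates" if ratio > 1 else "induction term dominates"
    print(f"r = {r:6.3f} m   k*r = {ratio:5.2f}   ({regime})")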
{ "domain": "physics.stackexchange", "id": 13940, "tags": "electromagnetism, mathematical-physics, antennas, dipole" }
Walk file system checking that backup files are present
Question: Everyday I manually check that that my systems were backed up. I use the windows file system through the Explorer gui and check that the backup files were modified on the previous day or later. This script walks the file system backup directory and compares the files modified since the start of the previous day to a list I've prepared in the workfile.txt file. There are a number of areas to address. When building the path\filename strings should I break out the 'ifs' into their own functions? When walking the file system I exclude directories by hard coding them in if statements. Should I check if a set of excluded directories is in the dirs list? I tried but I couldn't get it to work that way so I reverted to individual checks. Would be better to parse the dates out of the path and file names rather then hard coding the places to insert dates with strings like [yesterday]? backproj.py import datetime import os from os.path import join, getsize from colorama import Fore, Back, Style TODAY = datetime.date.today() YESTERDAY = TODAY - datetime.timedelta(days=1) # TODO: Move to config or CLI argument working_directory = 'W:\\SQLServer_Backups_Recent' def get_raw_names_from_file(file): with open(file, 'r') as f: # TODO: Try/Catch if file doesn't exist? return list(f) def process_raw_file_names(raw_names): for file in raw_names: if ('<yesterday>') in file: yest_file = file.replace( '<yesterday>', '{d.month}-{d.day}-{d.year}-'.format( d=YESTERDAY) + YESTERDAY.strftime('%a')) else: yest_file = file if ('_' + str(YESTERDAY.year)) in yest_file: nostamp_file = yest_file[:(yest_file.find( '_' + str(YESTERDAY.year)))] + '\n' else: nostamp_file = yest_file if ('<jira_yesterday>') in nostamp_file: clean_file = file.replace( '<jira_yesterday>', YESTERDAY.strftime('%Y-%b-%d')) elif ('<jira_today>') in nostamp_file: clean_file = file.replace('<jira_today>', TODAY.strftime('%Y-%b-%d')) else: clean_file = nostamp_file base_file_names.append(clean_file) return base_file_names def walk_file_system(working_directory): current_files = [] yesterday = datetime.date.today() - datetime.timedelta(days=1) for root, dirs, files in os.walk(working_directory): # TODO: Move all these directories to config, determine how to # filter dirs list if 'Marked' in dirs: dirs.remove('Marked') if 'BackupNonCritical' in dirs: dirs.remove('BackupNonCritical') if 'Attachments' in dirs: dirs.remove('Attachments') if 'x_GrouplinkBackupDB_20131121' in dirs: dirs.remove('x_GrouplinkBackupDB_20131121') if 'x_JIRA-LastOldJiraBackup' in dirs: dirs.remove('x_JIRA-LastOldJiraBackup') if 'x_PTS-DB' in dirs: dirs.remove('x_PTS-DB') if 'x_IvanBackups' in dirs: dirs.remove('x_IvanBackups') if 'x_PTS-Devel' in dirs: dirs.remove('x_PTS-Devel') if 'x_DBAPPS' in dirs: dirs.remove('x_DBAPPS') if 'x_pledgerep' in dirs: dirs.remove('x_pledgerep') file_list = (join(root, name) for name in files) for item in file_list: mtime = datetime.datetime.fromtimestamp( int(os.stat(item).st_mtime)).strftime('%Y-%m-%d') if mtime >= (YESTERDAY).__str__(): if ('_' + str(YESTERDAY.year)) in item: clean_item = item[:(item.find('_' + str(YESTERDAY.year)))] else: clean_item = item current_files.append(clean_item + '\n') return current_files if __name__ == '__main__': base_file_names = [] current_files = [] print('\n\n' + Back.BLUE + 'Welcome to Backup Checker') print(Back.BLUE + 'Scanning: ' + working_directory, Style.RESET_ALL, '\n') raw_names = get_raw_names_from_file('workfile.txt') base_file_names = process_raw_file_names(raw_names) current_files = 
walk_file_system(working_directory) missing_files = set(base_file_names).difference(current_files) extra_files = set(current_files).difference(base_file_names) print('Lines in input file:', len(raw_names)) print('Lines in processed base file list:', len(base_file_names)) print(Fore.GREEN + 'File system files counted:', len(current_files), Style.RESET_ALL) print('Base files missing from File system:', len(missing_files)) for file in missing_files: print('Missing:', Fore.RED + file, Style.RESET_ALL) for file in extra_files: print('Extra:', Fore.GREEN + file, Style.RESET_ALL) workfile.txt W:\SQLServer_Backups_Recent\Apps\CriticalBackups\<yesterday>\MPBackups\Bitbucket\Bitbucket_backup W:\SQLServer_Backups_Recent\JIRA\XML-Backups\<jira_yesterday>--0200.zip W:\SQLServer_Backups_Recent\AppTest\GEN\GEN_backup Answer: You can definitely put the excluded directories in a constant and iterate over it: EXCLUDE = 'Marked', 'BackupNonCritical', 'Attachments', ... def walk_file_system(working_directory): ... for to_exclude in EXCLUDE: if to_exclude in dirs: dirs.remove(to_exclude) As an alternative, since the if to_exclude in dirs might be costly (for a long list of sub-directories), you could convert them to sets: EXCLUDE = {'Marked', 'BackupNonCritical', 'Attachments', ...} def walk_file_system(working_directory): ... for to_exclude in EXCLUDE.intersection(dirs): dirs.remove(to_exclude) Or, since directory names have to be unique in every directory: def walk_file_system(working_directory): ... dirs[:] = set(dirs).difference(EXCLUDE) (the slice assignment modifies the list in place, which is what lets os.walk actually skip those directories). Regarding the dates, the comparisons where you first call __str__ seem very hacky. Better to use the fact that datetime.date objects are comparable: import datetime import os TODAY = datetime.date.today() YESTERDAY = TODAY - datetime.timedelta(days=1) file_name = "foo.txt" mod_time = datetime.datetime.fromtimestamp(os.stat(file_name).st_mtime) if mod_time.date() > YESTERDAY: print("new file") if mod_time.date() == TODAY: print("still a new file") You can also use format's nice direct format specifiers for date objects: file.replace('<yesterday>', '{:%m-%d-%Y}-'.format(YESTERDAY)) Your whole process_raw_file_names function is actually very hard to understand at the moment. But I'm currently struggling to come up with good alternatives. You should avoid shadowing the built-in function join with os.path.join. Since you actually use that function only once and in a short line, why not do this to follow the Python Zen "Explicit is better than implicit" import os for name in files: item = os.path.join(root, name) mtime = datetime.datetime.fromtimestamp(os.stat(item).st_mtime) ... This file_list = (join(root, name) for name in files) for item in file_list: mtime = datetime.datetime.fromtimestamp( int(os.stat(item).st_mtime)).strftime('%Y-%m-%d') if mtime >= (YESTERDAY).__str__(): if ('_' + str(YESTERDAY.year)) in item: clean_item = item[:(item.find('_' + str(YESTERDAY.year)))] else: clean_item = item current_files.append(clean_item + '\n') can now be better written as: year_suffix = "_{:%Y}".format(YESTERDAY) for name in files: item = os.path.join(root, name) mtime = datetime.datetime.fromtimestamp(os.stat(item).st_mtime) if mtime.date() >= YESTERDAY: index = item.find(year_suffix) if index != -1: item = item[:index] current_files.append(item + '\n')
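Pulling the reviewer's pieces together into one revised walk function (my consolidation, not code from the original answer); the EXCLUDE set should be extended with the remaining directories from the question, and the pruning is done in place so that os.walk honours it.

import datetime
import os

EXCLUDE = {'Marked', 'BackupNonCritical', 'Attachments'}  # extend with the other excluded directories
YESTERDAY = datetime.date.today() - datetime.timedelta(days=1)

def walk_file_system(working_directory):
    current_files = []
    year_suffix = "_{:%Y}".format(YESTERDAY)
    for root, dirs, files in os.walk(working_directory):
        dirs[:] = [d for d in dirs if d not in EXCLUDE]  # prune in place so os.walk skips them
        for name in files:
            item = os.path.join(root, name)
            mtime = datetime.datetime.fromtimestamp(os.stat(item).st_mtime)
            if mtime.date() >= YESTERDAY:
                index = item.find(year_suffix)
                if index != -1:
                    item = item[:index]
                current_files.append(item + '\n')
    return current_files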
{ "domain": "codereview.stackexchange", "id": 22453, "tags": "python, beginner, windows" }
Capillary Perpetual Motion
Question: Can anyone figure out what is wrong with this perpetual motion machine? What part of it violates physics? I found it on a website a while ago, and I couldn't figure out what was wrong with it. Thanks, and enjoy! By the way, here's the website: https://www.lockhaven.edu/~dsimanek/museum/capillar.htm Answer: The answer is quite simple. At the enlarged diameter of the capillary tube, the capillarity is lost/reduced, depending on the diameter. The water will no longer rise into the larger diameter at the top of the capillary tube, and the syphon cannot work without its inlet end being submerged in water.
{ "domain": "physics.stackexchange", "id": 86997, "tags": "water, surface-tension, perpetual-motion, capillary-action" }
Slam_toolbox message filter dropping message
Question: Hi guys, I am trying to start mapping with slam_toolbox (Ubuntu 18.04 with Eloquent). When starting the online async node (or sync, I tested both) this message gets spammed and no map is produced: [slam_toolbox-1] [INFO] [slam_toolbox]: Message Filter dropping message: frame 'base_scan' at time 1594996830.893 for reason 'Unknown' I am publishing my laser data to base_scan and my TF tree looks like this: odom -> base_footprint -> base_link -> base_scan. My odom msg is published by the wheels with frame_id: odom and child_frame_id: base_footprint. I don't know if this is important, but just to have everything in: I am filtering my laser message with a node of my own so it produces only a 180° scan instead of 270°. I am replacing the unnecessary points with NAN. In Rviz my TF looks clean, except for map, which shows the warning: No transform from [map] to [base_link] My slam_toolbox yaml file (the rest is untouched but I think it should be all default): slam_toolbox: ros__parameters: # Plugin params solver_plugin: solver_plugins::CeresSolver ceres_linear_solver: SPARSE_NORMAL_CHOLESKY ceres_preconditioner: SCHUR_JACOBI ceres_trust_strategy: LEVENBERG_MARQUARDT ceres_dogleg_type: TRADITIONAL_DOGLEG ceres_loss_function: None # ROS Parameters odom_frame: odom map_frame: map base_frame: base_footprint scan_topic: /scan mode: mapping #localization I don't know if this issue is related: https://github.com/SteveMacenski/slam_toolbox/issues/169. I am not using a bridge but the error output seems to be the same. Thanks in advance. If you need any more information just let me know and I will update the question. Originally posted by pfedom on ROS Answers with karma: 74 on 2020-07-21 Post score: 3 Answer: Got it. My odometry transform was missing the timestamp. Adding it got the SLAM working! Originally posted by pfedom with karma: 74 on 2020-07-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by surfertas on 2020-07-23: Adding a bit more detail with regards to your solution would be helpful for future viewers. (e.g. where and how did you add the missing timestamp.) Thanks. Comment by pfedom on 2020-07-23: The transform from odom to base_footprint (wheel_odometry frame_id to child_frame_id) was missing the header.stamp part. I just missed it when I created the tf-broadcaster. Adding it to the tf-broadcaster gave me seconds and nanoseconds in the transform message and the slam was working. I hope this is well explained. If you need more information just comment again please. Comment by Shneider1m7 on 2021-03-05: Hey @pfedom I'm facing a similar problem when I launch slam toolbox online sync, I got "no map received warning" in rviz and "[rivz]: Message Filter dropping message: frame 'base_scan' at time 1594996830.893 for reason 'Unknown' " I didn't get what you did. Comment by sisaha9 on 2021-04-19: @Shneider1m7 did you figure out what was the issue? I am facing the same problem right now
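A minimal sketch of the fix described in the accepted answer (my illustration; node and variable names are made up), assuming rclpy and the Python tf2_ros broadcaster available in ROS 2:

from rclpy.node import Node
from geometry_msgs.msg import TransformStamped
from tf2_ros import TransformBroadcaster

class OdomTfBroadcaster(Node):
    def __init__(self):
        super().__init__('odom_tf_broadcaster')
        self.br = TransformBroadcaster(self)

    def publish_odom_tf(self, x, y, qz, qw):
        t = TransformStamped()
        t.header.stamp = self.get_clock().now().to_msg()  # the missing timestamp from the question
        t.header.frame_id = 'odom'
        t.child_frame_id = 'base_footprint'
        t.transform.translation.x = x
        t.transform.translation.y = y
        t.transform.rotation.z = qz
        t.transform.rotation.w = qw
        self.br.sendTransform(t)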
{ "domain": "robotics.stackexchange", "id": 35307, "tags": "ros, ros2, tf2, 2d-mapping, transform" }
How do I reduce the mains spike in a switched mode PSU?
Question: I have a switched-mode PSU for a gaming laptop. It has a bad habit of burning out the switches on mains extension panels when I turn it on (two in the last four days). The turn-on surge is always audible and spectacularly visible (sparks) if I plug in with the power on. I don't think the PSU is faulty (I have two to compare), probably just not that well-designed. I need to turn it on and off fairly regularly - it's not possible to leave permanently powered on. I qualified in electronics (before SM-PSUs!) and write software these days, but I understand roughly what is going on. Is there anything I can put in series on the input side to reduce the spike? A ready made device would be great, but I'm happy to build something. I'm guessing some sort of inductor will do the job, but I'm not sure of all the factors/trade-offs involved and I'd rather design well than just over-specify, plus I don't have any serious test equipment left. Or is there another approach? The PSU is specified as 100-240V / 1.7A AC in, and 19.5V / 6.15A / 120W DC out. I am in the UK where we use the full 240V. Answer: OK, as it's been so quiet, I've looked at this in more detail. The accepted solution seems to be an Inrush Current Limiter based on a negative temperature coefficient (NTC) Thermistor. This sits on an input line and can be before rectification (ac) or after (dc), but obviously must be before the capacitance that presents almost zero impedance at startup. At room temperature, the thermistor has an impedance that significantly restricts the inrush current. As current flows through, it heats up and the NTC implies that its impedance drops with increasing temperature. At some temperature, it reaches a balance where heat is being radiated at the same rate as it is being generated and a steady state is reached. Turning to more quantitative issues: Typical impedances vary from less than 1Ω to 100+Ω at room temperature, typically given as 25°C in manuals. The impedances can drop to fractions of an Ω at working temperatures which can easily be in excess of 200°C. This high temperature must be allowed for in terms of mounting and ventilation. The dissipation constant (power lost per unit of temperature above surroundings) or "thermal inertia" of the device means that it can take quite a few seconds, even a few minutes, after power-off for the temperature to drop to a level where the impedance has significantly elevated again. If the PSU is turned on again within this time frame the original inrush current problem will at least partially manifest as the high-temperature/low-impedance combination will not restrict it enough.
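Rough numbers (mine, with assumed component values, not from the original answer) showing why a room-temperature NTC tames the first half-cycle surge into the discharged bulk capacitor:

import math

v_peak = 240 * math.sqrt(2)   # UK mains peak, about 339 V
esr = 0.5                     # assumed series resistance of the PSU front end, ohms
r_ntc_cold = 10.0             # assumed NTC resistance at 25 degC, ohms
r_ntc_hot = 0.2               # assumed NTC resistance at operating temperature, ohms

i_no_ntc = v_peak / esr                  # worst-case first peak, roughly 680 A
i_cold = v_peak / (esr + r_ntc_cold)     # with a cold NTC, roughly 32 A
p_hot = 1.7 ** 2 * r_ntc_hot             # steady-state loss at the rated 1.7 A input, ~0.6 W

print(f"inrush without limiter:  {i_no_ntc:.0f} A")
print(f"inrush with cold NTC:    {i_cold:.0f} A")
print(f"running NTC dissipation: {p_hot:.2f} W")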
{ "domain": "engineering.stackexchange", "id": 3554, "tags": "electrical-engineering, power-electronics, circuits, consumer-electronics, electronics" }
Cardinal direction enum from range input
Question: Still learning JS and I made the following function to convert degrees into cardinal directions respectively. My first iteration of the code was over 50 lines and I was able to get it down to 13 using a for loop. Is there a way to optimize or simplify the code down even more? Is my logic and implementation ideal? https://codepen.io/bbbenji/pen/JjPGNmY function setCardinalDirection() { value = document.querySelector('#orientation').value // Get current value from range input document.querySelector(".degrees").textContent = value // Inject current input value into label direction = document.querySelector(".direction") // Define intercardinal direction display element degrees = 22.5 // Define range between two intercardinal directions cardinal = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE", "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW", "N"] for ( i = cardinal.length; i >= 0; i-- ) { // Iterate through cardinal array backwards if ( value < degrees/2 + degrees * i ) { cardinalOut = cardinal[i] } } direction.textContent = cardinalOut // Inject current cardinal value into label } document.querySelector("#orientation").addEventListener('input', function(event) { setCardinalDirection() }) window.addEventListener("load", function(){ setCardinalDirection() }) <form> <label for ="orientation"> Orientation: <strong><span class="degrees"></span>° (<span class="direction"></span>)</strong> </label> <br /> <input id="orientation" type="range" min="0" max="360" step="1" value="145"> </form> Answer: Evaluation For somebody learning Javascript this code isn't bad, however it does have some inefficiencies and practices that are frowned upon (e.g. Global variables). See the suggestions below for advice about improving it. Suggestions Semicolon terminators Unless you are intimately familiar with the statements that need to be terminated by a semicolon it is wise to always terminate each line with a semi-colon. Selecting elements by class vs id attribute The first line selects an element from the DOM by the id attribute - i.e. document.querySelector('#orientation'). The next line uses a class selector to select an element: document.querySelector(".degrees"). If there is only one element with class name degrees that matters for this application then an id attribute would be more appropriate than class. Also, it isn't wrong to use document.querySelector() to get elements by id but using document.getElementById() "is definitely faster" 1 (see this jsPerf test for comparison). Similarly document.getElementsByClassName() is also faster. I would suggest only using document.querySelector() when there is a complex selector (e.g. form lable[for="direction'] Global variables Any variable not declared with a keyword like var, const or let is considered global. For a small application like this it likely wouldn't lead to any issues but in a larger application it could lead to unintentional side-effects if the same name is used in different functions. Excess closures The lines to add event listeners can be simplified: document.querySelector("#orientation").addEventListener('input', function(event) { setCardinalDirection() }) window.addEventListener("load", function(){ setCardinalDirection() }) Instead of wrapping the calls to setCardinalDirection() in an extra function, just pass the name of the function: document.querySelector("#orientation").addEventListener('input', setCardinalDirection); window.addEventListener("load", setCardinalDirection);
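The simplification the question is reaching for, sketched in Python rather than JavaScript since the arithmetic is language-independent (my addition, not from the original review): dividing by 22.5 and rounding maps degrees straight to an array index, replacing the backwards loop.

CARDINAL = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
            "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]

def to_cardinal(degrees):
    # Each sector is 22.5 degrees wide; adding 0.5 before truncating rounds to the
    # nearest sector, and % 16 wraps 348.75-360 back around to "N".
    return CARDINAL[int(degrees / 22.5 + 0.5) % 16]

assert to_cardinal(0) == "N"
assert to_cardinal(145) == "SE"
assert to_cardinal(360) == "N"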
{ "domain": "codereview.stackexchange", "id": 35786, "tags": "javascript, html, event-handling, dom" }
Copyable Atomic
Question: I find it pretty annoying that c++11 atomics can't be copied. The reasons for this have been discussed e.g. here and I don't want to argue about them now. However, I find myself repeatedly in situations, where I want to copy data structures around that contain atomics. Usually BEFORE they are actually used in a multithreaded context (e.g. in order to return them from a factory function, or to store them in a vector etc.). In order to solve that problem without having to manually write a copy constructor over and over again, I decided to write a simple class, that publicly derives from std::atomic and adds those functionality: /** * Drop in replacement for std::atomic that provides a copy constructor and copy assignment operator. * * Contrary to normal atomics, these atomics don't prevent the generation of * default constructor and copy operators for classes they are members of. * * Copying those atomics is thread safe, but be aware that * it doesn't provide any form of synchronization. */ template<class T> class CopyableAtomic : public std::atomic<T> { public: //defaultinitializes value CopyableAtomic() : std::atomic<T>(T{}) {} constexpr CopyableAtomic(T desired) : std::atomic<T>(desired) {} constexpr CopyableAtomic(const CopyableAtomic<T>& other) : CopyableAtomic(other.load(std::memory_order_relaxed)) {} CopyableAtomic& operator=(const CopyableAtomic<T>& other) { this->store(other.load(std::memory_order_relaxed), std::memory_order_relaxed); return *this; } }; In my toy examples this worked pretty well, however, I'm not sure, if I really considered all possible ramifications of this - deriving from standard library types feels awkward enough and when it comes to synchronization primitives I feel a bit like playing with fire (or juggling with razorblades to quote Herb Sutter). So what I would like to know (aside from general improvement suggestions, or alternative approaches): Can this really serve as a drop-in replacement everywhere, where you would use a normal atomic (especially also with CAS instructions). Is my claim, that the copy constructor / assignment operator is threadsafe correct (both with respect to this as well as with respect to other) Can you think of any performance regressions that this would introduce when used as a synchronization primitive instead of a std::atomic (e.g. because - for some reasons - compilers treat std::atomics in a way they don't / can treat this class) Answer: Difference in default constructor std::atomic<T>'s default constructor is trivial - yours isn't. If your goal is to simply add copy semantics, I would maintain this same behavior: CopyableAtomic() = default; Memory orders I'm not sure about relaxed here. You probably want to ensure consistent orderings where you use this object, so I would change the load()s to use std::memory_order_acquire and the store() to use std::memory_order_release. That is: CopyableAtomic& operator=(const CopyableAtomic<T>& other) { this->store( other.load(std::memory_order_acquire), std::memory_order_release); return *this; } Otherwise, you might get unexpected reorderings. Better to be on the safe side if you're writing a class like this. Otherwise This looks perfectly fine to me. With the exception of copying, this is precisely a std::atomic so it should have all the same behavior everywhere - so it should be able to be a straight-forward drop in replacement in all places.
{ "domain": "codereview.stackexchange", "id": 17273, "tags": "c++, c++11, atomic" }
Is $E=mc^2$ reserved to nuclear physics?
Question: I was wondering, while putting a log in my fireplace, how much energy the piece of wood would give. The most famous formula poped into my head: $E=m \cdot c ^ 2$! Is this formula applicable to a burning object or is in only applicable to a nuclear reaction? Answer: The identity $E=mc^2$ is a universal law of physics. It says that the mass – that can be interpreted as the conserved mass; inertial mass determining the resistance to acceleration; or gravitational mass determining the strength of the gravitational field – is equivalent to the energy, a conserved quantity that was originally defined as the sum of the kinetic and potential energy and that was extended to many other forms of energy later. The identity $E=mc^2$ also says that each kilogram in mass $m$ carries a latent energy $9\times 10^{16}$ joules that may be extracted from the mass under some circumstances (annihilation; collapse into a black hole that later evaporates, and so on). We usually talk about $E=mc^2$ in nuclear physics where the kinetic energy is often comparable (or even much greater) than the latent energy associated with the rest mass (for the LHC, the kinetic energy is 4,000 times greater than the latent energy stored in the rest mass). However, the energy-mass conversion is potentially important in all other situations. In annihilation, one converts 100% of $E=mc^2$ to pure energy – thermal energy, mostly usable as work. In thermonuclear fusion, one gets about 1% of $E=mc^2$. In nuclear fission, one gets about 0.1% of $E=mc^2$. Chemical reactions only convert one millionth of a percent of the mass to pure energy; they're one million times less "efficient" than nuclear reactions. For example, a typical reaction involving two atoms or two molecules produces (or consumes) 1 electronvolt of energy. That's equal to $1.602\times 10^{-19}$ joules which is about $1.8\times 10^{-36}$ kilograms. It looks like a little but if the number of atomic or molecular pairs is $10^{26}$, comparable to Avogadro's constant, one already gets $10^{-10}$ kilograms which may be in principle measured. If you could measure the mass of the coal plus ashes plus gases with the precision of 8 significant figures – which is on the edge of the current technological abilities – you could see that the burned coal (plus the gases, ashes) is lighter than the original coal (plus oxygen) by about one millionth of a percent. This tiny portion of the original mass was converted to heat, radiation, or related forms of energy. In a similar way, if you heat an object, it becomes a bit heavier; the energy added to the object also increases its mass according to $E=mc^2$. Roughly speaking, one could say that the energy-mass conversion becomes large and important if we deal with massive particles that move by speeds comparable to the speed of light. But the general law applies to all phenomena in Nature although the mass equivalent to the converted energy is usually small.
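The fireplace numbers (my addition), taking an assumed heat of combustion of about 16 MJ per kilogram of dry wood:

c = 3.0e8                   # speed of light, m/s
heat_of_combustion = 16e6   # J/kg, assumed typical value for dry wood
m_log = 2.0                 # kg

chemical_energy = m_log * heat_of_combustion   # about 3.2e7 J of heat
rest_energy = m_log * c ** 2                   # about 1.8e17 J locked in the rest mass
mass_converted = chemical_energy / c ** 2      # about 3.6e-10 kg actually "lost"

print(f"heat released:      {chemical_energy:.1e} J")
print(f"latent rest energy: {rest_energy:.1e} J")
print(f"mass converted:     {mass_converted:.1e} kg "
      f"({mass_converted / m_log:.1e} of the log's mass)")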
{ "domain": "physics.stackexchange", "id": 5683, "tags": "energy, heat, nuclear-physics" }
How to calculate valence factor (n-factor) for a element?
Question: Suppose I am given some reaction in which $$\ce{C6H12O6 -> CO2}$$ and I want to calculate the n-factor for this reaction, to ultimately calculate the equivalent weight of carbon for this reaction. Since $$E=\frac{M}{n}$$ I know that the n-factor is the number of electrons gained/lost by one atom of a compound. So the initial oxidation state of carbon follows from $$6x + 12-12=0$$ $$x=0$$ meaning it is $0$, and the final oxidation state of carbon (in $\ce{CO2}$) is $$+4$$ Hence, the n-factor of this reaction should be $4$ but it's given to be $28$. How is this possible? Can someone explain how to correctly calculate the n-factor with a few more examples? Answer: Hint: the n-factor of a molecule/compound is defined as the change in oxidation state per molecule. You have correctly calculated the change of one carbon atom as 4. But how many carbon atoms are there in the glucose molecule? Note: The average oxidation state of carbon in glucose is zero, while in reality the different carbons have different oxidation states. (Reference) The n-factor of a reaction is not defined, but it is defined for a single species participating in that reaction. The n-factor of glucose is not equal to the n-factor of the product formed in that reaction, that is, $\ce{CO_2}$.
{ "domain": "chemistry.stackexchange", "id": 15637, "tags": "stoichiometry, oxidation-state, mole" }
How to build an autonomous navigation system for robots?
Question: I would like to know whether we can make a website or application (or use something free like Google Maps) to plot a route that our robot can then follow exactly. For example, I would like to build a quadcopter, specify some streets on the map, and have the quadcopter fly along those specified streets. Answer: In order to implement the route-following function you need to make several design choices which will affect your robot's performance and accuracy. Here are some examples: GPS Coordinates: using a GPS receiver the robot can navigate through a set of waypoints represented as coordinates (or elevated coordinates for a UAV) to follow; Compass Heading: a calibrated compass can lead the vehicle along a certain path with a given set of headings; Inertial Navigation: using an IMU and an EKF the robot can estimate its position knowing the starting point; however, the accuracy will be highly dependent on the quality of the sensors (commercial-grade IMUs are not suitable for precise odometry). To set up waypoints on the fly you can use either a radio, a web (socket, HTTPSync, REST, ...) or satellite link. There are lots of choices for the GUI too, ranging from a native app to a web interface or traditional desktop software. To recap, if you can specify the sensors you would like to use and the communication channel I can expand on that. References Inertial Navigation System Compass navigation for robots ArduPilot Mission Planner ROS web interface builder
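A minimal sketch of the GPS-waypoint option from the answer (my illustration; the coordinates are placeholders): compute the bearing from the current fix to the next waypoint and steer to shrink the heading error.

import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def heading_error(current_heading, target_bearing):
    """Signed error in degrees, wrapped to [-180, 180); positive means turn right."""
    return (target_bearing - current_heading + 180) % 360 - 180

# Example: next waypoint pulled from a route drawn on a web map
err = heading_error(90.0, bearing_deg(55.676, 12.568, 55.680, 12.575))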
{ "domain": "robotics.stackexchange", "id": 1485, "tags": "gps, automatic, autonomous-car" }
Is a black hole singularity a single point?
Question: General relativity is expressed in terms of differential geometry, which allows you to do interesting things with the coordinates: multiple coordinates may refer to a single point, eg. the equirectangular projection has whole lines at the top and the bottom that correspond to the North and South poles, respectively; or a single coordinate may refer to multiple points, for example by using inverse geometry the origin refers to all the points infinitely far away. So, is the singularity just a single point in the curved spacetime, or can it be a more extended object, described by a single coordinate? Answer: Need for a coordinate-independent definition multiple coordinates may refer to a single point Normally, the way we define this kind of thing in GR is that we have an atlas, and the atlas is made of charts. Each chart is required to be invertible, so no, we can't have multiple coordinates that refer to a single point. In any case, when we define dimensionality on a topological space, we do it in a coordinate-independent way. E.g., one can use the Lebesgue covering dimension. A singularity in the metric versus a singularity on a well-defined metric background Suppose I have a two-dimensional space with coordinates $(u, v )$, and I ask you whether $S = \{(u, v )|v = 0\}$ is a point or a curve, while refusing to divulge what metric I have in mind. You’d probably say $S$ was a curve, and if the metric was $ds ^2 = du ^2 +dv ^2$, you'd be right. On the other hand, if the metric was $ds ^2 = v^2du ^2 +dv ^2$, $S$ would be a point. This was an example where there were two possible metrics we could imagine. At a singularity, it’s even worse. There is no possible metric that we can extend to the singularity. Hawking and Ellis have a nice discussion along these lines, including an example similar to the one above, in section 8.3, "The description of singularities," p. 276: [The singularity theorems] prove the occurrence of singularities in a large class of solutions but give little information as to their nature. To investigate this in more detail, one would need to define what one meant by the size, shape, location and so on of a singularity. This would be fairly easy if the singular points were included in the space-time manifold. However it would be impossible to determine the manifold structure at such points by physical measurements. In fact there would be many manifold structures which agreed for the non-singular regions but which differed for the singular points. After presenting the example, they say: In the first case the singularity would be a three-surface, in the second case a single point. Not a point or set of points So is the singularity just a single point in the curved spacetime, or can it be a more extended object, described by a single coordinate? Well, technically it's none of the above. A singularity in GR is like a piece that has been cut out of the manifold. It's not a point or point-set at all. Because of this, formal treatments of singularities have to do a lot of nontrivial things to define stuff that would be trivial to define for a point set. For example, the formal definition of a timelike singularity is complicated, because it has to be written in terms of light-cones of nearby points. Boundary constructions don't provide an answer There are some possible heuristics you might use in order to describe the singularity as if it were a point-set and talk about its dimensionality as if it were a point-set. You can draw a Penrose diagram. 
On a Penrose diagram, a horizontal line represents a spacelike 3-surface, with one dimension being shown explicitly on the diagram and the other two being because rotational symmetry is implicit. If you look at the Penrose diagram for a Schwarzschild black hole, the singularity looks like a horizontal line. It's not a point-set, but if it were, it would clearly be spacelike, and it sort of looks like it would be a 3-surface. This is very different from what most people would probably imagine, which would be that it's a 1-dimensional timelike curve, like the world-line of an electron. If you try to develop this heuristic into something more rigorous, it basically doesn't work. This program is referred to as "boundary constructions." Reviews are available on this topic (Ashley, Garcia-Parrado). There are a number of more or less specific techniques for constructing a boundary, with an alphabet soup of names including the g-boundary, c-boundary, b-boundary, and a-boundary. As someone who is not a specialist in this subfield, the impression I get is that this is an area of research that has turned out badly and has never produced any useful results, but work continues, and it is possible that at some point the smoke will clear. As a simple example of what one would like to get, but doesn’t get, from these studies, it would seem natural to ask how many dimensions there are in a Schwarzschild black hole singularity. Different answers come back from the different methods. For example, the b-boundary approach says that both black-hole and cosmological singularities are zero-dimensional points, while in the c-boundary method (which was designed to harmonize with Penrose diagrams) they are three-surfaces (as one would imagine from the Penrose diagrams). May depend on the type of black hole People have studied GR in more than 3+1 dimensions, and, e.g., in 4+1 dimensions you can get things like "black rings." If you look at how these are actually described in the literature, people seem to find it more convenient to talk about the topology of the horizon, rather than the dimensionality of the singularity: http://relativity.livingreviews.org/Articles/lrr-2008-6/fulltext.html . This is presumably because the formalism is ill-suited to talking about the dimensionality of the singularity. Another example is the Kerr metric for a rotating black hole. The singularity is commonly described as a ring. But in this review article, there is a discussion of the singularity on p. 8 and again on p. 28. In both spots, there are scare quotes around "ring." Again I think this is because we can't really answer geometrical questions about the singularity, because it isn't a point-set, and therefore you can't say what the metric looks like there. A strong curvature singularity criterion doesn't provide an answer Another approach is to look at what happens to matter that goes into the formation of a black hole through gravitational collapse, or to a hypothetical cloud of test particles that falls into an eternal black hole. If the matter is crushed to zero volume, then it might make sense to interpret this as evidence that the singularity should be thought of as having zero volume. Unfortunately this doesn't necessarily give a definitive answer either. One can define something called a strong curvature singularity (s.c.s.), defined as one for which a geodesic is incomplete at affine parameter $\lambda=0$, with $\lim_{\lambda\rightarrow0}\lambda^2R_{ab}v^av^b\ne0$, where $v^a$ is the tangent vector. 
The volume of a cloud of test particles goes to zero as it approaches such a singularity, the interpretation being that infalling matter is crushed, not just spaghettified. A Schwarzschild spacetime's singularity is not an s.c.s., because it's a vacuum spacetime, so the Ricci tensor vanishes. That is, there is only spaghettification, not crushing. A cloud of test particles maintains an exactly constant volume as it falls in. However, a completely different situation could possibly exist during the collapse leading to the formation of an astrophysical black hole. During the collapse, there is infalling matter present, so the Ricci tensor need not vanish. Indeed, it appears that in some fairly realistic models of gravitational collapse, the singularity, during the period of collapse, is a timelike (locally naked) singularity (Joshi), meaning that it is completely different in character from the spacelike singularity of an eternal black hole such as a Schwarzschild black hole. It appears that in such calculations, the density of matter does blow up at the singularity, which suggests that it may be an s.c.s. during formation. Difficulties in the case of the Schwarzschild spacetime When we think about a black hole, we usually imagine by default the kind of eternal black hole described by the Schwarzschild spacetime. Some significant difficulties occur even in this simplest possible case. As noted above, the singularity may have a completely different character than the singularity occurring during the gravitational collapse of an astrophysical black hole, and this leads to suspicion that by considering the Schwarzschild case, we are omitting essential considerations. Furthermore, we have the no-hair theorem, which states that for a stationary electrovacuum spacetime having an event horizon, there is only one class of solutions, which can be parametrized by three variables: mass, charge, and angular momentum. This defines a clear sense in which the singularity of a stationary black hole has no physical properties. If it did have such properties, those would have to be limited to the list of three properties described by the no-hair theorem. However, these are not properties of the singularity but rather properties of some large region of spacetime as measured by a distant observer, who can't even say whether the singularity exists "now." (It may in fact appear to such an observer that infalling matter takes infinite time to pass through the horizon.) The chances for a more definitive answer might be better in the case of a naked singularity. Such a singularity can exist in both the past light cone and the future light cone of an observer, so one can imagine doing experiments on it and finding the results. In this sense it may be more likely to have measurable properties. References Ashley, "Singularity theorems and the abstract boundary construction," https://digitalcollections.anu.edu.au/handle/1885/46055 Garcia-Parrado and Senovilla, "Causal structures and causal boundaries," http://arxiv.org/abs/gr-qc/0501069 Joshi and Malafarina, "All black holes in Lemaitre-Tolman-Bondi inhomogeneous dust collapse," https://arxiv.org/abs/1405.1146
{ "domain": "physics.stackexchange", "id": 46605, "tags": "general-relativity, black-holes, event-horizon, singularities" }
Official sbt [maven] set up for rosjava
Question: I have been struggling to set up Scala access to ROS. This should normally work via rosjava, but I am struggling with build dependencies. This is the key part of the build.sbt file regarding rosjava: resolvers += "rosjava repository" at "https://github.com/rosjava/rosjava_mvn_repo/raw/master", resolvers += "jfrog repo" at "https://repo.jfrog.org/artifactory/libs-releases/", libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.6" % "test", libraryDependencies += "org.ros.rosjava_core" % "rosjava" % "[0.3,0.4)", The rosjava maven repo is more or less standard, following the maven setup suggestions online. But the build fails (on some commons packages) if I don't include the jfrog repo. I have no idea what the jfrog repo is, whether it is reliable, or whether it will be around to stay. Is this repo the right way to set up rosjava dependencies? What should I use otherwise? I suppose that any rosjava expert using maven or gradle will also be able to answer this question. You don't need to be a Scala/Sbt user. Thanks Originally posted by Andrzej Wasowski on ROS Answers with karma: 36 on 2017-03-17 Post score: 0 Answer: Half a year later I no longer have the problems. I use the following sbt setup for basic Ros Java (build.sbt): ... resolvers += "rosjava repository" at "https://github.com/rosjava/rosjava_mvn_repo/raw/master", libraryDependencies += "org.ros.rosjava_core" % "rosjava" % "[0.3,0.4)", libraryDependencies += "org.ros.rosjava_messages" % "sensor_msgs" % "[1.12, 1.13)", ... The reference to jfrog seems to no longer be needed. Originally posted by Andrzej Wasowski with karma: 36 on 2017-10-30 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 27347, "tags": "ros, gradle, rosjava" }
Current in a Concentration Cell
Question: I would like to know if there is a simple function that gives the current as a function of time in a concentration cell. The current is measured in the wire connecting the two terminals of the cell. The two half cells are isolated, i.e. not connected by a salt bridge, and the electrolyte is a liquid. I am looking at a very simple case without involving the factors that affect the current and the internal resistance of the cell. From my workings, I've got $\displaystyle I=C\ln\left( \frac { A-q }{ B+q } \right) $ or more specifically $\displaystyle \frac { dq }{ dt } =\frac { RT }{ nrF }\ln\left( \frac { VnF{ C }_\pu{ cathode }-q }{ VnF{ C }_\pu{ anode }+q } \right) $ where $R$ represents the gas constant $T$ represents the absolute temperature $n$ represents the number of moles of electrons exchanged $r$ represents the resistance of the wire $F$ represents Faraday's constant $V$ represents the volume of the solution (assuming both electrolytes' volumes to be the same) $C$ represents the initial concentration of the electrolyte in each half cell. Now I've tried integrating this function using WolframAlpha and then differentiating it to obtain the current explicitly as a function of time. It seems this function is too complicated for Wolfram to evaluate. (I've used the expressions with A, B, C.) Is my derived expression correct? Also, shouldn't a simple dynamic process have a simpler equation? Thanks. Answer: Yes, your derived expression looks correct to me. As to the closed form for the integral, WolframAlpha is most likely unable to find one because it is simply not there among the elementary and special functions. This has nothing to do with the equation being simple. Of course it is simple; nature "solves" it quite easily. It is just that nature's definition of "simple" does not necessarily coincide with ours. The helium atom is simple, yet good luck trying to solve the Schrödinger equation for it. Would you argue an atom is not actually that simple? Well, your concentration cell is made up of quite a few atoms. So it goes.
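Since a closed form seems out of reach, the current can still be obtained numerically. Below is a rough sketch using SciPy; every parameter value in it is invented purely for illustration (none are taken from the question), and the only point is that q(t) and hence I(t) = dq/dt fall straight out of a standard ODE solver without any antiderivative.

import numpy as np
from scipy.integrate import solve_ivp

R, T, F = 8.314, 298.15, 96485.0      # gas constant, temperature (K), Faraday constant
n, r, V = 1, 10.0, 1.0e-3             # electrons exchanged, wire resistance (ohm), half-cell volume (illustrative)
C_cat, C_an = 100.0, 1.0              # illustrative initial concentrations, cathode > anode

def dqdt(t, q):
    # the asker's ODE: dq/dt = RT/(nrF) * ln((V n F C_cat - q)/(V n F C_an + q))
    return (R * T) / (n * r * F) * np.log((V * n * F * C_cat - q) / (V * n * F * C_an + q))

sol = solve_ivp(dqdt, (0.0, 1.0e6), [0.0], dense_output=True, max_step=1000.0)
t = np.linspace(0.0, 1.0e6, 500)
q = sol.sol(t)[0]
current = dqdt(t, q)                  # I(t) = dq/dt
print(current[0], current[-1])        # the current decays as the two concentrations approach each other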
{ "domain": "chemistry.stackexchange", "id": 9621, "tags": "electrochemistry, numerical-analysis" }
Logical gates with integer values
Question: I have created this code for both boolean and integer values to display a truth table for an "AND","OR","XOR", "NOT" gate. However I think that my code needs reviewing as it could be simplified. public class LogicalOpTable { public static void main(String[] args){ boolean p,q; System.out.println("P\tQ\tAND\tOR\tXOR\tNOT"); p = false; q = false; System.out.print(p + "\t" + q + "\t" + (p&&q) + "\t"); System.out.println((p||q)+"\t"+(p^q)+"\t"+(!p)); p = false; q = true; System.out.print(p + "\t" + q + "\t" + (p&&q) + "\t"); System.out.println((p||q)+"\t"+(p^q)+"\t"+(!p)); p = true; q = false; System.out.print(p + "\t" + q + "\t" + (p&&q) + "\t"); System.out.println((p||q)+"\t"+(p^q)+"\t"+(!p)); p = true; q = true; System.out.print(p + "\t" + q + "\t" + (p&&q) + "\t"); System.out.println((p||q)+"\t"+(p^q)+"\t"+(!p)); System.out.println(); withBinary(); } public static void withBinary(){ System.out.println("A\tB\tAND\tOR\tXOR\tNOT"); int a = 0; int b = 0; int and = a&b; int or = a|b; int xor = a^b; int not = a; if(a==0 && b == 0 ) not = 1; System.out.println(a + "\t" + b + "\t" + and + "\t" + or + "\t" + xor + "\t" + (not)); b=1; and = a&b; or = a|b; xor = a^b; not = a; if(a==0 && b == 1) System.out.println(a + "\t" + b + "\t" + and + "\t" + or + "\t" + xor + "\t" + (not)); a=1; b=0; not = b; and = a&b; or = a|b; xor = a^b; not = a; if(a==1 && b == 0) System.out.println(a + "\t" + b + "\t" + and + "\t" + or + "\t" + xor + "\t" + (not)); a=1; b=1; not = 0; and = a&b; or = a|b; xor = a^b; if(a==1 && b == 1) System.out.println(a + "\t" + b + "\t" + and + "\t" + or + "\t" + xor + "\t" + (not)); } } Answer: First part You want to iterate through all possible combinations of for a pair of booleans, so you can make the code reflect that explicitly and make it simpler: System.out.println("P\tQ\tAND\tOR\tXOR\tNOT"); for (boolean p : new boolean[] {true, false}) { for (boolean q : new boolean[] {true, false}) { System.out.print(p + "\t" + q + "\t" + (p&&q) + "\t"); System.out.println((p||q)+"\t"+(p^q)+"\t"+(!p)); } } System.out.println(); Second part You have something suspicious in your if statements. As you don't use braces, this code smells of a copy&paste bug: if(a==0 && b == 0 ) not = 1; System.out.println(a + "\t" + b + "\t" + and + "\t" + or + "\t" + xor + "\t" + (not)); If the condition is met, it will execute the line not = 1;. The second line will be executed independently of the condition. I strongly suggest you use braces even for 1-line blocks, like this: if (a == 0 && b == 0) { not = 1; System.out.println(a + "\t" + b + "\t" + and + "\t" + or + "\t" + xor + "\t" + (not)); } This way you can avoid this kind of bugs. Now, applying the same reasoning as with the previous method, you can rewrite it as: System.out.println("A\tB\tAND\tOR\tXOR\tNOT"); for (int a : new int[] {0, 1}) { for (int b : new int[] {0, 1} ) { System.out.println(a + "\t" + b + "\t" + (a & b) + "\t" + (a | b) + "\t" + (a ^ b) + "\t" + ~a); } } BUT What do you want to accomplish with the NOT operation? Do you mean the Bitwise Complement? I used that, but maybe you want a function that returns 0 when it's 1, and 1 when it's 0. In that case, you need to replace the ~a with something like (a == 0) ? 1 : 0. 
So the whole code could be reduced to just: public static void printTable() { System.out.println("P\tQ\tAND\tOR\tXOR\tNOT"); for (boolean p : new boolean[]{true, false}) { for (boolean q : new boolean[]{true, false}) { System.out.print(p + "\t" + q + "\t" + (p&&q) + "\t"); System.out.println((p||q)+"\t"+(p^q)+"\t"+(!p)); } } System.out.println(); System.out.println("A\tB\tAND\tOR\tXOR\tNOT"); for (int a : new int[]{ 0, 1}) { for (int b : new int[]{0, 1} ) { System.out.println(a + "\t" + b + "\t" + (a & b) + "\t" + (a | b) + "\t" + (a ^ b) + "\t" + ~a); } } }
{ "domain": "codereview.stackexchange", "id": 15110, "tags": "java" }
Find the longest length of sequence of 1-bits achievable by flipping a single bit from 0 to 1 in a number
Question: Problem Statement The problem is defined in the book as follows: 5.3 You have an integer and you can flip exactly one bit from 0 to 1. Write code to find the length of the longest sequence of 1s you could create. EXAMPLE Input 1775 (or: 11011101111) Output 8 — Cracking the Coding Interview (6th edition) Feedback I am looking for Here's the list of things I am interested to hear back about (in order of significance): Design decisions and improvements (as in "better approach(es) performance- and memory-wise"). Code readability. JavaScript (ES6) language idioms. Whatever you find important to say that does not fall into the three categories mentioned above. My approach, design, implementation, and performance description Both the time and space complexity of the solution are O(n), where n is the total count of bits in a bit representation of the integer number. However, I feel there might be some smart approach (or a "trick") that improves the solution. My code basically consists of three parts. The numbersWithSingleZeroFlippedToOneIn(n) function attempts to set a single bit to 1 via the bitwise or (|) operator with a 1 shifted to every possible position. If the result of that | application to n does not equal n itself, it means the bit has changed state from 0 to 1 and the resulting number should be used in the next step. The numbers from the previous steps are iterated through via the reduce() function. The seed value is set to -1 which indicates an "unknown" maximal length of sequence of 1s (which is determined by making a call to longestSequenceOfOnes(n)). The longestSequenceOfOnes(n) function slides from one side of the bit array to another and increments the sequence length by 1 for each observed 1-bit, or resets the sequence length to 0 when a 0-bit is observed. The code actually explains this part better... 
Code const NUMBER_OF_BITS_IN_NUMBER = 32; function flipToWin(numberToFlip) { return numbersWithSingleZeroFlippedToOneIn(numberToFlip) .reduce( (subresult, flippedNumber) => Math.max(subresult, longestSequenceOfOnes(flippedNumber)), -1, ); } function numbersWithSingleZeroFlippedToOneIn(numberToFlip) { const flippedNumbers = []; for (let shift = 0; shift < NUMBER_OF_BITS_IN_NUMBER; shift++) { const candidate = numberToFlip | (1 << shift); const isFlipped = candidate !== numberToFlip; if (isFlipped) flippedNumbers.push(candidate) } return flippedNumbers; } function longestSequenceOfOnes(flippedNumber) { let longestSequence = 0; let currentSequence = 0; for (let position = 0; position < NUMBER_OF_BITS_IN_NUMBER; position++) { const isBitInPositionSet = flippedNumber & (1 << position); if (isBitInPositionSet) { currentSequence += 1; } else { longestSequence = Math.max(longestSequence, currentSequence); currentSequence = 0; } } longestSequence = Math.max(longestSequence, currentSequence); return longestSequence; } Unit tests import { flipToWin } from '../src/cracking-the-coding-interview/5-bit-manipulation/5-3-flip-to-win'; describe(flipToWin.name, () => { [ { inputNumber: 0, expectedResult: 1 }, { inputNumber: 1, expectedResult: 2 }, { inputNumber: 2, expectedResult: 2 }, { inputNumber: 4, expectedResult: 2 }, { inputNumber: 8, expectedResult: 2 }, { inputNumber: 16, expectedResult: 2 }, { inputNumber: 32, expectedResult: 2 }, { inputNumber: 3, expectedResult: 3 }, { inputNumber: 5, expectedResult: 3 }, { inputNumber: 6, expectedResult: 3 }, { inputNumber: 10, expectedResult: 3 }, { inputNumber: 12, expectedResult: 3 }, { inputNumber: 20, expectedResult: 3 }, { inputNumber: 24, expectedResult: 3 }, { inputNumber: 48, expectedResult: 3 }, { inputNumber: (~0 & (~0 << 1)), expectedResult: 32 }, { inputNumber: (~0 & (~0 << 2)), expectedResult: 31 }, { inputNumber: (~0 & (~0 << 3)), expectedResult: 30 }, { inputNumber: (~0 & (~0 << 4)), expectedResult: 29 }, ].forEach(({ inputNumber, expectedResult }) => { it(`Should return length of ${expectedResult} for input number ${inputNumber}.`, () => { expect(flipToWin(inputNumber)).toEqual(expectedResult); }); }); }); Answer: I will leave it for others to talk about the code details, because I'm not as up to date with Javascript as I might be. Instead I'm going to focus on the analysis of the algorithm and alternatives. What you have is technically an \$ O(1) \$ approach on the basis that all your loops are up to a constant 32. However, that's generally uninteresting for analysis so I'll assume the number of possible bits can vary. (i.e. assume instead that the input is an arbitrarily long array of ones and zeroes) In more conventional analysis, it's \$ O(n^2) \$ in time and space: You have a list of size \$ O(n) \$ coming out of numbersWithSingleZeroFlippedToOneIn and for each element in that list you're running an \$ O(n) \$ function longestSequenceOfOnes. Without changing your core algorithm, you could reduce the space complexity massively by using a generator rather than storing flippedNumbers (an approach that is normally worth considering whenever you create an array to iterate over just once.) A theoretical lower bound on this problem is \$O(n)\$ because any algorithm must check each bit in the input sequence. Supposing the task was simpler: find the longest sequence of 1-bits in the number. One way of doing that is to loop over the bits of the number for one pass, keeping track of the last observed zero. 
When you hit another zero, you check the number of ones between the current and last zero, and compare it to the longest sequence seen thus far. This approach can be easily adapted to the current problem while remaining an \$O(n)\$ algorithm (and \$O(1)\$ in space). All you'd have to do is keep track of the last two seen zeros, because the ability to flip a single zero to a one is equivalent to the ability to just ignore one zero.
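To make that last idea concrete, here is a small sketch of the single-pass version (written in Python rather than ES6 purely for brevity; the translation is mechanical). It keeps only the length of the run of 1s ending at the current bit and the length of the run just before the last 0 seen, so it runs in O(n) time and O(1) extra space.

def flip_to_win(x, n_bits=32):
    # length of the longest run of 1s obtainable by flipping one 0 to 1
    if x == (1 << n_bits) - 1:
        return n_bits                 # already all ones: nothing left to flip
    best, current, previous = 0, 0, 0
    for i in range(n_bits):
        if (x >> i) & 1:
            current += 1
        else:
            # this 0 is a flip candidate; the run before it becomes "previous"
            previous, current = current, 0
        # previous run + the flipped 0 + the run ending at the current bit
        best = max(best, previous + 1 + current)
    return best

print(flip_to_win(1775))              # 8, matching the book's example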
{ "domain": "codereview.stackexchange", "id": 30683, "tags": "javascript, interview-questions, ecmascript-6, bitwise" }
rosparam load all yaml-files in folder
Question: Hi, is it possible to load all yaml-files in a folder? Like: <rosparam folder="$(find parameter)/my_params/" /> Originally posted by Tobias Neumann on ROS Answers with karma: 179 on 2014-09-05 Post score: 0 Answer: According to the roslaunch doc it shouldn't work: "The <rosparam> tag can either reference a YAML file or contain raw YAML text." You could try writing a small node that takes a path and uses rosparam.load_file(). Originally posted by Chrissi with karma: 1642 on 2014-09-05 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Tobias Neumann on 2014-09-05: Yes I thought about that too, but I thought there might be an easier solution. But if I write a node, how can I guarantee that all parameters are loaded before my launch file starts the next node that depends on them? Comment by Chrissi on 2014-09-05: One solution would be to offer a service once all parameters are set and to make the other nodes wait for that service to become available before doing anything. Comment by Tobias Neumann on 2014-09-05: Mh, that would work, but needs more coding than adding a line for each yaml file in the launch file :( Might there be a different way? Would be a cool feature for rosparam Comment by Chrissi on 2014-09-06: We are actually following a database approach to that. In my project we are using mongodb and have a tool that reads yaml files, saves the found parameters in the datacentre and also sets them on the parameter server. You can have a look at this
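For what it's worth, the "small node" idea from the answer and the follow-up comments can be sketched in a few lines of Python. Everything here other than rosparam.load_file() and rosparam.upload_params() — the node name, the ~folder parameter, and the /params_ready service — is made up for this example; the service only exists so that dependent nodes can wait_for_service() before reading the parameters.

#!/usr/bin/env python
import glob, os
import rospy
import rosparam
from std_srvs.srv import Empty, EmptyResponse

if __name__ == '__main__':
    rospy.init_node('load_yaml_folder')
    folder = rospy.get_param('~folder')
    for path in sorted(glob.glob(os.path.join(folder, '*.yaml'))):
        # load_file returns a list of (params, namespace) pairs
        for params, ns in rosparam.load_file(path):
            rosparam.upload_params(ns, params)
        rospy.loginfo('loaded %s', path)
    # dependent nodes can call rospy.wait_for_service('/params_ready') before starting
    rospy.Service('/params_ready', Empty, lambda req: EmptyResponse())
    rospy.spin()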
{ "domain": "robotics.stackexchange", "id": 19311, "tags": "ros, roslaunch, yaml, rosparam" }
Idiomatic and Conventional F#
Question: As of late I have been learning F#. Today I accumulated what I have learned so far to develop my first program - a script that counts the number of lines of code in a given Visual Studio project. The code for said program is as follows: #r "System.Xml.Linq" open System open System.IO open System.Xml.Linq open System.Text.RegularExpressions printf "Enter solution path: " let solutionPath = Console.ReadLine() let solutionDirPath = Path.GetDirectoryName(solutionPath) let solutionText = File.ReadAllText(solutionPath) let projectPaths = Regex.Matches(solutionText, "([^\"]*csproj)") |> Seq.cast<Match> |> Seq.map (fun m -> Path.Combine(solutionDirPath, m.Groups.[1].Value)) let codeFilePaths = projectPaths |> Seq.map (fun projPath -> let projectDirPath = Path.GetDirectoryName(projPath) let document = XDocument.Load(projPath) document.Descendants() |> Seq.where (fun desc -> desc.Name.LocalName = "Compile") |> Seq.map (fun desc -> Path.Combine(projectDirPath, desc.FirstAttribute.Value))) |> Seq.concat let locIsSignificant (loc : string) = let sanitizedLoc = loc.Trim() match sanitizedLoc with | "" -> false | "{" -> false | "}" -> false | _ -> true let totalLoc = codeFilePaths |> Seq.sumBy (fun path -> File.ReadLines(path) |> Seq.where locIsSignificant |> Seq.length) printfn "Total Loc: %i" totalLoc Console.ReadKey() |> ignore I have tried write both clean and idiomatic code but I still have some concerns, namely: In this script, I introduce sequential intermediate bindings (projectPaths->codeFilePaths->totalLoc) however, the binding locIsSignificant feels like an interruption to this sequence. Have I arranged my code in an conventional way? Is my code as idomatic as can be? That is, could I make the code more expressive by using constructs or commands that I have not considered? Of course, any additional feedback is welcome/encouraged. Answer: Regex.Matches(solutionText, "([^\"]*csproj)") |> Seq.cast<Match> |> Seq.map (fun m -> Path.Combine(solutionDirPath, m.Groups.[1].Value)) Why are you using groups? Since the whole match is a single group, you can remove the parentheses from the expression and then directly use m.Value. Also, you should probably check that the name ends in .csproj, not just csproj and that it's actually the end of the file name. let codeFilePaths = projectPaths |> Seq.map (fun projPath -> let projectDirPath = Path.GetDirectoryName(projPath) let document = XDocument.Load(projPath) document.Descendants() |> Seq.where (fun desc -> desc.Name.LocalName = "Compile") |> Seq.map (fun desc -> Path.Combine(projectDirPath, desc.FirstAttribute.Value))) |> Seq.concat I think the complicated map lambda makes this hard to read, I would instead write it as a local named function. If you don't want to do that, at least make sure to indent your code properly (the first |> Seq.map and the |> Seq.concat should be on the same level). Instead of the where, you can use the overload of Descendants that accepts a node name (but this means you have to use XML namespaces). Instead of relying on the first attribute being the right one, I would access the attribute by name. 
With all these changes and John Palmer's suggestion about Seq.collect, the code becomes: let codeFilePaths = let codeFilesForProject projPath = let projectDirPath = Path.GetDirectoryName(projPath) let document = XDocument.Load(projPath) let ns = XNamespace.Get "http://schemas.microsoft.com/developer/msbuild/2003" document.Descendants(ns + "Compile") |> Seq.map (fun desc -> Path.Combine(projectDirPath, desc.Attribute(XName.Get "Include").Value)) projectPaths |> Seq.collect codeFilesForProject
{ "domain": "codereview.stackexchange", "id": 9666, "tags": ".net, f#" }
Can an observable be invariant under local $U(1)$ but not under global $U(1)$?
Question: Consider a quantum field theory with two fields, a complex scalar field $\phi$ and a $U(1)$ gauge field $A$. Both fields are dynamic fields, not background fields. Suppose that spacetime is topologically trivial. Suppose that the lagrangian is invariant under \begin{align*} \phi(x) &\to e^{-i\theta(x)}\phi(x) \\ A(x) &\to A(x)+d\theta(x), \tag{1} \end{align*} for all $\theta(x)$, and define two groups: $G$ is the group of all transformations of the form (1). $H$ is the subgroup with $\theta(x)\to 0$ as $|x|\to\infty$. I'm deliberately leaving the lagrangian unspecified, but assume that if the field $A$ were omitted, then the global $U(1)$ symmetry of the remaining $\phi$-only model would not have an 't Hooft anomaly. Question: Can an operator constructed from the fields $\phi$ and $A$ be invariant under $H$ but not under $G/H$? Here's my attempt to construct an operator that is invariant under $H$ but not under $G/H$: $$ \phi(x)\exp\left(-i\sum_u \int_{P(x,u)} A\right), \tag{2} $$ where $P(x,u)$ is a path from $x$ to spacelike infinity, approaching spacelike infinity in the direction $u$. The sum over directions $u$ is an attempt to make (2) well-defined by "smearing out" the seemingly ill-defined behavior at spacelike infinity. Intuitively, this is a charged entity $\phi(x)$ "dressed" by an electromagnetic field (like the Coulomb field) as required by Gauss's law (= gauge invariance). If (2) really is well-defined, then it is invariant under $H$ but not under the transformations in $G$ with constant $\theta$. But I don't know if (2) really is well-defined. Naively, operators like (2) seem to be needed so that we can create states with charged particles (including their electric fields) from the vacuum. However, states of different charge are normally considered to belong to different superselection sectors (different Hilbert-space representations of the algebra of observables), which suggests that such operators can't actually exist. And in this particular example, the model could be in the Higgs phase, with no unscreened charges. So I wouldn't be surprised if the example (2) is incurably ill-defined, but can this be turned into a compelling argument that no such operators exist? The paper Lectures on the Infrared Structure of Gravity and Gauge Theory has things to say about the significance of $G/H$, but I didn't find an answer to my question there. Maybe related: Why is $U(1)$ special when defining global charges? Answer: The answer to your question is "yes". The group G contains the global symmetry group $U(1)$, which is associated by Noether's theorem to electric charge. There are many observables which are not invariant under this symmetry. Your operator 2 is close to correct. What you really want is a $\phi$-field dressed with a Wilson line going to spatial infinity. $$ \phi(x) e^{i\int_P A_\mu dx^\mu} $$ where $P$ is a path connecting $x$ to spatial infinity. This is well-defined operator, and it'll fail to be invariant under non-trivial transformations at infinity. It's a standard gauge-invariant construction of the charged state creation/annihilation operators. (Appendix added by OP based on user1504's comments) One way to see that it's well-defined is to formulate the theory on a finite lattice. The factor $\phi(x)$ is associated with lattice site $x$, and the exponential factor is a product of "link variables" $U(x_1,x_2)$ along any path from $x$ to any point on the boundary. 
Each link variable is an element of $U(1)$ associated with a nearest-neighbor pair $(x_1,x_2)$ of sites. This is the lattice version of a $U(1)$ gauge field. To help make the relationship between the lattice formulation and the continuum picture more intuitive, the boundary can be structured as suggested in this comment (copied from user1504): The lattice limits are easiest to deal with if you set things up so that the boundary of the lattice has no internal edges, only points. Let $G$ be the group of transformations of the form \begin{align} \phi(x) &\to \phi(x)g(x) \\ U(x,y) &\to g^{-1}(x)U(x,y)g(y) \end{align} with $g(x)\in U(1)$ for each site. Define $H$ to be the subgroup of $G$ for which $g(x)=1$ on all boundary sites. Then the operator described above is invariant under $H$ but not under all of $G$. In particular, it is not invariant if $g(x)$ is independent of $x$ (a "global" $U(1)$ transformation).
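Spelling that out with the conventions above (the symbol $\mathcal{O}$ and the path labels are introduced here only for illustration): for a path of sites $x=x_0,x_1,\dots,x_N$ with $x_N$ on the boundary, the dressed operator is $$\mathcal{O}(x)\equiv\phi(x)\,U(x_0,x_1)U(x_1,x_2)\cdots U(x_{N-1},x_N),$$ and under a transformation in $G$ the intermediate factors cancel telescopically, $$\mathcal{O}(x)\to\phi(x)g(x_0)\,\big[g^{-1}(x_0)U(x_0,x_1)g(x_1)\big]\cdots\big[g^{-1}(x_{N-1})U(x_{N-1},x_N)g(x_N)\big]=\mathcal{O}(x)\,g(x_N).$$ The only surviving factor is $g(x_N)$ at the boundary site: it equals $1$ for every element of $H$, but equals the constant phase for a global $U(1)$ transformation, which is exactly the claimed behavior.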
{ "domain": "physics.stackexchange", "id": 66736, "tags": "quantum-field-theory, symmetry, quantum-electrodynamics, gauge-invariance" }
moveit: cannot move interactive marker with IKfast
Question: Hello, I've got a problem with ikfast for moveit. I've gone through all the tutorials, and KDL and Trac_ik are working fine in rviz. However, when I'm using the 3DOF IKfast I can't set the goal position with the marker or interact in any way with the leg with Move Group Interface. The leg is a simple leg with 3 actuators in a row. When I print openrave-robot.py <myrobot_name>.dae --info links I get the following links: base_link 0 thorax 1 base_link leg_center_l1 2 thorax coxa_l1 3 leg_center_l1 femur_l1 4 coxa_l1 tibia_l1 5 femur_l1 tibia_foot_l1 6 tibia_l1 The ik-fast was calculated with python `openrave-config --python-dir`/openravepy/_openravepy_/ikfast.py --robot=<myrobot_name>.dae --iktype=translation3d --baselink=2 --eelink=6 --savefile=ik_leg1.cpp So it is the exact same problem as in this case https://www.youtube.com/watch?v=M1c8A-DfghA but using "position_only_ik: true" in kinematics.yaml doesn't solve the problem: leg_1: kinematics_solver: group_kinematics/IKFastKinematicsPlugin kinematics_solver_attempts: 3 kinematics_solver_search_resolution: 0.005 kinematics_solver_timeout: 0.005 position_only_ik: true I assume that the IK cannot solve for the solution and somehow the srdf may be wrong. I've used many configurations in the srdf, from assigning only joints and one eef: <group name="leg_1"> <joint name="base_joint" /> <joint name="leg_center_joint_l1" /> <joint name="coxa_joint_l1" /> <joint name="femur_joint_l1" /> <joint name="tibia_joint_l1" /> <joint name="tibia_foot_joint_l1" /> </group> <end_effector name="leg_1_foot" parent_link="tibia_foot_l1" group="leg_1" /> to putting everything in a chain and putting the rest in a subgroup: <group name="leg_1"> <chain base_link="thorax" tip_link="tibia_l1" /> </group> <group name="leg_test"> <joint name="leg_center_joint_l1" /> <joint name="coxa_joint_l1" /> <joint name="femur_joint_l1" /> <joint name="tibia_joint_l1" /> <joint name="tibia_foot_joint_l1" /> <joint name="base_joint" /> </group> <end_effector name="leg1eef" parent_link="tibia_l1" group="leg_test" parent_group="leg_1" /> Originally posted by PizzaTime on ROS Answers with karma: 36 on 2016-03-29 Post score: 1 Original comments Comment by gvdhoorn on 2016-03-30: Have you ticked the Approximate IK checkbox in the RViz MoveIt plugin UI? I'm not sure just setting position_only_ik: true is enough for the RViz plugin. It does influence the programmatic interface, but the interactive marker may be a separate thing. Comment by PizzaTime on 2016-03-30: Hello, Yes, I've tried to allow approximate IK and also set collisions to ignore. Comment by PizzaTime on 2016-03-30: I've tried to use a newer openrave version without any success. Do you know if there is a convenient way to test the generated ik? Answer: Ok, so now it runs, though roughly and very ruggedly. What have I done? I've installed the recommended version of sympy as told in this wiki http://wiki.ros.org/Industrial/Tutorials/Create_a_Fast_IK_Solution/Prerequisites I've simplified the dae and used only the arm I want to calculate. I've rounded the dae: rosrun moveit_ikfast round_collada_numbers.py <myrobot_name>.dae <myrobot_name>.rounded.dae 5 and regenerated the ikfast solution... PS: without the correct sympy version it was impossible to generate the ikfast for the rounded.dae But still it is not usable and I'm still unable to use the move_group_interface to set any valid positions. Ok, I got it :-) The problem was finally solved when I changed my urdf model and removed all stl files. 
The generated ik_fast solver was significantly smaller and now everything works perfectly! Originally posted by PizzaTime with karma: 36 on 2016-03-30 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by nuboter_cc on 2020-08-22: Thank you for your answer, it gives me some guidance. But there are still some questions. I built my robot in SolidWorks, and I exported it as urdf. The robot description contains some links to the mesh.STL in the urdf. I met the same problem as you. Should I transform mesh.STL to mesh.dae, as in your answer? If I should, what should I watch out for in this process?
{ "domain": "robotics.stackexchange", "id": 24273, "tags": "rviz, moveit, ikfast, marker, openrave" }
Speed optimization for block XOR
Question: In code I'm currently maintaining, there is a need to do very many repeated XOR operations of blocks of memory. The block size in my case is always 16 bytes. Because the code is executed very frequently, and because it's speed critical, I would like it to be as fast as possible. Ordinarily I'd write this in assembly language for that reason, but I'd prefer that the code stay in C++ to maximize portability. This is what I'm currently using, but I would like a review to address particularly the following questions: Can this code be made faster? Can the portability be improved? In particular, I'm interested in avoiding problems on architectures for which there is a penalty for misaligned memory access. memxor.c const size_t BLKSIZE = 16; typedef unsigned bigsize_t; inline void memxor(uint8_t dst[BLKSIZE], const uint8_t *a, const uint8_t *b) { bigsize_t *s=reinterpret_cast<bigsize_t *>(dst); const bigsize_t *au = reinterpret_cast<const bigsize_t *>(a); const bigsize_t *bu = reinterpret_cast<const bigsize_t *>(b); for (size_t i=BLKSIZE/sizeof(*s); i; --i) *s++ = *au++ ^ *bu++; } Answer: You're stuck at a point where your readability is good, and any performance improvements are going to come at the expense of making the code more ugly. Additionally, you're already making architecture choices based on the compiler's interpretation of 'unsigned'. It's already not pretty. Oh, and the un-braced 1-line for-loop is a problem for readability too. I don't know of any optimizations that are available at this point that will not come with the portability, or readability cost. For a block-size of 16 bytes, and with a likely 4-byte unsigned value, the odds are that your loop will iterate 4 times only anyway. The compiler may unroll that loop, and move on. The above is just a waffle that really means: from here on, you cannot accomplish all three: clean code, portability, and performance. You need to compromise somewhere. Profiling would be the logical thing to do. profile your current code, and establish a baseline. I would then suggest that you investigate whether you can have a larger block size. Any alignment fiddling you may have to do will pay off better if you are working with larger blocks. Can you batch these blocks in to 1MB contiguous regions? With larger contiguous blocks you could then consider vectorization, alignment, and other optimizations that have an increased, and fixed setup cost, but improved throughput. Additionally, I would consider forcing bigsize_t to be unsigned long long, and then manually unroll the loop. If you find an exact type that is 64 bits (there has to be one? and, if not, you can have an alternate implementation...), then you can force your type to match: #include <limits.h> #if UINT_MAX == 18446744073709551615ULL typedef unsigned int big64_t; #elif ULONG_MAX == 18446744073709551615ULL typedef unsigned long big64_t ; #elif ULLONG_MAX == 18446744073709551615ULL typedef unsigned long long big64_t; #else #error "Cannot find unsigned 64bit integer." #endif big64_t *s = reinterpret_cast<big64_t *>(dst); const big64_t *au = reinterpret_cast<const big64_t *>(a); const big64_t *bu = reinterpret_cast<const big64_t *>(b); s[0] = au[0] ^ bu[0]; s[1] = au[1] ^ bu[1]; With systems that are not natively 64-bit, the compiler will adjust the operation to be relatively efficient anyway. In the event that this is the year 2100, and all datatypes are 128 bit, or more, then you should possibly add the code that does a single 128-bit XOR for your input. 
I see the above as being as portable, and equally readable. As for the performance, that will require a profile and benchmark.
{ "domain": "codereview.stackexchange", "id": 11608, "tags": "c++, performance, c++11, portability" }
concatenate string, with different first and last iteration
Question: Good day. My code works but I think there is a better way to do this. Like string patterns or string manipulation; I'm not yet familiar with either term. The goal is to get "=A1-A2-B3-D4-WHATEVER STRING=" from an array of strings. The code is: string[] arr = { "A1", "A2", "B3", "D4", "WHATEVER STRING"}; // can be any string value string newString = "="; for (int i = 0; i < arr.Length; i++) { if (i == arr.Length-1) { newString += arr[i].ToString() + "="; } else { newString += arr[i].ToString() + "-"; } } Answer: You should use String.Join() here. It's easier to read and shorter as well. Like string[] arr = { "A1", "A2", "B3", "D4", "WHATEVER STRING" }; // can be any string value string result = "=" + string.Join("-", arr) + "=";
{ "domain": "codereview.stackexchange", "id": 39238, "tags": "c#" }
AMCL Particle Weights Are Always Equal - Why?
Question: I'm driving the Turtlebot around the turtlebot_world Gazebo environment. I'm printing out the AMCL particle filter poses and the weights assigned to the poses, and when I drive around for a while (as long as 5 minutes), the weights assigned to the poses in the particle filter are all set to the same value (particle weight = 0.00196, for 500 particles, so they sum to 1, of course). I expected there to be some variety in the particles' weights - some higher, some lower. My theories as to why they're all equal: I could be printing the weights at the wrong point of the AMCL code. It's a demo Gazebo world and a demo AMCL map - is the set-up too "perfect" for different hypotheses to result in different weights or something? Here's a code snippet: File: AMCL/src/amcl_node.cpp Function: AmclNode::laserReceived(const sensor_msgs::LaserScanConstPtr& laser_scan) ~Line 1245, Added the following: for(int i=0;i<set->sample_count;i++) { ROS_DEBUG("Weight: %f",set->samples[i].weight); } Originally posted by ElizabethA on ROS Answers with karma: 120 on 2017-11-10 Post score: 0 Answer: Particle weights are supposed to be different, but AMCL resamples every resample_interval_, which could be every step. Regardless, you are indeed getting your data after the measurement update step but also after resampling. After resampling every particle weight is the same. Try looking at the particles after sensor data is integrated: line 1235, before the resampling in the next line. Originally posted by PeterMilani with karma: 1493 on 2017-11-10 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by ElizabethA on 2017-11-11: Great catch! I moved the print statement to line 1235 before the resampling, and now I'm seeing variety in the particle weights. Woohoo! Thank you. My resample_interval_ parameter = 1, so it's resampling at every step. Comment by PeterMilani on 2017-11-16: great! please mark the answer as correct!
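As an aside, the effect PeterMilani describes is easy to reproduce outside of AMCL. The toy multinomial resampler below is only a sketch (it is not AMCL's actual KLD/low-variance sampler), but it shows the mechanism: whatever the weights were after the measurement update, every particle that survives resampling is handed the same weight 1/N.

import numpy as np

def resample(particles, weights, rng=np.random.default_rng(0)):
    # draw N indices with probability proportional to weight, then reset all weights to 1/N
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)
    return [particles[i] for i in idx], np.full(n, 1.0 / n)

particles = ['a', 'b', 'c', 'd']
weights = np.array([0.70, 0.15, 0.10, 0.05])   # weights differ after the measurement update
particles, weights = resample(particles, weights)
print(weights)                                 # [0.25 0.25 0.25 0.25] -- all equal, as observed in the question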
{ "domain": "robotics.stackexchange", "id": 29335, "tags": "navigation, amcl" }
Why isn't lateral torsional buckling of rectangular timber members dependent on torsional stiffness?
Question: I'm reading about buckling of timber members from Eurocode 5: Design of timber structures part 1. Section 6.3.3 gives instructions to check the stability of beams subject to compression, bending, or a combination of both. Formula 6.30 gives the relative slenderness of a member: $$\lambda_{rel.m} = \sqrt{\frac{f_{m,k}}{\sigma_{m,crit}}}$$ where $\sigma_{m,crit}$ is the critical bending stress calculated according to the classical theory of stability, using 5-percentile stiffness values. $f_{m,k}$ is the characteristic bending strength of the timber. Formula 6.31 gives the critical bending stress for lateral torsional buckling: $$\sigma_{m,crit}=\frac{\pi \sqrt{E_{0.05}IG_{0.05}I_t}}{WL_{eff}}$$ Formula 6.32 gives the critical bending stress for a solid rectangular cross-section: $$\sigma_{m,crit}=\frac{0,78b^2}{hL_{ef}}E_{0,05}$$ I'm curious about this difference between formulae 6.31 and 6.32. The general formula 6.31 is the usual formula for lateral torsional buckling that I'm familiar with. For some reason, a different formula for rectangular cross sections is given. One thing that strikes me is that this formula does not include the shear modulus $G$, or the torsional stiffness $I_t$. Why isn't the lateral torsional critical moment of rectangular cross sections dependent on shear modulus $G$ or torsional stiffness? Are the two formulas even equivalent (does the specific formula for rectangular sections follow from the general one?) Answer: Bit late to the party, but there's an explanation to be found in the book 'Structural Timber Design To EC5' section 4.5.1.2: Seems like the removal of the torsional rigidity term is an engineering decision rather than a mathematical one. Equation 4.7(a) in the image will simplify to the EC5 softwood equation if you use the approximation G = E/16.
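For reference, the reduction can be sketched directly (this is my own working under the answer's G = E/16 assumption, not a quote from EC5 or the book): take the rectangular-section properties $I=hb^3/12$, $I_t\approx hb^3/3$ (the thin-rectangle value, valid for $h\gg b$), $W=bh^2/6$, and $G_{0,05}\approx E_{0,05}/16$, and substitute them into 6.31: $$\sigma_{m,crit}=\frac{\pi\sqrt{E_{0,05}\,\frac{hb^3}{12}\cdot\frac{E_{0,05}}{16}\cdot\frac{hb^3}{3}}}{\frac{bh^2}{6}\,L_{ef}}=\frac{\pi E_{0,05}\,hb^3/24}{\frac{bh^2}{6}\,L_{ef}}=\frac{\pi}{4}\,\frac{b^2}{hL_{ef}}E_{0,05}\approx\frac{0.785\,b^2}{hL_{ef}}E_{0,05},$$ which is the 6.32 form; the code's slightly lower 0.78 presumably also absorbs the $(1-0.63\,b/h)$ correction in the exact torsion constant of a rectangle. So the two formulas are equivalent for rectangles once $G$ is pinned to $E/16$, which is exactly why $G$ and $I_t$ no longer appear explicitly.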
{ "domain": "engineering.stackexchange", "id": 4793, "tags": "structural-engineering, buckling" }
What is the force of friction between two bodies given their masses and a force pulling them as a unit accross a surface?
Question: Where a force of 200N pulls two blocks together (as one system) across a horizontal table top (µ=0.800). $m_A$ = 5.00kg, $m_B$ = 10.0kg Find the acceleration of the system. Find f$_k$ between B and A I found a to be 5.485 m/s² which agrees with the textbook's 5.5 m/s². The textbook says b is 173N but I can't seem to get a number even close to that. How does one go about solving this kind of problem? Please provide calculations or formulas in the order they are needed, instead of just abstract steps. Answer: If blocks A and B are moving together as a system, the two blocks will not have kinetic friction between the two of them (because they are stationary relative to each other). Draw your free-body diagram of both blocks individually, and write an expression for all the forces acting on each block. Share what you find by editing your question, so that we may know why you might not be getting the answer you desire. $\sum F = ma$ Edit: You did not explain the question well enough, but I can see that the friction is indeed equal to 172.5 N assuming that mass A is on top of mass B, and the 200 N force is applied to the top mass (A).
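For reference, here is how the numbers work out under that assumption (taking $g \approx 9.81\ \mathrm{m/s^2}$; the configuration itself is inferred from the textbook answers, since the question does not state it): $$a=\frac{F-\mu(m_A+m_B)g}{m_A+m_B}=\frac{200-0.8(15)(9.81)}{15}\approx 5.49\ \mathrm{m/s^2},$$ and, from the free-body diagram of A alone (the pulled top block), the friction at the A–B interface is $$f=F-m_Aa=200-5.00(5.49)\approx 172.5\ \mathrm{N},$$ which agrees with the value obtained from B alone, $f=m_Ba+\mu(m_A+m_B)g\approx 54.9+117.7\approx 172.5\ \mathrm{N}$, matching the textbook's 173 N.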
{ "domain": "physics.stackexchange", "id": 5583, "tags": "homework-and-exercises, forces, kinematics, friction" }
AMCL laser skipping scan
Question: I am doing amcl on a quadrotor, as the quadrotor move along and yaw pitch roll might affect the laser scan thus the scan I received might end up confuse my amcl and get the wrong localization. To overcome this problem, my partner has try to compensate laser scan value(within certain range) by creating this node. Let just call it LaserCompensation. Here's some of the files. LaserCompensation.cpp #include <stdexcept> #include <termios.h> #include <string> #include <stdlib.h> #include <vector> #include <stdint.h> #include <stdint.h> #include <limits> #include "Rotator.cpp" #include "sensor_msgs/LaserScan.h" #include "../../px-ros-pkg/px_comm/msg_gen/cpp/include/px_comm/OpticalFlow.h" #include "../../flight/msg_gen/cpp/include/flight/flightFeedback.h" #include <ros/ros.h> #include <cmath> #define PI 3.141592654 double Flowtime1 ; double FlowAltitude1; double Flowtime2; double FlowAltitude2; double altitudedifference; double timedifference; double pitchAngle; double rollAngle; sensor_msgs::LaserScan receivedscan; sensor_msgs::LaserScan publishscan; Rotator *rotator; void altitudeParse(const px_comm::OpticalFlow::ConstPtr& optFlowMsg) { Flowtime2 = Flowtime1; Flowtime1 = optFlowMsg->header.stamp.toSec(); FlowAltitude2 = FlowAltitude1; FlowAltitude1 = optFlowMsg->ground_distance; altitudedifference = FlowAltitude1-FlowAltitude2; timedifference = Flowtime1 - Flowtime2; } void angleParse(const flight::flightFeedback::ConstPtr& msg2) { pitchAngle = msg2->pitchAngle; rollAngle = msg2->rollAngle; } void Transform()//sensor_msgs::LaserScan& scan_msg) { publishscan.angle_min = -2.35619443; publishscan.angle_max = 2.35619443; publishscan.angle_increment = (1.25/180*PI); publishscan.time_increment = receivedscan.time_increment; publishscan.scan_time = receivedscan.scan_time; publishscan.range_min = receivedscan.range_min; publishscan.range_max = receivedscan.range_max; publishscan.header.stamp = receivedscan.header.stamp; publishscan.header.frame_id = "test1"; rotator->set(rollAngle, pitchAngle); float x = 0.0; float y = 0.0; float z = 0.0; float xt = 0.0; float yt = 0.0; float zt = 0.0; float r = 0.0; float rt = 0.0; float indexdifference[216]; for (int o=0;o<216;o++) { indexdifference[o]=360.0; } uint g=0; for(int i = 0; i < 1080; i++){ r = receivedscan.ranges[i]; x = r*cos(((0.25*i)-45)/180*PI); z = r*sin(((0.25*i)-45)/180*PI); rotator->unrotate(x, y ,z, xt, yt, zt); float d=-90.0; if((xt!=0.0)||(zt!=0.0)) { d=atan2(zt,xt)*180.0/PI+45.0; } if(d<0.0){ d+=360.0; } float e= (d*215/270)+0.5; int h = e; float absolutedifference= std::abs (e-h); rt = sqrt(zt*zt+xt*xt); if(h>=0.0) { if(absolutedifference<indexdifference[h]) { indexdifference[h]=absolutedifference; publishscan.ranges[h]=rt; } } /*if(i<2 && absolutedifference<angle[0]) { angle[0]=d; publishscan.ranges[0]=rt; } else if(absolutedifference<angle[(i-3/5)+1] && i>2) { angle[(i-3/5)+1]=d; publishscan.ranges[i/5]=rt; }*/ } } void parseScan(const sensor_msgs::LaserScan::ConstPtr& scan_msg) { receivedscan.angle_min = scan_msg->angle_min; receivedscan.angle_max = scan_msg->angle_max; receivedscan.angle_increment = scan_msg->angle_increment; receivedscan.time_increment = scan_msg->time_increment; receivedscan.scan_time = scan_msg->scan_time; receivedscan.range_min = scan_msg->range_min; receivedscan.range_max = scan_msg->range_max; receivedscan.ranges = scan_msg->ranges; receivedscan.header.stamp = scan_msg->header.stamp; receivedscan.header.frame_id = scan_msg->header.frame_id; Transform(); } int main(int argc, char **argv) { rotator = new 
Rotator(); ros::init(argc, argv, "LaserCompensation"); ros::NodeHandle n; publishscan.ranges.resize(216); int count = 0; ros::Subscriber altitude= n.subscribe( "/px4flow/opt_flow", 1, &altitudeParse ); ros::Subscriber PitchRoll= n.subscribe( "flightFeedback", 1000, &angleParse ); ros::Subscriber scan= n.subscribe("scan", 1, &parseScan); ros::Publisher Cscan = n.advertise<sensor_msgs::LaserScan>("scan2",1); ros::Rate loop_rate(40); //Freq = 40Hz while (ros::ok()) { if((altitudedifference <0.05 && altitudedifference > -0.05) && timedifference < 0.5) { Cscan.publish(publishscan); } count++; ros::spinOnce(); } delete rotator; return 0; } amcl_node.cpp static const std::string scan_topic_ = "scan2"; launch file: full.launch <arg name="map_file" default="$(find known_mapping)/map.yaml"/> <!-- Run the map server --> <node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)" /> <node name="hokuyo" pkg="hokuyo_node" type="hokuyo_node" respawn="false" output="screen"> <param name="calibrate_time" type="bool" value="true"/> <!-- Set the port to connect to here --> <param name="port" type="string" value="/dev/ttyACM0"/> <param name="intensity" type="bool" value="false"/> <param name="cluster" value="1"/> </node> <node name="LaserCompensation_node" pkg="LaserCompensation" type="LaserCompensation_node" respawn="false" output="screen" /> #### publish an example base_link -> laser transform ########### <node pkg="tf" type="static_transform_publisher" name="base_link_to_laser" args="0.0 0.0 0.0 1.571 0.0 0.0 base_link test1 40" /> #### start the laser scan_matcher ############################## <node pkg="laser_scan_matcher" type="laser_scan_matcher_node" name="laser_scan_matcher_node" output="screen"> <param name="max_iterations" value="10"/> <param name="publish_pose_stamped" value="true"/> </node> <!--- Run AMCL --> <node name="amcl" pkg="amcl" type="amcl" output ="screen"> <param name="odom_model_type" value="omni"/> <param name="update_min_d" value="0.15"/> <param name="update_min_a" value="0.5"/> <param name="initial_pose_x" value="-13.0"/> <param name="initial_pose_y" value="-22.0"/> <param name="initial_pose_a" value="3.979"/> </node> </launch> Originally posted by FuerteNewbie on ROS Answers with karma: 123 on 2013-12-09 Post score: 0 Answer: Try to add in your launch file the remap parameter in your amcl node : <node name="amcl" pkg="amcl" type="amcl" output ="screen"> -> <remap from="scan" to="scan2" /> <param name="odom_model_type" value="omni"/> This allow amcl to use scan2 instead of scan topic. You'll have to do the same thing in your laser_scan_matcher node. Originally posted by Jbot with karma: 429 on 2013-12-09 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by FuerteNewbie on 2013-12-18: You mean I have to remap scan2 for laser_scan_matcher too? Comment by FuerteNewbie on 2014-01-01: What do you mean by do the same thing in laser scan matcher node? Adding this line into amcl launch code and also laser scan matcher launch code? Comment by FuerteNewbie on 2014-01-01: Thanks a lot man it's working! Can explain the theory behind? What have remap actually done?
{ "domain": "robotics.stackexchange", "id": 16405, "tags": "navigation, odometry, laser, amcl" }
Seekable HTTP response stream wrapper
Question: I created this wrapper to use together with HttpClient streams and ZipArchive. ZipArchive reads .zip index once from the end of the archive, so this wrapper caches last 4MiB of the stream. Also the wrapper avoids pointless seeks until the first read. I am interested if there any issues with this approach, and whether this can be improved. namespace Playground { using System; using System.IO; using System.Net; using System.Net.Http; using System.Net.Http.Headers; using System.Threading; using System.Threading.Tasks; public class SeekableHttpStream : Stream { private long _position; private long _underlyingStreamOffset; private Stream _underlyingStream; private bool _forceRequest; internal SeekableHttpStream( HttpClient client, HttpResponseMessage response, HttpRequestMessage request) { Client = client; Response = response; Request = request; var headers = response.Headers; var acceptRanges = headers?.AcceptRanges; if (acceptRanges == null || !acceptRanges.Contains("bytes")) { throw new ArgumentException("server does not support HTTP range requests", nameof(request)); } var contentHeaders = response.Content?.Headers; if (contentHeaders.ContentLength != null) { Length = contentHeaders.ContentLength.Value; } else if (contentHeaders.ContentRange != null) { if (contentHeaders.ContentRange.Length == null) { throw new ArgumentException("missing Content-Range length", nameof(request)); } Length = contentHeaders.ContentRange.Length.Value; } else { throw new ArgumentException("failed to determine stream length", nameof(request)); } } public HttpClient Client { get; } public HttpResponseMessage Response { get; } public HttpRequestMessage Request { get; } public override bool CanRead => _position < Length; public override bool CanSeek => true; public override bool CanWrite => false; public override long Length { get; } public override long Position { get => _position; set => Seek(value, SeekOrigin.Begin); } public override void Flush() { } public override int Read(byte[] buffer, int offset, int count) { EnsureStreamOpen().GetAwaiter().GetResult(); int read = _underlyingStream.Read(buffer, offset, count); _position += read; return read; } public override int Read(Span<byte> buffer) { EnsureStreamOpen().GetAwaiter().GetResult(); int read = _underlyingStream.Read(buffer); _position += read; return read; } public override async Task<int> ReadAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken) { await EnsureStreamOpen(cancellationToken).ConfigureAwait(false); int read = await _underlyingStream.ReadAsync(buffer, offset, count, cancellationToken) .ConfigureAwait(false); _position += read; return read; } public override int ReadByte() { EnsureStreamOpen().GetAwaiter().GetResult(); var value = _underlyingStream.ReadByte(); ++_position; return value; } public override async ValueTask<int> ReadAsync(Memory<byte> buffer, CancellationToken cancellationToken = default) { await EnsureStreamOpen(cancellationToken).ConfigureAwait(false); int read = await _underlyingStream.ReadAsync(buffer, cancellationToken) .ConfigureAwait(false); _position += read; return read; } public override long Seek(long offset, SeekOrigin origin) { return SeekAsync(offset, origin).GetAwaiter().GetResult(); } private ValueTask EnsureStreamOpen(CancellationToken cancellationToken = default) { if (_underlyingStream == null) { _forceRequest = true; return new ValueTask(SeekAsync(0, SeekOrigin.Current, cancellationToken)); } return default; } public async Task<long> SeekAsync(long offset, SeekOrigin origin, 
CancellationToken cancellationToken = default) { const long SeekThreshold = 1024 * 1024; long newPosition = origin switch { SeekOrigin.Begin => offset, SeekOrigin.Current => _position + offset, SeekOrigin.End => Length + offset, _ => throw new ArgumentOutOfRangeException(nameof(origin)), }; if (newPosition < 0) { throw new ArgumentOutOfRangeException(nameof(offset)); } if (newPosition > Length) { throw new NotSupportedException("seeking beyond the length of the stream is not supported"); } long delta = newPosition - _position; if (_underlyingStream == null) { if (_forceRequest) { await OpenUnderlyingStream(newPosition, cancellationToken) .ConfigureAwait(false); } _position = newPosition; } else if (_underlyingStream.CanSeek && newPosition >= _underlyingStreamOffset && newPosition <= _underlyingStreamOffset + _underlyingStream.Length) { _underlyingStream.Position = newPosition - _underlyingStreamOffset; _position = newPosition; } else if (delta < 0 || delta > SeekThreshold) { await OpenUnderlyingStream(newPosition, cancellationToken) .ConfigureAwait(false); } else if (delta > 0) { var buffer = new byte[delta]; await ReadAsync(buffer, 0, (int)delta, cancellationToken); } return _position; } private async Task<HttpRequestMessage> CopyHttpRequest() { var clone = new HttpRequestMessage(Request.Method, Request.RequestUri); if (Request.Content != null) { var bytes = await Request.Content.ReadAsByteArrayAsync() .ConfigureAwait(false); clone.Content = new ByteArrayContent(bytes); if (Request.Content.Headers != null) foreach (var h in Request.Content.Headers) clone.Content.Headers.Add(h.Key, h.Value); } clone.Version = Request.Version; foreach (var prop in Request.Properties) { clone.Properties.Add(prop); } foreach (var header in Request.Headers) { clone.Headers.TryAddWithoutValidation(header.Key, header.Value); } return clone; } private async Task OpenUnderlyingStream(long position, CancellationToken cancellationToken = default) { const long UseBufferedStreamThreshold = 4 * 1024 * 1024; if (position < 0) { throw new ArgumentOutOfRangeException(nameof(position)); } using var newRequest = await CopyHttpRequest() .ConfigureAwait(false); if (position > 0) { var responseHeaders = Response.Headers; var contentHeaders = Response.Content.Headers; if (responseHeaders.ETag != null) { newRequest.Headers.IfRange = new RangeConditionHeaderValue(responseHeaders.ETag); } else if (contentHeaders.LastModified != null) { newRequest.Headers.IfRange = new RangeConditionHeaderValue(contentHeaders.LastModified.Value); } newRequest.Headers.Range = new RangeHeaderValue(position, null); } long remainingLength = Length - position; var response = await Client.SendAsync( newRequest, remainingLength <= UseBufferedStreamThreshold ? 
HttpCompletionOption.ResponseContentRead : HttpCompletionOption.ResponseHeadersRead, cancellationToken ).ConfigureAwait(false); if (!response.IsSuccessStatusCode) { response.EnsureSuccessStatusCode(); } else if (position > 0 && response.StatusCode != HttpStatusCode.PartialContent) { response.Dispose(); throw new InvalidOperationException("range request not supported or content has changed since last request"); } else { try { var stream = await response.Content.ReadAsStreamAsync() .ConfigureAwait(false); if (_underlyingStream != null) { await _underlyingStream.DisposeAsync() .ConfigureAwait(false); } _underlyingStream = stream; _underlyingStreamOffset = position; _forceRequest = false; _position = position; } catch { response.Dispose(); throw; } } } protected override void Dispose(bool disposing) { _underlyingStream?.Dispose(); Response.Dispose(); Request.Dispose(); base.Dispose(disposing); } #region Unsupported write methods public override void SetLength(long value) { throw new NotSupportedException(); } public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException(); public override IAsyncResult BeginWrite(byte[] buffer, int offset, int count, AsyncCallback callback, object state) { throw new NotSupportedException(); } public override void EndWrite(IAsyncResult asyncResult) => throw new NotSupportedException(); public override void Write(ReadOnlySpan<byte> buffer) => throw new NotSupportedException(); public override Task WriteAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken) { throw new NotSupportedException(); } public override ValueTask WriteAsync(ReadOnlyMemory<byte> buffer, CancellationToken cancellationToken = default) { throw new NotSupportedException(); } public override void WriteByte(byte value) { throw new NotSupportedException(); } public override int WriteTimeout { get => throw new NotSupportedException(); set => throw new NotSupportedException(); } #endregion Unsupported write methods } public static class HttpClientExtensions { public static async Task<SeekableHttpStream> GetSeekableStreamAsync(this HttpClient client, string requestUri) { using var request = new HttpRequestMessage(HttpMethod.Get, requestUri); return await SendSeekableStreamAsync(client, request) .ConfigureAwait(false); } public static async Task<SeekableHttpStream> GetSeekableStreamAsync(this HttpClient client, Uri requestUri) { using var request = new HttpRequestMessage(HttpMethod.Get, requestUri); return await SendSeekableStreamAsync(client, request) .ConfigureAwait(false); } public static async Task<SeekableHttpStream> SendSeekableStreamAsync(this HttpClient client, HttpRequestMessage request) { HttpMethod method = request.Method; try { request.Method = HttpMethod.Head; var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead) .ConfigureAwait(false); response.EnsureSuccessStatusCode(); try { return new SeekableHttpStream(client, response, request); } catch { response.Dispose(); throw; } } finally { request.Method = method; } } } } Answer: else if (delta > 0) { var buffer = new byte[delta]; await ReadAsync(buffer, 0, (int)delta, cancellationToken); } This code has a fairly nasty bug: ReadAsync is not required to read as much data as you're requesting, it can read less. You either need to use the .Net 7 method ReadExactlyAsync, or you need to read in a loop, until the right number of bytes has been read.
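For readers unfamiliar with that contract: a read call only promises to hand back some bytes (or zero at end of stream), not the full count requested, so the fix is a loop that keeps asking for the remainder. The sketch below is in Python purely to keep it short — the loop has the same shape with Stream.ReadAsync in C#, and (as noted above) ReadExactlyAsync in .NET 7 does this for you.

def read_exactly(stream, count):
    # keep reading until `count` bytes have been collected or the stream ends early
    chunks, remaining = [], count
    while remaining > 0:
        chunk = stream.read(remaining)    # may legally return fewer bytes than requested
        if not chunk:
            raise EOFError("stream ended after %d of %d bytes" % (count - remaining, count))
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)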
{ "domain": "codereview.stackexchange", "id": 44849, "tags": "c#, .net, asynchronous, http, stream" }
ROS2 Talker cannot communicate with Listener
Question: Hi, I want to use ROS2. I have finished setting up ROS2, but the talker cannot communicate with the listener. My setup is below. $ sudo apt update && sudo apt install curl $ curl http://repo.ros2.org/repos.key | sudo apt-key add - $ sudo sh -c 'echo "deb [arch=amd64,arm64] http://repo.ros2.org/ubuntu/main xenial main" > /etc/apt/sources.list.d/ros2-latest.list' $ sudo apt update $ sudo apt install `apt list "ros-ardent-*" 2> /dev/null | grep "/" | awk -F/ '{print $1}' | grep -v -e ros-ardent-ros1-bridge -e ros-ardent-turtlebot2- | tr "\n" " "` So, terminal 1 is $ source /opt/ros/ardent/setup.bash $ ros2 run demo_nodes_cpp talker and terminal 2 is $ source /opt/ros/ardent/setup.bash $ ros2 run demo_nodes_cpp listener Terminal 1 displays $ ros2 run demo_nodes_cpp talker [INFO] [talker]: Publishing: 'Hello World: 1' [INFO] [talker]: Publishing: 'Hello World: 2' [INFO] [talker]: Publishing: 'Hello World: 3' [INFO] [talker]: Publishing: 'Hello World: 4' [INFO] [talker]: Publishing: 'Hello World: 5' but terminal 2 shows no response. What am I doing wrong? (I use Ubuntu 16.04 LTS) Originally posted by takijo on ROS Answers with karma: 110 on 2018-08-12 Post score: 5 Original comments Comment by Geoff on 2018-09-10: How long have you left the listener running to wait for a response? Discovery can take some time, although on a local machine I would expect it to succeed fairly quickly. Comment by takijo on 2018-09-10: I waited for about ten seconds, but did not try longer. After a change to my ufw settings, it worked quickly. Answer: Make sure that you are allowing multicast traffic in the firewall. By default, the ufw firewall included in 16.04 blocks multicast traffic, even for loopback (i.e. on the same system). Here's an example of allowing multicast in ufw. Originally posted by Geoff with karma: 4203 on 2018-08-12 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by takijo on 2018-08-13: I can communicate with the listener by following the above site. I allow UDP multicast in ufw: sudo ufw allow in proto udp to 224.0.0.0/4 sudo ufw allow in proto udp from 224.0.0.0/4 Thank you for your help! Comment by Dirk Thomas on 2018-09-10: If you run into a similar problem you can try the new multicast verb to test if simple multicast functionality is working for you: https://github.com/ros2/ros2cli/pull/145 Comment by Tav_PG on 2019-09-25: Also note, the same symptoms (i.e. the ROS2 talker cannot communicate with the listener) occur if the computer has no network configured. To test, try this: ros2 multicast receive in one terminal and ros2 multicast send in another terminal. If there are errors like load_entry_point … then plug your computer into a network and try again. Comment by codierknecht on 2021-08-23: Is there anyone else with ros2 multicast working but other nodes not able to see each other? Even ros2 topic pub /chatter std_msgs/String "data: ping" in one terminal isn't noticed by ros2 topic list in another terminal. Of course both have ROS 2 Galactic sourced. Comment by twaddell on 2021-09-27: @codierknecht, I also have that issue; what version are you running? Did you ever find a fix? Comment by codierknecht on 2021-10-01: @twaddell I did manage to solve the problem for running nodes on the same machine by exporting ROS_LOCALHOST_ONLY=1. However I currently don't have a second machine due to cross-compilation problems for a Raspberry Pi model 2B. Comment by RickROS2 on 2023-01-12: I have a similar problem. I am trying to connect a Raspberry Pi 3B+ to my PC with ROS2. 
I can connect to it just fine, my firewall is not active, and multicast/ping between them works. However, topics listed on my Pi do not show up on my PC, and talker/listener mode does not work either (the listener shows no response). ROS_ID is both 30. Please help me; I do not know what else to do. Comment by grhex on 2023-04-03: I have a similar problem with ROS2 Humble. I tried several things, including: sudo ufw allow in proto udp to 224.0.0.0/4 sudo ufw allow in proto udp from 224.0.0.0/4 "ros2 multicast receive" and "ros2 multicast send" communicated between two machines over wifi, but the talker and listener still didn't. Comment by xaxam2001 on 2023-05-16: Hi, I have the exact same issue as @grhex. I'm trying to use a TurtleBot 4 that uses a Pi 4B. I can ping the Pi from my machine and the machine from my Pi, and ros2 multicast receive and ros2 multicast send are working fine. But the listener and the talker don't communicate, I can't use ros2 node list, I can't stop the daemon using ros2 daemon stop, and I think there are a lot of other commands I can't use. This is even after adding the rules in the answer above, and even if I disable the ufw firewall. I should mention that I'm using the Discovery Server configuration and FastDDS. Does someone have a solution now?
{ "domain": "robotics.stackexchange", "id": 31520, "tags": "ros2" }
Does Newton's third law stem from the translational symmetry of space?
Question: So, we know from Noether's theorem that the translational symmetry of space (nothing changes if our physical system is located in a different position) implies the conservation of a quantity we call momentum. We also know that conservation of momentum is equivalent to Newton's third law. So my question is: Does the physical observation of the homogeneity of space imply Newton's third law? Or are there additional assumptions involved? Is Newton's third law an unavoidable necessity/consequence in a universe with this symmetry? Answer: Does the physical observation of the homogeneity of space imply Newton's third law? Or are there additional assumptions involved? There are two additional assumptions beyond the observed symmetry of spatial homogeneity. The first is that the laws of physics can be written in the form of a Lagrangian. There is, in principle, no reason that the laws of physics must be based on an action principle. If that were not the case then Noether's theorem would not apply. The second is that there are objects that interact pairwise via mechanical forces. With those two additional assumptions Newton's third law is inevitable. The forces between objects must obey Newton's third law.
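As a worked illustration of the answer's two assumptions (my addition, not part of the original post): take two particles whose Lagrangian is translation invariant, so the interaction potential can only depend on the separation, and the third law drops out immediately: $$L = \tfrac{1}{2} m_1 \dot{\mathbf{x}}_1^{\,2} + \tfrac{1}{2} m_2 \dot{\mathbf{x}}_2^{\,2} - V(\mathbf{x}_1 - \mathbf{x}_2), \qquad \mathbf{F}_{2\to 1} = -\frac{\partial V}{\partial \mathbf{x}_1} = -\nabla V, \qquad \mathbf{F}_{1\to 2} = -\frac{\partial V}{\partial \mathbf{x}_2} = +\nabla V = -\mathbf{F}_{2\to 1}.$$ The same invariance under $\mathbf{x}_i \to \mathbf{x}_i + \boldsymbol{\epsilon}$ is what Noether's theorem converts into conservation of the total momentum $m_1\dot{\mathbf{x}}_1 + m_2\dot{\mathbf{x}}_2$.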
{ "domain": "physics.stackexchange", "id": 95042, "tags": "newtonian-mechanics, classical-mechanics, momentum, symmetry, conservation-laws" }
Use of amino-acid sequences versus use of nucleotide sequences in phylogenetic analysis
Question: Reading a paper about gene evolution, I see that they do phylogenetic analysis for bacteria using protein sequences. They take the method from another paper. I suspect that amino-acid sequences are more stable than nucleotide sequences, due to the presence of synonymous substitutions... but is this stability required between closely related species? Doesn't it make the analysis less powerful? Does it make it more reliable? In other words, what's the advantage of using amino-acid sequences versus using nucleotide sequences for phylogenetic analysis? Answer: In general, many sequence alignment programs can use multiple substitution models, distinguishing between nucleotides, amino acids, and codons. A protein sequence has functional information that is not directly visible in the nucleotide sequence. The papers you link deal with horizontal gene transfer, where a gene is passed to a more distantly related organism. Different species have different codon usage biases, i.e. the translation efficiency is different for different codons. On one hand, this means that HGT is more likely to occur between species of similar codon usage. On the other hand, "codon usage of horizontally transferred genes approaches the host usage over time." Thus, on the nucleotide level, the phylogenetic signal will get lost due to the evolutionary pressure on translation efficiency, while on the protein level, there will be more conservation.
{ "domain": "biology.stackexchange", "id": 619, "tags": "bioinformatics, phylogenetics" }
FIR filter design with time and frequency domain constraints
Question: I am dealing with the problem of synthesizing a filter with requirements on both its frequency and time characteristics. At the moment there is a filter that meets the requirements. Roughly speaking, the requirements are as follows: Frequency response: FIR low pass, Fpass ~ 0.0(3) (* pi rad/sample), Fstop ~ 0.1(6) (* pi rad/sample), passband ripple (irregularity) of ~ 1e-3 (maybe ~ 1e-2) dB, ~ -40 dB suppression. Time requirements: overshoot in the step response ~ 4% (transition time ~ 1e-2 ms). I would like to know about the possibilities of such a synthesis (I failed with filterDesigner in MATLAB and tried free tools such as Iowa Hills), or at least about transforming the finished filter into a narrower one (Fpass = 0.025, Fstop = 0.125) (MATLAB can do this, but only an IIR filter is obtained). Answer: The most straightforward way to impose constraints in the frequency domain as well as in the time domain for the design of (linear phase) FIR filters is to use linear programming. Constraints on the step response or on the impulse response are naturally linear, and constraints on the frequency response can also be formulated as linear constraints in the case of linear phase FIR filters. Take a look at this document for some examples. More details on the linear programming formulation of the FIR filter design problem can be found in this paper.
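To make the linear-programming formulation concrete, here is a small sketch (my own illustration, not taken from the linked document or paper) that designs a linear-phase Type-I FIR lowpass with scipy.optimize.linprog; the band edges, ripple, overshoot bound and half-order below are placeholder values rather than the poster's final specification:

```python
# Sketch: LP design of a linear-phase (Type-I) FIR lowpass with a passband
# ripple bound, a step-response overshoot bound, and the stopband level as
# the objective.  All numbers here are illustrative placeholders.
import numpy as np
from scipy.optimize import linprog

M = 30                                   # half-order; filter length N = 2*M + 1
wp, ws = 0.033 * np.pi, 0.167 * np.pi    # passband / stopband edges (rad/sample)
dp = 1e-2                                # passband ripple bound (linear scale)
over = 0.04                              # allowed step-response overshoot (4 %)

wpass = np.linspace(0.0, wp, 80)
wstop = np.linspace(ws, np.pi, 200)

def cos_matrix(w, M):
    # Amplitude response of a Type-I filter: A(w) = sum_n a[n] * cos(n*w)
    return np.cos(np.outer(w, np.arange(M + 1)))

Cp, Cs = cos_matrix(wpass, M), cos_matrix(wstop, M)

# Unknowns x = [a_0, ..., a_M, delta_s]; minimize the stopband level delta_s.
c = np.zeros(M + 2)
c[-1] = 1.0

A_ub, b_ub = [], []
zp = np.zeros((len(wpass), 1))
# Passband: 1 - dp <= A(w) <= 1 + dp
A_ub += [np.hstack([Cp, zp]), np.hstack([-Cp, zp])]
b_ub += [np.full(len(wpass), 1 + dp), np.full(len(wpass), -(1 - dp))]
# Stopband: |A(w)| <= delta_s
ones = np.ones((len(wstop), 1))
A_ub += [np.hstack([Cs, -ones]), np.hstack([-Cs, -ones])]
b_ub += [np.zeros(len(wstop)), np.zeros(len(wstop))]

# Step response: every partial sum of the impulse response h stays below 1 + over.
# Map a -> h for a symmetric filter: h[M] = a[0], h[M-n] = h[M+n] = a[n] / 2.
T = np.zeros((2 * M + 1, M + 1))
T[M, 0] = 1.0
for n in range(1, M + 1):
    T[M - n, n] = T[M + n, n] = 0.5
S = np.cumsum(T, axis=0)                 # rows = step-response samples in terms of a
A_ub.append(np.hstack([S, np.zeros((2 * M + 1, 1))]))
b_ub.append(np.full(2 * M + 1, 1 + over))

res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.hstack(b_ub),
              bounds=[(None, None)] * (M + 1) + [(0, None)], method="highs")
assert res.success, res.message
h = T @ res.x[:M + 1]                    # impulse response of the designed filter
print("achieved stopband level: %.1f dB" % (20 * np.log10(res.x[-1])))
```

Tightening the specification (narrower transition band, smaller ripple) then just means adding grid points or constraints and, if the LP becomes infeasible, increasing M.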
{ "domain": "dsp.stackexchange", "id": 7507, "tags": "filter-design, finite-impulse-response, digital-filters" }
Do recessive alleles really exist?
Question: This question may seem illogical to some, but I seriously have this doubt. I searched Google for some proofs but they were extremely complex and I couldn't understand anything. I was just wondering whether we really have recessive alleles or not. Is it that there are only dominant alleles present, and when the dominant allele isn't present we consider it recessive? Is it that there is a place reserved for the dominant allele, and if that place is empty it turns out to be recessive? Although this thought is vague, if you take this example it would probably make sense: Let's say there are two alleles: T and t, for tall and short respectively. Now, because TT, Tt, and tT are tall, they all lead to the formation of a specific hormone due to the presence of the dominant allele. But the plant with genotype tt is short, hence the "extra" hormone for "excessive" tallness is missing in the tt plant because the dominant gene isn't present. Can we say that the place for the two alleles (the dominant allele) is actually empty in the plant with genotype tt? Answer: This is a good question. Firstly, let's get the terminology straightened out: The terms recessive and dominant are not actually used to describe genes, but rather alleles. The term allele is used for alternative forms of the same gene.(1) Many genes have two or more alleles, and some genes have alleles that can be described as dominant or recessive. It is also useful to be familiar with the terms genotype and phenotype. Genotype refers to the form of the genetic material (DNA), while phenotype refers to the observed effect of the genotype. Secondly, it's important to realize that the concept of "dominant" and "recessive" is a human-made abstraction. The concept of dominant and recessive genes was invented (or discovered, if you like) before we reached a good understanding of how genetics works at the molecular level. Thus, genetics in the early days had a much cruder understanding of how different phenotypes depend on the genotype. The dominant/recessive concept is useful in cases where there is a limited number of possible allele pairs and one or several alleles can mask the effect of other alleles. This is a very limited set of cases. Many genes have many different alleles, and it's hard to account for all of them. Many alleles also do not have a clear observable effect, in which case any "masking" effect is not prominent. To quote Wikipedia (2): The most common basis of dominance and recessiveness is that the dominant allele codes for a functional protein and the recessive allele for a mutant, non-functional protein. A further important point is made in the following passage: Dominance is not inherent to an allele. It is a relationship between alleles; one allele can be dominant over a second allele, recessive to a third allele, and codominant to a fourth. Dominance should be distinguished from epistasis, a relationship in which an allele of one gene affects the expression of an allele at a different gene. Let's consider your example: A single gene with two different alleles, T and t. Let's assume the T allele codes for a more powerful version of the growth hormone than the t allele, or that the t allele produces a non-functional hormone. Furthermore, we assume that production of the T hormone from a single allele is enough to make the plant grow tall, and that production of T or t from a second allele has no further effect. Then, we can say that the T allele masks the effect of the t allele. 
However, as you can probably imagine, one allele fully masking the effect of another is rather rare. So recessive/dominant allele pairs are the exception rather than the norm. Thus, the place for the two alleles in the genome of the plant is definitely not empty. Note that a mutation could in principle cause one of the genes to be deleted in its entirety from one of the chromosomes of the plant during reproduction. The "place" for the gene on that chromosome would then indeed be empty. This has nothing to do with how dominance and recessiveness usually work, except that a recessive gene could become expressed if the dominant allele is deleted. So, at the molecular level there is no concept of recessive or dominant alleles. Some genes have allele pairs that can be described as dominant/recessive in relation to each other; most others do not. The effect of different combinations of alleles determines whether the relationship between the alleles is described in terms of dominant/recessive.
{ "domain": "biology.stackexchange", "id": 1908, "tags": "genetics, gene-synthesis" }
RGBDSLAM Crash when press SPACE
Question: Hi, the compilation is OK, but when I launch the program and press the SPACE bar to update, the program crashes. The image of the error is below. Originally posted by leobber on ROS Answers with karma: 15 on 2015-01-19 Post score: 1 Answer: I'm not an expert at this, but I hope it helps. Have you checked whether you're using the SIFTGPU detector while not actually using a GPU? Something like this was happening to me and it was because of that; check your rgbdslam.launch. Originally posted by Phelipe with karma: 74 on 2015-01-20 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Felix Endres on 2015-01-26: If that's the case, try to set ORB instead of SIFTGPU in your launch file.
{ "domain": "robotics.stackexchange", "id": 20623, "tags": "slam, navigation, rgbd6dslam, rgbdslamv2, rgbdslam-freiburg" }
Are Genetic Programming runtimes faster on QCs than on classical computers?
Question: If this isn't known, would they theoretically be? I'm particularly interested in knowing whether a QC would be faster at evaluating the fitness function of the possible solutions than a classical machine. Answer: There are quantum algorithms for genetic programming which would theoretically have advantages over the corresponding classical genetic programming algorithms, but you would need a full-fledged quantum computer with more qubits than any quantum computer we currently have in order to observe such an advantage.
{ "domain": "quantumcomputing.stackexchange", "id": 202, "tags": "speedup, applications" }
How to represent graphically the relationship of $Z$ and $\gamma$ to $W_3$ and $B^0$?
Question: How to represent graphically the relationship of $Z$ and $\gamma$ to $W_3$ and $B^0$? I made the two schematics below, but I'm not sure which one is correct, nor whether we should put $W_3$ or $B^0$ on the $x$ axis or the $y$ axis. - If it were the schematic on the left, it could not work, since $Z$ sits on the positive $y$ axis, whereas $-\sin\theta_W$ (negative) would put $Z$ on the negative $y$ axis. - If it were the schematic on the right, it could not work, since the $x$ value of $Z$ is negative. - If $B^0$ were on the $x$ axis, it would not work, since it represents the $x$ axis for the rotation matrix. Thus, none of my schematics seem to work. A third and a fourth are the following: Answer: The fourth diagram is the appropriate one if you want to visualize this relationship as a rotation.
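For reference (my addition, not part of the original exchange), the relationship these diagrams are meant to depict is the standard rotation by the weak mixing angle $\theta_W$, $$\begin{pmatrix} \gamma \\ Z \end{pmatrix} = \begin{pmatrix} \cos\theta_W & \sin\theta_W \\ -\sin\theta_W & \cos\theta_W \end{pmatrix} \begin{pmatrix} B^0 \\ W_3 \end{pmatrix},$$ so a consistent picture draws $(B^0, W_3)$ as the original pair of orthogonal axes and $(\gamma, Z)$ as the same axes rotated by $\theta_W$.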
{ "domain": "physics.stackexchange", "id": 68329, "tags": "standard-model, electroweak" }
Continuity equation for a charged particle
Question: I am trying to prove the continuity equation for a charged particle moving with some speed v. So, I start with the charge density and current density as, \begin{align} \rho(x,t) & = q\delta(x-vt) \\ J(x,t) & = q v \delta(x-vt) \end{align} It seems that one would have to take a derivative of the delta function to prove the continuity equation. How does one take such a derivative? Answer: Distributions, like Dirac's delta, are defined by how they act on smooth functions. Since this is a physics site, I will use integral notation instead of a more fancy mathematics notation. Dirac's delta distribution is defined by $$ \int_{-\infty}^{\infty} \delta(x) \, f(x) \, dx = f(0). $$ The derivative of a distribution $u$ is defined by $$ \int_{-\infty}^{\infty} u'(x) \, f(x) \, dx = -\int_{-\infty}^{\infty} u(x) \, f'(x) \, dx. $$ Therefore, $$ \int_{-\infty}^{\infty} \delta'(x) \, f(x) \, dx = -\int_{-\infty}^{\infty} \delta(x) \, f'(x) \, dx = -f'(0). $$ But you don't need to know the exact definition to do your exercise. You only need to know that you can take the derivative and that the chain rule works (at least in this case): $$ \partial_t \rho = \partial_t \left(q \delta(x-vt)) \right) = q (-v)\delta'(x-vt) = -\partial_x\left(qv\delta(x-vt)\right) = -\partial_x J $$ so $$ \partial_t \rho + \partial_x J = 0. $$
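A small addition to the answer above (mine, for completeness): exactly the same one-line manipulation works in three dimensions, where the continuity equation reads $\partial_t \rho + \nabla\cdot\mathbf{J} = 0$: $$\rho(\mathbf{r},t) = q\,\delta^3(\mathbf{r}-\mathbf{v}t), \quad \mathbf{J}(\mathbf{r},t) = q\mathbf{v}\,\delta^3(\mathbf{r}-\mathbf{v}t), \quad \partial_t \rho = -q\,(\mathbf{v}\cdot\nabla)\,\delta^3(\mathbf{r}-\mathbf{v}t) = -\nabla\cdot\big(q\mathbf{v}\,\delta^3(\mathbf{r}-\mathbf{v}t)\big) = -\nabla\cdot\mathbf{J},$$ where the middle step is the chain rule applied to the argument $\mathbf{r}-\mathbf{v}t$ and the last step uses the fact that $\mathbf{v}$ is constant.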
{ "domain": "physics.stackexchange", "id": 71069, "tags": "homework-and-exercises, electrostatics, conservation-laws, charge, dirac-delta-distributions" }
"Proof" that zero curvature implies $\partial_a \Gamma^b_{cd}$ is symmetric in $a$ and $c$
Question: I know the claim is wrong. I just want to know where this "proof" goes haywire: Assume curvature is 0 implies Parallel transport is path independent implies Path integration of Christoffel symbols, $\Gamma ^b_{cd}$, is path independent implies $\Gamma^b_{cd}=\partial_c T^b_d$ for some tensor field $T^b_d$ implies $\partial_a \Gamma^b_{cd}=\partial_a \partial_c T^b_d= \partial_c \partial_a T^b_d=\partial_c \Gamma^b_{ad}$ implies $\partial_a \Gamma^b_{cd}$ is symmetric in $a$ and $c$ Answer: Parallel transport is path independent implies Path integration of Christoffel symbols, $\tau^b_{cd}$, is path independent implies $\tau^b_{cd}=\partial_c T^b_d$ for some tensor field $T^b_d$ The first statement is correct (given the assumptions that precede it). I don't know what the second statement means exactly, but the third statement is definitely not correct. For one thing, the Christoffel symbols are symmetric in their lower two indices (assuming no torsion), but your expression is not (although we could fix that trivially by symmetrizing the indices).$^\star$ What does follow from the first statement is that given two paths, $x_1^\mu(\lambda_1)$ and $x_2^\mu(\lambda_2)$, such that the two paths intersect at two points, the parallel propagator from one intersection point to the other is the same for $x_1$ and $x_2$. The parallel propagator $P[x]^\mu_{\ \nu}(\lambda, \lambda_0)$ is a solution of the parallel transport equation satisfied by a vector $V$ parallel transported along a curve $x$ \begin{equation} \frac{{\rm d} x^\mu}{{\rm d} \lambda} \nabla_\mu V^\nu = 0 \end{equation} The parallel propagator relates the value of $V^\mu$ at $\lambda$ to its value at $\lambda_0$ after parallel propagation along a curve $x$ \begin{equation} V^\mu(\lambda) = P[x]^\mu_{\ \ \nu}(\lambda, \lambda_0) V^\nu(\lambda_0) \end{equation} We can write a formal expression for the parallel propagator in terms of the path-ordered exponential \begin{equation} P[x](\lambda, \lambda_0) = \mathcal{P} \exp\left(\int_{\lambda_0}^\lambda {\rm d \lambda'} A[x](\lambda')\right) \end{equation} where \begin{equation} A[x]^\mu_{\ \ \nu}(\lambda) = -\tau^\mu_{\nu \sigma} \frac{{\rm d} x^\sigma}{{\rm d}\lambda} \end{equation} and $\mathcal{P}$ is the path-ordering symbol, which means to expand the exponential in a Taylor series and order the matrix factors $A$ in each term so that each factor of $A$ appears in order of decreasing value of $\lambda$. Ordinarily, the parallel propagator between two spacetime points will depend on the path taken, but in flat spacetime it will not. Therefore the condition of zero curvature implies that \begin{equation} P[x_1]^\mu_{\ \ \nu}(1, 0) = P[x_2]^\mu_{\ \ \nu}(1, 0) \end{equation} where $x_1^\mu(0) = x_2^\mu(0)$ is the agreed starting point of the two curves, and $x_1^\mu(1) = x_2^\mu(1)$ is the ending point. There clearly is some relationship among the Christoffel symbols at different points on the manifold, but it is not as simple as saying that the Christoffel symbols must be the gradient of some function. The integral equation implied by the equality of the parallel propagators can be converted to a differential equation by considering infinitesimally small loops. 
In fact this condition will amount to saying that the Riemann curvature is zero \begin{equation} R^\mu_{\ \ \nu\rho\sigma} = \partial_\rho\tau^\mu_{\nu\sigma} - \partial_\sigma\tau^\mu_{\nu\rho} + \tau^\mu_{\rho \beta} \tau^\beta_{\nu \sigma} - \tau^\mu_{\sigma \beta} \tau^\beta_{\nu \rho} = 0 \end{equation} which can be read as a non-linear differential equation for $\tau^\mu_{\nu\sigma}$. We can in fact check explicitly that your ansatz $\tau^\mu_{\nu\sigma} = \partial_{(\nu} T^\mu_{\ \sigma)}$ does not solve this equation in general (where I've symmetrized your ansatz, and defined the notation $(ab)=\frac{1}{2}(ab+ba)$): \begin{eqnarray} R^\mu_{\ \ \nu\rho\sigma} &=& \partial_\rho \partial_{(\nu} T^\mu_{\sigma)} - \partial_\sigma\partial_{(\nu} T^\mu_{\rho)} + \partial_{(\rho} T^\mu_{\beta)} \partial_{(\nu}T^\beta_{\sigma)} - \partial_{(\sigma} T^\mu_{\beta)} \partial_{(\nu}T^\beta_{\rho)} \\ &=& \frac{1}{2} \left( \partial_\rho \partial_\nu T^\mu_\sigma - \partial_\sigma \partial_\nu T^\mu_\rho \right) + O(T^2) \\ &\neq& 0 \end{eqnarray} where the term in brackets on the second line is not zero in general, and the $O(T^2)$ terms involve two powers of $T$ that cannot cancel the term in brackets. $^\star$ A good check to do is also whether both sides of an equation transform the same way under coordinate transformations. I'm running out of steam to check if the symmetrized version of your ansatz transforms in the right way to match the transformation of the Christoffel symbols, but if the transformation properties don't match then this is an immediate killer for your identity. Reference for the parallel propagator stuff: Sean Carroll's GR lecture notes, chapter 3 https://arxiv.org/abs/gr-qc/9712019
{ "domain": "physics.stackexchange", "id": 84295, "tags": "homework-and-exercises, general-relativity, differential-geometry, tensor-calculus" }
How to recognize exoplanet transit
Question: I am using the Python package lightkurve to search for exoplanets by the transit method. When I download the light curve of some star and apply a periodogram, I find the frequency and power of the periodic components in the light curve. However, I noticed that multiples (0.5x, 2x, ...) of the original period are displayed too. Here is an example for Kepler-6b. import lightkurve as lk import numpy as np light_curve = lk.search_lightcurvefile("Kepler-6", quarter=1).download().PDCSAP_FLUX light_curve.scatter() periodogram = light_curve.to_periodogram(method="bls", period=np.arange(0.5, 10, 0.0001)) periodogram.plot() I could take only the strongest period (3.24 d), but what if there are more exoplanets (1.08 d, 9.71 d, ...)? I thought that if I use the light_curve.fold(period) method, I can tell whether it is a transit (there is only one drop of flux) or not (there are more drops of flux). However, Kepler-20f also has multiple drops of flux after folding the light curve (because of other planets?). How can I tell if it is a planet transit or not? Answer: Of course you will get multiple peaks in the periodogram. The Fourier series representing a non-sinusoidal signal will contain frequencies at multiples of the fundamental frequency. Similarly, you can have a periodic signal with double or treble the period which will look identical, but where the phenomenon causing the signal repeats either two or three times during each cycle. The solution is of course to plot the data folded on the proposed period, as you have done. In the examples you have shown, clearly the period of 3.23d is the only valid period for a claimed exoplanet, since the transit only takes place once per cycle and there is no sign of any other features or excessive scatter in the folded light curve at that period. For the 9.71d period to be valid you would either (i) have to have three planets, equi-spaced around the same orbital circle, each with a 9.71d period, and of identical size so that the transits are the same depth (which is not a stable situation), or (ii) be unfortunate enough to have one planet with a 3.23d period and another with a 9.71d period that transit at exactly the same time. But it is not possible to do that without having unequal transit depths and causing scatter in the transit shape when folded at 3.23d. I cannot make out what your point is about Kepler 20f. This would produce a single feature in a light curve folded on a period of 19.48d, but the amplitude of that signal is small (and I can't see it).
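As a concrete way to apply this check with the same lightkurve interface the question already uses (a sketch of my own; attribute and method names may differ between lightkurve versions), fold the light curve at the best BLS period and at its 0.5x and 2x aliases and compare:

```python
# Sketch: fold at the candidate period and its aliases; only the true period
# should show a single clean transit per cycle, with no extra dips and no
# excess scatter.  Assumes the same (older) lightkurve API as in the question.
import numpy as np
import lightkurve as lk

lc = lk.search_lightcurvefile("Kepler-6", quarter=1).download().PDCSAP_FLUX
lc = lc.remove_nans().normalize()

pg = lc.to_periodogram(method="bls", period=np.arange(0.5, 10, 0.0001))
best_period = pg.period_at_max_power   # astropy Quantity, in days

for factor in (0.5, 1.0, 2.0):
    folded = lc.fold(best_period.value * factor)
    folded.scatter()                   # inspect: one dip per cycle, or several?

# To look for additional planets, mask out the detected transits and re-run
# the BLS periodogram on the remaining data points.
```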
{ "domain": "astronomy.stackexchange", "id": 4776, "tags": "exoplanet, python, planetary-transits, kepler, light-curve" }