Implementation of stack using pointers
Question: Please review my code and let me know how I can possibly improve it. #include<iostream> using namespace std; class node { private: int data; public: node *next; node(int element) { data=element; next=NULL; } int getdata() { return(data); } }; class stack { private: node *start; public: void push(int element); int pop(); void traverse(); stack() { start=NULL; } }; inline void stack::push(int element) { node *ptr; node *temp=start; ptr=new node(element); if(start==NULL) { start=ptr; ptr->next=NULL; } else { while(temp->next!=NULL) { temp=temp->next; } temp->next=ptr; ptr->next=NULL; } } inline int stack::pop() { node *temp=start; node *top=start; while(temp->next!=NULL) { temp=temp->next; } while(top->next!=temp) { top=top->next; } top->next=NULL; int item=temp->getdata(); delete (temp); } inline void stack::traverse() { node *temp=start; while(temp!=NULL) { cout<<"data is"<<temp->getdata(); temp=temp->next; } } int main() { stack a; for(int i=0;i<10;i++) { a.push(i); } a.traverse(); for(int i=0;i<5;i++) { a.pop(); } a.traverse(); return(0); } Answer: First of all, I will start with one of the most common remarks: please, do not use using namespace std;. It is especially bad if you write it in a header file since it leads to namespace pollution and name clashes. Instead of a method named traverse, it would be better to overload operator<< to print your list. Here is how you could adapt your function (put that code in your class): friend std::ostream& operator<<(std::ostream& stream, const stack& sta) { node *temp= sta.start; while(temp!=NULL) { stream<<"data is"<<temp->getdata(); temp=temp->next; } return stream; } Don't forget to make it a friend function so that it can access the private members of stack. Generally speaking, you shouldn't mark your functions inline. While it may sometimes speed up your code for small functions (your functions aren't small), most of the time, it will simply be totally ignored by the compiler. 
Even if it is taken into account by the compiler for inlining, it could still make your executable larger and will probably not significantly speed up your code. Worse: since inline functions have to live in the header file, you will have to recompile every file that includes this one whenever you change the implementation of your functions. Bottom line: remove all your inline qualifiers; they won't gain you anything and may make things worse. From the implementation of pop, I bet that it is meant to return the popped int, since you called getdata. However, you forgot to return item. Your compiler should have warned you about the unused variable item and the non-void function pop returning nothing. If not, you should tell your compiler to give you more warnings. You don't need to write return 0; at the end of the function main in C++. If nothing has been returned when the end of the function is reached, it automagically does return 0;. This remark only holds for the function main, though. Also, you don't need to put parentheses around the returned result; return is not a function.
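To make the advice concrete, here is one possible rewrite incorporating the review's points, plus one extra change the answer does not mention: pushing and popping at the head instead of walking to the tail, which makes both operations O(1) and is the usual way a linked stack is written. This is a sketch of one reasonable version, not the only correct one.

```cpp
#include <cassert>
#include <iostream>

// Sketch of the reviewed stack with the suggested fixes applied:
// - no `using namespace std;`
// - push/pop work at the head, so both are O(1) (my addition, not from the answer)
// - pop actually returns the popped value
// - operator<< replaces traverse()
class node {
private:
    int data;
public:
    node* next;
    explicit node(int element) : data(element), next(nullptr) {}
    int getdata() const { return data; }
};

class stack {
private:
    node* start;
public:
    stack() : start(nullptr) {}
    ~stack() { while (start != nullptr) pop(); }

    void push(int element) {
        node* ptr = new node(element);
        ptr->next = start;   // new node becomes the top
        start = ptr;
    }

    int pop() {
        node* top = start;
        int item = top->getdata();
        start = top->next;
        delete top;
        return item;         // the `return` the original forgot
    }

    bool empty() const { return start == nullptr; }

    friend std::ostream& operator<<(std::ostream& stream, const stack& sta) {
        for (node* temp = sta.start; temp != nullptr; temp = temp->next)
            stream << "data is " << temp->getdata() << '\n';
        return stream;
    }
};
```

With this version, the original main works unchanged apart from printing via `std::cout << a;` instead of `a.traverse();`. Note that, like the original, pop() on an empty stack is undefined behavior; a production version would guard against that.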
{ "domain": "codereview.stackexchange", "id": 11693, "tags": "c++, beginner, stack, collections, pointers" }
What are respirasomes?
Question: I have read wikipedia but don't understand it well. Do we mean Complex I, II, III and IV when we say respirasomes? Answer: What the article is saying is that there are several respirasomes, each of which consists of multiple other mitochondrial complexes or cytochromes. The article gives an example of three commonly observed respirasomes when it says: The most common supercomplexes observed are Complex I/III, Complex I/III/IV, and Complex III/IV. So, the short answer to your question is that a respirasome refers to a set of multiple cytochromes whose combined action leads to respiration. This paper goes into more detail, and reiterates that the term "respirasome" is used to describe multiple cytochrome supercomplexes.
{ "domain": "biology.stackexchange", "id": 5378, "tags": "cell-biology, mitochondria" }
AT&T assembly - Basic loop & write - follow-up
Question: This is a follow-up question to this one: AT&T Assembly - Basic loop & write The code loops to display "Hello, World!" ten times. I implemented the syscall instead of int $0x80, used a decrementing loop to avoid a useless instruction and commented the code. Is there any way to make it better? When debugging with GDB, it appears exit $0 is part of l. However, I would like it to be part of _start (since l represents the loop only). Is that possible? .section .data hello: .string "Hello, world!\n" len = . -hello .equ EXIT, 60 .equ WRITE, 1 .equ STDOUT, 1 .section .bss # Write a str of length len on the standard output. .macro write str, len movq $WRITE, %rax movq $STDOUT, %rdi movq \str, %rsi movq \len, %rdx syscall .endm # Exit with the specified error code .macro exit code movq $EXIT, %rax movq \code, %rdi syscall .endm .section .text .globl _start # Loops to display "Hello, World!" ten times. _start: movq $10, %r8 # Counter l: write $hello, $len dec %r8 jnz l exit $0 Answer: This is much better. Since this is v2, let me introduce you to a couple more slightly advanced concepts. First: The second time you execute write, what will be in (most of) those registers (rdi, rsi, rdx)? It's easy to think in terms of C (or practically every other high level language), and how variables have 'scope.' But asm's registers (even when used with call) don't work that way. There is nothing 'wrong' with using macros like this. It makes reading / maintaining much easier. And in more complex examples, you probably wouldn't be able to assume that these registers haven't been modified since the last write. But since we are critiquing this particular case, be aware that the code is slightly less efficient than it could be (since it re-assigns values that are already there 3 * 9 times). Second (follows from the first): write modifies a LOT of registers (as a percentage of how many registers there are). 
In addition to the ones you see (rax, rdi, rsi, rdx), there are also the registers 'clobbered' by using syscall (rcx, r11). It's going to be difficult to compose complex code if every time you want to write something, that many registers get destroyed. This leads us to the concept of 'calling conventions' (something I've written about before). In short, you come up with (or use an existing) standard set of rules about what registers your macros/routines are allowed to modify, then avoid using those 'volatile' registers in the routines that call them (or save the contents of the registers yourself before making the call). And if the macros/routines need more registers than your 'rules' allow, they must push/pop all the additional registers to preserve their contents. Yes, this is starting to get a bit deeper. But register handling is something you need to start thinking about if you want to read/write asm. Additionally, it starts to explain some of the 'junk' you see when you disassemble your C code. Those push/pops you see around the calls and at the top of functions? This is why they are there. I could talk about the cost/benefits of macros vs routines, but this is probably enough. A few more thoughts: Add a comment at the top of the file. Even something simple like The code loops to display "Hello, World!" ten times. Stylistically, perhaps . - hello? Negative hello seems weird. Yes, nitpicky. But that's what you get when you ask for code reviews on such basic code. Edit1: Hmm. You haven't accepted an answer yet. Were you looking for something else? Or perhaps I wasn't clear about what I was trying to explain? Let me take a shot at explaining this again: Using macros isn't the same as defining routines. Instead of defining the code in one place and calling it from other places, it pastes the entire macro in wherever it is invoked. 
So your code basically expands to: movq $10, %r8 l: movq $1, %rax movq $1, %rdi movq $hello, %rsi movq $15, %rdx syscall dec %r8 jnz l movq $60, %rax movq $0, %rdi syscall If you were to call write twice in a row, you'd get: movq $10, %r8 l: movq $1, %rax movq $1, %rdi movq $hello, %rsi movq $15, %rdx syscall movq $1, %rax movq $1, %rdi movq $hello, %rsi movq $15, %rdx syscall dec %r8 jnz l movq $60, %rax movq $0, %rdi syscall Now, if I'm not mistaken, the syscall you are using overwrites rax (uses it as a return value), but leaves the other parameters alone. Such being the case, it would be slightly more efficient to write: movq $1, %rdi movq $hello, %rsi movq $15, %rdx movq $10, %r8 l: movq $1, %rax # or perhaps movq %rdi, %rax? syscall dec %r8 jnz l movq $60, %rax movq $0, %rdi syscall This doesn't lend itself well to macros. But if performance was more important to you than "maintainability," this would be (slightly) better. Alternately, you could do this as a routine. The existing code for write uses 6 of the 15 x64 registers, stomping on their existing values in order to make the call. Registers are a precious and limited resource. If you were doing anything much more complex, you would start to run out. Using routines allows you to bundle code in such a way that only a limited number of registers get modified, causing a minimum amount of disruption in the code that calls it. For example, if you were to use the Microsoft x86 'fastcall' calling convention (a poor choice for 64bit linux, but useful as an illustration), then the first parameter gets placed in rcx, the second in rdx, and the return value (if any) goes in rax. rcx, rdx and rax can all be changed by the routine, but all other registers must be returned unchanged. 
So, re-working write with this in mind, we get something like this: # On entry: # rcx points to the string to print # rdx contains the length of the string push %rsi # Save the non-volatile registers we modify push %rdi push %r11 movq %rcx, %rsi # Move the string pointer to the correct register movq $WRITE, %rax movq $STDOUT, %rdi syscall # At this point: # rax contains the return value from the call # rcx/r11 have been clobbered pop %r11 # Restore the registers and return pop %rdi pop %rsi ret You can call it like this: mov $hello, %rcx mov $len, %rdx call write # At this point, the contents of rcx and rdx are undefined. Note that when I say "rcx and rdx are undefined," I mean they are undefined by definition. Yes, you can look at write and see what they would contain, but you pretend like you don't. This way someone can modify write to work slightly differently, and every place that calls it will still work correctly. As long as everybody follows the agreed-upon 'rules.' The implications here define a lot of how registers actually get used, both by compilers when they generate asm and by people who write their own. If you know how registers will be treated when you make a call, that helps you choose which registers to use for what. For example, write needs the length in rdx. So if you needed to count how many bytes were in a string before passing it to write, then using rdx when doing the count suddenly makes a lot more sense. And using rcx to hold your 1-10 loop counter would obviously be a poor choice, since it gets wiped out during each write. To sum up: Using the write macro allows for easy-to-read code. It also allows you to call the macro (somewhat) generically. However, it has some limitations that may make it a bad choice for more complicated code, or if performance is a primary consideration. That's what I see when I read this code.
{ "domain": "codereview.stackexchange", "id": 23589, "tags": "beginner, assembly" }
If the standard state symbol means that the substance is pure (and at 1 bar) how is it possible to have a standard REACTION enthalpy?
Question: For the reaction of one mole of substance $A$ in equilibrium with one mole of substance $B$, the standard reaction enthalpy is defined: $$\Delta_rH^{\theta}(T)=H_m^{\theta}(B;T)-H_m^{\theta}(A;T)$$ However, I have two issues with this. Firstly: the standard state of a substance (denoted by the symbol $\theta$) is the pure material at a pressure of 1 bar. How can the reaction enthalpy be standard, when the reactants have to be mixed (thus are not pure) to react? Secondly: what is $H_m^{\theta}$? Surely nothing can have a molar enthalpy because only changes in enthalpy can be measured, not absolute values. Am I right in thinking that this is because the enthalpy scale has no zero value? Answer: The standard reaction enthalpy you have described is only for a pure substance if you are talking about a standard enthalpy of formation. Otherwise the equation wouldn't be valid for reactions that have more than one product, right? In the specific case of a standard enthalpy of formation, you are talking about the enthalpy change associated with the creation of one mole of a pure substance from its component elements in their standard states. This gets at your second question ... the zero on the enthalpy scale is established by convention (and this is the reason why the definition of standard enthalpy of formation works) ... the standard enthalpy of formation of an element in its standard state is DEFINED to be zero. Now, you can estimate the standard reaction enthalpy for any chemical reaction by subtracting the summed standard enthalpies of formation of the reactants from the summed standard enthalpies of formation of the products, taking into account the appropriate stoichiometric coefficients from the balanced chemical equation. That process is similar to what you have written, but not precisely the same, since you omitted the stoichiometry information (and the chemical equation, for that matter). 
The standard enthalpy estimate obtained will pertain to reactions carried out at the given conditions .. standard enthalpies are defined for 1 atm pressure but variable temperature, so you may need to correct tabulated standard enthalpies of formation to match the given reaction conditions. One last point is that enthalpy is absolutely defined as a thermodynamic quantity .. it is the internal energy plus the product of pressure and volume, $H=U + pV$. The issue is that the internal energy of a substance is hard to define absolutely .. are you going to include the binding energies of the core electrons to their atoms? How about the strong forces holding the nuclei together? Thus for chemical processes it almost always makes sense to deal with just changes in enthalpy, since most of the really low level stuff I mentioned doesn't change between reactants and products. I think that may have been part of what you were getting at with your second question.
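As a worked illustration of the formation-enthalpy bookkeeping described above (the numerical values are commonly tabulated ones quoted from memory, so treat them as approximate):

```latex
% Standard reaction enthalpy from standard enthalpies of formation:
%   \Delta_r H^\theta = \sum_{\text{products}} \nu_i \Delta_f H^\theta_i
%                     - \sum_{\text{reactants}} \nu_i \Delta_f H^\theta_i
% Example: CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(l)
\Delta_r H^{\theta}
  = \left[\Delta_f H^{\theta}(\mathrm{CO_2}) + 2\,\Delta_f H^{\theta}(\mathrm{H_2O, l})\right]
  - \left[\Delta_f H^{\theta}(\mathrm{CH_4}) + 2\,\Delta_f H^{\theta}(\mathrm{O_2})\right]
  \approx \left[-393.5 + 2(-285.8)\right] - \left[-74.8 + 0\right]
  \approx -890\ \mathrm{kJ\,mol^{-1}}
```

Note how $\Delta_f H^{\theta}(\mathrm{O_2}) = 0$ by the convention the answer describes: oxygen is an element in its standard state.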
{ "domain": "chemistry.stackexchange", "id": 2543, "tags": "physical-chemistry, thermodynamics, enthalpy" }
How do I prove no algorithm exists for a given problem?
Question: Is there a general framework for showing that a problem has no algorithm? For example, to show that two problems are equally hard, we use reductions. One example of where this was done is Hilbert's 10th problem. Answer: The term or tag you are looking for is "undecidability" or "computability". The basic version of the theory of undecidability is a part of the theory of computation, which is an ingredient of the curriculum of all students majoring in Computer Science, as far as I know. You can search for "undecidability" or "halting problem", the most famous and the most canonical problem that cannot be solved by an algorithm, in your textbook or course material. Or right here. There is also plenty of other material around.
{ "domain": "cs.stackexchange", "id": 14211, "tags": "complexity-theory" }
Why is it that the normal force experienced as an object travels along a parabolic surface increases as compared to a flat surface?
Question: Is this because the surface 'comes up against' the object due to its curved nature? Answer: Yes, but the thought behind your intuition needs to be spelled out. A rigid object does not deform when you push on it. If you push on it, it pushes back with a reaction force just hard enough to prevent deformation. If an object exerts a force on a surface, the reaction force is just hard enough to prevent the object from moving into the surface. This force does not prevent sideways motion. That would be a friction force. Both ice and concrete are rigid and exert a reaction force. But concrete also exerts a friction force. Note that friction may prevent sliding, but not rolling. On a flat horizontal surface, the normal force is equal to the weight of the object. On a tilted flat surface, weight is vertical and the normal force is perpendicular to the surface. You have to break the forces up into components. We will choose components normal and parallel to the surface. $$\vec F_{net} = \vec w + \vec F_{norm}$$ $$= \vec w_{parallel} + \vec w_{norm} + \vec F_{norm}$$ The angles do not change as the object rolls downward, and so $\vec w_{parallel}$ and $\vec w_{norm}$ do not change. The surface adjusts $\vec F_{norm}$ to be just enough to cancel $\vec w_{norm}$. $\vec F_{norm}$ is constant. $$\vec F_{norm} = - \vec w_{norm}$$ Also $$\vec F_{net} = m \vec a_{net} = \vec w_{parallel}$$ The forces add up to a constant net force and the object accelerates uniformly parallel to the surface. As long as the object stays in contact with the surface, the velocity is always parallel to the surface. In this case, velocity is in the same direction as the force. $$\vec F_{net} = \vec F_{parallel} $$ For a curved surface like a parabola, it is the same idea but the angles do change. As the object rolls downward, you have to figure out a new angle for each point. 
$$\vec F_{net} = \vec w + \vec F_{norm}$$ $$m \vec a_{parallel} + m \vec a_{norm} = \vec w_{parallel} + \vec w_{norm} + \vec F_{norm}$$ We can write the components separately. $$m \vec a_{norm} = \vec w_{norm} + \vec F_{norm}$$ $$m \vec a_{parallel} = \vec w_{parallel}$$ Because the velocity changes direction, there must be a component of $\vec a$ perpendicular to $\vec v$. You may need to think about that a bit to convince yourself. Keep in mind that if $\vec a$ is always parallel to $\vec v$, the object speeds up or slows down. But the change in velocity is always in the same direction as the velocity. If $m \vec a_{norm} \ne \vec 0$, $$\vec F_{norm} = m \vec a_{norm} - \vec w_{norm}$$ The surface must push back on the object hard enough to (1) oppose the component of weight that tries to push the object into the surface, and (2) change the direction of the object's motion. As you more or less said, the object comes up against the surface due to its curved nature.
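To make the increase quantitative at one easy point: at the bottom of a concave-up (valley-shaped) section of the parabola the surface normal is vertical, so the normal-direction equation reduces to a one-line estimate. Here $\rho$ is the local radius of curvature and $v$ the speed, symbols introduced for this illustration:

```latex
% Normal direction at the lowest point of a concave-up curve:
%   m a_norm = w_norm + F_norm, with |a_norm| = v^2 / rho pointing upward.
F_{norm} - mg = \frac{m v^{2}}{\rho}
\qquad\Longrightarrow\qquad
F_{norm} = mg + \frac{m v^{2}}{\rho} \;>\; mg
% whereas on a flat horizontal surface F_norm = mg exactly.
```

The faster the object moves, or the tighter the curve, the harder the surface must push to keep bending the velocity vector.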
{ "domain": "physics.stackexchange", "id": 95372, "tags": "kinematics" }
Can beamforming be created by transmitting same signal with different phase
Question: As known, the basic idea of beamforming is to transmit the same signal using multiple transmit antennas, let's say we have $4$ transmit antennas, leading to an increase in the power of the transmitted signal as well as a focusing of its direction. My question: what about if we transmit the same signal but with different phases? For example, instead of transmitting $X = [x;x;x;x]$ the new signal becomes $X = [x; -x; x; -x]$. Will the beamforming advantages be kept, and why? Answer: Can beamforming be created by transmitting same signal with different phase Yes, that's literally how you do beamforming, usually. It's called "phased array", if your geometric arrangement of antennas can be called an "array". You might write down the formula which you use to describe your understanding of beamforming, and you'll see the phase terms in that. Will the beamforming advantages be kept, and why? We don't know exactly what kind of advantages you're thinking of, but since that's exactly what most beamformers do, yes? Again, write down your formula for a beamformer. Then, look for the terms that change your signal's phase.
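A small numeric sketch (my own construction, not from the answer, and assuming a 4-element uniform linear array with half-wavelength spacing) shows what the alternating weights do to the array factor:

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Array factor AF(theta) = sum_n w_n * exp(j * k * d * n * sin(theta))
// for a uniform linear array. With element spacing d = lambda/2, the
// per-element phase increment is k*d*sin(theta) = pi*sin(theta).
std::complex<double> array_factor(const std::vector<std::complex<double>>& w,
                                  double theta) {
    const double pi = std::acos(-1.0);
    std::complex<double> af(0.0, 0.0);
    for (std::size_t n = 0; n < w.size(); ++n) {
        const double phase = pi * static_cast<double>(n) * std::sin(theta);
        af += w[n] * std::exp(std::complex<double>(0.0, phase));
    }
    return af;
}
```

With uniform weights $[x;x;x;x]$ the magnitude peaks at broadside ($\theta = 0$, $|AF| = 4$); with alternating weights $[x;-x;x;-x]$ broadside cancels completely ($|AF| = 0$) and the peak moves to endfire ($\theta = \pi/2$, $|AF| = 4$). So the array still beamforms, just toward a different direction, which is the answer's point: sign flips are just $180°$ phase shifts.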
{ "domain": "dsp.stackexchange", "id": 10297, "tags": "beamforming, mimo" }
Is matching with mismatches a special (parametrized) case of the Closest String problem?
Question: I am a bit confused. Somehow I have a problem connecting two problems together: the Closest String problem and the problem of matching with mismatches. They seem to be related, but I fail to see the connection. The Closest String problem is defined as: Instance: Strings $S_{1},S_{2}...S_{n}$ over alphabet $\Sigma$ of length $L$ each and non-negative integers $d$ and $n$. Parameters: $n,d$ Question: Is there a string $s$ of length $L$ such that $\delta(s,S_{i})\leq d$ for all $i=1..n$? Note: $\delta(x,y)$ denotes the Hamming distance between $x$ and $y$. This problem is proven to be NP-complete. On the other hand we have the problem of matching with mismatches, which is described as: The problem of string matching with $d$ mismatches consists of finding all occurrences of a pattern of length $m$ in a text of length $n$ such that in at most $d$ positions the text and the pattern have different symbols. In the following, we assume that $0 < d < m$ and $m\leq n$. Landau and Vishkin gave the first (to my knowledge) efficient algorithm to solve this problem, in $O(dn)$ time. Now my question is: Is matching with mismatches a special parametrized case of the Closest String problem (or can it be seen as one), and how is this connection made? Answer: In the latter you are given the string $s$ that you are looking for in the former. An algorithm for the first one is to search for a string $s$ (try all strings), run the latter procedure, and verify that the set you get is the entire set.
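The direction the answer sketches (guess $s$, then verify it with a mismatch check against every $S_i$) can be written down directly. This exhaustive search is my own illustrative sketch; it is exponential in $L$, which is consistent with Closest String being NP-complete while the verification step stays cheap:

```cpp
#include <cassert>
#include <cstddef>
#include <optional>
#include <string>
#include <vector>

// delta(x, y): Hamming distance between two equal-length strings.
int hamming(const std::string& x, const std::string& y) {
    int d = 0;
    for (std::size_t i = 0; i < x.size(); ++i) d += (x[i] != y[i]);
    return d;
}

// Brute-force Closest String: enumerate every candidate s of length L over
// `alphabet` and accept the first one with delta(s, S_i) <= d for all i.
// The inner check is exactly "matching with at most d mismatches" against
// each S_i; the outer loop is the exponential "try all strings" search.
std::optional<std::string> closest_string(const std::vector<std::string>& S,
                                          const std::string& alphabet, int d) {
    const std::size_t L = S[0].size();
    std::vector<std::size_t> idx(L, 0);        // odometer over alphabet^L
    while (true) {
        std::string s(L, ' ');
        for (std::size_t i = 0; i < L; ++i) s[i] = alphabet[idx[i]];
        bool ok = true;
        for (const auto& t : S)
            if (hamming(s, t) > d) { ok = false; break; }
        if (ok) return s;
        std::size_t i = 0;                     // advance the odometer
        while (i < L && ++idx[i] == alphabet.size()) idx[i] = 0, ++i;
        if (i == L) return std::nullopt;       // all candidates exhausted
    }
}
```

For example, for $S = \{aa, ab, bb\}$ over $\{a, b\}$, a center exists for $d = 1$ (the string $ab$ works) but not for $d = 0$, since the inputs are not all identical.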
{ "domain": "cs.stackexchange", "id": 989, "tags": "np-complete, decision-problem, matching" }
Misner, Thorne and Wheeler, Box 9.2 Commutator ... doesn't make sense to me
Question: I apologize for the goofy commutator $\left[\left[\_,\_\right]\right]$ notation. MathJax doesn't like my \llbracket \rrbracket notation. And I religiously use $\left[\dots\right]$ for function arguments. Edit to add for future reference: \left[\![\_,\_\right]\!] $\to\left[\![ \_,\_ \right]\!]$ This is from Chapter 9, Box 9.2 of Gravitation, by Charles W. Misner, Kip S. Thorne & John Archibald Wheeler. It seems incorrect to me. Is it? A. Pictorial representation of flat spacetime For ease of visualization, consider flat spacetime, so the two vector fields $\mathfrak{u}\left[\mathscr{P}\right]$ and $\mathfrak{v}\left[\mathscr{P}\right]$ can be laid out in spacetime itself. Choose an event $\mathscr{P}_{0}$ where the commutator $\left[\left[ \mathfrak{u},\mathfrak{v}\right]\right] $ is to be calculated. Give names $\mathscr{P}_{1},\mathscr{P}_{2},\mathscr{P}_{3},\mathscr{P}_{4}$ to the events pictured in the diagram. Then the vector $\mathscr{P}_{4}-\mathscr{P}_{3},$ which measures how much the four-legged curve fails to close, can be expressed in the coordinate basis $$ \mathscr{P}_{4}-\mathscr{P}_{3}=\left(\mathfrak{u}\left[\mathscr{P}_{0}\right]+\mathfrak{v}\left[\mathscr{P}_{1}\right]\right)-\left(\mathfrak{u}\left[\mathscr{P}_{2}\right]+\mathfrak{v}\left[\mathscr{P}_{0}\right]\right)$$ $$ =\left(\mathfrak{v}\left[\mathscr{P}_{1}\right]-\mathfrak{v}\left[\mathscr{P}_{0}\right]\right)-\left(\mathfrak{u}\left[\mathscr{P}_{2}\right]-\mathfrak{u}\left[\mathscr{P}_{0}\right]\right) $$ $$ =\left(v^{\beta}{}_{,\alpha}u^{\alpha}\mathfrak{e}_{\beta}\right)_{\mathscr{P}_{0}}-\left(u^{\beta}{}_{,\alpha}v^{\alpha}\mathfrak{e}_{\beta}\right)_{\mathscr{P}_{0}}+\text{errors} $$ $$ =\left[\left[\mathfrak{u},\mathfrak{v}\right]\right]_{\mathscr{P}_{0}}+\text{errors}, $$ where $\text{errors}$ consists of terms such as $v^{\beta}{}_{,\mu\nu}u^{\mu}u^{\nu}\mathfrak{e}_{\beta}.$ Notice that if $\mathfrak{u}$ and $\mathfrak{v}$ are halved everywhere, then $\left[\left[ 
\mathfrak{u},\mathfrak{v}\right]\right] $ is cut down by a factor of 4, while the error terms in the above go down by a factor of 8. From what is given, $v^{\beta}{}_{,\alpha}$ and $u^{\beta}{}_{,\alpha}$ are evaluated at $\mathscr{P}_{0},$ so they remain constant as $\mathfrak{u}$ and $\mathfrak{v}$ are reduced in magnitude. Call $\left\{ u^{\beta}{}_{,\alpha}\right\} _{\mathscr{P}_{0}}=\left\{ a^{\beta}{}_{\alpha}\right\} $ and $\left\{ v^{\beta}{}_{,\alpha}\right\} _{\mathscr{P}_{0}}=\left\{ b^{\beta}{}_{\alpha}\right\} $, which are constants in the limit as $\mathfrak{u}$ and $\mathfrak{v}$ go to $\mathfrak{0}$; as are $\left\{ \mathfrak{e}_{\beta}\right\} _{\mathscr{P}_{0}}$. So $$ \left(v^{\beta}{}_{,\alpha}u^{\alpha}\mathfrak{e}_{\beta}\right)_{\mathscr{P}_{0}}-\left(u^{\beta}{}_{,\alpha}v^{\alpha}\mathfrak{e}_{\beta}\right)_{\mathscr{P}_{0}} $$ $$ =\left(b^{\beta}{}_{\alpha}u^{\alpha}-a^{\beta}{}_{\alpha}v^{\alpha}\right)\mathfrak{e}_{\beta} $$ $$ =\left[\left[ \mathfrak{u},\mathfrak{v}\right]\right] _{\mathscr{P}_{0}}, $$ and $$ \left[\left[ \frac{\mathfrak{u}}{2},\frac{\mathfrak{v}}{2}\right]\right] _{\mathscr{P}_{0}}=\frac{1}{2}\left[\left[ \mathfrak{u},\mathfrak{v}\right]\right] _{\mathscr{P}_{0}}. $$ So I get a factor of 2, not 4. The graphic actually uses differentiable vector fields. When I reduce the vectors by half, the commutator is reduced by half. Apparently the polygon representing the open quadrilateral retains its shape as the vectors are uniformly reduced in magnitude. Edit to add: I believe I figured this out. The displacements along $\mathfrak{u}\left[\mathscr{P}\right]$ and $\mathfrak{v}\left[\mathscr{P}\right]$ represent unit changes in coordinate values, so reducing the vectors by half reduces the coordinate mesh, and therefore the scale by which we are differentiating. Edit to retract the previous suggestion. It is exactly wrong. The curves to which $\mathfrak{u}$ and $\mathfrak{v}$ are tangent are not in general coordinate curves. 
Answer: This is far from rigorous, but it explains why the partial derivatives $u^{\beta}{}_{,\alpha}$ and $v^{\beta}{}_{,\alpha}$ scale proportionally with a uniform scaling of the parameters. I shall assume that the families of curves to which the vector fields $\mathfrak{u}$ and $\mathfrak{v}$ are parallel remain fixed as parameters are uniformly scaled. The included figure shows a very simple example representing only one family of tangent curves. These are the horizontal blue $v$ "curves". The vertical black line at the left side represents the set of points on adjacent $v$ curves with a parameter value of $0$. It is assumed that the parameterization changes continuously as "adjacent" curves are encountered when "cutting across the grain" of the family of $v$ curves. The various curves which begin vertically at the bottom represent points of equal parameter value along the $v$ curves. The black arrows lying under the red arrows depict the tangent vectors to the originally parameterized $v$ curves. The red arrows represent the tangent vectors when the parameter units are uniformly scaled by .5. The black and red arrows share common tails, intentionally placed at a common integer x value. The rectangular Cartesian coordinate system is shown as a light green grid. Between the tips of the black arrows is a line segment approximating the rate of change in the originally parameterized tangent vectors, separated in the y direction by a unit change in coordinate value. The line segment joining the tips of the red arrows approximates the rate of change in the tangent vectors after scaling the parameters. The "slopes" $\frac{\Delta x}{\Delta y}$ of the joining line segments approximate the partial derivatives $v^{x}{}_{,y}$ which I had mistakenly taken to be constant under a uniform change of parameter. The red arrows are approximately half as long as the corresponding black arrows, and become exactly half in the limit as the parameters are reduced to zero.
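For reference, here is the component bookkeeping that recovers MTW's factor of 4: when the fields are scaled pointwise, $\mathfrak{u}\to\varepsilon\mathfrak{u}$ and $\mathfrak{v}\to\varepsilon\mathfrak{v}$, the components become $\varepsilon u^{\beta}$ and $\varepsilon v^{\beta}$, so the partial derivatives scale too:

```latex
% Scale both fields pointwise: u -> eps*u, v -> eps*v.
% Then (eps v^beta)_{,alpha} = eps v^beta_{,alpha}, and likewise for u.
\left[\!\left[\,\varepsilon\mathfrak{u},\varepsilon\mathfrak{v}\,\right]\!\right]^{\beta}
  = (\varepsilon v^{\beta})_{,\alpha}\,(\varepsilon u^{\alpha})
  - (\varepsilon u^{\beta})_{,\alpha}\,(\varepsilon v^{\alpha})
  = \varepsilon^{2}\left(v^{\beta}{}_{,\alpha}u^{\alpha}
                       - u^{\beta}{}_{,\alpha}v^{\alpha}\right)
  = \varepsilon^{2}\left[\!\left[\,\mathfrak{u},\mathfrak{v}\,\right]\!\right]^{\beta}
% With eps = 1/2 the commutator drops by 4, and the cubic error terms
% (three powers of the fields) drop by 8, as Box 9.2 states.
```

This is exactly where the question's calculation diverged: treating $v^{\beta}{}_{,\alpha}$ as fixed while halving the fields yields only one power of $\varepsilon$ instead of two.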
{ "domain": "physics.stackexchange", "id": 50585, "tags": "general-relativity, differential-geometry, curvature, commutator, vector-fields" }
Resolution of X-ray crystallography
Question: A structure determined by X-ray crystallography has a resolution of 1.5 Å. When I look at the coordinates, I find every backbone C-N distance is 1.32 Å, i.e., accurately predicted. If the resolution is not good, why is every C-N bond distance accurately predicted? Answer: The structural model of a protein is obtained using both experimental data and prior knowledge about the geometry of macromolecules. As a structural model is refined, interatomic distances are also restrained. So, the C-N distance isn't really predicted. Rather, knowledge about this distance is used to construct a reasonable model. In response to the comment below I'll cite Rupp's Biomolecular Crystallography, the most comprehensive book on this subject. Refinement and restraints are covered in Chapter 12, Model building and refinement. An observation does not have to be an experimental observation (measured data) specific to the particular structure, but can be any type of general prior knowledge regarding molecular stereochemistry. Known stereochemistry can be exploited and implemented in macromolecular refinement in the form of geometric constraints and restraints. [...] We know from accurate and precise small molecule and peptide fragment structures [...] that bond lengths and bond angles show distinct and relatively narrow distributions around their mean positions. [...] Refinement programs read restraint target values specific for each residue and each ligand molecule from restraint library files, which are subject to continuous empirical update. The value given in this book (Table 12-1) for peptide bond C-N is 1.336 Å with variance 0.023 Å.
{ "domain": "biology.stackexchange", "id": 4944, "tags": "proteins, lab-techniques, protein-structure, xray-crystallography" }
Canonical ensemble derivation
Question: I have checked several references for the derivation of the probability function of the canonical ensemble. I have seen two (essentially similar) approaches. Both assume a system is placed in a large reservoir: Study the probability that the system is in a given microstate. Assert that this probability is proportional to $\Omega_S$ where $\Omega_S$ is the multiplicity of the system. For example, https://www2.oberlin.edu/physics/dstyer/StatMech/CanonicalEnsemble.pdf. Study the probability that the reservoir is in a given microstate. Assert that this probability is proportional to $\Omega_R$ where $\Omega_R$ is the multiplicity of the reservoir. Both of these seem unsatisfactory to me. In both cases it seems like the probability should be proportional to $\Omega_R \cdot \Omega_S.$ Why is it okay to neglect this multiplication? Answer: You're probably misunderstanding the Oberlin document. They consider the class of states where the system is in a definite microstate, so $\Omega_S=1$. Then $\Omega_S\Omega_R=\Omega_R$.
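Spelling the answer's point out in two lines: with the system pinned to one definite microstate $s$ of energy $E_s$, we have $\Omega_S = 1$, and Taylor-expanding the reservoir's entropy $S_R = k_B \ln \Omega_R$ (standard notation, with $E_{tot}$ the fixed total energy) produces the Boltzmann factor:

```latex
P(s) \;\propto\; \Omega_S\,\Omega_R(E_{tot}-E_s)
     \;=\; \Omega_R(E_{tot}-E_s)
     \;=\; e^{S_R(E_{tot}-E_s)/k_B}
% Expand S_R to first order in the small energy E_s,
% using (dS_R/dE) = 1/T for the reservoir:
S_R(E_{tot}-E_s) \;\approx\; S_R(E_{tot}) - E_s\,\frac{\partial S_R}{\partial E}
                 \;=\; S_R(E_{tot}) - \frac{E_s}{T}
\qquad\Longrightarrow\qquad
P(s) \;\propto\; e^{-E_s/k_B T}
```

The multiplication by $\Omega_S$ is not neglected; it is simply equal to one for the class of states being counted.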
{ "domain": "physics.stackexchange", "id": 96857, "tags": "statistical-mechanics" }
Why don't (electrically charged) particles act on themselves?
Question: As per the Maxwell-Gauss equation, an electron modifies the electrical field around it. Therefore it should act (through an electric force) on itself. Now obviously this force would be directed towards the electron itself and its vector would be along $\overrightarrow0$, but since the distance from the electron to itself (neglecting quantum considerations for now) is 0, the force vector can be expressed in Cartesian coordinates as: $$ \overrightarrow F=\frac{e^2}{4\pi\varepsilon_0r^2}\overrightarrow0=\frac{e^2}{4\pi\varepsilon_0r^2}\cdot0\hat{e_x}=\infty\cdot0\hat{e_x} $$ Now this raises an infinity, which heavily suggests that this expression for the electric force is wrong in this context, and maybe there would be a more correct (set of) formula(s) from quantum mechanics that I do not know (please tell me if that's the case). But it's also an indeterminate form — is there a mathematical argument that resolves it? Or is the act of considering a force applied by the electron on itself nonsensical in the first place? My current guess for an argument is that symmetry considerations clearly make it so that the electron in this situation should have no "preferred direction", so there should not be any net force. But this could also mean that the electron does interact with itself, and that the forces it applies on itself are always cancelling each other out. Maybe this is more of a "problem approaching" issue than a "fundamental physics" one. But please let me know if this is related to anything interesting (even if it's probably not). Answer: Point particles do self-interact, sort of. Remember that the description of an electron as a "particle" is a semiclassical simplification which sweeps an awful lot of modern physics under the rug. In quantum field theory, "an electron" is a quantized excitation of a spinor field associated with a particular mass, charge, and other quantum numbers. 
In the context of quantum field theory, when we say that “the electron is a point particle,” we don’t mean that some zero-size analog of a sand grain exists somewhere to be located or not. A better interpretation is that the electron is structureless. No matter how closely you look at an electron, there is no new interaction which switches on so that “an electron” is no longer a good description of what’s happening. This is different from atoms, which can be driven into excited states and eventually separated into electrons and nuclei; from nuclei, which can be driven into excited states and eventually separated into protons and neutrons; and from protons and neutrons, which can be driven into excited states like the delta or lambda baryons. (Protons and neutrons can’t actually be separated into quarks, for reasons which are too complex for a parenthetical. But we have reasons to believe that quarks, like electrons, are structureless.) The length scale of a quantum-mechanical interaction is set by the de Broglie wavelength of the particles/fields involved. High-momentum interactions probe short-distance physics. The shorter your interaction distance, the more high-energy effects start to leak in. The first of these effects is called the “vacuum polarization.” Quantum electromagnetism is mediated by virtual photons, which can spend part of their time as virtual electron-positron pairs. At high energies, these virtual pairs become more and more important. The overall effect is that the fine-structure constant, $$ \alpha = \frac{1}{\hbar c} \frac{e^2}{4\pi\epsilon_0} $$ is actually a “running constant.” In low energy interactions, $\alpha\approx 1/137$. By the time the relevant length scale corresponds to the masses of the weak vector bosons $W$ and $Z$, the effective electromagnetic coupling is a little stronger, $\alpha\approx 1/127$. 
If you insisted on thinking of this in terms of classical electromagnetism, you might say that the electric field is slightly stronger than the $1/r^2$ prediction very close to “the electron,” due to vacuum polarization, a kind of self-interaction. But if you push closer to this “core,” then the weak interaction becomes unignorable, and classical electromagnetism is no good to you any more. For non-relativistic quantum mechanics, you can get pretty far by equating the electron’s probability density $|\psi|^2=\psi^*\psi$ with its charge density. So an electron in an $s$-wave orbital has the largest charge density at the nucleus, but the charge density is relatively uniform within roughly the Bohr radius. A uniform charge density distribution does not suffer from the singularity that you’re worrying about.
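To make the last point concrete, here is a small numerical sketch (my own, not from the answer) treating $e\,|\psi_{1s}|^2$ as a charge density: it is finite everywhere, including at the origin, and integrates to the total charge $e$:

```python
import numpy as np

# Hydrogen 1s orbital as a smeared charge density (my own illustration):
#   rho(r) = e * exp(-2 r / a0) / (pi * a0**3)
# Unlike a point charge, this is finite everywhere, so the classical field
# it sources has no 1/r^2 blow-up at r = 0.
a0 = 5.29177e-11   # Bohr radius, m
e = 1.60218e-19    # elementary charge, C

def rho(r):
    return e * np.exp(-2.0 * r / a0) / (np.pi * a0**3)

print(np.isfinite(rho(0.0)))   # True: no singularity at the origin

# Integrating rho over all space recovers the total charge e
r = np.linspace(0.0, 20.0 * a0, 200_001)
dr = r[1] - r[0]
q_total = np.sum(rho(r) * 4.0 * np.pi * r**2) * dr
print(q_total / e)             # close to 1.0
```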
{ "domain": "physics.stackexchange", "id": 88665, "tags": "electromagnetism, forces, self-energy" }
KeyError:'unique_identifier_msgs' and KeyError:'service_msgs' when compiling interface package
Question: I am pretty new to ROS, so please do not expect much from me. I am building a robotic system for my project, and right now I am setting up interfaces to use in the project. I added a message file (.msg) and it worked fine, but when I go to add an action file (.action), it starts giving me a KeyError. After looking at the error message, I thought the package it referenced was not installed (it was showing up first with 'service_msgs'), so I went ahead and added the line <depend>service_msgs</depend> to my package.xml file, and I also added find_package(service_msgs REQUIRED) and added that package next to DEPENDENCIES in CMakeLists.txt. After that, I went to compile it and I got the same error message but with unique_identifier_msgs. I did the same thing with that package, and after compiling I got the previous message: KeyError: 'service_msgs'. I try compiling again and I get KeyError: 'unique_identifier_msgs'. It seems to switch between both messages, but I cannot tell what the actual problem is. I have also gotten the same error message but instead saying action_msgs or builtin_interfaces. I am very confused now and I do not know what to do. My code right now is below:

CMakeLists.txt:

cmake_minimum_required(VERSION 3.8)
project(interfaces)

if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
  add_compile_options(-Wall -Wextra -Wpedantic)
endif()

# find dependencies
find_package(ament_cmake REQUIRED)
# uncomment the following section in order to fill in
# further dependencies manually.
find_package(geometry_msgs REQUIRED)
find_package(rosidl_default_generators REQUIRED)
find_package(service_msgs REQUIRED)
find_package(builtin_interfaces REQUIRED)
find_package(unique_identifier_msgs REQUIRED)

rosidl_generate_interfaces(${PROJECT_NAME}
  "msg/Location.msg"
  "actions/SendBotToLoc.action"
  DEPENDENCIES geometry_msgs
)

if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  # the following line skips the linter which checks for copyrights
  # comment the line when a copyright and license is added to all source files
  set(ament_cmake_copyright_FOUND TRUE)
  # the following line skips cpplint (only works in a git repo)
  # comment the line when this package is in a git repo and when
  # a copyright and license is added to all source files
  set(ament_cmake_cpplint_FOUND TRUE)
  ament_lint_auto_find_test_dependencies()
endif()

ament_package()

package.xml:

<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
  <name>interfaces</name>
  <version>0.0.1</version>
  <description>This package stores all interfaces for use with actions, topics, and services</description>
  <maintainer email="vxw397@student.bham.ac.uk">Vivan Waghela</maintainer>
  <license>Apache-2.0</license>
  <buildtool_depend>ament_cmake</buildtool_depend>
  <test_depend>ament_lint_auto</test_depend>
  <test_depend>ament_lint_common</test_depend>
  <depend>geometry_msgs</depend>
  <buildtool_depend>rosidl_default_generators</buildtool_depend>
  <exec_depend>rosidl_default_runtime</exec_depend>
  <member_of_group>rosidl_interface_packages</member_of_group>
  <depend>service_msgs</depend>
  <depend>unique_identifier_msgs</depend>
  <export>
    <build_type>ament_cmake</build_type>
  </export>
</package>

Location.msg (inside msg folder):

geometry_msgs/Point location

SendBotToLoc.action (inside actions folder):

uint8 MAIN_BOT=0
uint8 bot
geometry_msgs/Point final_point
---
uint8 exit_code
---

I have also tried adding the 'std_msgs'
dependency by adding the line find_package(std_msgs REQUIRED) to CMakeLists.txt and <depend>std_msgs</depend> in package.xml, but that did not change the error message at all. All help will be greatly appreciated. Thanks. Edit: I realised that I forgot to show the full error, so I am adding that here:

Starting >>> interfaces
--- stderr: interfaces
Traceback (most recent call last):
  File "/opt/ros/iron/lib/rosidl_generator_type_description/rosidl_generator_type_description", line 50, in <module>
    sys.exit(main())
  File "/opt/ros/iron/lib/rosidl_generator_type_description/rosidl_generator_type_description", line 46, in main
    generate_type_hash(args.generator_arguments_file)
  File "/opt/ros/iron/lib/python3.10/site-packages/rosidl_generator_type_description/__init__.py", line 160, in generate_type_hash
    pkg_dir = include_map[pkg]
KeyError: 'builtin_interfaces'
gmake[2]: *** [CMakeFiles/interfaces__rosidl_generator_type_description.dir/build.make:77: rosidl_generator_type_description/interfaces/msg/Location.json] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:215: CMakeFiles/interfaces__rosidl_generator_type_description.dir/all] Error 2
gmake[1]: *** Waiting for unfinished jobs....
gmake: *** [Makefile:146: all] Error 2
---
Failed <<< interfaces [0.34s, exited with code 2]

Summary: 0 packages finished [0.72s]
1 package failed: interfaces
1 package had stderr output: interfaces

The error message above shows up with other package names in place of 'builtin_interfaces'. Answer: Soooo, apparently the reason for the errors was simply that I had the action file in a folder called actions, and it appears that ROS 2 does not like that. They want actions to be in a folder specifically called action. Moving SendBotToLoc.action to the folder action and changing CMakeLists.txt as needed made the build pass. I did not even need to add the dependencies service_msgs, builtin_interfaces, etc.
{ "domain": "robotics.stackexchange", "id": 38978, "tags": "ros, message, action" }
catkin_create_rosjava_xx scripts/commands not found
Question: I have installed ROS Kinetic on Ubuntu 16.04 and have followed the instructions to set up rosjava from source mentioned in this link: catkin-rosjava-workspace. Now when I try to create an empty workspace with catkin_create_rosjava_pkg, it returns command not found. Here are my ROS environment variables (env | grep ROS):

ROS_ROOT=/opt/ros/kinetic/share/ros
ROS_PACKAGE_PATH=/opt/ros/kinetic/share
ROS_MASTER_URI=http://localhost:11311
ROSLISP_PACKAGE_DIRECTORIES=
ROS_DISTRO=kinetic
ROS_ETC_DIR=/opt/ros/kinetic/etc/ros

Are the catkin_create_rosjava_xx commands not supported anymore? Please help. Originally posted by shreyasathreya on ROS Answers with karma: 16 on 2018-01-17 Post score: 0 Answer: Check the answer here - https://answers.ros.org/question/277437/solvedrosjavacatkin_create_rosjava_pkg-command-not-found/ Originally posted by shreyasathreya with karma: 16 on 2018-01-17 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 29769, "tags": "catkin, ros-kinetic" }
Ants eating bark
Question: Was out walking in the park today, came across a tree, and me being me I thought "Hey, let's try climbing this thing for fun." But something strange struck me: the tree was completely devoid, as in COMPLETELY, of any bark. On the trunk, on the branches, no bark anywhere, pure albino. Well, I reasoned, maybe it's just born that way. But then I look down, and I see the remnants of some bark wrapping around the tree's base (greyish colour). Huh. And I also saw approximately 7 massive, slightly orange ants tumbling through an ant hole. I'm not a large fan of ants, especially since I had pretty much been standing on their nest for a couple of minutes, so I hastily backed off. Here's where my mind kicked in though: how did the tree become barkless? For some background, I live in Sydney, NSW, Australia, and it's pretty darn hot here, nearly summer. MAYBE the wind blew it off? (But there hasn't been any wind lately.) The only logical explanation was THE ANTS DID IT. Do ants really eat the bark off trees like this? Dang, that's a big picture. You can sort of see, if you look closely, the remains of the bark at the bottom. Been to this park before, and I'm 75% sure the tree was not albino back then. Answer: Trees have different kinds of bark. If your tree was not moist and seeping, it had bark. Since that tree is clearly alive, it's likely that it has a smooth, light-colored, naturally peeling bark. Japanese Stewartia (Stewartia pseudocamellia) has a beautiful bark that is constantly shedding in patches of tan, green, orange and brown. It always has bark, however. Crepe Myrtle is also used in landscaping for its beautiful, peeling bark. To know whether this condition is normal for that tree, identification (using range, habitat, bark, leaves, twigs, and sometimes flowers, fruits and seeds) is needed. Eucalyptus grandis is native to your area. Eucalyptus species also shed bark, and the picture in your question definitely looks like eucalyptus.
When it is finished peeling, it's very light. This might be your tree.
{ "domain": "biology.stackexchange", "id": 3100, "tags": "botany, species-identification, plant-anatomy" }
How is fin efficiency a useful quantity for a fin's heat dissipation performance?
Question: I mean, it's the ratio of actual heat transfer to heat transfer if the material had infinite conductivity. How is it of any practical, or even theoretical use? How does it help, in saying one fin is better over other? Answer: If the material has infinite conductivity, the entire fin would be at the base temperature. This represents an ideal scenario. If the temperature of the fin were the same as the base all the way through, that would mean the heat transfer between the fin and cooling medium would be a theoretical maximum. In reality, you will lose some of that efficiency, because the imperfect conduction means that the further from the base the fin is, the lower the temperature, and thus heat transfer is reduced. There's some level of tradeoff between increased area of fins, and the decreased temperature due to non-infinite conduction; but the "ideal" fin the efficiency compares to does not have that tradeoff.
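For a concrete handle on this, the standard textbook result for a straight fin with an adiabatic tip (not derived in the answer; the numbers below are illustrative, my own) shows how efficiency measures a real fin against the infinite-conductivity ideal:

```python
import math

# Straight-fin efficiency with adiabatic tip (standard result):
#   eta = tanh(m L) / (m L),  with  m = sqrt(h P / (k A_c)).
# As k -> infinity, m L -> 0 and eta -> 1: the whole fin sits at the base
# temperature, which is exactly the ideal case the answer describes.
def fin_efficiency(h, P, k, A_c, L):
    m = math.sqrt(h * P / (k * A_c))   # fin parameter, 1/m
    mL = m * L
    return math.tanh(mL) / mL if mL > 0 else 1.0

# Illustrative comparison: a good conductor vs. a poor one, same geometry
eta_al = fin_efficiency(h=50.0, P=0.1, k=200.0, A_c=1e-4, L=0.05)
eta_steel = fin_efficiency(h=50.0, P=0.1, k=15.0, A_c=1e-4, L=0.05)
print(eta_al, eta_steel)   # the better conductor is closer to eta = 1
```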
{ "domain": "physics.stackexchange", "id": 61825, "tags": "convection, heat-conduction" }
Updating register_globals code for importing $_GET and $_POST data
Question: I'm updating a bunch of PHP code that relies on register_globals and uses request data globally. In the process of fixing/updating I spend a lot of time writing code that looks like:

<?php
$x = isset($_REQUEST['x']) ? $_REQUEST['x'] : $xdefault; // other times loaded from $_GET['x'] or $_POST['x']
$abc = isset($_REQUEST['abc']) ? $_REQUEST['abc'] : $abcdefault;
?>

I've been toying with the idea of using something like:

<?php
function _export(&$source, &$defaults) {
    return array_merge($defaults, array_intersect_key($source, $defaults));
}

// old code essentially uses $_REQUEST, but I could point this at $_POST
// $_GET or a db query in the future
$defaults = array (
    'x' => $xdefault,
    'abc' => $abcdefault,
);
$options = _export($_GET, $defaults);
extract($options); //export if needed until global code can be fixed
?>

The benefits being: It's easier (I feel) to read array syntax compared to a list of ternary isset() assignments. The defaults array acts as a whitelist. The defaults array can be used to document the expected or required inputs. Are there any disadvantages or possible improvements to this approach? I would name _export() something different. Answer: This seems a bit complex for relatively little benefit. Personally, from these two choices, I would go with the isset approach for simplicity and slightly improved performance. But I do see your point about the default value, and the readability of the array structure. If you actually do have a lot of parameters which have a default value, your approach might have merit. But since you are going through the trouble of updating legacy code anyways, you could also think about choosing a different approach and create an Input class. It would also result in readable code, and you could add input filters, increasing your security. The structure is really a matter of preference, but it might look something like this:

class Input {
    function getRaw($value, $default) { ... }
    function getInt($value, $default) { ... }
    function getSafeHTML($value, $default) { ... }
    function getFiltered($value, $default, $regex) { ... }
    ...
    function postRaw($value, $default) { ... }
    function postInt($value, $default) { ... }
    function postSafeHTML($value, $default) { ... }
    function postFiltered($value, $default, $regex) { ... }
    ...
}

It's then used like this:

$x = Input::getInt('x', $xdefault);
$ab = Input::postSafeHTML('ab', $abdefault);

It's a bit more work, but it's readable, well structured, and provides additional security; filtering input should never be your main line of defense, but it is highly recommended as defense in depth (depending on the input of course; some input you want unaltered, which is why the getRaw method is there). Misc There's no need to pass the arguments by reference to _export. It gives off the impression that the function will change the values, which it does not (and should not, especially for $_GET). If you are updating legacy code anyways, I would check if REQUEST is really needed. If it is not, change it to GET or POST to increase the security of the code even further.
{ "domain": "codereview.stackexchange", "id": 17980, "tags": "php, php5" }
Calculus of Variations - Virtual displacements
Question: I am currently reading "The Variational Principles of Mechanics - Cornelius Lanczos", in which the author talks about the variation of a function $F(q_1, q_2, \dots q_n)$ where $q_1, q_2, \dots q_n$ are the generalized coordinates $$F=F(q_1, q_2, \dots q_n)$$ $$ \delta F=\frac{\partial F}{\partial q_1}\delta q_1+\frac{\partial F}{\partial q_2}\delta q_2+\dots+\frac{\partial F}{\partial q_n}\delta q_n \tag{1} $$ Next he writes $\delta q_1=\epsilon a_1, \delta q_2=\epsilon a_2, \dots ,\delta q_n=\epsilon a_n$, where $a_1, a_2 \dots , a_n$ are the direction cosines and $\epsilon$ is a small variation. I find it wrong to use the same $\epsilon$ for all $\delta q_i$'s, as it seems to be inconsistent with dimensions. For example, if we are dealing with spherical coordinates $(r, \theta, \phi)$, according to the above the individual variations become $\delta r=\epsilon \hat{r}$, so I expect $\epsilon$ to have the dimension of $r$ (unit vectors are dimensionless), and $ \delta \theta = \epsilon \hat{\theta}$, so now $\epsilon$ has the dimension of $\theta$? Substituting these in $\text{eq}(1)$ makes it worse. Further, I like to think (I may be wrong) of $\delta F$ as $\nabla F$, since both seem to have the same form, but as we know $\nabla$ is different for different coordinates, whereas $\text{eq}(1)$ seems to use $\delta$ as we use it for Cartesian coordinates. Am I missing out on something? P.S. This still isn't a very great problem, since the main goal is to find the stationary value, which anyway leads to the conclusion $\frac{\partial F}{\partial q_k}=0$ Answer: I agree with you that his choice of notation is questionable. Regarding $\epsilon$, I believe it can only make sense if it is chosen to be dimensionless; this is the only way for putting it in front of every variable to work. Next, regarding the gradient analogy, I believe you are misinterpreting.
When you have a function of multiple variables say $f(x,y,z)$, the total variation reads: \begin{equation} df = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy + \frac{\partial f}{\partial z} dz \end{equation} This result is independent of how you define the gradient and holds whatever the variables mean. To define the gradient from the above formula, you need to come up with a notion of length measure and a dot product in the space of variables you are considering and then define $\nabla f$ as the vector field such that: \begin{equation} df = \nabla f \cdot d\vec{l} \end{equation} where $d\vec{l}$ is an infinitesimal vector element that reads differently in different coordinate systems. In cartesian coordinates it would read $d\vec{l} = dx \: \hat{u}_x + dy \: \hat{u}_y + dz \: \hat{u}_z$ In cylindrical coordinates it would read instead $d\vec{l} = dr \: \hat{u}_r + rd\theta \: \hat{u}_{\theta} + dz \: \hat{u}_z $ Thus leading to coordinate-system-dependent gradient expressions. In any case, the variational formula you wrote is exactly analogous to the first variation formula I wrote and not to a gradient.
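For concreteness, matching the two expressions above coefficient by coefficient — writing $df = \nabla f \cdot d\vec{l}$ with the cylindrical $d\vec{l}$ and comparing with $df = \partial_r f\, dr + \partial_\theta f\, d\theta + \partial_z f\, dz$ — gives the familiar coordinate-dependent gradient (a standard result, stated here to close the loop):

$$\nabla f = \frac{\partial f}{\partial r}\,\hat{u}_r + \frac{1}{r}\frac{\partial f}{\partial \theta}\,\hat{u}_\theta + \frac{\partial f}{\partial z}\,\hat{u}_z$$

Note how the $1/r$ factor does the dimensional bookkeeping that worried you: $\partial f/\partial\theta$ has different units from $\partial f/\partial r$, but $\frac{1}{r}\partial f/\partial\theta$ matches. The variational formula $(1)$, like $df$ itself, never needs this bookkeeping, because each $\delta q_k$ carries the dimensions of its own $q_k$.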
{ "domain": "physics.stackexchange", "id": 26984, "tags": "lagrangian-formalism, coordinate-systems, notation, dimensional-analysis, variational-calculus" }
Unitary supermatrices
Question: I am reading Efetov's article on Anderson Localization, where some kind of supersymmetric formalism is used, and I am currently trying to make sense of the definitions. The most useful reference is this long review article, around equation 2.20. Here are the useful definitions. I will only use 2x2 supermatrices, which is enough for my question, and I will write complex numbers with latin letters, and Grassmann numbers with greek letters. The complex conjugate of a Grassmann number $\theta$ is $\theta^*$, and we use the definition $$(\theta^*)^*=-\theta.$$ A supermatrix $F$ takes the form $$F=\begin{pmatrix}a & \theta \\ \eta & b\end{pmatrix},$$ and its transpose is defined as $$F^T=\begin{pmatrix}a & -\eta \\ \theta & b\end{pmatrix},$$ which implies the nice property that if we denote the hermitian conjugate by $F^\dagger=(F^T)^*$, then $(F^\dagger)^\dagger=F$. (NB: it is important that the Grassmann elements of $F$ are complex to have this property). A supermatrix $U$ is said to be unitary if $$U U^\dagger = U^\dagger U =1.$$ Now here's the question. Writing $U$ in the form of $F$, I am trying to find the most general form of $U$ such that it is unitary. The problem is that I cannot find a consistent way to do that. The relevant equations, using $U U^\dagger = U^\dagger U =1$, are $$ |a|^2-\theta^*\theta=1,\\ |b|^2+\eta^*\eta=1,\\ |a|^2-\eta^*\eta=1,\\ |b|^2+\theta^*\theta=1,\\ a\theta^*+b^*\eta=0,\\ a^*\eta+b\theta^*=0,\\ $$ as well as the complex conjugate of the last two equations. Playing with this, one finds (without any assumptions) that $|a|^2+|b|^2=2$ and $\eta^*\eta=\theta^*\theta$. Now assuming that $a\neq 0$, one gets $\eta=-b/a^*\theta^*$, which implies that (if $\theta^*\neq 0$) $|a|^2=|b|^2=1$. 
Here comes the problem: we also have that $|a|^2-\theta^*\theta=1$, which implies that $\theta^*\theta=0$, which is not possible, unless $\theta^*=\theta$, but this breaks our (important) assumption that $\theta$ is complex (to have $(U^\dagger)^\dagger=U$). If instead we have $a=0$, this implies $\theta^*\theta=-1$, and I am not sure how to interpret this... (Yes, $\theta^*\theta$ is bosonic, but I don't think this equality makes sense, since if we integrate both sides with respect to $\theta$ and $\theta^*$, we get $1=0$...). Otherwise, one could just impose $\theta=\eta=0$, but in that case, there is not really a point in defining unitary supermatrices, since their effect is kind of trivial (i.e. changing the phase of the bosons and fermions independently). Answer: The problem was in assuming that because one has $$ (|b|^2-|a|^2)\theta^*=0,$$ then $|b|^2=|a|^2$. Indeed, the only thing it tells you is that $$|b|^2=|a|^2+\sigma \theta^*,$$ where $\sigma$ can be another Grassmann number (or a complex number). In fact, one can check that the two independent equations $|b|^2=1-\theta^*\theta$ and $|a|^2=1+\theta^*\theta$ are compatible with $(|b|^2-|a|^2)\theta^*=0$. The best way to solve the above equations is to solve for $a$ and $b$ in terms of $\theta^*\theta$. One gets $$a=e^{i\alpha}\left(1+\frac{\theta^*\theta}{2}\right),\\ b=e^{i\beta}\left(1-\frac{\theta^*\theta}{2}\right), $$ with $\alpha$ and $\beta$ two arbitrary phases. Then the equation linking $\eta$ and $\theta^*$ is used to get $$\eta=-e^{i(\alpha+\beta)}\theta^*.$$ One then checks that these results are compatible with all the equations in the question. Thus, a general 2x2 unitary supermatrix is parametrized by only three parameters: two real phases $\alpha$ and $\beta$, and a Grassmann number $\theta$, and it reads $$U=\begin{pmatrix} e^{i\alpha}\left(1+\frac{\theta^*\theta}{2}\right) & \theta \\ -e^{i(\alpha+\beta)}\theta^* & e^{i\beta}\left(1-\frac{\theta^*\theta}{2}\right) \end{pmatrix}.$$ One then checks that it is indeed superunitary.
{ "domain": "physics.stackexchange", "id": 34500, "tags": "supersymmetry, conventions, complex-numbers, unitarity, grassmann-numbers" }
Discrimination of Bell states on unknown qubits
Question: Consider the scenario where Alice is given three qubits, and promised that two of them are in a maximally entangled Bell state $\in_R \{\frac{|00\rangle+|11\rangle}{\sqrt{2}}, \frac{|00\rangle-|11\rangle}{\sqrt{2}}\}$ and one of them is in a BB84 state $\in_R \{ |0\rangle, |1\rangle, \frac{|0\rangle+|1\rangle}{\sqrt{2}}, \frac{|0\rangle-|1\rangle}{\sqrt{2}} \}$, but she does not know which qubit is the BB84 state. My questions are the following: 1) Is there a (probabilistic) strategy for Alice to determine which Bell state was given to her? 2) Suppose the set of Bell states to choose from was $\{\frac{|00\rangle+|11\rangle}{\sqrt{2}}, \frac{|00\rangle-|11\rangle}{\sqrt{2}}, \frac{|01\rangle+|10\rangle}{\sqrt{2}}, \frac{|01\rangle-|10\rangle}{\sqrt{2}}\}$. Would that make things different? Alice only has one copy and has access to all the qubits (I am not talking about LOCC). We allow von Neumann measurements and don't care if the state discrimination is (non)destructive. Answer: Let $\vert\psi_i\rangle$ be the 4 Bell states and $\vert\phi_k\rangle$ the 4 "BB84" states. Then, define a POVM measurement with elements $$ P_{ik\pi} = \tfrac{1}{6}\vert\psi_i\rangle\langle\psi_i\vert\otimes \vert\phi_k\rangle\langle\phi_k\vert\ , $$ where $\pi$ is a permutation of the three qubits (i.e., the three possible choices for the unentangled qubit, since the Bell part is symmetric under permutation). Then, it is straightforward to check that $\{P_{ik\pi}\}$ forms a complete POVM, and whenever you obtain outcome $i,k,\pi$, you can be sure to have the corresponding state (while any other state will not deterministically yield this outcome). Thus, this forms a probabilistic scheme for distinguishing these states. (Of course, there might be a better scheme for doing so, in particular for only 2 Bell states, but to assess that one would probably need a precise figure of merit.)
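The completeness claim is easy to verify numerically. A sketch (my own code; note that with the three placements of the unentangled qubit and the two-fold overcompleteness of the four BB84 states, a prefactor of 1/6 per element is what makes the elements sum to the identity):

```python
import numpy as np

# Check that (1/6) |psi_i><psi_i| (x) |phi_k><phi_k|, summed over the 4 Bell
# states, the 4 BB84 states, and the 3 placements of the BB84 qubit, gives
# the identity on 3 qubits.
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
s = 1.0 / np.sqrt(2.0)
bell = [s * (np.kron(ket0, ket0) + np.kron(ket1, ket1)),
        s * (np.kron(ket0, ket0) - np.kron(ket1, ket1)),
        s * (np.kron(ket0, ket1) + np.kron(ket1, ket0)),
        s * (np.kron(ket0, ket1) - np.kron(ket1, ket0))]
bb84 = [ket0, ket1, s * (ket0 + ket1), s * (ket0 - ket1)]

def proj(v):
    return np.outer(v, v.conj())

def permute_qubits(M, perm):
    """Conjugate an 8x8 operator by a permutation of the three qubit slots."""
    T = M.reshape([2] * 6)
    axes = list(perm) + [p + 3 for p in perm]
    return T.transpose(axes).reshape(8, 8)

total = np.zeros((8, 8), dtype=complex)
for perm in [(0, 1, 2), (2, 0, 1), (0, 2, 1)]:  # BB84 qubit last / first / middle
    for psi in bell:
        for phi in bb84:
            P = np.kron(proj(psi), proj(phi)) / 6.0
            total += permute_qubits(P, perm)

print(np.allclose(total, np.eye(8)))  # True: the elements form a complete POVM
```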
{ "domain": "physics.stackexchange", "id": 24901, "tags": "quantum-mechanics, quantum-information, quantum-entanglement" }
Why do we take $h$ as "height from surface to bottom" when calculating liquid pressure?
Question: In the following image, the pressures at points x, y and z are $P_{x}, P_{y}$ and $P_{z}$ respectively, and they are all equal. My question is, why? The amount of matter on x is much more than the amount of matter on y. Why do they still have equal pressure on them? Answer: A simple thought experiment is to fill all the space up to the surface with water and remove the walls. Now you can easily believe the formula $p = \rho g z$ (as the same amount of water rests on the points x, y and z) and the water is static as all forces cancel. If you now insert the (infinitesimally thin, totally rigid) walls, the forces acting on them exactly cancel out – the pressures obviously do not change. If you now remove the water surrounding the walls, the pressures inside the tank will not change either (how should they?); this means that the walls now supply the force that the water outside supplied before, otherwise they would not remain in their position. This tells you that $p = \rho g h$ holds even in such containers as sketched in the question. More formally: the condition for a fluid to be static is $\nabla p = \vec f$, where $\vec f$ is the external force density. This can be derived by considering a small box of fluid with sides of length $l$. For the small box of fluid to remain static, the forces acting on it have to cancel. The pressure on the sides of the box exerts forces like (in the $x$-direction, $p_1$ is the pressure on the right, $p_2$ is the pressure on the left): $$ F = A (p_1 - p_2) = l^2 (p_1 - p_2). $$ So in the limit of a small box this gives: $$ F = l^3 \frac{p_1 - p_2}{l} = V \frac{p_1 - p_2}{l}. $$ So the force density in the x-direction is $$ f = \frac{p_1 - p_2}{l} $$ For $l \to 0$ this goes to the derivative $\partial_x p$. The same arguments apply in the y- and z-directions.
For a constant external force $\vec f = -\rho g \vec e_z$, like gravity, you can easily solve the equation (under the assumption that the fixed boundaries simply resist the pressure and that the pressure at the surface is imposed by the atmosphere above). You get: $$ p = -\rho g z + p(0). $$ (Note that the variable $z$ increases upwards.)
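A one-line numerical check of the resulting formula (illustrative numbers of my own):

```python
# Gauge pressure p = rho * g * h below the free surface: only the depth
# matters, not the container shape, so x, y and z in the figure, all at the
# same depth h, share one pressure.
rho = 1000.0   # water, kg/m^3
g = 9.81       # m/s^2

def gauge_pressure(depth_m):
    return rho * g * depth_m

h = 2.0
print(gauge_pressure(h))   # about 1.96e4 Pa, identical at x, y and z
```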
{ "domain": "physics.stackexchange", "id": 28996, "tags": "newtonian-gravity, pressure, fluid-statics" }
How would I theorise a quantum query algorithm in O(1)?
Question: I am currently attempting to solve a problem from Nielsen-Chuang, and I can't seem to figure out how I would do this; I'm trying to implement Grover's algorithm to solve the problem of differentiating between the two functions with 99% probability in O(1), $$ f_0:\{0,1\}^n → \{-1,1\} \; s.t. \; \hat{f}(00...0)=\sqrt{2/3}\\ f_1:\{0,1\}^n → \{-1,1\} \; s.t. \; \hat{f}(11...1)=\sqrt{2/3} $$ Does anyone know how I would do this? Answer: I hunted around for this for a little bit and couldn't find it in my copy of N&C, but nonetheless I think that the setup is more akin to the Deutsch-Jozsa algorithm than to Grover's algorithm. TL/DR, much as the Deutsch-Jozsa algorithm uses the Hadamard transform to distinguish a constant function from a balanced function with the promise that the function is constant or balanced, a quantum Fourier transform can distinguish an almost-constant function from a high-frequency function with the promise that the function is almost-constant or is high-frequency. Repeating a small number of times amplifies the success probability. For example, the Deutsch-Jozsa algorithm uses the Hadamard transform to distinguish a constant function from a balanced function. Similarly, as described in the question, it appears that we have oracle access to a Boolean function $f$: $$f:\{0,1\}^n \mapsto \{0,1\}$$ with a promise on the coefficients of the Fourier transform, that either: $$\hat{f}(00...0)=\sqrt{2/3},$$ i.e. $f$ is nearly constant on its codomain, or $$\hat{f}(11...1)=\sqrt{2/3},$$ i.e. $f$ has a high frequency. There is no promise on other Fourier coefficients. Our task is to determine whether $f$ is nearly constant or is high-frequency.
Similar to the Deutsch-Jozsa algorithm, where we prepare a uniform superposition on the input register, evaluate the oracle function, perform a Hadamard transform on the first register, and measure the first register, here we can prepare a uniform superposition on our input register, evaluate the oracle function, perform a quantum Fourier transform on the first register, and measure the first register. If our oracle is nearly constant, we measure the first register as $\vert 00\cdots0\rangle$ with probability $(\sqrt{2/3})^2=2/3$. If our oracle is high-frequency, we measure the first register as $\vert 11\cdots 1\rangle$ with probability $(\sqrt{2/3})^2=2/3$. Either way, with probability $1/3$ we might get junk by measuring some other string (say $\vert101001\cdots0\rangle$) corresponding to another Fourier coefficient. Nonetheless we can repeat the procedure, say, $4$ times, and quickly get a high probability, $\gt 99\%$, of faithfully determining whether $f$ is nearly constant or high-frequency, simply by taking the majority and relying on the Chernoff bound.
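The mechanism can be sketched numerically (a toy example of my own, using the question's $\pm1$ convention for $f$):

```python
import numpy as np

# Prepare sum_x f(x)|x>/sqrt(2^n), apply the n-qubit Hadamard transform, and
# the amplitude left on |00...0> is exactly the Fourier coefficient
# f_hat(00...0) = mean of f, so that string is measured with probability
# f_hat(00...0)^2.
n = 4
N = 2 ** n

f = np.ones(N)        # a "nearly constant" +/-1 valued function ...
f[[3, 9]] = -1.0      # ... with two entries flipped

H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
H = H1
for _ in range(n - 1):
    H = np.kron(H, H1)   # n-qubit Hadamard transform

state = f / np.sqrt(N)        # amplitudes f(x)/sqrt(2^n)
out = H @ state
p_zero = out[0] ** 2          # probability of measuring |00...0>

print(p_zero, np.mean(f) ** 2)   # the two numbers agree
```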
{ "domain": "quantumcomputing.stackexchange", "id": 2546, "tags": "quantum-algorithms, mathematics, grovers-algorithm, complexity-theory, nielsen-and-chuang" }
Set notation for ACL matrix
Question: This might not be a computer science specific question and apologies if that is the case but it does come from material related to working out access control lists and I cannot understand the notation and I wonder if someone could break it down for me? $$A’[s_i,o_j]=A[s_i,o_j] \; ∀(s_i,o_j)≄(s,o)$$ I understand that $A’[s_i,o_j]$ is the new resulting matrix after some operation and I think $s_i$ and $o_j$ are the matrix pairs that are not being amended although I do not know exactly that this is the case or how adding the $i$ and $j$ denotes this. This might be way off the mark but is it saying that the new matrix is equal to the old matrix where none of the pairs are equal to the pair being deleted? I am also looking at a list of set symbols and I cannot find this $≄$ listed. Answer: The notation means: For any $(s_i,o_j) \neq (s,o)$, $A'[s_i,o_j] = A[s_i,o_j]$. The correct symbol seems to be $\neq$, "not equal", rather than what you wrote. The symbol $\forall$ means "for all". The notation $s_i$ just means "element number $i$ in the set of rows", and $o_j$ is similarly "element number $j$ in the set of columns". You could just as well have replaced $s_i,o_j$ with any two other symbols (other than $s,o$).
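A tiny sketch of what the rule expresses, using a dict keyed by (subject, object) pairs (the names and rights below are made up for illustration):

```python
# A' agrees with A on every cell (s_i, o_j) != (s, o); only the one pair
# being modified changes.
A = {
    ("alice", "file1"): {"read", "write"},
    ("alice", "file2"): {"read"},
    ("bob",   "file1"): {"read"},
}

def revoke(A, s, o):
    """Return A' where only the (s, o) cell changes (here: rights emptied)."""
    return {pair: (set() if pair == (s, o) else rights)
            for pair, rights in A.items()}

A2 = revoke(A, "alice", "file1")
print(A2[("alice", "file1")])   # the revoked cell is now empty
print(A2[("bob", "file1")])     # every other cell is untouched, as the formula demands
```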
{ "domain": "cs.stackexchange", "id": 18415, "tags": "sets, notation" }
What is the heat needed for an isothermal expansion of gas and why do physics and chemistry yield different answers?
Question: One mole of a certain ideal gas is contained under a weightless piston of a vertical cylinder at a temperature $T$. The face of the piston opens into the atmosphere. What is the heat supplied in the process to expand the gas from volume $V_1$ to $V_2$ isothermally? Friction of the piston against the cylindrical wall is negligibly small. Now thermodynamics is a common topic to both physics and chemistry, and as per my understanding of thermodynamics, I am getting different results for the heat that will need to be supplied. According to physics: $$∆Q =∆U+∆W$$ Here $W$ is the work done by the gas. That is: $$W = \int PdV=\int \frac{RT}{V}dV = RT( \ln V_2)-RT( \ln V_1)$$ and as for an isothermal process: $$∆U =0 , ∆Q =RT( \ln V_2)-RT( \ln V_1)$$ Chemistry on the other hand says: $$∆U =∆Q +∆W $$ and defines $W$ as the work done on the gas, which gives: $$W = -P_{atm}(V_2-V_1) $$ and as for an isothermal process: $$ ∆U =0 , ∆Q =P_{atm}(V_2-V_1)$$ I am also uncertain that maybe neither of these is the correct expression, and the net work actually involves the work from both these forces, and we have to take the sum of these to yield: $$∆Q =RT( \ln V_2)-RT( \ln V_1)-P_{atm}(V_2-V_1)$$ Now the value of $Q$ should be unique, because in the real world, when we perform an experiment, only one value of $Q$ would be supplied. So which one of these is correct, and where am I lacking in my understanding of thermodynamics from a physics standpoint? Now as this question links both physics and chemistry, I want to post it on both sites (I hope it's fine) and the link to it is here Answer: The physics and chemistry examples you gave describe two different processes. In the physics example, the gas is subjected to an isothermal reversible (quasi-static) expansion, where the gas pressure and the external pressure decrease very gradually. For this case, using the ideal gas law to determine both the gas pressure and the external pressure is valid.
However, the chemistry example is much different. Here, the gas is subjected to an "isothermal" irreversible (non-quasi-static) expansion, where the external pressure is suddenly dropped from its initial value and then held at that value until the system re-equilibrates. In such a case, the ideal gas law cannot be used to determine the gas pressure because the ideal gas law applies only to thermodynamic equilibrium situations (or to reversible processes where the gas passes through a continuous sequence of thermodynamic equilibrium states). For an irreversible (non-quasi-static) process, the gas passes through a sequence of non-equilibrium states, and the ideal gas law is not valid for this. In addition, even though the external boundary of the gas is held at a constant temperature, during this irreversible deformation, the interior temperature of the gas varies with spatial location. So, even though we call the process "isothermal," it is really only isothermal for a tiny fraction of the gas in contact with the boundary. Of course, in the end, the gas temperature throughout will be back to the wall value when the gas reaches thermodynamic equilibrium again.
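To make the disagreement concrete, the two candidate expressions from the question can be evaluated side by side. A quick numerical sketch (the temperature and the factor-of-two expansion are assumed purely for illustration; $V_1$ is chosen so the gas starts in mechanical equilibrium with the atmosphere):

```python
import math

# Illustrative numbers (assumed, not from the problem statement)
R = 8.314           # J/(mol K), gas constant
T = 300.0           # K
P_atm = 101325.0    # Pa, constant external (atmospheric) pressure
V1 = R * T / P_atm  # m^3: start in mechanical equilibrium with the atmosphere
V2 = 2.0 * V1       # expand to twice the volume

# Reversible (quasi-static) isothermal expansion: Q = RT ln(V2/V1)
Q_reversible = R * T * math.log(V2 / V1)

# Irreversible expansion against constant atmospheric pressure: Q = P_atm (V2 - V1)
Q_irreversible = P_atm * (V2 - V1)

print(round(Q_reversible), round(Q_irreversible))  # the two formulas genuinely disagree
```

The two numbers differ because, as the answer explains, they describe two physically different processes, not two bookkeeping conventions for the same one.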
{ "domain": "physics.stackexchange", "id": 68004, "tags": "homework-and-exercises, thermodynamics, work, conventions, gas" }
Follow a moving point
Question: Hello, I want to follow a moving point, which I get published by a 3D camera, with the tool of my UR3 robot. So I need to dynamically update the goal of my movement. I can move to the point statically using the MoveGroupInterface and the line group->setPoseTarget(PointPose); and group->move(); Now if the point changes its position, how do I follow it? (It should be a smooth change in movement) I haven't found any tutorial or hints... I'm currently using ROS Kinetic, on Ubuntu 16.04 LTS and the ur_modern_driver. A link to a tutorial or something else would be really useful! Originally posted by Sokrates on ROS Answers with karma: 16 on 2018-07-05 Post score: 0 Original comments Comment by aarontan on 2018-07-05: Which package/repo are you using? Comment by Sokrates on 2018-07-05: I'm currently using ROS Kinetic, and the ur_modern_driver on Ubuntu 16.04 LTS. Answer: You're trying to do visual servoing using velocity control; it's still pretty new, but there are packages to help you achieve this. If you use the jog arm package, it allows you to control the robot arm by setting the velocity (6 DOF) of the end effector. The package actually does many things, but this is what you'll want it for. By controlling the velocity of the end effector instead of creating discrete plans, a very smooth behaviour can be achieved. Now you just need to make an algorithm that takes the position of the end effector and the position of the 3D point and produces a twist message (6 DOF velocity) which will make the arm smoothly move towards the point, slow down and stop when it reaches it. I recommend keeping the UR paddle with the emergency stop handy while testing this. Hope this helps. Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-07-05 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Sokrates on 2018-07-06: Thank you very much for your answer. I will have a look at this and give an update! Is this the only way?
Or is there a way to use it with moveit? Comment by PeteBlackerThe3rd on 2018-07-06: I'm not an expert on this, but I don't know of any way to do this type of velocity control using moveit. Look forward to hearing about your progress.
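The "algorithm that produces a twist message" from the answer can be as simple as a saturated proportional controller. A minimal sketch (the gain, speed cap, and plain-list "twist" layout are illustrative assumptions; in a real node you would copy the values into a geometry_msgs/Twist and publish it to the jog node's input topic every control cycle):

```python
import math

def twist_toward(ee_pos, target_pos, gain=1.0, max_speed=0.1):
    """Linear velocity command that moves the end effector toward the target,
    slows down as it approaches, and stops on arrival (P-controller with a
    speed cap). Returns [vx, vy, vz, wx, wy, wz]."""
    v = [gain * (t - e) for t, e in zip(target_pos, ee_pos)]
    speed = math.sqrt(sum(c * c for c in v))
    if speed > max_speed:                       # saturate to a safe speed
        v = [c * max_speed / speed for c in v]
    return v + [0.0, 0.0, 0.0]                  # no angular velocity in this sketch
```

Called with the latest camera point each cycle, this naturally gives the smooth follow behaviour: the commanded speed shrinks linearly once the distance drops below max_speed/gain.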
{ "domain": "robotics.stackexchange", "id": 31195, "tags": "ros, ros-kinetic, movegroup, universal-robot" }
Finding max rating for a movie
Question: I am taking Stanford's Introduction to Databases Self-Paced online course. I have gone through the videos in the SQL mini-course, and I am having trouble completing the exercises. The following is the question from the SQL Movie-Rating Query Exercises, Question 7: For each movie that has at least one rating, find the highest number of stars that movie received. Return the movie title and number of stars. Sort by movie title. The database can be found here. My answer to this question is as follows: SELECT distinct Movie.title, Rate.stars FROM Movie, Rating, (SELECT * FROM Rating R1 WHERE not exists (SELECT mID FROM Rating R2 WHERE R1.stars < R2.stars and R1.mID = R2.mID)) as Rate WHERE Movie.mID = Rate.mID and Rate.stars = Rating.stars order by Movie.title; This seems like a very tortured query, and it seems to me like I am missing some important concepts. Can someone help me refactor this query? Answer: SELECT distinct Movie.title, Rate.stars You should rarely use the DISTINCT keyword. In this particular case it's unnecessary and may do the wrong thing. What you want to do is to return the highest star rating for a given movie title: SELECT Movie.title, MAX(Rating.stars) There's the title and we use the MAX keyword to make sure that it's the highest stars. More on when we can use MAX later. FROM Movie, Rating, (SELECT * FROM Rating R1 WHERE not exists (SELECT mID FROM Rating R2 WHERE R1.stars < R2.stars and R1.mID = R2.mID)) as Rate WHERE Movie.mID = Rate.mID and Rate.stars = Rating.stars We can make this simpler: FROM Movie, Rating WHERE Movie.mID = Rating.mID No subselects required. GROUP BY Movie.title ORDER BY Movie.title; The GROUP BY will allow us to use MAX. It says to return only one row per Movie.title value. The other columns need to be aggregated with grouping functions, like MAX. You already had the ORDER BY clause and presumably it's correct.
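Assembled, the suggested query looks like this; the snippet below runs it against a tiny in-memory SQLite stand-in (the two sample movies and ratings are made up, but the Movie(mID, title) / Rating(mID, stars) columns follow the schema visible in the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Movie  (mID INTEGER, title TEXT);
    CREATE TABLE Rating (mID INTEGER, stars INTEGER);
    INSERT INTO Movie  VALUES (101, 'Avatar'), (102, 'E.T.');
    INSERT INTO Rating VALUES (101, 3), (101, 5), (102, 2);
""")

# One row per title, keeping only the highest star rating
rows = con.execute("""
    SELECT Movie.title, MAX(Rating.stars)
    FROM Movie, Rating
    WHERE Movie.mID = Rating.mID
    GROUP BY Movie.title
    ORDER BY Movie.title;
""").fetchall()

print(rows)  # each movie appears once, with its top rating
```

Movies with no ratings drop out automatically, because the join only produces rows where a matching Rating exists — exactly the "at least one rating" requirement.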
{ "domain": "codereview.stackexchange", "id": 11983, "tags": "sql, mysql" }
$C_1 \subseteq C_2$ implies $C_1^A\subseteq C_2^A$?
Question: $C_1 \subseteq C_2$ implies $C_1^A\subseteq C_2^A$? I've been given a caveat that one shouldn't make this implication blindly and that it must be justified. I can think of examples such that $C_1^A \subsetneq C_2^A$ and examples such that $C_1^A = C_2^A$, but could it be that $C_2^A \subset C_1^A$? I can't see how. So what's the point of this caveat? Answer: A proof technique is (informally) relativizing if the results it generates also hold relative to an oracle. Not all proof techniques are relativizing. Perhaps the best known example is $\mathsf{IP}=\mathsf{PSPACE}$, which uses the technique of algebraization. Although $\mathsf{PSPACE} \subseteq \mathsf{IP}$, there is an oracle $O$ such that $\mathsf{PSPACE}^O \not\subseteq \mathsf{IP}^O$ (see this question on cstheory).
{ "domain": "cs.stackexchange", "id": 9803, "tags": "complexity-theory" }
Can we use silver nitrate to distinguish between chloride and carbonate salts?
Question: Initially, we are given a solution containing two salts; they can be either carbonate or chloride salts (we do not know their composition initially). Generally, $\ce{AgNO3}$ is used as a confirmatory test of $\ce{Cl-}$ (a white precipitate is obtained on adding silver nitrate to a solution containing the chloride salt). $$\ce{NaCl + AgNO3 -> NaNO3 + \underset{white}{AgCl} \downarrow}$$ But $$\ce{Na2CO3 + 2AgNO3 -> 2NaNO3 + \underset{brownish white}{Ag2CO3} \downarrow}$$ Therefore, can we use $\ce{AgNO3}$ to directly identify chloride in the solution or do we need to remove the possible carbonate ions to confirm the presence of chloride before performing the test as mentioned above? (A similar question may arise in the identification of bromide and iodide ions but I have ignored it for now as they are rarely asked in exams. Still, I would appreciate it if someone could answer this as well.) The question has been asked mainly because the silver nitrate test is generally taught to be performed without mentioning the above precautions and it is also a very famous confirmatory test. Answer: The test for chloride ion is silver nitrate in an acidified solution [acidified with nitric acid]. This removes carbonate (as carbonic acid), cyanide [careful] and sulfide (as hydrogen sulfide). (Use a well-ventilated hood, as the products of acidifying both sulfide and cyanide are poisonous.) Sulfate must be absent; if present, it can be removed with barium nitrate (as barium sulfate). The silver chloride formed is solubilized by addition of ammonia; any insoluble precipitate remaining may be bromide or iodide. The exact procedure to follow is given in reputable, in-depth qualitative analysis lab books.
{ "domain": "chemistry.stackexchange", "id": 15483, "tags": "inorganic-chemistry, experimental-chemistry, analytical-chemistry, identification, elemental-analysis" }
Depth image on amcl_demo
Question: Hi! I'm having some trouble while trying to access the Kinect depth image using amcl_demo. When I run roslaunch openni_launch openni.launch it works well, and I can access/see topic /camera/depth/image. The problem is when I run roslaunch turtlebot_navigation amcl_demo.launch map_file:=/... It starts all the nodes from the camera (the topic is there as well) but I can't see the depth image from the same topic as before. I think I need to enable something but don't know what to do. Hope you can help. Thank you! Originally posted by Thiagopj on ROS Answers with karma: 23 on 2014-08-18 Post score: 0 Answer: I would check the launch file turtlebot_navigation amcl_demo.launch and see if the parameter depth_registration is true. In general, setting depth_registration to true when running roslaunch openni_launch openni.launch (which is launched via amcl_demo.launch) will kill the regular camera/depth/image, and vice versa. After launching amcl_demo.launch, try rosrun rqt_reconfigure rqt_reconfigure and select /camera/driver from the drop-down menu. When the depth_registration checkbox is checked, then camera/depth_registered/image_raw should display correctly, but not camera/depth/image. Unchecking it should swap it back. Originally posted by ahubers with karma: 301 on 2014-08-18 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Thiagopj on 2014-08-20: Thank you! Problem solved :)
{ "domain": "robotics.stackexchange", "id": 19095, "tags": "ros, navigation, kinect, openni, amcl" }
Node.JS Server Queue Processor
Question: I implemented a simple queuing system for my Node.JS app and wanted a critique on its structure. const TPS = 20; const Queue = { counter: 1, items: {}, /** * Add an item to the queue, with the given func to call * @param {Function} func * @param {Boolean} repeating * @return {Number} */ add(func, repeating = false) { const id = this.counter++; this.items[id] = {func, repeating, id}; return id; }, /** * Remove an item from the queue with the given id * @param {Number} id */ remove(id) { if (this.items.hasOwnProperty(id)) { delete this.items[id]; } }, /** * Process items in the queue */ process() { for (let id in this.items) { // Prevent this item from being processed again if (!this.items.hasOwnProperty(id) || this.items[id].processing) { continue; } // Delete this item when it's scheduled for deletion if (this.items[id].scheduledForDeletion) { delete this.items[id]; continue; } // Let the queue know this item is being processed and // it's scheduled deletion status this.items[id].processing = true; this.items[id].scheduledForDeletion = !this.items[id].repeating; // Don't wait for item's promise to resolve, since this // will create a backlog on the queue (async () => { try { await this.items[id].func.call(null); } catch (err) { // TODO: Handle errors. console.error(err); } this.items[id].processing = false; })(); } } }; (function tick() { setTimeout(tick, 1000 / TPS); Queue.process(); })(); This is an example of how it's implemented. // Add three items to the queue: 1 normal, 1 async and 1 repeating Queue.add(() => console.info(`[tick] -> ${Date.now()}`)); Queue.add(async () => setTimeout(() => console.info(`[async] -> ${Date.now()}`), 100)); const timeLoop = Queue.add(() => console.info(`[loop] time (loop) -> ${Date.now()}`), true); // Remove the looping item from the queue setTimeout(() => Queue.remove(timeLoop), 500); The idea is to have this run when the server starts and continually process the queue items. Queue is in its own file and exported.
I import this file into my controllers and call (for example) Queue.add('function to add user to DB and send out email'). Answer: The structure looks fine. It is quite succinct and makes good use of const and let where appropriate. To adhere to the D.R.Y. principle, process() can and should utilize remove() to remove items from the queue. I considered suggesting that arguments be accepted with each function but that can be achieved with partially bound functions. I also considered suggesting you consider using a class, since ES-6 featured can be utilized, but then you would either need to instantiate a queue once or else make all methods static. I would suggest you consider accepting an error handler callback for each function. That way, instead of writing all errors to the console, the caller could add an appropriate handler.
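Combining those two suggestions — reusing remove() inside process() and letting callers supply their own error handler — might look like the sketch below. It is a simplification, not a drop-in replacement: the scheduledForDeletion bookkeeping is collapsed, and synchronous items finish before process() returns, which is also what makes it easy to unit-test.

```javascript
const Queue = {
  counter: 1,
  items: {},

  // Each item now carries its own error handler (console.error by default)
  add(func, repeating = false, onError = console.error) {
    const id = this.counter++;
    this.items[id] = { func, repeating, onError };
    return id;
  },

  remove(id) {
    delete this.items[id];
  },

  process() {
    for (const id of Object.keys(this.items)) {
      const item = this.items[id];
      if (!item || item.processing) continue;
      item.processing = true;

      const finish = () => {
        item.processing = false;
        if (!item.repeating) this.remove(id); // D.R.Y.: reuse remove()
      };

      try {
        const result = item.func();
        if (result && typeof result.then === "function") {
          // Async item: don't block the queue; report failures to the caller
          result.catch(item.onError).then(finish);
        } else {
          finish();
        }
      } catch (err) {
        item.onError(err); // sync throw: the caller decides how to handle it
        finish();
      }
    }
  },
};
// As in the original design, this object would live in its own module and be exported.
```

The onError callback per item means process() no longer hard-codes console.error, which was the reviewer's point about error handling.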
{ "domain": "codereview.stackexchange", "id": 32818, "tags": "javascript, node.js, ecmascript-6, queue" }
How can I connect a silicone hose with 2 mm inner diameter to a smaller 1.9 mm port
Question: The biggest diameter of my pressure sensor input port has a size of 1.93 mm. The silicone hose which I want to connect to the sensor's port has an inner diameter of 2.0 mm. My Honeywell pressure sensor: My silicone hose: As you can imagine, it does not fit. Either the port is too small or the inner diameter of the silicone hose is too large. What is the best way to bring both together? Are there adapters available? What are they called? What is a good way in engineering to fix such an issue? Thanks in advance for your help! Answer: The fittings on your pressure sensor look like barb fittings. If that's the case, a 1.6mm ID silicone tubing will easily stretch over a 1.93mm barb fitting. 1.6mm (or 1/16") tubing is a fairly common size and you should be able to get it from standard laboratory suppliers. You'll then need an adapter to go from your 2mm ID to 1.6mm ID silicone tubing. Again, 2.4mm (3/32") is a fairly common size and a 2mm ID silicone tubing will easily stretch over a 2.4mm barb fitting. So something like AD-6005 from Nordson Medical or equivalent should do it:
{ "domain": "engineering.stackexchange", "id": 1836, "tags": "mechanical-engineering, pressure, sensors, pneumatic" }
CyberFaze app for Facebook
Question: I had an efficiency problem, as I suspected, and my solution here wasn't the best. My solution was \$O(n)\$, and a more experienced member directly told me about a solution that is \$O(1)\$, which only fetches one element and doesn't have to do a count. I'd like to rewrite to that solution, if you agree that is the next thing I should do. I've also been considering adding an image-crop function to the Facebook application I am writing, where there is no bug. I just plan to implement the proposed change so that images get fetched faster, exactly as was suggested in the answer to the question above. Do you agree that it's the next thing to do about this code? import random from google.appengine.api import files, images class CyberFazeHandler(BaseHandler): """ Every time you call this function, it will perform a count operation, which is O(n) with the number of FileInfo entities, then perform an offset query, which is O(n) with the offset. This is extremely slow and inefficient, and will get more so as you increase the number of images. If you expect the set of images to be small (less than a few thousand) and fairly constant, simply store them in code, which will be faster than any other option. If the set is larger, or changes at runtime, assign a random value between 0 and 1 to each entity, and use a query like this to retrieve a randomly selected one: """ def get_random_image(self, category): fileinfos = FileInfo.all().filter('category =', category) return fileinfos[random.randint(0, fileinfos.count() - 1)] """ do like this instead q = FileInfo.all() q.filter('category =', category) q.filter('random >=', random.random()) return q.get() """ def get(self): """ If the user will be loading a lot of these mashups, it makes more sense to send them as separate images, because there will be fewer images for the browser to cache (a+b+c images instead of a*b*c).
""" eyes_image = self.get_random_image(category='eyes') nose_image = self.get_random_image(category='nose') mouth_image = self.get_random_image(category='mouth') eyes_data = None try: eyes_data = blobstore.fetch_data(eyes_image.blob.key(), 0, 50000) except Exception, e: self.set_message(type=u'error', content=u'Could not find eyes data for file ' + str(eyes_image.key().id()) + ' (' + unicode(e) + u')') eyes_img = None try: eyes_img = images.Image(image_data=eyes_data) except Exception, e: self.set_message(type=u'error', content=u'Could not find eyes img for file ' + str(eyes_image.key().id()) + ' (' + unicode(e) + u')') nose_data = None try: nose_data = blobstore.fetch_data(nose_image.blob.key(), 0, 50000) except Exception, e: self.set_message(type=u'error', content=u'Could not find nose data for file ' + str(nose_image.key().id()) + ' (' + unicode(e) + u')') nose_img = None try: nose_img = images.Image(image_data=nose_data) except Exception, e: self.set_message(type=u'error', content=u'Could not find nose img for file ' + str(nose_image.key().id()) + ' (' + unicode(e) + u')') mouth_data = None try: mouth_data = blobstore.fetch_data(mouth_image.blob.key(), 0, 50000) except Exception, e: self.set_message(type=u'error', content=u'Could not find mouth data for file ' + str(eyes_image.key().id()) + ' (' + unicode(e) + u')') mouth_img = None try: mouth_img = images.Image(image_data=mouth_data) except Exception, e: self.set_message(type=u'error', content=u'Could not find mouth img for file ' + str(mouth_image.key().id()) + ' (' + unicode(e) + u')') minimum = min(int(eyes_img.width), int(nose_img.width), int(mouth_img.width)) eyes_url = images.get_serving_url(str(eyes_image.blob.key()), size=minimum) nose_url = images.get_serving_url(str(nose_image.blob.key()), size=minimum) mouth_url = images.get_serving_url(str(mouth_image.blob.key()), size=minimum) self.render( u'cyberfaze', minimum=minimum, eyes_image=eyes_image, eyes_url=eyes_url, nose_image=nose_image, 
nose_url=nose_url, mouth_image=mouth_image, mouth_url=mouth_url, form_url=blobstore.create_upload_url('/upload'), ) class UserRunsHandler(BaseHandler): """Show a specific user's runs,""" # ensure friendship with the logged in user""" @user_required def get(self, user_id): if True: # self.user.friends.count(user_id) or self.user.user_id == user_id: user = User.get_by_key_name(user_id) if not user: self.set_message(type=u'error', content=u'That user does not use Run with Friends.' ) self.redirect(u'/') return self.render(u'user', user=user, runs=Run.find_by_user_ids([user_id])) else: self.set_message(type=u'error', content=u'You are not allowed to see that.' ) self.redirect(u'/') class RunHandler(BaseHandler): """Add a run""" @user_required def post(self): try: location = self.request.POST[u'location'].strip() if not location: raise RunException(u'Please specify a location.') distance = float(self.request.POST[u'distance'].strip()) if distance < 0: raise RunException(u'Invalid distance.') date_year = int(self.request.POST[u'date_year'].strip()) date_month = int(self.request.POST[u'date_month'].strip()) date_day = int(self.request.POST[u'date_day'].strip()) if date_year < 0 or date_month < 0 or date_day < 0: raise RunException(u'Invalid date.') date = datetime.date(date_year, date_month, date_day) run = Run(user_id=self.user.user_id, location=location, distance=distance, date=date) run.put() title = run.pretty_distance + u' miles @' + location publish = u'<a onclick=\'publishRun(' \ + json.dumps(htmlescape(title)) \ + u')\'>Post to facebook.</a>' self.set_message(type=u'success', content=u'Added your run. ' + publish) except RunException, e: self.set_message(type=u'error', content=unicode(e)) except KeyError: self.set_message(type=u'error', content=u'Please specify location, distance & date.' ) except ValueError: self.set_message(type=u'error', content=u'Please specify a valid distance & date.' 
) except Exception, e: self.set_message(type=u'error', content=u'Unknown error occured. (' + unicode(e) + u')') self.redirect(u'/') class RealtimeHandler(BaseHandler): """Handles Facebook Real-time API interactions""" csrf_protect = False def get(self): if self.request.GET.get(u'setup') == u'1' and self.user \ and conf.ADMIN_USER_IDS.count(self.user.user_id): self.setup_subscription() self.set_message(type=u'success', content=u'Successfully setup Real-time subscription.' ) elif self.request.GET.get(u'hub.mode') == u'subscribe' \ and self.request.GET.get(u'hub.verify_token') \ == conf.FACEBOOK_REALTIME_VERIFY_TOKEN: self.response.out.write(self.request.GET.get(u'hub.challenge' )) logging.info(u'Successful Real-time subscription confirmation ping.' ) return else: self.set_message(type=u'error', content=u'You are not allowed to do that.') self.redirect(u'/') def post(self): body = self.request.body if self.request.headers[u'X-Hub-Signature'] != u'sha1=' \ + hmac.new(self.facebook.app_secret, msg=body, digestmod=hashlib.sha1).hexdigest(): logging.error(u'Real-time signature check failed: ' + unicode(self.request)) return data = json.loads(body) if data[u'object'] == u'user': for entry in data[u'entry']: taskqueue.add(url=u'/task/refresh-user/' + entry[u'id']) logging.info('Added task to queue to refresh user data.' 
) else: logging.warn(u'Unhandled Real-time ping: ' + body) def setup_subscription(self): path = u'/' + conf.FACEBOOK_APP_ID + u'/subscriptions' params = { u'access_token': conf.FACEBOOK_APP_ID + u'|' \ + conf.FACEBOOK_APP_SECRET, u'object': u'user', u'fields': _USER_FIELDS, u'callback_url': conf.EXTERNAL_HREF + u'realtime', u'verify_token': conf.FACEBOOK_REALTIME_VERIFY_TOKEN, } response = self.facebook.api(path, params, u'POST') logging.info(u'Real-time setup API call response: ' + unicode(response)) class RefreshUserHandler(BaseHandler): """Used as an App Engine Task to refresh a single user's data if possible""" csrf_protect = False def post(self, user_id): logging.info('Refreshing user data for ' + user_id) user = User.get_by_key_name(user_id) if not user: return try: user.refresh_data() except FacebookApiError: user.dirty = True user.put() class FileInfo(db.Model): blob = blobstore.BlobReferenceProperty(required=True) uploaded_by = db.UserProperty() facebook_user_id = db.StringProperty() uploaded_at = db.DateTimeProperty(required=True, auto_now_add=True) category = db.CategoryProperty(choices=('eyes', 'nose', 'mouth', 'other')) class FileBaseHandler(webapp.RequestHandler): def render_template(self, file, template_args): path = os.path.join(os.path.dirname(__file__), 'templates', file) self.response.out.write(template.render(path, template_args)) class FileUploadFormHandler(FileBaseHandler): # @util.login_required # @user_required def get(self): # user = users.get_current_user() if True: # user: # signed in already # self.response.out.write('Hello <em>%s</em>! [<a href="%s">sign out</a>]' % ( # user.nickname(), users.create_logout_url(self.request.uri))) self.render_template('upload.html', {'logout_url' : (users.create_logout_url(r'/' ) if users.get_current_user() else None)}) else: # let user choose authenticator self.response.out.write('Hello world! 
Sign in at: ') class FileUploadHandler(BaseHandler, blobstore_handlers.BlobstoreUploadHandler): csrf_protect = False def post(self): blob_info = self.get_uploads()[0] if False: # not users.get_current_user(): blob_info.delete() self.redirect(users.create_login_url(r'/')) return file_info = FileInfo(blob=blob_info.key()) # , logging.debug('if user') if self.user: logging.debug('found user') file_info.facebook_user_id = self.user.user_id logging.debug('set user id') db.put(file_info) self.redirect('/file/%d' % (file_info.key().id(), )) class AjaxSuccessHandler(FileBaseHandler): def get(self, file_id): self.response.headers['Content-Type'] = 'text/plain' self.response.out.write('%s/file/%s' % (self.request.host_url, file_id)) class FileInfoHandler(BaseHandler, FileBaseHandler): def get(self, file_id): file_info = FileInfo.get_by_id(long(file_id)) if not file_info: self.error(404) return self.render(u'info', file_info=file_info, logout_url=users.create_logout_url(r'/')) class FileDownloadHandler(blobstore_handlers.BlobstoreDownloadHandler): def get(self, file_id): file_info = FileInfo.get_by_id(long(file_id)) if not file_info or not file_info.blob: self.error(404) return self.send_blob(file_info.blob, save_as=True) class GenerateUploadUrlHandler(FileBaseHandler): # @util.login_required def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.out.write(blobstore.create_upload_url('/upload')) class ServeHandler(blobstore_handlers.BlobstoreDownloadHandler): def get(self, resource): resource = str(urllib.unquote(resource)) blob_info = blobstore.BlobInfo.get(resource) self.send_blob(blob_info) class SetCategoryHandler(webapp.RequestHandler): def get(self, file_id): file_info = FileInfo.get_by_id(long(file_id)) if not file_info or not file_info.blob or file_id == '25001': self.error(404) return file_info.category = self.request.get('cg') file_info.put() self.response.out.write('category updated') def main(): routes = [ (r'/', CyberFazeHandler), 
(r'/user/(.*)', UserRunsHandler), (r'/run', RunHandler), (r'/realtime', RealtimeHandler), (r'/task/refresh-user/(.*)', RefreshUserHandler), ('/ai', FileUploadFormHandler), ('/serve/([^/]+)?', ServeHandler), ('/upload', FileUploadHandler), ('/generate_upload_url', GenerateUploadUrlHandler), ('/file/([0-9]+)', FileInfoHandler), ('/file/set/([0-9]+)', SetCategoryHandler), ('/file/([0-9]+)/download', FileDownloadHandler), ('/file/([0-9]+)/success', AjaxSuccessHandler), ] application = webapp.WSGIApplication(routes, debug=os.environ.get('SERVER_SOFTWARE', '').startswith('Dev' )) util.run_wsgi_app(application) if __name__ == u'__main__': main() Answer: class CyberFazeHandler(BaseHandler): def get_random_image(self, category): I'd recommend you consider adding docstrings, to give a quick explanation of what your functions are doing. q = FileInfo.all() q.filter('category =', category) q.filter('randomvalue >=', random.random()) return q.get() def get_random_image_legacy(self, category): fileinfos = FileInfo.all().filter('category =', category) return fileinfos[random.randint(0, fileinfos.count() - 1)] Why would you keep a legacy method around? def get(self): eyes_image = self.get_random_image(category='eyes') if not eyes_image: logging.debug("getting eyes failed, trying legacy method") eyes_image = self.get_random_image_legacy(category='eyes') nose_image = self.get_random_image(category='nose') if not nose_image: nose_image = self.get_random_image_legacy(category='nose') mouth_image = self.get_random_image(category='mouth') if not mouth_image: mouth_image = self.get_random_image_legacy(category='mouth') You've got pretty much the same thing repeated three times. 
Write a generic function that can handle getting the image and handling the failure case eyes_data = None try: eyes_data = blobstore.fetch_data(eyes_image.blob.key(), 0, 50000) except Exception, e: self.set_message(type=u'error', content=u'Could not find eyes data for file ' + str(eyes_image.key().id()) + ' (' + unicode(e) + u')') Do you really want to catch any exception here? Usually you want to catch a more specific exception. eyes_img = None Do this in the except clause try: eyes_img = images.Image(image_data=eyes_data) except Exception, e: self.set_message(type=u'error', content=u'Could not find eyes img for file ' + str(eyes_image.key().id()) + ' (' + unicode(e) + u')') nose_data = None try: nose_data = blobstore.fetch_data(nose_image.blob.key(), 0, 50000) except Exception, e: self.set_message(type=u'error', content=u'Could not find nose data for file ' + str(nose_image.key().id()) + ' (' + unicode(e) + u')') nose_img = None try: nose_img = images.Image(image_data=nose_data) except Exception, e: self.set_message(type=u'error', content=u'Could not find nose img for file ' + str(nose_image.key().id()) + ' (' + unicode(e) + u')') mouth_data = None try: mouth_data = blobstore.fetch_data(mouth_image.blob.key(), 0, 50000) except Exception, e: self.set_message(type=u'error', content=u'Could not find mouth data for file ' + str(eyes_image.key().id()) + ' (' + unicode(e) + u')') mouth_img = None try: mouth_img = images.Image(image_data=mouth_data) except Exception, e: self.set_message(type=u'error', content=u'Could not find mouth img for file ' + str(mouth_image.key().id()) + ' (' + unicode(e) + u')') Again almost exact code duplicated several times, refactor it into a function. 
minimum = min(int(eyes_img.width), int(nose_img.width), int(mouth_img.width)) I'd put the int on the outside so you don't have to repeat it eyes_url = images.get_serving_url(str(eyes_image.blob.key()), size=minimum) nose_url = images.get_serving_url(str(nose_image.blob.key()), size=minimum) mouth_url = images.get_serving_url(str(mouth_image.blob.key()), size=minimum) self.render( u'cyberfaze', minimum=minimum, eyes_image=eyes_image, eyes_url=eyes_url, nose_image=nose_image, nose_url=nose_url, mouth_image=mouth_image, mouth_url=mouth_url, form_url=blobstore.create_upload_url('/upload'), ) Basically, you've got the same logic repeating for eyes, nose and mouth. I'd probably write a FaceFeature object to handle that logic and then create three of them. That way we'd get rid of the duplication.
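The "refactor it into a function" advice can be illustrated without the App Engine APIs. A dependency-free sketch of the pattern — fetch stands in for the get_random_image / blobstore.fetch_data / images.Image chain, and on_error for set_message:

```python
def load_features(names, fetch, on_error):
    """Collapse the repeated eyes/nose/mouth try/except blocks into one loop:
    fetch(name) may raise; each failure is reported once via on_error and
    recorded as None instead of aborting the whole request."""
    results = {}
    for name in names:
        try:
            results[name] = fetch(name)
        except Exception as e:
            on_error(u"Could not load %s (%s)" % (name, e))
            results[name] = None
    return results
```

In the handler this would be called once for ('eyes', 'nose', 'mouth'), with the error callback forwarding to self.set_message(type=u'error', ...). The hypothetical FaceFeature class mentioned above is the same idea taken one step further, bundling the image info, the Image object, and the serving URL per feature.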
{ "domain": "codereview.stackexchange", "id": 719, "tags": "python, performance, google-app-engine, facebook" }
Can you publish only obstacles from a costmap
Question: I'm wondering if it is possible to publish only the cells (points) that are occupied in a given costmap_2d object using the newer layered costmap approach in Hydro? This was how the nav_msgs/GridCells worked on the /obstacles topic of old and it was a useful feature. I suppose it would not be terribly difficult to write a node that subscribes to the /costmap topic now and converts the occupancy grid into a GridCells message of occupied cells, but I'm just curious if that functionality is built in to the current implementation of costmap_2d or not. Thanks! Originally posted by Matt_Derry on ROS Answers with karma: 41 on 2014-01-17 Post score: 2 Answer: It is not currently built in. You are correct that you could write another node, since the information for the gridcells is a subset of the information in the costmap. Originally posted by David Lu with karma: 10932 on 2014-01-17 This answer was ACCEPTED on the original site Post score: 1
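The core of such a converter node is only a few lines. A sketch of the conversion logic (the occupancy threshold of 65 and the flattened argument list are assumptions; a real node would subscribe to a nav_msgs/OccupancyGrid and publish a nav_msgs/GridCells built from these points):

```python
def occupied_cells(data, width, resolution, origin_x, origin_y, threshold=65):
    """Return the (x, y) centers of occupied cells from a row-major
    OccupancyGrid-style array (values 0-100, -1 = unknown)."""
    cells = []
    for i, value in enumerate(data):
        if value >= threshold:  # unknown (-1) and free cells are skipped
            x = origin_x + (i % width + 0.5) * resolution
            y = origin_y + (i // width + 0.5) * resolution
            cells.append((x, y))
    return cells
```

The +0.5 offsets place each point at the cell center, matching how the old /obstacles GridCells were positioned.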
{ "domain": "robotics.stackexchange", "id": 16681, "tags": "navigation, occupancy-grid, costmap" }
Basic intuition for motor torque
Question: When shopping on Digi-Key or another electronics distributor, I can see the Torque - Holding spec, given in oz-in and mNm. I understand Nm in a torque context when the force is applied via a lever at a distance from the rotation point (for instance, applying a 1 N force on a wrench 20 cm from the rotational point gives 0.2 Nm). But what does it mean when we are talking about motor torque? It seems to me that it is like the opposite: the motor is applying force at the rotational point, so the distance is always 0. Does it mean motor specs always measure the force the motor applies 1 meter away from it? I am trying to get a good intuition of what Nm means when looking at motors, because tutorials online only talk about wrenches or other cases where the force is applied away from the rotation point. Thanks a lot! Answer: Let's say you have a motor with a torque rating of 10 Nm that came in a box with just a shaft. The rating means that if you attach a pulley with R = 1 meter, the belt is pulled with a force of 10 N. But if you attach a 0.2 meter pulley, you get 10 / 0.2 = 50 N of belt force. Of course, your belt then runs 5 times slower. Belt force = shaft torque / pulley radius in meters.
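In other words, the Nm rating fixes the product force × radius at whatever radius you attach a pulley or lever, so the available rim force is F = τ / r. A quick check of the numbers (the 10 Nm rating and pulley sizes follow the example above):

```python
def belt_force(torque_nm, pulley_radius_m):
    """Force at the rim of a pulley driven with the given shaft torque: F = tau / r."""
    return torque_nm / pulley_radius_m

print(belt_force(10, 1.0))  # 1 m pulley: 10 N at the belt
print(belt_force(10, 0.2))  # 0.2 m pulley: 50 N, at 5x lower belt speed
```

The smaller the pulley, the larger the force but the slower the belt — the same trade-off as choosing where on a wrench to push.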
{ "domain": "engineering.stackexchange", "id": 3296, "tags": "motors, torque" }
Why $\sum \partial\phi_v/ \partial p_v = 0$?
Question: In Investigations on the Theory of the Brownian Movement, on page 5, Einstein wrote: of all atoms of the system), and if the complete system of the equations of change of these variables of state is given in the form $$\dfrac{\partial p_v}{\partial t}=\phi_v(p_1\ldots p_l)\ (v=1,2,\ldots l)$$ whence $$\sum\frac{\partial\phi_v}{\partial p_v}=0,$$ I assume it is an elementary result, since he gives no explanation of how to deduce it. How can I obtain this relation? Attempt: I tried to consider $$\sum\frac{\partial \phi_v}{\partial p_v} ~=~ \sum\frac{\mathrm{d} \phi_v}{\mathrm{d}t} \left(\partial_t p_v \right)^{-1} ~=~ \sum \frac{\partial_t \phi_v}{ \phi_v} \,,$$ but I couldn't go any further. Answer: The variables $$p^{\nu}, \qquad \nu=1,\ldots, \ell \tag{A}$$ are the phase space coordinates. The derivative $\frac{\partial p^{\nu}}{\partial t}$ in Einstein's paper is a total time derivative. The vector field $$\phi~=~\sum_{\nu=1}^{\ell}\phi^{\nu}\frac{\partial }{\partial p^{\nu}} \tag{B}$$ generates time evolution. The divergence of a vector field is $$ {\rm div}\phi~=~ \frac{1}{\rho}\sum_{\nu=1}^{\ell}\frac{\partial (\rho\phi^{\nu})}{\partial p^{\nu}},\tag{C}$$ where $\rho$ is the density in phase space, which we will assume is constant $$\rho={\rm constant} \tag{D}$$ (wrt. the chosen coordinate system). Apparently Einstein assumes that the vector field $\phi$ is divergence-free, $$ {\rm div}\phi~=~0 .\tag{E}$$ We stress that not all vector fields are divergence-free. Counterexample: The dilation vector field $$\phi~=~\sum_{\nu=1}^{\ell}p^{\nu}\frac{\partial }{\partial p^{\nu}}\tag{F}$$ is not divergence-free. The corresponding flow solution reads $$ p^{\nu}(t)~=~p^{\nu}_{(0)} e^t.\tag{G}$$ Assumptions (D) and (E) follow e.g. in a Hamiltonian formulation because of (among other things) Liouville's theorem. Recall that Hamiltonian vector fields are divergence-free. See also this related Phys.SE post.
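The non-vanishing divergence of the counterexample (F) is easy to verify numerically. A crude central-difference sketch (the helper below is illustrative, not from the text):

```python
def divergence(field, p, h=1e-6):
    """Central-difference estimate of div(phi) = sum_nu d(phi^nu)/d(p^nu)."""
    total = 0.0
    for nu in range(len(p)):
        up = list(p); up[nu] += h
        dn = list(p); dn[nu] -= h
        total += (field(up)[nu] - field(dn)[nu]) / (2 * h)
    return total

dilation = lambda p: list(p)   # the dilation field phi^nu = p^nu of eq. (F)
print(divergence(dilation, [1.0, 2.0, 3.0]))   # ~3.0 for l = 3, not 0
```

By contrast, a rotation-like field such as $\phi = (-p^2, p^1)$ comes out divergence-free under the same check.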
{ "domain": "physics.stackexchange", "id": 35088, "tags": "statistical-mechanics, phase-space, brownian-motion" }
Coherence and Interference
Question: Is coherence always necessary for interference? I thought so, but I came across a problem: if you try to measure an x-ray spectrum, you usually do so by using Bragg reflection on a crystal, i.e. interference. But x-rays from a usual x-ray tube aren't coherent. How does that work? Or is a mere collimator sufficient to make the beam coherent? But then: it is possible to isolate a wavelength from a diode (for example) with a prism monochromator, although that as well is a non-coherent light source. What's the error in reasoning here? Answer: In experiments with single photons or single electrons behind an obstacle, after a while - sending a lot of particles one by one - an intensity distribution in the form of fringes is observable. This observation holds for single-shot particles even behind a single sharp edge. This intensity distribution is called interference. Interference is a phenomenon in which two waves superpose to form a resultant wave of greater, lower, or the same amplitude. So if one explains the case of fringes from single particles behind a single edge by interference, this process must happen through interaction of the particle with itself. Then the phenomena of fringes behind a slit, or many slits, are reducible to the case of self-interaction for every particle. Without an obstacle you observe the particles of a beam with, for example, a Gaussian distribution; no wave behavior of the particles is observable (left image). Behind an obstacle the particles show their wave behavior and appear on the observation screen - still as undivided particles - with a wavelike distribution (right image). "If you try to measure an x-ray spectrum, you usually do so by using Bragg reflection on a crystal, i.e. interference. But x-rays from a usual x-ray tube aren't coherent." If you want to get "nice" fringes, the only thing you have to care about is getting a beam with a small bandwidth of wavelengths. But "nice" depends on what you want.
For example, from unfiltered sunlight you still get fringes, this time with different, shifted intensity distributions from the different colors. Is coherence always necessary for interference? Reading the above, perhaps you will agree that it is not.
{ "domain": "physics.stackexchange", "id": 44121, "tags": "interference, coherence" }
Does a heavier object fall slower according to the special theory of relativity?
Question: I'm a newbie in physics, but I have an idea and want to know whether it is correct! I saw some information about the special theory of relativity. It says something like "when an object goes faster it gains mass, but noticeably only near the speed of light". Now the question is: will a heavier object fall slower because it needs more kinetic energy to reach the same speed? Answer: No. A heavier object will gain more kinetic energy when falling than a lighter object, but it has a higher potential energy to start with. It is actually not kinetic energy that is needed to fall but potential energy that is converted into kinetic energy while falling. Generally, while the gravitational force on an object depends on its gravitational mass, so does its inertia (seen in Newton's second law), and due to this its acceleration will not depend on mass. This is the equivalence principle in general relativity.
{ "domain": "physics.stackexchange", "id": 24436, "tags": "special-relativity" }
Add numbers in array without adding adjacent number
Question: This is my first time submitting code for review, and I would like feedback on industry coding standards and optimal code. This program adds the numbers in an array in two ways: 1. Adds adjacent numbers serially, as proof of the actual sum, to check against the required output. 2. Adds the last and first variables of the array.

import java.util.stream.IntStream;

class twoadj1
{
    public int a;
    public int b[];
    public int sum=0;
    public int sum1=0;
    public int k=0;
    public int m=0;

    twoadj1(int size)
    {
        b = new int[size];
        k = size-1;
        m = size;
    }

    void valueadd()
    {
        {
            for(int z = 0; z < b.length; z++)
            {
                b[z] = (int)(Math.random()*9);
                System.out.print(b[z]+ " ");
                int sum = IntStream.of(b).sum();
                System.out.println("real sum"+sum);
            }

            for (int j=0;j<b.length/2;j++)
            {
                sum1 = sum1+b[j]+b[k];
                k--;
                System.out.println("Process: " +sum1);
            }

            if((m%2)==0)
                System.out.println("Sum after required output1: " +sum1);
            else if((m%2)==1)
            {
                sum1 += b[m/2];
                System.out.println("Sum after required output2: "+sum1);
            }
        }
    }
}

public class twoadj
{
    public static void main(String[] args)
    {
        twoadj1 a = new twoadj1(5);
        a.valueadd();
    }
}

Answer: Your indentation is inconsistent, and it looks like you got confused yourself, since you are computing and printing "real sum" many times. The code organization could use improvement as well. The valueadd() function does a lot of stuff: Populating the array with random members. Printing the array. Summing the array using streams, and printing that sum. Summing the array by working from the ends towards the middle, and printing that sum. There is no way to figure out what it does without reading all of the code. Ideally, each function should be limited to a single responsibility. There are a lot of instance variables, all cryptically named, and all public:

public int a;
public int b[];
public int sum=0;
public int sum1=0;
public int k=0;
public int m=0;

Only the array should be an instance variable here; all of the others could be local variables.
The output looks a bit sloppy as well. For example, I would expect there to be a newline after printing the array contents, and a space after "real sum". Suggested solution Notice how each method has a one-sentence JavaDoc summary of what it does. (If you can't write such a summary for a function, then it would be an indication that the function is poorly designed.)

import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ArrayAdder {

    private int[] array;

    /**
     * Randomly populates an array of integers of the specified size.
     */
    public ArrayAdder(int size) {
        this.array = new int[size];
        for (int i = 0; i < size; i++) {
            this.array[i] = (int)(9 * Math.random());
        }
    }

    /**
     * The elements of the array, delimited by spaces.
     */
    public String toString() {
        return IntStream.of(this.array)
                        .mapToObj(String::valueOf)
                        .collect(Collectors.joining(" "));
    }

    /**
     * Sums the array using IntStream.
     */
    public int streamSum() {
        return IntStream.of(this.array).sum();
    }

    /**
     * Sums the array by working from the ends toward the middle.
     */
    public int nestedSum() {
        int i, j, sum = 0;
        for (i = 0, j = this.array.length - 1; i < j; i++, j--) {
            sum += this.array[i] + this.array[j];
        }
        if (i == j) {
            sum += this.array[i];
        }
        return sum;
    }

    /**
     * Demonstrates the equivalence of two addition methods on a 5-element array.
     */
    public static void main(String[] args) {
        ArrayAdder demo = new ArrayAdder(5);
        System.out.println(demo);
        System.out.println("Sum using stream: " + demo.streamSum());
        System.out.println("Nested sum: " + demo.nestedSum());
    }
}
{ "domain": "codereview.stackexchange", "id": 21272, "tags": "java" }
Footstep_planner crashes when used with customized map for some (start, goal) combinations
Question: Hello everybody, I am trying to get the footstep_planner to work with some customized maps. My problem is that when, for example, I designate the points marked in the second image (at the end of this post), the application crashes, sometimes with:

[ERROR] [1339689725.772754649]: ERROR: grid2Dsearch is called on invalid start (154 392) or goal(504 190)

and always with:

[footstep_planner-3] process has died [pid 23662, exit code -11, cmd /opt/ros/fuerte/stacks/humanoid_stacks/humanoid_navigation/footstep_planner/bin/footstep_planner_node __name:=footstep_planner __log:=/home/sergio/.ros/log/21710a10-b632-11e1-a3e0-0022fb659408/footstep_planner-3.log]. log file: /home/sergio/.ros/log/21710a10-b632-11e1-a3e0-0022fb659408/footstep_planner-3*.log

I checked the log and it is always empty. I am currently running ROS Fuerte on Ubuntu 12.04, with the latest version of the footstep_planner. The file I use to load the map looks like this:

image: sample.bmp
resolution: 0.01
origin: [0.0, 0.0, 0.0]
occupied_thresh: 0.5
free_thresh: 0.1
negate: 0

And here are the two images: map (here it appears as a jpg file but my file is actually a bmp): http://i49.servimg.com/u/f49/10/09/41/82/sample10.jpg Image with locations: http://i49.servimg.com/u/f49/10/09/41/82/captur10.png If you have any suggestions about where the problem might come from, let me express my gratitude to you all in advance. Originally posted by Sergio Sousa on ROS Answers with karma: 13 on 2012-06-14 Post score: 0 Answer: That is an error of the 2D Dijkstra heuristic in the underlying SBPL. You may see more information by using rxconsole and setting the output level to "Debug", or by starting the footstep_planner node in a separate terminal. I'll check with your map file as soon as I can. Could you test setting the heuristic_type parameter to either EuclStepCostHeuristic or EuclideanHeuristic, to see if the map is working otherwise?
Originally posted by AHornung with karma: 5904 on 2012-06-14 This answer was ACCEPTED on the original site Post score: 2
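For reference, a parameter like heuristic_type is typically set on the node in the launch file. The snippet below is a sketch; the package and executable names are assumptions based on the paths in the error message, not verified against this installation:

```xml
<launch>
  <node pkg="footstep_planner" type="footstep_planner_node" name="footstep_planner">
    <!-- swap the 2D Dijkstra heuristic for a Euclidean one, as suggested -->
    <param name="heuristic_type" value="EuclideanHeuristic" />
  </node>
</launch>
```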
{ "domain": "robotics.stackexchange", "id": 9796, "tags": "ros" }
Complexity of union (computational geometry)
Question: I'm currently reading "Computational Geometry" by Mark de Berg, Otfried Cheong, Marc van Kreveld, and Mark Overmars and found the following theorem 13.9. Let $S$ be a collection of convex polygonal pseudodiscs with n edges in total. Then the complexity of their union is at most $2n$. I'm not really sure what this should mean (this might be a language barrier). First I thought that this would be the time complexity, but I'm now pretty unsure about that. Answer: The complexity of a union of objects is the number of pieces in the boundary of the union. In the plane, "complexity of boundary" is equivalent (up to a constant factor) to "number of vertices". But in general the boundary complexity is the sum of complexities of all objects needed to describe the boundary (vertices, edges, faces, and so on). It's just that in the plane the number of vertices in the boundary is the same as the number of edges (because the boundary is a set of cycles in this case).
{ "domain": "cstheory.stackexchange", "id": 1802, "tags": "cg.comp-geom" }
What advantage do dogs have in clearly disclosing whether they are afraid or unafraid in a conflict?
Question: Dogs are known to clearly show whether they are afraid or unafraid through the posture of their tails during a conflict. How and why is this beneficial? My limited understanding makes me feel that, in any social setting, disclosing whether an actor is afraid or unafraid would almost always be unfavourable for the actor. Is this incorrect? Isn't it better for the less confident dog to try to bark out the stronger one, and if that doesn't work, simply walk away, rather than broadcast that it is afraid of the stronger dog? Answer: It is better for animals to broadcast their qualities and size one another up before risking injury in conflict. As pack animals, dogs are particularly sensitive to where they are placed in a hierarchy. So it is better for them to telegraph their qualities than to engage in conflict that may hurt them.
{ "domain": "biology.stackexchange", "id": 8262, "tags": "evolution, zoology, behaviour, psychology, dogs" }
Right-hand rule of EM waves (electric, magnetic field and direction of propagation)
Question: According to my physics book, the electric field, the magnetic field and the direction of propagation obey the right-hand rule. Anyway, I am not sure why, in the derivation below, the book states that $B = \sqrt{B_z^2 + B_y^2}$ and $B^2 = B_z^2 + B_y^2$ $$\mathbf{E}\times\mathbf{B}=\left|\begin{matrix}\mathbf{i}&\mathbf{j}&\mathbf{k}\\0&E_y&E_z\\0&B_y&B_z\end{matrix}\right|=(E_yB_z-E_zB_y)\mathbf{i}=(vB_z^2+vB_y^2)\mathbf{i}=vB^2\mathbf{i}$$ $$E=\sqrt{E_y^2+E_z^2}=\sqrt{v^2B_z^2+v^2B_y^2}=v\sqrt{B_z^2+B_y^2}=vB$$ Answer: Ah, this is just vector addition. They're calculating the total length of the $B$ vector from its $y$ and $z$ components, so they use the Pythagorean theorem.
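One can sanity-check both identities with numbers. The sketch below assumes the plane-wave relations $E_y = vB_z$ and $E_z = -vB_y$ that are implicit in the book's derivation; the field values are made up:

```python
import math

v, By, Bz = 3.0, 0.6, 0.8      # propagation speed and B components (made up)
Ey, Ez = v * Bz, -v * By       # transverse E components of the wave

x_comp = Ey * Bz - Ez * By     # x component of E x B
B = math.hypot(By, Bz)         # total field strengths via Pythagoras
E = math.hypot(Ey, Ez)

print(x_comp, v * B**2)        # equal: E x B = v B^2 along the propagation axis
print(E, v * B)                # equal: E = v B
```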
{ "domain": "physics.stackexchange", "id": 79556, "tags": "electromagnetism, waves, electric-fields" }
Is it possible to (de)activate a specific set of cells in jupyter?
Question: I have a Jupyter notebook and I would like to perform several runs, with the code I want to run depending on the results of the previous runs. I divided my notebook into several cells, and for each run I would like to select which cells should be executed and which should not. Is there functionality along the lines of "run all cells except those I explicitly deactivate"? Answer: Welcome to DataScience.SE! This is not currently possible. You could change the cells to Raw.
{ "domain": "datascience.stackexchange", "id": 11271, "tags": "jupyter" }
Can you perform a Wick rotation if the poles are on the imaginary axis?
Question: I know you can perform a Wick rotation whenever the poles are outside the contour but what happens if the poles are on the imaginary axis? Can you do it anyway? Answer: Well, the lore is that one is supposed to regularize an oscillatory Minkowski integral from a QFT calculation by the Feynman $i\epsilon$-prescription (which moves poles off the integration contour) in order to Wick rotate to an exponentially decaying and convergent Euclidean integral. If one encounters poles at the Euclidean end of the Wick rotation, then something has likely gone wrong in the process.
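For concreteness (this illustration is mine, not from the original exchange): with the Feynman prescription the free scalar propagator reads

```latex
\frac{1}{k^2 - m^2 + i\epsilon},
\qquad \text{poles at } k^0 = \pm\left(\omega_{\vec k} - i\epsilon'\right),
\qquad \omega_{\vec k} = \sqrt{\vec k^2 + m^2},
```

so the poles sit just below the positive real $k^0$ axis and just above the negative one, and the counterclockwise rotation $k^0 \to i k^0_E$ sweeps only pole-free quadrants. A pole sitting exactly on the imaginary $k^0$ axis would be struck during that rotation, which is exactly the situation the question asks about.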
{ "domain": "physics.stackexchange", "id": 77643, "tags": "quantum-field-theory, regularization, wick-rotation, analyticity" }
Is it possible to have electric field in water without having electrolysis?
Question: Is it possible to have an electric field in water (using electrodes with a voltage difference) without having electrolysis in the water (or any other reaction)? Answer: Well, if the voltage is lower than needed for any possible electrolysis reaction, there would be no reaction, just the field. So the answer may seem to be yes. Now, in fact there is a catch. Once you turn on the voltage, even if the reaction is not possible, the ions will rush to the respective electrodes and form the so-called electric double layer. Its thickness, known as the Debye length, is usually quite small. As for the rest of the solution, it will be effectively shielded from the electric field. Whether or not this counts as a positive answer is up to you.
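To see how thin the screened layer is, one can plug typical numbers into the standard Debye-length formula for a 1:1 electrolyte (an illustrative calculation of mine, not from the original answer):

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
KB = 1.381e-23     # Boltzmann constant, J/K
QE = 1.602e-19     # elementary charge, C
NA = 6.022e23      # Avogadro constant, 1/mol

def debye_length(conc_molar, eps_r=78.5, T=298.0):
    """Debye length sqrt(eps * kB * T / (2 * n * e^2)) for a 1:1 salt."""
    n = conc_molar * 1000 * NA          # ion number density per m^3
    return math.sqrt(eps_r * EPS0 * KB * T / (2 * n * QE**2))

print(debye_length(0.1))   # ~1e-9 m: the field penetrates only about 1 nm
```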
{ "domain": "chemistry.stackexchange", "id": 5248, "tags": "electrochemistry, water, electrolysis" }
Swimming across a river
Question: I have come across a physics riddle where 2 swimmers with the same speed $c$ are competing against each other in a river. They start at the same spot; the first swimmer (s1) swims a distance $d$ upstream and back, while the other swimmer (s2) swims the same distance straight across and back. The river's stream has velocity $v$. Which one will arrive back first at the starting point? My answer: $$ t_1 = \frac{2dc}{c^2-v^2}$$ $$ t_2 = \frac{2d}{\sqrt{c^2-v^2}} $$ Comparing both sides we get: $$ t_1 > t_2, \quad\text{since}\quad c > \sqrt{c^2-v^2}.$$ The actual answer to the riddle is $t_1 < t_2$; they calculated $t_2 = 2d/c$. How can the time of the second swimmer be $2d/c$? It seems they have neglected the effect of the current in their answer, and if so the swimmer would not be able to return to the starting point but only to some point downstream. Shouldn't the effective speed be, as I calculated, $\sqrt{c^2-v^2}$? Answer: If the second swimmer's aim is simply to cross to the other bank and back to the first bank (not necessarily needing to get back to his starting point) then he can ignore the water flow and direct all his swimming energy into heading across the river. He can just swim a distance of 2d relative to the water and ignore the fact that he is drifting downstream. If his target is to get back to the starting point then he must head into the flow, and your calculation is correct.
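A quick numerical check of the two travel times (with made-up values for $d$, $c$, $v$) confirms $t_1 > t_2$ when s2 must return to the starting point:

```python
import math

def t_up_and_down(d, c, v):
    # upstream at ground speed c - v, then downstream at c + v
    return d / (c - v) + d / (c + v)            # = 2dc / (c^2 - v^2)

def t_across_and_back(d, c, v):
    # heading partly upstream so the ground track stays straight across
    return 2 * d / math.sqrt(c**2 - v**2)

d, c, v = 100.0, 2.0, 1.0
print(t_up_and_down(d, c, v))      # 133.33... s
print(t_across_and_back(d, c, v))  # 115.47... s  -> the crosser wins
```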
{ "domain": "physics.stackexchange", "id": 41290, "tags": "homework-and-exercises, kinematics, velocity, relative-motion" }
Dynamic reconfigure of costmaps
Question: Hello everyone! I have a mobile robot in Gazebo with defined dimensions, whose task is to pick up tables and transport them to other locations. I am using move_base for path planning and navigation. The robot navigates properly when there is no load on it, but fails to avoid obstacles when there is a table on it. Obviously, this is because the dimensions of the tables are not taken into account in the costmaps during navigation. The question is the following: is there any possibility to dynamically reconfigure or recompute my costmaps during robot motion with move_base? My first thought was to run two move_base nodes in parallel, each taking control after some command, but I am not sure if this is even possible; I am going to try it soon, but any other/better solutions are very appreciated! Originally posted by kurshakuz on ROS Answers with karma: 76 on 2019-06-27 Post score: 0 Answer: You can always increase the inflation layer in your costmap params config files. You can also use dynamic reconfigure to change the inflation value on the fly. Alternatively, you can change the value in your code and update the inflation parameter. This [question](http://answers.ros.org/question/12276/is-there-a-c-api-for-a-dynamic-reconfigure-client/) tackles that problem. Originally posted by pavel92 with karma: 1655 on 2019-06-28 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by David Lu on 2019-06-28: in addition to changing the inflation radius, you will likely want to change the robot radius / footprint in the general costmap namespace. https://github.com/ros-planning/navigation/blob/melodic-devel/costmap_2d/cfg/Costmap2D.cfg#L20
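As a concrete illustration of the "change it on the fly" route, the stock dynamic_reconfigure command-line tool can push a new value at runtime. The node and parameter paths below are assumptions about a typical move_base setup and would need to be adapted to the actual namespace:

```
# list the reconfigurable parameters of the global costmap's inflation layer
rosrun dynamic_reconfigure dynparam get /move_base/global_costmap/inflation_layer

# grow the inflation radius while the robot carries a table
rosrun dynamic_reconfigure dynparam set /move_base/global_costmap/inflation_layer inflation_radius 0.8
```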
{ "domain": "robotics.stackexchange", "id": 33275, "tags": "gazebo, navigation, move-base, ros-kinetic" }
Why does hyperventilation make you feel like you need to breathe more?
Question: Calm Clinic claims: "The problem is that hyperventilation makes your body feel like you're not getting enough oxygen. Essentially, it makes you feel like you need to take deeper breaths and take in as much air as possible. This makes all of the symptoms of hyperventilation worse." As far as I know, the brain controls breathing rate by measuring the amount of carbon dioxide in the blood. So is this true? If so, why? Answer: Hyperventilation alone does not cause you to feel that you're not getting enough oxygen. Rather, it's what causes hyperventilation that does that (thus resulting in hyperventilation.) The Calm Clinic explains this quite well (while only mildly contradicting your quote): During periods of intense anxiety, the body is sent into a state of fight or flight, when the brain signals to the body that danger is afoot. When this happens, you automatically start breathing quickly, as this oxygenates your blood and prepares your body to respond to a threat by fighting or fleeing. If the threat that has triggered your fight or flight response (whether real or imagined) persists, you’re likely to continue hyperventilating until you start to experience other unpleasant physical symptoms. The focus of your question is "...This makes all of the symptoms of hyperventilation worse." [emphasis mine] You can hyperventilate by breathing too quickly or too deeply; either way, in people without underlying medical disorders, hyperventilation is usually caused by stress/anxiety. Anxiety makes your heart rate increase, and causes a perception of the need for more air (not the actual hyperventilation). It is often accompanied by some degree of chest tightness, which many people reasonable attribute to a problem with their heart. These things tend to cause more stress, so it's a cycle. The symptoms of hyperventilation are dizziness/lightheadedness, tingling in your hands/feet and around your mouth, and more but less common symptoms. 
Under normal circumstances, hyperventilation leads to a period of decreased respiratory rate to allow for arterial blood to build up that critical buffer, HCO3. Dealing with Anxiety Symptoms: Hyperventilation (written for laypersons)
{ "domain": "biology.stackexchange", "id": 12271, "tags": "human-biology, physiology, respiration, breathing, lungs" }
Are Hamiltonians CPT invariant?
Question: I'm confused by the CPT theorem. It states (more or less) that a Lorentz-invariant quantum field theory needs to be CPT invariant. But what does it actually mean for a QFT to be CPT invariant? It surely means that its Lagrangian is. What about the Hamiltonian? Does the invariance get inherited by it as well? What about other observables that I can measure in an experiment? Are all of those also invariant under CPT (even though they might not be Lorentz invariant)? Related to this: If I have a Lagrangian (density) that transforms in a specific way under C, P, and T, and I derive a low-energy Hamiltonian from it, does this one necessarily inherit the same transformation properties? And if yes, is the reverse true? If a Hamiltonian (of an isolated system) is odd under C, P, or T, will this necessarily correspond to violations of the same symmetries at high energies? Answer: The WP statement is that Any Lorentz invariant local quantum field theory with a Hermitian Hamiltonian must have CPT symmetry. The theory involves the vacuum state, the Lagrangian, and the Hamiltonian. What about other observables that I can measure in an experiment? Are all of those also invariant under CPT? They don't need to be, in principle: that's how you might measure a violation; but they turn out to be, confirming CPT is a sacrosanct symmetry. Re: your second paragraph. P is violated maximally by the weak interactions (e.g. in neutron decay), and T (equivalently, CP) by a little. CPT is still conserved there, at low energies, just as at higher ones! The weak QFT Hamiltonian preserves CPT at all energies!
{ "domain": "physics.stackexchange", "id": 98655, "tags": "hamiltonian, symmetry-breaking, charge-conjugation, cpt-symmetry, cpt-violation" }
How to write a closed term with this type?
Question: X→Y →(X+Y)×Y I'm confused about how to get the type (X + Y). If I assume m: (X + Y), n: X, k: Y then I can get $${m:X+Y \vdash \lambda n:X.\lambda k:Y.<m, k>}$$ which has the right type, but I can't get rid of the assumption m: X + Y in the context. So, could anyone tell me how to do it? Thanks a lot. Answer: Whenever a new type constructor is defined, it comes with introduction and elimination terms. These are part of the definition. In the case of sums, the following are part of the definition of what it means to have $X + Y$: given types $X$ and $Y$ there is a type $X + Y$; given a term $e : X$ there is a term $\mathsf{inl}(e) : X + Y$; given a term $e' : Y$ there is a term $\mathsf{inr}(e') : X + Y$ (there is also an eliminator and equations which I am not writing down here). So I think this is the point you missed: we do not just say "there is a type $X + Y$" without anything else, as that would be completely useless. We also provide ways of constructing (introducing) and deconstructing (eliminating) terms of the new type. Now that we know that $\mathsf{inl}$ and $\mathsf{inr}$ are there by definition of sum types, it is easy: $$\lambda x : X . \lambda y : Y . \langle \mathsf{inr}(y), y\rangle.$$ By the way, Haskell's djinn package can help you with this (I'd show a demo but OSX El Capitan broke cabal).
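The closing term transliterates directly into a dynamically-typed sketch, encoding the two injections as tagged pairs (an illustration of mine; `inl`/`inr` are defined here, not library functions):

```python
def inl(e):
    return ("inl", e)        # left injection,  X -> X + Y

def inr(e):
    return ("inr", e)        # right injection, Y -> X + Y

# lambda x:X. lambda y:Y. <inr(y), y>   :   X -> Y -> (X + Y) x Y
term = lambda x: lambda y: (inr(y), y)

print(term(42)("hello"))     # (('inr', 'hello'), 'hello')
```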
{ "domain": "cs.stackexchange", "id": 6338, "tags": "lambda-calculus" }
Deriving the Old Quantum Condition ($\oint p_i dq_i=nh$)
Question: A body undergoing periodic motion in an orbit of quantum number $n$ will have a period $T$, determined by $$T=\oint \frac{ds}{v}=\oint \frac{ds}{\sqrt{\frac{2}{m}(E-V)}}$$ Where $ds$ is an infinitesimal displacement, $v$ is the body's speed, $m$ is its mass, $E$ its total energy, and $V$ its potential energy. Likewise, it'll have an abbreviated action over the whole period equal to $$J=\oint\vec p\cdot d\vec s =\sum_i\oint p_idq_i=\oint\sqrt{2m(E-V)}ds$$ It can be easily seen that $$T=\dfrac{dJ}{dE}$$ Now, according to Bohr, in the limit of large quantum numbers (corresponding to large vibrations) the behaviour of the body should approach its classical behavior. So the frequency of light emitted by a body as it drops from state $n$ to a lower state should be an integer multiple of the frequency at which the body moves in its periodic motion. Since the lowest frequency of light is emitted when the body drops to the state directly below it, that frequency must correspond to the body's frequency of motion in the classical limit. So, according to the Planck Hypothesis: $$f_n\approx\frac{E_n-E_{n-1}}{h}$$ Where $h$ is Planck's constant. Replacing: $$\dfrac{dE}{dJ}\approx\frac{1}{h}\dfrac{dE}{dJ}(J_n-J_{n-1})$$ Cancelling out $\dfrac{dE}{dJ}$ we obtain $$J_n-J_{n-1}=h$$ From this we obtain that action is quantized $$J=\oint\vec p\cdot d\vec s =\sum_i\oint p_idq_i=nh$$ where $n$ is an integer. But the Old Quantum Condition states that $$\oint p_i dq_i=nh$$ Meaning that the action for every generalized coordinate and momentum is quantized, with the $n$ for each coordinate being a quantum number. How does one go from The total action over one period of a body is quantized to The action for each individual coordinate is quantized? Answer: Okay, I think I finally got it! Thanks bolbteppa! A body which oscillates may oscillate with different periods in different degrees of freedom, or coordinates. One such example is a precessing orbit.
Now the period of motion in coordinate $i$ is $$T_i=\oint\frac{dq_i}{\dot q_i}$$ Which, according to Hamiltonian Mechanics is $$\oint\frac{dq_i}{\left(\dfrac{\partial H}{\partial p_i}\right)}$$ The action over a full period from just one coordinate $J_i$ is $$J_i=\oint p_i dq_i$$ The derivative of $J_i$ with respect to $H$ is $$\dfrac{\partial J_i}{\partial H}=\oint \dfrac{\partial p_i}{\partial H}dq_i=\oint\frac{dq_i}{\left(\dfrac{\partial H}{\partial p_i}\right)}$$ So $T_i$ is equal to $\dfrac{\partial J_i}{\partial H}$, or the frequency $f_i$ is equal to $\dfrac{\partial H}{\partial J_i}$ From this I can just go through the math from my question: $$f_i(n)=\frac{H(J_n)-H(J_{n-1})}{h}$$ $$\dfrac{\partial H}{\partial J_i}=\frac{1}{h}\dfrac{\partial H}{\partial J_i}(J_i(n)-J_i(n-1))$$ $$J_i(n)-J_i(n-1)=h$$ $$J_i=\oint p_i dq_i=nh$$ Thanks for your help!
{ "domain": "physics.stackexchange", "id": 46377, "tags": "quantum-mechanics, hamiltonian-formalism, phase-space, quantization, semiclassical" }
What would happen if the inner core bulged and moved a tiny bit?
Question: The inner core is solid only because of the very high pressure of the outer core and mantle. Say, if the inner core moved a teeny tiny bit, would some of it liquefy and melt? Or does it stay solid? Also, if the inner core bulged, would a bump about 1/8 of the size of the mantle occur on the inner core because of the space taken up?
{ "domain": "earthscience.stackexchange", "id": 2265, "tags": "geophysics, core, pressure" }
What are these structures inside a cut open Allamanda blanchetti?
Question: As suggested by @tyersome in my previous question on the same topic, which I posted yesterday, I have cut open a few flowers of Allamanda blanchetti in order to observe whether any reproductive structures could be found. I also took 2 buds in different stages of opening, in case they had some kind of bud pollination. But it seems there is no difference between the bud and the bloomed flower. Then I found these long, stiff yellow hairs all stuck together, with some transparent secretion on them, and no pollen. I think that these hairs cluster together to form the small star-shaped projection we usually see inside the flower when it is in bloom. And this small green bulb-like thing with a thread-like extension attaching it to the base of the pedicel. And then, on further extending the incision, this small swollen part, almost like an ovary. But on its transverse section I didn't observe any ovules or placenta inside. I don't have a microscope or other high-resolution equipment; I tried looking at it with a magnifying glass but no new details appeared. So what are these structures? Do they enable sexual reproduction for the plant? Links to related reliable websites are also welcome. Answer: The third photo gets to the heart of the matter. Here's an enlargement: The gold-colored areas are the anthers of some of the stamens, the filaments of which are fused with the tissue of the petal. (This is typical of the Apocynaceae, the family to which A. blanchetti belongs; they have epipetalous stamens, that is, the stamens are "adnate", i.e. fused with the petal.) This is the male part of the flower; the anthers have the pollen grains which the plant is trying to get transferred to another flower. Above the knife is the style. This part of the female structure is a hollow tube which runs from the openings in the stigma (the out-of-focus green blob) down to the ovary (not in photo). The central part of the fourth photo is presumably the ovary (missing its style).
The ovary contains the ovules, which will become seeds after fertilization by pollen that sticks to the stigma and grows a tube down the style.
{ "domain": "biology.stackexchange", "id": 10940, "tags": "botany, sexual-reproduction, flowers, dissection" }
Intervals as infinitesimals of same order (Landau & Lifshitz)
Question: I don't understand the following statement in Landau & Lifshitz, Classical Theory of Fields, p. 5: $ds$ and $ds'$ are infinitesimals of same order. [...] It follows that $ds^2$ and $ds'^2$ must be proportional to each other: $$ds^2 = a \, ds'^2.$$ I don't get why the proportionality applies, and why it applies to the squares of the infinitesimals. Answer: First, Landau and Lifshitz stated that $ds$ and $ds'$ approach zero simultaneously, so that there is some hidden variable $x$ such that \begin{equation} \lim_{x\to 0} ds(x) =0 \end{equation} and \begin{equation} \lim_{x\to 0} ds'(x) =0, \end{equation} assuming $ds$ and $ds'$ are continuous functions of $x$. Next, the two are infinitesimals of the same order since the two inertial frames $K$ and $K'$ are equivalent. The frame $K'$ (in which the interval $ds'$ is measured) moves relative to the frame $K$. Suppose $ds'$ were an infinitesimal of greater order than $ds$, i.e., by the usual definition of the order of an infinitesimal, \begin{equation} \lim_{x\to 0} \frac{ds'(x)}{[ds(x)]^n} = A,\quad A\neq 0,\quad n>1, \end{equation} where $A$ can depend only on the magnitude of the relative velocity, not its direction and certainly not the coordinates, for reasons related to the homogeneity of space and time and the isotropy of space. Since $K$ is also moving relative to $K'$ and the principle of relativity holds, by symmetry one ought to have \begin{equation} \lim_{x\to 0} \frac{ds(x)}{[ds'(x)]^n} = A,\quad A\neq 0,\quad n>1, \end{equation} which is absurd. Hence $ds$ and $ds'$ have to be infinitesimals of the same order.
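The "which is absurd" step can be spelled out. A short sketch of the contradiction (my own filling-in of the gap, not L&L's text):

```latex
% Assume ds' is of higher order n > 1 in ds and, by the equivalence of K and K',
% ds is of the same higher order n in ds':
%   ds' \sim A\,(ds)^n  and  ds \sim A\,(ds')^n .
% Substituting the first relation into the second gives
\begin{equation}
  ds \;\sim\; A\,(ds')^{\,n} \;\sim\; A\bigl(A\,(ds)^{n}\bigr)^{n}
     \;=\; A^{\,n+1}\,(ds)^{\,n^{2}} ,
\end{equation}
% and dividing both sides by ds,
\begin{equation}
  1 \;\sim\; A^{\,n+1}\,(ds)^{\,n^{2}-1} \;\longrightarrow\; 0
  \quad\text{as } ds \to 0,\ \text{since } n^{2}-1 > 0 ,
\end{equation}
% a contradiction. Hence n = 1: ds \sim A\,ds', and squaring gives
% ds^2 = a\,ds'^2 with a = A^2 depending at most on the relative speed.
```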
{ "domain": "physics.stackexchange", "id": 23989, "tags": "special-relativity, metric-tensor" }
Are there any other notable assumptions I've missed in my lab write-up?
Question: Let's say I have to write a lab report that includes notable assumptions made that are pertinent, significant and relevant to my experiment. The purpose of my experiment is to determine the permittivity of free space experimentally to within the same order of magnitude as the generally accepted value. My experiment is a Coulomb balance where you set up a balance of two flat capacitor plates (~12 cm x 12 cm) and afterwards introduce a known weight (50 mg). A mirror is attached to the top (freely swinging) arm of the balance which then reflects a laser onto a target, showing the deviation that the weight caused. I then wire the two plates up to an electrical potential that I can control and remove the weight. I increase the voltage until I see the same deviation of the laser, the point being that I can then equate the known gravitational attractive force to the electrical one and calculate out a value for the permittivity of free space. So far for my assumptions I have: a) The capacitor plates' thicknesses are irrelevant (or at least thin enough to be negligible). b) The permittivity of free space resembles that of air closely enough. c) All equipment and apparatus used have no significant internal resistance, conduct electricity fairly perfectly and have no error in labelling or value reporting. d) All equipment and apparatus used have no significant and unwanted inherent magnetic residue. There is a separate part where I account for/write about sources of error. I'm not looking for additional sources of error - correct me if I'm wrong, but a source of error is not the same as an assumption. I can't think of anything else that I should include in this list but I'm fairly certain there must be more assumptions that I haven't thought of. Any thoughts? Answer: As you are only looking for order-of-magnitude accuracy, I do not think it is appropriate to mention assumptions which are unlikely to have a significant effect on the result.
There is no point identifying assumptions just for the sake of having an impressive number of them. Quality (significance) is more valuable than quantity here. a) Thickness of plates. I don't see what effect this might have, so I would say it is not worth mentioning. The usual assumption that $L \gg d$ is worth mentioning. Whether or not this assumption is justified depends on the accuracy of your measurement. For order-of-magnitude accuracy your values justify this assumption. b) Yes. Ideally you would perform the experiment in vacuum, but that would involve too much effort. You might compare standard values for permittivity of air and vacuum to justify this assumption. c) Yes. This relates to measurement of $V$ between the plates. d) I don't see the significance of this, particularly because you are using magnetic damping. If you include this I think you need to explain how you think residual magnetism might cause a significant (order of magnitude) error. Magnetic forces might have an influence when there are currents or moving charges, but not with static electricity. I presume the key equation you used is $F=\frac{\epsilon L^2V^2}{2d^2}$. The squared quantities potentially introduce the largest errors, so you need to concentrate on those, especially those which have the largest % error. I think measurement of $d$ introduced the biggest source of error. You seem to have done a careful experiment. You performed several runs. I expected that you would vary $m$ and $V$ to check that $m \propto V^2$ as your equation predicts. The only other source of error I can think of is possible bending of the pivot arm. This would cause the plates to misalign from parallel. But I expect the arm is quite stiff and $m=50\,\mathrm{mg}$ too small to cause a significant effect, so this is probably not worth mentioning.
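For an order-of-magnitude sanity check, the balance condition can be inverted numerically: equating the electrical attraction $\epsilon L^2V^2/(2d^2)$ to the weight $mg$ gives $\epsilon = 2mgd^2/(L^2V^2)$. A minimal sketch; the plate separation and balancing voltage below are illustrative assumptions, not the OP's data:

```python
# Estimate epsilon_0 from a Coulomb-balance measurement.
# Balance condition:  m*g = epsilon * L**2 * V**2 / (2 * d**2)
# Solved for epsilon: epsilon = 2 * m * g * d**2 / (L**2 * V**2)

g = 9.81      # m/s^2
m = 50e-6     # kg   (the 50 mg rider from the question)
L = 0.12      # m    (plate side length, ~12 cm)
d = 5e-3      # m    (plate separation -- illustrative value)
V = 438.0     # V    (balancing voltage -- illustrative value)

epsilon = 2 * m * g * d**2 / (L**2 * V**2)
print(f"epsilon ~ {epsilon:.2e} F/m")  # ~8.9e-12, same order as the accepted 8.85e-12 F/m
```

Since $\epsilon \propto d^2/V^2$, the relative error doubles for each of these quantities, which is why the answer singles out the measurement of $d$.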
{ "domain": "physics.stackexchange", "id": 38869, "tags": "electricity, experimental-physics, coulombs-law, experimental-technique" }
Unit test for the right kind of cryptography key
Question: I have a quite large unit test case for one class that currently does not exist; I am going to write it after finishing the test case. I am wondering whether my unit test lacks something important or, alternatively, whether it is too complex. I have to describe what the tested class is supposed to do, although some details are left out, and I can answer questions about them if needed. This class is a key matcher, part of a library implementing things like json web signing/json web key. The key matcher will take input from some other classes like JWS processors, that will process json objects. The json representation of, for example, a signed object, can contain a key used to verify the signatures on the object. This key can be given as a json web key or certificate, or can be given by key id. A key set can also be provided; that would mean the key identified by the key id is to be looked up in that set. The protocol does not specify the policy of determining which key to use for verification, so in theory all those fields can be present at once. The key matcher will match keys, trying sources in some predetermined order: an explicit key has priority over a certificate, which has priority over matching in the key set; last is an external source of keys or certificates that is application specific. However, when the matcher looks for keys in different sources and a higher-priority source is found, a lower-priority source is usually not tried even if keys from the higher-priority source do not match. It stems from the fact that if an application sends a signed or encrypted object containing multiple incompatible keys, it is an application error and such cases should not be handled, because they make no sense. This test case tests matching in case of each possible source of key material, and most but not all cases of match failure.
However, it does not test for cases where the matcher would throw NullPointerException or IllegalArgumentException or possibly IllegalStateException, for example if a key type is not specified. I am not sure if I should test for good reaction to bugs in the user of the matcher class. The last test will test if the order of matching is correct. I would like to know if my unit test is too complex, or if it misses something important. /** * Copyright (c) 2016-2017, acme-client developers * All rights reserved. * * Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/ package io.github.webczat.acmeClient.jws.keyMatching; import static org.junit.Assert.assertEquals; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.when; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import java.security.cert.X509Certificate; import java.util.*; import org.junit.Test; import io.github.webczat.acmeClient.jws.KeyType; import io.github.webczat.acmeClient.jws.NoMatchingKeyException; import io.github.webczat.acmeClient.jws.WebKey; import io.github.webczat.acmeClient.jws.WebPublicKey; import io.github.webczat.acmeClient.testUtil.CertificateTestUtils; /** * This class tests key matcher. * * @author webczat */ @SuppressWarnings({ "javadoc" }) public class KeyMatcherTest { /** * Test for matching an explicitly given key. */ @Test public void testExplicitKeyMatchWithAlgorithm() { WebPublicKey key = mock(WebPublicKey.class); when(key.getKeyType()).thenReturn(KeyType.RSA); when(key.getAlgorithm()).thenReturn("test"); assertEquals(new KeyMatcher().setKeyType(KeyType.RSA).setAlgorithm("test").setWebKey(key).match(), key); } /* * Test for explicitly given key match without algorithm on the key. */ @Test public void testExplicitKeyMatchWithoutAlgorithm() { WebPublicKey key = mock(WebPublicKey.class); when(key.getKeyType()).thenReturn(KeyType.RSA); assertEquals(new KeyMatcher().setKeyType(KeyType.RSA).setAlgorithm("test").setWebKey(key).match(), key); } /** * Test explicit key match with bad algorithm. */ @Test(expected = NoMatchingKeyException.class) public void testExplicitKeyMatchWithBadAlgorithm() throws NoMatchingKeyException { WebPublicKey key = mock(WebPublicKey.class); when(key.getKeyType()).thenReturn(KeyType.RSA); when(key.getAlgorithm()).thenReturn("test2"); new KeyMatcher().setKeyType(KeyType.RSA).setAlgorithm("test").setWebKey(key).match(); } /** * Test for explicit key with bad key type.
*/ @Test(expected = NoMatchingKeyException.class) public void testExplicitKeyMatchWithBadType() throws NoMatchingKeyException { WebPublicKey key = mock(WebPublicKey.class); when(key.getKeyType()).thenReturn(KeyType.EC); new KeyMatcher().setKeyType(KeyType.RSA).setAlgorithm("test").setWebKey(key).match(); } /** * Tests for explicit key match with a key validator passing. */ @Test public void testExplicitKeyMatchWithPassingValidator() { WebPublicKey key = mock(WebPublicKey.class); when(key.getKeyType()).thenReturn(KeyType.RSA); assertEquals( new KeyMatcher().setKeyType(KeyType.RSA).setAlgorithm("test").setKeyValidator((k) -> true).setWebKey( key).match(), key); } /** * Test for explicit key match with failing validator. */ @Test(expected = NoMatchingKeyException.class) public void testExplicitKeyMatchWithFailingValidator() throws NoMatchingKeyException { WebPublicKey key = mock(WebPublicKey.class); when(key.getKeyType()).thenReturn(KeyType.RSA); new KeyMatcher().setWebKey(key).setKeyValidator((k) -> false).setAlgorithm("test").setKeyType( KeyType.RSA).match(); } /** * Test for matching a key from the given set of keys, with given key * identifier. * */ @Test public void testSetKeyMatchWithKeyId() { WebPublicKey key1 = mock(WebPublicKey.class), key2 = mock(WebPublicKey.class), key3 = mock(WebPublicKey.class); when(key1.getKeyType()).thenReturn(KeyType.RSA); when(key1.getKeyId()).thenReturn("test"); when(key2.getKeyType()).thenReturn(KeyType.EC); when(key2.getKeyId()).thenReturn("test"); when(key3.getKeyType()).thenReturn(KeyType.RSA); LinkedHashSet<WebKey> keySet = new LinkedHashSet<WebKey>( Arrays.asList(new WebKey[] { key3, key2, key1, key1 })); assertEquals(new KeyMatcher().setKeyId("test").setKeyType(KeyType.RSA).setAlgorithm("test").setWebKeySet( keySet).match(), key1); } /** * Test for no matching keys for key id when matching by key set.
*/ @Test(expected = NoMatchingKeyException.class) public void testSetKeyMatchWithKeyIdAndNoCandidates() throws NoMatchingKeyException { WebPublicKey key = mock(WebPublicKey.class); when(key.getKeyType()).thenReturn(KeyType.RSA); LinkedHashSet<WebKey> keySet = new LinkedHashSet<WebKey>(Arrays.asList(new WebKey[] { key })); new KeyMatcher().setAlgorithm("test").setKeyId("test").setKeyType(KeyType.RSA).setWebKeySet(keySet).match(); } /** * Tests key matching using a key set, with no key id given. */ @Test public void testSetKeyMatchWithoutKeyId() { WebPublicKey key1 = mock(WebPublicKey.class), key2 = mock(WebPublicKey.class); when(key1.getKeyType()).thenReturn(KeyType.RSA); when(key2.getKeyType()).thenReturn(KeyType.EC); LinkedHashSet<WebKey> keySet = new LinkedHashSet<>(Arrays.asList(new WebKey[] { key2, key1 })); assertEquals(new KeyMatcher().setAlgorithm("test").setKeyType(KeyType.RSA).setWebKeySet(keySet).match(), key1); } /** * Test for matching keys from set with no key id and no candidates. */ @Test(expected = NoMatchingKeyException.class) public void testSetKeyMatchWithoutKeyIdAndCandidates() throws NoMatchingKeyException { WebPublicKey key = mock(WebPublicKey.class); when(key.getKeyType()).thenReturn(KeyType.EC); HashSet<WebKey> keySet = new HashSet<>(Arrays.asList(new WebKey[] { key })); new KeyMatcher().setAlgorithm("test").setKeyType(KeyType.RSA).setWebKeySet(keySet).match(); } /** * Test for matching key from external source. 
*/ @Test public void testExternalKeyMatch() { WebPublicKey key1 = mock(WebPublicKey.class), key2 = mock(WebPublicKey.class); when(key1.getKeyType()).thenReturn(KeyType.RSA); when(key2.getKeyType()).thenReturn(KeyType.EC); KeyProvider kp = mock(KeyProvider.class); when(kp.lookupKey("test")).thenReturn(Arrays.asList(new WebKey[] { key2, key1 })); assertEquals(new KeyMatcher().setKeyType(KeyType.RSA).setAlgorithm("test").setKeyId("test").setKeyProvider( kp).match(), key1); } /** * Test matching keys from external source when no keys match. */ @Test(expected = NoMatchingKeyException.class) public void testExternalKeyMatchWithNoCandidates() throws NoMatchingKeyException { WebPublicKey key = mock(WebPublicKey.class); when(key.getKeyType()).thenReturn(KeyType.EC); KeyProvider kp = mock(KeyProvider.class); when(kp.lookupKey("test")).thenReturn(Arrays.asList(new WebKey[] { key })); new KeyMatcher().setKeyType(KeyType.RSA).setAlgorithm("test").setKeyId("test").setKeyProvider(kp).match(); } /** * Test for external key matching when no key id specified, it should not * work at all. */ @Test(expected = NoMatchingKeyException.class) public void testExternalKeyMatchWithNoKeyId() throws NoMatchingKeyException { WebPublicKey key = mock(WebPublicKey.class); when(key.getKeyType()).thenReturn(KeyType.RSA); KeyProvider kp = mock(KeyProvider.class); when(kp.lookupKey("test")).thenReturn(Arrays.asList(new WebKey[] { key })); new KeyMatcher().setKeyType(KeyType.RSA).setAlgorithm("test").setKeyProvider(kp).match(); } /** * Test for certificate matching when cert chain is explicitly given. */ @Test public void testExplicitCertificateMatch() { List<X509Certificate> certs = CertificateTestUtils.newChain(3, null).getCertificateChain(); assertEquals(new KeyMatcher().setAlgorithm("test").setKeyType(KeyType.RSA).setCertificateChain(certs).match(), certs); } /** * Test explicit certificate match that fails. 
*/ @Test(expected = NoMatchingKeyException.class) public void testExplicitCertificateMatchFailure() throws NoMatchingKeyException { List<X509Certificate> certs = CertificateTestUtils.newChain(3, null).getCertificateChain(); new KeyMatcher().setKeyType(KeyType.EC).setAlgorithm("test").setCertificateChain(certs).match(); } /** * Test SHA256 fingerprint matching. */ @Test public void testFingerprintCertificateMatchWithSha256() throws Exception { List<X509Certificate> certs = CertificateTestUtils.newChain(3, null).getCertificateChain(); byte[] fingerprint = MessageDigest.getInstance("SHA-256").digest(certs.get(0).getEncoded()); CertificateProvider cp = mock(CertificateProvider.class); when(cp.lookupCertificateBySha256Fingerprint(fingerprint)).thenReturn(certs); assertEquals(new KeyMatcher().setKeyType(KeyType.RSA).setAlgorithm("test").setSha256Fingerprint( fingerprint).setCertificateProvider(cp).match(), certs); } /** * Test certificate matching using SHA1 fingerprint. */ @Test public void testFingerprintCertificateMatchWithSha1() throws Exception { List<X509Certificate> certs = CertificateTestUtils.newChain(3, null).getCertificateChain(); byte[] fingerprint = MessageDigest.getInstance("SHA1").digest(certs.get(0).getEncoded()); CertificateProvider cp = mock(CertificateProvider.class); when(cp.lookupCertificateBySha1Fingerprint(fingerprint)).thenReturn(certs); assertEquals(new KeyMatcher().setKeyType(KeyType.RSA).setAlgorithm("test").setCertificateProvider( cp).setSha1Fingerprint(fingerprint).match(), certs); } /** * Test for matching by sha256 fingerprint when key type is invalid.
*/ @Test(expected = NoMatchingKeyException.class) public void testFingerprintCertificateMatchWithSha256AndBadKeyType() throws Exception { List<X509Certificate> certs = CertificateTestUtils.newChain(3, null).getCertificateChain(); byte[] fingerprint = MessageDigest.getInstance("SHA-256").digest(certs.get(0).getEncoded()); CertificateProvider cp = mock(CertificateProvider.class); when(cp.lookupCertificateBySha256Fingerprint(fingerprint)).thenReturn(certs); new KeyMatcher().setKeyType(KeyType.EC).setCertificateProvider(cp).setAlgorithm("test").setSha256Fingerprint( fingerprint).match(); } /** * Test for sha1 fingerprint matching when wrong key type is given. */ @Test(expected = NoMatchingKeyException.class) public void testFingerprintCertificateMatchWithSha1AndBadKeyType() throws Exception { List<X509Certificate> certs = CertificateTestUtils.newChain(3, null).getCertificateChain(); byte[] fingerprint = MessageDigest.getInstance("SHA1").digest(certs.get(0).getEncoded()); CertificateProvider cp = mock(CertificateProvider.class); when(cp.lookupCertificateBySha1Fingerprint(fingerprint)).thenReturn(certs); new KeyMatcher().setCertificateProvider(cp).setKeyType(KeyType.EC).setAlgorithm("test").setSha1Fingerprint( fingerprint).match(); } /** * Test for fingerprint matching with no fingerprints set. */ @Test(expected = NoMatchingKeyException.class) public void testFingerprintCertificateMatchWithNoFingerprint() { new KeyMatcher().setKeyType(KeyType.RSA).setAlgorithm("test").setCertificateProvider( mock(CertificateProvider.class)).match(); } /** * Test for matching when no parameters are set. */ @Test(expected = NoMatchingKeyException.class) public void testKeyMatchWithNoData() throws NoMatchingKeyException { new KeyMatcher().setKeyType(KeyType.RSA).setAlgorithm("test").match(); } /** * Test for matching order.
*/ @Test public void testMatchingOrder() throws Exception { WebPublicKey key1 = mock(WebPublicKey.class); when(key1.getKeyType()).thenReturn(KeyType.RSA); KeyMatcher km = new KeyMatcher(); km.setKeyType(KeyType.RSA).setAlgorithm("TEST").setWebKey(key1); List<X509Certificate> certs1 = CertificateTestUtils.newChain(1, null).getCertificateChain(); km.setCertificateChain(certs1); WebPublicKey key2 = mock(WebPublicKey.class); when(key2.getKeyType()).thenReturn(KeyType.RSA); when(key2.getKeyId()).thenReturn("test"); Set<WebKey> keySet = new HashSet<>(); keySet.add(key2); km.setWebKeySet(keySet); km.setKeyId("test"); List<X509Certificate> certs2 = CertificateTestUtils.newChain(1, null).getCertificateChain(); byte[] fingerprint1 = MessageDigest.getInstance("SHA-256").digest(certs2.get(0).getEncoded()); km.setSha256Fingerprint(fingerprint1); List<X509Certificate> certs3 = CertificateTestUtils.newChain(1, null).getCertificateChain(); byte[] fingerprint2 = MessageDigest.getInstance("SHA1").digest(certs3.get(0).getEncoded()); km.setSha1Fingerprint(fingerprint2); CertificateProvider cp = mock(CertificateProvider.class); when(cp.lookupCertificateBySha256Fingerprint(fingerprint1)).thenReturn(certs2); when(cp.lookupCertificateBySha1Fingerprint(fingerprint2)).thenReturn(certs3); km.setCertificateProvider(cp); WebPublicKey key3 = mock(WebPublicKey.class); when(key3.getKeyType()).thenReturn(KeyType.RSA); KeyProvider kp = mock(KeyProvider.class); when(kp.lookupKey("test")).thenReturn(Arrays.asList(new WebKey[] {key3})); km.setKeyProvider(kp); assertEquals(km.match(), key1); km.setWebKey(null); assertEquals(km.match(), certs1); km.setCertificateChain(null); assertEquals(km.match(), key2); km.setWebKeySet(null); assertEquals(km.match(), certs2); km.setSha256Fingerprint(null); assertEquals(km.match(), certs3); km.setSha1Fingerprint(null); assertEquals(km.match(), key3); } } Answer: I really like the small test methods.
But without seeing the actual implementation, it is very hard to tell whether a test case makes sense or does what it should do. Small improvements: Split your test methods into three blocks, given-when-then, and use an empty line between them. It can help a lot, not always, but I recommend doing it. It's like using your indicator: even though no one is around (= quite an easy test case), it's a good habit to always use it, so you will use it when it's actually needed. You can make WebPublicKey an instance variable and use the @Mock annotation. In the setup (@Before), you can use MockitoAnnotations.initMocks(this);. That way you can save the first line of every test. I have a hard time understanding what match() does, or what it should do (= the intention is not clear). Why must match() equal key? When I read matches(), I expect a boolean to be returned. Shouldn't it be something like findMatchingKey() or something? The test prefix of your test cases isn't needed; it was used "back in the day", before annotations were a thing in Java/JUnit. Instead of testExplicitKeyMatchWithAlgorithm, you can write explicitKeyMatchesWithAlgorithm. The JavaDoc for the methods: first of all, most of the time it is JavaDoc, but not always. 2nd: I'm 99% sure no one will ever read those Javadocs (do you even generate those for test cases?). 3rd: "Test for matching an explicitly given key." vs "testExplicitKeyMatchWithAlgorithm". So, you have a comment, a method name, and the actual code. Rhetorical question: which one is true? The JavaDoc does not talk about the algorithm, the method name does. The "test" algorithm: I usually declare those explicitly as a variable in the test case, so the reader sees where it's used (WebPublicKey and KeyMatcher). There's a lot of repetition of the WebPublicKey instantiation (RSA KeyType and test algorithm); you might want to add a static helper method for that, something like rsaWithTestAlgorithmKeyMatcher(), or even add a constant.
For the other creations, I'd provide a method like keyMatcher(keyType: KeyType, algorithm: String): KeyMatcher. testMatchingOrder: Now, that's quite the confusing test method. You wrote that the implementation does not exist. The actual test-driven approach would be "write a failing test case, implement, refactor". What's important is that you write one failing test case, and you only implement what is needed for that test case to pass ("Know when to stop."), so you do not implement too much. Another important question is: "What do I need to change to make my test fail?" I mention all that especially because of the last test case. Do you actually have to use four different keys; wouldn't two be enough to ensure the correct return value? If not, then something's different, and you should consider writing two different test cases. testMatchingOrder 2: Also, to pick up the other points I've mentioned, especially the helper methods and the given-when-then blocks: those, applied here, should make this method a lot easier to understand. Besides that, there's nothing wrong with writing keyMatcher instead of km, certificateProvider instead of cp, and so on. It certainly would have helped me. testMatchingOrder 3: After reading it, I still do not understand the expected behavior of the KeyMatcher, especially the last part. You call match and expect key1. Why? Then you set the webKey to null (why?), and then you expect certs1 (why?). The setup of the test doesn't really help to understand it either. Hope that helps,...
{ "domain": "codereview.stackexchange", "id": 25492, "tags": "java, unit-testing, cryptography" }
Why use solids for soundproofing when sound waves travel through them faster?
Question: I'm doing a soundproofing experiment where I test out different materials with different specific heat capacities to see how SHC affects how much sound level is emitted. But I also know that sound travels through solids faster than through liquids or gases, so theoretically would a liquid or gas be better at soundproofing, since sound can't travel as fast through them? Or does the speed of sound not have any effect on how much sound we can hear? Answer: The speed of sound is unrelated to the sound-deadening characteristics of materials, except as follows: The speed of sound in any given material is related to the acoustic impedance of that material. When a sound wave travels from one material (like air) into another (like a 2x4 stud wall sheathed with 3/4" drywall) where the acoustic impedances of the two materials differ greatly, the resulting impedance mismatch will cause some of the incident sound wave to be reflected off the interface and therefore not penetrate the wall. This mismatch effect is one tool used by acoustics engineers to design walls that stop the transmission of sound waves.
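The impedance-mismatch effect can be quantified: for normal incidence, the fraction of sound intensity reflected at an interface is $R = \left(\frac{Z_2 - Z_1}{Z_2 + Z_1}\right)^2$ with $Z = \rho c$. A rough sketch; the drywall-like density and sound speed below are illustrative textbook-order values, not measured data:

```python
def acoustic_impedance(rho, c):
    """Characteristic acoustic impedance Z = rho * c (in rayl = Pa*s/m)."""
    return rho * c

def reflection_coefficient(z1, z2):
    """Intensity reflection coefficient at normal incidence."""
    return ((z2 - z1) / (z2 + z1)) ** 2

z_air = acoustic_impedance(1.2, 343.0)      # air: ~412 rayl
z_wall = acoustic_impedance(700.0, 2000.0)  # gypsum-like solid (illustrative values)

R = reflection_coefficient(z_air, z_wall)
print(f"fraction of incident sound energy reflected: {R:.3f}")  # prints 0.999
```

Because the impedances differ by more than three orders of magnitude, nearly all of the incident energy is reflected, which is why a dense solid wall can be an effective sound barrier regardless of the high sound speed inside it.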
{ "domain": "physics.stackexchange", "id": 65948, "tags": "waves, acoustics" }
Is there any abbreviation for hydrates?
Question: I'm getting tired of writing out the full formulas for hydrates, adding $\ce{.$x$\,H2O}$ to all formulas. I'm wondering if there's a standard shorthand for these. Answer: There is, as far as I know, no shorthand for that, but you might be able to omit it completely in a first order approximation. The closest might be adding $\ce{(aq)}$ when working in an aqueous environment. Adding $\ce{.$x$\,H2O}$ to the formula usually indicates that it is a solid and that there are $x$ water molecules enclosed in the crystal structure. It depends on what you are looking at, i.e. which kind of reaction, what you might be able to omit. For example, when you are describing a drying process, it is not possible: $$\ce{\overset{blue}{CuSO4.5H2O} ->[\Delta] \overset{white}{CuSO4} + 5H2O}$$ When you are working in aqueous solution it is usually not important, as there is much more water around it, hence $$\ce{\overset{blue}{CuSO4.5H2O} + Na2S -> \overset{black}{CuS} v + 2Na+ + SO4^{2-} + 5H2O}$$ is essentially the same as $$\ce{CuSO4~(aq) + Na2S~(aq) -> CuS~(s) v + 2Na+ ~(aq) + SO4^{2-}~(aq)}$$ If you are describing a redox reaction, it might be the same, hence the following should be fine $$\ce{CuSO4~(aq) + Zn~(s) -> Cu~(s) v + Zn^{2+}~(aq) + SO4^{2-}~(aq)}$$ Often you can omit the $\ce{(aq)}$ or $\ce{(s)}$ altogether. So depending on what kind of chemistry you are talking about, some information can be implied and therefore be omitted. If you add an example of what kind of reactions/substances you are working with, I can expand my answer to take that into account.
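The $x\,\ce{H2O}$ carries real stoichiometric weight for a solid, which is why it cannot be silently dropped in cases like the drying reaction above. As a quick worked illustration (using standard atomic masses):

```python
# Mass fraction of crystal water in CuSO4 . 5 H2O
M = {"Cu": 63.55, "S": 32.07, "O": 16.00, "H": 1.008}  # atomic masses, g/mol

m_cuso4 = M["Cu"] + M["S"] + 4 * M["O"]  # anhydrous salt: 159.62 g/mol
m_water = 5 * (2 * M["H"] + M["O"])      # five waters of crystallization
m_hydrate = m_cuso4 + m_water

water_fraction = m_water / m_hydrate
print(f"{water_fraction:.1%} of the hydrate's mass is water")  # prints 36.1%
```

So heating the blue pentahydrate to the white anhydrous salt loses over a third of the sample's mass, a difference that the bare formula $\ce{CuSO4}$ would hide.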
{ "domain": "chemistry.stackexchange", "id": 2830, "tags": "water, notation" }
How does the bladder transition from releasing urine at night to being able to hold urine at night?
Question: I wonder about this. What's the biology of the transition from wetting the bed at night to holding urine at night? Is there a chemical change in how the bladder muscles contract? Answer: I'm not sure how much the bladder has a role in this. Mostly, the reduced frequency of urination during the night is the result of less urine production by the kidneys. This is in response to a hormone called ADH, antidiuretic hormone (it has other names as well). ADH acts on the distal convoluted tubules of the nephrons in the kidney to fine-tune the reabsorption of water in the tubule, bringing it back into the circulatory system. This hormone is released in response to a few factors, including blood volume. But its release is also cyclic over a 24-hour period. https://www.ncbi.nlm.nih.gov/pubmed/202988 It can happen that this daily cycle of ADH release intensity gets out of adjustment and you urinate more frequently at night, perhaps more than in the day. This has happened to me. Also, according to my GP, it is a frequent cause of bed-wetting in children.
{ "domain": "biology.stackexchange", "id": 6647, "tags": "human-physiology, organs" }
rviz: how to show video from cam in "Camera" but not "Image"
Question: Hi, I am new to ROS and would like to visualize some results. I connected an usb camera and run rviz. But I can only see the video if I add "Image" display type. But if I add "Camera" display type, it shows nothing. How can I let it show in "Camera"? Thanks ! Originally posted by KWL on ROS Answers with karma: 3 on 2016-07-29 Post score: 0 Answer: The image displays just an image. The camera display will display the image with renderings of other data in the scene rendered on top of it, but in order to do that it requires /tf data that creates a tree between the "fixed frame" (set from the "global options" group in the "Displays" panel) and the frame that the image is in (this is determined by the value of the "header->frame_id" that is published within each image message). So probably you don't have /tf data to show the image in the camera display. You can work around this by setting the fixed frame to be the same thing as the image's frame id. Originally posted by William with karma: 17335 on 2016-07-29 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by KWL on 2016-08-02: Thanks for your answer! I will try to work around this! Comment by lotharchris on 2020-01-11: This workaround worked for me, but when I add "Robot Model", it has an error unless I change the fixed frame to base_footprint, which breaks the camera image. How do I create /tf data for the camera ?
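For the follow-up comment about creating /tf data for the camera: one common workaround in ROS 1 (a sketch; the frame names map and usb_cam are placeholders for your actual fixed frame and whatever your camera driver sets as header.frame_id) is to publish a static identity transform with the tf package:

```shell
# static_transform_publisher arguments:
#   x y z yaw pitch roll parent_frame child_frame period_in_ms
rosrun tf static_transform_publisher 0 0 0 0 0 0 map usb_cam 100
```

With that transform published, the Camera display can resolve the image's frame against the fixed frame without changing the fixed frame itself.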
{ "domain": "robotics.stackexchange", "id": 25403, "tags": "rviz" }
Why don't we include the adhesive and cohesive force while calculating rise in a capillary tube?
Question: The contact angle of a liquid-solid interface is explained by saying that the liquid surface must be perpendicular to the resultant of the adhesive, cohesive, and gravitational forces acting on it, since it cannot sustain shear stresses. However, once the contact angle is determined, the cohesive and adhesive forces are always omitted from the discussion. For instance, one way of calculating the rise of water in a capillary tube is to equate the force due to the surface tension to the weight of the liquid risen. $$2\pi R T \cos \theta = \pi R^2 \rho g h $$ which gives $$h = \frac{2T\cos \theta}{R\rho g}$$ where $R$ is the radius of the tube, $T$ the surface tension and $\theta$ the contact angle. However, why are adhesive and cohesive forces excluded from this discussion? As far as I'm aware, the adhesive force is the main reason the liquid rises (or falls) in the capillary, not the surface tension. Answer: Adhesive forces are accounted for when calculating capillary height. My guess is that you think they are not because you read, somewhere, a discussion in which adhesive forces were used to calculate a contact angle, then the contact angle was used to calculate the height. In that case, adhesive forces are being used to calculate the height. They are simply being used through the intermediary of the contact angle. If you want, you can do the calculation like this: The water height will rise so that the energy of the system is minimized. Let us assume that the shape of the surface of the water in the capillary is fixed and focus only on the height of the column. There is gravitational energy to account for as well as surface energy between the water and the column.
If we raise the water height by an amount $\mathrm{d}h$, we have increased the gravitational energy by $\rho g A h \mathrm{d}h$, where $h$ is the height of the bottom of the surface of the water, A is the cross-sectional area of the capillary, $\rho$ is the density of the water, and $g$ is gravitational acceleration. If the surface energy per unit area of contact between the water and the capillary is $-\gamma$, we reduce the energy by $2\pi r \gamma \mathrm{d}h$ when raising the height by $\mathrm{d}h$. The energy is minimized when these two are equal, $$\rho g \pi r^2 h = 2\pi r \gamma$$ or $$h = \frac{2 \gamma}{\rho g r}$$ If we take $\gamma = \cos\theta T$ we reproduce your expression.
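Plugging representative numbers into the final expression gives a feel for the scale; the water properties and tube radius below are illustrative assumptions, not values from the question:

```python
import math

# Capillary rise h = 2*T*cos(theta) / (R * rho * g), as derived above.
# Values below are illustrative: water at room temperature in a
# 0.5 mm radius tube with perfect wetting.
T = 0.0728    # N/m, surface tension of water near 20 C
theta = 0.0   # rad, contact angle (assumed perfect wetting)
R = 0.5e-3    # m, tube radius (assumed)
rho = 1000.0  # kg/m^3, density of water
g = 9.81      # m/s^2

h = 2 * T * math.cos(theta) / (R * rho * g)
print(round(h * 1000, 1), "mm")   # 29.7 mm
```

A rise of a few centimetres in a half-millimetre tube matches everyday observation, which is a useful sanity check on the formula.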
{ "domain": "physics.stackexchange", "id": 25838, "tags": "newtonian-mechanics, fluid-statics, surface-tension, capillary-action, adhesion" }
Does the body have a gate control for pain
Question: I understand it is not the most accurate source, but I recall a House episode where he claimed the body had a control mechanism for pain in which only the most painful thing was felt. Is that true? And if so, how does the body control which injury we feel, and could we use this to our advantage? Answer: Pain receptors, or more precisely pain signal propagation, can be inhibited in several ways. One is that, for example, fine touch receptors (the mechanoreceptor in the illustration) inhibit propagation of the pain signal in the spinal cord (a lot of pain processing happens in the spinal cord): That is why humans often rub the skin around or near an injury, whether it is in a joint or something more superficial. You should find more information about such control in any textbook on pain. See this page for a short primer. Next, pain propagation is modulated -- still in the spinal cord -- by descending pathways from the brain and brain stem. See this page for more, or consult Kandel's textbook. Now, I don't know whether or not it has been identified how or where the brain "selects" which pain to pay attention to. My hypothesis is that it happens by competition, e.g. several signals compete by inhibiting each other via some mechanism, probably related to stress-induced analgesia.
{ "domain": "biology.stackexchange", "id": 3874, "tags": "brain, pain" }
Stresses in a camera mount
Question: I'm building a mount for a small IR sensor which will look very similar to this, minus the large camera. I am wondering whether or not it would be necessary to do calculations to ensure that the structure is safe and will not collapse. The potential problems I see are the bending moment due to the weight of the top motor. Would I calculate this and compare to the yield stresses in the bolts of the bracket? Potential buckling? Would it be worthwhile to use software such as Ansys to run a stress simulation? The weight of the camera is very small and would not affect any calculations. I am using aluminium. Thanks Answer: If it's not just a hobby project, you should at least hand-check the major stresses and forces. When doing that, instead of using the weight of the motor alone, you should consider the mass times the maximum expected acceleration. The acceleration depends on your operational limit cases, i.e., drop, touchdown, or crash scenarios. If you do not foresee any reasonable scenarios, you could still apply rough factors to the weight (the more serious the application, the higher the factor). Finally, a safety factor of 1.25 or 1.5 must be applied. Without any checks, if it's a hobby project, you can of course go ahead, experience the world, and tweak the design as you go.
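As a rough numeric sketch of the hand check the answer describes: the motor mass and the shock case below are hypothetical placeholders, and only the 1.5 safety factor comes from the answer.

```python
# Rough sketch of the hand check described in the answer:
# design force = mass * maximum expected acceleration * safety factor.
# The motor mass and the 5g shock case are hypothetical placeholders;
# only the 1.5 safety factor comes from the answer.
g = 9.81              # m/s^2
motor_mass = 0.25     # kg (assumed)
max_accel = 5 * g     # m/s^2, e.g. a hard touchdown or drop (assumed)
safety_factor = 1.5   # final factor suggested in the answer

design_force = motor_mass * max_accel * safety_factor   # N
print(round(design_force, 2), "N")   # 18.39 N
```

This design force, rather than the static weight, is what would be compared against the bolt and bracket yield limits.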
{ "domain": "engineering.stackexchange", "id": 2020, "tags": "mechanical-engineering, stresses, aluminum" }
Meson as hadron and boson
Question: On the Wikipedia page about hadrons, the following image appears: I can understand why the intersection between hadrons and fermions is baryons, as a way of saying a baryon is a kind of hadron composed of several quark fermions. However, what is the meaning of the intersection between hadrons and bosons, labeled in the picture as mesons? If I understand correctly, a meson consists of one quark and one antiquark, nothing related to any boson. Answer: Since a meson is composed of two spin-1/2 particles, its total spin must be an integer, which makes it a boson.
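The spin-addition rule behind this one-line answer can be sketched numerically; combining two spin-1/2 constituents can only give integer total spin:

```python
from fractions import Fraction

# Standard angular-momentum addition: the total spin s of two particles
# ranges from |s1 - s2| to s1 + s2 in integer steps.
def total_spins(s1, s2):
    lo, hi = abs(s1 - s2), s1 + s2
    return [lo + k for k in range(int(hi - lo) + 1)]

half = Fraction(1, 2)
spins = total_spins(half, half)        # quark + antiquark
print([int(s) for s in spins])         # [0, 1]

# Every possible total spin is an integer, which is the boson criterion.
assert all(s.denominator == 1 for s in spins)
```

The two outcomes correspond to the pseudoscalar (spin 0) and vector (spin 1) meson families.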
{ "domain": "physics.stackexchange", "id": 73300, "tags": "particle-physics, standard-model, bosons, spin-statistics, mesons" }
Resistor Program Using Functional Decomposition
Question: The purpose of this program is to write a code that will give a user a list or menu of choices that show colors and their values according to some data and prompt them to enter input to determine the value of a resistor. I have completed my source code for this program and it runs perfectly, but I would like some feedback on how to improve this code before I submit it. The picture above shows the colors and the data values associated with each. My professor has given me the liberty to use either 1 or 2 dimensional arrays. I chose 1 dimensional arrays. #include <iostream> #include <cmath> #include <string> using namespace std; //Function prototypes int color(string[], const int); double time(string[], double[], const int, int, int, int); int tolerance(string[], double[], const int); int main() { // Variables const int band = 10, multiplier = 12, toleranceSize = 4; // Represents size of each string array int a, b, c; // Holds the 3 colors from band which are entered by user int output; // Contains tolerance double product; // Stores result from calculation // Defined string arrays string BAND_COLOR_CODE[band] = { "black", "brown", "red", "orange", "yellow", "green", "blue", "violet", "grey", "white" }; string MULTIPLIER_COLOR_CODE[multiplier] = { "black", "brown", "red", "orange", "yellow", "green", "blue", "violet", "grey", "white", "gold", "silver" }; string TOLERANCE_COLOR_CODE[toleranceSize] = { "brown", "red", "gold", "silver" }; // Arrays which hold numeric values for the string arrays double multiplierArray[multiplier] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0.1, 0.01 }; double toleranceArray[toleranceSize] = { 1, 2, 5, 10 }; // Loop to show each color and the number it corresponds to for (int i = 0; i < band; i++) { cout << BAND_COLOR_CODE[i] << " == " << multiplierArray[i] << endl; } // Call the function three times and store the information (i.e.: color and its value) in the variables a, b, and c a = color(BAND_COLOR_CODE, band); b = 
color(BAND_COLOR_CODE, band); c = color(BAND_COLOR_CODE, band); cout << "\n"; //Call the function to store the result from the calculation product = time(MULTIPLIER_COLOR_CODE, multiplierArray, multiplier, a, b, c); cout << "\n"; //Call the function to store the tolerance output = tolerance(TOLERANCE_COLOR_CODE, toleranceArray, toleranceSize); cout << "\n"; //Display information about the resistor cout << "This resistor has " << product << " ohms with " << output << " % tolerance." << endl; cout << "\n"; system("PAUSE"); return 0; } /** * Pre-Condition: This function accepts a string array and its size. * * Post-Condition: It sets each color to a particular numeric value * * and converts it into an int to return after accepting user input. * */ int color(string BAND_COLOR_CODE[], const int band) { // Variables char code[10]; // Represents the colors from the string array BAND_COLOR_CODE int num = 0; // Holds numeric value from 0-9 for color band static int j = 0; // Stores user input for color ++j; // Increments to allow user to input colors in succession cout << "\n"; // Prompt for user input cout << "Enter a color for band " << j << ": > "; cin.getline(code, 10); // Loop to take care of the case of letters for (int i = 0; i < 10; i++) code[i] = tolower(code[i]); // Loop to set user input to a number for (int i = 0; i < band; i++) { if(code == BAND_COLOR_CODE[i]) { switch (i) { case 0: { num = 0; break; } case 1: { num = 1; break; } case 2: { num = 2; break; } case 3: { num = 3; break; } case 4: { num = 4; break; } case 5: { num = 5; break; } case 6: { num = 6; break; } case 7: { num = 7; break; } case 8: { num = 8; break; } case 9: { num = 9; break; } default: { cout << "ERROR!" << endl; } } } } return num; } /** * Pre-Condition: This function accepts two arrays, their size, and three * * variables that will hold colors. * * Post-Condition: It accepts user input and converts it into a int to * * use in data calculation. It returns a product. 
* */ double time(string MULTIPLIER_COLOR_CODE[], double multiplierArray[], const int multiplier, int a, int b, int c) { // Variables char code[10]; // Represents the colors from the string array BAND_COLOR_CODE double total = 0; // Overall sum of a, b, and c double num = 0; // Holds numeric values for multiplier double value; // Stores value of resistor // Loop to show colors in the multiplier for (int i = 0; i < multiplier; i++) { cout << MULTIPLIER_COLOR_CODE[i] << " == " << multiplierArray[i] << endl; } cout << "\n"; // Prompt for user input cout << "Enter a color for the multiplier: > "; cin.getline(code, 10); // Loop to take care of case for letters for (int i = 0; i < 10; i++) code[i] = tolower(code[i]); // Loop to set user input to a number for (int i = 0; i < multiplier; i++) { if (code == MULTIPLIER_COLOR_CODE[i]) { switch (i) { case 0: { num = 0; break; } case 1: { num = 1; break; } case 2: { num = 2; break; } case 3: { num = 3; break; } case 4: { num = 4; break; } case 5: { num = 5; break; } case 6: { num = 6; break; } case 7: { num = 7; break; } case 8: { num = 8; break; } case 9: { num = 9; break; } case 10: { num = 0.1; break; } case 11: { num = 0.01; break; } default: { cout << "ERROR!" << endl; } } } } total += (a * 100) + (b * 10) + c; value = total * pow(10, num); return value; } /** * Pre-Condition: This function accepts two arrays and their size. * * Post-Condition: It gets user input and converts it into an int * * to be returned. 
* */ int tolerance(string TOLERANCE_COLOR_CODE[], double toleranceArray[], const int toleranceSize) { // Variables char code[10]; // Represents the colors from the string array BAND_COLOR_CODE int num = 0; // Holds numeric values for tolerance // Loop to set user input to a number for (int i = 0; i < toleranceSize; i++) { cout << TOLERANCE_COLOR_CODE[i] << " == " << toleranceArray[i] << endl; } cout << "\n"; // Prompt for user input cout << "Enter a color for the tolerance: > "; cin.getline(code, 10); // Loop to take care of case for letters for (int i = 0; i < 10; i++) code[i] = tolower(code[i]); // Loop to set user input to a number for (int i = 0; i < toleranceSize; i++) { if (code == TOLERANCE_COLOR_CODE[i]) { switch (i) { case 0: { num = 1; break; } case 1: { num = 2; break; } case 2: { num = 5; break; } case 3: { num = 10; break; } default: { cout << "ERROR!" << endl; } } } } return num; } Answer: I have found a couple of things that could help you improve your code. Don't abuse using namespace std Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid. It's not necessarily wrong to use it, but you should be aware of when not to (as when writing code that will be in a header). Don't use system("cls") There are two reasons not to use system("cls") or system("pause"). The first is that it is not portable to other operating systems, which you may or may not care about now. The second is that it's a security hole, which you absolutely must care about. Specifically, if some program is defined and named cls or pause, your program will execute that program instead of what you intend, and that other program could be anything. First, isolate these into separate functions cls() and pause(), and then modify your code to call those functions instead of system. Then rewrite the contents of those functions to do what you want using C++.
Combine related data The color strings and the values they represent are tightly bound, but not in your data structures. Consider instead defining an object to contain both a color name and the associated value. That way it's much clearer that they are related, which helps in both maintenance and understanding of the code. Use a menu object or at least a common menu function In a number of places in your code, you have something like a menu. Your code presents a prompt (a list of values) and then asks the user to pick one. Rather than repeating that code in many places, it would make sense to make it generic. Only the prompt strings actually change; the underlying logic of presenting the choices and asking for input is the same. It looks like you're a beginning programmer, and so perhaps you haven't learned about objects yet, but this kind of repeated task with associated data is really well-suited to object-oriented programming, and that's something that C++ is very good at expressing. Use const where practical Your string labels are not and should not be altered by the program, so they should be declared const. In general, whenever you are writing a variable or function prototype, look for places you can use const. Localize data to where it is used The MULTIPLIER_COLOR_CODE and related strings are never actually used in main except to pass to other routines, which suggests that they should be moved to those routines instead. Make them static const and move them where they belong. Think of the user of your code Not all resistors have five bands. In fact, in practice most only have four. You might want to consider that and allow the user to tell the program how many bands are on the resistor. It might also be a nice feature to allow an uninformed user to still get the right value even if they enter the bands in reverse order.
{ "domain": "codereview.stackexchange", "id": 13123, "tags": "c++" }
Finding the number of words from a user input string or a text file
Question: I'm new to Java and am trying to solve the beginner's problem of finding out the number of words in a user-input string or a text file. I was just wondering if there are any alternatives to any of the steps that can improve efficiency as well as simplicity. import java.io.BufferedReader; import java.io.FileNotFoundException; import java.io.FileReader; import java.util.Scanner; public class WordsCount{ public static void main(String[] args) { try(Scanner sc1 = new Scanner(System.in)){ String userInputOrTextFile = sc1.next(); if (userInputOrTextFile.equalsIgnoreCase("userInput")){ WordsCount.countUserInput(); } else { WordsCount.countTextFile(); } } } private static void countUserInput() { try(Scanner sc1 = new Scanner(System.in)){ String s1 = sc1.nextLine(); System.out.println(s1.split(" ").length + " words in the user input sentence." ); } } private static void countTextFile() { int countingWords = 0; try(Scanner sc1 = new Scanner(new BufferedReader(new FileReader("xanadu.txt")))){ while(sc1.hasNext()){ sc1.next(); countingWords++; } } catch (FileNotFoundException e){ System.out.println("File not found"); } System.out.println(countingWords + " words are in the xanadu.txt file"); } } Answer: Double Scanner usage You use the scanner multiple times; this can create a few different bugs when user input is fed into the program in "userInput" mode. You should pass the scanner to countUserInput, and remove the try-with-resources block from that method to prevent System.in from being closed. Swallowing FileNotFoundException without printing its message When opening the input file, you are swallowing a FileNotFoundException without printing its message.
This exception is also thrown under the following conditions: when you try to open a directory as a file, when you open a file you don't have read permissions for, or when the file doesn't exist. The best way to print a message would be: catch(FileNotFoundException e){ System.err.println("Error opening: " + e.getMessage()); } This would show up as: Error opening: C:/test.txt (Permission denied) Bugs with large numbers of words Your word counter overflows once it exceeds 2147483647 words; by replacing the int with a long, you can support up to 9223372036854775807, an increase by a factor of 2^32. Use a package name While this may or may not have been purposely left out, a package name groups the code into an organizational unit where everything can be found together; this also allows calls from other projects to your project.
{ "domain": "codereview.stackexchange", "id": 18283, "tags": "java" }
Why does chlorine have a higher electron affinity than fluorine?
Question: Since fluorine has its valence electrons in the n=2 energy level, and since chlorine has its valence electrons in the n=3 energy level, one would initially expect that an electron rushing towards fluorine would release more energy, as it would land in the n=2 energy level, whereas in chlorine, the electron would land only in the n=3 energy level, and would then not release as much energy. Thus, one would expect fluorine to have a greater electron affinity than chlorine. However, why is it that chlorine has a higher electron affinity (349 kJ/mol) than fluorine (328.165 kJ/mol)? Answer: Fluorine, though higher than chlorine in the periodic table, has a very small atomic size. This makes the fluoride anion so formed unstable (highly reactive) due to its very high charge density: the incoming electron experiences strong repulsion from the electrons already crowded into the compact n=2 shell. Also, fluorine has no d-orbitals, which limits its atomic size. As a result, fluorine has an electron affinity less than that of chlorine.
{ "domain": "chemistry.stackexchange", "id": 1419, "tags": "periodic-trends, electron-affinity" }
Does the water in hydrates affect final concentration?
Question: Say I have a hydrate, and I am going to dissolve it in water. Does the water in the hydrate affect the final concentration? Here's an example: If I have $\pu{249.69 g}$ of $\ce{CuSO4·5H2O}$, and dissolve it in $\pu{200 mL}$ of water, will the concentration be $\pu{5 M}$? Answer: Of course, the water in the hydrate affects the final concentration. Molar mass of $\ce{CuSO4.5H2O}$ is $\pu{249.5 g/mol}.$ Amount of substance is $$n = \frac{345.55}{249.5} = \pu{1.385 mol}$$ Concentration: $$C = \frac{1.385}{0.2} = \pu{6.925 mol/L}$$ But, this is only an illustrative example. It's not real. The solubility of $\ce{CuSO4.5H2O}$ is $\pu{320 g/L}$ at $\pu{20 °C}.$
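The answer's arithmetic as a short script (figures taken from the answer, which rounds the molar mass of CuSO4·5H2O to 249.5 g/mol):

```python
# Molarity of the dissolved hydrate, using the answer's figures. Moles
# are counted per formula unit of CuSO4·5H2O, so the water of
# crystallisation is included in the molar mass automatically.
molar_mass = 249.5   # g/mol, the answer's rounded value for CuSO4·5H2O
mass = 345.55        # g of hydrate dissolved (the answer's figure)
volume = 0.200       # L of water

n = mass / molar_mass   # amount of substance, mol
c = n / volume          # concentration, mol/L
print(round(n, 3), round(c, 3))   # 1.385 6.925
```

Note this treats the solution volume as simply the 200 mL of water added; as the answer says, the example is only illustrative.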
{ "domain": "chemistry.stackexchange", "id": 12555, "tags": "aqueous-solution, concentration" }
Isn't energy absolute according to Thermodynamics?
Question: I was taught that Internal Energy $U$ is a relative quantity: only changes in $U$ are meaningful. It doesn't have an absolute value, since it always comes with an arbitrary constant (for example $U = nT +c$). Entropy $S$, on the other hand, has an absolute value thanks to the Third Postulate of Thermodynamics. Volume $V$ and number of moles $n$ are also absolute, obviously. The Fundamental Relation of a system relates $S$, $U$, $V$ and $n$ (For example $U = e^{S/n}Vn$, but any example would work). Since $S$, $V$, and $n$ are always absolute quantities, doesn't that mean that $U$ must also be absolute? So any system has a meaningful, absolute value of $U$ with no arbitrary constant? Answer: It's true that the temperature $T$, entropy $S$, pressure $P$, volume $V$ and amount of material $N$ are all amenable to being modeled as absolute. However, thermodynamics formulates the complete expression for internal energy as $$U=TS - PV + \sum \mu N.$$ This shifts the relative nature to the chemical potential $\mu$, which—like the internal energy $U$—can be measured relative to a reference only. (The same holds for the enthalpy $H\equiv U+PV$, the Helmholtz potential $F\equiv U-TS$, and the Gibbs free energy $G\equiv U+PV-TS$, and after all, the chemical potential $\mu\equiv\left(\frac{\partial G}{\partial N}\right)_{T,P}$ is just the molar Gibbs free energy.) If you consider a closed system at equilibrium so that the last term is constant, then you’re implicitly taking that system as the reference.
{ "domain": "physics.stackexchange", "id": 90147, "tags": "thermodynamics, energy, statistical-mechanics, entropy" }
Inbred mice have no severe phenotype outcome?
Question: Why do 20 generations of inbreeding in mice produce no particularly strange phenotypes, while purposely inbreeding dogs or tigers for a specific phenotype causes severe deformation of the bone or cranial structure? Answer: Some information on inbred strains of laboratory mice: https://en.wikipedia.org/wiki/Inbred_strain http://what-when-how.com/molecular-biology/pure-line-molecular-biology/ A relevant quote on the consequences of inbreeding: Inbreeding in allogamic organisms bring the deleterious recessive alleles to homozygosity; the immediate consequence is an increase in the frequency of defective offspring, or, in another words, an increase in the genetic load of the population. This phenomenon is called inbreeding depression or inbreeding degeneration. As inbreeding continues, the deleterious alleles are selected out and eventually disappear. The original heterozygous populations are often more fit than the resulting pure lines because they profit from heterosis and balanced polymorphisms; the main advantage of pure lines is the quick production of many individuals with the same well-adapted genotype, while the allogamy continuously generates new genotypes. In other words, inbreeding is harmful because it makes it more likely that offspring will have two copies of bad recessive alleles, meaning those alleles get expressed, meaning the organism gets the bad consequences that wouldn't show up if they had only one copy. This is what happens over a single generation, however; over many generations the bad alleles are selected against, precisely because they are harmful to the organism so it reproduces less, and after enough generations of inbreeding you hit a point where all individuals are genetically identical and have all the "good" alleles (if they were lucky; otherwise they die out or stay stuck with some bad-but-not-fatal alleles).
They still may be worse off than their more diverse ancestors, but they're not completely messed up like their unfortunate great-aunts and uncles who didn't make it either. The big difference between inbred mice and dogs or tigers is the "for specific phenotype" aspect. Laboratory organisms are inbred so that you get a large pool of genetically-identical individuals, meaning they're much easier to experiment on. The aim is the inbreeding itself, not any particular phenotype. For example, if you look at the page for the most popular strain of laboratory mice, C57BL/6, you can see it has many different properties and is used for many different things. On the other hand, dogs aren't inbred for the purpose of inbreeding or of being genetically identical; the aim is to get desirable phenotypes, and inbreeding is just an efficient way of achieving that aim. It also isn't obvious that many of the problems purebred dogs have are due to inbreeding (i.e. lack of genetic diversity, high levels of homozygosity) per se, rather than to the fact that the traits being bred for are just plain unhealthy for the dog, or part of a bell curve that includes bad outcomes at the edges. For example, Syringomyelia in the Cavalier King Charles Spaniel: Some researchers estimate that as many as 95% of CKCSs may have Chiari-like malformation (CM or CLM), the skull bone malformation believed to be a part of the cause of syringomyelia, and that more than 50% of cavaliers may have SM.* It is worldwide in scope and not limited to any country, breeding line, or kennel, and experts report that it is believed to be inherited in the cavalier King Charles spaniel. CM is so widespread in the cavalier that it may be an inherent part of the CKCS's breed standard. (emphasis mine)
When organisms are selected for traits that are directly harmful in their extreme, or are associated with harmful genes that just happen to be next to those that are selected for in the chromosome, then the harmful consequences will spread through the population. Inbreeding is only a problem insofar as it allows the process go faster (more offspring per generation have the desired trait). On the other hand when you're just inbreeding with no specific focus on phenotype, or not phenotypes that have obvious harm associated (i.e. no lab would select for a frivolous trait that also causes harm. They're either selecting for the harmful trait on purpose, or they're selecting against it, because they'll want animals that are as healthy as possible except for the one variable they're interested in), then you'll end up with populations that are fairly normal except for some of the direct consequences of genetic uniformity. It should be noted that most purebred dogs probably aren't inbred strains the same way many laboratory animals are; those are genetically identical, so the whole point is that their offspring will be like they are. So while they may be less fit than a non-inbred version of them might be, their offspring won't be any less fit than they are. And this is not what's observed with purebred animals like dogs and horses; individuals aren't identical, and looking at the page on Syringomyelia it seems the problems are getting worse.
{ "domain": "biology.stackexchange", "id": 7084, "tags": "genetics, mouse" }
Rope tension question
Question: If two ends of a rope are pulled with forces of equal magnitude and opposite direction, the tension at the center of the rope must be zero. True or false? The answer is false. I chose true, though, and I'm not understanding why. Forces act at the center of mass of the object, so if there are two forces of equal magnitude and opposite direction, then they should cancel out, resulting in zero tension, no? Answer: The tension of the rope is the shared magnitude of the two forces. Imagine cutting the rope at a point and inserting a spring scale in its place. The reading will show the tension. A rope with zero tension would be hanging loosely or lying on the ground, neglecting the rope's mass.
{ "domain": "physics.stackexchange", "id": 25428, "tags": "newtonian-mechanics, forces, string" }
If $f(n) = \Theta(g(n))$, do both functions bound each other for all $n$ or only sufficiently large $n$?
Question: The following is an excerpt from CLRS: $\Theta(g(n))= \{ f(n) \mid \text{ $\exists c_1,c_2,n_0>0$ such that $0 \le c_1 g(n) \le f(n) \le c_2g(n)$ for all $n \ge n_0$}\}$. Assuming $n \in \mathbb{N}$, I was unable to find $f(n)$ and $g(n)$ such that the bound does not apply for all $n$. Note: This question was asked with the flawed assumption that $f(n)$ and $g(n)$ necessarily have natural domains. Answer: You need a non-negative, sufficiently large input size $n_0$ from which point on the bound holds. Have a look at Figure 3.1 in CLRS, which graphically shows examples of the $O, \Theta$ and $\Omega$ notation. You can also see why this makes sense. For example, we are interested in knowing how an algorithm's runtime behaves as the input gets larger and larger. Thus, we don't really care too much about small values of $n$. It is not always the case that a bound holds for every nonnegative $n$. For example, consider two functions $f(n)=n$ and $g(n)=n \log n$. We can plot them for a few small values of $n$. $g(n)$ does not dominate $f(n)$ for all values of $n$. For sufficiently large values of $n$, it will. In other words, $f(n) = O(g(n))$. Note that this is only an upper bound, but the idea will be the same for $\Theta$.
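The $f(n)=n$ versus $g(n)=n\log n$ example can be checked numerically (using base-2 logarithms for concreteness); $g$ fails to dominate for the smallest inputs, but a valid $n_0$ exists:

```python
import math

# f(n) = n versus g(n) = n*log2(n): the Theta/O bound need only hold
# from some n0 onward, not for every n.
def f(n): return n
def g(n): return n * math.log2(n)

# For the smallest inputs g does not dominate f ...
small = [n for n in range(1, 5) if g(n) < f(n)]
print(small)   # [1]  (g(1) = 0 < 1; at n = 2 the two are equal)

# ... but taking c = 1 and n0 = 2 witnesses f(n) = O(g(n)).
assert all(f(n) <= g(n) for n in range(2, 10_000))
```

The check only samples finitely many $n$, of course; the general claim follows because $\log_2 n \ge 1$ for all $n \ge 2$.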
{ "domain": "cs.stackexchange", "id": 778, "tags": "asymptotics, landau-notation" }
'DataFrame' object has no attribute 'to_dataframe'
Question: I'm sure I have a small error here that I'm overlooking, but am having a tough time figuring out what I need to change. Here is my code up until the error I'm getting. # Load libraries import pandas as pd import numpy as np from pandas.tools.plotting import scatter_matrix import matplotlib.pyplot as plt from sklearn import model_selection from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.naive_bayes import GaussianNB from sklearn.svm import SVC # Load dataset names = ['action','reject','approve','occ','loanamt', 'suffolk', 'appinc','typur','unit','married','dep','emp','yjob','self','atotinc','cototinc','hexp'] # from azureml import Workspace # ws = Workspace( # workspace_id='', # authorization_token='==', # endpoint='https://studioapi.azureml.net' # ) # ds = ws.datasets['loanapp_c.csv'] ds = pd.read_csv('desktop/python ML/loanapp_c.csv') dataset = ds.to_dataframe() I was running this on Azure and am now trying to do it locally. Here is the error I'm getting: AttributeError Traceback (most recent call last) <ipython-input-3-b49a23658806> in <module>() 32 33 ds = pd.read_csv('desktop/python ML/loanapp_c.csv') ---> 34 dataset = ds.to_dataframe() 35 36 # shape ~/anaconda3/lib/python3.7/site-packages/pandas/core/generic.py in __getattr__(self, name) 4374 if self._info_axis.can_hold_identifiers_and_holds_name(name): 4375 return self[name] -> 4376 return object.__getattribute__(self, name) 4377 4378 def __setattr__(self, name, value): AttributeError: 'DataFrame' object has no attribute 'to_dataframe' Not sure what I have wrong. Answer: The function pd.read_csv() already returns a DataFrame, and that kind of object does not support calling .to_dataframe().
You can check the type of your variable ds using print(type(ds)); you will see that it is already a pandas DataFrame.
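A minimal sketch of the corrected load step, using an in-memory CSV with a few of the question's column names (made-up values) so it runs without the original file:

```python
import io
import pandas as pd

# pd.read_csv already returns a DataFrame, so the .to_dataframe() call
# can simply be dropped. A small in-memory CSV stands in for the real file.
csv_data = io.StringIO("action,reject,approve\n1,0,1\n0,1,0\n")
dataset = pd.read_csv(csv_data)   # no further conversion needed

print(type(dataset))              # <class 'pandas.core.frame.DataFrame'>
print(dataset.shape)              # (2, 3)
```

The .to_dataframe() call only made sense on the Azure workspace dataset object; once the file is read locally with pandas, the result is already the DataFrame the rest of the script expects.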
{ "domain": "datascience.stackexchange", "id": 4648, "tags": "dataframe" }
Synteny, genetics?
Question: Could anyone explain the concept of synteny relating to genetics? A picture would help. I tried reading the Wikipedia article along with another PDF http://gep.wustl.edu/repository/course_materials_WU/annotation/About_Synteny_Analysis.pdf and I feel it only somewhat helped. From what I gather, synteny is about the order of genes relative to their homologous genes? Or their location in general? Answer: Syntenic blocks contain the same genes in the same order on chromosomes of different species. The figure above shows (left to right) syntenic blocks shared between human chromosome 17 and the corresponding chromosomes in three other mammals (horse, pig and chimpanzee). And as expected, the more distantly related the species (such as pig and horse), the more rearranged the order of genes is. Ref: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3758187/
{ "domain": "biology.stackexchange", "id": 3875, "tags": "genetics" }
Is this case of weighted 2SAT NP-complete?
Question: Weighted 2SAT asks if it is possible to satisfy the formula with at most $k$ variables set as positive/negative. Trivially, every instance must be in 2CNF. It is known to be $\mathsf{NP}$-complete. We have the following additional restriction: each variable appears twice as positive and once as negative, or vice versa. An example instance: $(x\lor y)\land(x\lor z)\land(\overline x\lor t)\land(\overline y\lor z)\land(\overline y\lor \overline t)\land(\overline z\lor \overline t)$ Is this weighted 2SAT variant $\mathsf{NP}$-complete? Of course, if vertex cover restricted to graphs where each vertex has only 2 edges were $\mathsf{NP}$-hard, this would also be $\mathsf{NP}$-hard. So, if such a result were known, it could also serve as an answer. Ah, well, this does not help. Answer: You can express the predicate "$x = y$" using one occurrence of each polarity: $$ (x \lor \lnot y) \land (\lnot x \lor y). $$ Consider now an instance of weighted 2SAT, in which each variable appears at most $M$ times. Duplicate each variable $M$ times, and enforce that all copies are the same using the gadget above. Replace each occurrence of each variable by a distinct copy of the variable. If the original instance asks for an assignment with at most $k$ positive variables, ask for at most $Mk$ positive variables. We obtain an instance of your problem which is equivalent to the original problem. This shows that your problem is also NP-complete.
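The equality gadget can be sanity-checked by brute force; the clause pair is satisfied exactly when the two (hypothetical) variables agree:

```python
from itertools import product

# Gadget from the answer: (x OR NOT y) AND (NOT x OR y) encodes x == y.
# Each of x and y occurs exactly once positively and once negatively,
# which is what the occurrence restriction permits.
def gadget(x, y):
    return (x or not y) and (not x or y)

for x, y in product([False, True], repeat=2):
    assert gadget(x, y) == (x == y)
print("gadget encodes equality")
```

Chaining this gadget across the $M$ copies of a variable is what forces them all to take the same value in the reduction.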
{ "domain": "cs.stackexchange", "id": 9605, "tags": "np-complete, 2-sat" }
Force between two finite parallel current carrying wires
Question: Remark: This is not a homework question...It is purely out of theoretical interest. I asked this on the mathematics community a couple days ago and got no answer, so I figured I'd try here. Most standard physics textbooks compute the force two infinite wires exert on each other, but they remain silent about the case where the wires are finite. Let's say we have two parallel wires carrying a current of equal magnitude in the same direction, both of which have a length $d$ and are separated by a distance $d$. I now want to find out the force one wire exerts on another, using the Biot-Savart Law. Let the left wire be positioned at the origin of the $xy$-plane, going along the $y$-axis, and let the other wire be a distance $d$ to the right. We assume the currents are flowing in the positive $y$-direction. Then we first choose a source element (on the left wire) of infinitesimal length $dy$ described by the position vector $\mathbf{r_0} = y_0 \hat{j}$. This constitutes a current source of $I d\vec{l} = (Idy) \hat{j}$. We then pick an arbitrary field point $P$ on the other wire with position vector $\mathbf{r_p} = x\hat{i} + y \hat{j}$. Then the position vector $\mathbf{r}$ pointing from the source point to the field point is given as \begin{align*} \mathbf{r} = \mathbf{r_p} - \mathbf{r_0} = x\hat{i} + (y - y_0) \hat{j}, \end{align*} with $\sqrt{x^2 + (y-y_0)^2}$ being the length of this vector. If we now calculate the cross product $d\vec{l} \times \mathbf{r}$, we can write it as \begin{align*} dy \hat{j} \times (x\hat{i} + (y - y_0)\hat{j}) = -dy x \hat{k} \end{align*} Now comes the tricky part. I think I need to set up a double integral, because we are working with infinitesimal force elements $d\mathbf{F}$, each of which is given as $d\mathbf{F} = I d\vec{l} \times \mathbf{B}$.
But we also have that \begin{align*} d\mathbf{B} = \frac{\mu_0 I}{4 \pi} \frac{d \vec{l} \times \hat{r}}{r^2} = \frac{\mu_0 I}{4 \pi} \frac{d \vec{l} \times \mathbf{r}}{r^3} = -\frac{\mu_0 I}{4 \pi} \frac{dy\,x}{\left(x^2 + (y-y_0)^2\right)^{3/2}} \hat{k} \end{align*} Hence I need to somehow integrate over $d\mathbf{F}$ and $d\mathbf{B}$. Does anyone have an idea how to do this? Answer: Let's call the circuit in the origin circuit one and its line element $\mathrm{d}l_1=(0,\mathrm{d}y_1,0)$ and the one to its right $r_2=(d,y_2,0)$ then the force between them is $\mathrm{d}F_{12}=i \mathrm{d}l_2 \times B_1$ where $$B_1=\frac{\mu_0i}{4\pi}\int_{l_1} \frac{\mathrm{d}l_1\times \Delta r}{(\Delta r)^3}$$ and $\Delta r=(d,(y_2-y_1),0)$ so we have that $$\mathrm{d}l_1\times \Delta r=(0,0,-\mathrm{d}y_1 d)$$ so we get as you wrote: $$B_1=-\frac{\mu_0 i d}{4 \pi}\int_{0}^{d} \frac{\mathrm{d}y_1}{(d^2+(y_2-y_1)^2)^{\frac{3}{2}}}$$ Ok now let's call $y_2-y_1=t$ so $\mathrm{d}t=-\mathrm{d}y_1$ then we can write $$B_1=\frac{\mu_0 i d}{4\pi}\int_{y_2}^{y_2-d}\frac{\mathrm{d}t}{(d^2+t^2)^{\frac{3}{2}}}$$ we now make the substitution $$t=d\cdot \sinh(u)$$ and we obtain $$\mathrm{d}t=d\cdot \cosh(u)\mathrm{d}u$$ and then $$B_1=\frac{\mu_0 i d}{4\pi}\int \mathrm{d}u \frac{d \cosh(u)}{d^3 \cosh(u)^3}$$ in which we used $$\cosh(u)^2-\sinh(u)^2=1$$ $$B_1=\frac{\mu_0 i }{4\pi d}\int \frac{\mathrm{d}u}{\cosh(u)^2}$$ now $\frac{1}{\cosh^2(u)}$ is the derivative of $\tanh(u)$ so $$\int \frac{\mathrm{d}u}{\cosh(u)^2}=\tanh(u)$$ we get then $$B_1=\frac{\mu_0 i }{4\pi d}\tanh\left(a\sinh\left(\frac{y_2-y_1}{d}\right)\right)+\text{const}$$ where we have substituted back all parameters $$u=a\sinh\left(\frac{t}{d}\right) \\ t=y_2-y_1$$ so knowing that (where $a\sinh(x)$ is the inverse function of $\sinh(x)$): $$\tanh(a\sinh(x))=\frac{x}{\sqrt{x^2+1}}$$ finally $$B_1=\frac{\mu_0 i }{4\pi d} \frac{\frac{y_2-y_1}{d}}{\sqrt{(\frac{y_2-y_1}{d})^2+1}}+\text{const}=\frac{\mu_0 i }{4\pi d} \frac{y_2-y_1}{\sqrt{(y_2-y_1)^2+d^2}}+\text{const}$$ now we calculate it between $y_1=0$ and $y_1=d$ which yields $$B_1=\frac{\mu_0 i }{4\pi d} \left[ \frac{y_2-d}{\sqrt{(y_2-d)^2+d^2}}-\frac{y_2}{\sqrt{y_2^2+d^2}}\right]$$ (note this is negative for $0<y_2<d$, as it should be: the field of wire one points in $-\hat{z}$ at the location of wire two). To calculate the force we take $B_1=(0,0,B_1 \hat{z})$ and we operate the following: $$\mathrm{d}F_{12}=i\mathrm{d}l_2 \times B_1=i(B_1\mathrm{d}y_2,0,0)$$ now we have to integrate on circuit two: $$F_{12}=\frac{\mu_0 i^2 }{4\pi d} \int_{0}^{d}\mathrm{d}y_2\left[ \frac{y_2-d}{\sqrt{(y_2-d)^2+d^2}}-\frac{y_2}{\sqrt{y_2^2+d^2}}\right]$$ this time no hyperbolic substitution is needed, since $$\int \frac{t\,\mathrm{d}t}{\sqrt{t^2+d^2}}=\sqrt{t^2+d^2}$$ so $$F_{12}=\frac{\mu_0 i^2 }{4\pi d}\left[\sqrt{(y_2-d)^2+d^2}-\sqrt{y_2^2+d^2}\right]_0^d=\frac{\mu_0 i^2 }{4\pi d}\left[(d-\sqrt{2}\,d)-(\sqrt{2}\,d-d)\right]=\frac{\mu_0 i^2 }{2\pi}\left(1-\sqrt{2}\right)$$ the minus sign means the force on wire two points in $-\hat{x}$, i.e. the wires attract, with magnitude $\frac{\mu_0 i^2}{2\pi}(\sqrt{2}-1)\approx 0.41\,\frac{\mu_0 i^2}{2\pi}$, compared to $\frac{\mu_0 i^2 L}{2\pi d}=\frac{\mu_0 i^2}{2\pi}$ from the infinite-wire formula with $L=d$. I hope that helped!
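Because sign slips are easy to make in chained substitutions like these, the Biot-Savart double integral is worth cross-checking numerically. A minimal midpoint-rule sketch (function name and normalization are my own; it uses $d = L = 1$ and sets the prefactor $\mu_0 i^2/4\pi$ to $1$, in which units the closed form for $L=d$ evaluates to $2(1-\sqrt{2})\approx-0.83$):

```python
import math

def finite_wire_force_x(d=1.0, L=1.0, n=400):
    """Midpoint-rule estimate of F_x on wire 2, with mu_0 i^2 / (4 pi) set to 1.
    The integrand is dF_x = -d / (d^2 + (y2 - y1)^2)^(3/2) dy1 dy2."""
    h = L / n
    total = 0.0
    for j in range(n):
        y2 = (j + 0.5) * h  # field point on wire 2
        for k in range(n):
            y1 = (k + 0.5) * h  # source point on wire 1
            total -= d * h * h / (d * d + (y2 - y1) ** 2) ** 1.5
    return total
```

The negative sign of the result confirms the force is attractive, as it must be for parallel currents.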
{ "domain": "physics.stackexchange", "id": 20790, "tags": "electromagnetism, electricity, magnetic-fields" }
No known controllers and their joints in MoveIt! when connecting real robot
Question: Hi, I was setting up MoveIt! for my robot, which uses ros_canopen as its driver. I used moveit_setup_assistant to generate the moveit_config package and followed this tutorial to configure MoveIt! to talk to the real robot. rostopic list shows that the FollowJointTrajectory action on the server side (robot driver) is ready: /joint_trajectory_controller/command /joint_trajectory_controller/follow_joint_trajectory/cancel /joint_trajectory_controller/follow_joint_trajectory/feedback /joint_trajectory_controller/follow_joint_trajectory/goal /joint_trajectory_controller/follow_joint_trajectory/result /joint_trajectory_controller/follow_joint_trajectory/status /joint_trajectory_controller/state My controller.yaml for the MoveIt side is controller_list: - name: joint_trajectory_controller action_ns: follow_joint_trajectory type: FollowJointTrajectory default: true joints: - I1_Joint - T2_Joint - T3_Joint - i4_Joint - t5_Joint When I clicked the execute button in rviz, errors showed up: [ERROR]: Unable to identify any set of controllers that can actuate the specified joints: [ I1_Joint T2_Joint T3_Joint i4_Joint t5_Joint ] [ERROR] : Known controllers and their joints: [ERROR] : Apparently trajectory initialization failed The moveit_controller_manager.launch starts the MoveIt side for the system. I use Indigo and Ubuntu 14.04. Any help will be much appreciated. Originally posted by Craig on ROS Answers with karma: 78 on 2016-12-28 Post score: 0 Answer: You did not follow through with this part of the tutorial: http://docs.ros.org/indigo/api/moveit_tutorials/html/doc/pr2_tutorials/planning/src/doc/controller_configuration.html#create-the-controller-launch-file Instead of filling in the Manipulator5d_moveit_controller_manager.launch.xml as stated in the tutorial, you created your own manipulator5d_moveit_controller_manager.launch and copy&pasted launch descriptions from the other files.
The concrete reason move_group can't find your controller is that you load the controller.yaml file yourself and do not put it in the right namespace. move_group expects to find the parameter in /move_group/controller_list but your launch file loads it in /controller_list. Originally posted by v4hn with karma: 2950 on 2016-12-28 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Craig on 2016-12-28: It works. Thanks! I think the tutorial should make this clear. Comment by v4hn on 2016-12-28: please provide a patch! :) https://github.com/ros-planning/moveit_tutorials/blob/indigo-devel/doc/pr2_tutorials/planning/src/doc/controller_configuration.rst
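Concretely, the tutorial's pattern is to put the rosparam load inside the controller-manager launch file that move_group.launch includes under the move_group namespace, so the parameter lands on /move_group/controller_list. A sketch of that file (the package and yaml file names here are illustrative, following the setup-assistant naming conventions):

```xml
<launch>
  <!-- Included by move_group.launch inside the move_group namespace,
       so controller_list ends up on /move_group/controller_list -->
  <rosparam file="$(find manipulator5d_moveit_config)/config/controllers.yaml"/>
  <param name="moveit_controller_manager"
         value="moveit_simple_controller_manager/MoveItSimpleControllerManager"/>
</launch>
```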
{ "domain": "robotics.stackexchange", "id": 26588, "tags": "moveit" }
The dimensional formula of distance travelled in $n$th second
Question: I read that the dimensional formula of the distance traveled in the $n^{th}$ second is the same as that of velocity. Okay, the formula for the distance traveled in the $n^{th}$ second is $s_t= u+\frac{a}{2}(2t-1)$ where $u$ is initial velocity, $a$ is uniform acceleration and $t$ is the time. If we expand the aforementioned formula we get $s_t = u + at-\frac{a}{2}$. Now, the last term, i.e. $-\frac{a}{2}$, is not a velocity, so isn't the principle of homogeneity violated? Answer: The distance travelled in a time $t$ is: $$ s = ut + \tfrac{1}{2}at^2 $$ So the distance travelled between $t$ and $t - \Delta t$ is: $$\begin{align} \Delta s &= s(t) - s(t - \Delta t) \\ &= ut + \tfrac{1}{2}at^2 - u(t - \Delta t) - \tfrac{1}{2}a(t - \Delta t)^2 \\ &= u\Delta t + \tfrac{1}{2}a(2t\Delta t - \Delta t^2) \end{align}$$ The equation you cite is obtained by setting $\Delta t = 1$, but remember that you're setting $\Delta t$ equal to one second, not the dimensionless quantity $1$. So your equation should really be: $$ \Delta s = u\cdot (1 \space\text{second}) + \tfrac{1}{2}a(2t(1 \space\text{second}) - (1 \space\text{second})^2) $$ or multiplying this out: $$ \Delta s = u\cdot (1 \space\text{second}) + at(1 \space\text{second}) - \tfrac{1}{2}a(1 \space\text{second})^2 $$ So it is dimensionally consistent.
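The answer's bookkeeping can be made mechanical. A toy sketch (entirely my own construction) that tracks length and time exponents and refuses to add terms of unlike dimension, applied to $\Delta s = u\cdot(1\,\mathrm{s}) + at\cdot(1\,\mathrm{s}) - \tfrac{1}{2}a\cdot(1\,\mathrm{s})^2$ with $u = 5\,\mathrm{m/s}$, $a = 2\,\mathrm{m/s^2}$, $t = 3$:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Q:
    """A value carrying length/time dimension exponents (L, T)."""
    value: float
    L: int = 0
    T: int = 0

    def __mul__(self, other):
        return Q(self.value * other.value, self.L + other.L, self.T + other.T)

    def __add__(self, other):
        # homogeneity: only like dimensions may be added
        assert (self.L, self.T) == (other.L, other.T), "dimension mismatch"
        return Q(self.value + other.value, self.L, self.T)

    def __sub__(self, other):
        return self + Q(-other.value, other.L, other.T)

u = Q(5.0, L=1, T=-1)     # initial velocity, m/s
a = Q(2.0, L=1, T=-2)     # acceleration, m/s^2
t = Q(3.0, T=1)           # the 3rd second
one_second = Q(1.0, T=1)
half = Q(0.5)             # dimensionless

# every term carries the hidden (1 second) factor, so each one is a length
s_t = u * one_second + a * t * one_second - half * a * one_second * one_second
```

Dropping the `one_second` factors (e.g. trying `u + a`) trips the homogeneity assertion, which is exactly the point of the answer.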
{ "domain": "physics.stackexchange", "id": 30180, "tags": "homework-and-exercises, kinematics" }
How to detect that a waveform is of the same type if it has a different frequency (Example: Sinwave with 2Hz and another with 10Hz)?
Question: I am trying to detect whether a received waveform is a sinusoid, a square wave, or something else. How should I go about doing this, if the signal frequency can be different every time? I have looked online and come across methods like correlation or DTW, but I don't think any of them would be helpful in this case. Answer: Do you want to distinguish between sine and square only, or could there be 1000s of different waveforms? If the waveform is perfectly periodic, you could use correlation to find the fundamental period, then use a DFT/FFT of that length to find the harmonics. The relative frequency/gain/phase of the harmonics should be constant for a given waveform up until Nyquist. For just a few (perfect) waveforms, I think you could do some ad hoc analysis in the time domain. The derivative should be quite different for square waves and sines.
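The "constant harmonic signature" idea from the answer can be sketched with a naive O(N^2) DFT (pure standard library; all names here are my own). The ratios of harmonic magnitudes to the fundamental are the frequency-independent fingerprint: near zero for a sine, roughly 1/m at odd harmonics m for a square wave.

```python
import cmath, math

def dft_mag(x):
    """Magnitudes of the first N/2 DFT bins (naive O(N^2), fine for a sketch)."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))
            for k in range(N // 2)]

def harmonic_ratios(x, n_harmonics=3):
    """Locate the fundamental bin f0, then return |X(m*f0)| / |X(f0)| for
    m = 2, 3, 4, ... - a signature independent of the fundamental frequency."""
    mags = dft_mag(x)
    f0 = max(range(1, len(mags)), key=lambda k: mags[k])
    return [mags[m * f0] / mags[f0]
            for m in range(2, 2 + n_harmonics) if m * f0 < len(mags)]

N = 256
# half-sample phase offset so no sample lands exactly on a zero crossing
sine = [math.sin(2 * math.pi * 4 * (t + 0.5) / N) for t in range(N)]
square = [math.copysign(1.0, s) for s in sine]
```

For the square wave, the even-harmonic ratios come out near zero and the third harmonic near 1/3, regardless of which fundamental frequency was used to generate it.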
{ "domain": "dsp.stackexchange", "id": 11191, "tags": "signal-detection, waveform-similarity" }
cv_bridge not showing RGB image
Question: I am following the steps from a book for setting up OpenCV to work together with ROS through cv_bridge. The book is a bit old, and the original code was legacy Python 2 code. The camera is a Kinect V1. I used 2to3 to convert it to Python 3, and after several runs catching the errors and updating (or trying to) to the new OpenCV commands I got it running. But the book says it is supposed to show 3 windows: RGB, Depth and Edges. When I run my "adjusted" code, it only shows Depth and Edges. Would somebody know what is missing from it? # This is the original Code import rospy import sys import cv2 import cv2.cv as cv from sensor_msgs.msg import Image, CameraInfo from cv_bridge import CvBridge, CvBridgeError import numpy as np class cvBridgeDemo(): def __init__(self): self.node_name = "cv_bridge_demo" #Initialize the ros node rospy.init_node(self.node_name) # What we do during shutdown rospy.on_shutdown(self.cleanup) # Create the OpenCV display window for the RGB image self.cv_window_name = self.node_name cv.NamedWindow(self.cv_window_name, cv.CV_WINDOW_NORMAL) cv.MoveWindow(self.cv_window_name, 25, 75) # And one for the depth image cv.NamedWindow("Depth Image", cv.CV_WINDOW_NORMAL) cv.MoveWindow("Depth Image", 25, 350) # Create the cv_bridge object self.bridge = CvBridge() # Subscribe to the camera image and depth topics and set # the appropriate callbacks self.image_sub = rospy.Subscriber("/camera/rgb/image_color", Image,self.image_callback) self.depth_sub = rospy.Subscriber("/camera/depth/image_raw", Image,self.depth_callback) rospy.loginfo("Waiting for image topics...") def image_callback(self, ros_image): # Use cv_bridge() to convert the ROS image to OpenCV format try: frame = self.bridge.imgmsg_to_cv(ros_image, "bgr8") except CvBridgeError, e: print e # Convert the image to a Numpy array since most cv2 functions # require Numpy arrays.
frame = np.array(frame, dtype=np.uint8) # Process the frame using the process_image() function display_image = self.process_image(frame) # Display the image. cv2.imshow(self.node_name, display_image) # Process any keyboard commands self.keystroke = cv.WaitKey(5) if 32 <= self.keystroke and self.keystroke < 128: cc = chr(self.keystroke).lower() if cc == 'q': # The user has press the q key, so exit rospy.signal_shutdown("User hit q key to quit.") def depth_callback(self, ros_image): # Use cv_bridge() to convert the ROS image to OpenCV format try: # The depth image is a single-channel float32 image depth_image = self.bridge.imgmsg_to_cv(ros_image,"32FC1") except CvBridgeError, e: print e # Convert the depth image to a Numpy array since most cv2 functions # require Numpy arrays. depth_array = np.array(depth_image, dtype=np.float32) # Normalize the depth image to fall between 0 (black) and 1 (white) cv2.normalize(depth_array,depth_array, 0, 1, cv2.NORM_MINMAX) # Process the depth image depth_display_image = self.process_depth_image(depth_array) # Display the result cv2.imshow("Depth Image", depth_display_image) def process_image(self, frame): # Convert to grayscale grey = cv2.cvtColor(frame, cv.CV_BGR2GRAY) # Blur the image grey = cv2.blur(grey, (7, 7)) # Compute edges using the Canny edge filter edges = cv2.Canny(grey, 15.0, 30.0) return edges def process_depth_image(self, frame): # Just return the raw image for this demo return frame def cleanup(self): print "Shutting down vision node." cv2.destroyAllWindows() def main(args): try: cvBridgeDemo() rospy.spin() except KeyboardInterrupt: print "Shutting down vision node." 
cv.DestroyAllWindows() if __name__ == '__main__': main(sys.argv) And this is my roughly adjusted one: #!/usr/bin/env python import rospy import sys import cv2 from sensor_msgs.msg import Image, CameraInfo from cv_bridge import CvBridge, CvBridgeError import numpy as np class cvBridgeDemo(): def __init__(self): self.node_name = "cv_bridge_node" #Initialize the ros node rospy.init_node(self.node_name,anonymous=True) # What we do during shutdown rospy.on_shutdown(self.cleanup) # Create the OpenCV display window for the RGB image self.cv_window_name = self.node_name cv2.namedWindow(self.cv_window_name, cv2.WINDOW_NORMAL) cv2.moveWindow(self.cv_window_name, 25, 75) # And one for the depth image cv2.namedWindow("Depth Image", cv2.WINDOW_NORMAL) cv2.moveWindow("Depth Image", 25, 350) # Create the cv_bridge object self.bridge = CvBridge() # Subscribe to the camera image and depth topics and set # the appropriate callbacks self.image_sub = rospy.Subscriber("/camera/rgb/image_raw", Image,self.image_callback) self.depth_sub = rospy.Subscriber("/camera/depth/image_raw", Image,self.depth_callback) rospy.loginfo("Waiting for image topics...") def image_callback(self, ros_image): # Use cv_bridge() to convert the ROS image to OpenCV format try: frame = self.bridge.imgmsg_to_cv2(ros_image, "bgr8")#bgr8 except CvBridgeError as e: print(e) # Convert the image to a Numpy array since most cv2 functions # require Numpy arrays. frame = np.array(frame, dtype=np.uint8) # Process the frame using the process_image() function display_image = self.process_image(frame) # Display the image. 
cv2.imshow(self.node_name, display_image) # Process any keyboard commands self.keystroke = cv2.waitKey(5) if 32 <= self.keystroke and self.keystroke < 128: cc = chr(self.keystroke).lower() if cc == 'q': # The user has press the q key, so exit rospy.signal_shutdown("User hit q key to quit.") def depth_callback(self, ros_image): # Use cv_bridge() to convert the ROS image to OpenCV format try: # The depth image is a single-channel float32 image depth_image = self.bridge.imgmsg_to_cv2(ros_image,"32FC1") except CvBridgeError as e: print(e) # Convert the depth image to a Numpy array since most cv2 functions # require Numpy arrays. depth_array = np.array(depth_image, dtype=np.float32) # Normalize the depth image to fall between 0 (black) and 1 (white) cv2.normalize(depth_array,depth_array, 0, 1, cv2.NORM_MINMAX) # Process the depth image depth_display_image = self.process_depth_image(depth_array) # Display the result cv2.imshow("Depth Image", depth_display_image) def process_image(self, frame): # Convert to grayscale grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # Blur the image grey = cv2.blur(grey, (7, 7)) # Compute edges using the Canny edge filter edges = cv2.Canny(grey, 15.0, 30.0) return edges def process_depth_image(self, frame): # Just return the raw image for this demo return frame def cleanup(self): print("\n\n\nShutting down vision node.") cv2.destroyAllWindows() def main(args): try: cvBridgeDemo() rospy.spin() except KeyboardInterrupt: print("\n\n\nShutting down vision node.") cv2.destroyAllWindows() if __name__ == '__main__': main(sys.argv) Originally posted by msantos on ROS Answers with karma: 21 on 2021-02-11 Post score: 0 Original comments Comment by tryan on 2021-02-12: I only see two instances each of imshow() and namedWindow() , so I'm not sure why three images are expected--even in the original code. In image_callback(), the BGR image is processed (self.process_image()) into a Canny edge image before display. 
If you want to see the color image, you'll have to add an extra display. Comment by msantos on 2021-02-13: @tryan That was my feeling too. I read it again a few times to ensure nothing was missing in that example... This is why I asked before concluding it was down to my low level of OpenCV knowledge. Perhaps the editors missed something. I will try to add it and see how it goes. Answer: tryan's comment was actually the answer. I managed to fix it by adding the following lines: In the init func: cv2.namedWindow('Image RBG', cv2.WINDOW_NORMAL) cv2.moveWindow('Image RBG', 25, 350) And in the callback func: cv2.imshow('Image RBG', frame) Originally posted by msantos with karma: 21 on 2021-02-15 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 36077, "tags": "ros, ros-melodic, cv-bridge, python3" }
Minimum Multi-Degree Polynomials representing Boolean Functions
Question: In the 10th Anniversary Edition of Nielsen and Chuang's Quantum Computation and Quantum Information textbook, Chapter 6.7 talks about black box algorithm limits. It is given: $f:\{0,1\}^n \rightarrow \{0,1\}$ $F:\{X_0,X_1,X_2,....,X_{N-1} \}\rightarrow \{0,1\}$ $\text{such that} \space F \space \text{is a boolean function,} \space X_k=f(k) \space and \space N=2^n$ It is then mentioned that: We say that a polynomial $p: R^N\rightarrow R$ represents $F$ if $p(X)=F(X)$ for all $X \in \{0,1\}^N$ (where $R$ denotes the real numbers). Such a polynomial $p$ always exists, since we can explicitly construct a suitable candidate: $$p(X)=\sum_{Y\in \{0,1\}^N} F(Y)\prod_{k=0}^{N-1}[1-(Y_k-X_k)^2]$$ Can someone explain this formula to me and whether the construction is a result of rigorous steps or by intuition? It would also be good if there are useful materials related to this for me to read. Answer: Plug in an arbitrary $X$ into the formula. Look at each summand for each particular $Y \in \{0,1\}^N$. If $Y \neq X$, then there must be at least one index $k$ such that $X_k \neq Y_k$. But both $Y_k$ and $X_k$ are only either $0$ or $1$. So if they are not equal, then the difference must be either $+1$ or $-1$. Square that and you get $1$ if they are different. Then one of the terms in the product will be $1-1=0$. So if $Y \neq X$, that particular summand is $0$. The only summand that remains is when $Y=X$. In that case each of the $1-(Y_k-X_k)^2$ is equal to $1$. So the product of all of those still gives $1$ and the result for that summand is $F(Y)=F(X)$. Add together the summands for all the $Y$, and you get the only nonzero term, $F(X)$. So $p(X)=F(X)$ for all $X \in \{0,1\}^N$. As desired, $p$ represents $F$. However, there may be simpler $p$ that still do the same. This is just one of them. The one of smallest degree gives the notion of degree of a boolean function, which is related to sensitivity and block sensitivity.
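The construction is just interpolation: each summand is an indicator that vanishes unless $Y = X$. A quick sketch (names are my own) that evaluates the formula literally and checks it against a small boolean function:

```python
from itertools import product

def p(X, F, N):
    """Evaluate p(X) = sum over Y in {0,1}^N of F(Y) * prod_k [1 - (Y_k - X_k)^2]."""
    total = 0
    for Y in product([0, 1], repeat=N):
        term = F(Y)
        for k in range(N):
            term *= 1 - (Y[k] - X[k]) ** 2  # 0 as soon as any bit differs
        total += term
    return total

def F_or(Y):
    """Example boolean function: OR of the bits."""
    return int(any(Y))
```

Looping over all $X \in \{0,1\}^3$ confirms $p(X) = F(X)$ at every boolean point, which is all the definition of "represents" asks for.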
{ "domain": "quantumcomputing.stackexchange", "id": 873, "tags": "mathematics, complexity-theory, nielsen-and-chuang" }
C++ simple enumerator implementation
Question: I wrote this enumerator class out of necessity. I am working on a project that heavily depends on inheritance, abstract (interface) classes, etc. Therefore I needed a unified container that can store objects of different types (all derived from a single base class). The class posted below aims to be a C++ implementation of the C# IEnumerator. I was aiming to achieve the best possible (runtime/execution) performance from this class, and because of that I deliberately omitted some out_of_range checks (Visual Studio 2015 in debug mode checks that error via assertion, so I think I don't need to check it at runtime as well). Unfortunately, the most important methods, bool fwd_next() and bool rvs_next(), are obligated to perform some additional checks, which may affect performance. I am just a junior/hobbyist programmer, therefore I am expecting a high level of criticism because I believe it would help me learn more. I spent some time implementing this class, but if you are aware of some other implementation in the form of a third-party library (Boost perhaps? I checked but couldn't find any...), please let me know. In the future I will implement a vector<list<T*>::iterator> in order to store (and keep track of) the iterators returned by list<T*>::push_back-like functions. I will do this to be able to randomly access any element in list<T*> m_item_list with O(1) complexity. I would like to hear opinions on the following: if you have an idea on how to implement better functionality in my class, please let me know; if you see something wrong, ill, or simply unfit with my code, please also let me know; and everything else you are willing to point out. Question: Is there a way to allocate contiguous memory in advance (just like std::vector) and then dynamically construct new objects in that memory range?
This is the result: enumerator.hpp: #pragma once #ifndef __ENUMERATOR_HPP__ #define __ENUMERATOR_HPP__ ///////////////////////////////////////////////////////////////////////////////////////////////////////// // ENUMERATOR: // -- INCLUDE: ///////////////////////////////////////////////////////////////////////////////////////////////////////// // c++ libs: #include <list> /////////// ///////////////////////////////////////////////////////////////////////////////////////////////////////// // ENUMERATOR: ///////////////////////////////////////////////////////////////////////////////////////////////////////// namespace cpplib { namespace common { template <typename T> class enumerator { // typedef: private: using list_ = std::list<T*>; using list_fwd_c = typename list_::const_iterator; using list_fwd = typename list_::iterator; using list_rvs_c = typename list_::const_reverse_iterator; using list_rvs = typename list_::reverse_iterator; // members: private: list_ m_item_list; bool m_flag_first_item; // iterators list_fwd m_fwd_itr; list_rvs m_rvs_itr; // helper: private: __forceinline void _fwd_erase(); __forceinline void _fwd_erase(list_fwd & fwd_itr); __forceinline void _rvs_erase(); __forceinline void _rvs_erase(list_rvs & rvs_itr); // constructors: public: __forceinline enumerator(); template <typename U> __forceinline enumerator(const enumerator<U> &) = delete; __forceinline ~enumerator(); // methods: public: __forceinline void clear(); __forceinline bool fwd_erase_current(); __forceinline bool rvs_erase_current(); template <typename U = T, typename ... arg_list> __forceinline void emplace_back(arg_list ... 
arg_tail); // methods: public: template <typename F> __forceinline void fwd_for_each(F & func); template <typename F> __forceinline void rvs_for_each(F & func); // methods: public: __forceinline T & fwd_current(); __forceinline T & rvs_current(); __forceinline bool fwd_next(); __forceinline bool rvs_next(); __forceinline void fwd_reset(); __forceinline void rvs_reset(); // methods: public: __forceinline const T & fwd_peak() const; __forceinline const T & rvs_peak() const; __forceinline const T & fwd_peak_next() const; __forceinline const T & rvs_peak_next() const; }; } // !eval } // !cpplib /////////// ///////////////////////////////////////////////////////////////////////////////////////////////////////// // ENUMERATOR: // -- INCLUDE .inl: ///////////////////////////////////////////////////////////////////////////////////////////////////////// #include "inl/enumerator.inl" /////////// #endif // !__ENUMERATOR_HPP__ enumerator.inl: #pragma once #ifndef __ENUMERATOR_INL__ #define __ENUMERATOR_INL__ ///////////////////////////////////////////////////////////////////////////////////////////////////////// // ENUMERATOR: // -- INCLUDE: ///////////////////////////////////////////////////////////////////////////////////////////////////////// /////////// ///////////////////////////////////////////////////////////////////////////////////////////////////////// // ENUMERATOR: ///////////////////////////////////////////////////////////////////////////////////////////////////////// namespace cpplib { namespace common { namespace detail { } // !detail template <typename T> __forceinline enumerator<T>::enumerator() : m_item_list(), m_flag_first_item(false) {}; template <typename T> __forceinline enumerator<T>::~enumerator() { clear(); } template <typename T> __forceinline void enumerator<T>::clear() { for (auto token : m_item_list) { delete token; } m_item_list.clear(); // set flag: m_flag_first_item = false; } template <typename T> __forceinline void enumerator<T>::_fwd_erase() { 
delete *m_fwd_itr; m_fwd_itr = m_item_list.erase(m_fwd_itr); } template <typename T> __forceinline void enumerator<T>::_fwd_erase(list_fwd & fwd_itr) { delete *fwd_itr; fwd_itr = m_item_list.erase(fwd_itr); } template <typename T> __forceinline void enumerator<T>::_rvs_erase() { delete *m_rvs_itr; m_rvs_itr = m_item_list.erase(m_rvs_itr); } template <typename T> __forceinline void enumerator<T>::_rvs_erase(list_rvs & rvs_itr) { delete *rvs_itr; rvs_itr = m_item_list.erase(rvs_itr); } template <typename T> __forceinline bool enumerator<T>::fwd_erase_current() { if (m_flag_first_item) { _fwd_erase(); return true; } else { return false; } } template <typename T> __forceinline bool enumerator<T>::rvs_erase_current() { if (m_flag_first_item) { _rvs_erase(); return true; } else { return false; } } template <typename T> template <typename U, typename ... arg_list> __forceinline void enumerator<T>::emplace_back(arg_list ... arg_tail) { m_item_list.push_back(new U(arg_tail ...)); if (! m_flag_first_item) { m_flag_first_item = true; m_fwd_itr = m_item_list.begin(); m_rvs_itr = m_item_list.rbegin(); } }; template <typename T> template <typename F> __forceinline void enumerator<T>::fwd_for_each(F & func) { for (auto fwd_itr = m_item_list.begin(); fwd_itr != m_item_list.end(); ++fwd_itr) { func(**fwd_itr); } } template <typename T> template <typename F> __forceinline void enumerator<T>::rvs_for_each(F & func) { for (auto rvs_itr = m_item_list.rbegin(); rvs_itr != m_item_list.rend(); ++rvs_itr) { func(**rvs_itr); } } template <typename T> __forceinline T & enumerator<T>::fwd_current() { return **m_fwd_itr; } template <typename T> __forceinline T & enumerator<T>::rvs_current() { return **m_rvs_itr; } template <typename T> __forceinline bool enumerator<T>::fwd_next() { if (m_flag_first_item) { if (++m_fwd_itr != m_item_list.end()) { return true; } else { m_fwd_itr = --m_item_list.end(); return false; } } else { return false; } } template <typename T> __forceinline bool 
enumerator<T>::rvs_next() { if (m_flag_first_item) { if (++m_rvs_itr != m_item_list.rend()) { return true; } else { m_rvs_itr = --m_item_list.rend(); return false; } } else { return false; } } template <typename T> __forceinline void enumerator<T>::fwd_reset() { m_fwd_itr = m_item_list.begin(); } template <typename T> __forceinline void enumerator<T>::rvs_reset() { m_rvs_itr = m_item_list.rbegin(); } template <typename T> __forceinline const T & enumerator<T>::fwd_peak() const { return **m_fwd_itr; } template <typename T> __forceinline const T & enumerator<T>::rvs_peak() const { return **m_rvs_itr; } template <typename T> __forceinline const T & enumerator<T>::fwd_peak_next() const { auto fwd_itr = m_fwd_itr; return **(++fwd_itr); } template <typename T> __forceinline const T & enumerator<T>::rvs_peak_next() const { auto rvs_itr = m_rvs_itr; return **(++rvs_itr); } } // !eval } // !cpplib /////////// #endif // !__ENUMERATOR_INL__ main.cpp: #include <iostream> #include <string> #include "common_utils/enumerator.hpp" int main() { cpplib::common::enumerator<double> test; for (int i = 0; i < 5; i++) { test.emplace_back(i); } // do-while becasue 1st element is skipped in a while loop. do { std::cout << test.fwd_current() << std::endl; } while (test.fwd_next()); std::cout << std::endl; // for each 'naive' implementation: test.fwd_for_each(([](auto item) {std::cout << item << std::endl;})); std::cout << std::endl; test.rvs_for_each(([](auto item) {std::cout << item << std::endl;})); std::cout << std::endl; // Exit: return 0; } Answer: enumerator::emplace_back() Methods like push_back or emplace_back don't belong in an enumerator. Like the name says, the enumerator's job is to enumerate a collection, not to provide all the features of a collection. In C# most collections implement IEnumerable, but that only for the features of enumerating them. If you are provided an IEnumerable you are not provided with any methods of modifying the collection. 
Forcing optimizations I was aiming to achieve the best possible (runtime/execution) performance Have you profiled your code to detect possible issues with performance? Visual Studio includes a nice and easy-to-use profiler. The presence of __forceinline on template code is kind of useless. The compiler can inline it if it deems appropriate even without __forceinline. It's also nonstandard. Code is difficult to read There are various aspects of your coding style that make the code quite difficult to read: Having #pragma once but also declaring header guards Using big comment sections that serve no purpose like // ENUMERATOR: Mixing abbreviations with normal words: list_rvs & rvs_itr Building an enumerator versus sticking to C++ iterators I wouldn't (re)invent an enumerator for use in production C++ projects. Sticking to normal iterators still has advantages: Standard library algorithms work on iterators Iterators can be copied, allowing more flexibility than a C# IEnumerable, which is normally only iterated over once before you start to worry about performance issues (iterating it a second time might trigger a new call to the database) Boost provides helpers to more easily construct iterators for custom collections Boost provides iterator adaptors that allow piping iterators similar to what you do with LINQ If the above still doesn't suit your needs, I would take a look at the Ranges Library
{ "domain": "codereview.stackexchange", "id": 26730, "tags": "c++, iterator" }
Getting the position and the pose of robot using tf_listener
Question: I'm trying to get the position and orientation of the robot calculated by gmapping. I used this question and this tutorial as references. I can get the robot position using tf::Transform::getOrigin(), which is the /map -> /base_link transform. However, the robot angle obtained from tf::Transform::getRotation()::getAngle() is only from 0 to π. What should I do to determine the angle of the robot from 0 to 2π? Originally posted by s-kawakami on ROS Answers with karma: 1 on 2012-07-19 Post score: 0 Answer: Use tf::getYaw: tf::getYaw(transform.getRotation()); Originally posted by Lorenz with karma: 22731 on 2012-07-19 This answer was ACCEPTED on the original site Post score: 11
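For intuition, here is a plain-math sketch (no ROS dependency; names are my own) of the quantity tf::getYaw extracts, plus the wrap into [0, 2π) that the question asks for. The key difference from getAngle() is that yaw is a signed atan2 result in (-π, π], which you can wrap yourself:

```python
import math

def yaw_from_quaternion(x, y, z, w):
    """Yaw (rotation about Z) of a unit quaternion; result lies in (-pi, pi]."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

def yaw_0_to_2pi(yaw):
    """Map a (-pi, pi] yaw into [0, 2*pi)."""
    return yaw % (2.0 * math.pi)

# quaternion for a 270-degree rotation about Z (half-angle 135 degrees)
q = (0.0, 0.0, math.sin(3 * math.pi / 4), math.cos(3 * math.pi / 4))
```

A 270° rotation comes back from atan2 as -π/2; the modulo wrap turns it into 3π/2, i.e. the full 0 to 2π range.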
{ "domain": "robotics.stackexchange", "id": 10275, "tags": "ros, navigation, tf-listener, gmapping, stampedtransform" }
What is going on in front of and behind a fan?
Question: Why is it that when you drop paper behind a fan, it drops, and is not blown/sucked into the fan, whereas if you drop paper in front of a fan, it is blown away? Answer: There is a YouTube video that visualizes the air flow around a propeller for various configurations. I caught a screen shot of a moment that more or less shows what is going on: As you can see, this happens at 2:07 into the clip - this happens to be for a dual rotor configuration (two counter rotating blades) but the principle is the same. Behind the rotor (above, in this picture) the air is moving slowly. Air over a wide range of area is drifting towards the rotor, where it is accelerated. I will leave it up to others to describe the mathematics behind this contraction - but I thought visualizing the flow would at least confirm your observation that it is indeed slower behind the fan, and faster in front of it. In other words - it pushes, but doesn't suck. A better image showing the flow lines around the propeller is given at this article about the mechanics of propellers As the pressure is increased, the flow velocity goes up and the flow lines end up closer together (because of conservation of mass flow). This gives the flow the asymmetry you observed. But it's still more intuitive than rigorous... AFTERTHOUGHT Hot Licks made an excellent observation in a comment that I would like to expand on. The air being drawn towards the fan is moving in the pressure differential between the atmosphere at rest, and the lower pressure right in front of the fan blades. The pressure gradient is quite small, so the air cannot flow very fast - and it has to be drawn from a wide region to supply the mass flow. After impact with the blade (or at least after "interacting" with the blade), the air has a LOT more momentum that is directed along the axis of the fan (with a bit of swirl...). This higher momentum gives the air downstream of the fan its coherence as can be seen in the diagram.
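The contraction of the flow lines mentioned above follows from mass conservation alone. As a minimal sketch, assuming steady and approximately incompressible flow of density $\rho$ through a stream tube, the mass flow rate is the same through every cross-section, so the fast side of the fan must be the narrow side:

```latex
\dot{m} = \rho\, A_1 v_1 = \rho\, A_2 v_2
\qquad\Longrightarrow\qquad
\frac{A_2}{A_1} = \frac{v_1}{v_2} < 1
\quad \text{when } v_2 > v_1 .
```

This is why the slow inflow behind the fan is drawn from a wide region while the fast outflow in front forms a narrow, coherent jet.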
{ "domain": "physics.stackexchange", "id": 16207, "tags": "fluid-dynamics, aerodynamics, fan" }
What equation could you use to model the shape of a thread when held at 2 ends?
Question: If you were to hold a string at both ends what shape would the string take(in earth's gravity). Obviously this depends on the length of string and the distance you hold the 2 ends apart so let's say that you can only change the distance of the ends from one another on a straight line parallel to the ground. What equation could you use to predict the shape of the string at any given length(of the string) and distance(of the ends)? Answer: Are you talking about the Catenary shape? The general equation is $$ y(x) = a \left( \cosh \left( \tfrac{x}{a} \right) -1 \right) $$ where $y(0)=0$ is the lowest point on the curve, and the parameter $a$ defines how much it bends. What remains constant along the curve is the horizontal component of tension $H$, which can be used to find $a$ $$ a = \frac{H}{w} $$ where $w$ is the weight per unit length.
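As a rough illustration of how the two inputs (end separation $d$ and string length $S$) pin down the curve, here is a minimal Python sketch. It uses the standard catenary arc-length relation $S = 2a\sinh(d/2a)$ for ends at $x = \pm d/2$, and solves for $a$ by bisection; the bracket values and tolerance are arbitrary choices of mine, and it assumes $S$ is not astronomically larger than $d$:

```python
import math

def solve_catenary(d, S, tol=1e-12):
    """Catenary parameter a for ends a horizontal distance d apart,
    joined by a string of length S > d, lowest point at x = 0.
    Uses the arc-length relation S = 2*a*sinh(d/(2*a)), which
    decreases toward d as a grows, so bisection works."""
    assert S > d, "the string must be longer than the gap"
    lo = d / 100.0   # lower bracket kept moderate to avoid sinh overflow
    hi = 1e9
    while hi - lo > tol * hi:
        a = 0.5 * (lo + hi)
        if 2.0 * a * math.sinh(d / (2.0 * a)) > S:
            lo = a   # curve too long: flatten it (increase a)
        else:
            hi = a
    return 0.5 * (lo + hi)

def sag(d, a):
    """How far the midpoint hangs below the endpoints."""
    return a * (math.cosh(d / (2.0 * a)) - 1.0)

a = solve_catenary(2.0, 3.0)   # ends 2 units apart, string of length 3
print(a, sag(2.0, a))
```

Sliding the ends together (fixed $S$, smaller $d$) shrinks $a$ and deepens the sag, matching everyday experience with a slack cord.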
{ "domain": "physics.stackexchange", "id": 73600, "tags": "classical-mechanics, gravity, string" }
Why are we using Heisenberg equation of motion for non-observable $a$ and $a^{\dagger}$?
Question: The author in one of my textbooks derived the time dependency of $a(t)$ and $a^{\dagger}(t)$ through the equation of motion. Is that allowed? Answer: You can e.g. argue that if Heisenberg equation of motion holds for the Hermitian operators/observables $a+a^{\dagger}$ and $i(a^{\dagger}-a)$, it should also hold for $a$ and $a^{\dagger}$ by $\mathbb{C}$-linearity.
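Written out, the linearity argument is a two-line computation. With the Hermitian combinations

```latex
x \equiv a + a^{\dagger}, \qquad
y \equiv i\,(a^{\dagger} - a)
\qquad\Longrightarrow\qquad
a = \tfrac{1}{2}\,(x + i\,y), \qquad
a^{\dagger} = \tfrac{1}{2}\,(x - i\,y),
```

if $i\hbar\,\dot{x} = [x, H]$ and $i\hbar\,\dot{y} = [y, H]$ hold for the observables, then since both $d/dt$ and the commutator are $\mathbb{C}$-linear, $i\hbar\,\dot{a} = \tfrac{1}{2}\,(i\hbar\,\dot{x} + i\,\cdot i\hbar\,\dot{y}) = [\tfrac{1}{2}(x + i y), H] = [a, H]$, i.e. the Heisenberg equation holds for $a$ (and likewise for $a^{\dagger}$).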
{ "domain": "physics.stackexchange", "id": 37655, "tags": "quantum-mechanics, quantum-field-theory, operators, time-evolution" }
Why do dimmer high-redshift supernovae mean the expansion is accelerating, if the dilated region pertains to the distant past?
Question: I must be seeing this wrong, because it seems to me the data indicates faster expansion in the past, contrary to Adam Riess' study that led to the birth of the notion of 'dark energy'. In Riess' seminal study, [https://arxiv.org/abs/astro-ph/9805201], which was followed by others [https://www.researchgate.net/publication/231060958_Measurements_of_O_and_L_from_42_High-Redshift_Supernovae] that confirmed the measurements (and earned him the 2011 Nobel), high-redshift (very distant) Ia supernovae are 15-25% dimmer (i.e. more distant) than the current Hubble constant (circa 74 km/s/megaparsec) would predict from their redshift. This lesser luminosity (or greater distance) in the distant region was concluded to signify that the universe's expansion is accelerating, rather than slowing down as would be expected from the effect of gravity, as a ball thrown upwards by hand will rise more slowly each second until it stops rising and falls back down. Thus, some effect must be accelerating the expansion of the universe, referred to by the placeholder name 'dark energy'. This measurement generates a graph where low-redshift (closer) supernovae seem to follow a straight line along the Hubble constant, and then in the more distant region the plot goes up from this straight line, as they are further than their redshift would predict, forming what I call for the purposes of this question a 'shallow V' shape. [pages 267-269, How Old Is the Universe?, by David A. Weintraub, available at http://93.174.95.29/main/3FF82B0C945C31FF63E4388ED31AB7F9] The standard answer is something along the lines of "The argument is pretty straightforward, and can easily be understood with a real-world example. Consider two cars leaving your house. They start out going at the same speed. But over time, one car speeds up, while the other slows down. After some time, which car will be farther away from you? The one that’s speeding up, of course. That’s obvious.
And something that’s farther away looks dimmer (at least if they have the same intrinsic brightness)." This explanation - that the universe currently expands at a faster rate than in the past - does not seem to explain the “shallow V” curve we see. It would explain a “shallow A”, with a higher Hubble constant in the present and a lower one in the past. We see a curve (or a line with a bend in it) that - it seems to me - cannot be interpreted other than with the greater increment being at the point of departure, most distant from us, not nearer to where we are, i.e. with a higher Hubble constant in the distant past, say around 80-90 km/s/Mpc. How can we be seeing this curve with the accepted interpretation? Should not the curve be bending the other way? Obviously I must be seeing this wrong, but where is my error? Answer: They are further away than they "should" be according to a decelerating universe model (one without dark energy). That is because the expansion has accelerated. The Hubble parameter isn't a constant; it was larger in the past and is getting smaller. The Hubble parameter does not equate to the expansion rate of the universe. It is defined as the rate of change of the scale factor divided by the scale factor. A constant Hubble parameter would imply exponentially accelerating expansion. e.g. Evolution of the Hubble parameter and Is hubble constant dependent on redshift? More detail The light travel time distance to a distant supernova is measured by the time it takes light to travel from the supernova (when it went off) to us, now. A relatively nearby supernova is barely affected by any acceleration/deceleration of the expansion. Its distance is given approximately by a straightforward application of a "coasting universe", one that is expanding at a constant rate - the flat line on your top plot. If the light has to travel further across the universe then the effects of the cosmological parameters start to become apparent.
In a decelerating universe the rate of expansion slows, so a simple application of a "coasting universe" would result in an underestimated distance (the dashed line curving down in your top plot). I attach a better plot, from the Riess et al. (1998) paper that you reference, with lots of lines representing different combinations of $\Omega_M$ (matter density, a decelerating influence) and $\Omega_\Lambda$ (dark energy, an accelerating influence). Thus a coasting universe has neither and that is the flat line in the middle, representing a universe that has always been expanding at a rate that can be measured locally. In a decelerating universe (e.g. a flat universe with $\Omega_M=1$, $\Omega_{\Lambda}=0$), the expansion rate was much faster in the past. If however we lived in an "empty" coasting universe, the rate of expansion would have been constant and we would now see the supernovae as further away. If the universe is accelerating (e.g. $\Omega_{\Lambda}=1$, $\Omega_M=0$), then this effect becomes more extreme; the supernovae would be much further away and even fainter. We appear to live in a universe somewhere between coasting and extreme acceleration (well we know there is some decelerating matter in the universe right!). This is the $\Omega_M=0.28$, $\Omega_\Lambda=0.72$ curve. This universe has been accelerating for the last few billion years, which covers the distances to the supernovae on this plot. The distant ones suffered a lower rate of expansion than those closer to us and as a result are further away than predicted by a coasting model. However, in such a universe, if you go even further back in time to even more distant supernovae, then the expansion was being decelerated, because whilst the dark energy density stays the same, the matter density was much higher. That is why you can just see the $\Omega_M=0.28$, $\Omega_\Lambda=0.72$ curve turning down again at the highest redshifts on the plot. And here indeed is a more recent plot (from Riess et al.
2007), which appears to show this turn-down at higher redshifts quite clearly and delineates the turnover from a decelerating to accelerating universe at $z \sim 0.8$ (e.g. Daly et al. 2008).
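A small numerical sketch reproduces the key point. Under my own simplifications (flat models only, $\Omega_M + \Omega_\Lambda = 1$, distances in units of the Hubble distance $c/H_0$, plain trapezoid integration), the luminosity distance $d_L = (1+z)\int_0^z dz'/E(z')$ with $E(z) = \sqrt{\Omega_M (1+z)^3 + \Omega_\Lambda}$ comes out larger — hence the supernova fainter — in the accelerating model than in the matter-only decelerating one at the same redshift:

```python
import math

def lum_dist(z, omega_m, omega_l, n=10000):
    """Luminosity distance in units of c/H0 for a flat universe
    (omega_m + omega_l = 1), by trapezoid integration of
    d_C = int_0^z dz'/E(z'), then d_L = (1 + z) * d_C."""
    E = lambda zz: math.sqrt(omega_m * (1.0 + zz) ** 3 + omega_l)
    h = z / n
    d_c = sum(0.5 * h * (1.0 / E(i * h) + 1.0 / E((i + 1) * h))
              for i in range(n))
    return (1.0 + z) * d_c

z = 0.5
print(lum_dist(z, 1.0, 0.0))    # decelerating, matter-only model
print(lum_dist(z, 0.28, 0.72))  # accelerating model: larger d_L, fainter SN
```

For $\Omega_M = 1$ the integral has the closed form $d_L = 2(1+z)\left(1 - 1/\sqrt{1+z}\right)$, which is a convenient check on the integration.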
{ "domain": "astronomy.stackexchange", "id": 6830, "tags": "cosmology, dark-energy" }
Why do we need to include impulse by string?
Question: There is this paragraph in the solution of the question : Since ball A is suspended by an inextensible string, therefore, just after collision, it can move along horizontal direction only. Hence, a vertically upward impulse is exerted by thread on the ball A. This means that during collision two impulses act on ball A simultaneously. One is impulse interaction J between the balls and the other is impulsive reaction J’ of the thread. But, why do we need to include impulse by thread? Can't we just apply concept of conservation of mechanical energy? I mean, it got horizontal velocity and this horizontal velocity will act as tangential velocity and will help ball to complete one vertical circle. But, the book doesn't seem to understand this, or am I missing something? Answer: If there is no impulse via the thread and the collision between A and B is elastic then the outcome of the collision cannot be as described : ball A cannot move horizontally. Proof : Linear momentum must be conserved in the x and y directions. If A moves horizontally after the collision then B must also have a horizontal component of velocity which it did not have before. Ball A has no vertical velocity after the collision so the vertical component of B's velocity must be the same after the collision as it was before. B's total velocity and therefore its kinetic energy has increased, and A also has some kinetic energy after the collision. The balls have more kinetic energy after the collision than they had before. But this contradicts the assumption that the collision was elastic - ie that kinetic energy is conserved by the collision. Conclusion : Some other impulsive force must have provided the extra kinetic energy. This impulse must have come from the string.
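The verbal proof above can be written out under simplifying assumptions of mine (the figure is not reproduced here): both balls have equal mass $m$, B falls vertically with speed $u$, A is at rest before impact, and after impact A moves horizontally with speed $v_A$ while B has velocity $(u'_x, u'_y)$. Then, with no string impulse,

```latex
\begin{aligned}
\text{momentum in } x:&\quad 0 = m\,v_A + m\,u'_x \;\Rightarrow\; u'_x = -v_A,\\
\text{momentum in } y:&\quad -m\,u = m\,u'_y \;\Rightarrow\; u'_y = -u,\\
\Delta KE &= \tfrac12 m v_A^{2} + \tfrac12 m\big(u'^{2}_{x} + u'^{2}_{y}\big) - \tfrac12 m u^{2} = m\,v_A^{2} > 0 .
\end{aligned}
```

So an elastic collision with this outcome would create kinetic energy $m v_A^2$ from nowhere; the impulsive reaction of the thread is what resolves the contradiction.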
{ "domain": "physics.stackexchange", "id": 50538, "tags": "homework-and-exercises, newtonian-mechanics, collision, string" }
Smoke ring in space?
Question: I understand that a smoke ring is an example of a vortex. I sort of grasp why a vortex stays coherent - the rotary motion of component molecules causes a pressure difference within the vortex. I do not understand whether there is some interaction with the external environment that also keeps the vortex stable. The question: would a smoke ring blown in a vacuum stay a coherent ring? Or would the ring tear itself apart immediately without opposing external molecules to hold it in? A related but less extreme scenario would be a water ring "blown" into zero-gravity air. Answer: The smoke ring is an instance of a vortex, as you have mentioned. So, to begin with - How does the vortex work? The vortex effect is possible due to the viscous friction between the substance you inject and the environment it's injected in. How does the friction affect the injected substance? Its outer layers slow down, but the inner ones are still moving fast compared to the outer ones. A moving object will create a temporary deficit of pressure behind it; and the outer layers, which have been overtaken by the inner layers, are no fools! They immediately follow into the territory of low pressure. This way, the outer layers are "spinning up". I don't think it's necessary to analyse this in more detail for this question. I hope that this brief introduction is enough for your question to be answered: Is it possible for the smoke ring to form in space? The answer is: nope. In order for such a ring to exist, there should be an environment which will cause the friction and slow down the outer layers of the injected substance. As the friction in space will be almost nonexistent, the outer layers will not slow down and spin up (to a degree where the vortex effect could be noticed by you).
{ "domain": "physics.stackexchange", "id": 48236, "tags": "vortex" }
ROS Fuerte/Groovy install from source on ARM/Ubuntu 12.04
Question: I am attempting to install either Fuerte or Groovy from source on an ARM based processor. For those interested, the board is the ODroid-U2. There are various resources that I have used for assistance on the matter in regards to those who have attempted an install on pandaboards and raspberry pi. If interested, they are given throughout. I will mainly focus on my path in trying Fuerte, but if anyone has had it working with Groovy, I will go that route. I haven't had a direct issue following Fuerte+From_Source to install Fuerte. I have made the directories and gotten to the point where I perform the "rosdep install -ay" command, but I receive warnings similar to this. Specifically, the error pertains to the fact that PCL cannot be found. If you attempt to do a "rosmake -a", you also get an error related to this. I know this error is due to the fact that there isn't an ARM compatible version of ros-pcl, so I need to have the ability to compile it from source. This is where my primary problem is. I am not sure of how to do this. I know of the resource for pcl, but I am not sure of the correct way to compile it and include it into ROS. I am not too worried about the version of PCL at this time. I have also attempted to do the same thing with Groovy. I know there is an ARM based version of ROS to get started with Groovy, but it still does not include PCL. In either case, I have not been able to find a good source for a method of building PCL and would appreciate any help on the matter. Originally posted by orion on ROS Answers with karma: 213 on 2013-05-29 Post score: 0 Answer: I would start with the precompiled Groovy debs and then try to compile PCL and anything else you need on top of the deb-based install; you'll save yourself some time getting the base system set up. I think PCL compiles fairly well on ARM, but you'll probably want to adjust the build flags. 
In particular, flags that enable SSE optimizations will fail because ARM doesn't have SSE, and you'll want to turn the number of parallel builds down to 1, because PCL tends to use a lot of memory when compiling. Originally posted by ahendrix with karma: 47576 on 2013-05-29 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by kalectro on 2013-05-29: to increase your swap you can also format an USB drive as a swap device and use it for faster compilation. This way the swap is not written to the SD card Comment by orion on 2013-05-30: ahendrix, I completely agree and have gone this route. I have installed Groovy from the current deb-based install described on the ROS site. It is the installing PCL part that I am lost on. I am not sure of which PCL to get. I downloaded a few and they did compile correctly, but ROS never found it. Comment by kalectro on 2013-05-30: remove all other pcl installs and try sudo apt-get install ros-groovy-pcl Comment by orion on 2013-05-30: kalectro, thanks, but I have seen that solution many places. It will not work. I have not found a source that has an ARM compiled version of ros-groovy-pcl. I may have been missing something, but that didn't work for me. Only worked when compiling from source on X86/X64 based computer. Comment by kalectro on 2013-05-30: sorry... I thought there was a pcl binary available. If not, you can compile it from source using the version in my repo https://github.com/kalectro/pcl_groovy.git Use the rasPi branch Comment by orion on 2013-05-30: kalectro, that is the root of my problem. When I grab the source, where should I compile it to. Should I place it in the root of /opt, place in either the share/stacks locations within ROS, or place it in my workspace. I know it shouldn't matter, but it wasn't working for me when I tried. Comment by kalectro on 2013-05-30: clone the code into the src folder of your catkin environment. 
You can also create a new catkin workspace just for this purpose. Then compile the code using catkin_make_isolated --install Comment by orion on 2013-05-30: I attempted to build this but got the following error http://pastebin.com/DCbrCr2n. I attempted using my primary workspace as it was new and I am not sure how to setup multiple workspaces in groovy yet (was able to in Fuerte). Any ideas. Comment by kalectro on 2013-05-31: catkin_make_install --install -j1 Comment by orion on 2013-05-31: That still didn't work, however, I found what the problem was. Through some sources talking about how to change the CMake file, they all mention a setting armv6. When changing that to my setup (armv7) it worked on both the ROS PCL code and your modified CMake file. Comment by kalectro on 2013-06-01: sorry I forgot you were working on the Odroid... using the branch for the raspberry pi is not really a good idea then because it uses optimizations for its architecture. It should however work with the master branch Comment by orion on 2013-06-01: You are right. I figured that out through a few different avenues. I finally got it to compile, however, now I am having an issue with YAML, which I posted elsewhere. I will edit this soon with a combined solution between your and my discussion. Thanks very much for the help. Comment by uwlau on 2013-06-13: hey @orion. i do have the same problem with compiling PCL for my PandaBoardES. Do you have your "combined solution" yet? Comment by po1 on 2013-07-04: It has not been tested yet, but there are ARM binaries for PCL, built for the RPi. You may want to check this repository: http://64.91.227.57/repos/rospbian/
{ "domain": "robotics.stackexchange", "id": 14347, "tags": "robotic-arm, pcl, ros-fuerte, odroid, ros-groovy" }
How to control a Robot?
Question: Hello Everyone! How do I control a robot? I created this robot and don't know how to move it: http://gazebosim.org/wiki/Tutorials/1.9/build_robot/add_laser A lot of thanks!! Originally posted by vncntmh on Gazebo Answers with karma: 80 on 2013-09-23 Post score: 0 Answer: There are a few tutorials for older versions. The underscores used to set text to italic are introducing a few typos, so make sure to notice them. 1 - This tutorial shows how to set the model velocities: http://gazebosim.org/wiki/Tutorials/1.3/intermediate/control_model For the above tutorial you might also find this useful:
// ************* QUATERNION / POSE DATA ******************
static double qw, qx, qy, qz, Rrad, Prad, Yrad;
math::Vector3 p = model->GetWorldPose().pos;
math::Quaternion r = model->GetWorldPose().rot;
// from quaternion to Roll Pitch Yaw, in radians
qw = r.w; qx = r.x; qy = r.y; qz = r.z;
Rrad = atan2( 2*(qw*qx+qy*qz), 1-2*(qx*qx+qy*qy) ); // Roll
Prad = asin( 2*(qw*qy-qz*qx) );                     // Pitch
Yrad = atan2( 2*(qw*qz+qx*qy), 1-2*(qy*qy+qz*qz) ); // Yaw
// ***********************************************
// set velocities
float velx, vely;
velx = Vlin*cos(Yrad);
vely = Vlin*sin(Yrad);
this->model->SetLinearVel(math::Vector3(velx, vely, 0));
this->model->SetAngularVel(math::Vector3(0, 0, Vang));
// ***********************************************
2 - Or you can control the model by applying joint forces, but this needs a robot with joints as wheels: http://gazebosim.org/wiki/Tutorials/1.3/control_robot/mobile_base Hope this helps. Originally posted by GAugusto with karma: 161 on 2013-09-24 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Zahra on 2013-09-24: Hey, I have a question related to controlling the robot from the keyboard. I made a ros_enabled plugin to control the robot using the following method in this answer, by subscribing the twist type of msgs.
http://answers.gazebosim.org/question/1991/what-is-the-best-way-to0-learn-gazebo-in-ros-for/ I am using groovy, and gazebo 1.9.1 with gazebo _ros_pckgs is there a way to use the keyboard-teleop to control the robot (turtlebot) from the keyboard manually instead. thanks in advance! Comment by GAugusto on 2013-09-24: the idea should be the same, the robot should have the topics for the vel comands, you just need to edit whatever controller (or the plugin) your using to publish the velocity commands to those topics. you can also change the plugin to match the controllers topics, whatever u see fit, they just need to match. in the same link at the end i sugest the use of one keyboard controller and to edit the controller to publish to the respective topics. Comment by GAugusto on 2013-09-24: for example, if you have a controller publishing twist messages to "robot2/cmdvel" topic then you can change the plugin from "gz/cmdvel" to the same topic by changing the string name1 of the plugin from "gz" to "robot2". in case of a single robot it usualy does not have prefix so if you change the string name1 to "" (empty string) you will get the topic "/cmdvel". hope i was clear enough :S Comment by vncntmh on 2013-09-28: @GAuguto, When to load http://gazebosim.org/wiki/Tutorials/1.3/control_robot/mobile_base get a error: Error [Plugin.hh:127] Failed to load plugin my_plugin_with_sensor.so: my_plugin_with_sensor.so: cannot open shared object file: No such file or directory
{ "domain": "robotics.stackexchange", "id": 3464, "tags": "control, gazebo-1.9" }
Question about interesting topics for research
Question: I’m college student and I have to write a paper related with physics and math. I would like to have some interesting ideas of not so complex topics to investigate. Any suggestions? Answer: Solve 1-d diffusion equation $$ \frac{\partial n(x,t)}{\partial t} = D \frac{\partial^2 n(x,t)}{\partial x^2} $$ for $0 \le x \le L$ with initial distribution $n(x,0) = \delta(x-L/2)$. You may investigate the effect of boundary conditions: two reflection boundaries $$ \frac{\partial n(x,t)}{\partial x} \big ]_{x=0} = \frac{\partial n(x,t)}{\partial x} \big ]_{x=L} = 0. $$ two absorption boundaries $$ n(0, t) = n(L, t) = 0 $$ One reflection boundary and one absorption boundary $$ \frac{\partial n(x,t)}{\partial x} \big ]_{x=0} = 0; \text{ and } n(L, t) = 0. $$ Finally, you may look into partial absorption boundary, if you want to move further. This can be solve either numerically or analytically (in a fast convergent summation serious). I think this a very heuristic physic exercise with a good math content.
{ "domain": "physics.stackexchange", "id": 77654, "tags": "classical-mechanics, education" }
Why is a black hole black?
Question: In general relativity (ignoring Hawking radiation), why is a black hole black? Why nothing, not even light, can escape from inside a black hole? To make the question simpler, say, why is a Schwarzschild black hole black? Answer: It's surprisingly hard to explain in simple terms why nothing, not even light, can escape from a black hole once it has passed the event horizon. I'll try and explain with the minimum of maths, but it will be hard going. The first point to make is that nothing can travel faster than light, so if light can't escape then nothing can. So far so good. Now, we normally describe the spacetime around a black hole using the Schwarzschild metric: $$ds^2 = -\left(1-\frac{2M}{r}\right)dt^2 + \left(1-\frac{2M}{r}\right)^{-1}dr^2 + r^2 d\Omega^2$$ but the trouble is that the Schwarzschild time, $t$, isn't a good coordinate to use at the event horizon because there is infinite time dilation. You might want to look at my recent post Why is matter drawn into a black hole not condensed into a single point within the singularity? for some background on this. Now, we're free to express the metric in any coordinates we want, because it's coordinate independent, and it turns out the best (well, simplest anyway!) coordinates to use for this problem are the Gullstrand–Painlevé coordinates. In these coordinates $r$ is still the good old radial distance, but $t$ is now the time measured by an observer falling towards the black hole from infinity. This free falling coordinate system is known as the "rainfall" coordinates and we call the time $t_r$ to distinguish it from the Schwarzschild time. Anyhow, I'm going to gloss over how we convert the Schwarzschild metric to Gullstrand–Painlevé coordinates and just quote the result: $$ds^2 = \left(1-\frac{2M}{r}\right)dt_r^2 - 2\sqrt{\frac{2M}{r}}dt_rdr - dr^2 -r^2d\theta^2 - r^2sin^2\theta d\phi^2$$ This looks utterly hideous, but we can simplify it a lot. 
We're going to consider the motion of light rays, and we know that for light rays $ds^2$ is always zero. Also we're only going to consider light moving radially outwards so $d\theta$ and $d\phi$ are zero. So we're left with a much simpler equation: $$0 = \left(1-\frac{2M}{r}\right)dt_r^2 - 2\sqrt{\frac{2M}{r}}dt_rdr - dr^2$$ You may think this is a funny definition of simple, but actually the equation is just a quadratic. I can make this clear by dividing through by $dt_r^2$ and rearranging slightly to give: $$ - \left(\frac{dr}{dt_r}\right)^2 - 2\sqrt{\frac{2M}{r}}\frac{dr}{dt_r} + \left(1-\frac{2M}{r}\right) = 0$$ and just using the equation for solving a quadratic gives: $$ \frac{dr}{dt_r} = -\sqrt{\frac{2M}{r}} \pm 1 $$ And we're there! The quantity $dr/dt_r$ is the radial velocity (in these slightly odd coordinates). There's a $\pm$ in the equation, as there is for all quadratics, and the -1 gives us the velocity of the inbound light beam while the +1 gives us the outbound velocity. If we're at the event horizon $r = 2M$, so just substituting this into the equation above for the outbound light beam gives us: $$ \frac{dr}{dt_r} = 0 $$ Tada! At the event horizon the velocity of the outbound light beam is zero so light can't escape from the black hole. In fact for $r < 2M$ the outbound velocity is negative, so not only can light not escape but the best it can do is move towards the singularity.
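The final result is easy to probe numerically. A toy check of mine (units with $G = c = 1$): the "outbound" coordinate velocity is $dr/dt_r = 1 - \sqrt{2M/r}$, which vanishes exactly at $r = 2M$, and Euler-stepping an outgoing ray released just inside the horizon shows it drifting inward anyway:

```python
import math

def dr_dt_out(r, M):
    """Coordinate velocity of the outgoing radial light ray in
    Gullstrand-Painleve (rain-frame) coordinates, with G = c = 1."""
    return 1.0 - math.sqrt(2.0 * M / r)

M = 1.0
print(dr_dt_out(2.0 * M, M))   # zero at the horizon r = 2M

# Euler-step an "outgoing" ray released just inside the horizon:
r, t, dt = 1.99 * M, 0.0, 1e-3
while r > 0.5 * M:
    r += dr_dt_out(r, M) * dt
    t += dt
print(r, t)
```

Outside the horizon (try r = 8M) the same expression is positive, so there the outgoing ray really does escape.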
{ "domain": "physics.stackexchange", "id": 38204, "tags": "general-relativity, gravity, black-holes, speed-of-light" }
The order of which time complexity is higher, 3 ^ log(n) or n ^ 3?
Question: Which one has a higher order time complexity: $n ^ 3$ $3 ^ {\log n}$ I know that an exponential time order is higher than polynomials. However, it uses a logarithmic power, which is low for large n, while n^3 could still remain high, no? Answer: The answer easily follows from the identity $3^{\log n} = n^{\log 3}$. Whether or not $\log 3 < 3$ depends on the base of the logarithm.
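A quick numerical illustration of the identity (base 2 is my arbitrary choice here):

```python
import math

def growth(n, base=2.0):
    """Return (3**log_b(n), n**log_b(3), n**3). The first two are equal
    by the identity 3**log(n) == n**log(3)."""
    return 3.0 ** math.log(n, base), n ** math.log(3.0, base), n ** 3

a, b, c = growth(1024)   # base 2: log2(3) ~ 1.585, so n**3 grows faster
print(a, b, c)
```

Concretely, $\log_b 3 < 3$ exactly when $b > 3^{1/3} \approx 1.44$, so for any of the usual bases (2, $e$, 10) the exponent $\log 3$ is below 3 and the polynomial $n^3$ dominates.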
{ "domain": "cs.stackexchange", "id": 10760, "tags": "time-complexity" }
Any support for Motoman HC10?
Question: Currently performing a benchmark for collaborative robots using ROS. I have been searching but I cannot find support for this robot. Can anybody confirm this? Thanks. Originally posted by Walter_SG on ROS Answers with karma: 1 on 2017-09-24 Post score: 0 Answer: The Motoman HC10 and HC20 are now available in the Motoman repo with the motoman_driver v1.9.0. However, some of the collaborative robot features have limitations under ROS. You should carefully review the associated note: HC Robot Notes for ROS Originally posted by Eric Marcil with karma: 16 on 2020-09-24 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2020-09-24: I've made this the accepted answer, as, as of 1.9.0, HC robots are indeed supported by MotoROS.
{ "domain": "robotics.stackexchange", "id": 28912, "tags": "ros, motoman" }
Is the Sun-set and the Sun-rise Symmetrical for the Observer?
Question: Is there the effect of sun rising and sun setting, in terms of Rayleigh scattering and visual spectrum and other factors completely similar and symmetric? I mean can one recognise them from a picture taken from the sky? Answer: The average air temperature is always lower at sunrise, which changes the atmospheric refraction infinitesimally. On the moon, you would only have the tiny difference from the doppler shift due to your motion relative to the sun, so that sunrise would be a teeny-weeny bit bluer than sunset.
{ "domain": "physics.stackexchange", "id": 1690, "tags": "optics" }
How can one get the eccentricity of the orbit of the Sun around center of the Milky Way?
Question: How can one get the eccentricity of the orbit of the Sun around center of the Milky Way? Can it be measured? Answer: Short answer, no. The Sun's orbit is non-Keplerian; there are many perturbations and a general unevenness in the motion of the Sun around the Galactic centre. This is a result of non-uniform mass distributions, the galaxy not being a point mass, and the impact the relative motions of neighbour stars has on measuring. Thus, giving a particular eccentricity for the Sun is almost meaningless. For instance, it fluctuates up and down roughly $2.7$ times per orbit and it passes through high density regions which cause major perturbations. This creates instability in any average eccentricity. Long answer, it is not impossible. In theory, we could measure it. However, we have two rest frames; local and standard rest. The local rest frame refers to how we can take the average motion of stars within (say) $100~pc$ and use this average to compute our approximate orbital properties. The standard rest frame refers to us using Oort constants/properties and similar things in order to determine our more specific motion around the galaxy based on accelerational perturbations, etc. Both frames have their own advantages and both give slightly different values for our currently computed orbital characteristics. The problem lies with determining the relative weights each might contribute to an eccentricity value. While the motion of the Sun may be non-Keplerian, we do know that the circular velocity is around $230{km\over s}$ and the peculiar velocity is on order $15{km\over s}$. This leads many astronomers to say that while measuring the eccentricity would be very hard and calculating it would be near impossible, they can say that it is most likely on the order of a few percent. Definitely less than $10\%$, but a value in the range of $e=0.02-0.08$ would be the most likely.
{ "domain": "physics.stackexchange", "id": 8117, "tags": "astronomy, orbital-motion, sun, observational-astronomy, milky-way" }
Multinomial theorem and binomial factor (case for Bosons)
Question: I am trying to understand the meaning behind the binomial factor and the multinomial theorem when dealing with problems in statistical mechanics, mostly combinatorics-related problems. This is the problem that I am currently dealing with: a particle can be in 10 different energy states. Now we have 2 indistinguishable boson particles, and for them we want to find the number of possible microstates in which the system can be found. The solution is: $$\frac{(2 + 10 - 1)!}{2!\,(10 - 1)!}$$ What is the logic behind this expression and how did we reach it? I want to understand the logic behind the expression. I know that my problem starts with the binomial expression and what it means in combinatorics, and only after I understand that can I go on to the multinomial theorem. Can anyone provide a detailed and comprehensive explanation as to how this formula comes to be and how it helps in the above problem? Answer: This combinatorics problem considers all permutations of 2 particles and 9 separatrices, illustrated as follows: the two blue lines denote the two indistinguishable boson particles, and each of the 9 red lines is a separatrix between two consecutive energy levels. The first part of the figure represents a configuration of the state $\vert 1,1 \rangle$, the second is $\vert 3,8 \rangle$, and the third $\vert 6,10 \rangle$. Each distinct permutation of these 11 lines (2 blue lines and 9 red lines) represents a configuration. The total number of permutations is: $$ N_{total} = \frac{11!}{2!\,9!}. $$
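The stars-and-bars count above is easy to check directly. A minimal sketch (the function name `boson_microstates` is mine, not from the source):

```python
from math import comb, factorial

def boson_microstates(n_particles, n_states):
    """Stars and bars: permutations of n particles and (g-1) separatrices,
    i.e. C(n + g - 1, n) configurations for n indistinguishable bosons in g states."""
    return comb(n_particles + n_states - 1, n_particles)

# 2 indistinguishable bosons in 10 energy states:
total = boson_microstates(2, 10)
assert total == factorial(11) // (factorial(2) * factorial(9))
print(total)  # 55
```

The binomial coefficient $\binom{n+g-1}{n}$ and the permutation count $\frac{(n+g-1)!}{n!\,(g-1)!}$ are the same number, which is exactly the equivalence the question asks about.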
{ "domain": "physics.stackexchange", "id": 80454, "tags": "statistical-mechanics, bosons" }
Ab initio method to calculate C-C bond dissociation energy?
Question: I'm trying to review how to calculate the bond dissociation energy for C-C in ethane, a very simple calculation (or it should be). I get the following Hartree energies for SCF/6-31G(p) for methane, ethane, and H2: ch4_spe -40.194639920746, ethane -79.228125042573, h2 -1.126456057886. I would think that to get the C-C bond dissociation energy, I would simply do (Perl code) $energies{ethane} - $energies{ch4_spe} * 2 - $energies{h2}, correct? The reason I'm asking is that this gives an answer of 2.2876 Hartree using NWChem, which converts to 6006 kJ/mol, off by about a factor of 17 from the correct value of about 346 kJ/mol. I know that SCF isn't that accurate, but it shouldn't be this far off. I optimized the bond lengths using Avogadro. Is this the correct method to calculate the C-C bond energy? Answer: This "experiment" is about determining bond dissociation energy. The general way to solve this isn't by calculating enthalpies of reaction, as I initially thought, but rather by calculating the energy of two CH3 radicals and comparing it with ethane. As another user pointed out, this should be done with unrestricted Hartree-Fock and with diffuse basis sets. Thus, the bond dissociation energy is 2*energy(methyl radical) - energy(ethane).
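As a sanity check of the question's arithmetic and the corrected formula, here is a sketch. The SCF energies are the questioner's; the Hartree-to-kJ/mol factor is the standard conversion; the `cc_bde` helper is my own naming, and the methyl-radical energy it expects would come from your own UHF calculation (no value is invented here):

```python
HARTREE_TO_KJ_PER_MOL = 2625.5  # standard conversion factor (approx.)

# SCF/6-31G(p) single-point energies quoted in the question, in Hartree
E = {"ethane": -79.228125042573, "ch4": -40.194639920746, "h2": -1.126456057886}

# The question's expression is the C2H6 + H2 -> 2 CH4 reaction energy, not a BDE
delta = E["ethane"] - 2 * E["ch4"] - E["h2"]
print(delta, delta * HARTREE_TO_KJ_PER_MOL)  # ~2.2876 Hartree, ~6006 kJ/mol

def cc_bde(e_ethane, e_methyl_radical):
    """Homolytic C-C BDE: energy of two CH3 fragments minus ethane, in kJ/mol."""
    return (2 * e_methyl_radical - e_ethane) * HARTREE_TO_KJ_PER_MOL
```

Reproducing the 6006 kJ/mol figure confirms the arithmetic was fine; the error was in the choice of reaction, which is what the answer corrects.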
{ "domain": "chemistry.stackexchange", "id": 13118, "tags": "computational-chemistry, ab-initio" }
Normalized Squared Error vs Pearson correlation as similarity measures of two signals
Question: Which measure should be considered better, and when? I tested both measures on some data that I have and got mixed results, i.e., some show better results with Pearson and some with the normalized squared error. By similarity I mean similar shapes of the signals that I have. I normalized the data before applying the measures to them. Answer: It depends on the normalization that you perform on the data. Note that for computing the Pearson correlation coefficient you subtract the means of the signals. This is normally not the case if you simply compute a mean squared error between the signals, unless mean removal is part of your normalization procedure. I assume you compute the Pearson correlation coefficient as $$r=\frac{\sum_i(x_i-m_x)(y_i-m_y)}{\sqrt{\sum_i(x_i-m_x)^2}\sqrt{\sum_i(y_i-m_y)^2}}$$ where $x_i$ and $y_i$ are the data to be compared, and $m_x$ and $m_y$ are their sample means. If the sample means are different, this will have no influence on the correlation coefficient $r$. Different means will, however, influence the mean squared error between the signals: $$MSE=\frac{1}{n}\sum_{i=1}^{n}(x_i-y_i)^2$$ unless the normalization takes care of it. If you assume that both signals have zero mean (or that the mean has been removed by normalization), and that both signals have been normalized to have an average power of 1, $$\frac{1}{n}\sum_{i=1}^{n}x_i^2=1,\quad \frac{1}{n}\sum_{i=1}^{n}y_i^2=1,$$ then both error measures simplify to $$r=\frac{1}{n}\sum_{i=1}^{n}x_iy_i,\quad MSE=2\left(1-\frac{1}{n}\sum_{i=1}^{n}x_iy_i\right)=2(1-r).$$ This means that with an appropriate normalization which removes the mean of both signals and normalizes their power to unity, both error measures are equivalent and one can be computed from the other. Summarizing: in general the Pearson correlation coefficient gives you a better idea of the similarity of two signals, and if you normalize the signals appropriately, the MSE is equivalent to the correlation coefficient $r$, with a simple relation between them.
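The identity $MSE = 2(1-r)$ for zero-mean, unit-power signals can be verified numerically. A quick sketch with arbitrary synthetic test signals (the `normalize` helper is mine and simply implements the mean-removal and power-scaling described above):

```python
import math
import random

random.seed(0)
n = 1000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.7 * xi + 0.3 * random.gauss(0, 1) for xi in x]  # correlated with x

def normalize(s):
    """Remove the mean and scale to unit average power."""
    m = sum(s) / len(s)
    centered = [v - m for v in s]
    power = sum(v * v for v in centered) / len(centered)
    return [v / math.sqrt(power) for v in centered]

x, y = normalize(x), normalize(y)
r = sum(a * b for a, b in zip(x, y)) / n          # Pearson r after normalization
mse = sum((a - b) ** 2 for a, b in zip(x, y)) / n  # MSE after normalization
assert abs(mse - 2 * (1 - r)) < 1e-9
```

The assertion holds for any pair of signals, since the relation is algebraic rather than statistical: $\frac{1}{n}\sum(x_i-y_i)^2 = 1 + 1 - 2r$.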
{ "domain": "dsp.stackexchange", "id": 983, "tags": "correlation, waveform-similarity, normalization" }