| anchor | positive | source |
|---|---|---|
QFT on curved spacetime, uniqueness of spacelike hypersurface | Question: Consider the Lagrangian of a real, scalar field coupled to gravity via the metric $g_{\mu\nu}$ and covariant derivative $\nabla_\mu$
$$\mathcal{L} = \sqrt{-g} (-\frac{1}{2} g^{\mu\nu} \nabla_\mu \phi \nabla_\nu \phi -\frac{1}{2}m^2\phi^2). $$
After canonical quantization, we can expand the field operator in terms of positive- and negative-frequency modes as (putting the theory in a box)
$$ \hat{\phi}(x) = \sum_k [\hat{a}_k u_k(x) + \hat{a}_k^\dagger u^*_k(x)]. $$
However, different observers generally won't agree on this split between the modes and will identify particle states differently. So this expansion is not unique, and for each such expansion we need to find a complete set of orthonormal modes $\{v_k(x)\}$, where orthonormality is w.r.t. the Klein-Gordon inner product:
$$ (\phi_1, \phi_2) := -i \int \sqrt{g_\Sigma} [\phi_1 \nabla_\mu \phi_2^* - \phi_2^* \nabla_\mu \phi_1] d\Sigma^\mu.$$
Here $\Sigma$ is a space-like hypersurface and $g_\Sigma$ the induced metric.
My question is about the uniqueness of this hypersurface. In particular, how arbitrary is this choice? For example, would choosing a coordinate chart or defining a worldline fix this surface? Also, if coordinates fix it, I’d expect isometries to result in equivalent definitions of vacuum. Is this correct?
Answer:
how arbitrary is this choice?
It needs to be a Cauchy surface. In other words, it has to intersect every causal curve exactly once. If it missed some curve, you would be missing information on the spacetime. If it intersected some curve more than once, you'd have issues with causality in the spacetime and might be giving too much information to the equations of motion. The existence of such a surface is equivalent to requiring the spacetime to be globally hyperbolic, which is a common assumption within QFTCS.
Any choice of Cauchy surface is equivalent, as one can show from the conservation laws associated to the Klein–Gordon equation and some fooling around with Stokes' theorem.
If I recall correctly, this is discussed in Chap. 14 of Wald's General Relativity textbook (in bold, because I do not mean the QFTCS book).
For example, would choosing a coordinate chart or defining a worldline fix this surface?
Not necessarily. For example, pick the Schwarzschild coordinates in the maximally extended Schwarzschild spacetime. They only cover a part of the spacetime, so they are not enough to specify a Cauchy surface. They might be enough to specify a Cauchy surface on a smaller portion of the spacetime, though (such as one of the outer regions of the black and white holes).
Defining a worldline won't help you fix such a surface, since you actually need a spacelike hypersurface cutting through all of spacetime. A single observer does not have access to that.
Also, if coordinates fix it, I’d expect isometries to result in equivalent definitions of vacuum. Is this correct?
A timelike Killing field often leads you to a preferred notion of vacuum, so, in this sense, yes. However, there are caveats.
Kerr spacetime, for example, does not have a Killing-invariant non-singular vacuum state. (Wald's QFTCS book mentions this when discussing the Unruh effect in curved spacetime; there's probably a detailed reference in there.)
Different observers tangent to the same Killing field might have different notions of particles. For example, pick two accelerated observers with different accelerations in Minkowski spacetime. Both of them are moving tangentially to the boost Killing field, but they disagree on the temperature of the Unruh bath, since they have different accelerations. | {
"domain": "physics.stackexchange",
"id": 93116,
"tags": "general-relativity, quantum-field-theory, differential-geometry, klein-gordon-equation, qft-in-curved-spacetime"
} |
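The surface independence asserted in the answer can be checked numerically in the simplest setting: flat 1+1-dimensional spacetime with massless modes in a box, where the $t = \text{const}$ surfaces are Cauchy surfaces. The sketch below (hypothetical helper names, midpoint-rule integration; a toy check, not the general proof) evaluates the Klein-Gordon inner product on two different surfaces:

```python
import cmath
import math

def kg_inner(u1, du1dt, u2, du2dt, L, num=2000):
    # Klein-Gordon inner product restricted to a t = const surface of a
    # 1+1-D box: (u1, u2) = -i * Int_0^L (u1 d_t u2* - u2* d_t u1) dx,
    # evaluated with the midpoint rule
    h = L / num
    total = 0j
    for j in range(num):
        x = (j + 0.5) * h
        total += u1(x) * du2dt(x).conjugate() - u2(x).conjugate() * du1dt(x)
    return -1j * h * total

def box_mode(k, L, t):
    # massless positive-frequency mode u_k = e^{i(k x - w t)} / sqrt(2 w L)
    # restricted to the surface t = const; returns (u, du/dt) as functions of x
    w = abs(k)
    norm = 1.0 / math.sqrt(2.0 * w * L)
    u = lambda x: norm * cmath.exp(1j * (k * x - w * t))
    return u, (lambda x: -1j * w * u(x))
```

Evaluating the norm of the same mode on the surfaces $t=0$ and $t=0.7$ gives 1 in both cases, and distinct modes come out orthogonal, consistent with the conservation-law argument.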
Calculating and improving time and memory complexity | Question: private static int Sum (int[] a, int from, int to){
int total=0;
for (int i=from; i <= to; i++)
total += a[i];
return total;
}
public static int Method3 (int []a){
int temp=0;
for (int i=0; i < a.length; i++)
{
for (int j=0; j < a.length; j++)
{
int c = Sum(a,i,j);
if (c%3 == 0)
{
if (j-i+1 > temp)
temp = j-i+1;
}
}
}
return temp;
}
The purpose of the Method3 method is to find the length of the longest contiguous subarray of the given array whose sum is divisible by 3 without remainder.
How do I make it more efficient in terms of both time and memory complexity?
How do I even approach to something like this?
How can I know that the complexity I've reached is the best possible?
Answer: It is possible to make it much more efficient by using a completely different algorithm. The idea is to take a closer look at prefix sums. Let's assume that a subarray [L, R] is divisible by 3. It means that prefixSum[R] - prefixSum[L - 1] == 0 (mod 3), or prefixSum[R] == prefixSum[L - 1] (mod 3). It results in a simple solution: iterating over all elements of the array from left to right and maintaining the current prefix sum modulo 3 and keeping track of the first occurrence of each value modulo 3. The code can look like this:
int getLongestSubarray(int[] a) {
int[] firstOccurrence = new int[3];
firstOccurrence[0] = -1;
firstOccurrence[1] = a.length;
firstOccurrence[2] = a.length;
int prefixSum = 0;
int result = 0;
for (int i = 0; i < a.length; i++) {
prefixSum = (prefixSum + a[i]) % 3;
if (prefixSum < 0)
prefixSum += 3;
result = Math.max(result, i - firstOccurrence[prefixSum]);
firstOccurrence[prefixSum] = Math.min(firstOccurrence[prefixSum], i);
}
return result;
}
The time complexity is O(n) and the space complexity is O(1). It is optimal because it is not possible to find the longest subarray divisible by 3 without seeing all elements of the input array. | {
"domain": "codereview.stackexchange",
"id": 11442,
"tags": "java, complexity"
} |
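The prefix-sum trick in the answer is language-independent; here is a hypothetical Python rendering of the same algorithm (a dictionary replaces the fixed-size firstOccurrence array, and Python's % already returns a non-negative residue):

```python
def longest_div3_subarray(a):
    # first index at which each prefix-sum residue mod 3 occurs;
    # residue 0 "occurs" before the array starts, hence index -1
    first_occurrence = {0: -1}
    prefix = best = 0
    for i, x in enumerate(a):
        prefix = (prefix + x) % 3   # non-negative even for negative inputs
        if prefix in first_occurrence:
            best = max(best, i - first_occurrence[prefix])
        else:
            first_occurrence[prefix] = i
    return best
```

As in the Java version, the running time is O(n) with O(1) extra space.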
Minimizing proper time | Question: I've started studying general relativity course and now I have a question about proper time. Consider functional
$$S[x]=-\int_A^B ds,$$
where $A$, $B$ are fixed points of the space-time and $ds^2=dt^2-dx^2$ (let our space-time be two-dimensional without loss of generality). Finding its minimum is equivalent to maximizing proper time $s=\int_A^B ds$, and it is a well-known fact that maximizers of proper time are straight lines, as can be easily checked by writing the Lagrange equation for $\mathcal{L}(t,x,\dot{x})=-\sqrt{1-\dot{x}^2}$. So my question is: what are minimizers of proper time?
I've heard that there are, hm, lots of them, but I can't write down any. If one wants to minimize $s$, he takes lagrangian $\mathcal{L}=\sqrt{1-\dot{x}^2}$ and he obtains that
$$\ddot{x}=0,$$
so minimizers necessarily are straight lines. But we have already proved that they are maximizers, so I come to contradiction. Where am I wrong?
Answer:
Given two (timelike separated) points in Minkowski space, there is a unique timelike curve that maximizes the proper time, namely the straight line, as OP already mentions. However there is no timelike curve that minimizes the proper time. Nevertheless proper time does have an infimum, namely zero. This is essentially because a massive point particle can always fly a bit closer to the speed of light without reaching it.
For a general action functional, the Euler-Lagrange (EL) equations yield stationary configurations. Extremal configurations might not exist.
Example. For a differentiable function $f:I\to \mathbb{R}$ on an (open or closed) interval $I$, recall that a stationary point is neither a necessary nor a sufficient condition for an extremum for $f$. Similar statements are true in calculus of variations. | {
"domain": "physics.stackexchange",
"id": 61118,
"tags": "general-relativity, special-relativity, lagrangian-formalism, variational-principle, geodesics"
} |
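The infimum-zero claim in the answer can be made concrete: a timelike zigzag between the same two endpoints at coordinate speed $v$ has proper time $T\sqrt{1-v^2}$, which can be made as small as desired by taking $v \to 1$. A quick numeric illustration (hypothetical helper name):

```python
import math

def zigzag_proper_time(T, v):
    # proper time of a timelike zigzag from (t, x) = (0, 0) to (T, 0)
    # travelled at constant coordinate speed |dx/dt| = v < 1; with
    # ds^2 = dt^2 - dx^2 each leg contributes dt * sqrt(1 - v^2), so the
    # total is T * sqrt(1 - v^2) regardless of the number of zigzags
    return T * math.sqrt(1.0 - v * v)
```

Listing the values for v = 0.0, 0.9, 0.99, 0.999 shows the proper time decreasing toward zero while never reaching it, which is why a minimizer does not exist.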
Why doesn't stovetop glass crack from thermal shock? | Question: I have a feeling that if my stovetop was made out of regular glass, it would have cracked from rapid cooling/heating a long time ago – especially if you consider accidental water spills. What kind of glass is able to withstand such abuse?
(I am just looking for terms to google for further reading.)
Answer: Stovetops are made of glass-ceramic, which has extremely low thermal expansion, hence no cracking from temperature change. In fact, the coefficient, at $0.1 \cdot 10^{-6} \ 1/K$, is even lower than that of borosilicate glass at $3.3 \cdot 10^{-6} \ 1/K$. Since glass-ceramic can reach a negative coefficient of thermal expansion, getting even closer to zero is just a matter of engineering (thanks to @Volker Siegel for this interesting fact).
The brand name for one kind of glass-ceramic, Ceran (by German company Schott) is in German often used to generally describe glass-ceramic stovetops. Borosilicate glass is, to my knowledge, used only for heat-resistant cookware, but not the stovetops themselves. | {
"domain": "engineering.stackexchange",
"id": 3493,
"tags": "glass"
} |
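To put the quoted coefficients in perspective, the linear thermal strain is $\Delta L/L = \alpha \, \Delta T$. A rough comparison for a 500 K temperature swing (an assumed, plausible stovetop excursion, not a figure from the answer):

```python
def thermal_strain(alpha_per_K, delta_T_K):
    # small-strain approximation: dL/L = alpha * dT
    return alpha_per_K * delta_T_K

# coefficients quoted in the answer, in 1/K
STRAIN_GLASS_CERAMIC = thermal_strain(0.1e-6, 500.0)
STRAIN_BOROSILICATE = thermal_strain(3.3e-6, 500.0)
```

Glass-ceramic strains about 33 times less than borosilicate over the same swing, which keeps the resulting thermal stress far below fracture strength.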
Motivation for the use of 1-forms in General Relativity | Question: During a course I took on General Relativity, the professor started with an introduction on differential geometry. Vectors were properly motivated: he said that since the differential manifold doesn't have distances it didn't make sense to define vectors as the displacement between two points; we had to use something infinitesimal instead. Then, he gave several advantages for using derivatives as vectors instead of the classic euclidean two-point arrows.
However, when he started explaining 1-forms... he just said what they were and moved on. I don't really understand why we need 1-forms. Also, I don't know if 1-forms are also a generalization of a Euclidean concept (such as vectors).
I've read many questions about 1-forms, but none of them asked for a clear motivation for introducing them in a General Relativity course. So that's the question: What is the motivation for using 1-forms in General Relativity? What are they useful for? Can't we just use vectors and then introduce a metric to have an inner product?
Note: I have checked many books looking for a proper motivation but I just find the definition followed by the usual interpretation of 1-forms as perpendicular planes in space. I've read Gravitation, Carrol's book and Schutz' both books.
To be clear, I don't need a physical explanation; what I want is motivation for using 1-forms when we can just use the metric and two vectors if we want an inner product.
Answer: The notion of differential forms depends on several structures:
the wedge product
the dual space
the tangent bundle
sections
So it's not surprising that your lecturer wasn't able to motivate them as easily as vectors! Let's take these step by step:
1. The wedge product
In a 3d vector space we have the additional structure of an inner product and the cross product. These have geometric interpretations. However, when we generalise to a vector space of any dimension it's easy to see that the inner product generalises in an obvious way. Not so the cross product. In fact, this is only available in 3d.
Recall that the scalar triple product $u \cdot (v \times w)$ gives the volume of the parallelepiped formed by the sides $u,v,w$. It is this property that generalises.
Given a parallelepiped in an n-dimensional vector space $V$ (this is the generalisation of a parallelogram in the plane) whose sides are $v_1,\dots, v_n$, the wedge product $v_1\wedge \dots \wedge v_n$ gives us the signed volume. It turns out that such products are vectors in their own right, but they don't lie in the same vector space as $V$. We call them $k$-vectors and say that $v_1\wedge \dots \wedge v_k$ lies in $\wedge^k V$.
2. The dual space
The dual space of a vector space $V$ is usually written $V^*$. It consists of all linear functions to the real line, $f:V\rightarrow \mathbb{R}$. What does this mean? Each function is linear, so we can think of it as a kind of measurement or metric on the vector space. It tells us how to measure a vector. Thus $V^*$ is the space of all the ways we can measure vectors in $V$.
3. The Tangent Space
Given a manifold $M$, we can construct its tangent bundle $TM$. The easiest example to visualise is when the manifold is a curve or surface. Let's take the curve first: at every point of the curve $C$ we can draw the tangent line to it; this line extends to infinity and is a 1d vector space. We bundle them all together into the bundle $TC$, and the tangent line at the point $p$ on the curve is $T_pC$. Similarly for a surface $S$, at each point $p$ of the surface we can draw the tangent plane to it; we write this as $T_pS$ and we bundle them all together into the bundle $TS$.
Now any bundle $E$ over a manifold $M$ has a projection map $\pi:E\rightarrow M$, and this is how they are usually referred to. It tells us where the 'fibres' are attached to. Take the first example, $TC$, the tangent bundle of the curve: let $v$ be a vector in one of the tangent spaces, say $T_pC$ - this means that $v$ is in the tangent line (rather, vector space) that is defined (or attached) at the point $p$ of the curve. The projection map $\pi$ simply maps $v$ to the point $p$. So we can see that the image of the entire space $T_pC$ is just the point $p$.
4. Sections
Given a bundle $\pi:E\rightarrow M$ then we can take its space of sections $CE$. This is the space of all maps $s:M\rightarrow E$ such that $\pi\circ s = Id_M$. For example, suppose $E$ was a bundle of vector spaces over the manifold $M$; then a section is a choice of a vector in each fibre. It is a vector field.
Construction of differential forms
Finally we put all these structures together: we construct the bundles $\wedge^k T^*M$. That is, we take the manifold $M$, construct the tangent bundle $TM$ over it, take its dual $T^*M$, and finally take the $k^{th}$ wedge $\wedge^k T^*M$. The sections of this bundle form $C(\wedge^k T^*M)$, and this is the space of all $k$-differential forms; it is usually written (at least by mathematicians and sometimes others) as $\Omega^kM$.
Uses
It turns out that we have a map $d^k:\Omega^kM \rightarrow \Omega^{k+1}M$ called the exterior derivative (another name for the wedge product is the exterior product) and this generalises the $grad$ operator in vector analysis. That is $d^0=grad$. The other vector analysis operators - $div$ & $curl$ - are variants of this.
It also turns out that when we integrate a form $\omega$ over a manifold $M$ we get a generalisation of Stokes' theorem: $\int_M d\omega=\int_{\partial M} \omega$, where $\partial M$ is the boundary of the manifold.
Conclusion
Thus we see that differential forms allow us to generalise the vector analysis that we're already familiar with in 3d Euclidean space to the context of manifolds of any dimension. This is important given the importance of vector analysis in physics. But they have many other uses, for example de Rham cohomology. They also bring in many other notions that are important, for example vector, fibre and principal bundles.
There is a formulation of General Relativity that uses a connection on the frame bundle of the tangent bundle and this a principal bundle with structure group the Lorentz group. This connects to the way the other forces are described, for example electromagnetism, the electroweak and the strong force are described as principal bundles with structure group $U(1),SU(2)$ and $SU(3)$ respectively in the Standard Model. | {
"domain": "physics.stackexchange",
"id": 46483,
"tags": "general-relativity, differential-geometry, tensor-calculus"
} |
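The signed-volume interpretation of the wedge product in section 1 is easiest to see in the plane: for $u, v \in \mathbb{R}^2$, the coefficient of $u \wedge v$ with respect to the basis 2-vector $e_1 \wedge e_2$ is the signed area of the parallelogram they span. A minimal sketch (hypothetical function name):

```python
def wedge2(u, v):
    # coefficient of u ^ v on e1 ^ e2 in the plane, i.e. the signed
    # area of the parallelogram with sides u and v
    return u[0] * v[1] - u[1] * v[0]
```

The two defining algebraic properties are visible immediately: the result flips sign when the arguments are swapped (orientation), and it vanishes when the sides are parallel (degenerate parallelogram).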
Partial trace - experimental implementation and calculation | Question: How does one actually take the partial trace on a quantum computer/real experiment? Wikipedia says that this is a valid quantum operation but I can't see how to implement it. Given an entangled pure state $\psi_{AB} \in H_{A}\otimes H_{B}$, I wish to do some operations and measurements to obtain $\rho_A = \sum_{i\in H_B} \langle i\vert \rho_{AB} \vert i\rangle$.
Since $\rho_A$ has many possible purifications, this computation is not unitary, but $\rho_A$ is unique. Applying a projective measurement $\sum_i \vert i\rangle\langle i\vert$ on $B$ doesn't work. I somehow need to "forget" that the state is actually entangled and "lose" the $H_{B}$ part of the state, but this is (correct me if I'm wrong) not allowed in quantum information.
So if I have a single copy of a quantum bipartite state, what quantum circuit should I use that spits out the partial trace? Also, I'd love to know if such a circuit exists, what the computational complexity of it would be.
Answer: The circuit will not "spit out" the partial trace. But what you can do is to just look at the A part of the system, and ignore the B part. The A part will be described by the reduced density matrix, and in particular, any measurement/operation you perform will be. | {
"domain": "physics.stackexchange",
"id": 53273,
"tags": "quantum-mechanics, quantum-information"
} |
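The answer's point, that the A subsystem alone is described by the reduced density matrix, can at least be computed classically. A pure-Python sketch of the partial trace over B (hypothetical function; basis ordering $|i_A, k_B\rangle \mapsto i \cdot d_B + k$ is an assumption):

```python
def partial_trace_B(rho, dA, dB):
    # (rho_A)_{ij} = sum_k <i,k| rho |j,k>, tracing out subsystem B;
    # rho is a (dA*dB) x (dA*dB) matrix given as nested lists
    return [[sum(rho[i * dB + k][j * dB + k] for k in range(dB))
             for j in range(dA)]
            for i in range(dA)]
```

For the Bell state $(|00\rangle + |11\rangle)/\sqrt{2}$ this yields the maximally mixed state $\mathrm{diag}(1/2, 1/2)$: although the global state is pure, every local measurement on A has the statistics of a fair coin flip, which is exactly the "just look at the A part" prescription of the answer.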
A red-black full tree where every black node has at most 1 red child has at most (n-1)/4 red nodes | Question: Let us call a red-black tree strict when every black node has at most one red child.
Show that a strict red-black full tree has at most $(n − 1)/4$ red nodes; a binary tree is full when every node has zero or two children.
The hint for this problem says to use a charging argument but I can't seem to figure one out.
My idea was to consider the root of the tree (which must be black) and recursively consider the number of red nodes in the left and right subtrees and show by induction that this holds. But I can't seem to figure out some of the details (and there seem to be some inconsistencies).
Answer: Deposit 3 dollars on each red node initially.
For each black node, if it is a sibling or a child of a red node, move one dollar from that red node onto the black node.
Claim 1: there is at most one dollar on each black node. There is no dollar on the root of the tree.
Claim 2: there is no dollar on red nodes.
The total amount of dollars was $3\times\#\text{red nodes}$ initially. The total amount of dollars at the end is at most $\#\text{black nodes} - 1$.
$$3\times\#\text{red nodes} \le \#\text{black nodes} - 1$$
which means $4\times\#\text{red nodes} \le (\#\text{black nodes} + \#\text{red nodes}) - 1=\#\text{all nodes}-1$.
I will let you prove the two claims. | {
"domain": "cs.stackexchange",
"id": 20062,
"tags": "data-structures, red-black-trees"
} |
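The claimed bound can be sanity-checked by brute force on small trees: enumerate every red/black colouring of every full binary tree, keep the ones satisfying the red-black height condition and the strictness condition, and confirm $4 \times \#\text{red} \le n-1$. A sketch (all helper names made up; this is a check, not the requested proof):

```python
def colored_trees(n):
    # every red/black colouring of every full binary tree with n nodes,
    # encoded as (colour, left, right) with left = right = None at a leaf
    if n == 1:
        yield ("B", None, None)
        yield ("R", None, None)
        return
    for k in range(1, n - 1, 2):          # odd sizes for the left subtree
        for left in colored_trees(k):
            for right in colored_trees(n - 1 - k):
                yield ("B", left, right)
                yield ("R", left, right)

def black_height(t):
    # uniform number of black nodes on every root-to-leaf path, else None
    if t is None:
        return 0
    colour, left, right = t
    hl, hr = black_height(left), black_height(right)
    if hl is None or hl != hr:
        return None
    return hl + (colour == "B")

def is_strict_red_black(t, parent_is_red=False):
    # no red node with a red parent, and (strictness) every black node
    # has at most one red child
    if t is None:
        return True
    colour, left, right = t
    if colour == "R" and parent_is_red:
        return False
    if colour == "B":
        if sum(1 for s in (left, right) if s and s[0] == "R") > 1:
            return False
    return (is_strict_red_black(left, colour == "R")
            and is_strict_red_black(right, colour == "R"))

def red_count(t):
    if t is None:
        return 0
    return (t[0] == "R") + red_count(t[1]) + red_count(t[2])
```

Running this for n = 1, 3, 5, 7, 9 (black root, consistent black height, strict) confirms the inequality, and at n = 5 the bound is attained with exactly one red node.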
How do I create a filtered gene list using expression medians | Question: Forgive the simple noob question
I have TPM data of ~50k genes (rows) across ~1k cell lines (columns). In R, I would like to output an "intermediate expression" gene list for each cell line, like:
>head(finalExpressedGeneList, n=3)
Celltype1 Celltype2 CelltypeN
gene1 gene5 gene3
gene3 gene6 gene6
etc etc etc
I am defining intermediate expression as "gene expression >= median value within each cell line." I have already replaced all 0 values with NA using:
>data[data == 0] <- NA
I then got medians for each column using:
>colMed <- apply(data,2, FUN = median, na.rm = TRUE)
Now I need to somehow filter the rows from each column and copy the associated gene into a new matrix. I started building the code below:
>if (data[,1]>=colMed [1])
{
finalExpressedGeneList <- row.names(data)
}
But I realized that even if I got it working, I would still have to iterate over all of my columns. I'm sure that there is a cleaner way to do this.
EDIT (adding example data)
#create example data
>j = 5
>M <- matrix(NA,j,j)
>M <- sapply(1:j, function(i) `length<-`(1:(j-i+1), j))
>rownames(M) <- c("gene1", "gene2", "gene3", "gene4", "gene5")
>colnames(M) <- c("cellLine1", "cellLine2", "cellLine3", "cellLine4", "cellLine5")
>view(M)
cellLine1 cellLine2 cellLine3 cellLine4 cellLine5
gene1 1 1 1 1 1
gene2 2 2 2 2 NA
gene3 3 3 3 NA NA
gene4 4 4 NA NA NA
gene5 5 NA NA NA NA
I would like to get an output of:
cellLine1 cellLine2 cellLine3 cellLine4 cellLine5
gene3 gene3 gene2 gene2 gene1
gene4 gene4 gene3
gene5
because cellLine1 has three genes greater than or equal to the median of that column (3), cellLine2 has two genes greater than or equal to the median of that column (2.5), etc...
Answer: The code below should work. It returns a list:
## example data
j = 5
M <- matrix(NA,j,j)
M <- sapply(1:j, function(i) `length<-`(1:(j-i+1), j))
rownames(M) <- c("gene1", "gene2", "gene3", "gene4", "gene5")
colnames(M) <- c("cellLine1", "cellLine2", "cellLine3", "cellLine4", "cellLine5")
#####
genelist = apply(M,2,function(i)rownames(M)[which(i>=median(i,na.rm=T))])
####
#you can call the genes for cellline 1 using:
genelist$cellLine1
Hope I got it correct! | {
"domain": "bioinformatics.stackexchange",
"id": 1080,
"tags": "r, rna-seq, filtering"
} |
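For readers outside R, the per-column filter is easy to state in any language: take the median of each column's non-missing values and keep the genes at or above it. A Python sketch of the same logic (hypothetical names), checked against the question's example matrix:

```python
from statistics import median

def genes_at_or_above_median(expr, genes):
    # expr maps a cell line to its per-gene values (None = missing);
    # returns, per cell line, the genes whose value >= the column median
    out = {}
    for cell_line, values in expr.items():
        col_median = median(v for v in values if v is not None)
        out[cell_line] = [g for g, v in zip(genes, values)
                          if v is not None and v >= col_median]
    return out
```

This mirrors the accepted one-liner: the apply over columns becomes the loop, and na.rm=T becomes the None filter.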
Sorting an array of objects by a property which may be missing | Question: Problem
I am sorting an array of objects. Each element may have a property called reference or it may be undefined. When defined, it is a user-provided string (which is why I use String.localeCompare()).
If the reference is not available, that element's relative position in the final list is irrelevant save that all blank references must appear grouped together at the end.
Implementation
function sort(searchResult) {
const hasReference = searchResult
.filter(taskCard => typeof taskCard.reference !== "undefined");
const missingReference = searchResult
.filter(taskCard => typeof taskCard.reference === "undefined");
hasReference.sort(
(tc1, tc2) => tc1.reference.localeCompare(tc2.reference)
);
return hasReference.concat(missingReference);
}
My Concerns
Is this an efficient way of solving this problem in terms of time?
I'm currently iterating over the initial array twice. Is there an elegant ES6 way to do this in a single pass that I'm not seeing?
I believe memory to be less of a concern (because I exactly double the amount of memory used while processing by generating two new arrays that together are the size of the original); am I correct?
Are there any scalability pitfalls doing it this way?
As usual, any other comments are welcome.
Answer:
Is this an efficient way of solving this problem in terms of time?
The filtering steps have time complexity \$O(n)\$,
and the sorting step has \$O(n \log n)\$.
Even if the filtering may look a bit of a waste,
the dominant operation is the sort,
if most of the items have the reference property.
I'm currently iterating over the initial array twice. Is there an elegant ES6 way to do this in a single pass that I'm not seeing?
Yes, you could implement the compare method so that items with an undefined property get sorted at the end,
as @igor-soloydenko did in his answer.
I believe memory to be less of a concern (because I exactly double the amount of memory used while processing by generating two new arrays that together are the size of the original); am I correct?
Extra \$O(n)\$ memory doesn't seem a big concern.
If it is, then you can use the in-place alternative,
that doesn't use extra memory.
More or less, see the next point.
Are there any scalability pitfalls doing it this way?
The only scalability pitfall that I see is the extra \$O(n)\$ memory,
in case of very large input.
The fact that you partition the input has the interesting effect that if a large portion of the input has undefined values,
the sorting step will be faster using the current technique compared to the in-place alternative,
because the items with undefined values will not be part of the slow sort operation.
If you don't expect many undefined values,
then the in-place technique should be faster. | {
"domain": "codereview.stackexchange",
"id": 27673,
"tags": "javascript, performance, array, sorting, ecmascript-6"
} |
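The single-pass comparator idea mentioned in the answer carries over to any language: sort with a key that sends missing references to the end. A Python sketch of the same ordering (plain string comparison instead of localeCompare, so only an approximation of the JavaScript version):

```python
def sort_results(search_result):
    # items with a reference first, ordered by it; items lacking one are
    # grouped together at the end (their relative order is irrelevant)
    return sorted(search_result,
                  key=lambda r: (r.get("reference") is None,
                                 r.get("reference") or ""))
```

Because False sorts before True, present references come first; the sort is stable, so the reference-less tail keeps its original relative order.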
Way of measuring thickness of a glass | Question: Imagine you have a laser pointer and a glass. You don't know the refractive index of the glass, and you only know the wavelength of the laser!
How can you measure the thickness of glass?
(I have some idea but please tell me your ideas and then we will have discussion!)
Answer: Have a look at Floris' answer to Calculating light's lateral shift in a glass slab.
If you shine your laser through the glass plate at an angle $\theta$ then the beam will be deflected by a distance $x$ given by:
$$ x =d\sin\theta\left(1-\frac{\sqrt{1-\sin^2\theta}}{\sqrt{n^2-\sin^2\theta}}\right) $$
where $d$ is the unknown thickness of the plate and $n$ is the unknown refractive index. If you measure the displacement $x_1$ for an angle $\theta_1$ then measure the displacement $x_2$ for a different angle $\theta_2$ then you can divide the two displacements to get:
$$ \frac{x_1}{x_2} = \frac{\sin\theta_1\left(1-\frac{\sqrt{1-\sin^2\theta_1}}{\sqrt{n^2-\sin^2\theta_1}}\right)}{\sin\theta_2\left(1-\frac{\sqrt{1-\sin^2\theta_2}}{\sqrt{n^2-\sin^2\theta_2}}\right)} $$
You can solve this equation to find $n$ then substitute the value of $n$ to find $d$.
In practice you wouldn't just use two measurements. To improve accuracy you would record many values of $x$ and $\theta$ and use a curve fitting program to calculate the values of $d$ and $n$. | {
"domain": "physics.stackexchange",
"id": 24148,
"tags": "optics, refraction, geometric-optics"
} |
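For the two-measurement version, the pair of equations can be solved numerically: the displacement ratio appears to be monotone in $n$ over the usual range of glass indices, so a simple bisection recovers $n$, after which either measurement gives $d$. A sketch (hypothetical functions; angles in radians; the monotonicity is an assumption over the stated bracket):

```python
import math

def shift_per_thickness(theta, n):
    # lateral shift per unit plate thickness for incidence angle theta
    s = math.sin(theta)
    return s * (1.0 - math.sqrt(1.0 - s * s) / math.sqrt(n * n - s * s))

def solve_n_and_d(theta1, x1, theta2, x2, lo=1.01, hi=3.0):
    # recover the refractive index by bisecting on the displacement
    # ratio x1/x2 (assumed monotone in n on [lo, hi]), then read off
    # the thickness from the first measurement
    target = x1 / x2
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        r = shift_per_thickness(theta1, mid) / shift_per_thickness(theta2, mid)
        if r < target:
            lo = mid
        else:
            hi = mid
    n = 0.5 * (lo + hi)
    return n, x1 / shift_per_thickness(theta1, n)
```

Feeding in displacements generated from a known plate (say n = 1.5, d = 5 at 30 and 60 degrees) recovers both parameters, which is the two-point version of the curve-fitting procedure recommended at the end of the answer.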
StaticLayer map-resize erases all obstacles from ObstacleLayer | Question:
Hello @David Lu
I am using Google Cartographer as SLAM component, and move_base for navigation. In the config of the global costmap, I have set up costmap_2d::StaticLayer to subscribe to the /map topic published by cartographer. There is also an ObstacleLayer and an InflationLayer. My problem is that each time cartographer dynamically adjusts the map size, all obstacles which have been added to the global costmap get erased.
Looking at the code of static_layer.cpp I can see that StaticLayer::incomingMap() is executed when a map with new dimensions is received. In this function, there is a section that initializes the costmap with static data.
How can I prevent obstacles previously discovered by the ObstacleLayer from being erased from the master costmap (i.e. the global costmap) when the StaticLayer processes a resized map?
Thanks
Originally posted by Huibuh on ROS Answers with karma: 399 on 2017-02-09
Post score: 0
Answer:
In the initial formulation of the static layer, the size was presumed to be relatively constant (static you might say).
The semantics of resizing the underlying static map are not so well defined that it defines how to deal with all the previous data in the layers. It's a decent use case, but I don't believe its currently covered in the existing layers.
Originally posted by David Lu with karma: 10932 on 2017-02-12
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Huibuh on 2017-02-23:
You're right, each time StaticLayer::incomingMap() is executed, Costmap2D::resizeMap() is called on all other layers. In this function, initMaps() and resetMaps() wipes the existing data. Do you have any suggestions how to modify Costmap2D::resizeMap() to preserve the data?
Comment by David Lu on 2017-03-26:
It would involve a substantial amount of changes. The implementation should probably be similar to Costmap2D::updateOrigin which also changes the coordinate system of the map while keeping some of the data. | {
"domain": "robotics.stackexchange",
"id": 26965,
"tags": "navigation, move-base, costmap-2d"
} |
HOW TO USE MIT-ROS-PACKAGE! | Question:
HI,
I'm trying to use the demo packages created by MIT (http://www.ros.org/wiki/mit-ros-pkg/KinectDemos); however, I don't know where to get the packages. I saw an address listed like: Version Control: https://svn.csail.mit.edu/mit-ros-pkg. Should I use “svn co https://svn.csail.mit.edu/mit-ros-pkg” or "apt-get install https://svn.csail.mit.edu/mit-ros-pkg"?
Originally posted by DavidXiong on ROS Answers with karma: 3 on 2013-03-30
Post score: 1
Answer:
The installation instructions are pretty extensive, although there are no fuerte or groovy instructions. Which part don't you get?
I think you should use rosinstall ~/ros_workspace ~/ros_workspace/kinect_demos.rosinstall
The rosinstall documentation is rather vague indeed; I don't get why you are sent back to the installing-ROS page when you are looking for documentation.
Originally posted by davinci with karma: 2573 on 2013-03-30
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by DavidXiong on 2013-03-30:
@davinci, I was trying to use MIT-DEMO in GROOVY, so I didn't see the command "rosinstall ~/ros_workspace ~/ros_workspace/kinect_demos.rosinstall", now I changed my ROS back to electric and followed the tutorial (http://www.ros.org/wiki/mit-ros-pkg/KinectDemos/electric), however, in the step
Comment by DavidXiong on 2013-03-30:
@davinci, silly number limiter! In the step "Validate certificate from MIT CSAIL's svn", in the terminal, it seems that I should enter the Client certificate filename(it shows like: Authentication realm: https://svn.csail.mit.edu:1443
Client certificate filename: ), so how to deal with it? THX!
Comment by davinci on 2013-03-31:
https://svn.csail.mit.edu/mit-ros-pkg/ does seem to work, perhaps try that? Otherwise contact the maintainer of the package.
Comment by DavidXiong on 2013-03-31:
@davinci , https://svn.csail.mit.edu/mit-ros-pkg/ works, however, followed the tutorials http://www.ros.org/wiki/mit-ros-pkg/KinectDemos/electric, in step 7, I always get the error like:
Comment by DavidXiong on 2013-03-31:
@davinci, svn: URL 'https://code.ros.org/svn/wg-ros-pkg/stacks/motion_planning_common/trunk' doesn't exist
Exception caught during install: Error processing 'motion_planning_common' : [motion_planning_common] Checkout of https://code.ros.org/svn/wg-ros-pkg/stacks/motion_planning_common/trunk version
Comment by DavidXiong on 2013-03-31:
@davinci, I find that at the website of https://code.ros.org/svn/wg-ros-pkg/stacks, the motion_planning_common do not exist, maybe it has been removed, but how can I sole this problem?THX!
Comment by DavidXiong on 2013-03-31:
"solve", not "sole", silly spelling mistake.
Comment by chao on 2014-01-21:
@DavidXiong, have you managed to install the package properly? Have you tried to solve the ( Authentication realm: https://svn.csail.mit.edu:1443 Client certificate filename: )? | {
"domain": "robotics.stackexchange",
"id": 13608,
"tags": "openni"
} |
Why are the units of kcat 1/s? | Question: I understand that $k_\text{cat}$ measures the turnover number of an enzyme. This measure is therefore a quantity of molecule conversions per unit of time. I suspect that my problem is more that of a lack of maths, but why is the unit expressed in $\mathrm s^{-1}$? Why not just seconds?
Answer: The turnover frequency is a rate. (It's actually the kinetic rate of the reaction in saturating substrate concentration, normalized to the amount of enzyme.)
It might be helpful to think about something like a factory that makes a certain number of widgets in a certain amount of time. How would you naturally describe how fast the factory could make widgets? Most people would express how fast the factory works in terms of "widgets per hour". It's the same with enzymes - it's most natural to describe how fast the enzyme works in terms of "turnovers per second". It's just that the "turnover" is implied and is viewed as "dimensionless", so it's left off, leaving you simply with "s-1" as a unit.
(Note that even though the "turnover" is left off, it's still very important to know it's there, and to know what, exactly, one "turnover" is. In complicated reactions there are sometimes different reasonable ways to count what "one turnover" is, and different definitions could result in a 2, 3, 4, 5, etc. fold difference in value. Those vexing numeric differences become easy to explain once you realize this researcher is using the formation of one molecule of this product as "one turnover", but that researcher is using the disappearance of one molecule of that substrate as "one turnover".)
Naively, one could invert the ratio to come up with a value in time, rather than in 1/time. For example, if a factory makes 5 widgets per hour, it takes them 0.2 hours to make a widget ... sort of. One issue you run into is that that's the time on average. In a stochastic process like enzyme turnover, it's highly unlikely that any single turnover will take exactly the time specified. Some will take longer, some will be quicker. Only for a large number of turnovers in aggregate does that time value have meaning. Saying that "a turnover takes 5 s" gives a false impression about individual turnovers in a way that saying "this enzyme works at 0.2 turnovers per second" doesn't.
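To make the stochastic point concrete, here is a small simulation (an illustrative model assumed for this sketch — exponential waiting times are the simplest single-step model): only the average over many turnovers recovers the "5 s per turnover" figure, while individual turnovers scatter widely around it.

```python
import random

random.seed(42)  # deterministic for reproducibility

rate = 0.2  # turnovers per second, i.e. a kcat of 0.2 s^-1
# Model each turnover's duration as an exponential waiting time.
times = [random.expovariate(rate) for _ in range(100_000)]

mean_time = sum(times) / len(times)   # close to 1/rate = 5 s
spread = (min(times), max(times))     # individual turnovers vary widely
```

The mean lands near 5 s, but `spread` shows single turnovers ranging from far below to far above it — which is why quoting the rate is less misleading than quoting a time.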
Additionally, in the case of multi-step reactions, saying that "a turnover takes 5 s" may be incorrect. The entire pathway may take much more than that. It's just that once the pathway has reached steady state it's only the rate limiting step which takes 5 s. For example, take a pathway of 20 steps, 19 of which take 4 s each, but one which takes 5 s. It's going to take (19*4 + 5) = 81 seconds for any given substrate molecule to start at the beginning of the pathway and make it to the end. However, once the pathway is full and each step is handing off products to the next step, the pathway will be producing product at a rate of 1 every 5 s. (This, of course, doesn't apply to enzymes where all reactions take place at the same active site, but it's a consideration in the general case, which explains why 1/s is the preferred unit for reaction rates.) | {
"domain": "chemistry.stackexchange",
"id": 4541,
"tags": "physical-chemistry, biochemistry, kinetics, enzymes, units"
} |
Can the production rate of labs-on-a-chip ever reach the production rate of silicon chips? | Question: Labs-on-a-chip and silicon chips both have the potential - and in some cases have already reached the potential - to drastically scale down tasks, in these cases performing chemical reactions and performing computations.
Part of the appeal of silicon chips - besides their size - is that they can be easily mass-produced. Labs-on-a-chip have tremendous potential, but they are in the early stages of development.
Can labs-on-a-chip be mass-produced in the same way that silicon chips are mass-produced?
Answer: Production rates of lab-on-chip fluidic devices can exceed the production rate of silicon ICs easily. Some types of lab-on-chip devices can be fabricated via injection molding. Of course, there are subsequent operations: assembly, QC. But those can be automated.
I'm aware of a device that's already being produced at a rate of 5 million units a year. These ones. They are not small enough to fit the "chip" category: the disk is about 100mm diameter. Nevertheless, it follows the philosophy of lab-on-chip.
At the same time, in terms of complexity, present cutting edge lab-on-chip devices are 8 to 10 orders of magnitude simpler than present cutting edge ICs. Let me put it this way: if today's lab-on-chip were silicon ICs, they would be 741 OpAmps from 1968. | {
"domain": "engineering.stackexchange",
"id": 41,
"tags": "electrical-engineering, biomedical-engineering"
} |
Are space and time hierarchies even comparable? | Question: I am wondering if there are any results to what extent the space and time hierarchies "disagree" on which problem is harder. For example, is it known whether there are languages $L_1$ and $L_2$ such that $L_1 \in \mathrm{TIME}(f(n)) \setminus \mathrm{SPACE}(g(n))$ and $L_2 \in \mathrm{SPACE}(g(n)) \setminus \mathrm{TIME}(f(n))$? How often does this occur?
P.S.- The question Function with space-depending computation time seems to ask something similar but was worded confusingly and none of the answers seem to be what I'm looking for.
Answer: You can get the situation you describe by choosing weird functions $f(n)$ and $g(n)$.
For example, let $g(n) = n^3$ and
$$f(n) = \begin{cases}
n & \text{if $n$ is odd}, \\
2^{n^5} & \text{if $n$ is even}.
\end{cases}$$
Then choose $L_1$ and $L_2$ as follows:
$L_1$ is a language containing only strings of even length which can be decided in time $O(2^{n^5})$ but not in time $O(2^{n^4})$. The existence of such a language is pretty easy to prove from the time hierarchy theorem.
$L_2$ is a language containing only strings of odd length which can be decided in space $O(n^3)$ but not in space $O(n^2)$. The existence of such a language is pretty easy to prove from the space hierarchy theorem.
Then we have the following facts:
$L_1 \in TIME(f(n))$:
To decide whether a string is in $L_1$, simply check whether the length $n$ is even. If it is, then continue to use the $O(2^{n^5})$ time decider for $L_1$ whose existence is guaranteed by the definition of $L_1$. If $n$ is odd, immediately reject since $L_1$ does not include any odd length strings anyway. This procedure decides $L_1$, runs in time $O(n)$ when $n$ is odd, and runs in time $O(2^{n^5})$ when $n$ is even. In other words, this procedure decides $L_1$ in time $O(f(n))$. As desired, $L_1 \in TIME(f(n))$.
$L_2 \in SPACE(g(n))$:
By the definition of $L_2$, $L_2$ can be decided in space $O(n^3)$. Thus, $L_2 \in SPACE(n^3) = SPACE(g(n))$, as desired.
$L_1 \not\in SPACE(g(n))$:
Suppose for the sake of contradiction that $L_1 \in SPACE(g(n)) = SPACE(n^3)$. We know that $SPACE(n^3) \subseteq TIME(2^{O(n^3)}) \subsetneq TIME(2^{n^4})$. Thus, there exists a decider for $L_1$ which runs in time $O(2^{n^4})$. This directly contradicts the definition of $L_1$. Then by contradiction, we see that $L_1 \not\in SPACE(g(n))$.
$L_2 \not\in TIME(f(n))$:
Suppose for the sake of contradiction that $L_2 \in TIME(f(n))$. This means that there exists a constant $c$ and an algorithm $A$ deciding $L_2$ such that on any input of size $n$, algorithm $A$ terminates in time $c\times f(n)$.
We construct a new algorithm $A'$ as follows: given some input, walk through the entire input, keeping track of whether the input length is even or odd; if at the end of the input the length is determined to be odd, return to the start of the input and run $A$; otherwise, reject. For any input of odd length, $A'$ returns the same answer as $A$. For any input of even length, $A'$ rejects, which matches the expected behavior since $L_2$ contains no even length strings. Thus, $A'$ also decides $L_2$. On even length inputs, $A'$ runs for exactly $n$ steps. On odd length inputs, $A'$ runs for exactly $2n$ steps more than $A$ requires. But $A$ requires at most $c\times f(n)$ steps, which for odd $n$ is $cn$. Thus, in all cases, $A'$ runs in at most $(c+2)n$ steps. In other words, algorithm $A'$ decides $L_2$ in time $O(n)$.
But since $TIME(n) \subseteq SPACE(n)$, we can conclude that $L_2 \in SPACE(n) \subsetneq SPACE(n^2)$. This contradicts the definition of $L_2$. Thus, by contradiction we see that $L_2 \not\in TIME(f(n))$. | {
"domain": "cstheory.stackexchange",
"id": 4306,
"tags": "time-complexity, space-complexity, space-time-tradeoff, time-hierarchy"
} |
What alloy am I making (Al-Bi) | Question: I have been melting aluminum in a DIY furnace for a few months in order to make lost wax casts. I was thinking that since bismuth has a much lower melting point than aluminum, that I could add a small amount of bismuth to lower the melting point (9 parts Al, 1 part Bi). Qualitatively this seems to be working. But I am confused.
Looking up a phase diagram for Al-Bi, I find:
Which, if I understand it correctly, seems to mean that the melting point actually went up. Am I reading it correctly? If both metals separately have a lower melting point than the alloy at that ratio (and I am not reaching the new melting point), then what am I making?
Besides lowering the melting point, my hope was also to lower the viscosity so that the mix fills in my mold better. Will adding bismuth help accomplish this goal? Should I be adding tin instead? Is there a tin-bis-al ratio that would yield minimum viscosity at temperatures below 1500 °F (ideally mostly made out of Al)?
Answer: You are not making anything usable. Bi does not raise the melting point until over 3.4% Bi; that is lost in this diagram. If your aim is to make aluminum castings, look in a book and do it the way the rest of the world does: add Si. At about 12% Si the melting point is down to about 998 °F, about as low as you will get. And, surprise, this is a common composition for aluminum alloy castings. | {
"domain": "engineering.stackexchange",
"id": 3461,
"tags": "metallurgy, alloys"
} |
Write a Kotlin-function isPrimeNumber | Question: Exercise:
Write a prime number-test isPrime(num: Int), which for integer m >= 2 checks, if the integer is a prime number or not.
My solution:
fun main(args: Array<String>) {
var isPrime: Boolean = false
for (i in 2..101) {
isPrime = isPrime(i)
if (isPrime) {
println("$i => ${isPrime(i)}")
}
}
}
fun isPrime(num: Int): Boolean {
val upperLimit = num / 2;
var i = 2
while (i <= upperLimit) {
if (num % i == 0) {
return false
}
i++
}
return true
}
Could my solution become improved concerning efficiency?
Answer: Two easy improvements
Check up to the square root of n rather than n/2. So for 101 you only need to check divisors up to 10, not 50. If your number isn't prime, then one of its factors must be less than or equal to its square root.
Don't check multiples of 2, so do a single test to see if the number can be divided by 2 then only test odd numbers starting at 3.
So if you test 101 for primality, these changes mean that instead of testing for divisibility by 2..50 you only test 2, 3, 5, 7, 9
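A sketch of the two improvements (shown in Python rather than Kotlin so it stays compact; `math.isqrt` is Python's integer square root — in Kotlin you would compare `i * i <= num` or use `Math.sqrt`):

```python
import math

def is_prime(num: int) -> bool:
    """Trial division with the two improvements applied:
    test divisors only up to sqrt(num), and skip even
    divisors after a single check against 2."""
    if num < 2:
        return False
    if num == 2:
        return True
    if num % 2 == 0:
        return False
    # Only odd candidates from 3 up to floor(sqrt(num)).
    for i in range(3, math.isqrt(num) + 1, 2):
        if num % i == 0:
            return False
    return True
```

For 101 this loop tries only 3, 5, 7, 9 after the single parity check, instead of every value from 2 to 50.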
Other things to consider
Use the primes below the square root of n. So if you know that 2, 3, 5, 7 are the only primes below the square root of 101, you only need to test for divisibility by these numbers to show 101 is prime.
A more complex solution would be to look at a deterministic version of the Miller-Rabin test as described here; this works well if you just want to know if a specific value is prime.
The most efficient approach, if you want all primes below n, would be a prime number sieve | {
"domain": "codereview.stackexchange",
"id": 40893,
"tags": "primes, integer, kotlin"
} |
When a proton attracts an electron in the electromagnetic field is the proton "bending" the field? | Question: As the question states when a proton attracts an electron in the electromagnetic field is the proton "bending" the electromagnetic field like the earth bends space time ("creating" gravity)?
Answer: If you could look at a map of the electric field lines in the vicinity of two separated charged particles, you would see that the field lines appear to bend. However, you would just be seeing the result of the vector sum of the two fields. The field at any point would just be the sum of the fields due to the two particles separately. So it is not really appropriate to think of the fields of the two particles "bending" each other.
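That vector-sum behavior is easy to check numerically. A minimal sketch (2D point charges with plain Coulomb fields in SI units; the charge positions are arbitrary choices for illustration):

```python
# Superposition: the E field at a point is the vector sum of each
# charge's individual field -- there is no extra "bending" term.
K = 8.9875517923e9  # Coulomb constant, N*m^2/C^2

def coulomb_field(q, source, point):
    """E field (2D vector) at `point` due to a point charge q at `source`."""
    rx, ry = point[0] - source[0], point[1] - source[1]
    r = (rx**2 + ry**2) ** 0.5
    scale = K * q / r**3
    return (scale * rx, scale * ry)

def total_field(charges, point):
    """Plain vector sum of the individual fields."""
    ex = sum(coulomb_field(q, pos, point)[0] for q, pos in charges)
    ey = sum(coulomb_field(q, pos, point)[1] for q, pos in charges)
    return (ex, ey)

# Proton at the origin, electron 1 nm away on the x-axis.
e = 1.602176634e-19
charges = [(+e, (0.0, 0.0)), (-e, (1e-9, 0.0))]
```

At the midpoint between the two charges both contributions point the same way along x, and the total is exactly the sum of the two — the apparent "bending" of field-line maps is nothing more than this addition.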
To first order, the same is true of the gravitational field in the vicinity of two masses. However, there is a higher-order term that represents a nonlinearity in the behavior of gravitational fields. The term is very small except in the case of very strong gravitational fields. The nonlinear term can be interpreted as "bending" the gravitational field lines, because it results in a typically very small but nonetheless significant difference between the actual gravitational field at any point and the vector sum of the separate gravitational fields due to the two masses.
A good start is to think carefully about what you mean when you say "bending the field". What do you mean by "field" and "bending" (in terms of how you would measure them)? Normally, the electrostatic field at a point is defined as a vector whose magnitude is the force experienced by a point charge placed at that point, divided by the charge, and whose direction is the direction of the force. In electrostatics, the force is directly proportional to the value of the test charge. In gravity, it's almost proportional to the value of a test mass -- but not quite. When a relationship like that is not directly proportional, it's called "nonlinear".
Actually, in the case of extremely strong electromagnetic fields, there is reason to believe that nonlinearities do appear, due to quantum electrodynamic (QED) effects that arise, referred to as "vacuum polarization". | {
"domain": "physics.stackexchange",
"id": 58593,
"tags": "electromagnetism, electromagnetic-induction"
} |
How does space time differ between galaxies? | Question: Does the gravity wells differ enough between galaxies to have different speeds in space time? How much slower in space time would the biggest galaxy have compared to the Milky Way?
Answer:
Does the gravity wells differ enough between galaxies to have different speeds in space time ?
You are using these terms in a very confused way.
The gravitational field of an object depends on its mass distribution. It is normally described as a potential field.
A gravitational field does not have a speed that characterizes it, nor does it have a speed through space-time (the field always propagates at the speed of light, regardless of source).
How much slower in space time would the biggest galaxy have compared to the Milky Way?
Keeping in mind that size has nothing to do with speed, note that there are normally two aspects to the speed of an object relative to something else (it is important to remember that speeds are always relative to some other object that "defines" what is at rest, and that this choice is arbitrary - that's the theory of relativity for you).
Galaxies can have a local speed due to e.g. motion within the local group, like our own local group of galaxies. This aspect of motion is really just governed by "normal" gravitational effects.
On a much larger scale they can have an apparent motion due to the expansion of the universe. This part of motion is due to a large scale effect only explained by using general relativity and which only has an observable effect over cosmological distances.
The gravitational field of an object has no effect on its relative motion due to the expansion of space-time, whereas its gravitational field does affect the relative motion of objects local to it and hence their relative motion to that object. | {
"domain": "astronomy.stackexchange",
"id": 2771,
"tags": "gravity, galaxy, astrophysics, space-time, mass"
} |
Why does shell fusion produce more energy than core fusion? | Question: Stars go from the main sequence phase to the red giant branch due to the depletion of hydrogen in the core. As a result, the star contracts and shell hydrogen fusion begins, which apparently produces much more energy. Why does this produce more energy than core hydrogen fusion? The Wikipedia article on red giants says that this is because of higher temperatures, but why does shell hydrogen fusion occur at higher temperatures than core hydrogen fusion?
After the helium flash in low-medium mass stars, does shell hydrogen fusion continue to occur? If so, then why is there a drop in luminosity from the helium flash to the horizontal branch? And if not, then why not?
Similarly to the first question, in the AGB phase, why does shell helium fusion produce so much more energy than core fusion?
Why does the early AGB phase derive most of its energy from shell helium fusion and not shell hydrogen fusion?
Answer: Ultimately this is more of an overly long comment, as I think a more satisfying and complete answer would properly explain things in a more concrete fashion—more of a "it has to do this because..." answer than my "it can do this because..." one.
The short of the answer to the first question is that helium fusion needs ~25 times the temperature that hydrogen fusion does. The proton-proton chain initiates around $4\times 10^6$ Kelvin, whereas helium fusion doesn't begin until around $10^8$ Kelvin. So when the main-sequence stage ends and the helium core contracts and the temperature rises, the "edge" of the core can have temperatures well in excess of the minimum hydrogen fusion temperature, and so a shell around it can have temperatures well beyond it. Fusion rates are (approximately) polynomial in temperature, with the degree depending on the reaction in question, so small increases in temperature can produce substantially more fusion. The gravitational force is strong enough to overwhelm the pressure from the induced fusion, and so will contract the surrounding shell to temperatures in excess of the minimum needed. This is basically what happens in the cores of main sequence massive stars (relative to, say, the Sun). Actually, our own Sun's energy is mostly from the proton-proton chain and has a core temperature of around $1.57\times 10^7$ Kelvin, nearly four times the minimum necessary. And still, the core needs to be nearly 10 times hotter than that to initiate helium fusion.
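To see how strongly a power-law rate depends on temperature, here is a toy calculation. The exponents are common textbook approximations (pp-chain roughly $\propto T^4$, CNO cycle roughly $\propto T^{17}$ near solar-core temperatures) and are assumptions of this sketch, not values taken from the answer above:

```python
def rate_boost(temp_ratio, exponent):
    """Factor by which a power-law fusion rate grows
    when temperature rises by the given ratio."""
    return temp_ratio ** exponent

# A 10% temperature rise under an assumed T^4 law (pp-chain-like):
pp = rate_boost(1.10, 4)    # roughly a 1.5x increase
# The same 10% rise under an assumed T^17 law (CNO-like):
cno = rate_boost(1.10, 17)  # roughly a 5x increase
```

A modest temperature excess over the minimum therefore translates into a dramatically higher energy output, which is the mechanism behind the vigorous shell burning described above.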
For the second question, the short of the answer is that the core has undergone thermal expansion after the Helium flash, and so occupies the temperature ranges most conducive to a strong hydrogen fusion rate. The material outside the core is now at lower temperatures and pressure, so the fusion rate is reduced substantially. So the energy output comes principally from the core, and the helium fusion at near minimum temperatures releases less energy than the hydrogen shell at (well) beyond minimum temperatures did. Thus the star overall produces less energy and contracts.
The remaining questions are explained in similar fashion: one has to pay attention to the sensitivity of reaction rates to temperature, and what the temperatures in those shells actually are. The sensitivities are different for each reaction chain, and the temperatures can go well beyond the minimum necessary. | {
"domain": "astronomy.stackexchange",
"id": 1525,
"tags": "stellar-evolution"
} |
How to use this salt analysis chart to determine the cations present in a given solution? | Question:
With the only possibilities being $\ce{Ba^2+, Ca^2+, Cu^2+, Fe^2+, Fe^3+, Pb^2+}$, what cations are present in a solution which:
a) Forms a white precipitate with sulfate and with fluoride, no precipitate with chloride, and a blue precipitate with hydroxide which dissolves in ammonia solution
I have the answer with me, but I am not too sure how to use the above chart for determining cations in a solution, to answer the question. I can sort of make it out, but I am not completely sure.
Answer: For this question rather than reading the question and trying to understand what is going on all in one go, I suggest just work your way through the flow chart and refer to the question as you go along so that you don't overload your brain with excess information and confuse yourself.
So starting from the top, it asks if a precipitate will form if $\ce{HCl}$ is added. Looking from the information in the question, there is no precipitate formed. Therefore we can rule out $\ce{Pb^2+}$ being in the solution. Now following the steps in the flow chart, it asks if a precipitate forms if sulfuric acid is added to it. Looking from the data in the question, it says that a white precipitate is formed with sulfate. Therefore, we follow the next step which asks if a precipitate is formed when $\ce{NaF}$ is added. In the question it states that a white precipitate forms with fluoride. Therefore we can conclude that $\ce{Ca^2+}$ ions are in the solution.
However, if we look again into the question, it states that a blue precipitate also forms when mixed with hydroxide ions. Therefore, if we look back to the flow chart regarding the section on testing with $\ce{NaOH}$, it can be seen that traces of $\ce{Cu^2+}$ ions also exist in the solution. Therefore, both $\ce{Ca^2+}$ and $\ce{Cu^2+}$ cations are in the solution.
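Applied to this question, the chart's branching can be written as a small decision procedure. The observation names below are assumptions made for this sketch (not labels from the chart), and the barium branch is inferred from the chart's structure rather than stated in the text:

```python
def identify_cations(obs):
    """Follow the flow chart's logic for the ions in this question.
    `obs` is a dict of boolean observations about the solution."""
    found = set()
    if obs.get("ppt_with_HCl"):
        found.add("Pb2+")            # chloride precipitate -> lead
    elif obs.get("white_ppt_with_sulfate"):
        if obs.get("white_ppt_with_NaF"):
            found.add("Ca2+")        # fluoride precipitate -> calcium
        else:
            found.add("Ba2+")        # sulfate but no fluoride ppt (assumed branch)
    # The NaOH test is a separate branch from the acid tests:
    if obs.get("blue_ppt_with_NaOH") and obs.get("dissolves_in_ammonia"):
        found.add("Cu2+")            # blue hydroxide soluble in ammonia -> copper
    return found
```

Feeding in the observations from part (a) — no chloride precipitate, white precipitates with sulfate and fluoride, blue precipitate with hydroxide dissolving in ammonia — returns exactly the two ions found above.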
As you can see, once you know the general method, it is really quite simple and self-explanatory. | {
"domain": "chemistry.stackexchange",
"id": 4092,
"tags": "inorganic-chemistry, experimental-chemistry, salt"
} |
Hamiltonian of an electron in a magnetic field | Question: Suppose I have an electron in a magnetic field given by:
$$\vec{B}=B\hat{z}$$
The potential energy of this system is given by:
$$U=-\vec{\mu} \cdot \vec{B}=\frac{g\mu_B}{\hbar}\vec{S} \cdot \vec{B}$$
Here, $\vec{\mu}$ is the magnetic moment of the electron, $g$ is the Lande $g$-factor, $\mu_B$ is the Bohr Magneton, and $\vec{S}$ is the spin of the electron.
This shows that from a statistical mechanics perspective, electrons with spin oriented in the direction of the magnetic field have higher energy than the ones with spin antiparallel to $B$. Moreover, magnetic moment and spin point in the opposite directions.
Anyway, when solving, we simply note that:
$$\vec{S}\cdot\vec{B}=\hat{S_3} B_z$$
Since the electrons are either in $|\uparrow\rangle$ or in the $|\downarrow\rangle$ state, the expectation value of this is nothing but the eigenvalues corresponding to these states, i.e. $\frac{\hbar}{2}$ and $-\frac{\hbar}{2}$ respectively.
This is how we obtain the energy for parallel and antiparallel configurations of the spin and the magnetic field.
From what I understand, till now, we were basically finding the expectation value of $\hat{U}$ for parallel and antiparallel spins.
In general, we should have,
$$\hat{U}=\frac{g\mu_B}{\hbar}\vec{S} \cdot \vec{B}=\frac{g\mu_B}{\hbar}\frac{\hbar}{2}\vec{\sigma} \cdot \vec{B}\approx\mu_B\space\hat{\sigma_3}{B_z}$$
To obtain the energy of the parallel configuration, we would take $\langle\uparrow |\hat{U}|\uparrow\rangle$.
Similarly, we can obtain an expression for the antiparallel configuration.
My question is, whether:
$$\hat{U}=+\mu_B\space \hat{\sigma_3} B_z$$
or is it:
$$\hat{U}=-\mu_B\space \hat{\sigma_3} B_z \ \ \ ?$$
Since magnetic moment and angular momentum should be in the opposite direction for negatively charged particles, I believe it should be the former. Wikipedia agrees with this viewpoint.
However, in many texts on quantum statistical mechanics, like Pathria for example, the latter is said to be true.
Can someone point out which one of the two expressions is correct?
Answer: For spin 1/2 fermions
\begin{equation}
\boldsymbol S = \frac{\hbar}{2}\boldsymbol\sigma
\end{equation}
where $\boldsymbol\sigma$ is a vector of the three Pauli matrices $(\sigma_x, \sigma_{y}, \sigma_z).$ Furthermore $\boldsymbol \mu = -\frac{g_{S}\mu_{B}}{\hbar} \boldsymbol S$, where the spin g-factor of an electron is approximately 2. For $\boldsymbol B = (0, 0, B_{z})$ your Hamiltonian is given by
\begin{equation}
H = \mu_{B}B_{z}\sigma_{z}
\end{equation}
Note that the sign has changed because the charge of the electron is negative. This means electrons and protons actually behave in opposite manners in magnetic fields, i.e. they have opposite magnetic moments.
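As a quick numerical check (a sketch in units where $\mu_B B_z = 1$): the Hamiltonian has eigenvalues $\pm\mu_B B_z$, and conjugating with $\sigma_x$ flips its overall sign (a relabelling of the spin components):

```python
import numpy as np

# Pauli matrices sigma_x and sigma_z
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

mu_B, Bz = 1.0, 1.0        # units chosen so mu_B * Bz = 1
H = mu_B * Bz * sz         # Zeeman Hamiltonian for the electron

# Energies of the two spin states: -mu_B*Bz and +mu_B*Bz
energies = np.linalg.eigvalsh(H)

# Conjugating by sigma_x (its own inverse) flips the sign of H:
H_flipped = sx @ H @ sx
```

With the $+\mu_B\sigma_z B_z$ convention, spin-down is the lower-energy state, consistent with the electron's magnetic moment pointing opposite to its spin.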
The two Zeeman Hamiltonians you have written are equivalent up to a unitary transformation $UHU^{-1} = -H$ where $U=\sigma_{x}$. This essentially amounts to a relabelling of the spin components ( $\uparrow \rightarrow \downarrow$ and vice-versa). | {
"domain": "physics.stackexchange",
"id": 88923,
"tags": "quantum-mechanics, statistical-mechanics, magnetic-fields, quantum-spin, magnetic-moment"
} |
Why the map is different when creating from logged file? | Question:
I appreciate it very much if someone help me to understand what happen here.
I am using Kinect sensor with Turtlebot2 for creating map using gmapping. I create a bag file with rosbag record scan tf. Also I have created a map at the end of mapping attached in the picture. The picture of map and the bag files are here.
Later I play the bag file and try to create a map, but this map is completely different from the original one: the wall positions are skewed and in different directions.
I don't know what cause this, I am thinking to change the gmapping parameter, but I don't even know much about their effect.
Thanks a lot
Originally posted by niraj007 on ROS Answers with karma: 11 on 2016-04-04
Post score: 1
Answer:
Gmapping takes in a large number of scans, but ignores most of them in order to minimize computation time.
If you ran the bag file and ran gmapping, it is unlikely that you will get the exact same map because the scans that gmapping will process are likely to be different.
Another thing to consider. Does your bag file contain map messages? These wouldn't interfere with the gmapping algorithm, but the result on rviz would look strange because you're getting maps from two different sources.
If you recorded a bag without gmapping running, then played back the bag with gmapping running, you will probably get the same result each time you run gmapping because you are running it in a more controlled environment.
Originally posted by Sebastian with karma: 363 on 2016-04-04
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by niraj007 on 2016-04-05:
Hi Sebastian,
Thanks for the great idea. I will record without gmapping running and post the result. I am not sure about the map messages inside the bag. I would appreciate again if I can know the way to check.
Comment by mgruhler on 2016-04-05:
Check which topics are in a bagfile: rosbag info <BAGFILE>.
Only publish some topics: rosbag play <BAGFILE> --topics <TOPICS>
See also http://wiki.ros.org/rosbag/Commandline
Comment by niraj007 on 2016-04-07:
Sebastian's suggestion works well. First I run rosrun gmapping and rosbag record scan tf, then I move the Turtlebot2 around and stop the rosbag record. Later I play the rosbag with gmapping running, and I could generate the same map as before. Thanks a lot to everyone. | {
"domain": "robotics.stackexchange",
"id": 24306,
"tags": "navigation, rosbag, gmapping"
} |
How to change volume of a PCM 16 bit signed audio? | Question: I know I can multiply samples, then clip but perceived volume is non-linear for humans.
Can you please help with a formula.
Answer: Changing the volume of an audio signal must be done by applying a gain (multiplication) - and optionally clipping if your system has a limited dynamic range. It is as simple as that. Applying a non-linear function to an audio signal will cause distortion and add harmonics, and you don't want this to happen - you want to modify the loudness of the signal, not its timbre. [To be fair, there are non-linear processings designed to change the perceived loudness of the signal without affecting the timbre, within a given dynamic range constraint (e.g. multiband compression), but it doesn't look like this is what you need.]
Where non-linearity and fancy response curves come to play is when designing a user interface - when deciding on the relationship between the position of the control (knob or slider, whether on a GUI or as physical hardware) and the gain applied to the signal. This is where perception matters, because the users will expect a mapping between the position of the slider and their perception of loudness. Please note that even if the relationship between the position of the volume control and the gain applied to the signal is non-linear, the process of applying the gain to the signal is linear, and non-linearity would be unwanted there!
When it comes to physical volume controls, e.g. in hi-fi systems or personal audio players, the relationship between the knob position and the attenuation is closer to an exponential curve, though its shape has been tweaked and is constrained by the manufacturing process - sometimes it's just two or three linear segments. You can find those curves in the datasheets from manufacturers ("A" taper). Mixing console faders usually have their response compressed so that the upper half of their travel covers the useful range of -20 dB..+6 dB.
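Following the gain-then-clip recipe above, here is a minimal sketch for 16-bit signed PCM. The gain is specified in dB using the standard dB-to-amplitude conversion $10^{\mathrm{dB}/20}$; the function name and interface are mine:

```python
def apply_gain_db(samples, gain_db):
    """Scale 16-bit signed PCM samples by a gain given in dB,
    hard-clipping the result to the int16 range.

    samples: iterable of ints in [-32768, 32767].
    """
    gain = 10 ** (gain_db / 20.0)       # dB -> linear amplitude factor
    out = []
    for s in samples:
        v = int(round(s * gain))
        # Hard-clip to the representable int16 range.
        out.append(max(-32768, min(32767, v)))
    return out
```

Note the gain application itself is linear; only the UI-side mapping from slider position to `gain_db` would use a non-linear (perceptual) curve.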
In the software world - at least for music production - it is most common to have volume/gain knobs calibrated in dB. For example, if you have a 100 pixels long volume slider graduated from -48dB to +6dB, the gain applied to the signal would be $10^{\frac{-48 + 54 \frac{x}{100}}{20}}$. | {
"domain": "dsp.stackexchange",
"id": 386,
"tags": "audio"
} |
Ros Matlab i/o and /tf | Question:
Hello everybody,
I try to use the topic /tf in order to obtain the pose of the end-effector of the PR2. Unfortunately I can't subscribe to the topic, I have this error:
>SubTf = rosmatlab.subscriber('/tf', 'tf/tfMessage',100,node);
Error using rosmatlab.node/addSubscriber (line 661)
Java exception occurred:
org.ros.exception.RosMessageRuntimeException: java.lang.ClassNotFoundException: tf.tfMessage
at org.ros.internal.message.definition.MessageDefinitionReflectionProvider.get(MessageDefinitionReflectionProvider.java:58)
at org.ros.internal.message.Md5Generator.generate(Md5Generator.java:44)
at org.ros.internal.message.topic.TopicDescriptionFactory.newFromType(TopicDescriptionFactory.java:36)
at org.ros.internal.node.DefaultNode.newSubscriber(DefaultNode.java:286)
at org.ros.internal.node.DefaultNode.newSubscriber(DefaultNode.java:297)
Caused by: java.lang.ClassNotFoundException: tf.tfMessage
at java.net.URLClassLoader$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at org.ros.internal.message.definition.MessageDefinitionReflectionProvider.get(MessageDefinitionReflectionProvider.java:54)
... 4 more
Error in rosmatlab.subscriber (line 38)
sub = node.addSubscriber(topicName,topicMessageType,bufferLimit);
If you have any idea to resolve it, it would be great.
Originally posted by kpax77 on ROS Answers with karma: 36 on 2015-02-13
Post score: 0
Original comments
Comment by Andromeda on 2015-02-18:
did you solve this problem?
Answer:
I answer my own question. Finally the problem is that tfMessage is not part of the standard roscore distribution. So you have to add it in the ROSMatlab path.
The procedure is the following:
Download the tf.jar here:
https://github.com/rosjava/rosjava_mvn_repo/tree/master/org/ros/rosjava_messages/tf
Don't forget to download it in 'raw'.
Add the jar files to \toolbox\psp\rosmatlab\jars
Edit <\MATLAB>\toolbox\local\classpath.txt by adding the full path to the jar file above the line “# ROS-MATLAB-END”
Restart MATLAB.
And it should work.
Originally posted by kpax77 with karma: 36 on 2015-02-26
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by kpax77 on 2015-02-27:
And now if you know on an example how to use /tf in order to obtain the position of a joint, don't hesitate. | {
"domain": "robotics.stackexchange",
"id": 20878,
"tags": "ros, matlab, transform"
} |
Help!: rosinstall_generator generating an empty rosinstall file | Question:
Hi Everyone,
I am trying to install the Desktop version of Hydro on Mac OS X using Homebrew. I am following the instructions as mentioned on the installation page. But when I try to generate the rosinstall file for downloading and building the ROS stack, my rosinstall_generator command for the desktop version generates an empty file with the following message: No packages/stacks left after applying the exclusions
As mentioned on the installation page, here is the command I used: rosinstall_generator desktop --rosdistro hydro --deps --dry-only > hydro-desktop-dry.rosinstall
I did install rosinstall_generator using pip in the earlier steps. Still, I tried updating it using the pip. Did anyone else face this issue? What am I missing over here?
Thanks,
Jasprit
Originally posted by jaspritsgill on ROS Answers with karma: 88 on 2013-09-29
Post score: 0
Answer:
You're asking for desktop with the --dry-only option. However there are no dry packages in desktop which is why it is giving you an empty list. You should be using --wet-only.
See the instructions here: http://wiki.ros.org/hydro/Installation/OSX/Homebrew/Source for hydro on OS X.
Originally posted by tfoote with karma: 58457 on 2013-09-29
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by jaspritsgill on 2013-09-30:
I am trying to build the rosbuild packages, hence --dry-only. The instructions say that for BARE-BONES no dry packages are required but desktop does. Is that changed in hydro? For building catkin wrkspces I did use --wet-only. I will try proceeding if there are no packages for Desktop too.
Comment by Artem on 2013-10-01:
I am having the same issue, trying to install full-desktop. Im getting "No packages/stacks left after applying the exclusions". Is this situation normal?
Comment by jaspritsgill on 2013-10-01:
It seems so. I checked for full-desktop too and I got the same. I guess there are no dry packages for Hydro desktop, as tfoote mentioned. The installation instructions are not updated it seems. I proceeded with the installation with no dry packages and it seems to be working fine so far.
Comment by tfoote on 2013-10-01:
The instructions were out of date. That is expected as there are no released rosbuild packages in hydro. I've updated the instructions to make a note and remove the out of date lines.
Comment by jaspritsgill on 2013-10-03:
Thanks tfoote! | {
"domain": "robotics.stackexchange",
"id": 15705,
"tags": "ros"
} |
Ignore collision between specific models | Question:
Hi,
Is there a way to ignore collisions between two specific objects? For example, one object passes through another specific object, but keeps colliding with the environment (other models and the ground).
Thanks in advance :)
Originally posted by djou07 on Gazebo Answers with karma: 78 on 2015-06-13
Post score: 1
Answer:
Hi,
you can use the <collide_bitmask> tag in the collision.
Here is the tutorial.
However, you need the latest Gazebo (v6) for this to work, meaning you need to install from source using the default branch for now.
Cheers,
Andrei
Originally posted by AndreiHaidu with karma: 2108 on 2015-06-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by djou07 on 2015-06-15:
thanks a lot for your answer. | {
"domain": "robotics.stackexchange",
"id": 3782,
"tags": "collision"
} |
Are there any online resources one can use to access raw clinical trial data without having to contact those involved in the study? | Question: I want to do some statistical analysis and I have a preference for real data so I was wondering if there's somewhere I can get such data for free without a heap of bother in phoning those involved in the studies in question.
Answer: See these three sites:
ClinicalTrials.gov
NIDA
ClinicalStudyDataRequest
For clinical trials conducted by other agencies (in different countries), you may have to contact those authorities. | {
"domain": "biology.stackexchange",
"id": 3161,
"tags": "statistics, clinical-trial"
} |
Project Euler: #36 Binary and Base 10 Palindromes | Question: Here is the Euler problem referenced, it says:
The decimal number, 585 = 1001001001\$_2\$ (binary), is palindromic in both
bases.
Find the sum of all numbers, less than one million, which are
palindromic in base 10 and base 2.
(Please note that the palindromic number, in either base, may not
include leading zeros.)
My solution is as follows:
def check_palindrome_base_ten(num):
a = str(num)
return a == a[::-1]
def check_palindrome_base_two(num):
a = str(bin(num))[2:]
return a == a[::-1]
def check_palindrome(num):
return check_palindrome_base_ten(num) and check_palindrome_base_two(num)
def sum_palindrome_in_a_range(lower_limit, upper_limit):
return sum(x for x in xrange(lower_limit, upper_limit+1, 2) if check_palindrome(x))
%timeit sum_palindrome_in_a_range(1,1000000)
1 loops, best of 3: 247 ms per loop
I noticed by the end of the solution, after just brute forcing it, that the step could be changed to 2 to use only odd numbers: since the leading binary digit is always 1, the last digit of a binary palindrome must also be 1, so the number must be odd. This literally cut my execution time in half, from ~480ms to 247ms.
In addition, I thought that writing mini functions that store the converted string and then compare it to its reverse would be faster than doing, say:
str(a) == str(a)[::-1]
Because I get to avoid running str and bin twice. Is that correct logic?
Are there any other optimizations I have missed that I can use to reduce runtime and in general make my code more useful? I feel as though I may be stuck in a for loop / list comprehension trap and perhaps am not thinking creatively enough when approaching these solutions. I'm scared that will make my solutions inefficient when I code actual problems. So perhaps this is also a code methodology review in that respect.
Answer:
Are there any other optimizations I have missed that I can use to reduce runtime and in general make my code more useful?
Yes. You're thinking about the problem backwards. As far as I can tell, your solution may be about as fast as you can get in Python while solving the problem directly. That is, something like:
sum(filter(satisfies_problem, xrange(1, 1000000)))
But we can do way better. Palindromes are fairly sparse, so rather than going through a million numbers and checking each one, we can just generate all the palindromes in one base and check whether they are palindromes in the other base. Basically, for every number from 1 to 1000 we can append its own digits in reverse, with or without duplicating the middle digit (e.g. 146 can become the palindromes 14641 and 146641). That is:
def make_palindrome(p, repeat):
result = p
if not repeat: p //= 10
while p > 0:
result = result * 10 + p % 10
p //= 10
return result
Then we can just generate all the right palindromes, base 10, and check if they match base 2:
total = 0
cap = upper_limit / 10 ** (math.log10(upper_limit)/2)
for p in xrange(int(cap) + 1):
for repeat in (True, False):
pal = make_palindrome(p, repeat)
if pal & 1 and lower_limit <= pal <= upper_limit:
as_bin = bin(pal)[2:]
if as_bin == as_bin[::-1]:
total += pal
return total
Timing comparison, this is 100x faster:
Brute Force Search 0.3477s
Palindrome Generator 0.0030s | {
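For reference, the answer's pieces can be assembled into one runnable Python 3 sketch (range replaces xrange; the wrapper name sum_double_palindromes is mine):

```python
import math

def make_palindrome(p, repeat):
    # Mirror the digits of p onto its end; repeat=False mirrors without
    # duplicating the last digit of p (146 -> 14641), repeat=True
    # duplicates it (146 -> 146641).
    result = p
    if not repeat:
        p //= 10
    while p > 0:
        result = result * 10 + p % 10
        p //= 10
    return result

def sum_double_palindromes(lower_limit, upper_limit):
    total = 0
    # Seed values need only half the digits of the upper limit.
    cap = upper_limit / 10 ** (math.log10(upper_limit) / 2)
    for p in range(int(cap) + 1):
        for repeat in (True, False):
            pal = make_palindrome(p, repeat)
            # Only odd candidates can be binary palindromes.
            if pal & 1 and lower_limit <= pal <= upper_limit:
                as_bin = bin(pal)[2:]
                if as_bin == as_bin[::-1]:
                    total += pal
    return total

print(sum_double_palindromes(1, 1000000))  # 872187
```

Note that only about 2000 candidates are ever generated, which is where the 100x speedup over checking a million numbers comes from.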
"domain": "codereview.stackexchange",
"id": 16108,
"tags": "python, programming-challenge, python-2.x"
} |
Utilizing idle socket server to do meaningful thing (timeout after sleep on epoll_wait) | Question: I write an app server that uses TCP socket on Linux. When there is no traffic (no data is sent by client, no client connect() or close()), the process sleeps on epoll_wait() while waiting for events hit the socket file descriptors.
What is a good thing to do while the process is sleeping?
So my idea is to make the sleep timeout short and force the process to read from memory again and again until the events come.
The reason for doing this is to keep performance-critical data hot in the cache.
Is that a worthwhile thing to do?
Or is it better to let the process sleep until events come?
My understanding
If the process sleeps on epoll_wait() for too long, the kernel will schedule other processes to run.
If my app is scheduled away for too long, its data will be evicted from the cache, since the cache is shared across multiple processes and other processes' data will take its place.
epoll_wait documentation
https://man7.org/linux/man-pages/man2/epoll_wait.2.html
Can also be read from man 2 epoll_wait.
Relevant Part of Code (event_loop and exec_epoll_wait)
struct srv_tcp_state {
int epoll_fd;
int tcp_fd;
int tun_fd;
bool stop;
struct_pad(0, 3);
struct cl_slot_stk client_stack;
struct srv_cfg *cfg;
struct client_slot *clients;
uint16_t *epoll_map;
/*
* We only support maximum of CIDR /16 number of clients.
* So this will be `uint16_t [256][256]`
*/
uint16_t (*ip_map)[256];
/* Counters */
uint32_t read_tun_c;
uint32_t write_tun_c;
struct bc_arr bc_arr_ct;
utsrv_pkt_t send_buf;
struct iface_cfg siff;
bool need_iface_down;
bool aff_ok;
struct_pad(1, 4);
cpu_set_t aff;
};
static int exec_epoll_wait(int epoll_fd, struct epoll_event *events,
int maxevents, struct srv_tcp_state *state)
{
int err;
int retval;
int timeout = 50; /* in milliseconds */
retval = epoll_wait(epoll_fd, events, maxevents, timeout);
if (unlikely(retval == 0)) {
/*
* epoll_wait() reaches timeout
*
* TODO: Do something meaningful here.
*/
/*
* Force the process to read critical data so
* it is always hot (at least in L2 or L3?)
*/
memcmp_explicit(state, state, sizeof(*state));
return 0;
}
if (unlikely(retval < 0)) {
err = errno;
if (err == EINTR) {
retval = 0;
prl_notice(0, "Interrupted!");
return 0;
}
pr_err("epoll_wait(): " PRERF, PREAR(err));
return -err;
}
return retval;
}
static int event_loop(struct srv_tcp_state *state)
{
int retval = 0;
int maxevents = 64;
int epoll_fd = state->epoll_fd;
struct epoll_event events[64];
/* Shut the valgrind up! */
memset(events, 0, sizeof(events));
while (likely(!state->stop)) {
retval = exec_epoll_wait(epoll_fd, events, maxevents, state);
if (unlikely(retval == 0))
continue;
if (unlikely(retval < 0))
goto out;
retval = handle_events(state, events, retval);
if (unlikely(retval < 0))
goto out;
}
out:
return retval;
}
memcmp_explicit
This code is located in a different file, which prevents the compiler from inlining, optimizing, or removing the memcmp call (which happens when epoll_wait reaches its timeout).
int memcmp_explicit(const void *s1, const void *s2, size_t n)
{
return memcmp(s1, s2, n);
}
likely and unlikely macros
#define likely(EXPR) __builtin_expect(!!(EXPR), 1)
#define unlikely(EXPR) __builtin_expect(!!(EXPR), 0)
About __builtin_expect
https://stackoverflow.com/questions/7346929/what-is-the-advantage-of-gccs-builtin-expect-in-if-else-statements
Answer: Keeping the cache hot
So my idea is to make the sleep timeout short and force the process to read from memory again and again until the events come.
The reason for doing this is to keep performance-critical data hot in the cache.
If nothing else is running on the same CPU, there is no reason for the cache to go cold. Things in the cache don't time out; they just stay there until they are evicted when necessary.
If there are other processes running, then your strategy might help your socket server, but there are several issues with this:
It is selfish; you might prevent other processes from keeping their data in the cache, thus lowering their performance.
It might not work at all, since once another process gets a time slice, their memory access patterns may evict your data from the cache again.
By not staying idle, your CPU might not get a chance to go into a low power state, thus keeping your cache literally hot.
My recommendation is to just use an infinite timeout for epoll_wait().
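For illustration (the pipe setup and names are mine, and this is Linux-only), Python's select.epoll wrapper shows the same semantics: poll() with the default timeout blocks until a file descriptor is ready instead of spinning on short timeouts.

```python
import os
import select

# A pipe stands in for the TCP sockets; register its read end.
r, w = os.pipe()
ep = select.epoll()
ep.register(r, select.EPOLLIN)

os.write(w, b"x")        # make the fd readable
# With no timeout argument (equivalent to -1 in the C API), this call
# would sleep indefinitely until an event arrives; here it returns
# at once because data is already waiting.
events = ep.poll()
print(len(events))       # 1

ep.close()
os.close(r)
os.close(w)
```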
About memcmp_explicit()
Be aware that compilers are getting better and better at optimizing things. Even if you put memcmp_explicit() in a different translation unit than where it is called, the compiler might optimize it out if link time optimization is enabled. | {
"domain": "codereview.stackexchange",
"id": 40904,
"tags": "performance, c, linux, socket"
} |
How to get error out of astropy constants | Question: How do I get the error value out of an astropy.constants quantity?
In [87]: from astropy import constants as c
In [88]: c.M_sun
Out[88]: <Constant name='Solar mass' value=1.9891e+30 error=5e+25 units='kg' reference="Allen's Astrophysical Quantities 4th Ed.">
In [89]: c.M_sun.value
Out[89]: 1.9891e+30
In [90]: c.M_sun.error
AttributeError: 'Constant' object has no 'error' member
Answer: While I'm not familiar with the package, a very quick look at the documentation suggests that you want
In [90]: c.M_sun.uncertainty
instead. I've just checked and this appears to be correct.
> python -c "from astropy import constants as c ; print c.M_sun.uncertainty"
5e+25 | {
"domain": "astronomy.stackexchange",
"id": 1079,
"tags": "astropy, python"
} |
Fail to separate sound signals by FastICA on real-world recording | Question: I have written a program to perform FastICA on a stereo WAV file using the code on Python MDP FastICA Example
With the audio examples I get very good results.
Then I try a real-world recording using two computer mono microphones connected to the stereo mic input of my PC, with mic 1 on the L channel and mic 2 on the R channel. I test by playing some music in the background while I am talking in a quiet room.
However, running FastICA does not separate the signals at all. Is it possible that the quality of microphones is too poor? Do I need to do anything to the recorded WAV file (16 bits, signed PCM, 44100Hz) before running FastICA?
You can download the recording here.
Answer: ICA in raw form is only suitable for use with phase synchronised observation mixtures. Using microphones as you have described will introduce a phase delay as pointed out by other posters. However this phase delay can be used to great avail. The best known algorithm that deals with stereo separation in the presence of delays is DUET. The links are broken but the references you are looking for are here >http://eleceng.ucd.ie/~srickard/bss.html.
This is the paper you should look for >
A. Jourjine, S. Rickard, and O. Yilmaz, Blind Separation of Disjoint Orthogonal Signals: Demixing N Sources from 2 Mixtures, IEEE Conference on Acoustics, Speech, and Signal Processing (ICASSP2000), Volume 5, Pages 2985-2988, Istanbul, Turkey, June 2000 | {
"domain": "dsp.stackexchange",
"id": 3405,
"tags": "ica, python"
} |
JavaScript and jQuery check for image file and assign CSS | Question: This is my first attempt at JavaScript, so I am looking to learn.
I have a website that has jQuery built in and I wanted to leverage that in the following way:
I want to get a number variable that precedes a certain text string (*n*/portal.css)
Get the name of the current webpage
Use this number and pageName to help build a URL to a specifically named image file. The image will have the same name as the webpage with "_hd.jpg" appended to it
Check if the image file exists and, if it does, add this image as a CSS div background; otherwise do nothing
<script>
$(function () {
$(document).foundation();
//get webpage name
var loc = window.location.href
var fileNameIndex = loc.lastIndexOf("/") + 1;
var dotIndex = loc.lastIndexOf('.');
var pageName = loc.substring(fileNameIndex, dotIndex < fileNameIndex ? loc.length : dotIndex);
//get number variable
var head = document.head
var portalCss = head.innerHTML.match(/\/[0-9]+\/portal.css/);
var portalNum = portalCss[0].replace("/", "")
portalNum = portalNum.replace("portal.css", "")
//build url for image
var HeaderImgUrl = '/Portals/' + portalNum + '/Images/' + pageName + '_hd.jpg'
//check if image exists and assign css values to css class RowPageHeader
$.ajax({
url: HeaderImgUrl, //or your url
success: function (data) {
$(".RowPageHeader").css("background-image", "url(" + HeaderImgUrl + ")");
$(".RowPageHeader").css("background-position", "center top");
$(".RowPageHeader").css("background-size", "100% auto");
$(".RowPageHeader").css("background-repeat", "no-repeat");
$(".RowPageHeader").css("overflow", "hidden");
},
})
var FooterImgUrl = '/Portals/' + portalNum + '/Images/footer_bg.jpg'
$.ajax({
url: FooterImgUrl, //or your url
success: function (data) {
$(".RowFooter").css("background-image", "url(" + FooterImgUrl + ")");
$(".RowFooter").css("background-position", "center top");
$(".RowFooter").css("background-size", "2000px auto");
$(".RowFooter").css("background-repeat", "no-repeat");
$(".RowFooter").css("overflow", "hidden");
},
})
})
</script>
Two questions:
How can I improve this code?
I want to move the CSS rules into a CSS class in the pages skin.css file. How can I add a CSS class to the existing one that I am adding the CSS rules to?
Answer: Basically every duplicated object can be made one object. And maybe use a function so you can call it when required.
If you use an IIFE like below, you can put this code in an external *.js file. It will then run asap without stalling other http requests while loading the page. Also pass in the jQuery object as a parameter so you can safely use $ instead.
The configuration separation allows you to easily change things when needed. They're also grouped and give you a better overview. Compared to the other objects the cfg variable is the "local global" so you can use cfg throughout your whole script.
The window object groups your functionality or component in one object. Basically one DOM object for a project is ideal imho. Allows you to access window.ComponentName in other files as well.
The activation is under your control. For the moment it uses the common DOM ready event and basically just runs the init() function.
// @param ($): jquery version x?
(function ($) {
// 1. CONFIGURATION
var cfg = {
rowpageheader: '.RowPageHeader',
rowfooter: '.RowFooter',
options: {
header: {
'background-position': 'center top',
'background-size': '100% auto',
'background-repeat': 'no-repeat',
'overflow': 'hidden'
},
footer: {
'background-position': 'center top',
'background-size': '2000px auto',
'background-repeat': 'no-repeat',
'overflow': 'hidden'
}
},
path: {
portal: 'Portals',
images: 'Images'
},
misc: {
portalcss: 'portal.css',
imagequality: '_hd.jpg',
footerbg: 'footer_bg.jpg'
}
};
// 2. DOM OBJECT
window.Images = {
init: function () {
this.cacheItems();
this.activate();
},
cacheItems: function () {
this.rowPageHeader = $(cfg.rowpageheader);
this.rowFooter = $(cfg.rowfooter);
},
activate: function () {
var cfgOptions = cfg.options,
portalNum = this.getPortalNum(),
pageName = this.getPageName();
this.updateHeader(portalNum, pageName, cfgOptions.header);
this.updateFooter(portalNum, cfgOptions.footer);
},
getPageName: function () {
var loc = window.location.href,
fileNameIndex = loc.lastIndexOf('/') + 1,
dotIndex = loc.lastIndexOf('.');
return loc.substring(fileNameIndex, dotIndex < fileNameIndex ? loc.length : dotIndex);
},
getPortalNum: function () {
var head = document.head,
portalcss = cfg.misc.portalcss,
regexp = '/\/[0-9]+\/' + portalcss + '/',
portalCss = head.innerHTML.match(regexp),
portalNum = portalCss[0].replace('/', '');
return portalNum.replace(portalcss, '');
},
updateHeader: function (portalNum, pageName, options) {
var proj = this,
cfgPath = cfg.path,
headerUrl = [cfgPath.portal, portalNum, cfgPath.images, pageName, cfg.misc.imagequality];
$.ajax({
url: headerUrl.join('/')
}).done(function () {
var opt = $.extend(options, { backgroundImage: 'url(' + headerUrl + ')' });
proj.rowPageHeader.css(opt);
});
},
updateFooter: function (portalNum, options) {
var proj = this,
cfgPath = cfg.path,
footerUrl = [cfgPath.portal, portalNum, cfgPath.images, cfg.misc.footerbg];
$.ajax({
url: footerUrl.join('/')
}).done(function () {
var opt = $.extend(options, { backgroundImage: 'url(' + footerUrl + ')' });
proj.rowFooter.css(opt);
});
}
};
// 3. DOM READY
$(function () {
$(document).foundation();
Images.init();
});
} (jQuery));
Haven't checked if this works, but it should ^^ | {
"domain": "codereview.stackexchange",
"id": 4854,
"tags": "javascript, jquery, css, beginner"
} |
Working with dependency injection and factories | Question: Thanks in advance for any insight. All used classes are at the top, and everything starts at the comment:
// where the magic happens
In particular, I am looking for feedback on my attempt at using the factory method and dependency injection. However, I would appreciate any other feedback as well. I have a list of various questions at the bottom.
<?php
class Config {
// Class that holds config info.
// Besides PDO connection info, contains flags for testing mode, live, eCommerce-enabled, etc.
// no setters, only getters... I'm thinking of this class as a glorified associative array
protected $properties = array();
function __construct()
{
// reads app config file(s), sets various keys in $this->properties;
}
public function get($key)
{
return $this->properties[$key];
}
}
class Request {
protected $server;
protected $get;
protected $post;
function __construct(array $server, array $get, array $post)
{
$this->server = $server;
$this->get = $get;
$this->post = $post;
}
// other methods in here like getUri, isAjax,
// getRequestMethod, getPost, getGet, getAgent, getRemoteAddr
}
class Session {
function __construct()
{
// Start the session
session_start();
// Set a user ID cookie, etc
}
// other session-related setters and getters
}
class ModelFactory{
protected $className; // class to instantiate (string)
function __construct($className)
{
$this->className = $className;
}
public function build(Config $c, Request $r, Session $s)
{
$pdo = new PDO(
$c->get('dsn') ,
$c->get('pdo_user') ,
$c->get('pdo_pass'),
$c->get('pdo_options')
);
return new $this->className($pdo, $c, $r, $s);
}
}
class ControllerFactory{
public function build()
{
$c = new Config();
$r = new Request($_SERVER, $_GET, $_POST);
$s = new Session();
// Reads config file with route info, compares it to $r->getUri
// to find name of controller and action within controller.
$name = 'Controller_Example';
$action = 'action_showComments';
// Returns correct child object of Controller.
return new $name($c, $r, $s, $action);
}
}
abstract class Controller{
protected $config;
protected $request;
protected $session;
protected $action;
function __construct(Config $c, Request $r, Session $s, $action )
{
$this->config = $c;
$this->request = $r;
$this->session = $s;
$this->action = $action; // string of name of function to execute
}
protected function before() { /* some extendable code to execute before action */ }
protected function after(){ /* some extendable code to execute after action */ }
public function execute()
{
// would an output buffer be good here? eg. ob_start()
// only doing this because im not sure if $this->$this->action() would work
$method_to_execute = $this->action;
$this->before();
$this->$method_to_execute();
$this->after();
}
}
class Controller_Example extends Controller{
/**
* A function that gets list of recent comments.
*/
public function action_showComments()
{
$m_factory = new ModelFactory('Model_Comment');
$comment_model = $m_factory->build($this->config, $this->request, $this->session);
$comments = $comment_model->getComments();
$title = 'Displaying Comments';
$bid = 'comment';
include '/path/to/views/commentview.php'; // see below for commentview.php
}
}
class Model_Comment {
protected $config;
protected $request;
protected $session;
protected $pdo;
function __construct(PDO $pdo, Config $c, Request $r, Session $s)
{
$this->config = $c;
$this->request = $r;
$this->session = $s;
$this->pdo = $pdo;
}
public function getComments()
{
// uses $this->pdo to query database, returns an array of comments
// may or may not use request, session, or config objects
return array('This is a comment', 'So is this', 'And this is too!');
}
}
// where the magic happens
$factory = new ControllerFactory();
$controller = $factory->build();
$controller->execute();
?>
<!-- commentview.php -->
<!DOCTYPE html>
<head>
<title><?php echo $title ?></title>
</head>
<body id="<?php echo $bid ?>">
<ul>
<?php
foreach ($comments as $comment) {
echo '<li>'.$comment.'</li>';
}
?>
</ul>
</body>
<!-- end commentview.php -->
Questions:
Does $this->$this->action() work correctly if $this->action is a string which is the name of a method in the same class (see Controller::execute()) ?
If not every model needs the config, request, and session objects, does ModelFactory::build() violate the law of demeter? How can I avoid this problem?
As a corollary, is there a way I can make the parameters for Factory::build() variable
in order to have an abstract class or interface for all factories? Example:
abstract class Factory{ abstract function build({variable params}) }
Won't having new operators for factories in places like Controller_Example::action_showComments() defeat the purpose of DI making code more testable?
How could I approach templating html pages in this app?
What are some advantages for using ob_start() in a situation like Controller::execute()?
Answer: 1. $this->$this->action()
You could do this with $this->{$this->action}(). The braces are important for precedence. Without them, PHP wants to break $this->$this->action() up into $this->$this first (using the second $this as a string naming a property of the first $this) and then call the action method on that property.
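As a cross-language aside (the Controller and show_comments names here are made up for illustration), the same string-based method dispatch in Python is getattr(obj, name)():

```python
class Controller:
    def __init__(self, action):
        self.action = action            # method name stored as a string

    def show_comments(self):
        return "comments"

    def execute(self):
        # Python's analogue of PHP's $this->{$this->action}():
        # look the bound method up by name, then call it.
        return getattr(self, self.action)()

print(Controller("show_comments").execute())  # comments
```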
2. LoD vs LSP
I don't think it is exactly the Law of Demeter that is broken. I think it might be the Liskov Substitution Principle.
3. Factory::build
Yes. There are three situations:
// No parameters
return new $className;
// One parameter
return new $className($param);
// Multiple parameters
$object = new \ReflectionClass($className);
return $object->newInstanceArgs($params);
4. Testing with new
See this for testing with new.
5. Templating
Personally I use plain PHP.
6. ob_start
The advantage of buffered output is that nothing is sent until you want it to be. This is important when you may want to set response headers (which must be the first output that is sent).
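The same capture-then-flush pattern can be sketched in Python, with contextlib.redirect_stdout standing in for PHP's ob_start()/ob_get_clean() (the markup string is just an example):

```python
import io
from contextlib import redirect_stdout

buf = io.StringIO()
with redirect_stdout(buf):
    # Nothing reaches the client yet, so headers can still be chosen
    # after "output" has already been produced.
    print("<h1>Hello</h1>")

body = buf.getvalue()
print(body.strip())  # <h1>Hello</h1>
```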
Config
// I'm thinking of this class as a glorified associative array
This is a great place to implement ArrayAccess. You can then access your config settings with:
$config['pdo_user'] // I prefer $config over $c | {
"domain": "codereview.stackexchange",
"id": 1602,
"tags": "php, dependency-injection"
} |
With the quantum entanglement experiment what exactly do they mean by "one particle instantaneously affects the outcome of the other" | Question: It sounds like if you measure the spin of one particle, the other particle immediately registers as the opposite spin without anyone touching the detector. Is this what they mean?
Answer: You ask :
$ \ \ \ \ \ \ $ Will Scientist 2 receive a measurement at 8:32 without touching the detector?
The answer is we don't know.
We believe that in the process of measurement of entangled particles, time has no meaning. We believe an even more amazing thing: that, if the two measurement events are separated by a space-like interval, the result of the measurements is decided upon by both particles. Neither of them is the leader, i.e. neither produces the result and conveys it to the other particle. Both particles decide the joint result.
Just imagine that the two labs are located in rockets moving in opposite directions, and that each scientist performs the measurement of his particle. Imagine also that from the point of view of a third scientist, on the Earth, both measurements are done at 8:32, Greenwich time. The scientist in each rocket would claim that he measured first, and that his measurement result was conveyed to the other lab.
Then, who is right? Which one of the measurements was independent, and which one merely conformed to the information received from the other?
On the other hand, if the measurements are separated by a time-like interval, we can say which measurement was the first one. But, honestly speaking, even in this case we are not sure that the first experiment alone decided the result.
"domain": "physics.stackexchange",
"id": 19191,
"tags": "quantum-mechanics, quantum-entanglement"
} |
DiffDriveController in Gazebo Ros control | Question:
Hi! I set up a sim via RViz & Gazebo, initially using the Gazebo diff drive plugin. Then I noticed some limits of this model and attempted to switch to the gazebo_ros_control diff_drive_controller. The robot spawns and is visualized in Gazebo, but when I try to command it in RViz via joystick or by setting a 2D goal, nothing happens.
What is wrong? Am i missing something?
Are the two libraries substitutes or do they complement each other?
Launch file
<?xml version="1.0" encoding="UTF-8"?>
<launch>
<param name="/use_sim_time" value="true" />
<param name="robot_description" command="$(find xacro)/xacro --inorder $(find nav-sim)/urdf/gbot.urdf.xacro" />
<node pkg="rviz" type="rviz" name="rviz" output="screen"/>
<node pkg="robot_state_publisher" name="robot_state_publisher" type="robot_state_publisher" />
<node pkg="joint_state_publisher" name="joint_state_publisher" type="joint_state_publisher" />
<arg name="x" default="0"/>
<arg name="y" default="0"/>
<arg name="z" default="0"/>
<!-- <include file="$(find nav-sim)/launch/gbot_control_teleop.launch" />-->
<include file="$(find gazebo_ros)/launch/empty_world.launch">
<arg name="world_name" value="$(find nav-sim)/launch/complicated_world_tagged" />
<arg name="debug" value="false" />
<arg name="gui" value="true" />
<arg name="paused" value="false"/>
<arg name="use_sim_time" value="true"/>
<arg name="headless" value="false"/>
<arg name="verbose" value="true"/>
</include>
<node name="spawn_robot_urdf" pkg="gazebo_ros" type="spawn_model" output="screen"
args="-urdf -param robot_description -model gbot.urdf.xacro -x $(arg x) -y $(arg y) -z $(arg z)" >
</node>
<rosparam file="$(find nav-sim)/config/gconfig.yaml" command="load" />
<node name="spawner" pkg="controller_manager" type="spawner"
respawn="false" output="screen" args= "mobile_base_controller" />
</launch>
Plug in
<?xml version="1.0"?>
<robot>
<gazebo reference="base_link">
<plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so">
<robotSimType>gazebo_ros_control/DefaultRobotHWSim</robotSimType>
</plugin>
</gazebo>
<!-- <gazebo reference="base_link">-->
<!-- <plugin name="differential_drive_controller" filename="libgazebo_ros_diff_drive.so">-->
<!-- <alwaysOn>true</alwaysOn>-->
<!-- <legacyMode>false</legacyMode>-->
<!-- <updateRate>20</updateRate>-->
<!-- <leftJoint>left_wheel_joint</leftJoint>-->
<!-- <rightJoint>right_wheel_joint</rightJoint>-->
<!-- <wheelSeparation>${wheel_separation}</wheelSeparation>-->
<!-- <wheelDiameter>${wheel_radius * 2}</wheelDiameter>-->
<!-- <torque>20</torque>-->
<!-- <commandTopic>/twist_mux/cmd_vel</commandTopic>-->
<!-- <odometryTopic>/odom_in</odometryTopic>-->
<!-- <odometryFrame>odom</odometryFrame>-->
<!-- <robotBaseFrame>base_link</robotBaseFrame>-->
<!-- <publishWheelTF>false</publishWheelTF>-->
<!-- <publishWheelJointState>false</publishWheelJointState>-->
<!-- <odometrySource>world</odometrySource> <!– 'encoder' instead of 'world' is also possible –>-->
<!-- <publishTf>1</publishTf>-->
<!-- </plugin>-->
<!-- </gazebo>-->
<!-- hokuyo -->
<gazebo reference="laser_frame_HF">
<sensor type="ray" name="head_hokuyo_sensor_front">
<pose>0 0 0 0 0 0</pose>
<visualize>true</visualize>
<update_rate>40</update_rate>
<ray>
<scan>
<horizontal>
<samples>1800</samples>
<resolution>0.2</resolution>
<min_angle>-3.14</min_angle>
<max_angle>3.14</max_angle>
</horizontal>
</scan>
<range>
<min>0.40</min>
<max>10.0</max>
<resolution>0.01</resolution>
</range>
<noise>
<type>gaussian</type>
<!-- Noise parameters based on published spec for Hokuyo laser
achieving "+-30mm" accuracy at range < 10m. A mean of 0.0m and
stddev of 0.01m will put 99.7% of samples within 0.03m of the true
reading. -->
<mean>0.0</mean>
<stddev>0.00001</stddev>
</noise>
</ray>
<plugin name="gazebo_ros_head_hokuyo_controller" filename="libgazebo_ros_laser.so">
<topicName>/scanHF</topicName>
<frameName>laser_frame_HF</frameName>
</plugin>
</sensor>
</gazebo>
<gazebo reference="laser_frame_HB">
<sensor type="ray" name="head_hokuyo_sensor_back">
<pose>0 0 0 0 0 ${3.14}</pose>
<visualize>true</visualize>
<update_rate>40</update_rate>
<ray>
<scan>
<horizontal>
<samples>1800</samples>
<resolution>0.2</resolution>
<min_angle>-3.14</min_angle>
<max_angle>3.14</max_angle>
</horizontal>
</scan>
<range>
<min>0.40</min>
<max>10.0</max>
<resolution>0.01</resolution>
</range>
<noise>
<type>gaussian</type>
<!-- Noise parameters based on published spec for Hokuyo laser
achieving "+-30mm" accuracy at range < 10m. A mean of 0.0m and
stddev of 0.01m will put 99.7% of samples within 0.03m of the true
reading. -->
<mean>0.0</mean>
<stddev>0.00001</stddev>
</noise>
</ray>
<plugin name="gazebo_ros_head_hokuyo_controller" filename="libgazebo_ros_laser.so">
<topicName>/scanHB</topicName>
<frameName>laser_frame_HB</frameName>
</plugin>
</sensor>
</gazebo>
</robot>
Config file:
mobile_base_controller:
type : "diff_drive_controller/DiffDriveController"
left_wheel : 'left_wheel_joint'
right_wheel : 'right_wheel_joint'
publish_rate: 50.0 # default: 50
pose_covariance_diagonal : [0.001, 0.001, 1000000.0, 1000000.0, 1000000.0, 1000.0]
twist_covariance_diagonal: [0.001, 0.001, 1000000.0, 1000000.0, 1000000.0, 1000.0]
# Wheel separation and diameter. These are both optional.
# diff_drive_controller will attempt to read either one or both from the
# URDF if not specified as a parameter
# wheel_separation : 1.0
# wheel_radius : 0.3
# Wheel separation and radius multipliers
wheel_separation_multiplier: 1.0 # default: 1.0
wheel_radius_multiplier : 1.0 # default: 1.0
# Velocity commands timeout [s], default 0.5
cmd_vel_timeout: 0.25
# Base frame_id
base_frame_id: base_link #default: base_link
# Odom frame_id
odom_frame_id: /odom #default: /odom
# Velocity and acceleration limits
# Whenever a min_* is unspecified, default to -max_*
linear:
x:
has_velocity_limits : true
max_velocity : 1.0 # m/s
min_velocity : -0.5 # m/s
has_acceleration_limits: true
max_acceleration : 0.8 # m/s^2
min_acceleration : -0.4 # m/s^2
has_jerk_limits : true
max_jerk : 5.0 # m/s^3
angular:
z:
has_velocity_limits : true
max_velocity : 1.7 # rad/s
has_acceleration_limits: true
max_acceleration : 1.5 # rad/s^2
has_jerk_limits : true
max_jerk : 2.5 # rad/s^3
/gazebo_ros_control:
pid_gains:
left_wheel_joint:
p: 1.0
i: 1.0
d: 0.0
right_wheel_joint:
p: 1.0
i: 1.0
d: 0.0
#left_wheel:
# type: velocity_controllers/JointVelocityController
# joint: left_wheel_joint
# pid: { p: 1.0, i: 1.0, d: 0.0 }
#
#right_wheel:
# type: velocity_controllers/JointVelocityController
# joint: right_wheel_joint
# pid: { p: 1.0, i: 1.0, d: 0.0 }
Robot urdf:
<?xml version="1.0"?>
<robot xmlns:xacro="http://www.ros.org/wiki/xacro" name="gbot">
<xacro:property name="base_width" value="0.202"/>
<xacro:property name="base_len" value="0.3"/>
<xacro:property name="wheel_radius" value="0.030"/>
<xacro:property name="base_wheel_gap" value="0.007"/>
<xacro:property name="wheel_separation" value="0.15"/>
<xacro:property name="wheel_joint_offset" value="0.02"/>
<xacro:property name="caster_wheel_radius" value="0.015"/>
<xacro:property name="caster_wheel_joint_offset" value="0.1"/>
<xacro:property name="laser_radius" value="0.03"/>
<xacro:property name="laser_len" value="0.02"/>
<xacro:macro name="box_inertia" params="m w h d">
<inertial>
<mass value="${m}"/>
<inertia ixx="${m / 12.0 * (d*d + h*h)}" ixy="0.0" ixz="0.0" iyy="${m / 12.0 * (w*w + h*h)}" iyz="0.0" izz="${m / 12.0 * (w*w + d*d)}"/>
</inertial>
</xacro:macro>
<xacro:macro name="cylinder_inertia" params="m r h">
<inertial>
<mass value="${m}"/>
<inertia ixx="${m*(3*r*r+h*h)/12}" ixy = "0" ixz = "0" iyy="${m*(3*r*r+h*h)/12}" iyz = "0" izz="${m*r*r/2}"/>
</inertial>
</xacro:macro>
<!-- <link name="dummy">-->
<!-- </link>-->
<link name="base_link">
<xacro:box_inertia m="10" w="${base_len}" h="${base_width}" d="0.01"/>
<visual>
<geometry>
<box size="${base_len} ${base_width} 0.02"/>
</geometry>
</visual>
<collision>
<geometry>
<box size="${base_len} ${base_width} 0.01"/>
</geometry>
</collision>
</link>
<!-- <joint name="dummy_joint" type="fixed">-->
<!-- <parent link="dummy"/>-->
<!-- <child link="base_link"/>-->
<!-- </joint>-->
<link name="base_footprint">
<xacro:box_inertia m="20" w="0.001" h="0.001" d="0.001"/>
<visual>
<origin xyz="0 0 0" rpy="0 0 0" />
<geometry>
<box size="0.001 0.001 0.001" />
</geometry>
</visual>
</link>
<joint name="base_link_joint" type="fixed">
<origin xyz="0 0 ${wheel_radius + 0.005}" rpy="0 0 0" />
<parent link="base_link"/>
<child link="base_footprint"/>
</joint>
<!-- <xacro:macro name="sensor_laser">-->
<link name="laser_frame_HF">
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<mass value="1" />
<geometry>
<cylinder radius="${laser_radius}" length="${laser_len}"/>
</geometry>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<cylinder radius="${laser_radius}" length="${laser_len}"/>
</geometry>
</collision>
<xacro:cylinder_inertia m="1" r="${laser_radius}" h="${laser_len}"/>
</link>
<joint name="sensor_laser_joint" type="fixed">
<origin xyz="${base_len/3} 0 0.1" rpy="0 0 0"/>
<parent link="base_footprint" />
<child link="laser_frame_HF" />
</joint>
<link name="laser_frame_HB">
<visual>
<origin xyz="0 0 0" rpy="0 0 ${3.14}"/>
<mass value="1" />
<geometry>
<cylinder radius="${laser_radius}" length="${laser_len}"/>
</geometry>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 ${3.14}"/>
<geometry>
<cylinder radius="${laser_radius}" length="${laser_len}"/>
</geometry>
</collision>
<xacro:cylinder_inertia m="1" r="${laser_radius}" h="${laser_len}"/>
</link>
<joint name="sensor_laser_joint2" type="fixed">
<origin xyz="-${base_len/3} 0 0.1" rpy="0 0 3.14"/>
<parent link="base_footprint" />
<child link="laser_frame_HB" />
</joint>
<!-- </xacro:macro>-->
<!-- <xacro:sensor_laser />-->
<xacro:macro name="wheel" params="prefix reflect">
<link name="${prefix}_wheel">
<visual>
<origin xyz="0 0 0" rpy="${pi/2} 0 0"/>
<geometry>
<cylinder radius="${wheel_radius}" length="0.005"/>
</geometry>
</visual>
<collision>
<origin xyz="0 0 0" rpy="${pi/2} 0 0"/>
<geometry>
<cylinder radius="${wheel_radius}" length="0.005"/>
</geometry>
</collision>
<xacro:cylinder_inertia m="10" r="${wheel_radius}" h="0.005"/>
</link>
<joint name="${prefix}_wheel_joint" type="continuous">
<axis xyz="0 1 0" rpy="0 0 0" />
<parent link="base_link"/>
<child link="${prefix}_wheel"/>
<origin xyz="0 ${((base_width/2)+base_wheel_gap)*reflect} -0.005" rpy="0 0 0"/>
</joint>
</xacro:macro>
<transmission name="left_wheel_transmission">
<type>transmission_interface/SimpleTransmission</type>
<joint name="left_wheel_joint">
<hardwareInterface>hardware_interface/VelocityJointInterface</hardwareInterface>
</joint>
<actuator name="left_wheel_actuator">
<mechanicalReduction>1</mechanicalReduction>
<hardwareInterface>VelocityJointInterface</hardwareInterface>
</actuator>
</transmission>
<transmission name="right_wheel_transmission">
<type>transmission_interface/SimpleTransmission</type>
<joint name="right_wheel_joint">
<hardwareInterface>hardware_interface/VelocityJointInterface</hardwareInterface>
</joint>
<actuator name="right_wheel_actuator">
<mechanicalReduction>1</mechanicalReduction>
<hardwareInterface>VelocityJointInterface</hardwareInterface>
</actuator>
</transmission>
<xacro:wheel prefix="left" reflect="1"/>
<xacro:wheel prefix="right" reflect="-1"/>
<xacro:macro name="sphere_inertia" params="m r">
<inertial>
<mass value="${m}"/>
<inertia ixx="${2.0*m*(r*r)/5.0}" ixy="0.0" ixz="0.0" iyy="${2.0*m*(r*r)/5.0}" iyz="0.0" izz="${2.0*m*(r*r)/5.0}"/>
</inertial>
</xacro:macro>
<link name="caster_wheel1">
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<sphere radius="${caster_wheel_radius}"/>
</geometry>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<sphere radius="${caster_wheel_radius}"/>
</geometry>
</collision>
<xacro:sphere_inertia m="1" r="${caster_wheel_radius}"/>
</link>
<joint name="caster_wheel_joint" type="continuous">
<axis xyz="0 1 0" rpy="0 0 0" />
<parent link="base_link"/>
<child link="caster_wheel1"/>
<origin xyz="${caster_wheel_joint_offset} 0 -${caster_wheel_radius+0.005}" rpy="0 0 0"/>
</joint>
<link name="caster_wheel2">
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<sphere radius="${caster_wheel_radius}"/>
</geometry>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<sphere radius="${caster_wheel_radius}"/>
</geometry>
</collision>
<xacro:sphere_inertia m="5" r="${caster_wheel_radius}"/>
</link>
<joint name="caster_wheel_joint2" type="continuous">
<axis xyz="0 1 0" rpy="0 0 0" />
<parent link="base_link"/>
<child link="caster_wheel2"/>
<origin xyz="-${caster_wheel_joint_offset} 0 -${caster_wheel_radius+0.005}" rpy="0 0 0"/>
</joint>
<xacro:include filename="$(find nav-sim)/urdf/_d435.urdf.xacro" />
<sensor_d435 parent="base_link">
<origin xyz="0 0 0" rpy="0 0 0"/>
</sensor_d435>
<xacro:include filename="$(find nav-sim)/urdf/gbot_gazebo_plugins.urdf.xacro"/>
</robot>
Plug-ins:
<?xml version="1.0"?>
<robot>
<gazebo>
<plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so">
</plugin>
</gazebo>
<!-- <gazebo>-->
<!-- <plugin name="differential_drive_controller" filename="libgazebo_ros_diff_drive.so">-->
<!-- <alwaysOn>false</alwaysOn>-->
<!-- <legacyMode>false</legacyMode>-->
<!-- <updateRate>20</updateRate>-->
<!-- <leftJoint>left_wheel_joint</leftJoint>-->
<!-- <rightJoint>right_wheel_joint</rightJoint>-->
<!-- <wheelSeparation>${wheel_separation}</wheelSeparation>-->
<!-- <wheelDiameter>${wheel_radius * 2}</wheelDiameter>-->
<!-- <torque>20</torque>-->
<!-- <commandTopic>/twist_mux/cmd_vel</commandTopic>-->
<!-- <odometryTopic>/odom_in</odometryTopic>-->
<!-- <odometryFrame>odom</odometryFrame>-->
<!-- <robotBaseFrame>base_link</robotBaseFrame>-->
<!-- </plugin>-->
<!-- </gazebo>-->
<!-- hokuyo -->
<gazebo reference="laser_frame_HF">
<sensor type="ray" name="head_hokuyo_sensor_front">
<pose>0 0 0 0 0 0</pose>
<visualize>true</visualize>
<update_rate>40</update_rate>
<ray>
<scan>
<horizontal>
<samples>1800</samples>
<resolution>0.2</resolution>
<min_angle>-3.14</min_angle>
<max_angle>3.14</max_angle>
</horizontal>
</scan>
<range>
<min>0.40</min>
<max>10.0</max>
<resolution>0.01</resolution>
</range>
<noise>
<type>gaussian</type>
<!-- Noise parameters based on published spec for Hokuyo laser
achieving "+-30mm" accuracy at range < 10m. A mean of 0.0m and
stddev of 0.01m will put 99.7% of samples within 0.03m of the true
reading. -->
<mean>0.0</mean>
<stddev>0.00001</stddev>
</noise>
</ray>
<plugin name="gazebo_ros_head_hokuyo_controller" filename="libgazebo_ros_laser.so">
<topicName>/scanHF</topicName>
<frameName>laser_frame_HF</frameName>
</plugin>
</sensor>
</gazebo>
<gazebo reference="laser_frame_HB">
<sensor type="ray" name="head_hokuyo_sensor_back">
<pose>0 0 0 0 0 ${3.14}</pose>
<visualize>true</visualize>
<update_rate>40</update_rate>
<ray>
<scan>
<horizontal>
<samples>1800</samples>
<resolution>0.2</resolution>
<min_angle>-3.14</min_angle>
<max_angle>3.14</max_angle>
</horizontal>
</scan>
<range>
<min>0.40</min>
<max>10.0</max>
<resolution>0.01</resolution>
</range>
<noise>
<type>gaussian</type>
<!-- Noise parameters based on published spec for Hokuyo laser
achieving "+-30mm" accuracy at range < 10m. A mean of 0.0m and
stddev of 0.01m will put 99.7% of samples within 0.03m of the true
reading. -->
<mean>0.0</mean>
<stddev>0.00001</stddev>
</noise>
</ray>
<plugin name="gazebo_ros_head_hokuyo_controller" filename="libgazebo_ros_laser.so">
<topicName>/scanHB</topicName>
<frameName>laser_frame_HB</frameName>
</plugin>
</sensor>
</gazebo>
</robot>
Originally posted by prcgnn on ROS Answers with karma: 11 on 2022-03-21
Post score: 1
Answer:
Hi prcgnn. These two libraries are not alternatives to each other; they play complementary roles. ROS Control is a generic framework to implement controllers in ROS applications, for example, the diff_drive_controller. gazebo_ros_control is a Gazebo Plugin that "talks" with the ROS Control stack (through a Hardware Interface), allowing you to control a robot in Gazebo using a ROS Controller.
I suggest you read the documentation from ROS Control, and also how it works with Gazebo (link).
That being said, as you did not post the entire robot's URDF, I imagine that you did not add the transmissions associated with the joints you need to move (link).
Could you edit your question and include your terminal output? It probably has some useful information too.
Also, the way you define the yaml file, instead of "cmd_vel", your rostopic will probably be "mobile_base_controller/cmd_vel". It works fine, but if you want to use the teleop or rviz's joystick, you need to set them to use the new name.
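For reference, a remap of that kind can be sketched in the launch file; the element placement and the exact topic names below are assumptions based on the question's setup:

```xml
<!-- hypothetical launch-file fragment: inside the node that spawns/loads
     the controller, redirect the namespaced topic to the plain one -->
<remap from="mobile_base_controller/cmd_vel" to="cmd_vel"/>
```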
Check this tutorial about diff_drive_controller, it may help you.
I hope this information is helpful, and you can solve your issue.
Originally posted by schulze18 with karma: 91 on 2022-03-22
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by prcgnn on 2022-03-23:
Thank you for your reply. I just added the missing URDF part. Have a look and tell me if you see something strange. The fact is this robot is part of a larger navigation module based on twist mux (cmd_vel and joystick) and an odom frame with topic odom_in. In the gazebo plug-in I was able to specify all these parameters but in this ros_control I am not. Any hint?
Comment by schulze18 on 2022-03-24:
In general, the ros controllers set their topic names based on the controller namespace ("mobile_base_controller" in your case). But you can use topic remap to change it (link). Is your controller working? For example, if you test without the joystick, teleop, etc, and you just do a "rostopic pub" into the topic ending with "cmd_vel" (probably "mobile_base_controller/cmd_vel"). Does the robot move? if you do a "rostopic list", is there any topic with "cmd_vel"?
Comment by prcgnn on 2022-03-24:
Yes, I already implemented the remap and everything goes through the cmd_vel and odom_in topics.
If testing with rostopic pub, the robot moves, but still slowly.
Yes, if I input rostopic list, cmd_vel is shown. It seems like rviz and gazebo behave differently. For example, a rotation of 90 deg in rviz corresponds to ~60 deg in gazebo. Any idea?
Comment by schulze18 on 2022-03-25:
Make sure you are continuously publishing into cmd_vel, by using "-r 100" in the rostopic pub. Do your wheels seem to be rotating in place in Gazebo? Like slipping on the floor? If so, you probably need to tune the friction and physical coefficients of your link (setting max_step_size to 0.001 and real_time_update_rate to 1000 in the Gazebo Physics usually also helps in those cases).
Comment by prcgnn on 2022-03-29:
Is it possible to tune Gazebo Physics inside the urdf file? Or where do I have to specify them? I have no sdf.
Yes, I confirm that the robot moves when publishing to the cmd_vel topic. | {
"domain": "robotics.stackexchange",
"id": 37518,
"tags": "ros, gazebo, rviz, gazebo-ros-control"
} |
Flow in parallel vertical pipes | Question:
Consider a tank open to atmospheric pressure, with a large cross-section area A1 and height H, that contains an incompressible fluid.
Two vertical pipes with length H and cross-section area A2 are connected to the bottom of the tank. These pipes are connected to a third, horizontal pipe with a cross-section area A3, which begins at the end of the left vertical pipe, merges with the right vertical pipe, and continues right (so in general the pressure there is unknown and not atmospheric).
Assuming steady, irrotational flow, is it possible to infer anything regarding the ratio of flow in the left and right vertical pipes?
Bernoulli's principle applies for a streamline connecting the top of the tank with each of the left/right vertical pipes, as well as with the horizontal pipe, so
$2\rho gH+\frac{1}{2}\rho(\frac{dH}{dt})^2+P_{atm}=\rho gH+\frac{1}{2}\rho v_{left}^2+P_{left}=\rho gH+\frac{1}{2}\rho v_{right}^2+P_{right}=\frac{1}{2}\rho v_{bottom}^2+P_{bottom}$
where $v_{bottom}, P_{bottom}$ refer to the velocity and pressure at a position to the right of the rightmost pipe.
From continuity we get
$A_1\frac{dH}{dt}=A_2(v_{left}+v_{right})=A_3v_{bottom}$
So we have 5 equations, but we have 7 properties (velocities and pressures), and since the system should in general be predictable, it seems to me that there is another constraint missing. Additional equations can be made for any point in the vertical pipes (since potential energy is exchanged for kinetic energy), as well as for the bottom pipe, left of the rightmost intersection, but that doesn't seem to provide any helpful information.
Answer: Pressure drop is the driving force for flow. Knowing that, you can see that the flow from the left vertical pipe has a longer distance to travel than flow from the right vertical pipe. You also know that the vertical pipes meets at a junction, and the pressure there is only one (unknown) value, which gives you a differential pressure equation for the individual flows to that junction. Finally, from the junction to the outlet on the right hand side of the picture, there has to be a differential pressure to support the flow in that common line. From this information, there should be enough equations to solve for the unknowns. | {
"domain": "physics.stackexchange",
"id": 60430,
"tags": "fluid-dynamics, flow"
} |
The RA and Dec of lunar poles | Question: The moon faces the earth with one fixed side. However, there is a small libration.
So I am not sure whether the south and north poles of the moon are fixed. Even if they are fixed points on the moon, the polar axis should point at different locations in the sky.
What are the general positions of the two points in the equatorial coordinate system of our earth?
Answer: @questionhang, the moon's celestial north pole is always 1.543 degrees from the ecliptic north pole (Earth's orbital pole), which, according to en.wikipedia.org/wiki/Orbital_pole was at RA 18h 0m 0.0s, D +66° 33′ 38.55″ at J2000.
If an error of about 1.5 degrees is small enough for you, then use the ecliptic north pole.
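For somewhat better accuracy, one can place the pole 1.543 degrees from the ecliptic north pole and convert to equatorial coordinates. This sketch sweeps the ecliptic longitude of the pole offset over the circle that the 18.6-year node precession runs through; the J2000 mean obliquity and the sampled longitudes are assumptions for illustration:

```python
import math

# Place the lunar north pole 1.543 deg from the ecliptic north pole and
# convert ecliptic to equatorial coordinates (standard rotation by the
# obliquity of the ecliptic).
EPS = math.radians(23.439)          # mean obliquity of the ecliptic, J2000

def ecliptic_to_equatorial(lam, beta):
    """Ecliptic longitude/latitude (radians) -> (RA, Dec) in radians."""
    ra = math.atan2(
        math.sin(lam) * math.cos(EPS) - math.tan(beta) * math.sin(EPS),
        math.cos(lam))
    dec = math.asin(math.sin(beta) * math.cos(EPS)
                    + math.cos(beta) * math.sin(EPS) * math.sin(lam))
    return ra % (2 * math.pi), dec

beta = math.radians(90.0 - 1.543)   # ecliptic latitude of the lunar pole
for lon_deg in (0, 90, 180, 270):   # four points on the precession circle
    ra, dec = ecliptic_to_equatorial(math.radians(lon_deg), beta)
    print(lon_deg, round(math.degrees(ra), 1), round(math.degrees(dec), 1))
```

The declination stays within about 1.5 degrees of the ecliptic pole's +66.56 degrees, as the answer states.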
If not, the problem is that the moon's north pole precesses through that whole circle every 18.6 years, so it changes fairly rapidly. If one knew accurately where it was on a given date, one could calculate where it should be now, but I don't find those numbers either. For my purposes, I am ok using the ecliptic north pole. | {
"domain": "astronomy.stackexchange",
"id": 2674,
"tags": "the-moon, pole"
} |
install groovy from source fails at pcl | Question:
Hi,
I'm trying to install groovy from source on a Gentoo laptop using instructions from here:
www.ros.org/wiki/groovy/Installation/Source
It fails while building the point cloud library. Please help on how to proceed.
Thanks.
The command I'm using to build is:
./src/catkin/bin/catkin_make_isolated --install-space /opt/ros/groovy --cmake-args -DSETUPTOOLS_DEB_LAYOUT=OF
This is the first error it threw up while building the pcl package:
[ 17%] Building CXX object examples/common/CMakeFiles/pcl_example_check_if_point_is_valid.dir/example_check_if_point_is_valid.cpp.o
Linking CXX shared library ../lib/libpcl_io.so
Linking CXX executable ../../bin/pcl_example_check_if_point_is_valid
/usr/lib/gcc/x86_64-pc-linux-gnu/4.6.3/../../../../x86_64-pc-linux-gnu/bin/ld: /usr/local/lib/vtk-5.10/libvtkCommon.a(vtkFloatArray.cxx.o): relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
/usr/local/lib/vtk-5.10/libvtkCommon.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
make[2]: *** [lib/libpcl_io.so.1.6.0] Error 1
make[1]: *** [io/CMakeFiles/pcl_io.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
But then it goes on and then breaks down completely here:
[ 22%] Building CXX object sample_consensus/CMakeFiles/pcl_sample_consensus.dir/src/prosac.cpp.o
Linking CXX shared library ../lib/libpcl_sample_consensus.so
[ 22%] Built target pcl_sample_consensus
make: *** [all] Error 2
Traceback (most recent call last):
File "./src/catkin/bin/../python/catkin/builder.py", line 658, in build_workspace_isolated
number=index + 1, of=len(ordered_packages)
File "./src/catkin/bin/../python/catkin/builder.py", line 456, in build_package
install, jobs, force_cmake, quiet, last_env, cmake_args, make_args
File "./src/catkin/bin/../python/catkin/builder.py", line 359, in build_cmake_package
run_command(make_cmd, build_dir, quiet)
File "./src/catkin/bin/../python/catkin/builder.py", line 186, in run_command
raise subprocess.CalledProcessError(proc.returncode, ' '.join(cmd))
CalledProcessError: Command '/home/ro/ros_catkin_ws/devel_isolated/nodelet_topic_tools/env.sh make -j4 -l4' returned non-zero exit status 2
<== Failed to process package 'pcl':
Command '/home/ro/ros_catkin_ws/devel_isolated/nodelet_topic_tools/env.sh make -j4 -l4' returned non-zero exit status 2
Reproduce this error by running:
==> /home/ro/ros_catkin_ws/devel_isolated/nodelet_topic_tools/env.sh make -j4 -l4
Command failed, exiting.
Originally posted by logicalguy on ROS Answers with karma: 1 on 2013-04-03
Post score: 0
Answer:
You need to fix your installation of VTK for your 64 bit architecture. You do not appear to have compiled it with -fPIC as stated in the error message.
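One typical way to do that with an out-of-source build is to reconfigure with position-independent code and rebuild; the directories below are hypothetical:

```sh
# reconfigure and rebuild VTK with -fPIC so its static libraries can be
# linked into shared objects such as libpcl_io.so
cd ~/vtk-build                                              # hypothetical build dir
cmake -DCMAKE_CXX_FLAGS=-fPIC -DCMAKE_C_FLAGS=-fPIC ~/VTK   # hypothetical source dir
make -j4
sudo make install
```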
Originally posted by tfoote with karma: 58457 on 2013-04-04
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by logicalguy on 2013-04-05:
Yes, I had to rebuild vtk from source and it worked. Thanks. | {
"domain": "robotics.stackexchange",
"id": 13677,
"tags": "ros, pcl, ros-groovy, build"
} |
Why do wild lizards often tap with their front leg? | Question: I have a lot of wild lizards (possibly Podarcis muralis) in my garden, and I have noticed some of their behavior I'd like to understand:
whenever they are in 'still' position on any surface, they are quite often 'beating' with one of their front legs on the surface
in some cases they put the tail of the mate in the mouth
What is this behavior? Some kind of communication?
Answer: I found a paper(1) about the 'tail in the mouth' behaviour in the whiptail lizard, Cnemidophorus ocellifer:
Every time that the female moved, the male followed her. During these periods, he often maintained physical contact, covering her hind legs and base of tail region. The male also performed a series of tongue-flicks on the female's back during this activity. During the observation, the female twice executed a sequence of two or three sinuous, figure-eight movements, which were confined to a small area, and after which the female moved away. After each movement, the male remained motionless for a few moments and then resumed following her while foraging. After 45 minutes and nearly 15 m from their initial positions, the pair entered in dense underbrush, where we could not continue to observe them. While we observed the pair, there was no agonistic encounter between the accompanying male and other males for access or copula with the female.
The paper continues to explain this behaviour as simply a mating strategy:
Two conditional mating strategies have been described for teiid males (Zaldívar-Rae & Drummond 2007). 1) In consensual copulations, the male courts the female, slowly circling her for several minutes, then straddling her and copulating; this strategy is often performed by a male companion, and thus is linked to accompaniment. 2) Opportunistic copulations are not preceded by courtship, and are characterized by a male chasing and holding a foraging female, and not accompanying her after copulation (Zaldívar-Rae & Drummond 2007).
The 'tapping on the surface they are on' seems like an attempt to induce vibrations, which is a method of communication.(2)
Some chameleon species communicate with one another by vibrating the substrate that they are standing on, such as a tree branch or leaf. Animals that use vibrational communication exhibit unique adaptations in morphology (i.e., body form) that enable them to detect vibration and use it in communication. These include unique adaptations in ear and jaw morphology that give the animal direct contact with the surface they are standing on, and enable them to detect subtle vibrations. Lizards that live on substrates that can be easily moved (such as thin tree branches or leaves) are probably more likely to use vibrational communication than lizards that live on substrates that do not transmit vibrations as easily, such as the ground or thick tree trunks.
1- https://www.scielo.br/scielo.php?script=sci_arttext&pid=S1676-06032011000400031#:~:text=The%20most%20commonly%20reported%20mating,Bull%202000%2C%20Ribeiro%20et%20al.
2- https://en.wikipedia.org/wiki/Lizard_communication | {
"domain": "biology.stackexchange",
"id": 10783,
"tags": "ethology, herpetology"
} |
Find the action from given equations of motion | Question: Is there a systematic procedure to generally obtain an appropriate action that corresponds to any given equations of motion (if I know that it exists)?
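Whatever route one takes, a candidate action can always be checked against the given equations of motion by testing stationarity on a known solution. Here is a numerical sketch for the target equation $\ddot{x} = -\omega^2 x$ with the guess $S[x]=\int(\tfrac12\dot{x}^2-\tfrac12\omega^2 x^2)\,dt$; the harmonic oscillator is an illustrative example, not taken from the question:

```python
import math

# Discretize S = sum_i [ ((x_{i+1}-x_i)/dt)^2/2 - omega^2 x_i^2/2 ] * dt
# and check that its gradient vanishes on x(t) = cos(omega t), a
# solution of the target equation x'' = -omega^2 x.
omega, dt, n = 1.0, 1e-3, 2000
x = [math.cos(omega * i * dt) for i in range(n)]

def grad_S(x):
    """Interior components of dS/dx_i (the endpoints carry boundary terms)."""
    return [-(x[i + 1] - 2 * x[i] + x[i - 1]) / dt - omega ** 2 * x[i] * dt
            for i in range(1, len(x) - 1)]

print(max(abs(g) for g in grad_S(x)))   # tiny: stationary up to O(dt^3)
```

If the guessed action were wrong, the gradient would instead be of order dt on a true trajectory.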
Answer: In general, this is difficult, as the same dynamics can be written in many different forms.
In concrete cases, I'd do one of the following:
Work out the Hamiltonian (i.e., look for conserved quantities of a reasonably simple form), then work out pairs of conjugate variables that allow you to write the equation of motion in Hamiltonian form, then invert the canonical formalism to get the Lagrangian.
Write down the most general combinations of terms whose functional derivatives resemble those in the given equation, and then try to match terms, accounting for a possible (but assumed simple) integrating factor. | {
"domain": "physics.stackexchange",
"id": 5173,
"tags": "lagrangian-formalism, variational-principle, action"
} |
photon absorption by atoms causes heat? | Question: I have come up with a weird question: does photon absorption by atoms cause heat? I mean, I was always told that if the photon's frequency is the magic one, the atom absorbs the photon and goes to an excited state. So I have to suppose that heating (an increase in kinetic energy) happens when the frequency is outside the set of permitted transitions. Is that correct?
Answer: This is true. If for example you subject Hydrogen gas to a perfectly monochromatic 121.57 nm laser, then all that will happen is that the gas will scatter the light in all directions, glowing without increasing the temperature.
Otherwise there are many different phenomena involved in the transfer of energy as heat by radiation. For example, in solids photons are absorbed and turn into phonons, lattice vibrations that, when numerous, lead to thermalization; in molecules, photon absorption leads to molecular vibration, which again increases the root mean square speed, etc... | {
"domain": "physics.stackexchange",
"id": 22265,
"tags": "visible-light, photons, atoms, spectroscopy"
} |
Impact of Alan Turing's approach to morphogenesis | Question: Shortly before his untimely passing, the computing pioneer Alan Turing published his most cited paper The Chemical Basis of Morphogenesis (1952).
The central question for Turing was: how does a spherically symmetric embryo develop into a non-spherically symmetric organism under the action of symmetry-preserving chemical diffusion of morphogens (as Turing calls them, an abstract term for arbitrary molecules relevant to development)? The insight that Turing made is that very small stochastic fluctuations in the chemical distribution can be amplified by diffusion to produce stable (i.e. not time varying except slow increases in intensity; although also potentially time-varying with 3 or more morphogens) patterns that break the spherical symmetry.
The theory is beautifully simple and abstract, and produces very important qualitative results (and also quantitative results through computer simulation, which unfortunately Turing did not get to fully explore). However, even in the definition Turing discusses some potential limitations such as ignoring mechanical factors, and the inability to explain preferences in handedness. The particular models he considers -- a cycle of discrete cells and a circular tissue -- do not seem particularly relevant. As far as I understand, the key feature is his observation of symmetry breaking through small stochastic noise and instability.
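Turing's cycle-of-discrete-cells model is easy to simulate. The following sketch linearizes a two-morphogen reaction-diffusion system on a ring of cells and shows tiny random noise being amplified by diffusion into a spatial pattern; the kinetic matrix and diffusivities are illustrative values chosen to satisfy the Turing conditions (uniform state stable without diffusion, some spatial mode unstable with it), not Turing's own numbers:

```python
import random

# Linearized two-morphogen reaction-diffusion on a ring of N cells,
# explicit Euler time stepping.  Without diffusion the uniform state is
# stable (trace < 0, det > 0 for [[a, b], [c, d]]); with Dv >> Du some
# finite-wavelength modes grow, breaking the ring's symmetry.
random.seed(0)
N, dt, steps = 30, 0.02, 1000
Du, Dv = 0.3, 6.0
a, b, c, d = 1.9, -3.0, 2.0, -2.0

# tiny random perturbation of the spatially uniform state
u = [1e-3 * random.gauss(0, 1) for _ in range(N)]
v = [1e-3 * random.gauss(0, 1) for _ in range(N)]

def spread(w):               # crude measure of spatial non-uniformity
    return max(w) - min(w)

initial = spread(u)
for _ in range(steps):
    lap_u = [u[i - 1] - 2 * u[i] + u[(i + 1) % N] for i in range(N)]
    lap_v = [v[i - 1] - 2 * v[i] + v[(i + 1) % N] for i in range(N)]
    u, v = ([u[i] + dt * (a * u[i] + b * v[i] + Du * lap_u[i]) for i in range(N)],
            [v[i] + dt * (c * u[i] + d * v[i] + Dv * lap_v[i]) for i in range(N)])

print(spread(u) / initial)   # >> 1: noise amplified into a spatial pattern
```

Because the model is linear the pattern grows without bound; Turing's full nonlinear kinetics would saturate it at a finite amplitude.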
What was the most important contribution of Turing's paper to developmental biology? Is his approach still used, or has the field moved on to other models? If his approach is used, how was the handedness problem resolved?
Answer: This is a very interesting question. Many people have researched this topic, and many still are. But regardless, I had never heard of Alan Turing's contributions, so thank you!
First of all, I cannot actually find who first coined the term morphogen. Though people had hypothesized that chemicals could play a critical role in development through much of the 20th century, I cannot actually find the first person to use morphogen. But the most important paper really came from a guy named Lewis Wolpert, who came up with the model of a gradient of morphogens leading to differential cell fates. The idea being that if some area of an embryo produces a morphogen at a very high concentration, then as you move away from that area, the concentration goes down. So if this morphogen is required at or above a certain threshold for activity, then only those cells with that concentration will have a certain cell fate, while at lower concentrations, the cells can become something different.
But this does not really answer your question. You are asking how a single cell, which is spherically symmetrical, can determine a particular axis. Though most organisms do this in slightly different ways, the most common feature is that the sperm entry point breaks the symmetry. The best way to explain this is to show you a diagram of Xenopus (frog) eggs.
Image from: http://studentreader.com/nieuwkoop-center/
The Xenopus egg, first of all, is inherently not spherically symmetrical. There is a black animal pole, and a white vegetal pole. The sperm can only enter a narrow region of the egg about 30˚ north of the animal/vegetal line. Upon fertilization, the pigmented areas turn toward the sperm entry point, leaving a gray crescent. Near the gray crescent, in the vegetal pole, a structure called the organiser develops. This organiser creates many of the morphogens that then pattern the rest of the embryo.
Researchers have studied this a lot in many different organisms, but a few things really remain constant: eggs are not exactly spherically symmetrical, and the sperm entry point provides asymmetry. | {
"domain": "biology.stackexchange",
"id": 11125,
"tags": "embryology, history, development"
} |
Confusion about left/right-handed spinor notations | Question: Peskin & Schroeder eq (3.78) states that
$$(\bar{u}_{1R}\sigma^\mu u_{2R})(\bar{u}_{3R}\sigma_\mu u_{4R})=\cdots$$
But I don't understand what the $u_{1R}$ means. Since 4-component Dirac spinor consists of left-handed and right-handed 2-component spinors, we can say
$$\psi(x)=\begin{pmatrix}u_1(p)\\u_2(p)\\u_3(p)\\u_4(p) \end{pmatrix}e^{-ipx}=\begin{pmatrix}\psi_L\\ \psi_R\end{pmatrix} $$
for the positive frequency solutions. That is, $u_1,u_2$ correspond to the left-handed part and $u_3,u_4$ correspond to the right-handed part. But then, the above notations like $u_{1R},u_{2R}$ do not make any sense to me.
Answer: It is stated above the quoted formula: "By sandwiching identity (3.77) between the right-handed portions (i.e., lower half) of Dirac spinors $u_1$, $u_2$, $u_3$, $u_4$, we find the identity" (3.78).
Thus $u_{1R}$ simply means the right-handed component of the spinor $u_1$; the "$1$" does not indicate a component here. Analogously for the other spinors. | {
"domain": "physics.stackexchange",
"id": 97705,
"tags": "field-theory, notation, dirac-equation, spinors"
} |
Expressing conditional in linear program | Question: I have two variables $A$ and $B$, with $A$ being binary and $B$ is a real number where $B \ge 0$. My conditions are:
if B > 0
A = 1
else
A = 0
How to express this as a linear program? I have figured out one condition using the big-M method:
$MA \ge B$
which means that if $B>0$, then $A$ must be 1 to satisfy this constraint. However if $B=0$, then $A$ can be either 1 or 0, and I need another constraint. How do I enforce $A$ to be 0 when $B=0$?
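One candidate big-M pair that enforces both directions is $0 \le -B + MA \le M-1$; a brute-force feasibility check is sketched below. Note that this pair implicitly assumes $B$ is integer-valued when positive, since a fractional $0 < B < 1$ leaves no feasible $A$ — the usual remedy for continuous $B$ is a small epsilon threshold:

```python
# Which values of the binary A satisfy 0 <= -B + M*A <= M - 1 for a
# given B?  M is a big-M constant assumed to bound B from above.
M = 1000

def feasible_A(B):
    return [A for A in (0, 1) if 0 <= -B + M * A <= M - 1]

print(feasible_A(0))     # [0]  -> A forced to 0
print(feasible_A(5))     # [1]  -> A forced to 1
print(feasible_A(0.5))   # []   -> infeasible without an epsilon threshold
```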
Answer: If you know the maximum value of $B$ then you can easily express all comparisons as described here: https://blog.adamfurmanek.pl/2015/09/12/ilp-part-4/
In your case you need the following:
$0 \le -B + MA \le M-1$
assuming that $M$ is big enough. | {
"domain": "cs.stackexchange",
"id": 10145,
"tags": "linear-programming, integer-programming"
} |
Which cyclohexane conformation is more stable? | Question: I got this question wrong on a test and I don't know why:
Question: Which conformation is more stable?
Answer: the chair on the left is more stable.
I thought that the ring conformation has zero ring strain so I'm confused.
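For reference, the ideal tetrahedral bond angle quoted in such problems is $\arccos(-1/3)$, which a two-line computation confirms:

```python
import math

# Angle between vertices of a regular tetrahedron, seen from its center
theta = math.degrees(math.acos(-1.0 / 3.0))
print(round(theta, 2))        # 109.47

# Deviation forced on each carbon by a planar ring's 120-degree angles
print(round(120 - theta, 2))  # 10.53
```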
Answer: By symmetry, the cyclohexane on the right requires each carbon atom to have a $\ce{C-C-C}$ angle of 120 degrees. From VSEPR theory, this is not optimal for tetrahedral carbon. A regular tetrahedron has angles between vertices of approximately 109.5 degrees and the equilibrium bond angles of a secondary carbon won't be too discrepant from this. | {
"domain": "chemistry.stackexchange",
"id": 124,
"tags": "organic-chemistry, cyclohexane"
} |
Light field 5D Plenoptic Function | Question: Wikipedia says "Since rays in space can be parameterized by three coordinates, x, y, and z and two angles $\theta$ and $\phi$, as shown at left, it is a five-dimensional function"
I'm not understanding why $\theta$ and $\phi$ are necessary here, or why this needs to be a five-dimensional function at all. If you change the angle values of $\theta$ or $\phi$ doesn't that rotate the vector in space at its endpoints and thus change the x,y,z values? Can't you parameterize the ray in space with only x,y,z or only $\theta$ and $\phi$?
Answer: You need to know two things about a ray:
its direction, which can be specified by two angles, $\theta$ and $\phi$;
and a point through which it passes, which needs three coordinates, $x,y,z$ (or equivalent in another coordinate system).
You can see that you need all of these by thinking about the case where you only have one or the other set:
if you only have $\theta$ and $\phi$, then obviously there is a whole family of rays which you can construct simply by starting with one and dragging it around, like filling space with a bundle of dried spaghetti;
if you only have $x, y$ & $z$, then you have a point in space, through which you can pass a whole family of rays in all different directions.
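The same point in a tiny numeric sketch (Python; the helper name is my own): fixing only the angles or only the point leaves a whole family of rays, while all five numbers plus a ray parameter $t$ pin down each point on one particular ray.

```python
import math

def ray_point(x, y, z, theta, phi, t):
    """Point at parameter t along the ray through (x, y, z) whose
    direction is set by polar angle theta and azimuth phi."""
    dx = math.sin(theta) * math.cos(phi)
    dy = math.sin(theta) * math.sin(phi)
    dz = math.cos(theta)
    return (x + t * dx, y + t * dy, z + t * dz)

# Same (theta, phi), different base points: parallel "spaghetti" rays.
a = ray_point(0, 0, 0, math.pi / 2, 0, 2.0)
b = ray_point(0, 1, 0, math.pi / 2, 0, 2.0)
assert abs(a[1] - b[1]) == 1.0

# Same base point, different angles: rays heading off differently.
c = ray_point(0, 0, 0, 0, 0, 2.0)   # along the z axis
assert abs(a[2] - c[2]) > 1.0
```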
So you need both things. | {
"domain": "physics.stackexchange",
"id": 28809,
"tags": "electromagnetism, optics, visible-light, geometric-optics"
} |
BFS and DFS tree traversal | Question: I posted some code as part of an answer to another question and I thought it would be a good idea to get that code reviewed also.
Any comments are welcomed, but I am mostly annoyed by terminalNode. It is basically a slightly convenient factory method, but I think it should either be renamed to something more enlightening, or something entirely different should be done.
object TreeTraversal extends App {
case class Node[+T](value: T, left: Option[Node[T]], right: Option[Node[T]]) {
def map[V](f: T => V): Node[V] =
Node(f(value), left.map(l => l.map(f)), right.map(r => r.map(f)))
def childrenLeftRight: List[Node[T]] = List(left, right).flatten
}
def terminalNode[T](value: T) = Node(value, None, None)
def dfs[T](tree: Node[T]): List[T] = {
var output = List[T]()
tree.map(t => (output = t :: output))
output.reverse
}
def bfs[T](tree: Node[T]): List[T] = {
@tailrec
def bfsLoop(accum: List[List[T]], nextLayer: List[Node[T]]): List[T] = nextLayer match {
case Nil => accum.reverse.flatten
case _ => bfsLoop(nextLayer.map(_.value) :: accum, nextLayer.flatMap(_.childrenLeftRight))
}
bfsLoop(List[List[T]](), List(tree))
}
val tree1 = Node[Int](6, Some(Node(13, Some(terminalNode(101)), None)), Some(Node(42, Some(terminalNode(666)), Some(terminalNode(65)))))
println("map: " + tree1.map(i => s"Hello$i"))
println("dfs: " + dfs(tree1))
println("bfs: " + bfs(tree1))
}
Node looks like a monad. Could I get all other operations such as flatMap and foldLeft for free somehow?
Answer:
Any comments are welcomed, but I am mostly annoyed by terminalNode
From my opinion, it's more natural to provide default values for case classes:
case class Node[+T](value: T, left: Option[Node[T]] = None, right: Option[Node[T]] = None) {//...
Then you can simply use Node(42) anywhere in your code.
Node looks like a monad. Could I get all other operations such as flatMap and foldLeft for free somehow?
Typically, all methods in monads are based on the flatMap and unit methods. Therefore you need to implement the flatMap method, as the unit method you already have (Node.apply).
flatMap needs to accept a function which returns a new instance of Node, e.g. def flatMap[V](f: T => Node[V]): Node[V]. Then it should look like the following:
def flatMap[V](f: T => Node[V]): Node[V] = {
f(value).copy(
left = left.map(_.flatMap(f)),
right = right.map(_.flatMap(f))
)
}
And if you create a new test, it produces the same output as for map:
println("map: " + tree1.map(i => s"Hello$i"))
println("fmap: " + tree1.flatMap(i => Node(s"Hello$i"))) //will be the same outputs
But you should handle cases when f(value) may return a new node with existing left and/or right values; otherwise the implementation above may override the result. You need to consider this in your final implementation.
Then, according to the map law, map can be expressed as map(f) = flatMap(t => unit(f(t))), so you can create a method map2 with flatMap:
def map2[V](f: T => V): Node[V] =
flatMap(t => Node(f(t)))
Then you will have the same outputs:
println("map: " + tree1.map(i => s"Hello$i"))
println("map2: " + tree1.map2(i => s"Hello$i")) //will be the same outputs
Regarding foldLeft: typically you can implement almost everything with foldLeft, but not the other way around. So we need to understand what foldLeft should do (DFS?), and then we can express other methods with it.
Also couple notes, I'd change (it's arguable, of course, but worth considering):
Instead of left.map(l => l.map(f)) I'd use left.map(_.map(f))
tree.map(t => (output = t :: output)) can be replaced with a mutable ArrayBuffer for two reasons: it should be slightly faster and slightly better-looking.
Consider this code:
def dfs2[T](tree: Node[T]): List[T] = {
val output = ArrayBuffer[T]()
tree.map(output += _)
output.toList
}
And outputs again will be the same:
println("dfs: " + dfs(tree1))
println("dfs2: " + dfs2(tree1)) //will be the same outputs
Hope that helps. | {
"domain": "codereview.stackexchange",
"id": 9967,
"tags": "algorithm, scala, tree"
} |
Why is the electric potential continuous when we aproach an infinite uniformly charged sheet? | Question: Supose we have an infinite uniformly charged sheet on the plane z=0,
and at $z>0$ $\to\vec{E}=\frac{\sigma}{2\epsilon_{0}}$ and at $z<0$
$\to\vec{E}=-\frac{\sigma}{2\epsilon_{0}}$. Therefore, calculating
the electric potential $V=-\int\vec{E}.d\vec{z}$, we will have something
like this: $V=\frac{\sigma}{2\epsilon_{0}}z$ for $z>0$ and $V=-\frac{\sigma}{2\epsilon_{0}}z$
for $z<0$. . And taking limits and proceding mathematically, we see
that the potential is continuous but not differentiable at $z=0$,
but here's what it's causing me a headache (and please correct me
if I'm wrong):
We define electric potential as the amount of work an external agent
has to exert on an unit charge to move it from infinity (taking of
course $V(\infty)=0$) to a certain position in the z direction (that's
the only direction that matters as we're talking of an infinite sheet).
And I may be confusing myself with the point charge model but it is
not intuitive to me that the closer we get to the infinite plate (starting
from $z=+\infty$) the potential aproaches 0. I would expect to make
a tremendous amount of work in order to get that unit charge as close
as possible to the plate, and not simply become very easy all of the
sudden.
Please, where is my mistake?
Answer: Your equation:
$$ V=-\int\vec{E}.d\vec{z} $$
is correct, but remember that there is a constant of integration. So the potential is:
$$ V(z)=\frac{\sigma}{2\epsilon_{0}}z + V_0 $$
for some constant $V_0$. That means it is incorrect to say the potential is zero when $z=0$. We cannot assign an absolute value to the potential. We can only calculate potential differences.
As you say, it's common to take the potential to be zero at infinity, but this only makes sense when the electric field tends to zero at infinity. For example, it makes sense for a point charge because the field decreases as $r^{-2}$. The problem with the infinite flat sheet is that the field strength is independent of distance. It does not tend to zero at infinity, so the potential at infinity is not a well-defined quantity and we cannot usefully set it to zero.
Given the symmetry of the sheet it is an obvious choice to set the potential to zero at the sheet i.e. to choose $V_0=0$. But all this does is to define our potential function $V(z)$ as equal to the energy needed to move a unit charge from the sheet to a distance $z$. | {
"domain": "physics.stackexchange",
"id": 47860,
"tags": "electrostatics, potential, charge"
} |
Reaction of conjugated alkene with KNH2 | Question: I could not think of this reaction fitting into any of the reactions I know except the aryne mechanism. And hence I came up with the following mechanism after seeing the answer (which I've written in the picture next to question). Please tell if my proposed mechanism is correct.
Also, I am highly unsure of the fact that the last species formed will be able to extract a hydrogen atom from somewhere in the solution.
Question:
This is my proposed mechanism:
Source: Joint Entrance Exam (JEE) 1997 India
Answer: Here is a flowchart of what the commenters have stated. An α-elimination occurs stepwise 1 --> 2 --> 3. There is a phenyl migration (bridged?) to afford tolane 4. Look here for this chemistry. | {
"domain": "chemistry.stackexchange",
"id": 9986,
"tags": "organic-chemistry, reaction-mechanism, halides, elimination"
} |
Is motion smooth? | Question: It's obvious that every particle's motion is smooth, i.e. its position cannot undergo a sudden finite change in infinitesimal time.
Similarly any particle's velocity cannot undergo a change instantaneously (Infinite acceleration can't happen, intuitively).
Does this pattern apply to higher time derivatives of position like jerk? If yes then till how much higher derivative? 10th? 100? Infinite?
Answer: Typically these higher derivatives are assumed to be smooth.
The key question will be what causes a discontinuity in the n-th derivative. If you focus on classical mechanics, the forces on an object boil down to the positions of particles in the system, which are continuous. This means there would need to be a discontinuity (such as a divide by zero) in the equations of motion in order to have a non-smooth higher derivative.
When you push towards quantum mechanics, the terms get really murky quickly, because position ceases to be a single observable number. But if you stick to classical mechanics, we find things stay nice and smooth.
Now that being said, when modeling real systems, we very often assume instantaneous changes in velocity or higher derivatives. This is because, in many cases, we can get away with ignoring the precise acceleration or jerk function and treat it as if it were a simple discontinuous system. A straightforward example of this is a billiard ball collision. For most intents and purposes, this collision is "instantaneous" and the velocity of the balls changes in a discontinuous manner. However, if we look closer, with a slow motion camera, we find that the collision is not actually instantaneous -- position and its derivatives smoothly change over time. In fact, if you look hard enough, you can even see the ripple as the effects of the impact race across the surface of the ball. But, for the purposes of determining the result of a trick shot, these fine details are immaterial, and calculations assuming an instantaneous change in velocity are used. | {
"domain": "physics.stackexchange",
"id": 58559,
"tags": "kinematics, acceleration, velocity"
} |
Spin 3/2 Statistical Mechanics Problem | Question: I am trying to solve a problem from the book 'Introductory Statistical Mechanics' (Bowley, Sanchez). The question reads:
Calculate the free energy of a system of N particles, each with spin 3/2 with one particle per site, given that the levels associated with the four spin states have energies e, 2e, -e, -2e....
What I want to know is: how do I use the fact that each particle has spin 3/2? Does this add some kind of degeneracy I need to take into account?
Answer: If there is no term in the Hamiltonian lifting the spin degeneracy, then you'll have a degeneracy of degree $$g_s=(2s+1)$$ where $s$ is the spin ($=\frac32$, so $g_s=4$ in your case).
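With the degeneracy settled, the free energy asked about follows directly from the partition function over the four listed levels. A rough Python sketch (taking the energies in units of $e$ and $k_BT = 1$, both my own choices):

```python
import math

def free_energy_per_particle(levels, kT=1.0):
    """F/N = -kT ln Z for one independent particle per site."""
    Z = sum(math.exp(-E / kT) for E in levels)
    return -kT * math.log(Z)

s = 1.5
g = int(2 * s + 1)                  # four spin states for s = 3/2
levels = [1.0, 2.0, -1.0, -2.0]     # e, 2e, -e, -2e
assert len(levels) == g

F = free_energy_per_particle(levels)
assert F < min(levels)   # -kT ln Z always lies below the lowest level
```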
Each of these states corresponds to a projection of the spin along some axis (usually $z$): $$s_z = \frac32,\frac12,-\frac12,-\frac32$$ | {
"domain": "physics.stackexchange",
"id": 5246,
"tags": "statistical-mechanics"
} |
Does light color change when refracting? | Question: When light refracts from a medium to a second one, its frequency stays the same, and its wavelength changes. If this is true, why we see the refracted light ray's colour is the same as the incident light ray in the second medium? The colors should not be the same. If the wavelength changes, colour should change too.
Answer: The color will not change. What you're not taking into account is the speed of light in the medium. It's not the same $c$ as in vacuo. The frequency stays the same. What changes is the speed of light in the refracting medium and, as a result, the wavelength. This difference in speed is the exact reason we have refractive effects, and I believe it was the observation that led to Snell's Law. In symbols $$ \lambda = \frac {c} {\nu} $$ where $\lambda$ is the wavelength and $ \nu $ is the frequency.
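A short numerical illustration of that relation (Python sketch; water's index $n \approx 1.33$ and the 530 nm example are my own values):

```python
c = 299_792_458.0                    # speed of light in vacuum, m/s

def in_medium(wavelength_vac, n):
    """Frequency is fixed at the boundary; speed and wavelength scale by 1/n."""
    nu = c / wavelength_vac          # frequency, unchanged
    v = c / n                        # phase speed inside the medium
    return nu, v / nu                # (frequency, wavelength in medium)

nu, lam = in_medium(530e-9, 1.33)    # green light entering water
assert nu == c / 530e-9              # same frequency -> same perceived color
assert abs(lam - 530e-9 / 1.33) < 1e-15   # wavelength shrinks by the index
```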
The speed of light changes because the photons have to have their energy (and therefore their presence) propagated across very long molecular chains. Electrons have to absorb the incident photons, re-emit, and repeat this in a longitudinal direction. Depending on what the material is made of, this will take a variable time in different media. That notion manifests itself in the different indices of refraction that different objects have. | {
"domain": "physics.stackexchange",
"id": 12538,
"tags": "visible-light"
} |
How to calculate the spin of an atom | Question: If given an atom say ${^{108}_{47}Ag}$, what is the systematic way to determine its spin so that one knows whether it is a boson or a fermion?
Answer: Count up the total number of protons, neutrons and electrons. If the total is odd it's a fermion; if even, a boson. No complicated spinology needed. | {
"domain": "physics.stackexchange",
"id": 53231,
"tags": "atomic-physics, quantum-spin, fermions, bosons, quantum-statistics"
} |
A derivation in Jaynes' paper, linking stat-mech and Shannon's entropy | Question: I have been going through E. T. Jaynes' 1957 paper, Information Theory and Statistical Mechanics. There is a step in his derivations, which has been giving me headaches for the past day; would appreciate some pointers on how to complete it!
Link to paper here: https://bayes.wustl.edu/etj/articles/theory.1.pdf
A bit of notation: Lagrange multipliers are represented by $\lambda$'s, and $\lambda_0$ = ln Z where Z is the partition function. The probability of each state $j$ is
$\begin{equation}
p_j = \exp \left[ -(\lambda_0 + \lambda_1 f_1(x_j) + \cdots) \right] = \frac{\exp \left[ -(\lambda_1 f_1(x_j) + \cdots) \right]}{Z}.
\end{equation}$
The expectation values of the functions $f's$ are fixed/known, and provide the constraints for maximizing Shannon's entropy; for simplicity let's consider only $f_1$. Assume that the probabilistic states $i$'s are discrete.
Now onto the question. Jaynes says near Equation (5.1) of the paper that he's going to perturb $f_1$ such that $f_1(x_j) \rightarrow f_1(x_j) + \delta f_1(x_j)$, for all $j$. At the same time, the expectation value of $f_1$ is independently altered: $\left<f_1\right> \rightarrow \left<f_1\right> + \delta\left<f_1\right>$.
How does the entropy, the derived probability distribution, and the Lagrange multiplier change? Equation (5.1) states that
$\begin{equation}
\delta\lambda_0 = \delta \ln Z = -\left( \delta\lambda_1 \left<f_1\right> + \lambda_1 \left<\delta f_1\right> \right).
\end{equation}$
But how? Here's my current approach:
$
\begin{eqnarray}
\delta\lambda_0 &= \ln Z' - \ln Z \\
&= \ln (Z' / Z) \\
&= \ln \left(\frac{\sum_j \exp[-(\lambda_1 + \delta\lambda_1)(f_1(x_j) + \delta f_1(x_j))]}{Z}\right).
\end{eqnarray}
$
But then I am pretty lost on how to "get rid of" the logarithmic function... Thanks for reading, and looking forward to seeing what you think!
Answer: Remember that $\delta\lambda_1$ and $\delta f_1$ are small so expanding the exponential and then the log to first order:
$$\eqalign{
\delta\lambda_0&\simeq\ln\Big[{1\over{\cal Z}}\sum_je^{-\lambda_1 f_1(x_j)-\delta\lambda_1 f_1(x_j)-\lambda_1\delta f_1(x_j)}\Big]\cr
&=\ln\Big[{1\over{\cal Z}}\sum_je^{-\delta\lambda_1 f_1(x_j)-\lambda_1\delta f_1(x_j)}e^{-\lambda_1 f_1(x_j)}\Big]\cr
&\simeq \ln\Big[{1\over{\cal Z}}\sum_j\Big(1-\delta\lambda_1 f_1(x_j)-\lambda_1\delta f_1(x_j)\Big)e^{-\lambda_1 f_1(x_j)}\Big]\cr
&=\ln\Big(1-\delta\lambda_1\langle f_1\rangle-\lambda_1\langle \delta f_1\rangle\Big)\cr
&\simeq -\delta\lambda_1\langle f_1\rangle-\lambda_1\langle \delta f_1\rangle
}$$
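This first-order identity can also be verified numerically; a small Python sketch (the $f_1$ values and perturbation sizes below are made up):

```python
import math
import random

random.seed(0)
f1 = [random.uniform(-1, 1) for _ in range(6)]   # made-up f1(x_j)
lam1 = 0.7

def logZ(lam, f):
    return math.log(sum(math.exp(-lam * fj) for fj in f))

def avg(g, lam, f):
    """Canonical average <g> with weights exp(-lam * f1(x_j))."""
    w = [math.exp(-lam * fj) for fj in f]
    return sum(gj * wj for gj, wj in zip(g, w)) / sum(w)

eps = 1e-6
dlam = 0.3 * eps
df = [random.uniform(-1, 1) * eps for _ in f1]

exact = logZ(lam1 + dlam, [a + b for a, b in zip(f1, df)]) - logZ(lam1, f1)
first_order = -(dlam * avg(f1, lam1, f1) + lam1 * avg(df, lam1, f1))
assert abs(exact - first_order) < 1e-10   # agreement up to second-order terms
```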
Alternatively, a faster calculation is
$$\eqalign{
\delta \ln{\cal Z}&={1\over{\cal Z}}\delta{\cal Z}\cr
&={1\over{\cal Z}}\sum_j\Big(-\delta\lambda_1 f_1(x_j)-\lambda_1\delta f_1(x_j)\Big)e^{-\lambda_1 f_1(x_j)}\cr
&=-\delta\lambda_1\langle f_1\rangle-\lambda_1\langle \delta f_1\rangle
}$$ | {
"domain": "physics.stackexchange",
"id": 59020,
"tags": "statistical-mechanics, entropy"
} |
On the time complexity of finding independent sets in hypergraphs | Question: I am interested in the complexity of the following problems:
(a) Finding an independent set of a fixed size in an arbitrary hypergraph
(b) Finding an independent set of maximum size in an arbitrary hypergraph
It is well known that the analogues of (a) and (b) (say (a') and (b')) for graphs are NP-complete and NP-hard, respectively. However, I am having trouble finding references for the complexity of (a) and (b). In fact, all of the papers I've been able to find on these two problems (for hypergraphs) just cite the fact that (a') and (b') are NP-complete and NP-hard, as if these results trivially generalize to (a) and (b). Is this actually the case? If not, can anyone provide references that prove the complexity of (a) and (b)?
Answer: The ubiquitous principle is that a generalized problem is at least as hard as the original problem. This rather trivial principle is applied as a truism in most if not all papers.
Here is a specific version of that principle for the case of time-complexity.
Lemma. Let $A,B, C$ be three languages such that $A= B\cap C$ and $A\not=C$. ($C$ is considered as a generalization of $A$.) Suppose it takes polynomial time to determine whether $x\in B$. Then deciding $C$ is at least as hard as deciding $A$ modulo polynomial time. If $A$ is NP-complete and $C$ is in NP, then $C$ is NP-complete.
Proof. Let $\beta$ be a polynomial-time algorithm that decides $B$. Suppose we have an algorithm $\gamma$ that decides $C$. Let $\alpha$ be the following algorithm:
Given input string $x$, run $\beta$ to determine if $x\in B$. If not, return no. If yes, run $\gamma$ to determine if $x\in C$. If not, return no. Return yes otherwise.
It is routine to check that $\alpha$ returns yes if and only if the input is in $A$, i.e., $\alpha$ decides $A$. Since $\beta$ takes polynomial time, deciding $C$ is at least as hard as deciding $A$ modulo polynomial time.
It follows immediately that if $A$ is NP-complete and $C$ is in NP, then $C$ is NP-complete.
Let
$A=\{\langle G,s\rangle \mid \text{ there is an independent set of size }s\text{ in an ordinary graph G}\}$. (I use the phrase "ordinary graph" or simply "graph" to contrast with "hypergraph". "General graph" sounds more like hypergraph. Indeed, hypergraph is a generalization of an ordinary graph.)
Let $B=\{\langle G\rangle\mid G \text{ is an ordinary graph}\}$.
Let $C=\{\langle G, s\rangle \mid \text{ there is an independent set of size }s\text{ in a hypergraph G}\}$.
The lemma tells us that deciding $C$ is at least as hard as deciding $A$.
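To make the hypergraph notion concrete: a vertex set is independent when it contains no hyperedge entirely. A brute-force Python sketch (exponential time, in keeping with the problem being NP-complete; names are my own):

```python
from itertools import combinations

def has_independent_set(vertices, edges, s):
    """Is there a set of s vertices containing no hyperedge entirely?"""
    for cand in combinations(vertices, s):
        cs = set(cand)
        if not any(set(e) <= cs for e in edges):
            return True
    return False

V = [1, 2, 3, 4, 5]
E = [{1, 2, 3}, {3, 4, 5}]               # a 3-uniform (triangle) hypergraph
assert has_independent_set(V, E, 4)      # e.g. {1, 2, 4, 5} contains no edge
assert not has_independent_set(V, E, 5)  # the full vertex set contains both
```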
Exercise 1. Explain why deciding whether a graph is 3-colorable is at least as hard as deciding whether a given planar graph is 3-colorable. (In fact, both problems are NP-complete.)
Exercise 2. Prove that it is NP-complete to determine whether there is independent set of given size in a given triangle hypergraph. A triangle hypergraph is defined as a hypergraph where each edge is a set of 3 vertices. | {
"domain": "cs.stackexchange",
"id": 13919,
"tags": "graphs, time-complexity"
} |
Web Scraping with Python | Question: My target website is Indeed. I tried to implement the scraper such that it is convenient for the end user. I am introducing my code in the readme file on my github repo.
I am somewhat of a beginner in programming, so I am looking for guidance on things such as whether the libraries that I used are appropriate, the code itself, and generally making the script better.
Link to the README file.
import requests
from bs4 import BeautifulSoup
jobName = input('Enter your desired position: ').replace(' ', '-')
place = input("Enter the location for your desired work(City, state or zip): ")
URL = 'https://www.indeed.com/q-'+jobName+'-l-'+place.replace(' ', '-')+'-jobs.html'
page = requests.get(URL)
soup = BeautifulSoup(page.content, 'html.parser')
pages = soup.find(id='searchCountPages')
noPag = pages.text.split('of')[1].strip(' jobs').replace(',', '')
nmoPag = input(f"There are {noPag} number of pages. If you want to scrape all of them write 'Max' else write number of pages you wish to scrape: ")
if nmoPag == 'Max':
nmoPag = noPag
for i in range(0, int(nmoPag)*10, 10):
URL = 'https://www.indeed.com/jobs?q='+jobName+'&l='+place.replace(' ', '+')+'&start='+str(i)
page = requests.get(URL)
soup = BeautifulSoup(page.content, 'html.parser')
results = soup.find(id='resultsCol')
listings = results.find_all('div', class_='result')
for job in listings:
jobT = job.find('a', class_='jobtitle')
jobL = job.find('span', class_='location').text.strip()
jobS = job.find('div', class_='summary').text.strip()
link = jobT['href']
if any(any(subs in s for s in (jobT.text.strip().lower(), jobS.lower())) for subs in (jobName.split('+')[0], jobName[1])):
print('Your job in '+jobL+' as a '+ jobT.text.strip()+
'.\nHere is a quick summary of your job here: '+
jobS+'\nLink for more information and application for the job - https://indeed.com'+link, end='\n\n\n')
Answer: What you are doing sounds reasonable overall.
It's good that you are using F-strings:
nmoPag = input(f"There are {noPag} number of pages. If you want to scrape all of them write 'Max' else write number of pages you wish to scrape: ")
But there are many other places where you don't eg:
print('Your job in '+jobL+' as a '+ jobT.text.strip()+
'.\nHere is a quick summary of your job here: '+
jobS+'\nLink for more information and application for the job - https://indeed.com'+link,
So I would upgrade the rest of the code for more consistency and better readability :)
Consistency: you are mixing lower case and upper case in some variable names, e.g. jobName vs place. Remember that variable names are case-sensitive in Python. The practice could be dangerous and confusing. Imagine that you have jobName and jobname: those are two different variables that may be assigned different values.
There is redundancy in the use of functions, for example this bit of code is repeated twice:
jobT.text.strip()
Don't repeat yourself, just assign the result of jobT.text.strip() to a variable once, and reuse it.
More repetition: www.indeed.com is hardcoded 3 times. Define a global variable for the root URL, then add the query string parameters as required.
With the urllib library you could take advantage of the URI-building functions that are available to you. See Creating URL query strings in Python. Tip: if you want to build URIs but don't fire them immediately, you can also use prepared requests.
Although in the present case, the site does not use classic query string parameters separated with a &. So you can instead use an F-string, again, with sanitized variable values:
url = f"https://www.indeed.com/q-{jobName}-l-{place}-jobs.html"
Note regarding user input: the most obvious check is that the input is not empty. Always trim the input too, because some people might copy-paste text with extra tabs or spaces (think Excel). Maybe you could use a regex to replace multiple occurrences of whitespace with a single hyphen.
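For instance, that sanitization step could look like this (the helper name is my own; the URL pattern mirrors the one in your script):

```python
import re

def sanitize(term):
    """Trim the input and collapse any whitespace runs into single hyphens."""
    return re.sub(r"\s+", "-", term.strip())

assert sanitize("  software \t engineer ") == "software-engineer"

job, place = sanitize("data analyst"), sanitize("New York")
url = f"https://www.indeed.com/q-{job}-l-{place}-jobs.html"
assert url == "https://www.indeed.com/q-data-analyst-l-New-York-jobs.html"
```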
I would also add more checks to make sure that all the DOM elements you are expecting were found - because a website is subject to change at any time. When that happens, the code must alert you.
Finally, a quality script should have exception handling (10 lines of code would suffice). Catch exceptions always and log the full details to a file. This will help you a lot with debugging and troubleshooting.
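One possible shape for that handler (a sketch only; the fetch callable is injected so a network failure can be simulated without going online):

```python
import logging

logging.basicConfig(filename="scraper.log", level=logging.ERROR)

def fetch_page(get, url):
    """Call a page-fetching function, logging full details on failure
    instead of crashing the whole scrape."""
    try:
        return get(url)
    except Exception:
        logging.exception("failed to fetch %s", url)  # traceback goes to the log
        return None

def broken_get(url):                  # stand-in for requests.get in this sketch
    raise ConnectionError("DNS resolution failed")

assert fetch_page(broken_get, "https://example.com") is None
assert fetch_page(lambda u: "<html/>", "https://example.com") == "<html/>"
```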
As your code is on Github, some people might want to use it. If they have problems with it, it would be good if they could attach a log in their bug report, so that you get better insight into the error.
Since you are scraping a website, all sorts of errors will happen often: DNS resolution errors, timeouts, requests denied etc. Your script should handle those errors gracefully. | {
"domain": "codereview.stackexchange",
"id": 37815,
"tags": "python, web-scraping"
} |
asctec_mav_framework GPS based position control | Question:
Hi
I would like to use asctec_mav_framework ros package with my pelican, however I would like to know if I can do GPS based position control using this package.
I see that I can set ~position_control to 'GPS', however the documentation says:
"Sending GPS waypoints over the HLP to the LLP is currently not implemented as it is not available on the interface between HLP and LLP."
What does that statement mean? Does it mean I can't send Lat/Long waypoint, but can set position in meters, relative to the home?
Thank you.
Originally posted by Yogi on ROS Answers with karma: 411 on 2012-02-09
Post score: 0
Answer:
If that's true, you could always just convert those GPS coordinates to UTM, and it will act like a waypoint set in meters. That conversion can be found in the gps_common package in the conversions.h header.
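The gps_common conversion is the proper route; purely as an illustration of the idea, a crude flat-earth approximation (my own sketch, not the conversions.h code, and only valid close to the reference fix) looks like:

```python
import math

R = 6378137.0   # WGS84 equatorial radius, metres

def local_xy(ref_lat, ref_lon, lat, lon):
    """Rough east/north offsets in metres of (lat, lon) from a reference fix.
    Flat-earth approximation -- only usable near the reference point."""
    east = math.radians(lon - ref_lon) * R * math.cos(math.radians(ref_lat))
    north = math.radians(lat - ref_lat) * R
    return east, north

east, north = local_xy(0.0, 0.0, 0.0, 0.001)   # one millidegree of longitude
assert abs(east - 111.32) < 0.1                # roughly 111 m at the equator
assert north == 0.0
```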
Originally posted by DimitriProsser with karma: 11163 on 2012-02-10
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 8176,
"tags": "ros, pelican"
} |
What does "28" and "27" mean on my metric thread pitch gauge? | Question: I have a metric thread pitch gauge that came in a tap and die set, it has pitches like $0.75$, $0.8$, $1.25$, etc. which is the distance between each thread. However there are two gauges that say $27$ and $28$ - I thought maybe it means $0.28\text{ mm}$ or possibly $0.28\text{ inches}$ if they threw in some non-metric ones, but it is neither of these (it's about $0.9\text{ mm}$) what are these?
Answer: 28 is a standard Threads Per Inch dimension for 1/4 and 1/2 inch screws.
$\frac{25.4 \,\text{[mm/Inch]}}{28\, \text{[Threads/Inch]}} = 0.9 \,\text{[mm metric pitch]}$
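The same arithmetic as a tiny Python helper (a sketch, usable for any marking on the gauge):

```python
def tpi_to_pitch_mm(tpi):
    """Convert an imperial threads-per-inch count to metric pitch in mm."""
    return 25.4 / tpi

assert abs(tpi_to_pitch_mm(28) - 0.907) < 0.001   # the "28" gauge, ~0.9 mm
assert abs(tpi_to_pitch_mm(27) - 0.941) < 0.001   # and the "27" gauge
```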
The 27 seems strange to me. But I usually use metric bolts. | {
"domain": "engineering.stackexchange",
"id": 913,
"tags": "threads"
} |
Interferometer problem about periodic fringe patterns | Question: Please kindly help me; my teacher assigned me a problem set, and she never went over anything about interferometers in class. Also, there is almost no information about how to solve interferometer problems in my book. I have no idea how to set this problem up, and I have no intuition concerning how I should solve it...
Can someone help me set this problem up and explain it? I know if this example is solved, I will be able to solve the rest of the problems:
Sodium D lines, of wavelength(a)=589.0 nanometers and wavelength(b)=589.6 nanometers are used on a Michelson interferometer. When the first mirror moves, these lines periodically appear and disappear. Explain, in detail, this phenomenon and write an equation that would allow you to know how much the mirror must move to make the lines reappear and disappear one time.
Answer: Your situation is unfortunate, but this will become very typical as you get further in school.
I will do my best to explain this problem fully:
To begin with, in this problem, two interference patterns are formed, each pattern unique to one of the wavelengths provided to you. It is important to understand here that the fringe patterns might overlap, but do $\textbf{not}$ $\textit{interfere}$ with one another (in terms of wave front interference). Consequently, it can be concluded that the bright fringes of one wavelength will eventually share locations with the dark fringes of the other wavelength.
When this occurs, no fringes will be visible, as there will be no dark bands to differentiate between a single fringe and the fringes adjacent to it.
In order to make the transition between $\textbf{periods of fringe absence / appearance}$, the mirror's change in position must produce an $\textbf{integer number}$ of fringe shifts for each wavelength, and the number of shifts for the shorter wavelength must be one more than the number for the longer wavelength.
$\textbf{This next bit is imperative for understanding the Michelson Interferometer:}$
The light travels the length of the apparatus $\textbf{twice}$, so the change in the position of the mirror must also be accounted for $\textbf{twice}$.
Thus, even though a fringe shift possesses a wavelength value of $\lambda$ , we will denote it as $\cfrac{\lambda}{2}$ and the change in position of the mirror $(\bigtriangleup X)$ will become $2(\bigtriangleup X)$
To wrap this problem up, let's make $\epsilon_n$ the number of fringes produced by a given wavelength.
$\epsilon_1$ = $\cfrac{2(\bigtriangleup X)}{\lambda_a}$ and $\epsilon_2$ = $\cfrac{2(\bigtriangleup X)}{\lambda_b}$; since $\lambda_a < \lambda_b$, the shorter wavelength produces one extra shift: $\epsilon_1$ = ($\epsilon_2 + 1$)
From here we can see that $\cfrac{2(\bigtriangleup X)}{\lambda_a}$ = $\cfrac{2(\bigtriangleup X)}{\lambda_b} + 1$, meaning that $\bigtriangleup X = \cfrac{\lambda_a\lambda_b}{2(\lambda_b-\lambda_a)}$
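A quick order-of-magnitude check of this formula in Python (taking the magnitude of the wavelength difference so the labelling of $\lambda_a$ and $\lambda_b$ doesn't matter; the exact number is left to you):

```python
def mirror_travel(lam_a, lam_b):
    """Mirror displacement between successive fringe washouts, in metres."""
    return lam_a * lam_b / (2 * abs(lam_a - lam_b))

dx = mirror_travel(589.0e-9, 589.6e-9)   # sodium D lines
assert 1e-9 < dx < 1.0   # far above a nanometre, far below a metre
```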
I will let you plug in your values, you should get a number much less than a meter, but much greater than a nanometer. This was not an easy problem for what I am assuming is an A level physics course. Let me know if you have questions. Good luck. | {
"domain": "physics.stackexchange",
"id": 15469,
"tags": "homework-and-exercises, optics, interference, interferometry"
} |
array of strings as param tag in launch file | Question:
Hi,
I'm trying to read an array of strings into my ros node. Here is the line in my launch file
<param name="string_list" value="[this, is, a, string, array]"/>
And here is the nh.getParam() call in my node
ros::NodeHandle nh("~");
std::vector<std::string> string_list;
nh.getParam("string_list", string_list);
I know that the param is there when I do a rosparam list. And I know the namespaces are correct since it works with a simple int param. But in this case, with a vector of strings, when I read string_list after the getParam() call, it is of 0 length.
What am I doing wrong?
Thank you
Originally posted by 2ROS0 on ROS Answers with karma: 1133 on 2018-01-27
Post score: 1
Answer:
Please see #q194592, especially the answer by @peci1.
Originally posted by gvdhoorn with karma: 86574 on 2018-01-28
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by 2ROS0 on 2018-01-28:
<arg name="string_list" default="[this, is, list]"/>
<rosparam param="string_list" subst_value="True">$(arg string_list)</rosparam>
This shows the following error: requires the 'string_list' arg to be set | {
"domain": "robotics.stackexchange",
"id": 29889,
"tags": "roslaunch, ros-kinetic, rosparam"
} |
Print all prime numbers to within the threshold boundaries with the range provided by user | Question: I found this exercise on the web and this is my attempt to solve it.
Things I'm most interested in from this code review:
Naming convention
I don't comment the code that much. To help you review the code faster, I did extra commenting. Should I comment the code this way?
Your professional opinion
Things I can use to improve this code, and therefore my programming skills.
getprimes.h / getprimes.c
#ifndef GETPRIMES_H_
#define GETPRIMES_H_
void print_primes( int iter, const int max );
#endif
static int is_num_prime( const int n )
{
// First prime number is 2
if ( n < 2 ) return 0;
for( int i = 2; i < n; ++i )
/*If this holds true for any iteration of the loop
number is not prime and exit the function with status(0)*/
if( n % i == 0 && i != n )
return 0;
/*Number(n) is prime if all if checks are false,
therfore return(1) when loop ends.*/
return 1;
}
void print_primes( int iter, const int max )
{
int prime_count = 0;
int line_break = prime_count + 10;
puts("");
// min, max range provided by user. iter = min.
for(; iter <= max; ++iter ){
/*If the number is prime, print it and increase
the prime_count by one */
if( is_num_prime(iter) ){
printf("| %-4i | ", iter);
prime_count++;
// Add the line break after 10 prints of prime numbers.
if( prime_count == line_break ){
line_break = prime_count + 10;
puts("");
}
}
}
// Print the total prime numbers count.
printf("\n\n[ Prime numbers Count:%i ]\n\n",
prime_count );
}
str2int.h / str2int.c
#ifndef STR2INT_H_
#define STR2INT_H_
long str_to_int( const char arr[] );
#endif
#include <stdlib.h>
#include <ctype.h>
static int is_str_digit( const char arr[] )
{ // Check if each character of an array is a digit.
int iter = 0;
/*Iterate over an array while array[iter] character is not
a new line or null character.
- If array[iter] character is not a digit return(0)
- If the loop did all iterations then all array[iter] characters are
digits, therfore return (1).*/
while( arr[iter] != '\n' && arr[iter] != '\0' ){
if( !isdigit(arr[iter]) )
return 0;
iter++;
}
return 1;
}
long str_to_int( const char arr[] )
{ /* If array is not NULL, and all array characters are digits:
- Convert the characters of the array to a long int.
- Return the result of conversion.
Return (0) in any other case. */
long result;
if( arr != NULL && is_str_digit( arr )){
result = strtol( arr, NULL, 10 );
if( result )
return result;
}
return 0;
}
main.c
#include <stdio.h>
#include <ctype.h>
#include "str2int.h"
#include "getprimes.h"
#define LEN(x) (sizeof(x)/sizeof(*x))
const typedef struct {
const long min;
const long max;
}Threshold;
typedef struct {
char str[10];
long min;
long max;
}UserInput;
int main()
{
/*Threshold range. Range provided by user
cannot exceed threshold range boundaries */
Threshold range = { 2, 10000 };
/*This is a variable of type UserInput that will
store user input strings and their conversions to long int*/
UserInput input;
/*Get the first user input range boundary(min) */
printf("Search for prime numbers [FROM]:");
fgets( input.str, LEN(input.str),stdin );
// Convert it to ( long int )
input.min = str_to_int( input.str );
/* If the conversion is unsuccessful input.min will be 0,
so this check if conversion is successful and if first user input
is within the threshold range boundaries.*/
if( input.min >= range.min && input.min <= range.max ){
/*Get the second user input range boundary(max) */
printf("Search for prime numbers [TO ]:");
fgets( input.str, LEN(input.str),stdin );
// Convert it to (long int)
input.max = str_to_int( input.str );
/*This check is conversion is successful and if input.max is
within the range( input.min - range.max )*/
if ( input.max > input.min && input.max < range.max )
// Print all prime numbers within the provided range.
print_primes( input.min, input.max );
else
puts("[QUITING] Not acceptable input");
}else
puts("[QUITING] Not acceptable input");
}
Answer:
Prime number efficiency.
There are multitudes of ways to improve the prime test. OP's code is a basic one and works for all n. Good step 1. Let us try more.
Why the i != n test? i < n already takes care of that:
for( int i = 2; i < n; ++i )
// if( n % i == 0 && i != n ) return 0;
if(n % i == 0) return 0;
Further improvements maintain prime lists (not shown) and use the quick Sieve of Eratosthenes - practical when O(n) bits of memory are available.
Many compilers, when performing n % i, can calculate n / i for little or no extra emitted code. Source code can use the quotient to stop the loop far sooner: about √n iterations rather than n. This is better in many cases than iterating to sqrt(n) (using the math.h function), as that invokes floating-point math with its rounding issues and potentially less precision than the chosen integer type.
int quot = n;
for (int i = 2; i < quot; ++i) {
    if (n % i == 0) return 0;
    quot = n / i;
}
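For reference, here is a self-contained version of the quotient-stop test (my sketch, not OP's code nor a quote from the review):

```c
/* Trial division that stops around sqrt(n): the quotient n / i shrinks
 * as i grows, and once i is no longer below it, no factor can remain. */
int is_num_prime(int n)
{
    if (n < 2)
        return 0;                 /* 0, 1 and negatives are not prime */
    int quot = n;                 /* current upper bound for factors  */
    for (int i = 2; i < quot; ++i) {
        if (n % i == 0)
            return 0;             /* found a proper divisor           */
        quot = n / i;             /* tighten the bound                */
    }
    return 1;
}
```

For example, is_num_prime(97) is 1 and is_num_prime(91) is 0 (91 = 7 * 13).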
Choose a consistent integer type
print_primes( input.min, input.max ); down-converts long to int. If the prime test is for ints, I'd expect consistency: input.min and input.max should be the same type. As this is a prime test with no need for negative numbers, consider unsigned or go for the widest type like uintmax_t.
Minor
Simplify digit string detection.
'\n' and '\0' are not digits either, so exit loop when a non-digit is found. Note: isdigit(ch) is UB when ch < 0 and not EOF, so a cast is used below.
Not shown: recommend to lop off the '\n' in the calling code and only allow a '\0' ending here.
static int is_str_digit(const char arr[]) {
size_t i = 0;
while (isdigit((unsigned char) arr[i])) {
i++;
}
return (i>0) && ((arr[i] == '\n') || (arr[i] == '\0'));
}
Six of one or 1/2 dozen of the other (not too much difference) idea:
long str_to_int( const char arr[] ) could form the long instead of calling a loop that checks each digit and then calls strtol(). Note that str_to_int() sounds like it converts a string to an int rather than long. Consider renaming to str_to_long().
#include <limits.h>
long str_to_long(const char *arr) {
long sum = 0;
while (isdigit((unsigned char) *arr)) {
if ((sum >= LONG_MAX/10) && (sum > LONG_MAX/10 || *arr > LONG_MAX%10 + '0')) {
// overflow
return LONG_MAX;
}
sum = sum*10 + *arr++ - '0';
}
return sum;
}
I/O details
stdout should be flushed to ensure it appears before the requested input via fgets(). Robust code checks the return value of fgets().
printf("Search for prime numbers [FROM]:");
fflush(stdout);
if (fgets( input.str, LEN(input.str),stdin ) == NULL) {
Handle_EOF_or_Error();
}
Printing the line break
Rather than maintain line_break, simply use %10
if (prime_count % 10 == 0) puts("");
Just before printing the total prime numbers count, test prime_count for a consistent output.
if (prime_count % 10) puts(""); | {
"domain": "codereview.stackexchange",
"id": 24914,
"tags": "beginner, c, primes"
} |
Is step detection the correct approach to this problem? what if not? | Question: I'm looking for some advice on where and what to start reading for learning to solve this.
I have the time series of the position coordinates (x,y) of an animal in an open field (just a cage). I want to detect the time instants when the animal stops or starts "walking" (i.e.: moving from one place to another, not in the same place).
I thought it couldn't be so difficult, but I'm finding some trouble solving it.
So, I'm looking for the times from which the signal will be flat, or the instant where the signal starts changing from flatness:
I think this thing may be related with step detection but I'm not sure.
I would start with just one coordinate for simplicity. It seems that step detection could be what I'm looking for, but:
a. My signal "baseline" will be different every time: the animal will move and stop somewhere else each time.
b. The signal can be VERY "noisy" since the animal could and will move a lot while staying in the same place (i.e., grooming).
c. These changes can be either slow or fast, and I need to detect both.
Firstly, I'll be glad if I can solve this problem for just one coordinate, although I will eventually have to look for these changes in the two-coordinate system.
so, my questions are:
1. Is step detection a good approach to this problem? What else, if not?
2. Any suggestions for doing this for both (x,y) coordinates?
thanks in advance
EDIT: I get (x,y) coordinates by acquiring an overhead image with a camera and tracking a led attached to the animal's head. Tracking is done by color filtering.
EDIT2: copy of the data:
https://www.dropbox.com/s/oph33szu891rgrl/data_sample.txt
https://www.dropbox.com/s/idulfdr965eeh7i/data_sample.mat
data format is (x,y,t)
EDIT3:
I've been trying smoothing the data but it is not really what I expected. I need to clean not high frequencies but low amplitudes. The movements I look for can be either fast or slow, but with big amplitude.
Here is an example of the signal with the MATLAB function smooth().
Note that I'm looking for the green moments, which I lose when smoothing.
Answer: First off, I would start by applying a smoothing/averaging filter to your position data. This will get rid of a lot of the noise from your stationary motion.
To answer your first question, step detection is the way to go, but it depends on how you're detecting the steps. The best way in my mind would be to look at the derivative, the speed, and detect periods of non-zero speed. If you've smoothed out your position data, this should result in periods of time where the animal is moving in one direction, rather than back and forth.
This approach makes two dimensional analysis fairly simple. Once you've calculated your x and y speeds, you can combine them to get an absolute speed, independent of direction. You can use the same detection approach to find the periods of motion, which should give you the answer you're looking for. | {
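As a rough illustration of the smoothing-plus-speed-threshold idea described above (my sketch; the window size and threshold are made-up values you would tune to your data):

```python
import math

def moving_average(xs, k):
    """Centered moving average with window k (simple edge handling)."""
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - k // 2), min(len(xs), i + k // 2 + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

def moving_segments(x, y, t, window=5, speed_thresh=0.5):
    """Return (start, end) sample-index pairs where the smoothed 2-D
    speed sqrt(vx^2 + vy^2) stays above speed_thresh."""
    xs, ys = moving_average(x, window), moving_average(y, window)
    speed = [0.0]
    for i in range(1, len(xs)):
        dt = t[i] - t[i - 1]
        speed.append(math.hypot(xs[i] - xs[i - 1], ys[i] - ys[i - 1]) / dt)
    segs, start = [], None
    for i, v in enumerate(speed):
        if v > speed_thresh and start is None:
            start = i                      # movement begins
        elif v <= speed_thresh and start is not None:
            segs.append((start, i))        # movement ends
            start = None
    if start is not None:
        segs.append((start, len(speed)))
    return segs
```

With a trace that sits still for 20 samples and then drifts at one unit per sample, this returns a single segment starting near sample 20.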
"domain": "dsp.stackexchange",
"id": 1818,
"tags": "signal-analysis, signal-detection, moving-average, position"
} |
How to calculate AUC in coverage graph | Question: Is there a way to calculate the area under a curve in a coverage graph?
Thank you in advance.
EDIT:
I'd like to calculate the AUC per strand per gene in a coverage graph. I have two wig files (sense and antisense) generated from a single bam (RNA-seq) file (using the strand-coverage software).
Answer: Yes, just add up the coverage at each location; that's integration in a nutshell.
Or, alternatively / equivalently, count the total number of reads within the region. | {
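In code, the "integration is just a sum" point looks like this (my sketch with made-up per-base values, not from the original answer):

```python
def coverage_auc(coverage, start, end):
    """AUC over the half-open interval [start, end): with one coverage
    value per base, the integral reduces to a plain sum."""
    return sum(coverage[start:end])

# Hypothetical per-base coverage for a short gene on one strand:
cov = [0, 2, 5, 5, 3, 0]
print(coverage_auc(cov, 1, 5))  # 2 + 5 + 5 + 3 = 15
```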
"domain": "bioinformatics.stackexchange",
"id": 737,
"tags": "rna-seq, coverage"
} |
Which cryogenic vials and caps are ideal for storing glycerol stocks? | Question: There are internally threaded, externally threaded, natural caps. For the vials, there are skirts, star-footed, and various bottoms. Why do any of these things matter?
Answer: For any kind of frozen cell stock (for cell culture or bacteria) we routinely use freezing vials similar to these: http://www.sigmaaldrich.com/catalog/ProductDetail.do?D7=0&N5=SEARCH_CONCAT_PNO%7CBRAND_KEY&N4=V9255%7CSIGMA&N25=0&QS=ON&F=SPEC
These tubes are internally threaded with a rubber grommet to seal the lid and make the tube air and water tight. I suspect the air tight seal prevents condensation from entering the tube and freezing within the tube, but I've never had a problem with either choice of tube.
The tubes don't really matter at -80 C for a glycerol stab. The choice of tube, on the other hand, has a huge impact when freezing cells in liquid nitrogen. Some tubes are meant to be stored in the liquid versus the vapour phase of nitrogen, and not matching the tube to its intended storage condition can result in explosion of the tube.
"domain": "biology.stackexchange",
"id": 196,
"tags": "ecoli"
} |
What causes incomplete combustion? | Question: During the combustion of hydrocarbons, there is a difference between the amounts of carbon or hydrogen that results in incomplete or complete combustion of the material.
My question is, besides from an insufficient amount of oxygen, what can cause incomplete combustion?
What are the physical or chemical properties of the hydrocarbon responsible for incomplete combustion, such as the hydrogen-to-carbon ratio (mass, volume, number of atoms, etc.) or the saturation?
Please keep the reply suitable for a student studying high school to early university chemistry.
Answer: The Iowa State University page Carbon Monoxide Poisoning: Checking for Complete Combustion (AEN-175) suggests the following mechanisms that cause incomplete combustion:
Insufficient mixing of air and fuel.
Insufficient air supply to the flame.
Insufficient time to burn.
Cooling of the flame temperature before combustion is complete.
The main notion is the lack of air to cause complete combustion (either through lack of supply, mixing or time for combustion).
However, it should be noted that, practically speaking, according to the ChemWiki page Burning Alkanes, the size of the hydrocarbon has an effect under 'normal' conditions:
Provided the combustion is complete, all the hydrocarbons will burn with a blue flame. However, combustion tends to be less complete as the number of carbon atoms in the molecules rises. That means that the bigger the hydrocarbon, the more likely you are to get a yellow, smoky flame.
The reason being (according to ChemWiki) is:
If the liquid is not very volatile, only those molecules on the surface can react with the oxygen. Bigger molecules have greater Van der Waals attractions which makes it more difficult for them to break away from their neighbors and turn to a gas.
So, the larger the hydrocarbon, the less likely it would vaporise sufficiently, thus less chance of the optimal mixing of air and fuel, resulting in incomplete combustion.
However, provided the right conditions, with a sufficient supply of air and an optimal hydrocarbon-air mixing ratio, it is still possible to achieve complete combustion, following the general formula (for many hydrocarbons):
$$\ce{C_{x}H_{y} +O2 ->H2O + CO2}$$
A quite comprehensive resource explaining each of the factors is the University of Tulsa chapter Fuels and Combustion. | {
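As a concrete, balanced example (mine, not from the cited pages), methane burning with ample versus limited oxygen:

```latex
% Complete combustion (sufficient O2): only CO2 and H2O are produced
\ce{CH4 + 2O2 -> CO2 + 2H2O}
% Incomplete combustion (limited O2): carbon monoxide forms instead
\ce{2CH4 + 3O2 -> 2CO + 4H2O}
```

With even less oxygen, unburned carbon (soot) appears as well, which is what gives the yellow, smoky flame mentioned above.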
"domain": "chemistry.stackexchange",
"id": 4236,
"tags": "combustion"
} |
Prove $T(n) = T(\left \lceil{\frac{n}{2}}\right \rceil) + 1 = O(\log(n))$ | Question: As the title said, prove $T(n) = T(\left\lceil{\frac{n}{2}}\right\rceil) + 1 = O(\log(n))$
My approach is to find $c, n_0 \in \mathbb{R}_+$ such that:
$$\forall n \geq n_0, T(n) \leq c\log(n) -d \text{, where d is a constant}$$
Assume the statement is true for every $m < n$, especially $m = \left\lceil{\frac{n}{2}}\right\rceil$, therefore
$$T(\left\lceil{\frac{n}{2}}\right\rceil) \leq c \log(\left\lceil{\frac{n}{2}}\right\rceil) - d $$
$$\implies T(n) = T(\left\lceil{\frac{n}{2}}\right\rceil) + 1$$
$$\leq c\log(\left\lceil{\frac{n}{2}}\right\rceil) - d + 1$$
$$\leq c\log(\left\lfloor{\frac{n}{2}}\right\rfloor + 1) - d + 1$$
$$\leq c\log(\frac{n + 2}{2}) - d + 1$$
$$= c\log(n + 2) - c\log(2) - d + 1$$
$$= c\log(n + 2) - d$$
This is where I stuck and can not go further. I need to eliminate the constant $2$ in $n + 2$. Any help and hint is welcome.
Edit: In term of master theorem. I have to solve this recurrence with substitution method.
Answer: If you let $T(0) = T(1) = 0$, then prove by complete induction for every $k \ge 1$ that $T(n) = k$ for all $n$ such that $2^{k-1} < n \le 2^k$.
$k = 1$: True because $T(2) = 1$.
$k \to k + 1$: Let $2^k < n \le 2^{k+1}$. Then $2^{k-1} < n/2 \le 2^k$, therefore $2^{k-1} < \lceil n/2 \rceil \le 2^k$, therefore $T(\lceil n/2 \rceil) = k$, therefore $T(n) = k+1$.
So instead of $O(\log n)$ you get the much stronger $T(1) + \lceil \log_2 n \rceil$ for all $n \ge 1$.
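A quick numerical sanity check of this closed form (my sketch; (n - 1).bit_length() computes ⌈log₂ n⌉ exactly for integers, avoiding floating-point log):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # The recurrence from the question, with base cases T(0) = T(1) = 0.
    if n <= 1:
        return 0
    return T((n + 1) // 2) + 1   # (n + 1) // 2 == ceil(n / 2) for ints

# T(n) should equal ceil(log2(n)) for every n >= 1.
for n in range(1, 1000):
    assert T(n) == (n - 1).bit_length()
```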
"domain": "cs.stackexchange",
"id": 17929,
"tags": "runtime-analysis, big-o-notation"
} |
How to understand Heisenberg time in random matrix theory? | Question: Recently, in a few papers, I have encountered the term 'Heisenberg time' $t_{\text{H}}$, which is an inverse of the mean level spacing $\Delta(\hat{\mathcal{H}})$ of a finite system Hamiltonian $\hat{\mathcal{H}}$. Though its mathematical definition is given, I couldn't find the physical meaning of it. Is it OK to understand it as the least time needed for a system to distinguish an average energy difference of $\Delta(\hat{\mathcal{H}})$?
Answer: Suppose you have another probe system which is coupled to your finite system. Then up to a time $t_{H}$ your probe will not experience the discreteness of the spectrum. That is, up to a time $t_{H}$ the probe will think that your system has a continuous spectrum.
Put in other words, up to a time $t_{H}$ the probe will think that it interacts with an infinite quantum environment, whose continuous spectral density is a smoothed version of the actual discrete spectrum.
After the time $t_{H}$ the probe will 'notice' that the system is finite. In the simplest cases, this means that some kind of reflected wave will come to the probe. | {
"domain": "physics.stackexchange",
"id": 94513,
"tags": "quantum-mechanics, hamiltonian, eigenvalue, chaos-theory, matrix-model"
} |
Einstein's box - unclear about Bohr's retort | Question: I was reading a book on the history of Quantum Mechanics and I got intrigued by the gendankenexperiment proposed by Einstein to Bohr at the 6th Solvay conference in 1930.
For context, the thought experiment is a failed attempt by Einstein to disprove Heisenberg's Uncertainty Principle.
Einstein considers a box (called Einstein's box; see figure) containing electromagnetic radiation and a clock which controls the opening of a shutter which covers a hole made in one of the walls of the box. The shutter uncovers the hole for a time Δt which can be chosen arbitrarily. During the opening, we are to suppose that a photon, from among those inside the box, escapes through the hole. In this way a wave of limited spatial extension has been created, following the explanation given above. In order to challenge the indeterminacy relation between time and energy, it is necessary to find a way to determine with adequate precision the energy that the photon has brought with it. At this point, Einstein turns to his celebrated relation between mass and energy of special relativity: $E = mc^2$. From this it follows that knowledge of the mass of an object provides a precise indication about its energy.
--source
Bohr's response was quite surprising: there was uncertainty in the time because the clock changed position in a gravitational field and thus its rate could not be measured precisely.
Bohr showed that [...] the box would have to be suspended on a spring in the middle of a gravitational field. [...] After the release of a photon, weights could be added to the box to restore it to its original position and this would allow us to determine the weight. [...] The inevitable uncertainty of the position of the box translates into an uncertainty in the position of the pointer and of the determination of weight and therefore of energy. On the other hand, since the system is immersed in a gravitational field which varies with the position, according to the principle of equivalence the uncertainty in the position of the clock implies an uncertainty with respect to its measurement of time and therefore of the value of the interval Δt.
Question: How can Bohr invoke a General Relativity concept when Quantum Mechanics is notoriously incompatible with it? Shouldn't HUP hold up with only the support of (relativistic) quantum mechanics?
Clarifying a bit what my doubt is/was: I thought that HUP was intrinsic to QM, a derived principle from operator non-commutability. QM shouldn't need GR concepts to be self consistent. In other words - if GR did not exist, relativistic QM would be a perfectly happy theory. I was surprised it's not the case.
Answer: Bohr realized that the weight of the device is made by the displacement of a scale in spacetime. The clock’s new position in the gravity field of the Earth, or any other mass, will change the clock rate by gravitational time dilation as measured from some distant point where the experimenter is located. The temporal metric term for a spherical gravity field is $1~-~2GM/rc^2$, where a displacement by some $\delta r$ means the change in the metric term is $\simeq~(2GM/c^2r^2)\delta r$. Hence the clock’s time intervals $T$ are measured to change by a factor
$$
T~\rightarrow~T\sqrt{1~-~2GM\delta r/c^2r^2}~\simeq~T(1~-~GM\delta r/r^2c^2),
$$
so the clock appears to tick slower. This changes the time span the clock keeps the door on the box open to release a photon. Assume that the uncertainty in the momentum is given by the $\Delta p~\simeq~\hbar/\Delta r~<~Tg\Delta m$, where $g~=~GM/r^2$. Similarly the uncertainty in time is found as $\Delta T~=~(Tg/c^2)\delta r$. From this $\Delta T~>~\hbar/\Delta mc^2$ is obtained and the Heisenberg uncertainty relation $\Delta T\Delta E~>~\hbar$. This demands a Fourier transformation between position and momentum, as well as time and energy.
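Spelling out the algebra behind that last step (my restatement of the argument above; I write $\delta r$ for the position uncertainty throughout):

```latex
\Delta p \simeq \frac{\hbar}{\delta r} < T g\,\Delta m
\;\Rightarrow\;
\delta r > \frac{\hbar}{T g\,\Delta m},
\qquad
\Delta T = \frac{T g}{c^{2}}\,\delta r
> \frac{\hbar}{\Delta m\,c^{2}} = \frac{\hbar}{\Delta E},
```

which is the stated $\Delta T \Delta E~>~\hbar$.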
This argument by Bohr is one of those things I find myself re-reading; it is, in my opinion, one of those spectacularly brilliant events in physics.
This holds at least in part at the quantum level with gravity, even if we do not fully understand quantum gravity. Consider the clock in Einstein’s box as a black hole with mass $m$. The quantum periodicity of this black hole is given by some multiple of Planck masses. For a black hole of an integer number $n$ of Planck masses, the time it takes a photon to travel across the event horizon is $t~\sim~Gm/c^3$ $=~nT_p$; these are taken as the time intervals of the clock. The uncertainty in the time the door to the box remains open is
$$
\Delta T~\simeq~Tg/c(\delta r~-~GM/c^2),
$$
as measured by a distant observer. Similary the change in the energy is given by $E_2/E_1~=$ $\sqrt{(1~-~2M/r_1)/(1~-~2M/r_2)}$, which gives an energy uncertainty of
$$
\Delta E~\simeq~(\hbar/T_1)g/c^2(\delta
r~-~GM/c^2)^{-1}.
$$
Consequently the Heisenberg uncertainty principle still holds: $\Delta E\Delta T~\simeq~\hbar$. Thus general relativity beyond the Newtonian limit preserves the Heisenberg uncertainty principle. It is interesting to note that in the Newtonian limit this leads to a spread of frequencies $\Delta\omega~\simeq~\sqrt{c^5/G\hbar}$, which is the Planck frequency.
The uncertainty $\Delta E~\simeq~\hbar/\Delta t$ does have a funny situation where, if the energy $\Delta E$ is larger than the Planck mass, there is the occurrence of an event horizon. The horizon has a radius $R~\simeq~2G\Delta E/c^4$, which is the uncertainty in the radial position $R~=~\Delta r$ associated with the energy fluctuation. Putting this together with the Planckian uncertainty in the Einstein box we then have
$$
\Delta r\Delta t~\simeq~\frac{2G\hbar}{c^4}~=~{\ell}^2_{Planck}/c.
$$
So this argument can be pushed to understand the nature of noncommutative coordinates in quantum gravity. | {
"domain": "physics.stackexchange",
"id": 812,
"tags": "quantum-mechanics, history"
} |
AssertionError: wrong color format 'ansibrightred' | Question: I am running my old Qiskit code after a very long time. It's not running now, showing the error wrong color format 'ansibrightred', and I don't know why.
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
circuit = QuantumCircuit(2,2)
circuit.h(0)
circuit.cx(0,1)
circuit.measure([0,1], [0,1])
Answer: That error is caused by running qiskit-terra 0.13.0 with pygments installed, but below the optional dependency minimum version of 2.4: https://github.com/Qiskit/qiskit-terra/blob/master/setup.py#L117. However, you've found a bug in terra, an error should only be shown if you are using the optional functionality (circuit.qasm() with formatted=True set or using the circuit library jupyter widget). I've pushed up a fix to address this edge case in: https://github.com/Qiskit/qiskit-terra/pull/4229
In the meantime while waiting for that to get into a release, you can workaround this issue by either installing pygments>=2.4 or uninstalling pygments. Either will fix the error. | {
"domain": "quantumcomputing.stackexchange",
"id": 1469,
"tags": "qiskit, ibm-q-experience"
} |
L293D won't turn motor backwards | Question: My small robot has two motors controlled by an L293D and that is controlled via a Raspberry Pi. They will both go forwards but only one will go backwards.
I've tried different motors and tried different sockets in the breadboard, no luck. Either the L293D's chip is broken (but then it wouldn't go forwards) or I've wired it wrong.
I followed the tutorial, Controlling DC Motors Using Python With a Raspberry Pi, exactly.
Here is a run down of what works. Let the 2 motors be A and B:
When I use a python script (see end of post) both motors go "forwards". When I change the values in the Python script, so the pin set to HIGH and the pin set to LOW are swapped, motor A will go "backwards", this is expected. However, motor B will not move at all.
If I then swap both motors' wiring then the original python script will make both go backwards but swapping the pins in the code will make motor A go forwards but motor B won't move.
So basically, motor A will go forwards or backwards depending on the python code but motor B can only be changed by physically changing the wires.
This is forwards.py
import RPi.GPIO as GPIO
from time import sleep
GPIO.setmode(GPIO.BOARD)
Motor2A = 23
Motor2B = 21
Motor2E = 19
Motor1A = 18
Motor1B = 16
Motor1E = 22
GPIO.setup(Motor1A, GPIO.OUT)
GPIO.setup(Motor1B, GPIO.OUT)
GPIO.setup(Motor1E, GPIO.OUT)
GPIO.setup(Motor2A, GPIO.OUT)
GPIO.setup(Motor2B, GPIO.OUT)
GPIO.setup(Motor2E, GPIO.OUT)
print("ON")
GPIO.output(Motor1A, GPIO.HIGH)
GPIO.output(Motor1B, GPIO.LOW)
GPIO.output(Motor1E, GPIO.HIGH)
GPIO.output(Motor2A, GPIO.HIGH)
GPIO.output(Motor2B, GPIO.LOW)
GPIO.output(Motor2E, GPIO.HIGH)
And this is backwards.py
import RPi.GPIO as GPIO
from time import sleep
GPIO.setmode(GPIO.BOARD)
Motor2A = 21
Motor2B = 23
Motor2E = 19
Motor1A = 16
Motor1B = 18
Motor1E = 22
GPIO.setup(Motor1A, GPIO.OUT)
GPIO.setup(Motor1B, GPIO.OUT)
GPIO.setup(Motor1E, GPIO.OUT)
GPIO.setup(Motor2A, GPIO.OUT)
GPIO.setup(Motor2B, GPIO.OUT)
GPIO.setup(Motor2E, GPIO.OUT)
print("ON")
GPIO.output(Motor1A, GPIO.HIGH)
GPIO.output(Motor1B, GPIO.LOW)
GPIO.output(Motor1E, GPIO.HIGH)
GPIO.output(Motor2A, GPIO.HIGH)
GPIO.output(Motor2B, GPIO.LOW)
GPIO.output(Motor2E, GPIO.HIGH)
If you see this diff https://www.diffchecker.com/skmx6084, you can see the difference:
Below are some pictures. You can use the colour of the cables to link them between pictures
Answer: The motor driver chip you state you are using, the L293D, is a "quadruple half H driver." This means that, instead of two full H circuits capable of driving a motor forward and reverse, you have four half H circuits, which are only capable of driving a motor in one direction.
You even speculate in your post,
Either the L293D's chip is broken (but then it wouldn't go forwards) or I've wired it wrong.
I'm going to guess that you have not wired the motors to the driver chip correctly. This is just a guess, though, because you haven't posted pictures or schematics of your wiring.
In looking at a diagram of the chip, it looks like maybe an easy place to go wrong would be to wire into a ground pin on the chip instead of to one of the motor outputs.
You can see in the picture above from the datasheet (linked above) that the "heat sink and ground" pins are right beside the driver output pins. If you happened to wire one of your motor leads to pins 4, 5, 12, or 13 instead of 3, 6, 11, or 14, then the motor would still turn one direction (HIGH to LOW), but would not turn in another direction (LOW to LOW) because it's not connected to the driver pin that should be HIGH.
Again, pure speculation on my part, but it would seem to explain all of your symptoms. Please take a picture of your wiring and edit your question to include it.
:EDIT:
It's hard to tell in your pictures, and I can't see which way the chip is oriented, but it looks like:
Gold wires are Vcc
The teal wire is Pin 19, which you have as "Motor 2E" - or motor 2 enable, HOWEVER, it's not plugged in to the (3,4EN) or (1,2EN) pin on the L293D chip. Those are pins 9 and 1, respectively, and they are located on the corners of the chip. It looks like it's maybe plugged in to 4A (or 2A, again I can't tell the orientation of the chip).
Purple with maybe a white stripe is Pin 23, which you have listed as "Motor 2B", but that wire goes to a corner of the chip, which is where the motor enable is located.
So it looks like to me that when you set "Motor 2E", to enable motor 2, you are actually NOT enabling motor two, but instead you are setting maybe what you're calling 2A in your code. Then, when you think you are going "forward", you set "2A" LOW and "2B" high - what you're calling "2B" is actually the motor enable, and that's why it's turning on.
When you try to turn the other way, it looks like you're setting "2A" HIGH and "2B" LOW, but again what you're calling "2B" is actually the motor enable pin, so you're disabling the motor.
Try swapping 2B and 2E in your code (swap 23 and 19).
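A hardware-free way to keep the pin logic honest (my sketch, not part of the original answer; the GPIO calls are only shown in a comment): compute the three output levels from a single direction string, so forwards.py and backwards.py cannot disagree about which pin is the enable:

```python
# Direction -> (input_A, input_B, enable) levels for one L293D channel.
# "stop" drives enable LOW, which disables both outputs of that channel.
LEVELS = {
    "forward":  (True, False, True),
    "backward": (False, True, True),
    "stop":     (False, False, False),
}

def motor_levels(direction):
    """Return the (A, B, EN) logic levels for the requested direction."""
    return LEVELS[direction]

# On the Pi (pin names are the OP's; RPi.GPIO calls stubbed out here):
#   a, b, en = motor_levels("backward")
#   GPIO.output(Motor1A, a); GPIO.output(Motor1B, b); GPIO.output(Motor1E, en)
```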
If that doesn't work, please post clearer pictures and we can troubleshoot some more. Particularly, I'm looking to see the chip's orientation and a better (crisper and better lit) shot of the wires where the enter the Raspberry Pi. Also, you have 3 white wires or very light gray and it's hard to tell where they're going. | {
"domain": "robotics.stackexchange",
"id": 962,
"tags": "wheeled-robot, raspberry-pi"
} |
Unique namespace for a robot using the launch file | Question:
Hi! I am trying to develop a multi-robot system using Qbots. For that I need to assign different namespaces to each robot in the network. (As Qbot1, Qbot2 etc.)
As my previous question, and the answers I got, I wrote a single launch file and tried to push all of my topics under a given namespace. For this process, I used "ns" variable in include tag. Although there was a lot of issues in the results, I decided to make this question simpler by providing a small code, which I think includes the root of those problems.
This is a very simple launch file which can launch a kobuki core and then operate it manually by executing keyop program.
Here is my first launch file without including the namespace.
<launch>
<!-- Run the Kobuki base -->
<include file="$(find kobuki_node)/launch/minimal.launch"/>
<!-- Run the Keyop controller -->
<include file="$(find kobuki_keyop)/launch/keyop.launch"/>
</launch>
This is the rqt_graph I obtained by running the above program.
This is the second launch file I wrote by including the namespace variable.
<launch>
<!-- Run the Kobuki base -->
<include ns="qbot1" file="$(find kobuki_node)/launch/minimal.launch"/>
<!-- Run the Keyop controller -->
<include ns="qbot1" file="$(find kobuki_keyop)/launch/keyop.launch"/>
</launch>
And this is the relevant rqt graph
Although both of the launch files work without a problem, I can see that there are differences in two rqt graphs. Instead of a single mobile_base_nodelet_manager topic (when there is no namespace), the second rqt_graph displays a node and a topic under the name of mobile_base_nodelet_manager.
Due to this reason, the more advanced launch files in my program does not work properly.
Can anyone explain me what is wrong with the second launch file, and how can I make it work like the first one. (I already tried writing a single namespace for the whole launch file, and also the group tag)
I use ROS Kinetic version on Ubuntu 16.04.
Originally posted by TharushiDeSilva on ROS Answers with karma: 79 on 2019-03-19
Post score: 0
Original comments
Comment by gvdhoorn on 2019-03-19:
Please attach your imaged directly to your question instead of linking to your google drive. I've given you sufficient karma for that.
Answer:
There are no differences between the two rqt graphs, except for the namespace.
If you look at the first graph, you can see 4 nodes
/keyop
/mobile_base
/mobile_base_nodelet_manager
/diagnostic_aggregator
and 4 topics
/mobile_base_nodelet_manager/bond
/mobile_base/commands/velocity
/mobile_base/commands/motor_power
/diagnostics
If you look at the second graph, you can see the corresponding topics and nodes with the prefix qbot1 and that all nodes are subscribed to / publishing to correct topics.
I think your confusion is about the namespace box. In rqt graphs, the topics and nodes are grouped by the top level namespace in boxes. For instance, in the first rqt graph, the /mobile_base_nodelet_manager/bond topic is inside the box /mobile_base_nodelet_manager because it's the top level namespace. Similarly, topics /mobile_base/commands/velocity and /mobile_base/commands/motor_power are inside the box /mobile_base.
Since you add a top level namespace /qbot1, everything with /qbot1 as the top level namespace, (that is, everything except /diagnostics topic) is within the qbot1 box. In the second rqt graph, you can see the bottom of the box but top and sides are cropped out. The rqt_graph wiki page has an example image which would make it clearer to you.
The bottom line is, given your launch files, the rqt_graphs you are getting are correct. Personally, I prefer the <group> tag if more than one launch file is included in the namespace group.
If your more complex launch files are not working, can you please edit the question or post it as a new question with the actual problem you are running into?
Originally posted by janindu with karma: 849 on 2019-03-21
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by TharushiDeSilva on 2019-03-21:
@janindu. Thank you so much for the explanation. I have posted my advanced issue as a new question here.
Comment by janindu on 2019-03-21:
I can't see the question anymore. Hope you have solved it!
Comment by TharushiDeSilva on 2019-03-21:
Sorry about that. I sort of figured out the node which is creating the problem, and I had temporarily deleted the question until I could edit it. It is available now. I'd be very thankful if you could take a look at it.
https://answers.ros.org/question/319040/freenect-cameradepthpoints-topic-not-published-under-a-group-namespace-tag/ | {
"domain": "robotics.stackexchange",
"id": 32676,
"tags": "ros, namespace, ros-kinetic, multi-robot"
} |
How to convert instances of a language efficiently into 3SAT instances? | Question: The following question is from Sanjeev Arora and Boaz Barak (not homework):
Show that for every time-constructible
$T:\mathbb{N} \to \mathbb{N}$, if $L \in NTIME(T(n))$ then we can give a polynomial-time Karp reduction from $L$ to $3SAT$ that transforms instances of size $n$
into 3CNF formulae of size $O(T(n)\log T(n))$. Can you make this reduction also run in $O(T(n)\log T(n))$ time?
The best I could come up with is an $O(T(n)^3)$-size formula. We take each configuration to be of length at most $T(n)$, and the machine runs for $T(n)$ steps. So we have a grid of $T(n)$ rows, each with $T(n)$ columns, where each row represents a configuration. We impose the condition that each row follows from the previous one in at most one step of $M$'s transition function (where $M$ is a machine deciding $L$ in $O(T(n))$ time). The first row should represent the start configuration and the last row the accept configuration. This approach yields an $O(T(n)^3)$-size formula.
How can I do better?
Answer: The idea is to reduce an instance $x$ to a formula $\phi$ that uses only $O(T(n))$ symbols and clauses; $\log T(n)$ will be the length of the binary representation of each symbol. This is the upper limit that you cannot cross if you want to solve the problem, so you will have to optimize quite a bit. See the oblivious TM and the state representation in Arora and Barak to get an idea. | {
"domain": "cs.stackexchange",
"id": 6203,
"tags": "complexity-theory, nondeterminism"
} |
Why ionization is more probable in hydrogen atom than excitation to the $n = 3$ level? | Question: The Wikipedia article about the H$\alpha$ spectral line states
"it takes nearly as much energy to excite the hydrogen atom's electron from $n = 1$ to $n = 3$ (12.1 eV, via the Rydberg formula) as it does to ionize the hydrogen atom (13.6 eV), ionization is far more probable than excitation to the $ n = 3$ level"
Since 12.1 eV < 13.6 eV, more energy is required to ionize the atom than to excite it to the n=3 level. Hence shouldn't excitation to the n = 3 level be more probable?
Answer: The article is referring to hydrogen atoms in nebulae within a galaxy. The hydrogen in these clouds is excited by UV light from young, massive stars recently formed in the nebulae. These stars have an approximately black-body spectrum, so the light they emit spans a wide range of wavelengths. In particular the intensity at an energy of 13.6 eV is comparable to the intensity at 12.1 eV, and this means a star hot enough to excite the $n=1$ to $n=3$ transition also emits light energetic enough to fully ionise the hydrogen atom. The result is that hydrogen atoms near such a star are likely to end up fully ionised instead of just excited to the $n=3$ state.
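The "comparable intensity" claim is easy to check with a quick Planck-law estimate. This is my own illustration, not from the article; the 40,000 K photospheric temperature is an assumed round number for a hot O star:

```python
import numpy as np

K_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def planck_ratio(e1_ev, e2_ev, temp_k):
    """Ratio B(E2)/B(E1) of black-body spectral radiance,
    with B_nu proportional to nu^3 / (exp(h*nu/kT) - 1) and E = h*nu in eV."""
    x1 = e1_ev / (K_EV * temp_k)
    x2 = e2_ev / (K_EV * temp_k)
    return (e2_ev / e1_ev) ** 3 * np.expm1(x1) / np.expm1(x2)

# Compare the ionizing 13.6 eV flux with the 12.1 eV (n=1 -> n=3) flux
ratio = planck_ratio(12.1, 13.6, 40_000.0)
print(f"I(13.6 eV)/I(12.1 eV) ~ {ratio:.2f}")  # ~0.9: comparable, as claimed
```

So a star hot enough to pump the 12.1 eV transition emits nearly as many photons above the 13.6 eV ionization threshold, consistent with the argument above.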
The $n=3$ population arises when the ionised hydrogen atoms recombine with electrons. The recombination produces neutral atoms in a range of excited states, and a fraction of these will be in the $n=3$ state. The decay of these atoms then creates the $\mathrm H\alpha$ emission. | {
"domain": "physics.stackexchange",
"id": 74715,
"tags": "quantum-mechanics, atomic-physics, hydrogen, ionization-energy"
} |
TicTacToe game with functional AI in ruby - follow-up | Question: A month ago I posted an earlier version of the game here, and got a great response, which was mainly about the structure of my code. I freed some time today and reworked the code from scratch.
Things I improved:
Class separation
No more god object
Applied single responsibility principle for classes and methods
I would love to hear if the code can be improved even further (structure, logic, etc.), as I feel this is the limit of my current coding ability:
# players in the game
class Player
attr_reader :name, :symbol
def initialize(name, symbol)
@name = name
@symbol = symbol
end
end
# the game board
class Board
attr_accessor :board
def initialize
@board = (1..9).to_a
end
def display_board
puts "\e[H\e[2J" # ANSI clear
@board.each_slice(3).with_index do |row, idx|
print " #{row.join(' | ')}\n"
puts ' ---+---+---' unless idx == 2
end
puts
end
def welcome_msg
print "\nWelcome to Tic Tac Toe.\n\n"
puts 'Enter 1 to play against another player, 2 to play against an evil AI'\
', 3 to watch evil AI play against kind AI.'
puts 'Type EXIT anytime to quit.'
end
def cell_open?(position)
@board[position - 1].is_a?(Fixnum) ? true : false
end
def win_game?(symbol)
sequences = [[0, 1, 2], [3, 4, 5], [6, 7, 8],
[0, 3, 6], [1, 4, 7], [2, 5, 8],
[0, 4, 8], [2, 4, 6]]
sequences.each do |seq|
return true if seq.all? { |a| @board[a] == symbol }
end
false
end
def full?
@board.each do |cell|
return false if cell.is_a? Fixnum
end
true
end
def place_mark(position, symbol)
@board[position - 1] = symbol
end
end
# game logic
class Game
attr_accessor :board, :player1, :player2, :ai, :current_player
def initialize
@board = Board.new
@player1 = Player.new('Player 1', 'X')
@player2 = Player.new('Player 2', 'O')
@ai = AI.new('Evil AI', 'O')
@ai2 = AI.new('Kind AI', 'X')
@current_player = @player1
start_screen
end
def start_screen
@board.welcome_msg
choice = nil
until (1..3).include?(choice)
choice = gets.chomp
exit if choice.downcase == 'exit'
game_modes(choice.to_i)
end
end
def game_modes(choice)
case choice
when 1 then pvp_game
when 2 then pva_game
when 3 then ava_game
else puts 'You silly goose, try again.'
end
end
def pvp_game
@board.display_board
until game_over
player_place_n_check
swap_players
end
end
def pva_game
@board.display_board
until game_over
player_place_n_check
swap_pva
ai_place_n_check(@ai)
swap_pva
end
end
def ava_game
@board.display_board
until game_over
ai_place_n_check(@ai2)
swap_ava
ai_place_n_check(@ai)
swap_ava
end
end
def game_over
@board.win_game?(@current_player.symbol) || @board.full?
end
def player_place_n_check
position = player_input
@board.place_mark(position.to_i, @current_player.symbol)
@board.display_board
result?
end
def ai_place_n_check(player)
position = player.ai_turn(@board)
@board.place_mark(position.to_i, @current_player.symbol) unless position.nil?
@board.display_board
result?
end
def player_input
input = nil
until (1..9).include?(input) && @board.cell_open?(input)
puts "Choose a number (1-9) to place your mark #{@current_player.name}."
input = validate_input(gets.chomp)
end
input
end
def validate_input(input)
if input.to_i == 0
exit if input.downcase == 'exit'
puts 'You can\'t use a string, silly.'
else
position = validate_position(input.to_i)
end
position
end
def validate_position(position)
if !(1..9).include? position
puts 'This position does not exist, chief.'
puts 'Try again or type EXIT to, well, exit.'
elsif !@board.cell_open? position
puts 'Nice try but this cell is already taken.'
puts 'Try again or type EXIT to, well, exit.'
end
position
end
def result?
if @board.win_game?(@current_player.symbol)
puts "Game Over, #{@current_player.name} has won."
exit
elsif @board.full?
puts 'Draw.'
exit
end
end
def swap_players
case @current_player
when @player1 then @current_player = @player2
else @current_player = @player1
end
end
def swap_pva
case @current_player
when @player1 then @current_player = @ai
else @current_player = @player1
end
end
def swap_ava
@current_player == @ai ? @current_player = @ai2 : @current_player = @ai
end
end
# AI player components
class AI
attr_accessor :board, :name, :symbol
def initialize(name, symbol)
@name = name
@symbol = symbol
end
def ai_turn(board)
loading_simulation
check_win(board)
return @finished if @finished
check_block(board)
return @finished if @finished
check_defaults(board)
return @finished if @finished
end
# first check if possible to win before human player.
def check_win(board)
@finished = false
1.upto(9) do |i|
origin = board.board[i - 1]
board.board[i - 1] = 'O' if origin.is_a? Fixnum
# put it there if AI can win that way.
return @finished = i if board.win_game?('O')
board.board[i - 1] = origin
end
end
# if impossible to win before player,
# check if possible to block player from winning.
def check_block(board)
@finished = false
1.upto(9) do |i|
origin = board.board[i - 1]
board.board[i - 1] = 'X' if origin.is_a? Fixnum
# put it there if player can win that way.
return @finished = i if board.win_game?('X')
board.board[i - 1] = origin
end
end
# if impossible to win nor block, default placement to center.
# if occupied, choose randomly between corners or sides.
def check_defaults(board)
@finished = false
if board.board[4].is_a? Fixnum
@finished = 5
else
rand < 0.51 ? possible_sides(board) : possible_corners(board)
end
end
def possible_sides(board)
[2, 4, 6, 8].each do |i|
return @finished = i if board.board[i - 1].is_a? Fixnum
end
end
def possible_corners(board)
[1, 3, 7, 9].each do |i|
return @finished = i if board.board[i - 1].is_a? Fixnum
end
end
def loading_simulation
str = "\r#{name} is scheming"
10.times do
print str += '.'
sleep(0.1)
end
end
end
Game.new
Answer: Encapsulation
Inside the Game class:
attr_accessor :board, :player1, :player2, :ai, :current_player
Don't expose these attributes, as they are only used inside the class. (In Ruby you don't have to declare instance variables/fields before using them.)
Also, make all internal methods private so they are not accessible from outside the class.
Single Responsibility Principle
Game should only be responsible for the logic of the game (having the players take turns). It should not be responsible for getting user input, so all user input related methods should be moved into Player (which I renamed to Human so it's obvious it only represents human players):
class Human
attr_reader :name, :symbol
def initialize(board, name, symbol)
@board = board
@name = name
@symbol = symbol
end
def get_input
...
puts "Choose a number (1-9) to place your mark #{@name}." # name is available directly
...
end
private
def validate_input(input); ...; end
def validate_position(position); ...; end
end
Note I added board to Human so it can validate the input. I renamed player_input to get_input (it's obvious it's the player's input now because it's inside Human). Now Game can use Human like this:
@player1 = Human.new(@board, 'Player 1', 'X')
position = @current_player.get_input
Polymorphism
A lot of code in Game is duplicated so it can handle both Human and AI. In order to avoid this duplication, we must be able to treat Humans and AIs the same way in the code. We can achieve this by making both classes expose the same interface.
The interface Human exposes is a single method called get_input which takes no arguments and returns a position. We can make AI expose this same interface by renaming ai_turn to get_input, and moving the board argument to the constructor (like we did in Human):
class AI
attr_reader :name, :symbol
def initialize(board, name, symbol)
@board = board
@name = name
@symbol = symbol
end
def get_input # no arguments
...
end
private
# no need to pass board around, simply use @board
def check_win; ...; end
def check_block; ...; end
def check_defaults; ...; end;
def possible_sides; ...; end;
def possible_corners; ...; end;
def loading_simulation; ...; end;
end
Now, if we have a variable called player, we can call player.get_input and it will work both in the case player is a Human and in the case it's an AI. We can treat both cases the same way.
This means Game can simply have 2 players, and the same logic will work for PvP, PvAI and AIvAI:
class Game
def initialize
@board = Board.new
start_screen
end
private
def start_screen
@board.welcome_msg
choice = nil
until (1..3).include?(choice)
choice = gets.chomp
exit if choice.downcase == 'exit'
game_modes(choice.to_i)
end
end
def game_modes(choice)
case choice
when 1 then
@player1 = Human.new(@board, 'Player 1', 'X')
@player2 = Human.new(@board, 'Player 2', 'O')
when 2 then
@player1 = Human.new(@board, 'Player 1', 'X')
@player2 = AI.new(@board, 'Evil AI', 'O')
when 3 then
@player1 = AI.new(@board, 'Evil AI', 'O')
@player2 = AI.new(@board, 'Kind AI', 'X')
else puts 'You silly goose, try again.'
end
@current_player = @player1
run_game
end
def run_game
@board.display_board
until game_over
player_place_n_check
swap_players
end
end
def game_over; ...; end
def player_place_n_check
position = @current_player.get_input
@board.place_mark(position.to_i, @current_player.symbol) unless position.nil?
@board.display_board
result?
end
def result?; ...; end
def swap_players; ...; end
end
Single Responsibility Principle (2)
Game still does too much. It handles the main menu which is not related to the game logic. It's best to move the code that shows the menu and constructs Humans/AIs outside the Game class.
This requires moving the board argument from the constructors of Human and AI to the get_input methods in both classes (sorry, I didn't foresee that). Then make Game take the players as arguments so it can be instantiated like this (for example):
Game.new(Human.new('Player 1', 'X'), AI.new('Evil AI', 'O'))
Hopefully this fixes the structural problems with the code. After you follow these suggestions, you may want to post a follow-up question so you can get a review of the individual classes and methods.
Good luck, and keep learning! | {
"domain": "codereview.stackexchange",
"id": 16476,
"tags": "ruby, tic-tac-toe, ai"
} |
SQL code reuse with C# | Question: I have some code that allows for SQL reuse for C#. I'm pretty happy with it, but the fact that I can't find anything similar on the web makes me think I'm going about it the wrong way. I have a GitHub repo with a Visual Studio solution, and the code is repeated down below.
Here's the use case:
I'm working on a fairly standard C# and SQL Server project where we're generating reports. Each report is essentially the results of SQL query, displayed in HTML in the browser. I'd rather not use an ORM like Entity Framework, since we need to exactly control the SQL for performance reasons (e.g. use SQL server windowing functions, table hints, etc).
However, many of the reports are similar, just slicing and dicing by different attributes, so there's a desire for code reuse. SQL Server has stored procedures, views, and functions, but the consensus seems to be that you'll get into trouble trying to use them to create abstractions. To get around this I'm trying to use C# to reuse SQL code.
I'm essentially using Visual Studio's T4 text templates to generate SQL files at development time. I'm then passing the SQL to Stack Exchange's Dapper.NET project to handle variable substitution, issuing the query, and POCO [de]serialization.
Given a query like this, in the template file CuteAnimalsByLocation.tt:
<#@ output extension=".sql" #>
select * from animals a
where a.IsCute = 1 and a.IsFuzzy = 1 and a.Location = @location
And this file, DeadlyMachinesByLocation.tt:
<#@ output extension=".sql" #>
select * from DeadlyMachines m
where m.IsLethal = 1 and m.HasExplosives = 1 and m.Location = @location
I can reuse the above queries in a more complex query by writing the following, AnimalsInPeril.tt:
<#@ output extension=".sql" #>
WITH
CuteAnimalsInLocation as
(
<#@ include file="CuteAnimalsByLocation.tt" #>
),
DeadlyMachinesInLocation as
(
<#@ include file="DeadlyMachinesByLocation.tt" #>
)
select a.* from CuteAnimalsInLocation a
inner join DeadlyMachinesInLocation m on a.Location = m.Location
I can run all 3 of these queries like this: (assuming the POCOs Animal and DeadlyMachine exist; they're just POCOs that map to the table schemas):
using (var connection = new SqlConnection(ConnectionString))
{
connection.Open();
var queryParams = new { Location = "NorthAmerica" };
IEnumerable<Animal> animalsNeedingHelp =
connection.QueryFromFile<Animal>("AnimalsInPeril", queryParams);
IEnumerable<Animal> cuteAnimals =
connection.QueryFromFile<Animal>("CuteAnimalsByLocation", queryParams);
IEnumerable<DeadlyMachine> deadlyMachines =
connection.QueryFromFile<DeadlyMachine>("DeadlyMachinesByLocation", queryParams);
}
connection.QueryFromFile is defined as this, and depends on Dapper:
public static class DapperFileExtensions
{
public static IEnumerable<TReturn> QueryFromFile<TReturn>(this IDbConnection cnn, string file, object parameters = null)
{
var sql = File.ReadAllText(file + ".sql");
return cnn.Query<TReturn>(sql, parameters);
}
}
Can anyone see problems with this approach? I guess the only thing I can think of is that I could add caching around the File.ReadAllText call.
Answer: You say performance is a concern, but you're doing this.
select a.* from
That's poor form whether or not you already have a table scan in the underlying query plan for the CTE.
I think you misunderstood the article you linked to. It is warning against doing exactly what you've done here. SQL just doesn't lend itself to code reuse in the way a "regular" programmer is accustomed to. It's a different beast that way, it takes a different mindset.
SQL is a set based query language. Code reuse comes in the form of (well tuned) views and stored procedures. If you need this kind of fine grained control over the SQL, then keep it where it belongs, in the database.
Depending on just how dynamic your queries are, you may not always be getting the benefit of query plan caching. Each time a query that the analyzer hasn't seen before is processed, it has to generate a new plan, so you'll take a performance hit.
Lastly, I left a comment, but I should mention it here too. Be careful using query hints. You may be able to outwit the analyzer now, based on the current data, but the data will change over time. When the data has changed and your hint is no longer the most efficient query plan, the analyzer won't be able to choose the most efficient plan because you've told it not to. Performance gained this way may degrade over time. | {
"domain": "codereview.stackexchange",
"id": 15686,
"tags": "c#, sql, template"
} |
Phidget interfacekit 8/8/8 outputs | Question:
My phidgets interfacekit 8/8/8 works great with the sensors (inputs). However it also has 8 outputs. Can someone tell me if these can be used as well? I have found an example for the sensors here:
http://mediabox.grasp.upenn.edu/roswiki/phidgets_ros.html#InterfaceKit_ROS_API
And another package here: http://ros.org/wiki/phidgets
But the documentation is either missing or vague.
However the usage of the outputs is not documented..or am I missing something?
Originally posted by davinci on ROS Answers with karma: 2573 on 2012-09-19
Post score: 0
Answer:
http://ros.org/wiki/phidget_ik allows you to use the outputs, although it doesn't provide a ROS API (see http://ros.org/doc/api/phidget_ik/html/classPhidgetIK.html for the class description). Another option would be the stack at https://launchpad.net/phidgets-ros-pkg .
Originally posted by wcaarls with karma: 76 on 2012-09-19
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 11079,
"tags": "ros"
} |
Orthonormality of Radial Wave Function | Question: Is the radial component $R_{n\ell}$ of the hydrogen wavefunction orthonormal? Doing out one of the integrals, I find that
$$\int_0^{\infty} R_{10}R_{21}~r^2dr ~\neq~0$$
However, the link below says that these wave functions should be orthonormal (go to the top of page 3):
http://www.phys.spbu.ru/content/File/Library/studentlectures/schlippe/qm07-05.pdf
Am I doing something wrong? Are the radial components orthogonal, or aren't they? Is there some kind of special condition on $n$ and $\ell$ that makes the $R_{n\ell}$ orthogonal? Any help on this problem would be appreciated.
Answer: No, the radial parts of the wavefunctions are not orthogonal, at least not quite to that extent.
The radial components are built out of Laguerre polynomials, whose orthogonality only holds when leaving the secondary index fixed (the $\ell$ or $2\ell+1$ or whatever depending on your convention). That is,
$$ \langle R_{n'\ell} \vert R_{n\ell} \rangle \equiv \int_0^\infty R_{n'\ell}^*(r) R_{n\ell}(r) r^2 \, \mathrm{d}r = \delta_{nn'}. $$
You can check this yourself, using some of the lower-order functions, e.g.
\begin{align}
R_{10}(r) & = \frac{2}{a_0^{3/2}} \mathrm{e}^{-r/a_0}, \\
R_{21}(r) & = \frac{1}{\sqrt{3} (2a_0)^{3/2}} \left(\frac{r}{a_0}\right) \mathrm{e}^{-r/2a_0}, \\
R_{31}(r) & = \frac{4\sqrt{2}}{9 (3a_0)^{3/2}} \left(\frac{r}{a_0}\right) \left(1 - \frac{r}{6a_0}\right) \mathrm{e}^{-r/3a_0}.
\end{align}
(Note that $R_{10}$ and $R_{21}$ are in fact both strictly positive, so they can't integrate to $0$.) You should find
$$ \langle R_{10} \vert R_{10} \rangle = \langle R_{21} \vert R_{21} \rangle = \langle R_{31} \vert R_{31} \rangle = 1 $$
and
$$ \langle R_{21} \vert R_{31} \rangle = \langle R_{31} \vert R_{21} \rangle = 0, $$
as expected. However, $\langle R_{10} \vert R_{21} \rangle = \langle R_{21} \vert R_{10} \rangle$ and $\langle R_{10} \vert R_{31} \rangle = \langle R_{31} \vert R_{10} \rangle$ are very much neither $0$ nor $1$.
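These claims are easy to verify numerically. The following is a sketch I'm adding (not part of the original answer), in atomic units with $a_0=1$; the grid extent and the trapezoidal rule are arbitrary choices:

```python
import numpy as np

# Radial grid in units of the Bohr radius (a0 = 1)
r = np.linspace(0.0, 120.0, 600_001)

# The three radial functions quoted above, with a0 = 1
R10 = 2 * np.exp(-r)
R21 = r * np.exp(-r / 2) / (2 * np.sqrt(6))
R31 = (4 * np.sqrt(2) / (9 * 3**1.5)) * r * (1 - r / 6) * np.exp(-r / 3)

def overlap(Ra, Rb):
    """Radial inner product <Ra|Rb> = integral of Ra*Rb*r^2 dr (trapezoidal rule)."""
    f = Ra * Rb * r**2
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))

print(overlap(R10, R10))  # ~1.000  (normalized)
print(overlap(R21, R31))  # ~0.000  (same l: orthogonal)
print(overlap(R10, R21))  # ~0.484 = 32/(27*sqrt(6)): NOT orthogonal
```

The last overlap can also be done by hand: with $a_0=1$, $\langle R_{10}\vert R_{21}\rangle = \frac{1}{\sqrt6}\int_0^\infty r^3 e^{-3r/2}\,\mathrm dr = \frac{32}{27\sqrt6}\approx 0.484$.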
You can recover the full orthogonality you expect, but only by adding on the angular dependence given by the spherical harmonics for the full wavefunction. | {
"domain": "physics.stackexchange",
"id": 10058,
"tags": "quantum-mechanics, homework-and-exercises, wavefunction, hydrogen"
} |
Virtual displacement and generalized coordinates | Question: I have a doubt regarding the expression of a virtual displacement using generalized coordinates. I will state the definitions I'm taking and the problem.
The system is composed by $n$ points with positions $\mathbf r _i$ and subject to $3n-d$ constraints of the form: $$\phi _j (\mathbf r _1, \mathbf r _2,...,\mathbf r _n,t)=0\qquad (1\leq j \leq 3n-d), \tag{1}$$
that, deriving with respect to time, gives: $$\sum _{i=1} \frac{\partial \phi _j}{\partial \mathbf r _i} \cdot \dot {\mathbf r}_i=-\frac{\partial \phi _j}{\partial t}.\tag{2}$$
According to my notes, a set of possible velocities $(\mathbf v_1,\mathbf v_2,...,\mathbf v_n)$ is one that satisfies the above system of $j$ equations (with $v_i$ in the place of $\dot r _i$), while a set of virtual velocities is one that satisfies the homogeneous system
$$\sum _{i=1} \frac{\partial \phi _j}{\partial \mathbf r _i} \cdot \dot {\mathbf r}_i=0.\tag{*}$$
Finally, a virtual displacement is given by the product of a virtual velocity by a quantity $\delta t$, with the dimensions of time.
I have the following problem. Suppose that I have a parametrization of the configuration space at time $t$ in the form: $$\mathbf r _i = \mathbf r _i (q_1,\dots ,q_d;t).$$ That is: $$\phi _j(\{\mathbf r _i (q_1,\dots,q_d;t)\},t)=0$$ for all $q=(q_1,\dots,q_d)\in Q$ and $t\in [t_1,t_2]$.
Now, according to my notes, if such a parametrization is given, the general form of a virtual displacement is: $$\delta \boldsymbol r _i =\sum _h \frac{\partial \mathbf r _i}{\partial q _h}\delta q _h.$$
Let $q(t)$ be a curve in the coordinate's space. By taking the total derivative of both sides of the precedent equation, I obtain: $$\sum _i \frac{\partial \phi _j}{\partial \mathbf r _i}\cdot (\sum _h \frac{\partial \mathbf r_i}{\partial q _h} \dot q _h)+\sum _i \frac{\partial \phi _j}{\partial \mathbf r _i}\cdot \frac{\partial \mathbf r _i}{\partial t} +\frac{\partial \phi _j}{\partial t}=0.$$ But the first term is zero because it is the product of the gradients $\nabla _{\mathbf r _i}\phi _j$ with the virtual velocities $\mathbf v _i$. But, in this case, it looks like that the second+third terms should be zero.
I suspect that there's an error: I don't see why the sum of the second and third terms should always give $0$, and I would like a check of what I wrote above.
Answer: When I wrote this question some years ago, I was very confused about those "virtual displacements". Now I realize that analytical mechanics is one of those parts of physics where knowing the proper mathematical language, differential geometry in this case, can make your life far easier.
Virtual displacements and generalized coordinates.
Lagrangian mechanics takes place on a manifold $M$, which is embedded in $\mathbb R^{3N}$ via a (possibly non constant) mapping $\iota _t$. Virtual displacements are nothing but tangent vectors to $\iota _t (M)$. When $\iota _t=\iota _0$ is constant, virtual displacements also coincide with the velocity vectors of curves on $\iota _0 (M)$.
Generalized coordinates are the charts of the base manifold $M$; my above parametrization $\mathbf r _i(q_h;t)$ can be understood as a composition: $$Q\times \mathbb R \to M\times \mathbb R \to \mathbb R ^{3N},$$
$$(q,t)\mapsto (x(q),t)\mapsto \iota _t(x(q))\equiv r(q,t).$$
For fixed $t_0$, $r(q,t_0)$ parametrizes $\iota _{t_0} (M)$ and therefore $\frac{\partial r}{\partial q _i}$ is a tangent vector to $\iota _{t_0} (M)$, that is, it is a virtual displacement.
To make contact with the OP, the embedding $\iota _t (M)$ is directly defined via cartesians equations: $$\phi _j (r,t)=0\qquad r\in \mathbb R ^{3N}, j=1,2,\dots ,3N-d,$$ virtual displacements are orthogonal to the $3N-d$ gradients $\nabla \phi _j$, as in equation $(*)$ of the OP. [Here $r$ denotes the $N$-tuple $r=(\mathbf r_1,\dots, \mathbf r _N)\in \mathbb R ^{3N}$]
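To see the cancellation of the second and third terms in the OP's last displayed equation concretely, here is a worked example I'm adding: a bead on a vertical ring of assumed radius $R$, rotating about the $z$-axis with angular velocity $\omega$,
$$\mathbf r(\theta,t)=R\,(\sin\theta\cos\omega t,\ \sin\theta\sin\omega t,\ \cos\theta),\qquad \phi_1(\mathbf r,t)=x\sin\omega t-y\cos\omega t=0.$$
Then
$$\nabla\phi_1=(\sin\omega t,\,-\cos\omega t,\,0),\qquad \frac{\partial\mathbf r}{\partial t}=R\omega\sin\theta\,(-\sin\omega t,\ \cos\omega t,\ 0),$$
so
$$\nabla\phi_1\cdot\frac{\partial\mathbf r}{\partial t}=-R\omega\sin\theta,\qquad \frac{\partial\phi_1}{\partial t}=\omega(x\cos\omega t+y\sin\omega t)=R\omega\sin\theta,$$
and the two terms indeed cancel, while $\nabla\phi_1\cdot\dfrac{\partial\mathbf r}{\partial\theta}=0$ confirms that $\partial\mathbf r/\partial\theta$ is a virtual displacement.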
There's actually no problem with what I said after my last equation in the OP. The first term vanishes because each $\frac{\partial r}{\partial q _i}$ is separately a tangent vector. The second term vanishes because $\frac{\partial r}{\partial t}$ is the actual velocity of a point that is stationary with respect to the base manifold $M$ (e.g., a material point stationary at $\theta =0$ of a rotating ring). | {
"domain": "physics.stackexchange",
"id": 34135,
"tags": "classical-mechanics, lagrangian-formalism, differential-geometry, mathematical-physics, constrained-dynamics"
} |
Why use vacuum permeability during derivation of $\vec{B}=\mu_0(\vec{H} + \vec{M})$ | Question: Why do we use $\mu_0$ during the derivation of $\vec{B}=\mu_0(\vec{H} + \vec{M})$ where $\vec{M}$ is magnetization?
The derivation given in Sadiku's Elements of Electromagnetics:
Let $\vec{J_f}$ be free volume current density, $\vec{J_b}$ be bound volume current density,
\begin{align*}
\nabla \, \times \left( \frac{\vec{B}}{\mu_0} \right) &= \vec{J_f} + \vec{J_b} = \vec{J} \\
&= \nabla \times \vec{H} \, + \nabla \times \vec{M} \\
&= \nabla \times (\vec{H} + \vec{M}) \\
\vec{B} &=\mu_0(\vec{H} + \vec{M}) \quad \blacksquare
\end{align*}
I don't understand why we should use $\mu_0$ in the first place. Why don't we use $\mu$ instead? In free space, $\vec{M} = 0$ and
\begin{align*}
\nabla \times \vec{H} &= \vec{J_f} \\
\nabla \times \left( \frac{\vec{B}}{\mu_0} \right) &= \vec{J_f}
\end{align*}
then naturally we'd like to still have $\nabla \times \vec{H} = \vec{J} = \vec{J_f} + \vec{J_b}$ when $\vec{M} \neq 0$, so we could just change the $\mu_0$ to some constant $\mu$, so that
\begin{align*}
\nabla \times \vec{H} &= \vec{J} \\
\nabla \times \left( \frac{\vec{B}}{\mu} \right) &= \vec{J} = \vec{J_f} + \vec{J_b}
\end{align*}
but in the correct derivation,
\begin{align*}
\nabla \times \left( \frac{\vec{B}}{\mu_0} \right) &= \vec{J} = \vec{J_f} + \vec{J_b}
\end{align*}
What is it that forces us to use $\mu_0$?
Answer: ${\bf B}=\mu_0({\bf H}+{\bf M})$ is effectively just the definition of the auxiliary field ${\bf H}$. ${\bf B}$ is defined as the quantity that generates the velocity-dependent force in the Lorentz Force Law, and the magnetization ${\bf M}$ is the magnetic moment per unit volume. ${\bf H}$ is then defined in terms of the other two.
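For comparison, a material-dependent $\mu$ only appears as a derived quantity in the special case of a linear medium (a standard result, added here as a side note):
$$\mathbf M=\chi_m\mathbf H\ \Longrightarrow\ \mathbf B=\mu_0(1+\chi_m)\mathbf H\equiv\mu\mathbf H,\qquad \mu=\mu_0(1+\chi_m)=\mu_0\mu_r.$$
So $\mu_0$ is the universal constant appearing in the definition of $\mathbf H$, while $\mu$ exists only for materials with a linear response.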
For the currents, the bound current ${\bf J}_{b}$ is defined as the curl of ${\bf M}$, and then the free current is whatever is left over, ${\bf J}_{f}={\bf J}-{\bf J}_{b}$. There is thus no room to change the constant $\mu_{0}$ to something else. | {
"domain": "physics.stackexchange",
"id": 58475,
"tags": "electromagnetism, magnetic-fields"
} |
Complex property of sparse horner polynomials by induction | Question: I'm following this article to do a formal proof on elliptic curve cryptography. My question here addresses only a property "easily proved by induction".
Definitions (the important part is the addition definition at the bottom)
Let assume the following hierarchy:
abstract class shf;
case class const(c: Integer) extends shf
case class POP(i: nat, p: shf) extends shf
case class POW(i: nat, p: shf, q: shf) extends shf
where nat is the type for naturals including zero. We state also a property on the hierarchy called normality given by the function:
def normal(p: shf) = p match {
case const(c) => true
case POP(i,p) => i > 0 && normal(p) && p = POW(_,_,_)
case POW(i,p,q) => i > 0 && normal(p) && normal(q) && p != POW(_,_,0)
}
Assume we have some type shfn to say that we have a shf with the normality property. Then also define these functions:
def pop(i: nat, p: shfn) = {
case i = 0 or p = const(_) => p
case p = POP(j,q) => POP(i+j,q)
case _ => POP(i,p)
}
Denote nat* the naturals without zero:
def pow(i: nat*, p: shfn, q: shfn) = {
case p = 0 => pop(1,q)
case p = POW(j,r,0) => POW(i+j,r,q)
case _ => POW(i,p,q)
}
Finally, define the addition operation:
if (x = const(c1)) {
if(y = const(c2)) c1+c2 (over the integers)
else if(y = POP(i,p)) POP(i,x+p)
else if(y = POW(i,p,q)) POW(i,p,x+q)
}
if(y = const(c)) y+x
if(x = POP(i,p) && y = POP(j,q)){
if(i=j) pop(i,p+q)
else if(i > j) pop(j,POP(i-j,p)+q)
else if(i < j) pop(i,POP(j-i,q)+p)
}
if(x = POP(i,p) && y = POW(j,q,r)){
if(i=1) POW(j,q,r+p)
else POW(j,q,r+POP(i-1,p))
}
if(y = POP(i,p) && x = POW(j,q,r)) y+x
if(x = POW(i,p,q) && y = POW(j,r,s)){
if(i=j) pow(i,p+r,q+s)
else if(i > j) pow(j,POW(i-j,p,0)+r,q+s)
else pow(i,POW(j-i,r,0)+p,s+q)
}
The problem
I'm trying to prove that if $x,y$ are normal then $x+y$ is also normal. But the induction is not trivial to me.
I tried to split it. First I consider $x$ to be constant and everything went well by using the corresponding induction hypothesis.
The problem appears when I consider $x$ to be $POP$ and I do induction on $y$. When $y$ is also $POP$, I have to rely on the $POW$ case: since $x,y$ are normal instances of $POP$, their second components must be instances of $POW$. The other cases for this situation then need the $POP$-$POW$ case. But proving the $POP$-$POW$ case could potentially lead to a $POP$-$POP$ case again.
How can I structure my proof to succeed?
You may want to look at the actual proof on ACL2 here. Also, if you have any ideas to simplify the problem (to make it more easy to reason about and describe it) please let me know.
Intuition
Intuitively a POP construct represents a jump in the list of variables where our shf form is evaluated and a POW represents a polynomial in current variable x of the form $x^i \cdot p + q$. So normality for POP means that the jumping index is as big as possible (there cannot be nested POPs) and normality for POW means the power index $i$ is as big as possible.
Answer: You need to use induction on some complexity measure $\mu(x,y)$ (on some well-ordered domain) which has the following property:
When running the addition procedure on $x,y$, all recursive calls are on pairs $z,w$ for which $\mu(z,w) < \mu(x,y)$.
A complexity measure satisfying this property shows that the addition procedure terminates, and will also allow you to do a proof by induction. Note that you are not splitting into cases according to the coarse structure of $x,y$.
It is often the case that you can take $\mu(x,y)$ to be the total description length of $x,y$, so you might start by checking whether this works here. Another possibility is to take the sum of the total degrees of all monomials in $x,y$.
Finally, how should you approach the problem? I suggest forgetting about the formalism (in particular, ignoring POP completely) and writing out the simplification steps in their barest forms. Using $N(p,q)$ for the normal form of $p+q$ (themselves already in normal form), the rules are:
$N(c_1,c_2) = c_1+c_2$.
$N(c,x_i^jp+q) = x_i^jp+N(c,q)$.
$N(x_i^{j_1}p_1+q_1,x_i^{j_2}p_2+q_2) = x_i^{j_2}N(x_i^{j_1-j_2}p_1+0,p_2)+N(q_1,q_2)$, where $j_1 > j_2$.
$N(x_i^jp_1+q_1,x_i^jp_2+q_2) = x_i^jN(p_1,p_2)+N(q_1,q_2)$.
$N(x_{i_1}^{j_1}p_1+q_1,x_{i_2}^{j_2}p_2+q_2) = x_{i_1}^{j_1}p_1+N(q_1,x_{i_2}^{j_2}p_2+q_2)$, where $i_1 < i_2$.
Here we are conflating POP and POW, which makes things a little more concrete.
We want to prove that these rules are correct, namely:
If $p,q$ are in normal form then $N(p,q)$ terminates, is in normal form, and agrees as a function with $p+q$.
This can be split into several tasks. First, assuming that $p,q$ are in normal form, show that they match exactly one rule. Second, show that in each recursive invocation $N(r,s)$ on the right-hand side, (i) $\mu(r,s) < \mu(p,q)$, and (ii) $r,s$ are in normal form. Third, show that the right-hand side agrees with $p+q$ as a polynomial (using the inductive hypothesis). Fourth, show that the right-hand side is in normal form (again using the inductive hypothesis).
Proving that the addition procedure as given is valid is similar, only we need to take into account the different encoding which separates POW and POP. | {
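Before attempting the formal proof, it can help to run the bare rules on small examples. Below is a minimal Python sketch (the tuple encoding and all names are my own hypothetical choices, not the ACL2 formalization) that represents a polynomial either as `('C', c)` for a constant or as `('P', i, j, p, q)` for $x_i^j \cdot p + q$, implements the rules above, and lets you check the result against plain evaluation:

```python
def N(a, b):
    """Normal form of a + b, assuming a and b are already in normal form."""
    if a[0] == 'C' and b[0] == 'C':          # rule 1: constant + constant
        return ('C', a[1] + b[1])
    if a[0] == 'C':                          # rule 2: constant + polynomial
        _, i, j, p, q = b
        return ('P', i, j, p, N(a, q))
    if b[0] == 'C':                          # symmetric case
        return N(b, a)
    _, i1, j1, p1, q1 = a
    _, i2, j2, p2, q2 = b
    if i1 == i2:
        if j1 == j2:                         # rule 4: same variable, same power
            return ('P', i1, j1, N(p1, p2), N(q1, q2))
        if j1 > j2:                          # rule 3: same variable, j1 > j2
            return ('P', i1, j2,
                    N(('P', i1, j1 - j2, p1, ('C', 0)), p2),
                    N(q1, q2))
        return N(b, a)                       # j1 < j2: swap into rule 3
    if i1 < i2:                              # rule 5: different variables
        return ('P', i1, j1, p1, N(q1, b))
    return N(b, a)                           # i1 > i2: swap into rule 5

def ev(poly, env):
    """Evaluate a polynomial at env = {variable index: value}."""
    if poly[0] == 'C':
        return poly[1]
    _, i, j, p, q = poly
    return env[i] ** j * ev(p, env) + ev(q, env)
```

Checking `ev(N(a, b), env) == ev(a, env) + ev(b, env)` on random inputs is a cheap sanity test of the rules; the actual proof obligations (termination via $\mu$, preservation of normal form) are exactly what such testing cannot establish.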
"domain": "cs.stackexchange",
"id": 9010,
"tags": "cryptography, induction, software-verification"
} |
Force Drag and impact kinetic energy on relativistic spacecraft | Question: Assuming a relativistic rocket travelling at 0.95 times the speed of light (c), what would be the drag force on the cross-section area $(\pi500^2)$ of the ship facing the direction of travel assuming here that drag coefficient is 0.25. The equation for force drag in classical mechanics is:
$$
F_D = \frac{1}{2} p v^2 C_D A
$$
where $p$ is the density of interstellar space, which should be about 2 million protons per $\mathrm{m^3}$.
However I am not sure if in relativistic mechanics it is as such:
$$
F_D = pv^2\gamma^2
$$
where $\gamma$ is the Lorentz factor.
Both of these equations will give different results.
Furthermore, what is the kinetic energy when matter within the vacuum of space impacts on cross-section area $(\pi500^2)$ of the ship facing the direction of travel?
The equation is as follows:
$$
K_E = (\gamma-1)pc^2
$$
However the results from these equations are about 26.6 N, 0.0028 N and $6.6\times10^{-4}$ J respectively. The latter is not much! However in many articles on the web and as per certain experts, the KE should have been greater and as explosive as nuclear bombs?! Where am I wrong?
Answer: Calculations of air drag are always an approximation. And the equations you are trying to use are certainly not going to be valid under such extreme conditions of huge speed and tiny density. Furthermore, when these protons collide with the ship, they will very likely penetrate into the metal, unlike the conditions for which equations like yours apply, where the gas molecules just bounce off the object. This will cause radiation damage to the ship's hull. Furthermore, at such high speeds interaction with photons will become important, in particular the cosmic microwave background, causing additional drag.
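To see where the intuition in the question goes wrong, it helps to multiply the tiny per-proton energy by the enormous particle flux. A rough order-of-magnitude sketch in Python, using the numbers from the question and deliberately ignoring the caveats above (penetration, drag coefficient, relativistic corrections to the flux):

```python
import math

c   = 2.998e8                 # speed of light, m/s
m_p = 1.673e-27               # proton mass, kg
n   = 2e6                     # protons per m^3 of interstellar space
A   = math.pi * 500**2        # cross-sectional area of the ship, m^2

beta = 0.95
v = beta * c
gamma = 1 / math.sqrt(1 - beta**2)

ke_per_proton = (gamma - 1) * m_p * c**2   # kinetic energy of ONE proton, J
flux = n * v * A                           # protons hitting the hull per second
power = ke_per_proton * flux               # energy deposited per second, W
```

With these numbers $\gamma \approx 3.2$, each proton deposits only about $3\times10^{-10}$ J, but roughly $4\times10^{20}$ protons arrive every second, so the hull absorbs on the order of $10^{11}$ W. The per-particle energy is tiny; the integrated power is not.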
"domain": "physics.stackexchange",
"id": 39978,
"tags": "special-relativity, space, space-travel"
} |
Gravity: Velocity (and distance) as a function of time, but wait; there's more | Question: First, ignore air resistance. Always ignore air resistance.
Using kinematics for gravitational acceleration systems works within a specific scope, and when the system's scope widens too far, they become an oversimplification. Kinematics are great for systems subject to constant acceleration, and gravity fundamentally cannot be a constant acceleration (take a look at its formula).
I know how to accurately model velocity as a function of displacement in a gravitational system using potential energy, more specifically the changes in potential energy. Basically the difference between initial potential energy (using gravity calculated with the object's initial height) and final potential energy (with gravity calculated from the object's current position) tells us the change in the object's kinetic energy and thus velocity. I've checked these calculations against highly precise computer simulations and found them to be accurate.
This setup models velocity as a function of displacement. My question is how to derive a formula which models velocity as a function of time in a gravitational system. I've asked some colleagues about this and none can produce an accurate model. I tried to integrate the acceleration to yield velocity, but that approach can't convert acceleration vs displacement into velocity vs time.
Any insight would be much appreciated.
EDIT: To clarify, the model I'm looking for is not so much of an orbit as it is a simple (one dimensional) free fall straight down to the "earth"
Answer: I mean this depends strongly on what trajectories you're considering; it sounds like you're trying to use a $1/r^2$ force law on trajectories that are purely radial. Then we have:$$\ddot r = k/r^2,$$ which we can solve in a rather boring way by multiplying both sides by $\dot r$ and integrating to get:$$\dot r = \sqrt{C - \frac{2k}r},$$ which of course is your formula from the term for kinetic energy. To get a closed form for $r(t)$ you would now need to integrate:$$\int \frac{dr}{\sqrt{C_1 - 2k/r}} = t + C_2.$$The left hand side can be rewritten defining $\kappa = 2k/C_1$ and $v = \sqrt{C_1}$ as $$\int \frac{dr}{\sqrt{1 - \kappa/r}}=v t + C.$$Looking purely at the left-hand side we try substituting $u = 1 - \kappa/r,$ whence $r = \kappa/(1-u)$ and $dr = \kappa~du/(1-u)^2.$ So then we have, up to a constant factor,$$\int \frac{du}{(1 - u)^2 \sqrt{u}}$$Now let $w = \sqrt{u}$ and, again up to a constant factor, this becomes $$\int \frac{dw}{(1 - w^2)^2},$$
which can be solved by the method of partial fractions.
The unfortunate thing is that even though you can get a $t(r)$ this way, inverting it to get $r(t)$ does not yield an expression with a simple form.
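Since inverting $t(r)$ analytically is unpleasant, a practical route is to integrate the radial equation numerically and check the result against the energy relation $\dot r^2 = C - 2k/r$. A sketch in Python (the value of $k$ and the radii are illustrative, chosen to resemble a purely radial fall toward Earth, with $k = GM > 0$ and the force attractive):

```python
import math

k  = 3.986e14          # GM of Earth, m^3/s^2 (illustrative)
r0 = 7.0e6             # released from rest at r0 meters from the center
r_stop = 6.4e6         # integrate down to roughly Earth's surface

r, v, dt = r0, 0.0, 0.01
while r > r_stop:
    # velocity-Verlet step for r'' = -k/r^2 (attractive, purely radial infall)
    a = -k / r**2
    v += 0.5 * dt * a
    r += dt * v
    v += 0.5 * dt * (-k / r**2)

# closed-form speed from energy conservation: v^2 = 2k(1/r - 1/r0)
v_exact = math.sqrt(2 * k * (1 / r - 1 / r0))
```

At the stopping radius the integrated speed agrees with the closed form $\sqrt{2k(1/r - 1/r_0)}$, roughly 3.3 km/s here, a good sanity check that the integrator is trustworthy before reading $r(t)$ off the trajectory.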
"domain": "physics.stackexchange",
"id": 36013,
"tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, acceleration, velocity"
} |
ROS2 Eloquent TF listener (tf_echo) values freeze/resume on a cycle unexpectedly | Question:
Ubuntu 18.04
ROS2 Eloquent
TF
C++
### My pipeline:
I have a robot which uses sensors to detect the rotation, extension of its joints. It has revolute, prismatic, and fixed joints in a URDF file. Yes, I realize that the link names may be confusing, but it's not really relevant.
me@my_computer:~/ROS2_workspace/src/my_package/urdf_dir$ check_urdf my_robot.urdf
robotname is: MyRobot
---------- Successfully Parsed XML --------------- root Link: base_link has 1 child(ren)
child(1): 0
child(1): 1
child(1): 3
child(1): 4
child(1): 5
child(1): 6
child(1): 7
child(1): 9
child(1): 10
I am gathering data from sensors and publishing that data with my node on a custom Msg interface.
void publish_sensor_data()
{
auto sensors_msg = std::make_unique<SensorsMsg>();
/* magic */
pub_for_sensors_->publish(std::move(sensors_msg));
}
I then have a second node which takes my custom Msg and converts it to Msg::JointState
void sensors_callback(const SensorsMsg::SharedPtr sensors_msg){
auto joint_state_msg = std::make_shared<sensor_msgs::msg::JointState>();
joint_state_msg->name.push_back("revolute_joint_1");
joint_state_msg->name.push_back("revolute_joint_2");
joint_state_msg->name.push_back("prismatic_joint_1");
joint_state_msg->name.push_back("revolute_joint_3");
joint_state_msg->name.push_back("revolute_joint_4");
joint_state_msg->name.push_back("revolute_joint_5");
joint_state_msg->name.push_back("revolute_joint_6");
joint_state_msg->name.push_back("prismatic_joint_2");
joint_state_msg->name.push_back("fixed_joint");
joint_state_msg->position.push_back( deg_to_rad(sensors_msg->angle));
joint_state_msg->position.push_back( deg_to_rad(sensors_msg->angle));
joint_state_msg->position.push_back( in_to_m( sensors_msg->extend));
joint_state_msg->position.push_back( deg_to_rad(sensors_msg->angle));
joint_state_msg->position.push_back( deg_to_rad(sensors_msg->angle));
joint_state_msg->position.push_back( deg_to_rad(sensors_msg->angle));
joint_state_msg->position.push_back( deg_to_rad(sensors_msg->angle));
joint_state_msg->position.push_back( in_to_m( sensors_msg->extend));
joint_state_msg->position.push_back( deg_to_rad(0.) );
joint_state_msg->header.stamp = this->now();
joint_state_pub_->publish(std::move(*joint_state_msg));
}
I make use of robot_state_publisher to convert my Msg::JointState to send my transforms over the TF server. This is done through a launch file
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node
def generate_launch_description():
urdf = os.path.join(get_package_share_directory('my_package'),'urdf_dir', 'my_robot.urdf')
config_common = os.path.join(get_package_share_directory('my_package'), 'yaml_dir', 'robot_params.yaml')
return LaunchDescription([
Node(package='my_package', node_name='sensor_publisher', node_executable='sensor_publisher', name="sensor_publisher", output='screen', arguments=[config_common]),
Node(package='my_package', node_executable='sensors_to_jointstates', output='screen', parameters=[config_common] ),
Node(package='robot_state_publisher', node_executable='robot_state_publisher', output='screen', arguments=[urdf, config_common]),
Node(package='rviz2', node_executable='rviz2', output='screen' )
])
### My problem:
Although I can use ros2 topic echo /joint_states (image below, left) to see that my sensor data is coming in at the rate I expect (20 hz), subsequently turned into joint states, and finally turned into TFMessages, the values that are received using ros2 run tf2_ros tf2_echo base_link 10 20 (image below, right) will have somewhat regular freezes, where the header timestamp will hold the same value for the sec and nanosec, and the sensor values won't change either. This will clear up after a short moment (~2s), but will reappear shortly (~2s) after.
There are a couple things to note about the image
Both terminals are running at the same time, but the tf_echo command on the right has a timestamp far ahead of the ros2 topic echo on the left
Both terminals/nodes respective sensor values react instantly/synchronously when the physical sensor changes position (not pictured), as long as the TF is in its active phase and regardless of the timestamp difference
Only the TF Terminal will have its timestamp and values freeze (pictured)
The freeze goes on a cycle, to where the timestamp frozen on the screen is typically ~4.0s ahead of the last value stuck on screen. About every 4th cycle, it will skip an extra ~1.0s
This freeze can be visualized using rviz2 at the same time (icon pictured above in Ubuntu Launcher)
When resuming from frozen to active, the TF values will skip some time in order to match with real time speed
All of my nodes have Default QoS with queue size of 50
My computer memory and processing cores are fine. No thread is locked
### My Question:
How do I get TF to react as expected, without the values and timestamp freezing on a cycle? What should I investigate next?
I have looked into a few different things but I would like your opinions so that I can consider everything equally
Thank you for viewing. I look forward to your response.
Originally posted by ateator on ROS Answers with karma: 37 on 2020-10-22
Post score: 0
Original comments
Comment by mjcarroll on 2020-10-23:
What is the behavior on the /tf and /tf_static topic? That would be the missing link between /joint_states and the end result of tf2_echo. Can you echo those topics and see if you are getting gaps/freezes in there?
Also, is there a chance that you can create a reproducable example (that doesn't rely on hardware)? That would go a long way in helping others solve it.
Comment by ateator on 2020-10-23:
The behavior on /tf is as expected, no freezes or anything. I have use_tf_static set to false, so the /tf_static topic does not have any echo output. It seems that the issue is somewhat related to the transform listener. I copied and modified the echoListener code in tf2_ros/tf_echo.cpp (from both Eloquent and Foxy because they have different formats) to work with my project and experienced the same error with my own listener, with both formats.
I will put together some code to reproduce this error in a clean, concise way
Comment by ateator on 2020-10-23:
I have created a minimal reproducible package and the issue has resolved. I will be doing more research and will update
Answer:
I haven't figured out the exact answer but I believe the problem is in my CMakeLists.txt. I built a new minimal project and the setup shown above works fine.
Originally posted by ateator with karma: 37 on 2020-11-02
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 35667,
"tags": "ros, ros2, tf2-ros"
} |
How does equilibrium change when pressure is increased? | Question: What happens to the equilibrium when an increase in pressure is applied to a system with the same number of moles of gas on both sides of the reaction, according to Le Chatelier's Principle?
For example:
$$
\ce{2 HBr (g) <--> H2 (g) + Br2 (g)}
$$
Answer: According to Le Chatelier's Principle, when there is an increase in pressure the equilibrium will shift towards the side of the reaction which contains fewer moles of gas, and when there is a decrease in pressure, the reaction favors the side with the greater number of moles.
In your question, both sides of the reaction contain the same number of moles of gas, i.e. 2 moles of gas on the left and 2 moles of gas on the right side of the reaction, hence any change in pressure will have no effect on the system.
Therefore, there is no effect of any change in pressure. | {
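The same conclusion falls out of the reaction quotient: compressing the mixture multiplies every concentration by the same factor, and with equal moles of gas on both sides the factors cancel. A small numerical check (the equilibrium concentrations are arbitrary, illustrative values):

```python
# Arbitrary illustrative equilibrium concentrations (mol/L) for 2 HBr <=> H2 + Br2
HBr, H2, Br2 = 0.40, 0.10, 0.10
K = (H2 * Br2) / HBr**2                    # equilibrium constant Kc

f = 2.0                                    # halving the volume doubles every concentration
Q = (f * H2) * (f * Br2) / (f * HBr)**2    # reaction quotient after compression
# f**2 appears in both numerator and denominator, so Q == K and nothing shifts
```

Because $Q$ still equals $K$ after the compression, the system is still at equilibrium and there is no shift; with unequal moles the powers of $f$ would not cancel and $Q \neq K$ would drive the shift Le Chatelier predicts.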
"domain": "chemistry.stackexchange",
"id": 7749,
"tags": "thermodynamics, equilibrium, pressure"
} |
Mean value with respect to maxwell's velocity distribution | Question: I don't understand why for an arbitrary $A$, the integral
$$ \int \text{d}^3v \;f(\vec{v})A(\vec{v}) $$
is the mean value $\langle A\rangle_f$ of $A$ with respect to Maxwell's velocity distribution. Would anyone be so kind and explain it to me please?
Answer: That's the general definition of an average of a distribution. It is the generalization of the discrete case where the average is the sum (Probability of a result)$\times$ (the value of the result), i.e.
If the probability of getting outcome 1 is 1/3 and the probability of getting outcome 2 is 2/3 then the average would be
$$
\frac{1}{3}\times 1 + \frac{2}{3}\times 2 =\frac{5}{3}\, .
$$
You can see how, since the outcome $2$ is more probable, the average will be closer to $2$ than to $1$. Of course if both outcomes have probability $1/2$ then the average is $\frac{3}{2}$.
When you go to the continuous case, the sum (Probability of a result)$\times$ (the value of the result) gets replaced by an integral (i.e. a continuous sum) so that
$$
\langle A\rangle = \int dA P(A) A
$$
where $P(A)$ is the probability density. An example of this is the computation of the average position in quantum mechanics:
$$
\langle x\rangle = \int dx x P(x)\, ,
$$
where $P(x)=\psi^*(x)\psi(x)$ is the probability density. | {
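A quick way to make this concrete is a Monte Carlo estimate: sample velocities from the Maxwell distribution (each Cartesian component is an independent Gaussian) and average $A(\vec v) = |\vec v|^2$, which should reproduce the known result $\langle |\vec v|^2\rangle = 3kT/m$. A sketch in units where $kT/m = 1$:

```python
import random

random.seed(0)
N = 200_000
# Each velocity component is Gaussian with variance kT/m = 1, which is exactly
# the Maxwell velocity distribution; summing the squared components gives |v|^2.
mean_v2 = sum(
    random.gauss(0.0, 1.0)**2 + random.gauss(0.0, 1.0)**2 + random.gauss(0.0, 1.0)**2
    for _ in range(N)
) / N
# This sample mean approximates the integral  ∫ d^3v f(v) |v|^2  = 3 kT/m
```

The sample mean comes out close to 3, the equipartition result $\langle \tfrac12 m v^2\rangle = \tfrac32 kT$; increasing N tightens the agreement. This is just the integral definition above evaluated by sampling.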
"domain": "physics.stackexchange",
"id": 62528,
"tags": "thermodynamics, ideal-gas, quantum-statistics"
} |
Pong in c++ console app | Question: I would really appriciate if someone could review my code and give me feedback. This was my first multi file project.
Main:
#include <iostream>
#include <vector>
#include "Pad.h"
#include <conio.h>
#include <Windows.h>
#include <iostream>
#include "Ball.h"
#include"consoleFunctions.h"
int main()
{
Map map{};
Ball ball{};
FirstPad firstPad{map};
SecondPad secondPad{ map };
WhoScored whoScored{ none };
map.print();
int ballCounter{};
int secondPadCounter{};
while (firstPad.getScore() < 10 && secondPad.getScore() < 10) { // game finishes when one player has 10 points
map.print();
std::cout << "Score: " << firstPad.getScore() << ":" << secondPad.getScore();
if (_kbhit()) {
firstPad.move(map);
}
if (ballCounter == 13) { //slowing down ball
ball.move(map, whoScored);
ballCounter = 0;
}
if (secondPadCounter == 11) { //slowing down computer paddle movement
secondPad.move(ball, map);
secondPadCounter = 0;
}
if (whoScored == leftPad) {
firstPad.increaseScore();
}
else if (whoScored == rightPad) {
secondPad.increaseScore();
}
whoScored = none;
++ballCounter;
++secondPadCounter;
}
set_cursor( MAP_WIDTH / 2, MAP_HEIGHT / 2);
if (firstPad.getScore() == 10) std::cout << "LEFT PAD WINS";
else std::cout << "RIGHT PAD WINS";
std::cin.ignore();
}
Map.h
#pragma once
#include <vector>
#include <iostream>
#include"EnumsAndStructs.h"
class Map {
private:
std::vector<std::vector<Objects>> m_map; //matrix which holds map
public:
Map();
void print();
std::vector<std::vector<Objects>>& getMap() { return m_map; }
};
Map.cpp
#include "map.h"
#include<Windows.h>
#include"consoleFunctions.h"
#include<sstream>
extern const int MAP_HEIGHT{ 25 };
extern const int MAP_WIDTH{ 100 };
Map::Map() {
//resizing array
m_map.resize(MAP_HEIGHT);
for (int i{ 0 }; i < MAP_HEIGHT; ++i) {
m_map[i].resize(MAP_WIDTH);
}
//setting map for beginning
for (int i{ 0 }; i < MAP_HEIGHT; ++i) {
for (int j{ 0 }; j < MAP_WIDTH; ++j) {
if (j == 0 || j == MAP_WIDTH - 1) {
m_map[i][j] = Objects::VERTICAL_BORDER;
}
else if (i == 0 || i == MAP_HEIGHT - 1) {
m_map[i][j] = Objects::HORISONTAL_BORDER;
}
else {
m_map[i][j] = Objects::NOTHING;
}
}
m_map[MAP_HEIGHT / 2][MAP_WIDTH / 2] = Objects::BALL;
}
}
void Map::print() {
set_cursor();
cursor_off();
std::ostringstream ss;
for (int i{ 0 }; i < MAP_HEIGHT; ++i) {
for (int j{ 0 }; j < MAP_WIDTH; ++j) {
if (m_map[i][j] == Objects::HORISONTAL_BORDER) {
ss << '-';
}
else if (m_map[i][j] == Objects::VERTICAL_BORDER) {
ss << static_cast<char>(0XB3);
}
else if (m_map[i][j] == Objects::PAD) {
ss << static_cast<char>(0XB3);
}
else if(m_map[i][j] == Objects::BALL){
ss << '0';
}
else {
ss << ' ';
}
}
ss << '\n';
}
std::cout << ss.str();
}
Pad.h
#pragma once
#include <vector>
#include "EnumsAndStructs.h"
#include "Ball.h"
extern const int MAP_HEIGHT;
extern const int MAP_WIDTH;
class Ball;
class Map;
class Pad {
protected:
int m_score{};
const int m_height{ 3 };
Coordinates m_coordinates{};
void setPadInitially(Map& map); //initially sets pads on map, used only in constructor
public:
const Coordinates& getCoordinates() const { return m_coordinates; }
int getHeight() const;
int getScore() const { return m_score; }
void increaseScore() { ++m_score; }
};
class FirstPad : public Pad {
private:
Direction m_direction{ Direction::NONE };
void takeDirection();
public:
FirstPad(Map& map);
void move(Map& map);
};
class SecondPad : public Pad {
public:
SecondPad(Map& map);
void move(const Ball& ball, Map& map);
};
Pad.cpp
#include "Pad.h"
#include <conio.h>
const int firstPadX{ 5 };
const int secondtPadX{ MAP_WIDTH - 7 };
constexpr auto KEY_UP = 72;
constexpr auto KEY_DOWN = 80;
// class Pad
int Pad::getHeight() const { return m_height; }
void Pad::setPadInitially(Map& map) {
int j{};
for (int i{ m_coordinates.m_y }; j < m_height; ++i) {
map.getMap()[i][m_coordinates.m_x] = Objects::PAD;
++j;
}
}
//class FirstPad
FirstPad::FirstPad(Map& map)
{
m_coordinates.m_x = firstPadX;
m_coordinates.m_y = (MAP_HEIGHT / 2) - (m_height / 2);
int j{};
setPadInitially(map);
}
void FirstPad::takeDirection() {
auto input{ _getch() };
switch (input) {
case KEY_DOWN: m_direction = Direction::DOWN;
break;
case KEY_UP: m_direction = Direction::UP;
break;
default: m_direction = Direction::NONE;
}
}
void FirstPad::move(Map& map) {
//changing y coordinate as pad goes up and down
takeDirection();
if (m_direction == Direction::UP) {
if (m_coordinates.m_y > 1) {
--m_coordinates.m_y;
map.getMap()[m_coordinates.m_y][m_coordinates.m_x] = Objects::PAD;
map.getMap()[m_coordinates.m_y + m_height][m_coordinates.m_x] = Objects::NOTHING;
}
}
else if (m_direction == Direction::DOWN) {
if (m_coordinates.m_y + m_height < MAP_HEIGHT - 1) {
++m_coordinates.m_y;
map.getMap()[m_coordinates.m_y + m_height - 1][m_coordinates.m_x] = Objects::PAD;
map.getMap()[m_coordinates.m_y - 1][m_coordinates.m_x] = Objects::NOTHING;
}
}
}
// class SecondPad
SecondPad::SecondPad(Map& map) {
m_coordinates.m_x = secondtPadX;
m_coordinates.m_y = (MAP_HEIGHT / 2) - (m_height / 2);
setPadInitially(map);
}
void SecondPad::move(const Ball& ball, Map& map) {
// Computer controls this pad, basically follows ball
int padMiddleY = m_coordinates.m_y + m_height / 2;
if (padMiddleY < ball.getCoordinates().m_y && m_coordinates.m_y < MAP_HEIGHT - m_height - 1) {
++m_coordinates.m_y;
map.getMap()[m_coordinates.m_y + m_height - 1][m_coordinates.m_x] = Objects::PAD;
map.getMap()[m_coordinates.m_y - 1][m_coordinates.m_x] = Objects::NOTHING;
}
else if (padMiddleY > ball.getCoordinates().m_y && m_coordinates.m_y > 1) {
--m_coordinates.m_y;
map.getMap()[m_coordinates.m_y][m_coordinates.m_x] = Objects::PAD;
map.getMap()[m_coordinates.m_y + m_height][m_coordinates.m_x] = Objects::NOTHING;
}
}
Ball.h
#pragma once
#include "EnumsAndStructs.h"
#include "Map.h"
extern const int MAP_HEIGHT;
extern const int MAP_WIDTH;
class Map;
class Ball
{
private:
Coordinates m_coordinates{};
Direction m_direction{Direction::RIGHT_DOWN};
void handleColision(Map& map, WhoScored& whoScored);
public:
Ball()
: m_coordinates{MAP_WIDTH/ 2, MAP_HEIGHT / 2} // puts ball in the middle
{}
void move(Map& map, WhoScored& whoScored);
void reflectDirection();
const Coordinates& getCoordinates() const { return m_coordinates; }
};
Ball.cpp
#include "Ball.h"
void Ball::reflectDirection() {
switch (m_direction) {
case Direction::RIGHT_UP: {
m_direction = Direction::LEFT_UP;
return;
}
case Direction::RIGHT_DOWN: {
m_direction = Direction::LEFT_DOWN;
return;
}
case Direction::LEFT_UP: {
m_direction = Direction::RIGHT_UP;
return;
}
case Direction::LEFT_DOWN: {
m_direction = Direction::RIGHT_DOWN;
return;
}
}
return;
}
void Ball::handleColision(Map& map, WhoScored& whoScored) {
//handling bouncing on horizontal walls and pads and checking who scored on verticals
if (map.getMap()[m_coordinates.m_y][m_coordinates.m_x] == Objects::HORISONTAL_BORDER) {
if (m_direction == Direction::RIGHT_UP) { m_direction = Direction::RIGHT_DOWN; }
else if (m_direction == Direction::RIGHT_DOWN) { m_direction = Direction::RIGHT_UP; }
else if (m_direction == Direction::LEFT_UP) { m_direction = Direction::LEFT_DOWN; }
else if (m_direction == Direction::LEFT_DOWN) { m_direction = Direction::LEFT_UP; }
}
else if (map.getMap()[m_coordinates.m_y][m_coordinates.m_x] == Objects::VERTICAL_BORDER) {
if (m_coordinates.m_x < MAP_WIDTH / 2) { //if it is left side
whoScored = rightPad;
}
else whoScored = leftPad;
m_coordinates.m_y = MAP_HEIGHT / 2;
m_coordinates.m_x = MAP_WIDTH / 2;
map.getMap()[m_coordinates.m_y][m_coordinates.m_x] = Objects::BALL;
}
else if (map.getMap()[m_coordinates.m_y][m_coordinates.m_x] == Objects::PAD) {
reflectDirection();
}
}
void Ball::move(Map& map, WhoScored& whoScored) {
//ball only moves at 45 degrees, pretty simple
//if the ball's next position is a bounceable object, call handleColision
//whoScored is not needed in this function but in handleColision
if (m_direction == Direction::RIGHT_DOWN) {
if (map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::VERTICAL_BORDER && //without these ifs our objects would disappear as the ball bounces off them
map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::HORISONTAL_BORDER &&
map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::PAD) {
map.getMap()[m_coordinates.m_y][m_coordinates.m_x] = Objects::NOTHING;
}
++m_coordinates.m_y;
++m_coordinates.m_x;
if (map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::NOTHING) {
handleColision(map,whoScored);
}
else map.getMap()[m_coordinates.m_y][m_coordinates.m_x] = Objects::BALL;
}
else if (m_direction == Direction::RIGHT_UP) {
if (map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::VERTICAL_BORDER &&
map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::HORISONTAL_BORDER &&
map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::PAD) {
map.getMap()[m_coordinates.m_y][m_coordinates.m_x] = Objects::NOTHING;
}
--m_coordinates.m_y;
++m_coordinates.m_x;
if (map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::NOTHING) {
handleColision(map, whoScored);
}
else map.getMap()[m_coordinates.m_y][m_coordinates.m_x] = Objects::BALL;
}
else if (m_direction == Direction::LEFT_DOWN) {
if (map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::VERTICAL_BORDER &&
map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::HORISONTAL_BORDER &&
map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::PAD) {
map.getMap()[m_coordinates.m_y][m_coordinates.m_x] = Objects::NOTHING;
}
++m_coordinates.m_y;
--m_coordinates.m_x;
if (map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::NOTHING) {
handleColision(map,whoScored);
}
else map.getMap()[m_coordinates.m_y][m_coordinates.m_x] = Objects::BALL;
}
else if (m_direction == Direction::LEFT_UP) {
if (map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::VERTICAL_BORDER &&
map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::HORISONTAL_BORDER &&
map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::PAD) {
map.getMap()[m_coordinates.m_y][m_coordinates.m_x] = Objects::NOTHING;
}
--m_coordinates.m_y;
--m_coordinates.m_x;
if (map.getMap()[m_coordinates.m_y][m_coordinates.m_x] != Objects::NOTHING) {
handleColision(map, whoScored);
}
else map.getMap()[m_coordinates.m_y][m_coordinates.m_x] = Objects::BALL;
}
}
EnumsAndStructs.h
#pragma once
enum class Objects {
NOTHING,
PAD,
VERTICAL_BORDER,
HORISONTAL_BORDER,
BALL
};
enum class Direction {
NONE,
UP,
DOWN,
RIGHT,
LEFT,
RIGHT_UP,
RIGHT_DOWN,
LEFT_UP,
LEFT_DOWN,
};
struct Coordinates {
int m_x;
int m_y;
};
enum WhoScored {
none,
leftPad,
rightPad,
};
consoleFunctions.h
#pragma once
void cursor_off();//stops blinking cursor
void set_cursor(int x = 0, int y = 0);
consoleFunctions.cpp
#include<Windows.h>
void cursor_off()
{
CONSOLE_CURSOR_INFO console_cursor;
console_cursor.bVisible = 0;
console_cursor.dwSize = 1;
SetConsoleCursorInfo(GetStdHandle(STD_OUTPUT_HANDLE), &console_cursor);
}
void set_cursor(int x, int y)
{
HANDLE handle;
COORD coordinates;
handle = GetStdHandle(STD_OUTPUT_HANDLE);
coordinates.X = x;
coordinates.Y = y;
SetConsoleCursorPosition(handle, coordinates);
}
Answer: This looks pretty good for a first project of this size!
Consider using a curses library
You are using <conio.h> and <Windows.h>, but those are of course Windows-specific header files. You can make your program more portable by using a curses library to draw the screen and to handle the keyboard input.
There are several curses implementations that also work on Windows, the most popular one of those is probably PDCurses.
Use std::array<> if your map is going to have a fixed size
If you know the size of your map at compile time, you can use std::array instead of std::vector. This is more efficient, especially if you are nesting them. Use constexpr to declare the map size, and then you can use those constants to declare the arrays:
class Map {
static constexpr std::size_t HEIGHT = 25;
static constexpr std::size_t WIDTH = 100;
std::array<std::array<Objects, WIDTH>, HEIGHT> m_map;
public:
...
auto& getMap() { return m_map; }
};
Consider not using m_map
Your m_map, when implemented using std::arrays, uses 10 kilobytes of memory. With std::vectors, it even needs a little more. But is it really necessary to store the map this way? You know the walls are always at the edge of the map region, and then the only two other things to worry about are the position of the paddles and the ball. Since each paddle can only move up or down, you only need four integers in total to describe the state of the map: the y-coordinate of each paddle, and the x- and y-coordinates of the ball.
Of course this means rewriting some of the code, but I don't think it will actually be more complicated than it already is now.
Remove FirstPad and SecondPad
You should not need to create different classes for each of the pads. Instead of having SecondPad::move() contain the computer player logic, move that logic out, and just add a move() function to Pad that takes a Direction as an argument:
class Pad {
...
public:
Pad();
void move(Map& map, Direction dir);
...
};
If anything, make a Player and Computer class that then each control one Pad object.
Unnecessary use of stringstreams
I don't see the point in Map::print() first printing everything to ss, and then printing the contents of ss to std::cout. Why not print everything directly to std::cout?
Avoid repeating yourself
There is a lot of code duplication in your program that could have been avoided. For example, in Ball::move():
void Ball::move(Map& map, WhoScored& whoScored) {
// Remove the ball from its current position
auto& cur_object = map.getObject(m_coordinates);
if (cur_object == Objects::BALL) {
cur_object = Objects::NOTHING;
}
// Update the position of the ball
switch (m_direction) {
case Direction::RIGHT_DOWN: ++m_coordinates.m_y; ++m_coordinates.m_x; break;
case Direction::RIGHT_UP: --m_coordinates.m_y; ++m_coordinates.m_x; break;
case Direction::LEFT_DOWN: ++m_coordinates.m_y; --m_coordinates.m_x; break;
case Direction::LEFT_UP: --m_coordinates.m_y; --m_coordinates.m_x; break;
}
// Handle collisions if necessary
auto& next_object = map.getObject(m_coordinates);
if (next_object != Objects::NOTHING) {
handleCollision(map, whoScored);
}
else next_object = Objects::BALL;
}
The above also introduces a new member function for Map to get a reference to the object at a given position:
Objects& Map::getObject(Coordinates pos) {
    return m_map[pos.m_y][pos.m_x];
}
You could even go further. Consider creating an array of relative positions for each of the directions:
static constexpr Coordinates directions[] = {
    /* NONE */      { 0,  0},
    /* UP */        { 0, -1},
    ...
    /* LEFT_DOWN */ {-1,  1},
};
This way, the switch statement in the above code can be replaced as follows:
// Update the position of the ball
m_coordinates.m_x += directions[static_cast<std::size_t>(m_direction)].m_x;
m_coordinates.m_y += directions[static_cast<std::size_t>(m_direction)].m_y;
Avoid magic numbers
There are several magic numbers in your code. Whenever you have some number, like the number of points you need to win, don't just write that number in the code, create a constant for it. This has several benefits: because the number now has a name, it is more self-documenting, and if you ever need to change the number, you only need to do it in one place. So:
static constexpr int winning_score = 10;
static constexpr int ball_move_interval = 13;
static constexpr int computer_move_interval = 11;
...
while (firstPad.getScore() < winning_score && secondPad.getScore() < winning_score) {
...
if (ballCounter == ball_move_interval) {
ball.move(map, whoScored);
ballCounter = 0;
}
...
}
For the vertical lines, you use static_cast<char>(0XB3). I recommend you stick with ASCII characters, and just write '|' (the pipe symbol), but note that if you really want to use a high-ASCII symbol here, you could have written either '│' (literally the character with code 0xB3 from codepage 437) or '\xb3'. The latter still looks like a magical number, so in that case I would still create a named constant for it:
static constexpr char vertical_bar = '\xb3'; | {
"domain": "codereview.stackexchange",
"id": 43965,
"tags": "c++, object-oriented, game, console, pong"
} |
What do you think about my version of matplotlib subplots of filled contours with a "synchronized" color scheme? | Question: Dear Python VIS community.
Imagine the following situation:
You ran an experiment in an earth system model (ESM) where you altered some input parameters relative to a control run of the same ESM. Now you look at the surface air temperature, and since you have this information for both your experiment and your control run, you can compare the two data sets. Say you want to look at the seasonal difference in air temperature between the experiment and the control (i.e. experiment - control = difference). This would result in four maps, one for each season (Winter, Spring, Summer, Autumn, defined as "DJF", "MAM", "JJA", "SON").
import matplotlib.pyplot as plt
import matplotlib.colors as mcol
import matplotlib.cbook as cbook
import matplotlib.colors as colors
import numpy as np
import sys
winter = np.random.random((96,144))
winter[0:48,:] = winter[0:48,:] * 3
winter[48:,:] = winter[48:,:] * -6
spring = np.random.random((96,144))
spring[0:48,:] = spring[0:48,:] * 4
spring[48:,:] = spring[48:,:] * -6
summer = np.random.random((96,144))
summer[0:48,:] = summer[0:48,:] * 10
summer[48:,:] = summer[48:,:] * -2
autumn = np.random.random((96,144))
autumn[0:48,:] = autumn[0:48,:] * 4
autumn[48:,:] = autumn[48:,:] * -7
Cool data! But how do you visualize this using matplotlib? Okay you decide for filled contours from matplotlib. This will yield a map including a colorbar for each season. Now that is cool and all, but you would like to have the four maps in subplots (easy!, see Figure 1).
fig, axes = plt.subplots(2, 2)
axes = axes.flatten()
seasons = zip(axes, [winter, spring, summer, autumn])
for pair in seasons:
im = pair[0].contourf(pair[1])
plt.colorbar(im, ax=pair[0])
Figure 1. Subplots with individual colorbars.
Since you want to look at the difference, you can tell that 0 will be important, because your temperature can either be the same (x=0), or it can be regionally cooler (x < 0) or warmer (0 < x).
In order to have the value zero (0) exactly split the colormap you decide for a diverging colormap (based on this example: https://matplotlib.org/3.2.0/gallery/userdemo/colormap_normalizations_diverging.html#sphx-glr-gallery-userdemo-colormap-normalizations-diverging-py). Perfect you think?
colors_low = plt.cm.RdBu_r(np.linspace(0, 0.3, 256))
colors_high = plt.cm.RdBu_r(np.linspace(0.5, 1, 256))
all_colors = np.vstack((colors_low, colors_high))
segmented_map = colors.LinearSegmentedColormap.from_list('RdBu_r', all_colors)
divnorm = colors.TwoSlopeNorm(vmin=-7, vcenter=0, vmax=10)
fig, axes = plt.subplots(2, 2)
axes = axes.flatten()
seasons = zip(axes, [winter, spring, summer, autumn])
for pair in seasons:
im = pair[0].contourf(pair[1], cmap=segmented_map, norm=divnorm, vmin=-7, vmax=10)
plt.colorbar(im, ax=pair[0])
Figure 2. Diverging colormap with 0 in as the center.
Not quite, because although you supply contourf() with the overall vmin and vmax and your derived colormap, the levels (i.e. ticks) in the colorbar are not the same for the four plots ("WHY?!?", you scream)!
Aha, you find that you need to supply the same levels to contourf() in all four subplots (based on this: https://stackoverflow.com/questions/53641644/set-colorbar-range-with-contourf). But how do you exploit the functionality of contourf that automatically chooses appropriate contour levels?
You think that you could invisibly plot the four individual images, and then extract the color levels from each (im = contourf(), im.levels, https://matplotlib.org/3.3.1/api/contour_api.html#matplotlib.contour.QuadContourSet) and from this create a unique set of levels that combines the maximum and minimum from the extracted color levels.
# -1- Create the pseudo-figures for extraction of color levels:
fig, axes = plt.subplots(2, 2)
axes = axes.flatten()
seasons = zip(axes, [winter, spring, summer, autumn])
c_levels = []
for pair in seasons:
im = pair[0].contourf(pair[1], cmap=segmented_map, norm=divnorm, vmin=-7, vmax=10)
c_levels.append(im.levels)
# -1.1- Clear the figure
plt.clf()
# -2- Find the colorbars with the most levels below and above 0:
lower = sys.maxsize
lower_i = 0
higher = -1 * sys.maxsize
higher_i = 0
for i, c_level in enumerate(c_levels):
if np.min(c_level) < lower: # extract the index for the array with the minimum value
lower = np.min(c_level)
lower_i = i
if np.max(c_level) > higher: # extract the index for the array with the maximum value
higher = np.max(c_level)
higher_i = i
# -3- Create the custom color levels as a combination of the minimum and maximum found above
custom_c_levels = []
for level in c_levels[lower_i]:
if level <= 0: # define the levels for the negative section, including 0
custom_c_levels.append(level)
for level in c_levels[higher_i]:
if level > 0: # define the levels for the positive section, excluding 0
custom_c_levels.append(level)
custom_c_levels = np.array(custom_c_levels)
# -4- create the new normalization to go along with the new color levels
v_min = custom_c_levels[0]
v_max = custom_c_levels[-1]
divnorm = colors.TwoSlopeNorm(vmin=v_min, vcenter=0, vmax=v_max)
# -5- plot the figures
fig, axes = plt.subplots(2, 2)
axes = axes.flatten()
seasons = zip(axes, [winter, spring, summer, autumn])
for pair in seasons:
im = pair[0].contourf(pair[1], levels=custom_c_levels, cmap=segmented_map, norm=divnorm, vmin=v_min, vmax=v_max)
# -6- Get the positions of the lower right and lower left plot
left, bottom, width, height = axes[3].get_position().bounds
first_plot_left = axes[2].get_position().bounds[0]
# -7- the width of the colorbar
width = left - first_plot_left + width
# -8- Add axes to the figure, to place the color bar in
colorbar_axes = plt.axes([first_plot_left, bottom - 0.15, width, 0.03])
# -9- Add the colour bar
cbar = plt.colorbar(im, colorbar_axes, orientation='horizontal')
# -10- Label the colour bar and add ticks
cbar.set_label("Custom Colorbar")
Figure 3. Final figure, including correct contours and single colorbar.
And it works!
But I'm sure the community is able to do that in a much more straightforward way.
Challenge accepted? Then I would really appreciate if you could show me how to do that in a more simple approach.
ps: I give random data because I think it is more reasonable for the exercise (not using additional libraries such as iris and cartopy). However you can imagine that we are actually looking at a world map with land surface temperature ;).
Answer: I conducted some research into the way the color levels are chosen in contourf in the matplotlib python code (https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/contour.py#L1050). They use a matplotlib ticker MaxNLocator() instance to define the color levels, here is its documentation https://matplotlib.org/3.3.3/api/ticker_api.html#matplotlib.ticker.MaxNLocator.
Then I found this documentation https://matplotlib.org/3.1.1/gallery/ticks_and_spines/tick-locators.html.
This yields the following working solution that is more straightforward than the presented approach (based on the data and colormap initialized above):
import matplotlib.ticker as ticker
# -1- Find the global min and max values:
gl_min = np.min(np.stack((winter, spring, summer, autumn)))
gl_max = np.max(np.stack((winter, spring, summer, autumn)))
# -2- Create a simple plot, where the xaxis is using the found global min and max
fig, ax = plt.subplots()
ax.set_xlim(gl_min, gl_max)
# -3- Use the MaxNLocator() to get the contours levels
ax.xaxis.set_major_locator(ticker.MaxNLocator())
custom_c_levels = ax.get_xticks()
# -4- Clear the figure
plt.clf()
# -5- Create the diverging norm
divnorm = colors.TwoSlopeNorm(vmin=gl_min, vcenter=0, vmax=gl_max)
# -6- plot the figures
fig, axes = plt.subplots(2, 2)
axes = axes.flatten()
seasons = zip(axes, [winter, spring, summer, autumn])
for pair in seasons:
im = pair[0].contourf(pair[1], levels=custom_c_levels, cmap=segmented_map, norm=divnorm, vmin=gl_min, vmax=gl_max)
# -7- Get the positions of the lower right and lower left plot
left, bottom, width, height = axes[3].get_position().bounds
first_plot_left = axes[2].get_position().bounds[0]
# -8- the width of the colorbar
width = left - first_plot_left + width
# -9- Add axes to the figure, to place the color bar in
colorbar_axes = plt.axes([first_plot_left, bottom - 0.15, width, 0.03])
# -10- Add the color bar
cbar = plt.colorbar(im, colorbar_axes, orientation='horizontal')
# -11- Label the color bar and add ticks
cbar.set_label("Custom Colorbar")
# -12- Show the figure
plt.show()
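As an aside, the throwaway figure used only to harvest tick positions can likely be avoided altogether: matplotlib locators expose tick_values(vmin, vmax) as a standalone call, and MaxNLocator is the same machinery contourf uses internally. A sketch with stand-in data (the seasonal arrays are mocked here so the snippet runs on its own):

```python
import numpy as np
import matplotlib.ticker as ticker

# Stand-in data with ranges similar to the seasonal fields above
rng = np.random.default_rng(0)
winter = rng.random((96, 144)) * 9 - 6    # roughly -6 .. 3
summer = rng.random((96, 144)) * 12 - 2   # roughly -2 .. 10

gl_min = min(arr.min() for arr in (winter, summer))
gl_max = max(arr.max() for arr in (winter, summer))

# tick_values() computes the "nice" levels directly, so no dummy
# figure, set_xlim() or plt.clf() is needed.
custom_c_levels = ticker.MaxNLocator().tick_values(gl_min, gl_max)
```

The resulting levels can then be passed to contourf() and TwoSlopeNorm exactly as in the solution above.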
The process is simple:
First we have to find the global min and max value (i.e. across all data sets). Then we create a figure and set its limits to the found min/max values. Then we tell the axis that we want to use the MaxNLocator(). Then we can easily extract the found labels and use them as our custom contour levels. | {
"domain": "codereview.stackexchange",
"id": 40179,
"tags": "python, matplotlib"
} |
Limma decideTests function: what kind of multiple hypothesis testing correction does parameter "method" involve? | Question: What kind of multiple hypothesis testing correction does method="global" do in Limma's decideTests function? According to the documentation:
method="global" will treat the entire matrix of t-statistics as a
single vector of unrelated tests.
and
method="global" is useful when it is important that the same
t-statistic cutoff should correspond to statistical significance for
all the contrasts.
What does this really mean? Does it use a Bonferroni correction?
Answer: More context from the docs, looking at the arguments to decideTests():
method: character string specifying how genes and contrasts are to be combined in the
multiple testing scheme. Choices are "separate", "global", "hierarchical"
or "nestedF".
adjust.method: character string specifying p-value adjustment method. Possible values are "none",
"BH", "fdr" (equivalent to "BH"), "BY" and "holm". See p.adjust for details.
These are two different options.
One option (adjust.method) chooses the multiple test correction, for which the default is BH/FDR.
The other option (method) decides how the multiple test correction is applied, for example can it be applied separately to different genes or across contrasts. method = global in this context just means that every single test statistic is considered at the same time. An alternative to this would be to consider a nested design.
More intuitively, method = separate applies the correction to each column of your matrix independently, such that in an $m \times n$ matrix of statistics, the "number of tests" is $m \times n$ for global and $m$ for separate.
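To make the distinction concrete, here is a small self-contained illustration: a plain-Python reimplementation of the BH adjustment (not limma's code) applied once per column ("separate") and once to the pooled matrix ("global"):

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values, like R's p.adjust(method="BH")."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i], reverse=True)
    adjusted = [0.0] * m
    running_min = 1.0
    for k, i in enumerate(order):
        rank = m - k  # rank of pvals[i] when sorted ascending
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# A toy 4-gene x 2-contrast matrix of p-values (rows = genes)
pmat = [[0.001, 0.20],
        [0.010, 0.04],
        [0.030, 0.50],
        [0.800, 0.03]]

# method="separate": adjust each contrast (column) on its own -> m tests each
separate = [bh_adjust(list(col)) for col in zip(*pmat)]

# method="global": pool all m*n statistics and adjust them together
flat = [p for row in pmat for p in row]
global_adj = bh_adjust(flat)
```

The same raw p-value can receive a different adjusted value under the two schemes (here, gene 2 in contrast 2 adjusts to 0.08 separately but 0.064 globally), which is exactly why the choice of method matters even with the same adjust.method.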
For understanding the effect of the number of tests on FDR, I suggest consulting the FDR wiki page. Basically, using a method other than global will increase power. | {
"domain": "bioinformatics.stackexchange",
"id": 1922,
"tags": "rna-seq, limma"
} |
Not receiving any callback for synchronized PointCloud2 and Robot EndpointState | Question:
I am trying to receive data from two different publishers as listed below-
Message Type: baxter_core_msgs::EndpointState. Topic Name: /robot/limb/right/endpoint_state
Message Type: sensor_msgs::PointCloud2. Topic Name: /kinect2/sd/points
Please see the code snippet below-
void DataCollector::callback(const baxter_core_msgs::EndpointStateConstPtr& ee_msg, const sensor_msgs::PointCloud2ConstPtr& pc_msg)
{
std::cout << "Solve all of perception here" << std::endl;
}
DataCollector::DataCollector()
{
ros::NodeHandle nh;
message_filters::Subscriber<baxter_core_msgs::EndpointState> baxter_arm_sub(nh, "/robot/limb/right/endpoint_state", 1);
message_filters::Subscriber<sensor_msgs::PointCloud2> point_cloud_sub(nh, "/kinect2/sd/points", 1);
TimeSynchronizer<baxter_core_msgs::EndpointState, sensor_msgs::PointCloud2> sync(baxter_arm_sub, point_cloud_sub, 100);
sync.registerCallback(boost::bind(&DataCollector::callback, this, _1, _2));
ros::spin();
}
Unfortunately, no callback is ever received. I dug into the problem and noticed that baxter_core_msgs::EndpointState messages arrive at a much higher frequency than sensor_msgs::PointCloud2.
Please see below the screenshot of the output of rostopic echo /robot/limb/right/endpoint_state|grep secs and rostopic echo /kinect2/sd/points|grep secs in respective terminals, taken at the same time-
Originally posted by ravijoshi on ROS Answers with karma: 1744 on 2018-02-15
Post score: 0
Answer:
See #q60903 and #q172676 for almost duplicates.
Summarising: TimeSynchronizer needs timestamps to match exactly. If you don't have that happening in your message stream, but still want callbacks for messages that are "close enough", then you should use the ApproximateTime policy.
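For intuition only (this is a toy illustration of the idea, not the actual message_filters implementation), pairing two timestamp streams running at different rates shows why exact matching never fires while a small tolerance ("slop") does:

```python
def approximate_pairs(slow, fast, slop):
    """Pair each timestamp in `slow` with the closest one in `fast`,
    accepting the pair only if the stamps differ by at most `slop` seconds."""
    pairs = []
    j = 0
    for ts in slow:
        # advance j while the next candidate is at least as close
        while j + 1 < len(fast) and abs(fast[j + 1] - ts) <= abs(fast[j] - ts):
            j += 1
        if j < len(fast) and abs(fast[j] - ts) <= slop:
            pairs.append((ts, fast[j]))
            j += 1  # each fast message is matched at most once
    return pairs

# endpoint_state at 100 Hz, point cloud at 5 Hz: stamps never match exactly
endpoint = [k / 100.0 for k in range(100)]   # 0.00, 0.01, ..., 0.99
cloud = [0.003 + k / 5.0 for k in range(5)]  # 0.003, 0.203, ...

exact = [(a, b) for a in cloud for b in endpoint if a == b]
approx = approximate_pairs(cloud, endpoint, slop=0.005)
```

With these streams, exact matching finds nothing while the tolerant matcher pairs every point cloud with a nearby endpoint state, which is what the ApproximateTime policy achieves (with a more sophisticated optimal algorithm).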
Originally posted by gvdhoorn with karma: 86574 on 2018-02-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2018-02-15:
Note also that if your Kinect2 is not connected to the Baxter PC, you might want to look into synchronising the clocks between all hosts in your ROS node graph. This will still not make TimeSynchronizer call your callbacks, but it will avoid problems later on (with TF, for instance).
Comment by ravijoshi on 2018-02-15:
Got it. Thank you very much.
Comment by gvdhoorn on 2018-02-15:
If I may suggest: try to search for these kind of problems. It's not that we don't want to help, but getting answers on this forum takes time, so if you find it yourself, you save time.
Use Google (not the integrated search). I always do something like:
$search_term site:answers.ros.org
Comment by ravijoshi on 2018-02-17:
Thanks for telling the trick.I got it. | {
"domain": "robotics.stackexchange",
"id": 30050,
"tags": "ros, rosmessage, ros-indigo"
} |
Issue with reading sensor_msgs | Question:
Hello everyone. I want to read messages from the topic /cartesian/solution, which publishes sensor_msgs, and then publish to another topic /panda/pand. For confirmation, I am also printing out one member of the topic /cartesian/solution, but it always shows zero. This means my callback function is not updating the values declared in the public section of the class: all of them keep their default zero values. I want to read this data from the topic but am unable to do it.
Updated Code
#include <ros/ros.h>
#include <sensor_msgs/JointState.h>
#include <std_msgs/Float64.h>
#include <iostream>
using namespace std;
class server {
public:
std_msgs::Float64 joint_position0, joint_position1, joint_position2, joint_position3, joint_position4, joint_position5, joint_position6, joint_position7, joint_position8, joint_position9, joint_position10, joint_position11, joint_velocity0, joint_velocity1, joint_velocity2, joint_velocity3, joint_velocity4, joint_velocity5, joint_velocity6, joint_velocity7, joint_velocity8, joint_velocity9, joint_velocity10, joint_velocity11;
void jointStateCallback(const sensor_msgs::JointState::ConstPtr& msg);
};
void server::jointStateCallback(const sensor_msgs::JointState::ConstPtr& msg) {
joint_position0.data = msg->position[0];
joint_position1.data = msg->position[1];
joint_position2.data = msg->position[2];
joint_position3.data = msg->position[3];
joint_position4.data = msg->position[4];
joint_position5.data = msg->position[5];
joint_position6.data = msg->position[6];
joint_position7.data = msg->position[7];
joint_position8.data = msg->position[8];
joint_position9.data = msg->position[9];
joint_position10.data = msg->position[10];
joint_position11.data = msg->position[11];
joint_velocity0.data = msg->velocity[0];
joint_velocity1.data = msg->velocity[1];
joint_velocity2.data = msg->velocity[2];
joint_velocity3.data = msg->velocity[3];
joint_velocity4.data = msg->velocity[4];
joint_velocity5.data = msg->velocity[5];
joint_velocity6.data = msg->velocity[6];
joint_velocity7.data = msg->velocity[7];
joint_velocity8.data = msg->velocity[8];
joint_velocity9.data = msg->velocity[9];
joint_velocity10.data = msg->velocity[10];
joint_velocity11.data = msg->velocity[11];
}
int main(int argc, char** argv) {
server objserver;
ros::init(argc, argv, "mover_node");
ros::NodeHandle n;
//ros::AsyncSpinner spinner(1);
//spinner.start();
//bool success;
ros::Subscriber joint_sub = n.subscribe("/catersian/solution", 100, &server::jointStateCallback, &objserver);
while (ros::ok())
{
ros::spinOnce();
cout << objserver.joint_position8.data;
cout<<"\n";
}
return(0);
}
Originally posted by SunnyKatyara on ROS Answers with karma: 3 on 2019-08-09
Post score: 0
Original comments
Comment by ct2034 on 2019-08-09:
Can please make sure that your code is displayed correctly in the question and clarify what exactly the problem is. What are you trying to do? What do you expect to happen? What happens instead?
Comment by ct2034 on 2019-08-09:
And sorry, but maybe you want to look at something like this: http://www.cplusplus.com/reference/vector/vector/vector/
Comment by zmk5 on 2019-08-09:
I would highly suggest you try and use a object oriented (class) approach to doing this instead of relying on global variables.
See the following example of something like this:
The header for the class with pubs and subs:
https://github.com/ACSLaboratory/pheeno_ros/blob/cleanup/include/pheeno_ros/pheeno_robot.h
The src file for the the class:
https://github.com/ACSLaboratory/pheeno_ros/blob/cleanup/src/pheeno_robot.cpp
and finally, a src file with the main:
https://github.com/ACSLaboratory/pheeno_ros/blob/cleanup/src/random_walk.cpp
Comment by SunnyKatyara on 2019-08-10:
Dear ct2034 and zmk5, thanks for your message. Now, i have changed the syntax of node with class defined instead of using global variables. But, i am unable to get the data from the topic. It is giving me some strange values, may be due to time lapse? when i used float, it was given me some random values and then i replaced ith std_msgs/Float64 and now, it is giving me zeros, all the time.
Code#
using namespace std;
class server {
public:
std_msgs::Float64 joint_position0, joint_position1, .......etc
void jointStateCallback(const sensor_msgs::JointState::ConstPtr& msg);
};
void server::jointStateCallback(const sensor_msgs::JointState::ConstPtr& msg) {
joint_position0.data = msg->position[0];
... for all joint values
ros::Publisher joint_pub = n.advertise<trajectory_msgs::JointTrajectory>("/panda/pand", 100, true);
ros::Subscriber joint_sub = n.subscribe("/catersian/solution", 100, &server::jointStateCallback, &objserver);
Can you please help me, where i am making mistake
Comment by ct2034 on 2019-08-10:
Can you please edit your original question to contain the latest code and make sure that all is syntax highlighted. And add: What are you trying to do? What do you expect to happen? What happens instead?
Comment by SunnyKatyara on 2019-08-10:
Dear ct2034, I have updated my post. I want to read the values from the topic "/cartesian/solution" and print them out, but all the time it shows zeros. I do not know where my mistake is. Thank you
Comment by SunnyKatyara on 2019-08-10:
Sorry, i did not highlight the code. I have highlighted it now.
Thank you
Answer:
If you want to read values from /cartesian/solution, you have a typo in your code. It says /cATersian/.... Everything else works. I tested it.
Although I would recommend using arrays or vectors for the data instead of 24 individual variables. I also like the conversion to a class, but it is generally good practice to keep the member variables protected.
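A compilable sketch of that suggestion, with the message type mocked so the snippet stands alone (in ROS code the callback parameter would be sensor_msgs::JointState::ConstPtr):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Minimal stand-in for sensor_msgs::JointState (the real message also carries
// a header and joint names); it keeps this sketch compilable without ROS.
struct JointState {
    std::vector<double> position;
    std::vector<double> velocity;
};

// Storing whole vectors instead of 24 named members keeps the callback to two
// lines and works for any number of joints.
class Server {
public:
    void jointStateCallback(const std::shared_ptr<const JointState>& msg) {
        positions_ = msg->position;   // one copy replaces 12 assignments
        velocities_ = msg->velocity;
    }
    const std::vector<double>& positions() const { return positions_; }
    const std::vector<double>& velocities() const { return velocities_; }

protected:
    std::vector<double> positions_;
    std::vector<double> velocities_;
};
```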
Originally posted by ct2034 with karma: 862 on 2019-08-10
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by SunnyKatyara on 2019-08-10:
but ct2034, when i run the code, it gives me zeros only. It means my callback function is not working. Because all the time, it prints 0s only, which are default in the class. The output from /cartesian/solution is :
sunny@sunny-ThinkPad-Edge-E531:~/catkin_ws/src/tutorial_iros2018/launch$ rostopic echo /cartesian/solution
header:
seq: 0
stamp:
secs: 23
nsecs: 456000000
frame_id: ''
name: [panda_joint_x, panda_joint_y, panda_joint_theta, schunk_pw70_joint_pan, schunk_pw70_joint_tilt,
panda_joint_1, panda_joint_2, panda_joint_3, panda_joint_4, panda_joint_5, panda_joint_6,
panda_joint_7]
position: [0.0, 0.0, 0.0, 0.0, 0.0, 0.27218156462835696, -0.23239725174321993, 0.06414067279654619, -0.41568844865564586, -0.20909900886432725, 0.6249307486805932, -1.7535359390708531]
But i am not getting these value in the ouput, with this script code. This is the data published by other node on this topic.
Comment by ct2034 on 2019-08-10:
yes, because your code is subscribing to /catersian/solution instead of /cartesian/solution.
You can also add some output to the callback function to see when it is called. (which it won't be with that code)
Comment by ct2034 on 2019-08-10:
i can also recommend to add something like ros::Duration(1).sleep(); to your loop. Otherwise, it may just spam you terminal with too many outputs
Comment by SunnyKatyara on 2019-08-10:
Thank you so much. It was really as mistake with topic name. I did not put head to spelling. But thank you so much. It is working now :)
Comment by SunnyKatyara on 2019-08-10:
One last thing, i want to ask you that, how can i read the time from this topic and send it on other topic "/panda/pand".
Thank you
Comment by ct2034 on 2019-08-10:
Cool. Happy it works now. Can you please accept my answer.
the time is in the header. (You can see it in your rotopic echo ouput) just copy that to your new message. | {
"domain": "robotics.stackexchange",
"id": 33598,
"tags": "ros-kinetic"
} |
What is the work done by a force that changes with time $F(t)$? | Question: There is a force that changes with time. (F(t))
And the position vector is also given as a function time. (r(t))
How do we find the work done by F(t) between, let's say, t=0 and t=1?
This is my actual time-dependent force:
And this is the position vector:
Sometimes I get confused because of those i's and j's...
Answer: The work done during a short interval of time, $[t, t+\Delta t]$ is given by usual formula
$$
\Delta W = \mathbf{F}(t)\cdot \Delta \mathbf{r}(t),
$$
where $\mathbf{F}(t)$ and $\Delta \mathbf{r}(t)$ are the force and the displacement at the beginning of the interval. The total work is then approximately a sum over all the intervals, and this approximation becomes exact as the length of the interval goes to zero:
$$
W = \sum_{\text{all intervals}} \mathbf{F}(t)\cdot \Delta \mathbf{r}(t)
=
\int_{\mathbf{r}(0)}^{\mathbf{r}(1)} \mathbf{F}(t)\cdot d \mathbf{r}(t)
=
\int_{0}^{1} \mathbf{F}(t)\cdot \dot{\mathbf{r}}(t)dt
=
\int_{0}^{1} \mathbf{F}(t)\cdot \mathbf{v}(t)dt
=
\int_{0}^{1} P(t)dt,
$$
where
$$
P(t) = \mathbf{F}(t)\cdot \mathbf{v}(t)
$$
is the instantaneous power of the force. | {
"domain": "physics.stackexchange",
"id": 73677,
"tags": "homework-and-exercises, newtonian-mechanics, forces, work"
} |
[ROS2] image_transport only advertising raw option | Question:
Hi,
I created a ROS2 stereo camera node using image transport. I have two publisher:
m_publisher_left_image = image_transport::create_publisher(this, "stereo/left/image_raw");
m_publisher_right_image = image_transport::create_publisher(this, "stereo/right/image_raw");
m_publisher_left_info = this->create_publisher<sensor_msgs::msg::CameraInfo>("stereo/left/camera_info", 10);
m_publisher_right_info = this->create_publisher<sensor_msgs::msg::CameraInfo>("stereo/right/camera_info", 10);
When I start the node on my laptop I can see the following topics with ros2 topic list:
/stereo/left/camera_info
/stereo/left/image_raw
/stereo/left/image_raw/compressed
/stereo/left/image_raw/compressedDepth
/stereo/left/image_raw/theora
/stereo/right/camera_info
/stereo/right/image_raw
/stereo/right/image_raw/compressed
/stereo/right/image_raw/compressedDepth
/stereo/right/image_raw/theora
Now I run the exact same code on a Jetson Nano, also using eloquent distro but I can only see
/stereo/left/camera_info
/stereo/left/image_raw
/stereo/right/camera_info
/stereo/right/image_raw
I install eloquent via debian packages in late Nov on my laptop. On the Jetson it was installed on 6th Jan.
I would be glad if someone could explain me the cause of the different behavior.
Originally posted by tlaci on ROS Answers with karma: 48 on 2020-01-14
Post score: 1
Original comments
Comment by stevemacenski on 2020-01-14:
Do you have the image transport plugins installed on your jetson nano?
Comment by tlaci on 2020-01-15:
Yes, I have.
ros-eloquent-image-transport-plugins/bionic 2.2.1-1bionic.20191213.060222 amd64
A set of plugins for publishing and subscribing to sensor_msgs/Image topics in representations other than raw pixel data.
Edit: I have just realized that it says amd64 for all my packages. Shouldn't it be arm64? I installed ros2 according to the eloquent (debian packages) tutorial.
Comment by tfoote on 2020-01-15:
If you're on the Jetson you should have arm64 assuming you're running an arm64 os image. The plugins certainly won't load if they are the wrong architecture.
Comment by stevemacenski on 2020-01-15:
Sounds like we have a diagnosis!
Comment by tlaci on 2020-01-16:
It works now. I thought image_transport_plugins should be installed with eloquent, for me it did not.
Answer:
I reinstalled eloquent choosing the right architecture. Somehow it is a problem if amd64 is listed in the brackets too, maybe because it finds amd64 packages first.
sudo sh -c 'echo "deb [arch=arm64] http://packages.ros.org/ros2/ubuntu `lsb_release -cs` main" > /etc/apt/sources.list.d/ros2-latest.list'
After that I installed image_transport_plugins and now it works fine.
Originally posted by tlaci with karma: 48 on 2020-01-16
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 34274,
"tags": "ros2, image-transport"
} |
What is the ohmic value of a resistor that will dissipate 1 W when the voltage across it is 2 V? | Question: I've been learning about Ohm's and Watt's law throughout this chapter, so I'm already familiar with substituting Ohm's law into part of Watt's law to get values. But nowhere in the chapter did it go over getting the ohmic value of the resistor. As far as I can tell, $$ P = \frac{V^{2}}{R} $$ is the formula that I'll need. I think my issue is basic algebra knowledge: how to get the $V^2$ to the other side. After trying to solve this the first time, I checked my answer and realized I did it wrong, and I already know the answer is 4Ω :/ So that kind of tells me it's something like 2^2 V times 1 W. But I want to know how to do it correctly using the math. If I need to show what I've tried already I can, but using LaTeX is tough for me and takes a lot of time to write out the problem. I will if I need to, though.
Answer: You have the right equation - now just solve for $R$. This means you multiply both sides by $R$, and divide both sides by $P$. In steps, you started with
$$P = \frac{V^2}{R}$$
Multiply both sides by $R$:
$$P\ R = \frac{V^2}{R}R = V^2$$
Now divide both sides by $P$, to get
$$\frac{P}{P}R = \frac{V^2}{P}\\
R = \frac{V^2}{P}$$
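A quick numeric check of the rearranged formula with the question's values:

```python
# Solve R = V**2 / P for the values in the question
V = 2.0  # volts across the resistor
P = 1.0  # watts dissipated
R = V**2 / P  # 4.0 ohms
```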
Physics equations manipulate just like other equations... | {
"domain": "physics.stackexchange",
"id": 16880,
"tags": "homework-and-exercises, electric-circuits, electrical-resistance"
} |
Finding the two node-disjoint paths, minimizing the sum of their lengths | Question: Given an undirected graph and a start and end node, I am trying to find two node-disjoint paths such that the sum of their lengths is minimized. In particular, each path must start at the start node, end at the end node, and no node (other than the start or end node) can be visited by both paths. Is there an efficient algorithm to solve this problem?
I know that if I just need the shortest path, I can use Dijkstra's algorithm, but finding one path with Dijkstra's algorithm and then searching for another is sometimes not optimal at all, since nodes in the first path may have blocked a possible and way shorter path.
I have tried implementing the node_disjoint_paths algorithm from the Python networkx package, but I must say that the results vary a lot, from okay to really bad. My network is not that big: "only" 6000 nodes and 7500 edges.
Answer: This can be solved by a small modification to Suurballe's algorithm, with $O(|E| + |V| \log |V|)$ running time. You might also be interested in Computing the k shortest edge-disjoint paths on a weighted graph.
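As a concrete (if slower) baseline, the same answer can be computed as a unit-capacity minimum-cost flow with node splitting. This is a self-contained sketch using Bellman-Ford for the two augmentations rather than Suurballe's faster Dijkstra-with-potentials; it assumes a simple graph without parallel edges:

```python
import math

def two_disjoint_paths_cost(edges, s, t):
    """Minimum total length of two internally node-disjoint s-t paths in an
    undirected graph, or None if no such pair exists.

    Each undirected edge (u, v, w) becomes arcs u->v and v->u of capacity 1;
    each node is split into an in/out pair joined by a capacity-1 arc
    (capacity 2 for s and t, which both paths share)."""
    cap, cost = {}, {}

    def add_arc(u, v, c, w):
        cap[(u, v)] = cap.get((u, v), 0) + c
        cost[(u, v)] = w
        cap.setdefault((v, u), 0)   # residual arc
        cost[(v, u)] = -w

    nodes = {x for u, v, _ in edges for x in (u, v)}
    for n in nodes:
        add_arc((n, 'in'), (n, 'out'), 2 if n in (s, t) else 1, 0)
    for u, v, w in edges:
        add_arc((u, 'out'), (v, 'in'), 1, w)
        add_arc((v, 'out'), (u, 'in'), 1, w)

    src, dst, total = (s, 'in'), (t, 'out'), 0.0
    for _ in range(2):                       # push two units of flow
        dist, prev = {src: 0.0}, {}
        for _ in range(2 * len(nodes) + 1):  # Bellman-Ford on residual graph
            changed = False
            for (u, v), c in cap.items():
                if c > 0 and u in dist and dist[u] + cost[(u, v)] < dist.get(v, math.inf):
                    dist[v] = dist[u] + cost[(u, v)]
                    prev[v] = u
                    changed = True
            if not changed:
                break
        if dst not in dist:
            return None
        total += dist[dst]
        v = dst
        while v != src:                      # augment one unit along the path
            u = prev[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
    return total

# A graph where greedily taking the single shortest path first (s-a-b-t,
# length 3) blocks any second disjoint path; the flow formulation still
# finds the optimal pair s-a-t and s-b-t with total length 6.
edges = [('s', 'a', 1), ('a', 'b', 1), ('b', 't', 1), ('s', 'b', 2), ('a', 't', 2)]
best = two_disjoint_paths_cost(edges, 's', 't')
```

The node splitting is what enforces node-disjointness rather than mere edge-disjointness; dropping the split (and keeping only unit edge capacities) gives the edge-disjoint variant.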
Suurballe's algorithm works with directed graphs. You can convert an undirected graph to a directed graph by replacing each undirected edge $(u,v)$ with two directed edges $u \to v$, $v \to u$. | {
"domain": "cs.stackexchange",
"id": 17983,
"tags": "algorithms, graphs, shortest-path"
} |
By What Mechanism can Felines Reverse Diabetes? | Question: To my knowledge, house cats (and likely other felines) are the only animal able to go into remission after onset of Type 2 diabetes mellitus. I don't have a reference; this was told to me by peers in my research institution and by a veterinarian. Has this mechanism been studied, and if so, what is it?
Answer: I spent a couple of years working in a diabetes research institute. And you did hear right.
Cats, dogs, and monkeys can all be susceptible to diabetes, although rats and mice don't seem to have diabetes that behaves the same way as in humans; I don't think a mouse has ever been observed to get diabetes without specific gene knockouts. One could argue that mice simply do not get diabetes the way people do.
There are two causes of type 2 diabetes in the research world. The first happens because the animal may have a genetic susceptibility and environmental conditions induce the disease (like most people). The second happens because mutations in one specific gene or another pretty much cause it when the animal gets to a mature age (google term: maturity-onset diabetes of the young (MODY)). MODY genes have not proven to be a great model for type 2 diabetes treatment in most people (about 1% of diabetics have a MODY diabetes though).
Cats and dogs have been animals of interest as models of diabetes because their symptoms are human-like and they have shorter reproductive cycles. Some cats (like Burmese) clearly have a greater genetic predisposition to diabetes, but cats who are obese, have high-glycemic diets, or who get little or no physical activity are more likely to develop diabetes.
As with humans, the condition causes the insulin-secreting cells of the pancreas to slowly degrade, and if these cells are gone, there is no recovery from type 2 diabetes. Given all this, it is likely that cats get diabetes that is similar to the human condition.
In treating cats, it's been shown that such diabetes can be reversed. The most important thing to do (common to all treatments) is to change the cat's diet: restrict calories if the cat is obese and replace high-glycemic food. Additionally, insulin or drug treatments that reduce insulin resistance (such as are given to humans) can restart the islet cells that secrete insulin if the animal is lucky, and don't always have to be lifetime prescriptions.
The reference I have does not give remission statistics, but it must work a reasonable amount of the time.
But I wouldn't want to leave you with the impression that cats are special. It turns out that human type 2 diabetes is often reversible; this reference estimates 80% of the time. That study mostly focuses on the same treatments, change of diet and some fasting, but with no exercise regimen as with the cats.
There's probably nothing magical about cat diabetes 2... ;) | {
"domain": "biology.stackexchange",
"id": 693,
"tags": "zoology"
} |
How can free expansion be truly irreversible if particles have a small chance of returning to their original state? | Question: According to Halliday-Resnick, a free expansion of a gas is an irreversible process. However, the text continues that in a system of particles in a box, it is possible (though very unlikely) for a system of uniformly distributed particles to all cluster in one portion of the box, which is, in essence, the reverse of a free expansion.
Not only does this 'reversal' seem to imply that free expansion is not irreversible, but it also seems to violate the 2nd Law of Thermodynamics, as entropy is decreasing when the particles cluster.
Where am I going wrong here?
Answer: Let's consider an example case. Take one sixth of a mole of gas particles; that's $10^{23}$ particles. The probability that all happen to be in one half of a box (assuming equal probabilities for the two halves for each particle) is
$$
2^{-10^{23}}.
$$
Suppose each particle takes a millisecond to cross the box, so that the whole gas takes about 1 millisecond to sample a new distribution across the two halves of the box. The number of distributions thus sampled per year is
$$
1 \text{ year} / 1 \text{ millisecond} \simeq 3 \times 10^{10}.
$$
So the expected number of years until the gas gathers entirely in one half of the box is
$$
T = 2^{10^{23}} / 3\times 10^{10}.
$$
To understand this number, let's take the log to base 10:
$$
\log_{10} T = 10^{23} \log_{10} 2 - \log_{10} 3 - 10 \simeq 0.3 \times 10^{23}
$$
Now compare with the age of the universe, which I will take as 14 billion years:
$$
\log_{10} (T / 1.4 \times 10^{10} ) \simeq 0.3 \times 10^{23} - \log_{10} 1.4 - 10 \simeq 0.3 \times 10^{23}.
$$
Now to be cautious let's take a third of this, thus getting $10^{22}$. So the estimate is that it will take a time greater than
$10^{10^{22}}$ times the current age of the universe from the Big Bang
for the gas to gather spontaneously in one half of the box, if it is to happen just by random independent motion of each of the particles.
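The arithmetic above is easy to reproduce; here is a short Python sketch using only the standard library (same numbers as in the answer):

```python
import math

# Number of particles: one sixth of a mole.
N = 1e23

# Expected number of years until all N particles land in one half of the
# box: T = 2^N / (3e10 samples per year). Work in log10 to avoid overflow:
# log10(T) = N * log10(2) - log10(3e10)
log10_T_years = N * math.log10(2) - math.log10(3e10)

# Compare with the age of the universe (~1.4e10 years). At this scale the
# subtraction of ~10 from ~3e22 is utterly negligible.
log10_T_ages = log10_T_years - math.log10(1.4e10)

print(log10_T_years)  # ~3.01e22
print(log10_T_ages)   # ~3.01e22 as well
```

The punchline is visible in the code: dividing by the age of the universe changes the exponent by about 10, out of roughly $3 \times 10^{22}$.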
None of our knowledge of physics can be trusted on so large a timescale. What the calculation really shows is that the authors were incorrect to assert that the gas "could" spontaneously gather in one side of the box: the gas could not gather on one side, merely by independent random motions of the particles, by any reasonable definition of the words "could" and "could not". | {
"domain": "physics.stackexchange",
"id": 82570,
"tags": "thermodynamics, statistical-mechanics, entropy"
} |
Most efficient dict filter on key tuple? | Question: I have a dict wherein a value needs to be retrievable by any one of several keys. It seemed that making multiple dictionaries or multiple entries in a single dictionary pointing to the same value object (ref type) was more maintenance than I wanted to commit to.
I decided to use a tuple for the key. I would then use filter(...) with a lambda to determine if the given provider_lookup was in the keyed tuple. I am not concerned that a value may be duplicated across tuples (this will be guarded against as the code moves forward). Here are the two methods:
def register_block_provider(self, provider):
block = provider()
self.__block_providers.update({
(block.block_id, block.name): block
})
def get_block_provider(self, provider_lookup):
for provider_key in filter(lambda p: (p[0], p[1]), self.__block_providers.keys()):
if provider_lookup in provider_key:
print("we've found the key: {key}".format(key=provider_key))
return self.__block_providers[provider_key]
return None
Are there specific improvements that can be made to the get_block_provider method? This works fine so I'm just asking for some feedback on the details of the implementation.
Answer: filter
Is useless here. Its purpose is to filter out values based on a predicate:
list(filter(f, data))
is the same as:
[d for d in data if f(d)]
Given that, in your case, f is lambda p: (p[0], p[1]), it will always evaluate to True in a boolean context (a non-empty tuple is truthy), thus never filtering anything. I'm not sure what you were trying to achieve, but you can remove filter without changing the behaviour:
def get_block_provider(self, provider_lookup):
for provider_key in self.__block_providers:
if provider_lookup in provider_key:
print("we've found the key: {key}".format(key=provider_key))
return self.__block_providers[provider_key]
return None
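To see why removing filter is safe, here is a small demonstration (example keys invented for illustration) that the original predicate accepts every key:

```python
# Keys shaped like the OP's (block_id, name) tuples -- invented examples.
keys = [(1, "alpha"), (2, "beta"), (3, "gamma")]

# The original predicate returns the tuple (p[0], p[1]) itself; any
# non-empty tuple is truthy, so filter keeps every key.
kept = list(filter(lambda p: (p[0], p[1]), keys))
assert kept == keys

# Only an empty tuple is falsy -- and these keys are never empty.
assert not bool(())
print("filter kept all", len(kept), "keys")
```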
get_block_provider
I believe this method is meant to be used in much the same way as __getitem__ on dicts. Thus you need to act accordingly; it's the principle of least astonishment: if you claim to be somewhat of a dict, then you get to act like a dict.
First off, remove that print. There is nothing justifying that line in a getter. If the user wants to be sure they got something, it's their job to print something. It's not your class's job.
Second, it might be better to raise a KeyError, as a dict would, instead of returning None:
def get_block_provider(self, provider_lookup):
for provider_key in self.__block_providers:
if provider_lookup in provider_key:
return self.__block_providers[provider_key]
raise KeyError("No block corresponding to lookup '{}'".format(provider_lookup))
Be a dict
You said that
It seemed that making multiple dictionaries or multiple entries in a single dictionary pointing to the same value object (ref type) was more maintenance than I wanted to commit to.
But I disagree. Especially since, looking at the code and your comments, you're only expecting two different kinds of lookup for your blocks: ID and name.
You do not have to expose the entire dict interface, but you should use its strengths. Your for loop in get_block_provider is a complete waste of time: you're performing an \$O(n)\$ lookup where a dict would have provided an \$O(1)\$ one.
Instead, your lookup should rely on the underlying dictionary one. And you should focus on having only one entry point to update both keys at once:
def register_block_provider(self, provider):
block = provider()
blocks = self.__block_providers
# update both keys at once
blocks[block.block_id] = blocks[block.name] = block
def get_block_provider(self, provider_lookup):
return self.__block_providers[provider_lookup]
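Putting the two methods together, here is a minimal runnable sketch (the Block class is a hypothetical stand-in for the OP's block type) showing that both keys resolve to the same object in \$O(1)\$:

```python
class Block:
    """Hypothetical stand-in for the OP's block type."""
    def __init__(self, block_id, name):
        self.block_id = block_id
        self.name = name


class BlockRegistry:
    def __init__(self):
        self.__block_providers = {}

    def register_block_provider(self, provider):
        block = provider()
        blocks = self.__block_providers
        # Two entries, one per lookup key, both pointing at the same object.
        blocks[block.block_id] = blocks[block.name] = block

    def get_block_provider(self, provider_lookup):
        return self.__block_providers[provider_lookup]


registry = BlockRegistry()
registry.register_block_provider(lambda: Block(42, "terrain"))

# Lookup by ID or by name returns the very same block.
assert registry.get_block_provider(42) is registry.get_block_provider("terrain")
```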
You can even define
__getitem__ = get_block_provider
to mimic a dict interface. | {
"domain": "codereview.stackexchange",
"id": 18540,
"tags": "python, search, hash-map, lambda"
} |
Multi beam laser | Question:
Hello to all!
This is my first post here, and I don't know if this is the correct place for it.
My question is related to a laser that I'm trying to use inside ROS, it is an IBEO.
image description http://www.isr.uc.pt/%7Eurbano/evsim/images/ibeo_guppy.jpg
This laser has four measuring channels and, from what I've seen, the "sensor_msgs/LaserScan" message only supports one channel at a time. Is this correct?
If so, is there any other way to send the four channels at once? I would like to create a driver compatible with RViz.
Best regards, Mauro.
Originally posted by maurosmartins on ROS Answers with karma: 1 on 2011-07-13
Post score: 0
Original comments
Comment by Ivan Dryanovski on 2011-07-14:
Do you mean its a laser with 1 range reading, and 4 additional fields per reading (like intensity etc). Or is it 4 lasers rolled up into one device, producing 4 separate range readings? The answer I gave assumed the former
Answer:
As far as I know the IBEOs have those channels at different pitch angles, right?
In that case you have two possibilities (besides creating your own message).
The first would be to convert the laser data to a pointcloud as Ivan Dryanovski suggested.
The second would be to publish the four laser channels as four laser scan topics and use the tf library. Each scan will have a different frame_id associated in its header, and you use a static transform publisher that publishes the transforms according to the relative pose of the scans to the sensor.
Both versions will display correctly in rviz. The second has four different topics, though. The advantage is that it publishes LaserScan messages, making it compatible with software that uses LaserScans.
The best would probably be to follow an approach similar to the kinect driver: You advertise every possibility in your driver, but you will only publish (and convert laser to pointcloud) if a publisher has subscribers.
Originally posted by dornhege with karma: 31395 on 2011-07-14
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 6136,
"tags": "rviz, laser"
} |
Complexity of a variant of the max word problem. NP-complete? | Question: I'd like to be able to state that the following problem is NP-hard.
I am wondering whether anybody have any pointers to related/recent work?
The problem:
Given a finite set of transition matrices $A$ and two non-negative vectors
$\vec{x}$ and $\vec{y}$.
Do there exist $A_1, A_2, ..., A_n \in A$ such that
$$\vec{x} \, A_1 \, A_2 ... A_n \, \vec{y} \geq P$$
Answer: If you allow the repetition of matrices, i.e. there exists $ 1 \leq i < j \leq n $ s.t. $ A_i =A_j $, then your problem is actually undecidable.
Let $ EMPTY_{PFA} $ be the emptiness problem for probabilistic finite automaton (PFA).
A PFA is a 4-tuple $ P=(\Sigma,\{A_{\sigma}\}_{\sigma \in \Sigma},x,y) $, where $\Sigma = \{\sigma_1,\ldots,\sigma_k\}$ is the input alphabet, each $ A_{\sigma} $ is a stochastic matrix, $x$ is a stochastic row vector (the initial distribution), and $ y $ is a zero-one column vector. Each word $w \in \Sigma^*$ corresponds to a sequence of matrices from $ \{A_{\sigma}\}_{\sigma \in \Sigma} $ (allowing repetition), and vice versa. The accepting probability of $w$ by $P$ is as follows:
$$f_P(w) = x \cdot A_{w_1} \cdot A_{w_2} \cdots A_{w_{|w|}} \cdot y, $$
where $w_i$ is the $i^{th}$ symbol of $w$ and $|w|$ is the length of $w$.
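As a toy illustration of $f_P(w)$ (the matrices and vectors below are invented for the example, not taken from any particular PFA), the acceptance probability is just a chain of matrix products:

```python
def accepting_probability(x, matrices, y):
    """Compute x . A_1 . A_2 . ... . A_n . y using plain lists."""
    row = list(x)
    for A in matrices:
        # Row vector times matrix.
        row = [sum(row[i] * A[i][j] for i in range(len(A)))
               for j in range(len(A[0]))]
    return sum(r * yj for r, yj in zip(row, y))

# A two-state toy PFA over the alphabet {a, b}; each row is stochastic.
A = {
    "a": [[0.9, 0.1],
          [0.0, 1.0]],
    "b": [[0.5, 0.5],
          [0.5, 0.5]],
}
x = [1.0, 0.0]   # start in state 0
y = [0.0, 1.0]   # state 1 is the accepting state

w = "ab"
prob = accepting_probability(x, [A[s] for s in w], y)
print(prob)  # 0.5
```

The emptiness question then asks whether any word at all pushes this quantity to at least $\lambda$, and for PFAs that question is undecidable.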
$ EMPTY_{PFA} $ is the problem of deciding, for a given PFA $ P $ and a threshold $ \lambda \in (0,1) $, whether there exists a word accepted with probability at least $ \lambda $. $ EMPTY_{PFA} $ was shown to be undecidable. It is an old result; you can start digging from this article: http://arxiv.org/abs/quant-ph/0304082
$ EMPTY_{PFA} $ can be reduced to your problem. So, if your problem is decidable, then $ EMPTY_{PFA} $ is also decidable. But this is a contradiction. So, your problem is undecidable, too. | {
"domain": "cstheory.stackexchange",
"id": 2502,
"tags": "cc.complexity-theory, np-hardness, matrices"
} |