anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Finding the $\mathcal Z$-transform of $((-\frac{1}{3})^n + \frac{1}{2})^n \mu[n-2]$? | Question: What is the $\mathcal Z$-transform of $\left(\left(-\frac{1}{3}\right)^n + \frac{1}{2}\right)^n \mu[n-2]$?
Doing raw computations with large $n$ gives a sum of $0.6030$ which doesn't seem right, and I'm not sure how to manipulate the expression to get some clean result.
Answer: Since $\left| -\frac{1}{3}\right|< \frac{1}{2}$ and $n$ starts at $2$, one can consider an approximation based on the first terms of the binomial expansion of $a_n = \left(\frac{1}{2}\right)^n\left(1+2\left(-\frac{1}{3}\right)^n\right)^n$. Using a binomial expansion up to degree $2$ on the second factor only, one gets:
$$a_n \approx b_n= \left(\frac{1}{2}\right)^n + 2n\left(-\frac{1}{6}\right)^n + 2n(n-1)\left(\frac{1}{18}\right)^n \,.$$
Details: since coefficients are zero for $n<2$, for $n=2$,
$$a_2=\left(\frac{1}{2}\right)^2\left(1+2\left(-\frac{1}{3}\right)^2\right)^2= b_2 = \left(\frac{1}{2}\right)^2\left[1+ {2 \choose 1}\left(2\left(-\frac{1}{3}\right)^2\right)+ {2 \choose 2}\left(2\left(-\frac{1}{3}\right)^2\right)^2\right]$$
exactly. Then for $n>2$, the term $c_n=2\left(-\frac{1}{3}\right)^n$ tends to be small. You can approximate $(1+c_n)^n$ by $1+ {n \choose 1}c_n+ {n \choose 2}c_n^2$.
The following graphs provide an evaluation of the approximation, up to $n=8$. Top graph superimposes $a_n$, @MaximilianMatthé's approximation (MM), and what we get using the first (D0), the first two (D1), or the three terms (D2) of the above series. $D0$ (only $\left(\frac{1}{2}\right)^n$) is clearly insufficient. The others are quite close. The bottom graph shows the absolute approximation errors. As you can see, $D1$ and $D2$ converge very fast when $n$ grows, and are already quite good for the first terms.
The nice thing about these approximations is that you can reuse not only the classical properties of the $\mathcal{Z}$-transform (linearity, time-shift, and scaling) but also differentiation (up to order $2$), because of the factors $n$ and $n(n-1)$:
$$ n x_n \to -z\frac{dX(z)}{dz}\,.$$
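As an illustrative numerical cross-check (not part of the original answer), the $\mathcal Z$-transform of the approximation $b_n$ has a closed form obtained from the standard geometric-series identities for $\sum_{n\ge2} r^n$, $\sum_{n\ge2} n\,r^n$, and $\sum_{n\ge2} n(n-1)\,r^n$, and can be compared against a truncated sum over the exact $a_n$. The Python sketch below (function names invented) does this at a sample point $z=2$:

```python
def z_transform_exact(z, n_max=80):
    """Truncated sum of a_n z^{-n} for a_n = ((-1/3)^n + 1/2)^n, n >= 2."""
    return sum(((-1 / 3) ** n + 1 / 2) ** n * z ** (-n) for n in range(2, n_max))

def z_transform_approx(z):
    """Closed form for B(z) = sum_{n>=2} b_n z^{-n}, using
    sum_{n>=2} r^n       = r^2 / (1 - r),
    sum_{n>=2} n r^n     = r / (1-r)^2 - r,
    sum_{n>=2} n(n-1)r^n = 2 r^2 / (1-r)^3."""
    q = 1 / (2 * z)    # ratio for the (1/2)^n term
    p = -1 / (6 * z)   # ratio for the 2 n (-1/6)^n term
    u = 1 / (18 * z)   # ratio for the 2 n (n-1) (1/18)^n term
    return (q ** 2 / (1 - q)
            + 2 * (p / (1 - p) ** 2 - p)
            + 4 * u ** 2 / (1 - u) ** 3)
```

At $z=2$ the two values agree to about $10^{-5}$, consistent with the small $D2$ errors shown in the graphs.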
A Matlab code for reproduction:
n = (2:8)';
s = ((-1/3).^n+1/2).^n;
s0 = ((-1/3).^n+(1/2).^n);
sa = ((1/2).^n);
sb = ((1/2).^n).*(1+2.*n.*(-1/3).^n);
sc = (1/2).^n+2.*n.*(-1/6).^n + 2.*n.*(n-1).*((1/18).^n);
subplot(2,1,1)
plot(n,[s,s0,sa,sb,sc],'.-');axis([0 max(n) -Inf Inf]); grid on
legend('Orig.','MM','D0','D1','D2')
h=xlabel('$n$ index');set(h,'Interpreter','latex');title('Series')
subplot(2,1,2)
semilogy(n,abs([nan(size(s)),s0-s,sa-s,sb-s,sc-s]),'.-');axis([0 max(n) -Inf Inf]); grid on
legend('Orig.','MM','D0','D1','D2')
h=xlabel('$n$ index');set(h,'Interpreter','latex');title('Absolute approximation errors') | {
"domain": "dsp.stackexchange",
"id": 4601,
"tags": "z-transform"
} |
Virtual photon description of $B$ and $E$ fields | Question: I continue to find it amazing that something as “bulky” and macroscopic as a static magnetic or electric field is actually a manifestation of virtual photons.
So putting on your QFT spectacles, look closely at the space near the pole of a powerful magnet – virtual photons! Now look between the plates of a charged capacitor – virtual photons again!
But if it’s all virtual photons, how do we get the difference between a magnetic and electric field?
Answer: the wave function of a single photon has several components - much like the components of the Dirac field (or Dirac wave function) - and this wave function is pretty much isomorphic to the electromagnetic field, remembering the complexified values of $E$ and $B$ vectors at each point. The probability density that a photon is found at a particular point is proportional to the energy density $(E^2+B^2)/2$ at this point. But again, the interpretation of $B,E$ for a single photon has to be changed.
So whether the field around an object is electric or magnetic or both is encoded in the "polarization" of the virtual photons.
You may imagine that the photon has 6 possible polarizations or so, identified with the components of $E$ and $B$. Well, for a particular direction, it is really just the $E+iB$ combination that acts as the wave function, so there are only three polarizations for a given direction - and one of them (the longitudinal) is forbidden, too. ;-) But the qualitative point that there are many polarizations is correct.
However, as emphasized repeatedly, you shouldn't imagine that a virtual photon is a real particle that can be counted. That's a reason why QGR's answer is pretty much irrelevant for your question because there is no operator counting virtual photons at all - so it makes no sense to ask whether it commutes with other operators. QGR may have thought about real photons but he hasn't answered your question, anyway.
By the way, static fields correspond to a vanishing frequency - because everything with a non-vanishing frequency will go like $\exp(i\omega t)$ or $\cos(\omega t)$. So if you want to describe the fields of electric sources and magnets as a collection of virtual photons, you must realize that the static nature of the field implies that the relevant fields will have the energy equal to zero. But the momentum is nonzero because the field depends on space - because of the sources. Such virtual photons are very far from being on-shell - they're very virtual, indeed. It is not too helpful to talk about virtual photons with particular frequencies and wavenumbers if there are electric sources in the middle of the region you want to describe. The Fourier analysis is only helpful for photons in a pretty much empty space.
But you could calculate the probabilities of various outcomes for a charged particle in an external electric or magnetic field, produced e.g. by many spinning electrons, using Feynman diagrams - where the virtual photons are the internal lines. The Feynman diagrams would be able to calculate the force acting on the probe particle. Some terms in the force wouldn't depend on the velocity - the electric forces - while others would depend on the velocity - the magnetic ones. These different terms would always come from the "same type" of virtual photons but all these photons depend on the sources of the field, so you would of course get different results for electric and magnetic fields.
All this stuff is confusing and really unnecessary. If you worry that quantum electrodynamics won't reproduce basic properties of electromagnetism - such as the difference between electricity and magnetism; or the difference between attractive and repulsive forces - then you shouldn't worry. It can be easily demonstrated that in the classical limit - e.g. for strong enough fields with a low enough frequency - the quantum electrodynamics (and the quantum field) directly reduces to the right classical limit, the classical electrodynamics (and the classical fields). Virtual photons are just a very helpful tool to study all kinds of processes similar to scattering. Their maths can be deduced from quantum fields - not the other way around - and these virtual photons don't happen to be useful to describe your kind of highly classical situations.
Best wishes
Luboš | {
"domain": "physics.stackexchange",
"id": 318,
"tags": "electrostatics, quantum-electrodynamics, virtual-particles, magnetostatics, carrier-particles"
} |
Is it true that every monad transformer is equivalent to its underlying/base monad? | Question: Question originally asked in proofassistants.stackexchange
Just like the title says, is it true (in some sensible model)? And if so, how does one prove it? Something tells me it should be true, and that a higher-order version of parametricity/theorems-for-free is needed to show it.
A concrete instance of the problem:
∀s. ∀a. (State s a ≅ (∀m. Monad m → StateT s m a))
In general the problem can be stated informally like so:
∀a. (MonadT Id a ≅ (∀ m. Monad m → MonadT m a))
where MonadT is a "schema" standing for arbitrary monad transformer.
The background type theory can be assumed to be system F or some dependent type theory consistent with parametricity. Depending on the theory used, the statement can be either internal or external.
Answer: The equation F Id ≅ ∀ (m: Monad). F m seems to be correct (for most transformers F, see below). However, I would not say that "a monad transformer is equivalent to its base monad". A monad transformer probably carries more information than its base monad, because there is no known way of mechanically converting a given base monad into its monad transformer.
By definition, a transformer's base monad can be obtained by applying the transformer to Id. This is denoted by F Id. You claim that F Id ≅ ∀ (m: Monad). F m. This seems to be correct.
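As a toy illustration of the concrete instance from the question, State s a ≅ ∀m. Monad m → StateT s m a, here is a hypothetical Python encoding (all names invented for this sketch) of StateT as a function s → m (a, s), written parametrically in the monad m; instantiating m = Identity recovers the plain State monad s → (a, s):

```python
class Identity:
    """The trivial monad: Identity a just wraps a value."""
    def __init__(self, value): self.value = value
    @staticmethod
    def unit(a): return Identity(a)
    def bind(self, f): return f(self.value)

# StateT s m a encoded as a function s -> m (a, s), parametric in m.
def unit_t(m, a):    return lambda s: m.unit((a, s))
def bind_t(m, t, f): return lambda s: t(s).bind(lambda pair: f(pair[0])(pair[1]))
def get_t(m):        return lambda s: m.unit((s, s))
def put_t(m, s_new): return lambda s: m.unit((None, s_new))

# A StateT program written once, using only the monad m's methods
# (fully parametric): increment the state, return the old value.
def program(m):
    return bind_t(m, get_t(m), lambda old:
           bind_t(m, put_t(m, old + 1), lambda _:
           unit_t(m, old)))

# Applying the transformer to the identity monad gives the base State monad.
result, final_state = program(Identity)(41).value
```

Because `program` can only use the monad methods, it is exactly the kind of fully parametric value of type ∀ (m: Monad). F m discussed below.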
Also you are correct that a proof will involve parametricity at the level of monads.
A possible proof could go like this. Consider the category of monads: objects are monads and arrows are monad morphisms. In that category, a monad transformer is an endofunctor. (This is true for almost all monad transformers, except Continuation and Codensity and some other variants of those monads. See https://stackoverflow.com/questions/63882053/whats-a-functor-on-the-category-of-monads?rq=3 and see also my answers in Explaining monad transformers in categorical terms and in https://stackoverflow.com/questions/24515876/is-there-a-monad-that-doesnt-have-a-corresponding-monad-transformer-except-io .)
Then we want to prove the following property:
F Id ≅ ∀ (m: Monad). F m
where ∀ (m: Monad) goes over all monads.
The first step is to prove that the identity monad Id is an initial object in the category of monads. For any given monad M, there is only one monad morphism between Id and M (that morphism is given by the monad M's unit method, unit: ∀ a. Id a → M a). I omit the proof of this property.
The second step is to use the Yoneda lemma (still in the category of monads). For any Set-valued functor G (that is, a functor from the category of monads to the category of sets), the Yoneda lemma says:
G X ≅ Nat(Hom(X, _), G) ,
where Nat(K, L) is the set of natural transformations between functors K and L; Hom(X, _) is the Set-valued functor that maps a given monad M into the set of morphisms (in the category of monads) between the monads X and M.
Now we want to apply the Yoneda lemma to our situation where G = F is our monad transformer endofunctor. But then there is a technical difficulty: we cannot use the Yoneda lemma because G is not a set-valued functor (it's a monad-valued functor). We need to map monads into sets in some way. This is not straightforward; for instance, it is not clear how to choose a set that would correspond to the List monad or to the Maybe monad. It is easier to introduce a type parameter t and talk about the type List t or Maybe t. For any given type t there is a well-defined set of all values of type List t. So, we can temporarily choose some arbitrary type t and define a set-valued functor G that maps a monad m into the set of all values of type (F m) t. For this functor G, the Yoneda lemma shown above will hold.
The third step is to choose X = Id in the Yoneda lemma and find:
G Id ≅ Nat(Hom(Id, _), G)
The fourth step is to see why it makes sense to write the right-hand side as ∀ (m: Monad). G m in the notation of a programming language. In fact, since we are working in a purely functional language where parametricity holds, any value of type ∀ (m: Monad). G m must be implemented in a way that is fully parametric in the monad m: it may use only the monad m's methods but no other knowledge about m.
The set Hom(Id, M) is a single-element set because Id is an initial object and there is only one monad morphism between Id and M. So, the functor Hom(Id, _) is a constant functor that maps any object into a single-element set.
What is the set of natural transformations between that constant functor and G? A component at m of such a transformation is a morphism of type Hom(Id, m) => G m. Here => means the arrow in the category Set.
But Hom(Id, m) is a single-element set. So, a morphism Hom(Id, m) => G m is the same as just choosing one element in the set G m.
We find that a natural transformation between Hom(Id, _) and G is the same as a choice, for each monad m, of an element in the set G m in a way that does not depend on the monad m other than through the monad m's methods (in other words, in a fully parametric way). An element in the set G m is the same as a value of type F m t. In a programming language, we would write that as the type ∀ (m: Monad). F m t assuming that values of that type must be fully parametric. The assumption of parametricity allows us to use the parametricity theorem and is necessary because otherwise the values of type ∀ (m: Monad). F m t would not correspond to natural transformations.
In this way we find, in a programming language notation, that:
(F Id) t ≅ ∀ (m: Monad). F m t
This holds for all types t. So, we can rewrite this identity more concisely as:
F Id ≅ ∀ (m: Monad). F m
There are certainly some rough edges in this sketch of a proof, but I'm not sufficiently well-versed in category theory to polish it off.
The analogy with the Yoneda lemma for ordinary types and endofunctors gives the following equivalence between types:
F 0 ≅ ∀ t. F t
because 0 is an initial object in the category of types. | {
"domain": "cstheory.stackexchange",
"id": 5746,
"tags": "type-theory, monad, parametricity"
} |
How can I replicate AstroImageJ's pixel to RA/Dec algorithm in my own code? | Question: EDIT: solved, thanks to Eric Jensen's suggestion (in a comment on his answer) that I include the correction to my right ascension value as a function of my declination. All other comments were helpful but included information I'd already referenced; it was the correction to RA that I was missing.
As the title states, I'm trying to use the WCS data added to a FITS header by a plate solve to calculate RA and Dec at any given pixel for an image in decimal form. I am aware that programs like AstroImageJ can display this sort of info, but only in HHMSS/DDMMSS format and only by using the cursor. I'd like to automate this with my own code.
Problem summary:
I have a number of plate solved images of star fields with satellite streaks, and I'd like to automate the extraction of satellite RA/Dec from the images based on their positions within the FITS pixel space with a function that allows me to input FITS header data and pixel location to generate RA/Dec. I know that this is a solved problem because AstroImageJ can give me truth data from a plate solved image when I mouse over certain pixels, but I have so many satellite streaks to measure that using the cursor for RA/Dec readout for each point along them would be prohibitively time consuming.
What I've tried so far:
I have tried to apply the transformations from the FITS header data based on this Caltech guide, but the algorithm here does not match my data. In particular, the RA/Dec I calculate for a pixel's position using this algorithm is offset by up to an arcminute (in a ~1° FOV frame) from the RA/Dec I get using my cursor in AIJ to find the RA/Dec of the same spot, and I haven't managed to figure out why these transformations aren't working.
I know that these transformations assume a linear model and don't directly include distortion, so I included a correction for the SIP distortion but found that the distortions were sub-pixel in my images and couldn't account for the offset.
I then found this StackExchange post of a user with a similar problem and attempted to apply a correction for the gnomonic projection based on this algorithm, and while that seemed to work, I'm still off by a few arcminutes.
So, after using the FITS algorithm, SIP distortion correction, and correcting for the gnomonic projection, I've accounted for everything I can think of but I still can't match AstroImageJ's numbers for a plate solved image. Can anyone help me find the missing piece here?
I would also be open to suggestions on alternate methods to get satellite streak RA/Dec from plate solved images, in case there's a simpler approach I'm missing.
Answer: You don’t say what language your code is in, but there are Python functions to do this in the WCS module in astropy.
In particular, look at the pixel_to_world function.
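For reference, here is a minimal stdlib-only sketch of the standard TAN (gnomonic) deprojection, the step that introduces the cos(Dec) dependence of the RA offset mentioned in the question's edit. It assumes a plain CD-matrix WCS with no distortion terms; the function and parameter names are hypothetical, with values taken from the usual FITS header keywords:

```python
import math

def tan_pixel_to_radec(x, y, crpix1, crpix2, crval1, crval2, cd):
    """Convert pixel (x, y) to (RA, Dec) in degrees for a TAN (gnomonic)
    projection with no distortion. `cd` is the 2x2 CD matrix in deg/pixel
    ([[CD1_1, CD1_2], [CD2_1, CD2_2]]); CRPIX/CRVAL as in the FITS header."""
    # Linear part: intermediate world coordinates in the tangent plane.
    dx, dy = x - crpix1, y - crpix2
    xi = math.radians(cd[0][0] * dx + cd[0][1] * dy)
    eta = math.radians(cd[1][0] * dx + cd[1][1] * dy)
    # Gnomonic deprojection about the tangent point (CRVAL1, CRVAL2);
    # note the RA offset effectively scales as 1/cos(Dec).
    ra0, dec0 = math.radians(crval1), math.radians(crval2)
    d = math.cos(dec0) - eta * math.sin(dec0)
    ra = ra0 + math.atan2(xi, d)
    dec = math.atan2(math.sin(dec0) + eta * math.cos(dec0), math.hypot(xi, d))
    return math.degrees(ra) % 360.0, math.degrees(dec)
```

Comparing this against astropy's `pixel_to_world` on a real header is a good way to confirm which correction is missing.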
If you want to look at AstroImageJ’s implementation, the code is open source, here. The file WCS.java has the main world coordinate routines. | {
"domain": "astronomy.stackexchange",
"id": 6484,
"tags": "artificial-satellite, astrometry, algorithm, fits-header"
} |
Are Saturn's rings stable? | Question: Saturn's rings contain many moonlets that shape the rings of Saturn. The structures in the rings of Saturn around moonlets are similar to those in protoplanetary disks around newly formed planets, which makes me wonder if the material in the rings will follow a similar fate and form into larger bodies.
Will the material within the rings of Saturn eventually coalesce into moons, or are the rings of Saturn stable enough to last billions of years?
Answer: Most of Saturn's rings are inside its Roche limit, which means they will never clump together. Tidal forces prevent this from happening.
Small objects that are already intact can withstand the tidal forces, but a sufficiently large body inside a planet's Roche limit will be broken apart by them. That may be how the rings formed in the first place, or they may have formed by collision.
Saturn's rings are thought to be relatively young, if 100 million years qualifies as young; it does for objects in the Solar System. Being young doesn't tell us how long they will last. I think the 300-million-year estimate originally mentioned in the comments may be accurate enough, but until then the rings will remain, just growing smaller over time, and they'll never coalesce into moons.
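For a rough sense of scale (a back-of-the-envelope sketch with approximate numbers, not taken from the answer), the classical fluid Roche limit $d \approx 2.44\,R_p\,(\rho_p/\rho_s)^{1/3}$ for icy material lands near the outer edge of Saturn's main rings:

```python
# All numbers are approximate and assumed for this sketch only.
R_SATURN_KM = 60268      # Saturn's equatorial radius, km
RHO_SATURN = 687.0       # Saturn's mean density, kg/m^3
RHO_ICE = 900.0          # typical density of icy ring material, kg/m^3

# Fluid Roche limit: inside this distance, a fluid satellite is torn apart.
roche_km = 2.44 * R_SATURN_KM * (RHO_SATURN / RHO_ICE) ** (1 / 3)
print(f"Fluid Roche limit for ice: ~{roche_km:,.0f} km from Saturn's center")
```

With these inputs the limit comes out around 134,000 km, and the main rings extend to roughly 140,000 km, which is consistent with "most of the rings" lying inside it.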
As a footnote, Saturn's thinner outer rings or gossamer rings are outside the Roche limit, and those might, one day, form into moons, or they may be too thin for that to happen. A certain density is probably required for moon formation. | {
"domain": "astronomy.stackexchange",
"id": 4225,
"tags": "saturn-rings, moonlet"
} |
An augmented version of the optimization problem | Question: In my last question, I proposed the following problem:
(1) Given a finite dimensional composite system AB whose initial state is a product state of A and B so that $\rho_{AB}=\rho_A\otimes \rho_B$.
(2) Assuming AB undergoes a joint unitary operation $U_{AB}$ on AB, and the output of system A is given by
$O_A=\mathrm{Tr}_B (U_{AB}\,\rho_{AB}\,U_{AB}^{\dagger})$
Question:
What's the initial state $\rho_A$ that will result in an $O_A$ with a maximal Von Neumann entropy (for a given $\rho_B$ and $U_{AB}$)?
I was wondering if the maximally mixed state will always be a solution. Thanks to the help from Martin and Norbert Schuch, we now know this is not the case.
Norbert Schuch's example: A and B are qubits, $U_{AB}=|(00+11)/\sqrt{2}><00|+|(01+10)/\sqrt{2}><01|+|10><10|+|11><11|$, then we know that for $\rho_B=|1><1|$, the optimal $\rho_A=|0><0|$ but not $I/2$.
Now I would like to augment the problem as follows:
If the operation is repeated, which means we take the output of subsystem A, $O_A$, as the new input and iterate the operation till the system converge to a final output $O_{AF}$, then what's the optimal initial input $\rho_A$ that will lead to a $O_{AF}$ with the maximal entropy? Will the maximally mixed state always be a solution?
Note: There are cases that the iterative operation will not converge, but I'd like to believe this is relatively rare.
PS: It can be verified that the maximally mixed state of A is a solution of my new augmented version with the configuration of Norbert's example, since any input $\rho_A$ will finally converge to the same output $O_{AF}=|1><1|$.
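The counterexample in the answer below acts diagonally, so it can be sanity-checked with ordinary probability vectors; a minimal sketch (function names invented for this illustration):

```python
from math import log2

def entropy(p):
    """Shannon entropy in bits of a probability vector (zero terms skipped)."""
    return -sum(x * log2(x) for x in p if x > 0)

def channel(p):
    """The 4-level channel from the answer, restricted to diagonal states:
    populations of levels 0 and 1 merge into level 0; populations of
    levels 2 and 3 are evenly mixed between them."""
    return [p[0] + p[1], 0.0, (p[2] + p[3]) / 2, (p[2] + p[3]) / 2]

mixed_out = channel([0.25] * 4)       # output on the maximally mixed input
fixed_pt = [1 / 3, 0.0, 1 / 3, 1 / 3]  # the maximal-entropy fixed point
```

The maximally mixed input gives entropy 1.5 bits, strictly less than the log2(3) ≈ 1.585 bits of the fixed point, confirming the answer's conclusion.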
Answer: The maximally mixed state will not always lead to the output with the maximum entropy. Consider e.g.
a $4$-level system and a channel
$$
\mathcal E(\rho) = |0\rangle\langle0|\,(\langle0|\rho|0\rangle +\langle1|\rho|1\rangle) + \tfrac12(|2\rangle\langle2|+|3\rangle\langle3|)(\langle2|\rho|2\rangle+\langle3|\rho|3\rangle)
$$
On the maximally mixed state, this will give the output
$$
\tfrac12|0\rangle\langle0|+ \tfrac14(|2\rangle\langle2|+|3\rangle\langle3|)
$$
while the fixed point with maximal entropy is
$$
\tfrac13(|0\rangle\langle0|+ |2\rangle\langle2|+|3\rangle\langle3|)
$$
(which obviously can be reached with itself as an input). | {
"domain": "physics.stackexchange",
"id": 28942,
"tags": "quantum-mechanics, quantum-information"
} |
Split redirecting to multiple files in bash | Question: I want a function mycommand which runs command and:
Gives me three log-files which are:
*.stdout.log: everything from stdout
*.stderr.log: everything from stderr
*.full.log: everything (i.e., both stderr and stdout)
Prints all output to screen
The idea is that I can quickly see through the errors, skim for additional info in stdout. But in some cases both (stdout and stderr) messages should be seen as one as it gives context, thus I want a third file to do that. For real-time overview I obviously want to see it on my screen too.
This works but it is rather clumsy and doesn't look nice. I'm unsure whether this is the right way to do it or I should improve the code.
The snippet is from my *.*shrc.
mycommand () { command "$@" \
> >(tee command_"$@"_$(date +%F_%T).stdout.log command_"$@"_$(date +%F_%T).full.log) \
2> >(tee command_"$@"_$(date +%F).stderr.log command_"$@"_$(date +%F_%T).full.log >&2); }
Answer: There are a number of things here which concern me.
First up, the stderr log file does not have the time on the file name (missing _%T). This is a classic copy-paste+partial-fix issue, you copied the same code to multiple places, then needed to fix it, but you only fixed some of them.
The solution to that is to extract the code to just one place, and reuse that:
local basename=command_"$@"_$(date +%F_%T)
tee ${basename}.stderr.log
Now, basename is reusable, and if you want to change all three (stderr, stdout, full), then you can change them in just one place.
Your code does not handle complex commands gracefully. What if the command is ls -laR /etc .... how will it save away to log files called:
command_ls -laR /etc_2015-09-22_10:25:30.stdout.log
You need to rationalize that. I started with the spaces and slashes first... using a bash regex/replace for [ \/] to be replaced with an _ underscore.
local command="$@"
local base=command_${command//[ \/]/_}_$(date +%F_%T)
Next up, I am concerned that you have two tee processes both writing to the same "full" file. The one that starts second will overwrite what the first one started with, but I suspect that they will subsequently "merge" the results. It would be better to start a clean file, and then have both tee processes append to the existing file.
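A simplified, sequential stand-in for that issue (file names hypothetical; the real function runs the two tee processes concurrently, which makes the clobbering less predictable but no less real):

```shell
# Two tee invocations that truncate the same file lose earlier data,
# while `tee -a` (append mode) preserves it.
tmp=$(mktemp -d)

printf 'AAA\n' | tee "$tmp/full.log" > /dev/null    # opens full.log with truncation
printf 'BBB\n' | tee "$tmp/full.log" > /dev/null    # truncates again: AAA is gone
out_trunc=$(cat "$tmp/full.log")                    # contains only BBB

printf 'AAA\n' | tee -a "$tmp/full2.log" > /dev/null  # -a: open for appending
printf 'BBB\n' | tee -a "$tmp/full2.log" > /dev/null  # earlier content survives
out_append=$(cat "$tmp/full2.log")                    # contains AAA then BBB

rm -rf "$tmp"
```

This is why the rewrite below empties the "both" file once up front and then has every tee append with `-a`.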
I messed around with your function, and came up with:
threepipe () {
local command="$@"
local base=rcommand_${command//[ \/]/_}_$(date +%F_%T)
local both=$base.both.log
rm -f $both
touch $both
$command 2> >(tee -a $both | tee "$base.stderr.log" >&2) | tee -a $both | tee "$base.stdout.log"
}
Note that only the stderr needs to be handled by the anonymous fifo. The redirect back to stderr keeps it out of the way of the rest of the stdout. Also, note the use of the -a flag for tee. | {
"domain": "codereview.stackexchange",
"id": 15903,
"tags": "bash, linux, shell, unix, sh"
} |
Square-tree using maps and recursion | Question:
Define a procedure square-tree analogous to the
square-list procedure of exercise
2.21. That is, square-tree should behave as follows:
(square-tree (list 1
                   (list 2 (list 3 4) 5)
                   (list 6 7)))
(1 (4 (9 16) 25) (36 49))
Define square-tree both directly
(i.e., without using any higher-order
procedures) and also by using map and
recursion.
I wrote this solution. What do you think?
(define (square x) (* x x))
(define (square-tree tree)
(cond ((null? tree) null)
((pair? tree)
(cons (square-tree (car tree))
(square-tree (cdr tree))))
(else (square tree))))
(define (map-square-tree tree)
(map (lambda (subtree)
(if (pair? subtree)
(cons (square-tree (car subtree))
(square-tree (cdr subtree)))
(square subtree)))
tree))
(define a (list 1 1 (list (list 2 3) 1 2)))
EDIT: This is a much better solution for map-square-tree.
(define (square x) (* x x))
(define (square-tree tree)
(cond ((null? tree) null)
((pair? tree)
(cons (square-tree (car tree))
(square-tree (cdr tree))))
(else (square tree))))
(define (map-square-tree tree)
(map (lambda (subtree)
((if (pair? subtree) map-square-tree square) subtree))
tree))
(define a (list 1 1 (list (list 2 3) 1 2)))
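For cross-checking the expected behavior, here is a rough transliteration of the two approaches into Python, treating nested lists as trees (this translation is my own addition, not part of the exercise):

```python
def square_tree(tree):
    """Direct recursion: recurse into subtrees, square the leaves."""
    if isinstance(tree, list):
        return [square_tree(sub) for sub in tree]
    return tree * tree

def map_square_tree(tree):
    """Map over the immediate subtrees, dispatching on whether each
    element is itself a tree (a pair, in the Scheme version)."""
    return list(map(lambda sub: map_square_tree(sub) if isinstance(sub, list)
                    else sub * sub, tree))
```

Both reproduce the book's example: the tree (1 (2 (3 4) 5) (6 7)) maps to (1 (4 (9 16) 25) (36 49)).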
Answer: Your direct definition of square-tree is correct.
Your definition using map calls square-tree; to make it properly recursive, call map-square-tree instead. Further, you may recurse on the subtree itself. This will make your code succinct.
(define (map-square-tree tree)
(map (lambda (subtree)
((if (pair? subtree) map-square-tree square) subtree))
tree)) | {
"domain": "codereview.stackexchange",
"id": 239,
"tags": "recursion, lisp, scheme, sicp"
} |
Why do we consider log-space as a model of efficient computation (instead of polylog-space) ? | Question: This might be a subjective question rather than one with a concrete answer, but anyway.
In complexity theory we study the notion of efficient computation. There are classes like $\mathsf{P}$, which stands for polynomial time, and $\mathsf{L}$, which stands for log space. Both are regarded as capturing a kind of "efficiency", and they capture the difficulty of some problems pretty well.
But there is a difference between $\mathsf{P}$ and $\mathsf{L}$: while polynomial time, $\mathsf{P}$, is defined as the union over all constants $k$ of the problems solvable in $O(n^k)$ time, that is,
$\mathsf{P} = \bigcup_{k \geq 0} \mathsf{TIME[n^k]}$,
the log space, $\mathsf{L}$, is defined as $\mathsf{SPACE[\log n]}$. If we mimic the definition of $\mathsf{P}$, it becomes
$\mathsf{PolyL} = \bigcup_{k \geq 0} \mathsf{SPACE[\log^k n]}$,
where $\mathsf{PolyL}$ is called the class of polylog space.
My question is:
Why do we use log space as the notion of efficient computation, instead of polylog space?
One main issue may be about complete problems. Under logspace many-one reductions, both $\mathsf{P}$ and $\mathsf{L}$ have complete problems. In contrast, if $\mathsf{PolyL}$ had complete problems under such reductions, we would contradict the space hierarchy theorem. But what if we moved to polylog-space reductions?
Can we avoid such problems? In general, if we try our best to fit $\mathsf{PolyL}$ into the notion of efficiency, and (if needed) modify some of the definitions to get all the good properties a "nice" class should have, how far can we go?
Is there any theoretical and/or practical reasons for using log space instead of polylog space?
Answer: The smallest class containing linear time and closed under subroutines is P. The smallest class containing log space and closed under subroutines is still log space. So P and L are the smallest robust classes for time and space respectively which is why they feel right for modeling efficient computation. | {
"domain": "cstheory.stackexchange",
"id": 444,
"tags": "cc.complexity-theory, space-bounded, big-picture"
} |
Segmentation of a PointCloud to find a specific object (a cup) pcl | Question:
Hi!
In my task I need to detect the pose of an object (a cup in my case) because I have to grasp the cup with a robot.
I'm trying to capture the point cloud of the scene with ROS and a Kinect.
I thought to segment the point cloud that represents the scene, because I want to keep only the points belonging to the cup.
I have implemented a node in ROS that has a subscriber to receive the sensor_msgs/PointCloud2; then I transform the PointCloud2 into a pcl::PointXYZ cloud.
And this is the code to the segmentation part:
pcl::ModelCoefficients::Ptr coefficients (new pcl::ModelCoefficients);
pcl::PointIndices::Ptr inliers (new pcl::PointIndices);
// Create the segmentation object
pcl::SACSegmentation<pcl::PointXYZ> seg;
// Optional
seg.setOptimizeCoefficients (true);
// Mandatory
seg.setModelType (pcl::SACMODEL_CYLINDER);
seg.setMethodType (pcl::SAC_RANSAC);
seg.setDistanceThreshold (0.01);
seg.setInputCloud (cloud);
seg.segment (*inliers, *coefficients);
It seems to work with SACMODEL_PLANE, but with SACMODEL_CYLINDER this is the error that appears:
[pcl::SACSegmentation::initSACModel] No valid model given!
[pcl::SACSegmentation::segment] Error initializing the SAC model!
Could not estimate a planar model for the given dataset.
I found the various model here http://docs.pointclouds.org/1.7.0/group__sample__consensus.html
Help me please!
P.S. And tell me if I'm trying to do the correct things for my task...
Thank you!
Originally posted by lukeb88 on ROS Answers with karma: 33 on 2014-06-09
Post score: 1
Answer:
I suggest to follow this tutorial http://pointclouds.org/documentation/tutorials/cylinder_segmentation.php#cylinder-segmentation.
They use pcl::SACSegmentationFromNormals instead of pcl::SACSegmentation.
For me it works!
Originally posted by rastaxe with karma: 620 on 2014-06-10
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 18209,
"tags": "kinect, pcl"
} |
Proof of existence of lowest temperature $0 K$ | Question: In mathematics there is the concept of infinity, meaning that whenever you pick a number and say that it is the smallest/largest, there is a way to further reduce/increase it by subtracting/adding another number.
But in physics or chemistry, I see that the absolute temperature does not have a negative reading and the lowest temperature is $0 K$.
What is the evidence or logic behind the claim that temperatures below zero cannot exist?
Answer: In physics, temperature and the other concepts of "thermodynamics" (which was known for centuries from macroscopic analyses of heat engines and similar systems) are explained by a more fundamental theory, so-called "statistical mechanics". According to statistical mechanics, thermal phenomena are explained by the motion of the atoms and the various states in which the atoms may be found (and the number of these states).
In particular, the probability $p_k$ of a state $k$ (in classical physics, the state is described e.g. by the location and velocity of each particle) is given by
$$ p_k = C \exp(-E_k/k_BT) $$
where $E_k$ is the energy of the state $k$, $k_B$ is Boltzmann's constant converting kelvins to joules, and $T$ is the absolute temperature in kelvins. The coefficient $C$ is a "normalization factor" that is $k$-independent and chosen so that the sum of $p_k$ over $k$ is equal to one (the total probability).
This form makes it clear that $T\lt 0$ isn't allowed: the exponential would be growing with $E_k$ and because there are infinitely many states with ever larger values of $E_k$ (the kinetic energy may grow arbitrarily high, in particular), the probabilities would be getting larger and their sum would diverge: it couldn't be normalized to one.
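A toy numerical illustration of that divergence, with equally spaced levels $E_k = k$ and units in which $k_B = 1$ (assumptions made only for this sketch):

```python
import math

def partition_partial_sums(T, n_levels=60):
    """Partial sums of sum_k exp(-E_k / T) for levels E_k = k, k = 0..n_levels-1.
    A toy model: for T > 0 this is a convergent geometric series; for T < 0
    the Boltzmann weights grow with energy and the sum blows up."""
    total, sums = 0.0, []
    for k in range(n_levels):
        total += math.exp(-k / T)
        sums.append(total)
    return sums

pos = partition_partial_sums(2.0)    # T > 0: approaches 1 / (1 - e^{-1/2})
neg = partition_partial_sums(-2.0)   # T < 0: partial sums grow without bound
```

With a positive temperature the partial sums settle down to the geometric-series limit; with a negative one each new level contributes more than the last, so no normalization factor $C$ can make the probabilities sum to one.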
Before this statistical explanation involving Boltzmann's constant was known, the temperature was a phenomenological quantity measured by a thermometer. One was actually uncertain about any redefinition $T\to f(T)$ where $f(T)$ is a monotonically increasing function. In principle, one may relabel $T$ so that zero kelvins gets mapped to $-\infty$ in another convention for the temperature, for example; try $T_\text{new convention} = \ln (T)$. However, the ideal gases obeyed $pV = nRT$ so at a fixed pressure, the volume of some gas was proportional to the absolute temperature – the same one as one in statistical mechanics, without any redefinition by a function $f$.
So people knew how to measure the "right absolute temperature" even well before statistical mechanics was understood. The usual thermometers relied on the expansion of liquids etc. which are not ideal gases but they're close enough. For ideal gases, where the absolute temperature is proportional to the volume, the statement that $T\gt 0$ is equivalent to the statement that the volume of the ideal gas cannot be negative. You cool it down and it shrinks but it can't shrink below zero.
Volume is about the "shape" but the underlying reason for the positivity of temperature isn't about locations; it is about the motion. Any physical object with quadratic degrees of freedom will carry $k_BT/2$ of kinetic energy per degree of freedom. Again, because the quadratic kinetic energy of the type $mv_x^2/2$ can't be negative, the absolute temperature can't be negative, either.
In lasers and similar devices, one may formally find negative absolute temperatures when the number of atoms at a higher energy level is greater than the number of atoms at a lower energy level. However, this negative temperature can't be brought to equilibrium with all degrees of freedom in a larger object because the number of high-energy states is always divergent. In lasers, one kind of abuses the fact that the energy of the "interesting degrees of freedom" is bounded both from below and from above (we only allow two or a few levels for each atom). | {
"domain": "physics.stackexchange",
"id": 3674,
"tags": "thermodynamics, temperature, solid-state-physics"
} |
What is the relationship between robustness and adversarial machine learning? | Question: I have been reading a lot of articles on adversarial machine learning and there are mentions of "best practices for robust machine learning".
A specific example of this would be when there are references to "loss of efficient robust estimation in high dimensions of data" in articles related to adversarial machine learning. Also, IBM has a Github repository named "IBM's Adversarial Robustness Toolbox".
Additionally, there is a field of statistics called 'robust statistics' but there is no clear explanation anywhere about its relation to adversarial machine learning.
I would therefore be grateful if someone could explain what robustness is in the context of Adversarial Machine Learning.
Answer: A robust ML model is one that captures patterns that generalize well in the face of the kinds of small changes that humans expect to see in the real world.
A robust model is one that generalizes well from a training set to a test or validation set, but the term also gets used to refer to models that generalize well to, e.g. changes in the lighting of a photograph, the rotation of objects, or the introduction of small amounts of random noise.
Adversarial machine learning is the process of finding examples that break an otherwise reasonable looking model. A simple example of this is that if I give you a dataset of cat and dog photos, in which cats are always wearing bright red bow ties, your model may learn to associate bow ties with cats. If I then give it a picture of a dog with a bow tie, your model may label it as a cat. Adversarial machine learning also often includes the ability to identify specific pieces of noise that can be added to inputs to confound a model.
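To make the "specific pieces of noise" concrete, here is a minimal sketch of a gradient-sign (FGSM-style) attack on a hypothetical linear classifier; the weights, input, and step size are all made up for illustration:

```python
import math

# A hypothetical linear classifier: score = w . x, label = sign(score).
w = [2.0, -1.0, 0.5]          # "learned" weights (illustrative)
x = [1.0, 0.5, -0.2]          # an input the model classifies correctly
y = 1                         # true label (+1 or -1)

def loss(x):
    # logistic loss of the linear model on (x, y)
    return math.log(1 + math.exp(-y * sum(wi * xi for wi, xi in zip(w, x))))

# Gradient of the loss with respect to the INPUT (not the weights)
margin = y * sum(wi * xi for wi, xi in zip(w, x))
g = [-y * wi / (1 + math.exp(margin)) for wi in w]

# FGSM-style perturbation: a small step along the sign of that gradient
eps = 0.5
x_adv = [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, g)]

print(loss(x), loss(x_adv))   # the small structured "noise" raises the loss
```

The point of the sketch is that the perturbation is not random: it is chosen by the adversary using the model's own gradients, which is why tiny changes can be so effective.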
Therefore, if a model is robust, it basically means that it is difficult to find adversarial examples for the model. Usually this is because the model has learned some desirable correlations (e.g. cats have a different muzzle shape than dogs), rather than undesirable ones (cats have bow ties; pictures containing cats are 0.025% more blue than those containing dogs; dog pictures have humans in them more often; etc.).
Approaches like GANs try to directly exploit this idea, by training the model on both true data and data designed by an adversary to resemble the true data. In this sense, GANs are an attempt to create a robust discriminator. | {
"domain": "ai.stackexchange",
"id": 1547,
"tags": "neural-networks, machine-learning, ai-design, ai-safety, adversarial-ml"
} |
Relation between First Law of Thermodynamics and Ideal Gas Law | Question: Thermodynamics has always been a tough thing for me. There are lots of assumptions in this subject (those assumptions, I know, are necessary; the science of thermodynamics is a very practical one).
First Law of Thermodynamics states mathematically:
$$\Delta U=Q+W$$
(with proper sign conventions must be used). This is just a law of conservation of energy and a very straightforward equation, but when we come to chemical thermodynamics this equation changes its form and becomes:
$$\Delta U=Q+p\,\Delta V$$
My intuition says that as soon as pressure and volume come into any equation, it becomes specific to gases. So, my first question is:
Why are thermodynamic equations just for gases?
Let's imagine an isothermal expansion of a gas (that simple piston-and-gas experiment) under a constant pressure; now the work $W$ is
$$W=p\,\Delta V$$
but if use ideal gas Law equation i.e.
$$pV=nRT$$
$$p\,\Delta V = RT\,\Delta n + nR\,\Delta T\tag1$$
since the expansion is isothermal therefore $\Delta T = 0$ and I can think that during expansion no atom or molecule has been annihilated therefore $\Delta n = 0$, so after all we get
$$p\,\Delta V = 0$$
$$W=0$$
I want to know my mistakes in above consideration.
There is a question in my book:
A swimmer coming out of a pool is covered with a film of water weighing $18\ \mathrm g$. How much heat must be supplied to evaporate this water at $298\ \mathrm K$? Calculate the internal energy change of vaporization at $100\ \mathrm{^\circ C}$. $\Delta_\mathrm{vap}H^\circ = 40.66\ \mathrm{kJ\ mol^{-1}}$ for water at $373\ \mathrm K$
My book give its solution like this
$$\ce{H2O(l) -> H2O(g)}$$
Amount of substance of $18\ \mathrm g$ of $\ce{H2O(l)}$ is just $1\ \mathrm{mol}$. Since $\Delta U=Q-p\,\Delta V$, therefore,
$$\Delta U=\Delta H-p\,\Delta V$$
$$\Delta U=\Delta H-\Delta nRT$$
$$\Delta U=40.66 \times 10^3\ \mathrm{J\ mol^{-1}}-1\ \mathrm{mol}\times8.314\ \mathrm{J\ K^{-1}mol^{-1}}\times373\ \mathrm K$$
$$ \Delta U=37.56\ \mathrm{kJ\ mol^{-1}}$$
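As a quick check of the book's arithmetic (the numbers below are the ones from the solution):

```python
R = 8.314        # J K^-1 mol^-1, gas constant
T = 373.0        # K, boiling point used by the book
dH = 40.66e3     # J mol^-1, enthalpy of vaporization at 373 K
dn = 1.0         # mol of gas formed per mol of liquid vaporized

dU = dH - dn * R * T        # Delta U = Delta H - Delta(n) R T
print(dU / 1e3)             # ≈ 37.56 kJ/mol, matching the book
```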
I have a lot of problems with this solution which goes directly to the foundations of science of thermodynamics. (I must say it's because of these books that science becomes a rotten subject, these books destroy the real essence of science).
How is $\Delta n=1\ \mathrm{mol}$?
Why temperature is taken as $373\ \mathrm K$ and not $298\ \mathrm K$, since the process starts at $298\ \mathrm K$ we should use it?
At $373\ \mathrm K$ the process becomes an isothermal one (latent heat) so $\Delta U$ ought to be zero, if we think the process of vaporization starts from $373\ \mathrm K$.
Any help will be much appreciated. Thank you.
Answer:
Why are thermodynamic equations just for gases?
They are not. The equation $\Delta U = q + P\Delta V$ applies to any phase (gas, liquid, solid...) when only pV work is done. In the particular form of the equation you present, the pressure is additionally assumed constant while the work is done.
Gases are (1) an easy way to introduce thermodynamics concepts because they allow for a simplified analysis and may exhibit dramatic behavior, and (2) they provide a link to understanding the behavior of other phases. They also happen to be inherently important for practical and historical reasons during the development of the science.
I want to know my mistakes in above consideration.
There are no mistakes. Consider the ideal gas law $$V=\frac{nRT}{p}$$ If you assume that $p$, $n$ and $T$ are constant, then the dependent variable $V$ will also be constant.
What's probably causing the confusion is that the water is undergoing a phase change. We assume that the liquid does no work, that the change in volume is only due to the formation of vapor. In practice $\pu{\Delta n = +1 mol}$ for the gas, $\pu{\Delta n = -1 mol}$ for the liquid, and $\pu{\Delta n = 0}$ for all of the water. We ignore the work done when reducing the amount of liquid because it is small compared to that done when the gas is formed (the change in volume of gas is much greater).
How is Δn=1 ?
See the answer to the previous question. You are converting $\pu{18 g}$ of liquid water into vapor. Since the molecular weight of water is $\pu{18 g/mol}$, you are converting $\pu{1 mol}$ of water.
Why temperature is taken as 373 K and not 298 K, since the process starts at 298 K we should use it?
I agree; the work is actually performed at a temperature lower than $\pu{373 K}$, so this is an estimate. It is assumed that water is "boiling off" the skin at $\pu{373 K}$ and $\pu{1 atm}$ vapor pressure (the boiling point of water at $\pu{1 atm}$ of pressure is $\pu{373 K}$). Maybe not an accurate portrayal of what is going on, but it gets you to practice the theory.
I have lot of problems with this solution which goes directly to the foundations of science of thermodynamics. (I must say it's because of these books that science becomes a rotten subject, these books destroy the real essence of science).
The problems more generally are that (1) we have an intuition about the way the world works, based on everyday observations, and this intuition sometimes misinforms us; some of the effort of education is to develop a more accurate intuition; and (2) when teaching science, practice problems are sometimes abstract and don't reflect real life situations except approximately. For instance, when you dry yourself after swimming, you are far from a thermodynamic equilibrium, with air currents, sun heating your skin, and a low water vapor pressure all playing a potential role. Modeling all this is beyond the scope of an introductory course. Probably the most important approximations here (assuming a closed system free of mass flow) are that the enthalpy of vaporization is constant over a broad temperature range and/or (as you rightly pointed out) that the water evaporates at $\pu{373 K}$.
"domain": "chemistry.stackexchange",
"id": 12504,
"tags": "thermodynamics, gas-laws"
} |
Jahn Teller Effect on Metal Complexes | Question: I've been learning how the Jahn-Teller distortion affects the orbitals in metal complexes and how the splitting of the $e_g$ and $t_{2g}$ orbitals happens.
But the book mentions that the effect is strong for electron configurations where the $e_g$ orbital has 1 or 3 electrons. I can't figure out why that's the case. Why isn't the effect strong with 2 electrons in the $e_g$ orbital as well?
Answer: If the orbitals of the metal core are distorted, the amount of splitting is not large enough to overcome the pairing energy, so the 2 electrons will go into the two separate $e_g$ orbitals. Let the splitting energy due to this distortion be $\beta_1$. The electron in the lower orbital has an energy loss of $\beta_1/2$ and the electron in the higher orbital has an energy gain of $\beta_1/2$, so the net change in energy is 0, which does not give any additional stability to the complex (as there is no release of energy).
Hence the distortion does not have a strong effect.
"domain": "chemistry.stackexchange",
"id": 14482,
"tags": "inorganic-chemistry, coordination-compounds, molecular-orbital-theory, orbitals"
} |
What is Relative and Absolute? | Question: I was studying relative motion, and randomly thought about what actually makes a term relative or absolute. So far, my school book says that quantities that depend on a reference frame are called relative terms, but I found this vague, and it didn't feed my curiosity.
So, I searched the internet about this and found an article which described relative and absolute with some graphs (a space-time graph, I guess; I don't remember). I don't know if it's correct or not.
I am unable to find the article again, so I thought of asking it here, as this will also benefit others.
Answer: The Basic Meaning of Relative and Absolute
A relative concept or a quantity is that which is defined in relation to something else -- in such a way that a meaningful description of the concept or the quantity necessarily involves a reference to the something else in relation to which it has been defined.
For example, if I told you that I have twice as many teeth as my grandma has then I have given you a relative description of the cardinality of my teeth. In other words, to meaningfully know how many teeth I have, you need to refer to how many teeth my grandma has. However, if I tell you that I have $32$ teeth then it is an absolute description of the cardinality of my teeth because it meaningfully defined in and of itself and does not need to refer to something else.
There are various slightly different meanings and contexts in which the words relative or absolute might be used in physics but the basic impulse behind the idea remains the same. I will give a couple of examples, one of which has already been mentioned in other answers:
In a basic sense, all descriptions of quantities that have dimensionful units are necessarily relative. In particular, they convey meaningful information only in relation to their units! For example, when I tell you that my bag weighs $20 \ \mathrm{kg}$, I am telling you that its inertia is $20$ times as much as that of the famous cylinder in a bunker somewhere in France.
A more obvious example of a relative quantity is the specific gravity of a substance. It is defined as the ratio between the density of the said substance and the density of water. As you can see, this is a relative quantity because it is defined explicitly in relation to the density of water. Notice that specific gravity is unitless but it is still a relative quantity because of the way it is defined (in other words, just because something is a pure number does not mean that it is absolute).
Sometimes Relative Quantities Can be Promoted to the Status of Absolute
As I discussed, a relative quantity is a quantity that is described in relation to or in reference to some other quantity. However, what if there is a unique/preferred/natural/correct/universal/obvious choice for the reference? In such a case, it might be natural to consider that whenever we speak of the relative quantity, it is understood that it is being spoken of in reference to this particular unique/preferred/natural/correct/universal/obvious reference. In this case, the relative quantity can be treated as an absolute quantity because the reference of its definition is invariable and universal. This is the reason we don't normally speak of mass being a relative quantity because the unit of mass is fixed to be $1\ \mathrm{kg}$ since the SI treaty.
The Story of Relative and Absolute Motion
So, what kind of a concept is motion? Well, in an obvious and basic sense, motion is a manifestly relative concept. Motion is defined as the phenomenon of an object changing its position over time with respect to a frame of reference. This is true both in Newtonian physics and in relativistic physics. This is obvious upon reflection because the only way to even speak of "change in position" is when you have a reference frame w.r.t. which the "change" happens. So, this is the basic definitional/conceptual sense in which all motion is trivially relative.
However! What if there were to be a uniquely preferred frame of reference? Well, in that case, it would make sense to say that we can define absolute motion as motion w.r.t. that special frame. And that is exactly what happened in Newtonian mechanics. Newton conceived of an absolute space:
"Absolute space, in its own nature, without regard to anything external, remains always similar and immovable. Relative space is some movable dimension or measure of the absolute spaces; which our senses determine by its position to bodies: and which is vulgarly taken for immovable space ... Absolute motion is the translation of a body from one absolute place into another: and relative motion, the translation from one relative place into another ..."
This kind of immovable background of absolute space is very intuitive to all of us, I would presume. We kind of associate the empty space we see in the pictures or imaginations of outer space as this absolute space in which all the motion of all the planets happens. It just seems true that that is the reference frame with respect to which we ought to define true motion. And that's exactly what Newton did. He defined absolute motion as the motion that happens w.r.t. the frame of reference associated with this absolute space.
It should be noted that the actual equations of Newtonian mechanics were such that one cannot do any experiment to detect whether an object is actually moving or at rest w.r.t. this absolute space. Why? Because all the equations of motion were invariant under Galilean transformation among inertial frames. Thus, Newton's own theory predicts that an experiment that is done in the absolute rest frame, i.e., in the frame of reference that is at rest w.r.t. the absolute space and an experiment that is done in an inertial frame moving at a constant velocity w.r.t. this absolute space would both give the exact same outcome. So, all one could actually observe was relative motion. It should be noted that the more technical and deeper reason as to why Newtonian mechanics kind of needs absolute space, at a theoretical level, is to define inertial frames! Because all one can say otherwise is that an inertial frame moves at a constant velocity w.r.t. another inertial frame and we can only experimentally detect which frame is inertial and which frame is not (using Newton's first law). But, theoretically, one cannot explain why this set of frames all moving with mutual constant velocities is inertial while another set of frames, also all moving with mutual constant velocities but accelerated w.r.t. the first set, is not inertial. Absolute space solves this issue, at a theoretical level, in Newtonian mechanics.
Without going into the details of the very interesting developments in Maxwell's theory, suffice it to say that with Einstein's theories of special and general relativity, we now understand that physics does not need the notion of the absolute space (as Laplace said of God). In fact, all inertial frames are completely equivalent and thus, all motion is truly relative. Notice that this was the case all along, as I mentioned, even in Newtonian mechanics, there was no way to actually find out the supposedly preferred frame of the absolute space. Moreover, now, we don't even need the absolute space at a theoretical level to define inertial frames because, from general relativity, we know that it is gravity that defines inertial frames -- the freely falling frame is the inertial frame. And so, the verdict is that motion is indeed relative.
Remarks
While there is no absolute space, there are often some more natural frames of reference to work with than others. For example, in cosmology, there is a preferred frame of coordinates called the comoving coordinates -- and one speaks of the age of the universe, one is referring to the age of the universe in this specific system of coordinates. There is also a preferred frame of reference when one does special relativity on compact spaces. This allows one to define "absolute motion" as motion w.r.t. this naturally preferred coordinate system. | {
"domain": "physics.stackexchange",
"id": 77098,
"tags": "reference-frames, terminology, definition, relative-motion"
} |
Amplitude at distance from source | Question: So, there is a sound at $S$, whose intensity $I$ obeys the inverse square law ($I \sim \frac{1}{x^2}$). At point $P$, at a distance $r$ from $S$, the air molecules oscillate with an amplitude of $8μm$. Point $Q$ is at a distance of $2r$ from $S$. What is the amplitude of the air molecules at $Q$? What is the relationship between amplitude and distance?
Answer: Yes, Electro, the intensity $I$ (and energy density $T_{00}$ and similar quantities) is proportional to the squared amplitude,
$$ I \sim A^2 $$
Because the intensity must go as
$$ I \sim \frac{1}{r^2} $$
as the energy gets spread over the sphere of area $4\pi r^2$, it follows that
$$ A \sim \frac{1}{r}.$$
See e.g. the $1/r$ factor in this formula:
http://en.wikipedia.org/wiki/Dipole_antenna#Elementary_doublet | {
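A quick numeric sketch of the question's numbers (the $8\,\mathrm{\mu m}$ amplitude at distance $r$ is taken from the problem statement):

```python
import math

A_P = 8e-6      # amplitude at distance r, in metres (given)

# Intensity obeys the inverse-square law, and I ~ A^2, so A ~ 1/r.
# Doubling the distance therefore halves the amplitude:
ratio_I = (1 / 2.0) ** 2          # I(2r) / I(r) = 1/4
A_Q = A_P * math.sqrt(ratio_I)    # amplitude scales as sqrt(intensity)
print(A_Q)                        # 4e-06 m, i.e. 4 micrometres at Q
```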
"domain": "physics.stackexchange",
"id": 767,
"tags": "classical-mechanics, waves"
} |
What does a Umlaut (double dot) above an angle mean? | Question: I'm reading a paper on double pendulums and there is an equation of motion that contains a double dot (Umlaut) above an angle. What does this mean / is this a standard notation in equations of motion?
Answer: It means the second time derivative.
In other words, $$\ddot\theta=\frac{d^2\theta}{dt^2}$$ which represents the angular acceleration of an object (which is a pendulum bob in your example).
These, and indeed first time derivatives (and higher orders still), are very common in physics (and in engineering and many other subjects), since we are often thinking about instantaneous rates of change of quantities. For example, the instantaneous rate of change of an object's position $x$ is called its instantaneous velocity $v$ where $$v=\dot x=\frac{dx}{dt}$$ and its acceleration is the rate of change of this quantity, or $$a=\ddot x=\frac{dv}{dt}=\frac{d^2x}{dt^2}$$
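As a small numerical illustration (the constants $x_0$, $v_0$, $a$ below are made up), a central finite difference recovers exactly this second time derivative for constant-acceleration motion:

```python
# For x(t) = x0 + v0*t + 0.5*a*t^2, the second time derivative is a.
def x(t, x0=1.0, v0=2.0, a=3.0):   # illustrative constants
    return x0 + v0 * t + 0.5 * a * t * t

def second_derivative(f, t, h=1e-3):
    # central difference approximation of d^2 f / dt^2
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

print(second_derivative(x, t=5.0))  # ≈ 3.0, the acceleration a
```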
"domain": "physics.stackexchange",
"id": 83779,
"tags": "classical-mechanics, acceleration, differentiation, notation"
} |
Why does REINFORCE work at all? | Question: Here's a screenshot of the popular policy-gradient algorithm from Sutton and Barto's book -
I understand the mathematical derivation of the update rule - but I'm not able to build intuition as to why this algorithm should work in the first place. What really bothers me is that we start off with an incorrect policy (i.e. we don't know the parameters $\theta$ yet), and we use this policy to generate episodes and do consequent updates.
Why should REINFORCE work at all? After all, the episode it uses for the gradient update is generated using the policy that is parametrized by parameters $\theta$ which are yet to be updated (the episode isn't generated using the optimal policy - there's no way we can do that).
I hope that my concern is clear and I request y'all to provide some intuition as to why this works! I suspect that, somehow, even though we are sampling an episode from the wrong policy, we get closer to the right one after each update (monotonic improvement). Alternatively, we could be going closer to the optimal policy (optimal set of parameters $\theta$) on average.
So, what's really going on here?
Answer: The key to REINFORCE working is the way the parameters are shifted towards $G \nabla \log \pi(a|s, \theta)$.
Note that $ \nabla \log \pi(a|s, \theta) = \frac{ \nabla \pi(a|s, \theta)}{\pi(a|s, \theta)}$. This makes the update quite intuitive - the numerator shifts the parameters in the direction that gives the highest increase in probability that the action will be repeated, given the state, proportional to the returns - this is easy to see because it is essentially a gradient ascent step. The denominator controls for actions that would have an advantage over other actions because they would be chosen more frequently, by inversely scaling with respect to the probability of the action being taken; imagine if there had been high rewards but the action at time $t$ has low probability of being selected (e.g. 0.1) then this will multiply the returns by 10 leading to a larger update step in the direction that would increase the probability of this action being selected the most (which is what the numerator controls for, as mentioned).
That is the intuition -- to see why it does work, think about what we've done. We defined an objective function, $v_\pi(s)$, that we are interested in maximising with respect to our parameters $\theta$. We find the derivative of this objective with respect to our parameters, and then we perform gradient ascent on our parameters to maximise our objective, i.e. to maximise $v_\pi(s)$; thus if we keep performing gradient ascent then our policy parameters will converge (eventually) to values that maximise $v$, and our policy will be optimal.
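To see the update converge in practice, here is a minimal REINFORCE sketch on a hypothetical two-armed bandit (a toy stand-in for the episodic setting in the book; the arm payoffs, step size, and episode count are all illustrative):

```python
import math, random

random.seed(0)

def softmax(theta):
    m = max(theta)
    e = [math.exp(t - m) for t in theta]
    s = sum(e)
    return [v / s for v in e]

# Toy two-armed bandit: arm 0 always pays 1, arm 1 always pays 0.
rewards = [1.0, 0.0]
theta = [0.0, 0.0]       # policy parameters
alpha = 0.1              # step size

for _ in range(500):
    pi = softmax(theta)
    a = 0 if random.random() < pi[0] else 1   # sample from the CURRENT policy
    G = rewards[a]                             # return of this one-step "episode"
    # For a softmax policy, grad log pi(a) = onehot(a) - pi
    for i in range(2):
        grad_log = (1.0 if i == a else 0.0) - pi[i]
        theta[i] += alpha * G * grad_log       # the REINFORCE update

print(softmax(theta))    # probability of the better arm approaches 1
```

Note that the episodes are always generated by the not-yet-optimal policy, exactly as in the question, and the policy still improves: each update shifts probability toward actions that returned more.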
"domain": "ai.stackexchange",
"id": 2246,
"tags": "reinforcement-learning, policy-gradients, reinforce"
} |
Indivisiblity of quarks | Question: I have been researching the Standard Model of Particle Physics recently. According to the model, quarks are indivisible. Does this mean that quarks cannot be divided, or does it mean that if we were to divide them, we would be left with nothing?
Answer: The former, although maybe we're wrong. It wouldn't be the first time. | {
"domain": "physics.stackexchange",
"id": 83450,
"tags": "particle-physics, standard-model, quarks, beyond-the-standard-model"
} |
Statistical Analysis of Protein Folding Problem | Question: I’m new to the field of protein folding. I’ve been searching and came across some books for predicting structures (Introduction to Protein Structure Prediction: Methods and Algorithms). Does anyone know whether I can find some code (say in C++, Java, or R) related to the prediction of protein structures? Or, do you know other good articles or books related to the statistics of the protein folding problem?
Thanks for the help.
Answer: One of the quickest ways to get oriented on what is going in the world of protein folding and modeling is to look at the proceedings of the Critical Assessment of Structure Prediction (CASP). CASP is basically a contest, held every 2 years where anyone can use their algorithm to predict the 3D structure of a protein whose structure is known, but not publicly available.
It's been a few years since I reviewed the results much - it looks like this year was interesting, but a perennial winner has been Rosetta, which has turned into an edifice of many suites of software which each execute different tasks in protein folding and modeling.
Open source software is pretty hard to find in this field. The software is complex. It usually includes components of machine and statistical learning, molecular dynamics, specialized algorithms that build up the protein one residue at a time, others which manipulate blocks of the protein structure around in space, electrostatic calculations, you name it. In addition, the software, once it gives some sort of result, is quite valuable. I don't think any of these suites has really been released. I know that Rosetta is available to use as a web service, but you have to apply for access to the source. I don't think it's an easy thing to get.
Some of the most complicated components are available open source. Molecular modeling and molecular dynamics open source software is quite sophisticated. I think we need an open-source protein folding suite. I think David Shortle's algorithms might be a candidate for such a suite as it's not so complicated and it works in some cases.
This field is pretty obscure and difficult to get around in. There aren't any easy introductions that I know of. Protein structures are computationally expensive and painful to work with in terms of writing software. On the other hand protein folding that really works is a revolutionary breakthrough, at least equal to the impact of the development of computers as a technology. | {
"domain": "biology.stackexchange",
"id": 1114,
"tags": "software, book-recommendation, protein-folding"
} |
Is it possible to create an action server in a Gazebo plugin? | Question:
I'm talking about ROS2 and GazeboROS here, but the question can extend to ROS1.
An excerpt of pseudo code would be
void MyPlugin::Load(gazebo::physics::ModelPtr model, sdf::ElementPtr sdf)
{
  ros_node_ = gazebo_ros::Node::Get(sdf);
  auto my_action_server = rclcpp_action::create_server<MyAction>(
    ros_node_->get_node_base_interface(),
    ros_node_->get_node_clock_interface(),
    ros_node_->get_node_logging_interface(),
    ros_node_->get_node_waitables_interface(),
    "my_command",
    std::bind(&MyPlugin::handle_action_goal, this, std::placeholders::_1, std::placeholders::_2),
    std::bind(&MyPlugin::handle_action_cancel, this, std::placeholders::_1),
    std::bind(&MyPlugin::handle_action_accepted, this, std::placeholders::_1));
}
My current finding seems to suggest that it is not possible to do so - Gazebo simply doesn't load this plugin if the action server is there.
Originally posted by 546568303@qq.com on Gazebo Answers with karma: 1 on 2020-03-12
Post score: 0
Original comments
Comment by chapulina on 2020-03-12:
I can't see why not. What do you mean it doesn't load the plugin? Do you see any error messages in verbose mode?
Comment by 546568303@qq.com on 2020-03-13:
@chapulina It simply doesn't. I can comment out the action server definition lines and see the plugin loaded and prints, but as long as these lines exist the plugin doesn't load and even debug message at the beginning of MyPlugin::Load() would not print.
Comment by 546568303@qq.com on 2020-03-13:
@chapulina Pushed a minimal example onto Github: https://github.com/AlanSixth/gazebo_ros_action_tutorial
Comment by 546568303@qq.com on 2020-03-13:
@chapulina Oh I think I just fixed the problem. The reason was that I didn't add rclcpp_action as the dependency in CMakeLists.txt and somehow that caused the plugin loading to fail silently. Thanks a lot for helping though.
Comment by chapulina on 2020-03-13:
Glad you could work it out! Feel free to add that as an answer and accept it so it can help others in the future.
Answer:
It is possible. I have provided a minimal example here: https://github.com/AlanSixth/gazebo_ros_action_tutorial
I had a problem before where the plugin containing an action server wouldn't load but I fixed the problem in the end. The reason was that I didn't add rclcpp_action as the dependency in CMakeLists.txt and somehow that caused the plugin loading to fail silently.
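For reference, the fix typically amounts to declaring the dependency in the build files. This is only a sketch: the target name my_gazebo_plugin below is a placeholder, not taken from the linked repository.

```cmake
# CMakeLists.txt (excerpt) -- declare rclcpp_action so the plugin links
# against it; without this the plugin can fail to load silently.
find_package(rclcpp_action REQUIRED)

ament_target_dependencies(my_gazebo_plugin
  gazebo_ros
  rclcpp
  rclcpp_action
)
```

It is also good practice to list the same dependency in package.xml (e.g. a `<depend>rclcpp_action</depend>` entry) so the build tooling resolves it.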
Originally posted by 546568303@qq.com with karma: 1 on 2020-03-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 4481,
"tags": "ros, gazebo-9"
} |
Can I use cosine similarity as a distance metric in a KNN algorithm | Question: Most discussions of KNN mention Euclidean, Manhattan and Hamming distances, but they don't mention the cosine similarity metric. Is there a reason for this?
Answer: Short answer: Cosine distance is not the overall best performing distance metric out there
Although similarity measures are often expressed using a distance metric, similarity is in fact a more flexible measure as it is not required to be symmetric or fulfill the triangle inequality. Nevertheless, it is very common to use a proper distance metric like the Euclidean or Manhattan distance when applying nearest neighbour methods due to their proven performance on real world datasets. They will therefore often be mentioned in discussions of KNN.
You might find this review from 2017 informative, it attempts to answer the question "which distance measures to be used for the KNN classifier among a large number of distance and similarity measures?" They also consider inner-product metrics like the cosine distance.
In short, they conclude that (no surprise) no optimal distance metric can be used for all types of datasets, as the results show that each dataset favors a specific distance metric, and this result complies with the no-free-lunch theorem. It is clear that, among the metrics tested, the cosine distance isn't the overall best performing metric and even performs among the worst (lowest precision) in most noise levels. It does however outperform other tested distances in 3/28 datasets.
So can I use cosine similarity as a distance metric in a KNN algorithm? Yes, and for some datasets, like Iris, it should even yield better performance (p.30) compared to the Euclidean distance.
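Concretely, plugging a cosine distance into a plain KNN is straightforward -- a minimal sketch with made-up toy data (not one of the review's datasets):

```python
import math

def cosine_distance(u, v):
    # 1 - cos(angle): 0 for parallel vectors, 2 for opposite ones
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def knn_predict(X, y, q, k=3):
    # k nearest neighbours under cosine distance, majority vote
    nearest = sorted(range(len(X)), key=lambda i: cosine_distance(X[i], q))[:k]
    votes = [y[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy data: class "a" points lie along (1, 0), class "b" along (0, 1)
X = [[1, 0.1], [2, 0.1], [3, 0.3], [0.1, 1], [0.1, 2], [0.2, 3]]
y = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(X, y, [5, 0.4]))   # "a": direction matters, not magnitude
```

The last line illustrates the key property of the cosine metric: the query is far from every training point in Euclidean terms but points in the "a" direction, so it is classified by angle alone.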
"domain": "datascience.stackexchange",
"id": 7338,
"tags": "classification, recommender-system, cosine-distance"
} |
Electromagnetism in curved spacetime | Question: I am trying to follow a derivation outlined in Asenjo et al. 2017.
In equation 1, they set the covariant divergence of the field tensor to zero,
$$ \nabla_{\alpha} F^{\alpha \beta} = 0 $$
From this they arrive at,
$$ \partial_{\alpha} [\sqrt{-g} g^{\alpha \mu} g^{\beta \nu} (\partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu})] = 0$$
Now, since $F^{\alpha \beta} =g^{\alpha \mu} g^{\beta \nu} F_{\mu \nu} $ and $F_{\mu \nu} = \nabla_{\mu} A_{\nu} - \nabla_{\nu} A_{\mu}$, I can see the general methods and substitutions taken to arrive at this answer, but am confused on 2 points:
Why the switch from covariant to partial derivatives?
Where does the $\sqrt{-g}$ term come from? What is $g$?
Answer: There are some aspects here:
First, you are correct that $g^{\alpha\mu}g^{\beta\nu}$ simply raise the indices on $F_{\mu\nu}$.
The field strength tensor is really defined as a differential two-form, i.e. $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ with partial derivatives. That doesn't matter for computing the components, since the extra terms with Christoffel symbols cancel, but the formalism is much clearer.
Finally, about the $\sqrt{-g}$: This is a standard trick to rewrite (covariant) divergences. Observe that the covariant derivative (your first equation) can be expanded as
$$D_\alpha F^{\alpha\beta}=\partial_\alpha F^{\alpha\beta} + \Gamma_{\alpha\gamma}^{\alpha} F^{\gamma\beta}+ \Gamma_{\alpha\gamma}^{\beta} F^{\gamma\alpha}\,.$$
The last term drops out because $\Gamma$ is symmetric in its lower indices and $F$ is antisymmetric. The first Christoffel symbol is
$$\Gamma_{\alpha\gamma}^\alpha=\frac{1}{2}g^{\alpha\delta}\left(\partial_\gamma g_{\alpha\delta}+\partial_\alpha g_{\gamma\delta}-\partial_\delta g_{\alpha\gamma}\right)\,,$$
where the second and third terms cancel (can you see why?), so
$$\Gamma_{\alpha\gamma}^\alpha=\frac{1}{2}g^{\alpha\delta}\partial_\gamma g_{\alpha\delta}\,.$$
This is of the form $\text{tr}\left(M^{-1}\partial M\right)$ for the matrix $g$. Using the identity $$\ln \det M=\text{tr}\ln M$$ (see e.g. https://math.stackexchange.com/questions/1487773/the-identity-deta-exptrlna-for-a-general), we can rewrite this as
$$\frac{1}{2}g^{\alpha\delta}\partial_\gamma g_{\alpha\delta} = \frac{1}{\sqrt{-g}}\partial_\gamma \sqrt{-g}\,,$$ and your second formula follows from the Leibniz rule. (I may or may not have missed a minus sign somewhere.) | {
"domain": "physics.stackexchange",
"id": 45312,
"tags": "electromagnetism, general-relativity, metric-tensor, tensor-calculus"
} |
How to fix colcon build fail? | Question:
Hi all,
I am trying to follow this tutorial https://index.ros.org/doc/ros2/Tutorials/Colcon-Tutorial/ for ROS 2.0 Eloquent on Mac OS Catalina. This fails after entering the colcon build --symlink-install command and I get this error message:
user@users-MBP ros2_example_ws % sudo colcon build --symlink-install
Password:
Starting >>> examples_rclcpp_minimal_action_client
Starting >>> examples_rclcpp_minimal_action_server
Starting >>> examples_rclcpp_minimal_client
Starting >>> examples_rclcpp_minimal_composition
--- stderr: examples_rclcpp_minimal_action_server
Traceback (most recent call last):
File "/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/package_templates/templates_2_cmake.py", line 21, in <module>
from ament_package.templates import get_environment_hook_template_path
ModuleNotFoundError: No module named 'ament_package'
CMake Error at /Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/ament_cmake_package_templates-extras.cmake:41 (message):
execute_process(/usr/local/bin/python3
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/package_templates/templates_2_cmake.py
/Users/user/Desktop/ros2_example_ws/build/examples_rclcpp_minimal_action_server/ament_cmake_package_templates/templates.cmake)
returned error code 1
Call Stack (most recent call first):
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/ament_cmake_coreConfig.cmake:38 (include)
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake/cmake/ament_cmake_export_dependencies-extras.cmake:15 (find_package)
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake/cmake/ament_cmakeConfig.cmake:38 (include)
CMakeLists.txt:13 (find_package)
---
Failed <<< examples_rclcpp_minimal_action_server [ Exited with code 1 ]
--- stderr: examples_rclcpp_minimal_action_client
Traceback (most recent call last):
File "/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/package_templates/templates_2_cmake.py", line 21, in <module>
from ament_package.templates import get_environment_hook_template_path
ModuleNotFoundError: No module named 'ament_package'
CMake Error at /Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/ament_cmake_package_templates-extras.cmake:41 (message):
execute_process(/usr/local/bin/python3
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/package_templates/templates_2_cmake.py
/Users/user/Desktop/ros2_example_ws/build/examples_rclcpp_minimal_action_client/ament_cmake_package_templates/templates.cmake)
returned error code 1
Call Stack (most recent call first):
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/ament_cmake_coreConfig.cmake:38 (include)
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake/cmake/ament_cmake_export_dependencies-extras.cmake:15 (find_package)
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake/cmake/ament_cmakeConfig.cmake:38 (include)
CMakeLists.txt:13 (find_package)
---
Aborted <<< examples_rclcpp_minimal_action_client
--- stderr: examples_rclcpp_minimal_composition
Traceback (most recent call last):
File "/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/package_templates/templates_2_cmake.py", line 21, in <module>
from ament_package.templates import get_environment_hook_template_path
ModuleNotFoundError: No module named 'ament_package'
CMake Error at /Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/ament_cmake_package_templates-extras.cmake:41 (message):
execute_process(/usr/local/bin/python3
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/package_templates/templates_2_cmake.py
/Users/user/Desktop/ros2_example_ws/build/examples_rclcpp_minimal_composition/ament_cmake_package_templates/templates.cmake)
returned error code 1
Call Stack (most recent call first):
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/ament_cmake_coreConfig.cmake:38 (include)
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake/cmake/ament_cmake_export_dependencies-extras.cmake:15 (find_package)
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake/cmake/ament_cmakeConfig.cmake:38 (include)
CMakeLists.txt:13 (find_package)
---
Aborted <<< examples_rclcpp_minimal_composition
--- stderr: examples_rclcpp_minimal_client
Traceback (most recent call last):
File "/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/package_templates/templates_2_cmake.py", line 21, in <module>
from ament_package.templates import get_environment_hook_template_path
ModuleNotFoundError: No module named 'ament_package'
CMake Error at /Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/ament_cmake_package_templates-extras.cmake:41 (message):
execute_process(/usr/local/bin/python3
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/package_templates/templates_2_cmake.py
/Users/user/Desktop/ros2_example_ws/build/examples_rclcpp_minimal_client/ament_cmake_package_templates/templates.cmake)
returned error code 1
Call Stack (most recent call first):
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake_core/cmake/ament_cmake_coreConfig.cmake:38 (include)
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake/cmake/ament_cmake_export_dependencies-extras.cmake:15 (find_package)
/Users/user/ros2_eloquent/ros2-osx/share/ament_cmake/cmake/ament_cmakeConfig.cmake:38 (include)
CMakeLists.txt:13 (find_package)
---
Aborted <<< examples_rclcpp_minimal_client
Summary: 0 packages finished [0.81s]
1 package failed: examples_rclcpp_minimal_action_server
3 packages aborted: examples_rclcpp_minimal_action_client examples_rclcpp_minimal_client examples_rclcpp_minimal_composition
4 packages had stderr output: examples_rclcpp_minimal_action_client examples_rclcpp_minimal_action_server examples_rclcpp_minimal_client examples_rclcpp_minimal_composition
12 packages not processed
A recurring bit of this log is ModuleNotFoundError: No module named 'ament_package', which I tried to investigate. It seems that ament_package does exist, but somehow things aren't linked up as expected.
Does anybody have any ideas how I might solve this?
Thanks!
Originally posted by Py on ROS Answers with karma: 501 on 2019-12-04
Post score: 1
Answer:
After looking at this again I have found that I needed to add source ~/ros2_eloquent/ros2-osx/setup.bash to the .bashrc file. This was explained in the instructions at https://index.ros.org/doc/ros2/Installation/Eloquent/OSX-Install-Binary/ but I misunderstood something and thought I needed to use the .zshrc rather than .bashrc on OSX Catalina, which has proved not to be the case.
Thanks for helping to solve this!
Originally posted by Py with karma: 501 on 2019-12-11
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by marguedas on 2019-12-11:
glad you found the correct fix!
You should be able to mark your own answer as correct (so that this issue can be considered closed) | {
"domain": "robotics.stackexchange",
"id": 34090,
"tags": "ros, mac"
} |
What's up with SiO₂ being tetrahedral? | Question: If silicon dioxide tends to form a crystal lattice with four $\ce{O}$'s around a central $\ce{Si}$, why isn't the molecular formula $\ce{SiO4}$ then? I'm confused why it's unique in that its molecular formula doesn't match up with its geometry.
Answer: The problem you are having with the formula is based on a misunderstanding about how far you can take molecular and structural formulae.
For compounds that form distinct molecules it is often worth writing the molecular formula in a way that helps you understand the structure of the molecule. But this is a convenience, not a generalisation that can apply to all possible compounds.
It doesn't apply to $\ce{SiO2}$ because there is no such thing as a silica molecule. Silica, like many other minerals, is a 3D network of bonds with no discrete molecular components. In silica each silicon is bonded to four oxygens, but each oxygen is shared between two silicons. This gives the $\ce{SiO2}$ formula. This also explains why mineralogy is harder than chemistry. | {
"domain": "chemistry.stackexchange",
"id": 5294,
"tags": "crystal-structure"
} |
EEG data layout for RNN | Question: How should one structure an input data matrix (containing EEG data) for an RNN?
Normally, RNNs are presented as language models where you have a one hot vector indicating the presence of a word. So if you input was the sentence "hello how are you", you would have 4 one hot vectors (I think):
[1, 0, 0, 0]
[0, 1, 0, 0]
[0, 0, 1, 0]
[0, 0, 0, 1]
How do these individual vectors get condensed into a single data matrix?
In the case of single-channel EEG (only 1 electrode), sampled at 256 samples per second with 1-second-long recordings, how should this data be structured? Would it be 256 vectors? If so, what does each vector represent/contain? Or should it be 1 vector that is 256 elements long?
Furthermore, how does this extend to multi channel EEG with, say, 64 electrodes over 256 time samples?
I would prefer to use the raw EEG data, rather than trying some dimensionality reduction (calculating means, spectrograms etc)
Answer: RNNs are not designed to do language modeling exclusively; they are designed to process time-series data, and language happens to be representable as a time series.
There are plenty of papers demonstrating how to use RNNs to do classification and regression on time series (awesome list of papers).
One-hot encoding is often used in cases where the input is discrete and not a number that can be directly fed into a model. However, one-hot encoding is not always the norm even for language modeling: some research maps each character (or each word, depending on how one wants to model the problem) to a unique numerical identifier (e.g. with $id\in \left\{ 0\ldots n \right\}$, where $n$ is the size of the vocabulary) and the model maps that identifier to a vector representation. This is particularly useful when the vocabulary is huge and you want to avoid dealing with big one-hot encoded vectors. Take a look at word2vec and word2vec Tensorflow for more details about this.
In your case, you want to process sensor data. There is no need for one-hot encoding because your data are continuous and numerical. In other words, you can input the recorded EEG data directly, although it's usually better to clean them up and normalize them beforehand. There are actually plenty of papers about EEG data processing with RNNs.
Regardless of the type of data, the idea for feeding them into an RNN remains the same: provide a data sample $x$ for each timestep $t$.
For EEG data recorded for 5 timesteps with 1 electrode $a$, you would have a 1-dimensional vector with one sample per timestep:
$a = [0.12, 0.44, 0.134, 0.39, 0.23]$
$inputvector = [0.12, 0.44, 0.134, 0.39, 0.23]$
For EEG data recorded for 5 timesteps with 3 electrodes $a, b, c$, you would have a 3-dimensional vector with one sample per timestep:
$a = [0.12, 0.44, 0.134, 0.39, 0.23]$
$b = [0.43, 0.92, 0.3, 0.37, 0.4]$
$c = [0.13, 0.1, 0.4, 0.21, 0.14]$
$inputvector = [[0.12, 0.43, 0.13], [0.44, 0.92, 0.1], ...]$
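In array terms (a sketch using numpy; the batch-first layout is the convention of most RNN frameworks, not something specific to EEG), the examples above become:

```python
import numpy as np

# Three electrodes recorded for five timesteps (values from the example above)
a = [0.12, 0.44, 0.134, 0.39, 0.23]
b = [0.43, 0.92, 0.3, 0.37, 0.4]
c = [0.13, 0.1, 0.4, 0.21, 0.14]

trial = np.stack([a, b, c], axis=1)   # shape (timesteps, channels) = (5, 3)
assert trial.shape == (5, 3)
assert np.allclose(trial[0], [0.12, 0.43, 0.13])   # the t=0 input vector

# Single-channel EEG is the same layout with one channel,
# e.g. (256, 1) for one second of data sampled at 256 Hz
single = np.array(a).reshape(-1, 1)
assert single.shape == (5, 1)

# Most RNN APIs additionally expect a leading batch axis:
# (batch, timesteps, channels)
batch = trial[np.newaxis, ...]
assert batch.shape == (1, 5, 3)
```

So 64 electrodes over 256 time samples would simply be one array of shape (256, 64) per trial.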
I highly recommend you to read some of the literature I linked above. | {
"domain": "datascience.stackexchange",
"id": 1550,
"tags": "machine-learning, neural-network, time-series, rnn"
} |
Is there a non-deterministic protocol for entanglement generation between distant parties? | Question: I'm aware that one can imperfectly clone entanglement that's shared between two parties (i.e. Bell pairs) using deterministic quantum cloning machines to produce two, lower fidelity entangled states.
What I want to know is: does there exist some strategy to non-deterministically generate entanglement between two distant parties? In other words, if Alice and Bob have a Bell pair between them, is there some LOCC strategy they can perform that will either create another Bell pair of the same fidelity, or fail with some probability.
Answer: No; any such protocol would violate the Holevo bound (at most 1 bit of classical information per qubit sent, including qubits sent during preparation). You could just keep repeating the process until it gave you entanglement, then use superdense coding to achieve 2 bits per qubit. | {
"domain": "quantumcomputing.stackexchange",
"id": 3722,
"tags": "entanglement, information-theory, communication, locc-operation, cloning"
} |
How to know spin out of wave function? | Question: I do not clearly understand some concepts, so maybe someone will clarify this for me.
Imagine we have a random wavefunction for an electron, it could be anything.
How can I, given a known wavefunction, calculate the value of the spin of the electron? I know that we cannot know with 100% certainty whether it would be spin up or spin down, but which steps should be taken to calculate the probabilities?
Or maybe I am misunderstanding, and the wavefunction of an electron must always be of the form $|\psi\rangle=c_1|\psi_{1,\text{spin up}}\rangle + c_2|\psi_{2,\text{spin down}}\rangle$? If yes, I know that the probabilities of spin up and down are just $|c_1|^2$ and $|c_2|^2$ respectively.
But what if the wavefunction were different (not necessarily of that two-component form)? For example, the normalized wavefunction of a particle in an infinite square well:
$$\psi_n\left(z\right) = \sqrt{\frac{2}{L_z}}\sin{\frac{n\pi z}{L_z}}$$
Answer: The wavefunction you are giving is that of a (spinless) particle in an infinite potential well, described as a state living in a certain Hilbert space. To include the spin of the particle you must enlarge this Hilbert space to a tensor product of the system you are considering with the two-level spin system (here, potential well $\otimes$ spin). So you are right: as far as the spin is concerned, your wavefunction (a qubit) only has two degrees of freedom, represented on the Bloch sphere or, as you put it, $|{\psi}\rangle = c_1 |{up}\rangle + c_2 |{down}\rangle$ where $|c_1|^2 + |c_2|^2 = 1$.
Thus, if you are describing the state in terms of energy levels of the potential and a spin configuration, you may have a superposition of states living in this product space. If you want the value of a certain observable (such as the spin, or the energy in the well) you can project onto the basis you prefer. For example, to extract the spin-up or spin-down component you can act with $\langle up|$ or $\langle down|$ on your state $|\psi\rangle$; if one of these projections vanishes, you may conclude that the spin is definitely down or up, respectively. Likewise, you could project the state onto a particular energy level (since the $\psi_n$ you mentioned are mutually orthogonal).
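A quick numerical illustration of both points (the amplitudes and the three-level spatial state are made-up examples):

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
c1, c2 = 0.6, 0.8j                    # any amplitudes with |c1|^2 + |c2|^2 = 1
spin = c1 * up + c2 * down

# Squared moduli of the projections <up|psi>, <down|psi> give the probabilities
p_up = abs(np.vdot(up, spin)) ** 2    # np.vdot conjugates its first argument
p_down = abs(np.vdot(down, spin)) ** 2
assert np.isclose(p_up, 0.36) and np.isclose(p_down, 0.64)

# Attaching a spatial state (e.g. well levels) is a tensor product
spatial = np.array([0.0, 1.0, 0.0])   # particle definitely in the second level
total = np.kron(spatial, spin)        # well (x) spin, a 6-component state

# Probability of spin up regardless of level: sum over the level basis
P_up = sum(abs(np.vdot(np.kron(e, up), total)) ** 2 for e in np.eye(3))
assert np.isclose(P_up, p_up)         # spin statistics are unchanged
```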
This post might be helpful as well. | {
"domain": "physics.stackexchange",
"id": 68989,
"tags": "quantum-mechanics, hilbert-space, wavefunction, quantum-spin"
} |
Physics definition of work and lifting | Question: My calculus text (Swokowski, Olnikc, Pence, 6th edition) gives the
formula for work as $W = Fd$ and then goes on to explain that if the
force varies over the distance the formula becomes an integral.
As part of an example, it then shows that the work to lift a 500 lb
beam 30 feet would be 500 * 30 = 15,000 ft-lb. But don't we have to
exert an upward force greater than the weight of the beam somewhere in
our model in order to get the beam to move up? Then the work would be
greater than 15,000 ft-lbs according to the definition of work.
It seems that in setting up integrals or just using the formula
W = Fd examples always use the weight of the increment or object to be
lifted without taking into account the fact that more than that force
must be exerted at some point in order for the thing to move upwards.
By the formula W = Fd, if we greatly accelerate an object upwards, the
work done lifting the object will be greater than if a lesser
acceleration is applied across the same distance, but examples in my
book don't seem to take this into account. (I am assuming that
F = ma, and of course the mass stays constant.)
Answer: You are correct. To simplify matters, this amount is often ignored. There are several reasons why such a simplification is valid here.
In the first place, we have no minimum speed for the lift. By reducing the velocity, we can make the acceleration (and the work needed to produce it) arbitrarily close to zero.
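In fact, for any lift that begins and ends at rest, the applied work integrates to exactly weight times height, whatever acceleration profile is used in between. A numeric sketch (the sinusoidal speed profile is an arbitrary assumption):

```python
import numpy as np

W0, g, h = 500.0, 32.174, 30.0           # weight (lb), g (ft/s^2), height (ft)
z = np.linspace(0.0, h, 200_001)
v = 8.0 * np.sin(np.pi * z / h)          # any speed profile with v(0) = v(h) = 0
a = v * np.gradient(v, z)                # a = dv/dt = v dv/dz by the chain rule
F = W0 + (W0 / g) * a                    # applied force = weight + m*a
work = np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(z))   # trapezoid rule
assert abs(work - W0 * h) < 1.0          # ~15,000 ft-lb regardless of the profile
```

The extra force during acceleration is exactly offset by the reduced force during deceleration, so the integral reduces to $W_0 h$.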
Any extra work done to accelerate can be returned during a deceleration. All that is required is that the beam's path starts and stops with the same speed. If the speed is the same, then the kinetic energy at those points is the same, which means any net work done on the object must have gone into some other form, and we assume it to be gravitational potential energy here. | {
"domain": "physics.stackexchange",
"id": 41340,
"tags": "newtonian-mechanics, forces, energy, work, potential-energy"
} |
Can a single git repository release multiple ROS stacks? | Question:
I am setting up a repository which will contain several ROS stacks. I want rosinstall and ros_release to manage each stack separately. I know how to do it with svn, but git seems to force the whole repository to be managed as a unit.
Is there a way to use git that meets these requirements?
Do I need a separate git repository for each stack?
Originally posted by joq on ROS Answers with karma: 25443 on 2011-06-19
Post score: 3
Answer:
You need to map git repositories directly onto stacks as you cannot do partial checkouts of git repositories. For source-based installs using rosinstall, you can get into bad states if you are using more than one stack from a single git repository. Another way to think of it is, outside of ROS, if you were releasing a software library, you would have a separate repository for that software library, i.e. we generally map releasable units to separate repositories.
You can, however, use git submodules to create larger, virtual repositories. git submodules are not seamless, but they are the closest to combining the aggregate 'give me everything' behavior of SVN with managing releasable units separately.
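For illustration, two stand-alone stack repositories could be aggregated into a virtual repository with submodules like this (paths and names are made up; recent Git versions restrict file-protocol submodules for security, hence the protocol.file.allow override):

```shell
set -e
work=$(mktemp -d) && cd "$work"

# Two independent stack repositories, one per releasable unit
for stack in stack_a stack_b; do
    git init -q "$stack"
    ( cd "$stack" && touch stack.xml && git add stack.xml &&
      git -c user.name=demo -c user.email=demo@example.com commit -qm init )
done

# A 'virtual' repository aggregating both stacks as submodules
git init -q all_stacks
cd all_stacks
git -c protocol.file.allow=always submodule add "$work/stack_a"
git -c protocol.file.allow=always submodule add "$work/stack_b"
git submodule status    # each stack pinned at a specific commit
```

A plain clone of all_stacks then gives the SVN-like "everything" checkout (after `git submodule update --init`), while each stack remains its own releasable repository.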
Originally posted by kwc with karma: 12244 on 2011-06-19
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 5888,
"tags": "ros, git, rosinstall, ros-release, best-practices"
} |
Use microwave cavity in atomic clock | Question: In most of the papers regarding atomic clocks, the author talks about a microwave cavity. In this box, all the unwanted frequencies of the electromagnetic radiation are absorbed and the other frequencies are maintained. But how does this work with an atomic clock? In this case, one uses atoms rather than radiation. What is the difference between these applications of the cavity? I thought this might have something to do with the duality between atomic particles and waves, although I couldn't find any sources that could verify this.
Answer: An atomic clock works by using a signal generator to produce an EM wave with a frequency matching the caesium hyperfine transition, i.e. 9,192,631,770 Hz. The EM wave is sent through a cloud of caesium atoms and the frequency is constantly adjusted to maximise the absorption by the caesium atoms. Keeping the time is just a matter of counting the number of cycles produced by your signal generator and dividing by 9,192,631,770 to get the number of seconds.
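The timekeeping step is literally that division (a trivial sketch; the cycle counts are made up):

```python
F_CS = 9_192_631_770   # Hz: caesium hyperfine frequency defining the SI second

def elapsed_seconds(cycles):
    """Seconds elapsed, given a count of cycles of the locked oscillator."""
    return cycles / F_CS

assert elapsed_seconds(F_CS) == 1.0       # one second per 9,192,631,770 cycles
assert elapsed_seconds(3 * F_CS) == 3.0
```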
So the clock uses a microwave cavity tuned to about 9.192 GHz because that's about the frequency of the clock transition. However, cavity tuning is not nearly precise enough for timekeeping. So having used the cavity to get an EM wave of about the correct frequency, the frequency is fine-tuned to match the absorption line of the caesium atoms. Once this has been done we can start counting cycles to measure time. | {
"domain": "physics.stackexchange",
"id": 14234,
"tags": "homework-and-exercises, atomic-physics, microwaves"
} |
Separating the action of Hamiltonian and Symmetry (Block diagonalize Hamiltonian) | Question: I heard this statement in one lecture.
Consider a first quantized Hamiltonian $H$ on a single-particle Hilbert space (of finite dimension $N$).
If the Hamiltonian possess symmetries that is unitarily represented, then
we can bring the Hamiltonian into a block diagonal form with each block $H^{(\lambda)}$ labelled by the irreducible (unitary) representation $\lambda$ of its symmetry group $G_0$. These irreducible blocks do not exhibit the unitary symmetries.
This seems to be an elementary fact for which most papers do not give a reference. Can anyone point out a proof of it?
Note
The lecturer also gave the following exact statement of his claim:
Suppose we have a Hamiltonian $H$ on single-particle Hilbert space (of
finite dimension $N$). Assume its group of symmetry is $G_0$. Then
the space $\mathcal{V}$ of single-particle Hilbert space, decomposes
into a direct sum of vector spaces $\mathcal{V}_\lambda$ associated
with the irrep (irreducible representations, labeled by $\lambda$) of
$G_0$.
\begin{equation}
\mathcal{V} = \oplus_\lambda \mathcal{V}_\lambda
\end{equation}
Let $m_\lambda$ denotes the multiplicity of $\lambda$th irrep.
Denote the dimension of each irrep as $d_\lambda$.
In each vector space $\mathcal{V}_\lambda$, one can choose a
(orthogonal) basis of the form:
\begin{equation}
|v^{(\lambda)}_\alpha\rangle \otimes |w^{(\lambda)}_k\rangle
\end{equation}
where
$G_0$ acts only on $|w^{(\lambda)}_k\rangle$,
$k=1,\cdots,d_\lambda$,
$H$ acts only on $|v^{(\lambda)}_\alpha\rangle$,
$\alpha=1,\cdots,m_\lambda$.
If you have difficulty understanding @ACuriousMind's answer, please read my comments in that answer for a concrete example.
Answer: This seems to be a strange formulation of the fact that $H$ and $G_0$ commute, so there are "joint eigenstates"; in particular, each of the $V_\lambda^{(i)}$ (I'm labelling the copies of each irreducible representation by $i = 1,\dots,m_\lambda$ here) can be chosen to be an eigenspace of $H$ with energy $E^{(i)}_\lambda$. So, we pick some abstract vector $\lvert E_\lambda^{(i)}\rangle$ and a basis $\lvert v_{\lambda,j}\rangle,j = 1,\dots,d_\lambda$ of $V_\lambda$, and there's an isomorphism from the vector space spanned by $\lvert E_\lambda^{(i)}\rangle\otimes\lvert v_{\lambda,j}\rangle$ for $j = 1,\dots,d_\lambda$ to $V_\lambda^{(i)}$.
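The statement is easy to see numerically: build an $H$ that commutes with a unitarily represented symmetry, change to the symmetry's eigenbasis, and the matrix elements between different eigenvalues (irrep labels) vanish. A small sketch with a swap symmetry on $\mathbb{C}^4$ (the particular $P$ and $H$ are made up):

```python
import numpy as np

# Swap symmetry on C^4: a unitary (here real orthogonal) representation of Z_2
P = np.array([[0., 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])

# Average a random symmetric matrix over the group to get [H, P] = 0
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = A + A.T
H = A + P @ A @ P
assert np.allclose(H @ P, P @ H)

# Change to the eigenbasis of P; eigenvalues +1/-1 label the two irreps of Z_2
w, U = np.linalg.eigh(P)
Hb = U.T @ H @ U

# Matrix elements between different irrep labels vanish: H is block diagonal
different = np.not_equal.outer(np.round(w), np.round(w))
assert np.allclose(Hb[different], 0.0)
```

Within each eigenvalue sector, $H$ is generically dense; only the blocks coupling different irrep labels are forced to vanish.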
Using the tensor product is a bit of a weird notational choice for this - you can do it as there's the isomorphism I've indicated, but usually you'd just pick a basis of the $V^{(i)}_\lambda$ that are eigenvectors of some generators (those in the Cartan subalgebra if we have a Lie group) of $G_0$, and call the resulting basis $\lvert E_\lambda^{(i)},v_{\lambda,j}\rangle$ for whatever eigenvalues $v_{\lambda,j}$ occur in the $V_\lambda$ representation. | {
"domain": "physics.stackexchange",
"id": 39457,
"tags": "solid-state-physics, group-theory, group-representations"
} |
A quine in pure lambda calculus | Question: I would like an example of a quine in pure lambda calculus. I was quite surprised that I couldn't find one by googling. The quine page lists quines for many
"real" languages, but not for lambda calculus.
Of course, this means defining what I mean by a quine in the lambda calculus, which I do below. (I'm asking for something quite specific.)
In a few places, e.g. Larkin and Stocks (2004), I see the following quoted as a "self-replicating" expression: $(\lambda x.x \; x)\;(\lambda x.x \; x)$. This reduces to itself after a single beta-reduction step, giving it a somewhat quine-like feel. However, it's un-quine-like in that it doesn't terminate: further beta-reductions will keep producing the same expression, so it will never reduce to normal form. To me a quine is a program that terminates and outputs itself, and so I would like a lambda expression with that property.
Of course, any expression that contains no redexes is already in normal form, and will therefore terminate and output itself. But that's too trivial. So I propose the following definition in the hope that it will admit a non-trivial solution:
definition (tentative): A quine in lambda calculus is an expression of the form
$$(\lambda x . A)$$
(where $A$ stands for some specific lambda calculus expression) such that $((\lambda x . A)\,\, y)$ becomes $(\lambda x . A)$, or something equivalent to it under changes of variable names, when reduced to normal form, for any input $y$.
Given that the lambda calculus is as Turing equivalent as any other language, it seems as if this should be possible, but my lambda calculus is rusty, so I can't think of an example.
Reference
James Larkin and Phil Stocks. (2004) "Self-replicating expressions in the Lambda Calculus"
Conferences in Research and Practice in Information Technology, 26 (1), 167-173.
http://epublications.bond.edu.au/infotech_pubs/158
Answer: You want a term $Q$ such that $\forall M \in \Lambda$:
$$QM \rhd_\beta Q$$
I will specify no further restrictions on $Q$ (e.g. regarding its form and whether it is normalising) and I will show you that it definitely must be non-normalising.
Assume $Q$ is in normal form. Choose $M \equiv x$ (we can do so because the theorem needs to hold for all $M$). Then there are three cases.
$Q$ is some atom $a$. Then $QM \equiv ax$. This is not reducible to $a$.
$Q$ is some application $(RS)$. Then $QM \equiv (RS)x$. $(RS)$ is a normal form by hypothesis, so $(RS)x$ is also in normal form and not reducible to $(RS)$.
$Q$ is some abstraction $(\lambda x.A)$ (if $x$ is supposed to be free in $A$, then for simplicity we can just choose $M$ equivalent to whatever variable $\lambda$ abstracts over). Then $QM \equiv (\lambda x.A)x \rhd_\beta A[x/x] \equiv A$. Since $(\lambda x.A)$ is in normal form, so is $A$. Consequently we cannot reduce $A$ to $(\lambda x.A)$.
So if such a $Q$ exists, it cannot be in normal form.
For completeness, suppose $Q$ has a normal form, but is not in normal form (perhaps it is weakly normalising), i.e. $\exists N \in \beta\text{-nf}$ with $N \not\equiv Q$ such that $\forall M \in \Lambda$:
$$QM \rhd_\beta Q \rhd_\beta N$$
Then with $M \equiv x$ there must also exist a reduction sequence $Qx \rhd_\beta Nx \rhd_\beta N$, because:
$Qx \rhd_\beta Nx$ is possible by the fact that $Q \rhd_\beta N$.
$Nx$ must normalise since $N$ is a $\beta$-nf and $x$ is just an atom.
If $Nx$ were to normalise to anything other than $N$, then $Qx$ has two $\beta$-nfs, which is not possible by a corollary to the Church-Rosser theorem. (The Church-Rosser theorem essentially states that reductions are confluent, as you probably already know.)
But note that $Nx \rhd_\beta N$ is not possible by argument (1) above, so our assumption that $Q$ has a normal form is not tenable.
If we permit such a $Q$, then we are certain that it must be non-normalising. In that case we can simply use a combinator that eliminates any argument it receives. Denis's suggestion works just fine:
$$Q \equiv (\lambda z.(\lambda x.\lambda z.(x x)) (\lambda x.\lambda z.(x x)))$$
Then in only two $\beta$-reductions:
\begin{align}
QM &\equiv (\lambda z.(\lambda x.\lambda z.(x x)) (\lambda x.\lambda z.(x x))) M \\
& \rhd_{1\beta} (\lambda x.\lambda z.(x x)) (\lambda x.\lambda z.(x x)) \\
& \rhd_{1\beta} (\lambda z.((\lambda x.\lambda z.(x x)) (\lambda x.\lambda z.(x x)))) \\
& \equiv Q
\end{align}
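The combinator can be transcribed almost literally into Python. Python lambdas evaluate their bodies only when called, so each application terminates and rebuilds a function behaving like $Q$ (one cannot assert syntactic identity of function objects, only behavior):

```python
# Behavioral transcription of Q = λz.(λx.λz.(x x)) (λx.λz.(x x))
A = lambda x: (lambda z: x(x))   # λx.λz.(x x)
Q = lambda z: A(A)               # λz.(λx.λz.(x x)) (λx.λz.(x x))

f = Q
for arg in ("anything", 42, None):
    f = f(arg)          # the argument is discarded...
    assert callable(f)  # ...and we get back something behaving like Q again
```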
This result is not very surprising, since you are essentially asking for a term that eliminates any argument it receives, and this is something I often see mentioned as a direct application of the fixed-point theorem. | {
"domain": "cs.stackexchange",
"id": 3115,
"tags": "lambda-calculus"
} |
High dimensional Pareto dominance query data structure | Question: I have a large (10 million+) set $X$ of data points in some high dimensional $\mathbb{R}^d$ ($d \geq 500$) space. Each data point is quite sparse, e.g. has around $10$ components. Every missing component can be seen as having value $-\infty$ for the purposes of this problem. Associated with each data point $X_i$ is a price $y_i$.
As a quick refresher, a Pareto order is a partial order that orders vectors by those that are 'strictly better', component-wise. That is, for $a, b \in \mathbb{R}^d$ we have
$$a \preceq b \iff \forall i : a_i \leq b_i.$$
In this case we also say that $b$ dominates $a$ (although not strictly, $a$ and $b$ could be equal). Finally note that this is very much a partial order, it very often happens that $a$ and $b$ are incomparable.
Let $D(z) = \{x \in X : z \preceq x\}$ be the elements in our dataset that dominate $z$ (taking into account only the data points, not their associated prices). I wish to know if there exists an efficient data structure that can answer two things about some $z \in \mathbb{R}^d$ (even sparser, e.g. around $4$ to $5$ components):
the number of items that dominate $z$, or $|D(z)|$, and
the cheapest $k$ prices among those that dominate $z$, or the smallest $k$ elements of $\{y_i : X_i \in D(z)\}$.
Note that $k$ is small here (e.g. $10$) even though $D(z)$ might contain thousands of elements.
Does a data structure solving this problem efficiently exist? Every approach I can think of suffers badly from the curse of dimensionality.
Answer: Here is one approach you could consider. If the number of non-missing coordinates is tightly concentrated around 10, it might help you partly avoid the curse of dimensionality. I don't know whether it will be useful in practice.
Choose a random hash function $h:\{1,\dots,d\} \to \{1,\dots,10\}$. If $x \in \mathbb{R}^d$ is a data point, let $f(x)$ be its signature, where $f:\mathbb{R}^d \to \mathbb{R}^{10}$ is defined as
$$f(x) = (x^*_1,\dots,x^*_{10})$$
where $x^*_j = \max \{x_i \mid h(i)=j\}$.
Notice that the signature is dense, i.e., $f$ maps a sparse high-dimensional vector to a dense low-dimensional vector.
Also, notice that $f$ is monotonic: if $z \preceq x$ then $f(z) \preceq f(x)$. The converse does not necessarily hold.
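A sketch of the signature map and a spot check of its monotonicity (representing sparse points as index-to-value dictionaries is my assumption, as is the choice of random-bucket hash):

```python
import random

D, DP = 500, 10                 # ambient dimension d, signature dimension
random.seed(0)
h = [random.randrange(DP) for _ in range(D)]   # random hash {1..d} -> {1..10}
NEG_INF = float("-inf")

def signature(x):
    """f(x): sparse point (dict index -> value) to a dense length-10 vector."""
    sig = [NEG_INF] * DP
    for i, v in x.items():
        sig[h[i]] = max(sig[h[i]], v)   # bucket-wise max over hashed coordinates
    return sig

def dominates(z, x):
    """z <= x componentwise, with missing components read as -infinity."""
    return all(x.get(i, NEG_INF) >= v for i, v in z.items())

# Monotonicity check: z <= x implies f(z) <= f(x)
x = {i: random.random() for i in random.sample(range(D), 10)}
z = {i: x[i] - 0.1 for i in list(x)[:4]}       # a query dominated by x
assert dominates(z, x)
assert all(a <= b for a, b in zip(signature(z), signature(x)))
```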
The approach will be to build a data structure that, given a query $z$, helps us enumerate all $x$ such that $f(z) \preceq f(x)$; then we will check each such $x$ to see whether $z \preceq x$, and count the number that do, or output the lowest-priced ones that do. This reduces the problem from a 500-dimensional problem (on sparse data points) to a 10-dimensional problem (on dense data points).
How does the 10-dimensional data structure look? We can use a simple trie, where the $i$th level branches on the value of $x^*_i$, and each leaf stores one data point. In practice, I suggest organizing the list of children at the $i$th level using a binary search tree keyed on $x^*_i$, rather than as a list.
Now, the lookup algorithm simply traverses the trie recursively, but with the traversal pruned in the obvious way. In other words, at each level we only explore the children $x^*_i$ where $z^*_i \le x^*_i$ (for query $z$). Using the binary search tree data structure, at each level it is easy to enumerate only those children without having to enumerate the other children.
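A minimal sketch of such a trie with ordered children (one possible realization, not necessarily the most efficient; a TreeMap plays the role of the per-level binary search tree, and its tailMap gives exactly the pruned enumeration described above):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Sketch of the pruned trie lookup over signatures. Each trie level
// branches on one signature coordinate; children are kept in a TreeMap
// so that tailMap(z_i) enumerates exactly the children with value >= z_i.
class SignatureTrie {
    static class Node {
        TreeMap<Double, Node> children = new TreeMap<>();
        List<double[]> points = new ArrayList<>(); // data points stored at leaves
    }

    final Node root = new Node();
    final int dims;

    SignatureTrie(int dims) { this.dims = dims; }

    void insert(double[] sig, double[] original) {
        Node cur = root;
        for (int i = 0; i < dims; i++) {
            cur = cur.children.computeIfAbsent(sig[i], k -> new Node());
        }
        cur.points.add(original);
    }

    // Collect all stored points whose signature dominates zSig coordinatewise.
    List<double[]> candidates(double[] zSig) {
        List<double[]> out = new ArrayList<>();
        walk(root, zSig, 0, out);
        return out;
    }

    private void walk(Node node, double[] zSig, int depth, List<double[]> out) {
        if (depth == dims) {
            out.addAll(node.points);
            return;
        }
        // Prune: only children whose key is >= zSig[depth] can dominate.
        for (Node child : node.children.tailMap(zSig[depth], true).values()) {
            walk(child, zSig, depth + 1, out);
        }
    }
}
```

Each candidate returned still has to be checked against the original high-dimensional dominance test, since $f(z) \preceq f(x)$ does not imply $z \preceq x$.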
What is the running time of this algorithm? The worst-case time could be bad, but I'll analyze the average-case running time, via an extremely crude heuristic. If $z,x$ are two randomly chosen data vectors, then crudely $\Pr[z^*_i \le x^*_i] \approx 1/2$ for each $i$, as we have two randomly chosen numbers from $\mathbb{R}$ and it is roughly equally likely which one is larger. Therefore, we can expect that only about a $1/2^{10}$ fraction of the data points $x$ will satisfy $f(z) \preceq f(x)$. And, the running time of the recursive traversal of the trie will be approximately proportional to the number of such data points $x$. Therefore, this heuristic predicts the average-case running time of this algorithm, on a random query $z$, to be something like $O(|X|/2^{10})$ time. In other words, this is approximately a 1000-fold speedup over the naive algorithm of enumerating all data points in your dataset. This crude analysis is overly optimistic and probably the true running time will be worse (e.g., due to collisions in the hash function, as this analysis implicitly assumed that both $z$ and $x$ have exactly 10 components and there are no collisions in the hash function, but in practice neither of those will always be true).
P.S. There are multiple variants possible. We can also consider replacing 10 by an arbitrary $d'$ and optimizing over $d'$. Also, we can alternatively define $f$ by
$$f(x) = (x^*_0,x^*_1,\dots,x^*_{d'})$$
where $x^*_0$ is the number of $x^*_1,\dots,x^*_{d'}$'s that are not $-\infty$. I don't know whether either of those would be better, but they could be variants to try on your data set.
Another possible optimization is to precompute a dozen different copies of the data structure for a dozen different hash functions. Then, when you want to answer a query for $z$, check which hash function maximizes the number of coordinates in $f(z)$ that are not $-\infty$, and use the corresponding data structure for the lookup. If $x$ has exactly 10 non-missing coordinates, then it is very likely that one of these hash functions leads to a signature with only 0, 1, or 2 coordinates at $-\infty$. Also you could consider an approach where $d'$ is large, say $d'=20$; then there is a good chance that there will be one hash function that does not introduce any collision... though I'm not sure what the effect of increasing the dimension like that might be on running time. | {
"domain": "cs.stackexchange",
"id": 17682,
"tags": "data-structures, computational-geometry, pareto"
} |
How many people will lose their lives if a big earthquake shakes Bucharest? | Question: Bucharest has become a huge city; it has never seen such a dense population before. From 1947 to 1990 there was a communist government in Romania, and the 1988 Armenia earthquake, with 38,000 deaths, shows us that constructions in the Soviet Union were not strong enough. What would the consequences be if a powerful earthquake occurred near Bucharest, one like the 2023 Turkey–Syria earthquake? In a bad scenario, how many people would lose their lives?
Answer: Zero (in the basic seismological sense, not in terms of people): we can only partially predict earthquakes. An earthquake is characterized by three questions: when, where, how big?
At best, we can answer 2 out of 3 of the mentioned questions at the same time.
First: we can put some bounds on the magnitude of an earthquake hitting Bucharest; see for example this:
https://www.researchgate.net/publication/309634750_Next_Future_Large_Earthquake_in_Romania_A_Disaster_Waiting_to_Happen
Second: if we know how big an earthquake can be, but not when it will happen, we can assume it will happen tomorrow. So we can check scenarios for this hypothetical earthquake against the current knowledge of the building conditions:
https://www.romaniajournal.ro/society-people/350000-buildings-seriously-damaged-45000-people-injured-or-dead-in-case-of-major-earthquake-in-romania/
Finally, "luckily" you have a historical example of a strong earthquake in the region:
https://en.wikipedia.org/wiki/1940_Vrancea_earthquake
So you can assume a lower bound from that earthquake. Please ignore the origin of the building codes: it does not matter whether a building is a relic of the kolkhoz era or has been spilled out from the capitalist world. Even in the land of unbounded freedom there are big issues with construction codes, effective building standards, and their resistance to earthquakes: https://www.nytimes.com/2018/10/04/us/san-francisco-building-codes-earthquakes.html and as well https://www.sfchronicle.com/sf/article/earthquake-building-risk-safety-17782287.php
"domain": "earthscience.stackexchange",
"id": 2662,
"tags": "earthquakes, seismology"
} |
Jenkins doc job error: rosdep thriftpy | Question:
I have this error in the Jenkins doc job of my package: http://build.ros.org/view/Mdoc/job/Mdoc__z_laser_projector__ubuntu_bionic_amd64/2/console
04:09:39 /tmp/ws/src/z_laser_projector/doc/zlp_core.rst:4: WARNING: autodoc: failed to import module u'zlp_core'; the following exception was raised:
04:09:39 Traceback (most recent call last):
04:09:39 File "/usr/lib/python2.7/dist-packages/sphinx/ext/autodoc.py", line 658, in import_object
04:09:39 __import__(self.modname)
04:09:39 File "/tmp/ws/src/z_laser_projector/src/z_laser_projector/zlp_core.py", line 29, in <module>
04:09:39 import thriftpy
04:09:39 ImportError: No module named thriftpy
I'm trying to solve it including dependencies in my package.xml:
<build_depend>python3-thriftpy</build_depend>
<build_export_depend>python3-thriftpy</build_export_depend>
<exec_depend>python3-thriftpy</exec_depend>
However, when I try to build the package at Travis-CI (in private before try in Jenkins), it throws this error:
ERROR: the following packages/stacks could not have their rosdep keys resolved
to system dependencies:
z_laser_projector: Cannot locate rosdep definition for [python3-thriftpy]
The command "rosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO" failed and exited with 1 during .
How could I include this dependency in order to fix the Jenkins job? Am I taking the right way to solve the error?
Originally posted by rluque on ROS Answers with karma: 20 on 2020-11-24
Post score: 0
Answer:
Package.xml dependencies are resolved with rosdep. In order to be resolvable with the default sources, rosdep definitions need to be added. You can read about contributing those here: https://github.com/ros/rosdistro/blob/master/CONTRIBUTING.md#rosdep-rules-contributions
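For illustration, a rosdep rule contribution is essentially a YAML entry in the rosdistro repository mapping the key to platform-specific package names, roughly along these lines (the actual per-platform package names must be verified before submitting a PR):

```yaml
# rosdep/python.yaml (illustrative excerpt, not an actual entry)
python3-thriftpy:
  debian: [python3-thriftpy]
  ubuntu: [python3-thriftpy]
```

Once such a rule is merged and the local rosdep cache is updated (rosdep update), the key in package.xml becomes resolvable.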
Originally posted by nuclearsandwich with karma: 906 on 2020-11-24
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2020-11-25:
Just to add to this: package.xml does not use the name of a Python package or the name of a Debian version of that Python package directly.
All names in package manifests are actually keys, which get looked up in a database (called the rosdep DB). The result of that mapping is then used to check for presence of and to install a package.
So if python3-thriftpy is not already registered in the rosdep DB, you can add the name to your manifest, but that won't help. rosdep will still not be able to "understand" it. That's what
Cannot locate rosdep definition for [python3-thriftpy]
means.
This may seem rather convoluted, but there are good reasons for this system. See #q215059 for a Q&A which discusses this.
Comment by rluque on 2020-12-22:
Thank you very much for your help! I asked for adding python3-thriftpy depend at rosdep. See here: https://github.com/ros/rosdistro/pull/27582 | {
"domain": "robotics.stackexchange",
"id": 35795,
"tags": "ros, ros-melodic, jenkins, build"
} |
How to build OpenCV package? | Question:
I have just started using ROS and completed some basic tutorials in the Wiki.
I have created a workspace and tried building some packages. So far so good.
It has been a while since I did anything technical, and since I have used OpenCV in the past, I thought it could be a good way to start learning ROS.
Can anyone point me to a tutorial on how to use the OpenCV package in ROS? How do I build it and use it from inside the sandbox workspace? Should I pull it from the repo and use rosmake, or build it at the original installation path?
Originally posted by bhala on ROS Answers with karma: 1 on 2012-10-12
Post score: 0
Answer:
OpenCV is a system dependency as of ROS Electric, so you'll have to install it separately rather than using rosmake. You can find installation instructions on the OpenCV website.
To use openCV with ROS, you'll want to use the cv_bridge package of the vision_opencv stack. It basically converts openCV data structures into ROS messages. You can find tutorials for using cv_bridge here.
Originally posted by thebyohazard with karma: 3562 on 2012-10-12
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 11339,
"tags": "opencv"
} |
Texas Hold'em in Java | Question: I have been a programmer for 12 years, mainly ERP software and C development, and
am looking to make a career/specialty change to Java. I've read countless
times that if you want to learn a new language you need to write code, then write
some more code, and then finally write more code. So I've written some code!
I love Poker, so I have written a small Texas Hold'em program. Here is the overview of what it does:
Asks the user for the number of players
Create a deck of cards
Shuffle
Cut the deck
Deal players hole cards
Burns a card
Deals flop
Burns a card
Deals turn
Burns a card
Deals river
Prints the deck to console to show random deck was used
Prints the 'board'
Prints burn cards
Prints players' cards
Evaluates the value of each player's hand (Royal flush, full house, etc...)
There are 6 .java files (see below). I used an interface, created my own comparators, and even implemented some try/catch blocks (although I'm still learning how to use these properly).
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
public class TexasHoldEm {
public static void main(String[] args) throws Exception {
// variables
Deck holdemDeck = new Deck();
int numPlayers = 0;
int cardCounter = 0;
int burnCounter = 0;
int boardCounter = 0;
Board board = new Board();
// initializations
numPlayers = getNumberOfPlayers();
Player[] player = new Player[numPlayers];
/* 3 shuffles just like in real life. */
for(int i=0;i<3;i++){
holdemDeck.shuffle();
}
// Cut Deck
holdemDeck.cutDeck();
// Initialize players
for (int i=0;i<numPlayers;i++){
player[i] = new Player();
}
// Main processing
// Deal hole cards to players
for (int i=0;i<2;i++){
for (int j=0;j<numPlayers;j++){
player[j].setCard(holdemDeck.getCard(cardCounter++), i);
}
}
// Start dealing board
// Burn one card before flop
board.setBurnCard(holdemDeck.getCard(cardCounter++), burnCounter++);
// deal flop
for (int i=0; i<3;i++){
board.setBoardCard(holdemDeck.getCard(cardCounter++), boardCounter++);
}
// Burn one card before turn
board.setBurnCard(holdemDeck.getCard(cardCounter++), burnCounter++);
// deal turn
board.setBoardCard(holdemDeck.getCard(cardCounter++), boardCounter++);
// Burn one card before river
board.setBurnCard(holdemDeck.getCard(cardCounter++), burnCounter++);
// deal river
board.setBoardCard(holdemDeck.getCard(cardCounter++), boardCounter++);
//------------------------
// end dealing board
//------------------------
System.out.println("The hand is complete...\n");
// print deck
holdemDeck.printDeck();
//print board
board.printBoard();
// print player cards
System.out.println("The player cards are the following:\n");
for (int i=0;i<numPlayers;i++){
player[i].printPlayerCards(i);
}
// print burn cards
board.printBurnCards();
//------------------------
// Begin hand comparison
//------------------------
for (int i=0;i<numPlayers;i++){
HandEval handToEval = new HandEval();
// populate with player cards
for (int j=0;j<player[i].holeCardsSize();j++){
handToEval.addCard(player[i].getCard(j),j);
}
//populate with board cards
for (int j=player[i].holeCardsSize();j<(player[i].holeCardsSize()+board.boardSize());j++){
handToEval.addCard(board.getBoardCard(j-player[i].holeCardsSize()),j);
}
System.out.println("Player " + (i+1) + " hand value: " + handToEval.evaluateHand());
}
}
protected static int getNumberOfPlayers() throws Exception{
int intPlayers = 0;
String userInput = "";
// Get number of players from user.
System.out.println("Enter number of players (1-9):");
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
try {
userInput = br.readLine();
} catch (IOException ioe) {
System.out.println("Error: IO error trying to read input!");
System.exit(1);
}
// convert user input to an integer
try {
intPlayers = Integer.parseInt(userInput);
} catch (NumberFormatException nfe) {
System.out.println("Error: Input provided is not a valid Integer!");
System.exit(1);
}
if ((intPlayers<1) || (intPlayers>9)){
throw new Exception("Error: Number of players must be an integer between 1 and 9");
}
return intPlayers;
}
}
Player.java
public class Player {
private Card[] holeCards = new Card[2];
//constructor
public Player(){
}
public Player(Card card1, Card card2){
holeCards[0] = card1;
holeCards[1] = card2;
}
//methods
protected void setCard(Card card, int cardNum){
holeCards[cardNum] = card;
}
protected Card getCard(int cardNum){
return holeCards[cardNum];
}
protected int holeCardsSize(){
return holeCards.length;
}
protected void printPlayerCards(int playerNumber){
System.out.println("Player " + (playerNumber+1) + " hole cards:");
for (int i=0;i<2;i++){
System.out.println(holeCards[i].printCard());
}
System.out.println("\n");
}
}
HandEval.java
import java.util.Arrays;
public class HandEval {
private Card[] availableCards = new Card[7];
private final static short ONE = 1;
private final static short TWO = 2;
private final static short THREE = 3;
private final static short FOUR = 4;
// Constructor
public HandEval(){
}
//methods
protected void addCard(Card card, int i){
availableCards[i] = card;
}
protected Card getCard(int i){
return availableCards[i];
}
protected int numCards(){
return availableCards.length;
}
protected void sortByRank(){
Arrays.sort(availableCards, new rankComparator());
}
protected void sortBySuit(){
Arrays.sort(availableCards, new suitComparator());
}
protected void sortBySuitThenRank(){
Arrays.sort(availableCards, new suitComparator());
Arrays.sort(availableCards, new rankComparator());
}
protected void sortByRankThenSuit(){
Arrays.sort(availableCards, new rankComparator());
Arrays.sort(availableCards, new suitComparator());
}
protected String evaluateHand(){
String handResult = new String();
short[] rankCounter = new short[13];
short[] suitCounter = new short[4];
// initializations
for (int i=0;i<rankCounter.length;i++){
rankCounter[i] =0;
}
for (int i=0;i<suitCounter.length;i++){
suitCounter[i] = 0;
}
// Loop through sorted cards and total ranks
for(int i=0; i<availableCards.length;i++){
rankCounter[ availableCards[i].getRank() ]++;
suitCounter[ availableCards[i].getSuit() ]++;
}
//sort cards for evaluation
this.sortByRankThenSuit();
// hands are already sorted by rank and suit for royal and straight flush checks.
// check for royal flush
handResult = evaluateRoyal(rankCounter, suitCounter);
// check for straight flush
if (handResult == null || handResult.length() == 0){
handResult = evaluateStraightFlush(rankCounter, suitCounter);
}
// check for four of a kind
if (handResult == null || handResult.length() == 0){
handResult = evaluateFourOfAKind(rankCounter);
}
// check for full house
if (handResult == null || handResult.length() == 0){
handResult = evaluateFullHouse(rankCounter);
}
// check for flush
if (handResult == null || handResult.length() == 0){
handResult = evaluateFlush(rankCounter, suitCounter);
}
// check for straight
if (handResult == null || handResult.length() == 0){
// re-sort by rank, up to this point we had sorted by rank and suit
// but a straight is suit independent.
this.sortByRank();
handResult = evaluateStraight(rankCounter);
}
// check for three of a kind
if (handResult == null || handResult.length() == 0){
handResult = evaluateThreeOfAKind(rankCounter);
}
// check for two pair
if (handResult == null || handResult.length() == 0){
handResult = evaluateTwoPair(rankCounter);
}
// check for one pair
if (handResult == null || handResult.length() == 0){
handResult = evaluateOnePair(rankCounter);
}
// check for highCard
if (handResult == null || handResult.length() == 0){
handResult = evaluateHighCard(rankCounter);
}
return handResult;
}
private String evaluateRoyal(short[] rankCounter, short[] suitCounter){
String result = "";
// Check for Royal Flush (10 - Ace of the same suit).
// check if there are 5 of one suit, if not royal is impossible
if ((rankCounter[9] >= 1 && /* 10 */
rankCounter[10] >= 1 && /* Jack */
rankCounter[11] >= 1 && /* Queen */
rankCounter[12] >= 1 && /* King */
rankCounter[0] >= 1) /* Ace */
&& (suitCounter[0] > 4 || suitCounter[1] > 4 ||
suitCounter[2] > 4 || suitCounter[3] > 4)){
// min. requirements for a royal flush have been met,
// now loop through records for an ace and check subsequent cards.
// Loop through the aces first since they are the first card to
// appear in the sorted array of 7 cards.
royalSearch:
for (int i=0;i<3;i++){
// Check if first card is the ace.
// Ace must be in position 0, 1 or 2
if (availableCards[i].getRank() == 0){
// because the ace could be the first card in the array
// but the remaining 4 cards could start at position 1,
// 2 or 3 loop through checking each possibility.
for (int j=1;j<4-i;j++){
if ((availableCards[i+j].getRank() == 9 &&
availableCards[i+j+1].getRank() == 10 &&
availableCards[i+j+2].getRank() == 11 &&
availableCards[i+j+3].getRank() == 12)
&&
(availableCards[i].getSuit() == availableCards[i+j].getSuit() &&
availableCards[i].getSuit() == availableCards[i+j+1].getSuit() &&
availableCards[i].getSuit() == availableCards[i+j+2].getSuit() &&
availableCards[i].getSuit() == availableCards[i+j+3].getSuit())){
// Found royal flush, break and return.
result = "Royal Flush!! Suit: " + Card.suitAsString(availableCards[i].getSuit());
break royalSearch;
}
}
}
}
}
return result;
}
// Straight flush is 5 consecutive cards of the same suit.
private String evaluateStraightFlush(short[] rankCounter, short[] suitCounter){
String result = "";
if (suitCounter[0] > 4 || suitCounter[1] > 4 ||
suitCounter[2] > 4 || suitCounter[3] > 4){
// min. requirements for a straight flush have been met.
// Loop through available cards looking for 5 consecutive cards of the same suit,
// start in reverse to get the highest value straight flush
for (int i=availableCards.length-1;i>3;i--){
if ((availableCards[i].getRank()-ONE == availableCards[i-ONE].getRank() &&
availableCards[i].getRank()-TWO == availableCards[i-TWO].getRank() &&
availableCards[i].getRank()-THREE == availableCards[i-THREE].getRank() &&
availableCards[i].getRank()-FOUR == availableCards[i-FOUR].getRank())
&&
(availableCards[i].getSuit() == availableCards[i-ONE].getSuit() &&
availableCards[i].getSuit() == availableCards[i-TWO].getSuit() &&
availableCards[i].getSuit() == availableCards[i-THREE].getSuit() &&
availableCards[i].getSuit() == availableCards[i-FOUR].getSuit())){
// Found straight flush, break and return.
result = "Straight Flush!! " + Card.rankAsString(availableCards[i].getRank()) + " high of " + Card.suitAsString(availableCards[i].getSuit());
break;
}
}
}
return result;
}
// Four of a kind is 4 cards with the same rank: 2-2-2-2, 3-3-3-3, etc...
private String evaluateFourOfAKind(short[] rankCounter){
String result = "";
for (int i=0;i<rankCounter.length;i++){
if (rankCounter[i] == FOUR){
result = "Four of a Kind, " + Card.rankAsString(i) +"'s";
break;
}
}
return result;
}
// Full house is having 3 of a kind of one rank, and two of a kind of
// a second rank. EX: J-J-J-3-3
private String evaluateFullHouse(short[] rankCounter){
String result = "";
short threeOfKindRank = -1;
short twoOfKindRank = -1;
for (int i=rankCounter.length;i>0;i--){
if ((threeOfKindRank < (short)0) || (twoOfKindRank < (short)0)){
if ((rankCounter[i-ONE]) > 2){
threeOfKindRank = (short) (i-ONE);
}
else if ((rankCounter[i-ONE]) > 1){
twoOfKindRank = (short)(i-ONE);
}
}
else
{
break;
}
}
if ((threeOfKindRank >= (short)0) && (twoOfKindRank >= (short)0)){
result = "Full House: " + Card.rankAsString(threeOfKindRank) + "'s full of " + Card.rankAsString(twoOfKindRank) + "'s";
}
return result;
}
// Flush is 5 cards of the same suit.
private String evaluateFlush(short[] rankCounter, short[] suitCounter){
String result = "";
// verify at least 1 suit has 5 cards or more.
if (suitCounter[0] > 4 || suitCounter[1] > 4 ||
suitCounter[2] > 4 || suitCounter[3] > 4){
for (int i=availableCards.length-1;i>3;i--){
if (availableCards[i].getSuit() == availableCards[i-ONE].getSuit() &&
availableCards[i].getSuit() == availableCards[i-TWO].getSuit() &&
availableCards[i].getSuit() == availableCards[i-THREE].getSuit() &&
availableCards[i].getSuit() == availableCards[i-FOUR].getSuit()){
// Found flush, break and return.
result = "Flush!! " + Card.rankAsString(availableCards[i].getRank()) + " high of " + Card.suitAsString(availableCards[i].getSuit());
break;
}
}
}
return result;
}
// Straight is 5 consecutive cards, regardless of suit.
private String evaluateStraight(short[] rankCounter){
String result = "";
// loop through rank array to check for 5 consecutive
// index with a value greater than zero
for (int i=rankCounter.length;i>4;i--){
if ((rankCounter[i-1] > 0) &&
(rankCounter[i-2] > 0) &&
(rankCounter[i-3] > 0) &&
(rankCounter[i-4] > 0) &&
(rankCounter[i-5] > 0)){
result = "Straight " + Card.rankAsString(i-1) + " high";
break;
}
}
return result;
}
// Three of a kind is 3 cards of the same rank.
private String evaluateThreeOfAKind(short[] rankCounter){
String result = "";
// loop through rank array to check for a rank
// with three or more cards
for (int i=rankCounter.length;i>0;i--){
if (rankCounter[i-1] > 2){
result = "Three of a Kind " + Card.rankAsString(i-1) + "'s";
break;
}
}
return result;
}
// Two pair is having 2 cards of the same rank, and two
// different cards of the same rank. EX: 3-3-7-7-A
private String evaluateTwoPair(short[] rankCounter){
String result = "";
short firstPairRank = -1;
short secondPairRank = -1;
for (int i=rankCounter.length;i>0;i--){
if ((firstPairRank < (short)0) || (secondPairRank < (short)0)){
if (((rankCounter[i-ONE]) > 1) && (firstPairRank < (short)0)){
firstPairRank = (short) (i-ONE);
}
else if ((rankCounter[i-ONE]) > 1){
secondPairRank = (short)(i-ONE);
}
}
else
{
// two pair found, break loop.
break;
}
}
// populate output
if ((firstPairRank >= (short)0) && (secondPairRank >= (short)0)){
if (secondPairRank == (short)0){
// Aces serve as top rank but are at the bottom of the rank array
// swap places so aces show first as highest pair
result = "Two Pair: " + Card.rankAsString(secondPairRank) + "'s and " + Card.rankAsString(firstPairRank) + "'s";
}
else
{
result = "Two Pair: " + Card.rankAsString(firstPairRank) + "'s and " + Card.rankAsString(secondPairRank) + "'s";
}
}
return result;
}
// One pair is two cards of the same rank.
private String evaluateOnePair(short[] rankCounter){
String result = "";
for (int i=rankCounter.length;i>0;i--){
if((rankCounter[i-ONE]) > 1){
result = "One Pair: " + Card.rankAsString(i-ONE) + "'s";
break;
}
}
return result;
}
// high card is the highest card out of the 7 possible cards to be used.
private String evaluateHighCard(short[] rankCounter){
String result = "";
for (int i=rankCounter.length;i>0;i--){
if((rankCounter[i-ONE]) > 0){
result = "High Card: " + Card.rankAsString(i-ONE);
break;
}
}
return result;
}
}
Deck.java
import java.util.Random;
public class Deck{
private Card[] cards = new Card[52];
//Constructor
public Deck(){
int i = 0;
for (short j=0; j<4; j++){
for (short k=0; k<13;k++){
cards[i++] = new Card(k, j);
}
}
}
// Print entire deck in order
protected void printDeck(){
for(int i=0; i<cards.length;i++){
System.out.println(i+1 + ": " + cards[i].printCard());
}
System.out.println("\n");
}
// Find card in deck in a linear fashion
// Use this method if deck is shuffled/random
protected int findCard(Card card){
for (int i=0;i<52;i++){
if (Card.sameCard(cards[i], card)){
return i;
}
}
return -1;
}
//return specified card from deck
protected Card getCard(int cardNum){
return cards[cardNum];
}
protected void shuffle(){
int length = cards.length;
Random random = new Random();
//random.nextInt();
for (int i=0;i<length;i++){
int change = i + random.nextInt(length-i);
swapCards(i, change);
}
}
protected void cutDeck(){
Deck tempDeck = new Deck();
Random random = new Random();
int cutNum = random.nextInt(52);
for (int i=0;i<cutNum;i++){
tempDeck.cards[i] = this.cards[52-cutNum+i];
}
for (int j=0;j<52-cutNum;j++){
tempDeck.cards[j+cutNum] = this.cards[j];
}
this.cards = tempDeck.cards;
}
// Swap cards in array to 'shuffle' the deck.
private void swapCards(int i, int change){
Card temp = cards[i];
cards[i] = cards[change];
cards[change] = temp;
}
}
Card.java
import java.util.*;
public class Card{
private short rank, suit;
private static String[] ranks = {"Ace", "2", "3", "4", "5", "6", "7", "8", "9", "10", "Jack", "Queen", "King"};
private static String[] suits = {"Diamonds", "Clubs", "Hearts", "Spades"};
//Constructor
public Card(short rank, short suit){
this.rank = rank;
this.suit = suit;
}
// Getter and Setters
public short getSuit(){
return suit;
}
public short getRank(){
return rank;
}
protected void setSuit(short suit){
this.suit = suit;
}
protected void setRank(short rank){
this.rank = rank;
}
// methods
public static String rankAsString(int __rank){
return ranks[__rank];
}
public static String suitAsString(int __suit){
return suits[__suit];
}
public @Override String toString(){
return rank + " of " + suit;
}
// Print card to string
protected String printCard(){
return ranks[rank] + " of " + suits[suit];
}
// Determine if two cards are the same (Ace of Diamonds == Ace of Diamonds)
public static boolean sameCard(Card card1, Card card2){
return (card1.rank == card2.rank && card1.suit == card2.suit);
}
}
class rankComparator implements Comparator<Object>{
public int compare(Object card1, Object card2) throws ClassCastException{
// verify two Card objects are passed in
if (!((card1 instanceof Card) && (card2 instanceof Card))){
throw new ClassCastException("A Card object was expected. Parameter 1 class: " + card1.getClass()
+ " Parameter 2 class: " + card2.getClass());
}
short rank1 = ((Card)card1).getRank();
short rank2 = ((Card)card2).getRank();
return rank1 - rank2;
}
}
class suitComparator implements Comparator<Object>{
public int compare(Object card1, Object card2) throws ClassCastException{
// verify two Card objects are passed in
if (!((card1 instanceof Card) && (card2 instanceof Card))){
throw new ClassCastException("A Card object was expected. Parameter 1 class: " + card1.getClass()
+ " Parameter 2 class: " + card2.getClass());
}
short suit1 = ((Card)card1).getSuit();
short suit2 = ((Card)card2).getSuit();
return suit1 - suit2;
}
}
Board.java
public class Board {
private Card[] board = new Card[5];
private Card[] burnCards = new Card[3];
//constructor
public Board(){
}
//methods
protected void setBoardCard(Card card, int cardNum){
this.board[cardNum] = card;
}
protected Card getBoardCard(int cardNum){
return this.board[cardNum];
}
protected void setBurnCard(Card card, int cardNum){
this.burnCards[cardNum] = card;
}
protected Card getBurnCard(int cardNum){
return this.burnCards[cardNum];
}
protected int boardSize(){
return board.length;
}
protected void printBoard(){
System.out.println("The board contains the following cards:");
for(int i =0; i<board.length;i++){
System.out.println(i+1 + ": " + getBoardCard(i).printCard());
}
System.out.println("\n");
}
protected void printBurnCards(){
System.out.println("The burn cards are:");
for(int i =0; i<burnCards.length;i++){
System.out.println(i+1 + ": " + getBurnCard(i).printCard());
}
System.out.println("\n");
}
}
Answer: Let's start with the most basic, the Card class.
import java.util.*;
Except during development, it's customary to explicitly import only the classes you need instead of using wildcards.
public class Card{
private short rank, suit;
It is certainly a valid choice to store rank and suit as shorts, since it's most likely the fastest and most efficient way, but if you want to learn the specifics of Java you may want to look into enumerations.
protected void setSuit(short suit){
this.suit = suit;
}
protected void setRank(short rank){
this.rank = rank;
}
IMHO cards are a prime example for immutable objects. There is no reason to need to change the rank or the suit of a card, so I would drop the setters.
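For instance, a minimal immutable version might look like this sketch (not a drop-in replacement for the poster's class):

```java
// Sketch of an immutable Card: final fields, no setters. Once constructed,
// a card can never change, which makes instances safe to share freely.
class ImmutableCard {
    private final short rank;
    private final short suit;

    ImmutableCard(short rank, short suit) {
        this.rank = rank;
        this.suit = suit;
    }

    short getRank() { return rank; }

    short getSuit() { return suit; }
}
```

With immutability, a deck can hand out the same 52 instances forever instead of worrying about callers mutating them.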
public static String rankAsString(int __rank){
return ranks[__rank];
}
public static String suitAsString(int __suit){
return suits[__suit];
}
It's very unusual to use underscores in variable names in Java, especially two as a prefix. I would simply name the parameters rank and suit, especially since these are class (static) methods and not instance methods, so there is no confusion with the fields.
It may be worth thinking about whether these actually need to be class methods and shouldn't be instance methods instead. If you have other classes which need to convert shorts into the corresponding names independently of the Card class, then class methods would be fine. But I would say that's not the case here, and one should try to hide the fact that suits and ranks are implemented as shorts as much as possible.
public @Override String toString(){
return rank + " of " + suit;
}
// Print card to string
protected String printCard(){
return ranks[rank] + " of " + suits[suit];
}
The Java community is split over the question, if the toString() method should be overridden purely for debugging reasons, or if it should be used in the "business logic". In this "simple" application I don't think you need to distinguish between the two uses, so I would drop printCard() and only use toString().
BTW, it's customary to have annotations before the method modifiers.
public static boolean sameCard(Card card1, Card card2){
return (card1.rank == card2.rank && card1.suit == card2.suit);
}
Instead of implementing your own method it's probably a good idea to override equals() (or at least make this an instance method). If you make Card immutable as I suggested before, it simplifies the implementation to a reference comparison, because you should only ever have one instance of each possible card.
@Override public boolean equals(Object that) {
return this == that;
}
(Although it may be safer to compare rank and suit as a fall back.)
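That fallback might look like the following sketch (remember that overriding equals also obliges you to override hashCode consistently):

```java
// Sketch of the "safer" equals: reference check first, then fall back to
// comparing rank and suit field by field. hashCode is overridden to stay
// consistent with equals, as the Object contract requires.
class FieldEqualsCard {
    private final short rank;
    private final short suit;

    FieldEqualsCard(short rank, short suit) {
        this.rank = rank;
        this.suit = suit;
    }

    @Override
    public boolean equals(Object that) {
        if (this == that) return true;               // fast path: same instance
        if (!(that instanceof FieldEqualsCard)) return false;
        FieldEqualsCard other = (FieldEqualsCard) that;
        return this.rank == other.rank && this.suit == other.suit;
    }

    @Override
    public int hashCode() {
        return 31 * rank + suit;
    }
}
```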
EDIT 1:
First, a quick digression about enumerations: enums have an ordinal number and a compareTo method, allowing you to sort them. You can also assign them properties and create your own order based on those.
The official language guide has examples for suit and rank enumerations, and for extending enumerations with your own properties using planets as an example: http://download.oracle.com/javase/1.5.0/docs/guide/language/enums.html
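As a minimal sketch of that idea (illustrative only, not the poster's code): declaration order defines ordinal() and compareTo(), so putting ACE last also avoids the "ace sorts lowest" wart of the rank array approach.

```java
// Enum-based suits and ranks: constants are typesafe, ordered by
// declaration, and sortable for free via their built-in compareTo().
class EnumCards {
    enum Suit { CLUBS, DIAMONDS, HEARTS, SPADES }

    enum Rank {
        TWO, THREE, FOUR, FIVE, SIX, SEVEN, EIGHT,
        NINE, TEN, JACK, QUEEN, KING, ACE
    }
}
```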
When/if I get to the hand ranking (I haven't looked at it yet) I may be able to give some suggestions on how to implement it with enums.
Next are the Comparators. I don't have much experience with them, so I can only give some general suggestions:
Classes should always start with a capital letter in Java.
You should extend them from Comparator<Card> instead of Comparator<Object>, since you only need to compare cards with each other and not with any other objects.
While it is good extra practice, you may want to skip the suit comparator, because it's not really needed in Poker in general (and Texas Hold'em specifically). Suits never have an order in hand ranking; an order is usually only needed in some "meta" contexts (such as randomly determining the position of the button) that currently don't apply to your program. However, if you do keep it, then you should correct the order of the suits, because the official ranking is (from lowest to highest) "Clubs", "Diamonds", "Hearts" and "Spades".
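A properly typed rank comparator might look like this (a sketch: the Card class is stood in by a minimal nested class, and the comparator name starts with a capital letter):

```java
import java.util.Arrays;
import java.util.Comparator;

public class RankComparator implements Comparator<RankComparator.Card> {
    // minimal stand-in for the reviewed Card class
    static class Card {
        final int rank;
        final int suit;
        Card(int rank, int suit) { this.rank = rank; this.suit = suit; }
    }

    @Override
    public int compare(Card a, Card b) {
        return Integer.compare(a.rank, b.rank); // only the rank matters here
    }

    public static void main(String[] args) {
        Card[] hand = { new Card(12, 0), new Card(3, 2), new Card(9, 1) };
        Arrays.sort(hand, new RankComparator());
        System.out.println(hand[0].rank + " " + hand[1].rank + " " + hand[2].rank); // 3 9 12
    }
}
```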
Next up is the Board. First off, I'd like to say that I'm not sure I'd use a Board class like this in a real-world Poker application. However, off the top of my head I can't think of a different way to do it, and for practice this is perfectly fine.
The only thing I would change is instead of explicitly setting each card by index with setBoardCard(Card card, int cardNum), I would let the Board class track the index internally itself and just use addBoardCard(Card card), because you shouldn't be able to "go back" and change the board cards (ditto for the burn cards).
EDIT 2:
Regarding sorting suits: Ok, that makes sense. I haven't looked at hand evaluation yet. However, that is more a case of grouping than sorting, so maybe there is a different (better?) way to do that. I'll have to think about it.
Tracking indexes: You certainly could use a Collection (or more specifically a List) to do this (and it would be more "Java-like"), but in your case, where you have a fixed maximum number of cards on the board, the arrays are fine. I'd do something like:
public class Board { //abbreviated
private Card[] board = new Card[5];
private Card[] burnCards = new Card[3];
private int boardIndex = 0;
private int burnCardIndex = 0;
//methods
protected void addBoardCard(Card card){
this.board[boardIndex++] = card;
}
protected void addBurnCard(Card card){
this.burnCards[burnCardIndex++] = card;
}
}
Next is Deck:
First, I'd suggest creating just one Random object statically instead of creating one for each call of shuffle and cutDeck.
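That could be sketched as follows. The static field is the point; since the original shuffle isn't shown here, a standard Fisher-Yates body (with int stand-ins for cards) is used just to make the snippet runnable:

```java
import java.util.Random;

public class Deck {
    // one shared generator instead of "new Random()" in every shuffle/cutDeck call
    private static final Random RANDOM = new Random();

    private final int[] cards = new int[52]; // int stand-ins to keep the sketch short

    public Deck() {
        for (int i = 0; i < 52; i++) {
            cards[i] = i;
        }
    }

    protected void shuffle() {
        // Fisher-Yates shuffle using the shared instance
        for (int i = cards.length - 1; i > 0; i--) {
            int j = RANDOM.nextInt(i + 1);
            int tmp = cards[i];
            cards[i] = cards[j];
            cards[j] = tmp;
        }
    }

    int[] cards() { return cards; }
}
```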
Really problematic is the creation of a new temporary Deck object for cutting the deck. This can go wrong very fast, because it contains an unnecessary second set of cards and if there is a bug you'll easily get duplicate cards. Instead just use a new array. Also you can use System.arraycopy to simplify copying from one array to another.
EDIT 3:
Creating a new Deck doesn't only create a new array, it creates a filled array with new cards, which you don't need. Just an array is enough:
protected void cutDeck(){
Card[] temp = new Card[52];
Random random = new Random();
int cutNum = random.nextInt(52);
System.arraycopy(this.cards, 0, temp, 52-cutNum, cutNum);
System.arraycopy(this.cards, cutNum, temp, 0, 52-cutNum);
this.cards = temp;
}
EDIT 4:
There is not much to say about Player except, I'd remove setCard and just use the constructor to assign the cards to the player in order to make the object immutable. Or at least implement an addCard method just like in Board.
The main class:
In getNumberOfPlayers your error handling is somewhat inconsistent. On the one hand you write to System.out (System.err would probably be better) and on the other hand you throw an exception.
For the IOException I wouldn't catch it here, but outside of getNumberOfPlayers. In bigger projects it may make sense to "wrap" it in your own Exception class for this:
try {
userInput = br.readLine();
} catch (IOException ioe) {
throw new HoldemIOException(ioe);
}
Both the caught NumberFormatException and invalid range should throw the same (or related) custom exceptions. Don't just throw a simple Exception, because it's meaningless to others that need to catch it. Example:
try {
intPlayers = Integer.parseInt(userInput);
} catch (NumberFormatException nfe) {
throw new HoldemUserException("Error: Not an integer entered for number of players", nfe);
}
if ((intPlayers<1) || (intPlayers>9)){
throw new HoldemUserException("Error: Number of players must be an integer between 1 and 9");
}
Notice that the causing IOException and NumberFormatException are passed as an argument to the new exception in case they are needed further down the line.
Both HoldemIOException and HoldemUserException could be extended from a basic HoldemException, which in turn extends Exception. A simple "empty" extension such as
class HoldemException extends Exception {}
for all three cases would be enough.
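One wrinkle: constructors are not inherited in Java, and the snippets above call message-only and cause-carrying constructors, so the otherwise "empty" classes still need small forwarding constructors — roughly:

```java
// Forwarding constructors so both the message-only and the wrapped-cause
// usages shown earlier compile.
class HoldemException extends Exception {
    HoldemException(String message) { super(message); }
    HoldemException(String message, Throwable cause) { super(message, cause); }
    HoldemException(Throwable cause) { super(cause); }
}

class HoldemIOException extends HoldemException {
    HoldemIOException(Throwable cause) { super(cause); }
}

class HoldemUserException extends HoldemException {
    HoldemUserException(String message) { super(message); }
    HoldemUserException(String message, Throwable cause) { super(message, cause); }
}
```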
Also, you should never let an exception (especially a self-thrown one) just drop out at the end completely unhandled. Catch all exceptions you know of at a reasonable place, in your case at the call to getNumberOfPlayers:
do {
try {
numPlayers = getNumberOfPlayers();
} catch (HoldemUserException e) {
System.out.println(e.getMessage());
// "Only" a user error, keep trying
} catch (HoldemIOException e) {
System.err.println("Error: IO error trying to read input!");
System.exit(1);
}
} while (numPlayers == 0);
I only added the do while loop, to show how to handle the two types of exception differently. It would be wise to add a proper way for the user to get out of the loop.
Dealing the cards:
Here we have another counter (cardCounter) to track the "position" of the deck. Again, it would be better to have the Deck class track the dealt cards itself. You should consider implementing the Deck as an actual "stack" or a "queue" - it doesn't matter which, since you aren't adding any items. Java provides a Queue interface you could use.
Thinking about it, you could also use the same interface for Player and Board (although you'd need to separate the burn cards into their own object in that case). That would simplify dealing to player.add(deck.remove()), board.add(deck.remove()) and burnCards.add(deck.remove()).
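A Deck along those lines might be sketched like this (ArrayDeque implements Queue; cards are again stood in by ints), so the deck itself remembers what has been dealt:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Queue;

public class QueueDeck {
    private final Queue<Integer> cards = new ArrayDeque<>();

    public QueueDeck() {
        List<Integer> all = new ArrayList<>();
        for (int i = 0; i < 52; i++) {
            all.add(i);
        }
        Collections.shuffle(all);   // shuffle once, then only ever remove
        cards.addAll(all);
    }

    public int deal() {
        // takes the top card; throws NoSuchElementException when the deck is empty
        return cards.remove();
    }

    public int remaining() {
        return cards.size();
    }
}
```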
EDIT 5:
Ok, most likely the final edit.
I've started looking at the hand evaluation and I don't think I can write much about that. You've implemented it in a very procedural way based on your current Card objects and if your goal is to do this in a more Java way you'd probably need to re-write the Card object and create a proper "Hand" object first (most likely based on a Set) and then re-write the evaluation based on that.
Poker hand evaluation is a very complex topic, especially when considering 5 cards out of 7. How this should be implemented will depend on whether you want to focus on "good Java practice" or "speed". If you're really interested in expanding this, then you probably should first read up on the topic - there are several questions on Stack Overflow about this, and probably countless articles on the web - and then repost a new question focusing on hand evaluation. I'll be happy to have a look at that some day - if I have the time.
Just one specific thing: What are the constants ONE, TWO, THREE and FOUR for? Even for a procedural approach these seem to be completely out of place and should most likely be replaced by an appropriate loop where they are used.
Finally, have fun and good luck on your further adventures in Java land! | {
"domain": "codereview.stackexchange",
"id": 173,
"tags": "java, beginner, game, playing-cards"
} |
Simple string joiner in modern C++ | Question: (See the next iteration.)
I have this small template function for conveniently dumping a sequence into a string such that there is a delimiter between two consecutive elements, and no such after the last element:
#include <algorithm>
#include <iostream>
#include <iterator>
#include <sstream>
#include <string>
#include <vector>
using std::cout;
using std::endl;
using std::for_each;
using std::vector;
template<typename T>
std::string join(const T begin,
const T end,
std::string separator,
std::string concluder)
{
const auto length = std::distance(begin, end);
std::stringstream ss;
size_t count = 0;
for (T iter = begin; iter != end; ++iter, ++count)
{
ss << *iter;
if (count < length - 1)
{
ss << separator;
}
}
ss << concluder;
return ss.str();
}
template<typename T>
std::string join(T begin, T end, std::string separator)
{
return join(begin, end, separator, "");
}
template<typename T>
std::string join(T begin, T end)
{
return join(begin, end, ", ");
}
int main() {
vector<vector<int>> matrix = {
{ 1, 2, 3 },
{ 4, 5 },
{ },
{ 10, 26, 29 }
};
for_each(matrix.cbegin(),
matrix.cend(),
[](std::vector<int> a) {
cout << join(a.cbegin(), a.cend()) << endl;
});
}
Any idea how to improve this?
Answer: I assume that you probably want users to be able to pass an InputIterator, not just a ForwardIterator. I will explain why I made that assumption.
Following that assumption, you can rename T into InputIt. Iterators denoting a range are usually called first and last, even though last actually points one past the last element.
template<typename InputIt>
std::string join(InputIt first,
InputIt last,
std::string separator,
std::string concluder)
I've never seen iterators being passed as const. Probably you wanted iterators referring to const objects? It is common to say const iterator, but it doesn't mean that iterators are const.
The count and length variables are somewhat odd. The pair of iterators should denote the range needed, and it is the job of operator!=() to check whether the end is hit.
for (T iter = begin; iter != end; ++iter, ++count)
I've never seen the range traversed like that. People usually use this:
while (first != last)
{
ss << *first++;
ss << separator;
}
It will need to be tweaked a bit to not output the separator at the end. I'll use what @Zeta suggested (awesome idea):
We'll check if the range is empty, then print the first element:
if (first == last)
{
return concluder;
}
std::stringstream ss;
ss << *first;
++first;
And then we swap sequence of output in the loop, so we output separator first, then the element. This way, there won't be separator at the end:
while (first != last)
{
ss << separator;
ss << *first;
++first;
}
Even though preincrement is slightly faster than postincrement, my benchmarking program didn't show a significant difference (less than 1%). Nevertheless, I've used preincrement so as not to make life harder for custom iterators.
Then we simply output concluder:
ss << concluder;
Rather than creating multiple overloads use default arguments:
template <typename InputIt>
std::string join(InputIt first, InputIt last, const std::string& separator = ", ", const std::string& concluder = "")
And const correctness, of course.
Full code:
#include <algorithm>
#include <iostream>
#include <iterator>
#include <sstream>
#include <string>
#include <vector>
using std::cout;
using std::endl;
using std::for_each;
using std::vector;
template<typename InputIt>
std::string join(InputIt first,
InputIt last,
const std::string& separator = ", ",
const std::string& concluder = "")
{
if (first == last)
{
return concluder;
}
std::stringstream ss;
ss << *first;
++first;
while (first != last)
{
ss << separator;
ss << *first;
++first;
}
ss << concluder;
return ss.str();
}
int main() {
vector<vector<int>> matrix = {
{ 1, 2, 3 },
{ 4, 5 },
{},
{ 10, 26, 29 }
};
for_each(matrix.cbegin(),
matrix.cend(),
[](std::vector<int> a) {
cout << join(a.cbegin(), a.cend()) << endl;
});
}
I don't really like the using declarations, but I don't think they are of any importance here.
Using input iterators will yield basic exception safety, since the range is invalid after being traversed once. For other supported iterator types the function has the strong exception safety guarantee.
"domain": "codereview.stackexchange",
"id": 42166,
"tags": "c++, strings, c++14"
} |
Uncertainty and Classical waves | Question: My professor, introducing Heisenberg uncertainty principle, started from the Fourier transform and the classical uncertainty for waves.
He told about the localized impulsive wave
$\delta(x)$
which has defined position but total uncertainty of impulse (its Fourier transform is composed of every possible momentum). On the other hand, a wave of defined impulse is a monochromatic wave, which spreads over the entire position axis and doesn't have a proper localization.
I'm perfectly comfortable with those considerations, but then, out of nothing, he writes
$$\Delta x \: \Delta k \geq 1/2$$
From this it's easy to derive the Heisenberg principle, but I can't understand where the previous formula comes from.
Does it come from Fourier transform properties, from the properties of optical waves, or from something else?
Answer: The Heisenberg Uncertainty Principle has two distinct aspects:
One is the identification of matter as a wave and, in particular, the relationship between a particle's momentum $p$ and its wavelength $\lambda$ through de Broglie's relationship $p=h/\lambda$. This is the crucial bit of physical input.
The second one is purely mathematical, and it's the relationship $\Delta x\, \Delta k\geq 1/2$. This is a general fact about waves and their Fourier transforms, and in a signal-processing context it's known as the bandwidth theorem.
In general, the bandwidth theorem is a bit hard to state precisely - or rather, there are multiple valid slightly different ways to state it, depending on exactly how you define the terms that appear in it and the classes of functions you're considering. However, in all its incarnations it is simply a fundamental fact of the theory of Fourier transforms.
As an example, if you have a complex-valued function $f(x)$ normalized to $\int_{-\infty}^\infty |f(x)|^2\:\mathrm dx=1$ and you define the position uncertainty as
$$
\Delta x=\sqrt{\int_{-\infty}^\infty x^2 \: |f(x)|^2\:\mathrm dx - \left(\int_{-\infty}^\infty x \: |f(x)|^2\:\mathrm dx\right)^2}
$$
the Fourier transform as
$$
\tilde f(k)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{-ikx}f(x)\:\mathrm dx,
$$
and the wavevector uncertainty as
$$
\Delta k=\sqrt{\int_{-\infty}^\infty k^2 \: |\tilde f(k)|^2\:\mathrm dk - \left(\int_{-\infty}^\infty k \: |\tilde f(k)|^2\:\mathrm dk\right)^2}
,$$
then the uncertainty relation
$$\Delta x\:\Delta k\geq \frac12$$
holds at least for all continuously differentiable $f$ such that $f'$, $\hat xf$ and $\hat k\tilde f$ are in $L_2$ (example proof). The uncertainty principle does hold for broader classes of functions, at least in a moral sense, but as I said there are multiple valid variants and it's a pain to list them all. However, for any suitable class of (generalized) functions, and definitions of the uncertainties, as long as the left-hand side's uncertainty product makes sense then it will have some sort of lower bound of order unity. | {
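A quick numerical sanity check (not a proof): for a Gaussian, which saturates the bound, discretizing the integrals above and taking a brute-force Fourier transform gives $\Delta x\,\Delta k \approx 1/2$. All grid sizes and the value of sigma below are arbitrary illustration choices.

```python
import math

# Discrete check of Delta_x * Delta_k >= 1/2 for a Gaussian (equality case).
sigma = 1.3
N = 400
L = 10.0 * sigma                      # spatial window [-L, L]
dx = 2 * L / N
xs = [-L + (i + 0.5) * dx for i in range(N)]

# normalized Gaussian: |f|^2 integrates to 1
f = [(2 * math.pi * sigma ** 2) ** -0.25 * math.exp(-x ** 2 / (4 * sigma ** 2))
     for x in xs]

def spread(vals, grid, step):
    """Standard deviation of |vals|^2 treated as a probability density."""
    p = [abs(v) ** 2 * step for v in vals]
    m1 = sum(g * w for g, w in zip(grid, p))
    m2 = sum(g ** 2 * w for g, w in zip(grid, p))
    return math.sqrt(m2 - m1 ** 2)

delta_x = spread(f, xs, dx)

# brute-force discretization of the Fourier transform defined above
K = 6.0 / sigma
M = 400
dk = 2 * K / M
ks = [-K + (j + 0.5) * dk for j in range(M)]
ft = []
for k in ks:
    re = sum(fv * math.cos(k * x) for fv, x in zip(f, xs)) * dx
    im = -sum(fv * math.sin(k * x) for fv, x in zip(f, xs)) * dx
    ft.append(complex(re, im) / math.sqrt(2 * math.pi))

delta_k = spread(ft, ks, dk)
print(round(delta_x * delta_k, 3))    # 0.5
```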
"domain": "physics.stackexchange",
"id": 31954,
"tags": "quantum-mechanics, optics, heisenberg-uncertainty-principle, fourier-transform"
} |
Accumulator with decay? | Question: I am using an accumulator in the form of
X + [Y-1], X plus the unit delay of the output.
Is there an accumulator that follows this form, but will fall to 0 over time, similar to a differentiator?
Answer: You can use a leaky accumulator (in analogy with a leaky integrator):
$$y(n)=\alpha x(n)+(1-\alpha)y(n-1),\quad 0<\alpha<1\tag{1}$$
You could also just multiply $y(n-1)$ with a constant less than 1. The reason why in (1) $x(n)$ is multiplied by $\alpha$ is to guarantee a DC gain of 1. | {
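In code, equation (1) is a one-liner per sample (the helper name below is mine, not a standard API):

```python
# Leaky accumulator y[n] = alpha*x[n] + (1-alpha)*y[n-1].
def leaky_accumulate(x, alpha=0.1):
    y, out = 0.0, []
    for sample in x:
        y = alpha * sample + (1.0 - alpha) * y
        out.append(y)
    return out

# A unit step settles to 1 (unity DC gain); an impulse response decays to 0.
step = leaky_accumulate([1.0] * 200)
impulse = leaky_accumulate([1.0] + [0.0] * 199)
print(round(step[-1], 3), round(impulse[-1], 6))   # 1.0 0.0
```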
"domain": "dsp.stackexchange",
"id": 2134,
"tags": "signal-analysis, filter-design, math"
} |
Setting up ros on real hardware | Question:
Hello,
I began to work with ROS. On my home computer everything is okay. I can send/receive messages over topics, I can exchange messages over services, and use rosbags, parameters, and launch files. I am aware that nodes can communicate over a network, etc.
What about porting these programmes to real robots? Is it necessary to first set up an OS and install ROS on the robots and start nodes? What is the common practice? Do robots have systems that just respond to commands and transmit info to nodes on a different computer, or is there a way to make the nodes actually run on robots without the need to set up a host OS?
Originally posted by basbursen on ROS Answers with karma: 53 on 2016-04-13
Post score: 3
Answer:
There are many different ways that people architect robot systems that use ROS. For example, there are "native-ROS" robots like the PR2, Baxter, Turtlebot, etc. These robots all have a Linux computer of some kind that has a full-blown OS and a version of ROS running. These robots also have lower level, embedded computers for things like motor control. The main computer and OS on the robot then employs various mechanisms for communicating with these lower level control computers. They may use communication protocols like EtherCAT, RS232, or CAN, and various hardware/driver solutions for implementing these protocols.
Another strategy often employed is to have the robots themselves be non-ROS robots. This is more common for applications like swarm robots, and quadrotors where each individual agent is fairly simple. In this case, ROS typically runs on some off-board computer and some sort of communication tool (rosserial, a custom node, etc) is used to pass data between the ROS world and the robots (control commands, sensor readings, etc.).
In between the two previous ideas would be a very small embedded computer that is capable of running ROS (Raspberry Pi, odroid, BBB, etc.). In this case the small computer could run ROS and natively communicate with other ROS computers, and it could perform its own low-level control and sensor interfacing.
If you want to run a ROS node that you've developed on a normal PC directly on a robot, then that robot must be able to run ROS and must have a host OS.
Originally posted by jarvisschultz with karma: 9031 on 2016-04-13
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by gapal on 2016-09-15:
for "In between the two previous ......(Raspberry Pi, odroid, BBB, etc.). ", which of the following is a possible way to install ROS:
install some flavour of ubuntu and then install ROS
install specific OS like NuttX and then install ROS
other options are also there ( if yes, please inform
Comment by jarvisschultz on 2016-09-15:
Options 1 and 2 are certainly possible with Option 1 likely being the easier path to success, and it seems to me the more common path. Getting ROS compiled on a less common distro can be challenging. | {
"domain": "robotics.stackexchange",
"id": 24372,
"tags": "ros, hardware"
} |
Difference between $R^{a}_{bcd}$ and $R_{abcd}$ Riemann tensor types | Question: What is the intuitive, geometrical meaning regarding the usual mixed Riemann tensor $R^{a}{}_{bcd}$ with respect to its purely covariant counterpart $R_{abcd}$?
Answer: There is no deep intuitive geometrical meaning behind a Riemann tensor with some indices moved up/down. You could say that the two variants are "dual" to each other, loosely speaking. The only new information that arises from raising or lowering an index is the underlying metric tensor. So that's where your geometric interpretation can come in, if you want to think in that way.
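Concretely, the two variants are related by contraction with the metric, so each determines the other:
$$R_{abcd} = g_{ae}\,R^{e}{}_{bcd}, \qquad R^{a}{}_{bcd} = g^{ae}\,R_{ebcd}.$$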
As an example, consider the simplest vector you can think of: the infinitesimal position coordinates $dx^\mu$. What is the geometrical meaning of $dx_\mu$ compared to $dx^\mu$? They represent the same physics, but are still dual to each other. They are dual in the sense that when you combine them and contract the indices, you get a scalar quantity. In special relativity, you will get $ \eta_{\mu\nu} dx^\mu dx^\nu = ds^2$, where $ds^2$ is a scalar quantity that represents the (square of the) proper time. The metric tensor $\eta_{\mu\nu}$ makes the inner product work.
This idea can be generalized to any tensor with any number and positioning of indices, like the Riemann tensor. So raising or lowering indices is, fundamentally, just another way of defining an inner product under the hood, and also defining some kind of scalar.
The important question, then, is does that scalar represent something physically/geometrically?
For $dx^\mu$, the constructed scalar represented proper time.
For momentum $p^\mu$, the constructed scalar represents the mass.
For the metric tensor itself $g_{\mu \nu}$, the constructed scalar represents the number of spacetime dimensions.
For the Riemann tensor, there are four scalars you can construct from a pair of Riemann tensors, three of them being $R_{\mu \nu \rho \sigma} R^{\mu \nu \rho \sigma}$, $R_{\mu \nu} R^{\mu \nu}$ and $R^2$. The first one is often used as a measure of curvature. Another interesting scalar that you can create from a combination of these is the Gauss-Bonnet term. | {
"domain": "physics.stackexchange",
"id": 95674,
"tags": "general-relativity, differential-geometry, metric-tensor, tensor-calculus, curvature"
} |
When using a hash and a collision occurs and linear probing is used how is that item found again | Question: How is the search for the item supposed to find the specific item if the data is all clumped together and all the search is given is the output of the hash function. I don't understand how it knows which of the pieces of data is the one that corresponds to the key if linear probing has been used so it could be any of a number of pieces of data?
Thanks for any help
Answer: By using an equality function over the objects. You're asking whether an object is in a hash table, and it is if some stored object is "equal" to the given object.
And remember that object equality implies hash equality. | {
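A sketch makes the mechanism concrete: the probe sequence starts at the hashed slot, and the stored key in each probed slot is compared for equality until the wanted key is found (or an empty slot is hit, meaning "absent"). Names are illustrative; a real table also handles resizing and deletion.

```python
class LinearProbeTable:
    def __init__(self, size=8):
        self.slots = [None] * size          # each slot: (key, value) or None

    def _probe(self, key):
        # yields the linear probe sequence starting from the hashed slot;
        # caution: this sketch never resizes, so the table must not fill up
        i = hash(key) % len(self.slots)
        while True:
            yield i
            i = (i + 1) % len(self.slots)

    def put(self, key, value):
        for i in self._probe(key):
            if self.slots[i] is None or self.slots[i][0] == key:
                self.slots[i] = (key, value)
                return

    def get(self, key):
        for i in self._probe(key):
            if self.slots[i] is None:
                raise KeyError(key)         # empty slot: key cannot be present
            if self.slots[i][0] == key:     # equality test disambiguates clumps
                return self.slots[i][1]
```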
"domain": "cs.stackexchange",
"id": 8820,
"tags": "hash-tables"
} |
Roomba s9 -- no communication | Question: I connected my Roomba s9+ USB port to my Windows 10 PC, which cannot detect it (nothing under Ports):
I messed around a bit and updated drivers, and now Windows 10 detects Roomba as:
I don't know what to do with this. Putty serial connection to this port receives nothing.
What should I do to troubleshoot the issue?
Answer: Unfortunately, the Roomba S9+ doesn't support the Open Interface spec. So you can't control the robot through the USB port. And neither do any of the other latest generation of Roombas (i.e. the "i", and "j" series)
Only certain models of the 500, 600, 700, 800, and maybe "e" series robots have the serial port hardware and code support to make it "hackable". (And the existence of the port doesn't mean that the code supports it). This feature was phased out a while ago. So the older the robot, the more likely it will be to support the OI spec.
The proper way to control a Roomba is to use a "Create". i.e. either a Create (antique), Create2 (recently retired), or Create3.
Disclaimer: I work at iRobot where I am developing the next generation of consumer robots. However, my postings on this site are my own and don't necessarily represent iRobot's positions, strategies, or opinions. | {
"domain": "robotics.stackexchange",
"id": 2582,
"tags": "serial, roomba"
} |
How does light dissipate? | Question: Wondering how light dissipates in both forward and sideways directions? I am doing a report on is eflux proportional to $d^{2}$ and am struggling to understand the motion and travel of light and how it decreases in flux as distance increases.
Answer: Light doesn't dissipate (see below for important qualifier): instead the 'amount' of light that crosses a given area (the flux) changes.
To see why this is think of a source of light which is (approximately) a point: a light bulb will do. Out of this source is coming a certain, fixed, total amount of light per second. In fact the light bulb is spitting out a certain number of photons per second, and that number is constant (approximately: obviously the light can get brighter or dimmer and this corresponds to the rate that it emits photons changing, but we can ignore that).
Once the light has emitted these photons, they are not created or destroyed: the number of them is constant (again, see caveat below). So the number of photons per second crossing any surface surrounding the light bulb is the same.
So, now, consider two spheres surrounding the bulb, one of which has radius $r_1$, and one of which has radius $r_2$. The number of photons crossing each of these spheres is the same. But the surface area of the spheres is not the same: the first sphere has area $4\pi r_1^2$, and the second has area $4\pi r_2^2$. So the number of photons crossing a given unit area of the spheres is different. And, in fact, if the total number of photons per second is $N$, then the number per unit area is
$$\frac{N}{4\pi r^2}$$
for a sphere of radius $r$. This quantity is the flux, and as you can see it goes like $1/r^2$. The reason it goes like $1/r^2$ is not because light dissipates, but because area of surfaces goes like $r^2$ in three-dimensional space: the total amount of light (of photons) is constant, but the area goes up like $r^2$.
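In numbers (with a made-up emission rate), the same photon count spread over a larger sphere gives the inverse-square behaviour directly:

```python
import math

# Flux from an isotropic point source: N photons/s spread over a sphere.
N = 6.0e18                      # hypothetical emission rate, photons per second

def flux(r):
    return N / (4 * math.pi * r ** 2)

# Doubling the distance quarters the flux.
print(round(flux(2.0) / flux(1.0), 10))   # 0.25
```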
Caveat. I said above that light / photons are neither created nor destroyed. Well, of course, this isn't true: photons can collide with atoms and be absorbed or scattered, with different numbers of lower-energy photons being re-emitted, and various other similar processes can happen. But if there is a vacuum, or a gas which is transparent to light at the frequencies you care about, then to a very good approximation you can assume that they aren't created or destroyed (except by the light itself and by whatever they finally crash into).
Caveat 2: everything above is fairly simple-minded: a real quantum-field-theory person would pick holes in almost all of it. But it gives a good enough impression of what happens, I think. | {
"domain": "physics.stackexchange",
"id": 42540,
"tags": "electromagnetism, visible-light"
} |
How realistic is the i.i.d assumption in the definition of Shannon's entropy? | Question: Let me first say I come from a physics background and have about zero exposure to computer science, so the question may be very naive. Shannon's entropy looks perfectly natural and useful from a statistical/thermal physics point of view, but now I'm trying to understand how it is applied for real computers.
Normally the messages we want to store and process in a computer have grammar and meanings, which seems to suggest the symbols constituting the message must follow some conditional probability distribution, and the utterance of the symbol at the nth position should change the probability distribution of the symbol that will appear at the (n+1)-th position. However, in Shannon's definition symbols are assumed to be independent and identically distributed random variables, which seems to be far from realistic, so how come it is still a useful concept for computers?
Answer: No. Shannon's definition is perfectly general.
There is a special case when the symbols are iid random variables, and you might have seen a formula for that special case (which is indeed simpler), but the definition is fully general. Note that when we write the entropy $H(X)$, you should take the random variable $X$ to be the entire sequence of symbols. Then the standard definition applies directly, and doesn't assume that each individual symbol in $X$ is iid. | {
"domain": "cs.stackexchange",
"id": 11237,
"tags": "information-theory, entropy"
} |
Equations of motion from the Standard Model | Question: For some time now I have been wondering if you could not derive any sort of equations of motion from the Standard Model:
$$\mathscr{L}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}+i\bar{\psi}\not{D}\psi+\bar{\psi}\phi\psi+h.c.+\vert D\phi\vert^2-V(\phi).$$
Since it is a Lagrangian shouldn't we be able to use the Euler-Lagrange equation to find some equations of motion? Since I don't understand the theory myself this might already have been done, or is being done by physicists. However that does not impact my curiosity.
Answer: Yes, it's a normal field theory, so you may derive the equations of motion. They will be the ordinary Maxwell's equations for the electromagnetic field
$$ \partial_\nu F^{\mu\nu} = j^\mu $$
with $j^\mu$ calculated as the sum of the conserved currents for the Dirac field and for the Higgs fields, combined with the Dirac equation coupled to the electromagnetic field (with some Yukawa interaction $y\cdot \phi\psi$ terms), and the Klein-Gordon equation for a charged scalar field with some $V'(\phi)$ and $\psi \psi$ terms added in the right hand side etc. | {
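For the gauge-field part, for instance, the relevant Euler-Lagrange equation reads (schematically, lumping the fermion and Higgs couplings into a source term $A_\mu j^\mu$):
$$\partial_\nu \frac{\partial \mathscr{L}}{\partial(\partial_\nu A_\mu)} - \frac{\partial \mathscr{L}}{\partial A_\mu} = 0 \quad\Longrightarrow\quad \partial_\nu F^{\mu\nu} = j^\mu.$$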
"domain": "physics.stackexchange",
"id": 11064,
"tags": "quantum-field-theory, lagrangian-formalism, standard-model"
} |
errors installing groovy on mac osx homebrew (pydot) | Question:
I am getting the following error while trying to do a fresh install of groovy on my mac (10.7.5) with homebrew:
Any suggestions?
$ rosdep install --from-paths src --ignore-src --rosdistro groovy -y
executing command [sudo pip install -U pydot]
Password:
Downloading/unpacking pydot
Running setup.py egg_info for package pydot
Couldn't import dot_parser, loading of dot files will not be possible.
Downloading/unpacking pyparsing (from pydot)
Running setup.py egg_info for package pyparsing
Traceback (most recent call last):
File "<string>", line 14, in <module>
File "/Users/mike/ros_catkin_ws/build/pyparsing/setup.py", line 9, in <module>
from pyparsing import __version__ as pyparsing_version
File "pyparsing.py", line 629
nonlocal limit,foundArity
^
SyntaxError: invalid syntax
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 14, in <module>
File "/Users/mike/ros_catkin_ws/build/pyparsing/setup.py", line 9, in <module>
from pyparsing import __version__ as pyparsing_version
File "pyparsing.py", line 629
nonlocal limit,foundArity
^
SyntaxError: invalid syntax
----------------------------------------
Command python setup.py egg_info failed with error code 1
Storing complete log in /Users/mike/.pip/pip.log
ERROR: the following rosdeps failed to install
pip: command [sudo pip install -U pydot] failed
Originally posted by Mike Bosse on ROS Answers with karma: 41 on 2013-03-20
Post score: 3
Answer:
Looks like pyparsing switched its default version to py3k only:
http://pyparsing.wikispaces.com/News
I guess you will have to install version 1.5.7 explicitly:
$ sudo pip install pyparsing==1.5.7
I haven't run into this, but I assume that's because I already have pyparsing installed from before this change.
Originally posted by William with karma: 17335 on 2013-03-21
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 13463,
"tags": "ros"
} |
Does a single atom vibrate? | Question: Thermal vibration looks impossible, but does a single atom in vacuum vibrate due to the electrons' or subatomic particles' actions? If yes, how much is that vibration and is there a way to stop it?
Answer:
Vibration is a mechanical phenomenon whereby oscillations occur about an equilibrium point. The word comes from Latin vibrationem ("shaking, brandishing"). The oscillations may be periodic, such as the motion of a pendulum—or random, such as the movement of a tire on a gravel road.
Vibration is a classical mechanics phenomenon. Atoms may be considered classical mechanics points when in bulk and a model with mechanical vibrations used, but a single atom can only be described quantum mechanically about its center of mass.
The quantum mechanical wavefunction has electrons in orbitals and nucleons within the nucleus in energy levels, depending on the quantum mechanical model used. All the periodic functions describing the atom have to do with the probability of measurement, which means many atoms must be measured in order to see the phase space of the possible positions, and any periodic function has to do with the frequencies displayed in the probability distributions, not space distributions for one atom.
Within the envelope of the Heisenberg uncertainty, one cannot know exactly the position of the single atom, because then the uncertainty in its momentum would be very large, but that does not mean it vibrates, it just means that the probability of its location and momentum are correlated. | {
"domain": "physics.stackexchange",
"id": 82926,
"tags": "atomic-physics, standard-model, vibrations"
} |
Longest common substring in linear time | Question: We know that the longest common substring of two strings can be found in $\mathcal O(N^2)$ time complexity.
Can a solution be found in only linear time?
Answer: Let $m$ and $n$ be the lengths of the two given strings.
Linear time assuming the size of the alphabet is constant.
Yes, the longest common substring of two given strings can be found in $O(m+n)$ time, assuming the size of the alphabet is constant.
Here is an excerpt from Wikipedia article on longest common substring problem.
The longest common substrings of a set of strings can be found by building a generalized suffix tree for the strings, and then finding the deepest internal nodes which have leaf nodes from all the strings in the subtree below it.
Building a generalized suffix tree for two given strings takes $O(m+n)$ time using the famous ingenious Ukkonen's algorithm. Finding the deepest internal nodes that come from both strings takes $O(m+n)$ time. Hence we can find the longest common substring in $O(m+n)$ time.
For a working implementation, please take a look at Suffix Tree Application 5 – Longest Common Substring at GeeksforGeeks
(Improved!) Linear time
In fact, the longest common substring of two given strings can be found in $O(m+n)$ time regardless of the size of the alphabet.
Here is the abstract of Computing Longest Common Substrings Via Suffix Arrays by Maxim Babenko and Tatiana Starikovskaya (2008).
Given a set of $N$ strings $A = \{\alpha_1,\cdots,\alpha_N\}$ of total length $n$ over alphabet $\Sigma$ one may ask to find, for each $K$, $2 \le K\le N$, the longest substring $\beta$ that appears in at least $K$ strings in $A$. It is known that this problem can be solved in $O(n)$ time with the help of suffix trees. However, the resulting algorithm is rather complicated (in particular, it involves answering certain least common ancestor queries in $O(1)$ time). Also, its running time and memory consumption may depend on $|\Sigma|$.
This paper presents an alternative, remarkably simple approach to
the above problem, which relies on the notion of suffix arrays. Once
the suffix array of some auxiliary $O(n)$-length string is computed, one
needs a simple $O(n)$-time postprocessing to find the requested longest
substring. Since a number of efficient and simple linear-time algorithms
for constructing suffix arrays has been recently developed (with constant
not depending on $|\Sigma|$), our approach seems to be quite practical.
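To make the sliding-window idea concrete, here is a rough Python sketch (my own illustration, not code from the paper). It builds the suffix array by plain sorting, which is not linear time; the linear-time constructions are exactly what the cited papers supply. It then computes the LCP array with Kasai's algorithm and takes the best LCP between adjacent suffixes that originate in different strings:

```python
def longest_common_substring(s, t):
    """Longest common substring of s and t via a suffix array + LCP array."""
    sep = "\x00"                 # sentinel, assumed absent from both strings
    text = s + sep + t
    n = len(text)
    # Suffix array by naive sorting -- O(n^2 log n) worst case, for illustration only.
    sa = sorted(range(n), key=lambda i: text[i:])

    # Kasai's algorithm: lcp[k] = LCP(text[sa[k]:], text[sa[k+1]:])
    rank = [0] * n
    for k, i in enumerate(sa):
        rank[i] = k
    lcp = [0] * n
    h = 0
    for i in range(n):
        if rank[i] + 1 < n:
            j = sa[rank[i] + 1]
            while i + h < n and j + h < n and text[i + h] == text[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h:
                h -= 1
        else:
            h = 0

    # Best LCP between adjacent suffixes starting in different strings.
    best_len, best_pos = 0, 0
    for k in range(n - 1):
        in_s_1 = sa[k] < len(s)
        in_s_2 = sa[k + 1] < len(s)
        if in_s_1 != in_s_2 and lcp[k] > best_len:
            best_len, best_pos = lcp[k], sa[k]
    return text[best_pos:best_pos + best_len]

print(longest_common_substring("xabcdy", "zabcde"))  # abcd
```

For two strings, the maximum over such adjacent cross-string pairs is exactly the longest common substring, which is the same invariant the linear-time versions exploit.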
Here is the general idea of the algorithm in the paper above. Let string $\alpha$ be the concatenation of all $\alpha_i$ with separating sentinels. Construct the suffix array for $\alpha$ as well as its longest-common-prefix array. Apply a sliding window technique to these arrays to obtain the longest common substrings. | {
"domain": "cs.stackexchange",
"id": 13580,
"tags": "algorithms, time-complexity, strings, longest-common-substring"
} |
How to specify URDF link which attaches to world ground? | Question:
Hello. The system is Gazebo7 (ver 7.16), running ROS Kinetic, in Ubuntu 16.04. The model I am working with looks like so:
The problem is that when I start the simulation my robot jumps up like crazy (13 meters or so) and lands painfully. I was able to figure out that the jump occurs due to the fact that the simulation starts with the robot partially "inside" the ground:
I now understand that the reason for this is that my base_link (the largest box in the center of the robot) which is the coordinate system origin of my model is snapped to the ground of the simulation (figured this out using: https://answers.ros.org/question/225133/part-of-my-robot-gets-embedded-inside-the-ground-for-ros-and-gazebo/). The problem is that the base_link is not the lowest part of the model.
I tried moving the Z value of the origin tag of the base_link in the URDF file, but it didn't have the desired effect:
Is there a tag I can integrate into my URDF that dictates which link snaps to the ground? Or is there a mod I can do in the .world file? I am hoping the simulation can be started with the wheels on the ground rather than inside the ground.
Last piece of information: I saved the world with the robot on all four wheels, and when I launch the simulation all is well; however as soon as I select "Reset world" from the "Edit" menu, the robot jumps super high.
I have attached the URDF for the base_link for reference:
<link name="base_link">
<!--If you do not explicitly specify a <collision> element, Gazebo will
treat your link as "invisible" to laser scanners and collision checking-->
<collision>
<geometry>
<box size="0.65 0.381 0.132"/>
</geometry>
<!-- line below allows us to insert:<origin rpy="${rpy}" xyz="${xyz}"/>-->
<origin rpy="0 0 0" xyz="-0.325 -0.1905 0.066"/>
</collision>
<visual>
<geometry>
<!--box dimensions are in meters: L x W x H, where L x W is a rectangle
and H extrudes it upwards -->
<box size="0.65 0.381 0.132"/>
</geometry>
<!-- line below allows us to insert:<origin rpy="${rpy}" xyz="${xyz}"/>-->
<origin rpy="0 0 0" xyz="-0.325 -0.1905 0.066"/>
<material name="white"/>
</visual>
<inertial>
<!-- line below allows us to insert:<origin rpy="${rpy}" xyz="${xyz}"/>-->
<origin rpy="0 0 0" xyz="-0.325 -0.1905 0.066"/>
<!--all blocks now need a 'mass' argument-->
<mass value="25"/>
<!--This is the 3x3 inertial matrix. See: https://wiki.ros.org/urdf/XML/link -->
<!--where x=length; y=width; z=height. these lines of code came from
Emiliano Borghi's project-->
<inertia ixx="0.33871875" ixy="0" ixz="0" iyy="0.916508333333" iyz="0" izz="0.916508333333"/>
</inertial>
</link>
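(As an aside: the ixx and iyy values above match the standard solid-box formula $I_{xx}=\frac{m}{12}(y^2+z^2)$ and so on; here is a quick check in Python. The variable names are mine, not from the post.)

```python
# Moments of inertia of a solid box about its center of mass,
# with side lengths x, y, z and mass m (values from the URDF above)
m = 25.0
x, y, z = 0.65, 0.381, 0.132

ixx = m / 12 * (y**2 + z**2)   # ≈ 0.33871875  (matches the URDF)
iyy = m / 12 * (x**2 + z**2)   # ≈ 0.91650833  (matches the URDF)
izz = m / 12 * (x**2 + y**2)   # ≈ 1.18263     (the posted URDF appears to repeat the iyy value here)

print(ixx, iyy, izz)
```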
Thanks in advance.
Originally posted by TorontoRoboticsClub on Gazebo Answers with karma: 5 on 2020-02-09
Post score: 0
Answer:
If I understand correctly you don't actually want to attach the robot to the ground, right? You want to spawn the robot above the ground and you want the robot to move freely with respect to the ground. Correct?
How do you launch the simulation? Where does your robot spawning happen? Since you showed us the robot description in URDF, I suppose you use launch files that look something like this.
<?xml version="1.0"?>
<launch>
<arg name="paused" default="false"/>
<arg name="use_sim_time" default="true"/>
<arg name="gui" default="true"/>
<arg name="debug" default="false" />
<arg name="verbose" default="true" />
<param name="world" value="$(arg world)" />
<!-- startup simulated WORLD -->
<include file="$(find robot_gazebo)/launch/world.launch">
<!--<arg name="world" value="$(arg world)"/>-->
<arg name="paused" value="$(arg paused)" />
<arg name="use_sim_time" value="$(arg use_sim_time)" />
<arg name="gui" value="$(arg gui)" />
<arg name="debug" value="$(arg debug)" />
<arg name="verbose" value="$(arg verbose)" />
</include>
<param name="robot_description"
command="$(find xacro)/xacro --inorder '$(find robot_gazebo)/launch/upload_robot.xacro' " />
<!-- Run a python script to the send a service call to gazebo_ros to spawn a URDF robot -->
<node name="urdf_spawner" pkg="gazebo_ros" type="spawn_model" respawn="false" output="screen" args="-urdf -model myrobot -param robot_description "/>
<!-- ros_control robot_control launch file -->
<!-- creates topics for joints state and joint controller -->
<include file="$(find robot_control)/launch/myrobot_control.launch" />
</launch>
If that is correct, find the line where you call the spawn_model node. Where you input the arguments for the node, for example the name of your robot and its description, you can also specify the pose of your robot: the x, y and z coordinates in the world and the orientation R (roll), P (pitch) and Y (yaw). See the example below.
<!-- Run a python script to the send a service call to gazebo_ros to spawn a URDF robot -->
<node name="urdf_spawner" pkg="gazebo_ros" type="spawn_model" respawn="false" output="screen"
args="-urdf -model myrobot -param robot_description -x 0 -y 0 -z 0.5 -R 0 -P 0 -Y 0"/>
Originally posted by kumpakri with karma: 755 on 2020-02-10
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by TorontoRoboticsClub on 2020-02-10:
@kumpakri Not only did you figure out all my code correctly, but you even got the indentation right!!! Thanks a ton!!! with your solution I don't need to mess around with the URDF. It's a very clean solution. Thanks again!!!
Comment by Robot_Enthusiast on 2021-09-27:
I put the -x 0 -y 0 -z 0 after the robot_description in my launch file but my robot is still tilted. A box is the base link and all other links are w.r.t that using fixed joints. The base link is tilted making all other links tilted as well. What should I do to make it stand straight? | {
"domain": "robotics.stackexchange",
"id": 4474,
"tags": "gazebo-7"
} |
Poisson brackets of pre-quantized annihilation and creation operators localized in $p$-space | Question: Let $\phi(\vec{x},t)$ be real classical scalar field and $\pi(\vec{x},t)$ its conjugate momentum. It can be written as Fourier Transform $$\phi(\vec{x},t)=\int \frac{d^3p}{(2\pi)^3}e^{i\vec{p}\vec{x}}\phi(\vec{p},t).$$ Then it is put into Klein-Gordon equation to show that free field can be viewed as system of infinite many harmonic oscillators. In order to quantize free field one introduces symbols (pre-quantized a/c operators):
\begin{equation}
a(\vec{p},t)=\sqrt{\frac{\omega_{\vec{p}}}{2}}\phi(\vec{p},t)+i\sqrt{\frac{1}{2\omega_{\vec{p}}}}\pi(\vec{p},t)
\end{equation}
\begin{equation}
a^{+}(-\vec{p},t)=\sqrt{\frac{\omega_{\vec{p}}}{2}}\phi(\vec{p},t)-i\sqrt{\frac{1}{2\omega_{\vec{p}}}}\pi(\vec{p},t)
\end{equation}
To proceed further in canonical quantization one has to calculate the Poisson bracket $$\{a(\vec{p},t),a^{+}(\vec{q},t)\}.$$ My lecturer's notes give $$\{a(\vec{p},t),a^{+}(\vec{q},t)\}=(2\pi)^{3}i\delta(\vec{p}-\vec{q})$$
without any calculation or explanation. Poisson brackets are defined as (omitting $t$ for clarity): $$\{A,B\}=\int d^{3}z \frac{\delta{A}}{\delta\phi(\vec{z})}\frac{\delta{B}}{\delta\pi(\vec{z})}-\frac{\delta{B}}{\delta\phi(\vec{z})}\frac{\delta{A}}{\delta\pi(\vec{z})},$$ where $\delta$ denotes the functional derivative. The problem with my calculation is that I have to compute the Poisson bracket of these pre-quantized annihilation and creation operators, which are localized in $p$-space, but Poisson brackets are defined in $x$-space; thus I have to take the Fourier transform $a(\vec{x},t)$ of $a(\vec{p},t)$, but that doesn't make much sense to me because it immediately destroys the sharp localization in $p$-space (because I integrate over the whole $p$-space). I would be grateful for a consistent explanation.
Answer: I think I figured it out.
Let
\begin{equation}
\phi(\vec{p},t)=\int d^3x \:\, e^{-i\vec{p}\vec{x}}\phi(\vec{x},t)
\end{equation}
and the same goes for $\pi(\vec{p},t)$. One can then view these Fourier integrals $\phi(\vec{p},t)$ as functionals of $\phi(\vec{x},t)$. Calculating functional derivatives (omitting $t$):
\begin{equation}
\frac{\delta \phi(\vec{p})}{\delta \phi(\vec{z})}=\lim_{\epsilon \rightarrow 0} \frac{\int d^3x \; \, e^{-i\vec{p}\vec{x}}(\phi(\vec{x})+\epsilon \delta(\vec{x}-\vec{z})) \; \; -\int d^3x \; \, e^{-i\vec{p}\vec{x}}\phi(\vec{x})}{\epsilon}=e^{-i\vec{p}\vec{z}}
\end{equation}
and analogously for $\phi(\vec{q})$, $\pi(\vec{p})$, $\pi(\vec{q})$. Calculating the Poisson bracket is now straightforward: just compute the functional derivatives of $a(\vec{p})$ and $a^{+}(\vec{q})$ using the above formula. Then it yields:
\begin{equation}
\{a(\vec{p}),a^{+}(\vec{q})\}=\int d^3z \;\, \frac{-i}{2}\left( \sqrt{\frac{\omega_p}{\omega_q}}e^{-i(\vec{p}-\vec{q})\vec{z}} + \sqrt{\frac{\omega_q}{\omega_p}}e^{-i(\vec{p}-\vec{q})\vec{z}} \right)
\end{equation}
And this integral is equal to
\begin{equation}
-i(2\pi)^3\delta(\vec{p}-\vec{q})
\end{equation}
The factors in the square roots cancel out because the formula is nonzero for $\vec{p}=\vec{q}$ only. The sign is different from the one in the lecture notes, but I checked it: the notes use a different sign convention for the Poisson bracket than I am used to. | {
"domain": "physics.stackexchange",
"id": 35484,
"tags": "field-theory, fourier-transform, poisson-brackets"
} |
Counting letters, words, etc. in the input | Question: I'm trying to learn some coding to broaden my scope of knowledge, and I seemed to have run into a bit of a conundrum.
I'm trying to create a program to output the number of characters, digits, punctuation, spaces, words and lines that are being read in from a file.
Here is the text file I am reading in:
See Jack run. Jack can run fast. Jack runs after the cat. The cat's fur is black. See Jack catch the cat.
Jack says, "I caught the cat."
The cat says, "Meow!"
Jack has caught 1 meowing cat. Jack wants 5 cats, but can't find any more.
Here is my code:
#include <iostream>
#include <fstream>
using namespace std;
int main()
{
ifstream lab3;
string word;
lab3.open("lab3.txt");
int countletters=0,countnum=0,countpunc=0,countspace=0,words=0,line=0;
char character,prevchar = 0;
if(!lab3)
{
cout << "Could not open file" << endl;
return 1;
}
while(lab3.get(character) && !lab3.eof())
{
if(isalpha(character))
{
countletters++;
}
if (isdigit(character))
{
countnum++;
}
if (ispunct(character))
{
countpunc++;
if (isalpha(prevchar))
{
words++;
}
}
if (isspace(character))
{
countspace++;
if (isalpha(prevchar))
{
words++;
}
}
if(character=='\n')
{
line++;
}
prevchar = character;
}
cout << "There are " << countletters << " letters." << endl;
cout << "There are " << countnum << " numbers." << endl;
cout << "There are " << countpunc << " punctuations." << endl;
cout << "There are " << countspace << " spaces." << endl;
cout << "There are " << words << " words." << endl;
cout << "There are " << line << " sentences." << endl;
lab3.close();
return 0;
}
Output:
There are 167 letters.
There are 2 numbers.
There are 18 punctuations.
There are 52 spaces.
There are 47 words.
There are 4 sentences.
Some things I am hoping to learn:
Advice for improvements on my code for learning purposes/efficiency.
Explanation for reading information in from a text file: whether it is letters, numbers, punctuation - whatever you may run across doing this type of data-processing.
Some things I am aware of:
using namespace std; is not good practice - what is the best practice for real world applications?
I am a beginner and this may not be (definitely is not) the cream-of-the-crop coding.
Answer: I see some things that may help you improve your code.
Don't abuse using namespace std
Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid. In my own production code, I usually simply type std:: where needed. It's a little bit more typing but it has two very nice benefits: it absolutely avoids any possibility of name collisions and it makes it absolutely clear which particular namespace is being used for various items.
Make sure you have all required #includes
The code uses std::isalpha but doesn't #include <cctype> or <locale>. It's not clear which one you want here. The functions in <cctype> match the functions you're using, but you should be aware that they are only defined for the C locale. Read this for details.
Omit unused variables
Because word is never used, it can and should be omitted from the program.
Use better naming
I would say that words is a good variable name, but line (singular) is not. Also, we have countletters (plural) but countnum (singular). Some consistency in naming would improve this program.
Initialize and open files in one step
Instead of separately invoking lab3.open() I'd suggest instead writing it this way:
std::ifstream lab3{"lab3.txt"};
Or if you wanted to make the program much more flexible and allow the user to specify a file name, this could be:
std::ifstream lab3{argv[1]};
Decompose the program into smaller parts
Right now, all of the code is in main. That isn't necessarily wrong, but it means that the code is not only hard to reuse but also hard to troubleshoot. Better is to separate the code into small chunks. It makes it both easier to understand and easier to fix or improve. In this case, I'd suggest creating an object that looks like this:
class WordCounter
{
public:
WordCounter();
void count(std::istream &in);
friend std::ostream& operator<<(std::ostream &out, const WordCounter &w);
private:
int letters;
int nums;
int puncts;
int spaces;
int words;
int lines;
};
When you define those functions, then, the main routine can look like this:
int main()
{
std::ifstream lab3{"lab3.txt"};
if (!lab3)
{
std::cout << "Could not open file\n";
return 1;
}
WordCounter counter;
counter.count(lab3);
std::cout << counter;
}
Use consistent formatting
Using consistent formatting helps readers of your code understand it without distraction. This code is mostly well formatted, but the indenting within the while loop is a bit inconsistent.
Use precise terminology
The code claims to be counting sentences, but is actually counting lines. Either number will work if no lines contain more than one sentence, but I did not see any guarantee that this will always be the case.
Don't use std::endl if you don't really need it
The difference between std::endl and '\n' is that '\n' just emits a newline character, while std::endl actually flushes the stream. This can be time-consuming in a program with a lot of I/O and is rarely actually needed. It's best to only use std::endl when you have some good reason to flush the stream and it's not very often needed for simple programs such as this one. Avoiding the habit of using std::endl when '\n' will do will pay dividends in the future as you write more complex programs with more I/O and where performance needs to be maximized.
Omit return 0
When a C or C++ program reaches the end of main the compiler will automatically generate code to return 0, so there is no need to put return 0; explicitly at the end of main.
Note: when I make this suggestion, it's almost invariably followed by one of two kinds of comments: "I didn't know that." or "That's bad advice!" My rationale is that it's safe and useful to rely on compiler behavior explicitly supported by the standard. For C, since C99; see ISO/IEC 9899:1999 section 5.1.2.2.3:
[...] a return from the initial call to the main function is equivalent to calling the exit function with the value returned by the main function as its argument; reaching the } that terminates the main function returns a value of 0.
For C++, since the first standard in 1998; see ISO/IEC 14882:1998 section 3.6.1:
If control reaches the end of main without encountering a return statement, the effect is that of executing return 0;
All versions of both standards since then (C99 and C++98) have maintained the same idea. We rely on automatically generated member functions in C++, and few people write explicit return; statements at the end of a void function. Reasons against omitting seem to boil down to "it looks weird". If, like me, you're curious about the rationale for the change to the C standard read this question. Also note that in the early 1990s this was considered "sloppy practice" because it was undefined behavior (although widely supported) at the time.
So I advocate omitting it; others disagree (often vehemently!) In any case, if you encounter code that omits it, you'll know that it's explicitly supported by the standard and you'll know what it means. | {
"domain": "codereview.stackexchange",
"id": 24074,
"tags": "c++, beginner, strings, file"
} |
Condition for sliding of two blocks placed one over another and connected by spring | Question:
A constant force F is applied to smaller mass till M slides. The spring constant is k. Now it is asked to find k.
I'm confused with the condition at which the block will start sliding. Can someone please help! (Yes I know you might feel this is homework and i'm asking for the solution.I'm not. Just a small conceptual hint will do)
Note: There is no wall on the left. The arrangement is placed on a floor.
Answer:
Just a small conceptual hint will do
No problem. A hint:
Set up Newton's law, $\sum F=ma$. You will see that the sum of all the three forces must equal... yes, what should it equal?
I'm confused with the condition at which the block will start sliding
What is the difference in the equation mentioned above for a point just after it started moving, and the point just before it starts to move? Think about the right-hands side. | {
"domain": "physics.stackexchange",
"id": 22315,
"tags": "homework-and-exercises, newtonian-mechanics, friction, spring"
} |
Differentiating Scalar along a geodesic | Question: I have been studying GR for sometime and doing exercises from Schutz and I have a question about differentiating along a geodesic. Here is what I know. The equation of geodesic in terms of four momentum is given as,$$p^\alpha p^{\beta}_{;\alpha}$$. Now if I want to differentiate a scalar along the geodesic I figured I have to do this, $$\frac{d\phi}{d\tau}$$ Here, $\tau$ is the proper time which is the parameter along the curve. The change of the scalar $\phi$ along the curve is equal to,
$$\frac{d\phi}{d\tau}=\phi,_{\beta}U^\beta$$ Here, $U^\beta$ is the four velocity of the curve. Writing this in covariant derivative form I believe it should just be,$$\frac{d\phi}{d\tau}=\phi_{;\beta}U^\beta$$ So if a scalar (like a dot product between vectors) is constant along the geodesic then I believe it means that,$$\frac{d\phi}{d\tau}=0.$$ Is this correct?
In the question I am trying to solve, the condition is that $p^\alpha\epsilon_\alpha=\text{constant}$ along the geodesic. I am trying to write the condition of what this means to proceed with further calculation.
Answer: So I ended up trying to solve the problem, assuming what I stated above is correct. I will state the problem here for reference: Show that if a vector field $\epsilon^\alpha$ satisfies Killing’s equation then $p^\alpha\epsilon_\alpha$ is constant along the geodesic. So I just took the covariant derivative of $\phi$ as $$\frac{d\phi}{d\tau}=U^\beta\phi_{;\beta}$$ and then set it to zero. Here I defined $\phi = p^\alpha \epsilon_\alpha$. Then I expanded the covariant derivative of $\epsilon$ and got to the point (using the geodesic equation $U^\alpha U^\beta_{;\alpha}=0$) where you get $U^\alpha U^\beta \epsilon_{\alpha;\beta}$. Since $\epsilon^\alpha$ is known to be a Killing vector field, it satisfies Killing's equation $\epsilon_{\alpha;\beta}=-\epsilon_{\beta;\alpha}$, which makes it antisymmetric; hence the contraction $U^\alpha U^\beta \epsilon_{\alpha;\beta}$ is zero, which proves that,
$$\frac{d(p^\alpha\epsilon_\alpha)}{d\tau}=U^\beta(p^\alpha\epsilon_\alpha)_{;\beta}=0$$.
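Spelled out term by term (using $p^\alpha=mU^\alpha$ for a massive particle; the intermediate steps here are my own expansion), the chain is
$$U^\beta(p^\alpha\epsilon_\alpha)_{;\beta}=\epsilon_\alpha U^\beta p^{\alpha}_{;\beta}+p^\alpha U^\beta\epsilon_{\alpha;\beta}=0+m\,U^\alpha U^\beta\epsilon_{\alpha;\beta}=0,$$
where the first term vanishes by the geodesic equation and the second because the symmetric factor $U^\alpha U^\beta$ is contracted with the antisymmetric $\epsilon_{\alpha;\beta}$.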
Hope this made sense. I also checked the solution manual (which I only do once I am out of options to figure it out myself), and this is how it is done there as well. | {
"domain": "physics.stackexchange",
"id": 55095,
"tags": "general-relativity, tensor-calculus, differentiation, geodesics"
} |
In what circumstances would a 1000 years on Earth be a single day | Question: I am not sure if this is the right place to ask this question. If if isn't then please move it accordingly.
I was googling about time dilation and the age of the earth and I ran into an argument by someone claiming the earth to be 7000 years old; he also mentioned in the same paragraph that 1000 years on earth would be a single day. I was reminded of a part in Interstellar where the protagonist spends 1 hour and it is essentially 7 years on earth.
I am not interested in the religious aspects of this, but are there circumstances where there can be this strong a time difference? And if so, what are those circumstances? Also, can this be mathematically calculated?
Would a larger planet moving around a larger sun have this effect?
Somewhat related:
Do planets experience time dilation as they orbit the sun and if so what effect would this have on their orbit
Time dilation for non-physicists
Answer: Time dilation occurs in both Special relativity, where a stationary observer and a moving observer would measure two different time intervals between the same event, and General relativity, where an observer in a strong gravitational field measures a different time interval between two events than an observer in a weaker gravitational field.
In the Special Relativistic case, all we need is an idealized (for the sake of brevity and comprehension) version of the twin paradox. Ignoring acceleration, a spaceship leaves earth with a velocity $v$, travels in a straight line, and turns around at some point and travels to earth with velocity $-v$ (same speed, opposite direction). SR tells us that the time measured by the spaceship $t'$ is related to the time measured on Earth $t$ by $$t'=t\sqrt{1-v^2/c^2},$$ where c is the speed of light. We know that $t'/t=$ (1 day)/(1000 years) = $1/365,000$. Solving for $v$, we can show that if the ship travels at 99.9999999996% the speed of light, and turns around after 12 hours, the ship will return after 1000 years have elapsed in Earth time.
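A quick numerical check of these figures in Python (a sketch; it takes 1000 years as exactly 365,000 days, as above):

```python
import math

# t'/t = 1 day / 1000 years, with 1000 years taken as 365,000 days
ratio = 1.0 / 365_000

# From t'/t = sqrt(1 - v^2/c^2), solve for the required speed v/c
beta = math.sqrt(1.0 - ratio**2)

print(f"v/c = {beta * 100:.10f} %")   # ≈ 99.9999999996 %
# sanity check: this speed reproduces the desired dilation factor
print(math.sqrt(1.0 - beta**2))       # ≈ 2.74e-06 ≈ 1/365000
```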
In the General Relativistic case, we can imagine an observer near something very dense and massive, like a black hole or neutron star, comparing times with an observer on earth. Again idealizing the problem significantly, we can relate the stellar observer's time $t'$ to the Earth time $t$ by the equation $$t'=t\sqrt{1-2GM/rc^2},$$ where G is the gravitational constant, M is the mass of the black hole, r is the distance the stellar observer is from the center of mass of the black hole, and c is again the speed of light. This depends on both M and r, but if we choose an arbitrary M, like the mass of the nearest known black hole, V616 Monocerotis (approx. 10 times the mass of our sun), we can find out how far away we would have to be from its center of mass in order to get the specified time dilation $t'/t=1/365000$. Solving for r, we find that an observer would need to be about 20 nanometers from the event horizon of V616 Monocerotis in order to experience a time dilation of this magnitude. | {
"domain": "physics.stackexchange",
"id": 64592,
"tags": "time, relativity, time-dilation, estimation"
} |
Is it fine to use my domain model as DTO (Entity Framework Core)? | Question: I was reading about anemic data models and rich domain models in DDD. I don't want to follow DDD completely but take rather pragmatic approach and just take some concepts out of it because clean ddd seems like an overkill. What I'm building is Web API (using ASP.NET Core and Entity Framework Core).
I don't want to maintain separate DTOs just for serializing and deserializing my models.
And I want to make most things, like the service layer and my controllers, generic, because it's mostly CRUD and the logic repeats: a lot of boilerplate.
I'm not planning to use any other ORM other than EF Core.
EF Core allows mapping to backing fields:
https://ardalis.com/encapsulated-collections-in-entity-framework-core
https://technet.microsoft.com/en-us/mt842503.aspx
Is it fine if I design my models like this and at the same time use them as my DTOs for serialization and deserialization?
public class Player
{
[JsonIgnore]
public int Id { get; set; }
[NotMapped, Required]
public PlayerInfo ActiveInfo { get; set; } // setter must be public to allow deserialization (or use custom resolver for json net)
[JsonIgnore]
public ICollection<PlayerInfo> PlayerInfos { get; set; } = new HashSet<PlayerInfo>();
private Player()
{
}
public Player(PlayerInfo info)
{
info.IsActive = true;
PlayerInfos.Add(info);
}
public void SetActiveInfo(PlayerInfo playerInfo)
{
var currentlyActiveInfo = PlayerInfos.SingleOrDefault(info => info.IsActive);
if (currentlyActiveInfo != null)
{
currentlyActiveInfo.IsActive = false;
}
playerInfo.IsActive = true;
PlayerInfos.Add(playerInfo);
ActiveInfo = playerInfo;
}
}
This is controller:
public class PlayerController
{
private readonly DbContext _dbContext;
private readonly DbSet<Player> _players;
public PlayerController(DbContext dbContext)
{
_dbContext = dbContext;
_players = dbContext.Set<Player>();
}
// Create new player
[HttpPost]
public ActionResult<Player> Post([FromBody] Player player)
{
if (!ModelState.IsValid)
{
//Handle validation error
}
// No service layer for the sake of simplicity.
player.SetActiveInfo(player.ActiveInfo); // will use _activeInfo
_players.Add(player); // begins tracking entity
if (_dbContext.SaveChanges() == 0)
{
// handle error
}
return player;
}
}
Answer:
I was reading about anemic data models and rich domain models in DDD. I don't want to follow DDD completely but rather take a pragmatic approach and just take some concepts out of it, because clean DDD seems like overkill.
I don't want to maintain separate DTOs just for serializing and deserializing my models.
I'm similarly pragmatic as you, and I agree with you in this situation.
KISS (keep it simple, stupid) applies here. If your application is not sufficiently large, a DTO layer abstraction is not necessary.
I have several colleagues who would disagree with this; it is an open discussion topic. Personally, I disagree with blindly implementing things just because they were needed in other projects. Before I implement something, I need to justify its existence in the current project.
I'm not planning to use any other ORM other than EF Core.
It's not so much about what you plan to use, but rather the chance of this changing in the future, and how prepared you want to be when it turns out you have to change it.
Is it fine if I design my models like this and at the same time use them as my DTOs for serialization and deserialization?
Yes, as long as you're okay with the tighter coupling and don't start violating SRP by loading everything into your entity classes.
However, that is a separate discussion from whether or not you should use DTOs. You could just as well violate SRP on your DTO class. | {
"domain": "codereview.stackexchange",
"id": 30616,
"tags": "c#, asp.net-core, ddd, entity-framework-core"
} |
When studying the hydrogen atom, why do we seek simultaneous eigenfunctions of $\hat{L}^2$, $\hat{L}_z$, and $\hat{H}$? | Question: When solving the Schrödinger equation for the hydrogen atom, textbooks invariably work in a more constraint situation, whereby not only an eigenfunction for the Hamiltonian operator $\hat{H}$ is sought, but one which is simultaneously an eigenfunction for $\hat{L}^2$ and $\hat{L}_z$. My question is why we do this?
A similar question has been asked here, but the answers are unsatisfactory. Yes, I understand we can do it. Yes, I understand that we have lots of freedom in our choice of $\psi(\vec{x})$ if we merely solve for $\hat{H}$. But I want to know why this is the right way to proceed. As far as I understand, it is perfectly physically acceptable for a wave function to not be an eigenfunction of some operator, so why must the wave function for a hydrogen atom be an eigenfunction for $\hat{L}^2$ and $\hat{L}_z$?
Answer: I'd like to elaborate some more on my comment. As you probably know, quantum mechanics take place in a Hilbert space $\mathcal H$, and every physical state is represented by a unit vector in $\mathcal H$. The time evolution of a given state $\vert\psi\rangle$ is given by the Schrödinger equation $i\hbar\vert\dot\psi\rangle=H\vert\psi\rangle$. Using some math we can show that if $\vert\psi(t=0)\rangle$ is an eigenstate of the Hamiltonian $H$, solving this equation becomes particularly easy (just multiply $\vert\psi(t=0)\rangle$ by an appropriate phase factor). And if we can express any arbitrary state as a linear combination of eigenstates, it's still pretty easy (give each term a phase factor of its own). So solving the Schrödinger equation reduces to finding a basis of the Hilbert space made up entirely of eigenstates of the Hamiltonian (because then we can express any arbitrary state as a linear combination of eigenstates, and thus solve the equation as described above).
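Written out (with expansion coefficients $c_n$ and energy eigenvalues $E_n$; the notation here is mine), the recipe described above is
$$\vert\psi(0)\rangle=\sum_n c_n\vert E_n\rangle \quad\Longrightarrow\quad \vert\psi(t)\rangle=\sum_n c_n\, e^{-iE_n t/\hbar}\vert E_n\rangle,$$
i.e. each eigenstate in the expansion just picks up its own phase factor.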
Now the main part of a lecture about the hydrogen atom will consist of finding this basis in different scenarios (with or without spin, with strong/weak/no electromagnetic fields, etc.). And it turns out that the Hamiltonian of the hydrogen atom is degenerate, so we have some free choices when looking for a basis. And it turns out that very conveniently, we have the choice to make the basis states into eigenstates of not only the Hamiltonian, but also additional physically relevant operators: $L^2,L_z,S^2$ and $S_z$.
This does not mean that the basis states are the only physically allowed states. Just that all physically allowed states can be expressed as a linear combination of these eigenstates. It's actually quite unlikely to find an atom in any of those basis eigenstates. For instance, the electron might have a defined energy quantum number $n=2$, defined absolute angular momentum number $l=1$, but the direction of the angular momentum might not be along our arbitrarily chosen $z$-axis, so the electron has no defined quantum number $m_l$. So maybe it's in the state $\vert\psi\rangle=\frac{1}{\sqrt2}(\vert n=2,l=1,m_l=1\rangle+\vert n=2,l=1,m_l=-1\rangle)$, which is not one of our basis states. But we can still use the time evolution of the basis states to calculate the time evolution of this other state, too.
There are also different choices for a basis, and every physically possible state can still be written as a linear combination of those. But the one that is found in the standard literature is the most convenient when trying to solve the Schrödinger equation. | {
"domain": "physics.stackexchange",
"id": 70178,
"tags": "quantum-mechanics, operators, atomic-physics, commutator, observables"
} |
Representation of maximally entangled states of $2n$ qubits with Pauli matrices? | Question: I'm reading this paper, where the author states in eq(A1) that, for a $2n$-qubit maximally entangled state $|\Psi ^+\rangle \langle \Psi ^+|$, we can write it with Pauli operators $P_u\in\left\{ I,X,Y,Z \right\} ^{\otimes n}$ as
$$
|\Psi ^+\rangle \langle \Psi ^+|=\frac{1}{4^n}\sum_u{P_u\otimes P_{u}^{T}} \tag 1.
$$
I can verify this point for the two-qubit case. The density matrix of the two-qubit maximally entangled state is
$$
|\Psi ^+\rangle \langle \Psi ^+| =\frac{1}{\sqrt{2}}\left( |00\rangle +|11\rangle \right) \frac{1}{\sqrt{2}}\left( \langle 00|+\langle 11| \right) =\frac{1}{2}\left( \begin{matrix}
1& 0& 0& 1\\
0& 0& 0& 0\\
0& 0& 0& 0\\
1& 0& 0& 1\\
\end{matrix} \right). \tag 2
$$
It's easy to verify that eq(2) can also be rewritten as the following
$$
\frac{1}{4}\left( I\otimes I+X\otimes X+Z\otimes Z+Y\otimes Y^T \right) =\frac{1}{4}\left( I\otimes I+X\otimes X+Z\otimes Z-Y\otimes Y \right)
\tag 3
$$
which corresponds to the right-hand side of eq(1).
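The two-qubit identity in eq(3) can also be verified numerically. Below is a minimal pure-Python sketch (no external libraries; the helper names are ours, not from the paper):

```python
# Pure-Python check of |Psi+><Psi+| = (1/4) * sum_P  P (x) P^T  for one
# qubit pair (n = 1 in the paper's notation), matching eq(2)/eq(3).

def kron(A, B):
    """Kronecker product of two square matrices given as nested lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def transpose(A):
    return [list(row) for row in zip(*A)]

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

# Accumulate (1/4) * sum over the four Paulis of P (x) P^T.
rho = [[0j] * 4 for _ in range(4)]
for P in (I, X, Y, Z):
    K = kron(P, transpose(P))
    for i in range(4):
        for j in range(4):
            rho[i][j] += K[i][j] / 4

# Expected density matrix of (|00> + |11>)/sqrt(2): 1/2 at the four corners.
expected = [[0.5, 0, 0, 0.5],
            [0, 0, 0, 0],
            [0, 0, 0, 0],
            [0.5, 0, 0, 0.5]]
assert all(abs(rho[i][j] - expected[i][j]) < 1e-12
           for i in range(4) for j in range(4))
```

The same loop with `n`-fold Kronecker products of Pauli strings would check eq(1) for larger $n$, at the cost of $4^n$ terms.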
My question is, how can we rigorously show the general case, i.e. eq(1)?
Answer: Once you've shown it for $n=1$ it follows for any $n$. This is because you can rearrange the expression for a $2n$-qubit maximally entangled state between a pair of subsystems $A$ and $B$ into a tensor product of $n$-many 2-qubit maximally entangled states on those subsystems:
\begin{align}
|\Psi^+_{2n}\rangle_{AB}&:=\sum_{i \in \{0,1\}^n} |i\rangle_{A}\otimes |i\rangle_{B} \tag{1}
\\&=\sum_{i_1,\dots, i_n \in \{0,1\}} |i_1\dots i_n\rangle_{A}\otimes |i_1\dots i_n\rangle_{B} \tag{2}
\\&=\sum_{i_1,\dots, i_n \in \{0,1\}} |i_1\rangle_A |i_1\rangle_B \otimes \cdots \otimes|i_n\rangle_A |i_n\rangle_B \tag{3}
\\&= \left( \sum_{i_1 \in \{0,1\}} |i_1\rangle_A |i_1\rangle_B \right)\otimes \cdots\otimes\left( \sum_{i_n \in \{0,1\}} |i_n\rangle_A |i_n\rangle_B \right) \tag{4}
\\&= |\Psi^+_2\rangle_{AB}\otimes \cdots \otimes |\Psi^+_2\rangle_{AB}.\tag{5}
\end{align}
Here, $|\Psi^+_2\rangle$ is the two-qubit maximally entangled state you chose in your initial question. Then, use your expression for this state in the Pauli basis and afterwards put the subsystems back into their original order:
\begin{align}
|\Psi^+_{2n}\rangle\langle \Psi^+_{2n}|_{AB} &= |\Psi^+_2\rangle \langle \Psi^+_2|_{AB}\otimes \cdots \otimes |\Psi^+_2\rangle \langle \Psi^+_2|_{AB} \tag{6}
\\&= \left( \frac{1}{4} \sum_{P_1 \in \{I,X,Y,Z\}} P_1^A\otimes (P_1^B)^T \right) \otimes \cdots \otimes \left( \frac{1}{4} \sum_{P_n \in \{I,X,Y,Z\}} P_n^A\otimes (P_n^B)^T \right) \tag{7}
\\&= \frac{1}{4^n} \sum_{P_1,\dots,P_n \in \{I,X,Y,Z\}} P_1^A\otimes (P_1^B)^T \otimes \cdots \otimes P_n^A\otimes (P_n^B)^T \tag{8}
\\&= \frac{1}{4^n} \sum_{P_1,\dots,P_n \in \{I,X,Y,Z\}} \left[P_1^A\otimes \cdots \otimes P_n^A\right] \otimes \left[P_1^B \otimes \cdots \otimes P_n^B\right]^T \tag{9}
\\&= \frac{1}{4^n} \sum_{P \in \{I,X,Y,Z\}^n} P\otimes P^T, \tag{10}
\end{align}
where the superscripts denote in which subsystem each Pauli is acting (i.e. some of the Kronecker products are redundant/unnecessary and there is probably a cleaner notation to make this point). | {
"domain": "quantumcomputing.stackexchange",
"id": 4899,
"tags": "quantum-state, density-matrix"
} |
Relation between speed of sound and compressibility | Question: We know that
$c^2=\frac{\partial p}{\partial ρ}$
The adiabatic compressibility is defined as $\beta_S=-\frac{1}{V}\frac{\partial V}{\partial p}$, where the subscript "S" stands for "adiabatic".
How can I show that $c^2=\frac{1}{\rho \beta_S}$ ?
I tried replacing $V$ by $\frac{m}{\rho}$, but then I get $\beta_S=-\rho \frac{\partial \frac{1}{\rho}}{\partial p}$
Answer: Hint: use the chain rule:
$$\beta_S=-\rho \frac{\partial \frac{1}{\rho}}{\partial p}=-\rho\frac{\partial \frac{1}{\rho}}{\partial \rho} \frac{\partial {\rho}}{\partial p}$$
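The identity $c^2=\frac{1}{\rho\beta_S}$ can also be sanity-checked numerically on one concrete equation of state, e.g. the adiabat $p=K\rho^\gamma$ (the constants below are arbitrary illustrative values; the identity itself is independent of the equation of state):

```python
# Numerical check of c^2 = 1/(rho * beta_S) on the adiabat p = K * rho**gamma.
# K, gamma, rho0, m are arbitrary illustrative values.
m = 1.0
K, gamma, rho0 = 2.0, 1.4, 1.3
h = 1e-6

def p_of_rho(rho): return K * rho ** gamma          # adiabatic equation of state
def rho_of_p(p): return (p / K) ** (1.0 / gamma)    # its inverse
def V_of_p(p): return m / rho_of_p(p)               # V = m / rho along the adiabat

p0 = p_of_rho(rho0)

# c^2 = dp/drho, by central difference
c2 = (p_of_rho(rho0 + h) - p_of_rho(rho0 - h)) / (2 * h)

# beta_S = -(1/V) dV/dp, by central difference along the same adiabat
beta_S = -(V_of_p(p0 + h) - V_of_p(p0 - h)) / (2 * h) / V_of_p(p0)

assert abs(c2 * rho0 * beta_S - 1.0) < 1e-6    # i.e. c^2 = 1/(rho * beta_S)
```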
No need to use $PV=nRT$, which does not hold for non-ideal gases. | {
"domain": "physics.stackexchange",
"id": 11806,
"tags": "homework-and-exercises, thermodynamics"
} |
auto-starting new master | Question:
Why does roslaunch auto-start a new master for every remote machine?
Things I've tried:
-playing w/ every variation of default tag for machines defined in the .launch
Things that work but extremely painful at our scale:
-hard coding ROS_MASTER_URI in the /opt/ros/fuerte/env.sh
I'm on the latest fuerte.
Cheers,
-Willy
Originally posted by uuilly on ROS Answers with karma: 13 on 2014-04-07
Post score: 0
Original comments
Comment by ahendrix on 2014-04-07:
I think this was a bug in roslaunch in Fuerte. I suggest upgrading to a newer version of ROS.
Comment by uuilly on 2014-04-07:
Below you said that it was fixed. Is that not the case?
Answer:
According to this question this is fixed in ros-comm 1.8.14. Can you confirm that you have that version or later of ros-comm? (dpkg -l ros-fuerte-ros-comm)
Originally posted by ahendrix with karma: 47576 on 2014-04-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by uuilly on 2014-04-07:
Yup. Saw that thread. I'm up to date on all machines:
ii ros-fuerte-ros-comm 1.8.16-1precise-20130502-1635-+0000
Any other thoughts? | {
"domain": "robotics.stackexchange",
"id": 17565,
"tags": "ros, master, roslauch"
} |
Are these molecular structures of cyclodecapentaene identical? | Question:
I think that the above structures are identical compounds, as I does not exist in the shown form but would exist naturally in form II, so these should be identical.
But these are mentioned to be geometrical isomers in a book that I was reading. Even if I assume that I existed in its given form, then how can they be geometrical isomers? In I, there are two trans alkenes and three cis alkenes, and in II, all five are cis alkenes.
So, are these compounds isomers or identical?
Answer: If you make spring-and-ball models of both compounds, you can see that the two compounds are entirely different, and both are contorted.
The all-cis model on the left has all its hydrogens pointing outward, so the contortions of the carbon chain involve just carbon-carbon twisting to come close to 120 degree angles. The model of the cis-trans molecule on the right has two inward-pointing hydrogens (all the rest point out) which are sterically interacting so much that they introduce strain that is not evident from the carbon-only diagram in the original post. (If I put in all the hydrogens, the image is too cluttered.)
The fact that one is all cis and the other cis-trans certifies that the molecules are different, but the 2-D drawing can seem vague. The NMR spectra of these two compounds would be different. I can imagine the molecule on the left flipping all around (the model sure did!), but the molecule on the right seems pretty stuck.
Thinking in 3-D is easy after you've done it many times. But when you are starting out, it is helpful to make molecular models. And even if you've done it a million times, there are situations where your mental image is so fluid that you need to solidify it with a model, no matter how inadequate the model is. And then you mentally add onto the solid model. | {
"domain": "chemistry.stackexchange",
"id": 14718,
"tags": "organic-chemistry, isomers"
} |
Communication complexity of Independent Set game? | Question: Consider the following communication game.
Independent Set game
Let $[n] = \{0,1,\dots,n-1\}$ and let $r$ be a positive integer smaller than $n/(1+\log n)$.
Alice receives a set $X$ of edges, each edge being a pair of distinct vertices from $[n]$, and Bob receives a set $Y$ of edges.
Alice and Bob must communicate to determine whether the graph $([n],X \cup Y)$ contains an independent set with $r$ vertices.
Let IS$_{n,r}(X,Y) = 1$ if there is such a set, and IS$_{n,r}(X,Y) = 0$ otherwise.
There is a simple nondeterministic protocol to confirm that IS$_{n,r}(X,Y) = 1$; Alice nondeterministically chooses an $r$-vertex set $I$ that forms an independent set in her graph $([n],X)$ and sends Bob a description of this set using at most $r(1+\log n)$ bits.
Bob responds with 1 if $I$ is an independent set in $([n],Y)$ also; otherwise he responds with 0.
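The pieces of this protocol can be sketched directly. Below, `is_independent` is Bob's check and `IS` is a brute-force reference for the function itself; the small example graphs are illustrative, not from the question.

```python
from itertools import combinations

def is_independent(I, edges):
    """Is the vertex set I independent with respect to the given edge set?"""
    return not any((u, v) in edges or (v, u) in edges
                   for u, v in combinations(I, 2))

def IS(n, r, X, Y):
    """Brute force: does the graph ([n], X union Y) have an independent r-set?"""
    E = set(X) | set(Y)
    return any(is_independent(I, E) for I in combinations(range(n), r))

# n = 4, r = 2: Alice holds X, Bob holds Y; {0, 2} is independent in the union.
X = {(0, 1), (1, 2)}
Y = {(2, 3)}
assert IS(4, 2, X, Y)

# A triangle split between the players has no independent pair.
assert not IS(3, 2, {(0, 1), (1, 2)}, {(0, 2)})
```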
On the other hand, there is a fooling set of pairs of sets $(V,V)$ such that $([n],V)$ is a graph formed by $r$ disconnected vertices adjoined to a complete graph with $n-r$ vertices.
This 1-fooling set contains $\binom{n}{r}$ such fooling pairs, so the nondeterministic communication complexity of $\text{IS}_{n,r}$ is at least $\Omega(r\log n)$.
It follows that the simple protocol is optimal up to a small constant factor.
My question is:
Is there a large 0-fooling set for IS$_{n,r}$?
If not, is there an efficient deterministic protocol?
I would also be interested in pointers to the literature if this problem is known.
The closest I have found is the Clique vs. Independent Set game of Yannakakis but this did not seem useful here.
Answer: Yes, there is a large 0-fooling set.
It is enough to prove this for $r=2$, as we can add (the same) $r-2$ independent vertices to all graphs in our construction.
For $r=2$, take every graph that has exactly ${n\choose 2}/2$ edges.
Each such graph, paired with its complement, forms a fooling pair. | {
"domain": "cstheory.stackexchange",
"id": 3194,
"tags": "cc.complexity-theory, lower-bounds, communication-complexity"
} |
Silly question: make deuterium, tritium and Oxygen-17 and Oxygen-18 from nuclear waste | Question: Here is a silly question. Atoms of Nuclear waste isotopes usually have extra neutrons. Can regular hydrogen and oxygen come into contact with those atoms and take the extra neutrons away? Then the radioactive waste elements go to their stable isotopes and there is a source of deuterium, heavy oxygen and heavy water which has its use in nuclear reactor.
Answer: I think you are asking whether it would be possible to take some neutron-enriched nuclear waste and put it in contact with isotopes that can absorb an extra neutron without becoming radioactive. The answer to that question is "not easily." It costs about 8 MeV to remove a neutron from a nucleus; those sorts of energies are only available in nuclear reactions.
It is possible to choose materials for neutron shielding which don't activate very much. A common choice is polyethylene plastic mixed with boron carbide: in that material most of the neutron capture is $\rm ^{10}B\to{}^{11}B$, and to a lesser extent $\rm^1H\to {}^2H$.
A major contributor to long-lived, high-activity waste is plutonium, which is produced when neutrons capture on uranium (especially on U-238) without inducing a fission. That's a hard reaction to avoid, since putting neutrons on uranium is the operating mode for a reactor. | {
"domain": "physics.stackexchange",
"id": 78340,
"tags": "nuclear-physics, radioactivity, nuclear-engineering"
} |
Using LINQ to perform a LEFT OUTER JOIN in 2 DataTables (Multiples criteria) | Question: I know that exists a lot of solutions about how to create an OUTER JOIN between two DataTables.
I created the following code in C#:
DataTable vDT1 = new DataTable();
vDT1.Columns.Add("Key");
vDT1.Columns.Add("Key2");
vDT1.Columns.Add("Data1");
vDT1.Columns.Add("Data2");
vDT1.Rows.Add(new object[] { "01", "ZZ", "DATA1_AAAA", "DATA2_AAAA" });
vDT1.Rows.Add(new object[] { "02", "ZZ", "DATA1_BBBB", "DATA2_BBBB" });
DataTable vDT2 = new DataTable();
vDT2.Columns.Add("Key");
vDT2.Columns.Add("Key2");
vDT2.Columns.Add("Data3");
vDT2.Columns.Add("Data4");
vDT2.Rows.Add(new object[] { "01", "ZZ", "DATA3_AAAA", "DATA4_AAAA" });
vDT2.Rows.Add(new object[] { "01", "ZZ", "DATA3_BBBB", "DATA4_BBBB" });
vDT2.Rows.Add(new object[] { "01", "ZZ", "DATA3_CCCC", "DATA4_CCCC" });
vDT2.Rows.Add(new object[] { "01", "ZZ", "DATA3_DDDD", "DATA4_DDDD" });
DataTable vDT3 = new DataTable();
vDT3.Columns.Add("Key");
vDT3.Columns.Add("Key2");
vDT3.Columns.Add("Data1");
vDT3.Columns.Add("Data2");
vDT3.Columns.Add("KeyTemp1");
vDT3.Columns.Add("KeyTemp2");
vDT3.Columns.Add("Data3");
vDT3.Columns.Add("Data4");
DataRow vDRnull = vDT2.Rows.Add();
var vLINQ = vDT1.AsEnumerable()
.GroupJoin(vDT2.AsEnumerable(),
dr1 => new { key1 = dr1["Key"], key2 = dr1["Key2"] },
dr2 => new { key1 = dr2["Key"], key2 = dr2["Key2"] },
(dr1, result) => dr1.ItemArray.Koncat(
((result.FirstOrDefault<DataRow>() == null)
? vDRnull
: result.FirstOrDefault<DataRow>()).ItemArray));
foreach (var aw in vLINQ)
{
vDT3.Rows.Add(aw);
}
I implemented this extension:
public static T[] Koncat<T>(this T[] x, T[] y)
{
if (x == null) throw new ArgumentNullException("x");
if (y == null) throw new ArgumentNullException("y");
int oldLen = x.Length;
Array.Resize<T>(ref x, x.Length + y.Length);
Array.Copy(y, 0, x, oldLen, y.Length);
return x;
}
This is easy and clean (from my point of view), but I wanted to ask the experts for recommendations, improvements, or whether this method has a performance flaw.
Normally I need to manage huge amounts of data (more than 100K items).
Answer: DataTables are quite powerful and offer lots of the real database functionality. Also as far as joins are concerned a few things are possible, and I'm of the opinion that if someone uses DataTables he also should use the functionality they offer ;-)
In this case, using DataTable joins, your example could look like this:
DataSet ds = new DataSet();
DataTable dt1 = new DataTable();
dt1.Columns.Add("Key");
dt1.Columns.Add("Key2");
dt1.Columns.Add("Data1");
dt1.Columns.Add("Data2");
dt1.Rows.Add(new object[] { "01", "ZZ", "DATA1_AAAA", "DATA2_AAAA" });
dt1.Rows.Add(new object[] { "02", "ZZ", "DATA1_BBBB", "DATA2_BBBB" });
DataTable dt2 = new DataTable();
dt2.Columns.Add("Key");
dt2.Columns.Add("Key2");
dt2.Columns.Add("Data3");
dt2.Columns.Add("Data4");
dt2.Rows.Add(new object[] { "01", "ZZ", "DATA3_AAAA", "DATA4_AAAA" });
dt2.Rows.Add(new object[] { "01", "ZZ", "DATA3_BBBB", "DATA4_BBBB" });
dt2.Rows.Add(new object[] { "01", "ZZ", "DATA3_CCCC", "DATA4_CCCC" });
dt2.Rows.Add(new object[] { "01", "ZZ", "DATA3_DDDD", "DATA4_DDDD" });
//dt2.Rows.Add(new object[] { "02", "ZZ", "DATA5_DDDD", "DATA4_DDDD" });
ds.Tables.Add(dt1);
ds.Tables.Add(dt2);
// specify the relations between the data tables
DataRelation drel = new DataRelation(
"MyJoin",
new DataColumn[] { dt1.Columns["Key"], dt1.Columns["Key2"] },
new DataColumn[] { dt2.Columns["Key"], dt2.Columns["Key2"]});
ds.Relations.Add(drel);
DataTable jt = new DataTable("JoinedTable");
jt.Columns.Add("Key");
jt.Columns.Add("Key2");
jt.Columns.Add("Data1");
jt.Columns.Add("Data2");
jt.Columns.Add("Data3");
jt.Columns.Add("Data4");
ds.Tables.Add(jt);
// create the result table
foreach (DataRow row in dt1.Rows)
{
var childRows = row.GetChildRows("MyJoin");
// mimics left join
var hasChildRows = childRows.Length > 0;
if (!hasChildRows)
{
jt.Rows.Add(row["Key"], row["Key2"], row["Data1"], row["Data2"], null, null);
continue;
}
foreach (var child in childRows)
{
jt.Rows.Add(row["Key"], row["Key2"], row["Data1"], row["Data2"], child["Data3"], child["Data4"]);
}
}
jt.Rows.Dump(); // LINQPad dump
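For intuition about what GroupJoin is doing (and why it scales to 100K+ rows), here is a sketch in Python of the same hash-based left outer join: one pass builds a hash of the second table on the composite key, a second pass streams the first table. Unlike the FirstOrDefault variant in the question, it keeps every match, which is what the DataRelation approach above does. Row layouts mirror the question's tables; the names are illustrative.

```python
from collections import defaultdict

dt1 = [("01", "ZZ", "DATA1_AAAA", "DATA2_AAAA"),
       ("02", "ZZ", "DATA1_BBBB", "DATA2_BBBB")]
dt2 = [("01", "ZZ", "DATA3_AAAA", "DATA4_AAAA"),
       ("01", "ZZ", "DATA3_BBBB", "DATA4_BBBB")]

index = defaultdict(list)          # one pass over dt2: O(len(dt2))
for row in dt2:
    index[row[:2]].append(row)     # composite key (Key, Key2)

joined = []                        # one pass over dt1: O(len(dt1) + matches)
for row in dt1:
    matches = index.get(row[:2]) or [(None, None, None, None)]
    for m in matches:              # keep every match; None-pad when absent
        joined.append(row + m[2:])

assert len(joined) == 3            # "01" matched twice, "02" got a padded row
assert joined[-1] == ("02", "ZZ", "DATA1_BBBB", "DATA2_BBBB", None, None)
```

The overall cost is linear in the input sizes plus the output size, so this pattern (which both GroupJoin and a DataRelation index give you) is appropriate at the 100K scale mentioned in the question.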
As far as your code is concerned I'm not happy with the Koncat method because it modifies the ItemArray that belongs to the DataRow instead of creating a new result.
You actually don't need it because LINQ already has a such a method that you could use like this:
dr1.ItemArray
.Concat(result.Any() ? result.First().ItemArray: Enumerable.Empty<object>())
.ToArray());
It's not necessary to call the FirstOrDefault method two times. It's better to just check if the result has Any rows and then get the First one and its ItemArray or otherwise an empty IEnumerable, finally you turn it into an array and you're done:
var vLINQ = vDT1.AsEnumerable()
.GroupJoin(
vDT2.AsEnumerable(),
dr1 => new { key1 = dr1["Key"], key2 = dr1["Key2"] },
dr2 => new { key1 = dr2["Key"], key2 = dr2["Key2"] },
(dr1, result) =>
dr1.ItemArray
.Concat(result.Any() ? result.First().ItemArray: Enumerable.Empty<object>())
.ToArray());
or if you can use C# 6 even shorter with the ?. and ?? operators
dr1.ItemArray
.Concat(result.FirstOrDefault()?.ItemArray ?? Enumerable.Empty<object>())
.ToArray()); | {
"domain": "codereview.stackexchange",
"id": 16265,
"tags": "c#, performance, linq, .net-datatable, join"
} |
How does KNN work if there are duplicates? | Question: I am currently debating with my friend about how KNN handles duplicates. Suppose K = 2, and we have a 1-dimensional set of data points to illustrate my dilemma
I = {1, 2, 2, 2, 2, 2, 6}
Thus is it correct to say that the K=2 nearest neighbours of data point 1 is simply {2, 2}? Also, same reasoning if we did the 2 nearest neighbours of data point 2 it would be {2, 2} as well not including itself?
Answer: Your reasoning is correct - you should consider duplicate points as separate. You can see that this must be the case in several ways:
Introduction of small random noise to the data should not affect the classifier on average. This would not be the case if you removed duplicates.
Suppose that your input space only has two possible values - 1 and 2, and all points "1" belong to the positive class while points "2" - to the negative. If you remove duplicates in the KNN(2) algorithm, you would always end up with both possible input values as the nearest neighbors of any point, and would have to predict a 50% probability for either class, which is certainly not a consistent classification strategy.
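The question's example can be checked directly with a small sketch (duplicates treated as separate points; one occurrence of the query point itself is excluded):

```python
# K=2 nearest neighbours in I = {1, 2, 2, 2, 2, 2, 6}, duplicates kept.
I = [1, 2, 2, 2, 2, 2, 6]

def knn(query, points, k):
    # Exclude one occurrence of the query itself (it is a member of points).
    rest = list(points)
    rest.remove(query)
    return sorted(rest, key=lambda p: abs(p - query))[:k]

assert knn(1, I, 2) == [2, 2]   # neighbours of 1
assert knn(2, I, 2) == [2, 2]   # neighbours of 2: other duplicates, not itself
```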
The extra question to think about is how to deal with the situation when you have different Y labels assigned to several points with the same X coordinate.
You could mix all classes together and say that the label of each point in the set of duplicates is represented by the distribution of labels in the whole set of points with that coordinate. Alternatively, you could simply sample K random points from the set.
Both strategies should result in a consistent classifier, however in the second case your predictions may not be deterministic. Most practical implementations (including, for example, sklearn.neighbors.KNeighborsClassifier), however, use this simpler, nondeterministic strategy, as it is perhaps slightly more straightforward. | {
"domain": "datascience.stackexchange",
"id": 4157,
"tags": "data-mining, k-nn"
} |
Writing the $U(1)$ gauge transformation as coordinate transformation | Question: In quantum mechanics one can "always" write the way an operator acts on a wave function as a coordinate transformation. As an example we can look at unitary representation of the momentum operator
\begin{equation}
U\psi=e^{\frac{i}{\hbar}ap} \psi= \psi(x)+a\psi '(x)+\frac{a^2}{2}\psi '' (x)+...=\psi(x+a)
\end{equation}
with $p=\frac{\hbar}{i}\frac{d}{dx}$.
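This series can be checked numerically. The sketch below sums the Taylor expansion for $\psi=\sin$ (whose derivatives cycle through $\sin,\cos,-\sin,-\cos$) and compares it with $\sin(x+a)$:

```python
import math

# The operator series psi(x) + a psi'(x) + (a^2/2) psi''(x) + ... should
# reproduce psi(x + a).  Check it for psi = sin.
def translated(x, a, terms=30):
    derivs = [math.sin, math.cos,
              lambda t: -math.sin(t), lambda t: -math.cos(t)]
    return sum(a ** k / math.factorial(k) * derivs[k % 4](x)
               for k in range(terms))

x, a = 0.3, 0.7
assert abs(translated(x, a) - math.sin(x + a)) < 1e-12
```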
Is there a similar way to write the $U(1)$ gauge transformation $\psi(x) \rightarrow \psi'(x)=e^{-\frac{i}{\hbar}\Lambda(x)} \psi(x)$ as a transformation of the coordinates in the sense that $e^{-\frac{i}{\hbar}\Lambda(x)} \psi(x)=\psi(R(x))$ where $R$ is some kind of map or operator? (In the example with the momentum operator $R=x+a$)
Answer: Not all unitary transformations can be written as coordinate transformations. Not sure where you heard this, but it is verifiably false.
By and large symmetries can be separated into two classes: internal symmetries and coordinate symmetries. The $U(1)$ transformation you're asking about is of the former type. Something like a Lorentz transformation would be of the latter type. For coordinate symmetries, what you say is true by definition and is actually realized by the action of such a $U$ on the coordinate basis, $|\boldsymbol{X}\rangle$.
What is true for internal symmetries is the following. Suppose $U(g)$ is a unitary operator forming a representation of some group $G$. That is, for any $g,h\in G$ we have $U(g)U(h)=U(gh)$. Then assuming this to be an honest symmetry of the system, $U(g)$ commutes with the Hamiltonian and hence our states can be labeled by the energy and a group index simultaneously. That is, our wavefunction will look something like $\psi_a$.
Now the corrected version of the assertion in the question would be the existence of a collection of matrices (not operators) $R_a^b(g)$ which form a representation of the group $G$, meaning $R^a_c(g)R^c_b(h)=R^a_b(gh)$ (summation over repeated indices implied), such that
$$
U(g)\psi_a=R^b_a(g)\psi_b.
$$ | {
"domain": "physics.stackexchange",
"id": 73376,
"tags": "quantum-mechanics, gauge-theory, gauge-invariance"
} |
Chaos and the $P{=}NP$ question | Question: I am interested in learning connections between "chaos," or more broadly, dynamical systems, and
the $P{=}NP$ question.
Here is an example of the type of literature I am seeking:
Ercsey-Ravasz, Mária, and Zoltán Toroczkai. "Optimization hardness as transient chaos in an analog approach to constraint satisfaction." Nature Physics 7, no. 12 (2011): 966-970. (Journal link.)
Has anyone written a survey, or made a bibliographic compendium?
Answer: the paper you cite by Ercsey-Ravasz, Toroczkai is very crosscutting; it fits in with/ touches on several lines of NP complete problem/ complexity/ hardness research. the connection to statistical physics and spin glasses was uncovered mainly via "phase transitions" in the mid 1990s and that has led to a large body of work, see Gogioso[1] for a 56p survey. the phase transition coincides with what is known as "the constrainedness knife edge" in [2]. the exact same transition point does turn up in very theoretical analyses of computational complexity/ hardness eg [3] that also relate to early studies of transition point behavior in clique problems by Erdos. [4] is a survey/ video lecture on phase transitions and computational complexity by Moshe Vardi. [5][6] are overviews of phase transition behavior across NP complete problems by Moore, Walsh.
then there is scattered but maybe increasing study of the diverse connections of dynamical systems with computational complexity and hardness in a variety of contexts. there is a general connection found in [7] possibly explaining some of the underlying reasons for frequent "overlap". refs [8][9][10][11] are diverse but show a recurring theme/ crosscutting appearance between NP complete problems and various dynamical systems. in these papers there is some concept/ examples of a hybrid link between discrete and continuous systems.
chaotic behavior in NP complete systems is analyzed in [11].
[12] is a somewhat similar ref to Ercsey-Ravasz/ Toroczkai, in the area of quantum algorithms, in that the dynamical system is found to run "apparently" in P-time:
In this paper we study a new approach to quantum algorithm which is a combination of the ordinary quantum algorithm with a chaotic dynamical system. We consider the satisfiability problem as an example of NP-complete problems and argue that the problem, in principle, can be solved in polynomial time by using our new quantum algorithm.
[1] Aspects of Statistical Physics in Computational Complexity / Gogioso
[2] The constrainedness knife edge / Toby Walsh
[3] The Monotone Complexity of k-Clique on Random Graphs / Rossman
[4] Phase transitions and computational complexity / Moshe Vardi
[5] Phase transitions in NP-complete problems: a challenge for probability, combinatorics, and computer science / Moore
[6] Phase transition behavior / Walsh
[7] Determining dynamical equations is hard / Cubitt, Eisert, Wolf
[8] The steady state system problem is NP-hard even for monotone quadratic Boolean dynamical systems / Just
[9] Predecessor and Permutation Existence Problems for Sequential Dynamical Systems / Barret, Hunt III, Marathe, Ravi, Rosenkrantz, Stearns. (also goes by Analysis Problems for Graphical Dynamical Systems: A Unified Approach Through Graph Predicates)
[10] A Dynamical Systems Approach to Weighted Graph Matching / Zavlanos, Pappas
[11] On chaotic behaviour of some np-complete problems / Perl
[12] New quantum algorithm for studying NP-complete problems / Ohya, Volovich | {
"domain": "cstheory.stackexchange",
"id": 4234,
"tags": "cc.complexity-theory, p-vs-np"
} |
Is this alternate theory of gravity as cause instead of effect plausible? | Question: I came across this video today on YouTube that presents an interesting alternate theory of Gravity and the "missing" matter in the Universe that Dark Matter/Energy theories try to account for.
If I understand it correctly, it asks the question "If it is possible for space-time to be bent without the need for mass, couldn't the gravitational effects we see, which under current theories require more mass than we have discovered, be a consequence of space-time that is somehow dented, either as a remnant of something in the past or just because that is the way it exists? Perhaps in a way the Big Bang resulted in an unfolding of space-time instead of an expansion of it, and the leftover folds account for the extra gravitational effects we see?"
I don't have the scientific background to evaluate this theory fully and I'm interested if the theory presented in this video is remotely plausible to someone who has some expert knowledge of the subject.
Answer: This video is ridiculous. There is no content to it, and it is repeating hackneyed things that are obvious to anyone. Further, the idea could be presented in one sentence of text, saving people a lot of time:
"Can dark matter be spacetime curvature with no matter, and can dark energy be a straightening out of the rubber sheet in a rubber sheet conception of GR?"
The first is silly, since any curvature would necessarily behave as matter. The second is doubly-silly, because the rubber sheet is a terrible analogy for GR, in that it is the wrong components that are curved (space and not time), and the rubber sheet geodesics are repulsive, not attractive.
The rubber sheet is actually a good model of Newtonian gravitational interaction between long parallel rods (or point particles in 2d), to the extent that the rubber sheet is flat (not curved) but has height variations which can be used to drive masses toward each other in the Earth's gravitational field. The curvature of the sheet is second order (in the sense of calculus, it vanishes as the height squared), while the height variations are first order, so it is not a contradiction to imagine a flat sheet with height variations.
The rest of this answer is devoted to a discussion of the finer points.
Curvature without matter
The theory of General Relativity is not just some made up stuff that you can modify willy nilly. You need to be consistent with the basic general principles of physics.
Suppose you have a space region which is curved, and you put it in a big constant slowly varying gravitational field, by bringing a big black hole close, say, what happens? The region of curvature must accelerate toward the black hole, by the equivalence principle. It must fall into the black hole by the black hole horizon property, and it must increase the mass of the black hole, by the horizon area theorem, which is the law of entropy increase.
So you have an object which responds to gravity just like any other matter, and it is matter by definition, if you like, whether you see something there or not. The total mass-energy is determined from the curvature.
The inverse problem
There is a cute point of view very close to this idea which is the following
Einstein equations relate $T_{\mu\nu}$ to $G_{\mu\nu}$. Solving for the metric is hard. But what if you just take any old metric and solve for $T$ (this is trivial), can't you then find infinitely many trivial solutions to GR?
The issue with this idea is that if you specify the curvature arbitrarily, the matter you get will be grossly unphysical, in that it will have negative energy, it will have matter flow faster than the speed of light, and it will have speed of sound greater than the speed of light in many cases. The restrictions on the inverse problem give rise to the energy conditions, which, in addition to the field equation, form the physical content of GR. Here are two of them:
Null energy condition/Weak energy condition: The (borderline) energy component of T along any null-null direction is nonnegative.
Strong energy condition: The energy component of T along any timelike or null direction exceeds the sum of the pressure components along the diagonal (in a local orthonormal frame).
The weak energy condition can be colloquially restated as follows
Gravity always focuses light
And heuristically, perhaps precisely, as the condition
You can't use a local gravity field to get a light signal between two far away points faster than the speed of light. (see this question: Does a Weak Energy Condition Violation Typically Lead to Causality Violation?)
The strong energy condition can be colloquially restated as follows:
The pressure in matter as a function of density, when integrated from zero density, never has the speed of sound exceed the speed of light.
This condition has an implicit assumption that the pressure be found by classical thermodynamics, by going from a vacuum by adding density at a given temperature. It can be violated if you just have coherent particles making a scalar field expectation value in a vacuum, without making a superluminal speed of sound, just because the perturbations away from the vacuum still obey the strong energy condition, although the vacuum itself does not.
These two conditions are notable in that they allow you to describe two types of results. The weak energy condition gives theorems which are universal to GR in any setting, like closed-trapped surface singularity theorems, and area theorem, while the strong energy condition is used for more special situations where there are no scalar fields giving a bulk cosmological constant, like the big-bang singularity theorem (which fails with scalar field driven inflation).
If the warping introduced by hand violates the weak energy condition, it is difficult to see how it could not be used to signal faster than light, or to violate positive energy and make a perpetual motion machine. If it violates the strong energy condition, and it is not a homogenous scalar field, it is difficult to see how little bumps can't be used to propagate sound faster than light.
So it is believed that only homogeneous classical fields violate the strong energy condition, and that nothing classical violates the weak energy condition.
Inflation
The theory of inflation postulates that there is a homogeneous scalar field which had a large expectation value near the big bang, and a large energy density. This gives rise to accelerated expansion, which makes the universe equilibrate to a small-horizon-distance sphere called a deSitter space.
The deSitter phase lasts a short time, and seeds the modern era, where we are expanding normally. But we still see some residual deSitter-like acceleration, and this is almost certainly due to some residual field energy in our vacuum, a residual scalar (or many scalars) which is left behind in our vacuum after inflation ended.
These ideas are very natural in GR, and in fact are predictions of GR. So it is not reasonable to say that accelerated expansion and dark matter point to a violation of GR. This is like saying that the discovery of Neptune invalidates Newton's model of the solar system, because it alters the orbit of Uranus. That's included in the theory.
But in the special case of vacuum energy, it is philosophically possible in classical GR to consider the energy as part of the equations, or part of the matter, and both positions are viable (classically). The name "Dark energy" reflects the philosophical position that it should be considered matter. The name "cosmological constant" reflects the other view that it should be considered part of the Einstein equations.
These two points of view can't really be distinguished from each other classically in any positivist way, so the two positions are classically equivalent. Whether one is true or the other is completely moot. Quantum mechanically, there is the question of whether deSitter space is stable, and if it is unstable to decay to a zero (or perhaps negative) cosmological constant, then this might be interpreted as resolving the question in favor of the "dark energy" point of view. | {
"domain": "physics.stackexchange",
"id": 2132,
"tags": "gravity, spacetime"
} |
Anyone know what came of "Pyro"? | Question:
About 15-20 years ago there was an active project called Pyro. See for example: https://dl.acm.org/doi/pdf/10.1145/1047568.1047569 "In this article we describe a programming framework called Pyro, which provides a set of abstractions that allows students to write platform-independent robot programs." Pyro worked with ROS and maybe some other frameworks. I am trying to connect to that project, if anything is still happening there. Even if not, I'd like to take a look at whatever code is still left over. Does anyone know someone related to that project?
Originally posted by pitosalas on ROS Answers with karma: 628 on 2020-05-24
Post score: 0
Original comments
Comment by gvdhoorn on 2020-05-24:
I'll not close this for now -- due to the connection with ROS and the chance that there *may be* someone here who recognises the name, but I would really recommend you post this somewhere else.
A quick Google shows that at least one of the authors of the paper you link has a Github profile, so it should not be too difficult to get in touch with him.
Comment by pitosalas on 2020-05-24:
Thanks. Indeed I've emailed people and have posted it elsewhere. I am casting a wide net :)
Comment by gvdhoorn on 2020-05-24:
For a normal question this would be a reason to immediately close it, as cross-posting is not allowed.
Please edit your question and link to the other places you've posted.
Answer:
From following up with the authors, I believe that Pyro is no longer a supported project, although the code may still be available.
Originally posted by pitosalas with karma: 628 on 2020-07-06
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 35006,
"tags": "ros-melodic"
} |
Eigenstate of $S_x^2$ and measurement | Question: How could I go about finding eigenstates of a given operator, namely $S_x^2$ in basis with good $S^2$ and $S_z$, that is $|s, m\rangle$?
I had the idea of writing it as a sum: $4S_x^2=S_+^2+S_-^2+S_+S_-+S_-S_+$.
I know I have to solve the eigenvalue problem $$\left(S_+^2+S_-^2+S_+S_-+S_-S_+\right)|\psi\rangle=4\lambda|\psi\rangle.$$ I also know to write a general wavefunction in the form $|\psi\rangle =a|11\rangle+b|10\rangle+c|1,-1\rangle$, then check how the operator acts upon this wavefunction, but I am unsure how to proceed further.
I also have to measure outcomes on a spin-1 state, that is $|\psi\rangle =a|11\rangle+b|10\rangle+c|1,-1\rangle$, measured consecutively with $S_x^2$, $S_y^2$ and $S_z^2$. I assume I have to find eigenstates of $S_y^2$ as well in order to complete the measurement. I already know that $S_z^2$ acts trivially: $S_z^2|s, m\rangle=\hbar^2m^2|s, m\rangle$.
Answer: I can give you the strategy without telling you the answer.
The strategy is to know how the operator acts on any state of the form $|s,m\rangle$ and thus on any superposition of such states over different values of $m$. If you already know the action of $S_x$ on any state $|s,m\rangle$ then you can skip ahead. If not, for an operator like $S_x$ or $S_x^2$, it is indeed easier to write things in terms of the raising and lowering operators $S_\pm$. From some definition or by the commutation relation with $S_x$, you should know that $$S_\pm|s,m\rangle\propto |s,m\pm 1\rangle.$$ This is an important starting point! Then you must find the normalization constant (another standard exercise) and you can proceed.
One now ostensibly knows the relation
$$S_\pm |s,m\rangle=\mathcal{N}_\pm(s,m)|s,m\pm1\rangle.$$ One can use this to write $S_x^2\sum_m\psi_m|s,m\rangle$ for any amplitudes $\psi_m$. Then we indeed just solve the eigenvalue problem
$$S_x^2\sum_m\psi_m|s,m\rangle=\lambda \sum_m\psi_m|s,m\rangle$$ by using orthonormality of the $S_z$ eigenstates to write the $2s+1$ coupled linear equations
$$\lambda \psi_n=\langle s,n|S_x^2\sum_m \psi_m|s,m\rangle.$$
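As an illustration of this strategy (a sketch of my own, not part of the original answer; units $\hbar=1$, numpy assumed), one can build $S_x$ from the ladder-operator matrix elements in the $|1,m\rangle$ basis and diagonalize $S_x^2$ directly:

```python
import numpy as np

s = 1
ms = np.arange(s, -s - 1, -1)   # basis ordering m = 1, 0, -1, matching |11>, |10>, |1,-1>
dim = len(ms)

# S+ |s,m> = sqrt(s(s+1) - m(m+1)) |s,m+1>: column j has input m = ms[j],
# output m+1 sits one row up because the basis is ordered by descending m
Sp = np.zeros((dim, dim))
for j, m in enumerate(ms):
    if m < s:
        Sp[j - 1, j] = np.sqrt(s * (s + 1) - m * (m + 1))
Sm = Sp.T                        # the lowering operator is the adjoint

Sx = (Sp + Sm) / 2               # S_x = (S+ + S-)/2
Sx2 = Sx @ Sx

evals, evecs = np.linalg.eigh(Sx2)
print(np.round(evals, 10))       # eigenvalues of S_x^2 in units of hbar^2
```

The eigenvalues come out as $\{0, 1, 1\}$ (in units of $\hbar^2$), so just as for $S_z^2$, the spectrum of $S_x^2$ for spin 1 is $m^2\hbar^2$ with the $m=\pm 1$ eigenvalue doubly degenerate.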
Of course, a more specific strategy involves knowing or guessing the eigenstates of $S_x$ and then using those to construct eigenstates of $f(S_x)$ for any function $f$, but that's the shortcut. | {
"domain": "physics.stackexchange",
"id": 90963,
"tags": "quantum-mechanics, homework-and-exercises, quantum-spin, eigenvalue, quantum-measurements"
} |
Basic Spin or Double Cover Experiment | Question: We know that Spin is described with $SU(2)$ and that $SU(2)$ is a double cover of the rotation group $SO(3)$. This suggests a simple thought experiment, to be described below. The question then is in three parts:
Is this thought experiment theoretically sound?
Can it be conducted experimentally?
If so what has been the result?
The experiment is to take a slab of material in which there are spin objects e.g. electrons all (or most) with spin $\uparrow$. Then rotate that object $360$ degrees (around an axis perpendicular to the spin direction), so that macroscopically we are back to where we started. Measure the electron spins. Do they point $\downarrow$?
Answer: I think that you are confused. When you rotate something by 360 degrees, you won't change the direction in space of anything. You will only change the wave function to minus itself - if there is an odd number of fermions in the object (which is usually hard to count for large objects).
If you have electrons with spins pointing up and you rotate them around the vertical axis by any angle, whether it's 360 degrees or anything else, you will still get electrons with spin pointing up. This is about common sense - many spins with spin up give you a totally normal, "classical" angular momentum that can be seen and measured in many ways.
The flip of the sign of the wave function can't be observed by itself because it is a change of phase and all observable probabilities only depend on the density matrix $\rho=|\psi\rangle \langle\psi|$ in which the phase (or minus sign) cancels. The phase - or minus sign - has nothing to do with directions in space. It is just a number. In particular, it is incorrect to imagine that complex numbers are "vectors", especially if it leads you to think that they're related to directions in spacetime. They're not.
You would have to prepare an interference experiment of an object that hasn't rotated with the "same" object rotated by 360 degrees - and it's hard for macroscopic objects because the "same" object quickly decoheres and you must know whether it has rotated or not, so no superpositions can be produced. ;-)
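The sign flip under a $2\pi$ rotation, and the $\cos^2(\alpha/2)$ probability discussed next, can be checked numerically for a single spin 1/2 (a minimal sketch of my own; numpy assumed):

```python
import numpy as np

sigma_y = np.array([[0, -1j], [1j, 0]])

def R(alpha):
    # exp(-i*alpha*sigma_y/2) = cos(alpha/2)*I - i*sin(alpha/2)*sigma_y,
    # exact because sigma_y squared is the identity
    return np.cos(alpha / 2) * np.eye(2) - 1j * np.sin(alpha / 2) * sigma_y

up = np.array([1, 0], dtype=complex)

alpha = 1.0
amp = up.conj() @ R(alpha) @ up     # spin-up -> spin-up amplitude, cos(alpha/2)
print(abs(amp) ** 2)                # measurement probability cos^2(alpha/2)

print(np.round(R(2 * np.pi), 10))   # full 360-degree turn: minus the identity
```

A $2\pi$ rotation returns minus the identity — a pure overall phase that changes no direction in space — while a $4\pi$ rotation returns the identity exactly, which is the double-cover statement in matrix form.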
However, all detailed measurements of the spin with respect to any axis indirectly prove that the fermions transform as the fundamental representation of $SU(2)$. In particular, if you create a spin-up electron and measure whether its spin is up with respect to another axis tilted by angle $\alpha$, the probability will be $\cos^2(\alpha/2)$. The only sensible way to obtain it from the amplitude is that the amplitude goes like $\cos(\alpha/2)$ and indeed, this function equals $-1$ for $\alpha$ equal to 360 degrees. | {
"domain": "physics.stackexchange",
"id": 13968,
"tags": "quantum-mechanics, experimental-physics, quantum-spin, group-theory, fermions"
} |
Canonical momentum Velocity dependent Lagrangian | Question: I have a homework problem wich I think I'm on the verge of solving but need help with some relations:
Show that if the potential $U$ in the Lagrangian contains velocity-dependent terms, the canonical momentum corresponding to a coordinate of rotation $\theta$ of the entire system is no longer the mechanical angular momentum $L_{\theta}$ but is given by
$$p_{\theta}=L_{\theta}-\sum_{i}\mathbf{n}.\mathbf{r_{i}}\times\nabla_{v_{i}}U.$$
This is what I have so far:
I know that the position vectors $\mathbf{r_{i}}$ are functions of the $q_i$ generalized coordinates. On the other hand $U=U(\mathbf{r_{i}},\mathbf{\dot{r_{i}}})$
The canonical momentum with respect to a coordinate of rotation $\theta$ is given by:
$$p_{\theta}=\frac{\partial L}{\partial\dot{\theta}}=\frac{\partial T}{\partial \dot{\theta}}-\sum_{i}\left(\frac{\partial U}{\partial r_{i}}\frac{\partial r_{i}}{\partial \dot{\theta}}+\frac{\partial U}{\partial \dot{r}_{i}}\frac{\partial \dot{r}_{i}}{\partial \dot{\theta}}\right)$$
Using:
$$\frac{\partial \dot{r_{i}}}{\partial \dot{\theta}}=\frac{\partial r_{i}}{\partial \theta}$$
We have:
$$p_{\theta}=\frac{\partial L}{\partial\dot{\theta}}=\frac{\partial T}{\partial \dot{\theta}}-\sum_{i}\left(\frac{\partial U}{\partial r_{i}}\frac{\partial r_{i}}{\partial \dot{\theta}}+\frac{\partial U}{\partial \dot{r}_{i}}\frac{\partial r_{i}}{\partial \theta}\right)$$
I know that $$\frac{\partial U}{\partial \dot{r_{i}}}$$ can be rewritten as the scalar product of gradient of U and the unit velocity but I don't see how a cross product can be made to appear.
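As a concrete sanity check of the target formula (my own illustrative example, not part of the assignment): take a charged particle in a uniform magnetic field $B\hat z$, with the velocity-dependent potential $U=-q\,\dot{\vec r}\cdot\vec A$ and the gauge choice $\vec A = \frac{Br}{2}\hat\theta$ in polar coordinates, sketched with sympy:

```python
import sympy as sp

t = sp.symbols('t')
m, q, B = sp.symbols('m q B', positive=True)
r = sp.Function('r')(t)
theta = sp.Function('theta')(t)
rdot, thetadot = sp.diff(r, t), sp.diff(theta, t)

# U = -q v.A with A = (B r / 2) in the theta direction, so v.A = (r*thetadot)*(B*r/2)
U = -q * B * r**2 * thetadot / 2
T = sp.Rational(1, 2) * m * (rdot**2 + r**2 * thetadot**2)
L = T - U

p_theta = sp.diff(L, thetadot)    # canonical momentum conjugate to theta
L_theta = m * r**2 * thetadot     # mechanical angular momentum

# The difference is q*B*r^2/2, which matches -n.(r x grad_v U)
# since grad_v U = -q A and r x A = r*(B r/2) in the z direction
print(sp.simplify(p_theta - L_theta))
```

So $p_\theta = L_\theta + qBr^2/2 \neq L_\theta$, exactly the kind of velocity-dependent correction the problem asks you to exhibit.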
Answer: $\newcommand{\td}[0]{\dot{\theta}}$If you change $\td$ by an amount $\Delta \td$, then the velocity of the $i$th particle will change by an amount $\Delta \td\,\hat{n} \times \vec{r}_i$. The positions stay the same. Thus the resulting change in the Lagrangian will be $\partial_{\dot{\vec{r}}_i}L \cdot \Delta \td\,\hat{n} \times \vec{r}_i = \partial_{\dot{\vec{r}}_i}(T-U) \cdot \Delta \td\,\hat{n} \times \vec{r}_i$ (summed over $i$). Now if we distribute across $T-U$, the term with $T$ gives the mechanical angular momentum $L_\theta$. The term with $U$ gives $-\partial_{\dot{\vec{r}}_i}U \cdot \Delta \td\,\hat{n} \times \vec{r}_i = -\hat{n} \cdot \vec{r}_i \times \partial_{\dot{\vec{r}}_i}U\,\Delta \td$. Therefore the derivative of $L$ with respect to $\dot{\theta}$ is $L_\theta - \sum_i \hat{n} \cdot \vec{r}_i \times \partial_{\dot{\vec{r}}_i}U$. | {
"domain": "physics.stackexchange",
"id": 45356,
"tags": "homework-and-exercises, angular-momentum, lagrangian-formalism, potential, angular-velocity"
} |
Find and replace database string in all Web.configs across a stack | Question: I was tasked with this assignment with almost no time and I cannot stand up a new environment to test this. The risk is pretty high with what is the request is for so I'm asking for as much peer and code review as I can get.
I'm tasked to check all the web.configs that have a certain DB Server and change it to a new value.
This will happen across about 250 servers with about an hour maintenance window.
On my first pass I want it to find the configs and place them in a folder on my local machine for me to review the change, as C:\NewConfigs\FullPathofConfigHere
On my second pass I will actually Set-Content or create a new config (this is commented out for now) at the configs' current locations, all of which are on either a D: or E: drive.
$servers = Get-Content "servers.txt"
$WebConfigFile = "web.config"
$connectionstring1 = "DBstring1.local.domain"
$connectionstring2 = "DBstring2.local.domain"
$to = "C:\ConfigFinder\BackupConfigs\"
$NewFolder = "C:\ConfigFinder\NewConfigs\"
Function Backup {
foreach ($computer in $servers) {
Get-ChildItem -Recurse -Force \\$computer\d$ -ErrorAction SilentlyContinue -Include $WebConfigFile |
Where-Object {$_.FullName -notlike "*Recycle.bin*"} |
Select-Object FullName |
#This makes a backup copy before doing any rewrite
% {
$newpath = join-path $To $_.FullName.ToLower()
md $newpath
Copy-Item $_.FullName.ToLower() -destination $newpath -verbose
}
Get-ChildItem -Recurse -Force \\$computer\e$ -ErrorAction SilentlyContinue -Include $WebConfigFile |
Where-Object {$_.FullName -notlike "*Recycle.bin*"} |
Select-Object FullName |
#Select-String $connectionstring1 |
#This makes a backup copy before doing any rewrite
% {
$newpath = join-path $To $_.FullName
md $newpath
Copy-Item $_.FullName.ToLower() -destination $newpath -verbose
}
}
}
Function CreateLocal {
foreach ($computer in $servers) {
Get-ChildItem -Recurse -Force \\$computer\d$ -ErrorAction SilentlyContinue -Include $WebConfigFile |
Where-Object {$_.FullName -notlike "*Recycle.bin*"} |
Select-Object FullName |
#Select-String $connectionstring1 |
#This makes a backup copy before doing any rewrite
% {
$ConfigName = "Web.qa.Config"
$newpath = join-path $NewFolder $_.FullName.Replace("Web.config","")
md $newpath
$finaldestination = $newpath + $ConfigName
(Get-Content $_.FullName).replace($connectionstring1, $connectionstring2) | Out-File $finaldestination
}
Get-ChildItem -Recurse -Force \\$computer\e$ -ErrorAction SilentlyContinue -Include $WebConfigFile |
Where-Object {$_.FullName -notlike "*Recycle.bin*"} |
Select-Object FullName |
#This makes a backup copy before doing any rewrite
% {
$ConfigName = "Web.qa.Config"
$newpath = join-path $NewFolder $_.FullName.Replace("Web.config","")
md $newpath
$finaldestination = $newpath + $ConfigName
(Get-Content $_.FullName).replace($connectionstring1, $connectionstring2) | Out-File $finaldestination
}
}
}
Function ConfigonServer {
Write-Host "CAUTION YOU ARE ABOUT TO WRITE NEW CONFIGS ON THE SERVERS"
$resp = Read-Host " Are you SURE you want to continue? (Y/[N])"
if ($Resp.ToUpper() -eq "N") {
Write-Host "Taking you back to Safety"
sleep 3
Menu
}
if ($Resp.ToUpper() -eq "Y") {
foreach ($computer in $servers) {
Get-ChildItem -Recurse -Force \\$computer\d$ -ErrorAction SilentlyContinue -Include $WebConfigFile |
Where-Object {$_.FullName -notlike "*Recycle.bin*"} |
Select-Object FullName |
#Select-String $connectionstring1 |
#This makes a backup copy before doing any rewrite
% {
$ConfigName = "Web.qa.Config"
$finaldestination = $_.FullName.replace("Web.config","") + $ConfigName
(Get-Content $_.FullName).replace($connectionstring1, $connectionstring2) | Out-File $finaldestination -encoding "UTF8"
}
Get-ChildItem -Recurse -Force \\$computer\e$ -ErrorAction SilentlyContinue -Include $WebConfigFile |
Where-Object {$_.FullName -notlike "*Recycle.bin*"} |
Select-Object FullName |
#Select-String $connectionstring1 |
#This makes a backup copy before doing any rewrite
% {
$ConfigName = "Web.qa.Config"
$finaldestination = $_.FullName.replace("Web.config","") + $ConfigName
(Get-Content $_.FullName).replace($connectionstring1, $connectionstring2) | Out-File $finaldestination -encoding "UTF8"
}
}
}
}
Function HoldMyBeer {
Write-Host "CAUTION YOU ARE ABOUT TO RE-WRITE ALL THE CONFIGS"
$resp = Read-Host " Are you SURE you want to continue? (Y/[N])"
if ($Resp.ToUpper() -eq "N") {
Write-Host "Taking you back to Safety"
sleep 3
Menu
}
if ($Resp.ToUpper() -eq "Y") {
foreach ($computer in $servers) {
Get-ChildItem -Recurse -Force \\$computer\d$ -ErrorAction SilentlyContinue -Include $WebConfigFile |
Where-Object {$_.FullName -notlike "*Recycle.bin*"} |
Select-Object FullName |
#Select-String $connectionstring1 |
#This makes a backup copy before doing any rewrite
% {
(Get-Content $_.FullName).replace($connectionstring1, $connectionstring2) | Set-Content $_.FullName -encoding "UTF8"
}
Get-ChildItem -Recurse -Force \\$computer\e$ -ErrorAction SilentlyContinue -Include $WebConfigFile |
Where-Object {$_.FullName -notlike "*Recycle.bin*"} |
Select-Object FullName |
#Select-String $connectionstring1 |
#This makes a backup copy before doing any rewrite
% {
(Get-Content $_.FullName).replace($connectionstring1, $connectionstring2) | Set-Content $_.FullName -encoding "UTF8"
}
}
}
}
Function Menu {
Do {
Write-Host ""
Write-Host ""
Write-Host "===================================================="
Write-Host "What would you like to do Today?"
Write-Host "===================================================="
Write-Host ""
Write-Host "1. Backup to Local Disk" -foregroundcolor green
Write-Host "2. Create New Strings to Local Disk" -foregroundcolor cyan
Write-Host "3. Create New Configs on The Server List" -foregroundcolor yellow
Write-Host "4. Re-write the Files on the servers" -foregroundcolor magenta
Write-Host "5. Exit"
Write-Host ""
Write-Host ""
Write-Host $errout
$Choice = Read-Host '(1-5)'
switch ($Choice) {
1 {
Backup; break
}
2 {
CreateLocal; break
}
3 {
ConfigonServer; break
}
4 {
HoldMyBeer; break
}
5 {
Exit;exit
}
default {
$errout = "No, try again........Try 1-5 only"
}
}
}
until ($Choice -ne "")
}
Menu
Answer: Operator Case Sensitivity
Most (all?) of the comparison operators, like -eq and -like, are case-insensitive by default. What that means is that code like
if ($Resp.ToUpper() -eq "N")
is redundant and functionally the same as
if ($Resp -eq "N")
While on the topic of case sensitivity, .ToLower() serves no purpose here:
Copy-Item $_.FullName.ToLower()
Verb-Noun Naming Convention
PowerShell function naming recommendations are Verb-Noun(s): the action you are performing and the object of your action. You see this in all stock cmdlets like Get-Item. MSDN has an extensive but simple-to-follow list of recommendations. Try to align the name as closely as possible with what the cmdlet/function/code is doing. If you are having an issue figuring out a name, it is possible that you need to break up that code into separate pieces.
Not to overly criticize HoldMyBeer. At least Hold-Beer would be better :)
The Choice Menu System
PowerShell has a great way of creating menus guiding user input and acting on the results. It is not too hard to get a grasp at first glance, so I am going to include the code snippet from the front of that article.
$title = "Delete Files"
$message = "Do you want to delete the remaining files in the folder?"
$yes = New-Object System.Management.Automation.Host.ChoiceDescription "&Yes", `
"Deletes all the files in the folder."
$no = New-Object System.Management.Automation.Host.ChoiceDescription "&No", `
"Retains all the files in the folder."
$options = [System.Management.Automation.Host.ChoiceDescription[]]($yes, $no)
$result = $host.ui.PromptForChoice($title, $message, $options, 0)
switch ($result)
{
0 {"You selected Yes."}
1 {"You selected No."}
}
This would give you more functionality with less worrying about code and end-user selections. This way you know the end user can only select a valid option.
Select-Object -ExpandProperty
When you are using select-object you are getting an object array containing the properties you requested. This is the case even if you select one property.
To make subsequent use of that single property easier, you can return just the property itself as an array, as opposed to an object array with that property. Consider the following code:
Get-ChildItem -Recurse -Force \\$computer\e$ -ErrorAction SilentlyContinue -Include $WebConfigFile |
Where-Object {$_.FullName -notlike "*Recycle.bin*"} |
Select-Object FullName | # ....
This could be simplified
Get-ChildItem -Recurse -Force "\\$computer\e$" -ErrorAction SilentlyContinue -Include $WebConfigFile |
Select-Object -ExpandProperty FullName |
Where-Object {$_ -notlike "*Recycle.bin*"} | # ....
This will also see a positive effect in other parts of your code as you refer to fullname frequently.
Code Repetition
If you find yourself repeating the same code over and over again you should be asking if there is another way. Functions and better use of cmdlet parameters here would make some headway for you.
You are gathering files rather frequently. Albeit the code is used only once per menu selection, if you had to make a logic change you would have to ensure you're doing it in around 8 places. That is a huge margin for error.
Heck you could even consolidate both these blocks since Get-ChildItem supports arrays for -Path
Get-ChildItem -Recurse -Force \\$computer\d$ -ErrorAction SilentlyContinue -Include $WebConfigFile |
Where-Object {$_.FullName -notlike "*Recycle.bin*"} |
Select-Object FullName |
#Select-String $connectionstring1 |
#This makes a backup copy before doing any rewrite
% {
$ConfigName = "Web.qa.Config"
$finaldestination = $_.FullName.replace("Web.config","") + $ConfigName
(Get-Content $_.FullName).replace($connectionstring1, $connectionstring2) | Out-File $finaldestination -encoding "UTF8"
}
Get-ChildItem -Recurse -Force \\$computer\e$ -ErrorAction SilentlyContinue -Include $WebConfigFile |
Where-Object {$_.FullName -notlike "*Recycle.bin*"} |
Select-Object FullName |
#Select-String $connectionstring1 |
#This makes a backup copy before doing any rewrite
% {
$ConfigName = "Web.qa.Config"
$finaldestination = $_.FullName.replace("Web.config","") + $ConfigName
(Get-Content $_.FullName).replace($connectionstring1, $connectionstring2) | Out-File $finaldestination -encoding "UTF8"
}
The first line would work with this small change.
Get-ChildItem -Recurse -Force "\\$computer\d$", "\\$computer\e$" -ErrorAction SilentlyContinue
That cuts out the second block entirely since they both appear to do the same thing.
Filter > Include
Since you are only trying to find files with a certain name and are searching recursively through a drive, you will find that -Filter will outperform -Include since it functions at the provider level. From Get-ChildItem on MSDN:
Filters are more efficient than other parameters, because the provider applies them when retrieving the objects, rather than having Windows PowerShell filter the objects after they are retrieved from the provider.
Consistency
I see that you are using both Out-File and Set-Content. Pick one and stick with it. If someone else is reading your code they may have to spend time wondering why you chose this. Of the two, Set-Content should perform better.
Trepidation of running this code
If you are at all concerned about running this code it would be a trivial exercise to set up a test environment to ensure that your code only does what you expect it to.
There are some other areas that could be addressed but the ones above would be the ones I would actually focus on first. | {
"domain": "codereview.stackexchange",
"id": 24290,
"tags": "powershell"
} |
attractive part of Lennard-Jones potential derivation | Question:
The question is taken from this site:
http://www.chem.konan-u.ac.jp/PCSI/web_material/LJ.pdf
I don't see how they can end up with this element:
$\frac{3x^2}{2r^2}$ (I)
My attempt: in the Taylor series used they have the third element $\frac{3x^2}{8}$, but they use $x=\frac{-2zz_1+x^2_1+y^2_1+z^2_1}{r^2}$ and I can't justify (I).
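A quick sympy sanity check of this expansion (my own sketch; I'm assuming the series in question is the binomial expansion $(1+u)^{-1/2}=1-\frac{u}{2}+\frac{3u^2}{8}+\cdots$, that only the dominant $-2zz_1/r^2$ part of $u$ is kept when squaring, and that $z\approx r$ to leading order):

```python
import sympy as sp

u, z, z1, r = sp.symbols('u z z_1 r', positive=True)

# Binomial series: (1+u)^(-1/2) = 1 - u/2 + (3/8) u^2 + O(u^3)
s = sp.series((1 + u) ** sp.Rational(-1, 2), u, 0, 3).removeO()
print(s.coeff(u, 2))                  # the "third element" coefficient, 3/8

# Square only the dominant part of u, namely -2*z*z1/r^2, then take z -> r:
term = sp.Rational(3, 8) * (-2 * z * z1 / r**2) ** 2
print(sp.simplify(term.subs(z, r)))   # reduces to 3*z_1**2/(2*r**2)
```

Under those assumptions the $u^2$ term collapses to $\frac{3z_1^2}{2r^2}$, which has exactly the form (I) with the coordinate relabelled.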
Answer: You need to look at $\left(\dfrac{-2zz_1+r^2_1}{r^2}\right)^2$ with $r^2_1 = x^2_1 + y^2_1 +z^2_1$.
You get
$$\dfrac{4z^2z^2_1}{r^4}-\dfrac{4zz_1r_1^2}{r^4}+\dfrac{r^4_1}{r^4} = \dfrac{4z^2_1}{r^2}-\dfrac{4z_1r^2_1}{r^3}+\dfrac{r^4_1}{r^4}$$
with the first term being the only significant one. | {
"domain": "physics.stackexchange",
"id": 34359,
"tags": "electromagnetism"
} |
What could be causing microseisms on Mars? | Question: The anouncement published in Science Mini tremors detected on Mars for first time says that the Mars lander InSight is detecting microseisms on Mars after various vibrations produced by the lander and its interaction with the wind are excluded.
The above linked Wikipedia article says:
In seismology, a microseism is defined as a faint earth tremor caused by natural phenomena. Sometimes referred to as a "hum", it should not be confused with the anomalous acoustic phenomenon of the same name. The term is most commonly used to refer to the dominant background seismic and electromagnetic noise signals on Earth, which are caused by water waves in the oceans and lakes. Characteristics of microseism are discussed by Bhatt. Because the ocean wave oscillations are statistically homogenous over several hours, the microseism signal is a long-continuing oscillation of the ground. The most energetic seismic waves that make up the microseismic field are Rayleigh waves, but Love waves can make up a significant fraction of the wave field, and body waves are also easily detected with arrays. Because the conversion from the ocean waves to the seismic waves is very weak, the amplitude of ground motions associated to microseisms does not generally exceed 10 micrometers. (several citations available in original article)
We can exclude large surface bodies of water on Mars, so the vibrations must be coming from other sources.
Question: What could be causing microseisms on Mars?
Answer: In addition to @Erik's answer: rock, like any other material, dilates as it gets hotter and contracts as it gets colder. On Earth such temperature differences within rocks can even lead to cracking.
That could be another source of microseismicity, because the thermal amplitude on Mars is huge. The temperatures at the two Viking landers, measured 1.5 meters above the surface, ranged from -17.2°C to -107°C.
Another one could be meteorite and micrometeorite impacts, which could be more common there due to the much thinner atmosphere.
Finally, Mars's interior is getting colder, so its inner layers are contracting and its diameter is slowly shrinking. That leads to thrust faults, which have been observed in other solar system bodies (especially smaller moons, and even our Moon). This thrusting can also cause seismicity and microseismicity without any doubt. | {
"domain": "earthscience.stackexchange",
"id": 1715,
"tags": "mars, microseism"
} |
Can't install snakemake v5.15.0 from conda | Question: I'm trying to install the latest version of snakemake from conda into a conda env, following the instructions here. I.e.
conda create -n snakemake -c conda-forge -c bioconda snakemake
However, this seems to only give me the option to install snakemake v5.3.0, while the latest noarch version on conda is v5.15.0.
Any help greatly appreciated.
Solving environment: done
## Package Plan ##
environment location: /home/ubuntu/anaconda3/envs/snakemake
added / updated specs:
- snakemake
The following packages will be downloaded:
package | build
---------------------------|-----------------
aioeasywebdav-2.4.0 | py36_1000 18 KB conda-forge
aiohttp-3.6.2 | py36h7b6447c_0 544 KB
appdirs-1.4.3 | py36h28b3542_0 15 KB
async-timeout-3.0.1 | py36_0 12 KB
attrs-19.3.0 | py_0 39 KB
bcrypt-3.1.7 | py36h7b6447c_0 40 KB
blas-2.14 | openblas 10 KB conda-forge
boto3-1.9.66 | py36_0 105 KB
botocore-1.12.189 | py_0 3.4 MB
datrie-0.8 | py36h7b6447c_0 147 KB
docutils-0.16 | py36_0 668 KB
dropbox-9.4.0 | py36_0 756 KB
filechunkio-1.6 | py36_0 7 KB bioconda
ftputil-3.2 | py36_0 85 KB bioconda
google-api-core-1.16.0 | py36_1 86 KB
googleapis-common-protos-1.51.0| py36_2 72 KB
idna_ssl-1.1.0 | py36_0 7 KB
importlib_metadata-1.5.0 | py36_0 48 KB
jsonschema-3.2.0 | py36_0 95 KB
liblapacke-3.8.0 | 14_openblas 10 KB conda-forge
multidict-4.7.3 | py36h7b6447c_0 68 KB
networkx-2.4 | py_0 1.2 MB
numpy-1.17.0 | py36h99e49ec_0 24 KB r
numpy-base-1.17.0 | py36h2f8d375_0 5.2 MB r
pandas-1.0.3 | py36h0573a6f_0 8.6 MB
protobuf-3.11.4 | py36he6710b0_0 635 KB
psutil-5.7.0 | py36h7b6447c_0 315 KB
pygraphviz-1.3.1 | py36_0 205 KB bioconda
pynacl-1.3.0 | py36h7b6447c_0 1.1 MB
pyrsistent-0.16.0 | py36h7b6447c_0 93 KB
pysftp-0.2.9 | py36_0 31 KB bioconda
ratelimiter-1.2.0 | py36_1000 12 KB conda-forge
rsa-3.1.4 | py36_0 87 KB bioconda
s3transfer-0.1.13 | py36_0 79 KB
snakemake-5.3.0 | py36_1 4 KB bioconda
snakemake-minimal-5.3.0 | py36_1 280 KB bioconda
sqlite-3.31.1 | h62c20be_1 2.0 MB
typing_extensions-3.7.4.1 | py36_0 40 KB
wrapt-1.12.1 | py36h7b6447c_1 49 KB
yarl-1.4.2 | py36h7b6447c_0 132 KB
zipp-2.2.0 | py_0 12 KB
------------------------------------------------------------
Total: 26.2 MB
The following NEW packages will be INSTALLED:
_libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main
aioeasywebdav conda-forge/linux-64::aioeasywebdav-2.4.0-py36_1000
aiohttp pkgs/main/linux-64::aiohttp-3.6.2-py36h7b6447c_0
appdirs pkgs/main/linux-64::appdirs-1.4.3-py36h28b3542_0
asn1crypto pkgs/main/linux-64::asn1crypto-1.3.0-py36_0
async-timeout pkgs/main/linux-64::async-timeout-3.0.1-py36_0
attrs pkgs/main/noarch::attrs-19.3.0-py_0
bcrypt pkgs/main/linux-64::bcrypt-3.1.7-py36h7b6447c_0
blas conda-forge/linux-64::blas-2.14-openblas
boto3 pkgs/main/linux-64::boto3-1.9.66-py36_0
botocore pkgs/main/noarch::botocore-1.12.189-py_0
ca-certificates pkgs/main/linux-64::ca-certificates-2020.1.1-0
cachetools pkgs/main/noarch::cachetools-3.1.1-py_0
cairo pkgs/main/linux-64::cairo-1.14.12-h8948797_3
certifi pkgs/main/linux-64::certifi-2020.4.5.1-py36_0
cffi pkgs/main/linux-64::cffi-1.14.0-py36h2e261b9_0
chardet pkgs/main/linux-64::chardet-3.0.4-py36_1003
configargparse pkgs/main/noarch::configargparse-1.1-py_0
cryptography pkgs/main/linux-64::cryptography-2.8-py36h1ba5d50_0
datrie pkgs/main/linux-64::datrie-0.8-py36h7b6447c_0
decorator pkgs/main/noarch::decorator-4.4.2-py_0
docutils pkgs/main/linux-64::docutils-0.16-py36_0
dropbox pkgs/main/linux-64::dropbox-9.4.0-py36_0
expat pkgs/main/linux-64::expat-2.2.6-he6710b0_0
filechunkio bioconda/linux-64::filechunkio-1.6-py36_0
fontconfig pkgs/main/linux-64::fontconfig-2.13.0-h9420a91_0
freetype pkgs/main/linux-64::freetype-2.9.1-h8a8886c_1
fribidi pkgs/main/linux-64::fribidi-1.0.5-h7b6447c_0
ftputil bioconda/linux-64::ftputil-3.2-py36_0
gitdb pkgs/main/noarch::gitdb-4.0.2-py_0
gitpython pkgs/main/noarch::gitpython-3.1.1-py_1
glib pkgs/main/linux-64::glib-2.63.1-h5a9c865_0
google-api-core pkgs/main/linux-64::google-api-core-1.16.0-py36_1
google-auth pkgs/main/noarch::google-auth-1.13.1-py_0
google-cloud-core pkgs/main/noarch::google-cloud-core-1.3.0-py_0
google-cloud-stor~ pkgs/main/noarch::google-cloud-storage-1.27.0-py_0
google-resumable-~ pkgs/main/noarch::google-resumable-media-0.5.0-py_1
googleapis-common~ pkgs/main/linux-64::googleapis-common-protos-1.51.0-py36_2
graphite2 pkgs/main/linux-64::graphite2-1.3.13-h23475e2_0
graphviz pkgs/main/linux-64::graphviz-2.40.1-h21bd128_2
harfbuzz pkgs/main/linux-64::harfbuzz-1.8.8-hffaf4a1_0
icu pkgs/main/linux-64::icu-58.2-h9c2bf20_1
idna pkgs/main/noarch::idna-2.9-py_1
idna_ssl pkgs/main/linux-64::idna_ssl-1.1.0-py36_0
importlib_metadata pkgs/main/linux-64::importlib_metadata-1.5.0-py36_0
jinja2 pkgs/main/noarch::jinja2-2.11.1-py_0
jmespath pkgs/main/noarch::jmespath-0.9.4-py_0
jpeg pkgs/main/linux-64::jpeg-9b-h024ee3a_2
jsonschema pkgs/main/linux-64::jsonschema-3.2.0-py36_0
ld_impl_linux-64 pkgs/main/linux-64::ld_impl_linux-64-2.33.1-h53a641e_7
libblas conda-forge/linux-64::libblas-3.8.0-14_openblas
libcblas conda-forge/linux-64::libcblas-3.8.0-14_openblas
libedit pkgs/main/linux-64::libedit-3.1.20181209-hc058e9b_0
libffi pkgs/main/linux-64::libffi-3.2.1-hd88cf55_4
libgcc-ng pkgs/main/linux-64::libgcc-ng-9.1.0-hdf63c60_0
libgfortran-ng pkgs/main/linux-64::libgfortran-ng-7.3.0-hdf63c60_0
liblapack conda-forge/linux-64::liblapack-3.8.0-14_openblas
liblapacke conda-forge/linux-64::liblapacke-3.8.0-14_openblas
libopenblas conda-forge/linux-64::libopenblas-0.3.7-h5ec1e0e_6
libpng pkgs/main/linux-64::libpng-1.6.37-hbc83047_0
libprotobuf pkgs/main/linux-64::libprotobuf-3.11.4-hd408876_0
libsodium pkgs/main/linux-64::libsodium-1.0.16-h1bed415_0
libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-9.1.0-hdf63c60_0
libtiff pkgs/main/linux-64::libtiff-4.1.0-h2733197_0
libuuid pkgs/main/linux-64::libuuid-1.0.3-h1bed415_2
libxcb pkgs/main/linux-64::libxcb-1.13-h1bed415_1
libxml2 pkgs/main/linux-64::libxml2-2.9.9-hea5a465_1
markupsafe pkgs/main/linux-64::markupsafe-1.1.1-py36h7b6447c_0
multidict pkgs/main/linux-64::multidict-4.7.3-py36h7b6447c_0
ncurses pkgs/main/linux-64::ncurses-6.2-he6710b0_0
networkx pkgs/main/noarch::networkx-2.4-py_0
numpy r/linux-64::numpy-1.17.0-py36h99e49ec_0
numpy-base r/linux-64::numpy-base-1.17.0-py36h2f8d375_0
openssl pkgs/main/linux-64::openssl-1.1.1g-h7b6447c_0
pandas pkgs/main/linux-64::pandas-1.0.3-py36h0573a6f_0
pango pkgs/main/linux-64::pango-1.42.4-h049681c_0
paramiko pkgs/main/noarch::paramiko-2.7.1-py_0
pcre pkgs/main/linux-64::pcre-8.43-he6710b0_0
pip pkgs/main/linux-64::pip-20.0.2-py36_1
pixman pkgs/main/linux-64::pixman-0.38.0-h7b6447c_0
prettytable conda-forge/noarch::prettytable-0.7.2-py_3
protobuf pkgs/main/linux-64::protobuf-3.11.4-py36he6710b0_0
psutil pkgs/main/linux-64::psutil-5.7.0-py36h7b6447c_0
pyasn1 pkgs/main/noarch::pyasn1-0.4.8-py_0
pyasn1-modules pkgs/main/noarch::pyasn1-modules-0.2.7-py_0
pycparser pkgs/main/noarch::pycparser-2.20-py_0
pygraphviz bioconda/linux-64::pygraphviz-1.3.1-py36_0
pynacl pkgs/main/linux-64::pynacl-1.3.0-py36h7b6447c_0
pyopenssl pkgs/main/linux-64::pyopenssl-19.1.0-py36_0
pyrsistent pkgs/main/linux-64::pyrsistent-0.16.0-py36h7b6447c_0
pysftp bioconda/linux-64::pysftp-0.2.9-py36_0
pysocks pkgs/main/linux-64::pysocks-1.7.1-py36_0
python pkgs/main/linux-64::python-3.6.10-hcf32534_1
python-dateutil pkgs/main/noarch::python-dateutil-2.8.1-py_0
python-irodsclient conda-forge/noarch::python-irodsclient-0.8.2-py_0
pytz pkgs/main/noarch::pytz-2019.3-py_0
pyyaml pkgs/main/linux-64::pyyaml-5.3.1-py36h7b6447c_0
ratelimiter conda-forge/linux-64::ratelimiter-1.2.0-py36_1000
readline pkgs/main/linux-64::readline-8.0-h7b6447c_0
requests pkgs/main/linux-64::requests-2.23.0-py36_0
rsa bioconda/linux-64::rsa-3.1.4-py36_0
s3transfer pkgs/main/linux-64::s3transfer-0.1.13-py36_0
setuptools pkgs/main/linux-64::setuptools-46.1.3-py36_0
six pkgs/main/linux-64::six-1.14.0-py36_0
smmap pkgs/main/noarch::smmap-3.0.2-py_0
snakemake bioconda/linux-64::snakemake-5.3.0-py36_1
snakemake-minimal bioconda/linux-64::snakemake-minimal-5.3.0-py36_1
sqlite pkgs/main/linux-64::sqlite-3.31.1-h62c20be_1
tk pkgs/main/linux-64::tk-8.6.8-hbc83047_0
typing_extensions pkgs/main/linux-64::typing_extensions-3.7.4.1-py36_0
urllib3 pkgs/main/linux-64::urllib3-1.25.8-py36_0
wheel pkgs/main/linux-64::wheel-0.34.2-py36_0
wrapt pkgs/main/linux-64::wrapt-1.12.1-py36h7b6447c_1
xmlrunner conda-forge/noarch::xmlrunner-1.7.7-py_0
xz pkgs/main/linux-64::xz-5.2.5-h7b6447c_0
yaml pkgs/main/linux-64::yaml-0.1.7-had09818_2
yarl pkgs/main/linux-64::yarl-1.4.2-py36h7b6447c_0
zipp pkgs/main/noarch::zipp-2.2.0-py_0
zlib pkgs/main/linux-64::zlib-1.2.11-h7b6447c_3
zstd pkgs/main/linux-64::zstd-1.3.7-h0b5b093_0
Proceed ([y]/n)?
Answer: This is an example of the conda solver behaving incorrectly. In this case, it's choosing to pick a version that minimizes the number of dependencies required. Version 5.3.0 has 25 fewer dependencies, so if you let conda choose the version that's what it will pick. There's nothing you can really do about this other than specifying the version of snakemake that you want.
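For example (illustrative — substitute the version you actually need; the channel list depends on your setup):

```
conda install -c bioconda -c conda-forge snakemake=<desired-version>
```

Pinning the spec this way forces the solver to satisfy that exact version instead of picking the one with the fewest dependencies.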
Update: It's very likely that mamba will correctly do what you want by default. Though I haven't used it, it's supposed to be a drop-in replacement for conda here.
{
"domain": "bioinformatics.stackexchange",
"id": 1395,
"tags": "snakemake, conda"
}
libopencv2.3-dev is not updated
Question:
The libopencv2.3-dev version is not updated in the ubuntu repository. 2.3.1+svn6514+branch23-1~natty is needed for ros-electric-vision-opencv but 2.3.0+svn5720+trunk-5~natty is in the repository. Where does this need to be reported?
Originally posted by Reemco on ROS Answers with karma: 1 on 2011-08-16
Post score: 0
Answer:
Thanks for the report. It should be resolved now if you run an 'apt-get update'
Originally posted by kwc with karma: 12244 on 2011-08-17
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Bruno Normande on 2011-10-23:
Hummm... i'm getting the same error now, except that my installed version is 2.3.1+svn6514+branch23-4~natty
{
"domain": "robotics.stackexchange",
"id": 6446,
"tags": "ros, libopencv, vision-opencv"
}
Direction of mass transfer in two film theory
Question:
In an extraction process compound A is transferred from one solvent to another solvent. The two solvent may be, for example, water and dichloromethane. These two solvents are immiscible and when mixing them together, one of the solvents, say solvent 1, forms droplets that are dispersed in solvent 2, as depicted [...].
Fig. 4 shows the concentration profile across the interface between the two solvents. For this concentration profile, answer the following questions:
(a) In which direction is the mass transfer, from solvent 1 to solvent 2 or vice versa? Motivate your answer by a short explanation (2 pt).
How can you tell which direction the mass transfer will go?
I would assume that because solvent 1 forms droplets in solvent 2, then solvent 1 should be the organic phase. Would the mass transfer be from solvent 2 to solvent 1? Because the aqueous phase would contain compound A.
Edit
Just to see if I have understood it correctly, if I am given the concentration profile below. Then mass transfer would occur from the aqueous phase to the organic phase because the aqueous phase has a higher concentration.
Answer: Just keep in mind: the driving force of mass transfer is a concentration gradient. Nature tries to 'equilibriate' a difference in concentration between two phases.
As such, the mass transfer in that sense will be from the phase of high concentration (S2) to the phase of lower concentration (S1).
Two film theory is only a simplification of the mass transport directly at the interface, in which we assume there are two boundary layers on each side of the interface in which the concentration gradient is constant, giving us linear concentration profiles, and the concentration directly at the interface is determined by Henrys Law. This makes dealing with such problems much easier.
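As a small numeric sketch of the two-film picture (my own helper; the coefficients and concentrations are made up for illustration, not taken from any data):

```python
# Two-film model: linear profiles in each film, equilibrium at the interface.
# At steady state the flux through film 2 must equal the flux through film 1.
def two_film_flux(c_bulk2, c_bulk1, k2, k1, m):
    """Steady-state flux from solvent 2 toward solvent 1.

    k2, k1 : film mass-transfer coefficients (D/delta) on each side
    m      : Henry-type partition coefficient, c1_interface = m * c2_interface
    Solves k2*(c_bulk2 - c2i) = k1*(m*c2i - c_bulk1) for the interface
    concentration c2i, then returns the common flux.
    """
    c2i = (k2 * c_bulk2 + k1 * c_bulk1) / (k2 + k1 * m)
    return k2 * (c_bulk2 - c2i)

# Solvent 2 holds the higher bulk concentration of compound A:
J = two_film_flux(c_bulk2=1.0, c_bulk1=0.1, k2=2e-5, k1=1e-5, m=1.0)
```

With the bulk concentration higher in solvent 2 the computed flux comes out positive, i.e. directed from solvent 2 to solvent 1, matching the argument above; swapping the concentrations flips its sign.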
Mass and Heat Transfer are often taught in one course as they have many similiarities, but Heat Transfer offers a more intuitive introduction to the principles of transport phenomena in my view:
In heat transfer the driving force is the temperature gradient. We intuitively know that heat transfer will be from the hotter system to the colder system, and formally know that from the 2nd law of thermodynamics. Can you see the similarity to mass transport?
{
"domain": "chemistry.stackexchange",
"id": 11575,
"tags": "concentration, extraction, immiscibility"
}
Left-Factoring a grammar into LL(1)
Question: I have a homework assignment where I need to convert a grammar into LL(1). I've already removed the left recursion, but I'm having trouble doing left-factoring. All of the examples I've found are simple, and look something like this:
A -> aX | aY
becomes:
A -> aZ
Z -> X | Y
I understand that. However, my grammar looks more like this:
X -> aE | IXE | (X)E
E -> IE | BXE | ϵ
I -> ++ | --
B -> + | - | ϵ
I'm not sure how to apply the simpler example to this. I've been trying for at least a couple of hours and I've lost track of all of the things I've tried. Generally, my attempts have looked something like this:
X -> X' | IXE
X' -> aE | (X)E
E -> IE | BIX'E | BX'E | ϵ
And I then try to convert the E rules into ones having only one production starting with + or -:
X -> X' | IXE
X' -> aE | (X)E
B' -> + | -
E -> IE | B'IX'E | IX'E | B'X'E | X'E | ϵ
And then...
X -> X' | IXE
X' -> aE | (X)E
B' -> + | -
E -> +P | -M | ϵ
P -> +E | IX'E | +X'E | X'E
M -> -E | IX'E | -X'E | X'E
And so on. But I continually end up with a lot of extra nonterminals, and some very long productions / chains of productions, without actually having left-factored it. I'm not sure how to approach this - I can't seem to eliminate some nonterminal having multiple productions starting with a + and with a -.
Answer: Let's have a look at your grammar:
$\qquad \begin{align}
X &\to aE \mid IXE \mid (X)E \\
E &\to IE \mid BXE \mid \varepsilon \\
I &\to \text{++} \mid \text{--} \\
B &\to \text{+} \mid \text{-} \mid \varepsilon
\end{align}$
Note that $X$ does not need left-factoring: all rules have disjoint FIRST sets¹.
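You can verify the disjointness mechanically. The following is my own small fixed-point FIRST-set computation (not part of the original answer); `""` stands for $\varepsilon$, and `++`, `--`, `+`, `-`, `(`, `)`, `a` are treated as terminals:

```python
# FIRST sets by fixed-point iteration. A grammar maps each nonterminal to a
# list of alternatives; each alternative is a list of symbols ([] = epsilon).
def first_of_seq(seq, first):
    out = set()
    for sym in seq:
        syms = first.get(sym, {sym})  # a terminal's FIRST set is itself
        out |= syms - {""}
        if "" not in syms:            # sym cannot vanish: stop here
            return out
    out.add("")                       # every symbol could derive epsilon
    return out

def compute_first(grammar):
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:                    # iterate until no FIRST set grows
        changed = False
        for nt, alts in grammar.items():
            for alt in alts:
                f = first_of_seq(alt, first)
                if not f <= first[nt]:
                    first[nt] |= f
                    changed = True
    return first

grammar = {
    "X": [["a", "E"], ["I", "X", "E"], ["(", "X", ")", "E"]],
    "E": [["I", "E"], ["B", "X", "E"], []],
    "I": [["++"], ["--"]],
    "B": [["+"], ["-"], []],
}
first = compute_first(grammar)
# FIRST sets of X's three alternatives, which come out pairwise disjoint:
alts = [first_of_seq(alt, first) for alt in grammar["X"]]
```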
If you want to make this obvious, you can drop $I$ and inline it:
$\qquad \begin{align}
X &\to aE \mid \text{++}XE \mid \text{--}XE \mid (X)E \\
E &\to \text{++}E \mid \text{--}E \mid BXE \mid \varepsilon \\
B &\to \text{+} \mid \text{-} \mid \varepsilon
\end{align}$
Similarly, we can inline $B$:
$\qquad \begin{align}
X &\to aE \mid \text{++}XE \mid \text{--}XE \mid (X)E \\
E &\to \text{++}E \mid \text{--}E \mid \text{+}XE \mid \text{-}XE \mid XE \mid \varepsilon
\end{align}$
Now we see that we actually have to do left-factoring on $E$: we have obvious conflicts, and we get additional conflicts via $XE$. So, let's inline $X$ once at $XE$:
$\qquad \begin{align}
X &\to aE \mid \text{++}XE \mid \text{--}XE \mid (X)E \\
E &\to \text{++}E \mid \text{--}E \mid \text{+}XE \mid \text{-}XE \mid aEE \mid \text{++}XEE \mid \text{--}XEE \mid (X)EE \mid \varepsilon
\end{align}$
And now we can left-factor as easily as in your example:
$\qquad \begin{align}
X &\to aE \mid \text{++}XE \mid \text{--}XE \mid (X)E \\
E &\to \text{+}P \mid \text{-}M \mid aEE \mid (X)EE \mid \varepsilon \\
P &\to \text{+}E \mid XE \mid \text{+}XEE \\
M &\to \text{-}E \mid XE \mid \text{-}XEE
\end{align}$
By now we can see that we are not getting anywhere: by factoring away $\text{+}$ or $\text{-}$ from the alternatives, we dig up another $X$ which again has both $\text{+}$ and $\text{-}$ in its FIRST set.
So let's have a look at your language. Via
$\qquad \displaystyle X \Rightarrow aE \Rightarrow^* aI^n E \Rightarrow aI^nBXE$
and
$\qquad \displaystyle X \Rightarrow aE \Rightarrow^* aI^n E \Rightarrow aI^nIE$
you have arbitrarily long prefixes of the form $+^+$ which end differently, semantic-wise: an LL(1) parser can not decide whether any given (next) $\text{+}$ belongs
to a pair -- which would mean choosing alternative $IE$ -- or comes alone -- which would mean choosing $BXE$.
In consequence, it doesn't look like your language can be expressed by any LL(1) grammar, so trying to convert yours into one is futile.
It's even worse: as $BXE \Rightarrow BIXEE \Rightarrow^* BI^n X E^n E$, you cannot decide to choose $BXE$ with any finite look-ahead. This is not a formal proof, but it strongly suggests that your language is not even LL.
If you think about what you are doing -- mixing Polish notation with unary operators -- it is not very surprising that parsing should be hard. Basically, you have to count from the left and from the right to identify even a single $B$-$\text{+}$ in a long chain of $\text{+}$. If I think of multiple $B$-$\text{+}$ in a chain, I'm not even sure the language (with two semantically different but syntactically equal $\text{+}$) can be parsed deterministically (without backtracking) at all.
¹ That would be the sets of terminals that can come first in derivations of a non-terminal/rule alternative.
{
"domain": "cs.stackexchange",
"id": 528,
"tags": "formal-languages, formal-grammars, parsers, left-recursion"
}
what does catkin_make do during package building
Question:
After creating a package in the source folder of my catkin workspace, I built the package using catkin_make. I noticed that a few folders were generated for the package in the devel and build folders of the workspace, and I couldn't comprehend them. Could someone please tell me what happens to the package after invoking catkin_make? Thanks in advance.
Originally posted by sam26 on ROS Answers with karma: 231 on 2017-02-03
Post score: 1
Original comments
Comment by gvdhoorn on 2017-02-03:
So to avoid board members pointing you to documentation again, could you please tell us what you have found yourself, how you understood that and what is still unclear? "a few folders [..] I couldn't comprehend them" is rather vague: which folders specifically, and what was unclear about them?
Comment by sam26 on 2017-02-03:
build has got a folder with the package's name in which there is a CMakefiles folder and a installspace folder which has a .pc file by the package's name.
devel space has got a share folder in which there is a CMake folder with .cmake files and the install and src folders are untouched .
Comment by sam26 on 2017-02-03:
So what do these folders and files mean and why are they only affecting the build and devel spaces ?
Answer:
I think this is all explained in wiki/catkin/workspaces.
[..] build has got a folder with the package's name in which there is a CMakefiles folder [..]
From the wiki (emphasis mine):
1.2 Build Space
The build space is where CMake is invoked to build the catkin packages in the source space. CMake and catkin keep their cache information and other intermediate files here.
So the files you found under build/$pkg_name are the files that CMake (and to a certain extent Catkin) generate during what CMake calls the configure phase (random link to a page explaining this, I couldn't find anything on the CMake site. Refer to 5.2.1 - The CMake Process - The Configure Step). Examples are things like CMakeCache.txt and friends. These are all placed in separate directories, as mixing them wouldn't work.
[..] and a installspace folder which has a .pc file by the package's name.
From the wiki:
1.3 Development (Devel) Space
The development space (or devel space) is where built targets are placed prior to being installed. The way targets are organized in the devel space is the same as their layout when they are installed.
So the devel and the install space have their files and directories laid out in similar ways, and they only contain the results of the build. For the devel space that is limited to things that were generated and compiled (ie: shared libraries and binaries); for the install space that includes everything that was included in an install(..) statement in the CMakeLists.txt (and some additional things like msg headers, but they have install(..) targets automagically added).
The .pc files you specifically mention are pkg-config files. They are generated by the catkin_package(..) statement in your CMakeLists.txt and include information needed by packages to successfully use and link to other packages.
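For reference, a catkin-generated .pc file typically looks something like the sketch below (package name and paths are invented, and the Libs line only appears when the package actually exports a library):

```
prefix=/home/user/catkin_ws/devel

Name: my_package
Description: Description of my_package
Version: 0.1.0
Cflags: -I${prefix}/include
Libs: -L${prefix}/lib -lmy_package
```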
[..] devel space has got a share folder in which there is a CMake folder with .cmake files [..]
Those .cmake files are CMake config files. They have a similar function as the .pc files and were also generated by catkin_package(..).
[..] and the install and src folders are untouched .
I'm not sure I understand this last bit.
Finally, you might want to read A Gentle Introduction to Catkin by @jbohren. Based on your previous questions I think you'll find it enlightening.
Originally posted by gvdhoorn with karma: 86574 on 2017-02-03
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by sam26 on 2017-02-03:
Thank you very much! Brilliant explanation :)
{
"domain": "robotics.stackexchange",
"id": 26911,
"tags": "ros, catkin-make, devel"
}
Setting a joint in the right pose
Question:
Hi,
Updated description:
I would like to perform a rotation around a local Y-axis that points to another direction than the global Y-axis.
Global coordinate system is: span( (1,0,0); (0,1,0); (0,0,1) )
Local coordinate system shall be: span( (1/sqrt(2), -1/sqrt(2),0); (1/sqrt(2), 1/sqrt(2),0) ; (0,0,1) )
Now the desired rotation has to be around the new Y-axis (1/sqrt(2), 1/sqrt(2),0). As you can see below, when I add a rotation to the model pose, the sensor pose or the pose of the link, no rotation to my new coordinate system is added. Do I have to use the joint of type "revolute2" to add the "static" rotation around the Z-axis before performing the rotation around the "new / local" Y-axis? Is there another possibility?
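As a sanity check of the geometry itself (my own sketch, independent of Gazebo/SDF and not part of the original question), a rotation about the local axis $(1/\sqrt{2}, 1/\sqrt{2}, 0)$ can be computed with Rodrigues' formula:

```python
import math

def rodrigues(axis, theta, v):
    # Rotate vector v about the unit axis by angle theta:
    # v' = v*cos(t) + (axis x v)*sin(t) + axis*(axis.v)*(1 - cos(t))
    ax, ay, az = axis
    c, s = math.cos(theta), math.sin(theta)
    dot = ax * v[0] + ay * v[1] + az * v[2]
    cross = (ay * v[2] - az * v[1],
             az * v[0] - ax * v[2],
             ax * v[1] - ay * v[0])
    return tuple(v[i] * c + cross[i] * s + axis[i] * dot * (1 - c)
                 for i in range(3))

s2 = 1 / math.sqrt(2)
axis = (s2, s2, 0.0)                      # the local Y-axis from above
p = rodrigues(axis, -math.pi, (0.0, 0.0, 1.0))
```

Rotating the unit Z vector by $-\pi$ about that in-plane axis flips it to $(0, 0, -1)$, which is the behavior the joint should reproduce.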
Old description:
I have to do a very simple operation: a child link (sensor) of a model shall do half of a rotation (SetJointPosition, 0 to -pi) around the parent link (box). I have a model for this with a parent and a child link. Everything is all right as long as the joint's pose has no rotation in it.
<model name="box">
<pose>0 0 0.5 0 0 0</pose>
<static>false</static>
<link name="link">
<collision name="collision">...</collision>
<visual name="collision">
<geometry>
<box>
<size>1 1 1</size>
</box>
</geometry>
</visual>
</link>
<link name="link2">
<pose>0.0 0.0 0.51 0.0 0.0 0.0</pose>
<sensor name="raysensor" type="ray">
<always_on>true</always_on>
<visualize>true</visualize>
<ray>...</ray>
</sensor>
</link>
<joint name="my_joint" type="revolute">
<pose>0 0 0 0 0 0.785</pose>
<parent>link</parent>
<child>link2</child>
<axis>
<xyz>0 1 0</xyz>
</axis>
</joint>
</model>
The translation works. If I set an offset in x, y or z-coordinate the joint is shifted to the requested point. A rotation isn't done.
I tried to fix this with turning the whole model, but even this failed:
What can I do to bring my joint in the right position?
Originally posted by Chris on Gazebo Answers with karma: 67 on 2013-07-25
Post score: 0
Answer:
I found out, that it is just a problem with the visualisation of the joint... the rotation is done correctly, although the joint axes are wrong...
The added rotation to the model turns the model and the sensor (and the joint, but it is not shown) I expected that the red x-axis points in the rotated direction...
Here you can see the correct rotation -pi/2 (although the joint axis are not in the right positions)
I want to add, that it is not enough to just rotate the sensor link!
I think this might be connected with this bug?!
Originally posted by Chris with karma: 67 on 2013-07-31
This answer was ACCEPTED on the original site
Post score: 0
{
"domain": "robotics.stackexchange",
"id": 3393,
"tags": "gazebo"
}
Why is $\left.\frac{\partial C_V}{\partial V}\right|_T$ different in these derivations?
Question: I want to show that $\delta q$ is not an exact differential.
Starting from $dE = \delta q - pdV$ and because $E := E(V, T)$ is a state function, which allows to express the exact differential as
$$
dE = \left.\frac{\partial E}{\partial V}\right|_T dV +\left.\frac{\partial E}{\partial T}\right|_V dT,
$$
the two expressions can be set equal giving after rearrangement
$$
\delta q = \left[ \left.\frac{\partial E}{\partial V}\right|_T + p \right]dV + \left.\frac{\partial E}{\partial T}\right|_V dT
$$
and therefore also
$$
\delta q = \left[ \left.\frac{\partial E}{\partial V}\right|_T + p \right]dV + C_V dT.
$$
Now, by definition $\partial E/\partial T|_V = C_V$, and so
$$
\left.\frac{\partial C_V}{\partial V}\right|_T =\left[\frac{\partial}{\partial V}\left.\frac{\partial E}{\partial T}\right|_V\right]_T
$$
and because $E$ is a state function, the sequence of partial derivatives can be exchanged (according to Schwarz' theorem, while I don't understand how it works), allowing to write
$$
\left.\frac{\partial C_V}{\partial V}\right|_T =\left[\frac{\partial}{\partial T}\left.\frac{\partial E}{\partial V}\right|_T\right]_V. (*)
$$
Also, multiplying the above expression for $\delta q$ by $1/\partial V$ at constant $T$, I obtain
$$
\left.\frac{\delta q}{\partial V}\right|_T = \left[ \left.\frac{\partial E}{\partial V}\right|_T + p \right]\left.\frac{\partial V}{\partial V}\right|_T + C_V \left.\frac{\partial T}{\partial V}\right|_T
$$
where $\partial V/\partial V = 1$ and the second term on the right hand side equals $0$ because $\partial T = 0$ at constant temperature. Multiplying the remaining equation by $\partial / \partial T$ at constant $V$ gives
$$
\left[\frac{\partial}{\partial T}\left.\frac{\delta q}{\partial V}\right|_T\right]_V = \left[ \frac{\partial}{\partial T} \left(\left.\frac{\partial E}{\partial V}\right|_T + p \right)\right]_V.
$$
Now assuming $\delta q$ were exact, again the sequence of partial derivatives would not matter and I could write
$$
\left[\frac{\partial}{\partial V}\left.\frac{\delta q}{\partial T}\right|_V\right]_T = \left[ \frac{\partial}{\partial T} \left(\left.\frac{\partial E}{\partial V}\right|_T + p \right)\right]_V
$$
and by using $q=E$ since the "inner" differential on the left side is evaluated at constant volume,
$$
\left[\frac{\partial}{\partial V}\left.\frac{\partial E}{\partial T}\right|_V\right]_T = \left.\frac{\partial}{\partial V} C_V\right|_T = \left[ \frac{\partial}{\partial T} \left(\left.\frac{\partial E}{\partial V}\right|_T + p \right)\right]_V. (**)
$$
From this we find that $(*)$ and $(**)$ are different and thus the assumption must be wrong and therefore $\delta q$ is not an exact differential.
Does this make any sense?
Exact wording from book:
Starting with $dE = \delta q - p\,dV$, show that
a) $\delta q = C_V dT + [P+(\partial E/\partial V)_T] dV$
b) $\left(\frac{\partial C_V}{\partial V}\right)_T = \left[\frac{\partial}{\partial T} \left(\frac{\partial E}{\partial V}\right)_T\right]_V$
c) $\delta q$ is not an exact differential.
For c), the book states
If $\delta q$ were an exact differential, then by solution to a), $(\partial C_V/\partial V)_T$ would have to be equal to $[\partial /\partial T(P+(\partial E/\partial V)_T)]_V$ but it is not according to solution of b), hence $\delta q$ is not exact.
Answer: I think it is important not to lose yourself in calculations. The method in your book probably starts by considering an object $\delta Q$ that has the following general form:
$\delta Q = C_vdT + hdV$ say,
then it is an exact differential iff
$\left(\frac{\partial C_v}{\partial V}\right)_T = \left(\frac{\partial h}{\partial T}\right)_V$
This is also equivalent to saying that $C_v$ and $h$ can be thought of as partial derivatives of the same state function with respect to $T$ and $V$ respectively.
Now, from the first law of thermodynamics, we know that
$C_v \equiv \left( \frac{\partial U}{\partial T}\right)_V$
hence the point is then to figure out if $h$ is the partial derivative of $U$ with respect to $V$ at fixed $T$ and obviously it is not since the latter would be $h-p$ where $p$ is the thermodynamic pressure which is not a null function.
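As a concrete illustration (my addition): for an ideal gas, $E$ depends on $T$ only, so $\delta q = C_V\,dT + p\,dV$, and exactness would require

```latex
\left(\frac{\partial C_V}{\partial V}\right)_T
= \left(\frac{\partial p}{\partial T}\right)_V
= \frac{nR}{V} \neq 0,
```

but $C_V$ of an ideal gas does not depend on $V$, so the condition fails and $\delta q$ is not exact.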
That ends the story I believe... but please tell me if you don't trust this argument, I can easily be fooled myself by circular arguments.
{
"domain": "physics.stackexchange",
"id": 12403,
"tags": "thermodynamics"
}
Is Impulse Frame Independent?
Question: The title says it all.
I was wondering if the impulse on an object would change if we look from another inertial or non-inertial frame of reference. According to me it shouldn't, since the force that causes the impulse would be frame independent as well.
I tried looking it up, but couldn't find it. So can someone prove or disprove the frame-independency of impulse caused by a force?
Answer: In Newtonian physics the impulse is frame independent. In an inertial frame the impulse is:
$$ \Delta \mathbf p = \int \mathbf F(t)dt $$
and this does not depend upon velocity so the integral is the same for frames with different velocities. This comes down to the fact that while velocity is frame dependent a velocity difference is not.
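A quick numerical check of this (my own sketch; the numbers are arbitrary): integrate a constant force and compare the momentum change seen in the lab frame with that seen in a frame moving at constant velocity $u$:

```python
# Impulse of a constant force F acting for time T = steps*dt on mass m,
# computed in the lab frame and in a uniformly moving frame.
m, F, dt, steps = 2.0, 3.0, 1e-3, 1000   # T = 1.0
v0, v = 0.0, 0.0
for _ in range(steps):
    v += (F / m) * dt                    # Euler integration of a = F/m

u = 5.0                                  # velocity of the second frame
dp_lab    = m * v - m * v0               # impulse in the lab frame
dp_moving = m * (v - u) - m * (v0 - u)   # impulse in the moving frame
# Both equal F*T = 3.0 up to floating-point error: the constant u cancels.
```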
For a non-inertial frame we can introduce a fictitious force $\mathbf F_f$ so the change in momentum measured in this frame becomes:
$$ \Delta \mathbf p = \int (\mathbf F_f(t) + \mathbf F(t))dt $$
But since the forces add linearly this splits into a fictitious impulse change and the real impulse change:
$$ \Delta \mathbf p = \int \mathbf F_f(t)dt + \int \mathbf F(t)dt $$
And the second term remains frame independent.
However if we consider relativistic effects then this is no longer true since relativistic speeds do not add linearly and the difference between two velocities is not frame independent. However if we use four-momentum instead and define the four-impulse to be the change in four-momentum then we will find the four-impulse is frame independent.
{
"domain": "physics.stackexchange",
"id": 59804,
"tags": "reference-frames, momentum"
}
Minimize $f(r,X)$ over all sets $X$ using Dynamic Programming
Question: If a set of numbers $a_1, a_2, \cdots, a_n$ $($such that each $a_i \in \mathbb{N} \cup \{0\})$ and an $r \in \mathbb{N}$ are given, find set $X = \{x_0, x_1, \cdots, x_r \ | \ x_0 = 0 < x_1 < \cdots < x_r = n\}$ such that $f(r,X) := \displaystyle\max_{j=0}^{r-1} \sum_{i = x_j}^{x_{j+1}-1} a_i$ is minimized over all possible $X$.
I can think of a brute force approach, that is, generate $\Theta ({{n-1} \choose {r-1}})$ sets $X$ and calculate the $\max$ for each $X$, each of which takes as many as $r$ steps. This is a very inefficient approach. How do I make it better? I feel like there is a way using DP. (Please feel free to edit the title if this is not DP).
This seems like a possible solution. What if we pre-compute $\displaystyle\Sigma_{i=j}^{j+k} {a_i}$ $(k \geq 0)$ for all $1 \leq j \leq n$ first and then create a DP with function $$\text{best}(p,q) = \text{best way of having } q \text{ intervals such that we end at } [p-x,p] \text{ for some } x \geq 0$$ Our goal is to find $\text{best}(n,r)$ and backtrack to find the intervals.
$\text{best}(p,q) = \min \{ \text{best}(p,q-1), \ \max(\text{best}(p-1,q-1), \text{dist}(p-1,p)), \ \max(\text{best}(p-2,q-1), \text{dist}(p-2,p)), \dots) \}$
where $\text{dist}(x,y)$ = $\displaystyle\Sigma_{i=x}^{y} a_i$ which is pre-computed. This probably works but the backtracking seems difficult and time-hungry.
I tried greedy and I could generate counter-examples right away. Was not very helpful.
Answer: Main idea: Instead of finding $X$ itself, find the minimum value for $f(r,X)$. After you do this, it is easy to find a minimal $X$. Simply greedily add elements to an interval until the sum of the interval is too large - once this happens, create a new interval at that position.
Finding the minimal value for $f(r,X)$ is a classic (maybe prototypical) binary search problem. The algorithm is as follows: binary search over the answer. Within your binary search, greedily add elements to an interval until the sum of the interval is higher than the answer - once this happens, create a new interval. (Note that this is the same method as is used to construct $X$.) At the end, check how many intervals you have formed - if it is $\leq r$ then it is possible to divide the array into $r$ intervals such that $f(r,X) \leq ans$. The time complexity is $O(NlogS)$, where $S$ is the sum of all $a_i$.
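A Python sketch of that binary search (my own implementation of the idea described above; the example values are mine):

```python
def min_max_interval_sum(a, r):
    """Minimum possible value of f(r, X), via binary search over the answer."""
    def intervals_needed(limit):
        # Greedily extend the current interval while its sum stays <= limit.
        count, cur = 1, 0
        for x in a:
            if cur + x > limit:
                count, cur = count + 1, x
            else:
                cur += x
        return count

    lo, hi = max(a), sum(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if intervals_needed(mid) <= r:
            hi = mid          # feasible: try a smaller maximum
        else:
            lo = mid + 1      # infeasible: the maximum must be larger
    return lo
```

For example, `min_max_interval_sum([7, 2, 5, 10, 8], 2)` gives `18` (split as `[7, 2, 5] | [10, 8]`); rerunning the greedy pass with that limit recovers a minimizing $X$.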
Although binary search works for all practical values, your problem statement allows elements from $\mathbb{N}$, which can be arbitrarily large. In that case we can use DP, fairly similar to the DP that you use. $best(p,q)$ can be the minimum $f(\cdot)$ taking $q$ intervals from the first $p$ elements. By transitioning in the same way as in your attempt, we get a trivial $O(N^3)$ solution.
We can also optimise this DP. One optimisation: $best(x,q)$ is monotonically increasing as $x$ increases, but $dist(x,p)$ is monotonically decreasing. So, we can use binary search to find the intersection of the two lines. The transition is now $O(logN)$ and the overall complexity is $O(N^2logN)$.
Important thing to note: this DP avoids the issues with your DP because it doesn't store the actual $X$ within the DP itself.
{
"domain": "cs.stackexchange",
"id": 17350,
"tags": "algorithms, dynamic-programming"
}
Is there a real life meaning about KMeans error?
Question: I am trying to understand the meaning of error in sklearn KMeans.
In the context of house price prediction, the error in linear regression could be thought of as the money difference per square foot.
Is there a real life meaning about KMeans error?
Answer: The K-means error gives you what is known as the total intra-cluster variance.
Intra-cluster variance is a measure of how spread out the points in a given cluster are.
The following cluster will have high intra-cluster variance
In the image below, even though the number of points are same as that of the image above, the points are densely distributed and hence will have lower intra-cluster variance.
The K-means error looks at the total of such individual cluster variances.
Suppose for a given data, if clustering 'A' forms clusters like the first image and clustering 'B' forms clusters like the second image, you will in most cases choose the second one.
This does not mean that the K-means error is a perfect objective to optimize when forming clusters, but it pretty much captures the essence of clustering.
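A tiny numeric illustration of that comparison (the points are made up): the same number of points, spread differently around the same center:

```python
def wcss(points, center):
    # Within-cluster sum of squared distances to the center -- what
    # sklearn's KMeans reports (summed over clusters) as inertia_.
    cx, cy = center
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points)

spread_cluster = [(-4, 0), (4, 0), (0, -4), (0, 4)]   # like the first plot
dense_cluster  = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # like the second plot
```

The denser cluster contributes far less to the total, which is exactly what the K-means objective prefers.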
Code used for cluster plot generation -
import numpy as np
from matplotlib import pyplot as plt
sparse_samples = np.random.multivariate_normal([0, 0], [[50000, 0], [0, 50000]], size=(1000))
plt.plot(sparse_samples[:, 0], sparse_samples[:, 1], 'b+')
axes = plt.gca()
axes.set_xlim(-1000, 1000)
axes.set_ylim(-1000, 1000)
plt.show()
dense_samples = np.random.multivariate_normal([0, 0], [[5000, 0], [0, 5000]], size=(1000))
plt.plot(dense_samples[:, 0], dense_samples[:, 1], 'r+')
axes = plt.gca()
axes.set_xlim(-1000, 1000)
axes.set_ylim(-1000, 1000)
plt.show()
In both cases, 1000 data points from a bivariate normal distribution are sampled and plotted. In the second case, the covariance matrix is changed to plot a denser cluster. np.random.multivariate_normal's documentation can be found here. Hope this helps!
{
"domain": "datascience.stackexchange",
"id": 5560,
"tags": "machine-learning, data-mining, k-means"
}
Speed of sound in a gas (for adiabatic perturbations in cosmology)
Question: In the book "Introduction to Cosmology" by Barbara Ryden, equation 4.57 gives the sound speed for adiabatic perturbations in a gas with pressure P and energy density $\epsilon$. The equation is as follows:
$c_{s}^2 = c^2 \frac{dP}{d\epsilon}$, where $c$ is the speed of light and $c_{s}$ is the speed of sound. The general equation for the speed of sound in a medium is given by:
$c_{s}^2 = \frac{dP}{d\rho}$, where $\rho$ is the mass density of the medium. For an interstellar medium with a non-relativistic gas, the energy density is approximately given by:
$\epsilon \approx \rho c^2$. So for such gases the first equation can be obtained by substituting for $\epsilon$ from the third equation. But how can one derive the first equation for a relativistic gas of photons? It seems that the author of the book treats the equation as valid for all cases, both relativistic and non-relativistic. Is there any other general way to derive the first equation without invoking a relativistic or non-relativistic distinction?
Answer: The general expression for the speed of sound is
$$
c_s^2 = \frac{\partial P}{\partial \epsilon}
$$
(in units where $c=1$). If there is a conserved particle number, then the derivative has to be taken at constant entropy per particle $s/n$. This relation can be derived from the equations of relativistic fluid dynamics, combined with thermodynamic identities, and the derivation can be found in textbooks that cover relativistic fluids, for example Weinberg's book on general relativity. (Note that Weinberg, as well as some other authors, uses the symbol $\rho$ to denote the energy density, not the mass density.)
The non-relativistic expression is
$$
c_s^2 = \left.\frac{\partial P}{\partial\rho}\right|_{s/n}
$$
where $\rho$ is the mass density of the fluid. This follows from the non-relativistic limit of the first formula, or directly from the usual (non-relativistic) Euler equation.
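For instance (a standard textbook result, added here for illustration), a non-relativistic ideal gas undergoing adiabatic changes obeys $P = K\rho^\gamma$ at constant $s/n$, so

```latex
c_s^2 = \left.\frac{\partial P}{\partial\rho}\right|_{s/n}
      = \gamma K \rho^{\gamma-1}
      = \frac{\gamma P}{\rho},
```

recovering the familiar $c_s = \sqrt{\gamma P / \rho}$.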
For a gas of photons both $P$ and $\epsilon$ are proportional to
$T^4$, but with a different coefficient of proportionality, $P=\epsilon/3$. As a result $c_s^2=1/3$ (in units of $c$). This result applies to any gas of weakly interacting massless particles.
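A quick numerical check of this radiation result: with $P = aT^4/3$ and $\epsilon = aT^4$, the derivative $dP/d\epsilon$ comes out to $1/3$ (the exact value of $a$ is irrelevant to the ratio):

```python
# Numerical check of the radiation result above: with P = a*T**4/3 and
# eps = a*T**4, the derivative dP/d(eps) is 1/3, i.e. c_s^2 = c^2/3.
a = 7.566e-16            # radiation constant in J m^-3 K^-4

def pressure(T):
    return a * T**4 / 3.0

def energy_density(T):
    return a * T**4

T, h = 1.0e4, 1.0e-2     # central finite difference around T
dP_deps = (pressure(T + h) - pressure(T - h)) / (energy_density(T + h) - energy_density(T - h))
print(dP_deps)  # 1/3 (in units of c^2)
```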
Note that sound is a collisional mode. This means that a cold gas of photons cannot support sound, but a hot gas of $\gamma,e^\pm$ has a sound mode. | {
"domain": "physics.stackexchange",
"id": 90011,
"tags": "thermodynamics, general-relativity, cosmology, acoustics, astrophysics"
} |
Custom layer not showing on Costmap2D | Question:
Hi all,
I configured move_base to include an additional layer produced by a PointCloud2:
Observation Source:
yellow_tape_layer:
observation_sources: barrier_tape_detection_scan
barrier_tape_detection_scan:
#sensor_frame: camera_rgb_optical_frame
data_type: PointCloud2
topic: /barrier_tape_detection_scan
observation_persistence: 100.0
marking: true
clearing: false
min_obstacle_height: -2.0
max_obstacle_height: 2.0
The point cloud shows correctly in rviz, transformations etc. are fine.
Note that I tried to manually set the sensor_frame as well - every second point cloud is published in another frame, which is why I tried to avoid that option - but neither case works. I even increased the observation_persistence, hoping that it would just ignore observations happening too quickly.
In general, I copied the configuration from our laser scanners producing the obstacle_layer:
enabling the layer in global and local costmap:
plugins:
- {name: obstacle_layer, type: "costmap_2d::VoxelLayer"}
- {name: yellow_tape_layer, type: "costmap_2d::VoxelLayer"}
- {name: inflation_layer, type: "costmap_2d::InflationLayer"}
...
yellow_tape_layer:
enabled: true
max_obstacle_height: 2.0
origin_z: 0.0
z_resolution: 2
z_voxels: 1
unknown_threshold: 15
mark_threshold: 0
combination_method: 1
track_unknown_space: true
obstacle_range: 2.5
raytrace_range: 3.0
publish_voxel_map: true
Which is why I assumed it to work - however, no matter the sensory input, nothing shows in the costmap visualized in rviz. (//edit: I also set combination_method for the laser to 1, of course.)
On startup, move_base happily greets me and says it's going to listen on "barrier_tape_detection_scan" - no leading slash - but it does the same for the lasers, also dropping the slash there. As I said, there are certainly points published and I can visualize them in rviz, yet they never appear in the costmap.
Anything else I can try? Should I post a rosbag/all config files?
ROS Indigo, Ubuntu 14.04 (old, but no time to upgrade...)
Originally posted by ItsFine on ROS Answers with karma: 16 on 2018-04-18
Post score: 0
Original comments
Comment by David Lu on 2018-04-18:
If you disable the other two layers, do you see anything? Does rostopic info /barrier_tape_detection_scan report a subscription from move_base?
Answer:
Mea culpa.
It turned out it worked from the beginning, it's just that the costmap is only initialized and viewed when the robot is actually following a move-base planned path, not due to movement by joystick...
Makes sense, actually.
Originally posted by ItsFine with karma: 16 on 2018-04-21
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 30681,
"tags": "navigation, move-base, costmap, ros-indigo"
} |
Calculating Voltage Drop for Part of a System Without Current (Resistance and Voltage are Known) | Question: I'm wondering if there is a way to calculate voltage drop across part of a system if you know resistance and the potential difference of the entire system as well as its subparts but don't know the current. It seems like there should be a way considering that voltage is joules per coulomb so voltage drop shouldn't depend on how many coulombs are flowing through the system.
Let's say you have an electrical distribution wire with 40,000 volts relative to ground, but an unknown current. Let's say that the insulated wire gets grounded somehow (by a tree falling on it, etc.) You would know that the initial voltage is 40,000V and the final voltage is 0V since the current flows to the ground. Could you figure out how much voltage was dropped by the insulation on the wire and then how much voltage was dropped by the tree, assuming that you know the resistance of both the insulation and the tree, but not the current flowing through the system? If so, how?
Note: You can't figure out the current using ohm's law because the current is limited by transformers.
Answer: You might not be able to use Ohm's Law directly to solve for the current of the system (as some fuse will almost instantaneously trip so that there isn't a short in the line), but you could use it indirectly to analyze the situation.
In this circumstance, it would in fact be true that the instantaneous current in both the tree and the insulation of the wire is the same, as this is an example of a single-loop "circuit" (used in a loose sense of the word), and by Kirchhoff's Junction Rule the current everywhere in a single-loop circuit is the same, in both the tree and the insulation, though unknown in this problem.
We can use Ohm's Law because the current in the tree, $i_t$ is equal to the current in the insulation, $i_i$. Therefore, since $V=iR$, $$\frac{V_t}{R_t} = \frac{V_i}{R_i}$$
This equation can be solved for the fraction $\frac{V_t}{V_i}$ and set equal to $\frac{R_t}{R_i}$, and in your question, you stated that you knew, or could find out, the resistances of both materials. Now you know the fractional electric potentials for both media.
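Putting numbers to this procedure (the resistances below are hypothetical stand-ins), the resistance ratio fixes how the known total splits between the two elements:

```python
# Series voltage divider: V_tree/V_ins equals R_tree/R_ins, and the two
# drops must add up to the full line potential. Resistances are made up.
R_tree = 10_000.0        # ohms, hypothetical
R_insulation = 1.0e6     # ohms, hypothetical
V_total = 40_000.0       # volts

V_tree = V_total * R_tree / (R_tree + R_insulation)
V_insulation = V_total * R_insulation / (R_tree + R_insulation)
print(V_tree + V_insulation)  # 40000.0 by construction
```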
Thus, if the total electric potential is known, $V_{total}$, you can find the potential drop in each one using the equation $V_{total} = V_t + V_i$, and voilà, you now know the potential drop across each element!
"domain": "physics.stackexchange",
"id": 50537,
"tags": "electricity, electrical-resistance, voltage"
} |
Reward negative derivative on linear regression | Question: I'm actually new to Data Science and I'm trying to make a simple linear regression with only one feature X ( which I added the feature log(X) before adding a polynomial features) on a motley dataset using Python an all the Data Science stack that comes with it (numpy, pandas, sci-kit learn, ...)
Here you can find a piece of code of my regression using scikitlearn:
import numpy as np
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import Lasso
from sklearn.pipeline import Pipeline

def add_log(x):
    # append log(X) as an extra feature column
    return np.concatenate((x, np.log(x)), axis=1)
# Fetch the training set
_X = np.array(X).reshape(-1, 1) # X = [1, 26, 45, ..., 100, ..., 8000 ]
_Y = np.array(Y).reshape(-1, 1) # Y = [1206.78, 412.4, 20.8, ..., 1.34, ..., 0.034]
Y_train = _Y
X_train = add_log(_X) if use_log else _X
# Create the pipeline
steps = [
('scalar', StandardScaler()),
('poly', PolynomialFeatures(6)),
('model', Lasso(alpha=alpha, fit_intercept=True))
]
pipeline = Pipeline(steps)
pipeline.fit(X_train, Y_train)
My feature X can go from 1 to ~80,000 and Y can go from 0 to ~2M.
One thing I know about the curve I should obtain is that it should always decrease, so the derivative should always be negative.
I make a little schema to explain what I expect vs what I have:
Therefore I would like to reward prediction where derivative is always negative even if my data suggest the opposite.
Is there a way to do that with sci-kit learn?
Or maybe I'm suggesting a bad solution to my problem and there is another way to obtain what I want ?
Thank you
Answer: When you use linear regression you always need to define a parametric function you want to fit. So if you know that your fitted curve/line should have a negative slope, you could simply choose a linear function, such as: y = b0 + b1*x + u (no polys!). Judging from your figure, the slope (b1) should be negative. The consequence will be that you probably will not get a great fit since the function is not very flexible. But you will get an easy-to-interpret result.
What you can do to improve performance in this case is to work on your features. You can center the features (subtract the mean) or scale them (divide by 1000 or so). However, since this is a linear transformation you will not gain much from this. Another option would be to do a log-log transformation (take logs of y and X). This will give you an interpretation such as "if X increases by 1%, y changes by b1%". The advantage is that "large" values become smaller, which gives a better fit on data with large(r) values. Since your data seem to be mostly positive, this could be an option. The model looks like: log(y) = b0 + b1*log(x) + u.
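A minimal sketch of that log-log fit, using synthetic decreasing data in place of the question's X/Y (an assumption); the fitted power law is monotone by construction, so its derivative never changes sign:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Fit log(y) = b0 + b1*log(x) with plain OLS on a synthetic
# decreasing curve standing in for the question's data.
rng = np.random.default_rng(42)
x = np.exp(rng.uniform(0.0, np.log(80_000.0), size=500))       # 1 .. ~80000
y = 1e6 * x ** -1.5 * np.exp(rng.normal(0.0, 0.1, size=500))   # decreasing + noise

ols = LinearRegression().fit(np.log(x).reshape(-1, 1), np.log(y))
b1 = ols.coef_[0]
print(b1)  # close to -1.5: "x up 1% -> y down about 1.5%"
```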
Another approach would be to see if some of your observations are "outliers" and cause your estimated function to be "wobbly". You can - for instance - define a quadratic model such as: y = b0 + b1*x + b2*x^2 + u, estimate the model, and detect outliers based on Cook's distance. However, this approach seems arbitrary since you would need to remove observations until you get the desired slope. It is not a really good idea to select data until the data fit what we want to see. It may only be an option if just a few observations cause trouble (as it seems to be the case in your plot).
Yet another possibility would be to "split" your data. Here I assume that only observations in some range cause trouble (in your figure the "low" x's) while the rest of the observations ("higher" x's) follow a linear trend or so. I had exactly the same problem recently. I had a linear trend for the largest part of my x's, while only a few observations had a highly non-linear pattern. I detected this using generalised additive models (GAM). Here is a tutorial for a Python implementation.
This was my result:
The figure shows that there is a mostly linear trend for the largest part of the data (lower 90% here). Only the upper 10% caused trouble. So I estimated a linear model, but added an interaction term to allow for a separate slope for the upper 10% of data. By doing so I got a reasonable linear estimate for the slope of the lower 90%, while avoiding a "biased" estimate by the "wobbly" upper 10% of data. This works as follows: you generate a dummy/indicator variable which equals I=1 for the "wobbly" data and I=0 otherwise. Then you estimate a linear model like: y = b0 + b1*X + b2*I + b3*I*X + u. The result is that you get an extra intercept (b2) and slope (b3) for the "wobbly" part of the data indicated by I. This in turn means that you also get an extra slope for the non-wobbly part of the data (b0, b1).
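The interaction-term model can be sketched on synthetic data (the 90% cutoff and all coefficients below are made up for illustration); the indicator absorbs the "wobbly" range, leaving b1 as a clean slope for the rest:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# y = b0 + b1*x + b2*I + b3*I*x: indicator I gives the "wobbly" upper
# range its own intercept and slope, protecting the main slope b1.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 100.0, size=1000)
I = (x > 90.0).astype(float)                  # hypothetical upper-10% cutoff
y = 5.0 - 0.2 * x + I * (3.0 + 0.5 * x) + rng.normal(0.0, 0.5, size=1000)

design = np.column_stack([x, I, I * x])
fit = LinearRegression().fit(design, y)
b1 = fit.coef_[0]
print(b1)  # close to -0.2, the slope of the non-wobbly 90%
```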
Another thing: Why do you use lasso? Lasso is used to "shrink" features/variables. You only have one variable, so there is no need to shrink it. I would go for ordinary least squares (OLS), so a simple linear regression. | {
"domain": "datascience.stackexchange",
"id": 5579,
"tags": "scikit-learn, linear-regression"
} |
How can I launch rviz on a remote machine? | Question:
I have a launch file that is meant to kick off freenect on a robot(zen) and rviz on a desktop computer(thetis). I understand that rviz is graphical and needs to use thetis' screen. After some googling I discovered a recommendation saying that I could do this by placing export DISPLAY=:0 in my env.sh file (on thetis). Sadly it does not work.
The launch file starts promisingly and produces:
started roslaunch server http://zen:40724/
remote[thetis-0] starting roslaunch
remote[thetis-0]: creating ssh connection to thetis:22, user[paul]
launching remote roslaunch child with command: [env ROS_MASTER_URI=http://zen:11311 /opt/ros/kinetic/env.sh roslaunch -c thetis-0 -u http://zen:40724/ --run_id d0af37a2-8a9c-11e6-9197-247703b09b04]
remote[thetis-0]: ssh connection created
But later it reports:
[thetis-0]: [Zen-1] process has died [pid 8018, exit code -6, cmd /opt/ros/kinetic/lib/rviz/rviz __name:=Zen __log:=/home/paul/.ros/log/d0af37a2-8a9c-11e6-9197-247703b09b04/Zen-1.log].
log file: /home/paul/.ros/log/d0af37a2-8a9c-11e6-9197-247703b09b04/Zen-1*.log
[thetis-0]: all processes on machine have died, roslaunch will exit
remote[thetis-0]: QXcbConnection: Could not connect to display
[Zen-1] process has died [pid 8018, exit code -6, cmd /opt/ros/kinetic/lib/rviz/rviz __name:=Zen __log:=/home/paul/.ros/log/d0af37a2-8a9c-11e6-9197-247703b09b04/Zen-1.log].
log file: /home/paul/.ros/log/d0af37a2-8a9c-11e6-9197-247703b09b04/Zen-1*.log
My launch file is:
<launch>
<machine name="thetisx" address="thetis" env-loader="/opt/ros/kinetic/env.sh" user="fred"/>
<include file="$(find freenect_launch)/launch/freenect.launch"/>
<node machine="thetisx" pkg="rviz" type="rviz" name="Zen" output="screen">
</node>
</launch>
Doing some investigation, I discovered that if, from zen, I do:
ssh fred@thetis
password: *****
rosrun rviz rviz
I get a failure and core dump saying QXcbConnection: Could not connect to display
However if I do:
ssh fred@thetis
password: *****
export DISPLAY=:0
rosrun rviz rviz
then rviz successfully appears on the remote machine.
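For reference, that working manual sequence corresponds to a custom env-loader script along these lines (a sketch only; whether roslaunch picks up DISPLAY this way is exactly what is at issue here, and the guard keeps the script runnable on machines without this ROS install):

```shell
#!/usr/bin/env bash
# Hypothetical custom env-loader reproducing the manual steps above:
# set DISPLAY before anything graphical starts, then source the ROS
# environment and hand off to the command roslaunch asks us to run.
export DISPLAY=:0
# guard so the sketch still runs on machines without /opt/ros/kinetic
if [ -f /opt/ros/kinetic/setup.bash ]; then
    source /opt/ros/kinetic/setup.bash
fi
exec "$@"
```

The machine tag's env-loader attribute would then point at this file instead of /opt/ros/kinetic/env.sh.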
So why does putting export DISPLAY=:0 in the env.sh file not work and how can I launch rviz remotely ?
Originally posted by elpidiovaldez on ROS Answers with karma: 142 on 2016-10-04
Post score: 3
Original comments
Comment by shoemakerlevy9 on 2016-12-05:
I would also like to know an answer to this. I tried both setting the display and the -X option when I ssh in.
Answer:
Try ssh fred@thetis -X
Originally posted by rastaxe with karma: 620 on 2016-10-05
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by elpidiovaldez on 2016-10-05:
Thanks, but -X makes no difference. The crucial thing is to set DISPLAY. I already know how to run rviz remotely via ssh. I want to run it remotely from a launch file. That should work by setting DISPLAY in the env.sh for the remote machine, but it doesn't. | {
"domain": "robotics.stackexchange",
"id": 25894,
"tags": "ros, rviz, roslaunch, remote"
} |
Destination encoding for ros image to opencv image conversion | Question:
I am trying to create an OpenCV image from a ROS image using this. I am subscribing to a topic /camera/depth/img_rect_color which is of the type sensor_msgs/Image with 32FC1 encoding. How do I specify the destination encoding in the following line:
cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::32FC1);
If I specify 32FC1, it gives the error :
error: invalid suffix "FC1" on integer constant
cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::32FC1);
^
/home/ubuntu/tst_ws/src/roscv_conv/src/roscv_conv_node.cpp: In member function 'void ImageConverter::imageCb(const ImageConstPtr&)':
/home/ubuntu/tst_ws/src/roscv_conv/src/roscv_conv_node.cpp:39:71: error: expected unqualified-id before numeric constant
How can I rectify this?
Originally posted by skr_robo on ROS Answers with karma: 178 on 2016-08-10
Post score: 0
Answer:
Rectified the issue by changing the line as:
cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::TYPE_32FC1);
Originally posted by skr_robo with karma: 178 on 2016-08-10
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 25494,
"tags": "ros, opencv, sensor-msgs, depth-image"
} |
Is it normal to have a smaller bam file after merging bam files? | Question: I have 2 bam files that belong to the same sample and I merged them with samtools merge. And after merging, I realized that the merged version is a bit smaller than the sum of the other two separate files' size. I also wonder if I merged "the fastq" files of this sample first and convert it to bam would the result be the same?
Answer: If you merge a lot of BAM files you lose the per-file overhead of the headers; depending on the size of your BAM files this can be a significant difference or not.
With FASTQ files there should be less difference (as they don't have an overall header). However, the compression ratio can change depending on how you zip your files. [EDIT: I just tested it by merging two fastq.gz files that were 357MB and 562MB - the result was a fastq.gz that was smaller than the sum, at only 826MB]
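The per-stream overhead effect can be seen in miniature with the standard library's gzip (toy data, not real reads; BAM/BGZF adds its own header block on top of this):

```python
import gzip
import random

# One merged compressed stream pays the gzip header/trailer and the
# compressor warm-up once, so it comes out smaller than the sum of the
# two separately compressed parts.
random.seed(0)
part1 = bytes(random.choice(b"ACGT") for _ in range(50_000))
part2 = bytes(random.choice(b"ACGT") for _ in range(50_000))

separate = len(gzip.compress(part1)) + len(gzip.compress(part2))
merged = len(gzip.compress(part1 + part2))
print(separate, merged)  # merged < separate
```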
No, the BAM header information would be lost in the conversion from BAM -> FASTQ -> BAM. | {
"domain": "bioinformatics.stackexchange",
"id": 1907,
"tags": "sam, samtools"
} |
Using MTC to Plan a Circular Path | Question:
Hi,
I'm currently trying to use the ros2 branch of MTC (MoveIt Task Constructor) to plan a circular motion to open a door. I know that it's possible to use the MoveTo or MoveRelative stages to make an arm rotate around the TCP frame by just specifying the desired goal orientation in a PoseStamped message (MoveTo) or angular rotation in a TwistStamped message (MoveRelative) - as shown in the Cartesian demo code.
So what I'm thinking of doing is adding a fixed frame in the robot URDF (let's call it the door_hinge_frame) that is a child of one of the robot's links (not necessarily the TCP frame though). The door_hinge_frame would be located at the physical joint of the door when the TCP is in the desired 'opening' pose. Then, by just specifying the MoveTo or MoveRelative message frame_id to the door_hinge_frame, and specifying the goal pose to 90 degrees, a circular motion should be planned that opens the door.
So a few questions:
Is this the correct way to perform this motion or is there an easier way? If so, what would that way be?
While trying to test this approach out, the resulting motion via MTC is not the one that I expected. Instead of performing a circular motion around the door_hinge_frame, the arm still tries to rotate around the TCP frame. I've also tried to set the stage IK frame (via stage->setIKFrame()) to the door_hinge_frame, but that results in non-circular trajectories around the hinge joint.
In the above example, the TCP frame should move as shown in the left-hand part of the image below. But let's say I want the desired motion to be more like the right-hand part. How would I achieve that?
If I was using the C++ MoveGroupInterface API, I could achieve this by sending a static transform to the TF tree of where the door hinge frame is located relative to a world frame (in which case, I'd just delete the door_hinge_frame in the robot URDF). Then I could create 90 or so waypoints that define the pose of the TCP relative to the door hinge frame such that the orientation of the TCP is as shown in the right-hand part of the image. Next, I could call the computeCartesianPath function with the waypoints and generate the trajectory. However, the CartesianPath class in MTC does not include a 'plan' function that allows passing in waypoints. What I could do instead is just create a Serial container of MoveTo stages in which I define the 90 or so waypoints. However, then the arm would accelerate/decelerate to each of the 90 waypoints instead of creating one trajectory where the arm would only accelerate/decelerate once...
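The waypoint generation described in that last approach can be sketched in isolation (planar x, y, yaw only; the hinge position and radius are hypothetical):

```python
import math

# TCP poses on a 90-degree arc around a door hinge, with yaw tracking
# the arc tangent so the gripper stays square to the door face.
hinge_x, hinge_y = 1.0, 0.0    # door hinge in some fixed world frame
radius = 0.8                   # hinge-to-handle distance in meters
steps = 90

waypoints = []
for i in range(steps + 1):
    ang = math.radians(90.0 * i / steps)        # sweep 0 -> 90 degrees
    x = hinge_x + radius * math.cos(ang)
    y = hinge_y + radius * math.sin(ang)
    yaw = ang + math.pi / 2.0                   # tangent to the arc
    waypoints.append((x, y, yaw))

print(waypoints[0], waypoints[-1])  # start and end of the arc
```

Each (x, y, yaw) would then be converted to a PoseStamped for computeCartesianPath or a per-waypoint stage.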
Any help with answering these questions would be much appreciated!
Note that I'm working in a Docker container running Ubuntu 20.04 and ROS Galactic (I've also seen this issue in a Docker container running Ubuntu 22.04 and ROS Humble). My host laptop runs Ubuntu 18.04.
Originally posted by swiz23 on ROS Answers with karma: 86 on 2022-05-03
Post score: 2
Original comments
Comment by fvd on 2022-08-13:
This answer discusses the "conventional" waypoint method you described. Note that the Pilz industrial planner offers a CIRC option. You could extend MTC with a CircularPath class and use it.
This question is quite broad, so it might not get a proper response. Please self-answer when you find a satisfactory solution to your problem.
Answer:
There's a good chance ros-planning/moveit#3197 and ros-planning/moveit_task_constructor#380 could improve things for this use-case.
Originally posted by gvdhoorn with karma: 86574 on 2022-08-28
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 37635,
"tags": "ros, ros2, moveit, cartesian"
} |
Is there a deep learning method for 3D labels? | Question: As the question says, I want to feed labels into a neural net that are three dimensional. Let's say that I have 3 possible labels and each one of my data points corresponds to a percentage of those labels. e.g, my first datapoint contains 20% of label A, 30% of label B, and 50% of label C.
Is there any architecture able to deal with this shape of label data?
Answer: Since the probabilities sum to one, you can simply treat it as a multi-class problem and use a network with Softmax at the end.
Last layer and compile -
model.add(keras.layers.Dense(3, activation="softmax"))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
Metrics - Accuracy is not appropriate. Define a custom metric based on the interpretation of the 3 probabilities.
The labels will be as per the probability-
e.g. This is for MNIST 10 digits -
Digit 1 - [0.05, 0.55, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05]
Prediction - [0.064, 0.356, 0.059, 0.069, 0.068, 0.050, 0.044, 0.122, 0.064, 0.101]
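Why soft targets like these work with categorical cross-entropy can be checked directly: H(t, p) = -sum_i t_i*log(p_i) is minimized over predictions p exactly at p = t, so training pushes the network to reproduce the label proportions:

```python
import numpy as np

# Soft label: 20% A, 30% B, 50% C. Cross-entropy against the label
# itself is lower than against any other distribution (e.g. uniform).
t = np.array([0.2, 0.3, 0.5])

def xent(target, pred):
    return float(-np.sum(target * np.log(pred)))

loss_at_target = xent(t, t)
loss_at_uniform = xent(t, np.array([1/3, 1/3, 1/3]))
print(loss_at_target, loss_at_uniform)  # the first is smaller
```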
Code for MNIST - Colab link | {
"domain": "datascience.stackexchange",
"id": 7947,
"tags": "deep-learning, multilabel-classification, labels"
} |
FFT based symbol synchronization for digital demodulation | Question: Where can I read more about "slow search" methods for symbol synchronization based on FFT for extracting a clock signal from the modulated signal?
I have read here (8.7 Symbol synchronization)
One method of synchronization is to extract a harmonic of the symbol
frequency from the received signal. Then a local symbol clock can be
synchronized by methods that are very similar to the phase-locked
loops used to recover the carrier phase. If necessary, a start-up
procedure, such as one that uses a slow search, can be used for
initialization. Synchronization is then maintained by locking a
feedback loop to a clock signal that is extracted from the modulated
waveform.
Then here I found a diagram which seems to be the kind of algorithm I am looking for:
The timing tone can be extracted by ... computing DFT at the symbol
frequency (i.e., a single point of the DFT output is needed for each
data block)
There must be more literature about this technique? I am not interested in high-performance methods for synchronization, but rather recovering the clock signal by any means without conserving memory or processing time.
I figure the basic technique is to
Pass the signal through a matched filter
"Condition" the signal (and make it purely real) by computing the magnitude (as in the diagram)
Passing the signal through a DFT computation
Searching the DFT bins for the highest power frequency
Use the discovered frequency of the DFT bin to generate a local timing signal
Use the discovered phase of the DFT bin (as in the diagram) to adjust the local timing signal offset
Answer: Symbol timing synchronization seems to be a complex topic although once you get some basic principles right, it all makes simple sense. The method you have referred to is known as Digital Filter and Square Timing Recovery$\ ^{[1]}$, also referred to as Oerder and Meyr algorithm.
EDIT:
The later steps in your summary are not correct. There is no search; the timing phase is extracted from the first harmonic. Everything else except the DC term is zero because the signal is bandlimited.
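A numerical sketch of that idea: square the magnitude of the oversampled signal, take the single DFT bin at the symbol rate, and read the timing offset from its phase. Half-sine pulses and a circularly wrapped symbol stream are simplifying assumptions that make the estimate exact here; a real receiver matched-filters first and averages over blocks:

```python
import numpy as np

# Oerder & Meyr style "square timing recovery" on a toy BPSK signal.
rng = np.random.default_rng(0)
N, K = 8, 256          # samples per symbol, number of symbols
tau = 2.7              # true timing offset in samples

k = np.arange(K * N)
sym = rng.choice([-1.0, 1.0], size=K)
idx = np.floor((k - tau) / N).astype(int) % K        # wrapped symbol index
x = sym[idx] * np.sin(np.pi * ((k - tau) % N) / N)   # half-sine pulses

s = np.sum(np.abs(x) ** 2 * np.exp(-2j * np.pi * k / N))  # DFT bin at 1/T
tau_hat = (-(N / (2 * np.pi)) * np.angle(s) + N / 2) % N  # N/2: envelope peaks mid-symbol
print(tau_hat)  # ≈ 2.7
```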
[1]: M. Oerder and H. Meyr, "Digital filter and square timing recovery," in IEEE Transactions on Communications, vol. 36, no. 5, pp. 605-612, May 1988. | {
"domain": "dsp.stackexchange",
"id": 6344,
"tags": "digital-communications, modulation, synchronization"
} |
Helmholtz coil - Define the magnetic field | Question: I have to calculate the magnetic field at point x = R/2 (R is the separation between the two coils).
I see why, for one coil, we only take the component along x from the B field as the values along y "cancel" themselves.
But if we add another coil on the other side, the values along x should also cancel themselves (since they have the same intensity at x = R/2)!
I really don't get why it's not 0 at point x = R/2. Here's a scheme of my understanding (with in red the dB field from the other coil)
If something is unclear please let me know.
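As a numerical sanity check of the superposition in question (all coil values below are hypothetical), the standard single-loop on-axis formula summed for two coaxial loops shows the midpoint field only cancels when the currents are opposed:

```python
import math

# On-axis field of one circular loop, summed for two coaxial loops a
# distance d apart, evaluated at the midpoint x = d/2.
mu0 = 4.0e-7 * math.pi
a = 0.1      # coil radius in meters (hypothetical)
d = 0.1      # coil separation in meters (hypothetical)
I = 1.0      # current in amperes

def b_loop(z, current):
    # on-axis field of one circular loop at axial distance z
    return mu0 * current * a**2 / (2.0 * (a**2 + z**2) ** 1.5)

same = b_loop(d / 2.0, I) + b_loop(d / 2.0, I)        # co-directed currents
opposed = b_loop(d / 2.0, I) + b_loop(d / 2.0, -I)    # opposite currents
print(same, opposed)  # same > 0, opposed = 0.0
```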
Answer: I believe that you are talking about two parallel circular coils separated by a distance $R$ along the $x$ axis, right? If this is the case the field would cancel if the currents circulating in the two coils had opposite directions, and that is because the lines of the magnetic field go in the same direction on both sides of a coil (for large $x$) as depicted in this image:
If you put another coil in the disposition explained before the result you'd get would be:
Now it's clear that in the middle point the fields would have the same direction and the total field would be non-zero. | {
"domain": "physics.stackexchange",
"id": 19178,
"tags": "homework-and-exercises, electromagnetism, magnetic-fields"
} |
How does Isoxaben kill newly germinating seeds, with almost no effects on established ones? | Question: Isoxaben (N-[3-(1-ethyl-1-methylpropyl)-1,2-oxazol-5-yl]-2,6-dimethoxybenzamide) is a pre-emergent herbicide used in landscape beds before the application of mulch (my use for it, anyway). It kills the weeds as they germinate, while not harming most established plants.
Here's the mode of action, as stated by Dow AgroSciences:
Isoxaben belongs to the Benzamide family of herbicides and inhibits cellulose biosynthesis in the cell walls of susceptible
weeds (WSSA group 21). This means that cells cannot divide during the reproductive cycle; therefore, they cannot grow,
causing death. While cell division does not occur, this mode of action should not be confused with mitotic inhibition that
occurs with dinitroaniline herbicides.
How does this only affect germinating weeds, while not noticeably affecting weeds that have emerged from the soil (I have to hit those with another herbicide)?
Answer: As with your other question on mitotic inhibitors ("How does Trifluralin kill newly germinating seeds, with almost no effects on established ones?"), inhibiting cellulose synthesis will inhibit growth. Hence, an established plant will stand its ground, but newly germinating plants cannot grow without cellulose synthesis and hence will fail to germinate. | {
"domain": "biology.stackexchange",
"id": 3432,
"tags": "botany, toxicology"
} |
What type of rivet is this? | Question: This is a pivot point on a linkage in my coffee table. It appears to be a rivet with a tail that was pressed with a cross-shaped punch to widen it. I haven't see something like this before. Is there a name for this? What is the benefit of cross-punching it?
Answer: It was hit with a cross-shaped punch to get that side or head to expand.
Usually done with a single blow for speed and low cost. Other choices can be nuts and bolts with nylock nuts so they don’t come loose easily. | {
"domain": "engineering.stackexchange",
"id": 4655,
"tags": "fasteners, rivets"
} |