| anchor | positive | source |
|---|---|---|
Use of the term first order dependency | Question: In a question I am doing it says:
Show explicitly that the function $$y(t)=\frac{-gt^2}{2}+\epsilon t(t-1)$$ yields an action that has no first order dependency on $\epsilon$.
Also my textbook says that
[...] if a certain function $x_0(t)$ yields a stationary value of $S$ then any other function very close to $x_0(t)$ (with the same endpoint values) yields essentially the same $S$, up to first order of any deviations.
I am confused about the first-order bit. In the first case, does it mean that $\frac{\partial S}{\partial \epsilon}=0$, or that it does not depend on $\epsilon$ but may take some other constant value? In the second case, does it mean likewise or something different? Please explain.
Answer: Hints:
The action is
$$\tag{A} S[y]:=\int_0^1 \! dt ~L(y,\dot{y}), \qquad L(y,\dot{y})~:=~\frac{m}{2}\dot{y}^2 -mgy, $$
with Dirichlet boundary conditions
$$\tag{B} y(0)~=~0 \quad\text{and}\quad y(1)~=~-\frac{g}{2}. $$
Calculate explicitly the composed function
$$\tag{C} s(\epsilon)~:=~ S[y_{\epsilon}] , $$
where
$$\tag{D} y_{\epsilon}(t)~:=~-\frac{gt^2}{2}+\epsilon t(t-1).$$
Check that the virtual paths (D) satisfy the Dirichlet boundary conditions (B). Why do we need to check that?
Show explicitly that the function $s(\epsilon)$ has no first order dependence on $\epsilon$. What is the physical significance of this fact?
References:
David Morin, The Lagrangian Method, Chap 6, Lecture notes, 2007; Exercise 6.30. | {
"domain": "physics.stackexchange",
"id": 40658,
"tags": "homework-and-exercises, lagrangian-formalism, terminology, variational-principle, action"
} |
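Though not part of the original hints, the claim can be sanity-checked numerically: evaluate $s(\epsilon)$ by quadrature and confirm that its odd (first-order) part in $\epsilon$ vanishes while the even (second-order) part does not. The values $m=1$, $g=9.8$, the grid size, and the step $\epsilon=0.01$ are arbitrary choices for illustration.

```python
# Numerical sanity check (not from the original hints): s(eps) = S[y_eps]
# should have no first-order dependence on eps.
def action(eps, m=1.0, g=9.8, n=20000):
    """Trapezoid-rule estimate of S = integral_0^1 (m/2) ydot^2 - m g y dt."""
    h = 1.0 / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        ydot = -g * t + eps * (2 * t - 1)        # derivative of -g t^2/2 + eps t(t-1)
        y = -g * t * t / 2 + eps * t * (t - 1)
        lagrangian = 0.5 * m * ydot ** 2 - m * g * y
        weight = 0.5 if i in (0, n) else 1.0     # trapezoid endpoint weights
        total += weight * lagrangian * h
    return total

s0 = action(0.0)
eps = 0.01
first_order = (action(eps) - action(-eps)) / 2             # odd part: should vanish
second_order = (action(eps) + action(-eps) - 2 * s0) / 2   # even part: nonzero
```

Analytically, $s(\epsilon) = \frac{mg^2}{3} + \frac{m\epsilon^2}{6}$, so $s'(0)=0$: the first-order variation vanishes because the unperturbed path solves the equation of motion.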
How do I find the longitude of the subsolar point? | Question: On December 21, 2018 at 2:23pm PST I want to be standing as directly under the sun as humanly possible. Obviously the latitude of that point will be the Tropic of Capricorn. Assuming I have the right ascension for the sun at that time how would I know what line of longitude to stand on? Please use simple non-technical terms if possible.
Answer: In case someone stumbles across this question and wants to know the math here's what I've been able to figure out.
All right ascensions (RA) are distances, in time, east of the vernal equinox, given in hours, minutes, and seconds. The location of the vernal equinox depends on the standard used when giving the RA. For instance, J2000 uses the location of the equinox at noon on January 1, 2000. The equinox is the imaginary line formed by the intersection of the equatorial plane and the ecliptic plane on the side of the planet where the sun was moving from below the equator to above it.
To begin finding the longitude of an object you need to compute the GMST or GAST for the date and time the measurement was given. You then subtract the GMST from the RA and multiply the result by 15 to get the longitude of the object. | {
"domain": "physics.stackexchange",
"id": 50081,
"tags": "astronomy, geometry"
} |
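The last step described above can be sketched in a few lines. Both RA and GMST are taken in hours (an assumption made to match the "multiply by 15" rule), and the sample values below are invented for illustration:

```python
# Hedged sketch of the arithmetic the answer describes: longitude of the
# point under a body is (RA - GMST) hours, at 15 degrees per hour.
# Positive = east, negative = west.
def subsolar_longitude(ra_hours, gmst_hours):
    lon = (ra_hours - gmst_hours) * 15.0
    # wrap into [-180, 180)
    lon = (lon + 180.0) % 360.0 - 180.0
    return lon
```

For example, an RA of 18h with a GMST of 20h puts the subsolar point at longitude 30 degrees west.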
How is the Time Derivative of the Electric Field Equal to the Current Density in Gaussian Units? | Question: The microscopic form of Ampere's law with the Maxwell addition in Gaussian units states,
\begin{equation}
\nabla \times \vec{B} = \frac{1}{c} \left ( 4 \pi \vec{J} + \frac{\partial \vec{E}}{\partial t} \right ),
\end{equation}
where $\vec{B}$ is the magnetic flux density (in gauss); $\vec{J}$ is the free current density (in statC cm$^{-2}$ s$^{-1}$); and $\vec{E}$ is the electric field (in statV cm$^{-1}$).
How is it that the free current density term, $\vec{J}$, and the time derivative of the electric field, $\frac{\partial \vec{E}}{\partial t}$, have the same Gaussian units? I see how this works out in CGS, but not if I try to stay in purely Gaussian units.
Answer: A statvolt is a statcoulomb per centimeter, because the electrostatic potential of a point charge in Gaussian units is $\varphi=q/r$. | {
"domain": "physics.stackexchange",
"id": 61762,
"tags": "electromagnetism, conventions, units, si-units, unit-conversion"
} |
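The unit bookkeeping can be checked mechanically. Writing each Gaussian quantity as exponents of (g, cm, s) — with statC = g$^{1/2}$ cm$^{3/2}$ s$^{-1}$ from Coulomb's law — confirms that $\partial\vec{E}/\partial t$ and $\vec{J}$ share the same dimensions. This is a sketch, not from the original answer:

```python
from fractions import Fraction as F

# Dimensions as (gram, cm, second) exponents. statC follows from
# Coulomb's law F = q^2/r^2, i.e. [q]^2 = erg*cm.
def mul(a, b): return tuple(x + y for x, y in zip(a, b))
def div(a, b): return tuple(x - y for x, y in zip(a, b))

erg   = (F(1), F(2), F(-2))         # g cm^2 s^-2
cm    = (F(0), F(1), F(0))
sec   = (F(0), F(0), F(1))
statC = (F(1, 2), F(3, 2), F(-1))   # g^1/2 cm^3/2 s^-1

statV = div(erg, statC)             # statvolt = erg per statcoulomb
E     = div(statV, cm)              # electric field in statV/cm
dEdt  = div(E, sec)
J     = div(div(statC, mul(cm, cm)), sec)   # statC cm^-2 s^-1
```

In particular, `statV == div(statC, cm)` reproduces the answer's point that a statvolt is a statcoulomb per centimeter.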
String compression using repeated character counts | Question:
Implement a method to perform basic string compression using the counts of
repeated characters. For example, the string aabcccccaaa would become
a2b1c5a3. If the "compressed" string would not become smaller than the original string, your method should return the original string.
Can this be implemented in a better way performance-wise?
package string;
public class CompressChar {
int[] seq = new int [256];
public String compressString(String str){
StringBuffer strComp = new StringBuffer();
for( char c : str.toCharArray()){
seq[c]++;
}
for (char c : str.toCharArray()){
if(seq[c]>0){
strComp.append(c).append(seq[c]);
seq[c]=0; // so that it is not appended again when the char reoccurs
}
}
if(str.length()<strComp.length()){
return str;
}
return strComp.toString();
}
public static void main(String[] args) {
CompressChar ch = new CompressChar();
System.out.println(ch.compressString("abbcdrfac"));
}
}
Answer: Your solution here is wrong, it produces incorrect results when it compresses chars that are in two different places in the string:
For the input: aaaaabbbbbcdrfaaaaaaccccc your program produces:
a11b5c6d1r1f1
when it should produce:
a5b5c1d1r1f1a6c5
Because the a and c repeat in two different places, you include them all in the first count.
Since this is homework, writing out the full solution would be counter-productive, but, if you have:
StringBuilder sb = new StringBuilder();
and two variables to contain the previous char and the previous char count, you could just go through each character in the string: if it's the same as the previous char, you increment the count. If it's different, you add the previous char and its count to the StringBuilder, and reset the previous and count to 1.
Then, when your loop is done, if the StringBuilder is smaller than the input, return the StringBuilder version.
This type of compression is called 'run length encoding' by the way. | {
"domain": "codereview.stackexchange",
"id": 10354,
"tags": "java, performance, strings, compression"
} |
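A minimal sketch of the single-pass approach the answer outlines (track the previous character plus its run count, flush on change), written in Python rather than Java for brevity:

```python
# Run-length encoding sketch: one pass, previous char + run count.
def compress(s):
    if not s:
        return s
    out = []
    prev, count = s[0], 1
    for ch in s[1:]:
        if ch == prev:
            count += 1
        else:
            out.append(prev + str(count))   # flush the finished run
            prev, count = ch, 1
    out.append(prev + str(count))           # flush the final run
    compressed = "".join(out)
    # return the original if compression does not shrink it
    return compressed if len(compressed) < len(s) else s
```

Because runs are flushed as they end, repeated characters in different places stay separate, which fixes the bug the answer points out.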
99 times drunk on Java | Question: This is a rags-to-riches attempt from this question, based on the popular 99 Bottles of Beer, by (lightly) using Java 8's stream-based processing.
Any room for improvement in terms of readability? I can only think of two concerns:
It's not immediately obvious what the placeholders for each Line value are expecting.
I need to skip() by HEADER_COUNT to trim away the... header.
public final class Bottles {
enum Line {
BOTTLES("%s bottle%s of beer"),
ON_WALL_PREFIX("%s on the wall, %s."),
ON_WALL_SUFFIX("%s, %s on the wall.");
private final String pattern;
private Line(String pattern) {
this.pattern = pattern;
}
public String with(Object first, Object second) {
return String.format(pattern, first, second);
}
}
private static final int HEADER_COUNT = 2;
private static final long ALCOLISM_LEVEL = 99;
private static final String PASS_AROUND = "Take one down and pass it around";
private static final String BUY_MORE = "Go to the store and buy some more";
private Bottles() {
// empty
}
public static void main(String[] args) {
getLyrics(ALCOLISM_LEVEL).forEach(System.out::println);
}
public static List<String> getLyrics(long target) {
return Stream.concat(LongStream.rangeClosed(0, target)
.map(i -> target - i)
.mapToObj(Bottles::toWord)
.flatMap(Bottles::getVerse)
.skip(HEADER_COUNT),
Stream.of(Line.ON_WALL_SUFFIX.with(BUY_MORE, toWord(target))))
.collect(Collectors.toList());
}
private static String toWord(long i) {
return Line.BOTTLES.with(i == 0 ? "no more" : i, i == 1 ? "" : "s");
}
private static Stream<String> getVerse(String value) {
String last = Line.ON_WALL_PREFIX.with(value, value);
return Stream.of(Line.ON_WALL_SUFFIX.with(PASS_AROUND, value), "",
Character.toUpperCase(last.charAt(0)) + last.substring(1));
}
}
Answer: In a word, I would say that your program feels… discombobulated. I see recognizable bits and pieces of "99 Bottles of Beer", but it's not obvious how it all fits together.
Overall, I'm not sure that it is much of an improvement over the question that inspired this one. Naturally, I'm partial to my answer, which has the advantage of putting all of the conditionals right at the point of use by using the right tools for complex string substitutions. You can streamify the for-loop in my answer, if you like.
That said, I think there are ways to improve a stream-based implementation.
enum Line isn't doing much, and makes your code incoherent. "%s bottle%s of beer" is not really a line — it's just a phrase — and you're calling it a "line" just to reuse your wrapper for String.format(). The formatted BOTTLES, in turn, gets substituted into ON_WALL_PREFIX and ON_WALL_SUFFIX, so they are clearly not peers. Furthermore, you still have PASS_AROUND and BUY_MORE as constants defined elsewhere.
Defining string constants instead of just using string literals isn't necessarily better. It just makes your eyes jump around to figure out what's going on.
Your getVerse() is weird: it doesn't produce what I would think of as a verse, delimited by the paragraph breaks. Rather, it produces the end of one verse and the beginning of the next, unified by the fact that they have the same number of bottles.
In getLyrics(), using a stream to generate the sequence 99, 98, …, 1, 0, 99 is awkward. You have to start with an ascending count, reverse it, skip half of a weirdly delimited "verse", and concatenate a special case at the end that kind of looks like getVerse().
If you're using streams, why bother converting the lyrics to a List?
It's easier (though not necessarily more efficient) to lowercase an entire string than to uppercase just the first letter.
ALCOHOLISM_LEVEL was misspelled.
public final class Bottles {
private static final long ALCOHOLISM_LEVEL = 99;
private final long n, replenishment;
private Bottles(long stock, long replenishment) {
this.n = stock;
this.replenishment = replenishment;
}
public String toString() {
return String.format("%s bottle%s of beer",
n == 0 ? "No more" : n,
n == 1 ? "" : "s");
}
private String whatToDo() {
return n == 0 ? "Go to the store and buy some more"
: "Take one down and pass it around";
}
private static Bottles next(Bottles b) {
return new Bottles((b.n == 0) ? b.replenishment : b.n - 1,
b.replenishment);
}
private static String getVerse(Bottles b) {
return String.format(
"%1$s on the wall, %2$s.\n%3$s, %4$s on the wall.",
b,
b.toString().toLowerCase(),
b.whatToDo(),
next(b).toString().toLowerCase()
);
}
public static Stream<String> getLyrics(long target) {
return Stream.iterate(new Bottles(target, target), Bottles::next)
.limit(target + 1)
.map(Bottles::getVerse);
}
public static void main(String[] args) {
System.out.println(getLyrics(ALCOHOLISM_LEVEL)
.collect(Collectors.joining("\n\n")));
}
} | {
"domain": "codereview.stackexchange",
"id": 16977,
"tags": "java, rags-to-riches, 99-bottles-of-beer"
} |
"Low-pass filter" in non-EE, software API contexts | Question: I am an experienced software engineer and am working on smartphone sensors. I've taken fundamental EE classes in DSP and am trying to apply my knowledge. I believe that I understand convolution, transfer functions, z-transform, etc. I know a little bit about FIR and IIR filters.
Now, when reading through software APIs and documentation, I see people are applying a LPF to sensor data in the time domain. I know that you do that through the use of difference equations (e.g. y[i] = y[i-1] + 2*x[i]), but I learned in my EE class that LPF are typically applied through the convolution operation where you convolve the time signal with the coefficients of a sinc wave (for example) and with a specific cut-off frequency. So the colloquial use of "low-pass filter" is not exact enough for me.
For example, the Google Android API has this documentation:
http://developer.android.com/reference/android/hardware/SensorEvent.html#values
public void onSensorChanged(SensorEvent event)
{
// alpha is calculated as t / (t + dT)
// with t, the low-pass filter's time-constant
// and dT, the event delivery rate
final float alpha = 0.8f;
gravity[0] = alpha * gravity[0] + (1 - alpha) * event.values[0];
gravity[1] = alpha * gravity[1] + (1 - alpha) * event.values[1];
gravity[2] = alpha * gravity[2] + (1 - alpha) * event.values[2];
linear_acceleration[0] = event.values[0] - gravity[0];
linear_acceleration[1] = event.values[1] - gravity[1];
linear_acceleration[2] = event.values[2] - gravity[2];
}
How do I interpret that low-pass filter? What is the cut-off frequency? What is the transition bandwidth? Are they using this LPF solely to do averaging?
Answer: The filter in your example is a first-order infinite impulse response (IIR) filter. Its transfer function is:
$$
H(z) = \frac{1 - \alpha}{1 - \alpha z^{-1}}
$$
which corresponds to a difference equation of:
$$
y[n] = \alpha y[n-1] + (1-\alpha) x[n]
$$
where $x[n]$ is the filter input and $y[n]$ is the filter output.
This type of filter is often used as a low-complexity lowpass filter and is often called a leaky integrator. It is favored because of its simple implementation, low computational complexity, and its tunability: its cutoff frequency depends upon the value of $\alpha$. $\alpha$ can take on values on the interval $[0,1)$. $\alpha = 0$ yields no filtering at all (the output is equal to the input); as $\alpha$ increases, the cutoff frequency of the filter decreases. You can think of $\alpha = 1$ as a boundary case where the cutoff frequency is infinitely low (the filter output is zero for all time).
You can think of this intuitively by noticing that the filter input is weighted by $\alpha$, so as the parameter increases, the quantity $1-\alpha$ decreases, so each input sample has a smaller proportional effect on the value of any particular output sample. This has the effect of smearing out the filter's impulse response over a longer period of time. Summing over a longer period of time is similar to computing a long moving average. As the length of a moving average increases, the cutoff frequency of the average decreases.
For your example, where $\alpha = 0.8$, the filter has a lowpass frequency response (the original answer includes a magnitude-response plot, not reproduced here).
From the example, I would guess that this filter is being used to smooth high-frequency noise out of a time series of measurements from a sensor, trying to tease out a comparatively low-frequency signal of interest. This would be a very typical application for this sort of filter.
On your other sub-question, you are correct that filtering is often implemented via convolution of the input signal with the filter's impulse response. In most cases, this is only done with finite impulse response (FIR) filters. IIR filters such as this one are typically implemented using the filter's difference equation; since an IIR system's impulse response is infinitely long, you must truncate it to some finite length to make convolution with it tractable, at which point the filter is no longer IIR. The difference equation format is almost always cheaper to implement computationally, although the feedback inherent in that structure can lead to numerical issues that must be addressed (such as internal overflow and roundoff error accumulation). | {
"domain": "dsp.stackexchange",
"id": 1578,
"tags": "filters, filter-design, software-implementation"
} |
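The difference equation above is easy to probe numerically. This sketch (not from the original answer) feeds the filter a constant signal and a maximally fast alternating signal, showing unity gain at DC and strong attenuation near Nyquist:

```python
# First-order IIR ("leaky integrator") from the answer:
# y[n] = alpha*y[n-1] + (1 - alpha)*x[n], zero initial state.
def leaky_integrator(x, alpha=0.8):
    y, out = 0.0, []
    for sample in x:
        y = alpha * y + (1 - alpha) * sample
        out.append(y)
    return out

dc = leaky_integrator([1.0] * 200)                         # constant input
nyq = leaky_integrator([(-1.0) ** n for n in range(200)])  # alternating input
```

The steady-state alternating amplitude works out to $(1-\alpha)/(1+\alpha) \approx 0.11$ for $\alpha = 0.8$, consistent with a lowpass response.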
Will it be $\vec{AB}+\vec{AC}=\vec{AD}$ or $\vec{AB}-\vec{AC}=\vec{AD}$? | Question:
The resultant of $\vec{AB}$ and $\vec{AC}$ is $\vec{AD}$. Now, which of the following is correct?
$$\vec{AB}+\vec{AC}=\vec{AD}\tag{1}$$
$$\vec{AB}-\vec{AC}=\vec{AD}\tag{2}$$
I think $(1)$ is correct.
Answer: (1) is correct, the resultant means the addition | {
"domain": "physics.stackexchange",
"id": 87128,
"tags": "vectors, mathematics, geometry"
} |
map_server segfaults | Question:
Hello,
I have been having an issue with the "rosrun map_server map_server map.yaml" command. I have ros kinetic on Ubuntu 16.04 running on a Jetson Tx2. Every time I run the command, the console tells me there has been a "segmentation fault (core dumped)." Here is the link to the page with the map.yaml file: https://github.com/mit-racecar/racecar/tree/master/racecar/maps. Is there something wrong with the formatting of the yaml files, or is it something else? Also, if I have to build map_server from source, what command would I use to do so?
Thank you,
Michael Chen
Originally posted by Michael Chen on ROS Answers with karma: 13 on 2018-09-10
Post score: 1
Original comments
Comment by mgruhler on 2018-09-11:
I tried to look into that and could actually reproduce the problem. But updating all debian packages fixed it. I'm not sure what the problem has been, I'm assuming some problem with dependencies, but cannot pinpoint it. Maybe @David Lu has some more insights?
Please report if the upgrade doesn't fix it
Comment by Michael Chen on 2018-09-11:
Thanks for the insight! How would I update the debian package? I am unfamiliar with some of this.
Comment by mgruhler on 2018-09-11:
sudo apt update && sudo apt upgrade
Comment by Michael Chen on 2018-09-11:
It worked! Thank you.
Answer:
This seems to have been a dependency issue and can be resolved by updating the respective debian packages.
Originally posted by mgruhler with karma: 12390 on 2018-09-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2018-09-12:
It could well be that this was caused by the same (presumed) ABI incompatibility that lead to #q303093 being posted. | {
"domain": "robotics.stackexchange",
"id": 31755,
"tags": "navigation, mapping, ros-kinetic, map-server"
} |
What are infrared and collinear safety? | Question: I'm studying particle jets for the first time and I do not understand what "collinear" and "infrared" safety (which are two requirements that a method for counting jets in an event should fulfill, as I understand) are.
Wikipedia says:
A jet algorithm is infrared safe if it yields the same set of jets after modifying an event to add a soft radiation. Similarly, a jet algorithm is collinear safe if the final set of jets is not changed after introducing a collinear splitting of one of the inputs.
What does soft radiation, i.e. the detection of photons in the lower-energy part of the X-ray spectrum, have to do with jet counting?
What does it mean to "introduce a collinear splitting" of the particles detected?
Answer:
"Soft radiation" in particle physics refers to particles or photons with very low energy, typically much lower than the energy scale of the process being studied. These soft particles can arise from a variety of sources, including quantum fluctuations and the decay of heavier particles. The requirement for an infrared safe jet algorithm means that the presence of these soft particles should not affect the algorithm's ability to identify and count the jets in an event.
"Collinear splitting" refers to the phenomenon where two particles move in very similar directions and can be difficult to distinguish as separate entities. In such cases, it is possible for a jet algorithm to misinterpret these collinear particles as a single jet, or conversely, to mistakenly split a single jet into two separate jets. The requirement for a collinear safe jet algorithm means that the algorithm should not be sensitive to such collinear splittings and should be able to correctly identify the number of jets in an event regardless of the collinearity of the particles. | {
"domain": "physics.stackexchange",
"id": 93659,
"tags": "particle-physics, terminology, definition, radiation"
} |
How much of the forces when entering water is related to surface tension? | Question: When an object enters water with high velocity, (like in Why is jumping into water from high altitude fatal?), most of it's kinetic energy will be converted, eg to accelerate water, deform the object etc. -
What is the relevance of the surface tension to this?
Are the effects related to surface tension just a small part, or even the dominant part regarding the forces.
Answer: Unless I have made a conceptual mistake (which is very possible), surface tension plays essentially no role in the damping of the impact of a fast-moving object with a liquid surface.
To see this, a simple way to model it is to pretend that the water isn't there, but only its surface is, and see what happens when an object deforms this surface. Let there be a sphere of density $\rho=1.0\text{g/cm}^3$ and radius $r=1\text{ft}$ with velocity $v=200\text{mph}$, and let it collide with the interface and sink in halfways, stretching the interface over the surface of the sphere.
Before the collision, the surface energy of the patch of interface that the sphere collides with is
$$E_i=\gamma A_1=\gamma\pi r^2$$
and after collision, the stretched surface has a surface energy of
$$E_f=\gamma A_2=2\gamma\pi r^2$$
and so the energy loss by the sphere becomes
$$\Delta E=E_f-E_i=\gamma\pi r^2$$
which in the case of water becomes (in Mathematica):
<< PhysicalConstants`
r = 1 Foot;
\[Gamma] = 72.8 Dyne/(Centi Meter);
Convert[\[Pi] r^2 \[Gamma], Joule]
0.0212477 Joule
Meanwhile, the kinetic energy of the ball is
$$E_k=\frac{1}{2}\left(\frac{4}{3}\pi r^3\rho\right)v^2$$ which is:
\[Rho] = 1.0 Gram/(Centi Meter)^3;
v = 200 Mile/Hour;
Convert[1/2 (4/3 \[Pi] r^3 \[Rho]) v^2, Joule]
474085 Joule
and hence the surface tension provides less than one millionth of the slowdown associated with the collision of the sphere with the liquid surface. Thus the surface tension component is negligible.
I'd suspect that kinematic drag provides most of the actual energy loss (you're basically slamming into 200 pounds of water and shoving it out of the way when you collide), but I've never taken fluid dynamics so I'll await explanations from people with more experience. | {
"domain": "physics.stackexchange",
"id": 13022,
"tags": "fluid-dynamics, surface-tension"
} |
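The Mathematica arithmetic above can be replayed in plain Python (standard unit conversions; not from the original answer):

```python
import math

# Reproduce the answer's numbers: surface-energy cost of stretching the
# interface over the sphere vs. the sphere's kinetic energy.
r = 0.3048                  # 1 ft in m
gamma = 72.8e-3             # 72.8 dyn/cm = 0.0728 N/m (water surface tension)
rho = 1000.0                # 1 g/cm^3 in kg/m^3
v = 200 * 0.44704           # 200 mph in m/s

delta_E = gamma * math.pi * r ** 2                      # stretched-surface energy
E_k = 0.5 * (4 / 3 * math.pi * r ** 3 * rho) * v ** 2   # kinetic energy
ratio = delta_E / E_k
```

This recovers the answer's values of about 0.021 J versus about 474,000 J, a ratio well under one millionth.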
How to Set the Same Categorical Codes to Train and Test data? Python-Pandas | Question: NOTE:
If someone else is wondering about this topic: I asked this question while getting deeper into the Data Analysis world, and here is what I learned:
You encode categorical values as INTEGERS only if you're dealing with ordinal classes, e.g. college degree or customer satisfaction surveys.
Otherwise, if you're dealing with nominal classes like gender, colors or names, you MUST convert them with other methods, since they do not specify any numerical order; the best known are one-hot encoding and dummy variables.
I encourage you to read more about them and hope this has been useful.
Check the link below to see a nice explanation:
https://www.youtube.com/watch?v=9yl6-HEY7_s
This may be a simple question but I think it can be useful for beginners.
I need to run a prediction model on a test dataset, so to convert the categorical variables into categorical codes that can be handled by the random forests model I use these lines with all of them:
Train:
data_['Col1_CAT'] = data_['Col1'].astype('category')
data_['Col1_CAT'] = data_['Col1_CAT'].cat.codes
So, before running the model I have to apply the same procedure to both, the Train and Test data.
And since both datasets have the same categorical variables/columns, I think it will be useful to apply the same categorical codes to each column respectively.
However, although I'm handling the same variables on each dataset I get different codes everytime I use these two lines.
So, my question is, how can I do to get the same codes everytime I convert the same categoricals on each dataset?
Thanks for your insights and feedback.
Answer: First, note that Random Forests can handle categorical variables (moreover, if you have too much categories, reducing this number is a good practice).
If you want to apply a filter to your data, I'd suggest you using sklearn transformers (like OneHot Encoder, Label Encoding, ... pick the one you need according to what you want to do).
In this case, you have to fit the encoder in your train dataset, and then apply it in your test. If you want to apply this in a real case, you have to save your trained encoders alongside your trained model, so you can apply the encoder directly to the new data before predicting on it, so it has the same pattern.
Here is an example with Label Encoder
from sklearn import preprocessing
train, test = ... # SEPARATE YOUR DATA AS YOU WANT
le = preprocessing.LabelEncoder()
trained_le = le.fit(train)
train = trained_le.transform(train)
test = trained_le.transform(test) | {
"domain": "datascience.stackexchange",
"id": 9635,
"tags": "machine-learning, scikit-learn, pandas, random-forest, categorical-data"
} |
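The fit-on-train / transform-both pattern is the key point of the answer. Here is a dependency-free sketch with a toy stand-in for sklearn's LabelEncoder (the class name and data are invented for illustration):

```python
# Minimal stand-in for sklearn's LabelEncoder: fit the mapping on the
# train data once, then apply the same mapping to both datasets.
class SimpleLabelEncoder:
    def fit(self, values):
        self.mapping = {v: i for i, v in enumerate(sorted(set(values)))}
        return self

    def transform(self, values):
        return [self.mapping[v] for v in values]

train = ["red", "blue", "green", "blue"]
test = ["green", "red"]
le = SimpleLabelEncoder().fit(train)   # fit ONLY on train
train_codes = le.transform(train)
test_codes = le.transform(test)        # same codes reused on test
```

Because the mapping is fit once and reused, "green" gets the same code in both datasets, which is exactly what re-running `.cat.codes` separately on each dataset does not guarantee.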
Rewrite apply function to use recursion instead | Question: Probably the hardest part of learning lisp has been to think in the "lisp way" which is elegant and impressive, but not always easy. I know that recursion is used to solve a lot of problems, and I am working through a book that instead uses apply to solve a lot of problems, which I understand is not as lispy, and also not as portable.
An experienced lisper should be able to help with this logic without knowing specifically what describe-path, location, and edges refer to. Here is an example in a book I am working through:
(defun describe-paths (location edges)
(apply (function append) (mapcar #'describe-path
(cdr (assoc location edges)))))
I have successfully rewritten this to avoid apply and use recursion instead. It seems to be working:
(defun describe-paths-recursive (location edges)
(labels ((processx-edge (edge)
(if (null edge)
nil
(append (describe-path (first edge))
(processx-edge (rest edge))))))
(processx-edge (cdr (assoc location edges)))))
I would like some more seasoned pairs of eyes on this to advise if there is a more elegant way to translate the apply to recursion, or if I have done something unwise. This code seems decent, but would there been something even more "lispy" ?
Answer: Lisp is a multiparadigm language.
apply is just as lispy as recursion, and, in a way, much more so (think in HOFs)!
Style
Please fix indentation.
Please write #'foo instead of (function foo).
Implementations
The first (HOF) version can be rewritten much more efficiently using mapcan (provided describe-path returns fresh lists):
(defun describe-paths (location edges)
  (mapcan #'describe-path
          (cdr (assoc location edges))))
The second (recursive) version can be made tail recursive using an accumulator. This would help some compilers produce better code.
(defun describe-paths-recursive (location edges)
  (labels ((processx-edge (edge acc)
             (if (null edge)
                 acc
                 (processx-edge (rest edge)
                                (revappend (describe-path (first edge)) acc)))))
    (nreverse (processx-edge (cdr (assoc location edges)) nil))))
Note the use of revappend/nreverse instead of append to avoid quadraticity. | {
"domain": "codereview.stackexchange",
"id": 5121,
"tags": "recursion, lisp, common-lisp"
} |
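For readers more comfortable outside Lisp, the same two shapes can be sketched in Python: mapcan corresponds to a flat-map, and the tail-recursive version threads an accumulator that is reversed once at the end. describe_path here is a made-up stand-in:

```python
def describe_path(edge):
    # made-up stand-in returning a list of lines per edge
    return [edge + "-a", edge + "-b"]

def describe_paths(edges):
    # flat-map: the spirit of (mapcan #'describe-path edges)
    return [line for edge in edges for line in describe_path(edge)]

def describe_paths_recursive(edges, acc=None):
    # accumulator version: prepend reversed chunks, reverse once at the end
    if acc is None:
        acc = []
    if not edges:
        return list(reversed(acc))
    new_acc = list(reversed(describe_path(edges[0]))) + acc
    return describe_paths_recursive(edges[1:], new_acc)

edges = ["west", "upstairs"]
```

Both versions produce the lines in the same order, mirroring the revappend/nreverse trick of building the result backwards and reversing once.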
C safe getline() | Question: For a recent project of mine I had to do read input from the console in pure C with no external libraries (in other words, code I've written by myself). I don't like the standard formatted input such as scanf or std::cin in C++, I feel like having to do a dummy read to get rid of the leftover newline is not great. In C++ I can do std::string in; std::getline(std::cin, in); to read an entire line, but C doesn't have that (at least not on my system). Reading input from the console is often needed, so after several times of writing it I decided I wanted something I can write once and for all. This code is simple enough that I don't think I've left too many holes, but I'd still love to hear feedback about it so please do let me know what I can do to improve it.
#ifndef UUTIL_H
#define UUTIL_H
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
bool getLine(FILE *src, char **dest, size_t *size);
#endif
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include "uutil.h"
bool getLine(FILE *src, char **dest, size_t *size) {
if (src == NULL) {
if (size != NULL) *size = 0;
return false;
}
if (dest == NULL) {
if (size != NULL) *size = 0;
return false;
}
char *data;
size_t cap = 32;
size_t readCount = 0;
data = malloc(cap * sizeof(char));
if (data == NULL) {
*size = 0;
return false;
}
while (true) {
int readVal = fgetc(src);
if (readVal == EOF || readVal == '\n') {
if (readCount == cap) {
// The buffer is full, but we still need to shove in the final NUL byte
size_t newCap = cap + 1;
char *newData = realloc(data, newCap * sizeof(char));
if (newData == NULL) {
// Failed to resize, shove it in anyway
data[readCount - 1] = '\0';
if (size != NULL) *size = readCount;
return false;
}
cap = newCap;
data = newData;
}
data[readCount] = '\0'; // Do not count the final NUL byte
if (size != NULL) *size = readCount;
*dest = data;
return true;
}
if (readCount == cap) {
// The buffer is full, but we still have more data to shove in
size_t newCap = cap * 2;
char *newData = realloc(data, newCap * sizeof(char));
if (newData == NULL) {
// Failed to resize, clean up stdin and return
data[readCount - 1] = '\0';
if (src == stdin) while (fgetc(src) != EOF);
if (size != NULL) *size = readCount;
return false;
}
cap = newCap;
data = newData;
}
data[readCount++] = readVal;
}
}
Answer:
let me know what I can do to improve it.
Design flaw
Somehow, the caller needs to distinguish between a call that immediately ended due to '\n' vs. EOF. Just like fgets().
Perhaps a 3-way return: EOF, 0 (failure) or 1 (success)?
Design flaw 2 (subtle)
EOF occurs due to end-of-file and input error.
Some input, then end-of-file should return true
No input and end-of-file should return EOF. (See above).
Any amount of input, then input error should return EOF. (See above).
Review feof() and ferror().
Something like:
if (readVal == EOF) {
...
// Note: `!feof()` subtly different than `ferror()` here.
if (readCount == 0 || !feof(src)) return EOF;
return 1;
}
Lack of documentation
bool getLine(FILE *src, char **dest, size_t *size) deserves a few comments, especially how to call and what the return value means.
Allocate to the reference object size, not type
Easier to code right, review and maintain.
//data = malloc(cap * sizeof(char));
data = malloc(sizeof data[0] * cap);
Declare and initialize
Avoid leaving newly declare objects unassigned.
//char *data;
//.... some time later
//data = malloc(cap * sizeof(char));
char *data = malloc(cap * sizeof(char));
Assign a dummy
As apparently size == NULL is OK, when NULL, reassign to a dummy so later code does not need multiple if (size != NULL) ... tests
size_t dummy;
if (size == NULL) {
size = &dummy;
}
Remove special code for reallocate on end-of-line
Simply use if (readCount + 1 == cap) { in the general loop.
Final allocation
When leaving the function, perform a final "right-size" re-allocation.
Allocation growth
newCap = cap * 2 is reasonable, yet 1) does not detect overflow for a really long input. 2) Limited to SIZE_MAX/2.
Instead, start at 31 and then newCap = cap * 2 + 1, code can then handle up to SIZE_MAX. | {
"domain": "codereview.stackexchange",
"id": 44182,
"tags": "c, console"
} |
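The reviewer's suggested 3-way result is easy to prototype. This Python sketch (an illustration, not C) distinguishes success from EOF-with-no-data, and treats EOF-after-some-data as a successful final line:

```python
import io

EOF = -1  # sentinel distinct from the success code

def get_line(src):
    """Return (1, line) on success, (EOF, "") when no data remains."""
    chars = []
    while True:
        c = src.read(1)
        if c == "":                 # end of input
            return (1, "".join(chars)) if chars else (EOF, "")
        if c == "\n":
            return (1, "".join(chars))
        chars.append(c)

f = io.StringIO("first\nsecond")    # note: no trailing newline
```

The caller can now tell a final unterminated line apart from exhausted input, just as fgets() callers do.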
Diagnostic aggregator not resetting between runs | Question:
I'm having a problem that the diagnostic aggregator contains analyzers from a previous run. The situation is this:
Start roscore
Start an application, including diagnostic aggreggator, using roslaunch
Stop the application and launch another one with a new yaml-file
The aggregator message now contains analyzers from both the new and the old application (the old ones are stale now of course).
The problem arises when having roscore running and starting different applications with roslaunch (so skipping step one would fix the problem). The reason I start roscore is that I have a GUI that can start (and close down) different applications. This GUI is a rosnode, so I have to have roscore running while starting and stopping other applications.
Is there a way of resetting the aggregator so it just listens to new messages (the ones specified in the last yaml-file loaded)?
Originally posted by Ola Ringdahl on ROS Answers with karma: 328 on 2011-11-17
Post score: 1
Answer:
I found out how to solve this problem. By deleting "diagnostic_aggregator" from the parameter server when closing an application, only the nodes specified in the new yaml-file will show when starting a new application.
Originally posted by Ola Ringdahl with karma: 328 on 2011-12-07
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 7336,
"tags": "ros, diagnostics, diagnostic-aggregator"
} |
Can starvation occur with interrupts? | Question: The Computer Science text book I own describes how an interrupt is handled in an operating system:
At some point at the end of an instruction the processor checks if there are any outstanding interrupts
If there are, the priority of the present task is compared with the highest priority interrupt
If there is a higher priority interrupt, the current job is suspended
The contents of the special registers are stored so that the job can be restarted later (aren't general-purpose registers preserved as well, or do ISRs not use these?)
Interrupts are then serviced until all have been dealt with
Control is returned to the original job
If the current task has more priority than the interrupt, it is queued and serviced later
The last point "If the current task has more priority than the interrupt, it is queued and serviced later". Does this mean that if an I/O interrupt is generated but the current tasks are all very high priority, it will not get serviced (i.e. ignored)? Moreover, how does the computer determine when to service these if the current task is more "important"?
Answer: I quite disagree with your textbook.
First, interrupts can be very different between architectures. Most do not support nested interrupts, while others allow some form of it (fast interrupts for time-critical operations in embedded systems...)
One can imagine a CPU having 3 modes:
User
Supervisor
Interrupt
When the interrupt request signal[s] is active, the CPU automatically saves a few registers, sets the supervisor flag and branches into the interrupt vector. If several interrupts occur simultaneously, the highest-priority interrupt is taken. If one or several interrupts become active while the CPU is already in interrupt-processing mode, it will wait until the end of the current interrupt service routine to process the next highest-priority interrupt. The same happens when interrupts are temporarily masked by the OS while it is updating some structures...
Which registers are saved depends a lot on the type of CPU. The minimum is just being able to resume execution later: PC, flags, one register. You can also have a full set of shadow registers reserved for interrupts, which don't need to be saved.
OS tasks are not interrupts. There is usually a timer that regularly triggers an interrupt which, in turn, wakes the scheduler to switch tasks. The same is done by I/O-related interrupts: a network packet is received, an interrupt is triggered, the Ethernet NIC interrupt routine tells the OS to wake the network daemon...
No task has higher priority than interrupts.
Because while being in "interrupt" mode, the CPU cannot be interrupted and cannot process higher priority stuff, OSes try to spend as little time as possible in that state, they instead schedule tasks to asynchronously process events, for one or several symmetric CPUs. | {
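A toy model of the dispatch rule described above (a Python sketch with invented names, not any real OS API): interrupts raised while the CPU is busy simply become pending, and at each opportunity the highest-priority outstanding one is taken first.

```python
import heapq

class InterruptController:
    """Toy model of the scheme above: lower number = higher priority."""

    def __init__(self):
        self.pending = []   # min-heap of (priority, arrival_order, name)
        self.arrival = 0
        self.serviced = []

    def raise_irq(self, priority, name):
        # An interrupt raised while the CPU is busy just becomes pending.
        heapq.heappush(self.pending, (priority, self.arrival, name))
        self.arrival += 1

    def run_until_idle(self):
        # At each opportunity the CPU takes the highest-priority outstanding
        # interrupt; the others wait for the current ISR to finish.
        while self.pending:
            _, _, name = heapq.heappop(self.pending)
            self.serviced.append(name)

ctl = InterruptController()
ctl.raise_irq(5, "uart")    # raised first, but lowest priority
ctl.raise_irq(1, "timer")   # raised later, highest priority
ctl.raise_irq(3, "disk")
ctl.run_until_idle()
print(ctl.serviced)  # ['timer', 'disk', 'uart']
```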
"domain": "cs.stackexchange",
"id": 5259,
"tags": "operating-systems"
} |
Are the coefficients of the weight enumerator polynomial of a stabilizer code always integers? | Question: Consider an $ [\![n,k]\!] $ stabilizer code. Define the weight enumerator polynomial $ A(x) $ of the code as
$$
A(x):=A_0+A_1x+\dots+A_nx^n
$$
where
$$
A_j:=\frac{1}{(2^k)^2} \sum_{p \in P_n,\,\mathrm{wt}(p)=j} |\mathrm{tr}(p \Pi)|^2.
$$
Here $ \Pi $ is the projector onto the code subspace.
Is it the case that the $ A_j $ are always integers?
Answer: TL;DR: For the stabilizer codes, $A_j$ is the number of stabilizer operators with Hamming weight $j$ and thus a non-negative integer.
Suppose $g_1,\dots,g_{n-k}$ are generators of the stabilizer group $S$ of a $[\![n,k]\!]$ stabilizer code. Define $S^w:=\{s\in S\,|\,\mathrm{wt}(s)=w\}$ and $P_n^w:=\{p\in P_n\,|\,\mathrm{wt}(p)=w\}$. Then
$$
\begin{align}
A_j &= \frac{1}{4^k}\sum_{p\in P_n^j}[\mathrm{tr}(p\Pi)]^2\tag1\\
&= \frac{1}{4^k}\sum_{p\in P_n^j}\left[\mathrm{tr}\left(p\prod_{i=1}^{n-k}\frac{I+g_i}{2}\right)\right]^2\tag2\\
&= \frac{1}{4^k}\sum_{p\in P_n^j}\left[\frac{1}{2^{n-k}}\mathrm{tr}\left(p\sum_{b_1=0}^1\dots\sum_{b_{n-k}=0}^1g_1^{b_1}\dots g_{n-k}^{b_{n-k}}\right)\right]^2\tag3\\
&= \frac{1}{4^n}\sum_{p\in P_n^j}\left[\mathrm{tr}\left(p\sum_{s\in S}s\right)\right]^2\tag4\\
&= \frac{1}{4^n}\sum_{p\in P_n^j}\left(\sum_{s\in S}\mathrm{tr}(ps)\right)^2\tag5\\
&= \frac{1}{4^n}\left[\sum_{p\in P_n^j\setminus S}\left(\sum_{s\in S}\mathrm{tr}\left(ps\right)\right)^2+\sum_{p\in S^j}\left(\sum_{s\in S}\mathrm{tr}\left(ps\right)\right)^2\right]\tag6\\
&= \frac{1}{4^n}\left(0+\sum_{p\in S^j}\left(2^n\right)^2\right)\tag7\\
&=|S^j|\tag8
\end{align}
$$
in analogy with the classical case. This gives us another way to see that $A_0+\dots+A_n=2^{n-k}=|S|$. | {
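A small numerical check of $A_j=|S^j|$ (a Python sketch; the $[\![3,1]\!]$ bit-flip code with stabilizer generators $ZZI$ and $IZZ$ is an assumed example, not taken from the question):

```python
from itertools import product

# Assumed example: the [[3,1]] bit-flip code, stabilizer group generated
# by ZZI and IZZ, written as strings over the alphabet {'I', 'Z'}.
g1, g2 = "ZZI", "IZZ"

def mult(a, b):
    # Qubit-wise product of I/Z Pauli strings (Z*Z = I; phases are trivial here).
    return "".join("I" if x == y else "Z" for x, y in zip(a, b))

def weight(p):
    # Hamming weight: number of non-identity tensor factors.
    return sum(c != "I" for c in p)

# Enumerate S = {g1^a * g2^b : a, b in {0, 1}}.
group = set()
for a, b in product((0, 1), repeat=2):
    s = "III"
    if a:
        s = mult(s, g1)
    if b:
        s = mult(s, g2)
    group.add(s)

# A_j = |S^j|: count stabilizers of each Hamming weight.
A = {}
for s in group:
    A[weight(s)] = A.get(weight(s), 0) + 1

print(sorted(A.items()))  # [(0, 1), (2, 3)]  i.e. A(x) = 1 + 3x^2
assert sum(A.values()) == 2 ** (3 - 1)  # |S| = 2^{n-k}
```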
"domain": "quantumcomputing.stackexchange",
"id": 4571,
"tags": "error-correction, stabilizer-code"
} |
Gmapping not working, can not create map | Question:
Hello
I'm trying to create a map from a bag file using gmapping. I have done everything following the tutorial http://www.ros.org/wiki/slam_gmapping/Tutorials/MappingFromLoggedData but when I tried to save the map with the map server, it just waited for the map and nothing happened. My bag file has camera, IMU, laser scans and odometry.
The output of the rxbag command is
camera/depth/image
lse_xsens_mti/xsens/imu/data
odom
scan
My launch file is
<launch>
<param name="/use_sim_time" value="true"/>
<node name="rosplay" pkg="rosbag" type="play" args="/home/2013-01-11-15-47-56.bag --clock"/>
<node pkg="tf" type="static_transform_publisher" name="baselink_laser" args="0 0 0 0 0 0 /base_link /laser 10"/>
<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
<param name="base_frame" value="base_link"/>
<param name="odom_frame" value="base_link"/>
<param name="map_frame" value="map"/>
</node>
<!-- Start an rviz node with a custom configuration for the viewpoint, map_server, trajectory, laser scans, etc -->
<node pkg="rviz" type="rviz" output="screen" name="rviz" args="-d $(find pow_analyzer)/launch/pow_rviz.vcg"/>
</launch>
Any help?
Originally posted by Astronaut on ROS Answers with karma: 330 on 2013-04-17
Post score: 0
Answer:
You configured the odom_frame as base_link. That should be odom instead (or whatever frame your odometry is in). As it is gmapping won't get any movements.
Originally posted by dornhege with karma: 31395 on 2013-04-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Astronaut on 2013-04-18:
ok. thanks. now its working | {
"domain": "robotics.stackexchange",
"id": 13872,
"tags": "navigation, gmapping"
} |
Confusion with two-mode bra-ket notation | Question: Let us consider some abstract two-mode bosonic model with a conserved total number of quanta (i.e. eigenvalue $N$ of the operator $\hat{N} = a^\dagger a + b^\dagger b$ remains constant with corresponding subspace). Thus, the entire Hilbert space splits into a sum of invariant Hilbert sub-spaces for all $N = 0, 1, 2, \dots$. Thereunder, all of the states corresponding to the subspace with $N$ quanta have a form of a superposition of the blocks like $|n, N-n \rangle$, where $n = 0, 1, \dots, N$.
Thus, as far as I figured out, the coherent state reads
$$
|\alpha, \beta\rangle = e^{-(|\alpha|^2+|\beta|^2)/2} \sum_{n=0}^{\infty} \sum_{k=0}^{\infty} \frac{\alpha^n \beta^k}{\sqrt{n! k!} }|n, k\rangle \\ \rightarrow e^{-(|\alpha|^2+|\beta|^2)/2}\sum_{m = 0}^N \frac{\alpha^m \beta^{N-m}}{\sqrt{m! (N-m)!}} |m, N-m\rangle,
$$
where I assumed $|\alpha|^2 + |\beta|^2 = N$.
Let's consider the following state $|\alpha, 0 \rangle$, where $|\alpha|^2 = N$. I'm a bit confused, how will this state look like in the subspace with $N$ quanta?
Answer: You took an unfortunate left turn in your otherwise plausible reexpression of the first-quadrant semi-infinite square lattice in terms of finite (N+1)-length diagonals thereof, stacked along the n=k line for all Ns. As the comments warn you, one hardly ever constrains conjugate (Mellin) variables among themselves, any more than one constrains x with p in Fourier transforms in QM.
Your coherent state is
$$
|\alpha, \beta\rangle = e^{-(|\alpha|^2+|\beta|^2)/2} \sum_{n=0}^{\infty} \sum_{k=0}^{\infty} \frac{\alpha^n \beta^k}{\sqrt{n! k!} }|n, k\rangle \\ =\sum_{N=0}^\infty
e^{-(|\alpha|^2+|\beta|^2)/2}\sum_{n = 0}^N \frac{\alpha^n \beta^{N-n}}{\sqrt{n! (N-n)!}} |n, N-n\rangle \\ =e^{-|\alpha|^2 (1+|z|^2 )/2} \sum_{N=0}^\infty \alpha^N
\sum_{n = 0}^N \frac{ z^{N-n}}{\sqrt{n! (N-n)!}} |n, N-n\rangle ,
$$
where I defined $\beta \equiv z \alpha$, so that $|\alpha|^2 + |\beta|^2 =|\alpha|^2 (1+|z|^2 ) $.
Possibly useful picture to summarize your rearrangement: The lower right corner (SW) is the vacuum; the abscissa is n, going East, extending to infinity; and the ordinate k, increasing North, also extending to infinity. Your inner sums are the finite diagonals SE to NW, of lengths 0, 1, 2, ... with 1, 2, 3 points each. The outer sum adds these diagonals from SW to NE, without limit.
Your question of how does the one-mode coherent state $|\alpha, 0 \rangle$ sits in $|\alpha, z \alpha \rangle$ is answerable by inspection of the lattice picture; as $z\to 0$, the only surviving term with vanishing exponent of the second (finite) sum is the topmost sum point at the beginning of the diagonal (n=N), $ \frac{1}{\sqrt{N! }} |N, 0 \rangle $, restoring the standard one-mode coherent state: plug it in. | {
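To make the final "plug it in" step fully explicit (this only completes the inspection argument above): at $z=0$ the factor $z^{N-n}$ kills every term except $n=N$, so
$$
|\alpha, 0\rangle ~=~ e^{-|\alpha|^2/2} \sum_{N=0}^\infty \frac{\alpha^N}{\sqrt{N!}}\, |N, 0\rangle ,
$$
the standard one-mode coherent state with the second mode in its vacuum. Its component in the subspace with $N$ quanta is therefore the single block $e^{-|\alpha|^2/2}\,\alpha^N |N,0\rangle/\sqrt{N!}$.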
"domain": "physics.stackexchange",
"id": 89668,
"tags": "quantum-mechanics, hilbert-space, notation, coherent-states"
} |
Text based chess game in Python | Question: I have reached a significant milestone since the start of my chess game project. I have implemented all basic functionality. There is no castling, en passant and pawn promotion yet but I have implemented checks and checkmate functionality.
I would like my code to be reviewed before going further in the project so that I can optimize the code beforehand.
I would appreciate any improvements and also bugs found.
Here is my code:
constants.py
WHITE = True
BLACK = False
RANK: dict[str, int] = {
"a": 0, "b": 1, "c": 2, "d": 3,
"e": 4, "f": 5, "g": 6, "h": 7
}
position.py
from constants import *
class Position:
def __init__(self, y: int, x: int) -> None:
self.y = y
self.x = x
def __add__(self, other):
return Position(self.y + other.y, self.x + other.x)
def __sub__(self, other):
return Position(self.y - other.y, self.x - other.x)
def __mul__(self, value: int):
return Position(self.y * value, self.x * value)
def __eq__(self, other) -> bool:
return self.y == other.y and self.x == other.x
def __repr__(self) -> str:
return f"(y: {self.y}, x: {self.x})"
def __str__(self) -> str:
flipped_rank = {v: k for k, v in RANK.items()}
return f"{flipped_rank[self.x]}{self.y + 1}"
def abs(self):
return Position(abs(self.y), abs(self.x))
support.py
from position import *
def is_same_color(*pieces: list[str]) -> bool:
for i in range(len(pieces) - 1):
if is_white(pieces[i]) == is_white(pieces[i + 1]):
return False
return True
def is_white(piece: str) -> bool:
return piece.isupper()
def is_black(piece: str) -> bool:
return piece.islower()
def is_king(piece: str) -> bool:
return piece.lower() == "k"
def is_queen(piece: str) -> bool:
return piece.lower() == "q"
def is_rook(piece: str) -> bool:
return piece.lower() == "r"
def is_knight(piece: str) -> bool:
return piece.lower() == "n"
def is_bishop(piece: str) -> bool:
return piece.lower() == "b"
def is_pawn(piece: str) -> bool:
return piece.lower() == "p"
def is_empty(piece: str) -> bool:
return piece == "."
def extract_move(move: str) -> tuple[Position, Position]:
try:
start_pos = Position(int(move[1]) - 1, RANK[move[0]])
end_pos = Position(int(move[3]) - 1, RANK[move[2]])
return start_pos, end_pos
except:
raise ValueError(f"Invalid position {move}")
def sign(x: int | float):
if x < 0:
return -1
elif x > 0:
return 1
elif x == 0:
return 0
main.py
from support import *
from copy import deepcopy
start_position = [
["R", "N", "B", "Q", "K", "B", "N", "R"],
["P", "P", "P", "P", "P", "P", "P", "P"],
[".", ".", ".", ".", ".", ".", ".", "."],
[".", ".", ".", ".", ".", ".", ".", "."],
[".", ".", ".", ".", ".", ".", ".", "."],
[".", ".", ".", ".", ".", ".", ".", "."],
["p", "p", "p", "p", "p", "p", "p", "p"],
["r", "n", "b", "q", "k", "b", "n", "r"]
]
class Board:
def __init__(self, board):
self.board = board
self.current_turn = WHITE
self.legal_moves = self.generate_legal_moves()
self.status = "RUNNING"
def play_move(self, move):
start_pos, end_pos = extract_move(move)
if move in self.legal_moves:
self[end_pos] = self[start_pos]
self[start_pos] = "."
self.current_turn = not self.current_turn
self.legal_moves = self.generate_legal_moves()
self.update_status()
else:
print(f"Invalid move {move}...")
def update_status(self):
if self.is_checkmate():
self.status = "GAMEOVER"
def play_moves(self, moves: str):
for move in moves.split():
print(self)
self.play_move(move)
def is_valid(self, move: str) -> bool:
start_pos, end_pos = extract_move(move)
largest = max(start_pos.y, start_pos.x, end_pos.y, end_pos.x)
smallest = min(start_pos.y, start_pos.x, end_pos.y, end_pos.x)
# Check if coordinates are out of bound
if smallest < 0 or largest > 7:
return False
if start_pos == end_pos:
return False
piece = self[start_pos]
if is_empty(piece):
return False
to_capture_piece = self[end_pos]
if not is_empty(to_capture_piece) and not is_same_color(to_capture_piece, piece):
return False
delta = end_pos - start_pos
if is_pawn(piece):
if abs(delta.y) == 1: # 1 step forward
if delta.x == 0 and is_empty(self[end_pos]): # No capture
return True
elif abs(delta.x) == 1 and not is_empty(self[end_pos]): # Capture
return True
if (abs(delta.y) == 2 and start_pos.y in (1, 6) and
is_empty(self[end_pos]) and is_empty(self[end_pos - Position(sign(delta.y), 0)])
): # 2 step forward
return True
elif is_bishop(piece):
if abs(delta.y) == abs(delta.x):
increment = Position(sign(delta.y), sign(delta.x))
for i in range(1, abs(delta.y)):
if not is_empty(self[start_pos + (increment * i)]):
return False
return True
elif is_rook(piece):
if delta.x == 0 or delta.y == 0:
increment = Position(sign(delta.y), sign(delta.x))
for i in range(1, max(abs(delta.y), abs(delta.x))):
if not is_empty(self[start_pos + (increment * i)]):
return False
return True
elif is_knight(piece):
if delta.abs() == Position(2, 1) or delta.abs() == Position(1, 2):
return True
elif is_queen(piece):
# Rook validation
if delta.x == 0 or delta.y == 0:
increment = Position(sign(delta.y), sign(delta.x))
for i in range(1, max(abs(delta.y), abs(delta.x))):
if not is_empty(self[start_pos + (increment * i)]):
return False
return True
# Bishop validation
if abs(delta.y) == abs(delta.x):
increment = Position(sign(delta.y), sign(delta.x))
for i in range(1, abs(delta.y)):
if not is_empty(self[start_pos + (increment * i)]):
return False
return True
elif is_king(piece):
if abs(delta.y) in (0, 1) and abs(delta.x) in (0, 1):
return True
return False
def is_check(self, move) -> bool:
new_game = deepcopy(self)
start_pos, end_pos = extract_move(move)
new_game[end_pos] = new_game[start_pos]
new_game[start_pos] = "."
king_pos = new_game.get_king_pos(new_game.current_turn)
for pos in new_game.get_all_pieces_pos()[not new_game.current_turn]:
if new_game.is_valid(str(pos) + str(king_pos)):
return True
return False
def is_checkmate(self) -> bool:
return len(self.legal_moves) == 0
def generate_legal_moves(self) -> list[str]:
legal_moves = []
candidate_moves = []
pieces_pos = self.get_all_pieces_pos()[self.current_turn]
# print([str(pos) for pos in pieces_pos])
for pos in pieces_pos:
piece = self[pos]
# print("In for pos in pieces_pos:", piece, pos)
if is_pawn(piece):
if is_white(piece):
deltas = [
Position(1, 0), Position(2, 0),
Position(1, 1), Position(1, -1)
]
else:
deltas = [
Position(-1, 0), Position(-2, 0),
Position(-1, 1), Position(-1, -1)
]
for delta in deltas:
try:
move = str(pos) + str(pos + delta)
candidate_moves.append(move)
except KeyError:
pass
elif is_knight(piece):
deltas = [
Position(2, 1), Position(1, 2),
Position(-2, 1), Position(1, -2),
Position(2, -1), Position(-1, 2),
Position(-2, -1), Position(-1, -2)
]
for delta in deltas:
try:
move = str(pos) + str(pos + delta)
candidate_moves.append(move)
except KeyError:
pass
elif is_bishop(piece):
deltas = [
Position(1, 1), Position(-1, -1),
Position(1, -1), Position(-1, 1)
]
for delta in deltas:
for i in range(1, 8):
try:
move = str(pos) + str(pos + delta * i)
candidate_moves.append(move)
except KeyError:
pass
elif is_rook(piece):
deltas = [
Position(1, 0), Position(0, 1),
Position(-1, 0), Position(0, -1)
]
for delta in deltas:
for i in range(1, 8):
try:
move = str(pos) + str(pos + delta * i)
candidate_moves.append(move)
except KeyError:
pass
elif is_king(piece):
deltas = [
Position(1, 0), Position(0, 1),
Position(-1, 0), Position(0, -1),
Position(1, 1), Position(-1, -1),
Position(1, -1), Position(-1, 1)
]
for delta in deltas:
try:
move = str(pos) + str(pos + delta)
candidate_moves.append(move)
except KeyError:
pass
elif is_queen(piece):
deltas = [
# Bishop
Position(1, 1), Position(-1, -1),
Position(1, -1), Position(-1, 1),
# Rook
Position(1, 0), Position(0, 1),
Position(-1, 0), Position(0, -1)
]
for delta in deltas:
for i in range(1, 8):
try:
move = str(pos) + str(pos + delta * i)
candidate_moves.append(move)
except KeyError:
pass
for move in candidate_moves:
try:
# print(move, self.is_valid(move), not self.is_check(move))
if self.is_valid(move) and not self.is_check(move):
legal_moves.append(move)
except ValueError:
pass
return legal_moves
def get_all_pieces_pos(self) -> dict[bool, list[Position]]:
pieces_pos = {WHITE: [], BLACK: []}
for y in range(8):
for x in range(8):
piece = self.board[y][x]
if not is_empty(piece):
pieces_pos[is_white(piece)].append(Position(y, x))
return pieces_pos
def get_king_pos(self, color: bool) -> Position:
for y in range(8):
for x in range(8):
if is_king(self.board[y][x]) and is_white(self.board[y][x]) == color:
return Position(y, x)
def __getitem__(self, pos: Position) -> str:
return self.board[pos.y][pos.x]
def __setitem__(self, pos: Position, piece: str) -> None:
self.board[pos.y][pos.x] = piece
def __repr__(self) -> str:
return "\n".join(
[" ".join(rank + [str(8 - i)]) for i, rank in enumerate(self.board[::-1])] +
[" ".join(RANK.keys())]
)
game = Board(start_position)
def play():
while True:
print(game)
if game.status == "GAMEOVER":
print(player, "wins!!")
break
player = "WHITE" if game.current_turn == WHITE else "BLACK"
# print([str(move) for move in game.generate_legal_moves()])
move = input(f"{player}, please enter your move:").lower().strip()
game.play_move(move)
play()
Thank you for your time!!
Answer: Bug
play() gets a move string from the user and calls game.play_move(move), which in turn calls extract_move(move), which can raise a ValueError that is never caught anywhere along the call chain.
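A minimal sketch of a fix (the helper below mirrors the posted extract_move; safe_play is an invented name for illustration): catch the ValueError at the point where user input is consumed, instead of letting it crash the loop.

```python
RANK = {"a": 0, "b": 1, "c": 2, "d": 3, "e": 4, "f": 5, "g": 6, "h": 7}

def extract_move(move):
    # Mirrors the posted helper: raises ValueError on malformed input.
    try:
        start = (int(move[1]) - 1, RANK[move[0]])
        end = (int(move[3]) - 1, RANK[move[2]])
        return start, end
    except (KeyError, IndexError, ValueError):
        raise ValueError(f"Invalid position {move}")

def safe_play(move):
    # What the input loop should do: catch instead of crash.
    try:
        return extract_move(move)
    except ValueError as err:
        print(err)
        return None

assert safe_play("e2e4") == ((1, 4), (3, 4))
assert safe_play("zz99") is None  # prints "Invalid position zz99"
```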
Confusing Code
try:
move = str(pos) + str(pos + delta)
candidate_moves.append(move)
except KeyError:
pass
Nothing about the above code looks like it can generate a KeyError. The only possibility looks to be the pos + delta, but Position.__add__ only does some math and calls the constructor which just stores the results. append on a vanilla list shouldn't fail. The only other possibility is str(...), which should always succeed:
def __str__(self) -> str:
flipped_rank = {v: k for k, v in RANK.items()}
return f"{flipped_rank[self.x]}{self.y + 1}"
And here is where we find the exception: it is possible to construct an invalid Position, which cannot be stringified.
This code shows two other issues:
Only some invalid positions raise an exception. If the y is out of the legal range, the position can still be stringified without issue. Only an x that falls outside the valid x-range will raise an exception and be prevented from being added to candidate_moves.
flipped_rank is recomputed on every stringification, despite depending only upon the RANK constant. It should be computed once:
FLIPPED_RANK = {v: k for k, v in RANK.items()}
class Position:
def __init__(self, y: int, x: int):
if x not in range(8) or y not in range(8):
raise ValueError("Illegal position coordinate")
self.y = y
self.x = x
def __str__(self) -> str:
return f"{FLIPPED_RANK[self.x]}{self.y + 1}"
PEP 8
Constants
start_position is a constant. As such, PEP 8: The Style Guide for Python Code recommends using UPPER_CASE for the identifier's name. In terms of names, it isn't really just one position. It is the starting position of all the pieces (plural), so perhaps STARTING_POSITIONS would be better? Or maybe even STARTING_BOARD, since it gets assigned to the .board member?
Blank Space
is_valid(...) and generate_legal_moves(...) are LONG functions. They could use additional blank lines to break the code into logical sections ... or be separated into more functions.
Type Hints
You've added type hints in many places, but omitted them in several. E.g., Position.abs() needs a return type; in Board, __init__() and play_move() lack parameter type hints; and many methods could be declared to return -> None.
def is_same_color(*pieces: list[str]) -> bool means the function accepts a variable number of lists of strings. I'm certain you just want the function to accept a variable number of strings, which would be written as: *pieces: str.
Types
def get_king_pos(self, color: bool) is an indication you need some better types, since a colour is not a boolean. You have two sides/players. They are not the True player and the False player. Let's create an enumeration for the player:
from enum import Enum
class Player(Enum):
WHITE = "White"
BLACK = "Black"
You can use Player as a type. If the player variable holds the current player, you can now write f"{player.value}, please enter your move:" to create the input() prompt.
Separate Logic and I/O
You have a game system where "K" represents the white king, and "r" represents the black rook. Unicode characters can convey this information much more directly: ♜♞♝♛♚♟︎ ♖♘♗♕♔♙. You could change the implementation to use these characters instead of upper & lower case letters, but if you wanted to build a graphical version of the game, or a 3d animated battle-chess game, you'd again find the unicode chess piece characters annoying to work with.
Really, what you want is for the game logic to use an "abstract" internal representation, and have the UI layer convert to the appropriate user representation. So maybe an enumeration for piece types:
class PieceType(Enum):
PAWN = 1
ROOK = 2
BISHOP = 3
KNIGHT = 4
QUEEN = 5
KING = 6
Now the black king could be represented as (PieceType.KING, Player.BLACK), and a dictionary could map that value to 'k', and later changing that to use '♚' would be a trivial modification.
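For instance (a sketch with invented table names; the Player enum from above is repeated so the snippet stands alone):

```python
from enum import Enum

class Player(Enum):          # repeated from above so this runs standalone
    WHITE = "White"
    BLACK = "Black"

class PieceType(Enum):
    PAWN = 1
    ROOK = 2
    BISHOP = 3
    KNIGHT = 4
    QUEEN = 5
    KING = 6

# One table per user representation; the game logic never sees the glyphs.
ASCII_GLYPHS = {
    (PieceType.KING, Player.WHITE): "K",
    (PieceType.KING, Player.BLACK): "k",
    (PieceType.ROOK, Player.WHITE): "R",
    (PieceType.ROOK, Player.BLACK): "r",
    # ... remaining pieces omitted for brevity
}
UNICODE_GLYPHS = {
    (PieceType.KING, Player.WHITE): "♔",
    (PieceType.KING, Player.BLACK): "♚",
    (PieceType.ROOK, Player.WHITE): "♖",
    (PieceType.ROOK, Player.BLACK): "♜",
}

def render(piece, glyphs=ASCII_GLYPHS):
    return glyphs[piece]

assert render((PieceType.KING, Player.BLACK)) == "k"
assert render((PieceType.KING, Player.BLACK), UNICODE_GLYPHS) == "♚"
```

Switching the whole UI from ASCII to unicode is then a one-argument change.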
But that still leaves a long string of if is_pawn(piece) type tests. It would be even better to have a type hierarchy which represents the chess pieces, and the specific types would know how they can move:
class ChessPiece:
...
class Pawn(ChessPiece):
...
class Rook(ChessPiece):
...
...
class King(ChessPiece):
...
With appropriate member functions, you could ask the piece which side it belongs to, where it is on the board, and what its valid moves are. | {
"domain": "codereview.stackexchange",
"id": 44267,
"tags": "python, python-3.x, game, console, chess"
} |
Creating a map with Lidar on Neato without control | Question:
Hi folks,
finding the tutorials linked in this question
http://answers.ros.org/question/60363/neato-laser-scan-shows-impossible-datas-in-rviz/
I learned a lot about connecting the Neato's lidar to ROS.
But because I'm not so familiar with ROS (at work I only develop
complex C++ programs), I hope someone can teach me a way to fulfill
my wish:
I own a Vorwerk VR100 with a Rev. 64 PCB (so no additional serial ports
are known at this time). So I planned to connect a transparent bluetooth
module to the lidar's port (only listening to the lidar data during the
cleaning process). I don't want to control the robot at first;
only passively scan the lidar data.
I'd like to watch live how a map of my house is created out of this
continuous lidar data without additional odometry, as seen in this video:
http://www.youtube.com/watch?v=o9KzTR0vTXk
Later I want to add a trace along the robot's path, as wide as its
brushes, to find skipped and uncleaned areas and to remove obstacles
causing such areas.
There was a video showing this procedure by the Neato developers with a
live view inside its brain: http://www.youtube.com/watch?v=hf1zY8vRC2E
So is there a way to create a map using only the lidar data, without
odometry and without control of the robot?
Sorry for my poor English, I'm German.
Appendix 1:
system: Ubuntu 10.10 maverick
ros: cturtle
installation: http://xv11hacking.wikispaces.com/Connecting+to+ROS
and https://github.com/tu-darmstadt-ros-pkg/hector_slam
Made "rosmake eigen" successfully and checked all entries from the wiki for ros+eigen, but on "rosmake hector_slam"
I still get the errors "Failed to find rosdep eigen for package ..." in a lot of packages and "Could not find module FindOpenCV.cmake or a..." in package hector_compressed_map_transport.
Why isn't it possible to get a consistent IDE in Linux, or at least understandable and reproducible instructions to build a package?
Is there anyone who has installed these packages successfully and can give me a hint where I made the mistake processing the tutorials?
Appendix 2:
Now, with Ubuntu 12.04 and ROS Fuerte, I got no error messages at the beginning of the build process. But now the process stops while building "hector_geotiff_plugins":
/home/vr100/ros/hector_slam/hector_geotiff_plugins/src/trajectory_geotiff_plugin.cpp:117:109: Fehler: expected constructor, destructor, or type conversion at end of input
make[3]: *** [CMakeFiles/hector_geotiff_plugins.dir/src/trajectory_geotiff_plugin.o] Fehler 1
make[3]: Verlasse Verzeichnis '/home/vr100/ros/hector_slam/hector_geotiff_plugins/build"
The source snippet out of trajectory_geotiff_plugin.cpp:
116 #include <pluginlib/class_list_macros.h>
117 PLUGINLIB_EXPORT_CLASS(hector_geotiff_plugins::TrajectoryMapWriter, hector_geotiff::MapWriterPluginInterface)
I made all the modifications requested here for "Eigen" but had no success. It doesn't like the line with "PLUGINLIB_EXPORT_CLASS". Any hints?
Edit: Excluding this line from the build makes it succeed, and all parts are visible in rviz. But this can't be the final solution because the geotiff plugin is then not functional.
Appendix 3:
Now I can create and show maps using the lidar data of the VR100 robot. But this works only if I first record the data with rosbag and then play them back to the map/rviz. Only recorded data causes any reaction in rviz; using the live lidar data, no reaction on the map is visible. When I play back the previous data in parallel, those data create a map, and afterwards the live data affect the direction and the (false) position of the shown tf (red/green angle) but not the map. rostopic shows in both cases the published /scan and /rpms topics (both subscribed by at least one subscriber). When running the driver and the rosbag player in parallel, the /scan and /rpms topics have two publishers. I changed the neato driver to publish its frame name as "scan" as well, to avoid remapping to "neato_laser" or other difficulties.
Appendix 4:
Big thanks to dornhege and jodafo. Now the lidar data are shown in the map. Yesterday I got my serial-bluetooth module from china, build it into the robot and can now watch it all over the house.
Two questions are left. Which parameters do I have to optimize to prevent the mapper from losing the robot's current position and starting a new map at another position: http://img801.imageshack.us/img801/9921/g52m.png
The maximal and minimal values for the laser data I have already limited to 3 m and 30 cm.
Second: I'd like to add a path as wide as the robot's brushes (~30 cm) to see which areas were cleaned and which were skipped. The path display itself only lets me change the color, not the width. Is there a possibility to create my own path display with a wider line?
Originally posted by FlashErase on ROS Answers with karma: 39 on 2013-09-03
Post score: 0
Original comments
Comment by jodafo on 2013-09-08:
you most likely won't need the geotiff_plugin as long as you don't want to participate in robocup rescue ;).
as for the error, i could imagine it broke somewhere in the process of making hector slam run on groovy shrug.
Comment by FlashErase on 2013-09-09:
Thank you for this hint. In this case I'll ignore this error and let the problematic line excluded.
Comment by dornhege on 2013-09-11:
For additional questions, please open a new question instead of appending them here.
Answer:
Check out http://wiki.ros.org/hector_mapping.
Originally posted by dornhege with karma: 31395 on 2013-09-03
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by FlashErase on 2013-09-03:
Thank you. This system (and a lot of others) I already found. But without a plan of how to start, how to configure, and how to continue, there is no way for me to succeed.
Comment by FlashErase on 2013-09-03:
I need a hint how to see the map in rviz (I can only see the momentary distance points after enabling this hook). I found no setting to enable the map view. And second, where can I read how to use my own live lidar data for the map instead of the played sample data?
Comment by FlashErase on 2013-09-03:
@dornhege
Sorry for deleting your comment "Just add a map display in rviz." together with my answer.
A map display is always included. But it does not get data because the nodes "hector_mapping", "hector_trajectory_server" and "geotiff_node" didn't start. Messages: "Cannot locate node of type ..."
Comment by dornhege on 2013-09-03:
Do you have them installed/built?
Comment by FlashErase on 2013-09-03:
Sorry, I ignored some errormessages during build. The build process stops because "Eigen" isn't installed and configured. Reading this: hxxp://wiki.ros.org/eigen I surrender for today and hope, that I'll have tomorrow after work enough brain left to fight with this mess.
Comment by FlashErase on 2013-09-04:
I'm not intelligent enough. For "Failed to find rosdep eigen for package hector_mapping" Google finds 3 hits with no helpful info. The ROS wiki gives no hint for using Eigen in cturtle. I installed libeigen2-dev and libeigen3-dev. Always the same error messages. Any hints on using the installed Eigen?
Comment by FlashErase on 2013-09-04:
Appended text to my original question.
Comment by FlashErase on 2013-09-07:
Last question: how can i change the datasource in this tutorial: https://github.com/tu-darmstadt-ros-pkg/hector_slam from the bag player to the real laser data /neato_laser from this driver: http://wiki.ros.org/xv_11_laser_driver/Tutorials/Running%20the%20XV-11%20Node#View_the_data_in_RVIZ ?
Comment by dornhege on 2013-09-08:
remapping, I guess.
Comment by FlashErase on 2013-09-08:
Could you please give me a bit more information instead of only jotting down a nugget every time? I wrote in my question that I'm not so familiar with ROS and rviz. The documentation for rviz is poor and I can't find a successful way to use the lidar data for mapping there. What do I do for "remapping"?
Comment by dornhege on 2013-09-08:
It's only a guess, because I haven't looked at the data: I think the laser topics used in the two links you gave have different names. Remapping the neato laser topic to the name that is used in the example should fix that.
Comment by jodafo on 2013-09-08:
have a look at hector_mapping/launch/mapping_default.launch. In there it says
so you can either set the "scan_topic" arg to "/neato_laser" in your launch file, or use "remap". Syntax for both can be found in the tutorials.
Comment by FlashErase on 2013-09-09:
Hi jodafo, thank you, this was a tip in the form I hoped for. But unfortunately all changes bring only the effect that the map topic reports the error that "/scan" or "/neato_laser" can't be transformed to /map. Only when I change the fixed frame to /scan or /neato_laser can I see the lidar ... tbc
Comment by FlashErase on 2013-09-09:
data (but only the live data; no map is shown). Is there anywhere a step-by-step tutorial for creating a hector_mapping application using lidar data? The wiki documentation seems to assume that the reader already knows everything about ROS and co. The samples are so complex that no common thread can be found.
Comment by jodafo on 2013-09-09:
well, did you do http://wiki.ros.org/ROS/Tutorials ?
I honestly found the hector tutorial quite well written. I mean you just have to make sure your tf tree looks like the one given by the hector guys and remap the scan topic, then hector_mapping should publish a tf from map to odom frame.
Comment by FlashErase on 2013-09-10:
Thank you, jodafo. By reading a lot of manuals and FAQ I got step by step a (partially) functional configuration and can see a map of my homebot: http://img801.imageshack.us/img801/1347/v84x.png.
Unfortunately it only works when playing previously recorded bags. Live data from the lidar are ignored.
Comment by jodafo on 2013-09-10:
that is very weird. did you maybe set "use sim time" to true even though you are using the actual lidar?
Comment by FlashErase on 2013-09-11:
That's indeed weird. "use_sim_time" is "true". I put the relevant files on http://flasherase.ohost.de/vr100.zip. I'll append the exact behaviour to my initial question.
Comment by dornhege on 2013-09-11:
In that case set use_sim_time to false, when you're working in reality. | {
"domain": "robotics.stackexchange",
"id": 15412,
"tags": "ros, navigation, lidar, mapping, neato"
} |
Double use of ATP in relaxing myosin & active transportation of calcium? | Question: Is the ATP molecule which is bound to the myosin head for relaxation of the muscle (i.e., to break the cross bridge) also utilized for active transport of calcium to the sarcoplasmic reticulum during muscle relaxation?
Answer: ATP is used (hydrolyzed) by ion pumps to transport ions such as calcium against their concentration gradient.
Molecular motors (such as myosin and kinesin) also couple ATP hydrolysis with movement.
However, both these processes are separate, and just because calcium flux changes happen during muscle contraction doesn't mean that it is the same ATP that drives some kind of bifunctional enzyme to perform both tasks.
Moreover calcium is released during the contraction phase from the sarcoplasmic reticulum (SR); pumping back the calcium to the SR happens during the repolarization (there is no myosin movement at this stage). | {
"domain": "biology.stackexchange",
"id": 3246,
"tags": "physiology"
} |
Determine heat flux from temperature profile | Question:
The heating element and the insulator are of equal thickness L. Heat transfer in the air film adjacent to the heater is assumed negligible.
I've noticed that I find these types of problems the hardest to understand, so before I ask my question: does anyone know a good video or website which explains in detail how to interpret temperature/concentration profiles?
My answer to (a) would be that both of the points should have the same heat flux. I know that one of the relationships used for a layered wall is that q = constant, meaning that it is the same through every layer; however, is that the correct way of thinking for this problem? I don't think so, and I don't understand how I should interpret the profile.
Another way I thought about it was that the heat flux for the layers should be:
$$q_1=(k_1/L)\cdot(T_\text{rear} - T_{12})$$
$$q_3=(k_3/L)\cdot(T_{23} - T_\text{front})$$
and that:
$$k_3>k_1$$
$$(T_\text{rear} - T_{12})\gt(T_{23} - T_\text{front})$$
If I just for simplicity assume that $L=1\ \mathrm m$. Then I would get that
$$q_1=(\text{something small})\cdot(\text{something big})$$
$$q_3=(\text{something big})\cdot(\text{something small})$$
Hence another reason why I think that the heat fluxes are the same.
Answer: Your first line of thinking for part a) is correct. For conductive heat transfer through rectangular layers, the heat flux needs to be constant and equal through all layers at steady state, or you would have a 'build-up' of heat at an interface between layers. This would locally increase the temperature, and we wouldn't be at steady state.
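As a concrete check of the "flux is constant in series" statement, here is a small numerical sketch. All conductivities, thicknesses, and temperatures below are made-up illustrative numbers, not values from the problem:

```python
# Steady 1-D conduction through two slabs of equal thickness L in series.
# k1 << k3 mimics an insulating layer next to a conducting one; numbers are illustrative.
k1, k3 = 0.05, 1.0               # W/(m K), assumed conductivities
L = 0.01                         # m, thickness of each slab
T_rear, T_front = 400.0, 300.0   # K, assumed outer surface temperatures

# Series thermal resistances give one common flux q through both slabs
R_total = L / k1 + L / k3
q = (T_rear - T_front) / R_total

# Interface temperature follows from the flux through slab 1
T_interface = T_rear - q * L / k1

# Recompute the flux from each slab separately: both must equal q at steady state
q1 = k1 * (T_rear - T_interface) / L
q3 = k3 * (T_interface - T_front) / L
```

The slab with the smaller conductivity carries the larger temperature drop, which is exactly the "something small times something big" cancellation noted in the question.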
For b) your second approach to a) is useful. You know the relative sizes of the temperature gradients from the sketch, and know from a) that the heat flux must be the same for each slab. Combine these two and you will find a requirement on the relative magnitudes of the conductivities. | {
"domain": "chemistry.stackexchange",
"id": 11577,
"tags": "heat, thermal-conductivity"
} |
Why can't 2 electrons be in the same quantum state when they are far apart? | Question: I understand that when 2 electrons are confined to a very small volume of space, slightly bigger than their de Broglie wavelength, one of the pair must jiggle with increased momentum due to the Pauli exclusion principle.
But looking at G. Smith's comment in my earlier question, why can't 2 electrons separated with a vast distance of space share the same quantum state? It doesn't make sense to me, unless the electrons are bound to an atom; then each of them must go to a different energy level, since even the 2 electrons sharing the lowest energy state must have different spin states.
Answer: Electrons, in general, don't necessarily have well-defined positions, so the idea that they are "separated with a vast distance of space" is nebulous, at best. I'm going to assume that when you say
2 electrons separated with a vast distance of space
you mean
2 electrons whose wavefunctions are well-localized (i.e. strongly peaked in position space) and for which the peaks of the wavefunctions are separated by a large distance.
As you can plainly see from this characterization, the wavefunctions of the two electrons are very different, since they peak in different places. The wavefunction is part of the electron's quantum state.* This means that, since the electrons have different position wavefunctions, they are in different quantum states.
*There are other ways to define the quantum state that don't directly reference position, such as giving a momentum-space wavefunction or a decomposition into eigenstates of a particular potential, but the same principle applies - if whatever you use to represent the quantum state is different for one electron than for another, they're in different quantum states. | {
"domain": "physics.stackexchange",
"id": 63001,
"tags": "quantum-mechanics, electrons, pauli-exclusion-principle"
} |
Unit tests for a single JavaScript function | Question: Here is the question: What is the better way to unit test this specific function? Should all the test items be included in one test or is it better to break up the tests so each test is testing for something more specific?
Function being tested:
$scope.saveFailure = function (response) {
var errs = [];
response.data.forEach((entry) => errs.push("options." + entry.dataField));
if (errs.length > 0) $scope.$emit('datafields:errors', errs);
$scope.gotoTop();
$scope.processing = false;
};
Method 1: Multiple Unit Tests
var mockFailureResponse;
beforeEach(function () {
mockFailureResponse = {
data: [],
};
});
it('saveFailure should set processing to false', function () {
$scope.processing = true;
$scope.saveFailure(mockFailureResponse);
expect($scope.processing).toBe(false);
});
it('saveFailure should call goToTop()', function () {
spyOn($scope, 'gotoTop');
$scope.saveFailure(mockFailureResponse);
expect($scope.gotoTop).toHaveBeenCalled();
});
it('saveFailure should emit datafield errors when present', function () {
spyOn($scope, '$emit');
mockFailureResponse = {
data: [{dataField: "field"}],
};
$scope.saveFailure(mockFailureResponse);
expect($scope.$emit).toHaveBeenCalledWith('datafields:errors', ['options.field']);
});
it('saveFailure should not emit datafield errors non are present', function () {
spyOn($scope, '$emit');
$scope.saveFailure(mockFailureResponse);
expect($scope.$emit.calls.count()).toEqual(0);
});
Method 2: Single Unit Test
it('saveFailure should handle failed requests', function () {
spyOn($scope, '$emit');
let mockFailureResponse = {
data: [],
};
$scope.saveFailure(mockFailureResponse);
expect($scope.$emit.calls.count()).toEqual(0);
mockFailureResponse = {
data: [{ dataField: "field" }],
};
$scope.saveFailure(mockFailureResponse);
expect($scope.$emit).toHaveBeenCalledWith('datafields:errors', ['options.field']);
spyOn($scope, 'gotoTop');
$scope.saveFailure(mockFailureResponse);
expect($scope.gotoTop).toHaveBeenCalled();
$scope.processing = true;
$scope.saveFailure(mockFailureResponse);
expect($scope.processing).toBe(false);
});
Answer: I would say that, of course, the first version when you split your tests into granular logical blocks is much better and far more explicit.
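To illustrate the granular style in a language-neutral way, here is a Python sketch. The save_failure stand-in and all names are hypothetical (this is not the Angular code above); each test covers one behaviour and reads as arrange, act, assert:

```python
def save_failure(state, response):
    # Stand-in for the saveFailure handler: collect errors, "emit", reset flags
    errs = ["options." + entry["dataField"] for entry in response["data"]]
    if errs:
        state["emitted"] = ("datafields:errors", errs)
    state["at_top"] = True        # models gotoTop()
    state["processing"] = False
    return state

def test_sets_processing_false():
    state = {"processing": True}                              # arrange
    save_failure(state, {"data": []})                         # act
    assert state["processing"] is False                       # assert

def test_emits_errors_when_present():
    state = {}
    save_failure(state, {"data": [{"dataField": "field"}]})
    assert state["emitted"] == ("datafields:errors", ["options.field"])

def test_does_not_emit_without_errors():
    state = {}
    save_failure(state, {"data": []})
    assert "emitted" not in state

test_sets_processing_false()
test_emits_errors_when_present()
test_does_not_emit_without_errors()
```

If any one behaviour regresses, exactly one descriptively named test fails, which is the main payoff of the granular style.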
There is even a practice to have a single assertion per test which sometimes is too strict of a rule, but it helps to follow the Arrange Act Assert test organizational pattern. | {
"domain": "codereview.stackexchange",
"id": 27043,
"tags": "javascript, unit-testing, angular.js, jasmine"
} |
Relation between number of edges and vertices in a DAG | Question: I conjecture that, in a Directed Acyclic Graph, $O(|V|) = O(|E|)$. Is this statement correct, can it be refined? This is probably standard material; is there a simple reference about this?
Answer: The maximum number of edges in a DAG with n vertices is $\Theta(n^2)$.
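A quick way to see the $\Theta(n^2)$ bound is to construct the densest possible DAG explicitly: fix an ordering of the vertices and add every forward edge. A small Python sketch (function names are mine):

```python
from collections import deque

def complete_dag(n):
    """Every edge i -> j with i < j: the densest DAG on n vertices."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)]

def is_acyclic(n, edges):
    """Kahn's algorithm: the graph is a DAG iff every vertex gets topologically sorted."""
    indeg = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(n) if indeg[i] == 0)
    visited = 0
    while queue:
        u = queue.popleft()
        visited += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return visited == n

n = 6
edges = complete_dag(n)
assert len(edges) == n * (n - 1) // 2 and is_acyclic(n, edges)
```

Adding any further edge $(j, i)$ with $i < j$ would create a 2-cycle with the existing $(i, j)$, so $n(n-1)/2$ is tight. Hence $|E|$ can be quadratic in $|V|$, and the conjecture $O(|V|) = O(|E|)$ fails in general.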
Refer https://stackoverflow.com/questions/11699095/how-many-edges-can-there-be-in-a-dag | {
"domain": "cs.stackexchange",
"id": 5716,
"tags": "graphs"
} |
If we rub glass particles with paper , will there be any charge induction in glass particles? | Question: If we rub glass particles with paper , will there be any charge induction in glass particles ?
I know if you rub with silk they do get charged, but i want to know specifically for glass and paper.
Answer: Most materials rubbed against another (different) material will result in a transfer of charge from one to the other. This is known as the triboelectric effect. The amount of charge transferred can vary considerably and depends on a number of factors including how much energy went into rubbing as well as humidity of the atmosphere.
The triboelectric series can be useful in estimating the amount of charge which will be produced on each material. Materials which are 'far apart' in the series will generally produce more charge when rubbed together, compared to materials which are 'close' to each other in the series.
Both glass and paper have an affinity for positive charge, with glass having a slightly higher (positive) charge affinity of +25 nC/J whereas paper shows +10 nC/J. This means we would expect glass to have a slightly more positive charge (+15 nano-coulombs per joule) when rubbed with paper. | {
"domain": "physics.stackexchange",
"id": 18479,
"tags": "particle-physics, electrostatics, induction"
} |
How do I change the filename_format of picture to rostime? | Question:
Dear All:
At present, converting the bag file produces picture files. The file names are frame000001, frame000002, ...
How do I change the filename_format of picture to rostime?
Originally posted by lin on ROS Answers with karma: 1 on 2017-06-02
Post score: 0
Answer:
This link may help you.
Have you finished it?
Originally posted by alexchauncy with karma: 16 on 2018-05-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 28037,
"tags": "ros, image-view, rostime"
} |
Beginner Rust text adventure | Question: I've been trying to pick up some Rust experience and decided to try and make a text adventure game. I'd like some feedback on potential bad practice and non-Rust-style code I may have used. I'm moving towards Rust from a Python perspective.
use std::io::stdin;
struct Game {
room: usize,
inventory: Vec<Item>,
rooms: Vec<Room>
}
impl Game {
fn room(&self) -> &Room {
&self.rooms[self.room]
}
fn room_mut(&mut self) -> &mut Room {
&mut self.rooms[self.room]
}
fn exits(&self) {
let mut index = 0;
let mut s = String::from(
format!("{} has {} exits:", &self.room().name, &self.room().exits.len())
);
for exit in &self.room().exits {
s = format!("{}\n({}) {}", s, index, self.rooms[*exit].name);
index += 1;
}
println!("{}", s);
}
fn view_inventory(&self) {
let mut index = 0;
let mut s = String::from(
format!("You have {} items:", self.inventory.len())
);
for item in &self.inventory {
s = format!("{}\n({}) {}", s, index, item.name);
index += 1;
}
println!("{}", s);
}
fn move_room(&mut self, room: usize) {
self.room = self.room().exits[room];
}
fn take(&mut self, item: usize) -> &Item {
let item = self.room_mut().items.remove(item);
self.inventory.push(item);
&self.inventory[self.inventory.len() - 1]
}
}
struct Item {
name: String,
description: String
}
struct Room {
name: String,
description: String,
exits: Vec<usize>,
items: Vec<Item>
}
impl Room {
fn look(&self) {
println!("{}", self.description)
}
fn inspect(&self) {
let mut index = 0;
let mut s = String::from(
format!("{} has {} items:", &self.name, &self.items.len())
);
for item in &self.items {
s = format!("{}\n({}) {}", s, index, item.name);
index += 1;
}
println!("{}", s);
}
}
fn main() {
let mut rooms = vec![
Room {
name: String::from("Bedroom"),
description: String::from("A tidy, clean bedroom with 1 door and a balcony"),
exits: vec![1, 2],
items: vec![ Item {
name: String::from("Key"),
description: String::from("A golden key")
}]
},
Room {
name: String::from("Balcony"),
description: String::from("An outdoor balcony that overlooks a gray garden"),
exits: vec![0],
items: vec![]
},
Room {
name: String::from("Landing"),
description: String::from("A carpetted landing with doors leading off it. It overlooks a large living space. A set of stairs leads down"),
exits: vec![0],
items: vec![]
},
];
let mut player = Game {
room: 0,
rooms: rooms,
inventory: vec![]
};
println!("Type `look' to look around. Type `move <room no>' to switch room");
loop {
let mut input = String::new();
match stdin().read_line(&mut input) {
Ok(_) => {
let mut commands = input.trim().split_whitespace();
match commands.next() {
Some("look") => {
player.room().look();
player.exits();
}
Some("move") => {
let args: Vec<&str> = commands.collect();
if args.len() != 1 {
println!("Incorrect args.");
continue;
}
let room_no: usize = match args[0].parse() {
Ok(a) => {a},
Err(e) => {
println!("{}", e);
continue
}
};
player.move_room(room_no);
println!("You moved to {}", player.room().name);
}
Some("inventory") => {
player.view_inventory();
}
Some("inspect") => {
player.room().inspect();
}
Some("take") => {
let args: Vec<&str> = commands.collect();
if args.len() != 1 {
println!("Incorrect args.");
continue;
}
let item_no: usize = match args[0].parse() {
Ok(a) => {a},
Err(e) => {
println!("{}", e);
continue
}
};
let item = player.take(item_no);
println!("You collected {}", item.name);
}
None => {},
_ => {},
}
}
Err(error) => panic!("Error occured reading stdin: {}", error),
}
}
}
The source is also available on GitHub.
Answer: struct Game {
room: usize,
inventory: Vec<Item>,
rooms: Vec<Room>
}
I would recommend current_room instead of room. The meaning is slightly clearer.
You have several functions like the following, the comments here apply to all of them
fn exits(&self) {
let mut index = 0;
let mut s = String::from(
format!("{} has {} exits:", &self.room().name, &self.room().exits.len())
);
format! already produces a String, so you don't need String::from. You also do not need the & on the arguments, because the format! macro takes its arguments by reference.
for exit in &self.room().exits {
You can for (index, exit) in self.room().exits.iter().enumerate() {. Then you don't need to keep track of the index yourself.
s = format!("{}\n({}) {}", s, index, self.rooms[*exit].name);
Rather than allocating a new string each iteration, it probably makes sense to use s.push_str to append onto the existing string.
index += 1;
}
println!("{}", s);
There doesn't appear to be a good reason to build up a string object and then print it. Your code would be simpler here if you just println! each piece of the string as you build it.
}
let mut rooms = vec![
Room {
name: String::from("Bedroom"),
description: String::from("A tidy, clean bedroom with 1 door and a balcony"),
exits: vec![1, 2],
items: vec![ Item {
name: String::from("Key"),
description: String::from("A golden key")
}]
},
All your strings are static, so you may want to consider using &'static str to hold the various strings instead of String which will avoid having to call String::from when you create the room objects here.
You may also want to consider installing clippy. It is an extra Cargo command (cargo clippy) that has a number of extra lints for common Rust mistakes. It points to several of the points I showed here. | {
"domain": "codereview.stackexchange",
"id": 32223,
"tags": "beginner, rust, adventure-game"
} |
How can gravity be described as carried by gravitons if light is affected by gravity but has no effect on gravity? | Question: If photons emit gravitons while passing near a mass, then those gravitons should affect the mass just as much as the gravitons the massive object emits; but we know light has no mass, so it should have no effect on gravity. Another problem I was thinking about: gravity propagates at the speed of light, but when an object follows a curve of spacetime while moving away from a mass, we may say the gravitational force it feels is weaker. If the object just follows the spacetime curve, it would be affected immediately; but if gravity results from emitted gravitons, the effect would be delayed until the gravitons reach the mass, which would make gravity weaken with distance differently than we expect, because the delay grows the farther you get from the mass.
Answer: In classical general relativity any object with a four-vector, i.e. anything that has energy and momentum, contributes to the curvature of spacetime that is gravity. Light has energy and momentum in classical electromagnetism, so classical gravitational fields are affected by it.
In quantum mechanics classical light emerges from photons, and photons have a four-vector. In effective quantization of gravity, which is the framework where gravitons are expected as the gauge bosons of gravity, any interaction between photons and a gravitational field happens via virtual gravitons, so the argument about masses does not hold. There are integrals, with bounds over which the integration runs, in which the probable interaction of photons with the gravitational field of massive objects is represented by virtual gravitons. Real, on-mass-shell gravitons have not been seen the way photons are seen, so it is not meaningful to talk about graviton-photon scattering.
You have to study quantum field theory to be able to really understand this.
When gravity is definitively quantized one expects that the same will hold true. | {
"domain": "physics.stackexchange",
"id": 77078,
"tags": "general-relativity, gravity, visible-light, mass-energy, quantum-gravity"
} |
Why is the mean speed of a gaseous molecule $\sqrt{\frac{8RT}{\pi M}}$? | Question: I was able to derive the formula for root-mean-squared speed of a molecule, using a basic method taught at school. However, in my textbook, there's a point that:
It can be shown that $\bar v = \sqrt{\frac{8RT}{\pi M}}$
where $R = 8.314\text{ J mol}^{-1}\text{ K}^{-1}$, T = Absolute temperature, and M = Molar mass of given gas.
I was unable to prove this. I am aware of the Maxwell-Boltzmann speed distribution curve, but haven't learnt it in detail. Can this be derived from that? Are there other methods of deriving it?
Answer: You're on the right track: you do indeed need to work out $\bar v$ from the Maxwell-Boltzmann distribution. There are no other methods because, by definition, you compute a mean with respect to a probability distribution describing the underlying population, so the pdf has to come in somewhere.
The MB distribution comes from (1) the Boltzmann Distribution, justified by the analysis of the Canonical Ensemble, and, to apply this distribution, you also need to know what the degeneracy function aka density of states $g(E)$ is - with each particle's energy given by $m\,v^2/2$, the number of possible states in a narrow speed range around speed $v$ is proportional to the volume of a spherical shell, i.e. proportional to $v^2$. So when we work out the Boltzmann distribution for the speeds, we get $p(v)\propto g(E) \exp(-E/(k\,T)) \propto v^2 e^{-\frac{m\,v^2}{2\,k\,T}}$, which last function is proportional to the Maxwell-Boltzmann distribution $p_{MB}(v)$.
Now all you need to do is find the correct normalization constant to make $\int_0^\infty p_{MB}(v)\,\mathrm{d} v = 1$ (or look up the normalized distribution) and then work out the mean speed $\bar v = \int_0^\infty\,v\, p_{MB}(v)\,\mathrm{d} v$ and you're there.
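As a sanity check, the integral can also be done numerically. The sketch below (variable names mine; nitrogen chosen arbitrarily as the gas) integrates the normalized distribution with a midpoint rule and compares against $\sqrt{8RT/\pi M}$:

```python
import math

def mb_pdf(v, m, k, T):
    """Normalized Maxwell-Boltzmann speed distribution."""
    a = m / (2 * k * T)
    return 4 * math.pi * (a / math.pi) ** 1.5 * v ** 2 * math.exp(-a * v ** 2)

def mean_speed_numeric(m, k, T, n=20000):
    vmax = 12 * math.sqrt(k * T / m)   # integrate far into the tail
    dv = vmax / n
    # midpoint rule for \int_0^infinity v * p(v) dv
    return sum(((i + 0.5) * dv) * mb_pdf((i + 0.5) * dv, m, k, T)
               for i in range(n)) * dv

k_B = 1.380649e-23        # J/K
N_A = 6.02214076e23       # 1/mol
R = k_B * N_A             # ~8.314 J/(mol K)
M = 0.028                 # kg/mol (roughly N2, purely illustrative)
T = 300.0                 # K

numeric = mean_speed_numeric(M / N_A, k_B, T)
closed_form = math.sqrt(8 * R * T / (math.pi * M))
# the two values agree to well under a percent
```

This is only a numerical confirmation, of course; the closed form still has to come from evaluating the Gaussian-type integral analytically.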
You will also need to convert the Boltzmann constant $k$ to molar gas constant $R=N_A\,k$. | {
"domain": "physics.stackexchange",
"id": 46888,
"tags": "kinetic-theory"
} |
How undecidable is it whether a given Turing machine runs in polynomial time? | Question: The proof of Theorem 1 that PTime is not semi-decidable in this recent preprint effectively shows that it is $\mathsf{R}\cup\mathsf{coR}$-hard. The proof itself is similar to undecidability proofs at cs.SE and cstheory.SE. However, what is different is a footnote, which highlights that proving both $\mathsf{R}$-hardness and $\mathsf{coR}$-hardness seems to be new:
1The undecidability of PTime seems to be folklore, but non-semi-decidability seems to be new, even though semi-decidability is a natural property here.
But I want to ignore the question whether this observation is new. What interests me is what is known about the degree of undecidability of PTime. For example, is computation in the limit powerful enough to decide PTime? Are there better lower bounds than $\mathsf{R}\cup\mathsf{coR}$ for the difficulty of PTime?
Note: I denote the problem to decide "whether a given Turing machine runs in polynomial time" as PTime here. I know that this is incorrect, because PTime is a class of languages and not a class of Turing machines. However, I found this misuse of terminology convenient for this question.
Answer: The answer is $\bf 0''$ (and so in particular computation in the limit - which corresponds to $\le_T\bf 0'$ - is not enough). And this stronger result is also folklore (I was assigned it as an exercise way back when).
As an upper bound we just check quantifier complexity: $\Phi_e$ runs in polynomial time iff there exists some polynomial $p$ such that for every input $n$ we have $\Phi_e(n)[p(\vert n\vert)]\downarrow$ (that final clause only uses bounded quantifiers). So running in polynomial time is a $\Sigma^0_2$ property.
Now we just need to show optimality. We'll actually show a bit more: not just that the Turing degree is $\bf 0''$, but that the set is $\Sigma^0_2$-complete. Supposing $$A=\{x:\exists y\forall z\theta(x,y,z)\}$$ (with $\theta\in\Delta^0_0$) is any other $\Sigma^0_2$ set, we want to reduce $A$ to the set of programs running in polynomial time.
So fix an instance $i$. We'll say $m$ is $s$-bad for $i$ if $m<s$ and there is some $n<s$ such that $\neg\theta(i,m,n)$. Let $t_i(s)$ be the least $m$ which is not $s$-bad for $i$. Now we'll whip up a stupid program $\Phi_{e_i}$ as follows: on input $s$, $\Phi_{e_i}(s)$ runs for ${\vert s\vert}^{t_i(s)}$-many steps and then halts and outputs $0$. The point is that if $i\in A$ then $t_i(s)$ stabilizes and so $\Phi_{e_i}$ runs in polynomial time, and otherwise $\Phi_{e_i}$ does not run in polynomial time.
Now the map $\mu: i\mapsto e_i$ is computable, so we get a many-one reduction of $A$ to our set.
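The bookkeeping behind $t_i(s)$ can be made concrete with a toy simulation. All names below are mine, and $\theta$ is an arbitrary decidable predicate chosen so that $i\in A$:

```python
def t(theta, i, s):
    """Least m < s that is not s-bad for i, i.e. no n < s refutes theta(i, m, n)."""
    for m in range(s):
        if all(theta(i, m, n) for n in range(s)):
            return m
    return s  # every m < s is s-bad

# Example predicate: theta(i, m, n) holds iff m >= i, so "exists m forall n"
# is true for every i, with witness m = i.
theta = lambda i, m, n: m >= i

stabilized = [t(theta, 3, s) for s in range(4, 20)]
# t(3, s) settles at the true witness m = 3, so the runtime bound |s|^{t_i(s)}
# stops growing and the constructed machine runs in polynomial time
```

If instead no witness existed, $t_i(s)$ would grow without bound as $s$ increases, and the exponent in the runtime bound would keep climbing, so the machine would not run in polynomial time.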
As a quick comment, note that we only used one property of polynomial time above: that there is no biggest polynomial time bound. This let us keep pushing up the runtime bit by bit (each time $t_i(s)$ goes up) while preserving the possibility of still running in polynomial time. By contrast, "halts within seven steps on all inputs" (say) is merely $\Pi^0_1$-complete. Basically, whenever we have a concrete resource-bounded complexity class with no "maximal bound," being in that class will be $\Sigma^0_2$-complete. In particular, this applies to NP, EXPTIME, PSPACE, ... | {
"domain": "cs.stackexchange",
"id": 14989,
"tags": "computability, undecidability, polynomial-time"
} |
Does a short circuit cause appliances to catch fire? | Question: I know that a short circuit can cause appliances to catch fire. But in my exam the question was whether the assertion is true.
I think that although each wire has insulation, the wires are separated by a distance at the plug, i.e. where it connects to the appliance.
Answer: Assertion A is correct.
A short circuit can result in the release of a large amount of energy in the form of heat. That, in turn, can ignite materials and cause a fire. What prevents wires from short-circuiting is the separation of the conductors by electrical insulation. In the case of wires it's the insulation on the surface of the wires.
Most electrical insulations consist of polymeric materials, chiefly plastics. At elevated temperatures (temperatures that exceed the temperature rating of the insulation) the insulation becomes compromised. This can be a long term degradation in the electrical and thermal properties of the material. It can also occur rapidly at very high temperatures that cause the plastic to melt and come off the conductor resulting in a short-circuit.
RESPONSE TO EDIT:
I'm assuming the cord conductors, where they separate at the entrance to the appliance, are encapsulated in a female plug body, since cord conductors are never split up where they enter the appliance.
Clearly, if the conductors are reliably separated there is a low probability of shorting. However, there is always the possibility of the connector body overheating due to improper or faulty connections between the conductors and the connector pins (typically crimp type connections), which could lead to arcing and breakdown of the insulation in the connector. The risk is higher for high amperage products such as heating appliances.
Then you also need to consider the wiring inside the appliance, which is called internal wiring, where live and neutral conductors intermingle. Should these conductors overheat due, for example, to faults in the appliance (e.g., failure of temperature controls) insulation overheating and short-circuiting is possible.
Finally, there are the cord conductors going to the left of your diagram to the plug and receptacle. The cord conductor insulation, being exposed, is subject to possible physical damage.
Bottom line: All things considered, the risk of insulation failure and short circuiting leading to fire is greater the higher the temperatures the insulation is exposed to.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 65456,
"tags": "homework-and-exercises, thermodynamics, electricity, electric-circuits, heat-conduction"
} |
Openni in ROS ARM repository | Question:
Hi everyone,
I'm trying to install ROS on Odrouid-U2 board. I used installation instructions from ros.org/wiki/groovy/Installation/UbuntuARM
I was able to install the core system, but some packages such as openni_camera are marked as failed to build. There are also no libopenni and libopenni-sensor-primesense packages, which are provided in the ROS repository for x86 architectures.
I was able to compile OpenNI and openni_camera from sources with minor modifications, so now I would like to look into why deb building process fails, but I don't know where to start.
Where can I find the source code that is used for building these packages?
Is there any documentation about how the packaging process is organized?
Originally posted by Vladyslav Usenko on ROS Answers with karma: 13 on 2013-04-22
Post score: 1
Answer:
I'm running the ARM builds based on my fork of the build tools repository: https://github.com/trainman419/catkin-debs , and trying to push any fixes I make into the upstream repository where I can.
The OpenNI builds are missing for two reasons: I don't have builds of the driver packages you mentioned, and most of the reports are that the current OpenNI drivers run so poorly on most ARM chips that they're mostly useless, so getting them working hasn't been a priority.
The last build of openni-camera failed on Precise armel because the debs for drivers were missing: http://namniart.com/build/job/ros-groovy-openni-camera_binarydeb_precise_armhf/3/console ; I suspect my other targets are failing for the same reason.
If you can come up with ARM builds of the libopenni-dev and libopenni-sensor-primesense-dev debs, I'll inject them into my build system and retry the openni-camera builds.
Originally posted by ahendrix with karma: 47576 on 2013-04-22
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Vladyslav Usenko on 2013-04-23:
Looks like the openni deb packages are coming from these repos: https://github.com/jspricke/debian-openni.git and https://github.com/jspricke/openni-sensor-primesense , at least I haven't found any other repositories yet. They don't work out of the box on ARM, but I will try to patch them a little.
Comment by JBuesch on 2013-08-20:
Has there been any progress on that?
Comment by Vladyslav Usenko on 2013-08-20:
I've managed to build the debs, but it looks like the first version of OpenNI doesn't work well on ARM. I ended up using OpenNI2. I think you can get the CMake port from this repo https://github.com/jkammerl/OpenNI2/ and use it with catkin.
"domain": "robotics.stackexchange",
"id": 13907,
"tags": "robotic-arm, openni"
} |
Extract Interface | Question: One of the latest refactorings for Rubberduck is Extract Interface. This refactoring will take a class, display all public members, and allow you to select which members you wish to include in your interface. Next, it creates the interface, adds Implement <Interface Name> to the top of the file, and calls Implement Interface to implement empty members of the interface. Unfortunately, due to the time to parse and resolve references, we chose to not rename the existing members.
This is the model for the refactoring:
public class ExtractInterfaceModel
{
private readonly RubberduckParserState _parseResult;
public RubberduckParserState ParseResult { get { return _parseResult; } }
private readonly IEnumerable<Declaration> _declarations;
public IEnumerable<Declaration> Declarations { get { return _declarations; } }
private readonly QualifiedSelection _selection;
public QualifiedSelection Selection { get { return _selection; } }
private readonly Declaration _targetDeclaration;
public Declaration TargetDeclaration { get { return _targetDeclaration; } }
public string InterfaceName { get; set; }
public List<InterfaceMember> Members { get; set; }
private static readonly DeclarationType[] DeclarationTypes =
{
DeclarationType.Class,
DeclarationType.Document,
DeclarationType.UserForm
};
public readonly string[] PrimitiveTypes =
{
Tokens.Boolean,
Tokens.Byte,
Tokens.Date,
Tokens.Decimal,
Tokens.Double,
Tokens.Long,
Tokens.LongLong,
Tokens.LongPtr,
Tokens.Integer,
Tokens.Single,
Tokens.String,
Tokens.StrPtr
};
public ExtractInterfaceModel(RubberduckParserState parseResult, QualifiedSelection selection)
{
_parseResult = parseResult;
_selection = selection;
_declarations = parseResult.AllDeclarations.ToList();
_targetDeclaration =
_declarations.SingleOrDefault(
item =>
!item.IsBuiltIn && DeclarationTypes.Contains(item.DeclarationType)
&& item.Project == selection.QualifiedName.Project
&& item.QualifiedSelection.QualifiedName == selection.QualifiedName);
InterfaceName = "I" + TargetDeclaration.IdentifierName;
Members = _declarations.Where(item => !item.IsBuiltIn &&
item.Project == _targetDeclaration.Project &&
item.ComponentName == _targetDeclaration.ComponentName &&
item.Accessibility == Accessibility.Public &&
item.DeclarationType != DeclarationType.Variable &&
item.DeclarationType != DeclarationType.Event)
.OrderBy(o => o.Selection.StartLine)
.ThenBy(t => t.Selection.StartColumn)
.Select(d => new InterfaceMember(d, _declarations))
.ToList();
}
}
And the presenter:
public interface IExtractInterfacePresenter
{
ExtractInterfaceModel Show();
}
public class ExtractInterfacePresenter : IExtractInterfacePresenter
{
private readonly IExtractInterfaceView _view;
private readonly ExtractInterfaceModel _model;
public ExtractInterfacePresenter(IExtractInterfaceView view, ExtractInterfaceModel model)
{
_view = view;
_model = model;
}
public ExtractInterfaceModel Show()
{
if (_model.TargetDeclaration == null) { return null; }
_view.ComponentNames =
_model.TargetDeclaration.Project.VBComponents.Cast<VBComponent>().Select(c => c.Name).ToList();
_view.InterfaceName = _model.InterfaceName;
_view.Members = _model.Members;
if (_view.ShowDialog() != DialogResult.OK)
{
return null;
}
_model.InterfaceName = _view.InterfaceName;
_model.Members = _view.Members;
return _model;
}
}
Next comes the refactoring:
public class ExtractInterfaceRefactoring : IRefactoring
{
private readonly RubberduckParserState _state;
private readonly IRefactoringPresenterFactory<ExtractInterfacePresenter> _factory;
private readonly IActiveCodePaneEditor _editor;
private ExtractInterfaceModel _model;
public ExtractInterfaceRefactoring(RubberduckParserState state, IRefactoringPresenterFactory<ExtractInterfacePresenter> factory,
IActiveCodePaneEditor editor)
{
_state = state;
_factory = factory;
_editor = editor;
}
public void Refactor()
{
var presenter = _factory.Create();
if (presenter == null)
{
return;
}
_model = presenter.Show();
if (_model == null) { return; }
AddInterface();
}
public void Refactor(QualifiedSelection target)
{
_editor.SetSelection(target);
Refactor();
}
public void Refactor(Declaration target)
{
_editor.SetSelection(target.QualifiedSelection);
Refactor();
}
private void AddInterface()
{
var interfaceComponent = _model.TargetDeclaration.Project.VBComponents.Add(vbext_ComponentType.vbext_ct_ClassModule);
interfaceComponent.Name = _model.InterfaceName;
_editor.InsertLines(1, GetInterface());
var module = _model.TargetDeclaration.QualifiedSelection.QualifiedName.Component.CodeModule;
var implementsLine = module.CountOfDeclarationLines + 1;
module.InsertLines(implementsLine, "Implements " + _model.InterfaceName);
_state.RequestParse(ParserState.Ready);
var qualifiedSelection = new QualifiedSelection(_model.TargetDeclaration.QualifiedSelection.QualifiedName,
new Selection(implementsLine, 1, implementsLine, 1));
var implementInterfaceRefactoring = new ImplementInterfaceRefactoring(_state, _editor, new MessageBox());
implementInterfaceRefactoring.Refactor(qualifiedSelection);
}
private string GetInterface()
{
return "Option Explicit" + Environment.NewLine + string.Join(Environment.NewLine, _model.Members.Where(m => m.IsSelected));
}
}
And the support classes, InterfaceMember and Parameter:
public class Parameter
{
public string ParamAccessibility { get; set; }
public string ParamName { get; set; }
public string ParamType { get; set; }
public override string ToString()
{
return ParamAccessibility + " " + ParamName + " As " + ParamType;
}
}
public class InterfaceMember
{
public Declaration Member { get; set; }
public IEnumerable<Parameter> MemberParams { get; set; }
public string Type { get; set; }
public string MemberType { get; set; }
public string PropertyType { get; set; }
public bool IsSelected { get; set; }
public string MemberSignature
{
get
{
var signature = Member.IdentifierName + "(" +
string.Join(", ", MemberParams.Select(m => m.ParamType)) + ")";
return Type == null ? signature : signature + " As " + Type;
}
}
public string FullMemberSignature
{
get
{
var signature = Member.IdentifierName + "(" +
string.Join(", ", MemberParams) + ")";
return Type == null ? signature : signature + " As " + Type;
}
}
public InterfaceMember(Declaration member, IEnumerable<Declaration> declarations)
{
Member = member;
Type = member.AsTypeName;
GetMethodType();
MemberParams = declarations.Where(item => item.DeclarationType == DeclarationType.Parameter &&
item.ParentScope == Member.Scope)
.OrderBy(o => o.Selection.StartLine)
.ThenBy(t => t.Selection.StartColumn)
.Select(p => new Parameter
{
ParamAccessibility = ((VBAParser.ArgContext)p.Context).BYREF() == null ? Tokens.ByVal : Tokens.ByRef,
ParamName = p.IdentifierName,
ParamType = p.AsTypeName
})
.ToList();
if (PropertyType == "Get")
{
MemberParams = MemberParams.Take(MemberParams.Count() - 1);
}
IsSelected = false;
}
private void GetMethodType()
{
var context = Member.Context;
var subStmtContext = context as VBAParser.SubStmtContext;
if (subStmtContext != null)
{
MemberType = Tokens.Sub;
}
var functionStmtContext = context as VBAParser.FunctionStmtContext;
if (functionStmtContext != null)
{
MemberType = Tokens.Function;
}
var propertyGetStmtContext = context as VBAParser.PropertyGetStmtContext;
if (propertyGetStmtContext != null)
{
MemberType = Tokens.Property;
PropertyType = Tokens.Get;
}
var propertyLetStmtContext = context as VBAParser.PropertyLetStmtContext;
if (propertyLetStmtContext != null)
{
MemberType = Tokens.Property;
PropertyType = Tokens.Let;
}
var propertySetStmtContext = context as VBAParser.PropertySetStmtContext;
if (propertySetStmtContext != null)
{
MemberType = Tokens.Property;
PropertyType = Tokens.Set;
}
}
public override string ToString()
{
return "Public " + MemberType + " " + PropertyType + " " + FullMemberSignature + Environment.NewLine + "End " + MemberType +
Environment.NewLine;
}
}
This is the dialog's code-behind:
public partial class ExtractInterfaceDialog : Form, IExtractInterfaceView
{
public string InterfaceName
{
get { return InterfaceNameBox.Text; }
set { InterfaceNameBox.Text = value; }
}
private List<InterfaceMember> _members;
public List<InterfaceMember> Members
{
get { return _members; }
set
{
_members = value;
InitializeParameterGrid();
}
}
public List<string> ComponentNames { get; set; }
public ExtractInterfaceDialog()
{
InitializeComponent();
InterfaceNameBox.TextChanged += InterfaceNameBox_TextChanged;
InterfaceMembersGridView.CellValueChanged += InterfaceMembersGridView_CellValueChanged;
SelectAllButton.Click += SelectAllButton_Click;
DeselectAllButton.Click += DeselectAllButton_Click;
}
void InterfaceNameBox_TextChanged(object sender, EventArgs e)
{
ValidateNewName();
}
void InterfaceMembersGridView_CellValueChanged(object sender, DataGridViewCellEventArgs e)
{
_members.ElementAt(e.RowIndex).IsSelected =
(bool) InterfaceMembersGridView.Rows[e.RowIndex].Cells[e.ColumnIndex].Value;
}
void SelectAllButton_Click(object sender, EventArgs e)
{
ToggleSelection(true);
}
void DeselectAllButton_Click(object sender, EventArgs e)
{
ToggleSelection(false);
}
private void InitializeParameterGrid()
{
InterfaceMembersGridView.AutoGenerateColumns = false;
InterfaceMembersGridView.Columns.Clear();
InterfaceMembersGridView.DataSource = Members;
InterfaceMembersGridView.AlternatingRowsDefaultCellStyle.BackColor = Color.Lavender;
InterfaceMembersGridView.MultiSelect = false;
var isSelected = new DataGridViewCheckBoxColumn
{
AutoSizeMode = DataGridViewAutoSizeColumnMode.AllCells,
Name = "IsSelected",
DataPropertyName = "IsSelected",
HeaderText = string.Empty,
ReadOnly = false
};
var signature = new DataGridViewTextBoxColumn
{
AutoSizeMode = DataGridViewAutoSizeColumnMode.Fill,
Name = "Members",
DataPropertyName = "MemberSignature",
ReadOnly = true
};
InterfaceMembersGridView.Columns.AddRange(isSelected, signature);
}
void ToggleSelection(bool state)
{
foreach (var row in InterfaceMembersGridView.Rows.Cast<DataGridViewRow>())
{
row.Cells["IsSelected"].Value = state;
}
}
private void ValidateNewName()
{
var tokenValues = typeof(Tokens).GetFields().Select(item => item.GetValue(null)).Cast<string>().Select(item => item);
OkButton.Enabled = !ComponentNames.Contains(InterfaceName)
&& char.IsLetter(InterfaceName.FirstOrDefault())
&& !tokenValues.Contains(InterfaceName, StringComparer.InvariantCultureIgnoreCase)
&& !InterfaceName.Any(c => !char.IsLetterOrDigit(c) && c != '_');
InvalidNameValidationIcon.Visible = !OkButton.Enabled;
}
}
Please tell me everything that can be improved with this code. The more non-trivial improvements you make, the better. Nitpicks are also welcome.
Answer: This section of code feels funny to me.
private static readonly DeclarationType[] DeclarationTypes =
{
DeclarationType.Class,
DeclarationType.Document,
DeclarationType.UserForm
};
public readonly string[] PrimitiveTypes =
{
Tokens.Boolean,
Tokens.Byte,
Tokens.Date,
Tokens.Decimal,
Tokens.Double,
Tokens.Long,
Tokens.LongLong,
Tokens.LongPtr,
Tokens.Integer,
Tokens.Single,
Tokens.String,
Tokens.StrPtr
}
These really feel like they belong to another class. I'm not really sure what to name that class, but these belong closer to the Parser. Maybe they're static members of DeclarationTypes and Tokens respectively. Maybe they're extension methods, or maybe they both belong to some helper class, but I have a hard time believing that no other IRefactorings need to know which tokens are primitive types, or which declaration types are classes.
The other issue with these is that the arrays can be modified, if someone was silly enough to do it.
public readonly string[] PrimitiveTypes =
The readonly modifier only means that we can't assign the identifier a different reference. Nothing stops us from modifying the internals of the array. I'd reach for a ReadOnlyCollection of some kind. ReadOnlyCollection is designed to be a base class, so this gives us one more reason to extract these useful snippets into some bit of reusable code. | {
"domain": "codereview.stackexchange",
"id": 17605,
"tags": "c#, meta-programming, rubberduck"
} |
Find the longest word's length | Question: The challenge is simple:
Return the length of the longest word in the provided sentence.
The solution is just as simple:
function findLongestWord(str) {
arr = str.split(' ');
size = 0;
for (var s in arr) {
if (arr[s].length > size) {
size = arr[s].length;
}
}
return size;
}
However, I vaguely remember you're not supposed to use for..in in JavaScript unless absolutely necessary. What would be the more idiomatic approach for this loop?
Answer: First, I'd not name the function findLongestWord as you're not looking for the longest word, but the length of the longest word. Try getLongestWordLength instead.
You're forgetting var for your variables. This makes them shoot up to the global scope and be declared there, and we don't want that to happen. For ES6, there's also let.
for-in is only advisable on objects, and even on objects you guard it with hasOwnProperty. That's because it iterates through prototype properties (things other than the array elements or instance properties). A regular loop (for or while) while incrementing an index until length would be better. But there's an even better approach...
You can create a map of lengths by using map on your split string, returning the length of the strings. Then you use Math.max to get the largest number in the array of lengths. We can use the spread operator (...) to spread the array as arguments to Math.max.
function getLongestWordLength(str){
return Math.max(...(str.split(' ').map(s => s.length)));
}
The above is ES6 syntax. The ES5 equivalent would look like the following. One notable difference, aside from the more verbose map is the use of apply to provide Math.max with a dynamic set of arguments.
function getLongestWordLength(str) {
return Math.max.apply(Math, str.split(' ').map(function (s) {
return s.length;
}));
}
document.write(getLongestWordLength('The quick brown fox jumps over the lazy doge')); | {
"domain": "codereview.stackexchange",
"id": 17674,
"tags": "javascript, strings, programming-challenge"
} |
Phase difference of reflected waves | Question: The Phase difference of 2 travelling waves is how much one wave has shifted from the other, 'angle wise'
It's constant if both waves travel at the same speed. https://www.desmos.com/calculator/lrekw2jxpd
I learnt that the phase difference of a reflected wave is $\pi$, but what does 'phase difference' even mean here. The waves are travelling in opposite directions and hence there's no 'phase difference' (or at least a constant one) https://www.desmos.com/calculator/umtrrvxlkn
If you reflect a wave, you can always tell a time, t for which the phase difference between the original and reflected wave is $0, \pi, 2\pi..$(any real number).
So what do people mean when they say that the phase difference is $\pi$?
Answer: It means that when primary wave is expressed as
$$
y_p(z,t) = A_p\cos (\Omega t - kz),~~~A_p > 0
$$
so it moves in direction of $z$-axis, and reflecting wall is at $z=0$, then reflected wave is expressed as
$$
y_r(z,t) = A_r\cos (\Omega t + kz +\pi),~~~A_r > 0
$$
which means that at $z=0$, the reflected wave value $y_r(0)$ has opposite sign to incoming wave $y_p(0)$.
So the phase shift $\pi$ refers to values of both waves on the boundary.
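For instance, evaluating both expressions at the wall $z=0$ gives
$$
y_p(0,t) = A_p\cos (\Omega t), \qquad y_r(0,t) = A_r\cos (\Omega t + \pi) = -A_r\cos (\Omega t),
$$
so at every instant the reflected displacement at the boundary has the opposite sign to the incident one, which is exactly what the phase shift of $\pi$ encodes.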
This shift $\pi$ happens only for the reflected wave that is reflected back to medium with lower phase speed. For example, when light travelling in air is reflecting from the air-glass boundary back to air. | {
"domain": "physics.stackexchange",
"id": 76073,
"tags": "waves, reflection"
} |
Buying a telescope for the first time. How important is the focal ratio and is a computerised telescope worth it? | Question: I'm wanting to purchase a telescope for the first time. I've narrowed it down to 2 options based on what I've discovered from researching and my price range. I mainly want to do planetary observations and some deep space. The 2 options that I've found are the following
Celestron 114 LCM Computerized Telescope
Orion 9851 SpaceProbe 130 EQ Reflector Telescope
From what I've gathered, both have decent specs for what I want to do, but I'm still a bit lost when it comes to the aperture and focal length. The Celestron 114LCM has a 1000mm focal length but the aperture is only 114mm, whereas the Orion 130 has a 900mm focal length but a 130mm aperture. Am I right in thinking that the Orion telescope would be better for me? Also, the Celestron telescope is computerised so, although the fun would be taken out of messing around with the scope and feeling satisfied with the results, is it really worth having this feature for a first-time purchase? Thanks in advance!
Answer: We can compare these scopes on a few grounds:
Celestron 114 LCM
Greater focal length
Alt-Az mount (intuitive)
Computerized (easy)
Orion 9851 SpaceProbe 130 EQ
Cheaper
Larger aperture
Greater highest useful magnification
Non-computerized (the experience of learning the sky through one's own methods, alone with the telescope, is one which you should not deny yourself, in my humble opinion)
EQ mount (allows for simple star tracking)
These are just some items which stand out to me as being important. The question you have asked is largely one of opinion—there is no "right" answer here. It depends entirely on what you want. You say you want to focus on planetary observations, a field where extreme magnifications are common and useful. For deep space, larger apertures and shorter focal lengths will aid.
The focal ratio is a quick way to judge the "speed" of the telescope's optics. It is found by taking the ratio between the focal length and aperture. The Orion scope in this instance has an f-ratio of f/6.9, while that of the Celestron is f/8.8. This implies that the view through the Orion scope will be slightly brighter, broader, and less-magnified than that through the Celestron, for any given eyepiece. However, these ratios are quite close and are both within a range commonly used both for deep-sky and planetary observations.
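As a quick check of that arithmetic, both quoted f-ratios fall out of a one-line calculation (a sketch; focalRatio is a hypothetical helper, with the focal lengths and apertures taken from the question):

```javascript
// Focal ratio (f-number) = focal length / aperture, both in millimetres
const focalRatio = (focalLengthMm, apertureMm) => focalLengthMm / apertureMm;

console.log(focalRatio(1000, 114).toFixed(1)); // Celestron 114 LCM: "8.8"
console.log(focalRatio(900, 130).toFixed(1));  // Orion SpaceProbe 130: "6.9"
```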
My final piece of advice is that for a beginner, the subtle differences between these scopes in these regards simply do not matter. You are unlikely to experience any significant difference between these two scopes. Because of that, I would personally recommend going for the Orion telescope—it's cheaper, will give you greater magnifications for planetary observations, and its lack of computerization will force you to truly learn the sky. Should you desire easier set-up, less time navigating through the sky, and so on, then the computerized Celestron may be preferable.
Telescopes are beautiful instruments. Whichever you choose will serve you well.
Clear skies! | {
"domain": "astronomy.stackexchange",
"id": 1960,
"tags": "telescope, amateur-observing, deep-sky-observing"
} |
Circulation of Vector Field versus Work done by Vector Field | Question: According to wikipedia, the circulation of a vector field along a curve is defined as:
$$\oint V \cdot dl$$
where $V$ is the vector field, and $dl$ is the infinitesimal component along the boundary of the curve.
Isn't this analogous to work? Where work is defined as:
$$ \oint F \cdot dr $$
where $F$ is the vector force field and $dr$ is the infinitesimal path component? And if so, why is there a distinction in the naming?
Answer: The question makes sense. In the most general case, it is analogous to work only in that it is a continuous sum of infinitesimal displacement components dotted with another vector. In some specific cases they are very similar, and in some cases they are the exact same thing.
A circulation is a path integral in a vector field around a closed curve. Work is a path integral of force along a curve. Those two sentences themselves show what differs: the first has "vector field" and "closed" where the second does not, and the second has "force" where the first doesn't.
So those three differences, which do not always apply:
1. In a vector field, $V$ is unchanging. No matter how we move about in that field, the magnitude and direction of $V$ are functions of position only. That's what a vector field is.
This does not have to be true of work. The path taken may be part of what determines the force, or there may be a time component: the vector may change through time. In a vector field it is unaffected by time, path, everything except position.
In some cases, forces do form a vector field: a spring, an electric or magnetic field, or slow motion in a fluid pressure field. Many situations qualify.
2. In a circulation, the vector field isn't always force. As often as not it is something else, the electric field for example. But your question assumes and implies this already, as you are only asking if it is analogous. If it isn't force, then the integral of the vector dotted with displacement has a different meaning, but is otherwise a similar thing (how you've moved in the direction of the vector, weighted by its magnitude).
3. If you had asked about a path integral in a vector field and how that might be analogous to work, the question would be simpler, because we could stop there. But the link goes to a circulation, which is a path integral around a closed loop. So a circulation only applies to a loop where you end up where you started. There are situations where work is calculated over a closed loop in a vector field, and that would be exactly "a circulation", but they are less common.
If that had been the question, we'd be comparing "a path integral in a vector field" with "a path integral of force". So if we were moving around in a force field determined by location alone, like the ones mentioned earlier, our situation would qualify as both.
"domain": "physics.stackexchange",
"id": 82033,
"tags": "fluid-dynamics, vector-fields"
} |
How does an electric motor work? | Question: At school, we have been working on how an AC electric motor works. I have pasted a photo on how the textbook explains it functioning, however I do not understand it completely.
The description of each frame is not necessary for my question. The photo and intuition I believe is.
My question is: On the fifth frame, wouldn’t the motor “lose” a turn before switching poles again and then rotating? I have also pasted an illustration of my idea below:
Continuation:
As the photo tries to illustrate, wouldn’t the electric motor “lose” a turn because at 6 the current flow changes from up to down, but north is already down?
I understand that an electric motor runs very fast (and is thus difficult to observe and experiment with), but I am going frame by frame to understand how it really works.
Answer: Notice that the frames in the illustration come in pairs. Frame 1 is the same as frame 4, except that the rotating electromagnet has been rotated through 180 degrees (and its polarity has been reversed as well). Similarly frame 2 is the same as frame 5, except that the rotating electromagnet has been rotated through 180 degrees. So if there were a frame 6 shown it would look the same as frame 3 i.e. the rotating electromagnet would be aligned with the poles of the permanent magnet (the stator) but would have no polarity. This is because the AC current running through the windings of the electromagnet (the armature) does not reverse instantaneously - there are two brief moments in each cycle when the current in the armature is zero. These instants happen between frames 2 and 4 (when the polarity of the lower pole of the electromagnet changes from N to S), and again between frame 5 and 1 (when the polarity of the lower pole of the electromagnet changes from S to N). This instant of zero current is what the unpolarised state in frame 3 represents. At this instant there is no force on the rotating electromagnet, but it continues to rotate because of its momentum. | {
"domain": "physics.stackexchange",
"id": 98698,
"tags": "electromagnetism"
} |
Prove that the square root measurement $\Lambda_y=\frac14(\rho_{B^3})^{-\frac12}|\psi_y\rangle\langle\psi_y|(\rho_{B^{3}})^{-\frac{1}{2}}$ is a POVM | Question: Consider $X\sim \mathrm{Unif}([0,1,2,3])$ and $|\mathcal{Y}|=|\mathcal{X}|=4$. Also, for every realization $x$ of the random variable we use three parallel quantum channels like the one employed before, such that:
\begin{equation}
\displaystyle \rho_{XB^{3}}=\sum_{x}p_{X}(x)|x\rangle\langle x|_{X}\otimes|\psi_{x}\rangle\langle\psi_{x}|_{B^{3}},
\end{equation}
Prove that the Square Root Measurement:
$$ \Lambda_{y}=\frac{1}{4}(\rho_{B^{3}})^{-\frac{1}{2}}|\psi_{y}\rangle\langle\psi_{y}|(\rho_{B^{3}})^{-\frac{1}{2}},$$ for $y\in[0,1,2,3]$,
is a positive operator-valued measure.
A positive operator-valued measure (POVM) is a set of operators $\{\Lambda_j\}$ that satify:
\begin{align*}
\Lambda_j&\succeq 0\\
\sum_j \Lambda_j &=I.
\end{align*}
I have proved the first property (I think):
For any state $|\phi\rangle$, we need to prove that $\langle \phi | \Lambda_{y} | \phi \rangle \geq 0$:
\begin{align*}
\langle \phi | \Lambda_{y} | \phi \rangle &= \frac{1}{4} \langle \phi | (\rho_{B^{3}})^{-\frac{1}{2}}|\psi_{y}\rangle\langle\psi_{y}|(\rho_{B^{3}})^{-\frac{1}{2}} | \phi \rangle \\
&= \frac{1}{4} \left|\langle\psi_{y}|(\rho_{B^{3}})^{-\frac{1}{2}} | \phi \rangle\right|^2 \\
&\geq 0
\end{align*}
I am having trouble to prove the second property:
I have to prove the following equality:
$$\sum_{y} \langle \phi | \Lambda_{y} | \phi \rangle = \langle \phi | \phi \rangle$$
For that what I have is:
\begin{align*}
\sum_{y} \langle \phi | \Lambda_{y} | \phi \rangle &= \sum_{y} \frac{1}{4} \langle \phi | (\rho_{B^{3}})^{-\frac{1}{2}}|\psi_{y}\rangle\langle\psi_{y}|(\rho_{B^{3}})^{-\frac{1}{2}} | \phi \rangle \\
&= \frac{1}{4} \langle \phi | (\rho_{B^{3}})^{-\frac{1}{2}}\left(\sum_{y} |\psi_{y}\rangle\langle\psi_{y}|\right)(\rho_{B^{3}})^{-\frac{1}{2}} | \phi \rangle \\
\end{align*}
But I don't know how to proceed.
Answer: Thanks to everyone in the comments section who helped me arrive at this conclusion:
\begin{align*}
\sum_y \Lambda_y
&= \sum_y \frac{1}{4}(\rho_{B^{3}})^{-\frac{1}{2}}|\psi_{y}\rangle\langle\psi_{y}|(\rho_{B^{3}})^{-\frac{1}{2}}\\
&=(\rho_{B^{3}})^{-\frac{1}{2}}\sum_y \left(\frac{1}{4}|\psi_{y}\rangle\langle\psi_{y}|\right)(\rho_{B^{3}})^{-\frac{1}{2}}\\
&=(\rho_{B^{3}})^{-\frac{1}{2}}\rho_{B^3}(\rho_{B^{3}})^{-\frac{1}{2}}\\
&=I
\end{align*}
where we have used that $X\sim \mathrm{Unif}([0,1,2,3])$, so the probabilities in the spectral decomposition are $\frac{1}{4}.$
"domain": "quantumcomputing.stackexchange",
"id": 4513,
"tags": "quantum-state, measurement, povm"
} |
Proof that the axial current is conserved in classical QED | Question: I am trying to use the Lagrangian of QED (without kinetic terms for photons) to prove that the axial current of QED satisfies $\partial_\mu j^\mu_5 = 2im\bar\psi\gamma^5\psi,$ where $j^\mu_5 = \bar\psi\gamma^\mu\gamma^5\psi.$ Now, I have used the chiral transformation $\psi \to e^{i\alpha(x)\gamma^5}\psi$ and $\bar \psi \to \bar\psi e^{-i\alpha(x)\gamma^5}$. Working through the calculations, I found that the lagrangian changes to $$\mathcal L - i\alpha(x) (\bar\psi\gamma^5(i\gamma^\mu \partial_\mu - m -e\gamma^\mu A_\mu)\psi) +\alpha(x)(\partial_\mu\bar\psi\gamma^\mu\gamma^5\psi).$$ At this point, I cannot figure out how to get rid of the terms involving $A_\mu$ and the partial derivatives. If you can provide any help, it would be greatly appreciated.
Answer: The transformations are not what you have. The field $\bar \psi$ is defined by $\bar \psi = \psi^\dagger \gamma_0$, so they should be
$$
\psi\to e^{i\gamma^5 \alpha}\psi, \quad \bar\psi \to \bar\psi e^{i\gamma^5 \alpha}.
$$
This means that $m\bar\psi \psi$ is not invariant under the axial transformation.
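Sketching the remaining steps: the gauge term $-e\bar\psi\gamma^\mu A_\mu\psi$ stays invariant, because anticommuting one exponential of $\gamma^5$ past $\gamma^\mu$ flips its sign and the two phases cancel, while to first order in $\alpha(x)$ the kinetic and mass terms give
$$
\delta\mathcal{L} = -(\partial_\mu\alpha)\,\bar\psi\gamma^\mu\gamma^5\psi - 2i\alpha\, m\bar\psi\gamma^5\psi.
$$
Integrating the first term by parts and demanding $\delta S = 0$ on solutions of the equations of motion, for arbitrary $\alpha(x)$, yields $\partial_\mu j^\mu_5 = 2im\bar\psi\gamma^5\psi$, as required.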
"domain": "physics.stackexchange",
"id": 94041,
"tags": "lagrangian-formalism, conservation-laws, field-theory, noethers-theorem, classical-field-theory"
} |
What (actually) defines an Aten-class near-Earth asteroid? | Question: I just read in the EarthSky.org article Hours after discovery, asteroid swept by that the near-Earth asteroid 2016 QA2 recently passed the Earth at about 85,000 km. The article says that it is a member of the Aten group of asteroids. That article links to this very cool GIF http://www.virtualtelescope.net/2016qa2_28aug2016.gif, however it's 40 MB so it takes a while to load.
Wikipedia says:
Aten asteroids are defined by having a semi-major axis of less than one astronomical unit (AU), the distance from the Earth to the Sun. They also have an aphelion (furthest distance from the Sun) greater than 0.983 AU.[2]
[2] http://neo.jpl.nasa.gov/neo/groups.html
That link shows the following table and drawings. The text says:
NEAs are divided into groups (Aten, Apollo, Amor) according to their perihelion distance (q), aphelion distance (Q) and their semi-major axes (a).
Are $q$ and $Q$ really perihelion and aphelion respectively? The table says Aten asteroids have $Q>0.983$, but the drawing for Aten asteroids says $\text{Aphelion}<1.0167$. I noticed these values could be reciprocals, so there may be something more going on here. But at least on the surface there seems to be some contradiction.
Could someone confirm these definitions with an external source - and help me understand the basis for these limits that deviate from 1 AU by only +/- 1.7%?
The image below is from this explanation: Near Earth Asteroid / NEO Classifications Based on Locations. It might be helpful for discussion.
Answer: An Earth-crossing asteroid has either
an aphelion larger than Earth's perihelion ($Q > q_\oplus$, 0.983 AU),
or a perihelion smaller than Earth's aphelion ($q < Q_\oplus$, 1.017 AU).
For those which have both, the semimajor axis $a$ determines whether it is an Aten or an Apollo - subject to change by a close encounter with Earth.
This Minor Planet Center
blog post
is consistent with the JPL table.
I think the diagram used the wrong inequality for Aten aphelia. | {
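The classification logic in the JPL/MPC table can be sketched in a few lines (a simplification: classifyNEA is a hypothetical helper, and it omits the 1.3 AU outer cutoff on perihelion that the full Amor definition includes):

```javascript
// Earth's perihelion and aphelion in AU, the thresholds used in the JPL table
const EARTH_PERIHELION = 0.983;
const EARTH_APHELION = 1.017;

// q = object's perihelion, Q = object's aphelion, a = semi-major axis, all in AU
function classifyNEA(q, Q, a) {
  if (a < 1.0) {
    // Orbit mostly inside Earth's: an Aten still crosses Earth's path, an Atira never does
    return Q > EARTH_PERIHELION ? "Aten" : "Atira";
  }
  // Orbit mostly outside Earth's: an Apollo dips inside Earth's aphelion, an Amor only approaches
  return q < EARTH_APHELION ? "Apollo" : "Amor";
}
```

Note how the semi-major axis only acts as the tie-breaker between the two Earth-crossing groups, exactly as the answer describes.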
"domain": "astronomy.stackexchange",
"id": 1791,
"tags": "asteroids, near-earth-object"
} |
Design a simple card game in JavaScript | Question: I wrote a simple card game in JavaScript and I wonder if anyone could give me some advice and make some improvements on my code.
Here is the demo that I build:
https://codesandbox.io/s/cardgame-his95?file=/src/index.js
So it is a 2 person card game. Every card has a number and also has a pattern.
The game goes like this: we have a set of pre-defined rules, and these rules have a ranking. The player that satisfies the highest-ranked rule wins, and there could be a tie if both players end up with the same ranking. The goal here is not just to make the game work; I also wanted to account for maintainability. For example, we can easily add new rules, swap the ranking, and re-define the rules if needed.
There are mainly a couple of classes.
First is the Game class
class Game {
constructor({ play1, play2, rules, messages }) {
this.play1 = play1;
this.play2 = play2;
this.rules = rules;
this.messages = messages;
}
play() {
let rankOfP1 = Infinity;
let rankOfP2 = Infinity;
for (const rule of this.rules) {
if (rule.validator(this.play1)) {
rankOfP1 = rule.rank;
break;
}
}
for (const rule of this.rules) {
if (rule.validator(this.play2)) {
rankOfP2 = rule.rank;
break;
}
}
return rankOfP1 === rankOfP2
? this.messages.tie
: rankOfP1 < rankOfP2
? this.messages.win.player1
: this.messages.win.player2;
}
}
Here the rules are an array of rule objects, where every object looks like this:
{
description: "Six Cards of the same pattern",
rank: 1,
validator: cards => {
return hasSamePattern(cards, 6);
    }
}
The lower the rank gets, the more important the rule is. So if player1 satisfies a rule with rank 1, and player2 satisfies a rule with rank 2, then we say player1 won.
And validator is the function that takes an array of card objects and returns a boolean to determine whether the set of cards satisfies the rule.
And lastly we have a Card class which is pretty simple
class Card {
constructor({ pattern, number }) {
this.pattern = pattern;
this.number = number;
}
}
Please take a look and feel free to make some improvements on it. Also please suggest some better naming for variables if possible, I am not a native English speaker so some of the names of the variables would be a bit weird.
Finally, I wrote this game in an OOP style. I know that JavaScript is not the best language out there when it comes to OOP. Also, I am not really good at object-oriented design. I wonder if anyone knows how to rewrite the game in a functional programming style. That would be super cool!
Answer: Your constructor has
constructor({ play1, play2, rules, messages }) {
this.play1 = play1;
this.play2 = play2;
this.rules = rules;
this.messages = messages;
}
You may as well Object.assign the parameter to the instance instead:
constructor(config) {
Object.assign(this, config);
}
pattern is a slightly odd name for what it represents here - the usual English word for one of the clubs, diamonds, etc, is suit. rule is a bit strange as well - a rule usually refers to the process by which a game is played (eg "Hands consist of 6 cards" or "The player with the best hand wins"). To describe the different possible winning combinations and their ranks, I'd use the word handRanks or something similar. play1 and play2 aren't great descriptors either - these represent the cards held in each player's hand, so maybe use player1Cards or player1Hand.
With regards to the play() method, when you want to find an item in an array which fulfills a condition, it would be more appropriate to use .find, rather than a for loop - find more clearly indicates what the intention of the loop is, and is more concise. You also need to set the rank to Infinity if no handRanks pass - why not integrate this Infinity into the handRanks array itself? You're also writing the looping code twice - you can make it more DRY by putting it into a function, and calling that function twice instead.
new Card({ suit: "spade", number: 1 }), // <-- Suit
new HandRank({ // <-- HandRank
description: "Six Cards of the same suit", // <-- Suit
rank: 1,
validator: cards => {
return hasSameSuit(cards, 6); // <-- hasSameSuit, not hasSamePattern
}
}),
new HandRank({ // <-- HandRank
description: "Nothing special",
rank: Infinity, // <-- add this whole new HandRank
validator: cards => true,
}),
getRank(cards) {
return this.handRanks.find(({ validator }) => validator(cards)).rank; // <-- this.handRanks
}
play() {
const rankOfP1 = this.getRank(this.player1Cards); // <-- player1Cards
const rankOfP2 = this.getRank(this.player2Cards); // <-- player2Cards
return rankOfP1 === rankOfP2
? this.messages.tie
: rankOfP1 < rankOfP2
? this.messages.win.player1
: this.messages.win.player2;
}
One of the benefits of using arrow functions is that if the function contains only a single expression which is immediately returned, you can omit the { } brackets and the return keyword, if you want to make things concise, eg for the hasSameSuit test above:
validator: cards => hasSameSuit(cards, 6),
If you want to find if any item in an array passes a test, but you don't care about which item passes the test, you should use .some, not .find. (.some returns a boolean indicating whether any passed, .find returns the found item) For the hasSamePattern (or hasSameSuit) method, use:
return Object.values(patterns).some(num => num >= threshold);
Your hasConsecutiveNums method has the bug mentioned in the comments previously - a hand of [1, 2, 2, 3] will not pass a 3-consecutive-number test because the sorted array will contain 2 twice, failing the if (prevNum + 1 === num) check. De-duplicate the numbers with a Set first.
const nums = [...new Set(cards.map(card => card.number).sort((a, b) => a - b))];
I wonder if anyone knows how to rewrite the game in functional programming style.
JavaScript isn't entirely suited to completely functional programming either, though it can get most of the way there. To start with, make your functions pure, and avoid side effects and mutations. For example, assigning to a property of the instance with this.play1 = play1; (or this.player1Cards = player1Cards;) is a mutation. None of your code fundamentally requires anything non-functional (except the console.log at the very end, which is unavoidable), so it should be pretty easy to convert - rather than assigning to properties, just keep the variables in a closure, and return a function for the play method, eg:
const makeGame = ({ player1Cards, player2Cards, handRanks, messages }) => () => {
// getRank is now a standalone function which takes a handRanks parameter
const rankOfP1 = getRank(player1Cards, handRanks);
const rankOfP2 = getRank(player2Cards, handRanks);
return rankOfP1 === rankOfP2
? messages.tie
: rankOfP1 < rankOfP2
? messages.win.player1
: messages.win.player2;
};
const play = makeGame({ ... });
console.log(play()); | {
"domain": "codereview.stackexchange",
"id": 38052,
"tags": "javascript, object-oriented, game, playing-cards"
} |
How to tell rosdep that binaries are installed? | Question:
Being unable to install libusb-devel, I found a suggestion to install the binaries instead. So I did that.
But rosdep doesn't seem to know it and complains. I'm not sure that it matters (since I can't get openni to install anyway), but I wonder whether there's a way to tell rosdep that the binaries are there.
Edit:
No good way to do this in the comments so I'll ask here:
There's no libusb right now in rosdep.yaml, so would this look like the following at the end of my file?
libusb:
  macports: |
    # Manually installed
Originally posted by Eponymous on ROS Answers with karma: 255 on 2011-03-06
Post score: 0
Original comments
Comment by tom on 2011-03-10:
Please mark the question answered, if/once the issue is solved.
Answer:
The simplest workaround is to change the OSX definition of libusb in the rosdep.yaml file. You can set it to point to something already installed, or add a multiline script that is commented out, like below:
macports: |
  # Manually installed
Update:
you will need to find the rosdep.yaml file where it is getting the definition of libusb as needed by openni. I'm not sure what source you're running from but the following should find you the file.
rosdep where_defined libusb1.0
Originally posted by tfoote with karma: 58457 on 2011-03-06
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 4966,
"tags": "ros, libusb, rosdep, osx"
} |
Flow of current in open circuit | Question: If we place a bunch of positively charged particles on one side of a conducting wire and a bunch of negatively charged particles on its other side, the situation will be analogous to a battery. Will current then flow in the open conductor for a short time, until the positive charges are neutralized?
OK, current flow is possible. But can we create a steady current in this way?
Answer: A charged conductor has a potential $V$ that depends on the amount of charge carried and on the geometry of the conductor. In general, when two conductors with different potentials are connected by a conducting wire, a transient current passes from the body at higher potential to the one at lower potential. The current persists till the potential difference drops to zero, which usually takes a very short time.
If the two conductors initially have equal and opposite charges, zero potential difference is obtained when both conductors are neutralized.
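Numerically, this transient is the exponential $RC$ discharge worked out in the edit below; a quick sketch with illustrative component values (not taken from the question):

```python
import math

def discharge_current(V, R, C, t):
    """Transient discharge current I(t) = (V/R) * exp(-t/(R*C))."""
    return (V / R) * math.exp(-t / (R * C))

V = 100.0    # initial potential difference in volts (illustrative)
R = 1e6      # 1 megaohm
C = 5e-6     # 5 microfarads
tau = R * C  # time constant: 5 s with these values

# After 5 time constants the current has fallen to exp(-5), about 0.7%
# of its initial value -- practically zero, here after 25 seconds.
ratio = discharge_current(V, R, C, 5 * tau) / discharge_current(V, R, C, 0)
print(ratio)
```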
Edit:
Yes, it is practically possible: two conductors carrying equal and opposite charges form a capacitor by definition, and when we introduce a resistance into the circuit, the schematic representation becomes:
Our job is to find suitable values for $C$ and $R$ to extend the discharge duration as long as possible.
We can apply Kirchhoff's voltage law and solve for the current as a function of time, obtaining: $$I=\frac{V}{R} e^{-\frac{t}{RC}}$$
where $V$ is the initial voltage. The current decays to practically zero at $t=5RC$, with $t$ in seconds, $R$ in ohms, and $C$ in farads. So a $1 M\Omega$ resistor and a few-$\mu F$ capacitor can create a current that persists for quite a few seconds. | {
"domain": "physics.stackexchange",
"id": 35340,
"tags": "electricity, electric-circuits, electric-current, charge, classical-electrodynamics"
} |
rtabmap-ros mapping without loop closure detection | Question:
Hi,
I'm using rtabmap_ros with a Kinect in a simulation environment (Gazebo), on Ubuntu 14.04 with ROS Indigo.
I'm trying to 3D-map a very big structure model (an aircraft model) placed in a Gazebo environment. The 3D mapping is done using a Kinect mounted on a UAV that autonomously navigates around the structure. The map starts to be created successfully and incrementally, but after several hours the first mapped parts disappear. Covering the structure takes a lot of time: the path that the UAV follows is very long and does not include loop closures. I therefore increased RGBD/LocalImmunizationRatio from 0.025 to 0.5 to handle longer paths and set RGBD/LocalLoopDetectionSpace to false, but I still have the same problem. What could be the problem in my case?
Here is the launch file part with the parameters I used:
<group ns="$(arg ns)">
<node if="$(arg rgbd_odometry)" pkg="rtabmap_ros" type="rgbd_odometry" name="rgbd_odometry" output="screen">
<param name="frame_id" type="string" value="iris/xtion_sensor/camera_depth_optical_frame"/>
<param name="wait_for_transform" type="bool" value="true"/>
<param name="Odom/Force2D" type="string" value="true"/>
<remap from="rgb/image" to="/iris/xtion_sensor/iris/xtion_sensor_camera/rgb/image_raw"/>
<remap from="depth/image" to="/iris/xtion_sensor/iris/xtion_sensor_camera/depth/image_raw"/>
<remap from="rgb/camera_info" to="/iris/xtion_sensor/iris/xtion_sensor_camera/rgb/camera_info"/>
</node>
<node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen" args="$(arg args)">
<param name="database_path" type="string" value="$(arg database_path)"/>
<param name="frame_id" type="string" value="/iris/xtion_sensor/ground_truth/iris/xtion_sensor/ground_truth/odometry_sensor_link"/>
<param name="odom_frame_id" type="string" value="world"/>
<param name="subscribe_depth" type="bool" value="true"/>
<remap from="odom" to="/iris/ground_truth/odometry"/>
<remap from="rgb/image" to="/iris/xtion_sensor/iris/xtion_sensor_camera/rgb/image_raw"/>
<remap from="depth/image" to="/iris/xtion_sensor/iris/xtion_sensor_camera/depth/image_raw"/>
<remap from="rgb/camera_info" to="/iris/xtion_sensor/iris/xtion_sensor_camera/rgb/camera_info"/>
<remap from="rtabmap/get_map" to="/iris/get_map"/>
<param name="RGBD/LocalLoopDetectionSpace" type="string" value="false"/>
<param name="RGBD/OptimizeFromGraphEnd" type="string" value="false"/>
<param name="Kp/MaxDepth" type="string" value="8.5"/>
<param name="LccIcp/Type" type="string" value="1"/>
<param name="LccIcp2/CorrespondenceRatio" type="string" value="0.05"/>
<param name="LccBow/MinInliers" type="string" value="10"/>
<param name="LccBow/InlierDistance" type="string" value="0.1"/>
<param name="RGBD/AngularUpdate" type="string" value="0.1"/>
<param name="RGBD/LinearUpdate" type="string" value="0.1"/>
<param name="RGBD/LocalImmunizationRatio" type="string" value="0.50"/>
<param name="Rtabmap/TimeThr" type="string" value="700"/>
<param name="Mem/RehearsalSimilarity" type="string" value="0.30"/>
<!-- localization mode -->
<param if="$(arg localization)" name="Mem/IncrementalMemory" type="string" value="false"/>
<param unless="$(arg localization)" name="Mem/IncrementalMemory" type="string" value="true"/>
<param name="Mem/InitWMWithAllNodes" type="string" value="$(arg localization)"/>
</node>
<node if="$(arg rtabmapviz)" pkg="rtabmap_ros" type="rtabmapviz" name="rtabmapviz" args="-d $(find rtabmap_ros)/launch/config/rgbd_gui.ini" output="screen">
<param name="subscribe_depth" type="bool" value="true"/>
<param name="frame_id" type="string" value="/iris/xtion_sensor/ground_truth/iris/xtion_sensor/ground_truth/odometry_sensor_link"/>
<param name="wait_for_transform" type="bool" value="true"/>
<remap from="rgb/image" to="/iris/xtion_sensor/iris/xtion_sensor_camera/rgb/image_raw"/>
<remap from="depth/image" to="/iris/xtion_sensor/iris/xtion_sensor_camera/depth/image_raw"/>
<remap from="rgb/camera_info" to="/iris/xtion_sensor/iris/xtion_sensor_camera/rgb/camera_info"/>
</node>
<!-- Visualization RVIZ -->
<node if="$(arg rviz)" pkg="rviz" type="rviz" name="rviz" args="-d $(find aircraft_inspection)/rviz/rgbd_gazebo.rviz"/>
<node if="$(arg rviz)" pkg="nodelet" type="nodelet" name="points_xyzrgb" args="load rtabmap_ros/point_cloud_xyzrgb standalone_nodelet">
<remap from="rgb/image" to="/iris/xtion_sensor/iris/xtion_sensor_camera/rgb/image_raw"/>
<remap from="depth/image" to="/iris/xtion_sensor/iris/xtion_sensor_camera/depth/image_raw"/>
<remap from="rgb/camera_info" to="/iris/xtion_sensor/iris/xtion_sensor_camera/rgb/camera_info"/>
<remap from="cloud" to="voxel_cloud" />
<remap from="rtabmap/get_map" to="/iris/get_map"/>
<param name="decimation" type="double" value="2"/>
<param name="voxel_size" type="double" value="0.02"/>
<param name="RGBD/LocalImmunizationRatio" type="string" value="0.5"/>
</node>
</group>
Originally posted by Randa on ROS Answers with karma: 3 on 2016-02-14
Post score: 0
Answer:
Hi,
In your setup, the memory management of rtabmap is enabled:
<param name="Rtabmap/TimeThr" type="string" value="700"/>
That means that when the map update takes more than 700 ms, some of the oldest locations are transferred to rtabmap's long-term memory to keep the algorithm online, splitting the map so that the oldest part is not shown anymore (unless you come back to that area). However, the global map is still available at the end of the run: pause rtabmap first, then call the publish_map service (params: global, optimized, graphOnly) with the global parameter set to 1:
$rosservice call /rtabmap/pause
$rosservice call /rtabmap/publish_map 1 1 1
The graphOnly parameter can be set to 0 to make the rtabmap node republish all data (point clouds) on the mapData topic (if rtabmapviz or the MapCloud rviz plugin has already cached all the previous clouds, you can republish only the global graph).
To reconstruct the global map offline, you can open the database in rtabmap standalone application, and just click "Edit->Download all clouds".
Note on simulation
You said that there is no loop closure in your path, does that mean you assume perfect odometry? In that case (you need only to assemble the point clouds), you could even disable loop closure detection of rtabmap, thus saving some computational time:
<param name="Kp/WordsPerImage" type="string" value="-1"/>
However, if you plan to work on a real robot, you may have to plan your path with loop closures so that odometry errors can be corrected.
cheers
Originally posted by matlabbe with karma: 6409 on 2016-02-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Randa on 2016-02-17:
Thanks for your clarification and suggestion,
but I have a question about the "Rtabmap/TimeThr" parameter: I found that the default value of this parameter is 0 (meaning infinity), so is it better to increase the parameter value in my case or just use the default value?
Comment by matlabbe on 2016-02-17:
Setting it to 0 disables the memory management, so you will not experience parts of the map disappearing. However, the processing time will not be bounded, so there is no guarantee that updates will be done under 1 sec (default Rtabmap/DetectionRate) for long-term mapping. | {
"domain": "robotics.stackexchange",
"id": 23769,
"tags": "slam, navigation, 3dmapping, rtabmap, rtab"
} |
Is this question worded correctly? | Question: I'm trying to solve some questions in dynamics and I came across this problem. I've attempted to solve it first without looking at the answer but after several attempts, I've failed. I've looked at the answer and I found out two cases:
the question is poorly worded.
There is hidden details I couldn't extract from the question.
As you can see from the problem, the question is to determine the distance travelled by car A when they pass each other. Car A first travels with constant acceleration and then travels at constant speed. The author implicitly assumes that car A will pass car B while car A is travelling at constant speed. How can I, as a student, infer this from the question?
Answer: The following may be useful to consider.
The distance that car A travels during the accelerating portion is computed as $533.33$ ft. During that same amount of time, $13.33$ seconds, car B will have traveled $13.33 \times 60 = 799.8$ feet. Since cars A and B are 6000 feet apart at $t=0$, they will still be $6000-799.8-533.33=4666.87$ feet apart after $13.33$ seconds. Therefore, the conclusion must be that they pass each other while car A is in the constant-velocity portion.
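The arithmetic above can be verified in a few lines (car B's constant speed of 60 ft/s is implied by the $13.33 \times 60$ product in the answer):

```python
t_accel = 13.33   # duration of car A's accelerating phase, seconds (given)
d_A = 533.33      # distance A covers while accelerating, feet (given)
v_B = 60.0        # car B's constant speed, ft/s (inferred from 13.33 * 60)
gap0 = 6000.0     # initial separation, feet

d_B = v_B * t_accel            # 799.8 ft covered by B in the same time
remaining = gap0 - d_A - d_B   # separation left when A stops accelerating
print(d_B, remaining)

# remaining is still positive (~4666.87 ft), so the cars must meet
# later, during A's constant-speed phase.
```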
I hope this helps. | {
"domain": "physics.stackexchange",
"id": 73688,
"tags": "homework-and-exercises, kinematics"
} |
Is the total energy of a canonical ensemble system of $N$ particles, with single-particle energy levels given by $\epsilon_i$ fixed? | Question: Is the total energy of a canonical ensemble system of $N$ particles, with single-particle energy levels given by $\epsilon_i$, fixed?
We know the total energy of the system is given by : $$E=\sum_{i} n_i \epsilon_i$$
Here $n_i$ is the number of particles in the $\epsilon_i$ energy level.
However, we know the probability of a single particle having energy $\epsilon_j$ is given by :
$$P(\epsilon_j)=\frac{g_j e^{-\beta\epsilon_j}}{Z}$$
Here, $g_j$ is the degeneracy of that energy level, and $Z$ is the single particle partition function.
Moreover, we know that the probability of a single particle having energy $\epsilon_j$ is the number of particles in that energy level divided by the total number of particles - according to the definition of probability.
Hence, $$\frac{n_j}{N}=P(\epsilon_j)=\frac{g_j e^{-\beta\epsilon_j}}{Z}$$
This implies,
$$n_j=NP(\epsilon_j)=N\frac{g_j e^{-\beta\epsilon_j}}{Z}$$
Hence we can easily find $n_j$ for any $\epsilon_j$. So, if we know the number of particles in each of these energy levels, we can determine the exact total energy of the system $E$, in the first equation.
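As a concrete numerical illustration of this computation (the level scheme below is invented for the example):

```python
import math

def expected_occupancies(N, levels, beta):
    """n_j = N * g_j * exp(-beta * eps_j) / Z for levels given as
    (energy, degeneracy) pairs, with Z the single-particle partition sum."""
    Z = sum(g * math.exp(-beta * e) for e, g in levels)
    return [N * g * math.exp(-beta * e) / Z for e, g in levels]

levels = [(0.0, 1), (1.0, 2), (2.0, 1)]   # (eps_j, g_j), illustrative values
n = expected_occupancies(1000, levels, beta=1.0)
print(n)
print(sum(n))   # the n_j sum to N by construction
```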
However, this seems problematic. If we find out the total energy of the system, and the number of particles in each energy level, we are restricting this entire system to one particular microstate. The probability of obtaining this particular microstate is $1$. The probability of obtaining any other microstate must be $0$.
However, shouldn't every possible microstate of the system have some finite probability i.e. every possible value of total energy have some finite probability?
I've asked a couple of related questions, and the amazing answers to those questions suggest that $n_j$ is not the actual number of particles in the $\epsilon_j$ level. Rather, it is the expected number of particles in that energy level. However, many answers over different websites and comments to one of my previous answers disagree and claim that $n_j$ is the exact actual number of particles in that level indeed.
Can anyone shed some light on this and clear up my doubt?
Answer: No, the energy of a canonical ensemble is not fixed. The three classic ensembles are
Microcanonical: Fixed energy (think a gas in a perfectly insulated box): the system itself has been constrained to have a constant energy.
Canonical: Fixed temperature (think of a system immersed in an infinite, temperature-controlled water bath). Energy can fluctuate into and out of the system.
Grand Canonical: Fixed temperature and fixed chemical potential. Energy and particles can fluctuate into and out of the system. The only difference here is that the microstate space includes all possible occupancy numbers.
The confusion comes from the notation where people write, e.g., $E$ as shorthand for $\langle E\rangle$ in the canonical ensemble. However, it is a physically different situation -- if you have the capability to measure thermal fluctuations, e.g. by measuring the heat capacity, you will plainly see that the energy of a canonical ensemble actually fluctuates randomly about its expectation value. The law of large numbers suppresses these fluctuations, so they're usually not important. A perfectly insulated microcanonical ensemble by definition does not have any energy fluctuations.
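A toy simulation makes these fluctuations visible: hold $N$ independent two-level particles at fixed temperature and sample the total energy repeatedly (all parameters below are illustrative):

```python
import math, random, statistics

random.seed(1)
N, eps, beta = 1000, 1.0, 1.0
p = math.exp(-beta * eps) / (1.0 + math.exp(-beta * eps))  # P(particle excited)

# Draw 500 snapshots of the total energy; in the canonical ensemble it is
# a random variable, not a fixed number.
samples = [eps * sum(random.random() < p for _ in range(N)) for _ in range(500)]

print(statistics.mean(samples))    # close to N * eps * p, about 269 here
print(statistics.stdev(samples))   # nonzero: the total energy fluctuates
```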
In all three ensembles, there is arguably no sensible way to talk about the "number of particles in microstate $i$". The point of thermodynamics is that we know nothing about the system, we can only make statements about their broad statistical properties and the probability distribution of microstates.
The $n_j$ you look at in the canonical ensemble are expectation values: at any given moment there may well be no particles in that level. Now bear in mind that for very large systems, it is extremely unlikely that the true occupancy of a given energy level is far from its expectation value, which is why some sources may suggest that $n_j$ is the "actual" number of particles in level $j$. | {
"domain": "physics.stackexchange",
"id": 84330,
"tags": "thermodynamics, statistical-mechanics, probability, partition-function, quantum-statistics"
} |
PluginlibFactory: The plugin for class ‘grid_map_rviz_plugin/GridMap‘ failed to load | Question:
Hi
can someone help me with this error?
lucas@lucas-VirtualBox:~$ source /home/lucas/projects/imagine/build/devel/setup.bash
lucas@lucas-VirtualBox:~$ rviz
[ INFO] [1547197031.245982719]: rviz version 1.12.16
[ INFO] [1547197031.246442344]: compiled against Qt version 5.5.1
[ INFO] [1547197031.246692910]: compiled against OGRE version 1.9.0 (Ghadamon)
[ INFO] [1547197031.735517498]: Stereo is NOT SUPPORTED
[ INFO] [1547197031.736545820]: OpenGl version: 3 (GLSL 1.3).
[ERROR] [1547197038.107183504]: PluginlibFactory: The plugin for class 'grid_map_rviz_plugin/GridMap' failed to load. Error: Could not find library corresponding to plugin grid_map_rviz_plugin/GridMap. Make sure the plugin description XML file has the correct name of the library and that the library actually exists.
lucas@lucas-VirtualBox:~$
Originally posted by LucLucLuc on ROS Answers with karma: 3 on 2019-01-11
Post score: 0
Original comments
Comment by gvdhoorn on 2019-01-11:
Not without a verbatim copy-paste of the full error.
Comment by LucLucLuc on 2019-01-11:
I want to include an image, but it doesn't work and I don't know why. It shows ">5 points required to upload files". I tried with Chrome and Internet Explorer, and with jpg, png, and pdf files. Do you have any idea?
Comment by gvdhoorn on 2019-01-11:
There is absolutely no need for attaching images. Error messages are text and can be copy-pasted, verbatim, from the terminal window into your question. Please do so.
Comment by LucLucLuc on 2019-01-11:
Thanks for your fast answers. I followed your advice.
Answer:
Have you actually installed the wiki/grid_map_rviz_plugin package?
Edit: and seeing this:
lucas@lucas-VirtualBox:~$ source /home/lucas/projects/imagine/build/devel/setup.bash
can you tell us how you've setup your workspace?
Originally posted by gvdhoorn with karma: 86574 on 2019-01-11
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by LucLucLuc on 2019-01-11:
Oh nice, I actually installed the wiki/grid_map_rviz_plugin package - it works. But I want to create a package-independent project in C++...
Why do I have to install the package? Why can not cmake with catkin contain the necessary files?
Comment by gvdhoorn on 2019-01-11:
But I want to create a package-independent project in C++...
I don't understand what that means, but how do you expect things to work without having the plugin actually installed?
Comment by LucLucLuc on 2019-01-11:
Why do I have to install the package? Why can not cmake with catkin contain the necessary files?
Comment by gvdhoorn on 2019-01-11:
Why can not cmake with catkin contain the necessary files?
?
CMake ~= Catkin are build tools. They don't contain any code other than what is needed to be able to build software.
You're sort-of asking: "why does Visual Studio not contain/provide MS Word?" | {
"domain": "robotics.stackexchange",
"id": 32259,
"tags": "ros, rviz, ros-kinetic, ubuntu, ubuntu-xenial"
} |
Why do the object detection networks produce multiple anchor boxes per location? | Question: In various neural network detection pipelines, the detection works as follows:
One processes the input image through the pretrained backbone
Some additional convolutional layers
The detection head, where each pixel on the given feature map predicts the following:
Offset from the center of the cell ($\sigma(t_x), \sigma(t_y)$ on the image)
Height and width of the bounding boxes $b_w, b_h$
Objectness scores (probability of object presence)
Class probabilities
Usually, detection heads produce not a single box, but multiple.
The first version of YOLO outputs 2 boxes per location on a feature map of size $7 \times 7$
Faster R-CNN outputs 9 boxes per location
YOLO v3 outputs 9 boxes per pixel from the predefined anchors: (10×13), (16×30), (33×23), (30×61), (62×45), (59×119), (116×90), (156×198), (373×326)
These anchors give the priors for the bounding boxes, but with the help of $\sigma(t_x), \sigma(t_y), b_w, b_h$ one can get any possible bounding box on the image for some pixel on the feature map.
Therefore, the network will produce plenty of redundant boxes, and a certain procedure - non-maximum suppression (NMS) - has to be run over the bounding box predictions to select only the best.
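That suppression step can be sketched in a few lines; this is a generic greedy NMS with an arbitrary IoU threshold, not any particular paper's variant:

```python
def iou(a, b):
    """Intersection-over-union of axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop
    any remaining box that overlaps it by more than `thresh`."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))   # the two overlapping boxes collapse into one
```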
Or is the purpose of these anchors to start from a prior, slightly reshape and shift the bounding box, and then compare with the ground truth?
Is it the case that, if one used only a single bounding box for detection, it would be hard to train the network to rescale the initial box by, say, a factor of 10 and to produce some specific aspect ratio?
Answer: Yes, theoretically it is possible to learn the offsets to get any possible bounding box from only one anchor box. However, it is hard to learn such dramatic shifts and changes. Learning only small offsets from the prior is easier and tends to converge better.
In specific applications, however, one might already know the typical size and aspect ratio of the objects, and that these are very similar for all of them. In such cases, one box per anchor can be enough to learn well.
Note that many redundant boxes are predicted anyway, even if only one anchor box is used per location, because there are usually many anchor locations distributed in a grid based fashion over the image. Therefore, NMS is a necessary step anyway and does not depend on having multiple boxes per anchor. | {
"domain": "ai.stackexchange",
"id": 3046,
"tags": "computer-vision, object-detection, bounding-box"
} |
What phrases describe collisions with coefficients of restitution less than zero or greater than one? | Question: The coefficient of restitution describes the elasticity of a collision:
1 = perfectly elastic, kinetic energy is conserved
0 = perfectly inelastic, the objects move at the same speed post impact
However, COR values > 1 and < 0 are also physically meaningful:
COR > 1, a collision where the impact adds energy (e.g. an explosion)
COR < 0, a partial collision where the objects partially pass through each other. Say, like this:
Are there colloquial terms that characterize these types of collisions? Perhaps explosion does work well enough for the first case, but I can't think of anything that adequately describes the apple case.
(If inelastic is defined as COR != 1 it's probably broad enough but likely not very illuminating in practice.)
Answer: For COR < 0 you can say perforating collision (or piercing or even crossing).
For COR > 1 one could use exergonic collision, but maybe that causes more confusion. This is taken from chemistry where there are exergonic reactions.
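For what it's worth, the standard 1-D collision formulas (momentum conservation plus the restitution relation $v_1' - v_2' = -e\,(v_1 - v_2)$) behave sensibly in all of these regimes; a quick sketch with made-up numbers:

```python
def post_impact(m1, v1, m2, v2, e):
    """1-D collision: momentum conservation + v1' - v2' = -e * (v1 - v2)."""
    p = m1 * v1 + m2 * v2
    v1p = (p - m2 * e * (v1 - v2)) / (m1 + m2)
    v2p = (p + m1 * e * (v1 - v2)) / (m1 + m2)
    return v1p, v2p

# Equal masses, v1 = 1, v2 = 0:
print(post_impact(1, 1, 1, 0, 1.0))    # e = 1: velocities swap (elastic)
print(post_impact(1, 1, 1, 0, 0.0))    # e = 0: common final velocity
print(post_impact(1, 1, 1, 0, -0.5))   # e < 0: body 1 still faster, i.e. pass-through
print(post_impact(1, 1, 1, 0, 1.5))    # e > 1: separation speed exceeds approach speed
```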
My two cents. | {
"domain": "physics.stackexchange",
"id": 3718,
"tags": "classical-mechanics, collision, terminology"
} |
Match blood types in C | Question: I have written a solution to the blood-type matching problem, as described at https://projectlovelace.net/problems/blood-types/. The problem is to determine whether a given recipient (in this case, argv[1]) will find a match for a blood transfusion in an array of available donors (argv + 2).
Input blood type: B+
Input list of available blood types: A- B+ AB+ O+ B+ B-
Output: match
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <err.h>
typedef struct {
enum { O, A, B, AB } abo;
enum { P, M } rh;
} Blood;
const int abo[4][4] = {
{ O, O, O, O }, // O
{ O, A, O, A }, // A // *
{ O, B, O, B }, // B // *
{ O, A, B, AB }, // AB
};
const int rh[2][2] = {
{ P, M }, // P
{ M, M }, // M // *
};
Blood
parse(char *s){
char rh0 = s[strlen(s)-1];
char *abo0 = strdup(s);
abo0[strlen(s)-1] = '\0';
Blood b = {
!strncmp(abo0, "O", 1) ? O
: !strncmp(abo0, "A", 1) ? A
: !strncmp(abo0, "B", 1) ? B
: !strncmp(abo0, "AB", 2) ? AB
: -1,
rh0 == '+' ? P
: rh0 == '-' ? M
: -1,
};
return b;
}
int
main(int argc, char *argv[]) {
if(argc < 3)
err(1, "no arguments given\n");
int n = argc - 2;
char *a0 = argv[1];
char **as = argv + 2;
Blood b0 = parse(a0);
Blood *bs = malloc(sizeof(Blood) * n);
for(int i = 0; i < n; i++)
bs[i] = parse(as[i]);
for(int i = 0; i < 4; i++) {
for(int j = 0; j < 2; j++) {
for(int k = 0; k < n; k++) {
if(abo[b0.abo][i] == bs[k].abo && rh[b0.rh][j] == bs[k].rh) {
printf("match\n");
return 0;
}
}
}
}
printf("no match\n");
return 1;
}
which works correctly in my tests, but uses some hacks. Specifically, lines marked with // * have repeated values of enums, since I couldn't find a way to make loop sizes conditional on blood types.
What would be a better way, without forcing the blood type enums to be the same size? And more generally, is there a simpler version of the blood-type matching algorithm?
Answer: Each blood factor can be present or not present. The blood types can be bit-encoded using one bit for the A factor, one bit for the B factor, and one bit for the Rh factor.
enum { A=1, B=2, Rh=4 };
You would parse the blood type "AB+" as A + B + Rh == 7, and "O-" as 0 because it does not contain any A factor, B factor, or Rh factor.
Survival requires the donor's blood not contain any factors missing in the recipient's blood type. Complement the recipient's blood type to get the bad factors, and use a bit-wise and operation to test that all of the factors are not present in the donor's blood.
bool donor_matches = (donor & ~recipient) == 0;
If you want to leave you enum's as encoding the 4 blood types, and the Rh factor separately, you can get to a similar coding.
Note that:
enum { O, A, B, AB } abo;
will define O=0, A=1, B=2, AB=3, so you have a similar bit assignment as above. The Rh factor on the other hand would be better to reverse the order:
enum { M, P } rh;
so that M=0, P=1. Now you could express the match condition:
bool donor_match = (donor_abo & ~recipient_abo) == 0 && (donor_rh & ~recipient_rh) == 0;
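The same bit trick is easy to prototype in a higher-level language first; a sketch in Python (the lookup table and names are mine, not from the original code):

```python
A, B, Rh = 1, 2, 4   # one bit per blood factor

CODE = {"O-": 0,      "O+": Rh,
        "A-": A,      "A+": A | Rh,
        "B-": B,      "B+": B | Rh,
        "AB-": A | B, "AB+": A | B | Rh}

def compatible(donor, recipient):
    """Safe iff the donor introduces no factor the recipient lacks."""
    return (CODE[donor] & ~CODE[recipient]) == 0

print(compatible("O-", "AB+"))   # True: O- is the universal donor
print(compatible("B+", "B-"))    # False: Rh+ blood can't go to an Rh- recipient
```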
Finally, if you want to leave the rh enum as you have originally defined it:
enum { P, M } rh;
then note that donor_rh >= recipient_rh would express that the rh factors match, or the donor has a less-restrictive (numerically higher) rh factor. | {
"domain": "codereview.stackexchange",
"id": 36687,
"tags": "c"
} |
Regular language such that L(r) = L(r₁) L(r) ∪ L(r₂) | Question: Prove or disprove the following statement: for arbitrary regular expressions $r_1$ and $r_2$ over an
alphabet $\Sigma$ such that $\epsilon$ belongs to $L(r_1)$, there exists a regular expression $r$ over $\Sigma$ such that $r = r_1r + r_2$.
My Solution: Let us consider $\Sigma =\{a\}$. $L_1$ is the language of even-length strings, so $r_1 =(aa)^*$, and $L_2$ is the language of odd-length strings, so $r_2 = a(aa)^*$. $r$ is either $r_2$, which is possible, or $rr_1$. Now if we keep replacing $r$ with $rr_1$ in the expression, we will end up with an infinitely long string, because $r$ cannot be $\epsilon$ (that's what I think). So in my opinion this language is not regular. But I am not sure if I am thinking correctly.
Answer: Take $r = (r_1 + r_2)^*$. We have to prove
$$(r_1 + r_2)^* = r_1(r_1 + r_2)^* + r_2$$
First note that both $(r_1 + r_2)^*$ and $r_1(r_1 + r_2)^* + r_2$ contain $\epsilon$ since we are given that $\epsilon \in L(r_1)$, so we will consider a nonempty string in our proof.
Let's first show that $(r_1 + r_2)^*\subseteq r_1(r_1 + r_2)^* + r_2$. But it is trivial since we know that $\epsilon \in L(r_1)$ and so
$$r_1(r_1 + r_2)^* + r_2 = (r_1 + r_2)^* + r_1'(r_1 + r_2)^* + r_2, \text { where } r_1 = r_1' + \epsilon$$
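As a sanity check, the identity can also be verified mechanically on a concrete pair; the patterns below are illustrative choices with $\epsilon \in L(r_1)$, not part of the question:

```python
import re
from itertools import product

r1, r2 = "(?:ab)*", "b"          # epsilon is in L(r1)
r = f"(?:{r1}|{r2})*"            # the claimed solution r = (r1 + r2)*
lhs = r
rhs = f"(?:{r1}{r})|(?:{r2})"    # r1 r + r2

# Compare membership on every string over {a, b} up to length 6.
for n in range(7):
    for chars in product("ab", repeat=n):
        s = "".join(chars)
        assert bool(re.fullmatch(lhs, s)) == bool(re.fullmatch(rhs, s))
print("L(r) = L(r1)L(r) + L(r2) holds on all tested strings")
```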
Now we have to show that $r_1(r_1 + r_2)^* + r_2 \subseteq (r_1 + r_2)^*$. Assume a nonempty string $s \in r_1(r_1 + r_2)^* + r_2$. Then either $s \in r_2$ or $s \in r_1(r_1+r_2)^m$ for some nonnegative integer $m$. In the first case since $r_2 \subseteq (r_1+r_2)^*$, we are done, so assume $s \in r_1(r_1+r_2)^m$. But we know that $(r_1+r_2)^{m+1} \subseteq (r_1+r_2)^*$. Thus
$$(r_1+r_2)^{m+1} = (r_1+r_2)(r_1+r_2)^m = r_1(r_1+r_2)^m + r_2(r_1+r_2)^m$$
meaning $r_1(r_1+r_2)^m \subseteq (r_1+r_2)^*$. | {
"domain": "cs.stackexchange",
"id": 9975,
"tags": "formal-languages, regular-languages"
} |
How to find resulting intensity in 3 beam interference? | Question: Suppose we have three incident light beams coming from slits S¹, S², S³ with some initial phase differences $Φ¹²,Φ²³,Φ³¹$ between them interfering at a point and we want to calculate resulting intensity.
We proceed by finding the resultant of beam 1 and beam 2, and then finding the resultant of $I¹²$ and $I³$.
Now I am unable to figure out what phase difference to use in the second step.
I think we can't use $Φ¹²$, because it is the phase difference between 1 and 2, not between the resultant and 3. The same goes for the others. I don't even know how to calculate the phase difference between the resultant and 3, because the resultant beam does not come from a single slit; it is the net effect of 1 and 2.
To understand the question more clearly, see this.
Here the P.D. between 1&2 and 2&3 can be easily found to be $2π/3$ and $2π$ respectively. Firstly, 2 and 3 give $I¹²=4I$ at $P$, and for calculating the net intensity we need the phase difference between this $I¹²$ and $I$ (which is basically $I³$)
Answer: We can add sinusoidal functions very easily by using a phasor diagram.
Plot all the intensities on the phasor diagram by keeping the phase difference in mind.
$S_1P - S_2P = \frac{\lambda}{6} \implies \phi_{12} = \pi/3$
$S_1P - S_3P = \frac{2\lambda}{3} \implies \phi_{13} = 4\pi/3$
Like this: (Taking the initial phase of S3 as reference)
After making the phasor diagram, calculate the resultant, treating them as vectors. In this case as rays coming from S2 and S3 are out of phase, they are simply being cancelled and the net intensity at P is due to S1 only and is equal to $I$.
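That cancellation can be checked numerically by treating each beam as a unit-amplitude complex phasor (a sketch; equal amplitudes are assumed, and the phases are the $\phi_{12}$ and $\phi_{13}$ computed above):

```python
import cmath

# Unit-amplitude phasors for S1, S2, S3 with the phase differences above
phases = [0.0, cmath.pi / 3, 4 * cmath.pi / 3]
resultant = sum(cmath.exp(1j * p) for p in phases)

# Intensity goes as the squared magnitude of the resultant amplitude
intensity = abs(resultant) ** 2
print(intensity)  # ≈ 1.0, i.e. the intensity I of S1 alone
```

The S2 and S3 phasors differ by $\pi$ and cancel pairwise, leaving only the S1 contribution, exactly as the phasor diagram shows.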
However, if you still want to know what will be the phase difference between $I_{12}$ and ${I_3}$, then calculate the resultant of S1 and S2 on phasor diagram keeping S3 intact, and then find the angle between resultant and S3, like this:
Calculate $\theta$ and the phase difference will be given by $\frac{\pi}{2} + \theta$ | {
"domain": "physics.stackexchange",
"id": 75338,
"tags": "optics, waves, interference"
} |
Aperture/image circle in a DIY digital microscope? | Question: I have just taken apart a broken USB microscope, and recognized that there is actually no magic going on inside.
So I thought how about building one of my own. The idea is to buy a regular (possibly used) optical microscope's objective lens and a USB webcam. All still low to mid cost, of course. I think I have learned that many objective lenses are optimized for a tube length of 160mm, so if I get one of those, I only need to remove the webcam's lens and put the bare CMOS sensor 160mm apart from the microscope lens (of course, in a pitch black tube with the suitable thread etc.).
But what about the aperture/sensor dimensions? The typical sensors I have seen are about 5mm wide. Will that work, or will only part of the CMOS sensor be illuminated? If the sensor only covers part of the image, that wouldn't be a major problem, I guess, because that would mean additional magnification for free...
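A quick back-of-envelope for how much of the specimen a 5 mm sensor would see. This assumes a finite-conjugate objective projecting its nominal magnification onto the sensor plane, and the listed magnifications are just illustrative, not from any datasheet:

```python
# Object-plane field width covered by the sensor, for a few
# hypothetical nominal objective magnifications
sensor_width_mm = 5.0
for magnification in (4, 10, 40):
    field_mm = sensor_width_mm / magnification
    print(f"{magnification}x objective -> {field_mm:.3f} mm of specimen")
```

So even if the sensor is smaller than the full image circle, it simply crops the field of view rather than breaking anything.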
Or is the aperture of the objective a parameter I need to consider before buying? Or is there something like a "standard aperture"?
Answer: I don't think the objective will limit you. The focal plane dimensions are typically limited by the eyepiece or sensor. In this table, they are all much larger than 5mm. | {
"domain": "physics.stackexchange",
"id": 94498,
"tags": "optics, microscopy"
} |
Expression for fugacity coefficient derived from a pressure-explicit EOS | Question: In the book, Thermodynamics for Process Simulation, the authors propose to derive an expression for the fugacity coefficient from a pressure-explicit equation of state, as an example among many others.
Assuming, $P = P(T, v)$ or $z =\frac{Pv}{RT} = z(T, v)$, the calculations lead to,
$$\left(\frac{\partial \ln \phi}{\partial v}\right)_T = \left(\frac{\partial z}{\partial v}\right)_T-\left(\frac{\partial \ln z}{\partial v}\right)_T + \left(\frac{1}{v}-\frac{P}{RT}\right) \tag1$$
where $\phi$ is the fugacity coefficient.
What I don't understand is, when they integrate, they get,
$$\ln \phi = z - 1 - \ln z + \frac{1}{RT}\int_{\infty}^v \left(\frac{RT}{v}-P\right)\mathrm{d}v \tag2$$
Can someone explain to me how they find $z-1-\ln z$ by integrating the first two terms of $(1)$ between $\infty$ and $v$?
When integrating the first two terms, it leads to $$\left[z - \ln z \right]_{\infty}^v$$ which I understand, but then I don't get the final result.
I suppose it's linked to the values of $z$ when the volume is very large and when the volume is "$v$", but it doesn't make sense to me.
I would have assumed to get $$\left[z - \ln z \right]_{\infty}^v = v - \ln v - \left[z(v^{\infty}) - \ln z(v^{\infty}) \right]$$ and nothing else.
But I am missing something probably obvious.
Thank you in advance for your help!
Answer: Mindful of the guideline "don't give answers in comments", I've converted my comment to an answer, trivial though it is.
When $v\rightarrow \infty$ you get the ideal gas eos, so $z\rightarrow 1$. This gives you the result, i.e. $1$ at the lower limit (subtracted) and $z-\ln z$ at the upper one.
When you write down the integrated form, $z-\ln z$, you need to remember that it means that you are evaluating the function $z(v)$ at the two limits $v=\infty$ and $v=v$, and using those values in that expression; you should not be trying to set $z=v$ at the limits, which seems to be (partly) what you have written at the end of your question.
Edit following OP comment.
More details. At constant $T$, $z=z(v)$ is a function of molar volume $v$, determined by the equation of state $P(v)$ at the given value of $T$. This equation is unknown, but we can be sure that, in the ideal gas limit $v\rightarrow\infty$, $z(v)\rightarrow 1$. Integrating both sides of the equation from the ideal gas limit to the desired volume $v$, and using $v'$ as the integration variable, gives eqn (2) of the question in its full form:
$$
\left| \ln \phi(v')\right|_{v'=\infty}^{v'=v}
= \left| z(v') - \ln z(v') \right|_{v'=\infty}^{v'=v}
+ \frac{1}{RT}\int_{\infty}^v \left(\frac{RT}{v'}-P(v')\right)\mathrm{d}v'
$$
I think it's clearer to distinguish between the integration variable $v'$ and the "upper" limit of integration $v$, but many people would be happy just to use $v$ instead of $v'$. Anyway, on the left, we know that, in the ideal gas limit, the fugacity coefficient $\phi(\infty)=1$, so $\ln\phi(\infty)=0$ and we are just left with $\ln\phi(v)$. On the right, similarly, we substitute in the upper and lower limits for $v'$. We know $z(\infty)=1$, so the function being evaluated is $z(\infty)-\ln z(\infty)=1$ at the lower limit, and $z(v)-\ln z(v)$ at the upper limit.
So the final answer is
$$
\ln \phi(v)
= z(v) - \ln z(v) - 1
+ \frac{1}{RT}\int_{\infty}^v \left(\frac{RT}{v'}-P(v')\right)\mathrm{d}v'
$$
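As a numerical sanity check of the integral term, here is a sketch using illustrative van der Waals constants (roughly those of nitrogen; the derivation above leaves the equation of state completely general, so these numbers are only an assumption for demonstration):

```python
import math

R, T = 8.314, 300.0          # SI units
a, b = 0.1382, 3.19e-5       # illustrative van der Waals constants
v = 1.0e-3                   # molar volume, m^3/mol

def P(vp):
    return R * T / (vp - b) - a / vp**2

def integrand(vp):
    return R * T / vp - P(vp)

# Trapezoid rule on a log-spaced grid, truncating "infinity" at a large volume
N, vmax = 20000, 1.0e4
xs = [v * (vmax / v) ** (i / N) for i in range(N + 1)]
integral = -sum((integrand(xs[i]) + integrand(xs[i + 1])) / 2 * (xs[i + 1] - xs[i])
                for i in range(N))   # minus sign: the integral runs from infinity down to v

# Closed form of the same integral for the van der Waals EOS
closed = R * T * math.log(v / (v - b)) - a / v
print(integral, closed)  # should agree closely
```

Evaluating the closed form at the limits, as described above, then gives $\ln\phi(v) = z(v) - \ln z(v) - 1 + \text{(integral)}/RT$ directly.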
An important point is that the integration variable is the molar volume $v'$ (or $v$ if you prefer), not $z$. In evaluating the result, the integration limits $v$ and $\infty$ are substituted for $v'$, the argument of the function being evaluated. It is incorrect to set $z=v$, or $z=\infty$. (This should be even more clear in this case, since $z$ is a dimensionless quantity, whereas $v$ is not). | {
"domain": "chemistry.stackexchange",
"id": 10687,
"tags": "thermodynamics"
} |
Retrieving NCBI Taxa IDs from refseq or GenBank assembly accession | Question: I have about 10,000 genome files all named by either a RefSeq or GenBank accession number. Do you know if it's possible to convert these numbers to the corresponding NCBI taxon ID or species?
for example:
GCA_000005845.2 to 79781
In the case of E.coli
Edit:
The file names look like this:
GCF_000337855.1_ASM33785v1_protein.faa.gz
GCF_000091125.1_ASM9112v1_protein.faa.gz
GCF_000184535.1_ASM18453v1_protein.faa.gz
My operating system is Ubuntu 16.04
Answer: Turns out I'd already written some code that did it, human memory is a funny thing.
This uses Biopython to split the fasta description field at the bracketed organism name, which is where the species is. It may not work for all NCBI files, but seems to work on most.
from Bio import SeqIO

for record in SeqIO.parse(FILE, "fasta"):
    species_name = record.description.split('[', 1)[1].split(']', 1)[0] | {
"domain": "bioinformatics.stackexchange",
"id": 2522,
"tags": "python, phylogenetics"
} |
Derivation of Aharonov Bohm effect for Quasiparticles | Question: I've noticed the following:
Observation: Central results in condensed matter physics rely on Aharonov Bohm-type arguments involving quasiparticles with fractional charge.
However, I can't see how these arguments are rigorous. In order for them to work, we would need a partition function of the form
$$Z[x,A]=\int Dx\,\, e^{\frac{i}{\hbar}S[x,A]},~~~~~~S[x,A]=\int_C(qA+\text{terms without $A$})$$
There are two issues with demonstrating the above: for one, this is describing a quantum-mechanical system, whereas quasiparticles are naturally mentioned in a field-theoretic context. So we'd need to find a way to reduce a field-theoretic partition function to a particle-partition function:
$$Z[\Psi,A]=\int D\Psi\,\, e^{\frac{i}{\hbar}S[\Psi,A]},~~~\to~~~Z[x,A]=\int Dx\,\, e^{\frac{i}{\hbar}S[x,A]}$$
The second issue would lie in finding the field-theoretic Lagrangian for a quasiparticle, but it seems like that is already known (i.e., for the fractionally charged anyons of the FQHE, just write down the Chern-Simons lagrangian). However, the first step is missing, and in a glaring way.
Last Thing:
An even better alternative, if possible, would be to derive everything in the Hamiltonian picture, taking advantage of the adiabatic theorem. The Hamiltonian for the quasiparticles would then take the form
$$H=\sum_q E_q \gamma^\dagger_q \gamma_q$$
Does anyone else find it awfully surprising that no one has tried to prove the Aharonov Bohm effect in this context?
Answer: I'm not sure what you were really asking about in the first question, so I'll just focus on the second.
The reason that people do not go for such a "second quantized" description is that these $\gamma_q$ operators must necessarily be highly non-local for fractionalized quasiparticles. And there is typically no controlled way to "derive" such a second-quantized Hamiltonian since the system is usually strongly interacting. However, it does not mean that the fractional braiding phase has not been derived using the adiabatic theorem. In fact, it was first shown in this way. Take the $\nu=1/m$ Laughlin state as an example. First, following Laughlin, we have the (first-quantized) wavefunction for quasiholes, as a function of the complex coordinate of the quasihole. This was proposed by Laughlin. Then one can simply go and calculate the braiding phase of the quasiholes, as an adiabatic Berry phase. Under certain assumptions about the plasma screening, which have to be checked numerically, one can establish the AB phase as well as the braiding statistics. The original derivation was done in http://journals.aps.org/prl/pdf/10.1103/PhysRevLett.53.722.
Now maybe your question is, how do we know that quasihole wavefunction written down by Laughlin is a good approximation to the actual quasiparticle? This is justified in the following way: first, one should ask why Laughlin's wavefunction for the ground state is a good approximation of the actual ground state. People checked that numerically, of course, for a small number of electrons (~ 13 electrons for 1/3, actually not that small). However a better argument is that the Laughlin wavefunction is the exact ground state of a Hamiltonian in the lowest Landau level with certain very short-ranged interaction, instead of Coulomb, and the quasihole wavefunction also represents eigenstates of this ideal Hamiltonian. How do we know then the ideal Hamiltonian is in the same "phase" as the Coulomb one? At this point numerics is involved to show that indeed these two are in the same phase. It's not rigorous but that's the best one can do. So in a sense, Laughlin's ingenious "guess" of the wavefunction is essential since we do not know, and probably never will, how to solve the problem of many-electrons with Coulomb interactions in a Landau level analytically. | {
"domain": "physics.stackexchange",
"id": 26851,
"tags": "gauge-theory, path-integral, quasiparticles"
} |
What is the difference between SOLiD, 454, and Illumina next-gen sequencing? | Question: I've started teaching myself about next-generation sequencing in preparation for a new job, and I'm wondering what the main differences are between the 454, SOLiD, and Illumina/Solexa machines, in terms of sample/library preparation and chemistry. How difficult is it for a protein veteran but next-gen newbie to get high-quality useful reads? I'm mainly going to be looking at antibody diversity, but consider this in general terms for any next-gen project.
I understand the basic theory of fluorescent chain-termination sequencing, but it's been a number of years since I last did it, and my career is now taking me back into the DNA realm.
Answer: Here is a short summary of the sequencing technologies you listed. Illumina is the most frequently used one.
Roche/454 FLX Pyrosequencer technology is based on the pyrosequencing method, which utilizes the enzymes ATP sulfurylase and luciferase. After the incorporation of each nucleotide by DNA polymerase, a pyrophosphate is released, which further takes part in downstream light-producing reactions. The amount of light is proportional to the incorporated number of nucleotides. The DNA is fragmented and adapters are ligated at both ends. The fragments are mixed with agarose beads, which carry adapters complementary to the library adapters, and thus each bead is associated with a unique DNA fragment. The beads and DNA fragments are isolated in individual micelles, where emulsion PCR takes place and millions of copies of the single fragments are amplified onto the surface of each bead. Each bead is placed in a well of a picotiter plate (PTP), as the wells have dimensions such that only one bead can fit per well. Enzymes are added to the beads and pure nucleotide solutions are added with an immediate imaging step. On one side of the array a CCD (charge-coupled device) camera records the light emitted from each bead. The first four nucleotides (TCGA) are the same as the start of the adapter, which allows for the emitted light to be calibrated according to the type of nucleotide added. The major disadvantage of this method is the misinterpretation of homopolymers (consecutive nucleotides, e.g. AAA or CCC). Such areas are prone to insertions or deletions, as the length of the stretch can be inferred only by the intensity of the light emitted. On the other hand, substitution errors are very unlikely. The 454 FLX instrument generates ~400 000 reads per instrument-run, as the reads are 200-400bp. The greatest advantage of this platform is the read length, which is the longest of all second generation technologies.
Although sequencing on 454 platform is more expensive than sequencing on Illumina platform (40USD per Mega base versus 2USD per Mega base), it could still be the best choice for de novo assembly or metagenomics applications. (Mardis, E., 2008, Shendure, J. and Ji, H., 2008).
The DNA to be sequenced is fragmented into about 200 base strands. Adapters are ligated onto the ends of the fragments and one of these adapters is hybridized on a flow cell. Illumina essentially utilizes a unique "bridged" amplification reaction that occurs on the surface of the flow cell. A localized PCR reaction is performed, and each of the hybridized pieces of DNA gets locally amplified to generate clusters, each consisting of copies of the exact same molecule. Subsequently, the DNA sequence is determined by adding a primer on one of the ends of the molecule. A mixture of modified nucleotides is added. Each one of them carries a base-specific fluorescent label attached to it. Also, each one has the 3' OH group blocked, which ensures the incorporation of one nucleotide at a time. The flow cell is placed under a microscope and, when light is shone on it, the fluorescence emission shows which base was incorporated in each of those clusters. The Illumina read length is approximately 35 bases, but over a billion reads are generated at a time. The major error type is substitution, rather than deletion or insertion (Mardis, E., 2008, Shendure, J. and Ji, H., 2008).
The SOLiD platform uses a fragmented DNA library, which is flanked by ligated adapters. The fragments are attached to small paramagnetic beads and emulsion PCR is performed to amplify the fragments. In contrast to the other sequencing platforms, sequencing is performed by utilizing DNA ligase, rather than a polymerase. Each cycle of sequencing involves the ligation of a degenerate population of fluorescently labeled universal octamer primers. A specific position of the octamer (e.g., base 5) carries a fluorescent label. After ligation, images are acquired in four channels, followed by cleavage of the octamer between positions 5 and 6, removing the fluorescent label. After several rounds of octamer ligation, which enable sequencing of every 5th base (e.g., bases 5, 10, 15, and 20), the extended primer is denatured. Different primers can be used to examine the previous or next positions (e.g., base 3 or 6). The platform can use primers in which the fluorescent label is correlated with two adjacent bases. This approach examines each base twice per cycle, which decreases the error rate. SOLiD read lengths are 25-35 bp, and each sequencing run yields 2–4 Gb of DNA sequence data (Mardis, E., 2008, Shendure, J. and Ji, H., 2008).
References:
Mardis, E., Next generation DNA sequencing methods. Ann. Rev. Genomics Hum. Genet, 9: 387-402 (2008). DOI 10.1146/annurev.genom.9.081307.164359
Shendure, J. and Ji, H., Next-generation DNA sequencing. Nat. Biotechnol., 26: 1135-1145 (2008). DOI 10.1038/nbt1486 | {
"domain": "biology.stackexchange",
"id": 821,
"tags": "molecular-biology, genomics, dna-sequencing"
} |
How to migrate tf data types such as Quaternion, Vector3, Transform to tf2? | Question:
There is not much mentioned about tf2 datatypes in the tf2_ros documentation at http://docs.ros.org/latest/api/tf2_ros/html/c++/.
Comparing with http://docs.ros.org/latest/api/tf/html/c++/, no data types are mentioned. My C++ code has plenty of tf datatypes such as
tf::Quaternion q(msg->pose[index].orientation.x,
msg->pose[index].orientation.y,
msg->pose[index].orientation.z,
msg->pose[index].orientation.w);
tf::Transform transform;
How will I migrate them to tf2_ros?
Originally posted by cybodroid on ROS Answers with karma: 234 on 2018-06-21
Post score: 0
Answer:
Please see the Supported Datatypes section of the tf2 wiki.
The tf::Quaternion is a fork of the Bullet btQuaternion which will be your easiest conversion.
Originally posted by tfoote with karma: 58457 on 2018-06-21
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 31069,
"tags": "ros, ros-melodic, tf2-ros, tf2, transform"
} |
Are $\log_{10}(x)$ and $\log_2(x)$ in the same big-O class of functions? | Question: Are $\log_{10}(x)$ and $\log_{2}(x)$ in the same big-O class of functions? In other words, can one say that $\log_{10}(x)=O(\log x)$ and $\log_{2}(x)=O(\log x)$?
Answer: Yes. Because they differ only by a constant factor. Remember high school math:
$\log_2 x = \dfrac{\log_{10} x}{\log_{10}2}$. | {
"domain": "cs.stackexchange",
"id": 551,
"tags": "landau-notation"
} |
Why are there dust particles on TV screens? | Question: My professor gave us the following reason:
The screen is positively charged. When dust particles fly near it, the positive charges in the screen induce a charge in the dust particle, pulling the negative charges closer to it, and pushing the positive charges away. The screen then attracts the negative side, pulling the dust particle to it.
But why does the screen attract the whole dust particle? Its only attracting the negative charges so why does the "whole" dust particle go to the screen? Why doesn't the screen just rip out the electrons, and repel away the rest of the particle?
Answer: Short answer: finite size of dust particles and the inverse square law.
The side of the dust particle towards the screen is negatively charged and the other side is positively charged. Now, due to the inverse square law, the force of attraction is more than the repulsion, as the negative side is closer, and the particle experiences a net force towards the screen.
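A toy numerical illustration of this imbalance, modelling the screen charge as a point-like patch and the polarized grain as two opposite charges a small distance apart (all numbers are hypothetical):

```python
k = 8.99e9          # Coulomb constant, N*m^2/C^2
Q = 1e-9            # hypothetical screen patch charge, C
q = 1e-12           # hypothetical induced charge on each face of the grain, C
r = 1e-3            # distance from patch to the near (negative) face, m
d = 1e-5            # grain thickness, m

attraction = k * Q * q / r**2          # near face, pulled in
repulsion = k * Q * q / (r + d) ** 2   # far face, pushed away
print(attraction - repulsion)  # positive: net force toward the screen
```

The net force is small but strictly positive whenever the grain has finite thickness, which is the whole point of the short answer above.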
Why don't the electrons travel to the screen?
Because they are still tightly held by the atoms of the dust particle. The TV screen only induced a slight polarization of the dust particle. The electrons have not travelled to the negative side (unlike in conductors); rather the atoms themselves undergo slight polarization. (Similar to what happens in dielectrics.)
"domain": "physics.stackexchange",
"id": 20456,
"tags": "electromagnetism, electrostatics"
} |
Where does turtlebot3_core define odom and base_footprint names? | Question:
Hello! I am attempting to run slam gmapping for 2 robots. I am having a problem where the transform trees are conjoining instead of there being 2 separate trees. I have already included namespaces and tf_prefixes in all of the launch files. odom and base_footprint are both generated after launching turtlebot3_core, and I cannot for the life of me find where these names are defined. Where can I change the names of these generated tf nodes? Or is there another way I can manage this?
Originally posted by AmateurHour on ROS Answers with karma: 95 on 2018-08-01
Post score: 1
Answer:
Hi :)
TurtleBot3 was updated last month.
For now it provides simple way for loading multiple robot.
You can find the way how to figure it out.
https://discourse.ros.org/t/announcing-turtlebot3-software-v1-0-0-and-firmware-v1-2-0-update/4888/6
http://emanual.robotis.com/docs/en/platform/turtlebot3/applications/#load-multiple-turtlebot3s
Originally posted by Darby Lim with karma: 811 on 2018-08-01
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by jayess on 2018-08-01:
@Darby Lim instead of only providing links to the resources, can you please update your question with a copy and paste of some minimal example? If and when these links disappear (which happens often) then your answer will lose its value and won't be self-contained.
Comment by AmateurHour on 2018-08-01:
Wow, further reading has shown that this update takes care of exactly the problem I am having! Thank you so much for the help!
Comment by Darby Lim on 2018-08-01:
@jayess Sorry, I will add some examples next answers :)
Comment by jayess on 2018-08-01:
@Darby Lim No problem, thanks! | {
"domain": "robotics.stackexchange",
"id": 31442,
"tags": "navigation, ros-kinetic, gmapping, transform"
} |
Is momentum along the line of collision conserved when a ball falls on an inclined plane | Question:
A ball of mass 1kg falling vertically with a velocity of 2m/s strikes a wedge of mass 2kg. The wedge lies on a smooth horizontal surface and the coefficient of restitution between the ball and the wedge is 1/2. Find the velocity of the wedge and the ball immediately after collision.
I obtained the correct equation involving the coefficient of restitution. However, for the second equation involving momentum, I obtained an incorrect equation.
Method 1
Momentum along normal to wedge is conserved.
Initial momentum = Final momentum
$(-2\cos 30)(1) = (V_y)(1) + (-V_w \sin 30)(2)$
$V_w=V_y+\sqrt{3}$
Method 2
Impulse of ball, which is along normal, = $J = V_y - (-2\cos 30) = V_y + \sqrt{3}$
Impulse of wedge, which is along horizontal $=(V_w-0)(2)=2V_w$
Horizontal component of J which acts to the right = Impulse of wedge which acts to the left
$J \sin 30 =2V_w $
$4V_w= V_y+\sqrt{3}$
Why do the equations from the 2 methods differ? More specifically, why is Method 1 incorrect and Method 2 correct? Isn't impulse essentially a change in momentum, and since the duration of collision is $0$, shouldn't the final and initial momentum be the same?
Answer: The ground provides a force that has a non-zero component perpendicular to the direction of the incline. Therefore, momentum is not conserved in the direction perpendicular to the incline. | {
"domain": "physics.stackexchange",
"id": 86132,
"tags": "classical-mechanics, momentum, vectors, collision"
} |
openai_ros q-learning example not work | Question:
Hi guys, I'm trying to follow this tutorial from OpenAI_ROS to play around with Q-learning on a Gazebo-simulated turtlebot in a maze. However, I cannot initiate my turtlebot. I have successfully downloaded and built openai_ros and openai_example_projects in my ROS workspace. I'm using Ubuntu 16.04 + ROS-Kinetic + Gazebo-7.0.
Basically, I'm following these steps:
$ roslaunch gym_construct main.launch to start gazebo simulation (this part looks fine to me).
$ roslaunch turtle2_openai_ros_example start_training.launch and this will give me following error messages.
[ERROR] [1534517590.397706, 29.824000]: WORLD RESET
[ERROR] [1534517590.399843, 29.826000]: NOT Initialising Simulation Physics Parameters
[WARN] [1534517590.401831, 29.828000]: Start Init ControllersConnection
[WARN] [1534517590.401994, 29.828000]: END Init ControllersConnection
Originally posted by FloppyHank on ROS Answers with karma: 13 on 2018-08-17
Post score: 1
Answer:
Hi, I'm one of the developers of the package.
your steps are correct.
I think that you have no error when executing the code and that everything is working. The thing is that, due to going fast in the deployment of the code, we have used ERROR and WARN messages to show different colors of messages for our own debugging purpose. Then we forgot to put it back to normal info messages.
What I mean is that those messages are not error messages, just INFO messages. We are going to change this next week.
So you should have a list of messages like this:
* /turtlebot2/wait_time: 0.1
NODES
/
turtlebot2_maze (turtle2_openai_ros_example/start_qlearning.py)
ROS_MASTER_URI=http://10.8.0.1:11311
process[turtlebot2_maze-1]: started with pid [15644]
[ERROR] [1535564679.315268, 210.837000]: WORLD RESET
[ERROR] [1535564679.324640, 210.846000]: NOT Initialising Simulation Physics Parameters
[WARN] [1535564679.331335, 210.852000]: Start Init ControllersConnection
[WARN] [1535564679.331698, 210.852000]: END Init ControllersConnection
[WARN] [1535564680.745782, 211.494000]: START wait_until_twist_achieved...
[WARN] [1535564680.778331, 211.550000]: Reached Velocity!
[WARN] [1535564680.778792, 211.550000]: END wait_until_twist_achieved...
[ERROR] [1535564680.797151, 211.566000]: WORLD RESET
[WARN] [1535564681.186834, 211.714000]: new_ranges=5
[WARN] [1535564681.188497, 211.714000]: mod=144
[WARN] [1535564681.189583, 211.714000]: NOT done Validation >>> item=0.749866962433< 0.5
[WARN] [1535564681.190660, 211.714000]: NOT done Validation >>> item=0.922657012939< 0.5
[WARN] [1535564681.191821, 211.714000]: NOT done Validation >>> item=2.44325733185< 0.5
[WARN] [1535564681.192948, 211.714000]: NOT done Validation >>> item=2.64469790459< 0.5
[WARN] [1535564681.194034, 211.714000]: NOT done Validation >>> item=0.914873301983< 0.5
[WARN] [1535564681.195369, 211.714000]: ############### Start Step=>0
[WARN] [1535564681.196458, 211.714000]: Next action is:2
[WARN] [1535564681.215806, 211.714000]: START wait_until_twist_achieved...
[WARN] [1535564681.274124, 211.772000]: Not there yet, keep waiting...
[WARN] [1535564681.330621, 211.828000]: Reached Velocity!
[WARN] [1535564681.331351, 211.828000]: END wait_until_twist_achieved...
...
... and so on.
So your robot should start moving and trying actions.
Please confirm that it is working. Otherwise, copy here the whole output of your command.
Originally posted by R. Tellez with karma: 874 on 2018-08-29
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by FloppyHank on 2018-09-24:
Yes, you are right. I figured out this a couple of days before but forgot to check this answer back. Thank you. | {
"domain": "robotics.stackexchange",
"id": 31567,
"tags": "ros-kinetic"
} |
Double buffering Gdiplus and GDI | Question: I have this working code and was wondering how I can optimize it. Is there anything I can declare globally so it does not need to be created over and over? This function is called around 30 times a second.
bmScreenBuffer is a global Bitmap.
Contents of drawScreen():
// Buffer - Get the main window handle to create 2 DCs (device contexts)
HWND hwndMain = programInfoPtr->hWnd;
HDC hdcMain = GetDC(hwndMain);
HDC hdcBuffer = CreateCompatibleDC(hdcMain);
// Current buffer and old
HBITMAP hbm_Buffer = CreateCompatibleBitmap(hdcMain, bmScreenBuffer->GetWidth(), bmScreenBuffer->GetHeight());
HBITMAP hbm_oldBuffer = (HBITMAP)SelectObject(hdcBuffer, hbm_Buffer);
// Tells GDI+ to draw to the "GDI" DC
Graphics* g = new Graphics(hdcBuffer);
g->DrawImage(bmScreenBuffer, 0, 0);
// Now copy the image to the screen
BitBlt(hdcMain, 0, 0, bmScreenBuffer->GetWidth(), bmScreenBuffer->GetHeight(), hdcBuffer, 0, 0, SRCCOPY);
// Clean Up
ReleaseDC(hwndMain, hdcMain);
SelectObject(hdcBuffer, hbm_oldBuffer);
DeleteDC(hdcBuffer);
DeleteObject(hbm_Buffer);
delete g;
Answer:
There's probably a faster way to grab the display like with DirectX or something. WinGDI (BitBlt) is probably pretty slow.
GetDC has no effect on class or private DCs, so it is always sensible to call ReleaseDC after GetDC.
Watch out for the GetClassInfoEx function.
HBITMAP hbm_Buffer = CreateCompatibleBitmap(hdcMain, bmScreenBuffer->GetWidth(), bmScreenBuffer->GetHeight());: assuming your machine is running in 32-bit color, this generates a 32-bit bitmap (ie: compatible with the current display). Don't use CreateCompatibleBitmap here. Use CreateBitmap instead. Also note that you no longer really need hdcMain for this.
Remember that the Graphics class itself is a managed object and it will be garbage collected sooner or later. This class also uses unmanaged memory, the garbage collector doesn't "see" that memory and doesn't know that the actual memory used by the Graphics class may be much higher than the size of a Graphics instance. But fortunately you did it correctly: it's better to delete such objects as soon as you no longer need them, deleting will free any unmanaged memory or any other resources like opened files. | {
"domain": "codereview.stackexchange",
"id": 17193,
"tags": "c++, performance, graphics, winapi"
} |
Apply Joint Effort to Gazebo Model | Question:
Hello,
I am using Gazebo with ROS. I created a model in Gazebo that I was able to successfully send service calls to (to get a joint to spin for example). Specifically, I used the srv file gazebo_msgs/ApplyJointEffort (with a serviceClient node). What I'm trying to do now is get the nodes to be publisher/subscription-based to match another structure I am working with. Is it possible to create my own msg file based off of the gazebo_msgs/ApplyJointEffort srv in order to make a publisher node and still an effort on a joint? I tried doing just that and the publishing happens but there is no response to the model in gazebo. I'm obviously uncertain as to how the gazebo_ros package communication exactly works so any help would be appreciated.
Thank you
Originally posted by nbanyk on ROS Answers with karma: 40 on 2014-08-28
Post score: 0
Original comments
Comment by nbanyk on 2014-09-02:
is this question more suited for the Gazebo forum?
Answer:
I think I found out from the Gazebo board that that service call did not use physics and so was not relevant to me. Instead I was advised to use a plugin so the ROS side of things would have to involve the plugin. Closing this out.
Originally posted by nbanyk with karma: 40 on 2015-06-24
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 19228,
"tags": "gazebo"
} |
Optimizing JSON append template | Question: I am fetching a list of messages from some JSON and appending each message as an LI within a UL. I have created the message template within jQuery to append each message to the document. My code works well but I can't help but feel it could be improved (I am a beginner JS programmer). From what I have seen online moving the templating to handlebars.js would be a better solution but I can't be sure.
$(document).ready(function() {
var broadcastMessagesJsonURL = "https://api.myjson.com/bins/sadap";
$.getJSON(broadcastMessagesJsonURL, function(data) {
$.each(data.broadcastMessages, function(i, item) {
//All items from the json
var broadcastMessageID = (item.ID);
var checkBroadcastMessageRead = (item.read ? " broadcast__message__read__state--unread" : "");
var broadcastMessageSubject = (item.subject);
var broadcastMessageGroup = (item.group);
var broadcastMessageDateSent = (item.dateSent);
var checkBroadcastFeatureImage = (item.featureImage ? " broadcast__message__image--active" : "");
var checkBroadcastForm = (item.form ? " broadcast__message__form--active" : "");
var checkBroadcastAttachments = (item.attachments ? " broadcast__message__attachment--active" : "");
var broadcastMessageContent = (item.content);
var broadcastMessageTemplate = ('<li class="broadcast__message__list__item" data-broadcast-message-ID="' + broadcastMessageID + '"> \
<div class="broadcast__message__wrapper"> \
<div class="broadcast__message__read__state' + checkBroadcastMessageRead + '"></div> \
<div class="broadcast__message__subject">' + broadcastMessageSubject + '</div> \
<div class="broadcast__message__group">' + broadcastMessageGroup + '</div> \
<div class="broadcast__message__date__time__stamp" title="' + broadcastMessageDateSent + '">' + broadcastMessageDateSent + '</div> \
<div class="broadcast__message__snippet">' + broadcastMessageContent + '</div> \
</div> \
</li>');
$(".broadcast__messages__list").append(broadcastMessageTemplate);
});
}).fail(function() {
console.log("broadcastMessages json cannot be loaded");
});
});
CODEPEN LINK
Answer: Rewrite
Below is a rewrite with all my further suggestions applied.
const messagesURL = 'https://api.myjson.com/bins/sadap';
const messagesLoadError = () => console.error('broadcastMessages json cannot be loaded');
const forEachMessage = (_, message) => {
const messageRead = message.read ? ' broadcast__message__read__state--unread' : '',
featureImage = message.featureImage ? ' broadcast__message__image--active' : '',
form = message.form ? ' broadcast__message__form--active' : '',
attachments = message.attachments ? ' broadcast__message__attachment--active' : '';
const template = `<li class="broadcast__message__list__message" data-broadcast-message-ID="${message.ID}">
<div class="broadcast__message__wrapper">
<div class="broadcast__message__read__state${messageRead}"></div>
<div class="broadcast__message__subject">${message.subject}</div>
<div class="broadcast__message__group">${message.group}</div>
<div class="broadcast__message__date__time__stamp" title="${message.dateSent}">${message.dateSent}</div>
<div class="broadcast__message__snippet">${message.content}</div>
</div>
</li>`;
$('.broadcast__messages__list').append(template);
};
$.getJSON(messagesURL, data => $.each(data.broadcastMessages, forEachMessage)).fail(messagesLoadError);
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"
integrity="sha384-fj9YEHKNa/e0CNquG4NcocjoyMATYo1k2Ff5wGB42C/9AwOlJjDoySPtNJalccfI"
crossorigin="anonymous">
</script>
Remarks
API security
Your API endpoint (https://api.myjson.com/bins/sadap) uses insecure and archaic SSL configuration. Actually, I couldn't connect to it at all, since I have tweaked up browser configuration. Few key points about your API's security:
Vulnerable to the OpenSSL CCS vulnerability (CVE-2014-0224),
Relies fully on insecure RC4 or legacy CBC mode of ciphers it uses,
Doesn't support Forward Secrecy.
See Qualys' test result.
Unused callback's parameter
You have the following line in your code:
$.each(data.broadcastMessages, function(i, item) {
but i is never used. In such cases, it's good to replace it with underscore (_).
Naming issues
As Leon Bambrick added on top of Phil Karlton's quote:
There are only two hard things in Computer Science: cache
invalidation, naming things, and off-by-one errors.
„Some" of your names are overly verbose (the word broadcast seems to be everywhere) and others are not descriptive enough, like item.
Wall of text
This
var broadcastMessageID = (item.ID);
var checkBroadcastMessageRead = (item.read ? " broadcast__message__read__state--unread" : "");
var broadcastMessageSubject = (item.subject);
var broadcastMessageGroup = (item.group);
var broadcastMessageDateSent = (item.dateSent);
var checkBroadcastFeatureImage = (item.featureImage ? " broadcast__message__image--active" : "");
var checkBroadcastForm = (item.form ? " broadcast__message__form--active" : "");
var checkBroadcastAttachments = (item.attachments ? " broadcast__message__attachment--active" : "");
var broadcastMessageContent = (item.content);
would be much easier to read, if you would combine vars and align equality signs, question marks and colons:
var broadcastMessageID = item.ID,
broadcastMessageSubject = item.subject,
broadcastMessageGroup = item.group,
broadcastMessageDateSent = item.dateSent,
broadcastMessageContent = item.content,
checkBroadcastMessageRead = item.read ? ' broadcast__message__read__state--unread' : '',
checkBroadcastFeatureImage = item.featureImage ? ' broadcast__message__image--active' : '',
checkBroadcastForm = item.form ? ' broadcast__message__form--active' : '',
checkBroadcastAttachments = item.attachments ? ' broadcast__message__attachment--active' : '';
Variables
Most of your variables are unnecessary. If you would rename ambiguous item to message, you could use message.ID instead of doing var broadcastMessageID = (item.ID).
Also, bear in mind that the following three variables are unused: checkBroadcastFeatureImage, checkBroadcastForm, checkBroadcastAttachments.
Use template literals
var broadcastMessageTemplate could become much more easily readable if you used template literals to declare it. Mind however, that this is still not the „most vanilla” method to create DOM elements either. Take a look at createElement().
Fail notification
You should use console.error() instead of console.log() in your fail's callback. | {
"domain": "codereview.stackexchange",
"id": 25445,
"tags": "javascript, beginner, jquery, json"
} |
Asking for Fibonacci numbers, up till 50 | Question: I am learning Python, and this is one of the program assignments I have to do.
I need to write code that prompts the user to enter Fibonacci numbers continuously until the number entered is greater than 50. If everything is correct, the screen shows 'Well done'; otherwise, 'Try again'.
While my code is working correctly, I would like to know how I should write it better.
My code:
# Init variables
t = 0
prev_2 = 1
prev_1 = 0
run = 1
# Run while loop to prompt user enter Fibonacci number
while (run):
text = input('Enter the next Fibonacci number >')
if (text.isdigit()):
t = int(text)
if t == prev_2 + prev_1:
if t <= 50:
prev_2 = prev_1
prev_1 = t
else:
print('Well done')
run = 0
else:
print('Try again')
run = 0
else:
print('Try again')
run = 0
Answer: First, run should be a Boolean value (True or False). 1 and 0 work, but they're far less clear. Note how much more sense this makes:
. . .
run = True
while (run):
text = input('Enter the next Fibonacci number >')
if (text.isdigit()):
t = int(text)
if t == prev_2 + prev_1:
if t <= 50:
prev_2 = prev_1
prev_1 = t
else:
print('Well done')
run = False
else:
print('Try again')
run = False
else:
print('Try again')
run = False
Instead of using a run flag though to exit the loop, I think it would be a lot cleaner to just break when you want to leave:
while (True):
text = input('Enter the next Fibonacci number >')
if (text.isdigit()):
t = int(text)
if t == prev_2 + prev_1:
if t <= 50:
prev_2 = prev_1
prev_1 = t
else:
print('Well done')
break
else:
print('Try again')
break
else:
print('Try again')
break
Instead of doing a isdigit check, you can just catch the ValueError that int raises:
while (True):
text = input('Enter the next Fibonacci number >')
try:
t = int(text)
except ValueError:
print('Try again')
break
if t == prev_2 + prev_1:
if t <= 50:
prev_2 = prev_1
prev_1 = t
else:
print('Well done')
break
else:
print('Try again')
break
This goes with Python's "It's better to ask for forgiveness than permission" philosophy.
Some other things:
Don't put parenthesis around the condition of while and if statements unless you really feel that they help readability in a particular case.
You don't need to have t assigned outside of the loop. I'd also give t a better name like user_input:
prev_2 = 1
prev_1 = 0
# Run while loop to prompt user enter Fibonacci number
while True:
text = input('Enter the next Fibonacci number >')
try:
user_input = int(text)
except ValueError:
print('Try again')
break
if user_input == prev_2 + prev_1:
if user_input <= 50:
prev_2 = prev_1
prev_1 = user_input
else:
print('Well done')
break
else:
print('Try again')
break | {
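If the validation logic ever needs unit testing, it can be pulled out of the input loop into a pure helper. A minimal sketch (the function name, return values, and default limit are my own, not part of the original exercise):

```python
def check_fib_step(prev_2, prev_1, value, limit=50):
    """Classify a guess against the running Fibonacci state.

    Returns 'wrong' if the guess breaks the sequence,
    'done' once a correct guess exceeds the limit,
    and 'continue' otherwise.
    """
    if value != prev_2 + prev_1:
        return 'wrong'
    return 'done' if value > limit else 'continue'
```

The interactive loop then only handles printing and parsing, calling this helper on each entered integer.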
"domain": "codereview.stackexchange",
"id": 37980,
"tags": "python, beginner, fibonacci-sequence"
} |
Deterministic communication complexity vs partition number | Question: Background:
Consider the usual two-party model of communication complexity where Alice and Bob are given $n$-bit strings $x$ and $y$ and have to compute some Boolean function $f(x,y)$, where $f:\{0,1\}^n \times \{0,1\}^n \to \{0,1\}$.
We define the following quantities:
$D(f)$ (the deterministic communication complexity of $f$): The minimum number of bits that Alice and Bob need to communicate to compute $f(x,y)$ deterministically.
$Pn(f)$ (the partition number of $f$): The logarithm (base 2) of the smallest number of monochromatic rectangles in a partition (or a disjoint cover) of $\{0,1\}^n \times \{0,1\}^n$.
A monochromatic rectangle in $\{0,1\}^n \times \{0,1\}^n$ is a subset $R \times C$ such that $f$ takes the same value (i.e., is monochromatic) on all elements of $R \times C$.
Also note that the partition number is different from the "protocol partition number", which was the subject of this question.
See the text by Kushilevitz and Nisan for more information. In their notation, what I have defined to be $Pn(f)$ is $\log_2 C^D(f)$.
Note: These definitions are easily generalized to non-Boolean functions $f$, where the output of $f$ takes values in some larger set.
Known results:
It is known that $Pn(f)$ is a lower bound on $D(f)$, i.e., for all (Boolean or non-Boolean) $f$, $Pn(f) \leq D(f)$.
Indeed, most lower bound techniques (or perhaps all?) for $D(f)$ actually lower bound $Pn(f)$. (Can anyone confirm that this is true of all lower bound techniques?)
It is also known that this bound is at most quadratically loose (for Boolean or non-Boolean functions), i.e., $D(f) \leq (Pn(f))^2$. To summarize, we know the following:
$Pn(f) \leq D(f) \leq (Pn(f))^2$
It is conjectured that $Pn(f) = \Theta(D(f))$. (This is open problem 2.10 in the text by Kushilevitz and Nisan.) However, to the best of my knowledge, the best known separation between these two for Boolean functions is only by a multiplicative factor of 2, as shown in "The Linear-Array Conjecture in Communication Complexity Is False" by Eyal Kushilevitz, Nathan Linial, and Rafail Ostrovsky.
More precisely, they exhibit an infinite family of Boolean functions $f$, such that $D(f) \geq (2 - o(1)) Pn(f)$.
Question:
What is the best known separation between $Pn(f)$ and $D(f)$ for non-Boolean functions? Is it still the factor-2 separation referenced above?
Added in v2: Since I haven't received an answer in a week, I'm also happy to hear partial answers, conjectures, hearsay, anecdotal evidence, etc.
Answer: This question has just been resolved! As I mentioned, it was known that
$Pn(f) \leq D(f) \leq (Pn(f))^2$,
but it was a major open problem to show that either $Pn(f) = \Theta(D(f))$ or that there exists a function for which $Pn(f) = o(D(f))$.
A few days ago this was resolved by Mika Göös, Toniann Pitassi, Thomas Watson (http://eccc.hpi-web.de/report/2015/050/). They show that there exists a function $f$ which satisfies
$Pn(f) = \tilde{O}((D(f))^{2/3})$.
They also show an optimal result for the one-sided version of $Pn(f)$, which I'll denote by $Pn_1(f)$, where you only need to cover the 1-inputs with rectangles. $Pn_1(f)$ also satisfies
$Pn_1(f) \leq D(f) \leq (Pn_1(f))^2$,
and they show that this is the best possible relation between the two measures, since they exhibit a function $f$ which satisfies
$Pn_1(f) = \tilde{O}((D(f))^{1/2})$. | {
"domain": "cstheory.stackexchange",
"id": 3206,
"tags": "lower-bounds, communication-complexity"
} |
Understanding the spatial resolution of a imaging sensor, should it evenly divide the overall image resolution? | Question: I am reviewing the Landsat 1 satellite, specifically its Multispectral Scanner (MSS) imaging system. The system had the following specifications
Sensor type: opto-mechanical
Spatial Resolution: 68 m X 83 m (commonly resampled to 57 m, or 60 m)
Spectral Range: 0.5 – 1.1 µm
Number of Bands: 4, 5 (Landsat 3 only)
Temporal Resolution: 18 days (L1-L3), 16 days (L4 & L5)
Image Size: 185 km X 185 km
Swath: 185 km
Programmable: no
Why doesn't the spatial resolution of the sensor evenly divide the final image resolution? Is the spatial resolution not "locked" to the size of a pixel?
Answer: It sounds like it is not a line scan camera in the sense where you have a line of pixels that is captured simultaneously. It sounds more like a "pixel-scan" camera where something is scanning perpendicular to the satellite's direction of motion.
There is overlap between consecutive images as a result and they are removing that overlap so that only "unique" information is being included in their resolution numbers. | {
"domain": "physics.stackexchange",
"id": 71070,
"tags": "optics, satellites, imaging"
} |
Dynamically call lambda based on stream input: Try 2 | Question: Originally asked here: Dynamically call lambda based on stream input
The following version is based heavily on the answer provided by @iavr, though I hope that I have done enough more to make it worth reviewing again.
A way to clean parameter types of all decorations:
template<typename Decorated>
struct CleanType
{
typedef typename std::remove_reference<Decorated>::type NoRefType;
typedef typename std::remove_const<NoRefType>::type BaseType;
};
Get Caller traits of a functor:
template <typename T>
struct CallerTraits
: public CallerTraits<decltype(&T::operator())>
{};
template<typename C, typename ...Args>
struct CallerTraits<void (C::*)(Args...) const>
{
static constexpr int size = sizeof...(Args);
typedef typename std::make_integer_sequence<int, size> Sequence;
typedef std::tuple<typename CleanType<Args>::BaseType...> AllArgs;
};
// Notice the call to `CleanType` to get the arguments I want.
// So that I can create objects of the correct type before passing
// them to the functor:
The actual calling of the functor based on reading data from a stream (via ResultSetRow, but I am sure that can be easily generalized).
Done in 2 parts:
This is the part you call:
template<typename Action>
void callFunctor(Action action, Detail::ResultSetRow& row)
{
// Get information about the action.
    // And retrieve how many parameters must be read from
// the stream before we call `action()`
typedef CallerTraits<decltype(action)> Trait;
typedef typename Trait::Sequence Sequence;
doCall2(action, row, Sequence());
}
Part 2:
Here we extract the parameters from the stream then call the action.
template<typename Action, int... S>
void doCall2(Action action, Detail::ResultSetRow& row, std::integer_sequence<int, S...> const&)
{
    // Create a tuple that holds all the arguments we need
    // to call the functor `action()`
typedef CallerTraits<decltype(action)> Trait;
typedef typename Trait::AllArgs ArgumentTupple;
// Use the template parameter pack expansion
// To read all the values from the stream.
// And place them in the tuple.
ArgumentTupple arguments(row.getValue1<typename std::tuple_element<S, ArgumentTupple>::type>()...);
// Use the template parameter pack expansion
    // To call the function expanding the tuple into
// individual parameters.
action(std::get<S>(arguments)...);
}
Example usage:
// Using `ResultRowSet` from below.
int main()
{
std::stringstream stream("1 Loki 12.3 2.2");
Detail::ResultRowSet row(stream);
callFunctor([](int ID, std::string const& person, double item1, float item2){
std::cout << "Got Row:"
<< ID << ", "
<< person << ", "
<< item1 << ", "
<< item2 << "\n";
}, row);
}
Here it will read an int (ID), a std::string (person) a double (item1) and a float (item2) from the stream (represented by row), then call the lambda provided.
This is not the actual implementation of Detail::ResultSetRow. But for code review purposes you can think of it as:
namespace Detail
{
class ResultRowSet
{
std::istream& stream;
public:
ResultRowSet(std::istream& s)
: stream(s)
{}
template<typename T>
T getValue1()
{
T val;
stream >> val;
return val;
}
};
}
Answer: It is becoming really good, but there are still a few things that can be done:
There already is a standard way to "clean" a type: std::decay. It does a little bit more than just removing the reference and the const qualification though, but it does basically what you need. Therefore, you can totally get rid of CleanType and use std::decay instead:
using AllArgs = std::tuple<typename std::decay<Args>::type...>;
And since you are using C++14 (I assume this because of std::integer_sequence), you can even use std::decay_t instead:
using AllArgs = std::tuple<std::decay_t<Args>...>;
N3887 also made its way to the C++14 standard. Therefore, if you actually use C++14, you will be able to use the alias template std::tuple_element_t instead of the metafunction std::tuple_element:
ArgumentTupple arguments(row.getValue1<std::tuple_element_t<S, ArgumentTupple>>()...);
This line looks kind of wrong:
typedef typename std::make_integer_sequence<int, size> Sequence;
When I tried to compile your example, it didn't compile at first because of the typename in this line. std::make_integer_sequence is an alias template, there shouldn't be a typename before it.
You sometimes use ResultRowSet and use ResultSetRow at some other places. It's probably a typo or something like that :p
ArgumentTupple also feels like a typo. It should probably be ArgumentTuple.
std::integer_sequence does not contain anything. Therefore, there is no need to pass is by const&, you can simply pass it by value. | {
"domain": "codereview.stackexchange",
"id": 6956,
"tags": "c++, c++14, lambda, template-meta-programming"
} |
How does one calculate how big something has to be, to be seen at a given distance? | Question: Ignoring curvature of the Earth.
How do I calculate the size an object would need to be in order to appear to be approx 1cm tall at a given distance?
Answer: Let $w$ be the actual size of the object, $d$ be the distance to the object, $w_r$ be the size of a reference object, and $d_r$ be the distance to the reference object.
If you want your object to appear to be the same size at some distance $d$ as a reference object at a reference distance $d_r$, then using the properties of similar triangles...
$$\frac{w}{d} = \frac{w_r}{d_r}$$
Solving for $w$, the size of your object,
$$w = w_r \frac{d}{d_r}$$
For example, if you want an object at 20 meters to appear to be the same size as a 1 cm object at 3 meters, you would plug in like so:
$$w = 0.01 ~\rm{m} \frac{20 ~\rm m}{3 ~\rm m} = 0.067 ~\rm{m} = 6.7 ~\rm{cm}$$ | {
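The same similar-triangles relation is a one-liner in code; here is a small sketch (the function name is my own):

```python
def required_size(w_ref, d_ref, d):
    """Size an object at distance d must have to subtend the same
    angle as an object of size w_ref at distance d_ref
    (similar triangles: w / d = w_ref / d_ref)."""
    return w_ref * d / d_ref
```

For the example above, `required_size(0.01, 3, 20)` gives about 0.067 m, matching the hand calculation.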
"domain": "physics.stackexchange",
"id": 29473,
"tags": "homework-and-exercises, optics, geometry, vision, distance"
} |
How long can a naked human survive on Mars? | Question: How long can a naked human survive at the surface of the Mars planet?
For instance, let's say a worker's base takes fire while he sleeps, the building is totally ablaze and he can do nothing but run to the emergency building 200 meters away without any respiratory equipment, pressure suit, UV protections or anything.
Maybe a human could survive for a rather long time, and apnea time is the real limiting factor?
Climate:
Temperature: 27 °C to −50 °C (base is at a rather warm location, on Equator)
Pressure: 0.006 bar (Earth: 1 bar)
Answer: Long story short, the astronaut probably wouldn't make it, and would first lose consciousness, then suffocate.
There is a lot of myth and hollywood dramatization regarding this kind of thing. Here are some:
You will explode. This is just ridiculous. The skin is air tight (relatively speaking). It is also very elastic and can pull and bend quite a great deal before tearing. Through quite a few, equally durable, tissues, it is connected to the bones, which are unaffected by negative air pressure.
Your blood will boil. The circulatory system is also a closed system. It is not directly exposed to the environment. Also, the blood pressure in a healthy person averages around 100 mmHg, and Earth's atmosphere at sea level is 760 mmHg. There is already a massive disparity, yet Earth's atmosphere doesn't go pouring into our veins at random. Likewise, the blood in a person's veins won't go pouring out into the atmosphere simply because the pressure is extremely low.
Any air in your lungs will be forcibly sucked out of you. Again, this is a closed system, if you hold your breath. As long as you don't try to breathe there is nothing forcing the air from your lungs.
Your eyes will be sucked from their sockets. Thank you Total Recall (the original) for this myth. The eyes are very firmly in place. You might feel a pull on them, but they aren't going anywhere.
Absolutely none of these would happen in the vacuum of space and certainly not on the Martian surface.
Here's what will happen:
Any liquid material on the surface of your body will vaporize. All sweat, saliva (if you open your mouth), water in the mucous membranes, and tears on the eyes will nearly instantly vaporize. It would be quite uncomfortable, especially the eyes, but very survivable. Long term exposure to a vacuum might damage the eyes eventually, but in this scenario there are far more pressing matters.
The negative pressure may also cause your eardrums to rupture. Try to imagine the feeling your ears get on the plane at 30 thousand feet times about 100. Since you can't close your ears, plugging them with your fingers may help, but likely just postpone the inevitable pressure disparity, which will lead to at least great discomfort and possible drum rupture.
If you had the chance, you should take a deep breath before jumping out and avoid trying to breathe at all costs. Opening your lungs for a breath would very quickly draw whatever breath you had in them out into the atmosphere and also vaporize any fluid that was in them. Considering the rapid pace of this event, you would surely have permanent damage from this and begin suffocating and will die in a minute or two unless you get medical attention. There seems to be conflict in the procedure for this, with some sources suggesting that emptying your lungs would be better to avoid this. Considering the astronaut in your example is about to do a 200 meter dash he is going to need all the O2 he can get.
The cold is also a treacherous factor. At -50C the heat from your body would dissipate so rapidly that it would likely be very debilitating, perhaps causing you to seize up and clutch your extremities to your chest. It would also be excruciatingly painful. It might cause you to fall into shock. In a total vacuum, however, this chill does not exist. A vacuum is actually very insulating, but Mars' thin atmosphere is still enough for you to feel the chill.
The lack of oxygen and high CO2 would be what would eventually kill you. You would eventually try to breathe and lose consciousness within a breath or two (or you might lose consciousness first, then your body would try to breathe naturally). Your body would try to keep you alive as long as possible by pumping your heart faster and increasing your blood pressure, but your heart would eventually fail and you would die within seconds after that.
Evaluating your particular scenario
There are first a few things about the scenario that seem unlikely.
The astronaut would not be naked. They remain in some of their protective gear in case of emergencies like this. He would have some protection against the cold if he had to run out without it.
Certain things like extinguishers are always nearby. The idea that one was unreachable is a little silly.
Space vehicles, shuttles, and buildings all are compartmentalized like a submarine. They could simply close off the affected areas.
Fire only burns in the presence of oxygen (with few exceptions). After being closed off, compartments can be decompressed, halting the fire immediately. They can then begin refilling them with breathable atmosphere. This poses the same vacuum problem if the astronaut is in the affected area, but he won't be trying to run across the surface naked. People have survived rapid decompressions before without a single injury, as listed in my second source, however, others have never regained consciousness.
Assuming you made it to the other building, they usually only open from the inside. So unless someone saw you coming and opened the door for you, your last minutes would be better spent desperately fighting the fire until it consumed you rather than banging on the neighbor's door.
But let's say all of this breaks down and poor Astronaut Joe finds himself making a mad, streaker's dash across the Martian surface.
We should assume that our astronaut is both quite athletic and knowledgeable about the environment outside of his shelter, knowing that he should take a deep breath before heading out, but literally has no time to grab a shirt, goggles, breathing mask, or anything. Just him and Mars.
If he's lucky, it's not too cold and he doesn't seize up and fall into the fetal position within a few steps. Let's say it's -20C, which is pretty darn cold, but tolerable for a minute or two. I next see him flailing across the surface, trying to run in Mars' low gravity (no easy task). He begins to lose vision as what should be tears pours from his dry eyes in a cloud of steam. His eyelids then begin to stick to his eyeballs because there is no longer any lubrication. He is desperate to reach the other building, and his movements get more and more erratic, as his body is rapidly using all of the available oxygen. All the while, he is still holding his breath, knowing that if he attempts to breathe he will likely collapse within seconds and perish. But the more he struggles to make it, the more his innate urge to breathe overpowers his conscious thought to prevent it. Eventually, against his own willpower, he gasps and attempts to inhale deeply. He takes two or three dying breaths, if they can be called that, then falls to the ground unconscious. He dies shortly after when his heart stops and his brain tissues die.
The whole ordeal lasts under 30 seconds and he is 100 meters at most away from his shelter. If he did get lucky and actually made it to the other building, he would likely have frostbite over much of his body, permanent damage to his eyes and ears, and possibly deadly exposure to cosmic rays.
The event would be very similar to being dragged by a fishing line to the bottom of the sea and drowning.
Sources:
A fun questionnaire to see how long you would last in space
What really happens in a vacuum | {
"domain": "biology.stackexchange",
"id": 1700,
"tags": "human-biology, death, speculative"
} |
Pendulum clock correction | Question: I'm trying to solve this task:
The pendulum clock was transported from the Earth's equator to Antarctica (in the vicinity of the southern geopole) for scientific experiments. Estimate the pendulum clock's correction over one Earth day in Antarctica (at a temperature of $t = −90°C$) if the clock is calibrated at the equator (at a temperature of $t = +50°C$). The coefficient of thermal expansion of the pendulum substance is $α_h = 2,4 \cdot 10^{-5}\ \mathrm{deg}^{-1}$. The original verified length of the pendulum is $ℓ_0 = 300$ mm. How much should the length of the pendulum be changed so that the clock's correction per day is no more than 10 seconds?
I think I have almost solved the task, but at the very last moment one of my variables disappears... Here is my solution:
Formulas to use: $L_t=L_0 × (1+α*t)$, $T=2π×√(l/g)$
We should find $L_0$ first:
$L_0=L_{+50}/(1 + 2,4 × 10^{-5} × 50)=0,3/(1+0,0012)=0,29964 (m)$
Then $L_{-90}$:
$L_{-90}=L_0×(1+2,4×10^{-5}×(-90))=0,29964×(1-0,00216)=0,29964×0,99784=0,29899 (m)$
Periods:
$T_{+50}=2π × √(L_{+50}/g)=2 × 3,142 × √(0,3/9,81)=1,0989 (sec)$
$T_{-90}=2π × √(L_{-90}/g)=2 × 3,142 × √(0,29899/9,81)=1,09706 (sec)$
$∆T=T_{+50}-T_{-90}$
After that we should find the number of complete oscillations per day:
$N=(24 × 60 × 60)/T_{-90} =86400/1,09706=78755,9477$
So the correction should be:
$x=N × ∆T=78755,9477 × (1,0989-1,09706)=144,91 (sec)$
Now we need it to be less than 10 seconds:
$N × ∆T<10$
Here just transformations:
$\frac{86400}{(2π × √(L_{-90}/g))}×(2π× √(L_{+50}/g)-2π× √(L_{-90}/g))<10$
And so the $L_{+50}$ just disappears! I must have made a mistake somewhere. Could somebody help?
Answer: There are a couple of oddities I see in this working.
I'm not completely clear if $l_0$ is the length at zero celcius or at the equator, at 50 celcius. But I'll go with your interpretation, that 300 = length at 50 celcius.
The temperatures seem extreme and unreasonable. It is warm at the equator and cold in Antarctica, but +50 to -90 seems rather unreal!
Given that, the next thing is your use of 9.81 for g. This is simply not correct at the equator or at the pole. At the equator you should use 9.78, and at antarctica you need 9.83
That said, the final part is just to find the proper length in antarctica, using the local value of g. That will be slightly longer than 300mm due to the higher gravitational acceleration.
You would just write down the formula for the period and set it equal to the time period at the equator, and use that to find the length. The difference in length is the change you need to make. You can then investigate how errors in the value of "g" used propagate and if this allows for the required accuracy.
If necessary you can convert that length to a length at the pole. (And it's not clear to me if the change in length needs to be done at the pole or at the equator.) Again there is a question of error propagation.
I think this question is really about the variation in $g$. If $g$ is assumed to be constant, then the length of the pendulum at the pole should be 300mm to get the same period as at the equator, since the period of a pendulum depends only on the length and on the local gravitational field. And that doesn't seem to be an interesting question. At least if $g$ is assumed constant then this question is off topic, as there is no astronomical content at all! | {
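To make the suggested procedure concrete, here is a small numeric sketch. The g values (9.78 m/s² at the equator, 9.83 m/s² near the pole) are rough figures I am assuming here, not part of the problem statement:

```python
import math

def pendulum_period(length, g):
    # Small-angle period: T = 2*pi*sqrt(l/g)
    return 2 * math.pi * math.sqrt(length / g)

def matched_length(l_ref, g_ref, g_new):
    # Equal periods require l/g to stay constant, so scale the
    # length by the ratio of the local gravitational accelerations.
    return l_ref * g_new / g_ref

g_equator, g_pole = 9.78, 9.83   # assumed local values of g
l_eq = 0.300                     # calibrated length at the equator (m)
l_pole = matched_length(l_eq, g_equator, g_pole)
# l_pole comes out slightly longer than 300 mm, as expected for larger g
```

With these assumed values the pole-side pendulum comes out roughly 1.5 mm longer; thermal contraction from +50 °C to −90 °C would then be handled on top of this, as in the working above.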
"domain": "astronomy.stackexchange",
"id": 6046,
"tags": "earth, time, mathematics, clock"
} |
Power of randomness vs. power of indefinite computation | Question: I am writing a paragraph on the power of randomness, part of which I am trying to ground in theory of computation (I am no expert/researcher in this field).
First off, I am aware that for traditional complexity classes such as ZPP and BPP, it is open question whether they equal P. In context of this question, I am interested in a particular subset of ZPP.
Let ZPP be the set of decision problems for which there exists a Probabilistic Turing Machine (PTM) which solves it in a time that is polynomial in expectation.
Note that this definition allows indefinite computation (albeit with an infinitesimal likelihood). In fact, most membership proofs by example (I am aware of) use PTMs which exploit this fact.
Let bounded-ZPP (my own term) be the subclass of ZPP which (in addition) requires the PTM to have a bounded execution, i.e., to halt after a finite (possibly super-polynomial) number of steps.
I was wondering about the following:
Is there a more conventional name for bounded-ZPP?
Is bounded-ZPP = ZPP?
Is P = bounded-ZPP?
If (3) holds, wouldn't this suggest that a hypothetical P $\subset$ ZPP would be more correctly attributed to allowing indefinite execution in ZPP, rather than to the "power of random choice"?
Answer: Any problem in ZPP is computable (in fact, it is in the intersection of NP and coNP). Given any ZPP machine, run it in parallel with a deterministic machine that solves the same problem. This affects the running time by at most a polynomial factor (the exact factor depending on the model of computation), and so the new machine is also in ZPP. The new machine is also guaranteed to always halt. So bounded-ZPP is the same class as ZPP. | {
"domain": "cstheory.stackexchange",
"id": 4137,
"tags": "cc.complexity-theory, complexity-classes, probabilistic-computation"
} |
How to Select Point Spread Function Empirically for Image Deconvolution? | Question: When the captured image is blurring, one way of obtaining a clear image is via image deconvolution technique. In order to perform deconvolution successfully, usually we need to pay attention to the following issues:
Image content, which determines what kind of a priori information you will use (Total Variation, Laplacian and so on) during the image reconstruction procedure.
Image noise level, which determines the balance factor between the a priori information and the data consistency term.
Point Spread Function (PSF), which is used to measure how blurred the image is.
Here my question is related to PSF estimation. The simplest way of finding the right PSF is to use different PSFs to perform deconvolution, and then select the best one visually. Then my question is how we can tell which PSF is the best visually.
Answer: One way, though not directly visual, is to observe the image statistics.
We have a pretty good idea about the statistics of Natural Images, more specifically, their Gradient Distribution (See Statistics of Natural Images and Models by Jinggang Huang, D. Mumford, What Makes a Good Model of Natural Images by Yair Weiss, William T. Freeman, The Statistic Distribution of Image Gradient - Mathematics StackExchange and UCLA Stat232A-CS266A Statistical Modeling and Learning in Vision and Cognition - Chapter 2 - Empirical Observations: Image Space and Natural Image Statistics).
So what you can do is see how well the statistics of the result matches the models.
Actually many modern (Prior to the Deep Learning era) Deblurring Methods work according to this approach with very nice results.
Look at Blind Motion Deblurring Using Image Statistics by Anat Levin. | {
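As a rough illustration of "checking the statistics", one could compare the gradient histogram of each deconvolution result against the heavy-tailed distribution expected of natural images. This is only a minimal sketch; the function name, binning, and toy input are my own choices, not from the papers above.

```python
import numpy as np

# Natural images have heavy-tailed gradient distributions; a candidate
# restoration whose gradient histogram matches that shape better is the
# statistically more plausible one.
def gradient_histogram(image, bins=64):
    gx = np.diff(image, axis=1).ravel()  # horizontal finite differences
    gy = np.diff(image, axis=0).ravel()  # vertical finite differences
    g = np.concatenate([gx, gy])
    return np.histogram(g, bins=bins, density=True)

# toy input standing in for a deconvolved image
img = np.random.default_rng(0).random((32, 32))
hist, edges = gradient_histogram(img)
print(hist.shape, edges.shape)  # (64,) (65,)
```

In practice one would compute this for each candidate PSF's output and score the fit to a reference natural-image gradient model rather than inspect it by eye.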
"domain": "dsp.stackexchange",
"id": 7353,
"tags": "image-processing, deconvolution, blind-deconvolution"
} |
How should we define the behavior of a Turing machine where the head tries to move left from the leftmost tape position? | Question: If we have a Turing machine in a model with a tape that is infinite only to the right and assume at some point the head tries to move left from the leftmost position.
How should we define the behavior in such a case? Is a machine doing so for some input not a valid Turing machine? And if so, how can we make sure when we define a Turing machine that this situation can't occur for any input?
I've read some sources about Turing machines but couldn't find the answer to this specific case, and I see no reason why this case can't happen for an arbitrary Turing machine and some input.
Answer: There is no single accepted model of Turing machines. Different choices would lead to very similar models, which simulate each other with very small overhead. That's why we usually don't care so much about the exact model.
Here are some possibilities:
If the Turing machine attempts to move to the left of the origin, then the head stays put. The Turing machine might or might not be notified.
The Turing machine has a mechanism to detect that it has reached the origin, and cannot move to the left of it (the specs don't allow it).
Same, but now the machine can move to the left, and crashes if it attempts to do so (you decide how to interpret a crash – acceptance, rejection, infinite loop).
If a Turing machine attempts to move to the left of the origin, then instead it moves right. This makes sense if the Turing machine is guaranteed to always move at every step.
You can probably think of several other alternatives. | {
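For instance, the first convention (the head stays put at the origin) amounts to nothing more than clamping the head position. A toy sketch of just that rule, with illustrative names:

```python
# One possible convention (option 1 above): a left move from the
# leftmost cell simply leaves the head where it is.
def move_head(pos, direction):
    if direction == 'L':
        return max(0, pos - 1)  # clamp at the origin
    return pos + 1

print(move_head(0, 'L'))  # 0 -- head stays put at the origin
print(move_head(0, 'R'))  # 1
```

Each of the other conventions would change only this one rule of the transition semantics, which is why the resulting models simulate each other so cheaply.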
"domain": "cs.stackexchange",
"id": 16244,
"tags": "turing-machines"
} |
Can grid-based clustering methods be used for customer segmentation? | Question: I am trying some clustering methods for customer segmentation and I stumbled upon grid-based methods like STING, MAFIA, WAVE CLUSTER, and CLIQUE. However, from what I've read, most of them are for image segmentation.
So before I invest my time in implementing these algorithms, I would like to know if anyone has tried using grid based clustering for clustering customer data before or on something that is not image based?
Answer: Depends on what your customer data is, but here's an idea:
You can look for a way to map your customer data on a 2D plot. Then you could save these plots as rasters (images in a lossless format, ML libraries commonly use .tiff). Make sure to strip decorations, scales, labels and anything that is not pure data representation, so that now every pixel in the raster represents what has now become a datapoint for you.
If you achieve this, you would be able to apply the desired algorithms on the newly created images.
The advantage of such an approach is that you add (or rather underline pre-existing) topological properties to your data, such as the proximity of what now become your pixels. It should go without saying that such properties should be utilised only if they contextually make sense for your data. And due to the existence of a continuous distance metric on numerical dimensions, the proximity example is better applicable to such numerical dimensions.
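A minimal sketch of the rasterisation idea, assuming two numeric customer features; the feature names ("age", "spend"), distributions, and bin count are all made up for illustration:

```python
import numpy as np

# Rasterise two numeric customer features into a 2D grid of counts so
# that grid/density-based clustering can operate on the resulting "image".
rng = np.random.default_rng(0)
age = rng.normal(40, 10, 500)     # feature 1
spend = rng.normal(100, 30, 500)  # feature 2
grid, xedges, yedges = np.histogram2d(age, spend, bins=50)
print(grid.shape)       # (50, 50)
print(int(grid.sum()))  # 500 -- every customer lands in exactly one cell
```

Each cell count plays the role of a pixel intensity; dense regions of customers show up as bright blobs that grid-based algorithms can then segment.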
"domain": "datascience.stackexchange",
"id": 3837,
"tags": "machine-learning, clustering"
} |
Can isotopes of a given element be represented by different symbols? | Question: Can isotopes of any given element be represented using a completely different chemical symbol? What's the IUPAC's take on this?
Sure, ordinarily you would add the isotope's mass as a superscript to the element's symbol to differentiate it from other isotopes: For example, carbon-12 ($\ce{^{12}C}$) and carbon-14 ($\ce{^{14}C}$); however the base-symbol $\ce{C}$, for carbon, doesn't change.
But the isotopes of hydrogen don't seem to follow this strictly. Often, I see deuterium ($\ce{^{2}H}$) and tritium ($\ce{^{3}H}$) represented by $\ce{D}$ (I see this one in organic chem textbooks a lot) and $\ce{T}$ respectively. Does this "convention" fit in with IUPAC norms? If so, can isotopes of other elements be represented differently as well?
Answer: IR-3.3.1 Isotopes of an element
The isotopes of an element all bear the same name (but see Section IR-3.3.2) and are
designated by mass numbers (see Section IR-3.2). For example, the atom of atomic number
8 and mass number 18 is named oxygen-18 and has the symbol $\ce{^{18}_{}O}$.
IR-3.3.2 Isotopes of hydrogen
Hydrogen is an exception to the rule in Section IR-3.3.1 in that the three isotopes $\ce{^{1}_{}H}$, $\ce{^{2}_{}H}$ and $\ce{^{3}_{}H}$ can have the alternative names protium, deuterium and tritium, respectively. The symbols D and T may be used for deuterium and tritium but $\ce{^{2}_{}H}$ and $\ce{^{3}_{}H}$ are preferred because
D and T can disturb the alphabetical ordering in formulae (see Section IR-4.5). The
combination of a muon and an electron behaves like a light isotope of hydrogen and is
named muonium, symbol $\ce{Mu}$.⁵
These names give rise to the names proton, deuteron, triton and muon for the cations $\ce{^{1}_{}H+}$, $\ce{^{2}_{}H+}$, $\ce{^{3}_{}H+}$ and $\ce{Mu+}$, respectively. Because the name proton is often used in contradictory senses, i.e. for isotopically pure $\ce{^{1}_{}H+}$ ions on the one hand, and for the naturally occurring undifferentiated isotope mixture on the other, it is recommended that the undifferentiated mixture be designated generally by the name hydron, derived from hydrogen.
Source:
N.G. Connelly, T. Damhus, R.M. Hartshorn, A.T. Hutton (eds) (2005). Nomenclature of Inorganic Chemistry (PDF). RSC–IUPAC. ISBN 0-85404-438-8.
Addendum:
The small subscript ⁵ present in the source is a reference to Names for Muonium and Hydrogen Atoms and Their Ions, W.H. Koppenol, Pure Appl. Chem., 73, 377–379 (2001) which can be viewed over at:
https://www.iupac.org/publications/pac/pdf/2001/pdf/7302x0377.pdf
$\cdots$ A particle consisting of a positive muon and an
electron ($\pu{\mu^+ e^–}$) is named “muonium” and has the symbol $\ce{Mu}$. Examples: “muonium
chloride,” $\ce{MuCl}$, is the equivalent of deuterium chloride $\cdots$ | {
"domain": "chemistry.stackexchange",
"id": 8098,
"tags": "nomenclature, notation, isotope"
} |
Degree of ionization and Saha equation | Question: Say you want to calculate degree of ionization for different gases in atmosphere of a star with abundances similar to those in Sun (let's assume you only have hydrogen, helium and sodium) over the temperature range (from 2000 K to 45000 K for example) using Saha equation:
$$\frac{n_{i+1}}{n_i}=\frac{g_{i+1}}{g_i} \frac{2}{n_e} \frac{{(2\pi m_e)}^{3/2}}{h^3} {(k_B T)^{3/2}} e^{-\chi /k_B T}$$,
which you write down for all three elements and of course next to abundances and temperature you also know ionization potentials $\chi$ for each element.
How can one calculate electron density in that case and how does it change? I understand, that at lower temperatures number of electrons is equal to number of ionized sodium atoms since it is the easiest to ionize (and in general $n_e=n_H^1+n_{He}^1+n_{Mg}^1$, where 1 means first level of ionization) but that doesn't help much. And additional question: should higher levels of ionization be included, given the temperature?
Answer: If you are dealing with elements heavier than helium ('metals' in astrophysical jargon), then for 45000 K you may need to consider higher ionization states. Indeed, this temperature corresponds to a thermal energy of $k_B T\approx4~$eV, and for magnesium, the 2nd ionization potential is 15$~$eV. If your accuracy requirement is of order $2\times\exp(-15/4)\approx ~$5%, then you need to include neutral, singly ionized and doubly ionized ions. For greater accuracy, include more ionization states. You can find ionization levels for all ions of all elements at the NIST web site.
However many ionization levels you include, you can calculate electron density as
$n_e=\displaystyle\sum_{Z}\sum_{i=1}^{Z}i\cdot n_i(Z)\qquad\qquad$(1)
Here $Z$ denotes the chemical element ($Z=1$ for H, $Z=12$ for Mg, etc.) and $i$ denotes the ionization level. $n_i(Z)$ is the number density of $i$ times ionized element Z. E.g., $n_0(12)$ is the density of Mg atoms, $n_1(12)$ is the density of singly ionized Mg, $n_2(12)$ is the density of doubly ionized Mg, etc. These densities come from the Saha equations and the condition
$\displaystyle\sum_{i=0}^{Z}n_i(Z)=n(Z),\qquad\qquad$(2)
where $n(Z)$ is the number density of nuclei of element $Z$ (or the density of $Z$ atoms at low temperature).
In case you are wondering why under the $\sum$ sign, densities $n_i$ are multiplied by $i$, here is an explanation. Every singly ionized ion contributes $i=1$ electron to the plasma, every doubly ionized ion contributes $i=2$ electrons, etc. | {
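Equation (1) translates directly into code. A minimal sketch, where the input maps each element Z to its list of ionization-state densities [n_0(Z), n_1(Z), ..., n_Z(Z)]; the densities below are made-up toy numbers, not physical values from a Saha solution:

```python
# Sketch of Eq. (1): each i-times-ionized ion of element Z contributes
# i free electrons to the plasma.
def electron_density(ion_densities):
    n_e = 0.0
    for Z, densities in ion_densities.items():
        for i, n_i in enumerate(densities):
            n_e += i * n_i  # weight each density by its charge state
    return n_e

# toy case: hydrogen (Z=1) 50% ionized, helium (Z=2) fully neutral
print(electron_density({1: [0.5e10, 0.5e10], 2: [1e9, 0.0, 0.0]}))  # 5000000000.0
```

In the full problem, the densities themselves depend on n_e through the Saha equations, so in practice one iterates: guess n_e, solve Saha plus condition (2) for the n_i(Z), recompute n_e from (1), and repeat until it converges.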
"domain": "physics.stackexchange",
"id": 1542,
"tags": "astrophysics"
} |
joystick output not reliable | Question:
Using joystick to drive turtlesim, he goes a ways then stops. If I move the stick a bit, he'll start up again. This may be a similar problem to seeing the numbers zero out when I run a full robot stack, with ROS_controls, and echo cmd_vel output to the terminal. Numbers look good for five seconds but then go to zero at unexpected times and at different stick angles. Is ROS failing me or is it the hardware? Using jstest the numbers stay looking good. Is it a problem with the publishing rate in hertz?
Edit:
<launch>
  <node pkg="turtlesim" type="turtlesim_node" name="turtlesim" respawn="true"/>
  <node pkg="teleop_twist_joy" type="teleop_node" name="teleop_twist_joy">
    <remap from="cmd_vel" to="turtle1/cmd_vel"/>
  </node>
  <node pkg="joy" type="joy_node" name="joy_node"/>
</launch>
The above is for the turtlesim test. The stack for actual robot is RosJet from here: https://github.com/NVIDIAGPUTeachingKit/rosjet.
Thanks for help!
Originally posted by Rodolfo8 on ROS Answers with karma: 299 on 2018-01-08
Post score: 0
Original comments
Comment by gvdhoorn on 2018-01-08:
This sounds to me like a joystick device being opened in 'event mode' and using a motion controller with an integrated (and enabled) command time-out (ie: no new Twist in X sec -> stop).
Can you describe the full setup you have? Which nodes are involved in the teleop dataflow?
Comment by gvdhoorn on 2018-01-09:
I've moved your comments to your original question, as comments are not suited for showing launch files. Please always update your original question in these cases using the edit button/link.
Answer:
If that is your launch file, I believe you are missing some important parameters for this to work correctly.
The launch file for teleop_twist_joy sets (among others) the autorepeat_rate parameter for the joy_node node, which is probably what is causing the issues here.
Is it a problem with the publishing rate in hertz?
In a way: yes. Without autorepeat_rate, joysticks in event mode will not cause any msgs to be sent without any changes being detected to any of the axes. If you maintain stick position (ie: angles), nothing will change (due to filtering/deadzone), causing no msgs to be published. No messages -> auto stop (in well-written mobile base drivers, that is, see here for how this is implemented in turtlesim).
From the joy wiki page:
~autorepeat_rate (double, default: 0.0 (disabled))
Rate in Hz at which a joystick that has a non-changing state will resend the previously sent message.
I would recommend to alter your launch file to include the teleop_twist_joy/launch/teleop.launch file, instead of starting the node directly (note that you also don't need to start the joy_node itself anymore in that case). That will set all parameters as they should be, and things should start working a whole lot better. See also the teleop_twist_joy documentation on the ROS wiki.
Including provided launch files is in general always a good strategy btw, as typically launch files configure parameters that the node needs. Defaults can work, but they are always going to be sub-optimal.
Edit: and if you want to go a step further, you could look into including some of the packages from the Turtlebot, such as yocs_cmd_vel_mux (multiplexing between different Twist sources) and yocs_velocity_smoother (filter for Twist inputs).
Originally posted by gvdhoorn with karma: 86574 on 2018-01-09
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by Rodolfo8 on 2018-01-10:
That makes sense! I'll try this out as soon as I get back to the robot shop. Thanks!
Comment by Rodolfo8 on 2018-01-10:
Yes, that was it. Added two lines of parameters and it works perfectly. Thanks. I removed those lines earlier (from joy and teleop launch sections) because the software wouldn't work at all, and after removing the parameters, it started working for the first time.
Comment by Rodolfo8 on 2018-01-10:
Now I need to figure out why the motor commands from the Arduino often contain garbage and therefore don't control the motors well. Maybe it's because the encoders aren't connected up yet.
Comment by gvdhoorn on 2018-01-11:
That would seem to be a different issue.
Comment by gvdhoorn on 2018-01-11:\
Added two lines of parameters and it works perfectly.
note btw that the teleop_twist_joy launch file does more: it also loads a yaml file that configures teleop_twist_joy with quite some additional parameters.
Comment by Rodolfo8 on 2018-01-14:
Got the motors to perform! Turns out the Sabertooth Arduino libraries are not compatible with ROS. Program the direct serial writes instead and you're good to go.
Comment by Marcus Barnet on 2020-05-02:
This helped me a lot! Thanks @gvdhoorn. | {
"domain": "robotics.stackexchange",
"id": 29682,
"tags": "ros, navigation, teleop, joy"
} |
The most efficient way to merge two lists in Java | Question: I am looking for a way to merge two files with a list of distinct words, one word per line. I have to create a new txt file that would contain all the words of the first list and all the words from the second list. I don't have any other specifications. The order of words in the result doesn't matter.
public class testMain {
    public static void main(String[] args) {
        File f1=new File("words.txt");
        File f2=new File("words1.txt");
        HashSet <String> hash1=new HashSet<String>();
        HashSet <String> hash2=new HashSet<String>();
        try{
            Scanner s=new Scanner(f1);
            while(s.hasNextLine()){
                hash1.add(s.nextLine());
            }
            s=new Scanner(f2);
            while(s.hasNextLine()){
                hash2.add(s.nextLine());
            }
        }
        catch(FileNotFoundException e){}
        hash1.addAll(hash2);
        Object[]array =hash1.toArray();
        File newFile=new File("mixOfLists.txt");
        try{
            PrintWriter writer=new PrintWriter(newFile);
            for(int i=0; i<array.length; i++){
                writer.println(array[i]);
            }
            writer.close();
        }
        catch(FileNotFoundException e){ System.out.print("No Such File");}
        System.out.print("Done!");
    }
}
Answer: Exception Handling
Your handling is not great.... this is a sign of poor forward planning:
catch(FileNotFoundException e){}
And this is a sign of something almost as bad:
catch(FileNotFoundException e){ System.out.print("No Such File");}
System.out.print("Done!");
}
The first time I read that, I got confused and thought the "Done!" println was part of the exception handling. You need to work on the indentation. Also, just printing "No such file" is not very helpful exception handling.
Style
1-liner blocks are seemingly convenient but in the long term can have negative impacts on maintainability. You have a lot of them, and they make reading your code hard.
Your code is also suffocating due to lack of whitespace. You need to put spaces around operators to help the code breathe.... yeah, that sounds alarmist, but it really helps.
File newFile=new File("mixOfLists.txt");
try{
    PrintWriter writer=new PrintWriter(newFile);
    for(int i=0; i<array.length; i++){
        writer.println(array[i]);
    }
should be:
File newFile = new File("mixOfLists.txt");
try{
    PrintWriter writer = new PrintWriter(newFile);
    for(int i = 0; i < array.length; i++) {
        writer.println(array[i]);
    }
    ....
Resources
You should use try-with-resources for your IO sources and sinks. As things stand at the moment, you don't close the readers properly.
Algorithm
You're reading both files in to their own sets, and then merging the sets, and then outputting the result.
A better solution would be to use the boolean return value from the add(...) method to determine whether the word has been seen before... consider:
while(s.hasNextLine()){
    String line = s.nextLine();
    if(hash1.add(line)) {
        writer.println(line);
    }
}
The above code can be used for both the input files, and only writes out the word if the word has not been seen before.
This way you have only one set, and you do the merge at the same time as the reading.
Also, you should be using Java 8 streams..... hmmm... that would be nice. | {
"domain": "codereview.stackexchange",
"id": 14020,
"tags": "java, file, hash-map"
} |
Efficiently solve unexplored maze in Python | Question: I am trying to solve a maze in Python where parts of the maze are not explored, meaning in each step the player has to move toward an unexplored area and explore it until they discover the exit.
I have a working solution, but it is quite inefficient. It takes approximately 2 seconds for a 100x100 pixel map. I'm looking for a more efficient solution.
The maze itself is relatively simple. In the following image, I show the walls (white), the starting position (red) which is always in the center of the maze, and the unexplored area (blue).
I'm looking for the purple line, i.e., the path that connects the starting point with the next unexplored area. Because the white walls are always closed, we can tell that the unexplored area starts where the white walls end (the walls don't end there, they are just not discovered yet).
To solve this, I am choosing a random corner (here marked in green) and doing a breadth-first search. The resulting path is shown in the following image.
Here is my code:
import cv2
import numpy as np
maze_image = cv2.imread("mypath/maze.png")
maze = np.asarray(maze_image) # maze_image is the 100x100 black and white image of the maze
start = (0, 0) # we start the search at the goal
end = (int(maze.shape[1]/2), int(maze.shape[0]/2)) # this is the position of the player
maze[start[0]][start[1]] = 1
maze[end[0]][end[1]] = 0
# breadth-first connecting start and goal
def make_step(k):
  for i in range(len(maze)):
    for j in range(len(maze[i])):
      if maze[i][j] == k:
        if i>0 and maze[i-1][j] == 0:
          maze[i-1][j] = k + 1
        if j>0 and maze[i][j-1] == 0:
          maze[i][j-1] = k + 1
        if i<len(maze)-1 and maze[i+1][j] == 0:
          maze[i+1][j] = k + 1
        if j<len(maze[i])-1 and maze[i][j+1] == 0:
          maze[i][j+1] = k + 1

k = 0
while maze[end[0]][end[1]] == 0:
  k += 1
  make_step(k)
# finding the shortest path
i, j = end
k = maze[i][j]
path = [(i,j)]
while k > 1:
  if i > 0 and maze[i - 1][j] == k-1:
    i, j = i-1, j
    path.append((i, j))
    k-=1
  elif j > 0 and maze[i][j - 1] == k-1:
    i, j = i, j-1
    path.append((i, j))
    k-=1
  elif i < len(maze) - 1 and maze[i + 1][j] == k-1:
    i, j = i+1, j
    path.append((i, j))
    k-=1
  elif j < len(maze[i]) - 1 and maze[i][j + 1] == k-1:
    i, j = i, j+1
    path.append((i, j))
    k -= 1

print(path)
The code explained: We start at the goal and give its pixel the value 1. Then we check for any neighbor pixels that have the value 0 (0 = not a wall). These get the value 2. Any neighbors of them that are not walls get the value 3, and so on until we reach the center of the maze. Once we have reached the center, we walk back along decreasing values until we reach the goal, which gives us the shortest path between the start and the goal.
The algorithm works fine, but it takes around 2 seconds for the 100x100 pixel maze. To make it practical, I would need to go 10x faster. Any suggestions for improvement are very welcome. I have attached the original 100x100 maze below. The start position is the center of the maze.
Answer: Code Style:
There are a few minor issues:
indentation should be 4 spaces, not 2, to be PEP-8 compliant
the code can only run once; write a function to find the path from any arbitrary starting point to any arbitrary goal
Algorithmic improvements
This is where the biggest improvement could be found.
You start by declaring a corner as a goal. Then you go over every single pixel in the entire image (so looping over 10,000 times) to determine that 2 spots are adjacent to that corner. Then you loop over every single pixel once more to find what is adjacent to those 2 spots. Then you loop again...
The shortest possible path to the centre, which would assume zero obstacles, would require 100 steps. Each step requires 10,000 loops. Your best case solution is 1,000,000 loops. This is \$O(n^3)\$ for the best case scenario.
\$O(n^3)\$ means a 200x200 image would take 8 times longer.
What you want is to start by creating a list. The original list will just have the goal in it. Find every valid unexplored pixel adjacent to each element in the list: add those to a new list, and mark what step (k) you found it on. You now no longer need the original list, and can repeat on the new list.
When repeating on the new list, you don't add previously added pixels, only new ones.
What this would look like for the first few steps would be: just the corner [(0, 0)], then the 2 pixels adjacent [(1, 0), (0, 1)], then the 3 adjacent to that [(2, 0), (1, 1), (0, 2)] (the middle one only gets added once - when it gets looked at again it is already marked, just like the origin), then so on.
When this method finds the goal, it can stop the loop. Your code already knows how to follow the marked path.
This would at least mean the possibility space wouldn't creep up on you as quickly as it is currently. Thus, it would run faster not just here, but as you get larger and larger images the difference would be quite noticeable.
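A minimal sketch of this frontier-based BFS, using a parent map to recover the path instead of the k-markers from the question. As a simplifying assumption, 0 here means a free cell and anything else a wall:

```python
from collections import deque

def bfs_path(maze, start, goal):
    """Shortest path on a 2D grid via frontier-based BFS.
    maze: list of lists, 0 = free, anything else = wall."""
    rows, cols = len(maze), len(maze[0])
    parent = {start: None}       # also serves as the "visited" set
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:         # found it: walk parents back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        i, j = cell
        for ni, nj in ((i+1, j), (i-1, j), (i, j+1), (i, j-1)):
            if 0 <= ni < rows and 0 <= nj < cols \
                    and maze[ni][nj] == 0 and (ni, nj) not in parent:
                parent[(ni, nj)] = cell
                frontier.append((ni, nj))
    return None                  # goal unreachable

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(bfs_path(maze, (0, 0), (0, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
```

Each pixel is now examined a constant number of times, so the whole search is linear in the number of pixels rather than scanning the full image once per step.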
Advanced
Expand around both the goal & the origin simultaneously. The search area increases the further you get away from your starting position, so by searching in both directions at once, they are likely to overlap (and thus find you the solution) before the search gets as big as it would otherwise. | {
"domain": "codereview.stackexchange",
"id": 44270,
"tags": "python"
} |
ROS: frame transformation (tf) + TimeSynchronizer? | Question: How to get a frame transformation using TimeSynchronizer?
I am developing some 3D recognition application for a mobile robot. The robot has an RGB camera, depth camera, and uses rtabmap for SLAM, so I have colored point cloud as a map.
My application takes an input data for each moment of time, processes it, and outputs a segmented/labelled point cloud.
The input data is following:
RGB image
Depth map
Cumulative colored point cloud corresponding to this point in time
Corresponding position of the robot (actually, camera) in the point cloud (item 3). I need it to project the point cloud to an image and make some operations.
A screenshot from Rviz to illustrate the input data (except the position of the robot):
To get all this data at once, I am trying to write a callback function using message_filters.ApproximateTimeSynchronizer. But I can't figure out how to get the position data. When I try to use tf.TransformListener() along with other subscribers and a synchronizer, I get the error: AttributeError: 'TransformListener' object has no attribute 'registerCallback'
My code (simplified):
class SaveSyncedAll:
    def __init__(self):
        self.bridge = CvBridge()
        self.sub_rgb = message_filters.Subscriber("/raw_frontal_camera/color/image_raw", Image)
        self.sub_d = message_filters.Subscriber("/raw_frontal_camera/depth/image_rect_raw", Image)
        self.sub_gsm = message_filters.Subscriber(
            "/cloud_map", PointCloud2, queue_size=1, buff_size=52428800
        )
        self.listener = tf.TransformListener(cache_time=rospy.Duration(0.01))
        # === THE METHOD BELOW LEADS TO AN ERROR ===
        ts = message_filters.ApproximateTimeSynchronizer(
            [self.sub_rgb, self.sub_d, self.sub_gsm, self.listener], queue_size=10, slop=0.01
        )
        ts.registerCallback(self.callback_all)
        # where to save data
        self.dataset_dir = "/home/3Drecognition/data_samples/All-1"
        # we need only each 10th frame
        self.frame_freq = 10
        self.frame_n = 0

    def callback_all(self, data_rgb, data_d, data_pc):
        # get rgb image
        try:
            cv_image_rgb = self.bridge.imgmsg_to_cv2(data_rgb, "bgr8")
        except CvBridgeError as e:
            print(e)
            return
        # get depth image
        try:
            cv_image_d = self.bridge.imgmsg_to_cv2(data_d, desired_encoding='passthrough')
        except CvBridgeError as e:
            print(e)
            return
        # get transforms
        try:
            (trans_d, rot_d) = self.listener.lookupTransform(
                '/map', '/frontal_camera_depth_optical_frame', rospy.Time(0)
            )
            (trans_rgb, rot_rgb) = self.listener.lookupTransform(
                '/map', '/frontal_camera_rgb_optical_frame', rospy.Time(0)
            )
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
            return
        # get colored point cloud
        xyz, rgb = unpack_pc2_data_1D(data_pc)
        # save obtained data sample to try my algorithm on it later
        # <.. some code here ..>

if __name__ == '__main__':
    rospy.init_node('save_synced_all', anonymous=True)
    ip = SaveSyncedAll()
    rospy.spin()
I'm using ROS Noetic.
Answer: I have two workaround solutions that do not directly answer your question (sorry), but might be useful to you or to future readers.
1. Use the image timestamp to get the transform
You could define the TransformListener in __init__, but instead of trying to synchronize it with the other topics, you can call it from inside the subscriber using the timestamp of one of the messages (or maybe their average). Hope that this tutorial is clearer than me.
2. Create a dedicated node to listen to tf
A non-elegant workaround is to have a node with just the TransformListener continuously listening for the transform you need, and re-publishing it on a dedicated topic that you can then subscribe with a message_filters.Subscriber.
The introduced delay should not be a big issue as transforms should come at a higher rate than images, and the introduced overhead should be small.
The above, of course, only if you are down for a quick and dirty solution. | {
"domain": "robotics.stackexchange",
"id": 38535,
"tags": "mobile-robot, ros, python, ros-noetic"
} |
Tubing for explosive gas or hydrogen transfer | Question: Recently, I heard that there is a law/ OSHA/ ASME piece of regulation dictating that hydrogen should be transferred (suppose from a tank of hydrogen to another location) by using specially approved tubing. Is this true?
Answer: I would say that it is true and for that case there will be many relevant codes / regulations that need to be respected. Make sure you read them all before designing / altering the system. | {
"domain": "engineering.stackexchange",
"id": 1962,
"tags": "pressure, safety, compressed-air, pressure-vessel"
} |
why continents do not subduct | Question: In section 1.7 of Geodynamics by D. Turcotte & G. Schubet it is stated that
"(...) continental crust cannot be destroyed by subduction"
which I cannot completely understand. So far I understand the oceanic lithosphere sinks at the trenches because it cools down and becomes denser (thermal contraction) as it moves away from the ocean ridges. So the logic is: buoyancy will make it descend.
But then again, isn't the continental crust thicker and denser? By the same logic above, it should sink as well.
Am I missing something? Thanks in advance for any comments
Answer: It is continental crust which has the greater buoyancy, so when it meets another plate of continental crust neither can subduct. Instead, they collide, crumple and fold, making them thicker and higher. An example of this is the collision between the Indo-Australian plate and the Eurasian plate, which has formed the Himalayas. Had the Indo-Australian plate been oceanic plate, which is thinner and less buoyant, it would have been subducted.
"domain": "earthscience.stackexchange",
"id": 1946,
"tags": "geodynamics, continental-rifting"
} |
Why does like dissolve like? | Question: Polar solvents love to have polar solutes dissolved in them, and non-polar solvents pair with non-polar solutes. This is often phrased as "like dissolves like".
Okay, polar loving polar can be understood with help of the facts: same polar nature, same kind of interactions etc.
But how will you explain the application of this rule with organic or non-polar solvents?
Solomons’ and Fryhle’s Organic Chemistry says that there are some unfavourable entropy changes when polar solutes are added to non-polar solvents and vice versa.
Why so?
Answer: The reason behind this is the hydrophobic effect. Everyone has seen it if they pour a spoonful of vegetable oil into a pot of water, e.g. to cook pasta. As long as nothing is disturbing the vegetable oil, it will collect itself together in one big bubble rather than form many small bubbles.
Polar solvents will always be arranged in a way that positively polarised areas are close to negatively polarised areas in the neighbouring solvent molecule. Hydrogen bonds — especially in water — are nothing but the extension of this concept to even more polarisation. Polar molecules, as you stated, will fit together with this scheme well. Unpolar compounds mixed in do not. As soon as you add an unpolar compound to a polar solvent, you are creating a type of artificial boundary and only the areas of the solvent molecules that are neither particularly positive or negative will be happy to be in the near vicinity. That means that you have a much higher ordering of the solvent molecules where they hit an unpolar compound, because one direction is basically doomed to be neither positive nor negative.
This ‘solvent wall’ will form no matter whether the unpolar island is large or small. However, it will also have almost the same thickness, no matter how big the contained part is. Therefore, if multiple unpolar molecules clump together, the overall number of polar molecules constrained in that wall is lower — an entropic gain. Thus, unpolar compounds tend to either not dissolve or precipitate out of polar solvent solutions.
This also works in the opposite direction. A polar molecule will prefer, for energetic reasons, to have a polar neighbour. However, an unpolar solvent cannot provide that polar environment. A neighbouring undissolved polar molecule, however, can. Here again, the polar compounds rather stay clustered together as it allows the central molecules to be more disordered while only a small layer on the border must take care to interact as well as possible with the unpolar solvent.
Note that this answer assumes some kind of black/white dichotomy. In reality, most compounds are somewhere on a scale from absolutely unpolar to very polar and are able to adapt to a wide range of organic solvents. Conversely, some organic solvents have the reputation of dissolving almost anything organic; most notably dichloromethane. However, the solubilities may greatly vary. | {
"domain": "chemistry.stackexchange",
"id": 8941,
"tags": "physical-chemistry, solubility, solutions, entropy"
} |
What is meant by +/-9.2e18 years in time span? | Question: I was able to convert the 9.2e18 AD to a date, but I am confused about the exact date. Which date is 9.2e18 AD and which is 9.2e18 BC?
Time span (absolute) - [9.2e18 BC, 9.2e18 AD] i.e +/- 9.2e18 years
NumPy documentation, section "Datetime Units" under "Datetimes and Timedeltas"
| Code | Meaning | Time span (relative) | Time span (absolute) |
|------|---------|----------------------|----------------------|
| Y | year | +/- 9.2e18 years | [9.2e18 BC, 9.2e18 AD] |
| M | month | +/- 7.6e17 years | [7.6e17 BC, 7.6e17 AD] |
| W | week | +/- 1.7e17 years | [1.7e17 BC, 1.7e17 AD] |
| D | day | +/- 2.5e16 years | [2.5e16 BC, 2.5e16 AD] |
I have converted 9.2e18 (I believe it is represented in epochs) to a date. It gave me a very big date, which I did not expect. Are my assumptions accurate?
How many years are covered from 9.2e18 BC to the epoch date 1970-01-01?
What are some examples using this time span, so I can judge my assumptions about how NumPy's units give the dates 9.2e18 BC and 9.2e18 AD?
Answer: From the documentation you referred to: "The length of the span is the range of a 64-bit integer times the length of the date or unit."
A 64-bit integer has values from -2^63 to 2^63-1, which is the same as from -9.2e18 to 9.2e18. So, the time span column shows you which dates you would cover if you used only the corresponding units. Note, e.g., that the time span for years is 12 times bigger than the time span for months and 52 times bigger than the time span for weeks.
So, the date 9.2e18 BC is literally 9.2 quintillion years before Christ.
UPD with clarification to comment
First of all, there are two different concepts: the date (like 10th August of 2021) and the time duration (like two years). The latter is referred to as a time delta in Python. You can't add/subtract years from a date in NumPy, because different years contain different amounts of time, like 365 or 366 days. However, you can subtract basically any number of days like that:
import numpy as np

start_date = np.datetime64('0000-01-01')
days_to_subtract = np.timedelta64(100, 'D')
print(start_date - days_to_subtract) # initial date minus 100 days
>>> -001-09-23
Note that you can in fact manipulate dates in vanilla Python with datetime, but as mentioned in the other answer, dates cannot be earlier than 01-01-0001 in basic Python without NumPy | {
"domain": "datascience.stackexchange",
"id": 9933,
"tags": "time-series, numpy, mathematics, epochs, time"
} |
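To see where the 9.2e18 figure comes from, here is a quick NumPy check (the unit conversions are my own back-of-the-envelope arithmetic; the docs round the values):

```python
import numpy as np

# datetime64 stores a single int64 counting the chosen unit relative to
# the Unix epoch 1970-01-01, so the documented span is the int64 range:
int64_max = np.iinfo(np.int64).max
print(int64_max)               # 9223372036854775807, i.e. about 9.2e18

# Smaller units shrink the span in years accordingly:
print(int64_max / 12)          # ~7.7e17 years for unit 'M' (docs round to 7.6e17)
print(int64_max * 7 / 365.25)  # ~1.77e17 years for unit 'W' (docs round to 1.7e17)
```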
In a solar cell when an electron is freed due to light in the depletion layer, why does it move to N-type layer even though it is negatively charged? | Question: Even though in a solar cell the N-type layer is negatively charged why do the electrons from the depletion layer get attracted to it?
Answer: I think the main misunderstanding is that the n-layer is neutrally charged overall so it does not attract anything.
Only in the depletion region is there an electric field. The electric field acts to sweep electrons towards the n-layer and holes towards the p-layer.
The depletion region partially overlaps both n and p doped layers. The part in the n-side is positively charged because the region is depleted of electrons which were once bound to the n-type dopants. This makes the n-side of the depletion region positive which attracts electrons towards the n-layer. | {
"domain": "physics.stackexchange",
"id": 78211,
"tags": "electricity, electric-fields, electrons, atoms, solar-cells"
} |
If qubits' amplitudes can be complex numbers, then how can we have negative probability? | Question: We have this equation to describe a qubit, ${\displaystyle |\psi \rangle =\alpha |0\rangle +\beta |1\rangle }$, where $\alpha$ and $\beta$ are amplitudes and can be complex numbers.
We also have that $\alpha^{2}$ is the probability of the qubit collapsing into zero, and the same for $\beta$. But $\alpha$ can be, and mostly is, a complex number, so $\alpha^2$ can be a negative number.
For example, if $\alpha$ is $i$, then our probability is $i^2$, which is $-1$. Then how can we have negative probability?
Answer: Given that a physical system is in the state$^1$ $|\psi\rangle$, the probability of measuring it in the state $|\phi\rangle$ is not $\langle \phi|\psi\rangle^2$, but $\big|\langle \phi|\psi\rangle\big|^2$, which is discussed in introductory Quantum Mechanics textbooks. Hence, the probability in your example with $\alpha = i$ is not $\alpha^2 = -1$, but rather $|\alpha|^2 = 1$.
$^1$ I'm assuming the states to be normalized ($\langle \psi|\psi\rangle = 1$) when writing the expression for the probability. | {
"domain": "physics.stackexchange",
"id": 89634,
"tags": "quantum-mechanics, wavefunction, quantum-states"
} |
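The distinction the answer draws, $|\alpha|^2$ versus $\alpha^2$, can be checked numerically; the amplitudes below are a hypothetical normalized state, not from the question:

```python
import cmath

# Hypothetical normalized qubit amplitudes: alpha = i/sqrt(2), beta = 1/sqrt(2)
alpha = 1j / cmath.sqrt(2)
beta = 1 / cmath.sqrt(2)

# Naively squaring the amplitude gives a negative "probability" -- wrong:
print(alpha**2)         # a negative real number, not a valid probability

# The Born rule uses the modulus squared, which is always real and >= 0:
p0 = abs(alpha)**2
p1 = abs(beta)**2
print(p0, p1, p0 + p1)  # both ~0.5, summing to 1 (normalization)
```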
How does a sponge "suck" up water against gravity? | Question: If I take a sponge and place it in a shallow dish of water (i.e. water level is lower than height of sponge), it absorbs water until the sponge is wet, including a portion of the sponge above the water level. In other words, it seems the sponge pulls some water from the bath up into itself, doing work, and the water gains some gravitational potential energy.
Where does the energy required to do this work come from? My suspicion is that the answer involves physical and/or chemical bonds between the water and the sponge, or possibly the change in the surface area to volume ratio of the water.
Answer: This effect is called capillarity and is not that straightforward.
The contact between water and a solid surface is determined by the chemical bonds. It is macroscopically observed in the contact angle that the water/air surface makes with the solid surface. This angle depends on the strength of the bonds between the solid and the water molecules. You can see this when you pour water in a glass: the water at the edge of the glass is a bit higher than in the center; it makes an angle with the glass surface.
Now, if there is a lot of solid around the water, such as water in a tiny tube, there are a lot of contact points. Therefore, the water/air interface will be strongly curved. The curvature of this interface modifies the surface tension, which represents the energy contained in that surface. A good way to interpret the effect of curvature is that you surround a given portion of the interface by more (or less) water molecules as you curve the interface. The pressure on the interface is thus reduced or increased depending on the curvature.
In a small vertical tube, the curvature can be such that the pressure is higher than for a flat interface. Thus, it can counteract the gravity more easily.
In conclusion, the energy comes from the thermal (pressure) energy of the water molecules which push from the bottom. | {
"domain": "physics.stackexchange",
"id": 8031,
"tags": "fluid-dynamics, everyday-life, potential-energy, capillary-action"
} |
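The standard quantitative statement of capillary rise is Jurin's law, $h = 2\gamma\cos\theta/(\rho g r)$. A minimal sketch, assuming textbook values for water in a clean glass tube (the numbers are my own, not from the answer):

```python
import math

# Jurin's law: capillary rise height h = 2*gamma*cos(theta) / (rho*g*r).
# Assumed textbook-style values for water in a clean glass tube:
gamma = 0.072    # surface tension of water, N/m
theta = 0.0      # contact angle ~ 0 for clean glass, radians
rho = 1000.0     # density of water, kg/m^3
g = 9.81         # gravitational acceleration, m/s^2

def rise_height(radius_m):
    return 2 * gamma * math.cos(theta) / (rho * g * radius_m)

# The narrower the tube, the higher the rise:
for r in (1e-3, 1e-4, 1e-5):   # 1 mm, 0.1 mm, 0.01 mm radius
    print(f"r = {r:.0e} m  ->  h = {rise_height(r) * 100:.1f} cm")
```

The $1/r$ dependence is why a sponge, with its very fine pores, can lift water several centimetres while a wide glass barely shows the effect.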
Rubberduck's "Rename" refactoring implementation | Question: Knowing who's using what, and where, I've implemented a "Rename" refactoring for Rubberduck.
It works great - it needs further extensive testing, but the preliminary tests are very, very exciting.
There are a few things I'm not sure I like though:
The entire logic is implemented in the Rubberduck.UI namespace. The "Extract Method" refactoring logic is also implemented under that namespace (under Rubberduck.UI.Refactorings.ExtractMethod) - I think I might be violating SRP with these presenter classes, but I'm not sure it's worth the trouble. Any thoughts?
I don't know whether/how it's possible to actually replace tokens in the parse tree, so the renaming actually boils down to a very very localized search & replace... and the implementation is ugly beyond words - and I'm not sure how to go about making it right.
The code acquiring the target identifier looks like it could get cleaner... but how?
namespace Rubberduck.UI.Refactorings.Rename
{
public class RenamePresenter
{
private readonly VBE _vbe;
private readonly IRenameView _view;
private readonly Declarations _declarations;
private readonly QualifiedSelection _selection;
public RenamePresenter(VBE vbe, IRenameView view, Declarations declarations, QualifiedSelection selection)
{
_vbe = vbe;
_view = view;
_view.OkButtonClicked += OnOkButtonClicked;
_declarations = declarations;
_selection = selection;
}
public void Show()
{
AcquireTarget(_selection);
_view.ShowDialog();
}
private static readonly DeclarationType[] ModuleDeclarationTypes =
{
DeclarationType.Class,
DeclarationType.Module
};
private void OnOkButtonClicked(object sender, EventArgs e)
{
if (ModuleDeclarationTypes.Contains(_view.Target.DeclarationType))
{
RenameModule();
}
else
{
RenameDeclaration();
}
RenameUsages();
}
private void RenameModule()
{
try
{
var module = _vbe.FindCodeModules(_view.Target.QualifiedName.QualifiedModuleName).Single();
module.Name = _view.NewName;
}
catch (COMException exception)
{
MessageBox.Show(RubberduckUI.RenameDialog_ModuleRenameError, RubberduckUI.RenameDialog_Caption);
}
}
private void RenameDeclaration()
{
var module = _vbe.FindCodeModules(_view.Target.QualifiedName.QualifiedModuleName).First();
var content = module.get_Lines(_view.Target.Selection.StartLine, 1);
var newContent = GetReplacementLine(content, _view.Target.IdentifierName, _view.NewName);
module.ReplaceLine(_view.Target.Selection.StartLine, newContent);
}
private void RenameUsages()
{
var modules = _view.Target.References.GroupBy(r => r.QualifiedModuleName);
foreach (var grouping in modules)
{
var module = _vbe.FindCodeModules(grouping.Key).First();
foreach (var line in grouping.GroupBy(reference => reference.Selection.StartLine))
{
var content = module.get_Lines(line.Key, 1);
var newContent = GetReplacementLine(content, _view.Target.IdentifierName, _view.NewName);
module.ReplaceLine(line.Key, newContent);
}
}
}
private string GetReplacementLine(string content, string target, string newName)
{
// until we figure out how to replace actual tokens,
// this is going to have to be done the ugly way...
// what we're trying to avoid here,
// is to replace all instances of "Foo" in "Foo = FooBar" when target is "Foo".
var result = ' ' + content;
if (result.Contains(' ' + target))
{
result = result.Replace(' ' + target, ' ' + newName);
}
if (result.Contains(target + ' '))
{
result = result.Replace(target + ' ', newName + ' ');
}
if (result.Contains(target + '.'))
{
result = result.Replace(target + '.', newName + '.');
}
else if (result.Contains('.' + target))
{
result = result.Replace('.' + target, '.'+ newName);
}
if (result.Contains('(' + target))
{
result = result.Replace('(' + target, '(' + newName);
}
if (result.Contains(":=" + target))
{
result = result.Replace(":=" + target, ":=" + newName);
}
if (result.Contains(target + '!'))
{
result = result.Replace(target + '!', newName + '!');
}
else if (result.Contains('!' + target))
{
result = result.Replace('!' + target, '!' + newName);
}
return result.Substring(1);
}
private static readonly DeclarationType[] ProcedureDeclarationTypes =
{
DeclarationType.Procedure,
DeclarationType.Function,
DeclarationType.PropertyGet,
DeclarationType.PropertyLet,
DeclarationType.PropertySet
};
private void AcquireTarget(QualifiedSelection selection)
{
var targets = _declarations.Items.Where(declaration =>
declaration.QualifiedName.QualifiedModuleName == selection.QualifiedName
&& (declaration.Selection.Contains(selection.Selection))
|| declaration.References.Any(r => r.Selection.Contains(selection.Selection)))
.ToList();
var nonProcTarget = targets.Where(t => !ProcedureDeclarationTypes.Contains(t.DeclarationType)).ToList();
if (nonProcTarget.Any())
{
_view.Target = nonProcTarget.First();
}
else
{
_view.Target = targets.FirstOrDefault();
}
if (_view.Target == null)
{
// no valid selection? no problem - let's rename the module:
_view.Target = _declarations.Items.SingleOrDefault(declaration =>
declaration.QualifiedName.QualifiedModuleName == selection.QualifiedName
&& ModuleDeclarationTypes.Contains(declaration.DeclarationType));
}
}
}
}
I definitely need a different approach for the GetReplacementLine method; I thought of using regex, but I'd rather not. Or should I? Are there other alternatives?
I've tagged this with antlr because my Declaration object does expose a RuleContext object that I'm just not using here... anyone familiar with ANTLR knows if there's something I should know that would make my life easier here?
Answer: GetReplacementLine()
Instead of using String.Replace() you should check out the TokenRewriteStream mentioned in this answer.
If you need to use String.Replace(), then you should omit the unneeded call to String.Contains(). If the string which should be replaced isn't found in the content, the unchanged content will be returned anyway. In addition, this will speed up the execution, because it does not have to search for the token twice.
But you also have duplicated code inside this method, so it would be better to extract it into 2 separate methods: one replacing the search term with a passed prefix, and the other with a passed postfix.
Like:
private string PostfixReplace(string content, string token, string newName, string postFix)
{
return content.Replace(token + postFix, newName + postFix);
}
AcquireTarget()
There is no need to call ToList() on the result of the Linq Where clauses, because you only need either First() or FirstOrDefault(). The call to ToList() will slow down the execution, because every item will be evaluated, although you only need the first one.
var
IMHO you are misusing the var keyword, because you use it all the time. Assume you need to dig into this class after not touching it for a few weeks; you won't know what most of the types are, because it isn't obvious what type the right-hand side is, e.g.
var targets = _declarations.Items.Where(declaration =>
declaration.QualifiedName.QualifiedModuleName == selection.QualifiedName
&& (declaration.Selection.Contains(selection.Selection))
|| declaration.References.Any(r => r.Selection.Contains(selection.Selection)))
.ToList(); | {
"domain": "codereview.stackexchange",
"id": 12775,
"tags": "c#, antlr, rubberduck"
} |
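Since the question asks whether regex is an alternative to the chained Replace calls, here is a sketch in Python (for brevity; .NET's Regex.Replace with Regex.Escape works the same way) of replacing only whole identifiers via word boundaries:

```python
import re

def rename_identifier(line, target, new_name):
    # \b matches at word boundaries, so "Foo" is replaced only where it
    # stands alone; "FooBar" is left untouched, unlike a plain Replace.
    pattern = r'\b' + re.escape(target) + r'\b'
    return re.sub(pattern, new_name, line, flags=re.IGNORECASE)

print(rename_identifier("Foo = FooBar(Foo.Value)", "Foo", "Baz"))
# Baz = FooBar(Baz.Value)
```

The re.IGNORECASE flag reflects the assumption that VBA identifiers are case-insensitive; a production rename would still need care around string literals and comments, which a plain regex cannot distinguish.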
Context Sensitive Grammar for $x \# x^R \# x$ | Question: This language is given.
$L = \{\; x \# x^R \# x \mid x\in \{a,b\}^*\;\}$
I have to figure out a context sensitive grammar for it.
I've tried several rules already, but it's hard to make a copy of the first part and also get it into the last part, with the reversed copy in the middle.
Answer: A context-sensitive grammar
$$\begin{aligned}
S&\to AS\alpha\alpha \mid BS\beta\beta \mid \#T & & (1)\\
A\alpha&\to\alpha A &&(2)\\
B\alpha&\to\alpha B &&(3)\\
A\beta&\to\beta A &&(4)\\
B\beta&\to\beta B &&(5)\\
T\alpha\alpha&\to ATA&&(6)\\
T\beta\beta&\to BTB&&(7)\\
A&\to a&&(8)\\
B&\to b&&(9)\\
T&\to\# &&(10)\\
\end{aligned}$$
OK, I am lying. Except for the context-free rules $1$, $8$, $9$, and $10$, none of the rules are allowed in a context-sensitive grammar.
However, those rules are non-contracting. They can be transformed methodically to context-sensitive rules as shown here. Hence, we can say the non-contracting grammar above represents a context-sensitive grammar.
The idea to generate $L$: blowup, move and change
An effective approach is to design grammar rules in the following order.
Blow up the initial symbol to include the field separators and enough placeholders.
Move the placeholders to the appropriate destinations.
At destinations, change placeholders to wanted symbols.
Employ new symbols as well as left-and/or-right context to ensure orderly derivations and no unintended derivations.
Suppose we have derived $\chi\#\chi^RT\chi$ for some string $\chi$ consisting of $A$s and $B$s. Here is how we can extend $\chi$ in $\chi\#\chi^RT\chi$ to $A\chi$.
Surround it by $\color{blue}{A}\cdots\color{blue}{\alpha\alpha}$ so that we will derive $\color{blue}{A}\chi\#\chi^RT\chi\color{blue}{\alpha\alpha}$.
Move $\color{red}{\alpha\alpha}$ towards $T$ to obtain ${A}\chi\#\chi^RT\color{red}{\alpha\alpha}\chi$
Change $T\alpha\alpha$ to $ATA$. We have derived $A\chi\#(A\chi)^RTA\chi$.
Similarly, we can extend $\chi$ to $B\chi$
The technique, $XY\to YX$
This production rule enables $Y$ to move left when $X$ is to the left of it at the time of derivation.
Although $XY\to YX$ is not a context-sensitive rule, the same generation effect can be realized by the following four context-sensitive rules, where $U$ and $V$ are two new non-terminals.
$\quad XY\to XU$
$\quad XU\to VU$
$\quad VU\to VX$
$\quad VX\to YX$
This technique is used in rule $2$, $3$, $4$, $5$, which enables $\alpha$ and $\beta$ to move left towards $T$, so that they will be changed by rule $6$ and $7$ to $A$ and $B$ respectively.
The real context-sensitive grammar
Here is the solution proper.
$$\begin{aligned}
S&\to AS\alpha\alpha \mid BS\beta\beta \mid \#T &\quad & (1)\\
A\alpha&\to A\alpha_A &&(2.1)\\
A\alpha_A&\to\alpha\alpha_A &&(2.2)\\
\alpha\alpha_A&\to A\alpha &&(2.3)\\
B\alpha&\to B\alpha_B &&(3.1)\\
B\alpha_B&\to\alpha\alpha_B &&(3.2)\\
\alpha\alpha_B&\to\alpha B &&(3.3)\\
A\beta&\to A\beta_A &&(4.1)\\
A\beta_A&\to\beta\beta_A &&(4.2)\\
\beta\beta_A&\to\beta A &&(4.3)\\
B\beta&\to B\beta_B &&(5.1)\\
B\beta_B&\to\beta\beta_B &&(5.2)\\
\beta\beta_B&\to\beta B &&(5.3)\\
T\alpha \alpha&\to T\alpha_T\alpha &&(6.1)\\
T\alpha_T\alpha &\to A\alpha_T \alpha &&(6.2)\\
A\alpha_T \alpha&\to A\alpha_TA &&(6.3)\\
A\alpha_T A&\to ATA &&(6.4)\\
T\beta \beta&\to T\beta_T\beta &&(7.1)\\
T\beta_T\beta &\to B\beta_T \beta &&(7.2)\\
B\beta_T \beta&\to B\beta_TB &&(7.3)\\
B\beta_T B&\to BTB &&(7.4)\\
A&\to a&&(8)\\
B&\to b&&(9)\\
T&\to\# &&(10)\\
\end{aligned}$$ | {
"domain": "cs.stackexchange",
"id": 19780,
"tags": "formal-languages, formal-grammars, context-sensitive"
} |
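As a sanity check on what strings the grammar must generate, here is a simple membership test for $L$ itself (this checks the language definition, not the grammar rules):

```python
def in_language(s):
    # L = { x # x^R # x : x in {a,b}* } -- exactly two '#' separators.
    parts = s.split('#')
    if len(parts) != 3:
        return False
    x, mid, y = parts
    return x == y and mid == x[::-1] and set(x) <= {'a', 'b'}

print(in_language("ab#ba#ab"))   # True
print(in_language("ab#ab#ab"))   # False (middle part is not reversed)
print(in_language("##"))         # True  (x is the empty string)
```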
Why does a range query on a segment tree return at most $\lceil \log_2{N} \rceil$ nodes? | Question: If an array $A[1 \ldots N]$ is represented using a segment tree having sets in each interval, why does a range query $[L\ldots R]$ return at most $\lceil \log_2{N} \rceil$ sets (or disjoint intervals)?
I came across this statement while reading this answer.
To quote:
Find a disjoint coverage of the query range using the standard segment
tree query procedure. We get $O(\log n)$ disjoint nodes, the union of
whose multisets is exactly the multiset of values in the query range.
Let's call those multisets $s_1, \dots, s_m$ (with $m \le \lceil \log_2 n \rceil$).
I tried searching for a proof, but couldn't find it on any site. Can anyone help me prove it?
Answer: Here's the basic idea.
Let a dyadic interval be an interval of the form
$$ [2^b a,2^b(a+1)-1] $$
for some integers $a, b \geq 0$.
Claim 1. If $m < 2^n$ then any interval of the form $[0,m-1]$ can be written as the disjoint union of at most $n$ dyadic intervals.
Proof. Expand $m$ as a sum of decreasing powers of 2:
$$ m = 2^{a_1} + \cdots + 2^{a_k}. $$
Then we can write
$$
[0,m-1] = [0,2^{a_1}-1] \cup [2^{a_1},2^{a_1}+2^{a_2}-1] \cup \cdots \cup [2^{a_1} + \cdots + 2^{a_{k-1}},2^{a_1} + \cdots + 2^{a_k}-1].
$$
Claim 2. If $0 \leq m_1 \leq m_2 \leq 2^n$ then any interval of the form $[m_1,m_2-1]$ can be written as the disjoint union of at most $2n$ dyadic intervals.
Proof. The binary expansion of $m_1$ and $m_2$ is of the form $m_1 = x0y, m_2 = x1z$, where $|y|=|z|$. Let $m = x10^{|z|}$. Using Claim 1, we can express $[0,m_2-m-1]$ as a union of at most $n$ dyadic intervals. Shifting these by $m$, we express $[m,m_2-1]$ as a union of at most $n$ dyadic intervals. Similarly, using Claim 1 we can express $[0,m-m_1-1]$ as a union of at most $n$ dyadic intervals. Shifting and inverting, we express $[m_1,m-1]$ as a union of at most $n$ dyadic intervals.
(In both cases, one needs to check that shifting, and possibly inverting, preserves the property of an interval being dyadic.) | {
"domain": "cs.stackexchange",
"id": 16038,
"tags": "data-structures, arrays, search-trees, intervals"
} |
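Claim 2 can also be checked computationally. Below is a greedy dyadic decomposition (my own construction; it is equivalent in spirit to the binary-expansion argument in the proofs) together with the $2n$ bound:

```python
def dyadic_cover(lo, hi):
    """Split [lo, hi) into disjoint dyadic intervals [2^b * a, 2^b * (a + 1))."""
    out = []
    while lo < hi:
        if lo == 0:
            size = 1 << (hi.bit_length() - 1)  # largest power of two <= hi
        else:
            size = lo & -lo                    # largest power of two dividing lo
        while size > hi - lo:                  # shrink until it fits the range
            size //= 2
        out.append((lo, lo + size))
        lo += size
    return out

# For sub-ranges of [0, 2^10), Claim 2 promises at most 2 * 10 pieces:
for m1, m2 in [(3, 997), (0, 1000), (511, 513)]:
    cover = dyadic_cover(m1, m2)
    print(m1, m2, len(cover))
    assert len(cover) <= 20
```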
Do perfect spheres exist in nature? | Question: Often in physics, Objects are approximated as spherical. However do any perfectly spherical objects actually exist in nature?
Answer: No, but it doesn't matter.
The theories that approximate things using spheres are ones in which the final result (the number you measure, the reading on your meter, whatever) depends continuously in some sense on the deviations from sphericity. More symbolically, for any $\varepsilon$ tolerance you allow in your measurement (none of our measurements are infinitely precise), there exists a $\delta$ such that any real object "within $\delta$" of being a sphere will give the same measurement to within $\varepsilon$.
It is not that theories are invalid because they assume something "wrong" about nature. Instead, you have to understand that there is always an implicit statement about how "real" behavior approaches the model as deviations from the model's assumptions get smaller. | {
"domain": "physics.stackexchange",
"id": 16609,
"tags": "soft-question, geometry, nature"
} |
Estimation of average waiting time | Question: I'm simulating a procedure that assigns tasks to servers and want to estimate the average waiting time until a task is served (finds a free server).
This procedure runs periodically, thus every task that is rejected in a run can try in the next runs until it finds a free server.
The inter-arrival times of tasks follow an exponential distribution.
Between runs, some tasks may finish.
Is there a way to estimate the average waiting time of tasks?
Answer: If tasks arrive faster than they can be dealt with, average waiting time is unbounded.
You can probably adapt the Pollaczek–Khinchine formula to give an analytic answer to your question:
$$L = \rho + \frac{\rho^2 + \lambda^2 \operatorname{Var}(S)}{2(1-\rho)}$$
where
$L$ is the mean queue length;
$\lambda$ is the arrival rate of the Poisson process;
$1/\mu$ is the mean of the service time distribution $S$;
$\rho={\lambda \over \mu}$ is the utilization; and
$\operatorname{Var}(S)$ is the variance of the service time distribution $S$. | {
"domain": "cs.stackexchange",
"id": 9043,
"tags": "algorithm-analysis"
} |
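A quick sanity check of the formula, using the well-known special cases rather than the asker's exact system: for exponential service times ($\operatorname{Var}(S) = 1/\mu^2$, the M/M/1 queue) it must collapse to the classic $L = \rho/(1-\rho)$. The rates below are hypothetical.

```python
def pk_queue_length(lam, mu, var_s):
    """Pollaczek-Khinchine mean queue length L for an M/G/1 queue."""
    rho = lam / mu
    assert rho < 1, "needs utilization < 1, otherwise the queue is unstable"
    return rho + (rho ** 2 + lam ** 2 * var_s) / (2 * (1 - rho))

lam, mu = 3.0, 5.0                      # hypothetical arrival / service rates
rho = lam / mu

# Exponential service times (Var(S) = 1/mu^2) give the M/M/1 queue,
# where the formula must collapse to the classic L = rho / (1 - rho):
L_mm1 = pk_queue_length(lam, mu, 1 / mu ** 2)
print(L_mm1, rho / (1 - rho))           # both ~1.5

# Deterministic service (Var(S) = 0) queues less for the same rates:
L_md1 = pk_queue_length(lam, mu, 0.0)
print(L_md1)                            # ~1.05
```

Little's law, $W = L/\lambda$, then converts the mean queue length into the mean waiting time the question asks about.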
Classification of topological phases when eigenstates belong to complex Grassmannian | Question: I want to understand the paper by Ludwig (I put the source below). I do not understand why exactly he got the new space $U(m+n)/U(m) \times U(n)$. My understanding of the Grassmannian manifold is that, because we have $m$ positive eigenvalues which are identified, and $n$ negative eigenvalues which are also identified, we should mod out by these two. Maybe it is wrong!
Source: Topological phases: Classification of topological insulators and superconductors of non-interacting fermions, and beyond Below equation 38
Answer: You have filled all the negative energy one-particle states $|i\rangle$ and the many particle state you get is the wedge product
$$
\Psi= |1\rangle\wedge |2\rangle\wedge \ldots \wedge |n\rangle
$$
This state depends only on the subspace of ${\mathcal H}^{n+m}$ spanned by the states $|1\rangle, |2\rangle, \ldots, |n\rangle$, and this subspace is left alone by the $U(n)\times U(m)$
that transforms only the occupied space and the unoccupied space, but does not take any state from one to the other. The set of all maps that take one-Slater-determinant states to one-Slater-determinant states is $U(n+m)$. The set of distinct one-Slater-determinant states is therefore the coset $U(n+m)/(U(n)\times U(m))$. | {
"domain": "physics.stackexchange",
"id": 79146,
"tags": "topological-field-theory, topological-phase"
} |
Correlation Function of One-Dimensional XY Model | Question: From the Harvard lecture notes XY model: particle-vortex duality by Subir Sachdev, the path-integral of 1D XY-model is given by
$$\mathcal{Z}=\int\mathcal{D}\theta\exp{\left\{-\frac{K}{2}\int \!dx~(\frac{d\theta}{dx})^{2}\right\}}.\tag{4}$$
Introducing a complex order parameter $$\psi=e^{i\theta},\tag{3}$$ the correlation function is given by
$$\left\langle\psi(x)\psi^{\ast}(0)\right\rangle=\exp{\left(-\frac{1}{K}\int\!\frac{dk}{2\pi}\frac{1-\cos(kx)}{k^{2}}\right)}.\tag{5}$$
My question is how I should perform the path-integral to obtain the above correlation function?
Answer: It seems that we can also use a trick in Xiao-Gang Wen's book (Quantum field theory of many body systems, page 93).
Now $\mathcal{L}=\frac{K}{2\pi}(\partial_x\theta)^2$, then the correlation function (in imaginary time) is
$$\langle e^{i\theta(x_1)}e^{-i\theta(0)}\rangle=\frac{\int D\theta(x)e^{-\int dx\frac{K}{2\pi}(\partial_x\theta)^2+\int dx f(x)\theta(x)}}{\int D\theta(x)e^{-\int dx\frac{K}{2\pi}(\partial_x\theta)^2}}=e^{\frac{1}{2}\int dxdy\,f(x)G(x-y)f(y)},$$
where $f(x)=\delta(x-x_1)-\delta(x)$, and $G(x-y)=(-\frac{K}{\pi}\partial_x^2)^{-1}$.
$$\frac{1}{2}\int dxdy\,f(x)G(x-y)f(y)=G(0)-G(x_1)=-\int\frac{dk}{2\pi}\frac{\pi}{K}\frac{1-e^{ikx}}{k^2},$$ and $\int dk\,e^{ikx}/k^2$ can be further simplified to $\int dk \cos(kx)/k^2$. Also, this method can be generalized to $D$ dimensions, which is the same as in Quantum spaghettification, but here we do not need the re-definition procedure. | {
"domain": "physics.stackexchange",
"id": 92155,
"tags": "homework-and-exercises, quantum-field-theory, path-integral, correlation-functions, partition-function"
} |
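As a check (my own addition, not part of the original answer): the momentum integral evaluates in closed form to $\int\frac{dk}{2\pi}\frac{1-\cos(kx)}{k^2} = \frac{|x|}{2}$, so (5) gives exponential decay, $\langle\psi(x)\psi^{\ast}(0)\rangle = e^{-|x|/(2K)}$. A numerical sketch on a truncated $k$-grid (cutoff chosen for illustration):

```python
import numpy as np

def correlation_exponent(x, k_max=500.0, n=500_001):
    """Approximate (1/(2*pi)) * integral over k of (1 - cos(k x)) / k^2."""
    dk = 2 * k_max / (n - 1)
    k = (np.arange(n) - n // 2) * dk       # symmetric grid with exact k = 0
    integrand = np.empty_like(k)
    nz = k != 0
    integrand[nz] = (1 - np.cos(k[nz] * x)) / k[nz] ** 2
    integrand[~nz] = x ** 2 / 2            # continuous limit at k = 0
    return integrand.sum() * dk / (2 * np.pi)

# The numerical value should approach |x|/2:
for x in (0.5, 1.0, 2.0):
    print(x, correlation_exponent(x), abs(x) / 2)
```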
Why don't we normally see the Higgs boson? | Question: I am a physics student and my dad just asked me about the Higgs Boson. I've told him the little I know, that the Higgs field is a field that is supposed to give mass to elementary particles, and that finding the boson was crucial to see if this mechanism actually did exist.
After telling him that, a question came to my mind. I have heard that it is very difficult to create the circumstances in which we would have Higgs bosons. I have also been told that this boson is the carrier of the field responsible for giving mass to other particles. So the question is quite natural to me: how come something which "is not there" in "normal" conditions happens to do its work?
I mean, the carriers of the electromagnetic force are photons, and we have photons in "normal" conditions (by this I mean not the conditions we have inside the LHC, for example), so it is natural to see their effects. But how is this possible with the Higgs?
Answer: The difficulty with the Higgs boson is its high mass, so in order to create it you need lots of energy (125 GeV, using $E=mc^2$).
What is important for giving particles mass is the Higgs field, not the Higgs boson (which is an excitation of the field).
The problem is that you have mixed the concepts of real particles and "virtual" or "force carrier" particles. The latter can't be observed and can be created spontaneously, because the required energy is "borrowed" via Heisenberg's principle ($\Delta E\Delta t \geq \frac{\hbar}{2}$).
Comparing to the analogy you made: two charges will attract/repel via the EM interaction without photons being present. The EM interaction is mediated by virtual photons, but these are not physically observable, unlike "light" photons. | {
"domain": "physics.stackexchange",
"id": 9991,
"tags": "particle-physics, higgs"
} |